
#19 Gabe Alfour on why AI alignment is hard, what it would mean to solve it & what ordinary people can do about existential risk
About this listen
Gabe Alfour is a co-founder of Conjecture and an advisor to Control AI, both organisations working to reduce risks from advanced AI.
We discussed why AI poses an existential risk to humanity, what makes this problem so hard to solve, why Gabe believes we need to prevent the development of superintelligence for at least the next two decades, and more.
Follow Gabe on Twitter
Read The Compendium and A Narrow Path