astralcodexten.substack.com/p/updated-look-at-long-term-ai-risks
The main takeaway is that the experts do not consider any one scenario much more likely than the others; all of the listed scenarios look roughly equally likely, with "scenario not listed here" rated as somewhat more likely than any of them.
Also, people seem to be very optimistic for some reason (perhaps they secretly believe in a benevolent G-d or benevolent aliens keeping an eye on us; otherwise their optimism is difficult to explain).
Scott Alexander summarizes the takeaways he finds most interesting as follows:
======= QUOTE =======
1. Even people working in the field of aligning AIs mostly assign “low” probability (~10%) that unaligned AI will result in human extinction
2. While some people are still concerned about the superintelligence scenario, concerns have diversified a lot over the past few years
3. People working in the field don't have a specific unified picture of what will go wrong