Jul. 31st, 2021

dmm: (Default)
astralcodexten.substack.com/p/updated-look-at-long-term-ai-risks

The main takeaway is that the best experts don't consider any one scenario much more likely than the others; they all look more or less equally likely, except for the "scenario not listed here" option, which is rated as somewhat more likely than the listed scenarios.

Also, people seem to be very optimistic for some reason (perhaps they secretly believe in a benevolent G-d or benevolent aliens keeping an eye on us; otherwise their optimism is difficult to explain).

Scott Alexander summarizes the takeaways most interesting to him as follows:

======= QUOTE =======

1. Even people working in the field of aligning AIs mostly assign “low” probability (~10%) that unaligned AI will result in human extinction

2. While some people are still concerned about the superintelligence scenario, concerns have diversified a lot over the past few years

3. People working in the field don't have a specific unified picture of what will go wrong

======= END QUOTE =======