Entry tags:
Some Scott Alexander links
(My, how awful everything is. A nuclear superpower has launched a fascist aggression very much in the Hitlerite style. The prospects are grim. Though, judging by what we can see, its armed forces are thoroughly rotten and all the money for "modernization" has been stolen.
They are trying to block Twitter, and so on: twitter.com/RALee85/status/1497546710461165568)
Anyway, back to the post title:
This one seems to be pretty good: slatestarcodex.com/2019/01/08/book-review-the-structure-of-scientific-revolutions/
This is in connection with his latest analysis of attempts to predict the superintelligence arrival timeline: astralcodexten.substack.com/p/biological-anchors-a-trick-that-might
no subject
"I think the argument here is that OpenPhil is accounting for normal scientific progress in algorithms, but not for paradigm shifts.
Directional Error
These are the two arguments Eliezer makes against OpenPhil that I find most persuasive. First, that you shouldn’t be using biological anchors at all. Second, that unpredictable paradigm shifts are more realistic than gradual algorithmic progress.
These mostly add uncertainty to OpenPhil’s model, but Eliezer ends his essay making a stronger argument: he thinks OpenPhil is directionally wrong, and AI will come earlier than they think."
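To make the "gradual algorithmic progress" framing concrete, here is a toy sketch of the kind of calculation a biological-anchors forecast involves. This is not OpenPhil's actual model, and every number below is an illustrative placeholder: a fixed compute requirement anchored to biology, an assumed annual growth in affordable compute, and a smooth halving schedule for the requirement due to algorithmic progress.

```python
# Toy biological-anchors-style forecast. All constants are made-up
# placeholders, NOT OpenPhil's or Eliezer's actual estimates.

ANCHOR_FLOP = 1e35          # hypothetical training compute anchored to the brain
FLOP_BUDGET_2022 = 1e24     # hypothetical compute affordable to one project in 2022
HARDWARE_GROWTH = 1.4       # assumed yearly multiplier on affordable compute
ALGO_HALVING_YEARS = 3.0    # assumed years for algorithms to halve the requirement


def arrival_year(start_year: int = 2022) -> int:
    """First year in which affordable compute, helped by steady
    algorithmic progress, meets the biological anchor."""
    budget = FLOP_BUDGET_2022
    requirement = ANCHOR_FLOP
    year = start_year
    while budget < requirement:
        year += 1
        budget *= HARDWARE_GROWTH
        requirement /= 2 ** (1 / ALGO_HALVING_YEARS)
    return year


print(arrival_year())  # a few decades out under these made-up numbers
```

Eliezer's paradigm-shift objection maps onto the `requirement /= ...` line: instead of shrinking on a smooth, predictable schedule, the requirement could drop discontinuously and unpredictably, which is why he reads the model as not just uncertain but directionally biased toward later dates.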