Some Scott Alexander links
Feb. 26th, 2022 10:28 am
(Good grief, how lousy everything is. A nuclear superpower has launched a fascist aggression quite in the Hitler style. The prospects are grim. That said, its armed forces are thoroughly rotten; judging by what we're seeing, all the money for "modernization" has been stolen.
They're trying to block Twitter, and that sort of thing: twitter.com/RALee85/status/1497546710461165568)
Anyway, back to the post title:
This one seems to be pretty good: slatestarcodex.com/2019/01/08/book-review-the-structure-of-scientific-revolutions/
This is in connection with his latest analysis of attempts to predict the superintelligence arrival timeline: astralcodexten.substack.com/p/biological-anchors-a-trick-that-might
no subject
Date: 2022-02-26 03:42 pm (UTC)
https://www.lesswrong.com/posts/ax695frGJEzGxFBK4/biology-inspired-agi-timelines-the-trick-that-never-works
no subject
Date: 2022-02-26 03:44 pm (UTC)
[...]
"But timing the novel development correctly? That is almost never done, not until things are 2 years out, and often not even then. Nuclear weapons were called, but not nuclear weapons in 1945; heavier-than-air flight was called, but not flight in 1903. In both cases, people said two years earlier that it wouldn't be done for 50 years - or said, decades too early, that it'd be done shortly. There's a difference between worrying that we may eventually get a serious global pandemic, worrying that eventually a lab accident may lead to a global pandemic, and forecasting that a global pandemic will start in November of 2019."
no subject
Date: 2022-02-26 03:47 pm (UTC)
no subject
Date: 2022-02-26 03:50 pm (UTC)
"After I was old enough to be more skeptical of timelines myself, I used to wonder how Vinge had pulled out the "within thirty years" part. This may have gone over my head at the time, but rereading again today, I conjecture Vinge may have chosen the headline figure of thirty years as a deliberately self-deprecating reference to Charles Platt's generalization about such forecasts always being thirty years from the time they're made, which Vinge explicitly cites later in the speech.
Or to put it another way: I conjecture that to the audience of the time, already familiar with some previously-made forecasts about strong AI, the impact of the abstract is meant to be, "Never mind predicting strong AI in thirty years, you should be predicting superintelligence in thirty years, which matters a lot more." But the minds of authors are scarcely more knowable than the Future, if they have not explicitly told us what they were thinking; so you'd have to ask Professor Vinge, and hope he remembers what he was thinking back then.
OpenPhil: Superintelligence before 2023, huh? I suppose Vinge still has two years left to go before that's falsified.
Eliezer: Also in the body of the speech, Vinge says, "I'll be surprised if this event occurs before 2005 or after 2030," which sounds like a more serious and sensible way of phrasing an estimate. I think that should supersede the probably Platt-inspired headline figure for what we think of as Vinge's 1993 prediction. The jury's still out on whether Vinge will have made a good call.
Oh, and sorry if grandpa is boring you with all this history from the times before you were around. I mean, I didn't actually attend Vinge's famous NASA speech when it happened, what with being thirteen years old at the time, but I sure did read it later. Once it was digitized and put online, it was all over the Internet. Well, all over certain parts of the Internet, anyways. Which nerdy parts constituted a much larger fraction of the whole, back when the World Wide Web was just starting to take off among early adopters.
But, yeah, the new kids showing up with some graphs of Moore's Law and calculations about biology and an earnest estimate of strong AI being thirty years out from the time of the report is, uh, well, it's... historically precedented."
no subject
Date: 2022-02-26 03:51 pm (UTC)
no subject
Date: 2022-02-26 03:55 pm (UTC)
no subject
Date: 2022-02-26 04:05 pm (UTC)"I think the argument here is that OpenPhil is accounting for normal scientific progress in algorithms, but not for paradigm shifts.
Directional Error
These are the two arguments Eliezer makes against OpenPhil that I find most persuasive. First, that you shouldn’t be using biological anchors at all. Second, that unpredictable paradigm shifts are more realistic than gradual algorithmic progress.
These mostly add uncertainty to OpenPhil’s model, but Eliezer ends his essay making a stronger argument: he thinks OpenPhil is directionally wrong, and AI will come earlier than they think."
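For what it's worth, the arithmetic behind a biological-anchors forecast is simple enough to sketch. Below is a minimal toy version in Python, not OpenPhil's actual model; every number in it (the anchor, the size of today's largest training run, the growth rate) is a made-up placeholder, and it bakes in exactly the smooth-exponential assumption that the paradigm-shift objection attacks.

import math

# A toy biological-anchors calculation, NOT OpenPhil's model.
# All numbers are illustrative placeholders, not real estimates.
anchor_flop = 1e34          # hypothetical training compute "enough" for TAI
current_flop = 1e24         # hypothetical largest training run as of 2022
annual_growth = 10 ** 0.5   # assumed ~3.2x/year growth in effective compute
                            # (hardware + spending + algorithmic progress)

# Assuming smooth exponential growth (the contested assumption: a
# paradigm shift would move the anchor or the rate discontinuously),
# solve current_flop * annual_growth**t = anchor_flop for t.
years_to_anchor = math.log(anchor_flop / current_flop) / math.log(annual_growth)
print(f"anchor crossed around {2022 + years_to_anchor:.0f}")  # ~2042

Note where the uncertainty lives: at this growth rate, shaving one order of magnitude off anchor_flop pulls the date in by only two years, but candidate biological anchors differ by far more than one order of magnitude, so the choice of anchor dominates the extrapolation.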
no subject
Date: 2022-02-26 04:07 pm (UTC)