In the essay, he talks about _the_ superhuman intelligence. Although pioneering, the idea is quite limiting. Why not multiple kinds of such intelligences? Do you know of anyone discussing a multiplicity of AGIs? Thanks.
People do talk about a multiplicity of ASIs at different levels of intelligence, but I don't know the history of the question.
no subject
Date: 2024-03-23 10:54 pm (UTC)
Thanks.
no subject
Date: 2024-03-23 11:13 pm (UTC)
I do have an old essay from September 1998 where I touch on an aspect of that, and I consider that multiplicity to some extent in my more recent writings, such as this recent comment
https://www.lesswrong.com/posts/ApZJy3NKfW5CkftQq/on-the-gladstone-report?commentId=9MExQJqDABRgmhqch
and this one-year-old post: https://www.lesswrong.com/posts/WJuASYDnhZ8hs5CnD/exploring-non-anthropocentric-aspects-of-ai-existential
In a longer unpublished version of that post I even had this remark about the case of a single dominant ASI:
> First of all, while something might look like a "singleton" and a "coherent system" from the outside, there is no such thing when one looks closer. A closer look shows that even a "unitary system" has inner structure: it might have subprocesses, it might need to consider competing viewpoints, sub-agents can arise easily, and so on. So, even if from the outside the situation looks like a "singleton system" that has achieved a "decisive strategic advantage", upon a closer look this "singleton system" still would not be all that "unitary" (it's not a magic artifact, it's a "thinking machine", and so it is likely to be a "society of mind" in its various aspects).
But because the overall idea of a non-anthropocentric approach is very non-standard (and not "well aligned" with the "AI alignment community orthodoxy"), I decided to publish a really short version and keep the longer ones in drafts.