On the essay on AI-generating algorithms
Mar. 14th, 2020 05:06 am

I am going to discuss the essay arxiv.org/abs/1905.10985, "AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence" by Jeff Clune.
It is a rather involved text, although it does not contain any formulas (22 pages, plus 12 pages of references).
Jeff Clune got his PhD in 2010, founded Evolving AI Lab at the University of Wyoming and co-founded a start-up named Geometric Intelligence, which eventually became Uber AI Lab.
Together with Ken Stanley and other members of Uber AI Lab, he jump-started "deep neuroevolution" at the end of 2017: eng.uber.com/deep-neuroevolution/ (see also the January 2019 review paper "Designing neural networks through neuroevolution" in Nature Machine Intelligence: www.nature.com/articles/s42256-018-0006-z ).
In January 2020, he joined OpenAI to lead a large-scale research effort on AI-generating algorithms.
***
I am going to post various quotes from the paper and discuss parts of the essay in the comments to this post, now and in the coming days.
March 26 update: I wrote a follow-up essay, "Synergy between AI-generating algorithms and dataflow matrix machines", covering possible interplay between AI-GAs and DMMs: github.com/anhinga/2020-notes/tree/master/research-notes
Date: 2020-03-14 09:42 am (UTC)

***
AI-GAs may prove to be the fastest path to general AI. However, even if they are not, they are worth pursuing anyway. It is intrinsically scientifically worthwhile to attempt to answer the question of how to create a set of simple conditions and a simple algorithm that can bootstrap itself from simplicity to produce general intelligence. Just such an event happened on Earth, where the extremely simple algorithm of Darwinian evolution ultimately produced human intelligence. Thus, one reason that creating AI-GAs is beneficial is that doing so would shed light on the origins of our own intelligence.
AI-GAs would also teach us about the origins of intelligence generally, including elsewhere in the universe. They could, for example, teach us which conditions are necessary, sufficient, and catalyzing for intelligence to arise. They could inform us as to how likely intelligence is to emerge when the sufficient conditions are present, and how often general intelligence emerges after narrow intelligence does.
Presumably different instantiations of AI-GAs (either different runs of the same AI-GA or different types of AI-GAs) would lead to different kinds of intelligence, including different, alien cultures. AI-GAs would likely produce a much wider diversity of intelligent beings than the manual path to creating AI, because the manual path would be limited by our imagination, scientific understanding, and creativity. We could even create AI-GAs that specifically attempt to create different types of general AI than have already been produced. AI-GAs would thus better allow us to study and understand the space of possible intelligences, shedding light on the different ways intelligent life might think about the world.
The creation of AI-GAs would enable us to perform the ultimate form of cultural travel, allowing us to interact with, and learn from, wildly different intelligent civilizations. Having a diversity of minds could also catalyze even more scientific discovery than the production of a single general AI would. We would also want to reverse engineer each of these minds for the same reasons we study neuroscience in animals, including humans: because we are curious and want to understand intelligence. We and others have been conducting such ‘AI Neuroscience’ for years now, but the work is just beginning. John Maynard Smith wrote the following about evolutionary systems: “So far, we have been able to study only one evolving system, and we cannot wait for interstellar flight to provide us with a second. If we want to discover generalizations about evolving systems, we will have to look at artificial ones.” The same is true regarding intelligence.
Additionally, as explained in Section 2.3, the manner in which an AI-GA is likely to produce intelligence will involve the creation of a wide diversity of learning environments, which could include novel virtual worlds, artifacts, riddles, puzzles, and other challenges that themselves will likely be intrinsically interesting and valuable. Many of those benefits will begin to accrue in the short term with AI-GA research, so AI-GAs will provide short-term value even if the long-term goals remain elusive. AI-GAs are also worthwhile to pursue because they inform our attempts to create open-ended algorithms (those that endlessly innovate). As described in Section 1.2, we should also invest in AI-GA research because it might prove to be the fastest way to produce general AI.
For all these reasons, I argue that the creation of AI-GAs should be considered its own independent scientific grand challenge.