On the essay about AI-generating algorithms
Mar. 14th, 2020 05:06 am

I am going to discuss the essay arxiv.org/abs/1905.10985, "AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence" by Jeff Clune.
It is a rather involved text, although it does not contain any formulas (22 pages of main text plus 12 pages of references).
Jeff Clune got his PhD in 2010, founded the Evolving AI Lab at the University of Wyoming, and co-founded the start-up Geometric Intelligence, which eventually became Uber AI Labs.
Together with Ken Stanley and other members of Uber AI Labs, he jump-started "deep neuroevolution" at the end of 2017: eng.uber.com/deep-neuroevolution/ (see also the January 2019 review paper "Designing neural networks through neuroevolution" in Nature Machine Intelligence: www.nature.com/articles/s42256-018-0006-z ).
In January 2020, he joined OpenAI to lead a large-scale research effort on AI-generating algorithms.
***
I am going to post various quotes from the paper and discuss parts of the essay in the comments to this post, now and over the coming days.
March 26 update: I wrote a follow-up essay, "Synergy between AI-generating algorithms and dataflow matrix machines", covering possible interplay between AI-GAs and DMMs: github.com/anhinga/2020-notes/tree/master/research-notes
Date: 2020-03-16 06:49 am (UTC)

***
"The question of whether and why we should create general AI is a complicated one and
is the focus of many articles [143, 41, 4, 18, 15]. I will not delve into that issue here as it is better served when it is the sole issue of focus. However, the AI-GA path introduces its own unique set of ethical issues that I do want to mention here."
[...]
"It is fair to ask why should I write this paper if I think AI-GA research is more dangerous, as I am attempting to inform people about it potentially being a faster path to general AI and advocating that more people work on this path. One reason is I believe that, on balance, technological advances produce more benefit than harm. That said, this technology is very different and could prove an exception to the rule. A second reason is because I think society is better off knowing about this path and its potential, including its risks and downsides. We might therefore be better prepared to maximize the positive consequences of the technology while working hard to minimize the risks and negative outcomes. Additionally, I find it hard to imagine that, if this is the fastest path to AI, then society will not pursue it. I struggle to think of powerful technologies humanity has not invented soon after it had the capability to do so. Thus, if it is inevitable, then we should be aware of the risks and begin organizing ourselves in a way to minimize those risks. Very intelligent people disagree with my conclusion to make knowledge of this technology public. I respect their opinions and have discussed this issue with them at length. It was not an easy decision for me to make. But ultimately I feel that it is a service to society to make these issues public rather than keep them the secret knowledge of a few experts."
"There is another ethical concern, although many will find it incredible and dismiss it as the realm of fantasy or science fiction. We do not know how physical matter such as atoms can produce feelings and sensations like pain, pleasure, or the taste of chocolate, which philosophers call qualia. While some disagree, I think we have no good reason to believe that qualia will not emerge at some point in artificially intelligent agents once they are complex enough. A simple thought experiment makes the point: imagine if the mimic path enabled us to simulate an entire human brain and body, down to each subatomic particle. It seems likely to me that such a simulation would feel the same sensations as its real-world counterpart."
"Recognizing if and when artificial agents are feeling pain, pleasure, and other qualia that are worthy of our ethical considerations is an important subject that we will have to come to terms with in the future. However, that issue is not specific to the method in which AI is produced, and therefore is not unique to the AI-GA path. There is an AI-GA-specific consideration on this front, however. On Earth, there has been untold amounts of suffering produced in animals en route to the production of general AI. Is it ethical to create algorithms in which such suffering occurs if it is essential, or helpful, to produce AI? Should we ban research into algorithms that create such suffering in order to focus energy on creating AI-GAs that do not involve suffering? How do we balance the benefits to humans and the planet of having general AI vs. the suffering of virtual agents? These are all questions we will have to deal with as research progresses on AI-GAs. They are related to the general question of ethics for artificial agents, but have unique dimensions worthy of specific consideration."
"Some of these ideas will seem fantastical to many researchers. In fact, it is risky for my career to raise them. However, I feel obligated to let society and our community know that I consider some of these seemingly fantastical outcomes possible enough to merit consideration. For example, even if there is a small chance that we create dangerous AI or untold suffering, the costs are so great that we should discuss that possibility. As an analogy, if there were a 1% chance that a civilization-ending asteroid could hit Earth in a decade or ten, we would be foolish not to begin discussing how to track it and prevent that catastrophe."
"We should keep in mind the grandeur of the task we are discussing, which is nothing short than the creation of an artificial intelligence smarter than humans. If we succeed, we arguably have also created life itself, by some definitions. We do not know if that intelligence will feel. We do not know what its values might be. We do not know what its intentions towards us may be. We might have an educated guess, but any student of history would recognize that it would be the height of hubris to assume we know with certainty exactly what general AI will be like. Thus, it is important to encourage, instead of silence, a discussion of the risks and ethical implications of creating general artificial intelligence."
Date: 2020-03-16 06:58 am (UTC)

***
[...]
"I also described an alternative path to AI: creating general AI-generating algorithms, or AI-GAs. This path involves Three Pillars: meta-learning architectures, meta-learning algorithms, and automatically generating effective learning environments. As with the other paths, there are advantages and disadvantages to this approach. A major con is that AI-GAs will require a lot of computation, and therefore may not be practical in time to be the first path to produce general AI. However, AI-GA’s ability to benefit more readily from exponential improvements in the availability of compute may mean that it surpasses the manual path before the manual path succeeds. A reason to believe that the AI-GA path may be the fastest to produce general AI is in line with the longstanding trend in machine learning that hand-coded solutions are ultimately surpassed by learning-based solutions as the availability of computation and data increase over time. Additionally, the AI-GA path may win because it does not require the Herculean Phase 2 of the manual path and all of its scientific, engineering, and sociological challenges. Additional benefits of AI-GA research are that fewer people are working on it, making it an exciting, unexplored research frontier."
"All three paths are worthwhile scientific grand challenges. That said, society should increase its investment in the AI-GA path. There are entire fields and billions of dollars devoted to the mimic path. Similarly, most of the machine learning community is pursuing the manual path, including billions of dollars in government and industry funding. Relative to these levels of investment, there is little research and investment in the AI-GA path. While still small relative to the manual path, there has been a recent surge of interest in Pillar 1 (meta-learning architectures) and Pillar 2 (metalearning algorithms). However, there is little work on Pillar 3, and no work to date on attempting to combine the Three Pillars. Since the AI-GA path might be the fastest path to producing general AI, then society should substantially increase its investment in AI-GA research. Even if one believes the AI-GA path has a 1%-5% of being the first to produce general AI, then we should allocate corresponding resources into the field to catalyze its progress. That, of course, assumes we conclude that the benefits of potentially producing general AI faster outweigh the risks of producing it via AI-GAs, which I ultimately do. At a minimum, I hope this paper motivates a discussion on these questions. While there is great uncertainty about which path will ultimately produce general AI first, I think there is little uncertainty that we are underinvesting in a promising area of machine learning research."
"Finally, this essay has discussed many of the interesting consequences of building general AI that are unique to producing general AI via AI-GAs. One benefit is being able to produce a large diversity of different types of intelligent beings, and thus accelerating our ability to understand intelligence in general and all its potential manifestations. Doing so may also better help us understand our own single instance of intelligence, much as traveling the world is necessary to truly understand one’s hometown. Each different intelligence produced by an AI-GA could also create entire alien histories and cultures from which we can learn from. Downsides unique to AI-GAs were also discussed, including that it might make the sudden, unanticipated production of AI more likely, that it might make producing dangerous forms of AI more likely, and that it may create untold suffering in virtual agents. While I offered my own views on these issues and how I weigh the positives and negatives of this technology for the purpose of deciding whether we should pursue it, a main goal of mine is to motivate others to discuss these important issues."
The last paragraph says:
"My overarching goal in this essay is not to argue that one path to general AI is likely to be better or faster. Instead, it is to highlight that there is an entirely different path to producing general AI that is rarely discussed. Because research in that path is less well known, I briefly summarized some of the research we and others have done to take steps towards creating AI-GAs. I also want to encourage reflection on (1) which path or paths each of us is committed to and why, (2) the assumptions that underlie each path (3) the reasons why each path might prove faster or slower in the production of general AI, (4) whether society and our community should rebalance our investment in the different paths, and (5) the unique benefits and detriments of each approach, including AI safety and ethics considerations. It is my hope that this essay will improve our collective understanding of the space of possible paths to producing general AI, which is worthwhile for everyone regardless of which path we choose to work on. I also hope this essay highlights that there is a relatively unexplored path that may turn out to be the fastest path in the greatest scientific quest in human history. I find that extremely exciting, and hope to inspire others in the community to join the ranks of those working on it."