On the essay on AI-generating algorithms
Mar. 14th, 2020 05:06 am
I am going to discuss the essay arxiv.org/abs/1905.10985, "AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence" by Jeff Clune.
It is a rather involved text, although it does not contain any formulas (22 pages of text plus 12 pages of references).
Jeff Clune got his PhD in 2010, founded the Evolving AI Lab at the University of Wyoming, and co-founded a start-up named Geometric Intelligence, which eventually became Uber AI Labs.
Together with Ken Stanley and other members of Uber AI Labs, he jump-started "deep neuroevolution" at the end of 2017: eng.uber.com/deep-neuroevolution/ (see also the January 2019 review paper "Designing neural networks through neuroevolution" in Nature Machine Intelligence: www.nature.com/articles/s42256-018-0006-z ).
In January 2020, he joined OpenAI to lead a large-scale research effort into AI-generating algorithms.
***
I am going to post various quotes from the paper and discuss parts of the essay in the comments to this post, now and over the coming days.
March 26 update: I wrote a follow-up essay, "Synergy between AI-generating algorithms and dataflow matrix machines", covering possible interplay between AI-GAs and DMMs: github.com/anhinga/2020-notes/tree/master/research-notes
Sculpting cool things out of DMMs
Date: 2020-03-27 05:06 am (UTC)
***
It seems that in the context of AI-GAs we would like to have more versatile neural machines. It might be useful to be able to express algorithms precisely within neural machines, rather than only learn them approximately; to use readable, compact neural networks as well as overparameterized ones, and to control the degree to which they are overparameterized; to express complicated hierarchical structures and graphs precisely within neural networks, rather than only model them; and to have flexible self-modification capabilities, where one can take linear combinations and compositions of various self-modification operators, and where one is not constrained by the fact that a neural net tends to have more weights than outputs.
It turns out that this can be achieved by a rather mild upgrade: instead of basing neural machines on streams of numbers, one can base them on arbitrary streams that support combining several streams with coefficients (that is, taking their "linear combinations"). One can then keep the key principle of neural computation, namely that linear and non-linear transformations should be interleaved, and at the same time achieve the wish list above.
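To make this more concrete, here is a minimal, illustrative Python sketch of a network whose neurons exchange streams of dictionaries rather than streams of numbers. It is not code from the essay or from any reference implementation; all the names (Neuron, linear_combination, one_cycle, normalize) are invented for this example.

```python
# A minimal sketch of the idea above (illustrative only).  Streams are
# represented here by their latest samples: dictionaries mapping string keys
# to floats, which admit an obvious linear combination.  The network alternates
# a linear step (mixing the outputs of all neurons with coefficients) with a
# nonlinear step (each neuron applying its own stream transformer).

def linear_combination(weighted_samples):
    """Combine several dictionary-valued samples with coefficients."""
    result = {}
    for coeff, sample in weighted_samples:
        for key, value in sample.items():
            result[key] = result.get(key, 0.0) + coeff * value
    return result

def normalize(sample):
    """An example of a nonlinear stream transformer: rescale to unit L1 norm."""
    total = sum(abs(v) for v in sample.values()) or 1.0
    return {k: v / total for k, v in sample.items()}

class Neuron:
    """Holds one input sample, one output sample, and a stream transformer."""
    def __init__(self, transform):
        self.transform = transform
        self.input = {}
        self.output = {}

def one_cycle(neurons, weights):
    """One cycle of interleaved linear and nonlinear transformations.

    `weights` maps (destination_neuron, source_neuron) pairs to coefficients.
    """
    # Linear step: each neuron's input is a linear combination of all outputs.
    for dst in neurons:
        dst.input = linear_combination(
            [(w, src.output) for (d, src), w in weights.items() if d is dst])
    # Nonlinear step: each neuron applies its own transformer to its input.
    for n in neurons:
        n.output = n.transform(n.input)

# Tiny usage example: a constant source feeding a normalizing neuron.
source = Neuron(lambda _ignored: {"a": 1.0, "b": 3.0})
mixer = Neuron(normalize)
weights = {(mixer, source): 2.0}
for _ in range(2):
    one_cycle([source, mixer], weights)
print(mixer.output)  # {'a': 0.25, 'b': 0.75}
```

Dictionaries of numbers are just one convenient choice of stream here; anything for which taking linear combinations makes sense could play the same role, which is exactly the mild upgrade described above.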