ICLR and MetaLearning
Apr. 27th, 2020 02:22 pm

ICLR 2020 is online-only (registration for full participation costs about a hundred US dollars, but a lot of things can be watched for free).
In particular, the video of yesterday's workshop www.betr-rl.ml/2020/ "Beyond “Tabula Rasa” in Reinforcement Learning (BeTR-RL): Agents that remember, adapt, and generalize" is available here:
www.betr-rl.ml/2020/program/
I've seen Jeff Clune's invited talk (very interesting), and also the panel with Jürgen Schmidhuber (reasonably interesting; it features him and all 4 invited speakers, and starts at about 2:40 in the video; Jeff's talk starts about 3:30 after the panel).
During the panel, Jeff's remark about the advantages of multi-agent architectures was quite interesting to me. Jürgen made a radical remark that at the end of the day there will be a 10-line program generating advanced super-human AI, and that with the benefit of hindsight we'll say it was obvious, and that it was strange we had not found it earlier (I hope I am not distorting his words too much; after all, the first sentence on his website says: "Since age 15 or so, the main goal of professor Jürgen Schmidhuber has been to build a self-improving Artificial Intelligence (AI) smarter than himself, then retire").
no subject
Date: 2020-05-10 06:20 pm (UTC)

'It seems to me that many of the basic principles are already understood. And it's more a question of taking these different well-understood puzzle pieces and put them together in such a way that everybody in hindsight will say "oh, that's so simple, why did not we think of that half a century ago". At the end, it is going to be 10 lines of code or something, and it is going to combine [..] the artificial curiosity thing, and the metalearning thing, in a totally natural and completely obvious, in hindsight obvious, way, and at the moment I strongly believe it is just about getting these puzzle pieces together. We have all the pieces, and we just have to plug them down in the right positions. We are almost there! And in hindsight we'll be able to say: "that's so simple, any high school student will be able to learn it... how to build a general AI based on these simple principles".'
***
His Twitter: https://twitter.com/SchmidhuberAI
no subject
Date: 2020-05-12 03:51 pm (UTC)

https://github.com/anhinga/2020-notes/blob/master/research-drafts/10-lines-thesis.md
no subject
Date: 2020-05-12 03:52 pm (UTC)

Assume that there is a moment (now or in the future) where the thesis that approximately 10 lines of sufficiently high-level computer code could generate AI is literally correct. Assume that a good chunk of the Internet is available to such a process (although it's OK to assume that it is a static snapshot of a subset of the Internet within a sandbox environment; of course, the question of whether an effective sandbox for such a situation is possible at all is a whole separate long story, long discussion, and long study).
Assume that this chunk of the Internet contains, in particular, a fairly large subset of public GitHub, arXiv, and Wikipedia. The meditation exercise is, then: what might these 10 lines of AI-generating code look like?
This is, obviously, a series of meditation exercises. Each particular exercise depends on your assumptions about the path by which you and/or the community arrived at the imagined moment in question. One possible shape of an answer is sketched below.
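For concreteness, here is one purely hypothetical shape such an answer could take, in Python-flavored pseudocode. Every identifier below is an assumed high-level primitive (none of them exist as libraries; inventing them is the whole point of the exercise), loosely combining the "artificial curiosity" and "metalearning" ingredients from the quote above:

    # Hypothetical sketch: every function below is an assumed high-level
    # primitive, not an existing API. The exercise is to ask what would
    # have to be true for something of this shape to actually work.
    corpus = snapshot("github", "arxiv", "wikipedia")  # sandboxed static subset of the Internet
    model = seed_learner()                             # small initial learning program
    while not superhuman(model):
        task = model.invent_task(corpus)               # artificial curiosity: pose your own problems
        outcome = model.attempt(task)
        model = model.improve(outcome)                 # metalearning: the learner rewrites itself
    publish(model)

The point is not this particular loop, but asking which of these primitives are already within reach and which ones hide the real difficulty.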
no subject
Date: 2020-05-12 04:08 pm (UTC)

https://en.wikipedia.org/wiki/Group_method_of_data_handling
https://en.wikipedia.org/wiki/Alexey_Ivakhnenko
Schmidhuber says that this was the earliest deep learning work (training an 8-layer net in 1971).
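As a rough illustration of the idea behind GMDH (my own simplified sketch of the textbook algorithm, not Ivakhnenko's exact 1971 procedure): each layer fits quadratic polynomial units on all pairs of inputs by linear least squares, keeps the units that generalize best to a validation set, and feeds their outputs to the next layer.

    import itertools
    import numpy as np

    def design(x1, x2):
        # features of the quadratic (Kolmogorov-Gabor) polynomial in two variables
        return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

    def fit_unit(x1, x2, y):
        # least-squares fit of the 6 polynomial coefficients
        coef, *_ = np.linalg.lstsq(design(x1, x2), y, rcond=None)
        return coef

    def gmdh_layer(X_tr, y_tr, X_va, y_va, keep=4):
        # try a quadratic unit for every pair of input columns,
        # keep the `keep` units with the lowest validation error
        units = []
        for i, j in itertools.combinations(range(X_tr.shape[1]), 2):
            coef = fit_unit(X_tr[:, i], X_tr[:, j], y_tr)
            err = np.mean((design(X_va[:, i], X_va[:, j]) @ coef - y_va) ** 2)
            units.append((err, i, j, coef))
        units.sort(key=lambda u: u[0])
        return units[:keep]

    def gmdh_fit(X_tr, y_tr, X_va, y_va, keep=4, max_layers=8):
        # grow layers greedily; stop when validation error stops improving
        best, layers = np.inf, []
        for _ in range(max_layers):
            units = gmdh_layer(X_tr, y_tr, X_va, y_va, keep)
            if units[0][0] >= best:
                break
            best = units[0][0]
            layers.append(units)
            # outputs of the surviving units become the next layer's inputs
            X_tr = np.column_stack([design(X_tr[:, i], X_tr[:, j]) @ c for _, i, j, c in units])
            X_va = np.column_stack([design(X_va[:, i], X_va[:, j]) @ c for _, i, j, c in units])
        return layers, best

Since each unit is fit in closed form and selection happens on held-out data, the depth of the resulting network is chosen by the data itself, which is why this layer-by-layer construction is cited as an early form of deep learning.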