AGI-21: Oct 15-18, 2021
Oct. 15th, 2021 09:29 am
agi-conference.org/
(Registration for online participation is free; a small fee if one wants to do it in person.)
This is not a "first-tier" AI conference, but this year has a larger number of interesting "external" keynote speakers than usual (perhaps, people are starting to feel that "artificial general intelligence" is no longer a far-in-the-future remote topic, but something becoming acutely relevant): Yoshua Bengio, Francois Chollet, Tomas Mikolov, Joseph Urban.
no subject
Date: 2021-10-15 01:37 pm (UTC)
no subject
Date: 2021-10-15 02:28 pm (UTC)
2018 - Josef Urban, 2016 - Stephen Grossberg, 2015 - Frank Wood and Jurgen Schmidhuber (yes, that was a good year), 2014 - Yoshua Bengio (that was the year when his group invented modern "attention"), 2013 - Dileep George, 2012 - Nick Bostrom (as a host, in some sense; I attended that one), 2011 - Jurgen Schmidhuber and Ed Boyden (I attended that one; Schmidhuber is not listed as a keynote speaker, but I remember his keynote very well), 2010 - Richard Sutton, 2009 - Jurgen Schmidhuber.
So, yes, we see that Schmidhuber is the optimist, and that Yoshua Bengio and Josef Urban are also returning speakers.
And yes, the period between 2017 and 2020 does look like a crisis (mainstream AI development was so interesting, and progress there so decisive, during those years that people didn't pay much attention to AGI; now mainstream AI development has finally reached the point where it seems quite realistic that it could lead to AGI sometime soon).
no subject
Date: 2021-10-16 05:11 pm (UTC)
Day 2 is starting - this is the stream: https://www.youtube.com/watch?v=_TzEHP99EmE (the mess continues, at least 30 min late; hopefully, this will be interesting: http://agi-conf.org/2021/schedule/ )
The way Day 2 is shaping up, the actual progress towards AGI will likely be made by the mainstream AI community, and not by this group.
(A replay is available at the same YouTube link.)
no subject
Date: 2021-10-17 02:33 pm (UTC)
Hopefully, today will be more interesting (most of the invited keynote speakers are today).
(Late start again; I don't know, it might be better to just watch a replay later unless one wants to participate in the chat; but I'll try to watch in real time. Schedule change: Francois Chollet goes first.)
Francois Chollet is pessimistic; he focuses on things which don't adapt and generalize well, (deliberately?) ignoring things which do (perhaps it is just a function of his Keras focus; a typical work in this field does not generalize, but some work does, it just takes special effort focused on that; it might also be that autonomous driving works less well than it should because those systems are too specialized; perhaps, if they were taught to do more tasks than strictly necessary, they would work better). The end result is that he is very biased (similarly to his views on the impossibility of an intelligence explosion, so that's not new about him). So his criticism is mostly correct about the bulk of the field, but what is missing from his presentation is that many people came to these conclusions a while ago and actually started to do something about it, creating more adaptable systems.
And we should do more of those!
Then his talk becomes more useful, as he talks about his nice work, in particular, his Abstraction & Reasoning Corpus (ARC) test set: https://twitter.com/fchollet/status/1228011358362324992?lang=en ; https://www.kaggle.com/c/abstraction-and-reasoning-challenge/overview ; https://arxiv.org/abs/1911.01547
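(A minimal sketch of what an ARC task looks like on disk, assuming the JSON layout of the public fchollet/ARC repository: each task file has "train" and "test" lists of {"input": grid, "output": grid} pairs, where a grid is a list of rows of integers 0-9 (colors). The file path below is just a placeholder.)

import json

# Read one ARC task and report the grid shapes of its demonstration pairs.
# Assumes the fchollet/ARC JSON layout; the path is a placeholder.
def load_arc_task(path):
    with open(path) as f:
        task = json.load(f)
    return task["train"], task["test"]

def grid_shape(grid):
    return len(grid), len(grid[0])

train_pairs, test_pairs = load_arc_task("ARC/data/training/some_task.json")
for pair in train_pairs:
    print(grid_shape(pair["input"]), "->", grid_shape(pair["output"]))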
Prototype-centric abstraction vs program-centric abstraction (both are important).
Yes, a useful talk, if one discounts the initial segment. A lot of interesting material.
At the end he talks nonsense about program synthesis, ignoring a variety of advances that make it possible to avoid discrete search. So the end is not too useful either.
no subject
Date: 2021-10-17 05:49 pm (UTC)
(Interesting, difficult; he thinks language helps with reasoning "out of distribution"; but then I would say that Transformers might already have this capability (not even a surprise; the only question is whether the use of attention there is enough for "consciousness", or not quite).)
I really need to listen to this once again.
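(Purely as an illustration, not from the talk: the attention operation in question is, at its core, scaled dot-product attention; a rough numpy sketch with illustrative names and shapes.)

import numpy as np

# Rough sketch of scaled dot-product attention, the basic Transformer operation.
def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # similarity of each query to each key
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability for softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted mixture of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(6, 8))
V = rng.normal(size=(6, 8))
print(attention(Q, K, V).shape)   # (4, 8)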
Discussion between Ben and Yoshua is interesting :-)
no subject
Date: 2021-10-17 07:00 pm (UTC)
So far, disappointing. I don't see much in terms of insights yet.
He advocates the ALife approach (which I like too), but so far he is not saying anything new or non-trivial about it.
no subject
Date: 2021-10-17 08:45 pm (UTC)
https://sites.google.com/site/jonathanwarrell/ (this has slides)
A very technical talk; he seems to be closely involved with some of the newer Ben/OpenCog developments.
no subject
Date: 2021-10-17 09:45 pm (UTC)
Let's hope they fix that. What a mess...
Videos are here: https://www.youtube.com/playlist?list=PLAJnaovHtaFTzIS4eBi9Jm5eMlsRzFoEA
But where are the PDFs? They say they might have a new arrangement with Springer which prevents them from posting PDFs (maybe, but... the call for papers says "and all the accepted papers will be available online", so no, that's unacceptable; they'd better fix that).
no subject
Date: 2021-10-18 03:04 pm (UTC)
https://www.youtube.com/watch?v=E7kbK9m3g-U
Grace Solomonoff https://www.researchgate.net/profile/Grace-Solomonoff mentioned during her discussion that what she was talking about might be helpful for models with "matrix-type or matrix-like connectivity".
It seems that Zoom somehow helps people to stay on time, something that completely eluded them in the YouTube-only format on the previous days. (And Eray Özkural decided to discuss these things with her.)
Nell Watson - a nice talk on dynamic phenomena; she is trying to derive moral development from them. Interesting observations on moral interactions between superintelligence and people.
Gary Marcus - a nice talk, but he does overstate his case somewhat, as usual. Still it has plenty of useful ideas. https://arxiv.org/abs/2002.06177 "The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence". Useful discussion after the talk.
Geordie Rose is a founder and former CTO of D-Wave. "Robot Brains". https://www.sanctuary.ai/ Very interesting.
Paul Rosenbloom. "Lumping and Splitting: Understanding Cognition via the Common Model and Dichotomic Maps". (Sigma cognitive architecture: https://cogarch.ict.usc.edu/; he was formerly involved with Soar: https://en.wikipedia.org/wiki/Soar_(cognitive_architecture). I have yet to find a cognitive architecture I'd be able to like.)
Josef Urban – "Towards the Dream of Self-Improving Universal Reasoning AI". (Mentions the Sam Alexander & Hutter contributed talk, "Reward-Punishment Symmetric Universal Intelligence", https://arxiv.org/abs/2110.02450, as an example of a more formal talk.) He hopes for a "QED singularity" coming really soon. The state of things: https://github.com/ai4reason/ATP_Proofs
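(For background, and not from the talk itself: the "universal intelligence" such formal contributions build on is, roughly, the Legg-Hutter measure

\Upsilon(\pi) := \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu},

where E is a class of computable environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V^{\pi}_{\mu} is the expected total reward agent \pi obtains in \mu; judging by the title, the Alexander & Hutter paper considers a variant that treats rewards and punishments symmetrically.)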