OpenAI code generation breakthrough
May. 22nd, 2020 08:20 pm
In this video the Microsoft CTO interviews the OpenAI CEO, starting at the 25:00 mark (right before this mark he talks about a huge computer system Microsoft created for OpenAI; the style of the overall Microsoft video feels quite weird to my taste, but this fragment with Sam Altman is good):
twitter.com/matvelloso/status/1263193089310461952
At about the 29:00 mark, OpenAI demos their new transformer-based code-generating system trained on a large subset of GitHub. I'd say it's quite impressive; it does feel like a breakthrough in coding-assistance tools. Some discussion here:
news.ycombinator.com/item?id=23250379
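For a sense of what this kind of system does: the demo showed the model completing a Python function body from just a signature and a natural-language docstring. Here is a toy illustration of that task shape (the function and its completion are hand-written by me, not taken from the demo):

```python
# Prompt the user would type: a signature plus a docstring.
# Everything below the docstring is the kind of completion
# the model is supposed to fill in automatically.

def is_palindrome(s: str) -> bool:
    """Return True if s reads the same forwards and backwards, ignoring case."""
    t = s.lower()
    return t == t[::-1]
```

The point of the demo is that the transformer, having seen a large subset of GitHub, can produce such bodies from the docstring alone, without being explicitly programmed with the logic.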
Generally speaking, people have been saying lately that large modern transformer models only pretend to be sequence-to-sequence; in reality they learn a great deal of structured linguistic information. See, e.g., this informal essay-style paper and the references therein:
arxiv.org/abs/2005.06420 "The Unstoppable Rise of Computational Linguistics in Deep Learning"
(This is not yet an artificial junior software engineer one can hire, but this OpenAI prototype is a considerable step in that direction. May 20, 2020 will be remembered as an important milestone.)
no subject
Date: 2020-06-25 05:05 am (UTC)
Today I revisited this point (I have not been sufficiently relaxed in this sense recently, despite observing this May 22 milestone), and I decided that I should really drop my "self-imposed obligation to push DMM advances as hard as possible".
I should go back to the "free research mode" which is more natural for me.
Dataflow matrix machines should just be one of the things I am doing (it is already so, effectively anyway), and I should do it only to the extent I feel like it, and only in the directions which feel attractive to me at a given moment.
***
Rich Hickey, in his essay "Open Source is Not About You", writes:
https://gist.github.com/richhickey/1563cddea1002958f96e7ba9519972d9
"Just because someone open sources something does not imply they owe the world a change in their status, focus and effort, e.g. from inventor to community manager."
"Open source is a no-strings-attached gift, and all participants should recognize it as such."
So, it would be wrong to think that just because I created DMMs and the body of DMM-related research, papers, and code, I therefore owe it morally to anyone (including myself) to further push hard in this direction.
I was allowing this situation with DMMs and their potential to attach strings and constraints to me, and I should feel free to liberate myself from those attachments.
***
I should also be more neutral in the sense of Paul Graham's essay "Keep Your Identity Small":
http://www.paulgraham.com/identity.html
"The most intriguing thing about this theory, if it's right, is that it explains not merely which kinds of discussions to avoid, but how to have better ideas. If people can't think clearly about anything that has become part of their identity, then all other things being equal, the best plan is to let as few things into your identity as possible."
(E.g., regarding the AI timeline, it makes sense not to hold strong opinions on it as a part of my core identity. Basically, it makes sense not to be too attached to any outcome here.)