Two events next weekend
Jun. 7th, 2020 11:30 am

An online talk on efforts towards practical use of AI for assistance in software engineering at Google (this is California time):
www.meetup.com/Scala-Bay/events/271129752/
("Towards an ML-augmented Programming Stack" by Eugene Kirpichov. A slide deck is attached to the event page, the key slides are slides 11-13.)
An online demoparty (this is Boston time):
atparty-demoscene.net/
One can just watch, or one can take part in various competitions.
no subject
Date: 2020-06-07 06:29 pm (UTC)

For an OpenAI system, the way this is done is, first of all, to use it to generate various unit tests and other tests (which are prime candidates for automated code generation anyway, since no algorithmic difficulties are involved). Then one can actually run the tests to validate the generated software or to find bugs.
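Roughly speaking, the loop might look like this (a minimal sketch in Python; generate_tests and generate_candidates are hypothetical stand-ins for calls to the code-generating model, not real OpenAI API functions):

# A minimal sketch of the validation loop described above.  The helpers
# `generate_tests` and `generate_candidates` are hypothetical placeholders
# for calls to the code-generating model; no actual OpenAI interface is
# shown or claimed here.

import subprocess
import tempfile
from typing import Callable, Iterable, Optional


def passes_generated_tests(candidate_code: str, test_code: str) -> bool:
    """Write the generated implementation and the generated tests into one
    temporary file and run pytest on it; exit code 0 means all tests passed."""
    with tempfile.NamedTemporaryFile("w", suffix="_test.py", delete=False) as f:
        f.write(candidate_code + "\n\n" + test_code)
        path = f.name
    result = subprocess.run(["pytest", "-q", path])
    return result.returncode == 0


def first_validated_candidate(
    spec: str,
    generate_candidates: Callable[[str], Iterable[str]],
    generate_tests: Callable[[str], str],
) -> Optional[str]:
    """Return the first model-generated implementation that survives the
    model-generated tests, or None if the tests found bugs in all of them."""
    test_code = generate_tests(spec)
    for candidate in generate_candidates(spec):
        if passes_generated_tests(candidate, test_code):
            return candidate
    return None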
***
On the other hand, none of these systems comes close to solving the problem of finding non-trivial algorithms (or of learning to worry about algorithmic complexity).
But then, it is not so frequent that people specify algorithmic complexity in the comments or in the tests. As long as it stays inside an engineer's mind, is not written down, and is tested only to the extent of "being not too slow", we would not even have data to feed into a computer system for it to learn from. Alternatively, the computer systems would need to learn to understand mathematical human-written texts at a more serious level than a Transformer does: we read a textbook, and that is how we learn to reason about algorithmic complexity, and we are not yet close to a computer system which could read a textbook and learn to reason about algorithmic complexity from it.
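For comparison, here is a sketch of what an explicitly written-down complexity expectation could look like as a test. The input sizes and the threshold are illustrative, and timing-based tests like this are notoriously brittle; the point is only to show the kind of information that usually stays in the engineer's head:

# Time the code at two input sizes and check that the observed growth is
# closer to O(n log n) than to O(n^2).  Sizes and threshold are illustrative.

import random
import time


def best_runtime(func, n: int, repeats: int = 3) -> float:
    """Best-of-`repeats` wall-clock time of `func` on a random list of size n."""
    data = [random.random() for _ in range(n)]
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        func(list(data))
        best = min(best, time.perf_counter() - start)
    return best


def test_sorting_scales_subquadratically():
    t_small = best_runtime(sorted, 10_000)
    t_large = best_runtime(sorted, 100_000)   # input is 10x larger
    # O(n log n) predicts roughly a 10-15x slowdown; O(n^2) predicts ~100x.
    assert t_large / t_small < 40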
no subject
Date: 2020-06-07 07:00 pm (UTC)

Is that correct, actually?
Look at this paper from the Allen Institute for AI:
https://arxiv.org/abs/2002.05867
"Transformers as Soft Reasoners over Language"
'This paper investigates a modern approach to this problem where the facts and rules are provided as natural language sentences, thus bypassing a formal representation. We train transformers to reason (or emulate reasoning) over these sentences using synthetically generated data. Our models, that we call RuleTakers, provide the first empirical demonstration that this kind of soft reasoning over language is learnable, can achieve high (99%) accuracy, and generalizes to test data requiring substantially deeper chaining than seen during training (95%+ scores). We also demonstrate that the models transfer well to two hand-authored rulebases, and to rulebases paraphrased into more natural language. These findings are significant as it suggests a new role for transformers, namely as limited "soft theorem provers" operating over explicit theories in language.'
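To make the setup concrete, here is an illustrative reconstruction of the kind of synthetic example the paper trains on (the wording and field names are mine, not copied from the RuleTaker data):

# Facts and rules stated as natural-language sentences, plus a question whose
# true/false label requires chaining the rules.  This example is my own
# reconstruction, not an actual item from the dataset.

example = {
    "context": (
        "Alan is blue. Alan is rough. "
        "If someone is blue and rough then they are kind. "
        "If someone is kind then they are nice."
    ),
    "question": "Alan is nice.",
    "label": True,   # true, but only via two steps of rule chaining
    "depth": 2,      # number of chaining steps needed
}

# A transformer classifier would typically receive the context and the
# question packed into a single input string, e.g.:
model_input = example["context"] + " [SEP] " + example["question"]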
Perhaps we are closer than it seems...