Date: 2020-06-07 06:29 pm (UTC)
For a system like OpenAI's, one way this is done is to first use the model to generate various unit tests and other tests (which are prime candidates for automated code generation anyway, since no algorithmic difficulties are involved). One can then actually run those tests to validate the generated software or to find bugs.
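A minimal sketch of that validation loop, under illustrative assumptions: the generated implementation and the generated test cases are both just strings/data a model might emit (all names here, such as `generated_code` and `validate`, are hypothetical, not any particular OpenAI API).

```python
# Hypothetical model output: a candidate implementation as a string.
generated_code = """
def add(a, b):
    return a + b
"""

# Hypothetical test cases the model might emit alongside it:
# (expression, expected value) pairs.
generated_tests = [
    ("add(2, 3)", 5),
    ("add(-1, 1)", 0),
]

def validate(code: str, tests) -> bool:
    """Execute the candidate code, then check each generated test case."""
    namespace = {}
    exec(code, namespace)            # load the generated function(s)
    for expr, expected in tests:
        if eval(expr, namespace) != expected:
            return False             # a failing test flags a buggy candidate
    return True

print(validate(generated_code, generated_tests))  # → True
```

The point of the sketch is that the tests act as an executable specification: candidates that fail can be discarded or regenerated automatically, with no human in the loop.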
***
On the other hand, none of these systems comes close to solving the problem of finding non-trivial algorithms (that is, of learning to reason about algorithmic complexity).
But then, people rarely specify algorithmic complexity in comments or in tests. As long as it stays inside an engineer's mind, unwritten, and is tested only to the extent of "not being too slow", we would not even have data to feed into a computer system for it to learn from. Alternatively, computer systems would need to understand mathematical human-written texts on a more serious level than a Transformer does: we read a textbook, and that is how we learn to reason about algorithmic complexity; we are not yet close to a computer system that could read a textbook and learn that kind of reasoning from it.
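To make the gap concrete, here is a hedged sketch of what it would even look like to state complexity as a testable property rather than "not too slow". Instead of flaky wall-clock timing, it counts basic operations and checks how the count grows when the input size doubles; the function and the helper names are illustrative assumptions, not an existing framework.

```python
def sum_list(xs, counter):
    """Linear-time sum; `counter` records one unit of work per element."""
    total = 0
    for x in xs:
        counter[0] += 1
        total += x
    return total

def growth_ratio(f, n):
    """Ratio of operation counts when the input size doubles.

    ~2.0 suggests linear growth, ~4.0 quadratic, and so on.
    """
    c1, c2 = [0], [0]
    f(list(range(n)), c1)
    f(list(range(2 * n)), c2)
    return c2[0] / c1[0]

ratio = growth_ratio(sum_list, 1000)
print(ratio)  # → 2.0 for a linear algorithm: doubling n doubles the work
```

Almost nobody writes tests of this form today, which is exactly why there is so little training data pairing code with explicit complexity claims.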