no subject
Date: 2020-06-07 07:00 pm (UTC)
Is that correct, actually?
Look at this paper from the Allen Institute for AI:
https://arxiv.org/abs/2002.05867
"Transformers as Soft Reasoners over Language"
'This paper investigates a modern approach to this problem where the facts and rules are provided as natural language sentences, thus bypassing a formal representation. We train transformers to reason (or emulate reasoning) over these sentences using synthetically generated data. Our models, that we call RuleTakers, provide the first empirical demonstration that this kind of soft reasoning over language is learnable, can achieve high (99%) accuracy, and generalizes to test data requiring substantially deeper chaining than seen during training (95%+ scores). We also demonstrate that the models transfer well to two hand-authored rulebases, and to rulebases paraphrased into more natural language. These findings are significant as it suggests a new role for transformers, namely as limited "soft theorem provers" operating over explicit theories in language.'
Perhaps we're closer than it seems...
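For concreteness, here is a minimal sketch of the RuleTaker setup framed as sentence-pair binary classification: the facts and rules go in as one text, the candidate conclusion as the other, and the model predicts True/False. The paper fine-tunes RoBERTa on synthetically generated rulebases; the model name, toy theory, and label convention below are my own illustrative assumptions, not the authors' released setup.

```python
# Sketch of soft reasoning over language as sentence-pair classification.
# Assumptions: roberta-base as the encoder, label 1 = "True"; the real
# RuleTaker models are fine-tuned on synthetic theory/question data.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2  # assumed: 0 = False, 1 = True
)

# Facts and rules stated in plain language, bypassing any formal representation.
theory = (
    "Alan is a cat. Cats are mammals. "
    "If something is a mammal then it is an animal."
)
question = "Alan is an animal."  # answering requires two-step rule chaining

# Standard sentence-pair encoding; the model must emulate the chaining itself.
inputs = tokenizer(theory, question, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print("P(True) =", logits.softmax(-1)[0, 1].item())
# Note: an off-the-shelf roberta-base has not been trained for this task,
# so the output is meaningless until fine-tuned on theory/question pairs
# like those generated in the paper.
```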