Date: 2020-06-07 07:00 pm (UTC)
dmm: (Default)
From: [personal profile] dmm
> we are not yet close to a computer system which would be able to read a textbook and learn to reason about algorithmic complexity from that

is that correct, actually?

Look at this paper from Allen Institute for AI:

https://arxiv.org/abs/2002.05867

"Transformers as Soft Reasoners over Language"

'This paper investigates a modern approach to this problem where the facts and rules are provided as natural language sentences, thus bypassing a formal representation. We train transformers to reason (or emulate reasoning) over these sentences using synthetically generated data. Our models, that we call RuleTakers, provide the first empirical demonstration that this kind of soft reasoning over language is learnable, can achieve high (99%) accuracy, and generalizes to test data requiring substantially deeper chaining than seen during training (95%+ scores). We also demonstrate that the models transfer well to two hand-authored rulebases, and to rulebases paraphrased into more natural language. These findings are significant as it suggests a new role for transformers, namely as limited "soft theorem provers" operating over explicit theories in language.'
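To make the kind of "soft reasoning" concrete: the RuleTakers task asks whether a query follows from facts and rules stated in English, and the transformer learns to emulate multi-step chaining directly over those sentences. As a rough illustration of the underlying deduction task (not the paper's model, and with a made-up example theory), here is the symbolic forward chaining the model is trained to approximate:

```python
# Hypothetical toy theory; in the paper, facts/rules are English sentences
# like "Bob is cold" and "If someone is cold then they are tired".
facts = {("bob", "cold")}
rules = [
    # (premises, conclusion)
    ([("bob", "cold")], ("bob", "tired")),
    ([("bob", "tired")], ("bob", "sleepy")),
]

def entails(facts, rules, query):
    """Saturate the fact set under the rules, then check the query.
    The number of iterations needed corresponds to the 'depth of
    chaining' the paper tests generalization over."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if all(p in known for p in premises) and conclusion not in known:
                known.add(conclusion)
                changed = True
    return query in known

print(entails(facts, rules, ("bob", "sleepy")))  # two-step chain -> True
print(entails(facts, rules, ("bob", "happy")))   # not derivable  -> False
```

The interesting claim in the paper is that a transformer reaches ~99% accuracy on this kind of task while consuming only the natural-language form, with no access to the formal representation above.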

Perhaps we are closer to that than it seems...