no subject
Date: 2023-10-29 03:51 pm (UTC)

At ~10:00 (anecdotally): an attempt by Chris Olah to interpret small visual models on MNIST did not work, but visual models became more interpretable as they got larger.
In Transformers, by contrast, smaller models are easier to understand, but this is by no means obvious (says Neel Nanda in that lecture; who knows how this might change eventually, and in any case the knowledge acquired this way does seem to transfer reasonably well to larger models).
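As a rough illustration of what "interpreting a small visual model on MNIST" can mean in the simplest case (this is my own minimal sketch, not the actual experiments referenced above): train a tiny linear classifier and look at its per-class weight rows as 28x28 images, which for a model this small are still somewhat human-readable templates.

```python
# Minimal sketch: visualize the weight "templates" of a tiny MNIST classifier.
# The model choice and training setup are illustrative assumptions only.
import torch
import torch.nn as nn
import matplotlib.pyplot as plt
from torchvision import datasets, transforms

train = datasets.MNIST(".", train=True, download=True, transform=transforms.ToTensor())
loader = torch.utils.data.DataLoader(train, batch_size=256, shuffle=True)

model = nn.Linear(28 * 28, 10)          # one weight row per digit class
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for x, y in loader:                      # one epoch is enough for a rough picture
    opt.zero_grad()
    loss = loss_fn(model(x.view(x.size(0), -1)), y)
    loss.backward()
    opt.step()

# Each row of the weight matrix is directly viewable as a 28x28 image.
fig, axes = plt.subplots(1, 10, figsize=(15, 2))
for digit, ax in enumerate(axes):
    ax.imshow(model.weight[digit].detach().view(28, 28), cmap="gray")
    ax.set_title(str(digit))
    ax.axis("off")
plt.show()
```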