Revisiting Neel Nanda lecture (and the paper itself).
~10:00 (anecdotal): an attempt by Chris Olah to interpret small vision models on MNIST did not work; vision models instead became more interpretable as they got larger.
In Transformers, by contrast, smaller models are easier to understand, though this is by no means obvious (so says Neel Nanda in that lecture; who knows how this might change eventually; in any case, the knowledge acquired on small models does seem to transfer reasonably well to larger ones).