Date: 2022-12-06 06:01 pm (UTC)

Since our work is based on category theory, you might wonder what the aforementioned concepts are, what category theory even is, or why you would want to abstract away some details of neural networks in the first place. These are questions that deserve a proper answer. For now I'll just say that our paper answers the following question in a very precise way: "What is the minimal structure, in some suitable sense, that you need in order to perform learning?" This is certainly valuable. Why? If you try answering that question you might discover, just as we did, that this structure ends up encapsulating some strange types of learning, with hints of even meta-learning. For instance, after defining our framework for neural networks on Euclidean spaces, we realized that it includes learning not just on Euclidean spaces, but also on Boolean circuits. This is pretty strange: how can you "differentiate" a Boolean circuit? It turns out you can, and this falls under the same framework of reverse derivative categories.
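To make the Boolean-circuit remark a bit more concrete, here is a minimal Python sketch of one way to "differentiate" a Boolean function, using the classical Boolean difference as a stand-in for the reverse derivative. This is only an illustration of the flavour of the construction, not the paper's actual categorical definition, and all the names (boolean_partial, reverse_derivative, half_adder) are my own.

    # Toy sketch: "differentiating" a Boolean circuit over Z_2.
    # Assumption: we approximate the reverse derivative by the Boolean
    # difference f(x XOR e_i) XOR f(x); the paper's construction is categorical.
    from typing import Callable, List

    Bits = List[int]  # vectors over Z_2, each entry 0 or 1


    def boolean_partial(f: Callable[[Bits], Bits], x: Bits, i: int) -> Bits:
        """Boolean difference of f in direction i: f(x XOR e_i) XOR f(x)."""
        x_flipped = x.copy()
        x_flipped[i] ^= 1
        return [a ^ b for a, b in zip(f(x), f(x_flipped))]


    def reverse_derivative(f: Callable[[Bits], Bits], x: Bits, delta_out: Bits) -> Bits:
        """R[f](x, delta_out): pull a change on the outputs back to the inputs.

        Component i is the XOR over outputs j of delta_out[j] AND d f_j / d x_i,
        mirroring how reverse-mode differentiation transposes the Jacobian.
        """
        delta_in = []
        for i in range(len(x)):
            dfi = boolean_partial(f, x, i)  # "column i of the Jacobian"
            acc = 0
            for j, d in enumerate(delta_out):
                acc ^= d & dfi[j]
            delta_in.append(acc)
        return delta_in


    # Example circuit: a half-adder (sum = x0 XOR x1, carry = x0 AND x1).
    def half_adder(x: Bits) -> Bits:
        return [x[0] ^ x[1], x[0] & x[1]]


    if __name__ == "__main__":
        x = [1, 0]
        print(reverse_derivative(half_adder, x, [1, 0]))  # inputs affecting the sum bit
        print(reverse_derivative(half_adder, x, [0, 1]))  # inputs affecting the carry bit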
Another thing we discovered is that all the optimizers (standard gradient descent, momentum, Nesterov momentum, Adagrad, Adam, etc.) are the same kind of structure that neural networks themselves are, giving us hints that optimizers are in some sense "hardwired meta-learners", just as Learning to Learn by Gradient Descent by Gradient Descent describes.
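As a rough illustration of that "same kind of structure" claim, here is a small Python sketch in which a layer and two optimizers are all written against the same lens-like interface: a forward map together with a backward map. The Lens class and the specific update rules are my own simplification for exposition, not code from the paper.

    # Sketch: layers and optimizers sharing one lens-shaped interface.
    # Assumption: everything here is a hand-rolled illustration; the paper
    # works with parametric lenses in a categorical setting.
    from dataclasses import dataclass
    from typing import Any, Callable, Tuple


    @dataclass
    class Lens:
        forward: Callable[[Any], Any]        # input -> output
        backward: Callable[[Any, Any], Any]  # (input, change on output) -> change on input


    # A 1-d linear layer as a lens over its (weight, input) pair.
    def linear_layer() -> Lens:
        return Lens(
            forward=lambda wx: wx[0] * wx[1],
            backward=lambda wx, dy: (dy * wx[1], dy * wx[0]),  # (grad wrt w, grad wrt x)
        )


    # Plain gradient descent: the identity forward, and an update backward.
    def gradient_descent(lr: float) -> Lens:
        return Lens(
            forward=lambda p: p,
            backward=lambda p, grad: p - lr * grad,
        )


    # Momentum has exactly the same shape; it just threads extra state
    # (the velocity) alongside the parameter it acts on.
    def momentum(lr: float, beta: float) -> Lens:
        def fwd(state: Tuple[float, float]) -> float:
            p, _v = state
            return p

        def bwd(state: Tuple[float, float], grad: float) -> Tuple[float, float]:
            p, v = state
            v_new = beta * v + grad
            return (p - lr * v_new, v_new)

        return Lens(forward=fwd, backward=bwd)


    if __name__ == "__main__":
        layer = linear_layer()
        opt = gradient_descent(lr=0.1)
        w, x, target = 2.0, 3.0, 0.0
        y = layer.forward((w, x))
        dy = y - target                     # gradient of 0.5*(y - target)^2 wrt y
        dw, _dx = layer.backward((w, x), dy)
        print(opt.backward(w, dw))          # updated weight

The point of the sketch is only that the optimizer plugs into the pipeline through exactly the same forward/backward shape as the layer does, which is the structural observation the paragraph above is making.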
Of course, I still haven't told you what this framework is, nor how we defined neural networks. I'll do that briefly now.