"These parameterised optics can be plugged in in sequence, in parallel, and it even turns out they form something called a topos - allowing us to talk about the “internal language” of a learner, a deeply exciting prospect. They model not just neural networks, but agents found in game theory as well. And as mentioned before, these optimizers which update the parameters of these learners are exactly of the shape of these learners themselves - opening up interesting questions about meta-learning."
The topos is described in "Learners' Languages" by David Spivak: https://arxiv.org/abs/2103.01189
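To make the "plugged together in sequence and in parallel" part of the quote concrete, here is a minimal sketch in Python. It uses the (implement, update, request) presentation of a learner (as in Fong, Spivak and Tuyéras' "Backprop as Functor"), which is a simplified cousin of the parameterised-optics picture above, not the optics formalism itself. The names `Learner`, `then`, `beside`, the learning rate `eps` and the toy `scale` example are all illustrative choices, not anything from the papers.

```python
from dataclasses import dataclass
from typing import Any, Callable

# A learner A -> B, in the (implement, update, request) style:
#   params:    current parameter value p
#   implement: (p, a) -> b              forward pass
#   update:    (p, a, b_target) -> p'   how the optimizer moves p
#   request:   (p, a, b_target) -> a'   the target this layer passes back to the one before it
@dataclass
class Learner:
    params: Any
    implement: Callable[[Any, Any], Any]
    update: Callable[[Any, Any, Any], Any]
    request: Callable[[Any, Any, Any], Any]

    def then(self, other: "Learner") -> "Learner":
        """Sequential composition: my output feeds `other`; parameters pair up."""
        def implement(pq, a):
            p, q = pq
            return other.implement(q, self.implement(p, a))

        def update(pq, a, c_target):
            p, q = pq
            b = self.implement(p, a)
            b_target = other.request(q, b, c_target)   # error flows backwards
            return (self.update(p, a, b_target), other.update(q, b, c_target))

        def request(pq, a, c_target):
            p, q = pq
            b = self.implement(p, a)
            return self.request(p, a, other.request(q, b, c_target))

        return Learner((self.params, other.params), implement, update, request)

    def beside(self, other: "Learner") -> "Learner":
        """Parallel composition: both learners run side by side on a pair of inputs."""
        def implement(pq, ab):
            (p, q), (a, b) = pq, ab
            return (self.implement(p, a), other.implement(q, b))

        def update(pq, ab, cd_target):
            (p, q), (a, b), (c, d) = pq, ab, cd_target
            return (self.update(p, a, c), other.update(q, b, d))

        def request(pq, ab, cd_target):
            (p, q), (a, b), (c, d) = pq, ab, cd_target
            return (self.request(p, a, c), other.request(q, b, d))

        return Learner((self.params, other.params), implement, update, request)


# Toy example: a single scalar weight trained by one gradient step on squared error.
# The request rule here is just one simple convention for passing a target backwards.
eps = 0.1
scale = Learner(
    params=1.0,
    implement=lambda w, a: w * a,
    update=lambda w, a, b_target: w - eps * (w * a - b_target) * a,
    request=lambda w, a, b_target: a - eps * (w * a - b_target) * w,
)

two_layer = scale.then(scale)        # composition in sequence
side_by_side = scale.beside(scale)   # composition in parallel
```

The point of the sketch is only to show the compositional shape the quote is gesturing at: composites of learners are again learners, with paired-up parameters, so networks built "in sequence and in parallel" stay inside the same framework.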