Entry tags:
"Towards Categorical Foundations of Learning"
When one tries to use category theory for applied work, a number of questions arise: Is it simply too difficult for me to use at all, given my level of technical skill? Is it fruitful enough, and is the fruitfulness-to-effort ratio high enough for all of this to make sense?
I recently discovered Bruno Gavranović, a graduate student in Glasgow, whose work is promising in this respect. He is really trying to keep things simple while also making sure that there are non-trivial applications. Here is one of his essays and papers (March 2021, so it's not the most recent one, but probably the most central):
www.brunogavranovic.com/posts/2021-03-03-Towards-Categorical-Foundations-Of-Neural-Networks.html
(I am posting this here because some readers of this blog are interested in applied category theory and like it, not because I am trying to convince those who have formed a negative opinion of the subject. I am non-committal myself: I have not decided whether applied category theory has a high enough fruitfulness-to-effort ratio, but this particular entry seems to be one of the best shots in this sense, so I am going to try to go deeper into this work.)
Update: their collection of papers in the intersection between Category Theory and Machine Learning: github.com/bgavran/Category_Theory_Machine_Learning
And this topos is described here: "Learners' Languages" by David Spivak, https://arxiv.org/abs/2103.01189
"Example 3.6 (Gradient descent). The gradient descent, backpropagation algorithm used by each "neuron" in a deep learning architecture can be phrased as a logical proposition about learners. The whole learning architecture is then put together as in [9], or as we've explained things above, using the operad Sys from Definition 2.19."
...
"The logical propositions that come from Proposition 3.5 are very special. More generally, one could have a logical proposition like "whenever I receive two red tokens within three seconds, I will wait five seconds and then send either three blue tokens or two blues and six reds." As long as this behavior has the "whenever" flavor (more precisely, as long as it satisfies the condition in Definition 3.4), it will be a logical proposition in the topos."
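To make the gradient-descent example less abstract, here is a minimal sketch of the "learner" structure (parameter space plus implement/update/request maps) that this line of work, going back to the "Backprop as Functor" paper, uses to phrase backpropagation. The one-parameter linear model, the squared-error loss, the learning rate, and all names below are my own illustrative choices, not anything taken from Spivak's paper.

```python
# A "learner" on inputs A and outputs B consists of a parameter space P
# together with three maps (this is the structure used in the categorical
# treatments of backprop; the concrete model here is illustrative):
#   implement: P x A -> B      -- the model itself
#   update:    P x A x B -> P  -- how parameters change on a training pair
#   request:   P x A x B -> A  -- the "corrected" input passed upstream
# Model: f_p(a) = p * a, trained by gradient descent on (p*a - b)^2 / 2.

LR = 0.1  # learning rate (illustrative choice)

def implement(p, a):
    return p * a

def update(p, a, b):
    # d/dp of (p*a - b)^2 / 2  is  (p*a - b) * a
    return p - LR * (implement(p, a) - b) * a

def request(p, a, b):
    # d/da of (p*a - b)^2 / 2  is  (p*a - b) * p;
    # stepping the input against this gradient gives the upstream request
    return a - LR * (implement(p, a) - b) * p

# Train the learner toward the target function b = 2*a
p = 0.0
for _ in range(100):
    p = update(p, 1.0, 2.0)
print(p)  # converges toward 2.0
```

Composing two such learners in sequence, with the downstream `request` feeding the upstream `update`, is exactly the composition that the categorical picture organizes; the topos paper then turns statements about such learners into logical propositions.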