Conferences
Jun. 3rd, 2023 08:54 am
The details will be in comments (I have mixed feelings about all this).
I am seeing people talking about their talks being accepted for Strange Loop. It seems that this will be the last Strange Loop conference (I have no idea why).
Terence Tao is co-organizing the "AI to Assist Mathematical Reasoning" online workshop (June 12-14).
The joint conference of "Algebra and Coalgebra in Computer Science" and "Mathematical Foundations of Programming Semantics" is on June 19-23 (free for those who participate online).
no subject
Date: 2023-06-03 01:10 pm (UTC)
In this sense, when I look at the abstracts of CALCO 2023 & MFPS XXXIX, https://coalg.org/calco-mfps-2023/ and ponder how to make this kind of difficult math understandable and usable, AI assistance is the only thing I am hoping for.
So the workshop Terence Tao is co-organizing is quite relevant here, but the program is a bit too bureaucratic for my taste:
https://terrytao.wordpress.com/2023/06/02/ai-to-assist-mathematical-reasoning-a-workshop/
https://www.nationalacademies.org/documents/embed/link/LF2255DA3DD1C41C0A42D3BEF0989ACAECE3053A6A9B/file/D5DBCD07BFB3B3835BA234DD7CC8F45DED111D71E18F?noSaveAs=1
So... I don't know whether this is at all useful to attend...
no subject
Date: 2023-06-03 01:18 pm (UTC)
https://thestrangeloop.com/2023/sessions.html - most of the program is posted
https://www.thestrangeloop.com/ - September 21-22
I am not sure why they decided to end this conference series (of course, it's not clear how long the remnants of the "world as we know it" will last; it might be that Fall 2024 would not make much sense anyway, especially for an in-person conference).
no subject
Date: 2023-06-03 01:35 pm (UTC)
I enjoyed Strange Loop once or twice, but that was enough.
As to "Algebra and Coalgebra", I'd be curious a little bit, although it's been all pretty much known for a while.
no subject
Date: 2023-06-03 02:32 pm (UTC)
It looks like the organizers have also decided that it has been enough... (I never attended.)
> "Algebra and Coalgebra"
The reason it's newly interesting for me is that this pattern, "let's generate a pair of matrices and then multiply them", is right in the middle of Transformers, and some people seem to be saying that this pattern is right at the heart of why Transformers perform so well (they even say "Hopf algebras"). For example, this guy, although unfortunately he stops right where one would like to see more details:
https://arxiv.org/abs/2302.01834, "Coinductive guide to inductive transformer heads"
https://twitter.com/adamnemecek1
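To make the pattern itself concrete (independently of any Hopf-algebra claims), here is a minimal numpy sketch of a single self-attention head; the names and shapes are purely illustrative and not taken from the papers above. The queries and keys are a pair of matrices generated from the same input, and the head then multiplies them together:

```python
import numpy as np

def attention_head(X, W_q, W_k, W_v):
    """One self-attention head: two matrices (queries and keys) are generated
    from the same input X and then multiplied together to form mixing weights."""
    Q = X @ W_q                                   # generated matrix #1
    K = X @ W_k                                   # generated matrix #2
    scores = (Q @ K.T) / np.sqrt(W_q.shape[1])    # the "multiply the pair" step
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ (X @ W_v)                    # mix the values with these weights

# toy usage: 5 tokens, embedding dimension 8, head dimension 4
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))
W_q, W_k, W_v = (rng.standard_normal((8, 4)) for _ in range(3))
out = attention_head(X, W_q, W_k, W_v)            # shape (5, 4)
```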
On a more naive level, 4 years ago I scribbled a symmetrization (duality between the connectivity matrix of a neural machine and the matrix containing its data) of a self-modification mechanism in my "dataflow matrix machines", and this symmetrization also involved this "let's generate a pair of matrices and then multiply them" pattern:
https://github.com/anhinga/2019-design-notes/blob/master/research-notes/dmm-notes-june-2019.pdf
Eventually, this led to me playing with "visual matrix multiplication" and such, and posting all these pictures of "visual matrix multiplication" at https://github.com/anhinga/JuliaCon2021-poster and elsewhere.
It's certainly known, but I feel that if I understood the hard-core math aspects of it better, I would be able to do more with it. Especially given that this guy, Adam Nemecek, is saying that other aspects of Transformers are also convenient to understand in terms of Hopf algebras (but I don't know if he is right about this; that's a "Catch-22": if my understanding here were better, I would be able to evaluate whether he is correct).
no subject
Date: 2023-06-03 06:19 pm (UTC)
no subject
Date: 2023-06-03 06:37 pm (UTC)
It's a large field now; people are trying a variety of things. Auto-formalization is a big one, for obvious reasons (e.g. https://arxiv.org/abs/2205.12615).
Then there is a dichotomy between working in theorem provers and generating readable proofs (Tim Gowers has been returning to his 10-year-old exploration https://arxiv.org/abs/1309.4501 recently, and I believe his intent has been to create a more anthropomorphic system, which would be closer to the inner workings of (some) human mathematicians).
But, yes, for example, https://arxiv.org/search/?searchtype=author&query=Szegedy%2C+C has a number of papers on this between 2016 and 2023, https://arxiv.org/search/cs?searchtype=author&query=Urban%2C+J has a number of papers on this between 2010 and 2023, etc., etc. It's a large field; different things are going on...
no subject
Date: 2023-06-04 09:30 am (UTC)
no subject
Date: 2023-06-20 07:30 pm (UTC)
Special Session "Category Theory in Machine Learning"
Lindley 102
15:30 - 15:40
Welcome and Introduction
Fabio Zanasi
Abstract
I will introduce the session and our speakers by giving a very brief overview of the state of the art in categorical approaches to machine learning.
15:40 - 16:10
Differential Categories and Machine Learning
Jean-Simon Pacaud Lemay
Abstract
Differential categories provide the categorical semantics of differentiation in both mathematics and computer science. Differential categories have also found many interesting applications, most recently in relation to automatic differentiation and machine learning. In this talk, I will give an overview of how differential categories (including reverse differential categories and tangent categories) are used for machine learning and other related areas. I'll talk about the past, what's been done so far, the present, what's currently being worked on, and the future, what are some important questions that need to be solved.
16:10 - 16:40
A dynamic monoidal category for deep learning
Brandon Shapiro
Abstract
A gradient descender can be regarded as an "open dynamical system" in which the state of the system (parameters to a function) is updated based on external input (feedback). Open dynamical systems can be modeled using the theory of coalgebras for polynomial functors, which can be composed in network configurations to build up complex systems from simpler parts, and the structure of series and parallel composition of these systems is described by the notion of dynamic monoidal categories. I will introduce dynamic monoidal categories, describe how gradient descenders form an example, and discuss avenues for using this abstract structure to produce concrete applications.
16:40 - 17:10
Is there a place for semantics in machine learning?
Prakash Panangaden
Abstract
Machine learning, especially deep learning, has had unprecedented success in the past decade and a half and has generated much excitement beyond the confines of academia. As practitioners of formal semantics, what can we contribute? In this highly personal and opinionated talk, I will recount some successes, some promising directions and some things that I am less optimistic about. Certainly, we can try to provide proofs of correctness and gain insight into why things work or don't work. I am less optimistic that we can come up with entirely new algorithms or techniques, but it is still worth trying.
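To make the "open dynamical system" reading from Brandon Shapiro's abstract above concrete, here is a minimal sketch of gradient descent as a state/input/update system; the names are purely illustrative, and it only illustrates that reading, not the polynomial-functor or dynamic monoidal category machinery itself:

```python
import numpy as np

def gd_step(params, feedback, lr=0.1):
    """One transition of the open system: the internal state (parameters)
    is updated using an external input (gradient feedback)."""
    return params - lr * feedback

# toy usage: minimize f(p) = ||p||^2, whose gradient is 2 * p
params = np.array([1.0, -2.0])
for _ in range(50):
    feedback = 2.0 * params      # external input: gradient of the loss at the current state
    params = gd_step(params, feedback)
print(params)                    # close to the minimizer [0, 0]
```

The point of the abstract, as I read it, is that wiring many such state/input/update boxes together in series and in parallel is exactly what the categorical structure is meant to organize.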
no subject
Date: 2023-06-20 08:00 pm (UTC)
Differential categories come from linear logic (!)
A very nice talk, and a slide deck containing open problems and such (it would be nice to get a copy of it).
And he is working closely with Bruno Gavranović: https://twitter.com/bgavran3