Conferences
Jun. 3rd, 2023 08:54 am

The details will be in comments (I have mixed feelings about all this).
I am seeing people talking about their talks being accepted for Strange Loop. It seems that this will be the last Strange Loop conference (I have no idea why).
Terence Tao is co-organizing the "AI to Assist Mathematical Reasoning" online workshop (June 12-14).
The joint conference of "Algebra and Coalgebra in Computer Science" and "Mathematical Foundations of Programming Semantics" takes place on June 19-23 (free for those who participate online).
no subject
Date: 2023-06-03 02:32 pm (UTC)

It looks like the organizers have simply decided that it has run its course... (I never attended.)
> "Algebra and Coalgebra"
The reason it's newly interesting for me is that this pattern, "let's generate a pair of matrices and then multiply them", is right in the middle of Transformers (it is exactly what an attention head does when it forms its query and key matrices and multiplies them together; see the sketch after the links below), and some people seem to be saying that this pattern is right at the heart of why Transformers perform so well (they even say "Hopf algebras"). For example, this guy, although unfortunately he stops right where one would like to see more details:
https://arxiv.org/abs/2302.01834, "Coinductive guide to inductive transformer heads"
https://twitter.com/adamnemecek1
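
To make the pattern concrete: in a single attention head, the input activations are projected into a freshly generated pair of matrices (queries and keys), and the two are multiplied together to produce the attention scores. A minimal NumPy sketch of the standard computation (shapes and names are my own illustration, not anything taken from the paper above):

    import numpy as np

    def attention_head(X, W_q, W_k, W_v):
        """One attention head: generate a pair of matrices (Q, K) from the
        same input X, multiply them, and use the result to mix the values."""
        Q = X @ W_q                                  # generated matrix #1 (queries)
        K = X @ W_k                                  # generated matrix #2 (keys)
        scores = Q @ K.T / np.sqrt(Q.shape[-1])      # "multiply the pair"
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
        V = X @ W_v
        return weights @ V

    # toy example: 5 tokens, embedding dimension 8, head dimension 4
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 8))
    W_q, W_k, W_v = (rng.normal(size=(8, 4)) for _ in range(3))
    print(attention_head(X, W_q, W_k, W_v).shape)    # (5, 4)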
On a more naive level, four years ago I scribbled a symmetrization of a self-modification mechanism in my "dataflow matrix machines" (a duality between the connectivity matrix of a neural machine and the matrix containing its data), and this symmetrization also involved the same "let's generate a pair of matrices and then multiply them" pattern:
https://github.com/anhinga/2019-design-notes/blob/master/research-notes/dmm-notes-june-2019.pdf
Eventually, this led to me playing with "visual matrix multiplication" and such, and to posting all those pictures at https://github.com/anhinga/JuliaCon2021-poster and elsewhere.
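
The simplest form of this is just rendering the factors and the product as heatmaps, so the multiplication can be looked at rather than printed. A throwaway matplotlib version of the basic idea (my own illustration here, not code from that repository) would be something like:

    import numpy as np
    import matplotlib.pyplot as plt

    # show A, B, and A @ B side by side as heatmaps
    rng = np.random.default_rng(1)
    A = rng.normal(size=(6, 4))
    B = rng.normal(size=(4, 6))

    fig, axes = plt.subplots(1, 3, figsize=(9, 3))
    for ax, (title, M) in zip(axes, [("A", A), ("B", B), ("A @ B", A @ B)]):
        ax.imshow(M, cmap="RdBu")
        ax.set_title(title)
        ax.set_xticks([]); ax.set_yticks([])
    plt.tight_layout()
    plt.show()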
It's certainly known, but I feel that if I understood the hard-core math aspects of it better, I would be able to do more with it. Especially given that this guy, Adam Nemecek, is saying that other aspects of Transformers are also convenient to understand in terms of Hopf algebras (but I don't know if he is right about this; that's a catch-22: if my understanding here were better, I would be able to evaluate whether he is correct).