"Neural network function without learning: How can nature achieve that?"
Abstract:
It is reasonable to assume that the human brain acquires symbolic and logical reasoning capabilities not through learning, but through genetically encoded structural properties of neural networks of the human brain. But it has remained an open problem what these structural properties are. An understanding of them would not only provide substantial progress in brain science but also inspire new methods for endowing artificial neural networks with similar "innate" capabilities. Unfortunately, we are still far away from understanding those innate structural properties of neural networks in the brain that provide symbolic reasoning capabilities. But experimental data on generic cortical microcircuits elucidate the structural properties of neural networks in the brain that are likely to be involved. I will show that these structural features that are under genetic control provide a quite powerful "programming language" for inducing specific computational capabilities in neural networks, without a need for synaptic plasticity or other forms of learning. This insight gives rise to a new research program for solving the open problem of structural features of neural networks that are likely to induce symbolic and logical reasoning capabilities.
Interesting. If not through learning, it means we have innate limitations of what we can achieve.

I think we do have innate limitations (although, with very heavy use of technology we might be able to break through them; so far people hesitate to apply that much technology to a relatively healthy human brain)...
The best talk of the first day of the neurosymbolic workshop https://research.samsung.com/sanw is the first one, by Alan Yuille. In particular, he says that simple generative models trained to generate feature vectors are very useful (really, a nice compromise, halfway towards actually symbolic representations) and relatively easy to invert. He also recommends Zhuowan Li https://lizw14.github.io/ and their paper https://arxiv.org/abs/2110.00519
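I don't have his slides at hand, so here is only a minimal toy sketch of the general pattern he described: a generative model whose output is a feature vector rather than pixels, "inverted" by gradient descent on the latent code. The architecture and dimensions below are made up for illustration and are not his models.

```python
import torch

torch.manual_seed(0)

# Hypothetical toy generator: latent code (dim 8) -> feature vector (dim 32).
generator = torch.nn.Sequential(
    torch.nn.Linear(8, 64), torch.nn.ReLU(), torch.nn.Linear(64, 32)
)

# Pretend this is a feature vector extracted from an image by some backbone.
observed_features = generator(torch.randn(8)).detach()

# Inversion: find a latent code whose generated features match the observation.
z = torch.zeros(8, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)
for step in range(500):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(generator(z), observed_features)
    loss.backward()
    opt.step()

print("reconstruction error:", loss.item())
```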
The workshop is being recorded; the videos and slides are supposed to become publicly available at some point. Another tidbit: someone mentioned having about 1000 participants in the workshop; that's quite a lot if true.
Someone also posted this question during this talk:
"I've got a question for anyone. Back in 1975 a mathematician named Miriam Yevick published a very interesting paper in which she outlined what she called "holograpic or fourier logic." She suggested the one class of objects were best recognized/described with holographic logic, irregular, 'natural' geometry. A very different class of objects were were best recognized/described with standard symbolic logic. This seems directly relevant to neurosymbolic AI. Has this paper dropped off the edge of the intellectual earth? Yevick, Miriam Lipschutz (1975) Holographic or Fourier logic. Pattern Recognition 7: 197-213. https://doi.org/10.1016/0031-3203(75)90005-9 "
She also has some non-fiction on Amazon, which says:
"Miriam Lipshutz Yevick is the author of numerous scientific publications, poetry, and now A Testament for Ariela. She received her Ph.D. in mathematics from MIT in 1947, the 5th woman to earn this degree at MIT. She has lectured and taught at Rutgers University (Assoc. Professor Emeritus), Princeton University, City College, Adelphi College and the University of Victoria."
And she has the book "A Testament for Ariela" there: https://www.amazon.com/Miriam-Lipschutz-Yevick/e/B00AI9FZ4E

So the obituary saying, "Mrs. Miriam Lipschutz Yevick, of Monsey, New York, passed away on Wednesday, September 5, 2018, in Valhalla, New York. Mrs. Yevick was born on August 28, 1924, in Holland, Netherlands. She was 94 years old", is probably about her.

She also wrote "Mathematics for Life and Society" in 1991: https://scholarship.claremont.edu/hmnj/vol1/iss6/20/

"Holographic or Fourier logic" itself has 23 citations and is behind a paywall/library access wall...
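Since the paper itself is hard to get at, the following is not her construction, just a toy illustration of the generic "Fourier/holographic" flavor of recognition: matching a pattern by correlation computed in the Fourier domain, which is the operation an optical hologram performs in parallel.

```python
# Toy illustration only (my own, not from Yevick's paper): template matching by
# cross-correlation computed via FFT, the basic "holographic / Fourier" operation.
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))
template = image[20:28, 30:38].copy()      # an 8x8 patch we want to find again

# Circular cross-correlation: IFFT( FFT(image) * conj(FFT(zero-padded template)) ).
padded = np.zeros_like(image)
padded[:8, :8] = template
corr = np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(padded))).real

peak = np.unravel_index(np.argmax(corr), corr.shape)
print("correlation peak at", peak)         # the true location, (20, 30)
```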
The most technically useful talk of the second day is the second one, by Joohyung Lee (Samsung Research), "Injecting Discrete Logical Constraints into Neural Networks". One motif was improving the quality of perception via feedback from constraints. Harder constraints associated with tighter tasks seem to be quite useful even for "softer" tasks which technically do not require them (e.g. it is useful to demand the shortest path even when any legal path would do, because accuracy improves). Particularly interesting is the trade-off between very precise but slow gradients and fast "surrogate" gradients (not even neural-based: just take a discretization and pretend it is the identity function for the purpose of computing gradients; that gives low-quality gradients which scale well and perform nicely in practice). It's a rather strong case for the use of fast approximations to gradients (and those approximations don't have to be obtained by sophisticated neural models either, but can just be ad hoc in some cases).
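A minimal sketch of that "pretend the discretization is the identity" trick (the generic straight-through estimator, not necessarily the exact construction used in the talk), in PyTorch:

```python
import torch

def hard_round_with_identity_grad(x: torch.Tensor) -> torch.Tensor:
    # Forward pass: the hard discretization round(x).
    # Backward pass: gradients flow as if the op were the identity,
    # because the non-differentiable part is wrapped in .detach().
    return x + (torch.round(x) - x).detach()

x = torch.tensor([0.2, 0.7, 1.4], requires_grad=True)
y = hard_round_with_identity_grad(x)
y.sum().backward()
print(y.detach())  # tensor([0., 1., 1.]) -- discrete values in the forward pass
print(x.grad)      # tensor([1., 1., 1.]) -- identity ("surrogate") gradient
```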
Third day: first talk, Wolfgang Maass - interesting, need to revisit slides.
Second talk: Maximilian Nickel (Facebook AI Research in New York)
"Modeling Symbolic Domains via Compositional and Geometric Representations
In this talk, I will discuss representation learning in symbolic domains and how to use such models for simple reasoning tasks. I will first present compositional models of symbolic knowledge representations such as tensor-product and holographic models, discuss their connections to associative memory, and show that they are able to outperform purely symbolic methods in various deductive reasoning settings. Furthermore, I will discuss how structural properties of symbolic data such as hierarchies and cycles are connected to the geometry of a representation space and how geometric representation learning enables parsimonious models that preserve important semantic properties of the domain. Moreover, I will show how such embeddings can be applied to challenging tasks in NLP and biology. In addition, I will discuss connections of geometric representations to state-of-the-art generative models such as Riemannian continuous normalizing flows and Moser Flow."

This talk by Maximilian Nickel is interesting. He uses hyperbolic metric spaces for the hierarchical representations: https://en.wikipedia.org/wiki/Hyperbolic_metric_space
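For a concrete feel for the geometric side, here is a tiny sketch (my own illustration, not his code) of the distance function in the Poincaré ball model of hyperbolic space, the kind of geometry used for embedding hierarchies in Nickel and Kiela's Poincaré embeddings; points near the origin act like roots, points near the boundary like leaves.

```python
import numpy as np

def poincare_distance(u: np.ndarray, v: np.ndarray) -> float:
    """Geodesic distance between two points strictly inside the unit ball."""
    uu, vv = float(u @ u), float(v @ v)
    diff = float((u - v) @ (u - v))
    return float(np.arccosh(1.0 + 2.0 * diff / ((1.0 - uu) * (1.0 - vv))))

root  = np.array([0.00, 0.0])   # a general concept sits near the origin
child = np.array([0.45, 0.0])   # a more specific concept, further out
leaf  = np.array([0.90, 0.0])   # a very specific concept, near the boundary

# Equal Euclidean steps, but hyperbolic distance blows up near the boundary,
# which is what leaves room for exponentially growing trees.
print(poincare_distance(root, child))   # ~0.97
print(poincare_distance(child, leaf))   # ~1.98
```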
"Subgraph Aggregation Networks

Message-passing neural networks (MPNNs) are the leading architecture for deep learning on graph-structured data, in large part due to their simplicity and scalability. Unfortunately, it was shown that these architectures are limited in their expressive power. In order to gain more expressive power, a recent trend applies message-passing neural networks to subgraphs of the original graph. In this talk, I will present a representative framework of this family of methods, called Equivariant Subgraph Aggregation Networks (ESAN). The main idea behind ESAN is to represent each graph as a set of subgraphs derived from a predefined policy and to process the set of subgraphs using a suitable equivariant architecture. Our analysis shows that ESAN has favorable theoretical properties and that it performs well in practice. Following this, we will discuss some special properties of popular subgraph selection policies by connecting subgraph GNNs with previous work in equivariant deep learning."

Interesting talk.
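A rough, self-contained sketch of the core ESAN idea as I understood it (my simplification: the real architecture also shares information across subgraphs inside the network, rather than only aggregating at the end). The node-deleted subgraph policy and the tiny message-passing network below are just for illustration.

```python
import torch

def mpnn_embed(adj: torch.Tensor, feats: torch.Tensor, w1, w2) -> torch.Tensor:
    """Two rounds of message passing over the adjacency matrix, then sum pooling."""
    h = torch.relu((adj @ feats) @ w1)   # round 1: sum features over neighbours
    h = torch.relu((adj @ h) @ w2)       # round 2
    return h.sum(dim=0)                  # graph-level readout

def esan_style_embed(adj: torch.Tensor, feats: torch.Tensor, w1, w2) -> torch.Tensor:
    n = adj.shape[0]
    subgraph_embeddings = []
    for v in range(n):                   # "node-deleted" subgraph selection policy
        keep = [u for u in range(n) if u != v]
        sub_adj = adj[keep][:, keep]
        sub_feats = feats[keep]
        subgraph_embeddings.append(mpnn_embed(sub_adj, sub_feats, w1, w2))
    # Aggregate the bag of subgraph embeddings (here just a mean over the bag).
    return torch.stack(subgraph_embeddings).mean(dim=0)

torch.manual_seed(0)
w1, w2 = torch.randn(4, 8), torch.randn(8, 8)
adj = torch.tensor([[0., 1., 0., 1.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [1., 0., 1., 0.]])   # a 4-cycle
feats = torch.eye(4)                     # one-hot node features
print(esan_style_embed(adj, feats, w1, w2))   # an 8-dimensional graph embedding
```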
Wolfgang Maass's pages:

https://scholar.google.com/citations?user=2WpvdH0AAAAJ&hl=en

https://igi-web.tugraz.at/people/maass/

The first talk of the last day was the talk whose title and abstract are quoted at the top of this post.