no subject
Date: 2022-06-01 06:28 pm (UTC)
https://scholar.google.com/citations?user=2WpvdH0AAAAJ&hl=en
https://igi-web.tugraz.at/people/maass/
The first talk of the last day:
"Neural network function without learning: How can nature achieve that?"
Abstract:
It is reasonable to assume that the human brain acquires symbolic and logical reasoning capabilities not through learning, but through genetically encoded structural properties of neural networks of the human brain. But it has remained an open problem what these structural properties are. An understanding of them would not only provide substantial progress in brain science but also inspire new methods for endowing artificial neural networks with similar "innate" capabilities. Unfortunately, we are still far away from understanding those innate structural properties of neural networks in the brain that provide symbolic reasoning capabilities. But experimental data on generic cortical microcircuits elucidate the structural properties of neural networks in the brain that are likely to be involved. I will show that these structural features that are under genetic control provide a quite powerful "programming language" for inducing specific computational capabilities in neural networks, without a need for synaptic plasticity or other forms of learning. This insight gives rise to a new research program for solving the open problem of structural features of neural networks that are likely to induce symbolic and logical reasoning capabilities.
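To make the "function without learning" idea concrete, here is a toy sketch (mine, not from the talk): a tiny network whose weights are specified by hand rather than trained, yet it computes a symbolic function (XOR). The structure alone is the "program".

```python
import numpy as np

# Toy illustration: fixed, hand-wired weights -- "genetically encoded",
# never trained -- compute XOR, with zero learning involved.

def step(x):
    return (x > 0).astype(float)  # Heaviside threshold units

W1 = np.array([[1.0, 1.0],   # hidden unit 1: fires if a OR b
               [1.0, 1.0]])  # hidden unit 2: fires if a AND b
b1 = np.array([-0.5, -1.5])
W2 = np.array([1.0, -1.0])   # output: OR AND NOT(AND) == XOR
b2 = -0.5

def xor_net(x):
    h = step(W1 @ x + b1)
    return step(W2 @ h + b2)

for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        print(int(a), int(b), int(xor_net(np.array([a, b]))))
```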
no subject
Date: 2022-06-01 09:58 pm (UTC)
Interesting. If not through learning, it means we have innate limitations on what we can achieve.
no subject
Date: 2022-06-02 01:25 am (UTC)
no subject
Date: 2022-06-06 05:12 pm (UTC)
The workshop is being recorded; the videos and slides are supposed to become publicly available at some point. Another tidbit: someone mentioned having about 1000 participants in that workshop, which is quite a lot if true.
no subject
Date: 2022-06-06 07:53 pm (UTC)
"I've got a question for anyone. Back in 1975 a mathematician named Miriam Yevick published a very interesting paper in which she outlined what she called 'holographic or Fourier logic.' She suggested that one class of objects, irregular 'natural' geometry, is best recognized/described with holographic logic. A very different class of objects is best recognized/described with standard symbolic logic. This seems directly relevant to neurosymbolic AI. Has this paper dropped off the edge of the intellectual earth? Yevick, Miriam Lipschutz (1975). Holographic or Fourier logic. Pattern Recognition 7: 197-213. https://doi.org/10.1016/0031-3203(75)90005-9"
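For a rough sense of the contrast Yevick draws (this is my reading of the abstract, not her formalism): a holographic/Fourier-style match is a global correlation over a whole pattern, while a symbolic match is an exact test of a discrete description.

```python
import numpy as np

# Toy contrast between the two styles of recognition (illustrative only):
# - "holographic/Fourier": match by global correlation of whole patterns
# - "symbolic": match by checking a discrete description exactly

def holographic_match(image, template):
    """Peak of FFT-based cross-correlation: a global, distributed match."""
    F = np.fft.fft2(image)
    G = np.fft.fft2(template, s=image.shape)  # zero-pad to image size
    corr = np.fft.ifft2(F * np.conj(G)).real
    return corr.max()

def symbolic_match(description, rule):
    """Exact test of a discrete feature description."""
    return all(description.get(k) == v for k, v in rule.items())

rng = np.random.default_rng(0)
img = rng.random((32, 32))
print(holographic_match(img, img[8:16, 8:16]))       # high self-match
print(symbolic_match({"sides": 3, "closed": True},
                     {"sides": 3, "closed": True}))  # True
```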
no subject
Date: 2022-06-06 07:58 pm (UTC)
"Miriam Lipshutz Yevick is the author of numerous scientific publications, poetry, and now A Testament for Ariela. She received her Ph.D. in mathematics from MIT in 1947, the 5th woman to earn this degree at MIT. She has lectured and taught at Rutgers University (Assoc. Professor Emeritus), Princeton University, City College, Adelphi College and the University of Victoria."
And her book "A Testament for Ariela" is there: https://www.amazon.com/Miriam-Lipschutz-Yevick/e/B00AI9FZ4E
So the obituary saying, "Mrs. Miriam Lipschutz Yevick, of Monsey, New York, passed away on Wednesday, September 5, 2018, in Valhalla, New York. Mrs. Yevick was born on August 28, 1924, in Holland, Netherlands. She was 94 years old", is probably about her.
And she wrote "Mathematics for Life and Society" in 1991, https://scholarship.claremont.edu/hmnj/vol1/iss6/20/
"Holographic or fourier logic" has 23 citations, is behind paywall/library access wall...
no subject
Date: 2022-06-08 01:51 pm (UTC)
no subject
Date: 2022-06-08 02:07 pm (UTC)
Second talk: Maximilian Nickel (Facebook AI Research in New York)
"Modeling Symbolic Domains via Compositional and Geometric Representations
In this talk, I will discuss representation learning in symbolic domains and how to use such models for simple reasoning tasks. I will first present compositional models of symbolic knowledge representations such as tensor-product and holographic models, discuss their connections to associative memory, and show that they are able to outperform purely symbolic methods in various deductive reasoning settings. Furthermore, I will discuss how structural properties of symbolic data such as hierarchies and cycles are connected to the geometry of a representation space and how geometric representation learning enables parsimonious models that preserve important semantic properties of the domain. Moreover, I will show how such embeddings can be applied to challenging tasks in NLP and biology. In addition, I will discuss connections of geometric representations to state-of-the-art generative models such as Riemannian continuous normalizing flows and Moser flow."
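The "holographic models" mentioned here are presumably in the spirit of Nickel's holographic embeddings (HolE, Nickel et al. 2016), which score a knowledge-graph triple via circular correlation of the entity embeddings. A minimal sketch, with invented dimensions and random vectors:

```python
import numpy as np

# Sketch of a holographic (circular-correlation) compositional model in
# the spirit of HolE (Nickel et al., 2016); data and dimensions invented.

def circular_correlation(a, b):
    # corr(a, b) = IFFT( conj(FFT(a)) * FFT(b) ), kept real
    return np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)).real

def hole_score(subject, obj, relation):
    """Score a (subject, relation, object) triple: r . corr(s, o)."""
    return relation @ circular_correlation(subject, obj)

d = 64
rng = np.random.default_rng(1)
s, o, r = (rng.standard_normal(d) for _ in range(3))
print(hole_score(s, o, r))  # higher score = more plausible triple
```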
no subject
Date: 2022-06-08 02:22 pm (UTC)This talk by Maximilian Nickel is interesting.
He uses hyperbolic metric spaces for hierarchical representations: https://en.wikipedia.org/wiki/Hyperbolic_metric_space
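Concretely, this is the geometry behind Poincaré embeddings (Nickel & Kiela 2017). A sketch of the Poincaré-ball distance, assuming the standard formula (not code from the talk):

```python
import numpy as np

# Distance in the Poincaré-ball model of hyperbolic space, the geometry
# used for hierarchical ("Poincaré") embeddings; a sketch.

def poincare_distance(u, v, eps=1e-9):
    """d(u, v) = arcosh(1 + 2*|u-v|^2 / ((1-|u|^2) * (1-|v|^2)))."""
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return float(np.arccosh(1.0 + 2.0 * sq / max(denom, eps)))

root = np.array([0.0, 0.0])   # near the origin: a "general" node
leaf = np.array([0.0, 0.95])  # near the boundary: a "specific" node
print(poincare_distance(root, leaf))  # ~3.7, despite Euclidean 0.95
```

Distances blow up near the boundary of the ball, so a tree's leaves get exponentially more room than its root; that is what makes hyperbolic space a natural fit for hierarchies.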
no subject
Date: 2022-06-08 02:55 pm (UTC)
"Subgraph Aggregation Networks
Message-passing neural networks (MPNNs) are the leading architecture for deep learning on graph-structured data, in large part due to their simplicity and scalability. Unfortunately, it was shown that these architectures are limited in their expressive power. In order to gain more expressive power, a recent trend applies message-passing neural networks to subgraphs of the original graph. In this talk, I will present a representative framework of this family of methods, called Equivariant Subgraph Aggregation Networks (ESAN). The main idea behind ESAN is to represent each graph as a set of subgraphs derived from a predefined policy and to process the set of subgraphs using a suitable equivariant architecture. Our analysis shows that ESAN has favorable theoretical properties and that it performs well in practice. Following this, we will discuss some special properties of popular subgraph selection policies by connecting subgraph GNNs with previous work in equivariant deep learning."
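A minimal sketch of the ESAN recipe as the abstract describes it (toy encoder and a node-deletion policy of my choosing; not the paper's code): turn one graph into a bag of subgraphs, encode each with a shared message-passing encoder, and pool the bag symmetrically.

```python
import numpy as np

# Sketch of the ESAN idea (Bevilacqua et al., 2022): a predefined policy
# maps one graph to a bag of subgraphs, each is encoded by a shared
# message-passing encoder, and the bag is pooled permutation-invariantly.

def node_deleted_policy(adj):
    """Subgraph policy: one subgraph per deleted node (zeroed row/column)."""
    subgraphs = []
    for i in range(adj.shape[0]):
        sub = adj.copy()
        sub[i, :] = 0
        sub[:, i] = 0
        subgraphs.append(sub)
    return subgraphs

def mpnn_encoder(adj, x, W):
    """One round of sum-aggregation message passing, then mean-pool nodes."""
    h = np.tanh((adj @ x) @ W)
    return h.mean(axis=0)

def esan_embed(adj, x, W):
    subgraphs = node_deleted_policy(adj)
    # Deep-Sets-style mean over the bag keeps the set representation symmetric.
    return np.mean([mpnn_encoder(sub, x, W) for sub in subgraphs], axis=0)

adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)  # small star
x = np.eye(3)                                                   # one-hot node features
W = np.random.default_rng(2).standard_normal((3, 4))
print(esan_embed(adj, x, W))  # a 4-dim graph embedding
```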
no subject
Date: 2022-06-08 04:24 pm (UTC)
Interesting talk.
no subject
Date: 2022-06-08 05:08 pm (UTC)