I am reading this paper: arxiv.org/abs/2104.04657
"In this paper, we introduce a new type of generalized neural network where neurons and synapses maintain multiple states. We show that classical gradient-based backpropagation in neural networks can be seen as a special case of a two-state network where one state is used for activations and another for gradients, with update rules derived from the chain rule. In our generalized framework, networks have neither explicit notion of nor ever receive gradients. The synapses and neurons are updated using a bidirectional Hebb-style update rule parameterized by a shared low-dimensional "genome". We show that such genomes can be meta-learned from scratch, using either conventional optimization techniques, or evolutionary strategies, such as CMA-ES. Resulting update rules generalize to unseen tasks and train faster than gradient descent based optimizers for several standard computer vision and synthetic tasks."
"In this paper, we introduce a new type of generalized neural network where neurons and synapses maintain multiple states. We show that classical gradient-based backpropagation in neural networks can be seen as a special case of a two-state network where one state is used for activations and another for gradients, with update rules derived from the chain rule. In our generalized framework, networks have neither explicit notion of nor ever receive gradients. The synapses and neurons are updated using a bidirectional Hebb-style update rule parameterized by a shared low-dimensional "genome". We show that such genomes can be meta-learned from scratch, using either conventional optimization techniques, or evolutionary strategies, such as CMA-ES. Resulting update rules generalize to unseen tasks and train faster than gradient descent based optimizers for several standard computer vision and synthetic tasks."
Date: 2021-04-26 07:25 pm (UTC)
"Notice that the genome is defined at the level of individual neurons and synapses and is independent of the network architecture. Thus, the same genome can be trained for different architectures and, more generally, a genome trained on one architecture can be applied to different architectures. We show some examples of this in the experimental section.
Since the proposed framework can use more than two states, we hypothesize that just as the number of layers relates to the complexity of learning required for an individual task (inner loop of the meta-learning), the number of states might be related to the complexity of learning behaviour across tasks (outer loop)."