This is the last (and somewhat unfinished) book by John von Neumann.
It's a short book; the PDF file is only 97 pages, and a download link is visible from here: scholar.google.com/scholar?q=%22The+computer+and+the+brain%22
I've read this very interesting book over the last few days, and I'll record my impressions in the comments to this post.
no subject
Date: 2020-08-25 06:29 pm (UTC)
I am going to record my impressions about the technical part of this book sequentially, and after I do that, I'll talk a bit about context (the history of its writing, how I learned about it, etc.).
Page numbers refer to pages in the PDF file (to obtain the corresponding page numbers in the paper book, subtract 12).
no subject
Date: 2020-08-25 06:49 pm (UTC)
We were taught a bit about analog electronic computers in college and trained to work with them a little, but, somehow, we were never taught this material (and it is probably almost unknown these days).
The efficient set of basic operations is different from the standard one: see the "UNUSUAL BASIC OPERATIONS" subsection (x plus/minus y, and the "integrator"). There are also "the feedback tricks" or "the feedback principle" (unfortunately, he does not discuss it here, but it is like recurrent connections in neural nets; see e.g. http://www.encyclopedia69.com/eng/d/feedback-principle/feedback-principle.htm and https://en.wikipedia.org/wiki/Feedback etc.)
I think this is extremely interesting material, and these basic operations should be imported more frequently into the realm of neural machines.
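To give a feel for these analog "organs" in modern terms, here is a toy discrete-time sketch in Python (entirely my own illustration, not code or notation from the book): an integrator that accumulates its input, and a negative feedback loop feeding the integrator's output back into its input; all constants are arbitrary.

# A toy discrete-time emulation of two analog "organs":
# an integrator accumulating its input, and a feedback loop around it.
# This is my own illustration of the flavor of these operations,
# not anything taken from von Neumann's text.

def integrator(dt=0.01):
    """Return a stateful "organ" that accumulates its input over time."""
    state = 0.0
    def step(x):
        nonlocal state
        state += x * dt
        return state
    return step

def feedback_loop(gain=-1.0, drive=1.0, steps=1000, dt=0.01):
    """Feed the integrator's output back into its input (the "feedback principle").
    With gain < 0 the loop settles to an equilibrium instead of running away."""
    integrate = integrator(dt)
    y = 0.0
    for _ in range(steps):
        y = integrate(drive + gain * y)
    return y

if __name__ == "__main__":
    print(feedback_loop())  # approaches 1.0, the equilibrium of dy/dt = 1 - y

With negative feedback the loop converges to a fixed point, which is the kind of behavior "the feedback principle" exploits; with positive feedback the same circuit would run away.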
no subject
Date: 2020-08-25 06:59 pm (UTC)
First of all, he calls parts of machines "organs" throughout the text; so he is really thinking about those machines as "artificial animals".
Then he notes that an analog machine uses many copies of organs performing the same basic operation in parallel, unlike the typical serial digital computers of his time (so neural nets and other neural machines are more like analog computers in this respect; this will also be apparent later in the text; my own paper https://arxiv.org/abs/1712.07447 also traces the origins of our approach to neuromorphic computations and neural machines to analog computers, see Section 11.2, page 24).
no subject
Date: 2020-08-25 07:50 pm (UTC)
Then, on page 36, he talks about a leaky integrator used as a condition trigger (this is, basically, the leaky integrate-and-fire scheme; amazingly enough, this is still in the "Computer" part of the book; we would expect to see something like this in the "Brain" part, where it would be unsurprising: https://en.wikipedia.org/wiki/Biological_neuron_model#Leaky_integrate-and-fire ; what's interesting is that the leaky integrate-and-fire model goes back a long way: https://en.wikipedia.org/wiki/Louis_Lapicque ).
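For reference, here is a minimal leaky integrate-and-fire sketch in Python (the standard textbook form of the model; the time constant, threshold, and input current are arbitrary values I picked, not anything from von Neumann's page 36):

import numpy as np

def leaky_integrate_and_fire(input_current, dt=1e-3, tau=0.02,
                             v_rest=0.0, v_threshold=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: the membrane potential leaks toward v_rest,
    integrates the input current, and emits a spike (then resets) whenever
    it crosses v_threshold."""
    v = v_rest
    spike_times = []
    for step, i_t in enumerate(input_current):
        v += (-(v - v_rest) + i_t) * dt / tau
        if v >= v_threshold:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

if __name__ == "__main__":
    current = np.full(1000, 1.5)                  # 1 second of constant suprathreshold drive
    print(leaky_integrate_and_fire(current)[:5])  # first few spike times, roughly every 22 ms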
no subject
Date: 2020-08-26 01:43 am (UTC)
On pages 82-85, the section "Codes and Their Role in the Control of the Functioning of a Machine" discusses the ability of a universal Turing machine (or any reasonable computer) to emulate any other such machine, or even a high-level language (a "SHORT CODE").
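To make the "short code" idea concrete: it is essentially an interpreter, i.e., a program that makes one machine imitate the instruction set of another. Here is a toy Python sketch (my own illustration; the miniature instruction set below is invented for the example and is not the SHORT CODE von Neumann refers to):

# A toy interpreter: the "host" machine (Python here) runs a program that
# makes it behave like a different machine with its own instruction set.
# The instruction set is made up purely for illustration.

def run_short_code(program, memory):
    """Interpret a list of (opcode, args) instructions for a tiny register machine."""
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "SET":        # SET reg value
            memory[args[0]] = args[1]
        elif op == "ADD":      # ADD reg_dst reg_src
            memory[args[0]] += memory[args[1]]
        elif op == "JNZ":      # JNZ reg target  (jump if register is nonzero)
            if memory[args[0]] != 0:
                pc = args[1]
                continue
        pc += 1
    return memory

# Compute 5 + 5 + 5 on the emulated machine: loop three times, adding 5 each time.
prog = [("SET", "acc", 0), ("SET", "x", 5), ("SET", "n", 3),
        ("ADD", "acc", "x"), ("SET", "m1", -1), ("ADD", "n", "m1"),
        ("JNZ", "n", 3)]
print(run_short_code(prog, {}))   # 'acc' ends up as 15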
Von Neumann thinks that the brain is also a digital-analog (and predominantly digital) computer, and that it should be able to emulate any "short code", including our natural language, our logic, our mathematics, etc.
This might be a quite fruitful point of view, when meditating on the nature of "consciousness"; this way of thinking is potentially a good companion to "The Consciousness Prior" by Yoshua Bengio, https://arxiv.org/abs/1709.08568 (I recommend both 2017 and 2019 versions of Bengio's text).
So, what's in our consciousness is presumably something emulated by the brain rather than inherent to it, if one believes von Neumann's conjecture about this...
no subject
Date: 2020-08-26 04:43 am (UTC)
no subject
Date: 2020-08-26 05:41 pm (UTC)
Of course, this observation can now be considered a correct prediction for artificial neural nets as well.
He conjectures that the "intrinsic math of neural machinery in the brain" is, therefore, quite different from our "conventional math", but does not elaborate.
I am going to elaborate further in this thread on what this "intrinsic math" might be.
no subject
Date: 2020-08-27 04:29 pm (UTC)
In addition, one should remember that recurrent machines X_{n+1} = F(X_n) work reasonably well when F is close to the identity map; see e.g. https://dmm.dreamwidth.org/19100.html .
Cf. also the comment on "The Analog Procedure" above.
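To make "F is close to the identity map" concrete, here is a minimal sketch (my own illustration, in the spirit of residual/Euler-style updates, not code from the linked post): take F = identity + eps * g with a small eps, so each step changes the state only slightly and the iteration stays well-behaved.

import numpy as np

def near_identity_step(x, g, eps=0.01):
    """One step of a recurrent machine X_{n+1} = F(X_n), where
    F = identity + eps * g is a small perturbation of the identity map."""
    return x + eps * g(x)

def iterate(x0, g, steps=1000, eps=0.01):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = near_identity_step(x, g, eps)
    return x

if __name__ == "__main__":
    # g nudges the state toward the point (1, -1); because each step moves
    # the state only a little, the iteration converges smoothly.
    target = np.array([1.0, -1.0])
    print(iterate([0.0, 0.0], lambda x: target - x))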
***
At the same time, if one wants to explore math specifically oriented towards probability and statistics and/or interval numbers, and then perhaps to add the ability to work with partial contradictions in probabilistic and interval setups (https://www.cs.brandeis.edu/~bukatin/dmm-probabilistic-samples.pdf ; https://www.cs.brandeis.edu/~bukatin/PartiallyInconsistentIntervalNumbers.pdf), this is fertile ground for such experiments.
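Just to convey the flavor of "partial contradictions" in an interval setup, here is a toy Python sketch; this is only my own illustration under the assumption that one allows the lower bound to exceed the upper bound and reads the overshoot as a degree of contradiction, and it is not the actual formalism of the linked papers:

from dataclasses import dataclass

@dataclass
class PInterval:
    """A toy "partially inconsistent" interval: the lower bound is allowed to
    exceed the upper bound, and the overshoot is read as a degree of
    contradiction.  (My own illustration of the general flavor only, not the
    construction from the linked papers.)"""
    lo: float
    hi: float

    def contradiction(self) -> float:
        return max(0.0, self.lo - self.hi)

    def __add__(self, other: "PInterval") -> "PInterval":
        # Componentwise addition, as in ordinary interval arithmetic.
        return PInterval(self.lo + other.lo, self.hi + other.hi)

a = PInterval(0.0, 1.0)            # an ordinary interval
b = PInterval(3.0, 1.0)            # lo > hi: a partially contradictory "measurement"
print(b.contradiction())           # 2.0
print((a + b).contradiction())     # 1.0: the wide ordinary interval absorbs part of the contradiction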
And then one might try to formulate all this in topos terms, just like people explored this for quantum theory, e.g. https://arxiv.org/abs/1107.1083 "Unsharp Values, Domains and Topoi".
So, one could still use von Neumann's remarks on the need for a different "intrinsic math" for neural computations as an inspiration for various non-trivial explorations here, even if rather mild changes are quite sufficient to satisfy von Neumann's desiderata from a superficial viewpoint.
Perhaps there are reasons to move beyond this superficial viewpoint, even if they are not sufficiently articulated in von Neumann's text.
no subject
Date: 2020-08-27 04:43 pm (UTC)
Von Neumann emphasizes that the nervous system's notation is statistical: information is carried by the frequencies of pulse trains rather than by precise digital markers (the "Not Digital but Statistical" section, pages 88-89).
I am always interested in attempts to move from rate coding to spike synchronizations and the oscillations associated with those synchronizations. However, I usually assumed that non-synchronized neurons yield something close to Poisson spike trains. Regardless of what happens in actual biology, it might be quite fruitful to take a hint from von Neumann, consider periodic behavior even for non-synchronized neurons, and build synchronization models on that basis.
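To spell out the contrast I have in mind, here is a small Python sketch (my own illustration, with arbitrary rates): a Poisson spike train versus a strictly periodic train at the same mean rate; synchronization models could then start from the periodic variant.

import numpy as np

def poisson_spike_train(rate_hz, duration_s, seed=0):
    """Spike times of a homogeneous Poisson process
    (independent, exponentially distributed inter-spike intervals)."""
    rng = np.random.default_rng(seed)
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_hz)
        if t > duration_s:
            return np.array(times)
        times.append(t)

def periodic_spike_train(rate_hz, duration_s, phase_s=0.0):
    """Strictly periodic spike times at the same mean rate."""
    return np.arange(phase_s, duration_s, 1.0 / rate_hz)

if __name__ == "__main__":
    # Same mean rate, very different fine temporal structure.
    print(poisson_spike_train(20.0, 1.0)[:5])
    print(periodic_spike_train(20.0, 1.0)[:5])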
no subject
Date: 2020-08-27 04:52 pm (UTC)
***
I learned about this book here
https://twitter.com/JohnMeuser/status/1293339704226656258
and then John pushed me to actually read it
https://twitter.com/JohnMeuser/status/1296918998391697415
and there are various discussions between us around those threads, e.g.
https://twitter.com/JohnMeuser/status/1297639656922767362
(He is obviously seeing something else there, not what I am seeing, e.g.
https://twitter.com/JohnMeuser/status/1297640700872450050
https://twitter.com/JohnMeuser/status/1297641242474446848
https://twitter.com/JohnMeuser/status/1297639967280291844
Most of what I wrote in those twitter threads is consolidated in my comments to this post.)