"Example 3.6 (Gradient descent). The gradient descent, backpropagation algorithm used by each “neu- ron” in a deep learning architecture can be phrased as a logical proposition about learners. The whole learning architecture is then put together as in [9], or as we’ve explained things above, using the operad Sys from Definition 2.19"
...
"The logical propositions that come from Proposition 3.5 are very special. More generally, one could have a logical proposition like “whenever I receive two red tokens within three seconds, I will wait five seconds and then send either three blue tokens or two blues and six reds.” As long as this behavior has the “whenever” flavor—more precisely as long as it satisfies the condition in Definition 3.4—it will be a logical proposition in the topos."