Nicole: More and more, I see that people are very concerned with the biological plausibility of neural networks. I think this comes from the fact that we as machine learners are finally achieving human-level performance on some tasks. That has renewed faith in the idea that the best way to “solve” intelligence is to copy the brain, the only truly intelligent thing we know of.

It’s interesting that this idea has come back into fashion, after being considered a very outmoded way of thinking for the majority of the 90s and early 2000s. The success of deep learning has given rise to a rebirth of connectionism.

So when I saw this paper, “Toward an Integration of Deep Learning and Neuroscience,” I was intrigued and hoped it could help me see what all the fuss is about.

Alona: Interestingly, this paper seems to focus more on how the brain might do something like what a neural network does, and less on how ML might adapt to be more like the brain (as earlier work did). Maybe this back-and-forth, comparing each system to the other, is how we will eventually come to understand how the brain learns and adapts as a system.

I find building models to resemble some aspect of neural function to be a sort of navel-gazing activity that isn’t of much practical use, especially when it takes endless parameter-tweaking to get a reasonable result. But now that I’ve seen a generation of this exercise come full circle (we are now comparing the biological system to the artificial one), I see how recreating a biological system in silico can be informative.

Nicole: I agree that there is utility in the feedback. A model that successfully imitates your system can help you generate new hypotheses to test in vivo, and my favorite thing about this paper is that the authors sketch out neural experiments to test their theoretical claims. That said, we should always remember that the space of possible models is large: multiple models can exhibit similar properties and therefore be hard to distinguish without further investigation.

~-~

Let’s talk about the two systems (biological and artificial) that underlie this paper.

If you come from an ML background, we recommend looking at our post, Neurons 101, to get up to speed on the basics of neurons in the brain. For those from a neuroscience background, or anyone new to neural networks, check out our post, Neural Networks 101.

With these basics in mind, let’s compare neurons and neural networks. What do they have in common?

  • They integrate several inputs and threshold the result to send an output (see the sketch after this list)
  • They can send their output to, and take inputs from, multiple other neurons or nodes
  • Connections between neurons (or nodes) can vary in strength. That is, the output of an upstream neuron can have a variable impact on the neurons downstream of it
  • “Learning” typically involves changing the strength of connections
  • Artificial neurons are often organized into layers, inspired by the layered organization of neurons observed in biology
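
To make the shared picture concrete, here is a minimal sketch of a single artificial node in Python/NumPy: it weights its inputs, sums them, and thresholds the result. The numbers and the step activation are our own illustrative choices (real networks usually use smooth activation functions), not anything taken from the paper.

```python
import numpy as np

def artificial_node(inputs, weights, bias):
    """Weight several inputs, sum them, and threshold the result."""
    pre_activation = np.dot(weights, inputs) + bias  # integrate inputs via connection strengths
    return 1.0 if pre_activation > 0 else 0.0        # threshold to decide whether to "fire"

# Hypothetical values, purely for illustration.
x = np.array([0.5, -1.0, 0.25])  # activity arriving from three upstream nodes
w = np.array([0.8, 0.1, -0.4])   # connection strengths ("weights")
print(artificial_node(x, w, bias=0.2))  # prints 1.0: the weighted sum cleared the threshold
```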

And what are the differences?

| Function | Neurons | Neural Network Nodes |
| --- | --- | --- |
| Communication | Neurotransmitter release and binding (quantized) | Output can be continuous-valued |
| Connections | Unidirectional – a neuron sends information downstream along its axon; very little information flows back toward its inputs | Bidirectional – connections between nodes allow information to flow both downstream and upstream |
| Architecture | Largely genetically determined. Some local circuits can be formed or destroyed through learning, but some neurons will never connect to certain others because of where they sit in the brain. New neurons can grow in adult brains, though the principles guiding this are not known | Determined a priori by the engineer, typically based on existing architectures. Can be arbitrary and free of physical constraints. With few exceptions, the architecture is fixed at the start of learning and does not change |
| Learning | Local – the connection between two neurons can strengthen or weaken independently of the rest of the circuit, and new connections can form | Global – an error signal computed on the output is propagated backwards, so all connection weights can be adjusted in a coordinated manner (see the sketch after this table) |
| Signal Constraints | A neuron is typically either excitatory (increasing activity in downstream neurons) or inhibitory (suppressing it) | Because weights can take on any value and sign, the same node can excite some downstream nodes and inhibit others |
| Relationship to Time | Learning occurs via coincidence detection, so the timing of spikes matters to some degree; whether additional information is encoded in spike times is unclear. After a neuron fires, there is a period during which firing again is nearly impossible (the refractory period) | With the exception of recurrent neural networks (RNNs), time has no explicit representation. Most approaches treat each learning step as a timestep, with an entire pass through the network occurring in each step |
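
To illustrate the Learning row above, here is a toy contrast (our own sketch, assuming a two-layer linear network; none of the variable names come from the paper) between a local, Hebbian-style update and a global, backprop-style update. The Hebbian rule changes each connection using only the activity of the two units it joins, while backprop adjusts the same connections using an error signal propagated back from the output.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)        # input activity
W1 = rng.normal(size=(4, 3))  # input -> hidden connection strengths
W2 = rng.normal(size=(1, 4))  # hidden -> output connection strengths
target = np.array([1.0])
lr = 0.1

# Forward pass (linear units keep the algebra short).
h = W1 @ x                    # hidden activity
y = W2 @ h                    # output activity

# Local, Hebbian-style update: each W1 connection changes based only on the
# activity of the two units it connects; no knowledge of the output error.
W1_local = W1 + lr * np.outer(h, x)      # "fire together, wire together"

# Global, backprop-style update: the output error is sent backwards through W2,
# so every W1 connection is adjusted in a coordinated way to reduce that error.
error = y - target                               # global error signal at the output
W1_global = W1 - lr * np.outer(W2.T @ error, x)  # gradient step on squared error
```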


~-~

Nicole: There are many more differences than similarities between artificial and biological neural networks. However, I wonder which of these differences actually matter for achieving human-level (or better) performance on tasks. For example, does it matter that neurotransmitters are released in quanta? They could be functionally continuous. Do the differences between neural network nodes and neurons matter? What if nodes don’t correspond to neurons, but instead to some other structure in the brain (e.g., a circuit)?

Alona: This paper also highlights the fact that, though they were originally inspired by biological networks, artificial neural networks have progressed on their own largely without concern for what is and is not biologically plausible.  This is fine for machine learning as a field, but one of the contributions of this paper is to bring the focus back to what the brain is doing.  I personally think this refocusing is a good exercise and hope that we will find inspiration for new machine learning methods in the methods employed by biological neural circuits.  

Nicole: Something of particular interest to me is how machine learning has focused on efficiency from a computational perspective, but not from a data perspective. That is, we have focused on creating models and training methods that take less time to learn, but not less data. This paper attempts to address this point, and I think focusing on that difference is a promising avenue for machine learning advancement.

Alona: This paper covers a lot of ground, though, so we’ve chosen to break our response to it into several blog posts. This introductory post covers some of the basics of biological and artificial neurons and contrasts the two.

Nicole: In our next post, we’ll try to unpack some of the paper’s discussion of biological versus machine learning. Stay tuned!