A deep learning framework for neuroscience

A very interesting paper, published in Nature, on how systems neuroscience can benefit from deep learning. In work led by Blake Richards, Tim Lillicrap, and Konrad Kording, the main argument is that focusing on the three core elements used to design deep learning systems — network architecture, objective functions, and learning rules — can offer a fresh approach to understanding the computational principles of neural circuits.

Deep learning and neuroscience

 

Systems neuroscience seeks explanations for how the brain implements a wide variety of perceptual, cognitive and motor tasks. Conversely, artificial intelligence attempts to design computational systems based on the tasks they will have to solve. In artificial neural networks, the three components specified by design are the objective functions, the learning rules and the architectures. With the growing success of deep learning, which utilizes brain-inspired architectures, these three designed components have increasingly become central to how we model, engineer and optimize complex artificial learning systems. Here we argue that a greater focus on these components would also benefit systems neuroscience. We give examples of how this optimization-based framework can drive theoretical and experimental progress in neuroscience. We contend that this principled perspective on systems neuroscience will help to generate more rapid progress.
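To make these three designed components concrete, here is a minimal PyTorch sketch (an illustration of the framework, not code from the paper) in which the architecture, the objective function, and the learning rule are each an explicit, swappable choice:

    import torch
    import torch.nn as nn

    # 1. Architecture: how units are wired together.
    model = nn.Sequential(
        nn.Linear(784, 256),   # e.g. a flattened 28x28 input
        nn.ReLU(),
        nn.Linear(256, 10),
    )

    # 2. Objective function: what the system is optimized for.
    objective = nn.CrossEntropyLoss()

    # 3. Learning rule: how parameters change to improve the objective
    #    (here, stochastic gradient descent driven by backpropagation).
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    # One optimization step on a dummy batch of data.
    x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
    loss = objective(model(x), y)
    optimizer.zero_grad()
    loss.backward()   # gradients of the objective
    optimizer.step()  # apply the learning rule

Swapping any one of the three (a recurrent architecture, an unsupervised objective, a more biologically plausible learning rule) changes what the network learns, which is exactly the design space the authors propose as a lens on neural circuits.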

 

Machine learning methods enable researchers to discover statistical patterns in large datasets to solve a wide variety of tasks, including in neuroscience. Recent advances have led to a major explosion in the scope and complexity of problems to which machine learning can be applied, with an accuracy rivaling or surpassing that of humans in some domains.

To understand the implications of deep learning for neuroscience, it helps to first grasp some key concepts:

  • Basic machine learning concepts and resources.
  • Machine learning methods to automate analyses of large neuroscience datasets (a minimal decoding sketch follows this list).
  • Using deep network learning to gain insight into how the brain learns.
  • Combining machine learning concepts with neuroscience theory to predict nervous system function and uncover general principles.
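As an illustration of the second point, a minimal scikit-learn sketch (the data here are synthetic stand-ins, not a real recording) that decodes a stimulus label from population activity:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for a neuroscience dataset: firing rates of
    # 100 neurons on 500 trials, plus a binary stimulus label per trial.
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=500)
    rates = rng.normal(size=(500, 100)) + 0.5 * labels[:, None]

    # A standard machine learning analysis: cross-validated logistic
    # regression decoding the stimulus from the population response.
    decoder = LogisticRegression(max_iter=1000)
    scores = cross_val_score(decoder, rates, labels, cv=5)
    print(f"decoding accuracy: {scores.mean():.2f}")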

 

The article gives very good insight into all the previous points and leads the way for the growing number of initiatives blending artificial intelligence and neuroscience.

 

Link to Nature

Deep Learning for Cognitive Neuroscience

There are other efforts to link neural networks to the inner workings of the human brain, showing how neural network models can now recognise images, understand text, translate languages, and play many human games at human or superhuman levels. These systems are highly abstracted, but are inspired by biological brains and use only biologically plausible computations. In the coming years, neural networks are likely to become less reliant on learning from massive labelled datasets, and more robust and generalisable in their task performance.

From their successes and failures, we can learn about the computational requirements of the different tasks at which brains excel. Deep learning also provides the tools for testing cognitive theories. In order to test a theory, we need to realise the proposed information-processing system at scale, so as to be able to assess its feasibility and emergent behaviours. Deep learning allows us to scale up from principles and circuit models to end-to-end trainable models capable of performing complex tasks.
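To show what "end-to-end trainable" means in practice, a small hypothetical sketch in PyTorch: a convolutional network that maps raw pixels to category labels, so every stage of a proposed information-processing system is differentiable and can be assessed at scale on a real task:

    import torch
    import torch.nn as nn

    # A minimal image-computable model: raw pixels in, categories out.
    # Because every stage is differentiable, the whole pipeline can be
    # trained end-to-end on a task rather than hand-tuned piece by piece.
    class TinyVisualModel(nn.Module):
        def __init__(self, n_classes=10):
            super().__init__()
            self.features = nn.Sequential(       # candidate circuit model
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.readout = nn.Linear(32, n_classes)  # behavioural readout

        def forward(self, images):
            return self.readout(self.features(images).flatten(1))

    model = TinyVisualModel()
    out = model(torch.randn(8, 3, 64, 64))  # a batch of 8 RGB images
    print(out.shape)                        # torch.Size([8, 10])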

There are many levels at which cognitive neuroscientists can use deep learning in their work, from inspiring theories to serving as full computational models. Ongoing advances in deep learning bring us closer to understanding how cognition and perception may be implemented in the brain — the grand challenge at the core of cognitive neuroscience.

Link to arXiv

 

Deep Neural Networks in Computational Neuroscience

Another interesting approach looks from the opposite direction, trying to find mechanistic explanations of how the nervous system processes information to give rise to cognitive function and behavior.

At the heart of the field are its models, that is, mathematical and computational descriptions of the system being studied, which map sensory stimuli to neural responses and/or neural responses to behavioral responses. These models range from simple to complex. Recently, deep neural networks (DNNs) have come to dominate several domains of artificial intelligence (AI).
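As a concrete instance of the simple end of that range, a sketch of a linear encoding model (with synthetic data, for illustration only) that maps stimulus features to the response of a single neuron and is evaluated on held-out stimuli:

    import numpy as np
    from sklearn.linear_model import Ridge

    # Encoding model: predict one neuron's response from stimulus features.
    rng = np.random.default_rng(1)
    features = rng.normal(size=(300, 20))   # 300 stimuli x 20 features
    true_weights = rng.normal(size=20)
    responses = features @ true_weights + 0.1 * rng.normal(size=300)

    # Fit on half the stimuli; test generalization on the held-out half.
    encoder = Ridge(alpha=1.0).fit(features[:150], responses[:150])
    print(f"held-out R^2: {encoder.score(features[150:], responses[150:]):.2f}")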

As the term “neural network” suggests, these models are inspired by biological brains. However, current DNNs neglect many details of biological neural networks.

These simplifications contribute to their computational efficiency, enabling them to perform complex feats of intelligence, ranging from perceptual (e.g., visual object and auditory speech recognition) to cognitive tasks (e.g., machine translation), and on to motor control (e.g., playing computer games or controlling a robot arm).

In addition to their ability to model complex intelligent behaviors, DNNs excel at predicting neural responses to novel sensory stimuli with accuracies well beyond any other currently available model type. DNNs can have millions of parameters, which are required to capture the domain knowledge needed for successful task performance. Contrary to the intuition that this renders them into impenetrable black boxes, the computational properties of the network units are the result of four directly manipulable elements: input statistics, network structure, functional objective, and learning algorithm. With full access to the activity and connectivity of all units, advanced visualization techniques, and analytic tools to map network representations to neural data, DNNs represent a powerful framework for building task-performing models and will drive substantial insights in computational neuroscience.
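One widely used analytic tool for mapping network representations to neural data is representational similarity analysis (RSA). A minimal sketch with synthetic activations (illustrative only, not the paper's pipeline):

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    # RSA: compare a DNN layer to a brain region through their
    # stimulus-by-stimulus representational dissimilarity matrices (RDMs).
    rng = np.random.default_rng(2)
    dnn_acts = rng.normal(size=(50, 512))    # 50 stimuli x 512 units
    brain_acts = rng.normal(size=(50, 200))  # 50 stimuli x 200 voxels

    # Condensed RDMs: 1 - correlation between activity patterns.
    rdm_dnn = pdist(dnn_acts, metric="correlation")
    rdm_brain = pdist(brain_acts, metric="correlation")

    # Second-order comparison: how similar are the two geometries?
    rho, _ = spearmanr(rdm_dnn, rdm_brain)
    print(f"RDM correlation (Spearman rho): {rho:.2f}")

A high RDM correlation means the model and the brain region treat the same stimuli as similar or dissimilar, without requiring any unit-to-neuron correspondence.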

 

Link to Oxford
