Research

Machine learning methods for neuroscience

Advances in experimental methods are generating data at an unprecedented scale and resolution, spanning the levels of single neurons, neural networks, and behaviour. Extracting knowledge from this wealth of data increasingly depends on accurate, scalable, and interpretable theoretical models of neural activity and behaviour. In our lab, we believe that this effort can be greatly amplified by modern machine learning methods.

Despite tremendous advances in machine learning, most methods are not geared towards generating interpretable insights. Our lab addresses this need by developing and applying machine learning methods for mechanistic insight in neuroscience. In close collaboration with experimental partners, we aim to design more accurate models, quantitatively test mechanistic hypotheses, and derive experimentally testable predictions, with the ultimate goal of refining our understanding of neural systems in health and disease.

Selected publications

PNAS

Energy efficient network activity from disparate circuit parameters

Deistler M., Macke J.H.*, Gonçalves P.J.*

Neural systems have the remarkable feature of showing similar activity patterns despite disparate underlying mechanistic properties. This feature, called parameter degeneracy, underlies the capacity of neural systems to compensate for perturbations to their components. Less well understood is whether parameter degeneracy is reduced (or eliminated) by biological constraints, such as the need to preserve metabolic efficiency or robustness to environmental fluctuations. By developing machine learning methods for degeneracy analysis, we investigated this question in a computational model of the pyloric circuit in the crab stomatogastric ganglion.

eLife

Training deep neural density estimators to identify mechanistic models of neural dynamics

Gonçalves P.J.*, Lueckmann J.*, Deistler M.*, Nonnenmacher M., Oecal K., Bassetto G., Chintaluri C., Podlaski W.F., Haddad S.A., Vogels T.P., Greenberg D.S., Macke J.H.

We designed an algorithm that makes it easier to fit mathematical models to experimental data. First, the algorithm trains an artificial neural network to predict which models are compatible with simulated data. After initial training, the method can rapidly be applied to either raw experimental data or selected data features. The algorithm then returns the models that generate the best match. This newly developed machine learning tool was able to automatically identify models which can replicate the observed data from a diverse set of neuroscience problems, and may help bridge the gap between ‘data-driven’ and ‘theory-driven’ approaches.
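
To make the core idea concrete, here is a minimal sketch, not the paper's actual implementation: a network is trained on simulated parameter/data pairs to output a Gaussian approximation of the posterior over parameters. The toy simulator and all architectural choices are illustrative assumptions.

    import torch
    import torch.nn as nn

    def simulator(theta):
        # Toy "mechanistic model": noisy exponential decay with rate theta.
        t = torch.linspace(0.0, 1.0, 20)
        return torch.exp(-theta * t) + 0.05 * torch.randn(theta.shape[0], 20)

    # Draw parameters from a uniform prior and simulate training data.
    theta = torch.rand(5000, 1) * 5.0            # prior: Uniform(0, 5)
    x = simulator(theta)

    # Network predicts mean and log-std of a Gaussian posterior over theta.
    net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for _ in range(2000):
        opt.zero_grad()
        mean, log_std = net(x).chunk(2, dim=1)
        # Negative log-likelihood of the true parameters under the predicted posterior.
        loss = (log_std + 0.5 * ((theta - mean) / log_std.exp()) ** 2).mean()
        loss.backward()
        opt.step()

    # After training: condition on observed data to get an approximate posterior.
    x_obs = simulator(torch.tensor([[2.5]]))
    mean, log_std = net(x_obs).chunk(2, dim=1)
    print(f"theta | x_obs ~ N({mean.item():.2f}, {log_std.exp().item():.2f})")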

NIPS

Flexible statistical inference for mechanistic models of neural dynamics

Lueckmann J.*, Gonçalves P.J.*, Bassetto G., Oecal K., Nonnenmacher M., Macke J.H.

Mechanistic models of single-neuron dynamics have been extensively studied in computational neuroscience. However, identifying which models can quantitatively reproduce empirically measured data has been challenging. We propose to overcome this limitation by using likelihood-free inference approaches (also known as Approximate Bayesian Computation, ABC) to perform full Bayesian inference on single-neuron models. Our approach will enable neuroscientists to perform Bayesian inference on complex neuron models without having to design model-specific algorithms, closing the gap between mechanistic and statistical approaches to single-neuron modelling.
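
For readers unfamiliar with the likelihood-free family, here is a minimal rejection-ABC sketch, the simplest member of that family (the paper itself uses neural-network-based approaches). The integrate-and-fire simulator and the spike-count summary statistic are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def spike_count(g_leak, dt=1e-3, t_max=1.0):
        # Euler-integrated leaky integrate-and-fire neuron with noisy drive;
        # the summary statistic is the number of spikes in one second.
        v, n = 0.0, 0
        for _ in range(int(t_max / dt)):
            v += dt * (20.0 - g_leak * v) + np.sqrt(dt) * rng.normal(0.0, 2.0)
            if v >= 1.0:  # threshold crossing: count a spike and reset
                n += 1
                v = 0.0
        return n

    x_obs = spike_count(g_leak=10.0)  # synthetic "observed" data

    # Rejection ABC: sample parameters from the prior, simulate, and keep
    # those whose summary statistic lands within epsilon of the observation.
    prior_samples = rng.uniform(1.0, 50.0, size=1000)
    accepted = [g for g in prior_samples if abs(spike_count(g) - x_obs) <= 2]

    print(f"accepted {len(accepted)}/1000; posterior mean g_leak ~ {np.mean(accepted):.1f}")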

Coverage of Pedro Gonçalves' research

(Image credit: Franz-Georg Stämmele)

Is the pyloric network of the crustacean stomatogastric ganglion robust to perturbations of its parameters?
It turns out that we can move between two data-compatible parameter configurations along a parameter path without ever leaving high-probability terrain. This parameter path therefore corresponds to a direction of robustness to perturbations, as the toy sketch below illustrates.
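
As a toy illustration of this idea, the sketch below evaluates a made-up two-dimensional density along two paths between the same endpoints; only the path bent along the high-probability ridge avoids low-probability terrain. The density is purely illustrative, standing in for the actual posterior over pyloric-circuit parameters.

    import numpy as np

    def log_posterior(theta):
        # Made-up curved ridge of high probability; both endpoints lie on it.
        x, y = theta
        return -0.5 * ((y - x**2) ** 2 / 0.1 + x**2 / 4.0)

    theta_a = np.array([-1.0, 1.0])  # two data-compatible configurations
    theta_b = np.array([1.0, 1.0])

    # A straight line between them dips into low-probability terrain...
    straight = [log_posterior((1 - a) * theta_a + a * theta_b)
                for a in np.linspace(0.0, 1.0, 11)]

    # ...whereas a path bent along the ridge stays in high-probability terrain.
    curved = [log_posterior(np.array([x, x**2])) for x in np.linspace(-1.0, 1.0, 11)]

    print(f"lowest log-density on straight path: {min(straight):.2f}")
    print(f"lowest log-density on curved path:   {min(curved):.2f}")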

Sequential Neural Posterior Estimation (SNPE)
In SNPE, parameters for a mechanistic model of neural activity are drawn from a prior distribution provided by the modeler. Running simulations using these parameters creates simulated data on which a deep neural network can be trained. After training, the network can transform experimental data into a posterior distribution indicating which parameter values are consistent with that data.
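
As an illustration of this workflow, here is a minimal sketch using the open-source sbi package, which implements SNPE (API details may vary across sbi versions; the simulator is a toy stand-in for a mechanistic neural model).

    import torch
    from sbi.inference import SNPE
    from sbi.utils import BoxUniform

    # 1. Prior over model parameters, provided by the modeler.
    prior = BoxUniform(low=torch.zeros(2), high=torch.ones(2) * 10.0)

    # 2. Mechanistic simulator (here: a toy damped oscillation).
    def simulator(theta):
        t = torch.linspace(0.0, 1.0, 50)
        freq, decay = theta[:, 0:1], theta[:, 1:2]
        return (torch.sin(freq * t) * torch.exp(-decay * t)
                + 0.1 * torch.randn(theta.shape[0], 50))

    # 3. Simulate a training set and train the neural density estimator.
    theta = prior.sample((2000,))
    x = simulator(theta)
    inference = SNPE(prior=prior)
    density_estimator = inference.append_simulations(theta, x).train()
    posterior = inference.build_posterior(density_estimator)

    # 4. Condition on observed data to get the posterior over parameters.
    x_obs = simulator(torch.tensor([[6.0, 3.0]]))
    samples = posterior.sample((1000,), x=x_obs)
    print(samples.mean(dim=0))  # parameter values consistent with the observation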