Record No.

UNINA9910254617203321

Author

Petrovici, Mihai Alexandru

Title

Form Versus Function: Theory and Models for Neuronal Substrates / by Mihai Alexandru Petrovici

Publication/distribution/printing

Cham : Springer International Publishing : Imprint: Springer, 2016

ISBN

3-319-39552-1

Edition

[1st ed. 2016.]

Physical description

1 online resource (XXVI, 374 p. 150 illus., 101 illus. in color.)

Series

Springer Theses, Recognizing Outstanding Ph.D. Research, 2190-5053

Discipline

612.8

Subjects

Physics

Neural networks (Computer science) 

Neurobiology

Neurosciences

Computer simulation

Numerical and Computational Physics, Simulation

Mathematical Models of Cognitive Processes and Neural Networks

Simulation and Modeling

Language of publication

English

Format

Printed material

Bibliographic level

Monograph

General notes

"Doctoral Thesis accepted by the University of Heidelberg, Germany."

Bibliography note

Includes bibliographical references at the end of each chapter.

Contents note

Prologue -- Introduction: From Biological Experiments to Mathematical Models -- Artificial Brains: Simulation and Emulation of Neural Networks -- Dynamics and Statistics of Poisson-Driven LIF Neurons -- Cortical Models on Neuromorphic Hardware -- Probabilistic Inference in Neural Networks -- Epilogue.

Summary/abstract

This thesis addresses one of the most fundamental challenges for modern science: how can the brain, as a network of neurons, process information, how can it create and store internal models of our world, and how can it infer conclusions from ambiguous data? The author addresses these questions with the rigorous language of mathematics and theoretical physics, an approach that requires a high degree of abstraction to transfer results of wet-lab biology to formal models. The thesis starts with an in-depth description of the state of the art in theoretical neuroscience, which it subsequently uses as a basis to develop several new and original ideas. Throughout the text, the author connects the form and function of neuronal networks, with the aim of achieving the functional performance of biological brains by transferring their form to synthetic electronic substrates, an approach referred to as neuromorphic computing. That this transfer can never be perfect, but necessarily leads to performance differences, is substantiated and explored in detail. The author also introduces a novel interpretation of the firing activity of neurons: he proposes a probabilistic interpretation of this activity and shows by means of formal derivations that stochastic neurons can sample from internally stored probability distributions. This is corroborated by the author's recent findings, which confirm that biological features such as the high-conductance state of networks enable this mechanism. The author goes on to show that neural sampling can be implemented on synthetic neuromorphic circuits, paving the way for future applications in machine learning and cognitive computing, for example as energy-efficient implementations of deep learning networks. The thesis offers an essential resource for newcomers to the field and an inspiration for scientists working in theoretical neuroscience and on the future of computing.
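
As a minimal sketch of the sampling idea summarized above (an illustration, not code taken from the thesis): stochastic binary units whose "firing" probability is a logistic function of their local input perform Gibbs sampling from a Boltzmann distribution encoded in symmetric weights W and biases b. The thesis maps this abstract computation onto spiking neurons and neuromorphic hardware; here, all variable names and parameter values are illustrative assumptions.

# Illustrative sketch: stochastic binary "neurons" sampling from
#   p(z) proportional to exp( 0.5 * z^T W z + b^T z ),  z in {0,1}^n,
# by Gibbs updates. Neural sampling identifies z_k = 1 with neuron k firing
# and u_k = b_k + sum_j W_kj z_j with an abstract membrane potential.
import numpy as np

rng = np.random.default_rng(0)

def sigma(u):
    return 1.0 / (1.0 + np.exp(-u))

def gibbs_sample(W, b, n_steps=10000, burn_in=1000):
    """Draw binary states z from the Boltzmann distribution defined by (W, b)."""
    n = len(b)
    z = rng.integers(0, 2, size=n).astype(float)
    samples = []
    for t in range(n_steps):
        for k in range(n):                           # update each "neuron" in turn
            u_k = b[k] + W[k] @ z                    # local field of unit k
            z[k] = float(rng.random() < sigma(u_k))  # stochastic firing decision
        if t >= burn_in:
            samples.append(z.copy())
    return np.array(samples)

# Tiny 3-unit example with symmetric couplings and no self-connections
# (all values chosen arbitrarily for demonstration).
W = np.array([[ 0.0, 1.0, -0.5],
              [ 1.0, 0.0,  0.8],
              [-0.5, 0.8,  0.0]])
b = np.array([-0.2, 0.1, 0.0])
samples = gibbs_sample(W, b)
print("empirical p(z_k = 1):", samples.mean(axis=0))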