Brian simulator
Dan F. M. Goodman and Romain Brette (2013), Scholarpedia, 8(1):10883. doi:10.4249/scholarpedia.10883
Brian is an open source Python package for developing simulations of networks of spiking neurons. The design aims to minimize users' development time, with execution speed a secondary goal. Users specify neuron and synapse models by giving their equations in standard mathematical form, create groups of neurons, and connect them via synapses. The intent is to make the process as flexible as possible, so that researchers are not restricted to using neuron and synapse models already built into the simulator. The entire simulator is written in Python, using the NumPy and SciPy numerical and scientific computing packages. Parts of the simulator can optionally be run using C++ code generated on the fly (Goodman 2010). Computationally, Brian uses vectorization techniques (Brette and Goodman 2011), so that for large numbers of neurons, execution speed is of the same order of magnitude as C++ code (Goodman and Brette 2008, 2009).
Overview and example
A simulation written with Brian is a Python module (script) that typically involves doing at least the following:
- Import the Brian package.
- Define parameters and equations.
- Create groups of neurons with the NeuronGroup object.
- Create synapses with the Connection or Synapses objects.
- Specify variables to be recorded using SpikeMonitor and StateMonitor objects.
- Run the simulation with the run function.
- Analyse and plot the output.
The following example illustrates these steps (the script produces a raster plot of the network's spiking activity and the membrane potential traces of the four recorded neurons):
    from brian import *

    eqs = """
    dv/dt = (ge+gi-(v+49*mV))/(20*ms) : volt
    dge/dt = -ge/(5*ms) : volt
    dgi/dt = -gi/(10*ms) : volt
    """

    P = NeuronGroup(4000, eqs, threshold='v>-50*mV', reset='v=-60*mV')
    P.v = -60*mV + 10*mV*rand(len(P))
    Pe = P.subgroup(3200)
    Pi = P.subgroup(800)

    Ce = Connection(Pe, P, 'ge', weight=1.62*mV, sparseness=0.02)
    Ci = Connection(Pi, P, 'gi', weight=-9*mV, sparseness=0.02)

    spikemon = SpikeMonitor(P)
    statemon = StateMonitor(P, 'v', record=range(4))

    run(250*ms)

    subplot(211)
    raster_plot(spikemon)
    subplot(212)
    statemon.plot()
    show()
Philosophy
The aim of Brian is to be, in order of priority:
- Easy to learn, use and understand
- Expressive and flexible
- Computationally efficient
The rationale for these priorities is to encourage correct and reproducible simulations by making Brian more attractive to researchers than writing their own simulation code. Hand-written code is more prone to errors, and it can be more difficult for a third party to understand. In addition, such code typically mixes implementation details with the specification of the model, which may hide the fact that the implementation does not match what is stated in the text of the corresponding paper.
To encourage such practice, it must be easier for researchers to use the simulator than to write their own code from scratch; it must be possible to implement their choice of model rather than restricting them to a fixed choice of pre-existing models; and it must be feasible to run their model within a reasonable time frame. The relevant time constraint is not simulation time alone, but development time plus simulation time. In many cases, the time spent developing and implementing the model far outweighs the time spent simulating it, so making the package easier to use is important in reducing the total time cost of a simulation study.
These design aims led to the following choices in the design and implementation of Brian. In order to make it easier to learn and use, extensive online support was created, and the names and syntax were designed to minimise the cognitive effort to learn the language. In order to make Brian code easy to understand, reproducible, expressive and flexible, it is designed around equations: users specify models in terms of differential equations in standard mathematical notation rather than using predefined neuron types. Finally, in order to remain computationally efficient, Brian uses a combination of vectorisation techniques and code generation.
Design
Equations
Rather than having a fixed set of neuron models that users can choose from, in Brian users explicitly define a set of differential equations specifying the model. This makes Brian highly flexible, allowing users to define arbitrary mathematical models.
Equations are specified as strings in a standard mathematical format, for example:
    eqs = """
    dv/dt = (ge+gi-(v+49*mV))/(20*ms) : volt
    dge/dt = -ge/(5*ms) : volt
    dgi/dt = -gi/(10*ms) : volt
    """
The text after the colon on each line gives the units of the variable being defined (v, ge, gi here).
Equations are required to be dimensionally consistent, so that if dx/dt = f(x), then f(x) must have units of unit(x)/second.
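For example, the following sketch shows how an inconsistent equation is rejected. The Equations class is part of Brian; the variable name and time constant here are illustrative, not taken from the article.

    from brian import *

    # Dimensionally consistent: the right-hand side has units of volt/second,
    # matching d(volt)/dt
    eqs_ok = Equations('dv/dt = -v/(10*ms) : volt')

    # Dimensionally inconsistent: the right-hand side has units of volt rather
    # than volt/second, so Brian raises a dimension error when the equations
    # are checked (e.g. when a NeuronGroup is built from them)
    # eqs_bad = Equations('dv/dt = -v : volt')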
From these neuron equations, code is automatically generated to solve the differential equations (see code generation).
In addition to the differential equations governing the basic dynamics of neurons, the equations syntax is also used to specify other properties of the model, such as the threshold condition, reset code, and synaptic modification code, for further flexibility. Below is the code for the short-term synaptic plasticity model of Markram et al. (1998):
    S = Synapses(source, target,
                 model="""
                 w : 1
                 dx/dt = (1-x)/taud : 1 (event-driven)
                 du/dt = (U-u)/tauf : 1 (event-driven)
                 """,
                 pre="""
                 I += w*u*x
                 x *= (1-u)
                 u += U*(1-u)
                 """)
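After construction, the synapses still have to be created and their variables initialised. A rough usage sketch follows, using the slicing syntax of Brian 1.4; the all-to-all connectivity and the parameter values are illustrative assumptions, not part of the original example:

    S[:, :] = True      # create a synapse for every (source, target) pair
    S.w = 0.5           # synaptic weight
    S.x = 1             # initial values of the short-term plasticity variables
    S.u = U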
A complete example using this can be found on the Brian website.
Units
Brian includes a system for specifying quantities with units; for example, 3*mV specifies 3 millivolts. Attempting to perform an arithmetical operation with inconsistent units, calling a function with a parameter with the wrong units, or defining a differential equation with inconsistent units will raise an error. The units system has two goals:
- Protect the user from making accidental mistakes with inconsistent units, which can lead to hard-to-debug errors.
- Not require the user to remember which units a parameter is specified in, because the unit is always given explicitly, e.g. tau=3*ms or tau=0.003*second, instead of tau=3, which would require the user to remember that tau is specified in milliseconds.
The following shows a Python session demonstrating how the units system works:
    >>> from brian import *
    >>> print 3*mV
    3.0 mV
    >>> print 0.003*volt
    3.0 mV
    >>> print 3000*mV
    3.0 V
    >>> print 3*mV+0.005*volt
    8.0 mV
    >>> print 1000*(3*mV)
    3.0 V
    >>> print (1*volt)/mV
    1000.0
    >>> print (1*amp)/nA
    1000000000.0
    >>> print 1*amp+1*volt
    DimensionMismatchError: Addition, dimensions were (A) (m^2 kg s^-3 A^-1)
Names and syntax
The syntax of Brian was chosen to minimise the cognitive effort required of the user to learn the language (Brette 2012). At the first level, this involved choosing simple, memorable names that follow a regular pattern. In addition, wherever possible, fixed built-in names were avoided in favour of letting users choose their own. For example, users write the equations defining their model explicitly, so they do not need to remember or look up parameter names chosen by the developers.
Implementation
There are a handful of basic objects in Brian, most importantly NeuronGroup, which is used to create a group of neurons with a specific model, and Connection and Synapses, which are used to create synaptic connectivity between neurons. There are also other basic objects such as SpikeMonitor and StateMonitor, which are used to record spikes and state variable traces. Flexibility is gained from the use of NetworkOperation, which allows users to specify arbitrary Python code that is called on each time step of the simulation.
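For example, a network operation can be defined with the network_operation decorator. The sketch below is illustrative only: the group, its equations and the injected input are assumptions, not taken from the article.

    from brian import *

    G = NeuronGroup(100, '''dv/dt = (I - v)/(10*ms) : volt
                            I : volt''')

    @network_operation
    def update_input():
        # Arbitrary Python code executed on every simulation time step;
        # here it gives all neurons a common, randomly fluctuating input.
        G.I = 5*mV*rand()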
On each time step, the following operations are carried out:
- Differential equations are integrated for all neurons.
- The threshold condition (e.g. V > V_threshold) is checked for each neuron, and a list of spiking neurons is created.
- Each neuron that spiked is reset with the user-specified reset code (e.g. V = V_reset).
- For each neuron that spiked, the set of associated synapses is found, and the corresponding synaptic operations are performed.
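A minimal NumPy sketch of these four operations, for a leaky integrate-and-fire network with instantaneous voltage jumps at synapses, is given below. It illustrates the scheme rather than Brian's actual implementation; all names and parameter values are made up for the example.

    import numpy as np

    num_neurons = 1000
    dt, tau = 0.1e-3, 20e-3                                  # time step and membrane time constant (s)
    V_rest, V_threshold, V_reset = -60e-3, -50e-3, -60e-3    # volts
    w = 0.5e-3                                               # synaptic weight (volts)

    V = V_rest + 10e-3*np.random.rand(num_neurons)
    # Dense random connectivity matrix for illustration (Brian also supports sparse storage)
    C = np.random.rand(num_neurons, num_neurons) < 0.02

    def step(V):
        # 1. Integrate the differential equations (forward Euler here)
        V = V + dt*(V_rest - V)/tau
        # 2. Check the threshold condition and list the spiking neurons
        spikes = np.nonzero(V > V_threshold)[0]
        # 3. Reset the neurons that spiked
        V[spikes] = V_reset
        # 4. Propagate spikes along the synapses of the spiking neurons
        V = V + w*C[spikes, :].sum(axis=0)
        return V, spikes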
Differential equations
Differential equations are integrated on a fixed time grid, using one of the following methods:
- Exact integration for linear differential equations (detected automatically).
- Forward Euler, Runge-Kutta or implicit Euler method for nonlinear differential equations.
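As a worked illustration of the difference between exact and approximate integration, consider the linear equation dv/dt = -v/tau. Over one time step dt, the exact solution multiplies v by exp(-dt/tau), whereas forward Euler multiplies it by (1 - dt/tau). The sketch below (values illustrative) compares the two updates:

    import numpy as np

    tau, dt = 20e-3, 0.1e-3          # membrane time constant and time step (s)
    v = 1.0

    v_exact = v*np.exp(-dt/tau)      # exact update for dv/dt = -v/tau
    v_euler = v*(1 - dt/tau)         # forward Euler approximation of the same step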
Vectorisation
Python is an interpreted language, and code such as the following (for solving the differential equation dV/dt = f(V)) is very slow:

    for i in range(num_neurons):
        V_tmp = f(V[i])
        V[i] += V_tmp*dt
This is because each iteration of the loop requires a (slow) interpretation step, dwarfing the actual arithmetical operations. In order to solve this problem, Brian uses a vectorised form of the operation using the numpy package:
    V_tmp = f(V)
    V += V_tmp*dt
In the first case, operations are performed on scalar values f(V[i]), etc. In the second case, operations are performed on the whole vector V. This means the number of interpreted operations is reduced from O(num_neurons) to O(1). The total run time is the time for the arithmetical operations plus the time for the interpretation operations. As the latter is fixed, and the time for the arithmetical operations is the same whether the code is written in C++, Python, or any other language, the simulation time for Brian approaches that of pure C++ code as the number of neurons increases. Typically, for simulations with tens of thousands of neurons, the simulation time for Brian is of the same order of magnitude as pure C++ code. For more details, see Goodman and Brette (2008).
For the more complicated operations involved in synaptic weight propagation, vectorisation is more involved, but still possible (Brette and Goodman 2011).
Code generation
An even better solution than vectorisation, if the user has a C++ compiler installed, is to use code generation. In this case, Brian generates C++ source code to solve the differential equations, compiles the code and runs it, all automatically without user intervention (Goodman 2010). Compilation times are typically only a few seconds for the small code snippets needed, and they are automatically cached so that recompilation is only necessary if the equations change.
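In Brian 1.x, code generation and inlined C++ are switched on through global preferences. The sketch below is hedged: the preference names are assumptions recalled from the Brian 1.x documentation and should be checked against the documentation of the installed version.

    from brian import *

    # Enable inlined C/C++ and code generation, assuming a C++ compiler is available
    set_global_preferences(useweave=True,
                           usecodegen=True,
                           usecodegenweave=True)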
Features
Brian has support for the following basic features:
- Define neuron models using arbitrary differential equations, threshold conditions and reset codes.
- Define synapse models using arbitrary synaptic differential equations and propagation equations.
- Synaptic connectivity matrices can be dense or sparse.
- Define parameter values using SI units.
- A library of standard neuron and synapse models.
- Record neuron activity, both spikes and traces.
- Plasticity (STDP, STP, arbitrary user-defined plasticity models).
- Gap junctions.
- Nonlinear synapses.
In addition, Brian comes with the following extension packages:
- Model fitting (Rossant et al. 2010, 2011). This package fits a user-specified neuron model to electrophysiological data. Parameter search is conducted using one of several global optimisation algorithms: particle swarm optimisation, genetic algorithms, or the CMA-ES algorithm. Fitting can be performed in parallel over multiple CPUs, multiple machines, and, if available, using graphics processing units (GPUs) for a speed improvement of up to 60x.
- Brian Hears (Fontaine et al. 2011). This package is for simple and efficient modelling of the auditory periphery. It consists of a base library for performing filter-based modelling on large numbers of channels simultaneously (the human auditory system uses roughly 3000 filters in parallel for each ear), as well as implementations of many standard filter banks used in auditory periphery modelling (a minimal usage sketch is given after this list).
- Electrophysiology: models of electrodes and amplifiers, compensation and analysis methods.
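A minimal sketch of the Brian Hears interface mentioned above, passing a sound through a gammatone filter bank. The stimulus, the number of channels and the frequency range are illustrative choices; the function and class names follow the Brian Hears documentation.

    from brian import *
    from brian.hears import *

    sound = whitenoise(100*ms)             # a 100 ms white noise stimulus
    cf = erbspace(20*Hz, 20*kHz, 3000)     # 3000 centre frequencies on the ERB scale
    fb = Gammatone(sound, cf)              # gammatone filter bank, one filter per channel
    output = fb.process()                  # filter the sound through all channels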
Online support
The online support for Brian consists of the following:
- Extensive documentation, including tutorials, examples, a manual and a reference section.
- An online support mailing list, with feedback from developers and other users.
Future developments
The next major development planned is "Brian 2.0". This is a rewrite from scratch that will be mostly but not entirely backwards
compatible with the 1.x series. The aim is to massively simplify the code for Brian, to increase its flexibility and performance, and
to support multiple different computational devices.
More objects will accept string arguments in which arbitrary code statements can be given, and code generation will be used more extensively for efficiency. Devices such as GPUs (Brette and Goodman 2012) or embedded devices for robotics applications will be supported by redefining the basic objects in Brian. Rather than writing from brian import *, users will be able to write from brian.devices.gpu import *.
History and team
Work on Brian started in October 2007, although the name was not chosen until December 2007. The first public release (0.1.0alpha) was made shortly afterwards, in time for Christmas, and the first full release (1.0.0) was made in September 2008. Originally, the team consisted of Romain Brette and Dan Goodman, but has since been extended to include many more contributors. Many new features have been added to the library since its initial release, including notably the model fitting package (Rossant et al. 2010, 2011) for automatically fitting neuron models to electrophysiological recordings, and the Brian Hears auditory system modelling package (Fontaine et al. 2011).
References
- Brette, R (2012). On the design of script languages for neural simulation. Network: Computation in Neural Systems (early online).
- Brette, R and Goodman, D F M (2011). Vectorized algorithms for spiking neural network simulation. Neural Computation 23(6).
- Brette, R and Goodman, D F M (2012). Simulating spiking neural networks on GPU. Network: Computation in Neural Systems (early online).
- Fontaine, B; Goodman, D F M; Benichoux, V and Brette, R (2011). Brian Hears: online auditory processing using vectorization over channels. Frontiers in Neuroinformatics 5(9).
- Goodman, D F M and Brette, R (2008). Brian: a simulator for spiking neural networks in Python. Frontiers in Neuroinformatics 2(5).
- Goodman, D F M and Brette, R (2009). The Brian simulator. Frontiers in Neuroscience 3(2): 192-197.
- Goodman, D F M (2010). Code Generation: A Strategy for Neural Network Simulators. Neuroinformatics 8(3): 183-196.
- Markram, H; Wang, Y and Tsodyks, M (1998). Differential signaling via the same axon of neocortical pyramidal neurons. Proceedings of the National Academy of Sciences 95(9): 5323-5328.
- Rossant, C; Goodman, D F M; Platkiewicz, J and Brette, R (2010). Automatic fitting of spiking neuron models to electrophysiological recordings. Frontiers in Neuroinformatics 4(2).
- Rossant, C; Goodman, D F M; Platkiewicz, J; Magnusson, A K and Brette, R (2011). Fitting neuron models to spike trains. Frontiers in Neuroscience 5(9).
See also
Genesis, NEST (NEural Simulation Tool), NEURON Simulation Environment, XPPAUT