# Welcome to Brian’s documentation!

## Introduction

Brian is a clock-driven simulator for spiking neural networks, written in the Python programming language.

The simulator is written almost entirely in Python. The idea is that it can be used at various levels of abstraction without the steep learning curve of software like NEURON, which requires you to learn a dedicated programming language to extend its models. As a language, Python is well suited to this task because it is easy to learn, widely known and supported, and allows a great deal of flexibility in usage and in designing interfaces and abstraction mechanisms. As an interpreted language, and therefore slower than, say, C++, Python is not the obvious choice for writing a computationally demanding scientific application. However, the SciPy module for Python provides very efficient linear algebra routines, which means that vectorised code can be very fast.
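As a rough illustration of the point (plain NumPy, not Brian code; the model equation is the leaky integrate-and-fire update used in the tutorials below): the same Euler update gives identical results written as a Python loop or as a single vectorised array operation, but the vectorised form executes in optimised C.

```python
import numpy as np

# Illustration only (not Brian code): one forward-Euler step for N
# leaky integrate-and-fire membrane potentials.
N, dt, tau, El = 4000, 0.1e-3, 20e-3, -49e-3
v = np.full(N, -60e-3)          # all neurons start at -60 mV

# Interpreted Python loop: one element at a time (slow for large N)
v_loop = v.copy()
for i in range(N):
    v_loop[i] += dt * (El - v_loop[i]) / tau

# Vectorised: the whole array updated in one C-level operation (fast)
v_vec = v + dt * (El - v) / tau

assert np.allclose(v_loop, v_vec)
```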

Here’s what the Python web site has to say about the language:

Python is an easy to learn, powerful programming language. It has efficient high-level data structures and a simple but effective approach to object-oriented programming. Python’s elegant syntax and dynamic typing, together with its interpreted nature, make it an ideal language for scripting and rapid application development in many areas on most platforms.

The Python interpreter and the extensive standard library are freely available in source or binary form for all major platforms from the Python Web site, http://www.python.org/, and may be freely distributed. The same site also contains distributions of and pointers to many free third party Python modules, programs and tools, and additional documentation.

As an example of the ease of use and clarity of programs written in Brian, the following script defines and runs a randomly connected network of 4000 integrate and fire neurons with exponential currents:

```
from brian import *

# Membrane equation with exponential excitatory and inhibitory currents
eqs = '''
dv/dt = (ge+gi-(v+49*mV))/(20*ms) : volt
dge/dt = -ge/(5*ms) : volt
dgi/dt = -gi/(10*ms) : volt
'''
P = NeuronGroup(4000, model=eqs, threshold=-50*mV, reset=-60*mV)
P.v = -60*mV
Pe = P.subgroup(3200)   # excitatory subgroup
Pi = P.subgroup(800)    # inhibitory subgroup
Ce = Connection(Pe, P, 'ge', weight=1.62*mV, sparseness=0.02)
Ci = Connection(Pi, P, 'gi', weight=-9*mV, sparseness=0.02)
M = SpikeMonitor(P)
run(1*second)
raster_plot(M)
show()
```

As an example of the output of Brian, the following two images reproduce figures from Diesmann et al. 1999 on synfire chains. The first is a raster plot of a synfire chain showing the stabilisation of the chain.

The simulation of 1000 neurons in 10 layers, each all-to-all connected to the next, using integrate and fire neurons with synaptic noise for 100 ms of simulated time, took 1 second to run with a timestep of 0.1 ms on a 2.4 GHz Intel Xeon dual-core processor. The next image shows the state space (figure 3 of the paper).

The figure was computed from 50 averages for each of 121 starting points over 100 ms at a timestep of 0.1 ms, and took 201 s to run on the same processor as above.

## Installation

If you already have a copy of Python 2.5-2.7, try the Quick installation below, otherwise take a look at Manual installation.

### Quick installation

#### easy_install / pip

The easiest way to install the most recent version of Brian, if you already have a version of Python 2.5-2.7 including the `easy_install` script, is simply to run the following in a shell:

```
easy_install brian
```

This will download and install Brian and all its required packages (NumPy, SciPy, etc.).

Similarly, you can use the `pip` utility:

```
pip install brian
```

Note that there are some optimisations you can make after installation, see the section below on Optimisations.

#### Debian/Ubuntu packages

If you use a Debian-based Linux distribution (in addition to Debian itself, this includes for example Ubuntu or Linux Mint), you can install Brian directly from your favourite package manager (e.g. Synaptic or the Ubuntu Software Centre), thanks to the packages provided by the NeuroDebian team.

The package is called `python-brian`; the documentation and tutorials can be found in `python-brian-doc`. To install these packages from the command line, use:

```
sudo apt-get install python-brian python-brian-doc
```

Note that in contrast to the procedure described above for easy_install / pip, you will not necessarily get the most recent version of Brian this way. On the other hand, you do not have to take care of future updates yourself, as the Brian package gets updated with the standard update process. Additionally, the Brian package already includes all the compiled C code mentioned in the Optimisations section. Another way to install Brian which combines these advantages with up-to-date versions is to directly add the NeuroDebian repository to your software sources.

### Manual installation

Installing Brian requires the following components:

- Python version 2.5-2.7.
- NumPy and SciPy packages for Python: efficient scientific libraries.
- PyLab package for Python: a plotting library similar to Matlab’s (see the detailed installation instructions).
- SymPy package for Python: a library for symbolic mathematics (not yet mandatory for Brian).
- Brian itself (don’t forget to download the extras.zip file, which includes examples, tutorials, and a complete copy of the documentation). Brian is also a Python package and can be installed as explained below.

Fortunately, Python packages are very quick and easy to install, so the whole process shouldn’t take very long.

We also recommend using the following for writing programs in Python (see details below):

- The Eclipse IDE with the PyDev plugin
- The IPython interactive shell

Finally, if you want to use the (optional) automatic C++ code generation features of Brian, you should have the `gcc` compiler installed (on Cygwin if you are running on Windows).

Mac users: the Enthought Python Distribution (EPD) is free for academics and contains all the libraries necessary to run Brian. Otherwise, the SciPy Superpack for Intel OS X also includes versions of NumPy, SciPy, PyLab and IPython.

Windows users: the Python(x,y) distribution includes all the packages above (including Eclipse and IPython) except Brian, which is available as an optional plugin.

Another option is the Anaconda distribution, which also includes all the packages above except Brian and Eclipse.

#### Installing Python packages

On Windows, Python packages (including Brian) are generally installed simply by running an .exe file. On other operating systems, you can download the source release (typically a compressed archive .tar.gz or .zip that you need to unzip) and then install the package by typing the following in your shell:

```
python setup.py install
```

#### Installing Eclipse

Eclipse is an Integrated Development Environment (IDE) for any programming language. PyDev is a plugin for Eclipse with features specifically for Python development. The combination of these two is excellent for Python development (it’s what we use for writing Brian).

To install Eclipse, go to their web page and download any of the base language IDEs. It doesn’t matter which one, but since Python is not one of the base languages you will have to choose one for another language; probably the most useful are the C++ and Java versions.

Having downloaded and installed Eclipse, you should download and install the PyDev plugin from their web site. The best way to do this is directly from within the Eclipse IDE. Follow the instructions on the PyDev manual page.

#### Installing IPython

IPython is an interactive shell for Python. It has features for SciPy and PyLab built in, so it is a good choice for scientific work. Download from their page. If you are using Windows, you will also need to download PyReadline from the same page.

#### C++ compilers

The default for Brian is to use the `gcc` compiler, which will already be installed on most Unix or Linux distributions. If you are using Windows, you can install Cygwin (make sure to include the `gcc` package). Alternatively, some but not all versions of Microsoft Visual C++ should be compatible, but this is untested so far; see the documentation for the SciPy Weave package for more information. Mac users should have Xcode installed so as to have access to `gcc` and hence take advantage of Brian’s compiled code. See also the section on Compiled code.

### Testing

You can test whether Brian has installed properly by running Python and typing the following two lines:

```
from brian import *
brian_sample_run()
```

A sample network should run and produce a raster plot.

### Optimisations

After a successful installation, there are some optimisations you can make to your Brian installation to get it running faster using compiled C code. We do not include these as standard because they do not work on all computers, and we want Brian to install without problems on all computers. Note that including all the optimisations can result in significant speed increases (around 30%).

These optimisations are described in detail in the section on Compiled code.

## Getting started

### Tutorials

These tutorials cover some basic topics in writing Brian scripts in Python. The complete source code for the tutorials is available in the tutorials folder in the extras package.

#### Tutorials for Python and SciPy

##### Python

The first thing to do in learning how to use Brian is to have a basic grasp of the Python programming language. There are lots of good tutorials already out there. The best one is probably the official Python tutorial. There is also a course for biologists at the Pasteur Institute: Introduction to programming using Python.

##### NumPy, SciPy and PyLab

The first place to look is the SciPy documentation website. To start using Brian, you do not need to understand much about how NumPy and SciPy work, although understanding how their array structures work will be useful for more advanced uses of Brian.

The syntax of the NumPy and PyLab functions is very similar to Matlab’s. If you already know Matlab, you could read the tutorial NumPy for Matlab users and the list of Matlab-Python translations (also available as a pdf). A tutorial is also available on the PyLab web site.

#### Tutorial 1: Basic Concepts

In this tutorial, we introduce some of the basic concepts of a Brian simulation:

- Importing the Brian module into Python
- Using quantities with units
- Defining a neuron model by its differential equation
- Creating a group of neurons
- Running a network
- Looking at the output of the network
- Modifying the state variables of the network directly
- Defining the network structure by connecting neurons
- Doing a raster plot of the output
- Plotting the membrane potential of an individual neuron

The following Brian classes will be introduced: `NeuronGroup`, `Connection`, `SpikeMonitor` and `StateMonitor`.

We will build a Brian program that defines a randomly connected network of integrate and fire neurons and plot its output.

This tutorial assumes you know:

- The very basics of Python: the `import` keyword, variables, basic arithmetical expressions, calling functions, lists
- The simplest leaky integrate and fire neuron model

The best place to start learning Python is the official tutorial.


##### Tutorial 1g: Recording membrane potentials

In the previous part of this tutorial, we plotted a raster plot of the firing times of the network. In this tutorial, we introduce a way to record the value of the membrane potential for a neuron during the simulation, and plot it. We continue as before:

```
from brian import *

tau = 20 * msecond   # membrane time constant
Vt = -50 * mvolt     # spike threshold
Vr = -60 * mvolt     # reset value
El = -49 * mvolt     # resting potential (above threshold)
psp = 0.5 * mvolt    # postsynaptic potential size

G = NeuronGroup(N=40, model='dV/dt = -(V-El)/tau : volt',
                threshold=Vt, reset=Vr)
C = Connection(G, G)
C.connect_random(sparseness=0.1, weight=psp)
```

This time we won’t record the spikes.

###### Recording states

Now we introduce a second type of monitor, the `StateMonitor`. The first argument is the group to monitor, and the second is the state variable to monitor. The keyword `record` can be an integer, a list or the value `True`. If it is an integer `i`, the monitor will record the state of the variable for neuron `i`. If it is a list of integers, it will record the states for each neuron in the list. If it is set to `True`, it will record for all the neurons in the group.

```
M = StateMonitor(G, 'V', record=0)
```

And then we continue as before:

```
G.V = Vr + rand(40) * (Vt - Vr)
```

But this time we run it for a shorter time so we can look at the output in more detail:

```
run(200 * msecond)
```

Having run the simulation, we plot the results using the `plot` command from PyLab, which has the same syntax as the Matlab `plot` command, i.e. `plot(xvals,yvals,...)`. The `StateMonitor` records the times at which it monitored a value in the array `M.times`, and the values in the array `M[0]`. The notation `M[i]` means the array of values of the monitored state variable for neuron `i`.

In the following lines, we scale the times so that they’re measured in ms and the values so that they’re measured in mV. We also label the plot using PyLab’s `xlabel`, `ylabel` and `title` functions, which again mimic the Matlab equivalents.

```
plot(M.times / ms, M[0] / mV)
xlabel('Time (in ms)')
ylabel('Membrane potential (in mV)')
title('Membrane potential for neuron 0')
show()
```

You can clearly see the exponential decay of the leaky integration toward the resting potential, as well as the jumps when spikes were received.

##### Tutorial 1a: The simplest Brian program

###### Importing the Brian module

The first thing to do in any Brian program is to load Brian and the names of its functions and classes. The standard way to do this is to use the Python `from ... import *` statement.

```
from brian import *
```

###### Integrate and Fire model

The neuron model we will use in this tutorial is the simplest possible leaky integrate and fire neuron, defined by the differential equation:

```
tau dV/dt = -(V - El)
```

and with a threshold value Vt and reset value Vr.

###### Parameters

Brian has a system for defining physical quantities (quantities with a physical dimension such as time). The code below illustrates how to use this system, which (mostly) works just as you’d expect.

```
tau = 20 * msecond   # membrane time constant
Vt = -50 * mvolt     # spike threshold
Vr = -60 * mvolt     # reset value
El = -60 * mvolt     # resting potential (same as the reset)
```

The built-in standard units in Brian consist of all the fundamental SI units like second and metre, along with a selection of derived SI units such as volt, farad and coulomb. All names are lowercase, following the SI standard. In addition, there are scaled versions of these units using the standard SI prefixes m = 1/1000 (milli), k = 1000 (kilo), etc.
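Brian’s actual units machinery is considerably more sophisticated, but a minimal sketch of the underlying idea (attaching a dimension to each value and refusing to combine mismatched ones) might look like this; the `Quantity` class here is purely illustrative and not part of Brian:

```python
# Minimal sketch of the idea behind unit checking (NOT Brian's
# actual implementation): attach a dimension label to each value
# and refuse to combine quantities with mismatched dimensions.
class Quantity:
    def __init__(self, value, dim):
        self.value = value   # numeric value in SI base units
        self.dim = dim       # dimension label, e.g. 'volt' or 'second'

    def __add__(self, other):
        if self.dim != other.dim:
            raise ValueError('dimension mismatch: %s + %s'
                             % (self.dim, other.dim))
        return Quantity(self.value + other.value, self.dim)

mvolt = Quantity(1e-3, 'volt')      # scaled unit: 1 mV
msecond = Quantity(1e-3, 'second')  # scaled unit: 1 ms

total = Quantity(-50e-3, 'volt') + mvolt     # fine: volts + volts
try:
    Quantity(-50e-3, 'volt') + msecond       # rejected: volts + seconds
except ValueError as err:
    print('caught:', err)
```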

###### Neuron model and equations

The simplest way to define a neuron model in Brian is to write a list of the differential equations that define it. For the moment, we’ll just give the simplest possible example, a single differential equation. You write it in the following form:

```
dx/dt = f(x) : unit
```

where `x` is the name of the variable, `f(x)` can be any valid Python expression, and `unit` is the physical unit of the variable `x`. In our case we will write:

```
dV/dt = -(V-El)/tau : volt
```

to define the variable `V` with units `volt`.

To complete the specification of the model, we also define a threshold and reset value and create a group of 40 neurons with this model.

```
G = NeuronGroup(N=40, model='dV/dt = -(V-El)/tau : volt',
                threshold=Vt, reset=Vr)
```

This statement creates a new object `G`, an instance of the Brian class `NeuronGroup`, initialised with the values in the line above and 40 neurons. In Python, you can call a function or initialise a class using keyword arguments as well as ordered arguments, so if I defined a function `f(x,y)` I could call it as `f(1,2)` or as `f(y=2,x=1)` and get the same effect. See the Python tutorial for more information on this.

For the moment, we leave the neurons in this group unconnected to each other; each evolves separately from the others.

###### Simulation

Finally, we run the simulation for 1 second of simulated time. By default, the simulator uses a timestep dt = 0.1 ms.

```
run(1 * second)
```

And that’s it! To see some of the output of this network, go to the next part of the tutorial.

###### Exercise

The units system of Brian is useful for ensuring that everything is consistent, and that you don’t make hard-to-find mistakes in your code by using the wrong units. Try changing the units of one of the parameters and see what happens.

###### Solution

You should see an error message with a Python traceback (telling you which functions were being called when the error happened), ending in a line something like:

```
brian.units.DimensionMismatchError: The differential equations
are not homogeneous!, dimensions were (m^2 kg s^-3 A^-1)
(m^2 kg s^-4 A^-1)
```

##### Tutorial 1b: Counting spikes

In the previous part of the tutorial we looked at the following:

- Importing the Brian module into Python
- Using quantities with units
- Defining a neuron model by its differential equation
- Creating a group of neurons
- Running a network

In this part, we move on to looking at the output of the network.

The first part of the code is the same.

```
from brian import *

tau = 20 * msecond   # membrane time constant
Vt = -50 * mvolt     # spike threshold
Vr = -60 * mvolt     # reset value
El = -60 * mvolt     # resting potential (same as the reset)

G = NeuronGroup(N=40, model='dV/dt = -(V-El)/tau : volt',
                threshold=Vt, reset=Vr)
```

###### Counting spikes

Now we would like to have some idea of what this network is doing. In Brian, we use monitors to keep track of the behaviour of the network during the simulation. The simplest monitor of all is the `SpikeMonitor`, which just records the spikes from a given `NeuronGroup`.

```
M = SpikeMonitor(G)
```

###### Results

Now we run the simulation as before:

```
run(1 * second)
```

And finally, we print out how many spikes there were:

```
print M.nspikes
```

So what’s going on? Why are there 40 spikes? The answer is that the initial value of the membrane potential for every neuron is 0 mV, which is above the threshold potential of -50 mV, so each neuron fires an initial spike at t=0, then resets to -60 mV and stays there, below the threshold potential. In the next part of this tutorial, we’ll make sure there are some more spikes to see.
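You can check this reasoning with a plain-NumPy sketch of the same dynamics (illustrative only, not Brian itself; same parameters, forward Euler with dt = 0.1 ms):

```python
import numpy as np

# Plain-NumPy sketch of the Tutorial 1b network (not Brian itself).
dt, tau = 0.1e-3, 20e-3                 # timestep, time constant (s)
Vt, Vr, El = -50e-3, -60e-3, -60e-3     # threshold, reset, rest (V)
V = np.zeros(40)                        # all neurons start at 0 V

nspikes = 0
for _ in range(int(1.0 / dt)):          # 1 second of simulated time
    V += dt * (El - V) / tau            # forward-Euler step
    spiked = V >= Vt                    # neurons above threshold
    nspikes += int(spiked.sum())
    V[spiked] = Vr                      # reset neurons that spiked
print(nspikes)                          # → 40: one initial spike each
```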

##### Tutorial 1d: Introducing randomness

In the previous part of the tutorial, all the neurons start at the same values and proceed deterministically, so they all spike at exactly the same times. In this part, we introduce some randomness by initialising all the membrane potentials to uniform random values between the reset and threshold values.

We start as before:

```
from brian import *

tau = 20 * msecond   # membrane time constant
Vt = -50 * mvolt     # spike threshold
Vr = -60 * mvolt     # reset value
El = -49 * mvolt     # resting potential (above threshold)

G = NeuronGroup(N=40, model='dV/dt = -(V-El)/tau : volt',
                threshold=Vt, reset=Vr)
M = SpikeMonitor(G)
```

But before we run the simulation, we set the values of the membrane potentials directly. The notation `G.V` refers to the array of values for the variable `V` in group `G`. In our case, this is an array of length 40. We set its values by generating an array of random numbers using Brian’s `rand` function: `rand(N)` generates an array of length `N` consisting of uniformly distributed random numbers in the interval [0, 1].

```
G.V = Vr + rand(40) * (Vt - Vr)
```

And now we run the simulation as before.

```
run(1 * second)
print M.nspikes
```

But this time we get a varying number of spikes each time we run it, roughly between 800 and 850 spikes. In the next part of this tutorial, we introduce a bit more interest into this network by connecting the neurons together.

##### Tutorial 1c: Making some activity

In the previous part of the tutorial we found that each neuron was producing only one spike. In this part, we alter the model so that some more spikes will be generated. What we’ll do is alter the resting potential `El` so that it is above threshold, which will ensure that some spikes are generated. The first few lines remain the same:

```
from brian import *

tau = 20 * msecond   # membrane time constant
Vt = -50 * mvolt     # spike threshold
Vr = -60 * mvolt     # reset value
```

But we change the resting potential to -49 mV, just above the spike threshold:

```
El = -49 * mvolt     # resting potential (above threshold)
```

And then continue as before:

```
G = NeuronGroup(N=40, model='dV/dt = -(V-El)/tau : volt',
                threshold=Vt, reset=Vr)
M = SpikeMonitor(G)
run(1 * second)
print M.nspikes
```

Running this program gives the output `840`. That’s because every neuron starts at the same initial value and proceeds deterministically, so each neuron fires at exactly the same times, 21 times in total during the 1 s of the run.

In the next part, we’ll introduce a random element into the behaviour of the network.

###### Exercises

- Try varying the parameters and seeing how the number of spikes generated varies.
- Solve the differential equation by hand and compute a formula for the number of spikes generated. Compare this with the program output and thereby partially verify it. (Hint: each neuron starts at above the threshold and so fires a spike immediately.)

###### Solution

Solving the differential equation gives:

```
V(t) = El + (Vr - El) exp(-t/tau)
```

Setting V(t) = Vt and solving for t gives:

```
t = tau log((Vr - El) / (Vt - El))
```

If the simulator runs for time T, and a spike is fired immediately at the beginning of the run, it will then generate n spikes, where:

```
n = floor(T/t) + 1
```

If you have m neurons all doing the same thing, you get nm spikes. This calculation with the parameters above gives:

```
t = 48.0 ms
n = 21
nm = 840
```

As predicted.
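This calculation can be reproduced in a few lines of plain Python (using the parameter values from the tutorial):

```python
import math

tau = 20e-3      # membrane time constant (s)
Vt = -50e-3      # spike threshold (V)
Vr = -60e-3      # reset value (V)
El = -49e-3      # resting potential (V)
T = 1.0          # duration of the run (s)
m = 40           # number of neurons

t = tau * math.log((Vr - El) / (Vt - El))   # inter-spike interval
n = int(T // t) + 1                         # spikes per neuron
print(round(t * 1000, 1), n, n * m)         # → 48.0 21 840
```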

##### Tutorial 1e: Connecting neurons

In the previous parts of this tutorial, the neurons are still all unconnected. We add in connections here. The model we use is that when neuron i is connected to neuron j, and neuron i fires a spike, the membrane potential of neuron j is instantaneously increased by a value `psp`. We start as before:

```
from brian import *

tau = 20 * msecond   # membrane time constant
Vt = -50 * mvolt     # spike threshold
Vr = -60 * mvolt     # reset value
El = -49 * mvolt     # resting potential (above threshold)
```

Now we include a new parameter, the PSP size:

```
psp = 0.5 * mvolt    # postsynaptic potential size
```

And continue as before:

```
G = NeuronGroup(N=40, model='dV/dt = -(V-El)/tau : volt',
                threshold=Vt, reset=Vr)
```

###### Connections

We now proceed to connect these neurons. Firstly, we declare that there is a connection from neurons in `G` to neurons in `G`. For the moment, this is just something that is necessary to do; the reason for doing it this way will become clear in the next tutorial.

```
C = Connection(G, G)
```

Now the interesting part: we make these neurons randomly connected with probability 0.1 and weight `psp`. Each neuron i in `G` will be connected to each neuron j in `G` with probability 0.1. The weight of the connection is the amount that is added to the membrane potential of the target neuron when the source neuron fires a spike.

```
C.connect_random(sparseness=0.1, weight=psp)
```

These two previous lines could be done in one line:

```
C = Connection(G,G,sparseness=0.1,weight=psp)
```

Now we continue as before:

```
M = SpikeMonitor(G)
G.V = Vr + rand(40) * (Vt - Vr)
run(1 * second)
print M.nspikes
```

You can see that the number of spikes has jumped from around 800-850 to around 1000-1200. In the next part of the tutorial, we’ll look at a way to plot the output of the network.

###### Exercise

Try varying the parameter `psp` and see what happens. How large can you make the number of spikes output by the network? Why?

###### Solution

The logical maximum number of firings is 400,000 = 40 * 1000 / 0.1: the number of neurons in the network, times the time it runs for in ms, divided by the integration step size in ms (you cannot have more than one spike per neuron per step).

In fact, the number of firings is bounded above by 200,000. The reason for this is that the network updates in the following way:

- Integration step
- Find neurons above threshold
- Propagate spikes
- Reset neurons which spiked

You can see then that if neuron i has spiked at time t, it will not spike at time t+dt, even if it receives spikes from another neuron. Those incoming spikes are added at step 3 at time t, then the neuron is reset to `Vr` at step 4 of time t, and the thresholding at time t+dt is applied at step 2, before it has received any subsequent inputs. So the most a neuron can spike is every other time step.
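The effect of this ordering can be seen in a toy re-implementation of the loop above (a sketch, not Brian’s code): even a neuron driven above threshold at every opportunity fires only on alternate steps, because the reset at step 4 discards the input delivered at step 3 of the same timestep.

```python
# Toy demonstration of the update ordering (not Brian's actual code).
Vt, Vr = -50e-3, -60e-3
V = 0.0                       # start above threshold
fired = []

for step in range(6):
    # 1. integration step (omitted: the synaptic drive dominates here)
    # 2. find neurons above threshold
    spiked = V >= Vt
    fired.append(spiked)
    # 3. propagate spikes: a huge input arrives every timestep
    V += 100e-3
    # 4. reset neurons which spiked (after the inputs were added)
    if spiked:
        V = Vr

print(fired)   # → [True, False, True, False, True, False]
```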

##### Tutorial 1f: Recording spikes

In the previous part of the tutorial, we defined a network with not entirely trivial behaviour, and printed the number of spikes. In this part, we’ll record every spike that the network generates and display a raster plot of them. We start as before:

```
from brian import *

tau = 20 * msecond   # membrane time constant
Vt = -50 * mvolt     # spike threshold
Vr = -60 * mvolt     # reset value
El = -49 * mvolt     # resting potential (above threshold)
psp = 0.5 * mvolt    # postsynaptic potential size

G = NeuronGroup(N=40, model='dV/dt = -(V-El)/tau : volt',
                threshold=Vt, reset=Vr)
C = Connection(G, G)
C.connect_random(sparseness=0.1, weight=psp)
M = SpikeMonitor(G)
G.V = Vr + rand(40) * (Vt - Vr)
run(1 * second)
print M.nspikes
```

Having run the network, we simply use the `raster_plot()` function provided by Brian. After creating plots, we have to use the `show()` function to display them. This function is from the PyLab module that Brian uses for its built-in plotting routines.

```
raster_plot()
show()
```

As you can see, despite having introduced some randomness into our network, the output is very regular indeed. In the next part we introduce one more way to plot the output of a network.

#### Tutorial 2: Connections

In this tutorial, we will cover in more detail the concept of a `Connection` in Brian.


##### Tutorial 2a: The concept of a Connection

###### The network

In this first part, we’ll build a network consisting of three neurons. The first two neurons will be under direct control and have no equations defining them, they’ll just produce spikes which will feed into the third neuron. This third neuron has two different state variables, called Va and Vb. The first two neurons will be connected to the third neuron, but a spike arriving at the third neuron will be treated differently according to whether it came from the first or second neuron (which you can consider as meaning that the first two neurons have different types of synapses on to the third neuron).

The program starts as follows.

```
from brian import *

tau_a = 1 * ms
tau_b = 10 * ms
Vt = 10 * mV
Vr = 0 * mV
```

###### Differential equations

This time, we will have multiple differential equations. We will use the `Equations` object, although you could equally pass the multi-line string defining the differential equations directly when initialising the `NeuronGroup` object (see the next part of the tutorial for an example of this).

```
eqs = Equations('''
dVa/dt = -Va/tau_a : volt
dVb/dt = -Vb/tau_b : volt
''')
```

So far, we have defined a model neuron with two state variables, `Va` and `Vb`, which both decay exponentially towards 0, but with different time constants `tau_a` and `tau_b`. This is just so that you can see the difference between them more clearly in the plot later on.

###### SpikeGeneratorGroup

Now we introduce the `SpikeGeneratorGroup` class. This is a group of neurons without a model, which just produces spikes at the times that you specify. You create a group like this by writing:

```
G = SpikeGeneratorGroup(N,spiketimes)
```

where `N` is the number of neurons in the group, and `spiketimes` is a list of pairs `(i,t)` indicating that neuron `i` should fire at time `t`. In fact, `spiketimes` can be any iterable container or generator, but we don’t cover that here (see the detailed documentation for `SpikeGeneratorGroup`).

In our case, we want to create a group with two neurons, the first of which (neuron 0) fires at times 1 ms and 4 ms, and the second of which (neuron 1) fires at times 2 ms and 3 ms. The list of `spiketimes` then is:

```
spiketimes = [(0, 1 * ms), (0, 4 * ms), (1, 2 * ms), (1, 3 * ms)]
```

and we create the group as follows:

```
G1 = SpikeGeneratorGroup(2, spiketimes)
```

Now we create a second group, with one neuron, according to the model we defined earlier.

```
G2 = NeuronGroup(N=1, model=eqs, threshold=Vt, reset=Vr)
```

###### Connections

In Brian, a `Connection` from one `NeuronGroup` to another is defined by writing:

```
C = Connection(G,H,state)
```

Here `G` is the source group, `H` is the target group, and `state` is the name of the target state variable. When a neuron `i` in `G` fires, Brian finds all the neurons `j` in `H` that `i` is connected to, and adds the amount `C[i,j]` to the specified state variable of neuron `j` in `H`. Here `C[i,j]` is the (i,j)th entry of the connection matrix of `C` (which is initially all zero).
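The propagation rule can be sketched with a dense weight matrix (illustrative only; Brian’s connection matrices are sparse, and the group sizes and weights here are made up):

```python
import numpy as np

# Sketch of Connection propagation (not Brian's sparse implementation).
# C[i, j] is the amount added to target neuron j when source neuron i fires.
C = np.zeros((2, 3))          # 2 source neurons, 3 target neurons
C[0, 1] = 6e-3                # source 0 -> target 1, weight 6 mV
C[1, 2] = 3e-3                # source 1 -> target 2, weight 3 mV

V_target = np.zeros(3)        # target state variable
spiked = [0]                  # source neuron 0 fired this timestep
for i in spiked:
    V_target += C[i]          # add row i to the target state
print(V_target)               # → [0.    0.006 0.   ]
```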

To start with, we create two connections from the group of two directly controlled neurons to the group of one neuron with the differential equations. The first connection has the target state `Va` and the second has the target state `Vb`.

```
C1 = Connection(G1, G2, 'Va')
C2 = Connection(G1, G2, 'Vb')
```

So far, this only declares our intention to connect neurons in group `G1` to neurons in group `G2`, because the connection matrix is initially all zeros. Now, with connection `C1` we connect neuron 0 in group `G1` to neuron 0 in group `G2`, with weight 6 mV. This means that when neuron 0 in group `G1` fires, the state variable `Va` of the neuron in group `G2` will be increased by 6 mV. Then we use connection `C2` to connect neuron 1 in group `G1` to neuron 0 in group `G2`, this time with weight 3 mV.

```
C1[0, 0] = 6 * mV
C2[1, 0] = 3 * mV
```

The net effect of this is that when neuron 0 of `G1` fires, `Va` for the neuron in `G2` will increase by 6 mV, and when neuron 1 of `G1` fires, `Vb` for the neuron in `G2` will increase by 3 mV.

Now we set up monitors to record the activity of the network, run it and plot it.

```
Ma = StateMonitor(G2, 'Va', record=True)
Mb = StateMonitor(G2, 'Vb', record=True)
run(10 * ms)
plot(Ma.times, Ma[0])
plot(Mb.times, Mb[0])
show()
```

The two plots show the state variables `Va` and `Vb` for the single neuron in group `G2`. `Va` is shown in blue, and `Vb` in green. According to the differential equations, `Va` decays much faster than `Vb` (time constant 1 ms rather than 10 ms), but we have set it up (through the connection strengths) so that an incoming spike from neuron 0 of `G1` causes a large increase of 6 mV to `Va`, whereas a spike from neuron 1 of `G1` causes a smaller increase of 3 mV to `Vb`. The value of `Va` then jumps at times 1 ms and 4 ms, when we defined neuron 0 of `G1` to fire, and decays almost back to rest in between. The value of `Vb` jumps at times 2 ms and 3 ms, and because the times are closer together and the time constant is longer, the two jumps add together.

In the next part of this tutorial, we’ll see how to use this system to do something useful.

###### Exercises

- Try playing with the parameters `tau_a`, `tau_b` and the connection strengths, `C1[0,0]` and `C2[1,0]`. Try changing the list of spike times.
- In this part of the tutorial, the states `Va` and `Vb` are independent of one another. Try rewriting the differential equations so that they’re not independent and play around with that.
- Write a network with inhibitory and excitatory neurons. Hint: you only need one connection.
- Write a network with inhibitory and excitatory neurons whose actions have different time constants (for example, excitatory neurons have a slower effect than inhibitory ones).

###### Solutions¶

- Simply write `C[i,j] = -3*mV` to make the connection from neuron i to neuron j inhibitory.
- See the next part of this tutorial.

##### Tutorial 2b: Excitatory and inhibitory currents¶

In this tutorial, we use multiple connections to solve a real problem: how to implement two types of synapses, with excitatory and inhibitory currents that have different time constants.

###### The scheme¶

The scheme we implement is given by the following differential equations:

```
taum dV/dt = -V + ge - gi
taue dge/dt = -ge
taui dgi/dt = -gi
```

An excitatory neuron connects to state `ge`, and an inhibitory neuron connects to state `gi`. When an excitatory spike arrives, `ge` instantaneously increases and then decays exponentially. Consequently, `V` first rises and then falls back. Solving these equations with V(0) = 0, ge(0) = g0 (corresponding to an excitatory spike arriving at time 0) and gi(0) = 0 gives:

```
gi = 0
ge = g0 exp(-t/taue)
V = (exp(-t/taum) - exp(-t/taue)) taue g0 / (taum - taue)
```

We use a very short time constant for the excitatory currents, a longer one for the inhibitory currents, and an even longer one for the membrane potential.
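Before running this in Brian, the closed-form solution can be checked directly with NumPy (a sketch, not part of the tutorial code): substituting it into `taum dV/dt = -V + ge` should leave a residual of essentially zero.

```python
import numpy as np

# Numerical check (plain NumPy, not Brian) that the closed-form solution
# satisfies taum*dV/dt = -V + ge with ge = g0*exp(-t/taue).
# taum = 20 ms and taue = 1 ms as in the tutorial; g0 = 3 mV is the jump.
taum, taue = 20e-3, 1e-3            # seconds
g0 = 3e-3                           # volts
t = np.linspace(0, 50e-3, 5001)
ge = g0 * np.exp(-t / taue)
V = (np.exp(-t / taum) - np.exp(-t / taue)) * taue * g0 / (taum - taue)
dVdt = np.gradient(V, t)            # numerical derivative of V
residual = taum * dVdt - (-V + ge)  # vanishes if the solution is correct
print(np.abs(residual[1:-1]).max())
```

The residual away from the array endpoints is limited only by the accuracy of the finite-difference derivative.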

```
from brian import *

taum = 20 * ms
taue = 1 * ms
taui = 10 * ms
Vt = 10 * mV
Vr = 0 * mV
eqs = Equations('''
dV/dt = (-V+ge-gi)/taum : volt
dge/dt = -ge/taue : volt
dgi/dt = -gi/taui : volt
''')
```

###### Connections¶

As before, we’ll have a group of two neurons under direct control, the first of which will be excitatory this time, and the second will be inhibitory. To demonstrate the effect, we’ll have two excitatory spikes reasonably close together, followed by an inhibitory spike later on, and then shortly after that two excitatory spikes close together.

```
spiketimes = [(0, 1 * ms), (0, 10 * ms), (1, 40 * ms), (0, 50 * ms), (0, 55 * ms)]
G1 = SpikeGeneratorGroup(2, spiketimes)
G2 = NeuronGroup(N=1, model=eqs, threshold=Vt, reset=Vr)
C1 = Connection(G1, G2, 'ge')
C2 = Connection(G1, G2, 'gi')
```

The weights are the same: when we increase `ge` the effect on `V` is excitatory, and when we increase `gi` the effect on `V` is inhibitory.

```
C1[0, 0] = 3 * mV
C2[1, 0] = 3 * mV
```

We set up monitors and run as normal.

```
Mv = StateMonitor(G2, 'V', record=True)
Mge = StateMonitor(G2, 'ge', record=True)
Mgi = StateMonitor(G2, 'gi', record=True)
run(100 * ms)
```

This time we do something a little bit different when plotting. We want a figure with two subplots: the top one shows `V`, and the bottom one shows both `ge` and `gi`. We use the `subplot` command from PyLab, which mimics the same command from Matlab.

```
figure()
subplot(211)
plot(Mv.times, Mv[0])
subplot(212)
plot(Mge.times, Mge[0])
plot(Mgi.times, Mgi[0])
show()
```

The top figure shows the voltage trace, and the bottom figure shows `ge` in blue and `gi` in green. You can see that although the inhibitory and excitatory weights are the same, the inhibitory current is much more powerful. This is because the effect of `ge` or `gi` on `V` is related to the integral of the differential equation for those variables, and `gi` decays much more slowly than `ge`. Thus the size of the negative deflection at 40 ms is much bigger than the excitatory ones, and even the double excitatory spike after the inhibitory one can't cancel it out.
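The integral argument can be made concrete with a short calculation (plain NumPy, not Brian): the integral of `g0*exp(-t/tau)` from 0 to infinity is `g0*tau`, so with equal weights the inhibitory trace (time constant 10 ms) has ten times the integrated effect of the excitatory one (time constant 1 ms).

```python
import numpy as np

# Integrate each synaptic trace numerically and compare with g0*tau.
g0 = 3e-3                      # 3 mV jump, as in the tutorial
taue, taui = 1e-3, 10e-3       # excitatory and inhibitory time constants
t = np.linspace(0, 0.2, 200001)
dt = t[1] - t[0]
int_ge = np.sum(g0 * np.exp(-t / taue)) * dt   # ~ g0*taue
int_gi = np.sum(g0 * np.exp(-t / taui)) * dt   # ~ g0*taui
print(int_gi / int_ge)         # close to taui/taue = 10
```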

In the next part of this tutorial, we set up our first serious network, with 4000 neurons, excitatory and inhibitory.

###### Exercises¶

- Try changing the parameters and spike times to get a feel for how it works.
- Try an equivalent implementation with the equation taum dV/dt = -V+ge+gi
- Verify that the differential equation has been solved correctly.

###### Solutions¶

Solution for 2:

Simply use the line `C2[1,0] = -3*mV` to get the same effect.

Solution for 3:

First, set up the situation we described at the top for which we already know the solution of the differential equations, by changing the spike times as follows:

```
spiketimes = [(0,0*ms)]
```

Now we compute what the values ought to be as follows:

```
t = Mv.times
Vpredicted = (exp(-t/taum) - exp(-t/taue))*taue*(3*mV) / (taum-taue)
```

Now we can compute the difference between the predicted and actual values:

```
Vdiff = abs(Vpredicted - Mv[0])
```

This should be zero:

```
print max(Vdiff)
```

Sure enough, it’s as close as you can expect on a computer. When I run this it gives me the value 1.3 aV, which is 1.3 * 10^-18 volts, i.e. effectively zero given the finite precision of the calculations involved.

##### Tutorial 2c: The CUBA network¶

In this part of the tutorial, we set up our first serious network that actually does something. It implements the CUBA network, Benchmark 2 from:

Simulation of networks of spiking neurons: A review of tools and strategies (2007). Brette, Rudolph, Carnevale, Hines, Beeman, Bower, Diesmann, Goodman, Harris, Zirpe, Natschlager, Pecevski, Ermentrout, Djurfeldt, Lansner, Rochel, Vibert, Alvarez, Muller, Davison, El Boustani and Destexhe. Journal of Computational Neuroscience 23(3):349-98

This is a network of 4000 neurons, of which 3200 are excitatory and 800 inhibitory, with exponential synaptic currents. The neurons are randomly connected with probability 0.02.
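As a sanity check on the size of such a network (a sketch in plain NumPy, not Brian): with connection probability 0.02, each of the 4000 source neurons makes about 80 synapses on average, roughly 320,000 in total.

```python
import numpy as np

# Draw the number of outgoing synapses for each of the 4000 neurons from a
# binomial distribution with 4000 possible targets and probability 0.02.
rng = np.random.default_rng(0)
counts = rng.binomial(4000, 0.02, size=4000)
print(counts.mean())   # about 80 synapses per neuron
print(counts.sum())    # about 320,000 synapses in total
```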

```
from brian import *

taum = 20 * ms # membrane time constant
taue = 5 * ms # excitatory synaptic time constant
taui = 10 * ms # inhibitory synaptic time constant
Vt = -50 * mV # spike threshold
Vr = -60 * mV # reset value
El = -49 * mV # resting potential
we = (60 * 0.27 / 10) * mV # excitatory synaptic weight
wi = (20 * 4.5 / 10) * mV # inhibitory synaptic weight
eqs = Equations('''
dV/dt = (ge-gi-(V-El))/taum : volt
dge/dt = -ge/taue : volt
dgi/dt = -gi/taui : volt
''')
```

So far, this has been pretty similar to the previous part; the only difference is that we have a couple more parameters, and we've added a resting potential `El` into the equation for `V`.

Now we make lots of neurons:

```
G = NeuronGroup(4000, model=eqs, threshold=Vt, reset=Vr)
```

Next, we divide them into subgroups. The `subgroup()` method of a `NeuronGroup` returns a new `NeuronGroup` that can be used in exactly the same way as its parent group. At the moment, the subgrouping mechanism can only be used to create contiguous groups of neurons (so you can't have a subgroup consisting of neurons 0-100 and also 200-300, say). We designate the first 3200 neurons as `Ge` and the second 800 as `Gi`; these will be the excitatory and inhibitory neurons.

```
Ge = G.subgroup(3200) # Excitatory neurons
Gi = G.subgroup(800) # Inhibitory neurons
```
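The contiguous bookkeeping described above can be pictured as handing out consecutive index ranges. This is only an illustrative sketch in plain Python (the `Group` class here is hypothetical, not Brian's implementation):

```python
# Hypothetical sketch of contiguous subgrouping: each subgroup is just the
# next unused run of indices in the parent group.
class Group:
    def __init__(self, n):
        self.n = n
        self._next = 0                   # first index not yet handed out
    def subgroup(self, k):
        start = self._next
        self._next += k
        assert self._next <= self.n, "not enough neurons left"
        return range(start, start + k)   # a contiguous block of indices

G = Group(4000)
Ge = G.subgroup(3200)   # indices 0..3199
Gi = G.subgroup(800)    # indices 3200..3999
print(Ge[0], Ge[-1], Gi[0], Gi[-1])   # 0 3199 3200 3999
```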

Now we define the connections. As in the previous part of the tutorial, `ge` is the excitatory current and `gi` is the inhibitory one. `Ce` says that an excitatory neuron can synapse onto any neuron in `G`, be it excitatory or inhibitory, and similarly for inhibitory neurons. We randomly connect `Ge` and `Gi` to the whole of `G` with probability 0.02 and the weights given in the list of parameters at the top.

```
Ce = Connection(Ge, G, 'ge', sparseness=0.02, weight=we)
Ci = Connection(Gi, G, 'gi', sparseness=0.02, weight=wi)
```

Set up some monitors as usual. The keyword `record=0` in the `StateMonitor` declarations indicates that we only want to record the activity of neuron 0. This saves time and memory.

```
M = SpikeMonitor(G)
MV = StateMonitor(G, 'V', record=0)
Mge = StateMonitor(G, 'ge', record=0)
Mgi = StateMonitor(G, 'gi', record=0)
```

And in order to start the network off in a somewhat more realistic state, we initialise the membrane potentials uniformly randomly between the reset and the threshold.

```
G.V = Vr + (Vt - Vr) * rand(len(G))
```

Now we run.

```
run(500 * ms)
```

And finally we plot the results. Just for fun, we do a rather more complicated plot than we've been doing so far, with three subplots. The upper one is the raster plot of the whole network, and the lower two are the values of `V` (on the left) and of `ge` and `gi` (on the right) for the neuron we recorded from. See the PyLab documentation for an explanation of the plotting functions, but note that the keyword `newfigure=False` instructs the (Brian) function `raster_plot()` not to create a new figure (so that it can be placed as a subplot of a larger figure).

```
subplot(211)
raster_plot(M, title='The CUBA network', newfigure=False)
subplot(223)
plot(MV.times / ms, MV[0] / mV)
xlabel('Time (ms)')
ylabel('V (mV)')
subplot(224)
plot(Mge.times / ms, Mge[0] / mV)
plot(Mgi.times / ms, Mgi[0] / mV)
xlabel('Time (ms)')
ylabel('ge and gi (mV)')
legend(('ge', 'gi'), 'upper right')
show()
```

### Examples¶

These examples cover some basic topics in writing Brian scripts in Python. The complete source code for the examples is available in the examples folder in the extras package.

#### electrophysiology¶

##### Example: voltageclamp (electrophysiology)¶

Voltage-clamp experiment

```
from brian import *
from brian.library.electrophysiology import *
defaultclock.dt = .01 * ms
taum = 20 * ms
gl = 20 * nS
Cm = taum * gl
Re = 50 * Mohm
Ce = 0.2 * ms / Re
N = 1
Rs = .9 * Re
tauc = Rs * Ce # critical tau_u
eqs = Equations('''
dvm/dt=(-gl*vm+i_inj)/Cm : volt
''')
eqs += electrode(.2 * Re, Ce)
eqs += voltage_clamp(vm='v_el', v_cmd=20 * mV, i_inj='i_cmd', i_rec='ic',
                     Re=.8 * Re, Rs=.9 * Re, tau_u=.2 * ms)
setup = NeuronGroup(N, model=eqs)
setup.v = 0 * mV
recording = StateMonitor(setup, 'ic', record=True)
soma = StateMonitor(setup, 'vm', record=True)
run(200 * ms)
figure()
plot(recording.times / ms, recording[0] / nA, 'k')
figure()
plot(soma.times / ms, soma[0] / mV, 'b')
show()
```

##### Example: compensation_ex3_quality (electrophysiology)¶

Example of quality check method. Requires binary files “current.npy” and “rawtrace.npy”.

Rossant et al., “A calibration-free electrode compensation method” J. Neurophysiol 2012

```
import os
from brian import *
import numpy as np
from brian.library.electrophysiology import *
working_dir = os.path.dirname(__file__)
# load data
dt = 0.1*ms
current = np.load(os.path.join(working_dir, "current.npy")) # 10000-long vector, 1s duration
rawtrace = np.load(os.path.join(working_dir, "trace.npy")) # 10000-long vector, 1s duration
compensatedtrace = np.load(os.path.join(working_dir, "compensatedtrace.npy")) # obtained with example1
t = linspace(0., 1., len(current))
# get trace quality of both raw and compensated traces
r = get_trace_quality(rawtrace, current, full=True)
rcomp = get_trace_quality(compensatedtrace, current, full=True)
spikes = r["spikes"]
print "Quality coefficient for raw: %.3f and for compensated trace: %.3f" % \
(r["correlation"], rcomp["correlation"])
# plot trace and spikes
plot(t, rawtrace, 'k')
plot(t, compensatedtrace, 'g')
plot(t[spikes], rawtrace[spikes], 'ok')
plot(t[spikes], compensatedtrace[spikes], 'og')
show()
```

##### Example: compensation_ex2_spikes (electrophysiology)¶

Example of spike detection method. Requires binary files “current.npy” and “rawtrace.npy”.

Rossant et al., “A calibration-free electrode compensation method” J. Neurophysiol 2012

```
import os
from brian import *
import numpy as np
from brian.library.electrophysiology import *
working_dir = os.path.dirname(__file__)
# load data
dt = 0.1*ms
current = np.load(os.path.join(working_dir, "current.npy")) # 10000-long vector, 1s duration
rawtrace = np.load(os.path.join(working_dir, "trace.npy")) # 10000-long vector, 1s duration
t = linspace(0., 1., len(current))
# find spikes and compute score
spikes, scores = find_spikes(rawtrace, dt=dt, check_quality=True)
# plot trace and spikes
plot(t, rawtrace, 'k')
plot(t[spikes], rawtrace[spikes], 'or')
show()
```

##### Example: threshold_analysis (electrophysiology)¶

Analysis of spike threshold.

Loads a current-clamp voltage trace, compensates it (removes the electrode voltage) and analyses the spikes.

```
from brian import *
from brian.library.electrophysiology import *
import numpy
dt=.1*ms
Vraw = numpy.load("trace.npy") # Raw current clamp trace
I = numpy.load("current.npy")
V, _ = Lp_compensate(I, Vraw, dt) # Electrode compensation
# Peaks
spike_criterion=find_spike_criterion(V)
print "Spike detected when V exceeds",float(spike_criterion/mV),"mV"
peaks=spike_peaks(V,vc=spike_criterion) # vc is optional
# Onsets (= spike threshold)
onsets=spike_onsets(V,criterion=3*dt,vc=spike_criterion) # Criterion: dV/dt>3 V/s
# Spike-triggered average of V
STA=spike_shape(V, onsets=onsets, before=100, after=100)
print "Spike duration:",float(spike_duration(V,onsets=onsets)*dt/ms),"ms"
print "Reset potential:",float(reset_potential(V,peaks=peaks)/mV),"mV"
# Spike threshold statistics
slope=slope_threshold(V,onsets=onsets,T=int(5*ms/dt))
# Subthreshold trace
subthreshold=-spike_mask(V)
t=arange(len(V))*dt
subplot(221)
plot(t/ms,V/mV,'k')
plot(t[peaks]/ms,V[peaks]/mV,".b")
plot(t[onsets]/ms,V[onsets]/mV,".r")
subplot(222)
plot(((arange(len(STA))-100)*dt)/ms,STA/mV,'k')
subplot(223)
plot(t[subthreshold]/ms,V[subthreshold]/mV,'k')
subplot(224)
plot(slope/ms,V[onsets]/mV,'.')
show()
```

##### Example: AEC (electrophysiology)¶

AEC experiment (current-clamp)

```
from brian import *
from brian.library.electrophysiology import *
from time import time
myclock = Clock(dt=.1 * ms)
clock_rec = Clock(dt=.1 * ms)
#log_level_debug()
taum = 20 * ms
gl = 20 * nS
Cm = taum * gl
Re = 50 * Mohm
Ce = 0.1 * ms / Re
eqs = Equations('''
dvm/dt=(-gl*vm+i_inj)/Cm : volt
I:amp
''')
eqs += electrode(.6 * Re, Ce)
eqs += current_clamp(vm='v_el', i_inj='i_cmd', i_cmd='I', Re=.4 * Re, Ce=Ce)
setup = NeuronGroup(1, model=eqs, clock=myclock)
board = AEC(setup, 'v_rec', 'I', clock_rec)
recording = StateMonitor(board, 'record', record=True, clock=myclock)
soma = StateMonitor(setup, 'vm', record=True, clock=myclock)
run(50 * ms)
board.command = .5 * nA
run(200 * ms)
board.command = 0 * nA
run(150 * ms)
board.start_injection()
t1 = time()
run(1 * second)
t2 = time()
print 'Duration:', t2 - t1, 's'
board.stop_injection()
run(100 * ms)
board.estimate()
print 'Re=', sum(board.Ke) * ohm
board.switch_on()
run(50 * ms)
board.command = .5 * nA
run(200 * ms)
board.command = 0 * nA
run(150 * ms)
board.switch_off()
figure()
plot(recording.times / ms, recording[0] / mV, 'b')
plot(soma.times / ms, soma[0] / mV, 'r')
figure()
plot(board.Ke)
show()
```

##### Example: SEVC (electrophysiology)¶

Voltage-clamp experiment (SEVC)

```
from brian import *
from brian.library.electrophysiology import *
defaultclock.dt = .01 * ms
taum = 20 * ms # membrane time constant
gl = 1. / (50 * Mohm) # leak conductance
Cm = taum * gl # membrane capacitance
Re = 50 * Mohm # electrode resistance
Ce = 0.1 * ms / Re # electrode capacitance
eqs = Equations('''
dvm/dt=(-gl*vm+i_inj)/Cm : volt
I:amp
''')
eqs += current_clamp(i_cmd='I', Re=Re, Ce=Ce)
setup = NeuronGroup(1, model=eqs)
ampli = SEVC(setup, 'v_rec', 'I', 1 * kHz, gain=250 * nS, gain2=50 * nS / ms)
recording = StateMonitor(ampli, 'record', record=True)
soma = StateMonitor(setup, 'vm', record=True)
ampli.command = 20 * mV
run(200 * ms)
figure()
plot(recording.times / ms, recording[0] / nA, 'k')
figure()
plot(soma.times / ms, soma[0] / mV, 'b')
show()
```

##### Example: bridge (electrophysiology)¶

Bridge experiment (current-clamp)

```
from brian import *
from brian.library.electrophysiology import *
defaultclock.dt = .01 * ms
#log_level_debug()
taum = 20 * ms
gl = 20 * nS
Cm = taum * gl
Re = 50 * Mohm
Ce = 0.5 * ms / Re
N = 10
eqs = Equations('''
dvm/dt=(-gl*vm+i_inj)/Cm : volt
#Rbridge:ohm
CC:farad
I:amp
''')
eqs += electrode(.6 * Re, Ce)
#eqs+=current_clamp(vm='v_el',i_inj='i_cmd',i_cmd='I',Re=.4*Re,Ce=Ce,
# bridge='Rbridge')
eqs += current_clamp(vm='v_el', i_inj='i_cmd', i_cmd='I', Re=.4 * Re, Ce=Ce,
bridge=Re, capa_comp='CC')
setup = NeuronGroup(N, model=eqs)
setup.I = 0 * nA
setup.v = 0 * mV
#setup.Rbridge=linspace(0*Mohm,60*Mohm,N)
setup.CC = linspace(0 * Ce, Ce, N)
recording = StateMonitor(setup, 'v_rec', record=True)
run(50 * ms)
setup.I = .5 * nA
run(200 * ms)
setup.I = 0 * nA
run(150 * ms)
for i in range(N):
    plot(recording.times / ms + i * 400, recording[i] / mV, 'k')
show()
```

##### Example: DCC (electrophysiology)¶

An example of single-electrode current clamp recording with discontinuous current clamp (using the electrophysiology library).

```
from brian import *
from brian.library.electrophysiology import *
defaultclock.dt = 0.01 * ms
taum = 20 * ms # membrane time constant
gl = 1. / (50 * Mohm) # leak conductance
Cm = taum * gl # membrane capacitance
Re = 50 * Mohm # electrode resistance
Ce = 0.1 * ms / Re # electrode capacitance
eqs = Equations('''
dvm/dt=(-gl*vm+i_inj)/Cm : volt
Rbridge:ohm # bridge resistance
I:amp # command current
''')
eqs += current_clamp(i_cmd='I', Re=Re, Ce=Ce)
setup = NeuronGroup(1, model=eqs)
ampli = DCC(setup, 'v_rec', 'I', 1 * kHz)
soma = StateMonitor(setup, 'vm', record=True)
recording = StateMonitor(setup, 'v_rec', record=True)
DCCrecording = StateMonitor(ampli, 'record', record=True)
# No compensation
run(50 * ms)
ampli.command = .5 * nA
run(100 * ms)
ampli.command = 0 * nA
run(50 * ms)
ampli.set_frequency(2 * kHz)
ampli.command = .5 * nA
run(100 * ms)
ampli.command = 0 * nA
run(50 * ms)
plot(recording.times / ms, recording[0] / mV, 'b')
plot(DCCrecording.times / ms, DCCrecording[0] / mV, 'k')
plot(soma.times / ms, soma[0] / mV, 'r')
show()
```

##### Example: compensation_ex1 (electrophysiology)¶

Example of L^p electrode compensation method. Requires binary files “current.npy” and “rawtrace.npy”.

Rossant et al., “A calibration-free electrode compensation method” J. Neurophysiol 2012

```
import os
from brian import *
import numpy as np
from brian.library.electrophysiology import *
working_dir = os.path.dirname(__file__)
# load data
dt = 0.1*ms
current = np.load(os.path.join(working_dir, "current.npy")) # 10000-long vector, 1s duration
rawtrace = np.load(os.path.join(working_dir, "trace.npy")) # 10000-long vector, 1s duration
t = linspace(0., 1., len(current))
# launch compensation
r = Lp_compensate(current, rawtrace, dt, p=1.0, full=True)
# print best parameters
print "Best parameters: R, tau, Vr, Re, taue:"
print r["params"]
# plot traces
subplot(211)
plot(t, current, 'k')
subplot(212)
plot(t, rawtrace, 'k') # raw trace
plot(t, r["Vfull"], 'b') # full model trace (neuron and electrode)
plot(t, r["Vcompensated"], 'g') # compensated trace
show()
```

#### misc¶

##### Example: cable (misc)¶

Dendrite with 100 compartments

```
from brian import *
from brian.compartments import *
from brian.library.ionic_currents import *
length = 1 * mm
nseg = 100
dx = length / nseg
Cm = 1 * uF / cm ** 2
gl = 0.02 * msiemens / cm ** 2
diam = 1 * um
area = pi * diam * dx
El = 0 * mV
Ri = 100 * ohm * cm
ra = Ri * 4 / (pi * diam ** 2)
print "Time constant =", Cm / gl
print "Space constant =", .5 * (diam / (gl * Ri)) ** .5
segments = {}
for i in range(nseg):
    segments[i] = MembraneEquation(Cm * area) + leak_current(gl * area, El)
segments[0] += Current('I:nA')
cable = Compartments(segments)
for i in range(nseg - 1):
    cable.connect(i, i + 1, ra * dx)
neuron = NeuronGroup(1, model=cable)
#neuron.vm_0=10*mV
neuron.I_0 = .05 * nA
trace = []
for i in range(10):
    trace.append(StateMonitor(neuron, 'vm_' + str(10 * i), record=True))
run(200 * ms)
for i in range(10):
    plot(trace[i].times / ms, trace[i][0] / mV)
show()
```

##### Example: stopping (misc)¶

Network to demonstrate stopping a simulation during a run

We have a fully connected network of integrate-and-fire neurons, with input fed by a group of Poisson neurons whose rate steadily increases. We want to determine the point in time at which the network of integrate-and-fire neurons switches from no firing to all neurons firing, so we define a `network_operation` called `stop_condition` that calls the `stop()` function when the monitored network firing rate rises above a minimum threshold.

```
from brian import *
clk = Clock()
Vr = 0 * mV
El = 0 * mV
Vt = 10 * mV
tau = 10 * ms
weight = 0.2 * mV
duration = 100 * msecond
max_input_rate = 10000 * Hz
num_input_neurons = 1000
input_connection_p = 0.1
rate_per_neuron = max_input_rate / (num_input_neurons * input_connection_p)
P = PoissonGroup(num_input_neurons, lambda t: rate_per_neuron * (t / duration))
G = NeuronGroup(1000, model='dV/dt=-(V-El)/tau : volt', threshold=Vt, reset=Vr)
G.V = Vr + (Vt - Vr) * rand(len(G))
CPG = Connection(P, G, weight=weight, sparseness=input_connection_p)
CGG = Connection(G, G, weight=weight)
MP = PopulationRateMonitor(G, bin=1 * ms)
@network_operation
def stop_condition():
    if MP.rate[-1] * Hz > 10 * Hz:
        stop()
run(duration)
print "Reached population rate>10 Hz by time", clk.t, "+/- 1 ms."
```

##### Example: topographic_map2 (misc)¶

Topographic map - an example of complicated connections. Two layers of neurons. The first layer is connected randomly to the second one in a topographical way. The second layer has random lateral connections. Each neuron has a position x[i].

```
from brian import *
N = 100
tau = 10 * ms
tau_e = 2 * ms # AMPA synapse
eqs = '''
dv/dt=(I-v)/tau : volt
dI/dt=-I/tau_e : volt
'''
rates = zeros(N) * Hz
rates[N / 2 - 10:N / 2 + 10] = ones(20) * 30 * Hz
layer1 = PoissonGroup(N, rates=rates)
layer1.x = linspace(0., 1., len(layer1)) # abstract position between 0 and 1
layer2 = NeuronGroup(N, model=eqs, threshold=10 * mV, reset=0 * mV)
layer2.x = linspace(0., 1., len(layer2))
# Generic connectivity function
topomap = lambda i, j, x, y, sigma: exp(-abs(x[i] - y[j]) / sigma)
feedforward = Connection(layer1, layer2, sparseness=.5,
                         weight=lambda i, j:topomap(i, j, layer1.x, layer2.x, .3) * 3 * mV)
recurrent = Connection(layer2, layer2, sparseness=.5,
                       weight=lambda i, j:topomap(i, j, layer1.x, layer2.x, .2) * .5 * mV)
spikes = SpikeMonitor(layer2)
run(1 * second)
subplot(211)
raster_plot(spikes)
subplot(223)
imshow(feedforward.W.todense(), interpolation='nearest', origin='lower')
title('Feedforward connection strengths')
subplot(224)
imshow(recurrent.W.todense(), interpolation='nearest', origin='lower')
title('Recurrent connection strengths')
show()
```

##### Example: transient_sync (misc)¶

Transient synchronisation in a population of noisy IF neurons with distance-dependent synaptic weights (organised as a ring)

```
from brian import *
tau = 10 * ms
N = 100
v0 = 5 * mV
sigma = 4 * mV
group = NeuronGroup(N, model='dv/dt=(v0-v)/tau + sigma*xi/tau**.5 : volt',
                    threshold=10 * mV, reset=0 * mV)
C = Connection(group, group, 'v', weight=lambda i, j:.4 * mV * cos(2. * pi * (i - j) * 1. / N))
S = SpikeMonitor(group)
R = PopulationRateMonitor(group)
group.v = rand(N) * 10 * mV
run(5000 * ms)
subplot(211)
raster_plot(S)
subplot(223)
imshow(C.W.todense(), interpolation='nearest')
title('Synaptic connections')
subplot(224)
plot(R.times / ms, R.smooth_rate(2 * ms, filter='flat'))
title('Firing rate')
show()
```

##### Example: pulsepacket (misc)¶

This example basically replicates what the Brian PulsePacket object does, and then compares to that object.

```
from brian import *
from random import gauss, shuffle
# Generator for pulse packet
def pulse_packet(t, n, sigma):
    # generate a list of n times with Gaussian distribution, sort them in time,
    # and then randomly assign the neuron numbers to them
    times = [gauss(t, sigma) for i in range(n)]
    times.sort()
    neuron = range(n)
    shuffle(neuron)
    return zip(neuron, times) # returns a list of pairs (i,t)
G1 = SpikeGeneratorGroup(1000, pulse_packet(50 * ms, 1000, 5 * ms))
M1 = SpikeMonitor(G1)
PRM1 = PopulationRateMonitor(G1, bin=1 * ms)
G2 = PulsePacket(50 * ms, 1000, 5 * ms)
M2 = SpikeMonitor(G2)
PRM2 = PopulationRateMonitor(G2, bin=1 * ms)
run(100 * ms)
subplot(221)
raster_plot(M1)
subplot(223)
plot(PRM1.rate)
subplot(222)
raster_plot(M2)
subplot(224)
plot(PRM2.rate)
show()
```

##### Example: mirollo_strogatz (misc)¶

Mirollo-Strogatz network

```
from brian import *
tau = 10 * ms
v0 = 11 * mV
N = 20
w = .1 * mV
group = NeuronGroup(N, model='dv/dt=(v0-v)/tau : volt', threshold=10 * mV, reset=0 * mV)
W = Connection(group, group, 'v', weight=w)
group.v = rand(N) * 10 * mV
S = SpikeMonitor(group)
run(300 * ms)
raster_plot(S)
show()
```

##### Example: CUBA (misc)¶

This is a Brian script implementing a benchmark described in the following review paper:

Simulation of networks of spiking neurons: A review of tools and strategies (2007). Brette, Rudolph, Carnevale, Hines, Beeman, Bower, Diesmann, Goodman, Harris, Zirpe, Natschlager, Pecevski, Ermentrout, Djurfeldt, Lansner, Rochel, Vibert, Alvarez, Muller, Davison, El Boustani and Destexhe. Journal of Computational Neuroscience 23(3):349-98

Benchmark 2: random network of integrate-and-fire neurons with exponential synaptic currents

Clock-driven implementation with exact subthreshold integration (but spike times are aligned to the grid)

###### R. Brette - Oct 2007¶

Brian is a simulator for spiking neural networks written in Python, developed by R. Brette and D. Goodman. http://brian.di.ens.fr

```
from brian import *
import time
start_time = time.time()
taum = 20 * ms
taue = 5 * ms
taui = 10 * ms
Vt = -50 * mV
Vr = -60 * mV
El = -49 * mV
eqs = Equations('''
dv/dt = (ge+gi-(v-El))/taum : volt
dge/dt = -ge/taue : volt
dgi/dt = -gi/taui : volt
''')
P = NeuronGroup(4000, model=eqs, threshold=Vt, reset=Vr, refractory=5 * ms)
P.v = Vr
P.ge = 0 * mV
P.gi = 0 * mV
Pe = P.subgroup(3200)
Pi = P.subgroup(800)
we = (60 * 0.27 / 10) * mV # excitatory synaptic weight (voltage)
wi = (-20 * 4.5 / 10) * mV # inhibitory synaptic weight
Ce = Connection(Pe, P, 'ge', weight=we, sparseness=0.02)
Ci = Connection(Pi, P, 'gi', weight=wi, sparseness=0.02)
P.v = Vr + rand(len(P)) * (Vt - Vr)
# Record the number of spikes
Me = PopulationSpikeCounter(Pe)
Mi = PopulationSpikeCounter(Pi)
# A population rate monitor
M = PopulationRateMonitor(P)
print "Network construction time:", time.time() - start_time, "seconds"
print len(P), "neurons in the network"
print "Simulation running..."
run(1 * msecond)
start_time = time.time()
run(1 * second)
duration = time.time() - start_time
print "Simulation time:", duration, "seconds"
print Me.nspikes, "excitatory spikes"
print Mi.nspikes, "inhibitory spikes"
plot(M.times / ms, M.smooth_rate(2 * ms, 'gaussian'))
show()
```

##### Example: current_clamp (misc)¶

An example of single-electrode current clamp recording with bridge compensation (using the electrophysiology library).

```
from brian import *
from brian.library.electrophysiology import *
taum = 20 * ms # membrane time constant
gl = 1. / (50 * Mohm) # leak conductance
Cm = taum * gl # membrane capacitance
Re = 50 * Mohm # electrode resistance
Ce = 0.5 * ms / Re # electrode capacitance
eqs = Equations('''
dvm/dt=(-gl*vm+i_inj)/Cm : volt
Rbridge:ohm # bridge resistance
I:amp # command current
''')
eqs += current_clamp(i_cmd='I', Re=Re, Ce=Ce, bridge='Rbridge')
setup = NeuronGroup(1, model=eqs)
soma = StateMonitor(setup, 'vm', record=True)
recording = StateMonitor(setup, 'v_rec', record=True)
# No compensation
run(50 * ms)
setup.I = .5 * nA
run(100 * ms)
setup.I = 0 * nA
run(50 * ms)
# Full compensation
setup.Rbridge = Re
run(50 * ms)
setup.I = .5 * nA
run(100 * ms)
setup.I = 0 * nA
run(50 * ms)
plot(recording.times / ms, recording[0] / mV, 'b')
plot(soma.times / ms, soma[0] / mV, 'r')
show()
```

##### Example: phase_locking (misc)¶

Phase locking of IF neurons to a periodic input

```
from brian import *
tau = 20 * ms
N = 100
b = 1.2 # constant current mean, the modulation varies
f = 10 * Hz
eqs = '''
dv/dt=(-v+a*sin(2*pi*f*t)+b)/tau : 1
a : 1
'''
neurons = NeuronGroup(N, model=eqs, threshold=1, reset=0)
neurons.v = rand(N)
neurons.a = linspace(.05, 0.75, N)
S = SpikeMonitor(neurons)
trace = StateMonitor(neurons, 'v', record=50)
run(1000 * ms)
subplot(211)
raster_plot(S)
subplot(212)
plot(trace.times / ms, trace[50])
show()
```

##### Example: rate_model (misc)¶

A rate model

```
from brian import *
N = 50000
tau = 20 * ms
I = 10 * Hz
eqs = '''
dv/dt=(I-v)/tau : Hz # note the unit here: this is the output rate
'''
group = NeuronGroup(N, eqs, threshold=PoissonThreshold())
S = PopulationRateMonitor(group, bin=1 * ms)
run(100 * ms)
plot(S.rate)
show()
```

##### Example: timed_array (misc)¶

An example of the `TimedArray` class used for applying input currents to neurons.

```
from brian import *
N = 5
duration = 100 * ms
Vr = -60 * mV
Vt = -50 * mV
tau = 10 * ms
Rmin = 1 * Mohm
Rmax = 10 * Mohm
freq = 50 * Hz
k = 10 * nA
eqs = '''
dV/dt = (-(V-Vr)+R*I)/tau : volt
R : ohm
I : amp
'''
G = NeuronGroup(N, eqs, reset='V=Vr', threshold='V>Vt')
G.R = linspace(Rmin, Rmax, N)
t = linspace(0 * second, duration, int(duration / defaultclock.dt))
I = clip(k * sin(2 * pi * freq * t), 0, Inf)
G.I = TimedArray(I)
M = MultiStateMonitor(G, record=True)
run(duration)
subplot(211)
M['I'].plot()
ylabel('I (amp)')
subplot(212)
M['V'].plot()
ylabel('V (volt)')
show()
```

##### Example: noisy_ring (misc)¶

Integrate-and-fire neurons with noise

```
from brian import *
tau = 10 * ms
sigma = .5
N = 100
J = -1
mu = 2
eqs = """
dv/dt=mu/tau+sigma/tau**.5*xi : 1
"""
group = NeuronGroup(N, model=eqs, threshold=1, reset=0)
C = Connection(group, group, 'v')
for i in range(N):
    C[i, (i + 1) % N] = J
#C.connect_full(group,group,weight=J)
#for i in range(N):
#    C[i,i]=0
S = SpikeMonitor(group)
trace = StateMonitor(group, 'v', record=True)
run(500 * ms)
i, t = S.spikes[-1]
subplot(211)
raster_plot(S)
subplot(212)
plot(trace.times / ms, trace[0])
show()
```

##### Example: delays (misc)¶

Random network with external noise and transmission delays

```
from brian import *
tau = 10 * ms
sigma = 5 * mV
eqs = 'dv/dt = -v/tau+sigma*xi/tau**.5 : volt'
P = NeuronGroup(4000, model=eqs, threshold=10 * mV, reset=0 * mV,
                refractory=5 * ms)
P.v = -60 * mV
Pe = P.subgroup(3200)
Pi = P.subgroup(800)
C = Connection(P, P, 'v', delay=2 * ms)
C.connect_random(Pe, P, 0.05, weight=.7 * mV)
C.connect_random(Pi, P, 0.05, weight= -2.8 * mV)
M = SpikeMonitor(P, True)
run(1 * second)
print 'Mean rate =', M.nspikes / 4000. / second
raster_plot(M)
show()
```

##### Example: after_potential (misc)¶

A model with depolarizing after-potential.

```
from brian import *
v0=20.5*mV
eqs = '''
dv/dt = (v0-v)/(30*ms) : volt # the membrane equation
dAP/dt = -AP/(3*ms) : volt # the after-potential
vm = v+AP : volt # total membrane potential
'''
IF = NeuronGroup(1, model=eqs, threshold='vm>20*mV',
                 reset='v=0*mV; AP=10*mV')
Mv = StateMonitor(IF, 'vm', record=True)
run(500 * ms)
plot(Mv.times / ms, Mv[0] / mV)
show()
```

##### Example: named_threshold (misc)¶

Example with named threshold and reset variables

```
from brian import *
eqs = '''
dge/dt = -ge/(5*ms) : volt
dgi/dt = -gi/(10*ms) : volt
dx/dt = (ge+gi-(x+49*mV))/(20*ms) : volt
'''
P = NeuronGroup(4000, model=eqs, threshold='x>-50*mV',
                reset=Refractoriness(-60 * mV, 5 * ms, state='x'))
#P=NeuronGroup(4000,model=eqs,threshold=Threshold(-50*mV,state='x'),\
# reset=Reset(-60*mV,state='x')) # without refractoriness
P.x = -60 * mV
Pe = P.subgroup(3200)
Pi = P.subgroup(800)
Ce = Connection(Pe, P, 'ge', weight=1.62 * mV, sparseness=0.02)
Ci = Connection(Pi, P, 'gi', weight= -9 * mV, sparseness=0.02)
M = SpikeMonitor(P)
run(1 * second)
raster_plot(M)
show()
```

##### Example: ring (misc)¶

A ring of integrate-and-fire neurons.

```
from brian import *
tau = 10 * ms
v0 = 11 * mV
N = 20
w = 1 * mV
ring = NeuronGroup(N, model='dv/dt=(v0-v)/tau : volt', threshold=10 * mV, reset=0 * mV)
W = Connection(ring, ring, 'v')
for i in range(N):
    W[i, (i + 1) % N] = w
ring.v = rand(N) * 10 * mV
S = SpikeMonitor(ring)
run(300 * ms)
raster_plot(S)
show()
```

##### Example: I-F_curve2 (misc)¶

Input-frequency curve of an IF model. Network: 1000 unconnected integrate-and-fire neurons (leaky IF) with an input parameter `v0`. The input is set differently for each neuron. Spikes are sent to a spike counter, which counts the spikes emitted by each neuron.
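For comparison with what this example measures, the f-I curve of a leaky IF neuron has a standard closed form: for `v0 > Vt` the interspike interval is `T = t_ref + tau*log(v0/(v0 - Vt))`, and the firing rate is `1/T` (zero below threshold). A sketch in plain NumPy (not Brian), using the same parameter values as the script:

```python
import numpy as np

# Theoretical firing rate of a leaky IF neuron dv/dt=(v0-v)/tau with
# threshold Vt, reset 0 and a 5 ms refractory period, as in the example.
tau = 10e-3                 # seconds
Vt = 10e-3                  # volts
t_ref = 5e-3
v0 = np.linspace(0, 20e-3, 1000)
rate = np.zeros_like(v0)
supra = v0 > Vt             # only suprathreshold inputs produce spikes
rate[supra] = 1.0 / (t_ref + tau * np.log(v0[supra] / (v0[supra] - Vt)))
print(rate.max())           # about 84 Hz at v0 = 20 mV
```

The simulated counts divided by the duration should trace out this curve, up to the discretisation of spike times on the simulation grid.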

```
from brian import *
N = 1000
tau = 10 * ms
eqs = '''
dv/dt=(v0-v)/tau : volt
v0 : volt
'''
group = NeuronGroup(N, model=eqs, threshold=10 * mV, reset=0 * mV, refractory=5 * ms)
group.v = 0 * mV
group.v0 = linspace(0 * mV, 20 * mV, N)
counter = SpikeCounter(group)
duration = 5 * second
run(duration)
plot(group.v0 / mV, counter.count / duration)
show()
```

##### Example: adaptive_threshold (misc)¶

A model with adaptive threshold (increases with each spike)

```
from brian import *
eqs = '''
dv/dt = -v/(10*ms) : volt
dvt/dt = (10*mV-vt)/(15*ms) : volt
'''
reset = '''
v=0*mV
vt+=3*mV
'''
IF = NeuronGroup(1, model=eqs, reset=reset, threshold='v>vt')
IF.rest()
PG = PoissonGroup(1, 500 * Hz)
C = Connection(PG, IF, 'v', weight=3 * mV)
Mv = StateMonitor(IF, 'v', record=True)
Mvt = StateMonitor(IF, 'vt', record=True)
run(100 * ms)
plot(Mv.times / ms, Mv[0] / mV)
plot(Mvt.times / ms, Mvt[0] / mV)
show()
```

##### Example: spike_triggered_average (misc)¶

Example of the use of the function spike_triggered_average. White noise is passed through a Gaussian (low-pass) filter, whose output is used to generate spikes (a Poisson process). Those spikes are then used in conjunction with the input signal to recover the filter function.
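Conceptually, the spike-triggered average is just the mean of the stimulus windows that precede each spike. A minimal NumPy sketch of that idea, independent of brian.hears and of the spike_triggered_average function (all names here are our own):

```python
import numpy as np

def sta(stimulus, spike_indices, window):
    """Average the `window` stimulus samples preceding each spike index."""
    segments = [stimulus[i - window:i]
                for i in spike_indices if i >= window]
    return np.mean(segments, axis=0)

# Toy check: spikes occur whenever a smoothed copy of the noise is high,
# so the STA should be elevated just before the spike and flat earlier.
rng = np.random.RandomState(0)
stim = rng.randn(10000)
smoothed = np.convolve(stim, np.ones(20) / 20, 'same')
spike_idx = np.nonzero(smoothed > 0.5)[0]
average = sta(stim, spike_idx, window=50)
```

The real function additionally handles units, onsets and plotting; the averaging step itself is no more than this.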

```
from brian import *
from brian.hears import *
from numpy.random import randn
from numpy.linalg import norm
from matplotlib import pyplot
dt = 0.1*ms
defaultclock.dt = dt
stimulus_duration = 15000*ms
stimulus = randn(int(stimulus_duration/ dt))
#filter
n=200
filt = exp(-((linspace(0.5,n,n))-(n+5)/2)**2/(n/3));
filt = filt/norm(filt)*1000;
filtered_stimulus = convolve(stimulus,filt)
neuron = PoissonGroup(1,lambda t:filtered_stimulus[int(t/dt)])
spikes = SpikeMonitor(neuron)
run(stimulus_duration,report='text')
spikes = spikes[0] #resulting spikes
max_interval = 20*ms #window duration of the spike triggered average
onset = 10*ms
sta,time_axis = spike_triggered_average(spikes,stimulus,max_interval,dt,onset=onset,display=True)
figure()
plot(time_axis,filt/max(filt))
plot(time_axis,sta/max(sta))
xlabel('time axis')
ylabel('sta')
legend(('real filter','estimated filter'))
show()
```

##### Example: leaky_if (misc)¶

A very simple example Brian script to show how to implement a leaky integrate and fire model. In this example, we also drive the single leaky integrate and fire neuron with regularly spaced spikes from the `SpikeGeneratorGroup`.

```
from brian import *
tau = 10 * ms
Vr = -70 * mV
Vt = -55 * mV
G = NeuronGroup(1, model='dV/dt = -(V-Vr)/tau : volt', threshold=Vt, reset=Vr)
spikes = [(0, t*second) for t in linspace(10 * ms, 100 * ms, 25)]
input = SpikeGeneratorGroup(1, spikes)
C = Connection(input, G)
C[0, 0] = 5 * mV
M = StateMonitor(G, 'V', record=True)
G.V = Vr
run(100 * ms)
plot(M.times / ms, M[0] / mV)
show()
```

##### Example: I-F_curve (misc)¶

Input-frequency curve of a neuron (cortical RS type). Network: 1000 unconnected integrate-and-fire neurons (Brette-Gerstner) with an input parameter I. The input is set differently for each neuron. Spikes are sent to a ‘neuron’ group of the same size with a variable n, which plays the role of a spike counter.

```
from brian import *
from brian.library.IF import *
N = 1000
eqs = Brette_Gerstner() + Current('I:amp')
print eqs
group = NeuronGroup(N, model=eqs, threshold= -20 * mV, reset=AdaptiveReset())
group.vm = -70 * mV
group.I = linspace(0 * nA, 1 * nA, N)
counter = NeuronGroup(N, model='n:1')
C = IdentityConnection(group, counter, 'n')
i = N * 8 / 10
trace = StateMonitor(group, 'vm', record=i)
duration = 5 * second
run(duration)
subplot(211)
plot(group.I / nA, counter.n / duration)
xlabel('I (nA)')
ylabel('Firing rate (Hz)')
subplot(212)
plot(trace.times / ms, trace[i] / mV)
xlabel('Time (ms)')
ylabel('Vm (mV)')
show()
```

##### Example: heterogeneous_delays (misc)¶

Script demonstrating use of a `Connection` with heterogeneous delays.

The network consists of a ‘starter’ neuron which fires a single spike at time t=0, connected to 100 leaky integrate and fire neurons with different delays for each target neuron, with the delays forming a quadratic curve centred at neuron 50. The longest delay is 10ms, and the network is run for 40ms. At the end, the delays are plotted above a colour plot of the membrane potential of each of the target neurons as a function of time (demonstrating the delays).

```
from brian import *
# Starter neuron, threshold is below 0 so it fires immediately, reset is below
# threshold so it fires only once.
G = NeuronGroup(1, model='V:1', threshold= -1.0, reset= -2.0)
# 100 LIF neurons, no reset or threshold so they will not spike
H = NeuronGroup(100, model='dV/dt=-V/(10*ms):volt')
# Connection with delays, here the delays are specified as a function of (i,j)
# giving the delay from neuron i to neuron j. In this case there is only one
# presynaptic neuron so i will be 0.
C = Connection(G, H, weight=5 * mV, max_delay=10 * ms,
delay=lambda i, j:10 * ms * (j / 50. - 1) ** 2)
M = StateMonitor(H, 'V', record=True)
run(40 * ms)
subplot(211)
# These are the delays from neuron 0 to neuron i in ms
plot([C.delay[0, i] / ms for i in range(100)])
ylabel('Delay (ms)')
title('Delays')
subplot(212)
# M.values is an array of all the recorded values, here transposed to make
# it fit with the plot above.
imshow(M.values.T, aspect='auto', extent=(0, 100, 40, 0))
xlabel('Neuron number')
ylabel('Time (ms)')
title('Potential')
show()
```

##### Example: spikes_io (misc)¶

This script demonstrates how to save/load spikes in AER format from inside Brian.

```
from brian import *
####################### SAVING #########################
# First we need to generate some spikes
N = 1000
g = PoissonGroup(N, 200*Hz)
# And set up a monitor to record those spikes to the disk
Maer = AERSpikeMonitor(g, './dummy.aedat')
# Now we can run
run(100*ms)
# The file would be closed automatically when the script ends, but here we
# need to close it explicitly because we re-use it from within the same script
Maer.close()
clear(all = True)
reinit_default_clock()
####################### LOADING ########################
# Now we can re-load the spikes
addr, timestamps = load_aer('./dummy.aedat')
# Feed them to a SpikeGeneratorGroup
group = SpikeGeneratorGroup(N, (addr, timestamps))
# The group can now be used as any other, here we choose to monitor
# the spikes
newM = SpikeMonitor(group, record = True)
run(100*ms)
# And plot the result
raster_plot(newM)
show()
```

##### Example: COBAHH (misc)¶

This is an implementation of a benchmark described in the following review paper:

Simulation of networks of spiking neurons: A review of tools and strategies (2007). Brette, Rudolph, Carnevale, Hines, Beeman, Bower, Diesmann, Goodman, Harris, Zirpe, Natschläger, Pecevski, Ermentrout, Djurfeldt, Lansner, Rochel, Vibert, Alvarez, Muller, Davison, El Boustani and Destexhe. Journal of Computational Neuroscience 23(3):349-98

Benchmark 3: random network of HH neurons with exponential synaptic conductances

Clock-driven implementation (no spike time interpolation)

- Brette - Dec 2007

70s for dt=0.1 ms with exponential Euler

```
from brian import *
# Parameters
area = 20000 * umetre ** 2
Cm = (1 * ufarad * cm ** -2) * area
gl = (5e-5 * siemens * cm ** -2) * area
El = -60 * mV
EK = -90 * mV
ENa = 50 * mV
g_na = (100 * msiemens * cm ** -2) * area
g_kd = (30 * msiemens * cm ** -2) * area
VT = -63 * mV
# Time constants
taue = 5 * ms
taui = 10 * ms
# Reversal potentials
Ee = 0 * mV
Ei = -80 * mV
we = 6 * nS # excitatory synaptic weight (voltage)
wi = 67 * nS # inhibitory synaptic weight
# The model
eqs = Equations('''
dv/dt = (gl*(El-v)+ge*(Ee-v)+gi*(Ei-v)-\
g_na*(m*m*m)*h*(v-ENa)-\
g_kd*(n*n*n*n)*(v-EK))/Cm : volt
dm/dt = alpham*(1-m)-betam*m : 1
dn/dt = alphan*(1-n)-betan*n : 1
dh/dt = alphah*(1-h)-betah*h : 1
dge/dt = -ge*(1./taue) : siemens
dgi/dt = -gi*(1./taui) : siemens
alpham = 0.32*(mV**-1)*(13*mV-v+VT)/ \
(exp((13*mV-v+VT)/(4*mV))-1.)/ms : Hz
betam = 0.28*(mV**-1)*(v-VT-40*mV)/ \
(exp((v-VT-40*mV)/(5*mV))-1)/ms : Hz
alphah = 0.128*exp((17*mV-v+VT)/(18*mV))/ms : Hz
betah = 4./(1+exp((40*mV-v+VT)/(5*mV)))/ms : Hz
alphan = 0.032*(mV**-1)*(15*mV-v+VT)/ \
(exp((15*mV-v+VT)/(5*mV))-1.)/ms : Hz
betan = .5*exp((10*mV-v+VT)/(40*mV))/ms : Hz
''')
P = NeuronGroup(4000, model=eqs,
threshold=EmpiricalThreshold(threshold= -20 * mV,
refractory=3 * ms),
implicit=True, freeze=True)
Pe = P.subgroup(3200)
Pi = P.subgroup(800)
Ce = Connection(Pe, P, 'ge', weight=we, sparseness=0.02)
Ci = Connection(Pi, P, 'gi', weight=wi, sparseness=0.02)
# Initialization
P.v = El + (randn(len(P)) * 5 - 5) * mV
P.ge = (randn(len(P)) * 1.5 + 4) * 10. * nS
P.gi = (randn(len(P)) * 12 + 20) * 10. * nS
# Record the number of spikes and a few traces
trace = StateMonitor(P, 'v', record=[1, 10, 100])
run(1 * second)
plot(trace[1])
plot(trace[10])
plot(trace[100])
show()
```

##### Example: poissongroup (misc)¶

Poisson input to an IF model

```
from brian import *
PG = PoissonGroup(1, lambda t:200 * Hz * (1 + cos(2 * pi * t * 50 * Hz)))
IF = NeuronGroup(1, model='dv/dt=-v/(10*ms) : volt', reset=0 * volt, threshold=10 * mV)
C = Connection(PG, IF, 'v', weight=3 * mV)
MS = SpikeMonitor(PG, True)
Mv = StateMonitor(IF, 'v', record=True)
rates = StateMonitor(PG, 'rate', record=True)
run(100 * ms)
subplot(211)
plot(rates.times / ms, rates[0] / Hz)
subplot(212)
plot(Mv.times / ms, Mv[0] / mV)
show()
```

##### Example: two_neurons (misc)¶

Two connected neurons with delays

```
from brian import *
tau = 10 * ms
w = -1 * mV
v0 = 11 * mV
neurons = NeuronGroup(2, model='dv/dt=(v0-v)/tau : volt', threshold=10 * mV, reset=0 * mV, \
max_delay=5 * ms)
neurons.v = rand(2) * 10 * mV
W = Connection(neurons, neurons, 'v', delay=2 * ms)
W[0, 1] = w
W[1, 0] = w
S = StateMonitor(neurons, 'v', record=True)
#mymonitor=SpikeMonitor(neurons[0])
mymonitor = PopulationSpikeCounter(neurons)
run(500 * ms)
plot(S.times / ms, S[0] / mV)
plot(S.times / ms, S[1] / mV)
show()
```

##### Example: multipleclocks (misc)¶

This example demonstrates using different clocks for different objects in the network. The clock `simclock` is the clock used for the underlying simulation. The clock `monclock` is the clock used for monitoring the membrane potential. This monitoring takes place less frequently than the simulation update step, to save time and memory. Finally, the clock `inputclock` controls when the external ‘current’ `Iext` should be updated. In this case, we update it infrequently so we can see the effect on the network.

This example also demonstrates the @network_operation decorator. A function with this decorator will be run as part of the network update step, in sync with the clock provided (or the default one if none is provided).

```
from brian import *
# define the three clocks
simclock = Clock(dt=0.1 * ms)
monclock = Clock(dt=0.3 * ms)
inputclock = Clock(dt=100 * ms)
# simple leaky I&F model with external 'current' Iext as a parameter
tau = 10 * ms
eqs = '''
dV/dt = (-V+Iext)/tau : volt
Iext: volt
'''
# A single leaky I&F neuron with simclock as its clock
G = NeuronGroup(1, model=eqs, reset=0 * mV, threshold=10 * mV, clock=simclock)
G.V = 5 * mV
# This function will be run in sync with inputclock i.e. every 100 ms
@network_operation(clock=inputclock)
def update_Iext():
    G.Iext = rand(len(G)) * 20 * mV
# V is monitored in sync with monclock
MV = StateMonitor(G, 'V', record=0, clock=monclock)
# run and plot
run(1000 * ms)
plot(MV.times / ms, MV[0] / mV)
show()
# You should see 10 different regions, sometimes Iext will be above threshold
# in which case you will see regular spiking at different rates, and sometimes
# it will be below threshold in which case you'll see exponential decay to that
# value
```

##### Example: HodgkinHuxley (misc)¶

Hodgkin-Huxley model, assuming an area of 1*cm**2.

```
from brian import *
from brian.library.ionic_currents import *
#defaultclock.dt=.01*ms # more precise
El = 10.6 * mV
EK = -12 * mV
ENa = 120 * mV
eqs = MembraneEquation(1 * uF) + leak_current(.3 * msiemens, El)
eqs += K_current_HH(36 * msiemens, EK) + Na_current_HH(120 * msiemens, ENa)
eqs += Current('I:amp')
neuron = NeuronGroup(1, eqs, implicit=True, freeze=True)
trace = StateMonitor(neuron, 'vm', record=True)
run(100 * ms)
neuron.I = 10 * uA
run(100 * ms)
plot(trace.times / ms, trace[0] / mV)
show()
```

##### Example: stim2d (misc)¶

Example of a 2D stimulus, see the complete description at the Brian Cookbook.

```
from brian import *
import scipy.ndimage as im
__all__ = ['bar', 'StimulusArrayGroup']
def bar(width, height, thickness, angle):
    '''
    An array of given dimensions with a bar of given thickness and angle
    '''
    stimulus = zeros((width, height))
    stimulus[:, int(height / 2. - thickness / 2.):int(height / 2. + thickness / 2.)] = 1.
    stimulus = im.rotate(stimulus, angle, reshape=False)
    return stimulus

class StimulusArrayGroup(PoissonGroup):
    '''
    A group of neurons which fire with a given stimulus at a given rate

    The argument ``stimulus`` should be a 2D array with values between 0 and 1.
    The point in the stimulus array at position (y,x) will correspond to the
    neuron with index i=y*width+x. This neuron will fire Poisson spikes at
    ``rate*stimulus[y,x]`` Hz. The stimulus will start at time ``onset``
    for ``duration``.
    '''
    def __init__(self, stimulus, rate, onset, duration):
        height, width = stimulus.shape
        stim = stimulus.ravel() * rate
        self.stimulus = stim
        def stimfunc(t):
            if onset < t < (onset + duration):
                return stim
            else:
                return 0. * Hz
        PoissonGroup.__init__(self, width * height, stimfunc)

if __name__ == '__main__':
    import pylab
    subplot(121)
    stim = bar(100, 100, 10, 90) * 0.9 + 0.1
    pylab.imshow(stim, origin='lower')
    pylab.gray()
    G = StimulusArrayGroup(stim, 50 * Hz, 100 * ms, 100 * ms)
    M = SpikeMonitor(G)
    run(300 * ms)
    subplot(122)
    raster_plot(M)
    axis(xmin=0, xmax=300)
    show()
```

##### Example: reliability (misc)¶

Reliability of spike timing. See e.g. Mainen & Sejnowski (1995) for experimental results in vitro.

- Brette

```
from brian import *
# The common noisy input
N = 25
tau_input = 5 * ms
input = NeuronGroup(1, model='dx/dt=-x/tau_input+(2./tau_input)**.5*xi:1')
# The noisy neurons receiving the same input
tau = 10 * ms
sigma = .015
eqs_neurons = '''
dx/dt=(0.9+.5*I-x)/tau+sigma*(2./tau)**.5*xi:1
I : 1
'''
neurons = NeuronGroup(N, model=eqs_neurons, threshold=1, reset=0, refractory=5 * ms)
neurons.x = rand(N)
neurons.I = linked_var(input,'x') # input.x is continuously fed into neurons.I
spikes = SpikeMonitor(neurons)
run(500 * ms)
raster_plot(spikes)
show()
```

##### Example: minimalexample (misc)¶

Very short example program.

```
from brian import *
eqs = '''
dv/dt = (ge+gi-(v+49*mV))/(20*ms) : volt
dge/dt = -ge/(5*ms) : volt
dgi/dt = -gi/(10*ms) : volt
'''
P = NeuronGroup(4000, model=eqs,
threshold= -50 * mV, reset= -60 * mV)
P.v = -60 * mV + 10 * mV * rand(len(P))
Pe = P.subgroup(3200)
Pi = P.subgroup(800)
Ce = Connection(Pe, P, 'ge', weight=1.62 * mV, sparseness=0.02)
Ci = Connection(Pi, P, 'gi', weight= -9 * mV, sparseness=0.02)
M = SpikeMonitor(P)
run(1 * second)
i = 0
while len(M[i]) <= 1:
    i += 1
print "The firing rate of neuron", i, "is", firing_rate(M[i]) * Hz
print "The coefficient of variation of neuron", i, "is", CV(M[i])
raster_plot(M)
show()
```

##### Example: using_classes (misc)¶

Example of using derived classes in Brian

Using a class derived from one of Brian’s classes can be a useful way of organising code in complicated simulations. A class such as a `NeuronGroup` can itself create further `NeuronGroup`, `Connection` and `NetworkOperation` objects. In order to have these objects included in the simulation, the derived class has to include them in its `contained_objects` list (this tells Brian to add them to the `Network` when the derived class object is added to the network).

```
from brian import *
class PoissonDrivenGroup(NeuronGroup):
    '''
    This class is a group of leaky integrate-and-fire neurons driven by
    external Poisson inputs. The class creates the Poisson inputs and
    connects them to itself.
    '''
    def __init__(self, N, rate, weight):
        tau = 10 * ms
        eqs = '''
        dV/dt = -V/tau : 1
        '''
        # It's essential to call the initialiser of the base class
        super(PoissonDrivenGroup, self).__init__(N, eqs, reset=0, threshold=1)
        self.poisson_group = PoissonGroup(N, rate)
        self.conn = Connection(self.poisson_group, self, 'V')
        self.conn.connect_one_to_one(weight=weight)
        self.contained_objects += [self.poisson_group,
                                   self.conn]
G = PoissonDrivenGroup(100, 100 * Hz, .3)
M = SpikeMonitor(G)
M_pg = SpikeMonitor(G.poisson_group)
trace = StateMonitor(G, 'V', record=0)
run(1 * second)
subplot(311)
raster_plot(M_pg)
title('Input spikes')
subplot(312)
raster_plot(M)
title('Output spikes')
subplot(313)
plot(trace.times, trace[0])
title('Sample trace')
show()
```

##### Example: poisson (misc)¶

This example demonstrates the PoissonGroup object. Here we have used a custom function to generate different rates at different times.

This example also demonstrates a custom SpikeMonitor.

```
#import brian_no_units # uncomment to run faster
from brian import *
# Rates
r1 = arange(101, 201) * 0.1 * Hz
r2 = arange(1, 101) * 0.1 * Hz
def myrates(t):
    if t < 10 * second:
        return r1
    else:
        return r2
# More compact: myrates=lambda t: (t<10*second and r1) or r2
# Neuron group
P = PoissonGroup(100, myrates)
# Calculation of rates
ns = zeros(len(P))
def ratemonitor(spikes):
    ns[spikes] += 1
Mf = SpikeMonitor(P, function=ratemonitor)
M = SpikeMonitor(P)
# Simulation and plotting
run(10 * second)
print "Rates after 10s:"
print ns / (10 * second)
ns[:] = 0
run(10 * second)
print "Rates after 20s:"
print ns / (10 * second)
raster_plot()
show()
```

##### Example: adaptive (misc)¶

An adaptive neuron model

```
from brian import *
PG = PoissonGroup(1, 500 * Hz)
eqs = '''
dv/dt = (-w-v)/(10*ms) : volt # the membrane equation
dw/dt = -w/(30*ms) : volt # the adaptation current
'''
# The adaptation variable increases with each spike
IF = NeuronGroup(1, model=eqs, threshold=20 * mV,
reset='''v = 0*mV
w += 3*mV ''')
C = Connection(PG, IF, 'v', weight=3 * mV)
MS = SpikeMonitor(PG, True)
Mv = StateMonitor(IF, 'v', record=True)
Mw = StateMonitor(IF, 'w', record=True)
run(100 * ms)
plot(Mv.times / ms, Mv[0] / mV)
plot(Mw.times / ms, Mw[0] / mV)
show()
```

##### Example: van_rossum_metric (misc)¶

Example of how to use the van Rossum metric.

The VanRossumMetric class, which is defined as a monitor and therefore works online, computes the metric between every pair of neurons in a given population. The present example shows the concept of phase locking: N neurons are driven by sinusoidal inputs with different amplitudes.

Usage: output = VanRossumMetric(source, tau=4 * ms)

Here source is a NeuronGroup of N neurons and tau is the time constant of the kernel used in the metric. output is a monitor whose distance attribute holds the distance matrix between the neurons in source.
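Offline, the van Rossum metric amounts to filtering each spike train with an exponential kernel exp(-t/tau) and integrating the squared difference of the two filtered signals. A rough NumPy sketch of that definition, for illustration only (a naive reimplementation, not the VanRossumMetric code):

```python
import numpy as np

def van_rossum_distance(train_a, train_b, tau=0.004, dt=1e-5, T=0.5):
    """D = sqrt((1/tau) * integral of (f_a - f_b)^2 dt), where f is the
    spike train convolved with exp(-t/tau). Spike times in seconds."""
    t = np.arange(0, T, dt)
    def filtered(train):
        f = np.zeros_like(t)
        for s in train:
            m = t >= s
            f[m] += np.exp(-(t[m] - s) / tau)   # causal exponential kernel
        return f
    diff = filtered(train_a) - filtered(train_b)
    return np.sqrt(np.sum(diff ** 2) * dt / tau)

d_same = van_rossum_distance([0.01, 0.02], [0.01, 0.02])  # identical trains
d_diff = van_rossum_distance([0.01], [])                  # one extra spike
```

With this normalisation, identical trains are at distance 0 and a single unmatched spike contributes a distance of about sqrt(1/2), independent of tau.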

```
from brian import *
from time import time
tau=20*ms
N=100
b=1.2 # constant current mean, the modulation varies
f=10*Hz
delta =2*ms
eqs='''
dv/dt=(-v+a*sin(2*pi*f*t)+b)/tau : 1
a : 1
'''
neurons=NeuronGroup(N,model=eqs,threshold=1,reset=0)
neurons.v=rand(N)
neurons.a=linspace(.05,0.75,N)
S=SpikeMonitor(neurons)
trace=StateMonitor(neurons,'v',record=50)
van_rossum_metric=VanRossumMetric(neurons, tau=4 * ms)
run(1000*ms)
raster_plot(S)
title('Raster plot')
figure()
title('Distance matrix between spike trains')
imshow(van_rossum_metric.distance)
colorbar()
show()
```

##### Example: realtime_plotting (misc)¶

Realtime plotting example

```
# These lines are necessary for interactive plotting when launching from the
# Eclipse IDE, they may not be necessary in every environment.
import matplotlib
matplotlib.use('WXAgg') # You may need to experiment, try WXAgg, GTKAgg, QTAgg, TkAgg
from brian import *
###### Set up the standard CUBA example ######
N = 4000
eqs = '''
dv/dt = (ge+gi-(v+49*mV))/(20*ms) : volt
dge/dt = -ge/(5*ms) : volt
dgi/dt = -gi/(10*ms) : volt
'''
P = NeuronGroup(N, eqs, threshold= -50 * mV, reset= -60 * mV)
P.v = -60 * mV + 10 * mV * rand(len(P))
Pe = P.subgroup(3200)
Pi = P.subgroup(800)
Ce = Connection(Pe, P, 'ge', weight=1.62 * mV, sparseness=0.02)
Ci = Connection(Pi, P, 'gi', weight= -9 * mV, sparseness=0.02)
M = SpikeMonitor(P)
trace = RecentStateMonitor(P, 'v', record=range(5), duration=200 * ms)
ion()
subplot(211)
raster_plot(M, refresh=10 * ms, showlast=200 * ms, redraw=False)
subplot(212)
trace.plot(refresh=10 * ms, showlast=200 * ms)
run(1 * second)
ioff() # switch interactive mode off
show() # and wait for user to close the window before shutting down
```

##### Example: topographic_map (misc)¶

Topographic map - an example of complicated connections. Two layers of neurons. The first layer is connected randomly to the second one in a topographical way. The second layer has random lateral connections.
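The ‘topographical’ part lies entirely in the weight function used in the script below: w(i, j) decays exponentially with the index distance |i - j|, so each presynaptic neuron mostly drives nearby targets. The same kernel can be built as a plain matrix to inspect it directly (a sketch with units dropped; values in mV):

```python
import numpy as np

N = 100
i, j = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
# Same kernel as the feedforward `topomap` lambda: 3 mV at zero distance,
# decaying with length constant 10 neurons.
W = np.exp(-np.abs(i - j) * 0.1) * 3.0
```

The matrix is symmetric and strongest on the diagonal, which is why the connection-strength plots at the end of the script show a bright band along the diagonal.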

```
from brian import *
N = 100
tau = 10 * ms
tau_e = 2 * ms # AMPA synapse
eqs = '''
dv/dt=(I-v)/tau : volt
dI/dt=-I/tau_e : volt
'''
rates = zeros(N) * Hz
rates[N / 2 - 10:N / 2 + 10] = ones(20) * 30 * Hz
layer1 = PoissonGroup(N, rates=rates)
layer2 = NeuronGroup(N, model=eqs, threshold=10 * mV, reset=0 * mV)
topomap = lambda i, j:exp(-abs(i - j) * .1) * 3 * mV
feedforward = Connection(layer1, layer2, sparseness=.5, weight=topomap)
#feedforward[2,3]=1*mV
lateralmap = lambda i, j:exp(-abs(i - j) * .05) * 0.5 * mV
recurrent = Connection(layer2, layer2, sparseness=.5, weight=lateralmap)
spikes = SpikeMonitor(layer2)
run(1 * second)
subplot(211)
raster_plot(spikes)
subplot(223)
imshow(feedforward.W.todense(), interpolation='nearest', origin='lower')
title('Feedforward connection strengths')
subplot(224)
imshow(recurrent.W.todense(), interpolation='nearest', origin='lower')
title('Recurrent connection strengths')
show()
```

##### Example: if (misc)¶

A very simple example Brian script to show how to implement
an integrate and fire model. In this example, we also
drive the single integrate and fire neuron with
regularly spaced spikes from the `SpikeGeneratorGroup`

.

```
from brian import *
tau = 10 * ms
Vr = -70 * mV
Vt = -55 * mV
G = NeuronGroup(1, model='V:volt', threshold=Vt, reset=Vr)
input = SpikeGeneratorGroup(1, [(0, t * ms) for t in linspace(10, 100, 25)])
C = Connection(input, G)
C[0, 0] = 2 * mV
M = StateMonitor(G, 'V', record=True)
G.V = Vr
run(100 * ms)
plot(M.times / ms, M[0] / mV)
show()
```

##### Example: remotecontrolclient (misc)¶

Example of using `RemoteControlServer` and `RemoteControlClient` to control a simulation as it runs in Brian.

Run the script remotecontrolserver.py before running this.

```
from brian import *
import time
client = RemoteControlClient()
time.sleep(1)
subplot(121)
plot(*client.evaluate('(M.times, M.values)'))
client.execute('G.I = 1.1')
time.sleep(1)
subplot(122)
plot(*client.evaluate('(M.times, M.values)'))
client.stop()
show()
```

##### Example: linked_var (misc)¶

Example showing `linked_var()`, connecting two different `NeuronGroup` variables. Here we show something like a simplified hair cell and auditory nerve fibre model where the hair cells and ANFs are implemented as two separate `NeuronGroup` objects. The hair cells filter their inputs via a differential equation, and then emit graded amounts of neurotransmitter (variable ‘y’) to the auditory nerve fibres’ input current (variable ‘I’).

```
from brian import *
N = 5
f = 50 * Hz
a_min = 1.0
a_max = 100.0
tau_haircell = 50 * ms
tau = 10 * ms
duration = 100 * ms
eqs_haircells = '''
input = a*sin(2*pi*f*t) : 1
x = clip(input, 0, Inf)**(1.0/3.0) : 1
a : 1
dy/dt = (x-y)/tau_haircell : 1
'''
haircells = NeuronGroup(N, eqs_haircells)
haircells.a = linspace(a_min, a_max, N)
M_haircells = MultiStateMonitor(haircells, vars=('input', 'y'), record=True)
eqs_nervefibres = '''
dV/dt = (I-V)/tau : 1
I : 1
'''
nervefibres = NeuronGroup(N, eqs_nervefibres, reset=0, threshold=1)
nervefibres.I = linked_var(haircells, 'y')
M_nervefibres = MultiStateMonitor(nervefibres, record=True)
run(duration)
subplot(221)
M_haircells['input'].plot()
ylabel('haircell.input')
subplot(222)
M_haircells['y'].plot()
ylabel('haircell.y')
subplot(223)
M_nervefibres['I'].plot()
ylabel('nervefibres.I')
subplot(224)
M_nervefibres['V'].plot()
ylabel('nervefibres.V')
show()
```

##### Example: gap_junctions (misc)¶

Network of noisy IF neurons with gap junctions

```
from brian import *
N = 300
v0 = 5 * mV
tau = 20 * ms
sigma = 5 * mV
vt = 10 * mV
vr = 0 * mV
g_gap = 1. / N
beta = 60 * mV * 2 * ms
delta = vt - vr
eqs = '''
dv/dt=(v0-v)/tau+g_gap*(u-N*v)/tau : volt
du/dt=(N*v0-u)/tau : volt # input from other neurons
'''
def myreset(P, spikes):
    P.v[spikes] = vr # reset
    P.v += g_gap * beta * len(spikes) # spike effect
    P.u -= delta * len(spikes)
group = NeuronGroup(N, model=eqs, threshold=vt, reset=myreset)
@network_operation
def noise(cl):
    x = randn(N) * sigma * (cl.dt / tau) ** .5
    group.v += x
    group.u += sum(x)
trace = StateMonitor(group, 'v', record=[0, 1])
spikes = SpikeMonitor(group)
rate = PopulationRateMonitor(group)
run(1 * second)
subplot(311)
raster_plot(spikes)
subplot(312)
plot(trace.times / ms, trace[0] / mV)
plot(trace.times / ms, trace[1] / mV)
subplot(313)
plot(rate.times / ms, rate.smooth_rate(5 * ms) / Hz)
show()
```

##### Example: remotecontrolserver (misc)¶

Example of using `RemoteControlServer` and `RemoteControlClient` to control a simulation as it runs in Brian.

After running this script, run remotecontrolclient.py or paste the code from that script into an IPython shell for interactive control.

```
from brian import *
eqs = '''
dV/dt = (I-V)/(10*ms)+0.1*xi*(2/(10*ms))**.5 : 1
I : 1
'''
G = NeuronGroup(3, eqs, reset=0, threshold=1)
M = RecentStateMonitor(G, 'V', duration=50*ms)
server = RemoteControlServer()
run(1e10*second)
```

##### Example: non_reliability (misc)¶

Reliability of spike timing. See e.g. Mainen & Sejnowski (1995) for experimental results in vitro.

Here: a constant current is injected in all trials.

- Brette

```
from brian import *
N = 25
tau = 20 * ms
sigma = .015
eqs_neurons = '''
dx/dt=(1.1-x)/tau+sigma*(2./tau)**.5*xi:1
'''
neurons = NeuronGroup(N, model=eqs_neurons, threshold=1, reset=0, refractory=5 * ms)
spikes = SpikeMonitor(neurons)
run(500 * ms)
raster_plot(spikes)
show()
```

##### Example: expIF_network (misc)¶

A network of exponential IF models with synaptic conductances

```
from brian import *
from brian.library.IF import *
from brian.library.synapses import *
import time
C = 200 * pF
taum = 10 * msecond
gL = C / taum
EL = -70 * mV
VT = -55 * mV
DeltaT = 3 * mV
# Synapse parameters
Ee = 0 * mvolt
Ei = -80 * mvolt
taue = 5 * msecond
taui = 10 * msecond
eqs = exp_IF(C, gL, EL, VT, DeltaT)
# Two different ways of adding synaptic currents:
eqs += Current('''
Ie=ge*(Ee-vm) : amp
dge/dt=-ge/taue : siemens
''')
eqs += exp_conductance('gi', Ei, taui) # from library.synapses
P = NeuronGroup(4000, model=eqs, threshold= -20 * mvolt, reset=EL, refractory=2 * ms)
Pe = P.subgroup(3200)
Pi = P.subgroup(800)
we = 1.5 * nS # excitatory synaptic weight
wi = 2.5 * we # inhibitory synaptic weight
Ce = Connection(Pe, P, 'ge', weight=we, sparseness=0.05)
Ci = Connection(Pi, P, 'gi', weight=wi, sparseness=0.05)
# Initialization
P.vm = randn(len(P)) * 10 * mV - 70 * mV
P.ge = (randn(len(P)) * 2 + 5) * we
P.gi = (randn(len(P)) * 2 + 5) * wi
# Excitatory input to a subset of excitatory and inhibitory neurons
# Excitatory neurons are excited for the first 200 ms
# Inhibitory neurons are excited for the first 100 ms
input_layer1 = Pe.subgroup(200)
input_layer2 = Pi.subgroup(200)
input1 = PoissonGroup(200, rates=lambda t: (t < 200 * ms and 2000 * Hz) or 0 * Hz)
input2 = PoissonGroup(200, rates=lambda t: (t < 100 * ms and 2000 * Hz) or 0 * Hz)
input_co1 = IdentityConnection(input1, input_layer1, 'ge', weight=we)
input_co2 = IdentityConnection(input2, input_layer2, 'ge', weight=we)
# Record the number of spikes
M = SpikeMonitor(P)
print "Simulation running..."
start_time = time.time()
run(500 * ms)
duration = time.time() - start_time
print "Simulation time:", duration, "seconds"
print M.nspikes / 4000., "spikes per neuron"
raster_plot(M)
show()
```

##### Example: COBA (misc)¶

This is a Brian script implementing a benchmark described in the following review paper:

Simulation of networks of spiking neurons: A review of tools and strategies (2007). Brette, Rudolph, Carnevale, Hines, Beeman, Bower, Diesmann, Goodman, Harris, Zirpe, Natschlager, Pecevski, Ermentrout, Djurfeldt, Lansner, Rochel, Vibert, Alvarez, Muller, Davison, El Boustani and Destexhe. Journal of Computational Neuroscience 23(3):349-98

Benchmark 1: random network of integrate-and-fire neurons with exponential synaptic conductances

Clock-driven implementation with Euler integration (no spike time interpolation)

###### R. Brette - Dec 2007¶

Brian is a simulator for spiking neural networks written in Python, developed by R. Brette and D. Goodman. http://brian.di.ens.fr

```
from brian import *
import time
# Time constants
taum = 20 * msecond
taue = 5 * msecond
taui = 10 * msecond
# Reversal potentials
Ee = (0. + 60.) * mvolt
Ei = (-80. + 60.) * mvolt
start_time = time.time()
eqs = Equations('''
dv/dt = (-v+ge*(Ee-v)+gi*(Ei-v))*(1./taum) : volt
dge/dt = -ge*(1./taue) : 1
dgi/dt = -gi*(1./taui) : 1
''')
# NB 1: conductances are in units of the leak conductance
# NB 2: multiplication is faster than division
P = NeuronGroup(4000, model=eqs, threshold=10 * mvolt, \
reset=0 * mvolt, refractory=5 * msecond,
order=1, compile=True)
Pe = P.subgroup(3200)
Pi = P.subgroup(800)
we = 6. / 10. # excitatory synaptic weight (voltage)
wi = 67. / 10. # inhibitory synaptic weight
Ce = Connection(Pe, P, 'ge', weight=we, sparseness=0.02)
Ci = Connection(Pi, P, 'gi', weight=wi, sparseness=0.02)
# Initialization
P.v = (randn(len(P)) * 5 - 5) * mvolt
P.ge = randn(len(P)) * 1.5 + 4
P.gi = randn(len(P)) * 12 + 20
# Record the number of spikes
Me = PopulationSpikeCounter(Pe)
Mi = PopulationSpikeCounter(Pi)
print "Network construction time:", time.time() - start_time, "seconds"
print "Simulation running..."
start_time = time.time()
run(1 * second)
duration = time.time() - start_time
print "Simulation time:", duration, "seconds"
print Me.nspikes, "excitatory spikes"
print Mi.nspikes, "inhibitory spikes"
```

#### audition¶

##### Example: licklider (audition)¶

Spike-based adaptation of Licklider’s model of pitch processing (autocorrelation with delay lines) with phase locking.

Romain Brette

```
from brian import *
defaultclock.dt = .02 * ms
# Ear and sound
max_delay = 20 * ms # 50 Hz
tau_ear = 1 * ms
sigma_ear = .1
eqs_ear = '''
dx/dt=(sound-x)/tau_ear+sigma_ear*(2./tau_ear)**.5*xi : 1
sound=5*sin(2*pi*frequency*t)**3 : 1 # nonlinear distortion
#sound=5*(sin(4*pi*frequency*t)+.5*sin(6*pi*frequency*t)) : 1 # missing fundamental
frequency=(200+200*t*Hz)*Hz : Hz # increasing pitch
'''
receptors = NeuronGroup(2, model=eqs_ear, threshold=1, reset=0, refractory=2 * ms)
traces = StateMonitor(receptors, 'x', record=True)
sound = StateMonitor(receptors, 'sound', record=0)
# Coincidence detectors
min_freq = 50 * Hz
max_freq = 1000 * Hz
N = 300
tau = 1 * ms
sigma = .1
eqs_neurons = '''
dv/dt=-v/tau+sigma*(2./tau)**.5*xi : 1
'''
neurons = NeuronGroup(N, model=eqs_neurons, threshold=1, reset=0)
synapses = Connection(receptors, neurons, 'v', structure='dense', max_delay=1.1 * max_delay, delay=True)
synapses.connect_full(receptors, neurons, weight=.5)
synapses.delay[1, :] = 1. / exp(linspace(log(min_freq / Hz), log(max_freq / Hz), N))
spikes = SpikeMonitor(neurons)
run(500 * ms)
raster_plot(spikes)
ylabel('Frequency')
yticks([0, 99, 199, 299], array(1. / synapses.delay.todense()[1, [0, 99, 199, 299]], dtype=int))
show()
```

##### Example: filterbank (audition)¶

An auditory filterbank implemented with Poisson neurons

The input sound has a missing fundamental (only harmonics 2 and 3)

```
from brian import *
defaultclock.dt = .01 * ms
N = 1500
tau = 1 * ms # Decay time constant of filters = 2*tau
freq = linspace(100 * Hz, 2000 * Hz, N) # characteristic frequencies
f_stimulus = 500 * Hz # stimulus frequency
gain = 500 * Hz
eqs = '''
dv/dt=(-a*w-v+I)/tau : Hz
dw/dt=(v-w)/tau : Hz # e.g. linearized potassium channel with conductance a
a : 1
I = gain*(sin(4*pi*f_stimulus*t)+sin(6*pi*f_stimulus*t)) : Hz
'''
neurones = NeuronGroup(N, model=eqs, threshold=PoissonThreshold())
neurones.a = (2 * pi * freq * tau) ** 2
spikes = SpikeMonitor(neurones)
counter = SpikeCounter(neurones)
run(100 * ms)
subplot(121)
CF = array([freq[i] for i, _ in spikes.spikes])
timings = array([t for _, t in spikes.spikes])
plot(timings / ms, CF, '.')
xlabel('Time (ms)')
ylabel('Characteristic frequency (Hz)')
subplot(122)
plot(counter.count / (100 * ms), freq) # rate = spike count / simulation duration
xlabel('Firing rate (Hz)')
show()
```

##### Example: jeffress (audition)¶

Jeffress model, adapted with spiking neuron models. A sound source (white noise) is moving around the head. Delay differences between the two ears are used to determine the azimuth of the source. Delays are mapped to a neural place code using delay lines (each neuron receives input from both ears, with different delays).

Romain Brette

```
from brian import *
defaultclock.dt = .02 * ms
dt = defaultclock.dt
# Sound
sound = TimedArray(10 * randn(50000)) # white noise
# Ears and sound motion around the head (constant angular speed)
sound_speed = 300 * metre / second
interaural_distance = 20 * cm # big head!
max_delay = interaural_distance / sound_speed
print "Maximum interaural delay:", max_delay
angular_speed = 2 * pi * radian / second # 1 turn/second
tau_ear = 1 * ms
sigma_ear = .1
eqs_ears = '''
dx/dt=(sound(t-delay)-x)/tau_ear+sigma_ear*(2./tau_ear)**.5*xi : 1
delay=distance*sin(theta) : second
distance : second # distance to the centre of the head in time units
dtheta/dt=angular_speed : radian
'''
ears = NeuronGroup(2, model=eqs_ears, threshold=1, reset=0, refractory=2.5 * ms)
ears.distance = [-.5 * max_delay, .5 * max_delay]
traces = StateMonitor(ears, 'x', record=True)
# Coincidence detectors
N = 300
tau = 1 * ms
sigma = .1
eqs_neurons = '''
dv/dt=-v/tau+sigma*(2./tau)**.5*xi : 1
'''
neurons = NeuronGroup(N, model=eqs_neurons, threshold=1, reset=0)
synapses = Connection(ears, neurons, 'v', structure='dense', delay=True, max_delay=1.1 * max_delay)
synapses.connect_full(ears, neurons, weight=.5)
synapses.delay[0, :] = linspace(0 * ms, 1.1 * max_delay, N)
synapses.delay[1, :] = linspace(0 * ms, 1.1 * max_delay, N)[::-1]
spikes = SpikeMonitor(neurons)
run(1000 * ms)
raster_plot(spikes)
show()
```
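The key geometric quantity in this model is the interaural time difference (ITD): since each ear sits at ±`max_delay/2` (in time units) from the head centre, the ITD for a source at azimuth `theta` is `max_delay*sin(theta)`. A minimal NumPy sketch of this mapping with the script's parameters (plain floats, no Brian units):

```python
import numpy as np

sound_speed = 300.0           # m/s, as in the script
interaural_distance = 0.20    # m ("big head!")
max_delay = interaural_distance / sound_speed  # maximum ITD, in seconds

# ITD as a function of source azimuth theta (0 = straight ahead)
theta = np.linspace(-np.pi / 2, np.pi / 2, 5)
itd = max_delay * np.sin(theta)
```

`max_delay` is about 0.67 ms; the ITD is zero straight ahead and reaches ±`max_delay` at ±90 degrees, which is exactly the range spanned by the two opposed `synapses.delay` ramps.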

#### synapses¶

##### Example: licklider (synapses)¶

Spike-based adaptation of Licklider’s model of pitch processing (autocorrelation with delay lines) with phase locking.

Romain Brette

```
from brian import *
defaultclock.dt = .02 * ms
# Ear and sound
max_delay = 20 * ms # 50 Hz
tau_ear = 1 * ms
sigma_ear = .1
eqs_ear = '''
dx/dt=(sound-x)/tau_ear+sigma_ear*(2./tau_ear)**.5*xi : 1
sound=5*sin(2*pi*frequency*t)**3 : 1 # nonlinear distortion
#sound=5*(sin(4*pi*frequency*t)+.5*sin(6*pi*frequency*t)) : 1 # missing fundamental
frequency=(200+200*t*Hz)*Hz : Hz # increasing pitch
'''
receptors = NeuronGroup(2, model=eqs_ear, threshold=1, reset=0, refractory=2 * ms)
# Coincidence detectors
min_freq = 50 * Hz
max_freq = 1000 * Hz
N = 300
tau = 1 * ms
sigma = .1
eqs_neurons = '''
dv/dt=-v/tau+sigma*(2./tau)**.5*xi : 1
'''
neurons = NeuronGroup(N, model=eqs_neurons, threshold=1, reset=0)
synapses = Synapses(receptors, neurons, model='w : 1', pre='v+=w')
synapses[:,:]=True
synapses.w=0.5
synapses.delay[1, :] = 1. / exp(linspace(log(min_freq / Hz), log(max_freq / Hz), N))
spikes = SpikeMonitor(neurons)
run(500 * ms)
raster_plot(spikes)
ylabel('Frequency')
yticks([0, 99, 199, 299], array(1. / synapses.delay[1, [0, 99, 199, 299]], dtype=int))
show()
```
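The delay line above maps each coincidence detector to a preferred pitch on a logarithmic scale: `1./exp(linspace(log(min_freq/Hz), log(max_freq/Hz), N))` is just one period of each characteristic frequency. A standalone NumPy sketch of the same mapping (plain floats, no Brian units):

```python
import numpy as np

min_freq, max_freq, N = 50.0, 1000.0, 300  # Hz, Hz, number of coincidence detectors

# Log-spaced characteristic frequencies, as in synapses.delay above
freqs = np.exp(np.linspace(np.log(min_freq), np.log(max_freq), N))
delays = 1.0 / freqs  # each delay is one period of its characteristic frequency
```

The first detector's delay is 1/(50 Hz) = 20 ms (the script's `max_delay`) and the last is 1/(1000 Hz) = 1 ms, with delays strictly decreasing in between.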

##### Example: two_synapses (synapses)¶

Two synapses among several possibilities: synapses from neuron 0 to neurons 2 and 3.

```
from brian import *
P=NeuronGroup(2,model='dv/dt=1/(10*ms):1',threshold=1,reset=0)
Q=NeuronGroup(4,model='v:1')
S=Synapses(P,Q,model='w:1',pre='v+=w')
M=StateMonitor(Q,'v',record=True)
S[0,2]=True
S[0,3]=True
S.w[0,:]=[1.,.7]
S.delay[0,:]=[.5*ms,.7*ms]
run(40*ms)
for i in range(4):
    plot(M.times/ms,M[i]+i*2,'k')
show()
```

##### Example: transient_sync (synapses)¶

Transient synchronisation in a population of noisy IF neurons with distance-dependent synaptic weights (organised as a ring)

```
from brian import *
import time
tau = 10 * ms
N = 100
v0 = 5 * mV
sigma = 4 * mV
group = NeuronGroup(N, model='dv/dt=(v0-v)/tau + sigma*xi/tau**.5 : volt', \
threshold=10 * mV, reset=0 * mV)
C = Synapses(group,model='w:1',pre='v+=w')
C[:,:]=True
C.w='.4 * mV * cos(2. * pi * (i - j) * 1. / N)'
S = SpikeMonitor(group)
R = PopulationRateMonitor(group)
group.v = rand(N) * 10 * mV
run(5000 * ms,report='text')
subplot(211)
raster_plot(S)
subplot(223)
imshow(C.w[:].reshape((N,N)), interpolation='nearest')
title('Synaptic connections')
subplot(224)
plot(R.times / ms, R.smooth_rate(2 * ms, filter='flat'))
title('Firing rate')
show()
```

##### Example: synapse_construction (synapses)¶

An example of constructing synapses.

```
from brian import *
import time
N=10
P=NeuronGroup(N,model='dv/dt=1/(10*ms):1',threshold=1,reset=0)
Q=NeuronGroup(N,model='v:1')
S=Synapses(P,Q,model='w:1',pre='v+=w')
S[:,:]='i==j'
S.w='2*i'
M=StateMonitor(Q,'v',record=True)
run(40*ms)
for i in range(N):
    plot(M.times/ms,M[i]+i*2,'k')
show()
```

##### Example: CUBA (synapses)¶

CUBA example with delays.

Timings:

- Connection (no delay): 3.5 s
- DelayConnection: 5.7 s
- Synapses (with precomputed offsets): 6.6 s # 6.9 s
- Synapses with weave: 6.4 s
- Synapses with zero delays: 5.2 s

```
from brian import *
import time
start_time = time.time()
taum = 20 * ms
taue = 5 * ms
taui = 10 * ms
Vt = -50 * mV
Vr = -60 * mV
El = -49 * mV
eqs = Equations('''
dv/dt = (ge+gi-(v-El))/taum : volt
dge/dt = -ge/taue : volt
dgi/dt = -gi/taui : volt
''')
P = NeuronGroup(4000, model=eqs, threshold=Vt, reset=Vr, refractory=5 * ms)
P.v = Vr
P.ge = 0 * mV
P.gi = 0 * mV
Pe = P.subgroup(3200)
Pi = P.subgroup(800)
we = (60 * 0.27 / 10) * mV # excitatory synaptic weight (voltage)
wi = (-20 * 4.5 / 10) * mV # inhibitory synaptic weight
Se = Synapses(Pe, P, model = 'w : 1', pre = 'ge += we')
Si = Synapses(Pi, P, model = 'w : 1', pre = 'gi += wi')
Se[:,:]=0.02
Si[:,:]=0.02
Se.delay='rand()*ms'
Si.delay='rand()*ms'
P.v = Vr + rand(len(P)) * (Vt - Vr)
# Record the number of spikes
Me = PopulationSpikeCounter(Pe)
Mi = PopulationSpikeCounter(Pi)
# A population rate monitor
M = PopulationRateMonitor(P)
print "Network construction time:", time.time() - start_time, "seconds"
print len(P), "neurons in the network"
print "Simulation running..."
run(1 * msecond)
start_time = time.time()
run(1 * second)
duration = time.time() - start_time
print "Simulation time:", duration, "seconds"
print Me.nspikes, "excitatory spikes"
print Mi.nspikes, "inhibitory spikes"
plot(M.times / ms, M.smooth_rate(2 * ms, 'gaussian'))
show()
```

##### Example: poisson_synapses (synapses)¶

This example shows how to efficiently simulate neurons with a large number of Poisson inputs targeting arbitrarily complex synapses. The approach is very similar to what the `PoissonInput` class does internally, but `PoissonInput` cannot be combined with the `Synapses` class. You could also just use many `PoissonGroup` objects as inputs, but this is very slow and memory consuming.

```
from brian import *
# Poisson inputs
M = 1000 # number of Poisson inputs
max_rate = 100
# Neurons
N = 50 # number of neurons
tau = 10 * ms
E_exc = 0 * mV
E_L = -70 * mV
G = NeuronGroup(N, model='dvm/dt = -(vm - E_L)/tau : mV')
G.rest()
# Dummy neuron group
P = NeuronGroup(1, 'v : 1', threshold= -1, reset=0) # spikes every timestep
# time varying rate
def varying_rate(t):
    return defaultclock.dt * max_rate * (0.5 + 0.5 * sin(2 * pi * 5 * t))
# Synaptic connections: binomial(cellM, varying_rate(t)) gives the number of
# events per timestep. The synapse model is a conductance-based instantaneous
# jump in postsynaptic membrane potential
S = Synapses(P, G, model='''
J : 1
cellM : 1
''',
pre='vm += binomial(cellM, varying_rate(t)) * J * (E_exc - vm)')
S[:, :] = True
S.cellM = M #we need one value for M per cell, so that binomial is vectorized
S.J = 0.0005
mon = StateMonitor(G, 'vm', record=True)
run(1 * second, report='text')
mon.plot()
show()
```
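The trick in the `pre` code is that, per timestep `dt`, each of the `M` independent Poisson inputs fires with probability `rate*dt`, so the total number of input events in a timestep is a single `binomial(M, rate*dt)` draw instead of `M` separate draws. A quick NumPy check of this equivalence (constant rate, assumed values, outside Brian):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 1000        # number of Poisson inputs
rate = 100.0    # Hz
dt = 1e-4       # timestep, in seconds
p = rate * dt   # spike probability per input per timestep
steps = 20000

# Direct simulation: M independent Bernoulli draws per timestep
counts_direct = (rng.random((steps, M)) < p).sum(axis=1)

# Equivalent: one binomial draw per timestep, as in the 'pre' code above
counts_binom = rng.binomial(M, p, size=steps)
```

Both have mean `M*rate*dt` = 10 events per timestep; the binomial version needs one random number per timestep instead of `M`.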

##### Example: noisy_ring (synapses)¶

Integrate-and-fire neurons with noise

```
from brian import *
tau = 10 * ms
sigma = .5
N = 100
J = -1
mu = 2
eqs = """
dv/dt=mu/tau+sigma/tau**.5*xi : 1
"""
group = NeuronGroup(N, model=eqs, threshold=1, reset=0)
C = Synapses(group, model='w:1', pre='v+=w')
C[:,:]='j==((i+1)%N)'
C.w=J
S = SpikeMonitor(group)
trace = StateMonitor(group, 'v', record=True)
run(500 * ms)
i, t = S.spikes[-1]
subplot(211)
raster_plot(S)
subplot(212)
plot(trace.times / ms, trace[0])
show()
```

##### Example: barrelcortex (synapses)¶

Late Emergence of the Whisker Direction Selectivity Map in the Rat Barrel Cortex Kremer Y, Leger JF, Goodman DF, Brette R, Bourdieu L (2011). J Neurosci 31(29):10689-700.

Development of direction maps with pinwheels in the barrel cortex. Whiskers are deflected with random moving bars. N.B.: network construction can be long.

In this version, STDP is faster than in the paper so that the script runs in just a few minutes.

Original time: 4 min 13 s (without construction); with Synapses: 4 min 36 s.

```
from brian import *
import time
# Uncomment if you have a C compiler
# set_global_preferences(useweave=True,usecodegen=True,usecodegenweave=True,usenewpropagate=True,usecstdp=True)
t1=time.time()
# PARAMETERS
# Neuron numbers
M4,M23exc,M23inh=22,25,12 # side of each barrel (in neurons)
N4,N23exc,N23inh=M4**2,M23exc**2,M23inh**2 # neurons per barrel
barrelarraysize=5 # Choose 3 or 4 if memory error
Nbarrels=barrelarraysize**2
# Stimulation
stim_change_time = 5*ms
Fmax=.5/stim_change_time # maximum firing rate in layer 4 (.5 spike / stimulation)
# Neuron parameters
taum,taue,taui=10*ms,2*ms,25*ms
El=-70*mV
Vt,vt_inc,tauvt=-55*mV,2*mV,50*ms # adaptive threshold
# STDP
taup,taud=5*ms,25*ms
Ap,Ad=.05,-.04
# EPSPs/IPSPs
EPSP,IPSP = 1*mV,-1*mV
EPSC = EPSP * (taue/taum)**(taum/(taue-taum))
IPSC = IPSP * (taui/taum)**(taum/(taui-taum))
Ap,Ad=Ap*EPSC,Ad*EPSC
# Model: IF with adaptive threshold
eqs='''
dv/dt=(ge+gi+El-v)/taum : volt
dge/dt=-ge/taue : volt
dgi/dt=-gi/taui : volt
dvt/dt=(Vt-vt)/tauvt : volt # adaptation
x : 1
y : 1
'''
# Tuning curve
tuning=lambda theta:clip(cos(theta),0,Inf)*Fmax
# Layer 4
layer4=PoissonGroup(N4*Nbarrels)
barrels4 = dict(((i, j), layer4.subgroup(N4)) for i in xrange(barrelarraysize) for j in xrange(barrelarraysize))
barrels4active = dict((ij, False) for ij in barrels4)
barrelindices = dict((ij, slice(b._origin, b._origin+len(b))) for ij, b in barrels4.iteritems())
layer4.selectivity = zeros(len(layer4))
for (i, j), inds in barrelindices.iteritems():
    layer4.selectivity[inds]=linspace(0,2*pi,N4)
# Layer 2/3
layer23=NeuronGroup(Nbarrels*(N23exc+N23inh),model=eqs,threshold='v>vt',reset='v=El;vt+=vt_inc',refractory=2*ms)
layer23.v=El
layer23.vt=Vt
# Layer 2/3 excitatory
layer23exc=layer23.subgroup(Nbarrels*N23exc)
x,y=meshgrid(arange(M23exc)*1./M23exc,arange(M23exc)*1./M23exc)
x,y=x.flatten(),y.flatten()
barrels23 = dict(((i, j), layer23exc.subgroup(N23exc)) for i in xrange(barrelarraysize) for j in xrange(barrelarraysize))
for i in range(barrelarraysize):
    for j in range(barrelarraysize):
        barrels23[i,j].x=x+i
        barrels23[i,j].y=y+j
# Layer 2/3 inhibitory
layer23inh=layer23.subgroup(Nbarrels*N23inh)
x,y=meshgrid(arange(M23inh)*1./M23inh,arange(M23inh)*1./M23inh)
x,y=x.flatten(),y.flatten()
barrels23inh = dict(((i, j), layer23inh.subgroup(N23inh)) for i in xrange(barrelarraysize) for j in xrange(barrelarraysize))
for i in range(barrelarraysize):
    for j in range(barrelarraysize):
        barrels23inh[i,j].x=x+i
        barrels23inh[i,j].y=y+j
print "Building synapses, please wait..."
# Feedforward connections
feedforward=Synapses(layer4,layer23exc,
model='''w:volt
A_pre:1
A_post:1''',
pre='''ge+=w
A_pre=A_pre*exp((lastupdate-t)/taup)+Ap
A_post=A_post*exp((lastupdate-t)/taud)
w=clip(w+A_post,0,EPSC)''',
post='''
A_pre=A_pre*exp((lastupdate-t)/taup)
A_post=A_post*exp((lastupdate-t)/taud)+Ad
w=clip(w+A_pre,0,EPSC)''')
for i in range(barrelarraysize):
    for j in range(barrelarraysize):
        feedforward[barrels4[i,j],barrels23[i,j]]=.5
        feedforward.w[barrels4[i,j],barrels23[i,j]]=EPSC*.5
# Excitatory lateral connections
recurrent_exc=Synapses(layer23exc,layer23,model='w:volt',pre='ge+=w')
recurrent_exc[layer23exc,layer23exc]='.15*exp(-.5*(((layer23exc.x[i]-layer23exc.x[j])/.4)**2+((layer23exc.y[i]-layer23exc.y[j])/.4)**2))'
recurrent_exc.w[layer23exc,layer23exc]=EPSC*.3
recurrent_exc[layer23exc,layer23inh]='.15*exp(-.5*(((layer23exc.x[i]-layer23inh.x[j])/.4)**2+((layer23exc.y[i]-layer23inh.y[j])/.4)**2))'
recurrent_exc.w[layer23exc,layer23inh]=EPSC
# Inhibitory lateral connections
recurrent_inh=Synapses(layer23inh,layer23exc,model='w:volt',pre='gi+=w')
recurrent_inh[:,:]='exp(-.5*(((layer23inh.x[i]-layer23exc.x[j])/.2)**2+((layer23inh.y[i]-layer23exc.y[j])/.2)**2))'
recurrent_inh.w=IPSC
# Stimulation
stimspeed = 1./stim_change_time # speed at which the bar of stimulation moves
direction = 0.0
stimzonecentre = ones(2)*barrelarraysize/2.
stimcentre,stimnorm = zeros(2),zeros(2)
stimradius = (11*stim_change_time*stimspeed+1)*.5
stimradius2 = stimradius**2
def new_direction():
    global direction
    direction = rand()*2*pi
    stimnorm[:] = (cos(direction), sin(direction))
    stimcentre[:] = stimzonecentre-stimnorm*stimradius
@network_operation
def stimulation():
    global direction, stimcentre
    stimcentre += stimspeed*stimnorm*defaultclock.dt
    if sum((stimcentre-stimzonecentre)**2)>stimradius2:
        new_direction()
    for (i, j), b in barrels4.iteritems():
        whiskerpos = array([i,j], dtype=float)+0.5
        isactive = abs(dot(whiskerpos-stimcentre, stimnorm))<.5
        if barrels4active[i, j]!=isactive:
            barrels4active[i, j] = isactive
            b.rate = float(isactive)*tuning(layer4.selectivity[barrelindices[i, j]]-direction)
new_direction()
t2=time.time()
print "Construction time:",t2-t1,"s"
run(5*second,report='text')
figure()
# Preferred direction
# perhaps we need to add presynaptic and postsynaptic with 2D/3D access
selectivity=array([mean(array(feedforward.w[feedforward.synapses_post[i][:]])*exp(layer4.selectivity[feedforward.presynaptic[feedforward.synapses_post[i][:]]]*1j)) for i in range(len(layer23exc))])
selectivity=(arctan2(selectivity.imag,selectivity.real) % (2*pi))*180./pi
I=zeros((barrelarraysize*M23exc,barrelarraysize*M23exc))
ix=array(around(layer23exc.x*M23exc),dtype=int)
iy=array(around(layer23exc.y*M23exc),dtype=int)
I[iy,ix]=selectivity
imshow(I)
hsv()
colorbar()
for i in range(1,barrelarraysize+1):
    plot([i*max(ix)/barrelarraysize,i*max(ix)/barrelarraysize],[0,max(iy)],'k')
    plot([0,max(ix)],[i*max(iy)/barrelarraysize,i*max(iy)/barrelarraysize],'k')
figure()
hist(selectivity)
show()
```

##### Example: short_term_plasticity2 (synapses)¶

Example with short term plasticity, with event-driven updates defined by differential equations.

```
from brian import *
tau_e = 3 * ms
taum = 10 * ms
A_SE = 250 * pA
Rm = 100 * Mohm
N = 10
eqs = '''
dx/dt=rate : 1
rate : Hz
'''
input = NeuronGroup(N, model=eqs, threshold=1., reset=0)
input.rate = linspace(5 * Hz, 30 * Hz, N)
eqs_neuron = '''
dv/dt=(Rm*i-v)/taum:volt
di/dt=-i/tau_e:amp
'''
neuron = NeuronGroup(N, model=eqs_neuron)
taud=1*ms
tauf=100*ms
U=.1
#taud=100*ms
#tauf=10*ms
#U=.6
S=Synapses(input,neuron,
model='''w : 1
dx/dt=(1-x)/taud : 1 (event-driven)
du/dt=(U-u)/tauf : 1 (event-driven)
''',
pre='''i+=w*u*x
x*=(1-u)
u+=U*(1-u)''')
S[:,:]='i==j' # one to one connection
S.w=A_SE
# Initialization of STP variables
S.x = 1
S.u = U
trace = StateMonitor(neuron, 'v', record=[0, N - 1])
run(1000 * ms)
subplot(211)
plot(trace.times / ms, trace[0] / mV)
title('Vm')
subplot(212)
plot(trace.times / ms, trace[N - 1] / mV)
title('Vm')
show()
```

##### Example: one_synapse (synapses)¶

One synapse

```
from brian import *
P=NeuronGroup(1,model='dv/dt=1/(10*ms):1',threshold=1,reset=0)
Q=NeuronGroup(1,model='v:1')
S=Synapses(P,Q,model='w:1',pre='v+=w')
M=StateMonitor(Q,'v',record=True)
S[0,0]=True
S.w[0,0]=1.
S.delay[0,0]=.5*ms
run(40*ms)
plot(M.times/ms,M[0])
show()
```

##### Example: one_synapse_bis (synapses)¶

One synapse among several possibilities: a synapse from neuron 2 to neuron 3.

```
from brian import *
P=NeuronGroup(5,model='dv/dt=1/(10*ms):1',threshold=1,reset=0)
Q=NeuronGroup(4,model='v:1')
S=Synapses(P,Q,model='w:1',pre='v+=w')
M=StateMonitor(Q,'v',record=True)
S[2,3]=True
S.w[2,3]=1.
S.delay[2,3]=.5*ms
run(40*ms)
for i in range(4):
    plot(M.times/ms,M[i]+i*2,'k')
show()
```

##### Example: STDP1_bis (synapses)¶

Spike-timing dependent plasticity Adapted from Song, Miller and Abbott (2000) and Song and Abbott (2001)

This simulation takes a long time!

Original time: 278 s; with DelayConnection: 478 s; new time: 416 s.

```
from brian import *
from time import time
N = 1000
taum = 10 * ms
taupre = 20 * ms
taupost = taupre
Ee = 0 * mV
vt = -54 * mV
vr = -60 * mV
El = -74 * mV
taue = 5 * ms
F = 15 * Hz
gmax = .01
dApre = .01
dApost = -dApre * taupre / taupost * 1.05
dApost *= gmax
dApre *= gmax
eqs_neurons = '''
dv/dt=(ge*(Ee-vr)+El-v)/taum : volt # the synaptic current is linearized
dge/dt=-ge/taue : 1
'''
input = PoissonGroup(N, rates=F)
neurons = NeuronGroup(1, model=eqs_neurons, threshold=vt, reset=vr)
S = Synapses(input, neurons,
model='''w:1
dApre/dt=-Apre/taupre : 1 (event-driven)
dApost/dt=-Apost/taupost : 1 (event-driven)''',
pre='''ge+=w
Apre+=dApre
w=clip(w+Apost,0,gmax)''',
post='''Apost+=dApost
w=clip(w+Apre,0,gmax)''')
neurons.v = vr
S[:,:]=True
S.w='rand()*gmax'
rate = PopulationRateMonitor(neurons)
start_time = time()
run(100 * second, report='text')
print "Simulation time:", time() - start_time
subplot(311)
plot(rate.times / second, rate.smooth_rate(100 * ms))
subplot(312)
plot(S.w[:] / gmax, '.')
subplot(313)
hist(S.w[:] / gmax, 20)
show()
```

##### Example: delayed_stdp (synapses)¶

Delayed STDP

```
from brian import *
import time
N = 1
taum = 10 * ms
taupre = 20 * ms
taupost = taupre
Ee = 0 * mV
vt = -54 * mV
vr = -74 * mV
El = -74 * mV
taue = 5 * ms
F = 20 * Hz
dApre = .1
dApost = -dApre * taupre / taupost * 2.
eqs_neurons = '''
dv/dt=(ge*(Ee-vr)+El-v)/taum : volt # the synaptic current is linearized
dge/dt=-ge/taue : 1
'''
input = PoissonGroup(N, rates=F)
neurons = NeuronGroup(1, model=eqs_neurons, threshold=vt, reset=vr)
S = Synapses(input, neurons,
model='''w:1
dApre/dt=-Apre/taupre : 1 #(event-driven)
dApost/dt=-Apost/taupost : 1 #(event-driven)''',
pre=('ge+=w',
'''w=clip(w+Apost,0,inf)
Apre+=dApre'''),
post='''Apost+=dApost
w=clip(w+Apre,0,inf)''')
neurons.v = vr
S[:,:]=True
S.w=10
S.delay[1][0,0]=3*ms # delayed trace (try 0 ms to see the difference)
M=StateMonitor(S,'w',record=0)
Mpre=StateMonitor(S,'Apre',record=0)
Mpost=StateMonitor(S,'Apost',record=0)
Mv=StateMonitor(neurons,'v',record=0)
run(10*second,report='text')
subplot(211)
plot(M.times/ms,M[0])
plot(M.times/ms,Mpre[0],'r')
plot(M.times/ms,Mpost[0],'k')
subplot(212)
plot(Mv.times/ms,Mv[0])
show()
```

##### Example: nonlinear_synapses (synapses)¶

NMDA synapses

```
from brian import *
import time
a=1/(10*ms)
b=1/(10*ms)
c=1/(10*ms)
input=NeuronGroup(2,model='dv/dt=1/(10*ms):1', threshold=1, reset=0)
neurons = NeuronGroup(1, model="""dv/dt=(gtot-v)/(10*ms) : 1
gtot : 1""")
S=Synapses(input,neurons,
model='''dg/dt=-a*g+b*x*(1-g) : 1
dx/dt=-c*x : 1
w : 1 # synaptic weight
''',
pre='x+=w') # NMDA synapses
neurons.gtot=S.g
S[:,:]=True
S.w=[1.,10.]
input.v=[0.,0.5]
M=StateMonitor(S,'g',record=True)
Mn=StateMonitor(neurons,'v',record=0)
run(100*ms)
subplot(211)
plot(M.times/ms,M[0])
plot(M.times/ms,M[1])
subplot(212)
plot(Mn.times/ms,Mn[0])
show()
```

##### Example: STDP1 (synapses)¶

Spike-timing dependent plasticity Adapted from Song, Miller and Abbott (2000) and Song and Abbott (2001)

This simulation takes a long time!

```
from brian import *
from time import time
N = 1000
taum = 10 * ms
taupre = 20 * ms
taupost = taupre
Ee = 0 * mV
vt = -54 * mV
vr = -60 * mV
El = -74 * mV
taue = 5 * ms
F = 15 * Hz
gmax = .01
dApre = .01
dApost = -dApre * taupre / taupost * 1.05
dApost *= gmax
dApre *= gmax
eqs_neurons = '''
dv/dt=(ge*(Ee-vr)+El-v)/taum : volt # the synaptic current is linearized
dge/dt=-ge/taue : 1
'''
input = PoissonGroup(N, rates=F)
neurons = NeuronGroup(1, model=eqs_neurons, threshold=vt, reset=vr)
S = Synapses(input, neurons,
model='''w:1
Apre:1
Apost:1''',
pre='''ge+=w
Apre=Apre*exp((lastupdate-t)/taupre)+dApre
Apost=Apost*exp((lastupdate-t)/taupost)
w=clip(w+Apost,0,gmax)''',
post='''
Apre=Apre*exp((lastupdate-t)/taupre)
Apost=Apost*exp((lastupdate-t)/taupost)+dApost
w=clip(w+Apre,0,gmax)''')
neurons.v = vr
S[:,:]=True
S.w='rand()*gmax'
rate = PopulationRateMonitor(neurons)
start_time = time()
run(100 * second, report='text')
print "Simulation time:", time() - start_time
subplot(311)
plot(rate.times / second, rate.smooth_rate(100 * ms))
subplot(312)
plot(S.w[:] / gmax, '.')
subplot(313)
hist(S.w[:] / gmax, 20)
show()
```
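The update rules above implement the standard pair-based exponential STDP window: a pre-then-post pair separated by a lag Δt changes the weight by `dApre*exp(-Δt/taupre)` (potentiation), and a post-then-pre pair by `dApost*exp(-|Δt|/taupost)` (depression). A standalone sketch of that window for a single spike pair, using the script's values before the `gmax` scaling (plain floats, no Brian):

```python
import numpy as np

taupre = taupost = 20e-3  # trace time constants, in seconds
dApre = 0.01
dApost = -dApre * 1.05    # depression slightly stronger than potentiation, as above

def dw_for_pair(lag):
    """Weight change for one pre/post pair; lag > 0 means pre fires before post."""
    if lag > 0:
        return dApre * np.exp(-lag / taupre)    # post spike reads the decayed Apre trace
    else:
        return dApost * np.exp(lag / taupost)   # pre spike reads the decayed Apost trace
```

For a 10 ms lag this gives a potentiation of `0.01*exp(-0.5)`, and the mirrored lag gives a slightly larger depression, which is what drives the weights towards the bimodal distribution in the histogram.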

##### Example: short_term_plasticity (synapses)¶

Example with short term plasticity.

```
from brian import *
tau_e = 3 * ms
taum = 10 * ms
A_SE = 250 * pA
Rm = 100 * Mohm
N = 10
eqs = '''
dx/dt=rate : 1
rate : Hz
'''
input = NeuronGroup(N, model=eqs, threshold=1., reset=0)
input.rate = linspace(5 * Hz, 30 * Hz, N)
eqs_neuron = '''
dv/dt=(Rm*i-v)/taum:volt
di/dt=-i/tau_e:amp
'''
neuron = NeuronGroup(N, model=eqs_neuron)
taud=1*ms
tauf=100*ms
U=.1
#taud=100*ms
#tauf=10*ms
#U=.6
S=Synapses(input,neuron,
model='''x : 1
u : 1
w : 1''',
pre='''u=U+(u-U)*exp(-(t-lastupdate)/tauf)
x=1+(x-1)*exp(-(t-lastupdate)/taud)
i+=w*u*x
x*=(1-u)
u+=U*(1-u)''')
S[:,:]='i==j' # one to one connection
S.w=A_SE
# Initialization of STP variables
S.x = 1
S.u = U
trace = StateMonitor(neuron, 'v', record=[0, N - 1])
run(1000 * ms)
subplot(211)
plot(trace.times / ms, trace[0] / mV)
title('Vm')
subplot(212)
plot(trace.times / ms, trace[N - 1] / mV)
title('Vm')
show()
```
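The exponential updates in the `pre` code above are the exact between-spike solutions of the STP differential equations: `u = U + (u-U)*exp(-(t-lastupdate)/tauf)` solves `du/dt = (U-u)/tauf`, so the variables only need updating when a spike arrives. A quick numerical check against brute-force Euler integration (assumed values for `u` and the inter-spike interval):

```python
import numpy as np

tauf, U = 100e-3, 0.1   # facilitation time constant and baseline, as above
u0, isi = 0.6, 30e-3    # value of u just after a spike, and the inter-spike interval

# Closed-form update used in the 'pre' code
u_exact = U + (u0 - U) * np.exp(-isi / tauf)

# Reference: Euler integration of du/dt = (U - u)/tauf over the same interval
dt = 1e-6
u = u0
for _ in range(int(round(isi / dt))):
    u += dt * (U - u) / tauf
```

The two agree to within the Euler discretisation error; the same argument applies to `x` with `taud`, which is why the event-driven version in `short_term_plasticity2` produces the same dynamics.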

##### Example: gapjunctions (synapses)¶

Neurons with gap junctions

```
from brian import *
N = 10
v0=1.05
tau=10*ms
eqs = '''
dv/dt=(v0-v+Igap)/tau : 1
Igap : 1 # gap junction current
'''
neurons = NeuronGroup(N, model=eqs, threshold=1, reset=0)
neurons.v=linspace(0,1,N)
trace = StateMonitor(neurons, 'v', record=[0, 5])
S=Synapses(neurons,model='''w:1 # gap junction conductance
Igap=w*(v_pre-v_post): 1''')
S[:,:]=True
neurons.Igap=S.Igap
S.w=.02
run(500*ms)
plot(trace.times / ms, trace[0])
plot(trace.times / ms, trace[5])
show()
```
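Because the gap junction current `Igap = w*(v_pre - v_post)` always pushes the two membrane potentials towards each other, coupled subthreshold units equalise their voltages: the difference decays at rate `(1+2*w)/tau`. A minimal Euler sketch for two coupled leaky units (assumed subthreshold parameters, so no spiking, outside Brian):

```python
tau, v0, w = 10e-3, 0.9, 0.02  # v0 below threshold, so neither unit spikes here
dt = 1e-5
v1, v2 = 0.0, 1.0
initial_diff = abs(v1 - v2)
for _ in range(5000):  # 50 ms
    i1 = w * (v2 - v1)  # gap junction current into unit 1
    i2 = w * (v1 - v2)  # equal and opposite current into unit 2
    v1, v2 = v1 + dt * (v0 - v1 + i1) / tau, v2 + dt * (v0 - v2 + i2) / tau
final_diff = abs(v1 - v2)
```

After 50 ms both units have relaxed close to `v0` and the voltage difference has shrunk by a factor of roughly `exp(-5.2)`; in the spiking script above the same coupling tends to synchronise the firing of the N neurons.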

##### Example: jeffress (synapses)¶

Jeffress model, adapted with spiking neuron models. A sound source (white noise) is moving around the head. Delay differences between the two ears are used to determine the azimuth of the source. Delays are mapped to a neural place code using delay lines (each neuron receives input from both ears, with different delays).

Romain Brette

```
from brian import *
from time import time
defaultclock.dt = .02 * ms
dt = defaultclock.dt
# Sound
sound = TimedArray(10 * randn(50000)) # white noise
# Ears and sound motion around the head (constant angular speed)
sound_speed = 300 * metre / second
interaural_distance = 20 * cm # big head!
max_delay = interaural_distance / sound_speed
print "Maximum interaural delay:", max_delay
angular_speed = 2 * pi * radian / second # 1 turn/second
tau_ear = 1 * ms
sigma_ear = .1
eqs_ears = '''
dx/dt=(sound(t-delay)-x)/tau_ear+sigma_ear*(2./tau_ear)**.5*xi : 1
delay=distance*sin(theta) : second
distance : second # distance to the centre of the head in time units
dtheta/dt=angular_speed : radian
'''
ears = NeuronGroup(2, model=eqs_ears, threshold=1, reset=0, refractory=2.5 * ms)
ears.distance = [-.5 * max_delay, .5 * max_delay]
traces = StateMonitor(ears, 'x', record=True)
# Coincidence detectors
N = 300
tau = 1 * ms
sigma = .1
eqs_neurons = '''
dv/dt=-v/tau+sigma*(2./tau)**.5*xi : 1
'''
neurons = NeuronGroup(N, model=eqs_neurons, threshold=1, reset=0)
synapses = Synapses(ears,neurons,model='w:1',pre='v+=w')
synapses[:,:]=True
synapses.w=.5
synapses.delay[0, :] = linspace(0 * ms, 1.1 * max_delay, N)
synapses.delay[1, :] = linspace(0 * ms, 1.1 * max_delay, N)[::-1]
spikes = SpikeMonitor(neurons)
run(1*ms)
t1=time()
run(1000 * ms)
t2=time()
print "It took",t2-t1,"s"
raster_plot(spikes)
show()
```

##### Example: probabilistic_synapses2 (synapses)¶

Probabilistic synapses - Katz model

```
from brian import *
from numpy.random import binomial
Nin=1000
Nout=25
input=PoissonGroup(Nin,rates=2*Hz)
tau=10*ms
neurons=NeuronGroup(Nout,model="dv/dt=-v/tau:1",threshold=35*50./5,reset=0)
S=Synapses(input,neurons,model='''w:1 # PSP size for one quantum
nvesicles:1 # Number of vesicles (n is reserved)
p:1 # Release probability''',
pre ='''v+=binomial(nvesicles,p)*w''')
S[:,:]=True # all-to-all
S.w='rand()'
S.nvesicles=50
S.p='rand()'
spikes=SpikeMonitor(neurons) # do not reuse S, which is the Synapses object
run(1000*ms)
raster_plot(spikes)
show()
```

##### Example: Diesmann_et_al_1999 (synapses)¶

###### Synfire chains¶

M. Diesmann et al. (1999). Stable propagation of synchronous spiking in cortical neural networks. Nature 402, 529-533.

```
from brian import *
# Neuron model parameters
Vr = -70 * mV
Vt = -55 * mV
taum = 10 * ms
taupsp = 0.325 * ms
weight = 4.86 * mV
# Neuron model
eqs = '''
dV/dt=(-(V-Vr)+x)*(1./taum) : volt
dx/dt=(-x+y)*(1./taupsp) : volt
dy/dt=-y*(1./taupsp)+25.27*mV/ms+\
(39.24*mV/ms**0.5)*xi : volt
'''
# Neuron groups
P = NeuronGroup(N=1000, model=eqs,
threshold=Vt, reset=Vr, refractory=1 * ms)
Pinput = PulsePacket(t=50 * ms, n=85, sigma=1 * ms)
# The network structure
Pgp = [ P.subgroup(100) for i in range(10)]
C = Synapses(P, P, model='w:volt', pre='y+=w')
for i in range(9):
    C[Pgp[i], Pgp[i + 1]]=True
    C.w[Pgp[i], Pgp[i + 1]]=weight
Cinput = Synapses(Pinput, Pgp[0], model='w:volt', pre='y+=w')
Cinput[:,:]=True
Cinput.w[:,:]=weight
# Record the spikes
Mgp = [SpikeMonitor(p) for p in Pgp]
Minput = SpikeMonitor(Pinput)
monitors = [Minput] + Mgp
# Setup the network, and run it
P.V = Vr + rand(len(P)) * (Vt - Vr)
run(100 * ms)
# Plot result
raster_plot(showgrouplines=True, *monitors)
show()
```

##### Example: multiple_delays (synapses)¶

Multiple delays

```
from brian import *
P=NeuronGroup(1,model='dv/dt=1/(20*ms):1',threshold=1,reset=0)
Q=NeuronGroup(1,model='v:1')
S=Synapses(P,Q,model='w:1',pre='v+=w')
M=StateMonitor(Q,'v',record=True)
S[0,0]=2
S.w[0,0,0]=1.
S.w[0,0,1]=.5
S.delay[0,0,0]=.5*ms
S.delay[0,0,1]=1*ms
run(60*ms)
plot(M.times/ms,M[0])
show()
```

##### Example: probabilistic_synapses (synapses)¶

Probabilistic synapses

Seems to work.

```
from brian import *
N=20
tau=5*ms
input=PoissonGroup(2,rates=20*Hz)
neurons=NeuronGroup(N,model='dv/dt=-v/tau : 1')
S=Synapses(input,neurons,model="""w : 1
p : 1 # transmission probability""",
pre="v+=w*(rand()<p)")
# Transmission probabilities
S[:,:]=True
S.w=0.5
S.p[0,:]=linspace(0,1,N) # transmission probability between 0 and 1
S.p[1,:]=linspace(0,1,N)[::-1] # reverse order for the second input
M=StateMonitor(neurons,'v',record=True)
run(500*ms)
for i in range(N):
    plot(M.times/ms,M[i]+i,'k')
show()
```
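With `pre="v+=w*(rand()<p)"`, each presynaptic spike is transmitted with probability `p`, so the average postsynaptic jump per spike is `w*p`. A quick NumPy check of this expectation (assumed values for `w` and `p`):

```python
import numpy as np

rng = np.random.default_rng(1)
w, p, trials = 0.5, 0.3, 200000

# Each presynaptic spike delivers w with probability p, nothing otherwise
jumps = w * (rng.random(trials) < p)
mean_jump = jumps.mean()
```

This is why the two linspace ramps of `p` above produce a smooth gradient of mean depolarisation across the N neurons.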

##### Example: weightmonitor (synapses)¶

Monitoring synaptic variables. STDP example.

```
from brian import *
from time import time
N = 1000
taum = 10 * ms
tau_pre = 20 * ms
tau_post = tau_pre
Ee = 0 * mV
vt = -54 * mV
vr = -60 * mV
El = -74 * mV
taue = 5 * ms
F = 15 * Hz
gmax = .01
dA_pre = .01
dA_post = -dA_pre * tau_pre / tau_post * 1.05
dA_post *= gmax
dA_pre *= gmax
eqs_neurons = '''
dv/dt=(ge*(Ee-vr)+El-v)/taum : volt # the synaptic current is linearized
dge/dt=-ge/taue : 1
'''
input = PoissonGroup(N, rates=F)
neurons = NeuronGroup(1, model=eqs_neurons, threshold=vt, reset=vr)
S = Synapses(input, neurons,
model='''w:1
A_pre:1
A_post:1''',
pre='''ge+=w
A_pre=A_pre*exp((lastupdate-t)/tau_pre)+dA_pre
A_post=A_post*exp((lastupdate-t)/tau_post)
w=clip(w+A_post,0,gmax)''',
post='''
A_pre=A_pre*exp((lastupdate-t)/tau_pre)
A_post=A_post*exp((lastupdate-t)/tau_post)+dA_post
w=clip(w+A_pre,0,gmax)''')
neurons.v = vr
S[:,:]=True
S.w='rand()*gmax'
rate = PopulationRateMonitor(neurons)
M = StateMonitor(S,'w',record=[0,1]) # monitors synapses number 0 and 1
start_time = time()
run(10 * second, report='text')
print "Simulation time:", time() - start_time
figure()
subplot(311)
plot(rate.times / second, rate.smooth_rate(100 * ms))
subplot(312)
plot(S.w[:] / gmax, '.')
subplot(313)
hist(S.w[:] / gmax, 20)
figure()
plot(M.times,M[0]/gmax)
plot(M.times,M[1]/gmax)
show()
```

#### plasticity¶

##### Example: STDP2 (plasticity)¶

Spike-timing dependent plasticity Adapted from Song, Miller and Abbott (2000), Song and Abbott (2001) and van Rossum et al (2000).

This simulation takes a long time!

```
from brian import *
from time import time
N = 1000
taum = 10 * ms
tau_pre = 20 * ms
tau_post = tau_pre
Ee = 0 * mV
vt = -54 * mV
vr = -60 * mV
El = -74 * mV
taue = 5 * ms
gmax = 0.01
F = 15 * Hz
dA_pre = .01
dA_post = -dA_pre * tau_pre / tau_post * 2.5
eqs_neurons = '''
dv/dt=(ge*(Ee-vr)+El-v)/taum : volt # the synaptic current is linearized
dge/dt=-ge/taue : 1
'''
input = PoissonGroup(N, rates=F)
neurons = NeuronGroup(1, model=eqs_neurons, threshold=vt, reset=vr)
synapses = Connection(input, neurons, 'ge', weight=rand(len(input), len(neurons)) * gmax,
structure='dense')
neurons.v = vr
stdp = ExponentialSTDP(synapses, tau_pre, tau_post, dA_pre, dA_post, wmax=gmax, update='mixed')
rate = PopulationRateMonitor(neurons)
start_time = time()
run(100 * second, report='text')
print "Simulation time:", time() - start_time
subplot(311)
plot(rate.times / second, rate.smooth_rate(100 * ms))
subplot(312)
plot(synapses.W.todense() / gmax, '.')
subplot(313)
hist(synapses.W.todense() / gmax, 20)
show()
```

##### Example: short_term_plasticity2 (plasticity)¶

Network (CUBA) with short-term synaptic plasticity for excitatory synapses (Depressing at long timescales, facilitating at short timescales)

```
from brian import *
from time import time
eqs = '''
dv/dt = (ge+gi-(v+49*mV))/(20*ms) : volt
dge/dt = -ge/(5*ms) : volt
dgi/dt = -gi/(10*ms) : volt
'''
P = NeuronGroup(4000, model=eqs, threshold= -50 * mV, reset= -60 * mV)
P.v = -60 * mV + rand(4000) * 10 * mV
Pe = P.subgroup(3200)
Pi = P.subgroup(800)
Ce = Connection(Pe, P, 'ge', weight=1.62 * mV, sparseness=.02)
Ci = Connection(Pi, P, 'gi', weight= -9 * mV, sparseness=.02)
stp = STP(Ce, taud=200 * ms, tauf=20 * ms, U=.2)
M = SpikeMonitor(P)
rate = PopulationRateMonitor(P)
t1 = time()
run(1 * second)
t2 = time()
print "Simulation time:", t2 - t1, "s"
print M.nspikes, "spikes"
subplot(211)
raster_plot(M)
subplot(212)
plot(rate.times / ms, rate.smooth_rate(5 * ms))
show()
```

##### Example: STDP1 (plasticity)¶

Spike-timing dependent plasticity Adapted from Song, Miller and Abbott (2000) and Song and Abbott (2001)

This simulation takes a long time!

```
from brian import *
from time import time
N = 1000
taum = 10 * ms
tau_pre = 20 * ms
tau_post = tau_pre
Ee = 0 * mV
vt = -54 * mV
vr = -60 * mV
El = -74 * mV
taue = 5 * ms
F = 15 * Hz
gmax = .01
dA_pre = .01
dA_post = -dA_pre * tau_pre / tau_post * 1.05
eqs_neurons = '''
dv/dt=(ge*(Ee-vr)+El-v)/taum : volt # the synaptic current is linearized
dge/dt=-ge/taue : 1
'''
input = PoissonGroup(N, rates=F)
neurons = NeuronGroup(1, model=eqs_neurons, threshold=vt, reset=vr)
synapses = Connection(input, neurons, 'ge', weight=rand(len(input), len(neurons)) * gmax)
neurons.v = vr
#stdp=ExponentialSTDP(synapses,tau_pre,tau_post,dA_pre,dA_post,wmax=gmax)
## Explicit STDP rule
eqs_stdp = '''
dA_pre/dt=-A_pre/tau_pre : 1
dA_post/dt=-A_post/tau_post : 1
'''
dA_post *= gmax
dA_pre *= gmax
stdp = STDP(synapses, eqs=eqs_stdp, pre='A_pre+=dA_pre;w+=A_post',
post='A_post+=dA_post;w+=A_pre', wmax=gmax)
rate = PopulationRateMonitor(neurons)
start_time = time()
run(100 * second, report='text')
print "Simulation time:", time() - start_time
subplot(311)
plot(rate.times / second, rate.smooth_rate(100 * ms))
subplot(312)
plot(synapses.W.todense() / gmax, '.')
subplot(313)
hist(synapses.W.todense() / gmax, 20)
show()
```

##### Example: short_term_plasticity (plasticity)¶

Example with short term plasticity model Neurons with regular inputs and depressing synapses

```
from brian import *
tau_e = 3 * ms
taum = 10 * ms
A_SE = 250 * pA
Rm = 100 * Mohm
N = 10
eqs = '''
dx/dt=rate : 1
rate : Hz
'''
input = NeuronGroup(N, model=eqs, threshold=1., reset=0)
input.rate = linspace(5 * Hz, 30 * Hz, N)
eqs_neuron = '''
dv/dt=(Rm*i-v)/taum:volt
di/dt=-i/tau_e:amp
'''
neuron = NeuronGroup(N, model=eqs_neuron)
C = Connection(input, neuron, 'i')
C.connect_one_to_one(weight=A_SE)
stp = STP(C, taud=1 * ms, tauf=100 * ms, U=.1) # facilitation
#stp=STP(C,taud=100*ms,tauf=10*ms,U=.6) # depression
trace = StateMonitor(neuron, 'v', record=[0, N - 1])
run(1000 * ms)
subplot(211)
plot(trace.times / ms, trace[0] / mV)
title('Vm')
subplot(212)
plot(trace.times / ms, trace[N - 1] / mV)
title('Vm')
show()
```

#### interface¶

##### Example: interface (interface)¶

Interface example. Install cherrypy for this example, then run the script and go to http://localhost:8080 in your web browser. You can use cherrypy to write HTML interfaces to your code.

```
from brian import *
import cherrypy
import os.path
# The server is defined here
class MyInterface(object):
    @cherrypy.expose
    def index(self):
        # Redirect to the html page we wrote
        return '<meta HTTP-EQUIV="Refresh" content="0;URL=index.html">'

    @cherrypy.expose
    def runscript(self, we="1.62", wi="-9", **kwd):
        # 'runscript' is the script name; we and wi are the names of form fields
        we = float(we)
        wi = float(wi)
        # From minimalexample
        reinit_default_clock()
        eqs = '''
        dv/dt = (ge+gi-(v+49*mV))/(20*ms) : volt
        dge/dt = -ge/(5*ms) : volt
        dgi/dt = -gi/(10*ms) : volt
        '''
        P = NeuronGroup(4000, model=eqs, threshold=-50 * mV, reset=-60 * mV)
        P.v = -60 * mV + 10 * mV * rand(len(P))
        Pe = P.subgroup(3200)
        Pi = P.subgroup(800)
        Ce = Connection(Pe, P, 'ge')
        Ci = Connection(Pi, P, 'gi')
        Ce.connect_random(Pe, P, 0.02, weight=we * mV)
        Ci.connect_random(Pi, P, 0.02, weight=wi * mV)
        M = SpikeMonitor(P)
        run(.5 * second)
        clf()
        raster_plot(M)
        savefig('image.png')
        # Redirect to the html page we wrote
        return '<meta HTTP-EQUIV="Refresh" content="0;URL=results.html">'
# Set the directory for static files
current_dir = os.path.dirname(os.path.abspath(__file__))
conf = {'/': {'tools.staticdir.on':True,
'tools.staticdir.dir':current_dir}}
# Start the server
cherrypy.quickstart(MyInterface(), config=conf)
```
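The script above refers to two static pages, `index.html` (a form posting to `/runscript`) and `results.html` (showing `image.png`), which are not shown in the docs. The sketch below is a guess at suitable markup; only the form-field names `we` and `wi` and the `/runscript` action are fixed by the server code, everything else is up to you:

```python
# Hypothetical minimal index.html / results.html for the cherrypy example.
# The field names 'we' and 'wi' and the action '/runscript' must match the
# server code; the rest of the markup is illustrative only.
index_html = """<html><body>
<form action="/runscript" method="get">
  Excitatory weight (mV): <input type="text" name="we" value="1.62"><br>
  Inhibitory weight (mV): <input type="text" name="wi" value="-9"><br>
  <input type="submit" value="Run simulation">
</form>
</body></html>"""

results_html = """<html><body>
<h1>Results</h1>
<img src="image.png">
<p><a href="index.html">Run again</a></p>
</body></html>"""

open('index.html', 'w').write(index_html)
open('results.html', 'w').write(results_html)
```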

#### twister¶

##### Example: PeterDiehl (twister)¶

Peter Diehl’s entry for the 2012 Brian twister.

```
from brian import *
eqs = '''
dv/dt = ((-60.*mV-v)+(I_synE+I_synI+I_b)/(10.*nS))/(20*ms) : volt
I_synE = 3.*nS*ge*( 0.*mV-v) : amp
I_synI = 30.*nS*gi*(-80.*mV-v) : amp
I_b : amp
dge/dt = -ge/( 5.*ms) : 1
dgi/dt = -gi/(10.*ms) : 1
'''
P = NeuronGroup(10000, eqs, threshold=-50.*mV, refractory=5.*ms, reset=-60.*mV)
Pe = P.subgroup(8000)
Pi = P.subgroup(2000)
Ce = Connection(Pe, P, 'ge', weight=1., sparseness=0.02)
Cie = Connection(Pi, Pe, 'gi', weight=1., sparseness=0.02)
Cii = Connection(Pi, Pi, 'gi', weight=1., sparseness=0.02)
eqs_stdp = '''
dpre/dt = -pre/(20.*ms) : 1.0
dpost/dt = -post/(20.*ms) : 1.0
'''
nu = 0.1 # learning rate
alpha = 0.12 # controls the firing rate
stdp = STDP(Cie, eqs=eqs_stdp, pre='pre+= 1.; w+= nu*(post-alpha)',
post='post+= 1.; w+= nu*pre', wmin=0., wmax= 10.)
M = PopulationRateMonitor(Pe, bin = 1.)
P.I_b = 200.*pA #set the input current
run(10*second)
P.I_b = 600.*pA #increase the input and see how the rate adapts
run(10*second)
plot(M.times[0:-1]/second, M.rate[0:-1])
show()
```

##### Example: FriedemannZenke (twister)¶

Friedemann Zenke’s winning entry for the 2012 Brian twister.

```
# ###########################################
#
# Inhibitory synaptic plasticity in a recurrent network model
# (F. Zenke, 2011)
#
# Adapted from:
# Vogels, T. P., H. Sprekeler, F. Zenke, C. Clopath, and W. Gerstner.
# 'Inhibitory Plasticity Balances Excitation and Inhibition in Sensory
# Pathways and Memory Networks.' Science (November 10, 2011).
#
# ###########################################
from brian import *
# ###########################################
# Defining network model parameters
# ###########################################
NE = 8000 # Number of excitatory cells
NI = NE/4 # Number of inhibitory cells
w = 1.*nS # Basic weight unit
tau_ampa = 5.0*ms # Glutamatergic synaptic time constant
tau_gaba = 10.0*ms # GABAergic synaptic time constant
epsilon = 0.02 # Sparseness of synaptic connections
eta = 1e-2 # Learning rate
tau_stdp = 20*ms # STDP time constant
simtime = 10*second # Simulation time
# ###########################################
# Neuron model
# ###########################################
gl = 10.0*nsiemens # Leak conductance
el = -60*mV # Resting potential
er = -80*mV # Inhibitory reversal potential
vt = -50.*mV # Spiking threshold
memc = 200.0*pfarad # Membrane capacitance
bgcurrent = 200*pA # External current
eqs_neurons='''
dv/dt=(-gl*(v-el)-(g_ampa*w*v+g_gaba*(v-er)*w)+bgcurrent)/memc : volt
dg_ampa/dt = -g_ampa/tau_ampa : 1
dg_gaba/dt = -g_gaba/tau_gaba : 1
'''
# ###########################################
# Initialize neuron group
# ###########################################
neurons=NeuronGroup(NE+NI,model=eqs_neurons,threshold=vt,reset=el,refractory=5*ms)
Pe=neurons.subgroup(NE)
Pi=neurons.subgroup(NI)
# ###########################################
# Connecting the network
# ###########################################
con_e = Connection(Pe,neurons,'g_ampa',weight=0.3,sparseness=epsilon)
con_ie = Connection(Pi,Pe,'g_gaba',weight=1e-10,sparseness=epsilon)
con_ii = Connection(Pi,Pi,'g_gaba',weight=3,sparseness=epsilon)
# ###########################################
# Setting up monitors
# ###########################################
sm = SpikeMonitor(Pe)
# ###########################################
# Run without plasticity
# ###########################################
run(1*second)
# ###########################################
# Inhibitory Plasticity
# ###########################################
alpha = 3*Hz*tau_stdp*2 # Target rate parameter
gmax = 100 # Maximum inhibitory weight
eqs_stdp_inhib = '''
dA_pre/dt=-A_pre/tau_stdp : 1
dA_post/dt=-A_post/tau_stdp : 1
'''
stdp_ie = STDP(con_ie, eqs=eqs_stdp_inhib, pre='A_pre+=1.; w+=(A_post-alpha)*eta;',
post='A_post+=1.; w+=A_pre*eta;', wmax=gmax)
# ###########################################
# Run with plasticity
# ###########################################
run(simtime-1*second,report='text')
# ###########################################
# Make plots
# ###########################################
subplot(211)
raster_plot(sm,ms=1.)
title("Before")
xlabel("")
xlim(float(0.8*second/ms), float(1*second/ms))
subplot(212)
raster_plot(sm,ms=1.)
title("After")
xlim(float((simtime-0.2*second)/ms), float(simtime/ms))
show()
```

##### Example: QuentinPauluis (twister)¶

Quentin Pauluis’s entry for the 2012 Brian twister.

```
from brian import *
taum = 20 * ms # membrane time constant
taue = 5 * ms # excitatory synaptic time constant
taui = 10 * ms # inhibitory synaptic time constant
Vt = -50 * mV # spike threshold
Vr = -60 * mV # reset value
El = -49 * mV # resting potential
we = (60 * 0.27 / 10) * mV # excitatory synaptic weight
wi = (20 * 4.5 / 10) * mV # inhibitory synaptic weight
eqs = Equations('''
dV/dt = (ge-gi-(V-El))/taum : volt
dge/dt = -ge/taue : volt
dgi/dt = -gi/taui : volt
''')
G = NeuronGroup(4000, model=eqs, threshold=Vt, reset=Vr)
Ge = G.subgroup(3200) # Excitatory neurons
Gi = G.subgroup(800) # Inhibitory neurons
Ce = Connection(Ge, G, 'ge', sparseness=0.2, weight=we)
Ci = Connection(Gi, G, 'gi', sparseness=0.2, weight=wi*10)
Cii = Connection(Gi, Gi, 'gi', sparseness=0.2, weight=wi)
M = SpikeMonitor(G)
E = SpikeMonitor(Ge, '+')
I = SpikeMonitor(Gi, 'o')
MV = StateMonitor(G, 'V', record=0)
Mge = StateMonitor(G, 'ge', record=0)
Mgi = StateMonitor(G, 'gi', record=0)
G.V = Vr + (Vt - Vr) * rand(len(G))
run(2500 * ms)
subplot(211)
raster_plot(M, title='The CUBA network', newfigure=False)
raster_plot(E)
raster_plot(I)
subplot(223)
plot(MV.times / ms, MV[0] / mV)
xlabel('Time (ms)')
ylabel('V (mV)')
subplot(224)
plot(Mge.times / ms, Mge[0] / mV)
plot(Mgi.times / ms, Mgi[0] / mV)
xlabel('Time (ms)')
ylabel('ge and gi (mV)')
legend(('ge', 'gi'), 'upper right')
show()
```

##### Example: anonymous (twister)¶

Anonymous entry for the 2012 Brian twister.

```
'''
My contribution to the brian twister!
I meant to give it more thought, but I forgot about the deadline!
'''
from brian import *
from brian.hears import *
import numpy as np
import pygame
_mixer_status = [-1,-1]
class SoundMonitor(SpikeMonitor):
    """
    Listen to your networks!
    Plays pure tones whenever a neuron spikes; the frequency is set according
    to the neuron number.
    """
    def __init__(self, source, record=False, delay=0,
                 frange=(100.*Hz, 5000.*Hz),
                 duration=50*ms,
                 samplerate=44100*Hz):
        super(SoundMonitor, self).__init__(source, record=record, delay=delay)
        self.samplerate = samplerate
        self.nsamples = int(np.rint(duration * samplerate))
        p = linspace(0, 1, len(source)).reshape((1, len(source)))
        p = np.tile(p, (self.nsamples, 1))
        freqs = frange[0] * p + (1-p) * frange[1]
        del p
        times = linspace(0*ms, duration, self.nsamples).reshape((self.nsamples, 1))
        times = np.tile(times, (1, len(source)))
        self.sounds = np.sin(2 * np.pi * freqs * times)
        self._init_mixer()

    def propagate(self, spikes):
        if len(spikes):
            data = np.sum(self.sounds[:, spikes], axis=1)
            x = array((2 ** 15 - 1) * clip(data/amax(data), -1, 1), dtype=int16)
            x.shape = x.size
            # Make sure pygame receives an array in C-order
            x = pygame.sndarray.make_sound(np.ascontiguousarray(x))
            x.play()

    def _init_mixer(self):
        global _mixer_status
        if _mixer_status == [-1, -1] or _mixer_status[0] != 1 or _mixer_status[1] != self.samplerate:
            pygame.mixer.quit()
            pygame.mixer.init(int(self.samplerate), -16, 1)
            _mixer_status = [1, self.samplerate]
def test_cuba():
    # The CUBA example with sound!
    taum = 20 * ms
    taue = 5 * ms
    taui = 10 * ms
    Vt = -50 * mV
    Vr = -60 * mV
    El = -49 * mV
    eqs = Equations('''
    dv/dt = (ge+gi-(v-El))/taum : volt
    dge/dt = -ge/taue : volt
    dgi/dt = -gi/taui : volt
    ''')
    P = NeuronGroup(4000, model=eqs, threshold=Vt, reset=Vr, refractory=5 * ms)
    P.v = Vr
    P.ge = 0 * mV
    P.gi = 0 * mV
    Pe = P.subgroup(3200)
    Pi = P.subgroup(800)
    we = (60 * 0.27 / 10) * mV # excitatory synaptic weight (voltage)
    wi = (-20 * 4.5 / 10) * mV # inhibitory synaptic weight
    Ce = Connection(Pe, P, 'ge', weight=we, sparseness=0.5)
    Ci = Connection(Pi, P, 'gi', weight=wi, sparseness=0.5)
    P.v = Vr + rand(len(P)) * (Vt - Vr)
    # Listen to the spikes
    M = SoundMonitor(P)
    run(10 * second)
def test_synfire():
    # Neuron model parameters
    Vr = -70 * mV
    Vt = -55 * mV
    taum = 10 * ms
    taupsp = 0.325 * ms
    weight = 4.86 * mV
    # Neuron model
    eqs = Equations('''
    dV/dt=(-(V-Vr)+x)*(1./taum) : volt
    dx/dt=(-x+y)*(1./taupsp) : volt
    dy/dt=-y*(1./taupsp)+25.27*mV/ms+(39.24*mV/ms**0.5)*xi : volt
    ''')
    # Neuron groups
    P = NeuronGroup(N=1000, model=eqs,
                    threshold=Vt, reset=Vr, refractory=1 * ms)
    Pinput = PulsePacket(t=50 * ms, n=85, sigma=1 * ms)
    # The network structure
    Pgp = [P.subgroup(100) for i in range(10)]
    C = Connection(P, P, 'y')
    for i in range(9):
        C.connect_full(Pgp[i], Pgp[i + 1], weight)
    Cinput = Connection(Pinput, Pgp[0], 'y')
    Cinput.connect_full(weight=weight)
    monitor = SoundMonitor(P)
    # Setup the network, and run it
    P.V = Vr + rand(len(P)) * (Vt - Vr)
    run(1*second)
    show()

if __name__ == '__main__':
    test_synfire()
```

##### Example: PierreYger (twister)¶

Pierre Yger’s winning entry for the 2012 Brian twister.

```
from brian import *
import numpy, os, pylab
"""
An implementation of a simple topographical network, like those used in Mehring 2005 or Yger 2011.
Cells are aranged randomly on a 2D plane and connected according to a gaussian profile
P(r) = exp(-d**2/(2*sigma**2)), with delays depending linearly on the distances.
Note that the exact number of synapses per neuron is not fixed here.
To avoid any border conditions, the plane is considered to be toroidal.
Script will generate an Synchronous Irregular (SI) slow regime with propagating
waves that will spread in various directions, wandering over the network.
In addition, an external layer of Poisson sources will stimulates some cells on the network, with
a wiring scheme such that the word BRIAN will pop up. External rates can be turned off to observed the
spontaneous activity of the 2D layer. One can observe that despite the inputs is constant, the network
is not always responding to it.
The script will display, while running, the spikes and Vm of the excitatory cells.
Varying sigma will show the various activity structures from a random network (s_lat > 1), to a very
locally connected one (s_lat < 0.1)
"""
### We are setting the global timestep of the simulation
Clock(0.1 * ms)
### Cell parameters ###
tau_m = 20. * ms # Membrane time constant
c_m = 0.2 * nF # Capacitance
tau_exc = 3. * ms # Synaptic time constant (excitatory)
tau_inh = 7. * ms # Synaptic time constant (inhibitory)
tau_ref = 5. * ms # Refractory period
El = -80 * mV # Leak potential
Ee = 0. * mV # Reversal potential (excitation)
Ei = -70.* mV # Reversal potential (inhibition)
Vt = -50 * mV # Spike Threhold
Vr = -60 * mV # Spike Reset
### Equation for a Conductance-based IAF ####
eqs = Equations('''
dv/dt = (El-v)/tau_m + (ge*(Ee-v)+gi*(Ei-v))/c_m : volt
dge/dt = -ge*(1./tau_exc) : uS
dgi/dt = -gi*(1./tau_inh) : uS
''')
n_cells = 12500 # Total number of cells
n_exc = int(0.8 * n_cells) # 4:1 ratio for exc/inh
size = 1. # Size of the network
simtime = 1000 * ms # Simulation time
sim_step = 1 * ms # Display snapshots every sim_step ms
epsilon = 0.02 # Probability density
s_lat = 0.2 # Spread of the lateral connections
g_exc = 4. * nS # Excitatory conductance
g_inh = 64. * nS # Inhibitory conductance
g_ext = 200 * nS # External drive
velocity = 0.3 * mm/ms # velocity
ext_rate = 100 * Hz # Rate of the external source
max_distance = size * mm/numpy.sqrt(2) # Since this is a torus
max_delay = max_distance/velocity # Needed for the connectors
### Generate the images with the letters B, R, I, A, N
### To do that, we create a png image and read it as a matrix
pylab.figure()
pylab.text(0.125, 0.4, "B R I A N", size=80)
pylab.setp(gca(), xticks=[], yticks=[])
pylab.savefig("BRIAN.png")
brian_letters = imread("BRIAN.png")
os.remove("BRIAN.png")
brian_letters = numpy.flipud(mean(brian_letters,2)).T
pylab.close()
### We create the cells and generate random positons in [0, size]x[0, size]
all_cells = NeuronGroup(n_cells, model=eqs, threshold=Vt, reset=Vr, refractory=tau_ref)
all_cells.position = size*numpy.random.rand(n_cells, 2)
exc_cells = all_cells[0:n_exc]
inh_cells = all_cells[n_exc:n_cells]
### We initialize v values slightly above Vt, to have initial spikes
all_cells.v = El + 1.1*numpy.random.rand(n_cells) * (Vt - El)
### Now we create the source that will write the word BRIAN
sources = PoissonGroup(1, ext_rate)
sources.position = array([[0.5, 0.5]])
### Function to get the distance between one position and an array of positions
### This is needed to used the vectorized form of the connections in the brian.Connection objects
### Note that the plane is wrapped, to avoid any border effects.
def get_distance(x, y):
    d1 = abs(x - y)
    min_d = numpy.minimum(d1, size - d1)
    return numpy.sqrt(numpy.sum(min_d**2, 1))

### Function returning the probability of connection as a function of distance
def probas(i, j, x, y):
    distance = get_distance(x[i], y[j])
    return epsilon * numpy.exp(-distance**2/(2*s_lat**2))

### Function returning linear delays as a function of distance
def delays(i, j, x, y):
    distance = get_distance(x[i], y[j])
    return 0.1*ms + (distance * mm) / velocity

### Function assessing whether a cell is located in a particular letter of the word BRIAN
### Returns 0 if not, and 1 if yes
def is_in_brian(i, j, x, y):
    a, b = brian_letters.shape
    tmp_x, tmp_y = (y[j][:, 0]*a).astype(int), (y[j][:, 1]*b).astype(int)
    return 1 - brian_letters[tmp_x, tmp_y]
print "Building network with wrapped 2D gaussian profiles..."
Ce = Connection(exc_cells, all_cells, 'ge', weight=g_exc, max_delay=max_delay,
sparseness=lambda i, j : probas(i, j, exc_cells.position, all_cells.position),
delay =lambda i, j : delays(i, j, exc_cells.position, all_cells.position))
Ci = Connection(inh_cells, all_cells, 'gi', weight=g_inh, max_delay=max_delay,
sparseness=lambda i, j : probas(i, j, inh_cells.position, all_cells.position),
delay =lambda i, j : delays(i, j, inh_cells.position, all_cells.position))
Cext = Connection(sources, all_cells, 'ge', weight=g_ext, max_delay=max_delay,
sparseness=lambda i, j : is_in_brian(i, j, sources.position, all_cells.position))
print "--> mean probability from excitatory synapses:", Ce.W.getnnz()/float(n_exc*n_cells) * 100, "%"
print "--> mean probability from inhibitory synapses:", Ci.W.getnnz()/float((n_cells - n_exc)*n_cells) * 100, "%"
print "Setting the recorders..."
v_exc = RecentStateMonitor(exc_cells, 'v', record=True)
s_exc = SpikeCounter(exc_cells)
ion() # To enter the interactive mode
print "Initializing the plots..."
fig1 = pylab.subplot(211)
im = fig1.scatter(all_cells.position[:n_exc, 0], all_cells.position[:n_exc, 1], c=[0]*n_exc)
im.set_clim(0, 1)
fig1.set_ylabel("spikes")
pylab.colorbar(im)
fig2 = pylab.subplot(212)
im = fig2.scatter(all_cells.position[:n_exc, 0], all_cells.position[:n_exc, 1], c=[0]*n_exc)
im.set_clim(El, Vt)
fig2.set_ylabel("v")
pylab.colorbar(im)
manager = pylab.get_current_fig_manager()
print "Running network ..."
for frame in xrange(int(simtime/sim_step)):
    run(sim_step)
    fig1.cla()
    fig1.set_title("t = %g s" % ((sim_step * frame)/second))
    idx = s_exc.count > 0
    if numpy.sum(idx) > 0:
        im = fig1.scatter(all_cells.position[:n_exc, 0][idx], all_cells.position[:n_exc, 1][idx],
                          c=[0]*int(numpy.sum(idx)))
        s_exc.count = numpy.zeros(n_exc) ## We reset the spike counter
    fig1.set_xlim(0, size)
    fig1.set_ylim(0, size)
    fig1.set_ylabel("spikes")
    im.set_clim(0, 1)
    setp(fig1, xticks=[], yticks=[])
    fig2.cla()
    im = fig2.scatter(all_cells.position[:n_exc, 0], all_cells.position[:n_exc, 1], c=v_exc.values[-1])
    fig2.set_xlim(0, size)
    fig2.set_ylim(0, size)
    fig2.set_ylabel("v")
    im.set_clim(El, Vt)
    setp(fig2, xticks=[], yticks=[])
    manager.canvas.draw()
    manager.canvas.flush_events()
ioff() # To leave the interactive mode
```

##### Example: MicheleGiugliano (twister)¶

Michele Giugliano’s entry for the 2012 Brian twister.

```
#
# Figure5B - from Giugliano et al., 2004
# Journal of Neurophysiology 92(2):977-96
#
# implemented by Eleni Vasilaki <e.vasilaki@sheffield.ac.uk> and
# Michele Giugliano <michele.giugliano@ua.ac.be>
#
# A sparsely connected network of excitatory neurons, interacting
# via current-based synaptic interactions, and incorporating
# spike-frequency adaptation, is simulated.
#
# Its overall emerging firing rate activity replicates some of the features of
# spontaneous patterned electrical activity, observed experimentally in cultured
# networks of neurons dissociated from the neocortex.
#
from brian import *
# Parameters of the simulation
T = 30000 *ms # life time of the simulation
N = 100 # total number of (excitatory) integrate-and-fire model neurons in the network
# Parameters of each model neuron, voltage dynamics
C = 67.95 *pF # Membrane capacitance of single model neurons
tau = 22.25 *ms # Membrane time-constant of single model neurons
H = 2.39 *mV # Reset voltage, mimicking hyperpolarization potential following a spike
theta= 20 *mV # Threshold voltage for spike initiation
tauarp=7.76 *ms # Absolute refractory period
# Parameters of each model neuron, spike-frequency adaptation dynamics
taua = 2100 *ms # Adaptation time constant
a = 0.75 *pA # Adaptation scaling factor - NO ADAPTATION
D = 1*ms # Unit consistency factor
temp = 1. *ms**(-.5) # Unit consistency factor
# Parameters of network connectivity
Cee = 0.38 # Sparseness of all-to-all random connectivity
taue = 5 *ms # Decay time constant of excitatory EPSPs
delta= 1.5 * ms # Conduction+synaptic propagation delay
J = 14.5 * pA # Strength of synaptic coupling, up to 18 *pA
# Parameters of background synaptic activity, modelled as a identical and independent noisy extra-input to each model neuron
m0 = 25.1 *pA # Mean background input current
s0 = 92 *pA # Std dev of the (noisy) background input current
# Each model neuron is described as a leaky integrate-and-fire with adaptation and current-driven synapses
eqs = """
dv/dt = - v / tau - a/C * x + Ie/C + (m0 + s0 * xi / temp)/C : mV
dx/dt = -x/taua : 1
dIe/dt = -Ie/taue : pA
"""
# Custom refractory mechanisms are employed here, to allow the membrane potential to be clamped to the reset value H
def myresetfunc(P, spikes):
    P.v[spikes] = H   # reset voltage
    P.x[spikes] += 1  # low-pass filter of spikes (adaptation mechanism)

SCR = SimpleCustomRefractoriness(myresetfunc, tauarp, state='v')
# The population of N identical model neurons is defined now
P = NeuronGroup(N, model=eqs, threshold=theta, reset=SCR)
# The interneuronal connectivity is defined now
Ce = Connection(P, P, 'Ie', weight=J, sparseness=Cee, delay=delta)
# Initialization of the state variables, for each model neuron
P.v = rand(len(P)) * 20 * mV #membrane potential
P.x = rand(len(P)) * 2 #low pass filter of spikes
P.Ie = 0 *pA #excitatory synaptic input
# Definition of tools for plotting and visualization of single neuron and population quantities
R = PopulationRateMonitor(P)
M = SpikeMonitor(P)
trace = StateMonitor(P, 'v', record=0)
tracex = StateMonitor(P, 'x', record=0)
print "Simulation running... (long-lasting simulation: be patient)"
run(T)
print "Simulation completed! If you did not see any firing rate population burst (lower panel), then slightly increase J!"
# Plot nice spikes - adapted from Brette's code
vm = trace[0]
spikes0 = [t for i,t in M.spikes if i==0]
for i in range(0, len(spikes0)):
    k = int(spikes0[i] / defaultclock.dt)
    vm[k] = 80 * mV
subplot(311) #membrane potential of neuron 0
plot(trace.times / ms, vm / mV - 60)
subplot(312) #raster plot
raster_plot(M)
subplot(313) #smoothed population rate
plot(R.times / ms, R.smooth_rate(5*ms) / Hz, tracex.times / ms, tracex[0] * 10)
ylim(0, 120)
show()
```

#### multiprocessing¶

##### Example: taskfarm (multiprocessing)¶

Uses the `run_tasks()` function to run a task on multiple CPUs and save the results to a `DataManager` object.

```
from brian import *
from brian.tools.datamanager import *
from brian.tools.taskfarm import *
def find_rate(k, report):
    eqs = '''
    dV/dt = (k-V)/(10*ms) : 1
    '''
    G = NeuronGroup(1000, eqs, reset=0, threshold=1)
    M = SpikeCounter(G)
    run(30*second, report=report)
    return (k, mean(M.count)/30)

if __name__ == '__main__':
    N = 20
    dataman = DataManager('taskfarmexample')
    if dataman.itemcount() < N:
        M = N - dataman.itemcount()
        run_tasks(dataman, find_rate, rand(M)*19+1)
    X, Y = zip(*dataman.values())
    plot(X, Y, '.')
    xlabel('k')
    ylabel('Firing rate (Hz)')
    show()
```

##### Example: multiple_runs_simple (multiprocessing)¶

Example of using Python multiprocessing module to distribute simulations over multiple processors.

The general procedure for using multiprocessing is to define and run a network inside a function, and then use `multiprocessing.Pool.map` to call the function with multiple parameter values. Note that on Windows, any code that should only run once should be placed inside an `if __name__ == '__main__'` block.

```
from brian import *
import multiprocessing
# This is the function that we want to compute for various different parameters
def how_many_spikes(excitatory_weight):
    # These two lines reset the clock to 0 and clear any remaining data so that
    # memory use doesn't build up over multiple runs.
    reinit_default_clock()
    clear(True)
    eqs = '''
    dv/dt = (ge+gi-(v+49*mV))/(20*ms) : volt
    dge/dt = -ge/(5*ms) : volt
    dgi/dt = -gi/(10*ms) : volt
    '''
    P = NeuronGroup(4000, eqs, threshold=-50 * mV, reset=-60 * mV)
    P.v = -60 * mV + 10 * mV * rand(len(P))
    Pe = P.subgroup(3200)
    Pi = P.subgroup(800)
    Ce = Connection(Pe, P, 'ge')
    Ci = Connection(Pi, P, 'gi')
    Ce.connect_random(Pe, P, 0.02, weight=excitatory_weight)
    Ci.connect_random(Pi, P, 0.02, weight=-9 * mV)
    M = SpikeMonitor(P)
    run(100 * ms)
    return M.nspikes

if __name__ == '__main__':
    # Note that on Windows platforms, all code that is executed rather than
    # just defining functions and classes has to be in the if __name__=='__main__'
    # block, otherwise it will be executed by each process that starts. This
    # isn't a problem on Linux.
    pool = multiprocessing.Pool() # uses one process per CPU by default
    weights = linspace(0, 3.5, 100) * mV
    args = [w * volt for w in weights]
    results = pool.map(how_many_spikes, args) # launches multiple processes
    plot(weights, results, '.')
    show()
```

##### Example: multiple_runs_with_gui (multiprocessing)¶

A complicated example of using multiprocessing for multiple runs of a simulation with different parameters, using a GUI to monitor and control the runs.

This example features:

- An indefinite number of runs, with a set of parameters generated at random for each run.
- A plot of the output of all the runs updated as soon as each run is completed.
- A GUI showing how long each process has been running for and how long until it completes, and with a button allowing you to terminate the runs.

A simpler example is in `examples/multiprocessing/multiple_runs_simple.py`.

```
# We use Tk as the backend for the GUI and matplotlib so as to avoid any
# threading conflicts
import matplotlib
matplotlib.use('TkAgg')
from brian import *
import Tkinter, time, multiprocessing, os
from brian.utils.progressreporting import make_text_report
from Queue import Empty as QueueEmpty
class SimulationController(Tkinter.Tk):
    '''
    GUI, uses Tkinter and features a progress bar for each process, and a callback
    function for when the terminate button is clicked.
    '''
    def __init__(self, processes, terminator, width=600):
        Tkinter.Tk.__init__(self, None)
        self.parent = None
        self.grid()
        button = Tkinter.Button(self, text='Terminate simulation',
                                command=terminator)
        button.grid(column=0, row=0)
        self.pb_width = width
        self.progressbars = []
        for i in xrange(processes):
            can = Tkinter.Canvas(self, width=width, height=30)
            can.grid(column=0, row=1 + i)
            can.create_rectangle(0, 0, width, 30, fill='#aaaaaa')
            r = can.create_rectangle(0, 0, 0, 30, fill='#ffaaaa', width=0)
            t = can.create_text(width / 2, 15, text='')
            self.progressbars.append((can, r, t))
        self.results_text = Tkinter.Label(self, text='Computed 0 results, time taken: 0s')
        self.results_text.grid(column=0, row=processes + 1)
        self.title('Simulation control')

    def update_results(self, elapsed, complete):
        '''
        Method to update the total number of results computed and the amount of time taken.
        '''
        self.results_text.config(text='Computed ' + str(complete) + ', time taken: ' + str(int(elapsed)) + 's')
        self.update()

    def update_process(self, i, elapsed, complete, msg):
        '''
        Method to update the status of a given process.
        '''
        can, r, t = self.progressbars[i]
        can.itemconfigure(t, text='Process ' + str(i) + ': ' + make_text_report(elapsed, complete) + ': ' + msg)
        can.coords(r, 0, 0, int(self.pb_width * complete), 30)
        self.update()
def sim_mainloop(pool, results, message_queue):
    '''
    Monitors results of a simulation as they arrive.
    pool is the multiprocessing.Pool that the processes are running in,
    results is the AsyncResult object returned by Pool.imap_unordered which
    returns simulation results asynchronously as and when they are ready,
    and message_queue is a multiprocessing.Queue used to communicate between
    child processes and the server process. In this case, we use this Queue to
    send messages about the percent complete and time elapsed for each run.
    '''
    # We use this to enumerate the processes, mapping their process IDs to an int
    # in the range 0:num_processes.
    pid_to_id = dict((pid, i) for i, pid in enumerate([p.pid for p in pool._pool]))
    num_processes = len(pid_to_id)
    start = time.time()
    stoprunningsim = [False]
    # This function terminates all the pool's child processes, it is used as
    # the callback function called when the terminate button on the GUI is clicked.
    def terminate_sim():
        pool.terminate()
        stoprunningsim[0] = True
    controller = SimulationController(num_processes, terminate_sim)
    for i in range(num_processes):
        controller.update_process(i, 0, 0, 'no info yet')
    i = 0
    while True:
        try:
            # If there is a new result (the 0.1 means wait 0.1 seconds for a
            # result before giving up) then this try clause will execute, otherwise
            # a TimeoutError will occur and the except clause afterwards will
            # execute.
            weight, numspikes = results.next(0.1)
            # If we reach here, we have a result to plot, so we plot it and
            # update the GUI
            plot_result(weight, numspikes)
            i = i + 1
            controller.update_results(time.time() - start, i)
        except multiprocessing.TimeoutError:
            # If we're still waiting for a new result, we can process events in
            # the message_queue and update the GUI if there are any.
            while not message_queue.empty():
                try:
                    # Messages here are of the form: (pid, elapsed, complete)
                    # where pid is the process ID of the child process, elapsed
                    # is the amount of time elapsed, and complete is the
                    # fraction of the run completed. See function how_many_spikes
                    # to see where these messages come from.
                    pid, elapsed, complete = message_queue.get_nowait()
                    controller.update_process(pid_to_id[pid], elapsed, complete, '')
                except QueueEmpty:
                    break
            controller.update()
        if stoprunningsim[0]:
            print 'Terminated simulation processes'
            break
    controller.destroy()

def plot_result(weight, numspikes):
    plot([weight], [numspikes], '.', color=(0, 0, 0.5))
    axis('tight')
    draw() # this forces matplotlib to redraw
# Note that how_many_spikes only takes one argument, which is a tuple of
# its actual arguments. The reason for this is that Pool.imap_unordered can only
# pass a single argument to the function its applied to, but that argument can
# be a tuple...
def how_many_spikes((excitatory_weight, message_queue)):
    reinit_default_clock()
    clear(True)
    eqs = '''
    dv/dt = (ge+gi-(v+49*mV))/(20*ms) : volt
    dge/dt = -ge/(5*ms) : volt
    dgi/dt = -gi/(10*ms) : volt
    '''
    P = NeuronGroup(4000, eqs, threshold=-50 * mV, reset=-60 * mV)
    P.v = -60 * mV + 10 * mV * rand(len(P))
    Pe = P.subgroup(3200)
    Pi = P.subgroup(800)
    Ce = Connection(Pe, P, 'ge')
    Ci = Connection(Pi, P, 'gi')
    Ce.connect_random(Pe, P, 0.02, weight=excitatory_weight)
    Ci.connect_random(Pi, P, 0.02, weight=-9 * mV)
    M = SpikeMonitor(P)
    # This reporter function is called every second, and it sends a message to
    # the server process updating the status of the current run.
    def reporter(elapsed, complete):
        message_queue.put((os.getpid(), elapsed, complete))
    run(4000 * ms, report=reporter, report_period=1 * second)
    return (excitatory_weight, M.nspikes)
if __name__ == '__main__':
    numprocesses = None # number of processes to use, set to None to have one per CPU
    # We have to use a Queue from the Manager to send messages from client
    # processes to the server process
    manager = multiprocessing.Manager()
    message_queue = manager.Queue()
    pool = multiprocessing.Pool(processes=numprocesses)
    # This generator function repeatedly generates random sets of parameters
    # to pass to the how_many_spikes function
    def args():
        while True:
            weight = rand()*3.5 * mV
            yield (weight, message_queue)
    # imap_unordered returns an AsyncResult object which returns results as
    # and when they are ready, we pass this results object which is returned
    # immediately to the sim_mainloop function which monitors this, updates the
    # GUI and plots the results as they come in.
    results = pool.imap_unordered(how_many_spikes, args())
    ion() # this puts matplotlib into interactive mode to plot as we go
    sim_mainloop(pool, results, message_queue)
```

#### hears¶

##### Example: online_computation (hears)¶

Example of online computation using `process()`. Plots the RMS value of each channel output by a gammatone filterbank.

```
from brian import *
from brian.hears import *
sound1 = tone(1*kHz, .1*second)
sound2 = whitenoise(.1*second)
sound = sound1+sound2
sound = sound.ramp()
sound.level = 60*dB
cf = erbspace(20*Hz, 20*kHz, 3000)
fb = Gammatone(sound, cf)
def sum_of_squares(input, running):
    return running + sum(input**2, axis=0)
rms = sqrt(fb.process(sum_of_squares)/sound.nsamples)
sound_rms = sqrt(mean(sound**2))
axhline(sound_rms, ls='--')
plot(cf, rms)
xlabel('Frequency (Hz)')
ylabel('RMS')
show()
```
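The accumulation trick used with `process()` above, passing a function of `(input, running)` whose return value is fed back on the next buffer, can be sketched without Brian hears. Here a plain-NumPy chunk driver (a hypothetical stand-in for `Filterbank.process()`) computes the same RMS as a one-shot calculation over the whole signal:

```python
import numpy as np

def process_in_chunks(signal, func, chunksize=512):
    """Minimal stand-in for Filterbank.process(): feed the signal to func one
    buffer at a time, threading a 'running' accumulator through the calls."""
    running = 0
    for start in range(0, len(signal), chunksize):
        running = func(signal[start:start + chunksize], running)
    return running

def sum_of_squares(input, running):
    return running + np.sum(input ** 2, axis=0)

rng = np.random.RandomState(0)
signal = rng.randn(10000)
rms_chunked = np.sqrt(process_in_chunks(signal, sum_of_squares) / len(signal))
rms_direct = np.sqrt(np.mean(signal ** 2))
# The chunked computation matches the one-shot computation
```

This is why `process()` is useful for online computation: only one buffer ever needs to be in memory, which matters for the very large filterbanks Brian hears supports.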

##### Example: sound_localisation_model (hears)¶

Example demonstrating the use of many features of Brian hears, including HRTFs, restructuring filters and integration with Brian. Implements a simplified version of the “ideal” sound localisation model from Goodman and Brette (2010).

The sound is played at a particular spatial location (indicated on the final plot by a red +). Each location has a corresponding assembly of neurons, whose summed firing rates give the sizes of the blue circles in the plot. The most strongly responding assembly is indicated by the green x, which is the estimate of the location by the model.


```
from brian import *
from brian.hears import *
# Download the IRCAM database, and replace this filename with the location
# you downloaded it to
hrtfdb = IRCAM_LISTEN(r'F:\HRTF\IRCAM')
subject = 1002
hrtfset = hrtfdb.load_subject(subject)
# This gives the number of spatial locations in the set of HRTFs
num_indices = hrtfset.num_indices
# Choose a random location for the sound to come from
index = randint(hrtfset.num_indices)
# A sound to test the model with
sound = Sound.whitenoise(500*ms)
# This is the specific HRTF for the chosen location
hrtf = hrtfset.hrtf[index]
# We apply the chosen HRTF to the sound, the output has 2 channels
hrtf_fb = hrtf.filterbank(sound)
# We swap these channels (equivalent to swapping the channels in the
# subsequent filters, but simpler to do it with the inputs)
swapped_channels = RestructureFilterbank(hrtf_fb, indexmapping=[1, 0])
# Now we apply all of the possible pairs of HRTFs in the set to these
# swapped channels, which means repeating them num_indices times first
hrtfset_fb = hrtfset.filterbank(Repeat(swapped_channels, num_indices))
# Now we apply cochlear filtering (logically, this comes before the HRTF
# filtering, but since convolution is commutative it is more efficient to
# do the cochlear filtering afterwards)
cfmin, cfmax, cfN = 150*Hz, 5*kHz, 40
cf = erbspace(cfmin, cfmax, cfN)
# We repeat each of the HRTFSet filterbank channels cfN times, so that
# for each location we will apply each possible cochlear frequency
gfb = Gammatone(Repeat(hrtfset_fb, cfN),
                tile(cf, hrtfset_fb.nchannels))
# Half wave rectification and compression
cochlea = FunctionFilterbank(gfb, lambda x:15*clip(x, 0, Inf)**(1.0/3.0))
# Leaky integrate and fire neuron model
eqs = '''
dV/dt = (I-V)/(1*ms)+0.1*xi/(0.5*ms)**.5 : 1
I : 1
'''
G = FilterbankGroup(cochlea, 'I', eqs, reset=0, threshold=1, refractory=5*ms)
# The coincidence detector (cd) neurons
cd = NeuronGroup(num_indices*cfN, eqs, reset=0, threshold=1, clock=G.clock)
# Each CD neuron receives precisely two inputs, one from the left ear and
# one from the right, for each location and each cochlear frequency
C = Connection(G, cd, 'V')
for i in xrange(num_indices*cfN):
    C[i, i] = 0.5 # from right ear
    C[i+num_indices*cfN, i] = 0.5 # from left ear
# We want to just count the number of CD spikes
counter = SpikeCounter(cd)
# Run the simulation, giving a report on how long it will take as we run
run(sound.duration, report='stderr')
# We take the array of counts, and reshape them into a 2D array which we sum
# across frequencies to get the spike count of each location-specific assembly
count = counter.count
count.shape = (num_indices, cfN)
count = sum(count, axis=1)
count = array(count, dtype=float)/amax(count)
# Our guess of the location is the index of the strongest firing assembly
index_guess = argmax(count)
# Now we plot the output, using the coordinates of the HRTFSet
coords = hrtfset.coordinates
azim, elev = coords['azim'], coords['elev']
scatter(azim, elev, 100*count)
plot([azim[index]], [elev[index]], '+r', ms=15, mew=2)
plot([azim[index_guess]], [elev[index_guess]], 'xg', ms=15, mew=2)
xlabel('Azimuth (deg)')
ylabel('Elevation (deg)')
xlim(-5, 350)
ylim(-50, 95)
show()
```
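The `Repeat`/`tile` combination above arranges the channels so that location varies slowly and cochlear frequency varies quickly. The index arithmetic can be verified with a small NumPy sketch (toy sizes, purely illustrative):

```python
import numpy as np

num_indices, cfN = 3, 4  # toy sizes, much smaller than in the example

# Repeat(fb, n) duplicates each channel n times consecutively, like np.repeat;
# tile(cf, m) cycles the cf array, like np.tile. Together they pair every
# location with every cochlear frequency.
locations = np.repeat(np.arange(num_indices), cfN)   # channel -> location
freqs = np.tile(np.arange(cfN), num_indices)         # channel -> frequency

# Channel i of the filtered output therefore corresponds to
# location i // cfN and frequency i % cfN
assert all(locations[i] == i // cfN for i in range(num_indices * cfN))
assert all(freqs[i] == i % cfN for i in range(num_indices * cfN))
```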

##### Example: dcgc (hears)¶

Implementation example of the compressive gammachirp auditory filter as described in Irino, T. and Patterson R., “A compressive gammachirp auditory filter for both physiological and psychophysical data”, JASA 2001.

A class called `DCGC` implementing this model is available in the library.

Technical implementation details and notation can be found in Irino, T. and Patterson R., “A Dynamic Compressive Gammachirp Auditory Filterbank”, IEEE Trans Audio Speech Lang Processing.

```
from brian import *
from brian.hears import *
simulation_duration = 50*ms
samplerate = 50*kHz
level = 50*dB # level of the input sound in rms dB SPL
sound = whitenoise(simulation_duration, samplerate).ramp()
sound = sound.atlevel(level)
nbr_cf = 50 # number of centre frequencies
# center frequencies with a spacing following an ERB scale
cf = erbspace(100*Hz, 1000*Hz, nbr_cf)
c1 = -2.96 #glide slope of the first filterbank
b1 = 1.81 #factor determining the time constant of the first filterbank
c2 = 2.2 #glide slope of the second filterbank
b2 = 2.17 #factor determining the time constant of the second filterbank
order_ERB = 4
ERBrate = 21.4*log10(4.37*cf/1000+1)
ERBwidth = 24.7*(4.37*cf/1000 + 1)
ERBspace = mean(diff(ERBrate))
# the filter coefficients are updated every update_interval (here in samples)
update_interval = 1
#bank of passive gammachirp filters. As the control path uses the same passive
#filterbank as the signal path (but shifted in frequency),
#this filterbank is used by both pathways.
pGc = LogGammachirp(sound, cf, b=b1, c=c1)
fp1 = cf + c1*ERBwidth*b1/order_ERB #centre frequency of the signal path
#### Control Path ####
#the first filterbank in the control path consists of gammachirp filters
#value of the shift in ERB frequencies of the control path with respect to the signal path
lct_ERB = 1.5
n_ch_shift = round(lct_ERB/ERBspace) #value of the shift in channels
#index of the channel of the control path taken from pGc
indch1_control = minimum(maximum(1, arange(1, nbr_cf+1)+n_ch_shift), nbr_cf).astype(int)-1
fp1_control = fp1[indch1_control]
#the control path band pass filter uses the channels of pGc indexed by indch1_control
pGc_control = RestructureFilterbank(pGc, indexmapping=indch1_control)
#the second filterbank in the control path consists of fixed asymmetric compensation filters
frat_control = 1.08
fr2_control = frat_control*fp1_control
asym_comp_control = AsymmetricCompensation(pGc_control, fr2_control, b=b2, c=c2)
#definition of the pole of the asymmetric compensation filters
p0 = 2
p1 = 1.7818*(1-0.0791*b2)*(1-0.1655*abs(c2))
p2 = 0.5689*(1-0.1620*b2)*(1-0.0857*abs(c2))
p3 = 0.2523*(1-0.0244*b2)*(1+0.0574*abs(c2))
p4 = 1.0724
#definition of the parameters used in the control path output levels computation
#(see IEEE paper for details)
decay_tcst = .5*ms
order = 1.
lev_weight = .5
level_ref = 50.
level_pwr1 = 1.5
level_pwr2 = .5
RMStoSPL = 30.
frat0 = .2330
frat1 = .005
exp_deca_val = exp(-1/(decay_tcst*samplerate)*log(2))
level_min = 10**(-RMStoSPL/20)
#definition of the controller class. It takes the outputs of the
#first and second filterbanks of the control path as input, computes an overall
#intensity level for each frequency channel, and then uses those levels to update
#the filter coefficients of its target, the asymmetric compensation filterbank of
#the signal path.
class CompensensationFilterUpdater(object):
    def __init__(self, target):
        self.target = target
        self.level1_prev = -100
        self.level2_prev = -100
    def __call__(self, *input):
        value1 = input[0][-1,:]
        value2 = input[1][-1,:]
        #the current level value is chosen as the max between the current
        #output and the previous one decreased by a decay
        level1 = maximum(maximum(value1, 0), self.level1_prev*exp_deca_val)
        level2 = maximum(maximum(value2, 0), self.level2_prev*exp_deca_val)
        self.level1_prev = level1 #the value is stored for the next iteration
        self.level2_prev = level2
        #the overall intensity is computed from the two filterbank outputs
        level_total = lev_weight*level_ref*(level1/level_ref)**level_pwr1+\
                      (1-lev_weight)*level_ref*(level2/level_ref)**level_pwr2
        #then it is converted to dB
        level_dB = 20*log10(maximum(level_total, level_min))+RMStoSPL
        #the frequency factor is calculated
        frat = frat0 + frat1*level_dB
        #the centre frequencies of the asymmetric compensation filters are updated
        fr2 = fp1*frat
        coeffs = asymmetric_compensation_coeffs(samplerate, fr2,
                     self.target.filt_b, self.target.filt_a, b2, c2,
                     p0, p1, p2, p3, p4)
        self.target.filt_b, self.target.filt_a = coeffs
#### Signal Path ####
#the signal path consists of the passive gammachirp filterbank pGc previously
#defined followed by a asymmetric compensation filterbank
fr1 = fp1*frat0
varyingfilter_signal_path = AsymmetricCompensation(pGc, fr1, b=b2, c=c2)
updater = CompensensationFilterUpdater(varyingfilter_signal_path)
#the controller, which takes the two filterbanks of the control path as inputs
#and the varying filter of the signal path as target, is instantiated
control = ControlFilterbank(varyingfilter_signal_path,
                            [pGc_control, asym_comp_control],
                            varyingfilter_signal_path, updater, update_interval)
#run the simulation
#Remember that the controller is at the end of the chain and the output of the
#whole path comes from it
signal = control.process()
figure()
imshow(flipud(signal.T), aspect='auto')
show()
```
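The level-tracking rule in the updater, `maximum(value, previous*exp_deca_val)`, is a peak follower whose memory halves every `decay_tcst`. A minimal NumPy sketch of this behaviour, using the example's sample rate and time constant:

```python
import numpy as np

fs = 50000.0   # sample rate in Hz, as in the example
tau = 0.5e-3   # decay time constant in seconds (decay_tcst)
decay = np.exp(-np.log(2) / (tau * fs))  # per-sample decay factor

def follow(x):
    # Peak follower: track the max of the current (rectified) input and the
    # exponentially decayed previous level
    level = 0.0
    out = []
    for v in x:
        level = max(max(v, 0.0), level * decay)
        out.append(level)
    return np.array(out)

# After an impulse, the tracked level halves every tau seconds (tau*fs samples)
x = np.zeros(100)
x[0] = 1.0
out = follow(x)
n_half = int(tau * fs)  # 25 samples
assert np.isclose(out[n_half], 0.5)
```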

##### Example: butterworth (hears)¶

Example of the use of the class `Butterworth` available in the library. In this example, a white noise is filtered by a bank of Butterworth bandpass filters and lowpass filters, which differ for every channel. The centre or cutoff frequencies of the filters are linearly spaced between 100 Hz and 1000 Hz, and the bandwidth increases linearly with frequency.

```
from brian import *
from brian.hears import *
level = 50*dB # level of the input sound in rms dB SPL
sound = whitenoise(100*ms).ramp()
sound = sound.atlevel(level)
order = 2 #order of the filters
#### example of a bank of bandpass filter ################
nchannels = 50
center_frequencies = linspace(100*Hz, 1000*Hz, nchannels)
bw = linspace(50*Hz, 300*Hz, nchannels) # bandwidth of the filters
#arrays of shape (2 x nchannels) defining the passband frequencies (Hz)
fc = vstack((center_frequencies-bw/2, center_frequencies+bw/2))
filterbank = Butterworth(sound, nchannels, order, fc, 'bandpass')
filterbank_mon = filterbank.process()
figure()
subplot(211)
imshow(flipud(filterbank_mon.T), aspect='auto')
### example of a bank of lowpass filter ################
nchannels = 50
cutoff_frequencies = linspace(200*Hz, 1000*Hz, nchannels)
filterbank = Butterworth(sound, nchannels, order, cutoff_frequencies, 'low')
filterbank_mon = filterbank.process()
subplot(212)
imshow(flipud(filterbank_mon.T), aspect='auto')
show()
```
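Brian hears' `Butterworth` is built on standard IIR design; one bandpass channel can be sketched directly with SciPy (the sample rate and band edges below are assumptions for the sketch, not values from the example):

```python
import numpy as np
from scipy.signal import butter, freqz

fs = 44100.0               # assumed sample rate for the sketch
order = 2
low, high = 275.0, 325.0   # passband edges for one channel (centre 300 Hz)

# Design one bandpass channel; the Butterworth filterbank does this per channel
b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype='bandpass')

# Gain is near unity at the centre frequency and strongly attenuated far away
w, h = freqz(b, a, worN=[300.0, 3000.0], fs=fs)
assert abs(h[0]) > 0.9
assert abs(h[1]) < 0.01
```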

##### Example: time_varying_filter2 (hears)¶

This example implements a band pass filter whose center frequency is modulated by a sinusoid function. This modulator is implemented as a `FunctionFilterbank`. One state variable (here time) must be kept; it is therefore implemented with a class. The bandpass filter coefficients update is an example of how to use a `ControlFilterbank`. The bandpass filter is a basic biquadratic filter for which the Q factor and the center frequency must be given. The input is a white noise.

```
from brian import *
from brian.hears import *
samplerate = 20*kHz
SoundDuration = 300*ms
sound = whitenoise(SoundDuration, samplerate).ramp()
#number of frequency channels (here it must be one as a spectrogram of the
#output is plotted)
nchannels = 1
fc_init = 5000*Hz #initial center frequency of the band pass filter
Q = 5 #quality factor of the band pass filter
update_interval = 1 # the filter coefficients are updated every sample
mean_center_freq = 4*kHz #mean frequency around which the CF will oscillate
amplitude = 1500*Hz #amplitude of the oscillation
frequency = 10*Hz #frequency of the oscillation
#this class is used in a FunctionFilterbank (via its __call__). It outputs the
#center frequency of the band pass filter. Its output is thus later passed as
#input to the controller.
class CenterFrequencyGenerator(object):
    def __init__(self):
        self.t=0*second
    def __call__(self, input):
        #update of the center frequency
        fc = mean_center_freq+amplitude*sin(2*pi*frequency*self.t)
        #update of the state variable
        self.t = self.t+1./samplerate
        return fc
center_frequency = CenterFrequencyGenerator()
fc_generator = FunctionFilterbank(sound, center_frequency)
#the updater of the controller generates new filter coefficient of the band pass
#filter based on the center frequency it receives from the fc_generator
#(its input)
class CoeffController(object):
    def __init__(self, target):
        self.BW = 2*arcsinh(1./2/Q)*1.44269
        self.target=target
    def __call__(self, input):
        fc = input[-1,:] #the control variables are taken as the last of the buffer
        w0 = 2*pi*fc/array(samplerate)
        alpha = sin(w0)*sinh(log(2)/2*self.BW*w0/sin(w0))
        self.target.filt_b[:, 0, 0] = sin(w0)/2
        self.target.filt_b[:, 1, 0] = 0
        self.target.filt_b[:, 2, 0] = -sin(w0)/2
        self.target.filt_a[:, 0, 0] = 1+alpha
        self.target.filt_a[:, 1, 0] = -2*cos(w0)
        self.target.filt_a[:, 2, 0] = 1-alpha
#In the present example the time varying filter is a LinearFilterbank, therefore
#we must initialise the filter coefficients: the ones used for the first buffer computation
w0 = 2*pi*fc_init/samplerate
BW = 2*arcsinh(1./2/Q)*1.44269
alpha = sin(w0)*sinh(log(2)/2*BW*w0/sin(w0))
filt_b = zeros((nchannels, 3, 1))
filt_a = zeros((nchannels, 3, 1))
filt_b[:, 0, 0] = sin(w0)/2
filt_b[:, 1, 0] = 0
filt_b[:, 2, 0] = -sin(w0)/2
filt_a[:, 0, 0] = 1+alpha
filt_a[:, 1, 0] = -2*cos(w0)
filt_a[:, 2, 0] = 1-alpha
#the filter which will have time varying coefficients
bandpass_filter = LinearFilterbank(sound, filt_b, filt_a)
#the updater
updater = CoeffController(bandpass_filter)
#the controller. Remember it must be the last of the chain
control = ControlFilterbank(bandpass_filter, fc_generator, bandpass_filter,
                            updater, update_interval)
time_varying_filter_mon = control.process()
figure(1)
pxx, freqs, bins, im = specgram(squeeze(time_varying_filter_mon),
                                NFFT=256, Fs=samplerate, noverlap=240)
imshow(flipud(pxx), aspect='auto')
show()
```
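The `filt_b`/`filt_a` assignments implement a standard constant-skirt bandpass biquad (as in the Audio EQ Cookbook). A NumPy sketch of the same coefficient formulas, checking the gain at the centre frequency:

```python
import numpy as np

fs = 20000.0   # sample rate, as in the example
fc = 4000.0    # centre frequency for the sketch
Q = 5.0
# bandwidth in octaves, computed as in the example
BW = 2 * np.arcsinh(1.0 / (2 * Q)) * 1.44269

w0 = 2 * np.pi * fc / fs
alpha = np.sin(w0) * np.sinh(np.log(2) / 2 * BW * w0 / np.sin(w0))
b = np.array([np.sin(w0) / 2, 0.0, -np.sin(w0) / 2])
a = np.array([1 + alpha, -2 * np.cos(w0), 1 - alpha])

# Evaluate the transfer function at the centre frequency: the peak gain is
# sin(w0)/(2*alpha), which tends to Q for small w0
z = np.exp(1j * w0)
H = (b[0] + b[1] / z + b[2] / z ** 2) / (a[0] + a[1] / z + a[2] / z ** 2)
assert np.isclose(abs(H), np.sin(w0) / (2 * alpha))
```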

##### Example: log_gammachirp (hears)¶

Example of the use of the class `LogGammachirp` available in the library. It implements a filterbank of IIR gammachirp filters as described in Unoki et al. 2001, “Improvement of an IIR asymmetric compensation gammachirp filter”. In this example, a white noise is filtered by a linear gammachirp filterbank and the resulting cochleogram is plotted. The different impulse responses are also plotted.

```
from brian import *
from brian.hears import *
sound = whitenoise(100*ms).ramp()
sound.level = 50*dB
nbr_center_frequencies = 50 #number of frequency channels in the filterbank
c1 = -2.96 #glide slope
b1 = 1.81 #factor determining the time constant of the filters
#center frequencies with a spacing following an ERB scale
cf = erbspace(100*Hz, 1000*Hz, nbr_center_frequencies)
gamma_chirp = LogGammachirp(sound, cf, c=c1, b=b1)
gamma_chirp_mon = gamma_chirp.process()
figure()
imshow(flipud(gamma_chirp_mon.T), aspect='auto')
show()
```

##### Example: drnl (hears)¶

Implementation example of the dual resonance nonlinear (DRNL) filter with parameters fitted for humans, as described in Lopez-Poveda, E. and Meddis, R., “A human nonlinear cochlear filterbank”, JASA 2001.

A class called `DRNL` implementing this model is available in the library.

The filter output is the sum of a linear and a nonlinear pathway.

The linear path consists of a bank of bandpass filters (second order gammatone), a low pass function, and a gain/attenuation factor, g, in a cascade.

The nonlinear path is a cascade consisting of a bank of gammatone filters, a compression function, a second bank of gammatone filters, and a low pass function, in that order.

The parameters are given in the form `10**(p0+m*log10(cf))`.

```
from brian import *
from brian.hears import *
simulation_duration = 50*ms
samplerate = 50*kHz
level = 50*dB # level of the input sound in rms dB SPL
sound = whitenoise(simulation_duration, samplerate).ramp()
sound.level = level
nbr_cf = 50 #number of centre frequencies
#center frequencies with a spacing following an ERB scale
center_frequencies = erbspace(100*Hz,1000*Hz, nbr_cf)
#conversion to stapes velocity (the units required by the subsequent stages)
sound = sound*0.00014
#### Linear Pathway ####
#bandpass filter (second order gammatone filter)
center_frequencies_linear = 10**(-0.067+1.016*log10(center_frequencies))
bandwidth_linear = 10**(0.037+0.785*log10(center_frequencies))
order_linear = 3
gammatone = ApproximateGammatone(sound, center_frequencies_linear,
                                 bandwidth_linear, order=order_linear)
#linear gain
g = 10**(4.2-0.48*log10(center_frequencies))
func_gain = lambda x:g*x
gain = FunctionFilterbank(gammatone, func_gain)
#low pass filter (cascade of 4 second order lowpass Butterworth filters)
cutoff_frequencies_linear = center_frequencies_linear
order_lowpass_linear = 2
lp_l = LowPass(gain, cutoff_frequencies_linear)
lowpass_linear = Cascade(gain, lp_l, 4)
#### Nonlinear Pathway ####
#bandpass filter (third order gammatone filters)
center_frequencies_nonlinear = center_frequencies
bandwidth_nonlinear = 10**(-0.031+0.774*log10(center_frequencies))
order_nonlinear = 3
bandpass_nonlinear1 = ApproximateGammatone(sound, center_frequencies_nonlinear,
                                           bandwidth_nonlinear,
                                           order=order_nonlinear)
#compression (linear at low level, compress at high level)
a = 10**(1.402+0.819*log10(center_frequencies)) #linear gain
b = 10**(1.619-0.818*log10(center_frequencies))
v = .2 #compression exponent
func_compression = lambda x: sign(x)*minimum(a*abs(x), b*abs(x)**v)
compression = FunctionFilterbank(bandpass_nonlinear1, func_compression)
#bandpass filter (third order gammatone filters)
bandpass_nonlinear2 = ApproximateGammatone(compression,
                                           center_frequencies_nonlinear,
                                           bandwidth_nonlinear,
                                           order=order_nonlinear)
#low pass filter
cutoff_frequencies_nonlinear = center_frequencies_nonlinear
order_lowpass_nonlinear = 2
lp_nl = LowPass(bandpass_nonlinear2, cutoff_frequencies_nonlinear)
lowpass_nonlinear = Cascade(bandpass_nonlinear2, lp_nl, 3)
#adding the two pathways
drnl_filter = lowpass_linear+lowpass_nonlinear
drnl = drnl_filter.process()
figure()
imshow(flipud(drnl.T), aspect='auto')
show()
```
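The broken-stick nonlinearity in the compression stage, `sign(x)*minimum(a*|x|, b*|x|**v)`, is linear at low levels and compressive at high levels. A self-contained check with toy parameter values (not the CF-dependent ones computed above):

```python
import numpy as np

a, b, v = 100.0, 30.0, 0.2   # toy gains and compression exponent

def compress(x):
    # linear branch a*|x| below the knee, compressive branch b*|x|**v above
    return np.sign(x) * np.minimum(a * np.abs(x), b * np.abs(x) ** v)

x_low, x_high = 1e-6, 1.0
assert np.isclose(compress(x_low), a * x_low)        # linear at low level
assert np.isclose(compress(x_high), b * x_high ** v) # compressive at high level
assert compress(-x_high) == -compress(x_high)        # odd symmetry
```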

##### Example: simple_anf (hears)¶

Example of a simple auditory nerve fibre model with Brian hears.

```
from brian import *
from brian.hears import *
sound1 = tone(1*kHz, .1*second)
sound2 = whitenoise(.1*second)
sound = sound1+sound2
sound = sound.ramp()
cf = erbspace(20*Hz, 20*kHz, 3000)
cochlea = Gammatone(sound, cf)
# Half-wave rectification and compression [x]^(1/3)
ihc = FunctionFilterbank(cochlea, lambda x: 3*clip(x, 0, Inf)**(1.0/3.0))
# Leaky integrate-and-fire model with noise and refractoriness
eqs = '''
dv/dt = (I-v)/(1*ms)+0.2*xi*(2/(1*ms))**.5 : 1
I : 1
'''
anf = FilterbankGroup(ihc, 'I', eqs, reset=0, threshold=1, refractory=5*ms)
M = SpikeMonitor(anf)
run(sound.duration)
raster_plot(M)
show()
```

##### Example: time_varying_filter1 (hears)¶

This example implements a band pass filter whose center frequency is modulated by an Ornstein-Uhlenbeck process. The white noise term used for this process is output by a `FunctionFilterbank`. The bandpass filter coefficients update is an example of how to use a `ControlFilterbank`. The bandpass filter is a basic biquadratic filter for which the Q factor and the center frequency must be given. The input is a white noise.

```
from brian import *
from brian.hears import *
samplerate = 20*kHz
SoundDuration = 300*ms
sound = whitenoise(SoundDuration, samplerate).ramp()
#number of frequency channels (here it must be one as a spectrogram of the
#output is plotted)
nchannels = 1
fc_init = 5000*Hz #initial center frequency of the band pass filter
Q = 5 #quality factor of the band pass filter
update_interval = 4 # the filter coefficients are updated every 4 samples
#parameters of the Ornstein-Uhlenbeck process
s_i = 1200*Hz
tau_i = 100*ms
mu_i = fc_init/tau_i
sigma_i = sqrt(2)*s_i/sqrt(tau_i)
deltaT = defaultclock.dt
#this function is used in a FunctionFilterbank. It outputs a noise term that
#will later be used by the controller to update the center frequency
noise = lambda x: mu_i*deltaT+sigma_i*randn(1)*sqrt(deltaT)
noise_generator = FunctionFilterbank(sound, noise)
#this class will take as input the output of the noise generator and as target
#the bandpass filter center frequency
class CoeffController(object):
    def __init__(self, target):
        self.target = target
        self.deltaT = 1./samplerate
        self.BW = 2*arcsinh(1./2/Q)*1.44269
        self.fc = fc_init
    def __call__(self, input):
        #the control variables are taken as the last of the buffer
        noise_term = input[-1,:]
        #update the center frequency by updating the OU process
        self.fc = self.fc-self.fc/tau_i*self.deltaT+noise_term
        w0 = 2*pi*self.fc/samplerate
        #update the coefficients of the biquadratic filterbank
        alpha = sin(w0)*sinh(log(2)/2*self.BW*w0/sin(w0))
        self.target.filt_b[:, 0, 0] = sin(w0)/2
        self.target.filt_b[:, 1, 0] = 0
        self.target.filt_b[:, 2, 0] = -sin(w0)/2
        self.target.filt_a[:, 0, 0] = 1+alpha
        self.target.filt_a[:, 1, 0] = -2*cos(w0)
        self.target.filt_a[:, 2, 0] = 1-alpha
#In the present example the time varying filter is a LinearFilterbank, therefore
#we must initialise the filter coefficients: the ones used for the first buffer computation
w0 = 2*pi*fc_init/samplerate
BW = 2*arcsinh(1./2/Q)*1.44269
alpha = sin(w0)*sinh(log(2)/2*BW*w0/sin(w0))
filt_b = zeros((nchannels, 3, 1))
filt_a = zeros((nchannels, 3, 1))
filt_b[:, 0, 0] = sin(w0)/2
filt_b[:, 1, 0] = 0
filt_b[:, 2, 0] = -sin(w0)/2
filt_a[:, 0, 0] = 1+alpha
filt_a[:, 1, 0] = -2*cos(w0)
filt_a[:, 2, 0] = 1-alpha
#the filter which will have time varying coefficients
bandpass_filter = LinearFilterbank(sound, filt_b, filt_a)
#the updater
updater = CoeffController(bandpass_filter)
#the controller. Remember it must be the last of the chain
control = ControlFilterbank(bandpass_filter, noise_generator, bandpass_filter,
                            updater, update_interval)
time_varying_filter_mon = control.process()
figure(1)
pxx, freqs, bins, im = specgram(squeeze(time_varying_filter_mon),
                                NFFT=256, Fs=samplerate, noverlap=240)
imshow(flipud(pxx), aspect='auto')
show()
```
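The centre-frequency update in `CoeffController` is an Euler step of an Ornstein-Uhlenbeck process with mean `mu_i*tau_i = fc_init`. A standalone NumPy sketch of the same discretisation (a coarser time step is used, chosen only to keep the sketch fast):

```python
import numpy as np

np.random.seed(0)
fs = 2000.0     # coarse sample rate for speed (the example uses 20 kHz)
dt = 1.0 / fs
fc0 = 5000.0    # initial centre frequency (Hz), fc_init in the example
tau = 0.1       # time constant (s), tau_i in the example
s = 1200.0
mu = fc0 / tau
sigma = np.sqrt(2) * s / np.sqrt(tau)

# Euler-Maruyama update, as in CoeffController.__call__
fc = fc0
trace = np.empty(20000)
for n in range(trace.size):
    fc = fc - fc / tau * dt + mu * dt + sigma * np.sqrt(dt) * np.random.randn()
    trace[n] = fc

# The process is mean-reverting: its long-run average stays near mu*tau = fc0
assert abs(trace[10000:].mean() - fc0) < 0.2 * fc0
```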

##### Example: gammatone (hears)¶

Example of the use of the class `Gammatone` available in the library. It implements a filterbank of IIR gammatone filters as described in Slaney, M., 1993, “An Efficient Implementation of the Patterson-Holdsworth Auditory Filter Bank”, Apple Computer Technical Report #35. In this example, a white noise is filtered by a gammatone filterbank and the resulting cochleogram is plotted.

```
from brian import *
from brian.hears import *
from matplotlib import pyplot
sound = whitenoise(100*ms).ramp()
sound.level = 50*dB
nbr_center_frequencies = 50
b1 = 1.019 #factor determining the time constant of the filters
#center frequencies with a spacing following an ERB scale
center_frequencies = erbspace(100*Hz, 1000*Hz, nbr_center_frequencies)
gammatone = Gammatone(sound, center_frequencies, b=b1)
gt_mon = gammatone.process()
figure()
imshow(gt_mon.T, aspect='auto', origin='lower left',
       extent=(0, sound.duration/ms,
               center_frequencies[0], center_frequencies[-1]))
pyplot.yscale('log')
title('Cochleogram')
ylabel('Frequency (Hz)')
xlabel('Time (ms)')
show()
```
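`erbspace` places centre frequencies uniformly on the ERB-rate scale (Glasberg and Moore). A sketch of that mapping in NumPy (the library's exact implementation may differ in detail):

```python
import numpy as np

def erb_rate(f):
    # ERB-rate scale for a frequency f in Hz (Glasberg & Moore formula)
    return 21.4 * np.log10(4.37 * f / 1000.0 + 1)

def erbspace_sketch(low, high, n):
    # Equally spaced points on the ERB-rate scale, mapped back to Hz
    e = np.linspace(erb_rate(low), erb_rate(high), n)
    return (10 ** (e / 21.4) - 1) * 1000.0 / 4.37

cf = erbspace_sketch(100.0, 1000.0, 50)
assert np.isclose(cf[0], 100.0) and np.isclose(cf[-1], 1000.0)
# The spacing is uniform on the ERB-rate scale, not in Hz
assert np.allclose(np.diff(erb_rate(cf)), np.diff(erb_rate(cf))[0])
```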

##### Example: cochlear_models (hears)¶

Example of the use of the cochlear models (`DRNL`, `DCGC` and `TanCarney`) available in the library.

```
from brian import *
from brian.hears import *
simulation_duration = 50*ms
set_default_samplerate(50*kHz)
sound = whitenoise(simulation_duration)
sound = sound.atlevel(50*dB) # level in rms dB SPL
cf = erbspace(100*Hz, 1000*Hz, 50) # centre frequencies
interval = 16 #update interval of the time varying filters
## DNRL
#param_drnl = {}
#param_drnl['lp_nl_cutoff_m'] = 1.1
#drnl_filter=DRNL(sound, cf, type='human', param=param_drnl)
#out = drnl_filter.process()
## DCGC
#param_dcgc = {}
#param_dcgc['c1'] = -2.96
#dcgc_filter = DCGC(sound, cf, interval, param=param_dcgc)
#out = dcgc_filter.process()
## Tan and Carney 2003
tan_filter = TanCarney(sound, cf, interval)
out = tan_filter.process()
figure()
imshow(flipud(out.T), aspect='auto')
show()
```

##### Example: approximate_gammatone (hears)¶

Example of the use of the class `ApproximateGammatone` available in the library. It implements a filterbank of approximate gammatone
filters as described in Hohmann, V., 2002, “Frequency analysis and synthesis
using a Gammatone filterbank”, Acta Acustica United with Acustica.
In this example, a white noise is filtered by a gammatone filterbank and the
resulting cochleogram is plotted.

```
from brian import *
from brian.hears import *
level=50*dB # level of the input sound in rms dB SPL
sound = whitenoise(100*ms).ramp() # generation of a white noise
sound = sound.atlevel(level) # set the sound to a certain dB level
nbr_center_frequencies = 50 # number of frequency channels in the filterbank
# center frequencies with a spacing following an ERB scale
center_frequencies = erbspace(100*Hz, 1000*Hz, nbr_center_frequencies)
# bandwidth of the filters (different in each channel)
bw = 10**(0.037+0.785*log10(center_frequencies))
gammatone = ApproximateGammatone(sound, center_frequencies, bw, order=3)
gt_mon = gammatone.process()
figure()
imshow(flipud(gt_mon.T), aspect='auto')
show()
```

##### Example: artificial_vowels (hears)¶

This example implements the artificial vowels from Culling, J. F. and Summerfield, Q. (1995a). “Perceptual segregation of concurrent speech sounds: absence of across-frequency grouping by common interaural delay” J. Acoust. Soc. Am. 98, 785-797.

```
from brian import *
from brian.hears import *
duration = 409.6*ms
width = 150*Hz/2
samplerate = 10*kHz
set_default_samplerate(samplerate)
centres = [225*Hz, 625*Hz, 975*Hz, 1925*Hz]
vowels = {
    'ee': [centres[0], centres[3]],
    'ar': [centres[1], centres[2]],
    'oo': [centres[0], centres[2]],
    'er': [centres[1], centres[3]]
    }
def generate_vowel(vowel):
    vowel = vowels[vowel]
    x = whitenoise(duration)
    y = fft(asarray(x).flatten())
    f = fftfreq(len(x), 1/samplerate)
    I = zeros(len(f), dtype=bool)
    for cf in vowel:
        I = I|((abs(f)<cf+width)&(abs(f)>cf-width))
    I = ~I # invert the mask to select the bins to zero out
    y[I] = 0
    x = ifft(y)
    return Sound(x.real)
v1 = generate_vowel('ee').ramp()
v2 = generate_vowel('ar').ramp()
v3 = generate_vowel('oo').ramp()
v4 = generate_vowel('er').ramp()
for s in [v1, v2, v3, v4]:
    s.play(normalise=True, sleep=True)
s1 = Sound((v1, v2))
#s1.play(normalise=True, sleep=True)
s2 = Sound((v3, v4))
#s2.play(normalise=True, sleep=True)
v1.save('mono_sound.wav')
s1.save('stereo_sound.wav')
subplot(211)
plot(v1.times, v1)
subplot(212)
v1.spectrogram()
show()
```
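The frequency-domain masking inside `generate_vowel` can be reproduced in pure NumPy: zero every FFT bin outside bands around the chosen formants, then invert. A self-contained sketch (arbitrary band choices, no Brian required):

```python
import numpy as np

fs = 10000.0
n = 4096
x = np.random.randn(n)
y = np.fft.fft(x)
f = np.fft.fftfreq(n, 1 / fs)

# Keep only bins within +-width of each formant centre (mirroring the loop
# over `vowel` above); using abs(f) keeps the spectrum conjugate-symmetric
centres, width = [625.0, 1925.0], 75.0
keep = np.zeros(n, dtype=bool)
for cf in centres:
    keep |= (np.abs(f) > cf - width) & (np.abs(f) < cf + width)
y[~keep] = 0
filtered = np.fft.ifft(y).real

# All remaining spectral energy lies inside the kept bands
spec = np.abs(np.fft.fft(filtered))
assert spec[~keep].max() < 1e-8 * spec.max()
```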

##### Example: linear_gammachirp (hears)¶

Example of the use of the class `LinearGammachirp` available in the library. It implements a filterbank of FIR gammatone filters with linear
frequency sweeps as described in Wagner et al. 2009, “Auditory responses in the
barn owl’s nucleus laminaris to clicks: impulse response and signal analysis of
neurophonic potential”, J. Neurophysiol. In this example, a white noise is
filtered by a gammachirp filterbank and the resulting cochleogram is plotted.
The different impulse responses are also plotted.

```
from brian import *
from brian.hears import *
sound = whitenoise(100*ms).ramp()
sound.level = 50*dB
nbr_center_frequencies = 10 #number of frequency channels in the filterbank
#center frequencies with a spacing following an ERB scale
center_frequencies = erbspace(100*Hz, 1000*Hz, nbr_center_frequencies)
c = 0.0 #glide slope
time_constant = linspace(3, 0.3, nbr_center_frequencies)*ms
gamma_chirp = LinearGammachirp(sound, center_frequencies, time_constant, c)
gamma_chirp_mon = gamma_chirp.process()
figure()
imshow(gamma_chirp_mon.T, aspect='auto')
figure()
plot(gamma_chirp.impulse_response.T)
show()
```

##### Example: IIRfilterbank (hears)¶

Example of the use of the class `IIRFilterbank` available in the library. In this example, a white noise is filtered by a bank of Chebyshev bandpass filters and lowpass filters, which differ for every channel. The centre frequencies of the filters are linearly spaced between 100 Hz and 1000 Hz, and the bandwidth or cutoff frequency increases linearly with frequency.

```
from brian import *
from brian.hears import *
sound = whitenoise(100*ms).ramp()
sound.level = 50*dB
### example of a bank of bandpass filter ################
nchannels = 50
center_frequencies = linspace(200*Hz, 1000*Hz, nchannels) #center frequencies
bw = linspace(50*Hz, 300*Hz, nchannels) #bandwidth of the filters
# The maximum loss in the passband in dB. Can be a scalar or an array of length
# nchannels
gpass = 1.*dB
# The minimum attenuation in the stopband in dB. Can be a scalar or an array
# of length nchannels
gstop = 10.*dB
#arrays of shape (2 x nchannels) defining the passband frequencies (Hz)
passband = vstack((center_frequencies-bw/2, center_frequencies+bw/2))
#arrays of shape (2 x nchannels) defining the stopband frequencies (Hz)
stopband = vstack((center_frequencies-1.1*bw, center_frequencies+1.1*bw))
filterbank = IIRFilterbank(sound, nchannels, passband, stopband, gpass, gstop,
                           'bandstop', 'cheby1')
filterbank_mon = filterbank.process()
figure()
subplot(211)
imshow(flipud(filterbank_mon.T), aspect='auto')
#### example of a bank of lowpass filter ################
nchannels = 50
cutoff_frequencies = linspace(100*Hz, 1000*Hz, nchannels)
#bandwidth of the transition region between the end of the passband and the
#beginning of the stopband
width_transition = linspace(50*Hz, 300*Hz, nchannels)
# The maximum loss in the passband in dB. Can be a scalar or an array of length
# nchannels
gpass = 1*dB
# The minimum attenuation in the stopband in dB. Can be a scalar or an array of
# length nchannels
gstop = 10*dB
passband = cutoff_frequencies-width_transition/2
stopband = cutoff_frequencies+width_transition/2
filterbank = IIRFilterbank(sound, nchannels, passband, stopband, gpass, gstop,
                           'low', 'cheby1')
filterbank_mon = filterbank.process()
subplot(212)
imshow(flipud(filterbank_mon.T), aspect='auto')
show()
```
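`IIRFilterbank` delegates to standard SciPy-style IIR design; a single Chebyshev type-I bandpass channel can be sketched with `scipy.signal.iirdesign` (the sample rate and band edges are assumptions for the sketch):

```python
import numpy as np
from scipy.signal import iirdesign, freqz

fs = 44100.0        # assumed sample rate for the sketch
fc, bw = 600.0, 175.0
nyq = fs / 2

# One channel of the Chebyshev type-I bandpass design
wp = [(fc - bw / 2) / nyq, (fc + bw / 2) / nyq]       # passband edges
ws = [(fc - 1.1 * bw) / nyq, (fc + 1.1 * bw) / nyq]   # stopband edges
b, a = iirdesign(wp, ws, gpass=1.0, gstop=10.0, ftype='cheby1')

# Passband ripple stays within gpass; stopband attenuation exceeds gstop
w, h = freqz(b, a, worN=[fc, fc - 2 * bw], fs=fs)
gain_db = 20 * np.log10(np.abs(h))
assert gain_db[0] > -1.01   # inside the passband
assert gain_db[1] < -10.0   # well inside the stopband
```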

##### Example: ircam_hrtf (hears)¶

Example showing the use of HRTFs in Brian hears. Note that you will need to download the `IRCAM_LISTEN` database.

```
from brian import *
from brian.hears import *
# Load database
hrtfdb = IRCAM_LISTEN(r'F:\HRTF\IRCAM')
hrtfset = hrtfdb.load_subject(1002)
# Select only the horizontal plane
hrtfset = hrtfset.subset(lambda elev: elev==0)
# Set up a filterbank
sound = whitenoise(10*ms)
fb = hrtfset.filterbank(sound)
# Extract the filtered response and plot
img = fb.process().T
img_left = img[:img.shape[0]//2, :]
img_right = img[img.shape[0]//2:, :]
subplot(121)
imshow(img_left, origin='lower left', aspect='auto',
       extent=(0, sound.duration/ms, 0, 360))
xlabel('Time (ms)')
ylabel('Azimuth')
title('Left ear')
subplot(122)
imshow(img_right, origin='lower left', aspect='auto',
       extent=(0, sound.duration/ms, 0, 360))
xlabel('Time (ms)')
ylabel('Azimuth')
title('Right ear')
show()
```

##### Example: cochleagram (hears)¶

Example of basic filtering of a sound with Brian hears. This example implements a cochleagram based on a gammatone filterbank followed by halfwave rectification, cube root compression and 10 Hz low pass filtering.

```
from brian import *
from brian.hears import *
sound1 = tone(1*kHz, .1*second)
sound2 = whitenoise(.1*second)
sound = sound1+sound2
sound = sound.ramp()
cf = erbspace(20*Hz, 20*kHz, 3000)
gammatone = Gammatone(sound, cf)
cochlea = FunctionFilterbank(gammatone, lambda x: clip(x, 0, Inf)**(1.0/3.0))
lowpass = LowPass(cochlea, 10*Hz)
output = lowpass.process()
imshow(output.T, origin='lower left', aspect='auto', vmin=0)
show()
```

##### Example: sounds (hears)¶

Example of basic use and manipulation of sounds with Brian hears.

```
from brian import *
from brian.hears import *
sound1 = tone(1*kHz, 1*second)
sound2 = whitenoise(1*second)
sound = sound1+sound2
sound = sound.ramp()
# Comment this line out if you don't have pygame installed
sound.play()
# The first 20ms of the sound
startsound = sound[:20*ms]
subplot(121)
plot(startsound.times, startsound)
subplot(122)
sound.spectrogram()
show()
```
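Slicing a sound with times, as in `sound[:20*ms]` above, amounts to converting the time to a sample count via the samplerate; a rough NumPy-only sketch (the 44.1 kHz value is an assumption for illustration, Brian hears uses the sound's own samplerate):

```python
import numpy as np

def time_slice(samples, t_stop, samplerate):
    # Convert the stop time (in seconds) to a sample count, then slice
    n = int(np.rint(t_stop * samplerate))
    return samples[:n]

samples = np.zeros(44100)                        # one second at an assumed 44.1 kHz
first20ms = time_slice(samples, 0.02, 44100.0)   # 0.02 s * 44100 Hz = 882 samples
```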

#### hears/tan_carney_2003¶

##### Example: tan_carney_Fig7 (hears/tan_carney_2003)¶

CF-dependence of compressive nonlinearity in the Tan&Carney model. Reproduces Fig. 7 from:

- Tan, Q., and L. H. Carney. “A Phenomenological Model for the Responses of Auditory-nerve Fibers. II. Nonlinear Tuning with a Frequency Glide”. The Journal of the Acoustical Society of America 114 (2003): 2007.
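The quantity plotted on the y-axis, the RMS of the AC (mean-removed) component of F_out over the steady-state window, reduces to a one-line computation; a NumPy sketch:

```python
import numpy as np

def ac_rms(segment):
    # RMS of the AC component: subtract the mean, then root-mean-square
    segment = np.asarray(segment, dtype=float)
    return np.sqrt(np.mean((segment - np.mean(segment)) ** 2))
```

A pure DC segment therefore has an AC RMS of exactly zero.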

```
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d
from brian import *
#set_global_preferences(useweave=True)
from brian.hears import *
from brian.hears.filtering.tan_carney import TanCarneySignal, MiddleEar
samplerate = 50*kHz
set_default_samplerate(samplerate)
duration = 50*ms
def product(*args):
    # Simple (and inefficient) variant of itertools.product that works for
    # Python 2.5 (directly returns a list instead of yielding one item at a
    # time)
    pools = map(tuple, args)
    result = [[]]
    for pool in pools:
        result = [x+[y] for x in result for y in pool]
    return result

def gen_tone(freq, level):
    '''
    Little helper function to generate a pure tone at frequency `freq` with
    the given `level`. The tone has a duration of 50ms and is ramped with
    two ramps of 2.5ms.
    '''
    freq = float(freq) * Hz
    level = float(level) * dB
    return tone(freq, duration).ramp(when='both',
                                     duration=2.5*ms,
                                     inplace=False).atlevel(level)

freqs = [500, 1100, 2000, 4000]
levels = np.arange(-10, 100.1, 5)
cf_level = product(freqs, levels)

# steady-state window
start = 10*ms*samplerate
end = 45*ms*samplerate

# For Figure 7, the gain has to be manually adjusted for different CFs --
# otherwise the RMS values wouldn't be identical for low CFs. Therefore, try
# to estimate suitable gain values first, using the lowest CF as a reference
ref_tone = gen_tone(freqs[0], levels[0])
F_out_reference = TanCarneySignal(MiddleEar(ref_tone, gain=1), freqs[0],
                                  update_interval=1).process().flatten()
ref_rms = np.sqrt(np.mean((F_out_reference[start:end] -
                           np.mean(F_out_reference[start:end]))**2))

gains = np.linspace(0.1, 1, 50)  # for higher CFs we need lower gains
cf_gains = product(freqs[1:], gains)
tones = Sound([gen_tone(freq, levels[0]) for freq, _ in cf_gains])
F_out_test = TanCarneySignal(MiddleEar(tones, gain=np.array([g for _, g in cf_gains])),
                             [cf for cf, _ in cf_gains], update_interval=1).process()
reshaped_Fout = F_out_test.T.reshape((len(freqs[1:]), len(gains), -1))
rms = np.sqrt(np.mean((reshaped_Fout[:, :, start:end].T -
                       np.mean(reshaped_Fout[:, :, start:end], axis=2).T).T**2,
                      axis=2))

# get the best gain for each CF using simple linear interpolation
gain_dict = {freqs[0]: 1.}  # reference gain
for idx, freq in enumerate(freqs[1:]):
    gain_dict[freq] = interp1d(rms[idx, :], gains)(ref_rms)

# now do the real test: tones at different levels for different CFs
tones = Sound([gen_tone(freq, level) for freq, level in cf_level])
F_out = TanCarneySignal(MiddleEar(tones,
                                  gain=np.array([gain_dict[cf] for cf, _ in cf_level])),
                        [cf for cf, _ in cf_level],
                        update_interval=1).process()
reshaped_Fout = F_out.T.reshape((len(freqs), len(levels), -1))
rms = np.sqrt(np.mean((reshaped_Fout[:, :, start:end].T -
                       np.mean(reshaped_Fout[:, :, start:end], axis=2).T).T**2,
                      axis=2))
# This should more or less reproduce Fig. 7
plt.plot(levels, rms.T)
plt.legend(['%.0f Hz' % cf for cf in freqs], loc='best')
plt.xlim(-20, 100)
plt.ylim(1e-6, 1)
plt.yscale('log')
plt.xlabel('input signal SPL (dB)')
plt.ylabel('rms of AC component of Fout')
plt.show()
```
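The `product` helper above exists only for Python 2.5 compatibility; on any later Python it is equivalent to materializing `itertools.product`. A sketch of the equivalence:

```python
import itertools

def product(*args):
    # List-returning Cartesian product, as in the example above
    pools = [tuple(a) for a in args]
    result = [[]]
    for pool in pools:
        result = [x + [y] for x in result for y in pool]
    return result

# Same pairs via the standard library (which yields tuples instead of lists)
pairs = [list(t) for t in itertools.product([500, 1100], [10, 30])]
```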

##### Example: tan_carney_simple_test (hears/tan_carney_2003)¶

Figures 1 and 3 (the synaptic output, without spiking/refractory period) should reproduce the output of the AN3_test_tone.m and AN3_test_click.m scripts from the code accompanying the paper Tan & Carney (2003). This MATLAB code is available from http://www.urmc.rochester.edu/labs/Carney-Lab/publications/auditory-models.cfm

Tan, Q., and L. H. Carney. “A Phenomenological Model for the Responses of Auditory-nerve Fibers. II. Nonlinear Tuning with a Frequency Glide”. The Journal of the Acoustical Society of America 114 (2003): 2007.

```
import numpy as np
import matplotlib.pyplot as plt
from brian.stdunits import kHz, Hz, ms
from brian.network import Network
from brian.monitor import StateMonitor, SpikeMonitor
from brian.globalprefs import set_global_preferences
#set_global_preferences(useweave=True)
from brian.hears import (Sound, get_samplerate, set_default_samplerate, tone,
                         click, silence, dB, TanCarney, MiddleEar, ZhangSynapse)
from brian.clock import reinit_default_clock
set_default_samplerate(50*kHz)
sample_length = 1 / get_samplerate(None)
cf = 1000 * Hz
print 'Testing click response'
duration = 25*ms
levels = [40, 60, 80, 100, 120]
# a click of two samples
tones = Sound([Sound.sequence([click(sample_length*2, peak=level*dB),
                               silence(duration=duration - sample_length)])
               for level in levels])
ihc = TanCarney(MiddleEar(tones), [cf] * len(levels), update_interval=1)
syn = ZhangSynapse(ihc, cf)
s_mon = StateMonitor(syn, 's', record=True, clock=syn.clock)
R_mon = StateMonitor(syn, 'R', record=True, clock=syn.clock)
spike_mon = SpikeMonitor(syn)
net = Network(syn, s_mon, R_mon, spike_mon)
net.run(duration * 1.5)
for idx, level in enumerate(levels):
    plt.figure(1)
    plt.subplot(len(levels), 1, idx + 1)
    plt.plot(s_mon.times / ms, s_mon[idx])
    plt.xlim(0, 25)
    plt.xlabel('Time (msec)')
    plt.ylabel('Sp/sec')
    plt.text(15, np.nanmax(s_mon[idx])/2., 'Peak SPL=%s SPL' % str(level*dB))
    ymin, ymax = plt.ylim()
    if idx == 0:
        plt.title('Click responses')
    plt.figure(2)
    plt.subplot(len(levels), 1, idx + 1)
    plt.plot(R_mon.times / ms, R_mon[idx])
    plt.xlabel('Time (msec)')
    plt.text(15, np.nanmax(s_mon[idx])/2., 'Peak SPL=%s SPL' % str(level*dB))
    plt.ylim(ymin, ymax)
    if idx == 0:
        plt.title('Click responses (with spikes and refractoriness)')
    plt.plot(spike_mon.spiketimes[idx] / ms,
             np.ones(len(spike_mon.spiketimes[idx])) * np.nanmax(R_mon[idx]), 'rx')
print 'Testing tone response'
reinit_default_clock()
duration = 60*ms
levels = [0, 20, 40, 60, 80]
tones = Sound([Sound.sequence([tone(cf, duration).atlevel(level*dB).ramp(when='both',
                                                                         duration=10*ms,
                                                                         inplace=False),
                               silence(duration=duration/2)])
               for level in levels])
ihc = TanCarney(MiddleEar(tones), [cf] * len(levels), update_interval=1)
syn = ZhangSynapse(ihc, cf)
s_mon = StateMonitor(syn, 's', record=True, clock=syn.clock)
R_mon = StateMonitor(syn, 'R', record=True, clock=syn.clock)
spike_mon = SpikeMonitor(syn)
net = Network(syn, s_mon, R_mon, spike_mon)
net.run(duration * 1.5)
for idx, level in enumerate(levels):
    plt.figure(3)
    plt.subplot(len(levels), 1, idx + 1)
    plt.plot(s_mon.times / ms, s_mon[idx])
    plt.xlim(0, 120)
    plt.xlabel('Time (msec)')
    plt.ylabel('Sp/sec')
    plt.text(1.25 * duration/ms, np.nanmax(s_mon[idx])/2., '%s SPL' % str(level*dB))
    ymin, ymax = plt.ylim()
    if idx == 0:
        plt.title('CF=%.0f Hz - Response to Tone at CF' % cf)
    plt.figure(4)
    plt.subplot(len(levels), 1, idx + 1)
    plt.plot(R_mon.times / ms, R_mon[idx])
    plt.xlabel('Time (msec)')
    plt.text(1.25 * duration/ms, np.nanmax(R_mon[idx])/2., '%s SPL' % str(level*dB))
    plt.ylim(ymin, ymax)
    if idx == 0:
        plt.title('CF=%.0f Hz - Response to Tone at CF (with spikes and refractoriness)' % cf)
    plt.plot(spike_mon.spiketimes[idx] / ms,
             np.ones(len(spike_mon.spiketimes[idx])) * np.nanmax(R_mon[idx]), 'rx')
plt.show()
```

##### Example: tan_carney_Fig11 (hears/tan_carney_2003)¶

Response area and phase response of a model fiber with CF=2200Hz in the Tan&Carney model. Reproduces Fig. 11 from:

- Tan, Q., and L. H. Carney. “A Phenomenological Model for the Responses of Auditory-nerve Fibers. II. Nonlinear Tuning with a Frequency Glide”. The Journal of the Acoustical Society of America 114 (2003): 2007.

```
import matplotlib.pyplot as plt
import numpy as np
from brian import *
# set_global_preferences(useweave=True)
from brian.hears import *
def product(*args):
    # Simple (and inefficient) variant of itertools.product that works for
    # Python 2.5 (directly returns a list instead of yielding one item at a
    # time)
    pools = map(tuple, args)
    result = [[]]
    for pool in pools:
        result = [x+[y] for x in result for y in pool]
    return result
duration = 50*ms
samplerate = 50*kHz
set_default_samplerate(samplerate)
CF = 2200
freqs = np.arange(250.0, 3501., 50.)
levels = [10, 30, 50, 70, 90]
cf_level = product(freqs, levels)
tones = Sound([Sound.sequence([tone(freq * Hz, duration).atlevel(level*dB).ramp(when='both',
                                                                                duration=2.5*ms,
                                                                                inplace=False)])
               for freq, level in cf_level])
ihc = TanCarney(MiddleEar(tones), [CF] * len(cf_level), update_interval=2)
syn = ZhangSynapse(ihc, CF)
s_mon = StateMonitor(syn, 's', record=True, clock=syn.clock)
net = Network(syn, s_mon)
net.run(duration)
reshaped = s_mon.values.reshape((len(freqs), len(levels), -1))
# calculate the phase with respect to the stimulus
pi = np.pi
min_freq, max_freq = 1100, 2900
freq_subset = freqs[(freqs>=min_freq) & (freqs<=max_freq)]
reshaped_subset = reshaped[(freqs>=min_freq) & (freqs<=max_freq), :, :]
phases = np.zeros((reshaped_subset.shape[0], len(levels)))
for f_idx, freq in enumerate(freq_subset):
    period = 1.0 / freq
    for l_idx in xrange(len(levels)):
        phase_angles = np.arange(reshaped_subset.shape[2])/samplerate % period / period * 2*pi
        temp_phases = (np.exp(1j * phase_angles) *
                       reshaped_subset[f_idx, l_idx, :])
        phases[f_idx, l_idx] = np.angle(np.sum(temp_phases))
plt.subplot(2, 1, 1)
rate = reshaped.mean(axis=2)
plt.plot(freqs, rate)
plt.ylabel('Spikes/sec')
plt.legend(['%.0f dB' % level for level in levels], loc='best')
plt.xlim(0, 4000)
plt.ylim(0, 250)
plt.subplot(2, 1, 2)
relative_phases = (phases.T - phases[:, -1]).T
relative_phases[relative_phases > pi] = relative_phases[relative_phases > pi] - 2*pi
relative_phases[relative_phases < -pi] = relative_phases[relative_phases < -pi] + 2*pi
plt.plot(freq_subset, relative_phases / pi)
plt.ylabel("Phase Re:90dB (pi radians)")
plt.xlabel('Frequency (Hz)')
plt.legend(['%.0f dB' % level for level in levels], loc='best')
plt.xlim(0, 4000)
plt.ylim(-0.5, 0.75)
plt.show()
```
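The two in-place corrections applied to `relative_phases` above implement phase wrapping into the interval (-pi, pi]; as a standalone helper:

```python
import numpy as np

def wrap_phase(p):
    # Shift phase differences outside (-pi, pi] back into that interval,
    # exactly as the two in-place corrections in the example do
    p = np.array(p, dtype=float)
    p[p > np.pi] -= 2 * np.pi
    p[p < -np.pi] += 2 * np.pi
    return p

wrapped = wrap_phase([1.5 * np.pi, -1.5 * np.pi, 0.25 * np.pi])
```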

#### modelfitting¶

##### Example: modelfitting (modelfitting)¶

Model fitting example. Fit an integrate-and-fire model to an in-vitro electrophysiological recording during one second.

```
from brian import loadtxt, ms, Equations
from brian.library.modelfitting import *
if __name__ == '__main__':
    equations = Equations('''
        dV/dt=(R*I-V)/tau : 1
        I : 1
        R : 1
        tau : second
        ''')
    input = loadtxt('current.txt')
    spikes = loadtxt('spikes.txt')
    results = modelfitting(model=equations,
                           reset=0,
                           threshold=1,
                           data=spikes,
                           input=input,
                           dt=.1*ms,
                           popsize=1000,
                           maxiter=3,
                           delta=4*ms,
                           R=[1.0e9, 9.0e9],
                           tau=[10*ms, 40*ms],
                           refractory=[0*ms, 10*ms])
    print_table(results)
```

##### Example: modelfitting_groups (modelfitting)¶

Example showing how to fit a single model with different target spike trains (several groups).

```
from brian import loadtxt, ms, Equations, second
from brian.library.modelfitting import *
if __name__ == '__main__':
    model = Equations('''
        dV/dt=(R*I-V)/tau : 1
        I : 1
        R : 1
        tau : second
        ''')
    input = loadtxt('current.txt')
    spikes0 = loadtxt('spikes.txt')
    spikes = []
    for i in xrange(2):
        spikes.extend([(i, spike*second + 5*i*ms) for spike in spikes0])
    results = modelfitting(model=model,
                           reset=0,
                           threshold=1,
                           data=spikes,
                           input=input,
                           dt=.1*ms,
                           popsize=1000,
                           maxiter=3,
                           cpu=1,
                           delta=4*ms,
                           R=[1.0e9, 9.0e9],
                           tau=[10*ms, 40*ms],
                           delays=[-10*ms, 10*ms])
    print_table(results)
```

##### Example: modelfitting_machines (modelfitting)¶

Model fitting example using several machines. Before running this example, you must start the Playdoh server on the remote machines.

```
from brian import loadtxt, ms, Equations
from brian.library.modelfitting import *
if __name__ == '__main__':
    # List of machine IP addresses
    machines = ['bobs-machine.university.com',
                'jims-machine.university.com']
    equations = Equations('''
        dV/dt=(R*I-V)/tau : 1
        I : 1
        R : 1
        tau : second
        ''')
    input = loadtxt('current.txt')
    spikes = loadtxt('spikes.txt')
    results = modelfitting(model=equations,
                           reset=0,
                           threshold=1,
                           data=spikes,
                           input=input,
                           dt=.1*ms,
                           popsize=1000,
                           maxiter=3,
                           delta=4*ms,
                           unit_type='CPU',
                           machines=machines,
                           R=[1.0e9, 9.0e9],
                           tau=[10*ms, 40*ms],
                           refractory=[0*ms, 10*ms])
    print_table(results)
```

#### frompapers¶

##### Example: Touboul_Brette_2008 (frompapers)¶

###### Chaos in the AdEx model¶

Fig. 8B from: Touboul, J. and Brette, R. (2008). Dynamics and bifurcations of the adaptive exponential integrate-and-fire model. Biological Cybernetics 99(4-5):319-34.

This shows the bifurcation structure when the reset value is varied (the vertical axis shows the values of w at spike times for a given reset value Vr).

```
from brian import *
defaultclock.dt=0.01*ms
C=281*pF
gL=30*nS
EL=-70.6*mV
```