Neurokernel¶
Project Website | GitHub Repository | Mailing List | Forum
Introduction¶
Neurokernel is an open software platform written in Python for emulation of the brain of the fruit fly (Drosophila melanogaster) on multiple Graphics Processing Units (GPUs). It provides a programming model based upon the organization of the fly’s brain into fewer than 50 modular subdivisions called local processing units (LPUs) that are each characterized by unique populations of local neurons [1]. Using Neurokernel’s API, researchers can develop models of individual LPUs and combine them with other independently developed LPU models to collaboratively construct models of entire subsystems of the fly brain. Neurokernel’s support for LPU model integration also enables exploration of brain functions that cannot be exclusively attributed to individual LPUs or brain subsystems.
Examples of Neurokernel’s use are available on the project website.
[1] Chiang, A.-S., Lin, C.-Y., Chuang, C.-C., Chang, H.-M., Hsieh, C.-H., Yeh, C.-W., et al. (2011), Three-dimensional reconstruction of brain-wide wiring networks in Drosophila at single-cell resolution, Current Biology, 21(1), 1–11, doi:10.1016/j.cub.2010.11.056
Installation¶
Prerequisites¶
Neurokernel requires
- Linux (other operating systems may work, but have not been tested);
- Python 2.7 (Python 3 is not guaranteed to work);
- at least one NVIDIA GPU with Fermi architecture or later;
- NVIDIA’s GPU drivers;
- CUDA 5.0 or later;
- OpenMPI 1.8.4 or later compiled with CUDA support.
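For reference, the version constraints above can be expressed as simple Python predicates. The helper names below are illustrative only and are not part of Neurokernel:

```python
import sys

def python_ok(version_info):
    """Return True if the interpreter is in the supported 2.7 series."""
    return version_info[0] == 2 and version_info[1] == 7

def cuda_version_ok(major, minor):
    """Return True for CUDA 5.0 or later."""
    return (major, minor) >= (5, 0)

if __name__ == '__main__':
    print('Running a supported Python: %s' % python_ok(sys.version_info))
```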
To check what GPUs are in your system, you can use the inxi command available on most Linux distributions:
inxi -G
You can verify that the drivers are loaded as follows:
lsmod | grep nvidia
If no drivers are present, you may have to manually load them by running something like:
modprobe nvidia
as root.
Although some Linux distributions include CUDA in their stock package repositories, you are encouraged to use the packages distributed by NVIDIA, as they are often more up-to-date and include more recent releases of the GPU drivers. See this page for download information.
If you install Neurokernel in a virtualenv environment, you will need to install OpenMPI. See this page for OpenMPI installation information. Note that OpenMPI 1.8 cannot run on Windows.
Some of Neurokernel’s demos require either ffmpeg or libav to be installed in order to generate visualizations (see Examples).
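If you are unsure whether a suitable encoder is available, a small PATH search can check for the ffmpeg or avconv (libav) binaries. This is a convenience sketch written to work under both Python 2.7 and 3; the helper names are hypothetical:

```python
import os

def find_executable(name):
    """Search the directories in PATH for an executable file `name`."""
    for d in os.environ.get('PATH', '').split(os.pathsep):
        candidate = os.path.join(d, name)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None

def have_video_encoder():
    """Return True if either ffmpeg or avconv (libav) is on the PATH."""
    return any(find_executable(n) for n in ('ffmpeg', 'avconv'))
```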
Installation¶
Download the latest Neurokernel code as follows:
git clone https://github.com/neurokernel/neurokernel.git
Since Neurokernel requires a fair number of additional Python packages to run, it is recommended that it be installed in either a virtualenv or conda environment. Follow the relevant instructions below.
Virtualenv¶
See this page for virtualenv installation information.
Create a new virtualenv environment and install several required dependencies:
cd ~/
virtualenv NK
~/NK/bin/pip install numpy cython numexpr pycuda
If installation of PyCUDA fails because some of the CUDA development files or libraries are not found, you may need to specify their locations explicitly. For example, if CUDA is installed in /usr/local/cuda/, try installing PyCUDA as follows:
CUDA_ROOT=/usr/local/cuda/ CFLAGS=-I${CUDA_ROOT}/include \
LDFLAGS=-L${CUDA_ROOT}/lib64 ~/NK/bin/pip install pycuda
Replace ${CUDA_ROOT}/lib64 with ${CUDA_ROOT}/lib if your system is running 32-bit Linux. If you continue to encounter installation problems, see the PyCUDA Wiki for more information.
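If you are not sure where CUDA lives on your system, a short script can probe a few common installation prefixes for the nvcc compiler before you set CUDA_ROOT. The candidate paths and helper name below are assumptions; adjust them for your setup:

```python
import os

# Common CUDA installation prefixes; extend this list for your system.
COMMON_ROOTS = ['/usr/local/cuda', '/opt/cuda']

def find_cuda_root(candidates=COMMON_ROOTS):
    """Return the first candidate directory containing bin/nvcc, else None."""
    for root in candidates:
        if os.path.isfile(os.path.join(root, 'bin', 'nvcc')):
            return root
    return None
```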
Run the following to install the remaining Python package dependencies listed in setup.py:
cd ~/neurokernel
~/NK/bin/python setup.py develop
Conda¶
Note that conda packages are currently only available for 64-bit Ubuntu Linux 14.04. If you would like packages for another distribution, please submit a request to the Neurokernel developers.
First, install the following Ubuntu packages:
libibverbs1
libnuma1
libpmi0
libslurm26
libtorque2
These packages are required by the conda OpenMPI packages prepared for Neurokernel. Ensure that the stock Ubuntu OpenMPI packages are not installed, as they may interfere with the ones that will be installed by conda. You also need to ensure that CUDA has been installed in /usr/local/cuda.
Install conda by either installing Anaconda or Miniconda. Make sure that the following lines appear in your ~/.condarc file so that conda can find the packages required by Neurokernel:
channels:
- https://conda.binstar.org/neurokernel/channel/ubuntu1404
- defaults
Create a new conda environment containing the packages required by Neurokernel by running the following command:
conda create -n NK neurokernel_deps
PyCUDA packages compiled against several versions of CUDA are available. If you need one compiled against a specific version that differs from the one automatically installed by the above command, you will need to install it manually afterwards as follows (replace cuda75 with the appropriate version):
source activate NK
conda install pycuda=2015.1.3=np110py27_cuda75_0
source deactivate
Activate the new environment and install Neurokernel in it as follows:
source activate NK
cd ~/neurokernel
python setup.py develop
Examples¶
Introductory examples of how to use Neurokernel to build and integrate models of different parts of the fly brain are available in the Neurodriver package. To install it, run the following:
git clone https://github.com/neurokernel/neurodriver
cd ~/neurodriver
python setup.py develop
Other models built using Neurokernel are available on GitHub.
Building the Documentation¶
To build Neurokernel’s HTML documentation locally, you will need to install
- mock 1.0 or later.
- sphinx 1.3 or later.
- sphinx_rtd_theme 0.1.6 or later.
Once these are installed, run the following:
cd ~/neurokernel/docs
make html
Reference¶
Model Development API¶
Local Processing Units¶
neurokernel.core.Module | Processing module.
neurokernel.core_gpu.Module | Processing module.
Inter-LPU Connectivity¶
neurokernel.pattern.Interface | Container for a set of interface ports.
neurokernel.pattern.Pattern | Connectivity pattern linking sets of interface ports.
Emulation Management¶
Construction and Execution¶
neurokernel.core.Manager | Module manager.
neurokernel.core_gpu.Manager | Module manager.
Support Classes¶
neurokernel.routing_table.RoutingTable | Routing table class.
neurokernel.mpi.Worker | MPI worker class.
neurokernel.mpi.WorkerManager | Self-launching MPI worker manager.
Support Classes and Functions¶
Path-Like Port Identifier Handling¶
Selector | Validated and expanded port selector.
SelectorMethods | Class for manipulating and using path-like selectors.
SelectorParser | Parser for path-like selectors that can be associated with elements in a sequential data structure such as a Pandas DataFrame; in the latter case, each level of the selector corresponds to a level of a Pandas MultiIndex.
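The full selector grammar is richer than this summary suggests. Purely as an illustration of the idea (this is not the real SelectorParser API, and the expanded identifier format shown is a guess), a toy expander for bracketed-range notation might look like:

```python
import re

def expand_selector(sel):
    """Expand a simplified path-like selector such as '/med/in[0:3]' into a
    list of concrete port identifiers. Only a single trailing [m:n] range
    is supported in this sketch."""
    m = re.match(r'^(.*)\[(\d+):(\d+)\]$', sel)
    if not m:
        # No range suffix: the selector already names a single port.
        return [sel]
    base, lo, hi = m.group(1), int(m.group(2)), int(m.group(3))
    return ['%s/%d' % (base, i) for i in range(lo, hi)]
```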
GPU Port Mappers¶
GPUPortMapper | Maps a PyCUDA GPUArray to/from path-like port identifiers.
Python Port Mappers¶
BasePortMapper | Maps an integer sequence to/from path-like port identifiers.
PortMapper | Maps a numpy array to/from path-like port identifiers.
XML Tools¶
graph_to_nml_module | Convert a module expressed as NetworkX graphs into Neurokernel NeuroML.
graph_to_nml_pattern | Convert a pattern expressed as a NetworkX graph into Neurokernel NeuroML.
load | Load a Neurokernel NeuroML document.
nml_pattern_to_graph | Convert a pattern expressed in Neurokernel NeuroML into a NetworkX graph.
nml_module_to_graph | Convert a module expressed in Neurokernel NeuroML into NetworkX graphs.
write | Write a Neurokernel NeuroML document to an XML file.
Context Managers¶
ExceptionOnSignal | Raise a specific exception when the specified signal is detected.
IgnoreKeyboardInterrupt | Ignore keyboard interrupts.
IgnoreSignal | Ignore the specified signal.
OnKeyboardInterrupt | Respond to keyboard interrupt with the specified handler.
TryExceptionOnSignal | Check for an exception raised in response to a specific signal.
GPU Tools¶
bufint | Return a buffer interface to a GPU or numpy array.
set_by_inds | Set values in a GPUArray by index.
set_by_inds_from_inds | Set values in a GPUArray by index from indexed values in another GPUArray.
set_realloc | Transfer data into a GPUArray instance.
ZeroMQ Tools¶
get_random_port | Return an available random ZeroMQ port.
is_poll_in | Check for incoming data on a socket using a poller.
ZMQOutput |
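get_random_port is documented only as returning an available port; the underlying idea can be sketched with the standard socket module by letting the OS pick a free ephemeral port. This is a sketch, not Neurokernel's implementation, and note that the returned port could in principle be claimed by another process before it is reused:

```python
import socket

def get_unused_port():
    """Bind to port 0 so the OS assigns a free ephemeral port, then
    release the socket and return the port number."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(('127.0.0.1', 0))
        return s.getsockname()[1]
    finally:
        s.close()
```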
Visualization Tools¶
imdisp | Display the specified image file using matplotlib.
show_pydot | Display a networkx graph using pydot.
show_pygraphviz | Display a networkx graph using pygraphviz.
Logging Tools¶
log_exception | Log the specified exception data using twiggy.
set_excepthook | Set the exception hook to use the specified logger.
setup_logger | Set up a twiggy logger.
Other¶
LoggerMixin (name[, log_on]) | Mixin that provides a per-instance logger that can be turned off.
catch_exception | Catch and report exceptions when executing a function.
rand_bin_matrix | Generate a rectangular binary matrix with randomly distributed nonzero entries.
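The actual signature of rand_bin_matrix is not shown here; the following is a hypothetical pure-Python sketch of what such a utility does (Neurokernel's version presumably operates on numpy arrays):

```python
import random

def rand_bin_matrix(shape, n, seed=None):
    """Return a (rows x cols) nested list containing exactly n ones placed
    at uniformly random distinct positions, with zeros elsewhere."""
    rows, cols = shape
    rng = random.Random(seed)
    m = [[0] * cols for _ in range(rows)]
    # Sample n distinct flat indices and map each back to (row, col).
    for idx in rng.sample(range(rows * cols), n):
        m[idx // cols][idx % cols] = 1
    return m
```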
Authors & Acknowledgements¶
The Neurokernel Project was begun in July 2011 by Aurel A. Lazar at Columbia University’s Department of Electrical Engineering after extensive discussions held during a research seminar on Massively Parallel Neural Computation. The Neurokernel Development Team currently comprises the following Bionet Group researchers:
Past contributors who have participated in the project include
The Neurokernel logo is based upon the logo of the FlyJunkies web site, used with permission for non-profit use by the site maintainers Gavin Davis and Fraser Perry.
Licenses¶
Neurokernel¶
Copyright (c) 2012-2015, Neurokernel Development Team. All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
- Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
- Neither the names of the copyright holders nor the names of any contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
libNeuroML¶
Copyright (c) 2012, libNeuroML authors and contributors. All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
- Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
- Neither the names of the copyright holders nor the names of the contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.