Welcome to the Spyke Viewer documentation!

Spyke Viewer is a multi-platform GUI application for navigating, analyzing and visualizing electrophysiological datasets. It is based on the Neo library, which enables it to load a wide variety of data formats used in electrophysiology. At its core, Spyke Viewer includes functionality for navigating Neo object hierarchies and performing operations on them.

A central design goal of Spyke Viewer is flexibility. For this purpose, it includes an embedded Python console and a plugin system. It comes with a variety of plugins implementing common neuroscientific analyses and plots (e.g. rasterplot, peristimulus time histogram, correlogram and signal plot). Plugins can be easily created and modified using the integrated Python editor or external editors.

A mailing list for discussion and support is available at https://groups.google.com/d/forum/spyke-viewer

Users can download and share plugins and other extensions at http://spyke-viewer.g-node.org

If you use Spyke Viewer in work that leads to a scientific publication, please cite:
Pröpper, R. and Obermayer, K. (2013). Spyke Viewer: a flexible and extensible platform for electrophysiological data analysis. Front. Neuroinform. 7:26. doi: 10.3389/fninf.2013.00026

The following screenshots illustrate the functionality of the program:

_images/screenshot1.png _images/screenshot2.png

Contents:

Installation

There are two ways to install Spyke Viewer on your system. The preferred way is to install the spykeviewer package in your Python environment. Depending on what already exists on your system, this might require installing Python itself and a few additional packages for scientific data processing, management and visualization.

On the other hand, there are also binary packages available for Windows and OS X. These packages do not have any additional requirements and can be started immediately (from an app on OS X or an executable file on Windows; for Linux, please use the source installation). However, as they are independent of an existing Python installation, you will not be able to use additional packages from your Python environment by default (this can be remedied by using the Startup script). The binary packages are especially useful if you do not normally use Python or just want to try Spyke Viewer quickly. You can switch to the source installation at any time.

Binary

If you want to install the binary version, go to the homepage and select the most recent version for your operating system. The downloaded file contains an installer (for OS X) or executable (Spyke Viewer\spykeviewer.exe for Windows). Note that some features of Spyke Viewer are not available in the binary version: if you want an IPython console or advanced plugin editing features such as autocompletion, you need the source version. The rest of this page deals with the source installation; if you are using the binary version, you can go to Usage to learn how to use Spyke Viewer.

Source

If you use the NeuroDebian repositories and a recent version of Debian (>= Wheezy or Sid) or Ubuntu (>= 12.04), you can install the source version of Spyke Viewer using your package manager:

$ sudo apt-get install spykeviewer

After you install the spykeviewer package, you can start Spyke Viewer from your menu (it should appear in the “Science” or “Education” category) or using:

$ spykeviewer

The next sections describe how to install Spyke Viewer if you do not have access to the NeuroDebian repositories (e.g. on Windows or OS X), want to install using the Python packaging system or use the most recent development version from GitHub.

Dependencies

First you need Python 2.7. In addition, the following packages and their respective dependencies need to be installed:

Please see the respective websites for instructions on how to install them if they are not present on your computer. On a recent version of Debian/Ubuntu (e.g. Ubuntu 12.04 or newer), you can install all dependencies that are not automatically installed by pip or easy_install with:

$ sudo apt-get install python-guiqwt python-tables python-matplotlib

On Windows, you can use Python(x,y) if you do not yet have a Python distribution installed. It includes the same dependencies.

spykeviewer

Once the requirements are fulfilled, you need to install the spykeviewer package. The easiest way to get it is from the Python Package Index. If you use Linux, you might not have access rights to your Python package installation directory, depending on your configuration. In this case, you will have to execute all shell commands in this section with administrator privileges, e.g. by using sudo. If you have pip installed:

$ pip install spykeviewer

Alternatively, if you have setuptools:

$ easy_install spykeviewer

Alternatively, you can get the latest version directly from GitHub at https://github.com/rproepp/spykeviewer.

The master branch always contains the current stable version. If you want the latest development version, use the develop branch (selected by default). You can download the repository from the GitHub page or clone it using git and then install from the resulting folder:

$ python setup.py install

Once the package is installed, you can start Spyke Viewer using:

$ spykeviewer

Note

You can also start the program without installing it: Simply execute the script bin/spykeviewer in your Spyke Viewer folder using Python.

On Windows, you might have to start spykeviewer.exe in the Scripts folder of your Python directory (e.g. C:\Python27\Scripts) because most Python versions do not add this folder to the PATH environment variable.

Usage

This section gives a tutorial of the main functionality of Spyke Viewer. To follow the examples, you need to download and unpack the sample data file. It contains simulated data for two tetrodes over four trials. For each tetrode, there are five simulated neurons with corresponding spike trains and prototypical template waveforms.

When you start Spyke Viewer for the first time, you will see the following layout:

_images/initial-layout.png

All elements of the user interface are contained in docks and can be rearranged to suit your needs. Their layout is saved when you close Spyke Viewer. The “View” menu shows all available docks and panels; you can also hide and show them from this menu.

Loading Data

The first thing you will want to do when using Spyke Viewer is to load your data. The Files dock contains a view of all the files on your system. You can use it to select one or more files, then click on the “Load” button below to load the selected files into Spyke Viewer. Single files can also be loaded with a double click (this does not work for directories, which will just be expanded; to load a directory, you need to use the “Load” button). Alternatively, you can use the “Load Data...” option in the “File” menu to open a dialog that allows you to select files to load. Now find and select the file “sample.h5” that you just unpacked (an HDF5 file) and load it.

The data file input/output is based on Neo and supports formats that have a Neo IO class. For each selected file, the file type and corresponding IO class are selected automatically from the file extension. If you want to specify which IO class to use, you can do so in the “Format” list in the Files dock. When you select a format with read or write parameters, you can click “Configure selected IO” to change the parameters. The IO and parameters you choose in the Files dock are also used when loading files using the “File” menu. If you want to use a file format that is not supported by Neo, you can write a plugin: IO plugins.

Spyke Viewer and Neo include some features for handling very large data sources that do not fit into main memory or take a very long time to load. If you want to learn about these features, go to Lazy Features.

Selections

Now that a file has been loaded, some entries have appeared in the Navigation dock. To understand how to navigate data with Spyke Viewer, you need to know the Neo object model. The following picture is a complete representation:

_images/neo.png

The rectangular objects are containers, rounded corners indicate a data object. The arrows represent a “contains zero or more” relationship. Note that all data objects belong to a segment and some also belong to other objects. For example, a SpikeTrain is referenced by both Segment and Unit. A unit often represents a single neuron (it is named unit because putative neurons from spike sorting are called units), but it could also represent the results of a spike detection algorithm and therefore include multiple neurons. Each SpikeTrain is specific to one Segment and one Unit, and each Segment or Unit could contain many SpikeTrains. For more detailed information on the Neo object model, see the Neo documentation.

In Spyke Viewer, you use the Navigation dock to select container objects. There is a list for each type of container where you can select an arbitrary set of entries. You can select multiple entries by clicking and dragging or by using the control key when clicking. Each list will only show those objects of the respective type that are contained in selected objects further up in the hierarchy. For example, try selecting a different recording channel group and observe how the channels and units lists change. To help you navigate, all objects in the Navigation dock are automatically assigned a unique identifier which includes the identifiers of containing objects. The identifiers are shown in parentheses after the object’s name (if an object has no name, only the identifier is shown). Blocks use capital letters; recording channel groups use small letters; recording channels, units and segments use numbers. For example, a unit might have the identifier “A-b-2”: this denotes unit number 2 of recording channel group “b” of block “A”. The identifiers are recreated whenever the structure of the loaded data changes: they are just a visual aid to help with navigation and ensure that unnamed objects have a reasonable label.
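The identifier scheme can be sketched in a few lines of Python (make_identifier is a hypothetical helper for illustration only, not part of Spyke Viewer):

```python
# Illustrative sketch of the identifier scheme described above.
# make_identifier is a hypothetical helper, not part of Spyke Viewer.
import string

def make_identifier(block_index, group_index=None, unit_index=None):
    """Compose an identifier such as 'A-b-2' from hierarchy indices."""
    parts = [string.ascii_uppercase[block_index]]          # blocks: capital letters
    if group_index is not None:
        parts.append(string.ascii_lowercase[group_index])  # channel groups: small letters
    if unit_index is not None:
        parts.append(str(unit_index))                      # units, segments, channels: numbers
    return '-'.join(parts)

print(make_identifier(0, 1, 2))  # 'A-b-2': unit 2 of group "b" of block "A"
```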

Each list in the Navigation dock has a context menu accessed by right-clicking or control-clicking on OS X. You can use it to remove the selected objects (they will only be removed from Spyke Viewer, not from the loaded files) or open an annotation editor for the current object. The annotation editor can also be opened by double-clicking a list entry.

The set of selected objects from all container types is called a selection. The selected items you see in the Navigation dock are called the current selection. Selections determine which data will be analyzed by plugins (see Plugins) and can be accessed by the internal console (see Using the Console). You can save a selection using the “Selections” menu: Click on the menu and then on “New”. An additional entry in the “Selections” menu called “Selection 1” will appear. Each selection entry has a submenu where you can load, save, rename or delete the selection. Try selecting something else in the Navigation dock and creating a new selection again. Now try to load your first selection and observe how the Navigation dock changes to reflect what you have loaded. If you use the entry “Save” from a selection, it will be overwritten with the current selection. You can also change the order of the saved selections by dragging the entries in the “Selections” menu:

_images/selections-menu.png

All saved selections together with the current selection are called a selection set. You can save your current selection set as a file (in JSON format, so it can easily be read and edited by humans or other software) using “Save Selection Set...” in the “File” menu. When you load a selection set, your current selection is replaced by the current selection from the file. The other selections in the file are added to your current saved selections. If a selection set includes files that are not currently loaded, they are opened automatically. When you exit Spyke Viewer, your current selection set is saved and will be restored on your next start.

Exporting Data

If you want to export your data, Spyke Viewer offers two entries in the “File” menu: “Save selected data...” exports all data in your current selection. “Save all data...” exports all loaded data. When you click on one of the items, a dialog will open asking you where you want to save the data and in which format. HDF5 and Matlab are available. It is strongly recommended to save your data in HDF5, since the Neo IO for Matlab currently does not support the whole object model: RecordingChannelGroups, RecordingChannels and Units are not saved.

Matlab has an interface for loading HDF5 files as well, so if you want to load your data in Matlab without losing part of the structure, you can use HDF5. On the other hand, if you want to get your data into Matlab quickly or it is structured with segments only, the Matlab export could be the right choice.

Filters

_images/filterdock.png

When dealing with large datasets, it can be inconvenient to create a selection from the full lists of containers. The filter system provides a solution to this problem. By creating filters, you can determine what objects are shown in the Navigation dock. For example, you might want to temporarily exclude RecordingChannelGroups that have no attached units or only display Segments with a certain stimulus. Creating filters requires basic knowledge of Python and the Neo object model.

You can manage your filters with the Filter dock and toolbar (which is positioned on the upper left in the initial layout). When you start Spyke Viewer for the first time, the Filter dock will be empty. You can create a new filter by clicking on “New Filter” in the toolbar (right-clicking the Filter dock also brings up a menu with available actions). You can choose what kind of container objects the filter applies to, the name of the filter and its content: a simple Python function.

There are two kinds of filters: single and combined. Single filters (created when the “Combined” checkbox is unchecked) get a single Neo object and return True if the object should be displayed and False if not. Combined filters get a list of Neo objects and return a list containing only the objects that should be displayed. The order of the returned list is used for subsequent filters and for display, so combined filters can also be used to sort the object lists.

For both kinds of filters, the signature of the function is fixed and shown at the top of the window, so you only have to write the function body. The “True on exception” checkbox determines what happens when the filter function raises an exception: If it is checked, an exception will not cause an element to be filtered out, otherwise it will. The following picture shows how you would create a filter that hides all units that do not have at least two SpikeTrains attached:

_images/newfilter.png

As another example, to reverse the order of Segments, you could create a combined Segment filter with the following line:

return segments[::-1]
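The two filter kinds can be pictured as plain Python functions. The sketch below uses a stand-in FakeUnit class instead of real Neo data; actual single filters receive a Neo object (Neo Units expose their spike trains via a spiketrains attribute), and actual combined filters receive a list of them:

```python
# Sketch of the two filter kinds. FakeUnit is only a stand-in for
# illustration; real filters receive Neo objects.

class FakeUnit(object):
    def __init__(self, name, spiketrains):
        self.name = name
        self.spiketrains = spiketrains  # mirrors Neo's Unit.spiketrains

# Single filter body: gets one object, returns True to keep it.
def unit_filter(unit):
    return len(unit.spiketrains) >= 2

# Combined filter body: gets a list, returns the (possibly reordered)
# list of objects to show.
def segment_filter(segments):
    return segments[::-1]  # reverse display order, as in the example above

units = [FakeUnit('a', ['st1']), FakeUnit('b', ['st1', 'st2'])]
kept = [u.name for u in units if unit_filter(u)]
print(kept)                       # ['b']
print(segment_filter([1, 2, 3]))  # [3, 2, 1]
```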

You can also create filter groups. They can be used to organize your filters, but also have an important second function: You can define groups in which only one filter can be active. If another filter in the group is activated, the previously active filter will be deactivated. You can choose which filters are active in the Filter dock. The Navigation dock will be updated each time the set of active filters changes. You can also drag and drop filters inside the Filter dock. Their order in the Filter dock determines the order in which they are applied. All filters and their activation state are saved when you exit Spyke Viewer.

Using Plugins

Once you have selected data, it is time to analyze it. Spyke Viewer includes a number of plugins that enable you to create various plots from your data. Select the Plugins dock (located next to the Filter dock in the initial layout) to see the list of available plugins. To start a plugin, simply double-click it or select it and then click on “Run Plugin” in the plugin toolbar or menu (there is also a context menu available when you right-click a plugin). You can also start a plugin in a different process (so that you can continue using Spyke Viewer while the plugin is busy) by selecting “Start with Remote Script” in the “Plugins” menu.

For example, if you start the “Signal Plot” plugin, it will create a plot of selected analog signals. Try selecting Segment 3, Tetrode 2 and Channels 3 and 4. When you now start the plugin, you will see the signals of the selected channels in Segment 3. Now select some units and then open the plugin configuration by clicking on “Configure Plugin” on the plugin toolbar or menu. Select “Show Spikes” and set “Display” to “Lines”. When you now start the plugin, you will see the analog signals and the spike times of your selected units. Go to the configuration again, set “Display” to “Waveforms” and check “Use first spike as template”. After another run of the plugin, you will see the template spike waveforms overlaid on the analog signals. The configuration of all plugins is saved when you close Spyke Viewer and will be restored on the next start. To set the configurations of all plugins back to their default values, use “Restore Plugin configurations” from the “Plugins” menu.

To learn more about the included plugins and how to use them, go to Plugins. When you want to create your own plugins, go to Analysis plugins.

Using the Console

With the integrated console, you can use the full power of Python in Spyke Viewer, with access to your selected data. Open the Console dock by clicking on the “View” menu and selecting “Console”. You can explore your workspace using the Variable Explorer dock and view your previous commands with the Command History dock. Some packages like scipy and neo are imported on startup; the startup message in the console shows which ones. The console features autocompletion (press the Tab key to complete with the selected entry) and docstring popups.

The most important objects in the console environment are current and selections. current gives you access to your currently selected data; selections contains all stored selections (which you can manage using the “Selections” menu, see Selections). For example,

>>> current.spike_trains()

gives a list of your currently selected spike trains. Both current and the entries of selections are spykeutils.plugin.data_provider.DataProvider objects, refer to the documentation for details of the methods provided by this class.

As an example, to view the total number of spikes in your selected spike trains for each segment, enter the following lines:

>>> trains = current.spike_trains_by_segment()
>>> for s, st in trains.iteritems():
...     print s.name, '-', sum((len(train) for train in st)), 'spikes'

Note that the variables used in these lines have now appeared in the Variable Explorer dock.

Note

If you have at least IPython 0.12 and the corresponding Qt console installed, Spyke Viewer will include an IPython dock (accessible under the “View” menu). It can be used as an alternative to the integrated console if you prefer IPython. The current and selections objects are defined as in the integrated console, but no imports are predefined. You can enter the “magic command”:

%pylab

to use the PyLab environment (you can safely ignore the warning message about matplotlib backends). Note that the Variable Explorer and Command History docks, as well as exceptions from plugins, are only available on the internal console.

Settings

The Spyke Viewer settings can be accessed by opening the “File” menu and selecting “Settings” (on OS X, open the “Spyke Viewer” menu and select “Preferences”). You can adjust various paths in the settings:

Selection path
The path where your selections are stored when you exit Spyke Viewer. This is also the default directory when using “Save Selection Set...” or “Load Selection Set...” in the “File” menu.
Filter path
The directory where your filter hierarchy and activation states are stored when you exit Spyke Viewer. Your filters are stored as regular Python files with some special annotation comments, so you can edit them in your favourite editor or share them with other users of Spyke Viewer.
Data path
This directory is important when you are using the data storage features of spykeutils.plugin.analysis_plugin.AnalysisPlugin.
Remote script
A script file that is executed when you use the “Start with Remote Script” action for a plugin. The default script simply starts the plugin locally, but you can write a different script for other purposes, e.g. starting it on a server.
Plugin paths

These are the search paths for plugins. They will be recursively searched for Python files containing AnalysisPlugin classes. Subdirectories will be displayed as nodes in the Plugins dock.

In addition, your IO plugins also have to be stored in one of the plugin paths. The search for IO plugins is not recursive, so you have to put them directly into one of the paths in this list.

More configuration options can be set using the API, for example in the Startup script.

Plugins

This section describes the configuration options of the plugins that are included with Spyke Viewer. All included plugins create plots. For information on how to create your own plugins, see Analysis plugins. You can find additional plugins at the Spyke Repository.

Signal Plot

Shows the selected analog signals. A number of options enable you to include additional information in the plot.

_images/plugin-signals.png
Use Subplots
Determines whether multiple subplots are used or all signals are shown in one large plot.
Show subplot names
Only valid when subplots are used. Determines if each subplot has a title with the signal name (if available) or the recording channel name.
Included signals
This option can be used to tune which type of signals are shown: AnalogSignal objects, AnalogSignalArray objects or both. In most cases, a file will only include one of the signal types, so the default option of including both will work well (you probably never need to change it if you do not know the difference between the signal objects).
Show events
When this is checked, events in the selected trial will be shown in the plot.
Show epochs
When this is checked, periods in the selected trial will be shown in the plot.
One plot per segment
When this is not checked, only one plot with signals from the first selected segment is created. Otherwise, one plot for each selected segment is created.
Show spikes

Determines whether spikes are included in the plot. The following options select which data is used and how the spikes are displayed:

Display as
Spikes can be shown as their waveform overlaid on the analog signal or as a vertical line marking their occurrence.
Included data
Determines whether to include spikes from SpikeTrain objects, Spike objects, or both.
Use first spike as template
This option can be used for a special case: All spikes in the SpikeTrain objects have the same waveform (e.g. because they use the same template from spike sorting). If this option is checked, the plugin assumes that each unit has a SpikeTrain and a single Spike. The waveform from the Spike object is used for every spike in the SpikeTrain. The data in the example file is structured in this way.

Spectrogram

Shows spectrograms of the selected analog signals.

_images/plugin-spectrogram.png
Interpolate
Determines whether the displayed spectrogram is interpolated.
Show color bar
If this is checked, a colorbar will be shown with each plot, illustrating the logarithmic power represented by the colors.
FFT samples
The number of signal samples used in each FFT window.
Included signals
This option can be used to tune which type of signals are shown: AnalogSignal objects, AnalogSignalArray objects or both. In most cases, a file will only include one of the signal types, so the default option of including both will work well (you probably never need to change it if you do not know the difference between the signal objects).

Spike Waveform Plot

Shows waveforms of selected spikes.

_images/plugin-waveforms.png
Antialiased lines
Determines if antialiasing (smoothing) is used for the plot. If you want to display thousands of spikes or more, unchecking this option will improve the plotting performance considerably.
Include spikes from

Determines which data sources are used for the displayed spike waveforms.

Spikes
Waveforms from Spike objects can be ignored (Do not include), used as other spike data sources are (Regular) or drawn thicker on top of other spikes (Emphasized). The last option is useful if spike objects contain templates from spike sorting which you want to compare to corresponding spikes from the data.
Spike Trains
Spike waveforms embedded in SpikeTrain objects.
Extracted from signal
Spike waveforms can be automatically extracted from corresponding signals using spike times in SpikeTrain objects. In this case you have to choose the spike length and the alignment offset (the length of the signal to extract before each spike event).
Plot type
Three different plot types can be selected: “One plot per channel” creates a subplot for each channel, “One plot per unit” creates a subplot for each unit and “Single plot” creates one plot containing all channels and units.
Split channels
Multichannel waveforms can be split either horizontally or vertically.
Subplot layout
You can choose one of two ways to arrange the resulting subplots: “Linear” will arrange the plots as one row or one column, depending on the other options. “Square” uses an equal number of rows and columns.
Fade earlier spikes
If this is enabled, earlier selected spikes for each unit are drawn more transparent than later spikes. This can be useful if you want to compare changes in a unit’s waveform over time (i.e. multiple segments).
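The “Extracted from signal” option above can be pictured as simple slicing: for each spike time, a window of the chosen spike length is cut from the signal, starting at the alignment offset before the spike event. A minimal sketch using plain sample indices (real signals are Neo objects with units and sampling rates; extract_waveform is a hypothetical helper):

```python
def extract_waveform(signal, spike_index, length, offset):
    """Cut `length` samples around a spike, starting `offset` samples
    before the spike event. Purely illustrative index arithmetic."""
    start = spike_index - offset
    return signal[start:start + length]

signal = list(range(100))                   # stand-in for an analog signal
print(extract_waveform(signal, 50, 10, 3))  # samples 47..56
```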

Correlogram

Creates auto- and crosscorrelograms for selected spike trains.

_images/plugin-correlogram.png
Bin size (ms)
The bin size used in the calculation of the correlograms.
Cut off (ms)
The maximum time lag for which the correlogram will be calculated and displayed.
Data source

The plugin supports two ways of organizing the data from which the correlograms are created: If “Units” is selected, the spike trains for each currently selected unit are treated as a dataset. For example, if two units are selected, the plugin creates three subplots: one autocorrelogram for each unit and a cross-correlogram between them.

If “Selections” is chosen, spike trains from each saved selection are treated as a dataset. Note that the plot can only be created if all selections contain the same number of spike trains.

Counts per
Determines if the counts are displayed per second or per segment.
Border correction
Determines if an automatic correction for less data at higher time lags is applied.
Include mirrored plots
Determines if all cross-correlograms are included, even if they are just mirrored versions of each other. The autocorrelograms are then displayed as the diagonal of a square plot matrix. Otherwise, mirrored cross-correlograms are omitted.
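The number of subplots follows directly from these rules: autocorrelograms on the diagonal plus one cross-correlogram per pair, or a full square matrix when mirrored plots are included. A small illustrative helper (not part of the plugin):

```python
def correlogram_subplots(n_datasets, include_mirrored=False):
    """Subplot count for the correlogram plugin's layout (sketch).
    Without mirrored plots: diagonal plus upper triangle.
    With mirrored plots: full square matrix."""
    if include_mirrored:
        return n_datasets * n_datasets
    return n_datasets * (n_datasets + 1) // 2

print(correlogram_subplots(2))        # 3: two autocorrelograms + one cross
print(correlogram_subplots(2, True))  # 4: full 2x2 matrix
```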

Interspike Interval Histogram

Creates an interspike interval histogram for one or more units.

_images/plugin-isi.png
Bin size (ms)
The bin size used in the calculation of the histogram.
Cut off (ms)
The maximum interspike interval that is displayed.
Type
Determines the type of histogram. If “Bar” is selected, only the histogram for the first selected unit is displayed. If “Line” is selected, all selected units are included in the plot.
Data source
The plugin supports two ways of organizing the data from which the histograms are created: If “Units” is selected, the spike trains for each currently selected unit are treated as a dataset. If “Selections” is chosen, spike trains from each saved selection are treated as a dataset.
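The underlying computation can be sketched in a few lines: take the differences between successive spike times and count them per bin up to the cut-off (an illustration of the principle only, not the plugin's actual code):

```python
def isi_histogram(spike_times, bin_size, cut_off):
    """Count interspike intervals per bin; illustrative sketch."""
    # Differences between successive spike times
    intervals = [b - a for a, b in zip(spike_times, spike_times[1:])]
    n_bins = int(cut_off / bin_size)
    counts = [0] * n_bins
    for iv in intervals:
        b = int(iv / bin_size)
        if b < n_bins:          # ignore intervals beyond the cut-off
            counts[b] += 1
    return counts

# Intervals are 2.0, 3.0 and 1.0, falling into bins 2, 3 and 1:
print(isi_histogram([0.0, 2.0, 5.0, 6.0], bin_size=1.0, cut_off=4.0))
```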

Peristimulus Time Histogram

Creates a peristimulus time histogram (PSTH) for one or multiple units.

_images/plugin-psth.png
Bin size (ms)
The bin size used in the calculation of the histogram.
Start time (ms)
An offset from the alignment event or start of the spike train. Calculation of the PSTH begins at this offset. Negative values are allowed (this can be useful when using an alignment event).
Stop time
A fixed stop time for calculation of the PSTH. If this is not activated, the smallest stop time of all included spike trains is used. If that smallest stop time is smaller than the value entered here, the smaller value is used instead.
Alignment event
An event (identified by label) on which all spike trains are aligned before the PSTH is calculated. After alignment, the event is at time 0 in the plot. The event has to be present in all selected segments that include spike trains for the PSTH.
Type
Determines the type of histogram. If “Bar” is selected, only the histogram for the first selected unit is displayed. If “Line” is selected, all selected units are included in the plot.
Data source
The plugin supports two ways of organizing the data from which the histograms are created: If “Units” is selected, the spike trains for each currently selected unit are treated as a dataset. If “Selections” is chosen, spike trains from each saved selection are treated as a dataset.
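The core PSTH computation can be sketched as binning spike times between start and stop and averaging the counts over trials (an illustration of the principle, not the plugin's actual implementation):

```python
def psth(trials, bin_size, start, stop):
    """Average spike count per bin across trials; illustrative sketch."""
    n_bins = int((stop - start) / bin_size)
    counts = [0] * n_bins
    for spike_times in trials:          # one list of spike times per trial
        for t in spike_times:
            b = int((t - start) / bin_size)
            if t >= start and b < n_bins:
                counts[b] += 1
    # Average over trials to get counts per bin per trial
    return [c / float(len(trials)) for c in counts]

# Two trials, bins [0,1), [1,2), [2,3):
print(psth([[0.5, 1.5], [0.4, 2.5]], bin_size=1.0, start=0.0, stop=3.0))
```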

Raster Plot

Creates a raster plot from multiple spike trains.

_images/plugin-rasterplot.png
Domain
The raster plot can either be created from multiple units and one segment (“Units”) or one unit over multiple segments (“Segments”).
Show lines
Determines if a small horizontal black line is displayed for each spike train.
Show events
When this is checked, events in the selected trial will be shown in the plot. If the selected domain is “Segments”, events from all selected segments are included.
Show epochs
When this is checked, periods in the selected trial will be shown in the plot. If the selected domain is “Segments”, epochs from all selected segments are included.

Spike Density Estimation

Creates a spike density estimation (SDE) for one or multiple units. Optionally computes the best kernel width for each unit.

_images/plugin-sde.png
Kernel size (ms)
The width of the kernel used for the plot. If kernel width optimization is enabled, this parameter is not used.
Start time (ms)
An offset from the alignment event or start of the spike train. Calculation of the SDE begins at this offset. Negative values are allowed (this can be useful when using an alignment event).
Stop time
A fixed stop time for calculation of the SDE. If this is not activated, the smallest stop time of all included spike trains is used. If that smallest stop time is smaller than the value entered here, the smaller value is used instead.
Alignment event
An event (identified by label) on which all spike trains are aligned before the SDE is calculated. After alignment, the event is at time 0 in the plot. The event has to be present in all selected segments that include spike trains for the SDE.
Data source
The plugin supports two ways of organizing the data from which the density estimations are created: If “Units” is selected, the spike trains for each currently selected unit are treated as a dataset. If “Selections” is chosen, spike trains from each saved selection are treated as a dataset.
Kernel width optimization

When this option is enabled, the best kernel width for each unit is determined using the algorithm from [1].

Minimum kernel size (ms)
The minimum kernel width that the algorithm should try.
Maximum kernel size (ms)
The maximum kernel width that the algorithm should try.
Kernel size steps
The number of steps from minimum to maximum kernel size that the algorithm should try. The steps are spaced equidistantly on a logarithmic scale.
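Such logarithmically equidistant spacing can be sketched as follows (kernel_size_steps is an illustrative helper, not the plugin's actual code):

```python
import math

def kernel_size_steps(min_size, max_size, steps):
    """Kernel widths spaced equidistantly on a log scale (sketch).
    Requires steps >= 2."""
    lo, hi = math.log10(min_size), math.log10(max_size)
    return [10 ** (lo + i * (hi - lo) / (steps - 1)) for i in range(steps)]

print(kernel_size_steps(1.0, 100.0, 3))  # [1.0, 10.0, 100.0]
```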
[1] Shimazaki, H. and Shinomoto, S. (2010). Kernel bandwidth optimization in spike rate estimation. Journal of Computational Neuroscience, 29, 171-182.

Extending Spyke Viewer

There are two ways of extending Spyke Viewer: analysis plugins and IO plugins. Both are created by placing a Python file with an appropriate class into one of the plugin directories defined in the Settings. In addition, Spyke Viewer includes a customizable script that is run each time the program is started; Startup script describes possible applications and how to edit it. This section describes how to create plugins and how to use the startup script. If you create a useful extension, please share it at the Spyke Repository!

Analysis plugins

The easiest way to create a new analysis plugin is directly from the GUI. Alternatively, you can use your favourite Python editor to create and edit plugin files. This section describes the creation of an example plugin.

From console to plugin

In many cases, you will want to turn code that you have written in the console into a plugin for easy usage and sharing. See Using the Console for an introduction to the integrated console. Here, a similar example will be expanded into a plugin. Load the example data file (see Usage), select all segments and units and enter the following code in the console:

>>> trains = current.spike_trains_by_unit()
>>> for u, st in trains.iteritems():
...     print u.name, '-', sum((len(train) for train in st)), 'spikes'

This will print the total number of spikes for each selected unit in all selected trials. Note that these lines have now appeared in the Command History dock. Now select “New plugin” from the “Plugins” menu or toolbar. The Plugin Editor dock will appear with a tab named “New Plugin” containing a code template. The template is already a working plugin, although without any functionality. It contains a class (which subclasses spykeutils.plugin.analysis_plugin.AnalysisPlugin) with two methods: get_name() and start(current, selections). get_name() is very simple - it just returns a string to identify the plugin in the Plugins dock. Replace the string “New Plugin” by a name for your plugin, for example “Spike Counts”.

The start method gets called whenever you start a plugin. The two parameters are the same objects as the identically named objects that can be used in the console (see Using the Console): current gives access to the currently selected data, selections is a list containing the stored selections. Both current and the entries of selections are spykeutils.plugin.data_provider.DataProvider objects; refer to the documentation for details on the methods provided by this class.

Replace the contents of the start method by the code you entered into the console (you can copy and paste the code from the Command History dock). Now click on “Save plugin” in the “Plugins” menu or toolbar. A Save dialog will appear. Select one of the plugin paths (or a subfolder) that you have configured in the Settings and choose a name (e.g. “spikecount.py”). When you save the plugin, it will appear in the Plugins dock. You can now use it just like the included plugins. Try selecting different subsets of segments and units and observe how the output of the plugin (on the console) always reflects the current selection.
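Put together, the finished plugin file looks roughly as follows. To keep the sketch self-contained and runnable, a stand-in base class is used and the loop is written in Python 3 syntax; in a real plugin you would instead import the base class from spykeutils.plugin.analysis_plugin and, on Python 2, use iteritems() and print statements as in the console example above:

```python
# Stand-in so this sketch runs without spykeutils; a real plugin uses:
#   from spykeutils.plugin import analysis_plugin
#   class SpikeCountPlugin(analysis_plugin.AnalysisPlugin): ...
class AnalysisPlugin(object):
    pass

class SpikeCountPlugin(AnalysisPlugin):
    def get_name(self):
        # The name shown in the Plugins dock
        return 'Spike Counts'

    def start(self, current, selections):
        # Print the total spike count per selected unit
        trains = current.spike_trains_by_unit()
        for u, st in trains.items():
            print(u.name, '-', sum(len(train) for train in st), 'spikes')
```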

Plugin configuration

This section shows how to make your plugin configurable and use matplotlib to create a plot. Your newly created plugin currently only prints to the console. In order to create a configuration option, add the following line above the get_name method:

output_type = gui_data.ChoiceItem('Output type', ('Total count', 'Count plot'))

Now, when you select your plugin and click on “Configure plugin”, a window with a configuration option (a choice between “Total count” and “Count plot”) will appear. The gui_data module encapsulates guidata. You can look at its documentation or at the code of existing plugins for more information.

Next, you will modify the start method so it uses the configuration option and creates a plot if it is configured for “Count plot”. Since you will be using matplotlib for the plot, you first have to import it by adding:

import matplotlib.pyplot as plt

at the top of the plugin file. Note that matplotlib is already imported in the console, but you have to explicitly import everything you need in plugins.

Next, replace the code in the start method by:

trains = current.spike_trains_by_unit()
for u, st in trains.iteritems():
    if self.output_type == 0: # Total count
        print u.name, '-', sum((len(train) for train in st)), 'spikes'
    else: # Count plot
        plt.plot([len(train) for train in st])

If you now set the configuration of the plugin to “Count plot”, you will see a plot with the spike count for each unit in all trials.

IO plugins

If you have data in a format that is not supported by Neo, you can still load it with Spyke Viewer by creating an IO plugin. This is identical to writing a regular Neo IO class [1] (see http://neo.readthedocs.org/en/latest/io_developers_guide.html to learn how) and placing the Python file with the class in a plugin directory (the search for IO plugins is not recursive, so you have to place the file directly in one of the directories that you defined in the Settings). The filename has to end with “IO.py” or “io.py” (e.g. “MyFileIO.py”) to signal that it contains an IO plugin. If you create an IO class for a file format that is also used outside of your lab, please consider sharing it with the Neo community.
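The outline of such an IO plugin file (e.g. “MyFileIO.py”) might look like the sketch below. The class attributes follow the Neo IO conventions from the developers' guide linked above; a stand-in base class is used here so the sketch runs without Neo installed (a real plugin subclasses neo.io.BaseIO, and the guide lists the full set of required attributes):

```python
class BaseIO(object):
    # Stand-in for neo.io.BaseIO, used only to make this sketch
    # self-contained
    def __init__(self, filename=None):
        self.filename = filename

class MyFileIO(BaseIO):
    is_readable = True    # this IO can read files...
    is_writable = False   # ...but not write them
    name = 'My file format'
    extensions = ['dat']  # file extensions offered in the open dialog
    mode = 'file'         # this IO operates on single files

    def read_block(self, lazy=False, cascade=True):
        # Parse self.filename and build a neo.core.Block here
        raise NotImplementedError
```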

Startup script

The startup script is run whenever Spyke Viewer is started, after the GUI is set up and before plugins are loaded. To edit the startup script, select the “Edit startup script” item in the “File” menu.

One important use case for this file is manipulating your Python path. For example, you may have a Python file or package that you want to use in your plugins. If it is not on your Python path (for example because it cannot be installed or you are using a binary version of Spyke Viewer, where Python packages installed on the system are not accessible by default), you can modify sys.path to include the path to your files:

import sys
sys.path.insert(0, '/path/to/my/files')

You can also use the startup script to configure anything that is accessible by Python code. In particular, you can use the Spyke Viewer API to access configuration options and the main window itself. For example, if you want the Enter key to always finish a line in the console and only use the Tab key for autocompletion:

spyke.config['codecomplete_console_enter'] = False

To change the font size of the Python console (effective for new input) and title of the window:

import spykeviewer.api as spyke  # This line is included in the default startup script
f = spyke.window.console.font()
f.setPointSize(18) # Gigantic!
spyke.window.console.set_pythonshell_font(f)
spyke.window.setWindowTitle('Big is beautiful')

As a final example, you can customize the colors that are used in spykeutils plots (for colored items like spikes in a rasterplot):

# Let's make everything pink!
from spykeutils.plot import helper
helper.set_color_scheme(['#F52887', '#C12267'])

Footnotes

[1] There is one small difference between regular Neo IO classes and IO plugins: in plugins, you cannot use relative imports. For example, instead of:

from .tools import create_many_to_one_relationship

as in the Neo example IO, you would write:

from neo.io.tools import create_many_to_one_relationship

API

The Spyke Viewer API. It includes the global application configuration, objects to access the main window and application, and convenience functions.

spykeviewer.api.config

Global configuration options for Spyke Viewer. Individual options can be set by string key, like a dictionary (e.g. spykeviewer.api.config['ask_plugin_path'] = False), or directly as attributes (e.g. spykeviewer.api.config.ask_plugin_path = False). They can be set in the Startup script, from the console or even in plugins. However, some configuration options are only effective when changed from the startup script. The configuration options are:

ask_plugin_path (bool)
Ask about plugin paths if saving a file outside of the plugin paths. Default: True
save_plugin_before_starting (bool)
Automatically save and reload a modified plugin before starting. Default: True
load_selection_on_start (bool)
On startup, load the selection that was automatically saved when Spyke Viewer was last shut down. This parameter is only effective when set in the startup script. Default: True
load_mode (int)

The initially selected loading mode. Possible values:

0
Regular: Load all file contents on initial load.
1
Lazy: Only load file structure. Data objects are loaded automatically when requested and then discarded.
2
Cached lazy: Only load file structure. Data objects are loaded automatically when requested and then kept in the object hierarchy so they only need to be loaded once.

This parameter is only effective when set in the startup script. Default: 0

autoselect_segments (bool)
Select all visible segments by default. Default: False
autoselect_channel_groups (bool)
Select all visible channel groups by default. Default: False
autoselect_channels (bool)
Select all visible channels by default. Default: True
autoselect_units (bool)
Select all visible units by default. Default: False
duplicate_channels (bool)
Treat neo.core.RecordingChannel objects that are referenced in multiple neo.core.RecordingChannelGroup objects as separate objects for each reference. If False, each channel will only be displayed (and returned by spykeutils.plugin.data_provider.DataProvider) once, for the first reference encountered. Default: False
codecomplete_console_enter (bool)
Use Enter key for code completion in console. This parameter is only effective when set in the startup script. Default: True
codecomplete_editor_enter (bool)
Use Enter key for code completion in editor. This parameter is only effective when set in the startup script. Default: True
remote_script_parameters (list)
Additional parameters for remote script. Use this if you have a custom remote script that needs nonstandard parameters. The format is the same as for subprocess.Popen, e.g. ['--param1', 'first value', '-p2', '2']. Default: []
remote_path_transform (function)
When the remote script is used to start plugins on a different computer, the paths of data files can change. This function can be used to change the path of all data files sent to a remote script. For example, if the data files are in the same directory where the plugin is started on the remote computer, you can strip the path and keep just the filename: spykeviewer.api.config.remote_path_transform = lambda x: os.path.split(x)[1] Default: the identity function; paths are not changed.
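The two access styles (dictionary-style and attribute-style) described above can be pictured with a small sketch. This is an illustration of the behaviour, not the actual spykeviewer implementation:

```python
class Config(object):
    # Minimal illustration of an object supporting both dict-style
    # and attribute-style access to its options
    def __init__(self, **defaults):
        self.__dict__.update(defaults)

    def __getitem__(self, key):
        return getattr(self, key)

    def __setitem__(self, key, value):
        setattr(self, key, value)

config = Config(ask_plugin_path=True, load_mode=0)
config['ask_plugin_path'] = False  # dict-style, as in the examples above
config.load_mode = 2               # attribute-style
```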
spykeviewer.api.window

The main window of Spyke Viewer.

spykeviewer.api.app

The PyQt application object.

spykeviewer.api.start_plugin(name, current=None, selections=None)

Start the first plugin with the given name and return the result of its start() method. Raises a SpykeException if not exactly one plugin with this name exists.

Parameters:
  • name (str) – The name of the plugin. Should not include the directory.
  • current – A DataProvider to use as current selection. If None, the regular current selection from the GUI is used.
  • selections (list) – A list of DataProvider objects to use as selections. If None, the regular selections from the GUI are used.
spykeviewer.api.get_plugin(name)

Get plugin with the given name. Raises a SpykeException if multiple plugins with this name exist. Returns None if no such plugin exists.

Parameters:name (str) – The name of the plugin. Should not include the directory.

Lazy Features

Spyke Viewer offers two ways to deal with very large files.

Lazy Loading

With lazy loading, only the structure of a file is loaded when you first open it, while big data chunks (e.g. signals, spike trains) are not. This can result in faster loading times and greatly reduced memory usage, and enables you to use data files that are larger than your main memory. Spyke Viewer will load the required data automatically once it is needed. This means that while initial loading is faster, data access will be slower. You can switch between regular and lazy loading from the “File” menu under “Read Mode”. The read mode affects newly loaded files, and you can have both regularly and lazily loaded files open at the same time. Most Neo IOs do not support this feature (currently, only the IO for HDF5 files does); when using lazy mode with an unsupported IO, the file is loaded as in regular mode.

There are two options for lazy loading in the menu: “Lazy Load” and “Cached Lazy Load”. In “Lazy Load”, data objects are loaded on request and discarded afterwards, so the memory usage stays low. In “Cached Lazy Load”, data objects are inserted into the object hierarchy when they are requested, so they only have to be loaded once, but memory usage will grow when more data objects are used while the file is open.
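The difference between the two modes can be sketched as follows. This is an illustration only, with made-up names; the real mechanism lives in the IO and DataProvider classes:

```python
class LazySegment(object):
    # Illustration of "Lazy Load" (cached=False) vs.
    # "Cached Lazy Load" (cached=True)
    def __init__(self, loader, cached):
        self._loader = loader  # function that reads the data from disk
        self._cached = cached
        self._data = None

    def spike_trains(self):
        if self._cached and self._data is not None:
            return self._data     # cached: loaded at most once
        data = self._loader()     # read from file (again, if not cached)
        if self._cached:
            self._data = data     # keep in the object hierarchy
        return data
```

With cached=False the loader runs on every access and memory usage stays low; with cached=True it runs at most once and the result stays in memory while the file is open.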

Note

If you create your own plugins or use the integrated console with lazy loading, you need to be aware that the data objects are only loaded when accessed through a DataProvider object (explained below). For example, current.spike_trains() would return correctly loaded objects. But current.segments()[0].spiketrains can contain lazy objects. To be safe, always use the DataProvider to access the data objects you are interested in.

Lazy Cascading

Lazy cascading goes one step further than lazy loading: not even the complete structure of a file is loaded initially. When lazy cascading is active, each object is automatically loaded when it is first accessed. For example, if you load a file with multiple Blocks, the Segments of each Block are only loaded when you select the Block in Spyke Viewer and the Segments need to be displayed. Similarly, the spike trains of a segment are only loaded once they are accessed. In contrast to lazy loading, with lazy cascading objects are loaded automatically even if they are not accessed through a DataProvider. Once an object has been accessed using lazy cascading, it stays in memory, making future access faster but potentially filling up main memory. You can use lazy cascading together with lazy loading without caching to mitigate this. You can switch between regular and lazy cascading using “Cascade Mode” in the “File” menu. Like lazy loading, lazy cascading depends on support by the IO class and currently only works with the HDF5 IO. You can implement both in your own IO plugins; the Neo documentation describes what is needed.

Changelog

Also see the spykeutils changelog at https://spykeutils.readthedocs.org/en/latest/changelog.html

Version 0.4.2

  • Data file path transform for starting plugins remotely.
  • Various bugfixes and compatibility with Spyder 2.3.0

Version 0.4.1

  • IPython 1.0 supported
  • IPython now supported as dock instead of external console
  • More explicit error messages on file loading failures.
  • Added “Start plugin remotely” to toolbar.

Version 0.4.0

  • Optional transparent lazy loading and lazy cascading for supported IOs.
  • Splash screen while loading the application.
  • Open files dialog as an alternative to the “Files” dock.
  • Remotely started plugins can have a graphical progress bar.
  • Remotely started plugins now show text output and errors on internal console.
  • Filters are automatically deactivated on loading a selection if they would prevent parts of it from being shown.
  • A modified plugin is automatically saved before it is sent to a remote script.
  • New features for many plugins: correlogram, interspike interval histogram, peri stimulus time histogram, and spike density estimation support selections in addition to units for plot elements. Spike waveform plot can plot single spikes extracted from analog signals using spike trains, optionally together with Spike object waveforms.
  • Python files can be dragged onto the editor to open them.
  • Annotation editor accessible through API.
  • Files can be loaded through API.
  • The Spyke Repository is available and linked in the documentation and the help menu.

Version 0.3.0

  • Added search and replace functionality to plugin editor (access with Ctrl + F and Ctrl + H).
  • Added startup script. Can be modified using File->Edit startup script.
  • Spyke Viewer now has an API for configuration options and access to plugins and the main window. It can be used from the console, plugins or the startup script.
  • Added context menu for navigation. Includes entries for removing objects and an annotation editor.
  • Files selected in the file view are now loaded in addition to already loaded files. To unload all data, use File->Clear Data.
  • Plugin configurations are now restored when saving or refreshing plugins and when restarting the program. All plugin configurations can be reset to their defaults using Plugins->Restore Plugin configurations.
  • A modified plugin is automatically saved before it is run.
  • Plugin folders return to their previous state (expanded or minimized) when restarting the program.
  • Plugin editor tabs can be reordered by dragging.
  • Code completion in console can be selected using Enter (in addition to Tab as before).
  • Plugins can import modules from the same directory, even if it is not explicitly on the Python path.

Version 0.2.1

  • New features for plugin editor: Calltips, autocompletion and “jump to” (definitions in code or errors displayed in integrated console).
  • Experimental support for IPython console (File->New IPython console). Needs IPython >= 0.12
  • New spectrogram plugin
  • Combined filters for filtering (or sorting) a whole list of objects
  • “Save all data...” menu option
  • Plugins are sorted alphabetically
  • New option in plugin menu: Open containing folder
  • “Delete” key deletes filters
  • Renamed start script from “spyke-viewer” to “spykeviewer”

Version 0.2.0

Initial documented public release.

Acknowledgements

Spyke Viewer was created by Robert Pröpper [1], supported by the Research Training Group GRK 1589/1. The inspiration for the GUI came from an earlier program developed by Felix Franke [2]. The simulated data used in the examples was created by Philipp Meier [1].

[1](1, 2) Neural Information Processing Group, TU Berlin
[2]ETH Zurich, D-BSSE, Bio Engineering Laboratory (BEL)
