Merged

60 commits
b487c5a
Update `OpenEphysHTTPServer` methods to get/set processor-scoped para…
anjaldoshi Oct 6, 2023
357e873
Fix processor/stream scoped parameter set calls
medengineer Feb 2, 2024
2437496
Update gitignore
medengineer Feb 2, 2024
9f9b684
Add undo/redo commands
medengineer Jun 20, 2023
d7096a2
Add audio device HTTP control
medengineer May 9, 2024
f76723c
Update gitignore
medengineer May 9, 2024
99893c8
Add more HTTPServer commands
medengineer Jul 10, 2024
7231f2a
Get CPU usage
medengineer Jul 11, 2024
4661b80
Get latencies
medengineer Jul 29, 2024
1e2ae8b
Drop support for Python 3.7 and 3.8
joschaschmiedt Apr 1, 2025
10bd827
Add type hints and Metadata dataclasses
joschaschmiedt Apr 1, 2025
a7ac4b8
Fix wrong source_processor_name dict key
joschaschmiedt Apr 1, 2025
2c357bb
Add type hints to BinaryRecording classes
joschaschmiedt Apr 3, 2025
ecd427b
Fix erroneous float63 dypte
joschaschmiedt Apr 3, 2025
1c915b2
Add test data sets for Binary and OpenEphys formats
joschaschmiedt Apr 3, 2025
d8d6f35
Add abstract base classes AbstractSpikes and AbstractContinuous
joschaschmiedt Apr 3, 2025
30a89c7
Update CHANGELOG.md
joschaschmiedt Apr 3, 2025
7477aba
Add test for reading binary format
joschaschmiedt Apr 3, 2025
19e9a52
Add uv lock file for reproducible dev environment
joschaschmiedt Apr 3, 2025
e8ab842
Add test for reading OpenEphys format
joschaschmiedt Apr 3, 2025
2793e30
Add GitHub action for running pytest
joschaschmiedt Apr 3, 2025
a85aebc
Change version to 0.2.0-dev
joschaschmiedt Apr 3, 2025
7d95e3d
Add json schema for oebin file
joschaschmiedt Apr 4, 2025
adca358
Add __str__ for binary format Continuous and Spike
joschaschmiedt Apr 4, 2025
7e8a596
Fix bug in binary Spike __str__
joschaschmiedt Apr 4, 2025
a13e07d
Rename AbstractContinuous to Continuous
joschaschmiedt Apr 7, 2025
c5f163d
Add NWB test dataset
joschaschmiedt Apr 7, 2025
26c0897
Add tests for NWB2 format
joschaschmiedt Apr 7, 2025
540e0e0
Add RecordingFormat enum
joschaschmiedt Apr 7, 2025
d16118a
Update LICENSE
joschaschmiedt Apr 7, 2025
743b970
Update CHANGELOG.md
joschaschmiedt Apr 7, 2025
3f00120
Add docstring to Continuous.get_samples
joschaschmiedt Apr 8, 2025
b992677
Add type hints to RecordNode
joschaschmiedt Apr 8, 2025
7ee5f4a
Minor cleanup and LICENSE update
joschaschmiedt Apr 9, 2025
3942140
Add type hints to Session
joschaschmiedt Apr 9, 2025
0ea4856
Update CHANGELOG.md
joschaschmiedt Apr 9, 2025
047cc53
Add vscode to gitignore
joschaschmiedt Apr 9, 2025
4b38120
Update CHANGELOG.md
joschaschmiedt Apr 9, 2025
299c5c9
Update CHANGELOG.md
joschaschmiedt Apr 9, 2025
c2e267b
Fix uv version in GH action
joschaschmiedt Apr 9, 2025
5dded08
Merge branch 'main' into add-tests
joschaschmiedt May 6, 2025
9818c5f
Fix bug in printing of record node
joschaschmiedt Jun 16, 2025
37e2e5b
Change RecordingFormat to StrEnum
joschaschmiedt Jun 16, 2025
9b47990
Update CHANGELOG
jsiegle Aug 25, 2025
9822965
Fix numpy deprecation warning
jsiegle Aug 25, 2025
fde1b48
Allow continuous objects to be accessed by index or stream name
jsiegle Aug 25, 2025
3f97390
Add dict-like loading methods for NwbRecording
jsiegle Aug 26, 2025
3895639
Update analysis README
jsiegle Aug 26, 2025
b185cb7
Add build/ directory to .gitignore
jsiegle Aug 26, 2025
d76437b
Fix failing spike data loading due to datatype conversion issue
jsiegle Aug 27, 2025
10d925f
Merge branch 'juce8' into development
medengineer Sep 6, 2025
1ff406e
Fix merge conflicts
medengineer Sep 7, 2025
a1cb07e
Remove merge conflict marker
anjaldoshi Oct 3, 2025
06a624c
Refactor continuous data handling to use a dictionary structure for i…
anjaldoshi Oct 4, 2025
b0f37fe
Update version to 1.0.0
jsiegle Oct 17, 2025
8df3837
Update python-tests.yml
jsiegle Oct 17, 2025
031e0b3
Update python-tests.yml
jsiegle Oct 17, 2025
1009423
Update python-tests.yml
jsiegle Oct 17, 2025
f1d65e0
Update python-tests.yml
jsiegle Oct 17, 2025
ce89d43
Update python-tests.yml
jsiegle Oct 17, 2025
43 changes: 43 additions & 0 deletions .github/workflows/python-tests.yml
@@ -0,0 +1,43 @@
# This workflow will install Python dependencies, run tests and lint with a single version of Python
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-python

name: Python Tests

on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]

permissions:
  contents: read

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v4
    - name: Install uv
      uses: astral-sh/setup-uv@v5
      with:
        enable-cache: true
        cache-dependency-glob: "uv.lock"
    - name: Install Python (pin to a wheel-friendly version)
      run: |
        uv python install 3.12
        uv python pin 3.12

    - name: Create virtualenv
      run: uv venv

    - name: Preinstall h5py as wheel only
      run: uv pip install "--only-binary=:all:" "h5py==3.13.0"

    - name: Install the project
      run: uv sync --extra dev

    - name: Run tests
      run: uv run pytest tests

5 changes: 4 additions & 1 deletion .gitignore
@@ -14,4 +14,7 @@ dist
Notebooks
notebooks

.spyproject
.vscode
.spyproject

build
46 changes: 46 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,51 @@
# `open-ephys-python-tools` Changelog

## 1.0.0

- Dropped support for Python < 3.9
- Refactoring without new functionality or API changes
- The `Continuous` and `Spike` classes of the three formats now have an explicit interface
(i.e. abstract parent class) and have been renamed to `BinaryContinuous`, `BinarySpike` etc.
- The metadata of `Continuous` and `Spike` in the analysis package are now typed dataclasses
instead of `dict` objects. This makes accessing metadata more reliable.
- Type hints have been added to the `analysis` package.
- Automated tests for reading Binary, NWB and OpenEphys data formats have been added.
- Added a `RecordingFormat` enum for the three formats
- Added a JSON schema for validating oebin files
- Added a `uv.lock` file for reproducible development environments.
- `BinaryContinuous` and `BinarySpike` now have `__str__` methods to give an overview of
their contents.
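
As an illustration of the dataclass change, metadata fields are now regular attributes rather than dictionary keys. The sketch below is illustrative only; the field name `sample_rate` and the example path are assumptions, so check the dataclass definitions for the actual attributes.

```python
from open_ephys.analysis import Session

# Placeholder path; point this at a real Open Ephys recording directory.
recording = Session("/path/to/data").recordnodes[0].recordings[0]

stream = recording.continuous[0]
rate = stream.metadata.sample_rate    # 1.0.0: attribute access on a typed dataclass (assumed field name)
# 0.1.x equivalent, where metadata was a plain dict:
# rate = stream.metadata["sample_rate"]
```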

## 0.1.13
- Improve NWB format loading
- Add method for selecting channels by name

## 0.1.12
- Fix bug in global timestamp computation

## 0.1.11
- Ensure experiment and recording directories are sorted alphanumerically

## 0.1.10
- Add option to load events without sorting by timestamp

## 0.1.9
- Allow continuous timestamps to be loaded without memory mapping (necessary when timestamp file will be overwritten)

## 0.1.8
- Change indexing method for extracting processor ID in NwbRecording

## 0.1.7
- Raise exception if no events exist on a selected line for global timestamp computation
- Add option to ignore a sample interval when computing global timestamps

## 0.1.6
- Add `config` method to `OpenEphysHTTPServer` class

## 0.1.5
- Speed up loading of Open Ephys data format
- Add stream names to NWB and Open Ephys events

## 0.1.4

- Include `source_processor_id` and `source_processor_name` when writing .oebin file
18 changes: 14 additions & 4 deletions LICENSE
@@ -1,7 +1,17 @@
Copyright 2020 Open Ephys
Copyright 2020-2025 Open Ephys

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
Permission is hereby granted, free of charge, to any person obtaining a copy of this
software and associated documentation files (the "Software"), to deal in the Software
without restriction, including without limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons
to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
The above copyright notice and this permission notice shall be included in all copies or
substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE
FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
24 changes: 12 additions & 12 deletions pyproject.toml
@@ -5,29 +5,29 @@ build-backend = "setuptools.build_meta"
[project]
name = "open-ephys-python-tools"
description = "Software tools for interfacing with the Open Ephys GUI"
license = {text = "MIT"}
requires-python = ">=3.7"
license = { text = "MIT" }
requires-python = ">=3.9"
classifiers = [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
"Operating System :: OS Independent",
]
readme = "README.md"

dynamic = ["version"]

dependencies = [
'numpy',
'pandas',
'h5py',
'zmq',
'requests'
]
dependencies = ['numpy', 'pandas', 'h5py', 'zmq', 'requests']

[tool.setuptools.packages.find]
where = ["src"]

[tool.setuptools.dynamic]
version = {attr = "open_ephys.__version__"}

version = { attr = "open_ephys.__version__" }

[dependency-groups]
dev = [
"black>=25.1.0",
"jsonschema>=4.23.0",
"mypy>=1.15.0",
"pytest>=8.3.5",
]
2 changes: 1 addition & 1 deletion src/open_ephys/__init__.py
@@ -1 +1 @@
__version__ = "0.1.13"
__version__ = "1.0.0"
12 changes: 7 additions & 5 deletions src/open_ephys/analysis/README.md
@@ -65,14 +65,16 @@ Recording Index: 0

## Loading continuous data

Continuous data for each recording is accessed via the `.continuous` property of each `Recording` object. This returns a list of continuous data, grouped by processor/sub-processor. For example, if you have two data streams merged into a single Record Node, each data stream will be associated with a different processor ID. If you're recording Neuropixels data, each probe's data stream will be stored in a separate sub-processor, which must be loaded individually.
Continuous data for each recording is accessed via the `.continuous` property of each `Recording` object. This now returns a dictionary of continuous data grouped by processor/sub-processor. Each stream is stored twice in the dictionary: once under its zero-based index and once under its stream name. For example, if you have two data streams merged into a single Record Node, each data stream will be associated with a different processor ID. If you're recording Neuropixels data, each probe's data stream will be stored in a separate sub-processor, which must be loaded individually.

Continuous data for individual data streams can be accessed by index (e.g., `continuous[0]`), or by stream name (e.g., `continuous["example_data"]`). If there are multiple streams with the same name, the source processor ID will be appended to the stream name so they can be distinguished (e.g., `continuous["example_data_100"]`). Iterating over the dictionary yields the continuous objects in index order, and `continuous.keys()` lists both the integer indices and stream names that can be used for lookup.
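
The access patterns above can be combined as in the following sketch (the directory path and stream name are placeholders):

```python
from open_ephys.analysis import Session

session = Session("/path/to/recording/directory")    # placeholder path
recording = session.recordnodes[0].recordings[0]

stream = recording.continuous[0]                     # access by zero-based index
same_stream = recording.continuous["example_data"]   # or by stream name

print(list(recording.continuous.keys()))             # integer indices and stream names
for continuous in recording.continuous:              # yields objects in index order
    print(continuous.metadata)
```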

Each `continuous` object has four properties:

- `samples` - a `numpy.ndarray` that holds the actual continuous data with dimensions of samples x channels. For Binary, NWB, and Kwik format, this will be a memory-mapped array (i.e., the data will only be loaded into memory when specific samples are accessed).
- `samples` - a `numpy.ndarray` that holds the actual continuous data with dimensions of samples x channels. For Binary and NWB formats, this will be a memory-mapped array (i.e., the data will only be loaded into memory when specific samples are accessed).
- `sample_numbers` - a `numpy.ndarray` that holds the sample numbers since the start of acquisition. This will have the same size as the first dimension of the `samples` array
- `timestamps` - a `numpy.ndarray` that holds global timestamps (in seconds) for each sample, assuming all data streams were synchronized in this recording. This will have the same size as the first dimension of the `samples` array
- `metadata` - a `dict` containing information about this data, such as the ID of the processor it originated from.
- `metadata` - a `ContinousMetadata` dataclass containing information about this data, such as the ID of the processor it originated from.
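
For orientation, the sketch below inspects each of these properties on one stream (it assumes `recording` was loaded as in the previous example):

```python
stream = recording.continuous[0]

print(stream.samples.shape)        # (samples, channels); memory-mapped int16 for Binary/NWB
print(stream.sample_numbers[:5])   # sample numbers since the start of acquisition
print(stream.timestamps[:5])       # global timestamps in seconds (if streams were synchronized)
print(stream.metadata)             # metadata dataclass, e.g. the source processor ID
```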

Because the memory-mapped samples are stored as 16-bit integers in arbitrary units, all analysis should be done on a scaled version of these samples. To load the samples scaled to microvolts, use the `get_samples()` method:

@@ -81,7 +83,7 @@ Because the memory-mapped samples are stored as 16-bit integers in arbitrary uni
```
>> data = recording.continuous[0].get_samples(start_sample_index=0, end_sample_index=10000)
```

This will return the first 10,000 continuous samples for all channels in units of microvolts. Note that your computer may run out of memory when requesting a large number of samples for many channels at once. It's also important to note that `start_sample_index` and `end_sample_index` represent relative indices in the `samples` array, rather than absolute sample numbers. The default behavior is to return all channels in the order in which they are stored, typically in increasing numerical order. However, if the `channel map` plugin is placed in the signal chain before a `record node`, the order of channels will follow the order of the specified channel mapping.
This will return the first 10,000 continuous samples for all channels in units of microvolts. Note that your computer may run out of memory when requesting a large number of samples for many channels at once. It's also important to note that `start_sample_index` and `end_sample_index` represent relative indices in the `samples` array, rather than absolute sample numbers. The default behavior is to return all channels in the order in which they are stored, typically in increasing numerical order. However, if the Channel Map plugin is placed in the signal chain before a Record Node, the order of channels will follow the order of the specified channel mapping.
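
If you have an absolute sample number (e.g. from an event), one way to convert it into a relative index is to search the `sample_numbers` array, as in this sketch (the sample number and window length are placeholders):

```python
import numpy as np

stream = recording.continuous[0]
absolute_start = 1_000_000                       # placeholder absolute sample number
start_index = int(np.searchsorted(stream.sample_numbers, absolute_start))
data = stream.get_samples(start_sample_index=start_index,
                          end_sample_index=start_index + 30000)
```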

The `get_samples` method includes the arguments:

@@ -124,7 +126,7 @@ If spike data has been saved by your Record Node (i.e., there is a Spike Detecto
- `sample_numbers` - `numpy.ndarray` of sample indices (one per spike)
- `timestamps` - `numpy.ndarray` of global timestamps (in seconds)
- `clusters` - `numpy.ndarray` of cluster IDs for each spike (default cluster = 0)
- `metadata` - `dict` with metadata about each electrode
- `metadata` - `SpikeMetadata` dataclass with metadata about each electrode
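
A short sketch of reading these fields (it assumes spike data is exposed as `recording.spikes`, one object per electrode, as in previous versions of this package):

```python
for spikes in recording.spikes:
    print(spikes.metadata)              # SpikeMetadata for this electrode
    print(spikes.sample_numbers[:10])   # sample index of each spike
    print(spikes.timestamps[:10])       # global timestamps in seconds
    print(spikes.clusters[:10])         # cluster IDs (default cluster = 0)
```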

## Synchronizing timestamps
