
Performance benchmark

jamesjun edited this page Dec 15, 2017 · 9 revisions

Test setup

  • Software compared
  • Ground-truth dataset: biophysically detailed simulation
    • Generated by Catalin Mitelut from the Allen Institute for Brain Science
    • 708 neurons in a 200 x 200 x 600 um volume (16 minutes at 20 kS/s)
    • Pyramidal cells (588 cells, 7 different morphologies) and basket cells (120 cells, 6 different morphologies)
    • Quantified accuracy for units with an SNR of 7 or higher (SNR: peak-channel amplitude / SD of Gaussian noise)
    • Pooled four probe layout patterns (two-column and four-column checkerboard patterns); 32 um horizontal spacing, 20 um vertical spacing (center-to-center)
  • System hardware: Dual Xeon 3.0 GHz (8 cores), 128 GB RAM, Titan X GPU (12 GB, Maxwell)
    • Windows 7 64-bit for running JRCLUST and Kilosort
    • Ubuntu 16 for MountainSort and YASS
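The SNR threshold above can be computed directly from the definition given (peak-channel amplitude divided by the SD of the Gaussian noise). A minimal sketch in Python; the function name, array shapes, and the assumption that noise SD is estimated per channel upstream are all illustrative, not from the benchmark code itself:

```python
import numpy as np

def unit_snr(mean_waveforms, noise_sd):
    """SNR of one unit: peak-channel amplitude / noise SD on that channel.

    mean_waveforms: (n_channels, n_samples) average spike waveform of the unit
    noise_sd:       (n_channels,) per-channel noise standard deviation
    """
    peak_amp = np.abs(mean_waveforms).max(axis=1)  # peak amplitude on each channel
    peak_chan = peak_amp.argmax()                  # channel with the largest peak
    return peak_amp[peak_chan] / noise_sd[peak_chan]
```

Units with `unit_snr(...) >= 7` would then be kept for the accuracy comparison.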

Accuracy comparison

In-silico ground truth

  • Biophysically detailed simulation generated at Allen Institute for Brain Science, 708 cells in 200x200x600 um.
  • Spike sorting error is quantified, for each ground-truth unit, as the average of the false positive (FP) and false negative (FN) rates.
  • Panels A,C,D: Mean and SD are shown
  • Panel B: filled circles: static dataset, open circles: drift dataset (composite of four positions shifted by 4 um)
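The per-unit error metric above can be sketched as follows. This assumes spikes have already been matched between the sorted unit and its ground-truth unit (e.g. by a time tolerance); the function name and the exact rate definitions (FP relative to sorted spikes, FN relative to ground-truth spikes) are illustrative assumptions, not taken from the benchmark code:

```python
def unit_error(n_matched, n_sorted, n_gt):
    """Average of FP and FN rates for one ground-truth unit.

    n_matched: sorted spikes matched to a ground-truth spike
    n_sorted:  total spikes assigned to the sorted unit
    n_gt:      total ground-truth spikes for the unit
    """
    fp_rate = (n_sorted - n_matched) / n_sorted  # unmatched sorted spikes
    fn_rate = (n_gt - n_matched) / n_gt          # missed ground-truth spikes
    return 0.5 * (fp_rate + fn_rate)
```

For example, a unit with 100 ground-truth spikes, 100 sorted spikes, and 90 matches has FP = FN = 0.1, giving an error of 0.1.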

Biophysical ground truth

  • Concurrent extracellular (high-density silicon probe) and intracellular (patch pipette) recordings, datasets from Adam Kampff's and Ed Boyden's labs.
  • Panels A,B,C: Mean and SD are shown
  • Panel D: 25th, 50th, 75th percentiles are shown

Speed comparison

Speed is measured for running the entire pipeline (pre-processing + spike detection + clustering + post-merging/splitting).
