sorting failed #3411
This seems like a Kilosort error. Are you able to at least try this dataset on a local copy of Kilosort2? That could help us diagnose whether it is a spikeinterface issue. Alternatively, you could try sorting with a non-Matlab sorter (for example tridesclous2, SC2, or MS5) so we could see whether this is an issue with the dataset itself. |
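The cross-check suggested above can be sketched in a few lines. This is only a sketch: the sorter names are the spikeinterface registry names, and `rec` (in the commented usage) stands in for your preprocessed recording.

```python
# pure-Python sorters that avoid the Matlab/CUDA toolchain entirely
candidates = ["tridesclous2", "spykingcircus2", "mountainsort5"]

try:
    import spikeinterface.sorters as ss
    installed = [name for name in candidates if name in ss.installed_sorters()]
except ImportError:
    installed = []  # spikeinterface is not available in this environment

print("installed non-Matlab sorters:", installed)

# hypothetical usage, assuming `rec` is the preprocessed recording:
# for name in installed:
#     sorting = ss.run_sorter(sorter_name=name, recording=rec, verbose=True)
```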
I guess this error has been reported in #2400, but no clear resolution was posted. I hope this issue can be solved! |
I have tried using a local copy but got the same result:

import numpy as np
import matplotlib.pyplot as plt
import spikeinterface.full as si
import spikeinterface.extractors as se
import spikeinterface.preprocessing as spre
import spikeinterface.sorters as ss
import spikeinterface.postprocessing as spost
import spikeinterface.qualitymetrics as sqm
import spikeinterface.comparison as sc
import spikeinterface.exporters as sexp
import spikeinterface.curation as scur
import spikeinterface.widgets as sw
from spikeinterface.sortingcomponents.peak_detection import detect_peaks
from spikeinterface.sortingcomponents.peak_localization import localize_peaks
import sys
sys.path.append('/share/home/zhangdaohan20h/dredge-main/python')
from dredge.dredge_ap import register
from dredge.dredge_lfp import register_online_lfp
# this has some helpers for plotting
import dredge.motion_util as mu
print('#')
cutoff_um = None
lfprec = se.read_spikeglx("/home/zhangdaohan20h/public_data/NPX_examples/Pt01/",
                          load_sync_channel=False, stream_id="imec0.lf")
# convert to floating point
lfprec = si.astype(lfprec, np.float32)
# ensure depth order
lfprec = si.depth_order(lfprec)
# optional: remove channels outside the brain
# you could use similar logic to extract a single column
# or trim both ends of the probe, whatever you like
cutoff_um = 8000
if cutoff_um is not None:
    geom = lfprec.get_channel_locations()
    lfprec = lfprec.remove_channels(lfprec.channel_ids[geom[:, 1] > cutoff_um])
# bandpass filter
# we do an aggressive one since we plan to downsample
lfprec = si.bandpass_filter(
    lfprec,
    freq_min=0.5,
    freq_max=250,
    margin_ms=1000,
    filter_order=3,
    dtype="float32",
    add_reflect_padding=True,
)
# fancy bad channels detection and removal from the International Brain Lab
bad_chans, labels = si.detect_bad_channels(lfprec, psd_hf_threshold=1.4, num_random_chunks=100, seed=0)
print("Found bad channels", bad_chans)
lfprec = lfprec.remove_channels(bad_chans)
# correct for ADC sample shifts
lfprec = si.phase_shift(lfprec)
# common median reference
lfprec = si.common_reference(lfprec)
# downsample to 250Hz
lfprec = si.resample(lfprec, 250, margin_ms=1000)
# spatial filters: second derivative and averaging same-depth channels
lfprec = si.directional_derivative(lfprec, order=2, edge_order=1)
lfprec = si.average_across_direction(lfprec)
t_start = 235
t_end = 295
lfprec = lfprec.frame_slice(
    start_frame=int(t_start * lfprec.sampling_frequency),
    end_frame=int(t_end * lfprec.sampling_frequency),
)
me_rigid, extra_info_rigid = register_online_lfp(lfprec, max_disp_um=1000)
# Load the recording
#from spikeinterface.core import BinaryFolderRecording
#rec = si.read_zarr('recording.zarr')
rec = se.read_spikeglx("/home/zhangdaohan20h/public_data/NPX_examples/Pt01",
                       load_sync_channel=False, stream_id="imec0.ap")
rec = si.astype(rec, np.float32)
rec = si.bandpass_filter(rec)
rec = si.common_reference(rec)
rec = rec.frame_slice(
    start_frame=int(t_start * rec.sampling_frequency),
    end_frame=int(t_end * rec.sampling_frequency),
)
#in dredge:
me_rigid_upsampled = mu.resample_to_new_time_bins(me_rigid, rec.get_times(0))
print(len(me_rigid_upsampled.displacement))
from spikeinterface.sortingcomponents.motion import estimate_motion, interpolate_motion
motion_si = mu.motion_estimate_to_spikeinterface_motion(me_rigid_upsampled)
rec = interpolate_motion(rec, motion_si)
# Save the recording
# rec.save(folder='recording', format='zarr',overwrite=True, engine='joblib', engine_kwargs={"n_jobs": 20},) #binary_folder
sorting_KS3 = ss.run_sorter(sorter_name="kilosort3", recording=rec, docker_image=False,
                            remove_existing_folder=True, delete_output_folder=False,
                            n_jobs=20, verbose=True)

My cmd window output:
|
Did you compile the Kilosort CUDA mex files? |
Does this mean recompiling finished automatically? |
Not sure what you mean… did you follow the installation instructions here? https://github.com/MouseLand/Kilosort/tree/v2.5.2 |
Sorry for the lack of clarity! I mean in my command prompt window, I noticed a message that suggests recompilation may have already taken place. However, I may be misunderstanding the situation. So I rechecked the installation instructions, and I found a specific statement that says, "You must run and complete successfully mexGPUall.m in the CUDA folder." I ran the mexGPUall.m script, but it appeared that nothing significant happened (as shown below). Is this normal? If not, could you please advise me on the next steps I should take? Thanks!
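One quick way to see whether mexGPUall actually produced anything is to look for compiled MEX binaries next to the .cu sources. This is a sketch: the Kilosort path below is a placeholder you would replace with your local checkout.

```python
from pathlib import Path

# placeholder path - point this at the CUDA folder of your Kilosort copy
ks_cuda = Path("/path/to/Kilosort-2.5/CUDA")

# a successful mexGPUall run leaves one compiled binary per .cu file
# (.mexa64 on Linux, .mexw64 on Windows, .mexmaci64 on macOS)
mex_files = sorted(ks_cuda.glob("*.mexa64"))

if mex_files:
    print("compiled:", [p.name for p in mex_files])
else:
    print("no compiled MEX files found - mexGPUall did not complete")
```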
|
You need to first open Matlab:
Then when Matlab opens you have to run the other command:
|
Fine! And here is the error raised by running mexGPUall:

>> mexGPUall
Building with 'nvcc'.
Error using mex
In file included from /share/apps/imaging/MATLAB-2020a/sys/cuda/glnxa64/cuda/include/cuda_runtime.h:83,
from <command-line>:
/share/apps/imaging/MATLAB-2020a/sys/cuda/glnxa64/cuda/include/crt/host_config.h:129:2: error: #error -- unsupported GNU
version! gcc versions later than 8 are not supported!
129 | #error -- unsupported GNU version! gcc versions later than 8 are not supported!
| ^~~~~
Error in mexcuda (line 166)
[varargout{1:nargout}] = mex(mexArguments{:});
Error in mexGPUall (line 7)
mexcuda -largeArrayDims spikedetector3.cu
>>

So I checked my gcc version. Unfortunately, it goes far beyond 8.
What should I do next? Maybe KS3 or KS2.5 is not usable unless I downgrade my gcc version? |
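For reference, the incompatibility is just the major version in the gcc banner: MATLAB R2020a's bundled CUDA toolchain rejects anything newer than gcc 8, which matches the mex error above. A small helper to confirm it (the banner string here is an example, not taken from the reporter's machine):

```python
import re

def gcc_major(banner: str) -> int:
    """Extract the major version from a `gcc --version` banner line."""
    return int(re.search(r"(\d+)\.\d+", banner).group(1))

# on a real machine you would read the banner with:
#   import subprocess
#   banner = subprocess.run(["gcc", "--version"], capture_output=True, text=True).stdout
banner = "gcc (GCC) 11.4.0"  # example string

print("gcc major version:", gcc_major(banner))  # prints 11; anything > 8 trips the mex error
```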
Indeed. Have you tried running it in Docker? |
Oh, I have tried this. But I found something wrong with dev_mode, which stopped me from installing spikeinterface in the container from my own folder. Please see #3325 |
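If dev_mode is the blocker, one workaround to sketch is pinning a published sorter image directly instead of building from a local folder (`docker_image` accepts an image name as a string; the exact tag below is an assumption, so check what is published on Docker Hub):

```python
# kwargs for a containerized run using a published image rather than dev_mode
run_kwargs = dict(
    sorter_name="kilosort2",
    docker_image="spikeinterface/kilosort2-compiled-base:latest",  # assumed tag
    verbose=True,
)

# hypothetical usage, assuming `recording_preprocessed` from the script below:
# sorting = ss.run_sorter(recording=recording_preprocessed, **run_kwargs)

print(run_kwargs["docker_image"])
```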
Traceback (most recent call last):
File "D:\sjwlab\bxy\spikeinterface-main\examples\get_started\quickstart.py", line 64, in
sorting_KS2 = ss.run_sorter(sorter_name="kilosort2", recording=recording_preprocessed, extra_requirements=["numpy==1.26"],
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\sjwlab\bxy\spikeinterface-main\examples\get_started\myenv\Lib\site-packages\spikeinterface\sorters\runsorter.py", line 210, in run_sorter
return run_sorter_container(
^^^^^^^^^^^^^^^^^^^^^
File "D:\sjwlab\bxy\spikeinterface-main\examples\get_started\myenv\Lib\site-packages\spikeinterface\sorters\runsorter.py", line 663, in run_sorter_container
raise SpikeSortingError(f"Spike sorting in {mode} failed with the following error:\n{run_sorter_output}")
spikeinterface.sorters.utils.misc.SpikeSortingError: Spike sorting in docker failed with the following error:
/sjwlab/bxy/spikeinterface-main/examples/get_started/in_container_sorter_script.py:23: DeprecationWarning: output_folder is deprecated and will be removed in version 0.103.0. Please use folder instead
  sorting = run_sorter_local(
RUNNING SHELL SCRIPT: /sjwlab/bxy/spikeinterface-main/examples/get_started/kilosort2_output/sorter_output/run_kilosort2.sh
Warning: X does not support locale C.UTF-8
Time 0s. Determining good channels..
found 840 threshold crossings in 307.29 seconds of data
found 511 bad channels
Time 35s. Computing whitening matrix..
Getting channel whitening matrix...
Channel-whitening matrix computed.
Time 36s. Loading raw data and applying filters...
Time 52s. Finished preprocessing 94 batches.
random seed for clusterSingleBatches: 1
----------------------------------------Error using indexing
Subscript indices must either be real positive integers or logicals.
Error running kilosort2
Traceback (most recent call last):
File "/sjwlab/bxy/spikeinterface-main/examples/get_started/in_container_sorter_script.py", line 23, in
sorting = run_sorter_local(
File "/root/.local/lib/python3.9/site-packages/spikeinterface/sorters/runsorter.py", line 276, in run_sorter_local
SorterClass.run_from_folder(folder, raise_error, verbose)
File "/root/.local/lib/python3.9/site-packages/spikeinterface/sorters/basesorter.py", line 301, in run_from_folder
raise SpikeSortingError(
spikeinterface.sorters.utils.misc.SpikeSortingError: Spike sorting error trace:
Traceback (most recent call last):
File "/root/.local/lib/python3.9/site-packages/spikeinterface/sorters/basesorter.py", line 261, in run_from_folder
SorterClass._run_from_folder(sorter_output_folder, sorter_params, verbose)
File "/root/.local/lib/python3.9/site-packages/spikeinterface/sorters/external/kilosortbase.py", line 217, in _run_from_folder
raise Exception(f"{cls.sorter_name} returned a non-zero exit code")
Exception: kilosort2 returned a non-zero exit code
Spike sorting failed. You can inspect the runtime trace in /sjwlab/bxy/spikeinterface-main/examples/get_started/kilosort2_output/spikeinterface_log.json.
My code is:
global_job_kwargs = dict(n_jobs=1, chunk_duration="1s")
si.set_global_job_kwargs(**global_job_kwargs)
data_location = "D:\\sjwlab\\0914\\"  # backslashes escaped so the trailing one does not swallow the closing quote
data_name = "data.raw.h5"
recording = se.read_maxwell(data_location + data_name)
print("Recordings")
print(recording)
channel_ids = recording.get_channel_ids()
fs = recording.get_sampling_frequency()
num_chan = recording.get_num_channels()
num_seg = recording.get_num_segments()
probe = recording.get_probe()
print(probe)
recording_cmr = recording
recording_f = si.bandpass_filter(recording, freq_min=300, freq_max=6000)
print(recording_f)
recording_cmr = si.common_reference(recording_f, reference='global', operator='median')
print(recording_cmr)
# this computes and saves the recording after applying the preprocessing chain
recording_preprocessed = recording_cmr.save(format='binary')
print(recording_preprocessed)
print('Available sorters', ss.available_sorters())
print('Installed sorters', ss.installed_sorters())
sorting_KS2 = ss.run_sorter(sorter_name="kilosort2", recording=recording_preprocessed,
                            extra_requirements=["numpy==1.26"], docker_image=True, verbose=True)