Add load_analyzer_from_nwb function #4270
Conversation
My experience of trying to load recordings is that the channel locations are often not saved with the nwb recording, but that they are saved somewhere else in the nwb file. @bendichter mentioned in a meeting that they'd thought about this problem, and maybe had a solution?
This will require some key metadata (e.g., an electrodes table with rel_x/rel_y available). If some key metadata is missing, it will throw an error!
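To make the requirement concrete, here is a hedged sketch of how channel locations could be pulled from the electrodes table (e.g. the DataFrame returned by `nwbfile.electrodes.to_dataframe()` in pynwb), preferring `rel_x`/`rel_y` and falling back to absolute `x`/`y`. The helper name is hypothetical, not part of the PR:

```python
import numpy as np
import pandas as pd


def get_channel_locations(electrodes_df):
    """Return an (n_channels, 2) array of locations from an NWB electrodes table,
    preferring the relative (rel_x/rel_y) columns over the absolute (x/y) ones."""
    for x_col, y_col in (("rel_x", "rel_y"), ("x", "y")):
        if x_col in electrodes_df.columns and y_col in electrodes_df.columns:
            locs = electrodes_df[[x_col, y_col]].to_numpy(dtype="float64")
            # skip columns that exist but were never filled in
            if not np.isnan(locs).all():
                return locs
    raise ValueError("No usable location columns (rel_x/rel_y or x/y) in the electrodes table")
```

This keeps the "throw an error" behavior when neither pair of columns is usable, while still accepting files that only store absolute coordinates.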
```python
    return outputs


def load_analyzer_from_nwb(
```
You don't like the name read_nwb_as_analyzer(), to match the kilosort one?
```python
templates_ext = ComputeTemplates(sorting_analyzer=analyzer)
templates_avg_data = np.array([t for t in units["waveform_mean"].values]).astype("float")
total_ms = templates_avg_data.shape[1] / analyzer.sampling_frequency * 1000
template_params = get_default_analyzer_extension_params("templates")
```
I think this is a strange guess. Do we expect nwb to have the same template params as the current spikeinterface version?
Is there a proper way to do it? I think I would go directly to the 1/3 / 2/3 split + warnings mechanism.
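The split suggested here could be sketched as follows: recover the window from the stored template shape rather than the current spikeinterface defaults, placing the peak at a fixed fraction (1/3 before, 2/3 after) and warning that this is a guess. The function name and the `before_fraction` parameter are illustrative, not part of the PR:

```python
import warnings

import numpy as np


def guess_template_window_ms(templates, sampling_frequency, before_fraction=1 / 3):
    """Derive (ms_before, ms_after) from a template array of shape
    (n_units, n_samples, n_channels) instead of trusting current defaults."""
    n_samples = templates.shape[1]
    total_ms = n_samples / sampling_frequency * 1000.0
    ms_before = total_ms * before_fraction
    ms_after = total_ms - ms_before
    warnings.warn(
        "NWB files do not store ms_before/ms_after; assuming the peak sits at "
        f"{before_fraction:.0%} of the {total_ms:.2f} ms window."
    )
    return ms_before, ms_after
```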
```python
tm = pd.DataFrame(index=sorting.unit_ids)
qm = pd.DataFrame(index=sorting.unit_ids)
```
Can we set the correct dtype from the new extension system?
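If the extension system exposed a column-to-dtype mapping (the mapping shown here is hypothetical), the empty metric frames could be built with the declared dtypes up front instead of defaulting to object/float64:

```python
import numpy as np
import pandas as pd


def empty_metrics_frame(unit_ids, column_dtypes):
    """Build an empty metrics DataFrame whose columns already carry the
    dtype declared by the extension (NaN-filled until computed)."""
    data = {
        # nullable dtypes like "Int64" keep NaN representable for integer metrics
        col: pd.Series(np.full(len(unit_ids), np.nan), index=unit_ids, dtype=dtype)
        for col, dtype in column_dtypes.items()
    }
    return pd.DataFrame(data, index=unit_ids)
```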
h-mayorquin left a comment
first read
```python
template_metric_columns = ComputeTemplateMetrics.get_metric_columns()
quality_metric_columns = ComputeQualityMetrics.get_metric_columns()


tm = pd.DataFrame(index=sorting.unit_ids)
```
what is tm?
```python
    return analyzer


def create_dummy_probegroup_from_locations(locations, shape="circle", shape_params={"radius": 1}):
```
we should make this private as we might want to change this.
```python
    return probegroup


def make_df(group):
```
we should make this private as we might want to change this. Plus, this is a super generic name that we don't want to contaminate any namespace with.
```python
    num_channels=len(channel_ids),
    num_samples=num_samples,
    is_filtered=True,
    dtype="float32",
```
why do we need the dtype and why is it fixed?
I think we should make this optional at the Analyzer level (same for is_filtered)
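One way to avoid hardcoding the dtype would be to infer it from the units' stored mean waveforms and only fall back to float32 when nothing is stored. A hedged sketch (the helper name is illustrative):

```python
import numpy as np


def infer_dtype_from_templates(waveform_means, default="float32"):
    """Infer the recording dtype from the stored mean waveforms,
    falling back to a default when none are available."""
    if waveform_means is None or len(waveform_means) == 0:
        return np.dtype(default)
    # all templates should share a dtype, so take it from the first unit
    return np.asarray(waveform_means[0]).dtype
```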
```python
t_start_tmp = 0 if t_start is None else t_start


sorting_tmp = NwbSortingExtractor(
    file_path=file_path,
    electrical_series_path=electrical_series_path,
    unit_table_path=unit_table_path,
    stream_mode=stream_mode,
    stream_cache_path=stream_cache_path,
    cache=cache,
    storage_options=storage_options,
    use_pynwb=use_pynwb,
    t_start=t_start_tmp,
    sampling_frequency=sampling_frequency,
)
```
We could use session_start_time instead.
```python
if electrodes_indices is not None:
    # here we assume all groups are the same for each unit, so we just check one.
    if "group_name" in electrodes_table.columns:
        group_names = np.array([electrodes_table.iloc[int(ei[0])]["group_name"] for ei in electrodes_indices])
        if len(np.unique(group_names)) > 1:
            if group_name is None:
                raise Exception(
                    f"More than one group, use group_name option to select units. Available groups: {np.unique(group_names)}"
                )
            else:
                unit_mask = group_names == group_name
                if verbose:
                    print(f"Selecting {sum(unit_mask)} / {len(units)} units from {group_name}")
                sorting = sorting.select_units(unit_ids=sorting.unit_ids[unit_mask])
                units = units.loc[units.index[unit_mask]]
                electrodes_indices = units["electrodes"]
```
we could use the same trick as the "aggregation_key" when instantiating a sorting analyzer from grouped recordings/sortings
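The suggestion above amounts to partitioning units by their electrode group instead of raising when several groups exist; a minimal sketch of that partitioning step (the helper name is illustrative, and the aggregation-key wiring itself is out of scope here):

```python
import numpy as np


def split_unit_indices_by_group(group_names):
    """Partition unit indices by electrode group, so one sorting/analyzer
    can be built per group instead of raising when several groups exist."""
    group_names = np.asarray(group_names)
    return {g: np.flatnonzero(group_names == g) for g in np.unique(group_names)}
```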
Useful function to instantiate a SortingAnalyzer from an NWB file as good as we can :)

TODO