Detailed CIAtah processing pipeline¶
The following detailed pipeline assumes you have started a CIAtah object using the below command:
obj = ciatah;
Spatially downsample raw movies or convert to HDF5 with `modelDownsampleRawMovies`¶
Users can spatially downsample raw movies, which is often necessary to reduce noise, save storage space, and improve runtimes of later processing steps. For most data, downsampling 2 or 4 times in each spatial dimension still retains sufficient pixels per cell to facilitate cell extraction.
To run, either select `modelDownsampleRawMovies` in the GUI menu or type the below command after initializing a CIAtah object:
obj.modelDownsampleRawMovies;
This will pop up the following screen. Users can:
- input several folders where ISXD files are located by separating each folder path with a comma (`Folder(s) where raw HDF5s are located`),
- specify a common root folder to save files to (`Folder to save downsampled HDF5s to:`),
- and input a root directory that contains the sub-folders with the raw data (`Decompression source root folder(s)`).
The function will automatically put each output file in its corresponding folder; make sure folder names are unique (good practice for data analysis in any case).
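For intuition about what spatial downsampling does, below is a minimal sketch using MATLAB's `imresize` on a single, randomly generated stand-in frame; this only illustrates the concept and is not necessarily the exact implementation inside `modelDownsampleRawMovies`.
% Illustrative only: 4x spatial downsampling of a single stand-in movie frame.
frame = rand(1000, 1000, 'single');               % placeholder for one raw movie frame
downsampleFactor = 4;                             % downsample 4x in each spatial dimension
frameDs = imresize(frame, 1/downsampleFactor, 'bilinear');
disp(size(frameDs));                              % 250 x 250 pixels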
Converting Inscopix ISXD files to HDF5¶
To convert from the Inscopix ISXD file format (output by nVista v3+ and nVoke) to HDF5, run `modelDownsampleRawMovies` without changing the regular expression, or make sure it looks for `.*.isxd` or similar. Users will need the latest version of the Inscopix Data Processing Software, as these functions take advantage of its API. If CIAtah cannot automatically find the API, it will ask the user to point it to the root location of the Inscopix Data Processing Software (see below).
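As a quick sanity check that the regular expression will pick up your files, you can test it directly in MATLAB; the filename below is hypothetical.
% Hypothetical filename; the pattern matches any file ending in isxd.
fileName = 'recording_20200101_session1.isxd';
isMatch = ~isempty(regexp(fileName, '.*.isxd', 'once'));
disp(isMatch)                                     % 1 (true) if the file would be converted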
Check movie registration before pre-processing with `viewMovieRegistrationTest`¶
Users should spatially filter one-photon or other data with background noise (e.g. neuropil). To get a feel for how different spatial filtering options affect SNR/movie data before running the full processing pipeline, run the `viewMovieRegistrationTest` module. Then select either `divide by lowpass before registering` or `bandpass before registering` and adjust the `filterBeforeRegFreqLow` and `filterBeforeRegFreqHigh` settings, see below.
Within each folder will be a sub-folder called `preprocRunTest`, inside of which is a series of sub-folders called `preprocRun##` that each contain a `settings.mat` file. This file can be loaded into `modelPreprocessMovie` so the same settings that worked during the test can be used during the actual pre-processing run.
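For example, a minimal sketch of loading one of these settings files to inspect it; the folder path and run name below are placeholders following the naming pattern above.
% Load a settings file from a viewMovieRegistrationTest run to inspect it.
settingsPath = fullfile('pathToFolder', 'preprocRunTest', 'preprocRun01', 'settings.mat');
testSettings = load(settingsPath);
disp(fieldnames(testSettings))                    % list the saved settings variables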
- You'll get an output like the below:
- A: The top left is without any filtering while the other three are with different bandpass filtering options.
- B: Cell ΔF/F intensity profile from the raw movie. Obtained by selecting `Analyze->Plot profile` from the Fiji menu after selecting a square segment running through a cell.
- C: Same cell ΔF/F intensity profile from the bottom-left movie (note the y-axis is the same as above). Obtained in the same manner as B.
Preprocessing calcium imaging movies with `modelPreprocessMovie`¶
After users instantiate an object of the `CIAtah` class and enter a folder, they can start preprocessing their calcium imaging data with `modelPreprocessMovie`.
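As with the other modules, this can also be launched from the command line on the currently loaded folders:
obj.modelPreprocessMovie;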
- See below for a series of windows to get started. The options for motion correction, cropping unneeded regions, ΔF/F, and temporal downsampling were selected for use in the study associated with this repository.
- If users have not specified the path to Miji, a window appears asking them to select the path to Miji's `scripts` folder.
- If users are using the test dataset, it is recommended that they do not use temporal downsampling.
- Vertical and horizontal stripes in movies (e.g. CMOS camera artifacts) can be removed via the `stripeRemoval` step. Remember to select the correct `stripOrientationRemove`, `stripSize`, and `stripfreqLowExclude` options in the preprocessing options menu.
Next the user is presented with a series of options for motion correction, image registration, and cropping:
- The options highlighted in green are those that should be considered by users.
- Users can hover their mouse over each option to get tips on what it means.
- In particular, make sure that `inputDatasetName` is correct for HDF5 files and that `fileFilterRegexp` matches the form of the calcium imaging movie files to be analyzed (see the regexp sketch after this list).
- After this, the user is asked to let the algorithm know how many frames of the movie to analyze (defaults to all frames).
- Then the user is asked to select a region to use for motion correction. In general, it is best to select areas with high contrast and static markers such as blood vessels. Stay away from the edge of the movie or areas outside the brain (e.g. the edge of the microendoscope GRIN lens in one-photon miniature microscope movies).
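Below is a hedged sketch of checking that a candidate `fileFilterRegexp` actually matches your movie filenames before starting the run; the pattern shown is hypothetical, so substitute your own.
% Hypothetical check: list files in the current folder matching a candidate fileFilterRegexp.
fileFilterRegexp = 'concat_.*.h5';                % hypothetical pattern; match your movie naming
fileList = dir(pwd);
matchIdx = ~cellfun(@isempty, regexp({fileList.name}, fileFilterRegexp, 'once'));
disp({fileList(matchIdx).name})                   % files a pattern like this would pick up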
Save/load preprocessing settings¶
Users can also enable saving and loading of previously selected pre-processing settings by changing the red option below.
Settings loaded from a previous run (e.g. of `modelPreprocessMovie`) or from a file (e.g. from `viewMovieRegistrationTest` runs) are highlighted in orange. Settings that the user has just changed are still highlighted in green.
The algorithm will then run all the requested preprocessing steps and present the user with the option of viewing a slice of the processed file. Users have now completed pre-processing.
Manual movie cropping with `modelModifyMovies`¶
If users need to eliminate specific regions of their movie before running cell extraction, that option is provided. Users select a region using an ImageJ interface and select `done` when they want to move on to the next movie or start the cropping. Movies have `NaN`s or 0s added in the cropped region rather than changing the dimensions of the movie.
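A minimal sketch of confirming that the cropped region was filled, assuming the processed movie has already been loaded into MATLAB as a [x y frames] matrix (the variable name below is hypothetical):
% Assumes inputMovie is a [x y frames] matrix loaded from the cropped movie.
frame1 = inputMovie(:, :, 1);
nanPixels = sum(isnan(frame1(:)));                % pixels filled with NaN by cropping
zeroPixels = sum(frame1(:) == 0);                 % or filled with zeros, depending on the option chosen
fprintf('Cropped frame 1: %d NaN pixels, %d zero pixels\n', nanPixels, zeroPixels);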
Extracting cells with `modelExtractSignalsFromMovie`¶
Users can run PCA-ICA, EXTRACT, CNMF, CNMF-E, and ROI cell extraction by following the below set of option screens. Details on running the new Schnitzer lab cell-extraction methods (e.g. CELLMax) will be added here after they are released.
We normally estimate the number of PCs and ICs on the high end, manually sort to get an estimate of the number of cells, then run PCA-ICA again with ICs set to 1.5-3x the number of cells and PCs set to 1-1.5x the number of ICs.
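For example, a quick sketch of that heuristic with a hypothetical cell count:
% Hypothetical back-of-the-envelope calculation for PCA-ICA parameters.
nCellsEstimate = 200;                             % estimate from an initial manual sort (hypothetical)
nICs = round(2 * nCellsEstimate);                 % ICs ~1.5-3x the estimated number of cells
nPCs = round(1.25 * nICs);                        % PCs ~1-1.5x the number of ICs
fprintf('Use %d PCs and %d ICs\n', nPCs, nICs);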
To run CNMF or CNMF-E, run the `loadDependencies` module (e.g. `obj.loadDependencies`) after the CIAtah class is loaded. CVX (a CNMF dependency) will also be downloaded and `cvx_setup` run to automatically set it up.
At the end, the resulting output (on Figure 45+) should look something like:
Loading cell-extraction output data for custom scripts¶
Users can load outputs from cell extraction using the below command. This will then allow users to use the images and activity traces for downstream analysis as needed.
[inputImages,inputSignals,infoStruct,algorithmStr,inputSignals2] = ciapkg.io.loadSignalExtraction('pathToFile');
Note, the outputs correspond to the below (see the brief usage sketch after this list):
- `inputImages` - 3D or 4D matrix containing cells and their spatial information, format: [x y nCells].
- `inputSignals` - 2D matrix containing activity traces in [nCells nFrames] format.
- `infoStruct` - contains information about the file, e.g. the 'description' property that can contain information about the algorithm.
- `algorithmStr` - string of the algorithm name.
- `inputSignals2` - same as `inputSignals` but for secondary traces an algorithm outputs.
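As a quick check of these outputs, a minimal sketch assuming the call above succeeded and `inputImages` is in the 3D [x y nCells] format:
% Basic sanity checks on the loaded cell-extraction output.
nCells = size(inputImages, 3);                    % number of cells ([x y nCells] format)
nFrames = size(inputSignals, 2);                  % number of frames ([nCells nFrames] format)
fprintf('%s output: %d cells, %d frames\n', algorithmStr, nCells, nFrames);
figure; plot(inputSignals(1, :)); title('Activity trace, cell 1');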
Loading cell-extraction output data with `modelVarsFromFiles`¶
In general, after running cell-extraction (`modelExtractSignalsFromMovie`) on a dataset, run the `modelVarsFromFiles` module. This allows `CIAtah` to load/pre-load information about that cell-extraction run.
If you had to restart MATLAB or are just loading CIAtah fresh but have previously run cell extraction, run this method before doing anything else with that cell-extraction data.
A menu will pop up like below when `modelVarsFromFiles` is loaded; you can normally just leave the defaults as is.
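From the command line this is simply:
obj.modelVarsFromFiles;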
Validating cell extraction with `viewCellExtractionOnMovie`¶
After users have run cell extraction, they should check that cells are not being missed during the process. Running the method `viewCellExtractionOnMovie` will create a movie with outlines of cell-extraction outputs overlaid on the movie.
Below is an example, with black outlines indicating the locations of cell-extraction outputs. If users see active cells (red flashes) that are not outlined, that indicates that exclusion criteria or other parameters should be altered in the previous `modelExtractSignalsFromMovie` cell-extraction step.
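As with other modules, it can be run directly from the command line:
obj.viewCellExtractionOnMovie;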
Sorting cell extraction outputs with `computeManualSortSignals`¶
CIAtah cell sorting GUI
Outputs from most common cell-extraction algorithms like PCA-ICA, CNMF, etc. contain signal sources that are not cells and thus must be manually removed from the output. The repository contains a GUI for sorting cells from non-cells. The GUI also contains a shortcut menu that users can access by right-clicking or selecting the top-left menu.
Below, users can see a list of options that are given before running the code; those highlighted in green are the ones users should pay particular attention to.
GUI usage on large imaging datasets¶
- To manually sort on large movies that will not fit into RAM, select the below options (highlighted in green). This will load only chunks of the movie asynchronously into the GUI as you sort cell extraction outputs.
Cell sorting from the command line with `signalSorter`¶
Usage instructions below for `signalSorter`, e.g. if not using the `CIAtah` GUI.
Main inputs:
- `inputImages` - [x y N] matrix where N = number of images, x/y are dimensions.
- `inputSignals` - [N frames] double matrix where N = number of signals (traces).
- `inputMovie` - [x y frames] matrix.
Main outputs:
- `choices` - [N 1] vector of 1 = cell, 0 = not a cell.
- `inputImagesSorted` - [x y N] filtered by `choices`.
- `inputSignalsSorted` - [N frames] filtered by `choices`.
iopts.inputMovie = inputMovie; % movie associated with traces
iopts.valid = 'neutralStart'; % all choices start out gray or neutral to not bias user
iopts.cropSizeLength = 20; % region, in px, around a signal source for transient cut movies (subplot 2)
iopts.cropSize = 20; % see above
iopts.medianFilterTrace = 0; % whether to subtract a rolling median from trace
iopts.subtractMean = 0; % whether to subtract the trace mean
iopts.movieMin = -0.01; % helps set contrast for subplot 2, preset movie min here or it is calculated
iopts.movieMax = 0.05; % helps set contrast for subplot 2, preset movie max here or it is calculated
iopts.backgroundGood = [208,229,180]/255;
iopts.backgroundBad = [244,166,166]/255;
iopts.backgroundNeutral = repmat(230,[1 3])/255;
[inputImagesSorted, inputSignalsSorted, choices] = signalSorter(inputImages, inputSignals, 'options',iopts);
Examples of the interface on two different datasets:
BLA one-photon imaging data signal sorting GUI¶
mPFC one-photon imaging data signal sorting GUI (from `example_downloadTestData.m`)¶
Context menu¶
Removing cells not within brain region with `modelModifyRegionAnalysis`¶
If the imaging field-of-view includes cells from other brain regions, they can be removed using `modelModifyRegionAnalysis`.
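As with other modules, it can be launched from the command line:
obj.modelModifyRegionAnalysis;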
Cross-session cell alignment with `computeMatchObjBtwnTrials`¶
This step allows users to align cells across imaging sessions (e.g. those taken on different days). See the `Cross session cell alignment` wiki page for more details and notes on cross-session alignment.
- Users run `computeMatchObjBtwnTrials` to do cross-day alignment (first row in pictures below).
- Users then run `viewMatchObjBtwnSessions` to get a sense for how well the alignment ran.
- `computeCellDistances` and `computeCrossDayDistancesAlignment` allow users to compute the within-session pairwise Euclidean centroid distance for all cells and the cross-session pairwise distance for all globally matched cells, respectively (see the command sketch after this list).
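Following the same obj.methodName pattern used elsewhere in this pipeline, a typical command-line sequence for these steps looks like:
obj.computeMatchObjBtwnTrials;            % align cells across sessions
obj.viewMatchObjBtwnSessions;             % visually check alignment quality
obj.computeCellDistances;                 % within-session pairwise centroid distances
obj.computeCrossDayDistancesAlignment;    % cross-session distances for globally matched cells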
Users can then get the matrix that maps each global cell ID to its within-session IDs:
% Global IDs is a matrix of [globalID sessionID]
% Each (globalID, sessionID) pair gives the within session ID for that particular global ID
globalIDs = alignmentStruct.globalIDs;
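For example, a hedged sketch of using this matrix to count cells matched in every session, assuming unmatched (globalID, session) entries are filled with 0 (check this against your own alignment output):
% Assumes unmatched entries in globalIDs are 0.
nSessions = size(globalIDs, 2);
matchedAllSessions = sum(globalIDs > 0, 2) == nSessions;   % global IDs found in every session
fprintf('%d of %d global cells matched across all %d sessions\n', ...
    sum(matchedAllSessions), size(globalIDs, 1), nSessions);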
View cross-session cell alignment with `viewMatchObjBtwnSessions`¶
To evaluate how well cross-session alignment worked, `computeMatchObjBtwnTrials` will automatically run `viewMatchObjBtwnSessions` at the end, but users can also run it separately after alignment. The left panel shows raw dorsal striatum cell maps from a single animal. The right panel shows the same maps after cross-session alignment; color indicates a global ID cell (e.g. the same cell matched across multiple days). Thus, same color cell = same cell across sessions.
Save cross-session cell alignment with `modelSaveMatchObjBtwnTrials`¶
Users can save out the alignment structure by running `modelSaveMatchObjBtwnTrials`. This will allow users to select a folder where `CIAtah` will save a MAT-file with the alignment structure information for each animal.
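From the command line:
obj.modelSaveMatchObjBtwnTrials;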