Note

This page is reference documentation: it only describes the class signature, not how to use the class. Please refer to the user guide for the big picture.

nilearn.decomposition.CanICA

class nilearn.decomposition.CanICA(mask=None, n_components=20, smoothing_fwhm=6, do_cca=True, threshold='auto', n_init=10, standardize=True, random_state=0, target_affine=None, target_shape=None, low_pass=None, high_pass=None, t_r=None, memory=Memory(cachedir=None), memory_level=0, n_jobs=1, verbose=0)

Perform Canonical Independent Component Analysis.

Parameters:

mask: Niimg-like object or MultiNiftiMasker instance, optional :

Mask to be used on the data. If a masker instance is passed, its mask will be used. If no mask is given, it will be computed automatically by a MultiNiftiMasker with default parameters.

data: array-like, shape = [[n_samples, n_features], ...] :

Training vector, where n_samples is the number of samples and n_features is the number of features. There is one vector per subject.

n_components: int :

Number of components to extract.

smoothing_fwhm: float, optional :

If smoothing_fwhm is not None, it gives the size in millimeters of the spatial smoothing to apply to the signal.

do_cca: boolean, optional :

Indicates whether a Canonical Correlation Analysis must be run after the PCA.

standardize: boolean, optional :

If standardize is True, the time series are centered and normalized: their variance is set to 1 in the time dimension.

threshold: None, ‘auto’ or float :

If None, no thresholding is applied. If ‘auto’, a threshold is applied that keeps the n_voxels most intense voxels across all the maps, where n_voxels is the number of voxels in a brain volume. A float value gives the ratio of voxels to keep (e.g., 2. means keeping 2 x n_voxels voxels).

n_init: int, optional :

The number of times the FastICA algorithm is restarted.

random_state: int or RandomState :

Pseudo-random number generator state used for random sampling.

target_affine: 3x3 or 4x4 matrix, optional :

This parameter is passed to image.resample_img. Please see the related documentation for details.

target_shape: 3-tuple of integers, optional :

This parameter is passed to image.resample_img. Please see the related documentation for details.

low_pass: None or float, optional :

This parameter is passed to signal.clean. Please see the related documentation for details.

high_pass: None or float, optional :

This parameter is passed to signal.clean. Please see the related documentation for details.

t_r: float, optional :

This parameter is passed to signal.clean. Please see the related documentation for details.

memory: instance of joblib.Memory or string :

Used to cache the masking process. By default, no caching is done. If a string is given, it is the path to the caching directory.

memory_level: integer, optional :

Rough estimator of the amount of memory used by caching. Higher value means more memory for caching.

n_jobs: integer, optional :

The number of CPUs to use to do the computation. -1 means ‘all CPUs’, -2 ‘all CPUs but one’, and so on.

verbose: integer, optional :

Indicates the level of verbosity. By default, nothing is printed.
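
As an illustration of the thresholding rule described above, here is a sketch in plain NumPy (not nilearn's actual implementation): ‘auto’ corresponds to ratio=1.0, i.e. keeping the n_voxels most intense values across all maps.

```python
import numpy as np

def threshold_maps(maps, ratio=1.0):
    """Zero out all but the ratio * n_voxels most intense values
    across all maps. A sketch of the 'auto'/float threshold rule;
    'auto' corresponds to ratio=1.0."""
    n_components, n_voxels = maps.shape
    abs_values = np.abs(maps).ravel()
    n_keep = int(ratio * n_voxels)
    # Smallest absolute value that survives the threshold
    cutoff = np.sort(abs_values)[-n_keep]
    thresholded = maps.copy()
    thresholded[np.abs(thresholded) < cutoff] = 0.
    return thresholded

# Two toy maps over 4 voxels: ratio=1.0 keeps the 4 most intense values
maps = np.array([[0.1, 2.0, 0.3, 4.0],
                 [5.0, 0.2, 0.15, 0.35]])
out = threshold_maps(maps, ratio=1.0)
```

Here n_voxels = 4, so the four largest absolute values (5.0, 4.0, 2.0, 0.35) are kept and the rest are set to zero.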

References

  • G. Varoquaux et al. “A group model for stable multi-subject ICA on fMRI datasets”, NeuroImage Vol 51 (2010), p. 288-299
  • G. Varoquaux et al. “ICA-based sparse features recovery from fMRI datasets”, IEEE ISBI 2010, p. 1177

Methods

__init__(mask=None, n_components=20, smoothing_fwhm=6, do_cca=True, threshold='auto', n_init=10, standardize=True, random_state=0, target_affine=None, target_shape=None, low_pass=None, high_pass=None, t_r=None, memory=Memory(cachedir=None), memory_level=0, n_jobs=1, verbose=0)
fit(imgs, y=None, confounds=None)

Compute the mask and the ICA maps across subjects.

Parameters:

imgs: list of Niimg-like objects :

See http://nilearn.github.io/building_blocks/manipulating_mr_images.html#niimg. Data on which PCA must be calculated. If this is a list, the affine is considered the same for all.

confounds: CSV file path or 2D matrix :

This parameter is passed to nilearn.signal.clean. Please see the related documentation for details.

fit_transform(X, y=None, **fit_params)

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters:

X : numpy array of shape [n_samples, n_features]

Training set.

y : numpy array of shape [n_samples]

Target values.

Returns:

X_new : numpy array of shape [n_samples, n_features_new]

Transformed array.

get_params(deep=True)

Get parameters for this estimator.

Parameters:

deep: boolean, optional :

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

params : mapping of string to any

Parameter names mapped to their values.

inverse_transform(component_signals)

Transform component signals back into voxel signals

Parameters:

component_signals: list of numpy array (n_samples x n_components) :

Component signals to transform back into voxel signals.

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns:

self
transform(imgs, confounds=None)

Project the data into a reduced representation.

Parameters:

imgs: iterable of Niimg-like objects :

Data to be projected into the reduced component space.

confounds: CSV file path or 2D matrix :

This parameter is passed to nilearn.signal.clean. Please see the related documentation for details.