Note

This page is reference documentation. It only explains the class signature, not how to use it. Please refer to the user guide for the big picture.

nilearn.input_data.NiftiMapsMasker

class nilearn.input_data.NiftiMapsMasker(maps_img, mask_img=None, smoothing_fwhm=None, standardize=False, detrend=False, low_pass=None, high_pass=None, t_r=None, resampling_target='data', memory=Memory(cachedir=None), memory_level=0, verbose=0)

Class for masking of Niimg-like objects.

NiftiMapsMasker is useful when data from overlapping volumes should be extracted (unlike NiftiLabelsMasker, which works on non-overlapping labels). Use case: summarize brain signals from large-scale networks obtained by a prior PCA or ICA.
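
A minimal usage sketch, assuming a 4D image of ICA maps saved as ica_maps.nii.gz and a functional run saved as fmri_run1.nii.gz (both filenames are placeholders):

    from nilearn.input_data import NiftiMapsMasker

    # Placeholder inputs: any Niimg-like object works (file path or nibabel image).
    maps_img = 'ica_maps.nii.gz'    # 4D image, one spatial map per volume
    fmri_img = 'fmri_run1.nii.gz'   # 4D functional image to summarize

    masker = NiftiMapsMasker(maps_img=maps_img, standardize=True)
    masker.fit()                          # prepare signal extraction
    signals = masker.transform(fmri_img)  # shape: (number of scans, number of maps)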

Parameters:

maps_img: Niimg-like object :

See http://nilearn.github.io/building_blocks/manipulating_mr_images.html#niimg. Region definitions, as one 4D image of maps (one spatial map per volume).

mask_img: Niimg-like object, optional :

See http://nilearn.github.io/building_blocks/manipulating_mr_images.html#niimg. Mask to apply to regions before extracting signals.

smoothing_fwhm: float, optional :

If smoothing_fwhm is not None, it gives the full-width at half maximum, in millimeters, of the spatial smoothing applied to the signal.

standardize: boolean, optional :

If standardize is True, the time-series are centered and normed: their mean is put to 0 and their variance to 1 in the time dimension.

detrend: boolean, optional :

This parameter is passed to signal.clean. Please see the related documentation for details.

low_pass: None or float, optional :

This parameter is passed to signal.clean. Please see the related documentation for details.

high_pass: None or float, optional :

This parameter is passed to signal.clean. Please see the related documentation for details.

t_r: float, optional :

This parameter is passed to signal.clean. Please see the related documentation for details.

resampling_target: {“mask”, “maps”, “data”, None}, optional :

Gives which image determines the final shape and size. For example, if resampling_target is “mask” then maps_img and the images provided to fit() are resampled to the shape and affine of mask_img. “None” means no resampling: if shapes and affines do not match, a ValueError is raised. Default value: “data”.

memory: joblib.Memory or str, optional :

Used to cache the region extraction process. By default, no caching is done. If a string is given, it is the path to the caching directory.

memory_level: int, optional :

Aggressiveness of memory caching. The higher the number, the more functions will be cached. Zero means no caching.

verbose: integer, optional :

Indicate the level of verbosity. By default, nothing is printed.
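
To illustrate how these parameters combine, the sketch below configures smoothing, signal cleaning, band-pass filtering and on-disk caching (all file and directory names are placeholders):

    from nilearn.input_data import NiftiMapsMasker

    masker = NiftiMapsMasker(
        maps_img='ica_maps.nii.gz',    # placeholder path to a 4D maps image
        mask_img='brain_mask.nii.gz',  # placeholder path to a binary brain mask
        smoothing_fwhm=6,              # 6 mm FWHM spatial smoothing
        standardize=True,              # zero mean, unit variance over time
        detrend=True,                  # passed to signal.clean
        low_pass=0.1,                  # band-pass bounds in Hz; require t_r
        high_pass=0.01,
        t_r=2.5,                       # repetition time in seconds
        memory='nilearn_cache',        # cache region extraction in this directory
        memory_level=1,
        verbose=1,
    )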

Notes

If resampling_target is set to “maps”, every 3D image processed by transform() will be resampled to the shape of maps_img. This may lead to very large memory consumption if the number of voxels in maps_img is large.

Methods

__init__(maps_img, mask_img=None, smoothing_fwhm=None, standardize=False, detrend=False, low_pass=None, high_pass=None, t_r=None, resampling_target='data', memory=Memory(cachedir=None), memory_level=0, verbose=0)
fit(X=None, y=None)

Prepare signal extraction from regions.

All parameters are unused; they are present only for scikit-learn compatibility.
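
A brief sketch of the scikit-learn-style workflow; fit() ignores X and y and returns the fitted masker (the maps path is a placeholder):

    from nilearn.input_data import NiftiMapsMasker

    masker = NiftiMapsMasker(maps_img='ica_maps.nii.gz')
    masker = masker.fit()   # X and y are ignored; returns the masker itself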

get_params(deep=True)

Get parameters for this estimator.

Parameters:

deep: boolean, optional :

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

params : mapping of string to any

Parameter names mapped to their values.
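
For example, get_params() returns the constructor arguments as a plain dictionary, which is what lets the masker participate in scikit-learn pipelines and grid searches (the maps path is a placeholder):

    from nilearn.input_data import NiftiMapsMasker

    masker = NiftiMapsMasker(maps_img='ica_maps.nii.gz', smoothing_fwhm=6)
    params = masker.get_params()
    # params['smoothing_fwhm'] == 6, params['standardize'] == False, ...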

inverse_transform(region_signals)

Compute voxel signals from region signals.

Any mask given at initialization is taken into account.

Parameters:

region_signals: 2D numpy.ndarray :

Signal for each region. shape: (number of scans, number of regions)

Returns:

voxel_signals: nibabel.Nifti1Image :

Signal for each voxel. shape: that of maps.
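
A sketch of the round trip from a 4D image to region signals and back to a voxel-level image (file names are placeholders):

    from nilearn.input_data import NiftiMapsMasker

    masker = NiftiMapsMasker(maps_img='ica_maps.nii.gz').fit()
    signals = masker.transform('fmri_run1.nii.gz')   # (n_scans, n_regions)
    voxel_img = masker.inverse_transform(signals)    # nibabel.Nifti1Image
    voxel_img.to_filename('signals_back_in_voxel_space.nii.gz')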

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns:

self
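
A brief sketch; set_params() updates constructor parameters in place and returns the masker itself (the maps path is a placeholder):

    from nilearn.input_data import NiftiMapsMasker

    masker = NiftiMapsMasker(maps_img='ica_maps.nii.gz')
    masker.set_params(smoothing_fwhm=8, standardize=True)  # returns self
    # masker.smoothing_fwhm is now 8
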
transform(imgs, confounds=None)

Extract signals from images.

Parameters:

imgs: Niimg-like object :

See http://nilearn.github.io/building_blocks/manipulating_mr_images.html#niimg. Images to process. They must boil down to a 4D image with the number of scans as the last dimension.

confounds: array-like, optional :

This parameter is passed to signal.clean. Please see the related documentation for details. shape: (number of scans, number of confounds)

Returns:

region_signals: 2D numpy.ndarray :

Signal for each region. shape: (number of scans, number of regions)
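
A sketch of signal extraction with confound regression; the confounds file is a placeholder text file with one row per scan and one column per confound (e.g. motion parameters):

    import numpy as np
    from nilearn.input_data import NiftiMapsMasker

    masker = NiftiMapsMasker(maps_img='ica_maps.nii.gz', standardize=True).fit()

    confounds = np.loadtxt('motion_parameters.txt')  # (n_scans, n_confounds)
    signals = masker.transform('fmri_run1.nii.gz', confounds=confounds)
    # signals.shape == (number of scans, number of regions)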