Object Processing

Object Processing modules produce or otherwise manipulate objects that have been identified in an image.

ClassifyObjects

ClassifyObjects classifies objects into different classes according to the value of measurements you choose.

This module classifies objects into a number of different bins according to the value of a measurement (e.g., by size, intensity, shape). It reports how many objects fall into each class as well as the percentage of objects that fall into each class. The module asks you to select the measurement feature to be used to classify your objects and to specify the bins to use. It also requires you to have run a Measure module or CalculateMath prior to this module in the pipeline so that the measurement values are available to classify the objects.

There are two flavors of classification:

  • The first classifies each object according to the measurements you choose and assigns each object to one class per measurement. You may specify more than two classification bins per measurement.

  • The second classifies each object according to two measurements and two threshold values. The module classifies each object once per measurement resulting in four possible object classes. The module then stores one measurement per object, based on the object’s class.

Note that objects without a measurement are not counted as belonging in a classification bin and will not show up in the output image (shown in the module display window); in the object classification they will have a value of False for all bins. However, they are still counted in the total number of objects and hence are reflected in the classification percentages.
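As an illustration, the following Python sketch (using numpy; the function name, measurement values, and bin edges are hypothetical, not part of CellProfiler) shows how the single-measurement flavor might bin objects and report per-bin counts and percentages, with objects lacking a measurement counted toward the total but not toward any bin:

```python
import numpy as np

def classify_single_measurement(values, bin_edges):
    """Bin one measurement per object; report counts and percentages per bin.

    values:    1-D array with one measurement per object (NaN = missing).
    bin_edges: increasing edges; bin i covers [edges[i], edges[i+1]).
    """
    values = np.asarray(values, dtype=float)
    n_total = values.size                 # objects without a measurement still count here
    measured = ~np.isnan(values)

    # np.digitize returns 1..len(edges); shift to 0-based bin indices
    bin_index = np.digitize(values[measured], bin_edges) - 1
    in_range = (bin_index >= 0) & (bin_index < len(bin_edges) - 1)

    counts = np.bincount(bin_index[in_range], minlength=len(bin_edges) - 1)
    pct = 100.0 * counts / n_total if n_total else np.zeros_like(counts, dtype=float)
    return counts, pct                    # analogous to NumObjectsPerBin / PctObjectsPerBin

# Example: classify five objects by area into three bins
areas = np.array([55.0, 120.0, np.nan, 300.0, 80.0])
counts, pct = classify_single_measurement(areas, bin_edges=[0, 100, 200, 400])
# counts -> [2, 1, 1]; pct -> [40., 20., 20.] (the NaN object is included in the total of 5)
```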


Supports 2D? YES | Supports 3D? NO | Respects masks? NO

See also

See also CalculateMath and any of the modules in the Measure category.

Measurements made by this module

  • Image measurements:

    • NumObjectsPerBin: The number of objects that are classified into each bin.

    • PctObjectsPerBin: The percentage of total objects that are classified into each bin.

  • Object measurements:

    • Single measurement: Classification (true/false) of the Nth bin for the Mth measurement.

    • Two measurements: Classification (true/false) of the 1st measurement versus the 2nd measurement, binned into bins above (“high”) and below (“low”) the cutoff.


CombineObjects

CombineObjects allows you to combine two object sets into a single object set.

This module is geared towards situations where a set of objects was identified using multiple instances of an Identify module, typically to account for large variability in size or intensity. Using this module will combine the object sets to create a new set of objects that can be used in other modules.

CellProfiler can only handle a single object at each location of an image, so it is important to carefully choose how to handle objects that would overlap.

When performing operations, this module treats the first selected object set, termed “initial objects” as the starting point for a joined set. CellProfiler will try to add objects from the second selected set to the initial set.

Object label numbers are re-assigned after merging the object sets. This means that if your settings result in one object being cut in two by another object, the divided segments will be reassigned as separate objects.
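As a rough illustration of one possible overlap policy (skipping objects from the second set that would overlap the initial set), here is a minimal Python sketch using numpy and scikit-image; the function name is hypothetical, and the actual module offers several ways to handle overlaps:

```python
import numpy as np
from skimage.segmentation import relabel_sequential

def combine_skip_overlapping(initial, addition):
    """Add objects from `addition` to `initial`, skipping any that would
    overlap an existing object (one of several possible overlap policies)."""
    combined = initial.astype(np.int32)          # work on a copy with room for new labels
    next_label = combined.max()
    for obj in np.unique(addition):
        if obj == 0:                             # 0 is background
            continue
        mask = addition == obj
        if np.any(combined[mask] > 0):           # would overlap an initial object
            continue
        next_label += 1
        combined[mask] = next_label
    # re-assign label numbers consecutively, as the module does after merging
    combined, _, _ = relabel_sequential(combined)
    return combined
```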


Supports 2D? YES | Supports 3D? NO | Respects masks? NO


ConvertImageToObjects

ConvertImageToObjects converts an image to objects. This module is useful for importing a previously segmented or labeled image into CellProfiler, as it will preserve the labels of an integer-labelled input.

This module can also convert a grayscale image to binary before converting it to an object. Connected components of the binary image are assigned to the same object. This feature is useful for identifying objects that can be cleanly distinguished using Threshold. If you wish to distinguish clumped objects, see Watershed or the Identify modules.

Note that grayscale images provided as input with this setting will be converted to binary images. Pixel intensities below or equal to 50% of the input’s full intensity range are assigned to the background (i.e., assigned the value 0). Pixel intensities above 50% of the input’s full intensity range are assigned to the foreground (i.e., assigned the value 1).
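The behavior described above can be approximated with a short Python sketch (numpy plus scikit-image; the function name is hypothetical): an integer-labeled input keeps its labels, while a grayscale input is thresholded at 50% of its intensity range and connected foreground pixels become objects.

```python
import numpy as np
from skimage.measure import label

def image_to_objects(image, already_labeled=False):
    """Minimal sketch of converting an image into objects."""
    image = np.asarray(image)
    if already_labeled:
        return image.astype(np.int32)            # integer labels are preserved as-is

    lo, hi = image.min(), image.max()
    binary = image > (lo + 0.5 * (hi - lo))      # >50% of the full range -> foreground
    return label(binary)                         # connected components become objects
```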


Supports 2D? YES | Supports 3D? YES | Respects masks? NO


ConvertObjectsToImage

ConvertObjectsToImage converts objects you have identified into an image.

This module allows you to take previously identified objects and convert them into an image according to a colormap you select, which can then be saved with the SaveImages module.

This module does not support overlapping objects, such as those produced by the UntangleWorms module. Overlapping regions will be lost during saving.
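As a rough approximation of the module's color output, the sketch below (scikit-image; the variable name is hypothetical) renders a label matrix as an RGB image in which each object gets its own color; the module itself offers several output formats (e.g., color, binary, grayscale).

```python
import numpy as np
from skimage.color import label2rgb

# `labels` stands in for objects produced by a previous Identify module
labels = np.array([[0, 1, 1],
                   [0, 2, 2],
                   [3, 3, 0]])

# Each object is drawn in a distinct color; background (label 0) stays black.
color_image = label2rgb(labels, bg_label=0)   # float RGB image in [0, 1]
```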


Supports 2D? YES | Supports 3D? YES | Respects masks? YES


EditObjectsManually

EditObjectsManually allows you to create, remove, and edit previously defined objects.

The interface will show the image that you selected as the guiding image, overlaid with colored outlines of the selected objects (or filled objects if you choose). This module allows you to remove or edit specific objects by pointing and clicking to select objects for removal or editing. Once editing is complete, the module displays the objects as originally identified (left) and the objects that remain after this module (right). More detailed Help is provided in the editing window via the ‘?’ button. The pipeline pauses once per processed image when it reaches this module. You must press the Done button to accept the selected objects and continue the pipeline.


Supports 2D? YES | Supports 3D? NO | Respects masks? YES

Measurements made by this module

Image measurements:

  • Count: The number of edited objects in the image.

Object measurements:

  • Location_X, Location_Y: The pixel (X,Y) coordinates of the center of mass of the edited objects.

See also

See also FilterObjects, MaskObjects, OverlayOutlines, and ConvertObjectsToImage.

Note on saving images: You can pass the objects along to the Object Processing module ConvertObjectsToImage to create an image. This image can be saved with the SaveImages module. Additionally, you can use the OverlayOutlines or OverlayObjects module to overlay outlines or objects, respectively, on a base image. The resulting image can also be saved with the SaveImages module.


ExpandOrShrinkObjects

ExpandOrShrinkObjects expands or shrinks objects by a defined distance.

The module expands or shrinks objects by adding or removing border pixels. You can specify a certain number of border pixels to be added or removed, expand objects until they are almost touching, or shrink objects down to a point. The module can also separate touching objects without otherwise shrinking them, and can perform some specialized morphological operations that remove pixels without completely removing an object.
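For a flavor of these operations, here is a minimal Python sketch (scipy and scikit-image >= 0.18; the function names are hypothetical). Unlike the module, this sketch has no safeguard against an object being eroded away entirely.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import expand_labels

def expand_objects(labels, distance):
    """Grow each object outward by `distance` pixels without letting
    neighboring objects invade each other."""
    return expand_labels(labels, distance=distance)

def shrink_objects(labels, iterations):
    """Peel `iterations` layers of border pixels off every object."""
    shrunk = np.zeros_like(labels)
    for obj in np.unique(labels):
        if obj == 0:
            continue
        eroded = ndi.binary_erosion(labels == obj, iterations=iterations)
        shrunk[eroded] = obj
    return shrunk
```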

See also IdentifySecondaryObjects, which allows creating new objects based on expansion of existing objects, with a few options that differ from those in this module. There are also several related modules in the Advanced category (e.g., Dilation, Erosion, MorphologicalSkeleton).

Note on saving images: You can pass the objects along to the Object Processing module ConvertObjectsToImage to create an image. This image can be saved with the SaveImages module. Additionally, you can use the OverlayOutlines or OverlayObjects module to overlay outlines or objects, respectively, on a base image. The resulting image can also be saved with the SaveImages module.


Supports 2D? YES | Supports 3D? NO | Respects masks? YES

Measurements made by this module

Image measurements:

  • Count: Number of expanded/shrunken objects in the image.

Object measurements:

  • Location_X, Location_Y: Pixel (X,Y) coordinates of the center of mass of the expanded/shrunken objects.


FilterObjects

FilterObjects eliminates objects based on their measurements (e.g., area, shape, texture, intensity).

This module removes selected objects based on measurements produced by another module (e.g., MeasureObjectSizeShape, MeasureObjectIntensity, MeasureTexture, etc.). All objects that do not satisfy the specified parameters will be discarded.

This module also may remove objects touching the image border or edges of a mask. This is useful if you would like to unify images via SplitOrMergeObjects before deciding to discard these objects.

Please note that the objects that pass the filtering step comprise a new object set, and hence do not inherit the measurements associated with the original objects. Any measurements on the new object set will need to be made post-filtering by the desired measurement modules.
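As a simple illustration of measurement-based filtering (here by area, using scikit-image; the function name and area limits are hypothetical, and the module itself can filter on any measurement you select):

```python
import numpy as np
from skimage.measure import regionprops
from skimage.segmentation import relabel_sequential

def filter_by_area(labels, min_area, max_area):
    """Keep only objects whose area lies in [min_area, max_area]; the
    survivors form a new, consecutively numbered object set."""
    keep = [p.label for p in regionprops(labels)
            if min_area <= p.area <= max_area]
    filtered = np.where(np.isin(labels, keep), labels, 0)
    filtered, _, _ = relabel_sequential(filtered)
    return filtered
```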


Supports 2D? YES | Supports 3D? YES | Respects masks? YES

See also

See also any of the MeasureObject modules, MeasureTexture, MeasureColocalization, and CalculateMath.

Note on saving images: You can pass the objects along to the Object Processing module ConvertObjectsToImage to create an image. This image can be saved with the SaveImages module. Additionally, you can use the OverlayOutlines or OverlayObjects module to overlay outlines or objects, respectively, on a base image. The resulting image can also be saved with the SaveImages module.

Measurements made by this module

Image measurements:

  • Count: The number of objects remaining after filtering.

Object measurements:

  • Parent: The identity of the input object associated with each filtered (remaining) object.

  • Location_X, Location_Y, Location_Z: The pixel (X,Y,Z) coordinates of the center of mass of the filtered (remaining) objects.


IdentifyObjectsInGrid

IdentifyObjectsInGrid identifies objects within each section of a grid that has been defined by the DefineGrid module.

This module identifies objects that are contained within a grid pattern, allowing you to measure the objects using Measure modules. It requires you to have defined a grid earlier in the pipeline using the DefineGrid module. For several of the automatic options, you will need to enter the names of previously identified objects. Typically, this module is used to refine the locations and/or shapes of objects of interest that you roughly identified in a previous Identify module. Within this module, objects are re-numbered according to the grid definitions rather than their original numbering from the earlier Identify module. If placing the objects within the grid is impossible for some reason (for example, the grid compartments are too close together to fit properly sized circles), the grid will fail and processing will be canceled unless you choose to re-use a grid from a previous successful image cycle.

Note on saving images: You can pass the objects along to the Object Processing module ConvertObjectsToImage to create an image. This image can be saved with the SaveImages module. Additionally, you can use the OverlayOutlines or OverlayObjects module to overlay outlines or objects, respectively, on a base image. The resulting image can also be saved with the SaveImages module.


Supports 2D? YES | Supports 3D? NO | Respects masks? YES

Measurements made by this module

Image measurements:

  • Count: The number of objects identified.

Object measurements:

  • Location_X, Location_Y: The pixel (X,Y) coordinates of the center of mass of the identified objects.

  • Number: The numeric label assigned to each identified object according to the arrangement order you specified.

See also

See also DefineGrid.


IdentifyObjectsManually

IdentifyObjectsManually allows you to identify objects in an image by hand rather than automatically.

This module lets you outline the objects in an image using the mouse.

The user interface has several mouse tools:

  • Outline: Lets you draw an outline around an object. Press the left mouse button at the start of the outline and draw the outline around your object. The tool will close your outline when you release the left mouse button.

  • Zoom in: Lets you draw a rectangle and zoom the display to within that rectangle.

  • Zoom out: Reverses the effect of the last zoom-in.

  • Erase: Erases an object if you click on it.


Supports 2D? YES | Supports 3D? NO | Respects masks? NO

See also

Note on saving images: You can pass the objects along to the Object Processing module ConvertObjectsToImage to create an image. This image can be saved with the SaveImages module. Additionally, you can use the OverlayOutlines or OverlayObjects module to overlay outlines or objects, respectively, on a base image. The resulting image can also be saved with the SaveImages module.


IdentifyPrimaryObjects

IdentifyPrimaryObjects identifies biological objects of interest. It requires grayscale images containing bright objects on a dark background. Incoming images must be 2D (including 2D slices of 3D images); please use the Watershed module for identification of objects in 3D.


Supports 2D? YES | Supports 3D? NO | Respects masks? YES

See also

See also IdentifySecondaryObjects, IdentifyTertiaryObjects, IdentifyObjectsManually, and Watershed (for segmentation of 3D objects).

What is a primary object?

In CellProfiler, we use the term object as a generic term to refer to an identified feature in an image, usually an organism, cell, or cellular compartment (for example, nuclei, cells, colonies, worms).

We define an object as primary when it can be found in an image without needing the assistance of another cellular feature as a reference. For example:

  • The nuclei of cells are usually more easily identifiable than whole-cell stains due to their more uniform morphology, high contrast relative to the background when stained, and good separation between adjacent nuclei. These qualities typically make them appropriate candidates for primary object identification.

  • In contrast, whole-cell stains often yield irregular intensity patterns and are lower-contrast with more diffuse staining, making them more challenging to identify than nuclei without some supplemental image information being provided. In addition, cells often touch or even overlap their neighbors, making it harder to delineate the cell borders. For these reasons, cell bodies are better suited for secondary object identification, because they are best identified by using a previously-identified primary object (i.e., the nuclei) as a reference. See the IdentifySecondaryObjects module for details on how to do this.

What do I need as input?

To use this module, you will need to make sure that your input image has the following qualities:

  • The image should be grayscale.

  • The foreground (i.e., the regions of interest) should be lighter than the background.

  • The image should be 2D. 2D slices of 3D images are acceptable if the image has not been loaded as volumetric in the NamesAndTypes module. For volumetric analysis of 3D images, please see the Watershed module.

If this is not the case, other modules can be used to pre-process the images to ensure they are in the proper form:

  • If the objects in your images are dark on a light background, you should invert the images using the Invert operation in the ImageMath module.

  • If you are working with color images, they must first be converted to grayscale using the ColorToGray module.

  • If your images are brightfield/phase/DIC, they may be processed with the EnhanceOrSuppressFeatures module with its “Texture” or “DIC” settings.

  • If you struggle to find effective settings for this module, you may want to check our tutorial on preprocessing these images with ilastik prior to using them in CellProfiler.

What are the advanced settings?

IdentifyPrimaryObjects allows you to tweak your settings in many ways; so many that it can often become confusing where you should start. This is typically the most important but complex step in creating a good pipeline, so do not be discouraged: other modules are easier to configure! Using IdentifyPrimaryObjects with ‘Use advanced settings?’ set to ‘No’ allows you to quickly try to identify your objects based only on their typical size; CellProfiler will then use its built-in defaults to decide how to set the threshold and how to break clumped objects apart. If you are happy with the results produced by the default settings, you can then move on to construct the rest of your pipeline; if not, you can set ‘Use advanced settings?’ to ‘Yes’, which will allow you to fully tweak and customize all the settings.

What do I get as output?

A set of primary objects are produced by this module, which can be used in downstream modules for measurement purposes or other operations. See the section “Measurements made by this module” below for the measurements that are produced directly by this module. Once the module has finished processing, the module display window will show the following panels:

  • Upper left: The raw, original image.

  • Upper right: The identified objects shown as a color image where connected pixels that belong to the same object are assigned the same color (label image). Note that assigned colors are arbitrary; they are used simply to help you distinguish the various objects.

  • Lower left: The raw image overlaid with the colored outlines of the identified objects. Each object is assigned one of three (default) colors:

    • Green: Acceptable; passed all criteria

    • Magenta: Discarded based on size

    • Yellow: Discarded due to touching the border

    If you need to change the color defaults, you can make adjustments in File > Preferences.

  • Lower right: A table showing some of the settings used by the module in order to produce the objects shown. Some of these are as you specified in settings; others are calculated by the module itself.

Note on saving images: You can pass the objects along to the Object Processing module ConvertObjectsToImage to create an image. This image can be saved with the SaveImages module. Additionally, you can use the OverlayOutlines or OverlayObjects module to overlay outlines or objects, respectively, on a base image. The resulting image can also be saved with the SaveImages module.

Measurements made by this module

Image measurements:

  • Count: The number of primary objects identified.

  • OriginalThreshold: The global threshold for the image.

  • FinalThreshold: For the global threshold methods, this value is the same as OriginalThreshold. For the adaptive or per-object methods, this value is the mean of the local thresholds.

  • WeightedVariance: The sum of the log-transformed variances of the foreground and background pixels, weighted by the number of pixels in each distribution.

  • SumOfEntropies: The sum of entropies computed from the foreground and background distributions.

Object measurements:

  • Location_X, Location_Y: The pixel (X,Y) coordinates of the primary object centroids. The centroid is calculated as the center of mass of the binary representation of the object.

Technical notes

CellProfiler contains a modular three-step strategy to identify objects even if they touch each other (“declumping”). It is based on previously published algorithms (Malpica et al., 1997; Meyer and Beucher, 1990; Ortiz de Solorzano et al., 1999; Wahlby, 2003; Wahlby et al., 2004). Choosing different options for each of these three steps allows CellProfiler to flexibly analyze a variety of different types of objects. The module has many options, which vary in terms of speed and sophistication. More detail can be found in the Settings section below. Here are the three steps, using an example where nuclei are the primary objects (a minimal code sketch of the general approach follows the list):

  1. CellProfiler determines whether a foreground region is an individual nucleus or two or more clumped nuclei.

  2. The edges of nuclei are identified, using thresholding if the object is a single, isolated nucleus, and using more advanced options if the object is actually two or more nuclei that touch each other.

  3. Some identified objects are discarded or merged together if they fail to meet the criteria you specify. For example, partial objects at the border of the image can be discarded, and small objects can be discarded or merged with nearby larger ones. A separate module, FilterObjects, can further refine the identified nuclei, if desired, by excluding objects that are a particular size, shape, intensity, or texture.
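The sketch below illustrates the general threshold-then-declump flavor of these steps using an Otsu threshold, a distance transform, and a seeded watershed (numpy, scipy, scikit-image; the function name and parameters are hypothetical). It is not CellProfiler's implementation, which offers many more thresholding and declumping options.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def identify_nuclei(image, min_distance=10):
    """Threshold a grayscale image, then split touching nuclei by seeding
    a watershed with peaks of the distance transform."""
    binary = image > threshold_otsu(image)                 # foreground vs. background
    distance = ndi.distance_transform_edt(binary)

    # one seed per presumed nucleus, then divide clumps between the seeds
    coords = peak_local_max(distance, min_distance=min_distance, labels=binary)
    markers = np.zeros(distance.shape, dtype=np.int32)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    labels = watershed(-distance, markers, mask=binary)

    # size/border filtering (step 3) would follow here
    return labels
```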

References

  • Malpica N, de Solorzano CO, Vaquero JJ, Santos A, Vallcorba I, Garcia-Sagredo JM, del Pozo F (1997) “Applying watershed algorithms to the segmentation of clustered nuclei.” Cytometry 28, 289-297.

  • Meyer F, Beucher S (1990) “Morphological segmentation.” J Visual Communication and Image Representation 1, 21-46.

  • Ortiz de Solorzano C, Rodriguez EG, Jones A, Pinkel D, Gray JW, Sudar D, Lockett SJ (1999) “Segmentation of confocal microscope images of cell nuclei in thick tissue sections.” Journal of Microscopy-Oxford 193, 212-226.

  • Wählby C (2003) Algorithms for applied digital image cytometry, Ph.D. thesis, Uppsala University, Uppsala.

  • Wählby C, Sintorn IM, Erlandsson F, Borgefors G, Bengtsson E (2004) “Combining intensity, edge and shape information for 2D and 3D segmentation of cell nuclei in tissue sections.” J Microsc 215, 67-76.


IdentifySecondaryObjects

IdentifySecondaryObjects identifies objects (e.g., cells) using objects identified by another module (e.g., nuclei) as a starting point.


Supports 2D? YES | Supports 3D? NO | Respects masks? YES

See also

See also the other Identify modules.

What is a secondary object?

In CellProfiler, we use the term object as a generic term to refer to an identified feature in an image, usually an organism, cell, or cellular compartment (for example, nuclei, cells, colonies, worms).

We define an object as secondary when it can be found in an image by using another cellular feature as a reference for guiding detection.

For densely-packed cells (such as those in a confluent monolayer), determining the cell borders using a cell body stain can be quite difficult since they often have irregular intensity patterns and are lower-contrast with more diffuse staining. In addition, cells often touch their neighbors making it harder to delineate the cell borders. It is often easier to identify an organelle which is well separated spatially (such as the nucleus) as an object first and then use that object to guide the detection of the cell borders. See the IdentifyPrimaryObjects module for details on how to identify a primary object.

In order to identify the edges of secondary objects, this module performs two tasks (a minimal code sketch follows this list):

  1. Finds the dividing lines between secondary objects that touch each other.

  2. Finds the dividing lines between the secondary objects and the background of the image. In most cases, this is done by thresholding the image stained for the secondary objects.
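Here is a minimal Python sketch of one propagation-style approach to both tasks (scikit-image; the function name is hypothetical): the primary labels are used as seeds for a watershed constrained to the thresholded cell-body image. It assumes a diffuse cytoplasmic stain and, unlike the module, does not guarantee that every primary object ends up fully enclosed in its secondary object.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed

def identify_cells(cell_image, nuclei_labels):
    """Grow each nucleus label outward through the thresholded cell-body stain."""
    foreground = cell_image > threshold_otsu(cell_image)   # task 2: cells vs. background
    # task 1: with a diffuse cytoplasmic stain, the dividing line between two
    # touching cells tends to fall where the stain is dimmest between the seeds
    cells = watershed(-cell_image, markers=nuclei_labels, mask=foreground)
    return cells
```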

What do I need as input?

This module identifies secondary objects based on two types of input:

  1. An object (e.g., nuclei) identified from a prior module. These are typically produced by an IdentifyPrimaryObjects module, but any object produced by another module may be selected for this purpose.

  2. (optional) An image highlighting the image features defining the edges of the secondary objects (e.g., cell edges). This is typically a fluorescent stain for the cell body, membrane or cytoskeleton (e.g., phalloidin staining for actin). However, any image that produces these features can be used for this purpose. For example, an image processing module might be used to transform a brightfield image into one that captures the characteristics of a cell body fluorescent stain. This input is optional because you can instead define secondary objects as a fixed distance around each primary object.

What do I get as output?

A set of secondary objects are produced by this module, which can be used in downstream modules for measurement purposes or other operations. Because each primary object is used as the starting point for producing a corresponding secondary object, keep in mind the following points:

  • The primary object will always be completely contained within a secondary object. For example, nuclei are completely enclosed within cells identified by actin staining.

  • There will always be at most one secondary object for each primary object.

Once the module has finished processing, the module display window will show the following panels. Note that these are for display only; you must use the SaveImages module if you would like to save any of these images to the hard drive (the OverlayOutlines or ConvertObjectsToImage modules might also be needed):

  • Upper left: The raw, original image.

  • Upper right: The identified objects shown as a color image where connected pixels that belong to the same object are assigned the same color (label image). Note that assigned colors are arbitrary; they are used simply to help you distinguish the various objects.

  • Lower left: The raw image overlaid with the colored outlines of the identified secondary objects. The objects are shown with the following colors:

    • Magenta: Secondary objects

    • Green: Primary objects

    If you need to change the color defaults, you can make adjustments in File > Preferences.

  • Lower right: A table showing some of the settings you chose, as well as those calculated by the module in order to produce the objects shown.

Note on saving images: You can pass the objects along to the Object Processing module ConvertObjectsToImage to create an image. This image can be saved with the SaveImages module. Additionally, you can use the OverlayOutlines or OverlayObjects module to overlay outlines or objects, respectively, on a base image. The resulting image can also be saved with the SaveImages module.

Measurements made by this module

Image measurements:

  • Count: The number of secondary objects identified.

  • OriginalThreshold: The global threshold for the image.

  • FinalThreshold: For the global threshold methods, this value is the same as OriginalThreshold. For the adaptive or per-object methods, this value is the mean of the local thresholds.

  • WeightedVariance: The sum of the log-transformed variances of the foreground and background pixels, weighted by the number of pixels in each distribution.

  • SumOfEntropies: The sum of entropies computed from the foreground and background distributions.

Object measurements:

  • Parent: The identity of the primary object associated with each secondary object.

  • Location_X, Location_Y: The pixel (X,Y) coordinates of the center of mass of the identified secondary objects.


IdentifyTertiaryObjects

IdentifyTertiaryObjects identifies tertiary objects (e.g., cytoplasm) by removing smaller primary objects (e.g., nuclei) from larger secondary objects (e.g., cells), leaving a ring shape.


Supports 2D? YES | Supports 3D? NO | Respects masks? YES

See also

See also IdentifyPrimaryObjects and IdentifySecondaryObjects modules.

What is a tertiary object?

In CellProfiler, we use the term object as a generic term to refer to an identified feature in an image, usually an organism, cell, or cellular compartment (for example, nuclei, cells, colonies, worms).

We define an object as tertiary when it is identified using prior primary and secondary objects.

As an example, you can find nuclei using IdentifyPrimaryObjects and cell bodies using IdentifySecondaryObjects. Use the IdentifyTertiaryObjects module to define the cytoplasm, the region outside the nucleus but within the cell body, as a new object which can be measured in downstream Measure modules.

What do I need as input?

This module will take the smaller identified objects and remove them from the larger identified objects. For example, “subtracting” the nuclei from the cells will leave just the cytoplasm, the properties of which can then be measured by downstream Measure modules. The larger objects should therefore be equal in size or larger than the smaller objects and must completely contain the smaller objects; IdentifySecondaryObjects will produce objects that satisfy this constraint. Ideally, both inputs should be objects produced by prior Identify modules.
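In label-matrix terms the subtraction is straightforward, as in this minimal Python sketch (the function name is hypothetical; this mirrors only the basic subtraction, not the module's additional options):

```python
import numpy as np

def identify_cytoplasm(cell_labels, nuclei_labels):
    """Remove the smaller objects (nuclei) from the larger ones (cells),
    leaving ring-shaped tertiary objects that keep the cell's label."""
    cytoplasm = cell_labels.copy()
    cytoplasm[nuclei_labels > 0] = 0     # drop pixels covered by a nucleus
    return cytoplasm
```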

What do I get as output?

A set of objects are produced by this module, which can be used in downstream modules for measurement purposes or other operations. Because each tertiary object is produced from primary and secondary objects, there will always be at most one tertiary object for each larger object. See the section “Measurements made by this module” below for the measurements that are produced by this module.

Note that if the smaller objects are not completely contained within the larger objects, creating subregions using this module can result in objects with a single label (that is, identity) that nonetheless are not contiguous. This may lead to unexpected results when running measurement modules such as MeasureObjectSizeShape because calculations of the perimeter, aspect ratio, solidity, etc. typically make sense only for contiguous objects. Other modules, such as MeasureImageIntensity, are not affected and will yield expected results.

Note on saving images: You can pass the objects along to the Object Processing module ConvertObjectsToImage to create an image. This image can be saved with the SaveImages module. Additionally, you can use the OverlayOutlines or OverlayObjects module to overlay outlines or objects, respectively, on a base image. The resulting image can also be saved with the SaveImages module.

Measurements made by this module

Image measurements:

  • Count: The number of tertiary objects identified.

Object measurements:

  • Parent: The identity of the primary object and secondary object associated with each tertiary object.

  • Location_X, Location_Y: The pixel (X,Y) coordinates of the center of mass of the identified tertiary objects.


MaskObjects

MaskObjects removes objects outside of a specified region or regions.

This module allows you to delete the objects or portions of objects that are outside of a region (mask) you specify. For example, after identifying nuclei and tissue regions in previous Identify modules, you might want to exclude all nuclei that are outside of a tissue region.

If using a masking image, the mask is composed of the foreground (white portions); if using a masking object, the mask is composed of the area within the object. You can choose to remove only the portion of each object that is outside of the region, remove the whole object if it is partially or fully outside of the region, or retain the whole object unless it is fully outside of the region.
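Two of those behaviors can be sketched in a few lines of Python (numpy and scikit-image; the function name and flag are hypothetical): keeping only the portion of each object inside the mask, or retaining whole objects that touch the mask at all.

```python
import numpy as np
from skimage.segmentation import relabel_sequential

def mask_objects(labels, mask, keep_if_any_overlap=False):
    """Remove objects (or portions of objects) lying outside a binary mask."""
    if keep_if_any_overlap:
        # retain every object that overlaps the mask at all
        inside = np.unique(labels[mask & (labels > 0)])
        masked = np.where(np.isin(labels, inside), labels, 0)
    else:
        # keep only the part of each object that falls inside the mask
        masked = np.where(mask, labels, 0)
    masked, _, _ = relabel_sequential(masked)
    return masked
```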


Supports 2D? YES | Supports 3D? NO | Respects masks? YES

See also

Note on saving images: You can pass the objects along to the Object Processing module ConvertObjectsToImage to create an image. This image can be saved with the SaveImages module. Additionally, you can use the OverlayOutlines or OverlayObjects module to overlay outlines or objects, respectively, on a base image. The resulting image can also be saved with the SaveImages module.

Measurements made by this module

Parent object measurements:

  • Count: The number of new masked objects created from each parent object.

Masked object measurements:

  • Parent: The label number of the parent object.

  • Location_X, Location_Y: The pixel (X,Y) coordinates of the center of mass of the masked objects.


RelateObjects

RelateObjects assigns relationships; all objects (e.g., speckles) within a parent object (e.g., nucleus) become its children.

This module allows you to associate child objects with parent objects. This is useful for counting the number of children associated with each parent, and for calculating mean measurement values for all children that are associated with each parent.

An object will be considered a child even if its edge is only partly touching a parent object. If a child object touches multiple parent objects, it will be assigned to the parent with which it has maximal overlap. For an alternate approach to assigning parent/child relationships, consider using the MaskObjects module.

If you want to include child objects that lie outside but still near parent objects, you might want to expand the parent objects using ExpandOrShrinkObjects or IdentifySecondaryObjects.
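The maximal-overlap rule can be illustrated with a short Python sketch (numpy; the function name is hypothetical), which returns a child-to-parent mapping in which 0 means no overlapping parent:

```python
import numpy as np

def assign_children_to_parents(child_labels, parent_labels):
    """Map each child object to the parent it overlaps the most (0 = orphan)."""
    assignment = {}
    for child in np.unique(child_labels):
        if child == 0:
            continue
        overlapping = parent_labels[child_labels == child]
        overlapping = overlapping[overlapping > 0]
        if overlapping.size == 0:
            assignment[child] = 0                        # no overlapping parent
        else:
            counts = np.bincount(overlapping)
            assignment[child] = int(np.argmax(counts))   # parent with maximal overlap
    return assignment
```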


Supports 2D? YES | Supports 3D? YES | Respects masks? YES

See also

See also: SplitOrMergeObjects, MaskObjects.

Note on saving images: You can pass the objects along to the Object Processing module ConvertObjectsToImage to create an image. This image can be saved with the SaveImages module. Additionally, you can use the OverlayOutlines or OverlayObjects module to overlay outlines or objects, respectively, on a base image. The resulting image can also be saved with the SaveImages module.

Measurements made by this module

Parent object measurements:

  • Count: The number of child sub-objects for each parent object.

  • Mean measurements: The mean of the child object measurements, calculated for each parent object.

Child object measurements:

  • Parent: The label number of the parent object, as assigned by an Identify or Watershed module.

  • Distances: The distance of each child object to its respective parent.


ResizeObjects

ResizeObjects will upsize or downsize an object’s label matrix by a factor or by specifying the final dimensions in pixels. ResizeObjects is similar to ResizeImage, but ResizeObjects is specific to CellProfiler objects created by modules such as IdentifyPrimaryObjects or Watershed. ResizeObjects uses nearest neighbor interpolation to preserve object labels after the resizing operation.

When resizing 3D data, the height and width will be changed, but the original depth (or z-dimension) will be kept. This 3D behavior was chosen because, in most cases, the number of slices in a z-stack is much smaller than the number of pixels that define the x-y dimensions; otherwise, a significant fraction of z information would be lost during downsizing.

ResizeObjects is useful for processing very large or 3D data to reduce computation time. You might downsize a 3D image with ResizeImage to generate a segmentation, then use ResizeObjects to stretch the segmented objects to their original size before computing measurements with the original 3D image.

ResizeObjects differs from ExpandOrShrinkObjects and ShrinkToObjectCenters in that the overall dimensions of the object label matrix, or image, are changed. In contrast, ExpandOrShrinkObjects will alter the size of the objects within an image, but it will not change the size of the image itself.
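The core operation can be sketched with scipy's nearest-neighbor zoom (the function name is hypothetical); order=0 guarantees that no new label values are interpolated into existence, and for 3D input only the y and x dimensions are scaled:

```python
import numpy as np
from scipy import ndimage as ndi

def resize_objects(labels, factor):
    """Resize a label matrix by `factor` with nearest-neighbor interpolation."""
    if labels.ndim == 3:
        zoom = (1, factor, factor)     # (z, y, x): keep the number of slices
    else:
        zoom = (factor, factor)
    return ndi.zoom(labels, zoom, order=0)   # order=0 -> existing labels are preserved
```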

See also

Note on saving images: You can pass the objects along to the Object Processing module ConvertObjectsToImage to create an image. This image can be saved with the SaveImages module. Additionally, you can use the OverlayOutlines or OverlayObjects module to overlay outlines or objects, respectively, on a base image. The resulting image can also be saved with the SaveImages module.


SplitOrMergeObjects

SplitOrMergeObjects separates or combines a set of objects that were identified earlier in a pipeline.

Objects and their measurements are associated with each other based on their object numbers (also known as labels). Typically, each object is assigned a single unique number, such that the exported measurements are ordered by this numbering. This module allows the reassignment of object numbers by either merging separate objects to share the same label, or splitting portions of separate objects that previously had the same label.

There are many options in this module. For example, objects that share a label, but are not touching can be relabeled into separate objects. Objects that share a boundary can be combined into a single object. Children of the same parent can be given the same label.

Note that this module does not physically connect/bridge/merge objects that are separated by background pixels; it simply assigns the same object number to the portions of the object. The new, “merged” object may therefore consist of two or more unconnected components. If you want to add pixels around objects, see ExpandOrShrinkObjects or Morph.
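Two of these relabeling operations are sketched below in Python (numpy, scipy, scikit-image; the function names are hypothetical): splitting pieces that share a label but do not touch, and merging a chosen set of labels into one object number without bridging the pixels.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import relabel_sequential

def split_disconnected(labels):
    """Give every connected component its own object number."""
    split = np.zeros_like(labels)
    offset = 0
    for obj in np.unique(labels):
        if obj == 0:
            continue
        pieces, n = ndi.label(labels == obj)          # connected pieces of this object
        split[pieces > 0] = pieces[pieces > 0] + offset
        offset += n
    split, _, _ = relabel_sequential(split)           # consecutive numbering, no gaps
    return split

def merge_labels(labels, to_merge):
    """Give the listed objects one shared number; the pixels are not bridged,
    so the 'merged' object may consist of unconnected components."""
    target = min(to_merge)
    merged = np.where(np.isin(labels, list(to_merge)), target, labels)
    merged, _, _ = relabel_sequential(merged)
    return merged
```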


Supports 2D? YES | Supports 3D? NO | Respects masks? YES

See also

See also RelateObjects.

Note on saving images: You can pass the objects along to the Object Processing module ConvertObjectsToImage to create an image. This image can be saved with the SaveImages module. Additionally, you can use the OverlayOutlines or OverlayObjects module to overlay outlines or objects, respectively, on a base image. The resulting image can also be saved with the SaveImages module.

Measurements made by this module

Parent object measurements:

  • Children Count: The number of relabeled objects created from each parent object.

Reassigned object measurements:

  • Parent: The label number of the parent object.

  • Location_X, Location_Y: The pixel (X,Y) coordinates of the center of mass of the reassigned objects.

Technical notes

Reassignment means that the numerical value of every pixel within an object (in the label matrix version of the image) gets changed, as specified by the module settings. In order to ensure that objects are labeled consecutively without gaps in the numbering (which other modules may depend on), SplitOrMergeObjects will typically result in most of the objects having their numbers reordered. This reassignment information is stored as a per-object measurement with both the original input and reassigned output objects, in case you need to track the reassignment.


TrackObjects

TrackObjects allows tracking of objects throughout sequential frames of a series of images, so that from frame to frame each object maintains a unique identity in the output measurements.

This module must be placed downstream of a module that identifies objects (e.g., IdentifyPrimaryObjects). TrackObjects will associate each object with the same object in the frames before and after. This allows the study of objects’ lineages and the timing and characteristics of dynamic events in movies.
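To give a feel for the simplest (distance-based) kind of frame-to-frame association, here is a minimal Python sketch using object centroids (scikit-image and numpy; the function name and distance limit are hypothetical, and the module's actual tracking methods, including LAP, are considerably more sophisticated):

```python
import numpy as np
from skimage.measure import regionprops

def link_frames(labels_prev, labels_curr, max_distance=25.0):
    """Link each current-frame object to the nearest previous-frame object
    (by centroid) within `max_distance` pixels; None means a new object."""
    prev = {p.label: np.array(p.centroid) for p in regionprops(labels_prev)}
    links = {}
    for obj in regionprops(labels_curr):
        if not prev:
            links[obj.label] = None
            continue
        dists = {lbl: np.linalg.norm(np.array(obj.centroid) - c)
                 for lbl, c in prev.items()}
        best = min(dists, key=dists.get)
        links[obj.label] = best if dists[best] <= max_distance else None
    return links
```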

Images in CellProfiler are processed sequentially by frame (whether loaded as a series of images or a movie file). To process a collection of images/movies, you will need to do the following:

  • Define each individual movie using metadata either contained within the image file itself or as part of the images’ nomenclature or folder structure. Please see the Metadata module for more details on metadata collection and usage.

  • Group the movies to make sure that each image sequence is handled individually. Please see the Groups module for more details on the proper use of metadata for grouping.

For complete details, see Help > Creating a Project > Loading Image Stacks and Movies.

For an example pipeline using TrackObjects, see the CellProfiler Examples webpage.


Supports 2D? YES | Supports 3D? NO | Respects masks? YES

See also

See also: Any of the Measure modules, IdentifyPrimaryObjects, Groups.

Note on saving images: You can pass the objects along to the Object Processing module ConvertObjectsToImage to create an image. This image can be saved with the SaveImages module. Additionally, you can use the OverlayOutlines or OverlayObjects module to overlay outlines or objects, respectively, on a base image. The resulting image can also be saved with the SaveImages module.

Measurements made by this module

Object measurements

  • Label: Each tracked object is assigned a unique identifier (label). Child objects resulting from a split or merge are assigned the label of the ancestor.

  • ParentImageNumber, ParentObjectNumber: The ImageNumber and ObjectNumber of the parent object in the prior frame. For a split, each child object will have the label of the object it split from. For a merge, the child will have the label of the closest parent.

  • TrajectoryX, TrajectoryY: The direction of motion (in x and y coordinates) of the object from the previous frame to the current frame.

  • DistanceTraveled: The distance traveled by the object from the previous frame to the current frame (calculated as the magnitude of the trajectory vectors).

  • Displacement: The shortest distance traveled by the object from its initial starting position to the position in the current frame. That is, it is the straight-line path between the two points.

  • IntegratedDistance: The total distance traveled by the object during the lifetime of the object.

  • Linearity: A measure of how linear the object trajectory is during the object lifetime. Calculated as (displacement from initial to final location)/(integrated object distance); the value is in the range [0,1]. (A worked example follows this list.)

  • Lifetime: The number of frames an object has existed. The lifetime starts at 1 at the frame when an object appears, and is incremented with each frame that the object persists. At the final frame of the image set/movie, the lifetimes of all remaining objects are output.

  • FinalAge: Similar to Lifetime, but it is only output at the final frame of the object’s life (or when the movie ends, whichever comes first). At this point, the final age of the object is output; no values are stored for earlier frames.

    This value is useful if you want to plot a histogram of the object lifetimes; all but the final age can be ignored or filtered out.
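As a worked illustration of the distance-based measurements above (TrajectoryX/Y, DistanceTraveled, IntegratedDistance, Displacement, Linearity), here is a small numpy sketch using hypothetical centroid positions for one tracked object:

```python
import numpy as np

# (x, y) centroid of one object in four successive frames (hypothetical data)
positions = np.array([[10.0, 10.0], [13.0, 14.0], [20.0, 14.0], [20.0, 20.0]])

steps = np.diff(positions, axis=0)                    # per-frame TrajectoryX, TrajectoryY
distance_traveled = np.linalg.norm(steps, axis=1)     # per-frame DistanceTraveled: [5, 7, 6]
integrated_distance = distance_traveled.sum()         # IntegratedDistance: 18
displacement = np.linalg.norm(positions[-1] - positions[0])   # Displacement: ~14.14
linearity = displacement / integrated_distance        # ~0.79, within [0, 1]
```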

The following object measurements are specific to the LAP tracking method:

  • LinkType: The linking method used to link the object to its parent. Possible values are

    • 0: The object was not linked to a parent.

    • 1: The object was linked to a parent in the previous frame.

    • 2: The object is linked as the start of a split path.

    • 3: The object was linked to its parent as a daughter of a mitotic pair.

    • 4: The object was linked to a parent in a frame prior to the previous frame (a gap).

    Under some circumstances, multiple linking methods may apply to a given object, e.g., an object may be both the beginning of a split path and not have a parent. However, only one linking method is assigned.

  • MovementModel: The movement model used to track the object.

    • 0: The Random model was used.

    • 1: The Velocity model was used.

    • -1: Neither model was used. This can occur under two circumstances:

      • At the beginning of a trajectory, when there is no data to determine the model as yet.

      • At the beginning of a closed gap, since a model was not actually applied to make the link in the first phase.

  • LinkingDistance: The difference between the propagated position of an object and the object to which it is matched.

    A slowly decaying histogram of these distances indicates that the search radius is large enough. A cut-off histogram is a sign that the search radius is too small.

  • StandardDeviation: The Kalman filter maintains a running estimate of the variance of the error in estimated position for each model. This measurement records the linking distance divided by the standard deviation of the error when linking the object with its parent.

    This value is multiplied by the “Number of standard deviations for search radius” setting to constrain the search distance. A histogram of this value can help determine if the “Search radius limit, in pixel units (Min,Max)” setting is appropriate.

  • GapLength: The number of frames between an object and its parent. For instance, an object in frame 3 with a parent in frame 1 has a gap length of 2.

  • GapScore: If an object is linked to its parent by bridging a gap, this value is the score for the gap.

  • SplitScore: If an object linked to its parent via a split, this value is the score for the split.

  • MergeScore: If an object linked to a child via a merge, this value is the score for the merge.

  • MitosisScore: If an object linked to two children via a mitosis, this value is the score for the mitosis.

Image measurements

  • LostObjectCount: Number of objects that appear in the previous frame but have no identifiable child in the current frame.

  • NewObjectCount: Number of objects that appear in the current frame but have no identifiable parent in the previous frame.

  • SplitObjectCount: Number of objects in the current frame that resulted from a split from a parent object in the previous frame.

  • MergedObjectCount: Number of objects in the current frame that resulted from the merging of child objects in the previous frame.
