Statistical Shape Modelling: useful terms and concepts
Throughout this course on statistical shape modelling, we use terminology that may be unfamiliar to you. This is why we have compiled this glossary of terms.
A
Active Shape Model (ASM)
A popular method for combining both shape and intensity information into a model. An ASM is usually fitted using an algorithm similar to the ICP algorithm.
Introduced in:
Further reading:
C
Conditional distribution
A distribution derived from the joint distribution of a set of random variables, where the value for a subset of the random variables is fixed.
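As an illustration (a standard result, stated here for reference): if a multivariate normal random vector is partitioned into two blocks x_1 and x_2, the conditional distribution of x_1 given x_2 = a is again normal:

```latex
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \sim
\mathcal{N}\!\left( \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix},
\begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix} \right)
\;\Longrightarrow\;
x_1 \mid x_2 = a \;\sim\; \mathcal{N}\!\left( \mu_1 + \Sigma_{12} \Sigma_{22}^{-1} (a - \mu_2),\;
\Sigma_{11} - \Sigma_{12} \Sigma_{22}^{-1} \Sigma_{21} \right)
```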
Introduced in:
Further reading:
- Conditional probability distribution – Wikipedia
- Rasmussen, C. E., Williams, C., Gaussian Processes for Machine Learning (Appendix A)
- Chuong B. Do, More on Multivariate Gaussians (Proof that the conditional distribution of a Gaussian is Gaussian)
Confidence region
A region around a point in which, with a given probability, the corresponding point of a shape from the shape family will lie.
Introduced in:
Further reading:
- Confidence region – Wikipedia
- R. Blanc et al., Confidence regions for statistical model based shape prediction from sparse observations (Advanced material)
Correlation
A measure of dependence between two random variables. Shape modelling is all about discovering and modelling the correlations that exist within a shape family.
Introduced in:
Further reading:
Correspondence
Two points defined on two different shapes of the same shape family are said to ‘correspond’ if they denote the same semantic point (e.g. the tip of the nose in the family of face shapes).
Introduced in:
Further reading:
Covariance function
Given a set of random variables, the covariance function is a (symmetric and positive semi-definite) function which specifies the covariance of any two variables in this set. In shape modelling, the covariance function k(x, x') usually determines the covariance between the random displacements at the points x and x' of a shape.
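As a minimal NumPy sketch, here is one common choice of covariance function, the squared-exponential (Gaussian) kernel; the function name and parameters are illustrative, not part of the definition:

```python
import numpy as np

def squared_exponential(x, y, s=1.0, sigma=1.0):
    """Squared-exponential (Gaussian) kernel, one common covariance function.
    s scales the variance, sigma controls the correlation length (smoothness)."""
    return s * np.exp(-np.sum((x - y) ** 2) / sigma ** 2)

# Evaluating the kernel at a set of points yields a covariance matrix,
# which is symmetric and positive semi-definite by construction.
points = np.linspace(0.0, 10.0, 5).reshape(-1, 1)
K = np.array([[squared_exponential(xi, xj) for xj in points] for xi in points])
```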
Introduced in:
Further reading:
Covariance matrix
A covariance matrix is a symmetric and positive semi-definite matrix, for which the entry (i, j) represents the covariance between the i-th and j-th entry of a random vector. The covariance matrix defines the shape of the multivariate normal distribution.
Introduced in:
Further reading:
D
Deformation field
A vector-valued function, whose value v(x) represents a deformation (or displacement) vector for the point x.
Introduced in:
F
Fitting a model
The act of finding the parameters of a statistical shape model that best explain a given surface or image.
Introduced in:
Further reading:
Free-form deformation
A free-form deformation is a model over deformation fields, where the only assumption is that the deformations vary smoothly. This is different from a statistical model, where the deformations vary according to the statistics of the shape family.
Introduced in:
G
Gaussian distribution
Another name for a normal distribution.
Gaussian Process (GP)
An extension of the multivariate normal distribution to model distributions over functions.
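A minimal sketch of how one could sample from a Gaussian Process at a finite set of points, assuming NumPy; the zero mean function and the squared-exponential covariance function are illustrative choices, not part of the definition:

```python
import numpy as np

def k(x, y, s=1.0, sigma=1.0):
    # Illustrative covariance function (squared exponential).
    return s * np.exp(-(x - y) ** 2 / sigma ** 2)

# Marginalising the GP at finitely many points yields a multivariate normal
# distribution, from which samples can be drawn directly.
xs = np.linspace(0.0, 10.0, 100)
mean = np.zeros_like(xs)                                  # zero mean function (assumption)
cov = np.array([[k(xi, xj) for xj in xs] for xi in xs])
cov += 1e-8 * np.eye(len(xs))                             # small jitter for numerical stability
sample = np.random.multivariate_normal(mean, cov)         # one sample function, evaluated at xs
```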
Introduced in:
Further reading:
Gaussian Process regression
A regression algorithm, where it is assumed that the admissible functions are modelled by a Gaussian Process, and the observations are subject to Gaussian noise.
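For reference, assuming a zero-mean prior and noisy observations y at training inputs X with noise variance σ_n², the posterior at test inputs X_* has the standard closed form (notation as in Rasmussen & Williams):

```latex
\bar{f}_* = K(X_*, X)\,\bigl[K(X, X) + \sigma_n^2 I\bigr]^{-1} y, \qquad
\operatorname{cov}(f_*) = K(X_*, X_*) - K(X_*, X)\,\bigl[K(X, X) + \sigma_n^2 I\bigr]^{-1} K(X, X_*)
```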
Introduced in:
Further reading:
I
Intensity model
A probabilistic model of the intensity values in an image.
Introduced in:
Iterative Closest Point (ICP) algorithm
An iterative algorithm for finding the best transformation between two point sets.
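A minimal sketch of a rigid ICP loop, assuming NumPy; `best_rigid_transform` is an illustrative helper implementing the standard SVD-based least-squares solution:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (points are matched row by row). Standard SVD-based solution."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp(moving, target, iterations=20):
    """Iterate: (1) match each moving point to its closest target point,
    (2) estimate the best rigid transform for these correspondences, (3) apply it."""
    current = moving.copy()
    for _ in range(iterations):
        dists = np.linalg.norm(current[:, None, :] - target[None, :, :], axis=2)
        closest = target[np.argmin(dists, axis=1)]
        R, t = best_rigid_transform(current, closest)
        current = current @ R.T + t
    return current
```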
Introduced in:
Further reading:
J
Joint distribution
A probability distribution defined jointly over a set of random variables.
Introduced in:
Further reading:
K
Karhunen-Loève expansion
A representation of a Gaussian Process in terms of a linear combination of orthogonal functions.
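In symbols, if μ is the mean function and φ_i, λ_i are the (orthonormal) eigenfunctions and eigenvalues of the covariance operator, a Gaussian Process f can be written as:

```latex
f(x) = \mu(x) + \sum_{i=1}^{\infty} \alpha_i \sqrt{\lambda_i}\, \phi_i(x),
\qquad \alpha_i \sim \mathcal{N}(0, 1) \text{ i.i.d.}
```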
Introduced in:
Further reading:
- Karhunen-Loève theorem – Wikipedia
- Seeger, Matthias, Gaussian Processes in Machine Learning (Section 5.1)
Kernel function
Another name for a covariance function. (See Covariance function.)
M
Mahalanobis distance
Given two points which follow the same probability distribution with covariance matrix Σ, the Mahalanobis distance is a distance measure which takes the variances/covariances given in Σ into account. When Σ is the identity matrix, it reduces to the Euclidean distance.
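In symbols, for points x and y:

```latex
d_M(x, y) = \sqrt{(x - y)^{T}\, \Sigma^{-1}\, (x - y)}
```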
Further reading:
Marginal distribution
The distribution we obtain when we consider only a subset of the random variables modelled by a joint distribution, without making any reference to the other variables. (Compare to conditional distribution)
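For example, for a multivariate normal distribution partitioned into blocks x_1 and x_2, the marginal distribution of x_1 is obtained by simply dropping the entries belonging to x_2:

```latex
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \sim
\mathcal{N}\!\left( \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix},
\begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix} \right)
\;\Longrightarrow\;
x_1 \sim \mathcal{N}(\mu_1, \Sigma_{11})
```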
Introduced in:
Further reading:
- Marginal distribution – Wikipedia
- Rasmussen, C. E., Williams, C., Gaussian Processes for Machine Learning (Appendix A)
Marginalisation property
The property of a Gaussian Process that, when we marginalise the Gaussian Process at any finite set of points, the resulting distribution is always a multivariate normal distribution.
Introduced in:
Further reading:
Multivariate normal distribution
The generalisation of the normal distribution to vector-valued random variables. It is fully specified by a mean vector and a covariance matrix. (See also Normal distribution.)
N
Normal distribution
A very common continuous probability distribution, which is widely used for modelling shape variations. It is also referred to as the Gaussian distribution.
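For reference, the density of the one-dimensional normal distribution with mean μ and variance σ² is:

```latex
p(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left( -\frac{(x - \mu)^2}{2\sigma^2} \right)
```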
Introduced in:
Further reading:
- Multivariate normal distribution – Wikipedia
- Chuong B. Do, The Multivariate Gaussian Distribution
- Chuong B. Do, More on Multivariate Gaussians (Proofs of main properties)
P
Point Distribution Model
A type of shape model that represents a family of shapes by specifying the probability distribution (usually a normal distribution) for a set of points that describe the surface of the shape.
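As a sketch of the usual formulation: stacking the coordinates of all points into a vector s, a Point Distribution Model represents a shape as the mean shape plus a linear combination of the principal modes of variation φ_i, where φ_i and λ_i are the eigenvectors and eigenvalues of the sample covariance matrix of the training shapes:

```latex
s = \bar{s} + \sum_{i=1}^{m} b_i\, \phi_i, \qquad b_i \sim \mathcal{N}(0, \lambda_i)
```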
Introduced in:
Posterior model
A shape model that is constrained such that it matches given observations (usually observed points on a target surface).
Introduced in:
Further reading:
Principal Component Analysis (PCA)
A method for representing the shape variations of a statistical model in terms of a set of (orthogonal) basis vectors, which are ordered according to the amount of variance they explain. Mathematically, it can be seen as a discrete version of the Karhunen-Loève expansion, where the covariance function is learned from examples.
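A minimal NumPy sketch, assuming a data matrix X whose rows are vectorised example shapes (the names and data are illustrative):

```python
import numpy as np

def pca(X):
    """PCA of the rows of X (n_examples x n_features).
    Returns the mean, the principal directions (rows of components)
    and the variance explained by each direction."""
    mean = X.mean(axis=0)
    centered = X - mean
    # SVD of the centred data; equivalent to an eigendecomposition of the
    # sample covariance matrix, but numerically more stable.
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    variances = S ** 2 / (X.shape[0] - 1)   # eigenvalues, in decreasing order
    return mean, Vt, variances

# Illustrative usage with random data standing in for vectorised shapes.
X = np.random.randn(10, 6)
mean, components, variances = pca(X)
```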
Introduced in:
Further reading:
Prior model
In the context of posterior models, we refer to the shape model as the prior model, if we want to emphasize that it is not yet constrained by any observations.
Introduced in:
Procrustes Alignment
A method for finding the optimal rigid alignment between two point clouds.
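For two point sets with known correspondences x_i ↔ y_i, the rigid variant solves (omitting the optional scaling):

```latex
\min_{R,\, t} \; \sum_{i=1}^{n} \bigl\lVert R\, x_i + t - y_i \bigr\rVert^2,
\qquad R \in SO(3),\; t \in \mathbb{R}^3
```

This least-squares problem has a closed-form solution based on the singular value decomposition of the cross-covariance matrix of the two centred point sets, as used inside the ICP sketch above.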
Introduced in:
Further reading:
Positive (semi) definiteness
A special mathematical property of matrices or kernel functions, which is needed to define a valid covariance matrix or covariance function.
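In symbols, a symmetric matrix A, respectively a kernel k, is positive semi-definite if:

```latex
v^{T} A\, v \ge 0 \;\; \text{for all } v
\qquad\text{and, respectively,}\qquad
\sum_{i=1}^{n} \sum_{j=1}^{n} c_i\, c_j\, k(x_i, x_j) \ge 0
\;\; \text{for all } n,\; x_1, \dots, x_n \text{ and } c_1, \dots, c_n \in \mathbb{R}.
```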
Introduced in:
Further reading:
- Positive definite Matrix – Wikipedia
- Rasmussen, C. E., Williams, C., Gaussian Processes for Machine Learning (Chapter 4)
R
Registration
Registration is a method to transform or warp one coordinate system into another, such that a surface defined in one coordinate system becomes most similar to a target shape. It is often used to establish correspondence between two shapes. Registration can often be formulated as a model fitting problem.
Introduced in:
Further reading:
- Tam, G.K.L., et al. Registration of 3D Point Clouds and Meshes: A Survey From Rigid to Non-Rigid
- Sotiras, A., et al. Deformable medical image registration: a survey
Regression
Regression is the problem of inferring a function (modelled by some stochastic process), given a set of (noisy) observations of this function. In shape modelling, the possible functions are defined by the shape model and the observations are usually known points on a target surface (such as landmark points).
Introduced in:
Further reading:
Rigid transformation
A rigid transformation is a transformation which combines a translation and a rotation (but not a scaling).
Introduced in:
Further reading:
S
Sampling
Sampling from a probability distribution is the task of generating concrete values (samples) of a random variable, according to the probabilities defined by the distribution.
Introduced in:
Shape
All properties of a geometric object that remain after rotation and translation have been filtered out. (Note: the most common definition of shape also requires scale to be filtered out.)
Introduced in:
Shape family
A collection of shapes of the ‘same kind’, such as the family of hand shapes, the family of human faces or the family of triangle shapes.
Introduced in:
Statistical shape model
A probabilistic model of shape variations, where the parameters of the probabilistic model have been learned from data.
Introduced in:
Further reading: