
Wrapping up: Week 4

This week we have seen how to generalize classical point distribution models by allowing the covariance function to be defined by any positive semi-definite kernel function.
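To make this idea concrete, here is a small NumPy sketch (an illustrative toy, not Scalismo code, and all parameter values are assumptions): a Gaussian (RBF) kernel, a standard positive semi-definite kernel, induces a covariance matrix on the points of a reference shape, and sampling from the resulting Gaussian process yields a smooth deformation of that reference.

```python
import numpy as np

# Reference points of a toy 1D contour (values are illustrative).
points = np.linspace(0.0, 1.0, 50)

def gaussian_kernel(x, y, sigma=0.1, scale=0.01):
    """Gaussian (RBF) kernel -- a positive semi-definite covariance function."""
    return scale * np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * sigma ** 2))

# Covariance matrix induced by the kernel on the reference points.
K = gaussian_kernel(points, points)

# A random shape instance: the reference deformed by a smooth sample
# from the zero-mean Gaussian process with covariance K.
rng = np.random.default_rng(0)
deformation = rng.multivariate_normal(np.zeros(len(points)), K)
deformed_points = points + deformation
```

Because the kernel is smooth, nearby points receive highly correlated displacements, so every sampled instance is a smooth variation of the reference.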

We experimented in Scalismo Lab with analytically defined covariance functions and their combinations, and generated various shape families. We saw, for example, that this allows us to define shape families in which the likely instances correspond to smooth deformations of the reference shape. As a particularly interesting example, we showed how combinations of kernels can be used to enforce symmetry of the shapes.
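The kernel-combination idea behind the symmetry example can be sketched in one dimension (an illustrative toy, not the Scalismo construction from the course): the sum of two kernels is again a kernel, and adding a mirrored copy of a stationary kernel produces a covariance function whose samples are symmetric functions.

```python
import numpy as np

def rbf(x, y, sigma=0.3):
    return np.exp(-(x - y) ** 2 / (2 * sigma ** 2))

def symmetric_kernel(x, y):
    # Sum of two kernels is again a kernel; the mirrored term rbf(x, -y)
    # forces every sampled function f to satisfy f(x) = f(-x).
    return rbf(x, y) + rbf(x, -y)

xs = np.linspace(-1.0, 1.0, 41)
K = symmetric_kernel(xs[:, None], xs[None, :])

rng = np.random.default_rng(1)
sample = rng.multivariate_normal(np.zeros(len(xs)), K)
# sample is (up to numerical error) an even function of xs
```

The mirrored term makes the variance of f(x) - f(-x) exactly zero, so the asymmetric part of every sample vanishes; in the course the same trick is applied to 3D deformation fields with a mirroring of the reference domain.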

We have already discussed one situation where these analytically defined covariance functions, and the shape models built from them, are useful: when the number of example shapes available for learning a shape family is too small, they let us enlarge the shape variability of the model. Another important situation where such analytically defined priors are useful is as prior knowledge in registration methods.
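The first situation can be sketched as follows (illustrative data and parameter values, not the course's Scalismo models): with only a handful of example deformations, the sample covariance is rank-deficient, and adding an analytic Gaussian kernel enlarges the variability the model can express.

```python
import numpy as np

rng = np.random.default_rng(2)
xs = np.linspace(0.0, 1.0, 30)

# Five hypothetical example deformations of the reference (rows) --
# far too few to span all plausible variations of the shape.
examples = rng.normal(size=(5, 1)) * np.sin(2 * np.pi * xs)[None, :]

# Sample covariance learned from the examples: necessarily rank-deficient.
K_sample = np.cov(examples, rowvar=False)

# Analytic Gaussian kernel contributing smooth extra variability.
K_rbf = 0.05 * np.exp(-(xs[:, None] - xs[None, :]) ** 2 / (2 * 0.2 ** 2))

# Sums of positive semi-definite kernels are positive semi-definite,
# so the combination again defines a valid Gaussian process model.
K_combined = K_sample + K_rbf
```

The combined model still prefers deformations resembling the examples, but no longer assigns zero probability to everything outside their span.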

As we will see in Week 6, using such priors will allow us to reformulate registration as the problem of fitting a shape model. Before explaining how this is done, however, we will first discuss Gaussian Process regression. Like all the other methods we will encounter in the following weeks, Gaussian Process regression works for any Gaussian Process model, independently of whether the covariance function is learned from data or defined analytically.
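As a preview of what Gaussian Process regression computes, the following NumPy sketch applies the standard GP posterior equations to a toy 1D function; the kernel, data, and noise level are illustrative assumptions, not course material, and exactly the same equations apply whether the kernel was learned or defined analytically.

```python
import numpy as np

def rbf(a, b, sigma=0.3):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma ** 2))

# Toy observations of the unknown function (think: known displacements
# at a few landmarks) and points where we want predictions.
x_obs = np.array([0.1, 0.5, 0.9])
y_obs = np.array([0.2, -0.1, 0.3])
x_new = np.linspace(0.0, 1.0, 20)

noise = 1e-4  # assumed observation noise variance

# Standard Gaussian process regression equations.
K_oo = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
K_no = rbf(x_new, x_obs)
K_nn = rbf(x_new, x_new)

alpha = np.linalg.solve(K_oo, y_obs)
post_mean = K_no @ alpha                                   # posterior mean
post_cov = K_nn - K_no @ np.linalg.solve(K_oo, K_no.T)     # posterior covariance
```

The posterior mean interpolates the observations (up to the noise level), and the posterior covariance shrinks near them while staying close to the prior far away.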


This article is from the free online course:

Statistical Shape Modelling: Computing the Human Anatomy

University of Basel
