Wrapping up: Week 3
With this week’s topic, we have now covered all the concepts needed to understand how shape families can be modelled.
Last week we discussed how shape families can be defined using Gaussian Processes; this week we focused on turning this conceptual formulation into a practical one. We saw two possibilities:
- Using the marginalization property to obtain a discrete representation of the Gaussian Process,
- Representing the process in terms of the leading basis functions of its Karhunen-Loève expansion.
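To make the marginalization property concrete, here is a minimal NumPy sketch (not the Scalismo API, which is written in Scala): evaluating a Gaussian Process at finitely many points turns it into an ordinary multivariate normal distribution, from which we can sample directly. The squared-exponential kernel and all parameters here are illustrative assumptions, not the course's actual model.

```python
import numpy as np

# Hypothetical 1-D Gaussian Process: zero mean, squared-exponential covariance.
def mean(x):
    return np.zeros_like(x)

def cov(x, y, sigma2=1.0, ell=0.5):
    return sigma2 * np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * ell ** 2))

# Marginalization: restricting the GP to finitely many points yields an
# ordinary multivariate normal with mean vector mu and covariance matrix K.
points = np.linspace(0.0, 1.0, 20)
mu = mean(points)
K = cov(points, points)
K += 1e-9 * np.eye(len(points))  # small jitter for numerical stability

rng = np.random.default_rng(0)
# Draw a discrete sample of the process at the chosen points.
sample = rng.multivariate_normal(mu, K)
print(sample.shape)  # (20,)
```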
This latter possibility is the one usually used in shape modelling applications. It not only provides us with a mathematically convenient parametric representation, which is independent of the discretization of the reference shape, but it also identifies the main modes of variation. Thus, it allows us to study the most important shape differences between members of the shape family.
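The low-rank representation can be sketched in the same illustrative NumPy setting: the eigenpairs of a discretized covariance play the role of the KL basis functions and variances, and a sample is parametrized by a handful of coefficients for the leading modes. The kernel and its parameters are again assumptions for illustration only.

```python
import numpy as np

# Sketch of a truncated Karhunen-Loève representation on a discretized domain.
points = np.linspace(0.0, 1.0, 50)
K = np.exp(-(points[:, None] - points[None, :]) ** 2 / (2 * 0.2 ** 2))

# Eigendecomposition of the covariance matrix; eigh returns ascending order,
# so we reverse to get the leading modes first.
eigvals, eigvecs = np.linalg.eigh(K)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

r = 5  # keep only the r leading modes of variation
lam = np.maximum(eigvals[:r], 0.0)  # clip tiny negative values from round-off
phi = eigvecs[:, :r]

# A sample is parametrized by r standard-normal coefficients alpha:
rng = np.random.default_rng(1)
alpha = rng.standard_normal(r)
sample = phi @ (np.sqrt(lam) * alpha)  # zero-mean process in this sketch

# The leading modes capture most of the total variance:
explained = eigvals[:r].sum() / eigvals.sum()
print(round(explained, 3))
```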
The technique most commonly used in the context of statistical shape models is Principal Component Analysis (PCA), which we have introduced here as a special case of the Karhunen-Loève (KL) expansion. Understanding the more general, but also slightly more involved, concept of the KL expansion will help us in the coming weeks. In contrast to PCA, the expansion holds for arbitrary Gaussian Process models, and we will use it to derive shape models from covariance functions that are not necessarily learned from example data.
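As a hedged illustration of PCA being a KL expansion whose covariance is estimated from examples, the following NumPy sketch recovers the principal components of a synthetic "training set" via the SVD of the centered data matrix. The data, dimensions, and coefficient values are invented purely for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training set: n shapes, each a flattened vector of d coordinates,
# generated with two dominant directions plus small noise (for illustration).
n, d = 30, 10
basis = rng.standard_normal((d, 2))
data = rng.standard_normal((n, 2)) @ basis.T * 5 + rng.standard_normal((n, d)) * 0.1

mean_shape = data.mean(axis=0)
X = data - mean_shape

# PCA = KL expansion with the covariance estimated from examples; the SVD of
# the centered data matrix yields the same principal components.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
pca_variances = s ** 2 / (n - 1)  # eigenvalues of the sample covariance
components = Vt                   # principal directions (the KL basis)

# A new shape is the mean plus a weighted combination of the leading components:
coeffs = np.array([1.0, -0.5])
new_shape = mean_shape + coeffs @ (np.sqrt(pca_variances[:2])[:, None] * components[:2])
print(new_shape.shape)  # (10,)
```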
We also continued our exploration of Gaussian Processes using Scalismo Lab. We started by learning how to sample random deformation fields from a Gaussian Process, applying them to our reference shape to obtain random face shapes. We then built our first statistical shape model by applying PCA to a dataset of faces. This dataset had already been brought into correspondence for us; in the coming weeks we will learn how to perform this step in Scalismo Lab.
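The workflow of sampling a deformation field and warping the reference can also be sketched in NumPy. The real Scalismo Lab code is in Scala and operates on face meshes; the circle "shape", the scalar kernel, and the independently sampled x- and y-components are stand-in assumptions for this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

# Reference shape: points on a circle in 2-D (a stand-in for a face mesh).
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
reference = np.stack([np.cos(t), np.sin(t)], axis=1)  # shape (40, 2)

# Scalar squared-exponential kernel over the reference points; the x- and
# y-components of the deformation are sampled independently here (an assumed,
# isotropic model, not the course's actual kernel).
d2 = ((reference[:, None, :] - reference[None, :, :]) ** 2).sum(-1)
K = 0.05 * np.exp(-d2 / (2 * 0.5 ** 2))
K += 1e-9 * np.eye(len(t))  # jitter for numerical stability

# Sample a deformation field u at the reference points ...
u = np.stack(
    [rng.multivariate_normal(np.zeros(len(t)), K) for _ in range(2)], axis=1
)
# ... and warp the reference: each point x is mapped to x + u(x).
random_shape = reference + u
print(random_shape.shape)  # (40, 2)
```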
© University of Basel