[0:07] Welcome back. The goal in this tutorial video is to understand how to find correspondence between a statistical shape model and an image, which in our case will be a 3D medical image, and also to understand how this relates to the notion of intensity models. So let's start by loading the image and the model that we would like to fit together, and also display them in the 3D scene. Here you now see our statistical model, which is rather well rigidly aligned with our 3D image.
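
A rough sketch of this loading step in code, following the Scalismo Lab style API used in the course (the file names are placeholders, and the IO helpers may be named differently in newer Scalismo releases):

```scala
import java.io.File
import scalismo.io.{ImageIO, StatismoIO}

// Needed when running outside Scalismo Lab: loads the native libraries.
scalismo.initialize()

// Placeholder file names; the actual paths depend on the course data set.
val model = StatismoIO.readStatismoMeshModel(new File("datasets/faceModel.h5")).get
val targetImage = ImageIO.read3DScalarImage[Short](new File("datasets/targetImage.nii")).get

// In Scalismo Lab both objects can then be displayed in the 3D scene,
// e.g. show(model, "model") and show(targetImage, "image").
```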

[0:38] But if I now go to this slice view and make the contour of my model a bit more visible, then hopefully you immediately see that we have a rather poor fit here: the contour of the face is going way too deep inside the head. The whole goal of performing this fitting operation and finding correspondences is to find an instance of this model where this green contour nicely follows the contour of the image that you see here. So let's keep things simple for now and focus on finding the correspondence for a single point.

[1:18] Here, I'm interested in locating the tip of the nose in my image. Since I have a statistical shape model, I can locate the position of the tip of the nose in different sample faces, or also marginalise the model over the tip of the nose. This is what I'm doing in this bit of code. I start by specifying the identifier of the point on the tip of the nose, and then I marginalise my model over this single identifier. What this gives me is a statistical mesh model that is a shape model of a single point. After that, I simply loop for 200 iterations.

[1:54] At every iteration, I take one sample out of this marginal model, which gives me a point cloud of 200 tips of the nose that I then simply display in my scene. If I now go back to my 3D scene, and maybe make the model invisible, hopefully you see that we have a nice overlap of these candidate positions with our target image.
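
A minimal sketch of the marginalisation and sampling just described. The point id below is a placeholder for the real nose-tip identifier, and mesh.point together with the parameterless sample follow the older API used in the course; newer Scalismo versions use mesh.pointSet.point and require an implicit random source for sampling:

```scala
import scalismo.common.PointId
import scalismo.geometry.{Point, _3D}

// Hypothetical point id; the real identifier of the nose tip depends on the model.
val noseTipId = PointId(8156)

// Marginalising the shape model over this single point id yields a shape model
// that is defined over just that one point.
val noseTipMarginal = model.marginal(IndexedSeq(noseTipId))

// Draw 200 samples from the marginal model. Each sample consists of a single point
// (with id 0 in the marginal model), so together they form a point cloud of
// candidate nose-tip positions.
val candidatePoints: IndexedSeq[Point[_3D]] =
  (1 to 200).map(_ => noseTipMarginal.sample.point(PointId(0)))
```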

[2:18] You can maybe notice that some of these candidates, one might say, are good candidates for the tip of the nose in this image when looking at the intensities, while others are rather far outside, and others are far inside, and would be considered bad candidates. The whole idea of using intensity models is to be able to discriminate between these candidate positions and evaluate which ones are more fit to be the tip of the nose in this image. So how can we build such an intensity model? Well, you know by now that to build models, we need data in correspondence.

[2:57] So what I'm doing here is first cleaning up my scene a bit, and then loading a set of faces in correspondence that are stored in this directory. Here, I'm loading four faces and then simply displaying the first face in my list in the scene; this is the face that you see now. In addition to this data set of faces in correspondence, I also have a set of MRI images that actually fit to these faces.

[3:25] So I'm also reading the corresponding set of MRI images, which are stored in this directory, and then displaying the first element of this list of MRI images. If I now visualise this in the 3D scene, you see that our face and image are aligned. And if I go to the slice view again, and colour my face a bit more distinctively, hopefully you now see what I meant by having a set of images that actually fit to our faces.
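
Continuing the sketch, loading the two matching data sets might look roughly like this (the directory names are placeholders):

```scala
import scalismo.io.MeshIO

// Placeholder directories for the example data set of four faces and their MRI images.
val faces = new File("datasets/faces/").listFiles.sortBy(_.getName)
  .map(f => MeshIO.readMesh(f).get).toIndexedSeq

val mris = new File("datasets/mris/").listFiles.sortBy(_.getName)
  .map(f => ImageIO.read3DScalarImage[Short](f).get).toIndexedSeq

// Display the first face and the first MRI to check that they fit together,
// e.g. show(faces(0), "face_0") and show(mris(0), "mri_0") in Scalismo Lab.
```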

[3:57] You can notice, if I scroll down, that the contour of the face nicely follows the contour of our image. This is what I meant by saying that the elements of this data set fit to each other, and this is the type of result that we would actually like to achieve by fitting our statistical model to our target image. What we know from having a data set of faces in correspondence is that if we're interested in the position of a particular point, we can locate it in every face of our data set. And this is what I'm doing here now for the tip of the nose.

[4:32] I first locate its position on the first face in my list; this is the face that I displayed in the scene. Then I add it as a landmark to this face in the scene, and you can see it appearing here. This is the landmark that I added, which I can now centre on and perhaps make a bit more visible by making it thicker. So this is what we know already: by having data in correspondence, we can locate the point on the surface of the mesh, and this is how we locate it on the contour that we see displayed over the image.
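
A sketch of this landmark step; the landmark id and the addLandmarksTo call are only indicative of the Scalismo Lab conventions:

```scala
import scalismo.geometry.Landmark

// Since all faces are in correspondence, the same (placeholder) point id identifies
// the nose tip on every face. Here we look it up on the first face of the data set.
val noseTipOnFace0 = faces(0).point(noseTipId)

// Wrap the position as a landmark so that it can be added to the scene,
// e.g. addLandmarksTo(Seq(noseTipLandmark), "face_0") in Scalismo Lab.
val noseTipLandmark = Landmark("noseTip", noseTipOnFace0)
```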

[5:11] But let me now make this invisible to make things a bit clearer. The fact that we have a data set of images that fit well to our faces also gives us something else: I can now locate the tip of the nose in this MRI image as well. I can then read the intensity value at the tip of the nose in this image, and I can do this for all four images that I have in my data set. This is how I can collect a set of intensity values for the tip of the nose from my data set and build an intensity model.

[5:47] So let's see how we can do this in code. What I start by doing here is interpolating my MRI images. I'm looping over my list of MRIs and calling this interpolate method with a cubic spline interpolation parameter. The reason I'm doing this is that I want to evaluate the intensity value at this exact point position. The MRI that you see here is a discrete image that is defined over a discrete grid, and this exact point position might not necessarily be a point that happens to lie on the grid.
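
A one-line sketch of this interpolation step; in the Scalismo version used in the course, interpolate takes the B-spline degree directly, while newer releases expect an interpolator object instead:

```scala
// Turn each discrete MRI into a continuous image by cubic (degree 3) B-spline
// interpolation, so it can be evaluated at arbitrary positions, not only at grid points.
val continuousMRIs = mris.map(mri => mri.interpolate(3))
```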

[6:24] This is why I need a continuous image in order to be able to evaluate the image at that position, and hence the interpolation. After that, I zip my face and MRI data together, which gives me an indexed sequence of tuples where, for every tuple, I have a face and its corresponding image nicely wrapped together. What I do then is exactly what I did in the previous step.

[6:53] I simply locate the 3D point position of the tip of the nose on the face, and then I evaluate my continuous MRI image at that particular position, which gives me back a float intensity value. To finish, I just wrap this intensity value in a one-dimensional dense vector, which gives me an indexed sequence of nose-tip intensities over my four MRI images. With this, I can now build my intensity model, which in this case I choose to be a normal distribution, a scalar-valued normal distribution.
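
Continuing from the sketches above, collecting the nose-tip intensities could look roughly like this:

```scala
import breeze.linalg.DenseVector

// Pair each face with its corresponding continuous MRI image.
val facesWithMRIs = faces.zip(continuousMRIs)

// For every pair: locate the nose tip on the face, evaluate the continuous image
// at that position, and wrap the resulting intensity in a 1-dimensional dense vector.
val tipIntensities: IndexedSeq[DenseVector[Double]] = facesWithMRIs.map {
  case (face, mri) =>
    val tipPosition = face.point(noseTipId)
    val intensity = mri(tipPosition)
    DenseVector(intensity.toDouble)
}
```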

[7:37] I can do this in Scalismo by calling this estimate-from-data method, which is a method of the multivariate normal distribution object, and simply feeding the collected data values to this method. What this does is estimate a normal distribution that fits this data, and this is now my intensity model. So now that we have our intensity model, we can go back to our initial goal of evaluating the fitness of our candidate positions for the tip of the nose. This is what I'm doing in this bit of code here.
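
The intensity model itself is then estimated from the collected values, using the estimateFromData method mentioned in the video:

```scala
import scalismo.statisticalmodel.MultivariateNormalDistribution

// A 1-dimensional (hence scalar-valued) normal distribution over nose-tip intensities.
val noseTipIntensityModel = MultivariateNormalDistribution.estimateFromData(tipIntensities)
```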

[8:13] I start here again by interpolating my target MRI image to obtain a continuous image, since I wish to evaluate this image at the candidate point positions, which again might not necessarily be grid points. After doing that, I loop over my candidate points. For every point p of my candidates, I evaluate my target image and retrieve the intensity value at that particular position. And then comes the important part of evaluating the fitness: given this intensity value at this candidate position, I evaluate its Mahalanobis distance to the intensity model.

[8:55] This really gives me a measure of how similar the observed intensity at the candidate is to a typical intensity that you would have for a tip of the nose in an MRI. In this particular case of a scalar-valued normal distribution, this is simply the distance to the mean intensity at the tip of the nose, normalised by the standard deviation of the distribution. So let me now execute this code.
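
A sketch of the evaluation loop described above, continuing from the earlier code:

```scala
// The target image also needs to be continuous, since the candidate positions
// are generally not grid points.
val continuousTarget = targetImage.interpolate(3)

// For every candidate nose-tip position, read the intensity in the target image and
// compute its Mahalanobis distance to the intensity model. For a scalar-valued normal
// distribution this is |intensity - mean| / standard deviation.
val candidateFitness: IndexedSeq[Double] = candidatePoints.map { p =>
  val intensity = continuousTarget(p)
  noseTipIntensityModel.mahalanobisDistance(DenseVector(intensity.toDouble))
}
```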

[9:26] Now that we have this Mahalanobis distance per point, which is, again, the fitness of every candidate point, I can visualise it in my 3D scene. I can do this in Scalismo by creating a discrete scalar field; this is the class for point clouds that have a scalar value associated with each point. I create such a field by first creating a domain from my candidate points, for which I actually want to visualise the fitness, and then associating with every point, as a value, its Mahalanobis distance to the nose-tip intensity model.
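
A sketch of this visualisation step; the exact constructors of DiscreteScalarField and of the point domain differ between Scalismo versions, so treat the calls below as indicative only:

```scala
import scalismo.common.{DiscreteScalarField, UnstructuredPointsDomain}

// A discrete scalar field associates one scalar value with every point of a point cloud:
// here, the Mahalanobis distance of each candidate to the nose-tip intensity model.
val fitnessField = DiscreteScalarField(
  UnstructuredPointsDomain[_3D](candidatePoints),
  candidateFitness
)

// In Scalismo Lab the field can then be shown colour-coded in the scene,
// e.g. show(fitnessField, "noseTipFitness").
```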

[10:06] If I now execute this code, you see that our point cloud is displayed on our mesh with a colour code. Every one of these circles that you see here is a candidate position for the tip of the nose, so one of the points of the cloud that we displayed previously, and the colour is the Mahalanobis distance to the intensity model of the tip of the nose. Here, the redder the colour, the lower the distance, which means the better the candidate, at least according to the intensity model. So the points that we need to look for are the red ones.

[10:52] You see that at positions one might call good positions in the image, we do retrieve some red candidates, some good candidates according to our intensity model. But you also have a lot of false positives here: candidates that would be good according to the intensity model but, when looking at the image, are clearly not a good fit.

[11:20] The reason we have this is that we have a rather simplistic intensity model that takes into consideration only the intensity at a single point. If we instead leverage the homogeneous neighbourhood that we have in this region, and build an intensity model based on a region around every point, we would actually have a chance of filtering out these false positives.
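
One possible way to realise this idea, sketched here purely as an illustration: probe the image at a few fixed offsets around each point and model the resulting feature vector with a multivariate normal distribution. The offsets and the helper function below are hypothetical, Vector follows the older API (newer versions name it EuclideanVector), and the companion document may construct its region-based model differently, for example with profiles along the surface normal:

```scala
import scalismo.geometry.Vector
import scalismo.image.ScalarImage

// Hypothetical probe offsets (in mm) around a point. Only three probes are used so that
// the covariance estimated from just four training images stays invertible; with more
// training data one would probe a denser region.
val offsets = IndexedSeq(Vector(0f, 0f, 0f), Vector(3f, 0f, 0f), Vector(-3f, 0f, 0f))

// Hypothetical helper: intensity feature vector of a point in a continuous image.
def intensityFeature(image: ScalarImage[_3D], p: Point[_3D]): DenseVector[Double] =
  DenseVector(offsets.map(o => image(p + o).toDouble).toArray)

// Build the region-based intensity model from the training pairs ...
val tipFeatures = facesWithMRIs.map { case (face, mri) =>
  intensityFeature(mri, face.point(noseTipId))
}
val regionIntensityModel = MultivariateNormalDistribution.estimateFromData(tipFeatures)

// ... and rank the candidates with it, exactly as before.
val regionFitness = candidatePoints.map { p =>
  regionIntensityModel.mahalanobisDistance(intensityFeature(continuousTarget, p))
}
```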

[11:53] So I now invite you to check this out in the companion tutorial document for this video, and to give it a try yourself.

Finding correspondences in an image

What to do when given a model and an image to fit? Here we familiarise ourselves with the notion of intensity models in Scalismo.

We then learn how to build one and use it to localise the best candidate positions for the tip of the nose in a target Magnetic Resonance Imaging (MRI) image.

Each tutorial video is followed by a companion document that you will find in the following Scalismo Lab step.

This video is from the free online course:

Statistical Shape Modelling: Computing the Human Anatomy

University of Basel