Ph.D. course on advanced shape analysis - Spring 2005

Rasmus Larsen

This course is taught as a regular Ph.D. course under the DTU course 029?? "Advanced Topics in Image Analysis, Computer Graphics and Geoinformatics". The first lecture will be given on

Monday, February 28th, from 1pm to 3pm.

At this first lecture the participants will decide the schedule for the remainder of the course.


Deformable template modelling is an important part of understanding complex patterns in images. Ulf Grenander's seminal work on 2D deformable template modelling (Grenander, Chow & Keenan, 1991) was widely popularised at the end of the 1990s by the work of Cootes and Taylor (Cootes, Taylor, Cooper & Graham, 1995; Cootes, Edwards & Taylor, 2001; Stegmann, Ersbøll & Larsen, 2003), who formulated linear models for shape variability estimated from annotated training data. Ramsay and Silverman (1997) presented seminal work on functional representations of curves.
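
As a concrete illustration of such linear models, the Matlab sketch below estimates a point distribution model by principal component analysis of aligned landmark shapes. It is a minimal sketch under assumed conventions (matrix layout, variable names are illustrative), not code from the course material.

    % Minimal sketch of a linear (point distribution) shape model.
    % Assumes X is an n-by-2k matrix of n aligned training shapes, each
    % row holding the k landmark coordinates (x1..xk, y1..yk) of a shape.
    function [mu, P, b] = linear_shape_model(X, t)
      mu = mean(X, 1);                        % mean shape (1-by-2k)
      Xc = X - repmat(mu, size(X, 1), 1);     % centred training shapes
      [U, S, V] = svd(Xc, 'econ');            % principal axes of variation
      P = V(:, 1:t);                          % first t modes of variation
      b = Xc * P;                             % shape parameters per example
      % A new shape is synthesised as mu + b_new*P' for a small parameter
      % vector b_new, typically constrained to plus/minus 3 std. deviations.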

However, in many situations linear models are likely to fail to accurately model natural phenomena. For instance, Kendall's shape space for triangles in the plane - the simplest shape imaginable - is a sphere in R^3, so linear approximations to the triangle shape space are only valid for low variability in triangle shape. Handling the non-linear, large-scale variations that occur in nature requires new models. Developments in statistics and machine learning in the new millennium have led to methods for parameterising low-dimensional manifolds embedded in high-dimensional spaces, and these developments form the basis for formulating non-linear shape space models. Examples include principal curves (Hastie & Stuetzle, 1989); ISOMAP (Tenenbaum, de Silva & Langford, 2000); Local Linear Embedding (LLE) (Roweis & Saul, 2000); Laplacian Eigenmaps (Belkin & Niyogi, 2002); and Hessian Eigenmaps (Donoho & Grimes, 2003).
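
To give a flavour of these manifold learning methods, the Matlab sketch below implements the basic ISOMAP procedure (Tenenbaum et al., 2000): a k-nearest-neighbour graph, Floyd-Warshall shortest paths as geodesic distance estimates, and classical multidimensional scaling of the result. It is a minimal sketch assuming a connected neighbourhood graph; all names and parameters are illustrative.

    % Minimal ISOMAP sketch. X is an n-by-d data matrix; k is the number
    % of neighbours; m is the embedding dimension. Assumes the k-nearest-
    % neighbour graph is connected (otherwise G retains Inf entries).
    function Y = isomap_sketch(X, k, m)
      n = size(X, 1);
      d2 = sum(X.^2, 2);                       % squared row norms
      D = sqrt(max(d2*ones(1,n) + ones(n,1)*d2' - 2*(X*X'), 0));
      [Dsort, idx] = sort(D, 2);               % neighbours by distance
      G = repmat(Inf, n, n);                   % graph distance matrix
      for i = 1:n
        G(i, idx(i, 2:k+1)) = Dsort(i, 2:k+1); % keep k nearest neighbours
      end
      G = min(G, G');                          % symmetrise the graph
      G(1:n+1:end) = 0;                        % zero self-distances
      for l = 1:n                              % Floyd-Warshall shortest paths
        G = min(G, G(:,l)*ones(1,n) + ones(n,1)*G(l,:));
      end
      H = eye(n) - ones(n)/n;                  % centring matrix
      B = -0.5 * H * (G.^2) * H;               % classical MDS on geodesics
      [V, E] = eig((B + B')/2);                % symmetric eigendecomposition
      [ev, order] = sort(diag(E), 'descend');
      Y = V(:, order(1:m)) * diag(sqrt(max(ev(1:m), 0)));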

Furthermore, natural phenomena can often be explained by a small set of underlying parameters. This property has been exploited for many years in statistics, e.g. in factor rotation (Harman, 1967) for easier interpretation. In recent years, sparsity has been used as a design criterion to overcome the problem of the dimensionality of the measurements vastly exceeding the number of available observations. Mathematically, sparsity is invoked by placing an L0 penalty on the parameters; however, the resulting combinatorial problem is computationally intractable. Fortunately, in many situations the L1 penalty - for which computationally feasible solutions are available - works as a proxy for the L0 penalty, as is used for instance in LASSO and LARS regression (Tibshirani, 1996; Efron, Hastie, Johnstone & Tibshirani, 2004).
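
To illustrate the effect of the L1 penalty, the Matlab sketch below solves the LASSO problem by cyclic coordinate descent with soft thresholding. This simple solver is chosen here for clarity and is not the LARS algorithm of Efron et al.; the standardisation assumptions are stated in the comments.

    % Minimal LASSO sketch: minimise (1/(2n))*||y - X*beta||^2 + lambda*||beta||_1
    % by cyclic coordinate descent with soft thresholding. Assumes X is n-by-p
    % with standardised columns (zero mean, X(:,j)'*X(:,j) = n) and y centred.
    function beta = lasso_sketch(X, y, lambda, niter)
      [n, p] = size(X);
      beta = zeros(p, 1);
      for it = 1:niter                        % fixed number of full sweeps
        for j = 1:p
          r = y - X*beta + X(:,j)*beta(j);    % residual excluding column j
          z = X(:,j)' * r / n;                % univariate least-squares fit
          beta(j) = sign(z)*max(abs(z) - lambda, 0);  % soft threshold
        end
      end

Increasing lambda drives more coefficients exactly to zero, which is the sparsity property discussed above; lambda = 0 recovers ordinary least squares.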

In this course we will read and discuss a series of articles and book chapters concerning the above mentioned subjects.

  1. Representing functional data as smooth functions - (Ramsay & Silverman, 1997, Chap. 2)

  2. The registration and display of functional data - (Ramsay & Silverman, 1997, Chap. 5)

  3. Principal components analysis for functional data I - (Ramsay & Silverman, 1997, Chap. 6)

  4. Principal components analysis for functional data II - (Ramsay & Silverman, 1997, Chap. 7)

  5. Principal curves - (Hastie & Stuetzle, 1989)

  6. ISOMAP - (Tenenbaum et al., 2000)

  7. Local Linear Embedding (LLE) - (Roweis & Saul, 2000)

  8. Laplacian Eigenmaps - (Belkin & Niyogi, 2002)

  9. Hessian Eigenmaps - (Donoho & Grimes, 2003)

DATA

Implementations

Course evaluation is based on completion of mandatory exercises using Matlab and S-PLUS; the software is available at the following internet addresses:

References

  1. Belkin, M., & Niyogi, P. (2002). Laplacian eigenmaps and spectral techniques for embedding and clustering. In T. G. Dietterich, S. Becker, & Z. Ghahramani (Eds.), Advances in neural information processing systems (Vol. 14). Cambridge, MA: MIT Press.

  2. Cootes, T. F., Edwards, G. J., & Taylor, C. J. (2001). Active appearance models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(6), 681–685.

  3. Cootes, T. F., Taylor, C. J., Cooper, D. H., & Graham, J. (1995, January). Active shape models – their training and application. Computer Vision and Image Understanding, 61(1), 38–59.

  4. Donoho, D. L., & Grimes, C. (2003). Hessian eigenmaps: locally linear embedding techniques for high-dimensional data. Proceedings of the National Academy of Sciences, 100 (10), 5591–5596.

  5. Efron, B., Hastie, T., Johnstone, I., & Tibshirani, R. (2004). Least angle regression. Annals of Statistics, 32(2), 407–499.

  6. Grenander, U., Chow, Y., & Keenan, D. M. (1991). Hands: A pattern theoretic study of biological shapes. Springer Verlag. (128 pp.)

  7. Harman, H. H. (1967). Modern factor analysis (second ed.). Chicago: The University of Chicago Press. (474 pp.)

  8. Hastie, T., & Stuetzle, W. (1989). Principal curves. Journal of the American Statistical Association, 84, 502–516.

  9. Ramsay, J. O., & Silverman, B. W. (1997). Functional data analysis. New York: Springer Verlag.

  10. Roweis, S. T., & Saul, L. K. (2000). Nonlinear dimensionality reduction by locally linear embedding. Science, 290, 2323–2326.

  11. Stegmann, M. B., Ersbøll, B. K., & Larsen, R. (2003, October). FAME – a flexible appearance modelling environment. IEEE Transactions on Medical Imaging, 22(10), 1319–1331.

  12. Tenenbaum, J. B., de Silva, V., & Langford, J. C. (2000). A global geometric framework for nonlinear dimensionality reduction. Science, 290, 2319–2323.

  13. Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1), 267–288.