Accurate reconstruction of face shape is important for applications such as tele-presence and gaming. The reconstruction problem can be solved efficiently, even in the presence of noise, with the help of statistical shape models that constrain the space of plausible reconstructions.
This talk presents an approach that robustly computes correspondences between a large set of facial motion sequences in a fully automatic way, using a multilinear model as a statistical prior. This motion sequence registration yields a compact representation of each sequence: a single coefficient vector for identity and a high-dimensional curve of coefficients for expression. Based on this representation, new motion sequences can be synthesized for static input face scans.
The talk further discusses a statistical model of 3D human faces under varying expressions that decomposes the facial surface with a wavelet transform and learns many localized, decorrelated multilinear models on the resulting coefficients. The localized, multi-scale nature of this model allows recovery of fine-scale detail while remaining robust to severe noise and occlusion, and it is computationally efficient and scalable.