clmtrackr.js

JavaScript library for fitting facial models to faces in images and video


Library Reference

clmtrackr is a JavaScript library for fitting facial models to faces in images and video. It can be used to get precise positions of facial features in an image, or to track faces precisely in video.

[Video: clmtrackr tracking a face in the "talking face" video]

The facial models included in the library all follow the same annotation:

[Figure: the point annotation used by the facial models]

Once started, clmtrackr will try to detect a face in the given element. If a face is found, clmtrackr starts fitting the facial model, and the positions of the fitted points can be retrieved via getCurrentPosition().

The fitting algorithm is based on a paper by Jason Saragih & Simon Lucey. The models are trained on annotated data from the MUCT database plus some self-annotated images.

Basic usage

Initialization:

var ctracker = new clm.tracker();
ctracker.init();

Starting tracking:

ctracker.start(videoElement);

Getting the points of the currently fitted model:

var positions = ctracker.getCurrentPosition();

Drawing the currently fitted model on a given canvas:

var drawCanvas = document.getElementById('somecanvas');
ctracker.draw(drawCanvas);
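Putting the pieces above together, a minimal tracking page could look like the following sketch. The element ids and the use of requestAnimationFrame are assumptions for illustration; since getCurrentPosition() returns false until a model has been fitted, the loop checks the return value before drawing:

var video = document.getElementById('inputVideo');
var canvas = document.getElementById('drawCanvas');
var cc = canvas.getContext('2d');

var ctracker = new clm.tracker();
ctracker.init();
ctracker.start(video);

function drawLoop() {
  requestAnimationFrame(drawLoop);
  // clear the previous frame's drawing before drawing the current fit
  cc.clearRect(0, 0, canvas.width, canvas.height);
  // getCurrentPosition() returns false until a model has been fitted
  if (ctracker.getCurrentPosition()) {
    ctracker.draw(canvas);
  }
}
drawLoop();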

Functions

These are the functions that the clm.tracker object exposes:

init([model]) : initializes the tracker, optionally with one of the facial models described under "Models" below
start(element) : starts detection and tracking of a face in the given video, canvas or image element
stop() : stops the tracking
reset() : resets the tracker, so that it starts over with face detection
getCurrentPosition() : returns the positions of the points in the currently fitted model, as an array of [x, y] coordinates, or false if no model has been fitted yet
draw(canvas) : draws the currently fitted model on the given canvas
setResponseMode(type, list) : sets which types of responses are used when fitting the model (see "Responses" below)

Responses

When fitting the model, we calculate, in a region around each point, the likelihood that the true point lies at each position. These likelihoods are called the responses. clmtrackr includes three different types of responses: "raw", which is based on SVM regression of the grayscale patches; "sobel", which is based on SVM regression of the Sobel gradients of the patches, making it more sensitive to edges; and "lbp", which is based on SVM regression of local binary patterns calculated from the patches. "raw" is the fastest type to calculate, since it does no preprocessing of the patches, but it may be slightly less precise than "lbp" or "sobel". By default, clmtrackr uses only the "raw" type of response, but you can switch to the other types to increase precision via the function setResponseMode above.

Additionally, there are modes that combine the different types of responses. By default, clmtrackr uses only a single type of response, but you can try to improve tracking by either blending or cycling the different types. When blending, clmtrackr calculates all the response types given in the list and averages them; since multiple responses are then calculated per iteration, tracking becomes slower. When cycling, clmtrackr cycles through the response types in the list, calculating only one type per iteration; tracking is then not much slower than with a single response type, but the fitted model may "jitter" due to disagreement between the different types of responses.
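As a sketch of how these modes are selected, assuming setResponseMode takes a mode string ("single", "blend" or "cycle") followed by an array of the response types described above:

// use only sobel responses
ctracker.setResponseMode("single", ["sobel"]);
// calculate both raw and lbp responses each iteration and average them
ctracker.setResponseMode("blend", ["raw", "lbp"]);
// alternate between the types, calculating one of them per iteration
ctracker.setResponseMode("cycle", ["raw", "sobel", "lbp"]);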

Try out the different response modes in this example.

Parameters

When initializing the object clm.tracker, you can optionally specify some object parameters, for instance:

var ctracker = new clm.tracker({searchWindow : 15, stopOnConvergence : true});

The optional object parameters that can be passed along to clm.tracker() are:

searchWindow : the size (in pixels) of the search window around each point where the responses are calculated
useWebGL : whether to use WebGL for calculating the responses, when supported
stopOnConvergence : whether the tracker should stop tracking automatically once the fitted model has converged

Models

There are several pre-built models included. The models will be loaded with the variable name pModel, so initialization of the tracker with any of the models can be called this way:

ctracker.init(pModel);

All of the models are trained on the same dataset and follow the same annotation as above. The differences between them are the type of classifier, the number of components in the facial model, and how the components were extracted (sparse PCA or plain PCA). If no model is specified on initialization, clmtrackr will use the built-in model from model_pca_20_svm.js as the default.

A model with fewer components will be slightly faster, at some cost in precision. The MOSSE filter classifiers run faster than the SVM classifiers on computers without WebGL support, but give a slightly poorer fit.
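Since each model file defines the variable pModel, using one of the bundled models only requires including its script before initializing the tracker. The script path below is an assumption; adjust it to wherever the model files are located in your project:

<script src="js/models/model_pca_20_svm.js"></script>
<script>
  var ctracker = new clm.tracker();
  // pModel is defined by the included model file
  ctracker.init(pModel);
</script>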

Files

Utility libraries

face_deformer.js is a small library for deforming a face from an image or video and outputting it on a WebGL canvas. This is used in some of the examples.

Example usage:

var fd = new faceDeformer();
// initialize the facedeformer with the webgl canvas to draw on
fd.init(webGLCanvas);
// load the image element where the face should be copied from
// along with the position of the face
fd.load(imageElement, points, model);
// draw the deformed face on the webgl canvas
fd.draw(points);
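The deformer pairs naturally with the tracker: the fitted positions can be fed into load() and draw() each frame. This is a sketch assuming a clm.tracker instance ctracker is already running on videoElement, and that pModel is the facial model used for tracking:

function deformLoop() {
  requestAnimationFrame(deformLoop);
  var positions = ctracker.getCurrentPosition();
  if (positions) {
    // copy the face from the video at the fitted positions ...
    fd.load(videoElement, positions, pModel);
    // ... and draw it back out; passing modified points here
    // instead would deform the face
    fd.draw(positions);
  }
}
deformLoop();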

These are the functions that the faceDeformer object exposes:

init(canvas) : initializes the faceDeformer with the WebGL canvas it should draw on
load(element, points, model) : loads the image or video element the face should be copied from, along with the positions of the face and the facial model used
draw(points) : draws the deformed face on the WebGL canvas, warped to the given points

License

clmtrackr is distributed under the MIT license.