Snapchat filters and Face Swap actually have quite an interesting history.
The era of Snapchat filters started with a Ukrainian company called Looksery, whose app allowed users to modify their facial features during video chats and in photos. In September 2015, Snapchat acquired the company for $150 million (the largest technology acquisition in Ukrainian history), and that is how the road for filters got paved.
Now to get to the actual technical side of this question.
Snapchat filters utilize computer vision. Computer vision can be thought of as the direct opposite of computer graphics: while computer graphics produces images from 3D models, computer vision tries to recover a 3D scene (and other information) from image data. Computer vision is being utilized more and more in our society. One specific example is mobile check deposit, where the amounts and account numbers are extracted from a scanned image of the check.
The specific area of computer vision that Snapchat filters use is called image processing. Image processing is the transformation of an image by performing mathematical operations on each individual pixel of the provided picture.
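To make that concrete, here is a minimal sketch (Python with NumPy and Pillow; the filenames are just stand-ins) of two classic per-pixel operations: brightening an image and converting it to grayscale.

```python
import numpy as np
from PIL import Image  # Pillow, a common imaging library

# Load a photo as an array of pixels (height x width x RGB channels).
# "selfie.jpg" is a hypothetical filename for this sketch.
pixels = np.asarray(Image.open("selfie.jpg"), dtype=np.float32)

# A simple per-pixel operation: brighten the image by 20%,
# clamping values back into the valid 0-255 range.
brightened = np.clip(pixels * 1.2, 0, 255).astype(np.uint8)

# Another per-pixel operation: convert to grayscale by taking a
# weighted average of the red, green, and blue channels.
gray = (0.299 * pixels[..., 0] +
        0.587 * pixels[..., 1] +
        0.114 * pixels[..., 2]).astype(np.uint8)

Image.fromarray(brightened).save("brightened.jpg")
Image.fromarray(gray).save("gray.jpg")
```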
In order to apply a filter to a face, Snapchat first has to find the face, and it does this using the Viola-Jones algorithm. This algorithm relies on the fact that (the majority of) human faces share similar shading properties, captured by so-called Haar features. For example, the eye region is darker than the upper cheeks, and the bridge of the nose is brighter than the eyes on either side of it. Additionally, the algorithm exploits the fact that most human faces are constructed in generally the same way: the eyes, the bridge of the nose, and the mouth are almost always in the same place relative to each other.
The algorithm then uses pairs of rectangles to detect regions that are darker than others. It does this by summing the pixel intensities under the white rectangle and subtracting that from the sum of the pixel intensities under the black rectangle (pictured below). These black and white rectangles scan your face repeatedly until the face has been detected. (Since the algorithm works based on shading, shaking your head quickly or tilting it can throw the filter off.)
[Image: visualization of the Viola-Jones algorithm]
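Snapchat hasn't published its detector, but as a rough illustration of the rectangle-sum trick, here is a sketch of the "integral image" that makes Viola-Jones fast: once it is built, the sum of intensities under any rectangle costs only four lookups. The image and rectangle coordinates are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(42)
# A stand-in grayscale image (in practice, a camera frame).
image = rng.integers(0, 256, size=(480, 640)).astype(np.int64)

# The integral image: each entry holds the sum of all pixels above
# and to the left of it, so any rectangle sum costs only 4 lookups.
integral = image.cumsum(axis=0).cumsum(axis=1)
integral = np.pad(integral, ((1, 0), (1, 0)))  # pad so indexing is clean

def rect_sum(y, x, h, w):
    """Sum of pixel intensities in the h-by-w rectangle at (y, x)."""
    return (integral[y + h, x + w] - integral[y, x + w]
            - integral[y + h, x] + integral[y, x])

# A two-rectangle Haar-like feature such as "eyes darker than cheeks":
# the sum under the lower (bright) band minus the sum under the upper
# (dark) band. On a real face window, this comes out strongly positive.
y, x, h, w = 100, 200, 12, 24          # made-up window position/size
feature = rect_sum(y + h, x, h, w) - rect_sum(y, x, h, w)
print(feature)
```

For the curious, OpenCV ships a pre-trained Viola-Jones detector (cv2.CascadeClassifier with the haarcascade_frontalface_default.xml file) built from exactly these kinds of rectangle features.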
Now that the face has been detected, Snapchat could use image processing to apply features onto the face as a whole. However, they go one step further and locate your individual facial features. This is done with the aid of an Active Shape Model: a statistical facial model that has been trained by manually marking the borders of facial features on hundreds to thousands of images. Through machine learning, an "average face" is created from this training data and aligned with the image that is provided. This average face, of course, does not fit the user's face exactly (we all have diverse faces), so after the initial placement, the pixels around each point of the "average face" are examined for differences in shading. Because of the training the model went through (the machine learning process), it has a basic skeleton of how each facial feature should look, so it searches for a similar pattern in the given image. Even if some of the initial adjustments are wrong, the algorithm corrects itself by taking into account the positions of the other points it has already fixed, refining where it thinks each part of your face is. The model then adjusts and produces a mesh: a 3D model that can shift and scale with your face.
[Image: Active Shape Model]
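Snapchat hasn't published its implementation, but as a rough sketch of the idea behind an Active Shape Model, here is a minimal Python example: it learns a mean shape and its main modes of variation from (here, synthetic) hand-marked landmark data, then fits a new shape while keeping the deformation statistically plausible. The landmark counts and data are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in training data: 200 "faces", each a flattened vector of 68
# hand-marked (x, y) landmark points. Real ASMs are trained on
# hundreds to thousands of manually annotated photos.
n_landmarks, n_examples = 68, 200
shapes = rng.normal(size=(n_examples, 2 * n_landmarks))

# The "average face" is simply the mean of all training shapes.
mean_shape = shapes.mean(axis=0)

# PCA on the deviations from the mean captures the main ways real
# faces vary (wider jaws, closer-set eyes, and so on).
cov = np.cov(shapes - mean_shape, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
P, lam = eigvecs[:, order[:20]], eigvals[order[:20]]  # top 20 modes

def fit(target_points):
    """Deform the average face toward target landmark positions,
    clamping each mode to +/-3 standard deviations so the result
    still looks like a plausible face. In a full ASM, target_points
    would come from searching the image around each landmark for the
    expected shading profile, repeated over several iterations."""
    b = P.T @ (target_points - mean_shape)
    b = np.clip(b, -3 * np.sqrt(lam), 3 * np.sqrt(lam))
    return mean_shape + P @ b

fitted = fit(shapes[0])  # fit the model to one example's landmarks
```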
This whole face and feature recognition process happens when you see that white net appear, right before you choose your filter.
The filter then distorts certain areas of the detected face by enhancing them or adding something on top of them.
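Snapchat's rendering pipeline isn't public, but here is a toy sketch of the "adding something on top" step: alpha-blending a transparent sticker onto a frame at a position that, in a real filter, would come from the fitted face mesh. The filenames and coordinates are hypothetical.

```python
import numpy as np
from PIL import Image

def overlay(frame, sticker, top_left):
    """Alpha-blend an RGBA sticker onto an RGB frame at top_left=(y, x)."""
    y, x = top_left
    h, w = sticker.shape[:2]
    alpha = sticker[..., 3:4] / 255.0            # sticker transparency
    region = frame[y:y + h, x:x + w].astype(np.float32)
    blended = alpha * sticker[..., :3] + (1 - alpha) * region
    frame[y:y + h, x:x + w] = blended.astype(np.uint8)
    return frame

# Hypothetical inputs: a camera frame and a dog-ears sticker with
# transparency. In a real filter, the mesh fitted to your face would
# tell us where (and how warped) the sticker should be drawn.
frame = np.asarray(Image.open("frame.jpg")).copy()
sticker = np.asarray(Image.open("dog_ears.png"))   # must be RGBA
ear_anchor = (40, 120)                             # made-up landmark
Image.fromarray(overlay(frame, sticker, ear_anchor)).save("out.jpg")
```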
I hope that my explanation was clear, and have fun using Snapchat filters!
This question originally appeared on Quora.