Face recognition technology

Technical introduction

Face recognition technology is based on human facial features. Given an input face image or video stream, the system first determines whether a face is present; if so, it further determines the position and size of each face and the locations of the main facial organs. Based on this information, it extracts the identity features contained in each face and compares them with known faces to identify each person.

In a broad sense, face recognition covers a series of related technologies used to build a face recognition system, including face image collection, face localization, face recognition preprocessing, identity confirmation, and identity search. In a narrow sense, face recognition refers specifically to the technology or system that confirms or searches for identity using the human face.

The biological characteristics studied by biometric recognition technology include the face, fingerprint, palm print, iris, retina, voice, body shape, and personal habits (such as typing strength and rhythm, or a signature). The corresponding recognition technologies are face recognition, fingerprint recognition, palmprint recognition, iris recognition, retina recognition, voice recognition (voice can be used either for identity recognition or for recognizing speech content; only the former is a biometric technology), body shape recognition, keystroke recognition, and signature recognition.

Technical principle

Face recognition technology consists of three parts:

(1) Face detection

Face detection refers to determining whether a face image is present in a dynamic scene or against a complex background, and separating the face image from it. The following methods are commonly used:

① Reference template method

First, one or several standard face templates are designed; the degree of match between a test sample and the standard templates is then calculated, and a threshold is used to decide whether a face is present.

② Face rule method

Because the face has certain structural distribution characteristics, the face rule method extracts these features to generate corresponding rules and then uses them to judge whether the test sample contains a face.

③ Sample learning method

This method uses artificial neural networks from pattern recognition: a classifier is produced by learning from a set of face image samples and a set of non-face image samples.

④ Skin color model method

This method performs detection based on the fact that facial skin color is concentrated in a relatively small region of color space; a minimal sketch of this approach appears after this list.

⑤ Feature sub-face method

This method regards the set of all face images as a face image subspace and decides whether a face is present based on the distance between a test sample and its projection into that subspace.

It is worth mentioning that the above five methods can also be used in combination in practical detection systems.
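
As an illustration of the skin color model method (④), the following is a minimal sketch in Python with OpenCV. The Cr/Cb bounds, the input file name, and the area threshold are illustrative assumptions, not values prescribed by any particular system.

```python
import cv2
import numpy as np

def skin_color_candidates(bgr_image):
    """Return a binary mask of likely skin pixels (illustrative Cr/Cb range)."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    # Commonly cited skin range in the Cr/Cb plane; exact bounds vary by dataset.
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Clean up noise and fill holes before looking for face-sized regions.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask

image = cv2.imread("frame.jpg")            # hypothetical input frame
mask = skin_color_candidates(image)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
candidates = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 1000]
print("face-like skin regions:", candidates)
```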

(2) Face tracking

Face tracking refers to dynamically tracking a detected face as a moving target. A model-based method, or a method combining motion information with a model, is typically adopted. Tracking based on a skin color model is also a simple and effective approach.
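
A minimal sketch of skin-color-based tracking, here using OpenCV's CamShift on a hue histogram of the face region. The video file name and the initial face box are assumptions for illustration; a real system would obtain the box from a face detector.

```python
import cv2

cap = cv2.VideoCapture("input.mp4")        # hypothetical video source
ok, frame = cap.read()
x, y, w, h = 200, 150, 80, 80              # assumed initial face box from a detector

# Build a hue histogram of the detected face region as the skin-color model.
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

track_window = (x, y, w, h)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Back-project the skin histogram and let CamShift follow the peak.
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    rotated_box, track_window = cv2.CamShift(back_proj, track_window, criteria)
    print("tracked window:", track_window)

cap.release()
```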

(3) Face comparison

Face comparison confirms the identity of a detected face or searches for a target in a face library. In practice this means comparing the sampled face image with the stored face images in turn and finding the best matching object. The way a face image is described therefore determines the specific method and performance of face recognition. Two description methods are mainly used: the feature vector and the facial texture template.

① Feature vector method

This method first determines the size, position, distance, and other attributes of facial features such as the irises, nose, and mouth corners, then computes geometric quantities from them; these quantities together form a feature vector describing the face image.

② Face texture template method

This method stores a number of standard face image templates or facial organ templates in a library. During comparison, all pixels of the sampled face image are matched against every template in the library using a normalized correlation measure. There are also pattern recognition methods that combine autocorrelation networks, or features, with templates.
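
A minimal sketch of the texture template idea using OpenCV's normalized correlation measure. The file names and the template bank are hypothetical; a real system would store many more templates, often per organ as well as per face.

```python
import cv2

def best_template_match(probe_gray, template_bank):
    """Score a probe face against every stored template with normalized correlation."""
    scores = {}
    for name, template in template_bank.items():
        # Resize the probe to the template size so all pixels are compared.
        resized = cv2.resize(probe_gray, (template.shape[1], template.shape[0]))
        result = cv2.matchTemplate(resized, template, cv2.TM_CCORR_NORMED)
        scores[name] = float(result.max())
    return max(scores, key=scores.get), scores

# Hypothetical file names; templates would normally come from an enrollment step.
probe = cv2.imread("probe_face.png", cv2.IMREAD_GRAYSCALE)
bank = {
    "alice": cv2.imread("alice_template.png", cv2.IMREAD_GRAYSCALE),
    "bob": cv2.imread("bob_template.png", cv2.IMREAD_GRAYSCALE),
}
best, all_scores = best_template_match(probe, bank)
print("best match:", best, all_scores)
```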

The core of face recognition technology is "local human feature analysis" and "graphical/neural recognition algorithms". Such an algorithm uses the organs and characteristic regions of the human face, for example the identification parameters formed from data describing their geometric relationships, and compares, judges, and confirms them against the original parameters stored in the database. The judgment is generally required to take less than one second.

The recognition process

Generally, there are three steps:

(1) First, build an archive of face files. A camera collects face images of the organization's personnel, or their photos are used, to form face image files, and faceprint codes are generated from these files and stored.

(2) Obtain the current face image. The camera captures the face image of the person currently entering or leaving, or a photo is input, and a faceprint code is generated from this current face image.

(3) Compare the current faceprint code against the archive. The current faceprint code is searched against and compared with the faceprint codes in the archive. The faceprint coding described above works on the essential features of the human face. It can withstand changes in lighting, skin tone, facial hair, hairstyle, glasses, expression, and pose, and is highly reliable, making it possible to accurately pick out one person from millions. The whole recognition process can be completed automatically, continuously, and in real time with ordinary image processing equipment.
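
The three steps can be sketched as follows. The "faceprint" here is deliberately simplistic (a normalized grayscale thumbnail), and the file names and threshold are assumptions; the sketch only illustrates the enroll, capture, compare flow, not any production faceprint coding.

```python
import cv2
import numpy as np

def faceprint_code(image_path):
    """Stand-in 'faceprint': a normalized, flattened 32x32 grayscale thumbnail.
    A real system would use a far richer feature code."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    small = cv2.resize(gray, (32, 32)).astype(np.float32).ravel()
    return small / (np.linalg.norm(small) + 1e-9)

# Step 1: enroll staff photos into a faceprint archive (hypothetical file names).
archive = {name: faceprint_code(f"{name}.jpg") for name in ["alice", "bob"]}

# Step 2: encode the face captured at the entrance.
probe = faceprint_code("entrance_capture.jpg")

# Step 3: compare the probe code against the archive and apply a threshold.
similarities = {name: float(np.dot(probe, code)) for name, code in archive.items()}
best = max(similarities, key=similarities.get)
THRESHOLD = 0.9   # illustrative value; real thresholds are tuned on data
print(best if similarities[best] >= THRESHOLD else "no match", similarities)
```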

Technical process

A face recognition system mainly consists of four parts: face image acquisition and detection, face image preprocessing, face image feature extraction, and face image matching and recognition.

Face image collection and detection

Face image collection: Different face images can be collected through the camera lens, including static images, dynamic images, and images taken from different positions and with different expressions. When a user is within the shooting range of the capture device, the device automatically searches for and captures the user's face image.

Face detection: In practice, face detection mainly serves as preprocessing for face recognition, that is, accurately locating the position and size of the face in the image. Face images contain very rich pattern features, such as histogram features, color features, template features, structural features, and Haar features. Face detection picks out the useful information among them and uses these features to locate faces.

The mainstream face detection method applies the AdaBoost learning algorithm to the features above. AdaBoost is a classification method that combines several weak classifiers into a new, stronger classifier.

During face detection, AdaBoost selects the rectangular features (weak classifiers) that best represent a face and combines these weak classifiers into a strong classifier by weighted voting. Several strong classifiers obtained through training are then connected in series to form a cascaded classifier, which effectively improves detection speed.
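
OpenCV ships Haar cascade models trained with AdaBoost, so the cascaded-classifier pipeline described above can be exercised directly. The input file name and the detectMultiScale parameters below are illustrative choices, not required values.

```python
import cv2

# The XML file below is the standard frontal-face cascade bundled with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("group_photo.jpg")          # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# The cascade scans the image at multiple scales; each stage is a boosted
# combination of weak rectangle-feature classifiers, so most non-face windows
# are rejected early and detection stays fast.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                  minSize=(30, 30))
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", image)
print(f"{len(faces)} face(s) found")
```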

Face image preprocessing

Face image preprocessing: Image preprocessing operates on the result of face detection, processing the image so that it can serve the feature extraction stage. Because of varying conditions and random interference, the raw image acquired by the system usually cannot be used directly; it must undergo gray-scale correction, noise filtering, and similar preprocessing at an early stage. For face images, preprocessing mainly includes light compensation, gray-scale transformation, histogram equalization, normalization, geometric correction, filtering, and sharpening.
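
A minimal preprocessing sketch in Python/OpenCV covering gray-scale conversion, histogram equalization, geometric normalization, and noise filtering. The target size and the input file name are assumptions.

```python
import cv2
import numpy as np

def preprocess_face(bgr_crop, size=(112, 112)):
    """Typical preprocessing chain: grayscale, histogram equalization,
    geometric normalization to a fixed size, and light smoothing."""
    gray = cv2.cvtColor(bgr_crop, cv2.COLOR_BGR2GRAY)
    equalized = cv2.equalizeHist(gray)              # gray-scale / lighting correction
    resized = cv2.resize(equalized, size)           # geometric normalization
    denoised = cv2.GaussianBlur(resized, (3, 3), 0) # noise filtering
    # Scale pixel values to [0, 1] so later feature extraction sees a fixed range.
    return denoised.astype(np.float32) / 255.0

face_crop = cv2.imread("face_crop.jpg")   # hypothetical detected face region
ready = preprocess_face(face_crop)
print(ready.shape, ready.min(), ready.max())
```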

Face image feature extraction

Face image feature extraction: The features usable by a face recognition system are usually divided into visual features, pixel statistical features, face image transform coefficient features, face image algebraic features, and so on. Face feature extraction, also known as face representation, is the process of modeling the features of a face. Feature extraction methods fall into two broad categories: knowledge-based representation methods, and representation methods based on algebraic features or statistical learning.

Knowledge-based representation methods obtain feature data helpful for face classification mainly from descriptions of the shapes of facial organs and the distances between them; the feature components usually include Euclidean distances, curvatures, and angles between feature points. The face is composed of parts such as the eyes, nose, mouth, and chin; geometric descriptions of these parts and of the structural relationships between them can serve as important features for recognizing faces and are called geometric features. Knowledge-based face representation mainly includes geometric-feature-based methods and template matching methods.
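
A small sketch of knowledge-based (geometric) feature extraction. The landmark coordinates are hypothetical stand-ins for the output of a landmark detector, and the chosen distances and angle are only one possible feature set.

```python
import numpy as np

# Hypothetical landmark coordinates (pixels) standing in for a landmark detector.
landmarks = {
    "left_eye": np.array([110.0, 120.0]),
    "right_eye": np.array([190.0, 118.0]),
    "nose_tip": np.array([150.0, 170.0]),
    "mouth_left": np.array([125.0, 210.0]),
    "mouth_right": np.array([175.0, 212.0]),
}

def geometric_feature_vector(pts):
    """Knowledge-based features: inter-point distances and one angle,
    normalized by the inter-ocular distance so the vector is scale-invariant."""
    eye_dist = np.linalg.norm(pts["right_eye"] - pts["left_eye"])
    eye_to_nose = np.linalg.norm((pts["left_eye"] + pts["right_eye"]) / 2 - pts["nose_tip"])
    mouth_width = np.linalg.norm(pts["mouth_right"] - pts["mouth_left"])
    nose_to_mouth = np.linalg.norm(pts["nose_tip"] - (pts["mouth_left"] + pts["mouth_right"]) / 2)
    # Angle at the nose tip between the directions toward the two eyes.
    v1 = pts["left_eye"] - pts["nose_tip"]
    v2 = pts["right_eye"] - pts["nose_tip"]
    angle = np.arccos(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
    return np.array([eye_to_nose / eye_dist, mouth_width / eye_dist,
                     nose_to_mouth / eye_dist, angle])

print(geometric_feature_vector(landmarks))
```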

Face image matching and recognition

Face image matching and recognition: The feature data extracted from a face image is searched against and matched with the feature templates stored in the database. A threshold is set, and when the similarity exceeds it, the matching result is output. Face recognition compares the face features to be recognized with stored face feature templates and judges identity according to the degree of similarity. This process falls into two categories: confirmation, a one-to-one image comparison, and identification, a one-to-many image matching and comparison.
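
A minimal sketch of the two comparison modes using cosine similarity on toy feature vectors. The threshold of 0.8 and the vectors themselves are illustrative; real thresholds are tuned on evaluation data.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def verify(probe, enrolled, threshold=0.8):
    """1:1 confirmation: is the probe the claimed person?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe, gallery, threshold=0.8):
    """1:N identification: search the whole gallery, keep the best match above threshold."""
    scores = {name: cosine_similarity(probe, feat) for name, feat in gallery.items()}
    best = max(scores, key=scores.get)
    return (best if scores[best] >= threshold else None), scores

# Toy 4-dimensional features; a real system would use much longer vectors.
gallery = {"alice": np.array([0.9, 0.1, 0.3, 0.2]),
           "bob":   np.array([0.1, 0.8, 0.4, 0.6])}
probe = np.array([0.85, 0.15, 0.35, 0.25])
print(verify(probe, gallery["alice"]))
print(identify(probe, gallery))
```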

Functional modules

Face capture and tracking function

Face capture refers to detecting a portrait in a frame of an image or video stream, separating it from the background, and saving it automatically. Portrait tracking refers to using face capture technology to automatically follow a designated portrait as it moves within the camera's field of view.

Face recognition comparison

Face recognition offers two comparison modes: verification and search. Verification compares a captured or designated portrait with a registered object in the database to check whether they are the same person. Search-style comparison looks through all portraits registered in the database to find whether a designated portrait exists there.

Face Modeling and Retrieval

The registered portrait data can be modeled to extract facial features and generate a face template (face feature file), which is saved to the database. When performing a face search (search mode), the designated person is modeled and then compared with the templates of everyone in the database for recognition; finally, a list of the most similar candidates is produced according to the similarity values.
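
A minimal sketch of search-mode retrieval: every stored template is scored against the probe and the most similar candidates are listed. The random templates and database size are stand-ins for real face feature files.

```python
import numpy as np

def search_similar(probe_template, database, top_k=5):
    """Rank every enrolled template by similarity to the probe and return the top K."""
    scored = []
    for person_id, template in database.items():
        sim = float(np.dot(probe_template, template) /
                    (np.linalg.norm(probe_template) * np.linalg.norm(template) + 1e-9))
        scored.append((person_id, sim))
    scored.sort(key=lambda item: item[1], reverse=True)
    return scored[:top_k]

# Toy templates standing in for real face feature files.
rng = np.random.default_rng(0)
database = {f"person_{i}": rng.normal(size=8) for i in range(20)}
probe = database["person_7"] + rng.normal(scale=0.1, size=8)
print(search_similar(probe, database, top_k=3))
```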

Real person identification function

The system can determine whether the subject in front of the camera is a real person or a photograph, preventing users from spoofing with photos. This technique requires the user to cooperate by making facial movements or expressions.

Image quality inspection

Image quality directly affects recognition performance. The image quality inspection function evaluates the quality of the photos to be compared and gives a corresponding recommendation value to assist recognition.
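
A rough sketch of what such a quality check might measure (brightness, contrast, and a Laplacian-based sharpness score). The thresholds are illustrative guesses, not values used by any specific product.

```python
import cv2

def quality_report(gray_face):
    """Crude quality indicators: mean brightness, contrast, and a sharpness score
    based on the variance of the Laplacian. Thresholds below are illustrative."""
    brightness = float(gray_face.mean())
    contrast = float(gray_face.std())
    sharpness = float(cv2.Laplacian(gray_face, cv2.CV_64F).var())
    ok = 60 <= brightness <= 200 and contrast > 30 and sharpness > 100
    return {"brightness": brightness, "contrast": contrast,
            "sharpness": sharpness, "suitable_for_matching": ok}

face = cv2.imread("probe_face.png", cv2.IMREAD_GRAYSCALE)  # hypothetical photo
print(quality_report(face))
```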

Analysis algorithm

The regional feature analysis algorithm widely used in face recognition combines computer image processing with principles of biostatistics. Image processing is used to extract facial feature points from the video, and biostatistical analysis is used to build a mathematical model, the facial feature template. The stored feature template is then analyzed against the subject's face, and a similarity value is produced from the analysis; this value is used to decide whether the two belong to the same person.

There are many face recognition methods. The main face recognition methods are:

(1) Geometric-feature face recognition: The geometric features can be the shapes of the eyes, nose, mouth, and other organs, and the geometric relationships between them (such as their mutual distances). These algorithms are fast and need little memory, but their recognition rate is low.

(2) Eigenface (PCA) face recognition: The eigenface method is based on the KL transform, an optimal orthogonal transform for image compression. Transforming the high-dimensional image space with the KL transform yields a new set of orthogonal basis vectors; keeping the most important ones spans a low-dimensional linear subspace. If the projections of face images into this subspace are assumed to be separable, those projections can be used as feature vectors for recognition. This is the basic idea of the eigenface method (a minimal numerical sketch appears after this list of methods). Such methods require relatively many training samples and rely entirely on the statistical properties of image gray levels. Several improved eigenface methods now exist.

(3) Neural network face recognition: The input to the neural network can be a reduced-resolution face image, the autocorrelation function of a local region, the second-order moments of local texture, and so on. This type of method also needs many training samples, yet in many applications the number of available samples is very limited.

(4) Elastic graph matching face recognition: Elastic graph matching defines a distance in two-dimensional space that is invariant to ordinary face deformations and represents the face with an attributed topological graph, each vertex of which carries a feature vector recording the facial information near that vertex. The method combines gray-level characteristics with geometric factors and allows the image to deform elastically during comparison; it works well in overcoming the effect of expression changes on recognition, and a single person no longer needs multiple samples for training.

(5) Face recognition based on line-segment Hausdorff distance (LHD): Psychological research shows that in recognizing line drawings (such as caricatures), humans are no slower or less accurate than when recognizing gray-scale images. LHD works on line-segment maps extracted from gray-scale face images and defines a distance between two sets of line segments. Unlike other distances, LHD does not establish a one-to-one correspondence between the segments of different sets, so it adapts better to small changes between line-segment maps. Experiments show that LHD performs very well under varying lighting and pose, but poorly when expressions change strongly.

(6) Support vector machine (SVM) face recognition: Support vector machines are a research hot spot in statistical pattern recognition. They try to strike a compromise between empirical risk and generalization ability in order to improve the performance of the learning machine. An SVM mainly solves a two-class classification problem; its basic idea is to transform a linearly inseparable problem in a low-dimensional space into a linearly separable one in a higher-dimensional space. Experiments usually show that SVMs achieve good recognition rates, but they require a large number of training samples (about 300 per class), which is often unrealistic in practice. In addition, SVM training is slow, the method is complicated to implement, and there is no unified theory for choosing the kernel function.
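
As referenced in method (2), the following is a minimal numerical sketch of the eigenface idea with NumPy: the KL/PCA basis is computed from mean-centered training vectors, and recognition is nearest neighbour in the projected space. The random training data stands in for real preprocessed face images.

```python
import numpy as np

# Toy data: rows are flattened, preprocessed face images (here random stand-ins).
rng = np.random.default_rng(0)
train_faces = rng.normal(size=(40, 32 * 32))   # 40 training images of 32x32 pixels

# KL/PCA: subtract the mean face and keep the leading orthogonal basis vectors.
mean_face = train_faces.mean(axis=0)
centered = train_faces - mean_face
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:20]                            # keep 20 principal components

def project(face_vector):
    """Project a face into the low-dimensional eigenface space."""
    return eigenfaces @ (face_vector - mean_face)

# Recognition: nearest neighbour in the projected (eigenface) space.
gallery = {f"person_{i}": project(train_faces[i]) for i in range(5)}
probe = project(train_faces[3] + rng.normal(scale=0.05, size=32 * 32))
best = min(gallery, key=lambda k: np.linalg.norm(gallery[k] - probe))
print("closest identity:", best)
```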

Technical details

Generally speaking, a face recognition system includes image capture, face localization, image preprocessing, and face recognition (identity confirmation or identity search).
