AP021 » Expression Extraction for Lie Detection Using Image Processing
Lie detection is an evolving field. The polygraph is the most widely used technique so far, but it requires physical contact with the subject. This project proposes lie detection by extracting facial expressions using image processing. Each captured image is broken into facial regions such as the eyes, eyebrows, nose, and mouth. Each region is then studied to determine the underlying emotion: eyebrows raised and pulled together, raised upper eyelids, and lips stretched horizontally back toward the ears signify fear, while eyebrows pulled down and together and narrowed lips indicate anger. The detected emotions are aggregated to determine whether the person is lying. The interrogation video (recorded or live) is broken down into individual facial images of the subject, and the emotions collected from these images are processed against general face-reading criteria to evaluate the subject's truthfulness.
The polygraph has been the most successful technique so far for assessing the truthfulness of a person under test. However, physical contact must be maintained for acceptable accuracy, which is a major drawback of this technique. The polygraph is a wired system that can induce panic and anxiety in the person under test, triggering false positive results.
Thanks to the rapid evolution of computer vision and artificial intelligence, we propose a system that extracts facial expressions using image processing to determine whether a person is lying.
2. Purpose of the design
The main purpose of our design is to create a technique that requires no physical contact, avoiding the false positives such contact can cause. The system uses image processing techniques to extract facial expressions, and the proposed algorithm classifies the person's emotions. All detected emotions are then aggregated to estimate his/her truthfulness.
3. Application scope
5. Why FPGA?
First, the facial expressions have to be evaluated. The facial expression extraction pipeline can be divided into three main stages, after which the resulting emotions are recorded to determine whether the subject is lying:-
1. Face detection
There are various face detection approaches. We will be using a feature-invariant approach for face detection, which relies on features such as skin colour that remain stable across changes in pose and lighting.
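As a rough illustration of the feature-invariant idea, the sketch below classifies pixels as skin or non-skin using a classic RGB rule of thumb; the thresholds are a commonly cited heuristic, not values from our design, and a real detector would then group skin pixels into candidate face regions:

```python
def is_skin(r, g, b):
    """Classic RGB skin-colour heuristic (uniform daylight rule).
    Thresholds are illustrative, not tuned for our system."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15
            and r > g and r > b)

def skin_mask(image):
    """image: 2-D list of (r, g, b) tuples -> 2-D boolean mask."""
    return [[is_skin(*px) for px in row] for row in image]

# Tiny 1x2 example: a typical skin tone next to a pure blue pixel.
mask = skin_mask([[(220, 170, 140), (0, 0, 255)]])
# → [[True, False]]
```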
2. Feature extraction
3. Emotion classification and lie detection
All the extracted features are fed into a fuzzy algorithm and recorded to classify the emotions. For example, eyebrows raised, eyes widened, and an open mouth indicate surprise, while eyebrows pulled down and together, glaring eyes, and narrowed lips indicate anger. In this way we can segment out emotions such as contempt, guilt, anger, sadness, and shock. All the emotions are collectively processed to determine truthfulness.
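A crude, non-fuzzy sketch of this cue-to-emotion mapping is shown below; the cue names and the rule table are illustrative stand-ins for the fuzzy rules described above, not the actual rule base:

```python
# Illustrative rule table: each emotion is triggered by a set of facial cues.
# Cue names and combinations follow the examples in the text; they are
# hypothetical placeholders for the real fuzzy rules.
RULES = {
    "surprise": {"eyebrows_raised", "eyes_widened", "mouth_open"},
    "anger": {"eyebrows_down_together", "eyes_glare", "lips_narrowed"},
    "fear": {"eyebrows_raised_together", "upper_eyelids_raised",
             "lips_stretched_back"},
}

def classify_emotion(observed_cues):
    """Return the emotion whose cue set best overlaps the observed cues."""
    best, best_score = "neutral", 0
    for emotion, cues in RULES.items():
        score = len(cues & observed_cues)
        if score > best_score:
            best, best_score = emotion, score
    return best

print(classify_emotion({"eyebrows_raised", "eyes_widened", "mouth_open"}))
# → surprise
```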
Humans express various emotions such as sadness, guilt, and surprise while talking. Our research focuses on extracting these distinct emotions as the subject speaks, which requires high computational speed; the efficiency of the project depends mainly on how fast each frame can be processed.
Our project uses an FPGA because of the following virtues:-
1. Massive fine-grained parallelism, well suited to per-pixel image operations.
2. Deterministic, low-latency processing for real-time video.
3. Reconfigurability, allowing the processing pipeline to be updated.
4. Lower power consumption than a comparable CPU/GPU solution.
For all the above reasons, an FPGA board is our first choice for real-time image processing for lie detection.
Humans have a well-defined, rigid skull structure and can therefore perform only a finite number of facial expressions. Our proposed system extracts these expressions, classifies them, and categorizes them to determine the corresponding emotions.
With respect to the interrogation questions, one can expect certain emotions from the person under test. Each emotion can be weighted as leaning more towards a lie or more towards the truth. If the aggregate score peaks towards lie, the person can be assumed to be lying.
Facial Expression Recognition is usually performed in four stages: pre-processing, face detection, feature extraction, and expression classification.
In this project we applied deep learning methods (convolutional neural networks) to identify the seven key human emotions: anger, disgust, fear, happiness, sadness, surprise, and neutrality.
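The network's final layer typically produces one raw score per emotion; a softmax converts these scores to probabilities, and the highest probability gives the prediction. A minimal stdlib sketch (the logit values below are made up for illustration):

```python
import math

EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "sadness", "surprise", "neutrality"]

def softmax(logits):
    """Convert raw network scores to probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict(logits):
    """Return the most likely emotion label and its probability."""
    probs = softmax(logits)
    i = max(range(len(probs)), key=probs.__getitem__)
    return EMOTIONS[i], probs[i]

# Made-up logits for one face crop; index 3 (happiness) is largest.
label, p = predict([0.1, -1.2, 0.3, 2.5, 0.0, 1.1, 0.4])
# → label == "happiness"
```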
There are different approaches to Facial Expression Recognition; we are using the neural network approach.
1. Configuring the OpenVINO toolkit
1. Configure Model Optimizer for the framework.
2. Convert the trained model to produce an optimised Intermediate Representation based on the trained network topology, weights, and bias values.
3. Test the model in Intermediate Representation format using the Inference Engine in the target environment via the validation application.
4. Integrate the Inference engine to deploy the model.
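Assuming a frozen TensorFlow model and a standard OpenVINO install, the conversion step above maps roughly to the following command; the file names and paths are placeholders, and the exact Model Optimizer flags vary between OpenVINO releases:

```shell
# 2. Convert the trained model to an optimised Intermediate Representation
#    (produces an .xml topology file plus a .bin weights file).
#    "frozen_emotion_net.pb" and "ir/" are placeholder names.
mo --input_model frozen_emotion_net.pb --output_dir ir/

# 3-4. The resulting ir/frozen_emotion_net.xml and .bin files are then
#      loaded by the Inference Engine inside the deployed application.
```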
2. Model Optimizer
The Model Optimizer takes a trained model from an OpenVINO-supported framework as input and produces an Intermediate Representation of the network as output.
Model Optimizer has two main purposes:
1) Produce a valid Intermediate Representation.
2) Produce an optimized Intermediate Representation.
3. Inference Engine
After the Intermediate Representation has been created by the Model Optimizer, input data can be inferred by the Inference Engine.
The Inference Engine is a C++ library with a set of C++ classes to infer input data and obtain a result.
1. The input video is fed to the OpenVINO starter kit, which then displays the result.
2. The collected images are segmented and the face is detected in each image.
3. Facial expressions are detected by landmarking facial regions using a Deep Neural Network (DNN).
4. The recognized expressions are fed to the classifier to determine the percentage of lie or truth.
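The final step above can be sketched as a weighted vote over the per-frame emotions; the deception weights here are invented for illustration and are not calibrated values from the project:

```python
# Hypothetical weight per emotion: how strongly each is taken to indicate
# deception in this sketch (values invented for illustration only).
DECEPTION_WEIGHT = {
    "guilt": 0.9, "contempt": 0.7, "fear": 0.6, "anger": 0.5,
    "surprise": 0.4, "sadness": 0.3, "neutrality": 0.2, "happiness": 0.1,
}

def lie_percentage(frame_emotions):
    """Average the deception weights of the emotions seen across frames."""
    if not frame_emotions:
        return 0.0
    total = sum(DECEPTION_WEIGHT.get(e, 0.0) for e in frame_emotions)
    return 100.0 * total / len(frame_emotions)

score = lie_percentage(["fear", "guilt", "neutrality", "fear"])
# (0.6 + 0.9 + 0.2 + 0.6) / 4 = 0.575  → 57.5 % lie
```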