Create your first Video Face Recognition app + Bonus (Happiness Recognition)

Code

The OpenCV algorithm uses Haar cascade filters to make the detections. Each cascade contains all the parameters your code needs to identify a face or parts of it. You can read more about it in the article on the theory of face detection that I will link at the end. Start by copying the following cascades and pasting them as XML files inside the same folder as your script:

Video Face Detection App

[Image: the folder containing the script and the Haar cascade XML files]
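If you installed OpenCV with pip (the opencv-python package), the same cascade files also ship inside the package, so copying the XML files by hand is optional. A minimal sketch of loading them from the bundled cv2.data.haarcascades folder instead:

import cv2

cascade_dir = cv2.data.haarcascades   # folder with OpenCV's bundled cascades
face_cascade = cv2.CascadeClassifier(cascade_dir + 'haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier(cascade_dir + 'haarcascade_eye.xml')

# empty() is True when the XML failed to load -- a quick sanity check.
assert not face_cascade.empty() and not eye_cascade.empty()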

Let’s start coding by importing the library and the XML files.

import cv2

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')

Now we will create a function that accesses your camera frames and looks for faces. After it finds a face, it looks for eyes inside that face only, which saves a lot of processing time compared to searching the whole frame for eyes. I will show the complete function first and then explain each step below it.

def detect(gray, frame):
    # Detect faces in the grayscale frame.
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        # Draw a rectangle around each face on the original color frame.
        cv2.rectangle(frame, (x, y), (x+w, y+h), (255, 0, 0), 2)
        # Regions of interest: the face area in grayscale and in color.
        roi_gray = gray[y:y+h, x:x+w]
        roi_color = frame[y:y+h, x:x+w]

        # Detect eyes only inside the face region.
        eyes = eye_cascade.detectMultiScale(roi_gray, 1.1, 18)
        for (ex, ey, ew, eh) in eyes:
            cv2.rectangle(roi_color, (ex, ey), (ex+ew, ey+eh), (0, 255, 0), 2)

    return frame

def detect(gray, frame): Most of the time when working with images and machine learning, it is wise to use grayscale images, so the code deals with less data and runs more efficiently. Here we will do exactly that. This function takes two parameters, gray and frame, where gray is the grayscale version of the last frame obtained from the camera, and frame is the original color version of that same frame.
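To see how much less data grayscale means, compare the array shapes; a quick sketch (the file name is just a placeholder):

import cv2

frame = cv2.imread('example.jpg')                # any test image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

print(frame.shape)   # e.g. (480, 640, 3) -- three color channels
print(gray.shape)    # e.g. (480, 640)    -- one channel, a third of the data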

faces = face_cascade.detectMultiScale(gray, 1.3, 5) detectMultiScale() detects objects of different sizes in the input image and returns rectangles positioned on the faces. The first argument is the image, the second is the scaleFactor (how much the image size is reduced at each image scale), and the third is minNeighbors (how many overlapping neighbor detections each candidate rectangle needs before it is kept). The values of 1.3 and 5 come from experimenting and choosing what worked best.
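The same call can be written with keyword arguments, which makes the two tuning knobs explicit; a sketch, reusing gray and face_cascade from above:

faces = face_cascade.detectMultiScale(
    gray,
    scaleFactor=1.3,   # shrink the search window by 30% at each scale
    minNeighbors=5,    # keep a detection only if 5+ neighbors agree
)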

for (x, y, w, h) in faces: We loop through each rectangle (each detected face) using the coordinates returned by the function we discussed above.

cv2.rectangle(frame, (x,y), (x+w, y+h), (255, 0, 0), 2) We draw the rectangle on the original image captured from the camera (the last frame). The (255, 0, 0) is the rectangle color in BGR, OpenCV's channel order (not RGB), so this draws a blue box. The last parameter (2) is the thickness of the rectangle. x is the horizontal starting position, w is the width, y is the vertical starting position, h is the height.
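Since the channel order trips people up, here is a tiny sketch of the primary colors as BGR tuples, reusing frame and the loop variables from above:

BLUE = (255, 0, 0)    # first channel is blue in OpenCV's BGR order
GREEN = (0, 255, 0)
RED = (0, 0, 255)
cv2.rectangle(frame, (x, y), (x+w, y+h), RED, 2)   # a red box instead of a blue one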

roi_gray = gray[y:y+h, x:x+w] Here we set roi_gray to be our region of interest: the face area in the grayscale frame. That's where we will look for the eyes.

roi_color = frame[y:y+h, x:x+w] We take the same region of interest in the original frame (in color, not grayscale).
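One detail worth knowing: these NumPy slices are views, not copies, so drawing on roi_color later actually draws on frame itself, which is why the eye rectangles end up in the returned image. A tiny standalone sketch of the view behavior:

import numpy as np

img = np.zeros((4, 4), dtype=np.uint8)
roi = img[1:3, 1:3]   # a slice is a view into the same memory
roi[:] = 255          # writing through the view...
print(img[1, 1])      # ...changes the original array: prints 255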

Now, we apply the same concepts to find the eyes. The difference is that we won't search the whole image: we use the regions of interest (roi_gray for detection, roi_color for drawing), and the eye rectangles drawn there end up in the frame.

eyes = eye_cascade.detectMultiScale(roi_gray, 1.1, 18)
for (ex, ey, ew, eh) in eyes:
    cv2.rectangle(roi_color, (ex, ey), (ex+ew, ey+eh), (0, 255, 0), 2)

return frame

Now we will access your camera and use the function we just created. The code keeps grabbing snapshots from the camera, applies the detect() function to find the face and eyes, and draws the rectangles. I will show the whole block of code and then explain it step by step again.

video_capture = cv2.VideoCapture(0)            # open the camera
while True:
    _, frame = video_capture.read()            # grab the latest frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    canvas = detect(gray, frame)               # draw the rectangles
    cv2.imshow('Video', canvas)                # display the result
    if cv2.waitKey(1) & 0xFF == ord('q'):      # press q to quit
        break

video_capture = cv2.VideoCapture(0) Here we access your camera. With the parameter 0, we access the computer's internal camera; with 1, we access an external camera plugged into your computer.
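If the index is wrong or the camera is busy, the capture object is created but unusable, so it is worth checking isOpened() right after; a small sketch:

video_capture = cv2.VideoCapture(0)
if not video_capture.isOpened():
    raise RuntimeError('Could not open camera 0 -- is it unplugged or in use?')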

while True: We keep running the detect function and the code that follows for as long as the camera is open (until we close it with a key we will define below).

_, frame = video_capture.read() The read() function returns two objects, but we are interested only in the second one, the last frame from the camera. So we ignore the first object by naming it _, and name the second one frame; we will use it to feed our detect() function in a second.
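That ignored first object is actually a boolean success flag: read() returns (False, None) when no frame could be grabbed (camera unplugged, stream ended). A slightly more defensive sketch of the same line, inside the while loop:

    ret, frame = video_capture.read()
    if not ret:
        # No frame available; stop instead of crashing on a None frame.
        break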

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) We take the frame we just read and convert it to grayscale. We name it gray, and we will use it to feed our detect() function as well.

canvas = detect(gray, frame) canvas is the image returned by detect(), i.e. the frame with the rectangles drawn on it.

cv2.imshow('Video', canvas) imshow() displays an image in a window, so a window named Video shows the canvas (the output of the detect() function), updated frame by frame.

if cv2.waitKey(1) & 0xFF == ord('q'): break This piece of code stops the program when you press q on the keyboard. waitKey(1) waits one millisecond for a key press, and the & 0xFF mask keeps only the lowest byte of the key code so the comparison with ord('q') works reliably across platforms.

Now, we just need to turn off the camera and close the window when we stop the program. We use the following syntax:

video_capture.release()  # Turn off the camera
cv2.destroyAllWindows() # Close the window
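If the loop raises an exception (for example, when read() returns no frame), those two lines never run and the camera can stay locked until the process exits. A minimal sketch of the same loop wrapped in try/finally so the cleanup always happens:

video_capture = cv2.VideoCapture(0)
try:
    while True:
        _, frame = video_capture.read()
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        cv2.imshow('Video', detect(gray, frame))
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
finally:
    video_capture.release()    # always turn off the camera
    cv2.destroyAllWindows()    # always close the window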
