Using digital image processing in MATLAB
DROWSY DRIVER DETECTION SYSTEM
Prepared by: Manali K. Shukla (07cp621), Niraj A. Chandrani (07cp622)
Guided by: Ms. Hetal Gaudani
AIM
The aim of this project is to develop a prototype drowsiness detection system.
The focus will be placed on designing a system that will accurately monitor the open or closed state of the driver’s eyes in real-time.
INTRODUCTION
Driver fatigue is a significant factor in a large number of vehicle accidents. Recent statistics estimate that annually 1,200 deaths and 76,000 injuries can be attributed to fatigue-related crashes. The development of technologies for detecting or preventing drowsiness at the wheel is a major challenge in the field of accident avoidance systems. Because of the hazard that drowsiness presents on the road, methods need to be developed for counteracting its effects.
Fatigue can be detected in one of the following ways, or by combining them:
Measuring changes in physiological signals, such as brain waves, heart rate and eye blinking.
Measuring physical changes such as sagging posture, leaning of the driver’s head and the open/closed states of the eyes.
Monitoring the steering wheel movement, accelerator or brake patterns, vehicle speed, lateral acceleration, and lateral displacement.
Monitoring the response of the driver. This involves periodically requesting the driver to send a response to the system to indicate alertness.
By monitoring the eyes, it is believed that the symptoms of driver fatigue can be detected early enough to avoid a car accident.
Hence we have used the eye open/closed detection technique, combined with blink detection, to detect drowsiness.
It involves capturing a sequence of images of the face, observing the eye position in each image, and analyzing blink patterns.
OVERVIEW OF IMPLEMENTATION
• Image acquisition
• Image processing
• Output generation
‘CORE’ OF IMPLEMENTATION
Implementation of our project is divided into 3 core processes: Image acquisition, Image Processing, Output Generation.
IMAGE ACQUISITION
This process gets data into MATLAB and passes it to the other processes.
It works in 3 phases as shown.
• Interface webcam and start live video
• Extract the frames from video
• Replace processed frame with new one
IMAGE ACQUISITION
• Interface the webcam with MATLAB. The ‘videoinput()’ function is used.
• Make the required changes in the properties. The ‘set()’ function is used to set properties such as ‘FrameGrabInterval’.
• Obtain the frames from the live feed into MATLAB memory. The ‘getdata()’ function is used.
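The three acquisition steps above can be sketched as follows. This is a minimal sketch, not the project's actual script: the adaptor name (‘winvideo’), device ID, and frame count are assumptions; check `imaqhwinfo` for the values on a given machine.

```matlab
% Interface the webcam with MATLAB (adaptor name and device ID are assumptions)
vid = videoinput('winvideo', 1);

% Set acquisition properties: grab every 5th frame, keep triggering
set(vid, 'FrameGrabInterval', 5);
set(vid, 'FramesPerTrigger', 1);
set(vid, 'TriggerRepeat', Inf);

start(vid);                      % start the live video stream
for k = 1:100                    % process a fixed number of frames as an example
    frame = getdata(vid, 1);     % pull one frame into MATLAB memory
    % ... pass 'frame' to the image processing stage here ...
end
stop(vid);
delete(vid);                     % release the device
```

Replacing the processed frame with a new one happens naturally here: each `getdata()` call overwrites `frame` with the next available frame.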
IMAGE PROCESSING
The images obtained from image acquisition are processed here to get the desired output.
Image processing also works in 3 phases as shown.
• Skin color segmentation
• Face detection
• Eye detection
SKIN COLOR SEGMENTATION
This phase is used to find the skin color regions from the image.
We have used the HSI model for segmenting skin color from the entire image.
The benefit of using HSI instead of RGB to segment a particular color is that it requires us to filter only the hue component, as hue alone defines color.
Hue filter applied: 0 < H < 0.11 or 0.90 < H < 1.00.
Results of skin color segmentation
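A sketch of this hue-only skin filter in MATLAB. As an assumption, the HSI hue is approximated here by the HSV hue channel from `rgb2hsv` (both use the same hue definition, normalized to [0, 1]); `frame` is one acquired RGB image.

```matlab
% Segment skin-colored pixels by hue alone
rgb = im2double(frame);           % 'frame' is one acquired RGB image
hsv = rgb2hsv(rgb);               % H, S, V channels; H is in [0, 1]
H   = hsv(:, :, 1);

% Keep hues in the skin range used by the project
skinMask = (H > 0 & H < 0.11) | (H > 0.90 & H < 1.00);

% Zero out non-skin pixels and display the result
skinOnly = rgb .* repmat(skinMask, [1 1 3]);
imshow(skinOnly);
```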
FACE DETECTION
The job of face detection is to locate a face out of the skin color segmented image.
The phase starts with binarization, i.e., converting the obtained skin color image to a binary image. This binary image is subjected to a morphological operation followed by connected component analysis to obtain the face component.
Using this face component we can design a mask that would give the entire face from original image.
• Binarization
• Morphological operation
• Connected component analysis
• Face component
• Mask
FACE DETECTION
• Binarization of the segmented image: a threshold is applied to the gray scale image. The threshold used is 0.20.
• Morphological operation: to separate the unnecessary joints.
• Connected component analysis: to segment the catchment-area-type regions created by the eyes and lips.
• Face component: fill back the watershed components into a bigger single component. The biggest such component is the face.
• Mask: obtain a rectangular bounding box around the biggest component. This is the required mask.
Results of face detection
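The binarization, morphology, and connected-component steps above can be sketched as follows, assuming the Image Processing Toolbox. The 0.20 threshold comes from the slides; the structuring-element shape and size are assumptions, and `skinOnly`/`frame` are the skin-segmented and original images from the previous phase.

```matlab
% Binarize the skin-segmented image with the fixed 0.20 threshold
gray = rgb2gray(skinOnly);
bw   = im2bw(gray, 0.20);

% Morphological opening to break unnecessary joints between regions
% (structuring-element shape and size are assumptions)
bw = imopen(bw, strel('disk', 3));

% Connected component analysis: take the largest component as the face
cc       = bwconncomp(bw);
stats    = regionprops(cc, 'Area', 'BoundingBox');
[~, idx] = max([stats.Area]);

% Mask: a rectangular bounding box around the biggest component,
% used to crop the face from the original frame
faceBox = stats(idx).BoundingBox;
face    = imcrop(frame, faceBox);
imshow(face);
```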
EYE DETECTION
The job of eye detection is to locate the eye portion, find the eye-brow and eye-lashes and compute the distance between them. This distance is sent to the output generation phase.
The technique that we have used is known as the Horizontal Average Intensity Technique.
The working flow of this module is shown on the next slide.
• Separate right and left eye
• Compute horizontal averages
• Segment out black region
• Locate eye-brow and upper eye-lashes
• Compute the distance between them
EYE DETECTION
• Separate right and left eye: this is done to avoid getting wrong results due to tilting of the face.
• Horizontal average intensities: we calculate the horizontal average intensities over each y-coordinate for both eye regions.
• Segment out black regions: using the dips observed in the horizontal intensities, we segment out the black regions.
• Locate eye-brow and upper eye-lash: find all the valleys in the horizontal average intensity vs. y-coordinate graph. Select the two biggest valleys satisfying the conditions that the valleys are at most 65 coordinates apart and at least 5 coordinates apart.
• Calculate the distance: the distance between valleys satisfying the above conditions is the required distance. Find it for both eyes.
Comparison of graphs of horizontal average intensity for an open eye and a closed eye.
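The horizontal-average and valley steps can be sketched as below for one eye region. The 5- and 65-coordinate spacing conditions come from the slides; the simple neighbour-comparison valley detector and the "deepest pair wins" rule are assumptions made for illustration, with `eyeRegion` the grayscale image of one eye area.

```matlab
% Horizontal average intensity per y-coordinate
rowAvg = mean(double(eyeRegion), 2);

% A valley is a row darker than both its neighbours (a dip in intensity)
isValley = [false; rowAvg(2:end-1) < rowAvg(1:end-2) & ...
                   rowAvg(2:end-1) < rowAvg(3:end); false];
valleys  = find(isValley);

% Pick the two deepest valleys that are 5 to 65 rows apart:
% these correspond to the eye-brow and the upper eye-lashes
best = [0 0]; bestDepth = -Inf;
for i = 1:numel(valleys)
    for j = i+1:numel(valleys)
        gap = valleys(j) - valleys(i);
        if gap >= 5 && gap <= 65
            depth = -(rowAvg(valleys(i)) + rowAvg(valleys(j)));
            if depth > bestDepth
                bestDepth = depth;
                best = [valleys(i) valleys(j)];
            end
        end
    end
end

eyeDistance = best(2) - best(1);   % distance sent to output generation
```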
OUTPUT GENERATION
The job of output generation, as the name suggests, is to produce the final output.
It receives the distance between the eye-brow and upper eye-lashes from the eye detection module.
It has to decide whether this distance represents an open eye or a closed eye.
• Calculate reference distance
• Compare distance in each frame with reference distance
• Compute number of blinks and eye-close frames
• Decide if the driver is drowsy
OUTPUT GENERATION
• Calculate reference distance: the average distance of the first 5 frames supplied to output generation is used as the reference.
• Compare distance in each frame with reference distance: as each frame arrives, compare its distance with the reference and accordingly send it to the eye-close or eye-open module.
OUTPUT GENERATION
• Compute number of blinks and eye-close frames: each time an eye-close is detected, increment a sleeping counter and return it. Each time an eye-open is detected after an eye-close, increment the blink count and return it.
• Decide if the driver is drowsy: if the sleeping counter or the blink counter reaches 5, the driver is detected to be drowsy.
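The decision logic above can be sketched as follows. The 5-frame reference window and the threshold of 5 on both counters come from the slides; the 0.8 closed-eye ratio is an assumption standing in for whatever open/closed cut-off the modules actually use, and `distances` is the stream of per-frame eye-brow-to-lash distances from eye detection.

```matlab
% 'distances' holds one eye-brow-to-lash distance per processed frame
refDist    = mean(distances(1:5));  % reference from the first 5 frames
sleepCnt   = 0;
blinkCnt   = 0;
prevClosed = false;
drowsy     = false;

for k = 6:numel(distances)
    closed = distances(k) < 0.8 * refDist;   % 0.8 factor is an assumption
    if closed
        sleepCnt = sleepCnt + 1;             % consecutive eye-close frames
    else
        if prevClosed
            blinkCnt = blinkCnt + 1;         % eye opened after a close: a blink
        end
        sleepCnt = 0;
    end
    prevClosed = closed;
    if sleepCnt >= 5 || blinkCnt >= 5
        drowsy = true;                       % driver detected to be drowsy
        break;
    end
end
```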
FUTURE SCOPE OF IMPROVEMENT
• A circuit can be designed to raise an alarm as soon as the software reports that the driver is drowsy.
• If 3D images can be captured, localizing the eyes can be made more robust by detecting the deepest part of the 3D image.
• Processing of all frames can be ensured by allocating more memory for the software to run in.
• This system does not work very efficiently for dark-skinned individuals. This can be improved by using an adaptive light source.
THANK YOU.