Driver Drowsiness Detection Dataset

Computer Vision Lab, National Tsing Hua University

Introduction

This Challenge Special Session uses a driver drowsiness video dataset collected by the NTHU Computer Vision Lab. The entire dataset (training, evaluation, and testing sets) contains 36 subjects of different ethnicities, recorded with and without glasses/sunglasses under a variety of simulated driving scenarios, including normal driving, yawning, slow blink rate, falling asleep, bursting out laughing, etc., under day and night illumination conditions. The subjects were recorded while sitting on a chair and playing a simple driving game with a simulated steering wheel and pedals; meanwhile, an experimenter instructed them to perform a series of facial expressions. The total duration of the entire dataset is about nine and a half hours.

The training dataset contains 18 subjects recorded under 5 different scenarios (BareFace, Glasses, Night_BareFace, Night_Glasses, Sunglasses). For each subject, the yawning sequence and the slow-blink-rate-with-nodding sequence are each about 1 minute long. The sequences for the two most important cases, a combination of drowsiness-related symptoms (yawning, nodding, slow blink rate) and a combination of non-drowsiness-related actions (talking, laughing, looking to both sides), are each about 1.5 minutes long. The evaluation and testing datasets contain 90 driving videos (from the other 18 subjects) that mix drowsy and non-drowsy status under the different scenarios.
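For readers scripting against the training set, the structure described above can be sketched in code. Note that the sequence-type identifiers and the `nominal_training_minutes` helper below are illustrative assumptions derived from this description, not the dataset's actual file naming; the durations are the approximate recording lengths stated above.

```python
# Sketch of the training-set structure described in the text.
# Sequence-type names are hypothetical labels, not official file names;
# durations are approximate per-sequence recording lengths in minutes.

SCENARIOS = ["BareFace", "Glasses", "Night_BareFace", "Night_Glasses", "Sunglasses"]

SEQUENCE_MINUTES = {
    "yawning": 1.0,                # yawning sequence, ~1 minute
    "slowBlinkWithNodding": 1.0,   # slow blink rate with nodding, ~1 minute
    "drowsyCombination": 1.5,      # drowsiness-related symptoms combined, ~1.5 minutes
    "nondrowsyCombination": 1.5,   # non-drowsiness-related actions combined, ~1.5 minutes
}

def nominal_training_minutes(num_subjects: int = 18) -> float:
    """Approximate total training footage implied by the description above."""
    per_scenario = sum(SEQUENCE_MINUTES.values())  # ~5 minutes per subject per scenario
    return num_subjects * len(SCENARIOS) * per_scenario

print(nominal_training_minutes())  # 450.0 (minutes), i.e. ~7.5 hours
```

This back-of-the-envelope total (18 subjects x 5 scenarios x ~5 minutes) is consistent with the overall dataset duration of about nine and a half hours once the 90 evaluation and testing videos are included.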

Camera Setting and Video Format

We used active infrared (IR) illumination to acquire IR videos during dataset collection. The video resolution is 640x480, in AVI format. The videos of the Night_BareFace and Night_Glasses scenarios were captured at 15 frames per second; those of the BareFace, Glasses, and Sunglasses scenarios were captured at 30 frames per second. The dataset is divided into training, evaluation, and testing sets. The testing videos are produced by mixing videos of different driving scenarios.
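Because the frame rate differs between the night and day scenarios, any code that converts frame indices to timestamps (e.g., for aligning frame-level drowsiness labels) must branch on the scenario name. A minimal sketch, assuming the scenario strings above; the helper functions themselves are not part of the dataset release:

```python
# Frame rates stated in the dataset description: night scenarios were
# captured at 15 fps, day scenarios at 30 fps; all videos are 640x480 AVI.

NIGHT_SCENARIOS = {"Night_BareFace", "Night_Glasses"}

def scenario_fps(scenario: str) -> int:
    """Return the capture frame rate for a given scenario name."""
    return 15 if scenario in NIGHT_SCENARIOS else 30

def frame_to_seconds(frame_index: int, scenario: str) -> float:
    """Convert a 0-based frame index to a timestamp in seconds."""
    return frame_index / scenario_fps(scenario)

print(scenario_fps("Night_Glasses"))       # 15
print(frame_to_seconds(45, "Sunglasses"))  # 1.5
```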

[Figure: same scenario, different behaviors]

[Figure: same behavior, different scenarios]

Database Access

To access the database, please fill out the Dataset License Agreement and email it, with the subject "Dataset on Driver Drowsiness Detection from Video", to shantsun@mx.nthu.edu.tw.

Citation

The following publication must be cited whenever this dataset is used in any paper, publication, or report.

Ching-Hua Weng, Ying-Hsiu Lai, and Shang-Hong Lai, “Driver Drowsiness Detection via a Hierarchical Temporal Deep Belief Network”, in Asian Conference on Computer Vision Workshop on Driver Drowsiness Detection from Video, Taipei, Taiwan, Nov. 2016.