Q: In the README file, I found the following description of the data.
Each video has four annotation files:
[video name]_drowsiness.txt : 0 means Stillness and 1 means Drowsy.
[video name]_head.txt : 0 means Stillness, 1 means Nodding and 2 means Looking aside.
[video name]_mouth.txt : 0 means Stillness, 1 means Yawning and 2 means Talking & Laughing.
[video name]_eye.txt : 0 means Stillness and 1 means Sleepy-eyes.
Do you consider Yawning, Sleepy-eyes and Nodding as Drowsy behavior?
If not, do we need to classify all of them (Drowsy, Nodding, Yawning, Sleepy-eyes) separately?
A: Thanks for your question. Our answer is as follows:
1. We consider Yawning, Sleepy-eyes and Nodding as Drowsy behavior, while Looking aside and Talking & Laughing count as Non-drowsy behavior.
2. The required result of the challenge session is only the binary classification of drowsy vs. non-drowsy.
You do not need to submit the detailed behaviors, such as Yawning, Sleepy-eyes, Nodding, Looking aside or Talking & Laughing.
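The mapping above can be sketched in code. The following is a minimal, hedged example that turns the per-behavior labels into the binary challenge label; it assumes each annotation file stores one digit per video frame (the README excerpt does not specify the exact file layout, so `load_labels` and the digit-per-frame assumption are illustrative only).

```python
def load_labels(path):
    """Read an annotation file into a list of integer labels.

    Assumes one digit per frame with no separators; adjust the parsing
    if the actual files use a different layout.
    """
    with open(path) as f:
        return [int(c) for c in f.read().strip()]


def frame_is_drowsy(head, mouth, eye):
    """Map the detail behaviors of one frame to the binary label.

    Drowsy behaviors: Nodding (head == 1), Yawning (mouth == 1),
    Sleepy-eyes (eye == 1). Looking aside (head == 2) and
    Talking & Laughing (mouth == 2) count as Non-drowsy.
    """
    return head == 1 or mouth == 1 or eye == 1
```

For example, a frame with `head == 2` (Looking aside) and otherwise Stillness maps to non-drowsy, while a frame with `eye == 1` (Sleepy-eyes) maps to drowsy.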