Face Tracking

Currently, a number of robotic arms can be mounted to wheelchairs and used as assistive technology. While these arms are beneficial, some of them, such as the Jaco 2 arm by Kinova [1], are difficult to use for daily tasks such as feeding [2]. The difficulty is caused by the mode switching involved in operating the joystick, and previous studies have been dedicated to solving this control issue through improved mode switching [2]. The main focus of our project is to minimize the time spent with the joystick so that users can eat without outside assistance. We plan to accomplish this goal through facial expression detection. In the early weeks we experimented with MATLAB to attempt facial expression recognition. While some scripts were successful, the results were inconsistent because of the low quality of the computer's webcam and the effect of varying lighting. A larger issue was that our MATLAB scripts could not track the face as it moved. MATLAB does have toolboxes that allow for face tracking, but we decided to look for other solutions.
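For context, a per-frame face detector of the kind we were experimenting with takes only a few lines. The sketch below is a rough Python/OpenCV equivalent of what MATLAB's vision toolboxes provide, not the scripts we actually ran; because each frame is classified independently, it also shows why webcam quality and changing lighting make the output flicker from frame to frame.

```python
import cv2

# Load OpenCV's bundled frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detection is redone from scratch every frame, so poor lighting
    # or motion blur makes the detections appear and disappear.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```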

Eventually we decided to incorporate the Microsoft Kinect camera and the provided Kinect SDK [3]. Not only can the Kinect track faces, but it can also recognize facial features and provide depth information. Through the camera, the user can signal the robot to complete a motion when a specific facial expression is detected. The specific expression we are working with is the opening of the mouth. The SDK allows the Kinect to recognize eight individual features on up to six faces within the frame; the features relevant to our research are the movement and opening of the mouth. We have successfully modified an existing program within the SDK to print an output when our desired facial feature is detected, and we have eliminated false positives with a program loop that requires the mouth to be held open for a specific amount of time (sketched in the code below). By the completion of our research, the software will signal the robot to deliver the food to the user instead of printing an output.

There are many more applications for this technology, but we are limited by the time allotted for our research. Since the Kinect can track multiple features, the concept behind our work can be extended. For example, the robot could reach for and grab a wallet located on the wheelchair when the Kinect camera detects the user winking. Facial features could also be combined to signal the robot, leading to an entire library of tasks made to fit the needs of a specific user.
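The sketch below illustrates both ideas in Python: requiring an expression to be held before acting, and mapping expressions to a small library of tasks. The `expression_active()` query, the expression names, and the hold time are placeholders we chose for illustration, not the Kinect SDK's actual API; in our program the equivalent check reads the face-frame properties delivered by the SDK.

```python
import time

HOLD_SECONDS = 0.8   # required hold duration (placeholder value)
POLL_SECONDS = 0.05  # polling interval, roughly one camera frame

def expression_active(name: str) -> bool:
    """Hypothetical stand-in for querying the Kinect face frame.

    In the real program this reads the corresponding face property
    (mouth open, eye closed, ...) from the current Kinect face frame.
    """
    raise NotImplementedError

def held_for(name: str, hold: float = HOLD_SECONDS) -> bool:
    """Return True once the expression stays active for `hold` seconds.

    Requiring a sustained expression is how the loop filters out the
    false positives a single-frame detection would produce.
    """
    started = time.monotonic()
    while time.monotonic() - started < hold:
        if not expression_active(name):
            return False  # expression dropped out: treat as a false positive
        time.sleep(POLL_SECONDS)
    return True

# A small library of tasks keyed by expression, in the spirit of
# mapping mouth openings, winks, etc. to robot motions. The task
# bodies are placeholders for commands sent to the arm.
TASKS = {
    "mouth_open": lambda: print("deliver food to user"),
    "left_eye_closed": lambda: print("retrieve wallet from wheelchair"),
}

def control_loop() -> None:
    while True:
        for expression, task in TASKS.items():
            if expression_active(expression) and held_for(expression):
                task()
        time.sleep(POLL_SECONDS)
```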

Works Cited

[1] Kinova. 2016.

[2] L. Herlant, R. Holladay, and S. Srinivasa, "Assistive Teleoperation of Robot Arms via Automatic Time-Optimal Mode Switching," in The Eleventh ACM/IEEE International Conference on Human-Robot Interaction, Christchurch, 2016, pp. 35-42.

[3] Kinect for Windows SDK 2.0. Microsoft, 2014.

