A computer system that can read body language

Scientists at Carnegie Mellon University's Robotics Institute (CMU RI) are working on a computer system that can read body language right down to the position of the fingers. The new process works in real time and even in crowds, opening the door to a more natural way for people and machines to interact.

Communicating with computers is mostly confined to typing, mouse clicks, and screen touching. Though talking has been added to that list, human beings don't communicate with words alone. Half of human communication comes from body language, and without taking it into account, interactions with machines can become difficult and laborious.

The tricky bit is getting computers to identify human poses. These are often subtle and include the positions of individual fingers, which can be obscured by objects or other people.



The team led by Yaser Sheikh, associate professor of robotics at Carnegie Mellon, combined several approaches to solve this problem. One was to simply provide the computer with more data by having a pair of postgraduate students stand in front of a camera making thousands of different poses and gestures.


Panoptic Studio


The new method was developed with the help of the Panoptic Studio, a two-story dome embedded with 500 video cameras. This allowed the computer to study poses from hundreds of different angles at once, using a large number of subjects.

Video: Computer system reads body language (https://www.youtube.com/watch?v=cPiN2ncuK0Y)

A single shot gives 500 views of a person's hand and automatically annotates the hand position. In this study, however, the researchers used only 31 high-definition cameras, but were still able to build a massive data set.
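The auto-annotation trick is easiest to see with a small example: with many calibrated cameras, 2D detections of the same keypoint can be triangulated into a 3D position with plain linear algebra, then re-projected into every view to label it. Below is a minimal sketch of that triangulation step (the classic direct linear transform), assuming hypothetical projection matrices and detections; it is an illustration, not the CMU pipeline itself.

```python
# Hypothetical illustration: triangulating one hand keypoint seen by many
# calibrated cameras. Standard DLT triangulation, not CMU's actual code.
import numpy as np

def triangulate_point(proj_mats, points_2d):
    """Recover a 3D point from N camera views.

    proj_mats : list of 3x4 projection matrices P = K [R | t], one per camera
    points_2d : list of (u, v) pixel detections of the same keypoint
    """
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the homogeneous X.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # Least-squares solution: the right singular vector of the stacked
    # constraint matrix with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize to (x, y, z)
```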


The team is currently working on moving from 2D models to 3D models for better recognition. The ultimate goal is a system that allows a single camera and a laptop to read the poses of a whole group of people.
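To make the single-camera scenario concrete, here is a rough sketch of 2D pose detection on one image using OpenCV's DNN module and a pretrained pose network; the model file names and image path are placeholders for files obtained separately, and the sketch stands in for, rather than reproduces, the team's system.

```python
# Rough sketch: single-camera 2D pose estimation with OpenCV's DNN module.
# PROTO/WEIGHTS are placeholder paths to a pretrained pose model.
import cv2

PROTO = "pose_deploy.prototxt"           # network definition (placeholder)
WEIGHTS = "pose_iter_440000.caffemodel"  # pretrained weights (placeholder)
N_KEYPOINTS = 18                         # COCO body parts: nose, neck, ...

net = cv2.dnn.readNetFromCaffe(PROTO, WEIGHTS)

frame = cv2.imread("person.jpg")         # placeholder input image
h, w = frame.shape[:2]

# Preprocess: scale pixels to [0, 1] and resize to the network input size.
blob = cv2.dnn.blobFromImage(frame, 1.0 / 255, (368, 368),
                             (0, 0, 0), swapRB=False, crop=False)
net.setInput(blob)
heatmaps = net.forward()  # shape: (1, channels, H', W')

# Each keypoint has its own heatmap; take the peak as the detection.
points = []
for i in range(N_KEYPOINTS):
    heatmap = heatmaps[0, i]
    _, conf, _, (x, y) = cv2.minMaxLoc(heatmap)
    if conf > 0.1:
        # Map the heatmap peak back to image coordinates.
        points.append((int(x * w / heatmap.shape[1]),
                       int(y * h / heatmap.shape[0])))
    else:
        points.append(None)  # keypoint not confidently detected

print(points)  # one (x, y) per body keypoint, or None
```

Each keypoint is read off as the peak of its heatmap; lifting such a 2D skeleton to the full 3D models the team is pursuing is the harder part of the problem.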

The technology could help diagnose and treat behavioral conditions such as autism, depression, and dyslexia, and could power new monitoring systems for physical therapy and rehabilitation. It could also make systems such as self-driving cars and home robots safer.

The research will be presented at the 2017 Computer Vision and Pattern Recognition Conference in Honolulu, which runs from July 21 to 26.
