More to the face than meets the eye

Researchers have been very successful in defining how facial expressions of emotion are produced, including which muscle movements create the most commonly seen expressions. Yet little is known about how these expressions are processed by the visual system. Aleix Martinez, associate professor of electrical and computer engineering and founder of the Computational Biology and Cognitive Science Lab, is working to change that.

Martinez and his team want to identify the cognitive model used by the human visual system to process facial expressions of emotion, which is a critical precursor to developing technology that imitates human perception. The project, “A Study of the Computational Space of Facial Expressions of Emotion,” is supported by a five-year, $1.8 million grant from the National Institutes of Health.

Discovering how healthy individuals perceive facial expressions of emotion is the first step. Next, Martinez’s group will study and design protocols to help diagnose pathologies including depression, post-traumatic stress disorder (PTSD) and autism.

Martinez also wants to learn how face perception develops from childhood to adulthood, how it continues to change across the lifespan and how the elderly perceive faces.

In their quest to determine whether there is a so-called normal way of processing faces (something he thinks is unlikely), Martinez and his team will soon study how young children, even babies as young as seven months old, perceive facial expressions of emotion. They will also examine different populations, and people with completely different life experiences, to see if those factors influence the perception of facial expressions of emotion. Studying groups who have experienced trauma-inducing events, such as genocide, could aid in the early detection of PTSD.

“The question is not whether the face perception will have changed, but whether these changes are consistent across subjects in a way that can be used for diagnosis,” said Martinez. “That’s what we’re working on, to define a protocol that works for the majority of people.”

Martinez’s research is already producing interesting results, including disputing the widely held belief that humans are very good at recognizing the facial expression of fear, a primal emotion. The team found that adults were actually quite bad at recognizing fear but very good at recognizing happiness.

“One of the questions we are trying to address next is what happens with people with disorders like PTSD: are they much more attuned to recognition of fear?” said Martinez. “That seems to be one of those hypotheses that will turn out to be true, but we’ll find out.”

His research has also shown that certain facial structures influence the way humans perceive emotion. For example, the distance between the baseline of the eyebrows and the mouth influences the perception of facial expressions of emotion: a short distance between the brows and mouth and a wider face are both perceived as angry, whereas a longer distance between the brows and mouth and a thinner face are perceived as sad.

Martinez is also researching a second NIH-supported project, “Computational Methods for Analysis of Mouth Shapes in Sign Languages.” The goal of this two-year, $400,000 research project is to understand what are called “facial expressions of grammar” in American Sign Language (ASL).

In ASL, as in any other sign language, part of the grammar is encoded on the face, not the hands, Martinez explains. Linguists have tried unsuccessfully for years to identify which components of facial expressions actually encode the grammar. Martinez’s group is working to design technology that can aid in the search for such components. This could revolutionize the study of sign languages, much as technology for analyzing components of the speech signal did for speech recognition.

“What we’re trying to do is to create a new revolution in the linguistic community by providing a new set of technologies that could be used to study American Sign Language,” explains Martinez. “We have had a project in the past that did similar things for the analysis of hand motion and hand shapes; now we’re turning to the face, which is a much more complicated problem.”

The second research project could have vast implications for the way sign language is taught in the future and could make it easier for deaf children to learn English and other languages. It could also aid in the design of a machine that might one day translate American Sign Language into English. Such a machine could have a dramatic impact in places like hospitals and emergency rooms, where deaf patients sometimes must wait 30 to 60 minutes for an on-call translator to arrive.

About the images: Martinez’s research has shown that when people classify images showing facial expressions of emotion, they often make asymmetric mistakes. For example, people will classify a fearful face as a surprised face, but they will very rarely, or never, classify a surprised face as fearful.


This story originally appeared in the Department of Electrical and Computer Engineering's 2010-2011 Annual Report.

Categories: Faculty, Research