March 17, 1999

Computer program trained to read faces developed by Salk team


La Jolla, CA – A computer program developed by a Salk-led team has been trained to distinguish among individual facial muscle movements, helping to sort false from genuine expressions. What’s more, the program performs as well as a psychologist trained to read faces and markedly better than human non-experts.

“Computers have a difficult time analyzing expressions on faces, something we can do without even thinking,” said Terrence Sejnowski, Salk professor and senior author of the study, which appears in the current issue of Psychophysiology.

“But by mimicking the ability of humans to learn by experience, computers have now broken through this barrier,” he said.

Investigators hope that their program will prove helpful to law enforcement officials and mental health professionals.

“When someone is lying, their true feelings often flicker across their face in what we call a micro-expression, which is quickly covered up by a posed expression,” said Paul Ekman, professor of psychology at the University of California, San Francisco and co-author of the study. “These signals may be too brief for professionals to detect in an interview setting, but they can be picked up if the conversation is videotaped and reviewed.”

The problem is that human analysis is labor-intensive and painstakingly slow. “It takes about one hour to score one minute of tape,” explained Marian S. Bartlett, Salk postdoctoral researcher and first author of the study. “Our program, on the other hand, can do a minute of tape in about five minutes, and once we optimize the program it will run in near real-time.”

So, exactly what is being measured by the computer program? In the 1970s, a team of psychologists led by Ekman developed a code that breaks down facial expressions into component movements by individual facial muscles. For example, the crinkling of the eyes that causes crow’s feet is produced by contraction of the orbicularis oculi muscle, an action that in a spontaneous smile is coordinated with movement of the zygomaticus major muscle, which lifts up the corners of the mouth. Each of these movements has a designated action unit number. “So you could describe a smile as AU6 + AU12,” said Bartlett.
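
For readers who want to see the coding scheme in miniature, the sketch below, written in Python, treats an expression as a set of numbered action units. The AU6 + AU12 smile comes straight from the article; the remaining dictionary entries are standard FACS labels added here purely for illustration.

    # A minimal sketch of the coding scheme described above. The
    # AU6 + AU12 smile follows the article; the other action unit
    # names are standard FACS labels included for illustration.

    ACTION_UNITS = {
        1: "inner brow raiser (central frontalis)",
        2: "outer brow raiser (lateral frontalis)",
        6: "cheek raiser (orbicularis oculi)",
        12: "lip corner puller (zygomaticus major)",
    }

    def describe(aus):
        """Spell out an expression given as a set of action unit numbers."""
        return " + ".join(f"AU{n} ({ACTION_UNITS[n]})" for n in sorted(aus))

    # The article's example: a spontaneous smile is AU6 + AU12.
    print(describe({6, 12}))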

People not well versed in the subtleties of facial movements have a very difficult time “faking” expressions. For example, sadness has a characteristic set of gestures, one of the most distinctive being the contraction of the central frontalis muscle that raises the inner corners of the brows, producing wrinkles in the central forehead.

“That’s a really difficult one to pull off if it’s not spontaneous,” said Bartlett. For that reason, law enforcement officials are interested in programs that can analyze suspects being questioned and raise red flags when they detect insincere emotions.

The automated program is also of interest to mental health professionals. For example, Ekman was involved in a case in which a woman had convinced her team of doctors that she was ready for discharge from a psychiatric hospital. Shortly before her release, however, she confessed that she, in fact, planned to commit suicide.

When Ekman, an expert in facial expression analysis, analyzed a videotape of her interview frame-by-frame, he detected a clue to her deception. When she was asked, “What are your plans for the future?” a look of despair flitted across the woman’s face before being quickly covered up by a smile.

“Fortunately,” said Ekman, “in this case, the patient had admitted her deception and accepted further treatment. Ideally, psychiatrists would like to have a tool to flag such potentially dangerous situations, but they don’t have the time to score hours of videotape manually.”

The program works by comparing images of faces to 60 filters, or templates, each of which looks for independent components of facial movement in different regions. For instance, raising the left inner brow would increase a face’s match to filter no. 1, whereas raising the left outer brow would increase the match to filter no. 2. The computer analyzes the information from all 60 filters and decides whether the collective output matches AU1 or AU2 and so on.
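
A rough sketch of that filter-matching step, written in Python with NumPy, might look like the following. The random filter bank, the image size, and the filter-to-action-unit mapping are all illustrative stand-ins; the study’s actual templates and decision rule were derived from facial-image data.

    # A minimal sketch of the filter-matching scheme described above.
    # Everything here is an illustrative assumption, not the study's
    # actual implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    N_FILTERS = 60
    IMAGE_PIXELS = 96 * 96          # assumed face-image resolution

    # Stand-in filter bank: one template (row) per independent component.
    filters = rng.standard_normal((N_FILTERS, IMAGE_PIXELS))

    # Hypothetical assignment of filters to the action units they
    # evidence, echoing the brow examples above.
    FILTER_TO_AU = {0: "AU1", 1: "AU2"}

    def score_action_units(face_image):
        """Match a face against all 60 filters, then pool per action unit."""
        responses = filters @ face_image.ravel()   # one match score per filter
        scores = {}
        for idx, au in FILTER_TO_AU.items():
            scores[au] = scores.get(au, 0.0) + float(responses[idx])
        return scores

    face = rng.standard_normal(IMAGE_PIXELS)       # stand-in face image
    print(score_action_units(face))

In practice, the decision across all 60 outputs would be made by a trained classifier rather than a fixed lookup table; the pooling step here is only meant to show how the filter responses feed the action-unit decision.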

In the current study, the program was trained to recognize six of the 46 individual muscle actions described by Ekman. For all six actions, it outperformed human non-experts and performed as well as highly trained human experts. The investigators next plan to teach it the remaining actions and then tackle combinations of those actions.

“Although we have a proof of principle that computers can be taught to recognize facial expressions,” said Sejnowski, “there is still a long way to go before we have practical systems that are as flexible as humans over a wide range of head positions and lighting conditions. The next step is to integrate what we have done with work by other groups on solving these problems.”

Joseph C. Hager at the Network Information Research Corp. in Salt Lake City is a co-author of the study, titled “Measuring Facial Expressions by Computer Image Analysis.” The work was supported by the National Science Foundation, the Howard Hughes Medical Institute, and a Lawrence Livermore National Laboratory Intra-University Agreement.

For additional information, contact: Rebecca Sladek Nowlis
University of California, San Francisco
(415) 476-1045

The Salk Institute for Biological Studies, located in La Jolla, Calif., is an independent nonprofit institution dedicated to fundamental discoveries in the life sciences, the improvement of human health and conditions, and the training of future generations of researchers. The Institute was founded in 1960 by Jonas Salk, MD, with a gift of land from the City of San Diego and the financial support of the March of Dimes Birth Defects Foundation.

For More Information

Office of Communications
Tel: (858) 453-4100
press@salk.edu