Facial Expression

Via Gene Expression, I came across an old article in Monitor on Psychology about facial expressions.

Joseph Campos, PhD, of the University of California at Berkeley […] says, “there is profound agreement that the face, along with the voice, body posture and hand gestures, forecast to outside observers what people will do next.”

The point of contention remains whether the face also says something about a person’s internal state. Some, such as Izard, say, “Absolutely.” Detractors, such as Alan Fridlund, PhD, of the University of California, Santa Barbara, say an adamant “No.” And others, including Campos and Ekman, land somewhere in the middle. The face surely can provide important information about emotion, but it is only one of many tools and should never be used as a “gold standard” of emotion as some researchers, particularly those studying children, have tended to do.

“The face is a component [of emotion],” says Campos. “But to make it the center of study of the human being experiencing an emotion is like saying the only thing you need to study in a car is the transmission. Not that the transmission is unimportant, but it’s only part of an entire system.”

Based on findings that people label photos of prototypical facial expressions with words that represent the same basic emotions—a smile represents joy, a scowl represents anger—Ekman and Izard pioneered the idea that by carefully measuring facial expression, they could evaluate people’s true emotions. In fact, since the 1970s, Ekman and his colleague Wallace Friesen, PhD, have dominated the field of emotion research with their theory that when an emotion occurs, a cascade of electrical impulses, emanating from emotion centers in the brain, triggers specific facial expressions and other physiological changes—such as increased or decreased heart rate or heightened blood pressure.

If the emotion comes on slowly, or is rather weak, the theory states, the impulse might not be strong enough to trigger the expression. This would explain in part why there can sometimes be emotion without expression, they argue. In addition, cultural “display rules”—which determine when and whether people of certain cultures display emotional expressions—can derail this otherwise automatic process, the theory states. Facial expressions evolved in humans as signals to others about how they feel, says Ekman.

“At times it may be uncomfortable or inconvenient for others to know how we feel,” he says. “But in the long run, over the course of evolution, it was useful to us as signalers. So, when you see an angry look on my face, you know that I may be preparing to respond in an angry fashion, which means that I may attack or abruptly withdraw.”

Dr. Ekman is famous for developing FACS, the Facial Action Coding System, for quantifying facial expressions. Here is an interview with him in the New York Times.

The Federal Bureau of Investigation, the Central Intelligence Agency and state and local police forces have turned to Dr. Ekman for help learning to read subtle emotional cues from the faces, voices and body language of potential assassins, terrorists and questionable visa applicants.

Around the world, more than 500 people—including neurologists, psychiatrists and psychologists—have learned Dr. Ekman’s research tool called FACS, or Facial Action Coding System, for deciphering which of the 43 muscles in the face are working at any given moment, even when an emotion is so fleeting that the person experiencing it may not be conscious of it.

That detailed knowledge of facial expression has earned Dr. Ekman, 69, a supporting role in the movie industry, where he has consulted with animators from Pixar and Industrial Light & Magic to give lifelike expressions to cartoon characters.

While psychologists use FACS to understand people, we computer vision scientists use it so that we can get machines to recognize or synthesize facial expressions.

There are seven basic facial emotional expressions: anger, sadness, fear, surprise, disgust, contempt and happiness.
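To make this concrete, here is a toy Python sketch of how one might map detected FACS action units (AUs) to these seven emotions. The AU combinations below are commonly cited prototypes, but the exact lists vary by source, so treat them as illustrative assumptions rather than the official FACS tables.

```python
# Illustrative mapping from the seven basic emotions to prototypical
# FACS action-unit combinations. Exact AU lists vary by source, so
# the numbers below are an assumption, not ground truth.
PROTOTYPICAL_AUS = {
    "happiness": {6, 12},           # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},        # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},     # brow raisers + upper lid raiser + jaw drop
    "fear":      {1, 2, 4, 5, 20, 26},
    "anger":     {4, 5, 7, 23},     # brow lowerer + lid and lip tighteners
    "disgust":   {9, 15},           # nose wrinkler + lip corner depressor
    "contempt":  {12, 14},          # unilateral in practice
}

def guess_emotion(detected_aus: set[int]) -> str:
    """Return the emotion whose prototype overlaps most with the detected AUs."""
    score = lambda emo: len(PROTOTYPICAL_AUS[emo] & detected_aus)
    best = max(PROTOTYPICAL_AUS, key=score)
    return best if score(best) > 0 else "neutral"

print(guess_emotion({6, 12}))  # -> happiness
```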

One of the problems we have is collecting image/video data with spontaneous expressions instead of posed ones, since posed expressions are somewhat different from natural ones.

Q. So how do you tell a fake smile from a real one?

A. In a fake smile, only the zygomatic major muscle, which runs from the cheekbone to the corner of the lips, moves. In a real smile, the eyebrows and the skin between the upper eyelid and the eyebrow come down very slightly. The muscle involved is the orbicularis oculi, pars lateralis.
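In FACS terms, the zygomatic major corresponds to action unit 12, and the orbicularis oculi activity Ekman describes is usually captured by action unit 6. Assuming an upstream detector that outputs AU intensities on a 0-to-1 scale, a real-vs-fake smile check might look like this rough sketch (the thresholds are arbitrary illustration values, not Ekman’s):

```python
# Minimal sketch of the real-vs-fake (Duchenne) smile test, assuming a
# detector that reports FACS action-unit intensities in the range 0..1.
# AU12 = lip corner puller (zygomatic major); AU6 = cheek raiser
# (orbicularis oculi). Thresholds are illustrative, not calibrated.

def classify_smile(au_intensity: dict[int, float],
                   smile_thresh: float = 0.3,
                   eye_thresh: float = 0.2) -> str:
    au12 = au_intensity.get(12, 0.0)  # zygomatic major activity
    au6 = au_intensity.get(6, 0.0)    # orbicularis oculi activity
    if au12 < smile_thresh:
        return "no smile"
    return "real (Duchenne) smile" if au6 >= eye_thresh else "fake smile"

print(classify_smile({12: 0.8, 6: 0.5}))  # -> real (Duchenne) smile
print(classify_smile({12: 0.8}))          # -> fake smile
```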

Telling a fake smile from a real one is somewhat similar to the problem Dr. Ekman discusses in his book “Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage.”

People have been studying facial expressions for a long time. When we talk of facial expressions, an unexpected name comes up: Charles Darwin. He demonstrated the universality of facial expressions in “The Expression of the Emotions in Man and Animals.”

There was a very interesting article (I like the PDF version better) in the New Yorker by Malcolm Gladwell about facial expressions, their relationship to emotions and their use in law enforcement.

Coming back to FACS, Dr. Ekman et al. sell a package for training people to score the different facial expressions. You can read parts of their FACS manual and Investigators’ Guide online. An older version of different FACS expressions is available here.

But it is not only psychologists who use FACS. Psychologists typically score the different facial actions manually, which requires training and is time-consuming.

People in the computer vision research community have been working for quite some time on the analysis of the human face. The most well-known task is face recognition, but there has also been a lot of work on automatic facial expression recognition. You can visit the facial analysis links page. There is also a facial expression web page with links to both psychologists and computer scientists.
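Most automatic expression recognizers follow the same basic recipe: extract a feature vector from the face image, then classify it into one of the basic emotions. Here is a bare-bones sketch of that pipeline, assuming scikit-learn is available; the feature extractor and the random “training set” are placeholders, since real systems use tracked landmarks, Gabor filters, or AU intensities as features.

```python
# Bare-bones sketch of an expression-recognition pipeline:
# feature extraction followed by a multi-class classifier.
import numpy as np
from sklearn.svm import SVC

EMOTIONS = ["anger", "sadness", "fear", "surprise",
            "disgust", "contempt", "happiness"]

def extract_features(image: np.ndarray) -> np.ndarray:
    """Placeholder: flatten and normalize pixel values. Real pipelines
    extract landmark geometry, texture descriptors, or AU intensities."""
    return image.astype(np.float32).ravel() / 255.0

# Stand-in for a labeled training set of face images.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(70, 32, 32))   # 70 tiny fake "faces"
labels = rng.integers(0, len(EMOTIONS), size=70)   # fake emotion labels

X = np.stack([extract_features(im) for im in images])
clf = SVC(kernel="linear").fit(X, labels)

test = rng.integers(0, 256, size=(32, 32))
print(EMOTIONS[clf.predict([extract_features(test)])[0]])
```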

FACS has been used both for analyzing and synthesizing facial expressions. Researchers at Linköping University, Sweden, defined CANDIDE, a 3-D wireframe model of the face with facial actions corresponding to those in the FACS system. You can play with a 3-D face mesh by varying the intensity of different facial actions here.
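The idea behind such a model is simple: each facial action is a fixed displacement field over the mesh vertices, and the animated face is the neutral mesh plus intensity-weighted displacements. A toy sketch follows; the three-vertex “mesh” is made up for illustration, whereas the real CANDIDE model has on the order of a hundred vertices.

```python
# Toy CANDIDE-style deformation: deformed = base + sum(alpha_i * field_i).
import numpy as np

base_mesh = np.array([[0.0, 0.0, 0.0],    # left mouth corner
                      [1.0, 0.0, 0.0],    # right mouth corner
                      [0.5, 0.5, 0.1]])   # upper lip

# One displacement field per facial action, same shape as the mesh.
action_units = {
    "lip_corner_puller": np.array([[-0.1, 0.2, 0.0],
                                   [ 0.1, 0.2, 0.0],
                                   [ 0.0, 0.0, 0.0]]),
}

def deform(mesh, fields, intensities):
    """Add each action's displacement field, scaled by its intensity."""
    out = mesh.copy()
    for name, alpha in intensities.items():
        out += alpha * fields[name]
    return out

print(deform(base_mesh, action_units, {"lip_corner_puller": 0.8}))
```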

With the advent of MPEG-4, there has been a move from FACS to MPEG-4 in the computer vision community. The MPEG-4 Facial Definition Parameters (FDP) define different feature points on the face, while the Facial Animation Parameters (FAP) define the movement of different parts of the face. Note that these definitions neither refer to specific muscles nor relate directly to emotions. This standard is actually more convenient for facial animation.
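Roughly, animating a face with MPEG-4 parameters means displacing feature points by FAP values measured in FAPUs, i.e., normalized fractions of neutral-face distances such as the mouth width. The sketch below illustrates the idea under simplifying assumptions; the point names and values are mine, not the standard’s actual tables.

```python
# Sketch of the MPEG-4 FDP/FAP idea: named feature points on a neutral
# face, each FAP displacing one point along one axis, measured in FAPUs.
# Point names and values here are illustrative assumptions.

neutral = {"mouth_left_corner": (140.0, 210.0),
           "mouth_right_corner": (180.0, 210.0)}

MW = neutral["mouth_right_corner"][0] - neutral["mouth_left_corner"][0]
MW_FAPU = MW / 1024.0  # mouth-width FAPU: neutral mouth width / 1024

def apply_fap(points, point_name, axis, fap_value, fapu):
    """Displace one feature point by fap_value FAPUs along axis (0=x, 1=y)."""
    x, y = points[point_name]
    delta = fap_value * fapu
    return {**points, point_name: (x + delta, y) if axis == 0 else (x, y + delta)}

# Pull the left mouth corner outward by 100 FAPUs (a stretch-like motion).
animated = apply_fap(neutral, "mouth_left_corner", 0, -100.0, MW_FAPU)
print(animated["mouth_left_corner"])
```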

By Zack

Dad, gadget guy, bookworm, political animal, global nomad, cyclist, hiker, tennis player, photographer

7 comments

  1. Owl: You go to secondary security screening. 😉

    A friend of mine once asked if my algorithms could recognize Mona Lisa’s expression.

  2. That was really interesting, Zack. Thanks for sharing. Plus, now I can go around scrutinizing everyone for real vs. fake smiles. 🙂

    So, hey, could your algorithms recognize Mona Lisa’s expression?

  3. yasmine: You are welcome.

    So, hey, could your algorithms recognize Mona Lisa’s expression?

    Take a look. Can you tell what her expression is? I think it is very ambiguous.

    Any facial expression recognition program will have difficulty with classifying such an ambiguous expression. On the other hand, we can model the face and synthesize it with decent accuracy.

  4. Do you have any article about what our face tells us about our personality that you could email me?

Comments are closed.