Are emotions and facial expressions hardwired?

Sunday, September 29th, 2019 by Cliff.

There are a few myths, and a few dodgy articles from researchers, circulating that are confusing practitioners working in the artificial intelligence, marketing, tech, airport security, and human communications industries.

These myths and challenges include:
 

A. “We don’t need to train humans – technology can read faces and detect truth, lies and malintent”. 

B. “Facial expressions aren’t universal – as different people interpret faces differently”

C. “People pose facial expressions differently when they choose to communicate feelings to others”.

These challenges are misguided and reveal a lack of understanding of the subject – or they may be motivated by proponents’ desire to gain recognition by manipulating the arguments.
Researchers and practitioners trying to understand facial expressions and emotion might be helped by understanding these ten distinctions.
 
  1. The muscles of the face responsible for the expression of felt emotions are controlled by nerves from deep in the subconscious brain (the 7th cranial nerve, from the pons of the brainstem). When we experience emotions such as fear, anger, sadness, happiness, disgust and surprise, an affect programme is activated which generates orchestrated impulses within 500ms, triggering changes in our physiology (body, autonomic nervous system (ANS), voice and face). These changes can signal to those who can read them that we are experiencing real emotion.
  2. We don’t always feel single emotions in isolation. We often have blends of two or more emotions at once. We can be both angry and disgusted at foul language or behaviour from another person. This can often result in a blend of the facial expressions for those two emotions.
  3. In some contexts, it may be inappropriate to show others what we are genuinely feeling, and so we may mask or suppress any facial movements that are activated subconsciously. This may mean that others see no movement in the face or, if a tiny movement occurs before it is suppressed (a micro facial expression), it may be missed by the inattentive.
  4. When people choose to attempt to portray an emotion using facial expression, they do not always get it right. Many emotion expressions are very difficult to pose voluntarily, with only around 10% of us able to manipulate the reliable muscles successfully when we are not experiencing the actual emotion.
  5. When you ask a person to portray an emotion such as sadness, you may see them pulling their inner brows down and pushing their bottom lip up. A felt emotion of sadness, however, triggers the 7th cranial nerve to raise the inner brows and lower the outer lip corners. This means expressions of emotions that aren’t being experienced may not be interpreted as the emotion the person intends to transmit. Or they may be judged fake, posed emotions – because they are. In summary… don’t confuse unbidden facial expressions of genuinely felt emotions with posed expressions of emotions that we are not feeling but consciously wish to communicate. They are very often different.
  6. Judging felt emotion from a photograph of the face is risky. Video is better because we can see onset, offset, duration, synchronicity and symmetry – markers that add to reliable judgements.
  7. Not all movements of the face are about emotions. Some machines and cameras will read a human face with lowered inner eyebrows and interpret it as angry. It could be that the person is simply thinking hard or in pain. A machine or camera cannot (yet) hypothesise and test those hypotheses to eliminate other causes of facial movements.
  8. The impulses that arise from emotions are not only about the face. We can see and hear changes that result from emotions from the body, voice, and our ANS system (breathing/sweating) … with tech assistance we may also pick up heart rate, digestion, pupil size and blood pressure signals.
  9. Words are not emotions – so we have to be careful not to tie facial expressions to one word, such as ‘happy’. There are many pleasurable feelings (pride, satisfaction, ecstasy) that produce variations on the theme of happiness – though when felt they will often involve the same reliable muscles of the face: the orbicularis oculi (around the eye socket) and the zygomaticus major, which pulls up the lip corners.
  10. Attempting to trigger emotions in others with stories needs care. It does not follow that a person feels fear when asked to imagine speaking in front of a large audience. Some may gain great pleasure from such an experience. Others may be angry at the request. Context and individual differences are crucial here.
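Point 7 above is essentially an argument about inference: a single detected facial movement is consistent with several underlying states, and only context can narrow them down. A minimal Python sketch of that idea follows – the action-unit labels, candidate-state table and context rules here are hypothetical illustrations, not taken from the article or from any real face-reading system:

```python
# Illustrative sketch only. The mapping below is a hypothetical example:
# it shows why a single facial movement (e.g. lowered inner brows,
# FACS action unit 4) cannot, on its own, justify one emotion label -
# several plausible causes remain until context eliminates some of them.

# Hypothetical mapping from a detected facial action to candidate explanations.
CANDIDATE_STATES = {
    "AU4_brow_lowerer": ["anger", "concentration", "pain"],
    "AU12_lip_corner_puller": ["happiness", "posed smile", "embarrassment"],
}

def candidate_explanations(detected_action, context_clues=()):
    """Return every state consistent with the detected action, minus any
    that the context rules out - rather than committing to one label."""
    candidates = list(CANDIDATE_STATES.get(detected_action, ["unknown"]))
    # Toy context rule: if we know there is no aversive stimulus present,
    # pain becomes an unlikely explanation and is eliminated.
    if "no_aversive_stimulus" in context_clues:
        candidates = [c for c in candidates if c != "pain"]
    return candidates

print(candidate_explanations("AU4_brow_lowerer"))
# -> ['anger', 'concentration', 'pain']
# A system that output only "anger" here would be making exactly the
# error described in point 7.
```

The design point is that the detector returns a set of hypotheses to be tested, not a verdict – mirroring what a trained human observer does when probing for alternative causes.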

So back to the myths…

A. “We don’t need to train humans – technology can read faces and detect truth, lies and malintent”. Not yet. One day we may have equipment that can monitor the six communication channels from humans (face, body, psychophysiology, voice, verbal style and verbal content), compare behaviour across them with the account or story being presented and with the person’s baseline behaviour, and factor in the context where the monitoring takes place. It would need artificial intelligence that can hypothesise about any inconsistencies and dynamically introduce probes or questions that test those hypotheses to inform a judgement or conclusion. Trained humans are doing this, though it isn’t easy, and I can’t predict how long it will be before we have technology that can manage it. When that day comes we will face another challenge: any interaction with the human being ‘assessed’ will be heavily contaminated by the technology, which will itself trigger behaviour that is not about deception or malintent but about the technology context. This is one reason the polygraph is falling out of favour: it merely detects stress, and it cannot differentiate between the stress of being caught in a lie, the stress of being disbelieved when truthful, and the stress of being wired up to a machine.

B. “Facial expressions aren’t universal – as different people interpret faces differently”. They are universal – felt emotions stimulate the same muscles on the face for each emotion, regardless of culture or other individual differences. The second part of this myth has nothing to do with the universality argument, and is often true. People do confuse expressions, even when they are felt and displayed with the relevant facial muscles – surprise is often confused with fear; anger is often confused with disgust – because some common muscles are at work in these pairings. Most people aren’t trained to make the finer distinctions in muscle movements, and it can be difficult because all we really see are secondary movements of the skin surface – wrinkles and bulges that result from muscles moving deeper in the face.

C. “People pose facial expressions differently when they choose to communicate feelings to others”. Well of course they do! Around 90% of the world are unable to voluntarily reproduce the facial expressions that display as a result of real, felt emotions, so when people try to mimic or fake those emotions they often don’t do very well. It is no wonder, therefore, that there will be differences between one person’s consciously posed (or faked) expressions and another’s, and that such attempts will likely be misinterpreted, depending on how skilled the poser is and how closely the pose resembles the real emotion-related expression.

Article by Cliff Lansley

An expert in emotional intelligence, behavioural analysis and high-stakes deception detection contexts, Cliff holds B Ed (Hons), MIOD, MABPsych, Cert Ed.



© Copyright 2009-2019 • Emotional intelligence Academy Limited • All Rights Reserved