Deception occurs when one person deliberately intends to mislead another, without prior notification. In this article, we explore the different reasons humans lie and how you can protect yourself against high-stakes deception in your personal and professional life.
We talk about lies and deception a great deal in daily life, and we often use the two words interchangeably. From our personal and professional relationships through to the latest media reports on the most pressing political debates, deception (or potential deception) is highlighted time and again.
So you would think that, given its seemingly common occurrence in our daily language, we would have an agreed definition of ‘deception’ across the social sciences. Surely we know what a ‘lie’ is? The reality is far from it, and if we also bring in a philosophical viewpoint, the waters become muddier still and a definition less agreed upon. So let’s take a look at a handful of ideas on this concept, and then we’ll introduce the working definition that we intend to use as a basis for the remainder of this article.
A simple definition of deception, offered by the Oxford English Dictionary (1989), is “…to make a false statement with the intention to deceive.”
An issue with this definition, though, is that it focuses on only one type of lie: a ‘falsification’, in which we actually convey information (as a statement) with the intent to mislead. It does not seem to apply to lies of ‘omission’ or ‘concealment’, i.e. leaving out or hiding, respectively, information that could have been shared to clarify the truth of a statement.
Another definition, this time from Bok (1978), is “…any intentionally deceptive message that is stated.” We see the similarity with the previous OED definition in that it alludes to the liar ‘stating’ a deceptive communication. But what these two definitions also have in common is the concept of ‘intent’: there is a deliberate intention on the part of the communicator to deceive a receiver. This begins to rule out the misinformed, naïve, deluded, or self-deceived individual from the category of ‘Liar’, as they do not believe the information they are transferring to be false, i.e. their misleading of the receiver is unintentional.
Let’s broaden this definition, moving towards our working definition. DePaulo & Malone (2003) give us, “a deliberate attempt to mislead others.” This has the component of intent (deliberate attempt) and could also cover both lies of omission/concealment and falsification. However, it is perhaps a little too broad. If, for example, you go to a play at the theatre and an actor is portraying his or her character on stage so convincingly that you are drawn into the drama of the story, then according to this definition they are lying. They have deliberately attempted (successfully) to mislead you, if only for a short period of time. So how can we rule actors and the like out of the ‘Liar’ category?
With the following definition from Vrij (2008), we get a little closer to our end goal, stating that deception is “…a successful or unsuccessful deliberate attempt, without forewarning, to create in another a belief which the communicator considers to be untrue.”
So we have a definition here that covers intent, misleading others into believing something the liar knows to be untrue, and the phrase ‘without forewarning’. In the case of the actor, you have forewarning of the actor’s aim to have you suspend your disbelief for the duration of the play; therefore they are not being deceptive. This definition also makes clear that the success (or not) of the attempted lie is neither here nor there: it is a deceptive communication whatever the result.
Our final, and working, definition agrees with Vrij but simplifies his definition somewhat. This definition comes from Ekman (2009); “one person intends to mislead another, doing so deliberately, without prior notification of this purpose.”
In summary, and with this final definition, we can now see that deception (interchangeable here with “lie”) involves someone intentionally attempting to mislead another person or persons, without their knowledge or consent. This can be done either through ‘falsification’ of a story or account, i.e. ‘making something up’, or through hiding and omitting information that could/should have been shared. This is why many courts around the world ask anyone making a statement to agree to tell, “…the truth, the WHOLE truth, and nothing but the truth.”
We’ve defined what we mean by ‘lies’ and ‘deception’, but why do we do it? Think back over the last week. How many times have you lied? How about over the last 24 hours? Maybe even within the last hour?
Lying forms part of our day-to-day communications in one form or another. Unless of course, you are Eli Loker, the fictional character from the Fox TV Drama “Lie to Me” (Roth & Hines, 2009), who practices ‘Radical Honesty’, with varying consequences and degrees of success.
But are all lies bad? At the risk of sparking an ethical and moral debate, it would be difficult to go through a 24-hour period (let alone a week) without some form of lying at some level.
If you are a parent you may ‘lie’ to your children about the existence of mythical beings that bring presents or chocolate eggs. But, parent or not, think about those altruistic lies; those little white lies that we tell to avoid hurting someone’s feelings or to protect them from the pain of the truth in some way. Here are some examples:
“It’s not THAT bad…” after a friend has shown you the results of their hair dye escapades, sporting a mottled green and orange hair colour.
“We should meet for a coffee sometime…” after bumping into an old acquaintance from high school that you never really liked.
“That was delicious!” after struggling to finish a somewhat tasteless meal cooked by your partner.
“I’m fine thanks…” after being asked how you are by the office receptionist when you actually feel ill and are having a rather stressful day.
None of these examples has malice behind it. They serve to comment on something in a way that prevents offence, spares another’s feelings, and avoids a potentially awkward social interaction.
We call these types of lies “social lubricant,” as they keep things running smoothly. If everyone were as brutally honest as Eli Loker (mentioned above), our lives would certainly be very different. Whether for the better or worse, I’ll leave that for you to ponder.
So we separate lies into Serious and Non-Serious. Those just mentioned would typically fall into the Non-Serious category, but that may not be the case for lies of a more malicious or self-serving nature: lies told to avoid punishment, for personal gain (even at the expense of others), to pass blame to another, undeserving person, and so on.
These more serious lies can have far greater consequences; in some cases they destroy careers, relationships, and even lives.
So what are the consequences of undetected deception, and why would we want to develop our detection skills?
Consider the costs of deception in organisations. Whether we are dealing with a dispute within a team, reviewing performance, investigating an allegation of harassment, or recruiting for a senior executive within the business, there are many opportunities for deception. The cost of this deceit isn’t easy to quantify and generalise but its prevalence is well documented.
Looking at recruitment alone, the costs mount quickly: needs analysis, applicant searching and screening, interviewing, and making offers. Add to the mix that a large proportion of society considers misrepresenting our skills, qualifications, etc. on a CV/resume to be acceptable (e.g. Wexler, 2006), despite it being a criminal offence, and detecting deceit during the recruitment process becomes an essential skill for recruiters. It could prevent the wasted costs of recruiting and onboarding unsuitable job candidates.
There are other professions whose livelihood very much depends on the detection of deception. It is an invaluable skill in Investment, Negotiation, Sales, Business Consultancy, to name but a handful of roles that require the best level of information and business intelligence in order to make the most appropriate decisions.
An obvious area for serious deception, and for the need to differentiate it from truthfulness, is the security, law enforcement, and military sectors. Literal life-and-death decisions are made in these fields for individual, national, and international safety.
Here we have two aspects of deception detection. The first is identifying individuals who may have the intent to cause harm, such as terrorist intent. Known as ‘primary detection’, this involves the observation and analysis of larger groups and crowds, such as in cities, entertainment venues, and transport hubs, with a view to identifying outliers who may wish to cause harm (e.g. acts of terror and/or mass violence). An example of this process can be seen in the OTER process of Lansley et al. (2017).
The second aspect of deception detection in this sector occurs during the initial and subsequent engagements with an individual, or ‘person of interest’ (PoI). This may come in many forms of engagement, from an informal, and potentially undercover, conversation aimed at eliciting information from the PoI without direct questions, through to a formal interview in which the roles of interviewer and interviewee are apparent.
The consequences of mistakes in identifying deception, during either primary detection or engagement with a PoI, are unfortunately prevalent in our news feeds on a daily basis. Successful detection in these contexts saves lives.
Finally, for this sector, a PoI may move to trial, and potentially sentencing, through the justice system. Here we have numerous statements: from the defendant and the defence’s representation, and from the prosecution and potential victims, witnesses, and experts. From these, decisions must be made on the credibility and truthfulness of the various statements in order to judge potential criminal guilt and subsequent punishment. Bond and DePaulo (2006) carried out a meta-analysis of the accuracy of lie-truth detection judgments across a number of populations, including judges, lawyers, and police officers; the average accuracy rate was 54%. Given the consequences of the decisions made in these contexts, an accuracy rate only slightly better than tossing a coin is a worrying figure.
In our daily lives, we engage with friends and loved ones, as well as professionals, and our ability to detect deception could also prevent negative consequences here. A cheating partner, a lying teenager experimenting with illegal substances, or a car salesperson attempting to sell us a car known to have numerous issues are but a few examples. However, as we learn to read others and begin analysing behaviour with clarifying truthfulness in mind, we need to take care. We can very easily become cynical, developing a bias for spotting guilt and ruling out evidence to the contrary. This can itself lead to negative consequences, e.g. mistrust in relationships, and relationship and communication breakdowns. We must keep an objective balance when assessing truthfulness, credibility, and deception, especially where friends and loved ones are involved. Sometimes, saying and doing nothing is the best course of action, but this very much depends on the context.
Over the centuries of human development, many examples can be found of ‘lie-detection’ methods. These range from the barbaric through to the downright strange.
Thankfully times have changed, as have our legal and judicial systems. Our understanding of human behaviour has also adapted with the times and continues to do so.
Today, we can look to both technology and human observation/analysis in order to assess the truthfulness of an individual. A combination of these is preferable, though, as technology alone has proven less than effective, with a high frequency of ‘false positives’, i.e. misjudging a truth teller as a liar.

The best-known technological method for attempting to detect deception is the ‘polygraph’. This equipment measures psychophysiological responses (e.g. heart rate, blood pressure, sweating) in an attempt to identify changes that might indicate a stress response. The issue that has always surrounded the polygraph is its inference that stress or anxiety is likely linked to deception. However, we know that anxiety can be caused by many different factors, not just deception (Ekman, 2009). In fact, even early studies on the efficacy of the polygraph highlighted issues with its results (e.g. Saxe, Dougherty & Cross, 1985): whilst it might correctly identify a liar at rates of around 70% to 94%, the other figures were not as generous. Correct identification of a truth teller could range from 12% to 94%, with false positives ranging up to 75%.

Various other technologies are being developed for the detection of deception, such as EEG (electroencephalography) scanning (e.g. Farwell & Smith, 2001; Parmar & Mukunden, 2017) and fMRI (functional magnetic resonance imaging). In the latter’s case, however, a critical review of fMRI investigations found that “…in the foreseeable future, ethical and legal use of fMRI-based lie detection is likely to be limited to cooperative volunteers, such as those seeking acquittal, or clearance after failing a polygraph test” (Langleben, 2008).
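A quick way to see why those false-positive figures matter is to work through the base rates. The sketch below applies Bayes' theorem; the sensitivity and specificity values are hypothetical picks from the mid-range of the ranges quoted above, and the base rate of liars is an outright assumption, so the output is illustrative only.

```python
# Illustrative only: why headline polygraph accuracy can mislead.
# Sensitivity/specificity are hypothetical values from the mid-range
# of the figures quoted above; the base rate is an assumption.

def positive_predictive_value(sensitivity, specificity, base_rate):
    """P(actually lying | test flags 'lie'), via Bayes' theorem."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# Suppose the test catches 85% of liars but clears only 60% of
# truth tellers, and 1 in 10 interviewees is actually lying.
ppv = positive_predictive_value(sensitivity=0.85, specificity=0.60,
                                base_rate=0.10)
print(f"Chance a flagged person is really lying: {ppv:.0%}")  # 19%
```

Even with a respectable-sounding hit rate, most people the test flags in this scenario are truth tellers, because liars are rare in the pool being screened.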
Strong current research in the area of deception detection focuses on the use of humans as observers, i.e. real-time human “behaviour analysis.”
This has also been widely researched and discussed in books, web articles, and through TV ‘body-language’ expert analysis. Some of this work is heavily based on scientific theory and evidence; other work is far more anecdotal and, unfortunately, helps to perpetuate many myths in the field. One example is the linking of a “nose scratch” to concrete evidence of deception, a myth with no scientific basis. Another is the concept of Eye Accessing Cues, popularised by the self-help system known as Neuro-Linguistic Programming (NLP). In this system, eye gaze direction was supposedly correlated with cognitive processes, such as constructing images/sounds, remembering images/sounds, self-talk, and kinaesthetic feelings. In some cases, misinformed trainers took these to infer that deception was present (when constructing images/sounds). However, the concept of eye accessing cues has been critiqued at length in recent years (Wiseman et al., 2012), with no correlation with deception found.
We know for sure that there is no single indicator of deception (Ekman, 2009; Vrij, 2008). Each of us is different, and we all respond in different ways depending on the context of the situation, the topic under discussion, and our psychological makeup.
Some of us may cope perfectly well in a formal interview situation, whilst others may exhibit profound anxiety in the same situation. Some of us may be able to lie if it is deemed necessary, whilst others may consider the concept of deceiving someone morally deplorable.
So, in the analysis of human behaviour, an approach known as SCAnR (Six Channel Analysis – Real-time), developed by The Emotional Intelligence Academy (Archer & Lansley, 2015; Lansley, 2017), takes into account three aspects: Account, Baseline, and Context.
Account – What is being discussed? What’s the topic of the interview? What is the core of the conversation? What details are being shared?
Baseline – How does this person typically behave in this type of situation? This is far easier to answer if you are close/intimate with someone. However, time must be spent to understand baselines for those you’ve just met.
Context – In what cultural, societal, environmental, etc. context is this interaction or observation occurring? How might this impact on this person’s behaviour?
Once we understand these, we can identify when communication through one or more of the six channels (Face, Body, Voice, Interactional Style, Verbal Content, and Psychophysiology) is inconsistent with the account, the individual’s baseline, or the context. Such an inconsistency is considered a ‘Point of Interest’, or PIn.
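The PIn idea can be made concrete as a simple data structure. Everything below (the class and field names, and the seven-second clustering window) is an illustrative assumption for this sketch, not part of the SCAnR method or the Academy's actual tooling.

```python
# Hypothetical sketch of a PIn log. Field names, channel labels, and
# the clustering window are illustrative assumptions.
from dataclasses import dataclass

CHANNELS = {"face", "body", "voice", "interactional style",
            "verbal content", "psychophysiology"}

@dataclass
class PointOfInterest:
    """A behaviour inconsistent with the account, baseline, or context."""
    channel: str            # one of the six channels above
    observation: str        # what was seen or heard
    inconsistent_with: str  # "account", "baseline", or "context"
    timestamp_s: float      # seconds into the interview

def cluster(pins, window_s=7.0):
    """Group PIns that occur close together in time. Clusters spanning
    several channels at one moment are what warrant follow-up questions."""
    ordered = sorted(pins, key=lambda p: p.timestamp_s)
    groups, current = [], []
    for p in ordered:
        if current and p.timestamp_s - current[-1].timestamp_s > window_s:
            groups.append(current)
            current = []
        current.append(p)
    if current:
        groups.append(current)
    return groups

log = [
    PointOfInterest("face", "flash of fear", "account", 12.0),
    PointOfInterest("voice", "pitch drop", "baseline", 14.5),
    PointOfInterest("body", "single-sided shrug", "account", 95.0),
]
for group in cluster(log):
    print(len(group), "PIn(s) at ~", group[0].timestamp_s, "s")
```

The design point the sketch illustrates is that a single logged PIn is just a note to revisit; it is the co-occurrence of PIns across channels, anchored to a moment in the account, that raises priority.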
Zuckerman & Driver (1985) discussed four factors involved in an attempted deception: generalised arousal, the emotions felt whilst lying, the cognitive effort of constructing the lie, and the attempt to control behaviour so as not to get caught.
This still holds true in our understanding today. In a deceptive interaction, these aspects may be observed through the six channels mentioned above. Let’s take a look at some examples:
The face is the primary channel for emotions, seven of which are universal (Ekman & Keltner, 1970; Ekman, 2007; Ekman, 2009), and we may see an individual’s emotion ‘leak’ onto their face, indicating their true feelings. So, whilst managing to keep a social smile on their face, an interviewee may show a sudden expression of fear, perhaps lasting half a second or less (known as a micro-expression; Ekman, 2009; Matsumoto & Hwang, 2011), when a key interview question is asked. The face can also show heavy cognitive processing, such as furrowed brows, which may be interesting if seen when the question should require only a simple response.
We use our bodies to communicate massive amounts of information, in some cases without the use of words at all. The most important aspect to consider with body language is that, unlike emotional facial expressions, we have no current scientific evidence for universal body language. We must always take into account the cultural, social, and individual baselines for how people use their bodies to communicate. However, one example of body ‘leakage’ may be the slightest single-sided shoulder shrug as the interviewee states “I’m certain!” The single-sided shrug could be a “micro-gesture” of the bigger double-sided shoulder shrug that typically communicates uncertainty. In this case, that subtle gesture would be inconsistent with the account.
This channel covers the pitch, volume, and tone (quality) of a person’s voice. The voice can communicate emotion as well as elements of cognitive processing (Ekman, 2007; Matsumoto, Frank & Hwang, 2013). For example, if a friend discussing their relationship with a spouse says, “we make a great couple,” and you hear a drop in volume and pitch plus a trembling tone, you may consider that this positive statement is inconsistent with those vocal changes. Perhaps it indicates uncertainty or sadness; only further questioning would help to find out.
This channel is ‘how’ we use the words we choose. For example, is the interviewee becoming argumentative or aggressive when a particular topic is raised? Did their use of first-person pronouns (I, me, my) drop when discussing their whereabouts last night? Could that indicate verbal distancing from a lie? Quite possibly… but we have to keep an open mind. Another example could be the sudden occurrence of disfluencies, such as “uhm” and “err” fillers, stuttering, and false starts. Are they under heavy cognitive load trying to answer the question? If so… Why?
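The pronoun-drop cue can be made concrete with a small word-count sketch. The sample sentences below are invented for illustration, and real statement analysis weighs many cues together rather than relying on one rate like this.

```python
# Illustrative sketch: first-person pronoun rate across segments of an
# account. Sample sentences are invented; a rate drop is a prompt for
# follow-up questions, never proof of deception.
import re

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def first_person_rate(text):
    """Fraction of words that are first-person singular pronouns."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return sum(w in FIRST_PERSON for w in words) / len(words)

earlier = "I drove my car to the gym and I trained for an hour."
alibi = "The house was left at nine and the pub was reached by ten."

print(round(first_person_rate(earlier), 2))  # 0.23
print(round(first_person_rate(alibi), 2))    # 0.0 - the "I" has vanished
```

Note how the second, passive-voice segment removes its speaker entirely; that shift relative to the rest of the account is the point of interest, not the absolute number.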
What specific words are being used? How full is the account being given? Are details sparse on the core topics, or are there far too many given the situation? If we’re listening to someone describe an event and in one section of the account the level of detail drops dramatically, we should pay attention to that section. Why the drop in detail? One method of analysing a statement, particularly in text form, is statement analysis. Using techniques such as Criteria-Based Content Analysis, or CBCA (Steller & Köhnken, 1989), analysts assess a statement or transcript against criteria to judge its credibility. This is an important step toward assessing the truthfulness of an individual, although a statement can ‘sound credible’ yet still be a very well-constructed lie. We must use all the channels and corroborate evidence.
The sixth channel is usually measured with technology, such as the polygraph. But with keen eyes and ears, the behaviour analyst can discern the visible and audible signals that someone is having a stress response. Visible phenomena might include flushing of the face, sweating of the brow, loosening of the collar, rubbing hands on the legs, and licking of the lips. Audible phenomena may include a dry, sticking mouth, clearing of the throat, deep sighs, or breath holding. This is not an exhaustive list, and remember that these phenomena are not ‘signs of lies’. They may indicate the onset of a stress response, but the question still remains… What caused the stress? Was it the stress of having to lie in response to a question? Or was it because the interviewee feels disbelieved in a stress-inducing, aggressive interview?
Deception is a part of our daily lives. It can take the form of non-serious, more altruistic or ‘white’ lies, but it can also take the form of serious, more malicious lies: for personal gain, or potentially with intent to cause harm.
The consequences of undetected deception can be high cost, in financial terms, but also in emotional, physical, and psychological terms.
We’ve certainly come a long way in our understanding of deception and how best to detect it, embracing a more scientific approach. Analysts should observe all six channels of communication, maybe using technology as a complementary tool, not a replacement, in order to take a more holistic approach to behaviour analysis.
When behaviours are inconsistent with the Account, the Baseline, or the Context then we need to pay greater attention, especially if there are clusters of inconsistent behaviours observed across the channels at a given point in time.
But most important of all is to remember that the observations we make are points of interest. They are not single indicators of deception. We still need to formulate good interview questions and probe the topic areas where the inconsistencies were observed.
Lastly, always remember to keep the mindset of “genuine interest in another human being” in order to prevent an interrogatory style of interviewing.
Archer, D. and Lansley, C., (2015). Public appeals, news interviews and crocodile tears: an argument for multi-channel analysis. Corpora, 10(2), pp.231-258.
Ekman, P. and Keltner, D., (1970). Universal facial expressions of emotion. California Mental Health Research Digest, 8(4), pp.151-158.
Ekman, P. (2007). Emotions revealed: recognizing faces and feelings to improve communication and emotional life. New York, St. Martin’s Griffin.
Ekman, P. (2009). Telling lies: clues to deceit in the marketplace, politics, and marriage. New York, NY, W.W. Norton.
Farwell, L. A., & Smith, S. S. (2001). Using Brain MERMER Testing to Detect Knowledge Despite Efforts to Conceal. Journal of Forensic Sciences. 46, 14925J.
Langleben, D. D. (2008). Detection of deception with fMRI: Are we there yet? Legal and Criminological Psychology, 13(1), 1-9. doi:10.1348/135532507X251641
Lansley, C. (2017). Getting to the Truth. Manchester, The Emotional Intelligence Academy.
Lansley, C., Garner, A., Archer, D.E., Dimu, R., Blanariu, C., & Losnita, S., (2017). Observe, Target, Engage, Respond (OTER): High-stake behaviour analysis using an integrated, scientific approach within an airport context. White Paper Series: EIA/SRI/OTP. http://e-space.mmu.ac.uk/620124/1/EIA-CNAB-SRI-iALERT-White-Paper-restricted-Apr2017.pdf.
Matsumoto, D. & Hwang, H. S. (2011). Evidence for training the ability to read microexpressions of emotion. Motivation and Emotion. 35, 181-191.
Matsumoto, D., Frank, M. G., & Hwang, H. S. (2013). Nonverbal communication: science and applications. Los Angeles, Calif, Sage.
Oxford University Press. (1989). The Oxford English dictionary. Oxford, Oxford University Press.
Parmar, V. K., & Mukunden, C. R. (2017). Brain Electrical Oscillation Signature Profiling (BEOS). International Journal of Computers in Clinical Practice (IJCCP). 2, 1-24.
Roth, T., & Hines, B. (2009). Lie to me. Season one. Beverly Hills, California. 20th Century Fox.
Saxe, L., Dougherty, D., & Cross, T. (1985). The validity of polygraph testing: Scientific analysis and public controversy. American Psychologist. 40, 355-366.
Steller, M., & Köhnken, G. (1989). Criteria-Based Content Analysis. In D. C. Raskin (Ed.), Psychological methods in criminal investigation and evidence (pp. 217-245). New York: Springer-Verlag.
Vrij, A., (2008). Detecting Lies and Deceit: Pitfalls and Opportunities. The Psychology of Crime, Policing and Law. Wiley. http://www.myilibrary.com?id=132200&ref=toc.
Wexler, M.N., (2006). Successful resume fraud: Conjectures on the origins of amorality in the workplace. Journal of human values, 12(2), pp.137-152.