Fear, power, and passwords: the psychology of cybersecurity success and failure
Innovative research uses human reactions to healthcare threats to better understand why employees do, and don't, comply with cybersecurity policies
The world’s headlines often include accounts of hacking brought about through a technique known as “phishing,” in which fake emails are sent to a company's employees who then, through error or inattention, hand their login credentials to the hackers. Of all the ways hackers break into computer systems, phishing is one of the most common and yet, in theory at least, one of the easiest to prevent. Like passwords given away over the phone or laptops left unsecured, phishing illustrates how often a psychological lapse lies behind a major cybersecurity failure. Understanding how to ensure cybersecurity compliance is therefore a subject of much research and debate. Given the prevalence of the problem, however, there remains much to be discovered and improved in this all too human aspect of information security.
An extensive analysis from Yan Chen (FIU), Dennis Galletta (Pitt), Paul Benjamin Lowry (VaTech), Xin Luo (UNM), Gregory D. Moody (UNLV), and Robert L. Willison (Xi’an Jiaotong-Liverpool) brings a novel perspective to this issue by looking at information security through the lens of healthcare. Their shift in thinking is both instructive and illuminating.
The authors begin their paper by noting what they see as three shortcomings of much previous research on this topic. First, they argue, most analyses have focused on compliance and treated non-compliance as merely its absence. This view is too simplistic, the authors contend; non-compliance should be studied and dealt with as a phenomenon with its own unique characteristics and dynamics. As they write:
Nearly twice as many studies focus on compliance versus non-compliance, and over twice as many studies focus on adaptive versus maladaptive outcomes. Notably, this is more than a mere research gap; the problem is the explicit or implicit assumption that the motivations and reasons for non-compliance are merely the opposite of compliance, which is not true.
The second flaw in much of the previous work, in their opinion, is that it has looked at compliance through a largely cognitive lens and omitted any discussion of emotion. In other words, researchers studied whether employees did or did not comply with a security practice but did not dig into the emotional and psychological dimensions that were reflected in, or even caused, the outcomes they observed. The authors argue that "substantial research indicates that employees in security settings operate with bounded, not perfect, rationality, and that researchers therefore must consider emotions, not just cognitions, to understand how employees make security decisions."
Lastly, and largely as a consequence of the first critique, past research has not examined the "tipping point" at which an employee moves from complying with security protocols to not complying, a moment whose dynamics, the authors believe, deserve to be examined and understood. As they note, "a theoretical and empirical account should explain how and why these outcomes are different, identify the point at which a normally compliant employee may suddenly choose to become noncompliant, and determine why such choices are made."
Having presented their critiques of past attempts to understand why employees often fail to comply with information security policies (ISPs), the authors introduce a very different approach to the problem at hand. They suggest adopting the Extended Parallel Process Model (EPPM), a construct developed by communications researcher Kim Witte in the 1990s to understand how people deal with threats to their health. EPPM suggests that when faced with a threat to their physical well-being, people appraise it along two dimensions. The first dimension is rational and is based on the efficacy, or power, if any, the person believes they have in responding to the threat. The second dimension is emotional and is based on the fear generated by the external threat. The person’s reaction to the threat, EPPM predicts, will be a combination of both the rational and the emotional appraisal:
When perceptions of a threat are strong and perceived levels of efficacy are high, the model predicts self-protective behavior. When perceptions of a threat are strong, but perceived levels of efficacy are low, the model predicts maladaptive denial or rejection of protective behaviors. By asking [such] questions, people in an intended audience can be classified as having either high or low levels of perceived efficacy and either high or low levels of perceived threat.
Figure 1: EPPM dimensions and responses (Source: HCCC).
The authors base their research on the idea that just as EPPM helps predict reactions to healthcare threats, it can do the same in cybersecurity. Consider two possible reactions to a video, sent by the CEO to all employees, describing the disastrous hacking of a competitor. Employee A is convinced by the CEO that (a) the threat is real (and thus generates a high degree of fear) and that (b) her individual actions really do help keep the company safe. EPPM predicts she will follow ISPs closely and thus pose a low cybersecurity risk to the company. Employee B, however, concludes that the problem is real (high fear) but that what he does really does not matter, since hackers “can always figure out how to get into our systems” (very low efficacy). EPPM predicts this employee will be a high cyber risk to the organization: if EPPM is right, he will decide that nothing he can do will make any difference to the firm’s cybersecurity posture.
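The quadrant logic behind these two employees is simple enough to write down. Here is a minimal Python sketch (my own illustration, not the authors' model; the 0.5 cutoff and the normalized scores are invented for clarity):

```python
# Minimal sketch of EPPM's quadrant logic: perceived threat (fear) and
# perceived efficacy are each classified as high or low, then mapped to
# the predicted coping response. Thresholds and scales are illustrative
# assumptions, not taken from the paper.

def eppm_response(perceived_threat: float, perceived_efficacy: float,
                  threshold: float = 0.5) -> str:
    """Classify a (threat, efficacy) pair into a predicted EPPM response.

    Both inputs are assumed to be normalized to [0, 1]; the 0.5
    threshold is an arbitrary illustrative cutoff.
    """
    high_threat = perceived_threat >= threshold
    high_efficacy = perceived_efficacy >= threshold

    if high_threat and high_efficacy:
        return "danger control"   # self-protective behavior: follow the ISPs
    if high_threat and not high_efficacy:
        return "fear control"     # maladaptive denial or non-compliance
    return "low response"         # weak threat perception: little motivation either way

# Employee A from the example above: high fear, high efficacy.
print(eppm_response(0.9, 0.8))    # -> "danger control"
# Employee B: high fear, very low efficacy.
print(eppm_response(0.9, 0.1))    # -> "fear control"
```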
To test whether their ideas about EPPM in cybersecurity have merit, the authors conducted two studies:
Study 1 examined ISP compliance/violations of 411 employees using self-report surveys. Study 2 required 405 employees to respond to manipulated scenarios (or vignettes) of ISP violations, asking whether they would do the same in their workplace (i.e., be noncompliant) or not (i.e., comply). It was important to examine these differences, because several behavioral security researchers have claimed that hypothetical vignettes are superior to self-report surveys in organizational security compliance and non-compliance contexts.
By the way, it’s important to remember that “fear” in this context is clearly not the same as the fear in the original EPPM application. In the authors’ work, it refers to a sense that a cybersecurity failure could endanger not just the health of the firm but the employee’s own income, position, or even career.
Their main findings are summarized below.
Finding: The optimal condition in employees is one of high efficacy combined with high fear
For the fear dimension to trigger high levels of compliance, "security risks must be perceived by employees as personally relevant, and those employees should receive appropriate training toward that end." This elevated fear state is recommended because "when such employees encounter a meaningful level of ISP-related threat at work, they then internally weigh their efficacy to determine whether they can comply with ISP requirements to thwart the security threat." Employees who see hacks and information thefts as personally dangerous to their income and careers, and who know what specific ISP actions to take, are the ones most likely to do what the company hopes they will. They are, note the authors, "more likely to recognize and report suspicious emails to IT, encrypt attachments, recognize attempts to use social engineering to deceive them, use and remember more complex passwords, report suspicious coworker computer behavior, make use of a VPN, log off when leaving one's workstation, and pay attention and respond to system or browser warning messages."
Finding: Increasing the fear level in settings with very low efficacy levels is ineffective or even counter-productive
When companies increase the fear level in employees but fail to raise their efficacy level, EPPM predicts that those employees will, at best, pursue average compliance measures or, at worst, become especially non-compliant. Like a heavy drinker who decides alcoholism is encoded in his genes and so accepts the disease as his destiny, an employee with high fear and very low or zero efficacy can easily become a major internal cyber risk:
A common example is when employees know about the severity and vulnerability of security threats—often through sensational news stories—but in fact have no training or efficacy in responding to these threats. Thus, they feel overwhelmed and ignore the threat. For instance, employees may have attended training that raised their awareness, but they came away confused, stressed, or annoyed because they did not understand the highly technical material and were too embarrassed to ask questions (a common issue for nontechnical employees). A more serious reaction can occur when an employee views non-compliance as an excellent opportunity to "get back" at their boss or company for perceived mistreatment.
Finding: The tipping point between compliance and non-compliance is variable and dependent on the efficacy/emotion conditions
According to EPPM, when faced with a health threat, patients adjust their response continuously as new information about the threat level (and their own power over it) arrives. For example, someone may initially take measures not to catch COVID but then, after seeing case numbers rise day after day, become convinced that nothing he can do will prevent infection, at which point he stops taking any safety precautions. The opposite is also true: someone may take no precautions against COVID until she reads about a study showing that wearing a mask significantly reduces infection risk, at which moment she suddenly becomes compliant with mask mandates. Put simply, as long as a person's perceived efficacy is stronger than their fear, they are more likely to take action to lower the risk (a danger control response) than to minimize it psychologically (a fear control response). Likewise, in cybersecurity, note the authors, "the extent of the response elicited by perceived threat or by perceived efficacy determines the tipping point, which identifies which control process is dominant, and the success or failure of threat coping."
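To make the tipping point concrete, here is a small Python sketch of that continuous re-evaluation (again my own illustration; the update sequence and the magnitudes are invented, not drawn from the paper):

```python
# Illustrative sketch of the "tipping point" reading of EPPM described
# above: the dominant control process is whichever of perceived efficacy
# or fear is currently stronger, re-evaluated as new information arrives.
# All numbers below are invented for illustration.

def dominant_process(fear: float, efficacy: float) -> str:
    # Danger control dominates while perceived efficacy exceeds fear;
    # once fear overtakes efficacy, fear control (denial, avoidance) wins.
    return "danger control" if efficacy > fear else "fear control"

fear, efficacy = 0.4, 0.7                 # initially compliant: efficacy > fear
print(dominant_process(fear, efficacy))   # -> "danger control"

# A stream of alarming news raises fear without raising efficacy ...
for news_bump in (0.15, 0.15, 0.15):
    fear = min(1.0, fear + news_bump)
    print(f"fear={fear:.2f} efficacy={efficacy:.2f} ->",
          dominant_process(fear, efficacy))
# ... and at the step where fear passes efficacy, the predicted behavior
# tips from compliance to non-compliance.
```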
The tipping point issue is as nuanced as it is critical for cybersecurity, the authors argue, because "studies have confirmed that for ISP compliance, a self-regulatory approach relying on shared security values and beliefs is more effective than a command-and-control approach." Furthermore, "when employees struggle with competing goals (sometimes caused by role conflict introduced by different management systems) and values, they are more likely to appeal to problematic emotions and react with non-compliance." In this situation, employees "may downplay the value of the prescribed ISP, resulting in a perception of low response efficacy and more non-compliance."
So far, the research findings support what EPPM would predict, but in some areas the researchers found outcomes that differed between cybersecurity and healthcare settings. EPPM suggests, for example, that when someone has high fear but little efficacy, they try to control or even deny their fear: heavy smokers discount warning labels as "overblown" and continue to smoke. In their study, however, the authors found that when threats produced high levels of fear, even if efficacy levels were not very high, employees did try to increase their compliance efforts. Unlike in healthcare settings, then, the higher levels of fear did not "overwhelm" the response decisions. The authors surmise that “fear can simply be more overwhelming in a potentially life-threatening personal health context than in an organizational security setting, even one in which employees care about their work and their organization." That's the good news for cybersecurity managers.
The bad news is that the study showed a greater tendency for non-compliance in some situations, even when the efficacy level was high. This outcome suggests to the researchers that situational context might impact the EPPM model more in cybersecurity settings (with their higher dependence on group/company characteristics) than in healthcare (with their higher dependence on individual/personal characteristics).
Based on this first-of-a-kind study, the authors suggest that it may be straightforward to increase compliance through Security Education, Training, and Awareness (SETA) programs that focus on (a) alerting employees to threats they did not previously know about and (b) building the coping efficacy needed to handle threat-generated fear by explaining systematically how to deal with specific threats.
In addition, the authors suggest that cybersecurity managers should evolve their training programs to decrease unwanted responses to the various EPPM scenarios. To do so, "they may have to further account for other related contextual factors such as job roles, organizational commitment, type of security threats, organizational differences, industrial differences, and individual differences, to reduce maladaptive coping behaviors."
Furthermore, given the critical role that fear plays in shaping compliance and non-compliance, "it would also be interesting to determine whether other contexts, such as cyberphysical systems (e.g., drones and autonomous vehicles), elicit stronger fear due to the enhanced possibility of physical harm." At the same time, the authors emphasize the need to set their research into the correct cultural context, since, for example, "studies have shown that, compared to Western users, Chinese users are much more prone to uncertainty avoidance, perceive greater power distance from their managers, and adhere more strongly to collective norms." Thus cyber-EPPM [my term] might operate differently in different countries and cultures than it does in its traditional healthcare application.
As noted earlier, this research makes interesting conceptual, methodological, and even practical contributions to how managers should think about cyber compliance and non-compliance. The authors' novel use of EPPM expands the context in which ISP design and efficacy can be considered. As such, one hopes the work will get a wide reading and serious consideration from cybersecurity leaders and their business colleagues across the many settings where information security remains a critical part of operational risk identification and management.
The Research
Yan Chen, Dennis F. Galletta, Paul Benjamin Lowry, Xin (Robert) Luo, Gregory D. Moody, Robert Willison. Understanding Inconsistent Employee Compliance with Information Security Policies Through the Lens of the Extended Parallel Process Model. Information Systems Research, 0(0). https://doi.org/10.1287/isre.2021.1014