It is 7am, and ten minutes into the morning you have already dismissed the first round of notifications littering your screen. Respond, dismiss, delete, save, and keep going. It is impossible to escape the endless stream of decisions required to get through the day. While some people try to minimise unnecessary decision-making, it is exceptionally difficult to succeed. For instance, research shows that the average person faces upwards of 200 decisions about food alone every single day. Choosing an outfit and deciding on an entrée are not life-changing decisions, but the mental energy to make these selections comes from the same finite reservoir as the energy needed to make much larger, more important decisions.
The human brain has many different goals and purposes, one of which is to conserve energy. One way brains conserve energy is to use mental shortcuts or heuristics. These rules of thumb allow people to do many complex tasks, but when we take a closer look at how they impact our decisions, we find that taking shortcuts can lead to cognitive bias and reasoning errors.
The impact of cognitive biases has no boundaries, and cybersecurity decisions at all levels of an organisation are impacted by bias. Let’s look at three common cognitive biases, explore how they impact different areas of cybersecurity decision making, and finally identify strategies for mitigating the negative impact of cognitive bias.
Priorities, people, and purchases
Building a strategy to protect against cyber threats requires understanding and prioritising efforts to address existing or potential threats. Availability bias impacts what agency leaders and cybersecurity experts perceive as high-priority threats. The more often someone encounters specific information, the more readily accessible it is in their memory. If the news is full of privacy breaches carried out by foreign adversaries, that type of threat will be top-of-mind, which may drive leaders to overestimate the likelihood of being targeted with such an attack. In reality, incidents reported on the news may not apply to their industry at all, or may be extreme outliers; indeed, that rarity is often precisely what makes them newsworthy. Still, availability bias may lead leaders to home in on potential outside threats, perhaps at the expense of more urgent internal ones.
Another challenge for cybersecurity professionals is identifying user characteristics that pose the greatest risk to an organisation’s information system. Grouping people together based on specific characteristics or attributes can be both convenient and effective, but it also introduces the risk of representativeness bias. Representativeness bias occurs when we erroneously group people together based on qualities that are considered normal or typical for that group. For instance, if you stated that older people are riskier users because they are less technologically savvy than their younger counterparts, you would likely observe affirmative nods from around the room. However, when we take a closer look at the numbers in current research, we find that younger people are actually far more likely to share passwords, and they often reuse the same ones across domains. If sharing a streaming service login also exposes banking or corporate credentials because the same password is reused across accounts, the younger user is far riskier than the older user.
Fear, uncertainty, and doubt also impact decision making, and as an industry, cybersecurity thrives on using negative language that highlights risk or loss in marketing materials. This works because the way information is presented, or framed, shapes our purchasing decisions. The framing effect is a type of cognitive bias driven by how a choice is worded. People are biased towards options framed as positive, sure-thing outcomes. They are also more willing to choose riskier, or more expensive, options if they are faced with a potential loss. The outcome of the framing effect in the cybersecurity industry is that decision-makers may choose overkill solutions that address specific, low-probability risks. While all-or-nothing security may seem like a sure thing, and a way to avoid risks, bloated solutions can negatively impact employees’ ability to actually do their jobs. People are at their most resilient and creative when faced with barriers or security friction, and imaginative workarounds to poorly selected security solutions may end up being riskier than the original perceived threat.
These biases are only a small sample of how the cybersecurity industry is shaped by human decision making. To address the impact of cognitive bias, we must focus on understanding people and how people make decisions at the individual and organisational level in the cybersecurity industry. This means raising awareness of common cognitive biases across agencies and within our security teams to better identify situations where critical decisions are susceptible to the negative impact of mental shortcuts.
Beyond awareness, analytics can also help remedy some cognitive biases and their accompanying concerns. For instance, instead of relying on anecdotal or stereotypical assumptions that certain user groups are riskier than others, behavioural analytics can help by building a data-driven understanding of human behaviour and risk. The ability to apply security controls without relying on broad, inflexible rules, whether for a specific group or for an entire agency, also counters the pull towards overkill cyber solutions that may seem appealing due to availability bias and framing effects.
The bottom line is that cognitive biases shape our cybersecurity decisions from the keyboard to the boardroom, and these decisions ultimately determine the effectiveness of our cybersecurity solutions. By improving our understanding of biases, it becomes easier to identify and mitigate the impact of flawed reasoning and decision-making conventions. More tangibly, understanding user behaviour at the individual level can also help minimise the degree to which cognitive biases drive an agency’s security posture.
By Margaret Cunningham, Principal Research Scientist, Forcepoint.