How to Reduce Human Error in Cybersecurity with Red Teaming

Harry Wilson
Head of Digital Marketing Department, Globex Outreach

Human error in cybersecurity evokes images of untrained employees logging into systems with weak passwords reused across accounts, or clicking on phishing emails. However, mistakes can also come from the most experienced members of your IT and security team: professionals who know more about cybersecurity than anyone else in your company.

As these professionals progress in their careers and gain more experience, they take on more responsibilities and tighter deadlines. In such a hectic environment, they have to make complex decisions quickly, which encourages the mental shortcuts known as cognitive biases. These shortcuts make us more efficient workers, but they can cause major mistakes, especially in cybersecurity, where a single slip can result in an expensive breach.

Cognitive biases are subconscious blind spots that everyone has. If your mind went straight to untrained employees after reading the title, that itself is a bias at work.

So, how can you discover cognitive bias within your IT team before cybercriminals do? 

One way to combat cognitive bias is to implement automated red teaming: testing how your team mitigates threats and determining whether assumptions cloud their judgment.

How Cognitive Bias Manifests in Cybersecurity

According to Simply Psychology: “A cognitive bias is a subconscious error in thinking that leads you to misinterpret information from the world around you, and affects the rationality and accuracy of decisions and judgments.” Paired with technical flaws in a system, cognitive biases create vulnerabilities even in organizations with strong security and tools that protect their network.

There are many versions of this error in thinking. The table below explains and exemplifies three common cognitive biases in cybersecurity.

Cognitive Bias | Explanation | Example in Cybersecurity
Confirmation | Seeking proof that confirms our preconceived assumptions based on what we already know about cybersecurity. | The security team assumes a specific issue in the infrastructure (e.g., a virus) and looks only for vulnerabilities that confirm it instead of checking the entire infrastructure.
Recency | Focusing too heavily on recent trends or criminal activity while neglecting more pressing issues at hand. | The team concentrates on defending against DDoS attacks after a recent surge in that kind of activity and fails to account for a phishing attack.
Overconfidence | Believing that we will not be the victim of an attack, or that the protective tools we have are enough. | Professionals fall for a phishing email because they overestimate their email filters or believe they are immune to attacks.

These biases often stem from failing to review an organization’s infrastructure as a whole, as a system that is constantly changing and opening new pathways for vulnerabilities. Another cause is a lack of time and resources to thoroughly scan and test cybersecurity tools. Cognitive bias, then, is not the only thing responsible for vulnerabilities, but it compounds every other weakness.

One way to test for these blind spots is red teaming.

How Does Red Teaming Work?

Red teaming, in a nutshell, is a method of finding new weaknesses by putting existing systems, protocols, and the people who manage them to the test. It takes an external perspective to discover how hacking groups could intrude into your network, breaching your security just as real adversaries would.

How to Perform Red Teaming

To start, assess your current security posture. Based on that information, define a clear, measurable objective: what exactly is going to be tested.
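To make the objective concrete, it can help to write it down in a machine-readable form that everyone agrees on before the exercise begins. Below is a minimal sketch in Python; the schema and every value in it are hypothetical, not part of any standard.

```python
from dataclasses import dataclass

@dataclass
class RedTeamObjective:
    """A hypothetical, minimal way to pin down what an exercise will test."""
    name: str                        # short label for the exercise
    target: str                      # system, team, or process under test
    success_criteria: list[str]      # what counts as a successful intrusion
    detection_deadline_minutes: int  # how quickly defenders should notice

objective = RedTeamObjective(
    name="Phishing-driven credential theft",
    target="Finance department mailboxes",
    success_criteria=[
        "Obtain at least one set of valid credentials",
        "Authenticate to the VPN with stolen credentials",
    ],
    detection_deadline_minutes=60,
)
```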

The next step is choosing the members of two teams, coded red and blue.

The red team attacks the network, while the blue team has to defend it against the breach; the blue team isn’t aware that the attack is a test. What follows is a simulation of an attack shaped by the pre-set objective.
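For a phishing-style objective like the one above, the simulated attack can be as simple as a scripted mail send to pre-approved internal recipients. The sketch below is a bare-bones illustration; the host, addresses, and lure URL are all hypothetical, and a real engagement would use dedicated phishing-simulation tooling with proper authorization.

```python
import smtplib
from email.message import EmailMessage

# Hypothetical values for an authorized internal exercise only.
SMTP_HOST = "mail.example.com"
SENDER = "it-support@example.com"          # exercise-owned address posing as IT
RECIPIENTS = ["alice@example.com", "bob@example.com"]
LURE_URL = "https://training.example.com/reset?id={user}"  # tracked landing page

def send_simulated_phish(recipient: str) -> None:
    """Send one tracked lure email as part of the red-team simulation."""
    msg = EmailMessage()
    msg["Subject"] = "Action required: password expires today"
    msg["From"] = SENDER
    msg["To"] = recipient
    msg.set_content(
        "Your password expires in 4 hours. Reset it here:\n"
        + LURE_URL.format(user=recipient.split("@")[0])
    )
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)

for user in RECIPIENTS:
    send_simulated_phish(user)
```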

Finally, the success of the attack is measured, followed by an assessment of how well the blue team was prepared to combat it.
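Measurement can be as lightweight as recording a timestamp for each phase of the exercise and comparing the results against the objective. A minimal sketch, assuming the timeline below was logged during a simulated attack:

```python
from datetime import datetime

# Hypothetical timeline collected during the exercise (UTC).
events = {
    "attack_started":  datetime(2023, 5, 2, 9, 0),
    "first_detection": datetime(2023, 5, 2, 10, 12),  # blue team raises an alert
    "contained":       datetime(2023, 5, 2, 11, 45),  # access revoked, IOCs blocked
}

time_to_detect = events["first_detection"] - events["attack_started"]
time_to_contain = events["contained"] - events["first_detection"]

print(f"Time to detect:  {time_to_detect}")   # 1:12:00
print(f"Time to contain: {time_to_contain}")  # 1:33:00

DEADLINE_MINUTES = 60  # from the objective defined before the exercise
if time_to_detect.total_seconds() / 60 > DEADLINE_MINUTES:
    print("Detection missed the agreed deadline; review monitoring and assumptions.")
```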

Using Red Teaming to Test Your Team’s Cognitive Biases

Red teaming pinpoints cognitive bias because it exposes your team’s assumptions when they face an attack they don’t expect.

It’s effective in discovering cognitive biases because: 

  • The blue team doesn’t know they’re being tested, so they’ll react as they would during an actual attack.
  • Red teaming doesn’t rely only on the mitigation of common threats; it tests systems against new techniques drawn from resources such as the MITRE ATT&CK framework (see the sketch after this list).
  • It shows what your team suspects first, and how long it takes them to discover and mitigate threats.
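One way to pull fresh techniques is directly from MITRE’s public ATT&CK data. The sketch below downloads the enterprise ATT&CK STIX bundle from MITRE’s cti GitHub repository and lists techniques whose names mention phishing; the keyword filter is just an example.

```python
import json
import urllib.request

# Public STIX 2.x bundle of enterprise ATT&CK data maintained by MITRE.
ATTACK_URL = (
    "https://raw.githubusercontent.com/mitre/cti/master/"
    "enterprise-attack/enterprise-attack.json"
)

with urllib.request.urlopen(ATTACK_URL) as resp:
    bundle = json.load(resp)

# Techniques are STIX "attack-pattern" objects; print the ones matching a keyword.
for obj in bundle["objects"]:
    if obj.get("type") == "attack-pattern" and "phish" in obj.get("name", "").lower():
        refs = obj.get("external_references", [])
        tech_id = next(
            (r["external_id"] for r in refs if r.get("source_name") == "mitre-attack"),
            "?",
        )
        print(tech_id, obj["name"])
```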

To Conclude 

Cybercriminals are getting more sophisticated, but they also rely on techniques that worked in the past — especially recurring human errors that even professionals aren’t aware of. 

Your team has to make important and complex decisions every day. Use red teaming to test whether they have blind spots caused by cognitive biases, and give them additional training if needed.

