Human error is the biggest factor in cyber breaches
Data breaches, malware, phishing attacks… what do they all have in common? Beyond their ability to cause huge disruption, all rely on some form of human error to succeed. After all, if everyone who received a phishing email recognised it immediately and deleted it, the technique would die out as a form of cyberattack. Instead, it persists entirely because recipients remain vulnerable. Likewise, a complex piece of malicious code may execute a breach, but human error usually creates the opportunity to install that code on the system. So what are the basic errors we all make in managing our use of IT, and how can employers reduce the risk of human error among their employees? We examine the two types of human error, give examples of each, and offer some simple solutions to keep your systems safer.
Ultimately, it’s all human error
The major players in the cybersecurity space monitor and analyse trends in data breaches and other forms of cyberattack. IBM, for example, studied cyber breaches among thousands of customers in over 130 countries and found that human error was a factor in 95% of all breaches. Analysis of data from the UK’s Information Commissioner’s Office (ICO) in 2019 produced a similar result: up to 90% of cyber data breaches are caused by user error. In addition, Kaspersky Lab, a Russian multinational cybersecurity and anti-virus provider headquartered in Moscow, surveyed more than 5,000 businesses around the globe. It found that 52% of businesses perceive a security risk from within their own organisation, predominantly through user error rather than malicious intent.

Your company’s cybersecurity is not solely the responsibility of your IT security team. Every single employee who uses a PC, tablet, mobile phone or even an Apple Watch connected to your network has a part to play in keeping your systems safe. That said, human error does not just mean unwittingly opening an attachment sent in a phishing email. For all its training and awareness, the IT team can also commit errors, sometimes simply by delaying the installation of a patch. So everyone in your organisation, from the most junior employee to the C-suite, is responsible for minimising errors and keeping cybersecurity front of mind.
Two types of human error and examples
Human error, in the cybersecurity context, falls into two broad categories:
Skill-based human errors are the lapses we all make: genuine errors that occur when performing familiar tasks – accidental mistakes a user makes despite knowing the right course of action. Alternatively, they are shortcuts taken because the correct process is tedious or time-consuming. Arguably the latter could be called negligence, though it is not generally malign (that would be cybercrime, not human error).

Examples of skill-based error include sending an email to the wrong recipient. We’ve all sent an email and received a reply saying, “I think this is meant for the other James/Jane.” When we let our email client autocomplete the name in the address field, this can happen without our noticing. It’s important to pay attention and check the email before hitting send: Is it addressed to the right person? Has bcc been used for a broad distribution? Is the attachment the correct one? These errors of inattention happen accidentally but are also easily prevented.

Another skill-based error, and a very common one, is poor password management. Choosing and remembering a complex, safe password takes more time than using a simple one. “Password” and “123456” are disturbingly common choices, and using a birth date or a child’s name is not much more secure, as this information is easy to find. In addition, passwords are often re-used across websites, meaning that once a hacker cracks one password, they have access to your entire online existence.
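Because so many breaches trace back to weak or reused passwords, one simple technical control is to screen passwords at the point of creation. The sketch below is a minimal illustration only; the word list and the 12-character threshold are placeholder assumptions, not a recommended policy:

```python
# Minimal password-screening sketch: flags passwords that are too short
# or appear on a small list of known common passwords. A real deployment
# would use a much larger breached-password list and an organisation-
# specific policy; the values here are illustrative placeholders.
COMMON_PASSWORDS = {"password", "123456", "123456789", "qwerty", "letmein"}

def is_weak(password: str) -> bool:
    """Return True if the password should be rejected at registration."""
    if len(password) < 12:
        return True  # too short to resist brute-force guessing
    if password.lower() in COMMON_PASSWORDS:
        return True  # on a known common-password list
    return False
```

A check like this can run at account creation or password change, rejecting the weakest choices before they ever reach production systems.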
Case study: Why is password management so important? The story of Colonial Pipeline
You may have heard the news about the gasoline (petrol) shortage on the east coast of the US earlier this year. On April 29th, hackers infiltrated the network of Colonial Pipeline Co, the company in charge of the largest fuel pipeline in the US. Although the threat actors penetrated the company’s IT network, they did not manage to breach the operational technology network, which controls the actual flow of gasoline. The breach was not discovered until May 7th, when Colonial received a ransom note demanding cryptocurrency. The company immediately began to shut down the pipeline, and within the hour the entire pipeline had been disabled, leading to shortages of gasoline and long queues at petrol stations up and down the east coast. Colonial paid out $4.4 million in ransom money to a Russia-linked cybercrime group. In addition to the ransom, Colonial conducted a thorough ground and air examination of the pipeline system, which extends for around 5,500 miles. The pipeline appeared undamaged, but the cost of this exercise, although not disclosed by Colonial, must surely have equalled or even exceeded the cost of the ransom.
How did a breach of this magnitude happen? It was caused by a single compromised password, which the hackers used to gain entry through a virtual private network (VPN) account that employees used to access the company’s network remotely. The post-attack forensic investigation found the password among a batch of leaked passwords on the dark web. It is not certain how the password was obtained, but a Colonial employee may have reused it on another account that was later compromised. Undoubtedly, this employee did not set out deliberately to sabotage the company; the password was probably in use on innocent, legitimate accounts. But the minute a password is reused across multiple accounts, a breach of one leaves all the rest vulnerable. Good password management is one of the simplest actions every user can take to strengthen their personal and organisational security. And the reason people don’t do it? The oldest and possibly most common of human errors: laziness. The consequences? Millions of dollars of damage and millions of people affected. In the case of Colonial Pipeline, national security could even have been at stake: millions of gallons of gasoline flow through that pipeline.
Decision-based errors, by contrast, result from a conscious but incorrect decision. Poor decision-making usually stems from a lack of the necessary knowledge or information about the circumstances or consequences; it could even be a default decision made through inaction. Examples of decision-based errors are many. The employee who shares their login details for a secure folder with a co-worker who lacks the relevant credentials may think they are simply facilitating collaboration, without understanding the reason for the permission levels. The employee who opens an attachment in an email that looks like it comes from a trusted supplier is acting in good faith, but doesn’t know how to recognise the signs of a bogus phishing email.
Why do errors happen, and how can they be prevented?
Human error happens because… well, because we are human. But some circumstances give rise to error more than others, and all can be managed. It’s important to have robust IT security policies in place and to communicate them clearly and frequently. But policies alone will not solve the problem, because policies require compliance to be effective, and compliance relies on users.

The Kaspersky report found that very small businesses (fewer than 50 employees) felt more at risk from inappropriate use of IT resources than very large organisations – those with more than 1,000 employees. This is understandable. Only when a company reaches a certain size can it afford the resources to police its IT thoroughly. Global firms issue workers with laptops that have many user features disabled; all files are backed up to a cloud repository, and a central support team manages all user access and issues. Small companies must rely on users to follow correct procedures and behave responsibly, because central control would be impossible. Smaller companies also tend to be more entrepreneurial and less hierarchical, so employees have more latitude in their use of IT resources. This is fine if everyone has the same understanding of risk, but it exposes the organisation to variable knowledge, variable commitment to the greater good, and variable personalities… some personality types are simply less inclined to be conscientious and compliant!

Trust

Both of these scenarios – the rigid global corporate with tight controls in place and the young, vibrant start-up with a laissez-faire approach – illustrate different trust issues. The former has little trust in its employees and instead uses controls to manage the variability of a huge workforce, while the latter has high levels of trust, perhaps to its disadvantage. Trust, along with culture, is key in managing human error. Employees need to feel free to ask questions when they are unsure and to report mistakes when they happen.
Covering up a cybersecurity error equates to hanging out a welcome sign for hackers. If employees are worried about disciplinary or even financial consequences, they are less likely to be honest about errors or to report them timeously. A 48-hour delay while a worker wrestles with their conscience could be the window an intruder needs to infiltrate your network and cause untold damage.
A culture of trust, openness, tolerance of mistakes, and continuous learning is one of the two key defences against human error an organisation can introduce. The other is training, training and more training. Because new scams and cyber ploys are developed all the time, it is not sufficient only to train new employees at induction or to provide just an annual refresher in cybersecurity. Training should be relevant to the role; a junior clerk doesn’t need to know what a software patch is, but they do need to know how to recognise a phishing email. It should be engaging and provide the user with feedback, so an online refresher on password management might include a knowledge review and score at the end, with the correct answer given – and reasons for it – in the event of a wrong answer.
NEWORDER has the solution you need
NEWORDER is one of Africa’s leading providers of information security and corporate threat protection services. We can assess your organisational environment, conduct a training needs analysis, and provide a training programme that keeps your employees engaged and on top of the information and skills they need to minimise your risk of human error and the disastrous consequences it can cause.