“Is that a new breach? Or is it the same one we’re still talking about?” If you think you’re hearing about a company getting hacked almost every day, that’s because you’re paying attention; there were over 1,300 significantly damaging breaches of large businesses last year. That’s more than three per day on average, and that’s only counting the ones that were reported publicly. Unfortunately, hacks are occurring at an ever-increasing rate.
Whether you work in the technology sector or not, you may be rightly wondering when the software profession will get around to fixing this problem and finally start to secure your personal data. It is an understandable expectation, and in some ways it is even a reasonable ask – though fixing it will first require a hard look at some of the root causes of this worsening situation.
Cybersecurity Technology Is Very Strong, But Expertise Is Weak
With all the headlines about companies getting hacked, you might be wondering: why don’t organizations just buy the most secure and advanced solution and be done with security? That is why you buy the latest and greatest, right – to get the best features and the easiest use? If things were truly that simple, far fewer data breaches would be happening!
Cybersecurity technology is extremely strong, and we are not short of amazing technologies. Just look at the many firms providing advanced cybersecurity solutions that deliver all manner of robust defenses in unique ways – you can feel spoiled for choice, and overwhelmed by the options. Yet the expertise to configure these sophisticated security products for optimal performance remains scarce and highly specialized. And let’s not forget: no one is practically perfect, and security systems are designed, implemented, and managed by ordinary humans. As long as that remains the case, a flaw can always appear somewhere in the chain – and cybercriminals know about this expertise gap and are exploiting it to their advantage.
Cybercriminals Have the Edge
Cybercriminals do what they do for a host of reasons: fun, money, government or industrial espionage, politics, or anything else that sparks their interest and relieves their boredom. The cost of hacking is relatively low – simply pick a tool, treat it like a hammer, and swing away at nail-shaped vulnerabilities until something breaks. Film and television make hackers look incredibly skilled and devious, but remember: they only have to find a single flaw in a system, while legions of security administrators scramble to patch and protect against every flaw at once. And the cost of defense, especially for an enterprise, is significantly higher and far more complicated.
With enough patience, will, and expertise, even the most hardened, secure system can be compromised by dedicated cybercriminals. What really matters is how fast a company can proactively plan and, when needed, rapidly react: patch holes, learn, respond, train, and keep strengthening its security measures and ongoing processes against cyber-attacks. “Reacting quickly” is a great buzz phrase, but it is predicated on an enterprise defining a clear cybersecurity budget and proactively investing in defensive measures that both solidify its defensive stance and anticipate its needs before an attack arrives.
Risk Mitigation is a Tough Sell
Almost all technology companies are trying to accelerate their growth, all the time. An early-stage startup will spend its time building new features in its search for product/market fit before it runs out of capital. On the other side of the seesaw, larger publicly traded companies have to meet their quarterly revenue goals or watch their share price dip.
When companies are so focused on earnings and market success, it can be tough to convince them to invest in things that don’t appear to directly contribute to increased revenue. It can be a hard enough sell to convince companies to deal with basic operational inefficiencies, let alone to make significant investments in preventative security.
The potential impact of a hypothetical breach is always debatable ahead of time: will it be hard-to-quantify damage to the company’s reputation, or real dollars stolen from its accounts? Or will it be something trivial that doesn’t even have to be disclosed to the public? Overconfidence and other human biases cause best-case outcomes to be favored and worst-case outcomes to be ignored.
This is an economic dilemma, and so it has an actuarial solution (based on the likelihood of potential breaches, the negative impact such breaches would have, and the cost of preventative measures). For example, if a company is insecurely storing lots of Personally Identifiable Information (PII) and could quickly go out of business if that PII were stolen, it should be easy to justify investing in security to prevent a breach (and in ensuring compliant data handling, of course). Conversely, organizations that don’t think they make an attractive target too often reach for the “easy” justification that they simply don’t need any significant security.
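The actuarial reasoning above can be sketched with the common Annualized Loss Expectancy (ALE) model. All of the dollar figures and occurrence rates below are hypothetical, chosen only to illustrate how the comparison works:

```python
# A minimal sketch of breach risk economics using Annualized Loss
# Expectancy (ALE = SLE x ARO). All figures are hypothetical.

def annualized_loss_expectancy(single_loss_expectancy: float,
                               annual_rate_of_occurrence: float) -> float:
    """Expected yearly loss from a given risk."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Hypothetical scenario: a PII breach would cost $2,000,000 (fines,
# remediation, lost business) and is estimated to occur once every 4 years.
ale = annualized_loss_expectancy(2_000_000, 1 / 4)  # $500,000/year

# A control costing $150,000/year that halves the occurrence rate
# saves more than it costs.
residual_ale = annualized_loss_expectancy(2_000_000, 1 / 8)  # $250,000/year
net_benefit = ale - residual_ale - 150_000
print(net_benefit)  # 100000.0 per year in favor of investing
```

The point is not the specific numbers but the discipline: once the risk, impact, and control cost are written down, the investment decision stops being a matter of gut feeling.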
As with so many things in cybersecurity, it truly is never “if” but “when” a breach happens. Even knowing that, and even while watching the publicized breaches each week, companies put blinders on instead of treating their assets as exposed to risk. More companies should analyze the economics of potential breaches in dollar terms and plan accordingly. In a well-funded world, that means investing in infrastructure that can track and identify breach attempts, pings, scans, and everything in between – and that lets senior executives view this data in real time, interpret it, and invest further in protecting the organization. It can’t always be the loudest customer or prospect driving a product’s feature set.
If someone is successful in penetrating a system, they pretty much always stand to make much more money through extortion or theft than they spent on the hack itself. If they can’t hack into a system, they stand to make nothing. The difference is stark: a great reward for a successful hack, a net loss for a failure. Incentives are very strong for an attacker to make even small investments in their toolbox and techniques as they fumble through attempts.
If a company invests no money or effort in preventing hacks of their systems, and nobody ends up hacking them, they will continue to make profits off their regular business endeavors. That’s not much of an incentive for them to change course.
With these incentive mismatches, it’s no surprise that attackers are far more motivated to penetrate systems than companies seem to be inclined to protect themselves. A solid and objective risk analysis will help any company match the right security controls appropriate for them.
Technology and Techniques Change Rapidly
For as long as locks have existed, lock pickers and lock inventors have been in an arms race.
Reading any popular security blog (such as Krebs on Security, or the CyberArk Threat Research blog) should give you a sense of how difficult it is for software engineers to keep up with the changes in their toolkits, or defend against attacks on the innumerable technologies their applications rely on.
The solution to this problem is, counterintuitively, not to stop adopting new technology. Rather, a careful and judicious adoption of new technology can help defend against threats. Hackers always prefer going after old technology with well-established vulnerabilities available for them to exploit.
Software engineers are wrongly expected to be end-all security experts. Security is certainly at the forefront of good coding, and engineers make convenient scapegoats – but they are not the right place to pin the blame. The software industry is moving in a worrying direction by expecting application engineers to also be the ultimate security experts.
Programmers and engineers already have more than enough difficult technical subjects to master just to create applications that are usable, accessible, responsive, resilient – and as secure as they can be within the constraints given to them. Expecting them to also become isolated experts in another domain, without corporate engagement, is a recipe for failure.
This isn’t meant to absolve engineers of their responsibilities in security: they should be aware of common security risks (such as the OWASP Top 10) and how to avoid them; however, placing the weight of security for the whole enterprise solely on their shoulders is unfair. Proven, proactive security experts train for years in their field and have a depth of knowledge that transcends and encompasses the whole range of enterprise operations.
Engineers and security experts should instead work together to build tools that allow applications to be built more safely, with less chance of things being done the wrong way. For example, no reasonably skilled engineer will try to create their own version of the TLS security protocols which are used for secure communications across the Internet. That’s why almost all engineers will use an off-the-shelf, trusted implementation of those protocols rather than try to build their own.
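As a small illustration of that principle, Python ships a trusted TLS implementation (the `ssl` module, backed by OpenSSL) whose recommended defaults already enforce certificate validation and hostname checking – no protocol code for the application engineer to write:

```python
# Using an off-the-shelf TLS implementation rather than rolling your own:
# Python's ssl module, backed by OpenSSL. create_default_context() returns
# a context with secure defaults (certificate required, hostname verified).
import ssl

ctx = ssl.create_default_context()
print(ctx.check_hostname)                      # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)    # True
```

An application then simply wraps its sockets with this context; the hard cryptographic and protocol decisions stay inside the vetted library.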
The software industry just needs to agree on a similar approach for engineers to use when storing passwords and their users’ private information. Many solutions exist, but there are no standards. The best approach will be one that takes this task off the engineer’s plate entirely, allowing them to build application features without worrying about common security issues.
If a System Can Be Used, It Can Be Hacked
The only safe computer system is one that is disconnected from the network, unplugged, thrown in a dumpster, put through an industrial shredder, loaded into an incinerator, and turned into ash.
As glib as that statement may seem, it is sadly quite true. If data exists on a hard drive, there’s a chance that it can be accessed by an attacker. That chance increases greatly if the system is powered on and is reachable on a network. Even if a computer system is completely powered off and physically disconnected, a human could still be bribed or blackmailed into accessing it by a determined attacker.
There is no magic solution to this problem, though. As a society we have accepted the risk because of the benefits afforded to us by turning on all of our computers and making them work for us. What we can do is keep working to secure them and enthusiastically train our fellow humans not to get duped or tricked into helping cybercriminals via the latest attack. Oh, and remind all your friends to change their password – from “Password”.
Larry Letow and Justin Petitt