On June 27, 2017, the NotPetya malware hit Ukraine, targeting Kyivenergo, an electric power supplier to Kiev. NotPetya went on to hit the shipping company Maersk, the pharmaceutical company Merck, and the delivery and distribution company TNT Express, a subsidiary of FedEx. Employees in TNT Express offices were confronted with ransom notes demanding $300 in Bitcoin for the release of their hijacked data. The notes instructed them to send Bitcoin wallet IDs and PINs to a specific address, and gave them less than an hour to respond before the malware rebooted their computers and began destroying their data. Data was being destroyed because NotPetya was actually wiper software, not ransomware.
Once the malware had successfully infected a small number of computers, it spread laterally, propagating throughout the local area network (LAN) and then, via remote procedure calls (RPC), onto wide area networks (WANs). NotPetya continued to spread rapidly across Europe to the UK, and then across the Atlantic to the U.S. At a FedEx office in Memphis, employees were instructed to turn off their computers to limit the damage. At the FedEx hub at the Memphis airport, more than 120 flights were delayed due to the destruction caused by NotPetya.1
NotPetya’s effects snowballed, paralyzing just-in-time delivery systems. In the UK, warehouses had packages piled to the ceilings. Because their computers were down, employees had to fall back on manual processes, and conveyor belts began to break down from overuse. Corporate email was not an option, so employees used WhatsApp to communicate internally.
As the days became weeks, customers lost patience; deliveries of large items such as furniture were particularly affected. Time-sensitive packages caused the most stress. Overwrought brides worried about getting their wedding dresses on time, and students feared failing assignments when computer components broke; but these delays paled in comparison to critical medical supplies that never arrived. One urgent medical equipment shipment was left stranded at an airport.
Between late shipments and the cost of restoring systems and data, FedEx reported an estimated revenue loss of $400 million.2 Like most enterprises, FedEx did not carry cyber insurance to cover its losses, having considered it prohibitively expensive. It also acknowledged that some of the hijacked data might never be recovered.
Twenty-five years ago, the USA Today Network reported: “FedEx founder Frederick W. Smith has long espoused the belief that the information about a package is as important as the package itself.”3 Additionally, Satish Jindel, President of SJ Consulting and ShipMatrix, contends: “Information technology is so critical to the movement of packages. At one time, it was for customer information, to give visibility into where a package is. More and more, without good information, the packages won’t even move through the system.”4
Given our knowledge of the history, it’s interesting to speculate what the outcomes would have been if DevSecOps processes and technology had been implemented before the NotPetya attacks. What could a DevSecOps team have done to prevent, mitigate, and remediate the damage? Let’s re-examine the scenario in that light, step by step.
Beware Third-Party Software
Organizations are beginning to incorporate security into the early stages of design and development instead of treating it as an appliqué applied just prior to production release. For example, healthy DevSecOps practices call for routinely scanning software for security vulnerabilities as it is being developed. Unfortunately, many enterprises take it on faith that third-party vendors are doing the same thing. This faith is often misplaced. NotPetya infected computers during updates of a third-party accounting software package called MeDoc, an attack vector confirmed by forensic investigations conducted by the Ukrainian cyber-crime unit. All software comprising the enterprise environment, whether homegrown, third-party, or open-source, must follow company security practices.
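One way to put this principle into practice is to make the build refuse third-party components with known advisories. The sketch below is a minimal, hypothetical gate: the package names and the advisory set are invented for illustration, and in a real pipeline a software composition analysis tool would supply the advisory data.

```python
# Minimal sketch of a third-party dependency gate for CI.
# Package names and the advisory list are hypothetical, not real data.
KNOWN_VULNERABLE = {
    ("acme-accounting", "10.01.176"),  # invented vulnerable release
}

def audit(dependencies):
    """Return the (name, version) pairs that appear in the advisory list."""
    return sorted(set(dependencies) & KNOWN_VULNERABLE)

# Hypothetical manifest for the build under test.
deps = [("acme-accounting", "10.01.176"), ("left-pad", "1.3.0")]
flagged = audit(deps)
status = "blocked" if flagged else "clean"
```

The point of the sketch is the placement, not the mechanism: the check runs on every build, so a poisoned vendor update is caught before it reaches production rather than discovered after the fact.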
NotPetya also depended on phishing and email attacks for further system compromise, which highlights how human factors impact security. Employees should receive security training upon hiring, reinforced on a periodic basis. Such training can help employees identify phishing emails and spell out acceptable behaviors: for example, it is never acceptable to leave a computer unlocked and unattended, and nobody should be permitted physical access to the office without proper identification. How do you know that the electrician is not a spy?
In addition, if employees had unplugged their computers after the NotPetya ransom text appeared, data would likely not have been lost. DevSecOps, like DevOps, adheres to the core CAMS areas (Culture, Automation, Measurement and Sharing). Essential disciplines in the Culture area include governance, training, policy, documentation, and audits. These basic security hygiene principles could prevent a great many cybersecurity attacks.
NotPetya Moves In and On
NotPetya exploited the same Windows security vulnerability as the WannaCry ransomware. Any system that had not been kept up to date with security patches was vulnerable to NotPetya. (The Equifax breach was made possible partly by this sort of patching failure, in addition to the Apache Struts vulnerability.)
Once the initial computers were infected, NotPetya used hijacked system administration rights to propagate across the network. Had system administration been fully integrated into a DevSecOps culture and environment, it’s likely that NotPetya would have been stopped before it began. Many standard administrative techniques for managing credentials bear directly on network protection, including multi-factor authentication, enforcing minimum privilege, deleting user accounts upon employee termination, and minimizing the number of accounts with domain or local administrative rights. An additional administrative security mechanism that might have stopped NotPetya is securing remote procedure calls (RPC).
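A minimal sketch of what enforcing minimum privilege looks like in code, assuming a hypothetical grants table: every action requires an explicit grant, administrative actions additionally require a verified second factor, and terminated accounts are denied outright. The user names, actions, and data structures are all illustrative.

```python
# Sketch of least-privilege + MFA authorization checks.
# All names here (users, actions) are invented for illustration.
GRANTS = {
    "alice": {"deploy"},
    "bob": {"deploy", "admin"},
}
MFA_VERIFIED = {"bob"}    # accounts that completed a second factor
TERMINATED = {"carol"}    # accounts disabled upon employee termination

def authorize(user, action):
    """Allow an action only with an explicit grant; admin also needs MFA."""
    if user in TERMINATED or action not in GRANTS.get(user, set()):
        return False
    if action == "admin" and user not in MFA_VERIFIED:
        return False
    return True
```

Under this model, stolen credentials from an ordinary workstation do not carry administrative rights, which is precisely the lateral-movement path NotPetya abused.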
Much of this administrative security could have been – and in a robust DevSecOps environment would have been – automated. Automated security stops attackers, just as automated integration and deployment catch defects.
A continuous delivery pipeline must build software security into every process. This lets software engineers make security decisions at the moment they are doing the related work. A variety of tools, technologies, and practices can help ensure a secure environment in each phase of development. The following tools are representative examples:
- GitHub – repository
- Jenkins – continuous integration
- Chef – configuration management
- Gauntlt – security testing
- Selenium – test
- Sonatype – governance
- Veracode – RASP
- Anomaly Detective – anomaly detection
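The pipeline these tools plug into can be sketched abstractly: each stage pairs its build step with a security gate, and the pipeline fails fast at the first gate that does not pass. This is an illustrative sketch only, not a real integration with any of the tools listed above; the stage names and callables are stand-ins.

```python
# Sketch of a delivery pipeline in which every stage carries its own
# security gate and the pipeline stops at the first failure (fail fast).
def run_pipeline(stages):
    """stages: list of (name, build_step, security_check) tuples of callables.
    Returns (completed stage names, name of the stage that failed or None)."""
    completed = []
    for name, step, check in stages:
        if not (step() and check()):
            return completed, name  # stop immediately; report where
        completed.append(name)
    return completed, None

ok = lambda: True
stages = [
    ("integrate", ok, ok),
    ("test",      ok, lambda: False),  # simulated failed security scan
    ("deploy",    ok, ok),
]
done, failed_at = run_pipeline(stages)
```

Because the security check is a first-class part of each stage rather than a final pre-release audit, a failed scan blocks the pipeline at the point where the responsible engineer can still act on it cheaply.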
In the DevSecOps context, automated security testing is built into the CI/CD (continuous integration and continuous deployment) process. Popular tools for enabling that capability include Gauntlt, Mittn, and BDD-Security. It’s important to continually emphasize that DevSecOps isn’t just about technology and automation. The C in CAMS is a reminder of the important role culture plays. Some of the best and most important cultural principles can be found in the Rugged Software Manifesto5 and the DevSecOps Manifesto.6
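To give a concrete flavor of the kind of assertion such tools automate, here is a small sketch in the spirit of those frameworks (but not using any of their actual APIs): it checks an HTTP response for a baseline set of security headers. The required-header set and the sample response are illustrative choices, not a complete hardening checklist.

```python
# Sketch of an automated security test: assert that an HTTP response
# carries baseline security headers. The header set is illustrative.
REQUIRED = {
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "Content-Security-Policy",
}

def missing_headers(response_headers):
    """Return the required security headers absent from a response."""
    present = {h.title() for h in response_headers}
    return sorted(h for h in REQUIRED if h.title() not in present)

# Example headers from a hypothetical staging deployment.
sample = {"Content-Type": "text/html", "X-Content-Type-Options": "nosniff"}
gaps = missing_headers(sample)
```

Run on every deployment, a check like this turns a hardening guideline into a failing build the moment a misconfiguration slips in.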
Red Team Exercises
One practice that enables a culture of continuous security learning is conducting a series of penetration tests or Red Team exercises. Whereas a penetration test identifies a specific target system and objective, a Red Team attacks and hunts for opportunities in the same manner as real-world attackers. Red Team simulations assess the entire security environment, and careful analysis of exercise results can provide information that significantly improves it.
Feedback and learning are enabled by prolific system and environment instrumentation that provides continuous measurement and analysis. The DevOps principle of “fail fast” is useful here: if a failure can be rapidly identified, the lessons learned can be rapidly integrated into the development, security, and operations environments.
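Continuous measurement only enables “fail fast” if anomalies surface quickly. Here is a minimal sketch, assuming a simple z-score test over an invented failed-login metric; a production system would use far richer baselines, but the shape of the feedback loop is the same.

```python
# Sketch of continuous measurement: flag a metric sample that lies far
# outside a rolling baseline. All metric values here are made up.
from statistics import mean, stdev

def is_anomalous(baseline, sample, threshold=3.0):
    """Flag sample if it is more than `threshold` std devs from baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return sample != mu
    return abs(sample - mu) / sigma > threshold

# Hypothetical per-minute failed-login counts under normal traffic.
failed_logins = [4, 6, 5, 7, 5, 6, 4, 5]
```

A spike such as 300 failed logins in a minute would be flagged immediately, giving responders a chance to contain an attack while it is still small rather than after it has spread across the network.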
Breach insurance can be a worthwhile investment. Last year the WannaCry ransomware resulted in an estimated $4 billion in losses, while NotPetya losses hovered around $10 billion. One in four companies admitted to a malware incident in 2017.7 Yet companies may not be taking security investments seriously enough. According to Peter Tsai, Senior Technology Analyst with Spiceworks, companies plan to increase security budgets by 10% for hardware and 7% for software. They plan to spend that money on security appliances, security software, and on securing endpoints, the cloud, and the Internet of Things (IoT). As Tsai put it: “…you have all of these devices that were never meant to have connectivity in the first place, and now that they do, they are designed and manufactured by companies that don’t have any experience in security.”8
Unfortunately, the additional 17% that companies will spend this year on securing information and systems doesn’t reflect the magnitude of potential losses. More importantly, it doesn’t reflect adoption of an integrated security approach like DevSecOps, which might offer enterprises the best opportunity to successfully navigate a hostile cyberspace.
NotPetya, by the way, is not over: new malware called BadRabbit, a variant of NotPetya, is impacting cyberspace now.