Can AI Actually Improve Cybersecurity? What Experts Are Saying

Kayla Matthews

This year, AI spending will grow to nearly $35.8 billion globally.

AI has surged in popularity in most sectors of tech, and cybersecurity isn’t an exception. In fact, according to Capgemini’s 2019 report on cybersecurity and AI, 48% of enterprises say their budget for artificial intelligence will increase in fiscal year 2019. An even greater percentage of enterprises say it will become necessary to defeat future cyberattacks.

However, security experts are divided over whether or not AI can improve cybersecurity. Some are worried that rushing to deploy AI-based systems may make the situation worse.

AI and the State of Cybersecurity

At the 2018 Black Hat Briefing, a major cybersecurity conference, Martin Giles, bureau chief of MIT’s Technology Review, was “struck by the number of companies boasting about how they are using … artificial intelligence” in their new cybersecurity products.

This concerns some experts, who worry about over-reliance on AI. But it makes sense given the state of the industry: the cybersecurity sector is under enormous pressure to adopt algorithms and automation in general.

IT systems are now almost constantly under attack, at a time when the supply of cybersecurity experts has been greatly outstripped by demand. In fact, as many as 350,000 positions may be unfilled throughout Europe. Those currently in the field are therefore forced to pick up the slack, working with little downtime and high stakes.

Last year’s Black Hat, Giles also noted, featured panels with titles like “Mental Health Hacks” and “Holding on for Tonight: Addiction in Infosec.”

A burnout crisis may be looming, where some of the best minds in cybersecurity quit or transition to less-demanding fields.

To most, taking some of the pressure off these workers seems essential. For businesses, automating some cybersecurity processes seems like a pretty good bet. Despite the newness of the technology, AI-based security is good enough for some tech giants. Gmail’s spam filter, for example, already incorporates AI. Right now, there are a few different options for AI-based antiviruses.

Some current AI applications aren’t cybersecurity-specific but still help workers. Consulting firms like Booz Allen Hamilton use AI to manage and optimize their use of human resources, directing staff for better efficiency. The aim is to create less high-pressure situations for their cybersecurity employees.

A Tool, Not a Solution

The excitement around AI is not universal.

Indeed, some commentators, like cybersecurity expert Raffael Marty, don’t have much faith in AI’s ability to improve cybersecurity. He writes that AI is good at finding anomalies, but an anomaly is always defined against the normal. After all, how do you define normal in cybersecurity? Additionally, how do you tell the difference between a manual download and a malicious one triggered by a hacker?

Marty believes that the rapid adoption of AI is less because of its effectiveness and more because of its current popularity. This may create security holes, as cybersecurity firms roll out platforms or software that use AI due to the branding power of AI-based products.

AI algorithms are especially prone to adversarial attacks: inputs crafted using knowledge of the patterns a given model looks for, designed to confuse it.

If you know how an AI detects what sort of animal is in an image, for example, you can use that knowledge to create a picture of a panda that the AI confidently classifies as a gibbon. Because the attack plays on subtleties in how the model learned its patterns, the adversarial image looks completely normal to the human eye.
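The mechanics behind the panda-to-gibbon trick can be sketched in a few lines. This is a minimal toy, not any real vision model: a linear classifier with made-up random weights stands in for the network, and the attack nudges every "pixel" a small, fixed amount against the gradient, in the spirit of the fast-gradient method.

```python
import numpy as np

# Toy linear "image classifier": score > 0 means "panda", else "gibbon".
# The 100 random weights stand in for a trained model; illustrative only.
rng = np.random.default_rng(0)
w = rng.normal(size=100)

def predict(x):
    return "panda" if w @ x > 0 else "gibbon"

# A correctly classified "panda" input (weakly aligned with the weights).
x = 0.01 * w
assert predict(x) == "panda"

# Fast-gradient-style attack: the gradient of the score with respect to
# the input is just w, so move every pixel a small, fixed step in the
# direction that pushes the score down.
epsilon = 0.05
x_adv = x - epsilon * np.sign(w)

# No pixel moved by more than epsilon, yet the predicted label flips.
print(np.max(np.abs(x_adv - x)))  # bounded by epsilon
print(predict(x_adv))
```

The same idea scales to deep networks: because high-dimensional models sum up thousands of tiny per-pixel contributions, a perturbation that is imperceptibly small in each pixel can still swing the total score decisively.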

In a recent example, the cybersecurity firm Skylight used an adversarial attack to beat CylancePROTECT, one of the most popular AI-based antiviruses. By appending just a few lines of text to malicious files, it was able to sneak them past Cylance. Skylight called it a "near-universal bypass" of the algorithm: more than 90% of the modified files sailed through.
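To see why appending text can work at all, consider a hypothetical toy scanner (emphatically not Cylance's actual model) that scores a file by the balance of "suspicious" versus "benign" strings it contains. Appended bytes never execute, so the file's behavior is unchanged, but the score collapses:

```python
# Hypothetical string-based scorer; all feature lists are invented for
# illustration. Real products use far richer features, but the failure
# mode has the same shape: features an attacker can freely append to.
SUSPICIOUS = [b"CreateRemoteThread", b"VirtualAllocEx"]
BENIGN = [b"Copyright", b"Microsoft", b"License"]

def malice_score(data: bytes) -> float:
    bad = sum(data.count(s) for s in SUSPICIOUS)
    good = sum(data.count(s) for s in BENIGN)
    return bad / (bad + good + 1)

malware = b"\x4d\x5a payload CreateRemoteThread VirtualAllocEx"
print(malice_score(malware))   # well above 0.5

# The "attack": pad the file with benign-looking strings.
padded = malware + b" Copyright Microsoft License" * 20
print(malice_score(padded))    # drops close to zero
```

The lesson generalizes: any model whose features the attacker can inflate without changing the file's actual behavior is open to this kind of dilution.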

The full post goes into much deeper detail and also echoes some of Marty's concerns. AI is not a silver bullet, as some had promised. It's just another tool.

Future of AI and Cybersecurity

Some AI-based systems are designed to withstand adversarial attacks. Windows' native antivirus product, Windows Defender ATP, uses several different AI algorithms to detect malicious files and viruses. Microsoft calls this a layered ML (machine learning) approach. As a result, Windows Defender is harder for hackers to beat: an attacker has to defeat several artificial intelligence algorithms at once, each of which may find patterns differently.
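The intuition behind layering can be sketched with a toy majority vote. This is an illustrative sketch, not Microsoft's actual design; the three "detectors" and their thresholds are invented for the example:

```python
# Three crude, independent detectors keying on different features.
def has_packed_header(data: bytes) -> bool:
    return data.startswith(b"UPX")          # common packer signature

def has_injection_api(data: bytes) -> bool:
    return b"CreateRemoteThread" in data    # process-injection API name

def looks_high_entropy(data: bytes) -> bool:
    return len(set(data)) > 40              # many distinct byte values

DETECTORS = [has_packed_header, has_injection_api, looks_high_entropy]

def layered_verdict(data: bytes) -> bool:
    votes = sum(det(data) for det in DETECTORS)
    return votes >= 2   # flag when a majority of layers fire

mal = b"UPX" + bytes(range(64)) + b"CreateRemoteThread"
print(layered_verdict(mal))       # True: all three layers fire
print(layered_verdict(mal[3:]))   # True: stripping the header evades one layer, not the vote
```

Evading one detector leaves the others intact, so an attacker must craft a sample that slips past a majority of layers simultaneously, which is a much harder constraint than fooling a single model.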

Other specific examples of AI being applied to cybersecurity seem few and far between.

Some form of advanced automation is probably going to be necessary in the near future. Unfortunately, short of a major job-training scheme, the cybersecurity skills gap seems unlikely to close, and there's no reason to think cyberattacks will become less frequent, even as the average cost of a data breach continues to rise.

New technologies mean new vulnerabilities. In fact, we see this with Internet of Things devices, which are notorious for their poor security.

So, will AI improve cybersecurity? In some small ways, it already has. However, improper use could make systems less secure. As this technology becomes more sophisticated, hackers improve their own tools, often with the same tech. Staying ahead of malicious hackers remains the goal.