AI & Cyber Security: Finding New Ways To Hack

Just as companies are constantly looking for new ways to build secure software, hackers are constantly finding new ways around the latest defences.

Most security firms see AI and machine learning as the big shiny frontier of cyber security. Almost a third of CISOs adopted some form of AI-based defence in 2018, and the AI security market is projected to grow to $38.2 billion by 2026. What was once a niche technology (with vaguely sci-fi overtones) has quickly gone mainstream.

But as Silicon Valley cashes in, the industry is facing a bit of cognitive whiplash, and a lot of cyber experts are worried about the so-called ‘AI hype cycle’. The pressure to jump on the AI bandwagon is high, and security firms are rolling out AI-based software not because it’s necessarily the best on the market, but because the market expects to see “Machine Learning” on the box. The danger is that the new technology creates a false sense of security.

And all this comes as the industry faces a record number of cyber attacks, an exploding Internet of Things (where even your garden sprinkler might get hacked and weaponised), and a shortage of skilled cyber workers.

But there are bigger problems with widespread AI adoption in cyber circles. In short, there’s a new way to hack. 

Ignore the system. Attack the data.

If AI programs have an Achilles’ heel, it’s this: they’re only as good as the data they’re fed. And that data is open to manipulation, corruption and simple mistakes. A lot of new cyber products rely on “supervised learning”, which basically means it’s up to the firm to label the data sets used to train the algorithm (e.g. by tagging some code as clean and other code as malware). But if products are rushed to market, anomalous data points can slip through the net, creating cyber blind spots. Or hackers could ignore the software entirely, break into the firm’s security systems and corrupt the tags themselves. Bad code becomes magically safe. It’s like cracking a vault by attacking the hinges instead of the door.
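
To make that concrete, here’s a minimal sketch of a label-poisoning attack, assuming a scikit-learn-style workflow. The features, cluster values and the k-nearest-neighbours model are all invented for illustration; the point is simply that whoever controls the tags controls the verdict.

```python
# Toy illustration of data poisoning: the detection model is never touched,
# only the labelled data it learns from. Features are made-up "suspicious
# behaviour scores", not a real malware feature set.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(7)

# Clean training data: benign samples cluster around low feature values,
# malware samples around high ones. Labels: 0 = clean, 1 = malware.
benign = rng.normal(loc=1.0, scale=0.4, size=(200, 2))
malware = rng.normal(loc=4.0, scale=0.4, size=(200, 2))
X_clean = np.vstack([benign, malware])
y_clean = np.array([0] * 200 + [1] * 200)

clean_model = KNeighborsClassifier(n_neighbors=5).fit(X_clean, y_clean)

# A new variant sitting on the edge of the malware cluster.
target = np.array([[4.8, 4.6]])
print("clean model:", clean_model.predict(target))        # -> [1], flagged as malware

# An attacker who reaches the labelling pipeline injects a handful of
# samples that mimic the target but carry the "clean" tag.
poison = target + rng.normal(scale=0.05, size=(20, 2))
X_poisoned = np.vstack([X_clean, poison])
y_poisoned = np.concatenate([y_clean, np.zeros(20, dtype=int)])

poisoned_model = KNeighborsClassifier(n_neighbors=5).fit(X_poisoned, y_poisoned)
print("poisoned model:", poisoned_model.predict(target))   # -> [0], waved through as clean
```

Nothing about the detection software itself was attacked; only the data it learned from.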

Fuzzing for Zero Days

Threat researchers have been using AI algorithms to find Zero Day exploits for years. It’s difficult, very lucrative, and it takes a lot of experience (far more than executing a simple DDoS attack), but fuzzing for vulnerabilities is something AI excels at. The problem is, hackers learn quickly. Fuzzing for Zero Day exploits has been identified as one of the Top 10 security threats of 2019. As AI technology becomes more commonplace, hackers are developing automated fuzzing programs that hunt for exploitable bugs in software and hardware. How fast that’s happening is hard to measure, but with low overheads and limitless scaling, AI fuzzing makes a lot of sense for criminal organisations.
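
Stripped of the AI layer, fuzzing is a simple loop: take a valid input, mutate it at random, throw it at the target and watch for crashes. The sketch below shows only that core loop; the parse_record format and its hidden bug are invented for illustration, and real AI fuzzers add coverage feedback and learned mutation strategies on top.

```python
# Bare-bones mutation fuzzer: mutate a known-good input and log anything
# that crashes the target in an unexpected way.
import random

def parse_record(data: bytes) -> str:
    """Toy parser with a hidden flaw: it blindly trusts the length byte."""
    if len(data) < 2 or data[0] != 0x7F:
        raise ValueError("bad magic")
    length = data[1]
    payload = data[2:2 + length]
    checksum = payload[length - 1]        # IndexError when the record is truncated
    return payload[:-1].decode("latin-1")

def mutate(seed: bytes) -> bytes:
    """Randomly flip, insert or delete a few bytes of a known-good input."""
    data = bytearray(seed)
    for _ in range(random.randint(1, 4)):
        pos = random.randrange(len(data))
        roll = random.random()
        if roll < 0.5:
            data[pos] = random.randrange(256)        # flip a byte
        elif roll < 0.8:
            data.insert(pos, random.randrange(256))  # insert a byte
        elif len(data) > 2:
            del data[pos]                            # drop a byte
    return bytes(data)

seed = bytes([0x7F, 0x05]) + b"hello"                # one valid record
crashes = []
for _ in range(20_000):
    candidate = mutate(seed)
    try:
        parse_record(candidate)
    except ValueError:
        pass                                          # expected rejection
    except Exception as exc:                          # anything else is a finding
        crashes.append((candidate, repr(exc)))

print(f"{len(crashes)} crashing inputs found")
for sample, error in crashes[:3]:
    print(sample, "->", error)
```

What makes the AI variants dangerous is exactly what makes this loop attractive: once written, it runs unattended at whatever scale the attacker can afford.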

The master algorithm

Another risk that experts flag is reliance on any so-called ‘Master Algorithm’. If a security system is driven by a single algorithm, it’s almost impossible to tell when that algorithm becomes compromised. In other words, who watches the watchers? It’s part of the reason Microsoft’s Windows Defender uses interlocking algorithms, trained on different data sets: if one becomes compromised, the others will flag the anomaly. This leads into another problem with AI: explainability. It’s not always clear why an AI spits out certain decisions, or flags certain risks. Complex algorithms have moved beyond human understanding (this isn’t a new phenomenon: AlphaGo was already making moves its creators couldn’t explain back in 2016).
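
As a rough sketch of that interlocking idea (not a description of how Windows Defender is actually built), imagine three different model types trained on different slices of telemetry, voting on each sample, with any disagreement escalated rather than silently trusted. The data, features and models below are all made up.

```python
# Cross-checking models: no single algorithm's verdict is trusted on its own.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Toy telemetry: four numeric features per sample, with a made-up ground
# truth where "malicious" simply means the first two features add up high.
X = rng.normal(size=(600, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Three different model types, each trained on a different slice of the data,
# so one poisoned feed or compromised model cannot decide alone.
models = [LogisticRegression(max_iter=1000), DecisionTreeClassifier(max_depth=4), GaussianNB()]
for i, model in enumerate(models):
    part = slice(i * 200, (i + 1) * 200)
    model.fit(X[part], y[part])

def cross_checked_verdict(sample):
    """Majority vote, but escalate to a human whenever the models disagree."""
    votes = [int(m.predict(sample.reshape(1, -1))[0]) for m in models]
    verdict = int(sum(votes) >= 2)
    status = "ok" if len(set(votes)) == 1 else "ESCALATE: models disagree"
    return verdict, status

print(cross_checked_verdict(np.array([2.0, 2.0, 0.0, 0.0])))    # clear-cut sample
print(cross_checked_verdict(np.array([0.1, -0.1, 0.0, 0.0])))   # borderline sample
```

The value isn’t better accuracy; it’s that a single corrupted data feed or compromised model can no longer pass its verdicts unchallenged.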

Where to next?

Just as companies are constantly looking for new ways to build secure software, hackers are constantly finding new ways around the latest defences. So how do you protect your business? Find out more about skilling up in our six-week Cyber Security Risk and Strategy course.

This article was originally published on 28 May 2019