Is AI really the future of Cyber Security?
There’s an old saying in cyber security: attackers have the edge. KPMG made the point in their 2019 cyber brief: the defender has to be good across the board; the attacker only needs to be good at the spot they’re attacking. That’s the nature of cyber crime, and it’s why more and more cyber firms are turning to AI as the industry’s new silver bullet. A protection system that actually learns from attacks? The potential applications are limitless. But as with most tech leaps in cyber security (from AV scanners to firewalls and Endpoint Protection Platforms), things get complicated very quickly: for one thing, hackers can use AI too.
Was cyber AI overhyped?
2018 was a big year for AI and cyber security. Many firms experimented with AI implementation, but the results were mixed. Forrester reports that AI adoption, while still growing, slowed to 53% last year, and budgets are still out of whack with expected ROI. So-called ‘AI Washing’ is also becoming an issue: like ‘Green Washing’, AI Washing refers to data firms passing off simple algorithms as artificial intelligence, which tends to raise cynicism and hurt industry buy-in. Basically, the technology was overhyped. “As AI accelerates up the Hype Cycle, many software providers are looking to stake their claim in the biggest gold rush of recent years,” says Jim Hare, research VP at Gartner. “Unfortunately, most vendors are focussed on the goal of simply building and marketing an AI-based product, rather than first identifying needs, potential uses and business value to customers.”
The race toward Zero-Day
That’s not to say that AI isn’t doing some interesting things, and you’d be hard-pressed to find a cyber expert who doesn’t realistically expect AI to overhaul the entire industry by 2020. One field that’s looking particularly fruitful is AI’s potential to spot so-called Zero-Day exploits (basically a security flaw that hasn’t been patched by the software vendor – a chink in the digital armour). Finding real Zero-Day exploits has traditionally been very hard: even for brute-force programs, there’s just too much data to crunch. High-profile Zero-Days can now sell for millions of dollars (Zerodium is now offering a $2m bounty for remote iOS jailbreaks). AI’s skills at pattern recognition, automation and data sifting make fuzzing for Zero-Day exploits much faster. But there is a catch: attackers know this too. In fact, AI fuzzing has been identified as one of the Top 10 security threats of 2019. The race is now on: who can find Zero-Day exploits faster? Cyber firms or hackers?
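For readers unfamiliar with the term, here’s a minimal sketch of what classic, non-AI fuzzing looks like: throw randomly mutated inputs at a target and record anything that crashes. The `mutate`, `parse_record` and `fuzz` functions below are purely illustrative stand-ins (including a deliberately planted bug), not any vendor’s tooling; AI-assisted fuzzers essentially replace the blind random mutation with a model that learns which inputs are most likely to reach new, untested code paths.

```python
import random


def mutate(seed: bytes, max_flips: int = 8) -> bytes:
    """Toy mutation strategy: overwrite a few random bytes of the seed input."""
    data = bytearray(seed)
    for _ in range(random.randint(1, max_flips)):
        pos = random.randrange(len(data))
        data[pos] = random.randrange(256)
    return bytes(data)


def parse_record(payload: bytes) -> None:
    """Hypothetical target with a planted bug; a real campaign fuzzes production code."""
    if len(payload) > 2 and payload[0] > 0xF0 and payload[1] < 0x10:
        raise ValueError("unhandled input state")  # stands in for a genuine crash


def fuzz(seed: bytes, iterations: int = 50_000) -> list[bytes]:
    """Feed mutated inputs to the target and keep the ones that crash it."""
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            parse_record(candidate)
        except Exception:
            crashes.append(candidate)
    return crashes


if __name__ == "__main__":
    findings = fuzz(b"\x00" * 16)
    print(f"{len(findings)} crashing inputs found")
```

Even this blind version stumbles onto the planted bug eventually; the promise of AI fuzzing is doing the same against real software without burning millions of wasted iterations.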
Where do humans fit in?
It seems likely the cyber war will soon be fought between competing intelligence algorithms. That’s the way the industry’s trending: almost one third of CIOs have adopted some form of AI cyber defence. So where does that leave cyber security experts? James Hadley, from cyber-learning platform Immersive Labs, believes the answer lies in blending AI power with human creativity. Basically, there are limitations to even the best machine-learning algorithms. For one thing, algorithms are only as good as their data, and that data is open to mistakes, bias, corruption and manipulation. When they’re first deployed, AI solutions are also reliant on humans for guidance and optimisation, learning right from wrong. Cyber experts can also think in ways machines can’t: they’re not limited by rule sets and data boundaries. They can approach problems intuitively, with imagination and unorthodox thinking. “Years in cyber skills training has taught me that the best talent is not necessarily that which has been classically trained,” Hadley writes. “Rather it’s people who bring stubbornness, creativity, abstraction and even downright surrealism to their problem-solving.” Even Elon Musk is coming around to humans over machines.
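To make the “only as good as their data” point concrete, the sketch below (synthetic data and scikit-learn, purely as an illustration rather than any real detection pipeline) trains a toy “malicious traffic” classifier twice: once on clean labels, and once after an attacker has quietly relabelled half of the malicious training samples as benign. The poisoned model’s detection rate drops, even though nothing about the algorithm itself has changed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "traffic" data: label 1 = malicious, label 0 = benign.
X = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(2, 1, (500, 2))])
y = np.array([0] * 500 + [1] * 500)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline detector trained on clean labels.
clean = LogisticRegression().fit(X_train, y_train)

# Poisoned detector: an attacker relabels half of the malicious
# training samples as benign before the model is (re)trained.
y_poisoned = y_train.copy()
malicious_idx = np.where(y_poisoned == 1)[0]
flipped = rng.choice(malicious_idx, size=len(malicious_idx) // 2, replace=False)
y_poisoned[flipped] = 0
poisoned = LogisticRegression().fit(X_train, y_poisoned)

print("clean detection rate:   ", recall_score(y_test, clean.predict(X_test)))
print("poisoned detection rate:", recall_score(y_test, poisoned.predict(X_test)))
```

Spotting that kind of quiet manipulation (and deciding what to do about it) is exactly where human judgement still earns its keep.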
The future of cyber crime
The only thing we know for sure is that cyber attacks will become more sophisticated. Nearly 88% of UK organisations have reported a data breach in the last 12 months. Accenture estimates that cyber crime will cost companies $5.2 trillion over the next five years. You can bet that many of those attacks will be carried out by AI programs. Artificial Intelligence might be cyber security’s great new hope, but it’s also its greatest weakness. Time will tell which side carries the edge.