

Post written by

Ivan Novikov, CEO of Wallarm, a YCombinator-backed AI security startup.

Nowadays, artificial intelligence has become something of a de facto standard. One would be hard-pressed to find an industry where AI or machine learning has found no application. AI projects are popping up everywhere, from law and medicine to farming and the space industry.

Cybersecurity is no exception. As early as 2013, pioneering companies such as Cylance, Darktrace and Wallarm released AI-based cybersecurity products. Since then, the number of security startups using some form of machine learning has grown year after year. These are cyber threat defenders armed with AI, but what about AI-powered attackers?

It would be foolish to assume that attackers and intruders would forgo such an effective tool as AI to make their exploits better and their attacks more intelligent. This is especially true now that so many machine learning technologies can be used out of the box through open-source frameworks like TensorFlow, Torch or Caffe. While I am not an attacker, I can still speculate about what AI-generated exploits might look like, when we can expect them to materialize and how we can protect ourselves from these threats.
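To illustrate how low that barrier has become, here is a minimal sketch, not taken from any real product, that trains a toy binary classifier on synthetic feature vectors with TensorFlow's Keras API. The data, labels and feature count are all invented for illustration.

# Minimal sketch: a few lines of TensorFlow are enough to train a
# binary classifier on feature vectors (e.g., labeling requests as
# benign or malicious). All data here is synthetic.
import numpy as np
import tensorflow as tf

# Synthetic "feature" vectors: 1,000 samples, 20 features each.
X = np.random.rand(1000, 20).astype("float32")
y = (X.sum(axis=1) > 10).astype("int32")  # toy labeling rule

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

print(model.predict(X[:3]))  # probability that each sample is "malicious"

The same handful of calls that a defender uses to flag malicious traffic could just as easily be pointed at a dataset of working exploits by an attacker.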

We got our first glimpse of what AI-powered attacks would look like from DARPA's Cyber Grand Challenge, the world's first all-machine cyber hacking tournament, held two years ago in 2016. That contest proved it was possible to fully automate practical aspects of cybersecurity such as exploit generation, attack launching and patch generation. We can pinpoint this event as the beginning of the era of fully automated cybersecurity.

To understand how machine learning applies to cyberattacks, we need to understand the attack process a little better by formalizing it. I'll attempt to explain, from a technical perspective, what happens when we hear about a data breach. Every successful attack that leads to a data breach can be divided into several stages that the attacker must pass through to make the breach happen:

vulnerability discovery

exploitation

post-exploitation (discovery and exploitation of further vulnerabilities inside the compromised environment)

data theft

This is my own simplification of the famous kill chain model, sketched in code below. Let's look at what happens at each stage to understand how AI can be applied there.
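As a rough illustration of this simplified kill chain, the following hypothetical sketch models the four stages as an ordered enum that an automated tool could step through. The class and function names are mine, not part of any existing framework.

# Hypothetical sketch: the four stages of the simplified kill chain as an
# ordered enum, so an automated tool could track how far an attack has
# progressed. Names are illustrative only.
from enum import IntEnum
from typing import Optional


class AttackStage(IntEnum):
    VULNERABILITY_DISCOVERY = 1
    EXPLOITATION = 2
    POST_EXPLOITATION = 3  # finding and exploiting further flaws inside
    DATA_THEFT = 4


def next_stage(stage: AttackStage) -> Optional[AttackStage]:
    """Return the stage that follows, or None once data theft is reached."""
    if stage is AttackStage.DATA_THEFT:
        return None
    return AttackStage(stage + 1)


print(repr(next_stage(AttackStage.EXPLOITATION)))  # <AttackStage.POST_EXPLOITATION: 3>

Automating an attack end to end means automating the transition from each of these stages to the next, which is exactly what the Cyber Grand Challenge machines demonstrated.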

Vulnerability Discovery
