2018 has been a breakthrough year for the adoption of artificial intelligence (AI) by enterprises. Large organisations are launching vast programs that tap into their massive operational data to automate tasks and decisions, improve operational efficiency and extract powerful insights. Smaller, more agile companies, on the other hand, leverage AI technologies to develop new products. While the rise of AI is often associated with business use, many overlook the vital role it is taking in cybersecurity.
Frost & Sullivan and Microsoft estimate that cybersecurity incidents could cost Asia Pacific a staggering US$1.745 trillion, more than 7% of the region's total GDP. With cyber attacks becoming increasingly expensive and destructive, and cybersecurity expertise remaining too scarce to face the threat alone, AI brings a valuable opportunity to improve the effectiveness of defences… and of attacks. Both sides of the cyber battlefield are set to benefit from AI's scalability and speed.
On the defensive side, specialists see the maturing of AI technology with hope. Last year's study by the Ponemon Institute, "Closing the IT Security Gap with Automation & AI", reveals that the vast majority of security and IT professionals believe that using AI will increase their staff's effectiveness and improve their detection and investigation capabilities.
Indeed, the AI craze is slowly permeating cyber-defences. Open any catalogue of security solutions, and there is a high chance it will claim AI capabilities.
However, before we dive into some of the most common use cases of AI in defensive technology and practices, I believe it is essential to step back and break down the "AI" buzzword. In many discussions, panels and marketing brochures, "AI" has become almost a synonym for magical capabilities: "Throw AI at anything, and it will solve the problem…".
In practice, AI is a broad domain of computer science referring to a technology's ability to imitate intelligent human behaviour. It ranges from task automation, to decisions based on predefined data sets (shallow machine learning (ML)), to self-teaching, brain-like pattern recognition (deep learning (DL)). While most people picture AI capabilities as deep learning, few defensive security solutions truly use the advanced capabilities of DL. Many organisations use the term loosely to mean rules or scans that can be automated or orchestrated; in the absence of a self-learning element, this is mostly shallow ML at play.
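To make that distinction concrete, here is a minimal, illustrative Python sketch (the scenario and all numbers are invented): a hard-coded rule is pure automation, while even a very shallow "ML" approach derives its threshold from historical data instead of having it programmed in.

```python
import statistics

# A hand-coded rule: flag any account with more than a fixed number of
# failed logins. No learning involved -- this is automation, not ML.
def rule_based_flag(failed_logins: int, threshold: int = 5) -> bool:
    return failed_logins > threshold

# A shallow-ML alternative: learn the threshold from historical data by
# flagging counts far above the observed mean (a simple z-score model).
def learn_threshold(history: list[int], z: float = 3.0) -> float:
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid a zero spread
    return mean + z * stdev

history = [0, 1, 0, 2, 1, 0, 3, 1, 0, 2]   # invented baseline data
threshold = learn_threshold(history)        # 1.0 + 3 * 1.0 = 4.0
print(rule_based_flag(4))   # False: below the hard-coded limit
print(42 > threshold)       # True: far outside the learned baseline
```

The rule never changes unless an analyst edits it; the learned threshold shifts automatically as the baseline data shifts, which is the essence of the "shallow ML" the text describes.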
Machine learning has tremendous value in optimising specific, narrow tasks: once the model is defined, ML algorithms can categorise data much faster than a human could. With the typical organisation's IT security team having to monitor hundreds of applications and millions of events a day, it is easy to see how the appropriate use of ML can unlock the human bottleneck, skimming through the massive amount of data so analysts can focus on what matters most.
It is therefore no surprise that ML is increasingly used in SIEMs and User Behavior Analytics platforms to detect anomalies that have not been explicitly programmed by security analysts. Given the sheer amount of data to process in real time, machine learning is also increasingly present in commercial solutions for Application Security testing, Data Classification, Malware Detection and Identity & Fraud prevention.
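As an illustrative sketch only, not a description of how any particular SIEM or UBA product works, a toy behaviour-analytics scorer can surface unusual activity with no hand-written rule at all, simply by scoring how rare each event type is in a user's history (the event names below are invented):

```python
import math
from collections import Counter

# Toy behaviour-analytics scorer: events that are rare in a user's
# history get a high "surprise" score, approximating how UBA platforms
# surface unusual activity without explicitly programmed rules.
def surprise_scores(history: list[str]) -> dict[str, float]:
    counts = Counter(history)
    total = len(history)
    # Surprise = -log2(probability): the rarer the event, the higher.
    return {event: -math.log2(n / total) for event, n in counts.items()}

history = ["login", "read_mail", "login", "read_mail", "login",
           "read_mail", "login", "db_dump"]
scores = surprise_scores(history)
# "db_dump" occurred once in eight events, so it stands out sharply.
print(max(scores, key=scores.get))  # db_dump
```

A real platform models far richer features (time of day, peer groups, sequences), but the principle is the same: the model ranks what is unusual and the analyst decides what is malicious.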
However, while the use of machine learning has massively increased the effectiveness of defensive security solutions against evolving threats, one should not declare victory too fast. As good as the ML models are, these specialised security solutions retain two pitfalls. First, models are defined at a point in time; unless a system is equipped to update its models, there is no guarantee they will remain effective over time. Second, specific security solutions provide specific, siloed protection. Defence in depth requires the capacity to make sense of multiple signals at once and over time, and the AI models for Security Operations Analytics that "tie in" all components and intelligence still depend, today, on in-house expertise.
More concerning, while the range of AI techniques provides great opportunities to defend our organisations, they can be used by adversaries as well. William Dixon, Head of Operations for the World Economic Forum, estimates that the use of AI by cybercriminals will lead to an increased volume of attacks, a faster pace of compromise and even new types of attack. While it is always difficult to assert whether cyberattackers are effectively using AI in their attacks, it is safe to assume that cybercriminal groups leveraging botnets of hundreds of thousands of computers are exploring techniques to improve their capacity to detect new attack vectors and act upon them. Last year's cybersecurity report by Crowdstrike incidentally pointed out that recent attacks unfold in a matter of minutes. That speed requires a sharp capacity to analyse the results of the reconnaissance phase, weaponise and attack.
Last year, our colleagues Justin Smith and Rohit Khera presented the results of their research: using neural networks, they demonstrated that AI could scan source code to detect passwords made public accidentally. With the recent ransom attack against 900 GitHub repositories and the rise of credential stuffing attacks, it is easy to see how this innovative application of AI could serve as a defensive measure as well as an entry point to massive attacks.
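The cited research used neural networks; as a much simpler stand-in for how leaked-secret detection can work, the classic heuristic below flags long, high-entropy tokens in source text. The key-like string and the thresholds are invented for illustration, and real scanners combine many more signals:

```python
import math
import re

# Shannon entropy of a string: random secrets score far higher than
# ordinary identifiers, which reuse a small set of characters.
def shannon_entropy(s: str) -> float:
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

# Flag tokens that are both long and high-entropy -- a crude but
# classic heuristic for accidentally committed credentials.
def find_secret_candidates(source: str, min_len: int = 16,
                           min_entropy: float = 4.0) -> list[str]:
    tokens = re.findall(r"[A-Za-z0-9+/=_\-]{%d,}" % min_len, source)
    return [t for t in tokens if shannon_entropy(t) >= min_entropy]

code = 'aws_key = "AKIAxQ3mZ7pLtY9RbW2cE8vN"\ncounter_variable = 0'
print(find_secret_candidates(code))  # flags only the key-like string
```

The trade-off is visible even in this sketch: `counter_variable` is long enough to be examined but has low entropy, while the random-looking key is flagged. Neural approaches like the one in the research aim to cut the false positives such heuristics inevitably produce.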
More advanced attacks are also possible. With the means of nation states and the increased focus of leading nations on offensive cyberwar capabilities, we can expect AI to be used to find new vulnerabilities and combine them into attack vectors that are currently undetectable. This should be a concerning thought: Verizon's recent Data Breach report notes a sharp increase in attacks from nation states or actors with capabilities close to nation states, and recent history shows that nation-state capabilities often end up in the hands of cybercriminals.
AI also introduces new weaknesses and can itself become the target of attacks. As computers start to learn by themselves or refine their decision criteria, new attacks dubbed "adversarial AI" aim to trick machine learning systems into making the wrong decision or learning a bad model. Aware of the threat that adversarial AI poses, Google has already launched an "AI Fight Club", a competition designed to train ML systems to defend against such attacks.
The pace of technological evolution is increasing exponentially as companies embark on an AI-enabled digital future. As the scope and complexity of IT ecosystems explode, cybersecurity professionals face a new challenge: tackling the transformation of IT on one hand and keeping adversaries with advanced capabilities at bay on the other. The integration of ML capabilities into existing traditional security systems is a welcome evolution that keeps security operations from being overwhelmed by a flood of information. However, point solutions do not make for a comprehensive defence. Organisations must begin taking steps to build the security domain expertise of tomorrow and carve out a culture of experimentation with AI. By allowing organisations to experiment and innovate, we will not only understand better how AI can be used to defend us but also figure out the many ways cybercriminals could weaponise it.