The Potential Use of Artificial Intelligence in Cyberattacks
Artificial intelligence (AI) is quickly becoming a topic discussed in security team meetings and sales presentations. Many mistakenly believe that AI is a silver bullet that will erect substantial barriers against malicious actors. However, AI is ethically neutral: both security teams and cybercriminals can use it. As AI improves our ability to secure our organizations, it also improves attacker capabilities, providing better target opportunities with potentially higher rewards.
Definitions
Confusion surrounds some of the terminology used for AI. Understanding how AI learns helps us decode vendor-speak and know what questions to ask.
AI.
The ability of a computer to perform human-like operations (learning and problem solving) when given an accurate, meaningful, and adequate data set.
Machine learning (ML).
ML is a type of AI in which a computer can learn new approaches to solving problems from data input alone, without specific programming (Furbish, 2018). ML falls into four types: supervised, unsupervised, semi-supervised, and reinforced (Fumo, 2017).
Supervised ML. Humans provide data input and predictors and show the computer the correct answers. In other words, we “try to model relationships and dependencies between the target prediction output and the input features” (Fumo, 2017, Supervised Learning). We provide labeled data sets to a computer with associated decisions. We teach the computer. This approach can be used by intrusion prevention systems to determine if traffic is malicious.
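As a minimal sketch of that teaching step, the Python example below trains a classifier on synthetic, pre-labeled flow features. Everything here is illustrative: the features, data, and thresholds are invented stand-ins, not the workings of any particular product.

```python
# Minimal sketch of supervised ML for traffic classification. The flow
# features and labels are synthetic stand-ins for analyst-labeled data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical data set: 1,000 flows, 3 numeric features each,
# labeled 0 = benign, 1 = malicious by a human analyst.
benign = rng.normal(loc=0.0, scale=1.0, size=(500, 3))
malicious = rng.normal(loc=2.5, scale=1.0, size=(500, 3))
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "teaching" step: the model learns the relationship between the
# input features and the correct answers we supplied.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print(f"Hold-out accuracy: {model.score(X_test, y_test):.2f}")
```

The key point is that the correct answers come from us; the model only generalizes them to traffic it has not seen.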
Unsupervised ML. With no human intervention, the computer tries to “use techniques on the input data to mine for rules; detect patterns; and summarize and group the data points which help in deriving meaningful insights …” (Fumo, 2017, Unsupervised Learning).
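Contrast that with the unsupervised sketch below: no labels are supplied, and the algorithm only groups similar flows together, leaving interpretation to a human. Again, the data is synthetic and purely illustrative.

```python
# Minimal sketch of unsupervised ML: cluster flows by similarity alone.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
flows = np.vstack([
    rng.normal(0.0, 1.0, size=(500, 3)),  # one large behavioral group
    rng.normal(5.0, 1.0, size=(20, 3)),   # a small outlying group
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=1).fit(flows)

# The model reports groupings, not verdicts: an analyst still has to
# decide whether the small cluster is malicious or merely unusual.
for cluster in (0, 1):
    count = int(np.sum(kmeans.labels_ == cluster))
    print(f"cluster {cluster}: {count} flows")
```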
Semi-supervised ML. A combination of supervised and unsupervised ML.
Reinforced ML. In this model, the computer receives feedback indicating whether the conclusions it reached were correct or incorrect. Based on this reinforcement, the computer learns to interpret input data more accurately. The feedback can come from a human or from the operating environment.
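A toy sketch of this feedback loop, using an epsilon-greedy scheme over hypothetical detection rules: the learner never sees the "right answer," only a reward signal, yet its value estimates move toward the hidden success rates.

```python
# Toy sketch of reinforced learning: value estimates are adjusted from
# reward feedback alone (a simple epsilon-greedy bandit).
import random

random.seed(2)
actions = ["rule_a", "rule_b", "rule_c"]                       # hypothetical
true_success = {"rule_a": 0.2, "rule_b": 0.7, "rule_c": 0.4}   # hidden
value = {a: 0.0 for a in actions}
count = {a: 0 for a in actions}

for step in range(1000):
    # Explore occasionally; otherwise exploit the best-known action.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(actions, key=value.get)
    # The environment (or a human) supplies the reinforcement signal.
    reward = 1.0 if random.random() < true_success[action] else 0.0
    count[action] += 1
    value[action] += (reward - value[action]) / count[action]

print({a: round(v, 2) for a, v in value.items()})
```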
Advantages of AI
AI provides advantages over human action alone in two distinct ways: by increasing efficiency and by improving scalability. AI is faster and has a lower total cost of ownership than humans, and it can work 24/7 without taking a break or a vacation. As computing power and storage technology improve, AI can comb through more information, producing an increasing number of findings. While this is very good for the future of security, it also enhances the capabilities of our malicious opponents.
Improved efficiency and scalability provide cybercriminals with three enhancements (Brundage et al., 2018):
- Expansion of existing threats
- Introduction of new threats
- Change to the typical character of threats
Expansion of Existing Threats
Most significant attacks are targeted. One reason for this is the time and effort required to plan, execute, and maintain them. A single attacker or a small team has to collect information, analyze it, plan an attack, and execute the attack without detection. Tools exist to help with this, but both the number of attacks and their effectiveness are curtailed by human limitations and weaknesses.
AI helps with this by automating the reconnaissance phase and expanding the pool of possible targets. Further, previous machine learning about what does and does not work enables AI to select the best attack vector for each specific target. This allows cybercriminals and state actors to increase the number of targets included in a campaign.
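Purely as an illustration of that selection step, the sketch below scores hypothetical (target, vector) pairs with a stand-in for a trained model; every host name, feature, and weight is invented. The point is that prioritization becomes a batch computation rather than weeks of manual analysis.

```python
# Illustrative only: ranking reconnaissance results with a stand-in for a
# trained model. Hosts, features, vectors, and weights are all hypothetical.
targets = {
    "host-a": {"unpatched_services": 3, "exposed_ports": 12, "mfa": 0},
    "host-b": {"unpatched_services": 0, "exposed_ports": 2, "mfa": 1},
    "host-c": {"unpatched_services": 1, "exposed_ports": 25, "mfa": 0},
}
# These weights would normally be learned from past campaign outcomes.
vectors = {
    "phishing": {"unpatched_services": 0.1, "exposed_ports": 0.0, "mfa": -2.0},
    "service_exploit": {"unpatched_services": 1.5, "exposed_ports": 0.2,
                        "mfa": -0.5},
}

def score(features, weights):
    # Stand-in for a model's predicted probability of success.
    return sum(features[name] * w for name, w in weights.items())

for host, feats in targets.items():
    best = max(vectors, key=lambda v: score(feats, vectors[v]))
    print(f"{host}: best vector = {best} (score {score(feats, vectors[best]):.1f})")
```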
Once the AI determines where and how an attack takes place, it can also develop and execute the attack plan. Replacing humans with AI means that more malicious actors can launch sophisticated attacks against the targets identified as most vulnerable. This will likely result in black market data science and AI sites that sell AI services to anyone who can afford them, offering capabilities similar to those of legitimate ML platforms such as TensorFlow.
Introduction of New Threats
One of the characteristics of advanced persistent threats (APTs) is the time commitment often required to penetrate a target network and achieve all attack objectives successfully. Human resources are finite, putting some attack objectives out of reach. AI changes this.
AI can untiringly attempt to gain access to a network, trying different approaches as attempts fail. Over time, unsupervised machine learning can provide an AI computer with petabytes of information about all types of environments: what was encountered and how obstacles were overcome. An AI can apply this knowledge to new attacks to more quickly achieve the human attacker’s goals.
Maintaining access requires patience during attack activities, along with continuous vigilance for detection efforts mounted by the target. AI introduces no emotion, such as impatience, into the attack. Further, an AI can adjust to thwart detection attempts faster than a human trying to interpret and react.
Change to the Typical Character of Threats
In addition to the increased frequency and number of threat sources, threat characteristics change. Again, attackers can target specific weaknesses unique to an organization without weeks of planning. Further, AI can recognize multiple attack vectors leading to a target and effectively manage them over time. This adds to attack effectiveness.
Brundage et al. also suggest that AI will increase the anonymity of attackers. Instead of requiring direct human interaction, AI can provide anonymous attack services to a large number of existing and new malicious actors.
Finally, any time we introduce new technology, someone finds a way to exploit it. AI systems located in our data centers are no different. One example is an attack against the data input into the ML system. Attacks against the data are known as poisoning attacks. In a poisoning attack, incorrect data is fed into the ML training process, causing the AI to make incorrect decisions.
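To make the mechanism concrete, the sketch below poisons the kind of supervised pipeline sketched earlier: relabeling a slice of "malicious" training samples as "benign" measurably degrades the model's decisions on clean test data. The data and proportions are synthetic assumptions, not results from any real system.

```python
# Minimal sketch of a label-flipping poisoning attack on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1.0, (500, 3)),    # benign flows
               rng.normal(2.5, 1.0, (500, 3))])   # malicious flows
y = np.array([0] * 500 + [1] * 500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)

clean_model = LogisticRegression().fit(X_tr, y_tr)

# The poisoning step: relabel 40% of the malicious training samples as
# benign, as if the attacker had tampered with the training data feed.
y_bad = y_tr.copy()
malicious_idx = np.where(y_bad == 1)[0]
flipped = rng.choice(malicious_idx, size=int(0.4 * len(malicious_idx)),
                     replace=False)
y_bad[flipped] = 0
poisoned_model = LogisticRegression().fit(X_tr, y_bad)

print(f"clean model accuracy:    {clean_model.score(X_te, y_te):.2f}")
print(f"poisoned model accuracy: {poisoned_model.score(X_te, y_te):.2f}")
```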
Attackers may also learn enough about how an AI defense works to apply adversarial attacks. Adversarial attacks manipulate the input used by an AI to recognize and react to external conditions and activities (Szegedy et al., 2013; Kazerounian, 2018). These manipulations cause the AI to misinterpret the input.
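The sketch below shows the idea at its simplest, against a linear model trained on the same synthetic data as before: small, repeated perturbations push a clearly malicious sample across the decision boundary until the classifier calls it benign. Attacks on real systems are far more sophisticated, but the principle (manipulate the input, not the system) is the same.

```python
# Minimal sketch of an adversarial (evasion) attack on a linear classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0.0, 1.0, (500, 3)),    # benign
               rng.normal(2.5, 1.0, (500, 3))])   # malicious
y = np.array([0] * 500 + [1] * 500)
model = LogisticRegression().fit(X, y)

sample = np.array([[2.5, 2.5, 2.5]])              # clearly malicious input
w = model.coef_[0]
step = -0.1 * w / np.linalg.norm(w)               # nudge against the boundary

x = sample.copy()
while model.predict(x)[0] == 1:
    x += step                                     # small, repeated perturbations

print("original verdict: ", model.predict(sample)[0])  # 1 = malicious
print("perturbed verdict:", model.predict(x)[0])       # 0 = benign
print("total perturbation:", np.round(x - sample, 2))
```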
In other words, as our ability to use AI to defend against traditional attacks improves, our adversary’s ability to counter our new defenses will also evolve.
Next Steps
If you build it, they will hack it. As we develop ways to combat cyberattacks with AI, attackers will use the same technology to circumvent our efforts. It becomes a perpetual tournament between white hat and black hat AI. As with all tournaments, we win some and we lose some. Consequently, we still need our traditional layered defenses to fill the gaps when our AI controls fail, and they will occasionally fail.
The state of malicious AI today is not yet at a point where we have to rush out and counter it with cutting-edge solutions. However, strategic planning for security must include an incremental path to at least supplementing our current controls with controls that work mainly by learning about and reacting to emerging threats without significant human intervention.
When shopping for a solution, ask the vendor how its machine learning works:
- Is the ML supervised? If so, how often is it updated?
- If it is reinforced, what is your role in providing feedback and how much effort is involved?
- How long does it take to create a large enough data set about your network before the AI can make decisions on its own? Even a traditional IPS has to be tuned over days or weeks, and it isn't even a real player in ML… yet.
- What is the overall accuracy of the solution, how was it measured, and what events could negatively affect its decision making?
- How does the solution prevent data poisoning?
- How does the solution deal with adversarial attacks?
Planning now will help you meet your goals as AI attacks mature.
Works Cited
Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., . . . Ó hÉigeartaigh, S. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. University of Oxford Workshop 2017.
Fumo, D. (2017, June 15). Types of Machine Learning Algorithms You Should Know. Retrieved from Towards Data Science: https://towardsdatascience.com/types-of-machine-learning-algorithms-you-should-know-953a08248861
Furbish, J. (2018, May 3). Machine Learning: A quick and simple definition. Retrieved from O’Reilly: https://www.oreilly.com/ideas/machine-learning-a-quick-and-simple-definition
Kazerounian, S. (2018, Sep 12). Near and long-term directions for adversarial AI in cybersecurity. Retrieved from Vectra: https://blog.vectra.ai/blog/near-and-long-term-directions-for-adversarial-ai-in-cybersecurity
Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., & Fergus, R. (2013). Intriguing properties of neural networks. Retrieved from https://arxiv.org/abs/1312.6199