The Threat Matrix that Affects Machine Learning Systems

Currently, most attacks against artificial intelligence (AI) systems focus on manipulating them. For example, recommendation systems can be manipulated to favor specific content instead of what would legitimately be shown. Cybercriminals are now exploiting these kinds of attacks against machine learning (ML) systems, and they have been catalogued in a threat matrix.

According to Microsoft, attacks on machine learning (ML) systems are steadily increasing. In addition, MITRE notes that, over the last three years, major companies such as Microsoft, Google, Amazon and Tesla have seen their ML systems compromised. Despite this, most organizations do not have the right solutions in place to protect their machine learning systems and are looking for guidance on how to do so.

Threat Matrix that Affects Machine Learning Systems

To address this, experts from Microsoft, MITRE, IBM, NVIDIA, the University of Toronto and the Berryville Institute of Machine Learning, along with other companies and organizations, have created the first version of the Adversarial ML Threat Matrix. Its objective is to provide a threat matrix that helps security analysts detect and respond to these types of attacks.

Artificial intelligence as a method to carry out attacks

Artificial intelligence is increasingly involved in attacks that infect computers, steal information and compromise security. One of the techniques aimed directly at artificial intelligence and machine learning is data poisoning.

This attack manipulates a training data set in order to control the model's predictive behavior. The goal is to make the model malfunction and follow the attacker's intentions, for example by classifying spam emails as legitimate content so that they reach our inbox. This is one example of how artificial intelligence is involved in cyberattacks.
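
As a rough illustration of the idea, the following sketch poisons the training set of a toy spam filter by injecting spam-like messages that are deliberately labeled as legitimate. The dataset, word counts and poisoning volume are invented for the example, and scikit-learn is used only because it is a common ML library; this is a minimal sketch, not how real attacks are carried out.

```python
# Minimal sketch of a data-poisoning attack on a toy spam filter.
# The features and numbers below are hypothetical, chosen only to illustrate the idea.
import numpy as np
from sklearn.naive_bayes import MultinomialNB

# Toy features: counts of the words ["free", "offer", "meeting"] per email.
# Label 1 = spam, 0 = legitimate.
X_clean = np.array([
    [3, 2, 0], [4, 1, 0], [2, 3, 0], [3, 3, 0],   # spam
    [0, 0, 2], [0, 1, 3], [1, 0, 2], [0, 0, 3],   # legitimate
])
y_clean = np.array([1, 1, 1, 1, 0, 0, 0, 0])

# The attacker poisons the training set by injecting spam-looking samples
# that are deliberately mislabeled as legitimate.
X_poison = np.tile([3, 2, 0], (12, 1))
y_poison = np.zeros(12, dtype=int)
X_train = np.vstack([X_clean, X_poison])
y_train = np.concatenate([y_clean, y_poison])

clean_model = MultinomialNB().fit(X_clean, y_clean)
poisoned_model = MultinomialNB().fit(X_train, y_train)

new_spam = np.array([[4, 2, 0]])          # an unseen spam-like message
print(clean_model.predict(new_spam))      # [1] -> flagged as spam
print(poisoned_model.predict(new_spam))   # [0] -> reaches the inbox
```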

Machine learning and security

Machine learning (ML) is a branch of artificial intelligence whose objective is to develop techniques that allow computers to learn on their own. Machine learning researchers look for algorithms that turn data samples into computer programs, without having to write those programs explicitly. The resulting programs must be able to generalize behaviors, make predictions, make decisions or classify things accurately.
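
To make that "learn from samples instead of writing rules" idea concrete, here is a minimal sketch in which a classifier is fitted on a handful of labeled measurements and then generalizes to unseen samples. The tiny dataset and the two "variety" labels are invented purely for illustration.

```python
# Minimal sketch: the program's behavior is learned from labeled examples,
# not written by hand as explicit rules. The data below is invented.
from sklearn.tree import DecisionTreeClassifier

# Each sample: [length_cm, width_cm]; label 0 = "small variety", 1 = "large variety".
X = [[1.4, 0.2], [1.3, 0.2], [1.5, 0.3], [4.7, 1.4], [4.5, 1.5], [5.0, 1.7]]
y = [0, 0, 0, 1, 1, 1]

model = DecisionTreeClassifier().fit(X, y)        # no hand-written rules
print(model.predict([[1.6, 0.25], [4.8, 1.6]]))   # expected: [0 1] on unseen samples
```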

Mikel Rodríguez, a machine learning researcher at MITRE, has commented that we are now at the same stage with AI as we were with the Internet in the late 1980s. At that time, the Internet was designed simply to work; it was not built with security in mind to mitigate possible attacks.

However, we can learn from that mistake, and that is why the Adversarial ML Threat Matrix has been created. The aim is for this matrix to help analysts think holistically and to stimulate better communication, promoting collaboration between organizations by providing a common language for the different vulnerabilities.

What the Adversarial ML Threat Matrix gives us

Thanks to this threat matrix, security administrators can work with models based on real incidents that emulate the behavior of an adversary targeting machine learning systems. To create the matrix, they used MITRE ATT&CK as a template, because security analysts are already familiar with using this type of matrix.

The threat matrix covers the different phases of an attack: reconnaissance, initial access, execution, persistence, evasion, exfiltration and impact. For example, in the second phase, initial access, we find the much-discussed phishing attack. If you want more information about the description of the Adversarial ML Threat Matrix phases, it is available at this link.
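
Purely as an illustration of that structure, the sketch below models the matrix as a simple lookup from attack phase (tactic) to example techniques. The technique names are paraphrased and simplified for the example; they are not the official entries of the Adversarial ML Threat Matrix.

```python
# Illustrative sketch only: attack phases mapped to simplified example techniques.
ADVERSARIAL_ML_MATRIX = {
    "reconnaissance": ["gather public details about the target's ML models"],
    "initial access": ["phishing", "valid accounts", "public ML inference API access"],
    "execution": ["execute code via unsafe ML artifacts"],
    "persistence": ["backdoored (poisoned) model"],
    "evasion": ["craft adversarial examples to evade the model"],
    "exfiltration": ["steal training data or model parameters"],
    "impact": ["degrade model accuracy", "cause targeted misclassification"],
}

def techniques_for(tactic: str) -> list[str]:
    """Return the example techniques recorded for a given attack phase."""
    return ADVERSARIAL_ML_MATRIX.get(tactic.lower(), [])

print(techniques_for("initial access"))
# ['phishing', 'valid accounts', 'public ML inference API access']
```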

An important point to keep in mind is that the threat matrix is not a risk prioritization framework; it only compiles known techniques. Furthermore, real attacks have already been mapped onto the matrix. Lastly, the threat matrix will be updated periodically as feedback is received from the security and adversarial machine learning community. Contributors are also encouraged to point out new techniques, propose best practices, and share examples of successful attacks.