Using AI to provide cybersecurity solutions has received a lot of press in the past two years. The reality is that most “AI cybersecurity” products use Machine Learning (ML) techniques, which are just one subset of the broader range of techniques associated with AI.

ML techniques are being used in several cybersecurity domains, including:

  • spam filtering
  • intrusion detection and prevention
  • botnet detection
  • reputation rating
  • fraud detection

To a much lesser extent, some services are using ML to provide incident forecasting, helping answer questions like: is there observable behavior on the Internet that can be used to estimate the likelihood that a particular organization will be attacked, and to predict the nature of that attack?

ML uses mathematical and statistical functions to extract information from data, and with that information ML tries to predict the unknown. ML uses various algorithms, such as Naive Bayes, Random Forest, Decision Tree, and Deep Learning, to analyze the data.
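
To make the idea concrete, here is a minimal sketch of the Naive Bayes approach applied to spam scoring. The word probabilities and priors below are invented for illustration; a real filter would estimate them from a large labeled corpus.

```python
# Minimal sketch of Naive Bayes spam scoring. All probabilities are
# assumed values for illustration, not estimates from real data.

# P(word | spam) and P(word | ham) for three example tokens
p_word_given_spam = {"prince": 0.20, "invoice": 0.05, "meeting": 0.01}
p_word_given_ham = {"prince": 0.001, "invoice": 0.04, "meeting": 0.10}
p_spam, p_ham = 0.5, 0.5  # assumed class priors

def spam_probability(words):
    """Apply Bayes' rule, treating word occurrences as independent."""
    likelihood_spam = p_spam
    likelihood_ham = p_ham
    for w in words:
        likelihood_spam *= p_word_given_spam.get(w, 1.0)
        likelihood_ham *= p_word_given_ham.get(w, 1.0)
    return likelihood_spam / (likelihood_spam + likelihood_ham)

print(spam_probability(["prince", "invoice"]))   # high, ~0.99
print(spam_probability(["meeting", "invoice"]))  # low, ~0.11
```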

To increase the success of ML, large amounts of training data are generally used. Vendors of ML cybersecurity products typically gather data continuously from their customers, as well as using data generated by researchers.

Typically a form of supervised ML is used, in which a data set labeled with metadata distinguishing valid behaviors from malicious behaviors is used to teach an ML tool to make accurate predictions about related new data. For example, if the data set included 10 million email messages, including their full Internet headers, along with metadata indicating which emails are harmless and which are malicious, the resulting tool may be able to determine whether a newly encountered email message is harmless or malicious.
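
A minimal sketch of that supervised workflow, assuming scikit-learn is available; the four hand-written messages below stand in for the millions of labeled emails described above.

```python
# Supervised learning sketch: train on labeled email text, then
# classify a newly encountered message. Data is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "Dear friend, claim your inheritance now",
    "Urgent: verify your account password",
    "Quarterly report attached for review",
    "Lunch meeting moved to noon tomorrow",
]
labels = [1, 1, 0, 0]  # metadata: 1 = malicious, 0 = harmless

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)  # word counts per message

model = MultinomialNB()
model.fit(features, labels)  # learn from the labeled examples

# Predict the class of a newly encountered message
new_email = vectorizer.transform(["claim your account inheritance"])
print(model.predict(new_email))  # [1] -> flagged as malicious
```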

There are at least three ways in which cybercriminals might defeat ML cybersecurity:

  1. Pollute the training data set so the ML tool has a low rate of success or accuracy (a sketch of this attack follows the list)
  2. Identify bias within the data used for training and design the attack to exploit that bias
  3. Identify bias within the algorithm used to analyze the data and design an attack to exploit that bias
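
The following sketch illustrates the first attack, data-set poisoning, by flipping a fraction of training labels in a synthetic data set and measuring the drop in accuracy. scikit-learn, the synthetic data, and the specific rates are all assumptions for illustration.

```python
# Data poisoning sketch: an attacker who can corrupt training labels
# degrades the accuracy of the resulting model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for poison_rate in (0.0, 0.2, 0.4):
    y_poisoned = y_train.copy()
    n_flipped = int(poison_rate * len(y_poisoned))
    rng = np.random.default_rng(0)
    flip = rng.choice(len(y_poisoned), size=n_flipped, replace=False)
    y_poisoned[flip] = 1 - y_poisoned[flip]  # attacker flips these labels

    model = DecisionTreeClassifier(random_state=0).fit(X_train, y_poisoned)
    print(f"poisoned {poison_rate:.0%} of labels -> "
          f"accuracy {model.score(X_test, y_test):.2f}")
```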

To provide a simple example, training data might indicate that email purporting to be from a Nigerian prince, written in incorrect English grammar, is spam. As a result, attackers might decide to make their email appear to come from an established insurance company, using phrasing that has appeared in legitimate email from the real insurance company. This is one way to exploit training-data bias. In theory, larger data sets from a wide variety of sources should lower the bias, but this is not always true.
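
A sketch of that bias at work, again assuming scikit-learn and invented messages: every spam example in the tiny training set carries the telltale “prince” wording, so a message that instead mimics legitimate insurer phrasing slips past the filter.

```python
# Exploiting training-data bias: the filter keys on tokens that only
# appear in the biased spam examples, so insurer-style phrasing evades it.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "I am a Nigerian prince who need your help moving funds",  # spam
    "prince fortune transfer urgent need your bank detail",    # spam
    "Your policy renewal documents are enclosed",              # ham
    "Thank you for insuring your vehicle with us",             # ham
]
labels = [1, 1, 0, 0]

vec = CountVectorizer()
model = MultinomialNB().fit(vec.fit_transform(emails), labels)

# Attack message mimics the legitimate insurer's phrasing
attack = ["Your policy renewal is overdue, click here to pay"]
print(model.predict(vec.transform(attack)))  # [0] -> passes as harmless
```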

Some products use a variety of algorithms and training data sets to provide a higher level of confidence that a single bias, or a single set of compromised data, won’t undermine the overall integrity of the product.
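
One common way to combine algorithms like this is majority voting, sketched below with scikit-learn's VotingClassifier; the data set is synthetic and the choice of three estimators is illustrative.

```python
# Ensemble sketch: a majority vote across Naive Bayes, a decision tree,
# and a random forest, so one biased model is less likely to decide
# the final verdict on its own.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)

ensemble = VotingClassifier(
    estimators=[
        ("nb", GaussianNB()),
        ("tree", DecisionTreeClassifier(random_state=1)),
        ("forest", RandomForestClassifier(random_state=1)),
    ],
    voting="hard",  # each model gets one vote; the majority wins
)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))
```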