Anomaly detection is the task of finding data points that deviate from the norm, that is, points that do not follow the expected patterns of the data. Such unusual data points are often called outliers or exceptions. Anomaly detection is important in a variety of fields because it yields valuable, actionable insights. An abnormality in an MR imaging scan, for instance, might indicate a tumorous region in the brain, while an anomalous readout from a manufacturing plant sensor could indicate a broken component.

After going through this tutorial, you will be able to:

  • Define and understand anomaly detection.
  • Implement anomaly detection algorithms and analyze and interpret their results.
  • Spot hidden patterns in data that may point to anomalous behavior.

Let’s get started.

Anomaly Detection with Isolation Forest and Kernel Density Estimation
Photo by Katherine Chase. Some rights reserved.

What is Anomaly Detection?

An outlier is simply a data point that deviates considerably from the rest of the data points in a particular dataset. Accordingly, anomaly detection is the process of identifying these outliers: the points that deviate considerably from the bulk of the data.

Large datasets may contain very complex patterns that cannot be detected by simply looking at the data. The study of anomaly detection is therefore of great significance for building reliable, critical machine learning applications.

Types of Anomalies

In data science, anomalies are commonly classified into three types. Understanding them correctly can have a big impact on how you handle anomalies.

  • Point or Global Anomalies: The most common form of anomaly, these are individual data points that differ significantly from the rest of the data. Global anomalies are usually found far from the mean or median of the data distribution.
  • Contextual or Conditional Anomalies: These anomalies have values that differ dramatically from the other data points in the same context. A value that is anomalous in one dataset or context may be perfectly normal in another.
  • Collective Anomalies: Outlier objects that are tightly clustered because they share the same outlier character are referred to as collective outliers. For example, your server is not under cyber-attack on a daily basis, so a sustained burst of suspicious traffic would be considered a collective outlier.

While there are a number of techniques used for anomaly detection, let’s implement a few to understand how they can be used for various use cases.

Isolation Forest

Just like random forests, isolation forests are built from decision trees. They are trained in an unsupervised fashion, as there are no pre-defined labels. Isolation forests were designed around the idea that anomalies are “few and different” data points in a dataset.

Recall that decision trees are built using information criteria such as Gini impurity or entropy. Obviously different groups are separated near the root of the tree, while subtler distinctions are identified deeper into the branches. An isolation forest, by contrast, splits randomly subsampled data in a tree structure based on randomly selected features. Samples that travel deeper into the tree, requiring more cuts to isolate, are unlikely to be anomalies. Likewise, samples that end up on shorter branches are more likely to be anomalies, since the tree found it easier to separate them from the rest of the data.

In this section, we will implement an isolation forest in Python to understand how it detects anomalies in a dataset. We will use the scikit-learn library, which provides a convenient, easy-to-use implementation, to demonstrate its effectiveness for anomaly detection.

First off, let’s load up the necessary libraries and packages.
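A minimal set of imports for this example might look like the following (scikit-learn and matplotlib are assumed to be installed):

```python
# Libraries for data generation, modeling, and plotting
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.ensemble import IsolationForest
```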

Data Preparation

We’ll use the make_blobs() function to create a dataset of random data points.
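One way to generate such a dataset is sketched below; the sample count, cluster spread, and random seed are illustrative choices, not values from the original:

```python
# Generate 500 two-dimensional points in a single cluster
X, _ = make_blobs(n_samples=500, centers=1, cluster_std=2.0, random_state=17)
```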

Let’s visualize the dataset plot to see the data points separated randomly in a sample space.
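A quick scatter plot does the job:

```python
# Scatter plot of the raw data points in the 2-D sample space
plt.scatter(X[:, 0], X[:, 1], s=10)
plt.title("Randomly generated data points")
plt.show()
```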

[Figure: scatter plot of the randomly generated data points]

Defining and Fitting the Isolation Forest Model for Prediction

As mentioned, we’ll use the IsolationForest class from the scikit-learn API to define our model. In the class arguments, we set the number of estimators and the contamination value, i.e., the expected proportion of outliers in the dataset. Then we call the fit_predict() function to fit the model and obtain predictions for the dataset.
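A sketch of this step; the values for n_estimators and contamination are illustrative assumptions:

```python
# contamination sets the expected proportion of outliers in the data
iforest = IsolationForest(n_estimators=100, contamination=0.03, random_state=17)

# fit_predict() fits the model and returns 1 for inliers, -1 for outliers
pred = iforest.fit_predict(X)
```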

Now, let’s extract the samples predicted as -1 (the outliers) and plot the results with the anomalies highlighted in a different color.
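For example:

```python
# Points labelled -1 are the predicted anomalies
outliers = X[pred == -1]

# Plot all points, then overlay the anomalies in a second color
plt.scatter(X[:, 0], X[:, 1], s=10, label="inliers")
plt.scatter(outliers[:, 0], outliers[:, 1], s=20, c="red", label="anomalies")
plt.legend()
plt.show()
```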

[Figure: data points with detected anomalies highlighted in color]

Putting it all together, the following is the complete code:
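As in the snippets above, the dataset size, hyperparameters, and random seed here are illustrative assumptions:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.ensemble import IsolationForest

# Create a toy dataset of 500 two-dimensional points
X, _ = make_blobs(n_samples=500, centers=1, cluster_std=2.0, random_state=17)

# Fit the isolation forest and predict: 1 = inlier, -1 = outlier
iforest = IsolationForest(n_estimators=100, contamination=0.03, random_state=17)
pred = iforest.fit_predict(X)
outliers = X[pred == -1]

# Visualize the data with anomalies highlighted
plt.scatter(X[:, 0], X[:, 1], s=10, label="inliers")
plt.scatter(outliers[:, 0], outliers[:, 1], s=20, c="red", label="anomalies")
plt.legend()
plt.show()
```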

Kernel Density Estimation

If we assume that the norm of a dataset follows some probability distribution, then anomalies are the points we expect to see only rarely, i.e., with very low probability. Kernel density estimation is a technique that estimates the probability density function of the data points in the sample space. With the estimated density function, we can detect anomalies in a dataset by flagging points that fall in low-density regions.

For the implementation, we’ll prepare the data by sampling from a uniform distribution and then apply the KernelDensity class from the scikit-learn library to detect outliers.

To start, we’ll load the necessary libraries and packages.
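For this example, the following imports suffice:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neighbors import KernelDensity
```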

Prepare and Plot the Data

Let’s write a simple function to prepare the dataset. Randomly generated data will be used as the target dataset.
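A minimal sketch of such a function, assuming a 1-D uniform sample; the range, sample count, and seed are illustrative choices:

```python
def prepare_data(n_samples=200, seed=17):
    # Draw 1-D samples from a uniform distribution and reshape into a
    # column vector, since scikit-learn expects a 2-D input array
    rng = np.random.default_rng(seed)
    x = rng.uniform(low=-5.0, high=5.0, size=n_samples)
    return x.reshape(-1, 1)

X = prepare_data()
```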

Let’s visualize the data to check the dataset.
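For instance, plotting each sample value against its index:

```python
# Plot each sample value against its index to inspect the spread
plt.scatter(np.arange(len(X)), X[:, 0], s=10)
plt.title("Randomly generated uniform data")
plt.show()
```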

[Figure: plot of the randomly generated uniform data points]

Prepare and Fit the Kernel Density Function for Prediction

We’ll use the scikit-learn API to prepare and fit the model. Then we’ll use the score_samples() function to get the scores of the samples in the dataset. Next, we’ll use NumPy’s quantile() function to obtain a threshold value.
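A sketch of these steps; the Gaussian kernel, the bandwidth, and the 3% quantile are illustrative assumptions:

```python
# Fit a kernel density model to the data
kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(X)

# score_samples() returns the log-density of each sample under the model
scores = kde.score_samples(X)

# Treat the lowest 3% of densities as anomalous
threshold = np.quantile(scores, 0.03)
```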

Samples with scores equal to or lower than the threshold are flagged as anomalies and then visualized with the anomalies highlighted in a different color:
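```python
# Flag samples whose log-density is at or below the threshold
mask = scores <= threshold
idx = np.arange(len(X))

# Plot all points, then overlay the low-density anomalies in red
plt.scatter(idx, X[:, 0], s=10, label="inliers")
plt.scatter(idx[mask], X[mask, 0], s=20, c="red", label="anomalies")
plt.legend()
plt.show()
```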

[Figure: data points with low-density anomalies highlighted in color]

Putting it all together, the following is the complete code:
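Again, the dataset, kernel bandwidth, quantile threshold, and seed are illustrative assumptions:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neighbors import KernelDensity

def prepare_data(n_samples=200, seed=17):
    # Draw 1-D uniform samples, reshaped to a column vector for scikit-learn
    rng = np.random.default_rng(seed)
    x = rng.uniform(low=-5.0, high=5.0, size=n_samples)
    return x.reshape(-1, 1)

X = prepare_data()

# Fit the density model and score every sample (log-density)
kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(X)
scores = kde.score_samples(X)

# Flag the lowest 3% of densities as anomalies
threshold = np.quantile(scores, 0.03)
mask = scores <= threshold

# Visualize the data with anomalies highlighted
idx = np.arange(len(X))
plt.scatter(idx, X[:, 0], s=10, label="inliers")
plt.scatter(idx[mask], X[mask, 0], s=20, c="red", label="anomalies")
plt.legend()
plt.show()
```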

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

APIs

  • sklearn.datasets.make_blobs API. https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_blobs.html
  • sklearn.ensemble.IsolationForest API. https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.IsolationForest.html
  • sklearn.neighbors.KernelDensity API. https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KernelDensity.html

Summary

In this tutorial, you discovered how to detect anomalies in your dataset.

Specifically, you learned:

  • How to define anomalies and their different types
  • What Isolation Forest is and how to use it for anomaly detection
  • What Kernel Density Estimation is and how to use it for anomaly detection
