Anomaly Detection with Isolation Forest and Kernel Density Estimation
By Nick Cotes
Anomaly detection is the task of finding data points that deviate from the norm. In other words, these are points that do not follow the expected patterns. Outliers and exceptions are terms used to describe such unusual data. Anomaly detection is important in a variety of fields because it provides valuable and actionable insights. An abnormality in an MR imaging scan, for instance, might indicate a tumorous region in the brain, while an anomalous readout from a manufacturing plant sensor could indicate a broken component.
After going through this tutorial, you will be able to:
Define and understand anomaly detection.
Implement anomaly detection algorithms and analyze and interpret their results.
See hidden patterns in data that may indicate anomalous behavior.
Let’s get started.
Anomaly Detection with Isolation Forest and Kernel Density Estimation Photo by Katherine Chase. Some rights reserved.
What is Anomaly Detection?
An outlier is simply a data point that deviates considerably from the rest of the data points in a particular dataset. Similarly, anomaly detection is the process of identifying these outliers, that is, the points that deviate considerably from the bulk of the other data points.
Large datasets may contain very complex patterns that cannot be detected by simply looking at the data. Therefore, the study of anomaly detection is of great significance when building critical machine learning applications.
Types of Anomalies
In the data science domain, anomalies are commonly classified in three different ways. Understanding them correctly can have a big impact on how you handle anomalies.
Point or Global Anomalies: These are data points that differ significantly from the rest of the data and are the most common form of anomaly. Usually, global anomalies are found very far away from the mean or median of the data distribution.
Contextual or Conditional Anomalies: These anomalies have values that differ dramatically from those of the other data points in the same context. Anomalies in one dataset may not be anomalies in another.
Collective Anomalies: Outlier objects that are tightly clustered together because they share the same anomalous character are referred to as collective outliers. For example, your server is not under cyber-attack on a daily basis, so the group of data points generated during an attack, considered together, would be regarded as a collective outlier.
While there are a number of techniques used for anomaly detection, let’s implement a few to understand how they can be used for various use cases.
Isolation Forest
Just like random forests, isolation forests are built using decision trees. They are trained in an unsupervised fashion, as there are no pre-defined labels. Isolation forests were designed with the idea that anomalies are “few and distinct” data points in a dataset.
Recall that decision trees are built using splitting criteria such as the Gini index or entropy. The obviously different groups are separated at the root of the tree, and the subtler distinctions are identified deeper in the branches. Based on randomly picked features, an isolation forest processes randomly subsampled data in a tree structure. Samples that travel deeper into the tree, requiring more cuts to isolate them, have a very low probability of being anomalies. Likewise, samples that end up on the shorter branches of the tree are more likely to be anomalies, since the tree found it simpler to separate them from the other data.
In this section, we will implement an isolation forest in Python to understand how it detects anomalies in a dataset. We will use the scikit-learn API, which provides a convenient implementation, to demonstrate its effectiveness for anomaly detection.
First off, let’s load up the necessary libraries and packages.
from sklearn.datasets import make_blobs
from numpy import quantile, random, where
from sklearn.ensemble import IsolationForest
import matplotlib.pyplot as plt
Data Preparation
We’ll be using the make_blobs() function to create a dataset with random data points.
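The corresponding code is not reproduced in this article, so the snippet below is a minimal sketch of how the dataset could be generated; the sample count, cluster_std, and random_state values are illustrative assumptions rather than the original settings.

# Generate one Gaussian blob of 300 two-dimensional points;
# make_blobs() also returns cluster labels, which we do not need here.
X, _ = make_blobs(n_samples=300, centers=1, cluster_std=0.3, random_state=42)

# Quick look at the generated points
plt.scatter(X[:, 0], X[:, 1])
plt.show()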
Defining and Fitting the Isolation Forest Model for Prediction
As mentioned, we’ll use the IsolationForest class from the scikit-learn API to define our model. In the class arguments, we’ll set the number of estimators and the contamination value. Then we’ll use the fit_predict() method to fit the model to the dataset and obtain its predictions.
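The article’s original code for this step is likewise not shown, so the following is a plausible sketch; n_estimators=100 and contamination=0.03 are assumed values, not necessarily the article’s settings. fit_predict() labels anomalous points as -1 and normal points as 1.

# assumed hyperparameters for illustration only
model = IsolationForest(n_estimators=100, contamination=0.03, random_state=42)
# fit the model and predict: -1 for anomalies, 1 for normal points
preds = model.fit_predict(X)

# Highlight the detected anomalies in red on top of the full dataset
anomaly_index = where(preds == -1)
anomalies = X[anomaly_index]
plt.scatter(X[:, 0], X[:, 1])
plt.scatter(anomalies[:, 0], anomalies[:, 1], color='r')
plt.show()

With a contamination value of 0.03, roughly 3% of the points are flagged as anomalies; adjusting this parameter trades off false positives against missed anomalies.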
Kernel Density Estimation
If we consider that the normal data in a dataset follows some kind of probability distribution, then anomalies are the points we expect to see only rarely, i.e., with very low probability. Kernel density estimation is a technique that estimates the probability density function of the data points over the sample space. With the estimated density function, we can detect anomalies in a dataset.
For the implementation, we’ll prepare the data by sampling from a uniform distribution and then apply the KernelDensity class from the scikit-learn library to detect outliers.
To start, we’ll load necessary libraries and packages.
from sklearn.neighbors import KernelDensity
from numpy import where, random, array, quantile
import matplotlib.pyplot as plt
Prepare and Plot the Data
Let’s write a simple function to prepare the dataset. Randomly generated data will be used as the target dataset.
random.seed(135)

def prepData(N):
    # Generate N points with a slight upward trend plus uniform noise;
    # occasional values are pushed far from the bulk to act as anomalies.
    X = []
    for i in range(N):
        A = i / 1000 + random.uniform(-4, 3)
        R = random.uniform(-5, 10)
        if R >= 8.6:
            R = R + 10
        elif R < -4.6:
            R = R + (-9)
        X.append([A + R])
    return array(X)

n = 500
X = prepData(n)
Let’s plot the data to inspect the dataset.
x_ax = range(n)
plt.plot(x_ax, X)
plt.show()
Prepare and Fit the Kernel Density Function for Prediction
We’ll use the scikit-learn API to prepare and fit the model. Then we’ll use the score_samples() function to get the scores of the samples in the dataset. Note that score_samples() returns the log of the probability density of each sample, so lower (more negative) scores indicate lower density. Next, we’ll use the quantile() function to obtain the threshold value.
kern_dens = KernelDensity()
kern_dens.fit(X)
scores = kern_dens.score_samples(X)
threshold = quantile(scores, 0.02)
print(threshold)
-5.676136054971186
Samples with scores equal to or lower than the obtained threshold are flagged as anomalies and then visualized, with the anomalies highlighted in red:
idx = where(scores <= threshold)
values = X[idx]
plt.plot(x_ax, X)
plt.scatter(idx, values, color='r')
plt.show()
Putting all these together, the following is the complete code:
from sklearn.neighbors import KernelDensity
from numpy import where, random, array, quantile
import matplotlib.pyplot as plt

random.seed(135)

def prepData(N):
    # Generate N points with a slight upward trend plus uniform noise;
    # occasional values are pushed far from the bulk to act as anomalies.
    X = []
    for i in range(N):
        A = i / 1000 + random.uniform(-4, 3)
        R = random.uniform(-5, 10)
        if R >= 8.6:
            R = R + 10
        elif R < -4.6:
            R = R + (-9)
        X.append([A + R])
    return array(X)

n = 500
X = prepData(n)

x_ax = range(n)
plt.plot(x_ax, X)
plt.show()

kern_dens = KernelDensity()
kern_dens.fit(X)
scores = kern_dens.score_samples(X)
threshold = quantile(scores, 0.02)
print(threshold)

idx = where(scores <= threshold)
values = X[idx]
plt.plot(x_ax, X)
plt.scatter(idx, values, color='r')
plt.show()
Further Reading
This section provides more resources on the topic if you are looking to go deeper.
Summary
In this tutorial, you discovered how to detect anomalies in your dataset.
Specifically, you learned:
How to define anomalies and their different types
What is Isolation Forest and how to use it for anomaly detection
What is Kernel Density Estimation and how to use it for anomaly detection