Develop a Neural Network for Banknote Authentication



It can be challenging to develop a neural network predictive model for a new dataset.

One approach is to first inspect the dataset and develop ideas for what models might work, then explore the learning dynamics of simple models on the dataset, then finally develop and tune a model for the dataset with a robust test harness.

This process can be used to develop effective neural network models for classification and regression predictive modeling problems.

In this tutorial, you will discover how to develop a Multilayer Perceptron neural network model for the banknote binary classification dataset.

After completing this tutorial, you will know:

  • How to load and summarize the banknote dataset and use the results to suggest data preparations and model configurations to use.
  • How to explore the learning dynamics of simple MLP models on the dataset.
  • How to develop robust estimates of model performance, tune model performance and make predictions on new data.

Let’s get started.

Develop a Neural Network for Banknote Authentication
Photo by Lenny K Photography, some rights reserved.

Tutorial Overview

This tutorial is divided into 4 parts; they are:

  1. Banknote Classification Dataset
  2. Neural Network Learning Dynamics
  3. Robust Model Evaluation
  4. Final Model and Make Predictions

Banknote Classification Dataset

The first step is to define and explore the dataset.

We will be working with the “Banknote” standard binary classification dataset.

The banknote dataset involves predicting whether a given banknote is authentic given a number of measures taken from a photograph.

The dataset contains 1,372 rows with 5 numeric variables. It is a classification problem with two classes (binary classification).

Below is a list of the five variables in the dataset.

  • variance of Wavelet Transformed image (continuous).
  • skewness of Wavelet Transformed image (continuous).
  • kurtosis of Wavelet Transformed image (continuous).
  • entropy of image (continuous).
  • class (integer).

Below is a sample of the first 5 rows of the dataset.
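For reference, the sample below shows the first five rows as they appear in the widely distributed CSV copy of the dataset (no header row); if your copy differs, the exact values may vary slightly.

3.6216,8.6661,-2.8073,-0.44699,0
4.5459,8.1674,-2.4586,-1.4621,0
3.866,-2.6383,1.9242,0.10645,0
3.4566,9.5228,-4.0112,-3.5944,0
0.32924,-4.4552,4.5718,-0.9888,0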

You can learn more about the dataset at the UCI Machine Learning Repository: https://archive.ics.uci.edu/ml/datasets/banknote+authentication

We can load the dataset as a pandas DataFrame directly from the URL; for example:
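A minimal sketch of this step is below; the URL is an assumption, pointing at a commonly used GitHub mirror of the UCI dataset, so substitute your own copy if needed.

# load the banknote dataset and summarize its shape
from pandas import read_csv
# assumed mirror of the UCI banknote authentication dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/banknote_authentication.csv'
df = read_csv(url, header=None)
# report the number of rows and columns
print(df.shape)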

Running the example loads the dataset directly from the URL and reports the shape of the dataset.

In this case, we can confirm that the dataset has 5 variables (4 input and one output) and that the dataset has 1,372 rows of data.

This is not many rows of data for a neural network and suggests that a small network, perhaps with regularization, would be appropriate.

It also suggests that using k-fold cross-validation would be a good idea, given that it will give a more reliable estimate of model performance than a train/test split, and because a single model will fit in seconds instead of hours or days with the largest datasets.

Next, we can learn more about the dataset by looking at summary statistics and a plot of the data.
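A sketch of this step is below, using the same assumed URL as before.

# summary statistics and histograms of the banknote dataset
from pandas import read_csv
from matplotlib import pyplot
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/banknote_authentication.csv'
df = read_csv(url, header=None)
# print mean, standard deviation, min/max and quartiles for each variable
print(df.describe())
# plot a histogram for each variable
df.hist()
pyplot.show()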

Running the example first loads the data as before and then prints summary statistics for each variable.

We can see that values vary with different means and standard deviations; perhaps some normalization or standardization would be required prior to modeling.

A histogram plot is then created for each variable.

We can see that perhaps the first two variables have a Gaussian-like distribution and the next two input variables may have a skewed Gaussian distribution or an exponential distribution.

We may have some benefit in using a power transform on each variable to make the probability distribution less skewed, which will likely improve model performance.

Histograms of the Banknote Classification Dataset

Now that we are familiar with the dataset, let’s explore how we might develop a neural network model.

Neural Network Learning Dynamics

We will develop a Multilayer Perceptron (MLP) model for the dataset using TensorFlow.

We cannot know what model architecture or learning hyperparameters would be good or best for this dataset, so we must experiment and discover what works well.

Given that the dataset is small, a small batch size is probably a good idea, e.g. 16 or 32 rows. Using the Adam version of stochastic gradient descent is a good idea when getting started as it will automatically adapt the learning rate and works well on most datasets.

Before we evaluate models in earnest, it is a good idea to review the learning dynamics and tune the model architecture and learning configuration until we have stable learning dynamics, then look at getting the most out of the model.

We can do this by using a simple train/test split of the data and reviewing plots of the learning curves. This will help us see if we are over-learning or under-learning; then we can adjust the configuration accordingly.

First, we must ensure all input variables are floating-point values and encode the target label as integer values 0 and 1.

Next, we can split the dataset into input and output variables, then into 67/33 train and test sets.
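A sketch of these preparation steps is below, continuing from the DataFrame loaded earlier and using scikit-learn's LabelEncoder and train_test_split.

from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
# split into input and output columns
X, y = df.values[:, :-1], df.values[:, -1]
# ensure all input variables are floating-point values
X = X.astype('float32')
# encode the target label as integer values 0 and 1
y = LabelEncoder().fit_transform(y)
# split into 67/33 train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)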

We can define a minimal MLP model. In this case, we will use one hidden layer with 10 nodes and one output layer (chosen arbitrarily). We will use the ReLU activation function in the hidden layer and the “he_normal” weight initialization, as together, they are a good practice.

The output of the model is a sigmoid activation for binary classification, and we will minimize binary cross-entropy loss.

We will fit the model for 50 training epochs (chosen arbitrarily) with a batch size of 32 because it is a small dataset.
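A sketch of the model definition and training configuration described above is given below, using the tf.keras Sequential API and continuing from the prepared train and test sets.

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
# determine the number of input features (4 for this dataset)
n_features = X_train.shape[1]
# one hidden layer with 10 nodes, ReLU activation and he_normal initialization
model = Sequential()
model.add(Dense(10, activation='relu', kernel_initializer='he_normal', input_shape=(n_features,)))
# sigmoid output for binary classification
model.add(Dense(1, activation='sigmoid'))
# minimize binary cross-entropy with the Adam optimizer
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# fit for 50 epochs with batch size 32, tracking loss on the test set for the learning curves
history = model.fit(X_train, y_train, epochs=50, batch_size=32, verbose=0, validation_data=(X_test, y_test))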

We are fitting the model on raw data, which we think might not be a good idea, but it is an important starting point.

At the end of training, we will evaluate the model’s performance on the test dataset and report performance as the classification accuracy.

Finally, we will plot learning curves of the cross-entropy loss on the train and test sets during training.

Tying this all together, the complete example of evaluating our first MLP on the banknote dataset is listed below.
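The listing below is a complete sketch under the assumptions stated above: the assumed dataset URL, 10 hidden nodes, 50 epochs, and a batch size of 32.

# evaluate a simple mlp on the banknote dataset and plot learning curves
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from matplotlib import pyplot
# load the dataset (assumed mirror of the UCI data)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/banknote_authentication.csv'
df = read_csv(url, header=None)
# split into input and output columns
X, y = df.values[:, :-1], df.values[:, -1]
# ensure all input values are floating point
X = X.astype('float32')
# encode the target label as integers 0 and 1
y = LabelEncoder().fit_transform(y)
# split into 67/33 train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)
# define the model
n_features = X_train.shape[1]
model = Sequential()
model.add(Dense(10, activation='relu', kernel_initializer='he_normal', input_shape=(n_features,)))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# fit the model and record train/test loss per epoch
history = model.fit(X_train, y_train, epochs=50, batch_size=32, verbose=0, validation_data=(X_test, y_test))
# evaluate on the test set
loss, acc = model.evaluate(X_test, y_test, verbose=0)
print('Accuracy: %.3f' % acc)
# plot learning curves of cross-entropy loss
pyplot.title('Learning Curves')
pyplot.xlabel('Epoch')
pyplot.ylabel('Cross Entropy')
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='test')
pyplot.legend()
pyplot.show()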

Running the example first fits the model on the training dataset, then reports the classification accuracy on the test dataset.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and comparing the average outcome.

In this case, we can see that the model achieved great or perfect accuracy of 100 percent. This might suggest that the prediction problem is easy and/or that neural networks are a good fit for the problem.

Line plots of the loss on the train and test sets are then created.

We can see that the model appears to converge well and does not show any signs of overfitting or underfitting.

Learning Curves of Simple Multilayer Perceptron on the Banknote Dataset

We did amazingly well on our first try.

Now that we have some idea of the learning dynamics for a simple MLP model on the dataset, we can look at developing a more robust evaluation of model performance on the dataset.

Robust Model Evaluation

The k-fold cross-validation procedure can provide a more reliable estimate of MLP performance, although it can be very slow.

This is because k models must be fit and evaluated. This is not a problem when the dataset size is small, such as the banknote dataset.

We can use the StratifiedKFold class and enumerate each fold manually, fit the model, evaluate it, and then report the mean of the evaluation scores at the end of the procedure.

We can use this framework to develop a reliable estimate of MLP model performance with our base configuration, and even with a range of different data preparations, model architectures, and learning configurations.

It is important that we first developed an understanding of the learning dynamics of the model on the dataset in the previous section before using k-fold cross-validation to estimate the performance. If we started to tune the model directly, we might get good results, but if not, we might have no idea of why, e.g. that the model was overfitting or underfitting.

If we make large changes to the model again, it is a good idea to go back and confirm that the model is converging appropriately.

The complete example of using this framework to evaluate the base MLP model from the previous section is listed below.
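A sketch of the procedure follows; the text does not state the number of folds, so k=10 is an assumption, and the model matches the base configuration from the previous section.

# k-fold cross-validation of the base mlp model on the banknote dataset
from numpy import mean, std
from pandas import read_csv
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
# load and prepare the dataset (assumed mirror of the UCI data)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/banknote_authentication.csv'
df = read_csv(url, header=None)
X, y = df.values[:, :-1], df.values[:, -1]
X = X.astype('float32')
y = LabelEncoder().fit_transform(y)
# enumerate the folds manually, fitting and evaluating a fresh model on each
scores = list()
kfold = StratifiedKFold(n_splits=10)
for train_ix, test_ix in kfold.split(X, y):
    X_train, X_test, y_train, y_test = X[train_ix], X[test_ix], y[train_ix], y[test_ix]
    # define and fit the base model
    model = Sequential()
    model.add(Dense(10, activation='relu', kernel_initializer='he_normal', input_shape=(X.shape[1],)))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    model.fit(X_train, y_train, epochs=50, batch_size=32, verbose=0)
    # evaluate on the held-out fold
    _, acc = model.evaluate(X_test, y_test, verbose=0)
    print('> %.3f' % acc)
    scores.append(acc)
# report the mean and standard deviation across folds
print('Mean Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))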

Running the example reports the model performance for each iteration of the evaluation procedure and reports the mean and standard deviation of classification accuracy at the end of the run.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and comparing the average outcome.

In this case, we can see that the MLP model achieved a mean accuracy of about 99.9 percent.

This confirms our expectation that the base model configuration works very well for this dataset, and indeed the model is a good fit for the problem; perhaps the problem is quite trivial to solve.

This is surprising (to me) considering I would have expected some data scaling and perhaps a power transform to be required.

Next, let’s look at how we might fit a final model and use it to make predictions.

Final Model and Make Predictions

Once we choose a model configuration, we can train a final model on all available data and use it to make predictions on new data.

In this case, we will use the base model configuration, with a small batch size, as our final model.

We can prepare the data and fit the model as before, although on the entire dataset instead of a training subset of the dataset.

We can then use this model to make predictions on new data.

First, we can define a row of new data.

Note: I took this row from the first row of the dataset and the expected label is a ‘0’.

We can then make a prediction.

Then we invert the transform on the prediction, so we can use or interpret the result as the correct label (which is just an integer for this dataset).

And in this case, we will simply report the prediction.

Tying this all together, the complete example of fitting a final model for the banknote dataset and using it to make a prediction on new data is listed below.
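The listing below is a complete sketch of this step; the row of new data is the first row of the dataset quoted earlier, with an expected label of '0'.

# fit a final model on the entire banknote dataset and make a prediction
from numpy import asarray
from pandas import read_csv
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
# load and prepare the dataset (assumed mirror of the UCI data)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/banknote_authentication.csv'
df = read_csv(url, header=None)
X, y = df.values[:, :-1], df.values[:, -1]
X = X.astype('float32')
le = LabelEncoder()
y = le.fit_transform(y)
# define and compile the base model
model = Sequential()
model.add(Dense(10, activation='relu', kernel_initializer='he_normal', input_shape=(X.shape[1],)))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')
# fit the model on the entire dataset
model.fit(X, y, epochs=50, batch_size=32, verbose=0)
# define a row of new data (first row of the dataset, expected label 0)
row = [3.6216, 8.6661, -2.8073, -0.44699]
# predict the probability of class 1 and round to a crisp label
yhat = model.predict(asarray([row]))
# invert the label transform to recover the original class value
label = le.inverse_transform([int(round(yhat[0][0]))])
print('Predicted: %d' % label[0])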

Running the example fits the model on the entire dataset and makes a prediction for a single row of new data.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and comparing the average outcome.

In this case, we can see that the model predicted a “0” label for the input row.


Summary

In this tutorial, you discovered how to develop a Multilayer Perceptron neural network model for the banknote binary classification dataset.

Specifically, you learned:

  • How to load and summarize the banknote dataset and use the results to suggest data preparations and model configurations to use.
  • How to explore the learning dynamics of simple MLP models on the dataset.
  • How to develop robust estimates of model performance, tune model performance and make predictions on new data.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.



