Evolving Neural Net classifiers



As a research interest, I play with evolutionary algorithms a lot. Recently I’ve been messing around with Neural Nets that are evolved rather than trained with backpropagation.

Because this is a blog post, and to further demonstrate that even the simplest of algorithms can drive evolution, I'm going to be using a hill climbing algorithm. Here's the gist of it, followed by a toy sketch of the loop.

  1. Start with a Neural Network with random weights.
  2. Clone the network, pick a weight and change it to a random number.
  3. Evaluate the old network and the new network to get their scores.
  4. If the new network did at least as well as the old one, replace the old network with it.
  5. Repeat until the results are satisfactory.
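
To make the recipe concrete before wiring it up to a network, here's a minimal sketch of the same loop on a toy problem. The objective f and the parameter list are made up purely for illustration:

import random

def f(params):
    # Toy objective: the closer every parameter is to zero, the better.
    return -sum(p * p for p in params)

params = [random.uniform(-2, 2) for _ in range(5)]
for _ in range(1000):
    candidate = list(params)
    # Mutate one randomly chosen parameter.
    candidate[random.randrange(len(candidate))] = random.uniform(-2, 2)
    # Keep the mutant if it does at least as well as the current solution.
    if f(candidate) >= f(params):
        params = candidate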

The algorithm

The algorithm is shown below. All it does is split the given data into training and test parts, randomly mutate the neural network weights while keeping any change that doesn't hurt the score, and then use the test data to measure how well we did.

def train_and_test(X, y, nn_size, iterations=1000, test_size=None, stratify=None):
    random.seed(445)
    np.random.seed(445)
    net = NeuralNetwork(nn_size)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=test_size, random_state=42, stratify=stratify
    )

    # Hill climbing: score the current network once, then keep a mutated
    # clone whenever it scores at least as well.
    score = net.get_score(X_train, y_train)
    for _ in range(iterations):
        new = net.clone()
        new.mutate()
        new_score = new.get_score(X_train, y_train)

        if new_score >= score:
            net = new
            score = new_score

    print(f"Training set: {len(X_train)} elements. Error: {score}")

    score = net.get_classify_score(X_test, y_test)

    print(f"Test set: {score} / {len(X_test)}. Score: {score / len(X_test) * 100}%")

Iris flower dataset

If you are learning about classifiers, the Iris flower dataset is probably the first thing you'll test on. It's basically the "Hello World" of classification.

The dataset includes petal and sepal size measurements from 3 different Iris species. The goal is to take a set of measurements and classify which species it came from.

You can find more information on the dataset here.

data = pandas.read_csv("IRIS.csv").values

name_to_output = {
    "Iris-setosa": [1, 0, 0],
    "Iris-versicolor": [0, 1, 0],
    "Iris-virginica": [0, 0, 1],
}

rows = data.shape[0]
data_input = data[:, 0:4].reshape((rows, 4, 1)).astype(float)
data_output = np.array(list(map(lambda x: name_to_output[x], data[:, 4]))).reshape(
    (rows, 3)
)

train_and_test(data_input, data_output, (4, 4, 3), 10000, 0.2)
Training set: 120 elements. Error: -5.697678436657024
Test set: 29 / 30. Score: 96.66666666666667%

96% accuracy isn't bad for such a simple algorithm. But that's with 120 training samples and 30 test samples. Let's see how well it generalizes by turning our train/test split into 0.03/0.97.

As you can see below, after training on just 4 samples, our network is able to classify the rest of the data with 94% accuracy.

train_and_test(data_input, data_output, (4, 4, 3), 10000, 0.97)
Training set: 4 elements. Error: -0.8103166051741318
Test set: 138 / 146. Score: 94.52054794520548%

Cancer diagnosis dataset

This dataset includes some measurements about tumors, and classifies them as either Benign (B) or Malignant (M).

You can find the dataset and more information about it here.

data = pandas.read_csv("breast_cancer.csv").values[1:]

rows = data.shape[0]

name_to_output = {"B": [1, 0], "M": [0, 1]}

data_input = data[:, 2:32].reshape((rows, 30, 1)).astype(float) / 100
data_output = np.array(list(map(lambda x: name_to_output[x], data[:, 1]))).reshape(
    (rows, 2)
)

train_and_test(data_input, data_output, (30, 30, 15, 2), 10000, 0.3)
Training set: 397 elements. Error: -5.626705318006574
Test set: 159 / 171. Score: 92.98245614035088%

To see if the network is able to generalize, let's train it on 11 samples and test it on 557. You can see below that it reaches 86% accuracy after seeing only a tiny number of samples.

train_and_test(data_input, data_output, (30, 30, 15, 2), 10000, 0.98)
Training set: 11 elements. Error: -0.2742514647152907
Test set: 481 / 557. Score: 86.35547576301616%

Glass classification dataset

This dataset has some material measurements, like how much of each element was found in a piece of glass. Using these measurements, the goal is to classify which of the glass types (labelled 1 through 7) the sample came from.

This dataset doesn't separate cleanly, and there aren't many samples to work with. So I cranked up the iteration count and added more hidden layers. Deep learning, baby!

You can find more information on the dataset here.

data = pandas.read_csv("glass.csv").values[1:]

rows = data.shape[0]
data_input = data[:, :-1].reshape((rows, 9, 1)).astype(float)
data_output = np.array(list(map(lambda x: np.eye(8)[int(x)], data[:, -1]))).reshape((rows, 8))

train_and_test(data_input, data_output, (9, 9, 9, 9, 8), 20000, 0.3, stratify=data_output)
Training set: 149 elements. Error: -8.261249669954738
Test set: 47 / 64. Score: 73.4375%

I wasn't thrilled when I first saw this result. But after going through the other solutions on Kaggle and looking at their results, I found that this isn't bad compared to other classifiers.

But where’s the Neural Network code?

Here it is. While it's a large chunk of code, I find it the least interesting part of the project: it's basically a bunch of matrices getting multiplied and mutated randomly, and you can find plenty of tutorials and examples like it on the internet.

import numpy as np
import random
import pandas
from sklearn.model_selection import train_test_split

class NeuralNetwork:
    def __init__(self, layer_sizes):
        self.layer_sizes = layer_sizes
        # One weight matrix per pair of adjacent layers, scaled by the
        # square root of the fan-in to keep initial activations reasonable.
        weight_shapes = [(a, b) for a, b in zip(layer_sizes[1:], layer_sizes[:-1])]
        self.weights = [
            np.random.standard_normal(s) / s[1] ** 0.5 for s in weight_shapes
        ]
        self.biases = [np.random.rand(s, 1) for s in layer_sizes[1:]]

    def predict(self, a):
        # Forward pass: sigmoid(W·a + b) through each layer in turn.
        for w, b in zip(self.weights, self.biases):
            a = self.activation(np.matmul(w, a) + b)
        return a

    def get_classify_score(self, images, labels):
        # Count the predictions that put the highest output on the correct class.
        predictions = self.predict(images)
        num_correct = sum(
            np.argmax(a) == np.argmax(b) for a, b in zip(predictions, labels)
        )
        return num_correct

    def get_score(self, images, labels):
        # Fitness for hill climbing: the negative Euclidean norm of the
        # prediction error, so a perfect network would score 0.
        predictions = self.predict(images)
        predictions = predictions.reshape(predictions.shape[0:2])
        return -np.linalg.norm(predictions - labels)

    def clone(self):
        # Copy each array individually; np.copy on a list of
        # differently-shaped arrays would build a fragile object array.
        nn = NeuralNetwork(self.layer_sizes)
        nn.weights = [w.copy() for w in self.weights]
        nn.biases = [b.copy() for b in self.biases]
        return nn

    def mutate(self):
        # Mutate 1-4 weights, heavily favouring a single change.
        for _ in range(self.weighted_random([(20, 1), (3, 2), (2, 3), (1, 4)])):
            # Pick a layer with probability proportional to its weight count,
            # then overwrite one random weight with a fresh random value.
            l = self.weighted_random([(l.flatten().shape[0], i) for i, l in enumerate(self.weights)])
            shape = self.weights[l].shape
            layer = self.weights[l].flatten()
            layer[np.random.randint(0, layer.shape[0])] = np.random.uniform(-2, 2)
            self.weights[l] = layer.reshape(shape)

            # Occasionally mutate a bias as well.
            if np.random.uniform() < 0.01:
                b = self.weighted_random([(b.flatten().shape[0], i) for i, b in enumerate(self.biases)])
                shape = self.biases[b].shape
                bias = self.biases[b].flatten()
                bias[np.random.randint(0, bias.shape[0])] = np.random.uniform(-2, 2)
                self.biases[b] = bias.reshape(shape)

    @staticmethod
    def activation(x):
        # Sigmoid, squashing each activation into (0, 1).
        return 1 / (1 + np.exp(-x))

    @staticmethod
    def weighted_random(pairs):
        # Pick a value with probability proportional to its weight.
        total = sum(pair[0] for pair in pairs)
        r = np.random.randint(1, total + 1)
        for weight, value in pairs:
            r -= weight
            if r <= 0:
                return value


Citation

If you find this work useful, please cite it as:
@article{yaltirakli201905evolvingneuralnetclassifiers,
  title   = "Evolving Neural Net classifiers",
  author  = "Yaltirakli, Gokberk",
  journal = "gkbrk.com",
  year    = "2019",
  url     = "https://www.gkbrk.com/2019/05/evolving-neural-net-classifiers/"
}
