Improving the accuracy of a neural network for digit recognition

I’m attempting to code a neural network to recognise handwritten characters, currently using the MNIST digit database. I’m specifically using a reduced 14x14 resolution.

I created a new neural network which is initialised with the number of inputs, the number of hidden layers and the number of outputs. I then train it on the dataset using an 80/20 train/test split. The sigmoid function is precomputed and stored in an array of 200 values which I can then look up.
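Roughly, the lookup works like this (a sketch rather than my exact code: the clamped input range of -10..10 is an assumption, and I’ve written the activation as a bipolar sigmoid in (-1, 1) to match the (1 - output) * (1 + output) derivative used in train() further down):

float[] sigmoidTable = new float[200];

// Precompute the activation over an assumed input range of [-10, 10]
void buildSigmoidTable() {
  for (int i = 0; i < sigmoidTable.length; i++) {
    float x = map(i, 0, sigmoidTable.length - 1, -10, 10);
    sigmoidTable[i] = 2.0 / (1.0 + exp(-x)) - 1.0; // bipolar sigmoid in (-1, 1)
  }
}

// Clamp the input to the precomputed range and return the nearest table entry
float lookupSigmoid(float x) {
  int index = int(map(constrain(x, -10, 10), -10, 10, 0, sigmoidTable.length - 1));
  return sigmoidTable[index];
}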

The issue I have is that it’s currently not very accurate, and it’s probably due to the pre-processing I’m using for the characters I draw myself. I use a 100x100 square, crop around the coloured pixels with an x and y padding of 9 pixels, then use Processing’s resize() function to scale it down to the correct size. I am aware that this causes particular issues with certain characters, especially the number 1, which ends up stretched horizontally.
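The cropping step is roughly this (a sketch of what I described; the function name is a placeholder and I’m treating the bright pixels as the ink):

// Crop the 100x100 drawing around the inked pixels with 9px padding,
// then let resize() squash it to 14x14 -- this is where the distortion comes from
PImage cropAndResize(PImage drawing) {
  drawing.loadPixels();
  int minX = drawing.width, minY = drawing.height, maxX = 0, maxY = 0;

  // bounding box of the bright ("coloured") pixels
  for (int y = 0; y < drawing.height; y++) {
    for (int x = 0; x < drawing.width; x++) {
      if (red(drawing.pixels[y * drawing.width + x]) > 50) {
        minX = min(minX, x);
        maxX = max(maxX, x);
        minY = min(minY, y);
        maxY = max(maxY, y);
      }
    }
  }

  // apply the padding, clamped to the canvas
  int pad = 9;
  minX = max(minX - pad, 0);
  minY = max(minY - pad, 0);
  maxX = min(maxX + pad, drawing.width - 1);
  maxY = min(maxY + pad, drawing.height - 1);

  PImage cropped = drawing.get(minX, minY, maxX - minX + 1, maxY - minY + 1);
  cropped.resize(14, 14); // stretches both axes independently
  return cropped;
}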

The first square grid is the training set’s current card, then the testing set’s current card, then the user input.

This one digit aside, the rest display with acceptable proportions.


I then threshold the pixels above a certain value and pass them through the network.

Card loadImage(PImage image){
  
  tempCard = new Card();
  image.loadPixels(); // make sure the pixel array is up to date
  
  for (int i = 0; i < 196; i++) {
      float r = red(image.pixels[i]);
      if (r > 50) tempCard.inputs[i] = map(r + 20, 0, 255, -1, 1); // bright enough: scale into [-1, 1]
      else tempCard.inputs[i] = -1;                                // below the threshold: background
    }
  return tempCard;
}

Now I may not have trained it long enough and that might be the reason for the errors; however, even after just 1,000,000 training cycles it’s able to guess correctly about 90% of the time on the training set. And yet when I pass it hand-drawn digits it fails very often. It clearly seems to be a problem with scaling, as it recognises small digits fairly well but has great difficulty with digits which take up nearly the whole drawing space.

As you can see, apart from the slight horizontal distortion the numbers seem to come out fairly well.
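I’m wondering whether, instead of stretching the crop on both axes, I should resize it so it keeps its aspect ratio and then paste it centred into the 14x14 grid, which is roughly how the MNIST images themselves are normalised (digits fitted into a 20x20 box and centred in the 28x28 image). A sketch of what I have in mind (the 10-pixel inner box is just my guess at a sensible 14x14 equivalent):

// Scale the cropped digit to fit a 10x10 box without changing its aspect ratio,
// then paste it centred on a black 14x14 canvas (MNIST centres by centre of mass;
// here I just centre the bounding box, which is simpler)
PImage fitAndCentre(PImage cropped) {
  int inner = 10;
  float scale = inner / (float) max(cropped.width, cropped.height);
  int w = max(1, round(cropped.width * scale));
  int h = max(1, round(cropped.height * scale));
  cropped.resize(w, h);

  PImage canvas = createImage(14, 14, RGB);
  canvas.loadPixels();
  for (int i = 0; i < canvas.pixels.length; i++) canvas.pixels[i] = color(0);
  canvas.updatePixels();

  canvas.copy(cropped, 0, 0, w, h, (14 - w) / 2, (14 - h) / 2, w, h);
  return canvas;
}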



Many thanks, guys.

Also, whilst I’m at it, any chance someone could check through the neuron class I coded up? It uses code from various sources, and as it didn’t originally contain a bias variable I’m not sure I’ve implemented it correctly. It doesn’t seem to impact the result greatly and still produces around 90+% accuracy, depending on what learning rate I use and how deep the hidden layers are.

/*

 Charles Fried - 2017
 ANN Tutorial 
 Part #2
 
 NEURON
 
 This class is for the neural network, which is hard coded with three layers: input, hidden and output
 
 */


class Neuron {

  Neuron [] inputs; // Stores the neurons from the previous layer
  float [] weights,biases;
  float output,error,errorW,errorB;

  Neuron() {
    error = 0.0;
  }

  Neuron(Neuron [] p_inputs) {

    inputs = new Neuron [p_inputs.length];
    weights = new float [p_inputs.length];
    biases = new float [p_inputs.length];
    
    error = 0.0;
    errorB = 0.0;
    errorW = 0.0;
    
    for (int i = 0; i < inputs.length; i++) {
      
      inputs[i] = p_inputs[i];
      weights[i] = random(-1.0, 1.0);
      biases[i] = random(-1.0, 1.0);
    }
  };

  void respond() {

    float input = 0.0;
    float bias = 0.0;
    for (int i = 0; i < inputs.length; i++) {
      
      input += inputs[i].output * weights[i];
      bias += inputs[i].output * biases[i];
    }
    output = lookupSigmoid(input+ bias);
    error = 0.0;
    errorW = 0;
    errorB = 0;
  };
  
  void respondDeep(){
    float input = 0.0;
    float bias = 0.0;
    for (int i = 0; i < inputs.length; i++) {
      input += inputs[i].output * weights[i] ;
      bias += inputs[i].output * biases[i];
    }
    output = lookupSigmoid(input+bias);
    error = 0;
    errorW = 0;
    errorB = 0;
  };
  
  

  void setError(float desired) {
    error = desired - output;
    //errorW = error - errorB;
  };

  void train() {

    float delta = (1.0 - output) * (1.0 + output) * error * LEARNING_RATE;
    
    for (int i = 0; i < inputs.length; i++) {
      
      inputs[i].error += (weights[i] )* error + biases[i] * error;
      //inputs[i].errorW += (weights[i] )* errorW ;
      //inputs[i].errorB +=  biases[i] * errorB;
      //inputs[i].errorB += (biases[i] )* error;
      
      weights[i] += inputs[i].output * delta;
    }
  };

  void display() {
    stroke(200);
    rectMode(CENTER);
    fill(128 * (1 - output));
    rect(0, 0, 16, 16);
  };
  
};

There’s only one bias per neuron, not one for each weight. The calculation should be output = activation(sum of inputs[i] * weights[i] + bias), i.e. the single bias is added once to the weighted sum.
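A rough sketch of what that looks like with your class (constructor and display code omitted; LEARNING_RATE and lookupSigmoid are the ones from your own code):

class Neuron {
  Neuron[] inputs;
  float[] weights;
  float bias;        // one bias per neuron, not one per weight
  float output, error;

  void respond() {
    float sum = bias;                        // the bias is added once
    for (int i = 0; i < inputs.length; i++) {
      sum += inputs[i].output * weights[i];
    }
    output = lookupSigmoid(sum);
    error = 0.0;
  }

  void train() {
    float delta = (1.0 - output) * (1.0 + output) * error * LEARNING_RATE;
    for (int i = 0; i < inputs.length; i++) {
      inputs[i].error += weights[i] * error;  // propagate the error back
      weights[i] += inputs[i].output * delta; // update the weight
    }
    bias += delta;                            // the bias behaves like a weight on a constant input of 1
  }
}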

As an activation function, ReLU would probably work better than sigmoid. It’s much faster to calculate, and at least in deep neural networks it gives about as good results as sigmoid.
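For example (a sketch; in your code this would replace lookupSigmoid() and the (1.0 - output) * (1.0 + output) derivative term):

// ReLU is just max(0, x) -- no exp() and no lookup table needed
float relu(float x) {
  return max(0.0, x);
}

// Its derivative, used when computing the training delta
float reluDerivative(float x) {
  return x > 0 ? 1.0 : 0.0;
}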

Multilayer perceptrons are not that good at handling scaling or translation; convolutional networks work much better with images. Still, you should be able to get ~98% training accuracy on the MNIST data with a multilayer perceptron.

Thanks, I did notice this and made the change. Is there a recommended convolution? I saw a solution online which used max pooling, so I could try that. Also I’m assuming that I simply haven’t trained it enough and should probably leave it running all night or something.
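For reference, the max pooling step in the solution I saw was essentially this (my sketch of it, pooling 2x2 blocks over the 14x14 input; the exact sizes are assumptions):

// 2x2 max pooling over a square size x size input, halving each dimension
float[] maxPool(float[] input, int size) {
  int outSize = size / 2;
  float[] output = new float[outSize * outSize];
  for (int y = 0; y < outSize; y++) {
    for (int x = 0; x < outSize; x++) {
      // keep the largest of the four values in each 2x2 block
      float m = input[(2 * y) * size + (2 * x)];
      m = max(m, input[(2 * y) * size + (2 * x + 1)]);
      m = max(m, input[(2 * y + 1) * size + (2 * x)]);
      m = max(m, input[(2 * y + 1) * size + (2 * x + 1)]);
      output[y * outSize + x] = m;
    }
  }
  return output;
}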

Can you explain a bit more about the limitation? One might assume that more layers would mean better prediction.

Again thanks for the input.

Hi,
is your code public? Would you mind sharing it? Thanks a lot.
Best wishes,
m.