Hi, I'm trying to program a neural network, but I'm stuck on the part that changes the weights so the program actually learns. Here is my code so far:

class Neuron {
  int _x, _y;
  int id;
  float[] weights;
  float[] lastinput;

  Neuron(int id_, int l) {
    id = id_;
    weights = new float[l];
    // start with random weights...
    for (int i = 0; i < weights.length; i++) weights[i] = random(-12, 12);
    // ...then overwrite them with saved weights, if a file exists
    try {
      String[] save = loadStrings("weights" + id + ".txt");
      for (int i = 0; i < l; i++) weights[i] = float(save[i]);
    }
    catch (Exception e) {
    }
  }

  void wrong(float correct_answer, float learningrate) {
    for (int i = 0; i < weights.length; i++) {
      // Here is the problematic part!
    }
    // saves the new weights
    String[] save = new String[weights.length];
    for (int i = 0; i < weights.length; i++) save[i] = weights[i] + "";
    saveStrings("weights" + id + ".txt", save);
  }

  float calculate(float[] inputs) {
    float sum = 0;
    for (int i = 0; i < inputs.length; i++) sum += weights[i] * inputs[i];
    lastinput = inputs;
    return sum;
  }
}

Since you're going with supervised learning, the best thing to do would probably be to give each of your neuron's "synapses" (in your case, the entries of the weights array) two variables, plus one global variable shared by all neurons. (I'll assume your neural network is not using backpropagation):

Weight (you already have that)
Last change (the last change applied to the weight)

Global: last correctness
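As a sketch, the extra state could look like this (the names Synapse, lastChange, and lastCorrectness are my own, not from your code):

```java
// One entry per weight: the weight itself plus the last change applied to it.
class Synapse {
  float weight;      // the connection weight (you already have this)
  float lastChange;  // the last change applied to the weight

  Synapse(float weight) {
    this.weight = weight;
    this.lastChange = 0;
  }
}

// One global value shared by all neurons:
// how well the network did on the last correctness check.
class Network {
  static float lastCorrectness = 0;
}
```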

This way you can tell in which direction to change the weight. Example:

lastCorrectness = 50%
lastChange = +1 // (the weight was increased by 1 after the correctness was checked,
                //  either to provoke a change, or after some iterations
                //  as a result of the following calculations)
if (currentCorrectness > lastCorrectness) {
  currentChange = lastChange;   // the network improved: keep this direction
} else {
  currentChange = -lastChange;  // the network got worse: reverse direction
}

This basically keeps changing a weight in the same direction as long as the network improves, and reverses the direction if the network got worse after the change. Note that this should never apply to all neurons in the same way, so vary it per neuron: for example, each iteration apply a change to only about 20% of the weights, chosen randomly, and limit each change to roughly 10% of the current weight.
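Putting the rule together with the ~20% / ~10% limits, an update could look like this sketch (the class and method names are my own; I'm treating correctness as a fraction, and the exact probabilities and caps are just the rough values suggested above):

```java
import java.util.Random;

class DirectionalLearner {
  float[] weights;
  float[] lastChange;       // last change applied to each weight
  float lastCorrectness = 0; // correctness from the previous check
  Random rng = new Random();

  DirectionalLearner(float[] weights) {
    this.weights = weights;
    this.lastChange = new float[weights.length];
  }

  // Call after each correctness check. Keeps the direction if the network
  // improved, reverses it if it got worse, and only touches ~20% of the
  // weights per call, with each step capped at ~10% of the current weight.
  void update(float currentCorrectness) {
    boolean improved = currentCorrectness > lastCorrectness;
    for (int i = 0; i < weights.length; i++) {
      if (rng.nextFloat() >= 0.2f) continue; // only ~20% of weights this round
      float change = improved ? lastChange[i] : -lastChange[i];
      if (change == 0) {
        // no history yet: provoke a change in a random direction
        change = 0.1f * Math.abs(weights[i]) * (rng.nextBoolean() ? 1 : -1);
      }
      // cap the step at ~10% of the current weight
      float cap = 0.1f * Math.abs(weights[i]);
      change = Math.max(-cap, Math.min(cap, change));
      weights[i] += change;
      lastChange[i] = change;
    }
    lastCorrectness = currentCorrectness;
  }
}
```

In your sketch, the body of update() is roughly what would go inside the loop in wrong(), with correct_answer used to compute currentCorrectness.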

Is that also true for hidden layers? The learning in a neural network happens in the neurons' synapses, so if they have a weight array, they learn (though there are some exceptions where input and output are weighted separately).

I'm not sure how a random, decreasing change is going to improve the neural network, unless you use multiple networks and select the ones with the best performance. But that's a combination of supervised and unsupervised learning, so... well, as long as it works...
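For completeness, the "multiple networks, keep the best" idea is essentially random search (a very simple evolutionary strategy). A minimal sketch, with all names mine and a Gaussian perturbation as an assumed mutation scheme:

```java
import java.util.Random;
import java.util.function.ToDoubleFunction;

class BestOfN {
  // Randomly perturb n copies of a weight vector, score each with the
  // given fitness function, and return the best candidate found
  // (or the original weights if nothing beats them).
  static float[] evolve(float[] weights, int n, float scale,
                        ToDoubleFunction<float[]> fitness, Random rng) {
    float[] best = weights.clone();
    double bestScore = fitness.applyAsDouble(best);
    for (int k = 0; k < n; k++) {
      float[] candidate = weights.clone();
      for (int i = 0; i < candidate.length; i++) {
        candidate[i] += scale * (float) rng.nextGaussian();
      }
      double score = fitness.applyAsDouble(candidate);
      if (score > bestScore) {
        bestScore = score;
        best = candidate;
      }
    }
    return best;
  }
}
```

Repeating evolve() over many generations (feeding the winner back in, possibly with a shrinking scale) is what gives the "decreasing change" its purpose.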