Genetic Algorithm Is Not Learning

I’m trying to get a “player” (a circle) to avoid getting hit by “obstacles” (other circles), based on Daniel Shiffman’s Neuroevolution Flappy Bird with TensorFlow.js (https://youtu.be/c6y21FkaUqw).

The algorithm has the following inputs:

  • The player’s own velocity and radius
  • The 3 toroidally nearest obstacles’ positions (relative to the player), velocities, and radii
  • The width and height of the screen
  • A bias of 1.0

The algorithm has the following outputs:

  • Should it move left, and how far?
  • Should it move right, and how far?
  • Should it move up, and how far?
  • Should it move down, and how far?

I am currently training it with one hidden layer of ~13 nodes (I’ve attempted adding a second layer, but that did not help in my tests).
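
For reference, here’s a minimal TensorFlow.js sketch of that topology. The input size of 21 assumes the vector above flattens to: player velocity + radius (3), 5 values per obstacle × 3 (15), screen size (2), and the bias (1); the activations are illustrative guesses, not necessarily what my project uses:

  // Hypothetical topology sketch: 21 inputs, one hidden layer of 13, 4 outputs
  const model = tf.sequential();
  model.add(tf.layers.dense({
    inputShape: [21],       // player (3) + 3 obstacles x 5 + screen (2) + bias (1)
    units: 13,              // the single hidden layer
    activation: 'sigmoid'
  }));
  model.add(tf.layers.dense({
    units: 4,               // left / right / up / down
    activation: 'sigmoid'   // each output encodes whether and how far to move
  }));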

I’ve experimented with

  • Different obstacle counts and velocities
  • Different numbers of players
  • My own GA and the GA from Dan’s videos
  • Making sure the nearest obstacles’ relative positions were actually being calculated correctly
  • etc.

Despite all of this, I’ve trained for upwards of 1600 iterations with no measurable progress. I’m sure I’m missing something very simple, but I just can’t figure it out. Could it be the genetic algorithm, the network structure, not training long enough, or something else?
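
For context, the selection step I’m using follows the fitness-proportionate (roulette wheel) pattern from Dan’s videos, roughly like this (a sketch, not the exact code from ga.js):

  // Fitness-proportionate selection: assumes fitness values sum to 1
  function pickOne(players) {
    let index = 0;
    let r = random(1);              // p5.js random in [0, 1)
    while (r > 0) {
      r -= players[index].fitness;  // walk the "roulette wheel"
      index++;
    }
    index--;                        // step back to the player that crossed zero
    return players[index];
  }
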
If you would like to help, the relevant files in the p5.js project are most likely “AIPlayer.js” and “ga.js”; most everything else besides settings is more of a backbone for the environment.
I tried my best to comment the code thoroughly. Thank you in advance for helping, and please let me know if I’m being too vague with my question.

The project is at p5.js Web Editor

Maybe this similar topic can help you: AI Flappy Bird Processing Java

Thank you, I’m going to try normalizing the inputs now (which I really should’ve remembered to do!).
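
For anyone reading along, the plan is roughly this (a hypothetical helper; the names are illustrative, not from my actual AIPlayer.js):

  // Hypothetical normalization sketch: scale every input to roughly [-1, 1]
  function normalizeInputs(player, nearest, w, h, maxSpeed) {
    const inputs = [
      player.vel.x / maxSpeed,          // velocities scaled by a max speed
      player.vel.y / maxSpeed,
      player.r / w                      // radius relative to screen width
    ];
    for (const o of nearest) {          // the 3 toroidally nearest obstacles
      inputs.push(
        (o.pos.x - player.pos.x) / w,   // relative position in screen units
        (o.pos.y - player.pos.y) / h,
        o.vel.x / maxSpeed,
        o.vel.y / maxSpeed,
        o.r / w
      );
    }
    inputs.push(w / 1000, h / 1000);    // screen size scaled by an arbitrary reference
    inputs.push(1.0);                   // bias
    return inputs;
  }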

In combination with my other question: is there just a better way to train a network on something like this? I can’t think of a way to fit a method like backpropagation into this kind of example, but is there some other training method that could not only give better results but also make it efficient to use the GPU backend?

You can’t train it with backpropagation; a genetic algorithm is a gradient-free method, not supervised learning. I suggest you learn about reinforcement learning, which does better at this kind of task.

Thank you again. I just tried training a model on this overnight, and it still doesn’t learn very well. I have to assume the GA isn’t the best way to train it; perhaps I’ll try reinforcement learning!

Aaaand I think I just found the problem!

  for (const player of players) {
    // BUG: player.score is never set anywhere, so fitness ends up NaN
    player.fitness = player.score / sum
  }

I don’t have a player.score!
I knew there was some dumb bug in there!
Trying to train it again right now, and just 10 iterations in they’re already a lot smarter than before.
I’ve also done a few things like increasing the number of closest obstacles tracked while omitting the velocities of obstacles further away to save computation time. Hopefully this does it!

Note: The fix was to change player.score to player.time
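
In other words, the working normalization looks roughly like this (assuming each player records its survival time in player.time):

  // Normalize survival times into fitness values that sum to 1
  let sum = 0;
  for (const player of players) {
    sum += player.time;
  }
  for (const player of players) {
    player.fitness = player.time / sum;
  }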