The white dot chases the yellow dots for food. It also gets hurt if it bumps into the red blocks, but it doesn't know where they are. However, it has memory, so in time it might figure out a strategy.
The white dot is equipped with a controller neural network with an external associative memory bank.
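Roughly, the idea of the external memory is that the controller can stash and recall vectors by content rather than by address. Here is a minimal sketch of one way such an associative read can work; this is purely illustrative (the softmax-weighted read, the function names, and the toy data are my own assumptions, not the code behind the links):

```javascript
// Minimal sketch of a content-addressed read (illustrative only).
// The controller produces a query vector; the read result is a
// softmax-weighted blend of the stored value vectors.
function dot(a, b) {
  let s = 0;
  for (let i = 0; i < a.length; i++) s += a[i] * b[i];
  return s;
}

function readMemory(keys, values, query) {
  const scores = keys.map(k => Math.exp(dot(k, query))); // key/query similarity
  const total = scores.reduce((a, b) => a + b, 0);
  const out = new Array(values[0].length).fill(0);
  for (let i = 0; i < keys.length; i++) {
    const w = scores[i] / total; // softmax weight for memory slot i
    for (let j = 0; j < out.length; j++) out[j] += w * values[i][j];
  }
  return out;
}

// Example: two memory slots, each with a 2-number key and a 2-number value.
const keys = [[1, 0], [0, 1]];
const values = [[5, 5], [-5, -5]];
console.log(readMemory(keys, values, [2, 0])); // reads mostly from slot 0
```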
I’ve just been running it for a few minutes. I think I can see some little bits of strategy emerge.
Each trial is run twice: once for the parent and once for a mutated child.
The child replaces the parent if it is better.
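That loop is basically a (1+1)-style evolutionary strategy. Here is a minimal sketch of the idea; NUM_WEIGHTS, MUTATION_SIZE, and the dummy runTrial are placeholders so it runs on its own, not the actual values or scoring from the linked sketch:

```javascript
// Sketch of the parent-vs-mutated-child loop described above.
const NUM_WEIGHTS = 100;     // placeholder network size
const MUTATION_SIZE = 0.02;  // placeholder mutation strength

// Dummy fitness stand-in so the sketch runs; the real score would come from
// simulating the dot in the p5.js world (e.g. food eaten minus damage taken).
function runTrial(weights) {
  return -weights.reduce((s, w) => s + (w - 0.3) * (w - 0.3), 0);
}

function mutate(weights) {
  return weights.map(w => w + MUTATION_SIZE * (Math.random() * 2 - 1));
}

let parent = Array.from({ length: NUM_WEIGHTS }, () => Math.random() - 0.5);

function evolutionStep() {
  const child = mutate(parent);
  const parentScore = runTrial(parent); // first run: the parent
  const childScore = runTrial(child);   // second run: the mutated child
  if (childScore > parentScore) parent = child; // child replaces parent if better
}
```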
View:
https://editor.p5js.org/siobhan.491/present/UxuiHZIOj
Code:
https://editor.p5js.org/siobhan.491/sketches/UxuiHZIOj
If the white dot were a mosquito, it would have gone extinct already!
I kinda see some adaptive behavior, but nothing too impressive. It's too complex a problem to start with, and the training method of just comparing a parent with a mutated child is unlikely to yield great results.
Anyway, I'll try the controller neural network with external associative memory on a few other problems, using better training algorithms.
If you like watching paint dry, I made the problem a bit simpler for the AI.
View: https://editor.p5js.org/siobhan.491/present/pbGF-pPYD
Code: https://editor.p5js.org/siobhan.491/sketches/pbGF-pPYD
I did a maze version. The circle can only 'see' one block ahead, behind, left, and right. It takes at least an hour to evolve any kind of capability. The game is quite quantized (very block-based).
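For reference, this is roughly what that one-block field of view looks like as network input. It is a simplified assumption of mine (treating the four neighbours as fixed grid directions rather than relative to heading); the real code is in the linked sketch below:

```javascript
// Simplified sketch (my own assumption, not the linked code) of the sensing:
// the agent only reads the four grid cells next to it, e.g. 1 = wall, 0 = free.
function senseNeighbours(maze, row, col) {
  const cell = (r, c) =>
    maze[r] && maze[r][c] !== undefined ? maze[r][c] : 1; // out of bounds = wall
  return [
    cell(row - 1, col), // up
    cell(row + 1, col), // down
    cell(row, col - 1), // left
    cell(row, col + 1)  // right
  ]; // these four numbers are the only input the network gets
}

// Example: the agent at (1, 1) has walls above and to its left.
const maze = [
  [1, 1, 1],
  [1, 0, 0],
  [1, 0, 1]
];
console.log(senseNeighbours(maze, 1, 1)); // [1, 0, 1, 0]
```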
I know empirically that quantized systems are very difficult to optimize, and the neural network itself is designed to avoid that problem. What I learned is that quantization in the environment also makes it very difficult for an ALife agent to evolve.
Anyway, you could improve the cost function to get better results. However, I will just take my lesson and run:
Maze: https://editor.p5js.org/siobhan.491/present/l4cvfP0pY
Code: https://editor.p5js.org/siobhan.491/sketches/l4cvfP0pY
I understand now that I need to create smooth, continuous (non-quantized) environments for ALife entities to prosper in.
Evolutionary algorithms are sometimes called gradient-free methods. I think that is not exactly true: they follow a single composite gradient.
You can say an evolutionary algorithm is like a dog following a scent in multidimensional space. If there is locally no difference in scent from place to place in some dimensions (quantization), the dog gets lost or confused in those dimensions and wanders around in them, elongating its path to the ultimate goal.
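As a toy illustration (not from the linked sketches): the same simple parent-vs-child hill climber handles a smooth fitness slope fine, but on a stepped (quantized) version of the same slope most small mutations change the score by exactly zero, so the comparison gives no signal and the search just drifts.

```javascript
// Toy illustration: a 1-D hill climber on a smooth vs a stepped landscape.
const smoothFitness  = x => -Math.abs(x - 10);             // scent everywhere
const steppedFitness = x => -Math.abs(Math.round(x) - 10); // flat between steps

function hillClimb(fitness, steps = 1000) {
  let x = 0;
  for (let i = 0; i < steps; i++) {
    const candidate = x + 0.05 * (Math.random() * 2 - 1); // small mutation
    if (fitness(candidate) >= fitness(x)) x = candidate;  // keep if not worse
  }
  return x;
}

console.log(hillClimb(smoothFitness));  // reliably ends up near 10
console.log(hillClimb(steppedFitness)); // usually drifts and stalls far short of 10
```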