Experiment with Associative Memory

Using the P5 library:
Associative Memory Experiment


Thanks for sharing this!

Consider adding a few lines to the post / the host page explaining what this is for and how to use it, in plain language.

I think this is for teaching the computer to recognize parts of images: you click the same kind of thing (trees, windows) many times, and after it trains it will show that it can recognize similar things…?

Do we press 1, then click examples, then press 0? Or click examples, then press 1, then wait (and for how long)? Should we click at least five (or 20) similar examples?

I couldn’t figure out how to use it. The “training cycles” count goes up when I press 1, but nothing ever happens except that the little images I click appear on the right-hand side.

You select a few parts of the picture. If you don’t like what you selected, you can press zero (0) and start again.
Once you have 1 to 32 little images selected, you can train the associative memory by pressing 1. You only need to train for a few seconds, then press 1 again to stop.
I set the associative memory to recall exactly what it has seen, though you could set it to recall something entirely different.
Then, as you move the square around, it will make a guess as to what the output should be. Of course, if you place the square exactly on the same area as one of the examples selected for training, you will get an exact output.
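
To make that workflow concrete, here is a hypothetical p5.js control skeleton matching the description above. The state variables and helper functions (grabPatchUnderCursor, trainOneCycle, showRecall, resetMemory) are invented for illustration; only mousePressed, keyPressed, draw and key are real p5.js names.

```javascript
// Hypothetical control skeleton for the sketch described above.
// The helper functions are placeholders, not the actual sketch's code.
let examples = [];    // up to 32 selected patches
let training = false; // toggled by pressing 1

function mousePressed() {
  if (!training && examples.length < 32) {
    examples.push(grabPatchUnderCursor()); // copy pixels under the square
  }
}

function keyPressed() {
  if (key === '0') {        // discard the selection and start again
    examples = [];
    resetMemory();
  } else if (key === '1') { // start/stop training on the examples
    training = !training;
  }
}

function draw() {
  if (training) {
    trainOneCycle(examples);            // a few seconds of this is enough
  } else {
    showRecall(grabPatchUnderCursor()); // guess an output for the square
  }
}
```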

That may not make much sense to you, but if you were working on something like Neural Turing Machines it might be useful.


I did a new version; the updatePixels() method causes a little flickering, but that is okay.
https://editor.p5js.org/siobhan.491/full/k7UePTA4H
And here is the code:
https://editor.p5js.org/siobhan.491/sketches/k7UePTA4H
Anyway, to cut a long story short: mild hashing before the weighted sum allows it to act as an associative memory. Total hashing would work too, but then to get an output you would have to supply exactly the original input.
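
For anyone who wants the core idea in code, here is a minimal sketch of a weighted-sum associative memory in plain JavaScript. I am guessing that “mild hashing” means fixed random sign flips followed by a fast Walsh–Hadamard transform (a similarity-preserving random projection); all names and parameters below are mine, not the sketch’s actual code.

```javascript
// Minimal weighted-sum associative memory, plain JavaScript sketch.
// Here "mild hashing" is assumed to mean fixed random sign flips
// followed by a Walsh-Hadamard transform: a similarity-preserving
// random projection, unlike a "total" hash, which destroys similarity.
const DIM = 64; // key/value length; must be a power of 2 for the WHT

// Fixed random sign pattern, chosen once.
const signs = Array.from({ length: DIM }, () => (Math.random() < 0.5 ? -1 : 1));

// Fast Walsh-Hadamard transform, scaled to preserve vector length.
function wht(x) {
  const y = x.slice();
  for (let len = 1; len < y.length; len *= 2) {
    for (let i = 0; i < y.length; i += 2 * len) {
      for (let j = i; j < i + len; j++) {
        const a = y[j], b = y[j + len];
        y[j] = a + b;
        y[j + len] = a - b;
      }
    }
  }
  const n = Math.sqrt(y.length);
  return y.map((v) => v / n);
}

// The "mild hash": flip signs, then mix every element into every other.
function mildHash(x) {
  return wht(x.map((v, i) => v * signs[i]));
}

// One weight vector per output element; recall is just a weighted sum.
const weights = Array.from({ length: DIM }, () => new Array(DIM).fill(0));

function recall(key) {
  const h = mildHash(key);
  return weights.map((row) => row.reduce((s, w, i) => s + w * h[i], 0));
}

// Delta-rule training: nudge each weighted sum toward its target.
function train(key, value, rate = 0.1) {
  const h = mildHash(key);
  for (let j = 0; j < DIM; j++) {
    const out = weights[j].reduce((s, w, i) => s + w * h[i], 0);
    const err = value[j] - out;
    for (let i = 0; i < DIM; i++) {
      weights[j][i] += rate * err * h[i];
    }
  }
}
```

Repeatedly calling train(x, x) on each stored patch gives the auto-associative behaviour described above (recall exactly what was seen); train(x, y) with some other target gives the “entirely different” recall. Keys close to a stored key then produce outputs close to its stored value, which is the error-correction effect mentioned further down.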


Anyway, associative memory is not so useful in itself, though you could use it with pre-trained conventional neural networks to get one-shot learning.
I think the main use would be for Neural Turing Machines, that is, neural networks with external (associative) memory banks.
There is very little information around about the weighted sum as an associative memory.
Used under capacity, you get error correction; at capacity, perfect recall but no error correction; over capacity, recall with added Gaussian noise. That is due to the interaction of the mild hashing with the variance equation for linear combinations of random variables.
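
For reference, that variance equation, for independent random variables $X_i$ and fixed coefficients $a_i$, is

$$\operatorname{Var}\Big(\sum_i a_i X_i\Big) = \sum_i a_i^2\,\operatorname{Var}(X_i).$$

Presumably the argument is that, after the mild hash, interference from the other stored patterns behaves like a linear combination of many weakly correlated terms, so the recall error is approximately Gaussian, with a variance that grows with the number of stored patterns.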
Anyway, there is a book (Kohonen’s Self-Organization and Associative Memory) where the error correction is noted:
https://archive.org/details/SelfOrganizationAndAssociativeMemory/mode/2up