I wanted to post this on a Coding Train episode on YouTube about neural networks, but I've been banned from commenting (again!) —
There is a basic problem with conventional neural networks. Too many weighted sums operate off a small set of nonlinearized values, so the outputs of those weighted sums end up correlated/entangled with each other. Aside from anything else, this is an inefficient use of weight parameters.
There is a solution using random projections — or, really, any projection that is faster to calculate, as long as it mixes every input into every output and roughly preserves distances.
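As a minimal sketch of one such fast projection (my example, not from the video): a random sign flip followed by a fast Walsh-Hadamard transform. The sign flip decorrelates, the transform mixes every input into every output in O(n log n), and since both steps are orthogonal the projection preserves vector lengths. The function names here are just my own.

```python
import numpy as np

def wht(x):
    """Fast Walsh-Hadamard transform via butterflies, O(n log n).
    len(x) must be a power of 2."""
    x = x.astype(float).copy()
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

def fast_random_projection(x, signs):
    """Flip signs randomly, then mix with the WHT.
    The 1/sqrt(n) factor makes the whole map orthonormal,
    so ||output|| == ||input||."""
    return wht(x * signs) / np.sqrt(len(x))

rng = np.random.default_rng(0)
n = 256
x = rng.standard_normal(n)
signs = rng.choice([-1.0, 1.0], size=n)  # fixed once, reused for every input
y = fast_random_projection(x, signs)
```

Because the map is orthonormal, `np.linalg.norm(y)` equals `np.linalg.norm(x)`, which is the distance-preserving property you want before feeding the result into weighted sums.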
Also, the linear associative memory (AM) aspect of the weighted sum is poorly known; people should remember it. Linear AM does come with a lot of provisos; however, in conjunction with nonlinear functions it becomes a more general type of associative memory. I sort of half explained it here:
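To illustrate the linear AM idea concretely (my own sketch, not from the post above): store key/value pairs as a sum of outer products, and recall by a plain matrix-vector product — i.e. a bank of weighted sums. The main proviso is visible right in the code: recall is exact only when the keys are orthonormal, which is why mixing/projection steps matter.

```python
import numpy as np

rng = np.random.default_rng(1)
n, pairs, out_dim = 64, 3, 8

# Orthonormal keys: take columns of a random orthogonal matrix (QR trick).
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
keys = Q[:, :pairs].T              # shape (pairs, n)
values = rng.standard_normal((pairs, out_dim))

# Store: W = sum_i  value_i  key_i^T   (one outer product per pair)
W = np.zeros((out_dim, n))
for k, v in zip(keys, values):
    W += np.outer(v, k)

# Recall: W @ key_i = sum_j value_j * (key_j . key_i) = value_i,
# exactly, because the cross terms (key_j . key_i) vanish for j != i.
recalled = W @ keys[0]
```

Here `recalled` matches `values[0]` to machine precision. With non-orthogonal keys the cross terms leak in as interference, which is one of the provisos mentioned above.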