ReLU is half a cookie

The ReLU activation function,
f(x) = x when x >= 0,
f(x) = 0 when x < 0,
is widely used in current artificial neural networks.
It is the final step in an artificial neuron, and its output feeds n (forward) weights connecting to the next layer of the network.
The pattern defined by those n forward weights is projected onto the next layer with intensity f(x). For x < 0, obviously, nothing is projected forward, and in that state the neuron constitutes an information blockage.
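As a concrete sketch of the point above (the function and variable names are my own, not from the post): a conventional ReLU neuron projects its scalar activation onto its forward weight vector, and projects nothing at all when the pre-activation is negative.

```python
import numpy as np

def relu(x):
    """Standard ReLU: f(x) = x for x >= 0, else 0."""
    return max(x, 0.0)

def relu_neuron_forward(x, w_forward):
    """Project the activation f(x) onto the n forward weights."""
    return relu(x) * w_forward

w = np.array([0.5, -1.0, 2.0])  # n = 3 forward weights

print(relu_neuron_forward(1.5, w))   # the weight pattern, scaled by intensity 1.5
print(relu_neuron_forward(-0.7, w))  # all zeros: the "information blockage" state
```

For any negative input the output is the zero vector, so nothing about x reaches the next layer.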
It would perhaps be better, then, to give the neuron an alternative vector of forward weights for when x < 0, and project that onto the next layer, again with intensity x.
The activation function is then always f(x) = x, with switching of the neuron's forward weight vector driven by the sign of x:
x >= 0: use forward weight vector A.
x < 0: use forward weight vector B.
That means such a network needs double the number of forward weights of a conventional ReLU network.
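A minimal sketch of the switched-weight idea, under my own naming (the post gives no code): the activation is always f(x) = x, and the sign of x selects which of the two forward weight vectors, A or B, is projected. Note that per neuron this is algebraically the same as A·max(x, 0) + B·min(x, 0), so a whole layer can be written with two weight matrices.

```python
import numpy as np

def switched_neuron_forward(x, w_a, w_b):
    """Activation is always f(x) = x; the sign of x picks the forward weight vector."""
    return x * (w_a if x >= 0 else w_b)

def switched_layer_forward(x, A, B):
    """Equivalent whole-layer form: A @ max(x, 0) + B @ min(x, 0)."""
    return A @ np.maximum(x, 0.0) + B @ np.minimum(x, 0.0)

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 3))  # forward weights used when x_i >= 0
B = rng.normal(size=(4, 3))  # alternative forward weights used when x_i < 0
x = np.array([1.0, -2.0, 0.5])

# Summing the per-neuron projections gives the same result as the layer form.
per_neuron = sum(switched_neuron_forward(x[i], A[:, i], B[:, i]) for i in range(3))
layer = switched_layer_forward(x, A, B)
```

When every input is non-negative, B never fires and the layer reduces to a plain linear map A @ x, exactly as a ReLU layer would behave on non-negative inputs; the cost of the extra expressiveness is the doubled weight count noted above.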
I only have a more complex example, using an unusual type of artificial neural network; however, the concept may work nicely with more conventional neural networks.
I also made some comments here.

I’m retired from the subject, it’s just an after-thought I had. It’s up to other people if they are interested.
Don’t ask me to do unpaid work on it. Lol.