The ReLU activation function:
f(x) = x when x >= 0
f(x) = 0 when x < 0
is widely used in current artificial neural networks.
It is the final step in an artificial neuron, and the neuron's output connects to the next layer through n (forward) weights.
The pattern defined by those n forward weights is projected onto the next layer with intensity f(x). For x < 0 nothing is projected forward, and the neuron constitutes an information blockage in that state.
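Read that way, a single conventional ReLU neuron can be sketched roughly as below. This is just an illustration in plain numpy; the variable names and the 3-weight example are mine, not taken from anything linked here.

import numpy as np

def relu_neuron_forward(x, forward_weights):
    # Conventional ReLU neuron: the pattern of forward weights is
    # projected onto the next layer with intensity f(x) = max(0, x).
    intensity = max(0.0, x)
    return intensity * forward_weights  # all zeros when x < 0: the blockage state

w_forward = np.array([0.5, -1.2, 0.3])       # the n forward weights of one neuron
print(relu_neuron_forward(2.0, w_forward))   # the forward pattern scaled by 2.0
print(relu_neuron_forward(-1.0, w_forward))  # all zeros: nothing is projected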
It would perhaps be better, then, to give each neuron an alternative vector of forward-connected weights for when x < 0, and to project that onto the next layer, again with intensity x.
The activation function is then always f(x) = x, but with switching of the forward-connected weight vector entrained on the sign of x:
if (x >= 0) {
    use forward-connected weight vector A;   // project A onto the next layer, with intensity x
} else {
    use forward-connected weight vector B;   // project B onto the next layer, again with intensity x
}
That means you need double the number of forward weights of a conventional ReLU neural network.
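Here is a rough sketch of one layer of that idea, assuming a dense layer where each neuron carries two forward weight matrices A and B of the same shape. The names, shapes, and random initialisation are my own choices for illustration, not the scheme used in the linked p5.js sketch.

import numpy as np

def switched_layer_forward(x, weights_a, weights_b):
    # x: pre-activations of the current layer, shape (n,)
    # weights_a, weights_b: forward weight matrices, shape (n, m)
    # The activation is always f(x) = x; the sign of x only selects
    # which forward weight vector each neuron projects forward.
    selected = np.where(x[:, None] >= 0, weights_a, weights_b)  # per-neuron choice of A or B
    contributions = x[:, None] * selected                       # each pattern scaled by intensity x
    return contributions.sum(axis=0)                            # pre-activations of the next layer

rng = np.random.default_rng(0)
n, m = 4, 3                        # current layer size, next layer size
A = rng.normal(size=(n, m))        # forward weights used when x >= 0
B = rng.normal(size=(n, m))        # forward weights used when x < 0 (this is the doubling)
x = np.array([1.5, -0.7, 0.0, 2.0])
print(switched_layer_forward(x, A, B))

The switching itself is not differentiable, but since only one of A or B receives a nonzero contribution per neuron per input, the gradient with respect to the selected weights is the same as in an ordinary dense layer.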
I only have a more complex example with an unusual type of artificial neural network:
https://editor.p5js.org/siobhan.491/sketches/NU37LKz6A
However, the concept may work nicely with more conventional neural networks.
I also made some comments here:
https://discourse.numenta.org/t/relu-neural-networks-as-amplitude-modulated-dictionaries/8904
I’m retired from the subject; this is just an afterthought I had. It’s up to other people if they are interested.
Don’t ask me to do unpaid work on it. Lol.
https://en.wikipedia.org/wiki/Somebody_else%27s_problem#Fiction