Will This Perceptron Work?

I am trying to code my own perceptron, loosely based on The Coding Train’s example. As a beginner I wanted to make it as simple as possible, but I also wanted to make something new. So I decided on this example:

(x * z) + (y * z)

  • My perceptron’s weights would start as 1 and will be adjusted to the value of z.
  • It would have two weights, and they would be exactly the same, as in the example.
  • I would compare the perceptron’s answer to a pre-solved one that is correct.
  • I would make x1 and x2 the inputs (rough sketch below).
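Roughly like this, I think (a rough sketch; the learning rate, the training loop, and the random inputs are just guesses on my part):

```python
import random

z = 3.0                # the constant both weights should converge to
w1, w2 = 1.0, 1.0      # weights start as 1, as described above
learning_rate = 0.1    # guessed value

for _ in range(1000):
    x1 = random.uniform(-1, 1)        # the two inputs
    x2 = random.uniform(-1, 1)
    guess = w1 * x1 + w2 * x2         # the perceptron's answer
    target = (x1 * z) + (x2 * z)      # the pre-solved correct answer
    error = target - guess
    w1 += learning_rate * error * x1  # adjust both weights toward z
    w2 += learning_rate * error * x2

print(w1, w2)  # both should end up close to z
```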

So, would this work?

Hard to say if it works. I think it is more likely not to work. A perceptron has three elements: inputs, a weight for each input, and an activation function that gives the output. Traditionally, activation functions have outputs in a range like [-1, 1]. The idea is to prevent the outputs from growing too big.
I’m assuming that you are aiming to build a multilayer perceptron (MLP), because single-layer perceptrons have limited use. The function you are computing, f(x, y) = z * (x + y), is linear. Unfortunately, an MLP with a linear activation function is only as good as a single layer of perceptrons.
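To see why, note that stacking linear layers just produces another linear layer. A quick check with numpy (a sketch; the matrix sizes here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)  # "hidden" layer
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)  # output layer
x = rng.normal(size=3)

# Two linear layers with no non-linearity in between...
two_layers = W2 @ (W1 @ x + b1) + b2

# ...collapse into one linear layer with W = W2 @ W1, b = W2 @ b1 + b2
one_layer = (W2 @ W1) @ x + (W2 @ b1 + b2)

print(np.allclose(two_layers, one_layer))  # True
```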

So at a minimum you do need to use a non-linear activation function. The so-called ReLU is simple and fast to calculate.
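For reference, ReLU is just f(x) = max(0, x). A minimal sketch, with the derivative you would need for training:

```python
def relu(x):
    # Passes positive values through, clips negatives to zero.
    return x if x > 0 else 0.0

def relu_derivative(x):
    # Needed when training with backpropagation.
    return 1.0 if x > 0 else 0.0
```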

I was trying to make a single-layer perceptron.
Which one should I make, a single-layer or a multi-layer one?
And should I train it on something else (I have no other ideas for the training)?
I assume that I would still use ReLU, right?

Ok. With just a single layer it doesn’t really matter. Have a look at this article or this one.
A single layer can only separate data points with one line. If your data is more complicated, then you need two or more layers with a non-linear activation function. So it depends on your data.
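The classic example is XOR: no single line separates its points, so a single-layer perceptron can never get all four cases right, while it handles a linearly separable function like AND fine. A minimal sketch of the classic perceptron learning rule showing this (the learning rate and epoch count are arbitrary):

```python
def train_perceptron(data, epochs=100, lr=0.1):
    # Classic perceptron learning rule with a step activation.
    w1, w2, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for x1, x2, target in data:
            guess = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
            error = target - guess
            w1 += lr * error * x1
            w2 += lr * error * x2
            bias += lr * error
    return w1, w2, bias

AND = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]  # linearly separable
XOR = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]  # not separable

for name, data in (("AND", AND), ("XOR", XOR)):
    w1, w2, b = train_perceptron(data)
    correct = sum(
        (1 if (w1 * x1 + w2 * x2 + b) > 0 else 0) == t
        for x1, x2, t in data
    )
    print(name, f"{correct}/4 correct")  # AND: 4/4, XOR: at most 3/4
```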
