Perceptron Networks


The perceptron was first introduced by Frank Rosenblatt in 1957.

A Perceptron is an algorithm for supervised learning of binary classifiers.

There are two types of Perceptrons: Single layer and Multilayer.

  • Single layer - Single layer perceptrons can learn only linearly separable patterns
  • Multilayer - Multilayer perceptrons, or feedforward neural networks with two or more layers, have greater processing power

The Perceptron algorithm learns the weights for the input signals in order to draw a linear decision boundary.

This enables you to distinguish between the two linearly separable classes +1 and -1.
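
To make the idea of a linear decision boundary concrete, here is a minimal Python sketch (the classify function and the particular weights are illustrative assumptions, not part of the algorithm below). A point is assigned to class +1 or -1 according to the sign of the weighted sum w·x + b; this is a simplified two-way version of the activation that ignores the θ band used during training.

# Illustrative sketch: classify a point with a fixed linear decision boundary.
# The weights and bias below are assumed values, chosen only for illustration.
def classify(x, w, b):
    net = sum(xi * wi for xi, wi in zip(x, w)) + b   # net input w.x + b
    return 1 if net > 0 else -1                      # which side of the boundary

print(classify([1, 1], w=[1.0, 1.0], b=-1.0))    # -> 1
print(classify([-1, 1], w=[1.0, 1.0], b=-1.0))   # -> -1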

Perceptron algorithm (Single layer)

Step 0: Initialize the weights and the bias (for ease of calculation they can be set to zero). Also initialize the learning rate α (0 < α ≤ 1); for simplicity, α is set to 1.

Step 1: Perform Steps 2–6 while the stopping condition is false.

Step 2: Perform Steps 3–5 for each training pair s : t.

Step 3: Set the activation of each input unit using the identity function: xi = si

Step 4: Calculate the output of the network. To do so, first obtain the net input:

yin = b + Σ (i = 1 to n) xi wi

where n is the number of input neurons in the input layer. Then apply the activation function over the calculated net input to obtain the output:

f(yin) =  1   if yin > θ
          0   if -θ ≤ yin ≤ θ
         -1   if yin < -θ

Step 5: Weight and bias adjustment: compare the value of the actual (calculated) output and the desired (target) output.


if y ≠ t, then
      wi(new) = wi(old)+ αtxi
      b(new) = b(old)+ αt
else we have
      wi(new) = wi(old)
      b(new) = b(old)


Step 6: Train the network until there is no change in weights; this is the stopping condition for the network. If this condition is not met, start again from Step 2.
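
The steps above can be summarized in a short Python sketch. The function names (activation, train_perceptron) and the default θ = 0 are assumptions made for illustration; the update rule is the one given in Step 5.

def activation(y_in, theta=0.0):
    # Step 4 activation: +1 above theta, -1 below -theta, 0 in between
    if y_in > theta:
        return 1
    if y_in < -theta:
        return -1
    return 0

def train_perceptron(samples, targets, alpha=1.0, theta=0.0):
    n = len(samples[0])
    w = [0.0] * n        # Step 0: weights start at zero
    b = 0.0              # Step 0: bias starts at zero
    changed = True
    while changed:       # Steps 1 and 6: repeat until no weight changes in an epoch
        changed = False
        for x, t in zip(samples, targets):                    # Step 2: each pair s : t
            y_in = sum(xi * wi for xi, wi in zip(x, w)) + b   # Step 4: net input
            y = activation(y_in, theta)
            if y != t:                                        # Step 5: update on mismatch
                w = [wi + alpha * t * xi for wi, xi in zip(w, x)]
                b = b + alpha * t
                changed = True
    return w, b

With the bipolar AND data used in the worked example later on this page, this sketch converges to w = [1.0, 1.0] and b = -1.0, the same weights derived by hand below.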


Perceptron algorithm (Multilayer)

Step 0: Initialize the weights, biases and learning rate suitably.

Step 1: Check for stopping condition; if it is false, perform Steps 2–6.

Step 2: Perform Steps 3–5 for each bipolar or binary training vector pair s : t.

Step 3: Set activation (identity) of each input unit i = 1 to n: xi = si

Step 4: Calculate the output response of each output unit j = 1 to m. First, the net input is calculated as

yinj = bj + Σ (i = 1 to n) xi wij

Then the activation function is applied over the net input to calculate the output response:

f(yinj) =  1   if yinj > θ
           0   if -θ ≤ yinj ≤ θ
          -1   if yinj < -θ

Step 5: Make adjustments in weights and biases for j = 1 to m and i = 1 to n.

if yj ≠ tj, then
      wij(new) = wij(old)+ αtjxi
      bj(new) = bj(old)+ αtj
else we have
      wij(new) = wij(old)
      bj(new) = bj(old)

Step 6: Test for the stopping condition, i.e., if there is no change in weights then stop the training process, else start again from Step 2.
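
A sketch of this multi-output case, under the same assumptions as the single-unit sketch above (names are illustrative): the weights become a matrix w[i][j] and each output unit j is tested and updated independently.

def train_perceptron_layer(samples, targets, alpha=1.0, theta=0.0):
    # samples: input vectors of length n; targets: target vectors of length m
    n, m = len(samples[0]), len(targets[0])
    w = [[0.0] * m for _ in range(n)]    # Step 0: weight matrix w[i][j]
    b = [0.0] * m                        # Step 0: one bias per output unit
    changed = True
    while changed:                       # Steps 1 and 6: stop when nothing changes
        changed = False
        for x, t in zip(samples, targets):                  # Step 2
            for j in range(m):                              # Step 4: each output unit j
                y_in = sum(x[i] * w[i][j] for i in range(n)) + b[j]
                y = 1 if y_in > theta else (-1 if y_in < -theta else 0)
                if y != t[j]:                               # Step 5: per-unit update
                    for i in range(n):
                        w[i][j] += alpha * t[j] * x[i]
                    b[j] += alpha * t[j]
                    changed = True
    return w, b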


Example of Single layer Perceptron

We need to understand that the output of an AND gate is 1 only if both inputs (in this case, x1 and x2) are 1.

Truth table for AND function with bipolar inputs and targets.

x1    x2    Target
 1     1       1
 1    -1      -1
-1     1      -1
-1    -1      -1

Row-1

Initializing w1, w2, and b as 0, with α = 1 and θ = 0, the net input expression is

yin = x1(0) + x2(0) + 0

Passing the first row of the AND logic table (x1 = 1, x2 = 1), we get

1*0 + 1*0 + 0 = 0

yin = 0

Y = f(yin) => f(0) => 0

Check whether Y equals t: 0 ≠ 1, so a weight change is required.


wi(new) = wi(old)+ αtxi
b(new) = b(old)+ αt


w1(new) = w1(old)+ αtx1

              = 0+ 1*1*1 = 1

w2(new) = w2(old)+ αtx2

              = 0+ 1*1*1 = 1

b(new) = 0+ 1*1 = 1


The updated weights are w1 = w2 = b = 1.

yin = 1*1 + 1*1 + 1 = 3

Y = f(yin) => f(3) => 1

Check whether Y equals t: 1 = 1, so no weight change is required.



Row-2

With w1 = w2 = b = 1, α = 1, and θ = 0, the net input expression is

yin = x1(1) + x2(1) + 1

Passing the second row of the AND logic table (x1 = 1, x2 = -1), we get

1*1 + (-1)*1 + 1 = 1

yin = 1

Y = f(yin) => f(1) => 1

Check whether Y equals t: 1 ≠ -1, so a weight change is required.


wi(new) = wi(old)+ αtxi
b(new) = b(old)+ αt


w1(new) = w1(old) + αtx1

              = 1 + 1*(-1)*1 = 0

w2(new) = w2(old) + αtx2

              = 1 + 1*(-1)*(-1) = 2

b(new) = 1 + 1*(-1) = 0


The updated weights are w1 = 0, w2 = 2, b = 0.

yin = 1*0 + (-1)*2 + 0 = -2

Y = f(yin) => f(-2) => -1

Check whether Y equals t: -1 = -1, so no weight change is required.



Row-3

With w1 = 0, w2 = 2, b = 0, α = 1, and θ = 0, the net input expression is

yin = x1(0) + x2(2) + 0

Passing the third row of the AND logic table (x1 = -1, x2 = 1), we get

(-1)*0 + 1*2 + 0 = 2

yin = 2

Y = f(yin) => f(2) => 1

Check whether Y equals t: 1 ≠ -1, so a weight change is required.


wi(new) = wi(old)+ αtxi
b(new) = b(old)+ αt


w1(new) = w1(old) + αtx1

              = 0 + 1*(-1)*(-1) = 1

w2(new) = w2(old) + αtx2

              = 2 + 1*(-1)*1 = 1

b(new) = 0 + 1*(-1) = -1


The updated weights are w1 = 1, w2 = 1, b = -1.

yin = (-1)*1 + 1*1 + (-1) = -1

Y = f(yin) => f(-1) => -1

Check whether Y equals t: -1 = -1, so no weight change is required.



Row-4

With w1 = 1, w2 = 1, b = -1, α = 1, and θ = 0, the net input expression is

yin = x1(1) + x2(1) + (-1)

Passing the fourth row of the AND logic table (x1 = -1, x2 = -1), we get

(-1)*1 + (-1)*1 + (-1) = -3

yin = -3

Y = f(yin) => f(-3) => -1

Check whether Y equals t: -1 = -1, so no weight change is required.
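
The hand computation above can be checked with a short Python script (variable names are assumed for illustration; the update rule is the one from Step 5). It prints the weights after each row of the bipolar AND table and ends with w1 = 1, w2 = 1, b = -1, matching the result of Row-4.

# Check of the worked AND-gate example (bipolar inputs and targets).
samples = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
targets = [1, -1, -1, -1]

w1 = w2 = b = 0.0          # Step 0: zero initialization
alpha, theta = 1.0, 0.0

for (x1, x2), t in zip(samples, targets):
    y_in = x1 * w1 + x2 * w2 + b                            # net input
    y = 1 if y_in > theta else (-1 if y_in < -theta else 0) # activation
    if y != t:              # mismatch: apply the Step 5 learning rule
        w1 += alpha * t * x1
        w2 += alpha * t * x2
        b += alpha * t
    print(f"x=({x1},{x2}) t={t}  ->  w1={w1}, w2={w2}, b={b}")

# Expected output, matching Rows 1-4 above:
# x=(1,1) t=1    ->  w1=1.0, w2=1.0, b=1.0
# x=(1,-1) t=-1  ->  w1=0.0, w2=2.0, b=0.0
# x=(-1,1) t=-1  ->  w1=1.0, w2=1.0, b=-1.0
# x=(-1,-1) t=-1 ->  w1=1.0, w2=1.0, b=-1.0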



Next Topic: Adaptive Linear Neuron