The perceptron learning algorithm works as follows:
  1. Initialize the weights for all inputs (including the bias).
  2. Present an input xi to the network.
  3. Calculate the output for the given input; call it zi.
  4. Update the weights as: w = w + a * (actual - predicted) * xi, where a is the learning rate.
  5. Repeat from step 2 until convergence or until the maximum number of iterations is reached.
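As a worked example of the update in step 4 (a sketch; the weights, input, and learning rate below are hypothetical, not taken from the training run later in this post):

```python
import numpy as np

a = 0.2                          # learning rate
w = np.array([0.5, 0.5, 0.5])    # current weights (last entry is the bias weight)
xi = np.array([0, 1, 1])         # input [x1, x2, bias]
actual, predicted = 1, 0         # the network predicted 0, but the target was 1
w = w + a * (actual - predicted) * xi
print(w)  # -> [0.5 0.7 0.7]
```

Only the weights attached to active (non-zero) inputs move, and they move toward the target because the error (actual - predicted) is positive here.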


Let's implement the perceptron for the following inputs:

     inputs              output
     x1  x2  b
     0   0   1            0
     0   1   1            1
     1   0   1            1
     1   1   1            1

import numpy as np
import random as rd

# Step activation: 0 for negative input, 1 otherwise
unit_step = lambda x: 0 if x < 0 else 1

# Training data: ([x1, x2, bias], target output)
train_data = [
    (np.array([0, 0, 1]), 0),
    (np.array([0, 1, 1]), 1),
    (np.array([1, 0, 1]), 1),
    (np.array([1, 1, 1]), 1),
]

w = np.random.rand(3)  # random initial weights, e.g. array([0.83364474, 0.84316914, 0.79095823])
errors = []  # error at each iteration, for plotting later
eta = 0.2    # learning rate
n = 100      # number of iterations
for i in range(n):
    x, expected = rd.choice(train_data)   # pick a random training example
    result = np.dot(w, x)                 # weighted sum of the inputs
    error = expected - unit_step(result)  # error is -1, 0, or 1
    errors.append(error)
    w += eta * error * x                  # perceptron weight update

for x, _ in train_data:
    result = np.dot(x, w)
    z = unit_step(result)
    print("{}->{}".format(x[:2], z))
Output: 
[0 0]->0
[0 1]->1
[1 0]->1
[1 1]->1

import matplotlib.pyplot as plt
plt.plot(errors)  # plot the per-iteration error
plt.show()
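Because training examples are sampled randomly, the error trace can be noisy even near convergence. A more direct check (a sketch; the weights below are hypothetical stand-ins for the learned w from the loop above) is to verify that a full pass over the training data produces no misclassifications:

```python
import numpy as np

unit_step = lambda x: 0 if x < 0 else 1
train_data = [
    (np.array([0, 0, 1]), 0),
    (np.array([0, 1, 1]), 1),
    (np.array([1, 0, 1]), 1),
    (np.array([1, 1, 1]), 1),
]

# Hypothetical weights of the kind training might produce;
# substitute the learned w from the training loop.
w = np.array([0.3, 0.3, -0.1])

# Converged once every training example is classified correctly.
converged = all(unit_step(np.dot(w, x)) == t for x, t in train_data)
print(converged)  # -> True for these weights
```

For linearly separable data like this OR-gate table, the perceptron convergence theorem guarantees such a weight vector is reached in a finite number of updates.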

