
PERCEPTRON MODEL

LECTURE 5


What is a Perceptron?

  • A single artificial neuron that computes a weighted sum of its inputs and applies a threshold activation function.
  • It is also called a TLU (Threshold Logic Unit).
  • It effectively separates the input space into two categories by the hyperplane w^T x + b = 0.
  • Thus, the perceptron is an algorithm for binary linear classifiers.
  • It is the simplest kind of feedforward neural network.
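The points above can be sketched in a few lines. This is an illustrative TLU, not code from the slides; the weights, bias, and bipolar output convention are assumptions chosen for the example.

```python
# A minimal sketch of a single TLU (threshold logic unit).
# Weights, bias, and inputs are illustrative values, not from the slides.

def tlu(x, w, b, theta=0.0):
    """Compute the weighted input and apply a threshold activation."""
    y_in = sum(wi * xi for wi, xi in zip(w, x)) + b  # weighted net input
    return 1 if y_in > theta else -1                 # bipolar threshold output

# The hyperplane w^T x + b = 0 separates the two classes:
print(tlu([1, 1], w=[1, 1], b=-1.5))    # 1 + 1 - 1.5 = 0.5 > 0  -> 1
print(tlu([-1, -1], w=[1, 1], b=-1.5))  # -1 - 1 - 1.5 = -3.5    -> -1
```

Any input on the positive side of the hyperplane is mapped to one class, anything on the negative side to the other.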


  • A perceptron network with its three units is shown in Figure 3-1.
  • As shown in Figure 3-1, the sensory unit can be a two-dimensional matrix of 400 photodetectors upon which a lighted picture with a geometric black-and-white pattern impinges.
  • Each detector provides a binary electrical signal (1) if its input signal exceeds a certain threshold value, and (0) otherwise.
  • These detectors are connected randomly to the associator unit.
  • The associator unit consists of a set of subcircuits called feature predicates.
  • The feature predicates are hard-wired to detect specific features of a pattern and are equivalent to feature detectors. Each predicate examines some or all of the responses of the sensory unit for a particular feature. The outputs of the predicate units are also binary (0 or 1).
  • The last unit, the response unit, contains the pattern recognizers or perceptrons.
  • The weights in the input layers are all fixed, while the weights on the response unit are trainable.


HISTORY

  • In 1957, Frank Rosenblatt invented the perceptron.
  • Let's go through a short video on the topic:


Key Points to be noted:

  • The perceptron network consists of three units, namely, the sensory unit (input unit), the associator unit (hidden unit), and the response unit (output unit).
  • The sensory units are connected to the associator units with fixed weights of value 1, 0, or -1, assigned at random.
  • A binary activation function is used in the sensory unit and the associator unit.
  • The response unit has an activation of 1, 0, or -1. The binary step function with fixed threshold θ is used as the activation for the associator. The output signals sent from the associator unit to the response unit are binary only.
  • The output of the perceptron network is given by y = f(y_in), where the activation function f(y_in) is defined as

        f(y_in) =  1  if y_in > θ
        f(y_in) =  0  if -θ ≤ y_in ≤ θ
        f(y_in) = -1  if y_in < -θ
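The three-valued response-unit activation described above can be written directly. The default value of θ here is an illustrative assumption:

```python
# Sketch of the response-unit activation f(y_in) with a fixed threshold theta,
# returning 1, 0, or -1 as described on the slide.

def f(y_in, theta=0.2):
    if y_in > theta:
        return 1
    if y_in < -theta:
        return -1
    return 0  # inside the undecided band [-theta, theta]

print(f(0.5))   # -> 1
print(f(0.0))   # -> 0
print(f(-0.5))  # -> -1
```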


Key Points to be noted: (continued)

  • The perceptron learning rule is used for the weight updates between the associator unit and the response unit. For each training input, the net calculates the response and determines whether an error has occurred.
  • The error is calculated by comparing the target values with the calculated outputs.
  • The weights on the connections from the units that send a nonzero signal are adjusted suitably.
  • If an error has occurred for a particular training pattern, the weights are adjusted on the basis of the learning rule, i.e.,

        w_i(new) = w_i(old) + α t x_i
        b(new) = b(old) + α t

  • If no error occurs, there is no weight updation and hence the training process may be stopped.
  • In the above equations, the target value "t" is +1 or -1 and α is the learning rate. In general, these learning rules begin with an initial guess at the weight values, and successive adjustments are then made on the basis of an evaluation of an objective function.
  • Eventually, the learning rule reaches a near-optimal or optimal solution in a finite number of steps.
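The update for a single training pattern can be sketched as follows. This assumes the standard perceptron rule w_i(new) = w_i(old) + α t x_i with bipolar targets; the function name and arguments are illustrative:

```python
def update(w, b, x, t, y, alpha=1.0):
    """Apply the perceptron learning rule for one training pattern.

    Weights change only when an error occurred (y != t); each weight
    moves by alpha * t * x_i, so inputs that sent a zero signal
    contribute no change.
    """
    if y != t:
        w = [wi + alpha * t * xi for wi, xi in zip(w, x)]
        b = b + alpha * t
    return w, b

# One erroneous pattern (target t = 1, but the net answered y = -1), alpha = 1:
w, b = update([0.0, 0.0], 0.0, x=[1, -1], t=1, y=-1)
print(w, b)  # -> [1.0, -1.0] 1.0
```

When no error occurs, the weights are returned unchanged, matching the "no weight updation" case on the slide.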


Perceptron Learning Rule


Architecture of Perceptron

  • The goal of the perceptron net is to classify the input pattern as a member or not a member of a particular class.


Flowchart for Training Process


Perceptron Training Algorithm for Single Output Classes
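The single-output algorithm can be sketched as below. This is an illustrative implementation, assuming bipolar inputs and targets, zero initial weights, a fixed threshold θ, and "no weight change during a full epoch" as the stopping condition:

```python
# Sketch of single-output perceptron training (assumptions: bipolar
# inputs/targets, zero initial weights, fixed threshold theta).

def train(samples, n_inputs, alpha=1.0, theta=0.0, max_epochs=100):
    w = [0.0] * n_inputs              # Step 0: initialize weights and bias
    b = 0.0
    for _ in range(max_epochs):
        changed = False
        for x, t in samples:          # present each training pattern
            y_in = b + sum(wi * xi for wi, xi in zip(w, x))
            y = 1 if y_in > theta else (-1 if y_in < -theta else 0)
            if y != t:                # update only when an error occurred
                w = [wi + alpha * t * xi for wi, xi in zip(w, x)]
                b += alpha * t
                changed = True
        if not changed:               # stop when the weights are stable
            break
    return w, b

# Example: the AND function with bipolar inputs and targets.
data = [([1, 1], 1), ([1, -1], -1), ([-1, 1], -1), ([-1, -1], -1)]
w, b = train(data, n_inputs=2)
print(w, b)  # with these conventions: [1.0, 1.0] -1.0
```

The learned hyperplane x1 + x2 - 1 = 0 is positive only for the input (1, 1), which is exactly the AND function.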


Perceptron Training Algorithm for Multiple Output Classes
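For multiple output classes, each output neuron is effectively an independent perceptron with its own weight vector and bias, trained against its own component of the target vector. A minimal sketch, with the same illustrative conventions as before (bipolar values, θ = 0, zero initial weights):

```python
# Sketch of perceptron training with m output neurons: the single-output
# rule is simply applied per output j, against target component t[j].

def train_multi(samples, n_inputs, n_outputs, alpha=1.0, theta=0.0, max_epochs=100):
    w = [[0.0] * n_inputs for _ in range(n_outputs)]
    b = [0.0] * n_outputs
    for _ in range(max_epochs):
        changed = False
        for x, t in samples:          # t is a vector of n_outputs targets
            for j in range(n_outputs):
                y_in = b[j] + sum(wi * xi for wi, xi in zip(w[j], x))
                y = 1 if y_in > theta else (-1 if y_in < -theta else 0)
                if y != t[j]:         # update neuron j independently
                    w[j] = [wi + alpha * t[j] * xi for wi, xi in zip(w[j], x)]
                    b[j] += alpha * t[j]
                    changed = True
        if not changed:
            break
    return w, b

# Two patterns, two classes encoded as target vectors (+1 for "member"):
samples = [([1, 1], [1, -1]), ([-1, -1], [-1, 1])]
w, b = train_multi(samples, n_inputs=2, n_outputs=2)
```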


Perceptron Network Testing Algorithm


EXAMPLES




Practice Questions

  1. Find the weights using a perceptron network for the ANDNOT function when all the inputs are presented only one time. Use bipolar inputs and targets.
  2. Find the weights required to perform the following classification using a perceptron network. The vectors (1, 1, 1, 1) and (-1, 1, -1, -1) belong to the class (so have target value 1); the vectors (1, 1, 1, -1) and (1, -1, -1, 1) do not belong to the class (so have target value -1). Assume a learning rate of 1 and initial weights of 0.


  3. Classify the two-dimensional input patterns shown in Figure 6 using a perceptron network. The symbol "*" indicates the data value +1 and " " indicates the data value -1. The patterns are I and F. For pattern I the target is +1, and for pattern F the target is -1.
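Practice Question 2 above can be checked with a short script. This is a hedged sketch: θ = 0, a tie (y_in = 0) counted as an error, and updates w_i += t·x_i with α = 1 are the assumed conventions.

```python
# Worked sketch of Practice Question 2: four 4-dimensional bipolar vectors,
# learning rate 1, zero initial weights, theta = 0 assumed.

data = [([1, 1, 1, 1], 1), ([-1, 1, -1, -1], 1),
        ([1, 1, 1, -1], -1), ([1, -1, -1, 1], -1)]

w, b = [0.0] * 4, 0.0
for _ in range(20):                   # a few epochs suffice here
    for x, t in data:
        y_in = b + sum(wi * xi for wi, xi in zip(w, x))
        y = 1 if y_in > 0 else (-1 if y_in < 0 else 0)
        if y != t:
            w = [wi + t * xi for wi, xi in zip(w, x)]   # alpha = 1
            b += t

for x, t in data:
    y_in = b + sum(wi * xi for wi, xi in zip(w, x))
    print(x, '->', 1 if y_in > 0 else -1, '(target', t, ')')
```

Under these conventions the run converges to w = (-2, 2, 0, 2), b = 0, which classifies all four vectors correctly; other tie-breaking conventions can yield different but equally valid weights.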


XOR Problem

  • A single perceptron cannot implement the XOR function, because the four XOR input-target pairs are not linearly separable: no single hyperplane w^T x + b = 0 can place the patterns with target +1 on one side and those with target -1 on the other.
  • This limitation, pointed out by Minsky and Papert in 1969, motivated multilayer networks, which realize XOR by combining several units.
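The XOR failure can be demonstrated empirically. In this sketch the bipolar XOR encoding, θ = 0, and learning rate 1 are assumptions; because the four points are not linearly separable, the error count in an epoch can never reach zero, no matter how long training runs.

```python
# A single perceptron trained on XOR (bipolar encoding) never converges:
# every epoch still contains at least one misclassified pattern.

data = [([1, 1], -1), ([1, -1], 1), ([-1, 1], 1), ([-1, -1], -1)]

w, b = [0.0, 0.0], 0.0
for epoch in range(50):
    errors = 0
    for x, t in data:
        y_in = b + sum(wi * xi for wi, xi in zip(w, x))
        y = 1 if y_in > 0 else (-1 if y_in < 0 else 0)
        if y != t:
            w = [wi + t * xi for wi, xi in zip(w, x)]
            b += t
            errors += 1

print('errors in last epoch:', errors)  # always >= 1 for XOR
```

If some epoch had zero errors, the current weights would classify all four points correctly, i.e. a separating line would exist, contradicting non-separability; hence the error count stays positive forever.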