
Neural Network & Fuzzy Systems

Introduction: Rosenblatt proposed learning networks called perceptrons. The task was to discover a set of connection weights that correctly classifies a set of binary input vectors. The basic architecture of the perceptron is similar to the simple AND network in the previous example.

A perceptron consists of a set of input units and a single output unit. As in the AND network, the output of the perceptron is calculated by comparing the net input to a threshold θ. If the net input is greater than the threshold θ, then the output unit is turned on; otherwise it is turned off.
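
As a rough illustration, the following minimal Python sketch computes this thresholded output (the names inputs, weights, and theta are illustrative and not taken from the text):

def perceptron_output(inputs, weights, theta):
    # Net input is the weighted sum of the inputs.
    net = sum(w * x for w, x in zip(weights, inputs))
    # The unit is turned on (output 1) only if the net input exceeds the threshold.
    return 1 if net > theta else 0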

To address the learning question, Rosenblatt solved two problems.

− First, he defined a cost function that measured the error.

− Second, he defined a procedure, or rule, that reduced this error by appropriately adjusting each of the weights in the network.

However, such a procedure (or learning rule) requires a way to assess the relative contribution of each weight to the total error. The learning rule that Rosenblatt developed is based on determining the difference between the actual output of the network and the target output (0 or 1), called the "error measure".

Error Measure (Learning Rule)

The error measure is the difference between the actual output of the network and the target output (0 or 1).

― If the input vector is correctly classified (i.e., zero error), then the weights are left unchanged, and the next input vector is presented.

― If the input vector is incorrectly classified (i.e., non-zero error), then there are two cases to consider:

Case 1: If the output unit is 1 but needs to be 0, then

  •  The threshold is incremented by 1 (to make it less likely that the output unit would be turned on if the same input vector were presented again).
  •  If the input Ii is 0, then the corresponding weight Wi is left unchanged.
  •  If the input Ii is 1, then the corresponding weight Wi is decreased by 1.

Case 2: If the output unit is 0 but needs to be 1, then the opposite changes are made: the threshold is decremented by 1, and each weight Wi whose input Ii is 1 is increased by 1. Both cases are sketched in code below.
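
The two cases can be written compactly as a short Python sketch. The function name apply_learning_rule and the assumption of binary 0/1 inputs with a unit learning step are illustrative, not part of the original description:

def apply_learning_rule(weights, theta, inputs, target, output):
    # Correctly classified pattern (zero error): leave everything unchanged.
    if output == target:
        return weights, theta
    if output == 1 and target == 0:
        # Case 1: output is 1 but should be 0 -> raise the threshold by 1
        # and decrease by 1 each weight whose input Ii is 1.
        theta += 1
        weights = [w - x for w, x in zip(weights, inputs)]
    else:
        # Case 2: output is 0 but should be 1 -> make the opposite changes.
        theta -= 1
        weights = [w + x for w, x in zip(weights, inputs)]
    return weights, theta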

Perceptron Learning Rule: Equations

The perceptron learning rule is governed by two equations,

− one that defines the change in the threshold and

− the other that defines the change in the weights.

The change in the threshold is given by

Δθ = -(tp - op) = -dp

where,

p specifies the presented input pattern,
Ipi is the i-th component of input pattern p,
op is the actual output of the network for pattern p,
tp specifies the correct classification of the input pattern, i.e. the target,
dp is the difference between the target and the actual output (dp = tp - op).

The change in the weights is given by Δwi = (tp - op) Ipi = dp Ipi
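
A minimal Python sketch of these two update equations applied over a set of training patterns is given below. The function and variable names are illustrative, and the AND example simply echoes the network referred to in the introduction; the loop stops early once every pattern is classified correctly, which the perceptron convergence theorem guarantees will happen for linearly separable data such as AND:

def train_perceptron(patterns, targets, n_inputs, max_epochs=100):
    # Illustrative sketch; weights and threshold start at zero here.
    weights = [0.0] * n_inputs
    theta = 0.0
    for _ in range(max_epochs):
        errors = 0
        for I_p, t_p in zip(patterns, targets):
            net = sum(w * x for w, x in zip(weights, I_p))
            o_p = 1 if net > theta else 0          # thresholded output
            d_p = t_p - o_p                        # error measure dp = tp - op
            if d_p != 0:
                errors += 1
                theta -= d_p                       # Δθ  = -(tp - op)
                weights = [w + d_p * x             # Δwi = (tp - op) Ipi
                           for w, x in zip(weights, I_p)]
        if errors == 0:                            # every pattern classified correctly
            break
    return weights, theta

# Example: learning the AND function of two binary inputs.
patterns = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]
weights, theta = train_perceptron(patterns, targets, n_inputs=2)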