Counterpropagation Networks


Counterpropagation networks (CPNs) were proposed by Hecht-Nielsen in 1987. They are multilayer networks based on a combination of input, clustering, and output layers. Counterpropagation nets are applied to data compression, function approximation, and pattern association. The counterpropagation network is basically constructed from an instar-outstar model. This model is a three-layer neural network that performs input-output data mapping, producing an output vector y in response to an input vector x, on the basis of competitive learning. The three layers in an instar-outstar model are the input layer, the hidden (competitive) layer, and the output layer.

There are two stages involved in the training process of a counterpropagation net. In the first stage, the input vectors are clustered. In the second stage, the weights from the cluster-layer units to the output units are tuned to obtain the desired response.


There are two types of counterpropagation network:

  1. Full counterpropagation network
  2. Forward-only counterpropagation network

Full Counterpropagation Network

The full CPN efficiently represents a large number of vector pairs x:y by adaptively constructing a lookup table. The full CPN works best if the inverse function exists and is continuous. The vectors x and y propagate through the network in a counterflow manner to yield the output vectors x* and y*.

Architecture of Full Counterpropagation Network

The four major components of the instar-outstar model are the input layer, the instar, the competitive layer, and the outstar. For each node in the input layer there is an input value $x_i$. All the instars are grouped into a layer called the competitive layer. Each instar responds maximally to a group of input vectors in a different region of space. An outstar has all the nodes in the output layer and a single node in the competitive layer. The outstar looks like the fan-out of a node.

Training Algorithm for Full Counterpropagation Network:

Step 0: Set the initial weights and the initial learning rates.

Step 1: Perform Steps 2-7 if stopping condition is false for phase-I training.

Step 2: For each training input vector pair x:y presented, perform Steps 3-5.

Step 3: Set the X-input layer activations to vector x. Set the Y-input layer activations to vector y.

Step 4: Find the winning cluster unit. If the dot product method is used, find the cluster unit $Z_j$ with the largest net input, for j = 1 to p:

$z_{inj} = \sum_{i=1}^{n} x_i v_{ij} + \sum_{k=1}^{m} y_k w_{kj}$

If the Euclidean distance method is used, find the cluster unit $Z_j$ whose squared distance from the input vectors is smallest:

$D(j) = \sum_{i=1}^{n} (x_i - v_{ij})^2 + \sum_{k=1}^{m} (y_k - w_{kj})^2$

If a tie occurs in the selection of the winner unit, the unit with the smallest index is the winner. Denote the index of the winner unit by J.
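
To make the two winner-selection rules concrete, here is a minimal NumPy sketch. The function names and the array layout (v stored as an n×p matrix and w as an m×p matrix, with column j holding the weights of cluster unit $Z_j$) are illustrative conventions, not part of the algorithm:

```python
import numpy as np

def winner_dot(x, y, v, w):
    # Net input z_inj = sum_i x_i v_ij + sum_k y_k w_kj for every cluster unit j;
    # with the dot product method, the LARGEST net input wins.
    z_in = x @ v + y @ w          # shape (p,)
    return int(np.argmax(z_in))   # argmax picks the smallest index on ties

def winner_euclidean(x, y, v, w):
    # Squared distance D(j) = sum_i (x_i - v_ij)^2 + sum_k (y_k - w_kj)^2;
    # with the Euclidean method, the SMALLEST distance wins.
    d = ((x[:, None] - v) ** 2).sum(axis=0) + ((y[:, None] - w) ** 2).sum(axis=0)
    return int(np.argmin(d))      # argmin also breaks ties by smallest index
```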

Step 5: Update the weights for the winner unit $Z_J$:

$v_{iJ}(\text{new}) = v_{iJ}(\text{old}) + \alpha\,[x_i - v_{iJ}(\text{old})], \quad i = 1 \text{ to } n$

$w_{kJ}(\text{new}) = w_{kJ}(\text{old}) + \beta\,[y_k - w_{kJ}(\text{old})], \quad k = 1 \text{ to } m$

Step 6: Reduce the learning rates α and β

$\alpha(t+1) = 0.5\,\alpha(t)$

$\beta(t+1) = 0.5\,\beta(t)$

Step 7: Test stopping condition for phase-I training.
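
Steps 1-7 combine into a short phase-I training loop. The sketch below is one possible reading of the algorithm, reusing winner_euclidean from above; the epoch count and initial rates are arbitrary illustrative choices:

```python
def train_phase1(pairs, v, w, alpha=0.4, beta=0.4, epochs=10):
    # Phase I: cluster the (x, y) pairs by moving the winner's instar
    # weights toward the presented vectors (Steps 2-6).
    for _ in range(epochs):
        for x, y in pairs:
            J = winner_euclidean(x, y, v, w)   # Step 4
            v[:, J] += alpha * (x - v[:, J])   # Step 5: v_iJ update
            w[:, J] += beta * (y - w[:, J])    # Step 5: w_kJ update
        alpha *= 0.5                           # Step 6: halve both rates
        beta *= 0.5
    return v, w
```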

Step 8: Perform Steps 9-15 when stopping condition is false for phase-II training.

Step 9: Perform Steps 10-13 for each training input pair x:y. Here α and β are small constant values.

Step 10: Set the X-input layer activations to vector x. Set the Y-input layer activations to vector y.

Step 11: Find the winning cluster unit (use formulas from Step 4). Take the winner unit index as J.

Step 12: Update the weights entering unit $Z_J$:

$v_{iJ}(\text{new}) = v_{iJ}(\text{old}) + \alpha\,[x_i - v_{iJ}(\text{old})], \quad i = 1 \text{ to } n$

$w_{kJ}(\text{new}) = w_{kJ}(\text{old}) + \beta\,[y_k - w_{kJ}(\text{old})], \quad k = 1 \text{ to } m$

Step 13: Update the weights from unit $Z_J$ to the output layers:

$t_{Ji}(\text{new}) = t_{Ji}(\text{old}) + b\,[x_i - t_{Ji}(\text{old})], \quad i = 1 \text{ to } n$

$u_{Jk}(\text{new}) = u_{Jk}(\text{old}) + a\,[y_k - u_{Jk}(\text{old})], \quad k = 1 \text{ to } m$

Step 14: Reduce the learning rates a and b.

$a(t+1) = 0.5\,a(t)$

$b(t+1) = 0.5\,b(t)$

Step 15: Test stopping condition for phase-II training.
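
Phase II tunes the outstar weights t (cluster unit to X-output layer) and u (cluster unit to Y-output layer); afterwards the net recalls the approximations x* and y*. A hedged sketch continuing the conventions above, with t stored as a p×n matrix and u as p×m (again an illustrative layout, not prescribed by the algorithm):

```python
def train_phase2(pairs, v, w, t, u, alpha=0.1, beta=0.1,
                 a=0.1, b=0.1, epochs=10):
    # Phase II: alpha and beta stay small (Step 9) while the outstar
    # weights t and u are tuned toward the training vectors.
    for _ in range(epochs):
        for x, y in pairs:
            J = winner_euclidean(x, y, v, w)   # Step 11
            v[:, J] += alpha * (x - v[:, J])   # Step 12
            w[:, J] += beta * (y - w[:, J])
            t[J, :] += b * (x - t[J, :])       # Step 13: t_Ji update
            u[J, :] += a * (y - u[J, :])       # Step 13: u_Jk update
        a *= 0.5                               # Step 14
        b *= 0.5
    return t, u

def recall(x, y, v, w, t, u):
    # Counterflow recall: the winner's outstar weights give (x*, y*).
    J = winner_euclidean(x, y, v, w)
    return t[J, :], u[J, :]
```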

Forward-only Counterpropagation Network

A simplified version of the full CPN is the forward-only CPN. The forward-only CPN uses only the x vectors to form the clusters on the Kohonen units during phase-I training. In the forward-only CPN, the input vectors are first presented to the input units; the weights between the input layer and the cluster layer are trained first, and then the weights between the cluster layer and the output layer are trained. It is a specific competitive network whose targets are known.

Architecture of Forward-only CPN

It consists of three layers: the input layer, the cluster layer, and the output layer. Its architecture resembles that of the back-propagation network, but in the CPN there are interconnections between the units in the cluster layer.

Training Algorithm for Forward-only Counterpropagation Network:

Step 0: Initialize the weights and the learning rates.

Step 1: Perform Steps 2-7 if stopping condition is false for phase-I training.

Step 2: Perform Steps 3-5 for each training input x.

Step 3: Set the X-input layer activations to vector x.

Step 4: Compute the winning cluster unit J. If the dot product method is used, find the cluster unit $Z_j$ with the largest net input:

$z_{inj} = \sum_{i=1}^{n} x_i v_{ij}$

If the Euclidean distance method is used, find the cluster unit $Z_j$ whose squared distance from the input pattern is smallest:

$D(j) = \sum_{i=1}^{n} (x_i - v_{ij})^2$

If a tie occurs in the selection of the winner unit, the unit with the smallest index is chosen as the winner. Denote the index of the winner unit by J.

Step 5: Update the weights for unit $Z_J$:

$v_{iJ}(\text{new}) = v_{iJ}(\text{old}) + \alpha\,[x_i - v_{iJ}(\text{old})], \quad i = 1 \text{ to } n$

Step 6: Reduce the learning rate α:

$\alpha(t+1) = 0.5\,\alpha(t)$

Step 7: Test stopping condition for phase-I training.

Step 8: Perform Steps 9-15 when stopping condition is false for phase-II training.

Step 9: Perform Steps 10-13 for each training input pair x:y.

Step 10: Set the X-input layer activations to vector x. Set the Y-output layer activations to vector y.

Step 11: Find the winning cluster unit (use formulas from Step 4). Take the winner unit index as J.

Step 12: Update the weights entering unit $Z_J$:

$v_{iJ}(\text{new}) = v_{iJ}(\text{old}) + \alpha\,[x_i - v_{iJ}(\text{old})], \quad i = 1 \text{ to } n$

Step 13: Update the weights from unit $Z_J$ to the output layer:

$w_{kJ}(\text{new}) = w_{kJ}(\text{old}) + \beta\,[y_k - w_{kJ}(\text{old})], \quad k = 1 \text{ to } m$

Step 14: Reduce the learning rate β:

$\beta(t+1) = 0.5\,\beta(t)$

Step 15: Test stopping condition for phase-II training.
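
The whole forward-only procedure condenses into one compact sketch. The following minimal NumPy illustration (initialization, rates, and epoch counts are arbitrary choices, not prescribed by the algorithm) trains both phases and then maps a new x to its stored y:

```python
import numpy as np

def train_forward_only(pairs, n, m, p, alpha=0.4, beta=0.4,
                       epochs1=10, epochs2=10, seed=0):
    rng = np.random.default_rng(seed)
    v = rng.random((n, p))   # Step 0: input -> cluster weights v_ij
    w = rng.random((m, p))   #         cluster -> output weights w_kJ

    def winner(x):
        # Step 4, Euclidean form: only x is used to pick the cluster unit;
        # argmin breaks ties by the smallest index.
        return int(np.argmin(((x[:, None] - v) ** 2).sum(axis=0)))

    # Phase I (Steps 1-7): form the clusters from the x vectors alone.
    for _ in range(epochs1):
        for x, _ in pairs:
            J = winner(x)
            v[:, J] += alpha * (x - v[:, J])   # Step 5
        alpha *= 0.5                           # Step 6

    # Phase II (Steps 8-15): tune the cluster-to-output weights.
    for _ in range(epochs2):
        for x, y in pairs:
            J = winner(x)
            v[:, J] += alpha * (x - v[:, J])   # Step 12
            w[:, J] += beta * (y - w[:, J])    # Step 13
        beta *= 0.5                            # Step 14
    return v, w

def predict(x, v, w):
    # Recall: the winning cluster unit's outgoing weights are the answer.
    J = int(np.argmin(((x[:, None] - v) ** 2).sum(axis=0)))
    return w[:, J]
```

For pairs sampled from a function y = f(x), each cluster unit ends up holding one entry of the adaptive lookup table described earlier, so predict behaves as a piecewise-constant approximation of f.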