

Fixed Weight Competitive Networks



In these competitive networks, the weights remain fixed during training. Competition among the neurons is used to enhance the contrast in their activations. Two such networks are discussed here: the Maxnet and the Hamming network.

Maxnet

The Maxnet was developed by Lippmann in 1987. It serves as a subnet for picking the node whose input is the largest. All the nodes in this subnet are fully interconnected, and the weighted interconnections carry symmetrical weights.

Architecture of Maxnet

The architecture of Maxnet is fixed: symmetrical weights are present over the weighted interconnections, and the weights between the neurons are inhibitory. With this structure, the Maxnet can be used as a subnet to select the particular node whose net input is the largest.



Testing Algorithm of Maxnet

The Maxnet uses the following activation function:

f(x) = x   if x > 0
       0   if x ≤ 0

Testing algorithm

Step 0: Initial weights and initial activations are set. The inhibition weight ε is chosen so that 0 < ε < 1/m, where "m" is the total number of nodes. Let

Xj(0) = input to the node Xj

and

wij =  1    if i = j
      -ε    if i ≠ j

Step 1: Perform Steps 2-4, when stopping condition is false.

Step 2: Update the activations of each node. For j = 1 to m,

xj(new) = f [ xj(old) - ε Σ(k ≠ j) xk(old) ]

Step 3: Save the activations obtained for use in the next iteration. For j = 1 to m,
xj(old) = xj(new)

Step 4: Finally, test the stopping condition for convergence of the network. The following is the stopping condition: If more than one node has a nonzero activation, continue; else stop.
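The steps above can be sketched in Python with NumPy. The function name `maxnet`, the choice ε = 0.5/m, and the iteration cap are assumptions made for this illustration, not part of the original algorithm statement:

```python
import numpy as np

def maxnet(a0, eps=None, max_iter=100):
    # Maxnet winner-take-all sketch: a0 holds the initial
    # activations (net inputs), one per node; eps is the mutual
    # inhibition and must satisfy 0 < eps < 1/m.
    a = np.array(a0, dtype=float)
    m = len(a)
    if eps is None:
        eps = 0.5 / m                       # assumed value in (0, 1/m)
    f = lambda x: np.maximum(x, 0.0)        # f(x) = x if x > 0, else 0
    for _ in range(max_iter):
        # Each node excites itself (weight 1) and is inhibited by
        # every other node (weight -eps): a.sum() - a is the sum
        # over all k != j.
        a = f(a - eps * (a.sum() - a))
        if np.count_nonzero(a) <= 1:        # stopping condition (Step 4)
            break
    return a

# Example: only the node with the largest input (index 2) survives.
winner_activations = maxnet([0.2, 0.4, 0.6, 0.1])
print(winner_activations)
```

Each iteration suppresses every node by ε times the total activation of the others, so the activations of all but the largest node are eventually driven to zero.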


Hamming Network

The Hamming network is a two-layer feedforward neural network for classifying bipolar n-tuple input vectors using the minimum Hamming distance, denoted DH (Lippmann, 1987). The first layer is the input layer for the n-tuple input vectors. The second layer (also called the memory layer) stores p memory patterns; a p-class Hamming network has p output neurons in this layer. The strongest response of a neuron indicates the minimum Hamming distance between the stored pattern and the input vector.

Hamming Distance

For two bipolar vectors x and y of dimension n,

x.y = a - d

where a is the number of bits in which x and y agree (the number of similar bits) and d is the number of bits in which they differ (the number of dissimilar bits). The Hamming distance between the two vectors is d. Since the total number of components is n, we have

n = a + d

i.e., d = n - a

On simplification, we get

x.y = a - (n - a)

x.y = 2a - n

2a = x.y + n

a = (1/2) x.y + (1/2) n

From the above equation, it is clear that the weights can be set to one-half the exemplar vector and the bias can be initially set to n/2.
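The relation a = (1/2) x.y + (1/2) n can be checked numerically for a pair of bipolar vectors (the vectors here are chosen arbitrarily for illustration):

```python
# Two arbitrary bipolar 5-component vectors (illustrative only).
x = [1, -1, 1, 1, -1]
y = [1, 1, 1, -1, -1]

n = len(x)
dot = sum(xi * yi for xi, yi in zip(x, y))   # x.y = a - d
a = sum(xi == yi for xi, yi in zip(x, y))    # bits in agreement
d = n - a                                    # Hamming distance

print(dot, a, d)                             # 1 3 2
assert a == dot / 2 + n / 2                  # a = (1/2) x.y + (1/2) n
```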


Testing Algorithm of Hamming Network

Step 0: Initialize the weights. For i = 1 to n and j = 1 to m,

wij = ei(j) / 2

Initialize the bias for storing the "m" exemplar vectors. For j = 1 to m,

bj = n / 2

Step 1: Perform Steps 2-4 for each input vector x.

Step 2: Calculate the net input to each unit Yj, i.e.,

yinj = Σ(i = 1 to n) xi wij + bj ,   for j = 1 to m

Step 3: Initialize the activations for Maxnet, i.e.,

yj(0) = yinj ,   for j = 1 to m

Step 4: Maxnet iterates to find the exemplar that best matches the input pattern.
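Putting Steps 0-3 together, a minimal Hamming-network sketch (the function name and the example exemplars are assumptions) computes the net inputs; the largest one identifies the closest stored pattern. Here np.argmax stands in for the Maxnet competition of Step 4:

```python
import numpy as np

def hamming_net(exemplars, x):
    # Hamming network sketch: weights are one-half the exemplar
    # vectors, bias is n/2. Returns the net inputs y_in_j, which
    # equal a, the number of bits on which x agrees with each
    # stored exemplar.
    E = np.array(exemplars, dtype=float)   # shape (m, n), bipolar entries
    n = E.shape[1]
    W = E.T / 2.0                          # w_ij = e_i(j) / 2  (Step 0)
    b = n / 2.0                            # b_j = n / 2        (Step 0)
    return np.asarray(x, float) @ W + b    # y_in_j = sum_i x_i w_ij + b_j

exemplars = [[ 1, -1, -1, -1],
             [-1, -1, -1,  1]]
x = [1, 1, -1, -1]

y_in = hamming_net(exemplars, x)           # Steps 1-2
# argmax replaces the Maxnet iteration of Steps 3-4.
print(y_in, int(np.argmax(y_in)))          # [3. 1.] 0
```

The input agrees with the first exemplar on 3 of 4 bits and with the second on only 1, so the first output neuron responds most strongly (minimum Hamming distance d = n - a = 1).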

