In these competitive networks the weights remain fixed, even during training. Competition among the neurons is used to enhance the contrast in their activations. Two such networks are discussed here: the Maxnet and the Hamming network.
The Maxnet was developed by Lippmann in 1987. It serves as a subnet for picking the node whose input is the largest. All the nodes in this subnet are fully interconnected, and symmetrical weights exist on all these weighted interconnections.
In the architecture of the Maxnet, fixed symmetrical weights are present over the weighted interconnections. The weights between the neurons are inhibitory and fixed. With this structure, the Maxnet can be used as a subnet to select the particular node whose net input is the largest.
The Maxnet uses the following activation function:

f(x) = x, if x > 0
f(x) = 0, if x ≤ 0
Step 0: Initial weights and initial activations are set. The mutual inhibition weight ε is set as 0 < ε < 1/m, where "m" is the total number of nodes. Let

x_j(0) = input to node X_j, and

w_ij = 1 if i = j, and w_ij = -ε if i ≠ j.
Step 1: Perform Steps 2-4, when stopping condition is false.
Step 2: Update the activations of each node. For j = 1 to m,

x_j(new) = f[x_j(old) - ε Σ(k ≠ j) x_k(old)]
Step 3: Save the activations obtained for use in the next iteration. For j = 1 to m,

x_j(old) = x_j(new)
Step 4: Finally, test the stopping condition for convergence of the network. The following is the stopping condition: If more than one node has a nonzero activation, continue; else stop.
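The iteration in Steps 0-4 can be sketched as follows. This is a minimal illustrative implementation; the function name `maxnet`, the default choice of ε, and the example inputs are assumptions, not taken from the text.

```python
def maxnet(activations, epsilon=None):
    """Iterate mutual inhibition until at most one node stays nonzero.

    Returns the index of the winning node and the final activations.
    """
    m = len(activations)
    if epsilon is None:
        epsilon = 1.0 / (2 * m)          # any value satisfying 0 < eps < 1/m
    # Step 0: initial activations, passed through f(x) = max(x, 0)
    x = [max(a, 0.0) for a in activations]
    # Step 1/4: repeat while more than one node has a nonzero activation
    while sum(1 for v in x if v > 0) > 1:
        total = sum(x)
        # Step 2: x_j(new) = f[x_j(old) - eps * sum of the other activations]
        x = [max(v - epsilon * (total - v), 0.0) for v in x]
        # Step 3 (saving activations for the next iteration) is implicit
        # in rebinding x
    winner = max(range(m), key=lambda j: x[j])
    return winner, x

winner, final = maxnet([0.2, 0.4, 0.6, 0.8])
# The node with the largest initial input survives the competition.
```

Note that the loop terminates because every node except the one with the largest activation is driven to zero by the inhibitory term.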
The Hamming network is a two-layer feedforward neural network for the classification of bipolar n-tuple input vectors using the minimum Hamming distance, denoted DH (Lippmann, 1987). The first layer is the input layer for the n-tuple input vectors. The second layer (also called the memory layer) stores the p memory patterns. A p-class Hamming network has p output neurons in this layer. The strongest response of a neuron indicates the minimum Hamming distance between the stored pattern and the input vector.
Consider two bipolar vectors, x and y, each of dimension n. Their dot product is

x.y = a - d

where a is the number of components in which x and y agree, and d is the number of components in which x and y differ. The Hamming distance between the two vectors is d.
Since the total number of components is n, we have
n = a + d
i.e., d = n - a
On simplification, we get
x.y = a - (n - a)
x.y = 2a - n
2a = x.y + n
a = (x.y + n)/2
From the above equation, it is clear that the weights can be set to one-half the exemplar vector and the bias can be set initially to n/2; the net input to an output unit is then exactly a, the number of components in which the input agrees with the stored exemplar.
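The identity a = (x.y + n)/2 can be checked numerically. The vectors below are an illustrative example, not taken from the text.

```python
def agreements(x, y):
    """Count the components in which x and y agree."""
    return sum(1 for xi, yi in zip(x, y) if xi == yi)

x = [1, -1, 1, 1, -1]
y = [1, 1, 1, -1, -1]
n = len(x)

dot = sum(xi * yi for xi, yi in zip(x, y))   # x.y = a - d
a = agreements(x, y)                          # direct count of agreements
d = n - a                                     # the Hamming distance

assert dot == a - d          # x.y = a - d
assert a == (dot + n) // 2   # a = (x.y + n)/2
```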
Step 0: Initialize the weights to store the "m" exemplar vectors e(1), ..., e(m). For i = 1 to n and j = 1 to m,

w_ij = e_i(j)/2

Initialize the bias. For j = 1 to m,

b_j = n/2
Step 1: Perform Steps 2-4 for each input vector x.
Step 2: Calculate the net input to each unit Y_j. For j = 1 to m,

y_inj = b_j + Σ(i=1 to n) x_i w_ij
Step 3: Initialize the activations for the Maxnet. For j = 1 to m,

y_j(0) = y_inj
Step 4: The Maxnet iterates to find the exemplar that best matches the input pattern.
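Steps 0-4 of the Hamming network, combined with a Maxnet-style winner-take-all stage, can be sketched as below. The function name `hamming_net` and the example exemplars are assumptions for illustration.

```python
def hamming_net(exemplars, x):
    """Return the index of the stored exemplar closest to x in Hamming distance."""
    m = len(exemplars)          # number of stored patterns / output units
    n = len(x)                  # dimension of the bipolar input vectors
    # Step 0: weights are half the exemplars, biases are n/2
    weights = [[e_i / 2.0 for e_i in e] for e in exemplars]
    bias = n / 2.0
    # Step 2: net input y_in_j = b_j + sum_i x_i * w_ij, which equals the
    # number of agreements a between x and exemplar j
    y_in = [bias + sum(xi * wi for xi, wi in zip(x, w)) for w in weights]
    # Steps 3-4: Maxnet mutual inhibition picks the strongest response
    eps = 1.0 / (2 * m)
    y = [max(v, 0.0) for v in y_in]
    while sum(1 for v in y if v > 0) > 1:
        total = sum(y)
        y = [max(v - eps * (total - v), 0.0) for v in y]
    return max(range(m), key=lambda j: y[j])

exemplars = [[1, 1, 1, 1], [1, -1, -1, -1], [-1, -1, -1, 1]]
x = [1, 1, -1, 1]                    # differs from exemplar 0 in one place
winner = hamming_net(exemplars, x)   # index of the closest stored pattern
```

Because the net input to unit j equals the agreement count a, the unit with the largest net input corresponds to the exemplar with the smallest Hamming distance d = n - a, which is exactly the pattern the Maxnet stage selects.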