Learning Vector Quantization

In 1980, the Finnish professor Teuvo Kohonen observed that some areas of the brain develop structures in which different regions are each highly sensitive to a specific input pattern. Learning Vector Quantization builds on this observation: it is based on competition among neural units, following a principle called winner-takes-all.

Learning Vector Quantization (LVQ) is a prototype-based supervised classification algorithm. A prototype here is a representative vector: one or more prototypes are used to represent each class in the dataset, and a new (unknown) data point is assigned the class of the prototype nearest to it. For "nearest" to make sense, a distance measure has to be defined, typically the Euclidean distance. There is no limit on how many prototypes can be used per class; the only requirement is that there is at least one prototype for each class. LVQ is a special case of an artificial neural network that applies a winner-takes-all, Hebbian-learning-based approach. It is closely related to the Self-Organizing Map (SOM) algorithm, the main difference being that LVQ is supervised while SOM is unsupervised. Both SOM and LVQ were invented by Teuvo Kohonen.

An LVQ system is represented by a set of prototypes W = (w1, ..., wn). In winner-takes-all training, the winning prototype is moved closer to the data point if it classifies the point correctly, and moved away if it classifies the point incorrectly. An advantage of LVQ is that it creates prototypes that are easy to interpret for experts in the respective application domain.
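To make the nearest-prototype rule concrete, here is a minimal classification sketch in Python with NumPy. The names predict, prototypes, and proto_labels are illustrative, not part of any standard API; prototypes is assumed to hold one row per prototype, with its class label in proto_labels.

import numpy as np

def predict(x, prototypes, proto_labels):
    """Assign x the label of its nearest prototype (Euclidean distance)."""
    distances = np.sum((prototypes - x) ** 2, axis=1)  # squared distance to every prototype
    winner = np.argmin(distances)                      # index J of the winning unit
    return proto_labels[winner]

# Example: two prototypes per class in 2-D
prototypes = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0], [6.0, 6.0]])
proto_labels = np.array([0, 0, 1, 1])
print(predict(np.array([0.8, 0.9]), prototypes, proto_labels))  # -> 0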

Training Algorithm

Step 0: Initialize the reference vectors. This can be done in one of the following ways:
          From the given set of training vectors, take the first "m" (number of clusters) training vectors and use them as weight vectors; the remaining vectors are then used for training.
          Assign the initial weights and classifications randomly.
          Use the k-means clustering method.
          In all cases, also set the initial learning rate α.

Step 1: Perform Steps 2-6 while the stopping condition is false.

Step 2: Perform Steps 3-4 for each training input vector x.

Step 3: Calculate the Euclidean distance; for i = 1 to n, j = 1 to m,

D(j) = Σ_{i=1}^{n} (x_i − w_{ij})²

Find the winning unit index J such that D(J) is minimum.

Step 4: Update the weights of the winning unit w_J using the following conditions, where T is the target class of the training vector x and C_J is the class represented by the winning unit.

if T = C_J, then w_J(new) = w_J(old) + α [x − w_J(old)]

if T ≠ C_J, then w_J(new) = w_J(old) − α [x − w_J(old)]
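As a quick worked example with made-up numbers: let x = (1.0, 2.0), w_J(old) = (0.0, 0.0), and α = 0.1. If T = C_J, then w_J(new) = (0.0, 0.0) + 0.1·[(1.0, 2.0) − (0.0, 0.0)] = (0.1, 0.2), so the winner moves toward x. If instead T ≠ C_J, then w_J(new) = (−0.1, −0.2), so the winner moves away from x.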

Step 5: Reduce the learning rate α (for example, by multiplying it by a fixed decay factor after each epoch).

Step 6: Test for the stopping condition of the training process. (The stopping condition may be a fixed number of epochs, or the learning rate having decayed to a negligible value.)
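The steps above can be combined into a short Python sketch. This is a minimal LVQ1 training loop, assuming the first m training vectors serve as the initial prototypes (one option from Step 0) and a fixed number of epochs as the stopping condition; the name train_lvq and its parameters are illustrative.

import numpy as np

def train_lvq(X, T, m, alpha=0.1, decay=0.95, epochs=20):
    """Train LVQ1 prototypes.

    X: (N, n) training vectors; T: (N,) class labels;
    m: number of prototypes.
    """
    # Step 0: initialize reference vectors from the first m training vectors
    W = X[:m].astype(float)            # prototype weight vectors w_j
    C = T[:m].copy()                   # class C_j attached to each prototype
    X_train, T_train = X[m:], T[m:]    # remaining vectors are used for training

    for _ in range(epochs):                      # Steps 1/6: loop until stopping condition
        for x, t in zip(X_train, T_train):       # Step 2: each training vector x
            # Step 3: distance D(j) to every prototype, winning unit J
            D = np.sum((W - x) ** 2, axis=1)
            J = np.argmin(D)
            # Step 4: move the winner toward x if correct, away if not
            if t == C[J]:
                W[J] += alpha * (x - W[J])
            else:
                W[J] -= alpha * (x - W[J])
        alpha *= decay                           # Step 5: reduce the learning rate
    return W, C

# Usage: two classes in 2-D, one prototype per class (m = 2)
X = np.array([[0.0, 0.2], [5.0, 5.1], [0.3, 0.1], [5.2, 4.9], [0.1, 0.4], [4.8, 5.3]])
T = np.array([0, 1, 0, 1, 0, 1])
W, C = train_lvq(X, T, m=2)
print(W, C)

Note that this sketch updates only the single winning prototype per input, which is the defining behavior of basic LVQ1; Kohonen's later variants (LVQ2, LVQ3) also adjust runner-up prototypes.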

