Self Organizing Map

 


The ideas were first introduced by C. von der Malsburg (1973) and later developed and refined by T. Kohonen (1982).

It is a type of unsupervised competitive learning, used primarily for the organization and visualization of complex data.

SOM Architecture




The architecture consists of two layers of neurons: an input layer and an output layer. Each input neuron is connected to every output neuron, so the network is fully connected. The output map usually has two dimensions; one- and three-dimensional maps are also used. A small code sketch of this layout follows the list below.

Neurons in the output map can be laid out in different patterns:

o   Rectangular

o   Hexagonal

o   Random 
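A minimal NumPy sketch (illustrative only; the sizes are assumed, not taken from the text) shows how this fully connected architecture is usually stored: one weight vector per output neuron, arranged on a rectangular 2-D grid.

import numpy as np

# Assumed sizes for illustration: 3 input features, 10 x 10 rectangular output map.
n_features = 3
map_rows, map_cols = 10, 10

# Full connectivity: every output neuron holds one weight per input neuron,
# so the whole map is a (rows, cols, n_features) array of small random weights.
rng = np.random.default_rng(0)
weights = rng.uniform(0.0, 0.1, size=(map_rows, map_cols, n_features))

print(weights.shape)   # (10, 10, 3) -> 100 output neurons, each with 3 incoming weights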

SOM Training

       The “neighbourhood” is an important concept in SOM training.

         The neighbourhood consists of the output map neurons that adjoin the winner.

         Neighbourhood size describes the distance between a neighbour and the winning neuron.

          The neighbours' weights are also modified during training.

Neighborhood  Concept




The self-organizing map works in three different phases:

       Competition

       Cooperation

       Synaptic Adaptation

Competition

1.       Randomly initialise all weights

2.       Select input vector x = [x1, x2, x3, … , xm]

3.       wj = [wj1, wj2, … , wjm]T

        where j = 1, 2, …, l (l = total number of neurons in the network)

4.       Compare x with the weights wj of each neuron j to determine the winner

                i(x) = arg minj ||x − wj||

5.  Update the winner so that it becomes more like x, together with the winner's neighbours

6.  Adjust parameters: learning rate & ‘neighbourhood function’

7.  Repeat from (2) until the map has converged (i.e. no noticeable changes in the weights) or a pre-defined number of training cycles has been completed
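A small Python sketch of the competition step (steps 2–4 above) might look like the following; the 10 × 10 map and the sample input are assumed values used only for illustration.

import numpy as np

def find_winner(x, weights):
    # Competition: the winning neuron i(x) is the one whose weight vector
    # has the minimum Euclidean distance to the input vector x.
    distances = np.linalg.norm(weights - x, axis=-1)   # one distance per output neuron
    return np.unravel_index(np.argmin(distances), distances.shape)

rng = np.random.default_rng(0)
weights = rng.uniform(0.0, 0.1, size=(10, 10, 3))      # step 1: random initialisation
x = np.array([0.05, 0.02, 0.07])                       # step 2: an input vector
print(find_winner(x, weights))                         # steps 3-4: (row, col) of the winner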

Cooperation

  •    How do we define a topological neighborhood that is neurobiologically correct?
  •    Let hj,i denote the topological neighborhood centered on the winning neuron i and encompassing a set of excited neurons denoted by j.
  •    The topological neighborhood is symmetric about the maximum point.
  •    The amplitude of the topological neighborhood decreases monotonically with increasing lateral distance.

                            hj,i(x)(n) = exp( −d²j,i / (2σ²(n)) )

                        where

                            σ(n) = σ0 exp(−n/τ1),  n = 0, 1, 2, …


 

     The distance between neurons is measured with the Euclidean metric. For example, for a 2-D lattice we have:

                            d²j,i = ||rj − ri||²

       where the discrete vector rj defines the position of excited neuron j and ri defines the position of the winning neuron i in the lattice.

Another characteristic feature of the SOM algorithm is that the size of the neighbourhood shrinks with time. This requirement is satisfied by making the width σ(n) of the Gaussian function decrease with time.
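The Gaussian neighbourhood and its shrinking width can be sketched in Python as follows (a rough illustration; σ0 = 5 and τ1 = 1000 are assumed example values, not prescribed ones).

import numpy as np

def neighborhood(winner, grid_shape, sigma):
    # Cooperation: h_{j,i(x)}(n) = exp(-d^2_{j,i} / (2 * sigma^2(n))),
    # evaluated for every neuron j of a 2-D lattice, centred on the winner.
    rows, cols = np.indices(grid_shape)                      # lattice positions r_j
    d2 = (rows - winner[0]) ** 2 + (cols - winner[1]) ** 2   # squared lattice distance
    return np.exp(-d2 / (2.0 * sigma ** 2))

def sigma_schedule(n, sigma0=5.0, tau1=1000.0):
    # Width sigma(n) = sigma0 * exp(-n / tau1): the neighbourhood shrinks with time.
    return sigma0 * np.exp(-n / tau1)

h = neighborhood(winner=(4, 4), grid_shape=(10, 10), sigma=sigma_schedule(0))
print(h[4, 4], h[0, 0])   # 1.0 at the winner, much smaller far from it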

   Synaptic Adaptation

        wj(n+1) = wj(n) + η(n) hj,i(x)(n) [x − wj(n)]

       The synaptic weight vector wj of the winning neuron i moves toward the input vector x. Upon repeated presentations of the training data, the synaptic weight vectors tend to follow the distribution of the input vectors because of the neighbourhood updating → topological ordering.

       The learning-rate parameter η(n) should be time-varying.

η(n) = η0 exp(−n/τ2),  n = 0, 1, 2, …

          The adaptive process can be decomposed into two phases:

          A self-organising or ordering phase;

          A convergence phase.

       Self-organizing or ordering phase: (max 1000 iterations)

       η(n) decreases from about 0.1 to 0.01, with τ2 = 1000

       One should choose the learning rate and the neighborhood function carefully.

       hj,i(x)(n) initially covers almost the entire lattice (the "radius" of the lattice) and shrinks to the winning neuron plus a few neighboring neurons, with

                                 τ1 = 1000 / log σ0

       Convergence phase: (fine-tunes the feature map; runs for roughly 500 × the number of neurons in the network iterations)

          η(n) = 0.01,

          hj,i(x)(n) contains the winning neuron and one or zero neighboring neurons.
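Putting the update rule and its schedules together, a rough Python sketch is given below; clamping η(n) at 0.01 for the convergence phase is an assumption made to match the values quoted above, not a prescribed implementation.

import numpy as np

def eta_schedule(n, eta0=0.1, tau2=1000.0):
    # Ordering phase: eta(n) = eta0 * exp(-n / tau2) decays from 0.1 towards 0.01;
    # convergence phase: hold it at roughly 0.01 (approximated here with a simple floor).
    return max(eta0 * np.exp(-n / tau2), 0.01)

def adapt(weights, x, h, eta):
    # Synaptic adaptation: w_j(n+1) = w_j(n) + eta(n) * h_{j,i(x)}(n) * (x - w_j(n)),
    # applied to every neuron j at once (h holds one value per output neuron).
    return weights + eta * h[..., np.newaxis] * (x - weights)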

Summary of SOM Algorithm

1.       Initialization: Choose random values for the initial weight vectors wj(0). The weight vectors must be different for all neurons. Usually, we keep the magnitude of the weights small.

2.       Sampling: Draw a sample x from the input space with a certain probability; the vector x represents the activation pattern that is applied to the lattice. The dimension of x is equal to m.

3.       Similarity Matching: Find the best-matching (winning) neuron i(x) at time step n by using the minimum Euclidean distance criterion:

                                   i(x) = arg minj ||x − wj||,   j = 1, 2, …, l

4.       Updating: Adjust the synaptic weight vectors of all neurons by using the update formula:

                       wj(n+1) = wj(n) + η(n) hj,i(x)(n) (x(n) − wj(n))

               where η(n) is the learning rate and hj,i(x)(n) is the neighbourhood function around the winning neuron i(x); both η(n) and hj,i(x)(n) are varied dynamically for best results.

5.       Continuation: Continue with step 2 until no noticeable changes in the feature map are observed.
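For completeness, a compact end-to-end sketch of steps 1–5 in Python is given below. It is illustrative only: a fixed iteration budget stands in for the convergence check, and the map size and schedule constants are the same assumed values as in the earlier sketches.

import numpy as np

def train_som(data, map_shape=(10, 10), n_iter=1000,
              eta0=0.1, tau2=1000.0, sigma0=5.0):
    tau1 = 1000.0 / np.log(sigma0)                       # tau1 = 1000 / log(sigma0)
    rng = np.random.default_rng(0)
    n_features = data.shape[1]
    # 1. Initialization: small random weights, one vector per output neuron.
    weights = rng.uniform(0.0, 0.1, size=map_shape + (n_features,))
    rows, cols = np.indices(map_shape)                   # lattice positions

    for n in range(n_iter):
        # 2. Sampling: draw one input vector from the data.
        x = data[rng.integers(len(data))]
        # 3. Similarity matching: i(x) = arg min_j ||x - w_j||.
        d = np.linalg.norm(weights - x, axis=-1)
        wr, wc = np.unravel_index(np.argmin(d), d.shape)
        # 4. Updating: decaying learning rate, shrinking Gaussian neighbourhood.
        eta = eta0 * np.exp(-n / tau2)
        sigma = sigma0 * np.exp(-n / tau1)
        h = np.exp(-((rows - wr) ** 2 + (cols - wc) ** 2) / (2.0 * sigma ** 2))
        weights += eta * h[..., np.newaxis] * (x - weights)
        # 5. Continuation: here approximated by a fixed number of iterations.
    return weights

data = np.random.default_rng(1).random((500, 3))         # synthetic inputs for the example
som = train_som(data)
print(som.shape)                                         # (10, 10, 3)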

Kohonen SOMs

       Initialize weights

       Repeat until convergence

      Select next input pattern

      Find Best Matching Unit

      Update weights of winner and neighbours

      Decrease learning rate & neighbourhood size
