
NEURAL NETWORK

PROJECT # 1

KWTA (K-Winners-Take-All)

 

1. Randomly initialize each neuron's activation between the minimum and maximum activation.

2. Update the activations at each iteration synchronously (all neurons update at the same time).

3. Calculate the energy of the network at each iteration.

4. Halt the program when the network has converged.

When you have your simulation running, use the results to answer the questions given below. (A complete sketch of such a simulation is given after the Energy Function section.)

Network Architecture:

N = Number of neurons.

Connection strengths:

The connection strength matrix is W = (wij):

wij = connection strength between neuron i and neuron j.

Connection strengths are symmetric: wij = wji.

wij = -1 if i is not equal to j

wii = 0 (no self-connections)

W is an N x N symmetric matrix.

Each neuron has an "activation" bounded by maximum and minimum values:

m <= ai <= M,   i = 1, ..., N

All of the activations can be expressed as a single N-dimensional vector: a.

Each neuron has an "external input":

m <= ei <= M,   i = 1, ..., N
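In NumPy, W and the activation and input vectors can be set up in a few lines; the sizes and values here are illustrative choices, not fixed by the assignment:

    import numpy as np

    N, m, M = 10, 0.0, 1.0            # network size and activation bounds (illustrative)
    rng = np.random.default_rng(0)    # seeded only for reproducibility

    W = -np.ones((N, N)) + np.eye(N)  # wij = -1 for i != j, wii = 0
    assert np.allclose(W, W.T)        # symmetric, as required

    a = rng.uniform(m, M, size=N)     # random initial activations (step 1)
    e = np.full(N, 1.5)               # uniform external input; per question 4, k - 1 < e < k targets k = 2 winners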

 

Activation Function

Updating of the neuron activations is done synchronously:

ai(t+1) = ai(t) + step * (M - ai(t)) * (ai(t) - m) * neti(t)

where neti(t) = [W a(t) + e]i is the net input to neuron i (equation (3) in question 4).
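In code, synchronous updating means the whole net-input vector W a(t) + e is computed from the time-t activations before any component of a is overwritten. A vectorized sketch, assuming the NumPy setup above:

    def synchronous_update(a, W, e, m, M, step):
        # Compute every neuron's net input from the current activations first...
        net = W @ a + e
        # ...then update all activations together, producing a(t+1).
        return a + step * (M - a) * (a - m) * net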

 

Energy Function:

At each time t, the energy of the network is defined to be:

E(t) = - (1/2) a^T W a - a^T e
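Putting the pieces together, here is a minimal end-to-end sketch of the simulation (steps 1-4 from the top of the project); step, tol, and the other values are illustrative:

    import numpy as np

    N, m, M = 10, 0.0, 1.0            # network size and activation bounds (illustrative)
    step, tol = 0.05, 1e-9            # step size and convergence tolerance (illustrative)
    rng = np.random.default_rng(0)

    W = -np.ones((N, N)) + np.eye(N)  # wij = -1 for i != j, wii = 0
    e = np.full(N, 1.5)               # uniform input targeting k = 2 winners (question 4)

    a = rng.uniform(m, M, size=N)     # step 1: random initial activations
    energies = []
    for t in range(100_000):          # safety cap on iterations
        energies.append(-0.5 * a @ W @ a - a @ e)    # step 3: E(t)
        net = W @ a + e
        a_next = a + step * (M - a) * (a - m) * net  # step 2: synchronous update
        if np.abs(a_next - a).max() < tol:           # step 4: halt at convergence
            a = a_next
            break
        a = a_next

    print("winners:", np.flatnonzero(a > (m + M) / 2))
    print("energy non-increasing:", all(x >= y for x, y in zip(energies, energies[1:])))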

 

Answers:

 

Source code

Graph

1. Does the Energy decrease at each iteration?

 

Yes. The energy drops at every iteration, falling steeply at first and then more gradually as the network approaches convergence. The smaller the step size, the smaller the energy decrease per iteration.

 

 

2. Does the network converge to a state that corresponds to “k-winners”?

 

Yes. When the initial activations differ from neuron to neuron (some more excited, some more inhibited), the network converges to a "k-winners" state: k neurons end up at the maximum activation M and the remaining N - k at the minimum m.

3. Try a case where the initial activations are all equal (non-zero).

(a)    Is there convergence to winners?

No, there is no convergence to winners. Since every neuron has the same connections and the same external input, equal activations produce equal net inputs, so the activations remain equal at every iteration and no neuron can pull ahead (see the check after part (b)).

 

(b)   Does the Energy decrease at each iteration?

Yes, the energy does decrease at each iteration.
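A few lines of code confirm this symmetry argument; the parameter values below are illustrative, matching the earlier sketches:

    import numpy as np

    N, m, M, step = 10, 0.0, 1.0, 0.05
    W = -np.ones((N, N)) + np.eye(N)
    e = np.full(N, 1.5)

    a = np.full(N, 0.3)              # all activations equal and non-zero
    for _ in range(5000):
        net = W @ a + e              # identical for every neuron, by symmetry
        a = a + step * (M - a) * (a - m) * net
        assert np.allclose(a, a[0])  # activations stay identical: no winners emerge

    # The network settles at an interior equilibrium where net = 0, not at a corner:
    print("settled at", a[0], "= e/(N-1) =", e[0] / (N - 1))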

 

4. Show (by mathematical proof) that the k-winner states are stable equilibria when the external input satisfies: k - 1 < e < k.

 

Reference: This answer is adapted from previous student work by Suzan R.

 

A state of the network is an equilibrium when the neuron activations no longer change from one iteration to the next; it is stable when nearby states move toward it. Neuron activations are updated as follows:

 

(1)   ai(t+1) = ai(t) + step * (M - ai(t)) * (ai(t) - m) * neti(t)

where 'M' is the maximum activation, 'm' is the minimum activation, 'step' is a constant that determines how fast the system converges to a stable equilibrium, and 'neti' is the net input to neuron i.

 

The system is at an equilibrium when ai(t+1) = ai(t) for all neurons. In our system this occurs when every neuron has an activation of either 'M' or 'm': in that case (M - ai(t)) * (ai(t) - m) = 0, and so ai(t+1) = ai(t) for all neurons. (Equilibria with neti(t) = 0 are also possible, as in question 3, but they are not corner states.) The 2^N combinations of 'M' and 'm' across all neurons are the "hypercube" corners of the system. A network that converges to a k-winner state converges to one of these hypercube corners.

 

As the activation vector approaches a hypercube corner, (M - ai(t)) * (ai(t) - m) approaches 0, so the neurons move toward 'm' or 'M' by increasingly smaller increments. For the activations to keep moving toward the corner, neti(t) must be positive for the neurons approaching 'M' (winners) and negative for the neurons approaching 'm' (losers):

 

(2a)    for k winners: neti(t) > 0

(2b)    for N-k losers: neti(t) < 0

The net input neti(t) is calculated as:

 

(3)     neti(t) = [Wa(t) + e(t)]i

where W is the N x N matrix of connection strengths (wij), populated with 0 where i = j and -1 everywhere else, and e is the vector of external inputs. In the hypercube corner where k neurons have activation 'M' (1 in our case) and N-k neurons have activation 'm' (0 in our case), the following is true:

 

(4a)    for k winners: neti(t) = - (k - 1) + ei

(4b)    for N-k losers: neti(t) = - k + ei

At such a corner, every factor (M - ai(t)) * (ai(t) - m) equals zero, so the corner is an equilibrium for any external input. For the corner to be a stable equilibrium, states near it must keep moving toward it, which by (2) requires neti(t) > 0 for the k winners and neti(t) < 0 for the N-k losers. Since neti(t) depends continuously on the activations, these sign conditions hold in a neighborhood of the corner exactly when they hold at the corner values given in (4). Combining (2) and (4), and writing e for the uniform external input:

(5a)    for k winners: 0 < - (k - 1) + e

(5b)    for N-k losers: 0 > - k + e

Solving for e yields:

 

(6)     k - 1 < e < k
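As a quick numerical check of (6) (the values N = 5, k = 2, e = 1.5 are example values, not part of the assignment): at the two-winner corner each winner receives neti = -(2 - 1) + 1.5 = 0.5 > 0 and each loser receives neti = -2 + 1.5 = -0.5 < 0, so both conditions in (5) hold and the corner is stable. Raising the input to e = 2.5 makes the losers' net input positive, destabilizing the two-winner corner in favor of a three-winner corner (for which 3 - 1 < 2.5 < 3).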