Memristor in Learning Neural Networks

This presentation discusses how memristors, a type of electronic component, can be used in learning neural networks. Parts of the slides are taken from Elham Zamanidoost and Ligang Gao.

Presentation Transcript


Slide 1: Memristor in Learning Neural Networks. Shaodi Wang, Puneet Gupta (puneet@ee.ucla.edu). Parts of slides from Elham Zamanidoost and Ligang Gao.

Slide 2: Characteristics

Slide 3: Neural Network

Slide 4: Learning in Neural Networks
• Supervised learning: the training set contains both inputs and outputs.
  - Feed-forward networks
  - Recurrent networks
• Unsupervised learning: the training set contains inputs only.
  - Self-organizing networks
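To make the distinction concrete, here is a minimal illustration in NumPy (the arrays are invented placeholders, not data from the slides): a supervised training set pairs each input with a target output, while an unsupervised one contains inputs only.

```python
import numpy as np

# Supervised learning: each input vector is paired with a target output.
X_sup = np.array([[0.1, 0.9], [0.8, 0.2]])  # inputs
T_sup = np.array([[1.0], [0.0]])            # matching target outputs

# Unsupervised learning: inputs only; the network must discover
# structure (e.g. clusters) in the data by itself.
X_unsup = np.array([[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]])
```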

Slide 5: Multi-Layer Perceptron
• Hidden layer(s) perform classification of features.
• Sigmoid activation function.
• Back-propagation learning: apply gradient descent over the entire network.
• As before, the weight update is $\Delta w_{ij} = \eta\,\delta_j\,x_i$.
• For every output neuron: $\delta_k = (t_k - o_k)\,o_k\,(1 - o_k)$.
• For every hidden neuron: $\delta_j = o_j\,(1 - o_j)\sum_k \delta_k\,w_{jk}$.
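As a minimal sketch of these update rules (assuming a 2-4-1 network with bias terms and the XOR task, none of which is specified in the slides), the following NumPy code implements the output-neuron and hidden-neuron deltas above:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
T = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)  # input -> hidden
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)  # hidden -> output
eta = 0.5                                        # learning rate

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                     # hidden activations
    o = sigmoid(h @ W2 + b2)                     # output activations
    delta_o = (T - o) * o * (1 - o)              # output neurons: (t-o) o (1-o)
    delta_h = h * (1 - h) * (delta_o @ W2.T)     # hidden neurons: o(1-o) sum(delta w)
    W2 += eta * h.T @ delta_o; b2 += eta * delta_o.sum(axis=0)
    W1 += eta * X.T @ delta_h; b1 += eta * delta_h.sum(axis=0)

print(np.round(o.ravel(), 2))  # typically approaches [0, 1, 1, 0]
```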

Slide 6: Gradient Descent
• Define the cost function as the sum of errors over the entire training set, with the error for a single pattern given by $E = \frac{1}{2}\sum_k (t_k - o_k)^2$.
• Train the network so as to minimize the cost, which means minimizing the error. Hence we need a continuous activation function whose derivative we can calculate.
• Sigmoid activation function: $\sigma(x) = \frac{1}{1 + e^{-x}}$, with derivative $\sigma'(x) = \sigma(x)\,(1 - \sigma(x))$.
• Gradient descent learning: $\Delta w_{ij} = -\eta\,\frac{\partial E}{\partial w_{ij}}$, where $\eta$ is the learning rate.
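As a worked one-step example of this update rule (the weight, input, target, and learning rate are invented for illustration), consider a single neuron with a single weight:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One neuron with one weight: o = sigmoid(w * x), E = 0.5 * (t - o)**2.
w, x, t, eta = 0.3, 1.0, 1.0, 0.5       # illustrative values

o = sigmoid(w * x)
# Chain rule: dE/dw = -(t - o) * o * (1 - o) * x, using sigmoid' = o (1 - o).
grad = -(t - o) * o * (1 - o) * x
w += -eta * grad                        # delta w = -eta * dE/dw
print(round(w, 4))                      # the weight increases, reducing the error
```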

Slide 7: Recurrent Network
• Characteristics:
  - Nodes connect back to other nodes or to themselves.
  - Information flow is bidirectional.
• Fully recurrent network: there is a pair of directed connections between every pair of neurons in the network.
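A minimal sketch of synchronous state updates in a fully recurrent network (the size, tanh nonlinearity, and random weights are all assumptions made for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
W = rng.normal(0, 0.5, (n, n))   # a directed weight between every pair of neurons
s = rng.uniform(-1, 1, n)        # current activations

for _ in range(10):
    s = np.tanh(W @ s)           # each neuron sees every neuron's previous output
print(np.round(s, 3))
```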

Slide 8: Hopfield Network
• Characteristics:
  - A recurrent network in which all connections are symmetric.
  - Binary threshold activation function (content-addressable memory, CAM).
  - No unit has a connection with itself, and $W_{ij} = W_{ji}$ (symmetric).
  - Symmetric weights guarantee that the energy function decreases monotonically.
• Hebbian learning: increase the weight between two nodes if both have the same activity, otherwise decrease it.
• Synchronous update: the outputs of all nodes are calculated before being applied to the other nodes.
• Asynchronous update: randomly choose a node and calculate its output.
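A minimal sketch combining the Hebbian rule with asynchronous binary-threshold updates (the bipolar patterns and network size are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(2)

# Store bipolar patterns with the Hebbian rule: W = sum_p outer(x_p, x_p).
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
n = patterns.shape[1]
W = np.zeros((n, n))
for p in patterns:
    W += np.outer(p, p)          # same activity -> weight up, different -> down
np.fill_diagonal(W, 0.0)         # no self-connections; W is symmetric by construction

# Recall: start from a corrupted pattern, update nodes asynchronously.
s = patterns[0].copy()
s[0] = -s[0]                     # flip one bit
for i in rng.permutation(np.repeat(np.arange(n), 10)):
    s[i] = 1 if W[i] @ s >= 0 else -1   # binary threshold activation
print(s, (s == patterns[0]).all())      # converges back to the stored pattern
```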

Slide 9: Self-Organizing Map
• The purpose of a SOM is to map a multidimensional input space onto a topology-preserving map of neurons.
  - The topology is preserved so that neighboring neurons respond to "similar" input patterns.
  - The topological structure is often a 2- or 3-dimensional space.
• Each neuron is assigned a weight vector with the same dimensionality as the input space.
• Input patterns are compared to each weight vector, and the closest one wins (Euclidean distance).
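A minimal SOM training step under assumed details the slides leave open (a 10x10 grid, 3-D inputs, a Gaussian neighborhood with fixed radius, and uniform random training data):

```python
import numpy as np

rng = np.random.default_rng(3)

grid_h, grid_w, dim = 10, 10, 3
weights = rng.uniform(0, 1, (grid_h, grid_w, dim))  # one weight vector per neuron
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)

def train_step(x, eta=0.1, radius=2.0):
    # Winner: the neuron whose weight vector is closest to x (Euclidean distance).
    d = np.linalg.norm(weights - x, axis=-1)
    win = np.unravel_index(np.argmin(d), d.shape)
    # Neighborhood: Gaussian falloff with grid distance from the winner.
    grid_d2 = ((coords - np.array(win)) ** 2).sum(axis=-1)
    h = np.exp(-grid_d2 / (2 * radius ** 2))[..., None]
    weights[...] += eta * h * (x - weights)         # pull neighbors toward x

for _ in range(1000):
    train_step(rng.uniform(0, 1, dim))

# After training, neighboring neurons hold similar weight vectors.
print(np.round(np.linalg.norm(weights[:-1] - weights[1:], axis=-1).mean(), 3))
```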

Slide 10: Thanks
