Convolutional Neural Network


  • Convolutional Neural Network

    By Amit Kushwaha

    Presentation Covers:

    Important Aspects of Deep Learning

    The basic learning entity in the network - the Perceptron

  • Who am I

    Machine Learning Engineer @

  • Artificial Neural Net (ANN)

    A biological inspiration

  • Perceptron

    The elementary entity of a neural network - the most basic form of neural network, and one that can also learn.

  • Perceptron

    A basic linear discriminant classifier

  • How a Perceptron Works

    [Diagram: inputs 4, 2 and -2 enter with weights 0.3, -0.5 and 0.15; the weighted sum feeds an activation function that produces the output.]

    weighted sum = (4 * 0.3) + (2 * (-0.5)) + ((-2) * 0.15) = -0.1
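    A minimal Python sketch of this forward pass (the slide does not name the activation function, so a unit step is assumed here):

        # Forward pass for the slide's numbers.
        inputs  = [4, 2, -2]
        weights = [0.3, -0.5, 0.15]

        # Weighted sum: 4*0.3 + 2*(-0.5) + (-2)*0.15 = -0.1
        s = sum(x * w for x, w in zip(inputs, weights))

        # Step activation (an assumption): fire 1 if the sum is positive, else 0.
        output = 1 if s > 0 else 0

        print(round(s, 2), output)  # -0.1 0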

  • Activation Functions
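    The slide's plots are not recoverable from this transcript; below is a sketch of the activations most commonly covered in such an overview (a representative set, not necessarily the slide's own list):

        import math

        def step(x):     # binary threshold, as in the classic perceptron
            return 1 if x > 0 else 0

        def sigmoid(x):  # squashes any input into (0, 1)
            return 1 / (1 + math.exp(-x))

        def tanh(x):     # squashes into (-1, 1), zero-centered
            return math.tanh(x)

        def relu(x):     # keeps positives, zeroes out negatives
            return max(0.0, x)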

  • Training a Perceptron

    Let's train a Perceptron to mimic this pattern:

    Training Set          T(out)
    x  y  z
    0  0  1                 0
    1  1  1                 1
    1  0  1                 1
    0  1  1                 0

    (Notice that the target is simply a copy of x.)

  • Training a Perceptron - Perceptron Model

    [Diagram: inputs x, y, z plus a constant bias input 1, with weights W1, W2, W3, W4, feed an activation function that produces the output.]

  • Training a Perceptron

    Training Rules:

    Wi = Wi + ΔWi

    ΔWi = -η * (P(out) - T(out)) * xi

    where η is the learning rate for the perceptron, P(out) is the perceptron's output and T(out) is the target output.

    [Diagram: the same perceptron model - inputs x, y, z and bias 1, with weights W1..W4, into an activation function.]

  • Training a Perceptron

    Let's learn wi such that it minimizes the squared error over the training examples d:

    E(w) = 1/2 * Σd (T(out)d - P(out)d)²

    On simplifying (differentiating E with respect to each wi), we get:

    ΔWi = η * Σd (T(out)d - P(out)d) * xi,d

  • Training a Perceptron

    Assign random values to the weight vector ([W1, W2, W3, W4]); the walkthrough below starts from all zeros.

    The training set is presented repeatedly, once per epoch (epoch 1, epoch 2, ...):

    x  y  z  bias    T(out)
    0  0  1  1         0
    1  1  1  1         1
    1  0  1  1         1
    0  1  1  1         0

    Update rule: Wi = Wi + ΔWi, where ΔWi = -η * [P(out) - T(out)] * xi and η is the learning rate. (A runnable sketch of this loop follows the walkthrough below.)

  • Training a Perceptron - walkthrough (η = 1, step activation)

    Epoch 1:

    x  y  z  bias    Weights (w1..w4)    P(out)    T(out)    ΔW (Δw1..Δw4)
    0  0  1  1       0  0  0  0            0         0        0  0  0  0
    1  1  1  1       0  0  0  0            0         1        1  1  1  1
    1  0  1  1       1  1  1  1            1         1        0  0  0  0
    0  1  1  1       1  1  1  1            1         0        0 -1 -1 -1

    Epoch 2 (the weights are now [1, 0, 0, 0]):

    x  y  z  bias    Weights (w1..w4)    P(out)    T(out)    ΔW (Δw1..Δw4)
    0  0  1  1       1  0  0  0            0         0        0  0  0  0
    1  1  1  1       1  0  0  0            1         1        0  0  0  0
    1  0  1  1       1  0  0  0            1         1        0  0  0  0
    0  1  1  1       1  0  0  0            0         0        0  0  0  0

    In epoch 3, the loss is 0 on every example. So, we can assume that the Perceptron has learnt the pattern: with final weights [1, 0, 0, 0], the output is simply a copy of x.
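    A minimal NumPy sketch of this training loop, assuming η = 1 and a step activation (1 if the weighted sum is positive, else 0):

        import numpy as np

        # Training set from the slides: columns are x, y, z, bias.
        X = np.array([[0, 0, 1, 1],
                      [1, 1, 1, 1],
                      [1, 0, 1, 1],
                      [0, 1, 1, 1]])
        T = np.array([0, 1, 1, 0])   # target output (a copy of x)

        eta = 1.0                    # learning rate
        w = np.zeros(4)              # weight vector [W1, W2, W3, W4]

        for epoch in range(1, 4):
            errors = 0
            for x_i, t in zip(X, T):
                p = 1 if w @ x_i > 0 else 0     # step activation
                w = w + -eta * (p - t) * x_i    # ΔWi = -η (P(out) - T(out)) xi
                errors += int(p != t)
            print(f"epoch {epoch}: weights={w}, errors={errors}")

        # The errors drop to 0 and the weights settle at [1, 0, 0, 0].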

  • We trained our Perceptron Model

    ... on a linearly separable distribution.

  • Limited Functionality of a Linear Hyperplane

    A perceptron cannot generalize on data that is not linearly separable - the classic example is XOR, where no single line separates the two classes.
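    A quick sketch of that failure, applying the same update rule as before to XOR data (an illustrative case, not from the slide) - the error count never reaches zero, because no weight vector defines a separating line:

        import numpy as np

        # XOR: the classic pattern no single linear hyperplane can separate.
        X = np.array([[0, 0, 1],    # columns: x, y, bias
                      [0, 1, 1],
                      [1, 0, 1],
                      [1, 1, 1]])
        T = np.array([0, 1, 1, 0])  # XOR of x and y

        w = np.zeros(3)
        for epoch in range(100):
            errors = 0
            for x_i, t in zip(X, T):
                p = 1 if w @ x_i > 0 else 0
                w = w + -1.0 * (p - t) * x_i   # same perceptron rule
                errors += int(p != t)

        print(errors)  # still nonzero after 100 epochs: the weights keep cycling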

  • Neural Networks

    It is as if perceptrons are wired together in a defined fashion.

  • Two-Layered Neural Network

    These arrays of perceptrons form the neural nets.

    [Diagram: Input 1 and Input 2 feed the hidden units h1 and h2; their activations feed the output.]
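    A minimal sketch of that diagram's forward pass, with sigmoid activations and illustrative weights (neither is specified on the slide):

        import numpy as np

        def sigmoid(z):
            return 1 / (1 + np.exp(-z))

        # Illustrative weights and biases (not taken from the slide).
        W_hidden = np.array([[0.5, -0.4],    # weights into h1
                             [0.3,  0.8]])   # weights into h2
        b_hidden = np.array([0.1, -0.2])
        W_out = np.array([1.2, -0.7])        # weights from (h1, h2) to output
        b_out = 0.05

        def forward(inputs):
            h = sigmoid(W_hidden @ inputs + b_hidden)  # hidden layer: h1, h2
            return sigmoid(W_out @ h + b_out)          # output neuron

        print(forward(np.array([1.0, 0.0])))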

  • Convolutional Neural Network

    CNNs are Neural Networks that are sparsely (not fully) connected on the input layer.

    This makes each neuron in the following layer responsible for only part of the input, which is valuable for recognizing parts of images, for example.

  • Convolutional Neural Network

    Convolutional Neural Networks are very similar to ordinary Neural Networks.

    The architecture of a CNN is designed to take advantage of the 2D structure of an input image.
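    A minimal sketch of that local connectivity, assuming a grayscale image and a single hypothetical 3x3 filter - each output value depends on one small patch of the input only:

        import numpy as np

        def conv2d(image, kernel):
            # Valid 2D convolution: slide the kernel over the image and
            # take a weighted sum of each patch it covers.
            kh, kw = kernel.shape
            oh = image.shape[0] - kh + 1
            ow = image.shape[1] - kw + 1
            out = np.zeros((oh, ow))
            for i in range(oh):
                for j in range(ow):
                    out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
            return out

        # Illustrative example: a vertical-edge filter on a tiny 5x5 image.
        image = np.array([[0, 0, 0, 9, 9],
                          [0, 0, 0, 9, 9],
                          [0, 0, 0, 9, 9],
                          [0, 0, 0, 9, 9],
                          [0, 0, 0, 9, 9]], dtype=float)
        edge_filter = np.array([[1, 0, -1],
                                [1, 0, -1],
                                [1, 0, -1]], dtype=float)
        print(conv2d(image, edge_filter))  # large magnitudes mark the edge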

  • What is an Image?

    An image is a matrix of pixel values - believe me, an image is nothing more. A 28x28 grayscale picture, for example, is just a 28x28 matrix of intensities.
