Neural Network Toolbox User's Guide


  • Neural Network Toolbox

    For Use with MATLAB

    Howard Demuth
    Mark Beale

    User's Guide, Version 4

  • How to Contact The MathWorks:

    www.mathworks.com         Web
    comp.soft-sys.matlab      Newsgroup
    support@mathworks.com     Technical support
    suggest@mathworks.com     Product enhancement suggestions
    bugs@mathworks.com        Bug reports
    doc@mathworks.com         Documentation error reports
    service@mathworks.com     Order status, license renewals, passcodes
    info@mathworks.com        Sales, pricing, and general information
    508-647-7000              Phone

    508-647-7001              Fax

    The MathWorks, Inc.       Mail
    3 Apple Hill Drive
    Natick, MA 01760-2098

    For contact information about worldwide offices, see the MathWorks Web site.

    Neural Network Toolbox User's Guide. COPYRIGHT 1992-2004 by The MathWorks, Inc. The software described in this document is furnished under a license agreement. The software may be used or copied only under the terms of the license agreement. No part of this manual may be photocopied or reproduced in any form without prior written consent from The MathWorks, Inc.

    FEDERAL ACQUISITION: This provision applies to all acquisitions of the Program and Documentation by, for, or through the federal government of the United States. By accepting delivery of the Program or Documentation, the government hereby agrees that this software or documentation qualifies as commercial computer software or commercial computer software documentation as such terms are used or defined in FAR 12.212, DFARS Part 227.72, and DFARS 252.227-7014. Accordingly, the terms and conditions of this Agreement and only those rights specified in this Agreement, shall pertain to and govern the use, modification, reproduction, release, performance, display, and disclosure of the Program and Documentation by the federal government (or other entity acquiring for or through the federal government) and shall supersede any conflicting contractual terms or conditions. If this License fails to meet the government's needs or is inconsistent in any respect with federal procurement law, the government agrees to return the Program and Documentation, unused, to The MathWorks, Inc.

    MATLAB, Simulink, Stateflow, Handle Graphics, and Real-Time Workshop are registered trademarks, and TargetBox is a trademark of The MathWorks, Inc.

    Other product or brand names are trademarks or registered trademarks of their respective holders.

  • Printing History:

    June 1992        First printing
    April 1993       Second printing
    January 1997     Third printing
    July 1997        Fourth printing
    January 1998     Fifth printing      Revised for Version 3 (Release 11)
    September 2000   Sixth printing      Revised for Version 4 (Release 12)
    June 2001        Seventh printing    Minor revisions (Release 12.1)
    July 2002        Online only         Minor revisions (Release 13)
    January 2003     Online only         Minor revisions (Release 13 SP1)
    June 2004        Online only         Revised for Release 14

  • Preface

    Neural Networks (p. vi) Defines and introduces neural networks

    Basic Chapters (p. viii) Identifies the chapters in the book with the basic, general knowledge needed to use the rest of the book

    Mathematical Notation for Equations and Figures (p. ix) Defines the mathematical notation used throughout the book

    Mathematics and Code Equivalents (p. xi) Provides simple rules for transforming equations to code and vice versa

    Neural Network Design Book (p. xii) Gives ordering information for a useful supplemental book

    Acknowledgments (p. xiii) Identifies and thanks the people who helped make this book possible


    Neural Networks

    Neural networks are composed of simple elements operating in parallel. These elements are inspired by biological nervous systems. As in nature, the network function is determined largely by the connections between elements. We can train a neural network to perform a particular function by adjusting the values of the connections (weights) between elements.

    Commonly neural networks are adjusted, or trained, so that a particular input leads to a specific target output. Such a situation is shown below. There, the network is adjusted, based on a comparison of the output and the target, until the network output matches the target. Typically many such input/target pairs are used, in this supervised learning, to train a network.

    Batch training of a network proceeds by making weight and bias changes based on an entire set (batch) of input vectors. Incremental training changes the weights and biases of a network as needed after the presentation of each individual input vector. Incremental training is sometimes referred to as online or adaptive training.
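    The two training modes are exposed through separate toolbox functions. The following is a minimal sketch, assuming the Version 4 functions newlin, train, and adapt; the particular network and data are illustrative only:

    % Batch vs. incremental training (illustrative data; assumes the
    % Version 4 functions newlin, train, and adapt).
    P = [0 1 2 3; 1 2 1 0];       % four 2-element input vectors (one per column)
    T = [0 1 2 1];                % the corresponding targets

    net = newlin(minmax(P), 1);   % a single linear neuron with two inputs

    % Batch training: weights and biases change once per pass
    % over the entire set of input/target pairs.
    net = train(net, P, T);

    % Incremental (online/adaptive) training: inputs are presented one
    % at a time as a sequence; weights change after each presentation.
    Pseq = num2cell(P, 1);
    Tseq = num2cell(T, 1);
    net = adapt(net, Pseq, Tseq);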

    Neural networks have been trained to perform complex functions in various fields of application including pattern recognition, identification, classification, speech, vision and control systems. A list of applications is given in Chapter 1.

    Today neural networks can be trained to solve problems that are difficult for conventional computers or human beings. Throughout the toolbox emphasis is placed on neural network paradigms that build up to or are themselves used in engineering, financial and other practical applications.

    Figure: a neural network, including connections (called weights) between neurons, transforms an input into an output; the output is compared with the target, and the weights are adjusted until the output matches the target.


    The supervised training methods are commonly used, but other networks can be obtained from unsupervised training techniques or from direct design methods. Unsupervised networks can be used, for instance, to identify groups of data. Certain kinds of linear networks and Hopfield networks are designed directly. In summary, there are a variety of kinds of design and learning techniques that enrich the choices that a user can make.

    The field of neural networks has a history of some five decades but has found solid application only in the past fifteen years, and the field is still developing rapidly. Thus, it is distinctly different from the fields of control systems or optimization, where the terminology, basic mathematics, and design procedures have been firmly established and applied for many years. We do not view the Neural Network Toolbox as simply a summary of established procedures that are known to work well. Rather, we hope that it will be a useful tool for industry, education, and research, a tool that will help users find what works and what doesn't, and a tool that will help develop and extend the field of neural networks. Because the field and the material are so new, this toolbox will explain the procedures, tell how to apply them, and illustrate their successes and failures with examples. We believe that an understanding of the paradigms and their application is essential to the satisfactory and successful use of this toolbox, and that without such understanding user complaints and inquiries would bury us. So please be patient if we include a lot of explanatory material. We hope that such material will be helpful to you.


    Basic Chapters

    The Neural Network Toolbox is written so that if you read Chapter 2, Chapter 3, and Chapter 4, you can proceed to a later chapter, read it, and use its functions without difficulty. To make this possible, Chapter 2 presents the fundamentals of the neuron model and the architectures of neural networks. It also discusses the notation used for these architectures. All of this is basic material, and it is to your advantage to understand it thoroughly.

    The neuron model and the architecture of a neural network describe how a network transforms its input into an output. This transformation can be viewed as a computation. The model and the architecture each place limitations on what a particular neural network can compute. The way a network computes its output must be understood before training methods for the network can be explained.


    Mathematical Notation for Equations and Figures

    Basic Concepts

    Scalars - small italic letters ..... a, b, c
    Vectors - small bold non-italic letters ..... a, b, c
    Matrices - capital BOLD non-italic letters ..... A, B, C

    Language

    Vector means a column of numbers.

    Weight Matrices

    Scalar element ..... wi,j(t) - i = row, j = column, t = time or iteration
    Matrix ..... W(t)
    Column vector ..... wj(t)
    Row vector ..... wi(t) - vector made of the ith row of weight matrix W

    Bias Vector

    Scalar element ..... bi(t)
    Vector ..... b(t)

    Layer Notation

    A single superscript is used to identify elements of a layer. For instance, the net input of layer 3 would be shown as n3.

    Superscripts k,l are used to identify the source (l) connection and the destination (k) connection of layer weight matrices and input weight matrices. For instance, the layer weight matrix from layer 2 to layer 4 would be shown as LW4,2.


    Input Weight Matrix ..... IWk,l
    Layer Weight Matrix ..... LWk,l

    Figure and Equation Examples

    The following figure, taken from Chapter 12, illustrates the notation used in such advanced figures.

    Figure: a network with two inputs, p1(k) (2 x 1) and p2(k) (5 x 1), fed through tapped delay lines (TDL); weight matrices IW1,1 (4 x 2), IW2,1 (3 x (2*2)), IW2,2 (3 x (1*5)), IW3,1 (1 x 4), LW3,2 (1 x 3), and LW3,3 (1 x (1*1)); bias vectors b1 (4 x 1) and b3 (1 x 1); and outputs y1(k) (3 x 1) and y2(k) (1 x 1). The layer outputs are given by:

    a1(k) = tansig (IW1,1 p1(k) + b1)

    a2(k) = logsig (IW2,1 [p1(k); p1(k-1)] + IW2,2 p2(k-1))

    a3(k) = purelin (LW3,3 a3(k-1) + IW3,1 a1(k) + b3 + LW3,2 a2(k))
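    In toolbox code, these notational objects correspond to cell-array properties of a network object. The following is a sketch assuming the standard Version 4 network-object fields net.IW, net.LW, and net.b; the particular network is illustrative only:

    % Illustrative mapping from notation to network-object properties
    % (assumes the Version 4 fields net.IW, net.LW, and net.b).
    net = newff([-1 1; -1 1], [4 3 1]);  % 2 inputs; layers of 4, 3, and 1 neurons

    IW11 = net.IW{1,1};   % input weight matrix IW1,1, size 4 x 2
    LW21 = net.LW{2,1};   % layer weight matrix LW2,1, size 3 x 4
    b1   = net.b{1};      % bias vector b1, size 4 x 1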


    Mathematics and Code Equivalents

    The transition from mathematics to code or vice versa can be made with the aid of a few rules. They are listed here for future reference.

    To change from mathematics notation to MATLAB notation, the user needs to:

    Change superscripts to cell array indices. For example, p1 becomes p{1}.

    Change subscripts to parentheses indices. For example, p2 becomes p(2), and element 2 of p1 becomes p{1}(2).

    Change parentheses indices to a second cell array index. For example, p1(k - 1) becomes p{1, k-1}.

    Change mathematics operators to MATLAB operators and toolbox functions. For example, ab becomes a*b.

    The following equations illustrate the notation used in figures.

    n = w1,1 p1 + w1,2 p2 + ... + w1,R pR + b

        | w1,1  w1,2  ...  w1,R |
    W = | w2,1  w2,2  ...  w2,R |
        | ...                   |
        | wS,1  wS,2  ...  wS,R |
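    The rules above can be seen in a small MATLAB sketch; the variable values are illustrative only:

    % Illustrative only: the four notation rules applied in code.
    p = cell(1, 2);
    p{1} = [1; 2; 3];        % the superscripted vector p1 -> cell index p{1}
    p{2} = [4; 5];           % p2 -> p{2}

    e2 = p{1}(2);            % subscript: element 2 of p1 -> p{1}(2)

    pk = cell(1, 3);         % time index: p1(k-1) -> a second cell index, p{1,k-1}
    pk{1,2} = p{1};          % e.g. p1 at time k = 2

    a = 2; b = 3;
    c = a*b;                 % the mathematical product ab -> a*b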


    Neural Network Design Book

    Professor Martin Hagan of Oklahoma State University and Neural Network Toolbox authors Howard Demuth and Mark Beale have written a textbook, Neural Network Design (ISBN 0-9717321-0-8). The book presents the theory of neural networks, discusses their design and application, and makes considerable use of MATLAB and the Neural Network Toolbox. Demonstration programs from the book are used in various chapters of this guide. (You can find all the book demonstration programs in the Neural Network Toolbox by typing nnd.)

    The book has an INSTRUCTOR'S MANUAL for adopters and TRANSPARENCY OVERHEADS for class use.

    This book can be obtained from the University of Colorado Bookstore at 1-303-492-3648 or at the online purchase web site, cubooks.colorado.edu.

    To obtain a copy of the INSTRUCTOR'S MANUAL, contact the University of Colorado Bookstore at 1-303-492-3648. Ask specifically for an instructor's manual if you are teaching a class and want one.

    You can go directly to the Neural Network Design page at

    http://ee.okstate.edu/mhagan/nnd.html

    Once there, you can download the TRANSPARENCY MASTERS with a click on "Transparency Masters (3.6 MB)."

    You can get the Transparency Masters in PowerPoint or PDF format. You can obtain sample book chapters in PDF format as well.


    Acknowledgments

    The authors would like to thank:

    Martin Hagan of Oklahoma State University for providing the original Levenberg-Marquardt algorithm in the Neural Network Toolbox version 2.0 and various algorithms found in version 3.0, including the new reduced-memory version of the Levenberg-Marquardt algorithm, the conjugate gradient algorithm, RPROP, and the generalized regression method. Martin also wrote Chapter 5 and Chapter 6 of this guide. Chapter 5 describes new training algorithms, suggests algorithms for pre- and post-processing of data, and presents a comparison of the efficacy of various algorithms. Chapter 6, on control system applications, describes practical applications including neural network model predictive control, model reference adaptive control, and a feedback linearization controller.

    Joe Hicklin of The MathWorks for getting Howard into neural network research years ago at the University of Idaho, for encouraging Howard to write the toolbox, for providing crucial help in getting the first toolbox version 1.0 out the door, and for continuing to be a good friend.

    Jim Tung of The MathWorks for his long-term support for this project.

    Liz Callanan of The MathWorks for getting us off to such a good start with the Neural Network Toolbox version 1.0.

    Roy Lurie of The MathWorks for his vigilant reviews of the developing material in this version of the toolbox.

    Matthew Simoneau of The MathWorks for his help with demos, test suite routines, for getting user feedback, and for helping with other toolbox matters.

    Sean McCarthy for fielding the many questions from users about toolbox operation.

    Jane Carmody of The MathWorks for editing help and for always being at her phone to help with documentation problems.

    Donna Sullivan and Peg Theriault of The MathWorks for their editing and other help with the Mac document.

    Jane Price of The MathWorks for getting constructive user feedback on the toolbox documentation and its graphical user interface.


    Orlando De Jesús of Oklahoma State University for his excellent work in programming the neural network controllers described in Chapter 6.

    Bernice Hewitt for her wise New Zealand counsel, encouragement, and tea, and for the company of her cats Tiny and Mr. Britches.

    Joan Pilgram for her business help, general support, and good cheer.

    Teri Beale for running the show and having Valerie and Asia Danielle while Mark worked on this toolbox.

    Martin Hagan and Howard Demuth for permission to include various problems, demonstrations, and other material from Neural Network Design, Jan. 1996.


    Contents

    Preface

    Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi

    Basic Chapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii

    Mathematical Notation for Equations and Figures . . . . . . . . ix
    Basic Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
    Language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
    Weight Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
    Layer Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
    Figure and Equation Examples . . . . . . . . . . . . . . . . . . . . . . . x

    Mathematics and Code Equivalents . . . . . . . . . . . . . . . . . . . . . xi

    Neural Network Design Book . . . . . . . . . . . . . . . . . . . . . . . . . . . xii

    Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii

    1  Introduction

    Getting Started . . . . ....