Neural Network Toolbox
For Use with MATLAB
User's Guide, Version 4
Howard Demuth
How to Contact The MathWorks:
Web: www.mathworks.com
Newsgroup: comp.soft-sys.matlab
Technical support: firstname.lastname@example.org
Product enhancement suggestions: email@example.com
Bug reports: firstname.lastname@example.org
Documentation error reports: email@example.com
Order status, license renewals: firstname.lastname@example.org
Sales, pricing, and general information: email@example.com
Phone: 508-647-7000
Mail: The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098
For contact information about worldwide offices, see the MathWorks Web site.
Neural Network Toolbox Users Guide COPYRIGHT 1992 - 2004 by The MathWorks, Inc. The software described in this document is furnished under a license agreement. The software may be used or copied only under the terms of the license agreement. No part of this manual may be photocopied or reproduced in any form without prior written consent from The MathWorks, Inc.
FEDERAL ACQUISITION: This provision applies to all acquisitions of the Program and Documentation by, for, or through the federal government of the United States. By accepting delivery of the Program or Documentation, the government hereby agrees that this software or documentation qualifies as commercial computer software or commercial computer software documentation as such terms are used or defined in FAR 12.212, DFARS Part 227.72, and DFARS 252.227-7014. Accordingly, the terms and conditions of this Agreement and only those rights specified in this Agreement, shall pertain to and govern the use, modification, reproduction, release, performance, display, and disclosure of the Program and Documentation by the federal government (or other entity acquiring for or through the federal government) and shall supersede any conflicting contractual terms or conditions. If this License fails to meet the government's needs or is inconsistent in any respect with federal procurement law, the government agrees to return the Program and Documentation, unused, to The MathWorks, Inc.
MATLAB, Simulink, Stateflow, Handle Graphics, and Real-Time Workshop are registered trademarks, and TargetBox is a trademark of The MathWorks, Inc.
Other product or brand names are trademarks or registered trademarks of their respective holders.
Printing History:
June 1992       First printing
April 1993      Second printing
January 1997    Third printing
July 1997       Fourth printing
January 1998    Fifth printing     Revised for Version 3 (Release 11)
September 2000  Sixth printing     Revised for Version 4 (Release 12)
June 2001       Seventh printing   Minor revisions (Release 12.1)
July 2002       Online only        Minor revisions (Release 13)
January 2003    Online only        Minor revisions (Release 13 SP1)
June 2004       Online only        Revised for Release 14
Neural Networks (p. vi) Defines and introduces Neural Networks
Basic Chapters (p. viii) Identifies the chapters that provide the basic, general knowledge needed to use the rest of the book
Mathematical Notation for Equations and Figures (p. ix)
Defines the mathematical notation used throughout the book
Mathematics and Code Equivalents (p. xi) Provides simple rules for transforming equations to code and vice versa
Neural Network Design Book (p. xii) Gives ordering information for a useful supplemental book
Acknowledgments (p. xiii) Identifies and thanks people who helped make this book possible
Neural NetworksNeural networks are composed of simple elements operating in parallel. These elements are inspired by biological nervous systems. As in nature, the network function is determined largely by the connections between elements. We can train a neural network to perform a particular function by adjusting the values of the connections (weights) between elements.
Commonly neural networks are adjusted, or trained, so that a particular input leads to a specific target output. Such a situation is shown below. There, the network is adjusted, based on a comparison of the output and the target, until the network output matches the target. Typically many such input/target pairs are used, in this supervised learning, to train a network.
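The adjust-compare-repeat loop just described can be sketched in a few lines. The Python below is an illustrative stand-in only (a single linear neuron trained with a simple error-correction rule, with made-up input and target values), not toolbox code:

```python
# Illustrative sketch (not toolbox code): train one linear neuron so that
# a particular input leads to a specific target, by repeatedly comparing
# the network output with the target and nudging the weight and bias.

def train_single_neuron(p, t, lr=0.1, epochs=100):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        a = w * p + b          # network output for input p
        e = t - a              # compare output with target
        w += lr * e * p        # adjust the weight ...
        b += lr * e            # ... and bias to shrink the error
    return w, b

w, b = train_single_neuron(p=2.0, t=5.0)
print(round(w * 2.0 + b, 3))   # output now matches the target 5.0
```

With this learning rate the output error is halved on every pass, so the network output converges to the target after a modest number of iterations.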
Batch training of a network proceeds by making weight and bias changes based on an entire set (batch) of input vectors. Incremental training changes the weights and biases of a network as needed after the presentation of each individual input vector. Incremental training is sometimes referred to as "online" or "adaptive" training.
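The distinction can be made concrete. In this hypothetical Python sketch (a single linear neuron with an LMS-style rule; the data and learning rate are invented for illustration), batch training accumulates the weight changes over all input vectors before applying them, while incremental training applies a change immediately after each vector:

```python
# Sketch of batch vs. incremental training for one linear neuron a = w*p + b.
# Data and learning rate are assumed values for illustration only.
inputs  = [1.0, 2.0, 3.0]
targets = [2.0, 4.0, 6.0]
lr = 0.05

def batch_epoch(w, b):
    # Accumulate changes over the entire batch, then apply them once.
    dw = db = 0.0
    for p, t in zip(inputs, targets):
        e = t - (w * p + b)
        dw += lr * e * p
        db += lr * e
    return w + dw, b + db

def incremental_epoch(w, b):
    # Apply a change after each individual input vector ("online" training).
    for p, t in zip(inputs, targets):
        e = t - (w * p + b)
        w += lr * e * p
        b += lr * e
    return w, b

wb_batch = (0.0, 0.0)
wb_incr  = (0.0, 0.0)
for _ in range(200):
    wb_batch = batch_epoch(*wb_batch)
    wb_incr  = incremental_epoch(*wb_incr)
print(wb_batch, wb_incr)  # both approach w = 2, b = 0
```

Both schemes reach essentially the same solution here; they differ in when the updates are applied, which matters for adaptive applications where data arrive one vector at a time.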
Neural networks have been trained to perform complex functions in various fields of application including pattern recognition, identification, classification, speech, vision and control systems. A list of applications is given in Chapter 1.
Today neural networks can be trained to solve problems that are difficult for conventional computers or human beings. Throughout the toolbox emphasis is placed on neural network paradigms that build up to or are themselves used in engineering, financial and other practical applications.
[Figure: an input is applied to a neural network (including connections, called weights, between neurons); the network output is compared with the target, and the comparison is used to adjust the weights.]
The supervised training methods are commonly used, but other networks can be obtained from unsupervised training techniques or from direct design methods. Unsupervised networks can be used, for instance, to identify groups of data. Certain kinds of linear networks and Hopfield networks are designed directly. In summary, there are a variety of kinds of design and learning techniques that enrich the choices that a user can make.
The field of neural networks has a history of some five decades but has found solid application only in the past fifteen years, and the field is still developing rapidly. Thus, it is distinctly different from the fields of control systems or optimization, where the terminology, basic mathematics, and design procedures have been firmly established and applied for many years. We do not view the Neural Network Toolbox as simply a summary of established procedures that are known to work well. Rather, we hope that it will be a useful tool for industry, education, and research, a tool that will help users find what works and what doesn't, and a tool that will help develop and extend the field of neural networks. Because the field and the material are so new, this toolbox will explain the procedures, tell how to apply them, and illustrate their successes and failures with examples. We believe that an understanding of the paradigms and their application is essential to the satisfactory and successful use of this toolbox, and that without such understanding user complaints and inquiries would bury us. So please be patient if we include a lot of explanatory material. We hope that such material will be helpful to you.
Basic Chapters

The Neural Network Toolbox is written so that if you read Chapter 2, Chapter 3, and Chapter 4, you can proceed to a later chapter, read it, and use its functions without difficulty. To make this possible, Chapter 2 presents the fundamentals of the neuron model and the architectures of neural networks. It also discusses the notation used in the architectures. All of this is basic material. It is to your advantage to understand this Chapter 2 material thoroughly.
The neuron model and the architecture of a neural network describe how a network transforms its input into an output. This transformation can be viewed as a computation. The model and the architecture each place limitations on what a particular neural network can compute. The way a network computes its output must be understood before training methods for the network can be explained.
Mathematical Notation for Equations and Figures
Basic Concepts

Scalars: small italic letters, e.g., a, b, c
Vectors: small bold nonitalic letters, e.g., a, b, c
Matrices: capital BOLD nonitalic letters, e.g., A, B, C

Language

Vector means a column of numbers.

Weight Matrices

Scalar element wi,j(t): i = row, j = column, t = time or iteration
Matrix W(t)
Column vector wj(t)
Row vector wi(t): vector made of the ith row of weight matrix W

Bias Elements and Vectors

Scalar element bi(t)
Bias vector b(t)

Layer Notation

A single superscript is used to identify elements of a layer. For instance, the net input of layer 3 would be shown as n3.

Superscripts are used to identify the destination (k) connection and the source (l) connection of layer weight matrices and input weight matrices. For instance, the layer weight matrix from layer 2 to layer 4 would be shown as LW4,2.

Input Weight Matrix: IWk,l
Layer Weight Matrix: LWk,l
Figure and Equation Examples

The following figure, taken from Chapter 12, illustrates the notation used in such advanced figures.
[Figure: a three-layer network. Input p1(k) is 2 x 1 (with a tapped delay) and input p2(k) is 5 x 1; layer 1 has weight matrix IW1,1 (4 x 2), layer 2 has IW2,1 (3 x (2*2)) and IW2,2 (3 x (1*5)), and layer 3 has IW3,1 (1 x 4), LW3,2 (1 x 3), and LW3,3 (1 x (1*1)). The layer outputs a1(k), a2(k), and a3(k) are 4 x 1, 3 x 1, and 1 x 1; the network outputs are y1(k) (3 x 1) and y2(k) (1 x 1).]

a1(k) = tansig(IW1,1 p1(k) + b1)
a2(k) = logsig(IW2,1 [p1(k); p1(k-1)] + IW2,2 p2(k-1))
a3(k) = purelin(LW3,3 a3(k-1) + IW3,1 a1(k) + b3 + LW3,2 a2(k))
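The first of these equations can be evaluated directly once the notation is unpacked: IW1,1 is a 4 x 2 input weight matrix, p1(k) a 2 x 1 input, and b1 a 4 x 1 bias. The Python sketch below (the numeric weight and bias values are made up for illustration; only the dimensions come from the figure) uses the fact that the tansig transfer function is mathematically equivalent to the hyperbolic tangent:

```python
import math

# Evaluate a1 = tansig(IW11 * p1 + b1) for a 4-neuron layer with a
# 2-element input, matching the 4 x 2 and 4 x 1 dimensions in the figure.
# The numeric values themselves are assumed, not taken from the figure.
IW11 = [[0.5, -0.2],
        [0.1,  0.4],
        [-0.3, 0.8],
        [0.7,  0.6]]
p1 = [1.0, -1.0]
b1 = [0.1, 0.0, -0.1, 0.2]

def tansig(n):
    # tansig(n) = 2/(1 + exp(-2n)) - 1, which equals tanh(n)
    return math.tanh(n)

# One row of IW11 per neuron: net input is the row-vector dot product
# with p1 plus that neuron's bias; the transfer function is applied
# element by element.
a1 = [tansig(sum(w * p for w, p in zip(row, p1)) + b)
      for row, b in zip(IW11, b1)]
print([round(a, 4) for a in a1])
```

The result is a 4 x 1 output vector a1(k), each element squashed into (-1, 1) by tansig, exactly as the layer notation above prescribes.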