
    SEMINAR REPORT

    ON

    Artificial Neural Network

    SUBMITTED BY:-

    SURAJ AGARWAL

    107170

    EC-3

    B-TECH 3RD YEAR /5TH SEM


Index

1) Introduction

2) Definition

3) Structure of the human brain

4) Neurons

5) Basics of the ANN model

6) Artificial neural network

6.1) How neural networks differ from a conventional computer

6.2) ANN vs. Von Neumann computer

7) Perceptron

8) Learning laws

8.1) Hebb's Rule

8.2) Hopfield Law

8.3) Delta Rule

8.4) Gradient Descent Rule

8.5) Kohonen's Learning Law

9) Basic structure of ANN

10) Network architectures

10.1) Single-layer feed-forward ANN

10.2) Multi-layer feed-forward ANN

10.3) Recurrent ANN

11) Learning of ANNs

11.1) Learning with a teacher

11.2) Learning without a teacher

11.3) Learning tasks


12) Control

13) Adaptation

14) Generalization

15) Probabilistic ANN

16) Advantages of ANN

17) Limitations of ANN

18) Applications of ANN

19) Conclusion

20) References


    1. Introduction

Since time immemorial, the one thing that has set human beings apart from the rest of the animal kingdom is the brain. The most intelligent device on earth, the human brain is the driving force that has made us an ever-progressing species, diving deeper into technology and development with each passing day.

Driven by his inquisitive nature, man tried to build machines that could do intelligent job processing and take decisions according to the instructions fed to them. The result was the machine that revolutionized the whole world: the computer (more technically speaking, the Von Neumann computer). Even though it could perform millions of calculations every second, display incredible graphics and 3-dimensional animations, and play audio and video, it made the same mistakes every time; practice could not make it perfect. So the quest for a more intelligent device continued. This research led to the birth of more powerful processors with high-tech equipment attached to them, supercomputers with the capability to handle more than one task at a time, and finally networks with resource-sharing facilities. But the problem of designing machines capable of intelligent self-learning still loomed large in front of mankind. Then the idea of imitating the human brain struck the designers, who started research on one of the technologies that will change the way computers work: Artificial Neural Networks.

    2. Definition

A neural network is a specialized branch of Artificial Intelligence.

In general, neural networks are simply mathematical techniques designed to accomplish a variety of tasks. A neural network uses a set of processing elements (or nodes) loosely analogous to the neurons in the brain (hence the name, neural network). These nodes are interconnected in a network that can then identify patterns in data as it is exposed to that data. In a sense, the network learns from experience just as people do. Neural networks can be configured in various arrangements to perform a range of tasks including pattern recognition, data mining, classification, and process modeling.


    3. Structure of human brain

As stated earlier, a neural network is very similar in spirit to the biological structure of the human brain. The functional division of the brain is summarized in the table below.

Functions of the Brain

Sequential functions          Parallel functions
--------------------          ------------------
Rules                         Images
Concepts                      Pictures
Calculations                  Control
Expert systems                Neural networks
Learn by rules                Learn by experience

As shown in the table, the left part of the brain deals with rules, concepts and calculations. It follows rule-based learning and hence solves problems by passing them through rules, and its neurons operate largely in sequence. Therefore, this part of the brain is similar to an expert system. The right part of the brain deals with functions, images, pictures and control. It follows parallel learning and hence learns through experience, and its neurons operate largely in parallel. Therefore, this part of the brain is similar to a neural network.

    4. Neurons

The conceptual constructs of a neural network stemmed from our early understanding of the human brain. The brain is composed of billions of interconnected neurons (some experts estimate upwards of 10^11 neurons in the human brain). The fundamental building blocks of this massively parallel cellular structure are really quite simple when studied in isolation. A neuron receives incoming electrochemical signals through its dendrites and collects these signals at the neuron nucleus. The neuron nucleus has an internal threshold that determines whether the neuron itself fires in response to the incoming information. If the combined incoming signals exceed this threshold, the neuron fires and an electrochemical signal is sent to all neurons connected to the firing neuron on its output


    connections or axons. Otherwise the incoming signals are ignored and the neuron remains

    dormant.

There are many types of neurons, or nerve cells. From the neuron body (soma), many fine branching fibers, called dendrites, protrude. The dendrites conduct signals toward the soma or cell body. Extending from a neuron's soma, at a point called the axon hillock (initial segment), is a long fiber called an axon, which generally splits into the smaller branches of the axonal arborization. The tips of these axon branches (also called nerve terminals, end bulbs, or telodendria) impinge either upon the dendrites, somas or axons of other neurons, or upon effectors.


The axon-dendrite (or axon-soma, axon-axon) contact between an end bulb and the cell it impinges upon is called a synapse. The signal flow in the neuron is (with some exceptions, where the flow can be bi-directional) from the dendrites through the soma, converging at the axon hillock, and then down the axon to the end bulbs.

A neuron typically has many dendrites but only a single axon. Some neurons, such as the amacrine cells, lack axons.

    5. Basics of Artificial Neural Models

The human brain is made up of computing elements, called neurons, coupled with sensory receptors and effectors. The average human brain, roughly three pounds in weight and 90 cubic inches in volume, is estimated to contain about 100 billion cells of various types.


A neuron is a special cell that conducts an electrical signal, and there are about 10 billion

    neurons in the human brain. The remaining 90 billion cells are called glial or glue cells,

    and these serve as support cells for the neurons.

    According to a simplified account, the human brain consists of about ten billion neurons --

    and a neuron is, on average, connected to several thousand other neurons. By way of these

    connections, neurons both send and receive varying quantities of energy. One very

    important feature of neurons is that they don't react immediately to the reception of energy.

    Instead, they sum their received energies, and they send their own quantities of energy to

    other neurons only when this sum has reached a certain critical threshold. The brain learns

    by adjusting the number and strength of these connections. Even though this picture is a

    simplification of the biological facts, it is sufficiently powerful to serve as a model for the

    neural net.
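To make this summing-and-threshold picture concrete, the following short Python sketch models a single unit that fires only when its weighted inputs reach a threshold. It is an illustration only; the weights, inputs and threshold value are made-up numbers, not taken from any particular network.

def threshold_neuron(inputs, weights, threshold):
    # Sum the weighted incoming signals ("energies").
    total = sum(x * w for x, w in zip(inputs, weights))
    # Fire (output 1) only if the sum reaches the critical threshold.
    return 1 if total >= threshold else 0

# Example with hypothetical connection strengths: 0.4 + 0.3 = 0.7 >= 0.6, so the unit fires.
print(threshold_neuron([1, 0, 1], [0.4, 0.9, 0.3], threshold=0.6))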

The motivation for artificial neural network (ANN) research is the belief that a human's capabilities, particularly in real-time visual perception, speech understanding and sensory information processing, as well as in adaptivity and intelligent decision making in general, come from the organizational and computational principles exhibited in the highly complex neural network of the human brain. The expectation of faster and better solutions provides us with the challenge to build machines using the same computational and organizational principles, simplified and abstracted from the neurobiology of the brain.

    Artificial Neural Network Model


    6. Artificial Neural Network

Artificial neural networks (ANNs), also called parallel distributed processing systems (PDPs) and connectionist systems, are intended for modeling the organizational principles of the central nervous system, in the hope that the biologically inspired computing capabilities of the ANN will allow cognitive and sensory tasks to be performed more easily and more satisfactorily than with conventional serial processors. Because of the limitations of serial computers, much effort has been devoted to the development of parallel processing architectures in which the function of a single processor is at a level comparable to that of a neuron. If the interconnections between such simple fine-grained processors are made adaptive, a neural network results.

ANN structures, broadly classified as recurrent (involving feedback) or non-recurrent (without feedback), have numerous processing elements (also dubbed neurons, neurodes, units or cells) and connections: forward and backward interlayer connections between neurons in different layers, lateral (intralayer) connections between neurons in the same layer, and self-connections between the input and output of the same neuron. Neural networks not only have differing structures or topologies but are also distinguished from one another by the way they learn, the manner in which computations are performed (rule-based, fuzzy, even nonalgorithmic), and the component characteristics of the neurons or the input/output description of the synaptic dynamics. These networks are required to perform significant processing tasks through collective local interactions that produce global properties.

Since the components and connections, and their packaging under stringent spatial constraints, make the system large-scale, the role of graph theory, algorithms and neuroscience is pervasive.

6.1 How Neural Networks Differ from a Conventional Computer

    Neural Networks perform computation in a very different way than conventional computers,


where a single central processing unit sequentially dictates every piece of the action. Neural networks are built from a large number of very simple processing elements that individually deal with pieces of a big problem. A processing element (PE) simply multiplies its inputs by a set of weights and transforms the result into an output value (a table lookup). The principles of neural computation come from the massively parallel processing of such simple tasks, and from the adaptive nature of the parameters (weights) that interconnect the PEs.

6.2 Similarities and differences between a neural net and a von Neumann computer

Neural net: trained (learning by example) by adjusting the connection strengths, thresholds and structure.
Von Neumann computer: programmed with instructions (if-then analysis based on logic).

Neural net: parallel (discrete or continuous), digital, asynchronous.
Von Neumann computer: sequential or serial, synchronous (with a clock).

Neural net: may be fault-tolerant because of distributed representation and large-scale redundancy.
Von Neumann computer: not fault-tolerant.

Neural net: self-organization during learning.
Von Neumann computer: software dependent.

Neural net: memory and processing elements collocated.
Von Neumann computer: memory and processing elements separate.

Neural net: knowledge stored is adaptable.
Von Neumann computer: knowledge stored in an addressed memory.

Neural net: processing is anarchic; the cycle time, which governs the processing speed, is in the millisecond range.
Von Neumann computer: processing is autocratic; the cycle time, which corresponds to processing one step of a program in the CPU during one clock cycle, is in the nanosecond range.

    7. Perceptron

    At the heart of every Neural Network is what is referred to as the perceptron (sometimes


called a processing element or neural node), which is analogous to the neuron nucleus in the brain. In a layered network the second layer, i.e. the very first hidden layer, is made up of such perceptrons. As was the case in the brain, the operation of the perceptron is very simple; however, also as in the brain, when all the connected neurons operate as a collective they can provide some very powerful learning capacity.

Input signals are applied to the node via input connections (dendrites in the case of the brain). The connections have strengths which change as the system learns; in neural networks the strengths of the connections are referred to as weights. Weights can either excite or inhibit the transmission of the incoming signal. Mathematically, incoming signal values are multiplied by the values of those particular weights.

At the perceptron all weighted inputs are summed. This sum is then passed to a scaling function. The selection of the scaling function is part of the neural network design. The structure of the perceptron (neuron node) is as follows.

    Perceptron
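The weighted-sum-and-scaling operation described above can be sketched in a few lines of Python. This is only an illustrative sketch; the step activation, the weights and the bias are assumed values, chosen here so that the unit behaves as a logical AND.

def perceptron(inputs, weights, bias):
    # Multiply each incoming signal by its connection weight and sum.
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Scaling (activation) function: here a simple step function.
    return 1 if s > 0 else 0

# Hypothetical 2-input perceptron behaving as a logical AND gate.
weights, bias = [1.0, 1.0], -1.5
for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, perceptron(x, weights, bias))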


    8. Learning Laws

Many learning laws are in common use. Most of them are some variation of the best known and oldest learning law, Hebb's Rule. Research into different learning functions continues as new ideas routinely show up in trade publications. Some researchers have the modeling of biological learning as their main objective; others are experimenting with adaptations of their perceptions of how nature handles learning. Either way, man's understanding of how neural processing actually works is very limited. Learning is certainly more complex than the simplification represented by the learning laws currently developed. A few of the major laws are presented below as examples.

8.1 Hebb's Rule

The first, and undoubtedly the best known, learning rule was introduced by Donald Hebb. The description appeared in his book The Organization of Behavior in 1949. His basic rule is: if a neuron receives an input from another neuron, and if both are highly active (mathematically, have the same sign), the weight between the neurons should be strengthened.
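A minimal sketch of a Hebbian weight update is given below for illustration; the learning rate and the way activity is represented are assumptions, not part of Hebb's original statement.

def hebb_update(weight, pre_activity, post_activity, learning_rate=0.1):
    # Strengthen the weight when the two connected units are active
    # together (their activities have the same sign).
    return weight + learning_rate * pre_activity * post_activity

w = 0.2
w = hebb_update(w, pre_activity=1.0, post_activity=1.0)   # both active, so the weight grows
print(w)   # approximately 0.3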

    8.2 Hopfield Law

It is similar to Hebb's Rule with the exception that it specifies the magnitude of the strengthening or weakening. It states: if the desired output and the input are both active, or both inactive, increment the connection weight by the learning rate; otherwise decrement the weight by the learning rate.

    8.3 The Delta Rule

This rule is a further variation of Hebb's Rule and is one of the most commonly used. It is based on the simple idea of continuously modifying the strengths of the input connections to reduce the difference (the delta) between the desired output value and the actual output of a processing element. The rule changes the synaptic weights in the way that minimizes the mean squared error of the network. This rule is also referred to as the


Widrow-Hoff Learning Rule and the Least Mean Square (LMS) Learning Rule. The way the Delta Rule works is that the delta error in the output layer is transformed by the derivative of the transfer function and is then used in the previous neural layer to adjust the input connection weights. In other words, the error is back-propagated into the previous layers one layer at a time. The process of back-propagating the network errors continues until the first layer is reached. The network type called feed-forward back-propagation derives its name from this method of computing the error term. When using the Delta Rule, it is important to ensure that the input data set is well randomized. A well-ordered or structured presentation of the training set can lead to a network which cannot converge to the desired accuracy; if that happens, the network is incapable of learning the problem.
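The core of the delta (LMS) update can be sketched for a single linear unit as follows. The training pairs and the learning rate are made-up values used purely for illustration.

def delta_step(weights, inputs, desired, learning_rate=0.05):
    # Actual output of a linear processing element.
    actual = sum(w * x for w, x in zip(weights, inputs))
    # The delta: difference between desired and actual output.
    error = desired - actual
    # Adjust each input connection in proportion to the error.
    return [w + learning_rate * error * x for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]
# Hypothetical training pairs consistent with output = 1*x1 + 2*x2.
data = [([1.0, 0.0], 1.0), ([0.0, 1.0], 2.0), ([1.0, 1.0], 3.0)]
for _ in range(200):
    for x, d in data:
        weights = delta_step(weights, x, d)
print([round(w, 3) for w in weights])   # approaches [1.0, 2.0]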

    8.4 The Gradient Descent Rule

This rule is similar to the Delta Rule in that the derivative of the transfer function is still used to modify the delta error before it is applied to the connection weights. Here, however, an additional proportional constant tied to the learning rate is appended to the final modifying factor acting upon the weights. This rule is commonly used, even though it converges to a point of stability very slowly.

It has been shown that using different learning rates for different layers of a network helps the learning process converge faster. In these tests, the learning rates for the layers close to the output were set lower than those for the layers near the input. This is especially important for applications where the input data is not derived from a strong underlying model.
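The following is a small sketch of gradient-descent training with a different learning rate per layer, in the spirit described above. The network size, the XOR training data, the sigmoid transfer function and the particular rates are illustrative assumptions only.

import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Hypothetical training data: the XOR problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

# Tiny 2-4-1 network; a constant 1 is appended to each layer's input
# so that the weight matrices also carry bias terms.
W1 = rng.normal(scale=0.5, size=(3, 4))
W2 = rng.normal(scale=0.5, size=(5, 1))
lr_hidden, lr_output = 1.0, 0.5     # the layer nearer the output gets the lower rate

def add_bias(a):
    return np.hstack([a, np.ones((a.shape[0], 1))])

for _ in range(20000):
    Xb = add_bias(X)
    H = sigmoid(Xb @ W1)            # forward pass, hidden layer
    Hb = add_bias(H)
    O = sigmoid(Hb @ W2)            # forward pass, output layer
    # Delta error scaled by the derivative of the sigmoid transfer function.
    dO = (O - Y) * O * (1 - O)
    dH = (dO @ W2[:-1].T) * H * (1 - H)
    # Gradient-descent step with a separate learning rate for each layer.
    W2 -= lr_output * Hb.T @ dO
    W1 -= lr_hidden * Xb.T @ dH

print(np.round(O, 2))               # with these settings the outputs usually approach 0, 1, 1, 0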

8.5 Kohonen's Learning Law

This procedure, developed by Teuvo Kohonen, was inspired by learning in biological systems. In this procedure, the processing elements compete for the opportunity to learn, that is, to update their weights. The processing element with the largest output is declared the winner and has the capability of inhibiting its competitors as well as exciting its neighbors. Only the winner is permitted an output, and only the winner plus its neighbors are allowed to adjust their connection weights. Further, the size of the neighborhood can vary during


the training period. The usual paradigm is to start with a larger definition of the neighborhood and narrow it as the training process proceeds. Because the winning element is defined as the one that has the closest match to the input pattern, Kohonen networks model the distribution of the data and are sometimes referred to as self-organizing maps or self-organizing topologies.
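A very small sketch of this competitive, winner-take-all update follows. The one-dimensional map, the random two-dimensional data, the learning rate and the shrinking neighborhood schedule are all assumed for illustration.

import numpy as np

rng = np.random.default_rng(1)

weights = rng.random((10, 2))    # a 1-D map of 10 units with 2-D weight vectors
data = rng.random((200, 2))      # hypothetical 2-D input patterns

for t, x in enumerate(data):
    # The winner is the unit whose weight vector is the closest match to the input.
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    # The neighborhood radius and the learning rate shrink as training proceeds.
    radius = max(1, int(3 * (1 - t / len(data))))
    lr = 0.5 * (1 - t / len(data))
    lo, hi = max(0, winner - radius), min(len(weights), winner + radius + 1)
    # Only the winner and its neighbors adjust their connection weights.
    weights[lo:hi] += lr * (x - weights[lo:hi])

print(np.round(weights, 2))      # the units spread out to model the distribution of the data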

    9. Basic Structure of artificial neural network

Input layer: The bottom layer is the input layer; in the figure below, x1 to x5 are input layer neurons.

Hidden layer: The layers between the input and output layers are known as hidden layers; this is where the knowledge gained from past experience / training is kept.

Output layer: The topmost layer, which gives the final output; in this case z1 and z2 are output neurons.

    Basic Structure Of Artificial Neural Network

    10. Network architectures


    1). Single layer feed forward networks:

In a layered neural network the neurons are organized in the form of layers. In the simplest form of a layered network, we have an input layer of source nodes that projects onto an output layer of neurons, but not vice versa. In other words, this network is strictly of the feed-forward or acyclic type, as shown in the figure. Such a network is called a single-layer network, with the designation "single layer" referring to the output layer of neurons.

    2). Multilayer feed forward networks:

The second class of feed-forward neural network distinguishes itself by the presence of one or more hidden layers, whose computation nodes are correspondingly called hidden neurons or hidden units. The function of the hidden neurons is to intervene between the external input and the network output in some useful manner. The ability of hidden neurons to extract higher-order statistics is particularly valuable when the size of the input layer is large.

The input vectors are fed forward to the 1st hidden layer, whose outputs pass to the 2nd hidden layer, and so on until the last layer, i.e. the output layer, which gives the actual network response.
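The sketch below shows, under assumed sizes and random weights, how an input vector is fed forward layer by layer; the 5-3-2 layer sizes (matching the x1..x5 inputs and z1, z2 outputs mentioned earlier) and the sigmoid activation are illustrative assumptions.

import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(x, layers):
    # Feed the input vector through each layer in turn:
    # weighted sum plus bias, then the activation function.
    a = x
    for W, b in layers:
        a = sigmoid(a @ W + b)
    return a

rng = np.random.default_rng(0)
# Hypothetical 5-3-2 network: 5 inputs, one hidden layer of 3 units, 2 outputs.
layers = [(rng.normal(size=(5, 3)), np.zeros(3)),
          (rng.normal(size=(3, 2)), np.zeros(2))]
print(forward(np.array([1.0, 0.0, 0.5, 0.2, 0.9]), layers))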


    3). Recurrent networks:

A recurrent network distinguishes itself from a feed-forward neural network in that it has at least one feedback loop. As shown in the figures, when the output of a neuron is fed back into its own input, this is referred to as self-feedback.

A recurrent network may consist of a single layer of neurons with each neuron feeding its output signal back to the inputs of all the other neurons. The network may or may not have hidden layers.
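The sketch below illustrates, under assumed sizes and random weights, a single-layer recurrent network in which every unit's output is fed back to all units (the diagonal entries of the feedback matrix acting as self-feedback).

import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
n_units = 4
W_in = rng.normal(scale=0.5, size=(3, n_units))        # weights for the external inputs
W_fb = rng.normal(scale=0.5, size=(n_units, n_units))  # feedback weights between the units

state = np.zeros(n_units)
x = np.array([1.0, 0.5, -0.3])   # a hypothetical external input

# At each step every unit combines the external input with the previous
# outputs of all units, which are fed back through W_fb.
for step in range(5):
    state = sigmoid(x @ W_in + state @ W_fb)
    print(step, np.round(state, 3))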


11. Learning of ANNs

The property that is of primary significance for a neural network is its ability to learn from its environment and to improve its performance through learning. A neural network learns about its environment through an iterative process of adjustments applied to its synaptic weights and bias levels; the network becomes more knowledgeable about its environment after each iteration of the learning process.

    11.1 Learning with a teacher:

1). Supervised learning: the learning process in which a teacher teaches the network by giving it knowledge of the environment in the form of sets of pre-calculated input-output examples, as shown in the figure.


The neural network's response to the inputs is observed and compared with the predefined outputs. The difference, referred to as the error signal, is fed back to the input layer neurons along with the inputs, in order to reduce the error until the network gives the correct response as defined by the predefined outputs.

For example, when learning a language, we hear the sound of a word from a teacher. The sound is stored in the memory banks of our brain, and we try to reproduce it. When we hear our own sound, we mentally compare it (the actual output) with the stored sound (the target sound) and note the error. If the error is large, we try again and again until it becomes significantly small; then we stop.
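A small training-loop sketch in the same spirit follows: the network's response is compared with the teacher's answer and the weights are adjusted until the error becomes significantly small. The single linear unit, the example data, the learning rate and the stopping tolerance are all assumptions for illustration.

examples = [([1.0, 0.0], 0.5), ([0.0, 1.0], 1.5), ([1.0, 1.0], 2.0)]   # (input, target) pairs
weights = [0.0, 0.0]
lr = 0.1

while True:
    total_error = 0.0
    for x, target in examples:
        actual = sum(w * xi for w, xi in zip(weights, x))
        error = target - actual                     # compare with the teacher's answer
        total_error += error ** 2
        weights = [w + lr * error * xi for w, xi in zip(weights, x)]
    if total_error < 1e-6:                          # "significantly small" error: stop
        break

print([round(w, 3) for w in weights])               # approaches [0.5, 1.5]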

    11.2 Learning without a teacher:

Unlike supervised learning, learning without a teacher takes place without labelled examples; that is, there are no pre-calculated examples of the function to be learned by the network.

1). Reinforcement learning / neurodynamic programming: In reinforcement learning, the learning of an input-output mapping is performed through continued interaction with the environment in order to minimize a scalar index of performance, as shown in the figure.


In reinforcement learning, because no information on what the right output should be is provided, the system must employ some random search strategy so that the space of plausible and rational choices is searched until a correct answer is found. Reinforcement learning is usually involved in exploring a new environment when only some knowledge (or a subjective feeling) about the right response to environmental inputs is available. The system receives an input from the environment and produces an output as its response; subsequently, it receives a reward or a penalty from the environment. The system learns from a sequence of such interactions.

Reinforcement learning uses a critic instead of a teacher, which indicates whether the actual output matches the target output or not. The critic does not present the target output to the network but presents only a pass/fail indication; thus the error signal produced is binary: pass or fail.

If the critic's indication is "fail", the network readjusts its parameters and tries again and again until it gets its output response right. During this process there is no indication of whether the output response is moving in the right direction or how close to the correct output it is.
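A deliberately simple sketch of this pass/fail search follows: since the critic only says pass or fail, the system randomly perturbs its parameters and keeps a change only when the new behaviour passes. The task, the perturbation range and the critic itself are made-up assumptions.

import random

random.seed(0)

# Hypothetical task: find weights so that the unit outputs 1 for the input
# [1, 1] and 0 for [1, 0]. The critic only answers pass/fail, never "how close".

def output(weights, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) > 0.5 else 0

def critic(weights):
    return output(weights, [1, 1]) == 1 and output(weights, [1, 0]) == 0

weights = [0.0, 0.0]
while not critic(weights):
    # Random search: try a perturbed parameter set, keep it only on a "pass".
    trial = [w + random.uniform(-1, 1) for w in weights]
    if critic(trial):
        weights = trial

print([round(w, 2) for w in weights])   # some weight pair the critic accepts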

    2). Unsupervised learning: In unsupervised or self-organized learning there is no external


teacher or critic to oversee the learning process, as indicated in the figure. Rather, provision is made for a task-independent measure of the quality of the representation that the network is required to learn, and the free parameters of the network are optimized with respect to that measure. Once the network has become tuned to the statistical regularities of the input data, it develops the ability to form internal representations for encoding features of the input, and thereby to create new classes automatically.

For example, show a person a set of different objects and ask him/her to separate them into groups, such that objects in a group have one or more common features that distinguish them from other groups. When this is done, show the same person another object and ask him/her to place it in one of the groups. If it does not belong to any of the existing groups, a new group may be formed.

Even though unsupervised learning does not require a teacher, it requires guidelines to determine how it will form groups.
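The object-grouping example can be sketched as a simple distance-based procedure: put each object into the nearest existing group, or start a new group if nothing is close enough. This is only an illustration of the idea of a guideline for forming groups; the feature vectors and the distance threshold are assumptions.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def form_groups(objects, threshold=1.0):
    groups = []                              # each group keeps its first member as representative
    for obj in objects:
        for representative, members in groups:
            if distance(obj, representative) < threshold:
                members.append(obj)          # close to an existing group: join it
                break
        else:
            groups.append((obj, [obj]))      # nothing close enough: a new group is formed
    return groups

# Hypothetical objects, each described by two features.
objects = [(0.1, 0.2), (0.2, 0.1), (5.0, 5.1), (5.2, 4.9), (9.0, 0.1)]
for representative, members in form_groups(objects):
    print(representative, members)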

    11.3 Learning tasks

    Pattern recognition:

Humans are good at pattern recognition. We can recognize the familiar face of a person even though that person has aged since our last encounter, identify a familiar person by his voice on the telephone, or tell from its aroma what food is being prepared. Pattern recognition is formally defined as the process whereby a received pattern/signal is


assigned to one of a prescribed number of classes. A neural network performs pattern recognition by first undergoing a training session, during which the network is repeatedly presented with a set of input patterns along with the category to which each particular pattern belongs. Later, a new pattern is presented to the network that has not been seen before, but which belongs to the same population of patterns used to train the network. The network is able to identify the class of that particular pattern because of the information it has extracted from the training data. Pattern recognition performed by a neural network is statistical in nature, with the patterns being represented by points in a multidimensional decision space. The decision space is divided into regions, each of which is associated with a class; the decision boundaries are determined by the training process.

As shown in the figure, in generic terms, pattern-recognition machines using neural networks may take two forms:

1). Extract features through an unsupervised network.

2). Pass the features to a supervised network for pattern classification to give the final output.


12. Control

    The control of a plant is another learning task that can be done by a neural network; by a

    plant we mean a process or critical part of a system that is to be maintained in a

    controlled condition. The relevance of learning to control should not be surprising

because, after all, the human brain is a computer, the outputs of which as a whole system are


actions. In the context of control, the brain is living proof that it is possible to build a generalized controller that takes full advantage of parallel distributed hardware and can control many thousands of processes, as the brain does when it controls the body's thousands of muscles.

13. Adaptation

The environment of interest is non-stationary, which means that the statistical parameters of the information-bearing signals generated by the environment vary with time. In situations of this kind, the traditional methods of supervised learning may prove to be inadequate, because the network is not equipped with the necessary means to track the statistical variation of the environment in which it operates. To overcome this shortcoming, it is desirable for a neural network to continually adapt its free parameters to variations in the incoming signals in a real-time fashion. Thus an adaptive system responds to every distinct input as a novel one. In other words, the learning process encountered in an adaptive system never stops; learning goes on while signal processing is being performed by the system. This form of learning is called continuous learning or learning on the fly.


    14. Generalization

In back-propagation learning we typically start with a training sample and use the back-propagation algorithm to compute the synaptic weights of a multilayer perceptron by loading (encoding) as many of the training examples as possible into the network. The hope is that the neural network so designed will generalize. A network is said to generalize well when the input-output mapping computed by the network is correct, or nearly so, for test data never used in creating or training the network; the term generalization is borrowed from psychology.

A neural network that is designed to generalize well will produce a correct input-output mapping even when the input is slightly different from the examples used to train the network. When, however, a neural network learns too many input-output examples, it may end up memorizing the training data. It may do so by finding a feature that is present in the training data but is not true of the underlying function that is to be modeled. Such a phenomenon is referred to as overfitting or overtraining. When the network is overtrained it loses the ability to generalize between similar input-output patterns.
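To make generalization and overfitting concrete, the short illustration below uses ordinary polynomial curve fitting (a stand-in model, not a neural network): a held-out test set, never used in fitting, exposes the model that has merely memorized the training data. The data is synthetic and the polynomial degrees are arbitrary.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a simple underlying function plus noise.
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)

# Split into training data and test data never used for fitting.
x_train, y_train = x[::2], y[::2]
x_test, y_test = x[1::2], y[1::2]

for degree in (1, 3, 12):
    model = np.polynomial.Polynomial.fit(x_train, y_train, degree)
    train_err = np.mean((model(x_train) - y_train) ** 2)
    test_err = np.mean((model(x_test) - y_test) ** 2)
    # A very flexible model can drive the training error down while the
    # test error grows: it has memorized the training data (overfitting).
    print(f"degree {degree:2d}   train error {train_err:.3f}   test error {test_err:.3f}")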

15. The Probabilistic Neural Network

Another multilayer feed-forward network is the probabilistic neural network (PNN). In addition to the input layer, the PNN has two hidden layers and an output layer. The major difference from a feed-forward network trained by back-propagation is that the PNN can be constructed after only a single pass over the training exemplars in its original form, or two passes in a modified version. The activation function of a neuron in the PNN is statistically derived from estimates of probability density functions (PDFs) based on the training patterns.
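A compact sketch of the PNN idea described above: each class's probability density is estimated from the stored training patterns with Gaussian (Parzen) kernels, and a new pattern is assigned to the class with the largest estimated density. The two classes, the sample patterns and the smoothing parameter are illustrative assumptions.

import numpy as np

# Hypothetical training patterns for two classes.
train = {
    "A": np.array([[0.1, 0.2], [0.2, 0.1], [0.0, 0.3]]),
    "B": np.array([[1.0, 1.1], [0.9, 1.0], [1.1, 0.9]]),
}
sigma = 0.3   # assumed smoothing parameter of the Gaussian kernels

def class_density(x, patterns):
    # Parzen-style PDF estimate: the average of Gaussian kernels centred
    # on the stored training patterns (the PNN's pattern layer).
    d2 = np.sum((patterns - x) ** 2, axis=1)
    return np.mean(np.exp(-d2 / (2 * sigma ** 2)))

def classify(x):
    # Output layer: pick the class whose estimated density is largest.
    return max(train, key=lambda c: class_density(x, train[c]))

print(classify(np.array([0.15, 0.15])))   # expected "A"
print(classify(np.array([1.05, 0.95])))   # expected "B"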


16. Advantages of Neural Networks

1) Networks start processing the data without any preconceived hypothesis. They start with random weight assignments to the various input variables, and adjustments are made based on the difference between predicted and actual output. This allows for an unbiased and better understanding of the data.

2) Neural networks can be retrained using additional input variables and additional individuals. Once trained, they can be called on to make a prediction for a new patient.

3) There are several neural network models available to choose from for a particular problem.

4) Once trained, they are very fast.

5) Their increased accuracy results in cost savings.

6) Neural networks are able to represent any function; therefore they are called universal approximators.

7) Neural networks are able to learn representative examples by back-propagating errors.


17. Limitations of Neural Networks

Low learning rate: For problems requiring a large and complex network architecture, or having a large number of training examples, the time needed to train the network can become excessively long.

Forgetfulness: The network tends to forget old training examples as it is presented with new ones. A previously trained neural network that must be updated with new information must be trained using both the old and the new examples; there is currently no known way to incrementally train the network.

Imprecision: Neural networks do not provide precise numerical answers, but rather relate an input pattern to the most probable output state.

Black box approach: Neural networks can be trained to transform an input pattern into an output but provide no insight into the physics behind the transformation.

Limited flexibility: An ANN is designed and implemented for one particular system only; it is not applicable to another system.

18. Applications of Artificial Neural Networks

In parallel with the development of theories and architectures for neural networks, the scope for applications is broadening at a rapid pace. Neural networks may develop intuitive concepts but are inherently ill suited for implementing rules precisely, as in the case of rule-based computing. Some of the decision-making tools of the human brain, such as the seats of consciousness, thought and intuition, do not seem to be within our capabilities for comprehension in the near future and are dubbed by some to be essentially non-algorithmic. Following are a few applications where neural networks are presently employed:

1) Time Series Prediction: Predicting the future has always been one of humanity's desires. Time series measurements are the means for us to characterize and understand a system and to predict its future behavior. Gershenfeld and Weigend defined three goals for time series analysis: forecasting, modeling, and


characterization. Forecasting is predicting the short-term evolution of the system. Modeling involves finding a description that accurately captures the features of the long-term behavior. The goal of characterization is to determine the fundamental properties of the system, such as the degrees of freedom or the amount of randomness. The traditional methods used for time series prediction are the moving average (MA), the autoregressive (AR), or the combination of the two, the ARMA model. Neural network approaches have produced some of the best short-term predictions. However, methods that reconstruct the state space by time-delay embedding and develop a representation for the geometry in the system's state space have yielded better longer-term predictions than neural networks in some cases.

2) Speech Generation: One of the earliest successful applications of the back-propagation algorithm for training multilayer feed-forward networks was a speech generation system called NETtalk, developed by Sejnowski and Rosenberg. NETtalk is a fully connected layered feed-forward network with only one hidden layer. It was trained to pronounce written English text. Turning written English text into speech is a difficult task, because most phonological rules have exceptions that are context-sensitive. NETtalk is a simple network that learns the function in several hours using exemplars.

3) Speech Recognition: Kohonen used his self-organizing map for the inverse problem to that addressed by NETtalk: speech recognition. He developed a phonetic typewriter for the Finnish language. The phonetic typewriter takes speech as input and converts it into written text. Speech recognition in general is a much harder problem than turning text into speech. Current state-of-the-art English speech recognition systems are based on hidden Markov models (HMMs). The HMM, which is a Markov process, consists of a number of states, the transitions between which depend on the occurrence of some symbol.

    4) Autonomous Vehicle Navigation: Vision-based autonomous vehicle and robot

    guidance have proven difficult for algorithm-based computer vision methods,

    mainly because of the diversity of the unexpected cases that must be explicitly


dealt with in the algorithms, and because of the real-time constraint. Pomerleau successfully demonstrated the potential of neural networks for overcoming these difficulties. His ALVINN (Autonomous Land Vehicle In a Neural Network) set a world record for autonomous navigation distance. After training on a two-mile stretch of highway, it drove the CMU Navlab, equipped with video cameras and laser range sensors, for 21.2 miles at an average speed of 55 mph on a relatively old highway open to normal traffic. ALVINN was not disturbed by passing cars while it was driving autonomously, and it nearly doubled the previous distance world record for autonomous navigation. The network in ALVINN for each situation consists of a single hidden layer of only four units, an output layer of 30 units, and a 30 x 32 retina for the 960 possible input variables. The retina is fully connected to the hidden layer, and the hidden layer is fully connected to the output layer. The graph of the feed-forward network is a node-coalesced cascade version of bipartite graphs.

5) Handwriting Recognition: Members of a group at AT&T Bell Laboratories have been working in the area of neural networks for many years. One of their projects involves the development of a neural network recognizer for handwritten digits. A feed-forward layered network with three hidden layers is used. One of the key features of this network is that it reduces the number of free parameters in order to enhance the probability of valid generalization. Artificial neural networks are also applied to image processing.

6) In the Robotics Field: With the help of neural networks and artificial intelligence, intelligent devices which behave like humans are designed; these are helpful to humans in performing various tasks.

Following are some of the applications of neural networks in various fields:

    Business

    o Marketing

    o Real Estate

    Document and Form Processing


    o Machine Printed Character Recognition

    o Graphics Recognition

    o Hand printed Character Recognition

    o Cursive Handwriting Character Recognition

    Food Industry

    o Odor/Aroma Analysis

    o Product Development

    o Quality Assurance

    Financial Industry

    o Market Trading

    o Fraud Detection

    o Credit Rating

    Energy Industry

    o Electrical Load Forecasting

    o Hydroelectric Dam Operation

    o Oil and Natural Gas Company

    Manufacturing

    o Process Control

    o Quality Control

    Medical and Health Care

    o Image Analysis

    o Drug Development

19. Conclusion

In conclusion, it can be said that in another 20 or 30 years neural networks will be so developed that they will be able to find the errors of even human beings, rectify those errors, and make human beings more intelligent.


20. References

Neural Networks, by Simon Haykin

Understanding Neural Networks and Fuzzy Logic, by Stamatios V. Kartalopoulos


Artificial Neural Networks, by Robert J. Schalkoff

Internet:

en.wikipedia.org/wiki/Artificial_neural_network

www.learnartificialneuralnetworks.com

http://www.ibm.com/developerworks/library/l-neural/
