
  • 8/3/2019 AMR in Dynamic Environment

    1/17

Autonomous mobile robot navigation system designed in dynamic environment based on transferable belief model

Wang Yaonan a, Yang Yimin a,*, Yuan Xiaofang a, Zuo Yi a,b, Zhou Yuanli a, Yin Feng a, Tan Lei a

a College of Electrical and Information Engineering, Hunan University, Changsha, Hunan 410082, PR China
b Key Laboratory of Regenerative Energy Electric-Technology of Hunan Province, Changsha University of Science and Technology, Changsha, Hunan 410004, PR China

Article info

    Article history:

    Received 2 February 2010

    Received in revised form 19 March 2011

    Accepted 12 May 2011

    Available online 20 May 2011

    Keywords:

    Navigation

    Path planning

    Mobile robot

    Robust tracking control

    Transferable belief model

    Dynamic environment

Abstract

This paper investigates the possibility of using the transferable belief model (TBM) as a promising alternative for the path-planning problem of a nonholonomic mobile robot equipped with ultrasonic sensors in an unknown dynamic environment, where the workspace is cluttered with both static and moving obstacles. The concept of the transferable belief model is introduced and used to design a fusion of ultrasonic sensor data. A new strategy for mobile robot path planning is proposed based on the transferable belief model. The robot's path is controlled using the proposed navigation strategy, which depends on navigation parameters that are modified by the TBM pignistic belief value. These parameters are tuned in real time to adjust the path of the robot. A major advantage of the proposed method is that, by detecting the robot's trapped state with the ultrasonic sensors, the navigation law can determine whether an obstacle is dynamic or static without any previous knowledge, and then select the relevant obstacles for the corresponding robot avoidance motion. Simulation is used to illustrate collision detection and path planning.

© 2011 Elsevier Ltd. All rights reserved.

    1. Introduction

During the past few years, autonomous navigation of nonholonomic systems such as the nonholonomic mobile robot has received wide attention and is a topic of great research interest. A navigation system, including map building and path planning, implies that the robot is capable of reacting to static obstacles and unpredictable dynamic objects that may impede the successful execution of a task. To achieve this level of robustness, the literature dealing with path planning is growing rapidly.

The work of Rueb and Wong [1], Habib and Yuta [2], Mataric [3], Rimon and Koditschek [4] and Borenstein and Koren [5] are among the earliest attempts to solve the problem of path planning. Various classical approaches designed originally have been extended in order to be applicable in real applications. Probabilistic roadmap methods are used in [6–9]. The potential field method is suggested in [10–12]. These methods perform well in static environments. However, this does not automatically imply good performance in dynamic environments. Additionally, these methods have limited performance when obstacles are allowed to move in the workspace.

Recently, other kinds of research have been proposed [13–21], which extend the various approaches to dynamic environments: for example, the dynamic potential field method [13–15], kinematics methods that solve the problem of collision detection [16], and sensor-based path-planning methods [17–20]. Sensor-based path-planning methods are widely used for navigation in dynamic environments: the robot calculates and estimates the motion of the moving obstacles based on its sensory system. There also exist methods based on other algorithms, such as genetic algorithms [21], fuzzy systems [22], and intelligent algorithms [23,24], to solve this problem. These methods give wheeled mobile robots the ability to navigate safely and avoid moving obstacles

0263-2241/$ - see front matter © 2011 Elsevier Ltd. All rights reserved. doi:10.1016/j.measurement.2011.05.010

* Corresponding author.
E-mail address: [email protected] (Y. Yimin).

Measurement 44 (2011) 1389–1405

    Contents lists available at ScienceDirect

    Measurement

journal homepage: www.elsevier.com/locate/measurement


in dynamic environments. However, most of the approaches proposed for mobile robots in the literature do not consider autonomous navigation in an environment which includes both static obstacles and dynamic obstacles. Furthermore, in the real world, any prior knowledge about the environment is, in general, incomplete, uncertain, or completely unknown. These methods assume that each moving object's velocity and direction is exactly known, but this prior knowledge is not easy to obtain.

The transferable belief model (TBM) is a model for the quantified representation of the epistemic uncertainty held by an agent, which can be an intelligent sensor, and it provides a highly flexible framework for managing the uncertainty encountered in multi-sensor data fusion problems. Applications of the TBM to many areas, including classification and target identification, have been presented in [25–29] in recent times. We therefore find it appealing for the navigation of mobile robots.

This work investigates the possibility of using the transferable belief model (TBM) as a promising alternative for the navigation system of a nonholonomic mobile robot. First, a neural robust tracking controller, comprising an adaptive wavelet neural network (AWNN) controller, is used to achieve tracking control of the mobile robot under some uncertainties without any knowledge of the robot parameters. Then, based on a fusion of ultrasonic sensor data designed with the transferable belief model (TBM), a new approach for path planning is proposed. The local map, represented as an occupancy grid with time updates, is proven to be suitable for real-time applications. The attractive and repulsive potential functions are modified by the TBM pignistic belief value. Taking the velocities of the moving obstacles and static objects into account, the suggested method can determine whether an obstacle is dynamic or static without any previous knowledge of the moving obstacles, and then select the relevant obstacles for the corresponding robot avoidance motion.

The rest of the paper is organized as follows: Section 2 presents the mathematical representation of a mobile robot with two actuated wheels. Section 3 discusses the nonlinear kinematic-WNN backstepping controller as applied to the tracking problem. Section 4 proposes a new method for map building based on a sonic rangefinder. A path-planning approach is discussed in Section 5. Simulation and experiment are shown in Section 6.

    2. Model of a mobile robot with two actuated wheels

The kinematic model of an ideal mobile robot is widely used in mobile robot (MR) control [30–37]. The MR with two driven wheels shown in Fig. 1 is a typical example of a nonholonomic mechanical system. OXY is the reference coordinate system; $O_r X_r Y_r$ is the coordinate system fixed to the mobile robot; $O_r$, the middle point between the right and left driving wheels, is the origin of the mobile robot; d is the distance from $O_r$ to P; 2b is the distance between the two driving wheels; and r is the radius of each wheel. In the 2-D Cartesian space, the pose of the robot is represented by

$q = [x,\ y,\ \theta]^T \qquad (1)$

where $(x, y)^T$ is the coordinate of $O_r$ in the reference coordinate system, and the heading direction $\theta$ is taken counterclockwise from the OX-axis.

The motion model, including the kinematics and dynamics,

    of the nonholonomic mobile robot system can be described

by Hou et al. [34]. It is assumed that the wheels of the robot do not slide. This is expressed by the nonholonomic constraint

$\dot{x}\sin\theta - \dot{y}\cos\theta = 0 \qquad (2)$

All kinematic equality constraints are assumed to be independent of time and to be expressed as follows:

$A(q)\dot{q} = 0 \qquad (3)$
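Constraint (2) can be checked numerically: integrating the standard unicycle kinematics ($\dot{x} = v\cos\theta$, $\dot{y} = v\sin\theta$, $\dot\theta = \omega$; an illustrative model choice consistent with (2), not the paper's full dynamics) never produces lateral slip. A minimal Python sketch:

```python
import math

# Integrate unicycle kinematics and verify the nonholonomic constraint (2):
# x' sin(theta) - y' cos(theta) = 0 at every step.
def rollout(v, w, dt=1e-3, steps=2000):
    x = y = theta = 0.0
    max_violation = 0.0
    for _ in range(steps):
        xdot = v * math.cos(theta)
        ydot = v * math.sin(theta)
        # residual of Eq. (2); zero up to floating-point rounding
        max_violation = max(max_violation,
                            abs(xdot * math.sin(theta) - ydot * math.cos(theta)))
        x += xdot * dt
        y += ydot * dt
        theta += w * dt
    return (x, y, theta), max_violation

pose, violation = rollout(v=0.5, w=0.3)
print(violation)  # ~0: lateral slip is excluded by construction
```

The constraint holds identically here because the velocity is always directed along the heading; a real robot violates it only if the wheels slide, which assumption (2) rules out.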

By appropriate procedures and definitions, the robot dynamics can be transformed as [30,34]

$\bar{M}\dot{v}_w + \bar{V}v_w + \bar{F} + \bar{\tau}_d = \bar{\tau} \qquad (4)$

where

$\bar{M} = \begin{pmatrix} \frac{r^2}{4b^2}(m b^2 + I_0) + I_w & \frac{r^2}{4b^2}(m b^2 - I_0) \\ \frac{r^2}{4b^2}(m b^2 - I_0) & \frac{r^2}{4b^2}(m b^2 + I_0) + I_w \end{pmatrix}$

$\bar{V} = \begin{pmatrix} 0 & \frac{r^2}{2b} m_c d\,\dot{\theta} \\ -\frac{r^2}{2b} m_c d\,\dot{\theta} & 0 \end{pmatrix}$

$\bar{F} = J^T F, \qquad \bar{\tau}_d = J^T \tau_d, \qquad \bar{B} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$

where $m = m_c + 2m_w$ and $I_0 = m_c d^2 + 2 m_w b^2 + I_c + 2 I_m$; $m_c$ and $m_w$ are the masses of the robot body and of a wheel with its actuator, respectively; $I_c$, $I_w$ and $I_m$ are the moments of inertia of the robot body about the vertical axis through P, of the wheel with a motor about the wheel axis, and of the wheel with a motor about the wheel diameter, respectively.

Fig. 1. Nonholonomic mobile robot. (The figure shows the reference frame OXY, the robot frame $O_r X_r Y_r$, the point P at distance d from $O_r$, the wheel radius r, and the wheel separation 2b.)



Since the wheels of the robot are driven by actuators, it is necessary to take the dynamics of the wheel actuators into account. The motor models, obtained by neglecting the voltage on the inductances, are:

$\tau_r = k_a (v_r - k_b \omega_r)/R_a, \qquad \tau_l = k_a (v_l - k_b \omega_l)/R_a \qquad (5)$

where $v_r$ and $v_l$ are the input voltages applied to the right and left motors; $k_b$ is equal to the voltage constant multiplied by the gear ratio; $R_a$ is the electric resistance constant; $\tau_r$ and $\tau_l$ are the right and left motor torques multiplied by the gear ratio; and $k_a$ is the torque constant multiplied by the gear ratio. The dynamic equations of the motor-wheels are:

$\tau_r = \frac{k_a v_r}{R_a} - \frac{k_a k_b \omega_r}{R_a}, \qquad \tau_l = \frac{k_a v_l}{R_a} - \frac{k_a k_b \omega_l}{R_a} \qquad (6)$

By using (4)–(6), the mobile robot model including the robot kinematics, robot dynamics, and wheel actuator dynamics can be written as:

$\bar{M}\dot{v}_w + \bar{V}v_w + \bar{F} + \bar{\tau}_d = \frac{k_a}{R_a}\bar{B}u - \frac{k_a k_b}{R_a}\bar{B}X v_w \qquad (7)$

$\omega_H = [\omega_r,\ \omega_l]^T = X V \qquad (8)$

where $u = [v_r,\ v_l]^T$ collects the motor voltages and $V$, $X$ are selected as

$X = \begin{pmatrix} 1/r & b/r \\ 1/r & -b/r \end{pmatrix} \qquad (9)$

$V = \begin{pmatrix} v \\ \omega \end{pmatrix} \qquad (10)$

    3. Nonlinear kinematic-WNN backstepping controller

In the research area of trajectory tracking for mobile robots, the tracking-control problem is classified as either a kinematic or a dynamic tracking-control problem, depending on whether the system is described by a kinematic model or a dynamic model. Using kinematic or dynamic models of nonholonomic mobile robots, various approaches [21,32–37] consider the wheel torques to be the control inputs, though in reality the wheels are driven by actuators, and therefore using the actuator input voltages as control inputs is more realistic. To this effect, the actuator dynamics is combined with the mobile robot dynamics. In this section, a neural robust tracking controller from our recent work [36] is discussed briefly; it achieves tracking control under some uncertainties without any knowledge of the robot parameters. More details, proofs and simulations can also be found in [36].

Recall the robot dynamics (7):

$\bar{M}\dot{v}_w + \bar{V}v_w + \bar{F} + \bar{\tau}_d = \frac{k_a}{R_a}\bar{B}u - \frac{k_a k_b}{R_a}\bar{B}X v_w \qquad (11)$

The controller for $\tau$ is chosen as

$\tau = \hat{f} + K_4 e_c + \gamma \qquad (12)$

where $K_4$ is a positive definite diagonal gain matrix, and $\hat{f}$ is an estimate of $f(x)$, which is defined by

$f(x) = \bar{M}(q)\dot{v}_c + \bar{V}_m(q, \dot{q})v_c + \bar{F}(v) \qquad (13)$

where $x = [v^T,\ v_c^T,\ \dot{v}_c^T]^T$, so the error can be defined as

$e_c = v_c - v \qquad (14)$

Using (11)–(14), so that the error dynamics are stable, the input term is chosen such that

$\bar{M}\dot{e}_c = -(K_4 + \bar{V}_m)e_c + \left\{\tilde{f} + \bar{\tau}_d\right\} + \frac{k_a k_b}{R_a}\bar{B}X v_c - \frac{k_a}{R_a}\bar{B}u + \gamma \qquad (15)$

The function in braces in (15) can be approximated by an AWNN, such that

$\bar{M}\dot{v}_c + \bar{V}_m v_c + \bar{F} = \hat{W}^T w(x_i; c, m) \qquad (16)$

where the term $\hat{W}^T w(x_i; c, m)$ represents an adaptive WNN approximation. The structure of the tracking control system is presented in Fig. 2. In this figure, no knowledge

    Fig. 2. Structure of the wavelet neural network-based robust control system.



of the dynamics of the cart is assumed. The function of the AWNN is to reconstruct the dynamics (16) by learning them on-line.

Assumption 3.1. The approximation errors and disturbances are bounded, i.e., there exist specified constants $D_e$ and $D_r$ satisfying $\|e_f\| \leq D_e$ and $\|\tau_d\| \leq D_r$, respectively.

For ease of notation, the approximation error $\tilde{f}$ can be rewritten as

$\tilde{f} = W^T w - \hat{W}^T \hat{w} + e_f \qquad (17)$

where $\tilde{W} = W - \hat{W}$ and $\tilde{w} = w - \hat{w}$. The controller is designed to achieve good tracking and stability results with the connecting weights, dilation, and translation parameters of the WNN tuned on-line. To achieve this, a linearization technique is used to transform the nonlinear output of the WNN into a partially linear form so that the Lyapunov theorem extension can be applied.

The expansion of $\tilde{w}$ in a Taylor series about $(\hat{m}, \hat{c})$ is obtained as follows:

$\tilde{w} = A^T(m - \hat{m}) + B^T(c - \hat{c}) + H = A^T\tilde{m} + B^T\tilde{c} + H \qquad (18)$

where $A = \partial w/\partial m|_{m=\hat{m}}$ and $B = \partial w/\partial c|_{c=\hat{c}}$ collect the partial derivatives of the wavelet outputs $w_{11}, w_{12}, \ldots, w_{pq}$ with respect to the translation and dilation parameters, and $H$ is a vector of higher-order terms assumed to be bounded by a positive constant. Substituting (18) into (17), it is revealed that

$\tilde{f} + \bar{\tau}_d = \hat{W}^T\tilde{w} + \tilde{W}^T\tilde{w} + \tilde{W}^T\hat{w} + e_f + \bar{\tau}_d = \tilde{W}^T(\hat{w} - A^T\hat{m} - B^T\hat{c}) + \hat{W}^T(A^T\tilde{m} + B^T\tilde{c}) + r \qquad (19)$

where $r = W^T(A^T m + B^T c + H) - \hat{W}^T(A^T m + B^T c) + e_f + \bar{\tau}_d$.

By substituting (19) into (15), the closed-loop system dynamics can be rewritten as

$\bar{M}\dot{e}_c = -(K_4 + \bar{V}_m)e_c + \frac{k_a k_b}{R_a}\bar{B}X v_c - \frac{k_a}{R_a}\bar{B}u + \tilde{W}^T(\hat{w} - A^T\hat{m} - B^T\hat{c}) + \hat{W}^T(A^T\tilde{m} + B^T\tilde{c}) + r + \gamma \qquad (20)$

Moreover, assume the following inequality:

$\|r\| \leq \|W^T(A^T m + B^T c + H) + e_f + \bar{\tau}_d\| + \|\hat{W}\|\,\|A^T m + B^T c\| \leq k_p^T p$

where $p^T = [1,\ \|\hat{W}\|]$ and $k_p = [k_{p1},\ k_{p2}]^T$, with $k_{p1} \geq \|W^T(A^T m + B^T c + H) + e_f + \bar{\tau}_d\|$ and $k_{p2} \geq \|A^T m + B^T c\|$; i.e., $k_p$ is an uncertainty bound.

The robust term $\gamma$ is designed as

$\gamma = -\hat{k}_p^T p\,\mathrm{sgn}(e_c) \qquad (21)$

Definition 1. Considering (21) for the nonholonomic mobile robot, if controller (12) is used, the closed-loop error dynamics given by (20) are locally asymptotically stable with the approximation network parameter tuning laws given by

$\dot{\hat{W}} = \eta_1 (\hat{w} - A^T\hat{m} - B^T\hat{c})\,e_c^T - \eta_1\kappa\|e_c\|\hat{W} \qquad (22)$

$\dot{\hat{m}} = \eta_2 A\hat{W}e_c - \eta_2\kappa\|e_c\|\hat{m} \qquad (23)$

$\dot{\hat{c}} = \eta_3 B\hat{W}e_c - \eta_3\kappa\|e_c\|\hat{c} \qquad (24)$

$\dot{\hat{k}}_p = \eta_4\|e_c\|p \qquad (25)$

where $\eta_1$, $\eta_2$, $\eta_3$, $\eta_4$ and $\kappa$ are positive constants; $\hat{k}_p$ is an on-line estimate of the uncertainty bound $k_p$.

    4. Map building

An occupancy grid is essentially a data structure that indicates the certainty that a specific part of space is occupied by an obstacle; it is widely used for map building in mobile robotics [17–21]. It represents the environment as a two-dimensional array. Each element of the array corresponds to a specific square on the surface of the actual world, and its value shows the certainty that there is some obstacle there. When new information about the world is received, the array is adjusted on the basis of the nature of the information.
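The update cycle described above can be sketched as a minimal data structure (the cell size, array shape, and additive clamp rule below are our illustrative choices, not the paper's):

```python
import numpy as np

class OccupancyGrid:
    """Minimal 2-D occupancy grid: each cell holds a certainty in [0, 1]."""
    def __init__(self, width, height, cell_size=0.1, prior=0.5):
        self.cell_size = cell_size
        self.grid = np.full((height, width), prior)  # 0.5 = unknown

    def cell_of(self, x, y):
        # world coordinates -> array indices (row, col)
        return int(y / self.cell_size), int(x / self.cell_size)

    def update(self, x, y, delta):
        """Adjust the certainty of the cell containing (x, y) by new evidence."""
        i, j = self.cell_of(x, y)
        self.grid[i, j] = np.clip(self.grid[i, j] + delta, 0.0, 1.0)

g = OccupancyGrid(width=30, height=20)
g.update(1.05, 0.55, +0.3)   # a sonar return suggests an obstacle here
g.update(0.25, 0.25, -0.3)   # free-space evidence along the beam
```

The paper replaces this naive additive update with TBM evidence combination, as described next, but the array-of-cells layout is the same.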

Here, the proposed map-building process utilizes the Transferable Belief Model (TBM). The sonar sensor readings are interpreted by this theory and used to modify the map using the transferable belief model combination rule. Whenever the robot moves, it captures new information about the environment and updates the old map. Because this uncertainty theory is used to build an occupancy map of the whole environment, an important point must be considered: no position in the updated map is known with absolute exactness, since no obstacle can be measured with absolute accuracy. Each discrete region along the path may therefore carry belief degrees for the states E (empty) and O (occupied), together with an uncertainty degree $\Theta$. For this purpose, the map-building system processes the readings in order to assess, as accurately as possible, which cells are (partially) occupied by obstacles and which cells are (partially) empty and thus suitable for robot navigation.

4.1. Review of the theory of the transferable belief model

In this section, we briefly review some basics of belief function theory as explained in the transferable belief model (TBM). More details can be found in [25–29].

A.1 (Frame of discernment). The frame of discernment is a finite set of mutually exclusive elements, denoted $\Omega$ hereafter. Beware that the frame of discernment is not necessarily exhaustive.



A.2 (Basic belief assignment). A basic belief assignment (bba) is a mapping $m^\Omega$ from $2^\Omega$ to [0,1] that satisfies $\sum_{A \subseteq \Omega} m^\Omega(A) = 1$. The basic belief mass (bbm) $m(A)$, $A \subseteq \Omega$, is the value taken by the bba at $A$.

A.3 (Categorical belief function). A categorical belief function on $\Omega$ focused on $A^* \subseteq \Omega$ is a belief function whose related bba $m^\Omega$ satisfies:

$m^\Omega(A) = \begin{cases} 1 & \text{if } A = A^* \\ 0 & \text{otherwise} \end{cases} \qquad (26)$

When all bbas are categorical, the TBM becomes equivalent to classical propositional logic. Two limiting cases of categorical bbas have received special names.

A.4 (Normalizing or nonnormalizing). The basic belief mass $m(A)$ represents the part of belief exactly committed to the subset $A$ of $\Omega$ given a piece of evidence, or equivalently to the fact that all we know is that $A$ holds. Normalizing a bba means requiring that $m(\emptyset) = 0$ and that the sum of all bbms given to the non-empty subsets is 1. This amounts to closing the world. When a bba allows $m(\emptyset) > 0$, we call this the open world. In the TBM, we do not require $m(\emptyset) = 0$ as in Shafer's work.

A.5 (Related functions). The degree of belief bel(A) is defined as bel: $2^\Omega \to [0,1]$ with, for all $A \subseteq \Omega$,

$\mathrm{bel}(A) = \sum_{\emptyset \neq B \subseteq A} m(B) \qquad (27)$

The degree of plausibility pl(A) is defined as pl: $2^\Omega \to [0,1]$ with, for all $A \subseteq \Omega$,

$\mathrm{pl}(A) = \sum_{B \subseteq \Omega,\, B \cap A \neq \emptyset} m(B) = \mathrm{bel}(\Omega) - \mathrm{bel}(\bar{A}) \qquad (28)$

The commonality function q is defined as q: $2^\Omega \to [0,1]$ with, for all $A \subseteq \Omega$,

$q(A) = \sum_{A \subseteq B,\, B \subseteq \Omega} m(B) \qquad (29)$

The functions q, bel and pl are always in one-to-one correspondence. More details and proofs of the relationships among the functions above can be found in [26,27].

A.6 (The conjunctive rule). Given two bbas $m^\Omega_1$, $m^\Omega_2$ from different sensors, the bba that results from their conjunctive combination is defined by

$(m_1 \cap m_2)(A) = \sum_{B, C \subseteq \Omega,\, B \cap C = A} m_1(B)\, m_2(C), \quad \forall A \subseteq \Omega \qquad (30)$

A.7 (The pignistic transformation for decision). The pignistic transformation maps bbas to so-called pignistic probability functions. The pignistic transformation of $m^\Omega$ is given by

$\mathrm{BetP}^\Omega(A) = \sum_{B \subseteq \Omega} \frac{|A \cap B|\, m^\Omega(B)}{|B|\,(1 - m^\Omega(\emptyset))}, \quad \forall A \in \Omega \qquad (31)$

where $|A|$ is the number of elements of $\Omega$ in $A$. This solution is a classical probability measure from which expected utilities can be computed in order to take optimal decisions. Details and justifications can be found in [25,29].
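Definitions A.2, A.6 and A.7 can be made concrete with a short sketch (illustrative only; encoding subsets of the frame {E, O} as Python frozensets, and the mass values themselves, are our choices, not the paper's):

```python
from itertools import product

OMEGA = frozenset({"E", "O"})  # frame of discernment: empty / occupied

def conjunctive(m1, m2):
    """Conjunctive rule (30): combine two bbas; mass may land on the empty set
    (open-world conflict), as the TBM permits."""
    out = {}
    for (B, mB), (C, mC) in product(m1.items(), m2.items()):
        A = B & C
        out[A] = out.get(A, 0.0) + mB * mC
    return out

def betp(m):
    """Pignistic transformation (31): spread each mass evenly over the
    elements of its focal set, renormalizing by 1 - m(empty)."""
    conflict = m.get(frozenset(), 0.0)
    p = {w: 0.0 for w in OMEGA}
    for B, mB in m.items():
        for w in B:  # the empty set contributes nothing here
            p[w] += mB / (len(B) * (1.0 - conflict))
    return p

# Two sonar readings expressed as bbas over 2^Omega (made-up values)
m1 = {frozenset({"O"}): 0.5, frozenset({"E"}): 0.3, OMEGA: 0.2}
m2 = {frozenset({"O"}): 0.4, frozenset({"E"}): 0.4, OMEGA: 0.2}

m12 = conjunctive(m1, m2)
print(betp(m12))  # combined pignistic belief over {E, O}
```

Note that the combined bba keeps its mass on the empty set rather than renormalizing it away; only the decision step (betp) removes the conflict, which is exactly the TBM's two-level (credal/pignistic) design.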

4.2. Sensor modeling and measurement interpretation

The Polaroid Ultrasonic Rangefinder is used for map building. This is a very common device that can detect distances in the range of 0.4–7.5 m with 1% accuracy. In [21], the sensor model converts the range information into probability values. The model in Figs. 3 and 4 is given by Eqs. (32)–(37).

In Region I, where $R - \varepsilon < r < R + \varepsilon$:

$x = \frac{1}{2}\left(\frac{\beta - |\vartheta|}{\beta}\right)^2 + \frac{1}{2}\left(\frac{\varepsilon - |R - r|}{\varepsilon}\right)^2 \qquad (32)$

$E(O|x) = \begin{cases} 0 & -1 < x < 0 \\ \frac{1}{2}\left(\frac{2x^2}{1 + x^2} + 1\right) & 0 \leq x \leq 1 \end{cases} \qquad (33)$

$E(E|x) = 0.8 - E(O|x) \qquad (34)$

In Region II, where $R_{min} < r < R - \varepsilon$:

$x = \frac{1}{2}\left(\frac{\beta - |\vartheta|}{\beta}\right)^2 + \frac{1}{2}\left(\frac{R - \varepsilon - r}{R - \varepsilon}\right)^2 \qquad (35)$

$E(E|x) = \begin{cases} 0 & -1 < x < 0 \\ \frac{1}{2}\left(\frac{2x^2}{1 + x^2} + 1\right) & 0 \leq x \leq 1 \end{cases} \qquad (36)$

$E(O|x) = 0.8 - E(E|x) \qquad (37)$

$R$ is the range response from the ultrasonic sensor, and $(r, \vartheta)$ is the coordinate of a point inside the sonar cone. $\varepsilon$ is the range error, and it distributes the evidence in Region I. $\beta$ is the half open beam angle of the sonar cone.
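A numeric sketch of the Region I interpretation, Eqs. (32)–(34) (the call values are illustrative; with $r = 9$, $\vartheta = 6$, $R = 10$, $\beta = 15$, $\varepsilon = 2$ the formulas reproduce the $x = 0.3050$, $E(O|x) = 0.5851$ values that appear in Example 1 below):

```python
def region1_masses(r, theta, R, beta, eps):
    """Convert one in-cone point (r, theta) near the range response R into
    evidence masses per Eqs. (32)-(34): occupied, empty, and unknown."""
    x = 0.5 * ((beta - abs(theta)) / beta) ** 2 \
        + 0.5 * ((eps - abs(R - r)) / eps) ** 2
    e_occ = 0.0 if x < 0 else 0.5 * (2 * x * x / (1 + x * x) + 1)
    e_emp = 0.8 - e_occ            # Eq. (34)
    return {"O": e_occ, "E": e_emp, "Theta": 0.2}

# Example: beam half-angle 15, range error 2 (units as in Example 1)
m = region1_masses(r=9, theta=6, R=10, beta=15, eps=2)
print(m)  # the three masses always sum to 1
```

Because $E(E|x) = 0.8 - E(O|x)$ and the unknown state keeps a fixed 0.2, the output is always a valid bba on {O, E, $\Theta$}.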

4.3. The fusion of data from sonar

The sonar data interpreted by the transferable belief model are collected and updated into a map using the same theory of evidence. In our approach, the basic

Fig. 3. Typical beam pattern of the Polaroid ultrasonic sensor.



probability assignment corresponding to a range reading r is obtained directly using Eqs. (32)–(37). The next example illustrates the usage of the transferable belief model for the map-building process.

Example 1. Let the robot be located in cell (15, 10) at the beginning of the navigation process. The condition is shown in Table 1 and Figs. 5 and 6. The new basic probability assignments for cells in the map become (the first lies in Region I and the second lies in Region II):

(Static obstacles)

For obstacle $X_0$: $x^{S_0}_0 = 0.1878$, $E^{S_0}_0(O|x) = \frac{1}{2}\left(\frac{2x^2}{1+x^2}+1\right) = 0.5341$, $E^{S_0}_0(E|x) = 0.8 - 0.5341 = 0.2659$, $E(\Theta) = 0.2$; and $x^{S_1}_0 = 0.1878$, $E^{S_1}_0(O|x) = 0.5341$, $E^{S_1}_0(E|x) = 0.2659$, $E(\Theta) = 0.2$.

For obstacle $X_1$: $x^{S_0}_1 = 0.3050$, $E^{S_0}_1(O|x) = 0.5851$, $E^{S_0}_1(E|x) = 0.2149$, $E(\Theta) = 0.2$; and $x^{S_1}_1 = 0.6250$, $E^{S_1}_1(O|x) = 0.7809$, $E^{S_1}_1(E|x) = 0.0191$, $E(\Theta) = 0.2$.

(Dynamic obstacles)

For $X_4$: $x^{S_2}_4 = 0.3903$, $E^{S_2}_4(O|x) = 0.6322$, $E^{S_2}_4(E|x) = 0.1678$, $E(\Theta) = 0.2$; for $X_5$: $x^{S_2}_5 = 0.4450$, $E^{S_2}_5(O|x) = 0.6625$, $E^{S_2}_5(E|x) = 0.1375$, $E(\Theta) = 0.2$.

In the next moment, the robot moves in the direction depicted by the broken line and the sonars scan the environment again. This situation is given in Fig. 6. $S_0$, $S_1$ and $S_2$ detect some possible obstacles at: $r^{S_0}_0 = 4$, $\vartheta^{S_0}_0 = 3$; $r^{S_1}_0 = 5$, $\vartheta^{S_1}_0 = 6$; $r^{S_0}_1 = 6$, $\vartheta^{S_0}_1 = 6$; $r^{S_1}_1 = 6$, $\vartheta^{S_1}_1 = 0$; $r^{S_2}_1 = 7$, $\vartheta^{S_2}_1 = 3$; $r^{S_1}_2 = 9$, $\vartheta^{S_1}_2 = 3$; $r^{S_2}_2 = 9$, $\vartheta^{S_2}_2 = 6$; $r^{S_2}_6 = 3$, $\vartheta^{S_2}_6 = 6$.

$x^{S_0}_0 = 0.3100$, $E(O|x) = 0.5877$, $E(E|x) = 0.2123$, $E(\Theta) = 0.2$; $x^{S_1}_0 = 0.2503$, $E(O|x) = 0.5590$, $E(E|x) = 0.2410$, $E(\Theta) = 0.2$.

$x^{S_0}_1 = 0.2112$, $E(O|x) = 0.5427$, $E(E|x) = 0.2573$, $E(\Theta) = 0.2$; $x^{S_1}_1 = 0.6250$, $E(O|x) = 0.7809$, $E(E|x) = 0.0191$, $E(\Theta) = 0.2$.

Fig. 4. The profile of the ultrasonic sensor model (profiles over the beam angle and over the range, between $R_{min}$, $R - \varepsilon$ and $R + \varepsilon$).

Table 1
Condition of this example.

Robot position: $x_r = 15$, $y_r = 10$.
Dynamic obstacles, position and moving direction: $X_{4x} = 14$, $X_{4y} = 10$, moving in the positive X-axis direction; $X_{5x} = 15$, $X_{5y} = 7$, moving in the positive X-axis direction.
Static obstacles, position: $X_{0x} = 8$, $X_{0y} = 8$; $X_{1x} = 10$, $X_{1y} = 5$.
Distances $r$ and angles $\vartheta$ detected by the different sonars: $X_0$: $r^{S_0}_0 = 7$, $\vartheta^{S_0}_0 = 3$, $r^{S_1}_0 = 7$, $\vartheta^{S_1}_0 = 3$; $X_1$: $r^{S_0}_1 = 9$, $\vartheta^{S_0}_1 = 6$, $r^{S_1}_1 = 9$, $\vartheta^{S_1}_1 = 0$; $X_4$: $r^{S_2}_4 = 5$, $\vartheta^{S_2}_4 = 3$; $X_5$: $r^{S_2}_5 = 11$, $\vartheta^{S_2}_5 = 3$.



$x^{S_2}_1 = 0.3278$, $E(O|x) = 0.5970$, $E(E|x) = 0.2030$, $E(\Theta) = 0.2$; $x^{S_1}_2 = 0.3278$, $E(O|x) = 0.5970$, $E(E|x) = 0.2030$, $E(\Theta) = 0.2$.

(Dynamic obstacle)

$x^{S_2}_2 = 0.1878$, $E(O|x) = 0.5341$, $E(E|x) = 0.2659$, $E(\Theta) = 0.2$; $x^{S_2}_6 = 0.3753$, $E(O|x) = 0.6235$, $E(E|x) = 0.1765$, $E(\Theta) = 0.2$.

    Fig. 5. Initial position.

Fig. 6. Occupancy determination based on sonar measurements at t = 2 s.



Evidence about the occupancies of cells, including those of the dynamic objects, is combined using the transferable belief model, as shown in Table 2. From the new results it can be concluded that the evidence about the occupancy of cells has increased. This process continues until the target is reached.

5. Path planning

In this section, we adopt a new path-planning strategy in which the robot replans the path as new observations are acquired. What is more, the navigation recognition law that can determine whether an obstacle is dynamic or static without any previous knowledge is discussed in this section.

5.1. Dynamic object recognition law

Condition 1. Consider the pignistic values and basic probability assignments given by the transferable belief model based on the updated information. At time $t$, suppose the number of sensors is $i$ and the interval time is 1. Event A is a moving obstacle if:

(1) $m^{S_1 \cdots S_i}_{[t-1,t]}(A_E) + m^{S_1 \cdots S_i}_{[t-1,t]}(A_\Theta) \to 1$

(2) $m^{S_1 \cdots S_i}_{[t-1,t]}(A_\Theta) \geq 0.5$

Proof. Let $H = \{h_1, h_2, \ldots, h_n\}$ denote a set of $n$ hypotheses, with the likelihoods $l(h_i|x)$ given for every $h_i \in H$, and let $X$ denote the set of possible values this observation can take.

$m^x(A) = \prod_{h_i \in A} l(h_i|x) \prod_{h_i \notin A} (1 - l(h_i|x)) \qquad (38)$

$\mathrm{pl}^x(A) = 1 - \prod_{h_i \in A} (1 - l(h_i|x)) \qquad (39)$

$\mathrm{BetP}(A) = \sum_{B \subseteq X} \frac{|A \cap B|\, m^X(B)}{|B|\,(1 - m^X(\emptyset))}, \quad \forall A \in X \qquad (40)$

Suppose the sensor measurement is $A \in X$ and its likelihoods are $m^{S_1 S_2 \cdots S_i}_{[t-1,t]}(A_O) = a$ and $m^{S_1 S_2 \cdots S_i}_{[t-1,t]}(A_E) = b$. With (38),

$m^{S_1 S_2 \cdots S_i}_{[t-1,t]}(A_\Theta) = (1 - a)(1 - b) \qquad (41)$

From (40),

$\mathrm{BetP}^{S_1 S_2 \cdots S_i}_{[t-1,t]}(A_{h_1}) = \frac{a \sum_{i=0}^{n-1} C^i_{n-1}\, b^i (1-b)^{n-1-i}/(i+1)}{1 - (1-a)(1-b)} \qquad (42)$

The sum can be simplified, following Delmotte [28], to:

$\sum_{i=0}^{n-1} C^i_{n-1}\, b^i (1-b)^{n-1-i}/(i+1) = \frac{1 - (1-b)^n}{nb} \qquad (43)$

Thus:

$\mathrm{BetP}^{S_1 S_2 \cdots S_i}_{[t-1,t]}(A_{h_1}) = \frac{a}{nb} \cdot \frac{1 - (1-b)^n}{1 - (1-a)(1-b)} \qquad (44)$

Let $n = 2$, since there are two hypotheses, occupancy (O) and empty (E):

$\mathrm{BetP}^{S_1 S_2 \cdots S_i}_{[t-1,t]}(A_O) = \frac{a}{2b} \cdot \frac{1 - (1-b)^2}{1 - (1-a)(1-b)} = \frac{a}{2b} \cdot \frac{2b - b^2}{a + b - ab} = \frac{a(2 - b)}{2(1 - m(A_\Theta))} \qquad (45)$

If $\mathrm{BetP}^{S_1 S_2 \cdots S_i}_{[t-1,t]}(A_O) \ll \mathrm{BetP}^{S_1 S_2 \cdots S_i}_{[t-1,t]}(A_\Theta)$, event A being occupied is unbelievable; thus, for a small constant $\epsilon$, if $\mathrm{BetP}(A_O)/\mathrm{BetP}(A_\Theta) < \epsilon$ we can believe that occupancy of event A is unlikely.

$\frac{\mathrm{BetP}^{S_1 S_2 \cdots S_i}_{[t-1,t]}(A_O)}{\mathrm{BetP}^{S_1 S_2 \cdots S_i}_{[t-1,t]}(A_\Theta)} = \frac{a(2-b)\,/\,\big(2(1 - m^{S_1 S_2 \cdots S_i}_{[t-1,t]}(A_\Theta))\big)}{m^{S_1 S_2 \cdots S_i}_{[t-1,t]}(A_\Theta)\,/\,\big(1 - m^{S_1 S_2 \cdots S_i}_{[t-1,t]}(A_\Theta)\big)} = \frac{a(2 - b)}{2\, m^{S_1 S_2 \cdots S_i}_{[t-1,t]}(A_\Theta)} \qquad (46)$

If $\mathrm{BetP}(A_O)/\mathrm{BetP}(A_\Theta) < \epsilon$, we get: $m^{S_1 S_2 \cdots S_i}_{[t-1,t]}(A_O) \to 0$, hence $m^{S_1 S_2 \cdots S_i}_{[t-1,t]}(A_E) + m^{S_1 S_2 \cdots S_i}_{[t-1,t]}(A_\Theta) \to 1$, and $m^{S_1 S_2 \cdots S_i}_{[t-1,t]}(A_\Theta) \geq 0.5$.
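The combinatorial identity (43), taken from Delmotte [28], can be verified numerically (a quick sanity sketch, not part of the paper):

```python
from math import comb

def lhs(n, b):
    # sum_{i=0}^{n-1} C(n-1, i) b^i (1-b)^(n-1-i) / (i+1)
    return sum(comb(n - 1, i) * b**i * (1 - b)**(n - 1 - i) / (i + 1)
               for i in range(n))

def rhs(n, b):
    # closed form (1 - (1-b)^n) / (n b)
    return (1 - (1 - b)**n) / (n * b)

for n in (2, 3, 7):
    for b in (0.1, 0.5, 0.9):
        assert abs(lhs(n, b) - rhs(n, b)) < 1e-12
print("identity (43) holds on the sampled grid")
```

The identity follows from $C^i_{n-1}\,n/(i+1) = C^{i+1}_n$, which turns the left side, multiplied by $nb$, into the binomial expansion of $1 - (1-b)^n$.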

Table 2
Probabilistic results from the transferable belief model.

(columns: $\Theta$ (unknown) / Occupancy / Empty)

$m^{S_0}_1(T_0)$: 0.3362, 0.3921, 0.1293
$m^{S_1}_1(T_0)$: 0.3362, 0.3921, 0.1293
$m^{S_0 S_1}_1(T_0)$: 0.6642, 0.2561, 0.0505
$m^{S_0}_1(T_1)$: 0.3248, 0.4629, 0.0875
$m^{S_0}_2(T_1)$: 0.3347, 0.4243, 0.1063
$m^{S_0 S_1}_1(T_1)$: 0.5409, 0.4550, 0.0022
$m^{S_2}_1(T_4)$: 0.3007, 0.5261, 0.0671
$m^{S_2}_1(T_5)$: 0.2911, 0.5714, 0.0464
$m^{S_0}_2(T_0)$: 0.3248, 0.4629, 0.0875
$m^{S_1}_2(T_0)$: 0.3320, 0.4243, 0.1063
$m^{S_0 S_1}_2(T_0)$: 0.5371, 0.3117, 0.0344
$m^{S_0 S_1}_{12}(T_0)$: 0.5281, 0.3874, 0.0476
$m^{S_0}_2(T_1)$: 0.3396, 0.4031, 0.1177
$m^{S_1}_2(T_1)$: 0.2149, 0.7660, 0.0042
$m^{S_2}_2(T_1)$: 0.3212, 0.4758, 0.0818
$m^{S_0 S_1 S_2}_2(T_1)$: 0.5478, 0.2516, 0.0004
$m^{S_0 S_1 S_2}_{12}(T_1)$: 0.5273, 0.3582, 0.0682
$m^{S_1}_2(T_2)$: 0.4212, 0.4758, 0.0818
$m^{S_2}_2(T_2)$: 0.3420, 0.3921, 0.1239
$m^{S_1 S_2}_2(T_2)$: 0.6444, 0.3016, 0.0368
$m^{S_2}_2(T_6)$: 0.3086, 0.5149, 0.0661
$m^{S_2}_{12}(T_6)$, $m(A_E \cup A_\Theta) \to 1$ (dynamic obstacle): 0.8444, 0.1203, 0.0309
$m^{S_2}_{12}(T_4)$, $m(A_E \cup A_\Theta) \to 1$ (dynamic obstacle): 0.8442, 0.1222, 0.0292
$m^{S_2}_{12}(T_5)$, $m(A_E \cup A_\Theta) \to 1$ (dynamic obstacle): 0.8436, 0.1289, 0.0239



Because $m^{S_1 S_2 \cdots S_i}_{[t-1,t]}(A_\Theta) + m^{S_1 S_2 \cdots S_i}_{[t-1,t]}(A_E) + m^{S_1 S_2 \cdots S_i}_{[t-1,t]}(A_O) = 1$, we get the condition:

(1) $m^{S_1 \cdots S_i}_{[t-1,t]}(A_E) + m^{S_1 \cdots S_i}_{[t-1,t]}(A_\Theta) \to 1$

(2) $m^{S_1 \cdots S_i}_{[t-1,t]}(A_\Theta) \geq 0.5$

If the condition is satisfied, we believe this obstacle is dynamic, and the velocity of this dynamic obstacle is defined as:

$v_{obs}(t) = \frac{\sqrt{\left(x_{\mathrm{BetP}^{S_1 S_2 \cdots S_i}_{t}(O)} - x_{\mathrm{BetP}^{S_1 S_2 \cdots S_i}_{t-T}(O)}\right)^2 + \left(y_{\mathrm{BetP}^{S_1 S_2 \cdots S_i}_{t}(O)} - y_{\mathrm{BetP}^{S_1 S_2 \cdots S_i}_{t-T}(O)}\right)^2}}{T} \qquad (47)$

where $T$ is the sampling interval.
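Condition 1 and the velocity estimate (47) can be sketched as follows (illustrative; the position of the occupancy-belief peak at each scan time is assumed to be already extracted from the map, and the numeric tolerance standing in for "→ 1" is our choice, not the paper's):

```python
import math

def is_dynamic(m_E, m_Theta, m_O, tol=0.15):
    """Condition 1: flag an obstacle cell as dynamic when the mass on
    occupancy vanishes, i.e. m(E) + m(Theta) -> 1 and m(Theta) >= 0.5.
    'tol' approximates the '-> 1' limit numerically (our choice)."""
    return (m_E + m_Theta > 1.0 - tol) and (m_Theta >= 0.5)

def obstacle_speed(p_now, p_prev, T):
    """Eq. (47): displacement of the occupancy-belief position over one
    sampling interval T gives the obstacle speed."""
    dx = p_now[0] - p_prev[0]
    dy = p_now[1] - p_prev[1]
    return math.hypot(dx, dy) / T

# Masses like the "dynamic obstacle" rows of Table 2:
print(is_dynamic(m_E=0.0309, m_Theta=0.8444, m_O=0.1203))
# Peak moved one cell along X in one interval:
print(obstacle_speed((15.0, 7.0), (14.0, 7.0), T=1.0))
```

A cell where occupancy evidence was strong in one scan and has evaporated in the next is exactly the signature the law exploits: the conflict between scans piles mass onto E and $\Theta$, so no prior model of the obstacle motion is needed.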

5.2. Potential field method based on TBM

The potential field method has been studied extensively for autonomous mobile robot path planning in the past decade. An attractive potential which drives the mobile robot to its destination can be described following Ge and Cui [10,13]. In this paper, we propose a new method that modifies this potential field method based on the TBM in a complex dynamic environment containing both static and dynamic obstacles.

Let $[x, y]$ be the obstacle position obtained from $BetP^{X}_{t}(O)$ and $[x_r, y_r]$ the position of the robot, with $v_{obst}$ computed as in Eq. (47) and $v_{RO}(t) = [v(t) - v_{obs}(t)]^{T} n_{RO}$. At time $t$,

$$F_{att}(q, v) = \begin{cases} BetP^{S_1S_2\cdots S_i}_{[1,2],t}(O)\, m \partial_q \left\| q_{tar}(t) - q(t) \right\|^{2} n_{RT} & \text{if condition (1) is not satisfied} \\ BetP^{S_1S_2\cdots S_i}_{t}(O)\, m \partial_q \left\| q_{tar}(t) - q(t) \right\|^{2} n_{RT} + BetP^{S_1S_2\cdots S_i}_{t}(O)\, n \partial_v \left\| v_{tar}(t) - v(t) \right\|^{2} n_{VRT} & \text{if condition (1) is satisfied} \end{cases} \quad (48)$$

where $q(t)$ and $q_{tar}(t)$ denote the positions of the robot and the target at time $t$, respectively; $q = [x\ y\ z]^T$ in a 3-dimensional space or $q = [x\ y]^T$ in a 2-dimensional space; $v(t)$ and $v_{tar}(t)$ denote the velocities of the robot and the target at time $t$, respectively; $\|q_{tar}(t) - q(t)\|$ is the Euclidean distance between the robot and the target at time $t$; $\|v_{tar}(t) - v(t)\|$ is the magnitude of the relative velocity between the target and the robot at time $t$; $\partial_q$ and $\partial_v$ are scalar positive parameters; $n$ and $m$ are positive constants; $BetP^{X}(O)$ is the pignistic decision value with which the sensors detect the occupancy belief for the static and dynamic cases, respectively; $n_{RT}$ is the unit vector pointing from the robot to the target, and $n_{VRT}$ is the unit vector denoting the relative velocity direction of the target with respect to the robot.
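A minimal sketch of the attractive force of Eq. (48) in 2-D follows; the exponent 2, the `dynamic_seen` flag standing for the outcome of condition (1), and the function name are assumptions read off the reconstruction, not the paper's code:

```python
import math

def f_att(q, q_tar, v, v_tar, betp_O, dq, dv, m, n, dynamic_seen):
    """Eq. (48) sketch: position attraction scaled by BetP(O); when a
    dynamic obstacle is confirmed (condition (1)), a velocity term
    along the relative-velocity direction n_VRT is added."""
    ex, ey = q_tar[0] - q[0], q_tar[1] - q[1]
    dist = math.hypot(ex, ey)
    n_RT = (ex / dist, ey / dist)          # unit vector robot -> target
    scale = betp_O * m * dq * dist ** 2    # position-attraction magnitude
    Fx, Fy = scale * n_RT[0], scale * n_RT[1]
    if dynamic_seen:                       # condition (1): add velocity term
        evx, evy = v_tar[0] - v[0], v_tar[1] - v[1]
        speed = math.hypot(evx, evy)
        if speed > 0.0:
            # magnitude betp_O*n*dv*speed^2 along n_VRT = (evx, evy)/speed
            vscale = betp_O * n * dv * speed ** 2 / speed
            Fx, Fy = Fx + vscale * evx, Fy + vscale * evy
    return Fx, Fy
```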

Also, a repulsive potential can be written as:

$$F_{rep}(q, v) = \begin{cases} 0 & \text{if } \rho_s(q, q_{obs}) - \rho_m(v_{RO}) \geqslant \rho_0 \text{ or } v_{RO} \leqslant 0 \text{ or condition (1) is satisfied} \\ F_{rep1} + F_{rep2} & \text{if } 0 < \rho_s(q, q_{obs}) - \rho_m(v_{RO}) < \rho_0 \text{ or condition (1) is satisfied} \\ \text{not defined} & \text{if } v_{RO} > 0 \text{ and } \rho_s(q, q_{obs}) < \rho_m(v_{RO}) \end{cases} \quad (49)$$

where

$$F_{rep1} = -\frac{\eta\, BetP^{X}(O_v)}{\left(\rho_s(q, q_{obs}) - \rho_m(v_{RO})\right)^{2}} \left(1 + \frac{v_{RO}}{a_{max}}\right) n_{RO}$$

$$F_{rep2} = \frac{\eta\, BetP^{X}(O_v)\, v_{RO}\, v_{RO\perp}}{\rho_s(q, q_{obs})\, a_{max} \left(\rho_s(q, q_{obs}) - \rho_m(v_{RO})\right)^{2}}\, n_{RO\perp}$$

$$\rho_m(v_{RO}) = \frac{v_{RO}^{2}(t)}{2 a_{max}}$$

$$BetP^{X}(O_v) = \sum_{O_v \subseteq X} \frac{m^{X}(O_v)}{|O_v|} \cdot \frac{1}{1 - m^{X}(H)}, \quad \forall A \subseteq X$$

where $\rho_0$ is a positive constant describing the influence range of the obstacle; $\rho_s(q(t), q_{obs}(t))$ denotes the shortest distance between the robot and the body of the obstacle; $v_{RO}(t)$ is the relative velocity between the robot and the obstacle in the direction from the robot to the obstacle; $n_{RO}$ is a unit vector pointing from the robot to the obstacle; $a_{max}$ and $\eta$ are positive constants; $BetP^{X}(O)$ and $BetP^{X}(O_v)$ are the pignistic decision values with which the sensors detect the occupancy belief for the static and dynamic cases, respectively; and $m^{X}(H)$ is the unknown mass from the pignistic transformation.

From Eqs. (48) and (49), we obtain the total virtual potential force which drives the reference point of the mobile robot to the destination, described by:

$$F_{total} = F_{att} + F_{rep} \quad (50)$$
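The pignistic value $BetP^{X}(O)$ that scales both the attractive and repulsive terms comes from the TBM pignistic transformation, which splits each focal set's mass evenly over its singletons. A generic sketch (not the paper's implementation) over focal sets represented as Python frozensets, with the empty-set discounting as an assumption:

```python
def betp(masses):
    """Pignistic transformation: each focal set's mass is divided
    evenly among its elements, after discounting any mass on the
    empty set (standard TBM; the paper's exact normalisation by
    the unknown mass m(H) may differ)."""
    empty = masses.get(frozenset(), 0.0)
    out = {}
    for focal, m in masses.items():
        if not focal:
            continue  # empty set carries conflict, not decision mass
        share = m / (len(focal) * (1.0 - empty))
        for elem in focal:
            out[elem] = out.get(elem, 0.0) + share
    return out

# frame {E, O}: 0.5 occupied, 0.2 empty, 0.3 unknown (E u O)
p = betp({frozenset({'O'}): 0.5,
          frozenset({'E'}): 0.2,
          frozenset({'E', 'O'}): 0.3})
# p['O'] is about 0.65 and p['E'] about 0.35
```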

Assuming that the target moves outward or synchronously with the obstacle, the robot is obstructed by the obstacle and cannot reach the target. Refs. [13–17] indicate that in a highly dynamic environment where both the target and the obstacle are moving, a waiting method is often adopted.

    5.3. Local minimum problems

Like other local path planning approaches, the proposed navigation algorithm has a local minimum problem. To detect the onset of a local minimum, we adopt the method in [5] (see Fig. 7), which compares the robot-to-target direction, $\theta_t$, with the actual instantaneous direction of travel, $\theta_0$. If the robot's direction of travel is more than a certain angle ($90°$ in most cases) off the target point, that is, when

$$|\theta_t - \theta_0| > \theta_s \quad (51)$$

where $\theta_s$ is a trap warning angle, typically $90°$, we regard the robot as very likely about to be trapped.
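The trap test of Eq. (51) reduces to a wrapped angle comparison; a sketch, with the wrap-around handling added as an assumption so that nearly aligned headings on opposite sides of zero do not trigger a false alert:

```python
import math

def trapped(theta_t, theta_0, theta_s=math.pi / 2):
    """Eq. (51): flag a likely local-minimum trap when the travel
    direction deviates from the robot-to-target direction by more
    than the warning angle theta_s (90 deg by default). The
    difference is wrapped to [-pi, pi], so 350 deg vs 10 deg
    counts as a 20 deg deviation."""
    d = (theta_t - theta_0 + math.pi) % (2 * math.pi) - math.pi
    return abs(d) > theta_s

# heading straight at the target: no alert
assert not trapped(0.0, 0.0)
# travelling 120 deg off the target direction: alert
assert trapped(math.radians(120), 0.0)
```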

    Fig. 7. Incorporating a virtual target on local-minimum alert.

    W. Yaonan et al. / Measurement 44 (2011) 13891405 1397

  • 8/3/2019 AMR in Dynamic Environment

    10/17

Consider a common situation as shown in Fig. 8, in which the robot needs to detour around a long wall to reach the goal. At position A, the robot is heading toward the goal position and hence $|\theta_t - \theta_0| = 0$. But at position B, the robot changes its direction because of the repulsive force from the detected wall, and $|\theta_t - \theta_0| \geqslant 90°$. In such a situation, the original target point is replaced with a virtual target point, placed according to the following equations:

$$x_v = x_i + m \cos(\theta_i + \beta_{vt})$$
$$y_v = y_i + m \sin(\theta_i + \beta_{vt}) \quad (52)$$
$$\theta_v = \theta_0 + \beta_{vt}$$

where $m$ is the distance that the robot needs to travel to reach the virtual goal. Also,

$$m = L \cdot BetP^{X}(O) + R \quad (53)$$

where $L$ is the distance between the robot and the obstacle near the virtual target detected by the longest-range sensor in the corresponding direction; $BetP^{X}(O)$ is the pignistic decision value with which the sensor detects the occupancy belief in the direction $\theta_v$; $R$ is an offset which depends on the size of the robot; and $\beta_{vt}$ is a certain angle, typically $45°$.

Based on (51)–(53), the robot escapes from this trapping point and moves to point C. When the robot escapes from the condition above, or when no obstacle is detected, the original target point is restored, and the robot continues to proceed to the target point.
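The virtual-target placement of Eqs. (52) and (53) can be sketched as follows; the symbol `mu` stands for the travel distance $m$ in the text, and the sign of `beta_vt` (which side the robot turns) is an assumption:

```python
import math

def virtual_target(x_i, y_i, theta_i, theta_0, L, betp_O, R,
                   beta_vt=math.radians(45)):
    """Eqs. (52)-(53) sketch: place the virtual goal a distance
    mu = L * BetP(O) + R away from the current pose, rotated by
    the escape angle beta_vt (typically 45 deg)."""
    mu = L * betp_O + R                            # Eq. (53)
    x_v = x_i + mu * math.cos(theta_i + beta_vt)   # Eq. (52)
    y_v = y_i + mu * math.sin(theta_i + beta_vt)
    theta_v = theta_0 + beta_vt
    return x_v, y_v, theta_v
```

For example, with the robot at the origin heading along the x-axis, a sensed range of 2 m, an occupancy belief of 0.5, and an offset of 1 m, the virtual goal lands 2 m away at 45 degrees.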

    6. Simulation and experiment results

To show the usefulness of the proposed approach, a series of simulations have been conducted using an arbitrarily constructed environment including obstacles.

    6.1. Simulation results

In these simulations, the positions of all the obstacles, including moving obstacles, in the workspace are unknown to the robot. The robot is aware of its start and finish positions only. Simulation results obtained in three working scenarios are presented in Figs. 9–13. The robot has been modeled as a small circle, and imaginary sensors (sixteen in number) are placed in the form of an arc along the circumference of the robot. The minimum distance obtained within the cone of each sensor is considered as the distance of the obstacle from the sensor which detects any obstacle, and any distance information detected from the sonar is corrupted with Gaussian white noise. The first to fourth experiments are conducted to verify the proposed method in the MATLAB environment, and the fifth experiment is carried out in the MobileSim environment.

Experiment 1: Fig. 9 represents the obstacle avoidance

path of the mobile robot in the static case. Table 3 shows the condition settings of this experiment. As seen from the virtual plane shown in Fig. 9a, with the decrease of the distance between Obs6 and the robot, $BetP^{X}(O)$ becomes much larger, which makes the repulsive force much larger. Ultimately, the robot changes its path by deviating to the right. The complete process is shown in Fig. 9.

Experiment 2: This example illustrates the collision avoidance process in a dynamic environment. The scenario

Fig. 8. Anti-deadlock mechanism.

    Fig. 9. Collision avoidance (static case).


shown in Fig. 10 has some similarity with the scenario of example 1, but adds two moving obstacles. Fig. 10 thoroughly plots the obstacle avoidance path of the mobile robot in discrete time while it is maneuvered in a complex environment (including dynamic and static obstacles) where two rectangular obstacles are moving in

    Fig. 10. Collision avoidance (dynamic case).


different directions at different velocities; meanwhile, five static obstacles are also set in this environment. Table 4 shows the condition settings of this experiment.

Fig. 10 represents the scenario and the virtual plane in different time intervals: (a and b) 0 ≤ t ≤ 5 s; (c and d) 0 ≤ t ≤ 10 s; (e and f) 0 ≤ t ≤ 15 s; (g and h) 0 ≤ t ≤ 37 s. As seen from Fig. 10c and d, the robot is on a collision course with ObsD1, Obs1, and Obs2. However, it is more urgent to avoid collision with ObsD1, and the robot slows down to accomplish this. The robot avoids collision with Obs2 as shown in Fig. 10e and f. Fig. 10e and f also show that the robot neglects the moving obstacle ObsD2 and does not change its path, because there is no collision risk with ObsD2. The complete process is shown in Fig. 10g and h.

Experiment 3: For local minimum problem avoidance, rigorous testing is carried out in a cluttered maze consisting of concave obstacles and dynamic obstacles (Fig. 11). In the following Figs. 11 and 12, one symbol represents the static obstacles and another the dynamic obstacles. Points a, b, c, . . . represent several trapping points; further symbols mark the real goal

    Fig. 11. Collision avoidance (local minimum case).


    Fig. 12. Collision avoidance (mazes case).


and the current virtual goal, and arrows represent the total virtual potential force which drives the reference point of the mobile robot to the current position. Table 5 shows the dynamic obstacle settings of this experiment.

Fig. 11 represents the scenario and the virtual plane in different time intervals: (b and c) 0 ≤ t ≤ 10 s; (d and e) 0 ≤ t ≤ 30 s; (f and g) 0 ≤ t ≤ 40 s. As seen from Fig. 11b and c, the robot does not detect ObsD1 until it reaches point A, at which time it turns to the right, the side with the lowest collision possibility. Since ObsD1 then changes its moving direction and moves away from the robot, the robot continues to track the virtual goal in a counterclockwise direction until it reaches the virtual goal at point B. Since the goal is now to the left of the robot, it continues to track the current virtual goal (point C), at which time the robot detects ObsD2. But as Fig. 11b shows, the robot neglects this moving obstacle and does not change its path, because there is no collision risk with ObsD2. In Fig. 11d and e, there are several local minimum points, including points D, E, and F. At these points, where the robot's direction of travel is more than a certain angle ($|\theta_t - \theta_0| > 90°$) off the target point, the robot turns its direction and tracks the current virtual goal. The robot then continues to see the wall on its left until it arrives at points H and I, after which it goes towards the real target point. The complete process is shown in Fig. 11f and g: the robot moves counterclockwise around the obstacles through points J and K to the final goal.

    Fig. 13. Collision avoidance based on Mobilesim environment (dynamic case).

Table 3
Experimental conditions of obstacle avoidance (static case).

Initial robot position: xr = 0[m], yr = 0[m]
Goal position: Goalx = 30[m], Goaly = 20.5[m]
Obstacle positions:
  Obs1x = 8[m], Obs1y = 10[m]
  Obs2x = 13[m], Obs2y = 10[m]
  Obs3x = 18[m], Obs3y = 20[m]
  Obs4x = 20[m], Obs4y = 15[m]
  Obs5x = 24[m], Obs5y = 20[m]
  Obs6x = 9[m] → 12[m], Obs6y = 10[m]
Processing time: 30 s


Experiment 4: In this experiment, a more complex test is performed to present the effectiveness of the proposed approach in a cluttered environment with obstacle loops and mazes. As in experiment 3, one symbol represents the static obstacles and another the dynamic obstacles. Fig. 12a, c and e thoroughly plot the obstacle avoidance path of the mobile robot in discrete time while it is maneuvered in a dynamic environment where many dynamic obstacles are moving in different directions, whereas Fig. 12b, d and f give some detailed information about how the robot navigation system behaves at each discrete time. The proposed algorithm performs well in a cluttered dynamic environment, as shown in Fig. 12a and b. The processes of wall following, passing through a narrow door, and escaping from a local minimum point using our algorithm are shown in Fig. 12c–f.

Experiment 5: This example is shown in Fig. 13. It illustrates the collision avoidance process achieved by changing both speed and orientation when facing multiple moving obstacles. Fig. 13a represents the scenario, while Fig. 13b–e indicate the virtual plane in different time intervals: (b) 0 ≤ t ≤ 1.2 s; (c) 0 ≤ t ≤ 5.1 s; (d) 0 ≤ t ≤ 6.3 s; (e) 0 ≤ t ≤ 8.1 s. As seen from the virtual plane shown in Fig. 13b, there is a collision risk with D1 in the time 0 ≤ t ≤ 1.2 s. The robot changes its path by deviating to the right, as shown in Fig. 13b. In Fig. 13c and d, the robot changes its orientation to avoid collision with D1 and D2 using the proposed method.

6.2. Experiment with a Pioneer robot

The method presented in this paper has been tested on a Pioneer robot. The robot and the experiment setup are shown in Fig. 14. The laptop on top of the robot is in charge of all computation: motion control, planning, SLAM, and so on. The navigation is carried out in real time, and we only use sonar to detect obstacles.

In order to simulate dynamic obstacles, two football mobile robots (MT-Robot) are used under remote control. The orientations of the two MT-Robots are shown in Fig. 15. The track line of the Pioneer robot is indicated approximately in blue in this figure. The room is a clean environment with long wall obstacles and measures approximately 8.6 m by 6.5 m. The objects placed in the room and their positions are pointed out in Fig. 15. The position of the robot is recorded at regular time intervals. During the test executions, all programs were run under the Windows NT operating system, directly on the main processor of the robot, a 2.3-GHz Pentium. A test case is presented in Fig. 15.

In this experiment, the goal position is aligned to the front of the mobile robot, which is obstructed by obstacle walls and the two MT-Robots. The mobile robot starts to move towards the real goal. As it moves, it detects obstacles in front and to the left with its ultrasonic sensors. The navigation law makes a correct decision by indicating the right direction to the mobile robot until it reaches the virtual goal. Then the mobile robot keeps moving and detects dynamic obstacles in the right direction. The robot makes the correct decision and turns to the left, avoiding the detected obstacles successfully, and then reaches the goal.

Although feature detection using sonar is not very accurate, especially in a complex environment, and its real

Table 4
Experimental conditions of obstacle avoidance (dynamic case).

Initial robot position: xr = 0[m], yr = 0[m]
Goal position: Goalx = 29[m], Goaly = 25[m]
Initial dynamic obstacle positions, moving directions and velocities:
  ObsD1x = 10[m], ObsD1y = 0[m], moving in the positive Y-axis direction at 0.88 m/s
  ObsD2x = 12[m], ObsD2y = 15[m], moving in the positive X-axis direction at 0.19 m/s
Static obstacle positions:
  Obs1x = 8[m], Obs1y = 10[m]
  Obs2x = 12[m], Obs2y = 10[m]
  Obs3x = 18[m], Obs3y = 20[m]
  Obs4x = 20[m], Obs4y = 15[m]
  Obs5x = 24[m], Obs5y = 20[m]
Processing time: 37 s

    Fig. 14. Pioneer robot.

Table 5
Experimental conditions of dynamic obstacles.

Initial robot position: xr = 0[m], yr = 0[m]
Goal position: Goalx = 29[m], Goaly = 25[m]
Dynamic obstacle positions, moving directions and velocities:
  ObsD1x = 0[m] ↔ 6[m], ObsD1y = 5[m], moving back and forth along the X-axis at 1.20 m/s
  ObsD2x = 10[m], ObsD2y = 5[m] ↔ 10[m], moving back and forth along the Y-axis at 1.00 m/s
  ObsD3x = 15[m] ↔ 22[m], ObsD3y = 10[m], moving back and forth along the X-axis at 0.70 m/s
  ObsD4 = (0, 20) ↔ (5, 25), moving back and forth at 0.71 m/s
Processing time: 40 s


performance in this experiment is not as robust as the simulation experiments in MATLAB or MobileSim (the processing time became long, and waiting motions happened frequently), the robot can still achieve the goal based on this navigation method. Furthermore, the performance of the simulation and the real robot is different. The simulation has a faster cycle time for the iterations of the proposed method, which allows the robot to follow a smoother path. The reactions of the robot in the simulation are also different from the real robot. For instance, once a speed command is given to the real robot, the set point is obtained over a period of time; in the simulation, the speed set point is achieved almost immediately.

From these results, it can be seen that, based on our collision avoidance algorithm, the mobile robot can find a safe and smooth path for attaining the target position autonomously, whether in a static or dynamic environment, even when high noise is added to the sensor.

    7. Conclusion

In this paper, a new navigation strategy for a nonholonomic mobile robot in a dynamic environment was designed and fully tested based on the transferable belief model. The aim was to let the robot find a collision-free trajectory between the starting configuration and the goal configuration in a dynamic environment containing obstacles (both static and moving objects). For this purpose, a navigation strategy was designed which consists of sonar data interpretation and fusion, map building, and path planning. The sonar data fusion and the dynamic-object distinguishing law were developed using the transferable belief model. Based on the proposed judging law, a path planning algorithm was derived by modifying the potential field method. Simulation and experiment results validate the effectiveness of the proposed method. Though mobile robots are discussed here, the method is generally applicable to other types of problems, such as pattern recognition and object classification.

    Acknowledgements

The authors thank the anonymous editor and reviewers for their invaluable suggestions, which have been incorporated to improve the quality of this paper dramatically. This work was supported by the National Natural Science Foundation of China (60775047, 60673084), the National High Technology Research and Development Program of China (863 Program: 2008AA04Z214), the National Technology Support Project (2008BAF36B01), and the Research Foundation of Hunan Provincial Education Department (10C0356).

    References

[1] K.D. Rueb, A.K.C. Wong, Structuring free space as a hypergraph for roving robot path planning and navigation, IEEE Transactions on Pattern Analysis and Machine Intelligence 9 (2) (1987) 263–273.
[2] M.K. Habib, S. Yuta, Efficient online path planning algorithm and navigation for a mobile robot, International Journal of Electronics 69 (2) (1990) 187–210.
[3] M.J. Mataric, Integration of representation into goal-driven behavior-based robots, IEEE Transactions on Robotics and Automation 8 (3) (1992) 304–312.
[4] E. Rimon, D.E. Koditschek, Exact robot navigation using artificial potential functions, IEEE Transactions on Robotics and Automation 8 (5) (1992) 501–518.
[5] J. Borenstein, Y. Koren, Real-time obstacle avoidance for fast mobile robots, IEEE Transactions on Systems, Man and Cybernetics 19 (5) (1989) 1179–1187.
[6] S. Thrun, W. Burgard, D. Fox, A probabilistic approach to concurrent mapping and localization for mobile robots, Machine Learning 31 (1–3) (1998) 29–53.
[7] P. Svestka, M.H. Overmars, Motion planning for car-like robots using a probabilistic learning approach, International Journal of Robotics Research 16 (2) (1997) 119–143.
[8] S. Thrun, Probabilistic algorithms in robotics, AI Magazine 21 (4) (2000) 93–109.
[9] C.M. Clark, Probabilistic road map sampling strategies for multi-robot motion planning, Robotics and Autonomous Systems 53 (3–4) (2005) 244–264.
[10] S.S. Ge, Y.J. Cui, New potential functions for mobile robot path planning, IEEE Transactions on Robotics and Automation 16 (5) (2000) 615–620.
[11] K.P. Valavanis, T. Hebert, R. Kolluru, et al., Mobile robot navigation in 2-D dynamic environments using an electrostatic potential field, IEEE Transactions on Systems, Man and Cybernetics Part A: Systems and Humans 30 (2) (2000) 187–196.

    Fig. 15. Mobile robot and experiment.


[12] N.C. Tsourveloudis, K.P. Valavanis, T. Hebert, Autonomous vehicle navigation utilizing electrostatic potential fields and fuzzy logic, IEEE Transactions on Robotics and Automation 17 (4) (2001) 490–497.
[13] S.S. Ge, Y.J. Cui, Dynamic motion planning for mobile robots using potential field method, Autonomous Robots 13 (3) (2002) 207–222.
[14] J. Ren, K.A. McIsaac, R.V. Patel, Modified Newton's method applied to potential field based navigation for nonholonomic robots in dynamic environments, Robotica 26 (2008) 285–294.
[15] L. Yin, Y.X. Yin, C.J. Lin, A new potential field method for mobile robot path planning in the dynamic environments, Asian Journal of Control 11 (2) (2009) 214–225.
[16] F. Belkhouche, B. Belkhouche, Kinematics-based characterization of the collision course, International Journal of Robotics & Automation 23 (2) (2008) 127–136.
[17] Y.C. Chang, Y. Yamamoto, On-line path planning strategy integrated with collision and dead-lock avoidance schemes for wheeled mobile robot in indoor environments, Industrial Robot: An International Journal 35 (5) (2008) 421–434.
[18] A.O. Djekoune, K. Achour, R. Toumi, A sensor based navigation algorithm for a mobile robot using the DVFF approach, International Journal of Advanced Robotic Systems 6 (2) (2009) 97–108.
[19] J. Minguez, L. Montano, Sensor-based robot motion generation in unknown, dynamic and troublesome scenarios, Robotics and Autonomous Systems 52 (4) (2005) 290–311.
[20] L. Moreno, E. Dapena, Path quality measures for sensor-based motion planning, Robotics and Autonomous Systems 44 (2) (2003) 131–150.
[21] J. Velagic, B. Lacevic, B. Perunicic, A 3-level autonomous mobile robot navigation system designed by using reasoning/search approaches, Robotics and Autonomous Systems 54 (12) (2006) 989–1004.
[22] X.Y. Yang, M. Moallem, R.V. Patel, A layered goal-oriented fuzzy motion planning strategy for mobile robot navigation, IEEE Transactions on Systems, Man and Cybernetics Part B: Cybernetics 35 (6) (2005) 1214–1224.
[23] T. Kondo, Evolutionary design and behavior analysis of neuromodulatory neural networks for mobile robots control, Applied Soft Computing 7 (1) (2007) 189–202.
[24] J.A. Fernandez-Leon, G.G. Acosta, M.A. Mayosky, Behavioral control through evolutionary neurocontrollers for autonomous mobile robot navigation, Robotics and Autonomous Systems 57 (4) (2009) 411–419.
[25] P. Smets, Analyzing the combination of conflicting belief functions, Information Fusion 8 (4) (2007) 387–412.
[26] F. Delmotte, P. Smets, Target identification based on the transferable belief model interpretation of Dempster–Shafer model, IEEE Transactions on Systems, Man and Cybernetics Part A: Systems and Humans 34 (4) (2004) 457–471.
[27] T. Denoeux, P. Smets, Classification using belief functions: relationship between case-based and model-based approaches, IEEE Transactions on Systems, Man and Cybernetics Part A: Systems and Humans 36 (6) (2006) 1395–1406.
[28] F. Delmotte, Comparison of the performances of decision aimed algorithms with Bayesian and beliefs basis, International Journal of Intelligent Systems 16 (8) (2001) 963–981.
[29] P. Smets, Application of the transferable belief model to diagnostic problems, International Journal of Intelligent Systems 13 (3) (1998) 127–157.
[30] R. Fierro, F.L. Lewis, Control of a nonholonomic mobile robot: backstepping kinematics into dynamics, Journal of Robotic Systems 14 (3) (1997) 149–163.
[31] Q.J. Zhang, J. Shippen, B. Jones, Robust backstepping and neural network control of a low-quality nonholonomic mobile robot, International Journal of Machine Tools & Manufacture 39 (7) (1999) 1117–1134.
[32] S.X. Yang, M. Meng, Real-time fine motion control of robot manipulators with unknown dynamics, Dynamics of Continuous, Discrete and Impulsive Systems, Series B: Applications & Algorithms 8 (3) (2001) 339–358.
[33] G. Antonelli, S. Chiaverini, G. Fusco, A fuzzy-logic-based approach for mobile robot path tracking, IEEE Transactions on Fuzzy Systems 15 (2) (2007) 211–221.
[34] Z.G. Hou, A.M. Zou, L. Cheng, et al., Adaptive control of an electrically driven nonholonomic mobile robot via backstepping and fuzzy approach, IEEE Transactions on Control Systems Technology 17 (4) (2009) 803–815.
[35] W. Sun, Y.N. Wang, A robust robotic tracking controller based on neural network, International Journal of Robotics & Automation 20 (3) (2005) 199–204.
[36] Y. Zuo, Y.N. Wang, X.Z. Liu, et al., Neural network robust control for a nonholonomic mobile robot including actuator dynamics, International Journal of Innovative Computing, Information and Control 6 (8) (2010) 3437–3449.
[37] Y.N. Wang, W. Sun, Y.Q. Xiang, et al., Neural network-based robust tracking control for robots, Intelligent Automation and Soft Computing 15 (2) (2009) 211–222.

Wang Yaonan received the B.S. degree in computer engineering from East China Science and Technology University (ECSTU), Shanghai, China, in 1982, and the M.S. and Ph.D. degrees in electrical engineering from Hunan University, Changsha, China, in 1990 and 1994, respectively. From 1994 to 1995, he was a Postdoctoral Research Fellow with the National University of Defense Technology. From 1981 to 1994, he worked with ECSTU. From 1998 to 2000, he was a senior Humboldt Fellow in Germany, and from 2001 to 2004, he was a visiting professor with the University of Bremen, Bremen, Germany. He has been a Professor at Hunan University since 1995. His research interests are intelligent control and information processing, robot control and navigation, image processing, and industrial process control.

Yang Yimin received the B.E.E. and M.S.E.E. degrees in 2005 and 2009, respectively, from Xiangtan University and Hunan University, Hunan, China. He is currently working toward the Ph.D. degree at Hunan University. His research interests include robot control and navigation, intelligent information processing, and artificial neural networks.

Yuan Xiaofang received the B.S., M.S., and Ph.D. degrees in electrical engineering from Hunan University, Changsha, China, in 2001, 2006 and 2008, respectively. He is currently a lecturer with the College of Electrical and Information Engineering, Hunan University. His research interests include intelligent control theory and applications, kernel methods, and artificial neural networks.

Zuo Yi received the Ph.D. degree in Control Science and Engineering from Hunan University, Changsha, China, in 2009. He was a visiting scholar at the University of Waterloo from 2008 to 2009. His scientific interests include neural networks and robotic robust control.
