

Collision Avoidance and Path Finding in 2-D Environments¹

Shiwei Song

Lowell High School Registry 0512

1101 Eucalyptus Drive San Francisco, CA 94132

October 22, 2004 [email protected]

Abstract
Collision avoidance is useful in a wide range of applications including artificial intelligence, autonomous vehicle navigation, robotics, behavioral simulation, and video games. This paper explores techniques for achieving realistic collision-free movement in a 2-D environment using continuous motion planning. The underlying construct used in this process is a priori information from obstacles within a particle's vision. This information is used to predict and decide the movement of the particle in order to achieve collision avoidance. A series of actions is executed based on current and predicted information to steer the autonomous particle away from potential collisions. A new method of obstacle merging is presented which allows the particle to make more accurate decisions based on multiple obstacles in its surrounding environment.

Keywords
Collision avoidance, path finding, automation

¹ Simulations using Java applets can be found at http://www.andrew.cmu.edu/research/pathfind


1. INTRODUCTION
Obstacle avoidance has a wide range of applications in the areas of computing and engineering. Extensive research has explored solutions for collision detection, collision avoidance, and path planning. Although many approaches for collision-free movement have been proposed, there is still a demand for efficient, flexible, and adaptable techniques.

Collision detection and avoidance techniques can be classified into two categories [1]: 1) static collision detection and 2) continuous motion or path planning. Algorithms for static collision detection use a series of non-moving states to check for imminent collisions and take steps to avoid them. Continuous motion or path planning uses a priori information to calculate a collision-free path.

There are several proposed techniques for collision avoidance in a 2-D environment. One technique, proposed by Erdmann and Lozano-Perez [2], uses a generate-and-test paradigm: each object is given a priority, and paths are then generated one at a time based on that priority. Cameron [3] proposed a collision detection technique that uses continuous motion or path planning. This paper uses a similar technique that checks the paths of particles over a set number of time-steps to detect collisions. A group of techniques for static collision detection applies vector fields [1] and potential fields [4] around objects to repulse them when they get close. Borenstein and Koren [5] derived a similar process for robots, using the strengths of sensor feedback as the repulsive fields. The continuous motion or path planning approach was chosen for this project because of its efficiency and adaptability.

2. OVERVIEW
This paper explores algorithms that utilize a priori information from obstacles in a particle's field of view to achieve collision detection and avoidance. Steering decisions for the autonomous particle are made based on the position, velocity, and direction of both the particle and the obstacles in its path.
Each step of the collision avoidance process can be broken down into three basic procedures. See Figure 1 for more details.

• The first procedure is obstacle detection. Obstacles are detected when they enter a user-defined field of vision around the particle. This field of vision enables realistic simulation of real-world objects and drastically reduces the computational lag that plagues other techniques.

• Once obstacles are detected, they are processed in the second procedure, collision detection. The collision detection technique used here is similar to the approach described by Cameron [3]. This method considers the current position of each of the objects in the field of vision and predicts their movement over the next several steps to check for possible future collisions.


• Information from detected collisions is passed to the third procedure, information processing and steering decisions. Steering decisions can be split into two subcategories: 1) steering from a single obstacle and 2) steering from multiple obstacles. The new method of obstacle merging is introduced here to facilitate steering.

Obstacle merging is the idea of viewing multiple obstacles under certain conditions as a single obstacle cluster, and avoiding the cluster as a whole. This method allows the autonomous particle to make decisions based on a more complete view of its environment, and prevents problems present in other path planning techniques where only one moving object can be analyzed at a time.

Figure 1. Processing procedures for each step.

3. OBSTACLE DETECTION
Collision avoidance is the collision-free movement of a particle through its surroundings. In order to achieve collision-free movement, the particle must be aware of obstacles in its path. Obstacle detection is the process of filtering through all objects in the simulated environment and assembling a list of the objects that are relevant to the moving particle. Obstacle detection is achieved through two filters (Figure 2). The first filter is a large circle surrounding the autonomous particle. Once all objects are filtered through the first filter, they are passed down to the second filter. The second filter is a user-defined polygon (the simulations use a triangle) that simulates the particle's field of vision. Only objects that pass through the second filter are considered obstacles.


Figure 2. A screenshot of the filters in action. The blue circle is the first filter. The red triangle is the second filter.

Figure 3. The obstacle detection process.

A simulated environment can contain an indefinite number of objects. The calculations required to detect whether an object is within the field of vision can cause serious slow-down in the system during execution when the number of objects in the environment becomes large. Since using a circle as a filter takes less processing power than using a polygon, the implementation of the first filter dramatically reduces the number of computations required overall, thus reducing lag. A series of CPU benchmark tests² using MetaScreem³ v1.0.320 revealed the speed advantage of a dual-filter system.

                 Benchmark w/ 2 filters    Benchmark w/ 2nd filter only
10 particles     14932                     15191
30 particles     15307                     15434
100 particles    15344                     15469

Table 1. CPU benchmark results comparing the speed advantage of the filters. Smaller numbers indicate faster times.

The first filter is a large circle with a defined radius. A circle is used merely to speed up the application. The radius of the circle should be large enough that the minimum distance between the field of vision and the circumference of the circle is greater than the largest radius of any obstacle.

² Tested on an AMD Athlon 850 MHz processor with 512 MB SDR RAM running Windows XP Pro.
³ Copyright © iMP viSiOn 2002.


Detection for the first filter is achieved using the distance formula:

√( (x_o − x_p)² + (y_o − y_p)² ) ≤ R

where x_o and y_o are the x and y coordinates of the center of an object in the environment, x_p and y_p are the x and y coordinates of the center of the autonomous particle, and R is the radius of the first filter. A loop runs every step to go through all objects in the environment. Objects detected by this filter are added to an ArrayList, and objects no longer detected are removed from it.

The second filter is a user-defined polygon. The polygon can be defined by declaring the coordinates of its vertices. A triangle is used in the simulations to simulate the field of vision of an object with a forward-pointing sensor. Detection is achieved using a combination of three cases. The first case is when an object is touching the sides of the polygon. The second case is when an object is touching the vertices of the polygon. The third case is when an object is within the polygon. A returned value of "true" for any one of the cases signifies a detection of the obstacle. The detection of the first case uses the distance formula between a line and a point. (Figure 4)

(x_1, y_1) and (x_2, y_2) are two vertices of the field of vision. (x_o, y_o) is the coordinate of the center of the object. First find the general equation of the line that passes through the two vertices in the form Ax + By + C = 0. This line is labeled L_1. Similarly, we find the equations of L_2 and L_3, both orthogonal to L_1 and passing through the vertices. Using the formula

d = |A·x_o + B·y_o + C| / √(A² + B²)

find the distances d_2 and d_3 between (x_o, y_o) and L_2, L_3. If d_2 + d_3 = d_1, the distance between the two vertices, then the object is within the area bounded by the vertices. Now if d_4, the distance between the center of the object and L_1, is less than or equal to the radius R of the object, then the first case returns "true."

if( d2 + d3 == d1 && d4 <= R )
    return true;

The second case returns true if the object is touching any vertex of the field of vision. (Figure 5)

Figure 4. A general diagram of the first case.


This task can be achieved by finding the distance d_1 between the vertex and (x_o, y_o), and then comparing it with the radius R of the object.

if( d1 <= R )
    return true;

The third case is an extension of the first case. Three lines are drawn parallel to the three sides of the triangle. A technique similar to the first case is used to determine whether the center of the object (x_o, y_o) is within the area bounded by the three pairs of parallel lines. (Figure 6)

if( (x_o, y_o) bounded by 1st pair &&
    bounded by 2nd pair &&
    bounded by 3rd pair )
        return true;

If any of the cases returns "true," the object has passed the second filter and is added to another ArrayList. This ArrayList is passed to the collision detection process.

4. COLLISION DETECTION
In order to achieve efficient collision-free movement, only obstacles that can cause collisions in the future warrant action by the autonomous particle. The job of the collision detection process is to decide which obstacle will cause a collision at a future time-step. The process then returns the obstacle that will cause a collision in the fewest time-steps so that appropriate steering can be applied to avoid it. Only objects in the ArrayList passed down by the second filter are considered in the collision detection process; the particle should not act to avoid an object outside of its field of vision. The algorithm used in the collision detection process consists of a series of loops that go through all obstacles and simulate the next n time-steps. The loops first freeze all objects, then go through each obstacle individually, using a priori information on the obstacle's current position and velocity to predict its positions at future time-steps and thus detect possible future collisions. Lastly, the process returns the obstacle that causes a collision the soonest. The process is repeated every step. (Figure 7)

Figure 5. A general diagram of the second case.

Figure 6. A general diagram of the third case.
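The three second-filter cases of Section 3 can also be collapsed into one compact test: a circular object passes when its center lies inside the triangle, or within one radius of any edge segment; clamping the point-to-segment projection to the segment makes the vertex case (case two) fall out automatically. The following is a sketch with illustrative names, equivalent in effect to the three cases, not the paper's exact code:

```java
public class SecondFilter {
    // Distance from point (px, py) to the segment (x1, y1)-(x2, y2).
    // Clamping t to [0, 1] means endpoints (the vertices) are handled too.
    static double segDist(double px, double py,
                          double x1, double y1, double x2, double y2) {
        double dx = x2 - x1, dy = y2 - y1;
        double len2 = dx * dx + dy * dy;
        double t = len2 == 0 ? 0 : ((px - x1) * dx + (py - y1) * dy) / len2;
        t = Math.max(0, Math.min(1, t));
        return Math.hypot(px - (x1 + t * dx), py - (y1 + t * dy));
    }

    // Point-in-triangle via consistent signs of the three edge cross products.
    static boolean inTriangle(double px, double py, double[] tx, double[] ty) {
        boolean pos = false, neg = false;
        for (int i = 0; i < 3; i++) {
            int j = (i + 1) % 3;
            double cross = (tx[j] - tx[i]) * (py - ty[i])
                         - (ty[j] - ty[i]) * (px - tx[i]);
            if (cross > 0) pos = true;
            if (cross < 0) neg = true;
        }
        return !(pos && neg);
    }

    // A circular object of radius R passes the second filter when its center
    // is inside the triangle or within R of any edge.
    static boolean passes(double ox, double oy, double R,
                          double[] tx, double[] ty) {
        if (inTriangle(ox, oy, tx, ty)) return true;
        for (int i = 0; i < 3; i++) {
            int j = (i + 1) % 3;
            if (segDist(ox, oy, tx[i], ty[i], tx[j], ty[j]) <= R) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        double[] tx = {0, 10, 0}, ty = {0, 0, 10};
        System.out.println(passes(2, 2, 1, tx, ty));  // center inside: true
        System.out.println(passes(13, 0, 1, tx, ty)); // too far away: false
    }
}
```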


Figure 7. A flowchart of the collision detection process.

The function of the "n time-steps" is to provide a stopping point for the loop so that the loop does not go on forever if a collision does not occur. The stopping point for each obstacle can vary depending on its position and velocity. The stop is determined by measuring the distance between the center of the obstacle and the center of the autonomous particle at each time-step. When the distance after n time-steps becomes greater than the original distance (meaning that the obstacle is now moving away from the particle), the loop stops. The coordinates after each time-step are predicted by assuming that the particle and obstacle will maintain their velocities if no action occurs in the current step. Using the x and y components of the velocity vectors of the obstacle and particle, we can obtain their future coordinates with

( x_i + k·v_x, y_i + k·v_y )

where (x_i, y_i) is the initial coordinate and k is the current time-step number starting from 1. v_x and v_y are the x and y components of the velocity vector. In Figure 8, the distance d_2 is measured with each increasing k, and a future collision is detected when d_2 becomes smaller than the sum of the radius R_p of the

Figure 8. When d_2 = d_1, the loop stops.


particle and R_o of the obstacle. Below is the simplified Java code⁴ for the collision detection process.

for(int nI = 0; nI < bList.size(); nI++){   // bList is the name of the ArrayList
    aObstacle = (Obstacle) bList.get(nI);
    int nStep = 1;                          // counting the time-step
    while( d2 <= d1 ){                      // limits the loop
        update coordinates
        nStep++;
        if( d2 <= oR + pR ){                // if it hits
            if( lowStep >= nStep ){         // makes sure that only the soonest
                lowStep = nStep;            // collision is returned
                hitObstacle = aObstacle;
            }
            break;                          // breaks out of the inner loop because
        }                                   // a hit has been detected
    }
}
return hitObstacle;
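Since the listing above is simplified and will not compile as-is, a runnable sketch of the same prediction loop might look as follows. Class and field names are illustrative, and a maxSteps cap is added as a safety bound (an assumption, not stated in the paper):

```java
import java.util.List;

public class CollisionPredictor {
    static class Body {
        double x, y, vx, vy, r;
        Body(double x, double y, double vx, double vy, double r) {
            this.x = x; this.y = y; this.vx = vx; this.vy = vy; this.r = r;
        }
    }

    // Returns the obstacle that collides with the particle at the smallest
    // time-step k, assuming all bodies keep their current velocities.
    // Returns null when no collision is predicted before each obstacle
    // starts moving away (the variable stopping point from the text) or
    // before maxSteps is reached.
    static Body soonestCollision(Body p, List<Body> obstacles, int maxSteps) {
        Body hit = null;
        int lowStep = Integer.MAX_VALUE;
        for (Body o : obstacles) {
            double d0 = Math.hypot(o.x - p.x, o.y - p.y); // original distance
            for (int k = 1; k <= Math.min(lowStep, maxSteps); k++) {
                double px = p.x + k * p.vx, py = p.y + k * p.vy;
                double ox = o.x + k * o.vx, oy = o.y + k * o.vy;
                double d = Math.hypot(ox - px, oy - py);
                if (d <= o.r + p.r) {        // predicted hit at step k
                    if (k < lowStep) { lowStep = k; hit = o; }
                    break;
                }
                if (d > d0) break;           // obstacle now moving away: stop
            }
        }
        return hit;
    }
}
```

For example, a particle at the origin moving right at 1 unit per step, with a stationary obstacle at (5, 0) and both radii 1, is predicted to collide at step 3, when the center distance first drops to the sum of the radii.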

5. INFORMATION PROCESSING/STEERING
The most important aspect of an obstacle avoidance system is the ability to steer away from detected obstacles. The steering algorithm can range from simple ones that consider only the position of the obstacle to complex ones that consider a myriad of conditions. The algorithm used in this paper is relatively simple and efficient. Steering can be considered as two different cases. The first case is when there is only one obstacle to be considered. In this case, if the center of the obstacle is to the left of the particle's trajectory, the particle steers right, and vice versa. The second case is when there is more than one obstacle to be considered. In this case, if the orientation of the obstacles is positive compared to the particle's trajectory, then the particle steers right, and vice versa. (Figure 9)

Figure 9. A flowchart of the steering process.

4 The code has been simplified and will not actually run in a compiler.


Regardless of the differences between these two cases, the obstacle that will cause a collision the soonest is the center of focus. A loop searches through all objects in view to determine which case to use. If there are no other obstacles within a predetermined distance of the center of focus, the single-obstacle case is used. If there are one or more obstacles within that distance, the obstacle cluster case is used. The predetermined distance should be large enough to allow the autonomous particle to squeeze between the obstacles. In the case of avoiding a single obstacle, the steering direction is decided by looking at the position of the obstacle relative to the autonomous particle's trajectory. If the center of the obstacle is to the left of the particle's trajectory, the particle steers away by applying acceleration to the right. If the center of the obstacle is to the right of the particle's trajectory, the particle steers away by applying acceleration to the left.
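A minimal sketch of this case selection might look as follows (names and the `gap` parameter are illustrative; per the text, the gap should be roughly the particle's diameter plus a constant):

```java
import java.util.List;

public class CaseSelector {
    static class Obs {
        double x, y;
        Obs(double x, double y) { this.x = x; this.y = y; }
    }

    // Decide between the single-obstacle case and the cluster case.
    // 'focus' is the obstacle returned by collision detection; 'inView'
    // holds the obstacles in the field of vision. The cluster case is used
    // when any other obstacle sits within 'gap' of the center of focus.
    static boolean useClusterCase(Obs focus, List<Obs> inView, double gap) {
        for (Obs o : inView) {
            if (o == focus) continue;
            if (Math.hypot(o.x - focus.x, o.y - focus.y) < gap) return true;
        }
        return false;
    }
}
```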

An obstacle's position relative to the particle's trajectory is determined by comparing the angles they make relative to the x-axis. Following Figure 10, (x_o, y_o) and (x_p, y_p) are the centers of the obstacle and particle, respectively. The blue arrow indicates the velocity vector of the particle with components v_x and v_y. The purple line connects the two centers. L_1 is a line parallel to the x-axis that passes through the center of the particle. Let α be the angle made by the velocity vector and L_1. Let β be the angle made by L_1 and the line segment that connects the two centers. α and β can be defined as follows:

α = cos⁻¹( v_x / |v| )

β = cos⁻¹( (x_o − x_p) / √((x_o − x_p)² + (y_o − y_p)²) )

If α > β, the obstacle is to the right of the particle's trajectory. If β > α, the obstacle is to the left of the particle's trajectory. The above equations need to be slightly modified depending on the quadrant in which the elements are located. The flaw of looking at only one obstacle is that in some situations the particle might get caught between two obstacles. To avoid this kind of situation, the second case of looking at an obstacle cluster is employed.
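As an aside, the left/right decision can also be computed without per-quadrant corrections by taking the sign of the 2-D cross product between the velocity vector and the vector from the particle to the obstacle; this is equivalent to comparing α and β above. A sketch with illustrative names:

```java
public class SteerSingle {
    // Returns +1 when the obstacle center is to the left of the particle's
    // velocity vector (so the particle should steer right), -1 when it is
    // to the right (steer left), and 0 when it lies exactly on the
    // trajectory. The sign of the 2-D cross product v x (o - p) gives the
    // same answer as the alpha/beta comparison in all four quadrants.
    static int side(double px, double py, double vx, double vy,
                    double ox, double oy) {
        double cross = vx * (oy - py) - vy * (ox - px);
        return cross > 0 ? 1 : (cross < 0 ? -1 : 0);
    }
}
```

Standard mathematical coordinates (y increasing upward) are assumed; in a screen coordinate system with y increasing downward, the two signs swap.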

Figure 10. Diagram of the single-obstacle case in the first quadrant.


(a) (b)

Figure 11. These two screenshots show the effect of steering based on a cluster of obstacles. The blue line indicates a magnified acceleration vector.

Figure 11 shows the effect of using an obstacle cluster. The image on the left (a) shows a particle steering towards a wall of obstacles because it considers only a single obstacle when making steering decisions. The image on the right (b) shows the particle steering away from the wall because it considers two obstacles. The general idea for avoiding an obstacle cluster is that if obstacles are close together and cannot be avoided individually, then they must be avoided as a whole. The idea of obstacle merging is introduced here to solve the problem of considering more than one obstacle.

The basic idea of obstacle merging is to find critical points in an obstacle cluster and react to these points efficiently as a whole. The first task required of obstacle merging is to decide which two⁵ obstacles should be considered as a cluster. The first obstacle in the cluster is the obstacle detected by the collision detection process. The second obstacle is found by searching for an obstacle in the field of vision that is within a certain distance of the first obstacle. This distance is smaller than the sum of the diameter of the particle and a constant⁶. Using this rule, obstacles with enough room for the particle to squeeze through are filtered out. The search might return more than one obstacle that fits the criteria, but only the closest one is picked as the second obstacle for obstacle merging. Once the two obstacles are found, the algorithm calculates the critical points. The critical points are the points of tangency made by a line tangent to both obstacles. These two points are chosen because they most closely represent the site of collision between a particle and an obstacle cluster.

⁵ More obstacles can be considered by slightly modifying the techniques, but within the frame of this paper, only two obstacles are considered when looking at a cluster.
⁶ In the simulations, the constant is 4 pixels.


The coordinates of the critical points are calculated as shown in Figure 12. p_2 and p_3 are the critical points. (x_1, y_1) and (x_2, y_2) are the centers of the two obstacles. L_1 is the line of tangency. L_2 and L_3 are lines parallel to the x-axis. d_1 is the distance between the centers of the two obstacles. d_2 + d_3 is the radius of the first obstacle. d_3 is the radius of the second obstacle. p_2 and p_3 are found by solving the system of equations:

α = cos⁻¹( d_2 / d_1 )

β = π − α

δ = cos⁻¹( (x_1 − x_2) / d_1 ) − β

p_3 = ( x_2 + d_3·sin(δ), y_2 − d_3·cos(δ) )

p_2 = ( x_1 + (d_2 + d_3)·sin(δ), y_1 − (d_2 + d_3)·cos(δ) )

One problem the algorithm must deal with is that the two obstacles have two tangent lines and therefore four critical points. This issue is resolved by comparing, for each pair of critical points, the sum of their distances to the center of the particle. The pair with the smaller sum lies on the side facing the particle and is the one used. The two critical points construct the tangent line L_1. The autonomous particle steers according to the orientation of this tangent line: if the tangent line is positively sloped relative to the particle's trajectory, the particle turns right; if it is negatively sloped, the particle turns left. Angles made by the lines with the x-axis are measured to find the orientation of the tangent line relative to the particle's trajectory. In Figure 14, L_1 and L_2 are two lines parallel to the x-axis. β is the angle made by the tangent line and L_1. It can be found by using

Figure 12. Diagram of the method used to find the critical points p_2 and p_3.

Figure 13. The steering decisions. The red arrow indicates the particle's trajectory; the black lines indicate the tangent lines.
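The critical-point construction and the facing-pair selection above can be sketched in Java using the standard outer-tangent formulation for two circles, which is equivalent to the d_1, d_2, d_3 derivation (class and method names are illustrative, not the paper's):

```java
public class ObstacleMerge {
    // Outer-tangent points (the "critical points") of two circles.
    // Returns {p1x, p1y, p2x, p2y}: the tangency point on circle 1 and on
    // circle 2 for one of the two outer tangent lines; sign = +1 or -1
    // selects which tangent. Requires center distance d >= |r1 - r2|.
    static double[] criticalPoints(double x1, double y1, double r1,
                                   double x2, double y2, double r2,
                                   int sign) {
        double d = Math.hypot(x2 - x1, y2 - y1);
        double theta = Math.atan2(y2 - y1, x2 - x1); // center-line angle
        double phi = Math.acos((r1 - r2) / d);       // radius-to-tangent angle
        double a = theta + sign * phi;               // shared radius direction
        return new double[] {
            x1 + r1 * Math.cos(a), y1 + r1 * Math.sin(a),
            x2 + r2 * Math.cos(a), y2 + r2 * Math.sin(a)
        };
    }

    // Of the two outer tangents, pick the pair of critical points whose
    // summed distance to the particle center is smaller -- the side facing
    // the particle.
    static double[] facingPair(double x1, double y1, double r1,
                               double x2, double y2, double r2,
                               double px, double py) {
        double[] a = criticalPoints(x1, y1, r1, x2, y2, r2, +1);
        double[] b = criticalPoints(x1, y1, r1, x2, y2, r2, -1);
        double sa = Math.hypot(a[0] - px, a[1] - py) + Math.hypot(a[2] - px, a[3] - py);
        double sb = Math.hypot(b[0] - px, b[1] - py) + Math.hypot(b[2] - px, b[3] - py);
        return sa <= sb ? a : b;
    }
}
```

A quick sanity check on the construction: at each returned point, the radius vector is perpendicular to the tangent direction, which is what makes the line tangent to both circles at once.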


β = cos⁻¹( (tangent line)_x / |tangent line| )

α is the angle made by the trajectory and L_2. It can be found using the same method as β. If β − α is an acute angle, the particle turns left. If β − α is an obtuse angle, the particle turns right. If β − α is a negative acute angle, the particle turns right. If β − α is a

negative obtuse angle, the particle turns left. The obstacle merging technique is very useful for navigating through a maze. The tangent lines formed by obstacles are parallel to the wall itself and prevent the particle from colliding with the wall⁷.

6. APPLICATIONS
One of the advantages of the collision avoidance technique discussed in this paper is its versatility. It is easily adaptable to fit most needs presented by different environments. Only a priori information on the velocity, position, and size of objects is needed for the system to work. The inner mechanisms of individual objects are not necessary and will not be considered or modified. The user can easily extend the complexity of the steering process to accommodate different situations. This section presents simulations written in Java to demonstrate the effectiveness of the technique. Applications of the technique in other fields are also discussed.

⁷ A demonstration of this can be found at http://crepuscule.info/pathfind/sim3/Simulation3.htm

Figure 14. The angles made by the lines and the x-axis.


Figure 15. The particle in an environment with static obstacles.

Figure 16. The particle in an environment with moving obstacles.


Figure 17. The particle in a maze-like environment.

Figures 15 through 17 are screenshots of the particle in different environments. In all three simulations, there was little, if any, change to the core algorithm. The simulations display the capabilities of the technique.

One application of the technique is in robotics. The movement of the particle in the simulation closely resembles a robot moving in a plane with a forward-pointing sensor. Instead of finding critical points through the velocity and position of an obstacle, a robot can use its sensors to assign critical points to the surface of obstacles. With the obstacle merging technique, the centers of obstacles are not required for collision avoidance. The velocity of an obstacle can be computed by comparing the displacement of the obstacle relative to the robot's known position and velocity.

Another application of the technique is behavioral simulation of flocks of birds, crowds of pedestrians, schools of fish, or other animals [6]. The steering algorithm can easily be modified by assigning priorities to particular reactions that mimic the animals in real life. The field of vision can easily be adjusted to match the vision of the real animal.

Perhaps the most direct application of this technique is in video games, where information on an obstacle's center and velocity is accessible. The algorithm can be modified to create realistic AI for opponents or computer-controlled allies. The technique is especially useful in top-down space shooters, where autopilots, wingmen, and homing missiles can all benefit from obstacle avoidance.


7. SUMMARY AND CONCLUSION
This paper explores a new approach to collision avoidance. The goal of this work is to discover simple computational techniques that achieve collision-free movement with high adaptability and efficiency. Applications of this work range from video game development to robotics design. The system uses three main processes to achieve collision avoidance. The first process identifies obstacles using filters. The second process predicts collisions. The last process makes intelligent steering decisions based on information passed down from the first two steps. In step three, a new technique of obstacle merging is introduced to effectively avoid clusters of obstacles. The techniques were tested through simulations written in Java and showed promising results: the autonomous particle in the simulations was able to avoid obstacles more than 90% of the time.

There are a few areas that can be improved. First, the steering behaviors can be greatly expanded to accommodate more situations. Second, the simulations limited obstacles to circles; more advanced methods that incorporate differently shaped obstacles can be explored. Third, the obstacle merging process can be expanded to look at more than two obstacles.

8. ACKNOWLEDGMENTS
I would like to thank Dr. Enmin Song for encouragement and advice, and Professor Chih-Cheng Hung and Mr. James Spellicy for corrections.


References

1. Parris K. Egbert and Scott H. Winkler, "Collision Free Object Movement Using Vector Fields," IEEE Computer Graphics and Applications, Vol. 16, No. 4, pp. 18-24, July 1996.

2. M. Erdmann and T. Lozano-Perez, "On Multiple Moving Objects," Proceedings IEEE International Conference on Robotics and Automation, Vol. 3, pp. 1419-1424, April 1986.

3. S. Cameron, "A Study of the Clash Detection Problem in Robotics," Proceedings IEEE International Conference on Robotics and Automation, pp. 488-493, March 1985.

4. O. Khatib, "Real-Time Obstacle Avoidance for Manipulators and Mobile Robots," Proceedings IEEE International Conference on Robotics and Automation, pp. 500-505, March 1985.

5. J. Borenstein and Y. Koren, "Real-Time Obstacle Avoidance for Fast Mobile Robots," IEEE Transactions on Systems, Man, and Cybernetics, Vol. 19, No. 5, pp. 1179-1187, 1989.

6. Craig W. Reynolds, "Steering Behaviors for Autonomous Characters," Proceedings of Game Developers Conference, pp. 763-782, 1999.