Optical Navigation System
Michael A. Paluszek∗
Joseph B. Mueller†
Princeton Satellite Systems, 6 Market Street, Suite 926, Plainsboro, NJ 08536
Michael G. Littman‡
Princeton University Department of Mechanical Engineering, Princeton, NJ 08540
The Optical Navigation System (ONS) is a flexible navigation system for deep space operations that does not require GPS measurements. The navigation solution is computed using an Unscented Kalman Filter (UKF) that can accept any combination of range, range-rate, planet chord width, landmark and angle measurements using any celestial object. The UKF employs a full nonlinear dynamical model of the orbit including gravity models and disturbance models. The ONS package also includes attitude determination algorithms using the UKF algorithm with the Inertial Measurement Unit (IMU). The IMU is used as the dynamical base for the attitude determination algorithms. That is, the gyro model is propagated, not the spacecraft model. This makes the sensor a more capable plug-in replacement for a star tracker, thus reducing the integration and test cost of adding this sensor to a spacecraft. The linear accelerometers are used to measure forces on the spacecraft. This permits accurate measurement of the accelerations applied by thrusters during maneuvers.
The paper includes test results from three cases: a geosynchronous satellite, the New Horizons spacecraft and the Messenger spacecraft. The navigation accuracy is limited by the knowledge of the ephemerides of the measurement targets but is sufficient for the purposes of orbit maneuvering.
I. Introduction
Spacecraft navigation has been done primarily by means of range and range rate measurements from ground-based antennas. This has proven to be very successful and reliable. The basis of these measurements is that the range

\rho = \sqrt{x^2 + y^2 + z^2} \quad (1)

and the range rate

\dot{\rho} = \frac{v^T r}{\rho} \quad (2)

contain information about all of the spacecraft states: the position in the earth-centered-inertial (ECI) reference frame, r = (x, y, z), and the velocity v = \dot{r}. If the geometry changes sufficiently as measurements are taken, all six states can be recovered. Once the state is reasonably well known, a recursive estimator can track the state with single range and range rate measurements. However, since knowledge of the spacecraft state is typically required only for occasional course-correction maneuvers, a batch estimator on the ground is often used. Note that when doing ground-based navigation, additional states, such as the latitude and longitude of the tracking antenna, are usually also estimated.
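As a concrete sketch of Equations 1 and 2, the two measurements can be computed from a position and velocity state as follows (the function name and the illustrative state are ours, not from the paper):

```python
import numpy as np

def range_range_rate(r, v):
    """Range (Eq. 1) and range rate (Eq. 2) from ECI position r and velocity v."""
    rho = np.linalg.norm(r)          # rho = sqrt(x^2 + y^2 + z^2)
    rho_dot = np.dot(v, r) / rho     # rho_dot = v^T r / rho
    return rho, rho_dot

# Illustrative state: 7000 km position with a perpendicular 7.5 km/s velocity
r = np.array([7000.0, 0.0, 0.0])   # km
v = np.array([0.0, 7.5, 0.0])      # km/s
rho, rho_dot = range_range_rate(r, v)
# rho is 7000 km; rho_dot is 0 because v is perpendicular to r
```

A single (rho, rho_dot) pair constrains only two combinations of the six states, which is why the geometry must change over time for full observability.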
The major drawback of this approach is that it ties up ground antennas and facilities, making it expensive. Furthermore, this method is not robust to loss of contact, space segment degradation and other factors. Additionally, if frequent navigation solutions are required, it may not be possible to schedule
∗President, Senior Member AIAA, [email protected].
†Senior Technical Staff, AIAA Member, [email protected].
‡Professor, Princeton University Department of Mechanical Engineering, [email protected].
1 of 24
American Institute of Aeronautics and Astronautics
the observations when needed. Some proposed missions, including solar sails, spacecraft with continuous propulsion and formation flying missions, require frequent updates. During the terminal approach in a rendezvous it may be necessary to update the navigation solution at a very high rate. For this reason optical navigation has been proposed for spacecraft navigation.
Position can be determined from the angle between a star and the vector to a landmark. This approach was used on Apollo1 as a backup to the conventional range and range rate measurements, which used the Telemetry, Tracking and Control system. The range and range rate measurements were obtained from antennas on the ground.
The first major attempt to shift navigation from ground-based to onboard operations was made by NASA during the Deep Space 1 (DS-1) mission2,3 in 1999. During this mission autonomous navigation was operational during the cruise phase, until the failure of the onboard star tracker. The principle adopted by DS-1 for autonomous orbit determination was optical triangulation.
Reduced State Encounter Navigation (RSEN), using optical images as the sole data type, was used on DS-1 to track comet Borrelly for 2 hours through closest approach. RSEN is initialized using the last ground or onboard estimate of the spacecraft state relative to the target just before commencement of the final approach phase. RSEN was also used in the Stardust mission during the comet Wild 2 flyby, and in the Deep Impact mission to navigate the impactor to hit an illuminated area on the comet Tempel 1 and by the flyby spacecraft to track the impact site.
The Japanese asteroid sample return mission Hayabusa4,5 employed wide angle cameras for onboard navigation, in conjunction with Light Detection and Ranging (LIDAR) for measurement of altitude with respect to the asteroid Itokawa. A target marker was dropped to act as a navigation aid by posing as an artificial landmark on the surface. A laser range finder was used to measure the altitude as the spacecraft got close (less than 120 m) to the asteroid. Fan beam sensors detected potential obstacles that could hit the solar cell panels.
The European Space Agency's Small Missions for Advanced Research in Technology (SMART-1) mission employed the Advanced Moon Imaging Experiment (AMIE) camera for Onboard Autonomous Navigation (OBAN).6,7 The moon and earth were used as beacons. The beacons were too bright to enable observation of stars with the AMIE camera. A star tracker was used to estimate pointing direction. The OBAN framework is also being used for designing a vision-based autonomous navigation system for future manned Mars exploration missions.8
The NEAR Shoemaker mission relied heavily on optical navigation.9 The system located small craters on the surface of Eros and used them as references. This enabled rapid orbit determination near and around Eros. The more traditional approach of using planet centroids was not feasible due to the irregular shape of Eros. The planet centroid technique was used on Mariner 9, Viking 1 and 2, Voyager 1 and 2 and Galileo. It was also used by NEAR on its approach to the asteroid Mathilde. More recently, this approach was used on the Mars Reconnaissance Orbiter (MRO).
Auto-tracking of landmarks is discussed further in Reference 10. This is a complex combination of pre-mission identification of landmarks and autonomous landmark identification and tracking. Auto-tracking is desirable for planetary orbit operations or landings.
The examples cited above illustrate the utility of optical navigation to a wide range of missions. In this paper, we present a flexible Optical Navigation System (ONS) for deep space operations. The overall system design is first presented in Section II. Next, Section III provides a brief discussion of the hardware prototype of the gimbal-mounted dual telescope design. Finally, simulation results from three test cases are presented in Section IV.
II. Overall System Design
A. Observables
The optical navigation system is designed to use multiple measurement sources. The following optical measurements can be used for navigation in the Optical Navigation System:
1. planet, moon or asteroid chord width
2. landmark size
3. centroid/star
4. landmark/star
5. landmark/landmark
6. landmark/centroid
7. star/star (for attitude determination)
For pairs of objects A/B, the angle between the two objects is computed. Figure 1 shows the observables.
[Figure 1 labels: ground station, spacecraft, center of coordinates; observables 1. range, 2. range rate, 3. chord width, 4. angle between landmarks, 5. angle between planets, 6. angle between a planet vector and a star.]
Figure 1. Observables.
Figure 2 shows the geometry of optical navigation in two dimensions. The angles θk are three possible types of angle measurements. θ1 is the angle between two planets or two landmarks on two planets. θ2 is the angle between two landmarks on a single planet or the chord width of the planet. θ3 is the angle between a landmark vector and a star.
The angle between two planets is
\cos\theta_1 = \frac{\rho_1^T \rho_2}{|\rho_1| |\rho_2|} \quad (3)

= \frac{(r + l_1)^T (r + l_2)}{|r + l_1| |r + l_2|}, \quad (4)
where ρi is the vector from the spacecraft to either planet i's landmark location or the planetary center, li is the vector from the sun to either planet i's landmark location or the planetary center, and r is the vector from the spacecraft to the sun.
The angle between landmarks on a planet or the angle subtended by the planetary chord is
\sin\frac{\theta_2}{2} = \frac{a}{|l - r|} \quad (5)
where a is either the planet radius or half the distance between two landmarks, and l is the vector from the sun to either the planet center or the center of the straight line joining the two landmarks.
The angle between a landmark vector and a star is
\cos\theta_3 = \frac{u^T (r - l)}{|r - l|} \quad (6)
where u is the vector from the spacecraft to a reference star.
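A minimal sketch of the three angle measurements in Equations 3-6 (the function names are ours; all vectors follow the sign conventions above):

```python
import numpy as np

def unit(x):
    return x / np.linalg.norm(x)

def angle_between_planets(r, l1, l2):
    """theta_1, Eqs. 3-4: angle between the spacecraft-to-body vectors
    rho_i = r + l_i, where r is the spacecraft-to-sun vector and l_i
    are the sun-to-body vectors."""
    return np.arccos(np.dot(unit(r + l1), unit(r + l2)))

def chord_angle(r, l, a):
    """theta_2, Eq. 5: full angle subtended by a chord of half-width a
    seen at distance |l - r|."""
    return 2.0 * np.arcsin(a / np.linalg.norm(l - r))

def landmark_star_angle(r, l, u):
    """theta_3, Eq. 6: angle between the landmark vector (r - l) and
    the star unit vector u."""
    return np.arccos(np.dot(u, unit(r - l)))
```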
[Figure 2 labels: sun, spacecraft, star and two planets, with vectors r, l1, l2, ρ1, ρ2 and u and angles θ1, θ2 and θ3.]
Figure 2. Measurement geometry.
Solar system bodies are illuminated by the sun. Table 1 lists the albedo and the radiation reflected from each body at its mean distance from the sun. The bond albedo is the ratio of the total reflected radiation to the incident radiation from the sun. The geometric albedo is the ratio of the reflected radiation to that from a flat Lambertian surface, which is an ideal reflector at all wavelengths.
To compute the navigation solution it is necessary to have at least one measurement of type θ1 or θ2, and two other measurements of any type. While these three measurements form the minimal required set, more measurements (whose statistics are known accurately) generally lead to better accuracy in the navigation solution.
Figures 3 and 4 show values of the observables for the Messenger and New Horizons missions. The planet chords are the angular widths for all possible planets. The power is the albedo flux from each planet. The minimum range is the range to the closest planet and the minimum angular separation is the minimum separation among all pairs of planets.
The ONS can always measure angles between planetary pairs since its telescopes are articulated. The angle between the telescopes is measured using stellar attitude determination of both telescopes. It will rarely be the case that two planets will appear in a single telescope's field of view.
[Figure 3 panels: spacecraft position (x and y, AU); planet chord widths (deg, log scale); albedo power flux (W/m²); minimum range (AU); and planet angular separation (deg), each versus time over 5 years for Mercury through Pluto.]
Figure 3. Messenger mission.
[Figure 4 panels: spacecraft position (x and y, AU); planet chord widths (deg, log scale); albedo power flux (W/m²); minimum range (AU); and planet angular separation (deg), each versus time over 10 years for Mercury through Pluto.]
Figure 4. New Horizons mission.
Table 1. Planetary albedo

Body      Geometric Albedo   Bond Albedo   Albedo Flux from the Bond Albedo (W/m²)
mercury   0.138              0.056         510.873
venus     0.84               0.720         1881.160
earth     0.367              0.390         533.130
mars      0.15               0.160         94.213
jupiter   —                  0.700         35.343
saturn    —                  0.750         11.272
uranus    —                  0.900         3.340
neptune   —                  0.820         1.240
pluto     0.44-0.61          0.145         0.127
europa    —                  0.670         33.828
triton    —                  0.760         1.149
moon      0.113              0.120         164.040
titan     —                  0.210         3.156
B. System Design
The Optical Navigation System is a two-telescope navigation and attitude determination system. The design includes the following prominent features:
1. Refracting telescope with a 200 mm focal length and 50 mm aperture
2. Each telescope has an azimuth and elevation gimbal and both telescopes are mounted on a turntable
3. Calibration cube with precision LED light sources for imager and optical path calibration
4. Built-in MEMS IMU with accelerometers and gyroscopes
5. Stepping motors are used for the gimbal and turntables. These provide high holding torque.
6. Angle encoders to measure angles to the tolerance required by attitude determination.
7. Built-in RAD750 processor with an analog interface board. The electronics box is separate from the telescope assembly and is mounted inside the spacecraft. The boards are in a 6U configuration, with a processor board and an analog board in the electronics box.
8. SpaceWire is used to communicate with the spacecraft.
9. Communication with the telescope chips is done either via magnetic induction slip rings or wirelessly. The latter is under development; it promises increased reliability, but radiation-hardened chips do not exist at the present time.
10. An Active Pixel Sensor (APS) with 2048 × 2048 pixels, each pixel 15 µm square, and 12-bit A/D conversion.
The telescope employs a piezoelectric shaker to produce blur. This is used in place of defocusing, which makes it difficult to correct optical aberrations. Making a shaker with a 15-year mission life is not difficult. The torsion bar suspensions on scanning earth sensors last 15 years and are similar to this actuator.
C. Software
The navigation software architecture is shown in Figure 6. The telemetry and command modules are not shown. The core of the software is the target tracking module. This module decides which targets to track and what observables to obtain. Besides commanding the turntable angle and the telescope azimuths and elevations, it also sets the focal lengths of the cameras. The camera processing takes the camera data and converts the data into matrix format. This is then processed by the camera processing algorithms, which perform star identification, planet centroiding and landmark identification. It ultimately outputs the centroids of the identified targets. The centroids are processed by the recursive navigation module, the attitude determination module and the batch navigation module.
[Figure 5 labels: telescope with 50 mm objective lens, 200 mm focal length, imaging chip, mirror and piezoelectric shaker; azimuth and elevation hinge motors; turntable motor; calibration cube; chassis with analog and processor 6U boards; interface box with power and RF; antennas; SpaceWire; power and data lines; spacecraft power bus.]
Figure 5. Sensor design.
Figure 8 shows the organization of the attitude determination software. The software generates centroids of stars in the camera fields of view. These are processed by a star identification algorithm. The resulting star unit vectors are passed to three different unscented filters. Two provide a single-camera solution. The other produces solutions based on two cameras and the relatively noisy gimbal measurements. The attitude base is provided by the built-in IMU. As long as there are at least 3 stars in a camera field of view, a pseudoinverse attitude solution is also computed.
The image processing is shown in Figure 9. The approach is to use Canny edge detection11 to find the edges of planets and to compute the centroids from curve fits to the edges. This permits partial disks, such as those due to phasing, to be used.
The Gaussian derivative applies a Gaussian noise filter and derivative mask in both the x and y directions. The resulting image is processed to compute the edge strength and edge perpendiculars. Non-maximum suppression thins out any lines that are more than a pixel in thickness. The hysteresis threshold tracks curves by finding points above an intensity threshold and then finding adjacent points that are also above a threshold. The resulting curves are processed either by the curve fit routine to find the centroids or by the landmark fit routine to find landmarks. The landmark fit looks at changes in edge direction to match landmarks in the database. Other approaches, such as the Hough Transform, might be applied. However, the Hough Transform for an ellipse requires the accumulation of a 3-dimensional matrix, which would be impractical on a flight computer.
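The centroid-from-curve-fit step can be sketched as a least-squares circle fit to the detected edge points; the synthetic arc below stands in for a Canny edge map of a partially illuminated disk (the function name and test geometry are ours):

```python
import numpy as np

def fit_circle(xs, ys):
    """Least-squares circle fit: (x - a)^2 + (y - b)^2 = R^2 is linear
    in (a, b, c) when rewritten as 2*a*x + 2*b*y + c = x^2 + y^2."""
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    rhs = xs**2 + ys**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a**2 + b**2)

# Partial arc only, as for a phase-limited disk: center (100, 80), radius 40
t = np.linspace(0.2, 2.0, 50)
xs = 100 + 40 * np.cos(t)
ys = 80 + 40 * np.sin(t)
a, b, R = fit_circle(xs, ys)
# The centroid (a, b) and radius R are recovered from the partial edge
```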
The Unscented Kalman Filter (UKF) is able to achieve greater estimation performance than the Extended Kalman Filter (EKF) through the use of the unscented transformation (UT). It is common to both the attitude determination and recursive navigation algorithms.
The UT allows the UKF to capture first and second order terms of the nonlinear system.12 The state estimation algorithm employed is that given by van der Merwe13 and the parameter estimation algorithm is that given by VanDyke et al.12 Unlike the EKF, the UKF does not require any derivatives or Jacobians of either the state equations or the measurement equations. Instead of just propagating the state, the filter propagates the state and additional sigma points, which are the states plus the square roots of rows or columns of the covariance matrix. Thus the state and the state plus a standard deviation are propagated. This captures the uncertainty in the state. It is not necessary to numerically integrate the covariance matrix.
The filters use a numerical simulation to propagate the state. No specialized, linearized or simplified simulations are needed. Thus the algorithms can update the actual simulation used for flight operations. The only limitation is that only measurements that are available in telemetry can be used as measurements
[Figure 6 blocks: target tracking linked to image processing, camera processing, recursive nav, batch nav and attitude determination; star catalog and ephemeris; serial, SpaceWire, hardware and RF interfaces; analog buffers and imaging chip; command and telemetry.]
Figure 6. Navigation software.
[Figure 7 blocks: cameras 1 and 2 feed centroiding, star ID, planet centroids and landmark ID; together with the telescope gimbal angles, IMU acceleration, star catalog and ephemeris these drive measurement computation and measurement processing, producing position and velocity and feeding targeting.]
Figure 7. Recursive Navigation.
[Figure 8 blocks: cameras 1 and 2 feed centroiding and star ID from the star catalog; a UKF and a pseudoinverse solution for each sensor output a quaternion and gyro bias; a combined UKF for both sensors uses the telescope gimbal angles and the IMU to output the quaternion, relative quaternion and gyro bias.]
Figure 8. Attitude determination.
[Figure 9 blocks: monochrome image → Gaussian derivative → edge strength and direction → non-maximum suppression → hysteresis threshold → curve fit (centroid) or landmark fit.]
Figure 9. Image processing.
in the model. This means that some parameters will not be identifiable. The visibility of parameters will depend on the values of the system states and the dynamical coupling between states. For example, if a temperature were zero it would not be possible to estimate the gain on the temperature sensor. One issue that cannot be avoided with system identification is the need for sufficient excitation of the system at a level that makes parameter changes apparent in the measurements.
Both the state and parameter estimators use the same definitions for the weights.
W_0^m = \frac{\lambda}{L + \lambda} \quad (7)

W_0^c = \frac{\lambda}{L + \lambda} + 1 - \alpha^2 + \beta \quad (8)

W_i^m = W_i^c = \frac{1}{2(L + \lambda)} \quad (9)

where

\lambda = \alpha^2 (L + \kappa) - L \quad (10)

\gamma = \sqrt{L + \lambda} \quad (11)
The constant α determines the spread of the sigma points around x and is usually set to between 10^-4 and 1. β incorporates prior knowledge of the distribution of x and is 2 for a Gaussian distribution. κ is set to 0 for state estimation and to 3 − L for parameter estimation. Care must be taken so that the weights are not negative.
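Equations 7-11 can be collected into a small routine (a sketch; the function name is ours):

```python
import numpy as np

def ukf_weights(L, alpha=1e-3, beta=2.0, kappa=0.0):
    """Sigma-point weights of Eqs. 7-9, with lambda and gamma from Eqs. 10-11."""
    lam = alpha**2 * (L + kappa) - L                    # Eq. 10
    Wm = np.full(2 * L + 1, 1.0 / (2.0 * (L + lam)))    # Eq. 9
    Wc = Wm.copy()
    Wm[0] = lam / (L + lam)                             # Eq. 7
    Wc[0] = lam / (L + lam) + 1.0 - alpha**2 + beta     # Eq. 8
    gamma = np.sqrt(L + lam)                            # Eq. 11
    return Wm, Wc, gamma

Wm, Wc, gamma = ukf_weights(L=6)
# The mean weights Wm always sum to one
```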
Initialize the parameter filter with the expected value of the parameters13
\hat{x}(t_0) = E\{x_0\} \quad (12)

where E is the expected value, and the covariance of the parameters is

P_{x_0} = E\{(x(t_0) - \hat{x}_0)(x(t_0) - \hat{x}_0)^T\} \quad (13)
The sigma points are then calculated. These are found by adding and subtracting the scaled square root of the covariance matrix to and from the current estimate of the parameters.
x_\sigma = \begin{bmatrix} \hat{x} & \hat{x} + \gamma\sqrt{P} & \hat{x} - \gamma\sqrt{P} \end{bmatrix} \quad (14)

If there are L states, the P matrix is L × L, so this array is L × (2L + 1). The state equations are of the form
\chi_i = f(x_{\sigma,i}, u, w, t) \quad (15)

where \chi_i is found by integrating the equations of motion. The mean state is
\hat{x} = \sum_{i=0}^{2L} W_i^m \chi_i \quad (16)
The state covariance is found from
P = \sum_{i=0}^{2L} W_i^c (\chi_i - \hat{x})(\chi_i - \hat{x})^T \quad (17)
Prior to incorporating the measurement it is necessary to recompute the sigma points using the new covariance.
x_\sigma = \begin{bmatrix} \hat{x} & \hat{x} + \gamma\sqrt{P} & \hat{x} - \gamma\sqrt{P} \end{bmatrix} \quad (18)
The expected measurements are

y_\sigma = h(x_\sigma) \quad (19)
The mean measurement is
\hat{y} = \sum_{i=0}^{2L} W_i^m y_{\sigma,i} \quad (20)
The covariances are
P_{yy} = \sum_{i=0}^{2L} W_i^c (y_{\sigma,i} - \hat{y})(y_{\sigma,i} - \hat{y})^T \quad (21)

P_{xy} = \sum_{i=0}^{2L} W_i^c (\chi_i - \hat{x})(y_{\sigma,i} - \hat{y})^T \quad (22)
The Kalman gain is

K = P_{xy} P_{yy}^{-1} \quad (23)
The state update is

\hat{x} = \hat{x} + K(y - \hat{y}) \quad (24)
where y is the actual measurement, and the covariance update is

P = P - K P_{yy} K^T \quad (25)
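Equations 18-25 can be condensed into one measurement-update routine. This is our own sketch, not the paper's code; note that the measurement noise covariance Rn is added to Pyy explicitly here, a detail the derivation above leaves implicit.

```python
import numpy as np

def ukf_update(x, P, y, h, Wm, Wc, gamma, Rn):
    """One UKF measurement update, Eqs. 18-25. h maps a state sigma point
    to an expected measurement; Rn is the measurement noise covariance."""
    L = len(x)
    S = np.linalg.cholesky(P)                      # matrix square root of P
    sig = np.column_stack([x] +                    # Eq. 18 sigma points
                          [x + gamma * S[:, i] for i in range(L)] +
                          [x - gamma * S[:, i] for i in range(L)])
    Y = np.column_stack([h(sig[:, i]) for i in range(2 * L + 1)])  # Eq. 19
    ybar = Y @ Wm                                  # Eq. 20
    dY = Y - ybar[:, None]
    dX = sig - x[:, None]
    Pyy = dY @ np.diag(Wc) @ dY.T + Rn             # Eq. 21
    Pxy = dX @ np.diag(Wc) @ dY.T                  # Eq. 22
    K = Pxy @ np.linalg.inv(Pyy)                   # Eq. 23
    return x + K @ (y - ybar), P - K @ Pyy @ K.T   # Eqs. 24 and 25
```

For a linear measurement this reduces exactly to the ordinary Kalman update, which makes it easy to sanity-check.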
y must be the measurement that corresponds to the time of the updated state \hat{x}.

The ONS sensor is designed to be calibrated in space using a calibration cube mounted to the spacecraft.
The cube is illustrated in Figure 10. The camera looks at the corner of the cube, which provides 144 known points for calibration. Each point has a different 3-D coordinate. The center of the cube is also known.
Figure 10. Calibration cube.
The position of an object in the camera frame is

p_c = R p_w + t \quad (26)

where p_c is the position in the camera frame, p_w is the position in the world frame, R transforms from the world frame to the camera frame and t is the offset. The following extrinsic parameters must be estimated:
1. R the 3 × 3 rotation matrix
2. t the 3 × 1 translation vector
The following intrinsic parameters must also be estimated
1. fx = f/sx, the focal length in effective pixel units, where f is the focal length and sx and sy are the effective sizes of each pixel
2. α = sx/sy the aspect ratio
3. The image center coordinates, ox and oy
4. The radial distortion coefficients
Given the 144 known points and the locations of their centroids in the focal plane, we can determine these parameters. Including radial distortion, the equations in the focal plane are
x = x_d(1 + k_1 r_d^2 + k_2 r_d^4) \quad (27)

y = y_d(1 + k_1 r_d^2 + k_2 r_d^4) \quad (28)
where x and y are the undistorted locations, x_d and y_d are the distorted locations, r_d is the radial distance and k_1 and k_2 are the distortion coefficients. The radial distance is

r_d = \sqrt{x_d^2 + y_d^2} \quad (29)
To model the radial distortions, that is, to go from undistorted to distorted coordinates, we combine Equations 27 and 28 to get
x^2 + y^2 = r^2 = r_d^2 (1 + k_1 r_d^2 + k_2 r_d^4)^2 \quad (30)

or, taking the square root,

r = r_d(1 + k_1 r_d^2 + k_2 r_d^4) \quad (31)

This results in the 5th-order equation

k_2 r_d^5 + k_1 r_d^3 + r_d - r = 0 \quad (32)
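Equation 32 is a quintic in r_d; because its derivative 5 k_2 r_d^4 + 3 k_1 r_d^2 + 1 is positive for positive coefficients, the polynomial is monotone and has exactly one real root. A sketch of the solve (the function name is ours):

```python
import numpy as np

def distorted_radius(r, k1, k2):
    """Solve k2*rd^5 + k1*rd^3 + rd - r = 0 (Eq. 32) for rd."""
    # np.roots takes coefficients from highest degree to constant term
    roots = np.roots([k2, 0.0, k1, 0.0, 1.0, -r])
    real = roots[np.abs(roots.imag) < 1e-6].real
    # For small distortion the root is close to the undistorted radius r
    return real[np.argmin(np.abs(real - r))]

rd = distorted_radius(r=100.0, k1=1e-10, k2=1e-14)
# rd satisfies r = rd*(1 + k1*rd^2 + k2*rd^4) to numerical precision
```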
The only real solution of this equation is the value of r_d. The image point is found from
x = -(x_{im} - o_x) s_x \quad (33)
y = -(y_{im} - o_y) s_y \quad (34)
The undistorted locations are
x = -\frac{f}{s_x} \frac{x_c}{z_c} \quad (35)

y = -\frac{f}{s_y} \frac{y_c}{z_c} \quad (36)
Equations 26, 27, 33 and 35 can be combined into a set of two nonlinear equations with the world coordinates and measured coordinates as inputs and the intrinsic parameters, extrinsic parameters, rotation matrix and translation center as outputs. The 144 pairs of equations can then be solved. The process is to first formulate them as a set of linear equations with the radial distortions ignored and solve them using the singular value decomposition. The resulting solution is iterated upon using downhill simplex. Calibration is done in the background after the measurements are taken.
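The full calibration (SVD initialization plus a downhill simplex over all intrinsic and extrinsic parameters) is beyond a short sketch, but the distortion-coefficient subproblem illustrates the structure: given matched undistorted and distorted point pairs, Equations 27 and 28 are linear in k1 and k2 and can be solved directly (the function name and synthetic data are ours):

```python
import numpy as np

def fit_distortion(xd, yd, x, y):
    """Least-squares estimate of k1, k2 from matched distorted (xd, yd)
    and undistorted (x, y) focal-plane points. From Eqs. 27-28:
        x - xd = xd * (k1 * rd^2 + k2 * rd^4), and likewise for y."""
    rd2 = xd**2 + yd**2
    A = np.concatenate([np.column_stack([xd * rd2, xd * rd2**2]),
                        np.column_stack([yd * rd2, yd * rd2**2])])
    b = np.concatenate([x - xd, y - yd])
    k, *_ = np.linalg.lstsq(A, b, rcond=None)
    return k  # [k1, k2]
```

In the paper's procedure the initial point estimates come from the distortion-free linear solution, and the simplex search then refines all parameters together.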
The results of the calibration are shown below, without and with the downhill simplex used to solve for the k1 and k2 coefficients. f is the focal length vector. The computations for R and the first two elements of T are identical, but without downhill simplex the radial errors cause a large error in T3 and the focal lengths. This test used the 30 mm calibration cube shown in Figure 11 with the camera 200 mm from the cube. Each square is 5 mm by 5 mm. Table 2 gives the results of the calibration software.
III. Prototype
A prototype instrument was fabricated by modifying high quality optical and mechanical components. The components were assembled into a research instrument by building custom supports.
Two Canon 12.2-megapixel (4272 × 2848) CMOS 450D digital cameras with 100-400 mm zoom lenses (f/4.5-5.6; 77 mm diameter) were used. These cameras were selected because of their remote live view feature and the fact that full-resolution RAW digital images could be viewed and extracted. Each camera was connected to a Sony Vaio Z series laptop computer running the Canon EOS remote view application program.
Table 2. Calibration results. The top set of parameters is estimated without downhill simplex; the bottom set with it.
Parameter True Estimate
R [ 1.0 1.0 1.0] [ 1.0 1.0 1.0]
T [ 0.0 0.0 0.0] [ 0.0 0.0 23.3]
f [ 10000.0 10000.0] [ 11070.2 11070.2]
k1 1.00e-10 0.00e+00
k2 1.00e-14 0.00e+00
R [ 1.0 1.0 1.0] [ 1.0 1.0 1.0]
T [ 0.0 0.0 0.0] [ 0.0 0.0 -0.0]
f [ 10000.0 10000.0] [ 10000.0 10000.0]
k1 1.00e-10 1.00e-10
k2 1.00e-14 1.00e-14
Figure 11. 30 mm simulated calibration cube in test.
The core of the mount was three high-precision Newport Corporation rotation stages (models 470A and 470B). These stages were assembled with one serving as a base and the two others mounted at right angles and offset from one another. Each of the right-angle stages was equipped with a base plate and a second pair of rotation stages to give pitch control to each camera. The rotation stages for pitch control were balanced with a custom journal bearing to support the camera and lens assembly. Figure 12 shows several views of the mount.
Figure 12. Prototype mount.
An Orion Atlas EQ-G equatorial "GoTo" mount was used to support the prototype for ground-based tests. This mount was modified to accommodate the prototype instrument rather than the telescope that is typically used with the mount. The equatorial mount allowed us to take time exposures to simulate the kind of measurements that would be made from a satellite. It also allowed us to simulate the various rates of rotation that one would have if the satellite were spinning. Figure 13 shows the prototype on the equatorial tripod.
Numerous images were obtained for the purpose of testing the image processing algorithms. These images included stars, the moon, and planets and their moons. Sample images were also obtained of the sun, using a neutral density filter (optical density 8). Several full-frame images (full moon, half moon, star field, Jupiter and moons) are given in Figure 14.
IV. Simulation Results
A. Geosynchronous Orbit
Figure 15 shows an example of geosynchronous tracking. The errors are resolution limited. Geometric errors are removed by using two telescopes and multiple targets. The 3-sigma errors are under 10 km assuming that there is no subpixel interpolation. With subpixel interpolation by a factor of 5, the results are acceptable for geosynchronous mission navigation.
B. New Horizons
Results for New Horizons using only chord width measurements of 3 planets are shown in Figure 16. One measurement is taken per hour. Every ten hours the tracker switches to another planet. The absolute position errors are within 2000 km in the last quarter of the mission. The results are consistent with the uncertainty of the planetary ephemerides. The bottom plot shows the planet numbers (1 being Mercury) used for the chord measurements. Absolute errors are important for performing orbit change maneuvers. When approaching a target the absolute errors are less important and the relative errors to the target become significant.
Figure 13. Prototype on tripod.
Figure 14. Full screen images.
Figure 15. Geosynchronous test results using lunar landmarks as the primary references.
[Figure 16 panels: estimation errors in x, y and z (km) and in vx, vy and vz (km/s), plus the planet id used for the chord measurement, versus time over 6 years.]
Figure 16. New Horizons mission estimation error.
[Figure 17 panels: x, y and z (AU) versus time over 6 years.]
Figure 17. New Horizons mission orbit.
C. Messenger
Results for Messenger using only chord width measurements of 3 planets are shown in Figure 18. One measurement is taken per hour. Every ten hours the tracker switches to another planet. The errors are within 100 km in the last quarter of the mission. The results are sensitive to the accuracy of the noise covariance matrix, and care needs to be taken that these numbers accurately reflect the sensor noise. Once again, the bottom plot shows the planet numbers (1 being Mercury) used for the chord measurements.
[Figure 18 panels: estimation errors in x, y and z (km) and in vx, vy and vz (km/s), plus the planet id used for the chord measurement, versus time over 2 years.]
Figure 18. Messenger mission estimation error.
V. Conclusions
This paper presents results for the optical navigation system. This system performs both orbit determination and attitude determination anywhere in the solar system. It can incorporate a wide variety of measurement sources. The navigation solutions are sufficient for orbit change maneuvers and other navigation functions. The next step in the development will be the manufacture of a terrestrial version of the sensor, followed by the development of a flight version and flight testing. A further improvement will be the addition of algorithms to optimally select targets based on their measurement uncertainty and the overall measurement geometry.
[Figure 19 panels: x, y and z (AU) versus time over 2 years.]
Figure 19. Messenger mission orbit.
Acknowledgments
This work was done under NASA Contract No. NNX07CA46P. The authors would like to thank Mr. Owen Brown of JPL and Dr. Landis Markley of NASA/GSFC for their contributions to this work.
References
1. Battin, R. A., An Introduction to the Mathematics and Methods of Astrodynamics, AIAA, January 2004, pp. 755–756.
2. Bhaskaran, S., Riedel, J. E., and Synnott, S. P., "Autonomous optical navigation for interplanetary missions," Proc. SPIE, 1996, pp. 32–43.
3. Riedel, J. E., Bhaskaran, S., Synnott, S. P., Desai, S. D., Bollman, W. E., Dumont, P. J., Halsell, C. A., Han, D., Kennedy, B. M., Null, G. W., Owen, W. M., Werner, R. A., and Williams, B. G., "Navigation for the New Millennium: autonomous navigation for Deep Space 1," Proc. 12th International Symposium on Space Flight Dynamics, 1997.
4. Kubota, T., Hashimoto, T., Sawai, S., Kawaguchi, J., Ninomiya, K., Uo, M., and Baba, K., "An autonomous navigation and guidance system for MUSES-C asteroid landing," Vol. 52, 2003, pp. 125–131.
5. Kubota, T., Sawai, S., Hashimoto, T., and Kawaguchi, J., "Robotics and autonomous technology for asteroid sample return mission," Proc. 12th International Conference on Advanced Robotics, 2005, pp. 31–38.
6. Foing, B. H., Heather, D., Almeida, M., Racca, G., and Marini, A., "The ESA SMART-1 Mission To The Moon - Science and Technology Objectives," Proc. National Space Society's 20th Annual International Space Development Conference, 2001.
7. Polle, B., "Low Thrust Missions Guidance and Navigation: the SMART-1 OBAN experiment results and perspectives," presented at the 3rd International Workshop on Astrodynamics Tools and Techniques, 2006.
8. Polle, B., Frapard, B., Voirin, T., Gil-Fernandez, J., Milic, E., Graziano, M., Panzeca, R., Rebordao, J., Correia, B., Proenca, M., Dinis, J., Mortena, P., and Dyarte, P., "Vision based navigation for interplanetary exploration opportunity for AURORA," Proc. International Astronautical Congress, 2003.
9. Owen, W. M., Jr. and Wang, T., "NEAR Optical Navigation at Eros," AAS 01-376.
10. Riedel, J. E., Wang, T., Werner, R., Vaughan, A., Myers, D., Mastrodemos, N., Huntington, G., Grasso, C., Gaskell, R., and Bayard, D., "Configuring the Deep Impact AutoNav System for Lunar, Comet and Mars Landing," AIAA/AAS Astrodynamics Specialist Conference and Exhibit, AIAA, 2008.
11. Trucco, E. and Verri, A., Introductory Techniques for 3-D Computer Vision, Prentice-Hall, January 1998, p. 76.
12. VanDyke, M. C., Schwartz, J. L., and Hall, C. D., "Unscented Kalman Filtering for Spacecraft Attitude State and Parameter Estimation," Proceedings, No. AAS-0115, AAS.
13. van der Merwe, R. and Wan, E. A., "The Square-Root Unscented Kalman Filter for State and Parameter-Estimation."