COMPACT FAST SCANNING LIDAR FOR PLANETARY ROVER NAVIGATION
A. J. Bakambu(1), M. Nimelman(2), R. Mukherji(3), J. W. Tripp(4)
(1)(2)(3) MDA Corporation, 9445 Airport Road, Brampton, Ontario, Canada, L6X 4T8
(4) Optech Incorporated, 300 Interchange Way, Vaughan, Ontario, Canada, L4K 5Z8
e-mail: [email protected]
ABSTRACT
The Compact Fast Scanning Lidar (CFSL) was developed by MDA and Optech to meet the need for a compact active sensor for terrestrial and space vehicle navigation applications. Compared with currently available scanning and flash lidars, the CFSL delivers a superior combination of high resolution, fast scanning over a large field-of-view, and high data output rates. It meets the needs of a variety of guidance and navigation functions for small and medium space and terrestrial vehicles within a compact and power-efficient package. Moreover, it has been designed with space qualification in mind. Design innovation and superior performance were achieved, within tight cost and schedule constraints, through the use of commercially available technologies and the leveraging of critical components from Optech’s leading
commercial lidar products. The CFSL prototype performance has already been
demonstrated in a series of indoor and outdoor static tests in conjunction with various rover guidance & navigation applications, including 3D mapping, terrain assessment, path planning and obstacle detection.
1 INTRODUCTION
The prototype lidar concept addresses the original intent to support a tele-operated exploration rover mission, with the operator basing drive commands on a fast 3D lidar ‘video’ stream of the scene in front of the unmanned vehicle. To achieve this, an innovative fast scanning design was conceived within the constraints identified
for the sensor. The prototype’s innovative design solution drew from the project team’s combined experience in the general areas
of robotics, operations, and operations planning and specific areas of vision sensors, vision systems, robotic command and control and rover navigation. The design followed the overall concept by providing an
innovative, high performance lidar which exceeded the required lidar scanning speed, FOV, update rates and range. The design may also support potential scientific exploration tasks which benefit from the lidar’s extended range, high resolution and high rate scan data with associated intensity data.
The prototype lidar could be deployed in a planetary analogue field campaign, mounted on a rover, or used for stand-alone experiments in laboratory or
outdoor environments. The prototype lidar specifications were tailored to meet the MDA rover GN&C team’s prioritized ‘wish list’. The lidar resource constraints were based on the (original) target lunar rover power, mass and volume allocations. To maximize the benefit to the customer, the team decided to concentrate on the development of an
innovative sensor head design targeting high scan speed performance with a low-risk ‘path to flight’. The program started with tradeoffs of available ‘space
qualifiable’ scanning mechanisms and lasers. The team also leveraged available, commercially proven electronic components and designs, mainly from Optech’s commercial lidar line. The resulting lidar prototype sensor, built within the cost and schedule constraints, is a high-performance, high-speed, compact sensor head combined with a commercially based avionics platform, and it met or exceeded all original performance requirements. The CFSL performance has been demonstrated in a lab
environment at ranges of ~10 meters and outdoors mounted on a man-size rover (Fig. 1) under a variety of lighting conditions at ranges exceeding 120 meters.
Figure 1. Rover-mounted lidar
2 INITIAL REQUIREMENTS
The original lidar resource allocations were driven by the
following constraints of a small tele-operated lunar rover.
Mass: 5 kg
Power consumption: 25 W
Accommodation area: 140 x 175 x 210 mm
Table 1: Physical Requirements
Its performance requirements were meant to support tele-operation of a rover traveling at <10 km/h, with high-rate scene displays allowing the operator to safely navigate the rover in unknown terrain.

Requirement: Original Value
Minimum lidar range: 100 m
Lidar FOV: 40°
Lidar close range resolution: 50 mm
Lidar long range (100 m) resolution: 200 mm - 300 mm
Lidar detection: Object of 100 mm diameter at TBD range/reflectivity
Table 2: Performance Requirements to Support Teleoperation

These requirements were augmented during the early system design phase to encompass the needs of supervised autonomous operation. This put a further
premium on scanning performance to support obstacle detection/avoidance and rover localization. The team emphasized the need to improve the scan speed and angular resolution compared to leading terrestrial lidars, with the resulting additional performance specifications derived and shown in Table 3 below.

Requirement: New Value
Frame Update Rate [Hz]: 2-10
Points/sec: 100k / 400k
FOV Horizontal [°]: 60-180 *
FOV Vertical [°]: >40
Max Range [m] / 0.8 Reflectivity: 120
Min Range [m]: 1-2
Range Accuracy [cm]: 2-5
Angular Resolution AZ [mrad]: 5
Angular Resolution EL [mrad]: 1-2
* Operator selectable
Table 3: Additional Performance Requirements to Support Autonomous GN&C Operation
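The point rate, angular resolution, FOV and frame-rate figures in Table 3 can be sanity-checked with simple arithmetic. The sketch below is our own illustration (the function name and the raster-scan assumption are not from the paper): it estimates the frame update rate a raster scanner achieves for a given point budget.

```python
import math

def frame_rate_hz(points_per_sec, h_fov_deg, v_fov_deg, az_res_mrad, el_res_mrad):
    """Estimate the frame update rate of a raster scan:
    points per frame = (H FOV / AZ spacing) * (V FOV / EL spacing)."""
    h_fov = math.radians(h_fov_deg)
    v_fov = math.radians(v_fov_deg)
    pts_per_frame = (h_fov / (az_res_mrad * 1e-3)) * (v_fov / (el_res_mrad * 1e-3))
    return points_per_sec / pts_per_frame

# Table 3 values: 400k pts/s, 60 deg x 40 deg FOV, 5 mrad AZ x 2 mrad EL spacing
print(round(frame_rate_hz(400_000, 60, 40, 5, 2), 2))  # ~5.47 Hz
```

At the high end of the point budget, a full 60° x 40° frame at the finest tabulated spacing thus lands near 5.5 Hz, consistent with the 2-10 Hz requirement.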
2.1 Concept of Operations
The CFSL supports rover tele-operation and semi-autonomous operations by providing point cloud data for:
• 3D mapping (static rover)
• Obstacle detection (dynamic rover)
• Rover localization (dynamic rover)
The CFSL provides the remote operator with the
capability to select scan parameters (FOV, scan density, scan update rate) to optimize the lidar’s performance for various operational modes.
2.2 ‘Target’ Performance Requirements
The lidar’s performance requirements were driven by the GN&C applications and eye safety:
• Maximum range (static): 120 m
• Horizontal FOV: 60°
• Vertical FOV: >40°
• ‘Frame’ update rate: 2-5 Hz
• Range accuracy: 2-5 cm
• Angular accuracy: 1-5 mrad
3 PROTOTYPE DESIGN
The trade studies conducted during the first phase of the program resulted in the selection of a fast and power
efficient polygon/galvo scanner coupled with a compact fiber laser and high-speed timing electronics. The benefits of this design are a clearly understood ‘path-to-flight’, low power consumption, and user-selectable scan settings that can be configured to fit various operational modes.
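The timing electronics perform time-of-flight ranging. As a back-of-the-envelope illustration (our own sketch, not the CFSL implementation), range follows from half the round-trip pulse time, and the 2-5 cm range accuracy target implies timing resolution on the order of 100-300 ps:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_range_m(round_trip_s):
    """Time-of-flight ranging: the pulse travels out and back,
    so range = c * t / 2."""
    return C * round_trip_s / 2.0

def timing_for_accuracy_ps(range_accuracy_m):
    """Timing resolution (picoseconds) needed for a given range accuracy."""
    return 2.0 * range_accuracy_m / C * 1e12

print(round(tof_range_m(1e-6), 1))          # 1 us round trip -> 149.9 m
print(round(timing_for_accuracy_ps(0.02)))  # 2 cm accuracy -> ~133 ps
```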
The vision system configuration is illustrated in Fig. 2.
Figure 2. System Configuration
The vision system’s Sensor Head is composed of a newly designed lidar head integrated with a commercial stereo camera and a pan-tilt unit. The focus of this paper is the innovative new design of the lidar sensor head. The prototype control unit maximized use of commercial off-the-shelf technologies and components from Optech’s commercial products. The operator computer was implemented on a rugged commercial laptop.
3.1 CFSL Block Diagram
Figure 3 shows the prototype system block diagram.
Figure 3. CFSL block diagram
3.2 Sensor Head Design
The prototype’s sensor head is a new innovative design with the following main features:
• Compact and rugged design
• High energy, eye safe laser
• User selectable laser pulse rate/energy
• Polygon/Galvo fast scanner
• User selectable scan speed
• Flight-compatible design
• User adjustable scan FOV
• Integrated CCA provisions
The CFSL Sensor Head is shown in Fig. 4.
Figure 4. CFSL Sensor Head
3.3 Avionics Design
The prototype control unit utilizes commercial off-the-shelf technologies and leverages control, data
acquisition and time-of-flight components from Optech’s commercial products. The prototype’s power and data external interfaces were tailored to meet the rover’s power and data interface requirements.
3.4 Software Design
The prototype lidar’s software comprises three main modules:
• Lidar control
• Lidar data acquisition and processing
• User’s display and control interface
Program constraints dictated use of tailored commercial
software modules. The current software allows the operator to command a lidar scan and collect the scan data into a file. The next software revision will enable real-time lidar data transmission to the operator (for display) or to other
software modules (rover GN&C input).
3.5 Innovation
The prototype lidar took an innovative yet low-risk
design approach which met the demanding performance requirements within the tight resource constraints. At the lidar’s subsystem level:
• Innovative mechanical packaging approach to
minimize optical head mass, size and volume
• Innovative lidar scanning, transmitting, receiving
and processing technologies
• Integrated hardware and software data acquisition solutions to maximize data telemetry output rate
At the ground control station:
• The current prototype program provided only basic ground control functions
• Future innovative applications for use by the ground-based tele-operator may include:
  - Innovative 3D lidar data, science data and stereo camera imaging displays
  - Sensor data enhanced displays (e.g. ‘range maps’, ‘intensity maps’, ‘mineral maps’)
  - Obstacle detection and avoidance applications (algorithms and displays)
  - Terrain assessment applications (algorithms and displays)
  - Path planning applications
  - User-friendly graphical user interface
  - Data reduction, storage and sorting applications
3.6 Operational Workflow
The following user input settings are supported via the operator computer:
• Horizontal FOV in degrees (0°-60°)
• Horizontal and vertical resolution (2-50 mrad)
• Frame update rate (0.1-6 Hz)
The user must define the above parameters before commanding a scan. Processed lidar data is provided in LAS format.
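Downstream consumers of the LAS point cloud typically work in Cartesian coordinates. A minimal sketch of the scan-angle-to-point conversion (our own illustration; the axis conventions are assumptions, not from the paper):

```python
import math

def to_cartesian(range_m, az_rad, el_rad):
    """Convert one lidar return (range, azimuth, elevation) to sensor-frame
    coordinates: x forward, y left, z up."""
    x = range_m * math.cos(el_rad) * math.cos(az_rad)
    y = range_m * math.cos(el_rad) * math.sin(az_rad)
    z = range_m * math.sin(el_rad)
    return (x, y, z)

# A boresight return at 120 m maps straight down the x axis.
print(to_cartesian(120.0, 0.0, 0.0))  # (120.0, 0.0, 0.0)
```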
4 PRELIMINARY TEST RESULTS
The CFSL prototype performance has already been demonstrated in a series of indoor and outdoor static tests in conjunction with various rover guidance & navigation applications, including 3D mapping, terrain assessment,
path planning and hazard detection.
The CFSL current performance test results are:
• Maximum Range: >120 m (*)
• Field of View: 60° x 54.5°
• Lateral Resolution: 2-50 mrad (programmable)
• Range Accuracy: 2 cm (1σ)
• Angular Accuracy: 1.3 mrad RMS
• Frame Update Rate: up to 6 Hz (programmable)
(*) Range and scan performance can be enhanced if eye-safety requirements are relaxed.
Compared with existing terrestrial sensors, the CFSL’s superior performance improves the ability to perform dynamic collision avoidance, real-time localization and path re-planning on a fast (~10 km/h) vehicle. The compact CFSL sensor head prototype fits within a 140 x 175 x 210 mm package, with remaining development to focus on the path-to-flight mass and power targets of 5 kg and 25 W. The CFSL was originally designed to support lunar rover tele-operation and associated guidance and navigation applications. Due to its superior scanning performance, demonstrated on a rugged prototype, the CFSL also meets performance criteria for other applications such as spacecraft rendezvous and landing.
4.1 Maximum Range Test Results
Outdoor maximum range test results of up to 131 meters have been recorded using the following scan settings:
• FOV: 60˚x40˚ (HxV)
• Density: 4 mrad x 8 mrad spacing
• Update rates: 1 Hz and 1.5 Hz
• Laser PRF: 100 kHz
Figure 5. Long range test results (returns at 131 m, 105 m and 124 m)
4.2 Scan Speed (‘Ball drop’) Demonstration
High density scans were performed at ~5 m distance to demonstrate ‘Fast’ (5 Hz) and ‘Slow’ (0.2 Hz) scan performance.
Figure 6. Fast and Slow scans
The fast scan captures the dropping ball in mid-air, while the slow scan captures the ball before and after bouncing off the floor (the scan starts and ends at the middle of the FOV).
4.3 Performance Summary
The following table summarizes the actual tested
performance compared to the original program requirements and design goals.
Parameter | Requirement | Goals | Test Result
LIDAR
Maximum Range | 100 m | 120 m | >130 m
FOV [HxV] | 40° x 40° | 60° x 40° | 60° x 54.5°
Range Resolution | 5 cm at 10 m / 20 cm at 100 m | TBC | TBC
Angular Resolution | 5 mrad at 10 m / 2 mrad at 100 m | 2 mrad | 2-50 mrad
Range Accuracy | 1% | 2 cm | 2 cm (1σ)
Angular Accuracy | 1% | 1% | 0.13%
Frame Rate | 0.3 Hz | 1 Hz | 0.1-6 Hz
4.4 Indoor/Outdoor Sample Scans
Figure 7. Lidar team scan (CSA/MDA/Optech team indoor scan)
Several outdoor scans were performed at different scan settings and frame update rates:

FOV [HxV] | Target Distance [m] | Spacing [mrad] | Scan Update Rate [Hz]
60° x 54.5° | 50 | 15 x 10 | 6
60° x 54.5° | ~14 | 5 x 5 | 3
60° x 54.5° | 50 | 5 x 5 | 1.5
Figure 8. Outdoor scan
4.5 Intensity Data Correction
Initial experiments with lidar data enhancement have yielded very promising improvements in the detectability of discernible features in a scanned scene. Intensity data correction algorithms were used to compensate for the reduced intensity of return signals from distant targets. The following scans compare the ‘original’ scan display with the ‘enhanced’ intensity data scan display.
Figure 9. Intensity data enhancement
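The paper does not detail its correction algorithm. A common approach, shown here as an assumed model rather than the CFSL implementation, normalizes each raw intensity for the 1/R² falloff of the return signal by scaling it to a common reference range:

```python
def corrected_intensity(raw_intensity, range_m, ref_range_m=10.0):
    """Compensate the 1/R^2 falloff of the lidar return by normalizing
    each raw intensity to a common reference range (assumed model;
    the paper does not specify its algorithm)."""
    return raw_intensity * (range_m / ref_range_m) ** 2

# A target at 100 m returning intensity 1.0 is rescaled as if seen at 10 m.
print(corrected_intensity(1.0, 100.0))  # 100.0
```

After such a correction, surfaces of similar reflectivity display with similar brightness regardless of distance, which is what makes distant features discernible.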
5 FUTURE WORK
5.1 Potential Future Enhancements
Most future improvements to the sensor system are focused on software enhancements. Some of the enhancements, which result in higher ‘laser energy on target’, will also require relaxation of the eye-safety
requirements. The following are some potential enhancements:
• Operation down to 0.5 mrad resolution**
• Extended range 150-200 meters (higher laser energy)**
• Real-time lidar data acquisition, processing and display for rover tele-operation
• Intensity data correction
• Lidar data motion correction
** Improving this requirement may affect the eye-safe capability of the sensor system
5.2 GN&C Testing
Future testing will focus on integrating this sensor into MDA’s existing guidance, navigation and control system for planetary rovers. The 3D lidar scan data may
be used as input to near-real-time terrain assessment, obstacle detection, motion tracking and obstacle avoidance algorithms.
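As one example of how the scan data might feed obstacle detection, a simple grid-based check flags terrain-map cells whose height spread exceeds a rover's step-over limit or whose implied slope exceeds its climb limit (our own sketch with illustrative thresholds, not the MDA GN&C algorithm):

```python
import math

def is_obstacle(cell_heights_m, cell_size_m, max_step_m=0.10, max_slope_deg=20.0):
    """Flag a terrain-map cell as an obstacle if the height spread of the
    lidar points inside the cell exceeds the rover's step-over limit, or
    the implied slope across the cell exceeds its climb limit.
    Thresholds are illustrative, not from the paper."""
    spread = max(cell_heights_m) - min(cell_heights_m)
    slope_deg = math.degrees(math.atan2(spread, cell_size_m))
    return spread > max_step_m or slope_deg > max_slope_deg

# A 0.25 m step inside a 0.5 m cell is an obstacle; a nearly flat cell is not.
print(is_obstacle([0.0, 0.25], 0.5))   # True
print(is_obstacle([0.0, 0.01], 0.5))   # False
```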
5.3 Additional Applications – Dual Use
The prototype vision system allows for an innovative
dual-use approach which combines the use of the navigation sensors with scientific exploration tasks. Dual-use tasks may include:
• High resolution 3D modeling of exploration sites identified by the science team
• Intensity maps of above noted sites (allowing
reflectivity-based surface characterization)
• Fusion of camera imaging with lidar 3D model
• Fusion of science sensor data with lidar 3D model
6 CONCLUSIONS
The CFSL prototype lidar, developed within tight cost
and schedule constraints, has exceeded all its original performance requirements. The CFSL provides a superior sensor alternative for applications requiring a very fast and accurate scanner with high 3D output data
rate. The lidar requirements specified by the MDA GN&C team and optimized by Optech’s lidar designers resulted in a compact, high performance lidar product optimized to support rover GN&C functions. The promising static test results have already demonstrated
the lidar’s capability to generate high resolution 3D maps at short and medium ranges. Initial lidar intensity data enhancements demonstrate improved feature detectability, which results in improved 3D terrain modeling and improved model-based localization.
Initial dynamic laboratory tests have demonstrated the lidar’s fast scanning, high resolution performance which will be essential to support obstacle detection and localization functions.
In the future, the CFSL prototype lidar may also be a candidate sensor for rendezvous and planetary landing applications.
7 ACKNOWLEDGMENT
The authors would like to acknowledge the following contributors to the CFSL program:
- The Canadian Space Agency (CSA) for its sponsorship of the program
- Alexander Koujelev (CSA science lead)
- Marwan Hussein (Optech technical lead)
- Dr. Chris Langley (MDA lidar test lead)
- Unal Artan (MDA lidar test support)
- Billy Jun (MDA lidar test support)