

Eye-Safe LADAR Test-Bed with Coaxial Color Imager

Robert T. Pack*(a), Jason Swasey(b), Rees Fullmer(a), Scott Budge(a), Paul Israelsen(a), Brad Petersen(b), and Dean Cook(c)

(a) Center for Advanced Imaging Ladar, Utah State University, Logan, UT 84322-4170
(b) Space Dynamics Laboratory, Utah State University, Logan, UT 84341
(c) NAVAIR Weapons Division, China Lake, CA 93555-6106

ABSTRACT

A new experimental full-waveform LADAR system has been developed that fuses a pixel-aligned color imager within the same optical path. The Eye-safe LADAR Test-bed (ELT) consists of a single-beam energy-detection LADAR that raster scans within the same field of view as an aperture-sharing color camera. The LADAR includes a pulsed 1.54 µm Erbium-doped fiber laser, a high-bandwidth receiver, a fine steering mirror for raster scanning, and a ball joint gimbal mirror for steering over a wide field of regard. The system has a 6 inch aperture and the LADAR has a pulse rate of up to 100 kHz. The color imager is folded into the optical path via a cold mirror. A novel feature of the ELT is its ability to capture LADAR and color data that are registered temporally and spatially. This allows immediate direct association of LADAR-derived 3D point coordinates with pixel coordinates of the color imagery. The mapping allows accurate pointing of the instrument at targets of interest and immediate insight into the nature and source of the LADAR phenomenology observed. The system is deployed on a custom van designed to enable experimentation with a variety of objects.

Keywords: lidar, LADAR, laser sensors, co-boresight, sensor suite, coaxial, color imager, texel camera

1. INTRODUCTION

The Eye-safe LADAR Test-bed (ELT) described in this paper is a dual-mode active/passive sensor that combines an eye-safe LADAR with a color Electro-Optical (EO) imaging sensor into a single unit. The ELT has been designed to be part of a mobile sensor laboratory being developed by the Naval Air Warfare Center, Weapons Division (NAWC-WD) as a research and development asset.

The concept for the mobile sensor laboratory, known as the Vehicle Integrated Sensor Suite for Targeting Applications (VISSTA), was motivated by the need to bridge the gap between sensor developers and sensor users. This gap has been responsible for the frequency with which previously undiscovered Automatic Target Recognition (ATR) system problems were manifested during expensive flight testing. The VISSTA laboratory was conceived to support the development of ATR algorithm technology by providing an integrated collection of sensors that could be used to collect multi-sensor data of real-world target objects, and to provide a platform for the development and exercise of ATR algorithms designed to take advantage of multi-sensor data and multi-sensor data fusion technologies. This development loop, closed around the VISSTA laboratory, is shown in Figure 1. VISSTA was designed to serve as a test-bed in which trade studies can be performed, algorithmic concepts proven, and diverse data sets collected.

VISSTA is being realized as a truck with a rotating optical bench above the cab, upon which all of the sensors are mounted. Within the truck, racks are installed that contain control electronics and data storage. Additional rack space is provided to support future integration with new sensor hardware, and real-time ATR algorithm processing hardware and computers. The ELT is one of the sensor systems that will be deployed first aboard the VISSTA laboratory. Other sensor systems slated for integration with VISSTA include Mid-Wave Infrared (MWIR) imagers. The VISSTA concept is illustrated in Figure 2.

Laser Radar Technology and Applications XIV, edited by Monte D. Turner, Gary W. Kamerman, Proc. of SPIE Vol. 7323, 732303 · © 2009 SPIE · CCC code: 0277-786X/09/$18 · doi: 10.1117/12.818146


Fig. 1. Bridging the gap between sensor developers and sensor users (sensor requirements → sensor development → sensor data → data exploitation, closed around VISSTA).

Fig. 2. Vehicle Integrated Sensor Suite for Targeting Applications (VISSTA).

VISSTA is intended to support various research and development goals. Some of these goals are:

• Exploration of the engineering trade space of tactical handoff of target information from intelligence, surveillance and reconnaissance sensors to tactical sensors.

• Exploration of the strengths and weaknesses of different combinations of sensors for multi-sensor data fusion analyses.

• Exploration of technical issues associated with multi-sensor systems and multi-sensor data fusion techniques.

• Development of multi-sensor target tracking methods.

• Development of selective targeting methods in littoral battlefield environments.

• Development of methods for automated target template extraction and generation from 3D data.

When completed, VISSTA will provide a data collection and research and development platform that is mobile, accurate, and inexpensive relative to flight testing, allowing developers to design and build better sensor systems.

As noted earlier, the ELT will be the first sensor integrated aboard VISSTA. The design of the ELT has been completed and the hardware is being integrated as of this writing. The following sections describe the ELT in detail.

2. EYE-SAFE LADAR TESTBED DESIGN

A photograph of the ELT is shown as Figure 3 and a schematic of its design is shown as Figure 4. The ELT consists of a laser transmitter and beam expander that first reflects off of a mirror that is coaxially aligned at the obscuration point of the receive telescope. The beam then reflects off of a raster scanning Fine Steering Mirror (FSM), passes through a cold mirror, and then reflects off a Ball Joint Gimbal (BJG) steering mirror into the environment. After hitting and reflecting off of the target, the returned beam retraces its path through the system, entering the receive telescope through a doughnut-shaped aperture that surrounds the transmit path. The telescope then gathers the light and places it on a single channel Avalanche Photo Diode (APD). A color camera also captures passive incoming light that reflects off of the BJG mirror and then reflects off of the cold mirror and into the camera lens. The lens has a variable focal length (zoom) and can therefore achieve a variable Field Of View (FOV). The purpose of the color camera is to (1) provide situational awareness, (2) capture color information for texturing of the LADAR pixels, and (3) provide color video for target tracking purposes. The following sections contain a more specific discussion of the ELT subsystems.

The transmitter consists of a 390 mW pulsed erbium-doped fiber laser operating at a wavelength of 1.54 microns. The beam is expanded to a waist diameter of about 50 mm and a divergence of about 90 μrad. The resulting energy density achieves Class 1M eye-safe levels out of the aperture. The laser can operate at pulse repetition frequencies of between 20 kHz and 100 kHz and has a pulse width of approximately 1.5 nsec.
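As a rough illustration of the transmit geometry, the beam footprint at range follows from the quoted waist and divergence. The simple linear-growth model below is an assumption for back-of-envelope use, not the measured beam profile.

```python
# Sketch: approximate far-field beam footprint from the quoted waist
# (50 mm) and divergence (90 urad). Linear growth with range is a
# simplifying assumption, adequate for order-of-magnitude estimates.
def footprint_diameter_m(range_m, waist_m=0.050, divergence_rad=90e-6):
    """Approximate beam diameter (m) at a given range (m)."""
    return waist_m + divergence_rad * range_m

# e.g. at 1 km the footprint grows to roughly 140 mm under this model
d_1km = footprint_diameter_m(1000.0)
```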


The receiver telescope has a Maksutov-Cassegrain design with a 189 cm² aperture area, a 20 cm² obscured area and an effective diameter of 147 mm. The light is collected on a detector/amplifier with the bandwidth necessary to respond to the 1.5 nsec pulse width and low enough noise to sense returns from 10% reflective targets at ranges up to 2500 meters. A Commercial Off-The-Shelf (COTS) InGaAs APD and a compatible Transimpedance Amplifier (TIA) were found to meet the bandwidth and noise requirements.

Fig. 3. Eye-safe LADAR Test-bed hardware during integration.

Fig. 4. Eye-safe LADAR Test-bed design schematic (labeled components: telescope, APD, laser beam expander, zoom lens, color camera, laser TX/RX paths, visible RX path, BJG, FSM, and cold mirror).

A custom receiver board was designed with a variable gain amplifier that matches the signal to the input range of an Analog to Digital Converter (ADC). The receiver design gives the operator control over three different gain settings: the APD bias, the TIA automatic gain control bias, and the digitally-controlled variable gain amplifier. This makes the system capable of digitizing signals over a wide dynamic range.
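The idea of matching the return signal to the ADC input range can be sketched as a simple gain-selection rule. The gain steps and ADC full-scale value below are illustrative placeholders, not the custom receiver board's actual specification.

```python
# Sketch: choose the largest variable-gain setting that keeps the
# amplified peak inside the ADC input range (no clipping). Gain steps
# and full-scale voltage are illustrative assumptions.
def pick_gain(peak_volts, adc_full_scale=1.0, gains=(1, 2, 4, 8, 16, 32)):
    """Largest gain whose amplified peak stays within the ADC range."""
    best = gains[0]  # fall back to minimum gain if the signal saturates
    for g in gains:
        if peak_volts * g <= adc_full_scale:
            best = g
    return best
```

A real controller would also coordinate this with the APD bias and TIA automatic gain control mentioned above; this sketch covers only the digital variable-gain stage.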

The scanning, steering, and stabilization system in the ELT consists of two 6-inch diameter mirrors and associated drive electronics. The first mirror is the FSM, a very high-bandwidth, small-range of travel mirror used primarily for scanning. It can be programmed by the user to perform various high-speed scan patterns. The second mirror is the BJG. This scanner is particularly well suited for the steering and stabilization roles. There are a variety of configurations that can be implemented where the two mirrors cooperate to fill the roles of scanning, steering, and stabilization. For example, if a requested scan width exceeds the range of travel of the FSM in the horizontal dimension, the BJG can perform the horizontal scanning motion while the FSM performs the vertical scanning motion. In the case of a moving target, the BJG mirror can track the moving target through steering and stabilization while the FSM performs a raster scan around the target. This kind of scanning can reduce target distortion in the collected range image. An integrated navigation system (GPS/IMU) is included that also enables inertial stabilization.
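The cooperative scan mode described above can be sketched as a simple planner: when the requested width exceeds the FSM's travel, the BJG sweeps horizontally while the FSM rasters vertically. Function names, line/sample counts, and command units here are illustrative, not the ELT's actual control interface.

```python
# Sketch of FSM/BJG scan partitioning. All limits and shapes are
# illustrative assumptions.
import numpy as np

FSM_TRAVEL_DEG = 2.0  # +/- mechanical range quoted for the FSM

def plan_scan(width_deg, height_deg, lines=10, samples=100):
    """Return (bjg_az, fsm_az, fsm_el) command arrays for one frame."""
    n = lines * samples
    # one elevation step per scan line
    el = np.repeat(np.linspace(-height_deg / 2, height_deg / 2, lines), samples)
    # horizontal sweep repeated for each line
    sweep = np.tile(np.linspace(-width_deg / 2, width_deg / 2, samples), lines)
    if width_deg / 2 <= FSM_TRAVEL_DEG:
        return np.zeros(n), sweep, el   # FSM alone covers the width
    return sweep, np.zeros(n), el       # BJG sweeps; FSM rasters vertically
```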

The color camera is located behind the BJG steering mirror and in front of the FSM. It is therefore not affected by the raster scanning operations of the FSM. The FOV of the camera can be matched to subtend the FOV of the LADAR raster pattern for purposes of colorizing or texturing the measured LADAR point clouds. Alternatively, the camera can be zoomed to a wide FOV and used for target detection, target tracking or for operator situational awareness. Automatic zooming is enabled by a motorized zoom lens. The color camera is spatially aligned in the hardware using a cold mirror. This enables pixel-level alignment of the LADAR shots with the color pixels. Temporal correlation is achieved by triggering the camera's frame acquisition with a signal synchronized to the laser start pulse. This temporal correlation, when combined with pixel-level alignment in angle-space, enables efficient rendering of colored 3D surfaces in real-time.
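The colorization step enabled by the pixel-level angular alignment can be sketched as a shot-to-pixel lookup. The linear angle-to-pixel model, FOV value, and image dimensions below are assumptions for illustration, not the ELT's calibrated mapping.

```python
# Sketch of shot-to-pixel colorization under pixel-level angular
# alignment. Camera parameters are illustrative assumptions.
import numpy as np

def shot_to_pixel(az_rad, el_rad, fov_rad=0.02, width=640, height=480):
    """Map a shot direction (relative to camera boresight) to a pixel."""
    u = int(round(width / 2 + az_rad / fov_rad * width))
    v = int(round(height / 2 - el_rad / fov_rad * width))  # square pixels assumed
    return u, v

def colorize(points, image):
    """Attach an RGB triple to each (az, el, range) LADAR point."""
    colored = []
    for az, el, rng in points:
        u, v = shot_to_pixel(az, el)
        if 0 <= u < image.shape[1] and 0 <= v < image.shape[0]:
            colored.append((az, el, rng, tuple(image[v, u])))
    return colored
```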

As mentioned previously, the color camera can be used for target tracking. A COTS tracking system has been implemented that requires a TV standard RS-170A input signal. Target tracking is enabled by an Octec video board used to down-sample the images and output RS-170A, 30-Hz video to the tracker board.


3. ELT DATA COLLECTION ACTIVITIES

3.1 LADAR Transceiver Performance

The ELT is in the early stages of data collection as part of system integration. Initial field testing has been conducted. A test target was built that consists of two 2.1 m by 2.1 m painted wooden panels placed normal to the laser beam. The first panel was placed directly in front of the second and had slits or gaps cut into it. Three slits were cut into the front panel. This enabled a part, or all, of the laser beam footprint to pass through and reflect off of the second panel. The second panel can be placed at various distances from the first in order to determine range resolution. A photograph of the target panels is shown as Figure 5.

Fig. 5. Trailer-mounted target panel used in ELT range resolution tests.

The first test was designed to align the LADAR and color camera systems, and then to characterize the beam footprint size and energy distribution. The laser beam was first slowly steered into and out of the various slits in the target. While doing this, an oscilloscope was used to view the shape of the returned signal. The shape of the return signal reflecting only from the front panel is shown as Figure 6. As the beam started to enter the slit, the signal amplitude would drop and then split into two peaks. The split signal occurs because part of the energy reflects from the front panel and part from the back panel. When the two peaks had equal amplitude, it was concluded that the center of the beam was centered on the edge of the slit. This case is shown as Figure 7, where the back panel is placed approximately 600 mm behind the front panel. A wide slit was used to ensure that the opposite side of the slit was not being hit by the laser footprint. The edge of the slit, as viewed by the color camera, was then correlated with a row of color camera pixels. The procedure was completed on both horizontal and vertical slits so that both the pixel row and pixel column corresponding to the center of the laser footprint could be identified. This test demonstrates a range resolution of better than 600 mm.
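The split-return geometry above admits a simple numerical sketch: a return straddling the slit edge produces two pulses whose time separation gives the panel spacing. The synthetic Gaussian pulses and the crude peak picker below are illustrative, not the ELT's actual waveform processing.

```python
# Sketch: estimate front/back panel spacing from a two-peak return.
# Synthetic pulses and thresholding scheme are illustrative assumptions.
import numpy as np

C = 0.299792458  # speed of light, m/ns

def panel_separation_m(t_ns, waveform):
    """Panel spacing from the time between the two return peaks."""
    thresh = 0.5 * waveform.max()
    # crude peak picking: local maxima above half the global maximum
    peaks = [i for i in range(1, len(waveform) - 1)
             if waveform[i] >= thresh
             and waveform[i] > waveform[i - 1]
             and waveform[i] >= waveform[i + 1]]
    if len(peaks) < 2:
        return 0.0  # single-panel return: no split observed
    dt = t_ns[peaks[-1]] - t_ns[peaks[0]]
    return C * dt / 2.0  # halve for two-way travel

# two pulses 4 ns apart correspond to ~0.6 m of panel separation
t = np.arange(0.0, 20.0, 0.5)
wf = np.exp(-((t - 5) / 0.8) ** 2) + np.exp(-((t - 9) / 0.8) ** 2)
```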


Fig. 6. Return signal of laser footprint reflecting from the nearest target panel. Time axis is in nanoseconds since TX detection.

Fig. 7. Return signal of laser footprint centered on the edge of the near/far slit. Time axis is in nanoseconds since TX detection.

A second test was conducted to characterize the shape of the footprint. In this case, the laser beam was slowly steered into and out of the 50 mm wide slit at a distance of approximately 223 m. It was found that the entire beam could fit into the slit as the signal would first split, then entirely shift to the return pulse associated with the far panel. An effective beam divergence of about 90 µrad was documented at that range.

The analog signal is sampled at a 2 GHz rate with 10 bit resolution by an on-board ADC. Currently the samples are trimmed to 8 bits and are stored at the full pulse rate of the system over a specified range gate. Figure 8 is an example of one such waveform returned from the surface of a target. Included in the plot are the discrimination parameters and associated detection point used to estimate the range to target. Figure 9 shows a close-up of this return pulse. Note that the return pulse has been attenuated to a pulse width of approximately 2 ns Full-Width at Half Maximum (FWHM). The achieved range accuracy is within millimeters of the predicted range performance of LadarSIM [1].

Fig. 8. Wave-form of a single laser pulse returned from the front face of the pipe shown in Figure 12.


Fig. 9. Zoom-in view of the wave-form shown in Figure 8 (time since TX detection, in nanoseconds).

3.2 Pointing Control Performance

The Pointing Control System controls both the optical line-of-sight and the leaving principal ray of the pulsed laser with respect to the ELT. The optical line-of-sight, defined as the boresight of the optical camera, is steered with a large field-of-regard steering mirror. Laser scanning with respect to this line-of-sight is performed using a fast steering mirror to raster the laser path. The location of the ELT relative to local-level (East North Up) coordinates is determined by a NovAtel SPAN GPS/IMU system with a LN200 IMU. Inertial line-of-sight stabilization uses motion measurements from a second Litton LN200 IMU. Tracking objects during operation is performed by an OCTEC Adept 60 Video Tracking System, using optical images from either the EO camera image viewed through the steering mirror, or from an IR camera fixed with respect to the van.

The steering of the optical path is performed by a Ball Joint Gimbal (BJG) steering mirror. The BJG mirror has a ±30° range of mechanical motion about two axes capable of rapid stop-to-stop slew motions. The mirror includes a spherical socket resting on a matching spherical post. The mirror is held in place and controlled by four braided lines in tension, attached to servo-motor capstans. The position of the mirror is inferred from encoder measurements on the capstans. This method provides excellent small position repeatability.

A COTS Fast Steering Mirror (FSM) from Left Hand Design (LHD) is used for controlling the raster scanning of the laser to match the field-of-view of the color imager. The large-diameter mirror has a total mechanical range of motion of ±2° about two axes with a bandwidth exceeding 90 Hz. The accuracy of the LHD analog mirror controller is better than 1 μrad with respect to the measured displacement. The measured angular position error is under 35 μrad.

The steering command of the optical line of sight can be controlled by operator joystick, geo-referencing a specific target latitude-longitude-altitude, or from the target tracking system. Steering commands defining the line-of-sight vector of the optical steering path in terms of azimuth and elevation angles with respect to the mechanical frame of the ELT structure are converted into control commands to the BJG mirror normal vector. The steering mirror also incorporates angular displacement information from an IMU integrated in the BJG to stabilize the inertial line of sight from the rotational motion of the van.
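The conversion from a commanded line of sight to a mirror-normal command follows from the law of reflection: the normal must bisect the incoming and desired outgoing directions. The frame convention (x forward, z up) and fixed incoming-ray direction below are assumptions for illustration, not the ELT's calibrated geometry.

```python
# Sketch: convert a commanded azimuth/elevation line of sight into a
# BJG mirror-normal command. Frame conventions are illustrative.
import math

def los_to_unit(az_rad, el_rad):
    """Unit vector for an azimuth/elevation pair (x forward, z up)."""
    return (math.cos(el_rad) * math.cos(az_rad),
            math.cos(el_rad) * math.sin(az_rad),
            math.sin(el_rad))

def mirror_normal(az_rad, el_rad, incoming=(-1.0, 0.0, 0.0)):
    """Normal that reflects the `incoming` ray into the commanded LOS."""
    out = los_to_unit(az_rad, el_rad)
    # law of reflection: the normal is parallel to (out - incoming)
    n = tuple(o - i for o, i in zip(out, incoming))
    mag = math.sqrt(sum(c * c for c in n))
    return tuple(c / mag for c in n)
```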

The scanning mirror control operates independently from the steering mirror control. The scanning commands for both the azimuth and elevation angles are programmable in terms of scan range, frequency, bias and waveform. The nominal raster scan pattern defines the scan range and the uniform angular point separation of the laser shots, which is set by the pulse repetition frequency. Alternatively, uniform point spacing over a defined scan range and a specified time can be commanded. The scan rate is constant over the specified scan range, with the range exceeded slightly to provide for mirror reversal accelerations at the end of a scan. Acceleration feed-forward commands are also used to improve the response speed and fidelity during this motion reversal.

The location of the laser leaving pointing ray in the sensor coordinate frame is calculated during operation for point cloud image formation. The inferred mirror angles of the BJG are combined with the measured FSM mirror angles to determine the laser Leaving Principal Ray using pointing algorithms developed in reference [2].
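The Leaving Principal Ray computation can be sketched as two successive specular reflections, each modeled as a Householder reflection about a mirror's unit normal. The mirror normals below are placeholders; reference [2] describes the actual pointing algorithms and calibrated geometry.

```python
# Sketch: chain the FSM and BJG reflections to obtain the leaving ray.
# Mirror normals here are illustrative, not calibrated ELT geometry.
import numpy as np

def reflect(d, n):
    """Reflect direction d off a mirror with (not necessarily unit) normal n."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    d = np.asarray(d, dtype=float)
    return d - 2.0 * np.dot(d, n) * n  # Householder reflection

def leaving_principal_ray(d_tx, n_fsm, n_bjg):
    """Transmit ray reflected first off the FSM, then off the BJG."""
    return reflect(reflect(d_tx, n_fsm), n_bjg)
```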


The aliasing of a high-frequency noise term on the analog FSM sensor signal during sampling creates a significant blurring of the reconstituted point cloud file. A Kalman filter was implemented to reduce this noise. The improvement in the constant rate elevation raster voltage measurement is shown in Figure 10. Figure 11 shows both the noisy raster scan pattern (dashed lines) generated from the raw sensor data and the improved filtered scan (solid lines).
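The filtering idea can be sketched as a two-state (voltage, voltage-rate) Kalman filter tracking the constant-rate elevation ramp. The process- and measurement-noise values below are illustrative tuning parameters, not the values used in the ELT controller.

```python
# Minimal sketch: constant-velocity Kalman filter smoothing noisy FSM
# elevation-voltage samples during a constant-rate raster. Noise
# covariances are illustrative tuning assumptions.
import numpy as np

def kalman_ramp(z, dt, q=1e-6, r=1e-2):
    """Filter noisy voltage samples z; state = [voltage, voltage rate]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-rate model
    H = np.array([[1.0, 0.0]])                   # voltage is measured directly
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],    # white-noise-accel process noise
                      [dt**2 / 2, dt]])
    R = np.array([[r]])                          # measurement noise variance
    x = np.array([z[0], 0.0])
    P = np.eye(2)
    out = []
    for zk in z:
        x = F @ x                                # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                      # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([zk]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```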

Fig. 10. Improvement in the constant rate elevation raster voltage measurement (elevation voltage and filtered elevation voltage versus time in seconds).

Fig. 11. Comparison of raw and Kalman-filtered FSM scan lines (elevation voltage versus azimuth voltage).

The controller software is partitioned into two separate subsystems, with the steering control running at 200 Hz, while the scanner control operates at 1800 Hz. The controller is written in Matlab and Simulink. The controller code is compiled to a dynamic link library (dll) file using the Matlab Real-Time Control toolbox. The dll file is then embedded into the Labview-based instrument controller software.

At present, the Pointing Control System is operational in all control modes. The system is currently in the process of being calibrated and tested for pointing control accuracy.

Time-tagging the scanner action relative to the laser pulses allows the generation of point clouds in XYZ space. Figures 12 and 13 show side and top views of a superstructure, respectively.
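The point-cloud formation step can be sketched as a per-pulse conversion from (azimuth, elevation, range) to Cartesian XYZ in the sensor frame. The frame convention (x forward, z up) is an assumption for illustration.

```python
# Sketch: each time-tagged pulse contributes (az, el, range), converted
# to sensor-frame XYZ. Frame convention is an illustrative assumption.
import math

def shot_to_xyz(az_rad, el_rad, range_m):
    """Spherical shot coordinates to Cartesian (x forward, z up)."""
    x = range_m * math.cos(el_rad) * math.cos(az_rad)
    y = range_m * math.cos(el_rad) * math.sin(az_rad)
    z = range_m * math.sin(el_rad)
    return x, y, z

def build_point_cloud(shots):
    """shots: iterable of (az, el, range) tuples, one per laser pulse."""
    return [shot_to_xyz(az, el, r) for az, el, r in shots]
```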

3.3 Color Integration

The coaxial design allows the color imager to be used as a “cross-hair” for aiming the LADAR at a point or object of interest. The FOV of the LADAR scan can be made to coincide with the FOV of the color imager at a specific zoom setting. The color imager is fixed in the optical path and cannot be adjusted. However, the FSM has a built-in ability for adjustment such that the center of a raster scan can be redefined to correspond with the optical axis of the color optics. Then, through an alignment procedure involving the analysis of field targets, a shot-to-pixel mapping can be determined for all pixels in the color image. This must be done for a variety of zoom settings for the imager. This so-called pixel-level alignment enables accurate placement of the cross-hair as well as the colorization of the points in the point cloud. Figure 14 shows the color image corresponding with the target shown in Figure 12. Figure 15 shows the same target from a wider field of view.

4. IMPLICATIONS FOR AUTOMATIC TARGET RECOGNITION

The ELT supports several areas of ATR research. One of those areas is the study of LADAR detection techniques and their effect on range accuracy and repeatability. If several detection techniques are in consideration for a LADAR system, the ELT could be used to collect a data set with the full return waveform stored for each laser pulse. In post-processing, the various detection techniques could be applied to all of the waveforms and the resulting range


measurements compared to ground truth information. Once the optimum detection technique is chosen and refined, it could be implemented in real-time and eventually in other LADAR systems under development. There are other investigations to be conducted using the full waveform that may lead to improvements in ATR. For example, some knowledge of a material’s Modulation Transfer Function (MTF) may be extracted from the shape of a return pulse. If the pulse shapes can be characterized, this may lead to a way of segmenting man-made objects from clutter. Also, analyzing waveforms having multiple returns may enable a more precise definition of an object’s edge and an improvement to edge-based ATR techniques.

The availability of LADAR-aligned color imagery enables potential improvement to ATR performance through experimentation with data fusion methods. Some of the areas where the fusion of color imagery could prove advantageous are edge detection, target segmentation, and human interpretation. The color imagery has a higher angular resolution than the LADAR, making it useful for more precisely defining edges in 3D space. The color information could also be used in segmentation, employing a like-color, region-growing algorithm, which could improve on the ability to separate the target from its background using LADAR alone. The colorization of LADAR-derived 3D points will greatly enhance the interpretation of the data by a human operator performing targeting tasks with such a sensor. The ELT was designed and built to support development of these and many other concepts that have potential to improve ATR performance.
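The like-color, region-growing idea can be sketched as follows: starting from a seed pixel, grow the region over 4-connected neighbors whose color stays within a tolerance of the seed color. The tolerance and the Euclidean color-distance metric are illustrative choices, not a proposed ATR implementation.

```python
# Sketch: like-color region growing from a seed pixel. Tolerance and
# color metric are illustrative assumptions.
import numpy as np

def grow_region(image, seed, tol=30.0):
    """Boolean mask of 4-connected pixels color-similar to the seed."""
    h, w = image.shape[:2]
    seed_color = image[seed].astype(float)
    mask = np.zeros((h, w), dtype=bool)
    stack = [seed]
    while stack:
        r, c = stack.pop()
        if not (0 <= r < h and 0 <= c < w) or mask[r, c]:
            continue
        # Euclidean distance in RGB space against the seed color
        if np.linalg.norm(image[r, c].astype(float) - seed_color) > tol:
            continue
        mask[r, c] = True
        stack.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return mask
```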

5. CONCLUSION

The ELT is a tool for the ATR developer to test innovative solutions to LADAR data collection and processing. It is the first of several sensors to be integrated into a mobile sensor laboratory known as VISSTA. As a part of VISSTA, the ELT will support the development of future LADAR sensors and complementary ATR algorithms by serving as a data collection asset and a test-bed for LADAR sensor trade studies. The ELT enables the combination of LADAR data with pixel-aligned color information for efficient creation of textured 3D data. Initial data collection has demonstrated some of the useful features of the ELT, such as the ability to analyze the digitized return waveform. Data collection activities and related research will continue in support of LADAR and ATR development programs at the Naval Air Warfare Center, Weapons Division.

Fig. 12. 3D point cloud of a portion of a tower superstructure.

Fig. 13. 3D point cloud of the same tower shown in Figure 12, shown as a cross-section through a cylindrical pipe.


Fig. 14. Color image of the same target shown in Figure 12.

Fig. 15. Overview picture of the target superstructure.

REFERENCES

[1] Neilsen, K. D., Budge, S. E., Pack, R. T., Fullmer, R. R. and Cook, T. D., “Design and validation of the Eye-safe LADAR Test-bed (ELT) using the LadarSIM system simulator,” SPIE International Society for Optical Engineering Defense and Security Symposium, Orlando, FL, Apr. 2009.

[2] Omer, D., Call, B., Pack, R. and Fullmer, R., “Generic simulation of multi-element ladar scanner kinematics in USU LadarSIM,” SPIE International Society for Optical Engineering Defense and Security Symposium, Orlando, FL, Apr. 2006.
