
Shack-Hartmann sensor engineered for commercial measurement applications

Daniel R. Neal

WaveFront Sciences, Inc., 14810 Central S.E., Albuquerque, NM 87123

ABSTRACT

The Shack-Hartmann wavefront sensor has seen an explosion of applications in the last several years. It has found powerful uses not only in astronomy and adaptive optics, but also in ophthalmology, optical testing, laser beam analysis, and semiconductor manufacturing. Part of the reason for this growth in application is the advancement of the various technological components, such as the lenslet array, the CCD and the computer. Part is due to market needs that are driven by other technological advances (such as Lasik). This paper describes the historical background of the development of the technology that ultimately resulted in the formation of WaveFront Sciences and in the growth of applications related to this field.

Keywords: wavefront sensor, Shack-Hartmann, Hartmann-Shack, lens testing, optical testing, optical metrology, laser beam analysis, adaptive optics

1 INTRODUCTION

Practically since the development of the first optics, there has been a need for measurement of the optical quality. Galileo, Newton and others noticed that not all lenses performed equally. While the mathematics of lens design, ray tracing, and optical analysis was slowly developed through the centuries, it wasn’t until Carl Zeiss and Ernst Abbe that the fundamental theory of aberrations was developed and understood. The development of microscopes, telescopes, and many other optical instruments depended on a fundamental understanding of the principles of optics, which were developed by the mathematician Ernst Abbe for use in products being developed by inventor and entrepreneur Carl Zeiss.

Early on, this team realized that the properties of a lens were not constant over the aperture of the lens. Zeiss and Abbe undertook a project to understand both the physics and the mathematics of the image formation process. This led to the development of whole new methods for modeling and measuring the resolution of lenses, to the design of new kinds of multi-lens elements (such as the triplet), and to the enlistment of chemist Otto Schott to develop new glass materials that had the appropriate properties. Ernst Abbe even developed a series of simple aberrometers that were able to measure the variation in vergence of the human eye as a function of pupil position. Their developments in microscopy led to better telescopes, camera lenses, and projectors, and to a tremendous advancement of the field of optics.

Carl Zeiss was, more than anything, an entrepreneur and businessman. He used his new developments in optics to build a series of diverse products that were all based on a similar technology. To this day, the Carl Zeiss company follows this philosophy, building products based on their extensive optical technology. In founding WaveFront Sciences, we attempted to follow a similar philosophy. We chose a small, simple technology base realizing that it had the potential for a large number of different applications and built a company around the technology that was introduced by Roland Shack.

2 DEVELOPMENT OF THE COMMERCIAL WAVEFRONT SENSOR

2.1 Reactor pumped laser experiments

In December 1984, I joined Sandia National Laboratories to work on the development of a Nuclear Reactor Pumped laser. As a team we endeavored to unleash some of the power inherent in a nuclear reaction to directly drive a laser. We built on the earlier work of Dave McArthur and Tollesfrud,1,2 with the goal of eventually building an extremely high power laser that could be used for defense or civilian applications. Early analysis of the reactor-laser designs indicated the possibility of building extremely powerful (10-100 MW) cw lasers with fuel that could potentially last for years. In the context of the Cold War, this was an area of research that both the U.S. and Soviet governments funded for nearly ten years.

While the potential existed for building such a laser and, in fact, several different laser systems had been demonstrated, no significant power had been extracted from a nuclear pumped laser. Our tasks were identifying an appropriate laser medium, developing the optimum pumping mechanism, and building actual working laser systems that could at least prove the feasibility of the concept. Along the way we developed several new technologies, and went from first proving that we could make a Xenon laser, to eventually demonstrating extraction of up to 1 kW cw. This program, called the FALCON program (for Fission Activated Laser CONcepts), prompted many developments in laser technology,3,4 advanced laser resonator designs,5 thermal lensing,6 nuclear-reactor physics, coherent beam combining,7 IR astronomy,8 adaptive optics,9 micro-optics, detectors, cameras, fluid flow and wavefront sensors. Like so many government-funded research programs, the tangible results were not necessarily the direct result of the program's objectives, but rather the serendipity of discovery caused by the continual need to solve difficult technical problems. This same pattern of development and technological innovation was (and is) present in many other NASA, DOE and military programs.

One of the most difficult technical problems associated with the nuclear pumped laser was what we termed medium inhomogeneity. Since the reactor-driven lasers were nearly all gas lasers, the long pulse or cw operation left the gas considerable time to move around during the intended laser operation. The pumping mechanism in our initial lasers was always from the sidewalls of the laser gain region. Thus a significant thermal lens developed during the pulse. In our early experiments we were aware that such an effect was present, but had no means of quantifying it. As I was responsible for measuring and mitigating this effect, I embarked on numerous different schemes for measuring the wavefront of this highly aberrated gain region.

At first we tried to use conventional optical measurement techniques. However, the difficult geometry, the radioactive environment, and the single-shot, short-pulsed nature of the experiments made even the most established measurement techniques extremely difficult. All optical beams had to be relayed out of the reactor vessel over a 17 m optical path. Initially we used high-speed film cameras (1 M frame/sec) to capture data at a sufficient rate to resolve the flow. Even so, with our initial experiments using shearing interferometry, the fringes disappeared after the first few frames.10,11 We were obviously depositing significant energy into the laser gas, but the limited dynamic range of the interferometry experiments didn’t tell us very much about the magnitude of the actual effect. In order to design a laser resonator that had a chance of working, we needed to know the strength of the medium inhomogeneity.

The first experiment that gave us useful data was a simple Hartmann probe experiment.10 In this case we image-relayed (over the 17 m path) a series of probe beams that were created by passing an expanded laser beam through a mask with 13 small holes. We recorded the data on the high-speed Cordin 330 film camera, which records 80 photographic frames at 1 million frames per second (see Figure 1). While I attempted to use image-processing techniques to reduce this data, the computer technology of the time (1986) made this an unwieldy process.* Instead we used a manual digitization scheme where each spot was visually located and recorded using an instrument designed for aerial trajectory analysis. The amazing thing about this experiment is that it produced surprisingly good results (see Figure 2). We were able to measure the strength of the aberration to about 30 µm OPD across the 1.5 cm gain region. This developed in a period, depending on the driving reactor, of 1 – 12 ms. I was impressed with the ability of this simple-minded measurement technique, relying only on the fact that light travels in a straight line, to make these difficult measurements. I also vowed never again to hand-digitize the data, but to find a means for acquiring the data so that it would be directly recorded in a computerized format.

* Digitizing the individual frames resulted in data that was about 300k per frame. While today this doesn’t seem like much, the VAX computer we were using at the time had only a 5 MB hard disk. It took 17 9-track tapes for each measurement sequence, and the results were poor because of background fog on the film.

Figure 1 Experimental geometry for early RPL Hartmann experiments.


Armed with the knowledge of the strength of the aberration, and coupled with the other advances we made in pumping the laser gas and understanding the gas kinetics, we set about designing a laser resonator to extract energy.5 We accomplished this in about 1988 using Xe as a laser medium, after working on XeF for a number of years prior to that. In one of our early experiments5 we noticed that the shape of the laser pulse underwent a dramatic temporal dynamic. Just at the moment when the pumping should have been strongest, the laser energy fell to zero. After the peak pumping, the laser turned back on. After some analysis we realized that the strong aberration in the gain region had led to a change in the resonator stability, and that we had observed two separate transitions. We later used this fact to further quantify the strength of the aberration in a series of experiments that deliberately looked at the transition strength, and thereby inferred the medium inhomogeneity strength.6 These were extremely tedious experiments to conduct. We got only one data point for each reactor run and, at a maximum of four runs per day, it was hard to collect very much data. In addition, the experimenters were all exposed to radiation as a result of the setup between each run.

2.2 Other Shack-Hartmann wavefront sensors

At this same time, there was a large group at Lawrence Livermore National Laboratory working on adaptive optics for a long-distance beam train. They built a series of high-energy lasers, and needed to transport the beam through an underground tunnel nearly 0.5 miles long. While they evacuated the tunnel atmosphere, small aberrations on the various turning mirrors led to significant beam degradation at the receiving end. Thus they undertook the development of a series of increasingly sophisticated adaptive optical systems.12,13 Their early systems relied on modal deformation of mirror elements to provide a low-order correction. They used something they called a "fly's eye" sensor. This consisted of a series of individual lenses that each dissected part of the incoming beam and created a focal spot on a quadrant detector. The differential signal from the quad cell gave them a local wavefront slope measurement that they were able to use to drive the deformable mirror (I think it was a totally analog system).

A similar sensor using 19 individual lenses was used by Scott Acton of Lockheed to build a solar observatory adaptive optic system at high bandwidth.14,15 The instrument was built at Lockheed Palo Alto Research Center and installed at the Solar Telescope facility at Sunspot, New Mexico.

I believe that there were other examples of "fly's eye" or Shack-Hartmann sensors that were used in high-energy laser adaptive optics programs. Most of these were highly classified at the time, so there is little published on this subject.16,17 I do know that an extensive adaptive optics program was underway at the Air Force Weapons Laboratory beginning as early as 1972. A Shack-Hartmann sensor was built by Itek Corporation under government contract in about 1976, but it was limited by the computer processing power available at that time. Both Perkin-Elmer and Itek also built shearing interferometers to measure wavefront error in several adaptive optics experiments during the 1970s. In the 1980s Lockheed designed an outgoing-wave adaptive optics system where the high-power laser beam was sampled by an array of holographic gratings on a large (4 m) primary mirror which diffracted the sampled spots through a hole in the secondary mirror onto an array of sensors. This setup was another example of a Shack-Hartmann sensor designed to measure wavefront error. A deformable mirror was also included in the laser beam train so as to close an adaptive optics control loop. This system was later integrated with a high-power chemical laser built by TRW, and many tests were successfully conducted with the system operating in a vacuum chamber.

2.3 The 1D wavefront sensor

The problem with using this type of sensor for a measurement application was its rather limited dynamic range and its incredible complexity. Scott Acton's wavefront sensor had 19 lenses to align individually to the quad cells, and a complicated electronics bank that had to buffer, combine and difference 76 channels of data. Thermal variations and other effects meant frequent recalibration. The "Shack-Hartmann" sensor seemed to be useful for a few, dedicated adaptive optics applications, but it certainly wasn't something that I could use for high-speed metrology. Nevertheless, our need for wavefront metrology was increasing as we began to develop resonators and then adaptive optics systems. The computer technology and data acquisition instrumentation was slowly improving, so it made sense to try to develop a computerized sensor. And so, Bob Michie and I tried several different ideas, trying to build a fully computerized wavefront sensor system. I contacted Ben Platt, who had worked with Roland Shack to make a film-based wavefront sensor. I didn't want to use film, but thought that I could use a video camera to acquire and record the data. However, at 30 frames per second, even getting one frame at the appropriate time would be fortuitous. I don't think the apparatus that Ben had used to build the first lenslet array still existed. He indicated that he could probably resurrect it for $10-20k. But that still didn't solve the camera problem. So Bob Michie and I went to Lentec (Ben Platt's company at the time) to visit another contractor, Stewart McKechnie. We asked him to design a "fly's eye" sensor, similar to the LLNL instrument, which would use a position-sensitive detector (made by UDT) and directly record the signals on a transient digitizer. To build a 4 X 4 array would take 48 transient digitizer channels and 32 channels of electronics, which, while expensive, was doable. The lens array would be a set of plano-spherical lenses that we would cut into squares and glue together. This instrument would tax our budget and resources, but would allow us to make a fast wavefront measurement that was directly recorded and digitized, a worthy goal.

Figure 2 Hartmann RPL measurements showed significant transient wavefront error in only one dominant direction.

In the car on the way back from the meeting, Bob and I discussed the cost and difficulty of the proposed project. We realized that we really needed data at 10 – 20 kHz, and that 4 X 4 was not very good resolution, even though it implied a tremendous cost in digitizers. Somewhere along the way we realized that the dominant aberration we had seen in previous experiments was only across one direction. Thus we came up with the idea of using a line-scan camera or Reticon diode array, coupled with a cylindrical lenslet array to make a one-dimensional wavefront sensor. This would run at high speed, and could have much better resolution. In addition, it required only a single channel of data acquisition and could readily be scaled to take data during the entire laser pulse. We soon set to work on developing this idea, completely abandoning the 2D sensor as impractical.

The first 1D sensor was made from ten small cylindrical lenses, each 1/10th inch diameter, laid orthogonal to a 2048-element line-scan camera. The lenslet mount and lenslet array are shown in Figure 3. We spent months trying to get a CAMAC-based frame grabber to work, without success. Instead we used a CAMAC-based 12-bit transient digitizer to record the data at 5 MHz. Dave Bodette worked many long hours to write the data acquisition software in Pascal, and I wrote an analysis package in QuickBasic. Bob Michie built a second sensor with 20 1/20th inch lenses. It was extremely difficult to tell which optical surface was powered on these small lenses, so the assembly was extremely tedious, requiring much iterative trial and error.

This simple sensor exceeded all our expectations. After calibrating with a collimated laser beam, we measured an astounding 0.6 µrad RMS error across all the lenslets. I developed algorithms for finding the Areas of Interest, centroiding, comparing to a reference and integrating the slope to form the wavefront (reconstruction is trivial in one dimension). The sensor had 50 µm of dynamic range and 1/50th wave (or better) sensitivity. Truly this was a remarkable measurement technique. The first experiments on the reactor got data faster and more accurately than we had ever achieved, and we quickly used this as a standard measurement technique on every reactor experiment.
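To make that processing chain concrete, here is a minimal sketch of the 1D pipeline in Python with NumPy, assuming the line-scan frame is already available as an array; the function and parameter names are illustrative only, not the original Sandia software.

import numpy as np

def centroid_1d(signal, start, stop, threshold=0.0):
    """Center-of-mass position of one focal spot within an area of interest."""
    window = np.clip(signal[start:stop].astype(float) - threshold, 0.0, None)
    pixels = np.arange(start, stop)
    return np.sum(pixels * window) / np.sum(window)

def wavefront_1d(frame, ref_frame, aoi_bounds, focal_length, pixel_size, lenslet_pitch):
    """Spot shifts -> local slopes -> trapezoidal integration to a 1D wavefront."""
    spots = np.array([centroid_1d(frame, a, b) for a, b in aoi_bounds])
    refs = np.array([centroid_1d(ref_frame, a, b) for a, b in aoi_bounds])
    slopes = (spots - refs) * pixel_size / focal_length  # local tilt, in radians
    # Reconstruction is trivial in one dimension: cumulative integration of the slope.
    opd = np.concatenate(([0.0], np.cumsum(0.5 * (slopes[1:] + slopes[:-1]) * lenslet_pitch)))
    return slopes, opd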

At that time I mentioned this instrument to Tim Turner as a possible cornerstone for forming a company. His response was: “How could you build it and who cares?” He correctly assessed that the instrument, while extremely useful to the engineer who built it, was not anything like a commercial product. It couldn’t be manufactured with its unwieldy parts and meticulous lenslet assembly, and using it was time-consuming and tedious. Furthermore, I couldn’t think of any other applications that needed such a combination of dynamic range and sensitivity.

Figure 3 Lenslet array and mount for initial 10 and 40-element sensors.


At the same time as I was building this low cost sensor, other strides were being made in the field. Robert Fugate and his team from the Air Force Weapons Lab (later called the Phillips Lab) were building generation after generation of atmospheric correction adaptive optical systems.18,19,20 There were numerous others in the military-industrial complex who developed sophisticated adaptive optics systems and wavefront sensors. Most of these systems were highly classified in the 1980s. They were also extremely expensive, typically the result of $500k – 1M engineering development projects. Much of this cost was the result of the atmospheric adaptive optics application. The dynamics of turbulence significantly drove the bandwidth requirements. So, special camera systems, electronics, adaptive optics and computer hardware had to be developed in order to make a working system.

By 1991, Bille and Liang,21,22 and later Andreas Dreher, were working on what they called a Hartmann-Shack sensor for ophthalmic measurement and adaptive optics. Stefan Goelz et al used a Hartmann-Shack sensor to measure corneal topography.23 They used a wavefront sensor that had been built for the European Southern Observatory for astronomical applications.

All of these sensors were a long way from being commercial sensors. The primary paradigm was performance-based, driven by the application rather than by the available technology. For example, atmospheric adaptive optics required a total closed-loop bandwidth of at least 100-120 Hz, with enough actuators on the deformable mirror to correct for typical turbulence scales (~5 – 10 cm). Thus the sensor needed to operate at 1 – 2 kHz. This required special sensors, data acquisition, computers and processing.24 If the required bandwidth could not be met, then there was no point in building the system. (Fewer lenslets lead to less data to process, making it easier to meet the required bandwidth.) In addition, the light levels were extremely low for interesting astronomical objects. Thus it was desirable to collect all the light onto as few detector elements as possible and to minimize the number of sensor elements. However, as has been shown by several authors, in the high-photon limit the best accuracy will be obtained using a larger number of pixels.25

2.4 The lenslet array

One of the key technologies that limited the fabrication of the Shack-Hartmann sensor was the lenslet array. Early lenslet arrays were made by individually mounting a group of lenses, or by cutting square or hexagonal sections of lenses and gluing them together. The lenslet array originally developed by Platt and Shack was made from a mold formed by pressing a small ball into a piece of plastic. The array was made by stepping and repeating the process. Adaptive Optics Associates developed a much more sophisticated version of this to make their Monolithic Lens Modules (MLM). LLNL made a gray-scale mask by repetitive exposure of film using a Gaussian laser beam.13 They fabricated the lens in a photosensitive material and then adjusted the focal length using an index-matching fluid. Most of these techniques produced usable lenslet arrays. However, the quality of the film, the material in which it could be fabricated, and the ability to make small lenslets were often the limiting factors.

In 1989-90, Wilfrid Veldkamp of MIT/LL introduced a new method for making optics. He used integrated-circuit fabrication techniques to make small optics. This new class of optics was initially called binary optics†, from the method of using binary masks to successively build up a profile, but it was the forerunner for a class of optics now known as micro-optics or diffractive optics.26,27 Veldkamp invented a host of new applications and fabrication technologies for these binary optics. As I learned about this new fabrication technique, I immediately knew that this was the way to make small, accurate lenslet arrays.

† The term "binary optics" is really a misnomer. The surface profile was built up from successive exposure of a binary mask that was etched to successively different depths, usually each twice the depth of the previous step. Really these optics should be called digital optics, since their shape is approximated digitally. A true binary optic would only have two levels.


As the FALCON program shifted to more adaptive optics and flow applications, I also began to pursue the field of binary or micro-optics. Initially I concentrated on the development of new applications for these optics, but I was also searching for better ways to fabricate them. In conjunction with Mial Warren and others from the CSRL at Sandia, we learned how to implement the binary optics technique, developed programs that converted optical information into mask writer commands, and fabricated and etched micro-optics. Our first efforts went primarily toward coming up to speed on the technology, but the CSRL team was able to make me a series of lenslet arrays for Shack-Hartmann wavefront sensors (both 1D and 2D). An example of one of these early lenslet arrays is shown in Figure 4.

A 40-element sensor that gave excellent data quickly supplanted the 10 and 20 element instruments. Now, the accuracy of the lenslet array allowed me to really know where the center of each lens was located. This made it possible to accurately trace rays from the lenslet pupil to the focal spot, thereby accurately determining the wavefront slope. Furthermore, the added resolution meant that we could measure much more complicated wavefronts, such as turbulence or other aerodynamic phenomena. One afternoon, I set up the sensor in my lab and tried to repeat my PhD thesis, in which I had measured fuel droplet evaporation concentration fields in free-fall.28,29 In a short time I demonstrated that I could measure the fuel evaporation concentration with far better accuracy, repeatability and resolution than I had achieved in several years of work using laser induced fluorescence (see Figure 5). In another experiment, I was able to resolve individual vortex shedding from a small fan in the laboratory environment. The higher resolution, coupled with excellent accuracy and large dynamic range, suddenly enabled a host of applications that had previously required difficult optical experiments.

In trying to measure the flow in the reactor-pumped gain region, I had, at one time or another, tried many different optical measurement techniques. These included schlieren photography, laser shadowgraph, Mach-Zehnder, Michelson and shearing interferometry, heterodyne interferometry, BaTiO3 crystal time-dependent schlieren, temporally resolved scattering, two-pulse holography, and numerous other techniques. The FALCON program had begun to use flowing laser gas to help solve the medium inhomogeneity problem, and now we were faced with the problems of turbulent flow with possibly large temperature gradients. To study this, we built two different wind tunnels in my lab, and studied a number of turbulence and flow issues.30

Our initial measurement technique was the Mach-Zehnder interferometer. Rich Shagam helped to develop this test system and to analyze the data. We were trying to make some turbulence measurements over a 50-cm path that we could extrapolate to the 8-m plus path that we expected in a realistic reactor-laser system. Unfortunately, the aberrations over the 50-cm path were less than λ/40, which was just at the resolution of the interferometer we set up. One day, frustrated with our inability to get a useful result from the interferometer, I set up the 1D wavefront sensor in place of the acquisition camera. We immediately got a strong signal; however, it was strongly periodic. At first we thought that it was AC pickup from the large heating elements that heated the flow upstream. But, on closer examination, we realized that it was a 120 Hz signal, not 60 Hz. This could not be just AC pickup, but instead was the actual temperature variation of the heated gas caused by the I²R heating. After some aerodynamic analysis, we were able to estimate that the effect would be about 0.3 °C. This was astounding sensitivity for a sensor that was designed to measure effects of over 30 µm. Rich Shagam, Tim O'Hern, John Torczynski and I went on to use the 1D WFS to measure both the heated-screen flow and the combined effects of thermal gradients and turbulence.30

Figure 4 Micro-optic lenslet array fabricated with the multiple mask "binary" method.

Figure 5 Droplet evaporation vapor cloud measured with the 40-element 1D sensor.

I presented some of these results at the SPIE annual meeting in San Diego in 1993.31 Between 1991 and 1994 I was approached by a number of different groups interested in further development of the 1D WFS. Several of these were companies looking for either a solution to a time-dependent measurement problem or just a way to measure phase with larger dynamic range. After checking with Sandia, I discovered that there wasn't any way for Sandia to build such a sensor, except as part of a CRADA or Work For Others (WFO) program. These required a minimum $50,000 per year contract, and there was an extensive justification cycle where I had to convince Sandia that they should accept money from an outside source. Two different military organizations, the Air Force Phillips Laboratory and the Naval Surface Warfare Center, contracted with Sandia to further develop the 1D wavefront sensor for their specific problems. AFPL eventually built eight 1D sensors based on the Sandia design that were used for tomographic flow measurement.32,33,34 The Naval Surface Warfare Center (NSWC) and later the A.F. Arnold Engineering Development Center (AEDC) supported a whole series of experiments for measuring the bore-sight error and aberrations of a supersonic missile interceptor seeker window.35,36,37 These developments brought renewed interest in the sensor as a potential commercial product. With the micro-optics technology there was now a way to manufacture the lenslet array, and the outside interest indicated that there really might be a market.

2.5 Founding of WaveFront Sciences

I was finally able to answer Tim's question. At Sandia we had a manufacturable sensor that showed some real market potential. Thus during 1994 and early 1995 Tim Turner and I began exploring the idea of forming a company to commercialize the wavefront sensor technology. This relied on the convergence of a number of factors:

• Improved computer technology allowing access to significant amounts of memory, with an integrated graphical user interface

• Availability of frame grabbers that could conveniently be integrated with cameras and computers

• Availability of low cost CCD cameras with good S/N and simplified electronic interfaces

• Development of micro-optics technology that provided excellent quality lenslet arrays

• Development of market applications such as M² measurement

• Encouragement from Sandia management to commercialize government funded technology

Figure 6 Three generations of ophthalmic aberrometer


WaveFront Sciences was founded in December 1995, with our initial efforts funded by Tim Turner and Dan Neal. Venture capital funding allowed us to begin full time operations in September of 1996, and our first product shipped in November 1996.

It is interesting to note that WaveFront Sciences was not the first company to attempt to commercialize wavefront sensor technology. Adaptive Optics Associates had for several years promised the introduction of a 2D sensor similar to the instrument developed for NASA to test the Hubble telescope correctors38. Zeiss developed a 2D sensor in both Jena and in Oberkochen (East and West Germany at the time). But, to my knowledge, they never significantly commercialized it except for some internal and university applications. Deborah Kwo, then at Hughes Research Center, developed a micro-optic lenslet array based wavefront sensor as part of an IR&D project,39 but it was never released commercially.

The key difference with the WaveFront Sciences sensor (called the Complete Light Analysis System CLAS2D) is that we changed the paradigm for building Shack-Hartmann wavefront sensors. Instead of designing a custom sensor for astronomical or adaptive optics applications that need to meet certain bandwidth, dynamic range, accuracy and resolution requirements, we designed a range of sensors using off-the-shelf components. This resulted in a significantly different price point, since we could do the design once and then sell multiple copies with little engineering.

Over the last eight years we have developed a very large list of applications. These include laser beam measurement,40,41 optics measurement,42,43,44 IR wavefront sensors for telecomm applications,45 wafer metrology and nanotopography,46 ophthalmic aberrometry,47,48,57 large optics testing,49 IOL metrology,50 and contact lens and mold measurement. Along the way, we have significantly advanced the development of these sensors. In addition to taking advantage of developments in new computers, micro-optics, cameras, frame grabbers and software, we have developed numerous new analysis and measurement methodologies.25,51

Two of these applications have had a significant impact on the development and utility of other technologies and have resulted in fairly large market potential.

For several years the Shack-Hartmann wavefront sensor has been applied to ocular aberration measurement.52,53 The availability of commercial instruments has significantly improved the entire Lasik process. The initial goal was to use a measurement of the ocular aberrations to provide a near-perfect optical system for vision.54 This has proven to be very challenging. With the advent of the aberrometer, laser eye surgeons were able to determine that the laser was introducing a number of aberrations. The ablation profile has since been adjusted to minimize this effect. This, by itself, resulted in a significant improvement in the Lasik outcomes. With better eye registration, a number of companies are now finding that they get even better results with a fully customized ablation, though the results fall somewhat short of the grand claims of "20/10 vision in every eye." WaveFront Sciences developed a breadboard instrument in 1999 that was used to assess the effects of low pressure (due to high altitude) on post-Lasik vision. This was one of the key studies that provided the U.S. Navy with the information it needed to determine whether Lasik was a safe procedure for fighter pilots. This breadboard instrument formed the basis for the development of the Complete Ophthalmic Analysis System (COAS) that has been used by researchers, clinicians, and Lasik surgeons throughout the world. The COAS aberrometer has a high-resolution lenslet array that provides for excellent accuracy, precision and dynamic range. While Figure 6 shows three generations of the ophthalmic instrument, we are currently working on the 4th and 5th generation instruments.


Figure 7 Silicon wafer measured with the Columbus instrument. The fine details are regions of higher boron doping concentration and are about 2-3 nm in height.


Another application that is beginning to be important is the metrology of silicon wafer surfaces. Small polishing defects in the wafer can lead to failures in the integrated circuit that only develop late in the process. Thus it is extremely important to identify these at the bare wafer stage. Unfortunately, this means that extremely small features must be detected. As the feature sizes shrink, these effects become even more important. The Columbus wafer metrology instrument has been developed over the last five years to measure features that are less than one nm in height (see example measurement in Figure 7). This has been extremely challenging due to environmental, calibration, reliability, wafer chuck interaction and other effects. The first instruments are shipping this year.

3 SHACK-HARTMANN WAVEFRONT SENSING

The Shack-Hartmann wavefront sensor was developed as an extension of the Hartmann test technique by Roland Shack and Ben Platt at the OSC.55,56 A companion paper, Historical Development of the Shack-Hartmann Wavefront Sensor, by Jim Schwiegerling and Dan Neal, included in this volume, describes in more detail the history of the development of this sensor.

3.1 Principle

Figure 8 shows the arrangement of a typical modern Shack-Hartmann sensor. In this case a lenslet array, fabricated using photolithography and etching in fused silica, is used to collect the light and direct it onto a CCD array sensor. The grid of pixels on the CCD array provides an accurate measurement of the focal spot positions.

The lenslet array breaks up the incident wavefront into a large number of small sub-apertures. The key assumption is that, over each sub-aperture, the only wavefront variation is local tilt. This is readily achieved with sufficiently high-resolution lenslets57. The light from each of these samples is collected by the lenslet and focused on the detector. Since the region is small, this usually creates a well-formed focal spot whose position is shifted corresponding to the local wavefront tilt. The CCD detector records this focal spot position, and thus, by comparison against a reference, the local slope can be determined. With a large number of local slope measurements, the wavefront surface can be reconstructed numerically.

Since the information for all of the focal spots is obtained simultaneously, everything needed is captured in a single CCD frame. With modern CCD camera systems very short exposure times can be used. If there is tilt caused by vibration that occurs between successive frames, it will result in a lateral shift of all the focal spots on the CCD. This is readily identified and subtracted out, or measured if it is useful. The single-frame acquisition also means that if the wavefront structures are dynamic (changing rapidly), the instantaneous wavefront will be measured with little error.
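As a small illustration of that tilt removal (a hypothetical NumPy snippet, not the commercial algorithm): a rigid tilt of the incoming wavefront shifts every focal spot by the same amount, so subtracting the mean spot displacement leaves only the higher-order wavefront content.

import numpy as np

def remove_global_tilt(displacements):
    """displacements: (N, 2) array of spot shifts (x, y) relative to the reference centroids.
    Subtract the mean shift, which corresponds to overall wavefront tilt."""
    displacements = np.asarray(displacements, dtype=float)
    tilt = displacements.mean(axis=0)   # common shift of all focal spots
    return displacements - tilt, tilt   # residual (higher-order) shifts and the removed tilt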

The focal spot locations are usually determined by an algorithm called the centroid algorithm:‡

$$x_k = \frac{\sum_{j \in AOI_k} S_j\, x_j}{\sum_{j \in AOI_k} S_j}\,, \qquad (1)$$

where $S_j$ is the modified irradiance distribution over a region $AOI_k$ corresponding to the light from a particular lenslet. A similar equation applies for the y-coordinate of the spot locations. Typically, a threshold algorithm is applied to the irradiance distribution to produce the modified distribution, although other algorithms may apply (deconvolution, for instance).

‡ This is actually a misnomer. It would more accurately be called a center-of-mass algorithm, since it includes a weighted distribution in the calculation, and not just the shape of the boundary. For connection with the literature in this subject, we've continued to use the term centroid to refer to the determination of these spot positions.

Figure 8 Basic arrangement of a Shack-Hartmann wavefront sensor.

A reference beam is recorded for use in determining the wavefront gradients from the spot position measurements. Usually this is obtained by recording a plane wave, although the reference may also be calculated numerically. This provides a set of reference centroids $x_k^{ref}$ and $y_k^{ref}$.

The wavefront gradient for each location k on the sensor is:

$$\begin{pmatrix} \beta_x \\ \beta_y \end{pmatrix}_k = \frac{1}{f} \begin{pmatrix} x_k - x_k^{ref} \\ y_k - y_k^{ref} \end{pmatrix}, \qquad (2)$$

where f is the lenslet to detector spacing, which is usually set to the focal length of the lenslet.
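A minimal Python/NumPy sketch of these two steps is shown below; the array layout and parameter names (image, aois, pixel_size) are assumptions made for illustration, not the commercial implementation.

import numpy as np

def spot_centroid(image, aoi, threshold=0.0):
    """Equation (1): intensity-weighted spot position over one area of interest.
    aoi = (row0, row1, col0, col1) in pixel coordinates."""
    r0, r1, c0, c1 = aoi
    S = np.clip(image[r0:r1, c0:c1].astype(float) - threshold, 0.0, None)
    rows, cols = np.mgrid[r0:r1, c0:c1]
    total = S.sum()
    return (cols * S).sum() / total, (rows * S).sum() / total  # (x, y) in pixels

def wavefront_gradients(image, ref_centroids, aois, focal_length, pixel_size):
    """Equation (2): beta = (measured centroid - reference centroid) / f, per lenslet."""
    grads = []
    for aoi, (x_ref, y_ref) in zip(aois, ref_centroids):
        x, y = spot_centroid(image, aoi)
        grads.append(((x - x_ref) * pixel_size / focal_length,
                      (y - y_ref) * pixel_size / focal_length))
    return np.array(grads)  # shape (number of lenslets, 2), slopes in radians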

The wavefront gradients are connected by the assumption that the wavefront is continuous. While there are some situations where this assumption breaks down, for these very small lenslets it is usually quite realistic. Thus for each point k, at lenslet coordinates $(x_k, y_k)$:

$$\nabla w = \frac{\partial w}{\partial x}\,\hat{i} + \frac{\partial w}{\partial y}\,\hat{j} = \beta_x\,\hat{i} + \beta_y\,\hat{j}, \qquad (3)$$

which is just the definition of the gradient in terms of the scalar field w(x, y), except that we have substituted the measured local gradients $\beta_x$ and $\beta_y$.

This equation can be solved for the wavefront w(x, y) in a number of ways. The surface can be described in terms of polynomials,58 and then a least-squares fitting routine can be used to find the appropriate coefficients. This is the so-called modal method. Alternatively, the slope data can be used to solve for a self-consistent set of wavefront heights.59
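As one illustration of the modal approach, the measured slopes can be fit in a least-squares sense to the analytic derivatives of a chosen basis; the sketch below assumes a simple second-order polynomial basis purely to show the mechanics, not the basis or algorithm used in the commercial software.

import numpy as np

def modal_fit(x, y, beta_x, beta_y):
    """Least-squares fit of measured slopes to the gradients of the basis
    w = a1*x + a2*y + a3*x**2 + a4*x*y + a5*y**2 (piston omitted)."""
    zeros, ones = np.zeros_like(x), np.ones_like(x)
    dWdx = np.column_stack([ones, zeros, 2 * x, y, zeros])   # d(basis)/dx
    dWdy = np.column_stack([zeros, ones, zeros, x, 2 * y])   # d(basis)/dy
    A = np.vstack([dWdx, dWdy])                  # stacked slope design matrix
    b = np.concatenate([beta_x, beta_y])         # measured gradients
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs

def evaluate_wavefront(coeffs, x, y):
    """Evaluate the fitted wavefront at the lenslet coordinates."""
    basis = np.column_stack([x, y, x**2, x * y, y**2])
    return basis @ coeffs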

3.2 Shack-Hartmann wavefront sensor design

There are a number of different criteria that influence the design of the wavefront sensor.60 The dynamic range and sensitivity are primary performance criteria that are used to select a sensor, but there are also a number of constraints that influence the design choice. These constraints are illustrated in Figure 9. In this figure the constraints are shown as lines that represent different limits imposed on the sensor. These limits usually include the following:

Minimum lenslet to CCD spacing: The mechanical considerations of the CCD (or other detector chip) impose a limitation on how closely the lenslet array may be mounted to the CCD. Therefore, the focal length of the selected lenslet cannot be less than a certain value (usually about 1.5 mm).


Crosstalk: Crosstalk between lenslets is caused by diffraction. For the rectangular grid of lenslets this can be a significant effect, especially for narrow-line lasers. Thus we have used the Fresnel number

$$N_{Fr} = \frac{d^2}{f\lambda} = \frac{d}{\rho} \qquad (4)$$

as a parameter to identify lenslet designs that have the same crosstalk. Another way to look at this parameter is as the ratio of the lenslet diameter d to the size ρ of the spot it creates, as shown in the second equality of Equation (4). If the focal spot completely fills the detectors behind each lens, then the instrument has zero dynamic range. Thus there is a limit on the minimum Fresnel number both from a practical (dynamic range) consideration and because of crosstalk. In practice the lenslets are usually small enough that the crosstalk is not overly significant. Previous simulations have shown that crosstalk is minimized for $N_{Fr} > 4.0$. For most broader-band sources (normal linewidth lasers) a Fresnel number down to 2.5 is a practical limit.

Dynamic range: The dynamic range of the wavefront sensor is given by

$$\theta_{max} = \frac{d}{2f} - \frac{3\lambda}{4d}\,; \qquad (5)$$

thus, there is a constraint associated with achieving a certain minimum dynamic range.

Pixels per focal spot: One of the key limitations on the accuracy of the SHWFS is the number of pixels covered by each focal spot. In the low-photon limit, it has been shown that this can be as few as four pixels per focal spot, so that the sensor more or less operates as a collection of quad-cell detectors. However, with a brighter light source, the sensor will obtain better accuracy if a large number of pixels is covered by each focal spot.25 The number of pixels covered by the focal spot is given by

$$N_{px} = \left( \frac{2\lambda f}{d\, p_x} \right)^{2}, \qquad (6)$$

where $p_x$ is the pixel size. To obtain accurate results, at least 9-16 pixels must be covered by the focal spot, thus introducing another constraint shown in Figure 9.

Pixels per lenslet: In order to adequately process the data, there must be at least a minimum number of pixels underneath each lenslet. While we have made sensors with as few as 7 pixels per lenslet, in practice this leads to sensors with poor accuracy. So, the minimum number of pixels across each lenslet should be about 8.

Number of samples: For a given number of pixels per lenslet and a given size array, there is a fixed number of samples that may be obtained. To adequately sample a typical wavefront that has some higher-order structure, there is a minimum number of samples across the aperture that must be obtained. This places constraints on the largest lenslets that can be used for a particular application.
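These constraints can be evaluated directly for a candidate lenslet. The helper below collects the expressions given above (including Equations (4)-(6) as reconstructed here) into one function; all parameters are passed in explicitly, and the values in the final call are placeholders rather than figures taken from the text.

def lenslet_constraints(d, f, wavelength, pixel, n_pixels_across):
    """Design-space quantities for one lenslet option.
    d: lenslet diameter, f: lenslet-to-detector spacing, pixel: detector pixel size
    (all in metres); n_pixels_across: detector width in pixels."""
    n_fresnel = d**2 / (f * wavelength)                    # Equation (4)
    dyn_range = d / (2 * f) - 3 * wavelength / (4 * d)     # Equation (5), radians
    pix_per_spot = (2 * wavelength * f / (d * pixel))**2   # Equation (6)
    pix_per_lens = d / pixel                               # pixels across each lenslet
    n_samples = n_pixels_across / pix_per_lens             # lenslets across the detector
    return {"N_Fr": n_fresnel, "dynamic_range_rad": dyn_range,
            "pixels_per_spot": pix_per_spot, "pixels_per_lenslet": pix_per_lens,
            "number_of_samples": n_samples}

# Placeholder example: a 0.15 mm lenslet with 10 mm spacing, 0.633 um light,
# 10 um pixels and a 640-pixel-wide detector (none of these values come from the text).
print(lenslet_constraints(d=0.15e-3, f=10e-3, wavelength=0.633e-6,
                          pixel=10e-6, n_pixels_across=640))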

[Figure 9 plot: lenslet design space versus lenslet diameter (mm), with constraint curves labeled NFr > 2.5, Spacing > 2, Dyn Range > 3.5, Pix/spot > 9, Pix/lens > 8, and N samp > 16.]

Figure 9 Design space for Shack-Hartmann wavefront sensor lenslet array.


In Figure 9, lines are shown where these various constraints are held constant. There is a roughly triangular region that meets all the various constraints. In practice the best accuracy is obtained for longer focal length lenslets, so the designs typically concentrate along the upper part of this space.

For any given system, the requirements usually center around a desired dynamic range, resolution or accuracy. Cost, size or other constraints will lead to a selection of camera, which will in turn lead to the identification of a number of the constraint lines in Figure 9. The other constraints are determined by the design goals. For example, the design space outlined in this figure shows the constraints for a new large field-of-view IR sensor to be used for space communication applications. In this case the desire was to have a greater FOV than the previous sensor, which had only a 320 X 240 detector, by taking advantage of new 640 X 512 camera systems. These cameras have a practical minimum lenslet-to-detector spacing of about 2 mm due to mechanical constraints. For the space telecomm application, a large dynamic range was not needed, but better accuracy was desired. For a narrow-line laser, the constraint of NFr > 4 was chosen as the limit due to crosstalk, and a dynamic range of > 3.5 mrad was determined. To get adequate accuracy, at least 9 pixels per focal spot were desired and at least 16 samples across the detector. Nearly all the designs satisfy the 8 pixels per lenslet requirement. We chose several different designs along the constant Fresnel number curve to have various options for testing. Typical values are 5.0 mm focal length for 0.1 mm diameter lenslets, 13.0 mm f.l. with 0.15 mm diameter, and 25.0 mm f.l. with 0.2 mm diameter. In practice, we usually adjust the Fresnel number slightly so that all the lenslet designs have the same level spacing in order to simplify the mask design.

4 CONCLUSION

In this paper I have attempted to describe the history and events that led to the development of a commercial Shack-Hartmann wavefront sensor, based on the principles initially set forth by Roland Shack and Ben Platt of the OSC. I've tried to show how the (initially) largely military work has led to the development of this technology. In addition, I've described the basic principles and outlined an approach for designing practical sensors. The key ingredient for commercializing the Shack-Hartmann wavefront sensor was not any great inspiration or invention on my part, but rather a sense of the long-term applications and a willingness to stick with the difficult task of developing the technology and building a company.

Disclaimer

While I have attempted to describe an accurate history of the development of the commercial Shack-Hartmann wavefront sensor, those such as Roland Shack, Ben Platt, and Johannes Hartmann laid the groundwork. If anything, my own contribution is primarily in recognizing and developing some new applications, and in changing the paradigm of instrument development. My attempt to relate the history of this development naturally is biased by my own personal perspective. I have attempted to place the events in perspective, but undoubtedly I have left out key contributors, parallel developments and other important events to which I have not been privy. Thus this paper is mostly autobiographical, and I apologize in advance for any omissions or inaccuracies.

5 ACKNOWLEDGEMENTS

The author would like to acknowledge the many people who contributed to my personal and professional development through the years. Chief among these is my Dad, Dale Neal, whose willingness to tackle any task has been a constant inspiration to me. Others who inspired or helped me along the way include Dave McArthur, Dave Bodette, Paul Pickard, Jim Gerardo, Jim Rice, Jack Walker, Stewart McKechnie, Tim Turner, Ron Rammage, Jim Wyant, John Hays, TD Raymond, Joe Alford, Don Kepler, Roy Stewart, Bill Sweatt, John Torczynski, Don Baganoff, Lenore McMackin, Bill Yanta, Eric Hedlund, Chuck Spring, Manfred Dick, Hartmut Vogelsang, Robert Knollenberg and a host of others. I’ve been blessed by a group of excellent employees, who are responsible for the success of WaveFront. My wife and family’s patience and support have been both instrumental and amazing.

6 REFERENCES

1. D. A. McArthur and P. B. Tollesfrud, Appl. Phys. Lett. 26, 327 (1975).
2. G. H. Miley, "Review of Nuclear Pumped Lasers," Laser Interaction and Related Plasma Phenomena 6, Plenum Pub. (1984).
3. J. K. Rice, G. N. Hays, D. R. Neal, D. A. McArthur and W. J. Alford, "Nuclear Reactor Excitation of XeF laser gas mixtures," Proc. of the Ninth Intl. Conf. on Lasers '86 (Nov. 1986).
4. G. N. Hays, D. A. McArthur, D. R. Neal and J. K. Rice, "Recent results from nuclear-pumped laser studies: gain measurement in XeF," Laser Interaction and Related Plasma Phenomena 7, Plenum Pub. (1986).
5. D. R. Neal, W. C. Sweatt, and J. R. Torczynski, "Resonator design with an intracavity time-varying index gradient," SPIE 965, pp. 130-141 (1988).
6. D. R. Neal, J. R. Torczynski, and W. C. Sweatt, "Resonator stability effects in 'quadratic-duct' nuclear-reactor-pumped lasers," Proc. of the Ninth Intl. Conf. on Lasers '88 (1989).
7. D. R. Neal, S. D. Tucker, R. Morgan, T. G. Smith, M. E. Warren, J. K. Gruetzner, R. R. Rosenthal, and A. E. Bentley, "Multi-segment coherent beam combining," SPIE 2534, pp. 80-94 (1995).
8. D. R. Neal, R. B. Michie, J. Drummond, J. Christou, D. McCarthy, and T. S. McKechnie, "An experiment to investigate the isoplanatic angle of the atmosphere in the near infrared," OSA Adaptive Optics for Large Telescopes Topical Meeting, Maui, Hawaii (1992).
9. D. R. Neal, P. L. McMillin, and R. B. Michie, "An astigmatic unstable resonator with an intracavity deformable mirror," SPIE 1542, pp. 449-458 (1991).
10. D. R. Neal, W. C. Sweatt, W. J. Alford, D. A. McArthur, and G. N. Hays, "Application of high-speed photography to time-resolved wavefront measurement," SPIE 832, pp. 52-56 (1987).
11. D. R. Neal, J. R. Torczynski and W. C. Sweatt, "Time-resolved wavefront measurements and analyses for a pulsed, nuclear-reactor-pumped laser gain region," Optical Engineering 29(11), pp. 1404-1412 (Nov. 1990).
12. J. T. Salmon, E. S. Bliss, T. W. Long, E. L. Orham, R. W. Presta, C. D. Swift, and R. S. Ward, "Real-time wavefront correction system using a zonal deformable mirror and a Hartmann sensor," SPIE 1542, pp. 459-467 (1991).
13. J. S. Toeppen, E. S. Bliss, T. W. Long and J. T. Salmon, "A video Hartmann wavefront diagnostic that incorporates a monolithic microlens array," SPIE 1544, pp. 218-225 (1991).
14. D. S. Acton and R. C. Smithson, "Solar astronomy with a 19-segment adaptive mirror," SPIE 1542, pp. 159-164 (1991).
15. D. S. Acton and R. B. Dunn, "Solar imaging at the National Solar Observatory using a segmented adaptive optics system," SPIE 1920, pp. 348-352 (1993).
16. J. W. Hardy, "Adaptive optics - a progress review," SPIE 1542, pp. 2-17 (1991).
17. D. Greenwood and C. A. Primmerman, "The history of adaptive-optics development and the MIT Lincoln Laboratory," SPIE 1920, pp. 220-234 (1993).
18. R. Q. Fugate, "Measurement of atmospheric distortion using scattered light from a laser guidestar," Nature 353, pp. 144-146 (Sept. 12, 1991).
19. R. Q. Fugate, et al., "Two generations of laser-guide-star adaptive optics experiments at the Starfire Optical Range," JOSA A 11(1), pp. 310-324 (1994).
20. R. Q. Fugate, "Observations of faint objects with laser beacon adaptive optics," SPIE 2201, pp. 10-21 (1994).
21. J. Liang, "Application of Hartmann-Shack sensor to evaluate optical performance of the human eyes," Ph.D. dissertation, University of Heidelberg (1991).
22. J. Liang, B. Grimm, S. Goelz, and J. F. Bille, "Hartmann-Shack sensor as a component in active optical system to improve the depth resolution of the laser tomographic scanner," SPIE 1542, pp. 543-554 (1991).
23. S. Goelz, J. J. Persoff, G. D. Bittner, J. Liang, C. T. Hsueh, and J. F. Bille, "A new wavefront sensor for metrology of spherical surfaces," SPIE 1542, pp. 502-511 (1991).
24. L. Jiang, Z. Rong, and C. Wang, "Architecture of a multiprocessor system for Hartmann-Shack wavefront sensor," SPIE 1920, pp. 211-218 (1993).
25. D. R. Neal, J. Copland, and D. A. Neal, "Shack-Hartmann wavefront sensor precision and accuracy," SPIE 4779 (2002).
26. W. B. Veldkamp, "Overview of microoptics: past, present, and future," SPIE 1544, pp. 287-299 (1991).
27. "Diffractive and Miniaturized Optics," Sing Lee, ed., SPIE CR49 (1993).
28. D. R. Neal, "Application of Laser Induced Fluorescence to the Measurement of Droplet Evaporation," Ph.D. Thesis, Dept. of Aeronautics and Astronautics, Stanford Univ. (1984).
29. D. R. Neal and D. Baganoff, "Laser-induced fluorescence measurement of vapor concentration surrounding evaporating droplets," SPIE 540, pp. 347-355 (1985).
30. T. J. O'Hern, R. N. Shagam, D. R. Neal, A. J. Suo-Anttila, and J. R. Torczynski, "Downstream evolution of turbulence from heated screens: Experimental and analytical results," Sandia Report SAND92-0480 (1992).
31. D. R. Neal, T. J. O'Hern, J. R. Torczynski, M. E. Warren, and R. Shul, "Wavefront sensors for optical diagnostics in fluid mechanics: applications to heated flows, turbulence and droplet evaporation," Optical Diagnostics in Fluid and Thermal Flows, SPIE 2005 (1993).
32. L. McMackin, B. Masson, N. Clark, K. Bishop, R. Pierson, and E. Chen, "Hartmann wave front sensor studies of dynamic organized structure in flowfields," AIAA Journal 33(11), pp. 2158-2164 (1995).
33. L. McMackin, J. Wissler, N. Clark, E. Chen, K. Bishop, R. Pierson, and B. Staveley, "Hartmann sensor and dynamic tomographical analysis of organized structure in flow fields," AIAA 94-2548 (1994).
34. B. Masson, L. McMackin, J. Wissler, and K. Bishop, "Study of a round jet using a Shack-Hartmann wavefront sensor," AIAA 95-0644 (1995).
35. D. R. Neal, D. J. Armstrong, E. Hedlund, M. Lederer, A. Collier, C. Spring, J. Gruetzner, G. Hebner and J. Mansell, "Wavefront sensor testing in hypersonic flows using a laser-spark guide star," SPIE 3172-36 (1997).
36. W. J. Yanta, W. C. Spring III, J. F. Lafferty, A. S. Collier, R. L. Bell, D. Neal, D. Hamrick, J. Copland, L. Pezzaniti, M. Banish, and R. Shaw, "Near- and far-field measurements of aero-optical effects due to propagation through hypersonic flows," AIAA-2000-2357 (2000).
37. D. R. Neal, E. Hedlund, M. Lederer, A. Collier, C. Spring, and W. Yanta, "Shack-Hartmann wavefront sensor testing of aero-optic phenomena," AIAA 98-2701, 20th AIAA Advanced Measurement and Ground Testing Technology Conference, Albuquerque, NM (June 1998).
38. T. Bruno, A. Wirth, and A. J. Jankevics, "Applying Hartmann wavefront sensing technology to precision optical testing of the Hubble Space Telescope correctors," SPIE 1920, pp. 328-336 (1993).
39. D. Kwo, D. Damas, and W. Zmek, "A Hartmann-Shack wavefront sensor using a binary optic lenslet array," SPIE 1544, pp. 66-74 (1991).
40. D. R. Neal, W. J. Alford, and J. K. Gruetzner, "Amplitude and phase beam characterization using a two-dimensional wavefront sensor," SPIE 2870, pp. 72-82 (1996).
41. D. R. Neal, J. K. Gruetzner, D. M. Topa, and J. Roller, "Use of beam parameters in optical component testing," SPIE 4451, pp. 394-405 (2001).
42. D. R. Neal, D. J. Armstrong and W. T. Turner, "Wavefront sensors for control and process monitoring in optics manufacture," SPIE 2993, pp. 211-220 (1997).
43. D. R. Neal and J. Mansell, "Application of Shack-Hartmann wavefront sensors to optical system calibration and alignment," Proceedings of the 2nd International Workshop on Adaptive Optics for Industry and Medicine, Gordon Love, ed., World Scientific (2000).
44. D. R. Neal, J. Copland, D. A. Neal, D. M. Topa and P. Riera, "Measurement of lens focal length using multi-curvature analysis of Shack-Hartmann wavefront data," SPIE 5523-27 (2004).
45. D. R. Neal, D. J. Armstrong, and W. T. Turner, "Characterization of infrared laser systems," SPIE 3437-05 (1998).
46. T. D. Raymond, D. R. Neal, D. M. Topa, and T. L. Schmitz, "High-speed, non-interferometric nanotopographic characterization of Si wafer surfaces," SPIE 4809-34 (2002).
47. S. Panagopoulou, "Correction of high order aberrations using WASCA in LASIK for myopia," Fall World Refractive Surgery Symposium, Dallas, TX (October 19-21, 2000).
48. D. R. Neal, R. J. Copland, R. R. Rammage, D. Reinstein, and H. Vogelsang, "WaveFront Sciences' Complete Ophthalmic Analysis System (COAS)," Wavefront Customized Visual Correction: The Quest for Super Vision II, ed. Krueger, Applegate, and MacRae, Slack Inc. (2004).
49. D. R. Neal, P. Pulaski, T. D. Raymond, and D. A. Neal, "Testing highly aberrated large optics with a Shack-Hartmann wavefront sensor," SPIE 5162 (2003).
50. R. R. Rammage, D. R. Neal, and R. J. Copland, "Application of Shack-Hartmann wavefront sensing technology to transmissive optic metrology," SPIE 4779-27 (2002).
51. D. M. Topa, "Data synthesis using slope and OPD stitching," SPIE 5162-18 (2003).
52. J. Liang, B. Grimm, S. Goelz, and J. F. Bille, "Objective measurement of the wave aberrations of the human eye with the use of a Hartmann-Shack wave-front sensor," J. Opt. Soc. Am. A 11, pp. 1949-1957 (1994).
53. J. Liang, D. R. Williams, and D. T. Miller, "Supernormal vision and high-resolution retinal imaging through adaptive optics," J. Opt. Soc. Am. A 14, pp. 2884-2892 (1997).
54. S. MacRae, J. Schwiegerling, and R. Snyder, "Customized corneal ablation and super vision," J. Ref. Surg. 16, S230-S235 (2000).
55. R. V. Shack and B. C. Platt, "Production and use of a lenticular Hartmann screen," JOSA 61(5), p. 656 (1971).
56. B. Platt and R. Shack, "History and principles of Shack-Hartmann wavefront sensing," Journal of Refractive Surgery 17 (Sept/Oct 2001).
57. D. R. Neal, D. M. Topa, and J. Copland, "The effect of lenslet resolution on the accuracy of ocular wavefront measurements," SPIE 4245 (2001).
58. D. Malacara, Optical Shop Testing, 2nd Ed., John Wiley & Sons, New York.
59. W. H. Southwell, "Wave-front estimation from wave-front slope measurements," JOSA 70, pp. 993-1006 (August 1980).
60. P. Pulaski, "Making the trade," OE Magazine (2003).