NUMERICAL SIMULATIONS OF TEMPERATURE MAPPING IN INDUSTRIAL COMBUSTION ENVIRONMENTS
Michael P. Wood
2013
School of Electrical and Electronic Engineering
A thesis submitted to the University of Manchester for the degree of Doctor of Philosophy in the Faculty of Engineering and Physical Sciences.


1. Introduction .............................................................................................................. 18
1.1 Motivation .................................................................................................... 18
1.3 Current methods for temperature sensing .................................................... 22
1.3.1 Invasive measurement .................................................................................. 22
1.3.2 Laser-induced fluorescence .......................................................................... 22
1.3.6 Optical pyrometry: soot ................................................................................ 24
1.4 Laser absorption spectroscopy ..................................................................... 25
1.4.1 Line-of-sight thermometry by direct absorption spectroscopy .................... 25
1.4.2 Modulation spectroscopy ............................................................................. 28
1.5 Aims and objectives ..................................................................................... 30
1.5.1 Objective ...................................................................................................... 30
1.5.2 Aims ............................................................................................................. 30
1.6 Overview ...................................................................................................... 31
2. Tomography ............................................................................................................. 32
2.1 Introduction .................................................................................................. 32
2.3 Filtered backprojection ................................................................................. 35
2.6.1 Existence ...................................................................................................... 42
2.6.2 Uniqueness ................................................................................................... 42
2.6.3 Stability ........................................................................................................ 43
2.7 Algorithms .................................................................................................... 43
3.1 Introduction .................................................................................................. 49
3.3 Infrared-active species.................................................................................. 52
3.5 The two-state transition model ..................................................................... 55
3.6 Boltzmann statistics...................................................................................... 58
3.7.1 Natural broadening ....................................................................................... 62
3.7.2 Doppler broadening ...................................................................................... 62
3.7.3 Pressure broadening ..................................................................................... 63
3.7.5 Pressure shifting ........................................................................................... 65
3.9 Line-of-sight thermometry ........................................................................... 66
3.10 Temperature tomography ............................................................................. 72
4.4.1 Voigt calculation .......................................................................................... 83
4.5 Radon transform ........................................................................................... 85
5.2.3 Genetic algorithm ......................................................................................... 97
absorption methods ....................................................................................................... 113
6.2.1 Introduction ................................................................................................ 113
6.4 Detailed simulations ................................................................................... 131
6.4.2 Results ........................................................................................................ 133
6.4.3 Conclusions ................................................................................................ 136
6.5 Annular reconstructions ............................................................................. 139
List of figures
Figure 1.1. Simple schematic of a turbofan engine. The cold, atmospheric intake air is
sucked in, split, compressed, mixed with fuel, combusted, expanded, and then funnelled
out of the rear nozzle imparting a forward thrust on the engine. .................................... 20
Figure 1.2. Variation in temperature sensitivity between two absorption transitions. ... 26
Figure 1.3. Relative similarity in mole fraction sensitivity between two absorption lines.
......................................................................................................................................... 26
Figure 1.4. Effect of pressure increase on a set of near-infrared absorption lines. ......... 28
Figure 2.1. A general point in the domain of , and line in the domain of . 34
Figure 2.2. Graphical representation of the Fourier slice theorem. The one-dimensional
Fourier transform of the Radon transform of at an angle is equal to the
two-dimensional Fourier transform of along the radial slice . .......................... 36
Figure 2.3. A conceptual illustration of the meaning of as the length of line in
pixel . One method of reducing the number of unknowns is to pixelate the image space.
The kernel of the Fredholm equation is represented by a matrix operator. .................... 38
Figure 3.1. Direct absorption measurement over a single beam within a tomography
system. ............................................................................................................................. 51
Figure 3.2. Geometric orientation of the water molecule. The equilibrium bond angle
and bond length values are themselves calculated from spectroscopic measurement. ... 53
Figure 3.3. Near-infrared spectrum of 1% water vapour at 300 K and 1 bar. On the far
left are the and bands, in the middle are ,
and bands, and on the right are , and
bands. The fundamental bands are found in the mid-infrared spectrum below 5000
. ............................................................................................................................ 55
Figure 3.4. Rotation-vibration partition function of water vapour based on equation
3.14. The lower graph represents the fractional change where
: above , ’s estimate of is slightly
greater than that of Harris et al. ....................................................................................... 60
Figure 3.5. Doppler, Pressure and Voigt lineshapes for using linear
and logarithmic vertical scales. ....................................................................................... 65
Figure 3.6. Contours of in the near-infrared at 1000 and 2000 K. The
dotted line is the contour of . .............................................................. 69
Figure 4.1. Five steps to generate a random temperature phantom of variable
smoothness. ..................................................................................................................... 78
Figure 4.2. Control of the smoothness is achieved by widening an axisymmetric
Gaussian filter. 20 is used in all our simulations. ........................................................... 79
Figure 4.3. Temperature and concentration phantoms using Gaussian peaks. ............... 80
Figure 4.4. Sample spectral absorption coefficients at a single (central) pixel in the
phantom. Twenty data points split evenly between a low-temperature (left) and high-
temperature (right) absorption line. ................................................................................ 85
Figure 4.5. Left: temperature phantom; middle: mole fraction phantom; right: spectral
attenuation coefficient at 5263.2 cm⁻¹. ....................................................... 85
Figure 4.6. Synthetic transmittance data over a single beam (#1) through the
measurement zone. .......................................................................................................... 86
Figure 4.7. Synthetic absorption data set for 60 beams and 20 wavenumbers (divided
between two absorption transitions). Six peaks can be seen along the ‘beam number’
axis because the beam configuration is par6, a parallel-beam configuration with 6 angles
and 10 beams per angle. The peaks correspond to central beams with the longest paths
through the measurement region. .................................................................................... 87
Figure 4.8. Sample reconstruction of the spectral attenuation coefficient from the
transmittance data (Figure 4.6) generated from a high-resolution phantom (Figure 4.5).
......................................................................................................................................... 88
Figure 4.9. Result of spectral fitting for a central pixel in the image using data points
from a projected Landweber algorithm. The top-right figures show the best-fit
results. ............................................................................................................................. 90
Figure 4.10. Result of spectral fitting for a central pixel in the image using data points
from a Tikhonov algorithm. The top-right figures show the best-fit results. ...... 91
Figure 4.11. Sample temperature reconstructions. Left: phantom; centre and right:
reconstructions. ............................................................................................................... 91
Figure 4.12. Sample mole fraction reconstructions. Left: phantom; centre and right:
reconstructions. ............................................................................................................... 92
Figure 5.1. Flow diagram for a genetic algorithm to optimise beam configurations. ... 100
Figure 5.2. Progress of the genetic algorithm towards minimising , with the
optimised beam configuration in the top-right. The experiment was performed twice.
The vertical dotted lines show when the perturbation noise was reduced, and the
horizontal dotted line shows the overall minimum. ...................................................... 103
Figure 5.3. Left: beam configurations, optimised using a genetic algorithm for varying
annular thicknesses: . Right: a sinogram representation, where every
cross corresponds to a beam. The red borders represent the boundaries of the
reconstruction region. ..................................................................................................... 106
Figure 5.4. Distribution of for one million randomly selected beam configurations.
1 in roughly 21,000 samples have below 5, which demonstrates how rare good
beam configurations are. ............................................................................................... 107
Figure 5.5. Twelve candidate lines with relatively good spectral isolation at 30 bar. .. 110
Figure 5.6. Left: linestrength ratio for the two transitions over the
temperature range of interest; right: stimulated emission ratio
for the line pair (red cross). ........................................................................................... 111
Figure 6.1. Left: temperature phantom; middle: mole fraction phantom; right: beam
configuration. ................................................................................................................ 114
Figure 6.2. Reconstruction errors with increasing uniform gas pressure for three
competing reconstruction methods. .............................................................................. 116
Figure 6.3. Reconstructions of the three methods; left to right: SF method, IA method,
PA method. .................................................................................................................... 117
Figure 6.4. Six examples of randomly generated temperature phantoms used for the
simulations. The method of generation is documented in §4.2.1.3. Concentration
phantoms are generated in the same way. ..................................................................... 121
Figure 6.5. Six beam configurations used in the comparative study. ........................... 122
Figure 6.6. Relative RMSEs of temperature reconstructions. Each data point is an
average of 12 reconstructions from data generated from different (randomised)
temperature and mole fraction phantoms. ..................................................................... 123
Figure 6.7. Relative RMSEs of individual temperature reconstructions for par6, par10,
and fan3 beam configurations. Vertically aligned crosses of a certain colour are relative
RMSEs using data from different randomised phantoms. The solid lines connect
average values at each pressure for each method of reconstruction. ............................ 124
Figure 6.8. Relative RMSEs of individual temperature reconstructions for fan5,
irreg001, and irreg002 beam configurations. Vertically aligned crosses of a certain
colour are relative RMSEs using data from different randomised phantoms. The solid
lines connect average values at each pressure for each method of reconstruction. ...... 125
Figure 6.9. Left: temperature (top) and concentration (bottom) phantoms; middle: par6
beam configuration; right: temperature (top) and concentration (bottom)
reconstructions. This was the best reconstruction, occurring at 25 bar. ....................... 126
Figure 6.10. Left: temperature (top) and concentration (bottom) phantoms; middle: fan3
beam configuration; right: temperature (top) and concentration (bottom)
reconstructions. This was the worst reconstruction, occurring at 37 bar. ..................... 127
Figure 6.11. Images taken from [151] demonstrating a specific dependence on the gas
pressure of reconstruction errors in both temperature and mole fraction (concentration).
....................................................................................................................................... 128
Figure 6.13. Phantoms, beam configuration and time-averaged reconstructions of
temperature distribution using 100 transmittance datasets. Reconstructions using each
method are shown at the bottom of the image. ............................................................. 134
Figure 6.14. Comparative difference in errors between methods A, B and C. The scatter
graph contains 36 vertically-aligned triplets of red, green and blue data points which
each represent reconstructions of a single phantom using methods A, B, and C
respectively. The horizontal position of a triplet is the average of the relative RMSE of
the reconstruction for all three methods, and the vertical position of each data point
within the triplet is the difference between the relative RMSE of the reconstruction
(obtained using the corresponding method) and the average for all three methods. The
top histogram represents the average relative RMSEs of all the reconstructions for each
method, and the three histograms to the right are histograms of the relative RMSEs of
the reconstructions using each method; they can be used to visually interpret which
method is statistically advantageous, e.g. it appears that method C performs marginally
better because the majority of the histogram volume is below the dotted line. ............ 136
Figure 6.15. Percentage improvement in reconstruction accuracy as a function of the
number of datasets used in the combination. Data points of different colours are
reconstructions from data from different phantoms. ..................................................... 138
Figure 6.16. Sample beam configurations for an annulus . .............................. 141
Figure 6.17. Reconstruction accuracies for different beam arrangements for decreasing
annular thicknesses of . Each result is an average over 24 randomised phantoms.
....................................................................................................................................... 143
annular thicknesses of . Each result is an average over 24 randomised
phantoms. ...................................................................................................................... 144
Figure 6.19. Sample phantom, irregular beam configurations and corresponding
reconstructions for r = 0. ............................................................................................... 145
Figure 6.20. Sample phantom, irregular beam configurations and corresponding
reconstructions for r = 0.25. .......................................................................................... 146
Figure 6.21. Sample phantom, irregular beam configurations and corresponding
reconstructions for r = 0.5. “irreg051” and “irreg052” are the examples of the genetic
algorithm shown in Figure 5.2. ..................................................................................... 147
Figure 6.22. Sample phantom, irregular beam configurations and corresponding
reconstructions for r = 0.7. ............................................................................................ 148
Figure 6.23. Optimised configuration of 32 beams in an annulus . .......... 151
Figure 6.24. Comparison of reconstruction errors for 24 randomised phantoms using
Landweber and Tikhonov algorithms. Top: temperature errors; middle: concentration
errors; bottom: pressure errors. ..................................................................................... 153
Figure 6.25. Projections of the data points for each
reconstruction onto - , - and -P graphs. Black lines connect Landweber and
Tikhonov reconstructions from the same phantom, and hollow squares represent mean
values............................................................................................................................. 154
Figure 6.26. Sample reconstruction 1 using non-uniform pressure phantoms. ............. 155
Figure 6.27. Sample reconstruction 2 using non-uniform pressure phantoms. ............. 156
Figure 6.28. Sample reconstruction 3 using non-uniform pressure phantoms. ............. 157
Figure 8.1. Screenshot of graphic and inline output from beam optimisation algorithm
for the first 177 generations. Filled data points represent generation averages of and
circle data points represent generation best values of . In this example, the mutation
noise is reduced by 25% if there is no improvement to the best beam configuration after
25 successive generations. The beam configuration after 1000 generations was used in
§6.6. ............................................................................................................................... 174
Table 1.1. Symbols list for tomography and beam optimisation. ................................... 15
Table 1.2. Symbols list for spectroscopy. ....................................................................... 16
Table 3.1. Documented coefficients used in the approximation of in equation 3.14.
......................................................................................................................................... 60
tomography algorithms). ............................................................................................... 129
reconstructions. ............................................................................................................. 130
The comparison between Tikhonov and projected Landweber methods is difficult to
justify; both are performed with a somewhat ad hoc implementation of the prior
assumption, and it is possible that any differences between these two methods are due to
the implementation of the smoothing prior as opposed to any fundamental advantage of
either method. As shown in table 6.3, the projected Landweber iteration is slightly
faster on a desktop PC: it scales considerably better with increasing reconstruction
resolution (any more than 1000 pixels leads to rapidly diminishing returns for the
Tikhonov inversion), and the memory overhead is much smaller so it is more
advantageous to independently reconstruct spectral absorption coefficient images using
parallel threads without causing memory bottlenecks. It is for these practical reasons
that the Landweber iteration is chosen for use in future reconstructions. ..................... 130
Table 6.4. Comparison between three combination methods, averaged over 36 different
phantoms. The difference between each method and the mean is taken to show the
relative difference between methods. ............................................................................ 135
Table 6.5. Computation times of the three methods with a dataset of 100. .................. 137
Table 6.6. Relative RMSEs of reconstructions ............................................................. 138
Table 6.7. Reconstruction resolutions for different annular thicknesses ...................... 142
Abstract
This thesis presents the results from a set of numerical experiments of two-dimensional gas temperature imaging using laser absorption spectroscopy inside a turbofan engine. This measurement environment is characterised by temperatures of 2000 K, pressures of 45 bar, and extremely limited access for the installation of measurement hardware, which renders invasive measurement (thermocouple arrays) or direct imaging (PLIF or pyrometry) methods unviable.
An alternative approach is indirect imaging of the temperature, whereby the transmittance of near-infrared laser light through the gas is measured and used to make inferences about the properties of the gas along the beam; specifically, its temperature, pressure, and molecular constitution. The frequency of the light is chosen to interrogate particular molecular transitions of a target species—water—in such a way that the fraction of light measured at the detector depends on the temperature of the gas through which it has passed. This is an established measurement technique known as tuneable diode laser absorption spectroscopy (TDLAS), but it is possible to extend this method to two dimensions if the transmittance measurements are made over a set of coplanar beams that transect the measurement region. Using the principles of tomographic inversion, it becomes possible to image not only the two-dimensional temperature distribution within a gas, but also the pressure and molecular species concentration distributions.
In this thesis, extensive numerical simulations are used to critically evaluate this approach when applied to the particular case of the turbine engine, and a new methodology is developed for use in this environment which opens up—for the first time, to the best of the author’s knowledge—the possibility of tomographic reconstruction of the gas pressure. This is challenging because the gas pressure has a strong influence not only on the width of absorption lines, but also on their positions in the spectrum, with each line being affected in a different way. To overcome and eventually exploit this dependence, a robust approach which the author terms the spectral fitting approach is developed and tested against the two main existing methods found in the literature: integrated absorbance and peak absorption reconstructions. The spectral fitting approach was found to outperform both methods not only in the high-pressure regime, but throughout the tested pressure range ( ).
The numerical tests were also applied to more realistic measurement environments, including annular measurement regions (modelling the opaque central driveshaft of a turbine engine) with non-uniform molecular species concentrations and gas pressures. In these investigations, the temperature was reconstructed with a relative root-mean-squared error of 2.47%. This demonstrates the theoretical feasibility of tomographic reconstructions of gas temperature in the turbine environment.
Numerical optimisation of the methodology is also addressed. The geometric arrangement of beams through the measurement region is investigated with a view to maximising the quality of the reconstructed image, and a new design rule is analytically derived and then applied to generate a set of viable beam arrangements that perform competitively when compared to more conventional regular arrangements. The selection of laser frequencies is also optimised in the specific case of high-pressure spectroscopy, and two near-infrared transitions are suggested as a possible candidate pair for experimental verification.
Declaration
No portion of the work referred to in this thesis has been submitted in support of an
application for another degree or qualification of this or any other university or other
institute of learning.
Copyright statement
The author of this thesis (including any appendices and/or schedules to this thesis) owns
certain copyright or related rights in it (the “Copyright”) and s/he has given The
University of Manchester certain rights to use such Copyright, including for
administrative purposes.
Copies of this thesis, either in full or in extracts and whether in hard or electronic copy,
may be made only in accordance with the Copyright, Designs and Patents Act 1988 (as
amended) and regulations issued under it or, where appropriate, in accordance with
licensing agreements which the University has from time to time. This page must form
part of any such copies made.
The ownership of certain Copyright, patents, designs, trademarks and other
intellectual property (the “Intellectual Property”) and any reproduction of copyright
works in the thesis, for example graphs and tables (“Reproductions”), which may be
described in this thesis, may not be owned by the author and may be owned by third
parties. Such Intellectual Property and Reproductions cannot and must not be made
available for use without the prior written permission of the owner(s) of the relevant
Intellectual Property and/or Reproductions.
Further information on the conditions under which disclosure, publication and
commercialisation of the thesis, the Copyright and any Intellectual Property and/or
Reproductions described in it may take place is available in the University IP Policy
(see http://documents.manchester.ac.uk/DocuInfo.aspx?DocID=487), in any relevant
Thesis restriction declarations deposited in the University Library, The University
Library’s regulations (see http://www.manchester.ac.uk/library/aboutus/regulations) and in The University’s policy on Presentation of Theses.
Acknowledgements
I would like to thank Prof Krikor Ozanyan for his continued support throughout my
time at Manchester, along with Hugh McCann, Paul Wright, Ed Cheadle, and Nataša
Terzija in the Industrial Process Tomography group at the University of Manchester. In
addition, I acknowledge the support of Rolls-Royce plc and, in particular, the help given to me
by Dr John Black.
List of publications and conference presentations
Wood, M.P., Cheadle, E., Wright, P., Ireland, P., Black, J., McCann, H., and Ozanyan,
K.B., “Temperature tomography by NIR molecular absorption”, Proc. Optics and
Photonics Conference (Photon10), p. 55, Southampton, UK, 2010.
Wood, M.P., Cheadle, E., Wright, P., Ireland, P., Black, J., McCann, H., and Ozanyan,
K.B., “Modelling the Performance of Temperature Tomography systems with IR Laser
Sources”, Proc. 6th World Congress on Industrial Process Tomography, Beijing, China.
2010. p. 772-777.
Wood, M.P. and Ozanyan, K.B., “Temperature Mapping from Molecular Absorption
Tomography”, Proc. IEEE Sensors 2011 Conference, Limerick, Ireland, 2011, pp. 865–869
(doi: 10.1109/ICSENS.2011.6127014).
Wood, M.P. and Ozanyan, K.B., “Fan Beam Tomography in Annular Geometry”, Proc.
6th International Symposium on Process Tomography, p.23, Cape Town, South Africa,
2012.
Wood, M.P. and Ozanyan, K.B., “Concentration and Temperature Tomography at
Elevated Pressures”, IEEE Sensors Journal, vol. 13, no. 8, pp. 3060–3066, Aug. 2013
(doi: 10.1109/JSEN.2013.2260535).
Wood, M.P. and Ozanyan, K.B., “Performance Simulation of a Tomography Sensor for
Imaging of Temperature in a Gas Turbine Engine”, IEEE Sensors 2013 (submitted).
Symbol Description
( ) Cartesian coordinates
( ) Beam coordinates
( ) Objective function
( ) Kernel of the Fredholm integral equation
Beam index
Vector of
Relative root-mean-squared error in variable (RMSE)
Symbol Description
( ) Temperature / K
( ) Mole fraction
( ) Pressure / bar
Volumetric number density of molecule
Radiant energy density /
Natural broadening half width at half maximum (HWHM) /
Doppler broadening line shape function
Doppler broadening half width at half maximum (HWHM) /
Pressure broadening line shape function
Pressure broadening half width at half maximum (HWHM) /
Molecular mass / AMU
Vector of discretised phantom mole fractions
Vector of discretised phantom pressures /
Vector of reconstructed temperatures /
Vector of reconstructed pressures /
1. INTRODUCTION
“Turbojets are like people; if anything goes wrong, the temperature rises”
– Sir Stanley Hooker
This thesis is a presentation of results which are intended to demonstrate the feasibility
of remote gas temperature imaging using laser absorption spectroscopy inside an
operational gas turbine engine. The work was undertaken at the University of
Manchester from October to March , with the support of Rolls-Royce plc.
1.1 Motivation
The objective in this work is to develop a theoretical framework for high-resolution gas
temperature imaging in a combustion environment where pressures may reach ,
temperatures range from , invasive measurement is impossible and
optical access is very limited. For reasons that are detailed in the remainder of this
chapter, these constraints have led to the exploration of laser absorption tomography as
a possible solution to this particular challenge. To elaborate on the motivation for this
work, a description of the fundamental mode of operation of the turbofan engine is
required.
1.2 The gas turbine engine
The gas turbine engine of an aeroplane is a machine designed to convert the chemical
potential energy in jet fuel (typically kerosene) into forward thrust by continually
imparting a rearward impulse on atmospheric air. This can be modelled by a Brayton
cycle: air at the intake is adiabatically compressed, combusted (an isobaric addition of
heat), adiabatically expanded, and then expelled into the atmosphere [1]. In modern
designs the compression is achieved by rotating fan-shaped axial compressors, which
sequentially increase the dynamic pressure, interspersed with fixed stator vanes, which
redirect the swirling air flow and increase the static pressure. After passing through a
series of these compressor stages the high-pressure (HP) gas passes through a diffuser
before entering a combustion chamber where it is mixed with vaporised jet fuel and
burnt in an exothermic reaction that significantly raises the temperature of the gas. The
hot gas then expands through a set of turbines, which it does work to rotate, as it flows
towards the rear of the engine. These turbines are mechanically coupled to the
compressors by an axial driveshaft so that the work done by the expanding gas is used
to compress more unburnt gas at the intake. There is still a significant quantity of
kinetic energy remaining in the hot gas as it exits the turbines, and this is funnelled into
a jet using a nozzle. The rearward expulsion of this gas through the nozzle provides the
forward thrust of the turbojet engine, which is the earliest form of an aeroplane gas
turbine.
Development of the turbojet engine has since led to a variation: if a large fan were
attached to the front of the driveshaft so that it overhangs the front of the engine then a
large fraction of the airflow through it would bypass the combustion chamber and be
propelled rearwards without being burnt. By adding a turbine to the back of the engine
and connecting it to this front fan (two or more coaxial driveshafts can be used in the
same engine by hollowing the outer one(s)), part of the kinetic energy of the gas in the
jet can be diverted to kinetic energy in the bypass air; since this air is travelling slower
than the jet air, this diversion can be used to increase the rate of impulse acting on the
air at the intake (momentum is proportional to velocity, whereas kinetic energy is
proportional to squared velocity). This configuration is more fuel-efficient and quieter,
and so-called “turbofan” engines constitute the majority of engines used in medium to
large-sized commercial aircraft (Figure 1.1). For a more detailed description and analysis,
see [2, 3].
Figure 1.1. Simple schematic of a turbofan engine. The cold, atmospheric intake air is sucked in, split, compressed, mixed with fuel, combusted, expanded, and then funnelled out of the rear nozzle imparting a forward thrust on the engine.
The design of a turbofan engine involves many compromises between conflicting
objectives: fuel consumption, thrust, range, safety, longevity, noise, cost, emissions, and
weight, to name a few. For example, a conflict exists between fuel consumption and
longevity/safety in the following way: the thermal efficiency of an engine increases with
higher combustion temperature and pressure ratio, and an engine which produces more
thrust for the same amount of fuel is extremely desirable (especially if the engine weight
and size are also unchanged), but the higher operating temperatures come at a price: the
turbines and nozzle guide vanes are both immediately downstream of the combustor and
there are strict limits on the temperatures at which they can operate; it has been reported
that an increase of beyond a limiting operating temperature can halve the
lifespan of a turbine blade [4].
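For reference, the ideal (cold air-standard) Brayton cycle gives a simple closed-form link between the overall pressure ratio and the thermal efficiency. This is textbook thermodynamics rather than a result of this work, quoted only to make the pressure-ratio dependence explicit; the illustrative values below are assumptions, not figures from the thesis.

```latex
% Ideal (cold air-standard) Brayton cycle: thermal efficiency vs. overall pressure ratio r_p.
\eta_{\mathrm{th}} \;=\; 1 - r_p^{-(\gamma - 1)/\gamma},
\qquad r_p = \frac{p_{\text{compressor exit}}}{p_{\text{intake}}}, \quad \gamma = \frac{c_p}{c_v}.
```

For example, r_p = 40 and γ = 1.4 give an ideal efficiency of roughly 0.65; real engines achieve less because of component losses, but the monotonic improvement with pressure ratio is the point at issue here.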
Decades of work have been dedicated to pushing back this temperature limitation:
better aerofoils have been manufactured using improved casting methods from nickel
superalloys with ever increasing melting points and creep strengths [5], but such
metallurgical progress cannot be expected to continue indefinitely. In addition to
improvements in the blade material itself, their surfaces are treated with thermal barrier
coats (TBCs) [6] to limit the conductive heat flux between the burnt gas and the metal
itself, and modern blades are hollowed and perforated to enable cold HP air — up to
20% is diverted from the compressor yield [7] via air ducts at significant cost to the
turbine pressure ratio and thermal efficiency — to be fed through the blades and out of
the surface, creating an additional boundary layer between the gas and the coating [8].
A different route to improving thermal efficiency lies in the combustor design. A
typical gas turbine combustor is a perforated sheet-metal bowl, parabolic or hemispherical in
shape, with the apex positioned upstream. High-pressure air enters through the
perforations and mixes with vaporised kerosene that is injected via a fuel nozzle. The
geometry is designed so that the chamber contains the flame, and is suspended in the
turbine with cold, unburnt air flowing around the outside. Downstream of the
combustion chamber the hot, burnt gas mixes with the cold, unburnt gas before entering
the turbine section. If the mixing is sufficient then the temperature of the gas, by the
time it reaches the turbines (turbine inlet temperature), will have dropped sufficiently
that the turbine components are not endangered. Typically, the gas temperature is
around 2200 K during combustion, 2000 K on entry to the HP turbines, and 1200 K at
the low-pressure (LP) turbines. The design of the combustion chamber itself therefore
affects the severity of the necessary trade-off between engine performance and integrity,
and a well-designed combustion chamber whose exhaust gas mixes well with the cold
external flow will cause the gas impacting on the turbine section to be cooler which
may, for example, require fewer cooling ducts, or allow for the turbine to be made from
a lighter alloy or using a cheaper manufacturing process. These considerations are all
relevant when optimising blade and combustion chamber design.
As with any product, the research and design process is an iterative exercise in
trial and error; a design is suggested, tested, and the results of the test are analysed and
used to motivate modifications to the original design. In the case of a turbine engine this
process may be repeated a large number of times as an initial concept is developed into
a working prototype. The value of a testing phase lies in the data that can be measured
whilst the engine is running on a test bed, and if more data can be measured then it is
possible that the design of an engine will yield a better outcome in fewer iterations. In
particular, useful data includes (but is not limited to) information about the gas
temperature profile, species concentration profiles (fuel, , , and ), and flow
velocity profiles from the combustor to the turbine inlet [9, 10].
This provides the motivation for knowing temperature distributions inside the turbine
engine. However, measurements of the gas temperature in this hostile environment are
difficult to obtain due to the limited optical access, high operating temperatures and
large amounts of engine vibration.
1.3 Current methods for temperature sensing
There are a number of different methods of measuring temperature that are currently used
for combustion diagnostics, and it is often the case that two or more methods are used
simultaneously, often for the purposes of independent verification or on-line calibration
of measurements [11, 12]. A brief summary of these methods is given in this section.
1.3.1 Invasive measurement
One approach is to install one or more measurement probes into the flow. Fine-wire
thermocouples are inexpensive and offer the capability of remote sensing of temperature in
combustion environments [13, 14]: tungsten/rhenium-alloy thermocouples are capable
of measuring temperatures up to and beyond 2500 K [15, 16] but in oxidising conditions
the elevated temperatures cause rapid deterioration of the thermocouple elements which
necessitates the use of protective sheathing. This sheathing increases the thermal mass
of the device which reduces its temporal response, and further limits the operational
temperature range of the device (e.g. platinum/rhodium, 1920 K). Because of these
limitations gas turbine thermocouples are placed downstream of the HP turbines,
usually in front of the LP turbine [17] where temperatures are far lower. Another
limitation to this invasive approach is that the presence of the probe and its connecting
and support wire(s) in the flow can cause local velocity and temperature
perturbations which undermine the measured values, and the extent of the perturbations
in turbulent flows is unpredictable [18]. Significant errors associated with these
perturbations have been observed in relatively benign combustion environments [19,
20]. Finally, thermocouples only give localised point measurements of the temperature
rather than continuous distributions.
1.3.2 Laser-induced fluorescence
Laser-induced fluorescence (LIF), and its planar variant (PLIF), is a closely related remote sensing technique for imaging species concentration and
temperature [11, 21-24] and pressure [22, 25] along one-dimensional lines or one or
more two-dimensional planes within the imaging space (three-dimensional imaging is
possible with multiple sheets [26]). This is normally achieved using a pulsed laser
beam/sheet (although cw lasers have also been used [27]) with a frequency that targets
an absorption transition of a particular species in the flow. This causes a temporary
excitation of the targeted species. An off-beam/out-of-plane camera is then used to
directly image the fluorescent radiation that is isotropically emitted as the excited
molecules (or radicals, e.g. ) return to their lower-energy states; the signal is
dependent on the concentration of excited species which in turn depends on the
concentration of the species in the ground state. If the target species has electronic
transitions then the resulting signal lies in the optical or ultra-violet (UV) part of the
spectrum [28-30], but the advent of infrared fixed-plane array cameras has led to the
development of infrared PLIF (IR-PLIF) which targets vibration-rotation transitions of
small molecules (e.g. , and ) instead [31-33]. This part of the spectrum is
often favourable because these molecules are natural combustion products and there is
no requirement for any upstream doping of the flow with a UV-active (but inert)
species, e.g. , which may not diffuse evenly in the flow.
LIF/PLIF are non-invasive remote sensing techniques but they can suffer from an
effect known as radiative trapping, whereby some of the fluorescent radiation is re-absorbed
by other target molecules en route to the detector [22]; this attenuation
depends on out-of-plane gas properties which are often unknown. Furthermore, the two-line
ratiometric approach does not guarantee total cancellation of this source of error if the
gas along the emission–detection line of sight is not isothermal.
In the context of imaging in restrictive geometries, PLIF’s primary limitation is its
requirement of an out-of-plane detector with a full view of the fluorescent sheet. This
difficulty, along with the issues of radiative trapping, renders the deployment inside a
turbine engine very challenging in practice.
1.3.3 Spontaneous Raman scattering
Another remote sensing technique is spontaneous Raman scattering. This process
involves a molecular transition from one energy level, via a virtual energy level, to a
different energy level. In this case a laser is tuned to a “pump” frequency
(corresponding to the energy difference between the virtual level and the original level),
and the molecule re-emits incoherent radiation isotropically at a different “Stokes”
frequency (corresponding to the energy difference between the virtual level and the new
level). This scattering effect is very weak, however, and the signal is often dwarfed by
fluorescence or incandescence phenomena in many conditions [34].
1.3.4 Coherent anti-Stokes Raman scattering
Another method for remotely sensing temperature distributions is coherent anti-Stokes
Raman scattering (CARS) [35]. As with the spontaneous case, anti-Stokes Raman
transitions are targeted but the input laser source is a combination of the pump
frequency and the Stokes frequency of a transition. This induces a resonance in the
target species and causes the stimulated emission of coherent light at the “anti-Stokes”
frequency (2*pump – Stokes), which is measurable along the same path as the original
beam. CARS has been demonstrated for the purposes of concentration imaging [36, 37]
and thermometry along a line of sight [11, 38], including at high pressures [39].
Because the stimulated emission is unidirectional, a well-placed detector can obtain a
far higher signal-to-noise ratio (SNR) than in the case of spontaneous Raman scattering.
1.3.5 Optical pyrometry: turbine blades
Optical pyrometry is an established technology for the on-line measurement of
turbine blade surface temperatures [17, 40, 41] via passive measurement of thermal
emission. The temperature is inferred from an optical measurement of the thermal
emission of a specific area (around ) of the blade using Planck’s law. The fast
temporal response of the optical detector enables continuous monitoring of all the
turbine blades, since multiple measurements of the same area can be made for every
revolution of the turbine. This information is then used to regulate the amount of fuel
burnt in the combustor to ensure that the turbine blades remain within pre-defined
operating limits.
1.3.6 Optical pyrometry: soot
In many cases the flames themselves contain soot particles which, as solids, emit
thermal radiation isotropically with a spectrum approximated by a blackbody curve. The
shape of this curve is temperature-dependent, and two measurements of the emissivity
of a part of a flame at two different (but nearby) wavelengths permit the two-dimensional
direct imaging of flame temperatures, provided the measurement
wavelengths do not suffer from interference by molecular or atomic absorption bands.
This two-colour method has been known for a long time [42] but recent advances in
both CCD and computer processing technology have brought about new methods of
non-invasive, direct, and fast imaging of temperature distributions of sooting flames
[43]. This method has been successfully employed in a coal-fired reactor [44] and a
model turbine engine [45].
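As a sketch of the underlying relation (under the Wien approximation, and assuming the soot emissivities at the two wavelengths are known or equal; the symbols here are generic, not the thesis notation), the ratio of the detected signals at wavelengths λ₁ and λ₂ fixes the temperature:

```latex
% Two-colour ratio under the Wien approximation; epsilon_1, epsilon_2 are the soot
% emissivities at the two measurement wavelengths (assumed known or equal).
R = \frac{S(\lambda_1)}{S(\lambda_2)}
  = \frac{\varepsilon_1}{\varepsilon_2}
    \left(\frac{\lambda_2}{\lambda_1}\right)^{5}
    \exp\!\left[\frac{hc}{k_B T}\left(\frac{1}{\lambda_2} - \frac{1}{\lambda_1}\right)\right]
\;\;\Longrightarrow\;\;
T = \frac{hc}{k_B}\left(\frac{1}{\lambda_2} - \frac{1}{\lambda_1}\right)
    \bigg/ \ln\!\left[R\,\frac{\varepsilon_2}{\varepsilon_1}\left(\frac{\lambda_1}{\lambda_2}\right)^{5}\right].
```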
1.4 Laser absorption spectroscopy
Laser absorption spectroscopy is a diagnostic technique for the measurement of gas
temperature, concentration, pressure and velocity along the line-of-sight of a laser beam.
A single source–detector collimator pair, one on either side of the measurement region, is used
to launch and collect a laser beam through the region, and the beam frequency is tuned to
an absorption transition of a species within the flow. The transmittance over the beam
can be measured as the ratio of the transmitted to incident power, giving a measurement
that depends on the spectral properties of the gas along the beam.
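For reference, the measurement just described obeys the Beer–Lambert law; writing the local spectral absorption coefficient as κ(ν, s) along the path coordinate s (generic symbols, not necessarily those of the thesis symbol list):

```latex
% Beer-Lambert law over one beam of length L; kappa(nu, s) is the local spectral
% absorption coefficient at laser frequency nu and path position s.
\tau(\nu) \;=\; \frac{I_t(\nu)}{I_0(\nu)}
         \;=\; \exp\!\left(-\int_{0}^{L} \kappa(\nu, s)\,\mathrm{d}s\right),
\qquad
-\ln \tau(\nu) \;=\; \int_{0}^{L} \kappa(\nu, s)\,\mathrm{d}s .
```

The negative log-transmittance is therefore linear in the spectral absorption coefficient, which is what later allows the tomographic treatment of chapter 2 to be applied to these measurements.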
1.4.1 Line-of-sight thermometry by direct absorption spectroscopy
If the temperature at every point along the beam is constant then its value can be
calculated from measurements of the beam absorption at two or more absorption
transitions by direct absorption spectroscopy [46-53]. This measurement is performed
experimentally using either wavelength-division multiplexing, where light of different
frequencies is simultaneously sent through the same optical train, or time-division
multiplexing where light of different frequencies is sent through during alternating time
intervals. In certain cases (explained quantitatively in §3.9) it is possible to infer
the isothermal line-of-sight temperature by two absorption measurements at the
linecentre frequencies of two absorption transitions of a target molecule in the gas. The
absorption signals at each frequency are strongly dependent on the molecular number
density when taken separately. Taking the ratio of these signals, however, almost
entirely removes this dependency and produces a value that is dependent on the relative
molecular quantum ground state populations of the two transitions instead. These
populations are governed by temperature via the Boltzmann distribution, and an
appropriate selection of absorption transitions can be used to obtain sensitive
measurements of the gas temperature along the isothermal laser beam [54]. These
different dependencies are shown in Figure 1.2 and Figure 1.3.
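A minimal sketch of the two-line ratio follows (generic symbols; it neglects the partition-function and stimulated-emission factors, which are treated properly in chapter 3). For two transitions with unequal lower-state energies and integrated absorbances A₁, A₂ measured over the same isothermal path, the absorber amount cancels in the ratio, which can then be inverted for temperature:

```latex
% Two-line ratio thermometry (simplified: partition-function and stimulated-emission
% factors neglected). E_1'', E_2'' are lower-state energies, T_0 a reference temperature.
R(T) = \frac{A_1}{A_2} = \frac{S_1(T)}{S_2(T)}
\;\approx\; \frac{S_1(T_0)}{S_2(T_0)}
\exp\!\left[-\frac{hc}{k_B}\,(E_1'' - E_2'')\left(\frac{1}{T} - \frac{1}{T_0}\right)\right],
\qquad
T = \frac{\frac{hc}{k_B}\,(E_2'' - E_1'')}
         {\ln R \,+\, \ln\frac{S_2(T_0)}{S_1(T_0)} \,+\, \frac{hc}{k_B}\,\frac{E_2'' - E_1''}{T_0}} .
```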
Figure 1.3. Relative similarity in mole fraction sensitivity between two absorption lines.
The advent of relatively cheap, rugged, stable, and reliable laser sources, detectors
and optical fibres in the near-infrared region over the last 20 years can be attributed to
the growth of commercial demand in the telecommunications and data storage sectors
[55]. Diode lasers can operate at room temperature and produce narrowband light whose
frequency can be tuned by controlling the diode temperature or injection current. The
thermal mass of the diode limits the rate of tuning of the frequency via temperature
control, but there is no such response time limitation on the injection current and it is
possible to modulate diode laser sources at frequencies as high as [56]. In
practice, direct absorption spectroscopy uses typical scanning frequencies of
[57]; by sampling the transmitted light at the photodetector at a higher frequency it is
possible to obtain a large ( ) number of transmittance measurements for a single
sweep of the laser source, producing a continuous sample of the absorption spectrum of
the species instead of a single peak-value. This technique leads to a more general
method of temperature inference than the peak absorption method: if the lineshapes of
the two transitions are different, then two lines with the same linestrength will have
different peak heights; this introduces an error in the peak method.¹ The alternative is to
scan the laser source frequency over an entire transition and integrate the detected
transmittance over every cycle. This gives a direct measurement of the line strength
independently of the lineshape.
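As an illustration of the scanned, integrated measurement just described, the following MATLAB-style sketch recovers the integrated absorbance from one laser sweep; it assumes a single isolated line and a known baseline, and the variable names are illustrative rather than taken from the code in chapter 8.

```matlab
% Minimal sketch: integrated absorbance from a single laser sweep.
% nu : vector of instantaneous laser wavenumbers over the sweep [cm^-1]
% I0 : baseline detector signal (no absorption) at each sample
% It : transmitted detector signal at each sample
absorbance = -log(It ./ I0);     % spectral absorbance, -ln(tau), at each sample
A = trapz(nu, absorbance);       % integrated absorbance over the scanned window [cm^-1]
% For an isolated line, the area A is proportional to the linestrength S(T) and the
% absorber amount along the path, independently of the lineshape, as noted in the text.
```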
However, the integration of an absorption transition requires well-defined limits, and
the existence and location of these limits becomes very difficult at elevated gas
pressures because of the effects of collisional broadening on the individual transition
lines [58], as shown in Figure 1.4. Absorption transitions have narrow spectral widths
and are individually discernible at atmospheric pressure, but at pressures above roughly
5 bar (depending on the specific spectral region of interest), the lines blend together and
it becomes impossible to measure a transition’s integrated absorbance
without contamination of the measurement by systematic error from neighbouring
transitions. The positioning of the integration limits becomes a matter of guesswork and
an alternative approach is necessary.
¹ The rigorous definitions of a transition linestrength and lineshape are given later in §3. For now
it is sufficient to know that the linestrength of a transition is a measure of its absorbing strength, and the
lineshape is the shape of the absorption line but with an area normalised to unity.
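For background, the collisional broadening responsible for this blending is commonly modelled with a Lorentzian half-width that scales linearly with pressure; a typical HITRAN-style parameterisation (generic symbols, quoted as an assumption about the standard model rather than the thesis’ exact formulation) is

```latex
% Typical HITRAN-style pressure-broadened (Lorentzian) half-width at half-maximum.
\gamma_L(P, T, x) \;=\; P\left[x\,\gamma_{\mathrm{self}}(T_0) + (1 - x)\,\gamma_{\mathrm{air}}(T_0)\right]
\left(\frac{T_0}{T}\right)^{n},
```

where x is the absorber mole fraction and n is an empirical temperature exponent. At tens of bar the resulting widths become comparable to, or larger than, the spacing between neighbouring near-infrared water lines, which is why the features in Figure 1.4 merge.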
Figure 1.4. Effect of pressure increase on a set of near-infrared absorption lines.
1.4.2 Modulation spectroscopy
One alternative to direct absorption measurements for high-pressure gases is
wavelength-modulation spectroscopy (WMS) [59-63]. Using this approach, the
frequency of the laser source is modulated at a much higher frequency ( ) and
the detector is sampled at integer multiples of this frequency, for example and .
This form of measurement is sensitive to the shape of the absorption feature instead of
its absolute height, and the temperature and species mole fraction can be calculated
from the measured values by decomposing the mathematical functions which model the
expected absorption line shape (e.g. Lorentzian, Voigt, or Galatry profiles) into
harmonic functions and calculating the and signal as a function of these
harmonics. The complexity of this relationship also depends on the modulation depth,
and the interference of the signal by a nonlinear amplitude modulation in the laser
source (which results from a large modulation depth in the injection current) can
introduce additional terms; these are discussed in detail in the literature [62, 64, 65].
This approach is particularly suited to measurements over short path lengths or for trace
species in the flow where the signal from direct absorption spectroscopy is too small.
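A minimal numerical sketch of harmonic detection is given below; it simulates sinusoidal frequency modulation over a Lorentzian absorption feature and extracts the second-harmonic (2f) component with a software lock-in. All parameter values are arbitrary illustrations, and the synchronous intensity modulation of a real diode laser (discussed above) is deliberately omitted.

```matlab
% Illustrative WMS sketch: 2f lock-in detection on a simulated Lorentzian line.
fm    = 10e3;                    % modulation frequency [Hz]
fs    = 2e6;                     % sampling frequency [Hz]
t     = (0:1/fs:0.01-1/fs)';     % 10 ms of samples (integer number of cycles)
gam   = 0.05;                    % line HWHM [cm^-1]
a     = 0.1;                     % modulation depth [cm^-1]
nu    = a*cos(2*pi*fm*t);        % instantaneous detuning from line centre [cm^-1]
alpha = 0.2 ./ (1 + (nu/gam).^2);            % peak-normalised Lorentzian absorbance
It    = exp(-alpha);                         % transmitted intensity (Beer-Lambert, I0 = 1)
% Software lock-in at 2f: mix with quadrature references and low-pass (mean) filter.
X2f = 2*mean(It .* cos(2*pi*2*fm*t));
Y2f = 2*mean(It .* sin(2*pi*2*fm*t));
R2f = hypot(X2f, Y2f);           % magnitude of the second-harmonic signal
```

Because the modulation here is centred on the line, the detected signal contains only even harmonics, so the 2f magnitude carries the shape information referred to in the text.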
WMS is also beneficial in high-pressure environments where a baseline absorption (the
measured value in the case that no target species is present) is difficult to obtain
from high-pressure direct absorption measurements of transitions with significant
amounts of interference. The drawback to this method lies in its experimental and
mathematical complexity.
Laser absorption spectroscopy is notably non-invasive and, at typical diode laser
powers ( ), the measurement causes virtually no alteration to the measured
parameter(s). Furthermore, the geometric requirements are particularly unrestrictive:
only hardware at either end is required to measure gas parameters over the entire beam.
This is a highly desirable attribute for gasdynamic sensing and in particular when
optical access comes at a premium, as would certainly be the case inside a turbine
engine.
The concept of laser absorption spectroscopy can be extended to higher-dimensional
reconstructions by recognising that the transmittance of light over a beam can be related
to the Radon transform of a material property of the transected gas that is called the
spectral absorption coefficient, which is a quantitative measure of the local opacity of
the gas at a given frequency due to stimulated absorption. It is then possible to use
tomographic inversion techniques to reconstruct images of this quantity (or linear
functions of it, e.g. its integrals in the integrated absorbance method) from line-of-sight
transmittance measurements. The theoretical basis of this was published in 1975 [66]
and the numerical implementation was developed using a multitude of mathematical
and spectroscopic approaches [67-70]. For example, in the particular case of an
axisymmetric flame, 2-D temperature and species concentration fields can be
reconstructed in a plane perpendicular to the axis of symmetry using the onion-peeling
method [71, 72]. In general, however, it is necessary to use Fourier or algebraic
methods of reconstruction [73] depending on the number of available measurements.
Early tomographic reconstructions of temperature and OH concentration were achieved
using a continuous wave dye laser [74]. Successful reconstructions of 2-D temperature
profiles using electronic ( ) transitions of were produced [75, 76], with
another target molecule using a He-Ne laser [77, 78]. This was followed by chemical
species tomographic reconstructions using near-infrared transitions of hydrocarbon
molecules in chemical reactor environments [79, 80]. A similar method was used by the
same group to obtain time-resolved species reconstructions inside an automotive engine
cylinder [81-83].
Very recent experiments in temperature tomography using laser absorption
spectroscopy have focused on reconstructing two [84, 85] or more than two [86] images
of integrated absorbances. These approaches demonstrate the feasibility of temperature
tomography using laser absorption spectroscopy at atmospheric pressures, where target
lines are well-isolated.
1.5 Aims and objectives
1.5.1 Objective
The objective is to develop a theoretically viable solution to permit temperature imaging
inside a turbofan environment characterised by high temperature and pressure and
limited optical access, and to demonstrate the viability of this solution using numerical
simulations.
1.5.2 Aims
To test the developed method against synthetic data predicted at turbine operating conditions.
To evaluate the method over a wide range of possible conditions.
To compare the method to existing methods of temperature tomography
found in the literature.
To investigate how best to use post-processing to aggregate the large amount
of time-series data that can be captured from fast modern photodiode
detectors.
To develop a methodology to optimise the beam configurations for a variety
of differently-shaped annular geometries.
To identify a set of candidate absorption lines that are suitable for high-
pressure temperature measurement.
1.6 Overview
The second and third chapters are dedicated to the description of the fields of
tomography and spectroscopy, tailored to this particular application. These chapters
form an introduction to the methods of temperature tomography, which are introduced
at the end of the third chapter.
The fourth chapter contains a description and illustrated example of the processes
involved in (1) generating synthetic data and (2) reconstructing temperature fields from
that data.
The fifth chapter is dedicated to numerical analysis with the aim of optimising the
measurement system by selecting a good beam configuration and good water vapour
absorption lines.
The feasibility of the developed method is analysed by computer modelling, and
chapter five explains the modelling process used. Chapter six contains
the results of a large number of numerical simulations of temperature tomography. The
data are divided into sub-sections which focus on different aims. For each sub-section,
the methodology is given, the results are presented as reconstructions and data
overviews, and a set of conclusions are recorded.
Chapter seven summarises the conclusions of the work and the assumptions made in the
numerical investigations, and lists possible future work in the field; chapter eight
contains truncated MATLAB code.
2. TOMOGRAPHY
2.1 Introduction
Tomography is the mathematical study of image reconstruction from data that is
measured only at the periphery of the imaged object. In the case of a flat 2D image the
measurement hardware is restricted to the edge of the imaging plane, and not out of the
plane (e.g. in the case of direct imaging using a camera) or in the imaging space itself
(e.g. invasive imaging using a detector array). Tomographic imaging is a specialist
technique which is often exploited in cases where the desired information is, using
alternative imaging methods, either inaccessible or accessible at a prohibitive cost. X-
ray CT scans, PET scans, and NMR imaging are all forms of tomography which allow
physicians to view distributions of physical quantities inside the body without the need
for invasive surgery, and the advent of reliable imaging machines in hospitals has
revolutionised the diagnostic process.
However, the issue of limited measurement access is not unique to the medical
profession. Although tomographic reconstructions differ on a case-by-case basis, there
are mathematical concepts and approaches that are common to many applications and
the same ideas which were first used to image human bone are also applicable to the
imaging of gas parameters in a combustion environment. The history of tomographic
reconstruction is a story of researchers in many different branches of science working in
parallel on many different imaging problems, and it is not uncommon for a particularly useful concept to be independently discovered multiple times, nor for an idea in one branch of science to be adopted with great success in an entirely separate branch through interdisciplinary communication.
The relationship between the imaged quantity and the measurement dictates the
method of inversion, and each relationship can be classified. In many tomographic
applications, a single measurement at the boundary will be dependent on every value of
the object function in the imaging space. This general case is true in electrical
capacitance tomography and electrical impedance tomography. However, there are
certain special cases in which a single measurement will depend only on a readily
identifiable subset of the imaging space; for example, in laser absorption tomography,
the absorption measurement is attributed to the spectral absorption coefficient of the gas
along the beam only. This is known as the hard-field approximation.
The Radon transform is the key to characterising the link between the imaged object
and the measured data in this approximation. In the following section, the imaged object
is treated as a generic scalar function over two-dimensional space, and the two-
dimensional Radon transform is defined accordingly. The finite and approximate nature
of the resulting measurements is used to derive a practical link between the desired
quantity and the known information. To make progress, a discretisation scheme is
employed to reduce the scale of the image to a finite dimension, and recast the
relationship as a discrete linear inverse problem. The issues with such a problem are
discussed and then addressed using the two competing reconstruction algorithms:
projected Landweber iteration and Tikhonov inversion. Finally, the issue of
measurement optimisation via beam placement is discussed in the specific context of
limited-data tomography.
2.2 Defining the Radon transform
Consider a scalar quantity $f$ which varies over the two-dimensional unit disc $D = \{(x, y) : x^{2} + y^{2} \le 1\}$; this defines the nondimensionalised measurement region, which can later be generalised to a two-dimensional annulus $\{(x, y) : a^{2} \le x^{2} + y^{2} \le 1\}$ for a given inner radius $a$. $x$ and $y$ are Cartesian coordinates, and $f(x, y) = 0$ outside the measurement region. The two-dimensional Radon transform $\mathcal{R}[f](s, \theta)$ is a mapping from $f$ onto the integrals of $f$ over the set of straight lines that pass through the disc:

$$\mathcal{R}[f](s, \theta) = \iint f(x, y)\, \delta(x\cos\theta + y\sin\theta - s)\, \mathrm{d}x\, \mathrm{d}y. \tag{2.1}$$
The coordinates $(s, \theta) \in [0, 1) \times [0, 2\pi)$ represent the position and orientation of a straight line in the following way: $s$ is the minimum distance between the origin and the line, and $\theta$ is the angle between the $x$-axis and the line connecting the origin to the closest point on the line (Figure 2.1); the equation of this line is given by:

$$x\cos\theta + y\sin\theta = s, \tag{2.2}$$
and $\delta$ is the delta function which is used to select only the points that reside on the line.
This transform is named after Johann Radon who published an expression for the
inverse transform in 1917 [87, 88].
Figure 2.1. A general point $(x, y)$ in the domain of $f$, and a line $(s, \theta)$ in the domain of $\mathcal{R}[f]$.
By writing $K(x, y; s, \theta) = \delta(x\cos\theta + y\sin\theta - s)$ and substituting into equation 2.1:

$$\mathcal{R}[f](s, \theta) = \iint f(x, y)\, K(x, y; s, \theta)\, \mathrm{d}x\, \mathrm{d}y. \tag{2.3}$$
Equation 2.1 can be interpreted as a Fredholm integral equation of the first kind with a kernel $K$. The objective is to find the 'object' $f$ given the 'measurements' $\mathcal{R}[f]$. In
practice, it is only possible to measure a discrete number of line integrals, and the task is to find $f$ from a small number of samples of $\mathcal{R}[f]$. Let these discrete measurements over the lines $(s_{i}, \theta_{i})$ be represented by $b_{i}$, indexed by $i = 1, \dots, m$, with measurement errors $e_{i}$. Then equation 2.3 is:

$$b_{i} = \iint f(x, y)\, K(x, y; s_{i}, \theta_{i})\, \mathrm{d}x\, \mathrm{d}y + e_{i}. \tag{2.4}$$
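To make the measurement model concrete, the line integral in equation 2.4 can be approximated numerically by sampling a known field along a beam path. The following MATLAB sketch is purely illustrative: the Gaussian test field and the beam coordinates $(s, \theta)$ are arbitrary choices, not values used elsewhere in this work.

```matlab
% Minimal sketch: approximate the line integral R[f](s,theta) for one beam
% by sampling a known test field f(x,y) along the beam path.
f = @(x,y) exp(-((x - 0.2).^2 + (y + 0.1).^2)/0.1);   % arbitrary smooth test field

s     = 0.3;        % perpendicular distance of the beam from the origin
theta = pi/3;       % angle of the beam normal from the x-axis

% Points on the beam: (s*cos(theta) - t*sin(theta), s*sin(theta) + t*cos(theta))
tmax = sqrt(1 - s^2);            % half-chord length inside the unit disc
t = linspace(-tmax, tmax, 500);  % sample positions along the beam
x = s*cos(theta) - t*sin(theta);
y = s*sin(theta) + t*cos(theta);

b = trapz(t, f(x, y));           % trapezoidal approximation of the line integral
fprintf('Approximate line integral b = %.4f\n', b);
```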
2.3 Filtered backprojection
With enough measurements of this line integral at different angles, a fast solution of 2.4 is possible using the Fourier slice theorem, which states that the one-dimensional Fourier transform of a single projection of the object (i.e. the function $\mathcal{R}[f](s, \theta)$ for a fixed $\theta$) is equal to a "slice" of the two-dimensional Fourier transform of the object along the corresponding radial line in the frequency domain:

$$\int_{-\infty}^{\infty} \mathcal{R}[f](s, \theta)\, e^{-i 2\pi w s}\, \mathrm{d}s = F(w\cos\theta,\, w\sin\theta), \tag{2.5}$$

where $F(u, v)$ is the two-dimensional Fourier transform of $f$. $u$ and $v$ are coordinates in the frequency domain and $w$ parameterises the "slice". The substitution $u = w\cos\theta$, $v = w\sin\theta$ is used to select values of the Fourier transform along the slice (the slice lies along the line normal, perpendicular to the direction of the parallel beams used to form the projection); this is shown graphically in Figure 2.2.
Figure 2.2. Graphical representation of the Fourier slice theorem. The one-dimensional Fourier transform of the Radon transform of $f(x, y)$ at an angle $\theta$ is equal to the two-dimensional Fourier transform of $f(x, y)$ along the radial slice at angle $\theta$.
Multiple projection angles can be used to generate multiple slices of the Fourier transform of $f$, but the discrete Fourier transform must be used because $\mathcal{R}[f](s, \theta)$ is only known at discrete samples. According to the Nyquist sampling theorem, some high-frequency components of $f$ will be lost due to the finite spacing between the lines within a single projection. This can be partially remedied by applying a high-pass filter to the Fourier transform. A solution is then found by taking the two-dimensional discrete inverse Fourier transform and interpolating the result onto the image grid. This is the filtered backprojection
algorithm; it was first derived by Bracewell [89] in radio astronomy and later
independently by Cormack [90] who shared a Nobel prize with Hounsfield for their
work towards the development of the first x-ray computerised tomography imager.
Filtered backprojection is a fast solution method that is well-suited for medical
applications because, as long as a patient remains still, it is possible to measure a very
large number of line integrals and the resulting filtering and interpolation errors are
small. However, when measurement data are limited, the quality of the reconstructed
image is heavily degraded [73] and alternative approaches must be sought.
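As an illustration of this degradation, the following MATLAB sketch (assuming the Image Processing Toolbox functions phantom, radon and iradon are available) reconstructs a standard test image by filtered backprojection from a dense and from a severely limited set of projection angles; the image size and angle sets are arbitrary choices made for illustration only.

```matlab
% Sketch of filtered backprojection using Image Processing Toolbox functions.
% Illustrates how reconstruction quality degrades when the number of
% projection angles is reduced; not the limited-data beam geometry of this work.
P = phantom(128);                       % standard Shepp-Logan test image

anglesDense  = 0:1:179;                 % well-sampled projection set
anglesSparse = 0:30:179;                % severely limited projection set

Rdense  = radon(P, anglesDense);        % sinograms (one column per angle)
Rsparse = radon(P, anglesSparse);

recDense  = iradon(Rdense,  anglesDense,  'linear', 'Ram-Lak', 1, 128);
recSparse = iradon(Rsparse, anglesSparse, 'linear', 'Ram-Lak', 1, 128);

figure;
subplot(1,3,1); imagesc(P);         axis image off; title('Object');
subplot(1,3,2); imagesc(recDense);  axis image off; title('FBP, 180 angles');
subplot(1,3,3); imagesc(recSparse); axis image off; title('FBP, 6 angles');
colormap gray
```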
2.4 Discretisation
Given the incomplete nature of the measurements, the problem can be better approached via the expansion of $f$ in a finite series: $f(x, y)$ is approximated using the weighted sum of a finite number of predetermined basis functions $\phi_{j}(x, y)$:

$$f(x, y) \approx \hat{f}(x, y) = \sum_{j=1}^{n} f_{j}\, \phi_{j}(x, y), \tag{2.6}$$

where the basis functions are square pixels centred at $(x_{j}, y_{j})$:

$$\phi_{j}(x, y) = \begin{cases} 1 & |x - x_{j}| \le \Delta/2 \ \text{and} \ |y - y_{j}| \le \Delta/2 \\ 0 & \text{otherwise.} \end{cases} \tag{2.7}$$

$\Delta$ is the size of the pixel along the coordinate axes and $n$ is the total number of pixels. The coefficients $f_{j}$ are the pixel values, and they completely specify $\hat{f}$. Substitution of $\hat{f}$ into 2.4 yields:

$$b_{i} = \mathcal{R}[\hat{f}](s_{i}, \theta_{i}) + \mathcal{R}[f - \hat{f}](s_{i}, \theta_{i}) + e_{i}$$
$$\quad = \iint \left(\sum_{j=1}^{n} f_{j}\, \phi_{j}(x, y)\right) K(x, y; s_{i}, \theta_{i})\, \mathrm{d}x\, \mathrm{d}y + \mathcal{R}[f - \hat{f}](s_{i}, \theta_{i}) + e_{i}. \tag{2.8}$$
The order of the double integral and the sum in the first term on the second line can be exchanged and, because $f_{j}$ is not a function of $x$ or $y$, it can be taken outside of the double integral:

$$b_{i} = \sum_{j=1}^{n} f_{j} \iint \phi_{j}(x, y)\, K(x, y; s_{i}, \theta_{i})\, \mathrm{d}x\, \mathrm{d}y + \mathcal{R}[f - \hat{f}](s_{i}, \theta_{i}) + e_{i}. \tag{2.9}$$

$\phi_{j}$ and $K$ are both known, so the double integral can be precalculated for each pixel $j$ and line $i$. These values can be stored in a coefficient (or sensitivity) matrix $\mathbf{A}$:
$$A_{ij} = \iint \phi_{j}(x, y)\, K(x, y; s_{i}, \theta_{i})\, \mathrm{d}x\, \mathrm{d}y. \tag{2.10}$$

In physical terms, $A_{ij}$ is the length of the segment of beam $i$ that resides inside pixel $j$ (Figure 2.3).
Figure 2.3. A conceptual illustration of the meaning of $A_{ij}$ as the length of line $i$ in pixel $j$. One method of reducing the number of unknowns is to pixelate the image space. The kernel of the Fredholm equation is then represented by a matrix operator.
This discretisation procedure is a form of numerical quadrature where a curve is approximated by a series of flat steps: in this case, the curve is a one-dimensional slice of $f$ along the beam, and the steps are defined between the intersection points of the line and the pixel edges. The result is a linear system of equations:

$$b_{i} = \sum_{j=1}^{n} A_{ij} f_{j} + \mathcal{R}[f - \hat{f}](s_{i}, \theta_{i}) + e_{i}. \tag{2.11}$$
The second and third terms, which are both error terms, can be combined into a single new term $\varepsilon_{i}$ for brevity:

$$b_{i} = \sum_{j=1}^{n} A_{ij} f_{j} + \varepsilon_{i}, \qquad \varepsilon_{i} = \mathcal{R}[f - \hat{f}](s_{i}, \theta_{i}) + e_{i}. \tag{2.12}$$

Collecting the $m$ measurements, the $n$ pixel values and the $m$ combined errors into vectors gives the matrix form:

$$\mathbf{b} = \mathbf{A}\mathbf{f} + \boldsymbol{\varepsilon}. \tag{2.14}$$
$\mathbf{b}$ is an $(m \times 1)$ vector of the measured values, $\mathbf{A}$ is an $(m \times n)$ matrix of the discretised kernel, $\mathbf{f}$ is an $(n \times 1)$ vector of pixel values, and $\boldsymbol{\varepsilon}$ is an $(m \times 1)$ vector of the combined modelling and data errors. Solving equation 2.14 for $\mathbf{f}$ is a discrete linear inverse problem.
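The assembly of the sensitivity matrix can be sketched in MATLAB as follows. The beam set and grid size here are placeholders, and the chord lengths are approximated by fine sampling along each beam rather than by exact pixel-boundary intersection geometry.

```matlab
% Minimal sketch: assemble the sensitivity matrix A of equations 2.10/2.14 on an
% N-by-N pixel grid over [-1,1]^2. Each entry A(i,j) approximates the length of
% beam i inside pixel j, estimated by fine sampling along the beam. The beam
% coordinates below are illustrative only.
N     = 20;                 % pixels per side (n = N^2 unknowns)
Delta = 2/N;                % pixel size
sList     = repmat(linspace(0.05, 0.85, 5), 1, 8);            % 40 beams:
thetaList = reshape(repmat((0:7)*pi/4, 5, 1), 1, []);         % 5 offsets x 8 angles
m = numel(sList);
A = zeros(m, N*N);

for i = 1:m
    s = sList(i); th = thetaList(i);
    tmax = sqrt(max(0, 1 - s^2));                 % beam extent in the unit disc
    t  = linspace(-tmax, tmax, 2000);
    dt = t(2) - t(1);
    x  = s*cos(th) - t*sin(th);
    y  = s*sin(th) + t*cos(th);
    % map sample points to pixel indices (column-major pixel numbering)
    cx = min(max(floor((x + 1)/Delta) + 1, 1), N);
    cy = min(max(floor((y + 1)/Delta) + 1, 1), N);
    j  = (cx - 1)*N + cy;
    A(i, :) = accumarray(j(:), dt, [N*N, 1])';    % path length per pixel
end
fprintf('A is %d x %d with rank %d\n', size(A,1), size(A,2), rank(A));
```

An exact chord-length calculation (intersecting each beam with the pixel boundaries) could replace the sampling step, but the approximation above is sufficient to illustrate the structure of $\mathbf{A}$.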
2.5 Ill-posedness
It is clear from equation 2.14 that it is easier to find $\mathbf{b}$ given $\mathbf{f}$ than to find $\mathbf{f}$ given $\mathbf{b}$.
These two problems satisfy a commonly accepted definition of a forward-inverse
problem pair offered by Keller [93] as one in which “the formulation of each involves
all or part of the solution of the other”. The naming of each problem as direct or inverse
is, by convention, chosen so that the direct problem involves the acquisition or
prediction of the measurement data from the state of a physical system, whereas the
inverse problem involves estimating the state of a physical system from such
measurement data. Because the objective is to find $\mathbf{f}$ from $\mathbf{b}$, this represents an inverse problem. A common feature of inverse problems is ill-posedness, which is characterised by a failure to satisfy one or more of Hadamard's three criteria:
1. The problem has a solution,
2. The solution to the problem is unique, and
3. The solution depends continuously on the data.
The problem of reconstructing a discrete image from a limited set of its line integrals is
an ill-posed problem because it will typically fail all three of these conditions. The
following discussion will demonstrate why.
If $f$ is naturally pixellated in the same way as $\hat{f}$ then the modelling errors will be zero, and if the measurement errors are also zero then $\boldsymbol{\varepsilon} = \mathbf{0}$ and $\mathbf{b} = \mathbf{A}\mathbf{f}$. Additionally, if the number of measurements equals the number of pixels then $\mathbf{A}$ is square, and if none of these measurements are redundant (e.g. if no two beams intersect exactly the same pixels by the same amounts) then $\mathbf{A}$ is non-singular and an exact solution can be found via $\mathbf{f} = \mathbf{A}^{-1}\mathbf{b}$. This is an ideal case where the problem is well-posed because the solution exists, it is unique, and it is stable; Hadamard's criteria are satisfied.
This ideal case will not be realised in tomography of real objects for three reasons:
(1) Physical distributions of the temperature, mole fraction, pressure, and local attenuation are unlikely to be naturally pixelated, and there will be inherent modelling errors.
(2) All measurement data should be assumed to contain random (if not also systematic) errors. Either of these reasons is sufficient to ensure that $\boldsymbol{\varepsilon} \neq \mathbf{0}$. The true (unknown) line integral values $\mathbf{b} - \boldsymbol{\varepsilon}$ are said to be in the column space of $\mathbf{A}$ because there exists an image $\mathbf{f}$ such that $\mathbf{A}\mathbf{f} = \mathbf{b} - \boldsymbol{\varepsilon}$. If the error vector $\boldsymbol{\varepsilon}$ is in the column space of $\mathbf{A}$ then there exists a second image $\mathbf{f}_{\varepsilon}$ such that $\mathbf{A}\mathbf{f}_{\varepsilon} = \boldsymbol{\varepsilon}$, so the measurement $\mathbf{b} = (\mathbf{b} - \boldsymbol{\varepsilon}) + \boldsymbol{\varepsilon}$ corresponds to the image $\mathbf{f} + \mathbf{f}_{\varepsilon}$. However, since $\boldsymbol{\varepsilon}$ contains a stochastic contribution from the random measurement errors, it is extremely unlikely that $\boldsymbol{\varepsilon}$ is in the column space of $\mathbf{A}$ and, by implication, extremely unlikely that $\mathbf{f}_{\varepsilon}$ exists. It is therefore reasonable to believe that there exists no image $\mathbf{f}^{*}$ that would reconcile $\mathbf{A}\mathbf{f}^{*}$ with the measured data $\mathbf{b}$. In short, there is no exact solution to $\mathbf{A}\mathbf{f} = \mathbf{b}$, and direct inversion via $\mathbf{f} = \mathbf{A}^{-1}\mathbf{b}$ is impossible. This is in violation of Hadamard's first criterion (of existence).
(3) There are a relatively small number of measurements in limited-data tomography: it might be possible to place as many as 60 beams inside a combustion environment [79, 82, 83], and if the primary goal is to ensure that $m = n$ (so that $\mathbf{A}$ is square), the image resolution is limited to 60 pixels, roughly an $8 \times 8$ grid, from which it is difficult to discern structures of interest in $f$. A better approach involves choosing a sufficiently large set of basis functions to make it possible for these structures to appear in the image, and working towards remedying the under-constrained problem which results from $m < n$. This problem can be explained using the concept of the matrix rank.
The rank of a matrix is equal to the number of linearly independent rows or columns it has, and cannot be greater than either the number of rows or the number of columns. If the measurements are independent then $\mathrm{rank}(\mathbf{A}) = m$, but for an under-constrained problem $m < n$ and $\mathbf{A}$ is called rank-deficient. By the rank-nullity theorem, if a matrix is rank-deficient then there must exist at least one vector in the nullspace of that matrix, i.e. there exists a vector $\mathbf{f}_{0} \neq \mathbf{0}$ such that $\mathbf{A}\mathbf{f}_{0} = \mathbf{0}$. This vector is troublesome because it introduces a degree of freedom in the image space which cannot be detected in the measurement space: no measurement will be able to help distinguish between an image $\mathbf{f}$ and $\mathbf{f} + \mathbf{f}_{0}$, because $\mathbf{A}\mathbf{f} = \mathbf{A}(\mathbf{f} + \mathbf{f}_{0})$. Any solution to equation 2.14 will be one amongst infinitely many because $m < n$, and Hadamard's second criterion of uniqueness is violated. So far, the data neither match the model nor are capable of fully constraining it; this is certainly an inverse problem.
A fourth problem may exist as a result of the discretisation of the Fredholm integral equation. This process may have reduced the number of unknown quantities to $n$, but it does not do much to address the problems that are often inherent to integral equations.
The Radon transform, being a form of integration, is a natural low-pass filter. High-
frequency (local) noise in the image is often smoothed when the measurements are
taken, because integration over a line is effectively an averaging process: the
measurements are relatively insensitive to localised variations in the image. Any inversion attempt can be expected to exhibit the opposite effect, because localised variations in the image become highly sensitive to small changes in the measurements. Many naïve numerical
solutions of Fredholm integral equations with noisy data will generate nonsensical
results, due to the amplification of the noise. This feature is true in the continuous case,
and is also true in the discretised case, where it can be partially quantified using the
concept of a matrix condition number. The condition number of a matrix is defined as the ratio of its first to its last singular values: $\kappa(\mathbf{A}) = \sigma_{1}/\sigma_{m}$, where the singular values $\sigma_{1} \ge \sigma_{2} \ge \dots \ge \sigma_{m}$ are found via the process of singular value decomposition. If this number is large then $\mathbf{A}$ is called an ill-conditioned matrix and there are often problems with the stability of the solution of equation 2.14.
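A minimal MATLAB sketch of this diagnosis, using a random stand-in for the kernel matrix rather than a real beam geometry, is:

```matlab
% Minimal sketch: diagnose rank-deficiency and conditioning of a kernel matrix
% via the singular value decomposition. A is a random placeholder for the
% sensitivity matrix (m beams, n pixels, m << n).
m = 40; n = 400;
A = rand(m, n);

sv    = svd(A);                       % singular values, largest first
kappa = sv(1) / sv(end);              % condition number sigma_1 / sigma_m
fprintf('rank(A) = %d of %d unknowns, condition number = %.2f\n', ...
        rank(A), n, kappa);

% Any vector in the nullspace of A is invisible to the measurements:
Z = null(A);                          % orthonormal basis for the nullspace
fprintf('nullspace dimension = %d (degrees of freedom unconstrained by data)\n', ...
        size(Z, 2));
```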
A problem with the ill-conditioning of $\mathbf{A}$ is that two or more of the rows of $\mathbf{A}$ may be nearly linearly dependent. These rows might then correspond to a pair of integrals of $f$ over nearly identical lines. Any discrepancy between the two measurements will be attributed to the small fraction of pixels that one line transects but the other does not. The length of one of the lines through this small number of pixels is much smaller than the total length of the line, so even a small discrepancy between the measurements must be explained by a large difference in the values of these pixels. A linear system of equations which exhibits this problem is ill-conditioned, and violates Hadamard's third condition of stability.
2.6.1 Existence
Data and modelling errors will render the measurements incompatible with any possible image because $\mathbf{A}\mathbf{f} \neq \mathbf{b}$ for any $\mathbf{f}$. It is instead reasonable to seek a solution that minimises the error in lieu of an exact match. A scalar functional $\Phi$ can be defined as the square of the length of this error vector:

$$\Phi(\mathbf{f}) = \|\mathbf{A}\mathbf{f} - \mathbf{b}\|^{2}. \tag{2.15}$$
Any image which minimises $\Phi$ is a least-squares fit to the data. Note that if there are no errors then $\Phi_{\min} = 0$, and the minimiser is the exact solution. The minimum point can be calculated by taking the gradient vector of $\Phi$:

$$\nabla\Phi(\mathbf{f}) = 2\,\mathbf{A}^{T}(\mathbf{A}\mathbf{f} - \mathbf{b}) \tag{2.16}$$

and setting it equal to $\mathbf{0}$. The resulting image is a global minimum because $\Phi$ is a positive quadratic function. This generates the corresponding Euler equation:

$$\mathbf{A}^{T}\mathbf{A}\,\mathbf{f} = \mathbf{A}^{T}\mathbf{b}. \tag{2.17}$$
This formulation ensures the existence of a solution, but it does not mean that this
solution is unique.
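The non-uniqueness can be illustrated with a minimal MATLAB sketch in which A is a random stand-in for the kernel matrix: the pseudoinverse returns one least-squares solution, and adding any nullspace vector gives another solution with an identical residual.

```matlab
% Minimal sketch: least-squares solutions of an under-determined system are not
% unique. A and b are random placeholders for the kernel matrix and measurements.
m = 30; n = 100;
A = rand(m, n);
b = rand(m, 1);

f1 = pinv(A) * b;                            % minimum-norm least-squares solution
Z  = null(A);                                % nullspace basis of A
f2 = f1 + 5 * Z(:, 1);                       % a second, equally valid solution

fprintf('Residual 1: %.3e   Residual 2: %.3e\n', ...
        norm(A*f1 - b), norm(A*f2 - b));     % identical residuals
fprintf('Norm 1: %.3f      Norm 2: %.3f\n', norm(f1), norm(f2));
```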
2.6.2 Uniqueness
While $\mathrm{rank}(\mathbf{A}) < n$, there will be infinitely many least-squares solutions that satisfy 2.17. However, a large fraction of these possible solutions are very different from the sort of physical distributions that might be expected to be found in nature; in other words, very few of the possible solutions reconcile with knowledge of the physical processes which are expected to determine the distribution of $f$. When faced with choosing one amongst many solutions, it is sensible to favour the one which least contradicts the available information about $f$ from alternative sources, and this concept, the incorporation of prior knowledge, can be used to adequately constrain the problem.
This incorporation can be achieved in many different ways. If the priors are known for certain and are not likely to change then they can be encoded into the basis functions themselves, as was demonstrated by Verhoeven [91]: rather than use square flat blocks for $\phi_{j}$, one can use pyramid-shaped functions to enforce a degree of smoothness from the outset of the discretisation; the prior knowledge is that the spatial gradient of $f$ is small. If, on the other hand, the priors might require adjustment, then another approach is to incorporate them when solving the discrete inverse problem.
2.6.3 Stability
Priors can be used to constrain the system of equations 2.17 but, even so, the system is not trivial to solve. As was explained above, ill-conditioned but full-rank matrices are still tricky to deal with in the presence of measurement noise if the condition number of $\mathbf{A}$ is high. The amplification of noise by the inversion process can mean that two measurements of a system in the same state lead to two very different reconstructed images, because the measurement noise is expected to vary between results.
In hard-field tomography, particularly with limited data, the main problem is that $m \ll n$. However, the limited number of measurements and large number of pixels mean that it is very unlikely for two measurements to be linearly or even nearly linearly dependent, and the condition number $\kappa(\mathbf{A})$ of a typical kernel matrix is modest; other reconstruction applications face far more severely ill-conditioned matrices. This feature has also been reported by Twynstra & Daun in their analysis of a similar problem [95]. The main issue is the severe rank-deficiency of $\mathbf{A}$ but, in addressing this by incorporating certain priors, the propagation of measurement noise during inversion is limited. In this case, the prior information is that the image is smooth, and it is likely that the enforcement of this prior has a secondary advantage of suppressing the natural amplification of high-frequency measurement noise by the inversion process.
2.7 Algorithms
2.7.1 Landweber Iteration
Landweber's method is a stable algorithm for finding a minimum of $\Phi$ by iterative means, which also permits simple incorporation of priors via a projection operator that is applied between the iterative steps. The final equation in 2.16 contains an expression for the gradient of the functional that is to be minimised; by definition, the gradient vector points in the direction of the steepest increase of the function, so subtracting a multiple of it from an initial guess of the solution is effectively a gradient descent step:

$$\mathbf{f}^{(k+1)} = \mathbf{f}^{(k)} - \beta\,\mathbf{A}^{T}(\mathbf{A}\mathbf{f}^{(k)} - \mathbf{b}) \tag{2.18}$$
with step size $\beta$. This is Landweber's iterative scheme [96-98]. Re-arranging 2.18 gives:

$$\mathbf{f}^{(k+1)} = (\mathbf{I} - \beta\,\mathbf{A}^{T}\mathbf{A})\,\mathbf{f}^{(k)} + \beta\,\mathbf{A}^{T}\mathbf{b}. \tag{2.19}$$
To ensure this does not diverge, the eigenvalues of $(\mathbf{I} - \beta\,\mathbf{A}^{T}\mathbf{A})$ must lie between $-1$ and 1, which means $\beta$ should be chosen so that $0 < \beta\,\sigma_{1}^{2} < 2$. Because the eigenvalues of $\mathbf{A}^{T}\mathbf{A}$ are equal to the squares of the singular values of $\mathbf{A}$, an equivalent condition is $0 < \beta < 2/\sigma_{1}^{2}$. In practice, if $\beta$ is too small then the iteration requires many steps to reach a minimum value; if $\beta$ is too large then the iterates tend to oscillate between values of $\mathbf{f}$ corresponding to poor solutions [99-102]; if it is zero then the iteration achieves nothing; and if it is negative the iteration represents a gradient ascent, and it should be expected that each new result is further from the solution than before.
As long as $\beta$ is chosen correctly, the iterative scheme 2.18 will exhibit semi-convergence towards a minimum of $\Phi$ [100, 103]. However, this solution is rarely meaningful in the case of limited-data tomography owing to the severe rank-deficiency of $\mathbf{A}$, which the iterative scheme does little to address: a least-squares solution can be found by standard Landweber iteration, but it will still be one of an infinite set, and will generally bear little resemblance to the imaged object.
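A minimal MATLAB sketch of the basic (unprojected) scheme 2.18, using placeholder data and a step size derived from the largest singular value, is:

```matlab
% Minimal sketch of Landweber iteration (equation 2.18). A, fTrue and b are
% placeholders; in practice A is the beam sensitivity matrix and b the measured
% line integrals.
m = 40; n = 400;
A = rand(m, n);
fTrue = rand(n, 1);
b = A * fTrue + 1e-3 * randn(m, 1);       % synthetic noisy measurements

sigma1 = norm(A);                         % largest singular value of A
beta   = 1 / sigma1^2;                    % step size, safely below 2/sigma1^2

f = zeros(n, 1);                          % initial guess
for k = 1:500
    f = f - beta * (A' * (A*f - b));      % gradient-descent step on ||Af - b||^2
end
fprintf('Final residual norm: %.3e\n', norm(A*f - b));
```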
A more general form of Landweber iteration [103, 104] applies an operator $\mathcal{P}$ to the solution after each iteration:

$$\mathbf{f}^{(k+1)} = \mathcal{P}\left[\mathbf{f}^{(k)} - \beta\,\mathbf{A}^{T}(\mathbf{A}\mathbf{f}^{(k)} - \mathbf{b})\right]. \tag{2.20}$$
This is the projected Landweber iteration, and $\mathcal{P}$ can be used to impose prior knowledge on the solution after each repetition. This step serves to alleviate the problems caused by the rank-deficiency of $\mathbf{A}$ because the operator enforces the interdependency between adjacent pixels in the image. This creates an additional set of constraints on the solution. For example, a median filter with an additional non-negativity prior [81, 82] has been used to reconstruct species mole fraction images. This can be written analytically in terms of an operation on each element in the image:

$$(\mathcal{P}\mathbf{f})_{j} = \max\left\{0,\ \mathrm{median}\left\{f_{k} : |(x_{k}, y_{k}) - (x_{j}, y_{j})| \le r\right\}\right\}. \tag{2.21}$$
$r$ is a radial distance which is used to control the size of the filter. The median filter step updates every pixel in the image whether or not it is intersected by beams, and the non-negativity constraint prevents unphysical results in the case that the measured data imply negative mole fractions of a species. This filter has the effect of smoothing the image whilst preserving sharp edges.
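A minimal MATLAB sketch of projected Landweber iteration with a median-filter and non-negativity projection is given below. It assumes the Image Processing Toolbox function medfilt2, uses a square 3-by-3 neighbourhood in place of the radial neighbourhood of equation 2.21, and uses placeholder data throughout.

```matlab
% Minimal sketch of projected Landweber iteration (equation 2.20) with a
% median-filter and non-negativity projection in the spirit of equation 2.21.
% Requires medfilt2 (Image Processing Toolbox). A, fTrue and b are placeholders.
N = 20; n = N*N; m = 40;
A = rand(m, n);
fTrue = kron(ones(1,N), linspace(0, 1, N))';   % smooth synthetic image (as a vector)
b = A * fTrue + 1e-3 * randn(m, 1);

beta = 1 / norm(A)^2;                          % step size below 2/sigma1^2
f = zeros(n, 1);
for k = 1:200
    f   = f - beta * (A' * (A*f - b));         % Landweber step
    img = reshape(f, N, N);                    % back to image form
    img = medfilt2(img, [3 3], 'symmetric');   % median-filter projection
    img = max(img, 0);                         % non-negativity projection
    f   = img(:);
end
imagesc(reshape(f, N, N)); axis image; colorbar
```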
An alternative projection replaces each pixel by the average of itself and its edge-sharing neighbours:

$$(\mathcal{P}\mathbf{f})_{j} = \frac{1}{N_{j} + 1}\left(f_{j} + \sum_{k \in \mathcal{N}(j)} f_{k}\right), \tag{2.22}$$

where $\mathcal{N}(j)$ is the set of active pixels sharing an edge with pixel $j$ and $N_{j}$ is their number. This is effectively a smoothing convolution which tends to lessen steep gradients in the reconstructed solution and ensure that every pixel's value is similar to the neighbouring ones regardless of whether or not a beam transects it. It has been found that no non-
negativity step is required with this projection, and the numerical implementation is
extremely fast.
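Assuming the neighbour-averaging form of equation 2.22 reconstructed above, the projection can be written as a short convolution in MATLAB; the image is a placeholder and all pixels are treated as active.

```matlab
% Minimal sketch of the neighbour-averaging projection (equation 2.22) applied
% to an N-by-N image, implemented as a convolution over edge-sharing neighbours.
% The exact form of the operator is an assumption based on the description above.
N = 20;
img = rand(N, N);                              % placeholder image iterate

kernel = [0 1 0; 1 1 1; 0 1 0];                % pixel plus its 4 edge neighbours
num = conv2(img, kernel, 'same');              % sum over pixel and neighbours
den = conv2(ones(N), kernel, 'same');          % number of contributing pixels
imgProj = num ./ den;                          % neighbour-average projection
```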
Projected Landweber iteration is a very fast method of regularisation if the projection operation can be computed at a similar speed to the iteration step. Furthermore, the iterative step does not require large-scale matrix manipulations, and the method scales well as $n$ becomes large. The projection operator enables a wide range of priors to be incorporated (some at the expense of computation time), including nonlinear functions of the image, but one disadvantage is that the projection and iterative steps will tend to conflict. As a result, the iteration will tend to converge to two separate images depending on whether it is terminated after the iterative step or the projection step, and it is tricky to control the balance between the competing influences of the data and the priors on the solution.
2.7.2 Tikhonov inversion
A different approach to solving this problem is to modify the original least-squares problem to include an additional penalty term. This regularises the problem by incorporating priors from the outset. Tikhonov regularisation [105-107] achieves this by defining a new functional:

$$\Phi_{\lambda}(\mathbf{f}) = \|\mathbf{A}\mathbf{f} - \mathbf{b}\|^{2} + \lambda\,\|\mathbf{L}\mathbf{f}\|^{2}. \tag{2.23}$$

$\lambda$ is a scalar and $\mathbf{L}$ is a square $(n \times n)$ matrix. The minimiser of $\Phi_{\lambda}$ is no longer a simple least-squares best fit, but instead a solution fitted to the data and some additional condition that penalises large $\|\mathbf{L}\mathbf{f}\|$. This step can be regarded as a form of regularisation using prior information, which shall be encoded into $\mathbf{L}$.
By expanding 2.23 and setting its gradient to zero (as before) it is possible to find the equivalent minimiser:

$$(\mathbf{A}^{T}\mathbf{A} + \lambda\,\mathbf{L}^{T}\mathbf{L})\,\mathbf{f} = \mathbf{A}^{T}\mathbf{b}. \tag{2.24}$$
An appropriate choice of $\mathbf{L}$ can ensure that the augmented matrix $(\mathbf{A}^{T}\mathbf{A} + \lambda\,\mathbf{L}^{T}\mathbf{L})$ is full-rank and permits direct matrix inversion. This method has been independently proposed in the field of statistical optimisation, wherein it is called "ridge regression" [108-110]. $\lambda$ is sometimes absorbed into $\mathbf{L}$, but its value is important in controlling the relative adherence of the solution to fitting the data (small $\lambda$) and minimising $\|\mathbf{L}\mathbf{f}\|$ (large $\lambda$).
$\mathbf{L}$ can take many forms depending on the problem; for example, if a solution with a small norm is considered favourable then $\mathbf{L} = \mathbf{I}$, the identity matrix, will serve to penalise solutions with large norms. In ridge regression, $\mathbf{L} = \mathbf{I}$ and the singular matrix $\mathbf{A}^{T}\mathbf{A}$ is augmented by adding a "ridge" to its diagonal components. In this case, the least-squares solution with minimum norm is favoured.
The prior knowledge here does not involve any information about the norm of the solution, so ridge regression is not appropriate. Indeed, least-squares solutions of limited-data tomography problems with minimum norms do not give good reconstructions because the pixels which are not intersected by beams are automatically set to zero [111, 112]. Instead, the prior knowledge that the reconstructed image must be smooth to some degree is used, because the image is based on physical quantities which are assumed to be continuous over space: the gas temperature is governed by the flow of heat in the fluid according to the heat equation, individual molecular species concentrations are governed by the convection-diffusion equation, and the pressure is governed by the Navier-Stokes equations. In each of these three equations, the specific gas parameters evolve in a way that tends to lessen sharp spatial gradients, and it is assumed that there are no additional point-wise source terms that would counter the smoothing effect of these physical laws (e.g. a point-wise source of heat or water vapour from a localised chemical reaction).
A smoothing matrix which encodes this prior is:

$$L_{jk} = \begin{cases} 1 & j = k \\ -1/N_{j} & \text{pixel } k \text{ shares an edge with pixel } j \\ 0 & \text{otherwise,} \end{cases} \tag{2.25}$$

where $N_{j}$ equals the number of active pixels sharing an edge with pixel $j$. This is a discrete version of the Laplace operator, and has been used in previous reconstructions of physical variables [95, 111, 113] because $\|\mathbf{L}\mathbf{f}\|$ is a scalar measure of the non-smoothness of $\mathbf{f}$. If $\mathbf{f}$ is relatively smooth, then its value at some pixel $j$ will be almost equal to its value at all its neighbouring pixels (say, pixels $k$ and $l$). The corresponding element of $\mathbf{L}\mathbf{f}$ is then equal to the sum $f_{j} - f_{k}/N_{j} - f_{l}/N_{j}$, which is close to zero if $f_{j} \approx f_{k} \approx f_{l}$. It is in this way that the penalty factor $\|\mathbf{L}\mathbf{f}\|^{2}$ serves to favour smooth images.
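A minimal MATLAB sketch of Tikhonov inversion with this smoothing matrix on an N-by-N grid is given below; the sensitivity matrix, measurements and regularisation parameter are placeholders, and in practice $\lambda$ would be tuned to balance data fit and smoothness.

```matlab
% Minimal sketch of Tikhonov regularisation (equation 2.24) with the smoothing
% matrix L of equation 2.25 built on an N-by-N pixel grid. A, b and lambda are
% placeholders.
N = 20; n = N*N; m = 40;
A = rand(m, n);                                 % placeholder sensitivity matrix
b = rand(m, 1);                                 % placeholder measurements
lambda = 1e-2;

% Build L: L(j,j) = 1, L(j,k) = -1/Nj for the Nj pixels sharing an edge with j.
L = speye(n);
[cx, cy] = meshgrid(1:N, 1:N);                  % pixel column/row coordinates
idx = @(c, r) (c - 1)*N + r;                    % column-major pixel index
for j = 1:n
    c = cx(j); r = cy(j);
    nb = [];
    if c > 1, nb(end+1) = idx(c-1, r); end
    if c < N, nb(end+1) = idx(c+1, r); end
    if r > 1, nb(end+1) = idx(c, r-1); end
    if r < N, nb(end+1) = idx(c, r+1); end
    L(j, nb) = -1/numel(nb);
end

f = (A'*A + lambda*(L'*L)) \ (A'*b);            % Tikhonov solution, eq. 2.24
imagesc(reshape(f, N, N)); axis image; colorbar
```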
A benefit of Tikhonov regularisation lies in the ease with which priors can be directly implemented via $\mathbf{L}$, and controlled via $\lambda$. Provided the problem scale is not too
large ( was found to be a good rule of