JOURNAL OF UNDERGRADUATE SCIENCE & TECHNOLOGY
IN THIS ISSUE: SUPERHYDROPHOBICITY
TSUNAMI DEPOSITS
MAPPING BROWN DWARF STARS
TACKLING CORONARY ARTERY DISEASE
JUST JOURNAL OF UNDERGRADUATE SCIENCE & TECHNOLOGY
VOLUME 2, EDITION 1, SPRING 2014
CONTENTS
Acknowledgements
About JUST
Forewords
Bioinspired Superhydrophobicity
by L. Howes & R. Browne
Sediment Transport & Deposition in Tsunamis
by T. Howell
Brown Dwarf Discs in Upper Scorpius
by M. Read, L. Ireland, & N. Mayne
Using Machine Learning to Evaluate Coronary Artery Disease
by H. Bolt
ACKNOWLEDGEMENTS
JUST would like to thank the Exeter
Annual Fund for the generous financial
support given in the production of this
Journal.
ABOUT
The Journal of Undergraduate Science and
Technology (JUST) aims to acknowledge
and showcase undergraduate research being
carried out within the College of
Engineering, Mathematics and Physical
Sciences (CEMPS) at the University of
Exeter. The Journal, as well as providing
undergraduate students with an opportunity
to develop their writing and presentation
skills, also enables them to engage with the
wider ‘STEM’ communities within the
University and beyond, and to exchange
ideas and share intellectual activity.
All undergraduates within CEMPS are
eligible to submit to the Journal for print
and online publication and the editorial
team welcome contributions from students
at any stage of their academic programme.
FOREWORDS
JUST is a great opportunity for
undergraduates to showcase some of the
really exciting research activities they are
involved in at the University of Exeter. By
publishing their work in print and online,
they are making a real contribution to
encouraging a national culture of
undergraduate research, and forging what
could be career-long links with their
colleagues in other universities and
academic communities.
Students at the University of Exeter are
immersed in research at the cutting edge of
their chosen fields. JUST demonstrates how
research-informed teaching, along with
opportunities for undergraduates to share
the outcomes of their own research
activities beyond the confines of
assessment, can have a real and lasting
impact on the quality of the student
experience. If you are an academic
colleague reading this edition, I hope that
you will encourage your students to get
involved with JUST, and to submit their
work to our enthusiastic team of student
editors. The next edition of the Journal is
timed to coincide with the College’s
Annual JUST Conference, which, this year,
is taking place on the Streatham Campus on
Wednesday 4th June. All contributors will
be invited to present their work at this
exciting event. If you are a student, do
consider writing for the Journal and
showcasing your work at this year’s
conference, as a presenter or by submitting
an academic poster.
Steve Rose, Academic Adviser to JUST
It is sometimes easy to forget that science
and technology are not obvious things. All
of the great innovations have happened
within 1% of human history, and, in the
modern world, innovation is moving
forward at an astonishing rate.
The foundations of science and technology
are the scientific method – empirical,
measurable, repeatable experiments – and
the process of peer review. The latter lends
papers authority, for those that pass peer
review have demonstrated that they are
well-written, well-researched, and well
worth reading. All researchers need to be
familiar with this process, and it is never too
early to start. This is why JUST exists.
This edition opens with a paper on
superhydrophobes: substances of great
interest to the textiles industry, for they are
virtually self-cleaning. The paper on
sediment deposits from tsunamis is
particularly relevant when one thinks about
the Fukushima incident. This is followed by
a study showing that brown dwarf stars are
typically smaller than previously thought.
Finally, an attempt is made to simulate
atherosclerosis in coronary arteries: a far
too common problem in the worsening
obesity epidemic.
Undergraduates can easily be overlooked
when it comes to research. Thankfully,
universities across the country are starting
to recognise the significant body of
research coming from the talented minds of
students, and finally giving them the
attention they deserve. By the end of this
journal, I am sure that you will agree that
this resource should not be neglected.
Paul Gratrex, Editor
BIO-INSPIRED SUPERHYDROPHOBICITY
BY L. HOWES & R. BROWNE

ABSTRACT
The aim of this experiment was to explore natural and artificially produced superhydrophobic surfaces and investigate the physics of self-cleaning surfaces. Such surfaces are of great importance, with self-cleaning materials and fabrics being used in various industries. We have explored the surface structure and confirmed the theory that the presence of microstructures upon a surface increases the contact angles of water droplets and decreases roll off angles. Experimental data and optical imaging have shown a difference between inner and outer leaf samples: both give equally high contact angle values, but differing results for roll off. Inner samples, taken from around the core of the lotus leaf, show an increase in both hysteresis and roll off values. We speculate that this is due to surface structure, but have found no published work with which to compare our experimental results.
INTRODUCTION
The production of artificial non-wetting materials and coatings has been an area of great industrial interest, with popularity increasing since the turn of the century. Artificial products are manufactured in an attempt to replicate non-wetting surfaces such as rose petals, the leaves of numerous flowers, shark skin, and the backs of beetles. These surfaces all exhibit water repelling properties, from the beading up of rain droplets upon leaves, to the narrow grooves on shark skin that reduce drag and allow the shark to ‘cut’ through the water with ease[1]. In this
investigation we have examined the water
repelling properties of lotus leaves, one of
the most superhydrophobic materials in
nature.
It is commonly accepted that water droplets must attain contact angles (CA) exceeding 150°, with roll off angles no higher than 10°, for a surface to be classed as superhydrophobic[2]. CA is a quantitative
measure of the wetting of a solid by a liquid.
It is defined geometrically as the angle
formed by a liquid at the three phase
boundary where a liquid, gas and solid
intersect[3], as shown in Figure 1. The roll off angle is the angle to which a surface must be inclined for a droplet to be displaced. Superhydrophobic surfaces are very difficult to
wet, with droplets of water simply rolling
off even at low inclinations. This effect is
thought to be created by the surface
(interface) energies of the solid-liquid-gas
boundary and the surface structure and
roughness.
Figure 1: The contact angle between solid,
liquid, and gas.
The lotus leaf surface is often replicated due
to its ability to self-clean. Droplets roll
along the surface of the leaf, picking up dirt, bacteria and other foreign debris and removing them from the surface. This self-cleaning effect is known as the ‘Lotus Effect’, first described by the German botanists Wilhelm Barthlott and Christoph Neinhuis in 1997[4]. They emphasise the
importance of surface roughness on the CA,
and that wax coated papillae upon the surface of leaves contribute to self-cleaning.
These low wetting and self-cleaning properties are being exploited in many industries, from ship hull coatings made to reduce fuel consumption, to fuel cells designed to vent CO2 through superhydrophobic membranes, to superhydrophobic clothing. This led us to a nano-engineered material known as Nano-Tex. Their website suggests that the material can repel liquids and extend the life of fabrics without affecting breathability[5], something which superhydrophobic coatings can inhibit. We obtained a swatch of Nano-Tex from their head office in California in order to run our investigation.
In our experiment, we have explored the
CA of numerous volumes of water droplets
upon samples of Lotus leaf and Nano-Tex
material. CA hysteresis was investigated
with samples exhibiting differing results.
The surface structure was examined using a
scanning electron microscope (SEM) and
an optical microscope to gain an insight into
what was causing superhydrophobicity on a
micro scale level.
THEORY
When a drop is placed upon a solid surface, a three-phase boundary forms between the solid, liquid and gas, described by the Young equation (1). Young’s model is the basis of the theory of how a surface behaves under wetting, and describes the forces present along the boundary. This model assumes a flat, homogeneous surface[6].
cos θ = (γSG − γSL) / γLG (1)
Here, γSL, γSG and γLG are, respectively, the surface energies of the solid-liquid, solid-gas and liquid-gas interfaces, and θ is the static contact angle. The surface energy of a liquid arises from the force imbalance along the surface compared to within the bulk of the fluid[7]. This creates a force known as surface tension. Water exhibits one of the strongest cohesive forces (the hydrogen bond) because it is a polar fluid, and, to minimise their surface area, droplets form spheres[8].
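As a worked illustration of (1), the short sketch below computes the static contact angle for an assumed set of interfacial energies; the numbers are illustrative placeholders, not values measured in this experiment.

```python
import math

# Illustrative interfacial energies in mN/m (assumed, not measured here)
gamma_SG = 25.0   # solid-gas
gamma_SL = 40.0   # solid-liquid
gamma_LG = 72.8   # liquid-gas (surface tension of water at ~20 C)

# Young equation (1): cos(theta) = (gamma_SG - gamma_SL) / gamma_LG
cos_theta = (gamma_SG - gamma_SL) / gamma_LG
theta = math.degrees(math.acos(cos_theta))
print(f"Static contact angle: {theta:.1f} deg")  # ~102 deg: a low energy, poorly wetting surface
```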
Bulk forces within a surface determine whether a surface is of high or low energy. High energy surfaces, such as glass, ceramics or metals, have strong covalent, ionic or metallic bonds which require large amounts of energy to separate; thus they are classed as high energy. When in contact with a high energy surface, a water droplet will generally wet it, achieving θ values between 0° and 90°[9]: it is energetically favourable for the liquid to wet the solid surface rather than be separated from it by an air film. For some low energy materials, which are bonded via hydrogen bonds or Van der Waals forces, it is more energetically favourable for an air film to separate the solid and liquid.
The CA in (1) is single valued with the
assumption of a flat surface. This
corresponds to the equilibrium position of
the solid-liquid-gas contact line. In reality,
surfaces are not homogenous, and can have
different roughnesses and chemical
compositions. This produces a range of
values for the CA due to adhesive forces
being stronger at certain points upon the
surface. This is more apparent when the
droplet is in motion as surface roughness
plays an important role in the roll off
angles. When placing a droplet upon a rough surface, one can observe a minimum and a maximum value for the CA (θrec and θadv respectively), with the difference between them being the CA hysteresis[10].
Figure 2: Schematic of a droplet upon a tilted surface showing advancing (θadv) and receding (θrec) CA. Surfaces that have low CA hysteresis generally have lower roll off angles; this depends on the surface roughness of the material, as discussed below.
To accommodate for the surface roughness,
both the Cassie-Baxter and Wenzel models
were formulated. The Wenzel model
assumes a liquid drop will fill grooves
found upon the surface of the material.
Figure 3: Qualitative representation of
Wenzel’s model with the droplet filling the
grooves that appear on a rough surface[11].
The roughness factor (R) of the surface is the ratio of the true solid surface area (with fluid filling the grooves) to the flat area assumed by the Young model. For the Young model this has a value of one, but here R>1. This adjusts the contact angle such that
cos 𝜃 = 𝑅 cos 𝜃0 (2)
where θ is the true contact angle and θ0 is
the angle assumed via the Young model.
This indicates that the roughness of a surface accentuates either the water repellency or the absorbing nature of the surface. Nano-Tex is engineered to have nanoscale fibres running along channels upon the surface of each strand of fabric. When testing with small volume drops, there is evidence of wetting similar to that described by the Wenzel model, and, when tilted, the fabric shows high values for CA hysteresis. It should be stated that Nano-Tex claim this is a water resistant product and not a waterproof fabric, so over time and with prolonged exposure water will wet the surface[12].
The Cassie-Baxter model is the opposite of the Wenzel model, and implies that water droplets sit upon the surface grooves, leaving air pockets between the liquid and the solid surface. This splits the liquid-solid boundary into a liquid-solid and a liquid-gas boundary, and results in Cassie’s law:
cos θ = Rf fSL cos θ0 + fSL − 1 (3)
Here, θ, θ0 are as described in the Wenzel
model. Rf is now the roughness associated
with the solid surface in contact with the
liquid, and fSL is the fraction of the solid
surface area in contact with the fluid. When
Rf is equal to R and fSL=1, we return to the
Wenzel model.
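A minimal numerical sketch of equations (2) and (3), under assumed values of θ0, R, Rf and fSL (illustrative, not fitted to our samples), showing how roughness amplifies repellency and how Cassie’s law recovers the Wenzel model when fSL = 1:

```python
import math

def wenzel(theta0_deg, R):
    """Apparent CA from equation (2): cos(theta) = R * cos(theta0)."""
    c = R * math.cos(math.radians(theta0_deg))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

def cassie_baxter(theta0_deg, Rf, f_SL):
    """Apparent CA from equation (3): cos(theta) = Rf*f_SL*cos(theta0) + f_SL - 1."""
    c = Rf * f_SL * math.cos(math.radians(theta0_deg)) + f_SL - 1.0
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

theta0 = 110.0  # assumed intrinsic (Young) CA of the flat material, deg
print(wenzel(theta0, R=1.8))                     # ~128 deg: roughness amplifies repellency
print(cassie_baxter(theta0, Rf=1.8, f_SL=0.1))   # ~164 deg: air pockets raise CA further
print(cassie_baxter(theta0, Rf=1.8, f_SL=1.0))   # f_SL = 1 recovers the Wenzel value
```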
The Lotus Effect is modelled using the Cassie-Baxter model, where droplets of liquid sit upon closely packed microstructures known as papillae. These are found upon the upper epidermis of the leaf and are individually covered in nanoscale wax tubules[13].
Figure 4: The hierarchical double structure presented by the lotus leaves consists of papillae of the order of 10-20 microns in height and 10-15 microns in width, covered in nanoscale wax tubules[13]. It is this wax that is believed to be the cause of the Lotus Effect that allows the surfaces of these leaves to self-clean[14].
With tip diameters of the order of microns, the papillae provide only a small contact area for droplets. These small diameter tips, paired with a high number density of papillae (40 can be observed in an area of 100µm2), allow even large volume droplets (20µl) to achieve high contact angles.
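To illustrate how such sparse tip contact translates into a small solid fraction fSL for the Cassie-Baxter model, the sketch below uses an assumed 1µm tip diameter and reads the quoted density as 40 papillae per 100µm × 100µm patch (our assumption, roughly consistent with the ~17.8 micron separation reported later):

```python
import math

# Assumed papilla geometry (illustrative values)
tip_diameter = 1.0          # micron: tip diameter "of the order of microns"
n_papillae = 40             # papillae counted in the patch
patch_area = 100.0 * 100.0  # micron^2, assuming a 100 micron x 100 micron patch

tip_area = math.pi * (tip_diameter / 2.0) ** 2
f_SL = n_papillae * tip_area / patch_area  # fraction of solid in contact with the liquid
print(f"f_SL ~ {f_SL:.4f}")  # ~0.003: the droplet rests almost entirely on air pockets
```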
Confusion, however, can occur when
looking into literature on the
superhydrophobicity of rose petals. In
general, a high value for CA and a low CA
hysteresis implies low liquid-solid
adhesion. Experiments of water droplets on
roses reveal high values for CA, but such
strong adhesive forces that, even when
tilted to 180°, droplets will stay in contact
with the petal[15]. This is thought to be
produced via higher separation of
microscale structures which allow droplets
to be held by the surface and is known as
the petal effect. This raises questions over
the true meaning of the term
superhydrophobicity, a topic which is still
under discussion.
EXPERIMENTAL METHOD
With the aim of investigating the CA of
various volumes of droplets upon lotus
leaves and Nano-Tex, our samples were
flattened using weights and left for at least
24 hours. Once flat, they were secured on a
substrate at 0° inclination, and drops were
placed upon the surface from a small height
(a few mm). A ruler was secured in front of
the substrate for scaling purposes. A 10µl
pipette with error of ±0.05µl was used
throughout our investigation, as drop
volumes were varied from 2-50µl. This
broad spread of volumes was used to get a
sense of how the CA decreases with
increasing drop volume and to see its effect
on the roll off angle.
A Canon EOS 1000D camera fitted with an extension tube was used to take images throughout our experiment. The images were processed using the dropsnake package found in the software ImageJ, which provides a piecewise polynomial fit between individual points, even for non-axisymmetric drops. To reduce the error of the CA measurements, they were repeated numerous times, and the half spread of the results was used as the error; the standard deviation of the results gave errors that we deemed too small given the human error that can occur when taking readings and using the software.
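For instance, the half-spread error adopted here could be computed as in the sketch below; the repeat readings are hypothetical, for illustration only.

```python
import statistics

# Hypothetical repeat CA readings (deg) for a single droplet, not our actual data
readings = [151.2, 153.8, 150.6, 154.1, 152.3]

mean_ca = statistics.mean(readings)
half_spread = (max(readings) - min(readings)) / 2.0  # error estimate adopted here
std_dev = statistics.stdev(readings)                 # typically smaller; deemed too optimistic

print(f"CA = {mean_ca:.1f} ± {half_spread:.1f} deg (half-spread)")
print(f"Standard deviation: {std_dev:.2f} deg")
```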
For the CA hysteresis, we used the setup described in the Appendix (Figure 11). Images were obtained and analysed using the same software as before, this time measuring both θrec and θadv, with repeats to reduce error. Finally, we plotted inclination against CA hysteresis.
To investigate the surface structure, samples of leaf and Nano-Tex were examined using a scanning electron microscope (SEM). As a further investigation, we looked at the surface of the leaves using an optical microscope, as the gold coating and exposure to a vacuum within the SEM can affect the papillae of the leaf.
RESULTS & DISCUSSION
Contact Angle, Volume, & Roll Off
The data in full can be found in the
appendix; only the key findings are
discussed here.
The samples of lotus leaf used in this experiment have proven to be superhydrophobic for volumes below 6µl, achieving CA values above 150° with a maximum value of 155.9±1.3°. A higher resolution camera could be used for increased magnification, potentially reducing errors; better backlighting to enhance the boundary of the water droplets, allowing for easier CA measurements, could also have been used.
The most surprising result observed is the difference in roll off angles for samples of inner leaf (roughly 7.5cm from the core) compared to the outer (further than 7.5cm). Inner surfaces show increased roll off (28° for a 40µl drop) compared to outer samples (10° for the same volume), with evidence of pinning. Values for the CA hysteresis angle are also increased as a result. This is surprising, as it threw open the investigation into what is stated in literature to be one of the best examples of superhydrophobicity in nature. To strengthen the findings, several inner samples from numerous leaves were used; all were found to have increased roll off angles. This is discussed further below.
Nano-Tex samples have been shown to have lower values for CA than the lotus leaf, confirming that the presence of microstructures upon a surface increases CA. As observed for the lotus leaves, increasing drop volume decreases the CA. When it comes to roll off, small volumes of fluid (<15µl) did not roll when inclined. One could speculate that this is due to droplets being trapped upon the surface, with the drop weight not being enough to overcome adhesive forces with the surface. When higher volume droplets are used, the weight of the fluid and the incline appear to be enough to achieve roll off. Droplets do, with time, appear to soak into the material, decreasing the CA. Unlike the lotus leaf, Nano-Tex can become wet when overused, and droplets will soak into the already wet regions, decreasing the CA even further.
Contact Hysteresis Plots
Figure 5: CA hysteresis against inclination
plot presenting data from all three samples
with: Nano-Tex in blue; outer leaf in red; and
inner leaf in green. Both Nano-Tex and outer
leaf samples show a linear relation between
the two variables. The inner leaf shows
evidence of a linear relation at the point of
inclination and before roll off.
The plot above shows how each sample behaved under inclination, and the effect on CA hysteresis up to the point of roll off. Outer and inner leaf samples show roll off values of 18° and 26° respectively for a 40µl droplet. Outer samples show a linear relation between roll off and CA hysteresis value, confirming that droplets roll along the surface with little resistance, as explained by the Lotus Effect[5]. Nano-Tex also shows a linear relation, but with evidence of pinning even at low inclinations and eventual roll off at 44°.
The inner leaf appears to show a linear
relation between inclination and CA
hysteresis, both at the start and just before
roll off. This may be due to the droplet
initially rolling (much like the outer
sample) before being abruptly stopped by
the surface roughness. There is then
evidence of the droplet deforming,
increasing CA hysteresis values, before
finally rolling off. As stated earlier, this is
an unexpected result, and further tests
would be required to confirm the reliability
of our results.
Surface Structure
Figure 6: An image at 0° inclination
displaying the stomata (large structures) and
papillae. When taking separate 100µm2 areas,
both researchers counted 38 papillae, with an
average separation of 17.8 microns. This is a
sample of the outer leaf, where stomata appear more abundant and papillae are distributed uniformly.
Using the SEM, we confirmed with our samples of lotus leaves some of the theories of the surface structure causing the Lotus Effect.
Figure 7: An image of a single papilla on the same leaf sample. The sample is tilted at 52°, with the height calculated to be approximately 17 microns. The tip widths, measured at 0°, were all of the order of microns, as predicted[14].
It can clearly be seen in Figure 7 that the surface and papillae (including the tips) are covered in the nanoscale wax that is believed to be the cause of the Lotus Effect. The stomata scattered around the outer parts of the leaf are much larger (Figure 12) and do not show any evidence of wax upon their surfaces. The underside of the leaf (Figure 13) is included in the appendix for comparison. These results agree with the theory that the surface consists of uniformly distributed microstructures covered in nanoscale fibres.
Images of the Nano-Tex, in contrast, show results no different to viewing samples of cotton. At magnifications beyond the 5 micron scale, the samples bubbled up, cracking the gold coating (Figure 14).
Figure 8: The surface of an individual fibre
of Nano-Tex displaying an apparent valley
structure upon the surface. Upon these
valleys, it is believed, are the nano structures
that are the cause of the material’s water
repellency.
When results taken using inner leaf samples were found to give greater values for the roll off, it was predicted that either a higher number density of stomata would be present (which may cause pinning) or an increased spacing between the microstructure papillae (as in roses). Optical images were taken to test these predictions.
Figure 9: Optical image at 20x zoom for the
outer leaf. Both stomata and papillae can be
made out, with stomata being the larger
brighter structures.
Figure 10: Optical image at 20x zoom for the
inner leaf. There is no evidence of stomata
present and the spacing between papillae is
comparable with that of the outer leaf
samples.
From the optical images, no distinct differences have been observed in the spacing between microstructures, although there is evidence of a decreased number of stomata upon the inner leaf. No relevant literature has been found that has observed a difference between how inner and outer lotus leaf samples behave under wetting. This investigation has shown that there is a clear increase in the roll off angle of 40µl drops upon inner samples of leaf. The results where volumes are varied have also shown evidence of increased roll off when compared to outer leaves, and, for volumes less than 10µl, complete adhesion.
CONCLUSION
An SEM was used to image the surface of a
natural superhydrophobic source, the lotus
leaf, to confirm the theory for the causes of
such high values for CA. An artificial nano-engineered fabric has been used for comparison, with results indicating that droplets wet the surface, which could be the result of droplets filling between the nano fibres.
The SEM has also been used to show that
closely-packed wax-covered papillae are
present on the surface of the lotus leaf, with
theory suggesting that low roll angles are
due to this wax. High values for CA have
been shown for various volumes of
droplets; however, there is evidence to
suggest that their surfaces are not uniform.
Increased roll off angles, with higher values for CA hysteresis, have been observed for inner samples around the core of the leaf. Outer leaves show lower roll off angles and low CA hysteresis values, as expected for a self-cleaning surface. Possible causes of this have been speculated upon, but no obvious conclusion as to why such an effect occurs has been reached.
It is interesting to look back over the definition of a superhydrophobic surface: CA values exceeding 150° have been achieved for outer leaf samples and Nano-Tex; however, roll off angles only reach 10° at a volume of 40µl (for leaves), at which volume CA values of around 130° were measured.
This confusion seems to be replicated in
numerous papers[15][16] that question the
definition of superhydrophobicity. There is
a question as to whether surface roughness
plays a part in the definition.
Superhydrophobicity may only hold if
using the Young model, where there is no
surface roughness. This roughness can
inhibit roll off, even with high CA values,
as shown with inner leaf samples and Nano-
Tex, or even result in no roll off at all when
using low volumes on all samples.
Regardless of any issues with the definition,
this phenomenon is being put to good use in
many industries that attempt to replicate
low wetting and self-cleaning surfaces that
have extensive practical applications.
APPENDIX
Setup
Figure 11: Experimental setup for measuring
the CA hysteresis: a simple setup consisting
of a hinged surface that is set at 0°
inclination. A jack is used to slowly raise the
surface, and images were taken side-on. The
angle of inclination was measured using a
protractor secured to the side of the tilting
table.
SEM Images
Figure 12: An image of the tip of a stoma upon the surface of the leaf. There is no sign of any wax upon the top of these structures; as a result, it is not believed that they are a factor in producing low roll off angles.
Figure 13: The underside of the leaf, even at
this magnification, shows no clear
microstructures. There are channels and
valleys that appear to be filled with the
nanoscale wax, and, as predicted[13], are
longer and thicker than the fibres found upon
the upper surface, which are thinner and more
densely packed.
Figure 14: The surface of the Nano-Tex
cracking when attempting to zoom further
into a fibre.
TABLES OF RESULTS
Table 1: Table of results for outer samples of leaf. The angles θrec and θadv and the roll off angle were measured. A roll off of N/A implies that, when tilted past 90°, drops adhered to the surface.

Volume (µl) θrec (°) θadv (°) Roll off angle (±2°)
2±0.05 149.520±5.7 155.952±1.3 N/A
4±0.05 147.394±1.2 152.319±2.3 70
6±0.05 143.769±0.38 144.847±0.38 30
8±0.05 140.473±1.2 140.663±2.3 13
10±0.05 141.068±3.7 143.776±0.1 24
15±0.10 134.313±0.53 137.680±1.1 18
20±0.10 130.005±1.1 132.682±1.1 14
30±0.15 132.230±2.9 132.363±2.5 12
40±0.20 126.656±2.3 133.317±1.3 10
50±0.25 124.568±2.1 127.331±1.4 23

Table 2: Table of results for inner samples of leaf.

Volume (µl) θrec (°) θadv (°) Roll off angle (±2°)
2±0.05 149.975±0.44 152.953±3.4 N/A
4±0.05 147.242±0.95 150.310±3.2 N/A
6±0.05 142.515±0.44 142.467±2.6 N/A
8±0.05 136.274±2.3 137.413±2.9 N/A
10±0.05 132.366±1.5 135.420±1.4 85
15±0.10 141.14±0.61 147.254±1.2 67
20±0.10 136.007±2.9 137.939±3.3 65
30±0.15 127.909±1.2 132.752±3.0 22
40±0.20 127.706±1.3 129.298±1.6 28
50±0.25 130.219±1.3 136.068±3.2 19

Table 3: Table of results for samples of Nano-Tex.

Volume (µl) θrec (°) θadv (°) Roll off angle (±2°)
2±0.05 149.191±3.2 150.617±1.8 N/A
4±0.05 144.392±2.2 146.490±1.1 N/A
6±0.05 141.071±2.6 145.801±2.6 N/A
8±0.05 134.862±3.3 138.994±3.2 N/A
10±0.05 134.744±3.6 136.100±3.9 N/A
15±0.10 124.998±3.0 126.540±1.3 N/A
20±0.10 124.335±1.3 124.598±2.3 75
30±0.15 115.620±3.2 117.235±2.3 52
40±0.20 116.660±3.0 116.782±2.7 43
50±0.25 112.453±1.8 113.128±2.1 36
REFERENCES
[1] http://www.asknature.org/strategy/038caf453c09b3016465cc6a93605#.Uy4UAl63EfI
(accessed 29 Nov. 2013)
[2] http://www.mecheng.osu.edu/nlbb/files/nlbb/Lotus_Effect.pdf (page 3)
(accessed 29 Nov. 2013)
[3] http://attenstion.com/applications/measurements/contact-angle
(accessed 29 Nov. 2013)
[4] Barthlott W, Neinhuis C. 1997. Purity of the Sacred Lotus, or Escape from Contamination in Biological Surfaces. Planta, 202(1), pages 1-8.
http://www.citeulike.org/user/hendysh/article/2009895
(accessed 29 Nov. 2013)
[5] http://www.nanotex.com/applications/hometextiles_P1.html
(accessed 29 Nov. 2013)
[6] http://web.mit.edu/nnf/education/wettability/wetting.html
(accessed 30 Nov 2013)
[7] http://www.kibron.com/surface-tension
(accessed 30 Nov 2013)
[8] http://hyperphysics.phy-astr.gsu.edu/hbase/surten2.html
(accessed 1 Dec 2013)
[9] http://www.adhesives.org/adhesives-sealants/adhesives-sealants-overview/structural-design/surface-
energy-and-wetting
(accessed 2 Dec 2013)
[10] http://link.springer.com/article/10.1007%2Fs00396-012-2796-6
(accessed 2 Dec 2013)
[11] http://www.intechopen.com/source/html/10042/media/image8.jpeg
(accessed 2 Dec 2013)
[12] http://www.nanotex.com/faqs/faqs_spills.html#1
(accessed 2 Dec 2013)
[13] http://www.beilstein-journals.org/bjnano/single/articleFullText.htm?publicId=2190-4286-2-19
(accessed 2 Dec 2013)
[14] http://www.ramehart.com/newsletters/hierarchical_structure.jpg
(accessed 2 Dec 2013)
[15] Petal Effect: A Superhydrophobic State with High Adhesive Force
Feng L, Zhang Y, Xi J, Zhu Y, Wang N, Xia F, Jiang L.
http://www.ncbi.nlm.nih.gov/pubmed/18312016
(accessed 2 Dec 2013)
[16] Green Tribology: Biomimetics, Energy Conservation and Sustainability (page 25)
Michael Nosonovsky, Bharat Bhushan
(accessed 4 Dec 2013)
A TECHNICAL REPORT ON SEDIMENT TRANSPORT & DEPOSITION FROM TSUNAMIS
BY T. HOWELL

ABSTRACT
Tsunamis have been affecting coastal areas in significant ways for millions of years, as sedimentological records and, more recently, eyewitness accounts have shown. Research into tsunami deposits has seemed to raise as many questions as it has answered, with no agreed upon definition of a tsunami deposit currently available.
Sediment transport and deposition studies can be used not only in the palaeoreconstruction of past events, but also in the preparation and harm prevention necessary today. Important areas of research include distinguishing storm from tsunami deposits, the characteristics of tsunami deposits in the geological record, and the behaviour of sediment transport and deposition and its long-lasting effects.
The broad range in tsunami deposit
characteristics and areas of available
research is a clear indicator of the
complexity of the topic. By compiling and
reviewing the relevant literature, this report
provides a summary and proposals for
future research regarding the sediment
transport and deposition from tsunamis.
INTRODUCTION
The purpose of this report is to examine the transportation and deposition of sediments resulting from tsunamis by providing a review of recent research surrounding the topic. The word ‘tsunami’ means ‘harbour wave’ in Japanese. Tsunamis may be triggered by a number of factors, with earthquakes, landslides, and bolide impacts considered to be the main three (Dawson et al.).
Although tsunami events result in
significant alterations to coastal
environments and sediment transport, as
well as loss of life, they are rare in terms of
human history. In terms of the geological
record, however, they are common;
Scheffers et al. estimate that 100
megatsunamis have been recorded
worldwide in the past 2000 years with more
presumably going unrecorded.
In terms of modern science, detailed
tsunami research is in its infancy, and has
displayed increasing prominence within the
last two decades. Catastrophic events such
as the 2004 Indian Ocean tsunami, which
claimed the lives of over 250,000 people, as
well as the more recent 2011 Japanese
tsunami which resulted in contamination of
the Pacific ocean by radioactive waste, have
attracted a great deal of attention.
Excluding the sediment record, knowledge of past events has been based on written accounts, which offer little detailed information. It is thought that the study of sediment transport and deposition can lead not only to a further understanding of the effects of tsunamis, but also to more effective preparedness and early warning systems, as well as helping to reconstruct past tsunamis from the geological record. Throughout this report,
the term sediment is used to refer to
particles ranging in size from microscopic
foraminifera to large boulders, with grain
size being the main parameter used to
distinguish between tsunami deposits. The
term ‘tsunamiites’ is also used to refer to
sedimentary deposits resulting from a
tsunami, and covers a range of grain sizes,
sources, transport and depositional
processes.
TSUNAMIS
Tsunamis occur as a series of waves known
as a ‘wave-train’, with significantly
increased wavelengths, which can reach
land within minutes or hours of each other.
Tsunami waves have been recorded as
travelling up to 220m/s (500mph)
(MacInnes et al.), with waves reaching
heights of tens of metres. The first in the
wave-train is not usually the strongest;
indeed, successive waves increase in
strength. These waves can be triggered
in a number of ways, which Dawson et al. defined as either ‘bottom-up’ displacements of the sea-bed, such as earthquakes, submarine landslides and volcanic eruptions, or ‘top-down’ displacements such as coastal landslides, glacier calving and bolide impacts. The most common trigger for tsunamis is believed to be large, deep-sea earthquakes originating from tectonic plate boundaries, such as the 2011 Tohoku earthquake, which registered a magnitude of 9.0 and occurred at the Pacific Plate subduction zone along north-eastern Honshu (Nandasena et al.).
Morton et al. stated that the initial tsunami perturbation could appear as at least one of three forms: a continuous surge, an elevated bore, or a recession of the sea.
This perturbation is often influenced by
coastal geography. They also noted that
coastal sites closest to the source region
initially experienced a bore whilst ‘farfield
sites’ initially experienced a surge.
Throughout this report, tsunami sediments
are not distinguished by the origin of the
tsunami but by the sediment type, as no
relationship has yet been discovered
between the cause of the tsunami and any
patterns in transport and deposition.
SEDIMENT TRANSPORT
Tsunamis are capable of transporting a
large amount of material, ranging in grain
size from fine sand and mud to coarse
boulders and clasts, through suspension,
saltation and creep. The amount of
sediment is inferred from several
eyewitness reports from the 2004 Indian
Ocean tsunami, which describe the waves
as ‘being black before breaking on land’
(Lavigne et al.). This grain size range
reflects the sheer size of the source region
for the sediment, and is influenced by
sediment availability and ‘magnitude of the
hydraulic source of the tsunami’
(Nandasena et al.). It is possible to split
sediment transport into two principal flows,
named ‘run-up’ and ‘backwash’. The
former consists of the tsunamigenic waves
reaching inland up to the maximum point of
inundation, where velocity decreases to
zero; thereafter, the wave recedes, and
backwash begins.
Figure 1 demonstrates the principal pathways of tsunami sediment transport and deposition.
Inundation can range from tens of metres up to around a kilometre inland, as was nearly the case in the 2004 Indian Ocean tsunami, which Paris et al. reported as having a maximum inundation of 763m. As the diagram demonstrates, the effect on the land surface represents only a small part of the passage of a tsunami.
Figure 1: Schematic diagram illustrating the
principal pathways of tsunami sediment
transport and deposition (Sugawara et al.).
RUN-UP
As a tsunami nears land and travels into
shallower waters, its individual waves
decrease in velocity, whilst their amplitude
increases (Dawson et al.). This transition
zone from ocean basin into a shallower
environment is where the coastal
morphology and nature of the nearshore
environment affects the tsunami, and its
potential for sediment transport. As a tsunami approaches the coast, the increasing amplitude results in increased seabed sediment suspension, cutting a basal erosion surface.
The amount of sediment available for
transportation at this point in the process
has a significant impact upon the tsunami
deposit, as a nearshore zone which is barren
may result in only a minor trace of the
tsunami’s passage being left (Coleman
1978). For example, there would be a
greater volume of sediment transport from
the accumulation of unlithified sediment
found on a continental shelf compared to
volcanic flanks, resulting in a larger
deposit. The method of arrival of the
tsunami also influences sediment transport.
As mentioned earlier, Morton et al.
identified three forms of initial perturbation
which will each affect transport differently.
For example, a recession and drawdown of
the sea surface and will likely result in
increased erosion and onshore
transportation during run-up in comparison
to a continuous surge, due to the exposure
of nearshore sediment during recession.
Once the decrease in water depth and
velocity has occurred, and the flow has
become turbulent, the movement of water
and sediment in a landward direction slows
to around 10-20m/s, which Nanayama et al.
found to be the average run-up velocity
onto the coast. This is a considerable
reduction from the 220m/s recorded in the
past (MacInnes et al.). Even at such reduced
velocities, Ye et al. established it is possible
for tsunamis to transport a full range of
sediments from fine clays to large boulders.
As the tsunami flow moves landward its
velocity reduces further to around 5m/s as
it appears to resemble a ‘tide-like flood’
(Nanayama et al.). As velocity displays a
negative correlation with distance inland,
the capacity of the water to transport coarse
sediment also decreases, resulting in
deposition of larger sediments nearer the
shoreline, with only the finer sediments
transported further inland.
Sediments can be transported over distances greater than a kilometre, and have even been recorded up to 30km inland at sites located along the west coast of Australia (Dawson et al.). Figure 2
demonstrates this relationship in tsunami
deposits from the September 2009 tsunami
which struck the U.S. territory of American
Samoa. Sediment thickness is greatest near
the shoreline, where velocity decreases and
larger sediments can no longer be transported, leading to deposition. The finer sediments carried in suspension, however, can be transported around 250m inland before velocity decreases enough for deposition to occur.
Figure 2: Cross-shore distribution of
sediment thickness plotted against distance
from shoreline for the American Samoa,
September 2009 South Pacific Tsunami
(Apotsos et al.)
Dawson et al. also found that run-up often
caused widespread deposition of large
boulders; the 1992 Flores tsunami led to the
deposition of numerous coral boulders in
the backshore environment, which caused
significant destruction. Evidence from Paris
et al. supports this, as it was found that
more than 80% of transported boulders
were found more than 100m inland after the
2004 Indian Ocean tsunami. It was also
found that the boulders transported from
offshore and deposited inland represent
only 7% of the boulders moved during the
tsunami, demonstrating the sheer force of
tsunami run-up.
BACKWASH
After the tsunami has reached its maximum
point of inundation, the subsequent
backwash flow occurs. Although little is
known about the hydraulics of the
backwash flow, eyewitness reports from the
2004 Indian Ocean tsunami suggest
‘exceptionally high flow velocities’
(Dawson & Stewart) and Nanayama et al.
estimate the outflow velocity of the 1993
Hokkaido tsunami at around 2.3 m/s. The
erosive and carrying capacity of these flows
is unknown, with the only indicators being
the possible imbrication of clasts as well as
any terrestrial debris found seaward of its
source. Aalto et al. found that coastal topography and bathymetry often concentrate backwash into channelised flows, increasing erosive and carrying capacity, and even possibly ‘inducing corrosion and cavitation of bedrock platforms’. This was also strengthened by
Kon’no et al., who found that backwater
converging on topographic depressions led
to further erosion and deposition of
reworked sediments. Sediment previously agitated by the run-up flow is also considerably easier for the channelised backwash flows to transport, as it has only just been deposited, leading to seaward transport and deposition of these sediments in nearshore zones (Sugawara et al.).
Sugawara et al. attempted to quantify the
characteristics of the 2004 Indian Ocean
Tsunami backwash through a review of
previous studies of submarine
sedimentation by tsunamis. It is possible for
a backwash flow to transport a considerable
amount of terrestrial coastal material
seawards, and, therefore, the accumulation
of allochthonous sediments offshore gives
evidence for the effect of backwash flows.
The specific proxy studied was the change
in benthic foraminiferal assemblages,
which were found to have migrated
seaward alongside plant debris and other
terrestrial materials. They found that the
extent of sediment transport from backwash
did not reach as far as deeper water regions,
but remained constrained to nearshore and
offshore zones.
Figure 3: Schematic diagram showing the
interpreted mode of sedimentation by
backwash (Sugawara et al.).
Figure 3 displays the interpreted mode of
sedimentation by a tsunami backwash. The
primary rolling and suspension of sediment
results in bottom surface erosion as any
agitated sediment from run-up is also
transported into the nearshore to offshore
zones. An example of the large volumes of
sediment which these outflows are capable
of moving was seen in the 1983 Sea of
Japan tsunami, which was triggered by a 7.8
magnitude earthquake. Minoura and
Nakaya noted that the mass transport of
sediment occurred predominantly through
suspension and rolling, which led to the
accumulation of allochthonous materials in
nearshore to offshore zones, including the
bodies of some victims. Although large amounts of sediment can be transported seaward, the volume does not correspond to the amount previously eroded
and transported elsewhere. Paris et al.
estimated that in Lhok Nga, Indonesia, less
than 10% of the eroded sediments were
deposited inland, meaning very little can
become entrained within the backwash.
Little is known concerning the deposition
location of such eroded sediments, as
nearshore and offshore zones do not display
significant sea level rises, and sampling
deeper regions has not been possible.
GLOBAL DEPOSITION PATTERNS
As mentioned earlier, previous research has
encountered problems with identifying and
classifying tsunami deposits in the
geological record. Deposition occurs on
such a large scale, in terms of grain size and
locality, that researchers such as Scheffers
et al. have taken an inductive approach in
their endeavours to identify deposition
processes and relationships. The world map
in Figure 4 demonstrates
this approach through ‘considering a
tsunamigenic origin of unusual depositions
or geomorphological features in coastal
areas’, as inductive fieldwork is the most
reliable source for data in recent studies.
The main Atlantic tsunami deposits within
sedimentary records can be found in
Scotland, west Norway, the Caribbean, and
the southern coast of Portugal. In the
Mediterranean, deposits are located in
southern Italy, Cyprus, and the Aegean Sea,
whilst evidence in the Indian Ocean is so far
restricted to north-western Australia. The
Pacific Ocean displays the highest
frequency of onshore tsunamigenic
deposits, and is where approximately 80%
of tsunamis and 90% of earthquakes occur.
It is the most geologically active area of the
globe – hence its name, the ‘Pacific Ring of
Fire’ – due to the surrounding active plate
boundaries, resulting in the high abundance
of tsunami deposits.
It is possible for tsunamigenic sediments to
have been deposited during the run-up or
backwash processes. A study from Java
established that this sediment deposition
was frequently associated with ‘sediment
sheets that rise in altitude inland as tapering
sediment wedges’ (Paris et al.). Whether or
not such sediment was a result of run-up or
backwash is dependent upon characteristics
such as imbrications and sediment sources.
Dawson et al. established, whilst studying
the 1992 Flores tsunami, that tsunami run-
up may often cause widespread deposition
of large boulders. With this pattern of
deposition identified as having a
tsunamigenic source, they identified several
former tsunamis which have also been
associated with widespread boulder
deposition. For example, the coral reef at
Rangiroa, Tuamotu archipelago, in the
south-east Pacific, displays a boulder field
in which there is a progressive decrease in
size landwards. This characteristic deposit
can be seen to continue with run-up inundation, as shown in Figure 5.
Figure 4: World map displaying the
distribution of tsunamigenic deposits
according to an inductive approach (Scheffers
et al.).
Figure 5: ‘C’ shows the tsunami run-up height in relation to average topographic height (black line). ‘D’ shows the sediment thickness against distance inland for the same transect (Apotsos et al.).
Figure 5 demonstrates the decrease in sediment thickness in a landward direction as the inundation reaches farther, as was seen on American Samoa after the 2009 South Pacific tsunami. After breaching the shoreline, the sediment deposits thin immediately, from around 10 cm to less than 1 cm in thickness at the farthest point of deposition. Gelfenbaum et al.
studied the 1998 Papua New Guinea
tsunami, and found that, on average, the
farthest deposition was 40 m short of the
inundation point, accounting for 70% of the
inundation distance. This supports the
general theory that MacInnes et al. aimed to prove by calculating the average run-up and deposition extents of tsunamis. Their research yielded a figure which states that the average deposition distance is 90% of the inundation distance.
Due to the physical constraints of deep sea
sediment sampling, it has not been possible
to obtain a thorough model for the
deposition of sediments in the offshore and
deep sea zones. However, Sugawara et al.
were able to conclude, through the study of
benthic foraminifer redistribution after the
2004 Indian Ocean tsunami, that a large-
scale redistribution of sediments on the sea
floor akin to related terrestrial deposits did
not occur. Overall, their results found only
a slight landward migration within offshore
zones, suggesting that the main sediment
transport and deposition occurs on land
rather than in nearshore to deep sea areas.
Sugawara et al. therefore highlighted the important point that, although backwash plays a significant role, the main deposit occurs terrestrially and is laid down by run-up: ‘deposition by tsunami run-ups is prominent in coastal lowlands, and deposition by tsunami backwashes is evident in nearshore to offshore zones’ (Sugawara 2009).
CONCLUSIONS
Although rare in human history, within the
geological record catastrophic tsunamis
occur frequently. The three main methods
of triggering a tsunami are landslides,
bolide impacts and earthquakes.
Sediment transport can be divided into two
principal flows: run-up and backwash. The
former has an average velocity of
approximately 10-20 m/s, deposits the
majority of sediment, and is what most
people associate with a tsunami; the latter
has an average velocity of around 2-3 m/s
and transports significantly less sediment
seaward.
Future research should attempt to sample
offshore to deep sea tsunamigenic
sediments in an attempt to understand the
processes which occur in such areas. In
locations at the greatest risk, such as the
Indonesian and Japanese archipelagos,
continuous pressure, seismicity,
temperature and other proxy indicators
should be used to improve the accuracy of
prediction. This will also help in
understanding the sedimentary processes
involved. Given the high variability in the
nature of tsunami sediments, Dawson states
that ‘solving this particular problem is a
priority for future research’.
REFERENCES
CHENG, W., WEISS, R. 2013. On sediment extent and runup of tsunami waves. Earth and Planetary Science
Letters, 362, 305-309.
DAWSON, A., SHI, S. 2000. Tsunami Deposits. Pure and Applied Geophysics, 157, 875-897.
DAWSON, A., STEWART, I. 2007. Tsunami Deposits in the Geological Record. Sedimentary Geology, 200, 166-183.
GELFENBAUM, G., JAFFE, B. 2003. Erosion and Sedimentation from the 17 July 1998 Papua New Guinea Tsunami. Pure and Applied Geophysics, 160, 1969-1999.
GOODMAN-TCHERNOV et al. 2009. Tsunami Waves Generated by the Santorini Eruption Reached Mediterranean Shores. Geology, 37, 943-946.
MACINNES et al. 2009. Tsunami Geomorphology: Erosion and Deposition from the 15 November 2006 Kuril Island Tsunami. Geology, 37, 1043-1046.
MORTON, R., GELFENBAUM, G., JAFFE, B. 2007. Physical criteria for distinguishing sandy tsunami and storm deposits using modern examples. Sedimentary Geology, 200, 184-207.
NANDASENA, N., TANAKA, N., SASAKI, Y., OSADA, M. 2013. Boulder transport by the 2011 Great East Japan tsunami: Comprehensive field observations and whither model predictions? Marine Geology, 346, 292-309.
PARIS, R., FOURNIER, J., POIZOT, E., ETIENNE, S., MORIN, J., LAVIGNE, F., WASSMER, F. 2009.
Boulder and fine sediment transport and deposition by the 2004 tsunami in Lhok Nga (western Banda Aceh,
Sumatra, Indonesia): A coupled offshore–onshore model. Marine Geology, 268, 43-54.
PHANTUWONGRAJ, S., CHOOWONG, M., NANAYAMA, F., HISADA, K., CHARUSIRI, P.,
CHUTAKOSITKANON, V., PAILOPLEE, S., CHABANGBON, A. Coastal geomorphic conditions and styles
of storm surge washover deposits from Southern Thailand. Geomorphology, 192, 43-58.
SCHEFFERS, A., KELLETAT, D. 2003. Sedimentologic and geomorphologic tsunami imprints worldwide—a
review. Earth Science Reviews, 63, 83-92.
SUGAWARA, D., MINOURA, K., NEMOTO, N., TSUKAWAKI, S., GOTO, K., IMAMURA, F. 2009.
Foraminiferal evidence of submarine sediment transport and deposition by backwash during the 2004 Indian
Ocean tsunami. Island Arc, 18, 513-525.
IMAMURA, F., GOTO, K., OHKUBO, S. 2008. A numerical model for the transport of a boulder by tsunami.
Journal of Geophysical Research, 113, 1-12.
FITTING SPECTRAL ENERGY DISTRIBUTIONS FOR BROWN DWARF DISCS IN UPPER SCORPIUS
BY M. READ, L. IRELAND, & N. MAYNE

ABSTRACT
We present parameters derived through
fitting of literature fluxes, covering a large
wavelength range, to simulated spectra for
brown dwarfs with and without discs in
Upper Scorpius. Our models include the
contribution of accretion flux to the
photospheric (surface) emission.
Comparisons of our results with previous studies neglecting accretion flux show that such studies systematically overestimate the mass of the brown dwarf, as the photospheric emission must otherwise be increased to account for the contribution of the accretion flux. Our
non-disc models derived a distance and age
to Upper Scorpius of 138±7pc and
5.5±1.0Myr respectively, agreeing with
previously derived values of 140±20pc and
5±2Myr. Thus we conclude treatment of
accretion flux is vital when modelling the
spectra of brown dwarf objects with discs.
INTRODUCTION
Several studies in recent years have used
radiative transfer models to fit observations
of young brown dwarfs (BDs) with
circumstellar discs[1][2]. BDs are objects not massive enough to ignite sustained fusion in the core, relying instead on convection currents for energy transport. Evidence is mounting
that these objects have significant
similarities with higher mass classical T
Tauri stars, where matter is accreted onto the
star along magnetic field lines from the
truncated inner edge of a dusty
circumstellar disc[3][4]. The observations,
and resulting fits to the spectral energy
distributions (SEDs), which measure flux
as a function of wavelength, show that
dusty circumstellar discs can exist around
these young pre-main sequence
stars[5][6][7][8]. However, these discs are
challenging to observe and disentangle
from the stellar emission. Mayne et al.[2]
investigated BD discs in the Taurus region,
highlighting the necessity for fluxes to
cover a broad wavelength range to begin
deriving robust disc and stellar parameters.
Additionally, Mayne et al.[2] highlighted the
importance of consistently accounting for
the flux emitted by accreting (infalling)
matter, which has been shown to
significantly alter photometric observations
of young low-mass stars[9].
BDs in the Upper Scorpius (UpSco) region, at a distance of 145±20pc and an age of ∼5Myr, have recently been surveyed and the data presented in a number of publications[5][6][7][8]. The youth of UpSco and the lack of intervening dust between the observer and the region (also called extinction, Av) mean the BD population is easier to observe than in older regions with larger extinctions.
In this paper, we gathered and input
existing literature data into a sophisticated
SED fitting tool[2]. With a large wavelength
coverage of many BD disc candidates, the
data was applied to an associated grid of
theoretical BD models. The derived
parameters could then be compared to those
of external authors.
The associated grid of models and photometry is freely available online (http://bd-server.astro.ex.ac.uk/).
DATA
Upper Scorpius (UpSco) is one of three
subgroups belonging to the Scorpius
Centaurus region[10]. At a distance of
∼145pc[10], UpSco shows evidence of
recent or ongoing star formation. It is
relatively free of extinction, AV ≤2[11], and
is a young region at ∼5±2Myr[12]: ideal
conditions for the possible detection of
post-formation BDs associated disc
structures.
UpSco has been extensively probed in the
search for BDs and other very low mass
candidates (≤0.35M⊙ i.e. ≤0.35 solar masses)[6][7][8][13][14][15][16], with
observations covering the near to far
infrared, allowing the separation of
photospheric (surface) and disc emission
components in a given spectrum.
Photometry & Uncertainties
We selected 86 objects from UpSco,
covering a population of BDs and very low
mass stars in the mass range of
∼0.01−0.35M⊙. Photometric magnitudes
(or fluxes) for these objects were sourced
from multiple samples and divided into four
sub-samples according to the source author,
each with their own combination of filter
systems. For the sake of brevity, we refer to
each sub-sample according to the lead
author of the data source. Therefore,
Slesnick et al. (2006)[6], Slesnick et al.
(2008)[13] and Riaz et al. (2009)[14] are
referred to by SL2006, Scholz et al.
(2007)[8] by SC2007, Carpenter et al.
(2006)[15] and Carpenter et al. (2009)[16] by
CA2006, and Lodieu et al. (2006)[7] by
LO2006.
To investigate disc structure in our sample,
we use a wide spectral coverage of
photometric fluxes from ∼1−70μm, taken
from the UKIDSS, DENIS, SDSS, WISE
and 2MASS mission surveys, and using the
IRAC, IRS and MIPS instruments (the
acronyms of surveys/instruments are
unimportant). We adopt an uncertainty of
0.2mag for all instruments, apart from MIPS, for which we adopt 0.3mag. This will account for any
temporal variability in the
observations[6][7][8][13][14][15][16].
Uncertainties correspond to ~1-5% of the
original data points[6][7][8][13][14][15][16] .
Parameters & Constraints
Assuming that the UpSco association has an approximately spherical shape, the intrinsic spread of distances is ±20pc[5] about the mean distance of 145pc[10]; we therefore adopt 145±20pc as our distance range.
We use an extinction range 0−2mag[7].
Slesnick et al. (2008)[13] and Scholz et al.
(2007)[8] included their own derived stellar
parameters for each object in their samples.
We use these derived parameters to inform
our fitting process, essentially adopting
them as weak constraints.
THEORY
Flux Components
Wide spectral coverage of fluxes in BD/very low mass star observations is vital for separating the photosphere from any potential infrared excess indicating a disc. Most previous SED fits of BD and very low mass objects have treated them like higher-mass main sequence objects, with parameters such as stellar mass and radius arbitrarily modified to produce a best-fit model[8]. This has the disadvantage of lacking any physical coupling between parameters, with derived results being entirely statistical. The technique has nevertheless been widely successful in deriving stellar parameters for naked objects[1]. When fitting objects with discs, it is also common, in order to reduce the number of free parameters, to assume accretion to be negligible[8].
Accretion flux is produced in BD disc
systems as material flows from the inner
disc along truncated magnetic field lines to
hot spots on the surface of the photosphere.
Hot spots can provide the primary source of
flux from an object[17][18]. Hence, a
treatment of accretion flux is necessary to
obtain a more realistic SED.
For example, one may overestimate the mass of an object if accretion is not considered, as the combined flux of the photosphere and the accretion hot spots may be misinterpreted as purely intrinsic stellar luminosity.
We make use of a wide range of wavelength
observations and fit to the models described
in Mayne & Harries (2010)[9], which
include detailed disc physics such as
accretion, dust sublimation, disc flaring,
and vertical hydrostatic equilibrium, using
the fitting techniques and tools of Mayne et
al. (2012)[2]. We summarise the modelling
and fitting procedure below, with a detailed
description given by Mayne & Harries
(2010)[9] and Mayne et al. (2012)[2],
respectively.
Disc Structure/Model
We use a combination of DUSTY00 stellar
interior models and AMES-Dusty
atmospheric models[19], with photospheric
fluxes calculated across a grid of masses
and ages through interpolation over
surface gravity, temperature, radius, and
surface luminosity[9].
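To make this concrete, the sketch below shows the kind of grid interpolation involved, using scipy with a small hypothetical grid; the axes and placeholder values are purely illustrative, not the DUSTY00/AMES-Dusty tables themselves.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical grid axes: mass (M_sun) and age (Myr). Each node
# carries a tabulated photospheric quantity (here an effective
# temperature placeholder), as it would from the interior and
# atmosphere models.
masses = np.linspace(0.01, 0.08, 8)
ages = np.linspace(1.0, 10.0, 10)
teff_grid = 2500.0 + 500.0 * np.outer(masses / 0.08, ages / 10.0)

interp = RegularGridInterpolator((masses, ages), teff_grid)
print(interp([[0.035, 5.0]]))  # Teff at an off-grid (mass, age) point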
We model accretion flux as blackbody
emission:

$$L_{\rm acc} = \frac{G M_* \dot{M}}{R_*}\left(1 - \frac{R_*}{R_{\rm inner}}\right)$$

where $L_{\rm acc}$ is the accretion luminosity,
$M_*$ is the stellar mass, $\dot{M}$ is the mass
accretion rate, $R_*$ is the stellar radius and
$R_{\rm inner}$ is the inner disc radius boundary.
We subsequently constrain this flux to a
fractional hotspot area $A$ and assume an
effective hotspot temperature:

$$T_{\rm acc} = \left(\frac{L_{\rm acc}}{4\pi R_*^2 \sigma A}\right)^{1/4}$$
This temperature is used to construct the
accretion flux as a blackbody, and
combined with the modelled photosphere to
produce a final flux distribution of the
object.
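As a minimal numerical sketch of this construction (the physical constants are standard; the brown dwarf values are purely illustrative, not fitted parameters):

import numpy as np

G, sigma = 6.674e-11, 5.670e-8          # SI units
M_sun, R_sun, yr = 1.989e30, 6.957e8, 3.156e7

M_star = 0.05 * M_sun                   # illustrative stellar mass
R_star = 0.5 * R_sun                    # illustrative stellar radius
R_inner = 3.0 * R_star                  # illustrative inner disc radius
M_dot = 1e-10 * M_sun / yr              # illustrative accretion rate, kg/s
A = 0.01                                # illustrative fractional hotspot area

# L_acc = (G M* Mdot / R*) (1 - R*/R_inner)
L_acc = G * M_star * M_dot / R_star * (1.0 - R_star / R_inner)

# Effective hotspot temperature from L_acc = 4 pi R*^2 sigma A T^4
T_acc = (L_acc / (4.0 * np.pi * R_star**2 * sigma * A)) ** 0.25

# Planck spectral radiance B_lambda(T), used to construct the
# blackbody accretion flux at T_acc
def planck(wavelength, T):
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    x = h * c / (wavelength * k * T)
    return 2.0 * h * c**2 / wavelength**5 / np.expm1(x)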
Sublimation of material at the inner disc
occurs at temperatures ∼1500K due to
photospheric heating[20], causing the
geometry of the inner disc to become
curved rather than a vertical wall[21]. This
can strongly affect the observed infrared
excess, with curved geometries showing
reduced inclination dependence[22]. We
include treatment for dust sublimation
using TORUS, a radiative transfer and
radiation-hydrodynamics code[23], detailed
in Simulating SED. We do not include
models of any other disc clearing
mechanisms, for instance clearing by
planets, or ablation due to flux from nearby
stars.
Simulating SED
Radiative transfer models using TORUS[23]
have been run for various combinations of
parameters, producing a grid of models
from which the corresponding SED can be
derived (see Mayne & Harries (2010)[9] for
the full range of parameters; the model grid
is available at http://bd-server.astro.ex.ac.uk/).
Simulating Fluxes & Magnitudes
To derive photometric magnitudes and
colours, an SED at a given inclination is
folded through the different filter responses
of the required photometric system[2]. The
models were calibrated using Vega or the
associated instrument handbook, then
diluted for distance and extinction.
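A hedged sketch of this folding step is given below; the SED, filter curve and zero point are assumed inputs (hypothetical arrays), and the exact weighting convention may differ between photometric systems.

import numpy as np

def synthetic_magnitude(wl, f_lam, filt_wl, filt_resp, zero_point_flux):
    """Fold an SED (f_lam on wavelength grid wl) through a filter
    response curve and convert the band-averaged flux to a
    magnitude using the filter zero point (from Vega or the
    instrument handbook)."""
    resp = np.interp(wl, filt_wl, filt_resp, left=0.0, right=0.0)
    band_flux = np.trapz(f_lam * resp * wl, wl) / np.trapz(resp * wl, wl)
    return -2.5 * np.log10(band_flux / zero_point_flux)

def dilute(f_lam, distance_pc, A_lam, ref_pc=10.0):
    """Scale a model SED for distance (inverse-square relative to a
    reference distance) and apply extinction A_lam in magnitudes."""
    return f_lam * (ref_pc / distance_pc) ** 2 * 10.0 ** (-0.4 * A_lam)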
RESULTS & ANALYSIS
We present SED models for a sample of 80
objects, excluding six objects for which the
best fits were unsatisfactory. These were fit
with a distance range 145±20pc and an
extinction range 0−2mag. First, we fit our
entire sample using a model grid including
detailed treatment of disc physics[2] (dust
sublimation, accretion flux, etc.; see Disc
Structure/Model), with a mass range of
0.01−0.08M⊙ (∆0.01M⊙) and ages between
1Myr and 10Myr. Where evidence for a disc
was lacking, i.e. zero disc mass or no
infrared excess, an object was identified as naked,
and subsequently fit using semi-empirical
modelling[2]. This includes an extended
range for photospheric specific parameters
such as mass and age (see Naked Objects).
The graphs are displayed at the end of this
paper. Figures 1 and 2 respectively show
our best fitting models for naked and disc
systems, while Figures 3 and 4 respectively
show our least satisfactory fits for naked
and disc objects. Our naked objects produce a
mean age of 5.5±1.0Myr with a mean
distance of 138±7pc. Disc objects were
omitted from these calculations, because
the grid for actively accreting systems has
just two available age parameters (1Myr
and 10Myr).
Below, we split our analysis into the
modelling of naked and disc objects.
Naked Objects
Our sample contains 41 naked objects.
Assumed to have negligible accretion flux,
these were fitted using a model with an
extended mass and age range of
0.01−1.40M⊙ (∆0.01M⊙) and 1−10Myr
(∆1Myr) respectively. This modelling
technique uses a static vertical structure
and inner edge location, placing the inner
disc edge at a fixed, semi-empirically
derived dust sublimation radius[2]. Naked
candidates were found in the SL2006 and
LO2006 sub-samples, with a selection of
the best and worst fits shown in Figure 1
and Figure 3 respectively.
Out of 25 objects in SL2006, 19 were found
to be naked systems. These have previously
derived masses, ages and extinctions[13].
Literature masses agree for 14 objects,
with the remaining four objects falling
outside the range of our uncertainty limits
by 0.01M⊙. Previously derived age
estimates were unavailable for two of our
21 objects, with a further six disagreeing
with Slesnick et al. (2008)[13]. We speculate
that this could be a result of Slesnick et al.
(2008)[13] fitting for age and mass using the
region as a whole, compared with using
object specific models, although further
investigations would need to be made to
confirm this. Our calculated extinctions
show strong correlation with those from
Slesnick et al. (2006)[6], with just four
objects showing disagreement within
uncertainties. However, these lie within a
range AV≤2mag, assumed by Lodieu et al.
(2006)[7].
The LO2006 sub-sample of 32 contained 21
naked objects. Unlike the SL2006 sub-
sample, these objects do not have
previously derived parameters. Masses
were constrained during the fitting process
at an upper mass limit of 0.4M⊙, indicating
that all our objects do indeed lie within or
close to the 0.01−0.35M⊙ range as defined
by Lodieu et al. (2006)[7]. Our best-fit
extinctions, however, show that four
objects sit at our upper limit of
AV=2mag. Due to the
inherent degeneracy of multiple
parameters, we are unable to prove that one
parameter is responsible (e.g. a possible
systematic age underestimate). However,
one could argue that it is plausible for
extinctions ≥2mag to exist in this region;
Slesnick et al. (2006)[6] showed them to be
∼3mag in some cases. In the future, objects
will be fit using an extended extinction
range.
Disc Objects
Disc candidates were found in all four sub-
samples, totalling 39 objects, with SC2007
and CA2006 entirely consisting of objects
with infrared excess. We note that
satisfactory fits could not be achieved for
the following: SL2006: SCH16093018-
20595409, SCH16224384-19510575,
CA2006: [PBB2002] Usco J161115.3-
175721, [PBB2002] Usco J160827.5-
194904 and LO2006: J163919.07-
253406.8. These objects were removed
from the sample; justifications for this will
be given as they arise. All disc fits are
present in Figure 2, with two unsuitable fits
seen in Figure 4.
Out of the entire SL2006 sub-sample, we fit
four disc systems. A disc fraction for
objects in this region was reported as
$10.7^{+8.7}_{-3.3}\%$[14], agreeing with our fraction
for the whole sample of 16%. Due to the age
constraints of the model treating accretion
flux, age comparisons with available
literature values were not possible. We
instead compare derived masses and
extinctions with those from Slesnick et al.
(2006, 2008)[6][13]. Literature masses correlate well
with our data, excluding SCH16263026-
23365552, where a difference of 0.01M⊙ outside the uncertainty range was
calculated. We suggest that this could be
due to disadvantages in fitting masses using
region specific fitting[24], as
SCH16263026-23365552 (Figure 2(a))
represented one of our best-fitting models.
As stated earlier, we could not find
satisfactory fits for SL2006:
SCH16093018-20595409 and
SCH16224384-19510575. SCH16093018-
20595409 was found to have excess
emission in MIPS 24μm, with emission at
shorter wavelengths originating from the
photosphere[14]. A possible explanation is
that this is a transition disc system, with an
inner hole in the disc too large for our fitting
techniques to model. Slesnick et al.
(2006)[6] postulates that SCH16224384-
19510575 could be an unresolved binary,
being overluminous and apparently younger. This
agrees with our analysis, as, although a
satisfactory fit could not be achieved when
compared with other objects, the best fit
calculated was one where the mass was
significantly higher than any surrounding
system in this sub-sample (0.2M⊙).
Objects in the SC2007 sub-sample were
chosen if they had infrared excess in the
literature[8]. Our models support these
observations, with 13 systems found to
include a disc. Direct comparisons were
made with parameters derived by Scholz et
al. (2007)[8], as they also use SED
modelling. However, this fitting technique
neglected accretion rate, unlike our models.
We find differences in derived
masses, photospheric temperatures and
stellar radii. Out of 13 objects, 11 have
photospheric temperatures significantly
reduced when compared with values from
Scholz et al. (2007)[8]. However, we find
that changes in other parameters
compensate for this loss in apparent
photospheric luminosity, either through a
decrease in mass or, most noticeably, a high
accretion temperature, for six of our 11
objects. This suggests that accretion
luminosity contributes significantly to the
overall flux distribution of our objects.
Therefore, Scholz et al. (2007)[8] may have
overestimated masses due to the lack of
treatment of accretion flux.
Similar to the SC2007 sub-sample, all
objects in CA2006 were chosen due to
showing infrared excess. Our models again
confirm this, with discs present for all
objects. The majority of fits in this sub-
sample have an extinction of zero.
However, our sub-samples all differ in
location within UpSco, thus variations in
extinctions within 0−2mag are expected.
We found 10 objects out of 31 in LO2006
to be disc systems. Comparisons with
previous measurements were limited, due
to the lack of derived parameters in
literature. We note, however, that our
models favour larger disc radii with smaller
masses, suggesting that UpSco may host
systems with evolved, diffuse disc
structures, as outlined in the Data section.
SUMMARY & CONCLUSIONS
We use sophisticated models, including
detailed disc physics (the treatment of
accretion, dust sublimation, disc flaring
and vertical hydrostatic equilibrium), to
fit SEDs to 80 members of
the UpSco region. We fit 41 naked and 39
disc objects, producing a mean distance of
138±7pc and a mean age of 5.5±1.0Myr
from the naked objects, agreeing with the
previously derived distance of 145±20pc
and 5±2Myr. We highlight the importance
of treating non-negligible accretion flux in
SED models for disc objects, as this was
found to significantly reduce derived
masses when compared with models that
concentrated on fitting systems without
accretion contributions.
GRAPHS
Figure 1: Best-fit
spectra for eight of our
naked objects. Black
crosses represent
external observational
magnitudes, with red
circles as the
corresponding model
value according to our
fitting tool. The
triangle for
J162725.52-213804.0
represents an upper
limit.
Figure 2: Best-fit spectra
for eight disc objects.
Objects shown are those
producing the best fitting
statistical values, with
black crosses showing
observation fluxes and red
circles showing the
corresponding model flux.
Figure 3: Best-fit
spectra for two examples
where satisfactory fits
could not be obtained.
Slight excess in WISE 4
indicates the possibility
of disc structure;
however, our model is
currently not sensitive
enough to represent it.
Figure 4: Best-fit spectra
for two disc objects,
where satisfactory fits
could not be obtained.
High photospheric flux
and mid/far-IR excess in
observational data
indicates the possibility
of transition discs[14],
which our fitting
technique is currently
unable to model.
REFERENCES
[1] Guieu S., Pinte C., Monin J.-L., Ménard F., Fukagawa M., Padgett D. L., Noriega-Crespo A., Carey
S. J., Rebull L. M., Huard T., Guedel M., 2007, A&A, 465, 855
[2] Mayne N. J., Harries T. J., Rowe J., Acreman D. M., 2012, MNRAS, 423, 1775
[3] Jayawardhana R., Ardila D. R., Stelzer B., Haisch Jr. K. E., 2003, AJ, 126, 1515
[4] Mohanty S., Jayawardhana R., Natta A., Fujiyoshi T., Tamura M., Barrado y Navascués D., 2004,
ApJL, 609, L33
[5] Preibisch T., Brown A. G. A., Bridges T., Guenther E., Zinnecker H., 2002, AJ, 124, 404
[6] Slesnick C. L., Carpenter J. M., Hillenbrand L. A., 2006, AJ, 131, 3016
[7] Lodieu N., Hambly N. C., Jameson R. F., 2006, MNRAS, 373, 95
[8] Scholz A., Jayawardhana R., Wood K., Meeus G., Stelzer B., Walker C., O’Sullivan M., 2007, ApJ,
660, 1517
[9] Mayne N. J., Harries T. J., 2010, MNRAS, 409, 1307
[10] de Zeeuw P. T., Hoogerwerf R., de Bruijne J. H. J., Brown A. G. A., Blaauw A., 1999, AJ, 117, 354
[11] Walter F. M., Vrba F. J., Mathieu R. D., Brown A., Myers P. C., 1994, AJ, 107, 692
[12] Preibisch T., Zinnecker H., 1999, AJ, 117, 2381
[13] Slesnick C. L., 2008, PhD thesis, California Institute of Technology
[14] Riaz B., Lodieu N., Gizis J. E., 2009, ApJ, 705, 1173
[15] Carpenter J. M., Mamajek E. E., Hillenbrand L. A., Meyer M. R., 2006, ApJL, 651, L49
[16] Carpenter J. M., Mamajek E. E., Hillenbrand L. A., Meyer M. R., 2009, ApJ, 705, 1646
[17] Bouvier J., Covino E., Kovo O., Martin E. L., Matthews J. M., Terranegra L., Beck S. C., 1995, A&A,
299, 89
[18] Herbst W., Eislöffel J., Mundt R., Scholz A., 2007, Protostars and Planets V, pp 297–311
[19] Chabrier G., Baraffe I., Allard F., Hauschildt P., 2000, ApJ, 542, 464
[20] Kobayashi H., Kimura H., Watanabe S.-i., Yamamoto T., Müller S., 2011, Earth, Planets, and Space,
63, 1067
[21] Dullemond C. P., Monnier J. D., 2010, ARA&A, 48, 205
[22] Tannirkulam A., Harries T. J., Monnier J. D., 2007, ApJ, 661, 374
[23] Harries T. J., 2000, MNRAS, 315, 722
[24] Mayne N. J., Naylor T., 2008, MNRAS, 386, 261
A STUDY INTO THE FEASIBILITY OF USING MACHINE
LEARNING TO EVALUATE THE SEVERITY OF
CORONARY ARTERY DISEASE
BY H. BOLT
ABSTRACT
Coronary heart disease is where
atherosclerosis occurs in the coronary
arteries. The aim of this project was to
investigate the possibility of using
Computational Fluid Dynamics (CFD) to
train an Artificial Neural Network (ANN)
to predict the pressure drop across an
idealised stenosis in a coronary artery. A
CFD model was created to represent an
idealised stenosis and the model creation
and analysis were automated to provide
data to train an ANN. Initially a Radial
Basis Function (RBF) network was trained
on the generated data; however, this was
unsuccessful, so a Multilayer
Perceptron (MLP) network was tried
instead. The MLP network was more
successful than the RBF network, and was
able to learn the training data with an
average test error of 5% (when using 5-10
hidden units and a weight-decay coefficient
of 0.3). There were signs of instability with
the MLP network however, which was most
likely caused by a lack of training data.
Further work would be needed in order to
fully automate the data creation; this would
enable the significant increase in training
data that would almost certainly improve
the performance of the MLP network.
Heather Bolt would like to thank Dr. Gavin
Tabor for supervising her project. She
would also like to acknowledge Prof.
Richard Everson for his assistance, and to
thank Shenan Grossberg, Matt Berry, and
David Tranter for their help. Her work was
funded by the Engineering and Physical
Sciences Research Council (EPSRC).
INTRODUCTION
Coronary Heart Disease
Atherosclerosis is a condition where fatty
deposits (such as cholesterol) accumulate
inside arteries. This narrows the arteries and
consequently impedes the blood flow.
Coronary heart disease is where
atherosclerosis occurs in the coronary
arteries (see Figure 1[1]). Coronary heart
disease is the most common cause of
myocardial ischemia, which occurs when
there is a decreased supply of oxygen to the
heart muscle caused by a decrease in blood
flow to the heart. Myocardial ischemia can
lead to a number of complications including
chest pain, irregular heart rhythm, heart
failure, and heart attack.
Figure 1: A coronary artery affected by
atherosclerosis[1].
Fractional Flow Reserve (FFR)
The fractional flow reserve is a
dimensionless quantity used to measure the
severity of myocardial ischemia in a
patient. It is a measure of the pressure drop
across the stenosis in the coronary artery
and is given as:
$$\mathrm{FFR} = \frac{P_b}{P_a} \qquad (1)$$

where $P_b$ is the pressure after the lesion
and $P_a$ is the pressure before the lesion.
In general, if
the value of FFR is less than 0.8, then the
obstruction is severe enough to cause
myocardial ischemia, and medical
intervention is required. Currently, the FFR
is determined experimentally; a catheter
with a transducer pressure sensor at the tip
is inserted into an artery in the groin and
then fed to the heart to measure the pressure
before and after the stenosis (blockage) in
the coronary artery. This procedure is
unpleasant for the patient and not without
risk; hence, there is currently research into
alternative ways of determining the FFR for
a coronary artery.
One non-invasive method of measuring the
FFR involves using Computational Fluid
Dynamics (CFD). CFD uses a computer to
solve the equations that govern fluid flow in
order to model the behaviour of fluids. In
this process, the geometry for the CFD
model is taken from an MRI scan, the blood
flow through the diseased artery is
simulated, and the pressure drop is
calculated. However, a certain level of skill
is required to carry out a CFD analysis, and
it can be computationally expensive.
Machine Learning
Due to the high cost of CFD (in terms of
both time and money), it is necessary to
investigate quicker and cheaper non-
invasive ways of measuring the FFR. One
possible method is to train a machine
learning system to predict the FFR value
across a stenosis when given a set of input
parameters. Machine learning is a branch of
computer science which involves training a
computer to perform a task (e.g. filtering
spam emails, predicting the weather,
recognising handwriting) without being explicitly
programmed. One type of machine
learning system is an Artificial Neural
Network (see Figure 2[3]).
Figure 2: Schematic diagram of a feed-
forward artificial neural network[3].
An artificial neural network (ANN) is a
system of interconnected neurons (or
nodes) that imitates a biological neural
network. These nodes ‘can be seen as
computational units that receive inputs and
process them to obtain an output’[4]; the
connections between these nodes are
weighted and determine how the
information is fed through the network. The
ANN is trained by providing a set of inputs
and outputs to the network, called the
training data. The error between the
computed output and the actual output is
then minimised. Once the network has
learned the data, it can be used to predict the
outputs for inputs that were not in the
original training data. One way of
improving the accuracy of the network is to
increase the number of nodes in the hidden
layer (called hidden units). However, if
there are too many hidden units then the
network will learn each point individually
instead of learning the general trend of the
data. This is known as overfitting and is
shown in Figure 3[5].
Figure 3: Graphs demonstrating too few
hidden units (left), the correct number of
hidden units (centre), and too many hidden
units (right)[5].
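As a hedged illustration of these ideas (scikit-learn standing in for the Netlab toolkit actually used in this project, with toy data in place of the CFD results):

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(100, 6))     # six inputs, like the stenosis parameters
y = 0.6 + 0.4 * X[:, 1]            # placeholder target in the FFR range

# hidden_layer_sizes sets the number of hidden units; alpha is a
# weight-decay (regularisation) coefficient that penalises large
# weights and so discourages overfitting.
net = MLPRegressor(hidden_layer_sizes=(10,), alpha=0.3,
                   max_iter=5000, random_state=0)
net.fit(X, y)
print(net.predict(X[:3]))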
The aim of this project is to investigate the
possibility of using CFD to train an
Artificial Neural Network to predict the
pressure drop across an idealised stenosis in
a coronary artery.
METHODOLOGY
This project involved three main phases.
The first phase was to create a CFD model
to represent an idealised stenosis. The
second was to automate the CFD model
creation and analysis to provide data to train
the ANN, and the third was to set up and
train an ANN.
The CFD Model
A CFD model was created to replicate an
idealised stenosis. The CFD software used
in this project was ANSYS FLUENT[6].
Several different geometries for the CFD
model were tried. The final geometry was
chosen by comparing the FFR values given
by FLUENT with experimental FFR
values. The experimental data consisted of
MRI scans and corresponding FFR values
for different stenoses. This anonymised
data was taken from patients at Derriford
Hospital. The final geometry of the
FLUENT model gave FFR predictions that
were 20–40% above the experimental
values. However, a deliberately simplified
CFD model was required in order to run
multiple simulations within the timeframe
project. The chosen FLUENT geometry
was deemed to be a suitable compromise
between accuracy and computational cost.
The chosen geometry for the CFD model is
shown in Figure 4 and Figure 5 (following
page). This model has: six variable
parameters; fixed inlet and outlet lengths
of 80mm and 100mm respectively; and a
fixed outlet diameter of 1.3mm. Figure 5
shows the different variable parameters. D1
is the initial diameter of the artery, D2 is the
smallest diameter of the artery (at the point
of maximum constriction) and D3 is the
recovery diameter. L1 is the length from the
onset of the stenosis to the worst point in
the stenosis, L2 is the length of maximum
constriction of the stenosis and L3 is the
length from maximum restriction to the
recovery diameter. The inlet length to the
stenosis is fixed at 80mm to ensure fully
developed flow at the stenosis. The outlet
length is 100mm to make sure the outlet is
at a sufficient distance from the stenosis so
it does not affect the pressure drop across
the stenosis. The outlet diameter of the
model is 1.3mm, a typical diameter
of the distal tip of the Left Anterior
Descending Artery.
Figure 4: CFD model of artery with idealised
stenosis.
Figure 5: Variable parameters for CFD
model of idealised stenosis.
Boundary Conditions & Assumptions
For this project, the fluid was defined as
blood with a density of 1050kg/m3 and a
viscosity of 0.004kg/ms. The boundary
conditions were defined as a velocity inlet
and pressure outlet, with values of 0.2m/s
and 0Pa respectively. The walls of the
model were rigid with no slip, as the effects
of vessel elasticity on the model were
assumed to be negligible. The flow was
steady state for simplicity, as accurately
modelling the pulsatile motion of blood
flow would be very difficult and
unachievable within the time frame of this
project. It was assumed that no heat transfer
was taking place between the vessel walls
and the blood flow. The Reynolds number
for the flow was much less than the
laminar/turbulent boundary value of 2,300
so the flow was set as laminar. A mesh
convergence study on the model indicated
that a mesh of around 100,000 cells would
be sufficient.
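The laminar assumption can be checked with the stated values; a rough sketch, taking the largest inlet diameter (which gives the highest Reynolds number):

rho = 1050.0   # blood density, kg/m^3
mu = 0.004     # blood viscosity, kg/(m s)
v = 0.2        # inlet velocity, m/s
D = 0.005      # largest inlet diameter D1, in metres

Re = rho * v * D / mu
print(Re)      # ~263, far below the laminar/turbulent boundary of 2300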
Automating the Process
The process of creating the geometry for the
model and running the analysis was
automated using the parametric analysis
function in FLUENT. It was decided to use
100 data sets to train the machine learning
system. Each data set consisted of the six
parameter values (D1, D2, D3, L1, L2, L3)
and a corresponding FFR value. The ranges
of the parameter values are shown in
Table 1 below. These ranges were chosen
from assessing the typical sizes of stenoses
in the experimental data.
2mm ≤ D1 ≤ 5mm
0.3·D1 ≤ D2 ≤ 0.9·D1
D2 ≤ D3 ≤ D1
1.5mm ≤ L1, L2, L3 ≤ 30mm
Table 1: Range of parameter values
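As a sketch of how 100 parameter sets satisfying these constraints might be drawn (the project used the parametric analysis function in ANSYS Workbench; this illustration only shows the sampling logic):

import numpy as np

rng = np.random.default_rng(42)
n = 100

D1 = rng.uniform(2.0, 5.0, n)            # mm
D2 = rng.uniform(0.3 * D1, 0.9 * D1)     # maximum-constriction diameter
D3 = rng.uniform(D2, D1)                 # recovery diameter
L1, L2, L3 = (rng.uniform(1.5, 30.0, n) for _ in range(3))

params = np.column_stack([D1, D2, D3, L1, L2, L3])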
Using the parametric analysis function in
ANSYS Workbench, 100 different models
were created, meshed, and analysed. This
formed the data sets for the machine
learning. The FFR value from FLUENT
was found by looking at the pressure vs.
distance graph for each stenosis: the values
for Pa and Pb were read from either side of
the stenosis, and Equation 1 was then used
to calculate the FFR
value. The pressure data for each stenosis
model was manually exported as a .csv file
from FLUENT (as a way to automate this
part of the process was not found). The .csv
file was then imported into Matlab[7] where
the FFR value was found using a Matlab
function written by Richard Everson.
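A sketch of this FFR extraction in Python is given below (the project used a Matlab function; the .csv column layout and the reference pressure added to the gauge values are assumptions for illustration):

import numpy as np

def ffr_from_trace(csv_path, stenosis_start, stenosis_end, p_ref=13000.0):
    """Read a (distance, gauge pressure) trace exported from the
    CFD run and compute FFR = Pb / Pa across the stenosis.
    FFR needs absolute pressures, so an assumed reference
    pressure p_ref (Pa) is added to the gauge values."""
    x, p = np.loadtxt(csv_path, delimiter=",", unpack=True)
    pa = p[x < stenosis_start][-1] + p_ref   # just before the lesion
    pb = p[x > stenosis_end][0] + p_ref      # just after the lesion
    return pb / pa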
Training the ANN
The Netlab[8] toolkit in Matlab was used to
select and train two Artificial Neural
Networks on the CFD generated data: a
Gaussian based Radial Basis Function
(RBF) and a Multilayer Perceptron (MLP).
In order to determine the correct number of
hidden units to use, the number of hidden
units was increased incrementally from 1,
and the training and test error was recorded.
The training error (the average error over
the training data) was found using the
commands rbftrain and mlptrain for the
RBF network and the MLP network
respectively. The test error (the average
prediction error over the independent test
data) was found using a technique known as
‘leave one out cross validation’. This
involves removing one point from the data
set and using the remaining points as the
training data. The ANN is trained on the
n−1 data points and the point that has been left
out is used to test the accuracy of the
network. This process is then repeated so
that each point in the data is used once.
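A minimal sketch of this procedure for a generic regressor (a helper reused in the alpha sweep shown later):

import numpy as np

def loo_test_error(make_model, X, y):
    """Leave-one-out cross validation: train on n-1 points, test
    on the held-out point, repeat for every point, and return
    the mean squared prediction error."""
    n = len(y)
    errs = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        model = make_model()
        model.fit(X[mask], y[mask])
        errs[i] = (model.predict(X[i:i + 1])[0] - y[i]) ** 2
    return errs.mean()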
RESULTS & DISCUSSION
As mentioned in Methodology, 100 data
sets were created to train the ANN. Initially
a Gaussian-based RBF network was trained
on the data with one to 11 hidden units (the
training of the network failed when using
more than 11 hidden units). Figure 6 is a
graph of the training error and test error
versus the number of hidden units for the
RBF. The test and training error is around
0.04; this is the squared error, and so
corresponds to a percentage error of around
20% (√0.04 = 0.2). This, together with the failure to run
at more than 11 hidden units, suggested that
there were serious problems with the initial
RBF.
Figure 6: Graph of test error and training
error vs. no. of hidden units for the initial
RBF.
To try to alleviate the problems with the
RBF, the individual error for each data set
was found, and five outliers were identified
and removed from the data. Next, the ‘best
error’ was found by initialising the network
several times (‘it is common practice to
train the same network many times’[9]) and
the smallest error was taken. However,
neither removing the outliers nor taking
the best error made any notable
improvement to the RBF.
As it was evident that there were serious
problems with the RBF, an alternative ANN (a
Multilayer Perceptron) was tried. The MLP
was trained on the generated data minus the
five outlying data sets. The coefficient of
weight decay (α) was fixed at 0.01 and the
MLP was trained using 1 to 80 hidden units
(see Figure 7 for the graph of results).
Figure 7: Graph of Test error and training
error vs. no. of hidden units for the initial
MLP.
Figure 7 shows that the error for the MLP is
lower than the RBF at the optimum number
of hidden units (around 10). However, at 10
hidden units the test error is still quite high
at around 0.01 (10%).
To try to reduce the error of the MLP, the
test error was plotted against the FFR value
to see if there was any correlation between
the error and the FFR value. This plot
showed that the data sets with the
highest error also have very low FFR
values. This is most likely because there is
a lack of data around the lower FFR values,
and so the network has difficulty learning
these data sets.
The range of FFR values from the
experimental data given by Derriford
Hospital was 0.67–0.98. Thus the FFR range
of interest is approximately 0.6–1.
Consequently, it was decided to remove the
15 data sets that had an FFR of less than
0.6, replace them with data sets that had an
FFR of 0.6 or greater, and rerun the MLP
training. This reduced the test error of the
MLP (when trained with 10 hidden units
and an alpha value of 0.01) to 7%.
For the MLP the value of alpha (0.01) was
chosen arbitrarily. In order to further
improve the MLP the number of hidden
units was fixed at 10 (the optimum
suggested by Figure 7) and the value of
alpha was varied logarithmically from 10⁻⁵
to 10. Figure 8 is a graph of test error versus
alpha for the MLP. From this graph it is
apparent that the optimum value for alpha
is around 0.3.
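The alpha sweep can be sketched by reusing the loo_test_error helper above (again with scikit-learn standing in for Netlab, and X, y as the training data):

import numpy as np
from sklearn.neural_network import MLPRegressor

alphas = np.logspace(-5, 1, 7)    # 1e-5 to 10, varied logarithmically
for a in alphas:
    err = loo_test_error(
        lambda a=a: MLPRegressor(hidden_layer_sizes=(10,), alpha=a,
                                 max_iter=5000, random_state=0),
        X, y)
    print(a, err)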
The MLP was retrained with the improved
data (all points with an FFR value between
0.6 and 1), with the optimum alpha value of
0.3 and with the best of four random
initialisations. This network was trained
with 1-25 hidden units (see Figure 9). This
graph suggests that between 5-10 hidden
units is the optimum number for the MLP.
This graph also shows that increasing the
value of alpha from 0.01 to 0.30 has caused
greater instability within the training error.
Figure 9: Graph of test error and training
error vs. no. of hidden units for MLP network
with α=0.3.
Training an MLP network with around 5-10
hidden units and an alpha value of 0.3 gave
an average test error of approximately
0.003 or 5%. Figure 9 indicates that there
are some issues with the training of the data,
especially at larger numbers of hidden units.
The instability displayed in the training
error of the MLP network demonstrates that
the network is having difficulty
determining the correct weights to use.

Figure 8: Graph of test error vs.
coefficient of weight decay for
the initial MLP network with 10
hidden units.

The MLP network used in this project has six
input nodes and one output node; thus using
ten hidden units results in 70 network
weights. However, the data only contain
100 data sets, and this is probably not
enough to determine the network weights at
higher numbers of hidden units. It is
therefore most likely that there is not
enough data to train the MLP, which is
causing the instability within the network.
Another indication that there might not be
enough data sets is the failure of the RBF to
learn the data. One study by P. Crowder et
al. into Radial Basis Functions, found that
‘an MLP network appears to outperform an
RBF network when there are fewer data
points’[10]. The fact that the MLP network
has been far more successful in learning the
data than the RBF network also indicates
that there is not enough training data.
CONCLUSIONS
This project has used a MLP network (with
5-10 hidden units and an alpha value of 0.3)
to predict the FFR value of an idealised
stenosis with an average error of 5%.
However, this MLP network displayed
signs of instability, which was most likely
caused by a lack of training data. Although
the majority of the process of creating the
data sets for the ANN was automated, there
were still several steps that had to be done
manually. These steps were: exporting the
pressure data from ANSYS FLUENT as a
.csv file; importing the .csv file into Matlab;
and recording the FFR value. Whilst
individually these steps are not particularly
laborious, when repeated numerous times it
becomes very time-consuming. This was
the primary reason why only 100 data sets
were created. Further work could be done
on this project to fully automate the process
of CFD model creation, analysis and FFR
calculation. It would be useful to have
1,000 or even 10,000 data sets to train the
MLP, which would almost certainly
improve its performance.
REFERENCES
[1] Bupa Health Information, 2010. Coronary Heart Disease.
http://www.bupa.co.uk/individuals/health-information/directory/c/coronary-heart-disease
(accessed 28/03/2013)
[2] Versteeg, HK, & Malalasekera, W, 2006.
An Introduction to Computational Fluid Dynamics: The Finite Volume Method
2nd Ed. Harlow: Prentice Hall
[3] Kalogirou, SA, 2001.
Artificial Neural Networks in Renewable Energy Systems Applications: A Review.
Renewable and Sustainable Energy Reviews 5(4): 373-401.
[4] Gershenson, C, 2013
Artificial Neural Networks for Beginners
http://arxiv.org/ftp/cs/papers/0308/0308031.pdf
(accessed 28/03/2013)
[5] 2013, The Shape of Data: General Regression and Over Fitting
http://shapeofdata.wordpress.com/2013/03/26/general-regression-and-over-fitting/
(accessed 28/03/2013)
[6] ANSYS, Inc, ANSYS FLUENT (Version 14.5)
[7] Mathworks, Matlab (R2012a)
[8] Nabney, I, Bishop, C, Netlab
[9] Berthold, M, Hand, D (Eds) 2007
Intelligent Data Analysis: An Introduction
2nd Ed. Chapter 8: Neural Networks. New York: Springer
[10] Crowder, P, Cox, R, Dharmendra, S, 2004
A Study of the Radial Basis Function Neural Network Classifiers using Known Data of Varying
Accuracy and Complexity.
Knowledge-Based Intelligent Information and Engineering Systems, 8th International Conference
Wellington, New Zealand. September 2004. New York: Springer