Experimental data and Total Monte Carlo
Towards justified, transparent and complete nuclear data uncertainties

PETTER HELGESSON

Licentiate thesis
Division of Applied Nuclear Physics
Department of Physics and Astronomy
Uppsala University
2015



Abstract

The applications of nuclear physics are many, one important example being nuclear power, which can help decelerate climate change. In any of these applications, so-called nuclear data (ND, numerical representations of nuclear physics) is used in computations and simulations which are necessary for, e.g., design and maintenance. The ND is not perfectly known – there are uncertainties associated with it – and this thesis concerns the quantification and propagation of these uncertainties. In particular, methods are developed to include experimental data in the Total Monte Carlo methodology (TMC).

The work goes in two directions. One is to include the experimental data by assigning weights to the different “random files” used in TMC. This methodology is applied to practical cases using an automatic interpretation of an experimental database, including uncertainties and correlations. The weights are shown to give a consistent implementation of Bayes’ theorem, such that the obtained uncertainty estimates can, in theory, be correct given the experimental data. The practical implementation is more complicated. This is largely due to the interpretation of experimental data, but also because of model defects – the methodology assumes that there are parameter choices such that the model of the physics reproduces reality perfectly. This assumption is not valid, and in future work, model defects should be taken into account. Experimental data should also be used to give feedback to the distribution of the parameters, and not only to provide weights at a later stage.

The other direction is based on the simulation of the experimental setup as a means to analyze the experiments in a structured way, and to obtain the full joint distribution of several different data points. In practice, this methodology has been applied to the thermal (n,α), (n,p), (n,γ) and (n,tot) cross sections of 59Ni. For example, the estimated expected value and standard deviation for the (n,α) cross section is (12.87 ± 0.72) b, which can be compared to the established value of (12.3 ± 0.6) b given in the work of Mughabghab. Note that the correlations to the other thermal cross sections, as well as other aspects of the distribution, are also obtained in this work – and this can be important when propagating the uncertainties.

The careful evaluation of the thermal cross sections is complemented by a coarse analysis of the cross sections of 59Ni at other energies. The resulting nuclear data is used to study the propagation of the uncertainties through a model describing stainless steel in the spectrum of a thermal reactor. In particular, the helium production is studied. The distribution has a large uncertainty (a standard deviation of (17 ± 3) %), and it shows a strong asymmetry. Much of the uncertainty and its shape can be attributed to the coarser part of the uncertainty analysis, which should therefore be refined in the future.

“It’s mathematics.”
Mos Def, on the album Black on Both Sides, 1999


List of papers

This thesis is based on the following papers, which are referred to in the text by their Roman numerals.

I P. Helgesson, H. Sjöstrand, A.J. Koning, D. Rochman, E. Alhassan, S. Pomp, “Incorporating experimental information in the TMC methodology using file weights,” Nuclear Data Sheets, vol. 123, pp. 214–219, 2015.
My contribution: I developed the method, wrote most of the scripts, performed the analysis and wrote the paper.

II P. Helgesson, H. Sjöstrand, A.J. Koning, J. Rydén, D. Rochman, E. Alhassan, S. Pomp, “Including experimental data into TMC using file weights from automatically generated experimental covariance matrices,” to be submitted to Progress in Nuclear Energy, 2015.
My contribution: I developed the method, wrote most of the scripts, performed the analysis and wrote the paper.

III P. Helgesson, H. Sjöstrand, A.J. Koning, J. Rydén, D. Rochman, E. Alhassan, S. Pomp, “Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation,” accepted for publication in Nuclear Instruments and Methods in Physics Research A, 2015.
My contribution: I developed the method, wrote most of the scripts, performed the analysis and wrote the paper.

    Reprints were made with permission from the publishers.


Contents

1 Introduction  9
  1.1 Background  9
  1.2 Uncertainty quantification  11
    1.2.1 Error and uncertainty  11
    1.2.2 Estimating random and systematic uncertainties  12
    1.2.3 Uncertainty propagation  15
    1.2.4 Adding uncertainties  17
  1.3 Nuclear data (ND)  18
    1.3.1 Cross sections and other quantities of interest  18
    1.3.2 Experimental data  21
    1.3.3 Evaluated nuclear data  23
  1.4 Total Monte Carlo (TMC)  24
2 Using weights from experimental data in TMC  27
  2.1 Likelihood weights  27
    2.1.1 The weights  27
    2.1.2 Conventional computation of the likelihood  28
    2.1.3 Sampling of systematic errors  29
    2.1.4 Comparison of the two methods to compute the likelihood  30
  2.2 Automatic interpretation of experimental data  31
  2.3 Comparing weighted and unweighted distributions  32
  2.4 Conclusions for Chapter 2  35
3 Ongoing work: evaluating the 59Ni cross sections  36
  3.1 Evaluation of experimental thermal cross sections  37
    3.1.1 General treatment  37
    3.1.2 Eiland/Kirouac, 1974  38
    3.1.3 Werner/Santry, 1975  40
    3.1.4 McDonald/Sjöstrand, 1975 (corrected by Ashgar et al., 1977)  40
    3.1.5 Jurney, 1975  41
    3.1.6 Harvey, 1975/1976  41
    3.1.7 Ashgar et al., 1977  42
    3.1.8 Raman et al., 2004  43
    3.1.9 Merging the information  44
  3.2 Producing the random files  49
    3.2.1 Treating resonances  49
    3.2.2 Adjustment to the thermal cross sections  51
    3.2.3 Completing the ENDF files  52
  3.3 Using the random files for helium and hydrogen production in a thermal spectrum  54
    3.3.1 The model  54
    3.3.2 Results  55
  3.4 Discussion  56
  3.5 Conclusions for Chapter 3  57
4 General conclusions and outlook  59
References  61

1 Introduction

As mentioned already on page v, this thesis in applied nuclear physics is, to a large extent, based on a set of papers. To acquaint the reader with uncertainty propagation and nuclear data, and to put the work in context, this first chapter makes up a rather exhaustive introduction. In Chapter 2, the work described in detail in the different articles is summarized to be more easily accessed. Finally, Chapter 3 contains details on ongoing work which is not covered in any of the papers.

1.1 Background

The IPCC (Intergovernmental Panel on Climate Change, [1]) claims that “without additional mitigation efforts”, the global average temperature will have increased by between 2.5 ◦C and 7.8 ◦C by the year 2100 (compared to pre-industrial levels, and with a confidence level of 90 %). The panel also states that this will have serious consequences such as raised water levels, floods, increased stress on water supply and reduced food production, to mention a few examples.

Being a low-carbon energy source [1] and not being intermittent (as wind and solar power are), nuclear power can be part of the solution to this problem. What makes nuclear power attractive for energy production is basically the high energy density connected to fission (and fusion); one typically extracts about 50 MWd ≈ 4·10⁶ MJ per kilogram of uranium in the nuclear reactors of today, to be compared to the heat content of coal, which is 10–30 MJ per kilogram [2].
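The quoted energy density is just a unit conversion; a one-line sanity check (using 1 MWd = 24 · 3600 s · 1 MW = 86 400 MJ, and the upper end of the coal figure from the text):

```python
mwd_to_mj = 24 * 3600            # seconds per day: 1 MWd = 86 400 MJ
uranium = 50 * mwd_to_mj         # 50 MWd/kg -> about 4.3e6 MJ per kg of uranium
coal = 30                        # upper heat content of coal, MJ per kg [2]
print(uranium, uranium / coal)   # uranium beats coal by a factor of ~1.4e5
```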

There are problems related to nuclear power, though. The reactors of today produce waste containing both fission products, which are highly radioactive, and heavy elements called actinides, which can be hazardous for more than 100 000 years [3]. This time span is hard to grasp, and it necessitates a complex ethical discussion as well as substantial efforts in waste management. A vision for the “fourth generation” of nuclear reactors (Gen. IV) is to use technologies that suppress the production of the long-lived actinides [4], essentially leaving only the fission products, most of which have decayed into stable nuclides after a few hundred years [3].

Also, accidents at nuclear power plants have the potential to be severe, the worst example being the Chernobyl accident in 1986. Tens of people died shortly after the accident, while the total number of premature deaths is hard to estimate and strongly debated. The Chernobyl Forum (assembled by several UN organs such as the IAEA and WHO) writes that it “could mean eventually up to several thousand fatal cancers” among the 600 000 most exposed, and that among the other 5 million people who were exposed to some extent, “doses are much lower and [...] expected to make a difference of less than 1% in cancer mortality” [5]. However, all energy sources come with a risk; a study in The Lancet [6] estimates that the mortality (deaths per TWh of electricity) is 33, 25, 2.8, 18 and 4.6 for lignite, coal, gas, oil and biomass, respectively, compared to 0.074 deaths/TWh for nuclear energy. These numbers include accident risk but do not include long-term climate effects. The study also points out that “access to electricity is prerequisite for the achievement of health, and lack of access to it remains one of the principal barriers to the fulfillment of human potential and well-being”.

Unfortunately, yet another application of nuclear physics is nuclear weapons – the same high energy density that makes nuclear fission attractive for energy production can also be enormously destructive. However, applied nuclear physics also plays a major role in nuclear safeguards [7], i.e., in the system of inspections of nuclear facilities which is intended to ensure that no nuclear material is diverted. In other words, one application of nuclear physics is to prevent the proliferation of another application, namely nuclear weapons.

In connection with these applications (and others, such as in medicine, astrophysics, etc.), computations (or simulations) are important ingredients in activities such as design, licensing and maintenance. For such computations, knowledge of so-called nuclear data (ND) is necessary for all involved nuclides. In everyday language, the ND can describe, e.g., interaction probabilities, energy release and decay times (see Sec. 1.3.1 for more details), which are necessary input for the computations. The ND for applications is a combination of experiments and models, so-called evaluated ND (see Sec. 1.3.3). Since there are uncertainties in both experiments and models, there are uncertainties in the evaluated ND, which will propagate to the results computed for the applications. It is necessary to assess these propagated uncertainties in order to make correct decisions, both in optimizing for economic purposes and when it comes to safety. To phrase it simply, it would be meaningless to spend time and resources on doing a calculation without knowing how trustworthy the results are. In the climate change case, this is seized on by the IPCC, who consistently quote confidence intervals and levels of confidence for different claims in their report, just as with the temperature increase cited in the very beginning of this section.

This thesis studies some aspects of ND uncertainties and their propagation through computations. In particular, the Total Monte Carlo methodology (TMC) presented by Koning and Rochman in 2008 [8, 9] is applied and further developed. Since TMC was presented, it has been applied to several cases, e.g., for a large set of experimental benchmarks [10], in computing the neutron multiplication for fuel assemblies [11], and in shielding simulations [12], but also to more complicated systems such as neutronics simulations in a full-sized reactor core [13] as well as in a transient analysis [14]. The author of this thesis has previously used TMC in a careful study [15] of differences in propagated ND uncertainties for the reactor fuels UO2 and MOX, e.g., finding that the propagated uncertainty from thermal scattering was surprisingly large for MOX because of how the cross sections for 239Pu and 241Pu relate to each other. Papers I, II and III (summarized in Chapter 2) all discuss how to calibrate the distributions obtained from TMC by using experimental data, but Paper II also presents uncertainties propagated to the high-energy flux which constantly degrades the reactor pressure vessels of Ringhals 3 and 4 in southern Sweden. The work in Chapter 3 makes use of a TMC-like methodology all the way from the experiments to produce new data for 59Ni, including uncertainties, and to propagate these uncertainties to the generation of helium and hydrogen gas in stainless steel – knowledge which can also be important for the integrity of structural materials in aging reactors.

TMC is described in Sec. 1.4, but before that, the reader finds a general introduction to uncertainty quantification (and propagation) in Sec. 1.2, and to nuclear data in Sec. 1.3.

1.2 Uncertainty quantification

1.2.1 Error and uncertainty

The words error and uncertainty are used interchangeably by many authors and experimenters; however, this thesis follows the convention to distinguish between the two, since the author believes this can be helpful for understanding.

To shed light on the difference, consider the conceptually simple situation where the distance between two points on a piece of paper is measured with a steel ruler. There is a true distance a, but it is unknown. Using the ruler, we estimate the distance to x. This is our best guess for a, but we are almost sure that x − a, i.e., the error, is non-zero¹. It is in the nature of the error that it is unknown – if it were known, it would have been corrected for. However, based on the measurement technique, one can estimate the uncertainty of x. In the example of the ruler, a natural limitation is the accuracy of the ruler’s scale; if the ruler shows millimeters, a careful reading could perhaps provide guesses for tenths of millimeters, but there would be a large risk of confusing 0.3 mm with 0.4 mm, for example, giving an idea about the uncertainty.

The error ε can be modeled as a random variable, i.e., such that the estimated distance x is an observation of the random variable

X = a + ε, (1.1)

¹ Almost sure is actually a mathematically well-defined term for an event that has probability 1(!), and the event that x − a ≠ 0 has probability 1 because distances are continuous quantities.


and define the uncertainty as the “width” of the distribution for ε. The width is often quantified by the standard deviation σ(ε), which is the square root of the variance V(ε), i.e.,

σ²(ε) = V(ε) = ∫_{−∞}^{∞} (ε − 〈ε〉)² f_ε(ε) dε, (1.2)

where f_ε(ε) is the probability density function (PDF) for ε, and 〈ε〉 is the expected value of ε, i.e.,

〈ε〉 = ∫_{−∞}^{∞} ε f_ε(ε) dε, (1.3)

which can be interpreted as the mean value one would approach when drawing an increasingly large number of samples from random variables distributed as ε.

Note that the standard deviation only captures one particular aspect of the distribution; see for example Fig. 1.1, where two PDFs (for some random variable Z) with the same standard deviation (and expected value) are shown – yet the probability of, e.g., Z > 0.7 is quite different for the two distributions. One of the functions follows a normal distribution (a.k.a. Gaussian), a distribution which is often assumed, mainly because it can be justified by the so-called Central Limit Theorem in cases where the considered random variable can be written as the sum of “many” (more or less independent) random variables (which is often the case to some approximation) [16]. Following the principle of maximum entropy [17, 18], one can also show that the least informed choice of distribution (if it is necessary to choose, and given an expected value and a variance) is the normal distribution. For the normal distribution, ∼68 % of the probability mass lies within ± one standard deviation of the expected value [16]. Despite the above-mentioned limitation of the standard deviation, it is frequently used to quantify the uncertainty, also in this thesis. Alternatives could be to quote confidence intervals (but these are normally just the expected value ± the standard deviation scaled by a so-called quantile, a number which relies on an assumption of a (normal) distribution), to quote tolerance intervals using order statistics along the lines of Wilks’ method [19], or to estimate the full error distribution.
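The point that equal standard deviations can hide very different tail probabilities can be checked numerically. The sketch below (the mean and standard deviation are illustrative choices, since the parameters behind Fig. 1.1 are not stated) samples a normal and a shifted exponential distribution with matched expected value and variance, and compares P(Z > 0.7):

```python
import numpy as np

rng = np.random.default_rng(0)
m, s = 0.25, 0.25  # illustrative expected value and standard deviation
n = 1_000_000

# Normal distribution with mean m and standard deviation s.
normal = rng.normal(m, s, n)

# A shifted exponential has mean shift + scale and standard deviation scale,
# so shift = m - s and scale = s match the two lowest moments.
expon = (m - s) + rng.exponential(s, n)

print(normal.mean(), expon.mean())   # both close to 0.25
print(normal.std(), expon.std())     # both close to 0.25
print((normal > 0.7).mean())         # ~0.036: normal tail
print((expon > 0.7).mean())          # ~0.061: heavier exponential tail
```

Despite identical first two moments, the exponential distribution puts noticeably more probability beyond 0.7.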

1.2.2 Estimating random and systematic uncertainties

Ideally, one would know the distribution for ε, but in practice this is not the case. In the example with the ruler above, one could maybe estimate the uncertainty in the reading with a standard deviation of 0.1 mm, and then one could possibly add uncertainties such as the manufacturing tolerance, and the uncertainty due to a possibly unknown temperature for which the ruler was intended (and if such a temperature is known, the thermometer has an uncertainty too, as does the expansion coefficient for the steel). However, how did the ruler or thermometer manufacturers determine their manufacturing uncertainties in the first place? Moreover, could we make a more rigorous determination of the reading uncertainty?

[Figure 1.1. Two probability density functions (an exponential and a normal distribution) with the same expected value and variance, plotted as probability density at z against z.]

As long as uncertainties are random² (as opposed to systematic), they are in principle simple to estimate, provided that it is possible to repeat the measurement. By random, we mean that if the measurement is repeated, the error will fluctuate randomly with an expected value of zero. Then, it is possible to obtain a random sample x = (x₁, x₂, ..., xₙ)ᵀ with observations from X = (X₁, X₂, ..., Xₙ)ᵀ, where all the Xᵢ are independent and distributed as X. Then, one can estimate the distribution of X in Eq. (1.1), or certain properties of this distribution. In particular, it is possible to estimate the standard deviation of X (which is the same as the standard deviation of ε) using the sample standard deviation [16]

s(x) = √( (1/(n−1)) ∑_{i=1}^{n} (xᵢ − x̄)² ), (1.4)

where x̄ is the mean value of x, i.e.,

x̄ = (1/n) ∑_{i=1}^{n} xᵢ. (1.5)
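Eqs. (1.4) and (1.5) are exactly what `numpy.std` computes with `ddof=1` (the n − 1 denominator). A minimal check, with hypothetical repeated ruler readings in millimeters:

```python
import numpy as np

# Hypothetical repeated readings of the same distance, in mm.
x = np.array([104.3, 104.5, 104.4, 104.2, 104.6, 104.4])

x_bar = x.sum() / len(x)                            # Eq. (1.5)
s = np.sqrt(((x - x_bar)**2).sum() / (len(x) - 1))  # Eq. (1.4)

# numpy's ddof=1 yields the same sample standard deviation.
assert np.isclose(s, np.std(x, ddof=1))
print(x_bar, s)
```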

² Random uncertainties are also known as aleatoric.


In the example with the ruler, one could let many different people redo the measurement to be able to estimate the uncertainty using Eq. (1.4). Naturally, one would then estimate the distance using x̄, rather than just the first observation. The mean value x̄ is an observation of

X̄ = (1/n) ∑_{i=1}^{n} Xᵢ, (1.6)

which has standard deviation [16]

σ(X̄) = √V(X̄) = √( (1/n²) ∑_{i=1}^{n} V(Xᵢ) ) = σ(X)/√n, (1.7)

assuming that the Xᵢ are mutually independent, i.e., that their uncertainties are random only. Thus, by repeating the measurement n times, we do not only obtain an uncertainty estimate but also reduce the (random) uncertainty by a factor of 1/√n.
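The 1/√n reduction in Eq. (1.7) is easy to verify by simulation; the true distance and per-reading uncertainty below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
a, sigma = 104.4, 0.14   # hypothetical true distance (mm) and per-reading std
n, trials = 25, 20_000

# Each trial: n independent readings, averaged as in Eq. (1.6).
means = rng.normal(a, sigma, (trials, n)).mean(axis=1)

# The spread of the mean shrinks by 1/sqrt(n), Eq. (1.7).
print(means.std())   # close to sigma / sqrt(n) = 0.028
```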

Systematic uncertainties are worse in both respects. They are not as easy to estimate, and they cannot be reduced by repeating the same measurement: since they are not random, one will repeat the same error. In general, it is desirable to “make” systematic uncertainties random by varying more parameters in the measurement setup. In the example of the ruler, there may be an uncertainty in the manufacturing of the ruler which will be manifested as a systematic uncertainty, and which one can estimate by using many different rulers (if the manufacturer gives an uncertainty estimate, they may have done something similar). However, the manufacturing uncertainty may also contain systematic components, so it could make sense to use rulers from different manufacturers, or even different measurement techniques, and so on. Such a process can obviously be expensive, and it may be necessary to use experience from similar measurements or to somehow derive an uncertainty estimate from underlying principles, a rough example of which is the estimate of 0.1 mm in the very beginning of this section.

A particular concern arises if the same systematic uncertainty occurs in different measurements: their errors will be correlated. This fact has an important impact when uncertainties are propagated, see Sec. 1.2.3. As a simple example, assume that we want to estimate the sum a + b, where both quantities are measured separately. If X = a + ε and Y = b + ε (they have a common error ε), then the variance of their sum becomes [16]

V(X + Y) = V(ε) + V(ε) + 2C(ε, ε) = 4V(ε). (1.8)

This can be compared to the case when X = a + ε₁ and Y = b + ε₂, where ε₁ and ε₂ are uncorrelated, giving [16]

V(X + Y) = V(ε₁) + V(ε₂). (1.9)
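The factor-of-two difference in standard deviation between Eqs. (1.8) and (1.9) can be demonstrated with a short simulation (a and b are arbitrary, and the errors are taken as standard normal for simplicity):

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, n = 3.0, 5.0, 1_000_000

# Fully correlated case, Eq. (1.8): the SAME error enters both measurements.
eps = rng.normal(0.0, 1.0, n)
var_corr = np.var((a + eps) + (b + eps))   # -> 4 V(eps) = 4

# Uncorrelated case, Eq. (1.9): independent errors.
e1, e2 = rng.normal(0.0, 1.0, (2, n))
var_ind = np.var((a + e1) + (b + e2))      # -> V(e1) + V(e2) = 2

print(var_corr, var_ind)   # roughly 4 and 2
```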


1.2.3 Uncertainty propagation

When performing a computation or simulation which uses input containing uncertainties, it is of interest how the uncertainties of the input propagate to the results. Modeling the input as a random vector X = (X₁, X₂, ..., Xₘ)ᵀ, a resulting quantity g(X) is a random variable which follows a distribution depending on X. This section briefly describes two standard methods to estimate the uncertainty in g(X) given information on X.

    Linear uncertainty propagation

If the uncertainties are “small enough”, it can be reasonable to keep only the zeroth and first orders in a Taylor expansion of g(X) about the expected value of X, which yields (assuming g(X) has continuous first partial derivatives on its domain) [16]

V(g(X)) ≈ ∑_{i=1}^{m} ∑_{j=1}^{m} (∂g/∂xᵢ)(∂g/∂xⱼ)|_{x=〈X〉} C(Xᵢ, Xⱼ), (1.10)

where C(Xᵢ, Xⱼ) is the covariance between Xᵢ and Xⱼ, defined as

C(Xᵢ, Xⱼ) = ∫_{ℝᵐ} (xᵢ − 〈Xᵢ〉)(xⱼ − 〈Xⱼ〉) f_X(x) dx, (1.11)

where f_X(x) is the PDF for X. The covariance describes the uncertainties of Xᵢ and Xⱼ but also their linear correlation – the covariance can be written

C(Xᵢ, Xⱼ) = σ(Xᵢ) σ(Xⱼ) ρ(Xᵢ, Xⱼ), (1.12)

where ρ is the correlation coefficient, which can be shown to satisfy −1 ≤ ρ ≤ 1 [16]. The closer |ρ| is to 1, the greater is the linear dependence, and ρ is positive if Xᵢ and Xⱼ tend to “vary in the same direction” (covary) and negative in the opposite case.

Note that C(Xᵢ, Xᵢ) = V(Xᵢ), giving that if all the different random variables are uncorrelated, Eq. (1.10) simplifies to

V(g(X)) ≈ ∑_{i=1}^{m} (∂g/∂xᵢ)²|_{x=〈X〉} V(Xᵢ). (1.13)

Quite intuitively, the uncertainty of g(X) thus depends on the uncertainty of the different arguments Xᵢ and on how strongly g depends on the arguments. Eq. (1.10) can be interpreted similarly, but it also takes the linear correlation into account.

Eq. (1.10) can be generalized to the case of multiple output quantities g(X) = (g₁(X), g₂(X), ..., gₗ(X))ᵀ, and it can be neatly written in matrix form as

C_g ≈ SᵀC_X S, (1.14)

where C_g and C_X are the covariance matrices of g and X, respectively, i.e., C(Xᵢ, Xⱼ) is found on the i’th row in the j’th column of C_X, and analogously for C_g, and the element on the i’th row in the k’th column of the so-called sensitivity matrix S is

(S)ᵢₖ = ∂gₖ/∂xᵢ. (1.15)

The (approximate) variances of the different random variables g₁(X), g₂(X), ..., gₗ(X) are found along the diagonal of C_g. Their covariances may be used in further propagation.
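Eqs. (1.14) and (1.15) can be sketched numerically. The two-input, two-output model g, the expected values and the input covariance matrix below are all invented for illustration, and the sensitivities are estimated with simple forward differences:

```python
import numpy as np

def g(x):
    # Hypothetical model with two inputs and two outputs.
    return np.array([x[0] * x[1], x[0] + x[1]**2])

mu = np.array([1.0, 2.0])                 # expected value of X
CX = np.array([[0.0100, 0.0020],
               [0.0020, 0.0400]])         # covariance matrix of X

# Sensitivity matrix, (S)_ik = dg_k/dx_i, by perturbing one input at a time.
h = 1e-6
S = np.zeros((2, 2))
for i in range(2):
    dx = np.zeros(2)
    dx[i] = h
    S[i, :] = (g(mu + dx) - g(mu)) / h

Cg = S.T @ CX @ S                         # the "sandwich rule", Eq. (1.14)
print(np.sqrt(np.diag(Cg)))               # propagated standard deviations
```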

Eqs. (1.14) and (1.15) have made up the backbone of ND uncertainty propagation since the work of Usachev [20] in the early 1960’s. The estimation of the sensitivity matrix S is normally performed by perturbing one input variable xᵢ at a time (simple numerical differentiation); hence, the words covariance matrices, sensitivities and perturbations occur frequently in the field of ND uncertainty propagation.

The main downside of this type of error propagation is that it is an approximation; nonlinear dependence on the input is not captured. For example, at a minimum or maximum of g(x) the first derivatives are zero, leading to a zero estimate of the uncertainty. It is possible to include more terms in the Taylor expansion, but this would make it necessary to estimate higher order derivatives [21]. Another disadvantage is that the methodology characterizes the full distribution of X by its expected value and covariance³. As a univariate example, the methodology would not distinguish between the two distributions in Fig. 1.1 in Sec. 1.2.1.

    Monte Carlo estimation

A more direct way to estimate the propagated uncertainty is to generate a random sample x(1), x(2), ..., x(n) from X, and to simply evaluate g for all observations of this random sample. In this way, one has obtained a random sample from g(X) [16], namely g(x(1)), g(x(2)), ..., g(x(n)). It is then straightforward to estimate the standard deviation of g(X) using Eq. (1.4), and it is also possible to estimate the full distribution of g(X), or, say, P(g(X) > c), i.e., the probability that g(X) exceeds some particular value c (a design criterion may require this probability to be small). It is also possible to estimate the uncertainty in these quantifications of the uncertainty, e.g., as outlined in Ref. [15].
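The steps above can be sketched as follows; the model g, the moments of X and the threshold c are invented for illustration, and a multivariate normal distribution is assumed for X:

```python
import numpy as np

rng = np.random.default_rng(3)

def g(x):
    # Hypothetical, mildly non-linear model; x has shape (n, 2).
    return x[:, 0] * x[:, 1]

mu = np.array([1.0, 2.0])
CX = np.array([[0.0100, 0.0020],
               [0.0020, 0.0400]])

# Draw a random sample from X and evaluate g for every observation.
x = rng.multivariate_normal(mu, CX, size=100_000)
gx = g(x)

print(gx.std(ddof=1))      # propagated standard deviation, via Eq. (1.4)
print((gx > 2.5).mean())   # estimate of P(g(X) > c) for c = 2.5
```

Note that no derivatives are needed, and the tail probability comes out of the same sample as the standard deviation.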

Apart from the possibility to study more aspects of the distribution of g(X) than the standard deviation, other advantages compared to linear uncertainty propagation are that non-linearities can be captured (the only approximation in this methodology is the finite number of samples) and that aspects of the distribution of X other than its expected value and covariance can be taken into account.

³ Higher order moments (the expected value and variance are the two lowest orders) can also be included by including more terms in the Taylor expansion, but the number of necessary higher order derivatives to include would grow even larger [21].

The major disadvantage is that more evaluations of g(X) are typically necessary, which becomes a problem if g(X) is computationally expensive to evaluate (which can often be considered the case in applied nuclear physics). It may be argued that it is poorly invested time to do such an accurate uncertainty propagation, since the knowledge of the distribution of X may be a larger limitation than the approximation in linear uncertainty propagation. Indeed, in ND uncertainty propagation it is often the case that the only information on X consists of (estimates of) the expected value and covariance, and then one has to assume a distribution. Normally, this will be a (multivariate) normal distribution, for the reasons pointed out at the end of Sec. 1.2.1. It is worth noting that the number of evaluations of g does not depend on the length of X [22], while the number of derivatives to be estimated for the sensitivity matrix S in Eq. (1.14) depends on how many quantities one believes to have an impact on the uncertainty. If propagating, e.g., cross section uncertainties with linear uncertainty propagation, this means that the cross sections must be relatively coarsely grouped in the determination of the sensitivity matrix, or otherwise a large number of “function evaluations” (i.e., computer code runs) are necessary.

There are several ways to increase the efficiency by introducing some degree of determinism in the sampling, such as in so-called Latin Hypercube Sampling [23]. The implementation is not as simple, and it is often difficult to draw as rigorous conclusions from these methods; for example, the variance estimator used in Latin Hypercube Sampling has an unknown bias, even if this bias appears to be small in many practical cases [24].
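As an illustration of the stratification idea, a minimal Latin Hypercube sampler for uniform marginals can be sketched as follows (an illustrative implementation, not the one of Ref. [23]):

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n stratified samples in [0, 1)^d: exactly one sample falls in each of
    the n equal-width intervals along every dimension."""
    # One random point inside each stratum [i/n, (i+1)/n) ...
    u = (np.arange(n)[:, None] + rng.random((n, d))) / n
    # ... then shuffle the strata independently in each dimension.
    for j in range(d):
        rng.shuffle(u[:, j])
    return u

rng = np.random.default_rng(1)
u = latin_hypercube(10, 2, rng)

# Check the stratification: each interval index 0..9 occurs exactly once
# per dimension.
print(np.sort((u * 10).astype(int), axis=0)[:, 0])  # → [0 1 2 3 4 5 6 7 8 9]
```

To sample non-uniform marginals, each column would be mapped through the inverse CDF of the desired distribution.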

1.2.4 Adding uncertainties
In uncertainty studies, it frequently occurs that “uncertainties are added in quadrature”. To see why this is the case, consider a situation where the error ε can be written as m > 1 additive and independent error components, i.e., the model in Eq. (1.1) is extended to

X = a + ε = a + ∑_{ℓ=1}^{m} ε_ℓ, (1.16)

    where the εℓ are mutually independent. Since uncertainties normally arequantified by standard deviations, and since [16]

V( ∑_{ℓ=1}^{m} ε_ℓ ) = ∑_{ℓ=1}^{m} V(ε_ℓ), (1.17)


one obtains

σ(ε) = √( ∑_{ℓ=1}^{m} σ²(ε_ℓ) ), (1.18)

which is what is meant by “adding the uncertainties in quadrature”.

Often, uncertainty components can be considered relative to the true value, such that

X = a ∏_{ℓ=1}^{m} (1 + ε_ℓ). (1.19)

    In such a case (again assuming that the εℓ are mutually independent), one canapply Eq. (1.13) and obtain (carry out the differentiation and assume 〈εℓ〉 = 0)

V(X) ≈ a² ∑_{ℓ=1}^{m} V(ε_ℓ). (1.20)

Thus, relative uncertainties may also be added in quadrature.

It is important to stress that this addition of “uncertainties” is only possible if the error components are independent, and that there is an underlying assumption of linearity (“small enough uncertainties”) in cases where the uncertainties are relative rather than additive.
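Both quadrature rules can be checked numerically; the two error components and their standard deviations below are made-up round numbers:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# Two hypothetical independent error components with standard deviations
# 0.03 and 0.04; their quadrature sum is 0.05.
e1 = rng.normal(0.0, 0.03, n)
e2 = rng.normal(0.0, 0.04, n)
a = 10.0

X_add = a + e1 + e2                  # additive model, Eq. (1.16)
X_rel = a * (1 + e1) * (1 + e2)      # multiplicative model, Eq. (1.19)

sigma_quad = np.hypot(0.03, 0.04)    # sqrt(0.03^2 + 0.04^2) = 0.05, Eq. (1.18)
sigma_add = X_add.std(ddof=1)        # Monte Carlo estimate, additive case
sigma_rel = X_rel.std(ddof=1) / a    # relative std, cf. Eq. (1.20)

print(sigma_quad, sigma_add, sigma_rel)  # all close to 0.05
```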

1.3 Nuclear data (ND)
By nuclear data we mean numerical representations of nuclear physics processes. This thesis focuses on interactions between neutrons and nuclides, due to their outstanding importance in many applications (e.g., in nuclear power).

1.3.1 Cross sections and other quantities of interest
A very important type of ND is the cross section, which provides a way to quantify the interaction probability between an incoming particle and a nuclide (the discussion will be limited to incoming neutrons in the following). Perhaps counter-intuitively, the cross section has the dimension of an area; it can be seen as the “effective area” of a nuclide, from the perspective of an approaching neutron. It is normally given in barns (b), where 1 b = 10⁻²⁸ m², which is of the same order of magnitude as the geometrical area of a medium-sized nuclide (although such an area is not well defined).

To be a bit more precise, consider a neutron with energy E, traveling through a medium with a concentration N (1/volume) of a nuclide of interest, which has a cross section ς(E). The probability dP(E) that the particle interacts (with this type of nuclide) when traveling the distance dx is⁴

dP(E) = N ς(E) dx, (1.21)

    which can be rearranged to give

ς(E) = (dP(E)/dx) · (1/N). (1.22)

Thus, the cross section is the interaction probability per distance traveled by the neutron, normalized by the concentration of nuclides (or, equivalently, multiplied by the “volume per particle”). Writing out the energy dependence explicitly is an attempt to stress this particular fact: cross sections indeed depend on the energy of the incoming neutron, which is discussed further below.
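The relation in Eq. (1.21) can be illustrated numerically; the cross section and concentration below are made-up round numbers:

```python
import numpy as np

# Illustrative numbers (not from the thesis): a 1 b cross section and a
# typical solid-state nuclide concentration.
sigma_b = 1.0                  # cross section ς, in barns
sigma_m2 = sigma_b * 1e-28     # 1 b = 1e-28 m^2
N = 8.5e28                     # nuclide concentration, 1/m^3

Sigma = N * sigma_m2           # interaction probability per metre, cf. Eq. (1.21)
mean_free_path = 1.0 / Sigma   # average distance between interactions, m

# Integrating Eq. (1.21) over a finite thickness x gives the probability of
# at least one interaction: P(x) = 1 - exp(-N*ς*x).
x = 0.01                       # 1 cm
P = 1.0 - np.exp(-Sigma * x)
print(Sigma, mean_free_path, P)
```

The exponential form also shows why a “sufficiently thin” target is assumed in footnote 4: for N ς x ≪ 1, P(x) ≈ N ς x.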

The cross section is usually given the symbol σ (Sigma), which is unfortunate for the ND uncertainty quantification community, because another very strong convention is to use σ for the standard deviation (see Sec. 1.2.1), used a lot in uncertainty quantification. In this text, σ stands for standard deviation. Another variant of Sigma, ς, is used to denote cross sections.

After the interaction, the particle will move in another direction, with a new energy. The interaction probability and the probabilities for different angles or combinations of energies and angles are often quantified by so-called differential and double-differential cross sections, respectively. In the ND field, however, “cross section” is often reserved for angle- and energy-integrated cross sections, i.e., it only quantifies the probability for a certain type of interaction and does not specify the probability density for a particular angle or energy of the recoiling particle. The angular and energy-angle distributions are then treated as other types of ND, and the product of these with the cross section yields differential and double-differential cross sections, respectively. In this text, this convention of separating cross sections and angular/energy-angle distributions is followed.

There are several types of nuclear interactions, each quantified by its own cross section. The total cross section is the cross section for any interaction; it is conventionally denoted (n,tot) for incoming neutrons (“n” denotes neutron). The total cross section can be divided into the cross sections for scattering and absorption. Scattering can be divided into elastic scattering, (n,el), and inelastic scattering, (n,n′), where the former is scattering without kinetic energy loss – the neutron simply bounces off the nuclide – and the latter is scattering which leaves the nuclide excited. In turn, one may divide inelastic scattering into different reactions depending on which energy level the nuclide is excited to. The absorption cross section may contain the fission cross section, (n,f), and different cross sections where the nuclide captures the neutron and ejects one or more other particles or a γ-ray. The latter reaction is called radiative capture and is denoted (n,γ). In conventional nuclear reactors, relatively slow (“thermal”) neutrons are the most abundant, and for such low neutron energies the total cross section is often (almost) entirely made up of the (n,el) and (n,γ) cross sections, since other interactions require a certain kinetic energy of the neutron – they are so-called threshold reactions. Important exceptions are fissile nuclides such as ²³⁵U and ²³⁹Pu, which are likely to undergo fission after absorbing a thermal neutron. Another important exception (studied in Chapter 3) is ⁵⁹Ni, which has significant (n,α) and (n,p) cross sections (neutron capture followed by α-particle and proton emission, respectively). The list continues, and many of the mentioned interactions can be further sub-categorized.

⁴ Frequently, e.g., in Ref. [25], this equation is written as R_A = ς I_A N_A, where it is assumed that I_A neutrons per area and time impinge on a “sufficiently thin” target with N_A nuclides per area, and R_A is the number of interactions per area and time. Noting that R_A/I_A is the interaction probability for one neutron, and replacing N_A by N dx to handle the requirement of the “sufficiently thin” target mathematically (whereby the probability becomes differential, too), gives Eq. (1.21).

As pointed out previously, cross sections depend on the energy of the incoming neutron. This dependence is clear from a look at Fig. 1.2, which shows the (n,γ) and (n,el) cross sections for ⁵⁹Ni as an illustrative example. For low enough energies (typically up to ∼ 1 eV), all non-threshold cross sections except the elastic one are proportional to 1/√E ∝ 1/v, where v is the neutron velocity (which is why the cross section appears as a straight line in a log-log plot). An interpretation is that the interaction probability is proportional to the time during which the neutron is in the vicinity of the nuclide. The elastic cross section, however, is constant for such energies. For higher energies, the energy levels of the compound nuclide become evident, and resonant behavior appears – interaction is more likely if the energy the neutron adds to the system matches an excited state of the compound nuclide. The resonances are explicitly included in the data up to a certain energy, above which there is an unresolved resonance region where, in most cases, there still are resonances. The resonances are not included in this region because of lack of information, but there can be so-called unresolved resonance parameters included in the ND, which can be necessary to use in certain computations. Above the unresolved resonance region, the different levels are so close that they overlap, and the resonance structure disappears.

The resonance structure gives rise to a particular group of ND – so-called resonance parameters. Instead of storing the cross section on a very fine energy grid, one uses models that describe the resonance structure using a set of parameters for each resonance.
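To illustrate the qualitative shapes discussed above, the following toy model combines a 1/√E (“1/v”) component with a single Lorentzian-shaped resonance. It is purely illustrative – actual evaluations use Breit-Wigner or R-matrix resonance parameters – and all numbers below are made up:

```python
import numpy as np

def toy_cross_section(E, c=10.0, E0=1000.0, Gamma=50.0, peak=500.0):
    """Toy cross section [b] vs. neutron energy E [eV]: a 1/sqrt(E) ("1/v")
    component plus a single Lorentzian-shaped resonance at E0. Not the actual
    resonance formalism; all parameter values are made up."""
    one_over_v = c / np.sqrt(E)
    resonance = peak * (Gamma / 2) ** 2 / ((E - E0) ** 2 + (Gamma / 2) ** 2)
    return one_over_v + resonance

# The 1/v component dominates at low energies, the resonance near E0 = 1 keV.
E = np.array([0.025, 1.0, 1000.0, 1.0e5])
print(toy_cross_section(E))
```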

Except for cross sections and energy-angle distributions, there are several other types of ND which are important in different categories of computations. For example: fission yields (the probabilities for different fission fragments after fission of a particular nuclide), average neutron multiplicities (ν̄, “nu-bar”, the number of neutrons released in a fission reaction – a key to maintaining a chain reaction in a nuclear reactor), half-lives and branching ratios of radioactive nuclides (the latter describe the probabilities for different paths of decay), and binding energies (the amount of energy gained by a particular nuclide from being bound together, compared to if all nucleons (protons and neutrons) were free particles – these determine the so-called Q-values, the amounts of energy released in particular nuclear reactions). The list goes on. Although much of the discussion in this thesis could be generalized to other types of ND (more or less easily), the practical examples cover cross sections only.

[Figure 1.2. The (n,γ) and (n,el) cross sections for ⁵⁹Ni. Evaluated data, from ENDF/B-VII.1 [26]. Cross section [b] vs. neutron energy E [eV], both on logarithmic axes.]

1.3.2 Experimental data
Since the 1960’s, a substantial amount of experimental ND has been collected at several data centers [27]. A large fraction of these data is compiled and stored in the EXFOR (EXchange FORmat) database [27], maintained by the UN organ IAEA (International Atomic Energy Agency) – for neutron cross sections alone, there are more than 20 000 datasets in EXFOR, in total containing more than 3.6 million data points.

EXFOR contains so-called differential experimental data, i.e., microscopic quantities measured in relatively “clean” experiments – for example, cross sections for a particular nuclide at a particular energy⁵ – and practically all other types of data mentioned in Sec. 1.3.1. The differential data is what is directly interesting as input for computations.

⁵ There are, however, experiments reported to EXFOR for naturally occurring mixtures of nuclides, and also averaged over relatively wide energy spectra.


There are also experiments performed on macroscopic systems, often referred to as integral experiments, which can be used to validate computer codes and evaluated ND (see Sec. 1.3.3). Some of these experiments qualify to be classified as benchmarks, and the OECD sub-organization NEA (Nuclear Energy Agency) maintains, e.g., a “handbook” of criticality safety benchmarks [28], devoted to the neutron multiplication factor k_eff in a fission chain reaction.

In differential nuclear physics experiments, one is typically forced to measure something other than the quantity of interest, after which one tries to deduce the value of the quantity of interest. For example, a cross section measurement can be carried out by detecting the number of interactions C (“counts”) when a target with N nuclides is subject to a neutron flux φ during an exposure time T. “Knowing” N, φ and T as well as the detection efficiency ǫ, one can obtain the cross section as⁶

ς = C / (ǫNφT), (1.23)

disregarding background subtraction. In practice, the flux φ is most easily measured by performing the same type of measurement but with a reference material, and rearranging Eq. (1.23) into

φ′ = C′ / (ǫ′N′ς′T′), (1.24)

where the prime denotes that all quantities are related to this reference measurement. Assuming φ′ = φ, one gets

ς = (C ǫ′N′T′) / (C′ ǫNT) · ς′. (1.25)

Sometimes it is assumed that ǫ′ = ǫ, and the expression simplifies further. Often, the ratio ς/ς′ is reported rather than the deduced cross section.

Certain cross sections, which are considered to be particularly well known, are included among the carefully analyzed Neutron cross section standards [29], maintained by the IAEA. One of these cross sections is typically used as the reference cross section ς′ in Eq. (1.25).
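Eq. (1.25) can be transcribed directly; all counts, efficiencies, target thicknesses and the reference value below are made up for illustration:

```python
# A direct transcription of Eq. (1.25): the cross section deduced from counts,
# relative to a reference (standard) cross section. All numbers are made up.
def deduced_cross_section(C, eps, N, T, C_ref, eps_ref, N_ref, T_ref, sigma_ref):
    """Deduce the cross section, assuming the same flux in both measurements
    and disregarding background subtraction."""
    return (C * eps_ref * N_ref * T_ref) / (C_ref * eps * N * T) * sigma_ref

# Same detector efficiency and exposure time in both measurements, so only
# the counts, the target thicknesses and the reference value matter here.
sigma = deduced_cross_section(
    C=12000, eps=0.3, N=2.0e22, T=3600,
    C_ref=8000, eps_ref=0.3, N_ref=1.0e22, T_ref=3600,
    sigma_ref=4.0,  # reference cross section, in barns
)
print(sigma)  # (12000 * 1e22) / (8000 * 2e22) * 4.0 = 3.0 b
```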

    Uncertainties – random and systematic

Since most of the ND describes random processes, measurements will naturally contain a random uncertainty. For example, the cross section describes an interaction probability, and even if there is a value of C that corresponds to this exact probability, the observed C will be an observation from a certain probability distribution⁷. It is similar to estimating the probability of getting a sum of twelve when rolling two dice by repeating the dice roll a finite number of times – there will be an uncertainty in the result even though the probability has a certain value (1/36, if we refrain from splitting hairs). This natural, random uncertainty of C is often referred to as “counting statistics”.

⁶ This is in principle obtained by integrating Eq. (1.21), under the assumption that the target is thin enough that it is unlikely for a neutron to undergo several interactions, and thereby change energy.
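The counting-statistics fluctuation can be illustrated by simulating repeated counting experiments; all numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Counting statistics: even with a fixed interaction probability, the observed
# number of counts C fluctuates. All numbers are made up.
p = 1e-4                 # interaction probability per incoming neutron
n_neutrons = 10_000_000  # neutrons impinging per "experiment"
n_repeats = 2000         # number of repeated experiments

# Each experiment draws C from a binomial distribution; with small p this is
# well approximated by Poisson(n_neutrons * p).
C = rng.binomial(n_neutrons, p, size=n_repeats)

expected = n_neutrons * p                       # 1000 counts on average
rel_uncertainty = C.std(ddof=1) / C.mean()
# For Poisson-like counting, the relative uncertainty is about 1/sqrt(<C>).
print(rel_uncertainty, 1 / np.sqrt(expected))   # both about 0.032
```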

There may also be other uncertainties that can be considered random, e.g., an instability in the flux in Eq. (1.23) (or such that φ only approximately equals φ′, which modifies Eq. (1.25)).

Further, there may be several systematic uncertainties; in the cross section measurement, N and N′ are typical examples. Since the same target (with the same N) often is used for several energies, and even for several reactions, or by different groups of experimenters (examples of all these cases are found in Chapter 3), the fact that the error is correlated will be essential.

    A particularly important systematic uncertainty is that of the reference crosssection ς ′, since a few standard cross sections (see Sec. 1.3.2) are used for themajority of all cross section experiments (and the standard cross sections arealso correlated with each other [29]); this introduces correlations between theerrors of very many different experiments.

1.3.3 Evaluated nuclear data
Despite the impressive number of experimental data points mentioned in Sec. 1.3.2, the total number of data points in “all nuclear data” is huge; in theory, it is even infinite, since many quantities depend continuously on energy. However, even if one would be satisfied with covering a dense energy grid, the variety of different reactions and nuclides (not to mention different types of ND) results in many more data points than what is measured, so the experimental data is not enough for applications.

In applications, one therefore normally makes use of so-called evaluated nuclear data, which is a combination of experimental data (which is selected and analyzed) and nuclear model data. For energies above the resolved resonance region, nuclear model data can be obtained from nuclear reaction codes such as TALYS [30]. When handling a reaction between an incoming neutron and a nuclide, TALYS uses a set of parameters (the actual number depends on the nuclide, but it is on the order of 50), which are quite essential to this thesis, see Sec. 1.4. The parameter values that provide the best fit to experimental data are unique for each nuclide, but there are systematics that can be used to provide parameter values even for nuclides with little or no experimental information – the results are, however, quite uncertain.

⁷ By definition, it will follow a binomial distribution [16]; every passing neutron will either undergo the interaction or not. If the interaction probability for each passing neutron is small, the binomial distribution is well approximated by a Poisson distribution [16]. With a large enough expected value (〈C〉 ≳ 20 according to Ref. [16]), the Poisson distribution is well approximated by a normal distribution thanks to the central limit theorem.


As mentioned in Sec. 1.3.1, resonance models can be used in the resonance range (and below). Examples of such models are the Single- and Multilevel Breit-Wigner models and the Reich-Moore approximation, all of which are approximations of R-matrix theory [31]. The R-matrix theory is derived from quantum mechanics, but the resulting models have parameters which must be determined from experiments. As such, these models are not very useful when it comes to larger “gaps” in the experimental data. Systematics may however be used to provide some guess about the overall magnitude, but this, once again, implies large uncertainties.

There are numerous ND libraries (in different versions) which contain evaluated ND for large sets of nuclides, e.g., ENDF/B [26], JEFF [32] and TENDL (Talys Evaluated Nuclear Data Library, [9]). For much of their content, the libraries use the same evaluations, but in other cases there are differing opinions on which data should be recommended.

For the “most important” nuclides and types of reactions, nuclear data evaluators also provide estimates of the covariance matrices of the evaluated nuclear data, for use in the assessment of nuclear data uncertainties and their propagation. When using these covariances, it is most common to use linear uncertainty propagation, but there are examples of Monte Carlo uncertainty propagation starting from the covariance matrices [33]. A problem with the covariance matrices is that information may be lost by storing only the covariances, see Sec. 1.2.1.

The evaluated nuclear data is stored in the ENDF format [31], where different types of data and different interactions are numbered according to more or less strict conventions. These conventions are followed by processing codes such as NJOY [34], which can be necessary for making the ND usable in different types of applications.

1.4 Total Monte Carlo (TMC)
Total Monte Carlo (TMC) [8, 9] is a methodology for ND uncertainty propagation built up around the nuclear reaction model code TALYS [30], accompanied by a few other codes, e.g., TARES [35, 9], which completes the TALYS information with resonance parameters.

The methodology takes the Monte Carlo approach to uncertainty propagation (see Sec. 1.2.3), but its more unique feature is that the uncertainties are propagated all the way from the reaction model parameters, i.e., from the input to the TALYS code structure (in principle, other codes filling the same purpose can be used). As illustrated in Fig. 1.3, the methodology can be outlined as follows:

1. Randomly sample the nuclear reaction model parameters p (the distribution is discussed below), n times

[Figure 1.3. A schematic view of TMC: model parameters p(1), p(2), ... are fed to TALYS (and auxiliary codes), producing n nuclear data files d(1), d(2), ..., d(n), which are used in a computation or simulation (e.g., in MCNP).]

2. Use the resulting p(1), p(2), ..., p(n) in TALYS runs, which give n sets of ND, referred to as random ND files (or simply random files) in the following

    3. Use the n random ND files from the previous step in a computation orsimulation of a nuclear system

    4. Study the distribution of the output quantities to draw conclusions on thepropagated ND uncertainty.

    The computation in step 3 can be done using any type of code (or system ofcodes) making use of ND, as long as the user can access the ND input (insome industrial codes, this can be a practical problem). One may thus use aMonte Carlo transport code such as MCNP [36], and this, “using Monte Carlouncertainty propagation through Monte Carlo codes”, gave rise to the nameTotal Monte Carlo. However, the computation can very well be deterministic,an example of which is found in Ref. [14] mentioned in Sec. 1.1 (where otherpublications using TMC also are briefly presented).
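The four steps above can be sketched schematically as follows, with hypothetical stand-ins for the codes involved: a toy `model` function plays the role of TALYS (parameters to nuclear data), and a trivial `simulate` function the role of the application computation (nuclear data to output quantity). Neither is from the thesis:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical stand-ins for the TMC chain; all functions and numbers are
# made up for illustration.
def model(p, E):
    # "Nuclear data": a toy cross section curve from two parameters.
    return p[0] / np.sqrt(E) + p[1]

def simulate(nd):
    # "Application output": some functional of the nuclear data.
    return nd.mean()

E = np.logspace(-2, 2, 50)        # energy grid
p_mean = np.array([10.0, 0.5])    # nominal model parameters
p_std = np.array([1.0, 0.05])     # assumed parameter uncertainties

n = 5000
outputs = np.empty(n)
for k in range(n):
    p_k = rng.normal(p_mean, p_std)  # step 1: sample parameters
    nd_k = model(p_k, E)             # step 2: produce a "random file"
    outputs[k] = simulate(nd_k)      # step 3: run the computation

# Step 4: the spread of the outputs is the propagated ND uncertainty.
print(outputs.mean(), outputs.std(ddof=1))
```

In real TMC, step 2 is a TALYS (plus TARES, etc.) run producing an ENDF-formatted file, and step 3 can be, e.g., an MCNP criticality calculation.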

Compared to linear uncertainty propagation, TMC naturally inherits the general advantages of Monte Carlo uncertainty propagation, e.g., no linearity assumption and the possibility to capture more aspects of the output and input distributions than the expected value and covariance, see Sec. 1.2.3. On top of these advantages, each random file in TMC is physically consistent, giving rise to physically consistent uncertainty estimates with correlations motivated by the physical models. Also, the implementation is simple, with a relatively low risk for errors; between different types of computation and simulation codes, a substantial amount of processing of the data is necessary (mentioned in Sec. 1.3.3), and if linear uncertainty propagation is used, the covariance and sensitivity matrices must be processed, too. Finally, the methodology is suitable for large-scale automation, and this is the reason that TENDL (Talys Evaluated Nuclear Data Library [9]) is the most complete existing nuclear data library when it comes to uncertainties – covariance files are available for all the ∼ 2600 available nuclides at the TENDL homepage [37], and in many cases also random files suitable for TMC. A further part of the TMC philosophy is that the method should be objective, reproducible and transparent, avoiding as much as possible of the personal judgment of nuclear data evaluators.


TMC, of course, also inherits the main disadvantage of Monte Carlo uncertainty propagation, namely the number of needed computational code runs. Another problem is that the model-driven approach (as opposed to one driven by experimental data) introduces very strong correlations, e.g., for a cross section at different energies [38]. If there is no experimental data present, this should not be considered a problem, since the strong correlations reflect the way the data is produced. If a lot of experimental data is present, it is probably best to make more use of this data than the models allow for, and if the data is produced in this way, the random files should be produced similarly, reducing the correlations. Rochman and Koning have recently presented an idea on how to make progress in this direction [38], and further development along these lines is also one of the recommendations of Paper II.

Disregarding the issue of the model-induced correlations, it is still necessary to choose the distribution for p, and this is not trivial. The model parameters are determined by experimental data, and, therefore, statistical inference using experimental data should be used to estimate the parameter distribution. For the publicly available random files at the TENDL homepage, the parameter distributions have been chosen with the experimental data in mind, but without any rigorous statistical foundation and with no systematic treatment of experimental correlations. TMC has therefore been criticized, e.g., in Refs. [39, 40], and Papers I, II and III are to a large degree devoted to the distribution of the model parameters p.


2. Using weights from experimental data in TMC

In Sec. 1.4, it was noted that there is currently a lack of statistical rigor in the determination of the distribution used for sampling the model parameters p in TMC. Papers I, II and III consider a way of implementing Bayes’ theorem to implicitly adjust the distribution of p using experimental data, namely, by giving each random ND file a weight proportional to the likelihood of the random file given the experimental data. The basic idea of the likelihood weights is presented in Sec. 2.1, along with two ways to compute them: one standard way, and one non-standard way based on the sampling of systematic errors. It is the sampling of systematic errors which is studied in Paper III.

Crucial to an adequate computation of the likelihood is the experimental covariance matrix, and large parts of Papers I and II consider an automatic interpretation of the experimental database EXFOR, and the generation of experimental covariance matrices based on this interpretation. This is summarized in Sec. 2.2.

An important limitation of this work is that previously generated random ND files have been used to describe the prior distribution (in Bayes’ theorem), which is not really adequate, since their distribution does not ignore the experimental data. A limitation of the whole methodology is that it only considers parameter uncertainty; it is assumed that there exists a choice of p such that the model reproduces reality perfectly (thus, we ignore so-called model defects or model inadequacies), which is rather naive. The model defects should be addressed in future work, as well as the limiting prior distribution. It may be worth noting that in Chapter 3, the random files are produced using experimental information directly (only for low energies), so their distribution has some justification without involving any likelihood weights.

2.1 Likelihood weights
2.1.1 The weights
Consider a quantity q which is computed using the nuclear data obtained using the parameters p, i.e., q = q(p). It is shown in Paper II that a consistent implementation of Bayes’ theorem is to give the k’th random file a weight

w_k = L(p(k); x) / ∑_{κ=1}^{n} L(p(κ); x) (2.1)


for k ∈ {1, 2, ..., n}, where L(p(k); x) is the likelihood function of the parameters p(k) and the experimental data vector x = (x₁, x₂, ..., x_m)ᵀ, which is considered to be an observation of the random vector X. By a consistent implementation of Bayes’ theorem, we mean, for example, that the weighted sample variance of q, i.e.,

σ²_observed(q) = ∑_{k=1}^{n} w_k q²(p(k)) − ( ∑_{k=1}^{n} w_k q(p(k)) )² (2.2)

is a consistent estimator of the underlying variance of q. Basically, this means that σ²_observed(q) will approach the variance of q(p), given the experimental data x, as n grows large.
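As a toy illustration of Eqs. (2.1) and (2.2), the following sketch weights a prior sample against a single hypothetical experimental point; since everything is normal here, the weighted mean and variance can be checked against the analytic normal-normal update. All numbers are made up:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy version of Eqs. (2.1)-(2.2): a prior sample of a quantity q, weighted by
# the likelihood of one "experimental" point x with uncertainty sigma_x. The
# model is assumed to predict q directly here, and all numbers are made up.
n = 20000
q = rng.normal(2.0, 0.5, n)   # prior sample of q(p(k))
tau = q                       # model predictions compared with the experiment
x, sigma_x = 2.3, 0.2         # experimental value and uncertainty

log_L = -0.5 * ((x - tau) / sigma_x) ** 2  # one-point case of Eqs. (2.4), (2.6)
w = np.exp(log_L - log_L.max())
w /= w.sum()                               # normalized weights, Eq. (2.1)

q_mean = np.sum(w * q)
q_var = np.sum(w * q**2) - q_mean**2       # weighted sample variance, Eq. (2.2)

# Check against the analytic normal-normal (conjugate) posterior update:
post_var = 1 / (1 / 0.5**2 + 1 / 0.2**2)
post_mean = post_var * (2.0 / 0.5**2 + 2.3 / 0.2**2)
print(q_mean, post_mean, np.sqrt(q_var), np.sqrt(post_var))
```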

The likelihood function is the PDF of the experiments X evaluated at the observed experimental results x, given that the particular parameter set p is true, i.e., that p gives a perfect representation of reality. It addresses the question “How likely are the observed experimental results, assuming p is true?”. Bayes’ theorem then helps us turn the question around, such that we obtain information on the distribution of p.

To be able to compute the likelihood, we need to assume a distribution for X. Two different ways are briefly described in Secs. 2.1.2 and 2.1.3, respectively. In both cases, it will be essential to compare x to the corresponding values in the k’th random file, which we denote

τ(p(k)) = τ(k) = (τ₁(k), τ₂(k), ..., τ_m(k))ᵀ. (2.3)

To clarify, τ_i(k) is the value in the k’th random file (produced using the k’th parameter set p(k)) which describes the same type of ND as x_i, at the same energy and for the same nuclide, and so on.

2.1.2 Conventional computation of the likelihood
If one assumes that the experimental vector X follows a multivariate normal distribution (possible motivations for this are briefly discussed in Sec. 1.2.3) with experimental covariance matrix C_E, it is easily found that

L(p(k); x) ∝ e^(−χ²_k/2), (2.4)

    where ∝ stands for “proportional to” and

χ²_k = (x − τ(k))ᵀ C_E⁻¹ (x − τ(k)) (2.5)

is “the generalized χ²”, which is a measure of the deviation between x and τ(k), taking the uncertainties of the experimental points into account, and also their correlations. If all experimental points were uncorrelated, C_E would become diagonal, and the matrix multiplication in Eq. (2.5) would collapse into

χ²_k = ∑_{i=1}^{m} (x_i − τ_i(k))² / σ_i², (2.6)

which is easier to interpret.

Eqs. (2.4) and (2.5) are used to compute the likelihood function in Papers I and II.
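Eqs. (2.5) and (2.6) can be transcribed directly; the experimental values, model values and covariance matrix below are made up for illustration:

```python
import numpy as np

# Direct transcription of Eq. (2.5) and its uncorrelated special case,
# Eq. (2.6). All values below are made up.
x = np.array([1.05, 0.98, 1.10])      # experimental points
tau = np.array([1.00, 1.00, 1.00])    # corresponding model values tau(k)

sigma = np.array([0.05, 0.04, 0.06])  # total uncertainties
corr = np.array([[1.0, 0.3, 0.3],
                 [0.3, 1.0, 0.3],
                 [0.3, 0.3, 1.0]])    # assumed correlations (e.g., common reference)
C_E = corr * np.outer(sigma, sigma)   # experimental covariance matrix

r = x - tau
chi2 = r @ np.linalg.solve(C_E, r)    # Eq. (2.5); solve() avoids forming C_E^-1

# Ignoring the correlations, Eq. (2.5) collapses to Eq. (2.6):
chi2_diag = np.sum(r**2 / sigma**2)
print(chi2, chi2_diag)
```

Using `np.linalg.solve` instead of explicitly inverting C_E is numerically preferable, although for very large experimental sets the cost of this linear solve is the bottleneck discussed in Sec. 2.1.4.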

2.1.3 Sampling of systematic errors
In Paper III, it is not assumed that X follows a multivariate normal distribution. Instead, it is assumed that

X_i = Y_i + ∑_{ℓ=1}^{ν} σ_{iℓ} ε_ℓ, (2.7)

where Y_i is a random variable “containing” the random uncertainty, and the ε_ℓ are random variables describing the ℓ’th systematic uncertainty contribution (in all of X), defined with 〈ε_ℓ〉 = 0 and V(ε_ℓ) = 1, such that σ_{iℓ}ε_ℓ is the error in the i’th experimental point due to the ℓ’th systematic contribution. All the random variables on the right-hand side of Eq. (2.7) are mutually independent.

Then, the systematic errors are sampled S times, i.e., samples ε_ℓ(s) are drawn from each ε_ℓ for s ∈ {1, 2, ..., S}. For each s, the sampled systematic errors are subtracted from the experimental values, after which the likelihood function given the sampled systematic errors, L(p(k); x | ε = ε(s)), is computed using the fact that the “new experimental points” are independent given the sampled systematic errors, i.e.,

L(p(k); x | ε = ε(s)) ∝ e^(−ξ²_{ks}/2), (2.8)

    where

ξ²_{ks} = ∑_{i=1}^{m} ( (x_i − ∑_{ℓ=1}^{ν} σ_{iℓ} ε_ℓ(s)) − τ_i(k) )² / σ_i², (2.9)

cf. Eq. (2.6). Finally, the likelihood function is estimated as the mean value of the S resulting likelihood functions L(p(k); x | ε = ε(s)).

An advantage of using this methodology to estimate the likelihood is that one can use arbitrary distributions for the systematic errors, if there is reason for that. Also, the method can be easier to understand; it mimics how the systematic uncertainties really affect the agreement between model and experiments, and it avoids the matrix inversion, which most people have a hard time understanding intuitively.
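A toy numerical check of the two approaches (with made-up data and a single systematic component, ν = 1): since the weights in Eq. (2.1) are normalized, only likelihood ratios matter, and the ratio estimated by sampling systematic errors should converge to the ratio obtained from Eqs. (2.4)-(2.5):

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy data: m experimental points sharing one fully correlated systematic
# error component (e.g., a common normalization). All numbers are made up.
m = 5
x = np.array([1.06, 1.03, 1.08, 1.02, 1.05])   # "experimental" points
sig_rand = np.full(m, 0.03)                    # random uncertainties
sig_sys = np.full(m, 0.04)                     # systematic component sigma_il

# --- Sampling of systematic errors, Eqs. (2.7)-(2.9) ---
S = 200_000
eps = rng.normal(0.0, 1.0, S)                  # samples eps(s)

def sampled_likelihood(tau):
    # Eq. (2.9): subtract each sampled systematic error, treat the points as
    # independent, Eq. (2.8), and average over the S samples.
    xi2 = (((x - tau) - np.outer(eps, sig_sys)) ** 2 / sig_rand**2).sum(axis=1)
    return np.exp(-0.5 * xi2).mean()

# --- Conventional method, Eqs. (2.4)-(2.5), C_E from the same error model ---
C_E = np.diag(sig_rand**2) + np.outer(sig_sys, sig_sys)

def chi2(tau):
    r = x - tau
    return r @ np.linalg.solve(C_E, r)

tau_a = np.full(m, 1.00)                       # two candidate model curves
tau_b = np.full(m, 1.05)

ratio_sampled = sampled_likelihood(tau_a) / sampled_likelihood(tau_b)
ratio_matrix = np.exp(-0.5 * (chi2(tau_a) - chi2(tau_b)))
print(ratio_sampled, ratio_matrix)             # should agree for large S
```

With only m = 5 points the sampled estimate converges quickly; as discussed in Sec. 2.1.4, the convergence deteriorates rapidly as m grows.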


2.1.4 Comparison of the two methods to compute the likelihood
Assume that
• Eq. (2.7) is used as a starting point for the experimental covariance matrix C_E used in the computation of the likelihood function with Eq. (2.4), and
• the ε_ℓ are normally distributed in the sampling of systematic errors.
Then, it is shown in Paper III that the likelihood function obtained using sampling of systematic errors converges to the likelihood function computed using Eq. (2.4).

The convergence is slow, however. In Paper III, the likelihood weights are estimated with sampling of systematic errors for several random files for ²³⁹Pu, ²³⁵U, ²³⁸U and ⁵⁶Fe, using the same interpretation of the experimental cross section data in EXFOR as used in Paper II (see Sec. 2.2). It turns out that if the full experimental sets are used, acceptable convergence is not reached in any of the studied cases for S = 10⁶, which corresponds to a much greater computational time than the matrix inversion. For smaller subsets of the experimental data, convergence could be reached, which for example can be seen in Fig. 5 (left) of Paper III, where only 10 experimental points are included (higher-energy experiments for ²³⁹Pu). In Fig. 5 (right) of Paper III, 70 experimental points are included, and even if convergence is indicated at S = 10⁶, there are deviations of up to about 50%. For the full experimental set for ²³⁹Pu in this energy range (after outlier rejection, see Sec. 2.2), with 655 experimental points, convergence was not at all close. The paper also contains an investigation of how the convergence rate depends on the number of experimental points, which indicates that the computational time necessary to reach convergence grows faster than that of matrix inversion, too – even though the computational time for general matrix inversion also grows rapidly and may become a problem if correlations between very many nuclides or types of reactions are included.

Because of this slow convergence, sampling of systematic errors cannot compete with the conventional method (using matrix inversion) when the latter method is applicable. It is, however, more general, and possibly, there can be cases when it is important to assume other distributions than the normal distribution for the systematic errors. The study may also have a pedagogical value, in that it may help in the understanding of how systematic uncertainties impact the likelihood, and it clearly shows that univariate normally distributed systematic errors will lead to a multivariate normal distribution for the whole experimental set. Further, the possibility to combine matrix inversion with sampling of a few systematic errors (e.g., from standard cross sections, which induce correlations that range over very large numbers of experiments) could be investigated in the future. Finally, the sampling of systematic errors could be used for other purposes than computing likelihood weights. For example, it


could be used in generating random experimental data for use in the work towards “more data driven TMC” along the lines of Ref. [38], see Sec. 1.4.

2.2 Automatic interpretation of experimental data

Whatever approach is taken for the computation of the likelihood, the distribution for the random vector X describing the experimental data must be estimated based on the experimental information, which consists of the observed values and (hopefully) uncertainty information. In the database considered in Papers I and II, i.e., EXFOR, uncertainties can be given in one or more components, but for many experiments only a total uncertainty (or even none at all) is quoted. The original format is not very easy to interpret automatically; there are several different ways to compile the data, using different conventions for missing values etc., and different units. Further, it is often unclear whether an uncertainty is random, systematic, or a combination of the two, and many (most?) experiments are reported with incomplete uncertainty information; in many cases, only the uncertainty from counting statistics (see Sec. 1.3.2) is included.

For efficiency, and to follow the TMC philosophy of transparency and reproducibility, an attempt is nevertheless made to interpret the EXFOR database automatically in Papers I and II, using a set of simple rules. Among the more important rules, experimental points with only a total uncertainty quoted are penalized by assuming that the experimental point has both a random uncertainty and a systematic uncertainty of the quoted magnitude. Also, a 1% systematic uncertainty which is common for all experiments measuring the same quantity is added. Otherwise, all uncertainties except one random uncertainty are considered systematic uncertainties which are common for all data points within the same experiment.
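The block structure implied by these rules can be sketched as follows (a generic illustration, not the rule set of Papers I and II, which is more involved): uncorrelated random components on the diagonal, systematic components fully correlated within each experiment, and a common systematic component shared by all points measuring the same quantity.

```python
import numpy as np

def build_covariance(values, rand_unc, sys_unc, experiment_id, common_sys=0.01):
    """Assemble an experimental covariance matrix from relative uncertainty
    components: uncorrelated random parts, systematic parts fully correlated
    within each experiment, and a common systematic (e.g. 1%) shared by all
    points measuring the same quantity."""
    values = np.asarray(values, float)
    abs_rand = values * np.asarray(rand_unc, float)  # absolute random unc.
    abs_sys = values * np.asarray(sys_unc, float)    # absolute systematic unc.
    C = np.diag(abs_rand**2)                         # random part (diagonal)
    ids = np.asarray(experiment_id)
    same_exp = ids[:, None] == ids[None, :]          # block mask per experiment
    C = C + same_exp * np.outer(abs_sys, abs_sys)    # within-experiment blocks
    C = C + np.outer(values, values) * common_sys**2 # common 1% systematic
    return C

# Two experiments with two and one data points, respectively:
C = build_covariance(values=[10.0, 12.0, 11.0],
                     rand_unc=[0.02, 0.02, 0.05],
                     sys_unc=[0.03, 0.03, 0.04],
                     experiment_id=[1, 1, 2])
print(np.round(C, 3))
```

Points from different experiments are then only correlated through the common systematic component, while points within one experiment share its full systematic block.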

A very influential part of the automatic interpretation of EXFOR is an attempt to automatically identify and reject outliers. In the work of Papers I and II, this is implemented in a practical but unsatisfactory way, based on the deviation between the experiments and the distribution of the random files. This is unsatisfactory since experiments may be rejected because of an erroneous distribution of the model data, and not the other way around. If an automatic treatment of outliers is possible, it should in the future be based on the deviation between different experiments.

The rules used for the EXFOR interpretation in Papers I and II differ somewhat, because of discussions among the authors initiated by feedback on Paper I from the experimentalist community. The difference has a substantial effect on the comparable results, as seen in Sec. 2.3. Otherwise, Paper II is to a large degree an extension of the work in Paper I, containing more results, a more detailed description of the methodology and a more detailed discussion of the results, as well as a study where parameters in the automatic EXFOR interpretation are varied to study how sensitive the methodology is to the choice of these parameters.

2.3 Comparing weighted and unweighted distributions

In Papers I and II, the experimental covariance matrix implied by Eq. (2.7) is computed using the random and systematic uncertainty components obtained according to Sec. 2.2, and this covariance matrix is used in the computation of the likelihood weights according to Eqs. (2.1), (2.4) and (2.5). These weights are then used to weigh the distributions for quantities computed for several applications using random files publicly available at the TENDL homepage [37], giving weighted estimates for the ND uncertainty propagated to these quantities. The random files of three nuclides, 235U, 239Pu, and 56Fe, are used in both papers, while Paper II also includes 238U.

In Paper I, the weights shifted the central values of the distributions in most cases, while the ND uncertainty was not reduced much in any case. An example is shown in Fig. 2.1, concerning the neutron multiplication factor k∞ (disregarding neutron leakage) in a UO2 pin cell (at end of life) in a typical PWR (pressurized water reactor), varying 235U data and using weights from experiments with E < 5 eV. The weighted and unweighted distributions for k∞ are compared, and one can see how the weights from experimental data generally suppress files giving low k∞ and amplify files giving high k∞, resulting in a shift upwards of the distribution. One can also note the possibility that the prior (unweighted) distribution limits the weighted distribution too much: what if there had been random files giving even greater k∞?
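The reweighting of a quantity such as k∞ can be sketched as follows (all numbers are invented stand-ins; the real k∞ values come from runs with the TENDL random files, and the real weights from the EXFOR-based likelihood computation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-ins: k-infinity from N random files, and toy
# chi-square values per file that favor slightly higher k-infinity.
k_inf = rng.normal(0.893, 0.002, 1000)
chi2 = ((k_inf - 0.895) / 0.002)**2            # toy chi-square per random file
weights = np.exp(-0.5 * (chi2 - chi2.min()))   # likelihood weights (Eq. (2.5) style)
weights /= weights.sum()                       # normalize the weights

# Unweighted (prior) versus weighted (posterior) estimates:
mean_unw, std_unw = k_inf.mean(), k_inf.std()
mean_w = np.sum(weights * k_inf)
std_w = np.sqrt(np.sum(weights * (k_inf - mean_w)**2))

print(mean_unw, std_unw)   # prior estimate
print(mean_w, std_w)       # weighted estimate, shifted upwards here
```

Note that the weighted mean can never move outside the range spanned by the sampled files, which is exactly the limitation of the prior distribution pointed out above.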

In Paper II, the weighted uncertainties can be divided into two groups: For some applications (where particular ND is varied), the propagated ND uncertainty estimates are reduced to practically zero by the weights, and the central values are shifted. For other applications, the ND uncertainty is practically unaffected by the weights, just as the central values. An example of the former is shown in Fig. 2.2, which illustrates the distribution of the same content as Fig. 2.1 from Paper I, but this time, the distribution is very peaked. The large difference in distribution is due to the different rules used for the EXFOR interpretation in the two papers.

The very peaked distributions can have several quite different reasons, e.g.,
1. The parameters are sampled in a region where the likelihood is low, giving large absolute differences between χ2-values of different random files, which leads to large relative differences in the likelihood due to the exponential behavior of the likelihood function
2. The parameters are sampled from a “too” large region, resulting in a bad resolution of the region with a high likelihood (this case is not very problematic, since the small uncertainty estimate would be close to the truth)
3. Erroneous interpretation of the experimental data


  • 0.885 0.89 0.895 0.90

    100

    200

    300

    density

    Nor

    mal

    ized

    freq

    uenc

    y

    k∞

    UnweightedWeightedFit, unweightedFit, weighted

    Figure 2.1. The distribution of k∞ for a PWR pin cell at end of life varying 235Uusing weights from experiments with E < 5 eV, using the EXFOR interpretation ofPaper I.

[Figure: normalized frequency versus k∞, comparing the unweighted and weighted distributions with fitted curves; the weighted distribution is very peaked.]

Figure 2.2. The distribution of k∞ for a PWR pin cell at end of life varying 235U using weights from experiments with E < 5 eV, using the EXFOR interpretation of Paper II.


4. Model defects.
The first two of these could be remedied by using feedback from the experimental data already in the sampling of the model parameters. The third item leads us to conclude that the rules for interpreting EXFOR should be further discussed and refined, and perhaps there is a risk that full automation is very difficult. Finally, the model defects should be considered in future work. Partially, this can be addressed by using a more data-driven methodology in regions where there is a lot of data, developing the work of Ref. [38].

The unaffected distributions may also be due to model defects or limitations of the EXFOR interpretation, but they can also be a result of too “narrow” prior distributions; remember that the distributions used here are not intended to be prior distributions, see the introductory part of this chapter. Note that if the prior distribution already were a good representation of the experimental data (as interpreted here), the distribution should not remain unchanged, but rather be narrowed down (inappropriately, since the same experiments would be counted twice). The problem of the possibly too narrow prior should be tackled by using feedback from the experiments already in the sampling of the model parameters, just as two of the possible reasons for the very peaked distributions mentioned above. Finally, the studied integral quantities may be most sensitive to resonances. A large part of the resonance region is kept fixed in this work, but the resonances may nevertheless be so many that it is inefficient to constrain them by weighting full random files. Therefore, it is suggested in Paper II that the resonances should not be treated only as other parts of the data.

Because the distributions separated into these two groups, being either very affected by the weights or practically not at all, the study of how sensitive the weighted propagated uncertainties are to different choices of the EXFOR parameters did not give very clear results on the original question, i.e., how sensitive the method is to these choices. It does work as a validation of the implementation; the propagated uncertainties behave as one would expect them to when certain uncertainties are increased, etc. An interesting observation from the sensitivity study, which is discussed in some detail in Paper II, is that the effect of increasing systematic uncertainties becomes saturated, i.e., the weights do not generally approach a uniform distribution (as they do if all random uncertainties are increased, cf. Eq. (2.6)) when systematic uncertainties increase. This behavior is explained by the fact that increasing the systematic uncertainties does not allow for deficiencies in the shape of the energy dependence of the experiments compared to the model curves; it only allows for large offsets between experiments and models.

Despite the difficulty of drawing conclusions from the sensitivity study, it is clear from the difference between Figs. 2.1 and 2.2 that the interpretation of the experimental data does matter, indeed. A better communication of uncertainties, and a database with transparently evaluated experiments, would benefit this type of analysis.


2.4 Conclusions for Chapter 2

Papers I and II describe how likelihood weights can be used to implement Bayes’ theorem such that propagated ND uncertainties are adjusted with experimental data. The work is implemented in practice, using an automatic interpretation of the experimental database EXFOR. The EXFOR interpretation differs between the two papers, and the propagated uncertainties can be very affected by this, showing that the interpretation of the experimental data is important and suggesting that experiments should be reported with careful uncertainty analyses, where as many uncertainty components as possible should be kept separate.

In the practical cases of Paper II, the effect of the weights is either very small or a strongly reduced uncertainty. The reasons for these two cases are discussed, and it is concluded that the methodology should be modified such that feedback from experimental data is used already in the sampling of the model parameters, that model defects should be taken into account, possibly along the lines of Ref. [38], and that the resonance range needs some special treatment. Finally, the treatment of outliers should be modified; it should be based on the deviation between different sets of experimental data and not on the difference between experiments and model data.

Paper III presents an alternative path to compute the likelihood weights which does not assume that experimental data follow a normal distribution; instead, systematic errors are sampled from an arbitrary distribution, mimicking how systematic uncertainties affect experimental correlations. If normal distributions are assumed for the systematic errors, it is shown that the methodology is equivalent to the conventional computation, if it converges. It is seen that the convergence is slow, such that the conventional methodology is preferred if applicable. However, the method is more flexible, easier to understand (and can, therefore, be used as a pedagogical tool), and other related uses of the sampling of systematic errors can be considered, such as in the more direct inclusion of experimental data inspired by Ref. [38]. Finally, the possibility to combine the sampling methodology of Paper III and the conventional computation used in Papers I and II can be investigated as a means to handle the correlations between very large numbers of experiments which arise from using relative measurements.


3. Ongoing work: evaluating the 59Ni cross sections

Largely because of aging reactor fleets, interest is growing in studying material damage in nuclear reactors. For example, the Swedish Center for Nuclear Technology, funded by the Swedish nuclear industry, has initiated the project MÅBiL [41], for academic research related to material, aging, and fuel; the work in this chapter is partially financed within the MÅBiL project. The interest is not restricted to Sweden; an example is the IAEA coordinated research project (CRP) on “Primary Radiation Damage Cross Sections” [42, 43]. A part of this work was presented at the second research coordination meeting of this particular CRP, in June 2015. The work in this chapter has also been accepted for an oral presentation at the Fourth International Workshop on Nuclear Data Evaluation for Reactor Applications (WONDER 2015).

It is then natural to ask what it is that makes 59Ni relevant in the context of material damage. In many stainless steels, nickel makes up as much as 10% of the content (see for example SS 304 in Ref. [44]), but 59Ni does not occur in nature, as it is radioactive with a half-life of 76000 years [45]. However, 58Ni constitutes 68% of natural nickel, and it has a high thermal (n,γ) cross section, compared to the other major constituents of stainless steel, i.e., iron and chromium [46]. As a result, a large fraction of the neutrons in stainless steel components in a thermal nuclear reactor will be captured in 58Ni nuclides. In this way, 59Ni is produced, and the ratio of 59Ni to 58Ni can become more than 4% before it starts to decrease [47].

In turn, 59Ni has a rare property which makes it important, namely that it has extraordinarily high (n,α) and (n,p) cross sections, i.e., cross sections for neutron capture followed by the emission of an α-particle or a proton, respectively, meaning that helium and hydrogen gas is produced in the steel, leading to, e.g., embrittlement of the material. The reactions also have high Q-values, such that significant amounts of energy are released in the material, also leading to material damage. Thus, the two-step reactions 58Ni(n,γ)59Ni(n,α)56Fe and 58Ni(n,γ)59Ni(n,p)59Co make 59Ni important in thermal reactors.

From studying the existing evaluated data on 59Ni, it turns out that, e.g., ENDF/B-VII.1 and JEFF 3.1 contain the same evaluation [48], from 1990, and that this evaluation does not contain any uncertainty information. Actually, not even TENDL-2014 [9] contains uncertainty estimates for the particularly interesting (n,α) and (n,p) reactions at thermal and resonance energies. The goal of this work is to provide random ND files which describe the current knowledge of the 59Ni cross section data, including the uncertainties.


Since the identified importance of 59Ni is for thermal neutrons, the work focuses on the thermal cross sections; Sec. 3.1 consists of a careful evaluation of experiments measuring the thermal cross section (“at” 0.0253 eV). In Sec. 3.2, it is described how some resonance parameters are adjusted such that they reproduce the distribution of these thermal cross sections, how other resonance parameters are treated, and how the information is completed to construct full ENDF files. Finally, this data is tested in an MCNP model of stainless steel in the spectrum of a thermal reactor in Sec. 3.3.

3.1 Evaluation of experimental thermal cross sections

The measurements of different thermal cross sections that are used in this study are summarized in Secs. 3.1.2 to 3.1.8 below. In analyzing their uncertainties, an attempt is made to follow a working process which is as lucid and objective as possible, by treating the different experiments in a way which is as general as possible, and this general treatment is described in Sec. 3.1.1.

In Sec. 3.1.9 it is described how the information from the experiments is merged together, by using a development of TMC, in which uncertain parameters in experimental setups are sampled, analogously to how the nuclear model parameters normally are sampled in TMC.

3.1.1 General treatment

As mentioned in Sec. 1.3.2, a typical cross section measurement is carried out by letting a neutron beam with flux φ hit a target containing N nuclei of interest during time T and detecting the number of counts C (of a certain type) in a detector with efficiency ε, and the cross section ς is obtained from

    ς = C / (ε N φ T).   (3.1)

It is assumed that the typically unavoidable background is already subtracted from C.

For the flux, two strategies are used in the experiments considered here, both relating to a monitor cross section ς′, which is assumed to be known. In some cases, the flux is explicitly estimated, in practice by rearranging Eq. (3.1) to

    φ′ = C′ / (ε′ N′ ς′ T′),   (3.2)

where the prime (′) denotes that all quantities now relate to the monitor cross section. Multiplying Eq. (3.1) with φ′/φ′ and inserting this expression for φ′ (once), one gets

    ς = (C N′ T′) / (C′ N T) · (ε′/ε) · (φ′/φ) · ς′.   (3.3)


Table 3.1. Default relative uncertainties for constituents of Eq. (3.1) and Eq. (3.3), as well as the background subtraction, used if other information is not found. The same values are used for the corresponding monitor values.

                        Background   ε′/ε   ε   N   n   d   φ′/φ
    Rel. std. dev. (%)      1          2    5   5   3   4    2

One normally expects φ′/φ ≈ 1, but there can be an instability in the flux such that φ′/φ has an uncertainty. Eqs. (3.1) and (3.2) are the same as Eqs. (1.23) and (1.24), and Eq. (3.3) is similar to Eq. (1.25), found in Sec. 1.3.2, but they are all reproduced here for convenience.

The second strategy is to never explicitly compute the flux, but to instead perform a ratio measurement, i.e., measuring the ratio ς/ς′, in practice also using Eq. (3.3). In both cases, the implicit or explicit flux estimate has a systematic uncertainty due to the uncertainties of all the components in Eq. (3.2). But just as for the background, the flux can have a random uncertainty too, resulting from a possible instability of the flux and the difficulty of reproducing the exact same conditions.
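As a numerical sketch of Eq. (3.3) and of combining independent relative uncertainty components in quadrature (all input values are invented for illustration):

```python
import math

def ratio_cross_section(C, Cp, N, Np, T, Tp, eff_ratio, flux_ratio, sigma_monitor):
    """Cross section from a ratio measurement, Eq. (3.3):
    sigma = (C N' T')/(C' N T) * (eps'/eps) * (phi'/phi) * sigma_monitor."""
    return (C * Np * Tp) / (Cp * N * T) * eff_ratio * flux_ratio * sigma_monitor

def relative_uncertainty(*rel_components):
    """Combine independent relative uncertainties in quadrature."""
    return math.sqrt(sum(r * r for r in rel_components))

# Invented example numbers (not from any experiment in this chapter):
sigma = ratio_cross_section(C=4100, Cp=3800, N=1.0e20, Np=1.2e20,
                            T=3600, Tp=3600, eff_ratio=0.98,
                            flux_ratio=1.0, sigma_monitor=98.7)

# Default relative uncertainties in the spirit of Table 3.1:
rel = relative_uncertainty(0.01,   # background subtraction
                           0.02,   # eps'/eps
                           0.02)   # phi'/phi
print(sigma, rel)
```

The quadrature sum is only valid for independent components; correlated contributions, such as the monitor cross section shared between experiments, are instead handled through the covariance treatment described above.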

In the analysis of the experimental uncertainties, we try to deduce from the articles (or EXFOR entries if the articles are not found) which quantities in Eq. (3.3) are accounted for in the original uncertainty analysis. Uncertainties due to contributions which are not accounted for are then added according to the values quoted in Table 3.1, which are based on the uncertainty estimates found in the (other) considered publications. On top of the explicit constituents of Eqs. (3.1) and (3.3), an uncertainty due to the background subtraction is also included in Table 3.1, while the exposure times T and T′ are considered to be exactly known. The uncertainty in N is considered to be dominated by the thickness d and the concentration of nuclides n.

    Typically, the procedure will increase the uncertainty by a factor

    η = ∆new / ∆old,   (3.4)

where ∆old and ∆new are the original uncertainty estimate and the one obtained in this work, respectively. If an uncertainty is quoted for the flux, this uncertainty is scaled by the factor η (computed without the flux uncertainty), assuming that the same underestimation of uncertainties applies to the determination of the flux.
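A small sketch of this inflation procedure, with invented components (the quadrature combination is assumed; the actual per-experiment bookkeeping is more detailed):

```python
import math

def inflation_factor(old_components, new_components):
    """eta = Delta_new / Delta_old, Eq. (3.4), with each Delta taken as the
    quadrature sum of its relative uncertainty components."""
    delta_old = math.sqrt(sum(c * c for c in old_components))
    delta_new = math.sqrt(sum(c * c for c in new_components))
    return delta_new / delta_old

# Invented case: originally only counting statistics (2%) was reported;
# default components for efficiency (5%) and thickness (4%) are added.
eta = inflation_factor([0.02], [0.02, 0.05, 0.04])

# A quoted flux uncertainty (here 3.6%) is then scaled by eta:
scaled_flux_unc = eta * 0.036
print(eta, scaled_flux_unc)
```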

Finally, the reference cross sections are used for renormalization and to add correlated uncertainty components.

3.1.2 Eiland/Kirouac, 1974 [49]

Measured (n,α0), i.e., (n,α) leaving the recoil nuclide in its ground state, in a thermal spectrum from a graphite-moderated neutron-beryllium source, detected with particle track detectors. The experiment actually consists of two


sub-experiments; the major differences are 59Ni enrichment and detector efficiency (determined using an americium source).

Both sub-experiments consisted of six measurements, and the standard deviation of each set was used as an estimate of the uncertainty from “all sources of random error”, giving ςth. = (13.5 ± 1.8) b and ςth. = (13.7 ± 0.6) b for sub-experiments 1 and 2, respectively. The flux was determined using activation of gold via the 197Au(n,γ) reaction. This determination was estimated to have a 3.6% systematic uncertainty. The americium source strength used to determine the detector efficiency was estimated to have an uncertainty of 2.5% from statistics. Using these systematic uncertainties (assuming all other uncertainties are random) and taking the weighted average of the two sub-experiments, they quote ςth. = (13.7 ± 0.6 (random) ± 0.6 (systematic)) b.
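The quoted combination is consistent with an inverse-variance weighted average of the random uncertainties (a standard choice, assumed here for illustration):

```python
# Inverse-variance weighted average of the two sub-experiments,
# using the random uncertainties quoted in the text (in barns).
values = [13.5, 13.7]
sigmas = [1.8, 0.6]

weights = [1.0 / s**2 for s in sigmas]
mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
sigma_mean = (1.0 / sum(weights))**0.5

print(round(mean, 2), round(sigma_mean, 2))  # close to the quoted 13.7 +- 0.6 b
```

The second sub-experiment carries almost all of the weight, which is exactly why only that sub-experiment is analyzed further below.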

In practice, the second sub-experiment and its estimate of random uncertainty dominate completely, and therefore only this sub-experiment is analyzed further. For each of the six measurements, a new estimate of the detector efficiency and the background (as we interpret it) was performed. Three different targets (and estimates of their thickness and density) were used (two measurements with each target). Therefore, uncertainties in all constituents of Eq. (3.1) are to some degree accounted for. There are, however, several problems with this estimate:

1. The sample size is small, so the uncertainty of the uncertainty is large
2. Only three different targets were used, increasing the problem of the small sample size
3. The statistical precision varies (the number of counted tracks varies between 1500 and 4100), so the different measurements cannot be said to come from identically distributed random variables