
Search for heavy vector bosons W ′ in the tau

decay channel using pp collisions at 8 TeV

von

Simon Knutzen

Masterarbeit in Physik

vorgelegt der

Fakultät für Mathematik, Informatik und Naturwissenschaften der Rheinisch-Westfälischen Technischen Hochschule Aachen

März 2013

erstellt im

III. Physikalischen Institut A

Prof. Dr. Thomas Hebbeker

Zweitgutachter

Prof. Dr. Christopher Wiebusch

Abstract

The Large Hadron Collider has successfully taken a huge amount of data in the year 2012 at an unprecedented center of mass energy of 8 TeV, making it the ideal discovery machine for new physics processes at the highest energies. In this thesis, a search for a new heavy vector boson W′ is performed in the tau decay channel with 12.1 fb−1 of data taken with the CMS detector. The W′ boson is predicted by many theories extending the Standard Model, and various searches have been performed in the past in the electron and muon channels. The current world's best limit excludes a W′ boson in a Sequential Standard Model with masses below M(W′) = 3.35 TeV at 95% CL [1].

In this thesis, for the first time, the tau decay channel of the W′ is investigated, where the tau subsequently decays hadronically. Although the sensitivity in the tau channel is lower than in the two other channels, it is important for gaining the full picture of a hypothetical W′ boson.

An additional theory is investigated in this analysis in which the coupling of the W′ boson to the different fermion families is nonuniversal and depends on a parameter of this model. The analysis of the tau decay channel allows the study of a parameter range of this model in which the electron and muon channels are not sensitive.

Since the decay of the W′ gives rise to tau leptons at very high momenta, which places special and challenging requirements on the reconstruction of these particles, a detailed study of tau identification and energy reconstruction at high momenta is performed first.

Zusammenfassung

Der Large Hadron Collider hat im Jahr 2012 erfolgreich eine große Menge an Daten bei einer nie zuvor erreichten Schwerpunktsenergie von 8 TeV aufgenommen, was ihn zu einer idealen Entdeckungsmaschine für neue Physik bei höchsten Energien macht. In dieser Arbeit wird eine Suche nach einem neuen schweren Vektorboson W′ in Zerfällen mit einem Tau-Lepton mit 12.1 fb−1 Daten durchgeführt, die mit dem CMS-Detektor aufgenommen wurden. Das W′-Boson wird von vielen verschiedenen Theorien vorhergesagt, die das Standardmodell der Teilchenphysik erweitern, und verschiedene Suchen wurden in der Vergangenheit im Elektron- und Myon-Zerfallskanal durchgeführt. Das momentan weltbeste Limit für das W′-Boson, beschrieben durch ein Sequentielles Standardmodell, das von diesen Analysen gesetzt wurde, schließt mit 95 % CL Massen unter M(W′) = 3.35 TeV aus [1].

In dieser Arbeit wird zum ersten Mal der Zerfallskanal des W′-Bosons untersucht, in welchem das W′ in ein Tau-Lepton und ein Neutrino zerfällt und das Tau-Lepton wiederum hadronisch zerfällt. Obwohl die Sensitivität dieses Tau-Zerfallskanals schlechter ist als die Sensitivität des Elektron- oder Myon-Zerfallskanals, ist dieser sehr wichtig, um das vollständige Bild eines möglichen W′-Bosons zu erhalten.

Zusätzlich wird eine Theorie untersucht, in der die Kopplungsstärke des W′-Bosons an die Fermionfamilien nicht universell ist und von einem Parameter dieses Modells abhängt. Die Analyse des Tau-Zerfallskanals ermöglicht es, den Parameterraum dieses Modells in einem Bereich zu untersuchen, in welchem der Elektron- und der Myon-Zerfallskanal nicht sensitiv sind.

Da die Zerfälle des W′-Bosons zu Tau-Leptonen mit sehr hohen Impulsen führen und die Rekonstruktion dieser Teilchen speziellen Anforderungen unterliegt, wird zuerst eine ausführliche Studie zur Tau-Identifikation und Energie-Rekonstruktion bei hohen Impulsen durchgeführt.


Contents

1 Theoretical Principles  1
  1.1 System of Units  1
  1.2 Standard Model of Particle Physics  1
    1.2.1 Particles of the Standard Model  2
    1.2.2 Gauge field formalism of particle physics  2
    1.2.3 Quantum electrodynamics  4
    1.2.4 Weak interaction and electroweak unification  4
    1.2.5 Quantum chromodynamics  6
    1.2.6 Higgs mechanism  7
  1.3 Tau Lepton  8
    1.3.1 Properties of the tau  8
    1.3.2 Decay properties of highly boosted taus  9
  1.4 W′ in the Sequential Standard Model  10
  1.5 Nonuniversal Gauge Interaction Model  11
    1.5.1 The SU(2)l x SU(2)h x U(1) gauge group  11
    1.5.2 Existing exclusion limits for W′ in the NUGIM  13
2 The Experiment  17
  2.1 The Large Hadron Collider  17
  2.2 The Compact Muon Solenoid  19
    2.2.1 The coordinate system  20
    2.2.2 Silicon tracker  20
    2.2.3 Electromagnetic calorimeter  21
    2.2.4 Hadronic calorimeter  22
    2.2.5 Magnet  23
    2.2.6 Muon system  24
3 Object Reconstruction  25
  3.1 Particle Flow Algorithm  25
  3.2 Particle Flow Jets  27
  3.3 The Hadron plus Strips Tau Reconstruction Algorithm  28
    3.3.1 Tau jet reconstruction  28
    3.3.2 Tau identification  29
  3.4 Missing Transverse Energy  32
  3.5 Performance of the HPS Algorithm at High Energies  33
    3.5.1 Tau energy determination  33
    3.5.2 Tau identification  37
4 Trigger  45
  4.1 The Trigger System  45
  4.2 The Tau plus MET Cross Trigger  46
  4.3 MET only Trigger  49
5 Datasets and Analysis Framework  53
  5.1 The Analysis Framework  53
  5.2 pp Collision Data Recorded in 2012  54
  5.3 Monte Carlo Simulation  54
    5.3.1 Parton distribution function and production cross sections  54
    5.3.2 Pileup simulation  56
    5.3.3 Standard Model process samples  57
    5.3.4 Signal samples  58
6 Event Selection  61
  6.1 Transverse Mass as the Main Discriminating Variable  61
  6.2 Selection for Event Quality  62
  6.3 Selection of W Event Signature  63
  6.4 Signal Efficiency and Background Contribution  64
7 Data-Driven QCD Multijet Background Estimation  69
  7.1 The QCD Template Sample  69
  7.2 Normalization of the Template Sample  69
  7.3 Cross Check of Fake Probability  72
  7.4 Full Background Prediction and Final Distributions  73
8 Systematic Uncertainties  75
  8.1 Sources of Systematic Uncertainties  75
  8.2 Impact on Signal Efficiencies and Background Prediction  77
9 Exclusion Limits  81
  9.1 Single Bin Counting Experiment with Bayesian Statistics  81
  9.2 Limits in the Sequential Standard Model  82
  9.3 Limits on the NUGIM Model  83
10 Conclusion and Outlook  87
  10.1 Conclusion  87
  10.2 Outlook  88
11 Appendix  89

1 Theoretical Principles

In order to understand the theoretical principles which underlie the processes this analysis searches for, a short introduction to the Standard Model of particle physics is given first. The Standard Model is not only the foundation for all further particle models, but is also needed to describe the background for the search. The two specific W′ models considered in this analysis are described afterwards.

1.1 System of Units

In this work a system of units called "natural units" is used, which is common in particle physics: Energy is expressed as a multiple of the amount of energy a particle carrying one elementary charge gains over a voltage of 1 V, called the electron volt (1 eV ≈ 1.6021 · 10⁻¹⁹ J). For practical reasons 1 GeV = 10⁹ eV is normally used. Additionally, some constants are set to one to get rid of large (or small) numbers: ħ = c = ε₀ = 1. As a result, energy, mass and momentum are expressed in GeV, and time and distance are expressed in 1/GeV. Velocity is expressed as a fraction of c without a unit. Electrical charge (or charge number) mostly refers to a multiple of the elementary charge, unless otherwise stated.
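The conversions between natural and SI units can be sketched numerically. The snippet below is a small illustration (not from the thesis), using the standard conversion constants ħc ≈ 0.197327 GeV·fm and ħ ≈ 6.582 · 10⁻²⁵ GeV·s:

```python
# Converting quantities from natural units (powers of GeV) back to SI-like
# units. The conversion constants are standard CODATA values, not taken
# from the thesis.
HBARC_GEV_FM = 0.197327   # hbar*c in GeV*fm: 1 GeV^-1 of length = 0.197327 fm
HBAR_GEV_S = 6.582e-25    # hbar in GeV*s:    1 GeV^-1 of time = 6.582e-25 s

def length_in_fm(length_inv_gev):
    """Convert a length given in 1/GeV to femtometers."""
    return length_inv_gev * HBARC_GEV_FM

def time_in_s(time_inv_gev):
    """Convert a time given in 1/GeV to seconds."""
    return time_inv_gev * HBAR_GEV_S
```

For example, `length_in_fm(1.0)` gives roughly 0.2 fm, the length scale corresponding to 1 GeV⁻¹.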

1.2 Standard Model of Particle Physics

The Standard Model (SM) of particle physics is an accurate and mathematically elegant theory of the elementary structure of matter. It describes the properties of all known elementary particles and the forces and interactions between them. The theory was tested in various experiments and its predictions were verified with very high precision. Three fundamental forces (the electromagnetic [2], weak [3, 4] and strong interaction [5]) are described as the interaction of particles with gauge fields in the context of quantum field theory (QFT). For each force, gauge particles arise from the fields, which can be interpreted as force carrier particles. Gravity is not included in the Standard Model, because no gauge theory has been found for it yet. But since it is much weaker than the other three forces, it is negligible for particle processes.

One imperfection of the Standard Model is the description of the particle masses. In the field equations of the three interactions, all particles initially have to be massless, since mass terms would violate gauge invariance. An additional component, the so-called Higgs mechanism, was introduced into the Standard Model to describe the particle masses. This theory has not been fully confirmed yet, but a new particle was discovered at the LHC in the year 2012 which seems to be the boson of the Higgs field.

All information in this section is taken from [6] unless otherwise stated.


1.2.1 Particles of the Standard Model

There are two different types of particles in the Standard Model: the spin one half matter particles, called fermions, and the gauge particles with integer spin, called bosons. The fermions themselves can again be divided into two types, depending on whether they take part in the strong interaction or not. The former are called quarks and the latter are called leptons. Furthermore, both types are divided into three generations with increasing particle masses, where each generation contains two quarks (one up type and one down type quark) and two leptons (one charged lepton and one neutrino). These particle pairs form (isospin) doublets of the SU(2) transformation of the weak interaction, which means that the two different particles of one generation can be interpreted as two orientations of one isospin object. The up type quarks have an electrical charge of 2/3 and the down type quarks of −1/3, while charged leptons have a charge of −1 and neutrinos of 0. Corresponding particles in the different generations are identical in all quantum numbers except for the one denoting their generation. This number is conserved in all processes except for charged weak currents. Since the masses of particles of the second and third generation are higher than those in the first generation, they decay into first-generation particles via weak processes, so that all stable matter found in the universe is exclusively built from particles of the first generation. One exception is formed by neutrinos: in the Standard Model they are assumed to be massless and stable, and all three generations can be found in the universe today. Recent experiments have shown that neutrinos of different generations oscillate into each other, which means that at least two of them must have a small but nonzero mass [7].

An overview of all SM particles and their properties can be seen in Figure 1.1. Leptons can occur as single particles, while quarks always form bound states due to the properties of the strong interaction. The bound states can contain a quark and an antiquark, in which case they are called mesons, or three quarks, in which case they are called baryons. The electrical charge of these hadrons is always an integer number, although the electrical charge of the quarks is a multiple of one third.

The relativistic energy of a particle is described by Formula 1.1,

E^2 = m^2 + p^2 \quad (1.1)

where m is the particle mass and p its momentum.

For each particle in the Standard Model there exists a corresponding antiparticle. The antiparticle is identical to its particle except that all additive quantum numbers, like the charge or the baryon number, are inverted. In terms of quantum field theory an incoming antiparticle is equivalent to an outgoing particle. Both particle and antiparticle are described simultaneously by a four dimensional complex wave function called a spinor ψ. The spinor also contains the information about the spin orientation of the fermion.
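As a minimal numerical illustration of Formula 1.1 (own sketch, all quantities in GeV):

```python
import math

# E^2 = m^2 + p^2 in natural units: energy from mass and momentum, in GeV.
def energy(mass, momentum):
    return math.sqrt(mass**2 + momentum**2)

# A tau (m ~ 1.777 GeV) with p = 100 GeV is ultra-relativistic: the energy
# is dominated by the momentum term and exceeds p only marginally.
e_tau = energy(1.777, 100.0)
```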

1.2.2 Gauge field formalism of particle physics

As mentioned before, the three forces included in the Standard Model are described as gauge invariant quantum fields which interact with the fermion spinors, yielding the equations of motion in the Lagrange formalism.


Figure 1.1: Particles of the Standard Model and their properties [8].

A non-interacting fermion fulfills the Dirac equation

(i\gamma^\mu \partial_\mu - m)\psi = 0 \quad (1.2)

which leads to the Lagrangian

\mathcal{L} = \bar{\psi}(i\gamma^\mu \partial_\mu - m)\psi. \quad (1.3)

The inclusion of an interaction by adding a gauge field to this Lagrangian will be discussed for the electromagnetic interaction: a gauge transformation of the type

\psi \to \psi' = U\psi, \quad U = e^{i\alpha} \quad (1.4)

leaves the Lagrangian and all physical observables invariant as long as \alpha is independent of time and location:

\mathcal{L} = \bar{\psi}(i\gamma^\mu \partial_\mu - m)\psi \to \bar{\psi}'(i\gamma^\mu \partial_\mu - m)\psi' = (\bar{\psi} e^{-i\alpha})(i\gamma^\mu \partial_\mu - m)(e^{i\alpha}\psi) = \bar{\psi} e^{-i\alpha} e^{i\alpha}(i\gamma^\mu \partial_\mu - m)\psi = \mathcal{L}

This is called a global gauge transformation. To describe particles interacting with the electromagnetic force, invariance under a local gauge transformation is additionally demanded, where \alpha is a function of the space-time variables (t, \vec{x}). A priori, the Lagrangian is not invariant under this transformation, since an additional term \partial_\mu \alpha appears. A four-dimensional potential A_\mu has to be added as a gauge field, which extends the partial derivative to the covariant derivative

\partial_\mu \Rightarrow D_\mu = \partial_\mu - i g A_\mu \quad (1.5)

This gauge field transforms as

A_\mu \to A'_\mu = A_\mu + \frac{1}{g} \partial_\mu \alpha \quad (1.6)

and cancels the additional term in the Lagrangian, restoring invariance under local gauge transformations. This potential A_\mu is the electromagnetic photon field.

After adding the kinetic term for the free photon field with the field strength tensor F_{\mu\nu} and rearranging the equation, one obtains the full Lagrangian of quantum electrodynamics (QED)

\mathcal{L} = i\bar{\psi}\gamma^\mu \partial_\mu \psi - m\bar{\psi}\psi + g\,\bar{\psi}\gamma^\mu\psi A_\mu - \frac{1}{4} F_{\mu\nu} F^{\mu\nu} \quad (1.7)

where g\,\bar{\psi}\gamma^\mu\psi A_\mu describes the electron-photon interaction as the coupling of a single photon to a fermion current \bar{\psi}\gamma^\mu\psi, with the coupling strength g proportional to the electric charge.

In the following sections the gauge theories of the Standard Model will be discussed: quantum electrodynamics (QED) for the electromagnetic interaction, quantum chromodynamics (QCD) for the strong interaction, and the combined electroweak theory. No mathematical derivation will be given, since this would be beyond the scope of this work, but rather a phenomenological review of their properties and predictions.

1.2.3 Quantum electrodynamics

QED describes electromagnetic processes and is therefore the interaction which is most intuitive, due to the fact that it can be experienced in everyday life. The electromagnetic force has an infinite range since its force carrier, the photon, is massless and stable. Every charged particle participates in the electromagnetic interaction, with a coupling strength given by the fine-structure constant \alpha = e^2/(4\pi) \approx 1/137. At every interaction vertex all quantum numbers of the system have to be conserved. The photon is a massless spin one particle, which is why it cannot have a spin component perpendicular to its direction of flight. This restricts the kinds of processes which can be mediated by QED due to spin conservation.

1.2.4 Weak interaction and electroweak unification

With the discovery of radioactive beta decay at the end of the 19th century, a new kind of particle process was discovered in which one particle type is transformed into another. In a β− decay a down quark is transformed into an up quark, whereby an electron and an antineutrino occur. This cannot be described by a QED process, thus a new interaction, called the weak interaction, has to be introduced. As said before, the particles of the Standard Model are organized in one lepton and one quark doublet for each generation. The weak interaction can transform any up type particle of one doublet into the corresponding down type particle and vice versa, which can be described as a rotation of the weak isospin vector in two dimensions by an SU(2) gauge field:

u = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \to \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix} = d

The SU(2) gauge group has three generators (W1, W2, W3) corresponding to three gauge bosons. Two of the gauge bosons must have an electrical charge of Q = ±1, since they have to carry away a charge of one at every particle transformation vertex.

It was found in experiments (the Wu experiment [9] and the Goldhaber experiment [10]) that this charged weak interaction only couples to left chiral fermions, which implies maximal parity violation. Chirality is the Lorentz invariant generalization of helicity, which is the projection of the spin of a particle onto its direction of flight. "Right" refers to the state where the spin points in the direction of flight and "left" to the opposite orientation. For massless particles like neutrinos, helicity and chirality are identical, but not for massive particles, since there are reference frames in which the direction of flight points in the opposite direction. Massive particles are formed through a superposition of left and right chiral states. Since right chiral neutrinos would not take part in any interaction, it is assumed that there are no such particles. With this new condition the particles of the Standard Model are rearranged in terms of the weak interaction: only the left chiral particles form doublets, while the right chiral particles form singlets. For antiparticles the behavior of the weak interaction is reversed: it couples only to right chiral antifermions.

The weak interaction was combined with the electromagnetic interaction into a unified electroweak theory, which is a first step towards a grand unified theory of all forces; additionally, it solves some theoretical problems of the theory of the weak interaction.¹

The electroweak interaction is described by a gauge group of the form SU(2)×U(1). This additional U(1) (which is not the U(1) of QED) gives rise to an additional gauge boson B. Out of the four generators (W1, W2, W3 and B) the physical force carrier gauge bosons (W+, W−, Z, A) are formed by mixing.

W^+ = (W^1 + iW^2)/\sqrt{2} \quad (1.8)

W^- = (W^1 - iW^2)/\sqrt{2} \quad (1.9)

A = \cos\theta_W \cdot B + \sin\theta_W \cdot W^3 \quad (1.10)

Z = -\sin\theta_W \cdot B + \cos\theta_W \cdot W^3 \quad (1.11)

W+ and W− are the bosons of the charged weak currents, A is the photon field and Z is an additional uncharged weak boson. The angle θ_W, with sin²θ_W ≈ 0.23, is the mixing angle between the two gauge groups. The coupling strength g of the W field to fermions is connected to the electrical charge by g = e/sin θ_W.
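Equations (1.10)-(1.11) describe an orthogonal rotation of the neutral fields; the following is a small numerical sketch (own illustration, using the value sin²θ_W ≈ 0.23 quoted in the text):

```python
import math

# Mixing the gauge eigenstates B and W3 into the physical photon (A) and
# Z fields, Eqs. (1.10)-(1.11). sin^2(theta_W) ~ 0.23 as quoted in the text.
theta_w = math.asin(math.sqrt(0.23))

def mix(b, w3):
    """Rotate the (B, W3) pair into the mass eigenstates (A, Z)."""
    a = math.cos(theta_w) * b + math.sin(theta_w) * w3
    z = -math.sin(theta_w) * b + math.cos(theta_w) * w3
    return a, z

# The transformation is an orthogonal rotation: the images of the two basis
# states stay normalized and mutually orthogonal.
a1, z1 = mix(1.0, 0.0)
a2, z2 = mix(0.0, 1.0)
assert abs(a1 * a2 + z1 * z2) < 1e-12
assert abs(a1**2 + z1**2 - 1.0) < 1e-12
```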

The bosons of the weak interaction are, in contrast to the photon, massive particles with m(W±) ≈ 80.4 GeV and m(Z) ≈ 91.2 GeV, and they have a short lifetime. Therefore, the weak interaction is short ranged and weaker than the electromagnetic one, as long as the momentum transfer of a process is small enough that these massive bosons are energetically suppressed. The mass of the weak gauge bosons is not predicted by the gauge theories; in fact, mass terms not only for these bosons but also for fermions would violate local gauge invariance. In this theory all particles have to be massless, which is obviously wrong. A new field has to be introduced which enables mass terms in the Lagrangian. This field is called the Higgs field. The Higgs mechanism will be discussed in Section 1.2.6.

¹ Without the unification the theory is not renormalizable.

1.2.5 Quantum chromodynamics

Long before QCD was developed, it was obvious that there has to be an additional force besides the known electromagnetic, weak and gravitational forces, responsible for the binding of the nucleus. This force has to be strong and short ranged, since the binding energy of the nucleus is very large and it only affects neighbouring particles. Additionally, no isolated quarks can be found (or produced at colliders), which implies that the force between them cannot be overcome. The force becomes constant with rising distance, and as soon as there is enough energy stored between the two quarks, a new quark-antiquark pair is created which shields the initial particles from each other. These new quarks combine with the initial ones, and the resulting particles are neutral with respect to the charge of the strong interaction. This feature is called confinement, and the creation of quark-antiquark pairs happens at a distance of r ≈ 1 fm, which is therefore the approximate size of a hadron. If the energy of the primary quark is high enough, more than one pair of quarks can be produced. In this way a large number of particles can arise, which is called hadronisation. There are two types of hadrons which are bound by the strong force (and neutral to the outside): baryons, which contain three quarks, and mesons, which contain a quark and an antiquark. This can be realized by a force with three different charges, called colour charges, where the combination of the three different colours (red, green, blue) results in an uncharged (white) object (a baryon), as does the combination of a colour and its anticolour (a meson). The discovery of the Δ++ baryon necessitates the existence of three colours to fulfill the Pauli principle, since it consists of three up quarks which are identical in all quantum numbers except for the colour. The force carrier of the strong interaction is the gluon, which carries colour itself (a colour and an anticolour), enabling gluon-gluon self interaction. Eight different non-white combinations are possible. This self interaction is the reason for confinement: for rising distances more and more gluons take part in the interaction, thus increasing the energy in between.

QCD is constructed from an SU(3) gauge group, with the gluons corresponding to the eight generators of this group. The gauge transformation is a rotation in the three dimensional colour space. Since physics is invariant under this transformation, the strong coupling \alpha_s = g_s^2/(4\pi) is independent of the colour. As seen before for the confinement, the coupling strongly depends on the distance (i.e. the momentum transfer) between two particles. For small distances, and therefore high momentum transfers, the coupling becomes small, which is called asymptotic freedom. At the Z boson mass of 91 GeV the QCD coupling constant is αs ≈ 0.12, which is much higher than the coupling α of QED. In QCD processes all quantum numbers are conserved.
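The running of α_s described above can be sketched with the standard one-loop QCD formula (a textbook expression, not taken from the thesis; n_f is the number of active quark flavours):

```python
import math

# One-loop running of the strong coupling: alpha_s decreases with the
# momentum transfer Q ("asymptotic freedom"). Standard leading-order
# formula with beta0 = (33 - 2*n_f) / (12*pi), anchored at the Z mass.
def alpha_s(q, alpha_mz=0.118, m_z=91.19, n_f=5):
    beta0 = (33 - 2 * n_f) / (12 * math.pi)
    return alpha_mz / (1 + beta0 * alpha_mz * math.log(q**2 / m_z**2))

assert alpha_s(91.19) == 0.118       # anchored at the Z mass, as in the text
assert alpha_s(1000.0) < 0.118       # weaker coupling at higher Q
assert alpha_s(10.0) > 0.118         # stronger coupling at lower Q
```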

1.2.6 Higgs mechanism

As said before, all particles which appear in the previously described electroweak gauge theories have to be massless, since a mass term would violate local gauge invariance. For example, a fermion mass term of the form

m_f \cdot \bar{\psi}\psi = m_f \cdot (\bar{\psi}_R \psi_L + \bar{\psi}_L \psi_R)

is transformed to

m_f \cdot (\bar{\psi}_R \psi'_L + \bar{\psi}'_L \psi_R) \neq m_f \cdot \bar{\psi}\psi

which is not invariant, since the SU(2) transformation only affects the left handed doublet but not the right handed singlet. The Higgs mechanism [11, 12, 13] introduces a new scalar field

\Phi = \frac{1}{\sqrt{2}} \begin{pmatrix} \Phi_1 + i\Phi_2 \\ \Phi_3 + i\Phi_4 \end{pmatrix} = \begin{pmatrix} \Phi^+ \\ \Phi^0 \end{pmatrix} \quad (1.12)

with a corresponding potential

V(\Phi) = \mu^2 \cdot (\Phi^\dagger\Phi) + \lambda \cdot (\Phi^\dagger\Phi)^2 \quad (1.13)

The potential has to satisfy the requirements µ² < 0 and λ > 0, which leads to a potential with a minimum shifted away from zero, the so-called "Mexican hat potential". At the beginning of the universe, at very high energies, the Higgs field was not located at the minimum of the potential, but it settled into this minimum with a random orientation after the universe had sufficiently cooled down. This is called spontaneous symmetry breaking. The "trick" of the Higgs mechanism is to approximate the field at the minimum with a first order Taylor series

\begin{pmatrix} \Phi^+ \\ \Phi^0 \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 \\ v + h(x) \end{pmatrix} \quad (1.14)

where v is the vacuum expectation value of the Higgs field. Particles gain mass by interaction with this field. This results in boson masses of

M_W = \frac{1}{2} v g \quad (1.15)

M_Z = \frac{M_W}{\cos\theta_W} \quad (1.16)

Fermion masses are included through a Yukawa coupling

M_f = \frac{\lambda_f \cdot v}{\sqrt{2}} \quad (1.17)
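A quick numerical cross-check of Eqs. (1.15)-(1.16) can be sketched as follows (own illustration; the inputs v ≈ 246 GeV, sin²θ_W ≈ 0.2312 and α(M_Z) ≈ 1/128 are standard values not quoted in the thesis):

```python
import math

# Tree-level W and Z masses from the Higgs vacuum expectation value,
# Eqs. (1.15)-(1.16). Assumed inputs: v ~ 246 GeV, sin^2(theta_W) ~ 0.2312,
# and the electromagnetic coupling evaluated at the Z scale, alpha ~ 1/128.
v = 246.22                               # GeV, vacuum expectation value
sin2_theta_w = 0.2312
e = math.sqrt(4 * math.pi / 128.0)       # coupling from alpha(M_Z) ~ 1/128
g = e / math.sqrt(sin2_theta_w)          # g = e / sin(theta_W), as in the text

m_w = 0.5 * v * g                        # Eq. (1.15)
m_z = m_w / math.sqrt(1 - sin2_theta_w)  # Eq. (1.16), with cos(theta_W)

# Both come out close to the measured ~80.4 GeV and ~91.2 GeV.
assert abs(m_w - 80.4) < 1.5
assert abs(m_z - 91.2) < 1.5
```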


The Higgs field gives rise to a new boson, the so-called Higgs boson, with a mass of M_H = \sqrt{-2\mu^2}. Since the value of µ is not known, the Higgs mass cannot be predicted.

A new boson was found at the LHC in the year 2012 at a mass of 125 GeV [14, 15], which is a promising candidate for the Higgs boson, but more measurements are needed to confirm this.

1.3 Tau Lepton

The study in this thesis is performed with final states including a tau lepton, so this particle will be discussed in more detail. In the following section, the properties of the tau [16] and the kinematics of its decay in a highly boosted system are presented.

1.3.1 Properties of the tau

The tau is the heaviest of the three charged leptons in the Standard Model. It has a mass of mτ = 1776.82 ± 0.16 MeV [16] and is therefore heavy enough to decay not only into the lighter leptons but also into mesons. The Feynman diagrams of the decay are shown in Figure 1.2. The mean lifetime of (290.6 ± 1.0) · 10⁻¹⁵ s is too short to measure the tau directly at the LHC, since even boosted taus decay before they can be detected.

Figure 1.2: Feynman diagrams of the tau decay. The hadronic decay is shown on the left, the leptonic decay on the right.

The branching ratio to hadrons is BR(τ → h ν) ≈ 64.8%, and BR(τ → e/µ ν ν) ≈ 17.5% for each leptonic channel. Another way to categorize the different tau decays is by how many charged particles arise from the tau decay. Since the hadronic tau decay only produces single meson resonances, but no unbound quarks, a well known number of charged tracks arises from each decay. In contrast, many charged particles can arise in the hadronisation of single quarks.

In the majority of decays only one charged particle arises from the tau decay. The branching fraction of these so-called 1-prong decays (which also include the leptonic decays) is BR(1-prong) ≈ 85.35%. The branching fraction for 3-prong decays, which can emerge if the initial meson decays into three charged mesons, is BR(3-prong) ≈ 14.6%. All decay modes with higher multiplicities of charged particles are negligible.

Table 1.1 lists the most important hadronic decay modes of the tau together with the different intermediate mesons. Only the charged pion lives long enough to be detected directly. All other mesons decay in turn into charged and neutral pions. The invariant mass of these final state pions yields the mass of the intermediate meson, which is used for tau identification later on.


Decay mode               Resonance   Mass (MeV)   Branching fraction (%)
τ⁻ → h⁻ ντ                                        11.6
τ⁻ → h⁻ π⁰ ντ            ρ⁻          770          26.0
τ⁻ → h⁻ π⁰ π⁰ ντ         a₁⁻         1200         9.5
τ⁻ → h⁻ h⁺ h⁻ ντ         a₁⁻         1200         9.8
τ⁻ → h⁻ h⁺ h⁻ π⁰ ντ                               4.8

Table 1.1: Hadronic decay modes of the tau [17].
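A quick sanity check of the branching fractions quoted above (values as given in the text, in percent; the residual deviation from 100% reflects rounding and neglected rare modes):

```python
# Branching fractions from the text, in percent.
br_hadronic = 64.8
br_leptonic_each = 17.5          # per leptonic channel (e and mu)
total = br_hadronic + 2 * br_leptonic_each
assert abs(total - 100.0) < 0.5  # sums to ~100% within rounding

br_1prong = 85.35                # includes the leptonic decays
br_3prong = 14.6
assert abs(br_1prong + br_3prong - 100.0) < 0.5
```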

1.3.2 Decay properties of highly boosted taus

This analysis deals with taus with an energy well above 100 GeV. Since this is much larger than the tau mass, the tau and its decay products are highly boosted. All decay products fly almost exactly in the same direction as the initial tau, which results in some disadvantages but also in some advantages for the analysis. The distribution of \Delta R = \sqrt{\Delta\phi^2 + \Delta\eta^2} between the initial tau direction and the decay products, where φ is the azimuthal angle and η the pseudorapidity of the particle direction, can be seen in Figure 1.3. The pseudorapidity is used instead of the polar angle θ (see 2.3).

Figure 1.3: Distribution of ΔR between the tau decay products and the initial tau direction. On the left side the frequency distribution is shown and on the right side the two dimensional correlation between ΔR and the true transverse momentum of the tau (p_T^{τ,true}) for taus with a high boost (p_T^{τ,true} > 100 GeV). The true value is known since this study is done on simulation. The distribution peaks strongly at very low values of ΔR and decreases further for rising p_T^{τ,true}.
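The ΔR separation shown in Figure 1.3 can be computed as in this small sketch (own illustration; note that the azimuthal difference must be wrapped into (−π, π]):

```python
import math

# Delta R = sqrt(dphi^2 + deta^2) between two directions, with the
# azimuthal difference wrapped into (-pi, pi].
def delta_phi(phi1, phi2):
    dphi = (phi1 - phi2) % (2 * math.pi)
    return dphi - 2 * math.pi if dphi > math.pi else dphi

def delta_r(eta1, phi1, eta2, phi2):
    return math.hypot(eta1 - eta2, delta_phi(phi1, phi2))

# Two tracks on opposite sides of the phi = +/-pi boundary are still close:
assert delta_r(0.0, 3.1, 0.0, -3.1) < 0.1
```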

One disadvantage concerns the leptonic decays. Since a decay vertex reconstruction is only possible for at least two charged tracks in the final state, the only way to distinguish an electron or muon originating from a tau decay from those coming directly from the hard interaction would be a clear offset of the impact parameter from the primary vertex (dxy). The impact parameter is the perpendicular distance of the extrapolated particle track to the primary vertex. Since the track points directly towards the primary vertex, the impact parameter is nearly zero for high energy taus, as can be seen in Figure 1.4 for muons arising from the tau decay. They are indistinguishable, and therefore the leptonic decays of the tau cannot be used for this analysis, which results in a loss of approximately 35% of all tau decays. A dedicated analysis of these leptonic decays, which can be important for models where the W′ cannot decay into electrons or muons, was done in [18].

Figure 1.4: Distribution of the impact parameter dxy of muons arising from tau decays of highly boosted taus. For most of the muons, the impact parameter is close to zero.

The next disadvantage concerns the reconstruction of the 3-prong hadronic decays. The three charged tracks are very close to each other, which makes it challenging to resolve them.

One advantage of the high boost, however, is that the final state looks exactly like a two body decay, so the same analysis strategy as for the search channels W′ → µν and W′ → eν [1] can be used: the decay products of the tau remain back-to-back with the tau neutrino from the W′ decay.

1.4 W′ in the Sequential Standard Model

Despite all the successes of the Standard Model in predicting and modeling particle interactions, some open questions remain, related to the mass hierarchy of the different generations and the parity violation of the weak interaction. Theories dealing with these questions often predict new heavy gauge bosons.

As a first general approach to search for these new bosons, a benchmark model called the Sequential Standard Model (SSM) is considered, which follows the work of Altarelli et al. [19]. The W′ is treated as a carbon copy of the SM W with the same couplings but a higher mass. The decay into a top and bottom quark pair is possible due to the high mass of the W′ boson, and the decay into WZ is assumed to be suppressed, which results in a branching ratio into each lepton generation of BR(W′ → lν) ≈ 8 %. The width Γ_W′ [20] of the W′ is calculated as

    Γ_W′ = m_W′ · (g²/2) · 1/(48π) · (18 + 3 F(m_t/m_W′, m_t/m_W′))   (1.18)

with

    F(x₁, x₂) = (2 − x₁² − x₂² − (x₁² − x₂²)²) · √((1 − (x₁ + x₂)²)(1 − (x₁ − x₂)²))   (1.19)
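A numerical evaluation of Equations 1.18 and 1.19 can be sketched as follows; the values of the weak coupling g and the top mass below are assumptions for illustration, not taken from the text:

```python
import math

G_WEAK_SQ = 0.653 ** 2   # SU(2) coupling g^2 (approximate SM value; an assumption)
M_TOP = 173.0            # top quark mass in GeV (an assumption)

def f_phase_space(x1, x2):
    """Kinematic factor F(x1, x2) of Eq. (1.19)."""
    return (2 - x1**2 - x2**2 - (x1**2 - x2**2)**2) * math.sqrt(
        (1 - (x1 + x2)**2) * (1 - (x1 - x2)**2))

def width_ssm(m_wprime):
    """Total SSM W' width following Eq. (1.18), in GeV."""
    x = M_TOP / m_wprime
    return m_wprime * G_WEAK_SQ / 2 / (48 * math.pi) * (18 + 3 * f_phase_space(x, x))
```

For m_W′ = 2 TeV this yields a width of roughly 3–4 % of the boson mass, i.e. a narrow resonance.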

The SSM enables searches for various W′ bosons, since it is not restricted to a specific theory but instead provides a generic experimental signature.

1.5 Nonuniversal Gauge Interaction Model

Besides the SSM, an additional dedicated theory of particular interest, the Nonuniversal Gauge Interaction Model (NUGIM) [21, 22, 23], is considered in this analysis, which also gives rise to a W′ gauge boson. The coupling of the W′ in this theory is nonuniversal, meaning that it depends on the fermion generation. Depending on a free parameter of the model, the coupling to the tau (more precisely, to the third generation) can be either suppressed or enhanced. Since an enhanced coupling to the third generation implies a suppressed coupling to the first and second generations, the tau channel has the highest sensitivity for searches in part of the parameter space of this model.

1.5.1 The SU(2)l × SU(2)h × U(1) gauge group

In the NUGIM the weak interaction is described by two SU(2) gauge groups instead of one. The first one, SU(2)l (where l stands for "light"), couples only to first and second generation fermions, and the second one, SU(2)h (where h stands for "heavy"), couples only to the third generation. The full electroweak interaction is therefore described by an SU(2)l × SU(2)h × U(1) gauge group.

At low energies the model has to reproduce the SM electroweak interaction: an additional bidoublet scalar field with vacuum expectation value

    ⟨Σ⟩ = ( u  0 )
          ( 0  u )

breaks the SU(2)l × SU(2)h × U(1) symmetry down to the SU(2)l+h × U(1) symmetry of the Standard Model, where u is the vacuum expectation value (VEV) of Σ with u > v, and v is the VEV of the Higgs field. The SM Higgs field couples only to SU(2)l, which is why fermions of the third generation have to gain mass by another mechanism (which is not a natural part of this theory; for possible mechanisms see [23]). This could be a possible explanation for the much higher masses of these particles compared with the ones from the first and second generation.

After symmetry breaking, the two gauge groups SU(2)l and SU(2)h mix with a mixing angle φ, which yields the following relation between the coupling constants of the two gauge groups (gl, gh):

    gl sin θ cos φ = gh sin θ sin φ = g′ cos θ = e

where θ is the electroweak mixing angle (or Weinberg angle) and g′ the U(1) coupling constant.

Since the gauge couplings are required to stay perturbative (g²_{l,h}/4π < 1), a constraint on the mixing angle is obtained:

    0.03 < sin²φ < 0.96   (1.20)

The additional gauge group gives rise to two additional charged (W′⁺, W′⁻) and one neutral (Z′) gauge boson with degenerate masses

    m²_W′ = m²_Z′ = m²_W / ((v²/u²) sin²φ cos²φ)   (1.21)

with

    m_W = e·v / (2 sin θ)   (1.22)

being the SM W mass at leading order.

The coupling of the W′ to the fermions of the different generations depends on how large the contributions of the two SU(2) gauge groups to the W′ are, and therefore on the mixing angle φ:

    G′_{L,1,2} = −(g/√2) · tan φ                      (1st, 2nd generation)   (1.23)

    G′_{L,3}   = −(g/√2) · tan φ · (1 − 1/sin²φ)      (3rd generation)        (1.24)

Therefore the branching ratio BR(W′ → f f̄′) and the (partial) decay width of the W′ also depend on this mixing angle. The BR as a function of sin²φ and the decay width as a function of m_W′ for different values of sin²φ are shown in Figure 1.5. The decay width is calculated as

    Γ_W′ = m_W′ · 1/(48π) · (16 G′²_{L,1,2} + 2 G′²_{L,3} + 3 G′²_{L,3} F(m_t/m_W′, m_t/m_W′))   (1.25)

which is a generalized form of Equation 1.18 with the new couplings from Equations 1.23 and 1.24.
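The couplings of Equations 1.23 and 1.24 and the width of Equation 1.25 can be evaluated in the same way (again with assumed values for g and m_t; the kinematic factor F is the one of Equation 1.19, redefined here so the sketch is self-contained):

```python
import math

G_WEAK = 0.653   # SU(2) coupling g (approximate SM value; an assumption)
M_TOP = 173.0    # top quark mass in GeV (an assumption)

def f_phase_space(x1, x2):
    """Kinematic factor F of Eq. (1.19)."""
    return (2 - x1**2 - x2**2 - (x1**2 - x2**2)**2) * math.sqrt(
        (1 - (x1 + x2)**2) * (1 - (x1 - x2)**2))

def nugim_couplings(sin2phi):
    """W' couplings to light (Eq. 1.23) and third (Eq. 1.24) generation fermions."""
    phi = math.asin(math.sqrt(sin2phi))
    g12 = -G_WEAK / math.sqrt(2) * math.tan(phi)
    g3 = -G_WEAK / math.sqrt(2) * math.tan(phi) * (1 - 1 / sin2phi)
    return g12, g3

def width_nugim(m_wprime, sin2phi):
    """Total NUGIM W' width following Eq. (1.25), in GeV."""
    g12, g3 = nugim_couplings(sin2phi)
    x = M_TOP / m_wprime
    return m_wprime / (48 * math.pi) * (
        16 * g12**2 + 2 * g3**2 + 3 * g3**2 * f_phase_space(x, x))
```

At sin²φ = 0.5 the two couplings have equal magnitude and Equation 1.25 reduces to the SSM width of Equation 1.18; for small sin²φ the third-generation coupling grows and the width increases rapidly, as described in the text.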

The width of the W′ increases steeply as the mixing angle approaches the perturbativity bounds of Equation 1.20.

Figure 1.5: Branching fraction [22] and decay width of the NUGIM W′ for masses between 0.5 and 3.5 TeV.

The distributions of the invariant mass and the transverse mass² of the W′ for different parameter points are shown in Figure 1.6. The large decay width of the W′ for small values of the mixing angle sin²φ leads to mass spectra with nearly no peak at the mass of the boson, which makes it complicated to separate the signal from the background and causes a decrease of sensitivity relative to the search for a SSM W′. The parameter space investigated in this analysis covers the range 0.031 ≤ sin²φ ≤ 0.3, where the tau decay channel is the most sensitive leptonic decay channel, since the branching ratio into taus is larger than into electrons or muons, as can be seen in Figure 1.5. The signal processes are calculated with the matrix element program MadGraph [24] and the events are then hadronised with the event generator PYTHIA [25].

1.5.2 Existing exclusion limits for W ′ in the NUGIM

Since the existence of additional gauge bosons would affect many properties of the Standard Model, indirect limits on the W′ can be obtained from precision measurements of SM predictions. Additionally, direct limits have been obtained by previous searches for a SSM W′ in the electron and muon channels, which have been reinterpreted in terms of a NUGIM W′. The latest limits on the parameter space of the NUGIM are shown in Figure 1.7. The following sections give a short description of the different limits shown in this plot.

APV: atomic parity violation [26]

Atomic parity violation is the impact of the parity violation of the weak interaction on atomic physics. Many experiments have been performed which quantify the impact of parity violation on different observables. These results are in good agreement with SM predictions. Since the existence of a universality-violating W′ would alter some of these values, exclusion limits on the NUGIM parameter space can be derived from these measurements.

²The transverse mass is the invariant mass obtained by restricting the kinematics to the plane transverse to the beam axis of the collider. For detailed information see Chapter 6.

LEP precision

The LEP experiment [27] measured many parameters of the Standard Model with very high precision. The number of measured parameters is higher than the number of free parameters of the SM, so this over-constrained system can be used for consistency checks [28]. A fit of the SM to a subset of these parameters yields theoretical predictions for the remaining ones, which can be compared to the direct measurements. Since additional gauge bosons would contribute to loop corrections not predicted by the SM, this can be used to calculate limits on the parameters of these new gauge bosons and thereby on the parameters of the NUGIM.

CKM unitarity

The Cabibbo–Kobayashi–Maskawa (CKM) matrix [29] describes the coupling of the weak charged currents to the different quark currents. Since the physical quarks are mass eigenstates of the Hamiltonian, which are mixtures of the weak eigenstates, the quark generation can change at a W vertex. The probability for such a transition is proportional to the square of the corresponding CKM matrix element. The SM CKM matrix for three fermion generations is unitary due to the universality of the weak interaction; the nonuniversality of the NUGIM would give rise to a non-unitary CKM matrix. All elements of the CKM matrix have been measured directly and are in agreement with unitarity, so limits on the NUGIM can be derived.

LFV: lepton flavor violation

Lepton flavor violation measurements yield the strongest indirect limits on the NUGIM, because lepton flavor violation is forbidden in the SM. Additional gauge bosons with nonuniversal couplings would give rise to flavor changing neutral currents and lepton flavor violating processes at tree level, which have not been observed.

Direct limits from searches at CMS and ATLAS

Searches for new gauge bosons have been performed at the LHC with the CMS and ATLAS detectors over the last two years. Reinterpretations of these searches have yielded constraints on the parameter space of the NUGIM. Since only the electron and muon decay channels of the W′ boson have been investigated so far, it has not yet been possible to set limits on the mass of the W′ for small values of sin²φ.


Figure 1.6: Mass spectra of a M = 2 TeV NUGIM W′ for three different values of the parameter sin²φ (0.031, 0.05, 0.3). The left plot shows the full mass spectrum M(τ,ν) and the right plot the spectrum of the transverse mass M_T(τ,ν). The plots are made with full information from simulation.

Figure 1.7: Existing exclusion limits on the NUGIM parameter space determined by previous searches and indirect limits [22]. Everything below the lines is excluded by the given process. No direct limits have been determined for small mixing angles yet, since only the electron and muon channels have been investigated so far and only the tau channel is sensitive to these small values of sin²φ.


2 The Experiment

The analysis in this work is performed with data taken by the Compact Muon Solenoid (CMS) detector, located at CERN near Geneva, one of the four main particle detectors of the Large Hadron Collider (LHC). In this chapter the basic principles and the current status of the LHC are presented, and the setup and components of CMS are described.

2.1 The Large Hadron Collider

The LHC [30] is currently the world's largest hadron collider, designed to accelerate protons up to 7 TeV and lead ions up to 2.76 TeV per nucleon¹. It is a superconducting storage ring located in the tunnel originally built for the "Large Electron Positron Collider" (LEP) [27]. The tunnel lies between 45 and 170 meters underground and has a circumference of 26.7 kilometers. Most of the already existing smaller accelerators at CERN are used as pre-accelerators, and the "Super Proton Synchrotron" (SPS) serves as the main injector for the LHC. Four main experiments are performed at the LHC: CMS (which will be discussed in detail later), ATLAS (A Toroidal LHC ApparatuS) [31], ALICE (A Large Ion Collider Experiment) [32] and LHCb (beauty experiment) [33]. CMS and ATLAS are multi-purpose detectors designed to search for new physics and particles at high energy. ALICE is designed for the special conditions of ion collisions, investigating states of matter similar to those shortly after the big bang, such as the quark-gluon plasma. LHCb is an asymmetric detector that investigates forward physics of strongly boosted events; the focus of its analysis is b-quark physics and CP violation. The whole accelerator complex and the four main experiments are shown in Figure 2.1.

One of the main goals of the LHC is to search for new physics beyond the Standard Model, which typically gives rise only to very rare processes with small production cross sections (σ). The cross section is measured in units of barn (1 b = 10⁻²⁴ cm² = 10¹⁵ fb) and describes the probability for a particle process to happen if two initial-state particles interact. The number of events (N_ev) of a process is related to the cross section by

    N_ev = ∫ L dt · σ   (2.1)

where L is the luminosity of the collider at the interaction point. Since σ is very small but analyses need sufficiently high statistics, the luminosity of the LHC has to be large. The luminosity at one interaction point is given by various properties of the proton beam:

¹This study only uses data from proton-proton collisions, and therefore only this mode of operation will be described.


Figure 2.1: The LHC accelerator complex with pre-accelerators, main injector and experiments [34].

    L = (N_b² n_b f_rev γ) / (4π ε_n β*) · F   (2.2)

where N_b is the number of protons in one bunch, n_b the number of bunches per beam, f_rev the number of revolutions per second, γ the relativistic gamma factor, ε_n the normalized transverse beam emittance, β* the beta function at the collision point, and F a geometrical factor which accounts for the non-zero crossing angle of the two beams. The emittance is the phase space accessible to the protons transverse to the beam, and the beta function describes the longitudinal focus (squeeze) of the proton bunch. The only way to achieve a sufficiently high luminosity is to insert a very high number of bunches per beam with a very high number of protons each. Therefore the only reasonable design for the LHC was a proton-proton collider instead of a proton-antiproton collider like its predecessor, the "Tevatron" at Fermilab [35], since it would be too elaborate to produce enough antiprotons. The protons are kept on their orbit inside two separate vacuum pipes, one for each direction, by 1232 superconducting dipole magnets which can reach a magnetic field strength of B = 8.33 T.
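Equation 2.2 can be evaluated numerically. The beam parameters below are rough 2012-like values assumed for illustration (the revolution frequency, γ, emittance and geometrical factor are assumptions, not quoted in the text):

```python
import math

def luminosity(n_protons, n_bunches, f_rev, gamma, emittance_n, beta_star, f_geo):
    """Instantaneous luminosity from Eq. (2.2), returned in cm^-2 s^-1.

    emittance_n and beta_star are given in metres; the result is
    converted from m^-2 s^-1 to cm^-2 s^-1.
    """
    lumi_m2 = (n_protons**2 * n_bunches * f_rev * gamma * f_geo
               / (4 * math.pi * emittance_n * beta_star))
    return lumi_m2 * 1e-4

# Rough 2012-like parameters (assumed for illustration):
lumi = luminosity(n_protons=1.5e11, n_bunches=1374, f_rev=11245,
                  gamma=4264, emittance_n=2.5e-6, beta_star=0.6, f_geo=0.8)
```

With these inputs the result comes out at a few times 10³³ cm⁻²s⁻¹, the order of magnitude of the peak luminosity quoted below.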

The LHC has not yet run at its design center of mass energy (√s) of 14 TeV due to some technical issues; it started running in the years 2010 and 2011 with √s = 7 TeV. The data for this analysis were taken in the year 2012, when the energy was increased to √s = 8 TeV. A bunch spacing of 50 ns is used, resulting in 1374 bunches per beam. Each bunch contains approximately 150 billion protons within a squeeze of β* = 0.6 m, leading to an observed peak luminosity of L = 7.7 · 10³³ cm⁻²s⁻¹.

At this high luminosity, more than one interaction occurs at each bunch crossing, where most of these events are due to QCD interactions with low momentum transfer. These additional interactions are called pileup and pose challenges for the object reconstruction, since the events contain many hadrons which can overlap with the objects of interest in the analysis.

2.2 The Compact Muon Solenoid

The Compact Muon Solenoid [36] is a multi-purpose particle detector designed to search for rare processes in a high background environment with large particle multiplicities. It was designed as a classical collider experiment, with different layers of subdetectors arranged symmetrically around the interaction point in a cylindrical geometry. The detector has a diameter of 14.6 meters and a length of 16 meters. To measure as many particles as possible per collision, a hermetic 4π coverage is aimed for. The central part of the detector is called the "barrel" and the forward parts are called the "endcaps". The two regions differ in construction and in the subdetector types used, to account for the different experimental conditions.

CMS was designed for four main requirements:

• Good muon identification and momentum reconstruction up to high energies, demanding an exact track reconstruction and a good muon system

• Good triggering and offline τ and b-jet tagging requiring good vertex reconstruc-tion. This is realized with a pixel detector close to the interaction point.

• Good electromagnetic energy reconstruction and photon identification at high lu-minosities

• Good missing transverse energy reconstruction

The core quality of CMS is the excellent momentum resolution for charged particles, achieved by a silicon tracker with high spatial resolution inside a 3.8 T magnetic field. The magnetic field is provided by a superconducting solenoid large enough to cover not only the tracker but also the electromagnetic and hadronic calorimeters. The calorimeters are located directly outside the tracker, with the electromagnetic one coming first. Outside the solenoid coil there is an additional hadronic calorimeter called the "hadron outer". Its purpose is to detect non-muon particles penetrating the whole calorimeter and the magnet (so called "punch throughs"). The last part of the detector is the muon system, which is mounted between the layers of the magnetic field return yoke. An overview of the detector is shown in Figure 2.2.

In the following section, the different parts of the detector are described in more detail.



Figure 2.2: Overview of the CMS detector and all its subdetectors [36].

2.2.1 The coordinate system

CMS uses cylindrical coordinates with the origin at the nominal collision point. The x-axis points towards the LHC center, the y-axis upwards perpendicular to the LHC plane, and the z-axis in beam direction. The azimuthal angle φ is measured from the x-axis in the x-y plane (from now on called the φ plane), and the polar angle θ is defined relative to the z-axis. Instead of θ the pseudorapidity

    η = −ln(tan(θ/2))   (2.3)

is used, since distances in η are invariant under Lorentz boosts along the beam axis. The momentum of a particle transverse to the beam direction is defined as the projection of the total momentum onto the φ plane. The energy calculated from this transverse momentum is called the "transverse energy" and points in the direction of the momentum. The negative sum of all transverse energies is called the "missing transverse energy", which is the only way to gain information about escaping particles like neutrinos. This will be discussed in more detail in Chapter 3.
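Equation 2.3 and its inverse can be written out directly, as a small sketch:

```python
import math

def pseudorapidity(theta):
    """Eq. (2.3): eta = -ln(tan(theta/2)), for a polar angle theta in (0, pi)."""
    return -math.log(math.tan(theta / 2))

def polar_angle(eta):
    """Inverse of Eq. (2.3): theta = 2 * atan(exp(-eta))."""
    return 2 * math.atan(math.exp(-eta))
```

A particle perpendicular to the beam (θ = π/2) has η = 0, while η grows without bound as θ approaches the beam axis.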

2.2.2 Silicon tracker

The silicon tracker is divided into two parts: the inner pixel detector and the outer strip detector. An overview is shown in Figure 2.3.

The pixel detector has a higher granularity but fewer layers than the strip detector. The pixels enable a direct two dimensional localization, and the pixel detector has a better spatial resolution than the strip detector. Its main purpose is the identification of the interaction point by reconstructing the primary vertex of the event, and the measurement of additional decay vertices. This is for example needed for b-jet tagging: a b-jet is identified by the offset of its vertex from the primary vertex, caused by the lifetime of the b-quark.

Figure 2.3: Scheme of the CMS silicon tracker [36].

The pixel tracker contains 66 million silicon pixels of size 100 × 150 µm², distributed over three layers in the barrel and two disks in each endcap. The innermost layer is only 4.4 cm away from the interaction point. A three dimensional position determination with a resolution of approximately 15 µm is possible.

It is not possible to build the whole tracker out of pixels, since the number of readout channels would become too large. Since the particle flux decreases with radius, it is possible to change to a strip based tracker in the outer region without getting too many hit ambiguities. The strip size is 10 cm × 80 µm at intermediate distances to the beamspot and 25 cm × 180 µm in the outer region. The first two layers of strips in the barrel and the strips on disks number 1, 2 and 5 of the endcaps carry an additional second micro strip detector module, tilted by 100 mrad to enable a measurement of the second spatial coordinate (z in the barrel, r on the endcaps).

The tracker has an overall acceptance of |η| < 2.5 and a momentum resolution of 1–2 % in the central region (|η| < 1.6) for particles with pT < 100 GeV. Outside this η range the resolution decreases due to a shorter lever arm. Since the momentum of a particle is measured by determining the bending of its track, which becomes less precise for a larger bending radius at fixed lever arm, the momentum resolution degrades with rising momentum as

    σ(pT)/pT ∝ pT.

2.2.3 Electromagnetic calorimeter

The electromagnetic calorimeter (ECAL) is a homogeneous calorimeter made of 75 848 lead tungstate (PbWO₄) crystals, covering a range up to |η| = 3.0. The crystals are mounted quasi-projectively towards the interaction point, with a small angle (3°) relative to the vector pointing from the interaction point to the crystal, to avoid particle trajectories traversing the gaps between two crystals. The large number of high density crystals yields a fast calorimeter with fine granularity, perfectly suited to measure the energy and direction of photons and electrons. The good photon reconstruction is crucial to analyze one of the golden decay channels of the Higgs boson, the decay into two photons.

The PbWO₄ crystals have a density of 8.28 g/cm³ and a radiation length of 0.89 cm. With a crystal length of 230 mm, this results in a calorimeter depth of approximately 26 radiation lengths for electromagnetic particles. The scintillation time of the crystals is close to the design bunch crossing time of the LHC: 80 % of the light is emitted within 25 ns. The light is measured with avalanche photodiodes in the barrel and vacuum phototriodes in the endcaps.

To distinguish photons from the hard interaction from photons arising from π⁰ decays, a preshower detector is installed in the endcaps in front of the ECAL. It consists of two layers of lead radiators, each followed by a silicon strip sensor layer. Neutral pions heading toward the endcaps mostly have a high boost, and therefore the photons are collimated. The granularity of the preshower detector is much higher than that of the ECAL, so it can distinguish the shower profiles of two overlapping photons much better.

A sketch of the ECAL can be seen in Figure 2.4.

The energy resolution of the ECAL improves with rising particle energy due to increasing photon statistics. The uncertainty on the measured energy is described by Equation 2.4:

    (σ/E)² = (S/√E)² + (N/E)² + C²   (2.4)

The resolution is composed of three terms: the first is the stochastic term, which contains the effect of variations in the lateral shower containment and fluctuations in the energy deposition in the preshower detector (where present). The second is the noise term, which contains electronics, digitization and pileup noise, and the constant third term describes intercalibration errors.

In test measurements the energy resolution was determined to be approximately

    (σ/E)² = (2.8 %/√E)² + (12 %/E)² + (0.3 %)²   (2.5)

with E in GeV.

2.2.4 Hadronic calorimeter

The hadronic calorimeter (HCAL) is important to reconstruct hadronic jets and the missing transverse energy. For this purpose it is crucial to measure the full energy of all particles (except for muons, which are measured in the muon system and the tracker since they are not stopped in the calorimeters) over a maximal η range. Since space is limited by the magnet coil, a very dense material is needed to achieve a sufficiently high areal density. The HCAL is realized as a sandwich calorimeter with 14 layers of brass interleaved with plastic scintillators in the barrel, resulting in an absorber depth of at least 5.8 hadronic interaction lengths (λI), and 17 layers of scintillators in the endcaps. The spatial resolution of the HCAL is worse than that of the ECAL, with ∆η × ∆φ = 0.087 × 0.087 for |η| < 1.6 and ∆η × ∆φ = 0.17 × 0.17 for |η| > 1.6. The ECAL adds material equivalent to an additional λI = 1.1. For all hadronic showers not stopped within the HCAL, the hadron outer calorimeter functions as a so called "tail catcher": its purpose is to measure the end point of these showers to still obtain a reasonable energy measurement. This results in an overall hadronic interaction length of at least λI = 11.8 in every direction.

Figure 2.4: Sketch of the electromagnetic calorimeter [37].

To extend the η coverage of the HCAL, an additional hadronic calorimeter is installed in the very forward region (3 < |η| < 5.2). Since the radiation exposure is very high in this region, a very robust detector design is needed. This very forward calorimeter is realized as a Cherenkov detector consisting of iron absorbers and quartz fibres to collect the light. For geometrical reasons the detector is located 11.2 m from the interaction point.

The energy resolution of the HCAL is

    (σ/E)² = (C₁/√E)² + C₂²

which has the same form as for the ECAL but with a much worse stochastic term (C₁ = 100–150 %). However, since no particle is measured solely in the HCAL, a good energy resolution is not as crucial as for the ECAL.

2.2.5 Magnet

One important feature of the CMS detector is the excellent momentum resolution for charged particles measured by the silicon tracker. To enable this resolution up to high energies, sufficient bending by a very strong magnetic field is needed. Since a minimal amount of material should be in front of the calorimeters, to affect the particles as little as possible, the magnet coil was designed big enough to contain not only the tracker but also the main calorimeters. This leads to the challenging task of creating a strong and homogeneous magnetic field in a large volume. The magnet is realized as a superconducting solenoid with four layers of NbTi conductor, a diameter of 6.3 m and a length of 12.5 m. The field strength inside the coil is 3.8 T; the field is returned outside through a 10 000 t four layer iron yoke. The cold mass of 220 t stores an energy of 2.6 GJ and is cooled by liquid helium at a temperature of 4.65 K.


Figure 2.5: Profile of the muon system [38].

2.2.6 Muon system

The muon system is located outside the solenoid and is mounted between the layers of the magnet return yoke. It consists of three different detector types: drift tubes (DT) in the barrel and cathode strip chambers (CSC) in the endcaps as the precision detectors, and resistive plate chambers (RPC) as fast trigger detectors in both regions. DTs are drift chambers with a separate gas volume for each anode wire; their spatial resolution is approximately 100 µm per station. CSCs are multi-wire proportional chambers with six layers of anode wires and seven layers of cathode strips. The cathode strips are segmented perpendicular to the anode wires, enabling a two dimensional spatial measurement (again with a resolution of approximately 100 µm). RPCs are parallel plate gas chambers with a worse spatial resolution than DTs or CSCs but with a much better time resolution of less than 25 ns, which makes them ideal for triggering.

DTs are susceptible to magnetic fields and can only be used in the barrel, where the field strength between the layers of the yoke is relatively small. CSCs and RPCs are less sensitive to magnetic fields due to their shorter drift distances and can be used in the endcaps, where the magnetic field is relatively strong even between the layers of the yoke.

A profile of the muon system can be seen in Figure 2.5. The barrel is divided into five wheels with four detector stations in radial direction. The first three stations contain four layers of DTs measuring the z coordinate and eight layers measuring the φ coordinate, while the fourth station only measures the φ coordinate. The barrel also contains six layers of RPCs, while each endcap contains three layers of RPCs and four stations of CSCs.


3 Object Reconstruction

Before physics analyses can be performed on data, the different physics objects of each event have to be reconstructed from the raw detector output. In this analysis, events containing a hadronically decaying τ and missing transverse energy are considered, so the focus of this chapter is on the reconstruction of these two objects. Both are reconstructed using the results of the "particle flow" (PF) event reconstruction algorithm, which is explained first.

3.1 Particle Flow Algorithm

General idea and advantages

The particle flow algorithm [39] takes a novel approach to reconstructing physics objects compared with most previously used reconstruction algorithms. Instead of trying to reconstruct one specific object from an event using only the subdetectors designed for that object (for example tracker plus ECAL for electrons, or tracker plus muon system for muons), it uses all detector components to reconstruct every stable object at the same time. In this way ambiguities are avoided, since the algorithm ensures that any input object (e.g. a track or a calorimeter cluster) is included in only one reconstructed object. One main advantage of this approach is the detailed event information in a rather simple form: the output provides a nearly complete list of all particles within the event, namely electrons, muons, photons, charged hadrons and neutral hadrons. Information about neutrinos can only be gained indirectly through the missing transverse energy. The reconstruction of hadronic jets profits most from this detailed information. A jet is a collimated bunch of hadronic objects arising from the hadronisation of one primary object. Classic jet reconstruction algorithms simply sum up all calorimeter clusters or tracks which fulfill a jet requirement, for example a maximum distance relative to the nearest object already included in the jet. Such algorithms typically provide only little information about the jet, like its momentum and direction, which makes it hard to identify from which kind of primary particle the jet originates. Since particle flow reconstructs all stable hadrons within a jet before clustering them, the composition of the jet and its evolution within the detector are resolved. This enables robust conclusions about the jet origin, which is crucial for the identification of hadronically decaying taus in this analysis.

The algorithm

As mentioned before, the algorithm uses the combination of all subdetector information, which first has to be made compatible. A scheme of the full particle-flow algorithm is shown in Figure 3.1 and is explained in the following:

In a first step, the algorithm creates general, subdetector specific objects from the detector response (so called "elements"): charged particle tracks, calorimeter clusters and muon tracks.

The charged particle tracks are built from the tracker hits by an iterative tracking algorithm which first reconstructs tracks with very tight criteria and then goes on to reconstruct additional tracks with looser criteria out of the remaining hits.

The purpose of the calorimeter clusters is to divide the energy deposition inside the calorimeters into separate objects, so that they can be associated to individual particles later. This is a very challenging task since particles deposit their energy over quite a large area, which can easily lead to overlaps. To do this, local energy maxima are identified as "seed" cells, among which the surrounding energy is then distributed according to the shape of the energy deposition.

Since one particle can give rise to more than one element, the different elements have to be linked into "blocks". Links between the elements are not unambiguous, so the linking algorithm tentatively combines each possible element pair and quantifies the quality of the link. For example, the linking of a charged particle track and a calorimeter cluster is done by extrapolating the track into the calorimeter and determining the distance to the cluster position in the (η, φ) plane. The linking of a charged particle track in the tracker and a muon track in the muon system is done by performing a global fit including the hits of both tracks; the χ² of the global fit quantifies the linking quality.
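The track-to-cluster link quality described above can be sketched as follows. This is an illustrative Python sketch, not CMS software; the extrapolated track position and the cluster positions are hypothetical (η, φ) tuples, and the link-quality criterion is simply the ∆R distance:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Distance in the (eta, phi) plane, with the phi difference wrapped to [-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def link_track_to_clusters(track_pos, clusters, max_dr=0.3):
    """Tentatively link an extrapolated track position to the nearest
    calorimeter cluster; the distance serves as the link quality."""
    best = None
    for cluster in clusters:
        dr = delta_r(track_pos[0], track_pos[1], cluster[0], cluster[1])
        if dr < max_dr and (best is None or dr < best[1]):
            best = (cluster, dr)
    return best

# Extrapolated track at (eta, phi) = (0.5, 1.0); two candidate clusters
match = link_track_to_clusters((0.5, 1.0), [(0.52, 1.05), (1.5, -2.0)])
```

The `max_dr` cut-off is an assumed illustration value, not the value used in the CMS linking step.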

In the next step, particles have to be identified among the blocks and ambiguities have to be resolved. The identification starts with muons since they are the easiest to identify: all blocks of charged particle tracks and muon tracks leading to a global muon fit with an acceptable χ² are considered as particle-flow muons. If there is more than one track linked to a muon track, only the one leading to the smallest χ² is considered for the muon reconstruction and removed from the blocks. Next, electrons are reconstructed out of linked tracks and clusters by a Gaussian-Sum Filter (GSF) fit [40]. The GSF fit takes into account the energy loss due to bremsstrahlung; tangents are attached to the track in order to search for ECAL clusters caused by radiated photons. Finally, a combination of track and calorimeter variables is used to identify these candidates as electrons.

After some additional cleaning for low energy muons, all remaining tracks give rise to particle-flow charged hadrons. The energy is taken directly from the momentum measurement of the track under the assumption that the charged hadron is a pion. If the energy measured by the calorimeter is compatible with this, the energy is redefined using this information as well, which improves the reconstruction at high energies where the momentum resolution of the track is worse. If the calorimeter cluster energy significantly exceeds the charged-particle momentum, the remaining ECAL energy is identified as a particle-flow photon and the remaining HCAL energy as a neutral hadron. In a last step, all remaining clusters not linked to any track are identified as photons or neutral hadrons.

The last part of the particle flow algorithm is the construction of composite objects out of the list of reconstructed particles, like jets, tau leptons and missing transverse energy. The reconstruction of these objects is described in more detail since this analysis relies on them.

Figure 3.1: Scheme of the particle flow algorithm. Tracking and clustering turn the tracker, calorimeter and muon system hits into PFTracks, PFClusters and global muons, which are linked into PFBlocks; the PF algorithm then identifies PF muons, electrons, photons and hadrons, out of which PF jets, PF taus and PF MET are built.

3.2 Particle Flow Jets

The most common objects arising from proton-proton collisions are hadronic jets (from now on referred to as "QCD jets" since they arise from QCD induced processes). A single quark or gluon created in the hard interaction of two partons will produce a large number of hadrons due to the confinement of the strong force. In order to reconstruct the properties of the initial quark or gluon, a reliable algorithm is needed to identify all emerging particles and to combine them. Important qualities of jet algorithms are infrared and collinear safety: the first means that the properties of the jet are not strongly affected by soft QCD radiation, and the latter means that the result is safe against parton splitting. Collinear safety can be illustrated best by an example of an algorithm which is not collinear safe: in a simple approach, a jet can be reconstructed by searching for the object with the highest energy and adding all particles within a fixed cone. Since the energy distribution between the objects is strongly affected by the various splitting processes during hadronisation, the properties of this jet are also strongly affected by these mainly random processes.

In this work the anti-kt algorithm [41] is used. Like the commonly used Cambridge/Aachen algorithm, it performs an iterative clustering with a pT-dependent distance parameter, based on a selection of so-called protojets. For particle flow jets, every PF object is a protojet except for well identified electrons and muons. The algorithm starts with the highest energy protojet as a seed and merges it with the nearest one into a new protojet if the distance is within a given boundary. This step is repeated until no protojet within the boundary is left. After that, the algorithm starts again with the highest-energy remaining protojet as the seed. The distance is defined as

d_ij = min(k_{T,i}^{2p}, k_{T,j}^{2p}) · (∆R)² / r²        (3.1)

[41], where k_T is the transverse momentum of the protojet, ∆R the distance between the two protojets in (η, φ), and r and p are parameters of the algorithm. For the anti-kt algorithm, p is set to −1.
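The distance measure of Equation 3.1 can be written down directly. The following Python sketch is illustrative only (the actual clustering is done with the published anti-kt implementation [41]); protojets are represented as hypothetical dictionaries holding k_T, η and φ:

```python
import math

def delta_r2(a, b):
    """Squared distance in the (eta, phi) plane between two protojets."""
    dphi = (a["phi"] - b["phi"] + math.pi) % (2 * math.pi) - math.pi
    return (a["eta"] - b["eta"]) ** 2 + dphi ** 2

def dij(a, b, p=-1, r=0.5):
    """Pairwise distance of Equation 3.1; p = -1 gives the anti-kt measure."""
    return min(a["kt"] ** (2 * p), b["kt"] ** (2 * p)) * delta_r2(a, b) / r ** 2

def di_beam(a, p=-1):
    """Distance of a protojet to the beam axis."""
    return a["kt"] ** (2 * p)

# With p = -1 a soft protojet near a hard one has a very small d_ij,
# so soft particles cluster around the hardest objects first:
hard = {"kt": 100.0, "eta": 0.0, "phi": 0.0}
soft = {"kt": 1.0, "eta": 0.1, "phi": 0.0}
```

The value r = 0.5 is an assumed illustration; this sketch shows only the distance measure, not the full clustering loop.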

3.3 The Hadron plus Strips Tau Reconstruction Algorithm

The tau decay time of τ ≈ 290 · 10⁻¹⁵ s is too short to detect the particle directly at CMS since the distance from the interaction point to the detector layers is too large. Only its decay products are measured and then used to obtain information about the tau. It can decay leptonically or hadronically (see Chapter 1), but only the hadronic decay modes are used here for reconstruction since it is very hard to determine whether an electron or muon arises from a tau decay or is in fact the primary particle (see Chapter 1). The purpose of the Hadron plus strips (HPS) algorithm [42] is to identify whether a hadronic jet arises from a hadronic tau decay (from now on referred to as "tau jet") or a different primary particle, and to re-reconstruct the jet energy under the assumption of a tau decay. The algorithm uses the list of particles reconstructed by the PF algorithm. The HPS algorithm is not able to reconstruct the full tau energy since undetectable neutrinos occur in the tau decay.

Three properties distinguish a tau jet from a quark or gluon jet: the narrowness of the jet, a small particle multiplicity with one dominating charged hadron, and a characteristic invariant mass of the jet. These properties are due to the tau decay properties described in Chapter 1: a tau decays only into meson resonances, which are mainly the pion, the ρ(770) meson or the a₁(1260) meson. Since the charged pion is stable enough for direct detection and the two other mesons decay into only a small number of particles, the tau jet is narrow and has a small particle multiplicity. The masses of the mesons are well known and can be identified via the invariant mass of the jet. A QCD jet instead arises from a parton which hadronises, due to the confinement of the strong force, into a large number of hadrons.

3.3.1 Tau jet reconstruction

The tau jet reconstruction exploits the well known composition of tau jets, depending on the decay mode. Four different experimental signatures are considered, arising from four different decay chains of the three main decay modes. The final states of the four decay chains are:

1. One π± (BR = 11.6 %)

2. One π± plus one π0 (τ → ρ(770) ντ → π± π0 ντ ) (BR = 26 %)

3. One π± plus two π0s (τ → a₁(1260) ντ → π± π0 π0 ντ ) (BR = 9.5 %)


4. Three π± (τ → a₁(1260) ντ → π± π∓ π± ντ ) (BR = 9.8 %)

The reconstruction of a tau jet starts with a PF jet, previously reconstructed by the PF algorithm, as the initial seed. As the first step, the HPS algorithm tries to reconstruct all π0s within the jet. The π0 is not part of the PF particle list since it decays nearly instantaneously into two photons, which can turn into an electron-positron pair by photon conversion. Due to the strong magnetic field inside of CMS, these charged particles get bent in the φ direction. The HPS algorithm takes this broadening into account by reconstructing photons in "strips" with a size of ∆η = 0.05 and ∆φ = 0.2. It starts by placing a strip around the most energetic electromagnetic object inside the PF jet and then searching for the next most energetic electromagnetic object within the strip. The four-momenta of the two objects are combined, and the search is repeated until no further electromagnetic object can be associated. Finally, all strips with a transverse momentum p_T^strip > 1 GeV are combined with the charged hadrons. Since a clear assignment of strips to neutral pions is not possible, the physical final states of the hadronic tau decay are replaced by experimental final states:

1. One charged hadron (called “1Prong”)

2. One charged hadron plus one strip

3. One charged hadron plus two strips

4. Three charged hadrons (called “3Prong”)

where two strips can either arise from two separated photons from one π0 or from two π0s whose photons overlap.
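The iterative strip building described above can be sketched as follows. This is an illustrative simplification, not the HPS implementation: electromagnetic PF objects are assumed to be dicts with pt, eta and phi, the strip window is centered on a running pT-weighted strip position (standing in for the actual four-momentum sum), and the window half-sizes are taken from the ∆η = 0.05, ∆φ = 0.2 strip size quoted above:

```python
def build_strip(em_objects, d_eta=0.05, d_phi=0.2):
    """Collect electromagnetic objects into one strip around the most
    energetic one, iterating until no further object fits the window."""
    remaining = sorted(em_objects, key=lambda o: o["pt"], reverse=True)
    strip = dict(remaining.pop(0))  # seed: the most energetic EM object
    added = True
    while added:
        added = False
        for obj in list(remaining):
            if (abs(obj["eta"] - strip["eta"]) < d_eta / 2
                    and abs(obj["phi"] - strip["phi"]) < d_phi / 2):
                # pt-weighted combination stands in for the four-momentum sum
                tot = strip["pt"] + obj["pt"]
                strip["eta"] = (strip["eta"] * strip["pt"] + obj["eta"] * obj["pt"]) / tot
                strip["phi"] = (strip["phi"] * strip["pt"] + obj["phi"] * obj["pt"]) / tot
                strip["pt"] = tot
                remaining.remove(obj)
                added = True
    return strip

photons = [{"pt": 10.0, "eta": 0.00, "phi": 0.00},
           {"pt": 5.0, "eta": 0.01, "phi": 0.05},
           {"pt": 3.0, "eta": 1.00, "phi": 1.00}]
strip = build_strip(photons)
```

In this example the two nearby photons are merged into one strip while the distant one is left out; whether the window is interpreted as a full size or a half-size is an assumption of this sketch.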

3.3.2 Tau identification

After tau candidates are reconstructed for each PF jet, the HPS algorithm tries to identify which tau candidates arise from a real tau. The algorithm provides a set of binary discriminators for each tau candidate, indicating whether it has passed a specific set of identification requirements. Each set of requirements has a specific reconstruction efficiency and misreconstruction (fake) probability. The efficiency of a discriminator is defined as the probability that a real tau passes its requirements, and the fake probability is defined as the probability that a tau candidate which is not due to a real tau decay passes the requirements. Which set of discriminators should be used depends on the goals of each analysis: in some cases a very pure sample is needed, in others higher statistics. A list of the most important discriminators is shown in Table 3.1.

The “decayModeFinding” discriminator

The most important discriminator, which is used in every analysis, is "decayModeFinding". It checks the tau jet for the main tau properties already mentioned above: first, all charged hadrons and strips have to be contained within a cone of size


Name                                  Selection

decayModeFinding                      Narrowness and jet mass requirement
AgainstElectronLoose                  MVA selection
AgainstElectronMedium                 MVA selection
AgainstElectronTight                  MVA selection
AgainstElectronMVA                    MVA selection
AgainstMuonLoose                      Tau leading track not matched to chamber hits
AgainstMuonMedium                     Tau leading track not matched to global/tracker muon
AgainstMuonTight                      Tau leading track not matched to global/tracker muon and large enough energy deposit in ECAL + HCAL
VLooseCombinedIsolationDBSumPtCorr    Isolation cone of 0.3, ∆β-corrected Σ pT of PF charged and PF γ isolation candidates (pT > 0.5 GeV) less than 3 GeV
LooseCombinedIsolationDBSumPtCorr     Isolation cone of 0.5, ∆β-corrected Σ pT of PF charged and PF γ isolation candidates (pT > 0.5 GeV) less than 2 GeV
MediumCombinedIsolationDBSumPtCorr    Isolation cone of 0.5, ∆β-corrected Σ pT of PF charged and PF γ isolation candidates (pT > 0.5 GeV) less than 1 GeV
TightCombinedIsolationDBSumPtCorr     Isolation cone of 0.5, ∆β-corrected Σ pT of PF charged and PF γ isolation candidates (pT > 0.5 GeV) less than 0.8 GeV
byLooseIsolationMVA                   MVA selection
byMediumIsolationMVA                  MVA selection
byTightIsolationMVA                   MVA selection

Table 3.1: List of HPS tau identification discriminators [43]. The discriminators and selection quantities are described in the text.

∆R = (2.8 GeV)/p_T^τh, and the direction of the reconstructed tau momentum p⃗_τh has to match the direction of the seed PF jet within ∆R < 0.1. These two requirements exploit the narrowness of the tau jet. In addition, the invariant mass of the tau jet has to match one of the mesons occurring in the tau decay. In Figure 3.2 the distribution of the invariant mass of jets from true tau decays is shown. To illustrate the decay mode finding, the meson mass distributions are included as well as the allowed mass windows. The mass windows are: 0.3 GeV < M_τh < 1.3 GeV for 1 Prong plus 1 strip jets (assumption: ρ(770) decay with two photons within one strip), 0.4 GeV < M_τh < 1.2 GeV for 1 Prong plus 2 strip jets (assumption: ρ(770) decay with two separated photons) and 0.8 GeV < M_τh < 1.5 GeV for 3 Prong jets (assumption: a₁(1260) decay). For the 1 Prong plus zero strips case there is no mass requirement.
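Put together, the decayModeFinding requirements amount to a few simple cuts. The sketch below is illustrative only; the mode labels and the input variables (maximum constituent distance from the tau axis, distance to the seed jet) are hypothetical stand-ins for the quantities described in the text:

```python
# Mass windows in GeV for each experimental final state
# (None = no mass requirement, as for 1 Prong plus zero strips)
MASS_WINDOWS = {
    "1prong0strip": None,
    "1prong1strip": (0.3, 1.3),
    "1prong2strip": (0.4, 1.2),
    "3prong": (0.8, 1.5),
}

def passes_decay_mode_finding(mode, jet_mass, jet_pt,
                              dr_constituents_max, dr_to_seed_jet):
    """Sketch of the decayModeFinding cuts: all constituents inside a
    signal cone of size 2.8 GeV / pT, tau direction matching the seed
    PF jet within dR < 0.1, and an invariant-mass window."""
    signal_cone = 2.8 / jet_pt  # cone shrinks with rising tau pT
    if dr_constituents_max > signal_cone:
        return False
    if dr_to_seed_jet > 0.1:
        return False
    window = MASS_WINDOWS[mode]
    if window is not None and not (window[0] < jet_mass < window[1]):
        return False
    return True
```

For a 100 GeV tau candidate the signal cone is 0.028, so a ρ(770)-like candidate with all constituents within ∆R = 0.02 passes, while a 3 Prong candidate with a mass of 0.3 GeV fails the mass window.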

The isolation discriminators

Since the decayModeFinding discriminator is optimized for high efficiency, the fake probability for QCD jets is still very high and further suppression is needed. This is done by exploiting the isolation of the tau: if a tau candidate is reconstructed within a QCD jet, it is typically accompanied by many other particles, while a true tau is well isolated. The HPS algorithm provides two different types of isolation discriminators, whereby one



Figure 3.2: Distribution of the invariant mass of true tau jets before reconstruction. Additionally shown are the mass distributions of the meson decay resonances on generator level and the decayModeFinding mass windows.

type uses a standard cut-based approach and the other a multivariate approach (MVA). There are different kinds of cut-based isolation discriminators, the latest of which is called "CombinedIsolationDBSumPtCorr". This discriminator probes a pileup-corrected absolute isolation: the transverse momenta of all PF charged and PF γ objects (with p_T^γ > 0.5 GeV) are summed up in a cone of size ∆R < 0.3 and corrected for pileup. The isolation cone is shown in Figure 3.3. The pileup correction is done separately for the charged and neutral contributions. The charged part is corrected by only taking into account tracks which originate from the same vertex as the tau, and the neutral part by using ∆β corrections. For the ∆β correction it is assumed that the neutral pileup part amounts to 50 % of the charged pileup part (which can be measured). This contribution is then subtracted from the isolation cone.
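The ∆β-corrected isolation sum can be sketched as follows. The function names and the flat lists of candidate pT values are hypothetical; the 2 GeV threshold corresponds to the loose working point of Table 3.1:

```python
def combined_isolation(charged_pv, charged_pu, photons, pt_min=0.5):
    """Delta-beta corrected isolation sum (sketch): charged candidates
    from the tau vertex plus photons, with half the pileup charged sum
    subtracted from the neutral (photon) part."""
    ch = sum(pt for pt in charged_pv if pt > pt_min)   # from the tau vertex
    ph = sum(pt for pt in photons if pt > pt_min)      # neutral part
    pu = sum(pt for pt in charged_pu if pt > pt_min)   # charged pileup
    return ch + max(0.0, ph - 0.5 * pu)

def passes_loose_isolation(iso_sum, threshold=2.0):
    """Loose working point: isolation sum below 2 GeV (Table 3.1)."""
    return iso_sum < threshold

# one 1 GeV charged candidate from the vertex, 1.5 GeV of photons,
# 2 GeV of charged pileup -> 1.0 + max(0, 1.5 - 1.0) = 1.5 GeV
iso = combined_isolation([1.0, 0.4], [2.0], [1.5])
```

The 0.4 GeV candidate is dropped by the 0.5 GeV threshold on individual isolation candidates; clamping the neutral part at zero prevents the ∆β subtraction from reducing the charged sum.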

The MVA isolation discriminators use a boosted decision tree on various jet propertiesto discriminate a tau jet against a QCD jet.

The anti electron and anti muon discriminators

Not only QCD jets can fake hadronic taus, but also electrons and muons.

An electron causes one dominant track and deposits all its energy in the calorimeter. This signature can easily be misidentified as a charged hadron plus photons from bremsstrahlung, which can be misidentified as π0s. The discrimination of electrons is done by a multivariate analysis, where various variables like properties of the shower shape or the ratio of the energy deposited in the ECAL and the HCAL are used by a boosted decision tree to separate electrons from taus.


Figure 3.3: Cone for the absolute tau isolation [43]. All momenta within the cone not associated to the tau are summed up.

The discrimination of muons is done by checking whether the leading track within the tau candidate matches a muon candidate reconstructed with the muon system.

3.4 Missing Transverse Energy

The missing transverse energy (MET) is an important quantity at every collider experiment since it is the only way to gain information about weakly interacting, (sufficiently) stable particles which leave the experiment undetected, like neutrinos. Since the momentum of the initial state of a collision is balanced in the transverse plane, the final state should also be balanced. If a particle remains undetected, this balance is violated, and the negative sum of all transverse energies is called missing transverse energy. The concept of a vectorial energy may seem strange, but it follows a simple convention: it is defined as the relativistic energy constructed out of the mass and the momentum in a given direction (in this case transverse to the beampipe). If there is only one undetected particle within an event, MET is approximately equal to the transverse energy of this particle. This approach is restricted to the transverse plane since the momentum of the initial state in beam direction is unknown due to the parton distribution functions (for an explanation of parton distribution functions see Chapter 5) and since in the final state a non-negligible part of the total energy leaves the detector through the beampipe.

There are different ways to determine the missing transverse energy: a simple approach is to sum up all energy deposited in the calorimeters and correct this value with the pT of muons. But since the full set of particles is already reconstructed by the particle flow algorithm, this detailed information can be used to determine the MET value with much more precision. In this analysis "type-one corrected PF MET" is used, which means that the MET is corrected for the influence of the jet energy calibration applied to PF jets (for detailed information on the energy calibration see [44]).
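The underlying definition of PF MET, before the type-one correction, is the negative vector sum of the transverse momenta of all reconstructed PF particles. A minimal sketch (particles as hypothetical dicts of pt and φ):

```python
import math

def missing_et(particles):
    """MET magnitude and azimuth from the negative vector sum of the
    transverse momenta of all reconstructed particles."""
    mex = -sum(p["pt"] * math.cos(p["phi"]) for p in particles)
    mey = -sum(p["pt"] * math.sin(p["phi"]) for p in particles)
    return math.hypot(mex, mey), math.atan2(mey, mex)

# A single 50 GeV particle at phi = 0 leaves 50 GeV of MET pointing back-to-back
met, met_phi = missing_et([{"pt": 50.0, "phi": 0.0}])
```

A fully balanced event (two equal and opposite particles) yields MET = 0; the type-one correction mentioned above would additionally propagate the jet energy calibration into this sum.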

3.5 Performance of the HPS Algorithm at High Energies

Since the HPS algorithm was not optimized for the very high energy taus this analysis has to deal with, the performance of the algorithm in terms of energy reconstruction, identification efficiency and fake probability at high energies was investigated and will be presented in the next two sections. The investigation of these properties in real data is very challenging and was not possible to perform in this thesis; therefore, simulation was used. The advantage of a simulation study is that all true information of the event before detection and reconstruction is known. Simulated samples are called "Monte Carlo" (MC) samples¹.

The sample /WToTauNu_ptmin500_TuneZ2Star_8TeV-pythia6-tauola/Summer12_DR53X-PU_S10_START53_V7A-v1/AODSIM is used, which is a simulation of the tau decay of W bosons produced at high transverse momentum transfer in the hard interaction, since this sample provides a sufficiently large number of high energy taus.

3.5.1 Tau energy determination

The HPS algorithm only reconstructs the visible part of the tau energy and does not correct for the neutrino energy. Therefore, in the following this reconstructed visible energy will simply be addressed as the reco tau energy (E_τ,reco) and reco tau momentum (p_τ,reco), respectively. In simulation, various quantities are accessible before reconstruction is performed, which are used to gain information about the true particle features. Important here are the true full transverse tau energy E_T^τ,true,full and the true transverse energy of all visible tau decay products E_T^τ,true,vis.

A two-dimensional distribution of the reconstructed transverse energy E_T^τ,reco versus the true transverse visible energy E_T^τ,true,vis can be seen in Figure 3.4. For good energy reconstruction this distribution should be centered around the angle bisector, with a spread proportional to the resolution. But for rising E_T^τ,true,vis the distribution shows that more and more taus are reconstructed with too low energy.

The quality of the energy reconstruction is quantified by the ratio of this "reco tau transverse energy" and the "true visible tau transverse energy". For good energy reconstruction the distribution of this ratio should be a Gaussian with its maximum at one. In Figure 3.5 this distribution is shown for taus with E_T^τ,true,vis > 400 GeV, since Figure 3.4 shows that the misreconstruction becomes dominant in this region. For a number of taus, less than 50 % of the true energy is reconstructed. To quantify the energy dependence of this misreconstruction, a profile plot of this ratio versus E_T^τ,true,vis is shown in Figure 3.6. The value on the y-axis is the mean of the ratio in each bin of E_T^τ,true,vis shown on the

¹ A list of all used MC samples and generators can be found in Chapter 5.


Figure 3.4: Distribution of E_T^reco versus E_T^true,vis. Many taus are reconstructed with too low energy.

x-axis. The mean of the ratio decreases to 50 % for tau energies in the TeV range, which is unacceptable for this analysis, so the reason for this misreconstruction and possible solutions have to be found.
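The profile plots used throughout this section are simply binned means of the energy ratio. A minimal NumPy sketch (the function name is a hypothetical stand-in for the ROOT profile histograms actually used):

```python
import numpy as np

def profile(x, y, bin_edges):
    """Mean of y in each bin of x, as in the profile plots of this section;
    empty bins are returned as NaN."""
    idx = np.digitize(x, bin_edges) - 1
    means = np.full(len(bin_edges) - 1, np.nan)
    for i in range(len(bin_edges) - 1):
        sel = idx == i
        if sel.any():
            means[i] = y[sel].mean()
    return means

# three taus: one in the first energy bin, two in the second
e_true = np.array([50.0, 150.0, 150.0])
ratio = np.array([1.0, 0.6, 0.8])
means = profile(e_true, ratio, np.array([0.0, 100.0, 200.0]))
```

For the arrays above, the first bin holds a single ratio of 1.0 while the second averages 0.6 and 0.8 to 0.7, mirroring how the mean ratio drops with rising true energy in Figure 3.6.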

To check if the energy misreconstruction depends on the decay mode of the tau, the plot of Figure 3.5 is redone in Figure 3.7, but separately for the different decay modes (using truth information). The distribution shows that the 1 Prong decay modes are reconstructed much better than the 3 Prong decays (although the 1 Prong reconstruction is not optimal, either). A possible explanation for this could be the high boost of the tau decay products at high energies as described in Chapter 1: particle flow is not able to resolve the three charged tracks, but instead merges them into one. The calorimeter energy of the two missing tracks is then misidentified as neutral hadrons, which are not considered by the HPS algorithm and are therefore missing from the tau energy². This effect increases for rising E_T^τ,true,vis.

Unfortunately, it is not possible to veto 3 Prong decays in the analysis since they are misreconstructed as 1 Prong taus, because only one track is identified. Furthermore, for all taus which are actually reconstructed correctly as 3 Prongs, the energy reconstruction should be fine since all tracks have been reconstructed. The misidentification probability for the different decay modes can be seen in Figure 3.8.

² In this context "neutral hadrons" only refers to those which live long enough to be detected within the calorimeters.


Figure 3.5: Ratio of reconstructed to generated tau E_T for taus with E_T^true,vis > 400 GeV. Many taus are reconstructed with too low energy.

Figure 3.6: Profile of E_T^τ,reco / E_T^τ,true,vis versus E_T^τ,true,vis. The misreconstruction of the tau energy increases with rising E_T^true,vis.


Figure 3.7: Ratio of reconstructed to generated tau E_T for taus with E_T^true,vis > 400 GeV, separated into the different tau decay modes (1 Prong 0π⁰, 1 Prong 1π⁰, 1 Prong 2π⁰, 3 Prong 0π⁰). 3 Prong decays of the tau are reconstructed worse.

During the investigation of the energy misreconstruction, one additional feature of the tau decay was discovered: the distribution of the full tau energy among the different decay products depends on the tau decay mode, which can be seen in Figure 3.9, where the 1 Prong decay modes dominate for small values of E_T^τ,true,vis. Since the high-pT tau sample used for this investigation contains mostly taus with E_T^full ≈ 500 GeV, small values of E_T^τ,true,vis are equivalent to a small fraction of the full energy. The impact of this effect on the tau identification needs further investigation.

It was not possible to fix the HPS energy reconstruction for high energy taus for this work, since extensive changes to the HPS algorithm or even particle flow itself would have been necessary and could not be done in time, so a workaround had to be found: since every tau is reconstructed out of a particle flow jet, the possibility of using the PFJet energy as the tau energy is investigated. The PFJet energy reconstruction also takes into account the neutral hadron energy component, so missing energy due to unidentified tracks can be avoided. In Figure 3.10 the profile plot from Figure 3.6 is shown again, together with the corresponding profile plot for the PFJet energy.

The PFJet energy reproduces the true tau energy much better than the HPS algorithm for taus with E_T^τ,true,vis ≳ 200 GeV. A problem arises at low energies, where the PFJet energy is higher than the true tau energy. This effect is probably due to pileup: the PFJet energy can only be corrected for neutral hadrons arising from pileup events by subtracting their average contribution, since they have no track to identify their origin


                       Reco
True                   1 Prong 0π⁰   1 Prong 1π⁰   1 Prong 2π⁰   3 Prong 0π⁰
1 Prong 0π⁰            0.8853        0.1046        0.0011        0.0090
1 Prong 1π⁰            0.3814        0.6137        0.0004        0.0045
1 Prong 2π⁰            0.4399        0.5562        0.0003        0.0031
3 Prong 0π⁰            0.3109        0.4675        0.0003        0.2213

Figure 3.8: HPS decay mode identification probability: the fraction of a generated decay mode being reconstructed as a specific decay mode is shown.

vertex. This average contribution can be lower than the actual value, which leads to a reconstructed jet energy higher than the true visible tau energy. The amount of additional pileup energy is approximately the same in each event, so its relative contribution to the tau energy becomes more and more negligible for rising tau energies. This analysis deals only with high energy taus, so this problem can be ignored. The two-dimensional distribution of the PFJet energy versus the true tau energy, and the distribution of the ratio of those two for taus with E_T^τ,true,vis > 400 GeV, can be seen in Figure 3.11.

The distributions are centered around one with an adequate spread, and since the PFJet energy appears to be a much better quantity for the tau energy than the HPS energy, it will be used in this analysis. The HPS algorithm is still used to identify tau decays, so a matching of PFJets to HPS taus is necessary. A geometrical matching is performed, where the PFJet with the smallest ∆R relative to the HPS tau is selected. In addition, ∆R < 0.4 is required, since the PFJets are cleaned of pileup jets but the HPS taus are not. If the matching fails, the tau must arise from one of these pileup jets and is therefore removed.
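The geometrical matching can be sketched as follows (illustrative Python, with the tau and the jets as hypothetical dicts of η and φ):

```python
import math

def match_jet_to_tau(tau, jets, max_dr=0.4):
    """Select the PF jet closest in Delta R to the HPS tau; if no jet
    lies within max_dr, the tau is treated as arising from a pileup jet
    and dropped (None is returned)."""
    def delta_r(a, b):
        dphi = (a["phi"] - b["phi"] + math.pi) % (2 * math.pi) - math.pi
        return math.hypot(a["eta"] - b["eta"], dphi)
    best = min(jets, key=lambda j: delta_r(tau, j), default=None)
    if best is None or delta_r(tau, best) > max_dr:
        return None
    return best

tau = {"eta": 0.0, "phi": 0.0}
jets = [{"eta": 0.1, "phi": 0.1}, {"eta": 2.0, "phi": 2.0}]
matched = match_jet_to_tau(tau, jets)
```

Here the nearby jet (∆R ≈ 0.14) is selected, while a tau whose closest jet lies at ∆R ≈ 2.8 would be removed.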

3.5.2 Tau identification

Since the efficiencies and misidentification ("fake") probabilities of the different HPS discriminators are not provided for high energy taus by the tau physics object group (tau POG), these values have been determined in this analysis and will be presented in the following section.


Figure 3.9: Branching ratio (BR) of the tau decay into the different decay modes on generator level, as a function of E_T^τ,true,vis, for taus with E_T^τ,true,full > 500 GeV. Most of the pions arising from 1 Prong decays only get a small fraction of the full transverse tau energy.


Figure 3.10: Profile of E_T^PFJet,reco / E_T^τ,true,vis versus E_T^τ,true,vis. While the energy reconstruction for low energy taus is worse using the energy of the PF jet than using the HPS tau energy, it becomes much better for high energy taus.

Figure 3.11: PFJet energy for tau reconstruction. Left: distribution of E_T^PFJet versus E_T^true,vis. Right: ratio of reconstructed PFJet E_T to generated visible tau E_T for taus with E_T^true,vis > 400 GeV. A few taus are reconstructed with much too low energy. This is most likely due to a mismatch between the true tau and the reconstructed PF jet, but it only affects a small number of taus (4 %) and can be ignored.


The efficiency of a selection is defined as

        #τ(matched true tau + |η^true| < 2.3 + p_T^vis,true > 20 GeV + selection)
ε = ──────────────────────────────────────────────────────────────────────────────    (3.2)
        #τ(matched true tau + |η^true| < 2.3 + p_T^vis,true > 20 GeV)

which is the ratio of the number of true hadronically decaying taus which pass the selection under investigation and the acceptance requirements to the number of all true hadronically decaying taus passing the acceptance requirements. For this efficiency study the same sample as for the energy determination is used.
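Equation 3.2 translates directly into code. The sketch below assumes hypothetical per-tau records carrying the truth-matching flag, the true η and the true visible pT:

```python
def efficiency(taus, selection):
    """Equation 3.2 as code: fraction of truth-matched hadronic taus
    inside the acceptance (|eta_true| < 2.3, pT_vis_true > 20 GeV)
    that also pass the selection under investigation."""
    in_acceptance = [t for t in taus
                     if t["matched"]
                     and abs(t["eta_true"]) < 2.3
                     and t["pt_vis_true"] > 20.0]
    passed = [t for t in in_acceptance if selection(t)]
    return len(passed) / len(in_acceptance) if in_acceptance else float("nan")

taus = [
    {"matched": True, "eta_true": 0.5, "pt_vis_true": 100.0, "passes_iso": True},
    {"matched": True, "eta_true": 0.5, "pt_vis_true": 100.0, "passes_iso": False},
    {"matched": False, "eta_true": 0.5, "pt_vis_true": 100.0, "passes_iso": True},
]
eff = efficiency(taus, lambda t: t["passes_iso"])
```

The unmatched candidate enters neither numerator nor denominator, so the efficiency here is 1/2; the fake probability of Equation 3.3 would be built analogously, but on candidates *not* matched to a true hadronic tau decay.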

The fake probability is defined as

           #τ candidates(originating from e, µ or QCD jet + selection)
p_fake = ─────────────────────────────────────────────────────────────────    (3.3)
           #τ candidates(originating from e, µ or QCD jet)

where p_fake is the ratio of the number of reconstructed tau candidates which pass the selection under investigation and do not originate from a true hadronic tau decay (but from an electron, muon, or QCD jet) to the number of all reconstructed tau candidates not originating from a true hadronic tau decay.

There are three different kinds of fake probabilities, depending on the initial object faking the tau: QCD, electron and muon. They are distinguished by adding a veto against the two other objects respectively (using truth information) in the numerator of the general definition in Equation 3.3. Only the fake probability for the chosen object is shown in each case. The QCD fake probability uses QCD MC samples, while the lepton fake probabilities use high energy W → lν samples (see Chapter 5).

Investigation of ID preselection and decayModeFinding

In addition to the discriminators provided by the HPS algorithm, an ID preselection is performed on the tau collection: p_T^τ > 20 GeV, |η^τ| < 2.3 and p_T^leadChargedTrack > 20 GeV. The first two check for geometrical and energy acceptance, and the third is a quality criterion which exploits the fact that one clear charged track has to be within the tau jet due to its decay kinematics. The geometrical acceptance is limited by the requirement that the isolation cone of the isolation discriminators has to be fully contained within the tracker, which reaches up to |η| = 2.5. The efficiency of this preselection, binned in E_T^true,vis, can be seen on the left side of Figure 3.12.
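The ID preselection itself is a simple set of cuts, sketched here with a hypothetical per-tau record of reconstructed values:

```python
def passes_preselection(tau):
    """ID preselection used in this analysis: kinematic acceptance plus
    a leading-charged-track requirement (values in GeV)."""
    return (tau["pt"] > 20.0
            and abs(tau["eta"]) < 2.3
            and tau["pt_lead_charged_track"] > 20.0)

# A central, high-pT candidate with a hard leading track passes;
# a candidate outside |eta| < 2.3 does not.
good = {"pt": 100.0, "eta": 1.0, "pt_lead_charged_track": 60.0}
forward = {"pt": 100.0, "eta": 2.4, "pt_lead_charged_track": 60.0}
```

The dictionary keys are illustrative stand-ins for the corresponding reconstructed tau quantities.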

After a turn-on, the efficiency reaches a plateau at approximately 90 % and stays at this value up to high energies. The efficiency of the preselection together with the requirement of the decayModeFinding discriminator can be seen on the right side of Figure 3.12. The efficiency only decreases minimally with respect to the preselection alone; the efficiency of the decayModeFinding discriminator itself is close to 100 %.

Since the preselection and the decayModeFinding are always used, they are implicitly added to all selections investigated in the following.



Figure 3.12: Efficiency of the preselection and decayModeFinding: on the left side the efficiency of the preselection is shown, on the right side the efficiency of the preselection + decayModeFinding. Both efficiencies reach a plateau at approximately 90 % after a turn-on. Both efficiencies are nearly identical since the efficiency of the "decayModeFinding" discriminator is close to 100 %.

Investigation of the isolation discriminators

The tau isolation selection is the most important tau quality selection since it suppresses fake taus arising from QCD multijet processes, which are the dominant source of fake taus due to the huge QCD cross sections at the LHC.

The efficiency of the CombinedIsolationDBSumPtCorr discriminators can be seen on the left side of Figure 3.13: the efficiencies of all four working points are not flat versus the true visible energy of the tau. After a turn-on, the efficiency decreases again for rising values of E_T^{true,vis} > 200 GeV, down to approximately half the peak value. This effect is correlated with the energy misreconstruction of the tau jet: the part of the tau decay products misidentified as neutral hadrons enters the isolation cone instead of being associated with the tau jet, causing the isolation discriminator to fail. This effect is quite serious since it not only decreases the sensitivity of this search significantly, but also raises the issue of data-to-MC scale factors: efficiency determination in data is only possible close to the Z peak, where sufficiently high statistics for data-driven methods like "tag and probe" can be found. While a constant scale factor can be assumed for constant MC efficiencies, this assumption is no longer valid for energy-dependent efficiencies. Despite the efficiency problem, the fake probability of the CombinedIsolationDBSumPtCorr discriminators, which can be seen on the right side of Figure 3.13 (binned in E_T^{τ,reco}), looks reasonable and shows an energy dependence that benefits this analysis: it decreases for rising tau energy and reaches values below 0.1 %.

The efficiency issue should be resolved as soon as the energy reconstruction issue is solved, but since this will rather be a long-term solution, alternative solutions are


3 Object Reconstruction


Figure 3.13: Cut-based isolation discriminators ("CombinedIsolationDBSumPtCorr"). Left: efficiency in bins of true visible tau ET. Right: fake probability in bins of reco tau ET. The efficiency decreases for taus with E_T^{true,vis} > 200 GeV while the fake probability (p_fake, see Equation 3.3) remains flat at sufficiently low values below 1 %.

searched for. One possibility is to take into account only charged particles within the isolation cone, so that the misidentified neutral hadrons would not contribute to the isolation. Unfortunately, this approach is still under investigation and cannot be used in this analysis.

In addition to these cut-based isolation discriminators, some discriminators based on multivariate analyses are provided by the HPS algorithm. Their efficiency can be seen on the left side of Figure 3.14. It looks much better than the efficiency of the CombinedIsolationDBSumPtCorr discriminators, but the fake probabilities are worse: they increase with rising tau energy and reach values beyond 1 %, which is approximately ten times worse than the fake probability of the CombinedIsolationDBSumPtCorr discriminators. The MVA isolation cannot be used since all distributions would be dominated by fake taus. The MVA discriminators are not as strongly affected by the energy misreconstruction, but the algorithm is trained with low-energy tau samples, making it invalid at high energies. The tau POG has announced plans to train these algorithms also on high-energy tau samples in the future.

As a consequence, these discriminators could not be used in this analysis either, which is why the only remaining option is to use the CombinedIsolationDBSumPtCorr discriminators, despite the fact that the sensitivity will be worse and the data-to-MC agreement questionable.



Figure 3.14: MVA isolation discriminators. Left: efficiency in bins of true visible tau ET. Right: fake probability in bins of reco tau ET. The efficiency of the "loose" working point decreases only slightly, but the fake probability (p_fake, see Equation 3.3) increases to values of approximately 3 %.

Investigation of the “antiElectron” discriminators

The efficiencies of the antiElectron discriminators binned in E_T^{true,vis} can be seen on the left side of Figure 3.15. After a turn-on, the efficiencies stay sufficiently flat. In fact, all but the loose working point show a slight increase in efficiency at high energies. The fake probabilities can be seen on the right side of Figure 3.15. The medium working point is chosen for this analysis since it is a good compromise between high efficiency and low fake probability at high energies.

Investigation of the “antiMuon” discriminators

The efficiency and the fake probability of the antiMuon discriminators are shown in Figure 3.16. The efficiencies of the medium and the tight working points are nearly identical, so the purple dots cannot be seen. These two working points show an inefficiency for rising tau energies, while the efficiency of the loose working point stays constant. Since the fake probability is very low and only slightly worse for the loose working point, this one will be used for the analysis.



Figure 3.15: Cut-based antiElectron discriminators. Left: efficiency in bins of true visible tau ET. Right: fake probability in bins of reco tau ET. The efficiency of all working points increases with rising values of E_T^{true,vis}, while the fake probability (p_fake, see Equation 3.3) remains flat for rising energies.


Figure 3.16: Cut-based antiMuon discriminators. Left: efficiency in bins of true visible tau ET. Right: fake probability in bins of reco tau ET. The efficiency of the "loose" working point stays flat after a turn-on, while the efficiencies of the other two working points decrease for rising energy. The fake probability (p_fake, see Equation 3.3) is very low for taus with a transverse energy above 200 GeV.


4 Trigger

The LHC produces a huge amount of data, which cannot all be stored. Therefore, a fast decision is needed on whether an event is important enough to be kept. At a bunch crossing time of 50 ns, events occur with a frequency of 20 MHz and have to be reduced by a factor of O(10⁻⁶). This is done by the CMS trigger system [36], which is divided into two levels: the first consists of hardware triggers called "Level-1 Triggers" (L1), the second is a software analysis system located at a computing farm close to the detector, called "High-Level Trigger" (HLT).

4.1 The Trigger System

The first step of the trigger decision is taken by the L1 Trigger, which uses only a small part of the detector information, since it has to reach a decision within a maximum latency of 3.2 µs, limited by the pipeline memory for the rest of the detector information. The L1 uses coarsely segmented information from the calorimeters and the muon system to locally create physics object candidates, which are then combined into an L1 Trigger event. Based on this event information, a decision is taken on whether the event should be rejected or passed to the HLT system. The output rate of the L1 Trigger to the HLT is ≈ 30 kHz, a reduction of the initial event rate by a factor of O(10⁻³).

The L1 Triggers are hardware-only triggers consisting of programmable electronics like FPGAs¹ and ASICs², located partly at the detector or very near to it.

In the next step, the HLT has to reduce the event rate by an additional factor of O(10⁻³). Contrary to the L1 Trigger, the HLT has access to the full detector information and can perform a more sophisticated physics analysis. The HLT is a software system implemented on a computing farm consisting of about a thousand standard CPUs located at the surface above the detector cavern. To reach the trigger decision, the HLT performs a full event reconstruction, but on a simpler level than the offline event reconstruction, since its processing time is limited. The different physics objects are tested for different quality criteria and energy thresholds. Afterwards, it is tested whether the event fulfills one (or more) HLT trigger requirements. Each HLT trigger is defined by a set of filter requirements (the so-called trigger path) which an event has to satisfy to be assigned to it. Various HLT paths are implemented at CMS to account for the wide range of physics signatures which are searched for. Since the trigger paths are not disjoint, an event can fulfill more than one trigger condition. An event which does not satisfy any trigger requirement is rejected. In a next step, every event is classified into a primary dataset, which is defined by a set of different triggers and represents a specific event signature,

¹ field-programmable gate array
² application-specific integrated circuit


for example double-muon or single-electron events. Since an event can satisfy more than one trigger, it can also be assigned to more than one primary dataset. The HLT triggers can be divided into two kinds: the first tests an event for a single object, like a good electron or muon, and is called a single object trigger; the second tests an event for an experimental signature consisting of more than one object and is called a cross trigger. The thresholds of the single object triggers are normally higher than those of the cross triggers, since they are inclusive triggers and therefore have a smaller rejection power. With changing run conditions and instantaneous luminosity, small changes are applied to the various trigger implementations to keep the trigger rate approximately constant.

Some triggers are designed for special purposes for which they need very low thresholds, which would lead to a much too high trigger rate. To handle this problem, these triggers only accept a specific fraction of all events passing the trigger requirements. They are called prescaled triggers, and the fraction of accepted events is called the prescale factor. They are normally designed for control purposes and QCD analyses, while triggers for search analyses have to be unprescaled, since no event in a search region should be missed. Also, not all triggers are accessible to physics analyses directly after data taking, because the storage capacity allows more data to be stored than can be reconstructed with the limited computing power. These triggers are called "parked", and all these data will be reconstructed during the first long shutdown of the LHC.

Most of the triggers used for data taking are also simulated for the MC samples.

4.2 The Tau plus MET Cross Trigger

One of the first steps of each physics analysis is to find an unprescaled trigger that is best suited for the experimental signature of the analysis. No single-tau trigger is implemented in CMS, since the fake probability at HLT level is very high and the trigger rate would therefore be much too high, but there is a trigger which searches exactly for the signature of the analysis performed in this thesis (one tau lepton plus missing transverse energy). This trigger is called

HLT_LooseIsoPFTau35_Trk20_Prong1_MET70.

The tau signature is based on a loosely isolated "particle-flow-like" hadronically decaying 1-prong tau with a leading track with pT > 20 GeV; in addition, MET > 70 GeV is required. The tau reconstruction at HLT level is not done with the HPS algorithm, but with a simplified algorithm to save computation time.

This seems to be the trigger which should be used. However, it was originally designed for a search for a charged Higgs boson (H± → τ±ν), resulting in tau leptons of much lower energies (p_T^τ ≲ 100 GeV) than in the search for W′ (p_T^τ ≈ 500 GeV). The performance of this trigger at high tau energies therefore has to be checked first.

The efficiency is investigated using the MC samples for W → τν and data from the "JetHT" primary dataset. This dataset contains events which are triggered by testing the event for hadronic jet activity.



Figure 4.1: Efficiency of the trigger HLT_LooseIsoPFTau35_Trk20_Prong1_MET70 in bins of reco tau E_T^τ (left side) and in bins of the number of vertices in the event (right side), determined in MC. The trigger shows a serious inefficiency for E_T^τ > 100 GeV, while it shows only a small dependence on the number of vertices, which means that it is stable against pileup.

The trigger efficiency is defined as

ε_trig = # events(offline trigger selection + event is triggered) / # events(offline trigger selection)    (4.1)

where the denominator contains the number of events which should fulfill the trigger requirement, since they pass all needed requirements using information from the full event reconstruction, and the numerator contains the number of events which have additionally fired the trigger. This is possible for simulated events, where all events are stored no matter whether the trigger was activated or not. Since the isolation definition at HLT level is somewhat different from the isolation of the HPS algorithm, the medium working point of the CombinedIsolationDBSumPtCorr discriminator (see Table 3.1) is used as the offline representation of the trigger isolation criterion, to be sure that the offline requirement is not weaker than the trigger requirement.
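For simulated events, where the trigger decision is stored for every event, Equation 4.1 reduces to a counting ratio per bin. A minimal sketch (the tuple-based event representation is a hypothetical stand-in for the actual event records):

```python
import bisect

# Sketch of the trigger-efficiency definition of Equation 4.1, binned in
# tau ET. Each event is (tau_et, passes_offline_selection, trigger_fired).
def trigger_efficiency(events, bin_edges):
    n_bins = len(bin_edges) - 1
    passed = [0] * n_bins
    total = [0] * n_bins
    for tau_et, offline_ok, fired in events:
        if not offline_ok:
            continue  # denominator: offline trigger selection only
        b = bisect.bisect_right(bin_edges, tau_et) - 1
        if 0 <= b < n_bins:
            total[b] += 1
            if fired:
                passed[b] += 1  # numerator: selection AND trigger fired
    return [p / t if t else 0.0 for p, t in zip(passed, total)]

events = [(50, True, True), (60, True, False), (250, True, False),
          (300, True, False), (90, False, True)]
print(trigger_efficiency(events, [0, 100, 400]))  # -> [0.5, 0.0]
```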

The MC efficiency of the trigger in bins of the true visible tau ET and in bins of the number of vertices can be seen in Figure 4.1. The efficiency is stable against pileup, as can be seen up to ≈ 40 vertices, but a serious inefficiency occurs for high-energy taus: the efficiency steeply drops for values E_T^{true,vis} > 150 GeV to values below 5 % for E_T^{true,vis} > 400 GeV.

In a next step, it is tested whether the efficiency also drops in real data, to determine whether this inefficiency is caused by a real trigger problem or by an incorrect simulation. To determine the trigger efficiency in data, a primary dataset is needed which is collected with triggers different from the one under investigation. For this investigation the JetHT



Figure 4.2: Efficiency of the trigger HLT_LooseIsoPFTau35_Trk20_Prong1_MET70 in bins of reco tau E_T^τ, determined in data. The same decrease in efficiency as in MC occurs.

dataset is used. The trigger efficiency for data can be seen in Figure 4.2, which shows the same inefficiency as in MC.

In order to understand the reason for the trigger inefficiency, the efficiency of each of the filters related to this trigger is investigated. The definition of the efficiency is again taken from Equation 4.1, replacing "event is triggered" in the numerator by the filter requirement under study.

The filter list of the trigger and the explanation of the filters can be seen in Table 4.1.

The efficiencies of the different filters are shown in Figure 4.3 (with the numbers from Table 4.1). It can be seen that the efficiency drop occurs for all filters which test the tau for its track (the first affected filter is Nr. 5). The tau POG was informed about this inefficiency, and it was found to be caused by a tau track quality requirement which cannot be fulfilled at high momenta: the uncertainty on the momentum reconstructed from the track has to be smaller than 0.5 times the uncertainty on the energy measured in the calorimeter, otherwise the tau track is rejected. But since the uncertainty on the momentum measurement in the tracker increases with rising momentum, while the uncertainty on the calorimeter energy decreases with rising energy (see Chapter 2), this cannot be fulfilled for high-energy taus. This trigger inefficiency is a serious problem since all data collected with the defective trigger are lost, and not only this specific trigger is affected but every tau trigger, since all of them use this defective filter.

A fixed version of this trigger (version v9), in which this track quality requirement was removed, was included in the trigger menus in September for all runs after run number 202970.

Nr 1: hltL1sL1ETM36or40 — start of the trigger sequence: checks the event for the L1 seed ETM36 or ETM40, which requires E_T^miss > 36 (40) GeV.
Nr 2: hltFilterL2EtCutSingleIsoPFTau35Trk20 — start of the tau part of the trigger: checks the event for a calo jet with pT > 25 GeV.
Nr 3: hltMET70 — checks the event for calo E_T^miss > 70 GeV.
Nr 4: hltPFTau35 — event contains an HLT PF tau candidate with pT > 35 GeV and |η| < 2.5.
Nr 5: hltPFTau35Track — tau candidate contains a charged hadron candidate track with pT > 0.5 GeV passing the HLT PF track quality criteria.
Nr 6: hltPFTau35TrackPt20 — leading charged hadron candidate track must have pT > 20 GeV.
Nr 7: hltPFTau35TrackPt20LooseIso — isolation requirement for the tau: no charged hadron isolation candidate related to the tau with: number of hits in the tracker > 8, number of hits in the pixel detector > 3, χ² of the track < 100 and pT of the track > 1.5 GeV.
Nr 8: hltPFTau35TrackPt20LooseIsoProng2 — prong 1 requirement for the tau: (number of charged hadron candidates) < 3.

Table 4.1: Filter list and explanation of the different filters for the trigger HLT_LooseIsoPFTau35_Trk20_Prong1_MET70_v6.

The efficiency of this fixed trigger can be seen in Figure 4.4: unfortunately, only a small MC sample with very low statistics was available for this investigation, but it can nevertheless be seen that the inefficiency is remedied.

The full statistics is needed in this analysis, and not only the data taken after the fixed trigger was implemented (≈ 7 fb⁻¹ until the analysis for this thesis was frozen). An alternative trigger was therefore searched for and found; it is described in the next section.

4.3 MET only Trigger

Since the tau part of all triggers was broken, a trigger is used which tests for the second aspect of the signal signature, the MET. This trigger is called

HLT_MET120_HBHENoiseCleaned.

The missing transverse energy has to be greater than 120 GeV, and all events dominated by HCAL detector noise are rejected. The kinematic threshold of this trigger is higher than the threshold of the initial tau plus MET cross trigger, since the tau



Figure 4.3: Efficiency of the different filters of the trigger HLT_LooseIsoPFTau35_Trk20_Prong1_MET70 in bins of reco tau E_T^τ, determined in MC. The inefficiency occurs for filters which check the tau for its leading track. The filter names and selections corresponding to the filter numbers can be found in Table 4.1.



Figure 4.4: Efficiency of the fixed trigger HLT_LooseIsoPFTau35_Trk20_Prong1_MET70_v9 in bins of reco tau E_T^τ, determined in MC. The efficiency no longer decreases at high energies.

should have approximately the same transverse energy as the MET in W-like decays. This trigger can be used for the search for W′ because high MET is part of the signal signature. A disadvantage of this trigger is that the low-energy control region is strongly reduced.

The efficiency of this trigger in bins of MET, determined in simulation, is shown in Figure 4.5. The efficiency does not decrease for high values of MET and stays constant after a turn-on. Unfortunately, the turn-on is quite slow and only reaches its plateau around a value of MET = 200 GeV. The events affected by the turn-on are not removed from the analysis, since no control region would be left otherwise.

The efficiency of this trigger in data is determined using part of the single electron primary dataset collected with the trigger HLT_Ele80_CaloIdVT_(Gsf)TrkIdT, corresponding to an integrated luminosity of 11.6 fb⁻¹, and can be seen in Figure 4.6. Since the trigger efficiency in data shows a similar behavior as in MC and no inefficiency occurs, this trigger will be used for this analysis. Data collected with a version of this trigger with a decreased threshold of MET > 80 GeV is parked and will be available in the next few months; it will cover nearly the same kinematic range as the tau plus MET trigger, so that all data will be recovered for the next iteration of this analysis.



Figure 4.5: Efficiency of the trigger HLT_MET120_HBHENoiseCleaned in bins of missing transverse energy, determined in MC. The efficiency reaches a plateau at 100 % for values MET > 200 GeV.


Figure 4.6: Efficiency of the trigger HLT_MET120_HBHENoiseCleaned in bins of missing transverse energy, determined in data. The turn-on of the trigger in data is longer than in MC, and the efficiency reaches the plateau at 100 % for values MET > 300 GeV.


5 Datasets and Analysis Framework

In order to analyze the data collected with the CMS detector, a sophisticated analysis software framework is needed to perform the object reconstruction described in Chapter 3 and to provide the possibility to simulate physics processes and the detector response. These simulated samples are needed for the background estimation as well as to determine signal selection efficiencies.

In this chapter, the software, MC sample simulation and collision datasets used are described.

5.1 The Analysis Framework

The object reconstruction in CMS is done with a software framework called CMSSW¹, which is implemented as a set of C++ libraries controlled by configuration files written in Python. The software provides an interface to the GEANT4 [45] detector simulation software, in which the full CMS detector is simulated. Also, various particle physics event generators and matrix element calculation programs such as PYTHIA6 [25], Madgraph [24] and Powheg [46] are included in the CMSSW framework. A program called Tauola [47] can be executed after event generation and before detector simulation; it simulates the decay of the tau considering spin correlations, and therefore produces more accurate angular distributions for the decay products than the decay performed by PYTHIA.

The detector simulation of generated events provides the same output as the detector does for data (plus additional generator information, like the particle properties before reconstruction, which is used for example in Chapter 3), so that the same object reconstruction can be performed on both. The data format used for the analysis is called "Analysis Object Data" (AOD) and provides all event and object information needed to perform physics analyses. All AOD data are stored at different CMS computing centers all over the world, which form the Tier 2 level of the CMS computing grid [48]. Since reconstruction algorithms and calibration values are optimized regularly, the result of an analysis can depend on the version of CMSSW used for reconstruction. The HPS algorithm is not run at reconstruction level but at analysis level, so the version of CMSSW used for the analysis is also important: all following results are obtained with CMSSW_5_3_3_patch3.

The analysis is performed in two steps: first, all data needed for the analysis are accessed via the CMS grid computing system and preselected with minimal selection requirements in order to reduce the file size. The selected events are stored locally on the Aachen Tier 2 server in the data format of flat ROOT [49] trees, in which only a subset

¹ CMS SoftWare


of the provided event and object information is stored to further reduce the file size. The size of these flat ROOT trees is < 1 % of the size of the initial AOD samples. This process is called "skimming" and is carried out by a software package called "Aachen 3A Susy Analysis" [50]. In a second step, these data are analyzed locally with the analysis framework ROOT (version 5.34.00) [49].

5.2 pp Collision Data Recorded in 2012

This analysis is done with the first 12.1 fb⁻¹ of √s = 8 TeV proton-proton collision data collected with the CMS detector in the year 2012. As described in Chapter 4, the trigger HLT_MET120_HBHENoiseCleaned is used, and the data are therefore taken from the MET primary dataset. The LHC collision runs are up to now divided into three different periods (called runs A/B/C), which contain different amounts of data. The runs are subdivided into units called lumi sections. All data have to be tested for correct detector performance before they are approved for use in physics analyses. This data approval is done by a team of detector experts called the "data certification group", which releases text files containing a list of all approved lumi sections in a data format called JSON files. Some problems in the reconstruction procedure of runs A and B made it necessary to rerun the reconstruction, and a small part of the run A data needed special treatment due to detector problems, which is why there are two special JSON files with the label "ReReco".

The following JSON files for data certification are used for this analysis:

Cert_190456-196531_8TeV_13Jul2012ReReco_Collisions12_JSON.txt
Cert_190782-190949_8TeV_06Aug2012ReReco_Collisions12_JSON.txt
Cert_190456-203002_8TeV_PromptReco_Collisions12_JSON.txt

The run numbers of the different run periods, their corresponding integrated luminosities and the frontier conditions used for data reprocessing at analysis level can be seen in Table 5.1. The frontier conditions provide a set of calibration settings and geometrical alignment information valid for a specific run period, which are used to obtain the correct calibration values for the different reconstructed physics objects.

5.3 Monte Carlo Simulation

Many different MC samples are used in this analysis to simulate SM processes, in order to determine the number of background events, and to simulate the various predicted W′ signals.

5.3.1 Parton distribution function and production cross sections

In order to generate MC samples for physics processes at the LHC with the correct production cross section, it must be considered that the LHC collides protons, which are


Dataset                                   Run Range       Frontier Condition   L (pb⁻¹)
/MET/Run2012A-recover-06Aug2012-v1/AOD    190782-190949   FT_53_V6C_AN2        82
/MET/Run2012A-13Jul2012-v1/AOD            190456-193621   FT_53_V6_AN2         810
/MET/Run2012B-13Jul2012-v1/AOD            193834-196531   FT_53_V6_AN2         4400
/MET/Run2012C-PromptReco-v1/AOD           197770-198913   GR_P_V40_AN2         470
/MET/Run2012C-PromptReco-v2/AOD           198934-203755   GR_P_V41_AN2         6310
                                                                           Σ = 12100

Table 5.1: List of the datasets used in the analysis, the corresponding run ranges, the frontier conditions used for reconstruction and the integrated luminosity certified with the previously mentioned JSON files for each run. These datasets contain approximately 860000 events passing the MET trigger used for this analysis.

composite particles. This means that the center-of-mass energy of the hard interaction (the parton center-of-mass energy √ŝ) is not equal to the center-of-mass energy of the proton-proton collision (√s) and varies for each collision. The parton cms energy is given by

√ŝ = √(x1 · x2) · √s    (5.1)

where x1, x2 are the momentum fractions of the full proton momentum carried by the two interacting partons.
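Equation 5.1 can be evaluated directly; a short sketch (the chosen momentum fractions are arbitrary illustrative values):

```python
import math

# Equation 5.1: parton centre-of-mass energy for momentum fractions
# x1, x2 at a pp centre-of-mass energy of sqrt(s) = 8 TeV.
def parton_cms_energy(x1, x2, sqrt_s=8000.0):  # all energies in GeV
    return math.sqrt(x1 * x2) * sqrt_s

# Producing a heavy resonance of mass M on-shell requires
# x1 * x2 = (M / sqrt(s))^2; e.g. for x1 = 0.5, x2 = 0.35:
print(parton_cms_energy(0.5, 0.35))  # about 3346.6 GeV, i.e. ~3.35 TeV
```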

The momentum distribution among the partons of the proton is described by the parton distribution function (PDF). Most of the currently used PDF sets are based on measurements performed at the HERA experiment [51]. Each parton has its own PDF f(x_i, Q²), which describes the probability of this parton carrying a momentum fraction x_i at an energy scale Q².

Since the center-of-mass energy at HERA was much smaller than at the LHC, the PDF sets have to be extrapolated up to higher energies. An example of a PDF set used for LHC simulation can be seen in Figure 5.1.

The full production cross section of a process is calculated from the differential cross section of the process at parton level, integrated over the full phase space, weighted with the PDFs and summed over all partons:

σ = Σ_{i,j} ∫₀¹ ∫₀¹ dx1 dx2 f(x1, Q²) f(x2, Q²) σ_{i,j}(x1, x2, Q²)    (5.2)

Different extrapolation methods lead to slightly different PDF sets and therefore to small differences in the resulting cross sections.
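The double convolution of Equation 5.2 can be illustrated numerically on a grid. The sketch below uses a toy PDF and a constant partonic cross section, both purely illustrative stand-ins for a real PDF set and matrix element:

```python
# Numerical sketch of Equation 5.2 with midpoint integration on a grid.
# toy_pdf and toy_sigma_hat are hypothetical placeholders, not a real
# PDF set or partonic cross section.
N = 200
dx = 1.0 / N
xs = [(i + 0.5) * dx for i in range(N)]  # midpoint grid on (0, 1)

def toy_pdf(x, q2):              # stand-in for f(x, Q^2); integrates to 1
    return 6.0 * x * (1.0 - x)

def toy_sigma_hat(x1, x2, q2):   # stand-in for the partonic cross section
    return 1.0

Q2 = 10.0**4
sigma = sum(toy_pdf(x1, Q2) * toy_pdf(x2, Q2) * toy_sigma_hat(x1, x2, Q2) * dx * dx
            for x1 in xs for x2 in xs)
print(round(sigma, 3))  # -> 1.0, since each toy PDF integrates to one
```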

Since MC samples are produced with a number of events which normally does not match the integrated luminosity of the collision data used, the samples have to be weighted by a factor



Figure 5.1: PDF sets calculated with the method "MSTW 2008" at next-to-leading order for two different energy scales [52].

w = (σ · L) / N_MC    (5.3)

where σ is the full cross section of the simulated process, L the integrated luminosity of the collision data used and N_MC the number of generated MC events.

Cross sections of particle processes are calculated in expansion series of a QFT. Since there is an infinite number of higher-order corrections to each process, like loops or vertex corrections, the calculation can only be performed to a certain level of accuracy. In almost all processes relevant for the LHC, the effect of the corrections decreases with rising order of the corrections. Some of the programs used to produce MC samples only consider cross sections without any higher-order corrections. These cross sections are called “leading order” (LO) cross sections. Higher-order corrections within the Standard Model can arise from electroweak or QCD processes, where the QCD corrections are usually the dominant ones. Cross section calculations from diagrams containing one or two additional vertices are called “next-to-leading order” (NLO) and “next-to-next-to-leading order” (NNLO). The number of diagrams needed for one or two additional vertices depends on the process. These calculated higher-order corrected cross sections are used to reweight the MC samples, which are produced at LO, by applying a correction factor k = σ_(N)NLO / σ_LO to the scale factor from Equation 5.3.
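As a sketch, the weighting of Equation 5.3 combined with the k-factor could be implemented as follows; the generated event count of 100000 is a hypothetical example, while the cross sections are the 2 TeV SSM values from Table 5.2:

```python
def mc_event_weight(sigma_lo_pb, lumi_fb, n_mc, k_factor=1.0):
    """Per-event MC weight w = k * sigma * L / N_MC (Equation 5.3)."""
    lumi_pb = lumi_fb * 1000.0  # convert integrated luminosity: 1 fb^-1 = 1000 pb^-1
    return k_factor * sigma_lo_pb * lumi_pb / n_mc

# 2 TeV SSM W' sample: sigma_LO = 0.02123 pb, sigma_NNLO = 0.02577 pb (Table 5.2);
# N_MC = 100000 is a hypothetical generated-event count
k = 0.02577 / 0.02123
w = mc_event_weight(0.02123, 12.1, 100000, k_factor=k)
```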

5.3.2 Pileup simulation

At the high luminosity of the LHC more than one proton-proton interaction occurs at each bunch crossing, resulting in multiple interactions per event. Most of these interactions are QCD multijet events with a low momentum transfer and are called pileup; the average number of pileup interactions per event is N_PU ≈ 20. Pileup has to be considered in MC simulation since many event and object features like the missing transverse energy and the tau isolation can be affected by it. As was said before, the characterizing variable for pileup is the number of vertices per event, since each interaction per event should lead to at least one additional vertex. The number of interactions (and therefore the number of vertices) per event depends on the instantaneous luminosity of the LHC at the interaction point and changes with time. The pileup is simulated by adding minimum bias events to all simulated specific particle processes, where the distribution of the number of interactions per event represents the distribution expected to occur in real data for a specific run period. Minimum bias events are generated to represent the average interaction at the LHC and are dominated by soft QCD processes. The pileup scenario used for all MC samples in this analysis is S10. Since the number of pileup events changes continuously with rising luminosity but the MC samples are generated for a whole run period, a procedure called pileup reweighting is used to adapt the distribution of the number of pileup interactions per event to the current value [53]. The difference between the abundance of a specific number of interactions per event in MC and in the current data distribution is used to generate weights for each event depending on its number of vertices. These weights are used to reweight all MC distributions. The number of pileup events in data cannot be measured directly since it can differ from the number of vertices in the event due to misidentification. It is calculated from the instantaneous luminosity and the total inelastic proton-proton cross section instead.
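A minimal sketch of such a reweighting, assuming the data and MC pileup distributions are available as histograms binned in the number of interactions per event:

```python
import numpy as np

def pileup_weights(data_pu_hist, mc_pu_hist):
    """Weight per pileup bin: ratio of the normalized data distribution
    to the normalized MC distribution of interactions per event."""
    data = np.asarray(data_pu_hist, dtype=float)
    mc = np.asarray(mc_pu_hist, dtype=float)
    data = data / data.sum()
    mc = mc / mc.sum()
    # bins with no MC events get weight 0 to avoid division by zero
    return np.where(mc > 0, data / np.where(mc > 0, mc, 1.0), 0.0)

# toy distributions (hypothetical); an MC event with N_PU in bin i gets weight w[i]
w = pileup_weights([10, 30, 40, 20], [25, 25, 25, 25])
```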

5.3.3 Standard Model process samples

All SM processes which can lead to an experimental signature similar to the one of a W′ have to be considered as a possible source of background events. The main source of background is the off-peak, high pT region of the Standard Model W boson, leading to the exact same signature as a W′ decay. Other backgrounds are due to QCD multijet processes, tt and single top production, dibosons and Drell-Yan (DY) events. The following sources of background are included (the full overview of sample names, cross sections and event numbers can be seen in Table 11.1 in the appendix):

• W→τντ boson decays. Samples are generated with PYTHIA at leading order (LO) in three different bins of pT, which is a measure of the transverse momentum transfer at the hard interaction. This is done instead of using one sample for the whole kinematic range to gain more statistics in the search region at high energies.

• W→`ν (` = e, µ) boson decays. Samples are generated with PYTHIA. These processes are a source for background since an electron or a muon can fake a HPS tau, which leads to a signature identical to the signal process.

• DY decays to tau leptons (generated with PYTHIA) are a source for background events if one tau is not identified and the energy uncertainty fakes a sufficient amount of MET.

• Diboson processes (WW, WZ, ZZ) with W/Z decaying to electrons, muons or taus,generated with PYTHIA.


• Top pair and single top production is another potential source of high pT leptons and MET, and has to be considered as a background process. The tt process has been generated with Madgraph and the single top production has been generated with Powheg.

• QCD multijet processes are an important source of background since they have the highest production cross section at the LHC and a jet can fake a HPS tau. The samples are generated with PYTHIA.

All samples have been generated in leading order except for the single top samples, which have been generated in next-to-leading order. The DY and top samples are scaled to the next-to-next-to-leading order QCD corrected cross section by an energy independent correction factor (k = σ_NNLO / σ_LO), and the diboson samples are scaled to next-to-leading order QCD corrections since NNLO corrections have not been calculated yet. The QCD samples cannot be scaled to higher order corrections since these are not known. They are not used for direct background determination from QCD processes, which is done by a data driven method instead (see Chapter 7); they are only used for cross checks on this method. All higher order cross sections are taken from [54].

A special attempt is made to determine the higher order corrections for the SM W background since it is the most important background for the analysis. A k-factor is calculated, binned in MT of the W boson, which takes into account not only NLO QCD corrections, but also NLO electroweak corrections to the cross section. The corrections were calculated for the W′ → eν / W′ → µν analyses performed at CMS [55, 56] and are used for this analysis as well, since they are independent of the final state of the W decay. The two types of corrections have to be calculated with two different programs: the QCD corrections are calculated using the program MC@NLO [57] and the electroweak corrections are calculated using Horace [58]. Afterwards, these two corrections are combined into an overall k-factor binned in MT in 10 GeV steps up to values MT < 2.5 TeV. In order to parameterize the k-factor, a third degree polynomial is fitted to the distribution of the k-factor and then used to scale the W samples to the corrected cross section (for more information see [56]).
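The parameterization step can be sketched as follows; the k-factor values below are a hypothetical stand-in shape, not the MC@NLO × Horace result:

```python
import numpy as np

# bin centers of the combined (QCD x EW) k-factor, 10 GeV steps up to 2.5 TeV
mt = np.arange(205.0, 2500.0, 10.0)
# hypothetical smooth k-factor values, used only to illustrate the fit
k_values = 1.3 - 2.0e-4 * mt + 1.0e-8 * mt**2

# third-degree polynomial parameterization, as used for the W samples
coeffs = np.polyfit(mt, k_values, deg=3)
k_of_mt = np.poly1d(coeffs)

# the weight applied to a SM W event with generated M_T = 800 GeV
weight = k_of_mt(800.0)
```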

5.3.4 Signal samples

In order to determine the selection efficiency of the analysis for the various W′ models considered in this analysis, many signal samples have been produced. The SSM samples are generated with PYTHIA in a mass range between 300 GeV and 4 TeV and the NUGIM samples are generated with Madgraph in the mass range between 1 TeV and 3.4 TeV for the mixing angle parameters sin²φ = 0.031, 0.04, 0.05, 0.1, 0.2, 0.3. The cross sections for the different samples can be seen in Table 5.2. All samples are produced in LO with the PDF set CTEQ6L1 [59], and for the SSM samples cross sections in NNLO QCD have been calculated [60].


 mW′      SSM                     NUGIM σLO (pb), for sin²φ =
(GeV)  σLO (pb)  σNNLO (pb)   0.031    0.04     0.05     0.1      0.2      0.3
 300   113.5     153.2        -        -        -        -        -        -
 500   16.48     22.46        -        -        -        -        -        -
 700   4.28      5.782        -        -        -        -        -        -
 900   1.471     1.981        -        -        -        -        -        -
1000   -         -            2.647    1.833    1.342    0.6435   0.6071   0.7494
1100   0.5881    0.7828       -        -        -        -        -        -
1300   0.2588    0.3408       -        -        -        -        -        -
1400   -         -            0.5810   0.3858   0.2715   0.1181   0.1131   0.142
1500   0.1193    0.1543       -        -        -        -        -        -
1700   0.05781   0.0727       -        -        -        -        -        -
1800   -         -            0.1932   0.1249   0.0854   0.0326   0.0279   0.0342
1900   0.02958   0.03638      -        -        -        -        -        -
2000   0.02123   0.02577      0.1231   0.0789   0.0534   0.0191   0.0148   0.0177
2100   0.01547   0.01855      -        -        -        -        -        -
2200   0.01127   0.01346      0.0822   0.0524   0.0351   0.0119   0.0082   0.0094
2300   0.00839   0.00983      -        -        -        -        -        -
2400   0.00622   0.00724      0.0571   0.0362   0.0241   0.0077   0.0048   0.0052
2500   0.00473   0.00539      -        -        -        -        -        -
2600   0.00357   0.00412      0.0410   0.0259   0.0172   0.0053   0.0029   0.0030
2700   0.00269   0.003104     -        -        -        -        -        -
2800   0.00210   0.00241      0.0301   0.0190   0.0126   0.0037   0.0019   0.0018
2900   0.00165   0.00190      -        -        -        -        -        -
3000   0.00132   0.00152      0.0227   0.0143   0.0094   0.0027   0.0012   0.0011
3100   0.00106   0.00124      -        -        -        -        -        -
3200   0.00087   0.0010       -        -        -        -        -        -
3300   0.00071   0.00085      -        -        -        -        -        -
3400   0.00060   0.00073      0.0136   0.0085   0.0056   0.0016   0.0006   0.0016
3500   0.00051   0.00063      -        -        -        -        -        -
3700   0.00037   0.00047      -        -        -        -        -        -
4000   0.00025   0.00033      -        -        -        -        -        -

Table 5.2: Signal Monte Carlo samples generated in PYTHIA for the SSM W′ and in Madgraph for the NUGIM W′. Cross sections in NNLO are only calculated for the SSM W′ samples [60].


6 Event Selection

In order to search for a W′ boson, an event selection has to be performed to distinguish possible signal events from background events, since the cross section of the processes leading to background events is many magnitudes higher than the W′ cross section. The event selection can be divided into two different steps: first, select well reconstructed events based on good detector performance, and after that select events with the characteristic signature of a W′ decay. Since the decay of a SM W has kinematics very similar to the decay of the W′, this background is the one which is hardest to suppress. The suppression against the SM W is done by separation in the transverse mass spectrum: the mean of the spectrum of the W′ boson decay lies at much higher values than MT of most SM W decays, due to the higher mass of the particle.

6.1 Transverse Mass as the Main Discriminating Variable

The transverse mass (MT) is defined as the invariant mass of a set of particles which is calculated from the four-vectors of these particles projected onto the transverse plane relative to the beam axis. The transverse mass of the W′ is used as the main discriminating variable instead of the full invariant mass in this analysis since the neutrino cannot be detected and only information about its transverse component can be gained from the missing transverse energy of the event. In addition to the neutrino from W′ → τν, another neutrino occurs at the tau decay, so that the MET in the event is not identical to the transverse energy of the first neutrino, but to the combination of both. As shown in Chapter 1, the neutrino from the tau decay is boosted in the initial tau direction and therefore reduces the value of the missing transverse energy by the same amount as the visible part of the tau energy is smaller than the full tau energy. Due to this fact, the final state of the W′ decaying into a tau which subsequently decays to mesons features a two-body decay kinematics which can be exploited for signal selection. A detailed study of the decay kinematics can be found in [18]. As described in Chapter 3, the energy of the PF jet matched to a HPS tau is used instead of the HPS tau energy for all kinematic distributions. The transverse energy of this matched jet is denoted as E_T^{τ jet}. The transverse mass used in this analysis is calculated from this E_T^{τ jet} and the MET by

MT = √( 2 · E_T^{τ jet} · MET · (1 − cos ∆Φ(E_T^{τ jet}, MET)) )    (6.1)
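Equation 6.1 translates directly into code; the numbers in the example are those of the highest-MT data event described in Section 6.3:

```python
import math

def transverse_mass(et_tau_jet, met, delta_phi):
    """Transverse mass of Equation 6.1 from the matched-jet E_T,
    the missing transverse energy and their azimuthal angle difference."""
    return math.sqrt(2.0 * et_tau_jet * met * (1.0 - math.cos(delta_phi)))

# E_T = 760 GeV, MET = 730 GeV, delta_phi = 3.08 gives M_T close to 1490 GeV
mt = transverse_mass(760.0, 730.0, 3.08)
```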

Since the visible part of the transverse energy and MET in the event is reduced by the additional neutrino, the calculated transverse mass is smaller than the actual transverse


Figure 6.1: MT spectrum of the W′ (M = 2 TeV) decay with the full kinematics on generator level and with the visible part after reconstruction. Due to the additional neutrino arising from the tau decay, the mass peak vanishes.

mass of the W′ boson. A comparison of the transverse mass on reconstruction level and of the actual transverse mass of the W′ is shown in Figure 6.1. No peak can be seen in the transverse mass spectrum after reconstruction, since the visible fraction of the transverse tau energy is not fixed and therefore the distribution is smeared. It can still be used for this search since a W′ signal would still increase the number of events in the high MT region of the spectrum relative to the Standard Model background.

The statistical interpretation of this search for a W′ boson is performed on the distribution of the transverse mass, and the discrimination power of MT is exploited by optimizing the search window for the best expected result. This will be discussed in Chapter 9.

6.2 Selection for Event-Quality

The first step of the event selection is to restrict the analysis to events which have passed the HLT_MET120_HBHENoiseCleaned trigger and to reject all events which show signs of bad event reconstruction. The MET selection of the trigger is tightened offline to MET > 140 GeV to reject events close to the threshold, which are expected to be simulated poorly in MC.

The selection for event quality is done by checking two aspects: the events have to contain a well reconstructed primary vertex, and the event must not be tagged as bad by the detector experts.

A primary vertex has to fulfill the following requirements to be tagged as good: more than four tracks have been used to reconstruct the vertex, the distance in the z-direction to the interaction point has to be smaller than 24 cm and the distance in r-direction (within the transverse plane) has to be smaller than 2 cm.

The tagging of bad events is done by event filter lists similar to the data certification described in Chapter 5, but instead of rejecting whole run sections only single events are tagged. A list of all used filters and their explanation can be found in Table 6.1.

Event filter name               Definition

CSC tight beam halo filter      Veto against events with muons from the beam halo.
HBHE noise filter               Rejects events where the HCAL signal is dominated by noise.
HCAL laser filter               Rejects events where the HCAL calibration laser was activated.
ECAL dead cell trigger          The ECAL contains 1 % dead crystals. If the surrounding energy
primitive (TP) filter           indicates a serious energy loss due to dead crystals, the event is rejected.
Tracking failure filter         Veto against events with a largely displaced primary vertex.
Bad EE supercrystal filter      Two ECAL crystal regions have an anomalously high energy response.
                                Events affected by these regions are rejected.
ECAL laser correction filter    Some ECAL crystals have an anomalously high calibration factor.
                                Events affected by these crystals are rejected.

Table 6.1: List of the used event filters [61].

6.3 Selection of W Event Signature

The next step of the selection is to restrict the data to events matching the W′ → τν final state topology. All events have to contain exactly one well reconstructed hadronically decaying tau and large MET which satisfy the kinematics of a two body decay. The MET requirement is already covered by the trigger requirement described in the previous section, so the next step is to probe the event for one good tau. The tau object identification is done by using the discriminators of the HPS algorithm and the tau preselection described in Chapter 3. All tau candidates are tested for the following requirements:

• Tau quality preselection

• Discriminator: decayModeFinding

• Discriminator: byMediumCombinedIsolationDeltaBetaCorr

• Discriminator: againstElectronMedium

• Discriminator: againstMuonLoose

If not exactly one tau candidate passes all these selections, the event is rejected. This selection suppresses all backgrounds arising from processes which produce no real taus but which can contribute to the background if another object is misidentified as a tau. The used discriminators are chosen as a compromise between a sufficiently high efficiency to select real taus and an acceptable probability to misidentify a jet, electron or muon as a tau.

It is not intrinsically given that the efficiency of this tau identification is identical in data and simulation, thus a scale factor k_scale = ε_ID^data / ε_ID^MC has to be applied to the MC distributions to take into account possible differences. This scale factor was calculated by the tau POG to be consistent with one [62] at low and medium energies, with an uncertainty of 6 %. The method used relies on Z→ ττ events, and only on-shell and highly boosted Z events, which are rare, could be used to determine the scale factor at high energies. Thus, no scale factor is applied in this analysis and it is assumed that the scale factor is consistent with one over the full energy range. This assumption is problematic due to the tau identification problems at high energies described in Chapter 3, but no better solution could be found in the short timescale of this analysis. This problem will be solved as soon as the tau identification problem is solved.

The last two selections exploit the two-body-like decay kinematics of the decay chain as described previously. The distributions of the following two kinematic variables are used for this selection:

• The ratio of the transverse tau energy relative to the missing transverse energy: 0.6 < E_T^{τ jet}/MET < 1.4

• The angle between the direction of the tau and the missing transverse energy in the φ-plane: ∆Φ(E_T^{τ jet}, MET) > 2.7
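These two cuts can be sketched as a small helper; the event used as an example is the high-MT data event described below in this chapter:

```python
def passes_kinematic_selection(et_tau_jet, met, delta_phi):
    """Two-body decay selection: balanced E_T ratio and back-to-back topology."""
    ratio_ok = 0.6 < et_tau_jet / met < 1.4
    back_to_back = delta_phi > 2.7  # delta_phi assumed folded into [0, pi]
    return ratio_ok and back_to_back

# E_T = 760 GeV, MET = 730 GeV, delta_phi = 3.08 passes both cuts
passes = passes_kinematic_selection(760.0, 730.0, 3.08)
```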

These two kinematic selections have been investigated in detail in the previous analysis [18] and are used here without further study. The distribution of the MT spectrum after the full event selection is shown in Figure 6.2. No QCD MC events are included in the plot since the statistics of these samples is insufficient and their reliability is questionable due to missing higher order corrections, which cannot be calculated but are expected to have a large effect for QCD processes. The absence of the QCD contribution is the reason for the deviation between data and MC prediction in this plot. The background contribution from this source is instead derived by a data driven approach which will be described in Chapter 7.

One event with very high transverse mass (MT = 1490 GeV) is found in the data. The event display can be found in Figure 6.3. The event contains a well isolated, high energy tau candidate (reconstructed in the barrel) with ET = 760 GeV and with missing transverse energy of MET = 730 GeV which is back to back to the tau candidate (∆φ = 3.08).

6.4 Signal Efficiency and Background Contribution

The signal efficiency of the full event selection (including acceptance, trigger efficiency, tau identification efficiency and kinematic selection) for the different W′ samples can be seen in Table 6.2. It is defined as the fraction of signal events of one particular signal sample passing all these selections relative to the full number of events produced for this


Figure 6.2: Final MT distribution after full event selection with MC background contribution only, excluding QCD samples. The contribution from QCD processes is estimated by a data driven approach and will be presented in the next chapter, but is not included in this plot.

Figure 6.3: Event with the highest transverse mass found in this analysis. The event contains a well isolated, high energy tau candidate with ET = 760 GeV reconstructed in the barrel and missing transverse energy of MET = 730 GeV which is back to back to the tau candidate (∆φ = 3.08). On the left side the projection on the r-φ plane is shown, and on the right side the projection on the z-y plane.


sample. The efficiencies for the SSM W′ samples vary between 5 % for small and high masses and 12 % for intermediate masses. The efficiencies for the NUGIM W′ samples depend on the value of the parameter sin²φ, since it determines the decay width of the W′ (see Chapter 1). A larger width has the effect that more W′ bosons are produced at smaller values of MT, which means that more signal events fail the kinematic threshold given by the trigger. The efficiencies for the first three parameter points (sin²φ = 0.031, 0.04, 0.05) vary between 4 % and 2 %. The width of the W′ is very broad for these three values of the mixing angle, and even before reconstruction no peak can be found in the spectrum (see Chapter 1, Figure 1.6). Therefore, the efficiencies of these samples are similar to each other and very low. For sin²φ = 0.1 the width decreases and therefore the efficiency increases to values varying between 8 % and 2 %. The width of the last two parameter points (sin²φ = 0.2, 0.3) is comparable to the width of the SSM W′, which leads to similar efficiencies (varying between 6 % and 12 %).
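The efficiency definition above, together with a binomial statistical uncertainty, can be sketched as follows; the systematic part of the uncertainty, discussed in Chapter 8, is not included, and the counts are hypothetical:

```python
import math

def selection_efficiency(n_pass, n_generated):
    """Selection efficiency and its binomial statistical uncertainty."""
    eff = n_pass / n_generated
    stat_err = math.sqrt(eff * (1.0 - eff) / n_generated)
    return eff, stat_err

# hypothetical counts for one signal sample
eff, err = selection_efficiency(12000, 100000)
```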

The number of data and background events normalized to the used integrated luminosity of L = 12.1 fb−1 after each selection step is shown in Figure 6.4, where the different background contributions are stacked. The contribution from QCD is taken from the data-driven approach which is described in the next chapter. Since this approach only calculates the contribution after the full event selection, it is only shown at this stage (labeled as “kinematics” in the plot). The number of data events can of course only be shown after the trigger selection. The tau ID selection has the strongest effect on all background processes without real taus like W → µν, and the kinematic selection has the strongest effect on background processes with different kinematics like top-quark decays. The large discrepancy between the number of data events and the number of predicted background events in the bins “trigger” and “tau ID” is due to the missing QCD contribution, which is only included in the last bin (“kinematics”).



Figure 6.4: Number of background and data events after the different selection steps. The QCD contribution has to be derived from data as presented in the next chapter and can therefore only be shown after the full selection (labeled as “kinematics” in the plot). Data can only be shown after the trigger selection.


Table 6.2: Efficiency of the full event selection for the different W′ samples with the combined statistical and systematic uncertainty. The systematic uncertainties will be discussed in Chapter 8.


7 Data-Driven QCD Multijet Background Estimation

Since QCD multijet processes are one important source of background events in this analysis due to jets being misidentified as taus, a reliable method to estimate this contribution is needed. As said before, the MC prediction cannot be used due to important but unknown higher order corrections and insufficient statistics, so a data-driven approach is used instead. An “ABCD method” is used, which yields a QCD sample for the final distributions where the shape and the normalization are derived from data. The normalization is cross checked in MC as well as in an independent control region. An overview of the method is shown in Figure 7.1, which is explained in detail in the following two sections.

7.1 The QCD Template Sample

The first step of the ABCD method is to generate a template sample from data which is dominated by QCD processes but includes as many selection steps as possible, to minimize the effect of missing selections on the normalization and shape of this sample. In this implementation of the method all selections are applied except for the tau isolation, which is inverted. While in the regular event selection an isolated tau is required, this template sample must not contain such isolated tau leptons but instead at least one non-isolated tau candidate. Since the isolation selection is the main suppression against QCD processes, this inversion of the selection enhances the QCD contribution to this template sample. The remaining contamination of the other background processes in this sample is removed by subtracting their contribution as determined from MC. The result is a sample dominated by QCD processes, gained from data, which can be seen for the transverse mass in Figure 7.2. The subtracted contamination amounts to approximately 20 %.
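The template construction amounts to a bin-by-bin subtraction; a sketch, assuming the anti-isolated data and the MC contamination are available as histograms:

```python
import numpy as np

def qcd_template(data_antiiso, mc_contaminations):
    """Subtract the MC-estimated non-QCD contamination from the anti-isolated
    data histogram; bins driven negative are clipped to zero."""
    template = np.asarray(data_antiiso, dtype=float).copy()
    for mc in mc_contaminations:
        template = template - np.asarray(mc, dtype=float)
    return np.clip(template, 0.0, None)

# toy M_T histograms (hypothetical): data and two MC contamination sources
template = qcd_template([100.0, 50.0, 10.0], [[15.0, 8.0, 2.0], [5.0, 2.0, 1.0]])
```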

7.2 Normalization of Template Sample

To determine the renormalization factor due to the tau isolation requirement, a tight-to-loose ratio approach is used. The QCD template sample is almost completely composed of hadronic jets being misidentified as taus, so one needs to know the probability of these misidentified taus to pass the isolation criterion of the event selection. This probability is derived from a sample disjunct from the template sample, which is achieved by inverting the kinematic selection of 0.6 < E_T^{τ jet}/MET < 1.4. The QCD purity of the sample is enhanced by restricting this kinematic variable to only values greater than 1.4 and not


Figure 7.1: Sketch of the ABCD method. Region A is used to obtain a QCD template sample, regions C and D are used to determine the normalization and region B is the signal region. Two different kinematic variables are used to distinguish the signal and QCD regions: E_T^{τ jet}/MET and MET. For further explanations see text.


Figure 7.2: QCD template sample derived from the event selection with inverted isolation. The distribution is shown before and after subtraction of the contamination from additional background processes, as well as the contamination itself.


Figure 7.3: Distribution of “loose” (left) and “tight” (right) tau candidates used to determine the tight-to-loose ratio. The same dataset from the MET primary dataset is used for this as for the main analysis, but the samples are made disjunct by demanding E_T^{τ jet}/MET > 1.4 for the tight and the loose distribution. The definitions of tight and loose taus are given in the text.

taking into account values smaller than 0.6, since QCD processes are expected to contain less MET. Additionally, all events which contain an isolated electron or muon are rejected to further enhance the purity. The remaining contamination of real taus is subtracted using MC. The set of all taus passing these selections is called the “loose” tau sample, while the subset of taus additionally passing the isolation criterion is called the “tight” sample.

The tight-to-loose ratio derived from this sample is defined as

p_TiToLo = tight / loose = (number of taus passing all selections and passing the isolation) / (number of taus passing all selections but not tested for isolation).

The distribution of tight and loose taus binned in E_T^{τ jet} can be seen in Figure 7.3. The tau contamination (before MC subtraction) of the “loose” sample is 0.1 % with an overall number of taus (after MC subtraction) of 150869, and the tau contamination of the “tight” sample is 23 % with an overall number of taus of 139.

The tight-to-loose ratio can depend on the transverse energy of the tau candidate, so it should be calculated in bins of ET. Due to the small statistics of the tight sample, however, only a very coarse binning of two bins (200 GeV < ET < 300 GeV and ET > 300 GeV) is chosen. This will be improved with rising luminosity.

The calculated tight-to-loose ratios in the two bins are:

• pTiToLo = 0.0014 for (200 GeV < ET < 300 GeV)

• pTiToLo = 0.0007 for (ET > 300 GeV)


The tight-to-loose ratio is the fake probability used to scale the QCD template into the signal region (B). The systematic uncertainty on the fake probability, due to a possible remaining small contamination of real taus after the MC subtraction and due to the choice of the kinematic variable used to define the QCD region, is estimated from two independent cross checks described in the next section.
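Applying the E_T-binned fake probability to the template can be sketched as follows; the template entries are hypothetical, while the two fake probabilities are the values quoted above:

```python
def qcd_prediction(template, fake_rate_bins):
    """Scale anti-isolated template entries (count, tau-jet E_T) by the
    E_T-binned tight-to-loose ratio to predict the QCD yield."""
    total = 0.0
    for count, et in template:
        for (lo, hi), prob in fake_rate_bins:
            if lo <= et < hi:
                total += count * prob
                break
    return total

# fake probabilities from Section 7.2
fake_rate = [((200.0, 300.0), 0.0014), ((300.0, float("inf")), 0.0007)]
# hypothetical template: (number of candidates, tau-jet E_T in GeV)
yield_qcd = qcd_prediction([(1000.0, 250.0), (500.0, 400.0)], fake_rate)
```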

7.3 Cross Check of Fake Probability

The first cross check of the fake probability is done by recalculating the tight-to-loose ratio in an independent QCD region separated from the signal region using MET instead of E_T^{τ jet}/MET as a different kinematic selection. QCD events must fulfill the requirement 60 GeV < MET < 140 GeV, while signal events have to fulfill the requirement MET > 140 GeV. This selection increases the QCD purity of the sample since QCD events typically have lower MET than electroweak processes. This cannot be done on the same dataset as the signal selection since the MET requirement of MET > 140 GeV is determined by the trigger threshold. Instead, data corresponding to an integrated luminosity of 8.4 fb−1 from the JetHT primary dataset collected with the trigger HLT_PFJet320 is used. The use of this dataset with this specific trigger, which probes the events for a PF jet with transverse energy greater than 320 GeV, has the benefit that it increases the purity of the QCD sample enormously, since this dataset is almost completely QCD dominated.

The distribution of tight and loose taus binned in E_T^{τ jet} can be seen in Figure 7.4. The tau contamination (before MC subtraction) of the “loose” sample is smaller than 0.01 % with an overall number of taus (after MC subtraction) of 2598607, and the tau contamination of the “tight” sample is 2.12 % with an overall number of taus of 1311.

The same bins of ET are used for the tight-to-loose ratio as before. The values are:

• pTiToLo = 0.00078 for (200 GeV < ET < 300 GeV)

• pTiToLo = 0.00045 for (ET > 300 GeV)

The validity of the first bin is questionable since the threshold of the used trigger for PF jets, which are the seeds for tau candidates, is above this first bin. All events in this bin have to contain an additional jet with higher energy which has activated the trigger. This could introduce an additional systematic uncertainty.

As a second cross check, the tight-to-loose ratio is determined in QCD MC samples with the same selection as used for the first cross check. No subtraction of a true tau contamination has to be performed since these samples contain only QCD jets.

The tight-to-loose ratio is determined to be:

• pTiToLo = 0.00096 for (200 GeV < ET < 300 GeV)

• pTiToLo = 0.00052 for (ET > 300 GeV)


Figure 7.4: Distribution of “loose” (left) and “tight” (right) tau candidates used to cross-check the tight-to-loose ratio. Data from the JetHT primary dataset is used. All events have to fulfill 60 GeV < MET < 140 GeV to be disjoint from the main analysis as well as from the initial tight and loose samples.

All tight-to-loose ratios calculated for the cross checks are within a 50 % deviation relative to the original values, except for the first bin of the first cross check, which is approximately 100 % lower. Nevertheless, an uncertainty of 50 % is assumed on the fake probability in all bins, since the validity of the first bin of the first cross check is questionable as stated before. The fake probabilities derived in Section 7.2

• pTiToLo = 0.0014 ± 0.0007 for (200 GeV < ET < 300 GeV)

• pTiToLo = 0.0007 ± 0.00035 for (ET > 300 GeV)

are used to scale the QCD template sample.
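The scaling step can be sketched as follows; the binning matches the fake probabilities quoted above, while the event list and helper names are hypothetical:

```python
# Sketch of the template scaling: each loose-not-tight event from the QCD
# template is weighted with the E_T-binned fake probability to predict the
# QCD yield in the signal region.
FAKE_PROB = [(200.0, 300.0, 0.0014), (300.0, float("inf"), 0.0007)]

def fake_probability(et):
    for lo, hi, p in FAKE_PROB:
        if lo <= et < hi:
            return p
    return 0.0  # below the lowest bin: no contribution

def qcd_prediction(template_ets):
    """Sum of per-event fake-probability weights over the template sample."""
    return sum(fake_probability(et) for et in template_ets)

# Hypothetical template: three events at various tau-jet E_T values
print(round(qcd_prediction([250.0, 350.0, 180.0]), 6))
```

Summing the per-event weights rather than multiplying total counts keeps the shape of the template in every kinematic distribution.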

7.4 Full Background Prediction and Final Distributions

The number of QCD events after the full selection derived with the previously presented ABCD method is NQCD = 271.

Adding this contribution of QCD processes to the contribution of background events derived from MC results in the final distributions of this search. The spectrum as well as the cumulative distribution of the transverse mass, which is used for the statistical analysis in Chapter 9, can be seen in Figure 7.5. Comparing this spectrum to the one in Figure 6.2, one can see that the QCD contribution to the number of background events remedies the discrepancy between data and background prediction. The total number of all background events, including the contribution from MC as well as the contribution from the data-driven QCD method, above a certain MT threshold can be seen in Table 7.1 together with their uncertainties. No deviation between data and


7 Data-Driven QCD Multijet Background Estimation


Figure 7.5: Final MT distribution with the full background prediction. The first plot shows the MT spectrum while the second plot shows the integrated number of events. The QCD background contribution is taken from the ABCD method described previously. Good agreement between data and MC can be seen over the full spectrum. The background is well described and no evidence for a W′ boson appears in the data.

Standard Model prediction can be seen which exceeds the overall uncertainty, except for one bin at 300 GeV. This is most likely caused by an insufficient simulation of the trigger turn-on, which is described in Chapter 4. The calculation of the systematic uncertainties on the number of predicted background events is described in the next chapter.

M_T threshold [GeV]   Data events   QCD background events   All background events
 200                  856           271 ± 136               767 ± 140
 400                  169           99.8 ± 49.9             212.0 ± 50.9
 600                  23            9.94 ± 4.97             26.10 ± 5.12
 800                  5             2.13 ± 1.07             5.91 ± 1.24
1000                  1             0.55 ± 0.27             1.60 ± 0.33
1200                  1             0.17 ± 0.08             0.61 ± 0.18
1400                  1             0.07 ± 0.03             0.29 ± 0.12

Table 7.1: Number of data events and predicted number of background events above different MT thresholds. The uncertainty on the number of background events contains the systematic uncertainties described in Chapter 8.
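The cumulative yields of Table 7.1 are obtained by simply counting events above each M_T threshold; a toy sketch with made-up M_T values:

```python
# Sketch of how cumulative counts above several thresholds are built.
def cumulative_counts(mt_values, thresholds):
    """Map each threshold to the number of events with M_T above it."""
    return {t: sum(1 for mt in mt_values if mt > t) for t in thresholds}

# Hypothetical M_T values [GeV] of selected events
data_mt = [210, 250, 420, 610, 830, 1050, 990, 405]
print(cumulative_counts(data_mt, [200, 400, 600, 800, 1000]))
# {200: 8, 400: 6, 600: 4, 800: 3, 1000: 1}
```

The background columns follow the same pattern, with weighted sums instead of raw counts.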


8 Systematic Uncertainties

Different sources of systematic uncertainties which have an impact on the prediction of the signal efficiencies and on the number of background events are considered in this analysis. The uncertainties are calculated using official CMS recommendations and tools.

8.1 Sources of Systematic Uncertainties

There are two different types of systematic uncertainties. The first type directly changes the number of events in the final distribution by varying scale factors, like the uncertainty on the integrated luminosity or on the object identification efficiencies, but does not alter the shape of the MT spectrum. The second type has an indirect influence, like the uncertainties on the energy scales and resolutions of the reconstructed objects, which can cause bin-to-bin migrations of events in the kinematic distributions. The influence of these second-type uncertainties on the signal efficiencies and background predictions is calculated by repeating the analysis with the shifted or smeared energy values of these objects to obtain distorted final distributions for each uncertainty. The deviation between the original and the distorted distributions is taken as the impact of the uncertainty. To obtain the effect of each uncertainty on the full signal efficiency, the efficiency is recalculated with each of the distorted distributions and the deviation is taken as the impact of the uncertainty.

The following uncertainties are considered in this analysis:

• Jet energy scale uncertainty: Since the tau energy is determined using the reconstructed PF jet energy, the uncertainty on the jet energy has to be used as the uncertainty on the tau energy. The jet energy scale (JES) describes the calibration of the jet energy determination. A wrong value of the JES in simulation would cause a shift of the jet energy in MC relative to the data. The uncertainty on the jet energy scale was determined by the object experts [63] in bins of the jet E and η and varies between 3 % and 5 %. The final distributions are recalculated once with all jets in the MC samples shifted upwards by their uncertainty and once with all jets shifted downwards by their uncertainty. Simultaneously, the MET of each event is recalculated taking into account the changed jet energies. This uncertainty alters the shape of the MT spectrum and is therefore an uncertainty of the second type.

• Jet energy resolution uncertainty: The uncertainty on the jet energy resolution (JER) is also determined in bins of the jet E and η and provided by the same source as the JES uncertainty. A wrong value of the JER in simulation would cause an enhanced or reduced smearing of the jet energy in MC relative to the data.


Again, two additional final distributions are calculated, one with an enhanced smearing of the jet energy and one with a reduced smearing. The energy smearing is done separately for each of the three components of the momentum vector in simulation: the true value of each component is used as the mean of a Gaussian distribution whose standard deviation equals the initial resolution of this jet, enhanced (or reduced) by the uncertainty. New values of the three components are drawn from these Gaussian distributions. Through this smearing not only the energy of the jet can change but also its direction. The MET is recalculated for the smeared jets in the event.

• MET uncertainty: The uncertainty on the missing transverse energy is determined as recommended by the MET group [64]: all objects in an event are varied by their specific uncertainties and the MET is recalculated from the resulting changed energy sum of the event. This MET recalculation is done using a tool provided by [64]. The uncertainty on the part of the MET caused by unclustered energy is assumed to be 10 %, flat in pT. For high values of MET the contribution of the unclustered energy to the full MET is only a few percent.

• Pileup reweighting uncertainty:

The uncertainty on the pileup weights arises from the uncertainty on the distribution of pileup interactions per event in data. The number of pileup events in data is not identical to the number of vertices in an event and cannot be measured directly. Instead, it is calculated from the overall proton-proton cross section (see Chapter 5). The uncertainty on this cross section is taken into account by recalculating the pileup weights from a second distribution of pileup interactions per event in data, obtained from a slightly different assumption for the cross section [53]: the first assumption uses a cross section obtained by extrapolating the value measured at 7 TeV to 8 TeV, while the second assumption uses measurements at 8 TeV. The analysis is repeated with the new weights to obtain the impact on the final distributions.

• Tau identification efficiency scale factor uncertainty:

The uncertainty on the scale factor between data and MC for the tau identification efficiency was determined by the tau POG to be 6 % [62]. This scale factor is used to rescale the simulated number of events in the final distribution. Due to the problems with tau identification at high energies, the validity of this scale factor uncertainty at high energies is questionable (see Chapter 6). But since no better value could be calculated, this uncertainty is used nevertheless. This problem will be fixed as soon as the tau identification issue is resolved.

• Luminosity uncertainty: The uncertainty on the determined integrated luminosity is assumed to be 4.4 % as stated in [65]. Since the integrated luminosity is used to scale the MC distributions, this uncertainty has a direct influence of 4.4 % on the number of background events. The signal efficiency is independent of the integrated luminosity, so this uncertainty has no effect on it.


• QCD data-driven method uncertainty:

As described in the previous chapter, the uncertainty on the fake probability is estimated to be 50 %. The resulting uncertainty on the number of QCD background events is determined by generating two additional QCD background samples, where for the first one a fake probability enhanced by 50 % and for the second one a fake probability reduced by 50 % is used to scale the template sample.
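The per-component Gaussian smearing described in the jet energy resolution bullet can be sketched as below; the resolutions, the 10 % width variation and the function name are illustrative assumptions, not the analysis code:

```python
# Sketch of JER-uncertainty smearing: each momentum component is redrawn
# around its true (generator-level) value with a width equal to the jet
# resolution scaled up or down by the resolution uncertainty.
import random

def smear_jet(gen_p, resolutions, shift=+1.0, uncertainty=0.1, rng=random):
    """gen_p, resolutions: (px, py, pz) true momentum components and their
    absolute resolutions in GeV; shift = +1/-1 enhances/reduces the width."""
    widths = [r * (1.0 + shift * uncertainty) for r in resolutions]
    # draw new momentum components around the true values
    return tuple(rng.gauss(p, w) for p, w in zip(gen_p, widths))

rng = random.Random(42)  # fixed seed so the sketch is reproducible
print(smear_jet((300.0, 100.0, 50.0), (15.0, 5.0, 2.5), shift=+1.0, rng=rng))
```

Because each component is smeared independently, the jet direction can change as well as its energy, exactly as described in the text.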
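The pileup reweighting variation amounts to recomputing per-bin weights with an alternative target distribution; a toy sketch in which all distributions are hypothetical:

```python
# Sketch of pileup reweighting: the weight in each bin of the number of
# pileup interactions is target(data) / source(MC); the systematic variation
# swaps in the distribution obtained from the alternative cross section.
def pileup_weights(data_dist, mc_dist):
    """Both inputs: lists of normalized probabilities per pileup bin."""
    return [d / m if m > 0 else 0.0 for d, m in zip(data_dist, mc_dist)]

mc      = [0.2, 0.5, 0.3]     # simulated pileup profile (toy numbers)
nominal = [0.1, 0.6, 0.3]     # profile from the nominal cross section
shifted = [0.12, 0.58, 0.30]  # profile from the alternative cross section

w_nom = pileup_weights(nominal, mc)
w_sys = pileup_weights(shifted, mc)
print(w_nom, w_sys)
```

The analysis is then rerun with `w_sys` instead of `w_nom`, and the difference in the final distributions is taken as the impact of the uncertainty.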

8.2 Impact on Signal Efficiencies and Background Prediction

The impact of the different systematic uncertainties which cause bin-to-bin variations on the MT spectrum of the background processes can be seen in Figure 8.1. The relative uncertainty on the number of background events above a certain MT threshold (called M_T^lower) is shown, divided into the different sources of uncertainty and the overall relative uncertainty. The strongest impact among these uncertainties comes from the jet energy uncertainties, since the jet energy (which is used to reconstruct the tau energy) is the main property of the dominant object in this analysis. The overall dominant uncertainty on the number of background events is due to the uncertainty on the number of QCD events, which can be seen in Table 7.1. For small values of M_T^lower the uncertainties arising from the MC background predictions are negligible compared to the uncertainty on the number of QCD background events.

The relative uncertainty on the number of signal events above an MT threshold (called M_T^lower) is shown in Figure 8.2 for four different SSM W′ samples. Again, the jet energy uncertainties have the strongest influence, and the rising uncertainty for M_T^lower close to the W′ mass is due to the small number of events at these high values of MT, since the effect of bin-to-bin variations is strongest for small event counts.



Figure 8.1: Relative uncertainties on the number of background events in bins of MT for all uncertainty sources causing a distortion of the spectrum, and the uncertainty arising from the QCD data-driven method. The entry labeled “All Syst.” refers to the sum of all uncertainties causing a distortion of the spectrum without the contribution from QCD.



Figure 8.2: Relative uncertainties on the number of signal events in bins of MT for all uncertainty sources causing a distortion of the spectrum. The upper left plot shows the SSM W′ with M = 1 TeV, the upper right plot the SSM W′ with M = 1.5 TeV, the lower left plot the SSM W′ with M = 2 TeV and the lower right plot the SSM W′ with M = 3 TeV.


9 Exclusion Limits

No excess in the number of data events above the Standard Model prediction has been seen in this analysis, so no evidence for a W′ boson appears in the analyzed data.

Upper limits on the cross section of the W′ at 95 % confidence level (CL) are set using a single-bin counting experiment approach with Bayesian statistics. The theoretical predictions of the cross section of the W′ in the SSM and the NUGIM (see Table 5.2) are used to derive limits on the W′ mass and on the parameter space of the NUGIM model from the cross section limits, as will be explained later.

9.1 Single Bin Counting Experiment with Bayesian Statistics

In this section the method to calculate limits based on single-bin counting is explained. In a single-bin counting experiment a threshold on the main discriminating variable is set, optimized for best sensitivity, and the number of data and background events above this threshold are counted. Four numbers (and their uncertainties) remain which have to be considered for the limit calculation: the number of background events (b), the number of data events (N), the signal efficiency (ε) and the integrated luminosity (L). This approach is appropriate for this analysis since the background is low and no specific signal shape, which could enhance the limit setting if considered, is left after reconstruction (see Figure 6.1). The search window is set in MT as the main discriminating variable, starting at a lower threshold of MT and extending up to infinity. In order to set exclusion limits, the lower threshold (M_T^lower) is optimized to obtain the best expected exclusion limit.

A Bayesian technique, described subsequently following [66], is used to calculate the exclusion limits from the counting experiment. More details on this technique can also be found in [16].

In contrast to the frequentist approach, where the number of data events is compared to a sampled distribution of the number of signal-plus-background events to obtain the confidence level, the Bayesian approach uses a likelihood function (L) of the signal contribution (s) at a constant number of background events (b) under the precondition of N. This likelihood function is not known a priori and is derived from the probability distribution of N under the parameter s, which is a Poisson distribution

P(N|s) = (s + b)^N / N! · e^{−(s+b)}    (9.1)

by using Bayes' theorem

P(B|A) = P(A|B) P(B) / P(A)    (9.2)


with A and B being arbitrary hypotheses.

This leads to a likelihood function of the signal contribution:

L(s|N) = P(N|s) π(s) / ∫ P(N|s) π(s) ds    (9.3)

where π(s) is the prior probability density for s.

In this analysis a uniform positive prior function is used for π(s), since the only a priori knowledge is that the signal contribution must be positive.

The systematic uncertainties on the different parameters are included in the likelihood function by treating them not as fixed parameters but as probability-distributed nuisance parameters. Each parameter is modeled with a log-normal prior function and included in the likelihood function by using a multidimensional generalization of Bayes' theorem, leading to the likelihood function

L(s|N) = P(N|s) π(s) / ∫∫ P(N|s) π(s) π(ν⃗) ds dν⃗    (9.4)

where π(ν⃗) = ∏_i π(ν_i) is the product of all prior functions.

The upper limit on the signal contribution (s_limit) is calculated from this likelihood function by determining the 95 % CL interval, numerically solving the equation

∫_{−∞}^{s_limit} L(s|N) ds = 0.95    (9.5)

In addition to this limit observed from data, an expected limit and its 1σ and 2σ bands are calculated using pseudo-experiments, where N is substituted by a number drawn from a Poisson distribution with mean equal to the number of expected background events. The mean is also shifted according to the nuisance parameters. The median of all limits resulting from these pseudo-experiments is the expected limit, while the region containing 68.2 % (95.4 %) of all limits forms the 1σ (2σ) band.

The limit calculation, including the optimization of the search window, is done with a RooStats [67] based program developed for the Higgs analysis [68].
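A minimal numerical sketch of Eqs. (9.1)-(9.5), assuming a flat positive prior and neglecting the nuisance parameters (the actual limits are computed with the RooStats-based tool):

```python
# Sketch of the single-bin Bayesian upper limit: scan the signal strength s,
# evaluate the Poisson likelihood, normalize the posterior and find the 95%
# quantile. Grid range/steps and the example inputs are illustrative.
import math

def upper_limit(n_obs, b, cl=0.95, s_max=50.0, steps=5000):
    ds = s_max / steps
    grid = [i * ds for i in range(steps + 1)]
    # log Poisson likelihood avoids overflow for large n_obs (requires b > 0)
    logl = [n_obs * math.log(s + b) - (s + b) - math.lgamma(n_obs + 1)
            for s in grid]
    m = max(logl)
    post = [math.exp(l - m) for l in logl]  # flat positive prior
    norm = sum(post) * ds
    acc = 0.0
    for s, p in zip(grid, post):
        acc += p * ds / norm
        if acc >= cl:
            return s
    return s_max

# Hypothetical counting experiment: 5 observed events on 5.9 expected background
print(round(upper_limit(n_obs=5, b=5.9), 2))
```

For the expected limit, the same function would be evaluated on many pseudo-datasets with N drawn from a Poisson distribution with mean b, taking the median and the 68.2 % / 95.4 % envelopes of the resulting limits.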

9.2 Limits in the Sequential Standard Model

The method described before is used to set limits on an additional cross section beyond the Standard Model contribution, arising from processes which lead to the same experimental signature as an SSM W′ boson decay. The limit on the cross section times BR(W′ → τν) as a function of the W′ mass is shown in Figure 9.1: the solid black line shows the limit observed with 12.1 fb−1 of data while the dotted line shows the expected limit. The green and yellow bands indicate the one and two sigma intervals of the limit. The observed limit shows only a small deviation from the expected limit and stays within the one sigma band, which reflects the fact that no deviation from the Standard Model expectation was seen in data.



Figure 9.1: Cross section limit as a function of the mass of the W′ boson in the Sequential Standard Model. All W′ masses below 2.35 TeV, corresponding to the intersection of the theoretical cross section line and the observed cross section limit, are excluded in the tau channel at 95 % CL.

The thin dotted line within the blue uncertainty band is the predicted cross section of an SSM W′. The cross section is calculated at NNLO and the uncertainty band represents the PDF uncertainty on the calculated cross section. The NNLO cross section as well as the PDF uncertainties are taken from [60]. All W′ masses leading to a theoretical signal cross section higher than the observed cross section limit are excluded. The intersection of the observed limit and the theoretical prediction indicates the best mass limit on the W′ derived from this analysis. The PDF uncertainty on the theoretical signal cross section is small and has a negligible effect on the mass limit. The SSM W′ is excluded for MW′ < 2.35 TeV at 95 % CL in the tau channel.
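The intersection of the observed limit curve with the theory curve can be found by interpolation between the tabulated mass points; a sketch with illustrative numbers (not the values of this analysis):

```python
# Sketch of the mass-limit extraction: find where the falling theory curve
# drops below the observed cross-section limit, interpolating linearly in
# log(sigma) between tabulated mass points.
import math

def mass_limit(masses, sigma_limit, sigma_theory):
    """Return the mass where sigma_theory crosses below sigma_limit."""
    for i in range(len(masses) - 1):
        d0 = math.log(sigma_theory[i]) - math.log(sigma_limit[i])
        d1 = math.log(sigma_theory[i + 1]) - math.log(sigma_limit[i + 1])
        if d0 > 0 >= d1:  # theory above limit -> theory below limit
            f = d0 / (d0 - d1)
            return masses[i] + f * (masses[i + 1] - masses[i])
    return None  # no crossing in the scanned range

# Illustrative points [GeV], [fb]: the theory falls faster than the limit
m   = [2000, 2200, 2400, 2600]
lim = [9.9, 8.0, 8.8, 8.6]
th  = [40.0, 15.0, 6.0, 2.5]
print(round(mass_limit(m, lim, th), 1))
```

Interpolating in log(sigma) is a natural choice here because the theory cross section falls roughly exponentially with the resonance mass.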

An overview of the limits for the different mass points and the lower border of the search window (M_T^lower) which yielded the best expected limit is shown in Table 9.1.

9.3 Limits on NUGIM Model

In addition to the limit on the SSM W′, limits are set on the parameter space of the NUGIM model. Contrary to the former, the limits on the NUGIM model are calculated with leading-order signal cross sections since no higher-order corrections have been calculated yet. For each value of the parameter sin²φ of the model a separate cross section limit is derived, since the signal efficiency depends on this parameter, as was described in Chapter 6 and can be seen in Table 6.2. From these cross section limits,


M_W′ [GeV]            500  700  900 1000 1100 1200 1400 1500 1600 1700 1800 1900
M_T^lower [GeV]       200  500  550  650  650  700  750  750  750  850 1000 1000
Observed limit [fb]   460  110   36   32   25   25   16   14   12   13   11  9.9
Expected limit [fb]   420  150   59   42   34   27   18   16   14   13   12   10

M_W′ [GeV]           2000 2200 2300 2400 2500 2600 2700 2800 2900 3000 3500 4000
M_T^lower [GeV]      1000 1000 1000 1000 1000 1000 1000 1000 1000 1000  950  850
Observed limit [fb]   9.9  8.0  8.3  8.8  8.2  8.6  8.7  9.2   11   11   25   50
Expected limit [fb]    10  8.2  8.5  9.2  8.4  8.9  9.2  9.5   13   12   26   49

Table 9.1: Overview of the cross section limits for the different masses of the SSM W′ boson and the corresponding search window in MT.

constraints on the mass of the W′ as a function of the value of sin²φ are derived in the same way as described in the previous section for the SSM W′. The cross section limits for the NUGIM model are shown in Figures 9.2, 9.3 and 9.4. The mass limit for the first four parameter values of sin²φ is approximately constant at 1.8 TeV. For rising values of sin²φ the signal efficiency increases, which leads to better cross section limits. But simultaneously the signal cross section times branching ratio decreases, which counteracts the increasing efficiency in terms of setting mass limits on the NUGIM W′.

The signal efficiency and cross section times branching ratio of the last two considered parameter points are similar to those of the SSM W′ and therefore the resulting mass limits are similar, too. For sin²φ = 0.2 masses MW′ < 2.1 TeV are excluded and for sin²φ = 0.3 masses MW′ < 2.25 TeV are excluded at 95 % CL.


Figure 9.2: Cross section limit as a function of the mass of the W′ boson in the NUGIM model. The left plot shows the limit for the parameter sin²φ = 0.031 and the right plot the limit for sin²φ = 0.04. For the first parameter point W′ masses below 1.8 TeV are excluded and for the second parameter point W′ masses below 1.75 TeV are excluded.



Figure 9.3: Cross section limit as a function of the mass of the W′ boson in the NUGIM model. The left plot shows the limit for the parameter sin²φ = 0.05 and the right plot the limit for sin²φ = 0.1. For both parameter points W′ masses below 1.8 TeV are excluded at 95 % CL.

The resulting constraints from these mass exclusion limits on the parameter space can be seen in Figure 9.5. This search has significantly extended the previously existing constraints from direct searches in the parameter region sin²φ < 0.3, while the excluded parameter space is comparable to the constraints from indirect limits. With the full 2012 dataset and an improved tau reconstruction algorithm, which is in preparation, these limits will improve and exceed the indirect constraints in the near future.



Figure 9.4: Cross section limit as a function of the mass of the W′ boson in the NUGIM model. The left plot shows the limit for the parameter sin²φ = 0.2 and the right plot the limit for sin²φ = 0.3. For the first parameter point W′ masses below 2.1 TeV are excluded and for the second one W′ masses below 2.25 TeV are excluded at 95 % CL.

Figure 9.5: Excluded parameter space of the NUGIM model. The previous limits are taken from [22], while the red line indicates the excluded region derived from this analysis. Everything below the line is excluded at 95 % CL.


10 Conclusion and Outlook

10.1 Conclusion

In this thesis a search for a heavy vector boson W′ with 12.1 fb−1 of data collected with the CMS detector at the LHC in 2012 at a center of mass energy of 8 TeV is presented. For the first time the decay of the W′ boson into hadronically decaying tau leptons is used as a search channel. This channel is an important addition to the well-established searches for heavy vector bosons in the electron and muon decay channels. As the tau is a fermion of the third generation, the tau decay channel of the W′ is of particular interest for searches for new physics beyond the Standard Model.

The very high momentum of taus arising from W′ boson decays results in very special and challenging requirements on the reconstruction of the tau leptons, which has been investigated in detail in this thesis. Some issues regarding the tau identification and energy reconstruction at high momenta have been found in the official HPS tau reconstruction algorithm, while the reconstruction of taus with pτT < 100 GeV seems to work correctly. First attempts to solve the issues at high momenta have been presented: the use of the PF jet energy instead of the HPS energy for the tau energy reconstruction was found to be a good solution. It was not possible to solve all issues within the timescale of this thesis, and the problems regarding the tau identification inefficiency and the resulting concerns about the reliability of the simulations are still under study.

While investigating the performance of the HLT tau triggers for this analysis, a serious inefficiency at high tau momenta was found. Much work was done in collaboration with the trigger experts to find and eliminate the source of this inefficiency. This work has helped to prevent additional data loss, since the trigger was fixed for the second half of the 2012 data-taking period.

A data-driven method to estimate the contribution of QCD multijet processes to the number of background events was developed in this analysis. A reliable prediction of this background contribution is crucial for this search since it is one of the dominant background sources.

The full analysis chain of a search for a W′ boson was executed in this analysis. No deviation from the Standard Model background estimation was found in the investigated data, and a limit on the mass of the W′ in a Sequential Standard Model was determined to be M(W′) > 2.35 TeV.

Additionally, new constraints on the parameter space of a Nonuniversal Gauge Interaction Model were set. Since the coupling strength to the different fermion generations is nonuniversal in this model and depends on a model parameter, the search in the tau channel made it possible to set limits in a parameter region where the electron and muon channels are not sensitive.


10.2 Outlook

Much work is still ongoing to adjust the HPS algorithm for taus with pτT > 100 GeV and to develop a dedicated algorithm for these taus. Some new insights have been gained in the meantime, but were not ready at the moment the analysis was frozen for this thesis. For example, the particle flow algorithm is not able to reconstruct a photon together with a charged pion if both particles are strongly collimated and their calorimeter clusters overlap too much. In this case the particle flow algorithm interprets the calorimeter cluster as misreconstructed and therefore fully relies on the track measurements. As a consequence, the photon is not recognized and is lost for the reconstructed tau energy [69].

This analysis will be continued and redone with the new, improved algorithm and the full 2012 dataset in the near future, which will strongly improve the exclusion limits and resolve the remaining issues regarding the reliability of the simulation due to the tau reconstruction.


11 Appendix


Data set name, generator cross section (pb) and number of events of the used MC samples. All data sets belong to the Summer12_DR53X-PU_S10_START53_V7A production; the per-sample version is given in parentheses. The last row is truncated in the source.

process      data set name                                                   cross section (pb)   events
Z/γ∗→ττ      DYToTauTau_M-100to200_TuneZ2Star_8TeV-pythia6-tauola (v1)       34.92                200167
Z/γ∗→ττ      DYToTauTau_M-200to400_TuneZ2Star_8TeV-pythia6-tauola (v1)       1.181                100083
Z/γ∗→ττ      DYToTauTau_M-400to800_TuneZ2Star_8TeV-pythia6-tauola (v1)       0.08699              100455
Z/γ∗→ττ      DYToTauTau_M-800_TuneZ2Star_8TeV-pythia6-tauola (v1)            0.004527             100080
single top   T_s-channel_TuneZ2star_8TeV-powheg-tauola (v1)                  2.82                 259961
single top   T_t-channel_TuneZ2star_8TeV-powheg-tauola (v3)                  47.0                 99876
single top   T_tW-channel-DR_TuneZ2star_8TeV-powheg-tauola (v1)              10.7                 497658
single top   Tbar_s-channel_TuneZ2star_8TeV-powheg-tauola (v1)               1.57                 139974
single top   Tbar_t-channel_TuneZ2star_8TeV-powheg-tauola (v1)               25.0                 1935072
single top   Tbar_tW-channel-DR_TuneZ2star_8TeV-powheg-tauola (v1)           10.7                 493460
tt̄           TTJets_MassiveBinDECAY_TuneZ2star_8TeV-madgraph-tauola (v2)     136.3                1344783
W→eν         WToENu_ptmin100_ptmax500_TuneZ2Star_8TeV-pythia6 (v1)           1.457                1000206
W→eν         WToENu_ptmin500_TuneZ2Star_8TeV-pythia6 (v1)                    0.001525             1000366
W→µν         WToMuNu_ptmin100_ptmax500_TuneZ2Star_8TeV-pythia6 (v1)          1.457                1000206
W→µν         WToMuNu_ptmin500_TuneZ2Star_8TeV-pythia6 (v1)                   0.001525             1000366
W→τν         WToTauNu_ptmin100_ptmax500_TuneZ2Star_8TeV-pythia6-tauola (v1)  1.457                1000206
W→τν         WToTauNu_ptmin500_TuneZ2Star_8TeV-pythia6-tauola (v1)           0.001525             1000366
Diboson      WW_TuneZ2star_8TeV_pythia6_tauola (v1)                          33.61                9910431
Diboson      WWtoAnything_ptmin500_TuneZ2Star_8TeV-pythia6-tauola (v1)       0.005235             1000142
Diboson      WZ_TuneZ2star_8TeV_pythia6_tauola (v1)                          12.63                10000283
Diboson      WZtoAnything_ptmin500_TuneZ2Star_8TeV-pythia6-tauola (v1)       0.001695             1000035
Diboson      ZZ_TuneZ2star_8TeV_pythia6_tauola (v1)                          5.196                9799908
Diboson      ZZtoAnything_ptmin500_TuneZ2Star_8TeV-pythia6-tauola (v2)       0.001065             1000000
QCD          QCD_Pt-80to120_TuneZ2star_8TeV_pythia6 (v3)                     1033680              6000000
QCD          QCD_Pt-120to170_TuneZ2star_8TeV_pythia6 (v3)                    156293.3             6000000
QCD          QCD_Pt-170to300_TuneZ2star_8TeV_pythia6 (v2)                    34138.15             6000000
QCD          QCD_Pt-300to470_TuneZ2star_8TeV_pythia6_v2 (v1)                 …                    …

1759.5

49

3500000

QC

DQ

CD

Pt-4

70to

600

TuneZ

2sta

r8T

eV

pyth

ia6

Sum

mer1

2D

R53X

-PU

S10

ST

AR

T53

V7A

-v2SIM

113.8

791

4000000

QC

DQ

CD

Pt-6

00to

800

TuneZ

2sta

r8T

eV

pyth

ia6

Sum

mer1

2D

R53X

-PU

S10

ST

AR

T53

V7A

-v2SIM

26.9

921

4000000

QC

DQ

CD

Pt-8

00to

1000

TuneZ

2sta

r8T

eV

pyth

ia6

Sum

mer1

2D

R53X

-PU

S10

ST

AR

T53

V7A

-v2SIM

3.5

50036

4000000

QC

DQ

CD

Pt-1

000to

1400

TuneZ

2sta

r8T

eV

pyth

ia6

Sum

mer1

2D

R53X

-PU

S10

ST

AR

T53

V7A

-v1SIM

0.7

37844

2000000

QC

DQ

CD

Pt-1

400to

1800

TuneZ

2sta

r8T

eV

pyth

ia6

Sum

mer1

2D

R53X

-PU

S10

ST

AR

T53

V7A

-v1SIM

0.0

3352235

2000000

QC

DQ

CD

Pt-1

800

TuneZ

2sta

r8T

eV

pyth

ia6

Sum

mer1

2D

R53X

-PU

S10

ST

AR

T53

V7A

-v1SIM

0.0

01829005

1000000

Tab

le11.1:

MC

samp

lesfor

back

groun

dp

rocesses.

Sh

own

areth

ed

atasetn

ame

onth

ecom

pu

ting

grid,

the

pro

du

ctioncross

sections

and

the

nu

mb

erof

generated

events
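The cross sections and generated-event counts listed here fix the normalization of each simulated sample: every event is weighted by w = σ · L_int / N_gen so that the simulated yield corresponds to the analysed integrated luminosity of 12.1 fb⁻¹. A minimal sketch (not code from the thesis; the sample values are taken from Table 11.1):

```python
# Sketch, not code from the thesis: luminosity normalization of MC samples.
# Each event gets the weight w = sigma * L_int / N_generated, so that the
# simulated yield matches the analysed data luminosity. Cross sections are
# quoted in pb, hence L_int is converted from fb^-1 to pb^-1.

L_INT_PB = 12.1 * 1000.0  # 12.1 fb^-1 expressed in pb^-1


def event_weight(sigma_pb, n_generated, lumi_pbinv=L_INT_PB):
    """Per-event weight that normalizes a sample to the data luminosity."""
    return sigma_pb * lumi_pbinv / n_generated


# A few rows of Table 11.1: (dataset, sigma [pb], generated events)
samples = [
    ("WToTauNu_ptmin100_ptmax500", 1.457, 1000206),
    ("QCD_Pt-170to300", 34138.15, 6000000),
    ("Tbar_tW-channel-DR", 10.7, 493460),
]

for name, sigma, n_gen in samples:
    print(f"{name}: w = {event_weight(sigma, n_gen):.5g}")
```

Note that the high-cross-section QCD bins carry per-event weights well above one, while the dedicated high-pT samples (e.g. the ptmin500 sets) are weighted far below one.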


Bibliography

[1] CMS Collaboration, “Search for new physics in the final states with a lepton and missing transverse energy at sqrt(s) = 8 TeV”, CMS Physics Analysis Summary CMS PAS EWK-12-060 (2012).

[2] P. A. M. Dirac, “The Quantum Theory of the Electron”, Proc. R. Soc. Lond. 117 (Feb, 1928) 610–624. doi:10.1098/rspa.1928.0023.

[3] S. Glashow, “Partial Symmetries of Weak Interactions”, Nucl.Phys. 22 (1961) 579–588. doi:10.1016/0029-5582(61)90469-2.

[4] S. Weinberg, “A Model of Leptons”, Phys. Rev. Lett. 19 (Nov, 1967) 1264–1266. doi:10.1103/PhysRevLett.19.1264.

[5] D. J. Gross and F. Wilczek, “Asymptotically Free Gauge Theories. I”, Phys. Rev. D 8 (Nov, 1973) 3633–3652. doi:10.1103/PhysRevD.8.3633.

[6] T. Hebbeker, “Skript zur Vorlesung Einführung in die Elementarteilchenphysik” (lecture notes).

[7] D. Forero, M. Tortola, and J. Valle, “Global status of neutrino oscillation parameters after Neutrino-2012”, Phys.Rev. D86 (2012) 073012, arXiv:1205.4018. doi:10.1103/PhysRevD.86.073012.

[8] “Wikipedia article on the Standard Model”. http://en.wikipedia.org/wiki/Standard_Model.

[9] C. S. Wu, E. Ambler, R. W. Hayward et al., “Experimental Test of Parity Conservation in Beta Decay”, Phys. Rev. 105 (Feb, 1957) 1413–1415. doi:10.1103/PhysRev.105.1413.

[10] M. Goldhaber, L. Grodzins, and A. W. Sunyar, “Helicity of Neutrinos”, Phys. Rev. 109 (Feb, 1958) 1015–1017. doi:10.1103/PhysRev.109.1015.

[11] F. Englert and R. Brout, “Broken Symmetry and the Mass of Gauge Vector Mesons”, Phys. Rev. Lett. 13 (Aug, 1964) 321–323. doi:10.1103/PhysRevLett.13.321.

[12] P. Higgs, “Broken symmetries, massless particles and gauge fields”, Physics Letters 12 (1964), no. 2, 132–133. doi:10.1016/0031-9163(64)91136-9.

[13] P. W. Higgs, “Broken Symmetries and the Masses of Gauge Bosons”, Phys. Rev. Lett. 13 (Oct, 1964) 508–509. doi:10.1103/PhysRevLett.13.508.


[14] CMS Collaboration, “Observation of a new boson at a mass of 125 GeV with the CMS experiment at the LHC”, Physics Letters B 716 (2012), no. 1, 30–61. doi:10.1016/j.physletb.2012.08.021.

[15] ATLAS Collaboration, “Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC”, arXiv:1207.7214.

[16] J. Beringer et al. (Particle Data Group), “Review of particle physics”, Phys. Rev. D86, 010001 (2012).

[17] CMS Collaboration, “Performance of τ-lepton reconstruction and identification in CMS”, Journal of Instrumentation 7 (January, 2012) 1001, arXiv:1109.6034. doi:10.1088/1748-0221/7/01/P01001.

[18] S. Erdweg, “Study for Sensitivity for W’ to tau nu with CMS”, bachelor’s thesis, RWTH, Germany, 2011.

[19] G. Altarelli, B. Mele, and M. Ruiz-Altaba, “Searching for New Heavy Vector Bosons in pp Colliders”, Z. Phys. C45 (1989) 109.

[20] C. Kilic, S. Thomas, K. Hoepfner, S. Thueer, J. Schulte, M. Olschewski et al., “Study of interference effects of Wprime with SM W-bosons in the leptonic channels”, CMS AN-11-423 (2011).

[21] E. Malkawi, T. M. Tait, and C. Yuan, “A Model of strong flavor dynamics for the top quark”, Phys.Lett. B385 (1996) 304–310, arXiv:hep-ph/9603349. doi:10.1016/0370-2693(96)00859-3.

[22] Y. G. Kim and K. Y. Lee, “Early LHC bound on W’ boson in the nonuniversal gauge interaction model”, Phys.Lett. B706 (2012) 367–370, arXiv:1105.2653. doi:10.1016/j.physletb.2011.11.032.

[23] C.-W. Chiang, N. Deshpande, X.-G. He et al., “The Family SU(2)l x SU(2)h x U(1) Model”, Phys.Rev. D81 (2010) 015006, arXiv:0911.1480. doi:10.1103/PhysRevD.81.015006.

[24] J. Alwall et al., “MadGraph/MadEvent v4: the new web generation”, JHEP 09 (2007) 028. doi:10.1088/1126-6708/2007/09/028.

[25] T. Sjostrand, S. Mrenna, and P. Z. Skands, “PYTHIA 6.4 Physics and Manual”, JHEP 05 (2006) 026, arXiv:hep-ph/0603175.

[26] A. Derevianko and S. G. Porsev, “Theoretical overview of atomic parity violation. Recent developments and challenges”, Eur.Phys.J. A32 (2007) 517–523, arXiv:hep-ph/0608178. doi:10.1140/epja/i2006-10427-7.

[27] “LEP design report”. CERN, Geneva, 1984. Copies shelved as reports in LEP, PS and SPS libraries.


[28] S. Roth, “W mass at LEP and standard model fits”, arXiv:hep-ex/0605014.

[29] A. Hocker and Z. Ligeti, “CP violation and the CKM matrix”, Ann.Rev.Nucl.Part.Sci. 56 (2006) 501–567, arXiv:hep-ph/0605217. doi:10.1146/annurev.nucl.56.080805.140456.

[30] L. Evans and P. Bryant, “LHC Machine”, Journal of Instrumentation 3 (2008), no. 08, S08001.

[31] ATLAS Collaboration, “The ATLAS Experiment at the CERN Large Hadron Collider”, JINST 3 (2008) S08003. doi:10.1088/1748-0221/3/08/S08003.

[32] ALICE Collaboration, “The ALICE experiment at the CERN LHC”, JINST 3 (2008) S08002. doi:10.1088/1748-0221/3/08/S08002.

[33] LHCb Collaboration, “The LHCb Detector at the LHC”, JINST 3 (2008) S08005. doi:10.1088/1748-0221/3/08/S08005.

[34] CERN Website, 2013. http://public.web.cern.ch/public/en/Research/AccelComplex-en.html.

[35] S. Holmes, R. S. Moore, and V. Shiltsev, “Overview of the Tevatron Collider Complex: Goals, Operations and Performance”, JINST 6 (2011) T08001, arXiv:1106.0909. doi:10.1088/1748-0221/6/08/T08001.

[36] CMS Collaboration, “The CMS experiment at the CERN LHC”, JINST 3 (2008) S08004. doi:10.1088/1748-0221/3/08/S08004.

[37] R. Brown and D. Cockerill, “Electromagnetic calorimetry”, Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 666 (2012) 47–79. Advanced Instrumentation. doi:10.1016/j.nima.2011.03.017.

[38] CMS Collaboration, “Performance of CMS muon reconstruction in pp collision events at sqrt(s) = 7 TeV”, arXiv:1206.4071.

[39] CMS Collaboration, “Particle Flow Event Reconstruction in CMS and Performance for Jets, Taus, and missing transverse energy”, CMS Physics Analysis Summary CMS PAS PFT-09/001 (2009) 25.

[40] W. Adam, R. Fruhwirth, A. Strandlie et al., “RESEARCH NOTE FROM COLLABORATION: Reconstruction of electrons with the Gaussian-sum filter in the CMS tracker at the LHC”, Journal of Physics G Nuclear Physics 31 (September, 2005) 9, arXiv:physics/0306087. doi:10.1088/0954-3899/31/9/N01.

[41] M. Cacciari, G. P. Salam, and G. Soyez, “The Anti-kT jet clustering algorithm”, JHEP 0804 (2008) 063, arXiv:0802.1189. doi:10.1088/1126-6708/2008/04/063.


[42] CMS Collaboration, “Tau identification in CMS”, CMS Physics Analysis Summary CMS PAS TAU-11/001 (2011).

[43] CMS Collaboration, “Software Guide for Tau Reconstruction”. https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuidePFTauID.

[44] CMS Collaboration, “Jet Energy Corrections determination at 7 TeV”, CMS Physics Analysis Summary CMS PAS JME-10-010 (2010).

[45] J. Allison et al., “Geant4 developments and applications”, Nuclear Science, IEEE Transactions on 53 (Feb, 2006) 270–278. doi:10.1109/TNS.2006.869826.

[46] C. Oleari, “The POWHEG BOX”, Nuclear Physics B - Proceedings Supplements 205–206 (2010) 36–41. Loops and Legs in Quantum Field Theory: Proceedings of the 10th DESY Workshop on Elementary Particle Theory. doi:10.1016/j.nuclphysbps.2010.08.016.

[47] Z. Was, “TAUOLA the library for tau lepton decay, and KKMC / KORALB / KORALZ /... status report”, Nucl.Phys.Proc.Suppl. 98 (2001) 96–102, arXiv:hep-ph/0011305. doi:10.1016/S0920-5632(01)01200-2.

[48] M. Lamanna, “The LHC computing grid project at CERN”, Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 534 (2004), no. 1–2, 1–6. Proceedings of the IXth International Workshop on Advanced Computing and Analysis Techniques in Physics Research. doi:10.1016/j.nima.2004.07.049.

[49] I. Antcheva, M. Ballintijn, B. Bellenot et al., “ROOT: A C++ framework for petabyte data storage, statistical analysis and visualization”, Comput.Phys.Commun. 182 (2011) 1384–1385. doi:10.1016/j.cpc.2011.02.008.

[50] C. Magass et al., “Aachen 3A Susy Analysis”. https://twiki.cern.ch/twiki/bin/viewauth/CMS/Aachen3ASusy.

[51] H. Abramowicz and A. Caldwell, “HERA collider physics”, Rev.Mod.Phys. 71 (1999) 1275–1410, arXiv:hep-ex/9903037. doi:10.1103/RevModPhys.71.1275.

[52] A. Martin, W. Stirling, R. Thorne et al., “Parton distributions for the LHC”, Eur.Phys.J. C63 (2009) 189–285, arXiv:0901.0002. doi:10.1140/epjc/s10052-009-1072-5.

[53] CMS Collaboration, “Utilities for Accessing Pileup Information for Data”, TWiki website, 2012. https://twiki.cern.ch/twiki/bin/viewauth/CMS/PileupJSONFileforData#2012_Pileup_JSON_Files.


[54] P. Lenzi, S. Padhi, G. Gomez Ceballos Retuerto, and F. Wuerthwein, “Standard Model Cross Sections for CMS at 8 TeV”. https://twiki.cern.ch/twiki/bin/viewauth/CMS/StandardModelCrossSectionsat8TeV.

[55] S. Chang, S. Erdweg, K. Hoepfner, D. Kim, S. Knutzen, P. Millet, Y. Oh, M. Olschewski, F. Schneider, and Y. Yang, “Search for new physics in the single lepton + MET final states with the full 2012 dataset at sqrt(s) = 8 TeV”, CMS AN-12-423 (2012).

[56] F. Schneider, “Search for New Physics in Final States with One Muon and Missing Transverse Energy with CMS Data”, Master’s thesis, RWTH, 2012.

[57] S. Frixione and B. R. Webber, “Matching NLO QCD computations and parton shower simulations”, JHEP 0206 (2002) 029, arXiv:hep-ph/0204244.

[58] C. Carloni Calame, G. Montagna, O. Nicrosini et al., “Precision electroweak calculation of the production of a high transverse-momentum lepton pair at hadron colliders”, JHEP 0710 (2007) 109, arXiv:0710.1722. doi:10.1088/1126-6708/2007/10/109.

[59] J. Pumplin, A. Belyaev, J. Huston et al., “Parton distributions and the strong coupling: CTEQ6AB PDFs”, JHEP 0602 (2006) 032, arXiv:hep-ph/0512167. doi:10.1088/1126-6708/2006/02/032.

[60] D. Kim et al., “PDF Uncertainties and K-factor for the W’ search at 8 TeV collisions”, CMS Analysis Note: AN-12-172 (2012).

[61] CMS Collaboration, “CMS MET Filters”. https://twiki.cern.ch/twiki/bin/viewauth/CMS/MissingETOptionalFilters.

[62] CMS Tau POG, “TauID: recommendation from the Tau POG”. https://twiki.cern.ch/twiki/bin/viewauth/CMS/TauIDRecommendation.

[63] CMS JetMET group, “Jet energy scale uncertainty sources”, TWiki website, 2012. https://twiki.cern.ch/twiki/bin/view/CMS/JECUncertaintySources.

[64] CMS Collaboration - MET group, “Prescriptions for MET uncertainties”. https://twiki.cern.ch/twiki/bin/viewauth/CMS/MissingETUncertaintyPrescription.

[65] CMS Collaboration, “CMS Luminosity Based on Pixel Cluster Counting - Summer 2012 Update”, CMS Physics Analysis Summary CMS PAS LUM-12-001 (2012).

[66] J. Heinrich, C. Blocker, J. Conway et al., “Interval estimation in the presence of nuisance parameters. 1. Bayesian approach”, arXiv:physics/0409129 (September, 2004).


[67] RooStats Collaboration, “RooStats, 2012”. https://twiki.cern.ch/twiki/bin/view/CMS/RooStats.

[68] CMS Collaboration, S. Chatrchyan, V. Khachatryan, A. M. Sirunyan et al., “Combined results of searches for the standard model Higgs boson in pp collisions at sqrt(s) = 7 TeV”, Physics Letters B 710 (March, 2012) 26–48. doi:10.1016/j.physletb.2012.02.064.

[69] K. Padeken, private communication, 2012.


Acknowledgements

To conclude this thesis, I would like to thank all those who supported me and made it possible for me to write it. The road here was often not easy, and I would have been glad to be spared one or another technical problem, but I could always count on the active help of many people.

Special thanks go to Kerstin Hoepfner for the intensive supervision and the proofreading of this thesis, and for her help in establishing this new analysis within the CMS community.

I also thank Prof. Dr. Thomas Hebbeker for the opportunity to carry out this interesting work at the III. Physikalisches Institut.

Many thanks also go to the entire Aachen CMS group, who helped me with all physics and technical problems. Particular mention goes to Fabian Schneider, Mark Olschewski, Martin Weber, Jan Schulte, Sebastian Thuer, Paul Papacz, Michael Brodski, and my office colleagues Philipp Millet, Sören Erdweg and Johannes Hellmund, who always made the work easier with many interesting discussions and a great working atmosphere.

I also thank Dr. Lisa Edelhauser and Dr. Alexander Knochel, who helped me understand the NUGIM model.

My very special thanks go to Klaas Padeken and the members of the tau POG, who invested a great deal of time in helping me repair the tau reconstruction to the point where this thesis could be brought to a successful conclusion at all.

Finally, I would like to thank my parents, who supported me not only during this thesis but throughout my entire studies.

Declaration of Authorship

I hereby declare in writing that I have written this thesis independently. I have marked all quotations and have used no aids or sources other than those indicated.

Simon Knutzen

Aachen, 27 February 2013