Statistical Methods for HEP Lecture 2: Multivariate Methods


Page 1: Statistical Methods for HEP Lecture 2: Multivariate Methods


Statistical Methods for HEP
Lecture 2: Multivariate Methods

Taller de Altas Energías 2010

Universitat de Barcelona

1-11 September 2010

Glen Cowan
Physics Department
Royal Holloway, University of London
[email protected]
www.pp.rhul.ac.uk/~cowan

Page 2: Statistical Methods for HEP Lecture 2: Multivariate Methods


Outline

Lecture 1: Introduction and basic formalism
Probability, statistical tests, parameter estimation.

Lecture 2: Multivariate methods
General considerations
Example of a classifier: Boosted Decision Trees

Lecture 3: Discovery and exclusion limits
Quantifying significance and sensitivity
Systematic uncertainties (nuisance parameters)

Page 3: Statistical Methods for HEP Lecture 2: Multivariate Methods


Event selection as a statistical test

For each event we measure a set of numbers: $\vec{x} = (x_1, \ldots, x_n)$

$x_1$ = jet $p_T$
$x_2$ = missing energy
$x_3$ = particle i.d. measure, ...

$\vec{x}$ follows some $n$-dimensional joint probability density, which depends on the type of event produced, i.e., was it $pp \to t\bar{t}$, $pp \to \tilde{g}\tilde{g}$, ...

E.g. hypotheses $H_0$, $H_1$, ...
Often simply "signal", "background"

[Figure: scatter of events in the $(x_i, x_j)$ plane with contours of the densities $p(\vec{x}|H_0)$ and $p(\vec{x}|H_1)$.]
0H|xp

Page 4: Statistical Methods for HEP Lecture 2: Multivariate Methods


Finding an optimal decision boundary

In particle physics one usually starts by making simple "cuts":

$x_i < c_i$
$x_j < c_j$

Maybe later try some other type of decision boundary:

[Figure: panels showing the regions assigned to $H_0$ and $H_1$ by rectangular cuts, by a linear boundary, and by a nonlinear boundary.]
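To make the idea concrete, here is a minimal sketch of such a cut-based selection in Python/NumPy. The variable names and cut values are illustrative, not taken from the lecture:

```python
import numpy as np

# Toy events: columns are x1 (jet pT in GeV) and x2 (missing energy in GeV).
# Illustrative values only; real analyses use many more variables.
rng = np.random.default_rng(seed=1)
events = rng.normal(loc=[50.0, 30.0], scale=[20.0, 15.0], size=(1000, 2))

# Simple rectangular cuts x1 < c1 and x2 < c2 define the accepted region;
# an event failing either cut is rejected.
c1, c2 = 60.0, 40.0
accepted = (events[:, 0] < c1) & (events[:, 1] < c2)

print(f"selected {accepted.sum()} of {len(events)} events")
```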


Page 11: Statistical Methods for HEP Lecture 2: Multivariate Methods


Example of a "cut-based" studyIn the 1990s, the CDF experiment at Fermilab (Chicago) measuredthe number of hadron jets produced in proton-antiproton collisionsas a function of their momentum perpendicular to the beam direction:

Prediction low relative to data forvery high transverse momentum.

"jet" ofparticles

Page 12: Statistical Methods for HEP Lecture 2: Multivariate Methods


High pT jets = quark substructure?

Although the data agree remarkably well with the Standard Model (QCD) prediction overall, the excess at high pT appears significant:

The fact that the variable is "understandable" leads directly to a plausible explanation for the discrepancy, namely, that quarks could possess an internal substructure.

This would not have been the case if the variable plotted had been a complicated combination of many inputs.

Page 13: Statistical Methods for HEP Lecture 2: Multivariate Methods


High pT jets from parton model uncertainty

Furthermore, physical understanding of the variable led one to a more plausible explanation, namely, uncertain modeling of the quark (and gluon) momentum distributions inside the proton.

When the model is adjusted, the discrepancy largely disappears:

This can be regarded as a "success" of the cut-based approach: physical understanding of the output variable led to the resolution of an apparent discrepancy.


Page 16: Statistical Methods for HEP Lecture 2: Multivariate Methods


Neural networks in particle physics

For many years, the only "advanced" classifier used in particle physics.

Usually use a single hidden layer with logistic sigmoid activation function:

$s(u) = (1 + e^{-u})^{-1}$
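A minimal sketch of the forward pass of such a network (the weights here are random placeholders; in practice they are adjusted by minimizing an error function on training data):

```python
import numpy as np

def sigmoid(u):
    # Logistic sigmoid activation, s(u) = 1 / (1 + exp(-u)).
    return 1.0 / (1.0 + np.exp(-u))

def nn_output(x, W1, b1, w2, b2):
    # Single hidden layer: h = s(W1 x + b1), output y = s(w2 . h + b2).
    h = sigmoid(W1 @ x + b1)
    return sigmoid(w2 @ h + b2)

# Placeholder weights for a network with 3 inputs and 5 hidden nodes.
rng = np.random.default_rng(seed=0)
W1, b1 = rng.normal(size=(5, 3)), rng.normal(size=5)
w2, b2 = rng.normal(size=5), rng.normal()

x = np.array([1.2, -0.7, 0.3])      # one event's input variables
print(nn_output(x, W1, b1, w2, b2))  # value in (0, 1), used as test statistic
```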

Page 17: Statistical Methods for HEP Lecture 2: Multivariate Methods


Neural network example from LEP II

Signal: ee → WW (often 4 well separated hadron jets)

Background: ee → qqgg (4 less well separated hadron jets)

Input variables based on jet structure, event shape, ...; none by itself gives much separation.

Neural network output:

(Garrido, Juste and Martinez, ALEPH 96-144)

Page 18: Statistical Methods for HEP Lecture 2: Multivariate Methods


Some issues with neural networks

In the example with WW events, the goal was to select these events so as to study properties of the W boson.

One needed to avoid using input variables correlated with the properties to be studied (not trivial).

In principle a single hidden layer with a sufficiently large number of nodes can approximate the optimal test variable (the likelihood ratio) arbitrarily well.

Usually one starts with a relatively small number of nodes and increases it until the misclassification rate on a validation data sample ceases to decrease, as in the sketch below.

Often MC training data are cheap, so problems with getting stuck in local minima, overtraining, etc., are less important than concerns about systematic differences between the training data and Nature, and about the ease of interpreting the output.
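With today's tools the node-count scan might look like this (a sketch using scikit-learn and toy data, an assumption of this note rather than the software used in the lecture):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Toy "signal vs background" sample standing in for MC training data.
X, y = make_classification(n_samples=4000, n_features=6, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Increase the number of hidden nodes until the validation
# misclassification rate stops decreasing.
for n_nodes in (1, 2, 4, 8, 16, 32):
    clf = MLPClassifier(hidden_layer_sizes=(n_nodes,),
                        activation="logistic",  # logistic sigmoid, as above
                        max_iter=2000, random_state=0)
    clf.fit(X_train, y_train)
    err = 1.0 - clf.score(X_val, y_val)
    print(f"{n_nodes:3d} hidden nodes: validation error = {err:.3f}")
```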

Page 19: Statistical Methods for HEP Lecture 2: Multivariate Methods


Overtraining

If the decision boundary is too flexible it will conform too closely to the training points → overtraining.

Monitor by applying the classifier to an independent test sample.

[Figure: the same flexible decision boundary drawn over the training sample and over an independent test sample.]
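One concrete check (a sketch; the deliberately flexible classifier and toy data are illustrative) is to compare the classifier's error rate and output distribution on the training and test samples:

```python
from scipy.stats import ks_2samp
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy data and a deliberately flexible classifier, prone to overtraining.
X, y = make_classification(n_samples=4000, n_features=6, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
clf = RandomForestClassifier(n_estimators=200, random_state=1)
clf.fit(X_train, y_train)

# A large gap between training and test error signals overtraining.
print("train error:", 1.0 - clf.score(X_train, y_train))
print("test  error:", 1.0 - clf.score(X_test, y_test))

# Equivalently, compare the classifier-output distributions on the two
# samples; a tiny KS p-value means they differ significantly.
stat, pval = ks_2samp(clf.predict_proba(X_train)[:, 1],
                      clf.predict_proba(X_test)[:, 1])
print(f"KS statistic = {stat:.3f}, p-value = {pval:.3g}")
```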

Page 20: Statistical Methods for HEP Lecture 2: Multivariate Methods


Particle i.d. in MiniBooNE

Detector is a 12-m diameter tank of mineral oil exposed to a beam of neutrinos and viewed by 1520 photomultiplier tubes.

The search for $\nu_\mu \to \nu_e$ oscillations required particle i.d. using information from the PMTs.

H.J. Yang, MiniBooNE PID, DNP06

Page 21: Statistical Methods for HEP Lecture 2: Multivariate Methods


Decision trees

Out of all the input variables, find the one for which a single cut gives the best improvement in signal purity:

$$P = \frac{\sum_{i \in s} w_i}{\sum_{i \in s} w_i + \sum_{i \in b} w_i}$$

where $w_i$ is the weight of the $i$th event.

Example by the MiniBooNE experiment, B. Roe et al., NIM A 543 (2005) 577.

Resulting nodes are classified as either signal or background.

Iterate until a stopping criterion is reached, based e.g. on purity or a minimum number of events in a node.

The set of cuts defines the decision boundary.
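A sketch of the tree-growing loop (the median cut is a stand-in for a real best-cut search, e.g. the Gini-based scan sketched after the next slide; the stopping thresholds are illustrative):

```python
import numpy as np

def grow_tree(x, is_signal, w, min_events=100, purity_stop=0.95, depth=0):
    """Recursively split a node on one variable x (median cut as a
    placeholder for the best-cut search)."""
    purity = w[is_signal].sum() / w.sum()
    # Stop criterion: too few events, or node already nearly pure.
    if len(x) < min_events or purity > purity_stop or purity < 1 - purity_stop:
        label = "signal" if purity >= 0.5 else "background"
        print("  " * depth + f"leaf: {len(x)} events, purity {purity:.2f} -> {label}")
        return
    cut = np.median(x)
    print("  " * depth + f"node: cut x < {cut:.2f}")
    left = x < cut
    grow_tree(x[left], is_signal[left], w[left], min_events, purity_stop, depth + 1)
    grow_tree(x[~left], is_signal[~left], w[~left], min_events, purity_stop, depth + 1)

# Toy node: signal shifted relative to background, unit weights.
rng = np.random.default_rng(seed=4)
x = np.concatenate([rng.normal(1, 1, 500), rng.normal(-1, 1, 500)])
is_signal = np.concatenate([np.ones(500, bool), np.zeros(500, bool)])
grow_tree(x, is_signal, np.ones(1000))
```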

Page 22: Statistical Methods for HEP Lecture 2: Multivariate Methods


Finding the best single cut

The level of separation within a node can, e.g., be quantified by the Gini coefficient, calculated from the (s or b) purity $p$ as:

$$G = p(1 - p)$$

For a cut that splits a set of events $a$ into subsets $b$ and $c$, one can quantify the improvement in separation by the change in weighted Gini coefficients:

$$\Delta = W_a G_a - W_b G_b - W_c G_c$$

where, e.g.,

$$W_a = \sum_{i \in a} w_i$$

Choose e.g. the cut that maximizes $\Delta$; a variant of this scheme can use, instead of Gini, e.g. the misclassification rate:

$$\varepsilon = 1 - \max(p, 1 - p)$$
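A sketch of the best-single-cut search using these formulas (toy data; a real tree would run this scan over every input variable and apply it recursively):

```python
import numpy as np

def weighted_gini(w_s, w_b):
    """Return W * G = W * p * (1 - p) for total weight W and signal purity p."""
    W = w_s + w_b
    if W == 0.0:
        return 0.0
    p = w_s / W
    return W * p * (1.0 - p)

def best_cut(x, is_signal, w):
    """Scan cut values on one variable; return (best_cut, best_gain),
    where gain is Delta = W_a G_a - W_b G_b - W_c G_c for the split of
    node a into subsets b (x < cut) and c (x >= cut)."""
    total = weighted_gini(w[is_signal].sum(), w[~is_signal].sum())
    best = (None, -np.inf)
    for cut in np.unique(x):
        left = x < cut
        g_b = weighted_gini(w[left & is_signal].sum(), w[left & ~is_signal].sum())
        g_c = weighted_gini(w[~left & is_signal].sum(), w[~left & ~is_signal].sum())
        gain = total - g_b - g_c
        if gain > best[1]:
            best = (cut, gain)
    return best

# Toy node: signal shifted relative to background, unit weights.
rng = np.random.default_rng(seed=2)
x = np.concatenate([rng.normal(1, 1, 500), rng.normal(-1, 1, 500)])
is_signal = np.concatenate([np.ones(500, bool), np.zeros(500, bool)])
print(best_cut(x, is_signal, np.ones(1000)))
```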


Page 27: Statistical Methods for HEP Lecture 2: Multivariate Methods


BDT example from MiniBooNE

~200 input variables for each event ($\nu$ interaction producing $e$, $\mu$, or $\pi^0$).

Each individual tree is relatively weak, with a misclassification error rate of ~ 0.4 – 0.45.

B. Roe et al., NIM A 543 (2005) 577

Page 28: Statistical Methods for HEP Lecture 2: Multivariate Methods


Monitoring overtraining

From the MiniBooNE example:

Performance stable after a few hundred trees.

Page 29: Statistical Methods for HEP Lecture 2: Multivariate Methods


Comparison of boosting algorithms

A number of boosting algorithms are on the market; they differ in the update rule for the weights.
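For instance, the classic discrete AdaBoost rule multiplies the weights of misclassified events by $e^{\alpha}$, with $\alpha$ set by the weighted error rate. A sketch of that particular update rule (one common choice, not necessarily the exact variant shown in the lecture):

```python
import numpy as np

def adaboost_update(w, misclassified):
    """One discrete AdaBoost weight update.

    w             -- current event weights (positive)
    misclassified -- boolean array, True where the current tree was wrong
    Returns (new_weights, alpha), alpha being the tree's vote weight.
    Assumes 0 < weighted error rate < 1.
    """
    eps = np.sum(w[misclassified]) / np.sum(w)   # weighted error rate
    alpha = np.log((1.0 - eps) / eps)            # tree weight in the vote
    w_new = w * np.exp(alpha * misclassified)    # boost misclassified events
    return w_new / w_new.sum(), alpha            # renormalize

# Example: 10 events, the weak tree misclassifies three of them.
w = np.full(10, 0.1)
wrong = np.zeros(10, bool)
wrong[[2, 5, 7]] = True
w, alpha = adaboost_update(w, wrong)
print(alpha, w.round(3))
```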


Page 31: Statistical Methods for HEP Lecture 2: Multivariate Methods


Single top quark production (CDF/D0)

The top quark was discovered via pair production, but the SM predicts single top production.

Use many inputs based on jet properties, particle i.d., ...

Pair-produced tops are now a background process.

[Figure: classifier output distribution with the signal (blue + green) above the backgrounds.]

Page 32: Statistical Methods for HEP Lecture 2: Multivariate Methods


Different classifiers for single top

Also naive Bayes and various approximations to the likelihood ratio, ...

The final combined result is statistically significant (> 5σ level), but the classifier outputs are not easy to understand.

Page 33: Statistical Methods for HEP Lecture 2: Multivariate Methods


Support Vector Machines

Map the input variables into a high-dimensional feature space: $\vec{x} \to \vec{\phi}(\vec{x})$

Maximize the distance between the separating hyperplanes (the margin), subject to constraints allowing for some misclassification.

The final classifier depends only on scalar products of $\vec{\phi}(\vec{x})$:

so one only needs the kernel $K(\vec{x}, \vec{x}') = \vec{\phi}(\vec{x}) \cdot \vec{\phi}(\vec{x}')$.

Bishop ch. 7
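A sketch of the kernel trick in code (the support vectors and coefficients below are placeholders; in practice they come out of the margin maximization):

```python
import numpy as np

def gaussian_kernel(x, xp, sigma=1.0):
    # K(x, x') = exp(-|x - x'|^2 / (2 sigma^2)); only this scalar function
    # of the inputs is needed -- phi(x) itself is never computed.
    return np.exp(-np.sum((x - xp) ** 2) / (2.0 * sigma ** 2))

def svm_decision(x, support_vectors, coeffs, b, sigma=1.0):
    # f(x) = sum_i c_i K(s_i, x) + b, summed over the support vectors only;
    # the coefficients c_i absorb the Lagrange multipliers and labels (+-1).
    return sum(c * gaussian_kernel(s, x, sigma)
               for s, c in zip(support_vectors, coeffs)) + b

# Placeholder support vectors and coefficients for illustration.
sv = [np.array([0.0, 1.0]), np.array([1.0, -1.0])]
coeffs, b = [0.8, -0.5], 0.1
print(svm_decision(np.array([0.5, 0.0]), sv, coeffs, b))
```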


Page 46: Statistical Methods for HEP Lecture 2: Multivariate Methods


Using an SVM

To use an SVM the user must as a minimum choose:

a kernel function (e.g. Gaussian)
any free parameters in the kernel (e.g. the σ of the Gaussian)
a cost parameter C (plays the role of a regularization parameter)

The training is relatively straightforward because, in contrast to neural networks, the function to be minimized has a single global minimum.

Furthermore, evaluating the classifier only requires retaining and summing over the support vectors, a relatively small number of points.

The advantages/disadvantages of and the rationale behind the choices above are not always clear to the particle physicist; help is needed here.
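A sketch of these three choices with a modern library (scikit-learn on toy data, an assumption for illustration rather than the tool used at the time):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy data standing in for signal/background MC.
X, y = make_classification(n_samples=2000, n_features=6, random_state=3)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=3)

# The three user choices named above: Gaussian (RBF) kernel, its width
# (gamma = 1 / (2 sigma^2)), and the cost parameter C.
clf = SVC(kernel="rbf", gamma=0.1, C=1.0)
clf.fit(X_train, y_train)

print("test accuracy:", clf.score(X_test, y_test))
print("number of support vectors:", len(clf.support_vectors_))
```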

Page 47: Statistical Methods for HEP Lecture 2: Multivariate Methods


SVM in particle physics

SVMs are very popular in the Machine Learning community but have yet to find wide application in HEP. Here is an early example from a CDF top quark analysis (A. Vaiciulis, contribution to PHYSTAT02).

[Figure: signal efficiency for the SVM classifier.]

Page 48: Statistical Methods for HEP Lecture 2: Multivariate Methods


Summary on multivariate methods

Particle physics has used several multivariate methods for many years:

linear (Fisher) discriminant
neural networks
naive Bayes

and has in the last several years started to use a few more:

k-nearest neighbour
boosted decision trees
support vector machines

The emphasis is often on controlling systematic uncertainties between the modeled training data and Nature to avoid false discovery.

Although many classifier outputs are "black boxes", a discovery at 5σ significance with a sophisticated (opaque) method will win the competition if backed up by, say, 4σ evidence from a cut-based method.

Page 49: Statistical Methods for HEP Lecture 2: Multivariate Methods

Quotes I like

“If you believe in something you don't understand, you suffer,...”

– Stevie Wonder

“Keep it simple. As simple as possible. Not any simpler.”

– A. Einstein


Page 50: Statistical Methods for HEP Lecture 2: Multivariate Methods


Extra slides

Page 51: Statistical Methods for HEP Lecture 2: Multivariate Methods


Using classifier output for discovery

[Figure: two distributions of the classifier output y. Left: f(y) for signal and background, normalized to unity. Right: N(y), normalized to the expected numbers of events, with a possible excess above background in the search region y > ycut.]

Discovery = number of events found in the search region incompatible with the background-only hypothesis.

The p-value of the background-only hypothesis can depend crucially on the distribution f(y|b) in the "search region".
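For the simplest case, where the search region is treated as a counting experiment, the p-value calculation might look like this (a sketch with illustrative numbers, using scipy):

```python
from scipy.stats import norm, poisson

# Counting experiment in the search region y > y_cut (numbers illustrative):
b = 3.2    # expected background events
n = 10     # observed events

# p-value of the background-only hypothesis: probability of observing
# n or more events if only background is present.
p = poisson.sf(n - 1, b)   # sf(n - 1) = P(N >= n)
Z = norm.isf(p)            # equivalent one-sided Gaussian significance
print(f"p-value = {p:.2e}, Z = {Z:.2f} sigma")
```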
