“GPUs for fast triggering in NA62 experiment”
Gianluca Lamanna (CERN)
Second International workshop for future challenges in Tracking and Trigger concepts, 7.7.2011


Page 1: “GPUs for fast triggering in NA62 experiment”

“GPUs for fast triggering in NA62 experiment”

Gianluca Lamanna (CERN)

Second International workshop for future challenges in Tracking and Trigger concepts

7.7.2011

Page 2

Outline


Overview of the NA62 experiment @CERN

Introduction to the Graphics processing unit (GPU)

The GPU in the NA62 trigger

Parallel pattern recognition

Measurement of GPU latency

Conclusions

NA62 Collaboration: Bern ITP, Birmingham, Bristol, CERN, Dubna, Ferrara, Fairfax, Florence, Frascati, Glasgow, IHEP, INR, Liverpool, Louvain, Mainz, Merced, Naples, Perugia, Pisa, Rome I, Rome II, San Luis Potosi, SLAC, Sofia, TRIUMF, Turin

[http://na62.web.cern.ch/NA62/Documents/TD_Full_doc_v10.pdf]

Page 3

NA62: Overview

Main goal: BR measurement of the ultra-rare decay K⁺ → π⁺νν̄ (BR_SM = (8.5±0.7)·10⁻¹¹)

Stringent test of the SM, golden mode for the search for and characterization of New Physics (complementary to the direct search)

Novel technique: kaon decay in flight, O(100) events in 2 years of data taking


Huge background: hermetic veto system, efficient PID

Weak signal signature: high-resolution measurement of the kaon and pion momenta

Ultra-rare decay: high-intensity beam, efficient and selective trigger system

Page 4

The NA62 TDAQ system

L0: hardware level. Decision based on primitives produced in the RO cards of the detectors participating in the trigger

L1: Software level. “Single detector” PCs

L2: software level. The information coming from the different detectors is merged


The L0 is synchronous (through the TTC). The input rate is ~10 MHz, the latency is of the order of 1 ms, and the trigger decision is based on information coming from the RICH, LKr, LAV and MUV.

The L1 is asynchronous (through Ethernet). The input rate is ~1 MHz, the maximum latency is a few seconds (the spill length), and the output rate is ~100 kHz.

Page 5

The Video Card processor: GPU


The GPUs (graphics processing units) are the standard processors used in commercial video cards for PCs

Main vendors: ATI (now AMD) and NVIDIA

Two standards to program the GPU: OpenCL and CUDA

The GPU was originally specialized for math-intensive, highly parallel computation

More transistors can be devoted to data processing rather than data caching and flow control, with respect to standard CPUs

SIMD (Single Instruction, Multiple Data) architecture

General-purpose computing (GPGPU): several applications in lattice QCD, fluid dynamics, medical physics, etc.

Page 6

Video Cards used in the tests

                             NVIDIA        NVIDIA        NVIDIA        AMD Radeon
                             Quadro 600    Tesla C1060   Tesla C2050   HD 5970 (*)
Number of multi-processors   2             30            14            20
Number of cores              96            240           448           3200
Core frequency (GHz)         0.64          1.3           1.15          0.725
Main memory (GB)             1             4             3             2
Memory bandwidth (GB/s)      25.6          102           144           256
Computing power (TFLOPS)     0.246         0.93          1.03          4.6
Max power consumption (W)    40            188           247           294


* Only half of the board is used at the moment

Page 7

GPUs in the NA62 TDAQ system


[Diagram] Software levels: RO board → L0TP → L1 PC (+GPU) → L1TP → L2 PC (+GPU), with ~1 MHz into L1 and ~100 kHz into L2.

GPU option at L0: RO board → L0 GPU → L0TP, with 10 MHz input, 1 MHz output and a maximum latency of 1 ms.

The use of the GPU at the software levels (L1/L2) is straightforward: just put the video card in the PC!

No particular changes to the hardware are needed

The main advantage is to reduce the number of PCs in the L1 farms

The use of GPU at L0 is more challenging:

Fixed and small latency (dimension of the L0 buffers)

Deterministic behavior (synchronous trigger)

Very fast algorithms (high rate)

Page 8

First application: RICH

~17 m RICH

1 Atm Neon

Light focused by two mirrors on two spots equipped with ~1000 PMs each (pixel 18 mm)

3σ π–μ separation in 15–35 GeV/c, ~18 hits per ring on average

~100 ps time resolution, ~10 MHz event rate

Time reference for trigger

[Figure labels: beam pipe; vessel diameter 4 → 3.4 m; mirror mosaic (17 m focal length); volume ~200 m³; 2 × ~1000 PMs]

Page 9

GPU @ L0 RICH trigger

10 MHz input rate, ~95% single track

200 B event size reduced to ~40 B with FPGA preprocessing

Event buffering in order to hide the latency of the GPU and to optimize the data transfer on the Video card

Two levels of parallelism: algorithm and data

Page 10

Algorithms for single ring search (1)


DOMH/POMH: each PM (~1000) is considered as the center of a circle. For each center a histogram is constructed with the distances between the center and the hits.
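The distance-histogram idea can be sketched serially in a few lines; this is an illustrative sketch (function and parameter names are ours, not the experiment's code), where each candidate center fills its own distance histogram and the sharpest peak wins:

```python
import math
from collections import Counter

def domh_vote(hits, centers, bin_width=0.5):
    """For every candidate center, histogram the center-hit distances;
    a ring shows up as a sharp peak in one of the histograms.
    Returns (best_center, radius, votes)."""
    best = None
    for c in centers:
        hist = Counter()
        for h in hits:
            d = math.hypot(h[0] - c[0], h[1] - c[1])
            hist[round(d / bin_width)] += 1  # vote for a radius bin
        rbin, votes = hist.most_common(1)[0]
        if best is None or votes > best[2]:
            best = (c, rbin * bin_width, votes)
    return best
```

On the GPU the outer loop is parallelized: one thread per candidate center, each filling an independent histogram.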

HOUGH: each hit is the center of a test circle with a given radius. The ring center is the best matching point of the test circles, found with a voting procedure in a 3D parameter space.

Page 11

Algorithms for single ring search (2)


TRIPL: in each thread the center of the ring is computed using three points (a “triplet”). For the same event, several triplets (but not all the possible ones) are examined at the same time. The final center is obtained by averaging the computed center positions.
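The per-thread center computation is just the circumcenter of three hits; a minimal sketch (helper name is ours), using the standard closed-form expression:

```python
def circumcenter(a, b, c):
    """Center of the circle through three hits (one triplet).
    Closed-form circumcenter; no iteration needed."""
    ax, ay = a
    bx, by = b
    cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)
```

Averaging the centers returned by many such triplets gives the TRIPL estimate.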

MATH: translation of the ring to the centroid of the hits. In this system a least-squares method can be used: the circle condition reduces to a linear system, solvable analytically without any iterative procedure.
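A serial sketch of such an algebraic fit (a Kåsa-style least-squares circle fit; our illustration, not the NA62 code): after translating to the centroid, the center solves a 2×2 linear system and the radius follows in closed form.

```python
import math

def fit_circle(points):
    """Algebraic least-squares circle fit: translate to the centroid,
    then solve a 2x2 linear system. Non-iterative, so the execution
    time per event is deterministic."""
    n = len(points)
    xm = sum(p[0] for p in points) / n
    ym = sum(p[1] for p in points) / n
    u = [p[0] - xm for p in points]
    v = [p[1] - ym for p in points]
    Suu = sum(a * a for a in u)
    Svv = sum(b * b for b in v)
    Suv = sum(a * b for a, b in zip(u, v))
    Suuu = sum(a ** 3 for a in u)
    Svvv = sum(b ** 3 for b in v)
    Suvv = sum(a * b * b for a, b in zip(u, v))
    Svuu = sum(b * a * a for a, b in zip(u, v))
    # Solve  [Suu Suv][uc] = [(Suuu+Suvv)/2]
    #        [Suv Svv][vc]   [(Svvv+Svuu)/2]
    det = Suu * Svv - Suv * Suv
    uc = ((Suuu + Suvv) / 2 * Svv - Suv * (Svvv + Svuu) / 2) / det
    vc = (Suu * (Svvv + Svuu) / 2 - Suv * (Suuu + Suvv) / 2) / det
    r = math.sqrt(uc * uc + vc * vc + (Suu + Svv) / n)
    return (uc + xm, vc + ym, r)
```

Because every step is a fixed sequence of sums and one closed-form solve, the same code maps naturally to one GPU thread per event.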

Page 12

Processing time

Using Monte Carlo data, the algorithms are compared on Tesla C1060

For packets of >1000 events, the MATH algorithm processing time is around 50 ns per event


The performance of DOMH (the most resource-dependent algorithm) is compared on several video cards

The gain due to different generation of video cards can be clearly recognized.

Page 13

Processing time stability

The stability of the execution time is an important parameter in a synchronous system

The GPU (Tesla C1060, MATH algorithm) shows a “quasi deterministic” behavior with very small tails.


The GPU temperature, during long runs, rises differently on the different chips, but the computing performance isn’t affected.

Page 14

Data transfer time

The data transfer time significantly influences the total latency

It depends on the number of events to transfer

The transfer time is quite stable (double-peak structure in the GPU→CPU transfer)

Using page locked memory the processing and data transfer can be parallelized (double data transfer engine on Tesla C2050)


Page 15

Total GPU time to process N events

The total latency is given by the transfer time, the execution time and the overheads of the GPU operations (including the time spent on the PC to access the GPU): for instance, ~300 us are needed to process 1000 events
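The per-event cost implied by these figures is simple arithmetic (a sketch using the numbers quoted above, not a new measurement):

```python
packet = 1000             # events per packet, as quoted above
total_latency_us = 300.0  # transfer + execution + overheads per packet

per_event_us = total_latency_us / packet
throughput_mhz = 1.0 / per_event_us
print(f"{per_event_us:.1f} us/event, ~{throughput_mhz:.1f} MHz sustained")
# prints: 0.3 us/event, ~3.3 MHz sustained
```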


[Timeline: start → copy data from CPU to GPU → processing (1000 events per packet) → copy results from GPU to CPU → end]

Page 16

GPU Latency measurement

Other contributions to the total time in a working system:

Time from the moment in which the data are available in user space to the end of the calculation: O(50 us) additional contribution (plot 1)

Time spent inside the kernel for the management of the protocol stack: ~10 us for network-level protocols; could improve with a Real-Time OS and/or an FPGA in the NIC (plot 2)

Transfer time from the NIC to the RAM through the PCI express bus

Page 17

Data transfer time inside the PC

ALTERA Stratix IV GX230 development board used to send packets (64 B) through the PCI Express bus

Direct measurement (inside the FPGA) of the maximum round-trip time (MRTT) at different frequencies

No critical issues; the fluctuations could be reduced using a Real-Time OS


Page 18

GPU @ L1 RICH trigger

After the L0: ~50% 1-track events, ~50% 3-track events

Most of the 3-track events, which are background, have at most 2 rings per spot

Standard multiple-ring fit methods aren’t suitable for us, since the fit must be:

Trackless

Non iterative

High resolution

Fast: ~1 us (1 MHz input rate)


A new approach uses Ptolemy’s theorem (from the first book of the Almagest):

“A quadrilateral is cyclic (its vertices lie on a circle) if and only if AD·BC + AB·DC = AC·BD”

Page 19

[Figure: quadrilateral with vertices A, B, C and candidate fourth points D on and off the ring]

Almagest algorithm description:

Select a triplet randomly (1 triplet per point = N+M triplets in parallel)

Consider a fourth point: if it doesn’t satisfy Ptolemy’s theorem, reject it

If the point satisfies Ptolemy’s theorem, it is considered for a fast algebraic fit (e.g. MATH, Riemann sphere, Taubin, …)

Each thread converges to a candidate center point; each candidate is associated with the Q quadrilaterals contributing to its definition. For the center candidates with Q greater than a threshold, the points at distance R are considered for a more precise re-fit.
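The Ptolemy test itself is a one-liner on pairwise distances; a minimal sketch (our names), assuming the four hits A, B, C, D are taken in order around the candidate ring:

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def on_same_circle(a, b, c, d, tol=1e-6):
    """Ptolemy's condition for a quadrilateral ABCD with vertices in
    order: AC*BD equals AB*DC + AD*BC exactly when the four points
    are concyclic; the tolerance absorbs hit-position resolution."""
    return abs(dist(a, c) * dist(b, d)
               - (dist(a, b) * dist(d, c) + dist(a, d) * dist(b, c))) < tol
```

In the trigger this cheap check filters fourth points before they are fed to the algebraic re-fit.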

Page 20

Almagest algorithm results


The real positions of the two generated rings are:

1 (6.65, 6.15) R=11.0

2 (8.42,4.59) R=12.6

The fitted positions of the two rings are:

1 (7.29, 6.57) R=11.6

2 (8.44,4.34) R=12.26

Fitting time on Tesla C1060: 1.5 us/event

Page 21

Straw tracker (1)

Low-mass straw tracker in vacuum (<10⁻⁶ mbar): reduced multiple-scattering contribution, low material budget (to avoid photon interactions)

2+2 chambers + magnet with 0.4 T and 2 m of aperture

7168 straws: ~1 cm in diameter, 2.1 m long

Ar:CO2 (70:30), Cu–Au metallization on 30 µm Mylar foils.

Momentum resolution: σ(p)/p = 0.32% ⊕ 0.009%·p [GeV/c]


Page 22

Straw tracker (2)

Each chamber: 4 views (x,y,u,v)

4 staggered layers per view

Gap in the middle of each layer for the beam passage.

Maximum rate per straw >500 kHz

Readout based on TDC

Large drift-time window → pileup


Page 23

Straw tracker (3)

The leading time depends on the track position, while the trailing time is independent of it.

The time resolution of the trailing time is worse than that of the leading time.

Try to use only the trailing time in the trigger (no precise reference time is needed)


Page 24

Fast online reconstruction: work in progress

Main goal: identify 3π at L1

Clustering of the hits in each chamber

Use only the unbent Y projection

Three hits to define a track

Loop on possible tracks: same decay vertex condition


~70% efficiency in 3π identification (mainly due to acceptance)

No effect on the signal

In a time window of 150 ns about 80% of the events are affected by pileup

Use the leading time to reduce the effect of the pileup

Page 25

Parallelization: Cellular automaton (to be done)

Cells (tracklets) definition: a segment between CH4 and CH3. Local geometrical criteria (tracklet angle) are used to reduce the final combinatorics

Neighbour definition: tracklet A is a neighbour of tracklet B if they have one point in common and are quasi-parallel (extrapolation with the search-window method)

Evolution rules: each tracklet is associated with a counter. The tracklet counter is incremented if there is at least one neighbour with a counter as high as its own.

Candidate definition: according to the maximum-counter principle, with χ² to resolve ambiguities. Re-fit with a Kalman filter.


[Figure: tracklets A and B (3.4 mm) across chambers CH1–CH4]

In the GPU: one thread per tracklet, i.e. N(hits in CH3) × M(hits in CH4) threads per event

Neighbour definition and evolution inside the thread; track candidates stored in shared memory; final threads for reduction and re-fit
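The evolution rule can be sketched serially (an illustrative sketch with our names; we assume directed “upstream” neighbour links, so that after evolution the counter of a tracklet equals the length of the chain ending on it):

```python
def evolve(tracklets, upstream, rounds):
    """Cellular-automaton evolution sketch. Each tracklet carries a
    counter (initially 1); in every round the counter is incremented
    if at least one upstream neighbour has a counter as high as its
    own. Updates are synchronous, as on the GPU."""
    counts = {t: 1 for t in tracklets}
    for _ in range(rounds):
        new = dict(counts)
        for t in tracklets:
            if any(counts[n] >= counts[t] for n in upstream.get(t, [])):
                new[t] += 1
        counts = new
    return counts
```

For a chain A→B→C plus an isolated tracklet D, three rounds give counters 1, 2, 3 and 1 respectively; candidates are then picked by the maximum-counter principle.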

Page 26

Conclusions

The GPUs can be used in trigger system to define high quality primitives

In the asynchronous software levels the use of video cards is straightforward, while in the synchronous hardware levels issues related to the total latency have to be evaluated: in our study no showstoppers are evident for a system with 10 MHz input rate and 1 ms of allowed latency

First application: ring finding in the RICH of NA62:

~50 ns per ring @L0 using a parallel algebraic fit (single ring)

~1.5 us per event @L1 using a new algorithm (double ring)

As a further application, we are investigating the benefit of using GPUs for online track fitting at L1, using a cellular automaton for parallelization

A full scale “demonstrator” will be ready for the technical runs next spring (Online corrections for the CHOD)


References:
i) IEEE-NSS CR 10/2009: 195-198
ii) Nucl. Instrum. Meth. A628 (2011) 457-460
iii) Nucl. Instrum. Meth. A639 (2011) 267-270
iv) “Fast online triggering in high-energy physics experiments using GPUs”, submitted to NIM

Page 27

SPARES

Page 28

RICH: mirror and PMs


[Figure: beam pipe and mirrors; vessel ~17 m long, ~4 m in diameter; PM spots]

Page 29

Dependence on number of hits

The execution time depends on the number of hits

Different slopes for different algorithms, depending on the number and kind of operations performed in the GPU


Page 30

Almagest “a priori” inefficiency


Among the N+M triplets randomly considered, at least one triplet has to lie on one of the two rings

If no triplet is good, the fit doesn’t converge: inefficiency

The inefficiency is negligible either for a high number of hits or for “asymmetric” rings (N≠M)

Page 31

Hough fit


[Plots: 18 cm test rings; 12 cm test rings]

The extension of the Hough algorithm to multiple rings isn’t easy

Fake points appear at wrong test radius values → several seeds

The final decision should be taken with hypothesis testing, looping on all the possible seeds (for instance: the correct result is given by the two seeds that use all the hits) → complicated and time-expensive

Page 32

Trigger bandwidth


[Diagram: CEDAR, GTK, CHANTI, LAV, STRAWS, RICH, CHOD and MUV3 feed the L0TP; per-detector L1 PCs (CEDAR, GTK, CHANTI, LAV, STRAWS, RICH, CHOD, LKr, MUV3) feed the L1TP, which sends events through a switch to the L2 farms. Quoted bandwidths: 320 Mb/s, 18 Gb/s, 320 Mb/s, 336 Mb/s, 5.4 Gb/s, 1.6 Gb/s, 160 Mb/s; 172 Gb/s LKr r/o; LKr trigger]