TRANSCRIPT
ITER Control System Technology Study
Klemen Žagar ([email protected])
Overview
- About ITER
- ITER Control and Data Acquisition System (CODAC) architecture
- Communication technologies for the Plant Operation Network: use cases/requirements, performance benchmark
EPICS Collaboration Meeting, Vancouver, April 2009
A Note!
Information about ITER and the CODAC architecture presented herein is a summary of ITER Organization presentations.
Cosylab prepared the studies on communication technologies for ITER.
About ITER (International Thermonuclear Experimental Reactor)
Machine components:
- Toroidal Field Coils: Nb3Sn, 18, wedged
- Central Solenoid: Nb3Sn, 6 modules
- Poloidal Field Coils: Nb-Ti, 6
- Vacuum Vessel: 9 sectors
- Port Plugs: heating/current drive, test blankets, limiters/RH, diagnostics
- Cryostat: 24 m high x 28 m dia.
- Blanket: 440 modules
- Torus Cryopumps: 8
- Divertor: 54 cassettes

Key parameters:
- Major plasma radius: 6.2 m
- Plasma volume: 840 m^3
- Plasma current: 15 MA
- Typical density: 10^20 m^-3
- Typical temperature: 20 keV
- Fusion power: 500 MW
- Machine mass: 23,350 t (cryostat + VV + magnets); shielding, divertor and manifolds: 7,945 t + 1,060 port plugs; magnet systems: 10,150 t; cryostat: 820 t
CODAC Architecture
Plant Operation Network (PON)
- Command invocation
- Data streaming: event handling, monitoring, bulk data transfer
- PON self-diagnostics: diagnosing problems in the PON; monitoring the load of the PON network
- Process control: reacting to events in the control system by issuing commands or transmitting other events
- Alarm handling: transmission of notifications of anomalous behavior; management of currently active alarm states
Prototype and Benchmarking
We measured latency and throughput in a controlled test environment. This allows side-by-side comparison, and the hands-on experience gained is also more comparable.

Latency test:
- Where a central service is involved (OmniNotify, IceStorm or EPICS/CA): send a message to the central service; upon receipt on the sender node, measure the difference between send and receive times.
- Without a central service (omniORB, ICE, RTI DDS): round-trip test; send a message to the receiving node, which responds; upon receipt of the response, measure the difference.

Throughput test:
- Send messages as fast as possible; measure the differences between receive times.

Statistical analysis to obtain average, jitter, minimum, 95th percentile, etc.
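The measurement and analysis procedure above can be sketched as follows. This is an illustrative harness, not the study's actual benchmark code: `send` and `receive` are placeholders for the middleware under test (e.g. a one-way invocation and its echoed reply), and one-way latency is estimated as half the round-trip time.

```python
import statistics
import time

def round_trip_latencies(send, receive, n=1000):
    """Estimate one-way latency as half the round-trip time.

    `send` and `receive` are hypothetical callables standing in for
    the middleware under test (omniORB, ICE, RTI DDS, ...).
    """
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        send(b"ping")            # send a message to the receiving node
        receive()                # block until the echoed response arrives
        samples.append((time.perf_counter() - t0) / 2.0)
    return samples

def summarize(samples):
    """Average, jitter (stdev), minimum and 95th percentile, as in the study."""
    ordered = sorted(samples)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]
    return {
        "avg": statistics.mean(samples),
        "jitter": statistics.stdev(samples),
        "min": ordered[0],
        "p95": p95,
    }
```

The same `summarize` helper applies to the throughput test, with samples being the differences between consecutive receive times instead of round-trip halves.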
Applicability to Use Cases

Use case             Channel Access   omniORB CORBA   RTI DDS   ZeroC ICE
Command invocation   4/2              5/5             4/3       5/5
Event handling       4/3              4/4             5/4       4/5
Monitoring           5/5 (EPICS)      5/5 (TANGO)     5/3       5/3
Bulk data transfer   5/3              4/4             5/4       4/4
Diagnostics          5                4               5         3
Process control      5 (EPICS)        5 (TANGO)       4         3
Alarm handling       5 (EPICS)        5 (TANGO)       3         3

First number: performance. Second number: functional applicability to the use case.
1. not applicable at all
2. applicable, but at a significant performance/quality cost compared to the optimal solution; custom design required
3. applicable, but at some performance/quality cost compared to the optimal solution; custom design required
4. applicable, but at some performance/quality cost compared to the optimal solution; foreseen in existing design
5. applicable, and close to the optimal solution; use case foreseen in design
PON Latency (small payloads)

[Chart: latency [ms] vs. payload size, 0-2500 bytes, for ICE, omniORB, RTI DDS and Commercial DDS II]
PON Latency (small payloads)

[Chart: latency [ms] vs. payload size, 0-2500 bytes, for EPICS (sync), OmniNotify and IceStorm]

Ranking:
1. omniORB (one-way invocations)
2. ICE (one-way invocations)
3. RTI DDS (not tuned for latency)
4. EPICS
5. OmniNotify
6. IceStorm
PON Throughput

[Chart: 1 Gbps link utilization (0-100%) vs. payload size, 10-1,000,000 bytes, for omniORB (oneway), RTI DDS (unreliable), ICE (oneway) and Commercial DDS II (unreliable)]
PON Throughput

[Chart: 1 Gbps link utilization (0-80%) vs. payload size, 10-1,000,000 bytes, for EPICS (async), EPICS (sync), IceStorm and OmniNotify]

Ranking:
1. RTI DDS
2. omniORB (one-way invocations)
3. ICE (one-way invocations)
4. EPICS
5. IceStorm
6. OmniNotify
PON Scalability
RTI DDS efficiently leverages IP multicasting. (source: RTI)
With technologies that do not use IP multicasting/broadcasting, per-subscriber throughput is inversely proportional to the number of subscribers! (source: RTI)
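The inverse-proportionality claim follows from simple arithmetic, sketched below as an idealized model (ignoring protocol overhead): a unicast publisher must send one copy of each sample per subscriber, so its link capacity is divided among them, whereas with IP multicast the network replicates packets and every subscriber can receive at the full link rate.

```python
def per_subscriber_throughput(link_bps, subscribers, multicast):
    """Idealized per-subscriber receive rate for a single publisher.

    Unicast: the publisher's link is shared among all copies it sends,
    so each subscriber gets link_bps / subscribers.
    Multicast: the network duplicates packets, so each subscriber can
    receive at the full link rate regardless of subscriber count.
    """
    if multicast:
        return link_bps
    return link_bps / subscribers

# On a 1 Gbps link with 10 subscribers:
# unicast   -> 100 Mbps per subscriber
# multicast -> 1 Gbps per subscriber
```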
EPICS
Ultimately, the ITER Organization has chosen EPICS:
- Very good performance.
- Easiest to work with.
- Very robust.
- Full-blown control system infrastructure (not just middleware).
- Likely to be around for a while (widely used by many labs).

Where could EPICS improve?
- Use IP multicasting for monitors.
- A remote procedure call layer (e.g., "abuse" waveforms to transmit data serialized with Google Protocol Buffers, or use PVData in EPICS v4).
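One way to picture the "abuse waveforms" idea: serialize a small structured record into a flat byte string that a client could write to a byte-array (waveform) PV, with the server unpacking it on receipt. The sketch below uses Python's `struct` module instead of Protocol Buffers to stay self-contained; the record layout (a command id plus two doubles) is invented purely for illustration.

```python
import struct

# Hypothetical RPC-style record carried over a waveform PV:
# little-endian uint32 command id, double timestamp, double value.
RECORD = struct.Struct("<I d d")

def encode(cmd_id, timestamp, value):
    """Pack the record into bytes suitable for a byte-array waveform."""
    return RECORD.pack(cmd_id, timestamp, value)

def decode(payload):
    """Unpack the waveform bytes back into the record's fields."""
    cmd_id, timestamp, value = RECORD.unpack(payload)
    return {"cmd": cmd_id, "ts": timestamp, "value": value}
```

A real implementation would use Protocol Buffers (or PVData's structured types in EPICS v4) for schema evolution; the point is only that a waveform is a generic byte container.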
Thank You for Your Attention