Low-Cost ICS Network Performance Testing
DESCRIPTION
Network performance testing for devices and systems can be a daunting task for vendors and end-users, given the cost of test equipment and the investment companies must make in developing relevant tests and understanding the results. During the last couple of years, a group of low-cost computing systems has been introduced that are very capable from a functional point of view, but how well do they actually perform? Can they be used in a low-cost performance testing lab to validate ICS devices before they go into production? Can end-users use them to capture live traffic on their networks and get reliable performance results? This talk will discuss how and when different types of equipment can be used to develop a low-cost network performance testing lab. It will also show results from a series of performance tests conducted on some of the equipment and with different testing architectures.
TRANSCRIPT
SCADASides 1
Low-Cost ICS Network Performance Testing
Jim Gilsinn, Kenexis Consulting
June 6, 2014
How This Got Started
• In 2001, while I worked @ NIST, my boss said:
• Industrial Ethernet is the next big wave for manufacturing, so say our customers (auto manufacturers)
• There are still a lot of questions about how well it performs
• Is it deterministic enough for the factory floor? Yes, but…
• Are there standardized metrics to show performance? Yes, but…
• Are there test tools available? Yes, but…
• Can companies put performance requirements into their procurements yet? Yes, but…
Determinism
• Vendors were building industrial Ethernet products that claimed certain performance
• End-users were finding quirky performance
• End-users would complain
• Vendors would say, “It works in our lab; there must be a problem in your system”
• End-users learned not to trust performance claims from vendors
• Some built labs to approve devices before implementing them
Standardized Metrics
• Vendors would describe their performance in many different ways and with varying definitions
• With ODVA, I helped to create a standard set of metrics for end-point devices based upon IETF definitions
• Throughput
• Jitter/Variability
• Latency (action latency, response latency)
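As a rough illustration of the three metrics, here is a minimal sketch computing them from matched send/receive timestamps. The function names, the sample data, and the simple max-deviation jitter statistic are my assumptions for illustration, not the formal ODVA/IETF definitions.

```python
# Minimal sketch: throughput, jitter, and latency from packet timestamps
# (seconds). Illustrative statistics only, not the ODVA/IETF definitions.

def throughput_pps(recv_times):
    """Packets per second over the capture window."""
    span = recv_times[-1] - recv_times[0]
    return (len(recv_times) - 1) / span

def latencies(send_times, recv_times):
    """Per-packet latency: receive time minus send time."""
    return [r - s for s, r in zip(send_times, recv_times)]

def jitter(recv_times):
    """Variability: max deviation of inter-arrival gaps from their mean."""
    gaps = [b - a for a, b in zip(recv_times, recv_times[1:])]
    mean = sum(gaps) / len(gaps)
    return max(abs(g - mean) for g in gaps)

# Hypothetical 100 pps stream with a little receive-side wobble
send = [0.000, 0.010, 0.020, 0.030]
recv = [0.002, 0.012, 0.023, 0.032]
print(throughput_pps(recv))      # packets/sec over the window
print(latencies(send, recv))     # per-packet delays
print(jitter(recv))              # worst gap deviation
```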
Test Tools
• After creating the metrics, NIST helped ODVA develop a set of performance tests
• We built the ODVA Performance Testing Laboratory, which ODVA charges companies to use to certify their stated performance
• No one has run the test since no one wants to fail
• ODVA charges every time a company tests and retests
• NIST went on to develop a free capture file analysis tool
• Available on SourceForge (1st gen is IENetP, 2nd gen is FENT)
• Both of these are dormant
• NIST also worked with the ODVA Interoperability Workshop to develop a series of 5 tests that could be conducted quickly
Procurement Language
• Big auto manufacturers have tried to get their vendors to use ODVA performance lab
• Hasn’t worked out well
• Have convinced vendors to go through PlugFest testing
• Vendors and end-users have started using a common language
• I guess that’s as good as it gets for now
Low-Cost Performance Testing
• Uses low-cost/readily-available equipment
• Low-cost is relative, $15 – $3k
• Readily-available, like laptops, switches, etc.
• Uses open-source/low-cost/readily-available software
• Open-source, like Linux, Wireshark, background traffic, and analysis tools
• Low-cost analysis tool (Kenexis, in development)
• Readily-available, like Windows, Office, browsers
• Additional useful tools
• Protocol-dependent master/scanner (software will get you ~2 ms)
Testing Equipment
• Laptops x2
• Alienware M14x-R2
• Ubuntu 14.04 native
• Windows VM
• Backtrack 5r3 USB
• DreamPlug
• Raspberry Pi
• Model B, rev 1
• Netgear GS108E Switch
• Throwing Star LAN Tap
• Hilscher netANALYZER
Testing Software
• Linux (Ubuntu 14.04, Backtrack 5r3, Kali)
• Wireshark (apt-get and compiled)
• PlugFest background traffic captures and scripts
• NIST Analysis Tool
• 1st Generation = IENetP – http://www.sourceforge.net/projects/ienetp
• 2nd Generation = FENT – http://www.sourceforge.net/projects/fent
• Kenexis Analysis Tool
• Follow-on, in development
PlugFest Background Traffic
• Traffic Captures
• Generated by Ixia network analyzer and packet generator
• Assembled into different sets (editcap & mergecap)
• tcpreplay Scripts
• Generated Linux scripts to replay capture files
• Conducted Analysis of Results
• Packet generator transmitting
• Laptop transmitting
• Laptop receiving
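The capture-assembly and replay workflow above could be sketched like this. The file names and the 60-second trim window are hypothetical, and the commands are only composed, not executed, so the Wireshark and tcpreplay tools need not be installed to run this sketch.

```python
# Sketch of assembling background-traffic sets (editcap/mergecap) and
# replaying them (tcpreplay). File names and the trim window are
# hypothetical; commands are built as argument lists but not executed.

def trim_cmd(src, dst, seconds=60):
    # editcap -i <sec> splits a capture into fixed-duration chunks
    return ["editcap", "-i", str(seconds), src, dst]

def merge_cmd(dst, *srcs):
    # mergecap -w combines several captures into one traffic set
    return ["mergecap", "-w", dst] + list(srcs)

def replay_cmd(pcap, iface="eth0"):
    # tcpreplay resends the capture on an interface at its original timing
    return ["tcpreplay", "--intf1=" + iface, pcap]

steady = merge_cmd("steady_state.pcap", "arp.pcap", "dhcp.pcap", "ntp.pcap")
print(" ".join(steady))
print(" ".join(replay_cmd("steady_state.pcap")))
```

In practice these command lists would be written out as the Linux replay scripts mentioned above, or handed to `subprocess.run` directly.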
PlugFest Background Traffic
Traffic sets: Baseline, Steady-State Managed, Steady-State Unmanaged, Burst Managed, Burst Unmanaged

Traffic Type                      Rate (pps)
ARP Request Broadcasts            180
Gratuitous ARP Broadcasts         180
DHCP Request Broadcasts           100
ICMP (ping) Request Broadcasts    100
NTP Multicasts                    10
EtherNet/IP ListIdentity Req.     10
EtherNet/IP Class 1               1800
ARP Burst Requests                240 pkts @ 4 kHz
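As a quick sanity check on the table, the steady per-second rates can be totaled. Which traffic types belong to which named set is not shown here, so this is just the aggregate load if all of the steady streams ran at once.

```python
# Aggregate of the steady (per-second) background-traffic rates above.
# Set membership (Baseline, Steady-State, Burst) is not reconstructed here.
rates_pps = {
    "ARP request broadcasts": 180,
    "Gratuitous ARP broadcasts": 180,
    "DHCP request broadcasts": 100,
    "ICMP (ping) request broadcasts": 100,
    "NTP multicasts": 10,
    "EtherNet/IP ListIdentity requests": 10,
    "EtherNet/IP Class 1": 1800,
}
total = sum(rates_pps.values())
print(total)  # total steady background load in packets per second
```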
PlugFest Testing Architecture
Eye Chart Slides Ahead
Example PlugFest Testing (Hilscher)
Example PlugFest Testing (Switch Mirror)
Low-Cost Testing Architecture
Low-Cost Testing
• Laptop → Laptop
• Laptop → DreamPlug
• DreamPlug → Laptop
• Laptop → Raspberry Pi
• Raspberry Pi → Laptop
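The pairings above boil down to timing traffic between two endpoints. A minimal round-trip sketch using a UDP echo is shown below; a real test would point the client at the device under test, but here both ends run on loopback so the sketch is self-contained. The probe payload and message size are arbitrary choices.

```python
# Minimal round-trip timing sketch, loosely like the laptop <-> DreamPlug /
# Raspberry Pi pairings. Both ends run on loopback so it is self-contained;
# a real test would aim the client socket at the device under test.
import socket
import threading
import time

def echo_server(sock):
    data, addr = sock.recvfrom(64)
    sock.sendto(data, addr)              # reflect the probe back

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))            # ephemeral loopback port
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
t0 = time.perf_counter()
client.sendto(b"probe", server.getsockname())
client.recvfrom(64)                      # block until the echo returns
rtt = time.perf_counter() - t0
print(f"round-trip time: {rtt * 1e6:.0f} us")
```

Software timestamping like this is what bounds the laptop results to soft real-time; the hardware-assisted capture card exists precisely because `time.perf_counter()` in user space cannot resolve single microseconds reliably.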
What The Data Shows
• Hilscher Capture Card
• 10 ns resolution time stamping
• Hardware assisted
• Good enough for hard real-time performance testing (1s of µs)
• High-End Laptop
• Backtrack/Kali better than Ubuntu
• Running from a USB stick works
• Good enough for soft real-time performance testing (~100 µs)
What The Data Shows
• DreamPlug
• Good enough for most process control
• Offset of mean (~5-10 ms)
• Random delays occur (~5-20 ms, sometimes 100+ ms)
• On par with Windows performance
• Raspberry Pi
• Good enough for slow process control
• Offset of mean (~5-25 ms)
• Random delays occur (100-1000 ms)
More Information
• Jim Gilsinn, Kenexis Consulting
• Email: [email protected]
• Phone: 614-323-2254
• Twitter: @JimGilsinn
• SlideShare: http://www.slideshare.net/gilsinnj
• Kenexis GitHub
• https://github.com/kenexis/LowCostPerformance