Design and Performance of the OpenBSD Stateful Packet Filter - Slides
TRANSCRIPT
-
8/7/2019 Design and Performance of the OpenBSD Stateful Packet Filter - Slides
1/33
Design and Performance of the OpenBSD Stateful Packet Filter (pf)
Daniel Hartmeier
Systor AG
Usenix 2002 p.1/2
-
Introduction
part of a firewall, working on IP packet level (vs. application level proxies or ethernet level bridges)
packet filter intercepting each IP packet that passes through the kernel (in and out on each interface), passing or blocking it
stateless inspection based on the fields of each packet
stateful filtering keeping track of connections, additional information makes filtering more powerful (sequence number checks) and easier (replies, random client ports)
-
Motivation
OpenBSD included IPFilter in the default install
what appeared to be a BSD license turned out to be non-free
unlike other license problems discovered by the ongoing license audit, this case couldn't be resolved, IPFilter removed from the tree
existing alternatives were considered (ipfw): larger code base, kernel dependencies
rewrite offers additional options, integrates better with existing kernel features
-
Overview
Introduction
Motivation
Filter rules, skip steps
State table, trees, lookups, translations (NAT, redirections)
Benchmarks
Conclusions
-
Filter rules
linear linked list, evaluated top to bottom for each packet (unlike netfilter's chains tree)
rules contain parameters that match/mismatch a packet
rules pass or block a packet
last matching rule wins (except for quick, which aborts rule evaluation)
rules can create state; further packets matching the state are passed without rule set evaluation
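The last-matching-rule semantics above can be sketched in a few lines (a minimal illustration, not pf's actual kernel code; rules here are dictionaries whose fields are simplified stand-ins for pf's rule parameters):

```python
def evaluate(rules, packet):
    """Return the action of the last matching rule (default: pass).

    A matching rule marked 'quick' aborts evaluation immediately,
    as in pf.  A match field missing from a rule matches any packet.
    """
    action = "pass"  # implicit default when no rule matches
    for rule in rules:
        if all(packet.get(k) == v
               for k, v in rule.items()
               if k not in ("action", "quick")):
            action = rule["action"]       # last match wins so far
            if rule.get("quick"):
                return action             # quick: abort rule evaluation
    return action

rules = [
    {"action": "block"},                                   # block everything
    {"action": "pass", "proto": "tcp", "dport": 80},       # allow http
    {"action": "block", "quick": True, "src": "10.0.0.9"}, # bad host, quick
]
```

A packet to TCP port 80 matches rules 1 and 2, so the last match ("pass") wins; a packet from 10.0.0.9 hits the quick rule and evaluation stops there.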
-
Skip steps
transparent optimization of rule set evaluation, improves performance without affecting semantics
example: ten consecutive rules apply only to packets from source address X, packet has source address Y, first rule evaluated, next nine skipped
skipping is done on most parameters, in pre-defined order
parameters like direction (in, out), interface or address family (IPv4/IPv6) partition the rule set a lot, performance increase is significant
worst case: consecutive rules have no equal parameters, every rule must be evaluated, no additional cost (linked list traversal)
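The skip-step idea can be sketched as follows (a simplified model, not pf's implementation): for each rule and parameter, precompute the index of the next rule whose value for that parameter differs, so one mismatch skips the whole run of identical rules.

```python
def build_skip_steps(rules, params):
    """For each rule and parameter, record the index of the next rule
    with a *different* value for that parameter (pf's skip steps)."""
    skip = [dict() for _ in rules]
    for param in params:
        i = 0
        while i < len(rules):
            j = i + 1
            while j < len(rules) and rules[j].get(param) == rules[i].get(param):
                j += 1
            for k in range(i, j):
                skip[k][param] = j  # a mismatch here jumps to rule j
            i = j
    return skip

def evaluate(rules, skip, params, packet):
    """Return (action, number of rules actually inspected)."""
    inspected, i, last = 0, 0, "pass"
    while i < len(rules):
        inspected += 1
        rule, jumped = rules[i], False
        for param in params:
            want = rule.get(param)
            if want is not None and packet.get(param) != want:
                i = skip[i][param]  # skip the run sharing this value
                jumped = True
                break
        if not jumped:
            last = rule.get("action", "pass")
            i += 1
    return last, inspected
```

With the slide's example of ten consecutive rules restricted to source X, a packet from Y causes one inspection instead of ten; a packet from X still inspects all ten, matching the stated worst case.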
-
State table
TCP (sequence number checks on each packet), ICMP error messages match the packet they refer to (simplifies rules without breaking PMTU discovery etc.)
UDP, ICMP queries/replies, other protocols: pseudo-connections with timeouts
adjustable timeouts, pseudo-connections for non-TCP protocols
binary search tree (AVL, now Red-Black), O(log n) even in worst case
key is two address/port pairs
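The O(log n) lookup keyed on the two address/port pairs can be sketched with a sorted list and binary search (illustrative only; pf keeps states in a balanced binary tree in the kernel, and a Python list insert is O(n) where a real tree's insert is O(log n)):

```python
import bisect

class StateTable:
    """States keyed on (proto, src addr, src port, dst addr, dst port),
    kept sorted; bisect gives O(log n) lookups, standing in for pf's
    balanced search tree (AVL, later Red-Black)."""

    def __init__(self):
        self._keys = []
        self._vals = []

    def insert(self, key, state):
        i = bisect.bisect_left(self._keys, key)
        self._keys.insert(i, key)
        self._vals.insert(i, state)

    def lookup(self, key):
        # Binary search: O(log n) comparisons even in the worst case.
        i = bisect.bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:
            return self._vals[i]
        return None

table = StateTable()
key = ("tcp", "10.1.1.1", 20000, "129.128.5.191", 80)
table.insert(key, {"established": True})
```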
-
Translations (NAT, redirections)
translating source addresses: NAT/PAT to one address using proxy ports
translating destination: redirections (based on addresses/ports)
mapping stored in state table
application level proxies (ftp) in userland
-
State table keys
one state entry per connection, stored in two trees
example: 10.1.1.1:20000 -> 62.65.145.30:50001 -> 129.128.5.191:80
outgoing packets: 10.1.1.1:20000 -> 129.128.5.191:80, replace source address/port with gateway
incoming packets: 129.128.5.191:80 -> 62.65.145.30:50001, replace destination address/port with local host
three address/port pairs of one connection: lan, gwy, ext
without translation, two pairs are equal
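The slide's NAT example can be worked through in a few lines (dictionaries stand in for the two kernel trees; the addresses are the ones from the example above):

```python
# One NAT'd connection, three address/port pairs (lan, gwy, ext):
lan = ("10.1.1.1", 20000)      # internal host
gwy = ("62.65.145.30", 50001)  # gateway's translated source pair
ext = ("129.128.5.191", 80)    # external server

state = {"lan": lan, "gwy": gwy, "ext": ext}

# Two trees hold pointers to the *same* state under different keys,
# so no separate translation map (and extra lookup) is needed:
tree_lan_ext = {(lan, ext): state}  # consulted for outgoing packets
tree_ext_gwy = {(ext, gwy): state}  # consulted for incoming packets

def translate_out(src, dst):
    """Outgoing packet lan->ext: rewrite source to the gateway pair."""
    s = tree_lan_ext.get((src, dst))
    return (s["gwy"], dst) if s else (src, dst)

def translate_in(src, dst):
    """Incoming packet ext->gwy: rewrite destination to the LAN pair."""
    s = tree_ext_gwy.get((src, dst))
    return (src, s["lan"]) if s else (src, dst)
```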
-
State table keys
two trees: tree-lan-ext (outgoing) and tree-ext-gwy (incoming), contain the same state pointers
no additional translation map (and lookup) needed
-
Normalization
IP normalization (scrubbing) to remove interpretation ambiguities, like overlapping fragments (confusing IDSs)
reassembly (caching) of fragments before filtering, only complete packets are filtered
sequence number modulation
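Sequence number modulation can be sketched as adding a random per-connection offset to outgoing sequence numbers (and subtracting it from incoming ones), hiding weak initial sequence numbers chosen by hosts behind the firewall; a simplified model, with 32-bit wraparound handled by modular arithmetic:

```python
import random

MOD = 2 ** 32  # TCP sequence numbers are 32-bit

def new_modulator():
    """Pick a random per-connection offset when state is created."""
    return random.randrange(MOD)

def modulate(seq, offset):
    """Applied to sequence numbers on the way out."""
    return (seq + offset) % MOD

def demodulate(seq, offset):
    """Applied to the peer's acknowledgement numbers on the way in."""
    return (seq - offset) % MOD
```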
-
Logging
through bpf, virtual network interface pflog0
link layer header used for pf-related information (rule, action)
binary log files, readable with tcpdump and other tools
-
Benchmarks: Setup
two (old) i386 machines with two network interface cards each, connected with two crosswire Cat5 cables, 10 Mbit/s unidirectional
tester: generate TCP packets on ethernet level through first NIC, capture incoming ethernet frames on second NIC
firewall: OpenBSD and GNU/Linux (equal hardware), IP forwarding enabled, packet filter enabled, no other services, no other network traffic (static arp table)
-
Benchmarks: Packet generation
TCP packets of variable size, random source/destination addresses and ports
embedded timestamp to calculate latency, incremental serial number to detect packet loss
send packets of specified size at specified rate for several seconds, print throughput, latency and loss
verify that setup can handle maximum link rate correctly
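The measurement scheme in these bullets can be sketched as follows (a toy model, not the original benchmark tool): each probe carries an incremental serial number and a send timestamp; gaps in the received serials give loss, and timestamp deltas give latency.

```python
def make_probe(serial, now):
    """Probe payload: incremental serial number plus send timestamp."""
    return {"serial": serial, "sent": now}

def analyze(sent_serials, received):
    """`received` maps serial -> (probe, receive time).

    Serials missing from `received` are lost packets; the difference
    between receive and send timestamps is the per-packet latency.
    Returns (list of lost serials, average latency in seconds)."""
    lost = [s for s in sent_serials if s not in received]
    latencies = [t - p["sent"] for p, t in received.values()]
    avg = sum(latencies) / len(latencies) if latencies else None
    return lost, avg
```

For example, sending five probes and "losing" serial 3 yields a loss list of [3] and the average of the remaining four latencies.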
-
Local, reaching link limit
[Graph: receiving rate (packets/s) vs. sending rate (packets/s), 1518 bytes/packet]
-
Local, reaching link limit
[Graph: receiving rate (packets/s) vs. sending rate (packets/s), 1518 bytes/packet; link limit at 812 packets/s]
-
Local, varying packet sizes
[Graph: throughput (bytes/s) vs. sending rate (packets/s), 1518-byte packets; link limit 812 packets/s]
-
Local, varying packet sizes
[Graph: throughput (bytes/s) vs. sending rate (packets/s), 1280-byte packets; link limit 961 packets/s]
-
Local, varying packet sizes
[Graph: throughput (bytes/s) vs. sending rate (packets/s), 1024-byte packets; link limit 1197 packets/s]
-
Local, varying packet sizes
[Graph: throughput (bytes/s) vs. sending rate (packets/s), 768-byte packets; link limit 1586 packets/s]
-
Local, varying packet sizes
[Graph: throughput (bytes/s) vs. sending rate (packets/s), 512-byte packets; link limit 2349 packets/s]
-
Local, varying packet sizes
[Graph: throughput (bytes/s) vs. sending rate (packets/s), 256-byte packets; link limit 4528 packets/s]
-
Local, varying packet sizes
[Graph: throughput (bytes/s) vs. sending rate (packets/s), 128-byte packets; link limit 8445 packets/s]
-
Local, varying packet sizes
[Graph: throughput (bytes/s) vs. sending rate (packets/s), 64-byte packets; link limit 14880 packets/s]
-
Local, varying packet sizes
[Graph: throughput (bytes/s) vs. sending rate (packets/s); curves: Local, OpenBSD, GNU/Linux]
-
Stateless, 100 rules, throughput
[Graph: throughput (packets/s) vs. sending rate (packets/s), 100 stateless rules; curve: iptables]
-
Stateless, 100 rules, throughput
[Graph: throughput (packets/s) vs. sending rate (packets/s), 100 stateless rules; curves: iptables, ipf]
-
Stateless, 100 rules, throughput
[Graph: throughput (packets/s) vs. sending rate (packets/s), 100 stateless rules; curves: iptables, ipf, pf]
-
Maximum throughput vs. rules
[Graph: maximum throughput (packets/s) vs. number of rules; curves: iptables, ipf, pf]
-
Maximum throughput vs. states
[Graph: maximum throughput (packets/s) vs. number of states; curves: ipf, pf]
-
Conclusions
rule set evaluation is expensive, state lookups are cheap
filtering statefully not only improves filter decision quality, it actually increases performance
memory cost: 64000 states with 64MB RAM (without tuning), i.e. roughly 1KB per state, increasing linearly
binary search tree for states scales with O(log n)
-
Production results
Duron 700MHz, 128MB RAM, 3x DEC 21143 NICs
25000-40000 concurrent states, average of 5000 packets/s
fully stateful filtering (no stateless passing)
CPU load doesn't exceed 10 percent
(same box and filter policy with IPFilter was 90 percent load average)
-
Questions?
The OpenBSD Project: http://www.openbsd.org/
Paper and slides: http://www.benzedrine.cx/pf.html