CSE 127 Computer Security, Spring 2009
Network worms and worm defense
Stefan Savage


TRANSCRIPT

Page 1: CSE 127 Computer Security Spring 2009

CSE 127 Computer Security

Spring 2009

Network worms and worm defense

Stefan Savage

Page 2: CSE 127 Computer Security Spring 2009

2

Network Worms

Programs that actively spread between machines

Infection strategy more active
Exploit buffer overflows, format bugs, etc.
Exploit bad password choice
Entice user into executing program

Page 3: CSE 127 Computer Security Spring 2009

3

Network Worms in action

Self-propagating, self-replicating network program

Exploits some vulnerability to infect remote machines
Infected machines continue propagating the infection

Page 4: CSE 127 Computer Security Spring 2009

4

A brief history of worms…

As always, sci-fi authors get it right first
Gerrold's "When H.A.R.L.I.E. Was One" (1972) – "virus"
Brunner's "Shockwave Rider" (1975) – "tapeworm program"
Shoch & Hupp co-opt the idea; coin the term "worm" (1982)

Key idea: programs that self-propagate through network to accomplish some task; benign

Fred Cohen demonstrates power and threat of self-replicating viruses (1984)

First significant worm in the wild: Morris worm (1988)

Page 5: CSE 127 Computer Security Spring 2009

5

History: Morris Internet Worm

November 2, 1988
Infected around 6,000 major Unix machines
Cost of the damage estimated at $10m - $100m
Robert T. Morris Jr. unleashed the Internet worm

Graduate student at Cornell University
Convicted in 1990 of violating the Computer Fraud and Abuse Act
$10,000 fine, 3-year suspended jail sentence, 400 hours of community service

Son of the chief scientist at the National Computer Security Center -- part of the National Security Agency

Today he’s a professor at MIT (and a great guy I might add)

Page 6: CSE 127 Computer Security Spring 2009

6

Morris Worm Transmission

Find user accounts on the target machine
Dictionary attack on /etc/passwd
If it found a match, it would log in and try the same username/password on other local machines

Exploit bug in fingerd
Classic buffer overflow attack

Exploit trapdoor in sendmail
Programmer left DEBUG mode in sendmail, which allowed sendmail to execute an arbitrary shell command string

Page 7: CSE 127 Computer Security Spring 2009

7

Morris Worm Infection

Sent a small loader to the target machine
99 lines of C code
It was compiled on the remote platform (cross-platform compatibility)
The loader program transferred the rest of the worm from the infected host to the new target
Used authentication! To prevent sysadmins from tampering with loaded code
If there was a transmission error, the loader would erase its tracks and exit

Page 8: CSE 127 Computer Security Spring 2009

8

Morris Worm Stealth/DoS

When the loader obtained the full code
It was put into main memory and encrypted
Original copies were deleted from disk (even a memory dump wouldn't expose the worm)
Worm periodically changed its name and process ID

Resource exhaustion
Denial of service (accidental)
There was a bug in the loader program that caused many copies of the worm to be spawned per host
System administrators cut their network connections
Couldn't use the Internet to exchange fixes!

Page 9: CSE 127 Computer Security Spring 2009

Odd observation

Between the late 80s and late 90s there are basically no new worm outbreaks…


Page 10: CSE 127 Computer Security Spring 2009

10

The Modern Worm era

Email-based worms in the late 90s (Melissa & ILoveYou)
Infect >1M hosts, but require user participation

CodeRed worm released in summer 2001
Exploited buffer overflow in IIS; no user interaction
Uniform random target selection (after fixing a bug in CRv1)
Infects 360,000 hosts in 10 hours (CRv2)
Like the Energizer Bunny… still going years later

Energizes a renaissance in worm construction (1000s)
Exploit-based: CRII, Nimda, Slammer, Blaster, Witty, etc.
Human-assisted: SoBig, NetSky, MyDoom, etc.
6,200 malware variants in 2004; 6x increase from 2003 [Symantec]
>200,000 malware variants in the first 6 months of 2006 [Symantec]
Convergence with SPAM, DDoS, spyware, identity theft, botnets

Page 11: CSE 127 Computer Security Spring 2009

11

The worm threat by metaphor

Imagine the following species:
Poor genetic diversity; heavily inbred
Lives in a "hot zone"; thriving ecosystem of infectious pathogens
Instantaneous transmission of disease
Immune response 10-1M times slower
Poor hygiene practices

What would its long-term prognosis be?

What if diseases were designed…
Trivial to create a new disease
Highly profitable to do so

Page 12: CSE 127 Computer Security Spring 2009

12

Technical enablers for worms

Unrestricted connectivity
Large-scale adoption of the IP model for networks & apps

Software homogeneity & user naiveté
Single bug = mass vulnerability in millions of hosts
Trusting users ("ok") = mass vulnerability in millions of hosts

Few meaningful defenses
Effective anonymity (minimal risk)

Page 13: CSE 127 Computer Security Spring 2009

13

How to think about outbreaks

Worms are well described as infectious epidemics

Simplest model: homogeneous random contacts
Classic SI model

» N: population size

» S(t): susceptible hosts at time t

» I(t): infected hosts at time t

» β: contact rate

» i(t): I(t)/N, s(t): S(t)/N

\frac{dS}{dt} = -\beta\,\frac{S I}{N} \qquad \frac{dI}{dt} = \beta\,\frac{S I}{N}

\frac{di}{dt} = \beta\, i\,(1 - i)

i(t) = \frac{e^{\beta (t - T)}}{1 + e^{\beta (t - T)}}

(logistic growth; T is a constant set by the initial condition)

courtesy Paxson, Staniford, Weaver
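
To make the dynamics concrete, here is a minimal Python sketch (not from the original slides; the contact rate and initial infected fraction are illustrative assumptions) that integrates di/dt = β·i·(1-i) and reports how long the outbreak takes to saturate:

# Euler integration of the SI model's logistic equation di/dt = beta * i * (1 - i)
beta = 1.8      # contact rate per unit time (assumed, illustrative)
i = 1e-4        # initially infected fraction of the population (assumed)
dt = 0.01       # integration step
t = 0.0
while i < 0.99:                      # run until 99% of hosts are infected
    i += beta * i * (1.0 - i) * dt   # one Euler step
    t += dt
print(f"99% infected after t = {t:.1f} time units")

The slow start followed by explosive growth and saturation is exactly the S-curve the closed-form solution above describes.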

Page 14: CSE 127 Computer Security Spring 2009

14

What’s important?What’s important?

There are lots of improvements to this model… Chen et al, Modeling the Spread of Active Worms, Infocom 2003 (discrete time) Wang et al, Modeling Timing Parameters for Virus Propagation on the Internet ,

ACM WORM ’04 (delay) Ganesh et al, The Effect of Network Topology on the Spread of Epidemics,

Infocom 2005 (topology) … but the conclusion is the same. We care about two

things:

How likely is it that a given infection attempt is successful?

Target selection (random, biased, hitlist, topological, …)
Vulnerability distribution (e.g. density – S(0)/N)

How frequently are infections attempted?
β: contact rate

Page 15: CSE 127 Computer Security Spring 2009

15

What can be done?

Reduce the number of susceptible hosts
Prevention: reduce S(t) while I(t) is still small (ideally reduce S(0))

Reduce the number of infected hosts
Treatment: reduce I(t) after the fact

Reduce the contact rate
Containment: reduce β while I(t) is still small

Page 16: CSE 127 Computer Security Spring 2009

16

Prevention: Software Quality

Goal: eliminate vulnerability

Static/dynamic testing (e.g. Cowan, Wagner, Engler, etc.)
Software process, code review, etc.

Active research community
Taken seriously in industry
Security code review alone for Windows Server 2003 ~ $200M

Traditional problems: soundness, completeness, usability
Practical problems: scale and cost

Page 17: CSE 127 Computer Security Spring 2009

17

Prevention: Wrappers

Goal: stop vulnerability from being exploited

Hardware/software buffer overflow prevention
NX, /GS, StackGuard, etc.

Sandboxing (BSD jail, virtual machines)
Limit capabilities of potentially exploited program
Don't allow certain system calls, network packets, privileges, etc.

Page 18: CSE 127 Computer Security Spring 2009

18

Prevention: Software Heterogeneity

Goal: reduce impact of vulnerability

Use software diversity to tolerate attack
Exploit existing heterogeneity
» Store your data on a Mac and on Windows
Create artificial heterogeneity (hot research topic)
» Large contemporary literature (address randomization, execution polymorphism… use the tricks of the virus writer as a good guy)

Open questions: class of vulnerabilities that can be masked, strength of protection, cost of support

Page 19: CSE 127 Computer Security Spring 2009

19

Prevention: Software Updating

Goal: reduce window of vulnerability

Most worms exploited known vulnerabilities (1 day to 3 months old)

Window shrinking: automated patch->exploit (some now have negative windows; zero-day attacks)

Patch deployment challenges: downtime, Q/A, etc.
Rescorla, Is finding security holes a good idea?, WEIS '04

Network-based filtering: decouple the "patch" from the code
E.g. UDP packet to port 1434 and > 60 bytes (Slammer)
Symantec: Generic Exploit Blocking
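
As a toy illustration of decoupling the "patch" from the code, the Slammer-style filter above can be expressed as a predicate over packet fields. This is a hypothetical Python sketch, not an interface from the lecture:

def slammer_filter(proto, dst_port, payload_len):
    # Drop UDP datagrams to port 1434 whose payload exceeds 60 bytes,
    # per the coarse filter described on the slide.
    return proto == "udp" and dst_port == 1434 and payload_len > 60

# A 376-byte datagram to UDP/1434 would match and be blocked:
print(slammer_filter("udp", 1434, 376))   # True
print(slammer_filter("tcp", 80, 512))     # False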

Page 20: CSE 127 Computer Security Spring 2009

20

Prevention: Known Exploit Blocking

Get early samples of new exploit
Network sensors/honeypots
"Zoo" samples

Security company distills a "signature"
Labor-intensive process

Signature pushed out to all customers
Host recognizer checks files/memory before execution

Example: Symantec
Gets early intelligence via the managed-service side of the business and the DeepSight sensor system
>60TB of signature updates per day

Key assumptions: can get samples and signature generation can be amortized

Assumes long reaction window

Page 21: CSE 127 Computer Security Spring 2009

21

Prevention: Hygiene Enforcement

Goal: keep susceptible hosts off the network

Only let hosts connect to the network if they are "well cared for"
Recently patched, up-to-date anti-virus, etc.
Manual version in place at some organizations (e.g. NSF)

Cisco Network Admission Control (NAC)
Can be expensive to administer

Page 22: CSE 127 Computer Security Spring 2009

22

Treatment

Reduce I(t) after the outbreak is done

Practically speaking, this is where much happens because our defenses are so bad

Two issues

How to detect infected hosts?
» They still spew traffic (commonly true, but a poor assumption)
» Look for known signature (malware detector)

What to do with infected hosts?
» Wipe the whole machine
» Custom disinfector (need to be sure you get it all… backdoors)
» Interesting opportunities for virtualization (checkpoint/rollback)
» Aside: interaction with SB1386…

Page 23: CSE 127 Computer Security Spring 2009

Aside: white worms


Page 24: CSE 127 Computer Security Spring 2009

24

Containment

Reduce contact rate

Slow down
Throttle connection rate to slow spread (see the sketch below)
» Used in some HP switches
Important capability, but the worm still spreads…

Quarantine
Detect and block the worm
Lockdown, scan detection, signature inference
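
A minimal sketch of the "slow down" idea, in the spirit of host-level connection-rate throttling (the one-new-destination-per-tick limit below is an assumed value, not taken from the slides):

from collections import deque

RATE_LIMIT = 1        # new destinations released per tick (assumed value)
seen = set()          # destinations this host has already contacted
pending = deque()     # connections to new destinations, waiting their turn

def request_connection(dst, send):
    # Known destinations go out immediately; new ones are queued.
    if dst in seen:
        send(dst)
    else:
        pending.append(dst)

def tick(send):
    # Once per tick, release at most RATE_LIMIT queued new destinations.
    for _ in range(min(RATE_LIMIT, len(pending))):
        dst = pending.popleft()
        seen.add(dst)
        send(dst)

A scanning worm that tries hundreds of new addresses per second backs up in the queue, while normal traffic (mostly repeat destinations) is barely delayed.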

Page 25: CSE 127 Computer Security Spring 2009

25

Quarantine requirements

We can define reactive defenses in terms of:
Reaction time – how long to detect, propagate information, and activate a response
Containment strategy – how malicious behavior is identified and stopped
Deployment scenario – who participates in the system

Given these, what are the engineering requirements for any effective defense?

Page 26: CSE 127 Computer Security Spring 2009

26

It's difficult…

Even with universal defense deployment, containing a CodeRed-style worm (<10% infected in 24 hours) is tough

Address filtering (blacklists): must respond in < 25 mins

Content filtering (signatures): must respond in < 3 hrs
For faster worms, seconds
For non-universal deployment, life is worse…

See: Moore et al, Internet Quarantine: Requirements for Containing Self-Propagating Code, Infocom 2003 for more details

Page 27: CSE 127 Computer Security Spring 2009

27

A pretty fast outbreak: Slammer (2003)

First ~1 min: behaves like a classic random scanning worm
Doubling time of ~8.5 seconds (CodeRed doubled every 40 mins)

>1 min: worm starts to saturate access bandwidth
Some hosts issue >20,000 scans per second
Self-interfering (no congestion control)

Peaks at ~3 min
>55 million IP scans/sec

90% of the Internet scanned in <10 mins
Infected ~100k hosts (conservative)

See: Moore et al, IEEE Security & Privacy, 1(4), 2003 for more details
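
A back-of-the-envelope check of these numbers (assuming roughly 100k initially susceptible hosts, as above): saturating the susceptible population takes about \log_2(10^5) \approx 16.6 doublings, and 16.6 \times 8.5\ \text{s} \approx 140\ \text{s} \approx 2.4\ \text{min}, consistent with the observed peak at roughly three minutes.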

Page 28: CSE 127 Computer Security Spring 2009

28

Was Slammer really fast?

Yes, it was orders of magnitude faster than CodeRed

No, it was poorly written and unsophisticated

Who cares? It is literally an academic point
The current debate is whether one can get < 500 ms
Bottom line: way faster than people!

See: Staniford et al, ACM WORM, 2004 for more details

Page 29: CSE 127 Computer Security Spring 2009

29

Outbreak Detection/Monitoring

Two classes of monitors

Ex situ: "canary in the coal mine"
» Network telescopes
» HoneyNets/honeypots

In situ: real activity as it happens

Page 30: CSE 127 Computer Security Spring 2009

30

Network Telescopes

Infected host scans for other vulnerable hosts by randomly generating IP addresses

Network Telescope: monitor large range of unused IP addresses – will receive scans from infected host

Very scalable. UCSD monitors > 1% of all routable addresses

Page 31: CSE 127 Computer Security Spring 2009

31

Why do telescopes work?

Assume the worm spreads randomly
Picks a 32-bit IP address at random and probes it

Monitor block of n IP addresses

If the worm sends m probes/sec, we expect to see one within:

t = \frac{2^{32}}{n \cdot m} \ \text{seconds}

If the monitor receives R' probes per second, we can estimate that the infected host is sending at:

R = \frac{2^{32}}{n} \, R'
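
A small numeric illustration of these formulas in Python; the monitor size, per-host scan rate, and observed rate below are made-up values, not measurements from the lecture:

# Telescope math from the slide, plugged in for a /8 telescope (n = 2**24 addresses).
n = 2**24          # monitored addresses (assumed: a /8 telescope)
m = 4000           # probes/sec sent by one infected host (assumed)
R_prime = 20       # probes/sec observed at the telescope from that host (assumed)

expected_wait = 2**32 / (n * m)        # seconds until we expect the first probe
estimated_rate = R_prime * 2**32 / n   # per-host scan rate inferred from R'

print(f"expected wait for first probe: {expected_wait:.3f} s")   # ~0.064 s
print(f"estimated scan rate: {estimated_rate:.0f} probes/s")     # ~5120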

Page 32: CSE 127 Computer Security Spring 2009

32

What can you learn?

How many hosts are infected? How quickly? Where are they from? How quickly are they fixed? What happens in the long term?

Page 33: CSE 127 Computer Security Spring 2009

33

Code Red: Growth

Page 34: CSE 127 Computer Security Spring 2009

34

Code Red: Country of Origin

[Bar chart of infected hosts by country, y-axis 0 to 160,000: US, Korea, China, Taiwan, Canada, UK, Germany, Australia, Japan, Netherlands; 525 hosts in NZ.]

Page 35: CSE 127 Computer Security Spring 2009

35

Code Red: patching rate

Page 36: CSE 127 Computer Security Spring 2009

36

Code Red: decay

Page 37: CSE 127 Computer Security Spring 2009

37

Global animation

Page 38: CSE 127 Computer Security Spring 2009

38

Problem

Telescopes are passive, can't respond to external packets

How to tell the difference between two worms?
Initial packets may be identical

How to tell the difference between a worm and something else?

Page 39: CSE 127 Computer Security Spring 2009

39

Example: Blaster Worm

Courtesy Farnam Jahanian

Page 40: CSE 127 Computer Security Spring 2009

40

Example: Blaster – How telescopes fail

Courtesy Farnam Jahanian

Page 41: CSE 127 Computer Security Spring 2009

41

One solution: active responders

Active responder blindly responds to all traffic
External SYN packet (TCP connection request)
Send SYN/ACK packet in response

This is the moral equivalent of saying "uh huh" on the phone…

It elicits a bit more response – hopefully you see what’s going on

Page 42: CSE 127 Computer Security Spring 2009

42

Using Active Responder

Courtesy Farnam Jahanian

Page 43: CSE 127 Computer Security Spring 2009

43

Limited fidelity

Difficult to mimic complex protocol interactions (some malware requires up to 70 message exchanges)

Can’t tell if the machine would be infected, what it would do etc…

Page 44: CSE 127 Computer Security Spring 2009

44

Honeypots

Solution: redirect scans to real "infectable" hosts (honeypots)

Analyze/monitor hosts for attacks

Challenges
Scalability
» Buy one honeypot per IP address?
Liability
» What if a honeypot infects someone else?
Detection
» What techniques can tell if a honeypot has been compromised?

Page 45: CSE 127 Computer Security Spring 2009

45

Aside: UCSD Potemkin honeyfarm

Largest honeypot system on the planet
Currently supports 65k live honeypots
Scalable to several million at reasonable cost

Uses lots of implementation tricks to make this feasible
Only uses a handful of physical machines
Only binds honeypots to addresses when an external request arrives
Supports multiple honeypots per physical machine using "virtual machines"

Page 46: CSE 127 Computer Security Spring 2009

46

Overall limitations of telescope, honeynet, etc. monitoring

Depends on worms scanning it
What if they don't scan that range (smart bias)?
What if they propagate via e-mail or IM?

Inherent tradeoff between liability exposure and detectability

Honeypot detection software exists

It doesn't necessarily reflect what's happening on your network (can't count on it for local protection)

Hence, we’re always interested in in situ detection as well

Page 47: CSE 127 Computer Security Spring 2009

47

Detecting worms on your network

Two classes of approaches

Scan detection: detect that a host is infected by observing its infection attempts

Signature inference: automatically identify a content signature for the exploit (sharable)

Page 48: CSE 127 Computer Security Spring 2009

48

Scan Detection

Basic idea: detect scanning behavior indicative of worms and quarantine individual hosts

Lots of variants of this idea, here’s one: Threshold Random Walk algorithm

» Observation: connection attempts to random hosts usually won’t succeed (no machine there, no service on machine)

» Track ratio of failed connection attempts to connection attempts per IP address; should be small

» Can be implemented at very high speed

See: Jung et al, Fast Portscan Detection Using Sequential Hypothesis Testing, Oakland 2004, Weaver et al, Very Fast Containment of Scanning Worms, USENIX Security 2004
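
A much-simplified Python sketch of the failed-connection-ratio idea (the thresholds are illustrative assumptions; the real Threshold Random Walk algorithm in the papers above uses a sequential hypothesis test rather than a fixed cutoff):

from collections import defaultdict

attempts = defaultdict(int)   # connection attempts per source IP
failures = defaultdict(int)   # failed attempts per source IP

FAIL_RATIO = 0.8      # fraction of failures that marks a scanner (assumed)
MIN_ATTEMPTS = 10     # minimum evidence before deciding (assumed)

def observe(src, succeeded):
    """Record one connection attempt; return True if src looks like a scanner."""
    attempts[src] += 1
    if not succeeded:
        failures[src] += 1
    if attempts[src] < MIN_ATTEMPTS:
        return False
    return failures[src] / attempts[src] > FAIL_RATIO

A benign client mostly connects to hosts that exist and answer, so its failure ratio stays low; a random-scanning worm fails on most probes and crosses the threshold quickly.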

Page 49: CSE 127 Computer Security Spring 2009

49

Signature inference

Challenge: automatically learn a content “signature” for each new worm – potentially in less than a second!

Singh et al, Automated Worm Fingerprinting, OSDI ’04

Page 50: CSE 127 Computer Security Spring 2009

50

Approach

Monitor network and look for strings common to traffic with worm-like behavior

Signatures can then be used for content filtering

SRC: 11.12.13.14.3920 DST: 132.239.13.24.5000 PROT: TCP

00F0  90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90  ................
0100  90 90 90 90 90 90 90 90 90 90 90 90 4D 3F E3 77  ............M?.w
0110  90 90 90 90 FF 63 64 90 90 90 90 90 90 90 90 90  .....cd.........
0120  90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90  ................
0130  90 90 90 90 90 90 90 90 EB 10 5A 4A 33 C9 66 B9  ..........ZJ3.f.
0140  66 01 80 34 0A 99 E2 FA EB 05 E8 EB FF FF FF 70  f..4...........p
. . .

PACKET HEADER

PACKET PAYLOAD (CONTENT)

Kibvu.B signature captured by Earlybird on May 14th, 2004

Page 51: CSE 127 Computer Security Spring 2009

51

Content sifting

Assume there exists some (relatively) unique invariant bitstring W across all instances of a particular worm (true today, not tomorrow...)

Two consequences
Content prevalence: W will be more common in traffic than other bitstrings of the same length
Address dispersion: the set of packets containing W will address a disproportionate number of distinct sources and destinations

Content sifting: find W’s with high content prevalence and high address dispersion and drop that traffic
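
A toy Python sketch of content sifting over fixed-length substrings; the real Earlybird system uses Rabin fingerprints and multistage filters to make this cheap, and the window size and thresholds here are assumptions for illustration only:

from collections import defaultdict

WINDOW = 8               # bytes of payload tracked per substring (assumed)
PREVALENCE_THRESH = 3    # repetitions before a string is interesting (assumed)
DISPERSION_THRESH = 3    # distinct sources and destinations required (assumed)

prevalence = defaultdict(int)                      # substring -> packet count
dispersion = defaultdict(lambda: (set(), set()))   # substring -> (sources, dests)

def sift(src, dst, payload):
    """Update the tables for each WINDOW-byte substring; return suspect strings."""
    suspects = []
    for i in range(len(payload) - WINDOW + 1):
        w = payload[i:i + WINDOW]
        prevalence[w] += 1
        srcs, dsts = dispersion[w]
        srcs.add(src)
        dsts.add(dst)
        if (prevalence[w] >= PREVALENCE_THRESH
                and len(srcs) >= DISPERSION_THRESH
                and len(dsts) >= DISPERSION_THRESH):
            suspects.append(w)
    return suspects

Only strings that are both highly repeated and spread across many source/destination pairs survive both tests, which is what separates worm content from ordinary popular traffic.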

Page 52: CSE 127 Computer Security Spring 2009

52

The basic algorithm

[Diagram: a detector in the network observes traffic among hosts A, B, C, D, E and cnn.com, maintaining a Prevalence Table and an Address Dispersion Table (sources, destinations) per content string.]

Page 53: CSE 127 Computer Security Spring 2009

53

The basic algorithm

[Animation frame: one string tracked; prevalence 1; address dispersion: 1 source (A), 1 destination (B).]

Page 54: CSE 127 Computer Security Spring 2009

The basic algorithm

[Animation frame: two strings tracked, each with prevalence 1; dispersion: source A, destination B for one string and source A, destination C for the other.]

Page 55: CSE 127 Computer Security Spring 2009

The basic algorithm

[Animation frame: one string reaches prevalence 2 with 2 sources (A, B) and 2 destinations (B, D); the other string remains at prevalence 1.]

Page 56: CSE 127 Computer Security Spring 2009

The basic algorithm

[Animation frame: the spreading string reaches prevalence 3 with 3 sources (A, B, D) and 3 destinations (B, D, E); the other string remains at prevalence 1.]

Page 57: CSE 127 Computer Security Spring 2009

57

Challenges

Computation
To support a 1 Gbps line rate we have 12 µs to process each (1500-byte) packet… at 10 Gbps, 1.2 µs
» Dominated by memory references; state is expensive
Content sifting requires looking at every byte in a packet

State
On a fully loaded 1 Gbps link, a naïve implementation can easily consume 100 MB/sec for tables

If you’re interested you can read the paper for the gory details on how we actually do it :-)

Page 58: CSE 127 Computer Security Spring 2009

70

Software prototype: Earlybird

AMD Opteron 242 (1.6Ghz)

Linux 2.6

Libpcap

EB Sensor code (using C)

EarlyBird Sensor

TAP

Summary data

Reporting & Control

EarlyBird Aggregator

EB Aggregator (using C)

Mysql + rrdtools

Apache + PHP

Linux 2.6

Setup 1: Large fraction of the UCSD campus traffic. Traffic mix: approximately 5,000 end-hosts, dedicated servers for campus-wide services (DNS, email, NFS, etc.). Line rate of traffic varies between 100 and 500 Mbps.

Setup 2: Fraction of local ISP traffic. Traffic mix: dialup customers, leased-line customers. Line rate of traffic is roughly 100 Mbps.

To other sensors and blocking devices

Page 59: CSE 127 Computer Security Spring 2009

73

Experience w/Earlybird

Extremely good

Detected and automatically generated signatures for every known worm outbreak over eight months

Can produce a precise signature for a new worm in a fraction of a second

Known worms detected: Code Red, Nimda, WebDav, Slammer, Opaserv, …

Unknown worms (with no public signatures) detected:

MsBlaster, Bagle, Sasser, Kibvu, …

Page 60: CSE 127 Computer Security Spring 2009

74

Sasser

Page 61: CSE 127 Computer Security Spring 2009

77

False Negatives

Easy to prove presence, impossible to prove absence

Live evaluation: over 8 months detected every worm outbreak reported on popular security mailing lists

Offline evaluation: several traffic traces run against both Earlybird and Snort IDS (w/all worm-related signatures)

Worms not detected by Snort but detected by Earlybird
The converse was never true

Page 62: CSE 127 Computer Security Spring 2009

78

False Positives

Common protocol headers
Mainly HTTP and SMTP headers
Distributed (P2P) system protocol headers
Procedural whitelist
» Small number of popular protocols

Non-worm epidemic activity
SPAM
BitTorrent

GNUTELLA.CONNECT /0.6..X-Max-TTL:  .3..X-Dynamic-Qu  erying:.0.1..X-V  ersion:.4.0.4..X  -Query-Routing:.  0.1..User-Agent:  .LimeWire/4.0.6.  .Vendor-Message:  .0.1..X-Ultrapee  r-Query-Routing:

Page 63: CSE 127 Computer Security Spring 2009

79

Major UCSD success story

Content sifting technologies patented by UC and licensed to a startup, Netsift Inc.

Improved and accelerated (esp. for hardware implementation)
12 months later Netsift was acquired by Cisco

When you buy a Cisco enterprise switch in 24 months, it will have this capability

Page 64: CSE 127 Computer Security Spring 2009

80

Limitations

Variant content
Polymorphism, metamorphism… no invariant signature

Network evasion
More about this when we cover IDS

End-to-end encryption vs. content-based security
Privacy vs. security policy

Slow/stealthy worms
Under threshold

DoS via manipulation
Make a worm with the string "Republican" in it

Trust between monitors
You say "ABC" is a worm signature; why should I trust you?

Page 65: CSE 127 Computer Security Spring 2009

81

Distributed detection issues

Suppose you detect a worm… you want to tell me about it. Why should I trust you?

If you lie you might tell me that “From: “ was a worm signature

Self-Certifying Alerts (Microsoft Research)
Have the worm alert encode the vulnerability itself
Recipient runs the alert in a safe "sandboxed" environment and proves to itself that the vulnerability exists (and thus the signature is valid)

Page 66: CSE 127 Computer Security Spring 2009

82

Next time

Bots, rootkits and spyware