EDINBURGH NAPIER UNIVERSITY

A METHODOLOGY TO EVALUATE RATE-BASED INTRUSION PREVENTION SYSTEM AGAINST DISTRIBUTED DENIAL OF SERVICE

Flavien Flandrin
Honours Project Thesis | Matriculation no: 08008575
Supervisor: William Buchanan | Second Marker: Jamie Graves
2009-2010
Authorship Declaration
I, Flavien Flandrin, confirm that this dissertation and the work presented in it are my own
achievement.
Where I have consulted the published work of others this is always clearly attributed;
Where I have quoted from the work of others the source is always given. With the exception of
such quotations this dissertation is entirely my own work;
I have acknowledged all main sources of help;
If my research follows on from previous work or is part of a larger collaborative research project
I have made clear exactly what was done by others and what I have contributed myself;
I have read and understood the penalties associated with Academic Misconduct.
I also confirm that I have obtained informed consent from all people I have involved in the
work in this dissertation, following the School's ethical guidelines.
Signed:
Date:
Matriculation no: 08008575
Data protection declaration
Under the 1998 Data Protection Act, the University cannot disclose your grade to an
unauthorised person. However, other students benefit from studying dissertations that have
their grades attached.
Please sign your name beneath one of the options below to state your preference.
The University may make this dissertation, with indicative grade, available to others.
The University may make this dissertation available to others, but the grade may not be
disclosed.
The University may not make this dissertation available to others.
Acknowledgement
My first thanks go to William J. Buchanan for giving me the opportunity to carry out this
project and for all the help that he provided. I would also like to thank Jamie Graves for being
my second marker.
Finally, and not least, I would like to thank Oriane Coulon for proofreading this thesis and
for her moral support throughout the duration of this project.
Contents
Authorship Declaration .................................................................................................................... i
Data protection declaration ............................................................................................................ ii
Acknowledgement .......................................................................................................................... iii
Contents ........................................................................................................................................... iv
List of Tables................................................................................................................................... viii
List of Figures ................................................................................................................................... ix
Abstract .............................................................................................................................................. x
CHAPTER 1 INTRODUCTION ................................................................................................................ 1
1.1 Project Overview ........................................................................................................................ 1
1.2 Background ................................................................................................................................. 1
1.3 Aim and Objectives .................................................................................................................... 2
1.4 Thesis structure .......................................................................................................................... 2
CHAPTER 2 LITERATURE REVIEW ....................................................................................................... 4
2.1 Introduction ................................................................................................................................ 4
2.2 Information Security .................................................................................................................. 4
2.2.1 Confidentiality Integrity Availability (CIA) .................................................................................. 4
2.2.2 Accessing Assets, Vulnerabilities, and Threats to Calculate Risk ............................................... 5
2.2.3 A brief overview of common threats ......................................................................................... 7
2.2.4 Risks ........................................................................................................................................... 8
2.3 Firewalls ...................................................................................................................................... 9
2.3.1 Firewall basics ............................................................................................................................ 9
2.3.2 Packet filters ............................................................................................................................ 10
2.3.3 Stateful Packet Filter ................................................................................................................ 11
2.4 Distributed Denial of Service .................................................................................................. 11
2.4.1 Definition and purpose ............................................................................................................ 11
2.4.2 How attackers recruit and control attacking machine ............................................................ 12
2.4.2.1 Recruiting machine ........................................................................................................... 12
2.4.2.2 Controlling the botnet ...................................................................................................... 13
2.4.3 Semantic of Denial of Service .................................................................................................. 15
2.4.3.1 Exploiting Vulnerability ..................................................................................................... 15
2.4.3.2 Attacking a Protocol .......................................................................................................... 16
2.4.3.3 Attacking Middleware ....................................................................................................... 16
2.4.3.4 Attacking an Application ................................................................................................... 16
2.4.3.5 Attacking a Resource......................................................................................................... 17
2.4.3.6 Pure Flooding .................................................................................................................... 17
2.4.4 Defence Strategy against Distributed Denial of Service .......................................................... 17
2.5 Intrusion Detection and Prevention System ........................................................................ 18
2.5.1 Definition of IDSs and IPSs ....................................................................................................... 19
2.5.1.1 IDS ..................................................................................................................................... 19
2.5.1.2 IPS ...................................................................................................................................... 19
2.5.2 IDPS Methodologies ................................................................................................................. 20
2.5.2.1 Content-based Methodology ............................................................................................ 20
2.5.2.1.1 Rule-based ..................................................................................................................... 20
2.5.2.1.2 Behaviour-based ............................................................................................................ 21
2.5.2.2 Protocol anomaly-based ................................................................................................... 21
2.5.2.3 Rate-based Methodology.................................................................................................. 21
2.5.3 Host-based IDPS and Network-based IDPS .............................................................................. 22
2.6 Testing and Evaluation of an Intrusion Prevention System .............................................. 24
2.6.1 IPS Configuration ..................................................................................................................... 25
2.6.2 Metrics for Evaluation of IPSs .................................................................................................. 25
2.6.3 Black-box and White-box Evaluation of an IPS ........................................................................ 27
2.6.4 DARPA methodology ................................................................................................................ 27
2.6.5 Offline and Online Evaluation .................................................................................................. 30
2.6.6 Testing tools ............................................................................................................................. 30
2.7 Conclusion ................................................................................................................................. 32
CHAPTER 3 DESIGN ........................................................................................................................... 35
3.1 Introduction .............................................................................................................................. 35
3.2 Network Architecture .............................................................................................................. 35
3.3 Testing method overview ....................................................................................................... 36
3.4 Attack and Background Traffic component Design ............................................................. 37
3.5 Evaluation Metrics Design ...................................................................................................... 39
3.6 Conclusion ................................................................................................................................. 40
CHAPTER 4 IMPLEMENTATION ......................................................................................................... 41
4.1 Introduction .............................................................................................................................. 41
4.2 Intrusion Prevention System Configuration ......................................................................... 41
4.2.1 Crafted Rules ............................................................................................................................ 42
4.2.2 Rate Filter Configuration .......................................................................................................... 42
4.2.3 ICMP-based DDoS Mitigation ................................................................................................... 43
4.2.4 TCP-based DDoS Mitigation ..................................................................................................... 44
4.2.5 UDP-based DDoS Mitigation .................................................................................................... 44
4.3 Background Traffic Generation .............................................................................................. 44
4.3.1 Tcpprep .................................................................................................................................... 45
4.3.2 Tcprewrite ................................................................................................................................ 46
4.3.4 Tcpreplay .................................................................................................................................. 46
4.4 Attack Traffic Generation ....................................................................................................... 46
4.5 Experiment Description .......................................................................................................... 47
4.5.1 Test Bed Description ................................................................................................................ 48
4.5.2 Metrics implementation .......................................................................................................... 48
4.6 Conclusion ................................................................................................................................. 50
CHAPTER 5 EVALUATION .................................................................................................................. 51
5.1 Introduction .............................................................................................................................. 51
5.2 Results ....................................................................................................................................... 51
5.2.1 CPU Load and Memory Load Experiment ................................................................................ 51
5.2.2 Available Bandwidth Experiment ............................................................................................. 53
5.2.3 Latency Experiment ................................................................................................................. 54
5.2.4 Rate Filter by Destination Results ............................................................................................ 56
5.2.5 Rate Filter by Source Results.................................................................................................... 57
5.2.6 Time to Live Experiment .......................................................................................................... 58
5.2.7 Reliability Experiment .............................................................................................................. 58
5.3 Analysis ...................................................................................................................................... 58
5.4 Conclusion ................................................................................................................................. 59
CHAPTER 6 CONCLUSION ................................................................................................................. 60
6.1 Introduction .............................................................................................................................. 60
6.2 Appraisal Achievement ........................................................................................................... 60
6.2.1 Objective 1 ............................................................................................................................... 60
6.2.2 Objective 2 ............................................................................................................................... 60
6.2.3 Objective 3 ............................................................................................................................... 61
6.2.4 Objective 4 ............................................................................................................................... 61
6.2.5 Objective 5 ............................................................................................................................... 62
6.2.6 Objective 6 ............................................................................................................................... 62
6.3 Critical Analysis......................................................................................................................... 62
6.4 Personal Reflection .................................................................................................................. 63
6.5 Future Work .............................................................................................................................. 64
REFERENCES ........................................................................................................................................ 65
Appendix A List of DDoS ................................................................................................................ 75
Appendix B Test-bed ...................................................................................................................... 77
Appendix C.1 R1 configuration ..................................................................................................... 78
Appendix C.2 R2 configuration ..................................................................................................... 79
Appendix C.3 R3 configuration ..................................................................................................... 79
Appendix D Switch configuration ................................................................................................. 80
Appendix E Initial Project Overview ............................................................................................ 82
Appendix F Week 9 Meeting Report ............................................................................................ 84
Appendix G Gantt Chart ................................................................................................................ 86
Appendix H Diary Sheets ............................................................................................................... 87
Acronyms ....................................................................................................................................... 103
List of Tables
Table 1: Assets and values (Cole, Krutz, Conley, Reisman, Ruebush, & Gollman, 2008) ........................ 6
Table 2: Vulnerabilities (Cole, Krutz, Conley, Reisman, Ruebush, & Gollman, 2008) ............................. 6
Table 3: Threats (Cole, Krutz, Conley, Reisman, Ruebush, & Gollman, 2008) ........................................ 7
Table 4: DDoS used overview ............................................................................................................... 38
Table 5: Evaluation Metrics................................................................................................................... 39
Table 6: Rate Filter Option (Snort Team, 2009) .................................................................................... 43
Table 7: Overview of Tcpreplay ............................................................................................................ 44
Table 8: Specification of Machines ....................................................................................................... 48
Table 9: Tools Summarised ................................................................................................................... 50
Table 10: CPU Load Detailed Results .................................................................................................... 51
Table 11: Memory Load Detailed Results ............................................................................................. 52
Table 12: Available Bandwidth Detailed Results .................................................................................. 53
Table 13: Latency Details Results .......................................................................................................... 55
Table 14: Rate Filter by Destination Results Detailed ........................................................................... 56
Table 15: Rate Filter by Source Results Detailed .................................................................................. 57
List of Figures
Figure 1: Cost against likelihood (Buchanan, 2008) ................................................................................. 8
Figure 2: Typical DMZ Configuration..................................................................................................... 10
Figure 3: Handler/agent architecture (Mirkovic, Dietrich, Dittrich, & Reiher, 2005). .......................... 14
Figure 4: Illustration of a site hosting stepping stone (Mirkovic, Dietrich, Dittrich, & Reiher, 2005) .. 14
Figure 5: Botnet Life Cycle (Schiller, et al., 2007) ................................................................................. 15
Figure 6: IDS Architecture (García-Teodoro, 2009) ................................................................................ 19
Figure 7: Rate-based IPS Response ....................................................................................................... 22
Figure 8: Differences between HIDPS and NIDPS ................................................................................. 24
Figure 9: Block diagram of 1999 test bed. (Lippmann, Haines, Fried, Korba, & Das, 2000) ................. 29
Figure 10: Average connections per day for dominant TCP services (Lippmann, Haines, Fried, Korba,
& Das, 2000) .......................................................................................................................................... 29
Figure 11: The MACE architecture (Sommers, Yegneswaran, & Barford, 2004(a)) .............................. 31
Figure 12: Taxonomy of MACE exploits (Sommers, Yegneswaran, & Barford, 2005) .......................... 31
Figure 13: Metasploit Architecture, Version 3.4 (Gates, 2009) ............................................................ 32
Figure 14: Schematic Implementation .................................................................................................. 36
Figure 15: Schematic Implementation .................................................................................................. 37
Figure 16: Data-set Modification .......................................................................................................... 45
Figure 17: Protocol Distribution ........................................................................................................... 45
Figure 18: Service Distribution ............................................................................................................. 45
Figure 19: CPU Load Results ................................................................................................................. 52
Figure 20: Memory Load Results .......................................................................................................... 53
Figure 21: Available Bandwidth Results ................................................................................................ 54
Figure 22: Latency Results .................................................................................................................... 55
Figure 23: Rate Filter by Destination Results ........................................................................................ 56
Figure 24: Rate Filter by Source Results ............................................................................................... 57
Abstract
Nowadays every organisation is connected to the Internet, and a growing share of the world's
population has access to it. The Internet simplifies communication between people: it is now easy
to hold a conversation with someone anywhere in the world. This popularity, however, also brings
new threats, such as viruses, worms, Trojans, and denial of service. In response, companies have
started to develop new security systems that help protect networks. The most common security
tools used by companies, and even by home users, are firewalls, antivirus software, and now
Intrusion Detection Systems (IDSs).
Nevertheless, this is not enough, so a newer class of security system has been created: the Intrusion
Prevention System (IPS), which is becoming more popular over time. An IPS can be defined as a blend
between a firewall and an IDS: it uses the detection capability of an IDS and the response capability
of a firewall. Two main types of IPS exist: the Network-based Intrusion Prevention System (NIPS) and
the Host-based Intrusion Prevention System (HIPS). The first should be set up in front of critical
resources, such as a web server, while the second is set up inside a host and so protects only that host.
Different methodologies are used to evaluate IPSs, but all of them have been produced by vendors
or by organisations specialised in the evaluation of security devices. This means that no standard
methodology for the evaluation of IPSs exists. Such a methodology would make it possible to
benchmark systems objectively, so that results could be compared across systems. This thesis
reviews different evaluation methodologies for IPSs. Because of the lack of documentation around
them, IDS evaluation methodologies are also analysed; this helps inform the creation of an IPS
evaluation methodology. The evaluation of such security systems is a vast topic, which is why this
thesis focuses on one particular type of threat: Distributed Denial of Service (DDoS). The evaluation
methodology centres on the capacity of an IPS to handle this threat.
The produced methodology is capable of generating realistic background traffic alongside attack
traffic in the form of DDoS attacks. Four different DDoS attacks are used to carry out the evaluation
of a chosen IPS. The evaluation metrics include packet loss, which is measured in two different ways
because of the selected IPS, together with the time to respond to the attack, the available
bandwidth, the latency, the reliability, the CPU load, and the memory load.
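As a minimal sketch of how such a packet-loss figure can be computed (the sent/received counters and the class names here are illustrative assumptions, not the thesis's actual instrumentation):

```python
# Hypothetical helpers illustrating the packet-loss metric. The counts are
# assumed to come from the traffic generator (sent) and from a capture taken
# behind the device under test (received); all names are illustrative.

def packet_loss_pct(sent: int, received: int) -> float:
    """Percentage of packets that never made it past the device under test."""
    if sent == 0:
        return 0.0
    return 100.0 * (sent - received) / sent

def loss_by_class(sent: dict, received: dict) -> dict:
    """Per-class loss, e.g. separating legitimate from malicious traffic --
    the kind of two-way measurement the abstract refers to."""
    return {k: packet_loss_pct(sent[k], received.get(k, 0)) for k in sent}
```

A useful evaluation then compares the loss figures per class: an effective rate-based IPS should show high loss for the malicious class and near-zero loss for the legitimate one.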
All experiments were carried out in a real environment to ensure that the results are as realistic as
possible. The IPS selected to exercise the methodology is the most popular open-source system,
Snort, set up on a Linux machine. The results show that the system is effective at handling a DDoS
attack, but once the malicious traffic reaches a rate of 6,000 packets per second (pps) Snort starts
to drop malicious and legitimate packets indiscriminately. They also show that the IPS could only
handle traffic below 1 Mbps.
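For orientation, Snort's rate filtering (the mechanism these experiments exercise, configured in Chapter 4) is driven by rate_filter directives in snort.conf. The following is purely an illustrative sketch, not the thesis's actual configuration; in particular the sig_id is hypothetical:

```
# Illustrative only: if the rule identified by gen_id 1 / sig_id 1000001
# fires more than 6000 times in 1 second for a given source address,
# switch its action to drop for the next 30 seconds.
rate_filter gen_id 1, sig_id 1000001, \
    track by_src, \
    count 6000, seconds 1, \
    new_action drop, timeout 30
```

Tracking can alternatively be done by_dst or by_rule, which is why the packet-loss metric above is measured in more than one way.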
The conclusion shows that the produced methodology makes it possible to evaluate the mitigation
capability of an IPS. The limitations of the methodology are also explained; one of the key
limitations is the impossibility of aggregating the background traffic with the attack traffic.
Furthermore, the thesis outlines interesting future work, such as automating the evaluation
procedure to simplify the evaluation of IPSs.
CHAPTER 1
INTRODUCTION
1.1 Project Overview
The main aim of this thesis is to produce a methodology for investigating the performance of
rate-based Intrusion Prevention Systems (IPSs) over a range of network conditions. This
methodology should offer a means to assess the evaluated device in an unbiased way, making it
possible to evaluate and compare different rate-based IPSs because they are tested under the same
conditions. The device under test is assessed through a set of metrics chosen to be as objective as
possible and measurable on any IPS on the market. The methodology should be usable in a real
environment as well as in a lab environment and should offer ways to reduce the differences
between the two as much as possible. Evaluating such devices in an unrealistic environment only
produces biased results that will never match those obtained once the device is deployed in
production. To evaluate the approach taken, the thesis uses the open-source IPS Snort, running in
a Linux environment; this also permits an assessment of the tool itself.
1.2 Background
Nowadays, open communication networks such as the Internet or the cellular telephone system
allow anybody with the right device to access them. The Internet has allowed companies to expand
their activities by helping them communicate with each other, contact clients, and sell their goods.
It connects not only networks but also people, with everything that implies (O'Reilly, 2000). The low
cost and ease of access to these networks allow a large population to use these technologies, and
this large number of users has created security issues (Cole, Krutz, Conley, Reisman, Ruebush, &
Gollman, 2008). The same ease of communication has also created new threats for companies and
users. Privacy problems appear in the headlines more and more often: stolen bank details (Montia,
2010), or the loss of a laptop containing sensitive information (BBC, 2009) (Kennedy, 2009). Security
incidents may also involve worms (Ahmed, 2009), viruses, or Distributed Denial of Service (DDoS)
attacks (Wortham, 2009). Two major DDoS attacks against the root name servers were reported in
2002 (Vixie, Sneeringer, & Schleifer, 2002) and 2007 (ICANN, 2007); had they succeeded, they would
have created havoc on the Internet. This type of attack can also serve political interests, as in the
large DDoS attack attributed to Russia against Estonia, whose computer networks were disabled
(Traynor, 2007). These examples demonstrate that attacks perpetrated using computers can have
serious effects, and they highlight the importance for companies and users of being protected
against this type of threat.
Different technologies and techniques exist and should be combined to protect a network. The main
drawback is that some of these technologies are complex to set up: a protection system that is not
correctly configured may leave the network with more flaws than before (Gollman, 2006). The main
protection mechanisms used to protect a network are firewalls, Intrusion Detection Systems (IDSs)
and Intrusion Prevention Systems (IPSs); for host protection there are anti-virus and anti-malware
tools, as well as firewalls, IDSs and IPSs oriented towards host protection. Many companies sell
devices or software for protecting systems, but the lack of testing of such devices, or even of a
testing methodology, means users cannot be properly informed. If testing results are only ever
produced by the vendors and never by independent researchers, the results may be biased. In
addition, where evaluations do exist they are not necessarily carried out under the same conditions,
so comparing their results is not meaningful.
This thesis aims to produce a methodology to evaluate an increasingly used device: the Intrusion
Prevention System (see Section 2.5). The methodology produced is oriented towards one type of
threat: Distributed Denial of Service (see Section 2.4).
1.3 Aim and Objectives
The thesis investigates the creation of a test-bed network with realistic network traffic. An IPS will
be set up in the test-bed and different DDoS attacks will be performed on the network. The final
step is the evaluation of how the IPS processes these threats. In order to achieve this aim, the
following objectives must be met:
Perform a critical evaluation of network security.
Investigation of IPS systems: their performance impact, and evaluation methodologies.
Investigation of evaluation tools for traffic playback to create a test-bed.
Design a range of experiments for the evaluation.
Implementation of evaluation test-bed and Device Under Test (DUT).
Evaluation of the results of the selected system with the test-bed implemented for each scenario.
1.4 Thesis structure
The thesis is organised in six chapters. Each of them is described below:
Chapter 1 - Introduction: Contains the project overview and the background in which this project
takes place. It also states the main objectives of the project.
Chapter 2 - Literature Review: This chapter provides a description of the current literature on
information security. It also presents computer security mechanisms such as firewalls, IDSs, and
IPSs, with an emphasis on the key strengths and weaknesses of these systems. Since this thesis
focuses on Distributed Denial of Service, this chapter also presents that threat. The main section
of this chapter is the investigation of the different evaluation methodologies that currently exist.
Chapter 3 - Design and Methodology: This chapter provides the design of the evaluation
methodology, which comprises the design of the test-bed and of the different experiments
involving the technique of rate filtering. Justification of why such an approach is viable is also
given.
Chapter 4 - Implementation: Implements the design described in Chapter 3 using the tools
described in Chapter 2.
Chapter 5 - Evaluation: Provides the results of the experiments described in Chapter 3 and
implemented in Chapter 4, along with the analysis of these results.
Chapter 6 - Conclusion: Concludes the project by providing a critical evaluation of it and of the
achievement of the objectives. A self-reflection section discusses the different problems
encountered during the project and how they were overcome. The chapter finishes with a
description of the possible directions and future work that could be pursued in this subject area.
CHAPTER 2
LITERATURE REVIEW
2.1 Introduction
This literature review contains the necessary background on information security (Section 2.2) and
on one of the most widely used devices in network security: the firewall (Section 2.3). To introduce
the basic concepts of Distributed Denial of Service, Section 2.4 presents the different steps in the
creation of such an attack, along with a description of the main types of attack (Mirkovic, Dietrich,
Dittrich, & Reiher, 2005). Section 2.5 presents the basic concepts of IDSs and IPSs and the
differences between these two security systems (Rowan, 2007).
The focus of this chapter is Section 2.6, which presents the different methodologies used to
evaluate IDSs and IPSs. These inform the design of a methodology to evaluate rate-based IPSs,
which is presented later in the thesis.
2.2 Information Security
Information security has the purpose of protecting the valuable resources of an organisation. These
resources can be hardware, software, or information. By using different mechanisms and
safeguards, security helps an organisation to protect its financial resources and its reputation
(NIST, 1995).
In computing, security can be divided into two groups. Computer security represents the measures
implemented to protect a single machine, its resources, and its stored data against threats. The
other group is network security, which involves the security of each machine in a network, of all
other devices that are part of the network, and of all the data that transits through it (Cole, Krutz,
Conley, Reisman, Ruebush, & Gollman, 2008). Computer security is built on three different aspects:
Confidentiality, Integrity, and Availability (CIA). Different interpretations of these aspects exist;
they are shaped by the needs of individuals and the laws applying to a particular organisation
(Bishop, 2004).
Risk evaluation is also important in security management. This evaluation starts with the analysis of
the assets, vulnerabilities, and threats that can be found in the system. A short list of common
threats will be given, and each threat will be briefly described.
2.2.1 Confidentiality, Integrity, Availability (CIA)
Security and secrecy are closely related. One of the aims of a computer system is to protect
information; confidentiality is the aspect that brings together security and privacy. The term privacy
can be used for the protection of personal data, while secrecy refers to the protection of an
organisation's data (Cole, Krutz, Conley, Reisman, Ruebush, & Gollman, 2008). Military
organisations were the first to need an implementation of confidentiality in their systems. The
creation of access control mechanisms supports confidentiality. One of these mechanisms is
cryptography – by definition, the science of secret writing – which transforms information to make
it unreadable. A cryptographic key permits the information to be retrieved, but this key becomes a
new weak point of a secure system (Gollman, 2006). Confidentiality is important not only to hide
data, but also to hide the existence of that data, which could itself reveal a lot of information to an
intruder (Bishop, 2004).
The trustworthiness of data and resources in a computer system is vital. This includes data integrity
– the content of the information – and origin integrity – the source of the data. Integrity has been
defined by the Department of Defence (1985) as:
"The state that exists when computerised data is the same as that in the source
documents and has not been exposed to accidental or malicious alteration or
destruction".
This defines data integrity as a synonym for external consistency. Two classes of mechanism exist
to guarantee data integrity: prevention mechanisms and detection mechanisms. Prevention
mechanisms seek to ensure the integrity of the data by granting access only to authorised users
and preventing modification of the data in unauthorised ways. Detection mechanisms are not
designed to prevent modification but to report unauthorised modification. These mechanisms may
analyse system events or the data itself to check whether the policies still hold (Bishop, 2004). Data
integrity is also important in communication between hosts. An attacker could intercept the data
and modify it; this type of attack is called a man-in-the-middle attack (Cole, Krutz, Conley, Reisman,
Ruebush, & Gollman, 2008).
In a computer system, the availability of resources is important. The idea of availability comes
from other areas such as fault-tolerant systems. Different mechanisms exist to provide highly
available services, such as the gossip architecture (Ladin, Liskov, Shrira, & Ghemawat, 1992) or the
Bayou system (Terry, Theimer, Petersen, Demers, Spreitzer, & Hauser, 1995). These mechanisms
exist because users need services to be highly available, meaning the proportion of time a service
responds should be close to one hundred per cent. Server failures and network disconnections are
among the factors relevant to ensuring high availability (Coulouris, Dollimore, & Kindberg, 2001).
An attempt by an attacker to compromise the availability of a system is called a Denial of Service
(DoS) attack. These attacks are difficult to detect, because detection relies on the system's ability
to discern unusual access patterns (Bishop, 2004). Rather than adding extra protection
mechanisms, designers of security protocols try to avoid imbalances in workload that could allow
an attacker to overload a victim and create a DoS (Cole, Krutz, Conley, Reisman, Ruebush, &
Gollman, 2008).
2.2.2 Assessing Assets, Vulnerabilities, and Threats to Calculate Risk
To define and analyse risk, it is important to start by identifying assets, vulnerabilities, and
threats. The next step is to rank them: assets according to their value, vulnerabilities according to
the impact if they are exploited, and threats according to their likelihood of occurrence.
Assets should be identified and valued. Assets include the hardware (servers, laptops, etc.), the
software (applications, databases, source code, etc.), the data and information essential to running
the company and, finally, its reputation. Hardware can be valued at its purchase price, but for data
and information this is harder. A leak of customer data could create indirect losses, for example
lost business opportunities: customers may desert the company because they do not feel their
information is secure (Gollman, 2006). Table 1 shows some assets and their values.
Asset                           Value
Payroll records                 Medium
Product design specification    High
Health insurance claims         High
Customer list                   High
Accounts receivable records     Medium
Sales records                   Low
Employee reviews                Low
"InventoryAndOrder" database    Medium
Table 1: Assets and values (Cole, Krutz, Conley, Reisman, Ruebush, & Gollman, 2008)
Vulnerabilities are the weaknesses of a system. They may have been created accidentally or
intentionally. As explained by SANS (2001), "Vulnerabilities are the gateways by which threats are
manifested", which means that a system can be compromised through a weakness found in it. A
vulnerability could be a weak password, an unpatched computer, a weak firewall configuration, etc.
It is important to analyse and evaluate the vulnerabilities that exist in a system. This can be done
using a vulnerability scanner. The scanner should have access to a vulnerability database that is
kept up to date; different organisations maintain such databases, such as the SysAdmin, Audit,
Network, Security (SANS) Institute or the Computer Emergency Response Team (CERT).
Vulnerabilities should be ranked according to the impact if they are exploited. For example, a
vulnerability that permits an attacker to take control of a system account is more important than
one that gives access to an unprivileged user account (Gollman, 2006). Table 2 shows different
vulnerabilities and their criticality levels.
Vulnerability                                   Criticality
Unpatched software                              Medium
Internet connection with no firewall            High
Antivirus protection missing or not updated     High
Weak password                                   Medium
Common password sharing                         High
Employees make decisions about who has access   High
Table 2: Vulnerabilities (Cole, Krutz, Conley, Reisman, Ruebush, & Gollman, 2008)
Threats are exploits of vulnerabilities; the same vulnerability can be exploited by several threats.
This is why system protection mechanisms should try to protect the system against vulnerabilities
rather than against individual threats (Cole, Krutz, Conley, Reisman, Ruebush, & Gollman, 2008).
It is possible to divide threats into four classes. The first is called disclosure; it contains all threats
involving unauthorised access to information. The second is deception, which covers all threats
involving the acceptance of false data. The third is disruption of the correct operation of a system.
Finally, the last class is usurpation, which contains all unauthorised seizures of control of some part
of the system (Bishop, 2004). Each threat should be studied and its characteristics established:
could the attacker come from inside or outside the system? Could a member or a former member
of the organisation be the adversary? Must the attacker access the system directly, or can the
attack be launched remotely? Once these questions are answered, the threat should be rated
according to its likelihood, which depends on the difficulty of the attack and on the motivation and
number of attackers (Gollman, 2006). A non-exhaustive list of threats will be given below, with an
explanation of each. Table 3 shows different threats and their likelihood.
Threat                                                                      Likelihood
A DoS attack against the "InventoryAndOrder" database                       Medium
A DoS attack against the payroll server                                     Low
Internal employee reading or modifying payroll data without authorization   High
Internal employee accessing employee review records                         Medium
Internal employee selling customer list                                     Medium
External person obtaining customer list or product design                   Medium
Table 3: Threats (Cole, Krutz, Conley, Reisman, Ruebush, & Gollman, 2008)
2.2.3 A brief overview of common threats
Errors and omissions are an important threat to data and system integrity. When a user or a system
administrator enters data into a system, they may make an error, which can generate a threat or
create a vulnerability. An error in source code may cause the software to crash or to damage the
system, and such errors can occur at any time in the life of the software (NIST, 1995).
Identity theft is one of the most lucrative ways for criminals to make money; a common vehicle for
it is phishing (Hinde, 2005). It can take the form of an e-mail or be found on a web server: a
message tricks the user into giving identification or bank account information. The messages used
by criminals are sometimes very well crafted, and it is hard to detect that they are phishing
messages; criminals may, for example, imitate a bank's writing style or even use its logo. This
phenomenon is growing because people are not aware of it, and it is hard to be well protected
against identity theft. According to the Federal Trade Commission Identity Theft Data
Clearinghouse, 215,000 people had their identity stolen in 2003, and 162,000 in 2002 (Dwan, 2004).
Sabotage is generally carried out by employees from inside the company: they have an excellent
knowledge of the system and know what will really disturb the company (NIST, 1995). Sabotage
may have different motivations: "As long as people feel cheated, bored, harassed, endangered, or
betrayed at work, sabotage will be used as a direct method of achieving job satisfaction the kind
that never has to get the bosses' approval." (Sprouse, 1992)
Industrial espionage, also known as corporate or business espionage, is conducted from a
commercial point of view. This is different from government espionage, where the aim is to gather
political or military information to take advantage of it. The perpetrator of industrial espionage
could be a private organisation or even a governmental one (Jones A., 2008). The aim for an
organisation is to gain a commercial advantage over a competitor. In general, the targeted
information is that related to technologies, or various listings belonging to the organisation (client
lists, contract lists, etc.) (NIST, 1995).
Malicious code means viruses, worms, Trojans, and other software found on a system that nobody
requested (NIST, 1995).
Denial of Service will be explained later in a dedicated part of this report. It is one of the most
common threats and costs target organisations a great deal of money. The aim of this type of
attack is to disrupt a service (a web site, a Domain Name System (DNS) server, etc.) so that nobody
can use it.
2.2.4 Risks
When the assets have been ranked and their value calculated, it is important to evaluate and
calculate the risk. Risk can be calculated in two different ways: quantitative and qualitative risk
analysis. In quantitative risk analysis, the Annual Loss Expectancy can be calculated with the
following formula: Annual Loss Expectancy = Threat x Asset, where the threat value is a probability
and the asset value is the monetary cost of the particular asset (Buchanan, 2008). A qualitative risk
analysis is based on rules that try to take every characteristic of the system into account without
necessarily using a mathematical basis. In this type of analysis, the likelihood of the threat and the
monetary cost of the targeted assets are put in relation (Gollman, 2006). Figure 1 shows the
monetary cost of an asset against the likelihood of the threat.
Figure 1: Cost against likelihood (Buchanan, 2008)
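As an illustrative sketch (the figures below are hypothetical, not taken from the cited sources), the quantitative formula can be applied as follows:

```python
# Quantitative risk analysis: Annual Loss Expectancy (ALE).
# ALE = Threat (annual probability of occurrence) x Asset (monetary value).
# All figures are hypothetical, for illustration only.

def annual_loss_expectancy(threat_probability, asset_value):
    """Return the expected yearly loss for one asset/threat pair."""
    return threat_probability * asset_value

# Example: a database asset valued at 50,000, with a threat estimated
# to materialise 0.2 times per year on average.
ale = annual_loss_expectancy(0.2, 50_000)
print(ale)  # 10000.0
```

Such figures can then be compared across assets to decide where protection spending is most justified.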
2.3 Firewalls
The previous section presented the threats and risks that resources may face when they are
accessible through a network. This shows the importance for a company of deploying devices that
protect its network. Some of these systems, Intrusion Detection Systems (IDSs) and Intrusion
Prevention Systems (IPSs), are presented in Section 2.5. This section describes one of the most
commonly used security tools, the firewall. The basic characteristics of a firewall are explained,
along with two characteristics common to IDSs, IPSs, and firewalls.
2.3.1 Firewall basics
Firewalls, also called secure gateways, are systems that filter access between two networks: in
general, between a private network – a company network – and a public network – the Internet. A
public network cannot be trusted, because malicious persons can use its open accessibility to
spread malicious code or to try to access private networks (NIST, 1995). It is also common and
important to protect and restrict access to special infrastructures, which means that installing
firewalls inside a private network adds another protection layer to the system (Alder, 2007).
This device permits the creation of a Demilitarized Zone (DMZ), as illustrated in Figure 2. It divides
the network into three parts: the untrusted zone – the Internet; the trusted zone – the users of the
network; and the DMZ. The DMZ contains the services, such as the e-mail server or time server,
that are accessible from both inside and outside the network. By dividing the network into three
parts, it is possible to assign different policies according to the risk. Moreover, this preserves the
security of the trusted network in case of a breach in the DMZ (Ingham & Forrest, 2005).
A firewall analyses every incoming and outgoing packet that passes through it and makes a
decision: it may accept the packet or discard it, and it may also log the packet or queue it for
analysis by plug-ins or other mechanisms. The firewall makes its decision following rules of the
form {Predicate} {Decision} (Gouda & Liu, 2007). {Predicate} is a Boolean expression that identifies
specific packets, together with the physical interface and direction on which they arrive. {Decision}
can be accept, discard, or any other supported action. A packet matches a rule if and only if it
satisfies the predicate.
When a firewall is first set up, it has a new and clean configuration, but over time different
administrators will change that configuration to adapt it to the needs of the system. This can
generate problems such as overlapping rules: two rules overlap if and only if some packet can
match both (Gouda & Liu, 2007). When both rules have the same decision, this only slows down
the traffic, because the firewall performs the test twice; the real risk arises when the rules have
different decisions. To prevent conflicts, the rule with the highest priority – in general, the first
matching rule – is used. To ensure that every packet matches some rule, the last rule is usually a
tautology (Gouda & Liu, 2007). This rule is also called the default rule.
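The {Predicate} {Decision} form, first-match priority, and the tautological default rule can be sketched in a few lines. The packet fields and rules below are illustrative assumptions, not taken from any particular firewall product:

```python
# Minimal sketch of first-match firewall rule evaluation.
# Each rule is a {Predicate} {Decision} pair: a Boolean test on the
# packet, and the action taken if the test succeeds. The final
# tautology is the default rule, matching every packet.

rules = [
    (lambda p: p["proto"] == "tcp" and p["dst_port"] == 22, "discard"),
    (lambda p: p["proto"] == "tcp" and p["dst_port"] == 80, "accept"),
    (lambda p: True, "discard"),  # default rule: always matches
]

def decide(packet):
    """Return the decision of the first rule whose predicate matches."""
    for predicate, decision in rules:
        if predicate(packet):      # first match wins (highest priority)
            return decision

print(decide({"proto": "tcp", "dst_port": 80}))  # accept
print(decide({"proto": "udp", "dst_port": 53}))  # discard (default rule)
```

Because evaluation stops at the first match, two overlapping rules with different decisions are resolved by their order in the list, exactly as described above.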
Figure 2: Typical DMZ Configuration
Different types of firewall exist. Some of the most common are packet filtering – also known as
screening; proxy hosts; application layer gateways; and screened-host gateways (NIST, 1995).
2.3.2 Packet filters
Packet filtering was first defined in a paper by Mogul (1989) on the screend utility. Application- and
transport-level proxies require passing the datagram up and down the whole protocol stack;
packet filtering, on the other hand, permits analysing datagrams much faster (Ingham & Forrest,
2005). Besides, packet filtering does not require the intervention of the user. Nowadays the most
widely used protocol is the Internet Protocol (IP). To make their decisions, packet-filtering firewalls
use at least one piece of information contained in the IP datagram: the source address, the
destination address, the protocol used, the different flags, or any other accessible field. Firewalls
using packet filtering are often implemented on edge routers, which use Access Control Lists
(ACLs).
Packet filtering permits fast analysis of traffic, but the technology has several drawbacks. One is
the great difficulty of creating correct and effective rules: too many poorly designed rules will
considerably slow down the device and disturb the traffic (Mogul, 1989). Another drawback is the
inability of packet filters to determine which user is generating which traffic. It is possible to know
which host is generating a particular flow from its address, but a host and a user are different
things; if the system's policies block some users' access to some resources, this technology is
powerless. Moreover, the risk of IP address spoofing is significant: a local machine can easily spoof
its address and use one belonging to another local machine. For an outside attack, even if the edge
router and firewall are well configured, a risk remains: if an attacker uses the IP address of a
legitimate remote host, the security device might not detect it (Ingham & Forrest, 2005).
Most of the time, the IP protocol encapsulates Transmission Control Protocol (TCP) packets (Braun,
1998). One application-layer protocol using TCP, the File Transfer Protocol (FTP) on port 21 (Postel
& Reynolds, 1985), causes problems for packet filtering. In active FTP, the client opens a
connection to the server and the server then opens a connection back to the client, with source
port 20 and a random destination port. A malicious program can likewise use TCP source port 20
and a random destination port, which forces packet-filter rule writers to assume that this type of
connection might be malicious. One solution to this risk was proposed by Bellovin (1994) in RFC
1579: the use of passive FTP. Its main drawback is that not all clients support this type of
connection. Another solution exists that resolves this problem, and problems generated by other
protocols: the stateful packet filter.
2.3.3 Stateful Packet Filter
With a plain packet filter, once a TCP connection is open the firewall does not take into account
whether a packet belongs to an established connection. An attacker can exploit this flaw and send
TCP packets that look like packets from an established connection. The solution is to keep track of
every connection; this implies that the firewall looks at both the network layer and the application
layer of the Open Systems Interconnection (OSI) model. The firewall should monitor the initial TCP
packet that requests the opening of a connection (SYN flag set) and allow packets from this
connection to pass through until the FIN packet is acknowledged. For non-connection-oriented
protocols such as the User Datagram Protocol (UDP) or the Internet Control Message Protocol
(ICMP), it is possible to keep track of a pseudo-state. This permits tracking Domain Name System
(DNS) or Network Time Protocol (NTP) requests for UDP, and accepting echo replies for ICMP
(Ingham & Forrest, 2005).
2.4 Distributed Denial of Service
This section presents one of the most difficult threats to handle: Distributed Denial of Service
(DDoS). After defining the attack and describing the motives behind it, the different steps taken by
an attacker to perform a successful attack are described. These involve recruiting machines and
controlling them in order to issue commands. The life of a network of nodes controlled by an
attacker does not consist only of attacks: the network can also be used to expand itself by
recruiting other machines.
Distributed Denial of Service can target different levels of the system, and for each of them
different measures should be taken to protect the network against DDoS. These defence
methodologies are also explained; different steps should be taken before, during, and after the
attack to ensure a maximum level of protection.
2.4.1 Definition and purpose
A Denial of Service aims to disrupt legitimate activity; this could be access to web pages, the use
of an e-mail server, or an online radio station. A DoS is achieved when the target crashes, reboots,
or slows down. The success of this type of attack corresponds to the length of time the disruption
is sustained (Tanase, 2003).
This type of attack consists of sending messages to the target. The target is obliged to process the
received messages and, because there are too many of them, it cannot cope and can no longer
perform its legitimate services. To perform a DoS attack, the attacker needs a powerful computer
with a lot of CPU time, memory, and bandwidth: the attacker's capacity to send messages must
exceed the victim's capacity to process them (Mirkovic, Dietrich, Dittrich, & Reiher, 2005). Such
powerful machines are uncommon and hard to access, and too expensive for an attacker to build.
For this reason another type of DoS has been created: the Distributed Denial of Service (DDoS).
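The capacity condition can be illustrated with a back-of-the-envelope comparison; the rates below are assumptions for illustration only:

```python
# A DoS succeeds only while the aggregate attack rate exceeds the
# victim's processing capacity. All figures are illustrative.

victim_capacity = 10_000        # requests per second the victim can serve
single_attacker_rate = 2_000    # requests per second one machine can send

# A single ordinary machine cannot overwhelm the victim...
print(single_attacker_rate > victim_capacity)           # False

# ...which is why attackers distribute the load: a botnet of
# 50 such agents easily exceeds the victim's capacity.
agents = 50
print(agents * single_attacker_rate > victim_capacity)  # True
```

This is the arithmetic that motivates the distributed form of the attack described below: recruiting many modest machines replaces the need for one exceptionally powerful one.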
DDoS attacks are, as the name implies, DoS attacks performed in a distributed fashion. In a simple
DoS only one machine carries out the attack; in a DDoS it may be a couple or thousands of
machines. The first appearance of DDoS was in 1998 (Lin & Tseng, 2004). This technique gives
attackers enough power to disturb the legitimate traffic of powerful networks (Tanase, 2002). To
perform this kind of attack, an attacker starts by recruiting many machines and taking control of
them; this network of machines is called a botnet. Techniques to recruit and control attacking
machines are explained later on.
Attackers' motives for performing Denial of Service attacks can be of different natures. An attacker
might want to show his power by attacking a large, popular web site, which will earn him or her
recognition in the underground community. Another possible motive is political: for example, a
political party could ask an attacker to launch a DoS attack on the web site of a rival party. DoS can
also be used in information warfare (Mirkovic, Dietrich, Dittrich, & Reiher, 2005). The last and most
common motive is commercial: a company can ask an attacker to attack a competitor. While the
competitor's commercial web site is unavailable, it loses a great deal of money and, worse, the
trust of its users, who may go shopping on another web site (perhaps the backer's). Attackers can
also threaten companies, promising to attack them if they refuse to pay (Mirkovic & Reiher, 2004).
2.4.2 How attackers recruit and control attacking machines
When an attacker wants to create a botnet to perform DDoS attacks, he or she needs to take
control of different machines on the Internet. The easiest targets are machines without any
protection; these could belong to universities, companies, or private individuals. The attacker
breaks into them and takes full control; these attacking machines may be called slaves, zombies,
daemons, or agents – the last term is used in this report. After the attacker has finished infecting
the machines, precautions are taken to hide all traces of the attack. Once this is done, it is
important for the attacker to be able to control the botnet easily and without being detected
(Mirkovic, Dietrich, Dittrich, & Reiher, 2005).
2.4.2.1 Recruiting machines
Attackers need the biggest possible botnet to be able to launch large and complex attacks, so they
need to infect as many agents as possible across the Internet and, if possible, in different
countries, for security reasons explained in the next part. If attackers had to infect each machine
by hand, it would be extremely time-consuming and a huge botnet would be nearly impossible to
build. For this purpose, attackers have developed ways to infect machines semi-automatically or
completely automatically (Mirkovic & Reiher, 2004); these different mechanisms are explained
below.
The life of an agent in a botnet begins when its machine has been exploited. A machine can be
exploited in different ways. The first is by malicious code: this kind of exploit can take the form of a
phishing e-mail, a web site that contains a Trojan, an e-mail attachment that executes malicious
code when the target opens it, or even spam over instant messaging (SPIM).
Many types of agent have a scanning capability that identifies open ports across a range of
machines. When the scanning is finished, the agent takes the list of machines with open ports and
launches vulnerability-specific scanning to detect machines with unpatched vulnerabilities. If the
agent finds a vulnerable machine, it can launch an attack to install another agent on it.
Vulnerabilities can be discovered by reverse engineering Microsoft's "Patch Tuesday" updates: once
attackers have understood a newly patched vulnerability, they create an exploit for it. Because
millions of users do not patch their systems regularly, they are easy targets.
Another possibility is to use traces left by malicious code; this could be a backdoor left by a
Trojan, a worm, or a remote access Trojan. Many attackers favour the last type because remote
access Trojans are easy to set up. Unskilled attackers leave the default configuration of the remote
access Trojan in place, so anybody who knows the default password can take control of the computer.
The last possibility for an agent to take control of a machine is to find the password of a user or
even of the administrator of the machine. Two different techniques are used by agents. The first is
called password guessing: the agent has a list of the most common passwords and tries all of them
until one matches. The other technique is called brute force: the agent tries all possible
passwords. With this technique it is nearly impossible not to find the password, provided nobody
changes it before the agent finds it. The problem with this technique is that it is time-consuming.
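The difference between the two techniques can be sketched as follows. This is an illustrative toy, not a real attack tool: the four-letter secret, the wordlist, and the `check` function (which stands in for one remote login attempt) are all invented for the example.

```python
# Sketch of the two password-recovery techniques described above.
from itertools import product
from string import ascii_lowercase

SECRET = "dogs"  # the password the agent is trying to recover (toy example)

def check(candidate: str) -> bool:
    return candidate == SECRET  # stand-in for one remote login attempt

def password_guessing(wordlist):
    """Dictionary attack: try only a list of common passwords."""
    return next((w for w in wordlist if check(w)), None)

def brute_force(length=4):
    """Exhaustive attack: try every lowercase string of the given length."""
    for combo in product(ascii_lowercase, repeat=length):
        candidate = "".join(combo)
        if check(candidate):
            return candidate
    return None

print(password_guessing(["password", "123456", "dogs", "qwerty"]))  # quick if the list is good
print(brute_force())  # guaranteed, but may need up to 26**4 = 456,976 attempts
```

Dictionary guessing succeeds only when the password is common; brute force always succeeds eventually, which is exactly the time-consumption trade-off noted above.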
2.4.2.2 Controlling the botnet
The most important thing for attackers when they control a botnet is to stay hidden. To do so,
they send their commands to specific machines called masters or handlers, which distribute them to
the attacker's agents. Figure 3 shows how attackers send their commands to attack their victim. By
doing this, it is difficult for the victim to find out who sent the command: even if they can
inspect an agent, they will only learn the handler's identity and the attacker will stay unknown
(Mirkovic, Dietrich, Dittrich, & Reiher, 2005).
Another possibility for attackers is to use machines called "stepping stones" to access a handler.
This technique involves connecting in sequence through different machines, so even if the handler
is inspected, only the previous stepping stone can be found. If the attacker uses many different
stepping stones stationed in different countries, it is nearly impossible to inspect them all and to
discover the attacker. Figure 4 shows how the stepping-stone technique works
(Mirkovic, Dietrich, Dittrich, & Reiher, 2005).
Figure 3: Handler/agent architecture (Mirkovic, Dietrich, Dittrich, & Reiher, 2005).
Figure 4: Illustration of a site hosting stepping stone (Mirkovic, Dietrich, Dittrich, & Reiher, 2005)
To control agents, attackers use Command and Control (C&C) servers; currently, the majority of
botnets use Internet Relay Chat (IRC). Other possibilities exist, such as Peer-to-Peer (P2P) C&C,
Instant Messaging (IM) C&C, or Web-based C&C servers. When an agent is connected to the C&C, it can
ask for updates such as IP addresses of other C&Cs, software updates, or exploit software. The
agent can also ask for orders; if it is newly installed, it may ask for orders to protect itself,
for example by asking the C&C for the location of the latest anti-virus and preventing it from
detecting the agent by stopping its service. This action could look suspicious to a user who checks
his anti-virus regularly; to prevent that, some agents have the capacity to neuter the anti-virus:
it looks like it is working, but in reality the anti-virus is unable to detect the agent
(Schiller, et al., 2007). Figure 5 shows the life cycle of a botnet.
Figure 5: Botnet Life Cycle (Schiller, et al., 2007)
2.4.3 Semantics of Denial of Service
Denial of Service can be caused in different ways: exploiting a vulnerability of a system;
attacking a specific protocol; attacking the middleware of the system; attacking a particular
resource or application; and finally, pure flooding. These different types will now be explained.
[Figure 5 stages: the computer is exploited and becomes a bot; the new bot rallies to let the bot
herder know it has joined the team; it retrieves the anti-A/V module; secures the new client;
listens to the C&C for commands; retrieves the payload module; executes the command; reports the
result to the C&C; and, on command, erases all evidence and abandons the client.]
2.4.3.1 Exploiting Vulnerability
A DoS attack that aims at a vulnerability consists of sending a few well-crafted packets to the
target that take advantage of the flaw. An example of a DoS attack against a vulnerability is the
"Bonk" attack that appeared in 1998 (Stutz, 1998). This attack used a vulnerability present in
Windows 95, Windows NT, and some Linux kernels: these systems did not handle fragmented packets
properly. The attacker sent two crafted UDP packets to the target; when the system tried to
reassemble the datagram, it entered a loop and after a moment crashed.
These kinds of DoS attacks are really powerful and hard to detect because of the small number of
packets needed, but once a patch is released and the machine is protected, the DoS can no longer be
used against that machine (Mirkovic, Dietrich, Dittrich, & Reiher, 2005).
2.4.3.2 Attacking a Protocol
These Denial of Service attacks exploit a specific feature or a bug of one or more protocols
installed on the target machine, with the goal of consuming an enormous amount of the target's
resources. The attack prevents the victim from doing its job properly by making it deal with a
large amount of illegitimate traffic. The victim will not be able to handle legitimate traffic, so
users will not be able to access the machine or use its resources. This kind of DoS is more
efficient if it is distributed, because the attacker needs to consume an important part or all of
the resources of the machine; moreover, a distributed attack is harder for the target to stop
(Mirkovic, Dietrich, Dittrich, & Reiher, 2005).
A famous DDoS attack using a feature of a protocol is the SYN flood attack. The attack uses the
"three-way handshake" of the TCP protocol. When a machine wants to open a TCP connection with
another machine, the first sends a SYN packet to the other. The second machine then sends back a
SYN-ACK packet; at this moment the connection is called "half-open". The first machine should
normally send an ACK packet that opens the connection and permits the two machines to communicate.
In this attack the last ACK is never sent, so the second machine keeps the half-open connection in
its Transmission Control Block (TCB) until the connection's timeout expires. The attacker keeps
sending requests to open connections, filling the TCB stack until only requests from the attacker
are stored, so legitimate users cannot open a connection anymore (CERT Coordination Center, 1996).
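The exhaustion mechanism can be sketched as a toy simulation. The backlog size, timeout, and addresses below are invented parameters for illustration, not real operating-system defaults: the server keeps at most a fixed number of half-open entries, each expiring after a timeout, and a flood refills the table faster than entries drain.

```python
# Toy simulation of TCB exhaustion by a SYN flood, as described above.
BACKLOG, TIMEOUT = 128, 60.0  # illustrative sizes, not real OS defaults

class TCBTable:
    def __init__(self):
        self.half_open = {}  # client address -> time its SYN arrived

    def on_syn(self, client: str, now: float) -> bool:
        """Handle a SYN; return True if a half-open entry was stored."""
        # Evict entries whose SYN-ACK timed out without receiving the final ACK.
        self.half_open = {c: t for c, t in self.half_open.items()
                          if now - t < TIMEOUT}
        if len(self.half_open) >= BACKLOG:
            return False  # table full: connection refused
        self.half_open[client] = now
        return True

server = TCBTable()
# The attacker sends SYNs from spoofed addresses, never completing the handshake.
for i in range(BACKLOG):
    server.on_syn(f"10.0.0.{i}", now=0.0)
# A legitimate client arrives before any entry has timed out: refused.
print(server.on_syn("192.0.2.7", now=5.0))
# Well after the timeout the table has drained and service resumes.
print(server.on_syn("192.0.2.7", now=120.0))
```

As long as the attacker's sending rate exceeds the drain rate of the timeout, the table stays full and legitimate clients are locked out, which is the essence of the attack.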
2.4.3.3 Attacking Middleware
When an attacker executes a DoS attack against middleware, it is by exploiting a flaw in the
middleware's algorithms. This type of low-traffic attack can affect hash algorithms: typically a
hash table executes each operation in constant time on average, but if the attacker sends
well-crafted packets that force worst-case conditions, such as all values hashing into the same
bucket, each operation degrades to linear time and the attack can cause the machine to slow down,
performing in minutes what normally takes seconds. When the machine is under attack, this might not
be visible to an administrator of the system, because the machine is only slowed down. When a new
attack of this kind is released, the only way to protect the machine is to disable the middleware
or wait for an update. If the middleware is vital to the system, it might not be possible to
disable it, so the machine will remain unprotected until a patch is released (Crosby & Wallach, 2003).
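The worst-case behaviour can be demonstrated with a small instrumented hash table. This is an illustrative sketch of the effect described by Crosby and Wallach, not their exploit: cost is measured in key comparisons (deterministic) rather than wall-clock time, and the colliding keys are simply chosen as multiples of the bucket count.

```python
# A chained hash table instrumented to count key comparisons, showing how
# inputs that all land in one bucket force quadratic total work.

class ChainedHashTable:
    def __init__(self, buckets=64):
        self.buckets = [[] for _ in range(buckets)]
        self.comparisons = 0  # instrumentation, not part of a real table

    def insert(self, key, value):
        chain = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(chain):  # walk the chain looking for the key
            self.comparisons += 1
            if k == key:
                chain[i] = (key, value)
                return
        chain.append((key, value))

benign = ChainedHashTable()
for i in range(1000):
    benign.insert(i, None)          # keys spread evenly across the 64 buckets

hostile = ChainedHashTable()
for i in range(1000):
    hostile.insert(i * 64, None)    # every key hashes into bucket 0

print(benign.comparisons, hostile.comparisons)
```

With 1,000 inserts, the colliding workload costs 1000·999/2 = 499,500 comparisons against roughly 7,000 for the evenly spread one, a factor of nearly seventy for the same request volume, which is why such attacks stay below any traffic-rate threshold.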
2.4.3.4 Attacking an Application
Attackers who target a specific application for a Denial of Service will in general prefer a
Distributed Denial of Service, because it takes a lot of bandwidth and power to have an effect
against an important system. If an attacker possesses a botnet of one thousand computers and takes
as a target a Web site that can handle ten thousand simultaneous connections, each computer of the
botnet only needs to open ten connections. The target will be full of illegitimate traffic and the
legitimate traffic will be refused or dropped (Mirkovic, Dietrich, Dittrich, & Reiher, 2005).
Because the number of connections opened by each agent is small, few resources are needed, so it is
possible that the user of an infected machine never notices that his or her machine belongs to a
botnet.
2.4.3.5 Attacking a Resource
The resources that can be targeted are, for example, the CPU cycles of a machine: the attack forces
the system to do more work than needed. It can also be a direct attack against a router's switching
capacity; this type of attack can be disastrous if the network is not well designed (Mirkovic,
Dietrich, Dittrich, & Reiher, 2005). In January 2001, an attack was performed against the router
that directed traffic to Microsoft's Web sites. When news of this attack spread, it was discovered
that all the Domain Name System (DNS) servers were on the same segment of the network. While the
router was under the DoS attack, none of Microsoft's Web sites were accessible (ZDNet UK, 2001).
2.4.3.6 Pure Flooding
Flooding attacks, also known as bandwidth consumption attacks, simply send the maximum possible
number of packets to the victim with the aim of using all of the target's bandwidth. It is hard for
the target to handle this kind of attack alone, so in many cases the help of the Internet Service
Provider (ISP) is needed. Packets should be filtered directly by the ISP: if the attack packets
have an easily discovered signature, such as large UDP packets to unused ports or IP packets with a
protocol value of 255, the filtering can be easy and quick to set up. If the attack packets are
well crafted and look like legitimate traffic, it can be hard to filter them, and in some cases the
only possibility is to wait until the attacker gets tired of the attack and stops it (Mirkovic,
Dietrich, Dittrich, & Reiher, 2005). Appendix A contains a non-exhaustive list of different DDoS
attacks using the TCP, UDP, and ICMP protocols.
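The easy-signature case above can be sketched as a simple ISP-side filter. The packet representation, size cutoff, and set of ports in use are invented assumptions for the example; the two signatures themselves (large UDP datagrams to unused ports, and the reserved IP protocol value 255) are the ones named in the text.

```python
# Hypothetical sketch of the ISP-side flood filter described above.
PORTS_IN_USE = {53, 80, 443}   # assumed services behind the filtered link
UDP, RESERVED_PROTO = 17, 255  # IANA protocol numbers

def should_drop(packet: dict) -> bool:
    """Return True if the packet matches one of the easy flood signatures."""
    if packet["proto"] == RESERVED_PROTO:
        return True  # protocol 255 is reserved and never seen in normal traffic
    if (packet["proto"] == UDP
            and packet["length"] > 1400
            and packet["dst_port"] not in PORTS_IN_USE):
        return True  # large UDP datagram aimed at an unused port
    return False

flood = {"proto": UDP, "length": 1480, "dst_port": 31337}
dns   = {"proto": UDP, "length": 120,  "dst_port": 53}
print(should_drop(flood), should_drop(dns))
```

Filters like this are cheap precisely because they inspect only headers; the well-crafted floods mentioned next defeat them by mimicking legitimate header values.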
2.4.4 Defence Strategy against Distributed Denial of Service
Because of their nature, it will never be possible to block Distributed Denial of Service attacks
completely; after all, it is possible to ask thousands of people to access a particular Web site at
a particular time. This action will create a Denial of Service, and it will be impossible to
distinguish the legitimate from the illegitimate traffic (Nazario, 2008). Even so, it is possible
to protect a network by minimizing the risk through proper configuration. The major problem with
this threat is the lack of awareness of DDoS attacks: too many networks are insufficiently
protected and prepared to handle a DDoS attack, or even to avoid becoming part of a botnet. This
means an attacker can easily perform effective attacks or recruit new agents.
Protecting your network against attacks, stopping attackers from using it as a stepping stone to
hide themselves, and keeping it from becoming part of a botnet can all be done with the same
strategy. The first thing to do is to prepare the network and understand it. The administrators of
the network should possess tools to record and analyse traffic from hosts or different parts of the
network. This permits analysing the traffic with forensics tools to detect strange behaviour. It is
also important that the administrator installs the latest patches on the different machines of the
system: too many machines are infected because they are unpatched. An analysis of the network,
along with how its parts communicate with each other, will let the administrator know what good
traffic is and what bad traffic is. This knowledge permits the creation of automatic procedures to
deal with the threat without disturbing legitimate traffic.
Because not all DDoS attacks will necessarily cause the network to fail, it is important to be able
to detect even small DDoS attacks. Such behaviour could be an attacker testing the capacity of the
network to handle an attack, or the attack may simply have failed. In either case, the attacker may
try again, changing the attack process or using more agents. The detection of DDoS attacks can be
done, for example, by an Intrusion Detection System (IDS); this tool will be covered later in this
report. When an attack is detected, the victim should try to gather as much information as
possible; this is called characterization. The amount of traffic needed to characterize an attack
is in general small, and different tools exist that help the investigator determine the type of the
attack. All this information will help the victim handle the attack, and also help other
administrators protect their networks against the same attack. Another interesting point of
characterizing the attack is determining its provenance. It might not be possible to find exactly
where the attack came from, but it may be possible to detect part of the botnet and maybe even
agents. When they are detected, it is possible to ask the owners of the infected machines to clean
them. The next step in dealing with a DDoS attack is the reaction of the network. This could be
blocking the traffic with a firewall or an Intrusion Prevention System (IPS); this tool will be
covered later in the report. If the network is being used for an attack, the reaction could be
detecting the infected hosts, gathering evidence, and performing forensic analysis. To permit a
quick response, it is vital to have established procedures and standards for investigation,
documentation, and reporting.
Finally, after the attack, it is important to review how the procedures for dealing with the attack
worked: how the network handled the attack, whether the tools used helped the response, and so on.
This will permit detecting and understanding the weaknesses of the network and help build better
security procedures for the future (Mirkovic, Dietrich, Dittrich, & Reiher, 2005).
2.5 Intrusion Detection and Prevention System
In this report the term IDPS (Intrusion Detection and Prevention System) will be used to refer to
both Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS). These devices are
one of the components used by administrators to protect the assets of a network. Nowadays the most
used devices are IDSs, but they have poor reactivity, and the need for a trained administrator to
operate them must be taken into consideration before setting one up. This section will present the
different concepts and methodologies used to create and set up an IDPS. These methodologies can be
divided into three categories. The first is the content-based methodology, which corresponds to the
capacity of the IDPS to analyse the content of a packet. The second is the protocol-based
methodology: the device should have the capacity to analyse packets with regard to the protocol
used and to find irregular packets. The last is the rate-based methodology, which concerns only
Intrusion Prevention Systems: the capacity of a device to handle and control the amount of traffic
according to a specific policy. Finally, this section will conclude with the description of two
different ways to set up an IDPS: host-based and network-based.
2.5.1 Definition of IDSs and IPSs
2.5.1.1 IDS
Intrusion detection means the capability to monitor events and analyse them for signs of intrusion.
An Intrusion Detection System (IDS) is software with the capacity to interpret network traffic
and/or host activity. IDSs are used to protect networks and hosts from attacks or violations not
detected by other security systems. They can also detect reconnaissance attempts on a network by
detecting port scans or other ways of scanning the network (Mell & Scarfone, 2007).
To detect attacks, an IDS uses different methods, such as the signature-based method or the
behaviour-based method; these two methods will be described further in this document. When a packet
is detected as malicious, the IDS responds by following the rules given by the administrator for
handling this packet. This could be logging details of the packet or the entire packet; it could
even trigger a system to page an administrator if the threat is classified as dangerous (Alder,
2007). Figure 6 shows the different components that permit an IDS to detect malicious packets. The
data pass through the sensor "E-box" (Event-box) and are stored in the "D-box" (Database-box) to
permit the "A-box" (Analysis-box) to process the packets and detect malicious content. Finally, the
"R-box" (Response-box) executes the actions defined by rules to handle the packet (García-Teodoro,
2009). The IDS methodology is content-based, and it can be rule-based or behaviour-based.
Figure 6: IDS Architecture (García-Teodoro, 2009)
2.5.1.2 IPS
Detection became prevention around 1998, when different companies began to create products with the
capacity to block attacks, because firewalls were not good enough to protect networks from Denial
of Service (DoS) (Rowan, 2007). An Intrusion Prevention System (IPS) is software that has the same
capabilities as an Intrusion Detection System, with the additional capability to identify and block
malicious network activity (Mell & Scarfone, 2007). This type of device may have firewall
abilities, but where a firewall permits only specific packets to pass, an IPS lets all packets pass
except those it has a reason to block (Fuchsberger, 2005). An IPS is set up in-line, which means it
sits directly on the path where packets pass, like a router or a firewall.
Inline IPSs are designed to detect known attacks when placed at the perimeter of the network, but
this is very costly because the IPS must not become a bottleneck for the network (Lancope, 2006). A
capable device should have a throughput bigger than the peak load, to permit the network to grow.
An important capacity of an IPS is to be stateful; the definition of stateful was explained earlier
in Section 2.4. This is the only way for the IPS to detect certain types of attack, such as
low-level DoS or brute-force attacks. Stateful inspection is also an excellent way to protect the
network from hybrid DoS attacks. During a rate-based attack, it is impossible to find a signature
to be matched and there might not be any protocol anomaly; in this case, the only way to block the
attack is to keep track of each IP address of the attack (Raja, 2005). A stateful IPS should take
into consideration the order in which TCP packets arrive. The IPS learns information from the
connection at layers 3-7 of the Open Systems Interconnection (OSI) model (International
Telecommunication Union, 1994); by combining the information of the different layers (transport,
network, and session) the IPS can understand the protocol in use and take better decisions (Raja,
2005).
The direction of the traffic is also important: in a client-server HTTP connection, the client is
the only one that should send a "GET" request, so if a request of this type comes from the server,
it is suspicious (NSS Labs, 2005). Stateful systems are really important for the security of the
network, but because of the need to keep track of each connection in as much detail as possible and
to process all this data, the system throughput will be affected in some way (Ierace, Urrutia, &
Bassett, 2006). IPSs are powerless against certain types of attack, such as the Slammer worm,
because only one packet needs to pass through the IPS for the attack to be successful; if the IPS
is stressed and lets some packets pass without being processed, this type of attack can be
dangerous. IPSs are used to protect systems where a decision should be taken in real time. They are
divided into two categories: rate-based IPSs and content-based IPSs (also called signature-based
and anomaly-based) (Rash, 2005).
2.5.2 IDPS Methodologies
This section will describe the different methodologies used by IDPSs to detect threats. The
content-based methodology contains two subcategories: rule-based and behaviour-based. The second is
the protocol-based methodology; both are common to all IDPSs. The last methodology, rate-based,
applies only to IPSs.
2.5.2.1 Content-based Methodology
2.5.2.1.1 Rule-based
Content-based IDPSs work on deep inspection of packets. They have different ways to detect
malicious packets; the first is with rules. The system compares each packet with a database of
rules, looking for the signature of malicious code (Ierace, Urrutia, & Bassett, 2005); this
technique follows the same idea as the technique used by anti-virus software. A rule can look for a
signature anywhere in the packet, such as in the IP address, the TTL field, or the checksum.
This methodology permits detecting all well-known and well-defined attacks, but if an attacker
manages to change the code even slightly, the signature will not match anymore and the packet will
not be detected. When a new threat is discovered, the time needed to create a rule is longer than
the time someone needs to exploit the new threat (Farshchi, 2003). Also, if the database is too
big, it will be hard for the device to test each packet quickly enough not to slow down the traffic
and create a bottleneck for the network.
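The rule-matching idea can be sketched as follows. The rules, packet layout, and byte signature are invented for illustration; real rule engines such as Snort use a far richer rule language and optimised multi-pattern matching.

```python
# Illustrative sketch of rule-based signature matching: each rule pairs a
# signature with the packet field it should be searched in.

RULES = [
    {"name": "evil-payload", "field": "payload", "signature": b"\xde\xad\xbe\xef"},
    {"name": "bad-ttl",      "field": "ttl",     "signature": 1},
]

def match_rules(packet: dict) -> list:
    """Return the names of every rule whose signature appears in the packet."""
    hits = []
    for rule in RULES:
        value = packet.get(rule["field"])
        if isinstance(rule["signature"], bytes):
            if value is not None and rule["signature"] in value:
                hits.append(rule["name"])   # byte pattern found in the field
        elif value == rule["signature"]:
            hits.append(rule["name"])       # exact match on a header value
    return hits

pkt = {"ttl": 64, "payload": b"GET / HTTP/1.0\r\n\xde\xad\xbe\xef"}
print(match_rules(pkt))
```

Note how brittle the scheme is: flip one byte of the payload signature and `match_rules` returns nothing, which is exactly the evasion weakness described above.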
2.5.2.1.2 Behaviour-based
This methodology concerns the capacity of the IDS or IPS to detect anomalous packets by studying
the network and extracting a statistical analysis of it. The system learns what "normal" traffic
is, and everything that is not marked "normal" is anomalous. All behaviour-based methods involve
training the system throughout the use of the network (García-Teodoro, 2009). During the processing
of packets, the system uses complex and powerful statistical algorithms and marks each packet with
an anomaly score; when the score is higher than a certain threshold, the system takes action
(Farshchi, 2003).
This methodology helps with "zero-day" protection, because if a new threat appears, anomalous
network traffic will be generated and the administrator will be able to see it. Different problems
exist with this technique: the administrator should be highly skilled in packet inspection to
determine whether a packet is malicious or not. If the system is not well trained, many
false-positive alerts can be generated by an IDS or, worse, legitimate packets dropped if the
system is an IPS (Pasquinucci, 2007). Also, during training it is possible that the system is
trained by an attacker; in this case, the system will never detect that type of attack, because it
will fall into the "normal" traffic group.
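A minimal sketch of the score-and-threshold idea, assuming the simplest possible statistic: learn the mean and standard deviation of a per-second packet rate during training, then score a new observation by its distance from that baseline (a z-score). The training rates and the three-sigma threshold are invented for illustration; real systems use far richer features.

```python
# Anomaly scoring sketch: distance from a learned traffic baseline.
from statistics import mean, stdev

training_rates = [95, 102, 98, 110, 105, 99, 101, 97, 103, 100]  # packets/s
mu, sigma = mean(training_rates), stdev(training_rates)

def anomaly_score(rate: float) -> float:
    """Standard deviations away from the learned 'normal' rate."""
    return abs(rate - mu) / sigma

THRESHOLD = 3.0  # act on anything more than three standard deviations out

print(anomaly_score(104) > THRESHOLD)   # an ordinary rate: no action
print(anomaly_score(5000) > THRESHOLD)  # a flood-level rate: take action
```

The training-poisoning problem described above is visible here too: if an attacker can slip high rates into `training_rates`, `sigma` inflates and the same flood no longer crosses the threshold.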
2.5.2.2 Protocol anomaly-based
A protocol anomaly-based device compares traffic against well-defined, predetermined benign
protocol definitions to detect attacks. This method is based on protocol standards, which are in
general well defined, with documentation that is easy to access (e.g., Internet Engineering Task
Force [IETF] Requests for Comments [RFCs]) (Mell & Scarfone, 2007). When processing a packet, the
system compares the protocol used with its definition. The IDS or IPS should be able to understand
stateful protocols; for example, during the initialisation of an FTP connection, only some commands
can be used. Different rules apply to opening a connection, such as the maximum number of
characters in the password: if it is too long, it might be suspicious. After the connection, the
device must keep track of it to analyse correctly the different commands executed by the client; if
a command contains non-standard code, it might be suspicious (Das, 2001).
Rule-based systems must be updated frequently because new threats are created every day, so the
maintenance of this type of system is more expensive in time and money. Network protocols evolve or
are created far less often than threats, and many new threats violate protocol standards, so
nothing more needs to be done to protect the network from these new threats (Lemonnier, 2001).
Different vendors use network protocols for their own tools, and sometimes they do not follow the
RFC for the protocol, or they create their own. The solution to this problem is to study the
different protocols of the network. In the case of protocols created by a company that uses them
only locally, if the same protocol is seen heading outside the network, it is suspicious.
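The stateful FTP example above can be sketched as a small check. The set of pre-login commands loosely follows RFC 959, but the 128-character password limit and the two-way "ok"/"suspicious" verdict are invented policy values for illustration.

```python
# Hypothetical protocol-anomaly check for a single FTP command line.
PRE_LOGIN_COMMANDS = {"USER", "PASS", "QUIT", "FEAT", "AUTH"}
MAX_PASSWORD_LEN = 128  # invented policy threshold

def check_ftp_command(line: str, authenticated: bool) -> str:
    """Classify one FTP command line as 'ok' or 'suspicious'."""
    verb, _, argument = line.strip().partition(" ")
    verb = verb.upper()
    if not authenticated and verb not in PRE_LOGIN_COMMANDS:
        return "suspicious"  # command not allowed before login completes
    if verb == "PASS" and len(argument) > MAX_PASSWORD_LEN:
        return "suspicious"  # over-long password: possible buffer overflow
    return "ok"

print(check_ftp_command("USER anonymous", authenticated=False))
print(check_ftp_command("RETR secrets.txt", authenticated=False))
print(check_ftp_command("PASS " + "A" * 500, authenticated=False))
```

The `authenticated` flag is the state the device must keep per connection; without it, the first two commands would be indistinguishable.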
2.5.2.3 Rate-based Methodology
This methodology is principally used by IPSs, because the only interest of a rate-based system is
to take action in real time, which is something an IDS cannot do. A rate-based Intrusion Prevention
System, also called an Attack Mitigator (NSS Labs, 2005), is an IPS that blocks traffic by taking
account of the network load. This is a good design to protect a network from DoS attacks, but if
the algorithm used to detect the attack packets is not capable, the device may drop good packets,
which means it participates in the DoS attack. A good algorithm should be able to perform an
in-depth traffic inspection; for example, if ninety per cent of the packets received have the same
length and TCP checksum, the probability that all these packets come from the same attack is high,
because legitimate packets normally differ from one another (Rowan, 2007). Figure 7 shows how a
rate-based IPS should respond when the traffic rate is too high.
Figure 7: Rate-based IPS Response
The green line symbolises the normal maximum traffic that the network encounters in normal use; if
the network traffic becomes higher, the system should not respond immediately, because the rise may
be normal. If the traffic reaches the maximum accepted traffic rate (red line), the IPS should
enter into action and manage the traffic by closing unwanted connections, dropping illegitimate
packets, and taking every action needed to force the network traffic down until it reaches the
normal maximum. If the IPS simply stopped the traffic from crossing the maximum accepted rate, the
network would be constantly overused and legitimate traffic would be slowed down. Fuchsberger
(2005) explains that the main problem in deploying this device is defining what normal traffic is.
The network administrator should know the average amount of traffic at different times, the number
of connections each server can handle, and the capacity of each device to handle traffic. This
means that an IPS should be maintained and adjusted frequently, because the needs of the network
evolve all the time.
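The two-threshold response of Figure 7 can be sketched as a tiny state machine, under invented numbers: below the "green line" the IPS stays idle; once the "red line" is crossed it enters mitigation and keeps shedding traffic until the rate is back down to the normal maximum, not merely just under the hard limit.

```python
# Sketch of the green-line / red-line response described above.
NORMAL_MAX = 1000   # green line, packets/s (invented value)
HARD_MAX   = 2000   # red line, packets/s (invented value)

class RateBasedIPS:
    def __init__(self):
        self.mitigating = False

    def action(self, rate: float) -> str:
        if rate >= HARD_MAX:
            self.mitigating = True     # red line crossed: start shedding traffic
        elif rate <= NORMAL_MAX:
            self.mitigating = False    # back under the green line: stand down
        return "shed" if self.mitigating else "pass"

ips = RateBasedIPS()
# A burst crosses the red line, then hovers between the two thresholds:
# mitigation continues until the rate drops below the green line.
print([ips.action(r) for r in (800, 2500, 1500, 1200, 900, 950)])
```

The hysteresis between the two thresholds is the point: with a single threshold at the red line, the network would sit permanently at its hard limit, which is the constant-overuse problem noted in the text.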
2.5.3 Host-based IDPS and Network-based IDPS
A Host-based Intrusion Detection and Prevention System (HIDPS) is an agent installed directly on
the machine to be protected; it communicates directly with the Operating System (OS) kernel and
services. The IDPS monitors and processes system calls to the kernel or Application Programming
Interface (API) calls to detect attacks and block or log them (Rash, 2005). NSS Labs (2005) states
that HIDPSs can control specific files of the system, such as the OS registry or the configuration
settings of a Web server. The HIDPS controls all modifications to these files and responds in case
of suspicious behaviour; this permits the protection of important files from attacks for which no
signature currently exists. In the case of a Host-based Intrusion Prevention System (HIPS), it is
important that it can do more than just generate alerts: it should be able to prevent attacks and
block them in real time, and a log of the attack should be kept by the system for further analysis
of the threat. The presence of a HIPS should not disturb normal use of the machine, which means it
must not block normal events. Another important capacity is being able to protect permitted
applications from different flaws; for example, a Web server must accept connections from the
Internet, and an attacker can use this to perform an attack, but the HIPS should prevent it without
shutting down all connections (Carter & Hogue, 2006). Different problems can be encountered when
using a HIDPS: because it works in close relation with the OS kernel, if an update is done, the
HIPS might not work anymore. In addition, machine resources are used to run the HIDPS, so if the
machine is not powerful enough, it can slow down the complete system and disturb the user of the
machine.
A Network-based Intrusion Detection and Prevention System (NIDPS) is specialized in monitoring
network traffic by reading the network, transport, and application layers of the OSI model. Mell
and Scarfone (2007) show that, in general, a NIDPS is composed of at least one sensor to monitor
the network traffic and one or more databases for logging purposes. The placement of the different
parts of the system is really important: the sensor can be in-line, so the traffic passes through
it, or it can be passive, in which case the traffic analysed is a copy of the real traffic.
Network-based Intrusion Detection Systems (NIDS) in general just use a copy of the traffic to
discover attacks. Packets are kept in a buffer before being processed; thanks to this, the NIDS can
do a longer deep inspection of each packet, because it will not slow down the real traffic. The
risk is that the buffer of the NIDS becomes full, so the system starts to miss packets and fails to
detect attacks (Mell & Scarfone, 2007).
Network-based Intrusion Prevention Systems (NIPS), on the other hand, are set up in-line, with two different options: invisible, meaning the system has no IP address (stealth mode), or gateway, meaning the NIPS sits at the boundary of the network (Allen, Christie, Fithen, McHugh, Pickel, & Stoner, 2000). NIPS can detect and log attacks like NIDS, but they can also respond to attacks in different ways. The first and simplest way is to drop the malicious packet: the target never receives the packet, so the network is protected, but the attacker can send the same packet again and the NIPS will need to inspect it again before dropping it, which consumes significant system resources. The second possibility is to drop all packets of the connection used by the attacker; the NIPS looks for packets with specific parameters, such as the source or destination IP address and the source port. The device drops all packets detected from the attacker during a certain period of time, which makes it possible to drop packets without inspecting them; however, if the attacker changes the packets so that they no longer match the dropped connection (by spoofing the source IP address or changing the targeted service), the connection will not be dropped. Finally, Carter and Hogue (2006) explain the last possibility: dropping all packets from a source IP address. When a malicious packet is detected, all further packets from that IP address are dropped, saving processing time for the NIPS because those packets do not need to be inspected. This technique is not perfect either: if the attacker spoofs his IP address by taking a trusted IP address from a business partner, he can still perform his attack, and if the NIPS blocks that trusted address, the blocking itself performs a DoS against the network. The main drawbacks of using NIPSs
are that they can create a bottleneck and slow down the network, because all the traffic passes through the NIPS. Moreover, if the NIPS sits at the border of the network, it is impossible to detect attacks launched from inside the network against internal targets. Figure 8 shows the differences between the deployment of an HIDPS and a NIDPS.
Network-based IDPS Host-based IDPS
Figure 8: Differences between HIDPS and NIDPS
Another concept exists, called the Distributed Intrusion Detection and Prevention System (DIDPS), which consists of multiple HIDPSs and NIDPSs used in a large network. The aim of this model is to allow the different systems to share resources, such as the database where alerts are recorded; all command and control can also be placed on the same server, or managed by software that allows the administrator to control all sensors (Snapp, et al.), (Einwechter, 2001).
2.6 Testing and Evaluation of an Intrusion Prevention System
While Section 2.4 explained what an IDS and an IPS are, this section focuses on the evaluation of IPSs, as it is the main topic of this dissertation. One of the main challenges with IPSs is being able to evaluate them and compare them with other IPSs (TippingPoint, 2008). Different techniques exist to test a system, such as black-box and white-box testing. These concepts are generally used in the world of software development, but they can be extended to other applications. The Defence Advanced Research Projects Agency (DARPA) created a methodology to test anomaly-based IDSs, in a way that makes it usable for other objectives. Following this idea, the Lincoln Adaptable Real-time Information Assurance Test bed (LARIAT) was created years later as another methodology to evaluate Intrusion Detection Systems, with the aim of replacing the DARPA
methodology. The remainder of this section presents two different environments in which a device can be tested: the on-line environment, which uses real-time traffic, and the off-line environment, which uses datasets of captured packets.
2.6.1 IPS Configuration
The configuration of the device will influence its capacity to handle traffic. If the configuration is of poor quality, the results of the testing will not be relevant. Since 1998 the number of vulnerabilities has risen quickly: in 1998, 246 vulnerabilities were catalogued; nine years later, in 2007, 7,236 vulnerabilities were catalogued (CERT). This increase shows the importance of using devices that are capable of detecting and handling these vulnerabilities. An Intrusion Prevention System disposes of filters that can handle vulnerabilities, so the tested device shall have its filters enabled (TippingPoint, 2008). The device should also be capable of analysing the traffic without slowing it down and creating a bottleneck. The best solution is to use only the rules that are needed (Rowan, 2007).
The IPS architecture also greatly influences the capacity of the device. A software-based IPS will process each received packet against all its rules one by one; a hardware-based solution may possess systems, such as a parallel processing engine or an accelerator engine, that will have an impact on the final performance (TippingPoint, 2008).
2.6.2 Metrics for Evaluation of IPSs
This section presents the different evaluation metrics for an IPS. The first step is to define the major criterion for measuring the quality of an IPS. For an IDS, it would be how well it detects intrusions (Ranum, 2001). An Intrusion Prevention System also has the capacity to detect intrusions, but in addition it responds in real time. Therefore, to define the quality of an IPS, both its capacity to detect intrusions and its capacity to respond to threats should be measured.
Fink, O'Donoghue, Chappell, and Turner (2002) attempted to classify the different metrics that could be used for the evaluation of Intrusion Detection Systems. They classified them into three categories: logistical, architectural, and performance metrics. Logistical metrics aim to measure the ease of setting up, administering, and managing the IDS. The second category, architectural metrics, evaluates how well the IDS architecture matches the deployment architecture, which differs depending on whether it is an HIDS or a NIDS. Finally, the last category contains metrics that measure how well the IDS interacts with other systems such as firewalls and routers, but it also contains the false-positive and false-negative ratios (Fink, O'Donoghue, Chappell, & Turner, 2002). As the authors of this paper point out, the main drawback is the difficulty of measuring some of these metrics. For example, the evaluation of the ease of setting up an IDS cannot be objective. It is also hard to define how to evaluate the interoperability between IDSs, firewalls, and routers.
Other work shows that researchers try to evaluate Intrusion Detection Systems with two different criteria: efficiency and effectiveness. Efficiency is the ratio of true positives to all alarms. If the result is zero, all alarms are false positives; if it is one, no false alarm has been generated by the IDS. It is calculated by:
Efficiency = TruePositives / AllAlarms {Eq. 1}
Effectiveness measures the proportion of attacks missed (false negatives) by the system. When the result is one, no false negatives have been generated; when it is zero, every attack was missed. It is calculated by:
Effectiveness = TruePositives / AllPositives (Sommers, Yegneswaran, & Barford, 2005). {Eq. 2}
These metrics were defined for the evaluation of IDSs, so it is also possible to use them for the evaluation of the detection engine of an IPS.
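As an illustration, the two ratios above can be computed from the raw counts of an evaluation run. The following sketch uses hypothetical counts, not figures from any evaluation cited here:

```python
def efficiency(true_positives: int, false_positives: int) -> float:
    """Eq. 1: the fraction of all raised alarms that are real attacks."""
    all_alarms = true_positives + false_positives
    return true_positives / all_alarms

def effectiveness(true_positives: int, false_negatives: int) -> float:
    """Eq. 2: the fraction of actual attacks that raised an alarm."""
    all_positives = true_positives + false_negatives
    return true_positives / all_positives

# Hypothetical run: 90 attacks detected, 10 benign flows flagged,
# 30 attacks missed.
eff = efficiency(90, 10)      # 0.9: one alarm in ten is false
det = effectiveness(90, 30)   # 0.75: a quarter of attacks were missed
```

A perfect detection engine scores 1 on both ratios; the two must be read together, since a device that alarms on everything trivially reaches an effectiveness of 1 while its efficiency collapses.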
Other metrics should be used to evaluate the response capacity of an IPS. Throughput is one of them. Raja defined throughput as the maximum traffic that can be handled by the IPS without dropping a packet (Raja, 2005). Two main criteria have an important impact on the throughput of an IPS. The first is the size of packets: the smaller they are, the longer it takes the IPS to analyse a given volume of traffic. In real traffic the packet sizes always vary, but if the majority of packets are of the smallest authorised size, 64 bytes, this can have a huge impact on the system performance. The second criterion is the amount of processing done by the system on a packet. This includes all protection mechanisms of the system; the mechanism with the most impact is the Deep Packet Inspection capacity of the IPS (Raja, 2005). The deeper the device looks into the packet, the longer it takes to analyse it.
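The effect of packet size can be made concrete with standard Ethernet arithmetic. The sketch below assumes a 1 Gbps link as an example and counts the 20 bytes of preamble and inter-frame gap that each frame occupies on the wire in addition to its own length:

```python
def packets_per_second(line_rate_bps: int, frame_bytes: int) -> float:
    """Frames per second at a given line rate; every Ethernet frame
    also occupies 20 bytes of preamble and inter-frame gap on the wire."""
    on_wire_bits = (frame_bytes + 20) * 8
    return line_rate_bps / on_wire_bits

# At 1 Gbps, minimum-size frames arrive roughly 18 times faster than
# full-size ones, leaving the IPS 18 times less time per packet.
small = packets_per_second(1_000_000_000, 64)     # ~1.49 million pps
large = packets_per_second(1_000_000_000, 1518)   # ~81 thousand pps
```

This is why IPS throughput figures quoted only for large packets can be misleading for rate-based attacks, which typically use minimum-size packets.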
The importance of throughput cannot be separated from the importance of latency. This metric represents the time that a packet takes to cross a device (Lyu & Lau, 2000). As with throughput, latency depends on the packet size and the amount of analysis done by the IPS. Latency has an important impact on TCP-based protocols. The maximum throughput of a TCP connection can be calculated by dividing the window size by the Round-Trip Time (RTT). The maximum TCP window is 64 kilobytes (ISI, 1981). The RTT represents the latency, and in a Local Area Network (LAN) the typical latency is one millisecond. Therefore, the maximum throughput of a TCP connection is 512 Mbps. RFC 1323 (Jacobson, Braden, & Borman, 1992) defines a solution to work around this limitation, called the TCP window scale option. It permits negotiating the size of the window, but this must be done at the initialisation of the TCP connection, and the main drawback is that many applications do not use this option. When a new network component is added, the latency increases. If the latency of this device is two milliseconds in each direction, the global round-trip latency of the network increases from one millisecond to five milliseconds. This represents a reduction of the maximum throughput of a TCP connection of 80%: it decreases from 512 Mbps to around 100 Mbps (TippingPoint, 2008). It is possible for a system to keep a good throughput but a high latency. This results in a decrease of the Quality of Service (QoS) for protocols that need a low latency, such as Voice over Internet Protocol (VoIP). An IPS is active, so it is important to measure how fast the system responds to a threat; this also influences the QoS.
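The window/RTT arithmetic above can be sketched as follows. The sketch treats 64 KB as 64,000 bytes so that the figures match the 512 Mbps cited by TippingPoint (2008); the two-millisecond device latency is the example used in the text:

```python
def max_tcp_throughput_mbps(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on a single TCP connection: window size divided by
    the round-trip time, expressed in megabits per second."""
    return window_bytes * 8 / rtt_seconds / 1_000_000

lan = max_tcp_throughput_mbps(64_000, 0.001)      # ~512 Mbps on a 1 ms LAN
# An in-line device adding 2 ms in each direction raises the RTT to 5 ms:
slowed = max_tcp_throughput_mbps(64_000, 0.005)   # ~102 Mbps, an 80% drop
```

The calculation makes the point of the paragraph explicit: a seemingly small per-device latency dominates the achievable TCP throughput once it multiplies the round-trip time.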
In a network, reliability and availability are key design considerations (Srivastava & Soi, 1983), which means the reliability of the IPS is vital. Because of the position of an IPS in
the network (in general, in front of vital resources or at the edge of the network; see Section 2.5 for more information), two outcomes are possible if it fails. Either it fails open, and all malicious traffic can cross the network without being analysed, or the IPS fails closed, so all or part of the network is no longer accessible from outside. The latter protects the network against attacks but also considerably disturbs the users of the network (NSS Labs, 2006).
2.6.3 Black-box and White-box Evaluation of an IPS
Sharma, Kumar, and Grover (2007) describe black-box and white-box testing as two different methods to test components. They were originally intended to test software components, but the idea can be extended to Intrusion Prevention Systems. In black-box testing, the tester sees the input and the output but not the implementation of the component; he does not know what is happening inside the tested component. In white-box testing, on the other hand, the tester knows the input and the output but can also observe the modifications done by the tested component. Alessandri (2004) developed a white-box test to analyse an Intrusion Detection System. He started by classing attacks into different categories. They are classified according to their characteristics, which are defined by a model that exposes the different system components, the interactions between these components, and the impact of the attacks on the system. This means that two attacks will be in the same class if they have the same characteristics. Alessandri knew the design of the IDS, so he could predict the results of the IDS when the system is under a class of attack (2004). This is white-box testing because the only way to predict the results is to know the internals of the tested device (Sharma, Kumar, & Grover, 2007). As in software development, white-box testing will help the designer of an IDS during the development process (Alessandri, 2004).
If an organisation wants to test an IDS, it may face the problem that the vendor refuses to give details about the implementation of the system. Without knowing the design, the algorithms used, or sometimes even a description of the rules, it is not possible to use the white-box methodology, so the only solution is the black-box testing methodology. Mutz, Vigna, and Richard (2003) developed a tool called Mucus that permits the synthetic generation of an event stream tailored to a particular IDS. This type of software is called an IDS simulator. Mucus aims to improve on other tools that generate synthetic attack traffic, such as IDSWakeup (Schauer, 2002). These types of tools enable the black-box testing methodology because it is possible to control the input, the attacks, and observe the output, whether the IDS detects the attack or not. This methodology permits the evaluation of a system in a qualitative way (Mutz, Vigna, & Richard, 2003). The previous examples only concern Intrusion Detection Systems, but it is possible to extend the same ideas to Intrusion Prevention Systems. These methodologies may never have been applied to IPSs by researchers, but private testing companies such as NSS Labs use black-box testing (NSS Labs, 2009) (NSS Labs, 2008).
2.6.4 DARPA methodology
The Defence Advanced Research Projects Agency (DARPA) sponsored the MIT Lincoln Labs to create the first well-known IDS evaluation procedure. This was in 1998, when they created the DARPA 1998 Evaluation. The aim was to "perform a comprehensive technical evaluation of intrusion detection technology" (Lippmann, Haines, Fried, Korba, & Das, 2000). The design of this evaluation
methodology was developed only for DARPA-funded intrusion detection systems. This methodology is also known as the Intrusion Detection Evaluation (IDEVAL).
One year later, MIT Lincoln Labs started the DARPA 1999 Evaluation with the aim of providing an "unbiased measurement of current performance levels." Another objective was to produce an experimental dataset that could be used by other researchers (McHugh, 2000). To do so, they set up the test bed visible in Figure 9. It was composed of four targets running the four most popular operating systems of the moment (Linux 2.0.27, SunOS 4.1.4, Sun Solaris 2.5.1, and Windows NT 4.0). It also contained two sniffers, one on each side of the gateway router. Inside the network, the DARPA 1999 team set up hundreds of simulated PCs and workstations. On the other side of the gateway, they set up thousands of simulated web servers to represent the Internet (Lippmann, Haines, Fried, Korba, & Das, 2000).
This test bed permitted the generation of realistic network traffic. To do so, the team used scripts that simulated hundreds of different users, such as programmers, secretaries, managers, and other types of users. These users were running popular versions of UNIX systems and Windows NT. Figure 10 shows a representation of the number of TCP connections for the most common TCP services; it is possible to see that web traffic dominates the network traffic. The average volume of generated traffic is 411 Mbytes, with 384 Mbytes of TCP, 26 Mbytes of UDP, and 96 Kbytes of ICMP (Lippmann, Haines, Fried, Korba, & Das, 2000). The 1998 Evaluation contains seven weeks of training data generated for anomaly-based systems and two weeks of testing data, with the aim of testing different IDSs on their detection capacity. Into this background traffic the team mixed different attacks to permit the testing of Intrusion Detection Systems.
The DARPA datasets are free to download. Moreover, they are never updated, so it is possible to reproduce previous evaluations using the same environment. This methodology seems well suited for the testing and evaluation of IDSs, but several researchers evaluated the 1998 and 1999 DARPA evaluations and pointed out various drawbacks.
Mahoney and Chan (2003) compared two weeks of the 1999 DARPA datasets (weeks 1 and 3) with real traffic taken from a similar environment. During the analysis of both traffic captures, they discovered that the DARPA datasets are not similar enough to real traffic. The datasets have a regular TCP SYN pattern, and the window size is always one of seven different values (between 512 and 32,120), whereas the real traffic covers the full range of values (between 0 and 65,535) with 513 different window values. Another drawback is the predictability of source addresses, with only 29 in IDEVAL against 24,924 in the real traffic. Where they observed 177 different TTL values in the real traffic, the DARPA dataset possesses only nine of the 256 possible values. Finally, they point out that the HTTP, SMTP, and SSH requests look too similar each time. These findings mean that an IDS will raise more false alarms in a real network environment than in the IDEVAL environment (Mahoney & Chan, 2003).
Figure 9: Block diagram of 1999 test bed. (Lippmann, Haines, Fried, Korba, & Das, 2000)
Figure 10: Average connections per day for dominant TCP services (Lippmann, Haines, Fried, Korba, & Das, 2000)
The attack traffic in the DARPA datasets is also criticised by researchers. The 1998 DARPA evaluation is composed of four main types of attack: user-to-root, with 114 attacks carried out; remote-to-local, with 34 attacks; Denial of Service, with 34 attacks; and finally probes, with 64 attacks (Lippmann, Haines, Fried, Korba, & Das, 2000). The main drawback is that these 311 attacks were launched over the nine weeks of testing, which represents between five and six attacks per day, a small number in comparison with the real world (Singaraju, Teo, & Zheng, 2004). The 1999 DARPA datasets are nearly as bad as the 1998 ones, with 200 attacks over five weeks (Lippmann, Haines, Fried, Korba, & Das, 2000), which represents eight attacks a day.
It is possible to conclude that the DARPA evaluation lacks realism, for both the attack and the background traffic, compared with real-life network traffic. It is also important to point out that the 1998 and 1999 DARPA evaluations were created more than a decade ago. Since then, new protocols have been developed, old ones have been modified, and new, more sophisticated attacks have appeared. This means that the validity of this methodology for testing IDSs should be reassessed.
2.6.5 Online and Offline Evaluation
Real traffic is a main component in IPS testing, but it is not always possible to use this method. If it is not possible, the best alternative is to use traffic captured from the network where the IPS will be deployed. This permits the use of a dataset that looks like real traffic, with packets that all differ as in the real world: differences in packet size, protocol distribution, packet contents, packets per second, etc. (TippingPoint, 2008). These differences can have an impact on the performance of the system; moreover, it is nearly impossible to generate realistic-looking traffic from scratch (Walsh & Koconis, 2006). If the testing is done with only one traffic type, the results will not be relevant, because they will vary dramatically between the laboratory test and the production environment.
Because of the importance of real network traffic, it is important to understand what it is. The Cooperative Association for Internet Data Analysis (CAIDA) (CAIDA, 1998) (Braun, 1998) has carried out different studies to characterise the network traffic that can be found on the Internet. Their studies show that an Internet packet has an average size between 413 and 474 bytes. About 85% of the traffic is TCP packets. The second most important protocol over IP is UDP, which represents 12% of the traffic. The remaining traffic contains Internet Control Message Protocol (ICMP) packets, Generic Routing Encapsulation (GRE), and other protocols. CAIDA also discovered that two-thirds of the UDP traffic consists of Domain Name System (DNS) and RealAudio traffic; the remainder could not be classified by the organisation. Studies also show some differences between Internet traffic and Local Area Network (LAN) traffic: the proportion of UDP traffic grows because many Remote Procedure Call (RPC) applications prefer to use UDP for performance reasons.
When a dataset is created, it should be prepared so that it can be used in a test-bed environment. Walsh and Koconis explain that recorded traffic should be cleaned of all non-full frames, incomplete sessions, malicious traffic, and retransmitted, duplicate, and missing packets of a TCP connection. To prepare the dataset to fit the test bed, it is possible to use tools such as Tcpprep and Tcprewrite (Turner, 2003). Finally, the traffic should be replayed in the test bed using tools such as Tomahawk (Tomahawk, 2002) or Tcpreplay (Turner, 2003).
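The preparation and replay steps can be sketched as a small command builder. The option names are taken from the Tcpreplay suite, but the file names, the interface names, and the choice of the bridge split mode are illustrative assumptions; the function only builds the command strings and does not execute anything:

```python
def replay_pipeline(pcap: str, intf1: str, intf2: str) -> list[str]:
    """Three steps: tcpprep splits the capture into client and server
    sides, tcprewrite fixes checksums, and tcpreplay sends each side
    out of its own test-bed interface."""
    cache = pcap + ".cache"
    rewritten = "rewritten.pcap"
    return [
        f"tcpprep --auto=bridge --pcap={pcap} --cachefile={cache}",
        f"tcprewrite --cachefile={cache} --infile={pcap} "
        f"--outfile={rewritten} --fixcsum",
        f"tcpreplay --intf1={intf1} --intf2={intf2} "
        f"--cachefile={cache} {rewritten}",
    ]

cmds = replay_pipeline("capture.pcap", "eth0", "eth1")
```

The cache file produced by tcpprep is what lets tcpreplay send client-side packets out of one interface and server-side packets out of the other, so the DUT sits in the middle of a realistic bidirectional conversation.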
2.6.6 Testing tools
During the testing of an IDS, the main method is to replay or generate real traffic, add malicious traffic, and observe whether the IDS detects the malicious packets (Vigna, Robertson, & Balzarotti, 2004) (Sommers, Yegneswaran, & Barford, 2005). Because of the response capacity of an IPS, it should be tested both on whether it detects malicious packets and on whether the proper responses have been taken.
Sommers, Yegneswaran, and Barford created a tool, called Trident, that permits the evaluation of IDSs. This tool also has the capacity to generate realistic traffic for online evaluation. The method used is called "protocol aware emulation based on payload interleaving" (Sommers, Yegneswaran, & Barford, 2005). To generate the traffic, Trident uses randomly chosen packet payloads, which must correspond to a particular state in a service automaton. The automata describe classes of packets that represent particular services; these services are the most popular found in the DARPA dataset, such as HTTP, SSH, Telnet, etc. Into this generated traffic, the tool permits the addition of exploits. An interesting feature of Trident is that the user can choose the percentage of background traffic and of malicious traffic (Sommers, Yegneswaran, & Barford, 2005). For example, the user could choose that 80% of the generated traffic is benign and
the other twenty percent is exploits. Trident is the combination of two other tools that will be described later in this section.
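The user-chosen mix of benign and malicious traffic can be sketched as a weighted schedule; this is an illustration of the idea only, not Trident's actual implementation:

```python
import random

def traffic_schedule(n_flows: int, malicious_ratio: float,
                     seed: int = 0) -> list[str]:
    """Label each generated flow so that, on average, the requested
    fraction of the traffic carries an exploit."""
    rng = random.Random(seed)
    return ["exploit" if rng.random() < malicious_ratio else "benign"
            for _ in range(n_flows)]

# The 80/20 split from the example above:
plan = traffic_schedule(10_000, 0.20)
share = plan.count("exploit") / len(plan)   # close to 0.20
```

Varying the ratio while holding the traffic volume constant is what lets an evaluator measure how the detection and response quality of the DUT change with attack density.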
The first one is the Malicious trAffic Composition Environment (MACE) (Sommers, Yegneswaran, & Barford, 2004(a)). MACE is a framework that permits the generation of malicious traffic; the tool does not possess a complete database of all possible attacks that could be created. Instead, it only produces specific attacks that are in use nowadays. Figure 11 represents the architecture of MACE.
Figure 11: The MACE architecture (Sommers, Yegneswaran, & Barford, 2004(a))
The exploit part corresponds to the set of vulnerabilities that will be used to perform the attack. The obfuscation part covers the modification of different parts of the packet, such as the payload or header, to elude the tested device. These modifications can be made at the network layer or the application layer of the OSI model. The propagation model corresponds to the order in which victims are chosen to be attacked. Finally, the background traffic is the legitimate traffic that passes through the network. The MACE framework does not produce legitimate traffic, so this tool should be associated with a network traffic generator; the one used by Trident is called Harpoon. Figure 12 shows the taxonomy of MACE exploits.
Figure 12: Taxonomy of MACE exploits (Sommers, Yegneswaran, & Barford, 2005)
The network traffic generator used by the Trident software is Harpoon (Sommers, 2004(b)). This tool has the capacity to generate flow-level traffic. To do so, Harpoon extracts parameters from NetFlow traces and then uses them to generate flows with the same statistical qualities as traffic that can be found on the Internet.
Another tool to evaluate security products is Metasploit (Metasploit, 2008). This tool is a framework that has been developed for penetration testing. It permits users to search for and develop exploits, payloads, and other security modules that can be added to the framework. As with Trident, Metasploit has the capacity to generate code that can be reused in different exploits. The latest version of the framework (3.x) has been completely rewritten in Ruby; the previous version (2.x) was a blend of Perl, Python, C, and even assembly. This latest version has been designed to offer discovery and exploitation capabilities with automation (Maynor, Mookhey, Cervini, Roslan, & Beaver, 2007). Figure 13 shows the architecture of the Metasploit framework 3.4. A good point in comparison with Trident is that Metasploit offers different interfaces: the user can choose between a console interface, a GUI, and even a web interface (Metasploit, 2008). Another interesting difference is that Metasploit is under the MSF Licence. This resembles an End User Licence Agreement (EULA) but with the interesting properties of an open-source licence. In short, Metasploit is open source and free to use; developers have the right to create and add modules, but these modules should be available to every user. Finally, it is not possible to sell Metasploit as part of a bundle or other commercial product (Maynor, Mookhey, Cervini, Roslan, & Beaver, 2007).
The last tool presented in this section is hping, which is at its third version. The aim of hping3 (Sanfilippo, 2006) is to be a framework with scripting capability over the TCP/IP stack. This tool permits the creation of nearly any kind of IP packet; in fact, the user can choose a value for each field of an IP packet (Parker, 2003). The user can control hping3 from the command line, but it also possesses a Tcl/Tk interpreter. Hping3 can be described as a "scriptable TCP/IP stack" (Sanfilippo, 2006). This tool permits the creation of all rate-based attacks that use the TCP/IP stack.
Figure 13: Metasploit Architecture, Version 3.4 (Gates, 2009)
2.7 Conclusion
Section 2.2 provided a description of information security, including the definition of the CIA concept. This section also described threats, vulnerabilities, and assets, along with how this information should be used to calculate the risk to a system. Common threats were presented to offer a security background that helps situate this project within the topic. These threats, which include viruses, worms, DoS, DDoS, and Trojans, can have disastrous consequences for the assets of companies. It has been shown that the financial loss for companies, when a vulnerability is exploited, can be huge. These threats are more technical, but other, less well-known threats such as social engineering and phishing are also dangerous. Attackers
are increasingly using these techniques because they are easy and do not require much time to carry out. These threats are easy to perform because computer users are not knowledgeable enough about the importance of security, such as having a secure, personal password. Finally, this section provided a way to calculate the risk for a company, plus a description of when a risk is worth protecting against.
Section 2.3 presented one of the most common security systems used nowadays by companies: firewalls. The different mechanisms present in firewalls were discussed, along with a history of the evolution of such systems. This section highlighted that firewalls are not enough to protect a network against known and unknown threats. It was also shown that firewalls tend to offer a false feeling of security, which can be disastrous for the security of the network (Ingham & Forrest, 2005). A firewall can authorise users to access resources but cannot analyse what the traffic contains; this traffic could contain attacks, and the firewall will never see them. This is why other security mechanisms, such as Intrusion Detection Systems, have been developed.
Section 2.5 described IDSs and the different types of IDS that exist. This security device works by processing every packet that passes through it against a set of rules. When a packet matches one of these rules, an alert is fired and logged in the system. The person in charge of the security of the network should then take the appropriate response. IDSs are a passive type of security tool, which means they never take any action against a packet. If an alert is logged but nobody manages the alerts, or nobody is there at that moment to analyse them, the threat will never be stopped; it will be too late and the damage already done. Managing IDSs costs companies money, and too many of them do not possess specialised employees to manage them. This lack of response capacity was answered by the creation of Intrusion Prevention Systems. As the name points out, this security tool is oriented towards the prevention of intrusions. An IPS has the same detection capacity as an IDS, but it can also take the decision to drop a packet or modify its content. This ensures better security of the network because attacks are handled in real time. Whereas IDSs are passive and so have no impact on network performance, an IPS can become the bottleneck of the system if it is not resourceful enough to handle the needed amount of traffic. Moreover, when an IDS generates a false positive, it only adds work for the person who manages the alerts and has no impact on the end user; when an IPS generates a false positive, it may drop or rewrite the packet, which disturbs legitimate users who want to access the resources. This is why the configuration of such a device should be done carefully.
As the area of security is vast, this thesis will only focus on one type of threat: Distributed Denial of Service. Section 2.4 described the process of creating a botnet, plus how malicious persons can manage one. When attackers possess a network of agents, they can launch DDoS attacks, and the power of these attacks correlates with the number of agents in the botnet. The different types of DDoS were also presented, along with the different parts of a system that such attacks target.
The last section analysed the different methodologies that could be used to evaluate IDSs and IPSs. IDS evaluation methodologies were discussed because of the lack of IPS evaluation methodologies and because, as Section 2.5 demonstrated, the two have many common points. McHugh (2000) shows the importance of realistic traffic in the evaluation of such devices and argues that the DARPA methodology is now obsolete because it is out-dated. Another methodology, Trident (Sommers, Yegneswaran,
& Barford, 2005), had the aim of overcoming this problem, but Corsini (2009) showed that the Trident methodology could not produce realistic enough traffic. The literature review also showed the importance of evaluating an IPS with realistic background traffic along with realistic attacks in order to perform an objective evaluation. This highlighted that the most important feature of the methodology to be produced is that it should be able to generate realistic background traffic along with realistic attack traffic. The metrics used for the evaluation of the device should also be objective and not dependent on the system.
CHAPTER 3
DESIGN AND METHODOLOGY
3.1 Introduction
This chapter details the different parts of the design and the main goals of this thesis. The
literature review showed the importance of the different components that should be put in place,
such as the background traffic and the attack traffic. In this project, only one type of attack will
be used: Distributed Denial of Service. As presented in the literature review and as visible in
Appendix A, the number of different DDoS attacks is huge, and by mixing them it is possible to
create an infinity of variants. Therefore, the evaluation of the IPS will be done using only four
different attacks.
Using only particular attacks will make it easy to evaluate another IPS following the same
methodology. Besides reproducing the same types of attack, it is also important to design a test bed
that can accommodate a particular IPS. In this configuration, the system tested is called a Device
Under Test (DUT). The methodology has been developed for black-box testing in an offline
environment; thus, a way to generate background traffic must be provided.
Section 3.2 presents the test bed and gives the configuration of each device used for the evaluation
of the DUT. The following section (Section 3.3) presents the different components used for the
evaluation and the methodology followed. Section 3.4 contains the description of the four different
DDoS attacks that will be used and an overview of the background traffic. Section 3.5 describes the
different metrics that will be used in this project. Finally, the last section (Section 3.6)
concludes the chapter.
3.2 Network Architecture
The network architecture used for the evaluation of the system, which can be found in Appendix B, is
composed of three routers and one switch. The routers use the Routing Information Protocol (RIP),
chosen for its ease of installation. Because the background traffic is split between outside traffic
and inside traffic, two Virtual Local Area Networks (VLANs) have been set up to ensure that no
interference occurs. The network tries to mimic a typical small company network. One part is used by
the different servers (FTP, HTTP, etc.), the other by the client machines used by the users. One
router for each part ensures the connection between the VLAN and the edge router. This last router
ensures the communication between the server side, the client side, and the outside of the network.
Outside the network are the different attacking machines and clients. The DUT is installed between
the outside network and the edge router. Appendix C shows the configuration of each router. The
background traffic generator is connected to two different VLANs: the first is the VLAN of the
outside network and the second that of the inside network.
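As an illustration of this routing set-up, the RIP configuration of the edge router might look like the following IOS-style sketch. The interface addressing and exact statements here are assumptions for illustration only; the actual configurations are given in Appendix C.

```
! Hypothetical edge-router RIP configuration (IOS-style sketch)
router rip
 version 2
 network 192.168.1.0     ! server-side subnet
 network 10.0.3.0        ! client-side subnet
 no auto-summary
```

RIP was chosen for its simplicity: each router only needs to announce its directly connected networks, and routes propagate automatically.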
Finally, the switch is also configured to offer a Switched Port Analyser (SPAN) port. This
technology copies the traffic of a particular port or VLAN to another specified port, which makes it
possible to send all the traffic to a dedicated host that records and analyses it. This SPAN port
will be used only for some of the experiments. Appendix D contains the configuration of the switch.
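A SPAN session of this kind takes only a few lines to configure. The following IOS-style sketch mirrors the traffic of two VLANs to a monitoring port; the VLAN and port numbers are assumptions, the real switch configuration being in Appendix D.

```
! Hypothetical SPAN configuration (IOS-style sketch)
monitor session 1 source vlan 10 , 20
monitor session 1 destination interface FastEthernet0/24
```

The host attached to the destination port can then capture all mirrored traffic with a standard packet sniffer.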
3.3 Testing method overview
As shown in the literature review, the evaluation of an IPS requires realistic background traffic
and attack traffic. These two kinds of traffic should be easy to combine and easy to configure,
allowing the user to perform the evaluation of the IPS as easily as possible. Finally, the
evaluation methodology should offer the proper metrics for such an evaluation and explain how to
interpret the results. Figure 14 shows the schematic implementation of the evaluation methodology,
while Figure 15 shows the different components involved within the DUT.
Figure 14: Schematic Implementation
To evaluate an IPS, the background traffic should come from both inside and outside the network. The
Tcpreplay toolset has the capacity to split the traffic in two parts following specific
requirements, which will be explained later on. Once the traffic is split, it can be sent on two
different interfaces, one inside and one outside the LAN. For the attack traffic, the tool Hping3,
described in Section 2.5.6, will be used. The switch aggregates the attack traffic and the
background traffic; no specific configuration is needed because it automatically directs both kinds
of traffic to the port where the IPS is connected. Finally, the methodology integrates the different
evaluation metrics. The following sections describe each part of this methodology more precisely.
[Figure 14 shows the attack traffic (Hping3 generating ICMP-, UDP-, TCP-based and mixed DDoS) and
the background traffic (Tcpreplay replaying the DARPA data set) being aggregated and passed through
the IPS (eth0/eth1), from which the evaluation metrics are collected.]
Figure 15: Schematic Implementation
3.4 Attack and Background Traffic Component Design
The aim of this thesis is to evaluate an IPS against rate-based attacks, which means that for the
malicious traffic only one category of threat will be used: Distributed Denial of Service. The
literature review described this threat, how a DDoS is generated, and its impact on a network.
Different approaches to generating an attack have been analysed. The best solution would have been
the creation of a botnet. The main problems with this solution were security and logistics: creating
a network that contains infected hosts involves the utilisation and manipulation of real malicious
software, and a manipulation error could have a huge impact on the network of the university.
Moreover, a botnet needs a large number of hosts to be realistic, and the cost of this type of
installation is too high for this project, so this solution has been dropped.
The other solution is to use a program called Hping3, which was presented in the literature review.
It will be used to generate, in real time, crafted packets that match the behaviour of a chosen
DDoS. Four different attacks will be used: a TCP-based, a UDP-based, an ICMP-based, and a mix of
these three. Table 4 gives a short description of each DDoS
[Figure 15 shows the internal components of the IPS under test: packets entering on internal/eth0
and external/eth1 are bridged into a packet queue via iptables and passed to the Snort detection
engine (driven by its configuration and rule sets, writing packet and event logs), whose verdicts
feed an iptables action engine that forwards or drops the packets.]
used. The TCP-based and UDP-based attacks will target the FTP server, while the ICMP-based attack
will target the host of the FTP server. By using three different protocols for the evaluation, it
will be possible to see whether the protocol has an impact on the capacity of the DUT to handle
attacks.
Distributed Denial of Service: Description
TCP-based: TCP packets with destination port 21 and the SYN flag set. The source IP address will be randomly spoofed.
UDP-based: UDP packets with destination port 21. The source IP address will be randomly spoofed.
ICMP-based: ICMP echo request packets. The source IP address will be randomly spoofed.
Mix traffic: A mix of the three previous DDoS attacks.
Table 4: DDoS used overview
As explained before, the cost of creating a botnet is too high, so when Hping3 generates crafted
packets it will also spoof the source IP randomly. This makes it possible to simulate a huge number
of nodes. Because the addresses are random, it is not possible to define a ratio that determines how
often an address should appear: some addresses may appear just once while others appear thousands of
times. Moreover, this solution loses the raw power that a botnet offers.
As the literature review described, generating realistic background traffic is complex. Moreover,
the tool that generates the traffic should have the capacity to replay the same traffic that was
previously generated, so that each evaluation of the DUT has the same characteristics. This rules
out the utilisation of Harpoon: it generates realistic traffic but cannot guarantee that it will
generate the same traffic again. Furthermore, Corsini (2009) showed that Harpoon does not generate
sufficiently realistic traffic. This is why Tcpreplay (Turner, 2003) will be used during this
project. This tool offers the possibility of replaying a data set with different options (speed,
which interfaces to use, and so on).
Tcpreplay is not supplied with any data set; the user must provide one. As explained before, using
traffic that comes from the environment where the IPS will be set up is the best solution. However,
it is not possible to have access to this kind of traffic within the scope of this project. A
solution would have been to create an environment to generate traffic and capture it; this was
discarded because of the time required and the difficulty involved. The other solution is to use an
existing data set. At least two public repositories of data sets exist: the DARPA data set
(McHugh, 2000) and the enterprise traces that can be found on the Bro web site (Allman et al, 2004).
Bro (Paxson, 2003) is an open-source NIDS that aims to handle high-speed traffic (Gbps).
The data sets from Bro's website have been discarded because no previous studies have used them,
while the DARPA data sets have been widely used and commented on. The literature review showed that
these data sets have many drawbacks, but they are the only possible source. Thus, the testing
methodology depends on these data sets; however, the methodology is open to any other data set that
proves to be better than DARPA's.
3.5 Evaluation Metrics Design
The literature review demonstrated the importance of metrics to evaluate Intrusion Prevention
Systems. The evaluation will also take place around metrics proper to an IPS as the available
bandwidth, the latency, and the time to respond to a threat. Nevertheless, metrics used by both IDS
and IPS will be used as the resource metrics which correspond to CPU load and memory load
(Sommers, Yegneswaran, & Barford, 2005). The rate of packets loss should also be monitored. These
metrics could be classified in three different categories. The first one will contain the packets lost by
source and packets lost by destination. This will be the Response Metrics category. This category
represents the capacity of the DUT so it does not drop legitimate packets.
The destination mode of the rate filter tells the IPS to keep in memory, for each destination IP
address, the rate of alerts watched by a filter. Snort does not have the capacity to discern
malicious traffic from legitimate traffic when the filter is triggered: all packets directed to the
watched destination that match the watched alert will be dropped. The obvious problem is that
legitimate users will no longer have access to this particular destination. The other mode of the
rate filter is source mode, where Snort keeps in memory the rate of watched alerts for each source
IP address; when the rate is reached, the filter is triggered. This protects the network while still
allowing legitimate users to access it. The main drawback of this mode is that if the attack traffic
comes from a huge number of machines, the filter might never be triggered.
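The trade-off between the two modes can be illustrated with two hypothetical Snort rate_filter entries. The threshold values shown here are illustrative only; the filters actually used are defined in Chapter 4.

```
# Track by destination: protects a single target, but once triggered
# also drops packets from legitimate clients of that target.
rate_filter gen_id 1, sig_id 469, track by_dst, \
    count 15, seconds 2, new_action drop, timeout 30

# Track by source: spares legitimate clients, but a widely distributed
# attack may never reach the per-source threshold.
rate_filter gen_id 1, sig_id 469, track by_src, \
    count 15, seconds 2, new_action drop, timeout 30
```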
The second category is the Impact Metrics category, which contains the available bandwidth, latency,
time to respond, and reliability metrics. The last, the Resources Metrics category, contains CPU and
memory usage. Table 5 summarises these metrics.
Response Metrics
Packets lost by source: Rate of legitimate packets lost
Packets lost by destination: Overall packets lost
Impact Metrics
Available Bandwidth: Rate of maximum traffic that can pass through the DUT
Latency: Time spent by a packet to cross the DUT
Time to respond: Number of packets before the DUT responds to the threat
Reliability: Time without a system error
Resources Metrics
CPU Load: Percentage of CPU load
Memory Load: Percentage of memory load
Table 5: Evaluation Metrics
The two packet-loss metrics make it possible to determine the capacity of the DUT not to block
legitimate traffic (false positives) and not to miss malicious packets (false negatives). Monitoring
CPU and memory utilisation together with the number of packets lost will make it possible to
determine whether a relation exists between them.
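As a sketch of how these two rates can be derived from raw packet counts, the calculation reduces to two simple ratios. The counts below are invented for illustration only.

```shell
# Illustrative counts only: packets sent and packets that crossed the DUT
legit_sent=10000;  legit_received=9850
attack_sent=50000; attack_received=1200

# False positive rate: share of legitimate packets wrongly dropped
fp=$(awk -v s="$legit_sent" -v r="$legit_received" 'BEGIN{printf "%.2f", (s-r)*100/s}')
# False negative rate: share of malicious packets that slipped through
fn=$(awk -v s="$attack_sent" -v r="$attack_received" 'BEGIN{printf "%.2f", r*100/s}')
echo "FP: ${fp}%  FN: ${fn}%"    # FP: 1.50%  FN: 2.40%
```

In the experiments, the sent and received counts would come from the traffic generators and the capture host on the SPAN port respectively.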
3.6 Conclusion
This chapter gave an overview of the different components of the methodology as well as a
schematic representation of these components. It also described the different requirements that a
solid IPS evaluation methodology should meet. These requirements are:
Generate background traffic that is identical each time.
Integrate attack traffic that is configurable.
Provide key evaluation metrics.
The methodology allows the user to include a data set that will simulate the background traffic with
the help of Tcpreplay, and this traffic will not change between evaluations. The Distributed Denial
of Service attacks will be generated with Hping3, which has the capacity to generate crafted traffic
at a chosen speed. A switch aggregates these two kinds of traffic within the test bed. The
methodology lets the user choose the rate of background traffic and the rate of attack traffic. The
company TippingPoint (2008) demonstrated that it is important to be able to send traffic that covers
80% of the maximum throughput of the IPS. When an IPS is chosen, it is important that it covers the
needs of the network where it is installed. TippingPoint determined that around 80% of the maximum
capacity of the IPS should be used, so that it can still handle peaks of traffic while no resources
are wasted. If the IPS is far more capable than the system requires, the result is only a loss of
money, because the price of such a device correlates with its capacity to handle traffic
(NSS Labs, 2006) (TippingPoint, 2008). The maximum throughput of the tested device has to be
measured before any experiment takes place. A short evaluation of the chosen IPS showed that it
could handle only 1 Mbps of traffic. Following the ratio of 80% of background traffic to evaluate
the device, this represents a rate of 819 Kbps.
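The 819 Kbps figure follows directly from the 80% rule, assuming 1 Mbps is counted as 1024 Kbps:

```shell
max_kbps=1024    # measured maximum throughput of the DUT (1 Mbps)
bg_kbps=$(awk -v m="$max_kbps" 'BEGIN{printf "%d", m * 0.8}')
echo "${bg_kbps} Kbps"    # 819 Kbps
```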
The background traffic will be generated with the help of Tcpreplay and, as explained, with a data
set from the DARPA project. This is known not to be the best solution, but the only other source of
data sets that has been found is from the Bro project, and researchers have not used Bro's data sets
as they have DARPA's. Finally, this chapter described the different metrics that will be used to
evaluate the IPS; they are objective, so they can be used to compare the chosen system with other
Intrusion Prevention Systems.
CHAPTER 4
IMPLEMENTATION
4.1 Introduction
The previous chapter described the design that will be used to test and evaluate a rate-based IPS.
This chapter presents how this design has been implemented. The different tools that will be used
are described, along with the procedure to configure and install them. This chapter also gives part
of the IPS configuration. The chapter is split in three parts. Section 4.2 presents the chosen IPS,
which is called Snort, how it has been integrated in the system, and the configuration of this
device. The following section presents the procedure to generate the background traffic using
Tcpreplay and the DARPA data set, and evaluates the chosen traffic. Section 4.4 presents the
specification for producing the different attacks used to evaluate the DUT. The tool used for the
generation of DDoS is called Hping3.
4.2 Intrusion Prevention System Configuration
The IPS that will be used during this thesis is called Snort. This is an open-source IDPS. It runs
under many different operating systems and is widely supported, which has made it popular.
Furthermore, Sourcefire offers users the possibility to obtain and use rules produced by the
Vulnerability Research Team (VRT). These rules are tested and cover the majority of threats known at
the time of their release. Moreover, they are updated on a regular basis. These rules will be used
as the default configuration of the system. The last reason why Snort has been selected is the large
number of researchers (Sommer et al (2004(a)) (2005); Mutz, Vigna and Richard (2003); Salash and
Kahtani (2009)) using this tool in their research.
The default installation of Snort is for IDS mode. If users want to use Snort in IPS mode, they
should specify it when compiling Snort. The following command lines have been used to compile Snort:
The --with-mysql flag permits Snort to connect to a MySQL database to store alerts. This feature is
included only for the purpose of further experiments; during this project the alerts were not
monitored. When running Snort as an IPS, a mechanism should be set up to let Snort process the
incoming packets. This is done in two steps. The first is to add rules to iptables so that when a
packet arrives on an interface it is stored in a queue. The following has to be executed to do so:
# ./configure --with-mysql --enable-inline
# make
# make install
# iptables -A OUTPUT -j QUEUE
# iptables -A FORWARD -j QUEUE
The second step is to tell Snort to read the queue and process the packets that come from the queue:
4.2.1 Crafted Rules
The rate filters presented in the next section will use these rules. As described in Chapter 3, the
attacks use three different protocols: ICMP, TCP, and UDP. The packets that will be used during the
attacks are known, so it is possible to analyse them and produce rules that match the malicious
packet patterns. For the ICMP-based attack, a default rule already exists that matches these
packets:
The packets of the TCP-based and UDP-based attacks are regular packets that become malicious only
because of their high rate. No rules were available to detect them, so they had to be developed. To
do so, captures of the packets generated by Hping3 were analysed. For both TCP and UDP, it was
discovered that the source port is random but always higher than 1024. For TCP, Snort should look
for the SYN flag. The destination port is the FTP port, 21. The following lines are the result of
this analysis:
4.2.2 Rate Filter Configuration
Since version 2.8.5 (Snort Team, 2009), Snort offers the possibility of tuning the processing of
different events. Four mechanisms exist: Detection Filter, Event Suppression, Event Filter, and
Rate Filter:
Detection Filter: Specifies a threshold that must be exceeded before a rule generates an
event.
Event Filter: Permits reducing the number of events logged for noisy rules.
Event Suppression: Simply suppresses the logging of an event.
Rate Filter: This is the mechanism that will be used for this project. It permits specifying a
change of behaviour of a rule, during a specified time, when a rate of events is exceeded.
The Rate Filter, associated with the rules of Section 4.2.1, will permit Snort to take an action
when these rules generate too many events. The details of implementing a Rate Filter will now be
# snort -Q -c /etc/snort/snort.conf
alert icmp $EXTERNAL_NET any -> $HOME_NET any (msg:"ICMP PING
NMAP"; dsize:0; itype:8; reference:arachnids,162;
classtype:attempted-recon; sid:469; rev:4;)
# TCP rule that detects TCP packets with the SYN flag set,
# destined for an FTP server.
alert tcp $EXTERNAL_NET any -> $HOME_NET 21 (flags:S; msg:"FTP - TCP SYN FLAG"; classtype:attempted-dos; sid:100001; rev:1;)
# UDP rule that detects UDP packets destined for an FTP server.
alert udp $EXTERNAL_NET any -> $HOME_NET 21 (msg:"FTP - UDP"; classtype:attempted-dos; sid:100002; rev:1;)
described, and Sections 4.2.3, 4.2.4, and 4.2.5 will detail the implementation of the rate filters
that will be used to evaluate the IPS.
Rate filter configurations are standalone, which means they are not defined inside a rule, and have
the following format (Snort Team, 2009):
The different options are described in Table 6:
Option Description
track by_src | by_dst | by_rule: Rate is tracked either by source IP address, destination IP address, or by rule. This means the match statistics are maintained for each unique source IP address, for each unique destination IP address, or aggregated at rule level. For rules related to Stream5 sessions, source and destination mean client and server respectively. track by_rule and apply_to may not be used together.
count c: The maximum number of rule matches in s seconds before the rate filter limit is exceeded. c must be a nonzero value.
seconds s: The time period over which count is accrued. 0 seconds means count is a total count instead of a specific rate. For example, rate_filter may be used to detect if the number of connections to a specific server exceeds a specific count. 0 seconds applies only to internal rules (gen_id 135); other use will produce a fatal error from Snort.
new_action alert | drop | pass | log | sdrop | reject: The new action replaces the rule action for t seconds. drop, reject, and sdrop can be used only when Snort is used in inline mode. sdrop and reject are conditionally compiled with GIDS.
timeout t: Revert to the original rule action after t seconds. If t is 0, the rule action is never reverted. An event_filter may be used to manage the number of alerts after the rule action is enabled by rate_filter.
apply_to <ip-list>: Restricts the configuration to only the source or destination IP addresses (indicated by the track parameter) determined by <ip-list>. track by_rule and apply_to may not be used together. Note that events are generated during the timeout period, even if the rate falls below the configured limit.
Table 6: Rate Filter Options (Snort Team, 2009)
4.2.3 ICMP-based DDoS Mitigation
It has been decided that the maximum number of ICMP echo request that should be received by a
host of the internal network is of 15 packets in maximum 2 seconds. If this rate is reached, the IPS
will start to drop these packets during 20 seconds. The sig_id 469 means that this rate filter look for
the rules 469 presented in Section 4.2.1. The next line shows the configuration of the rate filter.
rate_filter \
gen_id <gid>, sig_id <sid>, \
track <by_src|by_dst|by_rule>, \
count <c>, seconds <s>, \
new_action alert|drop|pass|log|sdrop|reject, \
timeout <seconds> \
[, apply_to <ip-list>]
rate_filter \
gen_id 1, sig_id 469, \
track by_dst, \
count 15, seconds 2, \
new_action drop, timeout 30
4.2.4 TCP-based DDoS Mitigation
To mitigate TCP-based mitigation the rate filter will use the rule 1 describe in Section 4.2.1. This
mean only TCP-based attack directed on the port 21 that request an opening of connection will be
mitigated. The maximum rate of TCP SYN that will be accepted is 30 packets in 5 seconds and it
drops these packets during 30 seconds.
4.2.5 UDP-based DDoS Mitigation
The last type of DDoS is UDP-based. As for the TCP-based attack, the IPS will only mitigate attacks
directed at port 21. The maximum rate has been set to 10 packets in 2 seconds. Packets are likewise
dropped for 30 seconds when the maximum rate is reached.
4.3 Background Traffic Generation
The background traffic will be generated using the Tcpreplay suite of tools, which is composed of
Tcpprep, Tcprewrite, Tcpreplay, Tcpreplay-edit, and Tcpbridge (Turner, 2003). An overview of these
tools can be found in Table 7. Saliou and Graves created a tutorial that can be found in the
master's thesis of Corsini (2009).
Tool: Description
Tcpprep: Classifies packets from a data set as client or server. It generates a cache file that will be used by Tcprewrite and Tcpreplay. The classification method can be chosen by the user.
Tcprewrite: Reads and rewrites the TCP/IP and Layer 2 characteristics of a pcap file.
Tcpreplay: Replays the data set at an arbitrary speed chosen by the user.
Tcpreplay-edit: The combination of the three previous tools.
Tcpbridge: Has the same function as Tcprewrite, plus the capacity to bridge two network interfaces.
Table 7: Overview of Tcpreplay
The evaluation methodology will use Tcpprep, Tcprewrite, and Tcpreplay. This combination has been
preferred to Tcpreplay-edit, which offers the same characteristics by combining the three tools but
has an impact on performance because everything must be done in real time. Figure 16 is a schematic
representation of the modifications that will be applied to a data set.
rate_filter \
gen_id 1, sig_id 100001, \
track by_dst, \
count 30, seconds 5, \
new_action drop, timeout 30
rate_filter \
gen_id 1, sig_id 100002, \
track by_dst, \
count 10, seconds 2, \
new_action drop, timeout 30
The pcap file used during this project is from the Monday of the first week of clean traffic of the
1999 DARPA data set. The selected file is inside.tcpdump. Figures 17 and 18 provide a brief overview
of different characteristics of this data set.
Figure 16: Data-set Modification
Figure 17: Protocol Distribution
Figure 18: Service Distribution
4.3.1 Tcpprep
The first step in the customisation of the DARPA data set is to divide the packets into a client
group and a server group. The tool used to do so is called Tcpprep; it takes a pcap file as argument
and outputs a cache file that contains the classification of each packet. Different ways to classify
these packets exist: they can be divided by their ports, by Media Access Control (MAC) address, or
even by using regular expressions. The option used to customise the data set is called auto/bridge;
it classifies a packet according to its behaviour. Tcprewrite and Tcpreplay will then use the cache
file generated by Tcpprep. The following command line creates the cache file called cache.cache for
a data set called inside.tcpdump:
[Figure 16 shows the data-set modification pipeline: the original data set is processed by Tcpprep
to produce a cache file, which Tcprewrite and Tcpreplay then use to rewrite and replay the traffic
on eth0 and eth1. Figure 17 shows the protocol distribution of the data set (TCP 94%, UDP 5%,
ICMP 1%) and Figure 18 the service distribution (HTTP 62%, DNS 21%, FTP, SSH and SMTP around 4%
each, the remainder other services).]
# tcpprep --auto=bridge --pcap=inside.tcpdump --cachefile=cache.cache
4.3.2 Tcprewrite
The next tool to use is Tcprewrite. This tool modifies the TCP/IP and Layer 2 information of each
packet. This must be done so that the packets find their way in the test bed; to do so, it modifies
the source and destination IP addresses and the MAC addresses of each packet. It also has the
capacity to correct checksums and even change the MTU. Tcprewrite takes as arguments the rewriting
configuration, the cache file, and the source pcap file. The output is a pcap file that matches the
user's configuration. The following command lines show an example of the utilisation of Tcprewrite:
In this configuration, the server has the MAC address aa:aa:aa:aa:aa:aa and the client has the MAC
address bb:bb:bb:bb:bb:bb. All server packets will have an IP address from the 192.168.1.0/24
subnet and all clients will have an IP address from the 10.0.3.0/24 subnet. The three last flags
respectively correct all checksums, skip IP broadcasts, and skip Layer 2 broadcasts so that this
information is not rewritten.
4.3.3 Tcpreplay
The final step is to replay the customised pcap file with Tcpreplay. This tool replays a pcap file
following the characteristics of a cache file previously generated with Tcpprep. Different settings
can be chosen by the user, such as on which interfaces the traffic will be replayed, the speed
(original speed, packets per second, megabytes per second, a multiplier of the original speed, or
top speed, which is the maximum speed that the interface can offer), and the number of times the
pcap file will be replayed. The following command line is an example where a pcap file called
custom.pcap is replayed on two interfaces, at a speed of two megabytes per second, following the
instructions of the cache file cache.cache:
It is important to note that this command should be executed with super-user rights because
Tcpreplay needs access to the interfaces.
4.4 Attack Traffic Generation
The generation of the Distributed Denial of Service attacks will be done using the tool Hping3,
which will generate the four different types of DDoS chosen to conduct the IPS testing. Hping3 will
be configured so that all packets sent have their source IP address spoofed. This helps simulate a
DDoS attack with a limited number of attacking devices. The following command line is the template
that will be used to generate these attacks:
# tcprewrite --enet-dmac=aa:aa:aa:aa:aa:aa,bb:bb:bb:bb:bb:bb \
    --enet-smac=bb:bb:bb:bb:bb:bb,aa:aa:aa:aa:aa:aa \
    -e 192.168.1.0/24:10.0.3.0/24 \
    --cachefile=cache.cache \
    --input=inside.tcpdump \
    --output=out.pcap \
    --fixcsum --skipbroadcast --skipl2broadcast
# tcpreplay --intf1=eth0 --intf2=eth1 --cachefile=cache.cache \
    --mbps=2 custom.pcap
The flag used to spoof the source address of the IP packets is --rand-source, and the flag used to
specify the speed is -i. This flag tells Hping3 how long to wait between two packets. The time is
expressed in microseconds, and the letter u must precede the chosen value. For example, with the
speed option -i u1000, a thousand packets will be sent each second. The four attacks will now be
presented:
ICMP-based: This attack will send ICMP echo-request (ping) to the target. The flag “-1” specify that
the protocol is ICMP. By default, Hping3 will send ICMP echo-request packets.
TCP-based: This attack will send TCP SYN packet to the target, the flag “-S” will tell Hping3 to do so.
The port should be specify by using the “-p” flag. If it is not specify it will send a TCP SYN packet with
the destination port as 0. The source port will be chosen randomly (but upper 1024) and it will be
incremented of one between each packet.
UDP-based: This attack will send UDP packet, the flag used is “-2”. As for the previous attack the port
should be specify with “-p” and the same procedure will be applied to define the source port.
Mix-based: Finally, the last attack will be a blend of the three previous attacks. The repartition
between the attacking devices should be one third for each protocol so none of them are privileged.
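The relation between the desired attack rate and Hping3's "-i u" interval can be sketched in a few lines; the helper below is illustrative and not part of the original tooling:

```python
def hping3_interval_flag(packets_per_second: int) -> str:
    """Convert a target rate in packets per second into Hping3's
    '-i uN' option, where N is the delay in microseconds between
    two consecutive packets."""
    if packets_per_second <= 0:
        raise ValueError("rate must be positive")
    microseconds = 1_000_000 // packets_per_second
    return f"-i u{microseconds}"

# 1000 pps -> a 1000 microsecond inter-packet delay, matching the
# '-i u1000' example given in the text.
print(hping3_interval_flag(1000))  # -i u1000
print(hping3_interval_flag(5000))  # -i u200
```

The same conversion gives the intervals for the other rates used in the experiments; for example, 7 500 pps corresponds to "-i u133".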
4.5 Experiment Description
This section presents the description of all experiments that have been done to evaluate the IPS. In each test scenario, the background traffic was always the same: the Monday of the first week of the DARPA 1998 data-set (DARPA, 1998). It was then combined with the attacking traffic; the switch performed the aggregation. The experiments should ensure as much as possible that they are not biased. During the evaluation of the results, it is important not to confuse correlation and causation (Peisert & Bishop, 2007). Peisert and Bishop (2007) describe the common process to follow for a scientific experiment:
1. Form a hypothesis.
2. Perform the experiment, then collect data.
3. Analyse the data.
4. Interpret the results and draw conclusions.
5. Depending on the results, the scientist might return to the first step.
They also pointed out that an experiment should only have one variable. For the experiments on the IPS, the only variable will be the rate of attacking traffic. By carrying out the experiments in this way, it will be possible to assess whether the rate of malicious traffic has an impact on the DUT.
# hping3 <protocol> <target IP> <port if needed> <source spoof option> <speed>
# hping3 -1 10.0.3.10 --rand-source -i u1000
# hping3 -S 10.0.3.10 -p 21 --rand-source -i u1000
# hping3 -2 10.0.3.10 -p 21 --rand-source -i u1000
As explained previously, the IPS will be Snort Version 2.8.5 (Snort Team, 2009) and the rules used for the evaluation were released on the 23rd of October (VRT Team, 2009). As described in Section 4.2.1, a crafted rule has been added to Snort to detect the particular packets sent by Hping3. Rate filters have also been added as explained in Section 4.2.2.
The first step in the evaluation was to determine the maximum rate of legitimate traffic that Snort could handle. This has been done by sending the dataset across the DUT, increasing the speed until Snort started to drop packets; the result is 1 Mbps. Section 2.5.5 explains that an IPS should be tested with a rate of legitimate traffic at 80% of the maximum traffic, which means 800 Kbps. This speed has been measured with the system monitor of the IPS machine. Eleven rates of malicious traffic were used: 0 pps, 3 000 pps, 3 750 pps, 5 000 pps, 6 000 pps, 7 500 pps, 10 000 pps, 15 000 pps, 20 000 pps, 25 000 pps, and 30 000 pps. More data points are collected when the rate of attacking traffic is low because, as Section 5.2 will show, it is at these speeds that the behaviour of the DUT changes. For each experiment, Snort was first started. The background traffic plus the attacking traffic were then launched manually. After the experiment, Snort is stopped and the raw data for the metrics are recorded.
4.5.1 Test Bed Description
The implemented test bed is as described in Appendix A. The operating system used for each machine is Ubuntu 9.10 (Ubuntu, 2009); three of them are the attacking machines. This is enough to stress the system, as the results in Chapter 5 will show. The experimentation will be carried out in a real environment, which ensures the best accuracy. Table 8 describes the characteristics of each machine used for the project.
Machine             Operating System  CPU           Memory
IPS                 Ubuntu 9.10       P4, 3.20 GHz  512 MBytes
Tcpreplay machine   Ubuntu 9.10       P4, 3.20 GHz  512 MBytes
Attacking machines  Ubuntu 9.10       P4, 3.20 GHz  512 MBytes
Client machines     Ubuntu 9.10       P4, 3.20 GHz  512 MBytes
Server machines     Ubuntu 9.10       P4, 3.20 GHz  512 MBytes
Table 8: Specification of Machines
4.5.2 Metrics Implementation
Packets lost by destination: A sniffer will record the traffic that passes through the IPS on each side; it will then be compared with the attack-free results. This makes it possible to extract the percentage of legitimate traffic that has been dropped for each rate of attacking traffic. The rate filter for this experiment is configured as described in Section 4.2.2.
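The comparison against the attack-free baseline reduces to a simple percentage; a minimal sketch, with made-up packet counts:

```python
def percent_lost(baseline_packets: int, observed_packets: int) -> float:
    """Percentage of legitimate packets dropped, computed against
    the attack-free baseline capture."""
    if baseline_packets <= 0:
        raise ValueError("baseline capture is empty")
    return 100.0 * (baseline_packets - observed_packets) / baseline_packets

# Example: 10 000 legitimate packets in the attack-free capture,
# 9 200 observed under attack -> 8% of legitimate traffic dropped.
print(percent_lost(10_000, 9_200))  # 8.0
```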
Packets lost by source: For this metric, a modification must be made to the rate filter described in Section 4.2.2: the track by_dst attribute is changed to track by_src. When the rate of malicious traffic fires the rate filter, it starts to drop all packets destined for the target. The filter cannot distinguish legitimate traffic from malicious traffic because the IPS does not have the capacity to do so; measuring effectiveness in that situation is pointless. The by_src attribute tells Snort to keep a trace of each source IP address that passes through it. When the filter is fired, it starts to drop packets regardless of the destination. This filter could be used to prevent an attacker inside the network from attacking multiple targets in an attempt to overwhelm, for example, an edge router. This experiment will be the only one using this filter. The --rand-source flag should
not be used with Hping3 for this experiment: because the source IP address is random, the probability that two packets come from the same source is small, so the rate filter might never be triggered and the results would not be relevant. During the experimentation, for each speed, 60 packets using the same protocol as the attacking traffic will be sent. The interval between each packet will be one second. The tool used to do so is Hping3 and it will be run on a client from outside the network. The following line is used during the experimentation with the TCP protocol:
The metric used for this experiment will be the percentage of packets lost as displayed by Hping3.
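The difference between track by_dst and track by_src can be illustrated with a toy counter keyed on either address. This is a conceptual sketch only, not Snort's actual implementation (real rate filters also reset their counters every sampling interval):

```python
from collections import defaultdict

class ToyRateFilter:
    """Toy model of a rate filter: once more than `count` packets
    sharing the same tracking key have been seen, further packets
    for that key are dropped (Snort's new_action drop)."""

    def __init__(self, count: int, track: str = "by_dst"):
        self.count = count
        self.track = track
        self.seen = defaultdict(int)

    def allow(self, src: str, dst: str) -> bool:
        key = dst if self.track == "by_dst" else src
        self.seen[key] += 1
        return self.seen[key] <= self.count

flt = ToyRateFilter(count=2, track="by_src")
# A third packet from the same source is dropped...
print([flt.allow("1.2.3.4", "10.0.3.10") for _ in range(3)])  # [True, True, False]
# ...but a fresh spoofed source starts a new counter, which is why
# --rand-source would keep a by_src filter from ever firing.
print(flt.allow("5.6.7.8", "10.0.3.10"))  # True
```

In by_dst mode the same counter would instead be shared by every source aiming at one target, so the filter fires on the aggregate rate.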
Available Bandwidth: The tool Netperf 2.4.5 (Jones R., 2000) was used to calculate the available bandwidth that the DUT could offer under attack. The server was set up inside the network while the client was set up outside the network. This metric makes it possible to determine the capacity of the DUT to keep a good throughput while handling malicious traffic. When the IPS is not processing any other traffic, this experiment determines the throughput of the DUT. The formula to calculate the throughput of a device is the following:
Throughput = Window Size / Round Trip Time (Jones R., 2007). {Eq. 3}
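Eq. 3 can be checked with hypothetical numbers; the 64 KByte window and 50 ms RTT below are assumptions chosen only for illustration:

```python
def tcp_throughput_bps(window_bytes: int, rtt_seconds: float) -> float:
    """Eq. 3: maximum TCP throughput in bits per second, given the
    TCP window size and the round-trip time."""
    return window_bytes * 8 / rtt_seconds

# Hypothetical 64 KByte window over a 50 ms round trip:
mbps = tcp_throughput_bps(65_536, 0.050) / 1_000_000
print(round(mbps, 2))  # 10.49
```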
To calculate the throughput with Netperf, the TCP_STREAM test was used. This test involves the transfer of data between the client and the server; the transfer time is used to calculate the throughput (Jones R., 2007). The following line is the template used to launch the Netperf client:
Latency: The experiment will measure the RTT, which determines the latency. The tool used to measure the latency will be Ping (Muuss, 1983). The tool will send 60 packets across the IPS and take the average round-trip time of the packets. It is also able to compute the rate of lost packets. The following example shows how Ping will be used:
Time to respond: Section 4.2.2 showed that the different rate filters have attributes which specify the number of events that must occur within a particular time before the filter fires. The aim of this experiment is to evaluate whether the rate of malicious traffic has an impact on the number of packets needed to fire the filter. The number of packets that pass through the IPS before the filter takes action will be monitored with the sniffer Wireshark (Combs, 2006).
Reliability: This metric is evaluated by running the system for a long period. The time chosen for this experiment is seven hours. During this time, the background traffic will be replayed together with malicious traffic at a reasonable rate. This makes it possible to measure whether the DUT can handle a DDoS over a long period.
# hping3 -S 10.0.3.10 -p 21 -c 60
# netperf -H <IP address of the Netperf server>
# ping 192.168.1.100 -c 60
CPU Load: This represents the percentage of CPU used by the machine hosting the IPS. This metric evaluates the impact of attacking traffic on the physical resources of the system. The percentage of CPU load is obtained with the UNIX tool uptime.
Memory Load: This is the same experiment as for the CPU load. The UNIX tool used to measure the memory used by the system is called free.
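The load figure reported by uptime can be extracted automatically rather than read by hand; a small sketch (the sample line below is illustrative):

```python
import re

def load_average_1min(uptime_line: str) -> float:
    """Extract the 1-minute load average from an uptime-style line,
    e.g. '... load average: 0.52, 0.58, 0.59'."""
    match = re.search(r"load average:\s*([\d.]+)", uptime_line)
    if match is None:
        raise ValueError("no load average found")
    return float(match.group(1))

sample = " 14:02:11 up  7:03,  2 users,  load average: 0.52, 0.58, 0.59"
print(load_average_1min(sample))  # 0.52
```

Note that uptime reports load averages rather than a direct CPU percentage, so the figure still has to be interpreted relative to the machine's capacity.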
4.6 Conclusion
The aim of this thesis is to produce a methodology that could be used to evaluate the rate-based capacity of an IPS. The previous sections described the implementation of each tool used and part of the IPS configuration. Sommers, Yegneswaran, and Barford (2005) define some metrics specific to IDS, such as the resource metrics, that could also be used for the evaluation of an IPS. They also pointed out the importance of the rate of packet loss. The other metrics come from the testing methodologies used by TippingPoint (TippingPoint, 2008) and NSS Labs (NSS Labs, 2006) to carry out the evaluation of IPS. The former is a vendor, so they might have designed their experiments for their own devices, which would perform better with their methodology than with another. NSS Labs is specialised in the evaluation of such systems (NSS Labs, 2005) (NSS Labs, 2009) (NSS Labs, 2008) but they do not provide a complete testing methodology; only the metrics used could be found. The impact metrics come from both evaluation methodologies furnished by TippingPoint and NSS Labs, so they can be considered important. The response metrics have been developed specifically to evaluate Snort. They represent the rate of packet loss, evaluated with two different experiments. Table 9 shows the list of the different tools that will be used. The aim of this thesis is also to demonstrate the capacity of the methodology to evaluate a rate-based IPS. The following chapter will present the evaluation of the selected IPS and the results of this evaluation.
Program    Version              Purpose
Snort      2.8.5                Intrusion Prevention System capability
Wireshark  1.2.6                Determine the number of packets that reach a target
Hping3     hping3-20051105      Generate DDoS attacks
Tcpreplay  3.4.3                Background traffic generation
Ping       iputils-sss20071127  Measure the latency
Free       3.2.8                Determine the memory load
Uptime     3.2.8                Determine the CPU load
Table 9: Tools Summarised
CHAPTER 5
EVALUATION
5.1 Introduction
This chapter will present the evaluation of the rate-based Intrusion Prevention System presented in Chapter 4, using the test-bed design and implementation presented in Chapters 3 and 4 respectively. Section 5.2 will present and comment on the results of all experiments; they highlight the impact of attacking traffic on the capacity of the tested device. Section 5.3 will discuss these results and present the different findings, analysed against the results that can be found in Lo's (2009) thesis. The main finding is the breaking point of the system. Finally, Section 5.4 will conclude this chapter.
5.2 Results
Each experiment has been carried out as described in Section 4.5, which includes the part where the background traffic and attacking traffic are sent. After each experiment, the IPS was stopped and all data recorded for further analysis later on. In addition, the machine hosting the IPS was rebooted between experiments to ensure that the experiment environment is the same for each of them. It is possible that the testing itself has an impact on the DUT and so may bias the results. The first diagram, Figure 19, represents the CPU load in relation to the different attacking traffic speeds. The second, Figure 20, shows the results for the memory load. The data for this experiment were taken at the same time as the CPU load results. Figure 21 shows the available bandwidth results of the experiment. The latency results are represented in Figure 22, while the rate filter by destination experiment and the rate filter by source experiment are presented in Figures 23 and 24 respectively. In the results, an attacking traffic of 0 pps means that only the background traffic is crossing the IPS.
5.2.1 CPU Load and Memory Load Experiment
The first hypothesis for the CPU load and memory load is that the rate of attacking traffic will affect the IPS by using more and more of the system's resources. Table 10 and Table 11 detail the results for the CPU load and memory usage for the three protocols and the mix of the three.
Attacking traffic  ICMP  TCP  UDP  MIX protocols
0                  10%   10%  10%  10%
3 000 pps          11%   30%  16%  25%
3 750 pps          11%   34%  29%  28%
5 000 pps          15%   35%  45%  33%
6 000 pps          16%   58%  54%  49%
7 500 pps          37%   56%  59%  53%
10 000 pps         40%   55%  59%  49%
15 000 pps         40%   55%  52%  45%
20 000 pps         39%   52%  58%  52%
25 000 pps         39%   50%  64%  63%
30 000 pps         39%   48%  66%  67%
Table 10: CPU Load Detailed Results
Attacking traffic  ICMP        TCP         UDP         Mix protocols
0                  315 MBytes  315 MBytes  315 MBytes  280 MBytes
3 000 pps          325 MBytes  325 MBytes  325 MBytes  290 MBytes
3 750 pps          325 MBytes  338 MBytes  342 MBytes  295 MBytes
5 000 pps          326 MBytes  338 MBytes  392 MBytes  298 MBytes
6 000 pps          328 MBytes  337 MBytes  288 MBytes  300 MBytes
7 500 pps          330 MBytes  338 MBytes  288 MBytes  301 MBytes
10 000 pps         340 MBytes  339 MBytes  288 MBytes  317 MBytes
15 000 pps         342 MBytes  337 MBytes  288 MBytes  315 MBytes
20 000 pps         340 MBytes  337 MBytes  287 MBytes  315 MBytes
25 000 pps         340 MBytes  336 MBytes  288 MBytes  316 MBytes
30 000 pps         339 MBytes  335 MBytes  289 MBytes  318 MBytes
Table 11: Memory Load Detailed Results
Figure 19: CPU Load Results
As expected, the CPU load depends on the traffic rate that the DUT has to process. The ICMP protocol is the one that needs the fewest resources because it stayed at 39-40% of CPU usage, while UDP consumed up to 66% of CPU usage. The ICMP echo-request packets sent are really small and so need less processing time for the IPS to take a decision about them. For this protocol, when the rate of 6 000 pps is reached, the CPU load stays stable (around 40%). For the TCP protocol, the CPU load decreases once the rate of attacking traffic passes 6 000 pps, from 58% down to 48%. On the other hand, the UDP protocol never stops increasing its CPU usage. The TCP and ICMP results suggest that some optimisation might be done by Snort in the processing of these packets. The mix protocols experiment follows the UDP experiment, which supports the idea that the UDP protocol needs more resources than the others.
Figure 20: Memory Load Results
Even if the results show an increase of the memory load as expected, the results are not significant because the increase is on the order of 20 MBytes for each protocol. Moreover, the results for the UDP protocol are strange. This might be because the experiment is run on a Linux machine and it is not possible to know exactly how the system manages its memory.
5.2.2 Available Bandwidth Experiment
The available bandwidth experiment measures how much traffic the IPS can still handle. For this experiment, the hypothesis is that the available bandwidth starts from around the maximum throughput and, as the rate of malicious traffic becomes more important, approaches zero. Table 12 presents the detailed results for this experiment.
Attacking traffic  ICMP       TCP        UDP        MIX protocols
0                  2.47 Mbps  2.47 Mbps  2.47 Mbps  2.47 Mbps
3 000 pps          2.32 Mbps  2.28 Mbps  2.31 Mbps  2.03 Mbps
3 750 pps          1.99 Mbps  1.82 Mbps  1.81 Mbps  1.66 Mbps
5 000 pps          1.33 Mbps  1.64 Mbps  1.51 Mbps  1.5 Mbps
6 000 pps          1.27 Mbps  1.24 Mbps  1.07 Mbps  1.48 Mbps
7 500 pps          0.81 Mbps  1.08 Mbps  0.95 Mbps  0.78 Mbps
10 000 pps         0.4 Mbps   0.58 Mbps  0.5 Mbps   0.49 Mbps
15 000 pps         0.01 Mbps  0.04 Mbps  0          0.01 Mbps
20 000 pps         0.01 Mbps  0.02 Mbps  0          0.01 Mbps
25 000 pps         0          0          0          0
30 000 pps         0          0          0          0
Table 12: Available Bandwidth Detailed Results
Figure 21: Available Bandwidth Results
This experiment was also done when the DUT was not in the system and without background traffic, to measure the impact of the test-bed on the results. When the IPS is not set up in the network, the available bandwidth is 8.05 Mbps. When the IPS is set up in the test-bed, the available bandwidth is 7.86 Mbps. We can conclude that the test-bed does not bias the results for this experiment because the results are about the same. When only the background traffic is crossing the DUT, the available bandwidth is 2.47 Mbps.
The results obtained from the experiments with attacking traffic correspond to the expected results. All protocols have the same results with some slight differences. The UDP protocol has the poorest result of the experiment. When the attacking traffic reaches a rate of 15 000 pps, the available bandwidth is near zero, and it is at zero when the number of packets per second is 25 000. When the rate of malicious traffic reaches 6 000 pps, the available bandwidth drops to half of its capacity compared to when only background traffic was crossing the IPS.
5.2.3 Latency Experiment
The aim of this experiment is to evaluate the influence of high rates of malicious traffic on the latency of the network. With many packets to handle, the DUT will have trouble receiving them, analysing them, taking a decision, and sending them back to the network or dropping them. Therefore, when the rate of attacking traffic is high, the latency will be high too. Section 4.5.2 defines that this experiment will last 60 seconds; however, network latency is calculated over a shorter window, so it is often expressed in milliseconds. Latency is calculated by the following formula:
Latency = RTT / 2 {Eq. 4}
The Round-Trip Time (RTT) represents the time needed by a packet to go from one point to another and to come back. Table 13 shows the detailed results for this experiment.
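Applied to Ping's output, Eq. 4 halves the average of the per-packet round-trip times; a sketch with made-up RTT samples:

```python
def one_way_latency_ms(rtt_samples_ms):
    """Eq. 4: one-way latency estimated as half of the average
    round-trip time reported by Ping."""
    average_rtt = sum(rtt_samples_ms) / len(rtt_samples_ms)
    return average_rtt / 2

# Three made-up RTT samples averaging 4.4 ms -> about 2.2 ms one way.
print(one_way_latency_ms([4.0, 4.4, 4.8]))
```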
Attacking traffic  ICMP       TCP        UDP        Mix protocols
0                  2.219 ms   2.219 ms   2.219 ms   2.219 ms
3 000 pps          2.965 ms   2.91 ms    3.535 ms   2.662 ms
3 750 pps          7.228 ms   5.557 ms   6.925 ms   4.587 ms
5 000 pps          14.727 ms  12.213 ms  15.895 ms  3.074 ms
6 000 pps          25.594 ms  21.891 ms  22.359 ms  25.26 ms
7 500 pps          69.06 ms   75.7 ms    73.894 ms  62.668 ms
10 000 pps         69.705 ms  66.364 ms  75.789 ms  68.17 ms
15 000 pps         74.393 ms  60.176 ms  72.745 ms  61.529 ms
20 000 pps         73.9 ms    64.569 ms  73.514 ms  64.612 ms
25 000 pps         71.893 ms  65.316 ms  71.934 ms  72.672 ms
30 000 pps         71.638 ms  66.91 ms   70.567 ms  74.345 ms
Table 13: Latency Details Results
Figure 22: Latency Results
The results for this experiment are consistent with the hypothesis: the latency grows with the rate of attacking traffic. As for the available bandwidth, the experiment was also conducted without the IPS and without background traffic to evaluate the impact of the test-bed on the results. Without the IPS, the latency is 0.4105 ms, and with the IPS the result is 0.7435 ms. It is possible to conclude that the test-bed does not bias the results because of its small impact on the experiment. The latency when only the background traffic is crossing the DUT is 2.219 ms. The latency is then roughly multiplied by two between each measurement. When the rate of 6 000 pps is reached, the latency for all protocols is in the area of 23 ms, and when the attacking traffic displays a rate of 7 500 pps, the latency is around 71 ms for all protocols. After this important increase, the latency stays stable for all protocols. This suggests that Snort might perform some optimisation when the latency becomes too important: the DUT might stop using some features to concentrate all its resources on the forwarding of packets.
5.2.4 Rate Filter by Destination Results
This experiment measures the impact of attacking traffic on the number of packets lost when the rate filter is set up in destination mode. This represents the overall packet loss of the system. The hypothesis is that the percentage of packets lost will follow the increase of malicious traffic. Table 14 presents the detailed results of the experiment:
Attacking traffic  ICMP  TCP  UDP  MIX protocols
0                  0%    0%   0%   0%
3 000 pps          0%    0%   0%   0%
3 750 pps          0%    0%   0%   0%
5 000 pps          0%    0%   0%   0%
6 000 pps          8%    5%   3%   5%
7 500 pps          23%   21%  22%  21%
10 000 pps         43%   31%  38%  43%
15 000 pps         51%   53%  55%  56%
20 000 pps         59%   72%  67%  60%
25 000 pps         66%   77%  75%  68%
30 000 pps         78%   85%  81%  73%
Table 14: Rate Filter by Destination Results Detailed
Figure 23: Rate Filter by Destination Results
The results of this experiment are consistent with the expected results. The number of dropped packets grows as the number of malicious packets becomes more important. Once packets start to be dropped, the progression is fast. Packets start to be dropped after 5 000 attacking packets per second. After 15 000 pps, the dropped traffic represents more than 50% of the traffic. For this experiment, the TCP protocol has the poorest results. This might be explained by the time required to analyse a TCP packet, because Snort has to relate each TCP packet to the other packets of the same TCP connection.
5.2.5 Rate Filter by Source Results
This experiment evaluates the capacity of Snort to drop only illegitimate traffic when the filter is in source mode. This experiment was not done with the mixed protocols because it would be the equivalent of dividing the attacking traffic by three, which would not be relevant. The expected result is that no legitimate packets are dropped when the rate of attacking traffic is low. However, when the rate of malicious traffic becomes more and more important, the percentage of dropped packets will also be higher. Table 15 shows the detailed results of the experiment:
Attacking traffic  ICMP  TCP  UDP
0                  0%    0%   0%
3 000 pps          0%    0%   0%
3 750 pps          0%    0%   0%
5 000 pps          1%    2%   3%
6 000 pps          4%    5%   7%
7 500 pps          26%   25%  23%
10 000 pps         33%   37%  35%
15 000 pps         55%   60%  58%
20 000 pps         62%   70%  65%
25 000 pps         69%   74%  78%
30 000 pps         77%   84%  82%
Table 15: Rate Filter by Source Results Detailed
Figure 24: Rate Filter by Source Results
The results of this experiment confirm the expected results. The number of legitimate packets dropped becomes more important as the number of attacking packets per second increases. For all protocols, the first packets are dropped when the rate of malicious traffic is 5 000 pps. When this rate is multiplied by three, more than 50% of the legitimate traffic is dropped. For each protocol, the rate
of dropped traffic is low until 6 000 pps (around 5%) and the percentage of dropped traffic is multiplied by five when the attacking traffic grows by a further 1 500 pps. As for the previous experiment, the three protocols have roughly the same results. UDP and TCP have the poorest results, as previously. Because ICMP packets are smaller and need fewer resources to be processed, this might be why this protocol has the best results.
5.2.6 Time to Respond Experiment
This experiment measures the number of packets that Snort lets through before a rate filter is triggered. The required number of packets is specified as a parameter of the rate filter; this parameter has been described in Section 4.2.2. The hypothesis for this experiment is that when the rate of attacking packets is important, Snort might let more packets pass before triggering the rate filter than expected. The experiment showed that the number of packets passing through the IPS before the rate filter is triggered was each time the expected number. The results of this experiment are not displayed because they show no variation of interest.
5.2.7 Reliability Experiment
The last experiment concerns reliability. For seven hours, the DUT was under attack with a traffic rate of 6 000 pps. During this time it was expected that the system or Snort might crash, but it did not. At the end of the seven hours of testing, the system was still running perfectly.
5.3 Analysis
The evaluation of the methodology shows that it is possible to generate background traffic along with attacking traffic. The speed of the generated background traffic was chosen as described in the literature review. The maximum throughput of 1 Mbps that ensures no loss of data is far lower than the results found by Lo (2009). His experiments found that Snort could handle traffic of 60 Mbps without dropping any packets, in a virtual environment and with Snort running as an IDS (Version 2.7.0). This shows the huge difference in the capacity of Snort to handle traffic when it is running as an IPS. The variable rate of attacking traffic makes it possible to compare the results between the resource metrics and the detection metrics.
The results of the CPU load experiment clearly show the impact of the attacking traffic on the consumption of the system's resources. Snort starts to drop packets when the CPU load is 16% for the ICMP protocol, 58% for the TCP protocol, 54% for the UDP protocol, and 49% for the mix of the three protocols. When the percentage of packets lost grows, the CPU load for the ICMP protocol becomes stable at 39%; for the TCP protocol it decreases to 48%; only the UDP and mix protocols continue to use more resources, up to 66% and 67% respectively. When Snort is running as an IDS, it starts to drop packets when 40% of the CPU is used, and when 70% is reached the number of lost packets is huge (Lo, 2009). We can conclude that the reason why Snort starts to drop packets is not a lack of resources but the design of Snort itself.
Regarding the two rate filter experiments (by source and by destination), they report that Snort begins to drop packets when the attacking traffic reaches 6 000 pps. When the rate filter is in destination mode, 8% of packets are dropped when the attack uses the ICMP protocol, 5% for the TCP protocol, 3% for the UDP protocol, and 5% when the three protocols are used. When the attacking traffic reaches the rate of 30 000 pps, the packet loss is 78% for the ICMP protocol, 85% for the
TCP protocol, 81% for the UDP protocol, and 73% for the mix of the three protocols. When the rate filter is watching source IP addresses, the results are the same with some slight differences. When the rate of malicious packets is 6 000 pps, the rates for the ICMP, TCP and UDP protocols are respectively 4%, 5% and 7%. For the rate of 30 000 pps, the results are respectively 77%, 84% and 82%. It is possible to conclude that the breaking point of Snort occurs when the attacking traffic is 6 000 pps, with legitimate traffic at 80% of the maximum throughput of Snort. Salah and Kahtani show that Snort (Version 2.8.1) running as an IDS has a breaking point when the malicious traffic reaches a rate of 40 Kpps.
As Lo (2009) shows, the background traffic generates false alerts when Snort is using the VRT rule set. This might be explained by one of the rules used for this project: rule 469 generates an alert for each ICMP packet that crosses Snort. This type of packet is not necessarily malicious, so if the DARPA data set contains ICMP echo-request packets it will generate false positives. This might have an impact on the performance of the IPS because it takes time to record alerts.
5.4 Conclusion
The produced results show that Snort possesses effective mechanisms to handle Distributed Denial of Service attacks. They also highlight the main weakness of Snort, which is the rate of packets lost. When the system reaches 16%, 58% or 54% of CPU load, depending on the protocol used for the DDoS, Snort starts dropping more and more packets. These loads correspond to the attacking traffic speed of 6 000 pps, which is the breaking point of the IPS. When this rate is reached, Snort can no longer ensure that legitimate users will still have access to the services of the trusted network, nor that no malicious packet will reach its target. The system also offers good reliability because it could handle an attack lasting seven hours without any problem. It is important to note that during this experiment no other information was collected, so it is possible that the number of packets dropped increases with the time of use.
After the analysis of these results, it is possible to conclude that Snort could be used to protect a network where the traffic will never be greater than 1 Mbps. However, this network should also not be a possible target of DDoS attacks using rates higher than 6 000 pps. When this threshold is exceeded, legitimate users might have to endure a denial of service, which was the aim of the attackers. Finally, it is important to note that the lack of literature about the evaluation of Snort as an IPS forced the use of literature evaluating Snort only as an IDS.
CHAPTER 6
CONCLUSION
6.1 Introduction
The aim of this project was to produce a methodology to objectively evaluate a NIPS. Snort is the NIPS that was chosen to be evaluated. The previous chapter discussed the results produced by the evaluation and showed that the methodology permits the evaluation of the chosen NIPS, thus meeting the main aim of this thesis.
Section 6.2 will present how the objectives of this thesis were met. The following section (Section 6.3) contains a critical analysis of the produced methodology. Section 6.4 will discuss the project management and some difficulties encountered, along with how they were overcome. The last section (Section 6.5) presents some future work that could be undertaken in this area.
6.2 Appraisal Achievement
The main aim of this thesis was to produce a methodology to evaluate Intrusion Prevention Systems. The main steps that had to be achieved to meet this aim are as follows:
Objective 1: Produce a literature review, which furnishes the needed background.
Objective 2: Investigation of IPS systems, their performance impact, and evaluation
methodologies.
Objective 3: Investigation of evaluation tools for traffic playback to create a test-bed.
Objective 4: Design a range of experiments for the evaluation.
Objective 5: Implementation of evaluation test-bed and Device Under Test (DUT).
Objective 6: Evaluation results of the system selected with the test-bed implemented.
6.2.1 Objective 1
The first objective was to produce a literature review giving a better understanding of the state of network security. Section 2.1, Section 2.2, and Section 2.3 met this objective. It appears that security is not taken seriously enough by companies; many of them do not possess specialised staff to ensure the security of the company network (Gollman, 2006). This is critical because security issues are increasingly common. These issues include data theft, denial of service, loss of data, financial loss, and even damage to corporate image. The best way to protect a network against all these threats is by using tools such as firewalls, IDSs, or IPSs (Lerace, Urrutia, & Bassett, 2005). The combination of these tools permits better protection of the overall system, because each of them is specialised in a particular security area. The literature review shows that prevention systems are being used more and more often.
6.2.2 Objective 2
Section 2.4 met the second objective, which was to investigate current Intrusion Detection and Prevention Systems (Carter & Hogue, 2006). The literature review shows that IPSs are increasingly popular because of their capacity to respond in real time, while an IDS can only record alerts (Rowan, 2007). The review also shows that IDSs and IPSs have points in common, such as the detection engine, which in both systems uses a database of rules to detect malicious packets. With an IDS solution, a threat cannot be handled until someone acts on the alert. The risk is that the system could be infected with a virus, or legitimate users could lose access because of a Distributed Denial of Service, before any action is taken. As described before, companies do not spend enough time on the security management of their networks. An IDS produces alerts, so specialised employees have to review these alerts and take the corresponding actions; if nobody does this, an IDS is completely useless for the security of the company. This is why an IPS might be preferred. Some vendors understood that companies do not want to lose time securing the network, so they produce IPSs ready to use “out of the box” (NSS Labs, 2008). These IPSs possess everything they need to be operational as soon as they are connected to the network: the IPS requests its own updates, and only a small amount of configuration is needed to set it up properly. These kinds of systems are easy to use but too rigid; it is hard or even impossible to customise them.
6.2.3 Objective 3
The third objective was met by Section 2.5, which showed the different methods used to evaluate IDSs and IPSs. This thesis focused on IPSs, but because of the lack of documentation on methodologies for evaluating such systems, existing methodologies for evaluating IDSs were also analysed. The literature review showed that IDSs and IPSs share components such as the detection engine, which is why parts of IDS evaluation methodologies can be carried over to IPS evaluation (NSS Labs, 2006) (Fink, O'Donoghue, Chappell, & Turner, 2002). The literature review described the differences between online and offline testing. Online testing offers the most realistic results because it is done with real traffic. The offline method cannot offer that, but it is easier to perform because there is no risk of crashing a system running in production. Offering offline evaluation with realistic traffic was the aim of the DARPA project (DARPA, 1998), which produced data sets that can be used to perform IDS evaluation by providing realistic background traffic. This approach can also be used in the evaluation of IPSs, because they too need realistic background traffic. Even if the DARPA data set is no longer realistic enough, it remains the best available solution for generating realistic traffic (Corsini, 2009). To generate this background traffic, a tool called Tcpreplay was used. It permits the data set to be rewritten to fit the test-bed and replayed at a selected speed (Corsini, 2009). The methodologies of black-box and white-box testing were also investigated. The black-box testing methodology was chosen, although the selected IPS is open source and well documented, so white-box testing could have been selected. The reason black-box evaluation was preferred is that not all IPSs are open source, and the evaluation methodology should fit every IPS.
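The rewrite-then-replay workflow described above can be sketched with the Tcpreplay suite's documented tools (tcpprep, tcprewrite, tcpreplay). The interface names, addresses, and file paths below are illustrative assumptions, not the exact values used in this thesis; the commands are built as argument lists so they can be inspected or handed to a process launcher, rather than executed here.

```python
# Hypothetical helpers that assemble Tcpreplay-suite command lines for
# replaying a DARPA-style data set across a two-interface test-bed.

def tcpprep_cmd(pcap, cache):
    """Split the capture into client/server halves (tcpprep --auto=bridge)."""
    return ["tcpprep", "--auto=bridge", f"--pcap={pcap}", f"--cachefile={cache}"]

def tcprewrite_cmd(infile, outfile, cache, client_ip, server_ip):
    """Rewrite the trace's IP endpoints so it fits the test-bed addressing."""
    return ["tcprewrite", f"--infile={infile}", f"--outfile={outfile}",
            f"--cachefile={cache}", f"--endpoints={client_ip}:{server_ip}"]

def tcpreplay_cmd(pcap, cache, intf1, intf2, pps):
    """Replay both directions of the trace at a chosen packet rate."""
    return ["tcpreplay", f"--intf1={intf1}", f"--intf2={intf2}",
            f"--cachefile={cache}", f"--pps={pps}", pcap]

# Example: replay the rewritten trace at 1 000 pps across eth0/eth1.
cmd = tcpreplay_cmd("darpa.pcap", "darpa.cache", "eth0", "eth1", 1000)
```

Splitting the trace with a cache file is what allows traffic to be emitted from both interfaces at once, which Section 6.2.5 notes is a necessity for IPS evaluation.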
6.2.4 Objective 4
This objective was met in Section 2.5.2, where the different objective metrics were explained. These metrics can be classified into three areas: resource metrics, response metrics, and impact metrics. Each of these categories contains metrics, and each metric represents an independent experiment that should be carried out separately; only the CPU load and memory load should be measured together. These metrics are as objective as possible because they do not take the design of the tested IPS into account. The response metrics are somewhat particular on this point: two experiments are run for them, representing two different configurations of the Snort IPS, and these configurations are not necessarily present in every other IPS. The results of these two experiments represent the capacity of the IPS to drop only illegitimate traffic, while on another IPS only one experiment would be needed to obtain these results. Because of this, it is possible to conclude that the methodology can be used to evaluate any other IPS by adapting only this part of it.
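The three metric categories above ultimately reduce to ratios computed from raw experiment counts. The sketch below shows one plausible way to derive two of them; the function and field names are illustrative assumptions, since the thesis gathers these values by hand.

```python
# Hypothetical reduction of raw counts into the thesis's metric categories.

def packet_loss_rate(sent, received):
    """Resource/impact metric: fraction of packets the DUT dropped."""
    return (sent - received) / sent if sent else 0.0

def blocking_accuracy(malicious_blocked, malicious_sent,
                      legitimate_blocked, legitimate_sent):
    """Response metric: how much attack traffic is stopped, versus how
    much legitimate traffic is wrongly dropped."""
    return {
        "malicious_block_rate": malicious_blocked / malicious_sent,
        "legitimate_block_rate": legitimate_blocked / legitimate_sent,
    }

loss = packet_loss_rate(100, 84)            # 0.16 with these counts
acc = blocking_accuracy(5900, 6000, 30, 1000)
```

A good rate-based IPS would show a high malicious block rate and a legitimate block rate near zero; measuring the two separately is what lets the methodology stay objective across different IPS designs.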
6.2.5 Objective 5
This objective was met by Chapter 3, which described the design of the test-bed, and Chapter 4, which described its implementation. Even though the literature review explained the importance of real traffic for the evaluation, it was not possible to use realistic traffic. As explained before, the tool Tcpreplay was used to replay a data set from DARPA. The set-up of the background traffic was not easy, but it is capable of generating traffic in both directions, which is a necessity when evaluating an IPS. The difficulties encountered were due to the lack of documentation on the subject; thankfully, Grave and Saliou created a good tutorial, which is reproduced in the thesis of Corsini (Corsini, 2009). The test-bed also involved setting up an FTP server for the evaluation, and the attacks were generated with the tool Hping3. The tool permits a DDoS attack to be simulated by spoofing the source address, but this is not entirely realistic because the same IP address might never be seen more than once, whereas in a real DDoS attack the attackers use a botnet of computers and each of those computers sends more than one attacking packet.
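The Hping3 invocations behind the four attack scenarios can be sketched as below, using hping3's documented options (--rand-source for per-packet source spoofing, -i uN for a microsecond send interval, -S/--udp/--icmp for the protocol). The target address and port are illustrative assumptions. With --rand-source every packet carries a different spoofed source, which is exactly the unrealistic botnet behaviour noted above.

```python
# Hypothetical builder for the hping3 command line of one DDoS scenario.

def hping3_cmd(target, protocol, pps, port=21):
    """Return an hping3 argument list sending `pps` packets per second."""
    interval = f"u{1_000_000 // pps}"     # microseconds between packets
    proto_flags = {
        "tcp": ["-S", "-p", str(port)],   # TCP SYN flood
        "udp": ["--udp", "-p", str(port)],
        "icmp": ["--icmp"],
    }[protocol]
    return ["hping3", "--rand-source", "-i", interval, *proto_flags, target]

# Example: the breaking-point rate of 6 000 pps against the FTP server.
cmd = hping3_cmd("192.168.1.10", "tcp", 6000)
```

The blended scenario of the thesis would correspond to running three such commands concurrently, each at one third of the total rate.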
6.2.6 Objective 6
The last objective was the evaluation of the results obtained during the testing of the selected IPS. The methodology showed that it could generate background traffic along with attacking traffic at variable rates. It permitted Snort to be stressed up to its breaking point, which is 6 000 pps of malicious packets, and the results revealed the correlation between the CPU load, the attacking traffic rate, and the packet loss rate. The time-to-respond experiment did not produce relevant results, which highlights that the design of this experiment is not good enough: it is important to know how many malicious packets the IPS needs to see before it starts its response procedure. Therefore, it is possible to conclude that the majority of this objective has been met.
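The notion of a breaking point used above can be made precise as the lowest attack rate at which the drop rate exceeds an acceptable threshold. The sample figures below are invented for illustration; only the 6 000 pps breaking point itself comes from the thesis's results.

```python
# Hypothetical reading of the breaking point from (attack rate, drop rate)
# measurements, as produced by the stress experiments.

def breaking_point(samples, threshold=0.05):
    """samples: iterable of (attack_pps, packet_drop_rate) pairs, any order.
    Returns the lowest rate whose drop rate exceeds `threshold`."""
    for pps, drop in sorted(samples):
        if drop > threshold:
            return pps
    return None  # the DUT never exceeded the threshold

samples = [(1000, 0.0), (2000, 0.01), (4000, 0.03), (6000, 0.12), (8000, 0.35)]
point = breaking_point(samples)  # 6000 with these invented figures
```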
6.3 Critical Analysis
Even if the majority has been met, a critical analysis of the produced methodology should be done
regarding the weaknesses and the strength of the methodology. The different methodologies
discussed in the literature review will be used for this purpose. Different limitations should be
acknowledged regarding this methodology. The first one is about the background traffic generation.
The literature review showed that the current methodologies did not offer ways to produce realistic
traffic. The DARPA data set has been chosen in knowledge of its lack to look realistic. When the
background traffic and attacking traffic were generated these both traffics were not blended
together by an aggregator. This is the switch, which does this capability, but because it is not
possible to have control on it; it is not possible to know how the switch is blending the traffics.
Another limitation in this methodology was the number of different Distributed Denial of Services;
they were only four. In real life the numbers of different DDoS is huge and have technically no limits
because it could be a blend of other DDoS attack (Mirkovic, Dietrich, Dittrich, & Reiher, 2005).
Evaluated a system against a huge number of DDoS will permit to evaluate against which one tested
device is better than another. The results showed that the protocol used in a DDoS attack have an
FLAVIEN FLANDRIN Honours Project
A METHODOLOGY TO EVALUATE RATE-BASED INTRUSION PREVENTION SYSTEM AGAINST DISTRIBUTED DENIAL OF SERVICE
63
impact on the system. In addition, as described in the objective, one of the experiments was to
specify for the tested IPS.
The methodology also has strengths not present in other evaluation methodologies. One of them is the set of metrics used for the evaluation: the response capacity of the IPS is directly taken into account, which appears to be new. Also, even though all the metrics used for the evaluation of an IPS come from different literature on IDS or IPS evaluation, they appear never to have been combined into a single evaluation methodology using the maximum number of metrics. The methodology was developed to be as objective as possible, so that the evaluation of different devices is possible. The thesis also produced a benchmark of the rate-based traffic mitigation capacity of Snort, which seems to be the first time such an evaluation has been carried out.
6.4 Personal Reflection
The current thesis describes the project that I carried out during the fourth year of my programme. I chose this subject because I was interested in the Intrusion Detection technologies that I had discovered in the Computer Security and Forensics module. This module showed me the importance of understanding this technology. Many students had already done projects around Intrusion Detection technologies, so it was not interesting for me to study that subject. During the module, Intrusion Prevention technologies were briefly described; they attracted my attention and are from the same area as Intrusion Detection technologies. After a short investigation, I saw that nearly no research existed in this area. Therefore, I decided to do my honours project on Intrusion Prevention technologies.
During the research into IPSs, I discovered that this topic was much more complex than initially expected. When analysing multiple research papers, it was hard at the beginning to understand everything because of the complexity of the subject. This was an obstacle to selecting a project, because the methodologies used by researchers in the evaluation of IPSs were hard to understand. Moreover, one of the major problems I encountered was the lack of documentation on the selected topic. This was overcome by studying methodologies created for IDS evaluation and identifying the parts that could be used for the evaluation of IPSs. After a while, the background reading allowed me to better understand articles that were previously too hard to follow. This research allowed me to meet the first three objectives and part of the fourth objective.
During the design stage, other difficulties occurred. One of them was doubt over whether the test-bed and experiment designs were good enough to evaluate the IPS. This is why they were implemented and then tested, and the design was improved based on the results of the previous tests, so as to tend towards the best possible design methodology.
When the implementation stage arrived, two main problems were faced. The first arose because the test-bed was set up in a real environment: the devices to be used were accessible only in one room, so when another person was using them it was not possible to conduct the experiments. This resulted in lost time, and some experiments were postponed, which delayed the end of the experimentation. The other problem I encountered in the implementation was with the use of Tcpreplay. During the first tests, only part of the data set was replayed (for example, the server traffic only). This was due to a mistake made when rewriting the data set, so the packets were not routed properly. It was overcome once the tutorial of Grave and Saliou was found (Corsini, 2009).
The last part of this analysis concerns the project management. At the beginning of the project, I made a Gantt chart planning the different steps of the project. Throughout the project, attempts were made to ensure that it followed the planned schedule. During parts of the year, it was harder to keep to the plan because of other commitments. In the majority of cases, these commitments were known in advance, so precautions were taken to ensure that the plan could still be followed. Even with all these efforts the plan was not completely followed, but I made every effort to continue the weekly meetings with my supervisor to keep up the workload. Appendix H shows the Gantt chart of the project, while Appendix F and Appendix G contain all the meeting diary sheets.
6.5 Future Work
IPSs are increasingly popular, which means that more research should be done on the topic to improve this area of security. Relevant areas would be the capacity to respond to a threat, but also how to test and evaluate these devices. The produced methodology does not cover the full evaluation of an IPS, but only IPSs that possess the rate-based traffic mitigation capability.
The possibilities for improving the methodology are vast. One of the first improvements would be the addition of other DDoS attack scenarios with which to evaluate IPSs. The attack set of the methodology was composed of four different DDoS attacks: one for each main IP protocol (TCP, UDP, and ICMP) and a blend of these three at a rate of one third for each protocol. Another improvement would be to modify the methodology so that the response metrics apply to all IPSs and not only to Snort. The methodology could also be extended to evaluate other parts of the IPS, not only its rate-based traffic mitigation. This would add new metrics proper to IDSs, such as the detection metrics (Sommers, Yegneswaran, & Barford, 2005), which are not defined in this thesis.
Another area of work would be the creation of an automated framework using this methodology. In his thesis, Lo created an automated framework to evaluate IDSs: the background traffic generation, attacking traffic generation, and data retrieval for the metrics were all automated, and a GUI was added to the framework to make it easier to use (Lo, 2009). The current methodology obliges the user to launch the attacks machine by machine and to retrieve data for the metrics by hand. This is really time consuming, and errors might be made during these processes. Transposing the same idea to IPSs would permit a huge range of devices to be evaluated objectively and easily.
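The automation idea above amounts to a loop over attack scenarios that launches each one and gathers its metrics. A minimal sketch, in which every name is a hypothetical placeholder rather than part of Lo's framework or this thesis:

```python
# Hypothetical skeleton of an automated IPS evaluation campaign: the
# launch/collect callables would wrap the real attack tools and the
# manual metric retrieval steps.

def run_campaign(scenarios, launch, collect):
    """Run each attack scenario and gather its metrics automatically.

    scenarios: list of scenario descriptors (e.g. dicts of protocol + rate)
    launch:    callable that starts one attack scenario on the test-bed
    collect:   callable that returns the metrics recorded for that scenario
    """
    results = {}
    for scenario in scenarios:
        launch(scenario)
        results[scenario["name"]] = collect(scenario)
    return results

# Dry run with stub callables standing in for the real test-bed:
scenarios = [{"name": "tcp-syn", "pps": 6000}, {"name": "udp", "pps": 6000}]
results = run_campaign(scenarios, launch=lambda s: None,
                       collect=lambda s: {"drop_rate": 0.0})
```

Even this small amount of structure removes the per-machine manual steps that make the current process slow and error-prone.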
Other experiments could also be carried out, such as evaluating the number of malicious packets that pass through the IPS when it is stressed. This would permit evaluation of its capacity to handle malicious traffic even under stress, and determination of whether the IPS becomes an open switch (all packets pass through without being analysed) or a closed switch (all packets are dropped without being analysed). As the results show, the time-to-respond experiment did not produce relevant results; further work might therefore be to modify the design of this experiment so that the IPS can be evaluated under this criterion.
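The open-switch/closed-switch distinction above could be operationalised from packet counts taken while the DUT is stressed. The thresholds and counts below are invented for illustration; this is a sketch of the proposed experiment, not a measurement from the thesis.

```python
# Hypothetical classifier for how an overloaded IPS fails, given packet
# counts observed on either side of the device during a stress test.

def stress_behaviour(malicious_passed, malicious_sent,
                     legitimate_passed, legitimate_sent):
    """Classify failure mode from the overall pass-through rate."""
    pass_rate = ((malicious_passed + legitimate_passed)
                 / (malicious_sent + legitimate_sent))
    if pass_rate > 0.95:
        return "open switch"     # forwards nearly everything unanalysed
    if pass_rate < 0.05:
        return "closed switch"   # drops nearly everything
    return "degraded analysis"   # still filtering, but lossy

mode = stress_behaviour(5800, 6000, 980, 1000)  # "open switch" here
```

Fail-open behaviour lets attacks through; fail-closed behaviour completes the attackers' denial of service for them, so either outcome is worth measuring.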
The methodology evaluates the IPS Snort under only one Linux distribution. Evaluating Snort under different flavours of UNIX and Windows would be interesting in order to determine the best platform for Snort. Finally, this methodology was developed with the aim of permitting an objective evaluation of IPS systems so that different systems can be compared. To do so, it would be interesting to evaluate different IPSs with this methodology, which would in turn permit the methodology itself to be evaluated.
7 REFERENCES
Ahmed, M. (2009, March 31). Mystery computer virus Conficker threatens to wreak havoc. Retrieved April 2, 2010, from TimesOnline: http://technology.timesonline.co.uk/tol/news/tech_and_web/article6005567.ece
Alder, R. e. (2007). Syngress How to Cheat at Configuring Open Source Security Tools. USA: Syngress
Publishing.
Alessandri, D. (2004). Attack-Class-Based Analysis of Intrusion Detection Systems. Ph.D. Thesis.
Allen, J., Christie, A., Fithen, W., McHugh, J., Pickel, J., & Stoner, E. (2000). State of the Practice of
Intrusion Detection Technologies. Pittsburgh: Carnegie Mellon University.
Allman, M., Bennett, M., Lee, J., Pang, R., Paxson, V., & Tierney, B. (2004, October 04). Enterprise
Traces. Retrieved February 26, 2010, from Bro-IDS.org: http://bro-ids.org/enterprise-traces/hdr-
traces05/
Bächer, P., Holz, T., Kötter, M., & Wicherski, G. (2008, October 08). Know your Enemy: Tracking
Botnets. Retrieved October 27, 2009, from Honeynet.org: http://www.honeynet.org/papers/bots/
Barry, D. J. (2009). Special problems of securing 10Gbps networks. Network Security , 8-12.
BBC. (2009, May 28). Pension details of 109,000 stolen. Retrieved April 2, 2010, from BBC:
http://news.bbc.co.uk/1/hi/business/8072524.stm
Bellovin, S. (1994, February). RFC1579 - Firewall-Friendly FTP. Retrieved January 28, 2010, from
Faqs.org: http://www.faqs.org/rfcs/rfc1579.html
Bishop, M. (2004). Introduction to Computer Security. Boston: Pearson Education, Inc.
Braun, H.-W. (1998, May 18). Characterizing Traffic Workload. Retrieved November 02, 2009, from
CAIDA: http://learn.caida.org/cds/traffic9912/TrafficAnalysis/Learn/Flow/index.html
Buchanan, B. (2008). Bill's Home Page. Retrieved December 19, 2009, from Buchananweb:
http://buchananweb.co.uk/2008_2009_sfc.pdf
Buzzard, K. (1999). Computer Security - What Should You Spend Your Money On? Computers &
Security , 18, 322-334.
CAIDA. (1998, March 14). Traffic Studies from NASA Ames Internet Exchange (AIX). Retrieved
November 02, 2009, from CAIDA: http://www.caida.org/research/traffic-analysis/AIX/
Cakanyıldırım, M., Yue, W. T., & Ryu, Y. U. (2009). The management of intrusion detection:
Configuration, inspection and investment. European Journal of Operational Research , 195, 186–204.
Càrdenas, A. A., Baras, J. S., & Seamon, K. (2006). A Framework for the Evaluation of Intrusion
Detection Systems. Proceedings of the 2006 IEEE Symposium on Security and Privacy (S&P’06) , 77-
92.
Carter, E., & Hogue, J. (2006). Intrusion Prevention Fundamentals. Cisco Press.
CERT. (n.d.). CERT Statistics (Historical). Retrieved November 03, 2009, from CERT:
http://www.cert.org/stats/
CERT Coordination Center. (1996, September 19). CERT Advisory CA-1996-21 TCP SYN Flooding and
IP Spoofing Attacks. Retrieved October 28, 2009, from CERT: http://www.cert.org/advisories/CA-
1996-21.html
Cole, E., Krutz, R. L., Conley, J. W., Reisman, B., Ruebush, M., & Gollman, D. (2008). Network Security
Fundamentals. USA: R. R. Donnelley.
Combs, G. (2006). Wireshark. Retrieved March 30, 2010, from Wireshark:
http://www.wireshark.org/
Corsini, J. (2009). Analysis and Evaluation of Network Intrusion Detection Methods to Uncover Data
Theft, Master's thesis. Edinburgh: Edinburgh Napier University.
Coulouris, G., Dollimore, J., & Kindberg, T. (2001). Distributed Systems Concept and Design (3rd
Edition ed.). USA: Addison Wesley.
Crosby, S. A., & Wallach, D. S. (2003, August). Denial of Service via Algorithmic Complexity Attacks.
Proceeding of 12th USENIX Security Symposium , 29 - 44.
DARPA. (1998). DARPA Intrusion Detection Data Sets. Retrieved March 29, 2010, from Massachusetts
Institute of Technology:
http://www.ll.mit.edu/mission/communications/ist/corpora/ideval/data/index.html
Das, K. (2001). Protocol Anomaly Detection for Network-based Intrusion Detection. Retrieved
October 10, 2009, from SANS:
http://www.sans.org/reading_room/whitepapers/detection/protocol_anomaly_detection_for_netw
orkbased_intrusion_detection_349?show=349.php&cat=detection
Debar, H. (n.d.). Intrusion Detection FAQ: What is knowledge-based intrusion detection? Retrieved
October 10, 2009, from SANS: http://www.sans.org/security-resources/idfaq/knowledge_based.php
Department of Defense. (1985, December 26). Early Computer Security Papers, Part I. Retrieved
December 16, 2009, from Computer Security Resource Center:
http://csrc.nist.gov/publications/history/dod85.pdf
Desai, N. (2003, February 27). Intrusion Prevention System: the Next Step in the Evolution of IDS.
Retrieved October 10, 2009, from SecurityFocus: http://www.securityfocus.com/infocus/1670
Dwan, B. (2004). Identity theft. Computer Fraud & Security , 2004 (4), 14-17.
Einwechter, N. (2001, January 8). An Introduction To Distributed Intrusion Detection Systems.
Retrieved October 11, 2009, from SecurityFocus: http://www.securityfocus.com/infocus/1532
Farshchi, J. (2003). Statistical-based Intrusion Detection. Retrieved October 10, 2009, from
SecurityFocus: http://www.securityfocus.com/infocus/1686
Fink, G., O'Donoghue, K. F., Chappell, B. L., & Turner, T. G. (2002). A Metrics-Based Approach to
Intrusion Detection System Evaluation for Distributed Real-Time Systems. IPDPS '02: Proceedings of
the 16th International Parallel and Distributed Processing Symposium , 6-9.
Fuchsberger, A. (2005). Intrusion Detection Systems and Intrusion Information. Security Technical
Report , 10, 134-139.
Furnell, S. M., & Warren, M. J. (1999). Computer Hacking and Cyber Terrorism: The Real Threats in
the New Millennium? Computers & Security , 18, 28-34.
Furnell, S., & Papadaki, M. (2008, May). Testing our defences or defending our tests: the obstacles to
performing security assessment references. Computer Fraud & Security , 8-12.
García-Teodoro, P. e. (2009). Anomaly-based network intrusion detection - Techniques, systems and
challenges. Computers & Security, 28, 18-28.
Gates, C. (2009, April 11). Metasploit 3.4 Developer's Guide Chapter 01: Introduction. Retrieved
February 12, 2010, from Metasploit:
http://www.metasploit.com/redmine/projects/framework/wiki/DeveloperGuide_01
Gollman, D. (2006). Computer Security. Glasgow: Bell & Bain.
Gouda, M. G., & Liu, A. X. (2007). Structured firewall design. Computer Networks , 51 (4), 1106-1120.
Grimes, G. A. (2005). Network security managers' preferences for the Snort IDS and GUI add-ons.
Network Security , 19-20.
Hinde, S. (2005). Identity theft & fraud. Computer Fraud & Security , 2005 (6), 18-20.
ICANN. (2007, March 01). Root server attack on 6 February 2007. Retrieved April 02, 2010, from
ICANN: http://www.icann.org/en/announcements/factsheet-dns-attack-08mar07.pdf
Ierace, N., Urrutia, C., & Bassett, R. (2006). Intrusion Prevention System. Connecticut : Western
Connecticut State University.
Ingham, K., & Forrest, S. (2005, August 17). A History and Survey of Network Firewalls. Retrieved
January 25, 2010, from UNM Computer Science: http://www.cs.unm.edu/~treport/tr/02-
12/firewall.pdf
International Standard. (2006). Information technology - Security techniques - Selection, deployment
and operations of intrusion detection systems. Geneva: ISO.
International Telecommunication Union. (1994). X.200 : Information technology - Open Systems
Interconnection - Basic Reference Model: The basic model. Retrieved November 03, 2009, from
http://www.itu.int/rec/T-REC-X.200-199407-I/en
ISI. (1981). RFC793 - Transmission Control Protocol. Retrieved February 11, 2010, from FAQS:
http://www.faqs.org/rfcs/rfc793.html
Jacobson, V., Braden, R., & Borman, D. (1992, May). RFC1323 - TCP Extensions for High Performance.
Retrieved February 11, 2010, from IETF: http://www.ietf.org/rfc/rfc1323.txt
Jones, A. (2008). Industrial espionage in a hi-tech world. Computer Fraud & Security , 2008 (1), 7-13.
Jones, R. (2007). Care and Feeding of Netperf 2.4.X. Retrieved March 30, 2010, from Netperf:
http://www.netperf.org/svn/netperf2/tags/netperf-2.4.5/doc/netperf.html#Top
Jones, R. (2000). Welcome to the Netperf Homepage. Retrieved March 30, 2010, from Netperf:
http://www.netperf.org/netperf/
Kennedy, J. (2009, May 18). 75,000 customers’ bank details on stolen Bord Gais laptop. Retrieved
April 2, 2010, from SiliconRepublic.com:
http://www.siliconrepublic.com/news/article/13218/cio/75-000-customers-bank-details-on-stolen-
bord-gais-laptop
Ladin, R., Liskov, B., Shrira, L., & Ghemawat, S. (1992). Providing Availability Using Lazy Replication.
ACM Transaction on Computer Systems , 10 (4), 360-391.
Lancope. (2006). Enterprise Network Security Architecture Does Not End with an Inline IPS.
Alpharetta: Lancope, Inc.
Lemonnier, E. (2001, June 28). Retrieved October 10, 2009, from Lemonier.se:
http://lemonnier.se/erwan/docs/protocol_anomaly_detection.pdf
Lerace, N., Urrutia, C., & Bassett, R. (2005). Intrusion Prevention System. Ubiquity , 2-4.
Lin, S.-C., & Tseng, S.-S. (2004). Constructing detection knowledge for DDoS intrusion tolerance.
Expert Systems with Applications , 27, 379–390.
Lippmann, R., Haines, J. W., Fried, D. J., Korba, J., & Das, K. (2000). The 1999 DARPA off-line intrusion
detection evaluation. Computer Networks , 34 (4), 579-595.
Lo, O. (2009). Framework for Evaluation of Network-Based Intrusion Detection System. Bachelor of
Engineering with Honours in Computer Networks and Distributed Systems .
Lyu, M. R., & Lau, L. K. (2000). Firewall Security: Policies, Testing and Performance Evaluation. 24th
International Computer Software and Applications Conference , 116-121.
Mahoney, M. V., & Chan, P. K. (2003). An Analysis of the 1999 DARPA/Lincoln Laboratory Evaluation
Data for Network Anomaly Detection. Proc. 6th Intl. Symp. on Recent Advances in Intrusion Detection, 220-237.
Maynor, D., Mookhey, K. K., Cervini, J., Roslan, F., & Beaver, K. (2007). Metasploit Toolkit for
Penetration Testing Exploit Development and Vulnerability Research. Burlington: Syngress Publishing,
Inc.
McHugh, J. (2000). Testing Intrusion Detection Systems: A Critique of the 1998 and 1999 DARPA
Intrusion Detection System Evaluations as Performed by Lincoln Laboratory. ACM Transactions on
Information and System Security , 3 (4), 262-294.
Mell, P., & Scarfone, K. (2007). NIST Special publication on intrusion detection systems. Gaithersburg:
National Institute of Standards and Technology.
Metasploit. (2008). Metasploit Framework User Guide. Retrieved February 12, 2010, from
Metasploit: http://www.metasploit.com/redmine/projects/framework/wiki/UserGuide
Mirkovic, J., & Reiher, P. (2004). A Taxonomy of DDoS Attack and DDos defense mechanisms.
Computer Communication Review , 34 (2), 39-55.
Mirkovic, J., Dietrich, S., Dittrich, D., & Reiher, P. (2005). Internet Denial of Service: Attack and
Defence Mechanisms. Upper Saddle River: Pearson Education.
Mogul, J. C. (1989). Simple and flexible datagram access controls for Unix-based gateways.
Proceedings of the USENIX Summer 1989 Conference. , 203-222.
Montia, G. (2010, March 11). Thousands of HSBC private bank account details stolen. Retrieved April
2, 2010, from Banking Times: http://www.bankingtimes.co.uk/11032010-thousands-of-hsbc-private-
bank-account-details-stolen/
Morgan, D. (2005). The evolution of security purchasing. Network Security , 11-12.
Mutz, D., Vigna, G., & Richard, K. (2003). An Experience Developing an IDS Stimulator for the Black-
Box Testing of Network Intrusion Detection Systems. In proceedings of ACSAC '03 .
Muuss, M. (1983). The Story of the PING Program. Retrieved March 30, 2010, from The Research
Interests of MIKE MUUSS: http://ftp.arl.mil/~mike/ping.html
Nazario, J. (2008, July). DDoS attack evolution. Network Security , 7-10.
NIST. (1995, October). An Introduction to Computer Security: The NIST Handbook. Retrieved
December 19, 2009, from National Institute of Standards and Technology:
csrc.nist.gov/publications/nistpubs/800-12/handbook.pdf
NSS Labs. (2005). Cisco IPS-4240 V5.0(3). Mas la Carrière: NSS Group.
NSS Labs. (2009). IBM ISS Proventia Server for Linux 2.0. Retrieved February 4, 2010, from NSS Labs:
http://nsslabs.com/intrusion-prevention/ibm-iss-proventia-server-for-linux-2.html
NSS Labs. (2008, May). Juniper IDP 800. Retrieved February 4, 2010, from NSS Labs:
http://nsslabs.com/intrusion-prevention/juniper-idp-800.html
NSS Labs. (2006). Network IPS Testing Procedure (V4.0). Retrieved February 11, 2010, from NSS Lab:
http://nsslabs.com/certification/ips/nss-nips-v40-testproc.pdf
O'Reilly, T. (2000, June 08). The Network Really Is the Computer. Retrieved November 15, 2009, from
O'Reilly: http://www.oreillynet.com/pub/a/network/2000/06/09/java_keynote.html?page=1
Parker, D. (2003). Hping. Retrieved February 12, 2010, from Hping:
http://gd.tuwien.ac.at/www.hping.org/hping_conv.pdf
Pasquinucci, A. (2007). From network security to content filtering. Computer Fraud & Security , 14-
17.
Paxson, V. (2003). Bro Intrusion Detection System. Retrieved February 26, 2010, from Bro-IDS.org:
http://bro-ids.org/
Peisert, S., & Bishop, M. (2007). How to Design Computer Security Experiments. International
Federation for Information Processing Publications , 141-148.
Postel, J., & Reynolds, J. (1985, October). RFC959. Retrieved January 28, 2010, from University of
Southern California: ftp://ftp.isi.edu/in-notes/rfc959.txt
Ptacek, T., Newsham, T., & Simpson, H. (1998). Insertion, evasion, and denial of service: Eluding
network intrusion detection. Calgary: Secure Networks, Inc.
Raja, S. (2005). Network Intrusion Prevention Systems - Why “Always On” Stateful Inspection and
Deep Packet Analysis are Essential to Deliver Non-Stop Protection. Top Layer Networks, Inc.
Ranum, M. (2001). Experiences benchmarking intrusion detection systems. Retrieved February 08,
2010, from NFR Security White Paper:
http://www.bandwidthco.com/whitepapers/compforensics/ids/Benchmarking%20IDS.pdf
Rash, M., et al. (2005). Intrusion Prevention and Active Response: Deploying Network and Host IPS. USA:
Syngress Publishing.
Rexworthy, B. (2009). Intrusion detection systems – an outmoded network protection model.
Network Security , 17-19.
Rowan, T. (2007). Intrusion prevention systems: superior security. Network Security , 11-15.
Salah, K., & Kahtani, A. (2009). Performance evaluation comparison of Snort NIDS under Linux and
Windows Server. Journal of Network and Computer Applications .
Sanfilippo, S. (2006). hping. Retrieved November 03, 2009, from hping:
http://www.hping.org/documentation.php
SANS. (2001, July 06). Vulnerability Assessment. Retrieved December 18, 2009, from SANS:
http://www.sans.org/reading_room/whitepapers/basics/vulnerability_assessment_421
Schauer, H. (2002, October 23). IDSwakeup. Retrieved February 04, 2010, from HSC:
http://www.hsc.fr/ressources/outils/idswakeup/index.html.en
Schiller, C. A., Binkley, J., Harley, D., Gadi, E., Bradley, T., Willems, C., et al. (2007). Botnets: The killer
web app. Canada: Andrew Williams.
Seagren, E. (2007). Secure Your Network for Free. Rockland: Syngress Publishing.
Sharma, A., Kumar, R., & Grover, P. S. (2007). A Critical Survey of Reusability Aspects for Component-
Based Systems. Proceedings of the World Academy of Science, Engineering and Technology , 21, 35-
39.
Shimonski, R. J. (2003, February 05). Denial of Service 101. Retrieved October 23, 2009, from
WindowSecurity.com: http://www.windowsecurity.com/articles/Denial_of_Service_101.html
Singaraju, G., Teo, L., & Zheng, Y. (2004). A Testbed for Quantitative Assessment of Intrusion
Detection Systems using Fuzzy Logic. IWIA '04: Proceedings of the Second IEEE International
Information Assurance Workshop (IWIA'04) , 79- 93.
Snapp, S. R., Brentano, J., Dias, G. V., Goan, T. L., Heberlein, T., Ho, C.-L., et al. DIDS (Distributed
Intrusion Detection System) - Motivation, Architecture, and An Early Prototype. University of
California.
Snort Team. (2009, October 22). Snort Official Documentation. Retrieved February 28, 2010, from
Snort: http://www.snort.org/assets/125/snort_manual-2_8_5_1.pdf
Sommers, J. (2004(b)). Harpoon: A Flow-level Traffic Generator. Retrieved February 12, 2010, from
The University of Wisconsin: http://pages.cs.wisc.edu/~jsommers/harpoon/
Sommers, J., Yegneswaran, V., & Barford, P. (2004(a)). A Framework for Malicious Workload
Generation. The 4th ACM SIGCOMM conference on Internet measurement , 82-87.
Sommers, J., Yegneswaran, V., & Barford, P. (2005). Toward comprehensive traffic generation for
online ids evaluation. University of Wisconsin: Tech Rep.
Spathoulas, G. P., & Katsikas, S. K. (2009). Reducing false positives in intrusion detection systems.
Computer & Security , 1-10.
Sprouse, M. (1992). Sabotage in the American Workplace: Anecdotes of Dissatisfaction, Mischief and
Revenge. San Francisco: Pressure Drop Press.
Srivastava, S., & Soi, I. M. (1983). Empirical Prediction of Overall Reliability in Computer
Communication Networks. Microelectron. Reliab , 23 (1), 137-139.
Stutz, M. (1998, September 01). Bonk! A New Windows Security Hole. Retrieved October 28, 2009,
from Wired: http://www.wired.com/science/discoveries/news/1998/01/9581
Tanase, M. (2002, December 03). Barbarians at the Gate: An Introduction to Distributed Denial of
Service Attacks. Retrieved October 12, 2009, from SecurityFocus:
http://www.securityfocus.com/infocus/1647
Tanase, M. (2003, January 07). Closing the Floodgates: DDoS Mitigation Techniques. Retrieved
October 12, 2009, from SecurityFocus: http://www.securityfocus.com/infocus/1655
Terry, D., Theimer, M., Petersen, K., Demers, A., Spreitzer, M., & Hauser, C. (1995). Managing update
conflicts in Bayou, a weakly connected replicated storage system. Proceeding of the 15th ACM
Symposium on Operating Systems Principles , 172-183.
The Tolly Group. (2006). Benchmarking Strategies for Intrusion Prevention Systems (IPS) Part One:
Wired Systems. Boca Raton: The Tolly Group.
The Tolly Group. (2006). IntruGuard Devices, Inc. IG200 Rate-Based Intrusion Prevention System
Layer 2-4 DoS/DDoS Attack Mitigation and Performance Evaluation. Boca Raton: The Tolly Group.
TippingPoint. (2007). IPS vs. IDS: Similar on the Surface, Polar Opposites Underneath. Austin:
TippingPoint.
TippingPoint. (2008, August 29). The Fundamentals of Intrusion Prevention System Testing. Retrieved
November 02, 2009, from PreferredTechnology:
http://www.preferredtechnology.com/support/whitepapers/download/IPSTesting.pdf
Tomahawk. (2002, November 01). Tomahawk Test Tool. Retrieved November 02, 2009, from
TomahawkTestTool: http://www.tomahawktesttool.org/
Top Layer Networks, Inc. (2006). The Critical Importance of Three Dimensional Protection (3DP) in an
Intrusion Prevention System. Top Layer Networks, Inc.
Top Layer Networks, Inc. (2004). Protecting IT Infrastructures Against Zero-day Network Attacks with
Intrusion Prevention System Technology. Westborough: Top Layer Networks, Inc.
Traynor, I. (2007, May 17). Russia accused of unleashing cyberwar to disable Estonia. Retrieved April
2, 2010, from The Guardian: http://www.guardian.co.uk/world/2007/may/17/topstories3.russia
Turner, A. (2003). Tcpreplay. Retrieved November 02, 2009, from Tcpreplay:
http://tcpreplay.synfin.net/trac/
Ubuntu. (2009). Ubuntu. Retrieved March 29, 2010, from Ubuntu: http://www.ubuntu.com/
Vigna, G., Robertson, W., & Balzarotti, D. (2004). Testing network-based intrusion detection
signatures using mutant exploits. The 11th ACM conference on Computer and communications
security , 21-30.
Vixie, P., Sneeringer, G., & Schleifer, M. (2002, November 24). Events of 21-Oct-2002. Retrieved
March 2, 2010, from Root-server.org: http://d.root-servers.org/october21.txt
VRT Team. (2009, October 23). Download Snort Rules. Retrieved October 30, 2009, from Snort:
http://www.snort.org/snort-rules/?#rules
Walsh, J., & Koconis, D. (2006, July 12). Cleaning Packet Captures for Network IPS Testing. Retrieved
November 02, 2009, from ICSAlabs:
http://www.icsalabs.com/sites/default/files/nips_cleaningpcaps_060712.pdf
Weinsberg, Y., Tzur-David, S., & Dolev, D. One Algorithm to Match Them All: On a Generic NIPS
Pattern Matching Algorithm. Jerusalem: The Hebrew University Of Jerusalem.
Whitman, M., & Mattord, H. (2008). Principles of information security. Canada: Course Technology.
Workman, M. (2007). Gaining Access with Social Engineering: An Empirical Study of the Threat.
Information Security Journal: A Global Perspective , 16 (6), 315 - 331.
Wortham, J. (2009, August 6). Online Attack Silences Twitter for Much of Day. Retrieved April 2,
2010, from The New York Times:
http://www.nytimes.com/2009/08/07/technology/internet/07twitter.html
Wuu, L.-C., Hung, C.-H., & Chen, S.-F. (2007). Building intrusion pattern miner for Snort network
intrusion detection system. The Journal of Systems and Software , 80, 1699-1715.
ZDNet Uk. (2001, January 26). Web attackers knock out Microsoft sites. Retrieved October 31, 2009,
from ZDNet Uk: http://news.zdnet.co.uk/communications/0,1000000085,2083974,00.htm
Appendix A List of DDoS Attacks
TCP-based Distributed Denial of Service
Name Description
TCP SYN-ACK Flood Attack (Spoofed and not spoofed IP source)
This is a Layer 4 spoofed flood in which the attacker sends TCP SYN packets, each requesting to open a TCP connection. The attack creates a huge number of half-open TCP connections, which might fill the target's buffer. The target will then crash or discard all half-open connections, so legitimate users will not be able to access its resources.
TCP ACK Flood (Spoofed and not spoofed IP source)
This is a Layer 4 flood in which TCP ACK packets are continuously sent without establishing formal connections.
TCP NULL Flood (Spoofed and not spoofed IP source)
This is a Layer 4 flood in which TCP packets are continuously sent without establishing formal connections. These packets do not have any flags set, and therefore contain a Layer 4 header anomaly.
TCP Random flag Flood This is a Layer 4 flood in which TCP packets are continuously sent with randomly changing TCP flags. Due to the randomisation, there may be a Layer 4 header anomaly, as some flag combinations are illegal. Examples of legal combinations are SYN-ACK and FIN-ACK; examples of illegal combinations are SYN-FIN-RST-ACK, SYN-RST, etc.
TCP random sequence and acknowledgement numbers
Sequence numbers normally follow a well-defined discipline. In this attack the numbers are totally random, which might confuse the target into doing unexpected things.
TCP Random window size (Spoofed and not spoofed IP source)
The window size is chosen randomly, which might confuse the target.
TCP random option value (Spoofed and not spoofed IP source)
The options of the TCP packets are chosen randomly, and some of them might be illegal.
TCP random data length (Spoofed and not spoofed IP source)
The payload length of the TCP packets is chosen randomly for each packet.
TCP checksum error flood (Spoofed and not spoofed IP source)
The target is flooded with TCP packets which have bad checksums. This aims to overload the checksum validation capability of the target.
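The TCP flood variants above all hinge on a handful of header fields: flags, sequence and acknowledgement numbers, window size, and checksum. As an illustration only, the following Python sketch builds a bare 20-byte TCP header in memory with the kind of randomised fields these attacks use. The field layout follows RFC 793; no packets are sent, and the helper names are this sketch's own.

```python
import random
import struct

# TCP flag bits (RFC 793)
FIN, SYN, RST, PSH, ACK, URG = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20

def internet_checksum(data: bytes) -> int:
    """Ones'-complement checksum used by the TCP/UDP/IP checksum fields."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                       # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_tcp_header(src_port, dst_port, flags, seq=None, ack=None, window=None):
    """Build a bare 20-byte TCP header; the randomised defaults model the
    random-sequence, random-window and random-flag floods listed above."""
    seq = random.getrandbits(32) if seq is None else seq
    ack = random.getrandbits(32) if ack is None else ack
    window = random.getrandbits(16) if window is None else window
    offset_flags = (5 << 12) | flags         # data offset = 5 words, no options
    return struct.pack("!HHIIHHHH", src_port, dst_port, seq, ack,
                       offset_flags, window, 0, 0)  # checksum, urgent ptr left 0

# An illegal flag combination such as SYN-FIN-RST-ACK is a Layer 4 header anomaly:
anomalous = build_tcp_header(1234, 80, SYN | FIN | RST | ACK)
```

A checksum-error flood would simply write a value other than `internet_checksum(...)` into the checksum field.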
UDP-based Distributed Denial of Service
Name Description
IP-UDP Fragments Attack (Spoofed and not spoofed IP source)
This is an IP flood in which packets are fragmented and use the UDP protocol.
UDP Flood (Spoofed and not spoofed IP source)
The target is flooded with UDP packets.
UDP checksum error (Spoofed and not spoofed IP source)
The target is flooded with UDP packets which have bad checksums.
ICMP-based Distributed Denial of Service
Name Description
ICMP Attack (Spoofed and not spoofed IP source)
The target receives a huge number of ICMP packets, generally echo-request packets, although the attack works with any other type of ICMP packet. A reflected variant sends echo-request packets, with the source IP address spoofed to that of the target, to mirror machines; these machines then send their echo-replies to the spoofed source address, which is the original target. The target will believe the attackers are the mirror machines.
IP-ICMP Fragments Attack (Spoofed and not spoofed IP source)
The target receives fragmented ICMP packets whose reassembled size is bigger than 65,535 bytes. Such IP packets are illegal because they exceed the authorised size, and the target might crash when it attempts to rebuild them. This is an old flaw, patched since 1997, also known as the “Ping of Death”.
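The arithmetic behind the “Ping of Death” can be checked directly: the IP fragment offset field is 13 bits wide and counted in 8-byte units, so a final fragment placed at the maximum offset forces a reassembled size beyond the 16-bit Total Length limit. A minimal sketch (constants from the IPv4 header format; the function name is illustrative):

```python
MAX_IP_PACKET = 65535          # limit imposed by the 16-bit Total Length field
FRAG_UNIT = 8                  # fragment offset is counted in 8-byte units
MAX_OFFSET = (1 << 13) - 1     # 13-bit fragment offset field -> 8191

def reassembled_size(offset_units: int, fragment_payload: int) -> int:
    """Buffer size the receiver needs once this fragment is placed."""
    return offset_units * FRAG_UNIT + fragment_payload

# A final fragment at the maximum offset with more than 7 bytes of payload
# reassembles past the 65,535-byte limit -- the "Ping of Death" condition.
oversized = reassembled_size(MAX_OFFSET, 100)   # 8191 * 8 + 100 = 65628 bytes
```

An unpatched receiver that allocates only `MAX_IP_PACKET` bytes overflows when writing such a fragment.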
Other types of Distributed Denial of Service
Name Description
TCP/UDP Destination Port Attack (Spoofed and not spoofed IP source)
This type of attack targets a particular port on the target using the TCP or UDP protocol.
TCP-SYN / UDP / ICMP Blended Attack (Spoofed and not spoofed IP source)
This attack uses a blend of the three most used IP protocols: TCP, UDP and ICMP.
HTTP Half-Connection Attack (Spoofed and not spoofed IP source)
This is a TCP-SYN attack that targets port 80, which is generally used for HTTP connections.
DNS Attack (Spoofed and not spoofed IP source)
This attack uses the UDP protocol and targets the victim's DNS server, which generally listens on port 53.
IP Random Identification flood (Spoofed and not spoofed IP source)
The IP identification field (IP-ID) is set randomly, which might confuse the target.
IP Random fragment flag (Spoofed and not spoofed IP source)
The fragment flag tells a device that it should not fragment a packet because the target machine does not support fragmentation. This attack tries to confuse the device by sending IP packets with random fragment flags, forcing it to change its behaviour constantly.
IP Random TTL flood (Spoofed and not spoofed IP source)
This attack floods the target with IP packets which have a random TTL value.
IP random protocol (Spoofed and not spoofed IP source)
The IP protocol field supports 256 values; in this attack the target is flooded with packets that have a random protocol value. This might confuse the target because some of these values do not correspond to any protocol.
Non-IP Flooding (Spoofed and not spoofed IP source)
The target is flooded with packets that are neither IPv4 nor IPv6.
Appendix B Test-bed
Appendix C.1 R1 configuration
version 12.4
!
hostname R1
!
interface FastEthernet0/0
description R1 --> IPS
ip address 192.168.1.100 255.255.255.0
duplex auto
speed auto
no shutdown
!
interface Serial0/0/0
description R1 --> R2
ip address 10.0.1.1 255.255.255.0
clock rate 8000000
no shutdown
!
interface Serial0/0/1
description R1 --> R3
ip address 10.0.2.1 255.255.255.0
clock rate 8000000
no shutdown
!
router rip
network 10.0.0.0
network 192.168.1.0
!
end
Appendix C.2 R2 configuration
version 12.4
!
hostname R2
!
interface FastEthernet0/0
description R2 --> Servers
ip address 10.0.3.1 255.255.255.0
duplex auto
speed auto
no shutdown
!
interface Serial0/0/0
description R2 --> R1
ip address 10.0.1.2 255.255.255.0
clock rate 8000000
no fair-queue
no shutdown
!
router rip
network 10.0.0.0
!
end
Appendix C.3 R3 configuration
version 12.4
!
hostname R3
!
interface FastEthernet0/0
description R3 --> Clients
ip address 10.0.4.1 255.255.255.0
duplex auto
speed auto
no shutdown
!
interface Serial0/0/0
description R3 --> R1
ip address 10.0.2.2 255.255.255.0
no fair-queue
clock rate 56000
no shutdown
!
router rip
network 10.0.0.0
!
end
Appendix D Switch configuration
hostname Switch
!
vlan internal allocation policy ascending
!
interface FastEthernet0/1
switchport access vlan 2
switchport mode access
!
interface FastEthernet0/2
switchport access vlan 2
switchport mode access
!
interface FastEthernet0/3
switchport access vlan 2
switchport mode access
!
interface FastEthernet0/4
switchport access vlan 2
switchport mode access
!
interface FastEthernet0/5
switchport access vlan 2
switchport mode access
!
interface FastEthernet0/6
switchport access vlan 2
switchport mode access
!
interface FastEthernet0/13
switchport access vlan 3
switchport mode access
!
interface FastEthernet0/14
switchport access vlan 3
switchport mode access
!
interface FastEthernet0/15
switchport access vlan 3
switchport mode access
!
interface FastEthernet0/16
switchport access vlan 3
switchport mode access
!
interface FastEthernet0/17
switchport access vlan 3
switchport mode access
!
interface FastEthernet0/18
switchport access vlan 3
switchport mode access
!
interface FastEthernet0/23
switchport access vlan 4
switchport mode access
!
interface FastEthernet0/24
switchport access vlan 5
switchport mode access
!
monitor session 1 source interface Fa0/1
monitor session 1 destination interface Fa0/23
!
monitor session 2 source interface Fa0/13
monitor session 2 destination interface Fa0/24
!
end
Appendix E Initial Project Overview
Overview of Project Content and Milestones:
The global aim of this project is to investigate the performance of rate-based Intrusion Prevention
Systems (IPS) under a range of network conditions, and to assess how well they cope with different
types of network traffic. This type of system is specially designed for rate-based or anomaly-based
attack prevention, and can be used to prevent Distributed Denial-of-Service (DDoS) attacks.
The project will involve the creation of a test-bed network carrying realistic network traffic in the
frame of a scenario. The IPS will be set up in the test-bed and different DDoS attacks will be
performed on the network. The final step will be the evaluation of how the IPS processed the threat.
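The rate-based prevention mentioned above is commonly built on a token-bucket style check: each source is allowed a configured packet rate plus a bounded burst, and traffic beyond that is dropped. The following is a minimal sketch of the idea only, not the implementation of any particular IPS; the class name and rates are illustrative.

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter, the kind of per-source rate check a
    rate-based IPS applies: packets beyond the allowed rate are dropped."""
    def __init__(self, rate_pps, burst):
        self.rate = rate_pps             # tokens (packets) added per second
        self.capacity = burst            # maximum burst size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, now=None):
        """Return True if this packet is within the rate, False to drop it."""
        now = time.monotonic() if now is None else now
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                     # over the configured rate: drop

# At 2 packets/s with a burst of 2, a 10-packet burst sees only 2 accepted.
bucket = TokenBucket(rate_pps=2, burst=2)
t0 = time.monotonic()
results = [bucket.allow(t0) for _ in range(10)]
assert results.count(True) == 2
```

Legitimate traffic under the rate passes untouched, which is why such systems need far less per-attack knowledge than signature matching.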
The Main Deliverable(s):
Perform a critical evaluation of network security.
Investigate IPS systems, their performance impact, and evaluation methodologies.
Investigate evaluation tools for traffic playback to create a test-bed.
Design a range of experiments for the evaluation.
Implement the evaluation test-bed and Device Under Test (DUT).
Evaluate the results of the selected system with the test-bed implemented for the scenario.
The Target Audience for the Deliverable(s):
Every company or organisation that runs a network holding sensitive data, and needs its services
available at all times, is targeted. If they are aware of the risk of DDoS attacks, they will understand
the capacity of the product to help protect them from this threat.
The Work to be Undertaken:
The first step should be the investigation of current ways to secure a network against DDoS attacks,
and in particular the role of IPS systems and their capacity to handle this threat. The next step will be
the creation of a network; this will be the test-bed. After that, specific tools will be used to generate
a realistic traffic flow and record it, so that it can be replayed as many times as needed for testing.
Once this is done, the proper testing of the system can take place: the network will be put under
different types of attack. The last step is the evaluation of the results generated by the system and
of how well it handled the attacks.
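Recorded traffic of this kind is stored in libpcap capture files, the format tools such as tcpdump write and Tcpreplay replays. As a rough illustration of that format (classic pcap only, not pcapng; the function name is this sketch's own), the following counts the packet records in a capture:

```python
import struct

PCAP_MAGIC = 0xA1B2C3D4  # classic libpcap file, microsecond timestamps

def count_packets(data: bytes) -> int:
    """Count packet records in a classic pcap capture held in memory."""
    magic, = struct.unpack_from("<I", data, 0)
    endian = "<" if magic == PCAP_MAGIC else ">"   # byte-swapped file otherwise
    offset, count = 24, 0                          # 24-byte global header
    while offset + 16 <= len(data):                # 16-byte per-record header
        _ts_sec, _ts_usec, incl_len, _orig_len = struct.unpack_from(
            endian + "IIII", data, offset)
        offset += 16 + incl_len                    # skip the captured bytes
        count += 1
    return count
```

Replaying the same capture repeatedly gives each experiment an identical background-traffic load, which is what makes the results comparable between runs.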
Additional Information / Knowledge Required:
Different skills will have to be learned, such as the creation of a test-bed, which involves
understanding and building a network that fits a scenario and using tools to generate realistic
traffic. The study of an IPS will be an extension of the study of Intrusion Detection Systems (IDS)
covered in the module Computer Security & Forensics.
After the investigation of IPS systems and of the tools used to generate traffic, new skills might need
to be learned.
Information Sources that Provide a Context for the Project:
The company NSS Labs specialises in the testing of security products. Their customers include the
leading brands in security: CheckPoint, Cisco, HP, IBM, Juniper, McAfee, Solidcore, Sourcefire,
TippingPoint, Trend Micro, Websense and others.
The Importance of the Project:
As explained before, it is possible to find tests of IPS products, but they all concern commercial
ones, even though Open Source projects have been creating IDS and IPS systems for many years; no
equivalent of the testing done for commercial systems can be found for them. The aim of this project
is to produce an empirical analysis and testing of an IPS created by an Open Source project.
The Key Challenge(s) to be Overcome:
The testing of an IPS requires different types of hardware, such as routers, switches and computers
acting as nodes. The last piece of hardware will be the computer that hosts the IPS; the firewall used
could be implemented on the IPS host or on a separate machine. This choice will be made after the
investigation of IPS systems.
The final challenge will be handling a threat such as a DDoS attack, which is a serious threat and
difficult to handle. Because of that, only part of this threat will be studied during the project.
Appendix F Week 9 Meeting Report
Appendix G Gantt Chart
Appendix H Diary Sheets
Student: Flavien Flandrin Supervisor: Bill Buchanan
Date: 14/09/09 Last diary date: N/A
Objectives:
Find a proposal for my honours project
Progress:
Background reading about the areas that interest me, such as IDS/IPS/DDoS
Supervisor’s Comments:
Student: Flavien Flandrin Supervisor: Bill Buchanan
Date: 29/09/09 Last diary date: 14/09/09
Objectives:
See if the project overview and the deliverables are valid
Progress:
Produced a project overview and a list of the main deliverables
Supervisor’s Comments:
Student: Flavien Flandrin Supervisor: Bill Buchanan
Date: 06/10/09 Last diary date: 29/09/09
Objectives:
-Check the correctness of the outline.
-Discuss the different designs for the test-bed.
-Identify elements required to build the environment / test-bed.
-Learn how to configure and deploy an inline IPS.
Progress:
-Investigation of different types of Denial of Service attacks.
-Investigation of different ways to simulate a Distributed Denial of Service attack.
-Investigation of different ways to generate, manipulate and use background traffic.
-Found company traffic on the website: http://bro-ids.org/
-Outline of the literature review.
Supervisor’s Comments:
Student: Flavien Flandrin Supervisor: Bill Buchanan
Date: 13/10/09 Last diary date: 06/10/09
Objectives:
-Snort with inline mode setup in a virtual machine.
-Snort working with Barnyard2, MySQL and BASE
Progress:
-Investigation of Snort evaluation
-Investigation of DDoS behaviour and mitigation point of view
-Investigation of IDS/IPS methodologies
-Investigation of host-based and network-based IDS/IPS
-Learned how to deploy and configure Snort with in-line mode (IPS)
Supervisor’s Comments:
Student: Flavien Flandrin Supervisor: Bill Buchanan
Date: 20/10/09 Last diary date: 13/10/09
Objectives:
- Investigate the scope of DDoS in 2009
- Continue to set-up the test-bed in a virtual environment
Progress:
- Design of the test-bed topology
-Started to setup the test bed in a virtual environment.
-Snort with inline mode setup in a virtual machine.
-Snort working with Barnyard2, MySQL and BASE
Supervisor’s Comments:
Student: Flavien Flandrin Supervisor: Bill Buchanan
Date: 27/10/09 Last diary date: 20/10/09
Objectives:
Set-up the test-bed in a real network
-Design of a first DDoS attack
Progress:
-Finished setting up the test-bed.
-Snort working with Barnyard2, MySQL and Snorby.
-Understanding of Hping
Supervisor’s Comments:
Student: Flavien Flandrin Supervisor: Bill Buchanan
Date: 01/11/09 Last diary date: 27/10/09
Objectives:
Continue to write the literature review
Prepare week 9 report
Progress:
Writing the literature review
Supervisor’s Comments:
Student: Flavien Flandrin Supervisor: Bill Buchanan
Date: 10/11/09 Last diary date: 01/11/09
Objectives:
Continue to write the literature review
-Create a DDoS attack
-Handle the DDoS attack
- Start to set-up the test bed in a real environment
Progress:
-Writing the literature review
-Created all OS images needed for the test-bed
Supervisor’s Comments:
Student: Flavien Flandrin Supervisor: Bill Buchanan
Date: 17/11/09 Last diary date: 10/11/09
Objectives:
-Define a meeting time for the interim meeting
- Finish the implementation of the test bed
Progress:
Prepared the interim meeting
-Finished preparing realistic traffic
Supervisor’s Comments:
Student: Flavien Flandrin Supervisor: Bill Buchanan
Date: 24/11/09 Last diary date: 17/11/09
Objectives:
-Evaluate the background traffic using Tcpreplay
-Finish defining all the different ways to evaluate experiment results
Progress:
Test-bed implemented.
Week 9 report produced.
Supervisor’s Comments:
Student: Flavien Flandrin Supervisor: Bill Buchanan
Date: 25/01/10 Last diary date: 24/11/09
Objectives:
Designing the first experiment
- Defining how the report should be done for the experimental part
Progress:
- Literature review nearly finished
- Evaluation of the background traffic with Tcpreplay finished.
Supervisor’s Comments:
Student: Flavien Flandrin Supervisor: Bill Buchanan
Date: 03/02/10 Last diary date: 25/01/10
Objectives:
Implement the first experiment
Investigate tools that could be used for the different metrics.
Progress:
Design of the time to respond, latency and reliability experiment done
Supervisor’s Comments:
Student: Flavien Flandrin Supervisor: Bill Buchanan
Date: 10/02/10 Last diary date: 03/02/10
Objectives:
Design the available bandwidth experiment and resource metrics experiment
Discuss the obtained results
Progress:
Time to live experiment and latency experiment carried out
Supervisor’s Comments:
Student: Flavien Flandrin Supervisor: Bill Buchanan
Date: 23/02/10 Last diary date: 10/02/10
Objectives:
Design of packet loss experiment
Implementation of the reliability experiment
Discuss the results of the previous experiment
Progress:
Resource metrics and available bandwidth experiments carried out
Supervisor’s Comments:
Student: Flavien Flandrin Supervisor: Bill Buchanan
Date: 17/03/10 Last diary date: 23/02/10
Objectives:
Start writing the design and implementation chapters.
Carry out the packet loss methodology
Progress:
Investigate SNMP protocol
Investigate other metrics which could be interesting for the project
Supervisor’s Comments:
Student: Flavien Flandrin Supervisor: Bill Buchanan
Date: 23/03/10 Last diary date: 17/03/10
Objectives:
Discuss the results and the different findings of the project
Discuss the evaluation and conclusion chapters of the report.
Progress:
Design and Implementation chapter finished
Packet loss experiment carried out
Supervisor’s Comments:
Acronyms
ACL Access Control List
API Application Programming Interface
C&C Command and Control
CAIDA Cooperative Association for Internet Data Analysis
CIA Confidentiality Integrity Availability
CPU Central Processing Unit
DDoS Distributed Denial of Service
DIDPS Distributed Intrusion Detection Prevention System
DNS Domain Name System
DoS Denial of Service
DUT Device Under Test
E-mail Electronic mail
EULA End-User License Agreement
GRE Generic Routing Encapsulation
GUI Graphical User Interface
HIDPS Host Intrusion Detection Prevention System
HTTP Hypertext Transfer Protocol
ICMP Internet Control Message Protocol
IDEVAL Intrusion Detection Evaluation
IDPS Intrusion Detection Prevention System
IDS Intrusion Detection System
IP Internet Protocol
IPS Intrusion Prevention System
IRC Internet Relay Chat
Kbytes Kilobytes
MAC Media Access Control
Mbytes Megabytes
NIDPS Network Intrusion Detection Prevention System
NIDS Network Intrusion Detection System
OS Operating System
OSI Open Systems Interconnection
PC Personal Computer
pps packets per second
QoS Quality of Service
RIP Routing Information Protocol
RTT Round-Trip Time
SPAN Switch Port Analyzer
TCB Transmission Control Block
TCP Transmission Control Protocol
TTL Time To Live
UDP User Datagram Protocol
VLAN Virtual Local Area Network