
Host-based Security
Dmitry D. Khlebnikov

[email protected]

Secure Development Melbourne, June 11, 2015

Good evening, everyone! Tonight’s talk (as you may see from the title of the slide behind me) is about host-based security: what it means, how one can protect their applications, and how you can apply security principles during the design stages of building your infrastructure. I have always strived to participate in all stages of the software life-cycle, trying to spot security-related issues at the development as well as the deployment stages, so when the DevOps methodology became mainstream in 2008, I discovered that I had been following it for almost a decade without even knowing it :). However, if you look closely, DevOps promotes a lot of good things to speed development up but has no focus on security, and the security part often suffers considerably as a result. So, we will look at the issues through the eyes of a Systems Administrator rather than a Developer; a Systems Administrator who is part of the DevOps culture and who drives security into the project.

Overview

• basic security principles

• what does “Host-based security” mean?

• a generic web application infrastructure (its breaking points and mitigation techniques)

• further steps

In the following 15 minutes or so I am going to cover the basic security principles, briefly explain the meaning of the host-based security approach, and analyse the common breaking points and the corresponding mitigation techniques one may use to increase the security of their infrastructure.

The basic principles of security rarely change, people’s behaviours do

When I started to prepare this presentation the following idea struck me: the basic principles of security almost never change. It is people and their views on security that change over the course of time. There is almost nothing revolutionary in the techniques that are considered security best practices; most of them have been around for decades, yet you will rarely see even a good part of them implemented in real life. I hope this presentation is one of the steps toward changing the mindsets of the people who architect, deploy, and maintain hosting infrastructures.

The basic principles of security:

Before we proceed further, let’s examine the basic common principles of IT security. Although the exact list is debatable, I came up with the following six principles that, in my opinion, apply to any software development.

The basic principles of security:

Balance protection with utility

The first principle is that you should balance protection with utility.

Even if it were possible to build something 100% secure and unbreakable, it would be a sad thing to see, since it would be totally unusable and isolated from the rest of the world. There is a common example of a perfectly secure server: the one which is disconnected from the network and powered off. Even then it is not entirely secure: what if somebody with a hammer walks in? Anyway, the lesson here is that the security measures we apply should be justified and should not prevent the business from achieving its goals; it is a game of balance and always will be.

The basic principles of security:

Balance protection with utility

Assign minimum privileges

The second principle states that you should assign only the minimum required privileges.

There is a well-known (I hope) “Principle of least privilege”, which states something along the following lines: every module (be it a user, a process, or a program) must be able to access only the information and resources that are necessary for its legitimate purpose. While the definition is simple and understandable, the implementation often is not. I find that a good analogy to the principle of least privilege is the theory of probability: if you possess all the variables describing a given environment, you should be able to predict the future. However, the more we dive in, the more variables we discover, and we never have enough information to build an exact model. It is the same with the principle of least privilege.

Therefore, we should use some approximation and define the scope (or depth) of what we consider to be the atoms of our privilege system. Just as an example: if we consider that we operate within the standard Unix DAC security model, then we have users, groups, and permission bit masks as our building blocks for defining access permissions. If we want to go deeper, we can employ security frameworks like SELinux to define security domain contexts and transitions between them, restrict different system calls, and so on. It is really hard to find one size that fits all, but personally I think you can achieve a pretty good result even if you work with the Unix DAC security model alone, and going for the SELinux targeted policy after that improves the protection even further.
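To make the Unix DAC example concrete, here is a small sketch (the layout, paths, and modes are all hypothetical illustrations, not a prescription) that uses nothing but permission bit masks to separate a code base, static content, and a log directory:

```shell
#!/bin/sh
# Least privilege with plain Unix DAC: hypothetical application layout.
set -eu
app=$(mktemp -d)   # stand-in for something like /srv/app

mkdir -p "$app/code" "$app/static" "$app/logs"

chmod 0711 "$app"        # traversable by anyone, listable by the owner only
chmod 0700 "$app/code"   # sources are private to the deploy user
chmod 0755 "$app/static" # static content is world-readable
chmod 0733 "$app/logs"   # others may create log files but not list them

# Show the resulting modes (GNU stat, i.e. Linux).
stat -c '%a %n' "$app" "$app/code" "$app/static" "$app/logs"
```

Even this handful of bits already expresses a policy: a service account running under a different user can reach the static files and append logs, but cannot read the sources or enumerate the tree.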

The basic principles of security:

Balance protection with utility

Assign minimum privileges

Use layered security (the onion model)

The third principle is to use layered security (AKA the onion model, AKA defence in depth).

Even if you come up with a brilliant layer of security that prevents 99.9% of all possible attack vectors, once the attackers find a way to bypass your protection it is gone as if it had never been there. For this reason your security controls must follow the onion model, where each security layer is independent but complements the others.

For example, the network firewall should control the general traffic flows to and from your network, the host-based firewall should manage traffic to and from the services specific to that particular host, services should run under their corresponding user accounts, and files/directories should have proper ownership and permissions. This way, if one of the layers is bypassed or compromised, the others are still there and will minimise (or even fully mitigate) the impact.

The basic principles of security:

Balance protection with utility

Assign minimum privileges

Use layered security (the onion model)

Plan for failure

The fourth principle is, I think, the most important one: always plan for failure.

No matter how good your security model and its implementation are, eventually they will be broken into. It is not a question of “if”, but of “when”. For this reason, you should assume that every single part of your infrastructure can be hacked, and use that assumption when you design your solution.

Personally, I think this is the principle people underestimate the most, and ignoring it is what leads to so many publicly known leaks, disclosures, etc. Following it requires a different mindset, a hacker’s mindset I would say. Each time you look at a component of your infrastructure, ask yourself: what would be the impact if that particular component were compromised? What can we put in place to ensure that when the component is compromised the impact is acceptable/manageable?

The basic principles of security:

Balance protection with utility

Assign minimum privileges

Use layered security (the onion model)

Plan for failure

Ensure sufficient audit is in place

The fifth principle requires that sufficient audit routines are in place.

The four principles before this one were of a preventive nature, but we also need some means to detect anomalies, to know when there is a breach, and to have sufficient data to investigate when attackers are successful. In addition, the collected data is a perfect source of ideas on how to improve our protection further and supplies us with critical information on how our protective measures are performing.

The basic principles of security:

Balance protection with utility

Assign minimum privileges

Use layered security (the onion model)

Plan for failure

Ensure sufficient audit is in place

Run frequent tests and reassess the controls in place

The final, sixth principle is of an iterative nature and demands frequent tests and reassessment of the implemented security measures.

We live in an ever-changing world, and security researchers come up with better tools and techniques every single day; in other words, they naturally evolve. So do our applications (new functionality is added, some features are deprecated) and our infrastructure (replacing hardware, changing vendors, scaling up and down); therefore the protective measures should also be reassessed often and adjusted accordingly.

What does “Host-based security” mean?

Network-based (perimeter) security

Host-based security

Images are from http://www.isaserver.org/articles-tutorials/articles/2004tales.html

I was too lazy to create proper illustrations for this part of the presentation, so I borrowed these images from a decade-old isaserver.org article. Although the article speaks of Windows-based corporate networks, I would still highly recommend reading it since, you know, “the security principles rarely change” :) and the information presented in that article is still applicable today.

On the left side of the slide you can see a typical perimeter-protected network. This approach, if my memory serves me right, was invented in the 1970s and is still employed by the vast majority of enterprises today as the only protection of their assets. The problem with perimeter protection is that it is a single layer of protection: once a hole is found, the attackers have unrestricted access to the internal systems.

Host-based security, on the other hand, is an evolution of the network-based security model and focuses on multilayer security, where security threats are assessed from the viewpoint of each and every host. In other words, in the host-based security model each host is considered to be in a hostile environment and should not trust anyone, even on the same network. You can go even further and assess the applications running on the host from the standpoint that they, too, are running in a hostile environment and should not trust anyone.

Generic Web Application

We have reached the second half of the presentation. To make things easier to understand, let’s examine a generic web application and a possible infrastructure it could run on.

Users

Generic Web Application

Any application should have some kind of users, be they real people who visit a website or programs which communicate with our application through an API.

Users → Web Server

Generic Web Application

An application should have an entry point users can connect to. Since we are discussing a web application, such an entry point would be Web-accessible; hence we need a web server.

Users → Web Server → Static Content

Generic Web Application

Part of our application will be static content (images, javascript, etc.), so we need static content storage reachable by the web server. By the way, this diagram already presents a complete architecture for a static web site. However, we will go further and define a dynamic web site infrastructure here.

Users → Web Server → Application Server
Web Server → Static Content

Generic Web Application

To implement our application’s logic we need a place where we could execute some code (be it PHP/Ruby/Python scripts, binaries, etc.), so we need an application server for that.

Users → Web Server → Application Server → Database Server
Web Server → Static Content

Generic Web Application

Our application will also use a database backend to store and retrieve operational data. This means that we require a database server.

Users → Web Server → Application Server → Database Server
Web Server → Static Content

Generic Web Application

Implementing Host-based security

• Analyse attack vectors
• Deploy counter-measures
• Protect data flows

To protect our application we need to go from node to node and perform three simple steps for each node:

• analyse attack vectors: this gives us information on how each component of the corresponding node could fail

• deploy counter-measures: here we implement security measures to minimise the impact of each possible failure

• protect data flows: at this step we need to ensure that each node receives the expected traffic from the expected sources

Users → Web Server → Application Server → Database Server
Web Server → Static Content

Generic Web Application

You may have noticed that nothing has been said about how the application is written or what it is supposed to do. The reason is quite simple: we are looking at the infrastructure as systems administrators, and we consider the application itself a black box: we know the application’s inputs and outputs, but have no idea how it was built.

The goal of this exercise is to create a generic, secure hosting infrastructure, a solid foundation if you like, and once that is done it will be time to look deeper and work with the developers on implementing security principles at the application level.

Please note that we are not trying to guess how something may be broken into; rather, we assess what could be broken and how bad the result would be.

So, let’s examine the breaking points of the first node, which is the Web Server.

Web Server (breaking points)

• the web process can be compromised

• web server’s access restrictions can be bypassed

• can provide unidentified entry points into the application

Firstly, we do not know how, but it is possible that the web process gets compromised, and we can expect the attacker to gain the same access level as the web server process possesses. As an example, although a bad one (since I hope nobody uses mod_php any more), imagine Apache running mod_php where the application was written so badly that you can inject PHP code into it and run it through eval().

Secondly, the access restrictions imposed by the web server configuration can be bypassed, and the attackers would get access to places they were not supposed to reach. Again, we are not looking at how it can be done; we just assume it is possible. As an example of this scenario you may recall file inclusion vulnerabilities, where attackers were sourcing the /etc/passwd file from the system. Maybe it is not the best of examples, but I hope you get the idea.

Finally, there may be some unidentified entry points which, due to their nature, are not adequately protected and can be exploited by attackers. For example, many PHP applications suffer from having their class files directly accessible via their .php extensions. Or maybe the developers left something in the codebase during a debugging session and forgot to remove it.

Web Server (mitigation techniques)

• runs as a non-privileged system account

• has read-only access to the static content

• may have write access to the file system to write logs only

• accesses application through handlers only

• the code base is NOT readable to the web server process

Running the web server as a non-privileged system account shrinks the impact surface to that account only.

Limiting the web server account to read-only access ensures that a compromised web server cannot tamper with the content.

In an ideal world the web server should not be able to write anywhere, but having log files is often very convenient, so that should be OK.

Ideally, each entry point should be defined and have a corresponding handler (for example, a call through the FastCGI interface); this ensures that it is not possible to execute anything beyond the defined set of entry points.

As a result of the previous technique (access through handlers only) there is no need for the web server to read the code base. Implementing this ensures you will not leak the source code to attackers even if the web server is compromised or misconfigured.

Static Content (breaking points)

• a compromised web server process may tamper with the content

• web server’s security restrictions may be misconfigured (or bypassed)

• directory structure and file listings may be exposed

This may be the easiest component to assess (if it is located on the file system local to the web server), but it can get a bit more complicated if, for example, the static content is provided to the web server via NFS.

The breaking points I listed here are a subset of the web server ones, and there is a reason for this: we are looking at the same attack vectors from a different perspective, from the static content’s point of view.

Static Content (mitigation techniques)

• is read-only accessible for the web server process

• has strict directory permissions on leading directories (e.g. 0711 to allow the web server to reach static files)
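The 0711 trick on leading directories can be seen in action with a few commands (the paths and file names here are made-up examples):

```shell
#!/bin/sh
# Leading directories at 0711: a process that knows the exact path can
# traverse (x) down to a file, but cannot list (r) the directories.
set -eu
docroot=$(mktemp -d)
mkdir -p "$docroot/static/img"
printf 'fake-image-bytes' > "$docroot/static/img/logo.png"

chmod 0711 "$docroot/static" "$docroot/static/img"
chmod 0644 "$docroot/static/img/logo.png"

# The exact path is still readable (the web server can serve the file)...
cat "$docroot/static/img/logo.png"; echo
# ...but for any other non-root account, `ls "$docroot/static"` fails with
# "Permission denied", so the directory structure stays hidden.
```

This gives the web server exactly what it needs (fetch a known path) while denying attackers the ability to enumerate the content tree.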

Application Server (breaking points)

• can be compromised and malicious code may be injected into the code base

• temporary and/or private files can be exposed to the web server through too relaxed file system permissions

• failure to validate user input can lead to a security compromise (e.g. SQL injections, XSS, code execution, etc.)

You may notice that the last point actually lies in the application security realm and has nothing to do with host-based security. This is true, but I decided to include it on this slide for completeness, since it is a critical breaking point and can be addressed by multiple means at different levels of the security model. For example, you can employ a Web Application Firewall to filter user input, you can sanitise it at the application level, or you can put a proxy between the web server and the user to do both sanitisation and validation.

Application Server (mitigation techniques)

• runs as a non-privileged system account

• has read-only access to the code base

• may write to the static content directories (e.g. to generate content), but should be careful in doing so

• user input should be sanitised and validated using the whitelist approach
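The whitelist approach from the last bullet can be sketched in a few lines of shell (the helper name and the allowed character set are hypothetical; a real application would do this in its own language):

```shell
#!/bin/sh
# Whitelist-based input validation sketch: instead of enumerating every
# dangerous character (a blacklist), accept only what is explicitly allowed.
validate_username() {
  case "$1" in
    ''|*[!a-z0-9_]*) return 1 ;;  # empty, or contains a disallowed character
    *)               return 0 ;;
  esac
}

validate_username 'alice_01'        && echo "alice_01: accepted"
validate_username 'alice; rm -rf /' || echo "alice; rm -rf /: rejected"
```

The design point is that anything not matching the whitelist is rejected by default, so new attack payloads do not need to be anticipated one by one.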

Database Server (breaking points)

• exposure to public networks can result in direct manipulation of the data sets

• a compromise of a single component with database access can lead to access to the whole database

• a read query could be modified to do an update (due to missing input validation at the application level)

Database Server (mitigation techniques)

• runs as a non-privileged system account

• is reachable to the application server only

• leverages the database security model (i.e. the schema should be properly designed to utilise multiple database users with the corresponding access controls to update/retrieve information from tables)

Users → Web Server → Application Server → Database Server
Web Server → Static Content

Generic Web Application

So, we assessed the breaking points for all nodes and implemented the corresponding counter-measures. The next step is to ensure that the data flows are protected.

Users → Web Server → Application Server → Database Server
Web Server → Static Content

Generic Web Application

We should start with a very simple host-based firewall on each node that does a very simple thing: it allows SYN packets in for the services defined on the node and allows traffic out for established connections. Everything else can be rejected or dropped (well, you may keep some ICMP exchange allowed for the network’s sake). This ensures that even if attackers get shell access (in other words, are able to execute arbitrary commands on the box), they will not be able to connect to external resources or expose new ports to the network.
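As a sketch of such a ruleset (not a drop-in script: it must run as root, the SSH/HTTP ports are assumptions for an imaginary web node, and the default-drop OUTPUT policy is the strict variant described above), the iptables rules could look like this:

```shell
#!/bin/sh
# Hypothetical host exposing only SSH (22/tcp) and HTTP (80/tcp).
# Default policy: drop everything in every direction.
iptables -P INPUT   DROP
iptables -P FORWARD DROP
iptables -P OUTPUT  DROP

# Loopback traffic is always fine.
iptables -A INPUT  -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT

# Allow traffic belonging to connections that are already established.
iptables -A INPUT  -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# New inbound connections (SYN) for the defined services only.
iptables -A INPUT -p tcp --syn --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --syn --dport 80 -j ACCEPT

# Keep basic ICMP for the network's sake.
iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT
```

Note that because OUTPUT only permits packets of established connections, a shell obtained by an attacker on this box cannot initiate outbound connections to fetch tools or phone home.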

Are we secure at this point? No, of course we are not. In fact, we never will be, but we have increased the security of this generic hosting infrastructure to a level where most automated attacks from the OWASP Top 10 list would fail, and attackers would have a hard time exploiting our infrastructure even if the application itself is poorly written and has serious security issues.

There are multiple things one should do at the infrastructure level for a real-life production system: exposing only a load balancer working in full proxy mode to the public network and keeping everything mentioned on this slide in private networks behind the load balancer; using several network firewalls to protect each logical layer of the infrastructure (these could serve as choke points); etc. But all of this is a topic for another presentation, I think.

Further steps

• Utilise SELinux to run the application/components in their own security contexts

• Dive into the application security and guide developers on how to write secure code (input validation using whitelists, handling filesystem permissions and ownership properly, etc.)

• Segregate the application into logically separate components and execute these components under dedicated non-privileged accounts