
Innovation for IT Security

Bilingual edition: English / Spanish

The Research Center for Technological Risk Management (CI-GTR, from its Spanish initials), created at the request of the BBVA Group and the Rey Juan Carlos University, has continued to deepen its purpose of serving as a channel for innovation by promoting collaboration among companies, researchers and the academic community.

In the summer of 2012, the CI-GTR held its second Summer Course, this time entitled Innovation for IT Security, at the imposing historic venue of the Rey Juan Carlos University in the Royal Site and Villa of Aranjuez (Madrid, Spain).

The pages of this volume, published in Spanish and English, collect the content of the lectures, the round table and the subsequent debate that constituted the crux of the program. They also include a photographic album showing some of the moments shared by students and teachers, a faithful testimony to how a good atmosphere, professional rigor and the desire to learn are essential ingredients for innovation.

Innovation for IT Security

2012 Summer Course Rey Juan Carlos University

Aranjuez, 9–11 July 2012

PUBLISHING

PRODUCTION

DESIGN AND LAYOUT

Miguel Salgueiro / MSGráfica

PRINTING AND BINDING

Gráficas Monterreina

Legal Deposit: M-16149-2013


INDEX

INTRODUCTION .......... 5
Santiago Moral Rubio

PROLOGUE .......... 7
Pedro González-Trevijano

INNOVATION AS RISK MANAGEMENT STRATEGY IN THE BBVA GROUP .......... 9
Santiago Moral Rubio / Francisco García Marín

INTENTIONAL RISK IN DIGITAL ENVIRONMENTS: COMPLEX NETWORKS OF INTENTIONALITY .......... 13
Víctor Chapela / Regino Criado

SECURITY INTELLIGENCE AND RISK MANAGEMENT. WHAT YOU CAN'T SEE CAN HURT YOU? .......... 21
Simon Leach

IF YOU INNOVATE IN YOUR BUSINESS, WHY NOT INNOVATE IN YOUR PRESENTATIONS? .......... 27
Gonzalo Álvarez Marañón

PREDICTION OF FRAUD WITH INNOVATIVE TECHNOLOGIES .......... 33
Fernando Esponda / Luis Vergara


A RISK MANAGEMENT MODEL BASED ON GAME THEORY .......... 39
Jesús Palomo

ROUND-TABLE DISCUSSION. NEW TRENDS IN RISK AND IT SECURITY TECHNOLOGIES .......... 45
Taking part: Jaime Forero, Rafael Ortega García, Luis Saiz Gimeno, Juan Jesús León Cobos, Víctor Chapela. Chaired by Esperanza Marcos

NON-INVASIVE FACIAL RECOGNITION AT AIRPORTS .......... 57
Enrique Cabello

A STARTUP TRIP TO SILICON VALLEY .......... 63
Julio Casal

PHOTO GALLERY .......... 69


INTRODUCTION

Santiago Moral Rubio (Director of the Summer Course "Innovation for IT Security")

The evolution of a technologically globalized world is unstoppable and its adoption by society is unhindered. Citizens demand more and more transparency, and organized crime presses forward in its attempts to profit through technological fraud. Mobility is already becoming a leitmotif in the corporate landscape: paradigms such as BYOD (Bring Your Own Device) fill offices with new productivity tools (smartphones, tablets, etc.), access controls are enhanced and reinforced by identification based on mobile devices and biometric techniques, and identity federation opens the door to a scenario in which the re-use of identity is already a service. The provision of services in the cloud advances at a dizzying pace, and these changes inevitably involve the emergence of new threats that challenge the functioning and effectiveness of protection mechanisms.

This new situation requires companies to equip themselves with good tools for risk assessment, for the detection of abnormal behavior in large volumes of data, and for justifying decisions on the best protection strategies.

Innovation, a key driver of growth and competitiveness, thus emerges as an essential lever in the fight against the threats that affect information security and in achieving the desired levels of risk reduction.

The Research Center for Technological Risk Management (CI-GTR) is working on several projects to apply novel techniques and concepts to information security and the fight against fraud, taking into account innovative trends in the context of Information and Communication Technology (ICT). Within this scope, the CI-GTR convened the second edition of the Summer Course, held in the framework of the Summer School of the Rey Juan Carlos University.



The talks were given from 9 to 11 July 2012 in the city of Aranjuez. This edition was very well received, with more than one hundred attendees, including students, research teams, Spanish companies and representatives of international organizations.

In the course we heard the experience of an entrepreneur who has managed to position a startup of Spanish origin as a reference in its sector in Silicon Valley, while promoting a powerful open source software community around his project. We also saw how new techniques can be applied in the fight against technological fraud, providing the capacity to learn and adapt to the changing behavior of crime. In addition, innovative methods for the analysis of intentional risk and the application of graph theory to the vulnerability assessment of computing facilities were presented.

Through this publication, we convey to those interested the transcription of the lectures presented in this Summer Course.


PROLOGUE

Pedro González-Trevijano (Rector of the Rey Juan Carlos University. President of the Rey Juan Carlos University Foundation)

Again we find ourselves before a meritorious example of collaboration between business and university since, as already happened in 2012, BBVA and the Rey Juan Carlos University have joined forces to launch and develop new R&D initiatives. Both entities have been collaborating intensively for several years on the development of effective tools for the evaluation, detection, prevention of and protection against the risk that is ever-present in complex systems of information, data collection and control. In this way we respond to one of the priority needs of today's world, where security in bank communications and financial transactions is, without a doubt, an essential guarantee of the reliability of the system. We are therefore witnessing ambitious research in a cutting-edge sector, which BBVA and the Rey Juan Carlos University lead through the analysis and evaluation of effective and innovative techniques to assure the security of information and data transfer and to fight financial fraud.

Out of this imperative need the collaboration between both entities was born, materialized in the creation of the Research Center for Technological Risk Management (CI-GTR), which once again bears fruit with this publication. This Research Center has managed to place itself among the centers of reference at the national level and is working tirelessly to keep up with the much-needed internationalization demanded by a globalized world. This is evidenced by the publication in hand, a volume that collects the papers presented at the Summer Courses of the Rey Juan Carlos University. Thus, at the venue of the aforementioned courses in the Royal Site of Aranjuez, in the 2012 edition, the course "Innovation for IT Security" was carried out under the direction of Mr. Santiago Moral, in order to make the results obtained from the various projects carried out in collaboration with the Rey Juan Carlos University available to the academic and scientific community.




Again, the response of the scientific community was as expected, as was the level of the speakers and attendees: a significant number of participants and a cast of speakers of recognized national and international prestige, from both the business and the academic worlds. They all showed us that we still have a long way to go in risk analysis and the fight against fraud in the banking sector, and they enlightened us about new methods being researched and discussed in the most prestigious scientific forums.

There can be no doubt, all in all, of the usefulness of the work that I have the pleasure of presenting and prefacing, which completes and continues, in an outstanding way, the essential task of disseminating the research findings of the Research Center for Technological Risk Management (CI-GTR), for which I foresee a promising future.


INNOVATION AS RISK MANAGEMENT STRATEGY IN THE BBVA GROUP

Santiago Moral Rubio (Director IT Risk, Fraud and Security. BBVA Group)
Francisco García Marín (Innovation for Security. BBVA Group)

Two years ago we started this adventure with the creation, together with the Rey Juan Carlos University, of the Research Center for Technological Risk Management. In parallel, we also started a series of projects, both at the University and with technology companies, to develop the ideas we had at that time and create the security services we thought the market needed.

Currently, we are three people working in this scenario, and we rely on three areas of action: consolidated technology companies (Intel, GMV, RSA...), which have the muscle to carry out large projects; universities and research centers, both national and international, with agreements with universities in the United States and research centers in Israel, for example; and small venture capital companies. In this context, we either participate in the creation of new companies devoted to research, alone or jointly, or we take part by entering the capital of some of these technology startups or by buying their products.

In essence, from the Technological Observatory, the place where we work, we look at the challenges posed throughout the planet so as to be as present as we can in new technologies, either through agreements of some kind or by collaborating in their funding.



Having said all this, and recalling that we previously focused on a number of scattered projects, this function has now been created within Santiago Moral's IT Risk, Fraud and Security team, with a space of its own devoted to innovation.

Focal points

To begin with, I would like to explain, from our perspective, what the main points of interest are in security innovation. First I will give some context, and then we will look at the details.

There is now no doubt that the evolution of new technologies is unstoppable and their adoption unhindered. Presence on the Internet keeps competitiveness growing, and the spread to emerging countries turns visions of expansion into reality.

On the other hand, we are witnessing an ever-increasing division of the value chain. Companies outsource more and more, everything that is not their core business, thus abandoning the image of a monolithic structure that characterized them before. And all this happens in an environment where risks, formerly perceived as mainly internal, keep growing in their external form.

Another trend, still incipient, is migration to the cloud of both IT infrastructure and tools (email, collaborative environments, etc.). It is still a slow transition, but the migration of business processes is ultimately being considered, which makes this decision absolutely strategic for a company.

And linked to that, there is another trend, the deconstruction of processes, closely bound to outsourcing. In order to arrive at the consolidation of a given product, there are more and more participants producing intermediate pieces, each offering a service that complements the rest of the proposal. The global trend heads in that direction: all processes become of this type, betting on further optimization and efficiency, offshoring and splitting up processes.

In keeping with market trends, we cannot forget what has been called the "consumerization" of smart devices. They are more and more personal and, from a banking point of view, they are used increasingly for payment systems and business processes. All of this goes hand in hand with the popularity of social networks. Currently there are more than 500 million active users on Facebook, with millions of people posting reviews on the Internet about hotels, airlines and any type of consumer item, in an era when consumer advice is shifting from professionals to ordinary people. The rise of YouTube has also been spectacular: in 60 days it receives more uploaded content than professional media companies have created in the last 60 years.

There will be other fields where we will see new developments in the coming years, such as intellectual property, which will no longer be confined to music or video but will also extend to eBooks and to 3D printers, which will make it possible to copy physical objects that until now were easier to protect.


In parallel to this technological evolution, threats are also increasing exponentially. So you have to manage new risks too, such as data theft, denial-of-service attacks, cyberactivism operations, attacks against critical infrastructures and mobile devices, attacks against brand and business reputation, APTs (Advanced Persistent Threats), etc.

Better technology, better tools

In the midst of this panorama, the good news is that we have more and more advanced technology, which allows us to build better tools to defend ourselves. One of them is big data, the ability to extract information from large volumes of data, something that previously could not be done because there were no suitable tools, given the costs associated with processing and storage. I would also highlight abnormal pattern recognition systems, where we play a leading role; new developments in format-preserving encryption that allow operating on encrypted data; biometric systems; and tools for the analysis and management of security on mobile devices.

In this context, and celebrating that innovation in all of these scenarios is a lever for growth and increased competitiveness, our work areas are aligned to developing anti-fraud solutions; advanced payments; digital identity and authentication systems; cloud services; big data; new device environments; and the field of risk management, among other options.

In particular, we consider that the cloud world currently lacks sufficiently developed security systems to let us deal with these environments effectively and, at the same time, develop further controls over the companies to which services are outsourced. Therefore, in Google Apps too, we are devoted to finding tools that complete the security of this environment.

Following on from the cloud, we must assume that the traditional model of the data center is changing. The trend points to a growing number of company departments moving their processes to the cloud environment. And clouds are diverse, from full services with integrated application software down to the lower layers, which focus only on processing and storage along with infrastructure capabilities. And between both ends there are many intermediate points.

All of these scenarios have a new security perimeter, which can be managed by different tools and people, and this greatly complicates unifying and managing security in a cohesive and balanced manner. Actually, we would like a system that gives us a joint view of the dispersed cloud environments, and a virtual perimeter able to encompass both traditional and cloud systems. It is true that solutions responding to this need already exist today, but they are very fragmented; they lack maturity or address only partial aspects.

Data center services, now in the cloud

But which are the usual services of a data center that we would like to bring to the cloud?


We would be talking about the traditional perimeter protection of a firewall; DLP tools to prevent information leaks; information encryption; protection against malware; protection of the administrative systems of these cloud services; and profiling of devices, because we would access from any of them. In addition, identity services, both from the point of view of authentication and of identity provisioning for all those scattered access points, as well as Identity Federation, in order to make the physical dispersion of processes in the cloud more transparent to users, plus the possibility of federating identities with external entities that are massive entry points for our clients, such as social networks.

With regard to "consumerization", according to Gartner, since 2009 more smart devices have been sold to individual consumers than to professionals, and this trend will continue to grow. So we aim to protect all mobile devices accessing our systems. And here we would highlight trends in the management of these environments such as BYOD ("Bring Your Own Device").

On the other hand, if we now consider what is really new in the big data proposal, we realize that until very recently hardware and software systems did not have prices competitive enough to make possible, for instance, the analysis of all credit card transactions in BBVA over the last five years. That amount of information was literally unmanageable. Another example would be the analysis of all the telephone calls of a telco over a long period.

For some companies there also arises a chance to commercialize big data that could be useful to third parties. However, this forces the deployment of solid security measures that anonymize individual information, so that only statistical analysis remains possible. In addition, many initiatives are appearing to manage these databases; one of them is Hadoop, which is based on Google's way of storing data. In the end, in addition to encouraging the business to use big data securely, in the Security area we also make the most of this scenario to improve our own services, improve the classification of information, prevent cyber-attacks, verify the effectiveness of controls, and so on.
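As a purely illustrative sketch of the anonymization requirement mentioned above (the field names and records are invented, not BBVA's), one could pseudonymize direct identifiers with a salted hash so that only aggregate, statistical analysis remains possible on a shared dataset:

```python
import hashlib
import os

# Hypothetical example: replace card numbers with an irreversible salted hash.
SALT = os.urandom(16)

def pseudonymize(card_number: str) -> str:
    """Return a salted SHA-256 pseudonym for a card number."""
    return hashlib.sha256(SALT + card_number.encode()).hexdigest()

transactions = [
    {"card": "4111111111111111", "amount": 120.0},
    {"card": "4111111111111111", "amount": 35.5},
    {"card": "5500000000000004", "amount": 60.0},
]

anonymized = [{"card": pseudonymize(t["card"]), "amount": t["amount"]}
              for t in transactions]

# Aggregates per pseudonym are still computable; recovering identities is not.
totals = {}
for t in anonymized:
    totals[t["card"]] = totals.get(t["card"], 0.0) + t["amount"]
print(totals)
```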

Furthermore, in terms of digital identity, we are interested in biometric systems for the trustworthy identification of individuals, which can be used alone or complementing traditional systems; and also in Identity Federation, both internally and externally.

And, finally, in the field of risk management, we work with the Rey Juan Carlos University on the analysis of complex networks, looking for ways to measure the strength or weakness of a network with mathematical analysis and graph theory. We are also working at the University with groups on advanced risk analysis technologies, which will be explained in more detail in the following papers. And, of course, we closely follow the anti-fraud field, continuing research into algorithms to improve our ability to detect online fraud.


INTENTIONAL RISK IN DIGITAL ENVIRONMENTS: COMPLEX NETWORKS OF INTENTIONALITY

Víctor Chapela (Chairman of the Board of Sm4rt Security)
Regino Criado (Professor of Applied Mathematics of the Rey Juan Carlos University)

Regino Criado

Starting from the idea that we all want to make the collaboration formula between the academic and the business worlds truly effective, I am privileged to be able to tell you about this experience, in which we achieved a real company-university partnership model, articulated around the interaction of multidisciplinary groups, where each one does its part within a framework of mutual feedback.

To explain the origins of this collaboration, I should mention that some time ago we began to work with the Research Center for Technological Risk Management. We did several projects on tokenization, an ATM renewal model, etc.; and one day, during the previous summer course, we began to exchange ideas about our experience with complex networks.

Víctor Chapela

And there, thanks to this exchange of ideas, we started to envision working on how to model hacking.

In the beginning we started to use techniques from the world of chess, but they did not entirely work; and, in between, we picked up ideas from a book by Barabási that I had read about how networks connect as graphs, and the Internet in particular.



I thought that this was also related to hacking: points that were connected but at the same time difficult to model. And from there the idea actually arose of seeing it as part of this interconnected graph, where one element could interact with others.

What is a graph? A graph is a collection of interconnected nodes, and what is represented are those nodes and the relations among them. As matrices, the relations can be worked out mathematically, and transformations of those matrices let us convert, analyze or model the relations among the elements. Barabási's book said that Internet relationships are exponential: the most connected nodes at the center were exponentially more connected than the less connected ones. He called these "scale-free networks". And the same thing happened in hacking: the most connected points were the first to be hacked, the easiest to hack, such as databases or user and password administration systems.
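A minimal sketch of the idea of treating relations as matrices, assuming a toy five-node access graph (the node roles are invented for illustration, not taken from the project):

```python
import numpy as np

# Toy undirected access graph: node 0 might stand for a central database,
# the others for applications and users (purely hypothetical roles).
A = np.array([
    [0, 1, 1, 1, 1],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
])

# Degree of each node = row sum; in a scale-free network a few hubs like
# node 0 concentrate far more connections than the rest.
print(A.sum(axis=1))        # [4 2 2 1 1]

# Working with relations as matrices: A @ A counts the two-step paths
# between nodes, one simple transformation of the relation matrix.
print(A @ A)
```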

Then the question that arose was simple: what exactly are these graphs? And here I pass the baton to Regino Criado.

Regino Criado

The origin of graph theory (structures consisting of two parts: the set of vertices, nodes or points, and the set of edges, lines or sides, which may or may not be oriented) lies in Leonhard Euler's work on the seven bridges of the city of Königsberg (today Kaliningrad) on the Pregel River. The city's inhabitants had tried to show that it was possible to cross all of them and return to the starting point without crossing the same bridge twice. After a while, they concluded that there was no solution. Euler stated the condition such a walk would require: the number of edges leaving each node must be even. Later, graph theory evolved, and by the 1960s models of how graphs grow had been described.
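Euler's condition can be checked directly; a small sketch with the Königsberg layout, given only as an illustration of the theorem rather than anything from the talk:

```python
import networkx as nx

# The four land masses of Königsberg and its seven bridges as a multigraph.
G = nx.MultiGraph()
G.add_edges_from([
    ("island", "north"), ("island", "north"),   # two bridges
    ("island", "south"), ("island", "south"),   # two bridges
    ("island", "east"), ("north", "east"), ("south", "east"),
])

# A closed walk crossing every bridge exactly once exists only if every
# node has even degree. Here every degree is odd, so no such walk exists.
print(dict(G.degree()))   # {'island': 5, 'north': 3, 'south': 3, 'east': 3}
print(nx.is_eulerian(G))  # False
```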

In this context arises what we today call the theory of complex networks, which bears a great resemblance to graph theory but with one essential difference: in classical graph theory we worked with graphs of small size, with a small number of nodes and edges connecting them, as opposed to today's networks. The Internet is an example of a large, complex, interconnected network. If we conceive of the Internet as a graph in which two pages are connected when there is a hyperlink between them, what is the average distance, in mouse clicks, between two separate web pages? According to one study, it is 19.

It is important to understand this scheme. Thirty years ago, the predominant way of thinking was so-called "reductionist thinking": to understand a system, you break it down into ever smaller parts, understand the dynamic behavior of each of them, and then export that knowledge to how the system behaves globally. But this only works for a certain type of system, the so-called "linear systems".

To clarify this point, we will go into the difference between complex systems and complicated systems.


A complex system is a system that not only has a large number of components interacting among themselves, whether linearly or not, but also has a certain degree of non-linearity, reflected in the fact that the whole is not always the sum of its parts. While linear systems behave in a reasonable manner and without much variation, non-linear systems can either lead to periodicity or cross a certain threshold beyond which chaotic behavior appears, as in the neuronal world. Complexity can also come from the way interactions occur among the various components of the system. And here we find that there may be different ways of simplifying the models, considering either specific pairwise interactions or the interaction among all the components at the same time.

In general, on the Internet, non-linearity manifests itself not only in each of the components but also in the interactions among them. To take the example of the human genome: although it differs from a primate genome by only 1%, what really makes us different are the connections between genes, which allow specific genes to be activated and/or deactivated.

In this context, to work on the model that Víctor is going to present to you, graphs were not sufficient, and it was necessary to introduce a new concept. To see it graphically, take the Madrid underground map as a graph: the nodes would be the stations, and the edges the direct connections between stations. But each station belongs to one or several lines. Here appears the concept of layers: each line would be a layer, each node could lie in several layers, and there would be connections among the various layers, arriving at the concept of the multilayer network on which we have worked.

But what is the relationship between all the things we have been talking about and the question posed by Víctor a year ago?

Víctor Chapela

Regarding the specific matter of intentionality, not so much graphs, we had been working on three basic axes: accessibility, anonymity and value. The first relates to graphs; the last two, to the attacker's risk and the expected value. Each of them has different controls: separating and dissociating value; authentication, monitoring and reaction for anonymity; and authorization, filtering and isolation for access.

The underlying question for a graph is where to place the controls on my network so as to be more efficient and effective in terms of effort. And here there was a particular problem with those three axes: as we increased accessibility, various problems appeared. On the one hand, with total accessibility we are dealing with accidental risk, redundancy and the ability to keep something up or online; on the other hand, we have confidentiality, where, in contrast, we opt for restricted access. That is the kind of total security we are looking for at this time, and it is achieved with less accessibility.

On the other hand, we can also reduce the risk by reducing anonymity and value, but accessibility, above all, is what maps onto graphs.


And this is the case because we can take all accesses and all systems and convert them into nodes, seeing the relationships among them: who consulted information, who transformed it and who saved it. That allows us to segment the nodes and find relationships among them; to see which are the most interconnected points of our graph and put a border of trust around them that allows us to protect them. It is not about securing everything else at the same level, because part of the security problem is how we focus on what is really important. With these ideas we start to look at information in a multilayered fashion (how a complex structure is organized, which groups of users must be managed differently, which aspect of the information must come first, etc.). Thus, what we can do is isolate from our graph the parts with the highest intentional risk, those most likely to be hacked. Within this network, it is important to see which nodes have the greatest anonymity, that is, which attacker would be the most anonymous. And thus we arrive at the question I raised a year ago: how can we model this?

Regino Criado

We started to work on the understanding that the risk of intentional attack is a function, and we do not know whether it is linear or non-linear, of the value of the data for the attacker, its accessibility, and the degree of anonymity. And we created the multilayer network model (the big challenge of working with complex networks), which was directed, labelled and weighted, with the idea of building a complex network adapted to the complex digital environments of large corporations. And then the big question arrived: what can we represent within each of the layers?
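Purely as a notational aid (the symbols are mine, and the speakers do not commit to any functional form), the intentional risk of a node i could be written as:

```latex
% Notation sketch only; the functional form f is left unspecified in the talk.
R_i = f\left(V_i,\ A_i,\ An_i\right), \qquad f \ \text{possibly non-linear}
```

where V_i is the value of the data for the attacker, A_i its accessibility and An_i the degree of anonymity.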

Víctor Chapela

At first, one approach used to model accessibility, anonymity and value consisted of considering an electric circuit model, which can be depicted as a graph. On the one hand, we have the negative pole, measuring the anonymity of the person, and the positive pole, measuring the value for whoever is accessing. Hence, the greater the potential difference between the poles, the greater the risk. So if what we want is to reduce that risk, we have to place resistances, in the form of access controls, at the level of the network, applications, servers, and so on.
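One way to read the circuit analogy (my notation, an assumption rather than a formula from the talk): the "risk current" grows with the potential difference between the value and anonymity poles and shrinks as control "resistances" are placed in series:

```latex
% Illustrative analogy only, in the spirit of Ohm's law.
I_{\mathrm{risk}} \ \propto\ \frac{\Delta V}{\sum_{k} R_{k}},
\qquad \Delta V = \text{potential difference between the value and anonymity poles}
```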

We also worked with the idea of reducing anonymity or value through the dissociation or separation of data. This ended up converging on a single idea, a deterministic electrical circuit in which we could accurately determine the loads and the relationship between anonymity and value as two things measured on the same axis, while seeing that they were not equal.

In this process of understanding complexity as something multidimensional, each node had the three basic characteristics; there were no separate nodes for accessibility, value and anonymity. We had anonymity at one point and value at another, and we had to be able to spread those values through the complex network and see which controls were needed to reduce those maxima, watching how the value spread from one point to another. That is, if we have a database that is very valuable and it is connected to a web application, then if I hack either of them I have access to the same value. But if only a fragment of that information is transmitted, only that fragment is exposed.


Also regarding anonymity: if the attack comes from the Internet, anonymity is reduced if I put controls in place; if not, we continue with the same anonymity.

And this is where we began to understand that we were talking about different types of layers. We finally found 4 basic layers (I think we are now already at 16): the topological layer (a value and a level of anonymity is assigned to each network system); the network and risk-mitigation controls layer; the authorized access layer; and the potential access by affinity layer (here it does not matter whether I am allowed; what matters is that I can manipulate things to gain access).

Thus, each network device has all these facets, which we previously saw as different layers. And, in this way, a system might be composed of subsystems. In this regard, there was an important issue to take into account, the collapsing and expansion of these multilayer networks, because at their base, which is the complete network, we have a lot of information that we do not use later. We wanted to be able to collapse it from different perspectives, such as application and user management controls. Thus, the graph could be expressed as smaller graphs and represented by a higher-level node, or extended and expanded into other nodes. This gives us not only the multilayer structure but a scale-free capability, in the sense that we could go into as much detail as needed, so that every bit of a system would be a node, and see in each of them its value and anonymity.

From each of these layers we end up with three large groups: automated deployment (infrastructure), by scanning the networks and obtaining all the information; calculation of static risk (opportunistic risk), with information on value and anonymity spread through the network to see which applications carry more risk and to do the modeling in each layer; and dynamic risk, which does not focus on where I have access but on which route is easier for an attacker.

In essence, protecting systems from one another, and from themselves, is more important than protecting the flow downwards. And what we are starting to do now is dynamic risk.

Regino Criado

The affinity layer, corresponding to all unauthorized access, is reduced, in practice, to accesses that we did not count on. So we are talking about adaptive networks, not static ones, and hence about dynamic risk. The basis of our model is to quantify the risk of each of the network elements and to keep this in mind when collapsing, in order to visualize a system with a reduced number of parameters.

We have the three magnitudes (value, accessibility and anonymity) and three kinds of nodes (origin, intermediate and target nodes, where the value resides). The idea is that if we have a network of interconnections, the value should be distributed from right to left, so that, taking the value assigned to the nodes where that information resides, each of the nodes connected to them through a single hop or link would have, not exactly the same value, but a significant one. And if a node is connected to one of the nodes that access this value, it too would have some value.


The algorithm used is, in essence, the exponential of a matrix, which yields a vector of values quantifying how the value residing in the target nodes is distributed across the rest. The basis of this idea is to gradually decrease the influence of the value as it blurs across successive nodes.
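A minimal sketch of how such a spreading step might look; the graph, the edge weights and the exact use of the exponential are assumptions made for illustration, not the project's actual code:

```python
import numpy as np
from scipy.linalg import expm

# Toy directed access graph (hypothetical): node 3 holds the valuable data,
# node 2 reads it through a web application, and so on down the chain.
# Edge weights below 1 make the influence fade with every extra hop.
W = 0.5 * np.array([
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
], dtype=float)

# All the value initially sits on the node where the information resides.
v0 = np.array([0.0, 0.0, 0.0, 1.0])

# expm(W) = I + W + W^2/2! + ..., so a k-hop path contributes with weight ~1/k!,
# which matches the "gradually decreasing influence" described above.
spread = expm(W) @ v0
print(np.round(spread, 3))   # value decays along successive nodes away from the target
```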

On the other hand, we have accessibility. The idea is that, starting from nodes connected at the farthest points of the network and trying to reach the target nodes, accessibility is spread in a way that gives a certain importance to the intermediate nodes. This is given by the same algorithm that Google uses: if you drop a random walker at a point, the most important nodes are those it passes through. So we have two parameters, one to attribute value to each node of the network and another for accessibility. That leaves anonymity...
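A toy illustration of the random-walker idea using the standard PageRank implementation (an assumption about which variant is meant; the real model is more elaborate and the graph below is invented):

```python
import networkx as nx

# Small directed graph: several entry points funnel toward the target node "db".
G = nx.DiGraph([
    ("internet", "web"), ("wifi", "web"), ("supplier", "app"),
    ("web", "app"), ("app", "db"),
])

# A random walker visits nodes that lie on many routes more often, which is
# how accessibility assigns importance to the intermediate nodes.
scores = nx.pagerank(G, alpha=0.85)
print({n: round(s, 3) for n, s in scores.items()})
```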

Víctor Chapela

Regarding anonymity, we initially spread it as the maximum anonymity: if there is access from the Internet, it simply propagates. But expressed that way, anyone on the internal network would have access as if they were on the Internet, and we saw that this was not entirely true. Additionally, there was another fundamental problem from the point of view of normalizing our three axes. The accessibility and value axes were exponential and we were using logarithmic scales, so the two could be related very well. But anonymity involves a phase shift; it is a sigmoid curve. It is the perception of anonymity that the potential attacker has, so the perception of risk is not exponential even when the risk itself is. We feel that we have either a lot of risk or very little, but in the middle...

To solve this, we started trying to define different user profiles that could access from different environments. We saw three environments coming from outside the company: the Internet, the wireless network, and external suppliers; and then a DMZ, and a concept of safes, which formed a second transition zone.

We understood there were two transition zones: we go from high anonymity to low anonymity, and from low value to high value, which is where our safes are. And we began to define rules and patterns, looking at the roles at each point in the security area and the types of valid and invalid connections.

Instead of fitting a sigmoid where anonymity was, we thought it better to dissect it into layers, and we assessed how far different types of users coming from the Internet can reach. In this way we found twelve sub-layers of anonymity, with different values, which allowed the use of different controls.

In Mexico's Data Protection Law it was formalized this way: prioritizing risks by data type, then prioritizing risks by environment, and then establishing the prioritization of the controls themselves. We identify what the process is, and we define the risk by data type, from one to five (the greater the volume of data and the greater the risk by data type, the higher the level of control you need).


Intentional risk is divided into accessibility and external threat, and the latter is divided in turn into anonymity and value to third parties. Hence we arrived at what we call "latent risk" in Mexico's data protection legislation. There is the risk by data type, accessibility, anonymity... In the case of value, we simply had to dissociate and separate it. Accessibility and anonymity remained in a second box, where we had five types of anonymity, four of which are reflected in graphs: physical level, internal network, wireless, third parties and the Internet. We gave them specific values based on what we saw in the various statistics indicating where most information is stolen, and that resulted in different DMZ patterns to use, physical controls to consider, and so on.

Then, once the modeling of risk in a measurable way had been outlined, we could model the risk of a node, something that did not exist before. And right after that we can also see which route is easiest for an attacker, modeling the risks along it.

Regino Criado

All of this, in conclusion, is set out over time through evolving networks, temporal networks, interconnected networks, etc.; through centrality and other similar parameters; through network control, and so on. The important thing is that this collaboration has not only allowed the University and business to talk to each other, but also to provide tools for modeling. And it has allowed each proposal to get feedback for the sake of ease of use. Thus we will understand risk better: we will see how an IPS improves risk reduction, we will see where the firewall works best, and so on.

Dynamic risk is still what remains to be done; we are still determining how to model hacking. We have many ideas, but we still have to wait for them to take shape and draw conclusions.


SECURITY INTELLIGENCE AND RISK MANAGEMENT. WHAT YOU CAN'T SEE CAN HURT YOU?

Simon Leach (Pre-sales Director EMEA of HP Enterprise Security)

I would like to use my talk to discuss the value of security intelligence, at a moment when cybercriminals are finding ever more successful ways to get inside organizations. But before that, I want to introduce HP Enterprise Security.

HP is known in many areas of IT and now, after acquiring companies like TippingPoint, Fortify and ArcSight, it has decided to enter the field of enterprise security with the new HP Enterprise Security division, which is also supported by our global security research, where we study vulnerabilities, the problems associated with third-party applications, and threats in real time.

If you look at the Internet and at how companies use information technologies, it is important to note the existence of disruptive technology trends. In recent years we have seen many changes in the way work is organized. Before, all the technology sat inside the company's own building; now we place our trust in the cloud, in consumerization and outsourcing, and in BYOD, among other options. And all of this has an undeniable impact on security.

The cloud world

In "cloud" terms, one of the big concerns is who the data owner is.



Does responsibility fall on those offering subcontracting services in the "cloud", or is it something that always stays with the company itself and cannot be delegated? In effect, you are responsible for your information wherever it is. If your data is stolen from your "cloud" provider, you remain the person ultimately responsible for it. And this is something that sits in the small print of contracts with cloud providers.

On the other hand, consumerization leads to a situation in which, in many cases, we do not know what is happening in our network, because there are many devices and they can be managed both internally and externally.

And in the end, how many security solutions are we facing? In the market there are approximately 1,200 products in total, which do not communicate among themselves. So when product A sees that something is wrong, it cannot communicate this to product B, which, on its own, has also seen that something is wrong. We have an information correlation problem. We have to set up some sort of metric to measure what is really a risk priority for us. And that becomes evident when we see how cyber-threats have changed in recent times.

A few years ago, when someone broke into a website they put up a mocking face or something similar, and this was done by students who wanted to play around and make a name for themselves. Now they are professionals motivated by money. They sell information on a black market where they can get paid around $300 for each credit card number.

On the other hand, we are also witnessing an explosion of cyberterrorism and of what is called "offensive security", by virtue of which governments are investing in tools whose purpose is to attack their enemies. For example, it is said that the Pentagon is changing the pace of its cyber-strategy so as to have technology capable of short-circuiting any system that might want to attack the State. We all also know that malware was written to break the security of nuclear plants in Iran. We do not have exact details of the losses caused, but it is said that the attacked Iranian plants lost 20% of their capacity.

Another of the threats we face today is the APT (Advanced Persistent Threat). And here it is important to understand what a well-crafted APT is. It is advanced because it does not use only a single tool, but all the attack skills known to succeed; tools can be used to break in, then to identify possible targets, and then to seize them permanently. It is persistent because it is targeted against specific organizations and will not stop until it gets what it wants. And it is a threat because someone is doing this with the aim of causing damage.

In reality, APTs began as a governmental exercise. For example, in 1997 a project called "Eligible Receiver" was carried out by the US Government; the aim was to attack 36 government networks with the available tools. The results made the US Government wake up and take the matter seriously. In 1998 we had another APT: it was discovered that three men had entered the Pentagon network to steal information, and that was a warning signal to governments around the world.


And then we passed from the governmental field to other possible attack scenarios, such as the electricity grid. Someone commented recently that the Chinese have access to power networks around the world and could stop them at the push of a button. Think about that for a moment.

In this context, my favorite attack is Google Aurora. It has not been proven, but it was suggested that the story began with a group of Chinese hackers who entered the Google network to steal some of its source code. They did so because Google was having problems in China and they wanted to copy its formula to build a similar search engine. They broke into Google using an undisclosed vulnerability in Internet Explorer. If Google had been using its own browser (Google Chrome), Google Aurora would not have happened, because no one could have exploited this vulnerability through Chrome.

If I asked any of you whether you have suffered an attack, no one would raise a hand; and yet 25% of the companies we have spoken with in the last twelve months have experienced an attack or a breach of their data. And sometimes they are not even aware of it. An American Government adviser says very clearly that almost all US companies have been attacked by the Chinese Government. It is shocking.

Common attack pattern

But what do these companies have in common: Google, a uranium processing plant and the rest of the known attacks? All of them have been attacked in the last 20 months following the same pattern, which consists of exploiting zero-day vulnerabilities, for which we do not yet have an effective means of protection. So it is complicated for companies to protect themselves against something that as yet has no remedy.

Looking at the common attack pattern: the attacker first identifies where he wants to get in and what he wants to attack, and checks whether he can do it with a zero-day vulnerability. If so, we will not be able to stop him. Once inside, the attacker will gain privileged access to critical assets, scan the network, and so on. He can do it gradually, rather than immediately, to arouse less suspicion. And finally, he starts to steal information.

To understand how this could occur, we must first understand the steps that lead to this situation. What happens just before an attack occurs, or at its very beginning? Take the following example: an organization receives a request for remote access to its network, and an employee receives an email about it from China. A notification is sent, but nothing more. The next day, in the early hours, the employee opens the email, which seems to come from someone he knows, and opens the attached file. A remote access tool is then installed on the system and a notification is sent back to the same address in China. Two hours later, the server CPU exceeds its limits and triggers an alert. After a few minutes it disappears, and so the potential attack is ignored. At the same time, though nobody relates the two, there is also an increase in network activity, but it returns to normal after a few minutes, and this is ignored too. Thus, the network administrator, who sees a lot of outbound traffic, does not notice the true alert. A few days later, the information from this company's database is on Facebook.


What has happened?

In fact, the customer was doing things right. They had up-to-date security tools, they were monitoring the services, and so on; but they lacked the necessary correlation between all these warnings and pieces of information. Even with the best tools in the world, if we do not correlate the information we will not have enough evidence to find out what is really happening. Information is the key. We have to understand the weak points we have inherited, the weaknesses we have created, and the ones that can be used against us, and make sure that our security intelligence is working. We have to understand the weaknesses: all of us have third-party applications, network servers, databases... and each application is going to have a vulnerability.
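As a toy sketch of the correlation step missing in the example above (the alert records, thresholds and field names are invented, nothing vendor-specific): group alerts from different tools that touch the same host within a short time window, so that individually ignorable signals surface together.

```python
from datetime import datetime, timedelta

# Invented alerts echoing the story above: mail gateway, host agent, monitor, netflow.
alerts = [
    {"host": "srv01", "source": "mail-gw", "ts": datetime(2012, 7, 9, 8, 5),
     "msg": "remote-access request mailed from external domain"},
    {"host": "srv01", "source": "hids", "ts": datetime(2012, 7, 10, 6, 0),
     "msg": "new remote-access tool installed"},
    {"host": "srv01", "source": "monitor", "ts": datetime(2012, 7, 10, 8, 0),
     "msg": "CPU above threshold"},
    {"host": "srv01", "source": "netflow", "ts": datetime(2012, 7, 10, 8, 3),
     "msg": "outbound traffic spike"},
]

WINDOW = timedelta(hours=4)

def correlate(alerts):
    """Group alerts on the same host whose timestamps fall within WINDOW of each other."""
    groups = []
    for a in sorted(alerts, key=lambda x: (x["host"], x["ts"])):
        if groups and groups[-1][-1]["host"] == a["host"] \
                and a["ts"] - groups[-1][-1]["ts"] <= WINDOW:
            groups[-1].append(a)
        else:
            groups.append([a])
    # Only groups seen by more than one tool are worth escalating.
    return [g for g in groups if len({a["source"] for a in g}) > 1]

for incident in correlate(alerts):
    print("correlated incident:", [a["source"] for a in incident])
```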

Vulnerabilities in source code

Research into software development quality says that ten years ago there were normally around 40 errors per thousand lines of code on average. Nowadays, thanks to improvements in software development and the use of model-based approaches, this figure has come down considerably. Even so, considering that an iPhone has 13 million lines of code, Android 15 million and Windows XP 60 million, errors in code give rise to a huge number of vulnerabilities.
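As a rough, illustrative calculation using only the historical figure quoted above (the improved modern rate is not preserved in this transcript):

```latex
13{,}000{,}000\ \text{lines} \times \frac{40\ \text{defects}}{1{,}000\ \text{lines}}
= 520{,}000\ \text{potential defects in an iPhone-sized code base}
```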

In addition, although the number of published vulnerabilities has gone down considerably in the last 10 years, severity has gone up. For instance, in 2006 the proportion of vulnerabilities with a severity level of 8 or above was approximately 26%; currently it has reached 37%. That is to say, vulnerabilities are becoming more critical.

At this point we must highlight that only 1.3% of vulnerabilities are identified by the manufacturer itself, while more than 90% are found by third parties. In other words, there is a lot of intelligence out on the network.

In this context, the way the market has begun to deal with these product vulnerabilities is through the launch of so-called vulnerability bounty programs. Independent researchers find a problem in a commercial application and report it to the manufacturer, which pays them for it. This dynamic works as an incentive to find vulnerabilities in products.

At HP TippingPoint, in 2005, we launched the Zero Day Initiative, which was the first program of this type. In this way we try to take vulnerabilities off the black market, helping the manufacturer to solve its problem more quickly; and, in the process, we offer our clients proactive protection.

Vulnerabilities in proprietary software

Given the need to take a position on resolving the vulnerabilities associated with software code, the second point to consider is the weaknesses we create ourselves when we develop our own solutions to provide services to our customers. The disadvantage of these initiatives is that there is no one monitoring vulnerability patches for these systems, nobody looking after those applications.


That is why the number of disclosed vulnerabilities is lower in this field. But the vulnerabilities keep growing, because there are no patches to fix them.

Attackers know about this situation, this lack of monitoring, and they are also changing their attack patterns to target this customized software more frequently. At HP we did a study among more than 1,000 customers, and 73% of them reported having an SQL injection vulnerability in their code. In the financial sector it was only 8%.

On the other hand, we must also highlight practices based on purchasing vulnerabilities in order, for example, to get into a competitor's network. There are people who specialize in selling organizations access to unique, unpatched vulnerabilities of their competitors.

Unfortunately, today's security tools are not perfect. How many tools are there in a large organization? 60? 70? There are many different tools, and even if they were the best possible ones, we must still hope that the people who work with them fully understand the information coming from them and have a suitable process to reconstruct events. In the end, a SOC is much more than technology; it is also people and processes.

Internet: different levels of evil

In conclusion, I would say that we must understand that there are no longer "good guys" on the Internet, only different levels of evil. Anyone, at any given time, can be an adversary. In addition, legacy controls will not be able to stay completely up to date. At the same time, we have to understand the potential of the vulnerabilities we have, be able to turn all this into security intelligence, and share that information with managers at all levels of the organization, so that when there is a problem it is understood at the different levels, including its impact on the company. Because, among companies that have been attacked, the difference many times lies in how they have reacted to the attack. The goal is to deal with attacks proactively.

HP EnterpriseView

And how does HP deal with all this? From our point of view, you have to let yourself be guided by business needs and be able to see beyond the security tools, integrating corporate information with all the business operations.

After the acquisitions of Fortify, TippingPoint and others, we have started to feed all that information into a single point, a tool called HP EnterpriseView, which is designed to be a kind of dashboard for an organization. If we are asked why our proposal is different from others, I would say it is because we have also integrated it with other parts of the infrastructure, taking in information from the security compliance perspective as well. We take information from other of our solutions, such as Business Service Management (BSM), and combine it with vulnerability management tools in order to understand how all of this relates to the organization's risk management position.


IF YOU INNOVATE IN YOUR BUSINESS, WHY NOT INNOVATE IN YOUR PRESENTATIONS?

Gonzalo Álvarez Marañón (Scientist, Writer and Lecturer)

If you remember, when we were little our mothers always said, "Son, wash your hands before you eat"; and although doing so to guard against germs now seems obvious, it has not always been the case. In 1864, when Louis Pasteur presented at the Sorbonne the existence of tiny agents capable of killing us, it was unthinkable that something we could not see could end our lives. But Pasteur demonstrated the existence of these germs by giving the most innovative presentation the Sorbonne had ever seen: he used the equivalent of a PowerPoint of his day and carried out experiments in the very same room with cultures of germs and bacteria. And after that success, hundreds of thousands of lives were saved simply because doctors began to wash their hands before surgery.

We do not fight germs and bacteria, but we do fight the "bad guys" of the virtual world, and it would be a shame if, because of a bad presentation, our message did not reach the audience. This is what is called "death by PowerPoint": many times you end up preaching only to the bodies, because the minds have already left the conference room.

We have to see how we can innovate with our presentations. Because if we care about innovation in our business, why not also innovate in our presentations?



Yes we can

Something that usually surprises people a lot is to see a telecommunications engineer like me, with a doctorate in computer science, giving courses on how to speak in public. And here is the real catch: thinking that someone with this kind of degree is not qualified to teach such a course. And then we invert the argument and conclude that we ourselves are not able to make good presentations.

Now I ask you, and raise your hands... How many of you know how to draw? How many of you know how to dance? How many of you know how to sing? Very few will raise their hands... Think what would happen if we asked four-year-old children: everyone would say yes, without hesitation. The mistake is thinking that you have to paint like a professional painter, instead of simply painting. I propose that you draw a car, and let's see to what extent all of us can express ourselves with a drawing.

The same happens with presentations: we want to make perfect presentations, just as when we talk about drawing or dancing; and it's not about that. It's about making a good presentation so that people take our idea home, because the problem with mediocre presentations is that they are invisible; they go unnoticed.

If we make a graphic representation with a human Gauss curve [the lecturer asks 10 volunteers to form it and places the shortest people at the ends], we see that the 10% at one end are the terrible presentations, the 10% at the other end are the sublime ones, and the remaining 80% (everything in the middle) brings together the mediocre presentations. Fortunately, there is that 10% of extraordinary presentations, which manage to transform the audience and encourage them; and that is where we have to go. We must stop following the same patterns as everyone else, because that only yields normal results: mediocre ones. Although, of course, leaving the comfort zone implies a risk, since we all know that the line separating excellence from ridicule is very thin.

THE 3 GOALS OF ANY PRESENTATION

Regardless of its purpose – to inform, to persuade, etc. – any presentation always pursues three objectives: connect with the audience; capture, direct and hold its attention; and foster understanding and memory. We are going to analyze the implications of each of them and see what techniques can be used to achieve them; some are old, like the rhetoric of Plato, and others are more modern.

Connect with the audience

This connection with the audience will be at different levels. Basically, we’ll talk about three: intellectual, emotional and ethical connection.

The intellectual connection is based on the premise that the level of knowledge is similar between the speaker and the audience; there is a link between what I know and what the audience knows. Here we are going to do another exercise. I need a volunteer to tap out a series of tunes on the table with their knuckles so that the others can guess them. You see that it is difficult to recognize them this way; but once we share the name of the melody, you recognize it immediately. What happens with presentations? Sometimes we have knowledge that the audience does not share – what psychology calls "the curse of knowledge" – and what we need to do is reduce that distance and put ourselves in the place of those who don't know as much as we do.

The emotional connection speaks of our disposition, our emotions towards the audience. We all know what the result is if I give my presentation from a position of superiority and contempt. Therefore, you should always show yourself open, and demonstrate that you're pleased to be where you are. It's very simple: if you want to impress the audience, tell them about your achievements; but if you want to connect with them, tell them about your failures and your struggles. Of course, you also have to have faith in your audience, believe in them and in their possibilities. And all of this is seasoned with a way of speaking capable of conveying passion to the audience, because how will others believe what you don't believe yourself?

We arrive now at the ethical connection: how you have been transformed by what you are telling. You have to have lived the changes of which you speak; if not, it is not very credible. You can't talk about something you have not experienced. Aristotle said in his Rhetoric that "credibility is born from the balance between logic and emotion"; and what usually happens in presentations, especially technical ones, is that the contents and arguments are biased completely towards logic. At this point, the discovery that Kahneman and Tversky made in developing the so-called Prospect Theory is interesting: according to it, human beings make decisions first emotionally, and then justify them at a rational level. For example, you first choose the candidate for a specific job because of the impression he or she made on you, and only then go through the curriculum.

To connect with the audience you have to know what their DNA is: their attitude and resistance – whether they are people with great interest in the subject, whether they've been "forced" by their companies to attend, etc. –; their demographics – it's not the same to talk to one person as to 500, or to telecommunications engineers as to lawyers –; their knowledge of the subject; their position with regard to it – it's not the same to talk to parents as to children, even about the same topic –, and so on. And based on all that, we adapt the speech.

For example, if we find high resistance, something that usually works very well is loss aversion – say something like: "If you do not install this tool, your system will go down". To demonstrate this, let's play a game. If I offer you the choice between winning €500 for sure and winning €1,000 on a coin toss, how many of you would prefer the €500 in cash? We are more affected by avoiding the loss of a certain amount than by winning that same amount.

On the other hand, a disastrous pedagogical mistake is to throw our responses to the audience as if they were stones, before listening to their questions; because, in the end, in a presentation, what the audience is looking for is to get their problem, concern, solved.

Centro de Investigación para la Gestión Tecnológica del Riesgo�0

Innovation for IT Security 2012 Summer Course

In view of all this, let's do an exercise in self-presentation: what you say about yourself when you meet someone. If you pair up and for 30 seconds tell each other who you are and what you do... how many of you have tried to recognize the other person's need and offered to help solve a problem? In our presentations we say, for example, "I work as an auditor", and leave the other person to deduce whether what I do is useful to them. But if you use the phrase "My name is... and I help... to achieve...", everything changes. I, for example, introduce myself like this: "I help people – managers, engineers, entrepreneurs, etc. – to tell the world their stories, the ones inside their hearts, and inspire them to change others for the better". Now introduce yourself like this, and see what changes.

Catching, leading and holding attention

With this second objective we run into a piece of bad news: the attention curve. At the beginning of the presentation we have 100% of the attention; as we move forward it decreases, until it is almost completely lost. So if you say "to finish", finish.

If we want to adapt ourselves to the audience's attention curve, what we should do is say the most important thing at the beginning and not at the end. If we run an experiment and I show you a list of words, how many would you remember? Most people remember the words at the beginning... but what happens to those in the middle?

Once this has been checked, how can we rekindle attention in the middle of the talk?

We can basically do four things: resort to statements – if I say, for example, "the iPhone is superior to ..." –; to evidence (either logical or emotional); to illustrations – stories, anecdotes, testimonials, videos, photographs, etc. –; and, of course, to the participation of the audience, so it's not a one-way delivery of information. In the latter case we can ask questions, propose games to confirm an idea, and so on. In short, there are no boring topics, only boring presentations.

That being so, it's important to catch the audience's attention at once, with a starting signal – with something that will attract their attention immediately. Here we could speak of four mechanisms: using an anecdote or a metaphor to reinforce our idea; asking the audience a question; using data or statistics not known so far, or mentioning a surprising fact; and, certainly, taking care of the design of our slides.

On this last aspect, if we analyze some slides, we see lots of wasted space, and perhaps we could put something funny in some of them. And here there is something very important to keep in mind: for whom do you make slides? Do you make them for yourself or for the audience? For you they should be just a prompt: you are experts in the subject matter and don't need a comprehensive reminder in slides full of text, because otherwise, what are you there for? What we could do, to really make presentations for the audience, is, for example, segment the content – if you use full-screen photographs, don't use the same recurring photos, use your imagination; if you use a messy slide full of text, what message are you transmitting? – or mask the message in creative ways – bullet points, presenting the key points as a self-assembling puzzle, making use of audio possibilities, etc. In the end, one conclusion on this point: use a single idea per slide, because otherwise it will seem that you are subjecting the audience to a game of Where's Waldo.

Promoting understanding and memory

We arrive at the last point. Here it is important to have a basic knowledge of how our brain works. Cognitive psychology tells us that there are three types of memory: sensory memory; working or short-term memory, which holds only 3 or 4 elements; and long-term memory. However, working memory also allows us to remember longer sequences if they are cut into chunks. For example, we know our phone number or our passport number because we remember grouped digits – in twos, in threes – not digit by digit.

From all this we derive the importance of a good structure in facilitating memory and understanding. It is not enough to select the ideas; they must be put in order, and in an order that also facilitates understanding. If I give the audience 100 ideas, they will remember 3 or 4; and if I give them 3 or 4, they will remember exactly what I gave them. So we should reduce the number of ideas to three, and then organize them well.

It's not necessary to put all the information into the presentation. What tends to happen is that if we try to tell everything, in the end we convey nothing. The more information we give, the less reaches the audience. Imagine that this triangle drawn here encompasses all the information we handle for the presentation. In the part closest to the vertex is the most interesting information we want to transmit, what is most seductive for capturing their interest and what most encourages the desire to look for more of that information. It is about opening the door to greater interest. If we have transmitted our idea well, and their interest has been awakened, we can tell them where to look for more information to expand their knowledge. And this can be in a wiki, in a manual, in books we can hand out, in documents that I can also deliver or upload to the Internet, and so on.

Many times, giving all the details of the product is not the most important thing, because they are of no interest. It's usually much more impressive to show, as Steve Jobs did with his creations, how extremely thin the product is, for example. Appeal to emotion.

In conclusion, I encourage you to think more about your presentations, to look them over, and to be aware that saying the same thing in other words can have a completely different result. Restate your message. Think that you could achieve with your presentations what Pasteur did with his, saving millions of lives... Perhaps not saving lives exactly, but changing the lives of some.

I will finish with this: a presentation can change the world, so all of you, innovate in your presentations. Don't stop innovating.


FERNANDO ESPONDA

I am pleased to speak of the joint project between Sm4rt and BBVA, because it's important and inspiring to find companies that continue to devote effort to research.

To immerse ourselves in our project of fraud prediction with innovative technologies, the first thing we had to consider was what the problems of credit card fraud were. And we found two: one, matching a card to a single customer; and two, ensuring there is a single copy of a card. The first is addressed using documents such as the ID card or the credit card PIN (Personal Identification Number); and the second, by incorporating chips or magnetic stripes into the cards. However, technology is not always used properly and non-face-to-face transactions keep growing; so how do we really make sure that whoever makes the transaction is, in fact, the entitled person?

At this point, we can find the answer in the usage scenario and in behavioral patterns, which can tell us whether the detected behavior corresponds to the card holder's usual one. And here we have two kinds of models: the so-called traditional one, based on rules – "If this happens, I interpret it as fraud" –, and the automated models, which are based on algorithms. The first are necessary but not sufficient, since we also have to consider patterns that are not intuitive, patterns that many

Fernando Esponda (Director of Research of Sm4rt Predictive Systems)

Luis Vergara (Professor of the Communications Department of the Polytechnic University of Valencia)

PREDICTION OF FRAUD WITH INNOVATIVE TECHNOLOGIES


times the subject doesn’t even know and patterns that require lots of data and involve too many logs; so an algorithm is needed.

Techniques for an automated model

Some techniques – neural networks, decision trees, etc. – are used to find patterns in data in order to create an automated fraud detection model. This is achieved by analyzing records of card information: amount of the operation, place of purchase, date, and so on. However, each technique makes its own assumptions about what a relevant pattern is, so once a model based on a specific technology becomes effective, we capture certain types of patterns and others are ignored. In addition, a single model is not able to identify every way of defrauding; not to mention that fraudsters themselves end up learning how to evade detection of their actions.
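As a rough illustration of this kind of pattern extraction – a minimal sketch that assumes scikit-learn is available and uses invented field names and synthetic data, not real card records – a decision tree can be trained on a handful of transaction features:

# Minimal sketch: a decision tree trained on synthetic card-transaction
# features (amount, hour, card-present flag, country mismatch). All field
# names and the labelling rule below are illustrative only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.exponential(80, n),      # transaction amount
    rng.integers(0, 24, n),      # hour of day
    rng.integers(0, 2, n),       # card present (1) or not (0)
    rng.integers(0, 2, n),       # country differs from issuing country
])
# Toy labelling rule standing in for real fraud labels
y = ((X[:, 0] > 300) & (X[:, 2] == 0) & (X[:, 3] == 1)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = DecisionTreeClassifier(max_depth=4).fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))

A different technique (a neural network, say) trained on the same records would carve up the feature space differently, which is exactly the point made above about each technology capturing some patterns and ignoring others.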

Considering all this, and given that a good part of the current models are based on the same technology, the premise from which our research started was to verify whether models based on different technologies were able to find different things. And from this, we had two goals: to find a genuinely innovative technology for a fraud detection model – and to see how to combine technologies to build a more complete model – and to ensure that it would not be a substitute for what we already have, but a complement to the existing proposals.

Four Technologies

According to our studies, there are four innovative technologies (artificial immune systems, negative databases, graphs and hierarchical temporal memory). In our project we opted for the last one, and in this direction we bet on the technology sold by the firm Numenta, which had developed algorithms as a kind of neural network, based on how the neocortex works, but with the emphasis on finding temporal patterns in the data.

So we find ourselves with an automated combination of supervised and unsupervised learning. In the latter, the data observation phase doesn't focus so much on the categories "fraud" or "no fraud"; it simply tries to find some consistency in the data. Then the supervised part takes the patterns found previously and tags them. The interesting thing about the Numenta solution is that it finds spatial similarities in the data, and then, among them, tries to find temporal similarities.
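The following is only a generic sketch of that unsupervised-then-supervised idea – plain k-means clustering followed by tagging each cluster with its observed fraud rate – and not Numenta's hierarchical temporal memory algorithm; the data and the threshold are invented:

# Sketch of the general idea: first find structure in unlabelled transactions,
# then use the known fraud labels to tag the groups that were found.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 4))               # illustrative feature vectors
y = (X[:, 0] + X[:, 3] > 1.5).astype(int)    # stand-in fraud labels

clusters = KMeans(n_clusters=10, n_init=10, random_state=1).fit_predict(X)

# Supervised step: tag each cluster with the fraud rate observed inside it
fraud_rate = {c: float(y[clusters == c].mean()) for c in np.unique(clusters)}
suspicious = [c for c, r in fraud_rate.items() if r > 0.5]
print("clusters tagged as fraud-like:", suspicious)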

With our fraud detection model, based on the Numenta technology, we were able to prove our initial premise: we concluded that using an innovative technology works, and that by increasing the options we are indeed able to detect more fraud patterns. So the first part of the objective was accomplished. But the second goal still remained; namely, to what extent the patterns we could find are different from those coming from traditional techniques, such as neural networks, and how to combine them to get something better.

Defining and measuring the difference

To understand exactly what it means to be different, we first needed a definition and then a measure that could quantify it. The definition we arrived at was this: for a given data set, the difference lies in one model classifying a certain subset of that data better than the other model.

As for how to quantify this difference, the number we are looking for must be minimal when one model subsumes the other, when it is better across the board; and we'd like it to be maximal when, for exactly half of the transactions, one model is better than the other. The complementarity index has to do with the point where the classification-quality lines of the two models cross, seeing which transactions fall on one side and which on the other; the highest value appears when one of the models is better than the other for exactly half of the measurements. In essence, it resembles Shannon's measure of information. Once this was done, what we did was merge the ratings of the different models to obtain a final rating.

What we did was divide the results into 'fraud' and 'no fraud'. And it turned out that the premise is indeed true: technologies with different patterns find different things, and it is worth combining them. In fact, we found a way to mix models so as to take advantage of the strengths of both. However, this is an ongoing investigation, still looking for better results.

LUIS VERGARA

For my part, I'm going to describe the in-Fusion platform, which we are developing at the Polytechnic University of Valencia in our Signal Processing Group, within the Institute of Communications and Multimedia Applications, and which we have applied, among other things, to the joint project with the BBVA Group on fraud detection.

Since the objective is the detection of fraudulent bank card operations, and starting from pattern recognition applied to fraud detection, we go into the difficulties associated with the use of cards in their different modalities. In this sense, whenever a card is used, a record is stored, and based on this information a machine must tell us whether or not we are in the presence of a possible fraud.

In this context, we will go into a little more detail to see what the general problem is, what the specific application is, what we understand by a pattern, and so on; so that, based on that information, we make a decision from among a set of possible decisions.

Regarding the application to detecting fraudulent bank card transactions, the initial pattern is made up of the information record created after the transaction (amount of money, date and place of the operation, etc.), and from it we will decide whether or not it is a fraud. To be a little more precise, we will produce a rating that, coupled with the experience of the operator, will allow the operation to be authorized or not.

Three stages

Every pattern recognition process has three stages. In the first one, the environment is measured and the record data of each card transaction are obtained; we thus obtain a set of numbers, which the machine will use later. We also have to discriminate according to how useful the information is and how redundant it is, while at the same time the size of the sample is reduced. In the second stage, which is the core of pattern recognition, we generate the scores, which represent the likelihood of fraud: a number between 0 and 1. And, finally, we arrive at the third stage, threshold setting, where we decide where to put the threshold that triggers an alarm, a notice which will be received along with the associated score.

This is also the point where the problems associated with machine learning – generating the scores and defining the threshold – arise. In the end, what we have is a pattern space with two regions ('fraud' and 'no fraud'), such that when a transaction arrives it falls into one of the two areas.
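A minimal sketch of those three stages, with invented field names and weights (a logistic function stands in for whatever scorer is actually used), might look like this:

# Stage 1: measurement / feature extraction; Stage 2: a score between 0 and 1;
# Stage 3: threshold setting that raises an alarm together with the score.
from dataclasses import dataclass
import math

@dataclass
class Transaction:
    amount: float
    hour: int
    card_present: bool

def features(t: Transaction) -> list:
    # Reduce the transaction record to a small set of numbers
    return [t.amount / 1000.0, t.hour / 23.0, 0.0 if t.card_present else 1.0]

WEIGHTS, BIAS = [2.5, 0.8, 1.7], -3.0

def score(t: Transaction) -> float:
    # Illustrative logistic scorer: outputs a fraud likelihood in [0, 1]
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features(t)))
    return 1.0 / (1.0 + math.exp(-z))

THRESHOLD = 0.5

def classify(t: Transaction) -> tuple:
    s = score(t)
    return ("fraud alarm" if s >= THRESHOLD else "no alarm", s)

print(classify(Transaction(amount=950.0, hour=3, card_present=False)))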

In essence, I think that all the procedures we currently have for classifying with machines can be placed within these three options, and their possible combinations: the population density of each class, or probability density; the distance to representative values of each class; and the distance to a certain separating border, where the result depends on how far from or near to the border the sample is.

Fusion of detectors

On the other hand, I would also like to mention four significant aspects in this area: how we choose the set of labeled patterns for learning; how the environment changes and, with it, the measured population densities; how the objective function to be minimized is defined; and what philosophy we choose for merging detectors, which allows us to start from simpler, easily trained detectors and turn them into a more sophisticated one through their fusion. In the case of fraud detection we can perform fusion at any level, since the detectors share the same input (the transaction record).

Specifically, if we have two detectors, we can perform soft fusion, merging the scores of both into a single score; or hard fusion, merging the decisions of both detectors, in what we call a "hard fusion algorithm". If the detectors are homogeneous, equally reliable and statistically independent, the algorithms are simple; not so if some are more reliable than others.
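A hedged sketch of the two schemes for two detectors, with toy scores and an OR rule chosen arbitrarily for the hard case:

# Soft fusion combines the raw scores; hard fusion thresholds each detector
# first and then combines the 0/1 decisions (here with an OR rule; AND or
# majority voting are other common choices).
def soft_fusion(s1, s2, w1=0.5, w2=0.5):
    # One simple choice: a weighted average of the two scores
    return w1 * s1 + w2 * s2

def hard_fusion(s1, s2, t1=0.5, t2=0.5):
    d1, d2 = int(s1 >= t1), int(s2 >= t2)
    return 1 if (d1 or d2) else 0

s1, s2 = 0.35, 0.80        # scores from two detectors for the same transaction
print("soft fused score:", soft_fusion(s1, s2))
print("hard fused decision:", hard_fusion(s1, s2))

With unequal reliabilities, the weights (or the fusion rule) would have to reflect that, which is where the design stops being simple.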

But why might we be interested in fusion? Because when the two detectors come together we have the opportunity to perform an operation that neither of them could do on its own; and in this way we can achieve complex operations.

Soft Fusion, better than hard

If we run simulations of these fusions, assuming the scores are independent under both hypotheses, fraud and no fraud, we find that soft fusion, and especially optimized soft fusion, is always better than hard fusion, because the later we discard information by setting thresholds, the better. However, the soft scheme is more complex to design than the hard one, because handling 0s and 1s is easier than handling continuous numbers between 0 and 1.


Then we have the ROC curves, which plot the probability of detection against the probability of false alarm. In these curves, the higher the curve lies, the better the detector. We have computed these ROC curves for all the detectors: the individual ones, the optimally merged soft ones, the ones merged assuming statistical independence, and the hard fusion. The individual ones lie below the hard fusion, the soft fusion and the optimal soft fusion – the latter coinciding with the fusion that assumes independence, because in this case there was independence. What this shows is that fusion is worthwhile, because with both hard and soft fusion the behavior improves over that of each detector separately. And, if we can, the best thing is soft fusion, which gives the best-looking curve: for a given probability of false alarm it yields the maximum probability of detection.
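As a rough sketch of how such curves are obtained – simulated scores rather than the real detectors, and a plain averaging soft fusion – one can sweep a threshold and record, for each value, the false-alarm and detection probabilities:

# ROC sketch: two simulated detectors and a simple soft fusion of their scores.
import numpy as np

rng = np.random.default_rng(2)
n = 20000
fraud = rng.random(n) < 0.05                  # 5% simulated fraud
s1 = rng.normal(loc=fraud * 1.5, scale=1.0)   # detector 1 scores
s2 = rng.normal(loc=fraud * 1.2, scale=1.0)   # detector 2 scores
fused = 0.5 * s1 + 0.5 * s2                   # soft fusion by averaging

def roc(scores, labels, n_points=50):
    thresholds = np.quantile(scores, np.linspace(0, 1, n_points))
    pfa = np.array([np.mean(scores[~labels] >= t) for t in thresholds])
    pd = np.array([np.mean(scores[labels] >= t) for t in thresholds])
    return pfa, pd

for name, s in [("detector 1", s1), ("detector 2", s2), ("soft fusion", fused)]:
    pfa, pd = roc(s, fraud)
    order = np.argsort(pfa)
    auc = np.trapz(pd[order], pfa[order])     # area under the curve as a summary
    print(name, "AUC ~", round(float(auc), 3))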

On the other hand, consider the case in which, under the no-fraud hypothesis, statistical independence no longer holds: the figure depicting the two statistical populations then tends to look like an ellipse rather than a rounder shape – statistical dependence tends to flatten circles into ellipses. That is how an individual detector behaves. We then look at the optimum of soft fusion assuming independence, and at hard fusion, which is the same as before even though there is now dependence. Since hard fusion doesn't have so much flexibility, there isn't much difference for it between dependence and independence; but, in this case, it appears that hard fusion (that is, first setting thresholds and then merging decisions) fits better than soft fusion designed under the independence assumption, and is very similar to what would be optimal.

At the same time, in that example, statistical dependence under the no-fraud hypothesis means that when there is no fraud the two detectors say more or less the same thing, but when there is fraud they behave more independently; so here an optimal hard fusion may be an option worth considering, even though there is dependence, and that complicates things.

Three final comments

In conclusion, I would like to make three comments. First, I consider that these pattern recognition technologies are demonstrating their importance as complementary elements – though not the only ones – in fraud detection, providing information that can come in useful. Second, the fusion of detectors is a good answer to the constant changes in fraud patterns, especially if we choose to merge detectors that are not very complicated and are easy to train. Finally, we must assume that each type of problem will require a form of fusion adapted to its needs.


I don't know how many of you are familiar with Game Theory. What I am going to do is focus on one aspect that is usually omitted: intentionality. In risk management we have to think about how the other party thinks. We have to keep in mind that the "bad guys" will do whatever they intend to, whatever the case may be. They will bring all their resources into play.

While last year Santiago Moral used Darwin to talk about co-evolution, I'm now going to talk about this idea: ground-breaking co-evolution. I think risk management is characterized by high-risk, high-effort analysis, and we need a change, to be ground-breaking, as discussed in other talks. We need to assess risks and see what measures can be used to reduce and control them. But the most important part is missing here: what those willing to attack are doing, how they are carrying out their own analysis. Because our analysis is based on what I do, not on what the other party does. Many techniques and methodologies don't consider the analysis performed by the other side, the side of the "bad guys". And it is precisely this evolution together with the agents of the environment that will allow us to improve the fight against fraud.

We have a large number of methodologies to choose from. And we have a maze. We also have some ground-breaking paths.

Jesús Palomo (Tenured Professor of Business Economics at the Rey Juan Carlos University)

A RISK MANAGEMENT MODEL BASED ON GAME THEORY


If we turn to cryptography, it gives me some facts that will be useful later in the presentation. We start from some assumptions: the enemy knows the system, because they have an effectively unlimited capacity to analyze it; if the message contains some information, they will get it for sure; and if there is any algorithm that can break the system, the attacker will use it. It is assumed that all of this can happen and that the "bad guy" will do his best to achieve his goals.

In the end, there is no security through obscurity, and this is reflected in public- and private-key cryptography: the method may be known, and the only thing that must remain unknown is the key. And what is the fortification principle? It says that the defender has to repel every attack. While I have to defend myself against everything, the attacker only has to find one hole to get in. And it is certain that he will be analyzing me all the time.

Regarding complete risk management, what do we do when we analyze risk? We look at what is at stake, the associated costs, the potential impacts, and what the consequences of our decisions will be. We evaluate risks, and see what can go wrong and what the possible consequences are. The sources of risk can be hardware or software; and, again, the missing part of the equation is what the attacker is going to do.

What happens, then, with other risk management methodologies not based on Game Theory? They assume that the other agent, the attacker, does not change. But if we forget the other party, we won't achieve our goal. Let's take an example: what happens with a speed camera? Does it really achieve its goal of reducing speed? No, it doesn't, because drivers only slow down when they approach the camera and speed up again right after it.

Instability principle

Once this analysis is done, we see that we end up with more risk. This is what I call the "instability principle" of traditional risk management. I call it unstable because when I put a measure in place, the enemy reacts, which means I have a new problem, so I analyze again and try to protect myself again. And so we start over, in a closed loop.

This being the case, some maxims worth keeping in mind in the face of this problem are the following: after a change of strategy, the attacker's reaction pushes us back to the beginning, to start over; there are ad-hoc models for specific sectors which then don't apply to other problems; other methodologies are simply descriptive models that never reach the point of suggesting a decision; and there are sources of risk that are not accidental but intentional, such as organized crime.

By analyzing all of the above, we must never forget that crime is very intelligent – those who aren’t are already in jail –. Only the intelligent ones remain, who reinvent themselves.

Studying crime

Resources are limited. So, apart from working to ensure that everything runs well, we must also analyze what crime is thinking. But looking only at historical records doesn't work, because you can't assume that in the future they will do the same as in the past.

A game consists of agents, strategies and payoffs. Regarding agents, although there may be more (competitors, telecom companies, Internet service providers), we will simplify by assuming two: the Bank and organized crime. On one side, the bank plays a variety of strategies to defend itself against attacks; these are its methods of defense. On the other side, organized crime plays strategies (preparation methods) to prepare itself to obtain something from the Bank (for instance, information or money) and, afterwards, to execute the robbery using those assets by choosing execution methods. Finally, there are payoffs resulting from the combined actions (the Bank's defenses and crime's attacks); these are gains (for organized crime) or losses (for the Bank).

What does traditional management (defined as any method of risk management not based on game theory) do in this context? Basically two things: identification and quantification of the risks (the bank builds probabilities, based on historical series or expert opinion, that organized crime will use each preparation method); and decision-making, based on those probabilities together with a cost-benefit analysis, choosing the defense method with the lowest cost. Here, unlike in game theory, there are instability problems.

Let's take an example. In the case of the income tax return, we have two players and their strategies: the Treasury's and the fraudster's. If the fraudster's strategies are "declare everything" or "declare nothing", the Treasury's are "review all tax returns" or "review none". The Treasury strikes a balance: it samples, it has alarms, and when these go off it reviews. Let's see whether each combination makes sense or not.

One case is when the Treasury reviews everything and all taxpayers declare. Another is when the Treasury reviews nothing and no one declares. Both may seem to be equilibria, but they aren't, since they don't provide an optimal outcome for either the Treasury or the taxpayer (there are incentives to deviate and benefit from it). In the first case, the Treasury spends a lot of money when it isn't necessary, because everyone is declaring; and in the second, it collects nothing because it reviews nothing, while everyone is evading.

Now let's take the other variants, to see whether they are equilibria. In the variant in which taxpayers declare but the Treasury reviews no tax returns, the risk is that taxpayers stop declaring once they know the Treasury will not review; this makes no sense and is an unstable situation. Another variant is that taxpayers don't declare and the Treasury reviews everything; there is no equilibrium there either. Since there is no equilibrium in pure strategies, we turn to a randomized strategy: each player plays each of their strategies with a certain probability. That is indeed what happens in practice, and such a strategy endures precisely because it is an equilibrium. That is what we want to find and assess for the risks we face, since we want to obtain equilibria. We don't want to put measures in place and then, after a reaction from organized crime, see the problem change and have to push the stone up the hill again.
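As a small sketch of how such a mixed-strategy equilibrium is computed – the payoff numbers below are invented for illustration and are not figures from the talk – each side randomizes so as to leave the other indifferent between its pure strategies:

# Inspection-game sketch with made-up payoffs. Row player: Treasury
# (review / not review); column player: taxpayer (evade / declare).
from fractions import Fraction as F

# Taxpayer payoffs: evading costs -5 if reviewed, gains 3 if not; declaring is 0.
# Treasury payoffs: reviewing an evader gains 2, a needless review costs -1,
# an unreviewed evader costs -4, and "no review, honest declaration" is 0.

# Probability p that the Treasury reviews, chosen so the taxpayer is
# indifferent between evading and declaring:  p*(-5) + (1-p)*3 = 0  ->  p = 3/8
p = F(3, 8)
# Probability q that the taxpayer evades, chosen so the Treasury is
# indifferent between reviewing and not:  q*2 + (1-q)*(-1) = q*(-4)  ->  q = 1/7
q = F(1, 7)
print("Treasury reviews with probability", p, "; taxpayer evades with probability", q)

At those probabilities neither side gains by changing unilaterally, which is exactly the kind of stable randomized behavior described above.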

Benefits of analysis in game theory

Game-theoretic analysis allows us to solve simultaneously the identification and quantification of risks and the decision-making. We do not base this on probability estimates of the opponent's behavior; we only want to assess whether the final outcome is an equilibrium or not. In equilibrium, the Bank's best method of defense is the best response to the enemy's best attack. That is an equilibrium. In addition, in game theory predictions are stable (nobody has an incentive to change). We are not relying on assumed probabilities, hence the equilibrium is stable.

According to the Nash equilibrium, no agent is interested in unilaterally changing its decision given what its opponents are doing. No one moves. It is vital to define the agents' objectives, which are what gets measured. As for the Nash equilibrium in mixed strategies, like the one we saw with the Treasury, both players play a mixed strategy with certain probabilities that depend on the consequences. There is no Nash equilibrium in which each side commits to a single fixed action; instead, with a certain probability each will play one strategy or another. But that is also an equilibrium.

My prisoner’s dilemma

I'm going to tell you my own prisoner's dilemma, the kind of dilemma with which John Nash's story is usually told. A few years ago I shared a car with my brother, and one day the car got a dent. The punishment was 10 days without the car if my brother accused me and I said nothing.

Taking a traditional decision analysis, not based on game theory, what options would I have? If I say nothing and neither does my brother, the punishment is a weekend without the car for both of us. If I accuse my brother and he remains silent, I avoid punishment and my brother gets 10 days without the car. If I keep silent and he accuses me, I get 10 days without the car. And if I accuse him and he accuses me, the decision is split evenly: 5 days without the car each.

In traditional decision-making, where I don't consider what my brother does or will do, if I keep silent my expected loss is 5.5 days; if I accuse my brother, it is 2.5 days. So what do I decide to do? I accuse my brother.

Now let's see what happens from the point of view of game analysis, where I do take into account what my brother does. Is there a strategy that makes sense there? Talking to my brother, exactly. I see the dent; one of the two of us knows who is responsible. So we agree to keep quiet: we spend a weekend without the car and that's all. But it turns out this is not a stable equilibrium, because either of us can change strategy on his own; there are incentives to move away from it. If one of us stays quiet, as agreed, and the other accuses him, the quiet one gets 10 days of punishment.

2012 Summer Course Innovation for IT Security

Centro de Investigación para la Gestión Tecnológica del Riesgo ��

We will now see which strategies don't make sense, in order to find whether there is a Nash equilibrium. (In this case there is one, which represents the option that will end up being chosen.)

There is one strategy that makes no sense regardless of what my brother does: silence. Whatever my brother does, staying silent leaves me with either a weekend or 10 days without the car, against none or 5 days if I accuse. Nobody would choose silence, because it always loses; so no Nash equilibrium can involve it.

Now, from my brother's point of view, knowing that I would not choose silence, what does he do? He accuses me. We both accuse each other. That is the equilibrium. And is there any incentive for me to break that "agreement" and stay quiet instead? No, because there is no incentive for anyone to deviate.
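The same reasoning can be written out directly from the punishments in the story (counting the weekend as one day, and assuming a 50/50 guess about the brother when his move is ignored):

# Days without the car, indexed by (my action, brother's action).
loss_me = {("silent", "silent"): 1, ("silent", "accuse"): 10,
           ("accuse", "silent"): 0, ("accuse", "accuse"): 5}
loss_bro = {(a, b): loss_me[(b, a)] for (a, b) in loss_me}   # symmetric game

# Traditional decision analysis: ignore the brother, assume 50/50 over his moves
for mine in ("silent", "accuse"):
    expected = 0.5 * loss_me[(mine, "silent")] + 0.5 * loss_me[(mine, "accuse")]
    print("expected loss if I", mine + ":", expected, "days")   # 5.5 vs 2.5

# Game analysis: pure-strategy Nash equilibria (no unilateral improvement)
actions = ("silent", "accuse")
for a in actions:
    for b in actions:
        mine_ok = all(loss_me[(a, b)] <= loss_me[(o, b)] for o in actions)
        bro_ok = all(loss_bro[(a, b)] <= loss_bro[(a, o)] for o in actions)
        if mine_ok and bro_ok:
            print("Nash equilibrium:", (a, b))                  # ('accuse', 'accuse')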

Modeling of the games

Static games are those in which no agent has information about what the other has done before (as in rock-paper-scissors), while in dynamic games I make my decision knowing what the other has already done (as in chess). Regarding crime, in a dynamic game they will know your method of defense before deciding to launch their attack.

The modeling of the two games is different. Sometimes, it is reasonable that we release information, because it involves less loss than not releasing.

Thus, with regard to planning these decisions, and speaking of the agents Bank and organized crime (OC), let us analyze the games both statically and dynamically; if in one of the two I lose less, that is the choice. We will have three methods of defense for the Bank (nil, medium and maximum) and four methods of attack on the crime side.

In traditional management, we estimate the probabilities of all the attack methods and, looking at the results against the bank's alternatives, after deciding we obtain a value for each of the Bank's defense methods: 155 (nil), 130 (medium) and 117 (maximum).

[Table: the Bank's values if it applies "traditional" management.]

[Table: the payoffs organized crime faces under each attack method.]

Looking at it now through risk analysis, organized crime obtains a gain for each of its four attack methods: 87, 112, 91 and 38, respectively. Does anyone notice something odd? Under the bank's maximum protection, the first two methods show negative numbers, which means that crime loses with attack methods 1 and 2 when the bank applies maximum protection. In the end they won't keep that up for long; they'll prefer to stay at home.


With this in mind, if crime sees that the Bank follows a maximum protection policy, it will switch to attack method 3, where its gain is 45 (23 with method 4). And now back to the bank: if I opt for maximum defense and the criminals choose attack method 3, then my best response would be to drop to nil protection. But that wouldn't make sense.

If we continue with the game, we see that attack method 2 dominates method 1, and method 3 dominates method 4, while the bank's maximum protection dominates the medium protection. We reduce the table and are left with this mini-game. We see that there is no Nash equilibrium in pure strategies, because there is no combination that both parties would settle on. If we move to a dynamic game, the best outcome in this case is the maximum protection policy against attack method 3. And here, do I want to release information? In the static game we played without knowing anything, and we lose; in the dynamic one I had more knowledge. Is releasing information of any interest to me? It is not.
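A generic sketch of this kind of iterated elimination of strictly dominated strategies follows; the payoff matrices are invented placeholders, not the figures from the study (losses for the bank, gains for crime, one column per attack method):

# bank_loss[d][a]: the bank's loss with defense d against attack a (bank minimizes)
# crime_gain[d][a]: crime's gain in the same situation (crime maximizes)
bank_loss = {"nil":    [50, 80, 60, 30],
             "medium": [40, 60, 55, 25],
             "max":    [10, 15, 45, 23]}
crime_gain = {"nil":    [45, 75, 50, 20],
              "medium": [30, 50, 48, 18],
              "max":    [-5, -8, 45, 23]}

defenses = list(bank_loss)
attacks = list(range(4))
changed = True
while changed:
    changed = False
    # Drop bank defenses that give a strictly higher loss against every attack
    for d in defenses[:]:
        if any(all(bank_loss[o][a] < bank_loss[d][a] for a in attacks)
               for o in defenses if o != d):
            defenses.remove(d); changed = True
    # Drop crime attacks that give a strictly lower gain against every defense
    for a in attacks[:]:
        if any(all(crime_gain[d][o] > crime_gain[d][a] for d in defenses)
               for o in attacks if o != a):
            attacks.remove(a); changed = True

print("surviving defenses:", defenses)
print("surviving attacks (0-indexed):", attacks)

With these invented numbers the game reduces to maximum protection against the third attack method; with the real payoffs the reduced game may keep several strategies and, as noted above, have no pure-strategy equilibrium at all.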

If we now focus on the Cassandra model, at the top of the diabolo we have a game: we deploy methods of defense; they want to steal from us. In part 2 we have something that behaves like a black market, with a great deal of asymmetric information; but you cannot build reputation there, because everyone is anonymous. We are working on this. What matters is how I measure the effectiveness of the attack, according to the strategy of crime and the strategy of the bank. We are currently developing software that will allow us to calculate all these equilibria and the prevailing strategies. We are also looking at the reverse game: from the observed "equilibria", infer the payoffs of the game. So far the sequence has been: I have the payoffs and I want to reach the equilibrium. But it can also be inferred the other way around.

Future work

Having looked at what we are currently working on, in the future our efforts will be aimed at developing simulation techniques for scenarios that have not yet taken place, at defining second-best policies, at analyzing the robustness of the agents' equilibria, and at sensitivity analysis.


Jaime Forero (Director of Business Development for Banking and Utilities at Intel)

Before we venture into the analysis, we should look at the environment in which we are moving. We are talking about the proliferation of mobile and fixed devices, and about what we call embedded computing (the inclusion of processors in a multitude of environments, such as vehicles or industrial windmills, for example). In this scenario, forecasts say that by 2015 there will be 15 billion connected devices, and that we will go from the current 2 billion Internet users to more than 3 billion. This will lead to an exponential increase in traffic and data – already in 2010 the traffic generated on the Internet was higher than all the traffic recorded since we began using the Network of networks.

This being so, there is a great opportunity for companies thanks to this new way of communicating with customers. But, at the same time, the complexity and the need for higher levels of security grow. And the best way to deal with all this is, in my opinion, from the cloud.

In our particular case, the acquisition of McAfee – something the industry didn't understand well at the time, wondering whether we were now going to sell security – is the result of our plans to reinforce a third strategic pillar: security.

The purpose of the panel discussion was to address the problems posed by new risks in IT security, analyzing the security aspects related to applications and to data migration to the cloud.

The participants were the following: Jaime Forero, Rafael Ortega García, Luis Saiz Gimeno, Juan Jesús León Cobos and Víctor Chapela. The panel was chaired by Esperanza Marcos, Professor of Languages and Computer Systems II at the Rey Juan Carlos University.

ROUND-TABLE DISCUSSION. NEW TRENDS IN RISK AND IT SECURITY TECHNOLOGIES


We wanted to do with it the same as with the rest of our technologies: incorporate them into the CPU. In particular, Intel works in four areas: providing security for private and public cloud data centers; for connections and communications; and for access devices; in addition to working with the industry on building an ecosystem based on open, interoperable, standards-based solutions.

Rafael Ortega García (Managing Director of Innovation 4 Security)

In essence, we still have the same problems; the difference is that now we air them as if we were on Twitter. We no longer resort to the cloud only for office automation and e-mail; there are also companies that use applications containing personal and confidential data. The question, then, is how the paradigm should change. Pure and simple control as we have understood it until now (identity management tools, DLPs, etc.) is no longer enough, because there will be places where we won't be able to control operations and transactions and we will have no choice but to rely on the security of a third party. We must necessarily move into the field of monitoring.

On the other hand, if we ask ourselves what the key issues in using the cloud are, I believe we should talk about authentication and encryption. On the first point, it would be desirable for the company itself to keep controlling authentication, using identity federation, for example. In terms of data protection, there are only two techniques: encryption or dissociation.

I insist that we have to move towards monitoring duties, but, of course, only once the basics are solved. We haven't yet sorted out our SIEM deployments and we already want to get into big data, where we cannot respond with the immediacy that everyday security requires. For this reason I believe that big data is, for now, a flight forward. We cannot keep on "buying toys" and then be unable to integrate them.

Another problem I see is privacy and data protection in the cloud. It's clear that the cloud is a one-way trip, but it can't be an open bar either. What will the regulators do? That's the big question.

Luis Saiz Gimeno (Director of Global Fraud Management of BBVA)

I would like to talk about religion, myths and legends. There are some myths we should question. One, for example, is that data is more secure because we know where the disk is physically located. Another is that databases should always be encrypted. The latter is easily taken apart: while you protect yourself from the database administrator, you can't prevent someone from stealing information from you through the application.

If we now talk about privacy regulations... I am reminded of Galileo. We've been told that bodies move in circular orbits, and if you suggest that maybe that is false and they move in an ellipse, the answer you get is always blunt: "No, it has to be a circle." Something similar happens with the assumption that passwords "have" to expire. Many times, things we take for granted in security are simply the result of a one-off problem, at some given time, for which a point solution was found; we then turned it into best practice, which later became a specific regulation and audit standard... and all of that rests on a problem that happened 15 years ago in a given country and in a given company. We accept it and don't question it, so we end up with security measures we believe useful that aren't very helpful... and vice versa.

Juan Jesús León Cobos (Director of Products and New Developments of GMV Soluciones Globales Internet SA)

Until a few weeks ago GMV, as a company developing new technology for businesses that want to move to the cloud, had a very clear view of the cloud world. One day we started thinking about creating a corporate social network, and we were told that we could only set it up in the cloud. And then our viewpoint changed: what risks will we face? How acceptable are those risks?

Among the techniques that Rafael Ortega named earlier, encryption and dissociation, we opted for encryption. Although cryptography is normally offered as an add-on, and it is not often the answer to the real problems of security, in the cloud it might be different: it builds trust and provides security if data are encrypted and only decrypted at the premises of the customer who owns those data.

Under this premise, apart from identity-based encryption, we are also working on homomorphic encryption. The future could partly go this way.

Víctor Chapela (Chairman of the Board of Sm4rt Security)

Jaime Forero spoke about technology as something underlying... I don't think the technological layers have changed at all. What has changed is the way in which we connect things. Like the genes we share with chimpanzees: the same pieces, connected very differently.

We have reinvented neither hardware, nor software, nor communication protocols; we have only changed the way we arrange them. In addition, with the cloud there has been a deeper and less obvious change in the way we have to secure information. The previous paradigms are no longer useful. In the past, antivirus software, for example, could solve part of the problem, but when we went online it no longer coped. We had an illusion of control over who accesses what, and of being able to make it as granular as we wanted. But then practice intrudes: controls on a database end up not being applied, because otherwise they become unworkable, and it isn't practical to define everything rule by rule. So moving towards the future means setting aside determinism, zeros and ones, the linear relationships between events, and security based on person-level specific permissions; accepting the increase in variables and the current complexity; and understanding that security will have to be statistical. We will have to be guided by what is most likely, and apply probabilistic control there to detect the majority of the fraud.

The truth is that the cloud magnifies problems we were already carrying around. Before, we had to solve vulnerabilities; today we have neither the visibility nor, in many cases, the management. Thus, we must use more complex mechanisms, such as centralized authentication and encryption.

DEBATE

Question to Jaime Forero... You have commented that Intel has four open fronts, and that one of the pillars of your present and future vision is security. In those first three lines you mentioned, are you going to apply "McAfee inside" or not? When do you plan to do it, and in what phases?

Jaime Forero. That definition of "McAfee inside" is good, but I would rather say "McIntel inside". On the one hand, we are making joint developments; on the other, Intel also continues with its own developments. As to the former, we are working on identity management, using McAfee's Cloud Identity Manager and our Intel Cloud SSO single sign-on software; without forgetting a recent technology, DeepSAFE, which is presented as an antivirus that works beyond the operating system. In addition, we also work with Salesforce to use their platform as the web entry point for that generation.

On the other hand, in terms of Intel's own developments, I would highlight Identity Protection, in the ultrabook laptop category, where we provide a platform that includes technologies like one-time passwords, with the CPU itself generating the token; and the display protection proposal, where the graphics core in the CPU is responsible for rendering the windows for bank transfers, in such a way that a hacker would only see a black screen.

Question for Rafael Ortega García... You pointed out in your talk that perimeter security is, let's say, inadequate. Traditional antivirus is useless, anti-DDoS solutions are too new and need a renewed direction, IDSs... In this reflection about the future, what needs to be done?

Rafael Ortega García. I didn't mean to say exactly that. What I intended to convey is that we security professionals have settled into a comfortable environment, where we limit ourselves to ISMSs, risk analysis, compliance with data protection regulations, vulnerability management – and even that with care, in case the IT systems people get angry. And organized crime steps out of that comfortable environment and beats us all up. What we should do is stop collecting tamagotchis, which is what we keep doing, and instead mature the tools we already have, getting ourselves to work in a real environment of uncertainty.

How often have we struggled with access management projects and then never seen them grow? How many SIEM tools are truly implemented in the company? What we must do is begin to operate in an environment of uncertainty. And here there are two fronts, control and monitoring, and one rule: the more effort you put into the first, the further you move away from the second. You hope they won't get in, but you know they will. So you cannot spend all your time on control. Of course, you have to have an acceptable level of control, but you can't stay trapped at that level if you really want to stay alert to the uncertainty.

Finally, regarding cloud matters, there isn't a single type of security. There are different types: protecting the cloud for the financial sector is not the same as protecting it for a telco or a utilities company... We have to customize security. In addition, we must be very clear that moving an application out to a hosting provider is not the same as taking it to the cloud; that is outsourcing, period. The real revolution of the cloud is picking up an application and taking it to Amazon, for example, or having a whole department working with an application that runs by default in the cloud, like Salesforce or even SAP. There the security is set by others, so we have to rely on their diligence. That is the risk you need to take on; and, on the basis that it is intentional risk, put in place the necessary measures, which, from my point of view, inevitably involve monitoring. And of course, we should prioritize the authentication of persons. For the rest, we will have to grant a degree of trust to the cloud providers. Otherwise, we would just be talking about glorified hosting.

Which type of credentials for authentication?

Rafael Ortega García. Here a world of possibilities opens up. There are many alternatives for knowing which person is entering the system. Fine-grained authorization systems are madness, because they have an excessive cost and have to be defined all over again within two years. For this reason, I insist, we have to monitor: know what can be done and what is critical, and control the rest. About encryption, I would also say that we need encryption standards in the cloud. And I would add one more reflection: how much flexibility can I have? Making an inventory and classifying the information is a huge job, and by the time it is finished the situation may have changed a lot. We have to reflect and see what matters most.

Question to Luis Saiz Gimeno. If you at the BBVA Group hadn’t implemented systems for fraud detection, would it be noticeable? And, on the other hand, it seems that privacy is to defend ourselves from those inside, which is why databases are encrypted. Does this open the door to commercialize privacy from the customers’ standpoint, not from the security standpoint?

Luis Saiz Gimeno. With respect to the first question: yes, we would notice it a lot. Sometimes we calculate how much we save through monitoring based on the card limits, although we avoid using those numbers. The important thing is that we have found that when we put stronger measures in place, the potential attackers forget about us. In the end, it's a question of overcoming the so-called "risk of selection", which is not the risk caused by not monitoring, but the risk of hackers perceiving you as "soft".

On the second question, and the issue of encryption, I say the same thing. Encryption of databases can be used when you are in an environment of critical data where you don’t deal directly with the security. But just for that. Because if they get in through the application from the Internet, it doesn’t matter how encrypted the database is. What you need to do then is monitor.

Might we then witness a growth in the cryptanalysis market?

Luis Saiz Gimeno. No. Let’s see: it is very difficult to train a cryptanalyst. There aren’t that many in the world. In fact, if we put together all the cryptographers and cryptanalysts from around the world, we’d still have plenty of space left in this room. The problem lies elsewhere.

Juan Jesús León Cobos. In my opinion, the problem is what has been discovered with encryption – authenticated encryption, the HTTPS kind, which is very popular. In the end everyone trusts it, and nobody knows whether they are being spied on.

Luis Saiz Gimeno. Exactly, they don’t know that they are being spied on, or could be spied on. How many of you know how to check whether a certificate and the whole certification chain are correct?
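
To make that last point concrete: a minimal sketch, assuming Python’s standard library and a placeholder host name, of letting the TLS stack validate a server certificate and its whole chain against the system’s trusted roots.

```python
# Minimal sketch: ask the TLS stack to validate the server certificate and
# its whole certification chain against the system's trusted CA roots.
# "example.com" is only a placeholder host name.
import socket
import ssl

context = ssl.create_default_context()  # loads the default CA bundle, enables hostname checks

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        # If the chain does not validate, the handshake raises ssl.SSLCertVerificationError.
        cert = tls.getpeercert()
        print("Issuer:", cert.get("issuer"))
        print("Valid until:", cert.get("notAfter"))
```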

Question to Juan Jesús León Cobos. Taking as a reference your company’s preference for encryption techniques, you’ll agree that the use of encryption mechanisms depends on the people and not on the devices they carry; and what is usually put in people’s hands is still not very friendly.

Juan Jesús León Cobos. I don’t agree with you. Delving into the subject, to solve the issue of encryption we should first solve the usability one. The real problem with certificates is the control of the key. And this control has, in turn, two problems: possession of the key, and the credentials to access it. Thus, having a key and protecting it with a password greatly improves the usability of the encryption system. But then the complications come when we move the problem of encryption and data integrity into the scope of authentication. We move the problem elsewhere.

When the concept of public and private keys was invented, it was perfect for encryption. However, the crux of the matter was how to make sure that the public key being used really belongs to whoever claims to own it.
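
As an illustration of that point – a sketch only, assuming the Python cryptography package – encrypting to a public key is the easy part; nothing in the code below proves that the key really belongs to the intended recipient, which is exactly the gap that PKI, discussed next, tries to fill.

```python
# Sketch of the basic public/private key idea with the "cryptography" package.
# Encrypting to a public key is straightforward; nothing here proves that the
# public key really belongs to the intended recipient.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# The recipient generates a key pair and publishes the public half.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"confidential message", oaep)
plaintext = private_key.decrypt(ciphertext, oaep)
assert plaintext == b"confidential message"
```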

And here we come to PKI, the option chosen, among other possible ones, to solve this issue – and which, in my opinion, made things more complicated. It is the option that costs the most money and is used the least. The concept can’t work, because when I send you an encrypted email I need you to do something with it – for example, to get a certificate from a trusted authority –; and you might not even know what a certificate is, or have the slightest interest in reading what I’m sending.

But beyond the empire of PKI, another proposal was invented eleven years ago: identity-based encryption. I send you an encrypted email, and I don’t need you to do anything beforehand – unless you actually have a real interest in decrypting it. This seems a more natural scheme, but it has not earned much attention from industry (it collides with the interests of some, such as RSA), even though there is a group at the IEEE (Institute of Electrical and Electronics Engineers) that has spent years trying to standardize it.

Question for Víctor Chapela. You insist on statistical calculation – on weighing and measuring how and where crime is committed, and what the consequences are for someone who commits a crime. How can you take a mathematical approach to the aspect of intentionality?

Víctor Chapela. When I talk about statistical calculations, what I am saying is that controls must be implemented where they really are most effective and efficient. And that is something that is not happening with the current mechanisms – ISO 27000, for example –. We work on these mechanisms, defining how to prioritize. Cassandra, the graphs, is exactly that: how to prioritize among the risks and then how to prioritize among the controls.

Legislation now has a basic problem when integrating into the models an environment that is neither deterministic nor absolute, but contextual, where, for example, the social part cannot be measured. When I speak of non-deterministic systems, I speak of statistical levels and of systems that emulate the neocortex. We are moving towards systems that allow this imperfection and make it possible to bring in the context. Rather than looking at the legislation and trying to incorporate it into the model, perhaps it is a better choice for the model to incorporate, within the perception of average risk that an attacker has, what the types of attack are, their frequency, their benefit, etc.

Luis Saiz Gimeno. Talking about legislation, where does the idea of the need to encrypt laptops come from? The explanation is that in the US there is no national ID card. It’s that simple. But if your laptop is stolen, it’s because of the device itself, not because of the information you have inside.

Víctor Chapela. We have to contextualize what information is high-risk in one context or another, in one country or another. In Mexico, for example, data that could provide information for a possible kidnapping are critical. And here in Spain, they are irrelevant.

I searched the Internet for the advantages and disadvantages of the cloud, and almost all forums agree that the biggest advantage is the security that it provides, and the biggest disadvantage is the legal framework and the regulatory issue. What will the regulator do in this environment?

Luis Saiz Gimeno. Regulators still don’t know what to do. It seems that until the BBVA Group appeared in the newspapers, no one had gone to the cloud yet. But actually, up to 80-90% of small consulting firms, law firms, etc. have their email in the cloud. If they have to regulate, they’ll regulate. And if they discover tomorrow that Facebook has failed to comply with the data protection act, will they make the country’s ISPs cut off access?

Rafael Ortega García. In Spain, in any case, there’s a problem. How is it possible that the advisors who work on security regulations are consulting services companies? The first data protection regulation handbook was made by people from the University and from service companies, who, with all due respect, don’t know about business. If those who suffer the regulation – banks, telecoms – don’t join together and form interest groups, things will go astray.

Luis Saiz Gimeno. Ah, but is there advice?

I see that the panel is focusing on external attacks; but I am concerned about the fact that large enterprises are outsourcing their whole IT infrastructure, and we can have enemies at home…

Rafael Ortega García. A generic security environment no longer exists; and although there are common things, there are specialties that have become fashionable, and we must live with that. We are talking about critical infrastructures, cyber-security, fraud and compliance, to summarize it in some way. The internal part is very important, and is always linked to fraud.

When a card is intercepted or stolen, the customer or the bank becomes aware of it. But when someone steals a database to sell it on the black market, who notices it?

Rafael Ortega García. This is what I was saying before: preparation and resilience. If someone steals a database, even if he is a privileged user, it means that your level of security was not right.

Juan Jesús León Cobos. In the end, the perception of security is vital. Publicizing the incidents, or not, closes that loop. Internal attacks are better left unsaid: as nobody sees them, the strategy is to say that we have not suffered any attack. And thus the security problems stay hidden.

Víctor Chapela. I don’t think that will change. To me the fundamental thing is the contractual link with the company, because in a company of 50,000 employees there will be a sample identical to the general population: there will be hackers, fraudsters, etc. In the data protection law of Mexico we treat “third parties” separately, because they carry an increased risk – not because we do not control them, but because they perceive themselves as covered by a greater anonymity. That, at least, is what emerges from some studies we have analyzed.

Luis Saiz Gimeno. From my point of view it’s irrelevant who pays the salary, if we have the appropriate security levels. You can make them less anonymous.

Juan Jesús León Cobos. Another thing is the interest the company may have in putting a control in place. You have to transfer the risk – or, if you transfer the risk, you transfer the authority. Dissociating the problem of authority is a key issue.

Víctor Chapela. This responds to what Luis Saiz said. There is a study that analyzes the fraud situation in banks in England – where the burden of proof was shifted to the customer, who had to prove that it was not him – and in the US – where the customer was exempt from providing any evidence and the money was reimbursed –, which showed that American banks spent on average less money and had much better protection against fraud than the English ones.

On the other hand, I wanted to tell Rafael Ortega something we discovered with the data protection legislation in Mexico, along the lines of aligning incentives. The legislation is not made to protect banks or large enterprises, but the citizens and their data. So, if we are all going to put in place the same controls, each industry has to specialize in the type of data it handles. And this is not considered in the Spanish Data Protection Act. In Mexico it is. Security has to be adapted.

We must also mention the issue of theoretical security versus practical security. After the terrorist attack on the Twin Towers, airports forced passengers to remove their shoes, after one person used that method to bring in something forbidden. If this measure is now removed, shoes will be the first place we would put something into, because we’ve already been told that we can.

Juan Jesús León Cobos. Along the same lines, the US dedicates much less money to protecting against organized crime than against terrorism – which is far less probable –. And it is a problem of perception.

How will the cloud make us more secure?

Luis Saiz Gimeno. It is already doing it for all the thousands of small companies that have outsourced their security, so it is no longer the friend or the brother-in-law who does it, figuratively speaking. Large enterprises are another matter; there we have to be more careful.

Víctor Chapela. If we consider that systems such as Gmail have been in the cloud for more than 10 years fighting off attacks, there is no doubt about the benefits of this refinement process. Gmail, for example, has few incidents because it has long been working on security. In addition, one of the great benefits is the dissociation and segmentation of information, which relies on a single point for its reconstruction – a single point of failure, but only one. Besides, giving access only to the legitimate user reduces risk in most cases.

Rafael Ortega. For their part, the providers of tools in the cloud, such as Salesforce or Amazon, will make an unimaginable effort on security issues, because otherwise their business literally collapses.

Juan Jesús León. I don’t agree. Their business model doesn’t collapse, as they simply shift the risk towards a problem of availability. If you lose Gmail for a week, or use a ridiculous password, nothing happens to Google. Availability and security are two sides of the same coin.

Víctor Chapela. In the vector of accessibility there is, on one side, availability; and, on the other side, confidentiality. When someone said that security was confidentiality, integrity and availability, they generated the worst management problem we have today. Because a firewall closes by default, while a switch opens by default; so we have two contradictory basic criteria: the IT systems area must make things available, and the security area must close everything if necessary. This happened to us when we worked with a treasurer’s office, where we mistook security for availability. We developed for them a quite secure solution, but not a very available one, which we ended up changing for the sake of availability, which was this organization’s main asset.

Question to Rafael Ortega, due to his emphasis on monitoring. How can you monitor something as scattered and as offshored as the cloud? In addition, I wanted to know your opinion on whether security in the cloud will be a paradigm shift for the Internet, requiring some type of prior concentration point for dispersed communications, through a proxy that provides those security services which, on their own, these clouds will only provide heterogeneously.

Rafael Ortega García. I still say that the key in the cloud is monitoring: verifying that providers are in compliance with your standards of protection. It is true that there will be some point that will be a black box, and you’ll have to believe in the business model and that they have the security deployed. Of course, the supplier can raise the alarm for you, but incident management will be the customer’s responsibility, because it is their core. Here the issue has to move, inexorably, toward the application layer and authentication, which is the customer’s responsibility in any case. We must be clear, however, on what is cloud and what isn’t. The infrastructure cloud, the application cloud and the hosting providers that say they are cloud actually aren’t. For example, the well-known cloud broker, as an intermediary between cloud providers and users, will be a mere administrator.

On the other hand, I foresee that many security services are going to migrate to the cloud and customers will ask to be charged per service. And that will have to be the mindset of those of us who want to provide security services, and not only consulting services. The challenge, however, is whether there will be integration among the cloud services. That would be the great innovation. Identity management has been dealing with provisioning for 25 years, and it still presents the same integration problems.

Regarding managed services in this security field, this is something with little experience in facing problems and vulnerabilities...

Rafael Ortega García. Here I have to say: let’s see if we can learn a little from our colleagues in other areas of the company, taking advantage of what they are doing and applying it to security. For example, with regard to security dashboards we remain on our own, while organizations already have perfect tools for this purpose. Of course, that works when the CSO is clear about what he wants to measure and about the objectives, but that’s another story. Or the IT systems people, who know their SLAs very well. Almost everything has already been invented, so let’s look inside and take advantage of the internal knowledge.

Keeping in mind that the materialization of all mathematical encryption algorithms is done either in software or in hardware, who ensures that these implementations are correct and don’t contain errors?

Juan Jesús León Cobos. Starting from the fact that I don’t know of any software in which vulnerabilities have not been found, we also have to take into account that the algorithms aren’t the problem. The key issue lies in the way the software is made. At GMV, for example, we make software for space projects, which differs from ordinary software because the former costs 20 times more. It has 400,000 lines of code and when it was tested no error was detected. It’s the same case as the software for an airplane, which is not made the same way. It is much more difficult and much more expensive. I don’t know whether someone will someday propose making security software this way.

Víctor Chapela. In my opinion, there are two ways to make applications. I consider that algorithms will always fail, and that you cannot protect everything, so what we need is to understand what is most valuable and protect that.

What is Intel doing, or what will it do, to prevent errors in the implementation of the algorithms in its chips?

Jaime Forero. I can tell you that the main value of our technologies is that they are based on well-proven industry standards.

NON-INVASIVE FACIAL RECOGNITION AT AIRPORTS

Enrique Cabello (Tenured Professor of the Department of Computer Architecture and Technology at the Rey Juan Carlos University)

I’m going to talk about non-invasive facial recognition in a controlled environment, such as an airport. To this end we’ll go into the project we carried out for the Barajas airport, in Madrid, Spain, and we will see its characteristics. But before that, I would like to introduce our research group within the Rey Juan Carlos University, and then talk about the field of facial recognition in Spain and in the world.

Who we are and what we do

We are a research group within the Higher Technical School of Computer Engineering, and our work is focused on the study of facial recognition and computer vision. The group is multidisciplinary, bringing together professionals from different areas: computer engineers, statisticians, etc. At our peak we came to form a group of 13 people, but now, due to budget cuts, the staff has been reduced.

Our work on computer vision and facial recognition covers four different applications: intelligent video surveillance, bio-inspired vision systems, airport security and intelligent transport systems. In the first scenario, we use cameras already available at outdoor facilities such as, for example, parking lots, and detect people and elements in those areas so we can verify whether we have seen them before or not.

As regards bio-inspired systems, we work with a special retina, which gazes at the movement of a person – it only looks at what is moving – and then the characteristics of this movement are analyzed so we can identify what exactly has moved. In the area of intelligent transport systems, our task is to examine all the data regarding the risk of driving a vehicle, and from that we obtain a risk level to work on.

Finally, in airport security, we focus on facial recognition work. Here we have developed different recognition systems, both in controlled environments (immigration control booths using digital ID cards) and in uncontrolled ones (where, for example, there were lighting problems).

In essence, in the four areas the objective is to obtain information that describes what is happening in the environment. Once we have all the characteristics associated with each case, we set up identifiers to define the situation we are describing: whether it is a particular subject, a situation of risk or whatever. In other words, from some images of an environment, we aim to represent it numerically and then apply classifiers or other systems.

This is, basically, our academic work. But our work does not end there, because when a company asks us for help to carry out a particular project, as happened with the Barajas airport, we take the leap from the University laboratory to the real world and start working in a less controlled environment.

A company is looking for very different things from our purposes at the laboratory, where we would play with the parameters. In the “real” world they aren’t going to wait while 30 parameters are adjusted at the same time, for example. We are also asked for the least invasive technique possible, so we do not bother the user, who barely realizes what is happening and must not notice that there is a system authenticating him in the boarding area. And, finally, another thing that is vital in the business environment, and not so much in the University laboratory: the lowest possible number of alarms should be raised, because otherwise the operator would turn the system off.

And all this, together with the fact that a project in the “real” world is bigger in every respect: there are more people involved and more cameras and systems to work with; and while at the University our goal is to publish the study, a company seeks to patent the result.

The world of biometrics

In recent years the number of patents concerning biometric systems has grown significantly, at the same time as return-on-investment values have been increasing. In parallel, fingerprint and voice recognition systems have also grown. Among the systems that have been patented lately, I would highlight retina analysis systems, hand geometry, keystroke dynamics, signature and handwriting analysis, vein/vascular recognition, iris analysis and facial recognition.

In this context, there is no doubt that we will witness an explosion of emerging technologies in the field of biometrics to cover different applications, with the technology that best suits each concrete case, because of its cost, because it is less intrusive, etc. Facial recognition could expand to new environments such as, for example, mobile devices, which already have a camera, so it would be very simple to add a facial recognition tool – cheaper than adding a fingerprint reader, which requires incorporating hardware –.

Problems of facial biometrics

In the field of facial biometrics we encounter two problems: one, identification (a sort of “Where’s Waldo?”), where I have to locate a person among a large number of users; and another, verification: I have a subject and want to verify him; I have his base characteristics (a photograph, for example), and I want to check the correlation with the actual “sample”. In some cases it is much more useful to work with the picture or the model of the subject and check whether it matches these parameters, rather than finding the subject inside a database.

Regarding success rates in biometrics, they are often irrelevant; they are not the most important data. More important is the information on false positives (letting the wrong subject through, for example) and false negatives (not letting an authorized one through). Therefore, when it comes to showing results, the false positive and false negative rates are shown more frequently.
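
A toy illustration, with invented outcomes, of why these two rates are reported separately rather than as a single accuracy figure:

```python
# Toy example: false positive / false negative rates in a verification setting.
# "genuine" = the claimed identity is correct; "accepted" = the system let the person through.
trials = [
    (True, True), (True, True), (True, False),     # one genuine user rejected
    (False, False), (False, False), (False, True),  # one impostor accepted
]

genuine = [accepted for is_genuine, accepted in trials if is_genuine]
impostor = [accepted for is_genuine, accepted in trials if not is_genuine]

false_negative_rate = genuine.count(False) / len(genuine)   # authorized user rejected
false_positive_rate = impostor.count(True) / len(impostor)  # wrong subject let through

print(f"FNR: {false_negative_rate:.2f}  FPR: {false_positive_rate:.2f}")
```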

Our project at Barajas

Now that we have seen this picture of the scenario, I would like to go into the project we carried out at the Barajas airport for facial recognition of travelers. The main objective was to assess whether or not it was feasible to install a non-controlled facial biometrics system inside their facilities. The airport security officers wanted to know the current state of the technology options, and we helped them materialize the possibilities, offering ourselves as subjects of the study in order to comply with the legislation on personal data protection.

In this scenario, we began to work using video surveillance cameras in an uncontrolled environment. We ran two experiments: in one of them we used ID-card-type images, where the subject is identified by his ID card, and we proceeded to analyze whether it is the same person who is actually in the airport at that very moment; in the other experiment, we used images obtained by the airport itself, which we then used to identify a particular subject. In the latter case, for example, we started from images obtained during the check-in process, to check later in the boarding area that he or she was the same person who had registered previously.

Thus, we started from a database of images which are not usually recent (ID card or passport photos can be several years old) and which also have the disadvantage of differing lighting conditions and image quality. We also captured images in 3D, and we saw slight variations, because the photographs are in 2D. In addition, the scanner also had certain problems, for example if subjects had facial hair.

Something important to keep in mind in this type of project is what is called “linear discriminant analysis”, which is based on maximizing the difference between two classes and minimizing the variation within the same class, or subject. Since we find large variations for each subject – the camera is in different locations, the light is different, etc. –, we will try to make those variations as small as possible. And at the same time, as we want to recognize a subject by separating him from the rest, we will maximize the differences that may exist between the different subjects of the study.
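
A minimal sketch of the idea, assuming Python with scikit-learn and purely synthetic “face” vectors (this is not the project’s code):

```python
# Linear discriminant analysis sketch: find projections that maximize
# between-class (between-subject) separation while minimizing within-class
# (within-subject) variation. Data here is random, purely illustrative.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_subjects, samples_per_subject, n_features = 5, 20, 100

# Each subject is a cluster: a fixed "identity" mean plus capture noise
# (lighting, camera position, pose...).
means = rng.normal(0, 5, size=(n_subjects, n_features))
X = np.vstack([m + rng.normal(0, 1, size=(samples_per_subject, n_features)) for m in means])
y = np.repeat(np.arange(n_subjects), samples_per_subject)

lda = LinearDiscriminantAnalysis(n_components=n_subjects - 1)
Z = lda.fit_transform(X, y)          # at most (classes - 1) discriminant axes
print("Projected shape:", Z.shape)   # (100, 4)
print("Training accuracy:", lda.score(X, y))
```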

On the other hand, we had techniques that work in 2D and others in 3D. And among the techniques to choose from (global and local, as well as hybrid), in our study we decided from the beginning to use global facial features; that is, the major features of the face, its shape and layout, looking at the overall details from a distance; in contrast to local techniques, which look for specific details of the face.

In view of all this, the first thing we had to determine was which cameras we were going to use out of all those available at the airport. We made a choice that met our needs, but we faced a difficulty: the images were of poor quality, because there were zones with little contrast, dimly lit... Finally, we chose a couple of cameras at two different passenger conveyor belts.

Lastly, we concluded that the closest thing to an access control that we could find were the metal detectors; and it occurred to us to carry out a multi-camera study – using multiple cameras to see several subjects at the same time –, since the majority of video surveillance systems don’t have many cameras in the same area.

Three elements of facial recognition

Once the details were defined, we addressed the three phases of the facial recognition process: locate the face of the subject, represent it as numbers so that it can be compared with other faces, and then classify it to see which subject it belongs to.

In the first part, face detection, many systems used to be based on patterns such as skin type, etc.; but now almost all implementations are based on a method called Viola & Jones. After detecting the face, we generate a database, creating a sufficiently large number of images with our own photographs.
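
For reference, OpenCV ships a Haar-cascade implementation of the Viola & Jones detector; a minimal sketch, with a placeholder image path, might look like this:

```python
# Minimal face detection sketch using OpenCV's Haar cascade implementation
# of the Viola & Jones detector. "frame.jpg" is only a placeholder path.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("frame.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Returns one (x, y, w, h) rectangle per detected face.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(60, 60))
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("frame_faces.jpg", image)
```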

Once the face is located, the next step was to represent it, extracting some characteristic features from it. Here there are several techniques. One of them is principal component analysis. It works as follows: if we have an object with a certain shape, such as an ellipse, we place one of the axes of the image along the longest axis of the ellipse, in such a way that we represent the cloud of points of the face, find the axes along which the variation is greatest, and then work in that space. This technique works quite well, but the problem when speaking of faces is that the face is not just any object, and its two-dimensional information is very important. So, instead of using this method of principal components, we used a method that allowed us to incorporate more of the two-dimensional information of the face.
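
A sketch of the principal component idea applied to faces (the “eigenfaces” approach), assuming scikit-learn and random data in place of real images:

```python
# Principal component analysis sketch: each 100x100 face becomes a
# 10,000-dimensional vector, and PCA keeps only the axes along which the
# training faces vary most. Random data, purely illustrative.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
faces = rng.random((200, 100 * 100))     # 200 flattened 100x100 "face" images

pca = PCA(n_components=50)               # keep 50 axes instead of 10,000 pixels
codes = pca.fit_transform(faces)         # each face is now 50 numbers
reconstructed = pca.inverse_transform(codes)

print("Compressed representation:", codes.shape)                 # (200, 50)
print("Explained variance kept:", pca.explained_variance_ratio_.sum())
```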

As I said before about linear discriminant analysis, we maximize the difference between two classes and minimize the variation within the same class. Again, we can do it with any classification, but the face has an important 2D structure, so the analysis is done in two dimensions. Because of that, we try to keep the smallest possible number of values associated with the face – as a reference, if you take a photograph, the image of the face could be a frame of 100 x 100 pixels, which would mean processing up to 10,000 numbers, and that is a lot –.

Once the face is represented, we come to the classification phase, where we chose among different methods, such as neural networks, rules, etc. We finally chose a method called “support vector machine”. Its main advantage is that, thanks to the support vectors, it allows drawing a boundary to separate the classes from each other and so distinguish them. To draw the boundary we only need to take a representative from each class, and then conclude that the samples falling on one side of it belong to one class, and those falling on the other side, to another.
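
A minimal sketch of this classification phase, assuming scikit-learn and synthetic feature vectors for two subjects:

```python
# Support vector machine sketch: the SVM places a boundary between the
# feature vectors of two subjects, and new samples are assigned to one side
# or the other. Synthetic data, purely illustrative.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
subject_a = rng.normal(loc=-1.0, scale=0.5, size=(50, 10))  # features of subject A
subject_b = rng.normal(loc=+1.0, scale=0.5, size=(50, 10))  # features of subject B

X = np.vstack([subject_a, subject_b])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="linear").fit(X, y)
new_sample = rng.normal(-1.0, 0.5, size=(1, 10))
print("Support vectors used:", clf.support_vectors_.shape[0])
print("Predicted subject for a new sample:", clf.predict(new_sample))
```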

In the end, once we had the whole system perfected and had defined the images to use and the number and form of classification, we began to run tests and produce charts. To represent the operation, we didn’t use the success rate, because it would have been very high. We chose the false positive and false negative rates. We plotted them and obtained the equal error rate, which is the rate at which the false positive rate matches the false negative rate, and which we used in all the charts.
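
A sketch of how the equal error rate can be located by sweeping the decision threshold over match scores (the scores below are synthetic, purely illustrative):

```python
# Equal error rate (EER) sketch: sweep the decision threshold over the match
# scores and find the point where the false positive and false negative rates cross.
import numpy as np

rng = np.random.default_rng(0)
genuine_scores = rng.normal(0.7, 0.1, 500)    # same-person comparisons
impostor_scores = rng.normal(0.4, 0.1, 500)   # different-person comparisons

best = None
for threshold in np.linspace(0.0, 1.0, 1001):
    fnr = np.mean(genuine_scores < threshold)    # genuine users rejected
    fpr = np.mean(impostor_scores >= threshold)  # impostors accepted
    gap = abs(fpr - fnr)
    if best is None or gap < best[0]:
        best = (gap, threshold, fpr, fnr)

_, threshold, fpr, fnr = best
print(f"Threshold {threshold:.3f}: FPR={fpr:.3f}, FNR={fnr:.3f} (approx. EER)")
```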

Results obtained

The result was a project that gave the client the information they required. The Barajas airport wanted to know whether they could resort to this technology, at what cost and under what conditions. And they confirmed that it was a feasible option.

We ran the tests on days of normal operation, without being invasive at any time for the subjects, who were barely aware that we were testing the system. We used the existing infrastructure, and thus the cost was minimal.

I will say that we will see an increasing number of these systems. They are less intrusive than others such as fingerprinting or iris recognition. And, in addition, as we showed with our project, they are perfectly able to work in real time.

A STARTUP TRIP TO SILICON VALLEY

Julio Casal (Founder of AlienVault)

Our third round of funding has just been made public, amounting to $22.4 million, led by the firms Kleiner Perkins Caufield & Byers and Sigma Capital, with the participation of the current investors Trident Capital and Adara Venture Partners, which will support the expansion of our sales, marketing and customer service capabilities, as well as increased investment in R&D.

It is, therefore, a good time to share our journey with you; and, as Steve Jobs did in his famous speech, to connect all the dots – all the decisions that led us, phase by phase, to exactly where we are now.

The most important thing about this news is not so much the monetary and quantitative value, but the background of the operation and the relevance of the renowned names that have decided to back us. Kleiner Perkins is considered the largest investment fund in the world in the IT field. It holds stakes in companies such as Apple, Sun Microsystems, Intel, Facebook and Amazon.

Madrid, 40,000 euros

What I’d like to do is tell you how a startup of two people, created in Madrid with only 40,000 euros of capital, has managed to go step by step towards its consolidation in a market that, apparently, was already very mature. I am not going to talk about technology. I will talk about what lies behind it, how it ended up being attractive around the world, and how we achieved sustainable business development. In any case, what I would like to emphasize is that, beyond technological innovations, what we need most are market innovations, which would give us greater definition and a more solid position from our country.

In his book, Geoffrey Moore said that the majority of startups crash into a chasm in the leap from an emotional sale – within a close circle of acquaintances, sales to enthusiasts of the product, etc. – to a more functional one based on price and product, where customers buy only for the product itself and not for your efforts, the context or the environment. And we wondered whether that couldn’t happen to us as well. It might also turn out that our business plan could easily be described as an “underwear plan”. There is a story about a gnome who, in his eagerness to make lots of money, decides to steal lots of underpants. When asked what exactly his plan is and what its phases are, he replies that in phase 1 he’ll steal underpants, in phase 2 he will steal more underpants, and in phase 3 he will make a lot of money. And when asked what happens between phase 2 and phase 3, he replies that he doesn’t know – he is busy stealing underpants.

In this context, once you have told a venture capital firm about your products and your capabilities, there comes a time when they ask you to stop talking about the product and to tell them why you think you’re going to be one of the three winners in the market, among the 50 companies around the world that do the same as you. They want to know how you are going to run proofs of concept, how you are going to make your developments known, etc.

And here, a company like ours, which knew nothing about go-to-market when we started in 2003-2004, must have some intuition it can trust, because otherwise it will be impossible to move forward. In our case – we have always wanted to make products – we asked ourselves, perhaps obsessively, how we would survive in an inbred market while being so small. Our beginnings were based on services, but we wanted to move into the world of product. We spent around 10 years doing services, first as employees and later as entrepreneurs. And at that time, we witnessed the blossoming of perimeter security services, firewalls, IDSs, etc.

That began, and remained, the big question: how will we survive in such a big and inbred market? What differentiating value will we provide? We never stopped asking ourselves that; and, initially, we had no answer. OK, we said to ourselves: if I am very good, I can make an Intrusion Detection System, but... how do I sell more IDSs than the competition?

The SIEM world

In the 1990s the UTMs (Unified Threat Management) emerged, which were not just a technological proposal, but a value proposal. They proposed the unification of different technologies in a single box; and they offered ease of use and operation, and low costs.

Then came the year 2000, and as pioneers of security services, we launched a managed security service, setting up a SOC. It was just at that moment that we realized we were missing a product to assist in that process, because we were trying to make a mental picture of millions of logs per day, and that was absurd. A new product was needed that could unify all this. And here we envisioned the SIEM, a new product that would later receive this name. We could collect logs, correlate them through different algorithms, cross-reference them and then report on them in various ways. And here, at this point, the idea that finally made up our mind was to make it open source. We called the result OSSIM (Open Source Security Information Management).
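
Purely as an illustration of that collect–correlate–report loop (not OSSIM’s actual code, and with an invented event format and rule):

```python
# Toy illustration of the SIEM idea: collect normalized events, correlate them
# with a simple rule, and report the resulting alerts. The event format and
# the rule (failed logins followed by a success) are invented for this sketch.
from collections import defaultdict

events = [
    {"src": "10.0.0.5", "type": "login_failed"},
    {"src": "10.0.0.5", "type": "login_failed"},
    {"src": "10.0.0.5", "type": "login_failed"},
    {"src": "10.0.0.5", "type": "login_ok"},
    {"src": "10.0.0.9", "type": "login_ok"},
]

failures = defaultdict(int)
alerts = []
for event in events:                       # "collect": iterate the log stream
    src = event["src"]
    if event["type"] == "login_failed":
        failures[src] += 1
    elif event["type"] == "login_ok":
        if failures[src] >= 3:             # "correlate": simple threshold rule
            alerts.append(f"possible brute force from {src}")
        failures[src] = 0

print("\n".join(alerts) or "no alerts")    # "report"
```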

And this was our differentiating bet. We made a unified product, adding to the SIEM a component called “sensor”, which is managed from the SIEM and in which we grouped many scattered open source components that at the time had no management capabilities and no consoles (IDSs, network monitoring systems, anomaly detection, etc.). We were aiming for a very ambitious product, a complete security monitoring system, which would give companies fabulous management and viewing capabilities.

In 2003, this value proposition was absolutely exceptional. I remember the first conversations with the venture capital firms. They told us that open source only worked in unified and mature markets, but not in our case. And they wondered how it was possible that, even though the product was not born yet, we were already doing a launch of that type.

That being so, we spent four years selling services, developing the product and offering it for free to the open source community.

2007: a product company

By 2007, we saw that the community had reached a significant size and we started the adventure of creating AlienVault as a product company. Our goal: to develop an enterprise product on top of the open source one, for those implementations requiring greater deployment capabilities, and better reporting and performance functionalities.

The company was born with very little capital, but with an enormous asset. And we had the idea of starting the commercialization within our trusted environment and within the national borders.

When we presented our value proposition, we emphasized two facts that were very relevant for us: our visibility – we were known by 90% of technicians around the world, and that with little money –; and the characteristics of being open source and a unified platform, where 95% of its code was external. This gave our proposal a very high value. The functionality/cost ratio was exceptional.

From the community we didn’t get many contributions in terms of code, but we did get a huge benefit at the testing level, since there were thousands of companies around the world that tested our products whenever we made a new development. This was very beneficial because it meant significant cost savings for us. While proofs of concept are expensive and at that time we could not pay for tests in China, for example, in the open source community users downloaded the product, tested it, and when they liked it, they came to us. We weren’t the market leaders, but our products were the most commonly used, although they were free of charge.

In relation to sales leads we were slower. In 2007, I remember that we had many downloads, but only 6 or 7 leads. But by 2009 we started to see how demand was increasing and the market began to see us as an increasingly credible proposal. However, the question was always: will we really be able to sell the product?

In the midst of all this, AlienVault created a conversion business model, an open source model which is now very popular and which venture capital firms like most. It’s called freemium, and it offers the basic services for free, while other more advanced or special services are charged for.

The model worked very well. It was based on the creation of code, which provided us, above all, with visibility. It generated visits, and a high percentage of them were downloads; another percentage ended up becoming opportunities; a percentage of those opportunities ended up in a sales proposal; and then another percentage of the latter was transformed into sales.
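
A toy sketch of that funnel; the stage percentages below are invented, only the structure (visits, downloads, opportunities, proposals, sales) follows the description:

```python
# Freemium conversion funnel sketch with invented conversion rates.
funnel = [
    ("visits", 100_000, 1.00),
    ("downloads", None, 0.10),      # 10% of visits
    ("opportunities", None, 0.05),  # 5% of downloads
    ("proposals", None, 0.40),      # 40% of opportunities
    ("sales", None, 0.25),          # 25% of proposals
]

count = None
for stage, absolute, rate in funnel:
    count = absolute if absolute is not None else int(count * rate)
    print(f"{stage:>13}: {count}")
```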

2010: a call from Jakarta and Silicon Valley

On May 7, 2010, we got a phone call from the representatives of a Jakarta telecommunications firm, and they sent us a product purchase order worth $16,000. We had talked with them a couple of times by phone, and suddenly they wanted to buy our product without knowing us and without any emotional sale involved. For me, everything changed at that moment.

We talked with the company partners and told them that we had to go to the US. The investors told us that we were crazy, that in Spain we had much larger contracts. But for me everything had changed. Eventually I understood that we had crossed Moore’s chasm: we had sold to a customer without any emotional bond; they weren’t anybody’s friend, we hadn’t seen their face, and we had not persuaded them. It was the product itself that had convinced them. And they were willing to send us $16,000 from Jakarta. That decision led us to end the year with 62 customers in more than 25 countries. And all the sales were done by phone. And they were leading brands. They were small deals, but the beginning of the greater sales that would come later.

On November 10, 2010, Dominique Karg and I landed in Silicon Valley with our families. And there we began a new stage. It was an ocean of business opportunities. It was also full of predators. At that time the Gartner Quadrant for the SIEM market listed a total of 29 players, and we didn’t know 27 of them at all, because they only operated in the US. And all of them had sales of around 30 million dollars, much more than us. And this worried us.

Identity crisis

Our stay in Silicon Valley, full of opportunities, also had some shadows. We had an identity crisis from not knowing how to position ourselves exactly in such a mature market. And to further complicate the situation, the usual things in mature markets occurred: the acquisitions started. HP bought ArcSight for 1.5 billion dollars; IBM bought Q1 Labs for more than 500 million; and Intel acquired NitroSecurity for somewhat less than 500 million.

We were approached by some manufacturers, such as Fortinet, Cisco and Oracle; some of them even put a purchase figure on the table. But we had doubts and took a step back to rethink. We thought it would now be easy to position ourselves, because our product was better and cheaper than the rest. But there were also “micro” companies cheaper than us.

We thought it might be too late, that we might have wasted our time. And just then the unified model arrived, Unified Security Management (USM), which aimed to join the SIEM and the monitoring tools... and it came to the rescue. We saw a picture of the SIEM market based on three categories: vendors who sell in another space; pure SIEM vendors, specialized in correlation, such as RSA, ArcSight...; and then another series of manufacturers seeking to add value to the SIEM, adding new features and making it a single system. And in this vertical, the unified one, we found we had the right to become leaders. It’s true that some, like NitroSecurity and Q1 Labs, were adding features, but they did it little by little. In addition, our system was also much more complete; without forgetting that since 2003 our plan had always been exactly that: the unified model.

And Trident Capital arrived

With this message of leadership we went to the venture capital firms. They understood it and it worked. Our first venture capital firm was Trident Capital, specialized in niche businesses. The first thing they told us was that we had to have American managers. We found this hard, because it was not easy to hand over our “baby” to a third party we barely knew. We hired seven HP executives at once, IT security managers who had created a company called Fortify, which had just been sold for almost 500 million dollars.

There we began the growth stage that followed the initial one of experimentation and creativity. We began to think about the team; we bet on a vertical structure, on specialization, on the maturity of processes and on a strict operational plan. However, we kept the development team, and we continue to maintain it, in Spain, with 25 people in Madrid and Granada. And they were fascinated with the talent.

The integration was not easy. And that’s why my biggest effort since then has been dedicated precisely to achieving this integration. The first thing our new partners did was to talk to Gartner, who told us that the market was evolving beyond the SIEM, toward security intelligence.

Then the Americans spoke with Kleiner Perkins and the founder of ArcSight. And from there new challenges arose: answering why we were going to be the winner of the new cyber-intelligence market, and why our company would improve the world – if we wanted the fund to invest in us –. We began another stage of analysis of our objectives, looking for a new identity.

We went back to our open source origins, and from there created an intelligence network: Open Threat Exchange, which was the definition of our new identity. And this is what convinced Kleiner Perkins. And here we come back to the starting point, where this large investment fund’s decision rounds off our business journey; and from where, today, we connect all the dots along the way.

New formulas for Spain

Our voyage has been a journey of transformation and adaptation: from services to product, from a domestic market to international expansion, from a flexible and creative startup organization to a consolidated sales organization. But, above all, it was a journey of innovation: in the SIEM world, in the open source environment and in the unified model.

Finally, my message would be that competing from Spain is an unequal battle as long as there is a group of American companies selling globally with more funding than you. I think this country has to seek formulas that make us more productive and competitive in the global market. And, in the meantime, of course, there is no option other than succeeding in the US if you want to make a global product.

In the end, and in essence, I would say that the key is to innovate. To find our own ways, often based only on intuition, but outside the prescribed paths and preconceived ideas, and to use all the creativity available. And you shouldn’t be scared when someone tells you that you’re crazy. Perhaps you have to be a little bit crazy.

PHOTO GALLERY

Course opening. (From left to right): Francisco García Marín, Innovation for Security of the BBVA Group; Santiago Moral Rubio, Director of IT Risk, Fraud and Security of the BBVA Group; and Pedro González-Trevijano, Rector of the Rey Juan Carlos University and President of the Rey Juan Carlos University Foundation.

More than one hundred students enrolled in this second course, which took place at the venue of the Rey Juan Carlos University, located in Aranjuez.

Víctor Chapela, Chairman of the Board of Sm4rt Security, and Regino Criado, Professor of Applied Mathematics at the Rey Juan Carlos University, showed that computer science and mathematics applied to intentional risk management are not at odds with fun and learning.

Simon Leach, Presales Director for EMEA of HP Enterprise Security, said it clearly: “There are no longer ‘good guys’ on the Internet, but different levels of evil”.

Santiago Moral, Director of IT Risk, Fraud and Security of the BBVA Group, an expert always committed to the promotion of innovation as an essential factor for improvement.

Gonzalo Álvarez Marañón, scientist and lecturer. Nobody better than him to teach students that today, besides having deep knowledge of one discipline, they must also master the art of communicating it.

(Top from left to right) Fernando Esponda, Research Director of Sm4rt Predictive Systems, and Luis Vergara, Professor of the Communications Department at the University of Valencia. Both delivered a brilliant lecture on fraud prediction.

Julio Casal, Founder of AlienVault, a Spanish entrepreneur who has triumphed in Silicon Valley.

The application of Game Theory to risk management is one of the fields investigated by Jesús Palomo, Tenured Professor of Business Economics at the Rey Juan Carlos University.

Enrique Cabello, Tenured Professor of the Department of Computer Architecture and Technology at the Rey Juan Carlos University, explained the project on facial recognition of travelers at Barajas airport (Madrid).

The participants in the round-table discussion on new trends in technologies for risk management and IT security, along with some teachers. (From left to right) Rafael Ortega, Managing Director of Innovation 4 Security; Luis Saiz, Director of Global Fraud Management at BBVA; Ana Arenal, Security Director of Mnemo; Esperanza Marcos, Professor of Language and Computer Systems II of the Rey Juan Carlos University (moderator); Jaime Forero, Director of Business Development for Banking and Utilities from Intel; Víctor Chapela, Chairman of the Board of Sm4rt Security; Juan Jesús León, Director of products and new developments of GMV Soluciones Globales Internet SA; and Francisco García Marín, from Innovation for Security of the BBVA Group.

The lively debate during the round-table discussion attracted the attention of the large audience.

The Summer Course provided a superb space for cultivating social relationships between students and teachers.

Between classes, networking makes its appearance.

Aranjuez during the Summer Course, a place where risk management, fraud prevention and IT security merge with the good times and the popular gastronomy.
