
Automation of Secrecy Proofs for Security of Protocols

Miguel Abreu Malafaia Mendes Belo

Thesis to obtain the Master of Science Degree in

Information Systems and Computer Engineering

Supervisor: Doctor Pedro Miguel dos Santos Alves Madeira Adão

Examination Committee

Chairperson: Doctor António Manuel Ferreira Rito da Silva
Supervisor: Doctor Pedro Miguel dos Santos Alves Madeira Adão
Member of the Committee: Doctor Carlos Manuel Costa Lourenço Caleiro

November 2015


Acknowledgements

Throughout the 5 years that I've been attending IST, many people have helped me and deserve my deepest and sincere thanks. From professors to colleagues, friends to family, all of them have supported me and helped me throughout this journey, and without any of them none of my academic accomplishments would have been achieved. First I would like to salute the professors of the courses that I've taken to reach this point. Thank you for lecturing those courses so I could get the necessary credits to get to the Master's Degree.

I spent by far the most time when developing this thesis in discussion with Pedro Adão, my thesis advisor, which was always a great pleasure. Thank you for your help, support and goodwill; without it this thesis wouldn't have the same quality. Next, to my friends from IST, Guilherme, Miguel, Pedro, Caramujo, Paulo, Marta, Vera, David, I salute you for the patience that you had with me. I know that I can sometimes be "a pain in the neck" and for that you have my deepest thanks. And also for teaching me stuff about computer science and for the excellent work that you always produced, you, again, have my thanks.

Afonso and Bruno, together we formed an excellent team. Throughout these last 5 years we conquered every obstacle that IST and its professors put in our way. For your help, friendship and the laughs that we shared, I owe you a lot and for that I thank you.

To my great friend Duarte, there aren't enough 'thank yous' in this world and the others from the whole galaxy to express my gratitude. You've become a sort of older brother to me. Not only have you been helping me with projects, and teaching me some cool stuff about computer science, but especially you've been teaching me about life and about how a Man, with a capital M, should be. For that my friend, my brother, I thank you and I hope we can continue to share this amazing thing called life.

To my friends Diogo, Miguel, Pedro Vicente, Pedro Mangualde, Vasco and Gil, my most thoughtful thanks, for always being supportive, and for having the patience to put up with me. And also for the amazing experiences that we've been sharing (and that I hope we can continue to share for the rest of my life), which took away the pressure and stress that IST can put on a person. To my family, the ones that have been supporting me since I was born, always giving me what I needed, and more, not only to be successful, but to become happy. To my sister and grandparents, I thank you for your help, love and care throughout all my life.

To my mother, I do not only thank you, I owe you my life and future! You've always believed in me, through good times and bad times. You've always given me hope, joy and strength to overcome all the obstacles that life has presented to me. You're the main reason that I'm here. I love you not only for this, but for everything you are and for teaching me how to be an honourable person.

Finally, Susana. You are one of the most important people in my life. What you are to me cannot be expressed in a paragraph. You have given me strength and purpose to endure this journey. You are also one of the main reasons I'm here, and I thank you and God for your existence. The world would be a lot sadder without you.

Lisboa, November 19, 2015
Miguel Abreu Belo


Para a minha família e para a minha razão de viver


Resumo

Protocolos de Segurança são algoritmos distribuídos para alcançar objectivos de segurança, como por exemplo, secretismo da informação trocada e autenticação, mesmo numa rede insegura. Estes têm um papel bastante importante nas estruturas de negócio e de rede modernas. Esta tese tem como foco a verificação automática de protocolos de segurança baseada na técnica de adversários simbólicos proposta por Bana e Comon-Lundh.

Esta técnica pretende lidar com ataques não-Dolev-Yao e ataques computacionais. O modelo Dolev-Yao é basicamente um modelo de atacante que define quais são as habilidades de um atacante numa comunicação dentro de uma rede insegura. Um ataque não-Dolev-Yao explora aquilo que não é abrangido pelo modelo Dolev-Yao de forma a violar uma das propriedades criptográficas de um protocolo. E mais, com a utilização do modelo Dolev-Yao, que é um modelo simbólico, não somos capazes de detectar ataques computacionais.

Esta técnica lida com esta questão usando um modelo de atacante que, em vez de definir quais são as habilidades de um atacante, define qual o conjunto de axiomas que o atacante não pode violar, procurando combinar o melhor dos modelos simbólicos com o melhor dos modelos computacionais, obtendo solidez computacional.

Já existe uma ferramenta que implementa esta técnica (Scary), mas, para além de usar uma procura para a frente para explorar o espaço de estados de um protocolo, limita o número de instâncias. Para lidar com este assunto iremos utilizar um algoritmo de procura para trás, que é vulgarmente usado noutras ferramentas, como por exemplo o Scyther. O Scyther, como muitas outras ferramentas, utiliza o modelo Dolev-Yao, o que significa que não é capaz de lidar com ataques computacionais, não-Dolev-Yao.

Não só com o intuito de implementar a técnica de Bana e Comon-Lundh, mas também de ultrapassar as limitações do Scary, desenvolvemos uma ferramenta que utiliza uma abordagem de procura para trás para explorar o espaço de estados, podendo assim lidar com um número ilimitado de instâncias. Esta ferramenta confia no Scary para garantir que a execução do algoritmo principal da nossa ferramenta seja coerente com um determinado conjunto de axiomas, como o axioma de freshness, o de não-maleabilidade e demais.

São também apresentadas as limitações que a ferramenta possui de forma a poder funcionar de forma correcta e coerente com a técnica de Bana e Comon-Lundh. Estas limitações envolvem o tipo de protocolos que podem ser analisados (por exemplo, protocolos com encriptação simétrica/assimétrica) e as respectivas condições em que podem ser analisados.

Finalmente, para a validação e correcção da nossa ferramenta, comparámos o protótipo obtido com o Scary, testando o NSL, o Andrew Secure RPC e outros pequenos protocolos, exemplificativos da utilização de encriptação simétrica e assimétrica. A comparação com o Scary é importante na medida em que este implementa a mesma técnica que a ferramenta criada pretende implementar.


Abstract

Security protocols are distributed algorithms for achieving security goals, like secrecy or authentication, even when communicating over an insecure network. They play a critical role in modern network and business infrastructures. This thesis focuses on the automated verification of security protocols based on the technique for symbolic adversaries proposed by Bana and Comon-Lundh.

This technique aims to deal with non-Dolev-Yao attacks and computational attacks. The Dolev-Yao model is an intruder model that defines the abilities of an intruder in the communication over an insecure network. A non-Dolev-Yao attack exploits what is not covered by the intruder model to violate a cryptographic property of a protocol. Moreover, when using the Dolev-Yao model, which is a symbolic model, we are not able to detect computational attacks.

To deal with this issue, the technique uses an intruder model that, instead of defining the abilities of the intruder, defines the axioms that the intruder cannot violate. This technique aims to combine the best of symbolic models with the best of computational models, achieving computational soundness.

There already exists a tool that implements this technique (Scary), but besides using a forward search approach to explore the state space of a protocol, it also bounds the number of instances. To deal with this issue, we use a backward search, which is commonly used in other verification tools, such as Scyther. Scyther, like most tools, uses the Dolev-Yao intruder model, which means it is not able to deal with computational, non-Dolev-Yao attacks.

For the purpose of not only implementing the Bana-Comon-Lundh technique, but also of overcoming Scary's limitations, we developed a tool that uses a backward approach to explore the state space, thereby supporting an unlimited number of sessions. This tool also relies on Scary to guarantee that the execution of its main algorithm is coherent with a certain set of axioms, like the freshness axiom, the non-malleability axiom, and so forth.

The limitations under which the tool works coherently with the technique of Bana and Comon-Lundh are also presented. These limitations concern the types of protocols that can be analysed (for example, protocols that use symmetric/asymmetric encryption) and the restrictions under which the protocols can be analysed.

Finally, for the validation and correctness of this tool, we compared the obtained prototype with Scary, testing the NSL, Andrew Secure RPC and other minimal protocols that exemplify the usage of symmetric and asymmetric encryption. The comparison with Scary is important, since it implements the same technique as our prototype.



Palavras Chave

Segurança de Protocolos

Verificação Automática

Solidez Computacional

Informação Secreta

Modelo de Atacante

Keywords

Security of Protocols

Automatic Verification

Computational Soundness

Secrecy

Intruder Model


Index

1 Introduction 1

1.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

1.2 Historic Context . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

1.3 Non-Dolev-Yao Attacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

1.4 Bana-Comon-Lundh Technique . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

1.5 Contribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

1.6 Thesis Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

2 Tool Background 11

2.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

2.2 Symbolic Models vs Computational Models . . . . . . . . . . . . . . . . . . . . . . 11

2.2.1 Symbolic Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

2.2.2 Computational Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

2.2.3 Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

2.2.4 Computational Soundness . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

2.3 Infinite State Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

2.4 State Space Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

2.5 Intruder Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

2.6 Forward vs Backwards Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

2.7 Theorem Proving vs Model Checking . . . . . . . . . . . . . . . . . . . . . . . . . 16

2.7.1 Theorem Proving . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

2.7.2 Model Checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

2.7.3 Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

3 Tool Examples 19

3.1 Overview Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

3.1.1 AVISPA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19


3.1.2 Athena . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

3.1.3 Casper-FDR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

3.1.4 ProVerif . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

3.1.5 StatVerif . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

3.1.6 Scyther . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

3.1.7 Tamarin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

3.1.8 Maude-NPA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

3.1.9 EasyCrypt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

3.1.10 CryptoVerif . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

3.2 Tool Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

3.2.1 State Space Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

3.2.2 Bounded Verification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

3.2.3 Mutable Global State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

3.2.4 Termination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

3.2.5 Other Observations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

3.2.6 Tools Comparison Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

4 Secrecy Proof 33

4.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

4.2 Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

4.2.1 Terms and Frames . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

4.2.2 Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

4.3 Axioms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

4.3.1 CipherText Indistinguishability . . . . . . . . . . . . . . . . . . . . . . . . . 36

4.3.2 Secrecy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

4.3.2.1 Secrecy of CCA2 Encryption . . . . . . . . . . . . . . . . . . . . . 37

4.3.2.2 Non-Malleability of CCA2 Encryption . . . . . . . . . . . . . . . 38

4.3.2.3 Other Important Axioms . . . . . . . . . . . . . . . . . . . . . . . 39

4.4 Proof . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

4.4.1 Proof 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

4.4.2 Proof 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

4.4.3 Proof 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

4.5 Invariants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46


5 Prototype 51

5.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

5.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

5.3 Tool Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

5.3.1 Input/Object Constructor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

5.3.2 Trace Generation/Ids Manager . . . . . . . . . . . . . . . . . . . . . . . . . 57

5.3.3 Clause Coherence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

5.3.4 Scary/Scary Interactor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

5.4 Theoretical Assumptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

5.5 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

5.5.1 NSL Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

5.5.1.1 First Attack NSL . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

5.5.1.2 Second Attack NSL . . . . . . . . . . . . . . . . . . . . . . . . . . 66

5.5.1.3 Third Attack NSL . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

5.5.1.4 Special Case NSL . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

5.5.2 Scary Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

5.6 Further Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

6 Conclusion 75

6.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75

I Appendices 81

A Appendix 83


List of Figures

1.1 Scytale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

1.2 Unfolded Scytale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

1.3 NS Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

1.4 Man-in-the-middle attack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

2.1 Example of Ability of an Attacker . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

4.1 NSL Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

4.2 Case 1 NSL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

4.3 Case 2 NSL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

4.4 Case 2 Invariant NSL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

5.1 Prototype’s Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

5.2 Tool Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

5.3 Example of an Input File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

5.4 Case Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

5.5 Case 2.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

5.6 Scary Execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

5.7 Scary Input File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

5.8 Attack 1 in NSL protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

5.9 Trace First Attack in NSL protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . 67

5.10 Trace Second Attack NSL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67

5.11 Attack 2 on the NSL protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

5.12 Attack 3 on the NSL protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

5.13 Trace Third Attack on NSL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

5.14 Special Case NSL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70

A.1 Excerpt of Output of Attack 1 on the NSL protocol . . . . . . . . . . . . . . . . . . 83


A.2 Excerpt of Output of Attack 2 on the NSL protocol . . . . . . . . . . . . . . . . . . 84

A.3 Excerpt of Output of Attack 3 on the NSL protocol . . . . . . . . . . . . . . . . . . 85


List of Tables

3.1 Tools Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

4.1 h3 projections comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

4.2 h′3 projections comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

4.3 h′′3 projections comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

4.4 h′′′3 projections comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

5.1 Manual Proof/ Tools Comparison NSL Protocol . . . . . . . . . . . . . . . . . . . 72


1 Introduction

"The beginning is the most important part of work." – Plato

1.1 Overview

This thesis focuses on explaining the ideas behind our tool and the implementation of those ideas. This chapter gives some historical context about cryptography, discusses some current tools for protocol verification, and explores a problem shared by these tools that involves one aspect of protocol verification: the intruder model being used. Finally, it presents an approach to solving the identified problems.

The goals of this thesis are then addressed, and an overview of its structure is given.

1.2 Historic Context

Cryptography (or cryptology) is the practice and study of techniques for secure communication in the presence of third parties (called adversaries). More generally, it is about constructing and analysing protocols that remain secure in the presence of adversaries. Various aspects of information security, such as data confidentiality, data integrity, authentication, and non-repudiation, are central to modern cryptography. Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, and electrical engineering. Applications of cryptography include ATM cards, computer passwords, and electronic commerce.

Cryptography prior to the modern age was effectively synonymous with encryption, the conversion of information from a readable state to apparent nonsense. The originator of an encrypted message shared the decoding technique needed to recover the original information only with intended recipients, thereby precluding unwanted persons from doing the same.

Cryptography dates back to 600 BC, when Hebrew scholars used the so-called Atbash cipher. Atbash is a simple substitution cipher for the Hebrew alphabet. It consists of substituting the first letter for the last, the second for the one before last, and so on, reversing the alphabet. So if the reader wants to send a message with the word "civic" and encodes it using the Atbash cipher, the ciphered message reads "xrerx". As the reader can see, this is a very rudimentary and simple cipher that can be easily "cracked".
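Applied to the Latin alphabet as in the example above, the substitution can be sketched in a few lines of Python (the function name is ours); note that the cipher is its own inverse:

```python
def atbash(text: str) -> str:
    """Apply the Atbash cipher: each letter is swapped with its
    mirror in the alphabet (a<->z, b<->y, ...), so applying the
    function twice recovers the original text."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('a') if ch.islower() else ord('A')
            out.append(chr(base + 25 - (ord(ch) - base)))
        else:
            out.append(ch)  # leave non-letters untouched
    return "".join(out)

print(atbash("civic"))  # -> xrerx
```

Since the substitution table is fixed and public, anyone can decode a message by applying the same function again, which is exactly why the cipher is so easy to break.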

Around 400 BC, the Spartans allegedly used a Scytale for encryption, which is considered the first device used for encryption. The Scytale is basically just a rod of an arbitrary diameter.


The idea is that both the sender and recipient know what diameter the rod should have. The sender wraps a long thin piece of paper around the rod, and writes his message from left to right on the paper, as in Figure 1.1.

Figure 1.1: Scytale

To send a secret message containing the word consoles, using a rod whose diameter is such that two characters can be written around it, c o n s is written along the front side of the rod, as in Figure 1.1. The remaining characters o l e s are written on the other side of the rod. By unwrapping the paper from the rod and reading the characters, the encrypted message is coolness, as can be seen in Figure 1.2.

To decode it, the recipient wraps the paper around a rod of the same diameter. If the recipient uses a wrong diameter, the message will read as something completely different, not corresponding to the correct message. Here, the diameter of the rod acts as the key: even when the encryption method is known to the enemy, encrypting and decrypting requires a key which is known only to the sender and the recipient.
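This wrap-and-read procedure is a columnar transposition, and can be sketched in Python (function names are ours; for simplicity we assume the message length is a multiple of the diameter):

```python
def scytale_encrypt(plaintext: str, diameter: int) -> str:
    """Write the message row by row around a rod that fits
    `diameter` characters, then read the unwrapped strip
    column by column.  Assumes len(plaintext) % diameter == 0."""
    row_len = len(plaintext) // diameter
    rows = [plaintext[i * row_len:(i + 1) * row_len] for i in range(diameter)]
    return "".join(row[col] for col in range(row_len) for row in rows)

def scytale_decrypt(ciphertext: str, diameter: int) -> str:
    """Re-wrapping the strip around an equal rod is the same
    transposition with the dimensions swapped."""
    return scytale_encrypt(ciphertext, len(ciphertext) // diameter)

print(scytale_encrypt("consoles", 2))  # -> coolness
print(scytale_decrypt("coolness", 2))  # -> consoles
print(scytale_decrypt("coolness", 4))  # wrong "key": -> cnoeosls
```

The last line illustrates the role of the diameter as the key: a rod of the wrong size yields gibberish.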

Throughout history, the use of encoding schemes has evolved significantly. Traditional encryption relies on the fact that the encoding scheme is kept secret. Since World War I and the advent of the computer, the methods used to carry out cryptology have become increasingly complex and their application more widespread.

Modern cryptography is heavily based on mathematical theory and computer science practice; cryptographic algorithms are designed around computational hardness assumptions, making such algorithms hard to break in practice by any adversary. It is theoretically possible to break such a system, but it is infeasible to do so by any known practical means; these schemes are therefore termed computationally secure. During World War II, cryptography was used extensively, notably by the German forces. Although different machines were used, the best known are the Enigma devices, based on a system with multiple rotors.

In 1976 Diffie and Hellman published their key paper (Diffie and Hellman 1976), in which


Figure 1.2: Unfolded Scytale

they introduce a mechanism that later became known as asymmetric encryption. In this type of encryption, every individual that wants to communicate with someone has a public key and a private key. The public key is used by the other individuals to encrypt messages that are meant for the owner of that public key and corresponding private key. The private key is known only to its owner and is used to decrypt messages that are encrypted with the associated public key.

Imagine that Alice wants to invite Bob for dinner. For this purpose, Alice must share her public key with Bob, and Bob must share his public key with Alice. Only after this key exchange does Alice send the message to Bob, encrypted with Bob's public key. When he receives the message, Bob decrypts it with his private key, reading the content of the message. Both symmetric and asymmetric encryption schemes are used extensively today for the encryption of internet traffic, wireless communications, smart-card applications, cell phone communications, and many other applications.
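The exchange above can be sketched with a toy RSA-style key pair (the tiny hard-coded primes and function names are ours, chosen for readability; a real deployment would use a vetted cryptographic library and keys of 2048 bits or more):

```python
def make_keypair():
    """Toy RSA key generation with tiny, hard-coded primes."""
    p, q = 61, 53
    n = p * q                          # public modulus
    e = 17                             # public exponent
    d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)
    return (n, e), (n, d)              # (public key, private key)

def encrypt(public_key, message: int) -> int:
    """Anyone holding the public key can encrypt."""
    n, e = public_key
    return pow(message, e, n)

def decrypt(private_key, ciphertext: int) -> int:
    """Only the holder of the private key can decrypt."""
    n, d = private_key
    return pow(ciphertext, d, n)

# Bob shares his public key; Alice encrypts her invitation with it;
# only Bob's private key recovers the message.
bob_public, bob_private = make_keypair()
c = encrypt(bob_public, 42)
assert decrypt(bob_private, c) == 42
```

The asymmetry is the point: publishing `bob_public` lets anyone write to Bob, while reading requires `bob_private`, which never leaves Bob's hands.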

Throughout history, we've been searching for the perfect encryption scheme, one with which every communication is completely secure and without holes. That type of scheme still does not exist. But that doesn't mean the existing ones can't be improved.

This is where tools for the verification of protocol security come into play, more precisely, tools for the automatic verification of protocol security.

1.3 Non-Dolev-Yao Attacks

Research in the automatic analysis of security protocols has been quite successful, generating several tools for the purpose. The reason for this success is simple: it improves the security of communication protocols; in other words, it gives us more secure ways to communicate. For instance, the Needham-Schroeder-Lowe protocol (Lowe 1996), depending on the tool used for verification, can exhibit a different number of attacks, which use different methods depending on the encryption scheme being considered.

These methods try to exploit known vulnerabilities of security protocols, or even to discover new vulnerabilities in these protocols. This is why verification tools are of extreme importance: they not only verify whether a protocol is secure (for some security property), but also discover new ways in which a protocol can be "hacked".

This depends on the properties of the encryption scheme used. An example of one of these properties is ciphertext indistinguishability, which will be discussed in more detail later on. Intuitively, if a cryptosystem possesses the indistinguishability property, then an adversary is unable to distinguish pairs of ciphertexts based on the messages they encrypt. Indistinguishability under chosen plaintext attack is considered a basic requirement for most secure public key cryptosystems, though some schemes also provide indistinguishability under chosen ciphertext attack and adaptive chosen ciphertext attack.

One of the main points of this thesis is the intruder model used in association with the protocols and encryption schemes. The intruder model defines what an attacker is and is not able to do. The most widely used intruder model is the Dolev-Yao model (Dolev and Yao 1983), which states the types of actions the intruder can execute in a protocol. In this model, the intruder can control, encrypt and decrypt (as long as he knows the corresponding key) all the information that goes through the network. Most tools use this model to identify attacks on protocols.

For example, the man-in-the-middle attack on the Needham-Schroeder (NS) protocol was identified and corrected by Gavin Lowe using the Dolev-Yao model. Before explaining what this attack consists of, it is worth mentioning that NS uses asymmetric encryption, that is, every entity has a private key and a public key, which it shares with the other entities it wants to communicate with.

In a normal execution of the Needham-Schroeder protocol, two entities, Alice (A) and Bob (B), are trying to communicate with each other. For that purpose, they have already performed the key sharing process. After that stage, Alice sends Bob a message, encrypted with Bob's public key, containing a nonce generated by Alice and her identifier (message 1). Bob then responds to Alice with a message encrypted with Alice's public key, containing the nonce generated by Alice and a nonce generated by Bob (message 2). Finally, Alice responds to Bob with a message encrypted with Bob's public key, containing the nonce generated by Bob (message 3).
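The three messages just described can be written down as symbolic terms; the following is a minimal sketch (our illustration, not an artifact of the thesis), where `enc(pk, ...)` stands for asymmetric encryption of a tuple under the public key `pk`:

```python
# Symbolic sketch of the Needham-Schroeder message flow; all names are illustrative.
def enc(pk, *payload):
    return ("enc", pk, payload)

pkA, pkB = "pk(A)", "pk(B)"   # public keys of Alice and Bob
n1, n2 = "n1", "n2"           # nonces generated by Alice and Bob

msg1 = enc(pkB, n1, "A")      # message 1: A -> B, {n1, A} under pk(B)
msg2 = enc(pkA, n1, n2)       # message 2: B -> A, {n1, n2} under pk(A)
msg3 = enc(pkB, n2)           # message 3: A -> B, {n2} under pk(B)
```

Note that message 2 carries no sender identifier, which is precisely what Lowe's correction, discussed below, adds.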

Supposedly, the nonces generated by Alice and Bob are secret, that is, known only to Alice and Bob. That is not the case in NS: an intruder interacting with Alice and Bob can discover the value of the nonces they generate. Imagine that Eve (E) persuades Alice to initiate a session with her; Eve can then relay the messages to Bob and convince him that he is communicating with Alice.

Lowe corrected this issue by adding to the second message of the protocol the identifier of its sender; that is, following the previous example, the message sent by Bob to Alice also contains his identifier. This way Alice has a guarantee that she is talking to Bob and not to another entity.

Lowe, as we said, used the Dolev-Yao model to prove that the new version of NS, Needham-Schroeder-Lowe, is immune to the man-in-the-middle attack. But even with this new and improved protocol, Warinschi (Warinschi 2003) was able to recreate the man-in-the-middle attack, the same attack Lowe had found on the NS protocol.


Figure 1.3: NS Protocol

How did he manage to do that? He used an operation that was not considered by the intruder model under which the NSL protocol had been proved secure. By using operations not covered by that intruder model, Warinschi was able to mount the man-in-the-middle attack.

More specifically, Warinschi took advantage of the malleability property of the exchanged messages. This property states that an adversary can modify a ciphertext and transform it into another ciphertext that decrypts to a related plaintext. That is, given an encryption of a plaintext m, it is possible to generate another ciphertext that decrypts to f(m), for a known function f, without necessarily knowing or learning m.
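Malleability can be illustrated with a toy XOR (one-time-pad style) cipher, where flipping bits of the ciphertext flips the same bits of the plaintext. This toy scheme and its message layout are our illustrative assumptions, not a construction from the thesis:

```python
# Toy XOR cipher: enc(k, m) = k XOR m. It is trivially malleable:
# XOR-ing a mask into the ciphertext XORs the same mask into the plaintext.
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key = bytes(range(8))                 # secret key, unknown to the attacker
m = b"n1,n2,B\x00"                    # plaintext whose byte 6 is the sender id "B"
c = xor(key, m)

# Knowing only where the identifier sits, the attacker XORs in B^E,
# turning the id "B" into "E" without ever learning m or the key.
mask = bytearray(8)
mask[6] = ord("B") ^ ord("E")
c2 = xor(c, bytes(mask))
assert xor(key, c2)[6:7] == b"E"      # c2 decrypts to a related plaintext f(m)
```

This mirrors, in a much cruder setting, how Warinschi's attack rewrites the sender identifier inside a ciphertext.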

What Warinschi did was transform the message sent by Bob to Alice using a function that changes the identifier of the message's sender: instead of carrying Bob's identifier, the message carries the intruder's. That way, an intruder can execute the man-in-the-middle attack without any of the participants noticing its occurrence.

This example gives rise to a fundamental question: what type of intruder model did Warinschi consider? This leads to computational, non-Dolev-Yao attacks and to the problem to which this thesis tries to give a possible solution. It can happen that a protocol, like Needham-Schroeder-Lowe, can be attacked under certain cryptographic conditions and the attack is not identified by the tool being used. This reiterates what was previously said:


Figure 1.4: Man-in-the-middle attack

verification tools are of extreme importance to verify whether a protocol is secure and immune to attacks, taking into account all intruder models used and all types of attacks. This leads to the work of this thesis: the creation of a tool that takes computational attacks into account and extends the Dolev-Yao model.

1.4 Bana-Comon-Lundh Technique

The technique proposed by Bana and Comon-Lundh in (Bana and Comon-Lundh 2014) aims to deal with non-Dolev-Yao attacks. It is a proof technique that involves long manual security proofs (detailed in (Adao and Bana 2015)). The technique performs proofs in a symbolic, abstract model, while keeping strong, computational guarantees, and without establishing general soundness results.

The basic idea of this technique is that, instead of listing every kind of move a symbolic adversary (present in the symbolic model) is allowed to make, as is usual with the Dolev-Yao adversary, a few rules (axioms) are listed stating what the symbolic adversary is not allowed to violate.

In other words, the adversary is allowed to do everything that is consistent with these axioms. The axioms are first-order formulas expressing properties that hold computationally for probabilistic polynomial time adversaries and for the computational implementations of the cryptographic primitives.

The main result of this technique is that if there is no successful symbolic adversary compliant with the set of axioms, then for any computational implementation satisfying that set of axioms (that is, implementations for which the axioms are computationally sound), there are no successful computational attacks.

For a computational attack to be possible, the computational attacker must take advantage of one of the defined computational properties of the protocol. That is the computational side of the analysis. What about the symbolic side? The associated symbolic attacker takes advantage of the corresponding axiom that characterizes that computational property. As the reader can see, there is a correspondence between these two worlds, the symbolic model and the computational model (to be analysed later). The technique presented in (Bana and Comon-Lundh 2014) is thus based on a symbolic model and, as proved in (Bana and Comon-Lundh 2014), if there exists a computational attack then there exists a corresponding symbolic attack.

For example, suppose that we find an attack that exploits the fact that the pair composed of the nonce n1 and the name Q1, 〈n1, Q1〉, can be interpreted as the name Q2. That is, Q2 can be equal to the concatenation of n1 with Q1.

Symbolically, to solve this problem the following axiom is added: 〈n1, Q1〉 ≠ Q for every name Q. On the computational side an associated fix is also necessary: for instance, associate a tag with every type of message element. That way, a pair composed of a nonce and a name cannot be equal to a name. The detection of these conditions falls out of the scope of this thesis and is not going to be explored in detail.

Guillaume Scerri has already developed a tool that supports and automates Bana-Comon's logic, Scary (Scerri 2015). The particularity of this tool is that it explores the state space with a forward search algorithm. It generates all reachable states of a protocol which, if one does not limit the number of instances running, can lead to one of the most common problems when verifying protocols: the state explosion problem.

To avoid this problem, Scary limits the number of sessions/instances. The result obtained is not fully conclusive: if, for example, no attack is identified for a protocol with 5 instances, it cannot be assured that the same holds for 6 or more instances. This is also something the work of this thesis aims to solve. Instead of exploring the state space from its root with a limited number of instances, the state space is explored with an unlimited number of instances, starting from a state where a hypothetical attack occurred and going backwards until either the initial state is reached, or a set of contradictions is found, which means the attack cannot occur. The comparison between forward and backward search will be addressed later.

The proofs designed to implement the technique proposed in (Bana and Comon-Lundh 2014), which use backtracking as a state space search method, are explained in detail in (Adao and Bana 2015). Later in this thesis, we also explore these proofs.

Scyther (Cremers 2008) is a protocol verification tool (based on Athena (Song 1999)) that explores the state space using backward search. This state space consists of trace patterns that can occur within a given protocol. Trace patterns capture the class of all possible traces for a given protocol, including attacks. If there exists a trace of a protocol that is an attack, this trace violates the security property, and it can be concluded that the security property does not hold.

An example of a trace pattern of the Needham-Schroeder protocol would be the following:


1. Initiator I sends {I, n1}pkR to Responder R, where n1 is a nonce freshly generated by I.

2. I receives {n1, n2}pkI from R, where n2 is a nonce freshly generated by R.

3. I sends {n2}pkR to R, confirming that I received the message from R.

These trace patterns take into account every type of agent, whether honest or not, and all of their actions, including ways an intruder can gain knowledge of a value he is not supposed to know, for example the nonce n1 that was generated by I and sent to R. A trace pattern thus contains every valid exchange of messages between two roles, including intruder actions. When analysing a security protocol, Scyther generates what it calls a complete characterization of the protocol, which contains every possible trace pattern of the protocol. In conclusion, Scyther not only verifies whether a protocol is secure, but also shows how it can be attacked.

Scyther, like most current tools, does not deal with computational, non-Dolev-Yao attacks, but Scary does. The main contribution of this thesis is therefore to develop a tool that combines Bana's technique, as implemented in Scary, with backtracking, dealing in this way with non-Dolev-Yao attacks for a virtually unlimited number of sessions. The main focus is the automation of the verification of the non-occurrence of attacks, or rather, of the non-violation by the symbolic adversary of the cryptographic properties of a protocol, which implies automating the manual proofs involved in the technique proposed in (Bana and Comon-Lundh 2014).

The execution of the tool can be summarised as follows: for a protocol, it generates traces from the protocol's messages using backtracking, where the starting point is a hypothetical state in which an attack has occurred (let us call it the “attack state”); between points in a generated trace, Scary is used to check whether the trace is feasible; if it is, the trace is expanded, otherwise it is closed; if the tool finds a trace that goes from the “attack state” to the beginning of the protocol, that trace represents an attack.
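The loop just described can be sketched roughly as follows. This is our illustration, not the thesis's actual implementation: `feasible` stands in for the call to Scary, and `predecessors` for the backtracking step that proposes earlier protocol states.

```python
# Rough sketch of a backward backtracking search from a hypothetical
# "attack state" towards the protocol's initial state. All names are
# illustrative stand-ins (e.g. `feasible` abstracts the call to Scary).
def search(state, predecessors, feasible, is_initial):
    if not feasible(state):
        return None                  # contradiction found: close this trace
    if is_initial(state):
        return [state]               # reached the start: an attack trace exists
    for prev in predecessors(state):
        trace = search(prev, predecessors, feasible, is_initial)
        if trace is not None:
            return trace + [state]   # expand the feasible trace
    return None                      # every extension was closed: no attack here

# Toy instance: attack state 3, predecessor s-1, everything feasible.
trace = search(3, lambda s: [s - 1] if s > 0 else [],
               lambda s: True, lambda s: s == 0)
assert trace == [0, 1, 2, 3]
```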

With all of this in mind, the reader could pose the following questions: besides Scary, are there any other tools/models/frameworks able to deal with non-Dolev-Yao and computational attacks? And what are the advantages of our tool compared with these other “solutions”?

The answer to the first question is yes. Two examples are EasyCrypt and CryptoVerif, computational tools/frameworks that are indeed able to deal with computational attacks, since they use a computational model instead of a symbolic model like the Dolev-Yao model. This subject is further analysed in Chapters 2 and 3.

Answering the second question: the technique proposed in (Bana and Comon-Lundh 2014), which this tool automates, tries to get the best of both symbolic and computational models, and this is where computational soundness comes into play (more detail in Chapter 2). To give the reader a notion, EasyCrypt and CryptoVerif, despite considering a computational model, are difficult to use and cannot serve as fully automated provers, unlike Scyther for example.

Not only do the proofs require some user interaction, but a failure in the proof search is difficult to interpret. Indeed, when trying to prove a protocol in the computational model, the goal is to find the correct reduction. Therefore, if one fails to prove a protocol in the computational model, this failure does not necessarily yield an attack; it might be that the proof is simply too difficult to find. Proofs in the computational model tend to be quite involved, as it is necessary to take into account probabilities and the complexity of the reductions.

On the other hand, Scyther is a fully automated prover because it uses a symbolic model with a simple logic that is easily automated. The comparison between symbolic and computational models will be discussed in Chapter 2.

Our tool is thus a fully automated prover able to deal with non-Dolev-Yao and computational attacks, supporting an unlimited number of sessions/instances, unlike Scary, as previously explained. To validate the tool, we took the NSL protocol as our main test case and compared the obtained results, in terms of attack detection, with those of the manual proof presented in (Adao and Bana 2015). From this, the tool was extended to support the testing of other protocols.

Finally, it is important to mention that our tool works with asymmetric and symmetric encryption, supporting encryption/decryption and pairing as function symbols. The class of protocols that the prototype is able to analyse must therefore obey these conditions, because Scary is only able to analyse this type of protocol and our tool depends on the execution of Scary.

1.5 Contribution

The main contribution of this thesis is a protocol security verification tool that implements and automates the technique of computationally complete symbolic attackers proposed in (Bana and Comon-Lundh 2014), dealing in this way with computational and non-Dolev-Yao attacks. This tool also goes a step further than Scary (which already implements the mentioned technique) in terms of the number of instances in a protocol execution: it sustains an unlimited number of instances with the help of backtracking.

1.6 Thesis Overview

In this section we briefly summarize what the reader will encounter in each chapter of this thesis.

Chapter 2 - Lays down the basics behind every verification tool. More specifically, we discuss what a state space is, ways to explore it, and restrictions that can be imposed on it. This chapter also compares symbolic models with computational models and discusses the importance of each for our tool to achieve computational soundness. We further discuss what an intruder model is and, finally, compare two types of existing tools: theorem provers and model checkers.

Chapter 3 - Building on Chapter 2, analyses several verification tools and the mechanisms they use. This comparison takes into account the aspects depicted in the previous chapter.

Chapter 4 - Presents the secrecy proof associated with the symbolic adversary technique proposed by Bana and Comon-Lundh, which our tool aims to automate for every protocol. For that purpose, we present the corresponding axioms to be checked in the execution of the secrecy proof. To exemplify this proof, we explore the NSL protocol as is done in (Adao and Bana 2015).

Chapter 5 - Describes the fundamentals of our tool and how it implements the technique of Bana and Comon-Lundh, taking into account the proofs explored in the previous chapter. It also describes the experimental results obtained with the prototype, with the NSL protocol and Scary as the main benchmarks.


2 Tool Background

It all comes back to the basics. Serve customers the best-tasting food at a good value in a clean, comfortable restaurant, and they'll keep coming back.

– Dave Thomas

2.1 Overview

In this chapter, the focus turns to the fundamentals of security protocol verification: the state space search, what it is, how it can be explored (forward vs backward search), and ways of pruning it (bounded verification). We also compare model checking with theorem proving (Berezin 2002), and symbolic with computational models.

2.2 Symbolic Models vs Computational Models

Chapter 1 summarized the solution to the problem presented by Warinschi (Warinschi 2003), namely the execution of operations that are not considered by the intruder model used by some tools, which leads to undetected attacks. This solution, the technique proposed in (Bana and Comon-Lundh 2014), performs proofs using a symbolic model, as previously said.

What is a symbolic model? What is the alternative? This is the purpose of this section: to understand what a symbolic model is and what a computational model is.

2.2.1 Symbolic Model

A symbolic model, as the name says, encompasses all the specifications of a protocol, representing them as terms, functions, predicates and so forth. An example of a symbolic model is the Dolev-Yao model (Dolev and Yao 1983), which represents all messages and their components as terms and specifies the abilities of an attacker as rules.

In these models, messages are modelled as abstract terms, living in what is called a term algebra. Intuitively, this means that a message records the information on how it was built. For example, in the Dolev-Yao model, an attacker is able to decrypt a message for which he has the key. So if the attacker has a message m encrypted with a key k, and also the key k itself (assuming symmetric encryption is being used), then the attacker can obtain m.
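This decryption rule can be phrased as a small deduction step over terms. The sketch below is our illustration of the idea (the term encoding is an assumption, not the thesis's formalism): knowledge is closed under the rule "from senc(k, m) and k, derive m".

```python
# Dolev-Yao style deduction for symmetric encryption: if the attacker's
# knowledge contains senc(k, m) and the key k, he can derive m.
# Terms are encoded as nested tuples; this encoding is illustrative.
def derive(knowledge):
    derived = set(knowledge)
    changed = True
    while changed:                    # close the set under the decryption rule
        changed = False
        for t in list(derived):
            if isinstance(t, tuple) and t[0] == "senc" and t[1] in derived:
                if t[2] not in derived:
                    derived.add(t[2])
                    changed = True
    return derived

k = ("key", "k")
m = ("msg", "m")
assert m in derive({("senc", k, m), k})      # key known: m is derivable
assert m not in derive({("senc", k, m)})     # without the key, m stays secret
```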

It is important to recall that the symbolic adversary can only execute the actions specified by the established rules. In these models, security properties can be divided into reachability properties and equivalence properties. The former state that in a certain state the adversary should not be able to derive a secret s (secrecy), or that if a certain point in the protocol is reached, the principals should agree on some value, or similar properties (agreement). Anonymity is a typical example of an equivalence property, for instance voter anonymity, where the adversary should not be able to notice a swapping of the votes in a voting process.

Figure 2.1: Example of Ability of an Attacker

The main advantage of these symbolic models is that they can be easily automated. There are many tools that check trace properties, such as ProVerif (Blanchet, Smyth, and Cheval 2014), Scyther (Cremers 2008), or AVISPA (The AVISPA Team 2006).

2.2.2 Computational Model

Unlike Scyther or ProVerif, there are tools that treat the verification of a property of a protocol as a game, more precisely as probabilistic programs. This is what happens in a computational model. In the computational model the attacker is not an abstract process dealing with unambiguous terms but a probabilistic polynomial time Turing machine (PPT) that deals with bitstrings.

The key idea in these tools is to define security as a game that the adversary should not be able to win. This model made it possible to define the security of cryptographic primitives in a rigorous way, leading to various levels of security that can be expected from cryptographic primitives against more or less powerful attackers. Typical examples are IND-CPA security and IND-CCA security.

The former states that an encryption scheme should not leak information about the plaintext in the presence of an adversary able to obtain ciphertexts. The latter models the fact that an adversary should not be able to obtain information from a ciphertext even with access to a decryption oracle. Note that these rigorous security definitions for the primitives are a key difference from the symbolic models: in the symbolic model the cryptographic primitives are assumed perfect, while in the computational model they are assumed to satisfy no more than what is explicitly stated by the security property.

As an example, say that the protocol P should ensure the secrecy of some value n. The following secrecy game is defined: given a PPT A, say that A wins the secrecy game if A^P (A interacting with the protocol P as an oracle) outputs n. The protocol is secure if no PPT can win this game with non-negligible probability.
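The shape of such a game can be sketched in a few lines of (non-rigorous) Python; this toy is entirely our illustration, with an oracle that leaks nothing, so a guessing adversary wins only with probability 2^-128:

```python
import secrets

# Toy secrecy game: the challenger holds a secret nonce n; the adversary,
# given oracle access, wins if it outputs n. All names are illustrative.
def secrecy_game(adversary, bits=128):
    n = secrets.randbits(bits)
    oracle = lambda: "public transcript"   # this oracle reveals nothing about n
    return adversary(oracle, bits) == n    # True iff the adversary wins

def guessing_adversary(oracle, bits):
    oracle()                               # interact with the protocol oracle
    return secrets.randbits(bits)          # then output a blind guess

# The blind guesser loses except with probability 2^-128.
assert secrecy_game(guessing_adversary) is False
```

A real definition quantifies over all PPT adversaries and asks that the winning probability be negligible in the security parameter; the sketch only shows the game's interaction structure.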

The main advantage of the computational model is that the PPT attacker is much closer to what a “real life” attacker is.


2.2.3 Comparison

The first difference between these models is the way messages are represented. Symbolic models abstract messages as terms, and may therefore overlook some confusion between messages. In the computational model, messages are bitstrings, which is precisely what messages are in “real life”.

A related problem is the capabilities of the adversary. As mentioned earlier, a symbolic adversary may only compute messages according to a fixed set of rules. This leads to the problem mentioned earlier: if an encryption scheme is malleable, this should be reflected in the capabilities of the adversary. However, if an encryption scheme is only assumed to be IND-CPA, it may be malleable in very different ways, and it appears to be a very complicated task to model them all. In other words, it seems close to impossible to deal with underspecified cryptographic primitives in a symbolic model such as the Dolev-Yao model.

This problem does not arise in the computational model. Indeed, the game-based proofs rely only on the (proven or assumed) properties of the cryptographic primitives.

Let us now compare the models from the viewpoint of proof search. In the symbolic models, as mentioned earlier, the proofs are often completely automated. It is also worth mentioning that proofs in the symbolic models are usually done by applying the attacker's rules up to saturation (what saturation consists of will be explained in Chapter 5, when discussing Scary). Therefore, disproving the security property often yields a sequence of attacker rule applications leading to an attack.

Moreover, the proofs carried out in the symbolic models are mostly reasonably simple proofs using a simple logic. This fact, together with the automation of the process, gives a very good level of confidence in such proofs. On the other hand, in the computational model, while there are proof assistants such as EasyCrypt (Barthe, Dupressoir, Gregoire, Kunz, Schmidt, and Strub 2014) and CryptoVerif (Blanchet 2007), there is no fully automated prover so far and little hope of designing one in the near future. Not only do the proofs require some user interaction, but a failure in the proof search is difficult to interpret. Indeed, when trying to prove a protocol in the computational model, the goal is to find the correct reduction; therefore, if one fails to prove a protocol, this failure does not necessarily yield an attack. It might be that the proof is simply too difficult to find.

Indeed, proofs in the computational model tend to be quite involved, as it is necessary to take into account probabilities and the complexity of the reductions.

2.2.4 Computational Soundness

The aim of computational soundness is to get the best of both worlds, symbolic and computational models; that is, symbolic proofs imply computational security. Works concerning active adversaries can be divided into two groups. Works in the first group define symbolic adversaries, and soundness theorems state that under certain circumstances, if there is no successful symbolic attack, then there is no successful computational attack either. The other group aims to work directly in the computational model.

The technique proposed in (Bana and Comon-Lundh 2014) belongs to the first group, as explained in Chapter 1.


Computational soundness proofs are very complex, and the reason can be summed up as follows. In the computational model, the adversarial capabilities are defined by what the adversary cannot do; in the symbolic models, they are defined by what the adversary can do (the Dolev-Yao model). In other terms, in the computational model the adversarial capabilities are defined as a greatest fixed point, while in the symbolic models they are defined as a least fixed point. The difficulty in soundness results is to make sure that these two fixed points actually coincide.

The technique proposed in (Bana and Comon-Lundh 2014) aims to deal with this issue.

2.3 Infinite State Space

In order to verify properties of security protocols with automated model checkers, the execution of the protocols is represented in terms of reachable states. What is a state, the reader might ask. Simply put, in the context of this thesis, a state is a point in the execution of a protocol that contains the messages previously sent, the interacting agents and the attacker's knowledge.

If Reachable(P) refers to all states that can be reached during the execution of a protocol P, and S represents the set of states satisfying the desired property SP, then a protocol that satisfies this property SP should fulfil the following formula, claiming that all states reachable by P are included in S:

Reachable(P) ⊆ S

If S̄ refers to the complement of S, containing all states describing possible attacks, the above formula can also be expressed as follows:

Reachable(P) ∩ S̄ = {}

This formula specifies that no state included in S̄ is reachable by P, which means that no attack exists and no counterexample can be constructed. When it comes to implementing an automatic model checking algorithm to verify the reachability of states, a severe challenge appears because the search space is infinite. In such an infinite state space, the secrecy problem, for example, is undecidable if there is no bound on the number of sessions; if a bound is introduced, the problem becomes NP-complete. The number of states that need to be searched is then limited to a specific number and, accordingly, the number of possible exchanged messages is bounded as well.
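A bounded check of this emptiness condition can be sketched as follows; the transition system and the bound are our illustrative assumptions:

```python
# Bounded reachability check of Reachable(P) ∩ Bad = {}: explore states
# up to a depth bound and test the intersection. The toy system is illustrative.
def reachable(init, step, bound):
    seen, frontier = {init}, [init]
    for _ in range(bound):
        frontier = [t for s in frontier for t in step(s) if t not in seen]
        seen.update(frontier)
    return seen

# Toy protocol: a counter the attacker can increment up to 10.
step = lambda s: [s + 1] if s < 10 else []
bad = {42}                                     # hypothetical attack states
assert reachable(0, step, 5) & bad == set()    # no attack state within the bound
```

As the text notes, such a result is only conclusive up to the bound: states beyond depth 5 are simply never examined.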

Some tools limit the number of instances in a protocol to limit the number of states explored. The tool related to this thesis avoids this issue with the help of backtracking and invariants (which are explained in Chapter 4).

2.4 State Space Restrictions

Given a security protocol P, let Sys(P) be the system describing the protocol being analysed, and let tr(Sys(P)) be the set of traces of the system. A trace of a protocol consists of a valid message exchange between agents in the context of the protocol. Because of undecidability or efficiency concerns, protocol analysis tools usually apply some restrictions on the system and do not explore all elements of the set tr(Sys(P)); more precisely, not all traces of a protocol are taken into account. Such a subset can be defined by using a scenario. A scenario is a multiset of processes; let Sc represent all the possible scenarios related to a security protocol.

The following are some restrictions on scenarios that the tools to be analysed use, in most cases to reach termination, that is, for the tool to finish the protocol analysis:

• Scen(Sc) - contains only the intruder processes and the processes specified in Sc.

• MaxRuns(P, m) - defined by a protocol P and a maximum count m (the name derives from the term “run”, referring to a role instance). While the system Sys(P) contains any number of replications of each role, the MaxRuns system contains only a finite number of replications of each role.

In the context of this thesis, all processes that might lead to an attack, and consequently to a violation of the property being verified, are taken into consideration. Remember that the proofs (to be explored in Chapter 4) related to the technique proposed in (Bana and Comon-Lundh 2014) begin from a hypothetical state where an attack has already occurred.

2.5 Intruder Model

One thing that must be defined when verifying security protocols is the intruder model. The intruder model defines the degree of power an intruder (a malicious agent) has over the related network. This power corresponds to the abilities of the intruder to intercept, change and create messages over a communication channel in the network. The intruder model that is considered in most of the tools that are going to be analysed is the Dolev-Yao model. In the Dolev-Yao model the intruder has complete control over the communication network: he can intercept any message and insert any message, as long as he is able to construct its contents from his knowledge.

The intruder model that is considered in the development of the tool related to this thesis is an extension of the Dolev-Yao model. In this model, instead of defining what the abilities of the intruder are, limitations are established on what the intruder can do, by defining a set of axioms/rules the intruder cannot violate.
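To make the Dolev-Yao abilities concrete, here is a minimal, illustrative sketch (term encodings and names are made up for this example, and it is not the model used by any particular tool) of closing an intruder's knowledge under projection and decryption, and testing whether a term is derivable:

```python
# Terms: atoms are strings; ("pair", a, b) is pairing; ("enc", m, k) is
# symmetric encryption of m under k.
def close(knowledge):
    """Close a knowledge set under projection of pairs and decryption
    with known keys (the 'breaking down' half of Dolev-Yao deduction)."""
    know = set(knowledge)
    changed = True
    while changed:
        changed = False
        for t in list(know):
            new = set()
            if isinstance(t, tuple) and t[0] == "pair":
                new |= {t[1], t[2]}                  # projections
            if isinstance(t, tuple) and t[0] == "enc" and t[2] in know:
                new.add(t[1])                        # decrypt with known key
            if new - know:
                know |= new
                changed = True
    return know

def derivable(term, knowledge):
    """Can the intruder construct `term`? Break the knowledge down with
    `close`, then build the target up from derivable subterms."""
    know = close(knowledge)
    if term in know:
        return True
    if isinstance(term, tuple) and term[0] in ("pair", "enc"):
        return derivable(term[1], knowledge) and derivable(term[2], knowledge)
    return False

# Having intercepted {m}k and later learnt k, the intruder derives m.
assert derivable("m", {("enc", "m", "k"), "k"})
assert not derivable("m", {("enc", "m", "k")})
```

The point of the example is the closure condition: the intruder may insert any message whose contents are derivable in this sense, and nothing more.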

2.6 Forward vs Backwards Search

Forward Search Algorithms compute all reachable states of a protocol in an iterative manner, beginning with the initial state sinit. As soon as a state is reached which is part of S, then the desired property does not hold and a counterexample can be constructed. When a fix-point is reached, i.e., a subsequent state equals the current state, then it can be assumed that the desired property holds for the protocol. Fix-points are always reached in finite-state models, i.e., where the number of sessions and hence the number of threads is limited. However, in infinite-state models the reachability of a fix-point cannot be guaranteed.


In contrast, Backward Search Algorithms take as a starting point the set of states of possible attacks S, from which a chain of possible predecessors is iteratively constructed. The search checks whether sinit is part of these predecessor states. If so, then the desired property does not hold, since there is a possible state sequence leading from sinit to a state in S, and thus to a possible attack.

The closure of states is infinite for both forward and backward searching algorithms, although the reasons are different. In forward search infinitely many states can be reached from the starting point sinit, whereas in backward search the set S contains infinitely many states. In general, the set S contains more information about states than the initial state sinit, since states in S include prerequisites such as the adversary's knowledge of certain terms or the claim that particular events must have been executed before.

Accordingly, applying a backwards search approach, starting from S, is more suitable when it comes to infinite-state models. Conversely, when dealing with finite state spaces it is simple and straightforward to conduct a forward search starting from sinit.

With backwards search it is possible to abstract a group of similar states, thereby reducing the state space. The idea of invariants can be used with backtracking to abstract a group of states into one; this subject will be discussed later. This abstraction can be made because, as previously said, the set S contains more information about states than the initial state sinit: states in S include prerequisites such as the adversary's knowledge of certain terms or other kinds of information, which restricts the state space and even allows states to be grouped.
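The two search directions can be sketched as follows. This is a toy illustration over an explicit, finite transition relation (the states, successor and predecessor functions are placeholders), not the symbolic search any of the tools actually performs:

```python
from collections import deque

def reachable(start, step):
    """Forward search: iteratively compute all states reachable from
    the initial state via the successor function `step`."""
    seen, frontier = {start}, deque([start])
    while frontier:
        s = frontier.popleft()
        for t in step(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

def attack_reachable_backwards(attack_states, pred, init):
    """Backward search: build the chain of possible predecessors of the
    attack states and check whether the initial state is among them."""
    seen, frontier = set(attack_states), deque(attack_states)
    while frontier:
        s = frontier.popleft()
        if s == init:
            return True       # a sequence init -> ... -> attack exists
        for p in pred(s):
            if p not in seen:
                seen.add(p)
                frontier.append(p)
    return False

# Toy transition system: states 0..5 in a line, attack state 4.
step = lambda s: [s + 1] if s < 5 else []
pred = lambda s: [s - 1] if s > 0 else []
assert 4 in reachable(0, step)                     # forward finds the attack
assert attack_reachable_backwards({4}, pred, 0)    # backward agrees
```

On a finite system both directions give the same verdict; the trade-offs discussed above only appear when the state space is infinite.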

2.7 Theorem Proving vs Model Checking

When building a tool for security protocol verification, two main types of verification arise, each with advantages and disadvantages with respect to the other.

Several current protocol verification tools try to circumvent undecidability in different ways. Automatic tools either sacrifice soundness (providing false positives) or completeness (missing some attacks), or simply may not always terminate. Alternatively, tools may be interactive and require user guidance, for example, to construct interactively a proof that the protocol is correct.

These two main types of tool are addressed here.

2.7.1 Theorem Proving

Theorem proving is the use of automated techniques to provide mathematically sound proofs of theorems. Theorem provers are programs that provide such automated support. An example of a theorem prover is Maude (Escobar, Meadows, and Meseguer 2009), which is based on rewriting logic.

Theorem provers are connected to what is called a proof system. A proof system, besides defining the rules of inference for deriving theorems, also defines the axioms which are going to be used. Using a theorem prover, one formalizes the system (the agents running the protocol along with the attacker) as a set of possible communication traces. Afterwards, one states and


proves theorems expressing that the system in question has certain desirable properties, i.e. each system trace satisfies properties such as authentication or secrecy.

Some theorem provers require interaction with the user. The prover will possess a set of proof tactics that it can apply, but the user may be required to choose among them, or even suggest tactics of his own. For example, the user can define whether to adopt a forward or backward approach. Some theorem provers also function as proof checkers, in which the user himself writes the proof and uses the theorem prover to check its validity. The main drawback of this approach is that verification is quite time consuming and requires considerable expertise.

Moreover, theorem provers provide poor support for error detection when the protocols are flawed. Theorem proving has been applied to several protocols, including the Needham-Schroeder-Lowe protocol.

2.7.2 Model Checking

The second kind of verification centers on the use of model checkers, which are fully automatic. Three classes of this kind of tool can be identified:

• Those which attempt verification (proving a protocol correct)

• Those that attempt falsification (finding attacks)

• Hybrids that attempt to provide both proofs and counterexamples

The first class of tools, which focus on verification, typically rely on encoding protocols as Horn clauses and applying resolution-based theorem proving to them (without termination guarantees) (Blanchet 2011).

In contrast to verification, the second class of tools detects protocol errors, more commonly described as attacks, using model checking or constraint solving. Model checking attempts to find a reachable state where some supposedly secret term is learnt by the intruder, or in which an authentication property fails. Constraint solving uses symbolic representations of classes of such states, using variables that have to satisfy certain constraints (e.g. the term {m}k must be derivable from the intruder knowledge).

To ensure termination, these tools usually bound the maximum number of runs of the protocol that can be involved in the attack. Therefore, they can only detect attacks that involve no more runs of the protocol than the stated maximum. The AVISPA-related tools (TheAVISPATeam 2006) can be inserted into this class of model checking tools, since they not only use constraint solving algorithms to find attacks (CL-Atse (Kosmatov 2005)), but also use symbolic techniques to perform protocol falsification as well as bounded analysis (OFMC (Basin, Modersheim, and Vigano 2005)).

In the third class, attempts to combine model checking with elements from theorem proving have resulted in backward-search-based model checkers. These use pruning theorems, resulting in hybrid tools that in some cases can establish the correctness of a protocol (for an unbounded number of sessions) or yield a counterexample, but for which termination cannot be guaranteed. Scyther falls into this last category, but is guaranteed to terminate, unlike Athena, which also falls into this category of protocol verification tools.


2.7.3 Comparison

Verification by model checking has gained popularity in industry because the verification procedure can be fully automated and counterexamples are automatically generated (by certain model checking tools) if the property being verified does not hold.

Model checking tools have been the most used and developed, since they are easy to use and do not involve the complex and long manual proofs found in theorem proving. A key difference between the theorem proving approach and the model checking one is that theorem provers do not need to exhaustively visit the program's state space to verify properties.

Consequently, a theorem prover approach can reason about infinite state spaces and state spaces involving complex datatypes and recursion. Theorem provers search for proofs in the syntactic domain, which is typically much smaller than the semantic domain searched by model checkers. Hence, theorem provers are well-suited for reasoning about “data-intensive” systems.

While theorem provers have distinct advantages over model checkers, namely in the superior size of the systems they can be applied to and their ability to reason inductively, deductive systems also have their drawbacks. The proof system of a theorem prover for a system of practical size can be extremely large. Moreover, as noted above, the involved verification is quite time consuming, requires considerable expertise, and offers poor support for error detection when the protocols are flawed.

Now, taking into account the fundamentals that were presented in this chapter, it is time to analyse and compare protocol verification tools, to get a notion of what each of these tools does to verify whether a protocol satisfies a certain property.


3 Tool Examples

The expectations of life depend upon diligence; the mechanic that would perfect his work must first sharpen his tools.

– Confucius

3.1 Overview of the Tools

In this section several symbolic and computational tools are analysed and a comparison is made between them, taking into account some of the aspects described in the previous chapter. As the reader will see, some of these tools implement notions that are going to be reused in the implementation of the tool related to this thesis. This will be addressed later.

3.1.1 AVISPA

The AVISPA Tool (TheAVISPATeam 2006) is equipped with a web-based graphical interface that supports the editing of protocol specifications and allows the user to select and configure the different back-ends of the tool. If an attack on a protocol is found, the tool displays it as a message-sequence chart.

The current version of the tool integrates the following four back-ends, which take the same input language, called HLPSL:

• CL-Atse: (Version: 2.2-5) (Kosmatov 2005) Constraint-Logic-based Attack Searcher that applies constraint solving with simplification heuristics and redundancy elimination techniques.

• OFMC: (Version of 2006/02/13) (Basin, Modersheim, and Vigano 2005) The On-the-Fly Model-Checker employs symbolic techniques to perform protocol falsification as well as bounded analysis, by exploring the state space in a demand-driven way. OFMC implements a number of optimizations, including constraint reduction, which can be viewed as a form of partial order reduction.

• Sat-MC: (Version: 2.1, 3 April 2006) (Armando and Compagna 2004) The SAT-based Model-Checker builds a propositional formula encoding all the possible traces (of bounded length) of the protocol and uses a SAT solver.

• TA4SP: (Version of AVISPA 1.1) (Boichut, Heam, and Kouchnarenko 2005) Tree Automata based on Automatic Approximations for the Analysis of Security Protocols, which approximates the intruder knowledge by using regular tree languages and rewriting to produce under- and over-approximations.


The first three AVISPA back-ends (CL-Atse, OFMC and Sat-MC) take a concrete scenario (as required by the HLPSL language) and consider all traces of Scen(Sc). The last AVISPA back-end, TA4SP, also takes an HLPSL scenario, but verifies the state space that considers any number of repetitions of the runs defined in the scenario.

A very important aspect of AVISPA is the “lazy intruder” (Basagiannis, Katsaros, and Pombortsis ). The “lazy intruder” is used to avoid an explicit enumeration of the possible messages the intruder model can generate, by storing and manipulating constraints about what must be generated. The resulting symbolic representation is evaluated in a demand-driven way, and this approach reduces the search tree without excluding any attacks.

AVISPA Characteristics

• AVISPA is an easy-to-use tool to verify the security properties of ad hoc routing protocols.

• A web interface is provided to give an easy way to use the AVISPA tool without installing any other software to support it.

• AVISPA requires the protocol specification to be written in the HLPSL language.

• The same specification can be validated by four tools: OFMC, CL-AtSe, SATMC and TA4SP. Hence the verification results are more reliable and meaningful to the user.

• If one of these validates the protocol then the user has no need to use the other tools.

• AVISPA gives details about whether a protocol is safe or not. If not, it also gives the trace of the attack found, indicating a secrecy attack or an authentication attack.

• It is only possible to run the protocol for a bounded number of sessions (except with the TA4SP back-end, which considers unbounded repetitions of the runs in the scenario).

• Outputs the result of the analysis, stating whether the input problem was solved (positively or negatively), the available resources were exhausted, or the problem was not tackled for some reason.

3.1.2 Athena

Athena (Song 1999) is an automatic checking algorithm that incorporates a logic that can express security properties including authentication and secrecy. Basically, the protocol specifications are represented by well-formed formulas, as is the property to be verified. For example, if the evaluation procedure terminates for the secrecy “formula”, it will generate a counterexample if the formula is false, or provide a proof if the formula is true.

Even when the procedure does not terminate when arbitrary configurations of the protocol execution are allowed, termination can be forced by bounding the number of concurrent protocol runs and the length of messages, as is done in most existing model checkers. So Athena explores Scen(Sc) by default, but since it also allows bounding the number of runs, it also explores MaxRuns(P, n), where n is a given number. Athena also uses several state space reduction techniques.

Together with backward search and other techniques, Athena naturally avoids the infinite state space explosion problem commonly caused by asynchronous composition and symmetry


redundancy. Athena also has the advantage that it can easily incorporate results from theorem proving through unreachability theorems.

The state transition in Athena is based on goal-bindings, avoiding in this way the state space explosion caused by asynchronous composition. A goal represents a term from a message that is received (e.g. sender, receiver, nonce). A goal-binding represents the send action of that term. By using the unreachability theorems, it can be proved that a certain state is unreachable, and with this the state space can be pruned at an early stage. A state is unreachable if there is no protocol execution containing it.

For practical purposes, it is necessary to express such theorems as computable predicates σ, that is, if σ(s) is true, then the state s is unreachable. Thus, it is simply eliminated from the set of states immediately.
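This kind of predicate-based pruning can be sketched as follows; the states, the predecessor relation and the predicate sigma are all hypothetical placeholders, used only to show where the pruning happens in a backward search:

```python
from collections import deque

def backward_search(attack_states, pred, is_initial, sigma):
    """Backward search with pruning: sigma(s) is a computable predicate
    that soundly certifies s is unreachable (an 'unreachability
    theorem'), so s is dropped without exploring its predecessors."""
    seen, frontier = set(attack_states), deque(attack_states)
    while frontier:
        s = frontier.popleft()
        if sigma(s):
            continue                      # pruned: provably unreachable
        if is_initial(s):
            return True                   # the attack state is reachable
        for p in pred(s):
            if p not in seen:
                seen.add(p)
                frontier.append(p)
    return False

# Hypothetical predecessor relation as a table: state 5 (the attack) has
# candidate predecessors 4 and 3, but suppose a theorem shows state 3
# can never occur in any protocol execution.
pred_table = {5: [4, 3], 4: [], 3: [0]}
pred = lambda s: pred_table.get(s, [])
sigma = lambda s: s == 3                  # the 'unreachability theorem'
is_init = lambda s: s == 0

assert not backward_search({5}, pred, is_init, sigma)        # attack ruled out
assert backward_search({5}, pred, is_init, lambda s: False)  # spurious path kept
```

The second call shows what happens without the theorem: the search follows the path through state 3 and reports a (spurious) attack.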

Athena Characteristics

• Athena can reason about infinite state space problems.

• When the evaluation procedure terminates, Athena finds a mapping from the infinite state space to a finite state space while preserving the validity of well-formed formulae.

• Athena may force termination by bounding the number of runs involved.

• By analysing the finite state space, Athena can derive a proof that a security property holds under any configuration of the protocol execution. If the property does not hold, it returns false.

• Uses backward search instead of forward search.

• Uses symbolic state transitions instead of explicit state search. States can be naturally grouped. In Athena, a state can contain free variables. A state with free term variables represents a class of variable-free states. A state transition between two states which contain free variables represents a set of state transitions between two variable-free states. This type of state structure is more compact and efficient since it groups states into a single class.

• The state transition in Athena is based on goal-bindings.

• Uses unreachability theorems to prove early that a state is unreachable, hence pruning the state space.

3.1.3 Casper-FDR

Casper/FDR (Version: Casper 1.11 alpha-release, FDR 2.83) (Aiash, Mapp, Phan, Lasebae, and Loo 2012) is a tool that is based on the usage of the Casper tool (Lowe 1997), which translates protocol descriptions into the process algebra CSP (Communicating Sequential Processes), and the CSP model checker FDR (Failures-Divergence Refinement). It uses forward search to go through the state space.

Gavin Lowe developed the Casper/FDR tool to model security protocols; it accepts a simple and human-friendly input file that describes the system and compiles it into CSP code, which is then checked using the FDR model checker. This tool has been used to model


communication and security protocols. Lowe used Casper/FDR to find the man-in-the-middle attack on the Needham-Schroeder protocol (Lowe 1996).

Casper/FDR works in the following way:

• Each agent taking part in the protocol is modelled as a CSP process.

• Then, the most general intruder who can interact with the protocol is also modelled as a CSP process.

• The resulting system is tested against specifications representing desired security properties.

• For that purpose, FDR searches the state space to investigate whether any insecure traces can occur.

• If FDR finds that the specification is not met, then it returns the trace that does not satisfy the specification. This trace corresponds to an attack.

Casper-FDR Characteristics

• It accepts a simple and human-friendly input file that describes the system and compiles it into CSP code, which is then checked using the FDR model checker.

• The input file given by the user contains the intruder's identity and what initial knowledge the intruder has.

• Based on the input file that is given, Casper reports whether an attack occurs or not.

• Uses forward search to explore the state space.

• Suffers from the state explosion problem.

• Provides support for analysing systems of unbounded size.

3.1.4 ProVerif

ProVerif (Version: 1.13pl8) (Blanchet, Smyth, and Cheval 2014) is a tool that analyses an unbounded number of runs by using over-approximation and represents protocols by Horn clauses. This tool accepts two kinds of input files: Horn clauses and a subset of the Pi-calculus.

If the reader doesn’t recall what a Horn clause is, basically it is a clause (a disjunction ofliterals) with at most one positive literal. An example of a Horn clause would be somethinglike this: human(X) ∨mortal(X). This clause would lead to the following resolution:

¬human(X) ∨ mortal(X) ≡ ∀X(¬human(X) ∨ mortal(X)) ≡ ∀X(human(X) → mortal(X))

The final formula means that all humans are mortal.

So, going back to ProVerif, it represents the specification of the protocol that is being verified, the messages that are exchanged, and the attacker's abilities and knowledge as Horn clauses, including the property to be verified.


To the resulting set of clauses a resolution algorithm is applied. In a resolution algorithm, what one actually tries to derive is the negation of the property one wants to prove. So the starting point for the resolution algorithm is the negated clause that represents the property being verified. Next, the clauses from the set that was previously obtained are repeatedly applied, until one of these two cases happens: if the empty clause {} is reached, then by contradiction it is proved that the property holds; if the empty clause cannot be deduced and all clauses have been applied, then the property does not hold and an attack has occurred.
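Very loosely, the flavour of such clause-based derivation can be illustrated with a toy saturation loop over ground Horn clauses (this is a deliberate simplification: ProVerif works with first-order clauses and a refined resolution strategy, and the predicate names below are made up):

```python
# A clause is (head, body): derive `head` once every atom in `body` is
# known; facts have an empty body. Saturation repeatedly applies the
# clauses until no new fact can be derived.
def saturate(clauses):
    facts = {head for head, body in clauses if not body}
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if head not in facts and all(b in facts for b in body):
                facts.add(head)
                changed = True
    return facts

# Attacker-style clauses: the attacker knows the key k, has seen the
# ciphertext {m}k, and decryption yields the plaintext.
clauses = [
    ("att_k", []),                          # attacker knows k
    ("att_enc_m_k", []),                    # attacker saw {m}k
    ("att_m", ["att_enc_m_k", "att_k"]),    # {m}k and k give m
]
assert "att_m" in saturate(clauses)         # secrecy of m is violated
```

If the attacker fact for the secret is never derived at saturation, the over-approximation guarantees that no attack on that secret exists.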

ProVerif performs this resolution process also with the help of the Pi-calculus and probabilities, to optimise the results and the assurance of attack discovery. ProVerif takes a description of a set of processes, where each defined process can be started any number of times. The tool uses an abstraction of fresh nonce generation, enabling it to perform unbounded verification for a class of protocols.

In conclusion, given a protocol description, one of four things can happen. First, the tool can report that the property is false and yield an attack trace. Second, the property can be proven correct. Third, the tool reports that the property cannot be proved. Fourth, the tool might not terminate. ProVerif uses a backward search in order to determine whether a state is reachable or not.

ProVerif Characteristics

• The protocol is modelled using Horn clauses or Pi-calculus.

• It generates the following possible outputs: the property is true; the property is false and an attack trace is generated; the property cannot be proven (a false attack was found); the tool might not terminate.

• A step-by-step trace is generated explaining the run and the attack.

• When an attack occurs, a trace is generated only for the property which is checked.

• The communicating parties need to be modelled as processes.

• It checks only those attacks for which a “query” has been specified in the code. An example of a “query” would be something like “is term x hacked?”. ProVerif would then verify whether x satisfies the secrecy property.

• It is possible to run the protocol only for an unbounded number of sessions.

3.1.5 StatVerif

StatVerif (Arapinis, Ritter, and Ryan 2011) is a tool that extends ProVerif with constructors for explicit state, including assignments, in order to be able to reason about protocols that manipulate global state.

This notion of global mutable state poses a problem for existing protocol verification techniques, because those techniques often make abstractions that will introduce false attacks when state is considered.


For correct functionality and integration, the ProVerif compiler was extended to a compiler for StatVerif: it takes processes written in the extended language and generates the corresponding Horn clauses, upon which ProVerif can then be run. This translation is carefully engineered to avoid the false attacks mentioned above.

StatVerif Characteristics

• StatVerif is an extension of ProVerif that takes global state into account in the verification of security protocols.

• Allows the study of protocols that manipulate state in an intuitive high-level language.

• Includes locked sections to enable sequences of state manipulations to be written conveniently and correctly. Locked sections are similar to critical sections in concurrent programming: whoever wants to access the related section must first acquire the corresponding lock, and can only enter once that lock is obtained.

• The StatVerif compiler converts processes written in the language to clauses upon which ProVerif can be run.

3.1.6 Scyther

Scyther (Version: 1.0-beta6) (Cremers 2008) verifies bounded and unbounded numbers of runs, using a symbolic backwards search based on patterns. Scyther does not require the input of scenarios. It explores Sys(P) or MaxRuns(P, n): in the first case, termination is not guaranteed; in the second case, even for small n, it can often draw conclusions for Sys(P).

By default Scyther explores MaxRuns(P, 5), is guaranteed to terminate, and one of the following three situations can occur. First, the tool can establish that the property holds for MaxRuns(P, 5) (but not necessarily for Sys(P)). Second, the property is false, yielding a counterexample. Third, the property can be proven correct for Sys(P).

Also, just like Athena, Scyther uses a goal-binding state transition system, which, in combination with the complete characterization algorithm that was explained in the Introduction section, is able to discover which traces are “malevolent”, or, more specifically, which trace patterns lead to a violation of a certain property such as the secrecy property.

Scyther Characteristics

• The protocol is modelled using the “spdl” language.

• It is possible to run the protocol for either bounded or unbounded number of sessions.

• It generates the following possible outputs: the property holds for n runs; the property is false and an attack trace is shown; the property holds for all traces.

• Attack graphs are generated, which give a visual flow graph of a trace and are self-explanatory.

• All possible trace patterns are generated depicting protocol execution.

• The communicating parties need to be modelled as roles.


• Scyther checks the secrecy property of all possible variables, even if there is no explicit claim that the secrecy property holds for certain variables.

• The initial intruder knowledge, along with the legitimate communicating parties, has to be specified.

• In the case of protocols which may suffer from a freshness attack (sending a message that was already sent by another party), a key compromise module needs to be put in the code, specifying that a complete session has been captured and that the intruder also knows the session key.

3.1.7 Tamarin

The Tamarin (Meier, Schmidt, Cremers, and Basin 2013) tool checks the soundness of security protocol theories by translating their validity and satisfiability claims to corresponding constraint solving problems. It uses constraint-solving rules (defined in Tamarin) to detect whether an attack exists.

Just like ProVerif, it uses a resolution algorithm which has as its starting point the claim that represents the property to be proved. It then continuously applies the constraint-solving rules until, once again, one of the following two cases happens: the empty clause is reached, which means that the property holds and satisfies the constraints that were specified; or all the rules have been exhausted and the empty clause cannot be deduced, which means the property does not hold and there is an attack. In this case, a counter-example has been encountered.

It provides both a command line interface and a graphical user interface, which allows the user to interactively inspect and construct attacks and proofs. Tamarin supports both the falsification and the unbounded verification of security properties of protocols formalized in the expressive model described above. This means that Tamarin considers Sys(P) at the beginning of the protocol and then applies constraints to check its validity. The user interface of Tamarin provides both an interactive and an automatic mode. In the interactive mode, the user selects the next constraint-reduction rule to apply to a constraint system using a GUI. To support the user in his verification task, the intermediate constraint systems are visualized as partial protocol executions.

In the automatic mode, the Tamarin prover uses a heuristic to select the next constraint-reduction rule to apply to the constraint system. In general, a constraint-reduction rule reduces a single constraint system to multiple constraint systems. A reduction of a constraint system to the empty set certifies that this system has no solutions, meaning that the property being verified holds. The prover only terminates once it has reduced the constraint system denoting all counter-examples either to the empty set, or to a solved constraint system from which a concrete counter-example can be extracted immediately.

Tamarin Characteristics

• The objective of Tamarin is to assist the user in checking the soundness of security protocol theories by reducing it to constraint-based problems.

• It supports both interactive and automatic modes, for both falsification and unbounded verification of properties of security protocols.


• The input given to the Tamarin prover specifies the protocol (including the adversary's initial knowledge), the considered equational theory, and the expected properties, jointly as a security protocol theory.

• The user interface of Tamarin provides both an interactive and an automatic mode.

• Tamarin provides support for specifying and reasoning about loops and Diffie-Hellman exponentiation.

• Sometimes requires type assertions to effectively analyse a protocol.

• It guarantees termination for most of the existing security protocols.

3.1.8 Maude-NPA

The Maude-NPA protocol analysis tool (Escobar, Meadows, and Meseguer 2009), which relies on unification to perform backwards reachability analysis from insecure states, makes use of two different techniques to handle the combination problem.

One is to use a general-purpose approach to unification called variant narrowing, which, although not as efficient as special-purpose unification algorithms, can be applied to a broad class of theories that satisfy a condition known as the finite variant property (Comon-Lundh and Delaune 2005).

A second technique, applicable to special-purpose algorithms, or to theories that do not satisfy the finite variant property, uses a more general framework for combining unification algorithms. Creating this framework can be quite challenging, since the unification algorithms to be combined need to be compatible, and a non-trivial analysis of their compatibility needs to be made.

Maude-NPA Characteristics

• Used to reason about protocols with different algebraic properties (Cortier, Delaune, and Lafourcade 2006) (cancellation, encrypt-decrypt, exponentiation, Abelian groups (Fuchs 2014)) in the unbounded session model.

• Supports protocols that combine several algebraic properties.

• Uses narrowing modulo equational theories as a symbolic reachability analysis method.

• Finds attacks, or proves their absence, using a backwards search strategy.

• Analyses infinite state systems.

3.1.9 EasyCrypt

EasyCrypt (Barthe, Dupressoir, Gregoire, Kunz, Schmidt, and Strub 2014) is an interactive framework for verifying the security of cryptographic constructions in the computational model. EasyCrypt adopts the code-based approach, in which security goals and hardness assumptions are modelled as probabilistic programs (called experiments or games) with unspecified adversarial code, and uses tools issued from program verification and programming language theory to rigorously justify cryptographic reasoning. Concretely, EasyCrypt supports common patterns of reasoning from the game-based approach, which decomposes reductionist proofs into a sequence (or possibly a tree) of small steps (sometimes called hops) that are easier to understand and to check.

Every proof begins with a game that represents the real protocol. Each time a transformation is applied (according to the involved syntactic and cryptographic rules), a different game is obtained. The last game has a form that implies the security property being proved. It is worth mentioning that for each game the attacker's advantage is calculated, that is, the probability that the attacker violates the security property being proved.

A key challenge for the formalization of cryptographic proofs is to support compositional reasoning. Indeed, many cryptographic systems achieve their functionality by combining (often in intricate ways) different cryptographic constructions, which may themselves be built from several cryptographic primitives.

In order to support reasoning about such cryptographic systems, EasyCrypt features a module system which allows the construction of modular proofs that respect the layered and modular design of the cryptographic system. The module system is also useful for structuring large and complex proofs that involve a large number of game hops and perform reductions at different levels. This module system simplifies both the proofs and the underlying proof system.

EasyCrypt Characteristics

• Computational tool/framework.

• Uses a computational model instead of using the Dolev-Yao model.

• Uses the game-hopping method.

• Calculates the probability of an attack that violates the property that is being verified.

• Uses external tools to justify cryptographic reasoning.

• Uses a modular system that allows the construction of modular proofs.

3.1.10 CryptoVerif

CryptoVerif (Blanchet 2007) is a verification tool that, instead of using a formal model like the Dolev-Yao model adopted by the previously analysed tools, uses a computational model, just like EasyCrypt. CryptoVerif is able to prove secrecy and correspondence properties, and also provides a generic method for specifying properties of cryptographic primitives which handles MACs (message authentication codes), symmetric encryption, public-key encryption and signatures. MACs and signatures are used to ensure the authenticity of the messages that are exchanged. CryptoVerif works for an unbounded number of sessions with an active adversary.

CryptoVerif gives a bound on the probability of an attack (exact security), using what is called a game-hopping method, just like EasyCrypt. The proof is a sequence of games as follows: the first game is the real protocol. CryptoVerif relies on a collection of game transformations in order to transform the initial protocol into a game on which the desired security property is obvious. One goes from one game to the next by syntactic transformations or by applying the definition of security of a cryptographic primitive. The last game is an ideal one where the security property is obvious from the game's shape (the advantage of the adversary is typically 0 for this game).

Games are formalized in a process calculus similar to the pi-calculus, also used in ProVerif. The calculus is purely probabilistic and runs in polynomial time. So, CryptoVerif receives as input a protocol specification, the security assumptions on the cryptographic primitives, and the security properties to prove. It outputs the sequence of games that leads to the proof, a succinct explanation of the transformations performed between games, and an upper bound on the probability of success of an attack.
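The game-hopping bookkeeping that CryptoVerif and EasyCrypt perform can be pictured with a small amount of code. The following Python sketch is purely illustrative: the hop names and probability bounds are invented placeholders, whereas real tools derive these bounds from the security definitions of the primitives.

```python
# Schematic bookkeeping for a sequence-of-games proof (illustrative only;
# hop names and bounds below are hypothetical, not CryptoVerif output).
from fractions import Fraction

class GameSequence:
    def __init__(self, initial_game):
        self.games = [initial_game]
        self.bounds = []          # probability cost of each hop

    def syntactic_hop(self, new_game):
        """A semantics-preserving rewrite: costs nothing."""
        self.games.append(new_game)
        self.bounds.append(Fraction(0))

    def reduction_hop(self, new_game, bound):
        """A hop justified by a primitive's security (e.g. IND-CCA2):
        the adversary's advantage changes by at most `bound`."""
        self.games.append(new_game)
        self.bounds.append(bound)

    def attack_bound(self):
        """Union bound: the advantage in the real game is at most the sum
        of hop costs (the final, ideal game gives advantage 0)."""
        return sum(self.bounds, Fraction(0))

proof = GameSequence("real protocol game")
proof.syntactic_hop("inline encryption calls")
proof.reduction_hop("replace plaintexts by zeros", Fraction(1, 2**40))
proof.reduction_hop("replace nonce by fresh random", Fraction(1, 2**40))
print(proof.attack_bound())   # 2 * 2^-40 = 2^-39
```

Summing the per-hop costs is the essence of exact security: the bound the tool reports is the accumulated cost of all non-syntactic hops.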

As the reader can see, this tool is very similar to EasyCrypt.

CryptoVerif Characteristics

• Computational tool.

• Uses a computational model instead of using the Dolev-Yao model.

• Able to prove secrecy and correspondence properties.

• Uses the game-hopping method.

• Games are formalized using pi-calculus.

• Calculates the probability of an attack that violates the property being verified.

3.2 Tool Comparison

In this section, a comparison is made between the tools presented in the previous section, taking into account the type of state space search they use to overcome certain problems (like non-termination), their support for mutable global state, and the conditions under which they terminate.

3.2.1 State Space Search

AVISPA, Casper/FDR and Scary use forward search to explore the state space. Casper/FDR suffers from the state space explosion problem, which can cause non-termination of the verification. To avoid this problem, AVISPA uses a constraint-based logic to reduce the state space to verify, simplifying the analysis of security protocols. These constraints are related not only to the messages exchanged but also to the intruder knowledge. AVISPA makes use of its integrated back-end tools to give the most accurate answer possible.

Athena uses backwards search to avoid the state space explosion problem. To further mitigate this problem, Athena also uses asynchronous composition. Athena additionally has the advantage that it can easily incorporate results from theorem proving through unreachability theorems. By using the unreachability theorems, it can prune the state space at an early stage, hence reducing the state space explored and increasing the likelihood of termination. Scyther, which is based on Athena, also uses backwards search, together with the complete characterization algorithm, to check if a protocol satisfies a certain property.


ProVerif uses backwards search only to check whether a certain state is reachable or not, that is, whether there is a path from that state to the beginning of the protocol. Tamarin, with its implementation of the constraint solving technique, is able to do a basic backwards reachability analysis, similarly to ProVerif. Maude-NPA also relies on its unification process to perform a backwards reachability analysis.
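As an illustration of backwards reachability, the following toy sketch explores an explicit finite transition system in reverse from an insecure state. This is only a conceptual picture: the actual tools search symbolic states using unification and constraint solving rather than concrete graphs, and the state names below are invented.

```python
# Minimal backwards reachability over a toy finite transition system.
from collections import deque

def backwards_reachable(edges, bad_state):
    """Return all states from which `bad_state` is reachable, by
    exploring the transition relation in reverse."""
    preds = {}                       # invert the transition relation
    for src, dst in edges:
        preds.setdefault(dst, set()).add(src)
    seen = {bad_state}
    queue = deque([bad_state])
    while queue:
        state = queue.popleft()
        for p in preds.get(state, ()):
            if p not in seen:
                seen.add(p)
                queue.append(p)
    return seen

protocol = [("init", "sent_msg1"), ("sent_msg1", "sent_msg2"),
            ("sent_msg2", "done"), ("sent_msg1", "leak")]
# "leak" is the insecure state; an attack exists iff "init" can reach it.
print("init" in backwards_reachable(protocol, "leak"))   # True
```

The payoff of searching backwards is visible even here: states such as "done" that cannot lead to the insecure state are never explored.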

3.2.2 Bounded Verification

AVISPA, Athena, Casper/FDR, Scyther and Scary perform bounded verification. Termination is guaranteed, but it can be guaranteed at best that there exist no attacks within the given bound. So the main problem with bounded verification is to discover the “minimum” bound at which the protocol shows an attack.

Scyther also allows unbounded verification, and Tamarin (which is based on Scyther) supports both the falsification and unbounded verification of security properties of protocols formalized in the expressive model described above. Maude-NPA also allows unbounded verification. ProVerif and StatVerif analyse an unbounded number of sessions using overapproximation with Horn clauses.

CryptoVerif and EasyCrypt, with their game logic, support an unlimited number of instances.

3.2.3 Mutable Global State

The ProVerif protocol analyzer models an ever-increasing set of derivable facts representing attacker knowledge, and is not able to associate those facts with the states in which they arose. For this reason, the tool typically reports many false attacks. That is why StatVerif was created: to handle mutable global state and consequently avoid false attacks. StatVerif includes locked sections, as previously explained, to enable sequences of state manipulations to be written conveniently and correctly. It works better if the state space in question is not infinite.

The AVISPA tool aims to handle mutable state via its OFMC, CL-AtSe and SATMC back-ends. However, the first two of these require concrete bounds to be given on the number of sessions and fresh nonces. Scyther, Tamarin, Maude-NPA, and EasyCrypt can also handle mutable global state, unlike Casper/FDR and Athena.

Right now, CryptoVerif does not support mutable global state, more specifically variables with global state, although work is being developed towards that (Blanchet).

3.2.4 Termination

Another relevant topic is termination, more precisely, the termination of the verification of the security of a protocol. Not every tool presents the same result when analysing a given protocol: a tool may demonstrate that the protocol is secure (that the corresponding cryptographic properties are verified), trace an attack, or not even terminate.

The “non-termination” problem is closely related to the state space associated with the protocol being verified and to the way the analysing tool generates, modifies and “explores” that state space.


When the procedure does not terminate because arbitrary configurations of the protocol execution are allowed (for example, any number of initiators and responders), termination can be forced by bounding the number of concurrent protocol runs and the length of messages, as is done in most existing model checkers, as previously analysed.

Athena does precisely this to guarantee termination but, as previously explained, it can then guarantee at best that there exist no attacks within the bound. Casper/FDR also does this to guarantee termination and bypass the state explosion problem.

3.2.5 Other Observations

Concerning Casper/FDR, the current limitations on models using the data independence techniques that Casper employs are as follows:

• No algebraic equivalences can be defined

• Secure channels cannot be used

• Guessing attacks cannot be modelled

• Crackables are not allowed

Scyther has an advantage over the majority of existing tools: there is no need to specify the so-called scenarios discussed in Section 3.1. In most cases (as, e.g., in Casper/FDR), one tailors the scenario in such a way that it closely resembles the attack, to reduce verification time (or even to make verification feasible). For some tools, the scenarios even partially encode the security properties that one wants to verify (e.g. the constraint solver, Athena).

The creation of such scenarios is subtle and error-prone: if set up incorrectly, attacks can be missed. Although Scyther has many advantages over existing methods, it currently has a smaller scope than some of the other tools in the table. In particular, Scyther currently cannot handle Diffie-Hellman exponentiation or the exclusive-or operator. Thus, for protocols that include such operations, other state-of-the-art tools such as the AVISPA tool set would be an appropriate choice.

3.2.6 Tools Comparison Table

Table 3.1 presents the characteristics of all the analysed tools concerning the topics that were discussed: Bounded Verification (meaning a bounded number of sessions), State Space Search, and Mutable Global State. All of these tools verify the main cryptographic properties: authentication and secrecy. A note worth mentioning is that the verification of protocols performed by most of these tools behaves exponentially in the number of sessions.


Tools         Bounded Verification   State Space Search   Mutable Global State
AVISPA        Yes                    Forward              Yes
Athena        Yes                    Backwards            No
Casper/FDR    Yes                    Forward              No
ProVerif      No                     Backwards            No
StatVerif     No                     Backwards            Yes
Scyther       Yes                    Backwards            Yes
Tamarin       No                     Backwards            Yes
Maude-NPA     No                     Backwards            Yes
CryptoVerif   No                     Backwards            No
EasyCrypt     No                     Backwards            Yes
Scary         Yes                    Forward              Yes

Table 3.1: Tools Comparison


4 Secrecy Proof

A proof is a proof. What kind of a proof? It's a proof. A proof is a proof. And when you have a good proof, it's because it's proven.

– Jean Chretien

4.1 Overview

In this section, we discuss the basis of this thesis: the secrecy proof that the tool developed in this thesis aims to automate. The manual proofs presented throughout this section were performed in (Adao and Bana 2015). For that purpose, the necessary mechanisms related to this proof shall be analysed, including the framework proposed in (Bana and Comon-Lundh 2014), which is going to help in the construction of the prototype (to be analysed in the next chapter), and the axioms necessary for the proof. The presented proof is applied to the NSL protocol, but the logic of the proof itself can be applied to more protocols.

4.2 Framework

In this section we present the framework associated with the technique proposed in (Bana and Comon-Lundh 2014). This framework contains the basic elements to represent a protocol, the information exchanged in an execution, the agents involved, and the related conditions.

4.2.1 Terms and Frames

First, we are going to describe the most basic elements present in a protocol execution. A term is basically a piece of information exchanged in an execution of the protocol. It could be a nonce, a key, an identifier, and so forth. More specifically, terms are built from a set of function symbols F containing a countable set of names N, a countable set of handles H, which are of arity 0, and a finite number of higher-arity symbols. Names denote items honestly generated by agents, while handles denote inputs of the adversary. Let X be an infinite set of variables. A ground term is a term without variables.

A frame is like a deposit of the information exchanged in an execution of a protocol. It contains all the messages that were exchanged and all public information. Technically speaking, frames are sequences of terms together with name binders: a frame φ can be written (νn).p1 ↦ t1, . . . , pn ↦ tn, where p1, . . . , pn are placeholders that do not occur in t1, . . . , tn and n is a sequence of names. fn(φ), the free names of φ, are the names occurring in some ti and not in n. The variables of φ are the variables of t1, . . . , tn.
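The notions of names, handles, compound terms, and frames can be rendered as simple data structures. The following Python sketch is an informal illustration of the definitions above, not the formal framework; all class names are invented:

```python
# Minimal rendering of terms (names, handles, compounds) and frames
# (ν-bound names plus an ordered list of output terms). Sketch only.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Name:            # honestly generated item (nonce, key, ...)
    ident: str

@dataclass(frozen=True)
class Handle:          # adversary input, arity 0
    ident: str

@dataclass(frozen=True)
class Fun:             # application of a higher-arity function symbol
    symbol: str
    args: tuple = ()

@dataclass
class Frame:
    bound_names: set = field(default_factory=set)   # the ν-bound names
    outputs: list = field(default_factory=list)     # p1 ↦ t1, p2 ↦ t2, ...

    def send(self, term):
        self.outputs.append(term)

def free_names(term, bound):
    """Names occurring in a term that are not ν-bound (cf. fn(φ))."""
    if isinstance(term, Name):
        return set() if term in bound else {term}
    if isinstance(term, Fun):
        return set().union(*(free_names(a, bound) for a in term.args), set())
    return set()       # handles contribute no names

n = Name("N1")
frame = Frame(bound_names={n})
frame.send(Fun("enc", (Fun("pair", (n, Name("A"))), Fun("pk", (Name("B"),)))))
print(free_names(frame.outputs[0], frame.bound_names))   # the names A and B
```

Here N1 is bound by ν, so only the agent names A and B are free in the output.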


4.2.2 Formulas

Let P be a set of predicate symbols over terms. P contains the binary predicate = used as t1 = t2, and a family of (n+1)-ary predicates .n used as t1, . . . , tn . t, intended to model the adversary's capability to derive something. We drop the index n for readability. As for the symbolic interpretation of such predicates, any FOL interpretation is allowed that does not contradict the set of axioms defined before executing proofs on a protocol. We will talk about these in the next section.

Symbolic Execution of a Protocol

In this part, we are going to talk about how an execution of a protocol is represented symbolically, namely what each step of the protocol represents and what it contains. More specifically, every step of the protocol represents what we call a symbolic state, and a protocol basically defines the transition system over these states. We move from one state to another through symbolic transitions, which we are also going to describe.

Definition 1 - Given a set of control states Q, together with a finite set of free variables, a symbolic state of the network consists of:

• a control state q ∈ Q together with a sequence of names n1, . . . , nk (those generated so far).

• a sequence of constants called handles h1, . . . , hn (recording the attacker’s inputs).

• a ground frame φ (the agents’ outputs).

• a set of formulas Θ (the conditions that have to be satisfied in order to reach the state).

A symbolic transition sequence of a protocol Π is a sequence:

(q0(n0), ∅, φ0, ∅) → . . . → (qm(nm), 〈h1, . . . , hm〉, φm, Θm)

where, for every m− 1 ≥ i ≥ 0, there is a transition rule

(qi(αi), qi+1(αi+1), 〈x1, . . . , xi〉, x, ψ, s)

such that n = αi+1 \ αi, φi+1 = (νn).(φi · p ↦ sπσi+1), ni+1 = ni ⊎ n, and Θi+1 = Θi ∪ {φi . hi+1, ψπσi+1}, where σi+1 = {x1 ↦ h1, . . . , xi+1 ↦ hi+1} and π is a renaming of the sequence αi into the sequence ni. The renaming is assumed to ensure the freshness of the names n: n ∩ ni = ∅.

Definition 2 - Given an interpretation M, a transition sequence of a protocol (q0(n0), ∅, φ0, ∅) → . . . → (qm(nm), 〈h1, . . . , hm〉, φm, Θm) is valid w.r.t. M if, for every m − 1 ≥ i ≥ 0, M |= Θi+1.
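The shape of Definition 1 can be sketched in code: a symbolic state bundles the control state, generated names, attacker handles, the frame, and the condition set, and a transition extends all of them. This is a loose illustration only — conditions are kept as opaque strings instead of FOL formulas, and the string "frame |> h" merely mimics the derivability condition φi . hi+1:

```python
# Sketch of a symbolic state and one transition step (illustrative;
# all concrete names and guard strings below are hypothetical).
from dataclasses import dataclass, field

@dataclass
class SymbolicState:
    control: str
    names: list = field(default_factory=list)     # n1, ..., nk generated so far
    handles: list = field(default_factory=list)   # h1, ..., hn attacker inputs
    frame: list = field(default_factory=list)     # agents' outputs
    conditions: set = field(default_factory=set)  # formulas Θ to satisfy

def step(state, next_control, fresh_names, output_term, guard):
    """One symbolic transition: read a new handle, emit an output, and
    record the guard plus the derivability condition on the new handle."""
    h = f"h{len(state.handles) + 1}"
    return SymbolicState(
        control=next_control,
        names=state.names + list(fresh_names),
        handles=state.handles + [h],
        frame=state.frame + [output_term],
        conditions=state.conditions | {f"frame |> {h}", guard},
    )

q0 = SymbolicState(control="q0")
q1 = step(q0, "q1", ["N1"], "{N1, A}_eKB", "W(B)")
print(q1.handles, q1.names)   # ['h1'] ['N1']
```

A trace is then just a chain of such steps; validity w.r.t. a model M (Definition 2) would amount to M satisfying every accumulated condition set.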


Initialization: We always set φ0 = (νn).(), where n are the honestly generated items. φ1 contains the output of the initialization, that is, the names and the public keys.

Satisfaction of Predicates, Constraints and FOL Formulas in Executions

M models, among others, the predicate t1, . . . , tn . t. In executions, however, instead of this predicate, a predicate written φ, t1, . . . , tn . t is considered. This is also an (n+1)-ary predicate. φ represents the frame containing the messages that protocol agents sent out, that is, the information available from the protocol to the adversary.

The = predicate refers to equality up to negligible probability, and . means that the adversary is able to compute the right side from the left. Another predicate, W(x), simply tells whether x is the name of an agent. We also use four different constraints: Handle(h), RandGen(x), x ⊆ φ, and x ⊆ ~x. Handle(h) means that h is a handle; RandGen(x) means that x was honestly, randomly generated; x ⊆ φ means that x was part of a message sent out by an agent (i.e. listed in the frame φ); x ⊆ ~x means that x is part of ~x.

Given a first-order model M as before, satisfaction of predicates and constraints in a symbolic execution is defined as the interpretation by M, σ, 〈t1, . . . , tm〉, (n1, . . . , nk), where σ is a substitution as above, t1, . . . , tm are closed terms, and n1, . . . , nk are names. The following elements are targets of interpretation:

Predicates, whose interpretations depend on M, are defined as follows:

• M,σ, 〈t1, . . . , tm〉, (n1, . . . , nk) |= t = t′ if M,σ |= t = t′.

• M,σ, 〈t1, . . . , tm〉, n |= φ, s1, . . . , sn . t if M,σ |= s1, . . . , sn, t1, . . . , tm . t.

• M,σ, 〈t1, . . . , tm〉, n |= W (x) if M,σ |= W (x).

Constraints, whose interpretations do not depend on the model M, are defined as follows:

• Handle(h), for h a closed term:

M,σ, 〈t1, . . . , tm〉, (n1, . . . , nk) |= Handle(h) if h ∈ H

• RandGen(s), for s a closed term:

M, σ, 〈t1 . . . tm〉, (n1 . . . nk) |= RandGen(s) if s ∈ N and M, σ |= s = n1 ∨ . . . ∨ s = nk.

• t ⊆ φ, where t is a closed term:

M,σ, 〈t1, . . . , tm〉, n |= t ⊆ φ if t is a subterm of some ti

• t ⊆ s1, . . . , sn, where s1, . . . , sn and t are closed terms:

M,σ, 〈t1, . . . , tm〉, n |= t ⊆ s1 . . . sn if t is a subterm of some si.

FOL formulas, in which there are no free variables under constraints, whose interpretations are defined recursively as follows:

• Interpretations of θ1 ∧ θ2, θ1 ∨ θ2, and ¬θ are defined as usual in FOL.


• If x is not under a constraint in θ, interpretations of ∀x.θ and ∃x.θ are defined as usual in FOL.

• If x occurs under a constraint in θ, then:

– M, σ, 〈t1 . . . tm〉, (n1 . . . nk) |= ∀x.θ iff for every ground term t, M, σ, 〈t1 . . . tm〉, (n1 . . . nk) |= θ{x ↦ t}

– M, σ, 〈t1 . . . tm〉, (n1 . . . nk) |= ∃x.θ iff there is a ground term t such that M, σ, 〈t1 . . . tm〉, (n1 . . . nk) |= θ{x ↦ t}

Satisfaction at step m: M, (q, 〈h1 . . . hm〉, n, φm, Θ) |= θ iff M, θm, n |= θ.
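Among the elements above, the two subterm constraints (t ⊆ φ and t ⊆ s1, . . . , sn) are the ones whose satisfaction is a purely syntactic check. A minimal sketch, with terms encoded as nested tuples (an invented encoding, not the framework's):

```python
# Illustrative check of the subterm constraints: satisfaction just means
# "t occurs as a subterm of some si". Atoms are strings; compound terms
# are tuples ('f', arg1, ...).

def is_subterm(t, s):
    """True iff t occurs as a subterm of s (including s itself)."""
    if t == s:
        return True
    if isinstance(s, tuple):
        return any(is_subterm(t, arg) for arg in s[1:])
    return False

def satisfies_in_frame(t, frame_terms):
    """t ⊆ φ: t is a subterm of some message the agents sent out."""
    return any(is_subterm(t, ti) for ti in frame_terms)

frame = [('enc', ('pair', 'N', 'A'), ('pk', 'B'))]
print(satisfies_in_frame('N', frame))    # True: N was sent (encrypted)
print(satisfies_in_frame('N2', frame))   # False: N2 never appeared
```

Note that t ⊆ φ holds even when t only occurs under an encryption; derivability (the . predicate) is a separate, model-dependent question.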

4.3 Axioms

This section contains a set of computationally sound axioms that are sufficient to prove the security of actual protocols that use CCA2-secure encryption. The axioms are not at all special to the NSL protocol and can be used in other protocol proofs too. They are entirely modular, so introducing further primitives will not invalidate their soundness; they do not have to be verified again.

An important thing to mention before we explore these axioms is that all of them were proven to be computationally sound. What does computationally sound mean? As previously explained in Section 2.2.4, computational soundness is basically the combination of both symbolic and computational models to get a symbolic proof that implies computational security. As stated in Chapter 1, our tool aims to be a fully automated prover that does precisely this, automating the technique proposed in (Bana and Comon-Lundh 2014) and thereby dealing with non-Dolev-Yao and computational attacks.

With this in mind, let’s go through the axioms.

4.3.1 CipherText Indistinguishability

Ciphertext indistinguishability is a property of many encryption schemes. Intuitively, if a cryptosystem possesses the property of ciphertext indistinguishability, then an adversary is unable to distinguish pairs of ciphertexts based on the messages they encrypt.

A cryptosystem is considered secure in terms of indistinguishability if no adversary, given an encryption of a message randomly chosen from a two-element message space determined by the adversary, can identify the message choice with probability significantly better than that of random guessing (1/2). There exist several instances of this property, which are the following:

• IND-CPA - A cryptosystem is indistinguishable under chosen-plaintext attack if every probabilistic polynomial-time adversary has only a negligible “advantage” over random guessing.

• Indistinguishability under non-adaptive and adaptive Chosen Ciphertext Attack (IND-CCA1, IND-CCA2) uses a definition similar to that of IND-CPA. However, in addition to the public key (or encryption oracle, in the symmetric case), the adversary is given access to a decryption oracle which decrypts arbitrary ciphertexts at the adversary's request, returning the plaintext.

The relevant security property for this thesis is IND-CCA2, as it is the one also used in (Adao and Bana 2015).
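The indistinguishability experiment described above can be sketched as a small simulation. The code below is didactic only: it runs the guessing game against an intentionally insecure "identity" cipher, for which the adversary's advantage is maximal (1/2); for an IND-CPA/CCA-secure scheme the measured advantage would be negligible. All names are invented for illustration.

```python
# Toy indistinguishability experiment: the adversary picks two messages,
# receives an encryption of one chosen at random, and guesses which one.
import random

def ind_experiment(encrypt, adversary, trials=1000):
    """Estimate the adversary's advantage over random guessing."""
    wins = 0
    for _ in range(trials):
        m0, m1 = adversary.choose()
        b = random.randrange(2)                       # challenger's secret bit
        guess = adversary.guess(encrypt(m1 if b else m0))
        wins += (guess == b)
    return abs(wins / trials - 0.5)                   # the advantage

class LeakyAdversary:
    """Wins trivially when 'encryption' leaks the plaintext."""
    def choose(self):
        return "zero", "one"
    def guess(self, ciphertext):
        return 1 if ciphertext == "one" else 0

adv = ind_experiment(lambda m: m, LeakyAdversary())   # identity "cipher"
print(adv)   # 0.5: the adversary always wins against no encryption
```

The IND-CCA2 variant would additionally hand the adversary a decryption oracle (refusing only the challenge ciphertext itself), which is exactly the power the secrecy axioms below must account for.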

4.3.2 Secrecy

The aim of the tool developed in this thesis is to verify whether or not a protocol satisfies the secrecy property. A note worth mentioning is that this tool could later be extended to also check the authentication property, as the related proof is similar to the secrecy one. Going back to the secrecy property, what it says is that if a term is considered secret, then it should only be known by the entity that created the term and the entities with whom the term was shared.

There is a small detail on this point. The secret-sharing process must be made in a secure fashion so that no intruder can discover the secret, so the secret is of course encrypted. This leads to the usage of what are called honest keys. Every agent has an associated key (and, if asymmetric encryption is being used, a public and a secret key) to encrypt and decrypt messages on the network. So if the agent in question is honest, then it possesses an honest key.

This is an important matter because if the agents exchanging a term are honest, then the keys used to encrypt the term are honest, so at first the term is a secret between the exchanging agents. If the term we are trying to prove secret is not encrypted at all, then the term is obviously compromised.

Now that this property and its meaning have been explained, it is time to represent it in the form of an axiom.

4.3.2.1 Secrecy of CCA2 Encryption

With K a key, φ the frame of messages exchanged, R a random number, ~x a sequence of messages, x a term, and y the “secret”, if the encryption scheme is IND-CCA2, then the following formula is computationally sound:

RandGen(K) ∧ eK ⊆ φ ∧ fresh(R; φ, ~x, x, y) ∧ ~x, x, y ⊑ φ ∧ φ, ~x, {x}^R_eK . y −→ dK ⊆ φ, ~x, x ∨ φ, ~x . y

It says that if K was correctly generated, R is fresh, and y can be derived with the help of {x}^R_eK, then either y can be derived without {x}^R_eK, or dK has been sent out or occurs in ~x or x. The condition dK ⊈ φ, ~x, x, y (if moved to the premise) ensures that dK has not been revealed, and it is also an easy way to avoid key cycles, sufficient for the NSL protocol.

Again, ~x, x, y ⊑ φ ensures that ~x, x, y do not contain future information in the form of handles not computable from φ. Note that this axiom may look like it would work for CPA security, but it does not, as honest agents in general can be used as decryption oracles.

Now let us give a more practical example in the context of the proofs. When executing the NSL protocol there are many messages exchanged between Initiators and Responders. Imagine the last message exchanged is of the form {nonce(2)}pk1 (let us name it m2) and is the Initiator's response to a message m1, previously sent. The secret in question here is nonce(2), which is supposed to be fresh. So, following the secrecy axiom just presented, if the key with identifier 1 was randomly generated, then nonce(2) can be derived with m2.

Figure 4.1: NSL Protocol

Now, nonce(2) is apparently secret, but what if nonce(2) can be derived only with m1 and the previous messages and information, that is, the information contained in the frame φ? This would mean that nonce(2) was previously generated and is not fresh, contradicting one of the previous assumptions about nonce(2). This is going to be a very important aspect of the proofs that will be analysed further ahead. Also, the computational soundness of this axiom is proved in (Bana, Adao, and Sakurada 2012).

A very important aspect to take into account is the malleability property that was explored by Warinschi. For this purpose, (Bana, Adao, and Sakurada 2012) also presents the corresponding axiom, which is likewise taken into consideration.

4.3.2.2 Non-Malleability of CCA2 Encryption

If the encryption scheme is IND-CCA2, then the following formula is computationally sound.

RandGen(N) ∧ RandGen(K) ∧ eK ⊆ φ ∧ ~x ⊑ φ ∧ N ⊆ φ, ~x ∧ φ, ~x . y ∧ ∀xR.(y = {x}^R_eK → {x}^R_eK ⊈ φ, ~x) ∧ φ, ~x, dec(y, dK) . N −→ ∃K′.(RandGen(K′) ∧ dK′ ⊆ φ, ~x) ∨ φ, ~x . N


This axiom states that if N and K were correctly generated and the decryption of y contains the random nonce N, but no honest agent ever produced y as an encryption, then either N can be derived without the plaintext of y, or some dK′ has been sent out.

4.3.2.3 Other Important Axioms

Here are some other useful axioms for the proof logic to be discussed in the next section. These axioms were also introduced in (Bana, Adao, and Sakurada 2012).

Equality is a Congruence

This axiom says that equality is a congruence relation: x = x holds, and the substitutability (congruence) property of equal terms holds for predicates (but not necessarily for constraints). This axiom is computationally sound simply because we limit ourselves to computational interpretations of predicates that are invariant if we change the arguments on sets of negligible probability.

Axioms for the Derivability Predicate

The following axioms are trivially computationally sound for what the symbolic adversary can compute:

• Self-derivability: φ, ~x, x . x. The identity function is computable.

• Increasing capabilities: φ, ~x . y −→ φ, ~x, x . y. This states that giving the adversary something it can already compute does not increase its capabilities. In other words, it states that the composition of two PPT algorithms is also a PPT algorithm.

• Commutativity: If ~x′ is a permutation of ~x, then φ, ~x . y −→ φ, ~x′ . y

• Transitivity of derivability: φ, ~x . y ∧ φ, ~x, y . z −→ φ, ~x . z. This is basically a rule of logical transition.

• Functions are derivable: φ, ~x . f(~x). All functions in F are computable in polynomial time.
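These derivability axioms suggest a simple saturation picture: starting from the frame, keep applying computable functions until nothing new is derivable. The sketch below is a toy Dolev-Yao-style closure under pair projections and decryption with known keys — an illustration of the idea only, since the thesis works with first-order axioms over arbitrary interpretations rather than a fixed rule set:

```python
# Forward saturation of a knowledge set under projections and decryption
# (toy closure; term encoding is an invented nested-tuple convention).

def saturate(knowledge):
    """Close a set of terms under pi1/pi2 and dec({x}_eK, dK) = x."""
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for t in list(known):
            derived = []
            if isinstance(t, tuple) and t[0] == 'pair':
                derived += [t[1], t[2]]                 # pi1, pi2
            if (isinstance(t, tuple) and t[0] == 'enc'
                    and ('dk', t[2]) in known):         # decryption key known
                derived.append(t[1])                    # dec({x}_eK, dK) = x
            for d in derived:
                if d not in known:
                    known.add(d)
                    changed = True
    return known

frame = {('enc', ('pair', 'N', 'A'), 'K'), ('dk', 'K')}
print('N' in saturate(frame))   # True: with dK, the adversary derives N
```

Self-derivability, monotonicity and transitivity are implicit here in the use of a single growing knowledge set; the "functions are derivable" axiom corresponds to the per-symbol derivation rules.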

Axioms for Randomly Generated Items

These axioms express relations between RandGen() and the ⊆ constraints that are not purely symbolic. The no-telepathy axiom expresses that items that were randomly generated but not yet sent out are not guessable. It is sound because it is assumed that random generation happens in a large enough space such that guessing is only possible with negligible probability.

So these axioms can be expressed in the following way:

• No-telepathy: fresh(x; φ) −→ ¬(φ . x)

• Fresh items are independent and hence contain no information about other items: fresh(x; φ, ~x, y) ∧ x ⊑ φ ∧ y ⊑ φ ∧ φ, ~x, x . y −→ φ, ~x . y

Equations for the fixed function symbols discussed earlier


dec({x}^R_eK, dK) = x;  π1({x, y}) = x;  π2({x, y}) = y
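These equations can be read as left-to-right rewrite rules. A small illustrative normalizer (with an invented nested-tuple encoding of terms, not the thesis's formal syntax):

```python
# The three equations applied as rewrite rules, e.g. dec({x}^R_eK, dK) = x.
# Terms are tuples: ('enc', plaintext, key_id, randomness), ('pair', x, y).

def rewrite(term):
    """Normalize a term using dec/enc cancellation and pair projections."""
    if not isinstance(term, tuple):
        return term
    term = tuple(rewrite(t) for t in term)           # normalize subterms first
    if (term[0] == 'dec' and isinstance(term[1], tuple)
            and term[1][0] == 'enc'
            and term[2] == ('dk', term[1][2])):      # dec({x}^R_eK, dK) = x
        return term[1][1]
    if term[0] == 'pi1' and term[1][0] == 'pair':    # pi1({x, y}) = x
        return term[1][1]
    if term[0] == 'pi2' and term[1][0] == 'pair':    # pi2({x, y}) = y
        return term[1][2]
    return term

ct = ('enc', ('pair', 'N', 'A'), 'K', 'R')           # {<N, A>}^R_eK
print(rewrite(('pi1', ('dec', ct, ('dk', 'K')))))    # 'N'
```

A mismatched key leaves the dec term stuck, which is exactly the behaviour the equations dictate.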

Scary takes into account all of the previous axioms in the saturation process (to be explained later) of the given set of clauses that represent the protocol specification and the defined trace.

With these axioms in mind, let us finally explore the reasoning behind the proof associated with the technique of Bana and Comon-Lundh, which the tool developed in this thesis aims to automate.

4.4 Proof

In this section, we discuss the reasoning process about the secrecy property that is to be implemented and automated when analysing whether a protocol satisfies that property. This reasoning was first performed in (Adao and Bana 2015).

The aim of the secrecy proof is to show that nonces N generated and sent between honest agents remain secret. Throughout this section we denote honest agents by X, Y, X′, Y′, . . . and arbitrary agents by Q, Q′. H represents the set of all honest agents. The fact that N is a nonce generated and sent by an honest initiator X to an honest responder Y in the NSL protocol can be expressed as:

∃R. {N, X}^{R}_{eKY} ⊆ φ

Every nonce N generated and sent by an honest responder X to an honest initiator Y can be expressed as:

∃hR. {π2(dec(h, dKX)), N, X}^{R}_{eKY} ⊆ φ

Such nonces can be characterized by the condition

CX,Y[N] ≡ RandGen(N) ∧ ((∃R. {N, X}^{R}_{eKY} ⊆ φ) ∨ (∃hR. {π2(dec(h, dKY)), N, Y}^{R}_{eKX} ⊆ φ))

where X, Y ∈ H. By writing C[N] it is stated that X, Y are clear from the context. The secrecy property to be proved can be expressed as ∀N (C[N] → φ ⋫ N), meaning that such nonces cannot be derived by the adversary. It is equivalent to show that its negation,

∃N (C[N] ∧ φ ▷ N)

is inconsistent with the axioms and the agent checks on every possible symbolic trace. Basically, what is being done is a proof by contradiction.

The intruder model considered in these proofs, as previously explained, defines that an intruder can execute any action he wants, except if it contradicts the predefined set of axioms. This way, unlike the tools that were previously analysed (Scary, EasyCrypt and CryptoVerif being the exceptions), the tool developed in this thesis (which automates the secrecy proof being discussed) can deal with computational, non-Dolev-Yao attacks.


Just like it was discussed in previous chapters, the starting point of the proof is a hypothetical state where an attack occurred. This state is then expanded using a backwards state-space approach, where the resulting states take into account the possible messages described in the protocol. So when analysing the NSL protocol, which contains three types of interactions (two for the Initiator and one for the Responder), every time backtracking is used at least three new states are obtained, one state for each interaction. Also during this process it is taken into consideration whether the agents involved in the exchange of messages are honest.
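The backward expansion step can be sketched as follows. The state shape, the action labels, and the function name `expand_backwards` are hypothetical, chosen only to illustrate the branching described above (three interactions times the honesty cases of the peer agent), not the tool's real data structures.

```python
# Hypothetical sketch of one backward expansion step: from a state in which
# the attack is assumed, generate one predecessor state per NSL interaction
# and per honesty case of the peer agent Q.

NSL_ACTIONS = ["initiator_msg1", "responder_msg2", "initiator_msg3"]  # assumed labels

def expand_backwards(state):
    """Return the new states obtained by backtracking one step."""
    successors = []
    for action in NSL_ACTIONS:
        for honest in (True, False):       # honesty of the agent Q involved
            successors.append({"last_action": action,
                               "peer_honest": honest,
                               "frame": list(state["frame"])})
    return successors

start = {"last_action": None, "peer_honest": None, "frame": []}
print(len(expand_backwards(start)))  # 6: three interactions x two honesty cases
```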

So how can the secrecy property be verified and why is backtracking being used?

Simple. We begin from a point where the frame of execution of the protocol, φm, is empty. Recall that a frame contains part of all the information concerning the messages that were exchanged throughout the execution of the protocol up to that particular point. Every time we analyse a message and need to backtrack, we add the previously analysed message to φm. This will be demonstrated in Proof 2.

To prove that any nonce N is secret, N must have been sent in a message that contains it, which means it was generated in that particular message and not before. Now, let's call t the next message that is sent in the same execution of the protocol. What we are going to do now, to check whether N is secret, is to see if we are able to derive N from φm and t, that is:

φm, t ▷ N    (1)

The following step is to check whether, from (1), φm ▷ N can be derived. This means that t can be dropped as irrelevant, since with only φm we can derive N. This represents a contradiction, because for an attack to occur the last message in the involved trace must be an honest encryption; it cannot be a message produced by the intruder. Without this honest message containing the nonce to be compromised, N, the attack cannot be made.

If this clause is deduced, then by induction φ0 ▷ N is derived, which is absurd (by freshness), and consequently leads to the conclusion that φm, t ⋫ N, meaning that the branch currently being analysed is incoherent. With all of this, the following formula (which is the main formula supporting this thesis' work) can be deduced (with ϕ ⊆ φ, and ψ all public information):

If ϕ, t ▷ N ⊢ ψ ▷ N, then φ, t ▷ N ⊢ φ ▷ N    (2)

This proposition is valid for every φ that extends ϕ under the following conditions:

• RandGen(K) ∧ dK ⊈ ψ −→ dK ⊈ φ

• ∀x ⊆ t. RandGen(x) ∧ fresh(x, ψ) −→ fresh(x, φ)

• The freshness axiom is only applied to nonces in t, the message currently being analysed.

• All encryptions use fresh R randoms.

The first condition states that every randomly generated secret key that is not included in the set ψ, which contains all the public information, cannot be included in any frame φ. The second one states that every nonce that is fresh in ψ is also fresh in φ.


Basically, with this we are saying that we work with partial frames ψ that satisfy the previous conditions, and, according to these, we can apply the same reasoning to a "full" frame φ.

So let's recap all that was learnt so far concerning the reasoning to be automated. We begin from a state where an attack occurred, φm+1 ▷ N, with φm+1 empty at the beginning. Backtracking is used to see what are the possible ways to reach this point, by considering all possible interactions of the protocol and the honesty of the agents. For every generated state it is considered that φm, t ▷ N and checked whether φm ▷ N, where φm is all the information that was exchanged until t was sent. So if we previously analysed a message, let's call it tm, and deduced that we needed to backtrack, then tm is added to φm. Now what we try to deduce is not only φm, t ▷ N but φm, tm, t ▷ N. Since φm is empty at the beginning of the proof, we end up with tm, t ▷ N. This process is repeated every time we need to backtrack. If with this we are still not able to drop t, we backtrack once again and add the message to be analysed to the frame. The process repeats itself until we are able to drop t.
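The frame-growing loop just recapped can be sketched as below. Everything here is illustrative: `drop_t` is a stub standing in for the axiom-based deduction that φm ▷ N already holds without t (which the real tool delegates to Scary), and messages are plain strings.

```python
# Schematic sketch of the frame-growing loop. drop_t is a toy stand-in for
# the deduction of phi_m |> N without t; the "honest" prefix criterion below
# is an assumption for illustration only.

def drop_t(frame, t):
    # Assumed toy criterion: the branch closes when the analysed message is
    # an honest encryption (the real check deduces phi_m |> N from axioms).
    return t.startswith("honest")

def backtrack_until_dropped(messages):
    frame = []
    for t in messages:
        if drop_t(frame, t):
            return frame, t          # branch closed: t was irrelevant
        frame.append(t)              # keep t in the frame, backtrack again
    return frame, None               # never dropped: possible attack trace

frame, closed_at = backtrack_until_dropped(["adv_m3", "adv_m2", "honest_m1"])
print(frame, closed_at)  # ['adv_m3', 'adv_m2'] honest_m1
```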

If φm ▷ N cannot be derived and there is no contradiction (in terms of name and nonce equivalence, to be discussed in more detail in Section 5.3.3), then the current branch can continue to be explored as a possible attack path (this is going to be exemplified after this brief explanation). On the other hand, if a contradiction is found, or if, when executing the proof and applying the axioms previously mentioned, φm ▷ N is derived, then the branch is closed.

So, the secrecy property is verified for every branch that was generated, and the objective is to go from φm, t ▷ N and try to reach φm ▷ N or reach a contradiction. If there is a branch where no contradiction was found and φm ▷ N was not derived, going from the starting point of the proof all the way to the state that represents the beginning of the protocol, then there is an attack and that trace represents how the attack can be made.

Now let's show a practical example of this proof applied to the NSL protocol.

Legend for Proof Comprehension

• Increasing Capabilities Axiom – IC

• Functions are derivable Axiom – FD

• Transitivity of derivability – T

• Non-malleability Axiom – NM

4.4.1 Proof 1

Like it was previously said, the proof begins from a state where it is supposed that an attack occurred and that the secrecy property is violated, so what is going to be done is to check this property for every trace that is generated.

From this state it is supposed that φm, t ▷ N. But what is the possible value of t? For the sake of not repeating similar proofs, let's consider only the case where t ≡ {N1, X}^{R1}_{eKQ}, with an arbitrary agent Q, generated nonce N1, and freshly generated randomness R1. This corresponds to the first message sent by an initiator X of the NSL protocol.

In this case, two subcases are obtained taking into account the honesty of agent Q. So if:


Figure 4.2: t ≡ {N1, X}^{R1}_{eKQ} is the first message sent by the Initiator X

• Q ∈ H. Applying the secrecy axiom with φm, {N1, X}^{R1}_{eKQ} ▷ N and dKQ ⊈ φm, N1, X, one gets the intended result φm ▷ N, and this case is closed.

• Q ∉ H. Notice that N1 ≠ N, as N was generated in a session between two honest users (it satisfies C[N]) and N1 was generated in a session with Q ∉ H.

In the second subcase, N1 ≠ N; otherwise, as φ0, N ▷ N holds by self-derivability, φ0, N1 ▷ N would also hold by the congruence of =, and then φ0 ▷ N by the axiom for freshly generated items, but that contradicts the no-telepathy axiom. Then,

i. φm, {N1, X}^{R1}_{eKQ} ▷ N    by (1)

ii. φm, {N1, X}^{R1}_{eKQ}, N1, X, eKQ, R1 ▷ N    by IC(i)

iii. φm, N1, X, eKQ, R1 ▷ {N1, X}^{R1}_{eKQ}    by {·}-FD

iv. φm, N1, X, eKQ, R1 ▷ N    by T(iii, ii)

v. φm, N1, eKQ, R1 ▷ N    X is public in φ0 (iv)

vi. φm, N1, R1 ▷ N    eKQ public in φ0 (v)

vii. φm, N1 ▷ N    by freshness of R1 (vi)

viii. φm ▷ N    by freshness of N1 (vii) and N1 ≠ N

In this case, φm ▷ N can be derived, which means that the message t is irrelevant and so this branch is closed.

4.4.2 Proof 2

Before going through the explanation of the prototype of this tool, let's analyse one more case, which is a little more complex than the one previously analysed.

Let the message t be equal to {π2(dec(h3, dKX))}^{R3}_{eKQ}, that is, let t be the last message sent by the Initiator X in response to the message previously sent by the Responder. Let's execute the proof for this case. The reader will notice it is somewhat similar to the previous proof, with a small difference at the end (h represents a handle).

i. φm, {π2(dec(h3, dKX))}^{R3}_{eKQ} ▷ N    by (1)


ii. φm, {π2(dec(h3, dKX))}^{R3}_{eKQ}, π2(dec(h3, dKX)), eKQ, R3 ▷ N    by IC(i)

iii. φm, π2(dec(h3, dKX)), eKQ, R3 ▷ {π2(dec(h3, dKX))}^{R3}_{eKQ}    by {·}-FD

iv. φm, π2(dec(h3, dKX)), eKQ, R3 ▷ N    by T(iii, ii)

v. φm, π2(dec(h3, dKX)), R3 ▷ N    eKQ public in φ0 (iv)

vi. φm, π2(dec(h3, dKX)) ▷ N    by freshness of R3 (v)

vii. φm, dec(h3, dKX) ▷ N    by function π2 (vi)

viii. φm, h3 ▷ N    by hypothesis

ix. φm ▷ N or ∃x′R′. (h3 = {x′}^{R′}_{eKX} ∧ {x′}^{R′}_{eKX} ⊆ φm)    by NM(viii, vii)

Figure 4.3: t ≡ {π2(dec(h3, dKX))}^{R3}_{eKQ} is the last message sent by the Initiator X

The first immediately implies the result, so let's consider the second. By the hypothesis that t ≡ {π2(dec(h3, dKX))}^{R3}_{eKQ} and the derivation above, it follows that, for n ≡ π2(dec(h3, dKX)),

x′ = dec(h3, dKX) = 〈N1, n, Q〉 and φm, x′ ▷ N    (3)

Let's then analyse who could have sent {x′}^{R′}_{eKX}, as it is in the frame φm. One particular subcase generated from this case is important to dissect.

4.4.3 Proof 3

In this subcase {x′}^{R′}_{eKX} ≡ {π2(dec(h′3, dKX′))}^{R′3}_{eKQ′} for some handle h′3, freshly generated nonce N′1, arbitrary agent Q′, and freshly generated randomness R′3 such that φm ▷ h′3, N′1 = π1(dec(h′3, dKX′)), and Q = π3(dec(h′3, dKX′)).

Let’s once again do the proof for this subcase.


i. φm, x′ ▷ N    by (3)

ii. φm, {π2(dec(h′3, dKX′))}^{R′3}_{eKQ′} ▷ N    by congruence of ≡ applied to (i) and x′

iii. φm, dec(h′3, dKX′) ▷ N    by function π2

iv. φm, h′3 ▷ N    by hypothesis

v. φm ▷ N or ∃x′′R′′. (h′3 = {x′′}^{R′′}_{eKX′} ∧ {x′′}^{R′′}_{eKX′} ⊆ φm)    by NM(iv, iii)

As the reader can see, the result that was obtained is somewhat similar to the result obtained in Proof 2 above. And the reader might ask: if we continue to check what {x′′}^{R′′}_{eKX′} is equal to, do we end up trying to find what {x′′′}^{R′′′}_{eKX′′} is equal to, and so forth? The answer, unfortunately, is yes. We would end up analysing similar cases and executing similar proofs without getting anywhere.

Figure 4.4: {x′}^{R′}_{eKX} is the last message sent by the Initiator X′

This case isn't conclusive as to whether the trace represents an attack or not, since no contradiction is found and something similar to {x′}^{R′}_{eKX} is continuously deduced, never reaching the clause φm ▷ N, which would allow closing the current branch. This problem occurs in the NSL protocol with precisely the last message from the Initiator, which should contain the nonce that was generated and sent by a certain Responder.

So what is the solution for this problem, that is, how to discover whether a trace with the characteristics that were just presented indeed represents an attack or not? The answer lies in the concept of Invariants.


4.5 Invariants

Just like the reader witnessed in the previous section, there is a special situation in which a branch is continuously expanded, generating similar states and consequently performing similar proofs time and time again. But it is known that this branch cannot go on indefinitely, because the number of messages exchanged in the related execution of the protocol is finite.

This means that there is no certainty whether this branch represents an attack or not, because no contradiction is found and there is no derivation that leads to φm ▷ N. This is where invariants come into play.

When exploring a branch, sometimes the step that is generated is equal to a previous step in the derivation. With this, patterns can be identified throughout the backwards expansion of the branch. These patterns represent invariants. So, since it is known that this invariant will repeat itself when exploring the branch, and that the branch has a finite set of steps, by induction we reach the beginning of the protocol and can stop the backtracking process.

The reason for this type of branch (composed of messages that contain decryptions) being finite is that there will be no more encryptions {xn}^{Rn}_{eKXn} ⊆ φm that could match the decryption present in one of the handles. This leads to the concept of decryption chains. This concept is explained in detail in (Meier, Cremers, and Basin 2010), but in summary a decryption chain is basically a sequence of messages in an execution of a protocol that contains several encryptions nested inside one another that need to be decrypted.

For instance, imagine the following three messages: m1 being {N1}pkX, m2 equal to {dec(h1, dkX)}pkQ, and m3 being {dec(h2, dkQ)}pkQ′. For the intruder to get access to N1, he first needs to decrypt m3. Then, with the result, he discovers that he also needs to decrypt h2 to get access to m2, and finally he needs to decrypt h1 to get access to m1 and obtain N1. This sequence of decrypting encryptions is called a decryption chain. The Invariant algorithm that we are going to present is somewhat similar to the concept of decryption chains.
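The chain above can be modelled as a toy dependency walk. The message names m1, m2, m3 come from the example; the dictionary encoding is purely illustrative.

```python
# Toy model of the decryption-chain example: each message's payload refers,
# via a handle, to the previous message, so the intruder must peel the chain
# from the outside in.

chain = {            # message -> the message its handle points to
    "m3": "m2",      # dec(h2, dkQ) refers to m2
    "m2": "m1",      # dec(h1, dkX) refers to m1
    "m1": None,      # {N1}_pkX contains the target nonce N1
}

def decryption_order(last):
    """Order in which the intruder must decrypt to reach the nonce."""
    order, msg = [], last
    while msg is not None:
        order.append(msg)
        msg = chain[msg]
    return order

print(decryption_order("m3"))  # ['m3', 'm2', 'm1']
```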

So, with the help of invariants, the full trace is obtained and it can be verified whether it represents an attack or not. In the case where the trace represents an attack, one or more axioms might be added to the intruder model if one wants to prevent the attack. This could disallow the intruder from making a step that is included in the previous attack, making it impossible to happen, and, when making the manual proof, a contradiction would be reached, which would close the branch.

As it was previously said in the first chapter, to prevent this attack from happening, the computational property that the attacker is taking advantage of needs to be identified first. With the result of that analysis, the attacker is then limited. The same needs to be done on the symbolic side, to prevent the associated symbolic attack: an axiom that represents the previous computational property needs to be added to the set of predefined axioms. The detection of these conditions is out of the scope of this thesis.

In the case of Proofs 2 and 3 and beyond, the reader shall see that, because the trace is finite, the backtracking process cannot be executed indefinitely and, at a certain point, there will be no more encryptions {xn}^{Rn}_{eKX} ⊆ φm that could match the decryption. When this happens, a contradiction is found and the current branch is ruled out.

Having the concept of Invariants in mind, how is this going to be implemented in the tool?


It's quite simple, actually. Every message exchanged by the Initiator and the Responder is recorded, including its composing fields, namely nonces, names and so forth. Let's call this message recorder M. In the meantime another message, called t, is received. To find an invariant, it is necessary to check whether there is a message in M that is similar to t. For two messages to be similar they must have the same properties, that is, their fields should also be similar. If no such message can be found, the branch is expanded using backtracking. But if a message similar to t is found, then an Invariant has been found and the respective branch is closed.

A question arises from what the reader has just read that needs explanation: when are two messages similar? Basically, when their composing fields are similar. The best way to explain this is to present an example, and for that let's revisit the cases in Proofs 2 and 3.

So in Proof 2, the starting point represents the reception of the second message of the Initiator of the NSL protocol

{π2(dec(h3, dKX))}^{R3}_{eKQ}

Since there are no messages to analyse yet and, as demonstrated in Proof 2, the content of h3 needs to be checked, the current branch is expanded. In the NSL protocol this handle corresponds to a triple composed of two nonces (N and n) and the name of the sender (the name of the Responder of this execution, Q). This triple corresponds to the fields that compose the message sent by a Responder in the NSL protocol.

Fields          h3
Nonce           N
Nonce/Handle    n
Name            Q

Table 4.1: h3 projections comparison

Continuing with the analysis, the next message that is received is once again the second message from another Initiator, that is, a message apparently similar to the previous message that was analysed.

{π2(dec(h′3, dKX′))}^{R3}_{eKQ′}

h′3 is also a triple that appears to be similar to h3, but every field must be analysed to check whether these messages are really similar.

Just by checking the second projection of each handle, it can be concluded that they are not similar, since the second projection of h3 is a nonce while the second projection of h′3 is a handle that corresponds to a triple, namely h3.


Fields          h3    h′3
Nonce           N     N′
Nonce/Handle    n     h3
Name            Q     Q′

Table 4.2: h′3 projections comparison

So no invariant was found, and in the associated proof it is necessary to discover what h′′3 corresponds to, so once again the branch is expanded. The next message received is another second message sent by another Initiator.

{π2(dec(h′′3, dKX′′))}^{R3}_{eKQ′′}

h′′3 is a triple like the previous handles that were analysed. Is this handle similar to the previous ones? Let's compare it with h′3.

Fields          h3    h′3    h′′3
Nonce           N     N′     N′′
Nonce/Handle    n     h3     h′3
Name            Q     Q′     Q′′

Table 4.3: h′′3 projections comparison

The first projections of both handles are similar, since both of them are nonces that are fresh and generated by Initiators (despite the fact that the nonces are different, and the sessions in which they were generated are also different). The third projections of both handles are also similar, because both of them are dishonest names. But what about the second projection? Both of these represent handles that represent triples. So, in conclusion, h′3 and h′′3 are similar and the branch can be closed, right? Wrong!

Despite the fact that the second projections of these handles are in fact other handles, it does not mean that the containing handles are similar, because it is not known whether the contained handles are similar themselves. A comparison between the contained handles is therefore necessary, in this case between h3 (the second projection of h′3) and h′3 (the second projection of h′′3).

This comparison was previously made, and the conclusion reached was that h3 and h′3 were not similar, which means the second projections of h′3 and h′′3 are not similar, and so these handles are not similar. So it is necessary to go one level deeper and check the content of h′′′3 associated with the related proof.

Once more, another second message from another Initiator is received, containing h′′′3, a triple like the ones that were analysed.

{π2(dec(h′′′3, dKX′′′))}^{R3}_{eKQ′′′}


Let's compare it with h′′3. For the first and third projections we conclude that they are similar: both first projections are nonces that are fresh and once again generated by Initiators, and both third projections are names. The second projections of both handles are other handles, more specifically h′3 and h′′3. What is necessary to do now is to compare these two handles.

Fields          h3    h′3    h′′3    h′′′3
Nonce           N     N′     N′′     N′′′
Nonce/Handle    n     h3     h′3     h′′3
Name            Q     Q′     Q′′     Q′′′

Table 4.4: h′′′3 projections comparison

So both the first and the third projections of h′3 and h′′3 are similar: the first projections are both fresh nonces generated by Initiators, and both third projections are names. And, to our surprise, both second projections are handles, which means that the second projections of h′′3 and h′′′3 are similar, which consequently means that h′′3 and h′′′3 are similar. And so the "invariant rule" can be applied and the current branch can be closed, ending the pain of continually making repeated comparisons between handles.

In conclusion, to find invariants, similar handles need to be found. What does this mean? That all of a handle's components are similar to the components of some previous handle that was sent in the execution. In the previous example triples were being analysed, but of course this process works for other types of formats and for other protocols. If similar handles are present in the same frame of messages, then the "looping" branch can be closed.
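The similarity check of this example can be sketched recursively. Everything below is an illustration, not the tool's real data types, and the depth bound of 2 is an assumption introduced here to reproduce the conclusions above (h3 vs h′3 dissimilar, h′′3 vs h′′′3 similar): once both middle fields are handles at the innermost compared level, the containment pattern is taken to repeat.

```python
# Minimal sketch of handle similarity. A handle is modelled only by its
# middle field, since the nonce and name fields always match in kind in the
# example above. The depth bound is an assumption, not the thesis' algorithm.

def similar(a, b, depth=2):
    if a["kind"] != b["kind"]:
        return False                      # nonce vs handle, etc.: dissimilar
    if a["kind"] in ("nonce", "name"):
        return True                       # atoms of the same kind are similar
    if depth == 0:
        return True                       # both handles: pattern repeats
    return similar(a["middle"], b["middle"], depth - 1)

def nonce():        return {"kind": "nonce"}
def handle(middle): return {"kind": "handle", "middle": middle}

h3     = handle(nonce())   # middle field is the nonce n
h3_p   = handle(h3)        # h3'  : middle field is h3
h3_pp  = handle(h3_p)      # h3'' : middle field is h3'
h3_ppp = handle(h3_pp)     # h3''': middle field is h3''

print(similar(h3, h3_p))       # False: nonce vs handle in the middle field
print(similar(h3_pp, h3_ppp))  # True: the handle-in-handle pattern repeats
```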


5 Prototype

Write it. Shoot it. Publish it. Crochet it, sauté it, whatever. MAKE.
– Joss Whedon

5.1 Overview

Finally, after the analysis made in the previous chapters of the main aspects of creating a tool, of the technique proposed in (Bana and Comon-Lundh 2014), and of Scary, all fundamental to accomplishing the objectives of this thesis, it is time to unveil the prototype of the tool and see how it works and the mechanisms used to automate the reasoning presented in the previous chapter.

It is important to note that the main difference between this tool and the Scary tool is that it uses backtracking to explore the state space of protocols.

So, in this chapter, we first give an overview of the tool and its functionalities. Next we describe the input model of the tool. The tool receives a template of a protocol execution, on which the secrecy property will be tested. The tool's input and output are the following:

• Input→ protocol specification. The analysis of this input is going to be discussed later.

• Output → creates a log file with all traces explored, including the ones that represent attacks. If an attack is detected, the respective trace is also printed, including the honesty of the involved agents. If no attack is found, the claim is verified and the tool prints "no attacks were found".

After this analysis, the focus shall be shifted to the flow of execution of the tool. The tool basically works in the following way: it generates the starting point of the proof according to the protocol specification. Then it continuously generates the symbolic representation of the traces (or suffixes of traces). Every time a suffix of a trace is generated, it is sent to Scary. Basically, what we give to Scary is precisely φm, t ▷ N so that it can attempt the deduction of the clause φm ▷ N. What Scary receives and how it performs the deduction process will be detailed later. Scary then gives us the corresponding set of clauses it can deduce. If in this set we can find the clause that represents φm ▷ N, then the trace can be closed and we proceed to the next available trace. Otherwise the tool continues to expand the trace.
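The flow just described can be sketched as a driver loop. Function names, message labels, and the clause string are placeholders, not Scary's actual interface: `toy_deduce` stands in for sending φm, t ▷ N to Scary and reading back the saturated clause set, and `expand` stands in for one backtracking step.

```python
# Schematic driver loop. All names and the toy successor relation are
# assumptions for illustration, not the real tool or Scary interface.

def expand(suffix):
    # Toy successor relation standing in for the protocol's out actions.
    nxt = {(): ["adv_m3"],
           ("adv_m3",): ["honest_m2", "adv_m2"],
           ("adv_m3", "adv_m2"): []}          # [] = reached protocol start
    return [suffix + [m] for m in nxt.get(tuple(suffix), [])]

def toy_deduce(suffix):
    # Pretend the stopping clause phi_m |> N is deduced whenever the last
    # message of the suffix is an honest encryption.
    return {"phi_m |> N"} if suffix and suffix[-1].startswith("honest") else set()

def analyse(deduce):
    branches, attacks = [[]], []
    while branches:
        suffix = branches.pop()
        if "phi_m |> N" in deduce(suffix):
            continue                          # stopping clause: close branch
        extended = expand(suffix)
        if not extended and suffix:
            attacks.append(suffix)            # reached start: attack trace
        branches.extend(extended)
    return attacks

print(analyse(toy_deduce))  # [['adv_m3', 'adv_m2']]
```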

This set of tasks is comprised in an algorithm which we discuss in the next section. After this we describe the architecture of our tool and explain which module performs each task presented in the algorithm.


An important thing to mention is that the responsibility of checking the set of axioms that the intruder cannot violate is transferred to Scary, which already has that mechanism implemented. This is done during the deduction stage previously described. Attack finding is overall in NP (including trace guessing).

This chapter focuses on the part that is executed by our tool; it will not go into detail on the work that is done by Scary. For that, the reader can check (Scerri 2015).

This tool works with asymmetric and symmetric encryption. It supports encryption/decryption and pairing as function symbols, so the class of protocols that the prototype is able to analyse obeys the previous conditions. The reason behind this is that Scary is only able to analyse this type of protocols; moreover, triples are treated as pairs of pairs, meaning that even if a protocol contains triples, we must treat them as pairs of pairs because Scary only knows how to work with those.

5.2 Algorithm

It is finally time to take all the information gained and materialize it into an algorithm where we can see the execution of the reasoning presented in the previous chapter. By now the reader has a general idea of how the tool works. Now we go into more detail and unveil what is done by our tool. This is illustrated in Figure 5.1.

Translate Input File into Scary Language (Input/Object Constructor Module - Section 5.3.1) The first thing that is done is to translate the given input file into the language Scary is able to interpret.

Group out with in Actions (Input/Object Constructor Module - Section 5.3.1) Then, all messages (out actions) of the protocol are associated with the correct restrictions (in actions) they must satisfy. After this process is done, we start analysing the protocol.

Grab Next out Action (Trace Generation Module - Section 5.3.2) The prototype fetches the next out action in the current branch. For example, as previously analysed, the tool grabs the first message of the NSL protocol; if that one was already analysed, it grabs the second one, and so forth.

Generate Instantiation Cases (Trace Generation/Ids Manager Modules - Section 5.3.2) Forthe selected out action, the fields of the contained message are instantiated with the correct ids.Depending on the instantiation of these elements, several traces are generated.

So, in this step, a message is instantiated according to one of the instantiation cases. After this case is analysed, the same message is instantiated according to the next case in line, and so forth until all instantiation cases are analysed.

Construct Node with the obtained Message (Trace Generation Module - Section 5.3.2) After instantiating a message, it is associated with a new node to be inserted in the current branch. This is done so that, when the tool wants to retrace a full trace, namely a trace that represents an attack, it simply goes to the current node and walks through all of its parent nodes.

Construct Message Handle for Invariant Detection (Trace Generation Module - Section 5.3.2) Also, associated with the message, a structure called the Message Handle Structure is constructed, which consists of an association between a handle and its fields. To understand this better, let's recall Proof 2 from Section 4.4. The message there, let's call it t2, is equal to {π2(dec(h3, dkX))}pkQ. A question that arose at the time was: what is the value of h3? It is known that t2 corresponds to the third message of the NSL protocol, so what is the previous expected message equal to? We know that it is equal to a triple that contains a nonce, an element that can be anything, and the identifier of its sender, in this case Q. This structure corresponds to the Message Handle Structure.

Figure 5.1: Prototype's Algorithm

This structure will help to check whether there is an Invariant in the trace being analysed. Simply put, an Invariant is detected in a branch if in the same branch there exist at least two similar nodes, that is, nodes that contain messages with similar properties. The process of Invariant analysis was explained in more detail in Section 4.5. Next, what is done is precisely to check whether there is an Invariant in the trace. If that is the case, the current trace is closed and the tool analyses the next branch.
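One hypothetical way to render the Message Handle Structure is sketched below; the class and field names are illustrative, not the tool's real types.

```python
from dataclasses import dataclass

# Hypothetical rendering of the Message Handle Structure: an association
# between a handle and the fields its plaintext is expected to have.

@dataclass
class Field:
    kind: str        # "nonce", "any" (can be anything), or "name"
    label: str

@dataclass
class MessageHandle:
    handle: str      # e.g. "h3"
    fields: list     # expected triple for NSL's second message

# h3 is expected to decrypt to <nonce N, anything n, sender name Q>.
h3 = MessageHandle("h3", [Field("nonce", "N"), Field("any", "n"), Field("name", "Q")])
print(h3.handle, [f.kind for f in h3.fields])  # h3 ['nonce', 'any', 'name']
```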

Check for nonce/name Contradictions (Clause Coherence Module - Section 5.3.3) If no Invariant is found, the tool checks for contradictions in terms of nonce and name equivalence. This type of contradiction comes from the matching between messages, and will be further analysed in Section 5.3.3. If the tool deduces one of these contradictions with the current information (messages, handles), it closes the current branch.
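A toy version of this coherence check is sketched below; the contradiction criterion shown (one symbol bound to two distinct atoms by message matching) is an assumption for illustration, not the module's actual rules.

```python
# Toy coherence check: matching messages yields equalities between symbols;
# here a branch is flagged as contradictory if one symbol ends up equated
# with two distinct atoms. The criterion is assumed for illustration.

def find_contradiction(equalities):
    """equalities: list of (symbol, atom) pairs collected during matching."""
    bound = {}
    for sym, atom in equalities:
        if sym in bound and bound[sym] != atom:
            return (sym, bound[sym], atom)   # same symbol, two distinct atoms
        bound[sym] = atom
    return None

print(find_contradiction([("Q", "Alice"), ("h", "N1"), ("Q", "Charlie")]))
# ('Q', 'Alice', 'Charlie')
```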

Construct and send handle File to Scary (Scary/Scary Interactor Module - Section 5.3.4) Ifno contradiction is found, then the prototype constructs the file to send to Scary.

Check Scary Answer for Stopping Clause (Scary/Scary Interactor Module - Section 5.3.4) If the prototype finds, in the log file generated by Scary (containing the set of clauses deduced from the given input file), the clause φm ▷ N (the condition analysed in Section 4.4), or a clause from which φm ▷ N can be deduced, then the current branch is closed. However, if the prototype does not find such a clause, the tool checks whether the message being analysed corresponds to the first message of the protocol (meaning we cannot backtrack any further and have reached the beginning of the protocol). If that is the case, the resulting trace goes from the beginning of the protocol to the state where an attack has occurred; this trace corresponds to an attack. The tool then prints out all of the messages that lead up to the attack.

However, if the message analysed does not correspond to the first message of the protocol, the trace is expanded using backtracking. First, though, we must see whether we can apply the non-malleability axiom (presented in Chapter 4) to the current message, as was done in Proof 2 of Section 4.4 to check whether we could backtrack. If we can, just like in Proof 2, then we expand the branch; otherwise the branch might represent a possible attack, because if we cannot backtrack we have no guarantee that the trace does not represent an attack. It might be a false positive, but if it really represents an attack then we are not excluding it.

Going back to Scary, checking Scary's answer is done by the Scary Interactor, and not by Scary itself: the only thing Scary does is deduce all possible clauses from the given input file. With the resulting set of clauses, the prototype then needs to check whether the stopping clause can indeed be deduced.

Attack Detected / Close Branch (Scary/Scary Interactor Module - Section 5.3.4) If a branch is closed or an attack is discovered, the tool checks if there are more instantiation cases associated with the message being analysed. If so, the message is instantiated according to the next instantiation case and the process repeats itself.

Page 75: Automation of Secrecy Proofs for Security of Protocols · Automation of Secrecy Proofs for Security of Protocols Miguel Abreu Malafaia Mendes Belo Thesis to obtain the Master of Science

5.3. TOOL ARCHITECTURE 55

Go to the next branch (Trace Generation Module - Section 5.3.2) If all instantiation cases were explored, then the tool goes to the next branch of the protocol and grabs the next out action in line.

Exit When the tool exhausts all possible branches, it finishes the analysis.

Throughout this section, for every presented task we also indicated the module of the tool that executes it. In the next section we're going to present the architecture of the tool and analyse each composing module.

5.3 Tool Architecture

With the algorithm laid out, it is time to describe the modules that execute each of the presented tasks.

The prototype tool is divided into 6 modules. The first one interprets the input file that contains the protocol specification and constructs the objects needed to execute the related proof (Input Module). The second grabs the objects generated by the Input Module, constructs the secrecy proof, and generates the traces using backtracking (Proof/Traces Module). The third assigns identifiers to all objects (Ids Manager). The fourth is responsible for managing the interaction between the Proof Module and Scary (Scary Interactor Module). In between these tasks, the Clause Coherence Module analyses the clauses obtained after the trace generation stage, looking for contradictions. If a contradiction exists then the current trace is closed; otherwise, the Proof Module sends a request to Scary and receives its answer. The answer is then analysed by the Scary Interactor and, according to it, the respective trace is closed or not.

Figure 5.2 illustrates the architecture of the tool (by modules).

Figure 5.2: Tool Architecture


56 CHAPTER 5. PROTOTYPE

5.3.1 Input/Object Constructor

The input given to the tool is an example of a basic execution of the protocol to be analysed, that is, a template of the protocol. So a grammar representing this needs to be defined; more precisely, a set of basic elements necessary to represent a protocol and its specification. These tasks are executed by the Input/Object Constructor.

Figure 5.3 shows an input file that represents the execution of the NSL protocol.

Figure 5.3: Example of an Input File for the NSL protocol

In every communication protocol there always exist roles. A role corresponds to an execution of an entity of the protocol. In the case of Figure 5.3, the role with the identifier A represents an initiator of the NSL protocol, and the one with the identifier B represents a responder of the NSL protocol.

Associated with a role there are 2 types of interactions: one that illustrates the sending of a message and another that represents the expected content to be received. These interactions are represented by the out and in keywords, respectively.

The out interaction, as previously explained, illustrates the sending of a message from one of the intervening roles to another. That message may or may not be encrypted. If it is encrypted, either asymmetric or symmetric encryption can be used. These two types of encryption are associated with the aenc (asymmetric encryption) and senc (symmetric encryption) keywords.

An aenc or senc action is composed of 3 elements: the message with the elements to be sent, for example the pair composed of the identifier of the sender (name(1), for example) and a nonce (nonce(1)); the key (pk(2)) used to encrypt the message; and a random number (nonce(12)) to represent the randomness used in the process. If the message contains several elements, the pair keyword is used, and the elements of the pair are separated by a "|" as illustrated in Figure 5.3. To access the message components, the projections pi1 and pi2 are used: applying pi1 to a pair yields its first element, and pi2 yields the second. These functions are similar to the ones presented in the axioms section.
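The behaviour of these constructors can be sketched with a small term model (illustrative only: the prototype's internal representation is not shown in the thesis, and the Python names below merely mirror the grammar keywords):

```python
# Minimal sketch of the term algebra described above (illustrative only;
# constructor names follow the input grammar: pair, pi1, pi2, aenc, adec).

def pair(x, y):
    return ("pair", x, y)

def pi1(t):
    # Projection: the first element of a pair.
    assert t[0] == "pair"
    return t[1]

def pi2(t):
    # Projection: the second element of a pair.
    assert t[0] == "pair"
    return t[2]

def aenc(msg, pk_id, rand_id):
    # Asymmetric encryption of msg under public key pk(pk_id),
    # with randomness nonce(rand_id).
    return ("aenc", msg, pk_id, rand_id)

def adec(ct, sk_id):
    # Decryption succeeds only with the matching secret key.
    kind, msg, pk_id, _ = ct
    assert kind == "aenc" and pk_id == sk_id
    return msg

# First message of NSL from Figure 5.3: pair(name(1) | nonce(1))
# encrypted under pk(2) with randomness nonce(12).
m = aenc(pair(("name", 1), ("nonce", 1)), 2, 12)
assert pi1(adec(m, 2)) == ("name", 1)
assert pi2(adec(m, 2)) == ("nonce", 1)
```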

The in interaction is not as straightforward as the out interaction. It represents what the associated role is expecting to receive, defining not only the type of message received but also the associated content. The handle(1) from the first interaction of Figure 5.3 represents the message received from another role it is communicating with. It can be anything: a message, encrypted or not, a nonce, a key.


The next step is to verify whether the content contained in handle(1) fulfils a certain set of conditions. Basically, a condition is a restriction upon an element of the received message. For example, in Figure 5.3 we have the following condition: pi1(adec(handle(1), sk(1))) = name(2). What this condition states is that the first projection of the decryption of handle(1) must be equal to name(2). To decrypt messages encrypted with symmetric or asymmetric encryption, the sdec and adec functions are used, respectively.

Basically, an adec/sdec term is composed of two elements: the variable, or more specifically a handle, and the secret key needed to decrypt it. The result of the decryption process is the content of the handle, if it exists.

All nonces, keys, variables and names have different ids associated with them. With all of this, that is, the associated objects (roles, nonces, names, conditions), we can analyse a protocol using all the previously described mechanisms (the Bana-Comon-Lundh technique, Scary, backtracking).

So the given template specifies the roles that intervene in the execution of the protocol and the messages they send (out actions). These messages might have restrictions, which are expressed through in actions.

With this template, the tool searches the protocol for attacks. If any are found, the messages contained in the trace that represents the attack are output in the correct order. These messages are expressed using the same grammar that was used to express the input file presented in Figure 5.3.

5.3.2 Trace Generation/Ids Manager

At this stage, the tool has all the necessary components to simulate all the possible executions of a protocol. The first thing done is to create the initial state of the "trace tree". As explained, this state represents a hypothetical attack state.

Every state is represented in this tool as a node from a tree. Every node is expressed in thefollowing way:

node(parent, action, handles, children);

The parent field corresponds to the parent node of the current node. The action field corresponds to the action/message exchanged. The handles field contains all the previous actions/handles of the parent nodes. Finally, the children field contains the nodes that have the current node as a parent.
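As a sketch, a node with these four fields could be represented as follows (the field names come from the text; the implementation language and class shape are assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    parent: "Node | None"   # parent node in the trace tree
    action: str             # action/message exchanged at this state
    handles: list           # actions/handles accumulated from parent nodes
    children: list = field(default_factory=list)

    def add_child(self, action):
        # A child inherits the parent's handles plus the parent's own action.
        child = Node(self, action, self.handles + [self.action])
        self.children.append(child)
        return child

# The initial state: initialNode(empty, empty, empty, empty).
root = Node(None, "empty", [])
n1 = root.add_child("out t1")
assert n1.parent is root and n1.handles == ["empty"]
```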

An important mechanism associated with the creation of a node is the assignment of identifiers. Every time a node is created, an action needs to be associated with it. This action contains meta-nonces, meta-names, meta-keys and meta-randoms, and to all of these elements we assign a concrete identifier.

According to the session (the instance of execution of the protocol being analysed) in which this node is included, identifiers are assigned to every nonce, name, random and public key. For that purpose we need the Ids Manager Module, which is responsible for assigning the correct identifiers to the objects created during the trace generation process.
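A minimal sketch of such an ids manager, assuming concrete identifiers are drawn from a global counter and cached per (session, kind, meta-element); all names here are hypothetical:

```python
import itertools

class IdsManager:
    """Assigns concrete identifiers to meta-nonces, meta-names,
    meta-keys and meta-randoms, per (session, kind, meta-id)."""

    def __init__(self):
        self._counter = itertools.count(1)
        self._assigned = {}

    def concrete_id(self, session, kind, meta_id):
        # The same meta element in the same session always maps to
        # the same concrete identifier; a new session gets fresh ids.
        key = (session, kind, meta_id)
        if key not in self._assigned:
            self._assigned[key] = next(self._counter)
        return self._assigned[key]

ids = IdsManager()
a = ids.concrete_id(session=1, kind="nonce", meta_id="N")
b = ids.concrete_id(session=1, kind="nonce", meta_id="N")
c = ids.concrete_id(session=2, kind="nonce", meta_id="N")
assert a == b and a != c   # stable within a session, fresh across sessions
```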


But let's get back to the initial state of the proof, which is represented in the following way:

initialNode(empty, empty, empty, empty);

Now, with the protocol specification received in the input file, we have all the messages exchanged between the roles of the protocol. With these messages, all the coherent traces (those respecting the set of axioms the intruder cannot violate) are generated in a backwards fashion.

To exemplify all of this let’s continue with the NSL example.

As previously said, in a secrecy proof the objective is to show that every nonce N generated and sent between honest agents remains secret. In the case of the NSL protocol, this N is contained in a message that was sent either by an Initiator or by a Responder. Let's call this message t0, exchanged between 2 honest agents. So, beginning the proof for the NSL protocol, there are 3 possible interactions t that could have led to an attack state:

1. For an honest Initiator X, t = aenc((name(X), nonce(1)), pk(Q), nonce(100)), with an arbitrary agent with name(Q), freshly generated nonce(1), and freshly generated randomness nonce(100) (1st message from A to B);

2. For an honest Initiator X, t = aenc(pi2(pi2(adec(handle(2), sk(X)))), pk(Q), nonce(100)) for some handle(2), arbitrary agent Q, freshly generated nonce(1), and freshly generated randomness nonce(100), such that pi1(pi2(adec(handle(2), sk(X)))) = nonce(1) and Q = pi1(adec(handle(2), sk(X))) (last message of the protocol from A to B);

3. For an honest Responder X, t = aenc((name(X), pi2(adec(handle(3), sk(X))), nonce(2)), pk(Q), nonce(100)) for some handle(3), agent Q = pi1(adec(handle(3), sk(X))), freshly generated nonce(2), and freshly generated randomness nonce(100) (first and only message sent from B to A).

For each one of these interactions, a meta-node is created. So right now we have a meta-node containing a message whose elements all have an identifier assigned. Now we must define the honesty of all identifiers related to agents, including whether they're equal to one of the identifiers of the agents that exchanged t0. As mentioned in Section 5.2, several cases, and more precisely several nodes, are generated from here. So with each meta-node several nodes are associated, each corresponding to a different instantiation case. Each node is then analysed individually, until there are no more nodes associated with the current meta-node.

Each time a trace is expanded, another meta-node is generated and associated with the node previously being analysed. The possibilities for the meta-node are, once again, the interactions of the protocol (in this case NSL). Every node generated by the instantiation process contains all the information of the trace to which it belongs.

To give the reader a better notion of what the tool does, let's see the traces resulting from instantiating the message {name(3), nonce(1)}pk(4), associated with the meta-node that represents the first message of the protocol. Let's call this message t1.

Before analysing the possible generated traces, it's important to keep in mind that we're trying to prove that N, the nonce present in the manual proofs, is indeed secret. We know that N was generated in t0. If, for example, N was generated by an Initiator of the NSL protocol, t0 could be the message {name(1), nonce(0)}pk(2), with N equal to nonce(0). As the reader can see, two cases are generated here for the possibilities of t0: it can be a message generated either by an Initiator or by a Responder (in the case of the NSL protocol).

From one of these cases, the possible traces generated from t1 depend on the possibilities of each field of t1. So what are the possibilities of each field? Well, pk(4) corresponds to a key that can be associated with an honest or dishonest agent, or can be equal to pk(2), the key of one of the honest agents that exchanged nonce(0). Next, name(3) can be associated with an honest agent, or be equal to one of the honest agents that exchanged nonce(0), in this case name(1). It cannot be dishonest because, since this message is contained in the frame, it was sent by an honest agent. Finally, nonce(1) can be equal to nonce(0) or not. Figure 5.4 presents all the possible cases generated for a message of the NSL protocol.

Figure 5.4: NSL Case Generation

This is the case for the NSL protocol; for other protocols, as we said, the number of instantiation cases depends on the fields of each message. But basically, every key that encrypts a message can be honest or not, and/or equal to the key of one of the agents that exchanged N. And every agent can be honest or not, and/or equal to one of the agents that exchanged N.

So for each message associated with a meta-node, we first check its composing elements. For each element, we check its instantiation possibilities. Then we generate all the cases covering all combinations of all possibilities for every element of the message being analysed.
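The combination step can be sketched as a cartesian product over the per-field possibilities (an illustration for t1 = {name(3), nonce(1)}pk(4); the labels are hypothetical shorthands, and incoherent combinations would still be pruned afterwards):

```python
from itertools import product

# Instantiation possibilities for each field of t1, following the text:
# pk(4) can be honest, dishonest, or equal to pk(2); name(3) can be honest
# or equal to name(1) (not dishonest, since t1 is in the frame); nonce(1)
# can be equal to nonce(0) or not.
possibilities = {
    "pk(4)":    ["honest", "dishonest", "same-as-pk(2)"],
    "name(3)":  ["honest", "same-as-name(1)"],
    "nonce(1)": ["equal-to-nonce(0)", "different-from-nonce(0)"],
}

cases = [dict(zip(possibilities, combo))
         for combo in product(*possibilities.values())]

assert len(cases) == 3 * 2 * 2   # 12 raw combinations before pruning
```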


In the case of t1, the number of traces generated is equal to the number of combinations of the possibilities of each field of t1.

As the reader can see, X can be equal to pk only if pk is honest. Also, when XSame or pkSame is stated, what is being said is that these elements are equal to the ones (name(1), pk(2)) contained in the message that contains nonce(0), the element to be deduced.

With this information, the file to be given to Scary is constructed and sent. The syntax ofthis file will be explained later. But first, this information must be checked for contradictions.

Before this is done, associated with the message of the current node, what is called the Message Handle Structure is constructed; it consists of an association between a handle and its fields, as explained in Section 5.2.

This structure helps to check if there's an Invariant in the trace being analysed. An Invariant is detected if at least two similar nodes exist in the same branch. The process of Invariant analysis is explained in more detail in Section 4.5. If there's an Invariant in the trace that contains the node we're analysing, the current trace is closed and the tool moves to the next branch.
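Detecting an Invariant, i.e. at least two similar nodes in the same branch, can be sketched as a walk up the chain of ancestors (a simplified illustration; the actual similarity test is the one from Section 4.5, here replaced by plain action equality):

```python
class TraceNode:
    def __init__(self, action, parent=None):
        self.action, self.parent = action, parent

def has_invariant(node, similar=lambda a, b: a.action == b.action):
    # An Invariant exists if the branch (the chain of ancestors)
    # already contains a node similar to the current one.
    ancestor = node.parent
    while ancestor is not None:
        if similar(node, ancestor):
            return True
        ancestor = ancestor.parent
    return False

root = TraceNode("out t1")
mid = TraceNode("out t2", root)
leaf = TraceNode("out t1", mid)   # repeats an earlier node in the branch
assert has_invariant(leaf) and not has_invariant(mid)
```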

Recapping all that was learned from the execution of this module for a protocol:

1. the Trace Generator Module checks for the messages that are "generators" of nonces;

2. from each of these messages we start to analyse all messages of the protocol;

3. we create a meta-node for each message of the protocol;

4. for each meta-node we check all the instantiation possibilities, considering the possible values for each element of the message associated with the meta-node;

5. for each instantiation case we create a node;

6. for each node that is generated (containing the information of the trace in which it is included), we first check for the existence of Invariants in the trace. If no Invariant is found, then with the information of the node we construct the file to send to Scary.

But before this last step is made, we must first verify that there are no contradictions in the information contained in the node.

5.3.3 Clause Coherence

We're going to talk about how Scary is used in more detail later, but basically Scary is going to be used as a clause deducing machine. From a given set of clauses, Scary is indeed able to deduce all possible clauses according to a set of pre-established axioms (described in Chapter 4). Scary is also able to find contradictions according to those axioms. But it is incapable of discovering name and nonce equivalence contradictions.

First of all, what are nonce equivalence contradictions and name equivalence contradictions? This type of contradiction has already been touched upon in the previous section. Generally, for nonce equivalence contradictions, the main focus is the sessions in which the nonces being analysed were generated. Two nonces can only be equivalent if they were generated in sessions of the same type, that is, generated by the same kind of entity.

To see this in action, let’s analyse one of the cases of the NSL protocol.

In this case, the last message of the protocol (tI2) was analysed first. The branch that contains this message is expanded, as no contradictions are found and the stopping clause (φm ▷ N) cannot be deduced. This was shown in Proof 2 in Section 4.4.

Figure 5.5: Case 2.1 - the previous message was sent by one Initiator

One of the possibilities for the message previous to tI2 is the first message of another Initiator (tI1). Let tI1 be equal to {name(X′), nonce(1′)}pk(Q). tI1 is matched against the expected content to be received by the Initiator that generated tI2. So {name(X′), nonce(1′)}pk(Q) is matched with (name(Q), n, nonce(1)).

In this case, when comparing nonce(1′) with nonce(1), it is verified that both nonces were generated by Initiator sessions. But if for some reason one of these nonces had been generated by a Responder session, then these nonces could not be equivalent and a contradiction would be found.

Now for name equivalence. Two names are equivalent only if both names are honest or both are dishonest. If this is not the case when trying to match two names, a contradiction arises. Continuing with the same example, in case 2.1, Q is matched with X′. X′ is honest and Q is dishonest. Conclusion: there's a name equivalence contradiction and this branch can also be closed.
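Both coherence checks just described can be sketched as predicates over simple metadata records (the attribute names are hypothetical; the prototype's actual representation is not given in the text):

```python
def nonces_equivalent(n1, n2):
    # Two nonces can only be matched if they were generated by the same
    # kind of session (e.g. both by Initiator sessions).
    return n1["session_role"] == n2["session_role"]

def names_equivalent(m1, m2):
    # Two names can only be matched if both are honest or both dishonest.
    return m1["honest"] == m2["honest"]

# Case 2.1 of the NSL example: nonce(1') and nonce(1) both come from
# Initiator sessions, but X' is honest while Q is dishonest, so the
# name matching yields a contradiction and the branch is closed.
assert nonces_equivalent({"session_role": "Initiator"},
                         {"session_role": "Initiator"})
assert not names_equivalent({"honest": True}, {"honest": False})
```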

It is important to mention that there's a danger of an element being confused with the concatenation of other elements. In Section 5.5 we're going to analyse attacks that take advantage of the concatenation of elements.

So, in conclusion, the main task executed by the Clause Coherence Module is to discover whether the messages to be sent to Scary reveal nonce or name contradictions. If one of these contradictions is found, then the branch currently being analysed is closed. Otherwise, the input file is constructed and sent to Scary for analysis.

5.3.4 Scary/Scary Interactor

Before explaining how Scary is going to be used, let’s analyse it.

Scary (Scerri 2015) implements and automates the technique proposed in (Bana and Comon-Lundh 2014). It takes into consideration the full state space of the protocol being analysed. To explore the state space it uses a forward search algorithm: it starts from the root of the protocol and continuously explores every possible execution until no more traces can be expanded. As previously said, with this approach all traces of a protocol are generated, which might lead to the expansion of an infinite number of states. To avoid this issue, Scary bounds the number of instances, guaranteeing termination of the analysis of the protocol; that is, the number of states/traces generated is finite.

This leads to a problem addressed earlier: the certainty that an attack does not occur for any number of instances. Scary can determine that with n instances an attack does not exist. But can it be assured with certainty that an attack does not occur with n+1 instances? There's simply no way.

More precisely, the objective of Scary is to decide the following problem: when it receives as input a protocol P, Scary must output, if an attack exists, the corresponding trace on which the attack is possible together with a model of the attack. Otherwise, the protocol is secure.

Scary works in two phases: first, it computes a symbolic representation of all the traces of P and the corresponding sets of formulae; second, it checks, for each set of formulae, whether it is satisfiable with respect to the axioms described in Chapter 4. More specifically, for every trace that is generated, Scary creates a set of formulae describing the trace and the condition being verified. For example, for the secrecy property, a clause is added to the set of formulae of the related trace, a clause that derives a piece of information that should be secret.

To all of this Scary applies the saturation algorithm. The saturation algorithm grabs the set of formulae, which are basically clauses, and with them tries to derive, in this case, the "secret". When applying the saturation algorithm, Scary deduces every possible clause in accordance with the mentioned set of axioms. If Scary deduces that the attacker could have gained knowledge of the secret when it was not supposed to, then there exists an attack; otherwise the trace is completely valid and secure.
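At its core, saturation is a fixpoint computation: inference rules derived from the axioms are applied to the clause set until no new clause appears. The toy sketch below, with a single made-up projection-style rule over string-encoded clauses, only illustrates the fixpoint loop; the real algorithm is described in (Comon-Lundh, Cortier, and Scerri 2013):

```python
def saturate(clauses, rules):
    """Naive saturation: apply each rule to every pair of known clauses
    until the set of clauses stops growing (a fixpoint)."""
    clauses = set(clauses)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for a in list(clauses):
                for b in list(clauses):
                    derived = rule(a, b)
                    if derived is not None and derived not in clauses:
                        clauses.add(derived)
                        changed = True
    return clauses

# Toy rule: from "phi |> pair(x,y)" derive "phi |> x" (projection-style).
def project_first(a, _b):
    if a.startswith("phi |> pair("):
        inner = a[len("phi |> pair("):-1]
        return "phi |> " + inner.split(",")[0]
    return None

result = saturate({"phi |> pair(nonce(0),name(1))"}, [project_first])
assert "phi |> nonce(0)" in result   # the 'secret' becomes derivable
```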

Figure 5.6 presents the route of execution of Scary when it is given a protocol specification.

Figure 5.6: Scary Execution

Now the reader might ask: what is the purpose of using Scary? Why is Scary used, and how? Well, Scary uses the same intruder model considered by the prototype, since it automates the same technique the prototype automates, which means it already handles the set of axioms that the intruder cannot violate, such as the freshness axiom, the functionality axiom and so forth. So why not take advantage of the fact that Scary takes into account the same intruder model, and is able to deduce every possible clause given a set of clauses that represents a trace, and use it to see if we can make the same reasoning that is made in the proofs presented in Section 4.4? That is, see if Scary can deduce the stopping clause φm ▷ N.

Figure 5.7: Scary Input File

Indeed, after the Clause Coherence Module checks for contradictions in the information contained in a constructed node, if no contradiction was found, a new file containing the information of the node (and consequently of the previously analysed nodes) is created. Naturally, this file is written according to the syntax defined by Scary.

An example of this input file is illustrated in Figure 5.7.

This input file represents the case analysed in Proof 1 in Section 4.4. The first two lines represent the expected result from the set of clauses included in the file. In this case, the trace is expected to be closed, so Unsat is written in the file. This is currently not used, but for parsing reasons all files sent to Scary must have these two lines.

Next, the honest users' keys (which were randomly generated) are stated. As the reader can see in the file, the keys with the identifiers 1, 3 and 4 are honest.

Then, the honest interaction that contains the nonce representing the secret being verified, our N, represented by nonce(0), is expressed. In this case, nonce(0) is generated in the first message of the protocol, exchanged between the agents with the identifiers 3 and 4. This message is associated with a handle; for that purpose the handle dependency association is used. After this, the message associated with the state representing the case analysed in Proof 1 is expressed. It is also associated with a handle.

It is also necessary to write the restrictions of the message being analysed. For example, the input file presented in Section 5.3.1 contains, for the message (name(X′), pi2(adec(handle(3), sk(X′))), nonce(2′), pk(Q′), nonce(101)), the restriction pi1(adec(handle(3), sk(X′))) = Q′. This restriction would be expressed in an input file sent to Scary in a similar way, for example pi1(adec(handle(3), sk(1))) = name(2).

Finally, with these two handles, Scary checks if nonce(0) can be derived; it checks this for both handles. Scary does this by applying the saturation algorithm to all the clauses contained in this file and the axioms described in Chapter 4. This algorithm is explained in more detail in (Comon-Lundh, Cortier, and Scerri 2013). The obtained set of clauses is recorded in a log file. Since all the clauses given in the file, including the ones that represent the frame φm and the one that represents the message t we want to drop, are saturated, the objective here is to check whether we are able to deduce the clause {} ▷ nonce(0), meaning we dropped t and the represented trace does not represent an attack, since nonce(0) can be deduced from nothing.

Our tool then checks if this log file contains the clause {} ▷ nonce(0), which means that nonce(0) is indeed not secret. This subject was discussed in Section 4.4. If the prototype finds that clause, or a clause from which {} ▷ nonce(0) can be derived, then the current branch is closed. What this clause means is that, in this case, nonce(0) can be derived from nothing, which implies that nonce(0) is not fresh; this contradicts one of the initial assumptions, namely that nonce(0) is fresh at the beginning of the protocol, since it was generated and exchanged between honest agents. This implies that the branch we are currently exploring is not possible to reach.

Continuing with the prototype execution: if, on the other hand, the tool does not find such a clause in the log file constructed by Scary, then the current branch is expanded using backtracking. However, we must first check whether we are able to expand the trace with the help of the non-malleability axiom presented in Chapter 4. By applying the non-malleability axiom to the current message being analysed, we verify whether we are able to backtrack. If it is possible, then we expand the branch; otherwise, the branch might represent an attack: since we cannot expand any further, we cannot guarantee whether the associated trace represents an attack or not, so we assume the trace to be an attack (which might be a false positive).

5.4 Theoretical Assumptions

In Section 4.4, we've seen that our tool is based on proposition (2) of Section 4.4:

ϕ, t ▷ N ⊢ ψ ▷ N then φ, t ▷ N ⊢ φ ▷ N (2)

This proposition is valid under a set of conditions that must be fulfilled for every φ that extends ϕ. Here are those conditions:

• RandGen(K), dK ⊈ ψ −→ dK ⊈ φ

• ∀x ⊆ t, RandGen(x), fresh(x, ψ) −→ fresh(x, φ)

• The freshness axiom is only applied to nonces in t, the message currently being analysed.

• All encryptions use fresh R randoms.


These conditions are needed for our tool to work correctly. The first condition, as we previously said, means that every randomly generated secret key is not contained in the frame at all. If a secret key and a message encrypted with the associated key were contained in a frame at the same time, we wouldn't be able to apply the secrecy axiom mentioned in Chapter 4. This issue would invalidate the secrecy proof our tool automates.

A note worth mentioning: the existence of secret keys in frames isn't an issue for Scary and doesn't invalidate its decision procedure.

The third and fourth conditions are related to the freshness of the elements of a message. More specifically, in the third condition, all nonces of t, the message we want to drop, must be fresh, or else N could not be contained in any t, because N must be fresh at the beginning of the protocol. Also, in the fourth condition, all randoms used in encrypted messages, whether they use asymmetric or symmetric encryption, must be fresh; otherwise, they cannot be eliminated from the clause deduction process as is done in the proofs presented in Chapter 4.

5.5 Experimental Results

The benchmarks of this thesis concern the NSL protocol and the manual proofs applying the technique of Bana and Comon-Lundh (Bana and Comon-Lundh 2014) described in (Adão and Bana 2015). Throughout this section, the results obtained by the prototype will be compared with the results of the manual proof, and also with Scary.

It is important to mention that the prototype focuses on detecting traces that might represent attacks, sacrificing a bit of assurance. This means the prototype might give false positives in attack detection. In this section, the prototype is evaluated in terms of its completeness, that is, its attack detection rate. The NSL protocol is used as the main benchmark, as previously said. Then a comparison is made with Scary (the tool that automates the same technique (Bana and Comon-Lundh 2014)), taking into consideration the protocols that were tested for Scary, mentioned in (Scerri 2015).

5.5.1 NSL Protocol

Considering the NSL protocol, in the manual proof made in (Adão and Bana 2015), 3 types of attacks are found in the protocol. Our tool discovers all of them.

5.5.1.1 First Attack NSL

The first case to be analysed involves the interaction between two Responders and the intruder, where the last message analysed was sent by a Responder:

{X′, π2(dec(h′3, sk(X′))), N′2}ek(X)

In the manual proof in (Adão and Bana 2015), it is deduced that this trace represents an attack. The attack is represented in Figure 5.8 and is executed in the following way: a malicious agent Q, acting as X, sends {X, n′}eKX′ to X′, starting in this way a new session with X′. Let Resp(X) be the associated responder session of X′. Upon receiving this message, X′ responds according to his role, generating a new nonce N′2 and sending {X′, n′, N′2}eKX back to Q, who is pretending to be X. Then Q starts a new session with X by forwarding the received message {X′, n′, N′2}eKX, which is understood by X as {Q, n}eKX. This n may hold information about N′2. According to his role, X responds by sending {X, n, N2}eKQ for some freshly generated nonce N2, which can now be decrypted by Q. So Q is able to retrieve the value of n and possibly compute ⟨X′, n′, N′2⟩ from n and Q. If ⟨X′, N′, N′2⟩ = ⟨Q, n⟩ may hold non-negligibly for an honestly generated nonce N′, then it is not even needed to initiate the protocol maliciously: X may have initiated a legitimate session with the message {X, N′}eKX′, and then even this N′ will be compromised.

Figure 5.8: Attack 1 in NSL protocol

Going back to the context of the prototype, the trace associated with the manual proof up to this point is composed of the interactions of Resp(X) and Resp(X′). In the prototype, this trace by itself does not correspond to an attack, since the last message analysed does not correspond to the first message of the protocol; that is, the trace does not go from the hypothetical state where an attack occurred back to the root of the protocol. So this trace is expanded with backtracking. Then, when analysing the subcase where the previous message corresponds to the first message of the protocol, the tool indeed determines that this trace corresponds to an attack. The other subcases generated by our tool are also analysed, unlike what is done in the manual proof in (Adao and Bana 2015), since there the attack is deduced prior to the generation of these subcases. The trace obtained by the tool is illustrated in Figure 5.9.
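The backtracking expansion described above can be sketched as follows. This is a simplified illustration, not the prototype's actual code: the message labels and the predecessor relation are invented stand-ins for the real trace representation.

```python
# Sketch of the backtracking expansion: a trace is only reported as an
# attack once it has been extended backwards all the way to the first
# message of the protocol. Labels and predecessors are hypothetical.

FIRST_MESSAGE = "msg1"  # placeholder label for the protocol's first message

def possible_predecessors(message):
    """Hypothetical enumeration of messages that may precede `message`."""
    order = {"msg3": ["msg2"], "msg2": ["msg1"], "msg1": []}
    return order.get(message, [])

def expand_backwards(trace):
    """Expand a candidate trace towards the protocol root.

    Returns the list of completed traces (those whose earliest message is
    the protocol's first message); only these are reported as attacks.
    """
    earliest = trace[0]
    if earliest == FIRST_MESSAGE:
        return [trace]          # reached the root: candidate attack trace
    attacks = []
    for prev in possible_predecessors(earliest):
        attacks.extend(expand_backwards([prev] + trace))
    return attacks

# A trace ending at the responder's message is not an attack by itself;
# it becomes one only after expansion reaches the protocol's first message.
print(expand_backwards(["msg3"]))
```

The key point mirrored here is that a trace whose earliest message is not the protocol's first message is never classified directly; it is always expanded first.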

5.5.1.2 Second Attack NSL

The second case to be analysed considers the trace presented in Figure 5.10. In this case, thelast message analysed was sent by an Initiator,

{X′′, N′′1}ekQ′′

The tool determines the current trace, which contains this message, to be an attack, which is coherent with the result obtained in (Adao and Bana 2015) for this case, since there are no contradictions in the message matching process.


Figure 5.9: Trace First Attack in NSL protocol

Figure 5.10: Trace Second Attack NSL

Basically, what is happening in this trace is the following: let Init(X) (the execution on the left-hand side of the previous figure) be the Initiator process in a protocol run between an honest Initiator X and himself, where X generates nonce N. Suppose that an adversary Q intercepts the first message and sends it back to X, posing as a responder. Let Resp(X) be the responder process in a protocol run between adversary Q and X, initialized by Q, who forwards to X the message {n′}eKX that was captured from the network.

{n′}eKX is parsed as {Q,n}eKX . In this case, by decrypting the last message, Q is able toretrieve n, and as in the previous attack, Q may be able to compute n′ as 〈Q,n〉. With n′ andX , Q retrieves part of the message 〈X,N〉. In particular, if 〈X,N〉 = 〈X,N,N〉, then n′ = N .Figure 5.11 illustrates the attack in action.
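The possibility that one message coincides, as a bitstring, with a differently shaped tuple stems from untagged pairing: without tags or length fields, the same bitstring admits several parsings. The toy encoding below is invented for this sketch and is not the one used by the prototype.

```python
# Toy illustration: if pairing is plain concatenation with no tags or
# length fields, one bitstring parses both as a pair and as a triple.
# This is the kind of bitstring coincidence the proofs must account for.

def pair(a, b):
    return a + b  # untagged pairing: just concatenation

def all_pair_parsings(s):
    """Every way to read bitstring `s` as a pair <left, right>."""
    return [(s[:i], s[i:]) for i in range(1, len(s))]

triple = pair("Q", pair("n1", "n2"))   # <Q, <n1, n2>> encodes as "Qn1n2"
assert ("Q", "n1n2") in all_pair_parsings(triple)   # reads as <Q, n>
assert ("Qn1", "n2") in all_pair_parsings(triple)   # but also as <Qn1, n2>
```

Tagged or length-prefixed pairing would rule these coincidences out; the symbolic attacker model deliberately does not assume it.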


Figure 5.11: Attack 2 on the NSL protocol

5.5.1.3 Third Attack NSL

Now let us consider the same case, but where the last message analysed, instead of being sent by an Initiator, is sent by a Responder. The considered message has the following format:

{X′′, π2(dec(h′3, sk(X′′))), N′′2}ek(Q′′)

In the manual proof in (Adao and Bana 2015), it is deduced that this trace represents an attack. The attack is represented in Figure 5.12 and is executed in the following way: let an honest Initiator X′ execute the protocol with an honest Responder X, and suppose that the adversary Q intercepts the last message {N′′2}eKX and uses it to initiate a new session with Responder X, fooling him into taking the message as being of the form {Q, n}eKX. In this case, by decrypting the last message, similarly to the previous attacks, Q is able to retrieve n, and may be able to compute N′′2 if N′′2 = 〈Q, n〉. There is the possibility that this might happen.

Figure 5.12: Attack 3 on the NSL protocol


Going back to the context of the prototype, the trace associated with the manual proof up to this point is composed of the interactions of Init(Q) and Resp(X). Just like in the first attack analysed, this trace by itself is not considered an attack by the prototype, since the last message analysed does not correspond to the first message of the protocol. So, just like for the first attack, we expand this trace with backtracking. Then, when analysing the subcase where the previous message corresponds to the first message of the protocol, the tool determines that this trace corresponds to an attack. Again, the other subcases generated by our tool are analysed, unlike what is done in the manual proof in (Adao and Bana 2015). The trace obtained by the tool is illustrated in Figure 5.13.

Figure 5.13: Trace Third Attack on NSL

The expansion made by the tool leads to the analysis of new subcases that are not covered by the manual proof. In fact, some of these cases represent attacks similar to the ones dissected in this section; the difference is that they involve more sessions.

Keep in mind that, when evaluating the tool, the most important factor is the discovery of attacks, that is, not letting attacks on protocols go undetected. The next case illustrates a trace where the prototype reports an attack even though the trace does not in fact represent one.

5.5.1.4 Special Case NSL

A special case is found by our tool that corresponds to the execution represented in Figure 5.14: let Init(X′′′) be an Initiator process in a protocol that contains an honest Initiator X′′′ that generates nonce N′′′1. X′′′ communicates with another Initiator, X′′, that is expecting a message from a responder, since it had already started a session previously (Init(X′′)). So, X′′ interprets the message sent by X′′′ as 〈X′, N′′1, n′′〉. The response sent by X′′ is intercepted by an adversary Q, who sends it to X′ posing as a responder. X′ had also already started a session Init(X′) with X. Let Resp(X) be the responder process in a protocol run between adversary Q and X, initialized by Q, who forwards to X the message {n′}eKX that was captured from the network.

In the manual proof in (Adao and Bana 2015), in this case 〈X′′′, N′′′1〉 is being matched with 〈X′, N′′1, n′′〉, and consequently what is determined is that N′′1 needs to be equal to N′′′1, and since N′′1 was generated and sent in a single session, necessarily X′′′ = X′′. Hence 〈X′′′, N′′′1〉 = 〈X′′, N′′1〉 = 〈X′, N′′1, n′′〉, and since n′′ = 〈X, N′1, n′〉, the derivation φ0, N′′1 ▷ N′1 needs to be verified.

Well, necessarily N′′1 = N′1, which means that the two sessions run by X′ and X′′ in Figure 5.14 are the same, which is a contradiction, as the session on the right ends before the session on the left.

Figure 5.14: Special Case NSL

So this trace is closed by the manual proof, but the prototype considers it an attack. Indeed, just like the manual proof, the prototype does not find a nonce or name equivalence contradiction, but it is not capable of reasoning about the equivalence of sessions and their duration. Scary is also not able to deduce this type of contradiction, which means this trace is understood as an attack when in reality it isn't.

The occurrence of this false positive is not a problem because, when analysing the output the prototype provides, which contains the trace, we can verify that the trace is indeed not an attack. The real problem would be if a trace that represents an attack went undetected by the tool.

Nevertheless, the ability to reason about session equivalence would be a significant upgrade, avoiding false positives like the one just analysed. That is something to be considered in the future.

The tool also presents false positives when checking whether a branch can be expanded with backtracking using the non-malleability axiom. If the tool discovers that a branch cannot be expanded through the application of this axiom, then the status of the branch is unknown: we cannot guarantee whether the trace represents an attack or not. So, for every trace with these characteristics, the tool assumes that it represents a possible attack.
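The conservative treatment of undecidable branches can be summarised as follows. The status values and verdict strings are illustrative, not the prototype's actual output.

```python
# Conservative verdicts: a branch whose expansion via the non-malleability
# axiom fails has status UNKNOWN, and UNKNOWN is reported as a possible
# attack rather than silently closed. Names here are illustrative.

CLOSED, ATTACK, UNKNOWN = "closed", "attack", "unknown"

def report(branch_status):
    """Map a branch status to the verdict shown to the user."""
    if branch_status == CLOSED:
        return "no attack on this branch"
    # Both confirmed attacks and undecidable branches are reported,
    # trading precision (false positives) for completeness.
    return "possible attack: inspect this trace"
```

The design choice is completeness over precision: an unknown branch may hide an attack, so it is surfaced for manual inspection instead of being discarded.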

Also, from the other subcases that are generated, as explained in Sections 5.5.1.2 and 5.5.1.3, similar attacks are found when expanding the related traces; they differ only in involving more sessions.

The output of our tool when an attack is discovered is illustrated in Appendix A.

5.5.2 Scary Comparison

The prototype was also tested against other protocols that were tested in Scary, as described in (Scerri 2015). All of these protocols use asymmetric or symmetric encryption, and have as function symbols only the encryption and pairing primitives. These are the types of protocols supported by Scary (and consequently, since we're using Scary, supported by our tool). The protocols tested include the NSL protocol, the Andrew Secure RPC protocol, and other toy protocols that represent simple cases of asymmetric and symmetric encryption.

So, this prototype automates the technique presented in (Bana and Comon-Lundh 2014). But, as previously said, Scary already automates the same technique. So the questions that might arise are: what are the advantages and disadvantages of the prototype compared with Scary? What is gained with the prototype?

Well, as previously discussed, the prototype, with the help of backtracking, is able to support an unlimited number of sessions. Scary is not able to do this and, as discussed in Section 5.3.4, does not provide guarantees about the presence of attacks in a protocol.

The prototype overcomes this issue and indeed is able to guarantee the presence/absence of attacks in a protocol, since it uses an unlimited number of sessions. The problem with Scary is exactly the determination of the limit on the number of sessions, that is, the minimum number of sessions we need to take into consideration to cover all attacks known for a protocol. For the protocols we tested, our prototype and Scary discovered the exact same attacks, which makes sense because our prototype is using, in a way, Scary. For these protocols the determination of the "boundary" of sessions wasn't an issue, since those protocols don't have many roles and interactions.

A concept that helps in the analysis of traces is that of Invariants, presented in Section 4.5. With the help of the Invariant Algorithm previously explained, the prototype is able to, besides abstracting a number of states into a single state transition, close a "looping" trace that appears to be infinite. This way, our tool doesn't repeat unnecessary proofs, with the bonus that every possible trace of a protocol is fully analysed. But since it considers all possibilities in terms of the honesty of all agents, the tool also analyses unnecessary traces that involve only dishonest agents. The two previously described aspects are important to determine which is the faster tool: our prototype or Scary.
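The role of the Invariant Algorithm in closing "looping" traces can be pictured with the minimal sketch below; the real invariants abstract protocol states, whereas here states are plain strings and the successor map is invented for illustration.

```python
# Sketch: closing a "looping" trace by remembering abstracted states.
# If exploration revisits a state already covered, the branch is closed
# instead of being expanded forever.

def explore(state, successors, seen=None):
    """Collect the distinct states explored, closing branches on repeats."""
    if seen is None:
        seen = set()
    if state in seen:        # invariant already covers this state: close
        return seen
    seen.add(state)
    for nxt in successors.get(state, []):
        explore(nxt, successors, seen)
    return seen

# "s2" loops back to "s1": without the `seen` check, exploration would
# never terminate on this graph.
looping = {"s1": ["s2"], "s2": ["s1", "s3"], "s3": []}
print(explore("s1", looping))
```

This is how an apparently infinite trace is analysed in finite time: the loop is detected and abstracted rather than unrolled.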

Considering now the performance of both tools, Scary analyses a protocol faster. The reasons are the following: to analyse a single trace, the prototype first needs to check the set of clauses that represents the trace for contradictions in terms of name and nonce equivalences; then it sends the same set of clauses to Scary, which executes the saturation algorithm explained in Section 5.3.4; after this, the result obtained by Scary needs to be analysed to determine whether the trace is indeed possible or not. This process is done for every case originated by the instantiation case generation (Section 5.2). Also, as previously said, our tool analyses cases that involve only dishonest agents. The associated traces will eventually be closed, but this is another factor that delays the analysis of the whole protocol.
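Schematically, the per-trace work just described is a three-stage pipeline. All three stages below are toy placeholder stand-ins for the real components (the actual contradiction check, Scary's saturation, and the result analysis).

```python
# Per-trace pipeline, schematically: (1) local contradiction check on the
# clause set, (2) Scary's saturation, (3) interpretation of the result.
# Every function body here is a toy stand-in, not the real component.

def has_name_nonce_contradiction(clauses):
    # Toy check: a clause set asserting both an equality and its negation.
    return "x = y" in clauses and "x != y" in clauses

def scary_saturate(clauses):
    # Stand-in for the call out to Scary's saturation algorithm.
    return clauses | {"saturated"}

def trace_is_possible(clauses):
    if has_name_nonce_contradiction(clauses):
        return False            # closed locally, before ever calling Scary
    saturated = scary_saturate(clauses)
    return "false" not in saturated   # toy verdict on Scary's output
```

The point of the sketch is the cost structure: each trace pays for a local check, an external saturation run, and a post-analysis, which is why the prototype is slower than Scary's single-pass forward search.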

But the most important factor is related to the fact that we execute the Algorithm described in Section 5.2 for every nonce that is generated in a message; that is, for every message that generates a nonce, we execute the Algorithm. For example, in the NSL protocol there are two messages that generate nonces, the first and second messages. So we execute the Algorithm first for the first message and then for the second one.
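This repetition amounts to an outer loop over the nonce-generating messages. The message records below are invented for the sketch; `run_algorithm` stands in for the case-generation Algorithm of Section 5.2.

```python
# The case-generation Algorithm runs once per message that generates a
# nonce. For NSL, messages 1 and 2 generate nonces, so it runs twice.
# The message records are invented for this sketch.

NSL_MESSAGES = [
    {"index": 1, "generates_nonce": True},   # {A, Na}ek(B)
    {"index": 2, "generates_nonce": True},   # {B, Na, Nb}ek(A)
    {"index": 3, "generates_nonce": False},  # {Nb}ek(B)
]

def analyse_protocol(messages, run_algorithm):
    """Run the case-generation algorithm for each nonce-generating message."""
    return [run_algorithm(m["index"])
            for m in messages if m["generates_nonce"]]

runs = analyse_protocol(NSL_MESSAGES,
                        run_algorithm=lambda i: f"run for message {i}")
```

Since the whole backward analysis is repeated per nonce-generating message, the cost grows with the number of such messages, which is part of why the prototype is slower than Scary's single forward pass.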

In contrast, Scary generates traces from the beginning of the protocol, executes the saturation algorithm for each trace, and immediately determines whether each trace is feasible or not. From this we can see why the prototype is slower than Scary, even though our prototype abstracts the states that are explored (in the case of "looping" traces) using the Invariant algorithm.

The next table illustrates the comparison between the manual proof, Scary and our prototype for the NSL protocol, in terms of the number of attacks detected (taking into consideration the attacks previously analysed for the NSL protocol), false positives and performance (not applicable to the manual proof).

Proof/Tool      No. of attacks   False positives   Performance
Manual proof    3                0                 −
Scary           3                No                Faster
Prototype       3                Yes               Slower

Table 5.1: Manual Proof/ Tools Comparison NSL Protocol

For the protocols we tested, our tool, with the help of Scary, is indeed able to fully automate the technique proposed in (Bana and Comon-Lundh 2014) and to discover the attacks that the related manual proofs discover.

5.6 Further Work

This prototype is still a work in progress. As the analysis in the previous section shows, the tool gives false positives. These false positives are correlated with session equivalence (also discussed in the previous section) and with traces that we cannot expand using backtracking. Our prototype is not able to reason about session equivalence right now, so this is further work to be done in the future.

Also, we need to expand the Algorithm related to the prototype. Although the prototype is able to analyse other protocols, the Algorithm is too focused on the NSL protocol. With the protocols analysed (NSL, Andrew Secure RPC, and other toy protocols) there were no problems, but with other protocols that might not be the case.

This leads to another matter to address in the future: supporting the verification of protocols that not only use asymmetric or symmetric encryption, but also have more function symbols besides the encryption and pairing primitives. To do this, Scary also needs to be modified accordingly.

We would like our tool to be independent of Scary, which implies implementing a saturation algorithm that simulates the deduction process done in the proofs presented in Chapter 4. This way, there would be no need to construct files for Scary, modelling triples or other more complex structures, such as pairs of pairs, and we wouldn't need to analyse the log file Scary produces and search for (and deduce) the stopping clause mentioned in Section 5.3.4.

Besides the secrecy property, we would also like the tool to verify the authentication property. This wouldn't be a complicated task, since the manual proofs to automate are similar to the proofs related to the secrecy property.

Finally, we would like to create a user-friendly interface that would allow any user to analyse protocols in an easy way. For example, our tool would first ask the user for the specification of a protocol, and then it would show the user the attacks discovered for that protocol.


6 Conclusion

A conclusion is simply the place where you got tired of thinking.
– Dan Chaon

6.1 Conclusion

In this thesis, we've created a tool that automates the technique proposed in (Bana and Comon-Lundh 2014), dealing this way with non-Dolev-Yao attacks. This type of attack is not detected by tools that use the Dolev-Yao model as their intruder model; Scyther is an example of such a tool. To deal with non-Dolev-Yao attacks, this technique considers an intruder model different from the Dolev-Yao model, one that takes into consideration a set of axioms the intruder must not violate.

There already exists a tool that automates this technique, called Scary. However, to reach termination in the analysis of a protocol, it uses bounded verification, limiting the number of sessions, combined with a forward-search approach. We've shown that our tool overcomes this limitation using backtracking.

In this technique, the starting point of verification is a hypothetical state where an attack occurred, and we generate traces in a backwards fashion. We know that all traces explored are finite, since they must, in the worst case, reach the beginning of the protocol. With this assumption, and the Invariants Algorithm explained in Section 4.5, we're able to analyse all possible traces of a protocol without going through "looping" traces.

From all of this, we've created a prototype that is able to verify if the secrecy property defined in Chapter 4 is respected in a protocol. In the future, the tool will also be able to verify the authentication property. The protocols that this tool supports for verification obey the following conditions: they use asymmetric or symmetric encryption and have as function symbols only the encryption and pairing primitives. In the future, we intend to support protocols beyond these.

Our tool uses Scary to determine if a trace can be expanded or not, so it depends on Scary, which also only supports protocols that obey the previous conditions. To use Scary, we needed to modify it to be coherent with the context of our tool; basically, Scary is used only as a clause-deduction machine.

The tool focuses on detecting traces that might represent attacks, sacrificing some assurance: the prototype might give false positives in its attack detection.

To validate our tool, we tested it with the NSL protocol, the Andrew Secure RPC and some other toy protocols, and compared the results with those obtained by the manual proof related to the technique in (Bana and Comon-Lundh 2014), and with Scary. In comparison with Scary, our tool is able to support an unlimited number of sessions; however, the verification process is slower than Scary's.

The number of attacks detected by our prototype and by Scary is the same, as the limitation on sessions does not present an issue for the protocols that were tested.

We would like our tool to be independent of Scary, which implies implementing a saturation algorithm that simulates the deduction process done in the proofs presented in Chapter 4. This way, there would be no need to construct files for Scary, modelling triples or other more complex structures, such as pairs of pairs, and we wouldn't need to analyse the log file Scary produces and search for (and deduce) the stopping clause mentioned in Section 5.3.4.

Finally, our tool is far from perfect and needs to be improved. The algorithm that this prototype automates, described in Section 5.2, needs to be expanded: although the prototype is able to analyse other protocols, the Algorithm is still too focused on the NSL protocol. With the protocols analysed there were no problems, but with other protocols that might not be the case.


Bibliography

Adão, P. and G. Bana (2015). NSL verification and attacks: agents playing both roles.

Aiash, M., G. Mapp, R.-W. Phan, A. Lasebae, and J. Loo (2012, June). A formally verified device authentication protocol using Casper/FDR. In Trust, Security and Privacy in Computing and Communications (TrustCom), 2012 IEEE 11th International Conference on, pp. 1293–1298.

Arapinis, M., E. Ritter, and M. Ryan (2011, June). StatVerif: Verification of stateful processes. In Computer Security Foundations Symposium (CSF), 2011 IEEE 24th, pp. 33–47.

Armando, A. and L. Compagna (2004). SATMC: A SAT-based model checker for security protocols. In J. Alferes and J. Leite (Eds.), Logics in Artificial Intelligence, Volume 3229 of Lecture Notes in Computer Science, pp. 730–733. Springer Berlin Heidelberg.

Bana, G., P. Adão, and H. Sakurada (2012). Computationally complete symbolic attacker in action. In FSTTCS, Volume 12, pp. 546–560.

Bana, G. and H. Comon-Lundh (2014). A computationally complete symbolic attacker for equivalence properties. In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, CCS '14, New York, NY, USA, pp. 609–620. ACM.

Barthe, G., F. Dupressoir, B. Grégoire, C. Kunz, B. Schmidt, and P.-Y. Strub (2014). EasyCrypt: A tutorial. In A. Aldini, J. Lopez, and F. Martinelli (Eds.), Foundations of Security Analysis and Design VII, Volume 8604 of Lecture Notes in Computer Science, pp. 146–166. Springer International Publishing.

Basagiannis, S., P. Katsaros, and A. Pombortsis. Intrusion attack tactics for the model checking of e-commerce security guarantees. In Proceedings of the 26th International Conference on Computer Safety, Reliability and Security (SAFECOMP), pp. 238–251. Springer Verlag.

Basin, D., S. Mödersheim, and L. Viganò (2005). OFMC: A symbolic model checker for security protocols. International Journal of Information Security 4(3), 181–208.

Berezin, S. (2002). Model Checking and Theorem Proving: A Unified Framework. Ph.D. thesis, Pittsburgh, PA, USA. AAI3051019.

Blanchet, B. (2007). CryptoVerif: Computationally sound mechanized prover for cryptographic protocols. In Dagstuhl seminar "Formal Protocol Verification Applied", pp. 117.

Blanchet, B. (2011). Using Horn clauses for analyzing security protocols. Formal Models and Techniques for Analyzing Security Protocols 5, 86–111.

Blanchet, B., B. Smyth, and V. Cheval (2014). ProVerif 1.90: Automatic cryptographic protocol verifier, user manual and tutorial.



Blanchet, B. et al. Programming securely with cryptography.

Boichut, Y., P.-C. Héam, and O. Kouchnarenko (2005). Automatic Verification of Security Protocols Using Approximations. Research Report RR-5727.

Comon-Lundh, H., V. Cortier, and G. Scerri (2013). Tractable inference systems: An extension with a deducibility predicate. In M. Bonacina (Ed.), Automated Deduction – CADE-24, Volume 7898 of Lecture Notes in Computer Science, pp. 91–108. Springer Berlin Heidelberg.

Comon-Lundh, H. and S. Delaune (2005). The finite variant property: How to get rid of some algebraic properties. In J. Giesl (Ed.), Term Rewriting and Applications, Volume 3467 of Lecture Notes in Computer Science, pp. 294–307. Springer Berlin Heidelberg.

Cortier, V., S. Delaune, and P. Lafourcade (2006, January). A survey of algebraic properties used in cryptographic protocols. J. Comput. Secur. 14(1), 1–43.

Cremers, C. (2008). The Scyther tool: Verification, falsification, and analysis of security protocols. In A. Gupta and S. Malik (Eds.), Computer Aided Verification, Volume 5123 of Lecture Notes in Computer Science, pp. 414–418. Springer Berlin Heidelberg.

Diffie, W. and M. Hellman (1976, Nov). New directions in cryptography. Information Theory, IEEE Transactions on 22(6), 644–654.

Dolev, D. and A. C.-C. Yao (1983). On the security of public key protocols. IEEE Transactions on Information Theory 29(2), 198–207.

Escobar, S., C. Meadows, and J. Meseguer (2009). Maude-NPA, version 1.0.

Fuchs, L. (2014). Abelian groups, Volume 12. Elsevier.

Kosmatov, N. (2005, October). Constraint Solving for Sequences in Software Validation and Verification. In Proc. of the 16th Int. Conf. on Applications of Declarative Programming and Knowledge Management (INAP), Volume 4369 of Lecture Notes in Artificial Intelligence, Fukuoka, Japan, pp. 25–37. Springer. ISBN 978-3-540-69233-1.

Lowe, G. (1996). Breaking and fixing the Needham-Schroeder public-key protocol using FDR. In T. Margaria and B. Steffen (Eds.), Tools and Algorithms for the Construction and Analysis of Systems, Volume 1055 of Lecture Notes in Computer Science, pp. 147–166. Springer Berlin Heidelberg.

Lowe, G. (1997, Jun). Casper: A compiler for the analysis of security protocols. In Computer Security Foundations Workshop, 1997. Proceedings., 10th, pp. 18–30.

Meier, S., C. Cremers, and D. Basin (2010, July). Strong invariants for the efficient construction of machine-checked protocol security proofs. In Computer Security Foundations Symposium (CSF), 2010 23rd IEEE, pp. 231–245.

Meier, S., B. Schmidt, C. Cremers, and D. Basin (2013). The Tamarin prover for the symbolic analysis of security protocols. In N. Sharygina and H. Veith (Eds.), Computer Aided Verification, Volume 8044 of Lecture Notes in Computer Science, pp. 696–701. Springer Berlin Heidelberg.


Scerri, G. (2015). Proofs of security protocols revisited. Ph.D. thesis, École Normale Supérieure de Cachan.

Song, D. X. (1999). Athena: A new efficient automatic checker for security protocol analysis. In Computer Security Foundations Workshop, 1999. Proceedings of the 12th IEEE, pp. 192–202.

The AVISPA Team (2006). AVISPA v1.1 User Manual - Automated Validation of Internet Security Protocols and Applications. The AVISPA Team.

Warinschi, B. (2003, June). A computational analysis of the Needham-Schroeder-(Lowe) protocol. In Computer Security Foundations Workshop, 2003. Proceedings. 16th IEEE, pp. 248–262.


I Appendices


A Appendix

Figure A.1: Excerpt of Output of Attack 1 on the NSL protocol


Figure A.2: Excerpt of Output of Attack 2 on the NSL protocol


Figure A.3: Excerpt of Output of Attack 3 on the NSL protocol
