

Formal Verification of Neural Network Controlled Autonomous Systems

Xiaowu Sun
Department of Electrical and Computer Engineering
University of Maryland, College Park
[email protected]

Haitham Khedr
Department of Electrical and Computer Engineering
University of Maryland, College Park
[email protected]

Yasser Shoukry
Department of Electrical and Computer Engineering
University of Maryland, College Park
[email protected]

ABSTRACT
In this paper, we consider the problem of formally verifying the safety of an autonomous robot equipped with a Neural Network (NN) controller that processes LiDAR images to produce control actions. Given a workspace that is characterized by a set of polytopic obstacles, our objective is to compute the set of safe initial states such that a robot trajectory starting from these initial states is guaranteed to avoid the obstacles. Our approach is to construct a finite state abstraction of the system and use standard reachability analysis over the finite state abstraction to compute the set of safe initial states. To mathematically model the imaging function, which maps the robot position to the LiDAR image, we introduce the notion of imaging-adapted partitions of the workspace in which the imaging function is guaranteed to be affine. Given this workspace partitioning, discrete-time linear dynamics of the robot, and a pre-trained NN controller with Rectified Linear Unit (ReLU) nonlinearity, we utilize a Satisfiability Modulo Convex (SMC) encoding to enumerate all the possible assignments of the different ReLUs. To accelerate this process, we develop a pre-processing algorithm that can rapidly prune the space of feasible ReLU assignments. Finally, we demonstrate the efficiency of the proposed algorithms using numerical simulations with increasing complexity of the neural network controller.

CCS CONCEPTS
• Computing methodologies → Artificial intelligence; Control methods; • Security and privacy → Formal methods and theory of security;

KEYWORDS
Formal Verification, Machine Learning, Satisfiability Solvers

ACM Reference Format:
Xiaowu Sun, Haitham Khedr, and Yasser Shoukry. 2019. Formal Verification of Neural Network Controlled Autonomous Systems. In 22nd ACM International Conference on Hybrid Systems: Computation and Control (HSCC '19), April 16–18, 2019, Montreal, QC, Canada. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3302504.3311802

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
HSCC '19, April 16–18, 2019, Montreal, QC, Canada
© 2019 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 978-1-4503-6282-5/19/04 . . . $15.00
https://doi.org/10.1145/3302504.3311802

1 INTRODUCTION
From simple logical constructs to complex deep neural network models, Artificial Intelligence (AI) agents are increasingly controlling physical/mechanical systems. Self-driving cars, drones, and smart cities are just a few examples of such systems. However, despite the explosion in the use of AI within a multitude of cyber-physical systems (CPS) domains, the safety and reliability of these AI-enabled CPS is still an under-studied problem. It is then unsurprising that the failure of these AI-controlled CPS in several safety-critical situations has led to human fatalities [1].

Motivated by the urgency of studying the safety, reliability, and potential problems that can arise from the deployment of AI-enabled systems in the real world, several works in the literature have focused on the problem of designing deep neural networks that are robust to so-called adversarial examples [2–8]. Unfortunately, these techniques focus mainly on the robustness of the learning algorithm with respect to data outliers without providing guarantees in terms of the safety and reliability of the decisions made by these neural networks. To circumvent this drawback, recent works have focused on three main techniques, namely (i) testing of neural networks, (ii) falsification (semi-formal verification) of neural networks, and (iii) formal verification of neural networks.

Representative of the first class, namely testing of neural networks, are the works reported in [9–18], in which the neural network is treated as a white box and test cases are generated to maximize different coverage criteria. Such coverage criteria include neuron coverage, condition/decision coverage, and multi-granularity testing criteria. On the one hand, maximizing test coverage gives system designers confidence that the networks are reasonably free from defects. On the other hand, testing does not formally guarantee that a neural network satisfies a formal specification.

To take into consideration the effect of the neural network decisions on the entire system behavior, several researchers have focused on the falsification (or semi-formal verification) of autonomous systems that include machine learning components [19–21]. In such falsification frameworks, the objective is to generate corner test cases that force a violation of system-level specifications. To that end, advanced 3D models and image environments are used to bridge the gap between the virtual world and the real world. By parametrizing the input to these 3D models (e.g., position of objects, position of light sources, intensity of light sources) and sampling the parameter space in a fashion that maximizes the falsification of the safety property, falsification frameworks can simulate several test cases until a counterexample is found [19–21].

While testing and falsification frameworks are powerful tools to find corner cases in which the neural network or the neural


network enabled system may fail, they lack the rigor promised by formal verification methods. Therefore, several researchers have pointed to the urgent need for formal methods to verify the behavior of neural networks and neural network enabled systems [22–27]. As a result, recent works in the literature have addressed the problem of applying formal verification techniques to neural network models.

Applying formal verification to neural network models comes with its own unique challenges. First and foremost is the lack of widely accepted, precise, mathematical specifications capturing the correct behavior of a neural network. Therefore, recent works have focused entirely on verifying neural networks against simple input-output specifications [28–33]. Such input-output techniques compute a guaranteed range for the output of a deep neural network given a set of inputs represented as a convex polyhedron. To that end, several algorithms that exploit the piecewise-linear nature of the Rectified Linear Unit (ReLU) activation function (one of the most popular nonlinear activation functions in deep neural networks) have been proposed. For example, by using binary variables to encode piecewise-linear functions, the constraints of ReLU functions can be encoded as a Mixed-Integer Linear Program (MILP). Combined with output specifications that are expressed in terms of Linear Programming (LP), the verification problem then reduces to a MILP feasibility problem [32, 34].
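To make the binary-variable encoding concrete, the following is a minimal sketch (our own illustration, not the authors' implementation or any particular solver's API) of the standard big-M constraints that tie a ReLU output y = max(0, x) to a binary indicator d, assuming a known bound M ≥ |x|:

```python
def relu_bigM_feasible(x, y, d, M=100.0, tol=1e-9):
    """Check the four big-M linear constraints encoding y = max(0, x).

    d is a binary indicator (d = 1 when the ReLU is active, i.e. x > 0).
    The constraints are jointly feasible iff (x, y, d) is consistent
    with y = ReLU(x) under the chosen phase d.
    """
    return (
        y >= x - tol and                 # y >= x
        y >= -tol and                    # y >= 0
        y <= x + M * (1 - d) + tol and   # inactive phase: y <= x + M(1 - d)
        y <= M * d + tol                 # active phase:   y <= M d
    )
```

For x ≤ 0 the constraints force d = 0 and y = 0; for x > 0 they force d = 1 and y = x, which is exactly the case split a MILP solver branches on.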

Using off-the-shelf MILP solvers does not lead to scalable approaches for handling neural networks with hundreds or thousands of neurons [29]. To circumvent this problem, several MILP-like solvers targeted toward the neural network verification problem have been proposed. For example, the work reported in [28] proposed a modified Simplex algorithm (originally used to solve linear programs) that also takes ReLU nonlinearities into account. Similarly, the work reported in [29] combines Boolean satisfiability solving with a linear over-approximation of piecewise-linear functions to verify ReLU neural networks against convex specifications. Other techniques that exploit specific geometric structures of the specifications have also been proposed [35, 36]. A thorough survey of different algorithms for the verification of neural networks against input-output range specifications can be found in [37] and the references therein.

Unfortunately, input-output range properties are simplistic and fail to capture the safety and reliability of cyber-physical systems when controlled by a neural network. Recent works showed how to perform reachability-based verification of closed-loop systems in the presence of learning components [38–40]. Reachability analysis is performed either by separately estimating the output set of the neural network and the reachable set of the continuous dynamics [38], or by translating the neural network controlled system into a hybrid system [39]. Once the neural network controlled system is translated into a hybrid system, off-the-shelf verification tools for hybrid systems, such as SpaceEx [41] for piecewise affine dynamics and Flow* [42] for nonlinear dynamics, can be used to verify safety properties of the system. Another related technique is safety verification using barrier certificates [43]. In this approach, a barrier function is searched for using several simulation traces to provide a certificate that unsafe states are not reachable from a given set of initial states.
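Although the tools above operate on hybrid systems, the analogous safety question over a finite-state abstraction (the representation used in this paper's approach) reduces to plain graph reachability. A minimal sketch over a hypothetical abstraction (the state names and transition relation below are illustrative, not taken from the paper):

```python
from collections import deque

def reachable(trans, start):
    """Breadth-first set of abstract states reachable from `start` (inclusive).

    `trans` maps each abstract state to the set of its possible successors.
    """
    seen = {start}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        for nxt in trans.get(s, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def safe_initial_states(trans, states, unsafe):
    """States from which no unsafe abstract state is reachable."""
    return {s for s in states if not (reachable(trans, s) & unsafe)}
```

An initial abstract state is declared safe exactly when its forward-reachable set avoids every unsafe state, which mirrors the set-of-safe-initial-states computation described in the abstract.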

Differently from the previous work in the literature on the formal verification of neural network controlled systems, we consider, in

this paper, the case in which the robotic system is equipped with a LiDAR scanner that is used to sense the environment. The LiDAR image is then processed by a neural network controller to compute the control inputs. Arguably, the ability of neural networks to process high-bandwidth sensory signals (e.g., cameras and LiDARs) is one of the main motivations behind the current explosion in the use of machine learning in robotics and CPS. Toward this goal, we develop a framework that can reason about the safety of the system while taking into account the robot's continuous dynamics, the workspace configuration, the LiDAR imaging, and the neural network.

In particular, the contributions of this paper can be summarized as follows:
1. A framework for formally proving safety properties of autonomous robots equipped with LiDAR scanners and controlled by neural network controllers.
2. A notion of imaging-adapted partitions, along with a polynomial-time algorithm for processing the workspace into such partitions. This notion of imaging-adapted partitions plays a significant role in capturing the LiDAR imaging process.
3. A Satisfiability Modulo Convex (SMC)-based algorithm, combined with an SMC-based pre-processing, for computing finite abstractions of neural network controlled autonomous systems.
For brevity, we omit here the proofs of the main technical results and report them in an extended version of the paper [44].

2 PROBLEM FORMULATION

2.1 Notation
The symbols N, R, R+, and B denote the sets of natural, real, positive real, and Boolean numbers, respectively. The symbols ∧, ¬, and → denote the logical AND, logical NOT, and logical IMPLIES operators, respectively. Given two real-valued vectors x1 ∈ R^n1 and x2 ∈ R^n2, we denote by (x1, x2) ∈ R^(n1+n2) the column vector [x1^T, x2^T]^T. Similarly, for a vector x ∈ R^n, we denote by xi ∈ R the i-th element of x. For two vectors x1, x2 ∈ R^n, we denote by max(x1, x2) the element-wise maximum. For a set S ⊂ R^n, we denote the boundary and the interior of this set by ∂S and int(S), respectively. Given two sets S1 and S2, f : S1 ⇉ S2 and f : S1 → S2 denote a set-valued and an ordinary map, respectively. Finally, given a vector z = (x, y) ∈ R^2, we denote by atan2(z) = atan2(y, x).
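As a small concrete illustration of this notation (our own example, not from the paper), vector concatenation and the element-wise maximum behave as follows; the same element-wise max later defines the ReLU activation max(0, x):

```python
def concat(x1, x2):
    """(x1, x2): the stacked column vector [x1^T, x2^T]^T, as a flat list."""
    return list(x1) + list(x2)

def elementwise_max(x1, x2):
    """max(x1, x2): element-wise maximum of two equal-length vectors."""
    return [max(a, b) for a, b in zip(x1, x2)]
```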

2.2 Dynamics and Workspace
We consider an autonomous robot moving in a 2-dimensional polytopic (compact and convex) workspace W ⊂ R^2. We assume that the robot must avoid the workspace boundaries ∂W along with a set of obstacles {O1, ..., Oo}, where each Oi ⊂ W is assumed to be polytopic. We denote by O the set of the obstacles and the workspace boundaries that need to be avoided, i.e., O = {∂W, O1, ..., Oo}. The dynamics of the robot is described by a discrete-time linear system of the form:

x(t+1) = A x(t) + B u(t),    (1)

where x(t) ∈ X ⊆ R^n is the state of the robot at time t ∈ N and u(t) ∈ R^m is the robot input. The matrices A and B represent the robot dynamics and have appropriate dimensions. For a robot with nonlinear dynamics that is either differentially flat or feedback

148

Page 3: Formal Verification of Neural Network Controlled

Formal Verification of Neural Network Controlled Autonomous Systems HSCC ’19, April 16–18, 2019, Montreal, QC, Canada

Preprocess LiDAR Image

Hidden layer 1

Hidden layer 2

Outputlayer

Inputlayer

...<latexit sha1_base64="aob6yfo7DjevWJ/4C7HjbKaeLdU=">AAACyXicbVHLbhMxFHWGVwmvtizZjIiQWEUzCAmWFWyQkKpSkbRqZlTZnpvU1I/BvlOSWl7xA92Wf+CD+Bs86SClCVeydHzOubrHvqyWwmGW/ekld+7eu/9g62H/0eMnT59t7+yOnWkshxE30thjRh1IoWGEAiUc1xaoYhKO2PnHVj+6AOuE0V9xUUOp6EyLqeAUIzUuLiqD7nR7kA2zZaWbIO/AgHR1cLrT+11UhjcKNHJJnZvkWY2lpxYFlxD6ReOgpvyczmASoaYKXOmXcUP6KjJVOjU2Ho3pkl3t8FQ5t1AsOhXFM7euteT/tEmD0/elF7puEDS/GTRtZIombd+eVsICR7mIgHIrYtaUn1FLOcYf6hcWNPzgRimqK1/Y4AuEOS7jeQtVCOsWdsvCZAOtZ9VxGLwv2qCM+cOwru6vqPsb6smKehI2p0fJinmY5GVMAjOhPeuoQV5AdPy7hn5ccL6+zk0wfjPMs2H+5e1g70O36i3ygrwkr0lO3pE98okckBHh5Bu5ItfkV/I5+Z7Mk8sba9Lrep6TW5X8/At5z+gE</latexit><latexit sha1_base64="aob6yfo7DjevWJ/4C7HjbKaeLdU=">AAACyXicbVHLbhMxFHWGVwmvtizZjIiQWEUzCAmWFWyQkKpSkbRqZlTZnpvU1I/BvlOSWl7xA92Wf+CD+Bs86SClCVeydHzOubrHvqyWwmGW/ekld+7eu/9g62H/0eMnT59t7+yOnWkshxE30thjRh1IoWGEAiUc1xaoYhKO2PnHVj+6AOuE0V9xUUOp6EyLqeAUIzUuLiqD7nR7kA2zZaWbIO/AgHR1cLrT+11UhjcKNHJJnZvkWY2lpxYFlxD6ReOgpvyczmASoaYKXOmXcUP6KjJVOjU2Ho3pkl3t8FQ5t1AsOhXFM7euteT/tEmD0/elF7puEDS/GTRtZIombd+eVsICR7mIgHIrYtaUn1FLOcYf6hcWNPzgRimqK1/Y4AuEOS7jeQtVCOsWdsvCZAOtZ9VxGLwv2qCM+cOwru6vqPsb6smKehI2p0fJinmY5GVMAjOhPeuoQV5AdPy7hn5ccL6+zk0wfjPMs2H+5e1g70O36i3ygrwkr0lO3pE98okckBHh5Bu5ItfkV/I5+Z7Mk8sba9Lrep6TW5X8/At5z+gE</latexit><latexit sha1_base64="aob6yfo7DjevWJ/4C7HjbKaeLdU=">AAACyXicbVHLbhMxFHWGVwmvtizZjIiQWEUzCAmWFWyQkKpSkbRqZlTZnpvU1I/BvlOSWl7xA92Wf+CD+Bs86SClCVeydHzOubrHvqyWwmGW/ekld+7eu/9g62H/0eMnT59t7+yOnWkshxE30thjRh1IoWGEAiUc1xaoYhKO2PnHVj+6AOuE0V9xUUOp6EyLqeAUIzUuLiqD7nR7kA2zZaWbIO/AgHR1cLrT+11UhjcKNHJJnZvkWY2lpxYFlxD6ReOgpvyczmASoaYKXOmXcUP6KjJVOjU2Ho3pkl3t8FQ5t1AsOhXFM7euteT/tEmD0/elF7puEDS/GTRtZIombd+eVsICR7mIgHIrYtaUn1FLOcYf6hcWNPzgRimqK1/Y4AuEOS7jeQtVCOsWdsvCZAOtZ9VxGLwv2qCM+cOwru6vqPsb6smKehI2p0fJinmY5GVMAjOhPeuoQV5AdPy7hn5ccL6+zk0wfjPMs2H+5e1g70O36i3ygrwkr0lO3pE98okckBHh5Bu5ItfkV/I5+Z7Mk8sba9Lrep6TW5X8/At5z+gE</latexit><latexit 
sha1_base64="aob6yfo7DjevWJ/4C7HjbKaeLdU=">AAACyXicbVHLbhMxFHWGVwmvtizZjIiQWEUzCAmWFWyQkKpSkbRqZlTZnpvU1I/BvlOSWl7xA92Wf+CD+Bs86SClCVeydHzOubrHvqyWwmGW/ekld+7eu/9g62H/0eMnT59t7+yOnWkshxE30thjRh1IoWGEAiUc1xaoYhKO2PnHVj+6AOuE0V9xUUOp6EyLqeAUIzUuLiqD7nR7kA2zZaWbIO/AgHR1cLrT+11UhjcKNHJJnZvkWY2lpxYFlxD6ReOgpvyczmASoaYKXOmXcUP6KjJVOjU2Ho3pkl3t8FQ5t1AsOhXFM7euteT/tEmD0/elF7puEDS/GTRtZIombd+eVsICR7mIgHIrYtaUn1FLOcYf6hcWNPzgRimqK1/Y4AuEOS7jeQtVCOsWdsvCZAOtZ9VxGLwv2qCM+cOwru6vqPsb6smKehI2p0fJinmY5GVMAjOhPeuoQV5AdPy7hn5ccL6+zk0wfjPMs2H+5e1g70O36i3ygrwkr0lO3pE98okckBHh5Bu5ItfkV/I5+Z7Mk8sba9Lrep6TW5X8/At5z+gE</latexit>...<latexit sha1_base64="aob6yfo7DjevWJ/4C7HjbKaeLdU=">AAACyXicbVHLbhMxFHWGVwmvtizZjIiQWEUzCAmWFWyQkKpSkbRqZlTZnpvU1I/BvlOSWl7xA92Wf+CD+Bs86SClCVeydHzOubrHvqyWwmGW/ekld+7eu/9g62H/0eMnT59t7+yOnWkshxE30thjRh1IoWGEAiUc1xaoYhKO2PnHVj+6AOuE0V9xUUOp6EyLqeAUIzUuLiqD7nR7kA2zZaWbIO/AgHR1cLrT+11UhjcKNHJJnZvkWY2lpxYFlxD6ReOgpvyczmASoaYKXOmXcUP6KjJVOjU2Ho3pkl3t8FQ5t1AsOhXFM7euteT/tEmD0/elF7puEDS/GTRtZIombd+eVsICR7mIgHIrYtaUn1FLOcYf6hcWNPzgRimqK1/Y4AuEOS7jeQtVCOsWdsvCZAOtZ9VxGLwv2qCM+cOwru6vqPsb6smKehI2p0fJinmY5GVMAjOhPeuoQV5AdPy7hn5ccL6+zk0wfjPMs2H+5e1g70O36i3ygrwkr0lO3pE98okckBHh5Bu5ItfkV/I5+Z7Mk8sba9Lrep6TW5X8/At5z+gE</latexit><latexit sha1_base64="aob6yfo7DjevWJ/4C7HjbKaeLdU=">AAACyXicbVHLbhMxFHWGVwmvtizZjIiQWEUzCAmWFWyQkKpSkbRqZlTZnpvU1I/BvlOSWl7xA92Wf+CD+Bs86SClCVeydHzOubrHvqyWwmGW/ekld+7eu/9g62H/0eMnT59t7+yOnWkshxE30thjRh1IoWGEAiUc1xaoYhKO2PnHVj+6AOuE0V9xUUOp6EyLqeAUIzUuLiqD7nR7kA2zZaWbIO/AgHR1cLrT+11UhjcKNHJJnZvkWY2lpxYFlxD6ReOgpvyczmASoaYKXOmXcUP6KjJVOjU2Ho3pkl3t8FQ5t1AsOhXFM7euteT/tEmD0/elF7puEDS/GTRtZIombd+eVsICR7mIgHIrYtaUn1FLOcYf6hcWNPzgRimqK1/Y4AuEOS7jeQtVCOsWdsvCZAOtZ9VxGLwv2qCM+cOwru6vqPsb6smKehI2p0fJinmY5GVMAjOhPeuoQV5AdPy7hn5ccL6+zk0wfjPMs2H+5e1g70O36i3ygrwkr0lO3pE98okckBHh5Bu5ItfkV/I5+Z7Mk8sba9Lrep6TW5X8/At5z+gE</latexit><latexit 
sha1_base64="aob6yfo7DjevWJ/4C7HjbKaeLdU=">AAACyXicbVHLbhMxFHWGVwmvtizZjIiQWEUzCAmWFWyQkKpSkbRqZlTZnpvU1I/BvlOSWl7xA92Wf+CD+Bs86SClCVeydHzOubrHvqyWwmGW/ekld+7eu/9g62H/0eMnT59t7+yOnWkshxE30thjRh1IoWGEAiUc1xaoYhKO2PnHVj+6AOuE0V9xUUOp6EyLqeAUIzUuLiqD7nR7kA2zZaWbIO/AgHR1cLrT+11UhjcKNHJJnZvkWY2lpxYFlxD6ReOgpvyczmASoaYKXOmXcUP6KjJVOjU2Ho3pkl3t8FQ5t1AsOhXFM7euteT/tEmD0/elF7puEDS/GTRtZIombd+eVsICR7mIgHIrYtaUn1FLOcYf6hcWNPzgRimqK1/Y4AuEOS7jeQtVCOsWdsvCZAOtZ9VxGLwv2qCM+cOwru6vqPsb6smKehI2p0fJinmY5GVMAjOhPeuoQV5AdPy7hn5ccL6+zk0wfjPMs2H+5e1g70O36i3ygrwkr0lO3pE98okckBHh5Bu5ItfkV/I5+Z7Mk8sba9Lrep6TW5X8/At5z+gE</latexit><latexit sha1_base64="aob6yfo7DjevWJ/4C7HjbKaeLdU=">AAACyXicbVHLbhMxFHWGVwmvtizZjIiQWEUzCAmWFWyQkKpSkbRqZlTZnpvU1I/BvlOSWl7xA92Wf+CD+Bs86SClCVeydHzOubrHvqyWwmGW/ekld+7eu/9g62H/0eMnT59t7+yOnWkshxE30thjRh1IoWGEAiUc1xaoYhKO2PnHVj+6AOuE0V9xUUOp6EyLqeAUIzUuLiqD7nR7kA2zZaWbIO/AgHR1cLrT+11UhjcKNHJJnZvkWY2lpxYFlxD6ReOgpvyczmASoaYKXOmXcUP6KjJVOjU2Ho3pkl3t8FQ5t1AsOhXFM7euteT/tEmD0/elF7puEDS/GTRtZIombd+eVsICR7mIgHIrYtaUn1FLOcYf6hcWNPzgRimqK1/Y4AuEOS7jeQtVCOsWdsvCZAOtZ9VxGLwv2qCM+cOwru6vqPsb6smKehI2p0fJinmY5GVMAjOhPeuoQV5AdPy7hn5ccL6+zk0wfjPMs2H+5e1g70O36i3ygrwkr0lO3pE98okckBHh5Bu5ItfkV/I5+Z7Mk8sba9Lrep6TW5X8/At5z+gE</latexit>...<latexit sha1_base64="aob6yfo7DjevWJ/4C7HjbKaeLdU=">AAACyXicbVHLbhMxFHWGVwmvtizZjIiQWEUzCAmWFWyQkKpSkbRqZlTZnpvU1I/BvlOSWl7xA92Wf+CD+Bs86SClCVeydHzOubrHvqyWwmGW/ekld+7eu/9g62H/0eMnT59t7+yOnWkshxE30thjRh1IoWGEAiUc1xaoYhKO2PnHVj+6AOuE0V9xUUOp6EyLqeAUIzUuLiqD7nR7kA2zZaWbIO/AgHR1cLrT+11UhjcKNHJJnZvkWY2lpxYFlxD6ReOgpvyczmASoaYKXOmXcUP6KjJVOjU2Ho3pkl3t8FQ5t1AsOhXFM7euteT/tEmD0/elF7puEDS/GTRtZIombd+eVsICR7mIgHIrYtaUn1FLOcYf6hcWNPzgRimqK1/Y4AuEOS7jeQtVCOsWdsvCZAOtZ9VxGLwv2qCM+cOwru6vqPsb6smKehI2p0fJinmY5GVMAjOhPeuoQV5AdPy7hn5ccL6+zk0wfjPMs2H+5e1g70O36i3ygrwkr0lO3pE98okckBHh5Bu5ItfkV/I5+Z7Mk8sba9Lrep6TW5X8/At5z+gE</latexit><latexit 
sha1_base64="aob6yfo7DjevWJ/4C7HjbKaeLdU=">AAACyXicbVHLbhMxFHWGVwmvtizZjIiQWEUzCAmWFWyQkKpSkbRqZlTZnpvU1I/BvlOSWl7xA92Wf+CD+Bs86SClCVeydHzOubrHvqyWwmGW/ekld+7eu/9g62H/0eMnT59t7+yOnWkshxE30thjRh1IoWGEAiUc1xaoYhKO2PnHVj+6AOuE0V9xUUOp6EyLqeAUIzUuLiqD7nR7kA2zZaWbIO/AgHR1cLrT+11UhjcKNHJJnZvkWY2lpxYFlxD6ReOgpvyczmASoaYKXOmXcUP6KjJVOjU2Ho3pkl3t8FQ5t1AsOhXFM7euteT/tEmD0/elF7puEDS/GTRtZIombd+eVsICR7mIgHIrYtaUn1FLOcYf6hcWNPzgRimqK1/Y4AuEOS7jeQtVCOsWdsvCZAOtZ9VxGLwv2qCM+cOwru6vqPsb6smKehI2p0fJinmY5GVMAjOhPeuoQV5AdPy7hn5ccL6+zk0wfjPMs2H+5e1g70O36i3ygrwkr0lO3pE98okckBHh5Bu5ItfkV/I5+Z7Mk8sba9Lrep6TW5X8/At5z+gE</latexit><latexit sha1_base64="aob6yfo7DjevWJ/4C7HjbKaeLdU=">AAACyXicbVHLbhMxFHWGVwmvtizZjIiQWEUzCAmWFWyQkKpSkbRqZlTZnpvU1I/BvlOSWl7xA92Wf+CD+Bs86SClCVeydHzOubrHvqyWwmGW/ekld+7eu/9g62H/0eMnT59t7+yOnWkshxE30thjRh1IoWGEAiUc1xaoYhKO2PnHVj+6AOuE0V9xUUOp6EyLqeAUIzUuLiqD7nR7kA2zZaWbIO/AgHR1cLrT+11UhjcKNHJJnZvkWY2lpxYFlxD6ReOgpvyczmASoaYKXOmXcUP6KjJVOjU2Ho3pkl3t8FQ5t1AsOhXFM7euteT/tEmD0/elF7puEDS/GTRtZIombd+eVsICR7mIgHIrYtaUn1FLOcYf6hcWNPzgRimqK1/Y4AuEOS7jeQtVCOsWdsvCZAOtZ9VxGLwv2qCM+cOwru6vqPsb6smKehI2p0fJinmY5GVMAjOhPeuoQV5AdPy7hn5ccL6+zk0wfjPMs2H+5e1g70O36i3ygrwkr0lO3pE98okckBHh5Bu5ItfkV/I5+Z7Mk8sba9Lrep6TW5X8/At5z+gE</latexit><latexit sha1_base64="aob6yfo7DjevWJ/4C7HjbKaeLdU=">AAACyXicbVHLbhMxFHWGVwmvtizZjIiQWEUzCAmWFWyQkKpSkbRqZlTZnpvU1I/BvlOSWl7xA92Wf+CD+Bs86SClCVeydHzOubrHvqyWwmGW/ekld+7eu/9g62H/0eMnT59t7+yOnWkshxE30thjRh1IoWGEAiUc1xaoYhKO2PnHVj+6AOuE0V9xUUOp6EyLqeAUIzUuLiqD7nR7kA2zZaWbIO/AgHR1cLrT+11UhjcKNHJJnZvkWY2lpxYFlxD6ReOgpvyczmASoaYKXOmXcUP6KjJVOjU2Ho3pkl3t8FQ5t1AsOhXFM7euteT/tEmD0/elF7puEDS/GTRtZIombd+eVsICR7mIgHIrYtaUn1FLOcYf6hcWNPzgRimqK1/Y4AuEOS7jeQtVCOsWdsvCZAOtZ9VxGLwv2qCM+cOwru6vqPsb6smKehI2p0fJinmY5GVMAjOhPeuoQV5AdPy7hn5ccL6+zk0wfjPMs2H+5e1g70O36i3ygrwkr0lO3pE98okckBHh5Bu5ItfkV/I5+Z7Mk8sba9Lrep6TW5X8/At5z+gE</latexit>

r(x(t))<latexit sha1_base64="MZ6ZPCdg2iygs2vWsJqimN2pCc0=">AAAB8XicbVDLSgNBEOz1GeMr6tHLYBCSS9gVQY9BLx4jmAcma5idTJIhs7PLTK8YlvyFFw+KePVvvPk3TpI9aGJBQ1HVTXdXEEth0HW/nZXVtfWNzdxWfntnd2+/cHDYMFGiGa+zSEa6FVDDpVC8jgIlb8Wa0zCQvBmMrqd+85FrIyJ1h+OY+yEdKNEXjKKV7nXp6SEtYXlS7haKbsWdgSwTLyNFyFDrFr46vYglIVfIJDWm7bkx+inVKJjkk3wnMTymbEQHvG2poiE3fjq7eEJOrdIj/UjbUkhm6u+JlIbGjMPAdoYUh2bRm4r/ee0E+5d+KlScIFdsvqifSIIRmb5PekJzhnJsCWVa2FsJG1JNGdqQ8jYEb/HlZdI4q3huxbs9L1avsjhycAwnUAIPLqAKN1CDOjBQ8Ayv8OYY58V5dz7mrStONnMEf+B8/gCbR5A0</latexit><latexit sha1_base64="MZ6ZPCdg2iygs2vWsJqimN2pCc0=">AAAB8XicbVDLSgNBEOz1GeMr6tHLYBCSS9gVQY9BLx4jmAcma5idTJIhs7PLTK8YlvyFFw+KePVvvPk3TpI9aGJBQ1HVTXdXEEth0HW/nZXVtfWNzdxWfntnd2+/cHDYMFGiGa+zSEa6FVDDpVC8jgIlb8Wa0zCQvBmMrqd+85FrIyJ1h+OY+yEdKNEXjKKV7nXp6SEtYXlS7haKbsWdgSwTLyNFyFDrFr46vYglIVfIJDWm7bkx+inVKJjkk3wnMTymbEQHvG2poiE3fjq7eEJOrdIj/UjbUkhm6u+JlIbGjMPAdoYUh2bRm4r/ee0E+5d+KlScIFdsvqifSIIRmb5PekJzhnJsCWVa2FsJG1JNGdqQ8jYEb/HlZdI4q3huxbs9L1avsjhycAwnUAIPLqAKN1CDOjBQ8Ayv8OYY58V5dz7mrStONnMEf+B8/gCbR5A0</latexit><latexit sha1_base64="MZ6ZPCdg2iygs2vWsJqimN2pCc0=">AAAB8XicbVDLSgNBEOz1GeMr6tHLYBCSS9gVQY9BLx4jmAcma5idTJIhs7PLTK8YlvyFFw+KePVvvPk3TpI9aGJBQ1HVTXdXEEth0HW/nZXVtfWNzdxWfntnd2+/cHDYMFGiGa+zSEa6FVDDpVC8jgIlb8Wa0zCQvBmMrqd+85FrIyJ1h+OY+yEdKNEXjKKV7nXp6SEtYXlS7haKbsWdgSwTLyNFyFDrFr46vYglIVfIJDWm7bkx+inVKJjkk3wnMTymbEQHvG2poiE3fjq7eEJOrdIj/UjbUkhm6u+JlIbGjMPAdoYUh2bRm4r/ee0E+5d+KlScIFdsvqifSIIRmb5PekJzhnJsCWVa2FsJG1JNGdqQ8jYEb/HlZdI4q3huxbs9L1avsjhycAwnUAIPLqAKN1CDOjBQ8Ayv8OYY58V5dz7mrStONnMEf+B8/gCbR5A0</latexit><latexit 
sha1_base64="MZ6ZPCdg2iygs2vWsJqimN2pCc0=">AAAB8XicbVDLSgNBEOz1GeMr6tHLYBCSS9gVQY9BLx4jmAcma5idTJIhs7PLTK8YlvyFFw+KePVvvPk3TpI9aGJBQ1HVTXdXEEth0HW/nZXVtfWNzdxWfntnd2+/cHDYMFGiGa+zSEa6FVDDpVC8jgIlb8Wa0zCQvBmMrqd+85FrIyJ1h+OY+yEdKNEXjKKV7nXp6SEtYXlS7haKbsWdgSwTLyNFyFDrFr46vYglIVfIJDWm7bkx+inVKJjkk3wnMTymbEQHvG2poiE3fjq7eEJOrdIj/UjbUkhm6u+JlIbGjMPAdoYUh2bRm4r/ee0E+5d+KlScIFdsvqifSIIRmb5PekJzhnJsCWVa2FsJG1JNGdqQ8jYEb/HlZdI4q3huxbs9L1avsjhycAwnUAIPLqAKN1CDOjBQ8Ayv8OYY58V5dz7mrStONnMEf+B8/gCbR5A0</latexit>

d(x(t))<latexit sha1_base64="+7/Frd8U+ZDwUOvehmGeKHfV8wQ=">AAAB8XicbVBNS8NAEJ34WetX1aOXYBHaS0lE0GPRi8cK9gPbWDabTbt0swm7E7GE/gsvHhTx6r/x5r9x2+agrQ8GHu/NMDPPTwTX6Djf1srq2vrGZmGruL2zu7dfOjhs6ThVlDVpLGLV8YlmgkvWRI6CdRLFSOQL1vZH11O//ciU5rG8w3HCvIgMJA85JWik+6Dy9JBVsDqp9ktlp+bMYC8TNydlyNHol756QUzTiEmkgmjddZ0EvYwo5FSwSbGXapYQOiID1jVUkohpL5tdPLFPjRLYYaxMSbRn6u+JjERajyPfdEYEh3rRm4r/ed0Uw0sv4zJJkUk6XxSmwsbYnr5vB1wximJsCKGKm1ttOiSKUDQhFU0I7uLLy6R1VnOdmnt7Xq5f5XEU4BhOoAIuXEAdbqABTaAg4Rle4c3S1ov1bn3MW1esfOYI/sD6/AGFkZAm</latexit><latexit sha1_base64="+7/Frd8U+ZDwUOvehmGeKHfV8wQ=">AAAB8XicbVBNS8NAEJ34WetX1aOXYBHaS0lE0GPRi8cK9gPbWDabTbt0swm7E7GE/gsvHhTx6r/x5r9x2+agrQ8GHu/NMDPPTwTX6Djf1srq2vrGZmGruL2zu7dfOjhs6ThVlDVpLGLV8YlmgkvWRI6CdRLFSOQL1vZH11O//ciU5rG8w3HCvIgMJA85JWik+6Dy9JBVsDqp9ktlp+bMYC8TNydlyNHol756QUzTiEmkgmjddZ0EvYwo5FSwSbGXapYQOiID1jVUkohpL5tdPLFPjRLYYaxMSbRn6u+JjERajyPfdEYEh3rRm4r/ed0Uw0sv4zJJkUk6XxSmwsbYnr5vB1wximJsCKGKm1ttOiSKUDQhFU0I7uLLy6R1VnOdmnt7Xq5f5XEU4BhOoAIuXEAdbqABTaAg4Rle4c3S1ov1bn3MW1esfOYI/sD6/AGFkZAm</latexit><latexit sha1_base64="+7/Frd8U+ZDwUOvehmGeKHfV8wQ=">AAAB8XicbVBNS8NAEJ34WetX1aOXYBHaS0lE0GPRi8cK9gPbWDabTbt0swm7E7GE/gsvHhTx6r/x5r9x2+agrQ8GHu/NMDPPTwTX6Djf1srq2vrGZmGruL2zu7dfOjhs6ThVlDVpLGLV8YlmgkvWRI6CdRLFSOQL1vZH11O//ciU5rG8w3HCvIgMJA85JWik+6Dy9JBVsDqp9ktlp+bMYC8TNydlyNHol756QUzTiEmkgmjddZ0EvYwo5FSwSbGXapYQOiID1jVUkohpL5tdPLFPjRLYYaxMSbRn6u+JjERajyPfdEYEh3rRm4r/ed0Uw0sv4zJJkUk6XxSmwsbYnr5vB1wximJsCKGKm1ttOiSKUDQhFU0I7uLLy6R1VnOdmnt7Xq5f5XEU4BhOoAIuXEAdbqABTaAg4Rle4c3S1ov1bn3MW1esfOYI/sD6/AGFkZAm</latexit><latexit 
Figure 1: Pictorial representation of the problem setup under consideration.

linearizable, the state space model (1) corresponds to its feedback linearized dynamics. We denote by π(x) ∈ R^2 the natural projection of x onto the workspace W, i.e., π(x(t)) is the position of the robot at time t.

2.3 LiDAR Imaging

We consider the case when the autonomous robot uses a LiDAR scanner to sense its environment. The LiDAR scanner emits a set of N lasers evenly distributed in a 2π fan. We denote by θ_lidar^(t) ∈ R the heading angle of the LiDAR at time t. Similarly, we denote by θ_i^(t) = θ_lidar^(t) + (i − 1) 2π/N, with i ∈ {1, . . . , N}, the angle of the ith laser beam at time t, where θ_1^(t) = θ_lidar^(t), and by θ^(t) = (θ_1^(t), . . . , θ_N^(t)) the vector of the angles of all the laser beams. While the heading angle of the LiDAR, θ_lidar^(t), changes as the robot pose changes over time, i.e., θ_lidar^(t) = f(x(t)) for some nonlinear function f, in this paper we focus on the case when the heading angle of the LiDAR is fixed over time, and we drop the superscript t from the notation. This condition is satisfied in several real-world scenarios in which the robot moves while maintaining a fixed pose (e.g., a quadrotor whose yaw angle is kept constant).

For the ith laser beam, the observation signal r_i(x(t)) ∈ R is the distance measured between the robot position π(x(t)) and the nearest obstacle in the θ_i direction, i.e.:

r_i(x(t)) = min_{O_i ∈ O} min_{z ∈ O_i} ‖z − π(x(t))‖_2   s.t. atan2(z − π(x(t))) = θ_i.   (2)

In this paper, we restrict our attention to the case when the LiDAR scanner is ideal (with no noise), although the bounded-noise case can be incorporated in the proposed framework. The final LiDAR image d(x(t)) ∈ R^{2N} is generated by processing the observations r(x(t)) as follows:

d_i(x(t)) = (r_i(x(t)) cos θ_i, r_i(x(t)) sin θ_i),
d(x(t)) = (d_1(x(t)), . . . , d_N(x(t))).   (3)
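To make equations (2)-(3) concrete, the ideal imaging function can be sketched in a few lines of Python for obstacles represented as lists of edges. The helper names (`ray_segment_distance`, `lidar_image`) and the edge-list representation are illustrative choices, not the paper's implementation:

```python
import math

def ray_segment_distance(p, theta, a, b):
    """Distance from point p along heading theta to segment [a, b]; inf if the ray misses it."""
    dx, dy = math.cos(theta), math.sin(theta)
    ex, ey = b[0] - a[0], b[1] - a[1]
    det = ex * dy - ey * dx
    if abs(det) < 1e-12:                       # ray parallel to the segment
        return math.inf
    ax, ay = a[0] - p[0], a[1] - p[1]
    t = (ex * ay - ey * ax) / det              # distance traveled along the ray
    s = (dx * ay - dy * ax) / det              # position along the segment
    return t if t >= 0 and 0 <= s <= 1 else math.inf

def lidar_image(p, obstacle_edges, N, theta_lidar=0.0):
    """Ideal LiDAR image of equations (2)-(3): N beams, nearest hit per beam."""
    image = []
    for i in range(N):
        theta_i = theta_lidar + i * 2 * math.pi / N
        r_i = min(ray_segment_distance(p, theta_i, a, b)
                  for (a, b) in obstacle_edges)
        image.append((r_i * math.cos(theta_i), r_i * math.sin(theta_i)))
    return image
```

For a robot at the origin of a square workspace with side 4 and N = 4 beams, each beam measures distance 2 to the boundary, giving the processed image ((2,0), (0,2), (−2,0), (0,−2)) up to floating-point error.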

2.4 Neural Network Controller

We consider a pre-trained neural network controller f_NN : R^{2N} → R^m that processes the LiDAR images to produce control actions, with L internal, fully connected layers in addition to one output layer. Each layer l ∈ {1, . . . , L} contains a set of M_l neurons with Rectified Linear Unit (ReLU) activation functions. ReLU activation functions play an important role in the current advances in deep neural networks [45]. For such a neural network architecture, the neural network controller u(t) = f_NN(d(x(t))) can be written as:

h^1(t) = max(0, W^0 d(x(t)) + w^0),
h^2(t) = max(0, W^1 h^1(t) + w^1),
. . .
h^L(t) = max(0, W^{L−1} h^{L−1}(t) + w^{L−1}),
u(t) = W^L h^L(t) + w^L,   (4)

where W^l ∈ R^{M_l × M_{l−1}} and w^l ∈ R^{M_l} are the pre-trained weights and bias vectors of the neural network, which are determined during the training phase.
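The architecture in (4) can be sketched directly, using plain Python lists for the weight matrices (a minimal illustration; the paper's controllers are pre-trained networks of this shape):

```python
def relu_controller(d, weights, biases):
    """Evaluate u = f_NN(d) per equation (4): L hidden ReLU layers, affine output layer.
    weights/biases hold one (W, w) pair per hidden layer plus one for the output layer."""
    h = d
    for W, w in zip(weights[:-1], biases[:-1]):
        # hidden layers: h^l = max(0, W^{l-1} h^{l-1} + w^{l-1}), applied componentwise
        h = [max(0.0, sum(W_ij * h_j for W_ij, h_j in zip(row, h)) + w_i)
             for row, w_i in zip(W, w)]
    W_L, w_L = weights[-1], biases[-1]
    # output layer has no ReLU: u = W^L h^L + w^L
    return [sum(W_ij * h_j for W_ij, h_j in zip(row, h)) + w_i
            for row, w_i in zip(W_L, w_L)]
```

For example, with one hidden neuron computing max(0, d_1 − d_2) and output u = 2h + 1, the input d = (3, 1) yields u = 5.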

2.5 Robot Trajectories and Safety Specifications

The trajectory of the robot whose dynamics are described by (1), when controlled by the neural network controller (2)-(4) starting from the initial condition x_0 = x(0), is denoted by η_{x0} : N → R^n with η_{x0}(0) = x_0. A trajectory η_{x0} is said to be safe whenever the robot position does not collide with any of the obstacles at all times.

Definition 2.1 (Safe Trajectory). A robot trajectory η_{x0} is called safe if:

π(η_{x0}(t)) ∈ W,  π(η_{x0}(t)) ∉ O_i,  ∀O_i ∈ O, ∀t ∈ N.

Using the previous definition, we now define the problem of verifying the system-level safety of the neural network controlled system as follows:

Problem 2.2. Consider the autonomous robot whose dynamics are governed by (1), controlled by the neural network controller described by (4), which processes LiDAR images described by (2)-(3). Compute the set of safe initial conditions X_safe ⊆ X such that any trajectory η_{x0} starting from x_0 ∈ X_safe is safe.

3 FRAMEWORK

Before we describe the proposed framework, we briefly recall the following definitions capturing the notion of a system and relations between different systems.

Definition 3.1. An autonomous system S is a pair (X, γ) consisting of a set of states X and a set-valued map γ : X ⇉ X representing the transition function. A system S is finite if X is finite. A system S is deterministic if γ is a single-valued map, and non-deterministic otherwise.

Definition 3.2. Consider a deterministic system S_a = (X_a, γ_a) and a non-deterministic system S_b = (X_b, γ_b). A relation Q ⊆ X_a × X_b is a simulation relation from S_a to S_b, and we write S_a ⪯_Q S_b, if the following conditions are satisfied:

(1) for every x_a ∈ X_a there exists x_b ∈ X_b with (x_a, x_b) ∈ Q,


HSCC ’19, April 16–18, 2019, Montreal, QC, Canada X. Sun et al.

Figure 2: Pictorial representation of the proposed framework.

(2) for every (x_a, x_b) ∈ Q, x'_a = γ_a(x_a) in S_a implies the existence of x'_b ∈ γ_b(x_b) in S_b satisfying (x'_a, x'_b) ∈ Q.
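For finite systems, both conditions of Definition 3.2 can be checked by direct enumeration. A minimal sketch, assuming S_a is given as a dict of deterministic successors and S_b as a dict of successor sets (hypothetical representations, not the paper's data structures):

```python
def is_simulation_relation(Q, Xa, gamma_a, gamma_b):
    """Check Definition 3.2: Q is a set of pairs (xa, xb); gamma_a maps each state of the
    deterministic system Sa to its successor; gamma_b maps each state of the
    non-deterministic system Sb to a set of successors."""
    # condition (1): every state of Sa is related to some state of Sb
    if not all(any(xa == ya for (ya, _) in Q) for xa in Xa):
        return False
    # condition (2): the successor of xa must be matched by some successor of xb
    for (xa, xb) in Q:
        xa_next = gamma_a[xa]
        if not any((xa_next, xb_next) in Q for xb_next in gamma_b[xb]):
            return False
    return True
```

For instance, with S_a stepping 0 → 1 → 1 and S_b stepping 'a' → 'b' → 'b', the relation {(0,'a'), (1,'b')} is a simulation relation, while {(0,'a'), (1,'a')} is not.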

Using the previous two definitions, we describe our approach as follows. As pictorially shown in Figure 2, given the autonomous robot system S_NN = (X, γ_NN), where γ_NN : x ↦ Ax + B f_NN(d(x)), our objective is to compute a finite state abstraction (possibly non-deterministic) S_F = (F, γ_F) of S_NN such that there exists a simulation relation from S_NN to S_F, i.e., S_NN ⪯_Q S_F. This finite state abstraction S_F will then be used to check the safety specification.

The first difficulty in computing the finite state abstraction S_F is the nonlinearity of the relation between the robot position π(x) and the LiDAR observations, as captured by equation (2). However, we notice that we can partition the workspace based on the laser angles θ_1, . . . , θ_N along with the vertices of the polytopic obstacles such that the map d (defined in equation (3), which maps the robot position to the processed observations) is an affine map, as shown in Section 4. Therefore, as summarized in Algorithm 1, the first step is to compute such a partitioning W* of the workspace (WKSP-PARTITION, line 2 in Algorithm 1). While WKSP-PARTITION focuses on partitioning the workspace W, one needs to partition the remainder of the state space X (STATE-SPACE-PARTITION, line 5 in Algorithm 1) to compute the finite set of abstract states F along with the simulation relation Q that maps between states in X and the corresponding abstract states in F, and vice versa.

Unfortunately, the number of partitions grows exponentially in the number of lasers N and the number of vertices of the polytopic obstacles. To harness this exponential growth, we compute an aggregate partitioning W' using only a few laser angles (called primary lasers and denoted by θ_p). The resulting aggregate partitioning W' contains a smaller number of partitions such that each partition in W' represents multiple partitions in W*. Similarly, we can compute a corresponding aggregate set of states F' as:

s' = {s ∈ F | ∃x ∈ w', w' ∈ W', (x, s) ∈ Q},

where each aggregate state s' is a set representing multiple states in F. Whenever possible, we carry out our analysis using the aggregate partitioning W' (and F') and use the fine partitioning W* only if deemed necessary. Details of the workspace partitioning and the computation of the corresponding affine maps representing the LiDAR imaging function are given in Section 4.

The state transition map γ_F is computed as follows. First, we assume a transition exists between any two states s and s' in F (lines 6-7 in Algorithm 1). Next, we start eliminating unnecessary transitions. We observe that regions in the workspace that are adjacent or within some vicinity of each other are more likely to force the need for transitions between their corresponding abstract states. Similarly, regions in the workspace that are far from each other are more likely to prohibit transitions between their corresponding abstract

Algorithm 1 VERIFY-NN(X, γ_NN)
1: Step 1: Partition the workspace
2:   (W*, W') = WKSP-PARTITION(W, O, θ, θ_p)
3: Step 2: Compute the finite state abstraction S_F
4:   Step 2.1: Compute the states of S_F
5:   (F, F', Q) = STATE-SPACE-PARTITION(W*, W')
6:   for each s and s' in F do
7:     γ_F.ADD-TRANSITION(s, s')
8:   Step 2.2: Pre-process the neural network
9:   for each s in F do
10:    X_s = {x ∈ X | (x, s) ∈ Q}
11:    CE_s = PRE-PROCESS(X_s, γ_NN)
12:  Step 2.3: Compute the transition map γ_F
13:  for each s in F and s' in F' where s ∉ s' do
14:    X_s = {x ∈ X | (x, s) ∈ Q}
15:    X_{s'} = {x ∈ X | (x, s*) ∈ Q, ∀s* ∈ s'}
16:    STATUS = CHECK-FEASIBILITY(X_s, X_{s'}, γ_NN, CE_s)
17:    if STATUS == INFEASIBLE then
18:      for each s* in s' do
19:        γ_F.REMOVE-TRANSITION(s, s*)
20:    else
21:      for each s* in s' do
22:        X_{s*} = {x ∈ X | (x, s*) ∈ Q}
23:        STATUS = CHECK-FEASIBILITY(X_s, X_{s*}, γ_NN, CE_s)
24:        if STATUS == INFEASIBLE then
25:          γ_F.REMOVE-TRANSITION(s, s*)
26: Step 3: Compute the safe set
27:  Step 3.1: Mark the abstract states corresponding to obstacles and the workspace boundary as unsafe:
     F^0_unsafe = {s ∈ F | ∃x ∈ X : (x, s) ∈ Q, π(x) ∈ O_i, O_i ∈ O}
28:  Step 3.2: Iteratively compute the predecessors of the abstract unsafe states
29:  STATUS = FIXED-POINT-NOT-REACHED
30:  while STATUS == FIXED-POINT-NOT-REACHED do
31:    F^k_unsafe = F^{k−1}_unsafe ∪ PRE(F^{k−1}_unsafe)
32:    if F^k_unsafe == F^{k−1}_unsafe then
33:      STATUS = FIXED-POINT-REACHED
34:  F_safe = F \ F_unsafe
35:  Step 3.3: Compute the set of safe states
36:  X_safe = {x ∈ X | ∃s ∈ F_safe : (x, s) ∈ Q}
37: Return X_safe

states. Therefore, in an attempt to reduce the number of computational steps in our algorithm, we check the transition feasibility between a state s ∈ F and an aggregate state s' ∈ F'. If our algorithm (CHECK-FEASIBILITY) asserts that the neural network γ_NN prohibits the robot from transitioning between the regions corresponding to s and s' (denoted by X_s and X_{s'}, respectively), then we conclude that no transition in γ_F is feasible between the abstract state s and all the abstract states s* in s' (lines 13-19 in


Algorithm 1). This leads to a reduction in the number of state pairs that need to be checked for transition feasibility. Conversely, if our algorithm (CHECK-FEASIBILITY) asserts that the neural network γ_NN allows for a transition between the regions corresponding to s and s', then we proceed by checking the transition feasibility between the state s and all the states s* contained in the aggregate state s' (lines 21-25 in Algorithm 1).

Checking the transition feasibility (CHECK-FEASIBILITY) between two abstract states entails reasoning about the robot dynamics and the neural network, along with the affine map representing the LiDAR imaging computed from the previous workspace partitioning. While the robot dynamics is assumed linear and the imaging function is affine, the technical difficulty lies in reasoning about the behavior of the neural network controller. Thanks to the ReLU activation functions in the neural network, we can encode the problem of checking the transition feasibility between two regions as a formula φ, called a monotone Satisfiability Modulo Convex (SMC) formula [46, 47], over Boolean and convex constraints representing, respectively, the ReLU phases and the dynamics, the neural network weights, and the imaging constraints. In addition to using the SMC solver to check the transition feasibility (CHECK-FEASIBILITY) between abstract states, it is also used to perform some pre-processing of the neural network function γ_NN (lines 9-11 in Algorithm 1), which speeds up the process of checking the transition feasibility. Details of the SMC encoding and the strategy to check transition feasibility (CHECK-FEASIBILITY) are given in Section 5.

Once the finite state abstraction S_F and the simulation relation Q are computed, the next step is to partition the finite states F into a set of unsafe states F_unsafe and a set of safe states F_safe using the following fixed-point computation:

F^k_unsafe = {s ∈ F | ∃x ∈ X : (x, s) ∈ Q, π(x) ∈ O_i, O_i ∈ O}   if k = 0,
F^k_unsafe = F^{k−1}_unsafe ∪ ⋃_{s ∈ F^{k−1}_unsafe} PRE(s)   if k > 0,

F_unsafe = lim_{k→∞} F^k_unsafe,   F_safe = F \ F_unsafe,

where F^0_unsafe represents the abstract states corresponding to the obstacles and workspace boundaries, F^k_unsafe with k > 0 represents all the states that can reach F^0_unsafe in k steps, and PRE(s) is defined as:

PRE(s) = {s' ∈ F | s ∈ γ_F(s')}.

The remaining abstract states are then marked as the set of safe states F_safe. Finally, we can compute the set of safe states X_safe as:

X_safe = {x ∈ X | ∃s ∈ F_safe : (x, s) ∈ Q}.

These computations are summarized in lines 27-36 in Algorithm 1.
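The fixed-point computation above amounts to a backward reachability pass over the finite transition graph of S_F. A minimal sketch over explicit sets (here `transitions` is assumed given; in the paper it is produced by the SMC procedure of Section 5):

```python
def compute_safe_states(F, transitions, unsafe0):
    """Grow the unsafe set with predecessors until a fixed point, then return F_safe.
    F: iterable of abstract states; transitions: dict state -> set of successor states;
    unsafe0: the initial unsafe states F^0_unsafe."""
    # PRE(s) = states with a transition into s, precomputed by inverting the graph
    pre = {s: set() for s in F}
    for s, succs in transitions.items():
        for s2 in succs:
            pre[s2].add(s)
    unsafe = set(unsafe0)
    frontier = set(unsafe0)
    while frontier:  # F^k = F^{k-1} ∪ PRE(F^{k-1}) until nothing new is added
        frontier = {p for s in frontier for p in pre[s]} - unsafe
        unsafe |= frontier
    return set(F) - unsafe  # F_safe = F \ F_unsafe
```

For example, with transitions 1→2, 2→2, 3→3, 4→3 and initial unsafe set {3}, state 4 becomes unsafe (it reaches 3 in one step) while {1, 2} remain safe.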

4 IMAGING-ADAPTED WORKSPACE PARTITIONING

We start by introducing the notation for the important geometric objects. We denote by Ray(w, θ) the ray originating from a point w ∈ W in the direction θ, i.e.:

Ray(w, θ) = {w' ∈ W | atan2(w' − w) = θ}.

Similarly, we denote by Line(w_1, w_2) the line segment between the points w_1 and w_2, i.e.:

Line(w_1, w_2) = {w' ∈ W | w' = λw_1 + (1 − λ)w_2, 0 ≤ λ ≤ 1}.

For a convex polytope P ⊆ W, we denote by Vert(P) its set of vertices and by Edge(P) its set of line segments representing the edges of the polytope.

4.1 Imaging-Adapted Partitions

The basic idea behind our algorithm is to partition the workspace into a set of polytopic sets (or regions) such that, for each region R, each LiDAR ray intersects the same obstacle/workspace edge regardless of the robot position π(x) ∈ R. To formally characterize this property, let O* = ∪_{O_i ∈ O} O_i be the set of all points in the workspace in which an obstacle or workspace boundary exists. Consider a workspace partition R ⊆ W and a robot position π(x) that lies inside this partition, i.e., π(x) ∈ R. The intersection between the kth LiDAR laser beam Ray(π(x), θ_k) and O* is a unique point characterized as:

z_{k,π(x)} = argmin_{z ∈ W} ‖z − π(x)‖_2   s.t. z ∈ Ray(π(x), θ_k) ∩ O*.   (5)

By sweeping π(x) across the whole region R, we can characterize the set of all possible intersection points as:

L_k(R) = ∪_{π(x) ∈ R} z_{k,π(x)}.   (6)

Using the set L_k(R) described above, we define the notion of imaging-adapted partitions as follows.

Definition 4.1. A set R ⊂ W is said to be an imaging-adapted partition if the following property holds:

L_k(R) is a line segment ∀k ∈ {1, . . . , N}.   (7)

Figure 3 shows concrete examples of imaging-adapted partitions. Imaging-adapted partitions enjoy the following property:

Lemma 4.2. Consider an imaging-adapted partition R with corresponding sets L_1(R), . . . , L_N(R). The LiDAR imaging function d : R → R^{2N} is an affine function of the form:

d_k(π(x)) = P_{k,R} π(x) + Q_{k,R},   d = (d_1, . . . , d_N)   (8)

for some constant matrices P_{k,R} and vectors Q_{k,R} that depend on the region R and the LiDAR angle θ_k.

4.2 Partitioning the Workspace

Motivated by Lemma 4.2, our objective is to design an algorithm that partitions the workspace W into a set of imaging-adapted partitions. As summarized in Algorithm 2, our algorithm starts by computing a set of line segments G that will be used to partition the workspace (lines 1-5 in Algorithm 2). This set of line segments G is computed as follows. First, we define the set V as the one that contains all the vertices of the workspace and the obstacles, i.e., V = ∪_{O_i ∈ O} Vert(O_i). Next, we consider rays originating from all the vertices in V and pointing in the opposite directions of the angles θ_1, . . . , θ_N. By intersecting these rays with the obstacles and picking the closest intersection points, we obtain the line segments


Figure 3: (left-up) A partitioning of the workspace that is not imaging-adapted. Within region R_1, the LiDAR ray (cyan arrow) intersects different obstacle edges depending on the robot position. (left-down) A partitioning of the workspace that is imaging-adapted. For both regions R_1 and R_2, the LiDAR ray (cyan arrow) intersects the same obstacle edge regardless of the robot position. (right) Imaging-adapted partitioning of the workspace used in Section 6.

G that will be used to partition the workspace. In other words, G is computed as:

G_k = {Line(v, z) | v ∈ V, z = argmin_{z ∈ Ray(v, θ_k + π) ∩ O*} ‖z − v‖_2},
G = ∪_{k=1}^{N} G_k.   (9)

Thanks to the fact that the vertices v are fixed, finding the intersection between Ray(v, θ_k + π) and O* is a standard ray-polytope intersection problem which can be solved efficiently [48].

The next step is to compute the intersection points P between the line segments G and the edges of the obstacles E = ∪_{O_i ∈ O} Edge(O_i).

A naive approach would be to consider all pairs of line segments in G ∪ E and test them for intersection. Such an approach is combinatorial, requiring an intersection test for every pair of segments, and becomes expensive as the number of laser angles and obstacle vertices grows. Thanks to advances in computational geometry, such intersection points can be computed efficiently using the plane-sweep algorithm [48]. The plane-sweep algorithm simulates the process of sweeping a line downwards over the plane. The order of the line segments G ∪ E from left to right as they intersect the sweep line is stored in a data structure called the sweep-line status. Only segments that are adjacent in this horizontal ordering need to be tested for intersection. Though the sweeping process can be visualized as continuous, the plane-sweep algorithm sweeps only the endpoints of the segments in G ∪ E, which are given beforehand, and the intersection points, which are computed on the fly. To keep track of the endpoints of the segments in G ∪ E and the intersection points, we use a balanced binary search tree as the data structure to support insertion, deletion, and searching in O(log n) time, where n is the number of elements in the data structure.
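For contrast with the plane-sweep, the naive all-pairs baseline mentioned above can be sketched directly (illustrative only; this sketch treats parallel and collinear segments as non-intersecting, which a production implementation would handle separately):

```python
from itertools import combinations

def segment_intersection(s1, s2):
    """Return the intersection point of two segments, or None if they do not cross."""
    (p, q), (a, b) = s1, s2
    dx, dy = q[0] - p[0], q[1] - p[1]
    ex, ey = b[0] - a[0], b[1] - a[1]
    det = ex * dy - ey * dx
    if abs(det) < 1e-12:            # parallel or collinear: reported as no intersection
        return None
    ax, ay = a[0] - p[0], a[1] - p[1]
    t = (ex * ay - ey * ax) / det   # parameter along s1
    s = (dx * ay - dy * ax) / det   # parameter along s2
    if 0 <= t <= 1 and 0 <= s <= 1:
        return (p[0] + t * dx, p[1] + t * dy)
    return None

def all_intersections(segments):
    """Naive baseline testing every pair; the plane-sweep avoids most of these tests."""
    points = []
    for s1, s2 in combinations(segments, 2):
        pt = segment_intersection(s1, s2)
        if pt is not None:
            points.append(pt)
    return points
```

For example, the two diagonals of the unit-2 square, ((0,0),(2,2)) and ((0,2),(2,0)), intersect at (1, 1).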

The final step is to use the line segments G ∪ E and their intersection points P, discovered by the plane-sweep algorithm, to compute the workspace partitions. To that end, consider the undirected planar graph, denoted by Graph(P, G ∪ E), whose vertices are the intersection points P and whose edges are G ∪ E. The workspace partitions are equivalent to subgraphs of Graph(P, G ∪ E) such that each subgraph contains only one simple cycle¹. To find these simple cycles, we use a modified depth-first search that starts from a vertex in the planar graph and traverses the graph by taking the rightmost turn at each vertex. Finally, the workspace partitions are computed as the convex hulls of the vertices in the computed simple cycles. It follows directly from the fact that each region is constructed from the vertices of a simple cycle that no line segment in G ∪ E intersects the interior of any region, i.e., for any workspace partition R, the following holds:

int(R) ∩ e = ∅   ∀e ∈ G ∪ E.   (10)

This process is summarized in lines 8-16 of Algorithm 2. An important property of the regions determined by Algorithm 2 is stated by the following proposition.

Proposition 4.3. Consider a workspace partition R that is computed by Algorithm 2 and satisfies (10). The following property holds for any LiDAR ray with angle θ_k:

∃e ∈ E such that L_k(R) ⊆ e,

where L_k(R) is defined in (6).

We conclude this section by stating our first main result, quantifying the correctness and complexity of Algorithm 2.

Theorem 4.4. Given a workspace with polytopic obstacles and a set of laser angles θ_1, . . . , θ_N, Algorithm 2 computes the partitioning R_1, . . . , R_r such that:

(1) W = ∪_{i=1}^{r} R_i,
(2) R_i is an imaging-adapted partition ∀i = 1, . . . , r,
(3) d : R_i → R^{2N} is affine ∀i = 1, . . . , r.

Moreover, the time complexity of Algorithm 2 is O(M log M + I log M), where M = |G ∪ E| is the cardinality of G ∪ E and I is the number of intersection points between segments in G ∪ E.

5 COMPUTING THE FINITE STATE ABSTRACTION

Once the workspace is partitioned into imaging-adapted partitions W* = {R_1, . . . , R_r} and the corresponding imaging function is identified, the next step is to compute the finite state transition abstraction S_F = (F, γ_F) of the closed-loop system along with the simulation relation Q. The first step is to define the state space F and its relation to X. To that end, we start by computing a partitioning of the state space X that respects W*. For the sake of simplicity, we consider X ⊂ R^n that is an n-orthotope, i.e., there exist constants x̲_i, x̄_i ∈ R, i = 1, . . . , n such that:

X = {x ∈ R^n | x̲_i ≤ x_i < x̄_i, i = 1, . . . , n}.

¹A cycle in an undirected graph is called simple if it contains no repeated vertices or edges other than the starting and ending vertex.


Algorithm 2 WKSP-PARTITION(W, O, θ, θ_p)
1: Step 1: Generate partition segments
2:   O* = ∪_{O_i ∈ O} O_i,  V = ∪_{O_i ∈ O} Vert(O_i),  E = ∪_{O_i ∈ O} Edge(O_i)
3:   for k ∈ {1, . . . , N} do
4:     Use a ray-polygon intersection algorithm to compute:
       G_k = {Line(v, z) | v ∈ V, z = argmin_{z ∈ Ray(v, θ_k + π) ∩ O*} ‖z − v‖_2}
5:   G = ∪_{k ∈ θ} G_k,  G' = ∪_{k ∈ θ_p} G_k
6: Step 2: Compute intersection points
7:   P = PLANE-SWEEP(G ∪ E),  P' = PLANE-SWEEP(G' ∪ E)
8: Step 3: Construct the partitions
9:   Cycles = FIND-VERTICES-OF-SIMPLE-CYCLES(Graph(P, G ∪ E))
10:  Cycles' = FIND-VERTICES-OF-SIMPLE-CYCLES(Graph(P', G' ∪ E))
11:  for c ∈ Cycles do
12:    R = CONVEX-HULL(c)
13:    W*.ADD(R)
14:  for c ∈ Cycles' do
15:    R' = CONVEX-HULL(c)
16:    W'.ADD(R')
17: Return W*, W'

Now, given a discretization parameter δ ∈ R⁺, we define the state space F as:

F = {(k_1, k_3, . . . , k_n) ∈ N^{n−1} | 1 ≤ k_1 ≤ r, 1 ≤ k_i ≤ (x̄_i − x̲_i)/δ, i = 3, . . . , n},   (11)

where r is the number of regions in the partitioning W*. In other words, the parameter δ is used to partition the state space into hyper-cubes of size δ in each dimension i = 3, . . . , n. A state s ∈ F represents the index of a region in W* followed by the indices identifying a hypercube in the remaining n − 2 dimensions. Note that, for simplicity of notation, we assume that x̄_i − x̲_i is divisible by δ for all i = 1, . . . , n. We now define the relation Q ⊆ X × F as:

Q = {(x, s) ∈ X × F | s = (k_1, k_3, . . . , k_n), x = (π(x), x_3, . . . , x_n),
π(x) ∈ R_{k_1}, x̲_i + δ(k_i − 1) ≤ x_i < x̲_i + δk_i, i = 3, . . . , n}.   (12)
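Equations (11)-(12) induce a simple indexing map from a concrete state x to its abstract state s. A sketch, where `regions` (membership tests for the partitions of W*), `lower` (the bounds x̲_i), and `delta` are hypothetical inputs standing in for the paper's data structures:

```python
def abstract_state(x, regions, lower, delta):
    """Map a concrete state x = (x1, x2, x3, ..., xn) to s = (k1, k3, ..., kn) per (11)-(12).
    regions: list of predicates over the position (x1, x2), one per region of W*;
    lower[i]: the lower bound of dimension i (entries for i = 0, 1 are unused);
    delta: the grid size of the hypercubes in dimensions 3..n."""
    position = (x[0], x[1])  # the projection of x onto the workspace
    # k1 is the (1-based) index of the region containing the position
    k1 = next(idx + 1 for idx, contains in enumerate(regions) if contains(position))
    # remaining dimensions: the index k_i with lower_i + delta*(k_i - 1) <= x_i < lower_i + delta*k_i
    ks = [int((x[i] - lower[i]) // delta) + 1 for i in range(2, len(x))]
    return (k1, *ks)
```

For instance, with two half-plane regions split at x_1 = 1, lower bound 0 and δ = 0.5 in the third dimension, the state (0.2, 0.3, 0.74) maps to the abstract state (1, 2).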

Finally, we define the state transition function γ_F of S_F as follows: (k'_1, k'_3, . . . , k'_n) ∈ γ_F((k_1, k_3, . . . , k_n)) if

∃x = (π(x), x_3, . . . , x_n), π(x) ∈ R_{k_1}, x̲_i + δ(k_i − 1) ≤ x_i < x̲_i + δk_i,
x' = (π(x'), x'_3, . . . , x'_n), π(x') ∈ R_{k'_1}, x̲_i + δ(k'_i − 1) ≤ x'_i < x̲_i + δk'_i,
s.t. x' = Ax + B f_NN(d(x)).   (13)

It follows from the definition of γ_F in (13) that checking the transition feasibility between two states s and s' is equivalent to searching for robot initial and goal states, along with a LiDAR image, that force the neural network controller to generate an input that moves the robot between the two states while respecting the robot dynamics. In the remainder of this section, we focus on solving this feasibility problem.

5.1 SMC Encoding of NN

We translate the problem of checking the transition feasibility in γ_F into a feasibility problem over a monotone SMC formula [46, 47] as follows. We introduce the Boolean indicator variables b_j^l with l = 1, . . . , L and j = 1, . . . , M_l (recall that L represents the number of layers in the neural network, while M_l represents the number of neurons in the lth layer). These Boolean variables represent the phase of each ReLU: an asserted b_j^l indicates that the output of the jth ReLU in the lth layer is h_j^l = (W^{l−1} h^{l−1} + w^{l−1})_j, while a negated b_j^l indicates that h_j^l = 0. Using these Boolean indicator variables, we encode the problem of checking the transition feasibility between two states s = (k_1, k_3, . . . , k_n) and s' = (k'_1, k'_3, . . . , k'_n) as:

$$\exists\, x, x' \in \mathbb{R}^n,\ u \in \mathbb{R}^m,\ d \in \mathbb{R}^{2N},\ (b^l, h^l, t^l) \in \mathbb{B}^{M_l} \times \mathbb{R}^{M_l} \times \mathbb{R}^{M_l},\ l \in \{1, \dots, L\} \qquad (14)$$

subject to:

$$\zeta(x) \in \mathcal{R}_{k_1} \ \wedge\ \underline{x}_i + \delta(k_i - 1) \le x_i < \underline{x}_i + \delta k_i,\ i = 3, \dots, n \qquad (15)$$
$$\wedge\ \zeta(x') \in \mathcal{R}_{k'_1} \ \wedge\ \underline{x}_i + \delta(k'_i - 1) \le x'_i < \underline{x}_i + \delta k'_i,\ i = 3, \dots, n \qquad (16)$$
$$\wedge\ x' = Ax + Bu \qquad (17)$$
$$\wedge\ d_k = P_{k, \mathcal{R}_{k_1}}\, \zeta(x) + Q_{k, \mathcal{R}_{k_1}},\ k = 1, \dots, N \qquad (18)$$
$$\wedge\ \Big(t^1 = W^0 d + w^0\Big) \ \wedge\ \Big(\textstyle\bigwedge_{l=2}^{L} t^l = W^{l-1} h^{l-1} + w^{l-1}\Big) \qquad (19)$$
$$\wedge\ \Big(u = W^L h^L + w^L\Big) \qquad (20)$$
$$\wedge\ \textstyle\bigwedge_{l=1}^{L} \bigwedge_{j=1}^{M_l} b^l_j \rightarrow \Big[\big(h^l_j = t^l_j\big) \wedge \big(t^l_j \ge 0\big)\Big] \qquad (21)$$
$$\wedge\ \textstyle\bigwedge_{l=1}^{L} \bigwedge_{j=1}^{M_l} \neg b^l_j \rightarrow \Big[\big(h^l_j = 0\big) \wedge \big(t^l_j < 0\big)\Big] \qquad (22)$$

where (15)-(16) encode the state space partitions corresponding to the states s and s′; (17) encodes the dynamics of the robot; (18) encodes the imaging function that maps the robot position into a LiDAR image; and (19)-(22) encode the neural network controller that maps the LiDAR image into a control input.

Compared to Mixed-Integer Linear Programs (MILP), monotone SMC formulas avoid encoding heuristics like big-M encoding, which lead to numerical instabilities. SMC decision procedures follow an iterative approach combining efficient Boolean Satisfiability (SAT) solving with numerical convex programming. When applied to the encoding above, at each iteration the SAT solver generates a candidate assignment for the ReLU indicator variables $b^l_j$. The correctness of this assignment is then checked by solving the corresponding set of convex constraints. If the convex program turns out to be infeasible, indicating a wrong choice of the ReLU indicator variables, the SMC solver identifies an "Irreducible Infeasible Set" (IIS) in the convex program to provide the most succinct explanation of the conflict. This IIS is then fed back to the SAT solver to prune its search space and provide the next assignment for the ReLU indicator variables. SMC solvers were shown to handle problems with a relatively large number of Boolean variables better than MILP solvers [47].
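The role of the convex-feasibility check can be illustrated with a much-simplified, point-wise stand-in: given a candidate phase assignment and a fixed input, run the network forward and verify that every pre-activation sign agrees with constraints (21)-(22). (The actual solver searches over the input x as well; this toy sketch only checks a single concrete point, and all names are illustrative.)

```python
def check_phase_assignment(weights, biases, b, d):
    """For a fixed input d, run the network forward while enforcing a
    candidate phase assignment b: each hidden neuron's pre-activation t
    must satisfy t >= 0 when its indicator is asserted (Eq. 21) and
    t < 0 when it is negated (Eq. 22).

    weights/biases: one (W, w) pair per layer, last pair = output layer.
    b: list of Boolean phase vectors, one per hidden layer.
    Returns (consistent, output)."""
    h = d
    for l, (W, w) in enumerate(zip(weights, biases)):
        # pre-activations t = W h + w for the current layer
        t = [sum(Wj[k] * h[k] for k in range(len(h))) + wj
             for Wj, wj in zip(W, w)]
        if l == len(weights) - 1:           # output layer: no ReLU
            return True, t
        for j, tj in enumerate(t):
            if b[l][j] and tj < 0:          # asserted, but t < 0: conflict
                return False, None
            if not b[l][j] and tj >= 0:     # negated, but t >= 0: conflict
                return False, None
        # apply the ReLU phases: pass through or clamp to zero
        h = [tj if b[l][j] else 0.0 for j, tj in enumerate(t)]
    return True, h
```

In the full SMC loop, an inconsistency like the one detected here is distilled into an IIS and returned to the SAT solver as a blocking clause.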

5.2 Pruning the Search Space by Pre-processing

While a neural network with M ReLUs gives rise to 2^M combinations of possible assignments to the corresponding Boolean


indicator variables, we observe that only a few of those combinations are feasible for each workspace region. In other words, the LiDAR imaging function, together with the workspace region, enforces constraints on the inputs to the neural network, which in turn enforce constraints on the subsequent layers. By performing pre-processing on each of the workspace regions, we can discover those constraints and augment the SMC encoding (15)-(22) with them to prune combinations of assignments of the ReLU indicator variables.

To find such constraints, we consider an SMC problem with only the constraints (15), (18)-(22). By iteratively solving this reduced SMC problem and recording all the IIS conflicts produced by the SMC solver, we can compute a set of counterexamples that are unique to each region. By iteratively invoking the SMC solver while adding previous counterexamples as constraints, until the problem is no longer satisfiable, we compute the set $\mathcal{CE}_R$, which represents all the counterexamples for region R. Finally, we add the constraint:

$$\bigwedge_{c \in \mathcal{CE}_R} \neg c \qquad (23)$$

to the original SMC encoding (15)-(22) to prune the set of possible assignments to the ReLU indicator variables. In Section 6, we show that this pre-processing results in a reduction of several orders of magnitude in the execution time.
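The pre-processing loop can be sketched as follows. The hypothetical `is_feasible` predicate stands in for the reduced SMC query over one workspace region, and the brute-force candidate enumeration is a toy stand-in for the SAT solver; the blocking of previously found assignments mirrors the accumulation of counterexample constraints:

```python
from itertools import product

def enumerate_feasible_assignments(num_relus, is_feasible):
    """Miniature version of the Section 5.2 pre-processing loop:
    repeatedly ask the 'solver' for a feasible ReLU assignment not yet
    seen, block it, and stop when the problem is no longer satisfiable.

    `is_feasible(bits)` is a hypothetical stand-in for the reduced SMC
    query (15), (18)-(22) over a single workspace region."""
    blocked = set()
    feasible = []
    while True:
        found = None
        # toy stand-in for SAT search: brute-force over 2^M assignments
        for bits in product([False, True], repeat=num_relus):
            if bits not in blocked and is_feasible(bits):
                found = bits
                break
        if found is None:
            return feasible          # no new assignment: loop terminates
        feasible.append(found)
        blocked.add(found)           # block it, as in constraint (23)

# Toy "region constraint": exactly one ReLU may be active,
# a stand-in for the geometric constraints imposed by the LiDAR image.
result = enumerate_feasible_assignments(3, lambda b: sum(b) == 1)
```

For this toy constraint only 3 of the 2^3 = 8 assignments survive, mirroring the orders-of-magnitude reduction reported in Table 2.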

5.3 Correctness of Algorithm 1

We end our discussion with the following results, which assert the correctness of the whole framework described in this paper. We first establish the correctness of computing the finite state abstraction $S_F$ along with the simulation relation Q:

Proposition 5.1. Consider the finite state abstraction $S_F = (F, \delta_F)$, where F is defined by (11) and $\delta_F$ is defined by (13) and computed by means of solving the SMC formulas (15)-(23). Consider also the system $S_{NN} = (X, \delta_{NN})$, where $\delta_{NN} : x \mapsto Ax + B f_{NN}(d(x))$. For the relation Q defined in (12), the following holds: $S_{NN} \preceq_Q S_F$.

Recall that Algorithm 1 applies standard reachability analysis on $S_F$ to compute the set of unsafe states. It follows directly from the correctness of the simulation relation Q established above that our algorithm computes an over-approximation of the set of unsafe states and, accordingly, an under-approximation of the set of safe states. This fact is captured by the following result, which summarizes the correctness of the proposed framework:

Theorem 5.2. Consider the safe set $X_{\text{safe}}$ computed by Algorithm 1. Then any trajectory $x(\cdot)$ with $x(0) \in X_{\text{safe}}$ is a safe trajectory.
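The reachability step that Algorithm 1 performs on the finite abstraction can be sketched as a standard backward fixed-point over the abstract transition relation. This is a minimal, illustrative stand-in (names and data are ours, not the paper's implementation); because $\delta_F$ over-approximates the concrete transitions, the computed unsafe set is an over-approximation and its complement under-approximates the safe set:

```python
def unsafe_states(states, delta_F, bad):
    """Backward reachability on the finite abstraction S_F: a state is
    unsafe if some abstract trajectory starting from it can reach a
    'bad' (obstacle) state. Computed as a fixed point: repeatedly add
    any state with a successor already marked unsafe."""
    unsafe = set(bad)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s not in unsafe and delta_F.get(s, set()) & unsafe:
                unsafe.add(s)
                changed = True
    return unsafe

# Toy abstraction: chain 0 -> 1 -> 2, where 2 is an obstacle cell,
# and 3 only self-loops, so it can never reach the obstacle.
states = [0, 1, 2, 3]
delta_F = {0: {1}, 1: {2}, 2: set(), 3: {3}}
safe = set(states) - unsafe_states(states, delta_F, {2})
```

Here states 0 and 1 are marked unsafe because they can reach the obstacle, leaving only state 3 in the (under-approximated) safe set.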

While Theorem 5.2 establishes the correctness of the proposed framework in Algorithm 1, two points need to be investigated, namely (i) the complexity of Algorithm 1 and (ii) the maximality of the set $X_{\text{safe}}$. Although Algorithm 2 computes the imaging-adapted partitions efficiently (as shown in Theorem 4.4), analyzing a neural network with ReLU activation functions is known to be NP-hard. Exacerbating the problem, Algorithm 1 entails analyzing the neural network a number of times that is exponential in the number of partition regions. In addition, the floating point arithmetic used by the SMC solver may introduce errors that are not analyzed in this paper. In Section 6, we evaluate the efficiency of using the SMC decision

Table 1: Scalability results for the WKSP-PARTITION Algorithm

Number of   Number of   Number of   Time
Vertices    Lasers      Regions     [s]
8           8           111         0.0152
            38          1851        0.3479
            118         17237       5.5300
10          8           136         0.0245
            38          2254        0.4710
            118         20343       6.9380
12          8           137         0.0275
            38          2418        0.5362
            120         23347       8.0836
            218         76337       37.0572
            298         142487      86.6341

procedures to harness this computational complexity. As for the maximality of the computed $X_{\text{safe}}$ set, we note that Algorithm 1 is not guaranteed to find the maximal $X_{\text{safe}}$.

6 RESULTS

We implemented the proposed verification framework, as described by Algorithm 1, on top of the SMC solver SATEX [49]. All experiments were executed on an Intel Core i7 2.5-GHz processor with 16 GB of memory.

6.1 Scalability of the Workspace Partitioning Algorithm

As the first step of our verification framework, the imaging-adapted workspace partitioning is tested for numerical stability with an increasing number of laser angles and obstacles. Table 1 summarizes how the number of computed regions and the execution time grow as the number of LiDAR lasers and obstacle vertices increases. Thanks to adopting well-studied computational geometry algorithms, our partitioning process takes less than 1.5 minutes for the scenario in which a LiDAR scanner is equipped with 298 lasers (real-world LiDAR scanners are capable of providing readings from 270 laser angles).

6.2 Computational Reduction Due to Pre-processing

The second step is to pre-process the neural network. In particular, we would like to answer the following question: given a partitioned workspace, how many ReLU assignments are feasible in each region, and, if any, what is the execution time to find them? Recall that a ReLU assignment is feasible if there exist a robot position and a corresponding LiDAR image that lead to that particular ReLU assignment.

Thanks to the IIS counterexample strategy, we can find all feasible ReLU assignments during pre-processing. Our first observation is that the number of feasible assignments is indeed much smaller than the set of all possible assignments. As shown in Table 2, for a neural network with a total of 32 neurons, only 11 ReLU assignments are feasible (within the region under consideration).


Table 2: Execution time of the SMC-based pre-processing as a function of the neural network architecture.

Number of      Total Number  Number of Feasible  Number of        Time
Hidden Layers  of Neurons    ReLU Assignments    Counterexamples  [s]
1              32            11                  60               2.7819
               72            31                  183              11.4227
               92            58                  265              18.4807
               102           68                  364              43.2459
               152           101                 540              78.3015
               172           146                 778              104.4720
               202           191                 897              227.2357
               302           383                 1761             656.3668
               402           730                 2614             1276.4405
               452           816                 4325             1856.0418
               502           1013                3766             2052.0574
               552           1165                4273             4567.1767
               602           1273                5742             6314.4890
               652           1402                5707             7166.3059
               702           1722                6521             8813.1829
2              22            3                   94               1.3180
               42            19                  481              10.9823
               62            35                  1692             53.2246
               82            33                  2685             108.2584
               102           58                  5629             292.7412
               122           71                  9995             739.4883
               142           72                  18209            2098.0220
               162           98                  34431            6622.1830
               182           152                 44773            12532.8552
3              32            5                   319              5.7227
               47            7                   5506             148.8727
               62            45                  72051            12619.5353
4              22            9                   205              10.4667
               42            5                   1328             90.1148

Comparing this number to the 2^32 ≈ 4.3 × 10^9 possible ReLU assignments, we conclude that pre-processing is very effective in reducing the search space by several orders of magnitude.

Furthermore, we conducted an experiment to study the scalability of the proposed pre-processing for an increasing number of ReLUs. To that end, we fixed one choice of workspace region while changing the neural network architecture. The execution time, the number of generated counterexamples, and the number of feasible ReLU assignments are given in Table 2. For the case of neural networks with one hidden layer, our implementation of the counterexample strategy is able to find the feasible ReLU assignments for a couple of hundred neurons in less than 4 minutes. In general, the number of counterexamples, and hence the number of feasible ReLU assignments and the execution time, grows with the number of neurons. However, the number of neurons is not the only deciding factor. Our experiments show that the depth of the network plays a significant role in the scalability of the proposed algorithms. For example, comparing the neural network with one hidden layer and one hundred neurons per layer against the network with two layers and fifty neurons per layer, we notice that both networks share the same number of neurons. Nevertheless, the deeper network resulted in an order-of-magnitude increase in the number of generated counterexamples and an order-of-magnitude increase in the corresponding execution time. Interestingly, both architectures share a similar number of feasible ReLU assignments. In other words, similar features of the neural network can be captured by fewer counterexamples whenever the neural network has fewer layers. This observation can be accounted

Table 3: Execution time of the SMC-based pre-processing as a function of the workspace region. Region indices are shown in Figure 3.

Region  Number of Feasible  Number of        Time
Index   ReLU Assignments    Counterexamples  [s]
A2-R3   33                  2685             108.2584
A14-R1  55                  4925             215.8251
A13-R3  7                   1686             69.4158
A1-R1   25                  2355             99.2122
A7-R1   26                  3495             139.3486
A12-R2  3                   1348             54.4548
A15-R3  25                  3095             121.7869
A19-R1  38                  4340             186.6428

for by the fact that counterexamples corresponding to ReLUs in the early layers are more powerful than those involving ReLUs in the later layers of the network.

In the second part of this experiment, we study the dependence of the number of feasible ReLU assignments on the choice of the workspace region. To that end, we fix the architecture of the neural network to one with 2 hidden layers and 40 neurons per layer. Table 3 reports the execution time, the number of counterexamples, and the number of feasible ReLU assignments across different regions of the workspace. In general, we observe that the number of feasible ReLU assignments increases with the size of the region.

6.3 Transition Feasibility

Following our verification pipeline, the next step is to compute the transition function of the finite state abstraction, $\delta_F$, i.e., to check transition feasibility between regions. Table 4 shows a performance comparison between our proposed strategy, which uses counterexamples obtained from pre-processing, and the SMC encoding without pre-processing. We observe that the SMC encoding empowered by counterexamples generated through the pre-processing phase scales more favorably than the one that does not take counterexamples into account, leading to a 2-3 orders of magnitude reduction in the execution time. Moreover, thanks to the pre-processing counterexamples, we observe that checking transition feasibility becomes less sensitive to changes in the neural network architecture, as shown in Table 4.

ACKNOWLEDGMENTS

The material presented in this paper is based upon work supported by the National Science Foundation under award number #1845194.

REFERENCES

[1] Wikipedia, "List of autonomous car fatalities," https://en.wikipedia.org/wiki/List_of_autonomous_car_fatalities.
[2] A. Ferdowsi, U. Challita, W. Saad, and N. B. Mandayam, "Robust deep reinforcement learning for security and safety in autonomous vehicle systems," arXiv preprint, 2018.
[3] T. Everitt, G. Lea, and M. Hutter, "AGI safety literature review," arXiv preprint, 2018.
[4] M. Charikar, J. Steinhardt, and G. Valiant, "Learning from untrusted data," in Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing. ACM, 2017, pp. 47–60.


Table 4: Performance of the SMC-based encoding for computing $\delta_F$ as a function of the neural network (timeout = 1 hour).

Number of      Total Number  Time [s]           Time [s]
Hidden Layers  of Neurons    (Exploit Counter-  (Without Counter-
                             examples)          examples)
1              82            0.5056             50.1263
               102           7.1525             timeout
               112           12.524             timeout
               122           18.0689            timeout
               132           20.4095            timeout
2              22            0.1056             15.8841
               42            4.8518             timeout
               62            3.1510             timeout
               82            2.6112             timeout
               102           11.0984            timeout
               122           3.8860             timeout
               142           0.7608             timeout
               162           2.7917             timeout
               182           193.6693           timeout
3              32            0.3884             388.549
               47            0.9034             timeout
               62            59.393             timeout

[5] J. Steinhardt, P. W. W. Koh, and P. S. Liang, "Certified defenses for data poisoning attacks," in Advances in Neural Information Processing Systems, 2017, pp. 3520–3532.
[6] L. Muñoz-González, B. Biggio, A. Demontis, A. Paudice, V. Wongrassamee, E. C. Lupu, and F. Roli, "Towards poisoning of deep learning algorithms with back-gradient optimization," in Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. ACM, 2017, pp. 27–38.
[7] A. Paudice, L. Muñoz-González, and E. C. Lupu, "Label sanitization against label flipping poisoning attacks," arXiv preprint, 2018.
[8] W. Ruan, M. Wu, Y. Sun, X. Huang, D. Kroening, and M. Kwiatkowska, "Global robustness evaluation of deep neural networks with provable guarantees for l0 norm," arXiv preprint, 2018.
[9] K. Pei, Y. Cao, J. Yang, and S. Jana, "Deepxplore: Automated whitebox testing of deep learning systems," in Proceedings of the 26th Symposium on Operating Systems Principles. ACM, 2017, pp. 1–18.
[10] Y. Tian, K. Pei, S. Jana, and B. Ray, "Deeptest: Automated testing of deep-neural-network-driven autonomous cars," arXiv preprint arXiv:1708.08559, 2017.
[11] M. Wicker, X. Huang, and M. Kwiatkowska, "Feature-guided black-box safety testing of deep neural networks," in International Conference on Tools and Algorithms for the Construction and Analysis of Systems. Springer, 2018, pp. 408–426.
[12] Y. Sun, X. Huang, and D. Kroening, "Testing deep neural networks," arXiv preprint, 2018.
[13] L. Ma, F. Juefei-Xu, J. Sun, C. Chen, T. Su, F. Zhang, M. Xue, B. Li, L. Li, Y. Liu, J. Zhao, and Y. Wang, "Deepgauge: Comprehensive and multi-granularity testing criteria for gauging the robustness of deep learning systems," arXiv preprint, 2018.
[14] J. Wang, J. Sun, P. Zhang, and X. Wang, "Detecting adversarial samples for deep neural networks through mutation testing," arXiv preprint, 2018.
[15] L. Ma, F. Zhang, J. Sun, M. Xue, B. Li, F. Juefei-Xu, C. Xie, L. Li, Y. Liu, J. Zhao, and Y. Wang, "Deepmutation: Mutation testing of deep learning systems," arXiv preprint, 2018.
[16] S. Srisakaokul, Z. Wu, A. Astorga, O. Alebiosu, and T. Xie, "Multiple-implementation testing of supervised learning software," in Proceedings of the AAAI-18 Workshop on Engineering Dependable and Secure Machine Learning Systems (EDSMLS), 2018.
[17] M. Zhang, Y. Zhang, L. Zhang, C. Liu, and S. Khurshid, "Deeproad: GAN-based metamorphic autonomous driving system testing," arXiv preprint, 2018.
[18] Y. Sun, M. Wu, W. Ruan, X. Huang, M. Kwiatkowska, and D. Kroening, "Concolic testing for deep neural networks," arXiv preprint, 2018.
[19] T. Dreossi, A. Donzé, and S. A. Seshia, "Compositional falsification of cyber-physical systems with machine learning components," in NASA Formal Methods Symposium. Springer, 2017, pp. 357–372.
[20] C. E. Tuncali, G. Fainekos, H. Ito, and J. Kapinski, "Simulation-based adversarial test generation for autonomous vehicles with machine learning components," arXiv preprint arXiv:1804.06760, 2018.
[21] Z. Zhang, G. Ernst, S. Sedwards, P. Arcaini, and I. Hasuo, "Two-layered falsification of hybrid systems guided by Monte Carlo tree search," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2018.
[22] Z. Kurd and T. Kelly, "Establishing safety criteria for artificial neural networks," in International Conference on Knowledge-Based and Intelligent Information and Engineering Systems. Springer, 2003, pp. 163–169.
[23] S. A. Seshia, D. Sadigh, and S. S. Sastry, "Towards verified artificial intelligence," arXiv preprint, 2016.
[24] S. A. Seshia, A. Desai, T. Dreossi, D. Fremont, S. Ghosh, E. Kim, S. Shivakumar, M. Vazquez-Chanlatte, and X. Yue, "Formal specification for deep neural networks," arXiv preprint, 2018.
[25] J. Leike, M. Martic, V. Krakovna, P. A. Ortega, T. Everitt, A. Lefrancq, L. Orseau, and S. Legg, "AI safety gridworlds," arXiv preprint, 2017.
[26] F. Leofante, N. Narodytska, L. Pulina, and A. Tacchella, "Automated verification of neural networks: Advances, challenges and perspectives," arXiv preprint, 2018.
[27] K. Scheibler, L. Winterer, R. Wimmer, and B. Becker, "Towards verification of artificial neural networks," in Workshop on Methoden und Beschreibungssprachen zur Modellierung und Verifikation von Schaltungen und Systemen (MBMV), 2015, pp. 30–40.
[28] G. Katz, C. Barrett, D. L. Dill, K. Julian, and M. J. Kochenderfer, "Reluplex: An efficient SMT solver for verifying deep neural networks," in International Conference on Computer Aided Verification. Springer, 2017, pp. 97–117.
[29] R. Ehlers, "Formal verification of piece-wise linear feed-forward neural networks," in International Symposium on Automated Technology for Verification and Analysis. Springer, 2017, pp. 269–286.
[30] R. Bunel, I. Turkaslan, P. H. S. Torr, P. Kohli, and M. P. Kumar, "A unified view of piecewise linear neural network verification," arXiv preprint, 2018.
[31] W. Ruan, X. Huang, and M. Kwiatkowska, "Reachability analysis of deep neural networks with provable guarantees," arXiv preprint, 2018.
[32] S. Dutta, S. Jha, S. Sankaranarayanan, and A. Tiwari, "Output range analysis for deep feedforward neural networks," in NASA Formal Methods Symposium. Springer, 2018, pp. 121–138.
[33] L. Pulina and A. Tacchella, "An abstraction-refinement approach to verification of artificial neural networks," in International Conference on Computer Aided Verification. Springer, 2010, pp. 243–257.
[34] V. Tjeng and R. Tedrake, "Verifying neural networks with mixed integer programming," arXiv preprint arXiv:1711.07356, 2017.
[35] T. Gehr, M. Mirman, D. Drachsler-Cohen, P. Tsankov, S. Chaudhuri, and M. Vechev, "AI2: Safety and robustness certification of neural networks with abstract interpretation," in Security and Privacy (SP), 2018 IEEE Symposium on, 2018.
[36] W. Xiang, H.-D. Tran, and T. T. Johnson, "Reachable set computation and safety verification for neural networks with ReLU activations," arXiv preprint arXiv:1712.08163, 2017.
[37] W. Xiang, P. Musau, A. A. Wild, D. M. Lopez, N. Hamilton, X. Yang, J. Rosenfeld, and T. T. Johnson, "Verification for machine learning, autonomy, and neural networks survey," arXiv preprint arXiv:1810.01989, 2018.
[38] W. Xiang and T. T. Johnson, "Reachability analysis and safety verification for neural network control systems," arXiv preprint arXiv:1805.09944, 2018.
[39] R. Ivanov, J. Weimer, R. Alur, G. J. Pappas, and I. Lee, "Verisig: Verifying safety properties of hybrid systems with neural network controllers," in Proceedings of the 22nd International Conference on Hybrid Systems: Computation and Control (HSCC), to appear. ACM, 2019.
[40] M. E. Akintunde, A. Lomuscio, L. Maganti, and E. Pirovano, "Reachability analysis for neural agent-environment systems," in Proceedings of the Sixteenth International Conference on Principles of Knowledge Representation and Reasoning (KR 2018). AAAI, 2018.
[41] G. Frehse, C. L. Guernic, A. Donzé, S. Cotton, R. Ray, O. Lebeltel, R. Ripado, A. Girard, T. Dang, and O. Maler, "SpaceEx: Scalable verification of hybrid systems," in International Conference on Computer Aided Verification (CAV). Springer, 2011, pp. 379–395.
[42] X. Chen, E. Ábrahám, and S. Sankaranarayanan, "Flow*: An analyzer for non-linear hybrid systems," in International Conference on Computer Aided Verification (CAV). Springer, 2013, pp. 258–263.
[43] C. E. Tuncali, J. Kapinski, H. Ito, and J. V. Deshmukh, "Reasoning about safety of learning-enabled components in autonomous cyber-physical systems," in Proceedings of the 55th Annual Design Automation Conference (DAC). ACM, 2018.
[44] X. Sun, H. Khedr, and Y. Shoukry, "Formal verification of neural network controlled autonomous systems," arXiv preprint arXiv:1810.13072, 2018.
[45] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.
[46] Y. Shoukry, P. Nuzzo, A. L. Sangiovanni-Vincentelli, S. A. Seshia, G. J. Pappas, and P. Tabuada, "SMC: Satisfiability Modulo Convex optimization," in Proceedings of the 20th International Conference on Hybrid Systems: Computation and Control (HSCC). ACM, 2017, pp. 19–28.
[47] ——, "SMC: Satisfiability modulo convex programming," Proceedings of the IEEE, vol. 106, no. 9, pp. 1655–1679, 2018.
[48] M. de Berg, O. Cheong, M. van Kreveld, and M. Overmars, Computational Geometry: Algorithms and Applications. Springer-Verlag TELOS, 2008.
[49] (2018, Jun.) SatEX Solver. [Online]. Available: https://yshoukry.bitbucket.io/SatEX/
