
PRINCIPLES FOR THE DEVELOPMENT AND ASSURANCE OF AUTONOMOUS SYSTEMS FOR SAFE USE IN HAZARDOUS ENVIRONMENTS

White Paper, 14 June 2021


Contents

Executive Summary
Scope
High-Level Recommendations
1 Introduction
2 Prerequisite Concepts
2.1 Ethical Impact
2.2 Materials Suitability
2.3 Verification and Validation of Autonomous and Robotic Software
2.3.1 Verification and Validation (V&V) Challenges from Autonomous and Robotic Systems
2.3.2 Formal Methods
2.3.3 Verification & Validation Tools
2.4 Mixed-Criticality Systems
3 General Principles for Safety-Critical Robotic Systems
3.1 Design
3.2 Verification and Validation
3.3 Operation
3.4 Maintenance, Recovery and Decommissioning
4 Principles for Human-Controlled Robotic Systems
4.1 Design
4.2 Verification and Validation
4.3 Operation
5 Principles for Autonomous Robotic Systems
5.1 Design
5.2 Verification and Validation
5.3 Operation
Authors, Glossary and References

Executive Summary

Autonomous systems are increasingly being used (or proposed for use) in situations where they are near, or interact (physically or otherwise) with, humans. They can be useful for performing tasks that are dirty or dangerous, or jobs that are simply distant or dull. This white paper sets out principles to consider when designing, developing, and regulating autonomous systems that are required to operate in hazardous environments. Autonomous systems use software to make decisions without the need for human control. They are often embedded in a robotic system, to enable interaction with the real world. This means that autonomous robotic systems are often safety-critical, where failures can cause human harm or death. For the sorts of autonomous robotic systems considered by this white paper, the risk of harm is likely to fall on human workers (the system's users or operators). Autonomous systems also raise issues of security and data privacy, both because of the sensitive data that the system might process and because a security failure can cause a safety failure.

Scope

This white paper is intended to be an add-on to the relevant existing standards and guidance for (for example) robotics, electronic systems, control systems, and safety-critical software. These existing standards provide good practice for their respective areas, but do not provide guidance for autonomous systems. This white paper adds to the emerging good practice for developing autonomous robotic systems that are amenable to strong Verification & Validation.

The intended audience of this white paper is developers of autonomous and robotic systems. It aims to provide a description of things that need to be demonstrable by or of their systems, and recommendations of ways to achieve this. This aims to enable strong Verification & Validation of the resulting autonomous system, and to mitigate some of the hazards already occurring in autonomous systems.

High-Level Recommendations

This white paper can be summarised in seven high-level recommendations for the development and deployment of autonomous robotic systems, which are each discussed in more detail in the main text.

1. Remember both the hardware and software components during system assurance.

It is important to remember that an autonomous robotic system contains both hardware and software; both parts should be developed to be as safe as possible. In addition to any sector-specific standards and guidelines for robotic systems, the choice of materials used in the robot should be suitable for the potential hazards in the environment. For example, a robotic system deployed in a radioactive environment should be assessed for the suitability of the materials used to build the robot's structure and casing, as well as the materials used in its internal, electrical, and mechanical components.



The software parts of the system should also be developed using appropriate, possibly sector-specific, standards and guidelines. In addition, careful attention should be paid to autonomous components, since they will be making executive decisions or interpreting sensor data, upon which decisions will be made. The strongest Verification & Validation should be used where this is possible, and Formal Methods should be considered where they are applicable.

2. Hazard assessments should include risks that have an ethical impact, as well as those that have safety and security impacts.

The introduction of robotic and autonomous systems will have an ethical impact on the workplaces and workforces where they are deployed. Risks involving safety and security are obviously ethical impacts, but here we mean risks that are not necessarily safety or security risks, but are of a purely ethical nature.

The standard BS 8611 ethical design and application of robots and robotic systems (British Standards Institution 2016) provides a framework for ensuring that ethical impacts are understood and accounted for when developing an autonomous robotic system. BS 8611 uses the terminology of ethical hazards: a potential source of ethical harm, which is anything likely to compromise psychological and/or societal and environmental well-being. These ethical hazards describe a number of recognised ethical impacts that can arise from the development and deployment of robotic systems.

The identification of ethical hazards should be specifically included in the system's hazard assessment, and care should be taken not to introduce them where they can be avoided. Like other (for example safety) hazards, it is important that ethical hazards are identified early and designed out of the final system. Ethical hazards that make it into the final system become more difficult to correct; they are 'baked in'. For example, hidden bias in a data set used to train a machine learning component can become an integral part of the system's decision-making process, producing systemically biased decision-making.

3. Take both a corroborative and a mixed-criticality approach to Verification & Validation.

Each of an autonomous robotic system's components (both hardware and software) may need different assurance methods. One methodology for achieving this could be the corroborative Verification & Validation described in (Webster et al. 2020), where Formal Methods, simulation-based testing, and physical testing are all used in assuring a single system. Formal Methods are very precise but they might require abstract models of the system (though there are formal approaches that work directly on programs or at runtime). Physical testing with robots provides realism, because it uses the robotic system in a real environment, but will struggle to be exhaustive. Simulations sit between these two extremes. The corroborative Verification & Validation approach links results from each of these methods to provide confidence in an overall result, and enables an iterative development and assurance workflow.

It is also likely that the different components of an autonomous robotic system will have different levels of criticality, where criticality means the level of assurance against failure that a given component needs. We recommend analysing the criticality of a system's components, so that the strongest Verification & Validation methods can be focussed on the most critical components, where they will have the biggest impact. This enables the use of, for example, Formal Methods for verifying a system's executive decisions without implying that the whole system must be formally verified. This idea pairs well with the concept of corroborative Verification & Validation described previously.

4. Autonomous components should be as transparent and verifiable as possible.

Where autonomy is used to make executive decisions about what the system should do, it is very important to be able to understand why a decision has been taken and to be able to verify that the correct decision will be made under all circumstances. This is particularly useful for mitigating the challenge of extensively testing robotic systems.



Where autonomy is used to, for example, interpret sensor data, it is important to minimise incorrect interpretations and to ensure that an incorrect interpretation does not lead to unsafe behaviour. If this cannot be achieved directly, then the system's architecture should be arranged so that there is a more analysable 'governor' between the autonomous component and the rest of the system.

In both cases, the choice of how to implement autonomy will impact the kinds of Verification & Validation techniques that are available. For example, an agent-based approach to autonomy is likely to be easier to formally verify than an approach based on machine learning. How autonomous decisions are implemented should be considered analogously to picking a programming language for a safety-critical system: how easy it is to verify, understand, and demonstrate the safety of the autonomous component should be key factors in the decision of how to implement that component.

5. Tasks and missions that the system will perform should be clearly defined.

Task definitions could include expected inputs and outputs, a step-by-step description of the behaviour, potential failure modes, etc. Mission definitions could be seen as a collection of tasks, but should also include higher-level concerns, like potential hazards in the deployment environment. These definitions make it clear to the developers and end users what the system can and cannot do. They also describe the system's requirements, which are essential for meaningful Verification & Validation; and the system's deployment context, which is essential for things like assessing the suitability of the materials used to build the robotic system (as previously described).

6. Dynamic Verification & Validation should be used to complement static Verification & Validation.

Static checking can provide crucial confidence in a system's correctness: for example, verifying that an autonomous component that makes executive decisions will never choose an unsafe behaviour. However, any system that interacts with the real world is bound to face some uncertainty, so dynamic Verification & Validation techniques (techniques deployed at runtime) should be used as well. This helps to bridge the reality gap between design-time assumptions about the environment and the run-time reality. In addition to just performing runtime monitoring, this approach could be extended to perform runtime enforcement.

Runtime monitoring (or enforcement) can be high-level, for example checking that the robotic system doesn't exceed a safe speed; or it could be low-level and more focussed, for example governing a machine learning component to ensure that it doesn't make decisions outside of a safe operating envelope. Either way, dynamic Verification & Validation during deployment can help to provide extra confidence that the system is continuing to operate correctly.
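As an illustration of the high-level case, the following minimal sketch (in Python, with illustrative parameter values and function names, which are assumptions rather than part of any particular system) shows a runtime monitor extended to enforcement: the requested speed is checked each control cycle and clamped if it would exceed a safe limit.

```python
# A minimal sketch of high-level runtime monitoring with enforcement:
# the property "speed never exceeds a safe limit" is checked on each
# control cycle, and the command is clamped if it would violate it.

SAFE_SPEED_LIMIT = 0.5  # metres per second; illustrative value

def log_violation(requested: float) -> None:
    # In a real system this would alert an operator and persist a log.
    print(f"monitor: requested speed {requested} m/s exceeds limit")

def enforce_speed(requested_speed: float) -> float:
    """Runtime enforcement: return a speed command that is guaranteed
    to satisfy the monitored property, whatever was requested."""
    if requested_speed > SAFE_SPEED_LIMIT:
        log_violation(requested_speed)
        return SAFE_SPEED_LIMIT  # clamp rather than trust the requester
    return requested_speed

# The enforced command never violates the property, even when the
# (possibly opaque) decision-making component misbehaves.
assert enforce_speed(2.0) == SAFE_SPEED_LIMIT
assert enforce_speed(0.3) == 0.3
```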

7. System requirements should be clearly traceable through the design, the development processes, and into the deployed system.

Autonomous robotic systems that are deployed into hazardous environments often require approval from a regulatory body before they can be used. Even if they don't, it is likely that the system will need to be acceptable to, and trusted by, human users or operators. Traceability of the system's requirements through the development process is key to showing that the verification is checking for properties or behaviours that support the requirements, giving overall confidence in the correctness of the system. Being able to trace the requirements through to the final system (either as artefacts of static Verification & Validation, dynamic Verification & Validation components, or both) helps provide confidence that the final system fulfils the original requirements. This can help to ease regulatory efforts at the same time as potentially reducing risk.



1 Introduction

These guidelines set out key principles to consider when designing, developing, and regulating autonomous systems that are specifically required to operate in hazardous environments (e.g. radiological, chemotoxic, etc.). Particular questions that this white paper addresses include:

1. What do we consider to be an autonomous robotic system?

2. How do we demonstrate that the risks and benefits arising from the application of robotic or autonomous systems are acceptable?

3. What are the differences between safety engineering for a human-controlled robot and for an autonomously-controlled robot?

4. What are the factors to be considered in the implementation of an autonomous system?

An autonomous robotic system can be viewed as a combination of a physical (robotic) element and a logical (software) element. The robotic part of the system enables interaction with the physical world. The software part of the system enables autonomous decision-making, based on sensor input and (often pre-built) models of its environment. Autonomous robotic systems are highly complex, usually mission-critical, and often safety-critical. Autonomous systems are discussed in more detail later in this section.

Moving from a system being broadly human controlled (either remotely or via scripted automation) to autonomous control requires a new approach to Verification & Validation. Adding a software component or layer that makes choices brings both new challenges and new opportunities. While it is challenging to ensure that the system will make safe choices, given the available sensor input, it also provides the opportunity to improve the confidence in the safety of the system by examining the decision-making process. As such, this opportunity should not be wasted.

Physical safety controls can be used to constrain the robotic system, but strong Verification & Validation of the software system is also highly recommended, not least to help establish user trust. Where physical controls are not possible, the strongest Verification & Validation methods that are applicable to the system (or component) should be used. In either case, care should be taken to choose Verification & Validation methods that are suitable for describing both the system and the properties to be checked.

It is important to have detailed definitions of the tasks that an autonomous robotic system will perform. Task definitions are key design artefacts that ease Verification & Validation efforts; they also make it clear to the developers and end users what the system can and cannot do. A task definition could include: expected inputs and outputs, a step-by-step description of the behaviour, potential failure modes, etc. It is also important to have a detailed definition of the mission that an autonomous robotic system will perform. A mission definition could be seen as a collection of tasks that achieve the mission (a minimal sketch of a task definition is given after the next paragraph).


A good mission definition should also include higher-level concerns, like the types of hazard expected in the deployment environment. Again, this sort of definition is useful for Verification & Validation methods and the mitigation of potential hazards, but also for user or operator trust.
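As an illustration, the following minimal sketch shows one possible shape for a machine-readable task definition covering the elements suggested above; the field names and the example task are illustrative assumptions, not a prescribed schema.

```python
# A minimal sketch of a task definition capturing expected inputs and
# outputs, a step-by-step description of the behaviour, and failure
# modes. A mission definition could then be a collection of these.

from dataclasses import dataclass

@dataclass
class TaskDefinition:
    name: str
    inputs: list[str]          # expected inputs / preconditions
    outputs: list[str]         # expected outputs / postconditions
    steps: list[str]           # step-by-step description of behaviour
    failure_modes: list[str]   # known ways the task can fail
    recovery: str              # how the system should recover

inspect_valve = TaskDefinition(
    name="inspect_valve",
    inputs=["valve location", "camera operational"],
    outputs=["image of valve", "pass/fail assessment"],
    steps=["navigate to valve", "position camera", "capture image",
           "run assessment", "report result"],
    failure_modes=["navigation blocked", "camera fault"],
    recovery="return to base and report the failure",
)
print(inspect_valve.name, "-", len(inspect_valve.steps), "steps")
```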

A single system will often require multiple methods of assurance, because of the multiple different types of components that compose one system. One methodology for achieving this could be the corroborative Verification & Validation described in (Webster et al. 2020), where Formal Methods, simulation-based testing, and physical testing are combined in the assurance of a single system.

In addition to hazards that arise from the violation of traditional safety principles, there are hazards that can arise from a range of considerations that are often loosely grouped together under the term ethics and artificial intelligence. The global landscape of ethics guidelines for AI is fragmented, but a recent survey has found convergence on five principles: transparency, justice and fairness, non-maleficence, responsibility, and privacy; with a further six principles (beneficence, freedom and autonomy, trust, sustainability, dignity, and solidarity) mentioned in many guideline documents (Jobin, Ienca, and Vayena 2019).

Carelessly developed and deployed AI systems have already been seen to violate these principles, both through unintended side-effects of their operation and through their interactions with users. This can bring reputational harm to the organisations responsible for the systems and, at worst, cause real damage to users, bystanders, and the general public. As our understanding of the ethical risks posed by AI and autonomous systems is maturing, so too is the development of methodologies for ethical risk and impact assessments, as well as standards for the design and development of systems that adhere to these principles. It is therefore both practical and desirable to perform such assessments when proposing the use of an autonomous system, in order to identify potential hazards that can arise from violations of the principles. This is discussed in Section 2.1 and further guidance can be found in (British Standards Institution 2016).

Key Points

• The introduction of a robotic or autonomous system must be to the benefit of the operators or users of the system. This includes: giving priority to automating (with a robotic or autonomous system) tasks that are potentially harmful (physically, mentally, ethically, etc.) to operators or users, even if this would be more expensive (see High-Level Expert Group on AI 2019 Recommendation 3.2); the consultation of workers during the design and development of robotic or autonomous systems (see High-Level Expert Group on AI 2019 Recommendation 3.3); and ensuring that data collection or monitoring by the system does not become surveillance of the operators, users, or any responsible person.

• Ethical issues are potential sources of hazards, so a hazard assessment should include the assessment of ethical issues. Such an assessment needs to be a specific part of the system's requirements and design. (See, for example, the BSI guide to ethical design and application of robots and robotic systems (British Standards Institution 2016).)

• Autonomous components that make high-risk decisions should be designed and implemented in a way that enables strong Verification & Validation of their choices, for example through the use of Formal Methods. This is particularly necessary to mitigate the difficulty (and safety implications) of checking the autonomous decisions via extensive testing of the robotic system.

• The physical (robotic) part of the system should be developed and assured using appropriate standards or guidelines. Additionally, the choice of materials used in the robot needs to be suitable for the potential hazards (e.g. radiation) in the environment.

• The software architecture of the system should be arranged so that components that are unpredictable or difficult to analyse are not able to directly influence the system's decisions; their output should be checked by more reliable or analysable components. For example, a statistical data-driven machine learning classifier should not be directly connected to actuation, without having the classifications checked by a more analysable 'governor'.



• The overall design of a robotic or autonomous system should aim to maximise the system's effectiveness while also minimising safety risks, risks of failure, operation and maintenance risks, etc. This includes all aspects of the design, such as its mechanical arrangement, hardware and software selection, etc.

• Task and safety requirements need to be clearly traceable in the design, the development processes, and through to the deployed system.

This final point is considered to be of crucial importance to those organisations developing autonomous systems and justifying their use.

What is an Autonomous System?

At its most basic, an autonomous system is one that has the capability to make decisions free from human intervention. This is subtly different to automatic systems or adaptive systems, which can react to changes in their environment without human intervention but do not make decisions. For example, a thermostat can automatically toggle a switch or adapt the heating/cooling in reaction to sensed temperature changes, but we do not consider it to have made a decision. These three concepts can be described as follows.

• Automatic systems are pre-programmed to react to input and are unlikely to have internal models of their environment. They are useful for more predictable situations where the system is required to perform a small number of tasks repeatedly.

• Adaptive systems are tightly linked to (and often driven by) the system's operating environment, using feedback controllers described using differential equations. Such approaches are useful where the system is performing monitoring tasks where the objective is to maintain some state that can be described using calculus.

• Autonomous systems are able to make decisions that require intelligence and situational understanding. These decisions will take the environment into account, but the system decides what to do based on its internal priorities and goals. They are useful in open environments where there is a large range of possible decisions to be made, often based on uncertain input.

Broadly, autonomous robotic systems can be described as:

• semi-autonomous, where the "robot and a human operator plan and conduct the task, requiring various levels of human interaction" (Standing Committee for Standards Activities of the IEEE Robotics and Automation Society 2015), or;

• fully-autonomous, where the robot performs a task "without human intervention, while adapting to operations and environmental conditions" (Standing Committee for Standards Activities of the IEEE Robotics and Automation Society 2015).

A newly deployed robotic system might be designed to be fully-autonomous, but often a human-controlled system will be adapted to become (semi or fully) autonomous. As a simple example of an autonomous system, consider a robot vacuum cleaner. An automatic cleaner will follow exactly the same path around a room, regardless of the different environmental conditions. An adaptive cleaner could be programmed to spend longer on the dirtiest areas. An autonomous cleaner could choose either of the above, but could also choose not to do any cleaning, perhaps because it knows that someone in the house is sleeping and it has decided that the cleaning noise will wake them, or because waiting until tomorrow to clean (when no one is in the house) will provide better efficiency.

There are various frameworks describing the different levels of autonomy that a system might have, with no single framework being universally adopted. They can be useful for describing a system's capabilities, but it is important to be clear which levels of autonomy you are referring to.



The original levels of autonomy were defined for undersea teleoperation systems (Sheridan and Verplank 1978), though they are worded rather neutrally so may have wider applicability, and were revised in (Endsley 1999). The levels of autonomy developed by SAE International for driverless cars (On-Road Automated Driving (ORAD) committee, SAE International 2018) are often adapted to suit the deployment context of the autonomous system, but this is by no means ideal. There have been other efforts to create frameworks that are specific to particular deployment contexts, for example spacecraft (Proud, Hart, and Mrozinski 2003); or to be generic and applicable to all autonomous systems, for example (Huang et al. 2005; Beer, Fisk, and Rogers 2014). A detailed review of different frameworks for levels of autonomy can be found in (Beer, Fisk, and Rogers 2014, Sect. 3).

Whichever 'level' of autonomy a system has, it is important to note that autonomy can be achieved in different ways and that the chosen approach has implications for the Verification & Validation techniques that can be applied to the system's decision-making. For safety-critical deployments, autonomy should be achieved by means that enable strong Verification & Validation methods. We discuss this in more detail in Section 2.3.

Document Structure

The rest of this white paper is arranged as follows. Section 2 discusses principles that we consider to be prerequisite to those discussed in later sections, but are beyond the scope of this document to describe in detail. Section 3 describes general principles for robotic systems intended for use in hazardous environments. Section 4 describes principles to consider when developing and deploying a human-controlled system, and Section 5 describes principles for autonomous systems.

Sections 3, 4, and 5 each build on the previous section. For example, when developing an autonomous system, the principles in Section 5 should be considered alongside those in Sections 3 and 4.



2 Prerequisite Concepts

The scope of this document is the development and assurance of autonomous systems. However, there are principles that we consider to be prerequisite to the development and the deployment of an autonomous system. These principles are also very complex, so this section aims to introduce them and guide the reader to more detailed information.

2.1 Ethical Impact

All technologies have ethical impacts as they are introduced into society. The integration of complex computational and robotic systems into workplaces is raising a number of ethical issues which, even where these do not translate directly into hazards, have impacts on user trust and societal acceptability. The analysis and assessment of the ethical impact and risks posed by the introduction of autonomous systems is a fast-moving area. While standards for the ethical development and deployment of AI and autonomous systems are in development by organisations such as the IEEE, most of these have yet to be published. However, BS 8611 ethical design and application of robots and robotic systems (British Standards Institution 2016), published in 2016, provides a standards-based framework for ensuring that ethical impacts are understood and accounted for when proposing the introduction of an autonomous robotic system. The development of similar standards is already underway.

BS 8611 uses the terminology of ethical hazards: a potential source of ethical harm, which is anything likely to compromise psychological and/or societal and environmental well-being. These ethical hazards describe a number of recognised ethical impacts that can arise from the development and deployment of robotic systems. We adopt this terminology of ethical hazard here, but this should not be interpreted as implying an insistence that systems adhere to BS 8611 specifically.

Any hazard assessment should include the assessment of ethical hazards. Like other hazards (such as safety hazards), it is important that ethical hazards are identified early and designed out of the final system. Ethical hazards that make it into the final system become more difficult to correct; they are 'baked in'. For example, hidden bias in a data set used to train a machine learning component can become an integral part of the system's decision-making process, thereby producing systemically biased decision-making.

Ignoring ethical hazards, or leaving them to be discovered or dealt with after the system has been deployed, is likely to be costly and less effective. The harms these hazards can cause can directly impact the system's safety, particularly where they affect the way that the operators or users interact with the system. Ethical hazards that make it into the final system can also cause huge reputational damage to the system's developing and/or operating organisation. Finally, not addressing ethical hazards is, by definition, unethical.

The ethical hazards will change for different systems and different people, so a comprehensive assessment of the ethical hazards of the specific system with its operators and users should be performed. Defining the principles and process for this assessment is outside the scope of this document; this sub-section presents suggestions for both principles and process, but leaves the developer the flexibility to choose the most suitable route.

Before development, the impact of introducing a robotic or autonomous system needs to be analysed, to ensure that it would be to the benefit of the operators or users of the (current or proposed) system. For example, priority should be given to automating (with a robotic or autonomous system) tasks that are potentially harmful (physically, mentally, ethically, etc.) to operators or users, even if this would be more expensive (High-Level Expert Group on AI 2019 Recommendation 3.2). To ensure that the system would benefit the operators or users, they should be meaningfully consulted during the design and development of the system (High-Level Expert Group on AI 2019 Recommendation 3.3).

The standard BS 8611 ethical design and application of robots and robotic systems (British Standards Institution 2016) outlines a process of identifying the potential ethical hazards and then determining the ethical risks1 (British Standards Institution 2016 Sect. 4).

1. An ethical risk is the probability of ethical harm occurring, from the frequency and severity of exposure to a hazard.

A first step in understanding the potential ethical hazards posed by a system is identifying the relevant ethical principles to which the system should adhere. Listing the ethical principles to use in the assessment of every possible robotic or autonomous system would not be useful. BS 8611 recommends that the relevant ethical principles should be "identified and defined by engaging with end users, specific stakeholders and the public". Nevertheless, numerous sets of suggested principles exist that can be used as starting points. A recent survey of standards and guidelines on ethics for artificial intelligence found that they had converged on the principles of transparency, justice and fairness, non-maleficence, responsibility, and privacy (Jobin, Ienca, and Vayena 2019). The same survey found a further six principles that were present in some of the existing ethics guidelines: beneficence, freedom and autonomy, trust, sustainability, dignity, and solidarity.

BS 8611 itself describes some general ethical principles that can be used in addition to a bespoke assessment (British Standards Institution 2016 Sect. 5). They are divided into four ethical issues, each comprising several ethical hazards. First are societal issues: loss of trust in the system, deception, anthropomorphisation, privacy and confidentiality, lack of respect for cultural diversity and pluralism, robot addiction, and employment. Next are application issues: misuse, unsuitable use, dehumanisation of users and operators, inappropriate "trust" of a human by the robot, and self-learning systems exceeding their remit. Then, commercial or financial issues: approbation of legal responsibility and authority, employment, equality of access, learning by autonomous robots, informed consent, and informed command. Finally, environmental issues: environmental awareness for both the robot and appliances, and the operations and applications.

BS 8611 highlights methods of mitigating, validating and verifying (the absence of) these types of hazards. Mitigation methods include things like including a particular principle in the system's design, or providing a particular type of information to operators or users. Validation and verification includes well-known techniques like user validation, compliance testing, and software verification; as well as techniques requiring wider expertise, such as structured assessments of economic, social, and legal impacts.

We reiterate that these are suggested starting points for the identification and mitigation of ethical hazards. A thorough analysis is needed for each system in its final context, which should be repeated if the context (mission, users, operators, software, hardware, etc.) changes.

2.2 Materials Suitability

To affect the real world, an autonomous system is usually embedded in a robotic system. This gives it a physical presence and the ability to interact with its surroundings, but leaves the system vulnerable to hazards in its environment. So, the materials used in the robot need to be assessed for their suitability to the hazards present in the deployment environment. It is important to note that this covers both the materials used to build the robot's structure and casing, and the materials used in its internal, electrical, and mechanical components.

For example, deployment in a radioactive environment will influence choices of materials used in a robot. Different types of radiation will have different effects on different materials. This can impact a wide range of components, from the metals used to construct the robot, to the plastics used for electrical insulation, and even the lubricants used for moving parts. There is also the potential for radiation to affect the state of a robotic system's electronic components, so the likelihood and impact of this should also be analysed.

In this example deployment, the types of radiation that are expected in the robot's environment should be analysed to identify how they will impact the materials used in every part of the robotic system. This analysis should be used, alongside an agreed Mission Definition (detailing mission duration, forecast dose rate, operational requirements, etc.), to determine which materials are suitable for the deployment environment. A detailed description of how long each part of the robotic system can be expected to last under particular radioactive conditions should be compiled. This can be used by runtime monitors (see Sections 3.3, 4.3 and 5.3) to provide 'health' information to a user, operator, and/or autonomous system.

2.3 Verification and Validation of Autonomous and Robotic Software

There are a range of techniques for checking that a software system functions correctly. Much of an autonomous software system (in particular an autonomous robotic system) may use well-understood software and algorithms. As such, it is expected that relevant, current good practice is observed during the system's development. This may include current guidelines, standards, or regulator advice that is applicable to the system and its intended use. Examples of software verification techniques already in use for robotic systems include: standardised or restricted middleware architectures, software or physical testing and simulations, domain-specific languages for specifying checkable constraints, graphical notations for designs, and generating code using model-driven engineering approaches (Luckcuck et al. 2019, Sect. 2). However, systematic testing of autonomous robotic systems can be challenging.

2.3.1 Verification & Validation Challenges from Autonomous and Robotic Systems

Physical tests with a robotic system can be dangerous in early phases of development, and are often difficult or time-consuming to set up or run. As such, greater reliance on approaches based on simulation and code analysis may be necessary, even though these may lack the fidelity of the actual operating environment. Even so, there is some evidence that even a low-fidelity simulation of a robot's environment can reproduce bugs that were found during field tests of the same robot; Sotiropoulos et al. (2017) found that of the 33 bugs that occurred during a field test of a robot, only one could not be reproduced in their low-fidelity simulation.

Autonomous systems are often designed to be used in (partially) unknown situations, where the autonomy enables the system to operate when plans cannot be made beforehand. Some techniques for implementing autonomy create artefacts that are opaque and difficult to analyse, and which may find solutions that are surprising to humans. In many situations this very ability to behave flexibly in unexpected ways is a requirement of the software and the reason for its development, but it clearly presents challenges to the analysis of the risks of deploying the system.

For many autonomous systems it may be possible to limit the number of components that are difficult to analyse. This reduces the amount of mitigation, related to these components, required to argue that the system is adequately safe. It may also be possible to pair a component that is difficult to analyse with a monitor that checks that component's outputs against its expected behaviour or range of behaviours. If the component's behaviour differs from the expectation, then the monitor can take some mitigating action, for example: logging, alerting a user or operator, or some automatic remedial behaviour. In the case of a monitor triggering remedial behaviour, this could be considered runtime enforcement. The system's expected behaviour could be described in a formal specification or as a more traditional software artefact. The key feature of a monitor is that it should be simpler and more analysable than the component it is monitoring. Also, it may be appropriate for a monitor to run on hardware that is separate from the component it is monitoring, to safeguard against a hardware error affecting both the component and the monitor. These software engineering and architecture techniques are to be encouraged where they can help improve analysability or overall safety.
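As a minimal sketch of this pattern, the following Python fragment pairs a hypothetical, difficult-to-analyse classifier with a simple, analysable monitor (a 'governor') that checks its output before it can influence actuation; all names and thresholds here are illustrative assumptions.

```python
# A minimal sketch of a 'governor' between an opaque autonomous
# component (a hypothetical perception classifier) and actuation.

from dataclasses import dataclass

@dataclass
class Classification:
    label: str        # e.g. "path_clear" or "obstacle"
    confidence: float # classifier's self-reported confidence, 0..1

# Labels allowed to trigger motion, and the minimum confidence
# required; both are illustrative values for this sketch.
SAFE_LABELS = {"path_clear"}
MIN_CONFIDENCE = 0.9

def governor(c: Classification) -> str:
    """Map a raw classification to an actuation command.

    The governor is deliberately simple and analysable: any
    classification it cannot positively justify results in a
    conservative 'stop' command, so a misclassification cannot
    directly cause unsafe motion."""
    if c.label in SAFE_LABELS and c.confidence >= MIN_CONFIDENCE:
        return "proceed"
    return "stop"  # conservative default for anything unexpected

# A low-confidence or unexpected classification never causes motion.
assert governor(Classification("path_clear", 0.95)) == "proceed"
assert governor(Classification("path_clear", 0.5)) == "stop"
assert governor(Classification("obstacle", 0.99)) == "stop"
```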

Aside from the challenges that autonomous software presents, it also provides the opportunity to examine a system's decision-making mechanisms. Especially where an autonomous component is taking over from a previously human-made decision, this can play a key role in maintaining or improving system safety in a demonstrable way. Ignoring the chance to interrogate the decisions an autonomous system is making would be a waste of this opportunity.

The choice of technique to implement autonomy impacts the kinds of Verification & Validation techniques that can be applied to it – as mentioned in Section 1. Autonomous software is often implemented in a different programming paradigm to procedural or object-orientated programs. For example, agent-based autonomy is often a collection of guarded actions that the system chooses from at runtime, based on what is perceived in the environment. Machine learning techniques may enable a system to derive and exploit complex statistical relationships between inputs, sequences of actions, and results. Sometimes these relationships, once learned, can be expressed as clear rules which can then be analysed in traditional ways, but in some cases these relationships are too complex to be meaningfully expressed in this way. Clearly, uses of machine learning that produce analysable results are preferable in terms of assurance, but where this is not possible, consideration must be given to how incorrect derivations can be detected and mitigated.

Adaptive autonomous systems are often implemented using feedback control approaches, which usually use differential equations to model the system being controlled when the feedback controller is being tuned. Three key properties that should be checked for these approaches are: (1) that the model is validated against the real world, and that it is based on valid data; (2) that any abstractions in the implementation of the feedback controller are valid; and (3) that the feedback controller is robust against variations in the system's performance, and can cope with any differences between its model and the real system.

Many autonomous robotic systems have their perception units and/or control units implemented with Deep Neural Networks (DNNs). Techniques are being developed for the verification and testing of deep learning (a survey of the current state of the art can be found in (Huang et al. 2020)). A key property that should be considered when making a case for the correctness of a DNN is whether two inputs to the classifier that appear identical to a human observer can be classified differently by the classifier. Clear definitions that attempt to capture this property exist, as do approaches to verifying (either formally or via testing) that they hold. Therefore, any safety case for an autonomous system that includes such a classifier should include evidence that the classifier conforms to one of these definitions.
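The following minimal sketch illustrates a testing-based (rather than formal) version of such a check, using a stand-in classifier function; sampling perturbations around an input provides evidence, not proof, that the property holds.

```python
# A minimal sketch of testing-based local robustness checking:
# sample small perturbations around an input and check that the
# predicted label does not change. In practice the classifier
# would be the system's trained DNN, not this stand-in.

import random

def classifier(x: list[float]) -> str:
    # Stand-in for a trained model: classifies by the sum of inputs.
    return "clear" if sum(x) < 1.0 else "obstacle"

def is_locally_robust(x: list[float], epsilon: float,
                      samples: int = 1000) -> bool:
    """Return True if no sampled perturbation within an L-infinity
    ball of radius epsilon changes the classification of x.

    Formal DNN verification tools would be needed for a guarantee
    over the whole ball; sampling only gathers evidence."""
    original = classifier(x)
    for _ in range(samples):
        perturbed = [xi + random.uniform(-epsilon, epsilon) for xi in x]
        if classifier(perturbed) != original:
            return False
    return True

print(is_locally_robust([0.1, 0.2, 0.1], epsilon=0.01))  # far from boundary
print(is_locally_robust([0.3, 0.3, 0.35], epsilon=0.1))  # near the boundary
```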

2.3.2 Formal Methods

The most rigorous Verification & Validation techniques are referred to as formal methods: mathematical and logical techniques for defining desired system properties, and the analysis of specifications, designs and even source code using mathematical proof-based techniques. Formal methods can also be used to generate provably correct source code from specifications and designs. These techniques can be strategically applied to minimise the likelihood of introducing faults during software development.



There are many examples of formal specification and verification applied to robotic and autonomous systems in the research literature (Luckcuck et al. 2019), but there are notable examples of their application in industry as well (Woodcock et al. 2009). One potential use for formal methods is to provide an unambiguous specification of the system's requirements. This enables the verification of a design against its requirements. Formal specifications are often built by examining a natural-language specification and corresponding with domain experts, to fill in the gaps. Formalising requirements was found to be the most often used technique in the industrial projects surveyed in (Woodcock et al. 2009). Clear and unambiguous specifications, linked to designs, can also help show where opaque and difficult-to-analyse parts of the system can be eliminated, and where they are truly necessary.

Formal methods can also be used to perform rigorous static and dynamic analysis. The static analysis technique most often used in the research literature is model checking (Luckcuck et al. 2019), which is an automatic process that exhaustively checks if a property holds in every state of a formal system specification. Some model-checkers accept timed or probabilistic specifications, and program model-checkers can check a program against a formal specification. There are also statistical model-checkers that, in a similar way to statistical testing, take samples of the available paths through a specification. This can enable the checking of very large specifications.
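The following minimal sketch illustrates the core idea of explicit-state model checking on a toy transition system; real model-checkers add specification languages, temporal logics, and state-space reduction techniques, and the toy system and property here are illustrative assumptions.

```python
# A minimal sketch of explicit-state model checking: breadth-first
# exploration of every reachable state of a small transition system,
# checking that a safety property (an invariant) holds in each one.

from collections import deque

# Toy specification: a robot's speed steps up or down by 1, bounded
# between 0 and 5. States are integers; successors() is the
# transition relation.
def successors(speed: int) -> list[int]:
    return [s for s in (speed - 1, speed + 1) if 0 <= s <= 5]

def invariant(speed: int) -> bool:
    return speed <= 3  # the safety property: speed never exceeds 3

def model_check(initial: int) -> int | None:
    """Exhaustively explore reachable states; return a violating
    state (a counterexample) if one exists, else None."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return state  # counterexample found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

print(model_check(0))  # prints 4: the toy design violates the property
```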

Statistical AI approaches, such as deep neural networks, can also be analysed by statistical model-checking, which can be useful even where absolute guarantees of behaviour can't be given. Such techniques can help identify worst-case boundaries of the output, and analyse the stability of the system in the face of small changes in input (e.g., whether altering a few pixels in an image might cause a drastic change in the resulting analysis – for instance, interpreting a red traffic light as a green one).

For dynamic analysis of a system, runtime verification can be used: this is the style of monitoring described above, where the system's behaviour is compared to a formal specification. If the system's behaviour differs from the specification, then the monitor can log the failure, alert the user, or trigger mitigating actions. An extension of this idea is predictive runtime verification, where a formal description of the system is used to predict the satisfaction or violation of a property. If the property will continue to be satisfied, then the monitor can be removed to save system resources; if the property will be violated, then action can be taken to prevent it.

In general, runtime verification bridges the reality gap (between a model and the real world) by checking formal properties and assumptions at runtime. Runtime verification has the potential to mitigate the risks involved in incorporating unpredictable or statistical techniques into autonomous software, by providing guarantees that a component's behaviour will remain within some guaranteed safe envelope, while enabling the creativity of the statistical technique to find solutions within that envelope. As previously mentioned, a runtime monitor might be extended to trigger runtime enforcement of safety properties.
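As a minimal sketch, the monitor below checks a simple temporal property ("every request is acknowledged within a bounded number of steps") over a stream of events; the event names and the bound are illustrative assumptions.

```python
# A minimal sketch of runtime verification: a small state machine
# fed one event per control cycle, checking the property
# "every 'request' is followed by an 'ack' within ACK_DEADLINE steps".

ACK_DEADLINE = 3  # steps allowed between a request and its ack

class RequestAckMonitor:
    def __init__(self) -> None:
        self.pending_since: int | None = None  # step of open request
        self.step = 0

    def observe(self, event: str) -> bool:
        """Feed one event; return False as soon as the property is
        violated (an unacknowledged request passed its deadline)."""
        self.step += 1
        if event == "request" and self.pending_since is None:
            self.pending_since = self.step
        elif event == "ack":
            self.pending_since = None
        if (self.pending_since is not None
                and self.step - self.pending_since > ACK_DEADLINE):
            return False  # violation: trigger logging or enforcement
        return True

m = RequestAckMonitor()
trace = ["request", "idle", "ack", "request",
         "idle", "idle", "idle", "idle"]
print([m.observe(e) for e in trace])
# -> [True, True, True, True, True, True, True, False]
```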

Using formal methods is not always possible or practicable, but they are a useful part of the toolkit for verifying autonomous systems. There are also a range of less formal but still rigorous approaches to the analysis and testing of software that help identify and eliminate bugs, in order to provide high degrees of assurance of system behaviour.

2.3.3 Verification & Validation Tools

The development of software for autonomous systems relies upon effective use of appropriate Verification & Validation tools, such as debuggers, test frameworks, simulators, and formal methods tools, as part of a well-structured, rigorous, modern process that follows a defined software engineering lifecycle. These are software tools that do not make it into the final software but are used during development to support testing and assurance activities, to demonstrate that the system's software is correct. A key concern is being able to establish the suitability of the Verification & Validation tools that are used, and to show that they cannot unintentionally undermine the assurance they are being used to provide.



In assessing the suitability of a Verification & Validation tool, consider how it might fail and the consequences of a failure. The types of failures are likely to be dependent on the type of tool. The following non-exhaustive examples demonstrate some of the types of failure that could occur:

• a software testing framework might be vulnerable to similar programming errors as the language it is testing, like confusing "=" and "==";

• a simulator might not accurately account for certain physical measurements, such as friction; or,

• the state space of a specification might be too big for the model checker (or rather the hardware running the model checker) to cope with.

If the consequences of a Verification & Validation tool failing only have a small impact on the system's safety assurance, then a fairly untested tool might be acceptable. If, as is more likely, the consequences could have a high impact on the assurance of the safety of the system or component, then prevention or mitigation measures should be considered. The categorisation of the impact of a Verification & Validation tool failing, and of the appropriate prevention or mitigation measures, will depend on things like the system's deployment environment, mission, and applicable regulatory regime. For example, the part of the tool that is known to fail to identify errors could be avoided; if there are types of known error that a tool fails to identify, these could be detected by an independent tool or technique; or, for higher levels of assurance, more than one third-party accredited Verification & Validation tool could be used, with comparison of the types of errors detected.

2.4 Mixed-Criticality Systems

Real-time and embedded systems are increasingly often composed of components with different levels of criticality on the same hardware platform, hence mixed-criticality systems. Here, criticality means the level of assurance against failure that a given system component needs. Different definitions of these criticality levels exist, and they may be named, for example, Automotive Safety and Integrity Levels, Design Assurance Levels (or Development Assurance Levels), or Safety Integrity Levels. The use of mixed-criticality systems is often driven by strict non-functional requirements, such as weight, heat generation, or power consumption. For a comprehensive review of mixed-criticality systems research see (Burns and Davis 2018).

In Sections 3, 4, and 5 we recommend analysis of the criticality of system components, to enable the strongest Verification & Validation methods (such as Formal Methods) to be used on the most critical components. This suggestion aims to ensure that the most critical components are assured against failure, without requiring the entire system to be (for example) formally verified.

To avoid confusion with nuclear criticality, in the rest of this document we refer to the criticality of system components as their 'importance to the system's safety'. This can still be thought of in terms of Automotive Safety and Integrity Levels, Design Assurance Levels (or Development Assurance Levels), or Safety Integrity Levels.



3 General Principles for Safety-Critical Robotic Systems

This section introduces general principles to consider for robotic systems that are deployed in hazardous environments. These principles focus on the physical aspects and deployment context of the system. They should be considered alongside those in Sections 4 and 5, as appropriate to the system.

3.1 Design

1. Identify the importance to safety2 of each software and hardware component, then use the most robust design methods for the most important.

This can produce a design that can have its safety assured, while focussing robust methods on the components that are key to the system's safety. (See 3.2(1), 3.3(1) and 4.2(1)).

2. The facility may need to be physically extended to accommodate the robotic system. This may include considering the storage and maintenance of the robot, and how it is transported between these areas and its deployment environment.

3. If the design of the system requires that it, its components, or any debris may be removed from the deployment environment, this needs to be assessed for potential contamination.

4. The design needs to describe the decommissioning approach for the system. It may be that parts of the system can be decontaminated and removed, or the system may need to be disposed of 'whole'. This element of the design needs to take into account the environmental impacts of the decommissioning strategy.

2. See Section 2.4.


3.2 Verification and Validation

1. Use strong Verification & Validation techniques, such as Formal Methods, at early stages of the system's lifecycle. These can be used to prototype for a task or a software component that is highly important to the system's safety, while ensuring (if Formal Methods are used) that the specification of the task or component is rigorous.

2. Introduce simulation-based testing to check the system's integration and to find statistically unlikely states that cause a failure. It is crucial that the simulation's limitations are well understood and that the differences between it and the actual system are adequately managed. Simulations will, ideally, include high-fidelity controls that enable the assessment of how a user's or operator's actions impact the system's behaviour.

3. Demonstrate by several methods – which should include physical testing – that the system operates as specified, in the absence of potential hazards. It is important to note that physical or software testing will struggle to be exhaustive, which is why it needs to be combined with or preceded by Verification & Validation techniques that can be exhaustive (such as Formal Methods). An example of using different Verification & Validation techniques to corroborate each other can be found in (Webster et al. 2020).

3.3 Operation

1. Verifiably correct runtime monitors could be used to perform 'online requirements checking', comparing the system's operational behaviour to its requirements. The behaviour of the whole system could be monitored, or the monitoring could be focussed on the components most important to the system's safety3, as described in 3.1(1). If the requirements are no longer being met, then the system could: move to a safe state until the problem is resolved (see 3.3(2)), report the failure to a responsible person, or log the failure.

2. For components that are important to the system's safety3, runtime monitors could be extended to provide runtime enforcement. For example, if the monitor finds that the system is no longer meeting its safety requirements, then it could trigger a safe state until the problem is resolved (see 4.1). This could include enforcing limits on the speed of movement or force applied by the system, even when a user attempts to exceed these limits.

3. In addition to any physical barriers used to contain the system, its software can be used to create logical barriers (this is related to 3.3(2)). For example, monitoring a robot's position and preventing it from moving forward once it enters a 'buffer' zone around the physical barrier. This has the advantage of reducing the likelihood of a robot becoming stuck at a physical barrier, which would require manual intervention to remedy. Validating the logical barriers is important and highly context dependent. One example of validating logical barriers could involve analysing the reaction and braking time of the robot at its maximum speed and ensuring that the logical barrier is far enough away that it won't hit the physical barrier.
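A minimal sketch of that braking-time analysis follows, with illustrative parameter values (which are assumptions, not recommendations for any particular system).

```python
# A minimal sketch of validating a logical barrier: compute the
# worst-case stopping distance (reaction delay at full speed, plus
# braking under constant deceleration) and check that the buffer
# zone between the logical and physical barriers is at least that deep.

def stopping_distance(v_max: float, reaction_time: float,
                      deceleration: float) -> float:
    """Worst-case distance travelled after a stop is commanded:
    v * t_reaction plus the braking distance v^2 / (2a)."""
    return v_max * reaction_time + v_max ** 2 / (2 * deceleration)

V_MAX = 1.5         # m/s, robot's maximum speed
REACTION = 0.2      # s, sensing + software reaction delay
DECEL = 2.0         # m/s^2, guaranteed braking deceleration
BUFFER_DEPTH = 1.0  # m, distance from logical to physical barrier

required = stopping_distance(V_MAX, REACTION, DECEL)
print(f"worst-case stopping distance: {required:.3f} m")  # 0.862 m
assert BUFFER_DEPTH >= required, "buffer zone too shallow"
```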

3. See Section 2.4.



3.4 Maintenance, Recovery and Decommissioning

1. To enable maintenance, recovery, and decommissioning, ensure either safe human access to, or safe removal of, the robotic system from its hazardous environment.

This could include access to the system's deployment environment (while powered down) or the ability to remove the physical system from the deployment environment (if needed or possible). It needs to be demonstrable that this is possible even in the event of some or all of the system failing. This is likely to need multiple techniques, for example the analysis of the system and deployment environment, simulation, formal analysis of the control algorithm, physical testing, etc.

2. If the system, its components, or any debris may be removed from the deployment environment, this needs to be assessed for potential contamination. This requires that the potential for this to happen has been considered, as mentioned in 3.1(3) and 3.1(4).

3. The decommissioning strategy (which should be considered during the design phase, 3.1(4)) should take into account the environmental impacts of decommissioning the system.

4. Modifications to the software or hardware (which may include upgrades, reconfigurations, etc.) need to be assessed against the system's original design and verification artefacts, to ensure that the safety properties (of both the modified component and the whole system) are preserved. Additional monitors, or modifications to existing monitors, may be required.

5. Components should be added to the system to allow it to monitor its own 'health'. This can be used to track its current capability and suitability for the task, and to predict future (hardware or software) failures. This is useful for issues like hardware wear and tear, augmenting the human ability to consider replacing a component when it 'feels like it's about to break' with data- or model-based prognostics.

It needs to be provable that the data collection and monitoring does not become surveillance of any human interacting with the system.



4 Principles for Human-Controlled Robotic Systems

In addition to the principles in Section 3, this section outlines the principles to consider for robotic systems where there is significant human-in-the-loop control. This can range from remote control, where a "human operator controls the robot on a continuous basis, from a location off the robot, via only [their] direct observation" (Standing Committee for Standards Activities of the IEEE Robotics and Automation Society 2015); to telecontrol, where a "human operator, using sensory feedback, either directly controls the actuators or assigns incremental goals on a continuous basis, from a location off the robot" (Standing Committee for Standards Activities of the IEEE Robotics and Automation Society 2015)4.

The principles below are organised into three sections: Design, Sect. 4.1; Verification & Validation, Sect. 4.2; and Operation, Sect. 4.3. The principles are numbered for identification, not to indicate ordering.

4.1 Design

1. The design of the robotic system's working environment should contain similar safety constraints to a conventional (non-robotic) deployment environment, to mitigate the foreseeable hazards caused by the task itself. This includes re-design of an existing working environment to accommodate a robotic system. Where an existing human working environment is being re-designed to accommodate human-controlled robotic systems, attention should be paid to the potential difference in speed, range of movement, and strength of the robotic system in comparison to humans.

4. The key difference between remote control and teleoperation is whether the operator is directly observing the robot, with their eyes; or indirectly observing the robot, via sensors.


2. Designing the task(s) that the robotic system will perform may involve simply replicating how a human worker does it, but the task may need re-designing to, for example, accommodate the differences between robots and humans, or to improve efficiency.

The robotic system should perform a task demonstrably as safely as a trained user or operator. Additional safety considerations (e.g. failsafe states and/or passive measures) are likely to be necessary.

3. Multiple, parallel safety systems could be designed, to provide the required level of assurance (e.g. through defence-in-depth). These could be a combination of physical, hardware, and software systems.

4.2 Verification & Validation

1. Use the most robust Verification & Validation methods for the most critical hardware and software components. This requires the identification of those components most important to the system's safety5, as described in 3.1(1). For example, if the system's design has identified the need for components that limit the robot's speed of movement or potential to apply force, then these components should be candidates for robust Verification & Validation.

This approach enables system safety assurance, whilst prioritising the components that are key to the system's safety when choosing where to apply robust Verification & Validation. (Also see 3.3(1).)

2. Demonstrate (via human factors assessments and/or user evaluation studies) both user competence and user confidence in the new remote-controlled system.

3. If a robotic system is replacing an existing system (robotic or otherwise), then the safety boundaries and constraints of the new system should be assessed to ensure that no additional safety concerns have been introduced and that existing safety mitigations remain valid. This should be done before deployment of the new system.

4. Users and operators should only be in physical proximity to the robot after it has been demonstrated that the robot cannot harm them under any circumstance, including failures.

4.3 Operation

1. Information about the use of a robotic system could be logged to, for example, help model how a task is performed safely or highlight areas where new safety features could be useful. This information could also be used to validate how an autonomous system performs the task. However, this data collection needs to be transparent to the humans involved and only be collected with informed consent (see 4.3(2)).

2. It should be demonstrable that any data collection or monitoring does not become surveillance of any operator, user, or any other human interacting with the system. Any monitoring by or of the system needs to be transparent to the humans involved and only be collected with informed consent.

5. See Section 2.4 for a discussion of this concept.



5 Principles for Autonomous Robotic Systems

In addition to the principles in Sections 3 and 4, this section outlines the principles to consider for systems ranging from semi-autonomous up to fully-autonomous. In a semi-autonomous robotic system the "robot and a human operator plan and conduct the task, requiring various levels of human interaction" (Standing Committee for Standards Activities of the IEEE Robotics and Automation Society 2015). This works best where well-defined tasks are delegated by the operator to the autonomous system. In a fully-autonomous robotic system, the robot performs a task "without human intervention, while adapting to operations and environmental conditions" (Standing Committee for Standards Activities of the IEEE Robotics and Automation Society 2015).

A common route for the introduction of an autonomous robotic system is to begin with a remote-controlled system, where a human operator is still making the key decisions, and steadily delegate tasks or decisions to autonomous components of the system. This staged approach has the advantage of only automating small sets of tasks or decisions at a time, which can aid Verification & Validation and monitoring efforts, as well as slowly building the confidence of users and operators in the system's safety. As previously mentioned, priority should be given to automating tasks that are potentially harmful (physically, mentally, ethically, etc.) to operators or users, even if this would be more expensive (High-Level Expert Group on AI 2019 Recommendation 3.2). To ensure that the system would benefit the operators or users, they should be meaningfully consulted during the design and development of the system (High-Level Expert Group on AI 2019 Recommendation 3.3). Staged introduction of a robotic or autonomous system is likely to involve data collection and monitoring, to check the system is operating correctly and safely, but this should not become surveillance of the operators, users, or any responsible person.

The principles below are organised into three sections: Design, Sect. 5.1; Verification & Validation, Sect. 5.2; and Operation, Sect. 5.3. The principles are numbered for identification, not to indicate ordering.


5.1 Design

1. The guiding principles of an autonomous system's design should be that it improves the well-being and safety of front-line workers, by complementing their skills; and that it is demonstrably trustworthy and adequately safe in the eyes of those workers. This will involve ongoing consultation with front-line workers about which task(s) would most usefully (for them) be delegated to an autonomous system.

This is in addition to ensuring that the system's required level of safety can be demonstrated to the regulators.

2. If an autonomous system is being introduced to control a robotic system, then it should (initially) make use of existing physical barriers to contain the robot, to provide a final line of defence if the software and hardware safety systems fail.

3. The introduction and implementation of autonomy, or autonomous components, should focus on the system's transparency and amenability to strong verification (for example, Formal Methods, formally specified runtime monitors, etc.). Opaque statistical AI techniques (e.g. data-driven machine learning) should only be used where they are necessary.

4. An autonomous system should maintain the remote-control option as a backup, so that the system can be operated or made safe if the autonomous software fails. This implies that the remote-control link is part of the system and should also be demonstrated to be adequately safe.

5. Clearly identify and fully define the task(s) that the autonomous system will perform. This should include, for example: its context and inputs (and any other preconditions), how it performs the task, the expected outputs (and any other postconditions), what might cause the task to fail and how the system should recover, etc. The intent of these task definitions is to make it clear to both the development team and the end users what the system can and cannot do.

6. If the autonomous system is being delegated a task from a human, then the task design process should begin with a clear analysis of how the human operator currently does the job safely. Safety enhancements that are enabled by an autonomous robotic system with higher capabilities than a human operator can be included once the system that copies the human operator has been shown to be acceptably safe.

This has several benefits. First, it involves current front-line workers in the design process, which improves the likelihood that actual safe practice is captured (and not just the documented safe practice). Second, it provides the system's requirements, which are essential for meaningful Verification & Validation (see Section 5.2). Third, the system's actions are more understandable, which can improve front-line workers' trust and their ability to identify failures in performing the task.

7. For each task, design high-level pre- and post-condition (or assume-guarantee) properties to describe what the task assumes to be true before it starts and what it guarantees after it executes, and a low-level procedural description of the task (e.g. a flowchart or state machine).

The design should consider the potential for a task to fail during its operation, and define a fallback task (or tasks) to safely recover from the failure or transition to a failsafe state (for example, see Viard, Ciarletta, and Moreau 2019).
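A minimal sketch of this assume-guarantee pattern wrapped around a task with a failsafe fallback; the function and state names are assumptions for illustration, not a prescribed design:

```python
from enum import Enum, auto

class Outcome(Enum):
    DONE = auto()      # task completed and its guarantee holds
    FAILSAFE = auto()  # task aborted; system moved to a failsafe state

def run_task(world, assumption, task_body, guarantee, fallback):
    """Run a task only when its assumption (pre-condition) holds, check
    its guarantee (post-condition) afterwards, and fall back to a
    failsafe task on any violation or mid-task failure."""
    if not assumption(world):
        fallback(world)             # never start outside the assumption
        return Outcome.FAILSAFE
    try:
        task_body(world)
    except Exception:
        fallback(world)             # recover from a failure during the task
        return Outcome.FAILSAFE
    if not guarantee(world):
        fallback(world)             # guarantee violated after execution
        return Outcome.FAILSAFE
    return Outcome.DONE
```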

8. When designing an autonomous system to take over tasks with a high importance to safety⁶, a decision support system can be used as a prototype for the full system, to validate the task model and decision processes. This could also improve user trust in the autonomous system's decisions, in a way that does not give the software physical control of the task(s).

9. If an unpredictable component (such as a machine learning or evolutionary system) is used, ensure that the goal of the system and its ethical and safety properties (things it must avoid doing or choosing) are well defined beforehand (see 5.3(2)).

6. See Section 2.4 for a discussion of this concept.



10. If an AI Planning system is used to plan the movement (or other actions) from a user-supplied world model, then experts in the operational context should be involved in the design of this world model, because the correctness of the resulting plans depends upon the accuracy of the world model. Formal techniques can be used to analyse the model for consistency and detect ambiguities.
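For illustration, even lightweight automated checks on a world model can surface inconsistencies for the domain experts to confirm or fix; the sketch below assumes a simple map-like model of locations and connections, and is no substitute for full formal analysis:

```python
def check_world_model(locations, connections):
    """Cheap consistency checks on a map-like world model: duplicate
    names are ambiguous, and asymmetric connections are either one-way
    by design or an error that would silently corrupt generated plans."""
    problems = []
    if len(locations) != len(set(locations)):
        problems.append("duplicate location names (ambiguous references)")
    for a, b in connections:
        if a not in locations or b not in locations:
            problems.append(f"connection ({a}, {b}) uses an undefined location")
        if (b, a) not in connections:
            problems.append(f"connection ({a}, {b}) has no reverse edge")
    return problems

# e.g. check_world_model(["bay", "cell"], {("bay", "cell")})
# flags the missing ("cell", "bay") edge for an expert to confirm or fix.
```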

5.2 Verification & Validation

1. For each task, verify that its implementation respects both the high-level (pre- and post-condition) and low-level design (see 5.1(7)).
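One lightweight way to exercise such checks is property-based testing, which searches for inputs that violate a stated contract. A sketch using the (assumed available) Python hypothesis library, with a toy move-towards-waypoint task standing in for a real implementation:

```python
from hypothesis import given, strategies as st

def move_towards(position, target, max_step=1.0):
    """Toy task implementation: take one bounded step towards the target."""
    step = max(-max_step, min(max_step, target - position))
    return position + step

@given(st.floats(-100, 100, allow_nan=False),
       st.floats(-100, 100, allow_nan=False))
def test_move_respects_contract(position, target):
    new_position = move_towards(position, target)
    eps = 1e-9  # tolerance for floating-point rounding
    # Post-conditions: each step is bounded, and never moves away from the target.
    assert abs(new_position - position) <= 1.0 + eps
    assert abs(new_position - target) <= abs(position - target) + eps
```

Testing of this kind complements, but does not replace, the stronger verification techniques discussed in Section 2.3.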

2. Users and operators should only be in physical proximity to the robot after it has been demonstrated that the robot cannot harm them under any circumstances, including failures.

3. If statistical techniques (such as deep neural networks) are used, then they should be analysed, to the extent possible, to identify their worst-case misclassification ratio and their robustness to small changes in input. For example, it should be demonstrated that if a human observer (or other oracle) would classify two inputs (that the neural network has been trained on) as the same (or the same type), then they are also classified as the same (or the same type) by the neural network.
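Formal tools exist for this analysis (see Huang et al. 2020); as a first, much weaker step, a randomised probe can estimate local robustness empirically. In this sketch, classify is an assumed stand-in for the trained network's labelling function:

```python
import numpy as np

def probe_robustness(classify, x, epsilon=0.01, trials=1000, seed=0):
    """Randomised probe (not a proof): search for inputs within an
    epsilon-ball of x that the classifier labels differently from x,
    and report the empirical misclassification ratio near x."""
    rng = np.random.default_rng(seed)
    baseline = classify(x)
    disagreements = 0
    for _ in range(trials):
        noise = rng.uniform(-epsilon, epsilon, size=x.shape)
        if classify(x + noise) != baseline:
            disagreements += 1
    return disagreements / trials  # 0.0 is necessary, not sufficient, for robustness
```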

4. Online learning, that is, systems that continue to use observations of the outcomes of their actions within the environment to modify their operation after deployment, should be avoided unless absolutely necessary.

5. If feedback controllers are used, then it should be demonstrated: (1) that the model of the system (which is likely to be a collection of differential equations) is validated against the real world, and that it is based on valid data; (2) that any abstractions in the implementation of the feedback controller are valid; and (3) that the feedback controller is robust against variations in the system's performance, and can cope with any differences between its model and the real system.
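As one hedged illustration of point (1), the model can be replayed against logged data from the real plant and the worst prediction error compared with an agreed tolerance; the function names here are assumptions for the sketch:

```python
def validate_model(model_step, logged_states, logged_inputs, tolerance):
    """Replay logged data from the real plant through the model (for
    example, discretised differential equations) and report the worst
    one-step prediction error. States are scalars here for simplicity;
    a real system would use a vector norm."""
    worst_error = 0.0
    for k in range(len(logged_states) - 1):
        predicted = model_step(logged_states[k], logged_inputs[k])
        worst_error = max(worst_error, abs(predicted - logged_states[k + 1]))
    return worst_error <= tolerance, worst_error
```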

5.3 Operation

1. The system should monitor its hardware for signs of degradation; this could include excess noise or vibration, etc. This monitoring enables prediction of when components need replacing, and could feed into the system's planning software. This is especially important where an autonomous system will work more intensively than a human, as degradation rates are likely to increase and may be harder to predict.

It needs to be provable that the data collection and monitoring does not become surveillance of any human interacting with the system.
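A minimal sketch of such condition monitoring; the signal, baseline, and threshold are illustrative assumptions, and a deployed system would use validated condition-monitoring techniques:

```python
from collections import deque
from statistics import mean

class DegradationMonitor:
    """Track a rolling window of one hardware health signal (for example,
    vibration RMS) and flag when it drifts past a maintenance threshold.
    It observes machine signals only, never data about people."""
    def __init__(self, baseline, threshold_ratio=1.5, window=100):
        self.threshold = baseline * threshold_ratio  # trip point above healthy level
        self.readings = deque(maxlen=window)

    def update(self, reading):
        self.readings.append(reading)
        return mean(self.readings) > self.threshold  # True: schedule maintenance

# e.g. vibration = DegradationMonitor(baseline=0.2)
#      if vibration.update(sample): feed a maintenance task to the planner
```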

2. If unpredictable software components (such as machine learning, or evolutionary systems) are used (see 5.1(9)), then verifiably correct (for example, by using Formal Methods) runtime monitors should be used to enforce the goal, ethical, and safety properties that have been defined in the design. This ensures that the system remains in an acceptable behavioural envelope. This envelope should be transparent and understood by human operators, users, and the responsible person. This enables human detection of deviations from the behavioural envelope at runtime. (See Mallozzi, Pelliccione, and Menghi 2018 for an example of this approach.)
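The sketch below shows the shape of this monitor pattern only; in practice the monitor itself (unlike this sketch) should be formally verified, and the property names would come from the design (see 5.1(9)). All names here are hypothetical:

```python
def safe_execute(proposed_action, state, safety_properties, fallback_action):
    """Runtime-monitor pattern: every action proposed by an unpredictable
    component is checked against the declared safety properties before
    it reaches the actuators; a violation is reported (so humans can see
    the deviation) and the action is replaced by a safe fallback."""
    for name, holds in safety_properties.items():
        if not holds(state, proposed_action):
            print(f"monitor: property '{name}' vetoed the proposed action")
            return fallback_action
    return proposed_action

# Hypothetical properties, taken from the design:
safety_properties = {
    "speed limit": lambda s, a: a.speed <= s.max_safe_speed,
    "keep-out zone": lambda s, a: a.target not in s.forbidden_zones,
}
```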

3. Autonomous systems often use internal models of the real world to plan behaviour and interact with their environment, such as a pre-built map of the system's environment for navigation and planning. These models are abstractions of the real world, but if the abstractions are wrong or the real world changes, then the model becomes less correct. The system should monitor its environment to ensure that its internal models remain correct. If a change is detected, then the system could attempt to update its model, or alert an operator or responsible person if it is unable to.
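One simple realisation of this principle is to compare observations predicted from the internal model against real sensor data and to escalate when the mismatch grows; the thresholds in this sketch are illustrative assumptions:

```python
def model_divergence(expected_scan, observed_scan, tolerance=0.05):
    """Compare sensor readings predicted from the internal map with the
    readings actually observed; the returned mismatch fraction rises as
    the map goes stale."""
    mismatches = sum(1 for e, o in zip(expected_scan, observed_scan)
                     if abs(e - o) > tolerance)
    return mismatches / len(expected_scan)

# e.g. if model_divergence(expected, observed) > 0.2:
#          try to update the map, or alert the operator if that fails
```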



Authors, Glossary and References

Authors

• Matt Luckcuck, Department of Computer Science, Maynooth University, Ireland⁷

• Michael Fisher, Autonomy & Verification Group, The University of Manchester, UK⁸

• Louise Dennis, Autonomy & Verification Group, The University of Manchester, UK⁹

• Steve Frost, Office for Nuclear Regulation, Bootle, UK

• Andy White, Office for Nuclear Regulation, Bootle, UK

• Doug Styles, Office for Nuclear Regulation, Bootle, UK

Our thanks go to Vince Page and Xiaowei Huang for contributing their expert advice; and to our early reviewers, Xingyu Zhao, Başak Saraç-Lesavre, and Nick Hawes, for their invaluable discussion and comments.

Glossary

ASIL: Automotive Safety Integrity Level.

Assurance: Justified confidence in a property.

Criticality: The level of assurance against failure needed for a system component.

DAL: Design Assurance Level or Development Assurance Level.

Deployment Environment: The environment in which the robotic or autonomous system has been designed to work.

Ethical Risk: The probability of ethical harm occurring, from the frequency and severity of exposure to a hazard. See BS 8611:2016 (British Standards Institution 2016).

Ethical Hazard: A potential source of ethical harm. See BS 8611:2016 (British Standards Institution 2016).

Ethical Harm: Anything likely to compromise psychological and/or societal and environmental well-being. See BS 8611:2016 (British Standards Institution 2016).

Formal Methods: Mathematical approaches to software and system development that support the rigorous specification, design, and verification of computer systems.

Fully-Autonomous Robotic System: A system where the robot performs a task "without human intervention, while adapting to operations and environmental conditions" (Standing Committee for Standards Activities of the IEEE Robotics and Automation Society 2015).

MCS: Mixed-Criticality System.

Mission-Critical: A mission-critical system is one where a failure may lead to large data or financial losses. Examples include deep-sea submersibles, automatic exploration vehicles (for example, the Mars rovers), and other scientific monitoring systems.

7. The majority of Matt Luckcuck's work on this white paper was done while he was employed by the Universities of Liverpool and Manchester. https://orcid.org/0000-0002-6444-9312

8. Michael Fisher: https://web.cs.manchester.ac.uk/~michael

9. Louise Dennis: https://personalpages.manchester.ac.uk/staff/louise.dennis


Operator: A human providing a robotic system with instructions or direction.

Predictive Runtime Verification: An extension of Runtime Verification, where a formal specification of the system is used to try to predict satisfaction or violation of a property.

Responsible Person: A human who has the legal responsibility for a robotic system; see Principles of Robotics (EPSRC website), Rule 5.

Runtime Verification: An approach where the system is monitored, and its behaviour is compared to a formal specification. Action can be taken if the behaviour differs from the specification.

Safety-Critical: A safety-critical system is one where a failure may lead to ecological or financial disaster, serious injury, or death. Examples include medical equipment, cars, aeroplanes, and power plants.

Semi-Autonomous Robotic System: A system where the "robot and a human operator plan and conduct the task, requiring various levels of human interaction" (Standing Committee for Standards Activities of the IEEE Robotics and Automation Society 2015).

SIL: Safety Integrity Level.

User: A human interacting with a robotic system, who may or may not be an employee.

V&V: Verification and Validation.

References

Beer, Jenay M, Arthur D Fisk, and Wendy A Rogers. 2014. "Toward a Framework for Levels of Robot Autonomy in Human-Robot Interaction." Journal of Human-Robot Interaction 3 (2): 74. https://doi.org/10.5898/JHRI.3.2.Beer

British Standards Institution. 2016. BS 8611:2016 - Robots and Robotic Devices. Guide to the Ethical Design and Application of Robots and Robotic Systems. British Standards Institution. http://shop.bsigroup.com/ProductDetail?pid=000000000030320089

Burns, Alan, and Robert Davis. 2018. "Mixed Criticality Systems - A Review." York: University of York. https://www-users.cs.york.ac.uk/burns/review.pdf

Endsley, Mica R. 1999. "Level of Automation Effects on Performance, Situation Awareness and Workload in a Dynamic Control Task." Ergonomics 42 (3): 462–92. https://doi.org/10.1080/001401399185595

High-Level Expert Group on AI (AI HLEG). 2019. "Policy and Investment Recommendations for Trustworthy Artificial Intelligence." European Commission. https://ec.europa.eu/digital-single-market/en/news/policy-and-investment-recommendations-trustworthy-artificial-intelligence

Huang, Hui-Min, Kerry Pavek, James Albus, and Elena Messina. 2005. "Autonomy Levels for Unmanned Systems (ALFUS) Framework: An Update." Edited by Grant R. Gerhart, Charles M. Shoemaker, and Douglas W. Gage, 439. Orlando, Florida, USA. https://doi.org/10.1117/12.603725

Huang, Xiaowei, Daniel Kroening, Wenjie Ruan, James Sharp, Youcheng Sun, Emese Thamo, Min Wu, and Xinping Yi. 2020. "A Survey of Safety and Trustworthiness of Deep Neural Networks: Verification, Testing, Adversarial Attack and Defence, and Interpretability." Computer Science Review 37 (August): 100270. https://doi.org/10.1016/j.cosrev.2020.100270

Jobin, Anna, Marcello Ienca, and Effy Vayena. 2019. "The Global Landscape of AI Ethics Guidelines." Nature Machine Intelligence 1 (9): 389–99. https://doi.org/10.1038/s42256-019-0088-2

Luckcuck, Matt, Marie Farrell, Louise A Dennis, Clare Dixon, and Michael Fisher. 2019. "Formal Specification and Verification of Autonomous Robotic Systems: A Survey." ACM Computing Surveys 52 (5): 1–41. https://doi.org/10.1145/3342355

Mallozzi, Piergiuseppe, Patrizio Pelliccione, and Claudio Menghi. 2018. "Keeping Intelligence Under Control." In 2018 IEEE/ACM 1st International Workshop on Software Engineering for Cognitive Services (SE4COG), 37–40. https://doi.org/10.1145/3195555.3195558



On-Road Automated Driving (ORAD) Committee, SAE International. 2018. "Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles." J3016_201806. SAE International. https://doi.org/10.4271/J3016_201806

Proud, Ryan, Jeremy Hart, and Richard Mrozinski. 2003. "Methods for Determining the Level of Autonomy to Design into a Human Spaceflight Vehicle: A Function Specific Approach." Gaithersburg, MD, United States. http://handle.dtic.mil/100.2/ADA515467

Sheridan, Thomas B, and William L Verplank. 1978. "Human and Computer Control of Undersea Teleoperators." Massachusetts Institute of Technology, Cambridge, Man-Machine Systems Lab. https://apps.dtic.mil/sti/citations/ADA057655

Sotiropoulos, Thierry, Hélène Waeselynck, Jérémie Guiochet, and Félix Ingrand. 2017. "Can Robot Navigation Bugs Be Found in Simulation? An Exploratory Study." In 2017 IEEE International Conference on Software Quality, Reliability and Security (QRS), 150–59. IEEE. https://doi.org/10.1109/QRS.2017.25

Standing Committee for Standards Activities of the IEEE Robotics and Automation Society. 2015. "IEEE Standard Ontologies for Robotics and Automation." IEEE. https://standards.ieee.org/standard/1872-2015.html

Viard, Louis, Laurent Ciarletta, and Pierre-Etienne Moreau. 2019. "Monitor-Centric Mission Definition with Sophrosyne." In 2019 International Conference on Unmanned Aircraft Systems (ICUAS), 111–19. Atlanta, United States. https://doi.org/10.1109/ICUAS.2019.8797898

Webster, Matt, David Western, Dejanira Araiza-Illan, Clare Dixon, Kerstin Eder, Michael Fisher, and Anthony G Pipe. 2020. "A Corroborative Approach to Verification and Validation of Human–Robot Teams." The International Journal of Robotics Research 39 (1): 73–99. https://doi.org/10.1177/0278364919883338

Woodcock, Jim, Peter Gorm Larsen, Juan Bicarregui, and John Fitzgerald. 2009. "Formal Methods: Practice and Experience." ACM Computing Surveys 41 (4): 1–36. https://doi.org/10.1145/1592434.1592436



The RAIN Hub: a radical change to existing nuclear programmes and initiatives.

The RAIN Hub's objectives are to overcome the challenges facing the nuclear industry. Through applying our scientific knowledge and understanding we can improve safety and efficiency in the nuclear sector, benefiting the UK economy and creating a safer working environment.


Department of Electrical and Electronic Engineering, The University of Manchester, Oxford Road, Manchester M13 9PL, United Kingdom

Tel: +44 (0)161 306 2622
Email: [email protected]

www.rainhub.org.uk

@RAIN_Hub