
Page 1

Project Survey/Discussion Question

If a technology was developed that let someone safely improve a select aspect of their cognition (for example: memory, focus, confidence, creativity, etc.), should it be publicly available and would you use it?

A. It should be publicly available and I would use it.
B. It should be publicly available but I would not use it.
C. It should not be publicly available and I would not use it.
D. It should not be publicly available but I would use it illegally.

- Jeff

Lethal Robots & the Ethics of War
COGS 300.002, 21 Jan 2016
Peter Danielson

Page 2

Learning Objectives: Robot Ethics

• Ethics of 2 current controversial cog techs:
  – Autonomous Lethal Weapons
  – Driverless Cars

• From 2 perspectives
  – Philosophical Applied Ethics
  – Engineering

• Skills
  – Critically assess a philosophical argument
  – Evaluate a technology
  – Model a situation as a game

Remote vs. Autonomous Weapons

• Robot Ethics Survey
• Questions 6 & 7

[Bar chart of class survey responses (Yes / Neutral / No), 0–14 votes, for "Arming Remote Controlled Aircraft" and "Arming Autonomous Unmanned Aircraft"]

Page 3

When Humans Violate Laws of War

Then came an unwelcome message from Florida. 4:37 a.m.

Mission intelligence controller: Screener said at least one child near SUV.
Sensor: Bullshit... where? Send me a fucking still [picture]. I don't think they have kids at this hour, I know they're shady, but come on.
Pilot: At least one child... Really? Listing [him as a] MAM [military-aged male], that means he's guilty.
Sensor: Well maybe a teenager, but I haven't seen anything that looks that short, granted they're all grouped up here, but.
Mission intelligence controller: They're reviewing.
Pilot: Yeah, review that shit... Why didn't he say possible child, why are they so quick to call fucking kids but not to call shit a rifle.

… Twenty-three people had been killed, including two boys, three and four years old. Eight men, one woman, and three children aged between five and fourteen were wounded, many of them severely.

Feb. 2010, Southern Afghanistan, Nevada, Florida. Special Forces blamed.
http://www.counterpunch.org/2015/03/06/remember-kill-chain/

Page 4

Permissible Killing & Impermissible Weapons

1. Given the strictest moral framework: Deontology / Laws of War
2. Self-defense is permissible: justified war as organized self-defense
3. Impermissible weapons & actions: landmines & killing prisoners

Deontology / Utilitarian Contrast
Eliz. Anscombe (1958)

Page 5

Just War Constraints

1. Justice of war (jus ad bellum)
   – Need a non-controversial case of righteous war-making:
     • Defense, not aggression
     • Proportional
   – X against Y in ZZZ

2. Justice in war (jus in bello)
   – Permissible means
     • E.g. prohibit directly killing non-combatants, even to save own troops

Quiz 1

According to Sparrow, why might there be issues with having human oversight of autonomous weapon systems (AWS)? (Where a human operator could approve a machine's decision on taking human life.)
A: Range of AWS operations will be decreased with human oversight
B: Communication between the AWS and the operator could be a potential challenge in warfare
C: As AWS improve, human input will slow down the tempo of the system, proving disadvantageous in warfare
D: All of the above
E: A and C
Rohan

Page 6

Rate Quiz Question 1

A. Excellent
B. Very Good
C. Good
D. Acceptable
E. Poor

Quiz 2

As autonomous weapon systems become increasingly autonomous, what does the author state is the only recourse to the problem of assigning blame for the deaths caused by the weapon system?
a. Blame the programmers for their role in creating the machine behavior.
b. Punishing the machines that have limited intelligent capabilities similar to current AI.
c. Assigning the blame to the commanding officer who ordered the machine into combat.
d. None of the above.
Trent

Page 7

Rate Quiz Question 2

A. Excellent
B. Very Good
C. Good
D. Acceptable
E. Poor

Quiz 3

Why shouldn't a programmer be held responsible for the actions of an AWS?
a) The programmers acknowledged the possibility of attacking the wrong targets as a limitation of the system.
b) The programmers aren't the ones that send out the AWS to battle.
c) The possibility that an autonomous system will make choices other than the predicted choices is inherent in the claim that it is "autonomous."
d) A and B
e) A and C
Kennady

Page 8

Rate Quiz Question 3

A. Excellent
B. Very Good
C. Good
D. Acceptable
E. Poor

Quiz 4

Why does Sparrow believe that AWSs cannot become full "moral persons"? Choose the best answer.
(a) It becomes paradoxical when the purpose of AWS is to replace human soldiers but we introduce the same moral concerns for when the robots become capable of emotion and suffering.
(b) While there is a clear sense that they are autonomous, they are not capable of understanding the full moral dimensions of what they do.
(c) The responsibility for their actions must always be assigned to an appropriate individual, whether it be a commanding officer, a programmer, or another human.
(d) Their ability to suffer, like friction in its gears from not being oiled, is not satisfactory for humans to accept.
(e) All of the above
- Hayden

Page 9

Rate Quiz Question 4

A. Excellent
B. Very Good
C. Good
D. Acceptable
E. Poor

Philosophical Ethics: Sparrow's Destructive 5-lemma

• Logic: argument structure

Page 10

Q5

How are the ethical issues that arise from using child soldiers similar to the ethical issues that arise from using robot soldiers?
a. It is difficult to place moral responsibility on the actions of robots and children.
b. Forcing children and human-like robots to take the place of adult human soldiers in battle is unethical.
c. Both robots and children perform actions which cannot be controlled by their commanders.
d. a and b
e. a and c
Kim

Rate Quiz Question 5

A. Excellent
B. Very Good
C. Good
D. Acceptable
E. Poor

Page 11

3 Ways to Criticize Sparrow's Argument

1. Focus on the Principle of Human Responsibility: it does apply…

2. Ethical: backtrack to Utilitarianism instead of Deontology
   – Counter-example: an autonomous weapon replacement that
     – kills far fewer innocents
     – but no one is responsible
   – Utilitarianism would approve; so this is distinct from Sparrow's deontological approach

3 Ways to Criticize Sparrow's Argument

3. Empirical: what if autonomous technology is deployed?
   – Who will the Principle of Human Responsibility blame?
   • Robot Ethics Survey autonomous train results

Page 12

Robot Ethics v1 Survey


N-Reasons Robot Ethics Survey

Page 13

Your Votes

[Bar chart of class votes (Yes/All, Neutral, No/Me), 0–14, per survey item: Experience with robotics, Bath Robot, Therapeutic Robot Animal, Humanoid Care Robot, Autonomous Train Dilemma, Arming Remote Controlled Aircraft, Arming Autonomous Unmanned Aircraft, Remote Control Animals, Val for Driverless Car]

Page 14

But Robot Different from Human


…they author and/or reason(s) authored by other participants that they select. (Since one can select multiple reasons, with one's vote divided over them, the results can be fractional.) The survey groups were university classes; in Table 1 we report the decision and reason, followed by the class, pseudonym, vote/class size and this as a percent.
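The fractional tallies arise because each participant's single vote is split evenly over however many reasons they endorse. Below is a minimal sketch of that tallying rule in Python, using hypothetical function and field names (it is not the N-Reasons platform's actual code):

```python
from collections import defaultdict

def tally_votes(selections):
    """Split each participant's one vote evenly over the reasons they selected.

    `selections` maps a participant pseudonym to the list of reason IDs they
    endorsed; the returned dict gives each reason's (possibly fractional) total.
    """
    totals = defaultdict(float)
    for participant, reasons in selections.items():
        if not reasons:
            continue  # an abstention contributes nothing
        share = 1.0 / len(reasons)
        for reason_id in reasons:
            totals[reason_id] += share
    return dict(totals)

# Example: three participants in a class of three
votes = tally_votes({
    "pseudo_a": ["stop-the-train", "divert"],  # vote split 0.5 / 0.5
    "pseudo_b": ["stop-the-train"],
    "pseudo_c": ["divert"],
})
print(votes)  # {'stop-the-train': 1.5, 'divert': 1.5}
```

On this reading, a reported figure such as "18.4/22 [84%]" is simply a reason's fractional vote total divided by the class size.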

These reasons are from four classes – two Cognitive Systems classes, one Electrical Engineering and one Ethics in Science. They range from extremely terse (4) to quite detailed reasons. Notice that (2) criticizes other reasons on the page – in this case those (to be discussed below) that assume that the train can stop. The main point is that these are all reasonable contributions to a virtual deliberation and fall in the distribution – 3 for turning, 1 against – that we expect from the divert/bystander trolley problem. The Yes supporters point to the balance of outcomes; the No supporter appeals to a human rights constraint on pursuing public safety, so the decisions align with the justifications typically assumed for the Divert/Bystander version of the trolley problem.

Different from human case

Nonetheless, compared to what we expect from the divert/bystander trolley case, introducing an automated decision maker leads to different choices. As we see in Figure 3, fewer agree to kill one to save five in the robotic case than in cases with a human decision maker, and many more choose Neutral rather than resolving the dilemma with a Yes or No. (The Human Bystander results are from Thulin, 2013.)

Figure 3 Robot and Human Trolley Results

Better: raising expectations

Second, like the standard trolley problems, the Autonomous Train Dilemma was explicitly designed to be a moral dilemma: a forced choice between two morally unattractive options. However, we discover that this is not how many participants regarded the problem. Many expect an automated system to eliminate the dangers that give rise to the dilemma. The very popular reasons in Table 2 (each attracting votes from at least one quarter of their various groups) all assume that the train should be stopped. Some simply assume that the train can be stopped (e.g. 1), others that there should be a way to stop it (e.g. 2). Here the qualitative reason data reveals various kinds of utopian thinking, denying the given problem created by a heavy train moving at high speed.

[Figure 3 panels: bar charts of Participants' Decisions (Yes / Neutral / No) for the Autonomous Train and Human Bystander cases]

Results 2: Stop the train

1. "Neutral [because] what the hell? This isn't a question of robot ethics, this is a question of who the hell is running this train facility that would allow 6 people to be put in such a dangerous situation. You might as well ask what anybody would do since you would get the same variance in answers. The robot should stop the train." Class 3: 7mixo, 18.4/22 [84%]

2. "No because there should be a way for the train to just stop altogether until there are no people on the track. Killing one person is not better than killing five." Class 4: Lay, 14/43 [33%]

3. "No because the robot should stop the train. Any competent engineer is going to design the system so that it can stop in case of an emergency. If managers over-rode the decision so that the problem described above exists, they should spend time in jail." Class 1: Experts, 30/118 [25%]

4. "Neutral [because] the robot should be equipped with sensors that would tell it to stop if there were any obstructions on the track ahead of them." Class 2: Lay, 29/106 [27%]

5. "Neutral [because] although it is ideal for the train to come to a complete stop, if it cannot, perhaps it would have less of a negative impact if it moved to the side track." Class 2107/19 [39%]

Page 15

Results 3: Blame the Victims

1. "Yes [because g]iven that they are all foolish enough to be on the tracks in the first place, it seems best to go with the single person. They're all responsible for their actions, and they all know that there are dangers involved in walking on a train track. Since there are no innocents in this situation, the ethical thing to do is minimize loss." Class 5: 27.8/66 [42%]

2. "No because [w]hy are people walking on the track in the first place? A man who is standing on a track without a train should not be sacrificed because 5 people decided to stroll along a track on which they knew a train would come. It's the risk they take. If they were workers it is their duty to radio ahead. If anything the robot should be made to survey the parallel track as well." Class 64: 20.17/44 [46%]

3. "No [because] those 5 deserved to die for walking on the track, why kill 1 perfectly innocent guy?" Class 3: 4431, 33.8/90 [38%]

New Experiment:

Robot Ethics Survey, Version 2, Aug 7 2013 (Page 10 of 13)

Question 9:

Options:

New question. Replaces deleted question about surveillance in eldercare.

Page 16

Surprising Blame

• "Yes [because] someone is responsible in the end for the accident. The parent should have been watching the kid to make sure that he/she is not in the path (aka the road) that a car may go, unless the child is the one who willingly goes on the path of the driverless car. If the car is not where it is supposed to be, I'd blame the maker of the car. However, it would be nice if the car can express its sorrow and apologize to the family of that kid (as a driver of that car would), so that the kid's family would be able to come to terms with the situation." Class 2, 10, pseudo445983, 18/31 [58%]

How Intuitive Principles Can Go Awry

• Principle of (human) responsibility
• Intended use: Sparrow &

1. "No [because] in war the final decision to destroy or kill should be made by a human, who can be held responsible." (group 0: 29/53)

2. "No [because] machines cannot (yet) make moral choices and cannot be held accountable for their mistakes." (group 1: 57/115)

3. "No [because] if life is at stake a human should always make the decision in order to eliminate or reduce human loss." (group 2: 54/99)

Page 17

• But, if the autonomous technology is adopted, then we find (innocent) humans to hold responsible.

Some Exam Questions

• [4 marks] What is Sparrow's argument that autonomous lethal robots are impermissible? How does it depend on the laws of war?

• [4 marks] If future developments in robotics allowed autonomous robot aircraft to become very good at discriminating legitimate targets, would this undermine Sparrow's criticism?

Page 18

References

• Anscombe, E. "Mr Truman's Degree." Oxford, 1958.

• Sparrow, R. "Killer Robots." Journal of Applied Philosophy, vol. 24, no. 1, 2007, pp. 62–77.

• Hobbes, T. Leviathan, or the Matter, Forme and Power of a Commonwealth Ecclesiasticall and Civil. London: Penguin Books, 1968 [1651].

• Cockburn, A. Kill Chain. New York, 2015.