Who Should Decide How Machines Make Morally Laden Decisions?
Forthcoming in Science and Engineering Ethics, 12 October 2016

Dominic Martin
John Molson School of Business, Concordia University
[email protected]

A new version of the trolley problem is becoming increasingly popular. The problem refers to a well-known thought experiment in which one must decide whether or not to divert a trolley hurtling down a track. If the trolley is not diverted, five people on the main track will die. If the trolley is diverted, a man working on the side track will die (Foot 1978; Appiah 2008). Tens if not hundreds of variations of the trolley problem have been introduced so far, but the setup of this new version is quite different. In this new trolley problem, it is not a human agent who faces the daunting task of deciding whether or not to divert the trolley, but a machine — an artificially intelligent computer system — whose job it is to drive the trolley (Allen, Wallach, and Smith 2006). Suppose the machine has just received two signals indicating that people could be struck on both the main track and the side track, and it must now decide what to do.

Another recent thought experiment, called the tunnel problem, raises similar issues. According to this problem, a self-driving car is approaching the entrance of a tunnel when a boy crossing the road suddenly trips in the center of the lane. If the car avoids the boy, it will hit the entrance of the tunnel and kill its passengers. If the car protects its passengers, it will kill the boy. What should the car do?

In considering these two thought experiments, several questions emerge. First, can we ascribe something like a mental state to a machine (Fiala, Arico, and Nichols 2014)? More broadly, what does it mean to say that a machine 'makes a decision' or 'behaves' in certain ways (Searle 1980; 1984)? How can we create machines that have the ability to drive trolleys or cars? Can a machine be an ethical agent (Purves, Jenkins, and Strawser 2015; Anderson and Anderson 2007; Floridi and Sanders 2004)? Or, indeed, what ought a machine to do in the trolley or the tunnel problem? These are all important inquiries, but I shall leave them aside here in order to focus on a different problem. The problem I will address is the following: who should decide how a machine will decide what to do when it is driving a trolley or a car or, more generally, when it is facing any kind of morally laden decision?

More and more, machines are making complex decisions with a considerable level of autonomy. Self-driving cars are in the process of being tested and legalized in many jurisdictions worldwide. Military drones are becoming increasingly common. We have software that can administer psychotherapy (Economist 2014; Garber 2014), write a press release, screen students for university admission (Burn-Murdoch 2013; see also Economist, The 2013), or even perform a medical operation almost unaided by humans (Economist 2016). We should be much more preoccupied by this problem than we currently are.

This paper is divided into six sections. After a series of preliminary remarks (section I), I go over four possible approaches to solving the problem raised above. We may claim that it is the maker of a machine that gets to decide how it will behave in morally laden scenarios (section II). We may claim that the users of a machine should decide (section III). The decision may instead have to be made collectively (section IV) or, finally, by other machines built for this special purpose (section V). I argue that each of these approaches suffers from its own shortcomings. I conclude by showing, among other things, which approaches should be emphasized for different types of machines, situations, and/or morally laden decisions (section VI).

I. Preliminary considerations

A machine, as I use the expression here, is any system designed by humans that has an internal functioning of some sort. A steam engine is a machine, for instance, as is a computer or a humanoid robot designed to perform household chores. I would exclude things such as a human heart or a screwdriver. Of course, there are some gray areas within this definition: is a corn field or a fir plantation a machine? To what extent do humans design these systems? What about a self-improving biomechanical robot? Could we say that a hang glider has an internal functioning? But the definition will suffice for my purposes. What interests me specifically is the design of intelligent machines, such as computer systems, robots, and other automated systems. I use the term 'machine' simply to speak about all of these systems at once. I should specify, however, that I reserve the term 'robot' for intelligent machines designed to produce something physical or tangible in the world: weld metal pieces together, fly to a given GPS coordinate, or wipe the floor. While a system such as a computer may also have a physical dimension (a screen, a case, electric circuits, etc.), its useful product may be only virtual: a hash value, the solution to a face-recognition request, a decision as to whether or not to divert a trolley hurtling down a track, and so on. Robots rely on intelligent computer systems, but not all intelligent computer systems are robots.

We may design a variety of intelligent machines to behave in different ways, through more or less complex processes, involving varying degrees of autonomy. That behavior may in turn have a more or less important moral dimension:

– A drone may increase the speed of a rotor to stabilize itself in flight.
– A pool of high-frequency trading programs may sell a particular stock at a specific price (see Lewis 2014).
– A psychotherapy program may ask a question that will revive a patient's childhood memories.
– A self-driving car may establish whether now is a good time to pass another vehicle on the left or on the right.
– A clinical decision system may diagnose lung cancer.
– An automated journalistic content generator may or may not include key information in an earnings report.
– A program designed to screen students for university admission may filter out students based on their gender or race.
– An autonomous weapons system, such as a military drone, may engage in combat and shoot a target. Though human approval is generally needed for these kinds of interventions, such systems may someday have the ability to engage in combat on their own.
– A medical robot performing an operation may make an intervention that is life-saving or life-threatening. Once again, humans are closely involved in the behavior of these systems today — not to mention that few such systems currently exist, and none of them operates on humans — but that may change in the foreseeable future.
– A self-driving car may avoid a child at the entrance of a tunnel, thus killing its passengers.

As we move down the list, the moral dimension of these decisions becomes increasingly important (permutations are, of course, possible). My intention is not to draw any sharp boundary, but there is a point at which I will consider a decision to be morally laden. Decisions such as avoiding a child at the entrance of a tunnel, making a surgical intervention, or engaging in combat are among them, in my view. They involve possible loss of life, other risks to people's physical and psychological integrity, as well as risks to buildings and infrastructure. Decisions made by systems screening candidates for university admission, automated journalistic content generators, or psychotherapy software may often be morally laden. Decisions such as reviving a child's memory, passing a car on the left, selling stocks, or even increasing the speed of a rotor may or may not have an important moral dimension, depending on other circumstantial factors. When I ask who should decide how machines make these decisions, I am asking who should have a say in the design of intelligent machines: not all aspects of their design, but those aspects that will determine a machine's behavior in morally laden cases and/or situations. I will now consider four different approaches to this problem.

II. Let makers decide

Intelligent machines are designed and built by individuals or groups of individuals within private and public organizations: engineers, scientists, consultants, project directors, as well as managers, executives, directors, and shareholders. One approach is to let these individuals, whom I refer to as the maker of the machine, decide how the machine behaves in morally laden situations. This view has at least two arguments working in its favor. First, we may claim that there are considerable asymmetries of expertise. Given the complexities of machines such as self-driving cars and medical robots, only their maker knows enough about their functioning to determine how they should behave in all respects. Second, one could appeal to the value of market freedom and claim that the makers of intelligent machines should be able to design and build these machines however they want. This freedom could be justified on the basis of classical arguments of political economy: it may be claimed that people or organizations have a right not to have their economic liberty constrained, and this would include designing, building, and selling goods such as a self-driving car on the market. It may be claimed that consumers can always decide which machine they want to purchase, and that they are responsible for their own choices in the market. Or it may be claimed that letting makers decide is the best way to ensure efficient production processes and the development of new and safe technologies.

These are all relevant considerations. But we may wonder whether the two arguments are sufficient to show that only the maker of a machine is in the best position to decide how it will behave in most foreseeable circumstances. Regarding the first argument: the existence of significant asymmetries of expertise does not mean that the maker of a machine cannot work with other parties, such as governmental agencies, to decide on key aspects of its design. Most importantly, we may argue that having the expertise to determine and shape how an intelligent machine will behave is a different kind of problem than understanding the moral complexities of how it ought to behave. There is not necessarily a correlation between expertise in system design and expertise in ethics, particularly in the face of new technology, where one will likely have to deal with new and uncharted ethical issues. Regarding the second argument, some sectors of the economy are more heavily regulated than others. For instance, there is substantial governmental oversight in the pharmaceutical and food industries, and car manufacturers have to comply with comprehensive safety standards. If we accept that market freedom can be constrained in these cases, why could such constraints not be imposed on the design of intelligent machines, at least when they pose risks that are as significant as those posed by unsafe drugs, food, or cars?

What is troubling, however, is that letting makers decide is almost the only approach that has been applied so far. People like scientists, engineers, and project coordinators enjoy significant, if not absolute, influence on the design of intelligent machines. Is this a problem? Should someone else's input not be sought as well? Should it be the engineers at Google or Toyota who decide whether a self-driving car will kill a boy, or its passengers?

III. Let users decide

Before the existence of intelligent machines, it was human agents who made morally laden decisions. Thus, in a second approach, we may claim that it is better to let the users of these machines decide. The suggestion here is not to have human agents pre-decide all the morally laden decisions that intelligent machines will have to make; that would be impossible. Rather, it is to give more weight to users' preferences in the final design and the general behavior of an intelligent machine. For instance, the owner of a self-driving car could have access to configuration settings in which he could indicate how the car ought to weight the lives of its passengers if they come into conflict with the lives of people outside the car. Another, more elaborate, option is to build intelligent machines with user-oriented moral-learning functionalities. A self-driving car could learn from the decisions made by its users — in an initial user-calibration phase, say, or when they are driving the car themselves — and then extrapolate from these patterns in order to make decisions in the future. If a user is generally inclined to react promptly and forcefully to avoid pedestrians in the street, perhaps this indicates that he would also be inclined to avoid a child in a situation similar to the tunnel problem. A minimal sketch of these two mechanisms is given below.

Although giving more weight to users' moral inclinations may be intuitively appealing, this solution suffers from problems of its own (see also Lin 2014 for an early critical perspective). First, implementation issues should not be overlooked. I have outlined two ways in which the choices of users may be taken into account (direct configuration settings and user-oriented moral learning), but these suggestions raise at least as many questions as they answer. What kinds of moral configuration settings could be offered to users? How can the settings be designed so that they capture the information relevant to helping a machine like a self-driving car make the proper decision in the future? Who can change these settings, and what if multiple individuals or groups of individuals use the same machine? If a self-driving car uses options such as user-oriented moral-learning capabilities, how will these systems be built? This may be as complicated as building an autonomous moral agent. And if these machines rely on gathering data about users' behavior when the users are operating the machines themselves, this may become a problem in the future, when most humans simply won't have the skills to operate these machines.
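To make these two mechanisms more concrete, the sketch below shows, in Python, what direct configuration settings and user-oriented moral learning might look like in their simplest form. It is purely illustrative: the class, the single passenger-weight parameter, the learning rule, and the numbers are hypothetical simplifications, not a description of any existing vehicle.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class EthicsSettings:
    """Hypothetical 'moral configuration' exposed to the car's owner.

    passenger_weight: how much the car favors its passengers over people
    outside the car when their lives conflict (0.0 = always favor outsiders,
    1.0 = always favor passengers).
    """
    passenger_weight: float = 0.5

def learn_passenger_weight(observed_swerves: list[float]) -> float:
    """User-oriented moral learning, reduced to a caricature.

    observed_swerves: for each past incident in which the user was driving,
    the fraction of available braking/steering effort spent to avoid a
    pedestrian (0.0 = did nothing, 1.0 = maximal avoidance). A user who
    reliably sacrifices comfort and safety margin for pedestrians is
    inferred to want a lower passenger_weight.
    """
    if not observed_swerves:
        return 0.5  # no data: stay neutral
    return 1.0 - mean(observed_swerves)

def choose_action(settings: EthicsSettings, risk_to_passengers: float,
                  risk_to_outsiders: float) -> str:
    """Pick the option whose weighted expected harm is lower."""
    harm_if_swerve = settings.passenger_weight * risk_to_passengers
    harm_if_stay = (1.0 - settings.passenger_weight) * risk_to_outsiders
    return "swerve" if harm_if_swerve < harm_if_stay else "stay"

# Example: a cautious driver who always brakes hard for pedestrians.
settings = EthicsSettings(passenger_weight=learn_passenger_weight([0.9, 0.8, 1.0]))
print(choose_action(settings, risk_to_passengers=0.7, risk_to_outsiders=0.9))  # "swerve"
```

Even this toy version makes the questions above vivid: who sets the default weight, who is allowed to change it, and whether averaging past swerves is a defensible proxy for a considered moral view.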

To return to the data-gathering worry: a self-driving car may gather data about the driving patterns of a user when that user is driving the car himself. This could work today, because most people know how to drive, but that may not always be the case. If self-driving cars become commonplace, people may simply lose this ability.

Second, we may wonder whether all users are competent or reliable enough. That is, if an intelligent machine is built with configuration settings, how will we ensure that users understand the meaning and the implications of those settings (Etzioni and Etzioni 2016, 151)? If users have to answer a complicated list of questions, how can we be sure that they will provide thoughtful answers?

A third problem comes from the moral implications of the circumstances of the choice. It is one thing to make a tough choice (like crashing a car at the entrance of a tunnel) in the heat of the moment. It is another thing to make that choice in advance. Is it morally acceptable for the user of a self-driving car to pre-emptively program the car to, say, prioritize the lives of people outside the car over his own life or the lives of the car's passengers? Can this be seen as a form of premeditated intention to cause harm (Holtug 2002)? We tend to judge behavior decided in advance more severely, both from a moral and a legal perspective. Also, an agent may not have access to circumstantial information that may be morally relevant at the time of the choice, such as how many passengers are in the car, or who the individuals involved will be. What is more, there is no empirical evidence that people make similar moral choices in different circumstances; quite the opposite. Thus we may wonder which choice is the right one: the choice made in the heat of the moment through quick and mostly intuitive mental processes, or a choice made in advance, where an agent has an opportunity to weigh different options?

Fourth, and more generally, it may be problematic to assume that users are the legitimate agents to decide about a machine's morally laden behavior. When I am using a normal human-driven car, the choices I make can have important implications for other individuals in the car and on the street. What legitimacy do I have, as a single user, to make these choices? In a pre-AI world, machine users had to make morally laden decisions simply because there was no other option. But technology is an enabler of change. Given current developments in information technologies, it is much easier today — and it will probably be even easier in the future — to involve larger groups of individuals in the process of deciding how these morally laden decisions should be made.

IV. Let us decide collectively

In the face of these problems, we may claim, third, that the behavior of an intelligent machine is primarily a social issue and, as such, should be decided upon through a collective decision-making process. There are many different ways to go about this. One option is to let governments decide through the work of publicly elected officials: new legislation (Lin 2013b), public consultation processes, or special working committees of members of parliament are all ways in which governments can influence the behavior of intelligent machines. Another approach is to create groups or organizations of experts that will be responsible for making these decisions. For instance, it has recently been suggested that we should create the equivalent of the US Food and Drug Administration for data and algorithms (Engel 2016). The organizations that own this data and the technology to use it (corporations such as Facebook or Google) enjoy considerable power and influence, as well as privileged access to the private data of many people. The suggestion is that more governmental oversight may be needed to control what these companies are doing with the information and the technology they have access to. If one accepts this position, then what about the use of all the other technologies needed to make intelligent machines? New governmental agencies could be created to oversee not only the use of data and algorithms, but all AI technologies.

Other, more direct, options could be used too. We could decide collectively about the behavior of intelligent machines by using focus groups, user surveys, or public opinion polls. Groups of individuals could be asked how these machines should behave. Dominant trends could then be identified and integrated into their functioning (a minimal sketch of this aggregation step follows below). If most people believe a self-driving car should sacrifice the lives of its passengers to save the life of a child, car makers could then be required to design these cars in a way that gives less weight to the lives of their passengers when faced with a trade-off similar to that of the tunnel problem.

There are many other ways to decide collectively, and each one has different advantages and disadvantages. In fact, it is a bit of a simplification to put them all under the same category; presumably, more distinctions could be drawn. Despite their individual features, however, these different options are vulnerable to a similar set of objections. First, it may be claimed that deciding collectively about the design of intelligent machines will impose a burden that reduces the efficiency of the processes by which these machines are created, or the general pace of innovation. This is a common argument. For instance, it is often claimed that increased government intervention in a sector of the economy translates into mismanagement and additional burdens on people and businesses. This, in turn, undermines the ability of the private sector to adapt and to be responsive to consumer demand, and impedes the other benefits that greater market freedom is supposed to deliver. The same rationale may apply to intelligent machines.
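Before turning to the objections in more detail, here is the minimal sketch of the survey-based option promised above. It assumes a hypothetical poll in which respondents choose between sparing the child and sparing the passengers; the two-thirds threshold and the output format are invented for the example.

```python
from collections import Counter

def aggregate_tunnel_poll(responses: list[str],
                          supermajority: float = 2 / 3) -> dict:
    """Turn raw survey answers ('child' or 'passengers') into a constraint.

    Returns the dominant answer, its share, and whether the share clears the
    (hypothetical) supermajority threshold a regulator might require before
    mandating a design rule.
    """
    counts = Counter(responses)
    dominant, n = counts.most_common(1)[0]
    share = n / len(responses)
    return {
        "dominant_view": dominant,
        "share": round(share, 3),
        "mandate_rule": share >= supermajority,
    }

# Example: 72% of respondents say the car should spare the child.
poll = ["child"] * 72 + ["passengers"] * 28
print(aggregate_tunnel_poll(poll))
# {'dominant_view': 'child', 'share': 0.72, 'mandate_rule': True}
```

The sketch hides the hard part, of course: who frames the question, whether a bare or super majority should bind dissenting users, and how such a mandate would be enforced against makers.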

When I claim that there are issues of reduced efficiency in the making of intelligent machines, this does not necessarily apply to machines such as self-driving cars or medical robots. As pointed out in section II, we, as a society, already impose constraints on the production of some goods, such as transport vehicles, drugs, or food. This probably has an impact on the efficiency of the production processes for these goods. Increasing safety regulation on cars, for instance, increases the cost of making a car because heightened standards must be met. Yet it is considered an acceptable burden because there are other positive outcomes: cars are safer. Therefore, claiming that society should decide collectively about the behavior of some machines may not be problematic, even if it reduces efficiency. But it is likely that there will be many different types of intelligent machines in the future. One need only think of the Internet of things and the multiplication of devices, small and large, that will be interconnected, that will share data, and that will be artificially intelligent to some extent (Gubbi et al. 2013). If we were to decide collectively about all of the different types of intelligent machines, this may create a burden that we are not willing to accept. Large and costly social institutions would have to be created, extensive new regulation would need to be enacted, there could be mistakes or problematic interventions to the detriment of the processes by which these machines are created, and so on. In other words, the argument about reduced efficiency is a matter of degree. While it may be acceptable to impose constraints on some machines, such as cars or medical robots, there is a threshold beyond which we may consider that deciding collectively is problematic on the basis of reduced efficiency.

A second set of issues around deciding collectively about the morally laden behavior of machines relates to unacceptable interference with individual liberties and to paternalism. It is common in many political traditions to claim that the power of the state should be limited to some extent. This argument is based on the notion that people ought to be treated as free and autonomous agents (Ackerman 1980; R. M. Dworkin 1979; Rawls 2005; Wall 2012). In the liberal tradition, for instance, it is common to draw a distinction between the public sphere and the private sphere, such that there is an appropriate realm of governmental authority (Mill 1859). Another way to frame these concerns about individual liberties is to speak in terms of a division of moral labor within our social institutions. The political philosopher John Rawls claims, for instance, that justice takes the form of two principles: first, a principle of equal liberties and, second, a principle that combines fair equality of opportunity with the difference principle. However, Rawls also claims that these principles apply only to the basic structure of society or, to put it in his (1999, 6) words, "the way in which the major social institutions distribute fundamental rights and duties and determine the division of advantages from social cooperation." People do not have to comply with the principles of justice in their day-to-day lives, as long as they comply with the rules of the basic structure, thus enacting something similar to the distinction between the public and the private sphere. Another related concern is that of paternalism, which occurs when the state interferes with a person's life, against that person's will, on the grounds that the person interfered with will be better off (G. Dworkin 1972; 2005; 2014). Examples of paternalistic policies include laws against drugs or tobacco products, or the compulsory wearing of seatbelts. Paternalism may be an issue not only because of the interference in people's lives, but also because of the potential to treat people as if they were not fully rational or capable of making their own life choices.

To return to the question of intelligent machines, we may wonder whether deciding collectively about the morally laden behavior of all machines would violate valuable individual liberties, or whether it would be paternalistic to do so. Some form of public oversight of the behavior of intelligent machines is necessary, but to claim that we should decide collectively about all machines would give considerable power to the public institutions of a society over the private lives of its citizens. This argument, however, is subject to the same logic as the argument about efficiency. The proper level of collective oversight is surely a matter of degree: it may be acceptable for society to decide about the behavior of machines such as self-driving cars and medical robots, even if this constrains liberty or is paternalistic to some extent; but it may not be acceptable for all machines. To decide collectively about the design of, say, the software in personal computers, mobile phones, or an intelligent toothbrush would impose constraints that we are not willing to accept.

V. Let other machines decide

It was pointed out in section II that the makers of intelligent machines should not have exclusive authority to decide about their morally laden behaviors. It was pointed out in section III that there may also be limitations to letting users decide. Finally, it was pointed out in the previous section, IV, that deciding collectively may lead to reduced efficiency, interference with private liberty, or paternalism. Does this mean that humans should not always decide about the behaviors of intelligent machines, or at least that human involvement should be limited? The idea has at least been suggested. In a recent paper, Amitai and Oren Etzioni (2016, 155, emphasis removed) claim that "people will need to employ other AI systems" to ensure the proper conduct of AI technologies, given the growing number of machines equipped with AI technologies that allow for autonomous decision making. This should be done with a system of ethics that "analyzes many thousands of items of information," such as information publicly available on the Internet, but also private information from each user, such as information found on their private computers (152). These systems of ethics would provide "a superior interface between a person and smart instruments compared to unmediated interactions" (153). The purpose of these systems would not be to ensure the mere compliance of intelligent machines with rules or principles decided upon by humans. Rather, these systems would be involved in setting up those rules and principles for the intelligent machines. This suggests a fourth approach to the problem: why not let other intelligent machines decide how they will make morally laden decisions?

A clarification is needed here. We may think of many different systems of ethics, depending, first, on the extent to which these systems aim at being an interface for each user's specific moral values and, second, on the level of autonomy of these systems. Regarding the first point, Etzioni and Etzioni (152) use the example of the Nest intelligent thermostat as a simple and early form of a system of ethics (see also Lohr 2015, chap. 8). The Nest thermostat first observes the behavior of people in a household and draws conclusions about their preferences in terms of in-house temperature. Then the thermostat takes over and adjusts the temperature based on whether or not there are people in the house, how many people are in the house, the time of day, and so on. If Etzioni and Etzioni's idea of a system of ethics is similar to the Nest thermostat, then their proposal is closer to the idea of having intelligent machines equipped with user-oriented moral-learning functionalities, and this is closer to an approach wherein users would decide. With the Nest thermostat, it is individual human users who have the most influence on the behavior of the thermostat and, ultimately, on the behavior of a heating and cooling system. Alternatively, Etzioni and Etzioni suggest that systems of ethics should take into account not only each user's values, but also more collective values, such as those expressed publicly on the Internet. If that is the case, their proposal may in fact be closer to the third approach, wherein these systems would simply be a technology for taking collective human values into account. This would be similar to deciding collectively about the behavior of intelligent machines. Thus, under the two interpretations above, Etzioni and Etzioni's proposal does not lead to a fourth, distinct approach. In order to have a fourth approach, we need a system that has more functionalities and that aims at doing more than simply aggregating human values and communicating them to intelligent machines. This brings me to the second point, regarding autonomy. In order to say that intelligent machines decide about the morally laden behavior of other machines, we would need to create systems that, even if they take human inputs into account, also possess sufficient autonomy to make decisions on their own. Whether or not Etzioni and Etzioni believe such systems to be desirable is open for debate, but this is the sort of system I will consider here.

Two arguments for having such autonomous systems of ethics immediately present themselves. The first concerns the limited moral capabilities of human agents. If artificial intelligence is able to achieve tasks that human agents cannot, perhaps it will be possible to create intelligent machines that are also better moral agents than humans. The second argument is one of feasibility. Perhaps intelligent machines will become complex and capable to a point where human agents will lack the resources to directly design the systems that control their behavior. Both of these arguments specifically, and the approach more generally, are nonetheless vulnerable to criticism. The first problem is that creating systems of ethics like the ones described above is simply not possible at this point. Significant progress has been made in AI, but not to the point where intelligent machines have levels of intelligence even similar to those of human agents. Therefore, any proposal to create an autonomous system of ethics remains speculative.

Second, many observers are critical of the idea that an artificial moral agent can be created at all. Yet an autonomous system of ethics essentially requires equivalent, if not superior, capacities for moral agency: the system has to decide what is right or wrong for other machines. According to Duncan Purves, Ryan Jenkins, and Bradley Strawser (2015, 856–58), for instance, morality cannot be "captured in universal rules that the morally uneducated person could competently apply in any situation." This is what they call the anti-codifiability thesis. The thesis entails that some moral judgment on the part of the agent is necessary. In their view, intelligent machines cannot be fully capable moral agents because they lack the capacity for moral judgment, even if they can execute a complex list of commands. Purves, Jenkins, and Strawser concede that the anti-codifiability thesis may be false. But they also have a second argument. They claim that an action can be morally right only if it is done for the right reasons. Since machines lack — and, in their view, will always lack — the necessary psychological capacities to have such right reasons, they will never be full moral agents.

There are reasons to be skeptical of these arguments. On the one hand, Purves, Jenkins, and Strawser do not seem to be aware of recent developments in AI involving machine learning and neural networks. These technologies suggest that intelligent machines could make decisions in a way that is very similar to the human brain (on a silicon-based rather than a biological substrate, though that too may change in the foreseeable future). Therefore, it is not obvious that machines will be unable to exercise judgment or possess reason in a way similar to humans (see, for instance, Bostrom 2014, chap. 3). This tension is already present in their argument: they recognize that machines, such as drones, may become sufficiently developed to make decisions autonomously, and they do not completely dismiss the idea that these machines may also be competent moral agents. But if that is the case, is it impossible that these systems could also act upon reasons? One may even claim that autonomous decision making and strong moral capabilities already require a high level of artificial intelligence. If such levels of intelligence are reached, is it possible that a phenomenon akin to human reasoning may also be taking place within these systems? It would go beyond the scope of this paper to answer these questions, but they are matters that will need to be addressed further.

There is a third reason for being skeptical of an approach relying on autonomous systems of ethics. Leaving aside Purves, Jenkins, and Strawser's arguments, and assuming that it would be possible to create these systems, it is still not clear why human agents could not regulate the behavior of intelligent machines, simply on the grounds that they are too complex. Humans may not fully understand the internal processes of these machines, but they can still set limits on the acceptable consequences of those internal processes. To use a simple analogy, human agents do not fully understand the functioning of the human brain, but this does not prevent them from setting goals and objectives for other human agents.

Fourth, and finally, what about second-order AI safety issues? A common preoccupation regarding the social impact of intelligent machines is human safety. The fear is that new intelligent technologies, once they reach a certain level of intelligence, may escape human control and cause damage to humans. The risks range from the misbehavior of automated vehicles like drones or self-driving cars, to the creation of destructive military technologies, or even, well, to the extinction of humanity itself (Bostrom 2013; 2014). If increasingly capable systems create these sorts of risks, the risks will also be present in autonomous systems of ethics. The risk is second-order simply because an autonomous system of ethics is, in a sense, a system responsible for the behavior of other intelligent machines; the risk that those other machines misbehave is the first-order risk. As pointed out above, these considerations are highly speculative, but second-order AI safety risks should not be overlooked, because they are potentially significant. AI safety is a serious and clear concern in itself, which is why multiple proposals have been made to ensure that AI is safe. But if autonomous systems of ethics are used, a single system can decide about the behaviors of potentially many other systems. If such systems misbehave, the implications are potentially significant.
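The distinction drawn in this section between mere compliance with human-made rules and a system that also sets those rules can be made concrete with a deliberately crude sketch. Everything in it is hypothetical: the rule format, the veto function, and the naive heuristic standing in for 'rule-setting'; it is not a proposal for how such a system would actually work.

```python
# Hypothetical illustration of the gap between (a) a compliance layer that
# merely enforces rules written by humans and (b) an "autonomous system of
# ethics" that would also have to generate those rules itself.

HUMAN_RULES = {
    "max_speed_kmh": 50,          # written and agreed upon by humans
    "never_target_humans": True,
}

def compliance_check(action: dict) -> bool:
    """(a) Mere compliance: veto actions that break human-made rules."""
    if action.get("speed_kmh", 0) > HUMAN_RULES["max_speed_kmh"]:
        return False
    if action.get("targets_human") and HUMAN_RULES["never_target_humans"]:
        return False
    return True

def propose_new_rule(observed_harms: list[dict]) -> dict:
    """(b) Rule-setting: the step an autonomous system of ethics would add.

    Here it is reduced to a trivial heuristic (tighten the speed limit after
    harmful incidents). The open question in the text is whether any such
    procedure amounts to moral judgment, and who answers for its mistakes.
    """
    speeds = [h["speed_kmh"] for h in observed_harms if h.get("injury")]
    if not speeds:
        return HUMAN_RULES
    tightened = min(min(speeds) - 5, HUMAN_RULES["max_speed_kmh"])
    return {**HUMAN_RULES, "max_speed_kmh": tightened}

print(compliance_check({"speed_kmh": 45, "targets_human": False}))   # True
print(propose_new_rule([{"speed_kmh": 48, "injury": True}]))          # limit tightened to 43
```

The second function is exactly where the objections above bite: it is speculative, it presupposes something like moral judgment, and it carries the second-order safety risk, since an error in the rule-setter propagates to every machine governed by it.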

VI. Conclusion: who should decide?

I have considered four approaches to determining who should decide how machines ought to behave in morally laden situations. I have outlined some of the supporting arguments for these approaches, but I have also shown that they all present problems. What, then, are we to conclude? What is the best approach? Most likely, the right approach is a mix of all four. Still, the analysis suggests three more specific conclusions.

The first conclusion is that we are relying on the first approach much more than we should. The makers of intelligent machines — engineers at Google or Toyota, or scientists working for military contractors designing autonomous weapons systems — should not have the exclusive right to decide on all the moral aspects of the behavior of these machines. Other inputs must be sought as well. The second conclusion is that the fourth approach, involving autonomous systems of ethics, is not feasible today and remains overly speculative at this point; indeed, this is why it was not considered at length here. That is not to say it won't become an interesting approach in the future. Third, and finally, the right balance between the approaches is not the same in all cases. Different approaches are better suited to different types of intelligent machines and different types of decisions. Given the diversity of machines that will rely on artificial intelligence and robotics, it is risky to use a single approach for all cases when thinking about the morally laden behavior of intelligent machines. Different factors must be taken into account. I outline some of the most important ones below, but the list is by no means exhaustive.

The first factor may be summarized with the following question: what is the actual level of collective oversight? Non-intelligent cars and medical instruments are already heavily regulated, and these industries are subject to a high level of governmental involvement. This suggests that intelligent machines operating in these sectors may require more oversight and regulation, even if this imposes an additional burden on their makers. Existing regulation in these sectors is the result of long political and social processes, wherein the advantages and disadvantages of many individual pieces of legislation were decided upon socially. This may provide relevant indications of the level of collective oversight that should be applied today, especially when dealing with new technologies with potentially unexpected outcomes.

Another factor that may justify increased oversight is the magnitude of the potential risks or social impacts. A self-driving vehicle weighing a ton or more could potentially cause more damage, ceteris paribus, than a drone of a few hundred grams. This provides a justification for increased regulation of the self-driving car, but also suggests that we may want to impose looser regulation on small drones. As an additional factor, we may have to take into consideration the overall efficiency and public cost of deciding collectively about the behavior of a machine. To use the drone example, if we were to decide collectively about the design of all unmanned aerial vehicles (UAVs), a considerable burden would be imposed on people and institutions. In Canada, for instance, the Ministry of Transport imposes different regulations on the use of UAVs weighing more than 2.25 kg or more than 35 kg. The evaluation process is longer and more thorough for heavier UAVs, and special certificates may be required (Transport Canada Civil Aviation 2016). The main rationale behind these legal distinctions is that heavier UAVs have the potential to cause more damage and thus need to be more closely regulated. On the other hand, the Ministry does not want to impose the same constraints on all UAVs, in order not to deter commercial and recreational usage, and in order to save on administrative costs.

When the complexity of an intelligent machine is an issue, perhaps the makers of these machines should have more influence. We may wonder, for instance, whether governments and users have the necessary skills to decide about the behavior of psychotherapy software. But this argument is somewhat mitigated when we remember that the makers of complex machines can always cooperate with other actors (such as governments) to set moral boundaries for their behavior. We should be particularly sensitive to privacy issues and to respect for individual liberties in the private sphere. Machines such as personal computers (which may run artificially intelligent software) and mobile phones are troves of personal data. More and more, the private sector uses people's personal data, combined with AI technologies, to establish their choice patterns, reinforce their consumption behavior, or serve other commercial purposes. Arguably, when an intelligent machine relies on such private data, users should have a larger role in deciding how the machine will behave, especially when they are also entitled to decide how that private data is to be used. But this should be weighed against the magnitude of the social impact these machines can have — if, say, a personal computer or mobile phone is used in the commission of a crime. Similar factors should be taken into account when deciding about the behavior of an automated journalistic content generator, given the importance of protecting the independence of the press. More generally, when the behavior of a machine is likely to have impacts only on its users, as opposed to other individuals, the analysis suggests that users should be given as much weight as possible in deciding its functioning, if this is feasible.

How intelligent machines behave morally is a fairly new and uncharted ethical issue. But developments in AI and robotics are impressive, and the pace of development is accelerating. This issue needs to be taken seriously. Soon enough, we will have to face many more questions like these.

References  Ackerman,  Bruce  A.  1980.  Social  Justice  in  the  Liberal  State.  New  Haven:  Yale  

University  Press.  Allen,  Colin,  Wendell  Wallach,  and  Iva  Smith.  2006.  “Why  Machine  Ethics.”  IEEE  

Computer  Society  21  (4):  12–17.  Anderson,  Michael,  and  Susan  Leigh  Anderson.  2007.  “Machine  Ethics:  Creating  an  

Ethical  Intelligent  Agent.”  AI  Magazine  28  (4):  15.  Appiah,  Anthony.  2008.  Experiments  in  Ethics.  Cambridge:  Harvard  University  Press.  Bonnefon,  Jean-­‐François,  Azim  Shariff,  and  Iyad  Rahwan.  2015.  “Autonomous  

Vehicles  Need  Experimental  Ethics:  Are  We  Ready  for  Utilitarian  Cars?”  arXiv,  October.  http://arxiv.org/abs/1510.03346.  

———.  2016.  “The  Social  Dilemma  of  Autonomous  Vehicles.”  Science  352  (6293):  1573–76.  doi:10.1126/science.aaf2654.  

Bostrom,  Nick.  2013.  “Existential  Risk  Prevention  as  Global  Priority.”  Global  Policy  4  (1):  15–31.  doi:10.1111/1758-­‐5899.12002.  

———.  2014.  Superintelligence:  Paths,  Dangers,  Strategies.  Oxford:  Oxford  University  Press.  

Bruin,  Boudewijn  de,  and  Luciano  Floridi.  2016.  “The  Ethics  of  Cloud  Computing.”  Science  and  Engineering  Ethics,  February,  1–19.  doi:10.1007/s11948-­‐016-­‐9759-­‐0.  

Burn-­‐Murdoch,  John.  2013.  “The  Problem  with  Algorithms:  Magnifying  Misbehaviour.”  The  Guardian,  August  14.  

Carter,  Ian.  1995.  “The  Independent  Value  of  Freedom.”  Ethics  105  (4):  819–45.  Crawford,  Kate.  2016.  “Artificial  Intelligence’s  White  Guy  Problem.”  The  New  York  

Times,  June  25.  http://www.nytimes.com/2016/06/26/opinion/sunday/artificial-­‐intelligences-­‐white-­‐guy-­‐problem.html.  

Dworkin,  Gerald.  1972.  “Paternalism.”  The  Monist  56  (1):  64–84.  ———.  2005.  “Moral  Paternalism.”  Law  and  Philosophy  24  (3):  305–19.  

doi:10.1007/s10982-­‐004-­‐3580-­‐7.  ———.  2014.  “Paternalism.”  Edited  by  Edward  N.  Zalta.  The  Stanford  Encyclopedia  

of  Philosophy.  http://plato.stanford.edu/archives/sum2014/entries/paternalism/.  

Dworkin,  Ronald  Myles.  1979.  “Liberalism.”  In  Public  and  Private  Morality,  113–43.  Cambridge:  Cambridge  University  Press.  

Economist.  2014.  “The  Computer  Will  See  You  Now.”  The  Economist,  August  16.  http://www.economist.com/news/science-­‐and-­‐technology/21612114-­‐virtual-­‐shrink-­‐may-­‐sometimes-­‐be-­‐better-­‐real-­‐thing-­‐computer-­‐will-­‐see.  

———.  2016.  “Who  Wields  the  Knife?”  The  Economist,  May  7.  http://www.economist.com/news/science-­‐and-­‐technology/21698220-­‐operations-­‐performed-­‐machines-­‐could-­‐one-­‐day-­‐be-­‐commonplaceif-­‐humans-­‐

  16  

are-­‐willing.  Economist,  The.  2013.  “Robot  Recruiters.”  The  Economist,  April  6.  

http://www.economist.com/news/business/21575820-­‐how-­‐software-­‐helps-­‐firms-­‐hire-­‐workers-­‐more-­‐efficiently-­‐robot-­‐recruiters.  

Engel,  Jeff.  2016.  “Making  the  Web  More  Open:  Drupal  Creator  Floats  an  ‘FDA  for  Data.’”  Xconomy,  March  2.  http://www.xconomy.com/boston/2016/03/02/making-­‐the-­‐web-­‐more-­‐open-­‐drupal-­‐creator-­‐floats-­‐an-­‐fda-­‐for-­‐data/.  

Etzioni,  Amitai,  and  Oren  Etzioni.  2016.  “AI  Assisted  Ethics.”  Ethics  and  Information  Technology  18  (2):  149–56.  doi:10.1007/s10676-­‐016-­‐9400-­‐6.  

Evans,  Owain,  Andreas  Stuhlmüller,  and  Noah  D.  Goodman.  2015.  “Learning  the  Preferences  of  Bounded  Agents.”  In  NIPS  2015  Workshop  on  Bounded  Optimality.  http://web.mit.edu/owain/www/nips-­‐workshop-­‐2015-­‐website.pdf.  

———.  2016.  “Learning  the  Preferences  of  Ignorant,  Inconsistent  Agents.”  In  Thirtieth  AAAI  Conference  on  Artificial  Intelligence.  http://web.mit.edu/owain/www/evans-­‐stuhlmueller.pdf.  

Fiala,  Brian,  Adam  Arico,  and  Shaun  Nichols.  2014.  “You,  Robot.”  In  Current  Controversies  in  Experimental  Philosophy,  edited  by  Edouard  Machery,  31–47.  Routledge.  

Floridi,  Luciano,  and  J.  W.  Sanders.  2004.  “On  the  Morality  of  Artificial  Agents.”  Minds  and  Machines  14  (August):  349–79.  doi:10.1023/B:MIND.0000035461.63578.9d.  

Foot,  Philippa.  1978.  Virtues  and  Vices  and  Other  Essays  in  Moral  Philosophy.  Berkeley:  University  of  California  Press.  

Friedman,  Milton.  1962.  Capitalism  and  Freedom.  Chicago:  University  of  Chicago  Press.  

Gantenbein,  R.  E.  2014.  “Watson,  Come  Here!  The  Role  of  Intelligent  Systems  in  Health  Care.”  In  2014  World  Automation  Congress  (WAC),  165–68.  doi:10.1109/WAC.2014.6935748.  

Garber,  Megan.  2014.  “Would  You  Want  Therapy  From  a  Computerized  Psychologist?”  The  Atlantic,  May  23.  http://www.theatlantic.com/technology/archive/2014/05/would-­‐you-­‐want-­‐therapy-­‐from-­‐a-­‐computerized-­‐psychologist/371552/.  

Gubbi,  Jayavardhana,  Rajkumar  Buyya,  Slaven  Marusic,  and  Marimuthu  Palaniswami.  2013.  “Internet  of  Things  (IoT):  A  Vision,  Architectural  Elements,  and  Future  Directions.”  Future  Generation  Computer  Systems  29  (7):  1645–60.  doi:10.1016/j.future.2013.01.010.  

Holtug,  Nils.  2002.  “The  Harm  Principle.”  Ethical  Theory  and  Moral  Practice  5  (4):  357–89.  

Kahneman,  Daniel.  2011.  Thinking,  Fast  and  Slow.  New  York:  Farrar  Straus  and  Giroux.  

Knight,  Will.  2015.  “How  to  Help  Self-­‐Driving  Cars  Make  Ethical  Decisions.”  MIT  Technology  Review,  July  29.  https://www.technologyreview.com/s/539731/how-­‐to-­‐help-­‐self-­‐driving-­‐

  17  

cars-­‐make-­‐ethical-­‐decisions/.  Legg,  Shane.  2008.  “Machine  Super  Intelligene.”  Doctoral  dissertation,  University  of  

Lugano.  Lewis,  Michael.  2014.  Flash  Boys:  A  Wall  Street  Revolt.  Lin,  Patrick.  2013a.  “The  Ethics  of  Saving  Lives  With  Autonomous  Cars  Is  Far  

Murkier  Than  You  Think.”  WIRED,  July  30.  http://www.wired.com/2013/07/the-­‐surprising-­‐ethics-­‐of-­‐robot-­‐cars/.  

———. 2013b. “The Ethics of Autonomous Cars.” The Atlantic, October 8. http://www.theatlantic.com/technology/archive/2013/10/the-ethics-of-autonomous-cars/280360/.

———. 2014. “Here’s a Terrible Idea: Robot Cars With Adjustable Ethics Settings.” WIRED, August 18. http://www.wired.com/2014/08/heres-a-terrible-idea-robot-cars-with-adjustable-ethics-settings/.

Lohr, Steve. 2015. Data-Ism: The Revolution Transforming Decision Making, Consumer Behavior, and Almost Everything Else. New York: HarperBusiness.

Metz, Cade. 2016. “Self-Driving Cars Will Teach Themselves to Save Lives — But Also to Take Them.” WIRED, June 9. http://www.wired.com/2016/06/self-driving-cars-will-power-kill-wont-conscience/.

Mill, John Stuart. 1859. On Liberty. London: John W. Parker and Son.

Millar, Jason. 2014a. “Technology as Moral Proxy: Autonomy and Paternalism by Design.” In 2014 IEEE International Symposium on Ethics in Science, Technology and Engineering, 1–7. doi:10.1109/ETHICS.2014.6893388.

———. 2014b. “You Should Have a Say in Your Robot Car’s Code of Ethics.” WIRED, September 2. http://www.wired.com/2014/09/set-the-ethics-robot-car/.

MIT Technology Review. 2015. “Why Self-Driving Cars Must Be Programmed to Kill,” October 22. https://www.technologyreview.com/s/542626/why-self-driving-cars-must-be-programmed-to-kill/.

Nader, Ralph. 1965. Unsafe at Any Speed: The Designed-in Dangers of the American Automobile. New York: Grossman.

Orseau, Laurent, and Mark Ring. 2012. “Space-Time Embedded Intelligence.” In Artificial General Intelligence, edited by Joscha Bach, Ben Goertzel, and Matthew Iklé, 209–18. Lecture Notes in Computer Science 7716. Springer Berlin Heidelberg. doi:10.1007/978-3-642-35506-6_22.

Purves, Duncan, Ryan Jenkins, and Bradley J. Strawser. 2015. “Autonomous Machines, Moral Judgment, and Acting for the Right Reasons.” Ethical Theory and Moral Practice 18 (4): 851–72. doi:10.1007/s10677-015-9563-y.

Rawls, John. 1999. A Theory of Justice. Rev. ed. Cambridge: Belknap Press of Harvard University Press.

———. 2005. Political Liberalism. Exp. ed. Columbia Classics in Philosophy. New York: Columbia University Press.

Satz, Debra. 2010. Why Some Things Should Not Be for Sale: The Moral Limits of Markets. Oxford Political Philosophy. New York: Oxford University Press.

Searle, John R. 1980. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3 (3): 417–24. doi:10.1017/S0140525X00005756.

———. 1984. Minds, Brains, and Science. Cambridge: Harvard University Press.


Susskind, Richard, and Daniel Susskind. 2015. The Future of the Professions: How Technology Will Transform the Work of Human Experts. New York: Oxford University Press.

Transport Canada Civil Aviation. 2016. “Flying a Drone or an Unmanned Air Vehicle (UAV) for Work or Research.” Transport Canada, April 21. http://www.tc.gc.ca/eng/civilaviation/standards/general-recavi-uav-2265.htm.

Wall, Steven. 2012. “Perfectionism in Moral and Political Philosophy.” Edited by Edward N. Zalta. The Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/archives/win2012/entries/perfectionism-moral/.

Weng, Yueh-Hsuan, Chien-Hsun Chen, and Chuen-Tsai Sun. 2007. “The Legal Crisis of Next Generation Robots: On Safety Intelligence.” In Proceedings of the 11th International Conference on Artificial Intelligence and Law, 205–209. New York: ACM. doi:10.1145/1276318.1276358.

Notes

a *[Acknowledgements]

b See Jason Millar (2014b), and Amitai and Oren Etzioni (2016). For an earlier formulation of a similar problem, see Patrick Lin (2013a; 2013b). See also two articles in the MIT Technology Review (2015; Knight 2015).

c For work dealing with a similar problem see, for instance, Lin (2014); Duncan Purves, Ryan Jenkins and Bradley J. Strawser (2015); Jason Millar (2014a); and Yueh-Hsuan Weng, Chien-Hsun Chen and Chuen-Tsai Sun (2007).

d Providing a definition of intelligence is a difficult problem from a philosophical perspective, but it may be useful to point out that one common conception of intelligence relies on the notion of rationality. What is more, it is common in decision theory and rational choice theory to draw a distinction between different forms of rationality: practical rationality (choosing the right action to satisfy one’s desires), volitional rationality (forming the right desires) and epistemic rationality (forming the right beliefs). Each of these forms of rationality can be seen as a form of intelligence. Interestingly, it is more common in the machine learning literature, as well as in the more general AI literature, to define intelligence as a form of practical rationality. See, for instance, the work of Laurent Orseau and Mark Ring (2012) and of Shane Legg (2008). But we may wonder whether this definition is sufficient and whether we should not take into account other dimensions of rationality, such as the capacity to form proper desires and beliefs.
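To make the practical-rationality reading slightly more concrete: the machine learning literature typically formalizes it as expected-reward (or expected-utility) maximization. The display below is only an illustrative sketch in that spirit, with symbols chosen for this note; it is not a reproduction of Legg’s (2008) exact measure.

```latex
% A minimal sketch: intelligence read as practical rationality,
% i.e., acting so as to maximize expected cumulative reward.
\[
  \pi^{*} \;=\; \arg\max_{\pi} \;
  \mathbb{E}\!\left[\, \sum_{t=1}^{T} r_{t} \;\middle|\; \pi, \mu \right]
\]
% \pi : the machine's policy (its way of choosing actions)
% \mu : the environment it acts in
% r_t : the reward received at time t
```

On this reading, an agent counts as more intelligent the better its policy does at maximizing expected reward, possibly averaged over many environments. Nothing in the formalism, as stated, captures volitional or epistemic rationality: the rewards and the beliefs used to compute the expectation are simply taken as given, which is precisely the worry raised at the end of the note.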

e For the present purposes, I define that as the ability of a system to perform a task without real-time human intervention. See Etzioni and Etzioni (2016, 149) for a similar definition.

f Such as the ‘refurbished’ Watson system created by IBM that won first place at the game show Jeopardy! in 2011. The system is now used as a clinical decision support system (Gantenbein 2014).

g See also Kate Crawford (2016) for other cases of discrimination performed by computer algorithms. For instance, some software used to assess the risk of recidivism in criminal defendants is twice as likely to mistakenly flag black defendants, compared with white defendants, as being at higher risk of committing a future crime. Another study found that women were less likely than men to be shown ads on Google for highly paid jobs.

h Drawing clear limits is likely to require developing some form of typology of morally laden decisions, which is not a trivial issue from a philosophical perspective. A related issue goes as follows: how does a machine decide that it is faced with a morally laden decision? This is not a trivial issue either, from both an AI research and a philosophical perspective. Presumably, one needs to establish some form of typology of moral decisions before building a system that can operate on this typology. But some cases are more straightforward than others. When live casualties can be encountered, for instance, a decision is likely to be morally laden, and a machine such as a self-driving car can be designed to identify these situations. A machine can also be designed, to some extent, to identify the other situations mentioned above (risk to physical and/or mental integrity, the destruction of buildings or infrastructure). But this list is by no means exhaustive.
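As a purely illustrative sketch of how such a typology could be operationalized, consider the following; the class, fields and rules are my own assumptions for the example, not a description of any existing system.

```python
from dataclasses import dataclass

@dataclass
class Situation:
    """A hypothetical snapshot of what the machine's sensors report."""
    humans_at_risk: int = 0          # people who could be injured or killed
    property_at_risk: bool = False   # buildings or infrastructure threatened
    privacy_at_risk: bool = False    # e.g., sensitive personal data exposed

def is_morally_laden(s: Situation) -> bool:
    """Crude, rule-based test drawn from the typology sketched in the note.

    Any situation involving possible casualties, damage to property or
    infrastructure, or harm to other protected interests is flagged.
    The set of rules is deliberately open-ended and by no means exhaustive.
    """
    return s.humans_at_risk > 0 or s.property_at_risk or s.privacy_at_risk

# Example: a self-driving car detecting a pedestrian in its path.
print(is_morally_laden(Situation(humans_at_risk=1)))  # True
```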
i See Debra Satz (2010) for an overview of contemporary arguments in favor of more market freedom.

j A group of intelligent machines could also be connected together and share information on user choices. It seems likely that systems like these will be developed in the future. This option potentially mixes different approaches, depending on how these machines use the information they share. I will say more about these mixed options when I discuss the fourth approach: letting other machines decide.

k Given that the two options in the trolley problem capture a dilemma between deontological and consequentialist modes of moral reasoning — a consequentialist is more inclined to divert the trolley to spare as many lives as possible, which promotes the best consequences, while a deontological thinker would be more sensitive to the value of the action of diverting the trolley, which involves killing the man on the side track — the usual joke is to claim that users of self-driving cars should have access to a deontological/consequentialist configuration setting. Think of it as the ‘balance’ or ‘fader’ control on a car radio. But of course, this is just a joke. Moral configuration settings do not have to be that simplistic.
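Purely to illustrate what a less simplistic version of such a setting could look like, here is a hypothetical sketch; the parameter name and the two scoring rules are my own assumptions, not a proposal drawn from the literature.

```python
def score_option(deaths_caused_by_action: int,
                 total_deaths: int,
                 consequentialist_weight: float) -> float:
    """Lower scores are better.

    `consequentialist_weight` plays the role of the 'fader': at 1.0 the
    machine only minimizes total fatalities; at 0.0 it only minimizes the
    deaths it would actively cause; intermediate values blend the two.
    """
    w = consequentialist_weight
    return w * total_deaths + (1 - w) * deaths_caused_by_action

# Trolley-style example: diverting actively kills 1 (total 1); staying causes 0 directly (total 5).
divert = score_option(deaths_caused_by_action=1, total_deaths=1, consequentialist_weight=0.5)
stay = score_option(deaths_caused_by_action=0, total_deaths=5, consequentialist_weight=0.5)
print("divert" if divert < stay else "stay")  # prints "divert" at this setting
```

Even this is a caricature, of course; the point is only that a configuration setting can encode a weighting between modes of moral reasoning rather than a crude binary switch.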

l A related question goes as follows: is there such a thing as ‘the heat of the moment’ for an intelligent machine? One may be inclined to say no, since machines always make decisions at the same pace through similar processes, but this is far from certain. New and more advanced forms of AI may evolve into something similar to the human brain and use faster or slower decision systems depending on the circumstances, the latter being able to perform more thorough, but also slower, assessments (see also the next note). As well, intelligent machines may rely on information available online, but not be able to access that information when they must make a quick decision. I will not address these questions directly here, for I am mostly interested in how circumstances influence human decision making, not machine decision making, but it is worth keeping these considerations in mind.

m See Daniel Kahneman (2011) for a particularly interesting account of the differences between fast and slow mental processes, as he names them.
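To make the speculation about faster and slower decision systems slightly more concrete, here is a minimal sketch of a time-budget dispatch between a cheap reflex policy and a more thorough deliberative one; the threshold, function names and return values are purely hypothetical.

```python
import time

def fast_reflex_decision(situation):
    """Cheap, precomputed rule: e.g., brake immediately."""
    return "brake"

def slow_deliberative_decision(situation):
    """More thorough assessment: e.g., evaluate several maneuvers, possibly
    consulting external information when it is reachable in time."""
    return "swerve_left"

def decide(situation, time_budget_s: float) -> str:
    """Use the slower, more thorough system only when there is time for it."""
    start = time.monotonic()
    if time_budget_s < 0.1:          # hypothetical threshold: a split-second emergency
        return fast_reflex_decision(situation)
    decision = slow_deliberative_decision(situation)
    # Fall back to the reflex if deliberation overran its budget.
    if time.monotonic() - start > time_budget_s:
        return fast_reflex_decision(situation)
    return decision

print(decide({"obstacle": "pedestrian"}, time_budget_s=0.05))  # "brake"
```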

n This may even be an advantage of some intelligent machines. Self-driving cars may be more advantageous than human-driven cars precisely because it may be easier to decide collectively about their behavior.

o This is in line with Amitai and Oren Etzioni’s (2016, 151) suggestion that focus groups or public opinion polls could be used to determine the relevant values that should inform the behavior of intelligent machines. See also the studies by Jean-François Bonnefon, Azim Shariff and Iyad Rahwan (2016; 2015) and an article in the MIT Technology Review (2015) for examples of this approach. The 2016 study suggests that most people think self-driving cars should minimize the total number of fatalities, even at the expense of the passengers in the car. But the group of people surveyed also claimed they would not buy such a car: they want a car that will protect them and their passengers before other people outside the car. Polls also raise other problems, starting with methodological questions. Who should be surveyed? How can we account for gender-related, age-related or cultural variations or biases in answers? How are we to use a poll result in which no clear trend can be identified?

p For a canonical formulation of such views, see, for instance, Milton Friedman (1962).

q See Ian Carter (1995) for an argument in favor of more freedom for the sake of technological progress. Even though scientific and technological developments may have disadvantages, claims Carter, governments (and other regulating bodies) won’t always be able to predict the disadvantageous outcomes of these developments, and they should therefore minimize interference during the development phase. The claim is not that developing clearly harmful technology, such as nuclear weapons, should be allowed; the risks of that technology are rather straightforward to determine. Rather, the idea is that in a situation in which clear indications of serious downside risks are so far lacking, government bans are premature. Carter suggests that we must see, in each case, whether the burden of increased regulation is justified by the risk. See also de Bruin and Floridi (2016, 13).

r As car lobbyists in the US pointed out every single time transport authorities tried to raise safety standards, a trend identified by Ralph Nader (1965) a long time ago.

s On a potential paternalistic dimension, see also Millar (2014a).

t See also the work of Owain Evans, Andreas Stuhlmüller and Noah Goodman (2016; 2015) on learning the preferences of human agents.
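As a rough illustration of what learning the preferences of human agents can involve, the sketch below infers how strongly a user values one outcome over another from their observed choices, using a simple noisy (softmax-style) choice model; the data, variable names and grid search are assumptions made for the example, not a reproduction of Evans, Stuhlmüller and Goodman’s models.

```python
import math

# Observed choices: on each trip the user could protect the passenger ("p")
# or minimize total casualties ("m"); suppose we logged which one they picked.
observed_choices = ["p", "p", "m", "p", "p"]

def log_likelihood(weight_p: float) -> float:
    """Log-probability of the observed choices under a softmax choice model,
    where weight_p is the hypothesized utility of 'protect the passenger'
    relative to 'minimize casualties' (whose utility is fixed at 0)."""
    total = 0.0
    for choice in observed_choices:
        p_choose_p = math.exp(weight_p) / (math.exp(weight_p) + 1.0)
        total += math.log(p_choose_p if choice == "p" else 1.0 - p_choose_p)
    return total

# Crude grid search for the weight that best explains the observed behavior.
best = max((w / 10 for w in range(-30, 31)), key=log_likelihood)
print(f"inferred relative preference for protecting the passenger: {best:.1f}")
```

A point estimate like this is only a first step; handling inconsistent, biased or bounded decision makers is exactly the harder problem the cited work addresses.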
u Another proposal that may be interpreted in different ways is that machines should “teach themselves” what to decide (Metz 2016). This proposal overlaps with the first approach if the makers of these machines have an important influence on these self-teaching mechanisms. It may overlap with the second approach if the machines are sensitive to users’ behaviors. And it may overlap with, or be very similar to, the fourth approach if these machines can learn how to make morally laden decisions with a high degree of autonomy.


v Richard and Daniel Susskind (2015, 280–84) discuss this idea, though they do not necessarily endorse it.