How should we treat them? Remarks on the Ethics of Artificial Consciousness Research
Steve Torrance, Universities of Sussex and Middlesex, UK
[email protected]
Exystence Workshop, Turin, Sept 2003: Machine Consciousness: Complexity Aspects


Page 1: How should we treat them?  Remarks on the Ethics of Artificial Consciousness Research

How should we treat them? Remarks on the Ethics of Artificial Consciousness Research

Steve Torrance, Universities of Sussex and Middlesex, UK
[email protected]

Exystence Workshop, Turin, Sept 2003: Machine Consciousness: Complexity Aspects

Page 2: Outline

• Why are we here?
  – What’s the difference between AI research and AC research?
• The relation between consciousness and ethics
  – Are our phenomenological concepts tied to ethics in a way that non-phenomenological ones aren’t?
• The discrimination problem
  – How do we tell real from lookalike consciousnesses? (Or doesn’t it matter?)
• What ethical responsibilities attach to AC researchers?
  – In particular, issues to do with social acceptance and proliferation.

Page 3: Some preliminaries

• Machine Consciousness (MC) versus Artificial Consciousness (AC)
  – I prefer AC (to sharpen the contrast with AI)
• Machine versus organic (or biotechnological) AC?
  – The former may get us readier results – but is it the right lamp-post to be looking under for the car keys?
• Functional versus phenomenal consciousness
  – I’m going to focus on the latter;
  – It’s where I think the really interesting questions are
• The functional vs. material ends of the spectrum
  – Getting closer to the material end using methods discussed in the workshop may still leave a long, long way to go!

Page 4: However...

• All the investigative approaches used here have enormous importance for understanding the nature of consciousness (or for engineering fruitfulness, etc.)
  – there are often interesting things to find under whatever lamp-post you happen to have chosen to look under…

Page 5: Why are we here?

• What’s special about AC (versus AI) research?
  – Psychological/philosophical insights? Greater control, flexibility, or effectiveness for achieving industrial ends? Benefits to users? Rhetoric (it gets funding)?
• Maybe AC research is also – or should be – about creating artificial agents that aren’t just instruments for us,
  – but that have their own interests, their own sakes, their own potential for suffering, benefit, etc.
• So is there an ETHICAL significance to AC research that’s lacking in earlier phases of AI research?
  – If so, it doesn’t seem to be widely recognized…

Page 6: Where does science/engineering end and ethics begin?

• Two views:
  – strict segregation between science and ethics or normativity:
    • no ‘ought’ from an ‘is’ – see the quote from Poincaré, to follow
  – overlap and deep interrelation:
    • (at least when science/engineering deals with aspects of mind, experience, well-being, etc.)

Page 7: Poincaré’s principle

There can be no such thing as a scientific morality. But neither can there be an immoral science. The reason for this is simple: it is – how shall I put it? – a purely grammatical matter.

If the premises of a syllogism are both in the indicative, then the conclusion will equally be in the indicative. In order for a conclusion to be able to be taken as an imperative, at least one of the premises would also have to be imperative. Now general scientific principles … can only be in the indicative mood; and truths of experience will also be in that mood. At the foundation of science there is and can be no other possibility. Let the most subtle dialectician try to juggle with these principles howsoever he will; let him combine them, scaffold them one on top of another: whatever he derives from them will be in the indicative. He will never obtain a proposition that says: Do this, or Don’t do that – that is to say, a proposition which either confirms or contradicts any moral principle.

Henri Poincaré, ‘La Morale et la Science’, Dernières Pensées, Paris: Flammarion, 1913.

Page 8: Enactive views of mind

• E. Thompson (ed.), Between Ourselves: Second-Person Issues in the Science of Consciousness, Imprint Academic, 2001.
• Consciousness is radically bound up with intersubjectivity and empathy:
  – Individual conscious awareness is intimately interrelated with recognition of awareness in others;
  – Intersubjectivity is fundamentally empathetic in character.
• By implication the approach is ethical in character:
  – ‘Empathy is a precondition of a science of consciousness’
• See http://www.imprint.co.uk/jcs_8_5-7.html
• Editor’s Introduction, ‘Empathy and Consciousness’: http://www.imprint.co.uk/pdf/Thompson.pdf

Page 9: Enactive science and ethics

• The enactive approach suggests that to recognize x as phenomenally conscious is (in part) to take up an evaluative or affective attitude towards x;
  – so the notion of phenomenal consciousness may provide a crucial ground for ethics (though not necessarily the sole ground).
• Challenges the Hume/Poincaré fact–value dichotomy;
  => consciousness science as essentially ethical?

Page 10: Intersubjectivity, enaction and the other minds problem

• Traditionally the other minds problem is seen as the problem of MY validating inferences to YOUR ‘inner’ states;
• The enactive approach takes intersubjectivity (the assumption of commonality of ‘inner’ states across subjects) as a ground for individual subjective experience:
  – So conscious states are not seen as purely ‘inner’/‘private’;
  – Also, you can’t dissociate the theoretical question from the practice of interpersonal communication and concern.

Page 11: Possible implications for AC research

• It’s not clear how this works when trying to bridge the gap between human and artificial minds!
  – Perhaps any artificial consciousness will necessarily have to participate in intersubjectivity, on this view.
  – But of course there isn’t the same biological commonality between human and machine.
  – At the very least, this suggests an interesting experimental paradigm (cf. Owen Holland’s referee).
• So the enactive, intersubjectivity-based approach may not get us very far on its own.

Page 12: Is ‘mind’ a unified explanatory field?

• Is mind a ‘single scientific kind’? (a question raised, but not highlighted, at Birmingham)
• Does everything that is mental derive its mentality from a single fundamental kind of characteristic (e.g. a certain kind of cognitive organization)? (‘Psychological or explanatory monism’)
• Supporters of informational or computation-based conceptions of mind usually assume psychological monism;
• Some may argue that the assumption of monism is NECESSARY to the MC enterprise...
  – But IS IT the only possibility?

Page 13: Psychological pluralism

• MY VIEW: I think there’s room for legitimately questioning psychological monism.
• In general, ontological monism (physicalism) needn’t entail psychological monism (e.g. computationalism);
• A ‘fully paid-up materialist’ could accept a computationalist account of SOME mental processes (at least cognitive ones, e.g. learning) without accepting it for ALL
  – specifically, one may wish to deny a computationalist account of phenomenal states (e.g. tying them instead to lower-level biological structure).
• More specific considerations favouring psychological pluralism:
  – intentional stance vs. phenomenological stance;
  – intrinsic ethical value.

Page 14: The intentional stance

• A Dennett-style ‘stance’ theory seems to fit well with many cognitive mental properties
  – e.g. ‘S believes that …’; ‘S understands that …’; ‘S learned to …’
• It’s no doubt fallacious to look for an introspectible inner state (or quale) as functionally central in such cases…
  – rather, an interpretative/predictive ‘stance-theoretic’ account seems to be called for;
  – and this fits in well with an informational account, and with the idea that such processes can be replicated in various kinds of computational architectures.

Page 15: Phenomenological stances? (Heterophenomenology)

• But in the case of phenomenological states, a ‘stance-theoretic’ view doesn’t seem to suffice:
  – a heterophenomenological view may go quite a long way towards capturing the complexity of such states;
  – but on (my interpretation of) the enactive view, our conception of such states (in another) essentially involves an empathetic co-identification with the other’s experience;
  – [such a conception won’t imply that such a state is private to its owner – indeed the enactive view resists such a claim].
• Even without the enactive view, there seems to be a kind of watershed between cognitive attributions and phenomenal attributions in this connection...

Page 16: Intrinsic moral worth (IMW)

• Another possible source of discontinuity between the cognitive and the phenomenological?
• Perhaps any genuine AC would have to be considered as having its own intrinsic experiential point of view, and hence an intrinsic moral worth;
• i.e. it would deserve consideration for its own sake;
• This would be in contrast with purely cognitive systems, even ones with highly complex features
  – (No one ever suggested that we should care for the well-being of GOFAI or ANN systems.)
  – SO this may be another way to show how our phenomenological notions are much more closely tied to ethical ones than are concepts of purely cognitive or productive mentality.

Page 17: Machine versus organic phenomenal consciousness?

• Can there be any genuine AC unless it is organic, rather than merely machine-like, in nature?
• Or are organisms a subset of machines?
• We need much clearer definitions of what differentiates a machine from a living organism
  – theoretical work by Maturana and Varela, and alternatively by Hans Jonas, is useful here.
• Maybe our (phenomenal) consciousness derives from features of our material make-up that can only be duplicated by low-level constructions of organisms that metabolize, self-reproduce, self-renew, etc.

Page 18: Nano-MC?

• Perhaps the appropriate level of aggregation for artificial phenomenal consciousness is at the level of cellular building blocks:
  – Genuinely conscious systems MAY need to be built up at a molecular level – e.g. by manufacturing artificial protein molecules, etc.
• An alternative might be that very highly integrated nano-scale computing technologies are necessary (cf. John Taylor): this would imply that phenomenal consciousness is grounded in features much lower than those of abstract virtual-machine functionality (as with Sloman and Chrisley, or Franklin and Baars).

Page 19: The misattribution problem: false positives and negatives

• There’s scope for at least two kinds of error:
  – FALSE POSITIVES: if we house, feed, and otherwise protect agents, wrongly believing them to be CONSCIOUS, we unjustly deprive genuine claimants of those benefits (unless resources are unlimited);
  – FALSE NEGATIVES: if we withhold benefits from agents, wrongly believing them to be NON-CONSCIOUS, then we have treated them unjustly.
• Clearly, the project of AC will be morally hazardous unless we can have solid grounds for being at least strongly confident in our discriminations.
• Are there grounds for erring on the inclusive side, and assuming that any sufficiently close approximation to phenomenal consciousness should be treated as such?

Page 20: Ethical responsibilities of AC researchers

• Powerful technological innovations tend to proliferate beyond control
  – (automobiles, computers, the internet, mobile phones).
• If genuine ACs became as widespread as PCs or mobile phones, questions of misattribution would be very serious.
• So that is perhaps why we have to make such projections now.
• Also, the consequences of a mass social mis-perception of AC need to be considered...
• Such indirect social effects could be enormous, and difficult to predict.

Page 21: The long view

• There is an ethical imperative on technological innovators to look deep into possible futures, and to be clear-eyed about large-scale costs as well as large-scale benefits.
• Deep accounting: in the case of personal motorized transport we now see, with hindsight, enormous costs.
  – Ivan Illich: when measuring the mean speed of getting from A to B by a particular means of travel (bike, car, helicopter), you have to include not just the actual journey time, but also the mean time required to earn the money to pay for both the personal AND the social costs of that means of travel.
• We need to adopt similar methods of deep accounting prior to the production of innovations, rather than after the event.
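Illich's rule can be put as a one-line formula: effective speed = distance / (journey time + time spent earning the trip's full cost). A minimal sketch, with all the concrete numbers (distances, prices, the wage) invented purely for illustration:

```python
# Illich-style "deep accounting" of travel speed, as stated on the slide.
# The sample figures below are made up for illustration only.

def effective_speed(distance_km, travel_hours, personal_cost, social_cost, hourly_wage):
    """Mean speed once we count the time needed to earn the money that
    pays for both the personal and the social costs of the trip."""
    earning_hours = (personal_cost + social_cost) / hourly_wage
    return distance_km / (travel_hours + earning_hours)

# A bike: 10 km in 15 minutes, negligible personal and social costs.
print(round(effective_speed(10, 0.25, 0, 0, 20), 1))  # 40.0 km/h
# A car: the same 10 km in 12 minutes, but with running costs plus
# externalized social costs, earned back at a wage of 20 per hour.
print(round(effective_speed(10, 0.20, 6, 4, 20), 1))  # 14.3 km/h
```

On this accounting the nominally faster vehicle comes out slower, which is the point of doing the deep accounting before, not after, an innovation spreads.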

Page 22: The ethical world-wide web?

• To create ACs is to create a new class of beings to enter the ethical 'world-wide web’ (Kant’s kingdom of ends)…

• Like reproductive cloning, this involves ethical considerations that are quite novel, and difficult to focus on clearly.

• Perhaps any research programme should have a significant sub-project to investigate the ethical implications.

Page 23: Conclusions (1)

• The gap between cognitive and phenomenal aspects of mind MAY mark a significant watershed, even though most researchers tend to see a continuity;
• The watershed may be of ethical significance:
• So creating genuine MC will impose strong responsibilities on the researchers involved, as the beings that result may have genuine interests, moral claims, etc.

Page 24: Conclusions (2)

• These considerations may be reinforced by taking an enactive, intersubjective view of consciousness of the sort discussed;
• Also, an intentional-stance-style approach, which may work for cognitive properties, may not generalize to phenomenal states;
• None of this is to diminish the importance of using the current range of AC techniques, both as a theoretical investigation tool and as a way to enhance computer technology.

Page 25: Conclusions (3)

• All the same, genuine MC may perhaps be realizable ONLY using techniques much closer to biotechnology than to most of the approaches discussed in the workshop
  – (with the possible exception of the CODAM approach, as outlined by John Taylor, if realized at nano-scale levels of integration);
• Whether genuine MC comes sooner rather than later, any major research programme of this sort ought to include an investigation of ethical and social impacts as a sub-theme;
• Any such investigation should take a long view, and consider the implications of any large-scale (planet-wide) proliferation of (real or ersatz) MC systems.