
Artificial Agents: Philosophical and Legal Perspectives

Samir Chopra and Laurence White

© 2007


Chapter 1

Personhood for Artificial Agents

Table of Contents

1. Introduction
   1.1 The Concept of "Artificial Agent"
   1.2 Terminology and Legal Scope
2. Personality for Artificial Agents
   2.1 The Concept of "Person"
   2.2 Lockean Personality for Artificial Agents
      2.2.1 Can Non-humans be Lockean Persons?
      2.2.2 The Intentional Stance and Personhood
      2.2.3 Conditions for Lockean Personality
         (a) Rationality
         (b) Reciprocation of the Intentional Stance
         (c) Verbal Communication
         (d) Self-consciousness
      2.2.4 Conclusion
   2.3 Legal Personality for Artificial Agents
      2.3.1 Two Kinds of Legal Personality
      2.3.2 Is Being Human Necessary or Sufficient for Legal Personality?
      2.3.3 Dependent Legal Personality for Artificial Agents
      2.3.4 Independent Legal Personality for Artificial Agents
         (a) The Link with Lockean Personality
         (b) Conditions for Independent Legal Personality
            (i) Being Sui Juris
            (ii) Understanding and Obedience of Legal Obligations
            (iii) Susceptibility to Punishment
            (iv) Capacity to Enter Contracts
            (v) Ability to Control Money and Own Property
   2.4 Objections to Personality for Artificial Agents
      2.4.1 Personal Identity Conditions
      2.4.2 The Judgment Objection
      2.4.3 "Missing-Something" Arguments
         (a) "Phenomenal" Consciousness
         (b) Behavior or Internal Design?
         (c) Free Will
         (d) Autonomy
         (e) Moral Sense
3. Artificial Agents and the Contracting Problem
   3.1 The Contracting Problem
      3.1.1 Doctrinal Difficulties
      3.1.2 The Role of "Closed Systems"
      3.1.3 A "Quick" Solution?
   3.2 Five Possible Solutions to the Contracting Problem
      3.2.1 First Solution: Agents as "Mere Tools"
      3.2.2 Second Solution: The Unilateral Offer Doctrine
      3.2.3 Third Solution: The Objective Theory of Contractual Intention
      3.2.4 Fourth and Fifth Solutions: Artificial Agent as Legal Agent
         (a) The Doctrine of Authority
         (b) Fourth Solution: Artificial Agent as Slave
         (c) Fifth Solution: Artificial Agent as Legal Person
   3.3 Evaluating the Possible Solutions
      3.3.1 Does Existing Law Dictate a Solution?
      3.3.2 Agency Law and Artificial Agents
         (a) Motivators
         (b) Objections
      3.3.3 Liability Consequences
         (a) First Scenario: Operator as Principal
         (b) Second Scenario: User as Principal
         (c) Conclusion
   3.4 Final Considerations
4. Conclusion


Chapter 1

Personhood for Artificial Agents

1. Introduction

In this chapter, we explore the potential for according legal personality to sophisticated artificial

agents. We wish to show that philosophical and legal theorizing about personhood for artificial

agents can be mutually informing. In particular, legal discourse that seriously considers

personhood for artificial agents provides a useful testing-ground for the pragmatic value of

philosophical positions about the nature and possibility of artificial intelligence in general, and

therefore, artificial agents’ personhood. Philosophical discussion about the cognitive and moral

status of artificial agents could thus usefully draw inspiration from the pragmatic deliberations of

the legal sphere about the status of these agents in our society. And in turn, legal arguments will

be informed by philosophical arguments in favor of the ascription of personhood to artificial

agents. A virtuous circle of complementary argument is possible.

In order to provide a discursive framework, we introduce a distinction between dependent

and independent legal persons, and suggest the latter category is closely related to John Locke’s

philosophical conception of a person, which, enhanced with other modern conceptions, has been

elaborated by Daniel Dennett (Dennett 1978). We investigate whether artificial agents could

meet conditions for “Lockean” personality, as well as for independent legal personality, and

conclude that these conditions could, in principle, be met by artificial agents with the right

technical features. Deploying a framework for evaluating agents’ abilities, we also counter

objections to such a status for them.

Having prepared the ground in this way, we go on to investigate the doctrinal difficulties


presented by the contracts entered into by artificial agents, and explore five possible solutions,

one of which involves according legal personality to artificial agents. We conclude that at current

levels of technical ability and sophistication of artificial agents it is not necessary to postulate

artificial agents as legal persons in order to account for such contracts. But this view only reflects

the current sophistication of artificial agents and is likely to be affected by their continued

development. The question of legal or philosophical personhood for artificial agents will be

informed by a broad matrix of pragmatic, philosophical and extra-legal concepts; philosophically

unfounded chauvinism about human uniqueness should not play a role in these deliberations.

1.1 The Concept of “Artificial Agent”

Precisely how to define “artificial agent” is a matter of some difficulty, and there is no agreed list

of necessary and sufficient characteristics. Informally, we mean by the expression a relatively

autonomous information-processing process. This definition is loose enough to capture the large

class of entities that might qualify for such a description. Robots and internet “bots”1 are the

most salient examples but there are many others, including software agents that operate internet

shopping sites. There is a wide spectrum of abilities and architectures available in the world of

artificial agents; it is not an exaggeration to say the world of online commerce would be crippled

if human or corporate principals did not employ software-based artificial agents to conduct

business for them. Some agents can sense their environment and extract information from it; they

can engage in simple communications with human beings; and often, they can make executive

decisions such as whether or not to grant a credit card to an online applicant. Clearly, these

abilities are capable of extension and refinement; it is these existing abilities and their potential

1 “Bots” are relatively autonomous programs that do much of the “grunt” work of the internet, such as mailing programs, search programs, and the like.


for enhancement that leads us into the discussions of this chapter.

Typically, an artificial agent, as opposed to an ordinary computer program, will have a

relatively higher degree of one or more of the following:2

• autonomy – the ability to operate without the direct intervention of humans or other

agents, and to exert non-supervised control over their own actions and internal states;

• social ability – the capacity to interact with other artificial agents or with human beings;

• proactivity – the ability to initiate goal-directed behaviour;

• reactivity – the ability to perceive an environment and respond to changes within it;

• adaptive behaviour – the ability to adjust to the habits, working methods and preferences

of users, other agents or humans;

• mobility – the ability to move around a virtual or physical environment; and

• representativeness – the attribute of being a representative of, or an intermediary for,

some other agent or person.

Of this list, in our view, the most important aspects are autonomy, social ability and proactivity,

so that if any one is missing, a particular process may well fail to be considered an artificial

agent. In a rough ordering of importance, these are followed by adaptive behavior, reactivity,

mobility and representativeness. In the sense we intend the term “artificial agent”, the

representativeness of the agent, which is a sub-category of social ability, is not crucial, although

it does delimit an important sub-class of artificial agents – those that play a similar role to that

played by human agents. This is because we wish to include in the scope of our discussion

possible fully autonomous agents (in effect, artificial persons), even if they do not act as

intermediaries for, or representatives of, any particular persons or things.

2 Adapted with additions from (Wettig and Zehendner 2003), section I, sub-section 2.
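The property list above is informal, but it can be given a rough shape. The following Python sketch is purely illustrative; the class and method names are our own hypothetical choices, not drawn from any particular agent framework:

```python
from abc import ABC, abstractmethod
from typing import Any

class ArtificialAgent(ABC):
    """Hypothetical interface reflecting the property list above.

    A concrete agent need not exhibit every property; on the ordering we
    propose, autonomy, social ability and proactivity matter most."""

    @abstractmethod
    def perceive(self, environment: Any) -> Any:
        """Reactivity: extract information from the environment."""

    @abstractmethod
    def decide(self) -> Any:
        """Autonomy and proactivity: choose a goal-directed action without
        direct human supervision."""

    @abstractmethod
    def communicate(self, counterpart: Any, message: str) -> str:
        """Social ability: interact with other agents or with human beings."""

    def adapt(self, feedback: Any) -> None:
        """Adaptive behaviour (optional): adjust to the habits and preferences
        of users and other agents."""

    def migrate(self, new_host: str) -> None:
        """Mobility (optional): move to another virtual or physical environment."""

    # Representativeness is a relationship rather than a method: an agent may
    # act as an intermediary for a principal (see section 1.2 below).
```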


As an example of what we might consider autonomous behavior, consider an artificial

agent designed to find used-books bargains on the net for its principal. The agent might be

initially provided with a list of websites to scour, sample titles with abstracts that serve as

learning data for it to recognize other titles in the genre, and some funds with which to make

purchases. As the agent moves, over the Internet, from site to site and makes purchases, it reads

recommendations provided by other agents at the site it uses, much in the way Amazon.com

makes recommendations based on past purchases. The artificial agent reads these abstracts, uses natural

language processing algorithms to extract their semantic content, compares it with its sample

learning data for matching on parameters of interest, and then depending on the outcome, buys

more books or declines the offers. This agent, as it purchases electronic or paper copies of books

with the funds at its disposal and instructs that they be shipped to its principal’s address, enters

into the purchase contracts autonomously. The agent is able to interact with its environment by

reading and scanning information at the sites it visits; it interacts with other agents, such as

selling agents deployed by other principals that offer it bargains; it buys titles not on its original

list; and it is able to proactively take the decision to buy a new title.

The used-books agent is able to decide to take an action on its own, in response to input it

decides to focus on; its interaction on the Internet, in a world populated by other shopping agents

and sometimes humans as well, exemplifies autonomous, goal-directed, proactive behavior. Bots

that bid on behalf of users at online auctions already exist; what is needed to make this example a

reality, besides some enhancement in sensory mechanisms and natural language processing, is a

broadly-defined goal for the agent, such as the principal’s satisfaction along several dimensions,

and enough executive discretion in the choices it makes to optimize satisfaction of that goal.
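Purely by way of illustration, the used-books agent’s decision loop might be sketched as follows; every name, data structure and threshold here is hypothetical, standing in for the natural language processing and payment machinery a deployed system would need:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Listing:
    title: str
    abstract: str
    price: float

@dataclass
class UsedBookAgent:
    """Toy version of the used-books shopping agent described above."""
    budget: float
    sample_abstracts: list[str]       # learning data supplied by the principal
    shipping_address: str
    match_threshold: float = 0.3
    purchases: list[Listing] = field(default_factory=list)

    def similarity(self, abstract: str) -> float:
        """Stand-in for natural language processing: crude word-overlap score
        between a listing's abstract and the principal's sample learning data.
        A real agent would use a trained language model here."""
        sample_words = {w for s in self.sample_abstracts for w in s.lower().split()}
        words = set(abstract.lower().split())
        return len(words & sample_words) / max(len(words | sample_words), 1)

    def consider(self, listing: Listing) -> bool:
        """Decide, without human intervention, whether to enter the purchase contract."""
        if listing.price > self.budget or self.similarity(listing.abstract) < self.match_threshold:
            return False
        self.budget -= listing.price
        self.purchases.append(listing)
        # A deployed agent would also instruct shipment to self.shipping_address.
        return True

    def scour(self, site_listings: list[Listing]) -> None:
        """Visit a site, read the offered listings, and buy or decline each one."""
        for listing in site_listings:
            self.consider(listing)
```

The point of the sketch is not the scoring heuristic but the division of labour: once the principal has supplied a budget, learning data and a shipping address, the purchase decisions are taken by the agent itself.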

Autonomous behavior like the above is within the reach of current technologies. As a


small sample, consider the following description given by the Trading Agents Research Group at

Brown University:

Trading agents are programmed traders that operate autonomously in the marketplace, sending bids, requesting quotes, accepting offers, and generally negotiating deals according to market rules. (Humans do not intervene while negotiations are in progress.) To play the market effectively, agents must make decisions in real-time in uncertain, dynamic environments. Successful agents rapidly assimilate information from multiple sources, forecast future events, optimize the allocation of their resources, anticipate strategic interactions, and learn from their experiences.3

Or the following description of KiRo, a robot under development at the University of Freiburg:

KiRo is a completely autonomous table soccer playing robot: with the help of a camera it perceives the playing field and, depending on the current game situation, it decides how the rods under its control should be moved.4

We see no reason to expect such technologies will not gain in sophistication in the future.

1.2 Terminology and Legal Scope

In general, we will use the expression “artificial agent” rather than “electronic agent” or

“software agent”. We prefer the term “artificial agent” to “electronic agent”, because an artificial

agent may be instantiated by an optical, chemical, quantum, or indeed biological, rather than an

electronic, computing process. We do not favor the term “software agent”, as it would not cover

embodied agents such as robots or hardware implementations such as neural network chips. We

prefer “artificial agent” to the term “artificial intelligence” or “intelligent agent”, as we wish to

emphasize the embedded, social, real-world nature of the typical artificial agent, rather than

merely its disembodied intelligence.

We use the term “operator” to designate the person who makes the arrangements, or on

behalf of whom the arrangements are made, to operate the agent. The term “user” denotes the

person who interacts with the agent, say, on a shopping website. Usually, the agent can be said to

3 At http://www.cs.brown.edu/people/amygreen/tada.html. Last accessed 2 December 2006.
4 At www.informatik.uni-freiburg.de/~kiro/. Last accessed 8 December 2006.


act on behalf of the operator. However, in some situations, an operator may make an agent

available to users in such a way that the agent acts on behalf of the user rather than on behalf of

the operator, at least for some purposes. We use the term “principal” to denote the person,

whether it is the user or the operator, on behalf of whom the artificial agent acts in relation to a

particular transaction.

We will generally confine ourselves to legal doctrines to be found in the common law

world, i.e., the U.S., England, and those Commonwealth countries, such as Australia, whose legal

system is based on the English legal system. While civil law, based on Roman and Napoleonic

law, will differ in details, most of the concepts discussed will have their analogues in civil law

legal systems. This book does not purport to present an exhaustive description of the relevant law

in any one jurisdiction.

2. Personality for Artificial Agents

In this section, we investigate whether artificial agents can be accorded the status of persons. We

first set out a philosophical concept of person, which derives from John Locke and which

we therefore call “Lockean”. We investigate whether artificial agents can be Lockean persons,

adopting in the process a methodological perspective, suggested by Daniel Dennett, termed “the

intentional stance”. We conclude that there is nothing to prevent such an attribution in principle.

We go on to consider legal personality for artificial agents, being careful to distinguish

dependent from independent legal personality. We conclude that dependent legal personality – of

the kind accorded to corporations, for instance – could readily be attributed to artificial agents.

We also argue that independent legal personality would be a theoretical possibility for

sophisticated artificial agents. We examine a number of objections to the notion of according

personality, whether philosophical or legal, to artificial agents, and conclude that they do not


present insurmountable obstacles.

2.1 The Concept of “Person”

The law has its own particular concept of a “person”: a legal person is the subject of legal rights

and obligations. This is the standard definition, derived from John Chipman Gray’s classic text,

The Nature and Sources of Law (Gray 2006). Typically, a legal person has the capacity to sue

and be sued, to hold property in its own name, to enjoy various immunities and protections in

courts of law such as the right to life and liberty, and possesses the freedom to manage

its own affairs and make contracts.

Prima facie, this legal definition is distinct from John Locke’s famous definition of a

person as:

a thinking intelligent Being, that has reason and reflection, and can consider it self as it self, the same thinking thing in different times and places; which it does only by that consciousness, which is inseparable from thinking, and as it seems to me essential to it (Locke 1996, Book 2, Chapter 27, Section IX).

This concept of a person as a conscious intelligent being continues to inform the philosophical

debate about personality (Parfit 1986; Perry 1975; Shoemaker and Swinburne 1984; Williams

1976), which in turn figures in the philosophical foundations of artificial intelligence: it is not

uncommon to find that many of the suggested attributes of persons in the philosophical literature

are held to be standards artificial intelligence must aspire to in order to be considered successful

in its enterprise (Boden 1990; Copeland 1993; Haugeland 1989, 1997). Incidentally, note that

Locke, much like Descartes before him, makes a conceptual link between “consciousness” and

“thinking”. But neurological disorders like visual agnosia (“blindsight”) show conclusively that

complex perceptual processing can go on unconsciously (Stubenberg 1992); studies on speech

processing indicate that a great deal of unconscious neural activity takes place in the production

of everyday conversations (Jackendoff 1987). We do not need to make the further claim that


thinking can take place in wholly unconscious beings; for our present purposes it is sufficient to

deny any such necessary linkage between “consciousness” and “thinking”.

In what follows, we show there are different types of legal personality – “independent”

and “dependent” – and that the former type is closely linked with the Lockean vision of a person.

We will argue that artificial agents could become dependent legal persons. We will also argue

that some artificial agents could attain Lockean personality, and that these could, on the basis of

their satisfaction of other relevant conditions we propose, qualify for independent legal

personality.

2.2 Lockean Personality for Artificial Agents

Locke’s account, despite considerable philosophical critique and emendation, remains at the core

of most contemporary philosophical accounts of personhood (Martin and Barresi 2002). Locke’s

definition is an attempt to capture the metaphysical notion of a person, but he goes on to note

that “person” “is a forensic term, appropriating actions and their merit; and so belongs only to

intelligent agents, capable of a law” (Locke 1996, Book 2, Chapter 27, Section IX). By

“capable”, presumably what Locke means here is “susceptible”5. Thus Locke’s definition

emphasizes the link with legal obligations and with intellect, in terms of the person being capable

of understanding its legal obligations and any punishment that might be inflicted for breach of a

law; this capacity for understanding its legal role plays a role then, in determining the person’s

behavior. The vital role that memory plays in Locke’s account as a component of consciousness

is implicit in the persistence of this understanding over time; for a person is an enduring entity,

one to whom responsibility and blame can be assigned for temporally distant events.

5 “Susceptible” is one of the synonyms of “capable” given by wiktionary.com (accessed 3 November 2006).


2.2.1 Can Non-humans be Lockean Persons?

Until now, there have been no non-human things – other than some members of the family of

great apes (Wise 2000) – that have displayed sufficient intelligence to raise the question whether

they can be considered persons in the Lockean sense. However, as artificial agents become

increasingly sophisticated and autonomous, the question will arise whether they should be so

considered.

In our view, there is no principled conceptual barrier to non-humans being accorded

Lockean personality, for there is nothing in Locke’s formulation of the concept of a person that

is limited to human intelligences. So long as the agent is a “thinking intelligent Being” possessed

of the right kind of consciousness it could qualify as such. It is possible to devise conditions for

personhood that do not beg the question whether humans are the only persons, and we will

examine and propose various conditions in turn.

Thus, inspired by Steven Wise’s defense of animal rights (Wise 2000), we do not accept

a priori assertions that there exists “a single uniform rule that the category of persons is co-

extensive with the class of human beings” (Weinreb 1998, 110). Instead, we consider “it … quite

inconceivable that the extension of any right should coincide exactly with the boundary of our

species. It is thus quite inconceivable that we have any rights simply because we are human”

(Sumner 1987, 206). Wise’s argument for the legal rights of chimpanzees and bonobos, who

meet many intuitive criteria for personhood, aims to demonstrate the anachronism of their legal

“thinghood” as grounded in the chauvinistic claims of the Great Chain of Being, which

unsurprisingly placed Man at its peak (Wise 2000, 12-19). We sense a similar human chauvinism

in the outright popular6, and sometimes academic (Dreyfus 1992; Searle 1980), rejection of the

6 It might seem that the popularity of films like Bladerunner and Terminator would indicate a popular acceptance of


possibility of artificial intelligence, and thus implicitly, of the possibility of personhood for

artificial agents.

The philosophical cleavage between the concept of “person” and “human” is a long-

standing one (Ayer 1963; Strawson 1959). Accordingly, the following summation – by Daniel

Dennett – of the philosophical discourse on personhood (which includes the views of Anscombe,

Sartre and Frankfurt) does not mention being human as a condition:

The first and most obvious theme is that persons are rational beings. … The second theme is that persons are beings to which states of consciousness are attributed, or to which psychological or mental or intentional predicates are ascribed. … The third theme is that… our treating of him or her is somehow and to some extent constitutive of being a person. … The fourth theme is that the object to which this personal stance is taken must be capable of reciprocating in some way. The fifth theme is that persons must be capable of verbal communication. The sixth theme is that persons are distinguishable from other entities in being conscious in some special way … Sometimes this is identified as self-consciousness. (Dennett 1978, 270)

Dennett points out the concept “person” is “inescapably normative”, and suggests we cannot give

sufficiency conditions for personhood:

[H]uman beings or other entities can only aspire to being approximations of the ideal, and there can be no way to set a ‘passing grade’ that is not arbitrary. … Our assumption that an entity is a person is shaken precisely in those cases where it matters: when wrong has been done and the question of responsibility arises. (Dennett 1978, 285)

Thus for Dennett, personhood is moral personhood in much the way that Locke intended, and

reserved for self-conscious, intelligent beings capable of being responsible and being the subject

of legal rights and obligations. One could respond that this definition requires persons to be

humans as only humans could be rational, capable of possessing intentional attitudes and

reciprocation, and self-conscious. But the question-begging nature of this response is obvious.

Divorcing “human” from “person” raises the central questions in this debate:

If Venusians and robots come to be thought of as persons, at least part of the argument that will establish them will be that they function as we do: that while they are not the same organisms as we are, they are in the appropriate sense the same type of organism or entity (Rorty 1976).

the possibility of artificial intelligence; perhaps so, but at the fantasy level. Rather we mean to indicate by “popular rejection”, the man-on-the-street view that machines could never be capable of particularly human capacities.


But, due to our limited understanding of ourselves, we do not yet know what our “type” or our

way of “functioning” is. Is our “type” that of the moral person, a conscious, self-aware, rational,

verbally communicating entity? Is our “function” determined by a naturalistic world view? What

must the “type” and “function” of a being be, for it to meet the conditions of personhood above?

As we consider the various objections to independent personhood for artificial agents below, it

will become evident each objection is grounded in a particular conception of our “type”, and its

role in our “function”. As we attempt to determine whether, in principle, an artificial agent could

be accorded “Lockean” personality, we argue the most important consideration is whether

an artificial agent can coherently be treated as an intentional system.

2.2.2 The Intentional Stance and Personhood

Daniel Dennett’s theory of the intentional stance is perhaps the most powerful argument related

to the functionalist tradition in the philosophy of mind for the coherence of ascribing beliefs,

desires and other mental predicates or propositional attitudes to non-human entities. To adopt the

intentional stance towards an entity is inter alia to treat it as a rational one for the sake of

predicting its behavior:

Here is how it works: first you decide to treat the object whose behavior is to be predicted as a rational agent; then you figure out what beliefs that agent ought to have, given its place in the world and its purpose. Then you figure out what desires it ought to have, on the same considerations, and finally you predict that this rational agent will act to further its goals in the light of its beliefs. A little practical reasoning from the chosen set of beliefs and desires will in most instances yield a decision about what the agent ought to do; that is what you predict the agent will do. (Dennett 1987, 17)

If these predictions are successful, and can be made regularly about the system in question, then

the system is an intentional one, towards whom the intentional stance can, and should, be taken

as the best and most useful explanatory and predictive strategy. A system is an intentional system

just if the intentional stance can be successfully and consistently adopted toward it. The system’s


behavior is not just evidence that it holds certain beliefs and desires; it is constitutive of it7.

Clearly, on this approach artificial agents can be intentional systems, ones capable of the right

propositional attitudes, if their behavior warrants it, that is, if they are sophisticated enough in terms

of their design, implementation, and capacities.
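Dennett’s recipe is procedural enough to be caricatured in code. The following toy sketch is our own simplification, invented for illustration rather than a description of any real system; it renders the intentional stance as a predictive strategy that uses nothing but ascribed beliefs and desires:

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass(frozen=True)
class Ascription:
    """Beliefs and desires ascribed to a system from the intentional stance."""
    beliefs: frozenset[str]
    desires: frozenset[str]

@dataclass(frozen=True)
class Action:
    name: str
    preconditions: frozenset[str]   # beliefs the system would need in order to act
    furthers: frozenset[str]        # desires the action would further

def predict_action(ascription: Ascription, actions: list[Action]) -> str | None:
    """Toy practical reasoning: among the actions whose preconditions figure in the
    ascribed beliefs, predict the one that furthers the most ascribed desires."""
    candidates = [a for a in actions if a.preconditions <= ascription.beliefs]
    if not candidates:
        return None
    best = max(candidates, key=lambda a: len(a.furthers & ascription.desires))
    return best.name if best.furthers & ascription.desires else None

# Example: treating a shopping bot as a rational agent and predicting what it will do.
bot = Ascription(
    beliefs=frozenset({"site A lists a cheap copy of the wanted title"}),
    desires=frozenset({"obtain a bargain for the principal"}),
)
actions = [
    Action("buy the copy on site A",
           frozenset({"site A lists a cheap copy of the wanted title"}),
           frozenset({"obtain a bargain for the principal"})),
    Action("do nothing", frozenset(), frozenset()),
]
assert predict_action(bot, actions) == "buy the copy on site A"
```

Nothing in the routine mentions what the system is made of; only the pattern of ascription and successful prediction matters.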

Arguments against the possibility of artificial agents possessing beliefs and desires – of

ever being “true believers” – suggest that “something is missing”, possibly a self, the correct

physical architecture, the lack of an appropriate semantic relation with the contents of

propositions, or some other feature. The intentional stance strategy suggests nothing is left out if

the intentional stance can be successfully taken toward the system in question. What matters for

the intentional stance to be successful is that we should be able to adopt it consistently, “reliably

and voluminously” (Dennett 1987, 15) in predicting the behavior of a system, and in preference

to any other predictive strategy. The intentional stance is not to be casually adopted; there must

be sufficient warrant for it, given the capacities of the system in question.

Many objections to the intentional stance (Baker 1989; Jacquette 1988; Ringen and

Bennett 1993) rely on a misunderstanding of the force of the “reliably and voluminously”

condition, for this constraint on the successful adoption of the intentional stance is more stringent

than is immediately clear. For an artificial agent to meet this condition, its technical capacities

would need to have been honed to a point where ascriptions of beliefs and desires could be made

usefully and regularly, and indeed, would force themselves upon us as the only reliable way to

7 See (Dennett 1987) for an extended defense of the intentional stance strategy against the charge of behaviorism. For present purposes it suffices to say that the system in question must show a far more complicated and sophisticated interrelationship of behaviors and capacities than that required by mere reductive behaviorism.
8 John Searle’s Chinese Room argument aims to establish no system can possess intentionality merely by virtue of executing a program. In the original form of the argument a non-Chinese speaking human sits inside a room and is passed bits of paper with Chinese inscriptions. He is also provided a translation guide for matching appropriate bits of paper with English on them, which he then passes to the outside. Clearly, the Room “speaks” Chinese but Searle suggests there is no “understanding” anywhere in the system. This example has inspired a veritable cottage industry of responses and counter-responses. (Preston and Bishop 2002)


predict the behavior of the artificial agent in question. At such a stage, predictive strategies based

on the physical or design stances9 would not present themselves as attractive options in dealing

with the behavior of the artificial agent. While we find ourselves occasionally adopting the

intentional stance toward artificial agents (“The bot wanted to find me a good bargain”), such

ascriptions would have to become commonplace and more useful than knowledge of the internal

arrangement of the agent in question; the intentional stance must present itself as a good strategy

given the capacities of the system.

By suggesting the intentional stance strategy as a viable one for dealing with artificial

agents, we have not set the bar low; rather, we have set it reasonably high, for only a technically

sophisticated and autonomous artificial agent could be so treated. The stance’s ability to override

the intuitions associated with the Chinese Room argument (which are powerful, as the prolonged

philosophical debate attests) would require a very advanced architecture and design. To return

for a moment to our example of the used-book buying agent above, it should be clear its behavior

would have to become considerably more sophisticated for us to use the intentional stance as

more than just a convenient way of talking about it; its original programmer or designer, one

with the best access to its innards, would have to become a lesser authority to consult for

predicting its behavior. The agent and its interactions should become the focus of study instead,

and a consistent pattern of responses would need to emerge over a period of time.

So, while the intentional stance cannot always be adopted toward artificial agents, its

adoption, when possible, with agents of the right capacities, has the singular benefit of separating

the issues of consciousness, physical composition or perhaps some ineffable qualities of mental

9 At the level of the physical stance our concerns only extend to the physical properties of the entity in question. We can predict the speeds of motor vehicles based on their engine capacities or predict that a bottle will shatter when it hits the ground. At the level of the design stance, purpose, function and design play a significant role in our predictions. For instance, we can predict that birds will fly because wing design facilitates flying.


states, from the ascription of mental predicates. Furthermore, if and when an artificial agent

becomes the successful target of the adoption of an intentional stance, it will, a fortiori, become

a candidate for personhood because persons, as we will see, are intentional systems. However,

only a suitable kind of intentional system can be a person, as it must display additional

capacities:

[B]eing rational is being intentional is being the object of a certain stance. These three together are a necessary but not sufficient condition for exhibiting the form of reciprocity that is in turn a necessary but not sufficient condition for having the capacity for verbal communication which is the necessary condition for having a special sort of consciousness, which is … a necessary condition of moral personhood. (Dennett 1978, 271)

So, if artificial agents could coherently be the subjects of the intentional stance, then they are

capable of being intentional systems, and if persons are a subclass of intentional systems, we

need to sharpen our analysis to show which artificial agents could be persons in the Lockean

(i.e., metaphysical and moral) sense.

2.2.3 Conditions for Lockean Personality

We now address the question of how artificial agents are capable of meeting each of the

conditions listed in Dennett’s summation10 in turn. Most fundamentally, we will argue that this

will be because they can, in principle, be treated as intentional systems:

Whatever else a person might be – embodied mind or soul, self-conscious moral agent, “emergent” form of intelligence – he or she is an intentional system, and whatever follows just from being an intentional system is thus true of a person. It is interesting to see just how much of what we hold to be the case about persons or their minds follows directly from their being intentional systems. (Dennett 1978, 16)

As a preliminary we note that by adopting the intentional stance towards a system, we would

automatically take the system to have satisfied the second and third themes in the conditions of

personhood, in that our treating the system in this fashion is constitutive of not merely its

possessing mental or psychological predicates but also its being a person. Furthermore, we will

10 Quoted above in section 2.2.1.


also run the themes of consciousness and self-consciousness together for now, separating the

issue of phenomenal experience as a component of consciousness for a later discussion.

(a) Rationality

The level of rationality of particular artificial agents is a matter of fact, dependent on their design

and their functionality. The most commonly accepted definitions of rationality tend to converge

on the notion of optimal (or optimal given resource constraints) goal-directed behavior (Nozick

1993). Definitions of rationality in formal models of human reasoning stress the achievement of

some context-specific maxima, such as the constraint in formal models of belief change that a

rational agent minimizes the loss of older beliefs when confronted with new, contradictory

information (Gärdenfors 1990). In rational choice theory in the social sciences, the agent acts to

maximize utility given the resources at its disposal (Elster 1986). Either way, ascriptions of

rationality such as these make no reference to the constitution of the entities involved, whether

individuals or organizations. The rationality of the entity is revealed by a consistent pattern of

responses in particular contexts.

Hence, we do not believe it is possible to maintain that a priori an artificial agent is

incapable of rationality. If an artificial agent treats its beliefs and desires in a manner that brings

it closer to achieving its chosen goals and outcomes, while not compromising its functional

effectiveness, then the system is coherently described as rational. Ascriptions of rationality will

need to be made on a case-by-case basis depending on the operational context and cannot be

ruled out for artificial agents tout court. Even a program with such limited autonomy as a chess-

playing program like Deep Blue is coherently described as rational; it possesses a set of goals –

checkmating its opponent, preserving its King – and takes the appropriate actions within its

operational and environmental constraints (time limits for the chess game, computational power)


to achieve them.
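As a minimal sketch of the rational-choice reading discussed above (the option names, utility figures and budget are invented purely for illustration), rationality under resource constraints can be rendered as a selection procedure that says nothing about the chooser’s constitution:

```python
from __future__ import annotations

def rational_choice(options: dict[str, tuple[float, float]], resources: float) -> str | None:
    """Pick the affordable option with the highest utility.

    options maps an option name to (utility, cost); resources is the budget.
    This mirrors the rational-choice reading: maximize utility given the
    resources at the agent's disposal."""
    affordable = {name: utility for name, (utility, cost) in options.items() if cost <= resources}
    if not affordable:
        return None
    return max(affordable, key=affordable.get)

# Whether the chooser is a human, a corporation or a shopping bot is irrelevant to the
# ascription; a consistent pattern of such choices is what reveals rationality.
assert rational_choice(
    {"buy rare first edition": (9.0, 120.0),
     "buy cheap reprint": (6.0, 15.0),
     "buy nothing": (0.0, 0.0)},
    resources=50.0,
) == "buy cheap reprint"
```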

Linkages of rationality with humanity, consciousness, or even the inquiry “who is being

rational here?”, would be question-begging at this point. Instead, the rationality of artificial

agents becomes a topic for empirical evaluation, perhaps by checking whether they possess a

design architecture that allows for the higher-level consideration of options presented to them for

execution at any given time. Such executive control could play the role of the “rational

constrainer” for an artificial agent. Any ascription of rationality that follows will do so on the

basis of observing the functioning of the agent and its eventual success or failure to meet its

operational objectives.

(b) Reciprocation of the Intentional Stance

The next condition of personhood proposed by Dennett is that the agent in question be capable of

reciprocating attitudes adopted towards it. This condition is crucial in noting that, most

fundamentally, ascriptions of personality to entities are recognitions of roles which those entities

can play with regards to us. Thus a person is an entity that is capable of interacting with us in

some coherent fashion, of adopting intentional attitudes toward us. To satisfy this condition,

artificial agents should be capable of being coherently understood as adopting the intentional

stance towards other entities, whether humans or artificial agents. Such an artificial agent would

be a second-order intentional system, one capable of being described with ascriptions such as “S

believes that T wants that p” (where S is the artificial agent; T is the subject of the intentional

attitude; p a proposition e.g., “my shopping bot believes that I want a cheap book for Christmas”)

(Dennett 1978, 273). As the role played by the subject of the first intentional attitude could be

the artificial agent, we could also make reflexive ascriptions such as “S believes that S desires

that p”. This sort of second-order ascription, including the reflexive case, is not trivial for beings


other than normal adult humans. Extensive empirical studies on children and animals such as

chimpanzees were required before such a capacity11 could be demonstrated in their case (and

even then, alternative explanations cannot be ruled out conclusively) (Gallese and Goldman

1998; Gomez 1998).

Nevertheless, for artificial agents, such second-order cognition is an architectural feature,

not a priori impossible, and amenable to technical solutions. A sufficiently sophisticated

artificial agent utilizing a belief-desire-intention (BDI) architecture (Bratman 1987; Wooldridge

2000) could meet this requirement (in the reflexive case) wherein the agent would need to have

its beliefs and desires at one level be the object of the beliefs and desires in another. Numerous

agent systems utilizing BDI architectures have been implemented (Wooldridge 2000); we expect

the sophistication of these systems to increase13, 14. Furthermore, formal models for second-order

reasoning already exist in epistemic logics of knowledge (Fagin et al. 1995); their

implementation in agent architectures is largely dependent on solving problems of

representational complexity and computational feasibility. We wish to stress that the possession

of this capacity raises a technical rather than a conceptual challenge; we do not intend to

trivialize the important problems that need to be solved in these systems but merely to stress

there is no “impossibility proof” of higher-order reasoning in artificial agent architectures.
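To make the architectural point concrete, the toy sketch below represents nested attitudes of the form “S believes that T wants that p”, including the reflexive case. It is our own illustration, not the interface of any actual BDI toolkit:

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass(frozen=True)
class Attitude:
    """A propositional attitude: a holder, a mode (believes/desires/intends) and a
    content, where the content may itself be an Attitude. Nesting contents gives
    second- and higher-order attitudes of the kind discussed above."""
    holder: str
    mode: str
    content: str | Attitude

def order(item: str | Attitude) -> int:
    """Depth of nesting: 1 for a first-order attitude, 2 for second-order, and so on."""
    return 0 if isinstance(item, str) else 1 + order(item.content)

# "My shopping bot believes that I want a cheap book for Christmas."
second_order = Attitude("shopping_bot", "believes",
                        Attitude("principal", "desires", "a cheap book for Christmas"))

# The reflexive case: "S believes that S desires that p."
reflexive = Attitude("shopping_bot", "believes",
                     Attitude("shopping_bot", "desires", "a cheap book for Christmas"))

assert order(second_order) == 2 and order(reflexive) == 2
```

A BDI implementation would attach inference and planning machinery to such structures; the representation itself, which is all the reciprocity condition requires, presents no conceptual difficulty.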

Importantly for our purposes, the possession of second-order beliefs is not dependent on a

linguistic or verbal capacity; the system in question must show that it can be coherently

11 This includes cases of deception where an animal tries to get another animal or even a human to believe a proposition contrary to the one it believes.
13 See http://www.csc.liv.ac.uk/~mjw/pubs/rara/resources.html for a list of BDI agents available for use and experimentation. Accessed December 2006.
14 See http://www.ai.mit.edu/projects/humanoid-robotics-group/Abstracts2000/scaz.pdf for a research project attempting to equip robots with a theory of “other minds”. Accessed December 2006.


described as an intentional system whose intentional attitudes (whether beliefs, desires or fears)

have as their subject the intentional attitudes of another entity. Reciprocity is also independent of

the requirement of consciousness, for the agent in question need not represent these beliefs to

itself in any particular fashion for our explanations or predictions to work (Dennett 1978, 277). If

an artificial agent undertakes an action intended to make another agent believe a particular

proposition, there need be no more representation of this action than its actual performance.

(c) Verbal Communication

Artificial agents are capable of verbal utterances15; they can both produce and, to a limited extent

at present, process verbal communication (Monzani and Thalmann 2001). To describe their

verbal utterances as communication presumes we can ascribe meanings to them, that the agent

meant something in producing such utterances, for to mean something when we say something is

an essential feature of all verbal communication (Dennett 1978, 278). The most famous analysis

of such meaning is Grice’s theory of “non-natural meaning”, which interestingly enough depends

on the ascription of third-order thoughts16 to the speaker:

“U meant something by uttering X” is true, if for some audience A, U uttered something x intending: (1) A to produce a particular response r; (2) A to recognize that U intends (1); (3) A to fulfil (1) on the basis of his fulfilment of (2). (Grice 1957)

Verbal communication, then, is simply engaging in the right kind of verbal behavior that is

amenable to the intentional ascriptions in the Gricean analysis, for the success conditions for

these ascriptions are those of the intentional stance. Note too, that we flirt with such ascriptions

even in non-verbal communication (“amazon.com wants to let me know by their email that they

have bargains waiting for me”). This Gricean analysis does not require the audience or the

15 In his argument for legal personhood for animals, Wise dismisses verbal communication as a standard for personhood, suggesting instead a broader notion of linguistic capacity needs to be admitted into such considerations.
16 i.e., thoughts about second-order thoughts.


speaker to be conscious of these intentions; these utterances and recognitions can proceed non-

consciously. If an artificial agent’s pronouncements can be coherently analyzed in this fashion,

we can ascribe to it the requisite intentions and consequently, the meanings. This is of particular

relevance to the classic test for the possibility of artificial intelligence, the Turing test, which can

be understood as determining whether an artificial agent could induce the right kind of

intentional state in its tester – via communication – i.e., to think it was a human (Turing 1950).

Thus, even in this original formulation of a test for artificial intelligence, the ability of the agent

to convey meanings was crucial.

Interestingly enough, Grice’s examples of non-natural meanings “fall into a class whose

other members are cases of deception or manipulation … communication appears to be a sort of

collaborative manipulation of audience by utterer” (Dennett 1978, 279). Artificial agents are

eminently capable of this. The extent to which this is becoming so was brought home to one of

the authors recently in the following “chat” session with an operator of a website. The author’s

account had been “frozen” due to an identity theft and the author was seeking to convince the

website operator that it was indeed him and not the identity thief who was seeking to have the

account restored:

Edward: And can I also get your first and last name?

[author]: Laurence White

Edward: Perfect. One moment while I find a suitable question that only the account holder would know.

[author]: Can I ask are you a human or a “bot”?

Edward: I am most definitely a human.

[author]: OK no offence intended!

Edward: No worries, I get it all the time.

Edward: I do have “botted” responses though! =)

The above conversation is a mini-Turing test session (perhaps brought to a quick end by a direct

question). Less anecdotally, it should be clear an artificial agent capable of the right kind of


responses and technical extensions to facilitate verbal communication would be one whose

utterances can be plausibly analyzed using Grice’s schema. Customers calling into automated

phone response systems are often convinced they are speaking to humans, not voice-synthesizing

programs.

The question of determination of intention for the purposes of the Gricean analysis

proceeds by means of the intentional stance: we should attribute such intentions when doing so

maximizes our successful predictions. The intention of the agent will be ascribed as part of a

larger effort to place it into context with the agent’s other actions; for instance, an agent that

speaks to a user asking it for its social security number can, in the right circumstances, be

analyzed as intending to get the user to produce it.

(d) Self-consciousness

The possession of consciousness or a particular sort of self-awareness has often been felt to be

intimately related to possessing moral standing. Such concerns motivated the view that “a special

sort of consciousness is a precondition of being a moral agent” (Dennett 1978, 270)17. Thus, if an

agent is to be held responsible for an action, it must be aware of the action as being committed

by itself (where the ascription of responsibility is a precondition for personhood). This kind of

self-consciousness is related to Frankfurt’s suggestion that personhood requires second-order

volitions: “to have the capacity for reflective self-evaluation that is manifested in the formation

of second-order desires [i.e., wanting to have a certain desire]” (Frankfurt 1971). For Frankfurt,

if an entity only has first-order desires but no second-order volitions, then it is merely a

“wanton”, and nonhuman animals, small children and severely mentally disabled people fall into

17 Here Dennett is conflating “moral agent” with “person”.


this category.

For Lockean personality, this reflective self-evaluation is the self-consciousness or self-

awareness towards its actions that an agent must possess. The agent must be able to take the

intentional stance towards itself, to query its own reasons for an action, and experience its history

as its own. Prima facie, such ability is a question of functional organization and architecture, and

not of biological composition. Indeed, if we are inclined to deny full self-consciousness to

infants and animals, it is obvious that being a carbon-based life form is not sufficient for self-

consciousness.

While humans most commonly show such self-consciousness, especially in their self-

reporting of motives for actions, the question of whether such self-consciousness can be

exhibited by other systems is a question of empirical investigation, and requires the isolation of a

set of criteria amenable to third-person verification that the system in question is indeed self-

conscious. By suggesting such third-person verification, we mean to delink the question of self-

consciousness from that of experiencing phenomenal qualities. In the case of animals such

verification of self-consciousness has proceeded via mirror-recognition tests or reports (Gallup

1970; Heschl and Burkart 2006). This is an intermediate category of self-consciousness, short of

fully reflective thought: perhaps an artificial agent purchasing a book

might not be able to reflect on the fact it was buying a book, but might still be capable of being

aware that it was buying a book.

Allen and Widdison express skepticism about such an ability serving as the basis for

ascriptions of personhood:

To [(Solum, 1992)], a system which achieves self-consciousness is morally entitled to be treated as a legal person, and the fact that self-consciousness does not emerge from biological processes should not disqualify it [the system] from legal personality. The validity of this argument is clearly debatable: it is not at all certain that computers can achieve self-consciousness; nor is it obvious that self-consciousness is a valid test for moral entitlement to legal personality (Allen and Widdison 1996, 35).


Two objections are being made here. The first is easily dismissed, even if it is not certain that

computers can achieve self-consciousness. It does not matter that current artificial agent

architectures are not capable of adequately facilitating self-consciousness; no argument in

principle rules out the kind of reflective architectures we mention as possibilities above, other

than question-begging ones like “only biological systems (or humans) can be self-conscious”.

Secondly, presumably what Allen and Widdison mean by dismissing self-consciousness as a

“valid” test for legal personality is that they do not consider it a necessary or sufficient condition

for legal personality. We agree, to the extent that self-consciousness has not been considered a

necessary or sufficient condition in decisions to grant dependent legal personhood. Infants,

plausibly described as lacking full self-consciousness, or humans temporarily lacking

consciousness such as persons in comas or asleep, are not denied dependent legal personality,

though those in a permanent vegetative state can suffer drastic curtailments of their human rights

by virtue of their loss of interactive capacities. Furthermore, the many categories of non-human

legal persons such as corporations lack self-consciousness. Conversely, self-consciousness has

not, historically, been considered a sufficient condition for independent legal personhood: many

categories of fully conscious humans, such as married women and slaves, have been denied just

that. But we would still consider the kind of self-consciousness considered above (the possession

of second-order thoughts and volitions) a necessary condition for independent legal personality.

There is an important intuition at its core: that the agent is able to consult, and evaluate, itself in

its decision-making, and take autonomous corrective action when it so desires.

Of particular interest in a legal setting is the issue of whether it would be possible to trust

artificial agents, much as we do humans, as reliable reporters about their mental states and self-

consciousness; accurate reporting on mental states is often a crucial determinant in our third-


person ascriptions of self-consciousness. Rather than examining a human’s neurological

structure to determine their reasons for an action, we normally just ask, and we are assured that

in most cases the reports we receive will be reliable indicators of self-conscious awareness of

reasons for actions. It is entirely plausible some artificial agents might be so complex, and their

reasoning so adaptive, that they effectively would be authorities in reporting on their own mental

states. This self-reporting capacity would entail, for an artificial agent, the ability to represent its

own inner workings at a high level of abstraction. This ability is a building-block of the “moral

sense”, where the system checks its own goals, thoughts and feelings, and compares them with a

category of “approved” or valorized ones. A system sophisticated enough to be considered an

authority in reporting on its mental states is on its way to possessing the design features needed

for a moral sense. Any legal interaction with such an agent would perforce rely primarily on the

agent itself for eliciting reasons for its actions, an important determinant of autonomy and the

ascription of self-consciousness to the agent. If we would be prepared to believe such an agent’s

reports on its internal states, we would be at a stage where an ascription of self-consciousness to

the agent would be plausible.

Daniel Dennett suggests such a possibility with the robot Cog, “intended to be self-

designing on a massive scale” (Dennett 2000, 99), whose increasing sophistication and

complexity could entail “the loss of epistemological hegemony on the part of its ‘third-person’

designers”; as its code base grows, and its initial programmer staff is replaced by others who

must decipher older code comments and figure out system interfaces, the latter will soon decide

that the best way to figure out the system’s decision-making is to query it. A system like Cog,

which utilizes connectionist architectures and genetic algorithms, possesses “competencies . . .


whose means are only indirectly shaped by human hands. . . . Programmers working in these

methodologies are more like plant and animal breeders than machine makers.” (Dennett 2000,

99) These considerations lead Dennett to conclude, in a remark relevant to our discussion as a

whole, that:

If Cog develops to the point where it can conduct what appear to be robust and well-controlled conversations in something like a natural language, it will certainly be in a position to rival its own monitors (and the theorists who interpret them) as a source of knowledge about what it is doing and feeling and why. And if and when it reaches this stage of development, outside observers will have the best of reasons for welcoming it into the class of subjects or first-persons, for it will be an emitter of speech-acts that can be interpreted as reliable reports on various "external" topics, and constitutively reliable reports on a particular range of topics closer to home: its own internal states. (Dennett 2000, 100)

2.2.4 Conclusion

To sum up, then: on the views presented above, an intentional system with the right sorts of capacities – rationality, the capacity for verbal communication, second-order intentionality, and self-consciousness (understood, as well, in terms of reliable reporting on itself) – would have a very

strong case for Lockean personhood. These capacities would be functions of its architecture,

design and implementation. There is no reason in principle why artificial agents could not attain

such a status, given their current capacities and the arc of their continued development along the

directions we have indicated above.

2.3 Legal Personality for Artificial Agents

2.3.1 Two Kinds of Legal Personality

We wish to introduce a distinction between two different types of legal personality: independent

and dependent. A dependent legal person can only act at the behest of another legal person in

exercising some or all of its legal rights. An independent legal person is not subject to any such

restriction. Such an independent legal person is often said to be sui juris, i.e., of the age of


majority and of sound mind.

Examples of dependent legal persons include children; those who are not of sound mind;

abstract legal entities such as corporations; and inanimate objects, such as ships, which the law

accords legal personality. Children have a limited capacity to enter legal contracts, and they must sue or be sued via a parent or guardian ad litem who decides on the best interests of the child with respect to the litigation. Persons who are not of sound mind may enter contracts through an agent appointed either by them beforehand19 or by a competent court, but otherwise have no, or very limited, capacity to enter contracts unaided. They may only sue or be sued through a guardian or similar appointee. A

corporation is dependent on the actions of other legal persons, as members of its governing

organs or as agents, in order for it to engage in any legal acts, and so for the purposes of our

distinction is likewise a dependent legal person. Similarly, inanimate objects which are accorded

legal personality, such as ships or temples, are always dependent on the actions of other legal

persons, whether owners, trustees, masters or the like, to represent them and give them legal life.

Not all legal persons, dependent or independent, have the same rights and obligations.

Typically, to fully enjoy and be subject to legal rights and obligations, one must be a free adult

human, of sound mind. Some rights depend on age, such as the right to marry, to drive or vote, to

purchase alcohol or tobacco, to contract for other than necessities, or to sue without using a

guardian. Many rights, such as the right to marry or to vote, and liabilities, such as the liability to

be imprisoned, are always or typically restricted to humans. For example, in some legal

systems,20 corporations cannot sue for defamation. Similarly, in many legal systems

19 Under an enduring power of attorney that does not lapse when the person becomes of unsound mind.
20 For example, in Australia only non-profit corporations and corporations with less than 10 employees that are not public body corporates can sue for defamation: see the Defamation Act 2005 of each State.


corporations, as well as being able generally only to act through their agents, have the power to

transact business only in fulfillment of the objects specified in their charter or other

constitutional documents, and any other action is void or voidable for being ultra vires.

2.3.2 Is Being Human Necessary or Sufficient for Legal Personality?

The law has not, historically, considered membership in the human species a necessary or

sufficient condition of being recognized as a legal person. In Roman law, the pater familias or

free head of the family was the subject of legal rights and obligations on behalf of his household;

his wife and children were only indirectly the subject of legal rights and his slaves were not legal

persons at all (Nekam 1938, n12, 22). In English law before the mid-nineteenth century,

similarly, the married woman was not, for most purposes, accorded separate legal personality

from that of her husband.21 In US law during the times of slavery, slaves were considered non-

persons; indeed, the theory of non-personality was worked out with a severity not seen even

many centuries earlier in the Visigothic code, so that slaves could not marry, for example

(Goodell 1853, Chapter VII). Currently, human fetuses are not considered legal persons,22 though

this continues to be the subject of intense debate (Warren 1996). It is not an exaggeration to say

that in bioethics, debates on the morality of abortion and stem cell research are primarily

21 See, for example, Bradwell v. State of Illinois, 83 U.S. 130 (1872) (U.S. Supreme Court): "The harmony, not to say identity, of interest and views which belong, or should belong, to the family institution is repugnant to the idea of a woman adopting a distinct and independent career from that of her husband. So firmly fixed was this sentiment in the founders of the common law that it became a maxim of that system of jurisprudence that a woman had no legal existence separate from her husband, who was regarded as her head and representative in the social state; and, notwithstanding some recent modifications of this civil status, many of the special rules of law flowing from and dependent upon this cardinal principle still exist in full force in most States. One of these is, that a married woman is incapable, without her husband's consent, of making contracts which shall be binding on her or him." (per Bradley J at 140).
22 See Roe v Wade, 410 U.S. 113 (1973) (U.S. Supreme Court): "[If the] suggestion of personhood [of the preborn] is established, the [abortion rights] case, of course, collapses, for the fetus' right to life is then guaranteed specifically by the [14th] Amendment," per Blackmun J delivering the opinion of the Court, at pp. 156–57. The Court went on to hold that in the 14th Amendment, the word "person" does not include a fetus (at p. 158).


concerned with whether persons are involved as the subjects of these procedures and

experiments (Edwards 1997; Humber and Almeder 2003).

Conversely, many classes of entity that are not human beings are accorded legal

personality by the law. The most obvious of these is the modern business corporation, but many

other abstract legal entities, such as incorporated associations, as well as government and quasi-

government agencies, are also constituted with legal personality. English admiralty law once

treated a ship as a legal person capable of being sued in its own right.23 Other legal systems have

variously recognized temples, dead persons, spirits and even idols as legal persons.24 While the

term “legal fictions” is often used to describe some of the more exotic members of the family of

legal persons, it is clear that the class of legal persons is not in principle confined to any

particular class of physical entity.25

2.3.3 Dependent Legal Personality for Artificial Agents

With regards to dependent legal personality, we consider that, if legal systems can accord legal

personality of this kind to children, adults who are not of sound mind, corporations, ships,

temples, and even idols, there is nothing to prevent the legal system from according this kind of

legal personality to artificial agents. Social and economic expedience clearly played a large role

23 The action is now seen as in substance one against the ship's owners, and it is no longer good law that the ship is accorded legal personality for these purposes: see Republic of India and Others v. India Steamship Company Limited [1997] UKHL 40 (House of Lords) per Lord Steyn.
24 See Allen and Widdison 1998, n. 59.
25 It should be noted, too, that being a non-person does not automatically entail the complete non-possession of legal rights. In some legal codes, non-persons were granted limited legal rights in their own right. As noted, while married women were only accorded separate legal personality for many legal purposes by statute in England in 1882, nevertheless, for ecclesiastical law purposes, which derived from the civil law, a married woman had already had full right to sue and be sued in her own name (Blackstone's Commentaries on the Laws of England, Book I, Ch. 15, p. 492). Prior to 1882 married women could thus be equally seen to have been accorded legal personality by some branches of the law only, or by the law generally, but only for certain purposes. Similarly, in the Visigothic code, slaves, who under Roman law (from which the Visigothic code derived) were not legal persons, were nevertheless entitled to bring complaints against freemen in certain circumstances, apparently on their own account and not just on account of their masters (Book II, Title II, Ch. X).


in decisions to grant dependent legal personality to these classes of entity. What would matter in

such a decision regarding artificial agents would be the question of whether there was a felt need

for this kind of legal personality to be accorded.

As a possible motivator for such a decision, consider the question whether the law should

grant artificial agents a limited form of legal personhood as limited-purpose trustees for “simple

trusts26 designed to minimize the need for discretion and judgment”. This approach, by

dispensing with the need for a human trustee for every trust, would “save administration costs

and reduce the risk of theft or mismanagement” (Solum 1992, 1253).

However, caution is warranted, for, as Solum points out,

[E]ven for such limited-discretion trusts, there must be some procedure to provide for a decision in the case of unanticipated trouble. The law should not allow [artificial agents] to serve as trustees if they must leave the trust in a lurch whenever an unanticipated lawsuit is filed. (Solum 1992, 1253)

This problem would be solved if all artificial agents that were accorded legal personality by the

legal system required a human representative or “director” to be registered with them, in order to

deal with those circumstances where the agent’s capacities were too limited to enable it to act

competently. In some jurisdictions29 it is possible to register a small company with a single

human director; there is nothing to prevent the only significant asset of such a corporation being

an artificial agent. This legal form is an approximation of dependent legal personality for

artificial agents and could prove useful in addressing the kind of situation described by Solum.

26 In common law countries, a trust is an arrangement whereby the trustee manages property or assets on behalf of and for the benefit of the beneficiaries.
29 E.g., in Australia: see the Commonwealth Corporations Act 2001.


2.3.4 Independent Legal Personality for Artificial Agents

(a) The Link with Lockean Personality

Our concept of an independent legal person and John Locke’s philosophical notion of a “person”

are closely related, for independent legal persons belong to the class of Lockean persons. They

are capable of the kinds of understanding Locke’s account requires and they have enough self-

consciousness to experience their rights and obligations as relating to themselves (whether in the

past or future). But the two concepts are not co-extensive, for some Lockean persons, in

particular, older children, do not qualify as independent legal persons. (The law, however,

acknowledges that children gradually develop their mental faculties, and in recognition of this

fact gradually extends the field of decisions in the medical sphere which they can take without

the consent of their guardians.32 Thus, to refine our scheme further, we would admit the class of

dependent legal persons contains a spectrum of capabilities, from the total mental incapacity of

those persons in a permanent vegetative state, to the near-independence of a 17-year old of sound

mind.33)

We therefore need to consider the conditions for independent legal personality separately

to determine whether artificial agents that qualify as Lockean persons will also qualify for the

sub-category of independent legal persons.

(b) Conditions for Independent Legal Personality

We suggest the conditions for being an independent legal person include the following (it should

32 See, e.g., Gillick v West Norfolk and Wisbech Area Health Authority [1985] 3 All ER 402 (House of Lords); Marion's Case 175 CLR 189 (High Court of Australia).
33 In his argument for animal rights, Wise (2000) suggests a spectrum of autonomies be taken into consideration, ranging from Kant's nearly-impossible-to-achieve definition to the potential autonomy possessed by a comatose patient.


be evident that these are related to the conditions considered in the broader philosophical analysis

above):

• intellectual capacity such that the person is of sound mind or sui juris. The determination

of sufficient intellectual capacity would be made within the context of the agent’s role or

function. Without such capacity, the person would always depend on an agent or

guardian, which contradicts the definition of independent legal person we have stipulated;

• the ability to understand, and obey reliably, the legal obligations it is under. Without this

level of understanding, and reliable obedience, the legal system would need to constantly

supervise and correct the agent’s behavior, much as a parent does a child;

• susceptibility to punishment in order to enforce legal obligations. Without such

susceptibility, the agent could not be deterred from violating its legal

obligations;

• the ability to manifest the intention to form contracts. Without forming contracts, the

entity would be an inert subject, a “legal neutrino”, unable to perform the most basic of

legal acts or economic functions;

• the ability to control money and own property, so as to make use of its legal rights in the

economic sphere, as well as to be able to pay fines and compensation.

We examine each of these concepts in turn, with a view to determining whether artificial agents

are capable of meeting them.

(i) Being Sui Juris

To be sui juris is to possess rationality of a special kind. It is the rationality adults of sound mind

have, and children and those of unsound mind do not have. It is, to adopt common parlance, the

possession of mature common sense. The according of this status to normal human beings at the


age of majority34 also conventionally marks the end of the process of maturation of the child.

Being sui juris can be understood as having a level of intelligence and understanding that is not

markedly different from that of adult humans of various ages. It marks a turning-point where the

education process of the child, both intellectual and emotional, starts to flatten out, leaving aside

the effect of academic and professional education that most adults undertake.

The question whether an artificial agent could ever be sui juris is related to the question

of whether artificial intelligence will ever rise to the level of mature common sense. Of the conditions for independent legal personality, this may well be the most difficult to achieve, since problems requiring the application of day-to-day and implicit knowledge, as opposed to domain-specific knowledge, have historically been the hardest for the artificial intelligence community to solve (Dreyfus 1992). Still, we consider this a question for empirical determination rather than a priori argument; most arguments against artificial

intelligence ever rising to the level of common sense (Boden 1990; Brooks 1997; Copeland

1993; Dreyfus 1992; Haugeland 1997) critique particular methodologies – such as the logicist

and knowledge representation paradigms – and not its conceptual possibility. These

methodologies are not exhaustive of work in artificial intelligence.

The law sets a very high standard for artificial agents to meet in order to be considered

for independent legal personality: they must understand the nature of the acts they perform. It

should be clear that the law must set certain empirical benchmarks for such understanding:

whether the system in question displays its understanding of a sales contract, for instance, by

taking the appropriate action in response; whether, on noticing that the contract's terms have been violated, it asks the purchaser to fulfill them prior to completion; and so on. A system capable of being treated as

34 Formerly, the age of majority at common law was 21; in most jurisdictions this is now, for most purposes, 18.


an intentional system might attain such benchmarks on its way to becoming so competent. Our

discussion of the intentional stance should suggest that the question of whether there is “any real

understanding going on” will be a poorly posed one. There should be no a priori rejection, with

these benchmarks to guide our deliberations, of the display of such intentionality by an artificial

agent.

The value of the intentional stance within the legal context – in ascriptions of understanding – has already been noted by Solum in his response to John Searle's (in)famous

Chinese Room argument (Searle 1980):35

[M]y suspicion is that judges and juries would be rather impatient with the metaphysical argument that AIs cannot really have intentionality. I doubt that they would be moved by wild hypothetical examples like Searle's Chinese Room . . . . If the practical thing to do with an [artificial agent] one encountered in ordinary life was to treat it as an intentional system, then the contrary intuition generated by Searle's Chinese Room would not cut much legal ice (Solum 1992, 1268).

Solum suggests, broadly, that: the argument cannot be made, a priori, that artificial agents will

lack true understanding; the legal system will tend not to place value on arguments that, despite

all appearances, artificial agents lack the “true” consciousness or “intentionality” required for

“understanding”; and the legal system will adopt the intentional stance as a “practical” strategy

to determine understanding by a legal entity. The essence of such a response is that when

artificial agents come into their own in terms of capabilities, distinctions resonating with

ordinary people’s experience of artificial agents will matter more in the legal system than

philosophical thought experiments. Solum is, most fundamentally, asking us to consider a court-

room situation in which lawyers opposed to personhood for an artificial agent base their

35 John Searle’s Chinese Room argument aims to establish that no system can possess intentionality merely by virtueof executing a program. In the original form of the argument a non-Chinese speaking human sits inside a room andis passed bits of paper with Chinese inscriptions. He is also provided a translation guide for matching appropriatebits of paper with English on them, which he then passes to the outside. Clearly, the Room “speaks” Chinese butSearle suggests there is no “understanding” anywhere in the system. This example has inspired a veritable cottageindustry of responses and counter-responses (Preston and Bishop 2002).


arguments on a graphic description of the Chinese Room; in this situation, the arguments would

not stand up to an extensive counter-demonstration of the agent’s capacities in a variety of

contexts.

(ii) Understanding and Obedience of Legal Obligations

Clearly, an entity that is sui juris will have the kind of sophisticated intelligence required to

understand legal obligations. Whether it will reliably obey them is another question. What seems

to be required here is a strong motivation to obey the law, possibly one built into the fundamental

level of the agent’s basic drives, or expressed in terms of other drives (such as the desire to

maximize wealth) which could result in appropriate behavior, given appropriate deterrence

against disobeying the law. Note too, that in our discussion of rationality above, we have

assumed that agents could act so as to optimize their goal-seeking behavior. Such a rational agent

would presumably not indulge in the kind of self-destructive behavior of an agent who ignores

the punitive force of legal sanctions. Thus on a construal of understanding and obedience of legal

obligations as rational behavior, this capacity appears amenable to technical solutions. While

there might be deviations from norms of behavior, this in and of itself does not imply the agent

did not understand its legal obligations; it might have understood them and still ignored them.

What is crucial is the extent to which it is able to understand and obey legal obligations as a

norm.

Work in deontic logics, or logics of obligation (Hilpinen 2001; Pacuit, Parikh, and Cogan 2006; von Wright 1951), suggests that agent architectures can be programmed to use, as part of their control mechanisms, a set of prescribed obligations, with modalities of obligation made available to the agent (some obligations must necessarily be followed; others need only possibly

36 See (von Wright 1951) for an early treatment and (Hilpinen 2001) for an introductory description.


be followed). These obligations can be made more sophisticated by making them knowledge-

dependent, i.e., an agent is obligated to act contingent on its knowing particular propositions

(Pacuit, Parikh, and Cogan 2006). If these propositions are a body of legal obligations, we may

speak coherently of the agent taking actions required by its knowledge of its legal obligations.
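A toy sketch (our own illustration, not drawn from the cited works) of such knowledge-dependent obligations might look as follows: each obligation names the propositions the agent must know before it is triggered, and is marked as one that must be followed or one that may be followed.

# A toy, hypothetical illustration of knowledge-dependent obligations: an
# obligation is triggered only once the agent knows the propositions it names,
# and is marked as mandatory (must be followed) or optional (may be followed).

from dataclasses import dataclass


@dataclass(frozen=True)
class Obligation:
    action: str
    requires_knowledge: frozenset[str]  # propositions the agent must know
    mandatory: bool                     # True: must be followed; False: may be


def actions_required(knowledge: set[str], obligations: list[Obligation]) -> list[str]:
    """Return the actions the agent is obliged to take, given what it knows."""
    return [
        ob.action
        for ob in obligations
        if ob.mandatory and ob.requires_knowledge <= knowledge
    ]


if __name__ == "__main__":
    code_of_obligations = [
        Obligation("refund buyer", frozenset({"goods defective", "buyer complained"}), True),
        Obligation("offer goodwill discount", frozenset({"buyer complained"}), False),
    ]
    agent_knowledge = {"buyer complained", "goods defective"}
    # Only the mandatory obligation whose knowledge conditions are met is returned.
    print(actions_required(agent_knowledge, code_of_obligations))  # ['refund buyer']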

These considerations in turn raise the next question of susceptibility to punishment.

(iii) Susceptibility to Punishment

One argument against legal personhood for artificial agents is their limited susceptibility to

punishment. But the modern corporation is accorded legal personality, although it cannot be

imprisoned, because it can be subject to financial penalties. Artificial agents that could control

money independently would be able to pay both damages (for negligence or breach of contract,

for example) and fines from their own resources, and so would be susceptible to financial

punishment in the ordinary way. Artificial agents could in principle be restrained in other ways,

say by being barred from engaging in economically rewarding work for stipulated periods.

Solum considers issues of responsibility and punishment in discussing whether a

hypothetical trustee-agent, which cannot (by reason of its design) engage in fraud or theft, could

be accorded legal personality.

Violations of the criminal law are characteristically intentional. In this case, one of the major purposes of liability, to deter intentional wrongdoing, is simply not at issue – the expert system cannot steal or commit fraud. If we restrict our attention to the deterrent function of punishment, it seems possible that an AI could be responsible in a way that satisfies at least some of the policies underlying the imposition of duties and liabilities on trustees. … [I]f we take a broader view of the functions of punishment, the second sort of case becomes murkier. (Solum 1992, 1248)

The “broader view” in question includes “educative” and “just deserts” theories of punishment.

So, for Solum, it would not be educative to punish an errant artificial agent, nor would it comport

with the need to accord just deserts: “Such a system does not deserve to be punished because it

lacks the qualities of moral persons that make them deserving.” However, Solum observes that:


The problem of punishment is not unique to artificial intelligences, however. Corporations are recognized as legal persons and are subject to criminal liability despite the fact that they are not human beings. Further, it is by no means certain that corporations are moral persons, in the sense that they can deserve punishment. Of course, punishing a corporation results in punishment of its owners, but perhaps there would be similar results for the owners of an artificial intelligence. (Solum 1992, 1248)

The objections Solum raises are not fundamental. First, Solum is concerned with a form of dependent

legal personality, rather than the independent kind that ex hypothesi would be capable of

controlling money and therefore paying any fines imposed. Secondly, and more profoundly, it

does not seem to us self-evident that artificial agents could not be considered susceptible to

punishment just like other moral agents.

Artificial agents can respond to the threat of punishment by modifying their behavior,

goals and objectives appropriately. A realistic threat of punishment is a palpable thing that can be

weighed in the most mechanical of cost-benefit calculations. Artificial agents built using

evolutionary algorithms that reward legal compliance or ethical behavior, and that respond to

situations imbued with a moral dimension, exhibit a kind of moral sense. The artificial agent’s

history of responding “correctly” when confronted with a choice between legal/ethical acts,

whose commission is rewarded, and illegal/unethical acts, whose commission results in an

appropriately devised penalty, would be appropriate grounds for assigning it a moral

susceptibility to punishment (we assume the agent is able to report appropriate reasons for

having made its choices). An agent rational enough to understand and obey its legal obligations

would presumably be rational enough to modify its behavior so as to avoid punishment, where

this punishment resulted in an outcome inimical to its ability to achieve its goals.
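To make the point concrete, here is a minimal sketch (our own, with invented figures) of punishment entering such a cost-benefit calculation: the expected penalty attached to an unlawful option lowers its expected value, so a goal-maximizing agent steers away from it.

# A minimal, hypothetical sketch of deterrence as cost-benefit calculation:
# the expected penalty attached to an unlawful option lowers its expected
# value, so a goal-maximizing agent avoids it.

def expected_value(gain: float, penalty: float, p_detection: float) -> float:
    """Expected payoff of an option once the threat of punishment is priced in."""
    return gain - p_detection * penalty


def choose(options: dict[str, tuple[float, float, float]]) -> str:
    """Pick the option with the highest expected value.

    Each option maps to (gain, penalty if caught, probability of being caught)."""
    return max(options, key=lambda name: expected_value(*options[name]))


if __name__ == "__main__":
    options = {
        "perform the contract": (100.0, 0.0, 0.0),
        "breach the contract": (150.0, 500.0, 0.8),  # lucrative, but the fine deters
    }
    print(choose(options))  # -> 'perform the contract'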

(iv) Capacity to Enter Contracts

Artificial agents can, in principle, be capable of manifesting the intention to form contracts.

When we interact with artificial agents that operate shopping websites, we are able to form

contracts because those agents, in a systematic and structured way, make and accept offers and


acceptances of money in exchange for goods and services. Whether legal personality is

something that is necessary in order doctrinally to explain how this behavior gives rise to a

contract between the user and the operator of the artificial agent is a question we will examine

in the specific context of the contracting problem.37

(v) Ability to Control Money and Own Property

Historically, the concept of “legal person” has been bound up with the ownership of property.

Indeed, a full-blown concept of a person necessitated a separation between “legal person” and

property, which came about with liberal individualism’s shift from status-based concepts such as

“property” to contract-based concepts of individual rights, which made legal institutions “clarify

the distinctions and tensions between the definition of human, person and property” (Calverley

2007). The genesis of the granting of personhood to corporations – in the United States, via the

Fourteenth Amendment38 – is instructive, for it followed from grants of charters to corporations

to own property. Thus, a linkage between owning property and personhood is built into this

particular “legal fiction”.

The concept of patrimony in civil law indicates an important condition for legal

personhood: the technical ability to control money, i.e., to pay money; to receive and hold money

and other valuable property such as securities; and to administer money with financial prudence.

The concept of patrimony suggests that without this ability, the legal system would be reluctant

to impose liabilities on such entities. Prior to being accorded legal personhood, agents would

need to be able to do these things on behalf of other people, and agents that were unable to

undertake economic transactions without relying on intermediaries would find their case for

37 See Section 3 of this Chapter.
38 See Santa Clara County v. Southern Pacific Railroad Company, 118 U.S. 394 (1886) (U.S. Supreme Court).


independent (as opposed to dependent) legal personhood defeated.39 It may seem unclear how an

artificial agent could derive money or other property. One possibility would clearly be ordinary

gainful employment, on behalf of users or operators. It would be in the interests of the agent’s

operator to continue to derive that income by owning the agent, something that would be

incompatible with independent legal personhood.

2.4 Objections to Personality for Artificial Agents

We now consider a number of possible objections to the notion of according Lockean and/or

independent legal personality to artificial agents. None of these objections is insurmountable.

2.4.1 Personal Identity Conditions

The personal identity conditions for artificial agents are not well understood. Consider an

artificial agent instantiated by software running on hardware. How do we identify the subject

agent? Is it the hardware, the software or some combination of the two? What if the hardware

and software are dispersed over several sites and maintained by different individuals? Consider

an artificial agent implemented in software. Which of its two forms, the source code or the

binary executable, is the agent? Our identification difficulties do not end with the choice of the

executable as the agent, for unlimited copies of the agent can be made at very low cost. Which of

these is the person? Perhaps each instance of a software program could be a separate person if

capable of different interactions and functional roles? Or consider a multi-agent system (Ferber

1999), consisting of multiple copies of the same program in communication, which might

alternatively be seen as one entity or as a group of entities. Or consider the versioning of software;

39 As far as dependent legal personality is concerned, notice that the most common form of legal person other than humans is the modern business corporation. This can only act by its agents (or its board of directors or general meeting); by itself it is completely helpless. So a technical inability to perform a task personally is by no means an absolute bar to being accorded legal personality and, in fact, in the scheme of things, is quite unimportant.


does a new version automatically become a person and does this mean the older version is no

longer a person? We should note that such problems of identity are evident in entities like

football teams and universities, and nevertheless, a coherent way of referring to the entities

emerges over a period of time on the basis of shared meanings within a community of speakers.

Perhaps the problem of the identification of agents could be solved by a registry, similar

to that for corporations:

[T]he problem of identification is not unique to computers. The same problem arises with corporations, as their membership and control can change frequently. However, registration makes the corporation identifiable. For computers to be treated as legal persons, a similar system of registration would need to be developed. For example, a system of registration could require businesses who wish to rely on computer contracts to register their computer as their "agent". (Allen and Widdison 1996, 42)

Similar proposals call for a so-called “Turing register” where agents and their principals would

be registered and recognized by the legal system, much as companies are registered today

(Karnow 1996; Wettig and Zehendner 2003). We believe the use of a register for artificial agents

is a promising strategy for dealing with the problems of identification of artificial agents. Such a

register could be open to artificial agents, and registration could confer dependent or independent

legal personality. This proposal has met with skepticism that the benefits of the “Turing registry”

do not provably outweigh its costs (Kerr 1999; Miglio et al. 2002). We agree the costs of

establishing a register would be significant and would need to be weighed against the benefits of

doing so. But this is the same as any other public policy intervention. It may be that the number

and complexity of artificial agents, and the diversity of their socio-economic interactions,

become such that the case for intervention is overwhelming.

There may, too, be increasing pressure to treat computers and computer-person "partnerships" as legal persons, at least for the purpose of determining when contracts are formed (Allen and Widdison 1996, 52). These "partnerships" would confer a form of dependent legal personality on the agent, much as a company must always be registered with at least one director


through whom it may act.41 This would have somewhat similar effects to the Turing register

proposal.

2.4.2 The Judgment Objection

Closely related to the possibility of an agent being sui juris is the suggestion that the capacity of

artificial agents to follow program instructions is not sufficient to enable them to make

judgments and exercise discretion. While it has been suggested that the law would not permit

artificial agents to serve as trustees unless they had the kind of general-purpose intelligence that

would enable them to take discretionary decisions (Solum 1992), if the intelligence required to

take discretionary decisions were displayed by an artificial agent, then no further objection

should be raised on this score.

As should be evident, our dismissal of this objection is grounded in our adoption of the

methodology of the intentional stance, for what would matter to a court of law would be a

demonstration that the artificial agent was displaying the right kind of judgment.

Artificial agents should only be required to meet standards that are clearly specified with

associated benchmarks; there is a vagueness in the term “discretionary decisions” that needs to

be clarified before it can be determined that an artificial agent is incapable of them. Many

ostensibly “discretionary decisions” taken by human beings are merely taken within an allowable

range of parameters; artificial agents need only show that they are capable of operating within

similarly specifiable parameters. Indeed, it is not essential to the legal notion of discretion that

the agent must be capable of going beyond the scope of its authority. It makes perfect sense to

41 Although in some jurisdictions, such as Australia, this need not be a human person, nevertheless at some point the chain of corporate directors will end with a human agent.


talk of operating with discretion within specified parameters. This capacity is, in particular, amenable to technical resolution, and discretionary judgments will need to be defined for

particular domains. Currently, artificial agents guided by Bayesian decision-theoretic algorithms

often display very sophisticated decision-making in the light of unexpected or incorrect

information. A very good example of this may be found in the soccer-playing robots42 that

participate in the annual Robo-Cup tournament43.
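The notion of discretion within specified parameters can itself be given a simple decision-theoretic rendering. The following sketch (our own, with hypothetical names and invented figures, not a description of any deployed system) has an agent choose a discount by weighing its estimate of a customer's behavior, while any choice outside the range authorized by its principal is simply unavailable to it.

# A hypothetical sketch of "discretion within specified parameters": the agent
# reasons about expected revenue, but can only ever choose from the range its
# principal has authorized.

def choose_discount(p_customer_leaves: float,
                    order_value: float,
                    authorized_range: tuple[float, float]) -> float:
    """Pick the discount (as a fraction of order value) that maximizes expected
    revenue, searching only within the authorized range."""
    low, high = authorized_range
    candidates = [low + (high - low) * i / 20 for i in range(21)]

    def expected_revenue(discount: float) -> float:
        # Crude model: a larger discount lowers the chance the customer leaves.
        p_leave = max(0.0, p_customer_leaves - discount)
        return (1 - p_leave) * order_value * (1 - discount)

    return max(candidates, key=expected_revenue)


if __name__ == "__main__":
    # Discretion is real (the choice depends on the agent's estimate of the
    # customer's behavior) but bounded: nothing outside 0-15% can be chosen.
    print(choose_discount(p_customer_leaves=0.3, order_value=1000.0,
                          authorized_range=(0.0, 0.15)))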

Note further that while the practical capacity to perform cognitive tasks, and therefore to

display practical reason, is of importance in deciding whether to grant personhood to artificial agents,

something like the Turing test (or a more elaborate version of it) is neither a necessary nor

sufficient condition to attain the status of a dependent legal person. Arguments for according

legal personality to animals (Nosworthy 1998; Wise 2000) show that high intelligence of the sort

that could approach passing the Turing test should not be seen as a necessary condition of

dependent legal personhood. Other dependent legal persons such as children and the mentally

incapacitated are accorded legal personality by the legal system while having limited mental

capacity; companies and other non-human legal persons do not exhibit intelligence of their own,

although they are associated with human owners or representatives who display those attributes.

And historically, as we have noted, fully capable adults have been denied independent legal

personality. Most importantly, the personhood of artificial agents is a live issue because they

have given us sufficient reason to imagine that their capacities in that regard could increase.

2.4.3 “Missing-Something” Arguments

Arguments against artificial intelligence frequently employ the notion that “something is

42 http://www.informatik.uni-freiburg.de/~robocup/publications.htm
43 http://www.robocup.org/


missing” in a computational architecture, which disqualifies it from being “sufficiently like us”.

Interestingly enough, what is taken to be missing is also notoriously hard to define. We take this

as an encouraging sign, for it suggests that with greater clarity of definition, the mystery of how

artificial agents could ever possess these attributes would be dispelled as well. These arguments

are united by one feature, which is their skepticism about whether particular human attributes

can be the subjects of a more naturalistic understanding. In responding to these objections, we

can only touch the surface of a much larger philosophical debate.44

(a) “Phenomenal” Consciousness

A persistent objection to the possibility of personhood for artificial agents relies on questioning

the possibility of artificially intelligent beings ever having perceptual or phenomenal experience

(Bringsjord 1992). We suggest that firstly, there is no reason to believe artificial agents cannot

have experiences which are like the kinds of phenomenal experiences we take ourselves and

other human beings to have; and secondly, even if artificial agents are not conscious in this full-

blown sense, they can still be reasonably reckoned to possess “enough mentality” to be granted

personhood.46

We believe the question of whether artificial agents have phenomenal experience

precisely like ours is misguided, because it is not clear what experiences humans

themselves have. The concept of qualia, the so-called intrinsic qualities of subjective experience,

like the redness of a rose, is hopelessly confused (Dennett 1993). As just one example amongst

many, the phenomenon of “change blindness”, where subjects are unable to pick up differences

44 This is particularly true of the debate on consciousness; we simply cannot do justice to the huge literature on the topic and can only point to David Chalmers's excellent online bibliography at http://consc.net/online.html.
46 As our discussion of self-consciousness and the intentional stance indicated.


in presented images until they are pointed out, or only with considerable effort, shows there is no

particularly privileged relationship between a subject and his or her qualia; phenomenal personal

experience is a phenomenon ripe for third-person study where agents are only treated as

authoritative on how things seem to them (Dennett 2003).

Furthermore, artificial agents might not share the same perceptual world as us, or they

might, but with greater or lesser perceptual sensitivity. Neurological studies indicate that

phenomenal consciousness may be simply the state of extensive data sharing between

components of a neural architecture; such sharing is a function of functional organization and not

physical composition (Dehaene and Naccache 2001). That artificial agents can be sensitive to

perceptual data is a given; witness the color-sensitive behavior of soccer-playing robots, which

detect the color of the soccer-ball and aim for a marked goal.47 From the perspective of the

neurological study of the human brain, what is lacking in these present-day architectures is a

systemic availability of this data. What is not missing is some “viewer” or “user” of the

perceptual data. To suggest that such an entity is crucial to perception is to fall

into the trap of an infinite regress: for then the “viewing” or “using” process itself must be

explained. This process must bottom out somewhere; the inner experience of an agent cannot be

coherently explained in terms of an entity experiencing some “inner theatre” (Dennett 1992).

The rejection of the possibility of machine consciousness has some severe philosophical

consequences as well, while its acceptance has little philosophical cost. The denial of machine

consciousness would place humankind in a naturalistic singularity, and make our consciousness

a mere epiphenomenon, a companion to the supposedly “real” physical phenomena that it

accompanies (Thompson 1965). Most importantly, on a naturalistic world-view, one where the

47 http://www.informatik.uni-freiburg.de/~robocup/publications.htm


properties of the world around us are explainable in terms of physical facts and relations, human

beings must have consciousness by virtue of physical, functional properties. There is no prima

facie reason to believe these cannot be instantiated in non-human agents other than pointing to

physical makeup, the ineffability of the phenomenal experience or the uniqueness of human

auto- and inter-subjectivity. The first is utterly mysterious, for it is not clear why only carbon-

based life forms could have phenomenal experience, while the latter two seem like retreats into

theoretical obfuscation.

If cognitive science discovered that the underlying processes of an agent's apparent consciousness sufficiently resembled – in the appropriate physical and functional dimensions – those of humans, then part of the conceptual barrier to ascribing it consciousness like ours

should be breached. That is, if a neural correlate for consciousness could be located, and its

operations mapped on to those of an agent architecture, then there might be grounds to ascribe

consciousness to such an entity. Take as an example the location of emotion in the frontal cortex

of the human brain, and consider a situation in which the operations of the frontal cortex

(including its interaction with other brain modules) could be described, independent of its

physical composition, in terms of its functional relationships. Such an architecture could be

fruitfully compared with silicon-based logical architectures present in current artificial agents.

An artificial agent architecture with the right kind of functional organization would then be on its

way to being described as being in the same states a human is in when conscious, and we would

need to treat the agent as a reliable reporter on whether it was experiencing internal states with

phenomenal quality - just as we do humans, despite our inability to determine with any accuracy

whether they have precisely the same experiences as ourselves.

Most fundamentally, intersubjective agreement about phenomenal experience is the only


viable method of systematically confirming that other human beings have phenomenal

experiences like ours. The perceptual world we claim to inhabit only makes sense to those

prepared to play a language game with us with enough intersubjective agreement that we can

ascribe them the same kinds of perceptual experiences that we take ourselves to have. If artificial

agents could display evidence of such intersubjectivity, then the only denial we could coherently

make is that of agent’s possessing perceptual experience like ours, not that they do not have any.

Perhaps all we need to do with artificial agents is to see if we can adopt the phenomenal stance

(Robbins and Jack 2006), which suggests ascriptions of phenomenal experience might follow if

success conditions similar to those of the intentional stance were observed. That is, if we were to

“reliably and voluminously” predict a system’s behavior on the basis of ascribing phenomenal

experience to it, then it would make sense to adopt the phenomenal stance towards the artificial agent

(“the robot moved towards the door because it felt hot and wanted to move to a cooler

environment”). The adoption of the phenomenal stance is not concerned with the determination

of any exclusively human properties, such as a particular physical composition. To suggest that

would be to beg the question whether it is only agents with similar physical composition that can

have similar phenomenal experiences.

In the legal context, Solum (1992, 1264-6) suggests that arguments that artificial agents will always lack consciousness, no matter what their behavior might indicate, will not be determinative in a courtroom context, as juries will be agnostic on this score.48 He suggests that if the artificial agents jurors encountered in ordinary life behaved in a way that only conscious human beings did, then jurors would be inclined to accept the claim that the consciousness

48 Solum’s argument can be criticized, in the English context, for relying too heavily on the court-room as theparadigm of lawmaking. On fundamental issues such as these, Parliaments and even the people via referenda tend tohave the ultimate say. But substituting “representatives” for “jurors”, a similar argument can be made.


was real and not feigned. Artificial agents would not be treated as zombies, creatures that have

all the experiences we do without the phenomenal experience attached to them.49

(b) Behavior or Internal Design?

One objection to functionalist positions like the intentional stance is that evidence of radically

different cognitive architecture in artificial agents could cause us to doubt our initial ascriptions

of intelligence, agency or beliefs. Discovering that an agent only gave correct answers to our

questions or took the right decisions because of blind rule-following or luck (Block 1981) might

cause us to reconsider our initial ascriptions. So, perhaps absent evidence of similar cognitive

architectures we should only consider artificial agents as simulating intelligence:

[A]lthough behavior that indicates the presence of a quality such as consciousness, intentionality, feelings, or free will may be very good evidence that the quality is present, the behavior alone is not irrebuttable evidence. Cognitive science might give us knowledge about the underlying processes that produce consciousness … [T]hat would give us firm reason to believe that a particular AI [(i.e., an artificial intelligence)] had only simulated, as opposed to artificial, consciousness. … [But]… it does not establish that no AI could possess any particular mental quality. Rather, this argument establishes an AI could turn out not to possess a mental quality, despite strong behavioral evidence to the contrary. (Solum 1992, 1275-76)

A relevant distinction exists between a computer simulation of water and a computer program

that can duplicate the verbal behavior of a normal adult human: an artificial agent that passed the

Turing Test could interact with its environment, whereas no one could ride a real surfboard on a

simulated wave. This objection assumes the two kinds of simulations are identical. As (Copeland

1993) points out, such an objection assumes that simulations of intelligent behavior are

simulations which lack essential features, much like simulated leather does, as opposed to

simulations like the manufacture of artificial proteins, which are still proteins, and laboratory

diamonds (still diamonds), which do not appear to lack any ostensibly essential features. There is

a crucial difference between the simulation of intelligence or practical reason, and the simulation

49 See (Chalmers 1997) for an argument that zombies are theoretically possible, and (Dennett 1995) for a debunking.


of consciousness. In the latter it makes apparent sense to believe that what is being demonstrated

is merely a simulation; in the former it is not clear what essential feature is being left out in the

simulation.

Of course, if behavioral evidence and knowledge of underlying processes both pointed to

actual rather than simulated features of human mentality, this would be good reason to believe

artificial agents did possess these features. Thus there is legal weight to be accorded to a

similarity in underlying processes between humans and artificial agents, and some legal role for

cognitive science in theorizing and discovering what those underlying processes might be:

Absent well-confirmed theories of underlying processes, we cannot make confident judgments that the elements of personhood are lacking in particular cases. [But i]f AIs behaved the right way and if cognitive science confirmed that the underlying processes producing these behaviors were relatively similar to the processes of the human mind, we would have very good reason to treat AIs as persons. (Solum 1992, 1286)

It is not clear, however, whether a legal system would deny an agent personality on the basis of its

internal architecture as opposed to its performative capacities, because the latter are the

determinants of the nature of its social and legal interactions. If the artificial agent behaved in

much the same way as a person, in one (fairly trivial) sense it could be said to share the cognitive

architecture of a normal person. But if, at the very next level, the cognitive architecture were completely different from that of a human person, then what would matter is whether its

cognitive architecture’s components could be mapped onto the functional components of human

cognitive architecture. As most artificial agents are not likely to be biologically composed,

whatever similarity we discover in relation to the cognitive processes between humans and them

are likely to manifest themselves on the functional level. The empirical confirmation of some

computational theory of mind, that humans cognize by processing mental representations or

brains utilize a connectionist architecture, or the successful deployment in some artificial agent

architecture of modules functionally similar to those identified in neuroscientific studies, would


be useful support for such assertions of functional similarity. But the socio-legal interactions of

artificial agents could make their legal personality a “live issue” before such a stage was reached.

This makes the intentional stance strategy described above doubly attractive, for it enables us to

move past such considerations to a concern with how the agent conducted itself.

The relevance of the cognitive architecture of the artificial agent allows us to highlight an

important epistemic asymmetry. We know how computers work, but we do not know well

enough how human brains work, and neuroscience offers only partial empirical confirmation of

our best hypotheses (Machamer, Grush, and McLaughlin 2001). We lack very detailed

knowledge of our cognitive architecture; arguably, we know more at the logical level than at the

physical level, as the difficulties of investigations of neuroscience amply demonstrate. But in the

case of artificial agents, we possess fine-grained knowledge of the physical and algorithmic

architecture. It is this familiarity, we suggest, that breeds contempt for the artificial agent and

that Dennett’s story of a sufficiently-complex Cog attempts to dispel. It is this epistemic

asymmetry that leads to repeated violations of the following rules, suggested in the context of

determining animal rights (Wise 2000, 121), but, we think, very useful for our purposes as well:

• Rule One: Only with the utmost effort can we ever hope to place ourselves fairly in

nature.

• Rule Two: We must be at our most skeptical when we evaluate arguments that confirm

the extremely high opinion that we have of ourselves.

• Rule Three: We must play fair and ignore special pleading when we assess mental

abilities.

These three rules have been implicit in our tackling these classic “missing-something”

arguments; failure to follow them results in persistent misunderstanding, nowhere better


demonstrated than in the objection concerning the supposed lack of free will in artificial agents.

(c) Free Will

An artificial agent, the objection runs, cannot possess free will and all the other desirable social and moral qualities that follow, because it is just a programmed machine. From this damning claim, the case for artificial agents' personhood appears irreparably damaged, for a programmed machine could

presumably never display the qualities that we, as apparently freely choosing human beings,

appear to have. But there is an important reductive way to view free will that considerably

demystifies it. An operative assumption for the concept of free will is that “there is a well-

defined distinction between systems whose choices are free and those which are not.” (Sloman

1992) But a closer examination of intelligent systems reveals no one particular distinction.

Instead, there are many different distinctions, all of which correspond to particular design

decisions that present themselves to the designer of the system in question. Compare, for instance, (a) an agent that can simultaneously store and compare different motives with (b) an agent that has only one motive at a time. Or compare (a) agents all of whose motives are generated by a single top-level goal (e.g., "buy this book") with (b) agents with several independent sources of motivation, e.g., thirst, hunger, sex, curiosity, ambition, aesthetic preferences, etc. (Sloman 1992).

These distinctions are suggestive, for one way to ascertain whether an artificial agent actually

has free will is to determine whether it instantiates design features of a particular kind. Speaking

of ourselves, from an evolutionary standpoint it is not clear that those alternatives have not

already been tried and found wanting, and that our current assessment of ourselves as possessors of

free will is merely a report on a particular design situation that obtains in ourselves.

The most plausible account of human free will is that an action is free if caused in the

right way through reasoning and deliberation on the part of the agent. But in this sense, artificial


agents could possess free will. For Frankfurt (1971), free will is compatible with a kind of

determinism; what is crucial is the role of second-order volitions. Persons can have beliefs and

desires about their beliefs and desires and can act according to these higher-level beliefs and

desires; they must be the causal agents for their actions, and it is in this agency that their free-

will resides (Copeland 1993). If an agent takes an action, we have three choices - to ascribe the

responsibility to the agent, or to its designer, or to no one at all. The third option can be ruled

out; the second seems increasingly implausible if the human designer is unaware of the action

being committed, and if the range of actions demarcated for the artificial agent is sufficiently

large and only determined by a sophisticated decision procedure. In these cases, causal agency is

plausibly ascribed to the agent.

Free will and autonomy are related; an argument for autonomy is an argument for free

will, for autonomous acts are freely chosen acts. If a person's second-order desires are motivated

by "second-order volitions", where the person wants the second-order desire to be effective in controlling

the first-order desire, then the person is autonomous if satisfied with that desire (Frankfurt 1971).

But possessing second-order thoughts in this fashion is an architectural or design feature. The

artificial agent would need to have a higher-level evaluative function that checks its executive

module. An agent whose executive module is guided by the evaluative capacities of this

higher-level module is plausibly described as autonomous.
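To make the architectural point concrete, the following is a minimal, purely illustrative sketch in Python (the class names, motives and numbers are our own invention, not a description of any existing system): a first-order executive module proposes an action from competing motives, and a second-order evaluator, holding preferences about which motives should be effective, vets that proposal.

# Illustrative sketch only: a two-level agent in which a higher-level
# evaluator (second-order preferences about which motives should move
# the agent) vets the proposals of a first-order executive module.
class Executive:
    """First-order module: proposes an action from competing motives."""
    def __init__(self, motives):
        # motives maps a motive name to a (strength, proposed action) pair
        self.motives = motives

    def propose(self):
        # Pick the action backed by the strongest first-order motive.
        name, (_, action) = max(self.motives.items(), key=lambda kv: kv[1][0])
        return name, action

class Evaluator:
    """Second-order module: endorses or vetoes first-order motives."""
    def __init__(self, endorsed_motives):
        self.endorsed = set(endorsed_motives)

    def approves(self, motive_name):
        return motive_name in self.endorsed

class TwoLevelAgent:
    """Acts on a first-order motive only if the evaluator endorses it."""
    def __init__(self, executive, evaluator, fallback_action):
        self.executive = executive
        self.evaluator = evaluator
        self.fallback = fallback_action

    def act(self):
        motive, action = self.executive.propose()
        if self.evaluator.approves(motive):
            return action
        return self.fallback

executive = Executive({"curiosity": (0.9, "browse new listings"),
                       "thrift": (0.6, "do nothing")})
agent = TwoLevelAgent(executive, Evaluator({"thrift"}), fallback_action="do nothing")
print(agent.act())  # curiosity is strongest but not endorsed, so the agent does nothing

Whether an agent, artificial or human, instantiates a design of roughly this kind is an empirical question about its architecture, which is the force of Sloman's point above.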

Here, the “it is just a programmed machine” argument appears to have particular force,

for the programmed consideration of choices does not appear to meet the intuitive understanding

of free will. But the intuitive understanding behind this objection is, as David Hume recognized

(Hume 1993, Section VIII, Of Liberty and Necessity), a rejection of the naturalistic world-view


in relation to humans, for the same objection might be made to free will for humans, governed as

we are by natural laws. But this does not prevent us from ascriptions of responsibility to humans

if it is apparent the person committing an act could have chosen to act otherwise. We recognize

the existence of such choices all the time.

But in any event the “it is just a programmed machine” objection is incoherent when

examined closely. Too many similarities can be drawn between the combination of our

biological design and social conditioning, and the programming of agents for us to take comfort

in the proclamation that we are not programmed while artificial agents unequivocally are. The

programming of the choices of an agent, if made subject to context-sensitive variables and

sophisticated decision-theoretic considerations, fails to look qualitatively and quantitatively

different from a system (i.e., humans like us) acting in accordance with biological laws and

impinged on by a variety of social, political and economic forces. We ascribe to ourselves the

ability to make judgments and exercise autonomy in our decision-making; while we do not deny

this autonomy, it is not implausible that it is significantly guided by those same forces at work

in our lives.

Perhaps, as Hume suggested, our free will consists in acting in the presence of choices,

and not in being free from the constraints of the natural order. Any other notion of free will

requires us to reject a naturalistic world-view or to adopt the implausible view that there could be

uncaused actions. The ascription of free will to ourselves and the denial of it to programmed

machines has emotional force; it is unclear that it has much philosophical heft in the situation at

hand.

(d) Autonomy

The discussion of free will directly impinges on the issue of autonomy; here, a related


discussion of granting rights to animals is instructive. In arguing for animal rights, Steven Wise

points out that a philosophical definition of autonomy such as Kant's would set the bar

too high even for many, if not most, human beings (Wise 2000, 246). Kant’s definition of

autonomous action requires an agent to possess the capacity to understand what others can and

ought to do in a situation requiring action, and to act only after rationally analyzing alternative

courses of action, while keeping in mind that these choices are informed by an understanding of

other agents’ capacities and how it would want other agents to act (Kant 1998, 41-53). Very few

human beings act autonomously in this sense. Our operational definition of autonomous behavior

for an artificial agent may appear incapable of capturing all the nuances encapsulated in Kant’s

definition; Wise’s discussion, and critiques of Kant for his failure to ascribe full moral status to

children (Herman 1993) suggest we are not philosophically worse off as a consequence.

Legal rights are distributed to all humans across a wide spectrum of autonomy, and no

single definition of autonomy appears to be operative. Indeed, comatose, brain-damaged patients

who are nonautonomous, nonconscious and nonsentient can be awarded legal personhood solely

on the basis of membership in the human species (Wise 2000, 244-245). Wise suggests an entity

should be granted legal rights if humans with a similar level of autonomy have them. If humans

who completely lack autonomy have certain legal rights then so could the entity under

consideration. Furthermore, if its autonomy is insufficient to entitle it to full protection then it is

entitled to legal rights in proportion to the degree to which its autonomy approaches the

necessary minimum, if humans with a similar level of autonomy are entitled to these rights in

proportion to the degree to which their level of autonomy approaches the necessary minimum.

From Wise’s analysis, it appears our central challenge is to arrive at a meaningful comparison of

human and artificial agents' autonomy; we suggest the taxonomy provided at the beginning of

this chapter could serve as a useful heuristic and, most importantly, as an empirical guide.

(e) Moral Sense

A related objection would deny a moral sense to an artificial agent. Prima facie, however, the

importance of the possession by artificial agents of a moral sense should not be overstated.

Infants, who have little or no moral sense, and few legal responsibilities, are nevertheless

accorded dependent legal personality by modern legal systems. Here, recognition of “species

resemblance” and similarities in potential dispositions between children and adult humans seems

to drive the ascription of legal personality and the granting of legal rights.

In the case of artificial agents, it is the agent’s interactions with its social context that we

take to be the most important factor in the ascription of a moral sense. Even if we take

possessing a moral sense to be contingent on the possession of a privileged set of beliefs, the

moral ones,51 we still have a means for ascribing a moral sense to an artificial agent. For what we

need to determine is whether we can “reliably and voluminously” predict an artificial agent’s

behavior on the basis of the assumption that it rationally deals with the moral beliefs and desires

we ascribe to it. If we can adopt such a stance towards its moral beliefs and desires and other

intentional attitudes inflected with moral content, the assignment of a moral sense to it is a

logical next step. Our thinking of humans as moral agents could be viewed as our adopting such

a stance towards them: we ascribe a moral belief (“John believes helping the physically

incapacitated is a good thing”) and on the basis of this ascription, predict actions (“John would

never refuse an old lady help”) or explain actions (“He helped her cross the street because he

wanted to help a physically incapacitated person”).

51 (i.e., to display a moral sense is to provide evidence of the direction of action by a set of beliefs and desires termed "moral")


Thus the ascription of a moral sense to artificial agents is a prime candidate for treatment by

the intentional stance, as in the example "The robot avoided striking the child because it knows

that children cannot fight back". Failures of morality on the part of the artificial agent in question

could then be understood as failures of reasoning: the failure to hold certain beliefs, to draw

particular conclusions, and then, to act accordingly. If we are “reliably and voluminously” able

to use an intentional language of morally-inflected beliefs and desires in describing and

predicting the behavior of artificial agents, then it starts to make sense to discuss the behavior of

artificial agents as morally good or bad. An artificial agent with a set of internalized

"commands", but autonomous enough to disobey those commands, able to report moral beliefs and

desires as grounding its actions, and whose actions were best predicted by the intentional stance,

might qualify as a "moral person" in this sense.

3. Artificial Agents and the Contracting Problem

Legal scholars have considered the problem of personhood for artificial agents in a number of

contexts. This debate has most concretely centered on possible solutions to the legal doctrinal

problem of accounting for the formation of electronic contracts. A consideration of this particular

doctrinal problem is a useful starting point for examining some possible solutions, one of which

involves according legal personality to artificial agents.

3.1 The Contracting Problem

Artificial agents, and the contracts they make, are ubiquitous. Every time we interact with a

shopping website we interact with a relatively autonomous interface that queries the operator’s

database, uses our input to populate the operator’s database, and sets out the terms of the

transaction. The operator does not exercise direct control over the agent’s choices, at least until


the operator has a chance to confirm or reject the transaction entered into.52 Similarly, websites

which offer users agent-like functionality are becoming increasingly common. An example is the

functionality of the ebay.com auction website, which will bid on the user's behalf, as required,

up to the user's specified maximum. Agents such as shopbots and pricebots capable of collecting

information and engaging in transactions with limited or no operator input are the subject of

current research programs.53
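To illustrate the limited discretion such a bidding agent exercises, the following sketch (our own simplification, not a description of eBay's actual bidding rules) captures the core of proxy bidding: the agent keeps the user's maximum confidential and bids only as much as is needed to stay ahead, never exceeding that maximum.

# Illustrative sketch of proxy bidding on a user's behalf; the rules and
# figures are hypothetical simplifications.
def proxy_bid(current_high_bid, bid_increment, user_maximum):
    """Return the agent's next bid for the user, or None if the user is outbid."""
    needed = current_high_bid + bid_increment
    if needed <= user_maximum:
        return needed   # bid just enough to take the lead
    return None         # the user's ceiling is reached; the agent stops bidding

print(proxy_bid(current_high_bid=40.00, bid_increment=1.00, user_maximum=50.00))  # 41.0
print(proxy_bid(current_high_bid=50.00, bid_increment=1.00, user_maximum=50.00))  # None

Even this trivial agent acts without the user's contemporaneous knowledge of any particular bid, which is precisely the feature that generates the doctrinal difficulties discussed below.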

3.1.1 Doctrinal Difficulties

A traditional statement of the requirements of every legal contract is as follows54:

To constitute a valid contract

(1) there must be two or more separate and definite parties to the contract;

(2) those parties must be in agreement, that is there must be a consensus ad idem;

(3) those parties must intend to create legal relations55 in the sense that the promises of each side are to be enforceable simply because they are contractual promises;

(4) the promises of each party must be supported by consideration56 …

Several legal doctrinal difficulties are associated with contracts made by artificial agents (Allan

and Widdison 1996; Kerr 1999, 2001). In relation to the requirement that there be two parties

involved in contract-making, artificial agents are not considered by current law to be legal

persons. Therefore, in the case of a purchase effected by means of an artificial agent or agents,

only the ultimate buyer and seller can be the relevant parties to the contract. This entails

52 The terms and conditions of some important shopping websites (for example, Amazon.com) specify that contracts are not concluded until a dispatch confirmation is sent, and that orders are subject to cancellation in the case of mistake, a point we return to below.
53 See the material referenced at footnote 4.
54 Halsbury's Laws of England (4th edn) Vol. 9 para 203, paragraph breaks added; cf Restatement of the Law (Second) Contracts, s 3.
55 This intention is sometimes referred to as the animus contrahendi.
56 (i.e., supported by something valuable given in return for the promise)


difficulties in satisfying requirements (1) and (2) that there should be two parties who are in

agreement, since in many cases one party will be unaware of the terms of the particular contract

entered into by its artificial agent. Furthermore, in relation to the requirement (3) that there

should be an intention to form legal relations between the parties, a similar issue arises: if the

agent’s principal is not aware of the particular contract being concluded, how can the required

intention be attributed?

3.1.2 The Role of “Closed Systems”

In so-called “closed systems”, where the rules governing interactions between buyers and sellers

are specified in advance, doctrinal difficulties can be addressed through the use of contractual

terms and conditions. By “closed systems” we mean not only those market-like systems where

rules and protocols of interaction between buyers and sellers (and their agents) are specified in

advance, but also those systems where one party (usually the seller) specifies the terms of use

(e.g., of the website) that will govern any actual transaction between a human buyer and the

seller's artificial agent. For example, the terms and conditions governing the use of Amazon's

UK website57 provide in part as follows:

13. Our contract

When you place an order to purchase a product from Amazon.co.uk, we will send you an e-mail confirming receipt of your order and containing the details of your order. Your order represents an offer to us to purchase a product which is accepted by us when we send e-mail confirmation to you that we've dispatched that product to you (the "Dispatch Confirmation E-mail"). That acceptance will be complete at the time we send the Dispatch Confirmation E-mail to you. …

15. Pricing and availability

…Despite our efforts, a small number of the millions of products in our catalogue are mispriced. Rest assured, however, that we verify prices as part of our dispatch procedures. If a product's correct price is lower than our stated price, we charge the lower amount and send you the product. If a product's correct

57 At http://amazon.co.uk. Last accessed 6 December 2006


price is higher than our stated price, we will, at our discretion, either contact you for instructions before dispatch or cancel your order and notify you of such cancellation.

In this case the software interface with which the user interacts does not conclude contracts for

the seller. Rather, the seller’s (human) dispatch clerks must confirm the dispatch, including

presumably a cursory check to ensure the price is not mistaken.

In what follows, we investigate potential solutions to the contracting problem for systems

that are not closed in the sense outlined above.

3.1.3 A “Quick” Solution?

(Allan and Widdison 1996) suggest that the most likely route courts would

take to overcome these doctrinal difficulties and grant legal efficacy to electronic contracts would

be to relax the requirement that the intention of the parties must be referable to the offer and

acceptance of the specific agreement in question.

While superficially appealing, this still leaves too much to be explained. If the intention

need no longer be referable to a specific agreement, how can a contract emerge from an

interaction between a user who seeks a particular contract and an operator who has only a

“generalized intent” or who, more likely, has nothing resembling an intention, but only sets

certain rules for the agent to follow? We explore possible ways to explain such agreements

below.

3.2 Five Possible Solutions to the Contracting Problem

We now consider five possible solutions to the contracting problem. The first three potential

solutions are “tweaks”, involving minor changes of law, or pointing out that existing law,

perhaps with minor modifications or relaxations, can accommodate the problem. The fourth is

more radical and involves treating artificial agents as slaves, i.e., as agents without legal


personality; the fifth, the most radical, proposes according artificial agents legal personhood.

3.2.1 First Solution: Agents as "Mere Tools"

Most current approaches to the problem of electronic contracting adopt the first potential

solution: they treat artificial agents as mere tools, or as mere means of communication. All

actions of artificial agents are attributed to the agent’s principal (whether it is an operator or a

user), whether or not they are intended, predicted, or mistaken. On this basis, contracts entered

into through an artificial agent will always bind the principal. This is a stricter liability principle

than that which applies to human agents and their principals.

It would appear at first sight that legal change would not be required in order to entrench

this approach, since it appears commonsensical and captures intuitions that reflect the present

limited capacity of agents to act autonomously.59 However, as the autonomy, social ability, and

proactivity of artificial agents increases, it will be less realistic to approach agents as mere tools

of their operators and as mere means of communication. Wherever there is discretion on the part

of the agent, there is the possibility of communicating something on behalf of the principal,

which, had the principal had the opportunity to review the communication, would not have been

uttered. We examine in more depth below60 whether this solution and the others discussed in this

Section would provide fair outcomes in particular cases.

3.2.2 Second Solution: The Unilateral Offer Doctrine

The second potential solution, which addresses difficulties (2) and (3), deploys the unilateral

58 See the discussion in Section 3.3.1.
59 See also the discussion in Section 3.3.1 which considers the extent to which existing legislation dictates this approach.
60 See Section 3.3


offer doctrine of contract law. Under this doctrine, contracts can be formed by a party’s

unilateral offer addressed to the whole world, together with acceptance, in the form of conduct

stipulated in the offer, by the other party.61 Competitions, and terms and conditions of entry to

premises, are among the most common examples involving unilateral contracts.

Many will see in the unilateral offer doctrine a theory that could justify many electronic

contracts. In a simple example, the user’s interaction with a shopping website, in legal terms, can

be equated with an interaction with a vending machine, where the contractual terms of particular

contracts are not determined by the agent.62 The offer being made to the world is to be bound by

contracts made through the artificial agent. That offer is accepted by a user's interaction with

the artificial agent.

Note at the outset that a special case of a “closed system” in the sense described above

would be a website that specified the consequences of interacting with an artificial agent by a

unilateral offer that is accepted by the user using the website. Such website terms might read

something like this:

By using this website, you agree that a contract of sale between ABC and you arises at the time when our system receives notice that you have clicked on a confirmation button in a dialog setting out the contract's essential terms (other terms are set out in our legal notice).

We see no reason to suppose that such wording would not be a full solution to the contracting

problem, at least where the degree of autonomy of the agent operating the website was very

limited with respect to the generation of this wording. If, however, the wording were adapted in

response to customer location, profile, etc., then the “bootstraps” nature of the solution would

61 See Carlill v. Carbolic Smoke Ball Company [1893] 1 QB 256 (English Court of Appeal).
62 The case of a parking ticket machine was considered in Thornton v Shoe Lane Parking Ltd [1971] 1 All ER 686 (English Court of Appeal). Lord Denning MR analyzed such contracts as follows (at 169): "It can be translated into offer and acceptance in this way. The offer is made when the proprietor of the machine holds it out as being ready to receive the money. The acceptance takes place when the customer puts his money into the slot. The terms of the offer are contained in the notice placed on or near the machine stating what is offered for the money".


become more problematic, since the terms and conditions of the offer form a subsequent contract

that would itself need explaining in terms of the principal’s intentions. Exactly where we cross

the line into “bootstraps” territory, where other explanations for resulting contracts must be

found, is a question of some subtlety.

(Kerr 1999, 19) has argued that the doctrinal limitations of the unilateral offer analysis

are revealed when we are dealing with agents that are able to determine contractual terms

autonomously, for then the seller cannot be said to have intended the terms of each particular

contract. However, this proposed distinction between contractual terms that the operator has

assented to and those it has not is not without difficulties. In a trivial sense, even in the simple

vending machine example the seller does not assent to the terms of every particular contract,

since the seller is unaware of each particular sale. Rather, the doctrine allows a general assent to

contracts of that kind to be translated into contracts that satisfy the doctrinal conditions for

agreement ad idem and intention to contract. It is clear, moreover, that the seller need not

consent to each and every possible contract in advance. For example, a parking machine may

charge parkers by the minute. Clearly, as long as the machine operator has assented to a rule for

determining the terms of particular contracts (i.e., a rate or rate(s) by which parkers are

charged), there is no need for the operator to turn his or her mind to each and every possible

length of time.

We suggest the unilateral offer doctrine could therefore serve to validate all contracts

made by artificial agents, so long as the agent chooses from a set of possibilities determined in

advance by the operator. This would be sufficient to validate a large number of operations. For

example, it would validate the operations of a simple shopping website agent, which will need to

consult a price list, in the form of a database, to calculate the price of a particular item being


sought. It may well need to calculate postage and packaging as well, depending on a number of

choices. The terms and conditions of the sale may well depend on the geographic location of the

purchaser. It might need to add sales tax, depending on the status of the purchaser, or apply a

special discount (for example, for students). Such a rule would even cover our credit card

example, where as well as deciding whether each applicant is credit-worthy, the agent could also

determine the credit limit – one of the most pertinent terms of the contract – for successful

applicants.
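The point that such an agent only ever selects from possibilities fixed in advance by its operator can be illustrated with a hypothetical sketch (the items, rates and rules below are invented for the example): every term of the resulting contract is the application of an operator-supplied rule to the facts of the particular transaction.

# Illustrative sketch: a shopping agent that settles the terms of a sale
# purely by applying rules fixed in advance by its operator. All items,
# rates and rules here are hypothetical.
PRICE_LIST = {"book-123": 20.00, "cd-456": 10.00}
POSTAGE_BY_REGION = {"domestic": 3.00, "international": 9.00}
SALES_TAX_BY_REGION = {"domestic": 0.10, "international": 0.00}
STUDENT_DISCOUNT = 0.15

def contract_terms(item_id, region, is_student):
    """Assemble the terms of one sale from the operator's pre-set rules."""
    price = PRICE_LIST[item_id]
    if is_student:
        price *= (1 - STUDENT_DISCOUNT)
    postage = POSTAGE_BY_REGION[region]
    tax = price * SALES_TAX_BY_REGION[region]
    return {"item": item_id,
            "price": round(price, 2),
            "postage": postage,
            "tax": round(tax, 2),
            "total": round(price + postage + tax, 2)}

print(contract_terms("book-123", region="domestic", is_student=True))
# {'item': 'book-123', 'price': 17.0, 'postage': 3.0, 'tax': 1.7, 'total': 21.7}

On the unilateral offer analysis, the operator's assent is located in its adoption of rules of this kind, not in any awareness of the individual sale.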

The doctrine might go even further and extend to all contracts made by an artificial agent

where the rule or set of rules the agent applies is set out by the operator. This would cover all

agents, except those that have so much discretion that they can invent new contractual terms to

deal with novel situations.63 We are unaware of any agents of such sophistication in existence at

time of writing, but agents that learn using statistical machine learning or neural network

algorithms could devise new contractual terms 'similar enough' to those they had worked out in the

past; these would be novel, and yet not identical to their older contracts.

Lastly, we suggest the law might imply into the offer made to the world a reasonableness

requirement, i.e., the limits of the doctrine would be reached when it is unreasonable for the user

to believe that the agent’s principal would assent to the terms of the contract. This will only be

the case where the agent is acting erratically or unpredictably, as opposed to merely

autonomously. Almost by definition, autonomous action takes place without the knowledge, at

least contemporaneously and of the specific transaction, of the principal. But just as an employee

can, while exercising autonomy in decision-making, stay within well-defined boundaries and act

63 For these purposes, we ignore the fact that the agent technology will be rule-bound in the sense that its behavior will not be completely random but reflect its architecture and algorithms. All agents (or all useful agents) will be rule-bound in that sense, but some agents will be able, on the basis of machine learning technologies, to adapt and evolve their behaviour, including the contractual terms they offer, to reflect past experience or new situations.


in predictable ways, so can an artificial agent. Erratic or unpredictable action, however, as well

as being autonomous, is also beyond the range of action which is expected by the principal at the

relevant time.

3.2.3 Third Solution: The Objective Theory of Contractual Intention

The third potential solution is to deploy the objective theory of contractual intention, the

orthodoxy in United States law. On this approach, a contract is an obligation attached by the

force of law to certain acts of the parties, usually words, which accompany and represent a

known intent. The party’s assent is not necessary to make a contract; the manifestation of

intention to agree, judged according to a standard of reasonableness, is sufficient, and the real but

unexpressed state of the first party’s mind is irrelevant. It is enough that the other party had

reason to believe that the first party intended to agree.

As with the unilateral offer doctrine, (Kerr 1999) suggests this doctrine has real

limitations. According to Kerr, the solution gives out at a higher level of sophistication than the

unilateral offer analysis does. For Kerr, that analysis gives out as soon as the agent is capable of

determining the terms of a contract, and this can take place at a fairly low level of sophistication,

such as in the simple case of an agent which determines the price for a particular sale by looking

up a database. The "objective theory", however, is thought to give out when an offer is initiated

by the electronic device autonomously:

… in circumstances where an offer can be said to be initiated by the electronic device autonomously, i.e., in a manner unknown or unpredicted by the party employing the electronic device. Here it cannot be said that the party employing the electronic device has conducted himself such that a reasonable person would believe that he or she was assenting to the terms proposed by the other party. (Kerr 1999, 23)

As with the unilateral offer doctrine, we do not share these doubts about the potential utility of

the “objective theory” (as before, we distinguish autonomy from unpredictability). We suggest

the objective theory of intent might be relied on as an alternative underpinning to many contracts


reached autonomously through artificial agents. However, (Kerr 1999) is correct to suggest that

there would be limits to the applicability of the objective intent theory. We suggest those limits

might be reached not when agents behave autonomously but when agents behave erratically or

unpredictably. Under those circumstances, it becomes difficult to argue a reasonable person

would conclude that, merely by reason of operating the agent, the operator could be taken to

assent to the terms agreed by the agent. This limit is the same as the limit we proposed on the

applicability of the unilateral offer doctrine above.

3.2.4 Fourth and Fifth Solutions: Artificial Agent as Legal Agent

But it is not clear the “unilateral offer” doctrine or the “objective theory” doctrine should be used

as an attribution rule at all, whereby the actions of an artificial agent can be attributed to its

operator. In current law relevant to human agents, this doctrinal work is done by the law of

agency, which is intimately connected with the notion of the agent’s authority to act, rather than

by a unilateral offer doctrine or the objective theory.

Under the law of agency, within the scope of the agent’s actual authority, legal acts

committed by it on behalf of its principal, such as entering a contract or giving or receiving a

notice, become, in law, the acts of the principal. The doctrine extends to cases of apparent

authority where the agent has no actual authority, but where the principal permits third parties to

believe he or she has authority.

(a) The doctrine of authority

The central doctrine of agency law is the concept of authority. This is a complex and

multifaceted concept:

In respect of the acts which the principal expressly or impliedly consents that the agent shall so do on the principal's behalf, the agent is said to have authority to act; and this authority constitutes a power to affect the principal's legal relations with third parties.


Where the agent's authority results from a manifestation of consent that he or she should represent or act for the principal expressly or impliedly made by the principal to the agent himself, the authority is called actual authority, express or implied. But the agent may also have authority resulting from such a manifestation made by the principal to a third party; such authority is called apparent authority. (Bowstead and Reynolds 2001, Articles 1(2) and 1(3))

Since there is no need for both actual and apparent authority, the notion of apparent authority

dispenses with the need for a contract between the principal and his or her artificial agent. The

law could therefore readily find authority by reason of the conduct of the principal in

establishing the agent and providing it with the means of entering into contracts with third

parties.

However, the law might also find it convenient to establish a workable concept of actual

authority for an artificial agent, to go alongside the apparent authority that the agent has by virtue

of the principal’s conduct vis-à-vis third parties. The law could find an appropriate analogue for

the agreement between agent and principal that is the usual source of actual authority, either in

the form of the agent’s programmed instructions, or perhaps in the form of rules that specify the

scope of the agent’s activities and the breadth of its discretion. Those rules might be expressed in

a natural or formal language. Or, in the case of an agent sophisticated enough to reach agreement

with third parties, those rules might even be expressed in the form of an agreement between the

agent and the principal. However, this would require the artificial agent to be a legal person: a

possibility that is dealt with in section 3.2.4(c) below.

We turn now to consider in more detail the two solutions that take the artificial agent

metaphor seriously, and propose that artificial agents can, quite literally, be legal agents for their

principals.

(b) Fourth Solution: Artificial Agent as Slave

Although the term “slave” is obviously an emotive one, treating an artificial agent as a slave in


this context would entail treating an artificial agent as having authority to enter contracts on

behalf of its operator, but without legal personality. As (Kerr 1999, Section V A) notes, “the

problem of intermediaries in commercial transactions is by no means novel,” for Romans had to

deal with difficulties akin to the ones we are considering in dealing with the law of slaves. Like

the artificial agents we consider, Roman slaves were skilful, and often engaged in commercial

tasks on the direction of their masters. However, they were not recognized as legal persons by

the jus civile, and therefore lacked the power to sue in their own name. But Roman slaves were

enabled, by a variety of legal stipulations, to enter into contracts on behalf of their masters.

These could only be enforced through their masters but nevertheless slaves clearly had the

capacity to bind a third party on their master’s behalf.

From this, (Kerr 1999, Section V A) goes on to conclude that:

If predictions turn out to be accurate and electronic commerce falls mainly in the hands of intelligent agent technology, the electronic slave metaphor could turn out to be more instructive than typical metaphors used to describe intelligent agent technology such as the "personal digital assistant".

So, while artificial agents are not legal persons, they might become sophisticated enough "to

display high levels of autonomy and intelligence” and might require treatment as “intermediaries

rather than as mere instruments.” What motivates this suggestion by Kerr is not concern for the

artificial agents, for “the aim of doing so is not to confer rights or duties upon those devices;” it

is instead, on his argument, “the first step in the development of a more sophisticated and

appropriate legal mechanism that would allow persons interacting through an intermediary to be

absolved of liability under certain circumstances.” The Roman Law is, in this regard, viewed as

particularly instructive by (Kerr 1999, Section V A):

….the Roman law of slavery offers a valuable lesson to legislators who are considering how best to treat autonomous electronic devices. Instead of viewing the alternatives as a dichotomy – either we attribute legal personality to electronic devices or else we impose strict liability on those who initiate their use – the electronic slave metaphor reveals a third option. As they did in ancient Rome, the legislators of electronic commerce might decide that is appropriate to enact a special set of rules that define the parameters of liability for those who choose to conduct commerce through the use of intermediaries, recognizing that the


acts of an intermediary are not always identical to those contemplated by the person initiating the use of that intermediary.

But this power to bind both parties was not symmetrical. A master was only bound to the third

party if he or she had given the slave prior authority to enter into the contract on his behalf. If the

master had not granted such authority to his slave, then he could escape liability. This

situation was ripe for exploitation: masters could bind third parties without binding themselves

by sending slaves out to make contracts to which they had not given prior authority. Third parties

were arguably unjustly treated in such an arrangement, a point to which we return below.

(c) Fifth Solution: Artificial Agent as Legal Person

The fifth potential solution to the contracting problem is for the legal system to treat artificial

agents as legal agents who have legal personality (Allan and Widdison 1996; Kerr 1999, 2001;

Miglio et al. 2002; Wettig and Zehendner 2003). While a contracting agent need not be treated as

a legal person to be effectual, treating artificial agents as legal persons in the contracting context

would provide a complete parallel with the situation of human agents. If the agent is acting

beyond his or her authority in entering a contract, so that the principal is not bound by it, the

disappointed third party could sue the agent for a breach of the so-called “warranty of

authority”.64 Where the agent is a non-person no such legal action is open to the third party.

The type of legal personality involved (dependent or independent) would depend on the

abilities of the agent. As we have discussed above, current technology would appear to indicate

that only dependent legal personality would be accorded. This would mean that the agent would,

for matters outside its limited functionality, need always to act through the auspices of another

person (presumably its operator) acting as a type of “guardian”. This other person would, for

64 The warranty of authority is the implied promise by the agent to the third party dealing with the agent that the agent has the authority of the principal to enter the transaction. (Bradgate 2000, Section 5.2).


instance, make the necessary decisions with respect to any legal action brought by third parties

against the agent.

In turn, such a dependent legal person might be able to look to the other person for

financial assistance if necessary, much as a child can look to a parent for financial support. On

the other hand, the example of corporations shows that certain kinds of dependent legal persons

do not have an unlimited right to support from the resources of the independent legal persons –

in their case, the shareholders – that are associated with them. A significant public policy

question would therefore arise as to the extent to which artificial agents would be required to have

independent financial means before being allowed to be accorded dependent legal personality.

In some jurisdictions, companies are required to have a minimum of capital set aside to satisfy

creditors, while in others banks are required to have significant capital set aside to avoid ‘runs’

on their resources by depositors all seeking to be paid out at the same time. It remains to be seen

what kind of capital adequacy arrangements might be developed in respect of artificial agents as

a quid pro quo for according them legal personality.

3.3 Evaluating the Possible Solutions

3.3.1 Does Existing Law Dictate a Solution?

At the international level, an important text has recently been adopted which would be consistent

with the "mere tool" approach but does not mandate it. The United Nations

Convention on the Use of Electronic Communications in International Contracts (the

Convention) was recently adopted by the UN General Assembly after a number of years’

preparatory work by the Electronic Commerce Working Group of the United Nations

Commission on International Trade Law (UNCITRAL). The text relevantly reads as follows:

Article 12 Use of automated message systems for contract formation


A contract formed by the interaction of an automated message system and a natural person, or by the interaction of automated message systems, shall not be denied validity or enforceability on the sole ground that no natural person reviewed or intervened in each of the individual actions carried out by the automated message systems or the resulting contract.

We would argue the Convention, while consistent with an “agent as mere tool” approach, does

not make it mandatory. In literal terms, it requires Contracting States to cure the doctrinal

difficulty associated with the lack of human intervention at the level of individual contracts, but

it does not set out further how that doctrinal difficulty must be cured.65 Therefore its terms are

flexible enough to accommodate any of the five possible solutions set out above. While the

travaux préparatoires66 to the Convention suggest the “mere tool” doctrine is the one intended

by the relevant UNCITRAL Working Group, and Contracting States can be expected to bear that

in mind as they go about implementing the Convention, we believe the question is not beyond

doubt.

The Working Group in the course of its discussions seemed to assimilate the provision

above with a principle of Civil Law which attributes a contract to the principal in whose name

the contract is entered into:

102. The Working Group noted that [the above provision] developed further a principle formulated in general terms in… the UNCITRAL Model Law on Electronic Commerce. The [above provision] … was not intended to innovate on the current understanding of legal effects of automated transactions … that a contract resulting from the interaction of a computer with another computer or person was attributable to the person in whose name the contract was entered into.67

This doctrine is very close to, but not identical with, the “mere tool” solution cited above.

However, an earlier discussion of the Working Group, while adopting this basic principle, had

enunciated some qualifications that do not appear in the final Convention text:

65 This is consistent with the literature on the convention that we have reviewed, which does not explicate Article 12 in a way which would require the "mere tool" approach to be adopted. See, for example, (Faria 2006), page 691 and (Chong and Chao 2006), paragraphs 26 and 107.
66 (i.e., preparatory documents setting out the considerations that steered the drafting process)
67 Report of the Working Group on Electronic Commerce on its thirty-ninth session (New York, 11-15 March 2002), Doc. no. A/CN.9/509


107. [W]hile the expression "electronic agent" had been used for purposes of convenience, the analogy between an automated system and a sales agent was not entirely appropriate and that general principles of agency law (for example, principles involving limitation of liability as a result of the faulty behavior of the agent) could not be used in connection with the operation of such systems. The Working Group reiterated … that, as a general principle, the person (whether a natural or legal one) on whose behalf a computer was programmed should ultimately be responsible for any message generated by the machine. [emphasis added]

108. Nevertheless … there might be circumstances that justified a mitigation of that principle, such as when an automated system generated erroneous messages in a manner that could not have reasonably been anticipated by the person on whose behalf the system was operated. It was suggested that elements to be taken into account when considering possible limitations for the responsibility of the party on whose behalf the system was operated included the extent to which the party had control over the software or other technical aspects used in programming such automated system. It was also suggested that the Working Group should consider, in that context, whether and to what extent an automated system provided an opportunity for the parties contracting through such a system to rectify errors made during the contracting process.68, 69

While this suggestion of a mitigation principle was not taken up in the final text, the way the

Convention text is worded is consistent with a limitation of liability along the lines suggested,

and any of the five possible solutions to the contracting problem would be permissible,

despite what is contained in the Working Group’s comments.

There have been other recent legislative attempts to deal with electronic contracting in

the United States of America and in Canada. The U.S. approach tends to embrace the “mere

tool” doctrine, while the Canadian approach is more open. The U.S. Uniform Electronic

Transactions Act (1999)70, which broadly applies to electronic records and electronic signatures

relating to transactions,71 adopts a form of the “mere tool” approach. In Section 9 (Attribution

and Effect of Electronic Record and Electronic Signature), it provides that:

(a) An electronic record or electronic signature is attributable to a person if it was the act of the person.

68 Report of the Working Group on Electronic Commerce on its thirty-eighth session (New York, 12-23 March 2001), Doc. no. A/CN.9/484, paragraph 106
69 There is an article in the Convention that would relieve persons interacting with electronic agents of liability for errors in certain cases where no opportunity to correct errors is afforded by the operator of the agent: see Article 14(1) of the Convention.
70 U.S. National Conference of Commissioners on Uniform State Laws. Available at http://www.law.upenn.edu/bll/ulc/fnact99/1990s/ueta99.htm. Last consulted 3 November 2006.
71 The Act does not apply to transactions to the extent they are governed by the Uniform Computer Information Transactions Act: See Section 3(b)(3).


The act of the person may be shown in any manner, including a showing of the efficacy of any security procedure applied to determine the person to which the electronic record or electronic signature was attributable.

While seemingly embracing the “mere tool” doctrine, the provision nevertheless raises some

issues. The section emphasizes the question whether the record (i.e., message) is the “act” of the

contracting party. This will raise difficult questions where a system acts in a faulty manner such

that one would be tempted to say a new act (of the agent) intervened, by analogy with the tort

law doctrine of novus actus interveniens (new intervening act) which “breaks the chain of

causation” between the defendant’s conduct and the plaintiff’s loss. In addition, a principal

wishing to escape liability for the actions of a faulty agent might be able to argue “non est

factum” (it is not my act).

The U.S. Uniform Computer Information Transactions Act (2002)72, which applies to

computer information transactions as defined73, seems to go further in embracing the “mere tool”

approach. It relevantly provides:

SECTION 107. LEGAL RECOGNITION OF ELECTRONIC RECORD AND AUTHENTICATION; USE OF ELECTRONIC AGENTS

(d) [Party bound by electronic agent.] A person that uses an electronic agent that it has selected for making an authentication, performance, or agreement, including manifestation of assent, is bound by the operations of the electronic agent, even if no individual was aware of or reviewed the agent's operations or the results of the operations.

This provision is close to the “mere tool” doctrine, since a mere “selection” for a particular use is

sufficient to bind the principal to all the actions of the agent. There is some reference to the

agency doctrine in the commentary to the provision, but the agency doctrine is not embraced:

The concept here embodies principles like those in agency law, but it does not depend on agency law. The

72 U.S. National Conference of Commissioners on Uniform State Laws. At http://www.law.upenn.edu/bll/ulc/ucita/2002final.htm. Last consulted 3 November 2006.
73 The term "computer information transaction" helps to define the scope of this Act: Section 103. It requires an agreement involving computer information. The term includes transfers (e.g., licenses, assignments, or sales of copies) of computer programs or multimedia products, software and multimedia development contracts, access contracts, and contracts to obtain information for use in a program, access contract, or multimedia product.


electronic agent must be operating within its intended purpose. For human agents, this is often described as acting within the scope of authority. Here, the focus is on whether the agent was used for the relevant purpose.

The Canadian approach is embodied in the following provisions of the Uniform Electronic

Communications Act:74

Definition of “electronic agent”

19. In this Part, "electronic agent" means a computer program or any electronic means used to initiate an action or to respond to electronic documents or actions in whole or in part without review by a natural person at the time of the response or action.

Involvement of electronic agents

21. A contract may be formed by the interaction of an electronic agent and a natural person or by the interaction of electronic agents.

Like the UN Convention, this provision is phrased in an open way, and would, in our opinion, be

consistent with the five possible solutions outlined above. The Canadian approach does differ in

explicitly recognizing contracts made between electronic agents, i.e., where neither human

principal need be aware of the particular terms. However, the five solutions outlined above

would all be able to cope with this doctrinal challenge.

3.3.2 Agency Law and Artificial Agents

(a) Motivators

The most cogent reason for introducing a notion of authority in the context of artificial agents is

to allow the law to distinguish between those contracts entered into by an artificial agent that

should bind a principal, and those that should not. The doctrine of authority as discussed above

can be of assistance here. If the contract is within the agent’s actual or apparent authority to

enter, then the principal is bound, even if he or she had no knowledge of the particular contract

referred to, and even if, for example, he or she is surprised that the agent exercised its discretion

74 Uniform Law Conference of Canada. At http://www.ulcc.ca/en/us/index.cfm?sec=1&sub=1u1. Last consulted 3 November 2006.


in that particular way. But if the contract is outside the agent’s actual or apparent authority, then

the principal is not liable on the contract.

As well as permitting a clear path to the attribution of contractual acts to a principal

without visiting potentially unlimited liability on the principal, a doctrine of agency would

permit the legal system to distinguish between the operator of the agent, i.e., the person making

the technical arrangements for the agent’s operations, and the principal on whose behalf the

agent is operating in relation to a particular transaction. As discussed in relation to auction

websites, an agent (such as a bidding agent) can be made available by an operator for the

website’s users. When instructed by the user, the agent acts as the user’s – not the operator’s –

agent.

A true agency analysis also allows the legal system to handle situations of “multi-

agency", i.e., the operation of a single agent on behalf of multiple principals. Such a "multi-agent"

might be deployed as the front-end for dealing with potential shoppers on behalf of a number of

principals. Even if the agent concerned were operated by one of the principals, using an agency

analysis, determining which principal the multi-agent contracted for in the case of any particular

transaction would be a matter of inspecting the agent’s actual and apparent authority.

Another potentially useful doctrine of agency is the doctrine of attributed knowledge:

A notification given to an agent is effective as such if the agent receives it within the scope of his actual or apparent authority, whether or not it is subsequently transmitted to the principal, unless the person seeking to charge the principal with notice knew that the agent intended to conceal his knowledge from the principal. The law may impute to a principal knowledge relating to the subject-matter of the agency which the agent acquires while acting within the scope of his authority. Where an agent is authorized to enter into a transaction in which his own knowledge is material, knowledge which he or she acquired outside his capacity as agent may also be imputed to the principal.75

The topic of attribution of knowledge is a complex one, requiring a reworking of some

fundamental notions when dealing with artificial agents. We deal with it at greater length in the

75 (Bowstead and Reynolds 2001), articles 96 at p. 439 and 97(1) and (2) at p. 441


following Chapter.

Lastly, the doctrine of ratification of agents’ contracts may be useful in the context of

artificial agents. This permits a principal to approve a contract purporting to be made on its behalf,

even though there was no actual or apparent authority at the time the contract was entered:

The only person who has the power to ratify an act is the person in whose name or on whose behalf the act purported to be done, and it is necessary that he or she should have been in existence at the time when the act was done, and competent at that time and at the time of ratification to be the principal of the person doing the act; but it is necessary that at the time the act was done he or she was known, either personally or by name to the third party.76

As Kerr points out,

The rule that precludes undisclosed principals from ratifying unauthorized transactions could have a useful application in electronic commerce. It could indirectly encourage those who initiate a device to make conspicuous the fact that the third party is transacting with a device and not a person (Kerr 1999, 61).

(b) Objections

A number of objections have been raised to the possibility of treating artificial agents as true

agents in the legal sense based on the supposed incapacity of artificial agents to act as agents

(Kerr 1999, 2001). We believe none of these objections carries much force.

The first objection holds that artificial agents necessarily lack legal power to give consent

because they are not persons. But the “agent as slave” solution avoids this difficulty, according

as much legal power to agents as is necessary to give consent.

The second objection states that artificial agents lack the intellectual capacity or ability to

exchange promises. But many artificial agents, even the interfaces operated by shopping

websites, display all the intelligence or intentionality required of them, no less so and perhaps

more reliably than the human telephone clerks that they replace. We believe, contrary to Kerr,77

that the objective theory of contract should be deployed in precisely this context. On this

76 (Bowstead and Reynolds 2001)
77 See section 3.2.3 of this chapter.


approach, an ability to perform the requisite actions, such as emailing the buyer with the correct

details in a notification, responding to user inputs with appropriate outputs, and updating the

seller’s databases correctly, would qualify a seller’s artificial agent as having the right kind of

intellectual capacity. An artificial agent's intellectual ability to engage in the actions covered by

the scope of its agency would best be revealed by its successful performance of those activities.

We do not see how else courts could adjudicate on these matters other than by adopting such a

stance towards the artificial agent in question.

The third objection relies on the fact that in some legal systems the establishment of an

agency relationship requires a contract between the agent and the principal, and as artificial

agents are not persons, they cannot enter contracts in their own name. However, in Anglo-

American law a contract between principal and agent is not necessary; all that is necessary is that

the principal is willing for the agent to bind him as regards third parties. In legal systems where a

contract is necessary, it will be necessary to either accord personhood to artificial agents in order

to deal with contracts made by artificial agents, or to move to the Anglo-American model.

The fourth objection relates to the mental capacity of artificial agents. While in Anglo-

American law an agent need not have the contractual capacity of an adult legal person – for

example, children who cannot contract for themselves can contract on behalf of adults –

nevertheless, an agent must be of sound mind in order for the agency to begin or to continue.

This means the agent must understand the nature of the act being performed (Bowstead and

Reynolds 2001, para 2–012, p. 37). This latter test implies a level of intentionality or

consciousness only hypothetical artificial agents displaying advanced artificial intelligence

would reach. This doctrine would therefore need to be adapted to the case of artificial agents

without human-like intelligence, before a true agency treatment of such artificial agents could be


undertaken.

We suggest the doctrine could readily be adapted by asking, not whether the agent

understood the nature of the act being performed, but whether the agent was of sound mind in the

sense the agent was apparently functioning normally. In the contracting example, this would

involve the agent responding, in systematic and appropriate ways, to offers and applications to

contract in such a competent way as to render it reasonable for the third party to rely on the

agent’s mental competence. We would treat the agent as possessing the right kinds of beliefs -

those contingent on its understanding the terms of the contract - based on our ability to

successfully and consistently predict its behavior as a system with those beliefs. Note the similarity

in this approach to that suggested as a resolution of the objection pertaining to alleged lack of

intellectual capacity.

3.3.3 Liability Consequences

One motivator for a choice among the solutions offered above to the contracting problem could

be efficient risk allocation: which solution provides the most efficient allocation of the risk of

unpredictable and erratic activity of artificial agents (for example, a failure to follow instructions

provided by the principal)? According to neo-classical economic theory of law (Posner 1973, 74-

79), this risk should fall on the person most able to control the risk, since other allocations would

lead to more-than-optimal spending on risk avoidance.

The analysis of the liability consequences of the various solutions differs according to

whether the transaction in question is one where the principal entering the transaction with the

third party by means of the agent is the agent’s operator (such as in a normal shopping website),

or where the principal is a user of the agent but the agent’s operator is another person (such as

where a website makes an agent available to users).


(a) First Scenario: Operator as Principal

An example of the first scenario would be where the principal is a shopping website, the agent is

the website interface, and the third party is a user shopping on the website. In the first scenario,

the five solutions we have considered above have somewhat different results:

• Under the “agent as mere tool” solution, the principal would be liable for erratic

behavior, but the principal could in many cases recover its loss from the designer of the agent, with whom it might have a contractual relationship, or under product liability laws. The solution will be fair in the usual case where it is reasonable for the third party to rely on the agent, since it places the risk of erratic behavior on the person best placed to control that risk, i.e. the agent's operator/principal. However, the approach

will be intuitively unfair in cases where it is not reasonable for the third party to rely on

the agent’s behavior. In such cases, the party that is best placed to limit the harm is

arguably the third party, who could draw the behavior to the attention of the operator at

minimal cost, assuming the operator has displayed contact details on the website. This

solution should either be rejected or be modified to incorporate a “rule of reasonableness”

as suggested for the second and third solutions in order to efficiently allocate the risk of

erroneous behavior.

• Under both the “unilateral offer” solution and the “objective intent” solution, the

principal would again be liable for the behavior, unless a limitation is grafted onto those

solutions to the effect that the principal is not liable for erroneous behavior of the agent

where it is not reasonable for the user to believe that the principal would have assented to

that behavior, had it been aware of it. With this “rule of reasonableness” added, these solutions allocate the risk efficiently.

• Under the “agent as slave” solution, the principal could disclaim all behavior outside the

scope of the agent’s actual or apparent authority, and the third party in such cases would

have no right of action against the agent, which would not be a person. This might appear

unfair; however, where the agent has no actual or apparent authority to enter the contract

at hand, it will usually not be reasonable for the third party to rely on the agent as acting

on behalf of the principal. There may be some instances where, although the actual

transaction is within the agent’s actual or apparent authority (and the principal is

therefore liable), it still would not be reasonable for the third party to rely on the agent as

acting correctly. In these instances, the principal may be able to show that the agent was

not of sound mind, and that therefore the agency terminated prior to the transaction being

completed. This solution therefore appears to place risks correctly in all or almost all

instances.

• Under the “agent as legal person” solution, the principal would not be liable for the

agent’s behavior outside the scope of its actual or apparent authority, but the third party

would be able to sue the agent for breach of its authority (for the loss caused by not being

able to enforce a valuable contract against the principal, for instance). The agent might

conceivably in turn be able to sue its own designer under product liability laws. This

solution places the risk of erroneous behavior beyond the agent’s actual or apparent

authority on the artificial agent itself. Whether the agent itself or its principal is in the

best position to control that risk depends on the design features of the artificial agent

itself. Where the artificial agent is blindly following simple rules set by the principal (or the designer), the principal will be best placed to control the risk, and placing it on the agent would be inefficient. Where the agent is capable of exercising “discretion”, perhaps by using probabilistic

or learning algorithms, the risk of erratic behavior may no longer be under the principal’s

control in any real sense. At that point, the “agent as person” solution would seem to be

more efficient.

(b) Second Scenario: User as Principal

An example of the second scenario would be where the agent is operated by an auction website, the agent's principal is a user of the website, and that user employs the agent to enter a contract with a third party, say by instructing the agent to bid on his or her behalf up to a specified maximum in an auction being conducted by the third party (a minimal sketch of such a bidding agent appears after the list below). In the second scenario, the five solutions we have considered also have different results:

• Under the “agent as mere tool” solution, the user/principal, rather than the operator,

would be liable for erroneous behavior, for example, overbidding by the agent. The user

might be able to shift losses onto the operator, under causes of action such as breach of

warranty of merchantability or product liability; but the risk would initially be allocated to the wrong party.78

• Under both the “unilateral offer” solution and the “objective intent” solution, the user

would again be liable for that behavior, with potential to recover from the operator, which

again would place the risk on the wrong party. This is subject to the suggested

reasonableness condition, which would place the risk on the third party where it is not

reasonable for that party to rely on the agent.

• Under the “agent as slave” solution, the user could disclaim all behavior outside the

scope of the agent’s actual or apparent authority, and the third party would have no right

78 This risk would be mitigated if some of the risk were placed on the third party using a ‘rule of reasonableness’ as postulated above.

of action against the agent, a non-person. However, as noted above, in most such cases it

would not be reasonable for the third party to rely on the agent. For erroneous behavior

within the apparent authority of the agent, the risk will be placed on the user/principal

rather than the operator. The user/principal might in some cases be able to recover against the operator under product liability rules, and the operator might in turn be able to recover from the technology supplier, but the primary risk allocation would be wrong.

• Under the “agent as legal person” solution, the user would not be liable for the agent’s

behavior outside the scope of its authority, but the third party would be able to sue the

agent for breach of its warranty of authority. The agent might conceivably in turn be able

to sue its own operator and/or designer under product liability laws or under the

principles discussed below. As noted above, where the agent is best able to control its

own behavior, this would be a correct risk allocation.
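Here is the bidding-agent sketch promised above. It is purely illustrative Python: the ProxyBidder class, its fields, and the bidding increment are hypothetical choices of ours, not a description of any real auction system. The agent's mandate is the user's maximum; the "erroneous behavior" whose risk the five solutions allocate differently would be a version of next_bid that ignores that mandate.

from typing import Optional

class ProxyBidder:
    """Bids on the user's behalf up to a specified maximum."""
    def __init__(self, user_id: str, max_bid: float, increment: float = 1.0):
        self.user_id = user_id
        self.max_bid = max_bid        # the user/principal's instruction
        self.increment = increment

    def next_bid(self, current_high: float) -> Optional[float]:
        # Return the next bid to place, or None to stop bidding.
        candidate = current_high + self.increment
        if candidate > self.max_bid:
            return None               # correct behavior: never exceed the mandate
        # A faulty implementation that returned `candidate` unconditionally
        # would overbid beyond the user's instructions -- the erroneous
        # behavior discussed in the bullets above.
        return candidate

bidder = ProxyBidder("user-42", max_bid=100.0)
print(bidder.next_bid(95.0))    # 96.0
print(bidder.next_bid(100.0))   # None: a further bid would exceed the maximum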

One important benefit of the “agent as slave” or “agent as legal person” solutions in this scenario

would be to broaden the possible range of ways that the user of the agent could seek to recover

any loss from the operator of the agent. While the operator (setting aside the case of a highly

autonomous agent) is best able to control the risk of erroneous behavior, and thus should be

secondarily liable for any loss to the user, the operator could, through the use of website

conditions and the like, seek to limit its liability (under consumer protection or product liability

laws) to the user significantly.

In this context, the doctrine of respondeat superior might come to the user’s aid. Under

this doctrine, an employer is responsible for employee actions performed within the course of the

employment. This doctrine might be of assistance in the second scenario by making use of the

notion of a dual agency: this would reflect the reality that even though the agent is acting for the

user, the agent is at the same time employed by the operator. Taking the agent metaphor

seriously in contracting cases would naturally lead to it being taken seriously for liability

purposes too, a topic we return to in Chapter 5. It is also apparent that the first three solutions to the contracting problem are unable to handle the complexity of this fairly common

real-world situation.

(c) Conclusion

In the first scenario, the solutions give roughly equal results, at least where the “reasonableness”

requirement is placed on the first three. However, at very high levels of autonomy and discretion,

the “agent as person” solution may be more efficient than the others. In the second scenario,

none of the solutions solve the main problem, which is that it is the operator of the agent, rather

than the user or the third party, who should bear the risk of its erratic behavior. Again, however,

at very high levels of autonomy and discretion, the “agent as person” solution may be the most efficient in this scenario as well. Lastly, the fourth and fifth solutions hold out the promise of a

broader basis for the user to recover his or her loss from the operator of the agent under the

doctrine of respondeat superior, and for that reason are to be preferred.

3.4 Final Considerations

Legal scholars have identified a raft of considerations – pragmatic and philosophical – that the

law might utilize in its answer to the question of whether to accord legal personality to a new

class of entity. Naffine (2003) constructs a useful taxonomy of such theorizing:

• one class of theorists would suggest that legal personhood requires no analysis based on some metaphysically satisfactory conception of the person;

• a second class would claim that humanity (or membership in our species) is the basis of

moral and legal claims on others and the basis of legal personality;

• the most stringent conception would insist that legal personhood also meet the

requirements for a metaphysical conception of the person.

Our proposed distinction between dependent and independent legal personality can partly

explain the divergences between these approaches. The first class of theorists places importance

on those examples of dependent legal personality where the law does not require that the putative

person be either human or even conscious. These theorists, who include the legal positivists, “reflect the earliest meaning of person in that it is a mask created by law to allow an actor to fill a category” (Calverley 2007). The second and third classes of theorist, informed by a

natural law sensibility, place importance on a notion of independent legal personality, and seek

to assimilate it to the philosophical notion of a person.

When it is clearly understood that the classes of theorist are actually discussing different

concepts, it becomes easier to reconcile them. The first and third classes of theorist are clearly

reconcilable, while the second class is more problematic. Our conclusion, in sympathy with the

first and third classes of theorizing, is that artificial agents could readily be accorded dependent

legal personality. The contracting problem, while not enough on its own to motivate a personhood analysis, comes closest so far to a real-world problem that could require according personhood to artificial agents. The current conservative legal stance, which rejects legal personhood for artificial agents for the time being, reflects the lack of a felt need. These grounds for rejection are positivistic in

nature; the law does not, as yet, require the device of a person. But we have embarked on this inquiry because we perceive the question as a real one that must be answered at

some point. And thus, these very grounds for rejection are the most amenable to modification in

light of future evidence to the contrary.

As for independent legal personhood, again, we believe this is in principle achievable, but

it is currently a distant prospect. We have examined all the conditions we believe need to be met,

and do not consider any of them to present an insuperable barrier. We also consider that an artificial

agent could be counted as a “Lockean person”, for similar reasons. Most fundamentally, the

granting of legal personality is a decision to grant an entity a bundle of rights and concomitant

obligations. It is the nature of the rights and duties granted that prompts us to call this entity a

person, not the physical makeup, internal constitution, or other ineffable attributes of the entity.

That some of these rights and duties could follow from the fact that its physical constitution

enabled particular powers, capacities and abilities is not directly relevant to the discussion. What

matters are the entity's capacities and abilities, and which rights and duties we want to assign.

We suspect the “agent as mere tool” doctrine will survive for some time on grounds of

convenience and justice. As agents become more sophisticated, a slave analysis, and then an

analysis involving personhood, might be embraced. If a dependent legal personality solution were to be followed, the question of identification of agents could be solved by using a Turing registry. Such dependent legal agents would, of course, need to be registered alongside

nominated humans who would undertake the more sophisticated operations required to be

performed by legal persons, such as conducting litigation or exercising discretionary choices

relating to the performance of contracts.
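By way of illustration only, a registry entry under such a scheme might pair an agent with its operator and a nominated human. The Python sketch below is a hypothetical design of ours (the TuringRegistry and RegistryEntry names and their fields are invented for the example), not a proposal drawn from any existing registration system.

from dataclasses import dataclass, field

@dataclass
class RegistryEntry:
    agent_id: str                 # unique identifier for the artificial agent
    operator: str                 # person or company operating the agent
    nominated_human: str          # human who litigates or exercises discretion for it
    authorized_acts: list = field(default_factory=list)   # e.g. ["contracting"]

class TuringRegistry:
    """Minimal lookup table mapping agent identifiers to registry entries."""
    def __init__(self):
        self._entries = {}

    def register(self, entry: RegistryEntry) -> None:
        self._entries[entry.agent_id] = entry

    def responsible_human(self, agent_id: str) -> str:
        # Whom a third party or court should deal with for acts the agent
        # cannot perform in its own right.
        return self._entries[agent_id].nominated_human

registry = TuringRegistry()
registry.register(RegistryEntry("agent-001", "ExampleCorp", "Jane Doe", ["contracting"]))
print(registry.responsible_human("agent-001"))   # Jane Doe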

Our conclusion from the discussion on agency law is that doctrinal convenience and

neatness, as well as a concern for just outcomes (i.e., a mix of the pragmatic and the conceptual),

will play a role in future determinations of the legal status of artificial agents. Our discussion is

intended to show the extent to which “system-level” concerns will continue to dominate for the

foreseeable future. Attributes such as the practical ability to perform cognitive tasks, the ability

to control money, and “legal-system-wide” considerations such as cost-benefit analysis, will further influence the decision whether to accord legal personality to artificial agents. Factors such as whether it is necessary to introduce personhood in order to explain all relevant phenomena, efficient risk allocation, and whether alternative explanations gel better with existing

theory, will continue to carry considerable legal weight as long as the qualities or capacities of

artificial agents do not go significantly beyond their current standing. When they do, a host of

other considerations will come into play, and these will inform the discussions in

subsequent chapters.

4. Conclusion

The granting of legal personhood by legislative bodies or courts of law takes place in response to

legal, political, or moral pressure. The legal system, in so doing, ensures its internal functional

coherence. Legal entities are created in order to facilitate the working of the law in consonance

with social realities. Thus, the arguments for the establishment of new classes of legal entity,

while informed by the metaphysically or morally inflected notion of person present in

philosophical discourse, often deviate from it. Crucial determinants in courtroom arguments are historical or legal precedent and pragmatic consideration of our society's best interests.

Decisions to award legal personhood aptly illustrate Oliver Wendell Holmes' famous dictum, from his dissent in Lochner v. New York (1905), that “general propositions do

not decide concrete cases”. These decisions reflect a “vortex of discursive imperatives” (Menand

2002, 36): precedent, principles, policy, all impinged on by utilitarian, moral and political

considerations, and inflected by the subtle and long-held convictions and beliefs of the presiding

judges or concerned legislators. Any extension of legal personhood to a new class of entity will

reflect all these imperatives. For the law then, “no single principle dictates when the legal system

must recognize an entity as a legal person, nor when it must deny legal personality” (Allan and

Widdison 1996, 35).

Nosworthy suggests the question of whether to extend legal personality to a particular

category of thing is essentially a conventional one:

The real question relates to whether an entity will be considered by the community and its lawmakers to be of such social importance that it deserves or needs legal protection in the form of the conferral of legal personality. Nekam suggests that this decision is based upon the community's “emotional valuation” of the entity's need for legal protection. This is essentially what Lawson (Lawson, FH, “The Creative Use of Legal Concepts”, extracted in Smith, JC and Weisstub, DN, The Western Idea of Law, Butterworths, London, 1983) refers to as the policy factor inherent in the question of whether new legal persons should be created (Nosworthy 1998).

This accurate analysis can be sharpened, for the evaluation of the need for legal protection is not

merely “emotional” but pragmatic, and furthermore, it is not only the entity’s need for legal

protection that is at issue but the community’s as well. Rights might be granted to an entity as a

way of protecting the interests of others. The entity in question might interact with, and impinge

on, social, political and legal institutions in such a way that the only coherent understanding of

its social role emerges by treating it as a person. The question of legal personhood suggests that the

candidate entity’s presence in our networks of legal and social meanings has attained a level of

significance that demands reclassification. An entity is a viable candidate for legal personality in

this sense if it fits within our networks of social, political and economic relations in such a way that it

can coherently be a subject of legal rulings.

Thus, the “real question” is whether the scope and extent of artificial agent interactions

have reached such a stage. Answers to this question will reveal what we take to be valuable and

useful in our future society as well, for essentially we will be engaged in determining what sorts

of interactions artificial agents should be engaged in for us to be convinced that the question of

legal personhood has become a live issue. Autopoietic legal theory, which emphasizes the

circularity of legal concepts, suggests that the interactions artificial agents engage in will play a

crucial role in the determination of their legal personality:

[L]egal persons are entities that are constructed within the legal system as “semantic artifacts” to which legally meaningful communications are attributed … [E]ntities are described as legal persons when the legal system attributes legally meaningful communications to them … [W]ithin the legal system, legal persons are those entities that produce legal acts … a natural person is capable of many types of legal acts, such as making a contract or committing a tort … a wild animal is not capable of any such legal acts. Hence, the legal system treats natural persons, but not wild animals, as legal persons. This explains why the legal system treats a natural person, but not a wild animal, as a party to a contract or a tort (Teubner 1988)

If it is a sufficient condition for personhood that an entity engage in legal acts, then an artificial

agent participating in legal transactions becomes a candidate for legal personhood by virtue of its

participation in those transactions.79 As we have attempted to show in Sections 2.2 and 2.3

above, we think the issue of independent legal personhood is of a very different order. We do

think that the capacities and abilities of artificial agents will be of crucial importance in any

decision to accord them such a status.

Whatever the resolution of the arguments considered above, the issue of legal personhood

for artificial agents may not come ready-formed into the courts. Rather, a system for granting

legal personhood may need to be set out by the legislature, perhaps through a registration system

as in the case of companies. While the legal convenience of granting legal personhood to

artificial agents is considerable, its importance should not be overstated. The convenience of the

modern business corporation’s legal entity status is significantly greater than that of the

unincorporated partnership, but many large businesses continue to survive as partnerships, for

example, for tax reasons.

We also consider it likely that economic considerations will underwrite any decision

79 This definition excludes comatose or otherwise incapacitated human beings, who while legal persons do not possess the capacity to engage in any legal acts (unless one construes receiving medical care as participating in a legal act). Rights are granted to these legal persons, despite their passivity, as protection. We suggest this definition is incomplete in leaving out the subjects of the legal acts it speaks to. A scientist performing a medical experiment on an animal is engaged in a legal act; the animal is the subject of this legal act and participates in it. It is the fact of its being subject and participant in this legal act that underwrites the case for animal rights.

whether to accord artificial agents legal personality. In the modern lawmaking context, especially away from the courtroom, cost-benefit analyses are of increasing importance; they also weigh with common law judges influenced by policy considerations. Few laws are proposed today in

advanced democracies without some semblance of a utilitarian argument that their benefits outweigh their costs. As the range and nature of electronic commerce transactions handled by artificial agents grow and diversify, these considerations will come into play. If the cost of an

artificial agent trustee were significantly below that of a human one, and if the legal system

required a legal person to act as trustee, as opposed to a slave, one choice would be for the legal

system to accord legal personality to the trustee. This would be both legally and economically

convenient. This should not surprise us; such considerations have played important roles in

landmark judgments, and the case of artificial agents does not demand radically different

theorizing.

A final note on these entities that will challenge us by their presence in our midst.

Philosophical discussions of personal identity often have recourse to the Wittgensteinian idea

that ascriptions of personal identity to human beings are of most importance in a social structure

where that concept plays the important legal role of determining responsibility and agency. We

ascribe a coherent unity to the human being, an object that changes rapidly both physically and psychologically, because very little social interaction would otherwise make sense. Similarly, it is unlikely that, in a future society where artificial agents wield significant amounts of executive power, anything would be gained by continuing to deny them legal personhood. At best it

would be a chauvinistic preservation of a special status for biological creatures like us. If we fall

back again and again on making claims about human uniqueness and the singularity of the

human mind and moral sense in a naturalistic world order, then we might justly be accused of

being an “autistic” species, unable to comprehend the minds of other species.

References

Allan, Tom, and Robin Widdison. 1996. Can computers make contracts? Harvard Journal of

Law and Technology 9:25-52.

Ayer, A.J. 1963. The Concept of a Person. New York: St. Martin's Press.

Baker, Lynne Rudder. 1989. Instrumental Intentionality. Philosophy of Science 56 (2):303-316.

Block, Ned. 1981. Psychologism and Behaviorism. Philosophical Review 90:5-43.

Boden, Margaret A., ed. 1990. The Philosophy of Artificial Intelligence. New York City: Oxford

University Press.

Bowstead, William, and FMB Reynolds. 2001. Bowstead and Reynolds on Agency. 17th ed.

London: Sweet and Maxwell.

Bratman, M.E. 1987. Intention, Plans, and Practical Reason. Palo Alto: CSLI Publications.

Bringsjord, Selmer. 1992. What robots can and can't be. Amsterdam: Kluwer Academic

Publishers.

Brooks, Rodney. 1997. Intelligence without representations. In Mind Design II, edited by J.

Haugeland. Cambridge: MIT Press.

Calverley, David. 2007. Imagining a Non-Biological Machine as a Legal Person. Artificial

Intelligence and Society: Special Issue on Ethics and Artificial Agents.

Chalmers, David. 1997. The Conscious Mind: In Search of a Fundamental Theory. New York:

Oxford University Press.

Chong, KW, and JS Chao. 2006. United Nations Convention on the Use of Electronic

Communications in International Contracts – A New Global Standard. Singapore

Academy of Law Journal 18:116.

Copeland, Jack. 1993. Artificial Intelligence: A Philosophical Introduction. Oxford: Blackwell

Publishing.

Dehaene, S., and L. Naccache, eds. 2001. The Cognitive Neuroscience of Consciousness.

Cambridge: MIT Press.

Dennett, Daniel. 1978. Brainstorms: Philosophical Essays on Mind and Psychology.

Cambridge: Bradford/MIT Press.

———. 1987. The Intentional Stance. Cambridge: Bradford/MIT Press.

———. 1992. Consciousness Explained. Boston: Back Bay Books.

———. 1993. Quining Qualia. In Readings in Philosophy and Cognitive Science, edited by A.

Goldman. Cambridge: MIT Press.

———. 1995. The Unimagined Preposterousness of Zombies. Journal of Consciousness Studies

2:322-336.

———. 2000. The Case for Rorts. In Rorty and his critics, edited by R. B. Brandom. London:

Blackwell Publishers.

———. 2003. Who's on First? Heterophenomenology Explained. Journal of Consciousness

Studies 10 (9-10):19-30.

Dreyfus, Hubert. 1992. What Computers Can't Do: A Critique of Artificial Reason. Cambridge:

MIT Press.

Edwards, Rem B., ed. 1997. New Essays on Abortion and Bioethics (Advances in Bioethics).

Amsterdam: Elsevier.

Elster, Jon, ed. 1986. Rational Choice (Readings in Social & Political Theory) New York: NYU

Press.

Fagin, R., J.Y. Halpern, Y. Moses, and M.Y. Vardi. 1995. Reasoning about Knowledge.

Cambridge: MIT Press.

Faria, José Angelo Estrella. 2006. The United Nations Convention on the Use of Electronic

Communications in International Contracts: An Introductory Note. International

Comparative Law Quarterly 55:689-694.

Ferber, Jacques. 1999. Multi-Agent Systems: An Introduction to Distributed Artificial

Intelligence. Harlow: Addison Wesley Longman.

Frankfurt, H. 1971. Free Will and the Concept of a Person. Journal of Philosophy 68 (1):5-20.

Gallese, Vittorio, and Alvin Goldman. 1998. Mirror neurons and the simulation theory of mind-reading. Trends in Cognitive Sciences 2 (12):493-501.

Gallup, Gordon. 1970. Chimpanzees: self-recognition. Science 167:86-87.

Gärdenfors, Peter. 1990. Knowledge in Flux: Modeling the Dynamics of Epistemic States.

Cambridge: MIT Press.

Gomez, Juan Carlos. 1998. Are apes persons? The case for primate intersubjectivity. Etica and

Animali 9:51-63.

Gray, John Chipman. 2006. The Nature and Sources of Law. Boston: Adamant Media

Corporation. Original edition, 1909.

Grice, HP. 1957. Meaning. Philosophical Review 66:377-388.

Haugeland, John. 1989. Artificial Intelligence: The Very Idea. Cambridge: MIT Press.

———, ed. 1997. Mind Design II: Philosophy, Psychology, and Artificial Intelligence.

Cambridge: MIT Press.

Herman, Barbara. 1993. The Practice of Moral Judgment. Cambridge: Harvard University

Press.

Heschl, A., and J. Burkart. 2006. A new mark test for mirror self-recognition in non-human

primates. Primates 47 (3):187-198.

Hilpinen, Risto. 2001. Deontic Logic. In The Blackwell Guide to Philosophical Logic, edited by

L. Goble. London: Blackwell.

Humber, James M., and Robert F. Almeder, eds. 2003. Stem Cell Research (Biomedical Ethics

Reviews). Totowa: Humana Press.

Hume, David. 1993. An Enquiry Concerning Human Understanding. Indianapolis: Hackett

Publishing. Original edition, 1748.

Jackendoff, R. 1987. Consciousness and the Computational Mind. Cambridge: MIT Press.

Jacquette, Dale. 1988. Review of "The Intentional Stance". Mind 97 (388):619-624.

Kant, Immanuel. 1998. Groundwork of the Metaphysics of Morals. Edited by M. J. Gregor,

Cambridge Texts in the History of Philosophy. Cambridge, UK: Cambridge University

Press. Original edition, 1785.

Karnow, CEA. 1996. Liability for distributed artificial intelligences. Berkeley Technology Law

Journal 11:147-204.

Kerr, Ian R. 1999. Providing for autonomous electronic devices in the uniform electronic

commerce act. Paper read at Uniform Law Conference of Canada.

———. 2001. Ensuring the success of contract formation in agent-mediated electronic

commerce. Electronic Commerce Research 1 (1/2):183-202.

Locke, John. 1996. An Essay Concerning Human Understanding. Abridged ed. Indianapolis:

Hackett Publishing Company. Original edition, 1690.

Machamer, Peter, Rick Grush, and Peter McLaughlin, eds. 2001. Theory and Method in the

Neurosciences. Pittsburgh: University of Pittsburgh Press.

Martin, Raymond, and John Barresi. 2002. Personal Identity. London: Blackwell Publishing

Professional.

Menand, Louis. 2002. American Studies. New York: Farrar, Straus and Giroux.

Miglio, Federica De, Tessa Onida, Francesco Romano, and Serena Santoro. 2002. Electronic

agents and the law of agency. Paper read at Workshop on The Law of Electronic Agents.

Monzani, J-S., and D. Thalmann. 2001. Verbal communication: using approximate sound

propagation to design an inter-agents communication language. Paper read at

International Conference on Autonomous Agents, at Barcelona.

Naffine, N. 2003. Who are Law’s Persons? From Cheshire Cats to Responsible Subjects. The

Modern Law Review 66:346.

Nekam, A. 1938. The Personality Conception of the Legal Entity. Boston: Harvard University Press.

Nosworthy, Jane. 1998. The Koko Dilemma. Southern Cross University Law Review 2 (1).

Nozick, Robert. 1993. The Nature of Rationality. Princeton: Princeton University Press.

Pacuit, Eric, Rohit Parikh, and Eva Cogan. 2006. The Logic of Knowledge Based Obligation.

Knowledge, Rationality and Action (Synthese) 149 (2):311-341.

Parfit, Derek. 1986. Reasons and Persons. New York: Oxford University Press.

Perry, John, ed. 1975. Personal Identity. Berkeley: University of California Press.

Posner, Richard. 1973. Economic Analysis of Law. 1st ed. Lebanon: Little, Brown and

Company.

Preston, John, and Mark Bishop, eds. 2002. Views into the Chinese Room: New Essays on

Searle and Artificial Intelligence. New York: Oxford University Press.

Ringen, J., and J. Bennett. 1993. Précis of The Intentional Stance. Behavioral and Brain

Sciences 16 (2):289-391.

Robbins, Philip, and Anthony I. Jack. 2006. The Phenomenal Stance. Philosophical Studies

127:59-85.

Rorty, Richard. 1976. The Identity of Persons. Berkeley: University of California Press.

Searle, J.R. 1980. Minds, brains and programs. Behavioral and Brain Sciences 3:417-457.

Shoemaker, Sydney, and Richard Swinburne. 1984. Personal Identity (Great Debates in

Philosophy). Oxford: Basil Blackwell.

Sloman, Aaron. 1992. How to dispose of the free will issue. AISB Quarterly (82):31-32.

Solum, Lawrence B. 1992. Legal personhood for artificial intelligences. North Carolina Law

Review 70:1231.

Strawson, Peter. 1959. Individuals. London: Methuen.

Stubenberg, L. 1992. What is it like to be Oscar? Synthese 90:1-26.

Sumner, L.W. 1987. The Moral Foundation of Rights. New York: Oxford University Press.

Teubner, Gunther. 1988. Enterprise Corporatism: New Industrial Policy and the “Essence” of the Legal

Person. American Journal of Comparative Law 36 (1):130-155.

Thompson, D. 1965. Can a machine be conscious? British Journal for the Philosophy of Science

16 (36).

Turing, A.M. 1950. Computing machinery and intelligence. Mind 59:433-460.

von Wright, G.H. 1951. Deontic Logic. Mind 60:1-15.

Warren, Mary Anne. 1996. On the Moral and Legal Status of Abortion. In Biomedical Ethics,

edited by T. A. Mappes and D. DeGrazia. New York: McGraw-Hill.

Weinreb, Lloyd L. 1998. Oedipus at Fenway Park: What Rights Are and Why There Are Any.

Cambridge: Harvard University Press.

Wettig, Steffen, and Eberhard Zehendner. 2003. The electronic agent: A legal personality under

German law? Paper read at Law and Electronic Agents Workshop.

Williams, Bernard. 1976. Problems of the Self: Philosophical Papers 1956-1972. Cambridge,

UK: Cambridge University Press.

Wise, Steven M. 2000. Rattling the Cage: Toward Legal Rights for Animals. New York:

Perseus Books.

Wooldridge, M. 2000. Reasoning about Rational Agents. Cambridge: MIT Press.