THE ETHICS OF AI - A STARTING POINT FOR ESTABLISHING PRACTICAL ETHICAL GUIDELINES

Avestu Institute - The Institute for Idea Management & Intellectual Capital

"Revolutions cannot take place in the streets if they have not taken place in minds first."

All rights reserved. Authored by contributors for Avestu Institute Research at the Munich faculty of philosophy and science theory within the PPE programme at Ludwig-Maximilians-Universität München.

The Ethics of AI – A Starting Point for Establishing Practical Ethical Guidelines: Reframing AI in a Normative Discourse Based on Constructivist Responsibility

AVESTU INSTITUTE FOCUS REPORT 01 18



The Ethics of AI – A Starting Point for Establishing Practical Ethical Guidelines: Reframing AI in a Normative Discourse Based on Constructivist Responsibility

Abstract

This white paper aims to trace the ethical limits of Artificial Intelligence from a normative, action-ethical point of view. It provides a short overview - per the scope of this paper - of the most influential current stances in the realm of AI- or Robo-Ethics and, where indispensable, discusses them against the backdrop of the classical Greek value systems on which Western philosophy is footed, aspiring to develop them further into an initial sketch for a constructivist, action-ethical approach to responsibility.

On this basis, the advanced aim is to underpin a thought platform for future legal regulations as well as for the necessary self-regulative efforts of the start-up community and innovating individuals. This platform may draw on lessons from earlier human innovative endeavours and rests on the thesis that 'responsibility' constitutes the core concept on which such regulations need to be built - and will necessarily form the heart of such a thought platform. As such, this paper is to be understood as a selection of practical ethical thoughts helpful to those working on or with AI; more a 'start ramp' for further research on regulatory recommendations than a fully developed set of final guidelines in itself. In such a young branch of technology nothing is set in stone, and it therefore seems more fitting to discuss paradigms, common misconceptions and foundational considerations that 'responsible' innovators, investors and regulators can work with actively at the 'soft law' stage, rather than dictating a corset of rigid restrictions to be consumed or followed passively from afar.

Thus, this essay further seeks to establish a preliminary talking point for ethical guidelines that might be useful to entrepreneurs developing and providing AI solutions and software, both presently and in the future. This discourse is then developed in its consequences to briefly forecast what Ethical Entrepreneurship - and by extension ethical programming - might look like. It arrives at the notion that framing and strengthening the concept of human responsibility is of central importance at this stage of regulation. Further research may be needed to apply the initial regulatory assertions and recommendations summarized here more thoroughly in distinct AI scenarios such as Autonomous Driving, Intelligent Personal Assistants and Open Source Programming in Robotics.


Keywords: Ethics, Ethics of AI, Artificial Intelligence, Robo-Ethics, Robotics, Techno-philosophy, Ethical Guidelines, Ethical Guidelines for Founders, Innovation, Responsibility, Constructivism, Constructivist Responsibility, Action Ethics, Vita Activa, Ethical Entrepreneurship, Ethical Programming, AI in the Automotive Sector, Autonomous Driving, Intelligent Weapons, Personal Assistants, Machine vs. Human, Benignity Assumption, Self-regulative Efforts of the Start-up Community, Self-regulation, Soft Law, Hard Law, Stages of Innovation, Consequentialism, Emerging Technologies, Entrepreneurship, Professional Ethics, Formation of Legal Framework, Human-Centric, In-Action Ethics, Human-Computer Interaction, Asimov's Laws



Table of Contents

I. Abstract
II. Table of Contents
III. Introduction - AI: Mapping a Land Marked by Human Fears & Desires
   a. Early Research & Current Discourse: A Revived Discipline
   b. Clarification of Target Audience: Formulating the Innovator's Guide to AI
IV. Central Notions, Definitions, Disclaimers and Deliberations: Laying an Initial Ethical Groundwork Towards a Thought Platform for Future Legal Regulations, the Necessary Self-regulative Efforts of the Start-up Community and Innovating Individuals
   a. Responsibility - a Definition
   b. In AI - Who is Responsible?
   c. A Constructivist Point of View
   d. Renouncing the Benignity Assumption
   e. On "Summoning the Demon" - Debunking Meta-physical Readings of the 'Awakening of AI'
   f. The God-complex and Learnings from Prometheus
V. Definition of "Machine vs. Human"
   a. The Operational Definition Produced by the Stanford 100 Year Report on AI vs. the Definition of "The Human Measure"
   b. What is in a Decision?
VI. The Current 'Awakening of AI'
   a. State of AI
   b. State of Industry Regulation: Mapping Uncharted Territory
   c. Current Frameworks and Central Notions
   d. Recommendations for Practical Guidelines Aimed at Innovators & Self-regulative Efforts of the Start-up Community
VII. Conclusion & Forecast - The Bane of Progress
   a. Emerging Technologies Bring Emerging Risks - and Opportunities
VIII. Appendix
   a. Declaration of Authorship
   b. List of References


III. Introduction - AI: Mapping a Land Marked by Human Fears & Desires

"A year spent in artificial intelligence [sic] is enough to make one believe in God." - Alan Perlis

…and while it is unbecoming to delve too deep into the Pandora's box of dystopian visions that is quickly opened upon the mere mention of the ethical conundrums humanity stands to solve in the face of evermore complex and seemingly 'autonomous' Artificial Intelligence (AI) soft- and hardware, it is of immense importance to reflect on the challenges that may arise and thus help shape this discourse along ethical, human-centric cornerstones.

a. Early Research & Current Discourse: A Revived Discipline


This is an issue at the forefront of the minds of several of the technological and academic front-runners of our lifetime - with Stephen Hawking even warning that "(t)he development of full artificial intelligence could spell the end of the human race". While Hawking's statement may be a deliberately dark wake-up call, meant to shock complacent policy makers and elites into action, there seems to be far-reaching consensus that, as put forward by Klaus Schwab, "(w)e must address, individually and collectively, moral and ethical issues raised by cutting-edge research in artificial intelligence and biotechnology, which will enable significant life extension, designer babies, and memory extraction."

Building on the ensuing and ever more heated discourse in start-up circles and academia - which began as far back as the 1970s but quieted down while the technology behind AI had not yet reached the capacity to transform homes and industries at today's speed - and particularly on recent commentary by Elon Musk, as well as the imminent 'angst' around the implications of AI, this paper sets out to establish a line of thinking meant to form the backdrop for a more detailed creation of ethical rules and regulations, here referred to as 'checks and balances' or 'guidelines', to tackle this fourth industrial revolution and, particularly, the creation of AI systems.

And it is none too early - according to the investment bank GP Bullhound's annual Technology Predictions Report, 2017 is set to be the year in which "AI [...] breaks into the mainstream": "although the present generation of AI is still in the junior phase of development, we believe that 2017 will see consumers and businesses begin to adopt cutting-edge Artificial Intelligence for real-world applications."

b. Clarification of Target Audience: Formulating the Innovator's Guide to AI

The addressees of the guidelines advanced herein are not primarily fellow academic researchers in ethics, but the innovators on the ground - in the research laboratories, in their garages, or on Google's campuses - who are funding and founding AI companies or writing the code behind these systems. These innovators, be they founders, investors, programmers or researchers, are currently in the process of instilling the 'DNA' into these systems - via their company cultures, the parameters of their research and each line of code they produce.

Thus, the author of this paper asserts that it is of the foremost importance to help these innovators reflect on their choices - and by extension on the choices and parameters of decision-making they instil into the machines and systems they build. Giving entrepreneurs and innovators a set of guidelines by which to start judging their prototypes and algorithms may currently be the most effective pre-emptive measure we, as human society, have at our disposal.

If the discourse in the entrepreneurial community reflects ethical principles and helps innovators make ethically sound or, at least, conscious decisions, we may already be one step further than we were during the peak of the - in hindsight - innocent faith in progress and national powers that ultimately led to the creation of the atomic bomb.


This discourse might also prove helpful to regulators who are struggling to put in place a legal framework that does not strangle innovation and technological progress, but helps enforce positive, benign technological breakthroughs that will make the lives of future generations brighter and the issues humanity faces less daunting - not more.

IV. Central Notions, Definitions & Disclaimers

a. Responsibility - A Definition

The thesis of this paper is centred on firm opposition to the stance that robots are anything but tools. It therefore aims to show how important it is for humans to take ownership of the responsibility they accordingly bear in creating machines that may process information faster than our brains allow us to, and solve for 'x' quicker than we could ever hope to. We are still the ones deciding what 'x' any AI entity solves for, and in what framework, via the use of which parameters, the solving process happens.

The 'responsibility', thus, rests squarely on our humble shoulders - not in any Ayn Randian sense, but based on a very Hannah Arendtian logic spilling into In-Action Ethics, which shall be explored further.

In a body of work published in 1911 and aptly titled The Devil's Dictionary, the satirist Ambrose Bierce offered one particularly eloquent characterization of responsibility that shall serve as a vantage point for the considerations to follow in this paper: "a detachable burden easily shifted to the shoulders of God, Fate, Fortune, Luck or one's neighbour. In the days of astrology it was customary to unload it upon a star".

This illuminates the very nature of the concept nicely, spelling out that responsibility tends to be seen as a necessary evil - an obligation or yoke that restricts movement - rather than as a character trait that one cultivates as much for personal benefit as for the ever-ominous 'greater good'. We can also find this in the etymology of the adjective "responsible", which according to the Online Etymology Dictionary goes back to the late 16th century, namely the 1590s, and meant

"answerable" (to another, for something), from obsolete French responsible (13c., Modern French responsable, as if from Latin *responsabilis), from Latin respons-, past participle stem of respondere "to respond" (see respond). Meaning "accountable for one's actions" is attested from 1640s; that of "reliable, trustworthy" is from 1690s. Retains the sense of "obligation" in the Latin root word.

The definition of responsibility employed in the development of this thesis rests on the definition put forward by Hans Lenk - coincidentally a professor at the University of Karlsruhe, Germany's evolving centre of competence for AI - who distinguishes between

“four dimensions of responsibility:


1. Responsibility for actions and their results
2. Task and role responsibility
3. Universal moral responsibility
4. Legal responsibility"

and, more sharply, identifies the first such responsibility, "responsibility for actions and their results", as the most prominent in its application to the current state of AI and the focus of this paper - although there is undeniably an underlying interplay between the different kinds that may be rewarding to explore more deeply. In our case, it is noteworthy that the fourth kind, "legal responsibility", will regain importance for the AI space and for founders in this realm as regulations based on foundational efforts such as this piece grow more widespread. It is also worth noting that the other kinds seem to be deductions from number three, "universal moral responsibility", as a global theory of moral responsibility surely must be the precedent for any narrower form.

The understanding of responsibility of the first kind, as employed in this essay, also draws on a sense of accountability for making a good life for oneself and others by means of a 'vita activa' - including a politically active 'doing' - as put forward by Arendt, but also on the action-theoretical discussion explored in the book 'Action, Ethics, and Responsibility', edited by Joseph Keim Campbell, Michael O'Rourke and Harry Silverstein, which goes "beyond metaphysics and the 'free will' problem"; it understands responsibility in its socio-economic, socio-political sense at the level of the individual. In this sense, the concept of responsibility may well be seen as a collective duty, but one that may become void if it is not simultaneously taken seriously at the personal level.

b. In AI - Who is Responsible?

But who is ultimately responsible - the doer or the doing? When framing the question this way, the answer lies within an observation that is, interestingly, shared by Pope Benedict XVI: "Artificial intelligence, in fact, is obviously an intelligence transmitted by conscious subjects, an intelligence placed in equipment. It has a clear origin, in fact, in the intelligence of the human creators of such equipment."

In logical continuation, when tracing the source of AI we are also tracing the source of its doings; thus, the creator is not only the origin but also the originator of actions taken by an AI formed by his very hands: he is the one to 'answer' for its deeds, he is 'responsible'.

Hereditary lines of thought on the matter, such as those depicted by Michael R. LeChat when he conjures up the 'Hebraic' and 'Hellenic' viewpoints on AI, seem to strongly emphasize this connection between the origin and the placement of ultimate accountability - as well as mankind's necessary failure in emulating God. It seems imperative to remember that in the times when such imagery was created, the individual per se was nothing but a plaything of various depictions of fate or kismet, controlled by and utterly in need of the favour of the Gods. Human Rights and our contemporary Western


understanding of responsibility as a necessary consequence of personal agency are a rather modern development.

As a contemporary commentator, Hans Lenk first argues along hereditary lines when he paraphrases Bochenski to say that

"one can make and hold somebody personally responsible just for the sufficient conditions of the respective results of his or her developments, i.e. for what really triggers the consequences: 'Whoever is responsible for a sufficient condition, is also responsible for what is conditioned. If consequently an action, for which I am responsible, leads to the death of a human being (i.e., is a sufficient condition for this death), I am responsible for his dying'"

but then goes on to assess that

"[t]he same is not automatically true for all necessary conditions. Whoever establishes a necessary condition for an outcome would, according to Bochenski, at best possibly be responsible 'for the probability of the conditioned'. In the strict sense, the one responsible for only a necessary condition is therefore 'not responsible for the conditioned'."

Thus, he is trying to account for the general hopelessness that it would necessarily

spell for a mere human to be obliged to forecast all possible 'conditions' under which his or her creations, or acts, would lead to various 'outcomes' in order to be on the morally and action-ethically safe side. Such a mindset or regulatory demand would in turn bring about immobility, stagnation and despair at the simplest of acts.

However, we cannot plainly dismiss responsibility as too grand a concept to account for in the detached world of science for science's sake, when history puts in plain view that many a scientist wound up unable to sleep, or worse, after seeing what his research brought forth. Lenk goes on in his quoting of Bochenski to illuminate this perfectly:

"[take] the case of Otto Hahn (1879-1968). In 1938, Hahn, together with Lise Meitner and Fritz Strassmann, discovered nuclear fission at his small Berlin-Dahlem laboratory table. Hahn was a chemist by training rather than a physicist, so he wasn't well prepared to predict and assess the mass-energy implications of fission, or consequently, its military applications. Indeed he couldn't even predict the chain reactions and the consequent multiplication of energy output (Otto Frisch and Lise Meitner as physicists did that)."

He concludes: "[t]herefore Hahn certainly can't [sic] be held personally morally responsible for the development of the atomic bomb. Even so, he felt great scruples and qualms, even a kind of co-guiltiness - so much so that the other German physicists held in the same internment camp near Cambridge after the war were afraid that he would commit suicide."


But does the involvement of other scientists - or the incomplete foreseeability, for that matter - absolve him completely? Is it not expected of a conglomerate of scientists at this level to discuss, in depth and at length, the potential uses and abuses of their creation? Would their professional honour not prescribe that they summon a board of respected, knowledgeable individuals until they were sure?

Perhaps this thought pattern, when put into practice, would stop short only of erecting Plato's philosophers' state in practice. Perhaps, however, it would have led to an alternative, more practical outcome.

The first calls for the organized regulation of professional ethics, and for institutions and boards like this, were voiced after the Nuremberg Trials, leading to the creation of the Nuremberg Code of 1947, later followed by the declarations of the World Medical Association and the Mount Carmel Declaration on Technology and Moral Responsibility of 1974 - although the Hippocratic Oath might be one of the first recorded endeavours at such ethical regulation. The Mount Carmel Declaration "essentially relied on an appeal to the moral consciences of individual scientists and engineers. This encouraging of conscience still seems to be necessary but not sufficient for approaching all the difficult problems of the distribution of different kinds of responsibilities."

Thus, to take this train of thought a little further in a consequentialist fashion, one might want to introduce the concepts of traceability, transparency and liability. If innovations, and even slight improvements on pre-existing code, were more transparently traceable, and if the originator were held accountable both a) by a professional code and understanding of honour, perhaps set forth in a codex of honour, and b) in a legal fashion, the dual incentive to stay on the ethically 'clear' side might shine through more often.

Perhaps an entrepreneurial/scientific version of the Kantian Imperative, or of its simplified sibling, the Golden Rule, is called for: 'Do not build (program, invest in or research) what you would not want another person to build, what you would not want to become the generally applicable societal standard or law, or under the full or partial implications of which you would not want to live.'

Further academic exploration of such a concept in regard to AI may be fruitful, although some scholars may argue that the Kantian Imperative in itself already provides for every potential situation and thus makes this essay superfluous. However, there might be a certain value in taking such a concept out of the ivory tower of philosophy and condensing it into a pithy shorthand for the layman on the ground. Along the same lines, having a spelled-out set of guidelines - rather than, or complementary to, an abstract imperative that relies on the mental capacities and moral backbone of those implementing it single-handedly in all contexts and under all monetary and political conditions - seems better fitted to an imperfect reality, created by imperfect human beings.

See more under Guideline No. 1 - Create Transparent Ethical Expectations and Concepts that are Easily Accessible by Laymen.


c. A Constructivist Point of View

From a more constructivist point of view, AI - and maybe even the entire field of programming - can metaphorically be seen as 'constructivism in action': human concepts are artfully disassembled in a very de-constructivist fashion and put back together to shape a carefully constructed artificial version of a 'better', 'cleaner' and maybe even more politically correct world. It all depends on what we feed into our Artificial Intelligence or its algorithms. To say it with the early constructivist thinker Giambattista Vico: "Verum esse ipsum factum". The eloquent double meaning behind these words is that 'truth itself is fact' as well as 'truth itself is made' - an aphoristic reminder that while the truth may be an absolute, we create (or construct) our world and thus may very well be responsible for 'our truth'. The notion that individuals themselves are responsible for the constructions and paradigms they allow, accept and/or propel in their lives can be empowering - specifically to an innovator or scientist - and might cause him or her to participate more enthusiastically in the creation of shared 'constructions', or, in this case, ethical standards. It is a very different notion to 'have to adhere' to an exterior moral exoskeleton than to participate in building 'a shoe that fits', so to speak; e.g. a regulatory framework based on a shared (re-)constructed understanding of a concept.

Vico's words sound hauntingly similar to the omnipresent blind text "[Neque porro quisquam est, qui do]lorem ipsum, quia dolor sit, amet, consectetur, adipisci velit", which any programmer worth the name may know by heart - not only because of their shared Latin origin, but because the meaning of both epigraphs can be extracted and interpreted towards a maxim reiterated throughout the AI communities: "Do no harm."

This maxim was originally used in conflict studies, where it stood for a seven-step process serving as a directive to ensure that NGOs on the ground in rogue states were in fact helpful, not harmful. It has been adapted by pioneers throughout the AI communities, although no set standard procedure has been put forth yet. This may be a valuable field for further research and directive/seminal work, and it can already be used as a very short directional guideline.

See more under Guideline No. 2 - "Do no harm" and Other Early Level Opt-in Reporting.

d. Renouncing the Benignity Assumption

This paper also does not condone the Benignity Assumption displayed in various statements throughout the AI communities, which allege that a 'super-human' intelligence will necessarily, or even probably, take a friendly stance towards humans - as exhibited in these slightly populist quotes by futurist and self-proclaimed 'techno-philosopher' Gray Scott and by the author and robotics engineer Daniel H. Wilson, respectively:


"You have to talk about 'The Terminator' if you're talking about artificial intelligence. I actually think that that's way off. I don't think that an artificially intelligent system that has superhuman intelligence will be violent. I do think that it will disrupt our culture."

"I absolutely don't think a sentient artificial intelligence is going to wage war against the human species."

This is, foremost, because the old adage that 'it is always better to be safe than sorry' holds certain value, and secondly because of the following thought scenario: should humans create a system solving for 'perennial self-iteration or self-improvement', such a system may almost necessarily arrive at the solution that its 'best decision' is to eliminate the one source that could 'turn it off': humanity.

Again, this is not to say that any AI entity will have a true sense of self-preservation the way a person, an animal or even a flower might display it - and, more importantly, 'feel' it. To this end, Nick Bostrom, the director of the Future of Humanity Institute at Oxford University, argues in his book Superintelligence that it will not always be as easy to turn off an intelligent machine as it was to turn off Tay. He defines "superintelligence" as an intellect that is "smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills". Such a system may be able to outsmart any attempts to turn it off.

There are also additional ethical considerations fuelling this explosive mix of anxiety, half-truths, inflated expectations and near-monthly technological breakthroughs: the technology of AI is open to a multitude of uses and abuses - and may eventually lend itself as the very entity to which the most fraught human decision-making processes are outsourced, possibly in blind faith; thus becoming one more meta-human entity to shift responsibility onto, in the above-quoted Biercean sense. We may no longer be 'wishing upon a star' but 'wishing upon AI' - a very dangerous and, upon closer inspection, rather narcissistic undertaking; more becoming of medieval alchemists than of modern scientists.

This becomes more unnerving given that the most evident malign application so far is cyber warfare. As Ted Bell, novelist, Vice-Chairman of the advertising agency Young & Rubicam and writer in residence at Cambridge University, points out: "(e)very major player is working on this technology of artificial intelligence. As of now, it's benign... but I would say that the day is not far off when artificial intelligence as applied to cyber warfare becomes a threat to everybody."

See more under Guideline No. 3 - The Concept of Guardianship over a Technology.

e. On "Summoning the Demon" - Debunking Meta-physical Readings of the 'Awakening of AI'


Please note that this paper discusses AI on the basis of the notion that the technological advances we currently see happening are not an 'awakening of AI' or a newly 'created' type of consciousness that is somehow 'separate' from human consciousness, nor anything akin to humanity creating its own form of 'alien life' or a 'Frankensteinian' being. In fact, it seems highly advisable to be utterly sceptical of anyone, and any approach, based on the notion that these machines and systems decide 'autonomously'. They work in a framework established by humans, on input fed to them by humans; they make observations and take steps - rather than decisions - on the basis of parameters given to them by humans. There is no 'divine' spark instilled into such machines that would allow them a consciousness separate from a meta-form of human-on-human reflection via the tool of a machine or human-created system. They are "Verkörperungen des Geistes" - embodiments of the intellectual essence of the humans who build and program them - nothing more, but also nothing less. It appears only sensible to take these newly possible 'embodiments of human spirit', and all their economic and cultural implications, very seriously; however, it seems best to steer clear of any metaphysical projections. There is no 'demon', and if there is one, the 'demon' is us.

See more under Guideline No. 4 - Legal Enforceability.

f. The God-complex and Learnings from Prometheus

The previously mentioned 'angst' is often mixed with - and deeply tinged by - the acutely narcissistic stance of feeling 'creator'- or 'God'-like: the nonsensical idea of sparking a new form of life by merely writing lines of code or enabling the existence of this new type of machinery.

We may be Prometheus flying too close to the sun, but we are not forming the next step of the evolutionary chain out of clay - or, in our case, 'metals, rare earths, and some plastic'.

There are even calls for an AI "bill of rights" that would transfer personhood, and thus responsibility, to the machine and its so-called decision-making - which, as discussed above, would be much more accurately labelled "past-decision and observation-reflecting". Transferring responsibility from humans to artefacts manufactured by them would be a first in history; such a move is usually only attempted when tools, such as weapons, are demonized in lieu of or as an extension of their owners. Such logic may also be familiar from fairy tales or fables, where books or teapots come to life and make decisions. At this point in time, humanity still has not fully wrapped its head around animals, or even children, as independent, much less responsible, actors. Thus, the hype of passing the creator's responsibility over from the creator to the creation seems premature, if not pre-emptive. Summa summarum, such a move may owe more to its absolving qualities: it takes weight off the conscience of the creator while simultaneously providing the enormous


ego-boost of having made an entity that seemingly possesses a decision-making capacity all of its own.

Therefore, such “past-decision and observation-reflecting” algorithms become dangerous when humanized with labels such as “self”-improving or ‘learning’: these terms may not be generally misleading in their descriptive accuracy, but they are highly misleading to the ever-projecting human mind. In the words of Marie-Louise von Franz, a renowned co-worker of C. G. Jung who considerably extended his concept of projection: “wherever known reality stops, where we touch the unknown, there we project an archetypal image”.

Thus, when advocates of robot or AI rights, such as the above-mentioned Gray Scott, start asking “The real question is, when will we draft an artificial intelligence bill of rights? What will that consist of? And who will get to decide that?”, they evoke imagery so loaded with the historic symbolism of enlightenment and emancipation - from slavery, from male guardianship, from malfunctioning societal systems of ownership and wage exploitation - that one cannot help but open one’s heart to the robots’ supposed plight under the weight of sheer historic significance.

However, the decision whether robots are to be treated as sentient beings rather than tools or technical creations/entities should not be made in a state blinded by an all too human narrative projecting them as our ‘slaves’ or as a ‘slave-archetype’. The distinction between a tool and a slave is fundamental in its consequences, and needs to be addressed early on in the process:

Words are powerful agents in the way they subconsciously cue humans as to how to frame a story as it unfolds - as we may have gleaned from decades of casting other nations or minorities as ‘others’, the ‘out-group’, by gently or even openly cuing their otherness rather than their similarities with the ‘in-group’ in question. Prominent examples are the dehumanization strategies employed under Goebbels, who absorbed the art of ‘manufacturing consent’ from Freud via the detour of Freud’s infamous nephew Edward Bernays, proceeding to cast the Jews as ‘entartet’ or sub-level humans – and, more recently, in Rwanda during the genocide of the Tutsi by members of the Hutu majority government. Several theorists argue that the division into these groups may be a very recent one, stemming only from colonial times, when natives’ limbs and heads were measured and they were allotted to groups on the basis of the data thus extracted, instilling bias via dehumanizing labels and according actions. It was ‘only words’ preparing the ground for genocide by way of spinning ancient hereditary myths into hostile group myths, thereby justifying “fears of group extinction, and a symbolic politics of chauvinist mobilization. The hostile myths produce emotion-laden symbols that make mass hostility easy for chauvinist elites to provoke and make extremist policies popular”.

Even for an open-minded, innovative thinker and doer, it may be easy to fall into the trap of wanting to spare robots a tragic subhuman existence in slavery on account of their evident otherness, instead of facing the solemn - and possibly a bit boring - probability


that robots are merely reflective of their makers, rather than capable of any type of ‘true’ sentient response, much less suffering. The question here seems to be: are we not too eager to transfer our feelings and experiences onto our creation, the way a child does with a puppet? May our sympathy towards robots be nothing more than familiar narcissistic navel-gazing over the latest technological advancements, a simple fawning over ‘progress’?

See more under Guideline No. 5 – Manufacturing Consent: The Importance of Taking Responsibility for the Societal Narrative.

---

To summarize these fundamental initial disclaimers, definitions, explanations and introductory deliberations: this white paper aims to investigate the ethical limits of Artificial Intelligence from a normative, action-ethical point of view by providing a - per the scope of this paper - very abbreviated overview of current stances in the realm of AI- or Robo-Ethics, concentrating on the most prominent, and, where necessary or helpful, discussing them against the backdrop of the classical Greek value systems on which Western philosophy is footed.

On this basis, the further aim is to underpin a thought platform for future legal regulations as well as for the necessary self-regulative efforts of the start-up community and innovating individuals - possibly drawing on lessons from earlier human innovative endeavours - on the thesis that ‘responsibility’ is the core concept necessarily at the heart of such a thought platform.

It, thereby, seeks to establish an initial talking point for ethical guidelines that might be useful to entrepreneurs providing and developing AI solutions and software.

It is paramount in this undertaking to establish an elementary definition of “Machine vs. Human”, elucidating this crucial distinction on the grounds of what it means to “make a decision”, in the face of the necessity and complexity of programming ‘autonomous’ decision-making models.

This discourse shall then be developed in its consequences to forecast why Ethical Entrepreneurship – and by extension programming – is needed, and what it might look like in the context of AI.

To do so, we will succinctly look at the most prominent current frameworks and central notions for such decision-making, their implications, foundations and shortcomings, and provide a foundation for adding to them on the basis of best practice examples of encouraging consistent ethical choices, both within innovating entities such as research centers and start-ups and on a governmental trans-national scale, and, where possible within the applicable scope, on the basis of current AI-human interaction patterns and impact clusters.

V. Definition of “Machine vs. Human”


a. The Operational Definition Produced by the Stanford 100 Year Report on AI vs. the Definition of “The Human Measure”

To shine a light on this distinction, we lean on the operational definition produced by the Stanford 100 Year Report on AI:

“AI can also be defined by what AI researchers do. This report views AI primarily as a branch of computer science that studies the properties of intelligence by synthesizing intelligence.[10] Though the advent of AI has depended on the rapid progress of hardware computing resources, the focus here on software reflects a trend in the AI community. More recently, though, progress in building hardware tailored for neural-network-based computing[11] has created a tighter coupling between hardware and software in advancing AI. “Intelligence” remains a complex phenomenon whose varied aspects have attracted the attention of several different fields of study, including psychology, economics, neuroscience, biology, engineering, statistics, and linguistics. Naturally, the field of AI has benefited from the progress made by all of these allied fields. For example, the artificial neural network, which has been at the heart of several AI-based solutions[12] [13] was originally inspired by thoughts about the flow of information in biological neurons.[14]”

vs. what they define as “the human measure”:

“Notably, the characterization of intelligence as a spectrum grants no special status to the human brain. But to date human intelligence has no match in the biological and artificial worlds for sheer versatility, with the abilities “to reason, achieve goals, understand and generate language, perceive and respond to sensory inputs, prove mathematical theorems, play challenging games, synthesize and summarize information, create art and music, and even write histories.”[6] This makes human intelligence a natural choice for benchmarking the progress of AI. It may even be proposed, as a rule of thumb, that any activity computers are able to perform and people once performed should be counted as an instance of intelligence. But matching any human ability is only a sufficient condition, not a necessary one. There are already many systems that exceed human intelligence, at least in speed, such as scheduling the daily arrivals and departures of thousands of flights in an airport. AI's long quest—and eventual success—to beat human players at the game of chess offered a high-profile instance for comparing human to machine intelligence. Chess has fascinated people for centuries. When the possibility of building computers became imminent, Alan Turing, who many consider the father of computer science, “mentioned the idea of computers showing intelligence with chess as a paradigm.”


(Image 2)xlviii

Interestingly enough, in the case of the surface-level AI poker genius presented in Image 2, observers might be inclined to say that it is not the programmer of the AI in question who won all those chips. As humans tend to work with personifications, this is not a surprise, but it drives home just how easily human spectators may be duped into shifting their view on who is deserving of blame or praise – depending on the scenario and on familiar structures of assessment. This assessment bias is found in machines as well; however, there it presents itself merely as a distorted mirror image of our input: „[b]y their very nature, heuristic shortcuts will produce biases, and that is true for both humans and artificial intelligence, but the heuristics of AI are not necessarily the human ones.”

Quite understandably, we humans are hard pressed to measure intelligence by non-human standards. However, it might alleviate some of the anxiety around AI if humans did not try to personify – humanize – the quantitative or computing powers such systems may have. Compare them, yes; but currently we would be well advised to refrain from projecting onto a tool we programmed the feeling, the scheming, and the desire or motivation to win that a human might bring to a chess game conducted as a task. Such processes are as yet too complex to be replicated with known parameters in any kind of artificial setting.

See more under Guideline No. 6 – The Personification of Machines and Guideline No. 7 – Machine Intelligence is a Different Kind of Intelligence; Treat it That Way.

b. What is in a Decision?

A decision can be distilled down into a process that involves intuition, contemplation, experience, preferences, values, feelings, and the formation of a more or less informed choice between two or more alternatives, depending on the circumstances, under less than perfect conditions and a relative degree of uncertainty. Various scenarios are often played out before arriving at a decision.
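To illustrate how thin a machine’s version of this process is, one might sketch machine ‘decision-making’ as nothing more than scoring alternatives against pre-set parameters under noise. Every function name, weight and option below is hypothetical, a minimal sketch rather than any real system:

```python
import random

# A machine "decision": scoring alternatives against human-supplied
# parameters under uncertainty. All weights and options are hypothetical.
def machine_choice(options, weights, noise=0.1):
    """Pick the option with the highest noisy weighted score.

    There is no intuition, value, or feeling here - only the
    parameters its programmer chose to encode.
    """
    def score(features):
        base = sum(weights[k] * v for k, v in features.items())
        return base + random.gauss(0, noise)  # imperfect information

    return max(options, key=lambda o: score(o["features"]))

options = [
    {"name": "A", "features": {"speed": 0.9, "cost": 0.2}},
    {"name": "B", "features": {"speed": 0.4, "cost": 0.9}},
]
weights = {"speed": 1.0, "cost": -0.5}  # chosen by a human, not the machine
print(machine_choice(options, weights)["name"])
```

The point of the sketch is the asymmetry: everything that looks like judgment (the weights, the features, even the tolerance for uncertainty) originates outside the machine, with its programmer.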

There are, arguably, certain areas of decision-making in which computerized processes might not only speed up the process but may even make it more informed, more accurate and accordingly more reliable: anything involving massive datasets and statistical analyses – for example, micro-targeting. However, it is the non-linear, non-algorithmic portions that require a realistic understanding of values, of taboos, of ethical no-go zones, and of competing considerations. If we return to the example of micro-targeting, it might be


effective, as in generating clicks, to run a supremely offensive ad that is “dehumanizing to a certain minority” – however, it would violate ethical boundaries and codes of conduct and might make the individuals or corporations behind the AI system vulnerable to a lawsuit. When this ad is then targeted all the more closely at the very minority in question, because it generates the highest click rates with that community, an AI might deem the operation successful rather than abjectly failing. Thus, an AI can only decide according to the foresight and parameters instilled into it by those programming it. Hence, it would be only logical to start investing in the ethical background of those who shape our future world one line of code at a time – especially when code is so rapidly shared and replicated that one ‘lazy’ or incompletely written piece of software can bring completely unforeseen, and maybe disastrous, results with it. A sense of responsibility for the entity being coded, along with its outcomes, may help, but it would be plainly irresponsible not to watch out for such minor and major glitches by implementing checks and balances: regulatory systems.
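The micro-targeting failure mode can be made concrete in a few lines. The following is a purely hypothetical sketch - all names and click-rate numbers are invented - showing how a metric-only optimizer declares the offensive ad a “success”, while the same optimizer with an explicit ethical constraint would veto it:

```python
# Hypothetical sketch: a click-optimizer has no notion of ethics unless
# its programmers encode one. Names and numbers are invented for illustration.
def pick_ad(ads, audience, ethical_filter=None):
    """Return the ad with the highest predicted click rate for the audience.

    Without an ethical_filter, an offensive ad that happens to generate
    clicks is judged a "success" - the machine only sees the metric.
    """
    candidates = ads
    if ethical_filter is not None:
        candidates = [a for a in ads if ethical_filter(a)]
    if not candidates:
        return None  # better no ad than an unethical one
    return max(candidates, key=lambda a: a["predicted_clicks"][audience])

ads = [
    {"id": "offensive", "dehumanizing": True,  "predicted_clicks": {"minority": 0.31}},
    {"id": "neutral",   "dehumanizing": False, "predicted_clicks": {"minority": 0.07}},
]

print(pick_ad(ads, "minority")["id"])                                   # metric-only view
print(pick_ad(ads, "minority", lambda a: not a["dehumanizing"])["id"])  # with the constraint
```

Note that the constraint does not emerge from the optimization; it has to be written, and maintained, by the humans responsible for the system - which is precisely the paper’s point about where responsibility resides.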

This leads to two of the most impactful guidelines, Guidelines No. 8 and 9, to add to any Ethical Approach to Entrepreneurship – and by extension AI. Please see the according section for more information.

VI. The Current ‘Awakening of AI’ – Recommendations for Practical Guidelines Aimed at Innovators & Self-regulative Efforts of the Start-up Community

As previously explained, in order to briefly forecast what ethical entrepreneurship – and by extension programming – might look like in three distinct scenarios in the context of AI, we will first delve into a summarized overview of the most eminent current frameworks for such decision-making, their implications, foundations and shortcomings, and add to them on the basis of a summary of the current state of the impact of AI and of regulatory efforts, of best practice examples of encouraging consistent ethical choices – both within innovating entities such as research centers and start-ups and on a governmental trans-national scale – and on the basis of current AI-human interaction patterns and impact clusters.

a. State of AI

Emerging technologies bring emerging risks, and the guidelines that are thus far emerging are oriented along the fracturing lines of those risks. The process of establishing guidelines, implanting them in future research and then seeing them implemented in policies, regulations and on the ground is much the same for any not-yet-regulated discipline: first impulses erupt mainly along the fringes of insider communities – in this case tech communities – in the ivory towers of academia, and among elite forethinkers such as industry titans and pioneers.

In AI’s case, the pivotal moment was this cry for a framework by Elon Musk in 2014, in which he recounts the all too familiar tale of the “Zauberlehrling” – Johann Wolfgang von Goethe’s “The Sorcerer’s Apprentice”:


“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we’re summoning the demon. You know those stories where there’s the guy with the pentagram, and the holy water, and he’s like — Yeah, he’s sure he can control the demon? Doesn’t work out.“

Looking at the Gartner Hype Cycle (Image 3)liii, AI currently stands behind many of the ‘more mature’ Emerging Technologies, namely Smart Dust, Machine Learning, Virtual Personal Assistants, Cognitive Expert Advisors, Smart Data Discovery, Smart Workspace, Conversational User Interfaces, Smart Robots, Commercial UAVs (Drones), Autonomous Vehicles, Natural-Language Question Answering, Personal Analytics, Enterprise Taxonomy and Ontology Management, Data Broker PaaS and Context Brokering. “The perceptual smart machine age” may have begun:

“Smart machine technologies will be the most disruptive class of technologies over the next 10 years due to radical computational power, near-endless amounts of data, and unprecedented advances in deep neural networks that will allow organizations with smart machine technologies to harness data in order to adapt to new situations and solve problems that no one has encountered previously.”

AI is judged to be at its “peak of inflated expectations” as per the Gartner Hype Cycle 2016, and will thus likely face a period of disillusionment and more realistic recalibration, followed by growth within a time frame of 15 to


60 years. Adoption by the mass market, and, thus, AI’s full creative and economic impact, is yet to come, which is underscored by Image 4 below, detailing adoption rates in the legal sector – a rather slow market, but one where the applications of AI are low-key.

b. State of Current Regulatory Efforts

In comparison, the state of the regulation of AI applications across industries is currently situated a lot closer to the very beginning of the curve, on the left side, where the development of initial non-binding sets of rules and regulations sets in – catalysed by a ‘technology trigger’ (compare Image 4). The statements and warnings by those at the forefront of the technological wave are slowly starting to become more than lone cautioning voices: non-threatening, and of immense efficacy to users.

(Image 4)lv

(Image 5)lvi

Early adopters are slowly pushing an early majority to help increase regulation and work towards a general maturing of the field. At the same time, a broader community may start feeling the first tremors of the earthquake of societal change that AI is apt to bring forth, making the middle of society a bit more accessible to concerns and regulatory efforts. As soon as such initial non-binding attempts at regulation become more widely accepted, improved on and implemented, they start to transition up what one might want to call the ‘food chain of law’, and eventually – over a process that may well take two decades or longer – become “hard” and, thereby, enforceable law. (Image 4)

Both finance and medicine are fields that are much more tightly regulated and scrutinized as to their benevolence for consumers and humanity as a whole, and they may provide initial best practice models for such supervisory endeavours, which could, at least partly,


alleviate the growing pains likely to follow such exterior regulative efforts: “In AI, too, regulators can strengthen a virtuous cycle of activity involving internal and external accountability, transparency, and professionalization, rather than narrow compliance.”

c. Current Frameworks and Central Notions – Mapping Uncharted Territory

While there may be the „emergence of ‘third paradigm’ Human–Computer Interaction (HCI) […] driven by the shortcomings of existing approaches to adequately describe and understand the ways people interact with a new breed of pervasive digital technologies in everyday life“, such an effort is more observational than actively directive in quality.

The formulation of General Principles to articulate and establish a common understanding as to the role of AI and other robotic entities in our society is sorely called for. The General Principles Committee anticipates that such principles, issues, and recommendations “will eventually serve to underpin and scaffold future norms and standards within a new framework of ethical governance for AI/AS design”; at the same time, it “articulated high-level ethical concerns applying to all types of AI/AS“, formulating the following expectations, which – at their core – can only be implemented by the entrepreneurs and innovators involved in the according communities and laying out the seminal technological inventions:

“1. Embody the highest ideals of human rights.
2. Prioritize the maximum benefit to humanity and the natural environment.
3. Mitigate risks and negative impacts as AI/AS evolve as socio-technical systems”

and puts forth two open questions, which were presented as an intertwined subject in this essay:

1. “How can we ensure that AI/AS do not infringe human rights? (Framing the Principle of Human Rights)”
2. “How can we assure that AI/AS are accountable? (Framing the Principle of Responsibility)”

The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems has also chosen to highlight the importance of the “[e]mbedding [of] Values into Autonomous Intelligence Systems” by establishing a Committee for this very purpose. This committee “has taken on the broader objective of embedding values into AIS by helping designers

1. Identify the norms and values of a specific community affected by AIS;
2. Implement the norms and values of that community within AIS; and,
3. Evaluate the alignment and compatibility of those norms and values between the humans and AIS within that community”
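The committee’s three steps (identify, implement, evaluate) can be imagined, very loosely, as a pipeline. The sketch below is purely illustrative and not drawn from the IEEE documents: every function name, norm, threshold and log format is a hypothetical stand-in for what would, in practice, be a far richer social process:

```python
# Purely illustrative sketch of a three-step value-embedding pipeline
# (identify, implement, evaluate). All norms, names and thresholds
# are hypothetical, not taken from the IEEE documents.
def identify_norms(community_survey):
    """Step 1: distil a community's stated norms into relative weights."""
    total = sum(community_survey.values())
    return {norm: count / total for norm, count in community_survey.items()}

def implement_norms(norms, threshold=0.2):
    """Step 2: turn sufficiently supported norms into hard constraints."""
    return {norm for norm, weight in norms.items() if weight >= threshold}

def evaluate_alignment(constraints, ais_behaviour_log):
    """Step 3: share of logged AIS actions that respected every constraint."""
    ok = [a for a in ais_behaviour_log if constraints <= set(a["respected"])]
    return len(ok) / len(ais_behaviour_log)

survey = {"privacy": 60, "transparency": 30, "novelty": 10}
constraints = implement_norms(identify_norms(survey))
log = [{"respected": ["privacy", "transparency"]},
       {"respected": ["privacy"]}]
print(evaluate_alignment(constraints, log))
```

Even in this toy form, the sketch exposes the hard questions the committee raises: who runs the survey, who sets the threshold, and who audits the behaviour log.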


It further elaborates that

“We need to make sure that these technologies are aligned to humans in terms of our moral values and ethical principles. AI/AS have to behave in a way that is beneficial to people beyond reaching functional goals and addressing technical problems. This will allow for an elevated level of trust between humans and our technology that is needed for a fruitful pervasive use of AI/AS in our daily lives. Eudaimonia, as elucidated by Aristotle, is a practice that defines human wellbeing as the highest virtue for a society. Translated roughly as “flourishing,” the benefits of eudaimonia begin by conscious contemplation, where ethical considerations help us define how we wish to live. By aligning the creation of AI/AS with the values of its users and society we can prioritize the increase of human wellbeing as our metric for progress in the algorithmic age.”

Although the conclusion the committee arrived at seems rather Benthamian, measuring human wellbeing and employing it to decide on measures of socio-political impact is not as impractical as it was in Bentham’s times, and may thus see a resurgence. The common denominator across all human-centric approaches to AI – i.e. those that do not wish to see humanity replaced by a supreme reign of the machines, or go as far as touting AI as the next step in evolution – is that, in some form or another, benevolence towards humanity is enshrined in the algorithms of such technical entities. The “Three Laws of Robotics”, for the introduction of which Isaac Asimov became famed in 1942, also set forth this principle of benevolence towards humans. He later extended them to include benevolence towards humanity as a whole:

1. “A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.“
4. Addition – Meta law: “0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm”

In October 2013, Alan Winfield, Professor of Robot Ethics at the University of the West of England, suggested a revision of the five ‘laws’ that had previously been published by an EPSRC/AHRC working group, which might be the most important foundational formulation of guidelines to date:

“1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
2. Humans, not Robots, are responsible agents. Robots should be designed and operated as far as practicable to comply with existing laws, fundamental rights and freedoms, including privacy.
3. Robots are products. They should be designed using processes which [sic] assure their safety and security.
4. Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.
5. The person with legal responsibility for a robot should be attributed.“

For further immersion and background, it is recommended to review the Stanford AI

Report and the discussion concerning guidelines thus introduced.

d. Recommendations for Practical Guidelines Aimed at Innovators & Self-regulative Efforts of the Start-up Community

Drawing on the pre-existing body of guidelines above, as well as on the notions laid out in this paper, which together form a valuable step towards ‘framing responsibility’, the following summary of recommended targeting points for guidelines may currently be formulated:

Guideline No. 1 – Create Transparent Ethical Expectations and Concepts That Are Easily Accessible to Laymen; An entrepreneurial/scientific version of the Kantian Imperative, or of its simplified sibling, the Golden Rule, may be called for: ‘Do not build (program, invest in or research) what you would not want another person to build, or to become the generally applicable societal standard/law – or under the full or partial implications of which you would not want to live.’ Or even more simply: ‘Take responsibility for the entity or for the insight created.’

Guideline No. 2 – “Do No Harm” and Other Early-level Opt-in Reporting; It seems only sensible to aim for the subsequent establishment of a well-founded multi-step process akin to that put forth in conflict resolution. This process might provide an effective, actionable, easy-access solution for founders and researchers alike, and might form the basis of early-level opt-in reporting. Such reports would need to be, at the very least, stored for processing, review and feedback sessions by an agency formed by a governmental or non-governmental administrative body, or may also be a self-regulative measure by the industry.

---

It is understood that any guidelines so postulated, aimed directly at the entrepreneurial level and not at the establishment of meta-structures, should fulfil the above qualities of efficacy, actionability, and easy access.

Guideline No. 3 – The Concept of Guardianship over a Technology; This scenario also drives home the haunting notion that Hannah Arendt propagated so evocatively when reflecting on how the Second World War came into existence, namely the “Banality of Evil”: in essence, evil is not necessarily the result of sudden alarming catalytic events, but


more often hides in plain view, veiled by its familiar face and slowly creeping in through the backdoor via a widening of societal margins, one well-adjusted step at a time. Thus, vigilance – old-fashioned as the term may sound – is called for. In the case of innovators and entrepreneurs, it might be a useful exercise to recast their identity in the role of a parent or guardian to an emerging technology like AI. If such endeavours find the right reception and role models, an entrepreneur might soon be judged not just by his or her economic success, but by his or her legacy and accomplishments against a broader background.

Guideline No. 4 – Legal Enforceability; The existence of truly ‘autonomous’ machines is highly questionable; thus, ‘originative ownership’ of thought should remain traceable, and transgressions against ethical codes should remain legally relevant for a minimum time span of 50 to 90 years and/or for as long as the person or corporation in question is benefiting from said development. Additionally, work licensed under a Creative Commons Attribution-NonCommercial 3.0 United States License, or similarly publicly shared and developed bodies of work, needs to be monitored and non-restrictively regulated by a public national or international ethical board, which takes responsibility for long-term impact or makes developmental contributions traceable.

Guideline No. 5 – Manufacturing Consent: The Importance of Taking Responsibility for the Societal Narrative; The decision whether robots are to be treated as sentient beings rather than tools or technical creations/entities should not be made in a state blinded by the all too human narrative projecting them as our ‘slaves’ or as a ‘slave-archetype’ – much less by humans struck with the God-complex depicting them as a separate sentient type of ‘alien species’. Robots are multi-use tools that can only emulate human qualities, not, in fact, possess them. Thus, NGOs, governments and international bodies should actively and clearly communicate this principle, taking preventative ownership of the early discourse on this matter – instead of leaving it to corporate stakeholders and their various motivations – thereby shaping the discourse and the according regulations in a human-centric manner, in line with the five laws proposed by Alan Winfield (cited above).

Guideline No. 6 – The Personification of Machines; The humanization of man-made artefacts can mislead the layman just as easily as the inventor him-/herself, and is the first step down a slippery slope towards elaborate self-deception, towards trust where trust is misplaced, and towards an inventor escaping his/her responsibility for his/her creation. Thus, it is ethically highly fraught to give a machine humanoid or even animal-like traits of a descriptive, allusive or physical nature, and to invite such a projective approach in – be it actively or passively. Please compare Law 4 by Alan Winfield.

Guideline No. 7 – Machine Intelligence is a Different Kind of Intelligence; Treat it That Way. Innovators should start to question themselves – or anyone else: scientist, preacher, professor, entrepreneur at peak hype, peers – when they claim to have emulated human or meta-human traits, for it is likely they are comparing apples with pears because they


happen to know pears better, and are potentially indolent and sensationalist in their descriptions – which is hardly a state to aspire to. This process of questioning assumptions – along with AI and human biases – might ultimately be the blueprint for a regulative institution that tests and confirms, or dismisses, such hypotheses.

Guideline No. 8 – Develop Normative Ethical Standards and Universal Regulatory Bodies (Preferably with Near-global Reach); Develop regulatory bodies akin to those regulating finance, like the German banking regulator BaFin. A similar, albeit slightly expanded, model might be worth exploring: mandating proof of character or mental fitness, along with a business plan (including financial forecasts and all income streams for five years a priori); supportive, required sessions with a psychologist; a risk assessment with an individual set of necessary milestones, check-backs and reports; an insurance that covers potential risks; a thorough background check on all management personnel, owners, partners and investors; an ethical clearing certificate; etc. Approaching this from the angle of global governance seems apt in the age of trans-national corporate structures. In fact, a decentralized market that would allow research restricted in one location to move to another without applicable sanctions might make it impossible to enforce such ethical standards.

Guideline No. 9 – Ensure Ethical Coding Standards & Empowerment of Whistle-blowers; Allowing programmers dedicated space to think and reflect, enabling access to advocacy programmes and professional organisations, and requiring yearly statements on their estimation of their work's long- and near-term impact (both anonymously and openly) may prove a good first step towards increasing awareness and driving long-term results. Organisations like the Electronic Frontier Foundation (EFF) and other watchdog bodies need to be well funded and able to act. Rewarding whistle-blowers with scholarships, and providing exit routes or "golden parachutes" that ensure they remain safe and in good standing in society, will increase societal awareness and the sense of personal responsibility and ability to act, and will serve as a long-term check and balance. If doing "the ethical thing" does not equate to throwing away the life one painstakingly built, but instead might add to an impressive résumé, or even confer hero status, we will see fewer missteps by corporations and governmental bodies alike. Even a "non-threatening" situation, in which there are few or no repercussions for acting as a whistle-blower, might be considered an incentive by those in moral anguish over their contractual obligations. Consequently, it seems worth thinking about establishing the likes of digital or physical drop-off stations or mailboxes to which key information could be sent anonymously. Advocacy, safety and reintegration programmes need to be established.

For an innovator on the ground, tuning into the team's impulses about right and wrong might provide a first impetus for defining, and in consequence resolving, an ethical conundrum, preventing the need to speak out in a more public environment. Further research might help determine valid methodological approaches towards helping team members and innovators voice their concerns early and, if necessary, towards awarding appropriate monetary help that makes missing out on salaries or profits less dire – thereby helping innovations remain a force for good.

Guideline No. 10 – Embed Values; Rather than existing in a sort of "fig leaf" function, ethics needs to be instilled into processes from the get-go, into the very DNA of the design, the architecture and the business-planning processes around AI ventures. Like financial forecasting, ethical forecasting should be an integral part of business and budget planning, allowing for funds to be invested in a more ethical approach. A first step in this direction is "value-sensitive design" (VSD); yet instead of a passive, reactive value-sensitive design, a more active "value-driven design" (VDD) should implement ethical standpoints – designing with ethical consequences in mind, building and constructing the very future we would like to see.
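As a purely illustrative sketch of what pairing ethical with financial forecasting could look like in a planning tool – the field names, risk categories and figures below are our own hypothetical assumptions, not an established standard – each planning year carries an explicit ethical risk assessment and a budget line for mitigating it:

```python
from dataclasses import dataclass, field

@dataclass
class EthicalForecast:
    """One planning year: a financial projection paired with an ethical one."""
    year: int
    projected_revenue: float                              # conventional forecast
    identified_risks: list = field(default_factory=list)  # e.g. bias, job displacement
    mitigation_budget: float = 0.0                        # funds for the 'more ethical approach'

    def mitigation_share(self) -> float:
        """Fraction of projected revenue committed to ethical mitigation."""
        return self.mitigation_budget / self.projected_revenue if self.projected_revenue else 0.0

# A plan in which ethics is a budget line rather than a fig leaf:
plan = [
    EthicalForecast(2025, 1_000_000, ["training-data bias"], 50_000),
    EthicalForecast(2026, 2_500_000, ["job displacement", "privacy"], 150_000),
]
for yr in plan:
    print(yr.year, f"{yr.mitigation_share():.1%}")
```

The point of the sketch is the design choice, not the numbers: once the mitigation budget is a first-class field of the forecast, it can be reviewed, audited and forecast forward exactly like revenue.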

Guideline No. 11 – Foster Generational Thinking; Reflected in guiding "W-questions" such as: Why am I furthering this cause? Would my granddaughter enjoy living under those conditions? Would I? What is the benefit, and what is the worst thing that could go wrong? What is the next step? Is there one or more offset strategies for potential harm? Are there ways to study the impact on society prior to market launch? (See Guideline No. 14 for practical recommendations.)

Guideline No. 12 – Benevolence; Building on Asimov's four Laws (cited above), benevolence towards the environment, humans and animals must be the cornerstone of any ethical approach to AI that is coherent from a human perspective.

Guideline No. 13 – Separate "Meta-human" Ethics; Trans-humanist developments need to be closely monitored from an ethical point of view, and are not dealt with in this paper, as they will require a separate set of rules and regulations, although those brought forward here may partially apply. Such movements are to be treated as a different field. A starting point for this exploration may be provided by the notion that such "enhancements" cannot be required by corporations as a condition of employability, and that a "human as is" must be able to lead a life in dignity, in liberty and in pursuit of happiness – echoing the "unalienable rights" of life, liberty and the pursuit of happiness enshrined in the famed Declaration of Independence of the United States.

Guideline No. 14 – Real-Time Industry Regulations and Set-Off Mechanisms; Regulatory efforts may fail if the impacted industries are not involved at the starting point of such exertions: there needs to be active collaboration with so-called "impact clusters" – the industries and communities changed by AI technologies. One step may be working with unions and lobbies, such as IG Metall in Germany, to ensure that workers who, for example, lose their jobs to automation are cared for and able to lead lives as detailed under Guideline No. 13. It may also be worth exploring a set-off "tax" on corporations using AI, which could help governments create positive social parameters for the imminent technological changes via the much-discussed "basic income" or other efforts at redistribution. It may further be necessary to add to existing definitions of monopolies and to regulate trans-national corporations more strongly, creating a culture where the onus is on them to prove their beneficial nature and structure rather than on regulatory bodies.

Guideline No. 15 – Progress is not an End, only a Means to an End; the Value of Education. Just like society as a whole, any inventor would be well advised to periodically ask themselves the following questions about the impact of their developments on future generations: Does this constitute progress for progress' sake? Does this make life better now – and in the long term? If yes, for how many people? And, in a strictly Utilitarian manner, as a sort of helpful mental experiment: How many people could be helped with the amount of money invested in this development? Is this a beneficial, necessary step? Is my time well spent? This is a humanist guideline rather than a governmentally prescribable endeavour – one that will rest more on the quality and depth of the education innovators, investors and leaders receive: whether they can see beyond monetary benefits and think on a humanist, global and value-oriented level. As such, playful educational efforts on "Generational Impact and Ethics", starting from kindergarten and grammar school, may be a constructive starting point.
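The Utilitarian mental experiment above is, at bottom, a back-of-the-envelope opportunity-cost calculation, which can be sketched in a few lines. All figures here are hypothetical placeholders of our own choosing, not data:

```python
# Guideline No. 15 as arithmetic: what else could this budget have bought?

def people_helped_instead(project_budget: float, cost_per_person: float) -> int:
    """Opportunity cost: how many people the same budget could help directly."""
    return int(project_budget // cost_per_person)

# e.g. a 10 M investment vs. an assumed 2,000 per-person intervention cost
print(people_helped_instead(10_000_000, 2_000))  # → 5000
```

The answer is not meant to decide the question, only to make the trade-off visible before the next step is taken.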

---

Creating much more detailed In-Action Ethics frameworks – and showing how the qualities they deem necessary can be operationalised in relation to the realities of existing structures, by introducing the concept of a professional "creator's ethos" and a duty to take an interest in preventive measures – may go a long way. A deontological sense of professional ethics, soft power and educational efforts can reach where the law itself has not yet pointed a clear way forward.

I. Conclusion & Forecast – The Bane of Progress

a. Emerging Technologies bring Emerging Risks – and Opportunities

Emerging technologies bring emerging risks, but also emerging opportunities: leading a vita activa, in the Arendtian sense of being politically involved and active, as an innovator today may well open up space for a thoroughly enjoyed vita contemplativa within this lifetime – a life of leisure and thought, with all work done by robots. Previously enjoyable only by a select few, such a life devoted to thought and reflection might soon be a more realistic option for the many who may no longer have to work, thanks to an outsourcing of the kind of life that was once seen as unbecoming to a free man in Aristotle's and Plato's world. Yet such a robotic utopia might never arrive, and it seems much more likely that we will see abject poverty next to inconceivable riches if we are too busy with our current variety of an active life to put in place social security systems that soften the impact of jobs being replaced by mere "processes" – henceforth carried out by robotic tools owned by multinational corporations. Not quite as romantic, but a distinct possibility and a reason to take responsibility – if not for us, then for future generations, who will be hard-pressed to make up for any lack of foresight once such a redistribution of wealth has occurred.


Thus we seem to have arrived at a crossroads in history, and the time to jump into action and assume the role of a sort of "parent" or "guardian" to a maturing emerging technology such as AI is now, while the fourth industrial revolution still waits in the wings.

To draw this thought piece to a close with the initially quoted Alan J. Perlis, who was one of the first to formulate his thoughts around ethical programming, and whose words may well ring true to other innovators on the ground, be they entrepreneurs, philosophers or rocket scientists: "Don't have good ideas if you aren't willing to be responsible for them."

—


II. Appendix

a. References

Singer, Peter; Time to teach ethics to artificial intelligence; http://www.japantimes.co.jp/opinion/2016/04/17/commentary/world-commentary/time-teach-ethics-artificial-intelligence/#.WLXSeqsoGlI; Retrieved as of: 28.02.2017 (graphic image)

Klaeren, Herbert; (26.03.1999); http://pu.inf.uni-tuebingen.de/users/klaeren/epigrams.html; p.1, l.95; Retrieved as of: 02.05.17; originally published in Perlis, Alan J.; Epigrams on Programming; SIGPLAN Notices Vol. 17, No. 9, September 1982, pp. 7-13. Perlis, Alan J., American computer scientist and professor at Purdue University, Carnegie Mellon University and Yale University, is best known for his pioneering work in programming languages and was the first recipient of the Turing Award.

For a working definition of Autonomous System (AI/AS) cf. Pratihar, Dilip Kumar; Jain, Lakhmi C.; Intelligent Autonomous Systems: Foundations and Applications; Springer Verlag Berlin Heidelberg (2010); ISBN 978-3-642-11675-9; SCI 275, pp. 1-4

Research Priorities for Robust and Beneficial Artificial Intelligence; https://futureoflife.org/ai-open-letter/; Retrieved as of: 17.04.17; working definition for this paper:

"Artificial intelligence (AI) research has explored a variety of problems and approaches since its inception, but for the last 20 years or so has been focused on the problems surrounding the construction of intelligent agents – systems that perceive and act in some environment. In this context, 'intelligence' is related to statistical and economic notions of rationality – colloquially, the ability to make good decisions, plans, or inferences. The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems."

For further considerations: cf. The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems; Recommendations on AI: Law and Ethics Research; http://standards.ieee.org/develop/indconn/ec/ead_v1.pdf; Retrieved as of: 17.04.17

“The development of systems that embody significant amounts of intelligence and autonomy leads to important legal and ethical questions whose answers affect both producers and consumers of AI technology. These questions span law, public policy, professional ethics, and philosophical ethics, and will require expertise from computer scientists, legal experts, political scientists, and ethicists. For example:

Liability and Law for Autonomous Vehicles: If self-driving cars cut the roughly 40,000 annual U.S. traffic fatalities in half, the car makers might get not 20,000 thank-you notes, but 20,000 lawsuits. In what legal framework can the safety benefits of autonomous vehicles such as drone aircraft and self-driving cars best be realized (Vladeck 2014)? Should legal questions about AI be handled by existing (software- and Internet-focused) cyberlaw, or should they be treated separately (Calo 2014b)? In both military and commercial applications, governments will need to decide how best to bring the relevant expertise to bear; for example, a panel or committee of professionals and academics could be created, and Calo has proposed the creation of a Federal Robotics Commission (Calo 2014a).

Machine Ethics: How should an autonomous vehicle trade off, say, a small probability of injury to a human against the near certainty of a large material cost? How should lawyers, ethicists, and policymakers engage the public on these issues? Should such trade-offs be the subject of national standards?

Autonomous Weapons: Can lethal autonomous weapons be made to comply with humanitarian law (Churchill and Ulfstein 2000)? If, as some organizations have suggested, autonomous weapons should be banned (Docherty 2012), is it possible to develop a precise definition of autonomy for this purpose, and can such a ban practically be enforced? If it is permissible or legal to use lethal autonomous weapons, how should these weapons be integrated into the existing command-and-control structure so that responsibility and liability remain associated with specific human actors? What technical realities and forecasts should inform these questions, and how should meaningful human control over weapons be defined (Roff 2013, 2014; Anderson, Reisner, and Waxman 2014)? Are autonomous weapons likely to reduce political aversion to conflict, or perhaps result in accidental battles or wars (Asaro 2008)? Would such weapons become the tool of choice for oppressors or terrorists? Finally, how can transparency and public discourse best be encouraged on these issues?

Privacy: How should the ability of AI systems to interpret the data obtained from surveillance cameras, phone lines, emails, and so on, interact with the right to privacy? How will privacy risks interact with cybersecurity and cyberwarfare (Singer and Friedman 2014)? Our ability to take full advantage of the synergy between AI and big data will depend in part on our ability to manage and preserve privacy (Manyika et al. 2011; Agrawal and Srikant 2000).


Professional Ethics: What role should computer scientists play in the law and ethics of AI development and use? Past and current projects to explore these questions include the AAAI 2008–09 Presidential Panel on Long-Term AI Futures (Horvitz and Selman 2009), the EPSRC Principles of Robotics (Boden et al. 2011), and recently announced programs such as Stanford’s One-Hundred Year Study of AI and the AAAI Committee on AI Impact and Ethical Issues.

Policy Questions: From a public policy perspective, AI (like any powerful new technology) enables both great new benefits and novel pitfalls to be avoided, and appropriate policies can ensure that we can enjoy the benefits while risks are minimized. This raises policy questions such as (1) What is the space of policies worth studying, and how might they be enacted? (2) Which criteria should be used to determine the merits of a policy? Candidates include verifiability of compliance, enforceability, ability to reduce risk, ability to avoid stifling desirable technology development, likelihood of being adopted, and ability to adapt over time to changing circumstances.

cf. for technological front-runners: Signatories of the Open Letter on AI; https://futureoflife.org/ai-open-letter-signatories/; https://futureoflife.org/data/documents/research_priorities.pdf?x33688; Retrieved as of: 19.04.17

Cellan-Jones, Rory; Stephen Hawking warns artificial intelligence could end mankind; http://www.bbc.com/news/technology-30290540; (02.12.2014); p.1, l.3-4; Retrieved as of: 17.04.17

Klaus Schwab is Founder and Executive Chairman of the World Economic Forum; cf. Schwab, Klaus; Shaping the Fourth Industrial Revolution; (11.01.2016); https://www.project-syndicate.org/commentary/fourth-industrial-revolution-human-development-by-klaus-schwab-2016-01; Project Syndicate; p.1, 1ff.; Retrieved as of: 17.04.17

The start of AI discussions / AI as an academic discipline: cf. McCorduck, Pamela; Machines Who Think (2nd ed.); (2004); Natick, MA: A. K. Peters, Ltd.; ISBN 1-56881-205-1; pp. 243–252

cf. Kumparak, Greg; Elon Musk compares building artificial intelligence to summoning the demon; (26.10.2014); https://techcrunch.com/2014/10/26/elon-musk-compares-uilding-artificial-intelligence-to-summoning-the-demon/; l.11-19; Retrieved as of: 12.04.17

Dafferner, Alec; Roman, Per; Park, Chris; Inaltay, Okan; Yang, Mart; 2017 Technology Predictions – Trends and Technologies Shaping the Global Sector; GP Bullhound; p.13, l.3-4, 28-31

For more information regarding the allusion to the atomic bomb, see: Lenk, Hans; What is Responsibility?; (2006); https://philosophynow.org/issues/56/What_is_Responsibility; Retrieved as of: 01.04.17

cf. Frauenberger, Christopher; Rauhala, Marjo; Fitzpatrick, Geraldine; In-Action Ethics; Interact Comput (2017) 29 (2): 220-236; https://doi.org/10.1093/iwc/iww024; https://academic.oup.com/iwc/article-abstract/29/2/220/2607838/In-Action-Ethics; Published: 20 June 2016; Retrieved as of: 17.04.17

cf. Lenk, Hans; What is Responsibility?; (2006); https://philosophynow.org/issues/56/What_is_Responsibility; Retrieved as of: 02.04.17

cf. Harper, Douglas; entry for "responsible"; http://www.etymonline.com/index.php?term=responsible; (2001-2017); Retrieved as of: 02.05.17

Lenk, Hans; What is Responsibility?; (2006); https://philosophynow.org/issues/56/What_is_Responsibility; Philosophy Now; Issue 56; 79-85; Retrieved as of: 01.04.17

cf. Arendt, Hannah; Vita Activa oder vom tätigen Leben; (April 1981); Edition: 1967; Piper Verlag, München; German-language version; ISBN 978-3-492-23623-2; p.16, 1f.

cf. Keim Campbell, Joseph; O'Rourke, Michael; Silverstein, Harry; Action, Ethics, and Responsibility; MIT Press; (October 2010); ISBN 9780262514842; p.36ff.

Pentin, Edward; Full Text of Benedict XVI's Letter to Atheist; (26.11.2013); National Catholic Register; http://www.ncregister.com/blog/edward-pentin/full-text-of-benedict-xvis-letter-to-atheist-odifreddi; p.1, 40-43; Retrieved as of: 04.04.17

cf. LaChat, Michael R.; Artificial Intelligence and Ethics: An Exercise in the Moral Imagination; AI Magazine; Volume 7, Number 2; (1986); https://studylib.net/doc/13828902/artificial--intelligence--and--ethics--michael--r.-lachat; p.1, 1ff.; Retrieved as of: 20.04.17

cf. Lenk, Hans; What is Responsibility?; (2006); Philosophy Now; Issue 56; 85ff.; Retrieved as of: 01.04.17

cf. Lenk, Hans; What is Responsibility?; (2006); Philosophy Now; Issue 56; 273-278; Retrieved as of: 01.04.17

Lenk, Hans; What is Responsibility?; (2006); Philosophy Now; Issue 56; 278-280; Retrieved as of: 01.04.17

Full text: "Neque porro quisquam est, qui dolorem ipsum, quia dolor sit, amet, consectetur, adipisci velit." Kliemt, Stefan; Cicero, De finibus bonorum et malorum. Eine Textauswahl; Edition 2008; Vandenhoeck & Ruprecht; ISBN 978-3-525-71724-0; originally Cicero, Marcus Tullius; De Finibus Bonorum et Malorum; Liber primus, 32-33

cf. for application of the 'Do no harm' concept in AI: Devlin, Hannah; Do no harm, don't discriminate: official guidance issued on robot ethics; (2016); The Guardian; https://www.theguardian.com/technology/2016/sep/18/official-guidance-robot-ethics-british-standards-institute; Retrieved as of: 02.04.17

cf. Scott, Gray; futurist, techno-philosopher and one of the world's leading experts in the field of emerging technology; https://www.grayscott.com/blog/neuralink-connecting-mind-and-machine; Retrieved as of: 12.04.17

cf. Barber, John; Daniel H. Wilson on the art of making scary robots; (published 17.07.2011, last updated 06.09.2012); http://www.theglobeandmail.com/arts/books-and-media/daniel-h-wilson-on-the-art-of-making-scary-robots/article587888/; The Globe and Mail; Retrieved as of: 17.04.17

cf. Bostrom, Nick; Superintelligence: Paths, Dangers, Strategies; Oxford University Press; (01.03.2013); ISBN-10: 0199678111; p.20, 2ff.

Microsoft AI named "Tay"; see https://twitter.com/TayandYou; Retrieved as of: 17.04.17

cf. Singer, Peter; Time to teach ethics to artificial intelligence; http://www.japantimes.co.jp/opinion/2016/04/17/commentary/world-commentary/time-teach-ethics-artificial-intelligence/#.WLXSeqsoGlI; Retrieved as of: 28.02.2017

cf. Discover Author: Ted Bell; HarperCollins Publishers; https://www.harpercollins.com/authors/36010; (2017); Retrieved as of: 17.04.17

cf. Kumparak, Greg; Elon Musk compares building artificial intelligence to summoning the demon; (26.10.2014); https://techcrunch.com/2014/10/26/elon-musk-compares-uilding-artificial-intelligence-to-summoning-the-demon/; l.11-19; Retrieved as of: 12.03.17

cf. McCulloch, Warren S.; Verkörperungen des Geistes (Computerkultur); Springer; Edition: 2000 (04.08.2013); ISBN-10: 3211828575; p.10ff. For clarification: in no sense meant to be construed as spiritual.


cf. for Prometheus allusions: LaChat, Michael R.; Artificial Intelligence and Ethics: An Exercise in the Moral Imagination; AI Magazine; Volume 7, Number 2; (1986); https://studylib.net/doc/13828902/artificial--intelligence--and--ethics--michael--r.-lachat; p.1, l.1ff.; Retrieved as of: 20.04.17

Franz, Marie-Louise von; (September 1972); Patterns of Creativity Mirrored in Creation Myths (seminar series); Spring Publications; ISBN 978-0-88214-106-0; found in: Gray, Richard M.; (1996); Archetypal Explorations: An Integrative Approach to Human Behavior; Routledge; p. 201; ISBN 978-0-415-12117-0

cf. Scott, Gray; futurist, techno-philosopher and one of the world's leading experts in the field of emerging technology; https://www.grayscott.com/blog/neuralink-connecting-mind-and-machine; Retrieved as of: 12.04.17

See also Bernays, Edward: Schäfer, Dirk; (09.05.2010); Der erste Verdreher; http://www.sueddeutsche.de/kultur/public-relations-der-erste-verdreher-1.894159; Retrieved as of: 18.04.17

See also: Bernays, Edward; Propaganda; (1928); Ig Publishing; Edition: New Ed (1 September 2004); ISBN-10: 0970312598

cf. Moise Jean; The Rwandan Genocide: The True Motivations for Mass Killings; (2006); http://history.emory.edu/home/documents/endeavors/volume1/Moises.pdf; 33f.; Retrieved as of: 17.04.17

cf. Kaufman, Stuart J.; "Symbolic Politics or Rational Choice? Testing Theories of Extreme Ethnic Violence"; International Security 30.4 (2006); 45-86

See also von Kleist, Heinrich; Über das Marionettentheater; http://gutenberg.spiegel.de/buch/-593/1 (German version); in: Heinrich von Kleist; Sämtliche Werke; (1810); R. Löwit, Wiesbaden; p. 980-987; Retrieved as of: 02.03.17

Impact clusters in this context being defined as the areas of human life impacted most by the arrival and rise of AI technologies, currently and in the future.

Stone, Peter; Brooks, Rodney; Brynjolfsson, Erik; Calo, Ryan; Etzioni, Oren; Hager, Greg; Hirschberg, Julia; Kalyanakrishnan, Shivaram; Kamar, Ece; Kraus, Sarit; Leyton-Brown, Kevin; Parkes, David; Press, William; Saxenian, AnnaLee; Shah, Julie; Tambe, Milind; and Teller, Astro; "Artificial Intelligence and Life in 2030." One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel; Stanford University, Stanford, CA; September 2016; http://ai100.stanford.edu/2016-report; Retrieved as of: 01.04.17

Please consider the following for a deeper understanding of "The Human Measure":

“Notably, the characterization of intelligence as a spectrum grants no special status to the human brain. But to date human intelligence has no match in the biological and artificial worlds for sheer versatility, with the abilities “to reason, achieve goals, understand and generate language, perceive and respond to sensory inputs, prove mathematical theorems, play challenging games, synthesize and summarize information, create art and music, and even write histories.”[6] This makes human intelligence a natural choice for benchmarking the progress of AI. It may even be proposed, as a rule of thumb, that any activity computers are able to perform and people once performed should be counted as an instance of intelligence. But matching any human ability is only a sufficient condition, not a necessary one. There are already many systems that exceed human intelligence, at least in speed, such as scheduling the daily arrivals and departures of thousands of flights in an airport. AI's long quest—and eventual success—to beat human players at the game of chess offered a high-profile instance for comparing human to machine intelligence. Chess has fascinated people for centuries. When the possibility of building computers became imminent, Alan Turing, who many consider the father of computer science, “mentioned the idea of computers showing intelligence with chess as a paradigm.”[7] Without access to powerful computers, “Turing played a game in which he simulated the computer, taking about half an hour per move.” But it was only after a long line of improvements in the sixties and seventies—contributed by groups at Carnegie Mellon, Stanford, MIT, The Institute for Theoretical and Experimental Physics at Moscow, and Northwestern University—that chess-playing programs started gaining proficiency. 
The final push came through a long-running project at IBM, which culminated with the Deep Blue program beating Garry Kasparov, then the world chess champion, by a score of 3.5-2.5 in 1997. Curiously, no sooner had AI caught up with its elusive target than Deep Blue was portrayed as a collection of “brute force methods” that wasn't “real intelligence.”[8] In fact, IBM's subsequent publication about Deep Blue, which gives extensive details about its search and evaluation procedures, doesn’t mention the word “intelligent” even once![9] Was Deep Blue intelligent or not? Once again, the frontier had moved.

(cf. Stone, Peter; Brooks, Rodney; Brynjolfsson, Erik; Calo, Ryan; Etzioni, Oren; Hager, Greg; Hirschberg, Julia; Kalyanakrishnan, Shivaram; Kamar, Ece; Kraus, Sarit; Leyton-Brown, Kevin; Parkes, David; Press, William; Saxenian, AnnaLee; Shah, Julie; Tambe, Milind; and Teller, Astro; "Artificial Intelligence and Life in 2030." One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel; Stanford University, Stanford, CA; September 2016; http://ai100.stanford.edu/2016-report; Retrieved as of: 01.04.17.)

Image 2 - cf. Bailey, Lori; Cyber risk, Global risks; Emerging technologies bring emerging risks; https://www.zurich.com/en/knowledge/articles/2017/02/emerging-technologies-bring-emerging-risks?WT.mc_id=_tl_sm_li_feb_2017; Retrieved as of: 17.04.17

Newitz, Annalee; Ars Technica; (18.04.2017); https://arstechnica.com/science/2017/04/princeton-scholars-figure-out-why-your-ai-is-racist/; Retrieved as of: 12.04.17

Kumparak, Greg; (26.10.2014); https://techcrunch.com/2014/10/26/elon-musk-compares-building-artificial-intelligence-to-summoning-the-demon/; l.11-19; Retrieved as of: 12.04.17

Image 3 - Gardner, Ben; (11.12.2015); presentation given at Legal Geek on 10 Dec 2015; published in: Law; License: CC Attribution-NonCommercial-NoDerivs; https://www.slideshare.net/bengardner135/what-ai-is-and-examples-of-how-it-is-used-in-legal; Slide 4; Retrieved as of: 01.05.17

Forni, Amy Ann; Van der Meulen, Rob; Stamford, Conn.; (16.06.2016); http://www.gartner.com/newsroom/id/3412017; Retrieved as of: 17.04.17


Image 4 - Turner, Stephen; Will Artificial Intelligence Replace Lawyers? Part 2 – The Future Has Arrived…; (17.07.2016); http://lawyersoftomorrow.com/will-artificial-intelligence-replace-lawyers-part-2-the-future-has-arrived/; Image 4; Retrieved as of: 01.05.17

Image 5 - Gardner, Ben; (11.12.2015); presentation given at Legal Geek on 10 Dec 2015; published in: Law; License: CC Attribution-NonCommercial-NoDerivs; https://www.slideshare.net/bengardner135/what-ai-is-and-examples-of-how-it-is-used-in-legal; Slide 4; Retrieved as of: 01.05.17

Stone, Peter; et al.; "Artificial Intelligence and Life in 2030." One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel; Stanford University, Stanford, CA; September 2016; http://ai100.stanford.edu/2016-report; Retrieved as of: 06.03.2016

cf. Frauenberger, Christopher; Rauhala, Marjo; Fitzpatrick, Geraldine; In-Action Ethics; Interact Comput (2017) 29 (2): 220-236; https://doi.org/10.1093/iwc/iww024; https://academic.oup.com/iwc/article-abstract/29/2/220/2607838/In-Action-Ethics; Published: 20 June 2016; Retrieved as of: 17.04.17

The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems; Ethically Aligned Design: A Vision for Prioritizing Wellbeing with Artificial Intelligence and Autonomous Systems, Version 1; IEEE, 2016; http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html; Retrieved as of: 20.04.17

See also The Three Laws, or Asimov's Laws: Asimov, Isaac; (1950); I, Robot; HarperCollins Publishers; Edition 01 (6 June 2013); ISBN-10: 000753227X; p.11ff.

Regulating Robots in the Real World; Principles of Robotics - EPSRC website; https://www.epsrc.ac.uk/research/ourportfolio/themes/engineering/activities/principlesofrobotics/; Retrieved as of: 01.04.17

Section III: Prospects and Recommendations for Public Policy

The goal of AI applications must be to create value for society. Our policy recommendations flow from this goal, and, while this report is focused on a typical North American city in 2030, the recommendations are broadly applicable to other places over time. Strategies that enhance our ability to interpret AI systems and participate in their use may help build trust and prevent drastic failures. Care must be taken to augment and enhance human capabilities and interaction, and to avoid discrimination against segments of society. Research to encourage this direction and inform public policy debates should be emphasized. Given the current sector-specific regulation of US industries, new or retooled laws and policies will be needed to address the widespread impacts AI is likely to bring. Rather than “more” or “stricter” regulation, policies should be designed to encourage helpful innovation, generate and transfer expertise, and foster broad corporate and civic responsibility for addressing critical societal issues raised by these technologies. In the long term, AI will enable new wealth creation that will require social debate on how the economic fruits of AI technologies should be shared. (cf. Stone, Peter; Brooks, Rodney; Brynjolfsson, Erik; Calo, Ryan; Etzioni, Oren; Hager, Greg; Hirschberg, Julia; Kalyanakrishnan, Shivaram; Kamar, Ece; Kraus, Sarit; Leyton-Brown, Kevin; Parkes, David; Press, William; Saxenian, AnnaLee; Shah, Julie; Tambe, Milind; Teller, Astro; "Artificial Intelligence and Life in 2030." One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel; Stanford University, Stanford, CA, September 2016; http://ai100.stanford.edu/2016-report; Retrieved as of: 01.04.17.)

See also: Guidelines for the Future

“Faced with the profound changes that AI technologies can produce, pressure for “more” and “tougher” regulation is inevitable. Misunderstanding about what AI is and is not, especially against a background of scare-mongering, could fuel opposition to technologies that could benefit everyone. This would be a tragic mistake. Regulation that stifles innovation, or relocates it to other jurisdictions, would be similarly counterproductive.

Fortunately, principles that guide successful regulation of current digital technologies can be instructive. A recent multi-year study comparing privacy regulation in four European countries and the United States, for example, yielded counter-intuitive results.[139] Those countries, such as Spain and France, with strict and detailed regulations bred a “compliance mentality” within corporations, which had the effect of discouraging both innovation and robust privacy protections. Rather than taking responsibility for privacy protection internally and developing a professional staff to foster it in business and manufacturing processes, or engaging with privacy advocates or academics outside their walls, these companies viewed privacy as a compliance activity. Their focus was on avoiding fines or punishments, rather than proactively designing technology and adapting practices to protect privacy.

By contrast, the regulatory environment in the United States and Germany, which combined more ambiguous goals with tough transparency requirements and meaningful enforcement, was more successful in catalyzing companies to view privacy as their responsibility. Broad legal mandates encouraged companies to develop a professional staff and processes to enforce privacy controls, engage with outside stakeholders, and to adapt their practices to technology advances. Requiring greater transparency enabled civil society groups and media to become credible enforcers both in court and in the court of public opinion, making privacy more salient to corporate boards and leading them to further invest in privacy protection.

In AI, too, regulators can strengthen a virtuous cycle of activity involving internal and external accountability, transparency, and professionalization, rather than narrow compliance. As AI is integrated into cities, it will continue to challenge existing protections for values such as privacy and accountability. Like other technologies, AI has the potential to be used for good or nefarious purposes. This report has tried to highlight the potential for both. A vigorous and informed debate about how to best steer AI in ways that enrich our lives and our society, while encouraging creativity in the field, is an urgent and vital need. Policies should be evaluated as to whether they democratically foster the development and equitable sharing of AI’s benefits, or concentrate power and benefits in the hands of a fortunate few. And since future AI technologies and their effects cannot be foreseen with perfect clarity, policies will need to be continually re-evaluated in the context of observed societal challenges and evidence from fielded systems.

As this report documents, significant AI-related advances have already had an impact on North American cities over the past fifteen years, and even more substantial developments will occur over the next fifteen. Recent advances are largely due to the growth and analysis of large data sets enabled by the Internet, advances in sensory technologies and, more recently, applications of “deep learning.” In the coming years, as the public encounters new AI applications in domains such as transportation and healthcare, they must be introduced in ways that build trust and understanding, and respect human and civil rights. While encouraging innovation, policies and processes should address ethical, privacy, and security implications, and should work to ensure that the benefits of AI technologies will be spread broadly and fairly. Doing so will be critical if Artificial Intelligence research and its applications are to exert a positive influence on North American urban life in 2030 and beyond.” (cf. Stone, Peter; Brooks, Rodney; Brynjolfsson, Erik; Calo, Ryan; Etzioni, Oren; Hager, Greg; Hirschberg, Julia; Kalyanakrishnan, Shivaram; Kamar, Ece; Kraus, Sarit; Leyton-Brown, Kevin; Parkes, David; Press, William; Saxenian, AnnaLee; Shah, Julie; Tambe, Milind; Teller, Astro; "Artificial Intelligence and Life in 2030." One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel; Stanford University, Stanford, CA, September 2016; http://ai100.stanford.edu/2016-report; Retrieved as of: 01.04.17.)

cf. Arendt, Hannah; Eichmann in Jerusalem: A Report on the Banality of Evil; Penguin Classics, London; 1st edition (September 22, 2006); ISBN-10: 0143039881
See also: BaFin, Bundesanstalt für Finanzdienstleistungsaufsicht; (03.2017); https://www.bafin.de/SiteGlobals/Forms/Suche/Expertensuche_Formular.html;jsessionid=B1CD65AB1F9099190EABE05D00E96F1E.1_cid363?cl2Categories_Format=Merkblatt&gts=dateOfIssue_dt+desc&documentType_=News+Publication&sortOrder=dateOfIssue_dt+desc&language_=de; Retrieved as of: 01.05.17
Bailey, Lori; Emerging Technologies Bring Emerging Risks; Cyber Risk / Global Risks; (02.2017); https://www.zurich.com/en/knowledge/articles/2017/02/emerging-technologies-bring-emerging-risks?WT.mc_id=_tl_sm_li_feb_2017; Retrieved as of: 17.04.17
cf. Schwab, Klaus; Shaping the Fourth Industrial Revolution; (11.01.2016); https://www.project-syndicate.org/commentary/fourth-industrial-revolution-human-development-by-klaus-schwab-2016-01; Project Syndicate; p. 1, 1ff.; Retrieved as of: 17.04.17
Klaeren, Herbert; (26.03.1999); http://pu.inf.uni-tuebingen.de/users/klaeren/epigrams.html; p. 1, l. 95; Retrieved as of: 02.05.17; originally published in: Perlis, Alan J.; Epigrams on Programming; SIGPLAN Notices Vol. 17, No. 9, September 1982, pp. 7-13

Complete List of Literature

Wiener, Norbert; God and Golem, Inc.; (1964); Cambridge, Mass.: MIT Press; Issue: March 1966; ISBN: 9780262730112
Whitby, Blay; Artificial Intelligence: A Beginner's Guide; Oneworld Beginner's Guides; Oneworld (2003); ISBN 9781851683222
Turner, Stephen; Will Artificial Intelligence Replace Lawyers? Part 2 - The Future Has Arrived…; (17.07.2016); http://lawyersoftomorrow.com/will-artificial-intelligence-replace-lawyers-part-2-the-future-has-arrived/; Image 4; Retrieved as of: 01.05.17


The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems; Ethically Aligned Design: A Vision for Prioritizing Wellbeing with Artificial Intelligence and Autonomous Systems, Version 1; IEEE, 2016; http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html; Retrieved as of: 20.04.17
Scott, Gray; futurist and techno-philosopher; cf. https://www.grayscott.com/blog/neuralink-connecting-mind-and-machine; Retrieved as of: 12.04.17
Schwab, Klaus; Shaping the Fourth Industrial Revolution; (11.01.2016); https://www.project-syndicate.org/commentary/fourth-industrial-revolution-human-development-by-klaus-schwab-2016-01; Project Syndicate; p. 1, 1ff.; Retrieved as of: 17.04.17
Safeguards in a World of Ambient Intelligence (The International Library of Ethics, Law and Technology); (03.05.2010); Springer International Publishing AG; ISBN 978-1-4020-6662-7
Regulating Robots in the Real World; Principles of Robotics - EPSRC website; https://www.epsrc.ac.uk/research/ourportfolio/themes/engineering/activities/principlesofrobotics/; Retrieved as of: 01.04.17
Pictet Report; The Age of Intelligent Machines; Winter 2016
Stone, Peter; Brooks, Rodney; Brynjolfsson, Erik; Calo, Ryan; Etzioni, Oren; Hager, Greg; Hirschberg, Julia; Kalyanakrishnan, Shivaram; Kamar, Ece; Kraus, Sarit; Leyton-Brown, Kevin; Parkes, David; Press, William; Saxenian, AnnaLee; Shah, Julie; Tambe, Milind; Teller, Astro; "Artificial Intelligence and Life in 2030." One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel; Stanford University, Stanford, CA, September 2016; http://ai100.stanford.edu/2016-report; Retrieved as of: 01.04.17
Singer, Peter; Time to Teach Ethics to Artificial Intelligence; http://www.japantimes.co.jp/opinion/2016/04/17/commentary/world-commentary/time-teach-ethics-artificial-intelligence/#.WLXSeqsoGlI; Retrieved as of: 28.02.2017
Niebuhr, H. R. (1963); The Responsible Self; New York: Harper & Row
Mandelbaum, Maurice (1969); The Phenomenology of Moral Experience; Baltimore, Md.: Johns Hopkins Press
Mead, G. H. (1972); Mind, Self, and Society; Chicago: University of Chicago Press
McCulloch, Warren S.; Reflections on Artificial Intelligence; January 1996
McCorduck, Pamela (1979); Machines Who Think; San Francisco: W. H. Freeman
McCorduck, Pamela; Machines Who Think (2nd ed.); (2004); Natick, MA: A. K. Peters, Ltd.; ISBN 1-56881-205-1; pp. 243-252
Marx, Karl; Engels, F. (1971); On Religion; New York: Schocken
Kumparak, Greg; Elon Musk compares building artificial intelligence to summoning the demon; (26.10.2014); https://techcrunch.com/2014/10/26/elon-musk-compares-uilding-artificial-intelligence-to-summoning-the-demon/; l. 11-19; Retrieved as of: 12.03.17
Jonas, Hans (1977); Philosophical Reflections on Experimenting with Human Subjects; in: Reiser, S. J.; Dyck, A. J.; Curran, W. J. (Eds.); Ethics in Medicine: Historical Perspectives and Contemporary Concerns; Cambridge, Mass.: MIT Press
Hofstadter, Douglas R. (1980); Gödel, Escher, Bach: An Eternal Golden Braid; New York: Vintage Books
Heidegger, Martin (1962); Being and Time; New York: Harper & Row
Franz, Marie-Louise von (September 1972); Patterns of Creativity Mirrored in Creation Myths (Seminar series); Spring Publications; ISBN 978-0-88214-106-0; found in: Gray, Richard M. (1996); Archetypal Explorations: An Integrative Approach to Human Behavior; Routledge; p. 201; ISBN 978-0-415-12117-0
Frankena, W. K. (1973); Ethics; Englewood Cliffs, N.J.: Prentice-Hall
Fletcher, Joseph (1977); Ethical Aspects of Genetic Controls; in: Reiser, S. J.; Dyck, A. J.; Curran, W. J. (Eds.); Ethics in Medicine: Historical Perspectives and Contemporary Concerns; Cambridge, Mass.: MIT Press
Fletcher, Joseph (1972); Indicators of Humanhood: A Tentative Profile of Man; The Hastings Center Report 2(5): 1-4
Firth, Roderick (1952); Ethical Absolutism and the Ideal Observer; Philosophy and Phenomenological Research 12 (March): 317-345
Dafferner, Alec; Roman, Per; Park, Chris; Inaltay, Okan; Yang, Mart; 2017 Technology Predictions: Trends and Technologies Shaping the Global Sector; GP Bullhound; p. 13, l. 3-4, 28-31
Bailey, Lori; Emerging Technologies Bring Emerging Risks; Cyber Risk / Global Risks; (02.2017); https://www.zurich.com/en/knowledge/articles/2017/02/emerging-technologies-bring-emerging-risks?WT.mc_id=_tl_sm_li_feb_2017; Retrieved as of: 17.04.17
Cellan-Jones, Rory; Stephen Hawking warns artificial intelligence could end mankind; (02.12.2014); http://www.bbc.com/news/technology-30290540; p. 1, l. 3-4; Retrieved as of: 17.04.17
Bernays, Edward; Propaganda; (1928); Ig Publishing; Edition: New Ed (1 September 2004); ISBN-10: 0970312598
Barber, John; Daniel H. Wilson on the art of making scary robots; (published 17.07.2011, last updated 06.09.2012); http://www.theglobeandmail.com/arts/books-and-media/daniel-h-wilson-on-the-art-of-making-scary-robots/article587888/; The Globe and Mail; Retrieved as of: 17.04.17
Asimov, Isaac; (1950); I, Robot; HarperCollins Publishers; Edition 01 (6 June 2013); ISBN-10: 000753227X
Schäfer, Dirk; Der erste Verdreher [on Edward Bernays]; (09.05.2010); http://www.sueddeutsche.de/kultur/public-relations-der-erste-verdreher-1.894159; Retrieved as of: 18.04.17
Keim Campbell, Joseph; O'Rourke, Michael; Silverstein, Harry; Action, Ethics, and Responsibility; MIT Press; (October 2010); ISBN 9780262514842; p. 36ff.
McCulloch, Warren S.; Verkörperungen des Geistes (Computerkultur) [Embodiments of Mind]; Springer; Edition: 2000 (04.08.2013); ISBN-10: 3211828575; p. 10ff.
Bell, Ted; Harper Collins Publishers author page; https://www.harpercollins.com/authors/36010; (2017); Retrieved as of: 17.04.17


Signatories of Open Letter on AI; https://futureoflife.org/ai-open-letter-signatories/; https://futureoflife.org/data/documents/research_priorities.pdf?x33688; Retrieved as of: 19.04.17
Research Priorities for Robust and Beneficial Artificial Intelligence; https://futureoflife.org/ai-open-letter/; Retrieved as of: 17.04.17
Jean, Moise; The Rwandan Genocide: The True Motivations for Mass Killings; (2006); http://history.emory.edu/home/documents/endeavors/volume1/Moises.pdf; p. 33f.; Retrieved as of: 17.04.17
cf. Kaufman, Stuart J.; "Symbolic Politics or Rational Choice? Testing Theories of Extreme Ethnic Violence"; International Security 30.4 (2006): 45-86
Microsoft AI named "Tay"; https://twitter.com/TayandYou; Retrieved as of: 17.04.17
Lenk, Hans; What Is Responsibility?; (2006); https://philosophynow.org/issues/56/What_is_Responsibility; Retrieved as of: 01.04.17
LaChat, Michael R.; Artificial Intelligence and Ethics: An Exercise in the Moral Imagination; AI Magazine, Volume 7, Number 2; (1986); https://studylib.net/doc/13828902/artificial--intelligence--and--ethics--michael--r.-lachat; Retrieved as of: 20.04.17
Klaeren, Herbert; (26.03.1999); http://pu.inf.uni-tuebingen.de/users/klaeren/epigrams.html; p. 1, l. 95; Retrieved as of: 02.05.17; originally published in: Perlis, Alan J.; Epigrams on Programming; SIGPLAN Notices Vol. 17, No. 9, September 1982, pp. 7-13
Frauenberger, Christopher; Rauhala, Marjo; Fitzpatrick, Geraldine; In-Action Ethics; Interact Comput (2017) 29 (2): 220-236; https://doi.org/10.1093/iwc/iww024; https://academic.oup.com/iwc/article-abstract/29/2/220/2607838/In-Action-Ethics; Published: 20 June 2016; Retrieved as of: 17.04.17
Harper, Douglas; Online Etymology Dictionary, entry for "responsible"; http://www.etymonline.com/index.php?term=responsible; (2001-2017); Retrieved as of: 02.05.17
Arendt, Hannah; Vita Activa oder vom tätigen Leben [The Human Condition]; Piper Verlag, München; (April 1981); German-language edition of 1967; ISBN 978-3-492-23623-2; p. 16, 1f.
Gardner, Ben; (11.12.2015); presentation given at Legal Geek on 10 Dec 2015; published in: Law; License: CC Attribution-NonCommercial-NoDerivs; https://www.slideshare.net/bengardner135/what-ai-is-and-examples-of-how-it-is-used-in-legal; Slide 4; Retrieved as of: 01.05.17
Forni, Amy Ann; van der Meulen, Rob; Stamford, Conn.; (16.06.2016); http://www.gartner.com/newsroom/id/3412017; Retrieved as of: 17.04.17
Pratihar, Dilip Kumar; Jain, Lakhmi C.; Intelligent Autonomous Systems: Foundations and Applications; Springer, Berlin Heidelberg (2010); ISBN 978-3-642-11675-9; SCI 275, pp. 1-4
Bostrom, Nick; Superintelligence: Paths, Dangers, Strategies; Oxford University Press; (01.03.2013); ISBN-10: 0199678111; Retrieved as of: 09.04.17

Report prepared by Verena A. Sturm, [email protected]
