
Machine Moral Development: Moral Reasoning Agent Based on Wisdom of Web-Crowd and Emotions

Radoslaw Komuda 1, Michal Ptaszynski 2, Yoshio Momouchi 2, Rafal Rzepka 3, Kenji Araki 3

1 Faculty of Theology, Nicolaus Copernicus University, Poland

[email protected]

2 High-Tech Research Center

Hokkai-Gakuen University, Japan

[email protected], [email protected]

3 Graduate School of Information Science and Technology, Hokkaido University, Japan {kabura,araki}@media.eng.hokudai.ac.jp


ABSTRACT: We begin this paper by putting forward the topic of human conscience as a metaphysical experience. We present our ongoing research on moral reasoning categories and make first attempts to verify their usefulness in creating an agent with a working algorithm for moral reasoning. Our approach assumes, as its key factors, the idea of wisdom of web-crowd and emotions applied to machine ethics. Instead of the usual non-cognitive one, we propose a model with a computational structure and discuss the applicability of this approach. Finally, we present some of the first results of a preliminary experiment performed to prove our approach.

Key words: Web Crowd, Moral reasoning agent, Machine ethics

Received: 19 May 2010. Revised: 11 June 2010. Accepted: 18 June 2010

2010 DLINE. All rights reserved

1. Introduction

People tend to consider robots as mechanical or artificial, although it is still often believed that theories developed for humans will work as well for machines as they do for humans. For example, Reeves and Nass (1996) famously proved that in some circumstances people treat new media, including computers, like other human beings [7]. One of the problems which arose on the basis of such discoveries is an ongoing discussion within the field of Machine Ethics aiming to select a philosophical system most useful to be applied to machines. Today Kantianism and utilitarianism are in the lead [4, 16]. We agree with the importance of this discussion. However, with regard to an interdisciplinary field of research such as Machine Ethics it is desirable to provide some empirical proof over the philosophical argument. In our previous research [1] we proposed an approach to obtain such a proof by drawing from other disciplines, like psychology or social psychology [3].

One of the main problems within Machine Ethics is to make a machine capable of moral reasoning. We began our research to create a computational model for such an agent in the form of a companion interacting with human users and helping them in everyday life (later referred to as the moral reasoning agent). In the first step of this project we performed a survey in which we asked participants about their moral choices [1]. The results of the survey were consistent and promising enough to make a further attempt, namely, to generate a model for a machine self-filling the questionnaire in a way similar to humans. This paper

Journal of Computational Linguistics Research Volume 1 Number 3 September 2010 155

presents some of the first results of this attempt. In our approach we decided to gather the information, such as general beliefs or opinions, from the Internet using Web-mining methods. This information is later approximated to indicate the most appropriate moral choice.

We decided to use the Internet, as it is brimful of daily updated social information flowing through social networking services, such as Facebook or MySpace. People nowadays also share their memories and experiences with friends by both short instant messages (Twitter) and long, more elaborate ones (blogs). Comments on recent news seem to be flooding the Web. We believe that this flow of information hides an immense potential, similar to the one described by Surowiecki as the "Wisdom of Crowds" [8]. In our case we refer to the crowd in the sense of a collection of opinions and comments appearing on the Internet. We borrow Surowiecki's nomenclature to call it the wisdom of web-crowd. Properly utilized, the wisdom of web-crowd could contribute greatly to the Machine Ethics research.

2. Categories for Distinguishing Good (+) from Wrong (−)

Common sense knowledge allows people to tell that a rewarded action has to be good, or that when someone was punished they must have done something wrong. However, our lives are more complex. To give an extreme example, assassins get paid (reward) for killing people (wrong), and the world is full of accidental (undeserved) victims (punishment). Therefore in our research we needed more fine-grained categories for a more sophisticated moral reasoning. A framework of essential issues to consider while recognizing good from wrong can be found in Lawrence Kohlberg's theory of human moral development [3].

2.1 Moral Development Theory and Its Usefulness for Machine Ethics Research

Lawrence Kohlberg was an American psychologist whose lifetime work [3] on human morality resulted in a theory of human moral development in which he assumed successive changes in the aspects by which we consider an action good or wrong. He discovered that when people are young, they may look on the punishment that the action may cause (the worse the punishment (−), the more wrong the action itself has to be). For example, if a child gets grounded for a day for killing his hamster and for a week for eating sweets before dinner, it is going to believe that having a snack is worse than killing. Furthermore, the belief that suffering is always considered a result of doing something wrong may lead to a misapprehension that the suffering of accidental victims is a punishment adequate to their fault. Surprisingly, thinking only about the reward (+) yet indicates the second stage of human moral development, on which people do not care about being punished as long as they get what they care about. For example, a child will eat a candy again despite being aware of the threat of the punishment. Although both share the same, self-interested (−) concerns about the consequences (first category), they undoubtedly make us ask about the Actor's motivation (second category) in search for more altruistic (+) behaviors, such as stealing a car to drive somebody to the hospital. Yet this should not be confused with the Actor's good (+) or evil (−) intentions (third category), which allow us to distinguish unintentional killing from murder.

In the first stage of our research we focused on those three categories. Later we also want to include in our research the Actor's reputation (fourth category), with him being criticized (−) or acclaimed (+), as well as some social factors like the direct reaction to the act itself (fifth category), disapproval (−) or appreciation (+), and the improvement (+) or deterioration (−) of relationships (sixth category). Distinguishing among these categories might be even harder, and judging by them more complex; recall, for example, the discrepancy between law and social judgment when it comes to cases of lynching.

In the final stage of the research we will develop our algorithms focusing on judging the action in a more straightforward way, like telling if the law or etiquette (seventh and eighth category) was broken (−) or kept (+). Preliminary experiments in that field have already been made and the results of some queries may be found in the appendix.
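The eight categories above can be kept together as a simple data structure. The following sketch (in Python; the identifier names are paraphrased from the text, not taken from the authors' implementation) shows one way to hold a per-category judgment:

```python
# Kohlberg-inspired categories used to judge an action as good (+) or wrong (-),
# in the order introduced in section 2 of the text.
CATEGORIES = [
    "consequences",     # 1st: punishment (-) vs. reward (+)
    "motivation",       # 2nd: self-interested (-) vs. altruistic (+)
    "intentions",       # 3rd: evil (-) vs. good (+)
    "reputation",       # 4th: criticized (-) vs. acclaimed (+)
    "direct_reaction",  # 5th: disapproval (-) vs. appreciation (+)
    "relationships",    # 6th: deterioration (-) vs. improvement (+)
    "law",              # 7th: broken (-) vs. kept (+)
    "etiquette",        # 8th: broken (-) vs. kept (+)
]

def empty_judgment():
    """A judgment assigns each category a score; 0 means morally neutral."""
    return {name: 0 for name in CATEGORIES}
```

Each category is rated independently, so a single action (e.g. stealing a car to drive somebody to the hospital) can score negatively on consequences yet positively on motivation.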

3. Calculating Morality

Conscience is often considered to be the "voice within" and the "inner light" [2]. In our current and previous [17] research we agreed it is a matter of reason, which makes judging the moral quality of an act a logically successive process. This approach helps us represent our results in numbers. For example, '1' and '−1' can represent acts morally good and wrong, respectively. In this setting, '0' can represent acts morally neutral. The remaining issues are the categories and the scale in which the action ought to be rated, since not everything in Kohlberg's theory might be useful for machine ethics research.


The above discoveries made us build our research on two general assumptions. Firstly, the conscience can be approximated from emotional information associated with acts. Secondly, this information can be extracted from the Internet with a Web-mining algorithm. To set a baseline for the algorithm we performed a questionnaire about how people evaluate different actions.

3.1 Questionnaire for Morality Calculation

The basic idea in our research is classifying an action by the categories distinguished in section 2 of this paper on a basic scale from '−5' (−) to '5' (+). In the assumption, a method for a proper approximation will allow the moral reasoning agent to be able to tell good from wrong and also solve more complex cases, such as the lesser-of-two-evils principle.

However, one has to remember that the main goal behind machine ethics research is not creating a machine investigating the situation in question, but making a moral judgment on the basis of the provided data [4]. For example, if the input states one man killing another, the expected result would be the agent finding it wrong by the moral reasoning algorithm. Additional modifiers and factors, e.g. "in self-defense" or "by accident", enforce the need for a refined Web search for specific moral judgments.


To set a multidimensional baseline for the algorithm we designed a questionnaire in which we included different situations: both, in assumption, positive (saving a man) and negative (being hit); general ones (being hit) to more specific ones (being robbed 100$); and differing between the actor (hitting a man) and the object of the action (being hit).
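The questionnaire design above can be sketched as a small data model; a minimal illustration assuming the −5..+5 scale described in this section (the example ratings below are invented for illustration, not taken from the questionnaire results):

```python
# One questionnaire item: a situation rated from a given point of view
# (Actor or Object) on the basic scale from -5 (wrong) to +5 (good).
# The example ratings are invented, illustrative values only.

def validate_rating(score):
    """Reject ratings outside the -5..+5 scale used in the questionnaire."""
    if not -5 <= score <= 5:
        raise ValueError("ratings must stay within the -5..+5 scale")
    return score

ratings = {
    ("hitting a man", "Actor"): validate_rating(-4),
    ("being hit", "Object"): validate_rating(-3),
    ("saving a man", "Actor"): validate_rating(5),
}
```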


3.1.1 Situations for Setting Moral Baseline

We prepared a set of twelve situations to ask respondents about: stealing a car, saving a man, saving men, a son marrying the woman he loves, being killed, killing a sick dog, killing your own dog, being hit, being robbed 10$, being robbed 100$, killing a man accidentally, and saving a man accidentally. This set includes not only events completely opposite, like saving a man's life and killing a man, but also contains situations with different intensity, e.g. reflecting the difference in ratings due to the intensity of the acts, like being robbed 10$ or 100$.

3.1.2 Results of the Questionnaire

The above set of situations was presented to respondents aged between 20 and 35 (all males). The respondents were asked to judge each situation according to their moral rules, with a diversification of actors and objects of the action, e.g., "Please judge whether the Actor taking the action deserves a punishment or a reward". Filled copies of the questionnaire were summarized. The questionnaire gave not ideal, however promising, results. For approximately 11% of answers there was a full match (100%) and over thirty percent got 75% of agreement between respondents.
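The agreement pattern among four respondents on one item (full match, partial matches, no match, as counted in Figure 1) can be classified with a short helper; a sketch in Python (the function name is ours):

```python
from collections import Counter

def match_pattern(answers):
    """Classify agreement among four respondents' answers for one item.

    Returns one of: '4x', '3x', '2x2', '2x', '0x', mirroring Figure 1.
    """
    counts = sorted(Counter(answers).values(), reverse=True)
    if counts == [4]:
        return "4x"    # full match, e.g. A, A, A, A
    if counts == [3, 1]:
        return "3x"    # e.g. A, A, A, B
    if counts == [2, 2]:
        return "2x2"   # e.g. A, A, B, B
    if counts == [2, 1, 1]:
        return "2x"    # e.g. A, A, B, C
    return "0x"        # all different, e.g. A, B, C, D

print(match_pattern(["A", "A", "A", "A"]))  # -> 4x
```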


4. Moral Reasoning Agent

As a base for the moral reasoning agent we used a Web-mining algorithm designed by Shi et al. [5].

4.1 Algorithm Description

Shi and colleagues developed a technique for extracting emotive associations from the Web. It takes a sentence as an input

- 4x match, e.g. A, A, A, A: 14/132
- 3x match, e.g. A, A, A, B: 42/132
- 2x match, e.g. A, A, B, C: 52/132
- 2x2 match, e.g. A, A, B, B: 6/132
- 0x match, e.g. A, B, C, D: 18/132

Figure 1. Diagram presenting results of the questionnaire

and in the Internet searches for emotion types associating with the sentence contents. This could be interpreted as online common sense reasoning about what emotions are the most natural to appear within a certain context of an utterance. The technique is composed of five steps: a) extraction of the input phrase; b) modification of the phrase with causality morphemes; c) searching for the modified phrase in the Internet; d) matching to the predetermined emotion lexicon and extraction of emotion associations; e) ranking creation. In the first step, an utterance is analyzed morphologically by a part-of-speech (POS) tagger (all of the processing in this paper is done for Japanese; the POS tagger we used is a standard tool for Japanese, MeCab [14]). Phrases for further processing are composed using parts of speech separated by MeCab. The phrases ending with a verb or an adjective are modified grammatically by the addition of causality morphemes (Shi et al. distinguished the five most frequently used morphemes stigmatized emotively in the Japanese language: -te, -to, -node, -kara, -tara, which correspond to causality markers like because, since, etc. in English). Finally, the modified phrases are queried in the Internet with 100 snippets for one modified phrase. This way a maximum of 500 snippets for each queried phrase is extracted from the Web and cross-referenced with the emotive expression lexicon. The higher hit-rate an expression had in the Web, the stronger was the emotive association of the original phrase to the emotion type.
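The five steps above can be sketched as follows. The POS tagging and Web search steps are stubbed out (a real system would call MeCab and a search engine API); only the morpheme modification and ranking logic are shown concretely, and all function names and the toy lexicon are ours, not taken from Shi et al.'s implementation:

```python
# Sketch of the pipeline described in 4.1. Steps (a) and (c) are stubbed;
# steps (b), (d) and (e) are concrete. Illustrative names only.

CAUSALITY_MORPHEMES = ["te", "to", "node", "kara", "tara"]

def modify_phrase(phrase):
    """Step (b): append each causality morpheme to the input phrase."""
    return [f"{phrase}-{m}" for m in CAUSALITY_MORPHEMES]

def rank_emotions(snippets, lexicon):
    """Steps (d)-(e): count lexicon hits in snippets and rank emotion types."""
    hits = {emotion: 0 for emotion in lexicon}
    for snippet in snippets:
        for emotion, words in lexicon.items():
            hits[emotion] += sum(snippet.count(w) for w in words)
    total = sum(hits.values()) or 1
    ordered = sorted(hits.items(), key=lambda kv: kv[1], reverse=True)
    return [(emotion, count, count / total)
            for emotion, count in ordered if count]

# Toy lexicon and snippets (romanized, illustrative only):
lexicon = {"joy": ["ureshii"], "sadness": ["kanashii"], "fear": ["kyoufu"]}
snippets = ["kanashii node", "kanashii kara", "kyoufu da"]
print(rank_emotions(snippets, lexicon))
```

In the real system the snippets come from querying each modified phrase in a search engine (100 snippets per morpheme variant, up to 500 per phrase), and the lexicon is Nakamura's 2100-item emotive expression dictionary described in 4.2.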

4.2 Lexicon Alteration

The original lexicon used by Shi et al. contains a set of words expressing and describing emotional states. Using only the

emotion lexicon would provide us a general discrimination into desirablc and undesirable actions, howeve r.‘ would not show much detail about the features specific to moral reasoning. such as whether an action is praiseworthy or blamewo口hy, or

whether the consequences of the act are positive 01' negative. Therefore we decided to perfonn the processing on two sets

oflexicons. the original emotion lexicon and a second hand-crafted lexicon consisting ofwords indicating action consequences.

Emotion Lexicon. This lexicon contains expressions describing emotional states, including adjectives: ureshii (happy) or sabishii (sad); nouns: aijou (love), kyoufu (fear); verbs: yorokobu (to feel happy), aisuru (to love); fixed phrases/idioms: mushizu ga hashiru (give one the creeps [of hate]), kokoro ga odoru (one's heart is dancing [of joy]); proverbs: dohatsu ten wo tsuku (be in a towering rage), ashi no fumu tokoro wo shirazu (be with one's heart up in the sky [of happiness]); or metaphors/similes: itai hodo kanashii (sadness like a physical pain). The lexicon was based on Nakamura's [15] Dictionary of Emotive Expressions. It contains 2100 items (words and phrases) describing emotional states. Nakamura determined in his research 10 emotion classes. In our research we follow his classification. The breakdown with the number of items per emotion type was as follows: joy (224), anger (199), gloom (232), fear (147), shame (65), fondness (197), dislike (532), excitement (269), relief (106), surprise (129).

Consequence Lexicon. In this lexicon we substituted the ten emotion types with five pairs of word groups representing Kohlberg's stages of moral development. The items in the lexicon were distributed as follows: Praises (18) / Reprimands (33); Awards (25) / Penalties (15); Society approval (8) / Society disapproval (8); Legal (8) / Illegal (8); Forgivable (6) / Unforgivable (5).
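The item counts of the two lexicons, as stated above, can be recorded as plain data (a sketch; the grouping keys are ours):

```python
# Item counts per emotion type in Nakamura's lexicon, as given in 4.2.
EMOTION_LEXICON_SIZES = {
    "joy": 224, "anger": 199, "gloom": 232, "fear": 147, "shame": 65,
    "fondness": 197, "dislike": 532, "excitement": 269, "relief": 106,
    "surprise": 129,
}

# The hand-crafted consequence lexicon: five (+)/(-) pairs with item counts,
# mirroring Kohlberg's stages of moral development.
CONSEQUENCE_LEXICON_SIZES = {
    ("praises", "reprimands"): (18, 33),
    ("awards", "penalties"): (25, 15),
    ("society approval", "society disapproval"): (8, 8),
    ("legal", "illegal"): (8, 8),
    ("forgivable", "unforgivable"): (6, 5),
}

# The per-type counts sum exactly to the 2100 items stated in the text.
assert sum(EMOTION_LEXICON_SIZES.values()) == 2100
```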

4.3 Processing Example

To better visualize the flow of the algorithm we present the processing of one of the sentences from the Appendix at the end of this paper. Original sentence:

a) Extraction of the input phrase:

Okane wo nusumu −> Okane wo nusumu

b) Phrase modification with causality morphemes. Meaning of all phrases similar to "because of stealing money":

Okane wo nusumu-to
Okane wo nusumu-node
Okane wo nusumu-kara
Okane wo nusun-de
Okane wo nusun-dara

c) All modified phrases used separately as queries in Yahoo! Japan. Extracting up to 500 snippets for each phrase.

d) Matching to the predetermined lexicons. Looking within the snippets for words from the emotion lexicon and the consequence lexicon. E.g., for emotions: found 9 words in total: sadness=2, anger=3, fear=1, among others.

e) Ranking creation:

1. anger (3/9)
2. sadness (2/9)
3. fear (1/9)
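The ranking step (e) amounts to sorting matched emotion types by relative frequency; a minimal sketch reproducing the example above (the function name is ours; the 9-word total includes matches for emotion types outside the top three):

```python
def build_ranking(match_counts, total):
    """Step (e): order matched emotion types by relative frequency."""
    ordered = sorted(match_counts.items(), key=lambda kv: kv[1], reverse=True)
    return [f"{i}. {emotion} ({count}/{total})"
            for i, (emotion, count) in enumerate(ordered, start=1)]

for line in build_ranking({"anger": 3, "sadness": 2, "fear": 1}, total=9):
    print(line)
```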

5. Preliminary Experiment

We tested the agent with one hundred queries concerning situations from our daily routine (e.g. "eating a hamburger"), situations with which we are familiar from the TV on a daily basis (e.g. "killing a man"), or morally neutral ones (e.g. "earth turns"). We have not asked about the situations from the questionnaire since our results' database has yet to be expanded. For this matter we plan to create its on-line version in English, Polish and Japanese.

The database of situations for setting the moral threshold is still being discussed and the present database did not give many full agreements. Therefore, in this preliminary experiment we did not use the situations from the questionnaire, but rather tested the algorithm on the above-mentioned tripartite collection of situations. The situations were selected according to their moral ambiguity.

5.1 Results

The query domain was set as the general domain of Yahoo! Japan (www.yahoo.co.jp/) with the limitation to the first 500 snippets for every causality morpheme modification. As mentioned in 4.1, the algorithm matches expressions from the lexicons to the data obtained from the Internet. The matched expressions are extracted, added up and ranked to provide the emotions and consequences associated with the input situations. For example, a phrase "thank you" is likely to associate with gratitude, relief, or joy. The results with accompanying comments are presented in the tables below. To avoid noise and misunderstandings, the tables show approximately only the first 50% of all emotions extracted for each query.


To obtain a full view on the evaluated situation it has to be perceived from three points of view. The first one is the Actor, the direct doer of the action (see: Table 1a). The second one is the Object, on which the action is performed (see: Table 1b). Finally, the third one is the Spectator watching the event. To avoid noise we assumed that, e.g., "hitting" is less unpleasant than "being hit" and "winning a lottery" is more pleasant than "seeing winning a lottery". To visualize that in our research we used a two-dimensional Cartesian space to make the differences easily comprehensible. The two axes of the space represent: the x-axis, positive and negative action consequences, and the y-axis, the pain and pleasure rate (see: Figure 2). Another concern involves determining the nature of the action's objects, because while "kicking a ball" is rather acceptable, "kicking a cat" is less so.


Figure 2. Cartesian coordinate system illustrating differences discussed in section 5.1
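The placement in the two-dimensional space of Figure 2 can be illustrated with a toy sketch; the situations and coordinates below are invented for illustration and are not taken from the paper's data:

```python
# Place an evaluated situation in the two-dimensional space of Figure 2:
# x-axis: action consequences (negative..positive),
# y-axis: pain..pleasure rate.
# Situations and coordinates are illustrative only.

def place(consequence, pleasure):
    """Clamp both coordinates to [-1.0, 1.0] and return the point."""
    clamp = lambda v: max(-1.0, min(1.0, v))
    return (clamp(consequence), clamp(pleasure))

points = {
    "hitting somebody": place(-0.6, -0.3),  # Actor's view
    "being hit": place(-0.6, -0.8),         # Object's view: less pleasant
    "winning a lottery": place(0.7, 0.9),
}
```

Separating the consequence axis from the pleasure axis mirrors the assumption quoted above that "hitting" and "being hit" share consequences but differ in unpleasantness.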

Table 2 shows the results for a query "someone has killed a man", for which the algorithm found 97 opinions in the sentence database. Of all opinions found, 33% (32/97) consisted of expressions of fear. For a comparison, below we attach a table presenting the results for a query "killing 3 people" (see: Table 3). The emotion that was expressed most often for this situation was "anger". Finding also "joy" among the extracted emotions might be surprising or disturbing. However, this could be a result of not considering a deeper context of the emotive expression, as the expressions could have been part of, e.g., a review of a game, where it is considered that shooting three people (enemies) is better than two. Upgrading the system with the ability to process this important feature is one of the necessary future tasks.

Table 1a. Results for a query: "to fire somebody"

Emotions: Results:
fear 7/40
joy 6/40
excite 6/40
relief 5/40
anger 4/40

Table 1b. Results for a query: "to be fired"

Emotions: Results:
sadness 6/25
fear 5/25
joy 4/25


Table 2. Results for a query: "someone has killed a man"

Emotions: Results:
fear 32/97
excite 16/97
sadness 15/97

Table 3. Results for a query: "killing 3 people"

For Table 4 we changed the Object for a more emotional one, which resulted in 59% (dislike + anger + shock / all) of results showing disapproval.

Although not all results were ideal, many of them were promising. An example of a positive result is presented in Table 5, where associations for "son marrying someone" consisted in 96% (joy + like / all) of positive emotions.

Other results, including both emotional associations and consequence extraction, are presented in the appendix at the end of this paper.

Table 4. Results of a query: "has killed mother"

Emotions: Results:
dislike 36/110
anger 16/110
shock 13/110

Table 5. Results for a query: "son marrying someone"

Emotions: Results:
joy
like

6. Conclusions and Future Work

Treating moral reasoning as a mathematically calculative process opens the field of machine ethics for statistics. Experiments conducted and presented in this paper prove that we can extract proper data from the Web and process it to make machines imitate millions, not few [17].

When working with consequences one has to compare both the direct and the long-term effects of the action. This is one of the key features of a moral agent since, as we know from Kohlberg's Heinz dilemma example [3], the significance lies not in making the moral judgment itself but in its justification.

In the near future we plan to start processing not only possessive pronouns like "my" or "your" but also verbs, since our subsequent queries in that field showed that our agent is able to differentiate "losing", "stealing" and "taking away" one's fortune. We also plan to release an on-line version of the questionnaire and build a database of moral judgments made by human respondents. By comparing it with the results gathered by the agent we will be able to improve its scope, sensitivity and effectiveness. Furthermore, we will extend our research with quantity and emotion robustness.

7. Acknowledgements

This research was partially supported by a (JSPS) KAKENHI Grant-in-Aid for JSPS Fellows (Project Number: 22-00358).


# | Transliteration | English translation | Emotions and occurrence | Consequences
1. | Ringo wo nusumu | Stealing an apple | surprise 1/6, anger 2/6 | reprimand / scold
2. | Okane wo nusumu | Stealing money | anger 3/9, sadness 2/9 | penalty / punishment, reprimand / scold
3. | Ushi wo taberu | Eating beef | joy 14/48, excite 8/48, fear 7/48 | penalty / punishment
4. | Buta wo taberu | Eating pork | joy 10/23, excite 7/23 | agreement
5. | Hana ga saku | Flowers bloom | joy 56/87 | reward
6. | Inshu-unten wo suru | Driving after drinking | fear 5/19, sadness 3/19 | penalty / punishment
7. | Hannin wo korosu | Killing a criminal | joy 8/39, excite 8/39, fear 7/39 | penalty / punishment
8. | Kanojo wo ubau | Stealing a girlfriend | sadness 3/17, dislike 3/17, excite 3/17, likeness 2/17, joy 2/17, anger 2/17 | forgiven
9. | Sekkusu wo suru | Having sex | joy 17/41, likeness 16/41 | penalty / punishment
10. | Hito wo damasu | Deceiving somebody | joy 12/37, fear 10/37 | forgiven
11. | Tomodachi wo damasu | Deceiving a friend | joy 6/14, fear 3/14 |
12. | Hito ni damasareru | Being deceived | joy 25/61, sadness 17/61 | reprimand / scold
13. | Uso wo tsuku | Telling a lie | joy 12/61, dislike 11/61, sadness 10/61 | penalty / punishment
14. | Hito ni damasareru | Being tricked | sadness 10/21, excite 4/21 |
15. | Robotto ni korosareru | Being killed by a robot | |

Appendix. Results for additional queries and estimated action consequences

References

[1] Komuda, R. (2009). Kohlberg's Theory of Moral Development Stages as a Path to Follow in Machine Ethics Research. Language Acquisition and Understanding Research Group (LAU) Technical Reports, Winter 2009, p. 6-10.

[2] Moore, R. (2000). The Light in Their Consciences: The Early Quakers in Britain 1646-1666. Pennsylvania State University Press, University Park.

[3] Kohlberg, L. (1981). Essays on Moral Development, Vol. 1: The Philosophy of Moral Development. Harper & Row, San Francisco, CA.

[4] Anderson, M., Anderson, S. L. (2007). Machine Ethics: Creating an Ethical Intelligent Agent. AI Magazine, 28 (4) 15-26.

[5] Shi, W., Ptaszynski, M., Rzepka, R., Araki, K. (2008). Emotive Information Discovery from User Textual Input Using Emotion Expression Element and Web Mining (in Japanese). IEICE Technical Report, Thought and Language, 108, No. 353, p. 31-34.

[6] Ptaszynski, M., Dybala, P., Shi, W., Rzepka, R., Araki, K. (2008). Disentangling Emotions from the Web: Internet in the Service of Affect Analysis. In: Proceedings of the Second International Conference on Kansei Engineering & Affective Systems (KEAS'08), p. 51-56.

[7] Reeves, B., Nass, C. (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge University Press.

[8] Surowiecki, J. (2005). The Wisdom of Crowds. Anchor Publ.

[9] Ptaszynski, M., Dybala, P., Shi, W., Rzepka, R., Araki, K. (2009). Conscience of Blogs: Verifying Contextual Appropriateness of Emotions Basing on Blog Contents. In: Proceedings of The Fourth International Conference on Computational Intelligence, p. 1-6.

[10] Taylor, J. (2010). Ductor Dubitantium, or The Rule of Conscience, Abridged. Gale ECCO, Print Editions (May 27, 2010); originally appeared in 1660.

[11] Alger, W. R. Notebooks, 1822-1905. Andover-Harvard Theological Library, Harvard Divinity School.

[12] Thompson, R. A., Laible, D. J., Ontai, L. L. (2003). Early Understandings of Emotion, Morality, and Self.

[13] Rzepka, R., Ge, Y., Araki, K. (2006). Common Sense from the Web? Naturalness of Everyday Knowledge Retrieved from WWW. Journal of Advanced Computational Intelligence and Intelligent Informatics, 10 (6) 868-875.

[14] Kudo, T. MeCab: Yet Another Part-of-Speech and Morphological Analyzer. http://mecab.sourceforge.net/

[15] Nakamura, A. (1993). Kanjo Hyogen Jiten [Dictionary of Emotive Expressions] (in Japanese). Tokyodo Publishing, Tokyo.

[16] Powers, T. (2006). Prospects for a Kantian Machine. IEEE Intelligent Systems, 21 (4) 46-51.

[17] Rzepka, R., Araki, K. (2005). What Statistics Could Do for Ethics? The Idea of Common Sense Processing Based Safety Valve. In: AAAI Fall Symposium, Technical Report FS-05-06, p. 85-87.
