
Biased Bots: Conversational Agents to Overcome Polarization

Tilman Dingler, University of Melbourne, Melbourne, Australia, [email protected]

Ashris Choudhury, Indian Institute of Technology Kharagpur, India, [email protected]

Vassilis Kostakos, University of Melbourne, Melbourne, Australia, [email protected]

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

ACM. UbiComp/ISWC’18 Adjunct, October 8–12, 2018, Singapore, Singapore. ACM 978-1-4503-5966-5/18/10. https://doi.org/10.1145/3267305.3274189

Abstract

In today’s media landscape, emotionally charged topics, including issues related to politics, religion, or gender, can lead to the formation of filter bubbles and echo chambers, and subsequently to strong polarization. An effective way to help people break out of their bubble is engaging with other perspectives. When caught in a filter bubble, however, it can be hard to find a conversation partner with opposing views in one’s social surroundings to discuss a particular topic with. Moreover, such conversations can be socially awkward and are therefore often avoided. In this project, we aim to train chat-bots to become highly opinionated on specific topics (e.g., pro/against gun laws) and to provide the opportunity to discuss political views with a highly biased opponent.

Author Keywords

Critical Media; Depolarization; Chatbots; Conversational Agents

ACM Classification Keywords

H.5.m [Information interfaces and presentation (e.g., HCI)]: Miscellaneous

Introduction

Due to rapid technological developments, the world’s information is now at the fingertips of almost anyone, anytime and anywhere. Not only do we consume unprecedented amounts of information this way, but we also share our thoughts and experiences with others and thereby contribute to public opinion formation. This has led to the proclamation of the ”democratisation of information”, famously resulting in considerable political shifts, such as the Arab Spring in 2010. In recent years, however, concerns have started mounting about algorithms increasingly placing users into filter bubbles [10] and about social media contributing to the creation of echo chambers, both of which amplify and reinforce people’s views, beliefs, and convictions.

Figure 1: Biased Bots are trained on a variety of topics and can take on different sides of a controversial debate by presenting arguments to support or refute a particular point of view.

We therefore started to experiment with conversational agents in the form of chat-bots (see Figure 1), which can pick different sides of a controversial debate and present arguments to support or refute a particular point of view. These bots draw their arguments from media sources that tend to reside on the extreme ends of a controversial issue. By inviting users to express their tendency to agree or disagree with a controversial statement, these Biased Bots pick the side opposing the user’s view and subsequently present arguments for discussion. We hypothesize that such chat-bots can lower the social threshold of engaging with opposing perspectives and lead to conversations that invite users to re-think their position or enhance the quality of their opinions and arguments [2].

Background

Without critically reflecting on information and its sources, we are likely to fall victim to our own cognitive biases, causing us to confirm our existing beliefs (confirmation bias) and to repudiate opposing arguments. In today’s media landscape, emotionally charged topics in particular can lead to strong polarisation. Such polarisation is characterised by ”an intense commitment to a candidate, a culture, or an ideology that sets people in one group apart from people in another, rival group” [3]. The greater the divide, the higher the potential hostility and distrust between opposing groups. The exploitation of biases and polarisation has enabled populist actors to substantially influence public opinion.

In other debates, such as those about evolution¹ or vaccinations [4], persistent lobbying and the deliberate spread of misinformation (fake news) have eroded public trust in scientific evidence and facts altogether. Established facts are often no longer seen as proven to be true, but rather as open to interpretation and debate. When doubts and misinformation form the basis for decision-making, this becomes problematic.

¹http://www.interacademies.org/10878/13901.aspx

With regard to filter bubbles, a term coined by Eli Pariser describing the potential for online personalization to effectively isolate people from a diversity of viewpoints [10], Flaxman and colleagues [5] investigated the segregation of people with different ideological viewpoints as a result of getting their news from social media websites and search engines. Especially polarizing topics can lead to such a segregation of viewpoints, which can eventually result in distrust of media outlets and hostility towards rival groups. While recommender systems expose users to a narrowing set of items over time, Nguyen et al. [9] found that users can indeed break out of their bubble by diversifying those recommendations.

Gillani et al. [6] introduced Social Mirror, a visualization tool that allows Twitter users to explore the politically active parts of their social network. They found that recommending Twitter accounts of the opposite political ideology to follow can effectively reduce people’s belief in the political homogeneity of their network connections. As William Greider, cited in [1], points out: ”Strange as it seems in this day of mass communications, democracy still begins in human conversation.” Political conversations, however, can often be uncomfortable and invite conflict [11], which can consequently lead to issues not being discussed among friends and family. Hence, we risk missing out on the interpersonal conversations that would allow us to re-think aspects of an argument and subsequently reduce cognitive inconsistencies [13].

Figure 2: Our bot starts the conversation by making a controversial statement in order to elicit the user’s view or attitude and then picks the opposing side of the argument.

Being an informed citizen requires attending to arguments both supporting and opposing one’s own perspective. Important related works therefore include recommender systems and conversational agents. Schwind et al. [12] showed that recommender systems presenting content inconsistent with users’ prior perspective can help overcome confirmation bias. Bots and conversational agents have been used in a variety of scenarios. Haller and Rebedea [7] describe a chat-bot that draws information from websites to recreate the personality of a historical figure that middle-schoolers can chat with. Klopfenstein et al. [8] recently defined a bot interface paradigm promoting context, history, and structured conversation elements to create a conversational user experience. They found that integrating bots into existing messaging platforms is crucial to support discoverability and to provide simple access with a clear purpose. In an effort to use conversational agents both as a mirror of and an opponent to one’s views, we present the concept of Biased Bots, which we subsequently plan to iterate on and deploy on social media platforms.

Biased Bots

Our chat-bots are based on an information pipeline that uses web crawlers to retrieve information from online media sources and extracts arguments using Natural Language Processing (NLP). For each issue, we create two bots that reside on the two extreme ends of a controversial issue: pro and con. A conversation starts by inviting users to express their proclivity to agree or disagree with a controversial statement (see Figure 2). Based on the user’s response, the bot picks the side opposing the user’s view and subsequently presents arguments for discussion. The purpose of the resulting Controversial Bot is to invite users to consider the opposing perspective and to challenge the bot’s arguments. We also plan to experiment with Agreeable Bots, where the bot takes the same side of the argument, generally agreeing with the user’s perspective, and delivers arguments ranging from moderate to highly exaggerated. This bot’s purpose is to test the user’s susceptibility to confirmation bias and to raise the user’s awareness of such biases.
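To make the side-picking concrete, the following is a minimal sketch in Python. The BiasedBot class, the pick_opposing_bot function, and the example arguments are illustrative assumptions rather than the deployed implementation; the 0–5 agreement scale is borrowed from our data collection described below.

```python
# Illustrative sketch of how a Biased Bot could pick the side opposing
# the user's stance. All names and arguments here are placeholders.
from dataclasses import dataclass, field

@dataclass
class BiasedBot:
    topic: str
    side: str                                  # "pro" or "con"
    arguments: list = field(default_factory=list)

def pick_opposing_bot(user_agreement: int,
                      pro_bot: BiasedBot,
                      con_bot: BiasedBot) -> BiasedBot:
    """Given the user's agreement with a controversial statement
    (0 = strongly disagree ... 5 = strongly agree), return the bot
    arguing for the opposite side."""
    return con_bot if user_agreement >= 3 else pro_bot

pro = BiasedBot("gun law", "pro", ["Placeholder argument for stricter gun laws ..."])
con = BiasedBot("gun law", "con", ["Placeholder argument against stricter gun laws ..."])

opponent = pick_opposing_bot(user_agreement=4, pro_bot=pro, con_bot=con)
print(opponent.side)  # "con": the user leans pro, so the bot argues the con side
```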

Prototype

For our first prototype, we used the Gupshup bot development platform (https://www.gupshup.io/). The prototype leverages Facebook’s Messenger API to allow users to interact with the bot from within the social media platform’s ecosystem.

Users start a new chat by addressing the bot, which responds by displaying a carousel of available topics (Figure 2). It then presents a controversial statement and asks the user for general agreement or disagreement. Based on the user’s answer, the bot starts providing arguments opposing the user’s view (Figure 3). The current prototype supports the topics of abortion, gun law, and climate change. Pro and con arguments are at this point manually retrieved from http://www.debate.org/. The bot encourages the user to type counter-arguments. After taking turns and providing a series of three arguments, the bot wraps up the conversation by asking whether it was helpful in making the user at least consider some of the arguments brought forward. We store all user responses in a database, based on which we plan to refine our bots in subsequent development iterations.
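This scripted flow can be pictured as a small turn-taking loop. The sketch below is a hypothetical reconstruction: the ARGUMENTS table, run_scripted_chat, and the closing question wording are placeholders, not the actual Gupshup script.

```python
# Hypothetical reconstruction of the scripted conversation flow: the bot
# alternates with the user for three argument rounds, then asks a
# closing question. Argument texts are placeholders.
ARGUMENTS = {
    ("gun law", "con"): [
        "First scripted counter-argument ...",
        "Second scripted counter-argument ...",
        "Third scripted counter-argument ...",
    ],
}

def run_scripted_chat(topic: str, bot_side: str, get_user_reply) -> list:
    """Run one scripted session. get_user_reply is a callable returning
    the user's free-text counter-argument for each bot turn."""
    transcript = []
    for argument in ARGUMENTS[(topic, bot_side)]:      # three turns
        transcript.append(("bot", argument))
        transcript.append(("user", get_user_reply(argument)))
    transcript.append(("bot", "Did this conversation make you at least "
                              "consider some of the arguments brought forward?"))
    return transcript

# Example run with canned user input:
log = run_scripted_chat("gun law", "con", lambda _: "I disagree because ...")
```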


Bot Evolution

To prototype the conversations and collect initial feedback, we started by developing a series of scripted bots, where the conversation flow follows a rigid structure based on manual information retrieval, predefined arguments, and predefined user interactions.

Figure 3: As the conversation unfolds, the bot brings up common arguments opposing the user’s perspective.

As issues tend to develop rapidly, manually scripting arguments and conversations may not be feasible. We therefore started crawling a selection of news sites to feed issues into our database automatically and to spawn new chat-bots. Using Information Retrieval (IR) and NLP techniques, we plan to further add intelligence to these bots and to train a bot AI that learns on-the-fly from users’ answers in order to tailor responses more specifically to their arguments. To this end, we plan to feed user responses back into a Recurrent Neural Network (RNN) to further build the bot’s vocabulary and expand its knowledge and debating skills on a given topic. This way, the bot will ’learn’ as more users interact with it. The distilled information will be used to automate the tabulation of controversial arguments and to verify facts via crowdsourced fact-comparisons.
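As a rough idea of the crawling step, the sketch below pulls paragraph text from a curated source list and keeps paragraphs mentioning topic keywords as candidate arguments. The URLs and keyword set are placeholders, and the real pipeline would layer NLP-based argument extraction on top of this simple filter.

```python
# Minimal crawling sketch, assuming a curated list of news URLs
# (the one shown is a placeholder). Uses requests and BeautifulSoup.
import requests
from bs4 import BeautifulSoup

SOURCES = ["https://example.com/opinion/gun-laws"]   # placeholder URLs
KEYWORDS = {"gun", "firearm", "second amendment"}

def crawl_candidate_arguments(urls, keywords):
    """Collect paragraphs that mention the topic as candidate arguments."""
    candidates = []
    for url in urls:
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        for p in soup.find_all("p"):
            text = p.get_text(" ", strip=True)
            if any(k in text.lower() for k in keywords):
                candidates.append(text)   # stored for later NLP filtering
    return candidates
```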

Data Collection

Our chat-bots are connected to a backend server running on Node.js, which stores users’ interaction data and responses in a Firebase database. A common database allows bot deployments across platforms to share content with each other. The data submitted by users provides both quantitative data points (0 = strongly disagree, 5 = strongly agree) and qualitative data (raw responses). The bot also collects data by ”interviewing” users post-hoc about their experience with it and about its success in contributing to the user’s opinion formation.
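For illustration, the snippet below shows the shape one stored interaction record could take. Since our examples use Python, the write is sketched with the official firebase_admin SDK, although the production backend described above runs on Node.js; the record fields, credential path, and database URL are placeholder assumptions.

```python
# Illustrative shape of a stored interaction record, pushed to a
# Firebase Realtime Database via the firebase_admin SDK.
# Credential file and database URL below are placeholders.
import firebase_admin
from firebase_admin import credentials, db

cred = credentials.Certificate("service-account.json")        # placeholder path
firebase_admin.initialize_app(cred, {"databaseURL": "https://example.firebaseio.com"})

record = {
    "topic": "gun law",
    "agreement": 4,            # quantitative: 0 = strongly disagree ... 5 = strongly agree
    "responses": ["I disagree because ..."],                   # qualitative raw replies
    "exit_interview": "The bot made me consider one argument.",  # post-hoc interview
}
db.reference("interactions").push(record)   # one node per chat session
```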

Discussion

In this position paper, we present Biased Bots, conversational agents that invite users to engage in conversations that tend to be difficult to have with friends and family members, because 1) users are mostly surrounded by like-minded people, or 2) discussing controversial topics with someone holding an opposing perspective can be awkward. As we iterate on the prototype’s conversational abilities and topic spectrum, we are interested in the effects such conversations can have on users when they encounter a bot that argues in opposition to (or in deliberate compliance with) their respective views. The nature of each bot is highly dependent on the sources from which it retrieves its information. Currently, we manually curate the bots’ arguments and conversation flow, but future versions will be seeded with a list of online sources whose political inclinations are generally known.

While we intend to encourage people to consider both sides of an argument and thereby foster critical thinking, we also explore the ethical aspects of the bots’ potential to ”sway” users’ opinions. It is thus imperative to provide transparency about the bot’s purpose and sources of information, both while users interact with it and through comprehensive debriefings. Our goal is to use such technologies to encourage people to see two sides of an argument and consider a middle ground for discussion, rather than downright refusing to have a conversation in the first place. Biased chat-bots are therefore part of our efforts to design and build critical media technologies, i.e., systems intended to invite users to reflect on their views, to acquire and advance media literacy and critical thinking skills, and subsequently to contribute to a more informed public discourse and to depolarisation.


Acknowledgments

This work is partially funded by the Samsung GRO Grant number IO170924-04695-01.

References

1. Rob Anderson, Robert Ward Dardenne, and G. Michael Killenberg. 1994. The conversation of journalism: Communication, community, and news.

2. Michael Billig. 1996. Arguing and thinking: A rhetorical approach to social psychology.

3. David Blankenhorn. 2015. Why Polarization Matters. The American Interest (2015).

4. Dennis K. Flaherty. 2011. The vaccine-autism connection: a public health crisis caused by unethical medical practices and fraudulent science. Annals of Pharmacotherapy 45, 10 (2011), 1302–1304.

5. Seth Flaxman, Sharad Goel, and Justin M. Rao. 2016. Filter bubbles, echo chambers, and online news consumption. Public Opinion Quarterly 80, S1 (2016), 298–320.

6. Nabeel Gillani, Ann Yuan, Martin Saveski, Soroush Vosoughi, and Deb Roy. 2018. Me, My Echo Chamber, and I: Introspection on Social Media Polarization. arXiv preprint arXiv:1803.01731 (2018).

7. Emanuela Haller and Traian Rebedea. 2013. Designing a chat-bot that simulates an historical figure. In 2013 19th International Conference on Control Systems and Computer Science (CSCS). IEEE, 582–589.

8. Lorenz Cuno Klopfenstein, Saverio Delpriori, Silvia Malatini, and Alessandro Bogliolo. 2017. The Rise of Bots: A Survey of Conversational Interfaces, Patterns, and Paradigms. In Proceedings of the 2017 Conference on Designing Interactive Systems (DIS ’17). ACM, New York, NY, USA, 555–565. DOI: http://dx.doi.org/10.1145/3064663.3064672

9. Tien T. Nguyen, Pik-Mai Hui, F. Maxwell Harper, Loren Terveen, and Joseph A. Konstan. 2014. Exploring the Filter Bubble: The Effect of Using Recommender Systems on Content Diversity. In Proceedings of the 23rd International Conference on World Wide Web (WWW ’14). ACM, New York, NY, USA, 677–686. DOI: http://dx.doi.org/10.1145/2566486.2568012

10. Eli Pariser. 2011. The filter bubble: What the Internet is hiding from you. Penguin UK.

11. Michael Schudson. 1997. Why conversation is not the soul of democracy. Critical Studies in Media Communication 14, 4 (1997), 297–309.

12. Christina Schwind, Jürgen Buder, and Friedrich W. Hesse. 2011. I Will Do It, but I Don’t Like It: User Reactions to Preference-inconsistent Recommendations. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’11). ACM, New York, NY, USA, 349–352. DOI: http://dx.doi.org/10.1145/1978942.1978992

13. John R. Zaller. 1992. The nature and origins of mass opinion. Cambridge University Press.
