
PACM on Human-Computer Interaction, Vol. 1, No. CSCW, Article 98. Publication date: November 2017.

Crowdcasting: Remotely Participating in Live Events Through Multiple Live Streams

JOHN TANG, Microsoft Research
GINA VENOLIA, Microsoft Research
KORI INKPEN, Microsoft Research
CHARLES PARKER, Microsoft Research
ROB GRUEN, Microsoft Research
ALICIA PELTON, Microsoft Research

We designed and developed a “crowdcasting” prototype to enable remote people to participate in a live event through a collection of live streams coming from the event. Viewers could select from a choice of streams and interact with the streamer and other viewers through text comments and heart reactions. We deployed the prototype in three live events: a Winterfest holiday festival, a local Women’s March, and the South by Southwest festival. We found that viewers actively switched among a choice of streams from the event and actively interacted with each other, especially through text comments. Streams of walking about among exhibits or city sights elicited more user interaction than streams of lectures. We observed that voluntary viewers had a wider variation in how long they viewed the event, switched among streams less, and interacted less through text comments than viewers recruited through Mechanical Turk.

CCS Concepts: • Human-centered computing → Collaborative and social computing → Empirical studies in collaborative and social computing

KEYWORDS: Live streaming; live events; Mechanical Turk.

ACM Reference format:

John Tang, Gina Venolia, Kori Inkpen, Charles Parker, Rob Gruen, and Alicia Pelton. 2017. Crowdcasting: Remotely Participating in Live Events Through Multiple Live Streams. Proceedings of the ACM on Human-Computer Interaction, 1, CSCW, Article 98 (November 2017), 18 pages. https://doi.org/10.1145/3134733

1 AN OPPORTUNITY AROUND LIVE STREAMING EVENTS

The recent popularity of mobile live streaming services, such as Facebook Live and Periscope, provides new user experience opportunities around publicly shared live streams. Live streaming usage statistics [10] documented the growth of Facebook Live throughout 2016, especially among the media, celebrities, and companies. In a study of mobile live streaming connected with social media, Tang et al. [14] identified an interesting opportunity around multiple live streams coming from the same event. In particular, breaking news and emerging events tend to attract multiple live streams and many viewers, such as the Paris terror attacks in 2015 [16] and protests against Donald Trump [6]. Clustering live streams together can enable remote viewers to find the view of the event most personally interesting to them. We refer to a collection of user-generated, independent live streams from the same event as a crowdcast, and we developed a prototype to explore how people could use such collections to remotely experience a live event.



The prototype enabled people not only to view the streams of an event, but also to interact with the streamers and viewers, so that they could participate in the event. We built a prototype mobile streaming client, a web viewer client (shown in Figure 1), and a mobile viewing client, which we deployed at live events. We discuss related work, the design goals and implementation of the prototype, and insights and methodological challenges learned from the deployments.

Fig. 1. Prototype web interface, showing one live stream on the Stage on the left, with viewer text comments overlaid, and the Wall of previews of five other streams on the right.

2 RELATED RESEARCH

An early form of remotely experiencing events together was envisioned in work on watching television together at a distance [5]. Studies have shown that television viewers commenting on Twitter as they watch create a backchannel among the viewers that encourages more interaction and engagement [9]. These tweets can also be analyzed afterwards to interpret the audience’s reactions and interests, which can be useful for planning future episodes. However, live streaming, which integrates text comments and reactions from viewers into the user interface, affords a more direct and immediate live backchannel among the viewers and the broadcaster.

A study of live streaming by Dougherty [4] examined the use of Qik, an early mobile live streaming service (no longer available) that offered video broadcast and text chat feedback integrated with social networking. She reported on overall usage patterns based on coding 1000 videos and interviewing seven broadcasters. She focused on the use of Qik for civic content (journalistic, activist, political, educational) and found that 11% of her sample qualified as having civic value. The rest of the live streams were of a personal nature, focused on family or friends, and not generally of value in the public realm. We likewise want to build on the potential of highlighting the minority of live streams that have civic interest.

Tang et al. [14] looked at the live streaming apps Meerkat and Periscope that were popular at the time (Meerkat is also no longer available). We wanted to explore what kinds of live streams were being viewed and how the viewers and streamers interacted. We found that the most common kind of live stream then was an asymmetrical chat, where viewer questions submitted by text comments were answered by the streamer in the audio-video stream. We were intrigued by instances of multiple streams coming from the same event, which seemed to highlight the streams that had more civic value and filter out the less interesting personal streams. We also found that users were getting overwhelmed by the number of publicly shared live streams, and were looking for help to discover the streams that were of interest to them.

More recently, Haimson and Tang [7] investigated how people experience multiple streams from events in Periscope, Facebook Live, and Snapchat Live Stories. Using interviews and surveys, they focused on how engaged viewers were when using those tools to watch events. Even though Snapchat Live Stories are compiled from uploaded video snaps and thus not literally live, they were posted quickly enough to evoke the sense of engaging with a recent event, similar to viewing live streams. They found that users appreciated the interactive nature of live streams where their comments and reactions could change the streamer’s actions. They also liked how Snapchat Live Stories brought together different views of an event, allowing them to quickly switch from one view to the next according to their interest.

While there have been a few different research prototypes that aggregate video views together into a viewing experience, Rivulet by Hamilton et al. [8] is the most closely related prototype. Rivulet enabled viewers to preview and select which of several streams from an event they wanted to watch. It also integrated a text chat across all streams of the event and a hearts reaction mechanism within each stream. Rivulet was deployed in a jazz music festival where they had four concurrent streamers with an audience of 115 viewers. Because all the streams in Rivulet shared a common text chat channel, users got confused as to which stream a particular comment was referencing. In the deployment of Rivulet, viewers were recruited from the Amazon Mechanical Turk (MTurk) crowdsource platform, leaving open the question of how their behavior as paid viewers differed from those who were intrinsically motivated to view an event.

3 PROTOTYPE DESIGN

Our prototype and the study of its deployment build on prior research by focusing on the following design goals:

- Creating a live event experience from multiple live streams
- Enabling interaction among viewers and streamers that scales up to hundreds or thousands of participants
- Exploring how users interact in different types of events
- Exploring how the recruitment of viewers affects evaluation through deployments

Our contribution focuses on how the design of our prototype goes beyond current commercial live streaming applications and the Rivulet prototype from prior work [8], and how our deployment of the prototype explores crowdcasts of different kinds of events and differences between participants recruited via MTurk and voluntary viewers of the crowdcast.

Our prototype goes beyond current commercial tools (such as Periscope and Facebook Live) which focus on individual live streams without any way to gather related streams together. Without grouping together live streams from the same event, it is not easy to switch among live streams offering different views of the event, or even realize that other live streams are available to watch from a different perspective. Haimson and Tang [7] found that users liked being able to quickly switch among different views of the same event in Snapchat Live Stories. However, since those views were compiled together after the fact, they did not afford live interaction with the streamer or the other viewers, which users preferred in live stream experiences.

The design of our prototype also goes beyond Rivulet [8] by enabling scaling up to thousands of viewers, as has occurred in recent popular live streams. The design of text comments and hearts in our prototype enables managing contributions from thousands of participants. The design also keeps text comments scoped to the live stream being watched, in contrast to the shared text chat across all streams in Rivulet.

Furthermore, by deploying our prototype at several different kinds of events, and even streaming different content over several days of an event, we explored how usage of the prototype interacted with the kind of event. While Rivulet [8] and other prior work [15] had explored music festivals, we observed crowdcasts of a variety of events to explore how the crowdcast experience varies with the type of event. In addition, while Rivulet recruited viewers through MTurk, we wanted to study a deployment that included viewers who were voluntarily interested in the events.

3.1 Prototype Implementation

Since none of the commercial live streaming apps offered an SDK that allowed us to add features and collect live streams into events, we developed our own streaming and viewing service. We built an Android client for broadcasting audio and video at events and also for viewing crowdcast events. Because we wanted to make it easy for many people to view crowdcasts, we also created a web viewer that worked in modern web browsers (e.g., Chrome, Edge). We describe the prototype web client for remotely participating in events (how most people viewed the events), the Android client for broadcasting the streams, and how viewing on the Android client differed from the web client.

While a major distinction of our concept is gathering live streams from the same event together for viewing in a crowdcast, the prototype we implemented focuses on the user experience of a crowdcast. Based on our observation of geo-tagged live streams, we believe that the set of related live streams can be readily identified based on location, using the crowd to filter out unrelated streams that are nearby. Also, we imagine that as people start live streaming in the vicinity of an ongoing event, they could be prompted to include their stream in the crowdcast. Opting in to contribute to a crowdcast would help the streamer anticipate a more public audience around the event than their typical streaming audience.
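The location-based grouping described above could be sketched as a simple proximity filter. This is illustrative only: the event anchor point, the 500 m radius, and all names here are assumptions, and it omits the crowd-based filtering of unrelated nearby streams that the text also envisions.

```typescript
// Hypothetical sketch: group geo-tagged live streams into one crowdcast
// by distance from the event's location.

interface GeoStream {
  id: string;
  lat: number;
  lon: number;
}

// Haversine distance in meters between two lat/lon points.
function distanceMeters(aLat: number, aLon: number, bLat: number, bLon: number): number {
  const R = 6_371_000; // mean Earth radius in meters
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(bLat - aLat);
  const dLon = toRad(bLon - aLon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(aLat)) * Math.cos(toRad(bLat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Keep only the streams close enough to the event to belong to its crowdcast.
function streamsNear(
  event: { lat: number; lon: number },
  streams: GeoStream[],
  radiusM = 500
): GeoStream[] {
  return streams.filter(
    (s) => distanceMeters(event.lat, event.lon, s.lat, s.lon) <= radiusM
  );
}
```

A production version would likely combine this with the opt-in prompt and crowd-sourced filtering the text describes, rather than relying on distance alone.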

3.1.1 Prototype Web Client for Viewing. The prototype web client provided an easy way for viewers to remotely join crowdcasts. The prototype collected all related live streams for an event and displayed video previews of each on the Wall (right side of Figure 1). Clicking a preview placed that live stream on the Stage, where the video view was enlarged (left side of Figure 1) and the associated audio played. Users could switch to view a different stream by clicking on its preview in the Wall. If the stream being watched ended, its view disappeared from the Stage and another stream from the Wall could be selected.

Since prior research [7] showed that users appreciated interacting in live streams, our prototype offered several backchannel mechanisms to enable remotely participating in the event, beyond just passively watching it. Each stream included mechanisms for sharing text comments and heart reactions among the viewers and broadcaster of the stream. Since we anticipated that crowdcast audiences could be in the hundreds or thousands, we designed both the text and hearts features to accommodate that size. At that scale, showing every text message or every heart, as other commercial live streaming apps do, can be hard to process and obscure too much of the video content itself. Thus, commercial apps limit the number of viewers who can interact. For example, Periscope limits text commenting to the first 100 viewers, after which users can view and give hearts but not send text messages. We wanted to explore mechanisms by which all viewers could interact.

For text comments, we implemented a version of the Conversational Chat Circles interface [11] which allowed everyone to contribute to the text chat while filtering the number of comments down to a more manageable amount to share with the streamer and all other viewers. Any viewer could send a text comment by typing it into the text field at the bottom of the stream, and it appeared at the bottom for a subset of the viewers with the option to upvote it. If no one upvoted the comment, it disappeared after 7 seconds. As more people upvoted the comment, it got shown to more people until it crossed a threshold where the message floated to the top and was shown to all the viewers and the streamer. Thus, only a manageable number of comments were broadly shared, allowing users to focus on the streaming content.
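The escalation behavior described above can be sketched as a function from a comment's upvotes to the fraction of the audience that sees it. Only the 7-second expiry comes from the text; the promotion threshold, the initial sample size, and all names are assumptions for illustration.

```typescript
// Hypothetical sketch of the comment-escalation logic: unvoted comments
// expire, upvoted comments are shown to a widening sample of viewers, and
// comments past a threshold are broadcast to everyone and the streamer.

interface ChatComment {
  text: string;
  upvotes: number;
  postedAt: number; // ms since epoch
}

const EXPIRY_MS = 7_000;     // unvoted comments disappear after 7 seconds
const PROMOTE_THRESHOLD = 5; // assumed upvote count for full broadcast

// Fraction of the audience that should currently see this comment:
// 0 if it expired without upvotes, 1 once it crosses the threshold,
// and a widening sample in between.
function visibleFraction(c: ChatComment, now: number): number {
  if (c.upvotes === 0 && now - c.postedAt > EXPIRY_MS) return 0; // expired
  if (c.upvotes >= PROMOTE_THRESHOLD) return 1;                  // broadcast
  return 0.1 + 0.9 * (c.upvotes / PROMOTE_THRESHOLD);            // widening sample
}
```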

Similarly, while the flow of hearts in a rainbow of colors in Periscope can show popular interest in a stream, it can also interfere with viewing the content. We designed a more compact hearts indicator, which showed the proportion of viewers who had recently pressed the heart button. Pressing the heart gave it a highlight for 5 seconds and showed a pie chart of the proportion of viewers who had pressed the heart within the last 5 seconds (shown in the stream on the Stage in Figure 1). We consider the percentage of viewers giving a heart to be a more meaningful indicator of the amount of interest in the stream than showing each heart or reaction, as in Periscope and Facebook Live, which can be overwhelmed by just one person frequently pressing hearts. Showing the percentage of viewers recently giving a heart remains consistent even as the number of viewers scales up to hundreds or thousands. Our heart indicator was shown in the bottom right corner of every stream and also in the video previews on the Wall.
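The hearts indicator reduces to a simple windowed proportion. A minimal sketch, with the 5-second window taken from the text and the data shapes assumed:

```typescript
// Hypothetical sketch of the compact hearts indicator: the fraction of
// current viewers who pressed the heart button within the last 5 seconds.

const HEART_WINDOW_MS = 5_000;

// heartPresses maps viewerId -> time (ms) of that viewer's latest heart press.
function heartProportion(
  heartPresses: Map<string, number>,
  viewerCount: number,
  now: number
): number {
  if (viewerCount === 0) return 0;
  let recent = 0;
  for (const t of heartPresses.values()) {
    if (now - t <= HEART_WINDOW_MS) recent++;
  }
  // Clamp: a viewer may have hearted and then left the stream.
  return Math.min(1, recent / viewerCount);
}
```

Because the result is a proportion rather than a raw count, it stays meaningful whether the stream has ten viewers or ten thousand, which is the scaling property the design aims for.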

The number of current viewers of a stream was also shown at the bottom of each stream or preview, along with an arrow indicating whether that number was recently trending up or down. The total number of viewers of the event, along with its trending indicator, was shown below the Wall. We expect that the video preview, the number of viewers (and trending indicator), and the proportion of hearts indicators would be helpful cues in deciding which live stream to view.

The web viewer also showed some contextual information about the event (including a description and web pointer for more information) and a separate interface tab for a map of the event site showing the locations of each active streamer. The web client URL gave us an easy way to invite large numbers of impromptu viewers to participate in crowdcasts.

3.1.2 Prototype Android Client for Streaming and Viewing. The Android streaming client enabled broadcasting video and audio from an Android phone. Tapping a button on the client started streaming video and audio, which the prototype integrated into a crowdcast. The streaming client offered controls for managing the camera and the network bandwidth used for the stream, and a preview of what was being streamed. The streamer could see incoming text comments overlaid on top of the video preview, along with the hearts being shared on the stream. Counts of the viewers of this stream and of the total for the event (with trending arrows) were also shown.

Figure 2 shows the viewing experience on the Android client. The Wall, on the left, showed a preview of all streams currently available from the event. A pyramidal structure showed the longest-lived streams at the largest size in the top row with additional, smaller rows added as needed to fit all the currently active streams. Each preview also included the number of current viewers, with its trending arrow. Tapping on any preview expanded that stream to fill the screen (replacing the Wall), shown on the right of Figure 2. Audio from the stream started playing, and any text comments appeared. The numbers at the bottom indicated viewers for this stream on the left and for the whole crowdcast on the right, with trending arrows. Tapping the text bubble icon on the bottom left brought up an interface for sending text comments. Tapping on the heart icon on the bottom right sent a heart instant reaction. Swiping left or right switched to the next stream in the order in which they first appeared. The user could also go back to the Wall to choose a different stream. If a stream being watched ended, the view returned to the Wall. The smaller screen real estate of the Android client meant that the user could not see the Wall of previews of all the other live streams while watching a stream.
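The pyramidal Wall ordering can be sketched as a sort by stream start time followed by partitioning into rows. The row capacities here are assumptions; the text specifies only that the longest-lived streams occupy the larger top row, with smaller rows added as needed.

```typescript
// Hypothetical sketch of the Android Wall's pyramidal layout: the
// earliest-started (longest-lived) streams fill the large top row, and
// the rest spill into additional smaller rows.

interface LiveStream {
  id: string;
  startedAt: number; // ms since epoch
}

function wallRows(
  streams: LiveStream[],
  topRowSize = 2, // assumed capacity of the large top row
  rowSize = 3     // assumed capacity of each smaller row
): LiveStream[][] {
  const ordered = [...streams].sort((a, b) => a.startedAt - b.startedAt);
  const rows: LiveStream[][] = [ordered.slice(0, topRowSize)];
  for (let i = topRowSize; i < ordered.length; i += rowSize) {
    rows.push(ordered.slice(i, i + rowSize));
  }
  return rows;
}
```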

3.1.3 Backend Infrastructure. The prototype was developed using React Native for the web and Android platforms. It relied on three independent servers to: 1) provision the video streaming, 2) host the website for the web client, and 3) connect streams with viewers and their interactions and log usage data in a real-time database.

The video was streamed to a cloud-based Wowza streaming engine [17] server using the Real-Time Messaging Protocol (RTMP) over either cellular data or Wi-Fi networks. The client offered a range of video resolutions to adapt to the level of network connectivity available: from 720p (1280x720), generating 1.5 Mbps, down to QCIF (176x144), generating 280 kbps. HTTP Live Streaming (HLS) was used to stream to the web and phone clients. Latency from capturing the stream to displaying it to the viewer ranged from about 25 to 60 seconds. The Wowza server managed the real-time video streaming, video storage, and access.
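The resolution ladder above could be modeled as picking the highest preset whose bitrate fits the measured uplink. Only the 720p/1.5 Mbps and QCIF/280 kbps endpoints come from the text; the intermediate rung and the selection rule are assumptions for illustration.

```typescript
// Hypothetical sketch of the client's adaptive resolution selection.

interface Preset {
  name: string;
  width: number;
  height: number;
  kbps: number; // approximate bitrate the preset generates
}

// Endpoints from the text; the 360p rung is an assumed intermediate step.
const PRESETS: Preset[] = [
  { name: "720p", width: 1280, height: 720, kbps: 1500 },
  { name: "360p", width: 640, height: 360, kbps: 800 },
  { name: "QCIF", width: 176, height: 144, kbps: 280 },
];

// Pick the highest-quality preset that fits the available uplink,
// falling back to the lowest rung when even that doesn't fit.
function pickPreset(uplinkKbps: number): Preset {
  for (const p of PRESETS) {
    if (p.kbps <= uplinkKbps) return p;
  }
  return PRESETS[PRESETS.length - 1];
}
```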

A simple website, hosted on Azure, was a traditional HTTP-based file server hosting the HTML/CSS/JS/image files necessary for the web viewer application. The video.js third-party library rendered the video feeds in the web viewing client.

A Firebase real-time database provided the backend support for storage, notifications, and synchronization of all application state associated with the web and phone clients. This database managed all the metadata about video streams, chat messages, heart reactions, and viewer counts. Both the web and Android clients logged all state changes and user actions to an IIS event log.
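The usage logging implied above amounts to recording timestamped entries for every state change and user action. The record shape and names below are assumptions; the prototype wrote such entries to an IIS event log rather than the in-memory store sketched here.

```typescript
// Hypothetical sketch of a usage-log record and a minimal in-memory logger
// for the kinds of viewer actions the prototype recorded.

type ViewerAction = "select_stream" | "send_comment" | "upvote" | "heart";

interface LogEntry {
  viewerId: string;
  streamId: string;
  action: ViewerAction;
  at: number; // ms since epoch
}

class UsageLog {
  private entries: LogEntry[] = [];

  record(e: LogEntry): void {
    this.entries.push(e);
  }

  // Example analysis query: how often did a viewer perform a given action?
  count(viewerId: string, action: ViewerAction): number {
    return this.entries.filter(
      (e) => e.viewerId === viewerId && e.action === action
    ).length;
  }
}
```

Entries like these are what make the deployment analyses possible, e.g. counting how often a viewer switched streams or sent text comments.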


Fig. 2. The prototype Android viewing client showing the Wall of all the streams currently available for the event on the left, and the view of one stream on the right.

4 DEPLOYMENTS FOR TESTING

We deployed the working prototype at actual events to see how people would use it to remotely view events and interact with each other. We conducted two pilot deployments: a Winterfest holiday festival in December 2016 and a local Women’s March in January 2017. These pilots led to a deployment at the South by Southwest (SXSW) festival in March 2017. Deploying a prototype that involves many users around live events revealed a methodological challenge.

4.1 Data Collection Methodology

In each deployment, we collected logging data from our prototype that recorded the number and duration of viewers and all user interactions (selecting a stream to view, sending a text comment, upvoting comments, pressing the heart button). We also recorded the broadcast video and audio of the live streams. We collected survey responses that asked for users’ reactions to specific features (e.g., text comments, hearts), their perceptions of participating in the event, and what they liked most and would improve about the experience. Our deployment experiences gave us data about usage of and reactions to the prototype.

Live streams of breaking events can attract large numbers of live viewers [6, 16]. While this can readily happen in existing social media channels with millions of subscribers, it is difficult to accomplish large scale adoption in a prototype that is disconnected from any large-scale social network. Previous studies [8, 11] have shown the ability to quickly recruit over 100 viewers for live events using Amazon Mechanical Turk. We wondered whether effectively hiring viewers through MTurk would result in different behaviors than those who were intrinsically motivated to watch the events. Thus, we explored a mix of volunteer viewers and viewers recruited through MTurk.


4.2 Pilot Deployments

Our two pilot deployments helped us test the prototype platform at scale and refine our data collection methodology.

4.2.1 Winterfest. The Winterfest annual holiday festival was held in Seattle over about six weeks between the Thanksgiving and New Year’s holidays. It involved multiple venues (e.g., a miniature railroad village, an ice skating rink) and scheduled events (e.g., a children’s choir performance, a live ice sculpture demonstration) on a campus of several buildings. Winterfest had many different activities happening concurrently, with general popular interest.

One Saturday afternoon in December 2016, we sent out four members of our team to stream for about one hour. We aimed to hire 400 MTurk viewers who would use the prototype web client to observe the event for at least 20 minutes and complete a survey for a $6 task. Additionally, we brought 18 people into a usability lab setting where we could interview them about their experiences. Half of them used the web client, and the other nine used the Android client loaded onto phones that we provided to them. Their session lasted about 1 hour and they were given a $75 gratuity.

The Winterfest deployment was the first scaled-up test of the prototype, supporting multiple concurrent streams with an audience of 180 people at a live event. We were not able to reach our goal of 400 MTurk viewers, illustrating the challenge of recruiting a large number of viewers within the time window of a live event. Nonetheless, the audience enabled us to see how people used and reacted to the prototype on both the web and Android clients. The survey responses, and especially the interviews with the lab participants, helped us identify revisions to the design and implementation that improved the usability and performance of the prototype.

4.2.2 Women’s March. We covered one of the many satellite Women’s Marches held on Saturday, January 21, 2017, coordinated with the march in Washington, D.C. after the United States presidential inauguration. The march, held in Seattle, involved over 100,000 people along a 3-mile route. It featured many speeches (planned and impromptu), protest signs, and spontaneous singing and chanting. Given the strong public interest in this event, we tried to recruit voluntarily interested participants rather than hiring them through MTurk. Four members of our team streamed from the march. We sent email through various distribution lists within our large technology company inviting employees to remotely participate in the crowdcast. We instructed them on how to use the web viewer or install the Android client on their own smartphone. We asked that they watch for at least 20 minutes and fill out a survey for a $5 Amazon gift voucher gratuity.

The Women’s March deployment was the first time we relied solely on people who had an interest in the event, with a nominal gratuity for filling out a survey if they watched for 20 minutes. There were 159 viewers of the Women’s March crowdcast: 88 used the web client and 71 used the Android client. Unfortunately, because the march route was flooded with people, it was very difficult for the streamers to make a cellular data connection that supported live streaming. Only two of our streamers were able to maintain a somewhat steady stream throughout the event (by connecting to Wi-Fi at spots along the route). Thus, the viewing experience was poor, and the median viewing time was only 8 minutes. However, a quarter of the viewers watched for more than 20 minutes. Survey responses reflected frustration with the poor reliability of the live streams, but when the prototype was working, viewers really enjoyed remotely participating in the march.

Besides illustrating some of the logistical problems of streaming from a grassroots event such as the Women’s March (which did not have adequate mobile data network support), this pilot did show that we could draw a crowd of over 150 voluntary viewers to a live event. While viewers who persevered through the problems were able to appreciate the potential of the experience, we did not have a meaningful deployment of the prototype.

4.3 South by Southwest Deployment

Building on our pilot experiences, we deployed the prototype at a large-scale scheduled event with sufficient networking infrastructure. South by Southwest (SXSW) is a collection of internationally recognized festivals

featuring talks (focused on innovation and technology), exhibits, music, and other social events. The SXSW schedule offered dozens of concurrent activities and drew over 150,000 visitors to many venues throughout Austin, TX. We sent six members of our team to stream selected time slots from SXSW during a work week in March 2017. We invited Volunteer viewers from our company to watch for as long as they would like, using the web or Android client, and fill out a survey (no gratuity). Email messages announced in advance which time slots and topics we would stream. During three time slots, we also hired MTurkers (shown in Table 1) to watch for at least 20 minutes and fill out a survey for a $6 task to augment the number of voluntary viewers.

Table 1. Streaming and viewing activity (according to participant type) at SXSW

There were two main types of streams: focusing on a presentation (talk, panel), and walking about and interacting with people (exhibits, meandering around the city). The streams generated a limited amount of viewing from the volunteer audience. One reason for the limited response is that the two-hour time difference between SXSW and the primary location of our company in the Pacific time zone meant that there was no remote audience for the popular opening keynotes early in the day, precluding streaming them. Thus, for time slots 5, 8, and 10, we supplemented the viewers with others recruited through MTurk. We analyze our data across all time slots, grouped as Volunteers or MTurkers. Most people used the web client to view SXSW; only 11 used the Android client.

Figure 3 shows the total number of viewers of each stream stacked together, over time. Each of the 6 streamers is coded in a different color, as well as the gray band at the top representing the Wall, when no stream was selected to be on the Stage. The vertical white lines show 15-minute intervals across the total time slot of over an hour. Figure 3 shows that views were unevenly distributed among the streams, as people selected what was of most interest to them. Additionally, streams coming and going are shown by the jagged dips of color throughout the session. While some of the stream endings were caused by crashes of the prototype, this kind of intermittent streaming typically occurs in the real world. Collecting together streams from the event not only gave viewers a choice to find the stream of the event of interest to them, it also afforded awareness of what was happening with other streams related to the same event. For example, although the teal blue colored stream in Figure 3 kept stopping and restarting, it seemed to regain its viewers, even if they temporarily started watching another stream. Similarly, when a new stream started about 15 minutes into the time slot (red at the bottom), people were able to discover it in the interface and start viewing it.
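The stacked view counts of Figure 3 can be derived from interval-style view logs. A minimal Python sketch, assuming hypothetical log tuples of (viewer, stream, start, end) in seconds; this is an illustration, not the prototype's actual logging schema:

```python
from collections import Counter

def concurrent_viewers(views, t):
    """Count how many viewers are watching each stream at time t,
    given (viewer, stream, start_sec, end_sec) view-log records."""
    counts = Counter()
    for viewer, stream, start, end in views:
        if start <= t < end:
            counts[stream] += 1
    return counts

# Toy log: v1 watches the teal stream, then switches to the red one.
views = [
    ("v1", "teal", 0, 120), ("v1", "red", 120, 300),
    ("v2", "teal", 30, 300), ("v3", "red", 60, 90),
]
print(concurrent_viewers(views, 60))
```

Sampling this count at regular time steps yields the per-stream stacked areas plotted over the session.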


Figure 4 shows a different view of the data according to each viewer. Every viewer is a separate row, and their viewing is color coded according to which live stream they watched (using the same colors as in Figure 3). Figure 4 shows that almost every row has a rainbow of colors, indicating that most viewers switched around to view different streams. Figure 4 also shows that many viewers exhibited short glances among streams (series of thin bands in different colors) followed by long periods of sustained watching of one stream (long color band). This pattern is also evidenced in the skewed distribution of durations of viewing a stream before switching to another stream or the Wall. The median view duration was 18.8 seconds, whereas the average was 96.9 seconds, and the maximum was 3,314.8 seconds (55 minutes).

Fig. 3. Viewers of time slot 8 in a stacked histogram, color coded by which stream they are watching. The horizontal axis is time, with each vertical line marking 15 minutes; the vertical axis is number of viewers.

Fig. 4. Viewers of time slot 8 grouped according to whether they were Volunteers (at the bottom) or MTurkers. The horizontal axis is time, with each vertical line marking 15 minutes. Each row shows a user’s live stream viewing pattern, using the same colors as Figure 3.

We considered any view longer than the 18.8 second median duration a sustained watch, and found that most viewers (93%) spent most of their session (more than 90%) watching rather than glancing around. Thus, the data show that while there was frequent switching among streams for short duration glances, most of the session was spent watching a stream for a sustained amount of time. This pattern of sustained watching


is especially evident in the long color bands among the Volunteer viewers, grouped at the bottom of Figure 4. Since they were more likely to be interested in specific content that was being broadcast, their viewing activity showed sustained watching of a stream.
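The glance-versus-sustained-watch split described above can be computed directly from per-view durations. A minimal sketch with toy numbers (the 18.8 s cut used in the paper is the population median of all view durations; the helper name is ours, not the actual analysis code):

```python
from statistics import median

def sustained_fraction(session_views, threshold):
    """Fraction of a session's total watch time spent in views longer
    than the threshold; views at or below it count as glances."""
    total = sum(session_views)
    sustained = sum(d for d in session_views if d > threshold)
    return sustained / total if total else 0.0

all_view_durations = [5, 8, 12, 15, 19, 30, 240, 600]  # toy data, seconds
threshold = median(all_view_durations)                  # paper used 18.8 s
session = [6, 10, 240, 600]          # one viewer: two glances, two watches
print(round(sustained_fraction(session, threshold), 3))
```

A viewer counts toward the 93% "mostly watching" group when this fraction exceeds 0.9 for their session.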

4.3.1 Interaction Through Backchannel Mechanisms

The prototype offered three explicit backchannel mechanisms for interacting with the streamer and other viewers of a stream: sending a text comment, upvoting a text comment, and giving a heart reaction. Table 2 shows what percentage of viewers used each mechanism, broken out by whether they were Volunteers or MTurk recruits.

Table 2. Usage of each interaction mechanism by viewer type, showing more usage by MTurkers than Volunteers

           All (n=456)   Volunteers (n=84)   MTurk (n=351)
Comment        79%              42%               91%
Upvote         63%              30%               74%
Heart          57%              30%               67%
Any            86%              55%               97%

The data show a very low percentage of lurking, where viewers just passively watched without engaging in any interaction. Only 14% of users did not use any of the backchannel mechanisms. Prior research [13] reported that typical lurking rates in asynchronous online conversations are over 90%. Our data show that most viewers actively interacted in sessions through the backchannel mechanisms.

The data also show that more people tried the text commenting and upvoting mechanisms than the heart reaction. This preference was also reflected in the survey responses about comments and hearts, shown in Figure 5. Open-ended responses asking participants to briefly explain their ratings of the backchannel mechanisms elaborated (MTn denotes a Mechanical Turk participant and Vn a Volunteer; emphasis added):

I think it [comments] kept me engaged and made me feel like I was there and interacting with others. MT227

And the upvote system to pin worthwhile comments worked well and was visually appealing. MT206

I could never figure it [hearts] out, are you supposed to push it? Nothing seemed to happen. V9

I didn't really interact with the heart much because I didn't see much use to it. I liked the chatting interface much better. MT60

Fig. 5. Survey responses of user reactions to the text comment and hearts mechanisms, rated from “Not valuable at all” to “Extremely valuable.”

Another pattern we noticed in the data related to the content of the streams. The two types of streams, presentations and walking about exhibits or city sights, elicited different amounts of interaction. Figure 6 shows that walking about streams had more messages (orange dots) and more hearts (open orange circles) per view minute than talk streams (blue diamonds). A two-sample Kolmogorov-Smirnov test revealed that while the use of hearts was not significantly different across these two kinds of streams (Z=1.376, p=.045) the use of messages was significantly higher in the walkabout streams (Z=2.194, p<.001). Walking about afforded more flexibility to interact with remote viewers as they asked questions or made requests to go see



specific things. Viewers used the text comments feature more than the hearts, especially in walkabout streams, and users found text comments to be more usable, engaging, and valuable when interacting with the other participants.
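The two-sample Kolmogorov-Smirnov statistic used in these comparisons is simply the largest vertical gap between the two empirical CDFs. A self-contained sketch with made-up per-minute comment rates (in practice one would use scipy.stats.ks_2samp, which also supplies the p-value):

```python
def ks_statistic(a, b):
    """Two-sample KS statistic: the largest absolute difference
    between the empirical CDFs of samples a and b."""
    def ecdf(xs, t):
        return sum(1 for x in xs if x <= t) / len(xs)
    points = sorted(a) + sorted(b)
    return max(abs(ecdf(a, t) - ecdf(b, t)) for t in points)

talk = [0.1, 0.2, 0.2, 0.3, 0.4]        # hypothetical comments/min, talks
walkabout = [0.5, 0.8, 1.0, 1.2, 2.0]   # hypothetical comments/min, walkabouts
print(ks_statistic(talk, walkabout))    # fully separated distributions give 1.0
```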

Figure 6. Cumulative Distribution Function of livestreams by interaction activity per view minute, according to type of viewer action and type of stream content.

4.3.2 User Perceptions

Survey responses provided a sense of how users reacted to the experience. Out of the 468 viewing sessions recorded in the logs, we received 360 completed surveys that correlated with usage in the log data. The survey included questions adapted from the validated user engagement scale by O’Brien and Toms [12], rated on a Likert scale from 1 (strongly disagree) to 5 (strongly agree). Users rated the experience positively, with the percentage of users who agreed or strongly agreed as follows:

I felt involved in the live streams from SXSW (86%).

While viewing the live streams from SXSW, I really felt like I was there at the event (92%).

I enjoyed visiting SXSW through the live streams (90%).

Some responses to an open-ended question asking what users liked most about the experience include (emphasis added):

I was able to click through the options to find something interesting to myself personally. MT360

Reading the comments of others and seeing what they thought of the event. MT382

Being able to talk to the folks walking and recording was by far the best part! V8

Feeling like I was in a group and experiencing an event with other people … while at the same time not having to travel all the way there. MT157

Figure 7. Coded survey responses to what users liked most about the experience.



A coding of those responses is shown in Figure 7. Comments about being able to view the event or some specific aspect (talk, music, exhibits) of it add up to 44% of the responses (shown in various shades of blue). The variety of streams available was mentioned by 22% of the respondents. Responses around interaction, either in general or specifically with the streamer or other viewers, are shown in shades of red, comprising 12% of the responses.

These responses indicate that the viewing content in the streams was central to the positive experience. Being able to choose from a variety of streams was also important, which is consistent with the logging data that showed how users switched among streams until finding a perspective of interest to them. The users’ enthusiasm for interaction was also evidenced in the high usage of the backchannel mechanisms logged. The logging data along with the survey responses show how the usage of the prototype’s features correlated with what they liked about the experience.

4.3.3 Differences Between Volunteers and MTurkers

While we had begun to notice some differences between voluntary viewers and those extrinsically motivated by an MTurk task during our pilots, we tried to examine this systematically at SXSW. The audiences of some time slots consisted exclusively of Volunteer viewers who responded to our email invitation to remotely watch SXSW activities. In other time slots, we supplemented those viewers by hiring viewers through MTurk. By comparing the logging activity and survey responses between these two audiences, we can infer some differences in their behavior using the prototype.

Figure 8 shows how the duration of viewing sessions varied from shortest to longest session over the entire population, according to whether they were Volunteers or MTurkers. A two-sample Kolmogorov-Smirnov test revealed that these two distributions are significantly different (Z=3.059, p<.001). Not surprisingly, there is a sharp bend in the MTurk curve around 20 minutes, which was the requested viewing time in the MTurk task. Very few MTurkers viewed for less than 20 minutes, although more than 60% stayed for longer, perhaps out of interest in the SXSW activities. By contrast, the Volunteer viewers exhibited an even distribution of durations, reflecting that they watched for as long as they were interested. About 40% of Volunteers watched for less than 20 minutes, however 30% stayed for longer than 40 minutes, which was close to the end of the streaming session.
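The cumulative distribution functions plotted in Figures 8-10 report, for each value x, the fraction of sessions at or below x. A minimal sketch with toy session durations (the helper and data are illustrative):

```python
def ecdf_points(samples):
    """Empirical CDF as (value, cumulative fraction) pairs."""
    xs = sorted(samples)
    n = len(xs)
    return [(x, (i + 1) / n) for i, x in enumerate(xs)]

durations_min = [3, 8, 15, 22, 41, 55]   # toy Volunteer session durations
for x, frac in ecdf_points(durations_min):
    print(f"{frac:.0%} of sessions lasted {x} minutes or less")
```

Plotting the Volunteer and MTurk curves on the same axes makes features like the MTurk bend at the requested 20-minute mark easy to see.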

Figure 8. Cumulative Distribution Function of viewing sessions by duration (minutes), according to participant type.

We also noticed differences in usage patterns of MTurkers compared to Volunteers. Figure 9 shows the distribution of viewing sessions by the average duration of live stream views within that session. A two-sample Kolmogorov-Smirnov test revealed that these two distributions are significantly different (Z=1.836, p=.002). A substantial number of Volunteers viewed a live stream longer before switching to another one,



whereas MTurkers switched views more frequently. This difference can also be seen in the viewing patterns in Figure 4 comparing the Volunteers at the bottom with the MTurkers. We attribute this pattern to the Volunteers’ inherent interest in the content, leading to less view switching than MTurkers who are more likely interested in exploring the breadth of the event.

Figure 9. Cumulative Distribution Function of viewing sessions by average live stream view duration (minutes), according to participant type.

Another difference was found in the amount of interactivity of the viewers. Referring back to Table 2 shows that a higher percentage of MTurkers used each of the backchannel mechanisms compared to Volunteers. Focusing on text comments as the most commonly used backchannel, Figure 10 plots the rate of sending text comments, according to the type of user. A two-sample Kolmogorov-Smirnov test revealed that these two distributions are significantly different (Z=4.378, p<.001). The data show that MTurkers were more active in sending text comments, with half sending more than 0.24 comments per minute, compared to only 6% of Volunteers exceeding that rate of comments.

Figure 10. Cumulative Distribution Function of viewing sessions by the rate of sending text comments per minute, according to type of user.



All of these differences are consistent with the extrinsic motivation of MTurkers. Since the task instructed MTurkers to view for at least 20 minutes, it makes sense that they would view for at least that duration. Without any inherent interest in any particular topic, they seemed to explore among the live streams to get a sense of the SXSW experience. MTurkers also seemed more eager to try out the backchannel mechanisms, probably because they were instructed on how to use all the features of the prototype. This increased activity may raise some questions about how relevant their comments were to the content of the streams, or whether they were commenting to help pass the time in their task.

We used a mixed model regression to analyze the effect of participant type (MTurker or Volunteer) on survey responses about perceptions of the experience (how worthwhile, involving, interesting, and enjoyable it was, the feeling of being there, and being part of a community) while controlling for time slot (5, 8, and 10). We found no main effects or interactions of participant type or time slot (p > .05 for all) on the responses. While the participants differed in the mechanics of how they interacted with the prototype, we did not see differences in how they felt about the experience.

5 LEARNING FROM THE DEPLOYMENTS

From the deployments at SXSW and the prior pilots, we learned about the opportunities that crowdcasting provides, and the challenges that still need to be resolved.

5.1 Collecting Live Streams Together into Events

By grouping individual live streams of live events together into a crowdcast user experience, we saw how viewers distributed themselves among the available streams to watch what was of most interest to them. They often took advantage of browsing among different streams to find content of particular interest. A common pattern observed in the log data was for a viewer to glance around at different streams before a sustained stretch of watching. Collecting live streams together into an event also helped the viewer be aware of new streams that were part of the event or the return of streamers that they wanted to keep watching. When a stream ended, it was easy to browse among related streams from the event. These patterns contrast with commercial live streaming apps, where each stream is independent and the end of one stream means that the user is more likely to find an unrelated stream to view than one from the same event.

These patterns also confirm a design choice in the web client of previewing all the available streams in the Wall, even while watching one stream on the Stage. This design allows users to notice other streams from the event that they could switch to view. By contrast, the Android client does not have the screen space to simultaneously preview other streams while watching one. In interviews, lab study participants who used the Android client during the Winterfest deployment said they preferred going back to the Wall to preview the other streams before making a selection, rather than swiping blindly to get a new stream. Thus, we believe that showing previews of the other available streams in a crowdcast is an important affordance for switching among views.

5.2 Interaction Through Backchannel Mechanisms

We designed the backchannel mechanisms to encourage interaction among the participants, so they felt like they were engaging with the streamers and the other viewers. Our data showed that most viewers used the backchannel mechanisms (a low percentage of lurking), and that the interaction was one of the things they liked most about the experience. Streams of walking about the exhibits (such as the teal-blue one in Figure 3) or city sights, which afforded the flexibility to respond to viewers’ requests and answer their questions, were preferred by many viewers. By offering crowdcasts of a variety of different activities, we were able to go beyond prior work [8, 11, 15] to see how users’ interactions varied with the type of content being shared in the live streams.

Our logging data showed that text comments were used more than hearts, and the survey responses explained that users found the comments to be a more meaningful way to participate. However, our design of hearts differed from that used in commercial apps, as we wanted to address the problem of not having the


hearts obscure the view of the video while scaling up to large audiences. Our design may not have given enough visual prominence to hearts.

We had hoped that people’s upvotes of text comments would naturally filter out the less interesting or trolling comments and focus the streamer’s and other viewers’ attention on the most important ones. However, in reviewing the logs of upvoted text comments, we were surprised to see trolling types of messages getting upvoted. This may indicate that at least a subset of the audience is actually interested in seeing and supporting those messages in the moment. Recent research has shown that trolling can be a more situational behavior, rather than a personality trait [1]. Thus, upvoting might offer a new approach for managing trolling messages. Instead of filtering out these messages, they could be used to divide the audience into subgroups that see messages from users that they have upvoted in the past, even if they might be trolling messages. This approach would align audiences with the kinds of messages they upvote, while still limiting these messages from being broadcast to everyone.
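The audience-partitioning idea above could work roughly as follows. A speculative sketch (all names hypothetical; this is a design possibility, not a feature of the prototype):

```python
def visible_to(author, viewer, upvoted_authors, flagged_authors):
    """Route a flagged (troll-like) author's comments only to viewers who
    have upvoted that author before; everyone else's comments go to all."""
    if author not in flagged_authors:
        return True
    return author in upvoted_authors.get(viewer, set())

# alice has upvoted troll1 before; bob has not.
upvoted = {"alice": {"troll1"}, "bob": set()}
print(visible_to("troll1", "alice", upvoted, {"troll1"}))  # alice opted in
print(visible_to("troll1", "bob", upvoted, {"troll1"}))    # bob is shielded
print(visible_to("carol", "bob", upvoted, {"troll1"}))     # unflagged: shown
```

Each viewer would thus see the message stream aligned with their own upvote history, without the flagged messages being broadcast to everyone.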

We studied the backchannel mechanisms of text comments, upvoting, and hearts in terms of crowdcasts, although they all operated within the individual stream. While our designs of text comments and heart reactions differ from those of commercial applications, it would be interesting to compare our usage data with that of the commercial apps. We wonder how our observations of how people interacted through these mechanisms and the low amount of lurking applies to individual live streams outside the context of crowdcasting.

5.3 The Role of MTurkers in Testing Deployments

Our experiences with Volunteer and MTurk viewers suggest different ways in which they can be helpful in the deployment testing process. Recruiting viewers through MTurk can be an effective way of getting an impromptu crowd of over 100 viewers for an event in just a few minutes. They are useful in testing the usability of the interface at scale, giving us a good idea of how view switching, text commenting, and giving hearts work with a large crowd of users. Based on our observation that MTurkers tended to be more active than our Volunteer viewers, their activity seems to simulate an even larger crowd of natural viewers.

But it is also not surprising that Volunteers behaved differently than MTurkers. Certainly, Volunteers provide a more accurate picture of how long people would watch an event, since they are not complying with an artificial minimum view duration. The watching behavior of Volunteers seemed governed by their interest in the topic, whereas most MTurkers may have been looking for the most entertaining way to fill their minimum watching time. Thus, the Volunteers give us more of a sense of the utility of the prototype for remotely participating in live streamed events. While we currently have relatively small numbers of voluntary viewers, their survey responses indicated that they appreciated being able to remotely participate in events.

5.4 Limitations

We want to point out a few limitations in our studies. The vast majority of viewers experienced the prototype using the web client, even though the commercial popularity of live streaming has been focused on mobile devices. The web client was the easiest way to enable many people to view a crowdcast. It offers more screen real estate, enabling the Wall of stream previews to always be shown. It also offers more convenient text input for comments via a full keyboard, where it is easy to type comments quickly, with less pressure to keep them short. The importance of an instant heart reaction might be higher on a mobile client, where typing a text comment might miss the moment of providing a reaction. Unfortunately, we did not get enough users of the Android client to understand the differences between web and mobile client users.

Another issue that arises from the logging of our web client is that we cannot completely discriminate between someone watching a stream for a long time and someone abandoning the web browser to work in another window. However, our data show a large amount of active user engagement through switching among streams and interacting in those streams through text comments, upvoting, and hearts. The distribution of all view durations shows that fewer than 5% are longer than 10 minutes. Thus, we believe that the proportion of long view durations accounted for by an abandoned web browser is very small.


Another limitation was that since all of the streamers were members of our team, we do not have an unbiased view of the streamers’ experience with crowdcasting in general and our prototype in particular. In future work, we would like to try the prototype with independent, experienced streamers to get a sense of their impressions of streaming as part of a crowdcast. Specifically, it would be interesting to explore how the collection of streamers could coordinate together to provide the best coverage of an event. It might be interesting to explore some direct communication among the streamers, although it would need to be carefully designed not to overload the streamers beyond managing their stream.

The crowdcasts in the deployments were somewhat artificially planned in advance, which does not explore the ability for users to discover crowdcast events (among many other streams). Many Volunteer viewers planned ahead by scheduling the event in their calendar and getting a reminder, whereas the MTurkers had unplanned free time when the MTurk task became available. Furthermore, as a prototype that is not connected with any social network, livestreams in our prototype were isolated from the hundreds of other streams generated on commercial live streaming apps. Thus, the crowdcasts we studied were not competing for the attention of viewers among other streams that were available. One key challenge is to study crowdcasting within a social media ecosystem, where users can discover crowdcasts of the events in which they are intrinsically interested.

A technical limitation of the prototype was the 25-60 second latency between capturing the video at the event and rendering the video for remote viewers. While this latency may not have been immediately apparent, since remote viewers do not know precisely when live events are occurring, it does become evident in the interaction lag between asking questions or making requests and the response of the streamer. While this high latency is certainly annoying when noticed, only about 1% of people mentioned it in response to an open-ended survey question about what they would most like to improve in the prototype.

6 FUTURE WORK

Our experiences with the prototype demonstrate the promise of integrating streams from a live event together into a crowdcast. The logging and survey data showed that users explored events by switching among the various streams available and appreciated the variety of streams when viewing events. They also showed that users enjoyed actively interacting with the streamer and other viewers in the event, especially through text comments. We found our experience at SXSW and the earlier pilot deployments encouraging for developing a user experience around remotely participating in live events.

Our experiences also identify future work to explore the concept. While we were able to examine the interaction of over 100 concurrent viewers, we would need to scale up to multiple hundreds of viewers to really exercise the filtering features of upvoting text comments. Additionally, we only examined scenarios of up to 6 simultaneous streams. Future work should explore how well crowdcasting scales for larger numbers of viewers and streamers.

The design of the heart instant reaction needs to be reconsidered in light of low usage and reported user confusion in how it worked. While hearts and reactions are popular in commercial live streaming apps, a different design is needed for meaningful reactions in crowdcasts. Perhaps the hearts indicator needs to be more visually prominent or show a longer history of heart activity in crowdcasts.

We would also like to understand the streamer’s perspective and how we could improve their experience. Since all of the streamers were research team members, their experience was shaped by knowing our design goals. They did comment on ways of improving the usability for streamers. Certainly, holding a smartphone for sustained periods of about one hour while attending a talk or walking around was fatiguing and problematic. It was difficult at times to get a good perspective on the activity. While selfie sticks made it easier to improve the view of the stream, they also made it impossible to see the text comments, hearts, and other viewer actions, or to interact with the remote audience. Wearable cameras or other ways of automatically managing the streaming camera should be explored, as well as more convenient ways of presenting viewer feedback to the streamer, such as via a watch, heads-up display, or spoken word. Involving other experienced streamers in the deployments and learning from their experiences is an important next step.


Another area ripe for further research is exploring intrinsically motivated viewers with naturally occurring, unplanned events at scale. Our experience with Volunteers and MTurkers shows that there are important behavioral differences in how they interact with crowdcasts. Furthermore, we have only been able to work with planned live streamed events, which we expect is different from discovering spontaneous events that naturally emerge out in the world. While we navigated several methodological challenges in the deployments we have accomplished so far, we also point out further difficulties in understanding how crowdcasting would be used at scale with naturally occurring events and intrinsically motivated streamers and viewers. While we have gotten an early glimpse of the crowdcasting concept with live events, we have also identified future challenges in developing the concept further.

With the diffusion of live-stream-capable smartphones that users carry with them everywhere and social media that connect them to a worldwide audience, we expect that any event of general interest could be streamed live from multiple perspectives. Our prototype created an early crowdcast user experience around such events. Our deployment experiences demonstrated the potential of enabling remote participants to choose their viewpoint and interact with the streamer and other viewers, and also identified opportunities for improving the interaction experience. We hope that these insights provide some initial directions in exploring how live streaming can enable anyone to participate in an event anywhere through a choice of live streams.

ACKNOWLEDGEMENTS

We especially thank the people who streamed at all of our live events, who had to persevere through many technical and logistical difficulties. We acknowledge the larger technical support team that helped us test and refine the user experience: Gavin Jancke, John Krumm, Jessica Wolk, Rick Gutierrez, Nirupama Chandrasekaran, Arturo Toledo, and Matt Pope. We thank the hundreds of anonymous viewers, both through Mechanical Turk and volunteers, who made this research possible.

REFERENCES

[1] Justin Cheng, Michael Bernstein, Cristian Danescu-Niculescu-Mizil, and Jure Leskovec. 2017. Anyone Can Become a Troll: Causes of Trolling Behavior in Online Discussions. In Proceedings of the SIGCHI Conference on Computer-Supported Cooperative Work (CSCW 2017), 1217-1230. http://dx.doi.org/10.1145/2998181.2998213

[2] Juliet Corbin and Anselm Strauss. 2008. Basics of qualitative research: Techniques and procedures for developing grounded theory. Sage.

[3] Nicholas A. Diakopoulos and David A. Shamma. 2010. Characterizing Debate Performance via Aggregated Twitter Sentiment. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2010), 1195-1198. http://dx.doi.org/10.1145/1753326.1753504

[4] Audubon Dougherty. 2011. Live-Streaming Mobile Video: Production as Civic Engagement. In Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services (MobileHCI 2011), 425-434. http://dx.doi.org/10.1145/2037373.2037437

[5] Nicolas Ducheneaut, Robert J. Moore, Lora Oehlberg, James D. Thornton, and Eric Nickell. 2008. Social TV: Designing for Distributed, Sociable Television Viewing. International Journal of Human–Computer Interaction 24, 2: 136-154. http://dx.doi.org/10.1080/10447310701821426

[6] Josh Feldman. 2016. WATCH: Day 2 of Anti-Trump Protests Across the Country. (November 10, 2016). Retrieved April 4, 2017 from http://www.mediaite.com/online/watch-day-2-of-anti-trump-protests-across-the-country/

[7] Oliver L. Haimson and John C. Tang. 2017. What Makes Live Events Engaging on Facebook Live, Periscope, and Snapchat. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2017), 48-60. http://dx.doi.org/10.1145/3025453.3025642

[8] William A. Hamilton, John Tang, Gina Venolia, Kori Inkpen, Jakob Zillner, and Derek Huang. 2016. Rivulet: Exploring Participation in Live Events Through Multi-Stream Experiences. In Proceedings of the ACM International Conference on Interactive Experiences for TV and Online Video (TVX 2016), 31-42. https://doi.org/10.1145/2932206.2932211

[9] Stephen Harrington, Tim Highfield, and Axel Bruns. 2013. More than a backchannel: Twitter and television. Participations: Journal of Audience & Reception Studies 10, 1: 405-409.

[10] Jordan Julian. 2016. Facebook Live and Periscope: Live Streaming by the Numbers. (November 25, 2016). Retrieved April 3, 2017 from https://www.socialbakers.com/blog/detail/?id=2642

[11] Matthew K. Miller, John C. Tang, Gina Venolia, Gerard Wilkinson, and Kori Inkpen. 2017. Conversational Chat Circles: Being All Here Without Having to Hear It All. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2017), 2394-2404. http://dx.doi.org/10.1145/3025453.3025621

[12] Heather L. O’Brien and Elaine G. Toms. 2010. The development and evaluation of a survey to measure user engagement. Journal of the American Society for Information Science and Technology 61, 1: 50–69. https://doi.org/10.1002/asi.21229

[13] Blair Nonnecke and Jenny Preece. 2000. Lurker demographics: counting the silent. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems (CHI 2000), 73-80. http://dx.doi.org/10.1145/332040.332409

[14] John C. Tang, Gina Venolia, and Kori M. Inkpen. 2016. Meerkat and Periscope: I Stream, You Stream, Apps Stream for Live Streams. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2016), 4770-4780. http://dx.doi.org/10.1145/2858036.2858374

[15] Raphael Velt, Steve Benford, Stuart Reeves, Michael Evans, Maxine Glancy, and Phil Stenton. 2015. Towards an Extended Festival Viewing Experience. In Proceedings of the ACM International Conference on Interactive Experiences for TV and Online Video (TVX 2015), 53-62. http://dx.doi.org/10.1145/2745197.2745206

[16] Mike Wehner. 2015. Paris attacks crash Periscope, test Twitter Moments. (November 13, 2015). Retrieved April 4, 2017 from https://www.dailydot.com/debug/paris-attacks-twitter-periscope/

[17] Wowza Live Streaming Software. 2017. Retrieved April 26, 2017 from https://www.wowza.com/

Received April 2017; revised July 2017; accepted November 2017.