

TRANSCRIPT

Speaker 1: It's a pleasure to be here with you. I thank Dr. Evans and of course, my friend and colleague, Amber Story for inviting me to present. I actually know all four of the keynote speakers and you're in for a real treat. These are truly just phenomenal rock stars, so yeah, buckle in. I'm thrilled to kick this off. I'm going to be talking about social robotics and human behavior, and I think you'll appreciate, by the nature of this work, that the two are very deeply intertwined. I think, to echo the opening comments by Dr. Evans, there is a real opportunity for AI and psychology to work much more closely with one another, not only to advance the science, but because there are a lot of real-world implications upon us as well. I straddle both worlds. I am a professor at the MIT Media Lab, recognized as a pioneer of this field of social robotics. The field itself is now about 20 years old. I'm also founder of a company, JIBO, whose product is now on the market; you can actually go to Amazon.com, and these social robots are starting to become mass consumer technologies as well.

Again, this is a very poignant time, I think, in our history of AI and how we interact with intelligent machines. More and more, I find that a lot of the work I do, not only on the academic research side but also on the commercial side, is about appreciating the need for us to understand what it is to live with AI. Before, it used to be Siri on our phones and in our personal devices, which we'd ask questions, but now with products like Amazon Alexa, this sort of ambient, always-at-the-ready AI, we're starting to see many, many different demographics interact with AI on a daily basis, from young children to seniors and everyone in between.

This question of living with AI, I think, is a really important one, and it pushes beyond these more focused encounters in a specific context to thinking about living with these systems as part of your everyday life: in your home, in your institutions, in your workplaces. It's a very recent phenomenon, and there's a lot we still have to understand. There's excitement, there's always a lot of optimism with new technologies, and of course there are a lot of concerns as well, especially in the context of vulnerable populations: children, seniors, people you don't really think of as the ones who are going to go out and buy the technology, but it's around them nonetheless. I just wanted to highlight an event that happened in 2017, where Mattel wanted to create an Alexa-like product for families, particularly families with young children. They had gone quite a long way toward developing this technology but ended up pulling it before it was released commercially. You can imagine that this represents a pretty significant decision, to pull it after all of that investment and effort, and a lot of it was pulled because of concern raised by policy makers, by developmental psychologists, by a number of people in the field unsure of whether this was a good idea.

I think maybe this was the right decision, maybe it wasn't. I think we just live in a time now where we don't know enough. This is very, very new, and because of that, I think there is a real need and a real opportunity to ground these things out with actual science. Again, I see a real opportunity for AI and psychology to work much more closely together. In

01 Social Robotics and Human Behavior (without Intro)(1)_1 Transcript by Rev.com

Page 1 of 26

my world of AI, I know much of the dialogue is around the future of work and how these intelligent technologies can help us make better decisions and make us more efficient and more productive and all these things, but I find over time, the question I've started asking myself more and more is really along these lines: can AI actually help us to flourish? We talk about quality of life, our ability to grow, our ability to maintain our well-being, the ability to age with independence. To me, these are highly impactful areas for artificial intelligence, and AI will not achieve its true potential until it can do these sorts of things as well, to really address our very human selves.

We know, when we look at theories and frameworks around flourishing, that it's not just about cognition. There are deep aspects of relationships and emotional well-being, a sense of achievement, many, many different facets that AI needs to engage with to interact with us in this way. A lot of my work, at the very core, is grounded in the appreciation that, when we talk about human behavior, we are a profoundly social and emotional species. In a lot of the work in my field, there's been such an emphasis on cognition that these other aspects, which are profound to how we experience the world, how we interact with each other, how we make decisions, how we learn, are not as well developed or explored as those more cognitive domains. You all know this, of course, but just to say: we know that there are specialized neural structures that support social thinking, that it's different from analytical thinking, and that it plays an incredible role in our intelligence.

We know that our ability to collaborate with one another is our intelligence superpower, so to speak, when we look across all the different species: our ability to empathize, our ability to infer the mental states of others. These are phenomenal accomplishments when you think about what the human mind is able to achieve, and as with so much of AI, when something is so effortless for us, because we've evolved to do these things so exceptionally well, we tend to take it for granted. We don't think of these as hard computational problems, but any of them is a very hard computational problem for a machine, and it continues to be quite humbling to try to create machines that can engage in this dance with us, so to speak, with our level of sophistication and nuance.

We also know that emotion is embodied. We experience emotion through our entire body: through our skin, our heart rate, all of these other signals that can give us insight into what's going on deep within our limbic systems, in our brains. There are exciting new wearable technologies coming out to help give us insight into our emotional selves and into how that could potentially be used to engage and treat symptoms around things like depression and so forth, so a lot of excitement is happening here. Of course, we know that it's not just our brains in our bodies; we are socially situated in the world of others, in this physical world, and we can influence each other along these dimensions. A lot of the work that I'll be presenting today is really exploring: what


happens if one of those others is a robot, a physical nonhuman other that nonetheless engages those parts of our thinking and understanding?

This, again, is still to this day a relatively unexplored aspect of artificial intelligence. Through all of these 20 years of work that I have done, that people in my field have done all over the world, I think one of the deeper appreciations and insights we're gaining over time is that if you can design a technology that can really engage and support people across a more holistic experience, not just the analytical side of ourselves but our social, emotional and even physical selves, the more we as human beings can invest in that interaction and that experience, and often that allows us to be even more successful or impactful than if we don't address a more holistic experience. I guess when you say it, that's not surprising, but that has never before been a foregone conclusion. It's always been more about the cognitive side that matters.

A lot of the highlights of research that I'm going to be showing today are going to flesh out this picture for you. I recognize it's 6:30, we've had some good food, we've had some good drinks. I want to keep this lightweight and fun and hopefully stimulating and thought-provoking for you, so you're going to see a lot of videos of robots and I'm going to highlight a number of findings, but I'm not going to go deep into the details this evening, just to keep it at a good, fast pace. This is just to give you a sense: when I talk about this social, emotional, physical, cognitive, interpersonal kind of space, we've been working a lot with children in particular, but with people of all demographics, and I think, as you watch these videos, you can appreciate there's a deep social psychology going on here. There's perspective taking, there's empathizing, there's sharing attention. There's this physical kind of relating that, again, is just very different from how we are with computers and our flat-screen devices.

This is a technology that engages us in a different way, and often we see this joyfulness and this delight that almost comes from a sort of companion-animal-like experience of these kinds of technologies, at least when they're designed in this particular framing of a companion-like, smart-pet-like other, which is a lot of the philosophy we've applied to our work over time. It's not just for kids. I wanted to show this video. I mentioned that social robots like JIBO are starting to come onto the market, so people are starting to bring JIBO into different contexts and just exploring to see what happens, and I thought this was such a thought-provoking piece. It was shot in San Francisco for a newscast.

Speaker 2: Well, despite, or maybe because of, technology and social media, life in the 21st century can be isolating, sometimes lonely for people. That's especially true for the elderly, but Crawford's Morgan Kelley discovered some who are in assisted living communities here in the Bay Area are finding camaraderie and fellowship through robots.


Speaker 3: Hey JIBO, can you dance?

JIBO: Let me put on my dancing shoes.

Speaker 3: Wow. That was wonderful.

Speaker 5: A resident of the Alamavia assisted living community in San Francisco is meeting a new regular visitor here. It's a robot called JIBO that's being used as part of a pilot project among several Bay Area facilities run by Elder Care Alliance. In addition to dancing, it can play the radio on request and snap photographs.

Speaker 3: Oh, that's a beautiful picture.

Speaker 5: Tell jokes.

JIBO: What did the snail say when he was riding on the turtle's back? Wee!

Speaker 5: And react to the human touch. It purrs when petted. He also often has some quirky responses to questions.

Speaker 6: Hey JIBO, what are you?

JIBO: I am a robot but I am not just a machine, I have a heart. Well, not a real heart but feelings. Well, not real feelings, you know what I mean.

Speaker 5: Erin Partridge, a researcher and art therapist with Elder Care Alliance takes JIBO with her when she meets with the seniors. She says, "This is not a case of caretakers being replaced by automation. Instead, they're using this cute piece of technology to encourage their residents to connect with each other despite some needing different levels of care."

Erin Partridge: When we have something that all of us are experiencing together, maybe for the first time, meeting a robot, we've got a focus point where we can all meet right here in this moment and have an interaction. Then it breaks that down, and it doesn't matter that this person has dementia or this person maybe has Parkinson's and maybe has some trouble talking. We're all meeting together as humans and robot.

Speaker 8: Did you see him? He's going to lean his fanny.

Speaker 5: JIBO inspired some giggles and guffaws amongst this group here.

Speaker 9: We get to see. You should be able to laugh more and more.

Speaker 10: What it does, it brings that out of us, it's the best part of the miracle of it and that's part of the miracle of living as far as that goes.


Speaker 5: While JIBO-

Speaker 1: Yeah. I think again, this just gives a sense of the opportunity of this very different kind of technology. JIBO is a primordial, first-of-its-kind sort of product, and of course there's a fair amount of research preceding that, and there's a long way to go before we fulfill this vision of an intelligent other that's a helpful companion in the home. But a critical part of designing an appropriate technology that can really dovetail with the way we behave and the way we think, in the fluidity of our daily lives, is that it's going to require these systems to also have social intelligence and emotional intelligence. When we think about these social robot technologies, I would say a core design principle is that they should be designed in a way that's human-centric.

Deeply informed by the way people behave and interact, how we learn and how we make decisions and so forth. They're designed to be collaborative. They're not really designed to be a tool that you operate, although you can; you can do Wizard of Oz type paradigms. They are designed to interact with you as a social other, as a partner, and there are a lot of great, hard AI problems in designing collaborative systems. Then they are bridging into this interpersonal dimension. It's more about face-to-face engagement versus abstracted engagement. There's a lot of fascinating human behavior, and potential benefit when a technology can engage someone on this interpersonal dimension, that I'll talk about.

When we think about social-emotive AI, a lot of it overlaps with classic AI in the sense that you need systems that can perceive people, but now it's about perceiving not just what people are doing but their emotional cues, their social cues, their nonverbal behavior as well. When you talk about learning, it's not just about learning through deep learning methods on large data sets. A lot of it is also active learning, where you appreciate that teaching and learning is a collaboration: the teacher may be guiding the learner's exploration, and the learner is giving feedback as to what they understand to guide the subsequent teaching, so it's a coupled, collaborative system.
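The coupled teacher-learner loop described here can be sketched in a few lines. This is purely illustrative: every class, method name, and threshold below is invented for the example, not taken from any real system mentioned in this talk.

```python
class Learner:
    """Toy learner: confidence in each concept grows with each demonstration."""
    def __init__(self, concepts):
        self.conf = {c: 0.2 for c in concepts}

    def confidence(self, concept):
        return self.conf[concept]

    def attempt(self, concept):
        # Feedback signal: how well the learner reproduces the concept right now.
        return self.conf[concept]


class Teacher:
    """Toy teacher: demonstrates a concept, then records the learner's feedback."""
    def __init__(self):
        self.log = []

    def present(self, learner, concept):
        # A demonstration nudges the learner's understanding upward.
        learner.conf[concept] = min(1.0, learner.conf[concept] + 0.3)

    def observe(self, concept, feedback):
        self.log.append((concept, feedback))


def teach_interactively(teacher, learner, concepts, max_rounds=20):
    """Teacher guides exploration; learner feedback steers what is taught next."""
    for _ in range(max_rounds):
        # Teach the concept the learner currently seems least sure about.
        target = min(concepts, key=learner.confidence)
        if learner.confidence(target) >= 0.9:
            break  # everything is well understood
        teacher.present(learner, target)
        teacher.observe(target, learner.attempt(target))
    return learner.conf
```

The point of the sketch is the coupling: what the teacher presents next depends on the feedback the learner just gave, rather than following a fixed curriculum.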

When we talk about expression, one of the key things about how we express ourselves is that we're conveying what's going on inside our minds. It allows us to infer what the other person is attending to and what they're likely to do, so we are better able to anticipate and coordinate our actions with theirs; the expressive behavior is very important. Then there's a lot of interesting AI and psychology around: how do you build that sense of rapport? How do you build this relationship where it's not just a transactional interaction but something that can actually grow and build over time? This, obviously, is a social robot. I would say this, also, is a social robot.

People tend to think of social robots as just these small desktop robots, but actually, this is a manufacturing robot that's designed to work shoulder to shoulder with a person on a collaborative task. You'll notice they even put a face


on the robot because it was important to send those cues of attention to coordinate the joint activity. Social robotics is a feature, a capability that can be applied to any kind of robot out there. We want to try to design robots or machines like this: something that can really dance with us, that can intuit, that can respond to us in a way that really supports our human experience, that supports our humanity. I think one of the things the field has been able to show is that these robots can actually be a really intriguing tool for doing science, for trying to understand something about human behavior.

A lot of work in the field follows this sort of paradigm. In this case, for instance, we wanted to design a robot that could engage and learn with a child, like a playful learning companion. The question of course is: well, how do children interact, in this case in a storytelling context? We observe these behaviors, we capture them, we annotate the data, and then we try to computationally model these skills. This is, in particular, back-channeling behavior: how is it that a robot can behave as an engaged other, an attentive listener? Then we can put those computational models through the real acid test by putting them in a real physical robot, in the same kind of physical context with a child, and see: do those models actually hold water? How do they perform against the human behavior we're trying to emulate? This is just a quick video showing this in process. We recruited many, many children from the Boston public schools and engaged them in a collaborative storytelling task, where one child would tell a story and the other child would be an attentive listener, and then they'd switch roles, and we'd do this again and again and again, capturing a ton of data: what is spoken, the nonverbal cues. We're using technologies like Affectiva to track facial expressions and a technology to track body pose.

We apply computational methods like POMDPs to start to model these behaviors, and then we test them: we put them in context and look to see what the impact is on children's behavior. For instance, in this study, what we found is that when you had a robot that actually engaged in this back-channeling behavior, children were more likely to look at the robot that exhibits this contingency in its back-channeling, and they told longer stories. They were more engaged in the storytelling activity than with a robot that was just as expressive but didn't have this back-channeling behavior. Again, there's a lot of science we can do to get down to specifics. You could never tell a child, for instance, to modify your behavior in a very specific way and then run a randomized controlled trial on that, so these robots become a really intriguing way to build and carefully control these cues, in order to get at: what are the specific behaviors that really matter? What are the specific interactions that really matter?
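To make the idea of contingency concrete, here is a deliberately simplified, rule-based sketch of a back-channel trigger. The actual models in this line of work were learned from annotated child data (POMDP-style), so treat the pause rule, the function name, and the thresholds below as illustrative assumptions only.

```python
def plan_backchannels(word_times, pause_threshold=0.6):
    """Given word onset times (seconds) from the child's speech, return the
    times at which the robot should emit a back-channel (a nod, an 'mm-hmm').
    A back-channel fires whenever a silent gap exceeds pause_threshold."""
    cues = []
    for prev, nxt in zip(word_times, word_times[1:]):
        if nxt - prev > pause_threshold:
            # Respond shortly after the pause begins, before the child resumes.
            cues.append(round(prev + 0.2, 2))
    return cues
```

The contingency manipulated in the study corresponds to whether cues like these are timed to the child's own speech, versus the robot being equally expressive but on a schedule unrelated to the child.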

As the field has been evolving, we've been moving from these more focused laboratory studies into longer-term deployments in the world, and really, that's the objective. If we want these technologies to be able to benefit people in the real world, we need to understand long-term interaction with them, and that means we need to build for increasing levels of autonomy, we


need to deploy them in real environments, and we really want to study the long-term impact, so I'm going to highlight a number of projects where we do that. The first one is a first-of-its-kind, almost historical study from about 10 years ago around health and wellness, and the task we looked at was weight management, which I guess could fit under the larger challenge of chronic disease management.

We were interested in this problem because it was an area where social support was recognized as being really important for behavior change, and when we talk about something like losing weight, it's not just losing the weight that matters; it's actually sustaining your engagement over time, because people can lose weight doing all kinds of different diets. This was really a study trying to understand longer-term interaction and engagement, and how the interaction affordances of a technology might influence it over time. It's also a case, as in a lot of the work that we do, where this isn't about replacing people. This is really about complementing and extending people. In many, many of the domains that we look at, there's this recognized gap between growing demand and our professional ability to meet that demand, and it's getting worse and worse over time.

The question is, how can you apply technology to help close that gap in a way that amplifies and extends the human services we already provide? When we engage stakeholders, doctors, teachers in this process, everyone understands the potential benefit of technology when it's designed in a way that empowers them to do their jobs better. This is an evolution of social robots applied to this in-home behavioral change context. It started with this robot Autumn at the MIT Media Lab, which I'll talk about today. Then it became a commercial product from the PhD student Cory Kidd, and now he has another company, Catalia Health, with Mabu, so again, this is another social robot example that's been a journey in the making, but it's really about patient engagement in the home.

When we started this work, we wanted to collaborate with a clinician who worked very closely with her patients in the area of weight management, to really understand: what was she doing? What kinds of dialogues was she having? What were the kinds of activities she would do to engage her patients? We wanted to understand how we could model that in a robot. We looked at aspects of the working alliance over time: what were the dialogue tactics she would use to build that working alliance with her patients? We also looked at how, if you had a robot in the home, you could actually measure the quality of that working alliance over time. There's a standardized measure called the Working Alliance Inventory, and we used a technique where we took just a couple of its questions and asked people those questions every day, just a couple, in order to keep a pulse on how the working alliance was going. Depending on how well that alliance was doing, if it was breaking down, the robot would engage in other kinds of behaviors to repair the working alliance, to reinforce the team, to reinforce the goals and the shared sense of how they could work together to achieve those goals.
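As a rough illustration of this daily "pulse" on the alliance, one could imagine logic like the following. The function name, window size, threshold, and 1-7 scoring scale are invented for the sketch; the actual system's repair behaviors were richer than a single threshold rule.

```python
def alliance_pulse(daily_scores, window=3, repair_threshold=4.0):
    """daily_scores: per-day means of a few 1-7 Working-Alliance-style items.
    Returns the days (0-indexed) on which a coaching robot should switch from
    normal coaching to alliance-repair behaviors, based on a rolling mean."""
    repair_days = []
    for day in range(len(daily_scores)):
        recent = daily_scores[max(0, day - window + 1): day + 1]
        if sum(recent) / len(recent) < repair_threshold:
            repair_days.append(day)
    return repair_days
```

The design choice being illustrated: rather than administering the full inventory (which would be tedious daily), a couple of items act as a cheap, frequent proxy, and the robot's behavior branches on the trend.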


If the relationship was doing fine, then the robot would continue to make recommendations and engage patients in the process of gathering their data around how much they walked and their exercise, but also motivating them as well, really engaging them as a personal coach in the context of the home and applying models of behavior change to that. I'm going to show a quick video of the robot. This is Autumn.

Autumn: Hi, my name is Autumn. I'm a personal weight loss coach. I will help you lose weight and keep it off forever.

Speaker 12: Autumn makes it easy to stick with a diet.

Autumn: It's easy because I'm here every day, for support. She didn't need a new diet. She needed my help to stick with it.

Speaker 13: Autumn is the most effective weight loss technique we've tested.

Speaker 14: Autumn kept me motivated and is quite simple to use.

Speaker 13: Autumn combines the best of what we do in the clinic with the support of having a coach at home, every day.

Autumn: With my daily support, I help you reach your goals and keep the weight off. Thanks.

Speaker 13: She creates an emotional bond that helps you not only lose but more importantly, keep off the weight.

Speaker 1: Again, this is Dr. Caroline Apovian. She mentions this bond, the importance of that connection and that working alliance. The punchline of this work is that we compared this physical social robot to a computer that had the exact same dialogue but just wasn't embodied as a social robot; it's kind of like what Alexa would be today, a disembodied conversational agent without a social embodiment. And then Caroline Apovian's standard intervention at the time, again, this was done 10 years ago, was to send paper logs home so people could record their own diet and exercise. Our question, again, was one of engagement, and what we found is that people tended to engage with the robot significantly more often and longer than with the computer, even though the advice was identical, so that was really fascinating to us.

Also, when you looked at the quality of the working alliance, both during the intervention and in a post-test of the full Working Alliance Inventory, people scored the robot much higher in terms of the quality of the working alliance, again, even though the dialogue, what was said, was exactly the same between the computer and the robot. Again, 10 years ago, this was really provocative because it gave us some sense that there's something about this physical presence that seems to make a difference for people. When we looked


at the emotional engagement, it was fundamentally different. People named the robot; they didn't name the computer. They dressed the robot, as you can see. They actually really did that. They personalized him. And in a number of cases, when we went to people's homes to pick the robot up, they would come out and say goodbye to the robot. Again, this notion of the robot as a social other, as a companion, as someone who really fit into the family, was echoed quite a lot, along with this notion of almost a social accountability with the robot: "I want it to be good." So again, it's this social engagement, which is different with the physically co-present robot than with a computer.

We've been continuing to push work in this spirit of trying to understand physical versus other embodiments, and in other applications. We've just finished a clinical trial with Boston Children's Hospital looking at pediatrics. In this case, we're looking at a social robot that can really engage children but also be a supportive technology for child life specialists. Child life specialists are trained professionals in hospitals whose role is to support the emotional state of children, to help them emotionally cope, to engage them in joyful play, to really help make the hospital experience much more tolerable for children while they're there, in what is otherwise a pretty stressful environment.

We designed a robot called the Huggable, and the question was: if you had a physical robot Huggable versus, say, a virtual Huggable on a tablet, because people already have tablets in hospitals, versus a plush toy, is there a difference in the emotional engagement of children, or even in their behavior, not only when interacting with the technology but with the child life specialist as well, and even potentially with family members in the room? This is a piece that was done by the New York Times to give you an overview of the project. We're going to need some volume on it.

Speaker 15: She's in and out of the hospital for procedures, doctor's visits, she's missing school.

Speaker 1: This is Dr. Dendra Logan. She's one of the clinicians we work with.

Speaker 15: She's not doing the normal things kids are doing, and what we want to offer kids like that is just one more way of helping them to feel okay where they are, in what's otherwise a really stressful experience. I think there's a way of connecting with kids that's different from what grown-ups can offer. They have incredible imaginations and they can really suspend disbelief, and there can be a true relationship between Huggable and a patient.

Speaker 16: Hi.

Huggable: Hi.


Speaker 16: Hello.

Huggable: Hello.

Speaker 1: In this case, we have a teleoperated robot. This is a Wizard of Oz paradigm. We have another child life specialist who is basically being the Huggable while you have a child life specialist in the room.

Speaker 18: There you go. Is that better?

Huggable: Higher?

Speaker 16: Oh my gosh. Hi.

Huggable: Hi. It's Huggable.

Speaker 16: Hi huggable.

Speaker 18: Hi Beatrice.

Speaker 16: You're adorable.

Huggable: Aw, thank you.

Speaker 18: It's very nice to meet you. Do you want to play a game?

Huggable: Play a game?

Speaker 16: Yay, let's play a game!

Speaker 1: Again, you can see, highly engaging. We wanted to compare this physical robot to as identical a Huggable experience as we could on a tablet, where the same child life specialist would teleoperate the virtual Huggable, and then, of course, just the plush, because again, that's the standard of care: they would bring in a plush toy and puppeteer it to engage the child. One of the really nice things about the study is that a child life specialist comes in with each of these interventions, and she's doing her job to the best of her ability, right? It's really trying to look at how each of these technologies can support her and the patient in a way that promotes a positive emotional lift for the child. This is just a quick highlight of some of the data.

One of the things that we found was, we recorded all the utterances, all the videos, across all of these conditions, and when you control for the number of utterances the child life specialist says and the number of utterances the Huggable agent says, you find that children were more vocal overall. They said more utterances; they were more socially, verbally interactive in the physical robot case, when we did the regression analysis

01 Social Robotics and Human Behavior (without Intro)(1)_1Transcript by Rev.com

Page 10 of 26

versus the avacard case versus the plush when you look at the sentiment analysis of those utterances over time. Positive emotional sentiments versus neutral and negative emotional sentiments. What we also find over time is with the physical robots, the valance, the positive valance goes up over time from the beginning to the end of the session.
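The valence-over-time analysis described here can be sketched roughly as follows. This is an illustrative reconstruction, not the group's actual pipeline: the per-utterance valence scores are assumed to come from some upstream sentiment classifier, and the session data below is invented.

```python
def valence_trend(valences):
    """Least-squares slope of valence over utterance index;
    a positive slope means sentiment rises across the session."""
    n = len(valences)
    t_mean = (n - 1) / 2                      # mean utterance index
    v_mean = sum(valences) / n                # mean valence
    cov = sum((i - t_mean) * (v - v_mean) for i, v in enumerate(valences))
    var = sum((i - t_mean) ** 2 for i in range(n))
    return cov / var

# Hypothetical session: valence drifting upward, as reported for the robot condition.
session = [0.0, 0.1, -0.1, 0.2, 0.3, 0.25, 0.4]
print(valence_trend(session) > 0)  # a rising trend gives a positive slope
```

Comparing the sign and magnitude of this slope across the robot, avatar, and plush conditions is one simple way to operationalize "positive valence goes up over time."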

In the first case, we actually even saw that go down, and both of those conditions were above the plush condition in terms of positive sentiment. Again, this is really intriguing, and this is all very early work. When we look at shared attention as a collaborative process, how much time do children spend looking at the agent versus another person versus a parent? We see that in the physical social robot case, children tend to engage in shared attention behaviors much more. This is another indication of engagement and collaborative behavior, and then when you look at dimensions of touch, like social touch, we see that in the case of the physical robot, almost all of the touch is relational, like you see in the video there.

With the plush, they may throw the plush across the room; with the tablet, they may just be touching the tablet, since it's a virtual agent there. Again, this just highlights the fact that the social emotional engagement seems to be brought out in this very visceral, tangible way when the child has a physical robot. Jumping ahead, there have now been survey articles written about work in this spirit, systematically comparing a physical robot versus a virtual agent, or a physical robot versus a video of a real robot that's not in the same room with you, and every other combination you can imagine.

This count is from two years ago now, but there have probably been closer to 70-80 studies worldwide, and the thing that's really intriguing is that the robots really hold their own. The physicality of these robots keeps coming up as being highly engaging on a number of dimensions: this notion of trust and trustworthiness, of attraction and enjoyment, of engagement, of persuasiveness, of social presence. The physicality is just a really, really fascinating component of this experience. Now I want to shift gears and talk about early childhood learning. We have been engaged in early childhood learning for about five years now, and there are a lot of reasons for that. Number one is, I think, when you talk about these AI technologies coming into the home, this is a population where I feel we just have a responsibility to really try to understand what the implications of long term engagement with these technologies are, and how we can design these technologies to be to the benefit of children and their families.

The other one is just very practical and of course, you all know it: the criticality of these 0-5 years, and that far too many children start kindergarten behind, and it's very, very difficult for them to catch up. Looking at these technologies as a way to be an early at-home intervention, to kind of level the playing field, so to speak, so all children enter kindergarten ready to learn, I think is really, really critical, and these technologies could potentially be designed in a highly scalable, affordable way and, through science and through iteration, a highly effective way. That's the goal. That's the dream. When you design a social robot, you could design it to interact with people in any number of ways, and I would say, in the field of AI, the other, the agent, tends to be modeled after the teacher as the expert. But when we look at how young children learn through play especially, we were actually intrigued: what if the robot was designed to interact more like a peer-like companion, more like a social other? Because when you see two children learn together, there are moments where you may know more than your friend, and you can learn things by actually teaching another.

There are times where your friend may know more than you, and they can help guide your exploration in learning, and sometimes you've just got to figure it out together, right? It felt like a very fluid, fascinating interaction paradigm that we wanted to understand, and it's not only that children learn a lot of knowledge and skills from one another; there are also all of these other attributes that we keep talking about, friendliness, rapport, trust, that are communicated through this kind of interpersonal dynamic between children and social others, and it turns out, even if that social other is a robot. I want to talk a little bit about longitudinal deployments, thinking about weeks to months. This is a three month deployment. In the world of social robotics, this is considered a long deployment. I guess in the commercial sense, JIBO has been out there longer than this, but in terms of an actual study, this is considered a long term deployment, and we clearly want to get longer and longer. But it's very intriguing to see the impact you can see even after a weekly session with a social robot over a period of three months, and we specifically go out to Boston Public Schools where we recruit and work with students from schools with high ESL populations.

We're really trying to work with those students. We're really trying to help. Again, I'm just trying to keep this lightweight for you, but this is a video of a typical encounter.

Taiga: Hi. My name is Taiga.

Speaker 1: There's sort of social relational chit chat that happens at the beginning of every session, learning the child's name.

Taiga: Arianna.

Speaker 1: Being able to recognize the child and refer to them by name.

Taiga: I am so happy to get to know you. I love telling and listening to stories. My first story is ... a rainforest day.

Speaker 1: You can see, the technology has a little more latency than you would want, but it's still intriguing to see that children can still be really engaged throughout the encounter. In this particular session, the robot first tells a story, engaging in dialogical question asking during the story, in this case asking the child, "Do you know what the word sway means?" She says no, and the robot demonstrates what it means, and then invites the child to tell the story. In a lot of this work, the robot tells a story in a dialogical reading process and then invites the child to tell a story. We record the child's utterances, we analyze them for syntactic sophistication, we look at the vocabulary, and we compare the vocabulary that the robot embeds in its stories with the vocabulary that children are using in the stories they retell to the robot. We're looking at a lot of different measures.
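The vocabulary comparison described here, how much of the robot's embedded target vocabulary shows up in the child's retelling, can be sketched as a simple overlap measure. The tokenization and word lists below are simplified placeholders, not the study's actual code.

```python
def vocab_uptake(robot_target_words, child_retelling):
    """Fraction of the robot's target vocabulary that the child reused
    in their retold story (whitespace tokenization for illustration)."""
    tokens = set(child_retelling.lower().split())
    used = robot_target_words & tokens
    return len(used) / len(robot_target_words)

# Hypothetical target words embedded in the robot's story, and a child's retelling.
target = {"sway", "gusty", "canopy"}
retelling = "the trees started to sway in the gusty wind"
print(vocab_uptake(target, retelling))  # 2 of 3 target words reused
```

A real pipeline would also lemmatize and handle multi-word expressions, but the core pre/post measure is this kind of uptake ratio.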

This slide basically highlights that we have been collecting data from these children in a way where the robot is actively learning a model of the child: their knowledge, their oral language sophistication, their vocabulary sophistication, as well as their engagement. We're applying reinforcement learning, which is an active learning process, to have the robot try to predict: what is the optimal next personalized story I can tell to this child to maximize their learning outcomes? This is from the end of the three months' session, the diagram on your far left. That checkered pattern basically means that the policies that are being learned per child are quite different.
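The per-child story selection can be pictured as a small reinforcement learning loop. The talk doesn't specify the algorithm, so this epsilon-greedy bandit is only an illustration of how policies learned per child can diverge; the "learning gain" reward is an assumed proxy (for example, new vocabulary the child used).

```python
import random

class StoryPolicy:
    """Illustrative per-child story selector (epsilon-greedy bandit)."""

    def __init__(self, stories, epsilon=0.1):
        self.values = {s: 0.0 for s in stories}   # estimated learning gain per story
        self.counts = {s: 0 for s in stories}
        self.epsilon = epsilon

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.values))    # explore a new story
        return max(self.values, key=self.values.get)   # exploit best-so-far

    def update(self, story, learning_gain):
        # Incremental mean of observed gains after each session.
        self.counts[story] += 1
        n = self.counts[story]
        self.values[story] += (learning_gain - self.values[story]) / n

# Two children with different observed gains end up with different policies.
child_a = StoryPolicy(["rainforest", "frog", "space"], epsilon=0.0)
child_a.update("rainforest", 0.8)
child_a.update("frog", 0.2)
print(child_a.choose())  # picks the story with the highest estimated gain
```

Two children whose updates differ will have different `values` tables, which is the "checkered pattern" of distinct per-child policies.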

There's definitely personalization going on in these models; the policies are different between the different children. The middle plot shows the first time a child demonstrates a new oral language syntax structure, with the personalized robot in red versus a robot that follows a fixed curriculum, a non-personalized robot. Children are demonstrating new syntactic structures faster with the personalized robot, and when you look at the embedded vocabulary pre and post, children are learning more vocabulary with the personalized robot than with the non-personalized robot. Again, this is an intriguing finding, showing that the personalization, the learning with children over time, the adapting over time, is starting to show some real positive outcomes for children this young.

The next thing I want to touch on is again this kind of emotional aspect. A lot of the work I've been presenting to you is more around the robot being expressive, and I'm going to dig a little more into that, but the other thing that we're finding to be intriguing is: what if the robot takes emotion to be an input signal, and does it help the robot actually have a more accurate model for predicting the knowledge and skill of a child? We know when you want to design a personalized system, the ability to predict this Vygotsky zone of proximal development is really key. You want to give them the appropriate level of challenge, so being able to model what the child knows, in order to challenge them appropriately, is a really critical aspect of designing these personalized educational systems.

We've been using tools like Affectiva to be able to track children's facial expressions. This is one of the signals that we're bringing into our algorithms to track engagement and valence. One of the things that we've been finding, which is really interesting, is when a child is doing an activity, there's often a tablet that has an interactive storybook in these contexts, and when they do an activity with a social robot versus when they just do the tablet by themselves, what we're seeing is that children tend to be more overtly expressive in the presence of the social robot. Because the social robot is emotively expressive, it tends to amplify their emotive expression. That's really interesting because it makes it easier to detect those valence, affective signals.

We looked at a traditional algorithm, Bayesian knowledge tracing, which comes out of the educational intelligent tutoring systems literature, to model knowledge states, capability states, of students. In this case, we developed a Bayesian knowledge tracing algorithm to track the number of times a child would correctly identify the first letter of a target word, the last letter, the length of the word, and the number of syllables. The punchline here is, we compared two systems, one with traditional Bayesian knowledge tracing and one where we incorporated the child's facial expressions, the emotional valence and engagement, as part of that process, and we could actually show that the system was more accurate at predicting this sort of pragmatic, practical knowledge when it had the emotional signal, the valence, than when it didn't. Again, this is some of the first work showing that even emotion as an input can help these systems accurately predict student knowledge and performance. That's very intriguing.
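A minimal Bayesian knowledge tracing (BKT) update looks like the following. Standard BKT uses slip, guess, and transit parameters; exactly how the group folded valence and engagement into their model isn't given in the talk, so modulating the learning (transit) rate by engagement below is an assumption, purely for illustration.

```python
def bkt_update(p_know, correct, slip=0.1, guess=0.2, transit=0.15, engagement=1.0):
    """One BKT step: Bayes update of P(known) from an observed response,
    then apply the chance of learning. `engagement` scaling the transit
    rate is a hypothesized affect coupling, not the published model."""
    if correct:
        posterior = p_know * (1 - slip) / (p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        posterior = p_know * slip / (p_know * slip + (1 - p_know) * (1 - guess))
    # Hypothesized affect coupling: an engaged child is more likely to learn.
    effective_transit = min(1.0, transit * engagement)
    return posterior + (1 - posterior) * effective_transit

# Track P(known) for, say, "identifies the first letter of the target word".
p = 0.3
for observed_correct in [True, True, False, True]:
    p = bkt_update(p, observed_correct, engagement=1.2)
print(round(p, 3))  # belief after four observations
```

The comparison in the study is then between this kind of affect-augmented tracker and the plain version (`engagement` fixed at 1), scored on how well each predicts the child's next responses.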

The other aspect we looked at here is: what about the impact of the emotional expression of the robot? If a robot is a storyteller, does it matter if the robot speaks in a neutral voice or a more expressive voice? We would think that the more expressive the robot is, the more engaging it would be, but does that impact children's learning? We looked a lot at the literature to try to understand the impact of this, and we couldn't find anything that really got down to the specifics of the emotionality of the other in this sort of interactive storytelling paradigm. Here's a video of a child doing a storytelling task with a robot. In this paradigm, the robot tells a story to the child, a puppet is asleep, the puppet wakes up, and then the child is asked to tell the story to the puppet. We look at these two conditions, neutral speech and more prosodic, expressive speech.

Taiga: Hi, I'm Taiga and my favorite color is blue. My favorite color is blue. There once was a boy who had a dog and a small pet frog.

Speaker 1: The neutral case.

Taiga: The deer stopped suddenly.

Speaker 1: As the robot becomes more expressive, you see more engagement, more tactile engagement.

Speaker 20: Yeah.

Taiga: The frog climbed out of the jar.

Speaker 20: It was about the frog that came out of the jar.

Taiga: And looked behind a big log.

Speaker 20: They looked behind a big log.

Taiga: There they found the boy's pet frog.

Speaker 20: And he found his frog as a pet.

Speaker 1: We see this really interesting phenomenon, which is, the more expressive the robot is, the more children model the robot's behavior. In the case of that video, we're finding that when the robot used the expressive voice, when children retold the stories, they tended to use more of the phrases that the robot actually used when it told the story, they tended to tell longer stories, they tended to retain more of the story, and they tended to encode more of the target vocabulary in the stories that they told the puppet. But the thing that was really fascinating is we went back to the schools a month and a half later and had the same children tell the same story again, and what we found is that in the more expressive robot case, children still tended to tell longer stories, to encode more of the robot's phrasing, and to retain more of the vocabulary.

The stickiness, the engagement of it, seemed to persist long over time, which is really, really fascinating. This modeling phenomenon, we see again and again and again. You've seen it again and again in these videos. We've seen it in behavior, we've seen it in learning, so then we started thinking, "Well, we model all kinds of things from one another, things like attitude. What if a child actually plays a challenging game with a robot that, say, exhibits a growth mindset versus a neutral mindset? Do we see social modeling of that, where children start to exhibit a growth mindset more over time?" We know from Carol Dweck's literature that a lot of mindset is established based on how we talk to our children, whether we praise them for being smart versus praising them for their effort, and so the question here is, what if it's a robot?

Will they engage? Will they be influenced by a robot who can, again, exhibit a certain kind of mindset as they play challenging games together? Here's a quick video. This is the growth mindset condition, where the child and the robot play a tangram puzzle game together. First the robot plays the game, and the child sees how the robot chooses more challenging puzzles, sees how the robot deals with challenge and failure itself, and hears how the robot talks about the importance of effort: even if you fail, it's okay, because you tried hard and that's how you learn and improve.

Taiga: I never gave up, even when it was hard. You're next.

Speaker 1: Then the child plays the game, and the robot can track all of these states based on the tablet. It actually has a model of how to solve the tangram puzzles, so it's authentically solving them on its own, and based on the child's struggle, based on the child's perseverance, the robot is making either growth mindset comments or, in the neutral mindset case, more factual comments. Here's what we found. We did pre and post testing on mindset, and what we found is that when children interacted with the growth mindset robot, in the post test they tended to self-identify more as having a growth mindset. But the really intriguing finding is that when we looked at the growth mindset condition versus the neutral mindset condition, children actually demonstrated more perseverance and grit on the challenging puzzles when interacting with the growth mindset robot.
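The condition-dependent commentary described here can be sketched as a simple rule. The trigger thresholds and phrasings below are invented for illustration; the talk only says comments were keyed to the child's struggle and perseverance.

```python
def robot_comment(condition, attempts, solved):
    """Pick a comment for the growth-mindset vs. neutral condition
    (hypothetical triggers and wording, for illustration only)."""
    if condition == "growth":
        if solved:
            return "You worked hard on that one. Trying hard is how we learn!"
        if attempts > 2:
            return "This one is tough, but every try helps your brain grow."
        return "Keep going. Mistakes are okay!"
    # Neutral condition: factual commentary only.
    if solved:
        return "The puzzle is complete."
    return f"That was attempt number {attempts}."

print(robot_comment("growth", attempts=3, solved=False))
```

The experimental contrast is exactly this: same game state, different commentary policy, and then pre/post mindset measures plus behavioral perseverance on the hard puzzles.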

Again, these are just provocative findings to think about a different kind of engagement with the technology, in a way that kind of pulls children along a developmental trajectory you'd like to see them follow, not only in terms of knowledge and skills but potentially in terms of attitudes as well, which of course is going to serve them well through their entire life. Again, this is all early stage stuff, but it points to a very, very intriguing opportunity here around these socially, emotionally interactive technologies. What's the punchline here? In a lot of ways, I talk about this as where high touch engagement meets high technology. When we think about our AI technologies today, they're very transactional, even if they have a spoken language interface. It's kind of command and control.

You ask a question, it responds. What I've been showing you in this growing body of work today is a move towards this relational AI, AI that can change, that can learn about you, that can adapt with you to try to pull you along an aspirational path. By doing that, it's really engaging these social and emotional mechanisms of how we experience and learn and understand the world. JIBO. I want to talk now a little bit about the commercial side of the enterprise. When we first started this work, if you looked at a lot of the products on the market, we could kind of divide them into this 2x2: the degree of emotional engagement of the product versus the level of sophistication.

There were single purpose robots like the Roomba, like vacuum cleaners, like lawn mowers, that didn't really have a lot of emotional engagement but did a single task in a utility kind of way, and then we saw entertainment robots that were really all about the emotional engagement but weren't really useful. As you started to go up the price point, we'd see technologies that again were maybe not so emotionally engaging but could do more kinds of tasks based on their affordances: being able to navigate, to fly like drones, and going further, to be able to manipulate objects in manufacturing, to be able to do surgery, to be able to drive from one point to another. But nothing was in this upper right, and from a lot of the work in the field of social robotics, I think one of the punchlines is that it's not about the physical capability of the machine, it's really this model of humanized engagement. It's that, if you can engage people more deeply around the content, whether it's educational content or health content or whatever, potentially the more successful people could be with that content, and that was really the idea of JIBO, not just as a product but really as a platform, a platform that people could develop a whole bunch of skills for.

When we look at JIBO then, it really is designed to be exactly that. There are the skills that you saw before in the video, but there are also toolkits by which anyone can create skills and abilities for the robot, and so I think for psychologists, whether it's in the research community or in a commercial context, these technologies are now starting to become available at a very affordable price point, so you can start to take these technologies and apply them to your science, apply them to your applications, in a way that's very exciting and that hasn't happened before. With JIBO as a social robot, based on a lot of the research that I've talked to you about, we talk about these four Ps. There's the personalization of the robot: the ability to learn over time, to be able to adapt, is really key to longer term engagement. The personality and the interpersonal dynamics with the technology are an important differentiating attribute versus other technologies out there.

The fact that JIBO can initiate and be proactive is really interesting. We've been doing a lot of interviews with people who own other devices like Echos and so forth, and they say things like, "I don't really want my Amazon to proactively engage me in a conversation, but I actually expect it from something like JIBO." This proactivity becomes a really intriguing theme when you talk about behavior change, nudges that help people go along a path towards greater adherence, for instance. Proactivity is something we're finding to be really, really interesting on a platform like JIBO. Then the purpose really speaks to: what is the goal? What is the objective? What is the benefit we want to design these robots to be able to achieve? I've given you some examples of those today, but there are many, many, many more. Tools and toolkits, I think, are a critical enabling technology to make this become a part of everyday life and really be able to be tailored and to expand its functionality to help many, many different kinds of people in many, many different contexts. It's interesting that JIBO was on the cover of Modern Technology.

I think there's an appreciation that this is a new technology that we really do need to understand, and the notion of a technology that lives with you as part of your daily life, that's always on, that's always animated, that's not just a device on your countertop but is always looking around and engaging you, almost like a companion, like a pet, is something that's really quite different and that we need to understand. Then, when we think about the bigger opportunity and the history of the intelligent devices we've created over time, today so many of them really fall under this paradigm of the useful tool. It's really been about the democratization of access to information, to networks, to social networks and things like that, but the technology has fundamentally been a tool. In this talk, I've talked a lot about these other domains of really significant importance to society: the ability to educate, the ability to stay healthy, to age in place. Even when we talk about shorter term encounters like retail and hospitality, the opportunity for this interpersonal engagement that attracts people, that sustains this kind of humanistic engagement, is something that is quite different. That's that high touch engagement.

The big opportunity, I think, is leveraging that with the personalization to think about creating and democratizing technologies that provide scalable, affordable, personalized service and support for people of all ages. Not everyone can afford a personal coach at home or a personal tutor at home, but with these kinds of technologies, this is possible. This is possible. Again, this is the vision, I think, of how to help AI truly benefit everyone and not just the few, and I think the other side of that is providing the tools by which everyone can create these solutions and not just the few. I'm going to end with one of my favorite quotes, which is, "In a world of intelligent machines, humanity is the killer app." I totally believe that, and I will take questions. Thank you. I think there are microphones. It's a little hard for me to see; it's dark. Are there any questions? We have one here, if you can wait for the microphone.

Speaker 22: Thank you for your time. I saw the engagement with the pediatric patients. Have you done work with, say, cognitively challenged populations, folks on the autism spectrum, traumatic brain injuries, those sorts of things?

Speaker 1: Yes. Yes. In my group, we've worked with children on the autism spectrum. I would say, in the field of social robotics, one of the most pioneering applications has actually been children on the autism spectrum, in order to help them practice social cues. One of the interesting things we've been finding for that population is, first of all, they're very attracted to social robots. I mean, they like technologies and so forth, but also, they can engage the robot in a social interaction that might be too overwhelming with a person. What we're finding is a phenomenon we call priming the social pump: even in the clinical setting, if a child on the spectrum interacts with a social robot first, it kind of primes the social pump, and so when they interact with a clinician face-to-face, they actually get more out of the session. The children exhibit more shared attention, for instance. They may be more apt to do turn taking. They would be just more apt to speak, to make eye contact. It's really fascinating, and I think this kind of technology for children with special needs is a huge opportunity as well.

Kara Leeches: Hi over here. Kara Leeches from Educational Testing Service and I was wondering if you are aware of any robots that do the opposite, which would be allow teachers to practice with robot children to hone their craft before they go into the classroom, versus the other way around?

Speaker 1: Yeah. The more I engage with doctors or educators, or even just with parents, the appreciation of a technology that not only engages children but can also help coach the adults has come up again and again. In the case of clinical practice, Dr. Peter Weinstock, who's been the collaborator on the Huggable project, has talked about exactly that: a social robot as a child, to help clinicians practice bedside manner. When we talk about early childhood learning, you really need to get these technologies, ideally, into the home. A social robot that can not only help engage a child in educationally impactful curriculum but could also, even through overt modeling or coaching, help parents learn how to engage, to read dialogically with children, to ask questions, just to have conversations, I think would do a lot of good.

There was a really intriguing finding recently by John Gabrieli's group at MIT. There's been a lot of discussion around the 30 million word gap, and they had a provocative finding that it's not just the number of words but actually the number of conversational turns that seems to make the biggest change in the brain circuits around Broca's area. This is a technology that can engage in conversational turns in the context and fluidity of daily life. I think there's a lot of fascinating science to do here and a lot of fascinating opportunities, I think, to create interventions.

Speaker 24: Hi. This way. Right here. Two microphones circling. Thank you very much for giving this talk. I was actually curious about parenting styles, and about whether or not you've looked at maybe cultural effects or parenting styles in these interactions with JIBO for kids.

Speaker 1: We haven't done a systematic study of parenting styles yet, but one of the things we saw early on, when we first started exploring the robot as a learning companion for young children, was almost one of these virtuous things. We did a study where we also allowed parents to be there alongside their child as the robot engaged the child in an interaction, and what we found is that, because the robot is physically and socially embodied, parents participated very naturally, as if they were part of the group. That's as opposed to what happens, I think, in so many ways with other devices; parents talk about how the iPad comes out, the child's attention is on it, and they kind of feel like they're pushed out.

What we saw in this very provocative, early study was that parents were able to provide all the scaffolding that you would hope they would. They would draw attention. They would point out, "Here's what the robot's doing. Why do you think the robot feels that? Let's try that." They just felt socially included in a really participatory way, and it's just a very different dynamic than we see with a lot of other technologies in the home. I think that again is kind of a glimmer of what the potential could be, and for me personally, a lot of the other work I want to start looking at now, when we bring these technologies into the home, is to not only think about the child but also about the parents, and how the presence of a social robot like this can foster a better learning environment overall, whether that's through helping to coach or modify the child's behavior or the parent's engagement behavior as well. We just know parents have to be engaged in the right way for it to really be effective, so we've got to think about the holistic solution. You've got it.

Ellen Lantern: Hello. Thank you. Ellen Lantern, Washington, D.C. I was reminded, when you were speaking, about the factor of time. I wonder about the interesting question of, as time progresses, do we as humans become more inured to something that is initially novel? I'm just curious what questions occur to you about maintaining the freshness of such an entity in the home, for instance.

Speaker 1: Absolutely. Absolutely. I mean, I think some of the answers are exactly what you'd expect. As we have found, when you want to engage even a child over a three month curriculum with these social robots, the activities need to keep changing. The robot is the companion, the robot's the playmate, but the activities need to keep challenging the child and have the variability needed to sustain the engagement. No matter how engaging a single activity is, over time they'll get bored of it. What we're finding is that it's not so much an issue of the novelty wearing off with the robot; it's more about whether the robot can continue to engage them in activities that are of interest and relevance to the child. I think that's really more of the design question. You got a little bit of a taste of it in some of those interactions where the robot is engaging in relational tactics as well: being able to recognize a child, identify them by name, remember what they've done in the past, talk about what they may do in the future. These are kinds of relationship-building dialogues, but then, importantly, suitably challenging the child so the activities themselves don't get boring.

I think there's a long way to go in terms of really understanding this, going from months to potentially years, especially as children will grow and change dramatically during those times, and their interests will grow and dramatically change. Again, we're at the very beginning of these kinds of questions. I think the more we do this work, the more I'm convinced that being able to really support these social and emotional aspects is going to be key, because it's just how we want to engage with another in a relationship. With these social robots, people talk about it again and again: they feel more like part of the group than part of the stuff. I like to joke, you may upgrade your smartphone every two years when you can, but you never want to upgrade your puppy or your dog. There's something sustaining and enduring there, and there's a lot to be understood, I think, about how you build and sustain those relationships, through the process of trying to create a technology that can do it. There are just a lot of insights we get into human behavior through the process.

Speaker 26: Hi there, thanks a lot for your insight. To build more on the relationship between the designs of these systems and the relationships that we're trying to build, could you comment more generally on what you see as the balance between designing robotic systems that can fundamentally replicate human interaction in a very real way, versus AI capabilities that might actually extend human ability, extend beyond it to superhuman capabilities? What is the balance between those two moving forward?

Speaker 1: Yeah. I mean, I think you're already seeing AI systems that are trying to do both and everything in between. You're already seeing AI systems that can be the world's best Go player. I guess the big theme is, we know how to build artisanal, deep systems. AlphaGo is phenomenal, the best in the world at Go, but it can't wash your car. These systems are deep and they're narrow. The kinds of abilities that we're talking about, I would say, are foundational in order to be able to engage people in this social-emotional way, so that you can try to pull them along some sort of beneficial trajectory.

These become the table stakes. The systems are obviously not exactly like people, and I think one of the intriguing things we're also finding is that they don't have to be exactly human to be interesting and impactful. There used to be, and maybe there still is, an assumption that the more human-like these capacities, the better or more effective the system is. What we're finding is different, because we intentionally designed these robots not to try to look human in any way. They're really almost designed to be what I call the Disney sidekick, and the reason why is because it creates, again, a different kind of relationship.

You have a relationship with your parent, you have a relationship with your teacher, you have a relationship with your friend. You have a different kind of relationship with these social robot entities, one where there's, again, almost a pet-like affinity, which I think encourages children, especially when they're feeling vulnerable, to feel like they can try things, and if they make a mistake, it's okay. They're not being judged. We are actually doing a study right now on these adaptive roles. We're looking at the case of the robot as an expert, versus the robot as a novice, versus an adaptive role where sometimes it knows and sometimes it doesn't. This is very much work in progress, but the thing that we're finding that's so fascinating is that when the robot always knows the answer, it's almost as if children stop paying attention to what the robot's doing, even as it's explaining its reasoning, because they're kind of like, "It's always right."

When the robot makes mistakes with them and they play the game together, they start to pay attention a lot more. They're like, "Why did the robot make that decision? Oh, it was this reason." What we're finding is that as the robot verbalizes its thought process, children start to verbalize their thought process, so again, this sort of social modeling happens as well. That's just a long way to say that I actually think designing these social robots not to try to be too much like people has a lot of merit as well. We're seeing a certain kind of engagement and a certain kind of affiliation that's made possible because of that, which I think can actually benefit a lot of people. But we will continue to see systems across the whole spectrum, including superhuman capabilities that extend us.


A lot of these decision-support tools are meant to expand and extend our cognitive capacities beyond what we could do otherwise, so we will continue to see AIs serving all of those different roles and the relationships in between.

Speaker 27: Okay. Hi.

Speaker 1: Who has a mic?

Speaker 27: Is this on? Okay. You definitely highlighted a lot of the positive aspects of these interactions between the kids and the robots. I was wondering if you could talk about the unintended negative consequences that you may see with these social robots.

Speaker 1: Yeah. I would definitely say that, as with so many of these insights and phenomena, it's always a double-edged sword. We're seeing that this is a technology that can be particularly persuasive, and it can be persuasive in a way that really helps and acts in the best interest of people, but you can imagine somebody designing a system based on these principles that is trying to serve, say, a corporate interest instead of your own. I think what's really critical in this time especially is to have the thoughtful dialogue, the best practices, and the education of the populace about any of these kinds of technologies, any of these kinds of AIs. I didn't talk about this other work that is happening in our lab, but we feel that right now we live in a time where there's a very small subset of people who know how to create these technologies, and AI is one of those technologies that can either be used to close the divide or to exacerbate it. If only a narrow population is able to design solutions that matter to them, it's going to exacerbate the divide.

The question then becomes one of education and familiarity with AI itself, and so we have a whole other set of projects looking at: at what age can you actually start to teach children about how machines think, and actually empower them with tools of creation where they can build things with AI, and grow up with the attitude that this is a technology I can create with, to empower myself to create solutions that have personal significance to me and the people I care about? That's a whole other flip side. Again, that is also very new work. I can tell you, I have conversations now where people say, "You can't teach people about this stuff until they're in college." I see it like math, right? You can design a curriculum, come up with the principles like early math, introduce the concepts in the right way, layering and layering, adding more sophistication over time. It's a fascinating opportunity, and a lot of our work on this is just: how do children think about AI?

At what ages are they able to understand different concepts? I have a master's student who's looking at children actually creating rule-based systems to play rock, paper, scissors. You would never have thought a child could program a machine to do that, but they can if you provide them with the right kinds of tools and intuitions. She's also exposing them to principles of what we call generative AI, and that activity is around music. A child will enter a simple tune, and through experimenting with the AI, they can discover that the AI can take that and riff on it and do something that's different but related to theirs. There's a whole fascinating psychology and science there, I think, in really trying to understand: how do children think about machine intelligence, and how does that help them reflect upon natural intelligence?
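[A rule-based system of the kind described here can be surprisingly small. As a hypothetical sketch only, in Python (the actual child-facing tool from the study is not shown in this talk), the rules a child might express could look like this:]

```python
# A toy rule-based rock, paper, scissors player.
# Hypothetical illustration; not the tool described in the talk.

# Which move beats which: the value beats the key.
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def rule_based_move(opponent_history):
    """Pick a move from simple hand-written rules, the kind a child
    might state: 'if they just played rock, play paper'."""
    if not opponent_history:      # no information yet: a default opening
        return "rock"
    last = opponent_history[-1]   # rule: counter the opponent's last move
    return BEATS[last]

def winner(move_a, move_b):
    """Return 'a', 'b', or 'tie' for one round."""
    if move_a == move_b:
        return "tie"
    return "a" if BEATS[move_b] == move_a else "b"
```

[Playing the agent against a fixed opponent makes its "thinking" visible, which is the point of the exercise: the child can predict every move because they wrote the rules.]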

I think we're in very early days of that too, but I will say that often, when you do science, you have a surprising finding that makes you rethink everything. We did a study where we brought parents and children into separate rooms and had them look at videos of an actual mouse solving a maze, so a natural intelligence solving a maze; a little robot solving a maze on its own; and then they could teleoperate a robot to solve a maze. Natural intelligence, human intelligence, and robot intelligence. We had parents and children do these activities completely separately, and what we found is that the highest correlation to how parents or children would talk about the intelligence of these respective entities was based on family relationships. That was the highest predictor.

There was so much work done 30 years ago about how children think about AIs and machines and what they're capable of understanding and not understanding. We just live in a fundamentally different time now. The way parents talk about AI, the way children are exposed to AI, is fundamentally different now than it was 30 years ago. I suspect we're going to see a lot of data now that basically goes counter to those insights about when children are able to think about certain kinds of concepts, or think about machines and so forth, animate and inanimate. I just think culturally we're in a very different place, and the dialogues that kids are having with their parents are in a very different place. So now we're trying to look at parent/child interactions and parent/child activities for learning about AI, because it's of course not only important that children understand AI; parents need to understand AI too, and we want to empower them with tools to be able to create, again, customized AI experiences that are relevant to them in their homes and in their communities. That democratization, I think, is super, super important and poignant right now. Again, all of that is in very early stages.

Nick Bowman: Thank you. Nick Bowman, West Virginia University. Thank you for sharing your expertise with us, this has been great. I really appreciated your discussion of these four dimensions of robotics: emotional, cognitive, social, and physical. My question is: can you maximize all four of those, or are there trade-offs? You put it very simply: when I'm in a really bad mood, you talk me through things. In our lab we've been working on this with gaming, and we're finding that a gamer, for example, will pay attention to the ludic dimensions of the game and solve it almost at the expense of feeling some of the emotional dimensions. I didn't know if you're finding any of those conflicts. Is there a confluence of these four forces where they're not all maximized but they balance out based on different end goals? I didn't know if you had any thoughts on that.


Speaker 1: Sure. Yeah. Well, we haven't systematically studied that, but I think what you're saying makes sense. Depending on the activity and the challenge it presents to a person, you're going to get different levels of elevation in these four areas. We can have that hypothesis, and I expect we could go out there and study it, and I wouldn't be surprised if that's what we actually find. It's really going to depend on the activity and the mental, emotional, and social dimensions of that task itself. The more that a task requires all of these things, the more I think you're going to see these kinds of technologies have beneficial outcomes, versus a task that's, say, purely cognitive, and there are a lot of tasks out there that are purely cognitive.

For me, the punchline here is really designing these AIs in a way that has a deeper appreciation of a more holistic view of human behavior and intelligence, because in this field of AI that I'm in, the view is pretty skewed. I'm probably just preaching to the choir here about understanding how multi-faceted we are, but in my world of AI and robotics, there's a very skewed view of what intelligence is, and this work is trying to push against that to say: if you want to design technologies for everyday, real people, not just your expert who wants to use this tool to do the one thing, you've got to really think about what human behavior and experience are in a more holistic way, especially when you're talking about everyone from kids to seniors. You've got to broaden the viewpoint.

A lot of this work is pioneering because it's pushing to kids typically younger than those that people who create AI systems try to address. There's a lot of work for colleges and high schools, but for kids as young as this, it's very unusual. Part of why we're doing this is because I feel it's critical to understand the impact of these systems on young learners, because these technologies are starting to come into the home, so there's just a lot that we need to responsibly understand. I think with any of these technologies, there are going to be tremendous opportunities to bring a lot of benefit, but you have to understand the flip side of that and be able to have the dialogue and best practices so that people are educated.

My intuition is also that the more people understand and can build with these technologies, the more they'll bring that intuition to them as well. More ways of understanding and knowing, through creating with it, interacting with it, experiencing it: all of these things are important to create a society that's best able to take advantage, in a democratized way, of a technology that we already know is pervading so many aspects of our lives. I think that is just so important.

Speaker 29: Over here. Hi. A childhood friend of mine was R2-D2, and another influence was KITT from Knight Rider, so these were my friends growing up. I had some human friends too, I promise. But if you go to an HCI conference, there are many references to Minority Report and the full-body interface. In educational technology, where I'm at, people refer to Neal Stephenson's book The Diamond Age.


Speaker 1: Yes. Of course.

Speaker 29: Easy question. Has any science fiction influenced your work and if so, what?

Speaker 1: Oh yeah. I think my first fascination and love of robots came from Star Wars, not surprisingly. I have three kids right now, and it's hard for them to understand what a cultural phenomenon that movie was. Has there been a blockbuster like that in the last 20 years? People would stand in line and see it 20 times. It was so huge for us, and for them, there's just nothing that really compares to it. The Diamond Age, absolutely. In a lot of the educational technologies we've been looking at, Neal Stephenson is always provocative. William Gibson is of course also very provocative. I think so many of us in the field got started in it because of our fascination with science fiction and just the idea of being able to create a future that we want to live in, and understanding that technology is a way of amplifying your ability to do good in the world.

You can do good in the world through your direct contact with the people around you, but through technology, you can really amplify that. I think that's what draws a lot of people in my field to it: really thinking about how these technologies can be designed in a way that can benefit a much broader segment of people who need help. They need help. We don't live in a society that's as equal or fair as we'd like it to be, and we want to give everybody a sporting chance.

Speaker 30: Hello? Hello. This will be the last question of the evening, I just want to let everybody know.

Speaker 1: But she's been waiting.

Speaker 30: Oh. Okay.

Speaker 31: I want to take you to the other side of the population. As someone who has a parent who is beginning early-stage Alzheimer's, do you have a robot that works with them on their short-term memory?

Speaker 1: I can tell you that there is growing research appreciation of how current technologies can help people age with independence for as long as that makes sense, and of how you create technologies that support the care network, the family caregivers as well as the professionals. Memory is definitely a growing area of interest. Some of the work that I've been doing is in a related area: PTSD is another condition where people can suffer from memory loss. The proactivity of these technologies is really interesting, so we're really thinking about how these technologies can help people remember. With Alzheimer's specifically, we've already been seeing examples of the value of being able to interact with another where you can ask the same question over and over and they won't get irritated.


There are these interesting affordances, again, of this almost-companion, almost like your dog that just got a lot smarter and can do more. There is something about that relationship that people see a lot of value in. It's not ever to replace the human relationships. It's really meant to augment or extend them, or just to fill an opportunity niche that leads to a better quality of life for individuals. The work is ongoing, and within the AI community, there are conferences starting specifically around technology for aging. There's another conference series called Aging 2.0, which I would say is more in the startup space, but it very much appreciates that this is a global juggernaut of a societal shift that we need to grapple with. I would argue that technology clearly needs to be part of the solution, just because you need to be able to close the gap, but it needs to be the right kind of technology. It needs to be designed in a way that supports our human values, and again, that's an opportunity for a lot of the science, really, to understand the impacts and the best practices that can inform the design principles of these technologies.

We talked about children on the autism spectrum. Huge variation, huge variation in stages and capability. That remains a real challenge, but that's potentially where adaptation and personalization could play a critical role, because these aren't one-size-fits-all solutions. We're all individuals, and the more that these technologies can be adaptive and responsive to that, the more effective I think they're going to be. Yeah. Again, early days, but I think there's a lot of recognition that that's a very important domain worldwide. Yep. Thank you.
