
Set & Motion

Nurit Kirshenbaum

Advisor: Scott Robertson

May 2016
Information and Computer Science

University of Hawaii at Manoa


Contents

Introduction
Background
  Physical Computing and Animatronics
  Storytelling and Authoring Tools
  State Machines and Visual Coding
System Design
  User Interface
    Overview Tab
    Scene Tab
    Resource Tab
    Code Tab
  Backend
    System Components
    Serial Communication
    Code Generation
Design Patterns
  Why Use Design Patterns
  Design Pattern Details
  Set&Motion Cards
Use Case
  Script and State Machine
  Use of Design Patterns
Conclusion and Future Work
Works Cited
Appendix A: User Manual
  Overview Tab
  Scene Tab
  Resource Tab
  Code Tab
Appendix B: Example of Code Generated
  Arduino Code
  Processing Code


Introduction
You are a traveler wandering in the forest. You reach an ancient-looking tree puppet that slowly turns to face you and starts talking in a low, gravelly voice. “Welcome, traveler. You have reached the heart of the forest. There are great dangers ahead. Perhaps,” it says with a skeptical tone, “there is help in the tome Set and Motion.” You notice a book with that title by its roots. “You may choose to open it now. If not, I shall send you on an adventure that may cost you your life!” Being the reasonable, cautious person that you are, you quickly pick up the book and open it, triggering a bend sensor hidden in its cover. “Very well,” the tree continues more cheerfully, “enjoy reading the book.”

Everyone loves stories. Stories are a part of our lives from a young age and in every culture. We enjoy hearing, reading, watching, and even writing stories. Many computer games have stories where the player can make choices that affect the story. I refer to stories that are affected by audience choices as interactive stories. The story snippet in the first paragraph offers such a choice: open the book and something will happen; don’t open it, and something else will happen. Add to this some animatronics (the animation of robot-like puppets) and the desire to teach electronics and state machines, stir well, and you get Set&Motion, the system I present in this paper.

This project started as part of a Student Innovation Contest entry at UIST (User Interface Software and Technology) 2015. I maintain the website for the project at http://nurki.net/SetAndMotion/. This paper describes the design and workings of the system, and now its accompanying set of cards, in more detail.

The Background chapter introduces existing work connected to mine. The System Design chapter explains the design of the interface and what the author can accomplish in each tab of the application, and also goes into depth on the backend of the system: which elements in the code interact with each other, and how the application communicates with its hardware components. The Design Patterns chapter explains the purpose of using design patterns and the specific patterns I found relevant for Set&Motion show creation; it also showcases the Set&Motion help cards. The Use Case chapter describes in full the demo show I created with the tool, the state machine representing it, and which design patterns were used. Finally, the Conclusion and Future Work chapter speculates about future directions.


Background
While I don’t believe there is existing work identical in nature to Set&Motion, my work touches on several topics that are well explored. I have chosen to focus on physical computing and animatronics, storytelling and authoring tools, and state machines and visual coding. In this chapter, I give a small sample of works in related fields; this is in no way an exhaustive list, and in some cases a work could have been placed under more than one of the topics.

Physical Computing and Animatronics
My project started as part of a UIST contest. That contest was inspired by the works of Dietz and Alford [Dietz, 2007] [Alford, 2013], in which they describe their experiences running animatronics workshops for middle and high schoolers. This activity engages children in the mechanical construction of an animatronic puppet as well as the creative process of writing and producing a show, which includes: writing and editing a script, building a set, acting the parts (and recording them), and programming the animation. The second paper contains a detailed timetable of the various activities over the span of a 3-day workshop. The workshops were a huge success and inspired the contest.

Another project using animatronics is CTRL_SPACE [Sempere, 2004], in which children were encouraged, via a visual programming interface, to control the movements of a large animatronic head. The user can record action sequences and even use logic structures to create a fork in the timeline. A system that incorporates storytelling and interactivity (though not animatronics) using physical computing is storyRooms [Montemayor, 2004], where children were given a story wand and various toy-like actuators and sensors and were asked to create a story that uses the whole room by placing the actuators and sensors in the space and using the wand to program which actuator is triggered by which sensor.

There are also more open toolkits for the creation of interactive stories. In New pathways into robotics: Strategies for broadening participation [Rusk, 2008], the authors describe how to use the picoCricket physical computing board to create art and stories. [Blikstein, 2015] provides a comprehensive survey of physical computing kits for children and analyzes them based on historical context and their “selective exposure” - that is, how much of the electronics the toolkit exposes, from systems like Arduino, which expose low-level electronics, to systems like Little Bits, which expose high-level electronics.


Storytelling and Authoring Tools
There is an educational shift from working on STEM (Science, Technology, Engineering, Math) subjects in schools to working with STEAM (Science, Technology, Engineering, Arts, Math). Some of the reasons for this are detailed in From STEM to STEAM: Using brain-compatible strategies to integrate the arts [Sousa, 2013]. In a nutshell: STEM content is better absorbed when learned in a creative environment; art studies have been squeezed out of the regular school curriculum to make way for the sciences despite being integral to brain development; and by combining both, everyone wins. Storytelling, being a creative endeavor, is a perfect candidate for STEAM education.

[Garzotto, 2014] gives an overview of the field of interactive storytelling for children. This field has many different facets; however, I am mostly interested in authoring tools, since Set&Motion can be described as an authoring tool for interactive shows. People have been creating interactive stories, and tools for their creation, for a while now, and there are many different approaches. For example, [Silva, 2003] describes a system that changes the tone of a story based on the audience’s input (mood cards); to implement the authoring tool for their system, they used an approach they call StoryBits to organize the story. The author defines levels for the story, each level has one or more StoryBits, and the bits are connected with some logic to a bit in the next level. Notice that this approach does not allow cycles in the story graph. Also, in this specific work, they admitted it was difficult to track the story through their interface.

A different example is the Scribe system in [Medler, 2006], where the authoring tool was a general (not game-specific) tool designed for interactive drama. In this tool the author creates the story structure using plot points. Each plot point consists of preconditions, events, and actions. When the preconditions of a plot point are true, that part of the story is played, performing its actions and events. This system is more similar to Set&Motion than the previous one, but it does not call for the state machine paradigm specifically (it uses a more logic-based approach). An attempt to incorporate a state machine into an authoring tool is portrayed in [Howland, 2015], where the researchers made an addition to the Alice game design environment with which the author could use a state diagram to organize their thoughts. However, this tool was not connected to the execution of the game and was meant as scratch paper, and largely, their participants opted not to use it.

State Machines and Visual Coding
The concept of visual programming has been intriguing computer scientists for decades. There are many advantages to using graphics for programming, as mentioned in [Myers, 1990]: our visual system is better suited for processing multi-dimensional data (as opposed to the one-dimensional code stream), we can often use graphics for a higher-level description of our program, and visual programming systems usually also offer direct manipulation interfaces, which give the user a sense of “directly constructing a program rather than having to abstractly design it.”

Many visual programming environments have been developed since, some designed specifically with children in mind. Perhaps the most notable one is Scratch (see [Resnick, 2009]), which uses a blocks metaphor to create code, where some of the blocks give indications about their use and limitations (for example, blocks that should contain other blocks look different). An interesting example created for programming Arduino circuits is Splish [Kato, 2010], which uses a flow chart of child-friendly icons for the different elements of the system, supports actions such as lighting LEDs, and offers the visual programmer some condition control (if the signal is 1 do this, if the signal is 0 do that). Since then, many publicly available environments for visually programming Arduino have appeared, such as Ardublock, which uses a format similar to Scratch, and Visuino.

The concept of state machines as a computational tool has also been explored previously. In the field of children’s technology, [Weller, 2008] made a tangible artefact that represented a state machine and connected it to a game called Escape Machine, in which the user was supposed to manipulate the artefact according to the state machine rules to achieve a goal. So far, the use of state machines in children’s education has been sparse.


System Design
The primary design goal of Set&Motion was to create a one-stop authoring tool for interactive shows. Where the past show-creation process required a text editor (such as Word), a sound editor (such as Audacity), and an animation editor (such as Visual Show Automation), the new suggested process requires only Set&Motion. Though Set&Motion doesn’t provide full-featured editing (yet!), it provides the basic editing needs and all the advantages of using one tool: there is no need to master several software tools and switch back and forth between them while working.

The secondary design goal was to create an educational opportunity. Non-linear stories map naturally to state machines, and indeed, state machines are a common design pattern for game programmers. Therefore, while creating an animatronic puppet show is already a rich experience in creativity (storytelling, acting, set design) and technology (working with actuators, as well as sound and motion editors), we see an opportunity to push it even further and showcase the story as a state machine, both visually and in code.

User Interface
The tool has four tabs, described below.

Overview Tab
The main view of the tool is the Overview tab. In this tab, the author can visually describe the state diagram that represents the show. State machines in general can be defined by (1) a list of states, (2) a current state, and (3) rules for transitioning between states. In Set&Motion, the states are scenes in the show, the current state is the scene being played (only one scene can be played at any given time), and the rules of transitioning are given by sets of one or more events. Set&Motion supports three types of events:

● Timer events
● Sensor events
● Variable events

Events trigger when certain conditions are met, and when all the events on a transition link from a source scene to a destination scene have triggered, the current state progresses to the destination scene and that scene is played.


We have chosen these three types of events to enable a large range of story structures. With timers, an author can create a linear progression in the story; with sensors, we can add interactivity; and with variables, we can add restrictions.

The image above shows the state machine visualization of “Zee’s Mystery” (described in the Use Case chapter).

Scene Tab
In the Scene tab, the author can create and edit the content of a scene. This includes recording (or dragging and dropping) audio tracks and creating key-frame animation tracks to describe the desired motions. Each actuator in the Set&Motion system is set up with a range of movement values (set in the Resource tab). The line of the animation wave maps values to that range of movement. The author adds key-frame values and the tool creates a linear interpolation (tweening) between them. When the show is played, the actuator animations are synced with the audio.


The image above shows the Scene tab. This scene has text, one audio track, three animation tracks (which are grayed because the controller is not connected) and instructions to handle two variables.
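To make the tweening concrete, the following is a minimal Processing-style sketch of linear interpolation between keyframes. The Keyframe class and the fixed sampling step are simplifications for illustration, not the application’s actual code:

// A keyframe pairs a time (in ms) with an actuator value.
class Keyframe {
  float time, value;
  Keyframe(float t, float v) { time = t; value = v; }
}

// Sample the animation wave every 'step' ms by linearly interpolating
// between consecutive keyframes (assumed sorted by time).
float[] interpolate(Keyframe[] keys, float duration, float step) {
  int n = int(duration / step) + 1;
  float[] values = new float[n];
  for (int i = 0; i < n; i++) {
    float t = i * step;
    int k = 0;
    while (k < keys.length - 1 && keys[k + 1].time < t) k++;  // find segment
    if (k == keys.length - 1) {
      values[i] = keys[k].value;  // past the last keyframe: hold its value
    } else {
      Keyframe a = keys[k];
      Keyframe b = keys[k + 1];
      float amt = constrain((t - a.time) / (b.time - a.time), 0, 1);
      values[i] = lerp(a.value, b.value, amt);  // the actual tween
    }
  }
  return values;
}

Per-step value lists like this are what an animation track hands to its actuator, which is how the motion stays in step with the audio timeline.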

Resource Tab
The Resource tab manages all the show’s assets. As Figure 2 shows, the components of a show have a complex relationship, and we hope to improve the presentation of this information in the future. In this tab, the author can set up and calibrate the hardware components. A controller is chosen from a list. Sensors and actuators are associated with a controller; the author must specify a pin number and a range of values (for an actuator, these describe the range of movement; for a sensor, these are the values for which it is triggered). Association of components can be done with a drag and drop or via drop-down lists.

Clicking on an icon in the finder pane opens a contextual properties panel for that type of resource. For example, for a timer, you can use that panel to name it and specify its duration in seconds.

To calibrate sensors and actuators, the author can use the play button for this tab. If a sensor is selected (and the controller is associated with it in the program and connected to it in real life accordingly), the current value read by the controller will show in the properties panel. The author can use those values to estimate appropriate trigger values. If an actuator is selected (and properly associated and connected), the author can use a slider to send values to the actuator in real time. The author can use this functionality to estimate the boundary values for that actuator’s movement. The user manual in Appendix A shows the full set of contextual properties panels.


The image above shows the Resource tab. The bottom part shows the contextual properties of a sensor.

Code Tab
As mentioned before, we would like to expose the users of Set&Motion to the concept of state machines. We think that the creation of an interactive show provides a constructivist opportunity to learn this concept. We wanted to provide authors who are also interested in coding an example of a state machine implementation, to show that it is more than an abstraction. In the Code tab we generate working Processing [8] code that executes the show (plays the sound, sends actuator instructions, receives sensor input) based on the state diagram designed in the Overview tab. The full code is attached in Appendix B.

The above image shows the Code tab. The left side shows the generated Processing code and the right side shows Arduino code.


Backend

System Components
The system has many components, described in the image below. They are divided, somewhat thematically, into creative assets, hardware, and triggers. The Scene, Transition, and Variable components don’t fall neatly into these categories, and may be grouped together as the state machine components. A state machine is defined by a collection of states, of which one (and only one) is the current state at any time, and a collection of rules that describe what would cause a change of the current state from a specific state to another. In this application, the states are scenes and the rules are transitions. Variables hold data that is changed by the states and indicate something about the past. For example, a specific state may repeat several times; this may be inconsequential, but if the number of times it has been the current state matters, we need to store this data, or else make multiple states to indicate the progression (the latter is actually more common in state machine implementations, for example, for vending machines).

The Scene component is cardinal to the operation of a show. A show is a collection of Scenes. A Scene has a collection of transition rules, variable actions, and creative assets (audio and animation). A Scene component maintains all these lists and allows adding and removing items. A Scene may be played. When it is asked to play, it starts by performing all the variable actions (these may be: none, increment, reset, or random). It then creates a list of values from the animation wave (this in turn is passed to the actuator, which communicates it to the controller) and starts playing the audio assets. The whole program runs an infinite loop (also known as a game loop), and the Scene component checks in each iteration whether the scene has reached its end. If it has, it starts querying the transitions (if there are many of them, it iterates through the list) for triggers. This querying repeats in each game-loop iteration until a trigger is detected, in which case the current scene is likely to change and another Scene component starts playing.
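This flow can be sketched roughly as follows; the class and method names here are simplified stand-ins for the real components, which hold much more state:

// Simplified sketch of the Scene life cycle in the game loop.
class Transition {
  Scene destination;
  boolean isTriggered() { return false; }  // the real check queries its events
}

class Scene {
  ArrayList<Transition> transitions = new ArrayList<Transition>();
  boolean donePlaying = false;

  void play() {
    // perform variable actions (none / increment / reset / random),
    // pass interpolated animation values to the actuators,
    // and start the audio assets
    donePlaying = false;
  }

  // called once per game-loop iteration; returns the current scene
  Scene update() {
    if (!donePlaying) {
      // keep audio and actuators in sync; set donePlaying at the end
      return this;
    }
    for (Transition t : transitions) {
      if (t.isTriggered()) {
        t.destination.play();   // the current state changes
        return t.destination;
      }
    }
    return this;                // keep waiting for a trigger
  }
}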


The Transition component is connected to two Scene components: a source and a destination. The implementation introduces a subclass of Transition called Stub, created for the purpose of having a visual component that can be dragged about the interface to quickly describe new transitions. The stub can be outgoing, in which case the scene whose stub we use is the source and the scene on which we let go is the destination, or ingoing, in which case the roles are reversed. A Transition also holds a collection of events. It manages this collection, allowing events to be added or removed, and it resets them at the appropriate time; this is particularly needed for timer events. When prompted by the source scene to check if it is triggered (after that scene has finished playing), it goes through all the events associated with it (if there is more than one), and only if all of them indicate that they triggered will it signal that the triggering occurred. In that sense, having multiple events associated with one transition implies a logical AND relationship. If the author would like to implement a logical OR of multiple events, they need to create multiple transitions between the source and destination, with a different event associated with each.

A Variable component resembles in functionality the variables we know from programming languages. It is a simple container for data. In this case, the data must be an integer, and the author must specify its initial value (which may or may not be 0). The Scene component has a handle on the variable and may cause it to change. A variable event has a handle on the variable and only reads its value.

The creative assets are the Audio track and Animation track components. Sound is managed using the Processing Minim library. These components are based on a Track component and focus on displaying the wave representations properly (with regard to scaling, for example) and keeping track of the current play position relative to the overall duration (the scene will update the position in all its tracks if we are using the Scene tab and the playhead is moving). The Animation track component is a bit more complex. It holds a collection of Keyframe objects and manages the addition, removal, and shifting of keyframes. The positions of the keyframes affect the instructions given to the actuator, and the Animation track component has a function that interpolates, from the keyframe positions, all the values that should be given as instructions to the actuator during the track duration, at a resolution similar to the audio track’s. An Animation track is directly associated with an actuator - forming a connection between the creative assets and the hardware - and will pass it this list of instructions on demand.

The heart of the hardware is the Controller. The system supports an Arduino controller that may be connected to sensors and actuators, and a Pololu controller that may be connected to actuators. The Controller component manages a list of such Actuator and Sensor components, and during every game-loop iteration it communicates with its components one after the other (more about that communication in the next section). The author is expected to define the appropriate communication port for the controller, select the type of controller, and ensure the correct code is onboard the controller (for Arduino, such code can be generated in the Code tab, but it still needs to be manually uploaded by the user using the Arduino IDE).

The Actuator component is associated on one side with a Controller and on the other with any number of animation tracks; however, it does not manage these relationships, which are managed by the other components. It is passed a list of instructions by the animation track (this could be a different set of instructions depending on which scene is currently active), and it in turn passes one instruction at a time to the controller based on the current time versus the time it started playing (this is important, as it supports the synchronization with the audio). The author must specify the type of signal expected (digital or analog), the port number, and a range of relevant values.

Sensor event components are similar to actuators in that they are passively associated on one side with a Controller and on the other with a Transition, except in this case the controller updates the current value of the sensor in each game-loop iteration and the transition “reads” it - it doesn’t actually look at the value, just queries whether it is triggered or not, as the trigger value is internal to the Sensor event. Just like one actuator can be assigned to multiple tracks, there can be several sensor events that look at the same real-world sensor with different trigger values. Here again the author must specify the type of signal expected (digital or analog), the port number, and a range of relevant values. The sensor event is one of the event types, along with the Timer event component and the Variable event component. The timer event can be reset and then updates itself based on the time in each game-loop iteration; when its value hits the value specified by the author, it is triggered. It is the job of the Transition component to test whether it has triggered or not. The variable event is even simpler: it accepts actions on its value from Scene components, and when the value reaches the value specified by the author, it is triggered.
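These relationships can be sketched as follows. This is a simplification with names of my own choosing; the real components also handle resetting and the per-iteration updates described above:

// Each event type answers one question: did it trigger?
interface Event {
  boolean isTriggered();
}

class TimerEvent implements Event {
  float elapsed, duration;              // both in seconds
  boolean isTriggered() { return elapsed >= duration; }
}

class SensorEvent implements Event {
  int value, low, high;                 // current reading and trigger range
  boolean isTriggered() { return value >= low && value <= high; }
}

class VariableEvent implements Event {
  int value, target;                    // watched variable and target value
  boolean isTriggered() { return value == target; }
}

// A transition fires only when ALL of its events have triggered
// (a logical AND); OR is expressed as parallel transitions.
class Transition {
  ArrayList<Event> events = new ArrayList<Event>();
  boolean isTriggered() {
    for (Event e : events) {
      if (!e.isTriggered()) return false;
    }
    return true;
  }
}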

Serial Communication
Nothing can work without communication to and from the controller(s). This is done by connecting a controller to a computer port with a USB cable and using that to send information. Since USB sends one bit at a time, this form of communication is called serial communication. As mentioned in the previous section, this application’s main operation is an infinite loop (called every 10 milliseconds or so) called the game loop. The application uses this loop to update all the components that need updating. For example, for a timer, the update subtracts the time difference since the last update from its value. For the controllers in the system, the update starts a reading and writing on the serial channel for that controller. The following describes how the communication with an Arduino controller works, followed by an explanation of the differences with a Pololu controller.

The communication protocol I defined for this application is very simple. Every element of data has the form:

<pin:value>

That is: the character “<”, followed by a number value for the pin the component (sensor or actuator) is connected to, followed by the character “:”, followed by a value (that is read or that should be written), followed by the character “>”. The Arduino controller’s onboard code should also be able to read this communication format and use it to write instructions to the actuators, as well as use the format to send the values read from the sensors back to the application. The Code tab will generate appropriate Arduino code that implements this.
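A sketch of how such messages can be formed and parsed on the Processing side, using Processing’s built-in matchAll() regular-expression helper (the function names here are mine, not the application’s):

// Build the message for one component: encode(9, 120) -> "<9:120>"
String encode(int pin, int value) {
  return "<" + pin + ":" + value + ">";
}

// Pull every <pin:value> element out of an incoming string; each row
// of matchAll() holds the whole match followed by the capture groups.
void decode(String input) {
  String[][] matches = matchAll(input, "<(\\d+):(\\d+)>");
  if (matches == null) return;          // no elements in this read
  for (String[] m : matches) {
    int pin = int(m[1]);
    int value = int(m[2]);
    println("pin " + pin + " reads " + value);
  }
}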


In every game-loop iteration, an Arduino Controller component will:

1. Read a string from the serial port
2. Process it to find matches of the form <pin:value>
3. Go through its list of hardware components and look for Sensor components
4. For each sensor component, see if a match was found in the input
5. If a match was found, write the value to that component
6. Go through its list of hardware components and look for Actuator components
7. For each actuator component, read its current value (if a scene is playing, the actuator will update the value based on the time and the list of instructions it got from the animation track)
8. Form a message of the form <pin:value> based on the actuator and value
9. Send all these messages as a string on the serial port

The communication with a Controller of type Pololu is different because it is mostly designed as a powerful controller meant to control multiple servos, and it more commonly runs as a slave interface waiting for commands of the form PinValue; in a format specified in the Pololu user manual. It is possible to configure a Pololu controller to handle sensors and to change its onboard code, but this is more complex and not the common use. As a result, the first five steps above are not used, and the format generated in step 8 is different. A sketch of this per-iteration communication appears below.
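Putting the steps together, one game-loop update for an Arduino-type controller might look like the following sketch, using Processing’s Serial library. The parallel pin/value arrays are an illustrative simplification of the real component lists:

import processing.serial.*;

// One game-loop update for an Arduino-type controller (steps 1-9 above).
void updateArduinoController(Serial port, int[] sensorPins, int[] sensorValues,
                             int[] actuatorPins, int[] actuatorValues) {
  // steps 1-2: read whatever arrived and pull out <pin:value> elements
  String incoming = port.readString();
  if (incoming != null) {
    String[][] matches = matchAll(incoming, "<(\\d+):(\\d+)>");
    if (matches != null) {
      // steps 3-5: store each reported value with the matching sensor
      for (String[] m : matches) {
        for (int i = 0; i < sensorPins.length; i++) {
          if (sensorPins[i] == int(m[1])) sensorValues[i] = int(m[2]);
        }
      }
    }
  }
  // steps 6-9: collect the current actuator values and send them out
  String outgoing = "";
  for (int i = 0; i < actuatorPins.length; i++) {
    outgoing += "<" + actuatorPins[i] + ":" + actuatorValues[i] + ">";
  }
  port.write(outgoing);
}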

Code Generation
As part of the educational purpose of the application, I hope it provides an opportunity in a learning environment to go over code examples and discuss them. A state machine is a very important computer science concept that is used heavily, especially in game development. However, it appears to be a rather abstract concept. I was hoping to build awareness of the concreteness of state machines by showing how they can be implemented as actual, runnable code. For older children who are using the animatronics workshop as a step while learning about programming and electronics in a wider context, the generated code can be a reference for other tasks.

Set&Motion generates two kinds of code: the Arduino code for any Arduino controller defined in the resources, and the Processing code that runs an entire show. An example of the generated code (based on the example in the Use Case chapter) is available in Appendix B. For the Arduino code, there is a core of code that performs the game loop (the loop() function), which reads values from input pins, writes values to output pins, and calls the function serialEvent(). The serialEvent() function looks for a string on the serial port, parses it for the format <pin:value> so it can write to outputs, and writes to the port in that format based on inputs.


How does the code know about the inputs and outputs? This is where the code varies between projects (and between controllers in one project!). When the author creates the show, they specify in the Resource tab, for each actuator and sensor, the controller that will run it, the type of the signal (analog or digital), and the pin number. Set&Motion uses that information to create variables to hold the values of these components and to add commands such as analogRead() and analogWrite() with the specified pin numbers. As long as the real-life components are indeed connected as the author specified in the Resource tab, this code should manage communication with the application correctly. The author needs to manually upload this code to the Arduino board through the Arduino IDE.

The state machine implemented in the Processing code is based on the state machine visualization represented in the Overview tab (there is no verification that it is a valid and functioning machine). It starts by generating a block of variables for:

- A state value for each scene playing, called SCENENAME_PLAYING
- A state value for each scene waiting (to see where it should transition), called SCENENAME_WAITING
- A variable to hold the current state
- A boolean for each scene in both modes (playing and waiting) to indicate whether it has started or not; this boolean is used to make sure resetting of values does not happen more than once
- Hardware-related variables: Serial is used for communication and is part of the Processing library, and we have variables to hold sensor values
- Variables with the string paths to audio files
- Variables for determining and evaluating timing
- Variables for the Variable components created through the Resource tab
- In an external file called “actions.pde”, an array for each animation track that holds all the values corresponding to the animation wave (these arrays are extremely long, and are therefore stored in a separate file to keep the main file easy to read)

The setup() function, which is called when the program first runs, initializes variables for communication, timing, and the first state. The game loop, which in Processing is called draw(), is also simple: it calculates the current timing, performs an arduinoRead() (if an Arduino is in use), then checks the current state and calls the function corresponding to that state. So each state has its own function. If the state is a SCENENAME_PLAYING state, the function will:

1. Reset the time elapsed if this is the first time the function is called, and perform variable actions if there are any.
2. If the time elapsed is longer than the scene duration (calculated by the Set&Motion application), switch to the SCENENAME_WAITING state.
3. Else, check for each audio track if it should be played; if so, play it.
4. Write to the actuators values from the actions.pde arrays based on the time elapsed. We do that using the pololuWrite() or arduinoWrite() functions.

The SCENENAME_WAITING functions are a bit more complex, as they have a string of “if” statements that correspond to the outgoing transitions from SCENENAME. If any of the conditions is true, the function sets the next state to the playing mode of the destination scene. Examples of conditions may be:

A variable event transition:

if ((waiting == 5)) {
  current_state = ABRUPT_END_PLAYING;
  ABRUPT_END_started_playing = true;
  return;
}

A sensor event transition:

if ((arduino66_light53_value >= 250 && arduino66_light53_value <= 1000)) {
  current_state = PAINT_PLAYING;
  PAINT_started_playing = true;
  return;
}

A timer event transition:

if ((time_elapsed >= 10000)) {
  current_state = CLUES_PLAYING;
  CLUES_started_playing = true;
  return;
}

I will mention that this is by no means the only, or the best, way to implement a state machine; what the best way might be is a question for another time. This method is, in a way, fairly easy to follow and, hopefully, easy to teach with.
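To show how these pieces fit together, here is a minimal, self-contained sketch in the spirit of the generated code. The two scenes, their durations, and the simulated sensor are invented for illustration; the real generated code also plays audio, talks to the controllers, and writes actuator values:

// Hypothetical two-scene show: START plays for 5 seconds, then waits
// for a (simulated) sensor before playing END, which is a dead end.
final int START_PLAYING = 0;
final int START_WAITING = 1;
final int END_PLAYING = 2;

int current_state = START_PLAYING;
boolean START_started_playing = true;
boolean END_started_playing = false;

int time_elapsed = 0;   // milliseconds since the current scene started
int last_millis = 0;
int sensor_value = 0;   // would normally be updated by arduinoRead()

void setup() {
  last_millis = millis();
}

void draw() {
  // advance the clock, then dispatch on the current state
  time_elapsed += millis() - last_millis;
  last_millis = millis();
  sensor_value = int(random(0, 1024)); // stand-in for a real sensor read

  if (current_state == START_PLAYING) start_playing();
  else if (current_state == START_WAITING) start_waiting();
  else if (current_state == END_PLAYING) end_playing();
}

void start_playing() {
  if (START_started_playing) {  // first call: reset the clock
    time_elapsed = 0;
    START_started_playing = false;
  }
  // here the generated code plays audio and writes actuator values
  if (time_elapsed >= 5000) current_state = START_WAITING;
}

void start_waiting() {
  // one "if" per outgoing transition; all its events must hold
  if (sensor_value >= 250 && sensor_value <= 1000) {
    current_state = END_PLAYING;
    END_started_playing = true;
    return;
  }
}

void end_playing() {
  if (END_started_playing) {
    time_elapsed = 0;
    END_started_playing = false;
  }
  // a dead end: no outgoing transitions
}

Note how each WAITING function is just the chain of “if” statements shown above, one per outgoing transition.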


Design Patterns

Why Use Design Patterns
Designing is not easy. This is the first stumbling block for those who want to create… anything, really. There is nothing scarier than an empty page, or an empty screen, waiting for what you write or sketch to become the next great… thing. What should be your metaphorical first stroke? Even if you are flexible enough to allow for mistakes and constant erasing, I’m sure you would prefer to start heading in the right direction. Creation involves design in every domain - how do you make a product, draft a house, build complex software applications? You need to use your design skills to be successful in such tasks, and this skill is usually improved with experience. Yet we want something that will help us talk and think about good design from an early stage. To do that, we use design patterns.

Design patterns go beyond abstract design guidelines and curate a collection of known successful solutions to common problems in a specific design domain. This idea was notably executed in the 1977 book “A Pattern Language: Towns, Buildings, Construction” [Alexander, 1977], which aggregated 253 architecture patterns. The idea made even bigger splashes in the Computer Science community following the 1994 book “Design Patterns: Elements of Reusable Object-Oriented Software” [Gamma, 1998], where Gamma et al., known as the gang of four (a name which furthers their superstar aura), described patterns for object-oriented design that are widely used today.

A pattern usually has several components. The four core components are: pattern name, problem, solution, and consequences. In some cases, design pattern authors also include examples, illustrations, and implementation tips. There is power in naming. The pattern name increases our design vocabulary and makes it easier for us to talk about our design with group members. The name should be descriptive enough that there is no confusion as to why it was assigned to the pattern, and no confusion with other patterns. The problem is a description of a situation that often comes up in the design domain. This section should give the designer enough information to identify the situation that can be addressed by the design pattern. The purpose of a design pattern is to help designers solve problems with tried and true solutions. The solution is a description of the design itself: what the elements of the pattern are, how they should work together, and how they solve the problem. The consequences section describes the expected results of using the pattern and how it can be used together with other design ideas, and lists any advantages or disadvantages there may be.


Perhaps you are wondering why I mention design in a work that focuses on stories and authoring. In some ways it seems that writing a story relies on creativity and emotions, while design is a logical, analytical activity. There are two reasons I bring this up. The first is that story writing is very much a design-centered activity. The components of drama were analyzed in Aristotle’s “Poetics”, where he suggests design patterns long before the term was coined. Story writers use known story structures such as the 3-act model or the Japanese Kishōtenketsu to organize their story development. Interestingly, even the gang of four, in an attempt to illustrate the reason to use design patterns, say:

Novelists and playwrights rarely design their plots from scratch. Instead, they follow patterns like "Tragically Flawed Hero" (Macbeth, Hamlet, etc.) or "The Romantic Novel" (countless romance novels). In the same way, object-oriented designers follow patterns like "represent states with objects" and "decorate objects so you can easily add/remove features." Once you know the pattern, a lot of design decisions follow automatically.

The second reason to discuss design patterns, and the reason I decided to incorporate them in the Set&Motion system, is the complexity of designing interaction for stories. Authoring stories is already a difficult task, further exacerbated here by the need to think about possible multiple plot lines, outcomes, and user actions. The Set&Motion tool provides the functionality, and does so while supporting a visual representation that is fairly understandable (state machines are easy to follow if you understand a little about them) but still far from intuitive. So how can one think about stories as state machines, or even about the stories themselves? The answer is, unsurprisingly, design patterns.

Table 1. Design Patterns

One after the other
Why: No viewer input is needed for this part of the story. The division into scenes is for organization.
How: Create scenes that best organize your story. Connect them with timer events.

Choose your own adventure
Why: The story offers the viewer two (or more) options to choose from, like a “choose your own adventure” story.
How: Create scenes for all the possibilities you present and connect each one to a different sensor event. You can use the sensor events again for another fork of opportunities.

Roll of the die
Why: The next scene should be chosen randomly out of X possible scenes. This can be used to represent a randomized battle consequence (as in D&D) or to create a somewhat different story in every run.
How: Create a variable with a max value of X. In the root scene set it to “random”. For each possible scene create a transition with a variable event: the first should trigger on 1, the second on 2, and so on until the last, which will trigger on X.

Dead end
Why: The show should finish. Could be because the story ended, the viewer left, or the viewer made choices that led them to a dead end.
How: Any kind of event can lead to this scene. Just don’t connect it to anything else.

Try again
Why: You expect a certain input from the viewer, but you want to prompt them until that input is received. Possibly, you may want to limit the number of retries. Good when there are “correct” and “incorrect” inputs, or if you suspect the viewer may need help recalling what to do.
How: Create a transition from a scene to itself and add a timer event. Make sure the timer is not too short or too long (try it on people). You can create a variable and increment it in this scene alone, then add a transition to another scene (possibly a “dead end”) with a variable event.

Impatient destination
Why: After a certain time has elapsed, you suspect the viewer has either moved on, or does not know what to do and needs a reminder.
How: Connect all the scenes that may be “stuck” (waiting for input or dependent on a variable that may or may not trigger an event) to a scene with a timer transition.

Dummy scene
Why: An empty scene; it doesn’t promote the plot, but is used for structural reasons. Can be the start scene, a “dead end”, or a “home base.”
How: Make an empty scene. Anything (or nothing) can lead into it; anything (or nothing) can come out. This depends on the structure you are trying to make.

Home base
Why: You expect the viewer to explore multiple options starting from the same base scene. For example, a peddler offering to let the viewer look at their wares; the peddler will continue to offer this until some other condition occurs. Can work well with a “dummy scene” and a “list of musts.”
How: Make a base scene (either a “dummy” or one with a brief message). Different sensor events lead to scenes representing the different options. Each of those scenes has a transition back to the base scene with a very short timer event. (You don’t want the timer to be 0, so that variable/sensor events can trigger and lead to a different part of the story.)

List of musts
Why: Among all the scenes in the show, you want to make sure the viewer has viewed X specific scenes before progressing to a destination scene. Perhaps some secrets must be revealed, or story objects given, before a fight scene.
How: Create a variable. Increment it in each of the “must” scenes and add a transition from each of those scenes to the destination scene with a variable event triggered at X. If there aren’t many scenes, you can make a variable for each and check all of them before transitioning to the destination (similar to using Boolean variables).

Just once
Why: There is a scene that should play just once. A problem may arise when that scene is reached from one or more locations using a sensor that, once triggered, is expected to remain triggered (for example, a light sensor was exposed, and you don’t expect the viewer to cover it again), which may call that scene again and again even though this is not desirable. Can work with “home base” or “list of musts.”
How: Create a variable for that scene. In that scene, and only that scene, increment the variable. In addition to the sensor event leading to that scene, add a variable event (so there will be two events on the same transition) that triggers when the scene variable equals 0. That way, it will only be true one time: after the scene plays once, the variable will equal 1.


Design Pattern Details
Table 1 introduces the design patterns developed for Set&Motion. They are only applicable in this domain, as they rely on the specific types of events that Set&Motion supports: timer, sensor, and variable events. In addition, the names are easy to remember and often make story-sense, like roll of the die or try again.

One after the other is an example of a linear story (or a linear fragment inside an otherwise non-linear story). In this case, the division into scenes is done for the sake of organization or some kind of logical partition, since the material could have been recorded as one scene. However, working on smaller scenes is more convenient, and changes can be made quicker (you don’t need to find a sound section and its animation, or worse, re-record a sound file) in this kind of modular process. The use of timers shows there is no expected interactivity between the scenes.

The choose your own adventure pattern is named after the book style with that name, where the reader chooses after a story fragment whether to follow a certain action that leads them to one page, or a different action that leads them to a different page. Based on this concept, this design pattern represents a fork in the “road” following a scene, and the viewer decides which path to follow. The viewer expresses their choices by triggering sensors, so the transitions in this design pattern use sensor events.

In a roll of the die we see how to incorporate randomness into stories. It is common in role-playing games such as D&D to use a die to decide whether a fight, spell, or other action was successful, and that, in turn, decides the progression of the story. If the character was successful, they may win treasure; if they failed, they may have to run away injured. In Set&Motion stories, an author can use randomness so that a story is somewhat different every time it is viewed. It can be done using a variable, and setting that variable to receive a random value in a scene. The following scene could be any one out of a handful of scenes that have a variable event assigned to them. For example, in the first scene, the puppet may say “You, traveler. You are new to these parts.” and the next scene may be randomly selected as either “I don’t trust strangers. You have to prove yourself first.” or “I’ve seen your face in the paper, you’re a detective, aren’t you?” thus setting the story on two possibly (or partly) different paths. A variable will get a random value smaller than 2 in the first scene. If it is 0, we switch to the second scene; if it is 1, we switch to the third.

Stories usually end. Even The Neverending Story (the wonderful book by Michael Ende) had a last page. While interactive stories may go on forever and state machines can represent that, the dead end pattern is a reminder that some parts of a story make for a fitting ending. If the story presented a problem and it was resolved, there is no need to revisit other parts of the story. If there is a suspicion that our viewer is gone (for example, after a long period of time with no viewer input), it might be a good idea to end the story. A dead end is simply a scene represented with a black semi-circle on the bottom of its rectangle, which indicates there are no outgoing transition links. It can have any number of incoming transitions with any event combinations, and there can be multiple dead ends in one show.

Suppose your story is waiting for some input from the viewer, but time passes and the input doesn’t come - perhaps because the viewer didn’t hear or understand what to do next. In that situation the show may appear to freeze. We can solve this problem by reissuing the story prompt periodically, until some other story-appropriate event happens. This idea is conveyed in the try again pattern, where, in addition to any other story-related transitions, we add a transition from the scene to itself with a timer event. Coupling a timer event with a variable event on the link can help limit the number of times we allow the viewer to try again. This pattern is useful only if the repeated scene includes instructions that help the viewer move forward in the story.

Like the try again pattern, the impatient destination pattern originates from the problem of the viewer being stuck because they don’t know what to do, or because they left. In some cases, the author may want to give different instructions, so they need a different solution than try again. In the event that the viewer left, the author may want to terminate the show, making the impatient destination also a dead end. Implementing this pattern is simple: the author adds a timer event on all the links that lead to the impatient destination.

Some of the design patterns listed here are crucial to conveying the story the author is presenting, some are meant to provide a friendlier experience (almost like a user interface), and some are structural, with the purpose of organizing the scenes logically. A dummy scene is an empty scene with no audio, animation, or duration, and is used to facilitate other patterns. For example, an impatient destination may be a dummy scene, or a home base from which sensors help decide on the next scene. A dummy scene allows more structural flexibility for the author.

The home base pattern describes a way to connect multiple scenes that can transition from one to the other based on input. If the author has four scenes, the first of which should play on input A no matter what the previous scene was, the second on input B, and so on, the author may find themselves adding many confusing links - connecting every pair and trying to remember which sensor event is applicable. Having a home base is an organized way to handle the situation. Every scene immediately returns to the home base when it is done (with a very short timer), and the next scene is decided based on the input detected by the home base. This method is more modular; if the author creates a fifth scene, they needn’t make all the pairwise connections, but rather add one link from the home base and one to it.

When making an interactive story, there are multiple ways in which the author may expect the viewer to travel. They may want the viewer to traverse the whole story, they may find any single path with an ending sufficient, or they may want the viewer to experience at least some portion of the story (or at least some salient points). List of musts is a design pattern for the latter case, in which the author specifies the scenes that must be viewed before some other scene is reached. An example is shown in the demo show in the next chapter: in a detective story, there are several possible clues, but only some of them are necessary for solving the case. To implement this pattern the author should use variables and variable events.

Many interactive story structures avoid cycles so the viewer does not experience the same story fragment multiple times. Set&Motion stories are not expected to have this linearity; they are encouraged to use repetition when the story permits, or to encourage the viewer to action. However, some story fragments should - for the sake of the story - be experienced just once, which lends this design pattern its name. To make sure a scene doesn’t play more than one time, we use the power of variables again. We create a variable for the scene and have the scene increment that variable. On top of the transition logic already in place, we add a variable event that checks that the variable is 0; this way, after the scene plays once, that event will not trigger again.

Set&Motion Cards
For authors creating with the Set&Motion tool, we offer to scaffold the process with the Set&Motion cards. The current set of cards is listed below.


The goal of the cards is to guide the author through the different tasks that are part of the show creation. The design patterns are shown on the cards with an orange border; as mentioned before, they can inspire story structures and explain how to implement some ideas with the tools available in the Set&Motion state machine. I’ve added the purple-bordered sensor cards to show some information about the different sensors available to the author. Future versions will likely include a schematic for building the relevant electronic circuit for each sensor. Likewise, there are plans for cards describing the actuator connections. The last set of cards, with a green border, is the story cards. These offer useful prompts for story creation, such as who the character is, what could be bothering it, how it would handle that problem, and where the character would be (the setting). They also include prompts that relate to the performance aspect of a puppet show, such as how the character would move, how the character would talk (fast or slow, bored or excited), and which props the viewer can interact with.


Use Case
“Zee’s Mystery” is a show created from start to finish with the Set&Motion tool. It was presented to a live audience at the UIST 2015 Student Innovation Contest. The premise of the story is that the viewer is a detective who came to help Zee, an artist (portrayed by a zebra animatronic puppet) from whom an item was stolen. Zee suspects the theft was perpetrated by one of the last three clients whose portraits she had drawn. The show and set were created with various crafting materials, such as plywood, paint, and modeling clay, and controlled with hobby electronics: a Pololu controller, an Arduino controller, servos, a light sensor, a touch sensor, a bend sensor, resistors, and jumper wires. Part of the electronics was connected to a breadboard.

The set is an important part of the story. It appears like a room with three portraits on the wall (these contain clues about the mystery), an easel with canvas and paint cans on one side, a box with items in it (costume props, feathers, and some peanuts), and a stack of flight magazines. There are sensors embedded in the set. The paint cans are covered and hide a light sensor; when they are uncovered, the sensor changes values. The stack of magazines has a touch sensor that changes value when the stack is picked up and squeezed lightly. The costume box has a bend sensor attached between the side of the box and its lid; when the lid is opened, the sensor changes value.


The set is shown on the left; on the right, a viewer interacts with the bend sensor in the box.

Script and State Machine
The show has 7 scenes with the following transcript:

Scene 1 (Start): “Hello! A detective, what luck. My priceless McGuffen statue was just stolen. Hmm, I think it was done by one of the last 3 people I painted. Do you think you can help me figure it out?”

Scene 2 (Clues): “Help me look for clues around the studio.”

Scene 3 (Paint): “It looks like I forgot to close my yellow paint. There is a strange yellow stain on Dr. Lion’s coat. But I suppose it may be mustard.”

Scene 4 (Chest): “Yep. No surprise seeing Lady Ostrich’s feathers there. She insisted on wearing something from my costume chest. Though I wonder how the peanuts got there.”

Scene 5 (No clue): “Someone forgot their magazines here. Not sure if it was Lady Ostrich (you know) or Dr. Lion. In any case, I’m not sure this is a clue.”

Scene 6 (End): “Wait a minute! I just noticed that Mr. Elephant’s yellow bowtie is just a bow tie covered with yellow paint! And the peanuts too... It looks like he was snooping around. I’ll get to the bottom of this. Thank you, detective, I couldn’t do it without you. Goodbye for now.”

Scene 7 (Abrupt end): [empty]

The image below shows the state machine representing the play.

The transition between scenes 1 and 2 is based on a timer. Scene 2 is replayed periodically to prompt the viewer. If scene 2 is played five times, we assume the viewer left and transition to scene 7 (the show just ends). However, if we detect the light sensor, the bend sensor, or the touch sensor from scene 2, we transition to scenes 3, 4, or 5, respectively. These scenes are also connected among themselves; for example, if scene 3 has finished playing and detects the bend sensor, it will transition to scene 4. If none of the sensors are triggered, scenes 3, 4, and 5 return to scene 2 after a timer. Both scenes 3 and 4 increment a “clues” variable, and when that variable hits the value 2 at the end of one of them, we transition to scene 6 and the conclusion of the show.

Use of Design Patterns
There are several design patterns manifested in this show. In fact, the show was my inspiration for gathering these patterns. Since the show was first created, the design patterns have evolved, and they now present some better ways of implementation.

We see one after the other in the transition between the first and second scenes. Scene 1 is an introduction to the case; Scene 2 is a prompt to look for clues. Its text is separated from Scene 1 because we want to use it in a try again pattern. But there are no conditions to pass between these two scenes, so a simple short timer moves from 1 to 2.

Scene 2 implements the try again pattern. The prompt is repeated every few seconds; it shows the viewer that the show is still running (the puppet is otherwise still) and that they are expected to do something. In this play, we limit the tries to 5, so we create a variable to keep track of that information, and when we exceed our number of repeats, this scene leads to an impatient destination in Scene 7.

The transitions from Scenes 3, 4, and 5 to Scene 6 illustrate a list of musts. Scenes 3, 4, and 5 have different clues in them: the open yellow paint cans, the stack of magazines, and the feathers and peanuts in the costume chest. Only two of these (the paint and the chest) are required to understand the “solution” to the mystery, so it does not matter whether Scene 5 is played or not; the moment Scenes 3 and 4 have finished playing, we go to the conclusion of the mystery in Scene 6. The show has a variable called clues that is incremented when Scenes 3 and 4 are played, and when it hits the value 2, it triggers a transition to Scene 6. However, the machine created for this show is actually flawed, since it will go into the final scene if Scene 3 is played twice or Scene 4 is played twice (because, for example, the box was opened twice). A better implementation would integrate the just once design pattern to make sure none of the clues are visited multiple times.


Another design pattern visible in this example is the home base pattern. The base scene is Scene 2, which also repeats itself up to a limit count if no sensors are triggered. In this example, Scenes 3, 4, and 5 are also connected among themselves because the timers back to base were not short enough (the reason being that just once was not in use then). We also see the dead end pattern, with both Scene 6 and Scene 7 being possible termination points for the show. Scene 7 is also a dummy scene: it has no content, and we added it to the project simply because we needed a dead end scene, so it has a structural role.

There is no procedural way of working with the design patterns. The need to use one may emerge from the story, or deciding to use one may direct the story choices. In the end, the creative process for creating a Set&Motion show is iterative in nature.


Conclusion and Future Work

There are three directions in which to explore and improve the system: software, hardware, and user studies. At this time there are many basic functionalities that I have not had time to implement. These include an undo capability, cutting and pasting of resources such as sounds or scenes, and audio editing abilities like erasing a segment of audio or moving it along the timeline. There is also a need for some aesthetic changes: a better system for laying out edges, new organization schemes for resources, and a more fluid and customizable user interface. And there are quite a few new features I would like to add, such as a sound effects library, a debug tool (to check that the state machine works as intended), a tool to assist in building puppet frames, and support for collaborative work.

There are several hardware-related improvements I would like to explore. One improvement would be to design a puppet frame building process. Puppets usually have different shapes and sizes, and one author may have a preference for the axes of movement (for example: move the mouth up and down and the neck side to side) while another author may have different preferences (for example: move the mouth up and down and the whole body side to side), and these may influence the needed positions of the actuators on the puppet's frame. Each actuator needs one part of the frame to hold its housing in place, and another part of the frame to connect to its horn (the part that moves). A useful tool could help design the frame and fabricate pieces that connect to form the frame with the proper attachments for the actuators. Another interesting hardware improvement may be the creation of a system of sensors and actuators that operate as independent units. As [Blikstein, 2015] notes, there are different levels at which electronic components can be exposed to the user; at the moment, the system works with fully exposed electronics (Arduinos, sensors, resistors, and wires). It could be made more accessible if the electronics were encapsulated into easy-to-use components that don't require any knowledge of electronics. Of course, a possible objective of working with Set&Motion may be to learn electronics, so I would like to keep the system open to different levels of electronic components to support a wide array of users and goals.

Most importantly, future work will involve studies with participants (initially undergraduate students, but hopefully children after that) to evaluate, among other questions: How easy is it to make shows? How fun is the experience? Is it an effective


way to understand the concept of state machines? Is the tool easy to understand and use? Are the Set&Motion cards helpful for authors? A first study will be a workshop for two or three groups, each of which will work to create a show and give initial feedback that will hopefully direct further improvements and further studies.

In summary, Set&Motion has a long way to go and many avenues to explore. At this point, as a prototype, it can create and play scenes with audio and animations (instructions to the actuators that control puppet movement), and it can create flow control between the scenes using three types of events, representing the whole show as a state machine. I presented in this paper some of the thought process behind the current design, yet this may all change with user input. I also presented a scaffolding process for show creators that includes prompt cards and a set of design patterns that can inform story decisions as well as explain how to make them work. Thank you for reading.


Works Cited

Alexander, Christopher, Sara Ishikawa, and Murray Silverstein. A Pattern Language: Towns, Buildings, Construction. New York: Oxford UP, 1977. Print.

Alford, Jennifer Ginger, Lucas Jacob, and Paul Dietz. "Animatronics Workshop: A Theater Engineering Collaboration at a High School." IEEE Computer Graphics and Applications 33.6 (2013): 9-13. Print.

Aristotle, and Gerald Frank Else. Aristotle: Poetics. Ann Arbor: U of Michigan, 1967. Print.

Blikstein, Paulo. "Computationally Enhanced Toolkits for Children: Historical Review and a Framework for Future Design." Foundations and Trends in Human–Computer Interaction 9.1 (2015): 1-68. Print.

Dietz, Paul H., and Catherine Dietz. "The Animatronics Workshop." ACM SIGGRAPH 2007 Educators Program - SIGGRAPH '07 (2007). Print.

Gamma, Erich, John Vlissides, Ralph Johnson, and Richard Helm. Design Patterns: Elements of Reusable Object-Oriented Software. Reading, MA: Addison-Wesley Longman, 1998. Print.

Garzotto, Franca. "Interactive Storytelling for Children: A Survey." International Journal of Arts and Technology 7.1 (2014): 5. Print.


Howland, Kate, Judith Good, and Benedict Du Boulay. "Narrative Support for Young Game Designers' Writing." Proceedings of the 14th International Conference on Interaction Design and Children - IDC '15 (2015). Print.

Kato, Yoshiharu. "Splish: A Visual Programming Environment for Arduino to Accelerate Physical Computing Experiences." 2010 Eighth International Conference on Creating, Connecting and Collaborating through Computing (2010). Print.

Medler, Ben, and Brian Magerko. "Scribe: A Tool for Authoring Event Driven Interactive Drama." Technologies for Interactive Digital Storytelling and Entertainment. Springer Berlin Heidelberg, 2006. 139-150. Print.

Montemayor, Jaime, Allison Druin, Gene Chipman, Allison Farber, and Mona Leigh Guha. "Tools for Children to Create Physical Interactive Storyrooms." Computers in Entertainment 2.1 (2004): 12. Print.

Myers, Brad A. "Taxonomies of Visual Programming and Program Visualization." Journal of Visual Languages & Computing 1.1 (1990): 97-123. Print.

Resnick, Mitchel, et al. "Scratch: Programming for All." Communications of the ACM 52.11 (2009): 60-67. Print.

Rusk, Natalie, Mitchel Resnick, Robbie Berg, and Margaret Pezalla-Granlund. "New Pathways into Robotics: Strategies for Broadening Participation." Journal of Science Education and Technology 17.1 (2007): 59-69. Print.


Sempere, A., and B. Mikhak. "CTRL_SPACE: Using Animatronics to Introduce Children to Computation." IEEE International Conference on Advanced Learning Technologies, 2004. Proceedings. Print.

Silva, André, Guilherme Raimundo, and Ana Paiva. "Tell Me That Bit Again... Bringing Interactivity to a Virtual Storyteller." Lecture Notes in Computer Science: Virtual Storytelling. Using Virtual Reality Technologies for Storytelling (2003): 146-54. Print.

Stoneburner, Delynn. "From STEM to STEAM: Using Brain-Compatible Strategies to Integrate the Arts. Pilecki, T., & Sousa, D. A. (2013)." Roeper Review 38.2 (2016): 129-30. Print.

Weller, Michael Philetus, Ellen Yi-Luen Do, and Mark D. Gross. "Escape Machine." Proceedings of the 7th International Conference on Interaction Design and Children - IDC '08 (2008). Print.


Appendix A: User Manual

This section details the functionality of the Set&Motion tool.

Overview Tab

The Overview tab has a top section and a state machine visualization canvas. The top part of the application (toolbar + tab selection) in the Overview tab has the following functionality:
1 - Indicator of the current tab (Overview)
2 - Start new project - the user will be asked to select a path for the new project and provide a name for it. A new folder will be created at the destination, where the application will save resources as well as the <project_name>.json file.
3 - Save project
4 - Load existing project - the user will be asked to navigate to the desired <project_name>.json file.
5 - Play show - this button will activate the state machine describing the show and execute all the scenes. It will only be enabled if there is a single start scene.
6 - Add a new scene
7 - Delete a scene - enabled when a scene is selected
8 - Edit a scene - enabled when a scene is selected. This will open the Scene tab, showcasing the selected scene.
9 - Change window to fullscreen or part screen.


The state machine visualization canvas has the following characteristics:
1 - Scenes are represented by colored rounded rectangles
2 - A black semi-circle with an "S" on the top of a scene represents a starting scene (no incoming links)
3 - A black semi-circle with an "E" on the bottom of a scene represents an ending scene (no outgoing links)
4 - Outgoing transition stub - can be dragged to another scene to form an outgoing link
5 - Incoming transition stub - can be dragged to another scene to form an incoming link
6 - Transition link - transitions represent a possible passage from one scene to another (or from a scene to itself). The color of a link is the color of the destination scene if the link has events associated with it, or gray if not.
7 - Timer event icon - shows that a link has a timer event associated with it
8 - Sensor event icon - shows that a link has a sensor event associated with it
9 - Variable event icon - shows that a link has a variable event associated with it
10 - Scene name - defaults to SceneX, where X is a number. Can be changed in the Scene tab.
11 - Audio tracks indicator - conveys to the user whether there are audio tracks (their number) or none (grayed)
12 - Animation tracks indicator - conveys to the user whether there are animation tracks (their number) or none (grayed)
13 - Notes indicator - when grayed, there are no notes for that scene


Working in the Overview tab: Create a new project or load an existing one. Add scenes with the add button. Drag them around to place them. Drag the transition stubs to form the desired relationships between the scenes. Click on a link to get the events menu. From this menu, the user can: add a timer event, add a sensor event, add a variable event, add a scene in the middle of the link (splitting the link into two links), or remove the link. Adding an event will switch to the Resource tab to edit it. The user can select a scene and delete it, or edit it in the Scene tab (editing can also be done by double-clicking a scene). When the show is complete, and if there is only one starting scene, the user can click on the Play button to run the show. While the show runs, the current scene is indicated with a gradually filling color.

Scene Tab

The Scene tab has a top section, a text editing section, a variable handling section, and a tracks editing section. This tab shows the content of one scene, called the current scene. If there are no scenes in the project, an empty one will be created, and the tab will show it. The top part of the application (toolbar + tab selection) in the Scene tab has the following functionality:
1 - Indicator of the current tab (Scene)
2 - Start new project - the user will be asked to select a path for the new project and provide a name for it. A new folder will be created at the destination, where the application will save resources as well as the <project_name>.json file.
3 - Save project
4 - Load existing project - the user will be asked to navigate to the desired <project_name>.json file.
5 - Play current scene from the location of the playhead.
6 - Record audio - this button switches to "stop recording" when it is active.
7 - Add new animation track - by default, animation tracks span the length of the longest audio track (if there are no audio tracks, animation tracks will have no length)
8 - Color selection - this button allows the user to change the color associated with the scene. This color is manifested in the scene visualization and the scene's tracks.


9 - Previous scene - will load the previous scene as the current scene
10 - Next scene - will load the next scene as the current scene
11 - Zoom out - applies to the tracks; displays them in a more compact way
12 - Zoom in - applies to the tracks; displays them in a more expanded way
13 - Change window to fullscreen or part screen.

The text editing section has a textbox for changing the name of a scene (with a character limit), and a textbox for writing notes for the scene, such as a script.

The variable handling section maintains an updated list of all the variables defined in the Resource tab. For each of these, the user can choose an action that will be performed on the variable every time this scene is played (each scene may have different actions for the variables). The possible actions available in the drop-down are: None, Increase, Reset, and Random. The reset value and the range of possible random values are set in the Resource tab for each variable. The default action is None. A sketch of the action semantics appears below.
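As an illustration only, the four actions could be summarized in Processing-style Java as follows (the applyAction helper is hypothetical; the generated code in Appendix B inlines these effects directly, e.g. clues_found++):

// Hypothetical sketch of the four per-scene variable actions.
int applyAction(String action, int value, int resetValue, int maxValue) {
  if (action.equals("Increase")) return value + 1;               // add one
  if (action.equals("Reset")) return resetValue;                 // back to the reset value
  if (action.equals("Random")) return int(random(maxValue + 1)); // random 0..maxValue
  return value;                                                  // None: leave unchanged
}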


The tracks area can hold two kinds of tracks: audio and animation. They appear in order of creation. This section has the following characteristics:
1 - Track header - has the track name, which can be edited here.
2 - An audio track is represented by its audio wave
3 - An animation track is represented by keyframes and their linear interpolation. The range of values for the animation - the highest value corresponding to the top-most key on the animation wave and the lowest value corresponding to the bottom-most key - is set in the Resource tab.
4 - Playhead - the playhead moves as the current scene plays, or can be dragged by the user to a desired start location
5 - Mute button for audio tracks - this is a toggle button
6 - Add new keyframe on click for animation tracks - this is a toggle button
7 - A keyframe - each dot on the animation wave represents a keyframe. A keyframe can be dragged to a desired position or deleted with the "delete" key. The user can also select a portion of the wave and press "delete" to remove several keyframes at once.
8 - Not connected indicator - animation tracks need to be connected to actuators (through the Resource tab), which should be connected to a controller. If those connections are missing, or the controller is not currently active, the "Not Connected" label is displayed and the track is grayed.

Working in the Scene tab: The author can start by writing the script for the scene and giving it an appropriate name. They can then record the audio based on that script. The author can then add animation tracks. They would go back and forth to the Resource tab to set an actuator for each animation track (these would need to be set up in order to preview the scene while working on it). They can mimic the mouth movement by following the sound wave and adding big jumps where the sound wave shows a big change (those usually reflect syllables, which you would like a puppet to mimic).

The author can select individual keyframes and drag them around to edit their value and location, select one and press "delete", or select a section of the track and press "delete" to remove several at once. The author can play to test the animations and sound when the animations are connected to actuators that are connected to an active controller. When the logic for the whole show is decided, the author will define variables in the Resource tab and set up the appropriate actions for each one in that scene.
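For concreteness, the linear interpolation drawn between keyframes can be sketched as follows (a minimal sketch in Processing-style Java; the interpolate function and its names are hypothetical — in practice the tool bakes the interpolated values into the integer arrays of actions.pde, as seen in Appendix B):

// Hypothetical sketch: the animation value at time t (ms) between two
// keyframes (t0, v0) and (t1, v1), matching the straight segments drawn
// between keyframe dots on an animation track.
int interpolate(int t, int t0, int v0, int t1, int v1) {
  if (t <= t0) return v0;                   // before the first key: hold its value
  if (t >= t1) return v1;                   // after the second key: hold its value
  float alpha = (t - t0) / float(t1 - t0);  // fraction of the way from t0 to t1
  return round(v0 + alpha * (v1 - v0));     // linear blend of the two key values
}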

Resource Tab

The Resource tab has a top section, an iconized resource viewer, and a section with a contextual properties panel. When an icon is selected from the resource viewer, that resource is called the current resource, and the contextual properties adjust to reflect the type of resource and its current state. The top part of the application (toolbar + tab selection) in the Resource tab has the following functionality:
1 - Indicator of the current tab (Resource)
2 - Start new project - the user will be asked to select a path for the new project and provide a name for it. A new folder will be created at the destination, where the application will save resources as well as the <project_name>.json file.
3 - Save project
4 - Load existing project - the user will be asked to navigate to the desired <project_name>.json file.
5 - Play show - this button will activate the state machine describing the show and execute all the scenes. It will only be enabled if there is a single start scene.
6 - Add a new resource - opens a menu with the available resource types, see below
7 - Delete a resource - enabled when a resource is selected
8 - Change window to fullscreen or part screen.

The resource viewer shows icons for all the resources in the show. Resources include: scenes, transitions, audio tracks, animation tracks, controllers, actuators, timer events, sensor events, variable events, and variables. Resources that should be connected to a device (controllers, actuators, and sensor events) show a broken puzzle piece if they are not currently connected. To make this finder helpful, the author should give resources meaningful names. Upon clicking on an icon, it will show as selected, and all other resources associated with the selected one will show small red indicators at the top left corner of their icons. The contextual properties panel changes according to the type of resource selected. Following are examples of all the panels.


Scene Resource: You can edit the scene's name. You can see all the tracks and remove or edit them (editing opens them in the Resource tab). You can see incoming and outgoing transitions and remove or edit them (editing opens them in the Resource tab).

Audio Resource: You can edit the audio track’s name and see the file path.

Animation Resource: You can edit the animation track's name and select the actuator to associate with it from a drop-down list (this can also be done by drag and drop of icons).

Transition Resource: You can change the source and destination scenes from drop-down lists (this can also be done from the Overview tab or by drag and drop of icons).


Controller Resource: You can edit the controller's name, choose its type (Pololu or Arduino), and choose its port from a drop-down of all the communication ports found (the author needs to know what kind of names they are looking for).

Actuator Resource: You can edit the actuator's name, pin number, min value, and max value, and choose the controller it will work with from the drop-down (this can also be done by drag and drop of icons). The slider on the right will send values to the actuator when "Play" is clicked. Drag the rectangle handle to change the value; drag the circle handles to change the range of these values.

Variable Resource: You can edit the variable's name, reset value, and maximum value (all variables are assumed to be integers).


Timer Event Resource: You can edit the event's name and its duration in seconds (which can be a decimal value).

Variable Event Resource: You can edit the event's name, choose the variable it is coupled with (this can also be done by drag and drop of icons), and set the trigger value (the value for which this event will be "true").

Sensor Event Resource: You can edit the event's name, pin number, whether it is digital or analog, the min and max values, and the controller it is connected to (this can be done by drag and drop of icons). When the "Play" button is clicked, the text box on the right will show the current value read from the sensor, which should help in detecting a problem (a value that is stuck or shows -1) and in finding the desired trigger range.
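The trigger range defined here corresponds to the range checks in the generated code (see Appendix B); schematically, with illustrative names and thresholds:

// Illustrative only - trigger_min and trigger_max stand for whatever range
// the author sets for this sensor event.
if (sensor_value >= trigger_min && sensor_value <= trigger_max) {
  // the sensor event fires and the transition to its destination scene is taken
}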

Code Tab

The Code tab has a top section and two code boxes. The top part of the application (toolbar + tab selection) in the Code tab has the following functionality:
1 - Indicator of the current tab (Code)
2 - Start new project - the user will be asked to select a path for the new project and provide a name for it. A new folder will be created at the destination, where the application will save resources as well as the <project_name>.json file.
3 - Save project
4 - Load existing project - the user will be asked to navigate to the desired <project_name>.json file.


5 - Play show - this button will activate the state machine describing the show and execute all the scenes. It will only be enabled if there is a single start scene.
6 - Change window to fullscreen or part screen.

The text boxes in this tab contain code generated based on the controllers' setup in Resources (the Arduino code) and based on the state machine visualized in the Overview tab (the Processing code). Examples of the code can be found in Appendix B; further discussion of it can be found in the System Design section.


Appendix B: Example of Code Generated

This appendix shows the code generated by Set&Motion for the demo show described in the Use Case chapter.

Arduino Code

// Defining public variables
int flex54_value = 0;
int flex54_pin = 1;
int touch55_value = 0;
int touch55_pin = 2;
int light53_value = 0;
int light53_pin = 0;
char in_char;
String in_string = "";
int num = 0;
int pin = 0;
boolean string_complete = false;
boolean reading_pin = true;

// Initialize port, attach servos
void setup() {
  Serial.begin(9600);
  in_string.reserve(200);
}

// The main loop
void loop() {
  serialEvent(); // a separate function that will read from the serial port
  if (string_complete) {
    in_string = "";
    num = 0;
    string_complete = false;
  }
  // read values corresponding to sensor pins
  flex54_value = analogRead(flex54_pin);
  touch55_value = analogRead(touch55_pin);
  light53_value = analogRead(light53_pin);
  delay(10);
}

// called every loop round, reads and writes to serial port, parses input
void serialEvent() {
  while (Serial.available() > 0) {
    in_char = (char)Serial.read();
    in_string += in_char;
    // parsing special characters, format is <pin:value>
    if (in_char == '<') {
      pin = 0;
      num = 0;
      reading_pin = true;
    } else if (in_char == ':') {
      reading_pin = false;
    } else if (in_char == '>') {
      string_complete = true;
    } else {
      int in_num = (reading_pin) ? pin : num;
      int digit = in_char - '0'; // convert character into the digit it represents
      in_num = in_num * 10 + digit;
      if (reading_pin) pin = in_num;
      else num = in_num;
    }
  }
  // Send sensor values
  Serial.print("<" + String(flex54_pin) + ":" + String(flex54_value) + ">");
  Serial.print("<" + String(touch55_pin) + ":" + String(touch55_value) + ">");
  Serial.print("<" + String(light53_pin) + ":" + String(light53_value) + ">");
}
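Both directions of the serial link above use the same <pin:value> message format. An illustrative exchange (the values here are made up):

  host -> Arduino:  <3:90>                  parsed into pin = 3, num = 90
  Arduino -> host:  <1:512><2:17><0:340>    flex, touch, and light readings

Note that in this particular show the parsed command values are not acted upon, since the puppet's servos are driven through the Pololu controller instead; the Arduino's role here is to stream sensor readings back to the host.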

Processing Code

This file is accompanied by the file actions.pde, which holds integer arrays with the values corresponding to the actuator waves.

import ddf.minim.*;
import processing.serial.*;

///////////////////////////////
// declaring states
///////////////////////////////
final static int START_PLAYING = 0;
final static int START_WAITING = 1;
final static int CLUES_PLAYING = 2;
final static int CLUES_WAITING = 3;
final static int PAINT_PLAYING = 4;
final static int PAINT_WAITING = 5;
final static int CHEST_PLAYING = 6;
final static int CHEST_WAITING = 7;
final static int NO_CLUE_PLAYING = 8;
final static int NO_CLUE_WAITING = 9;
final static int END_PLAYING = 10;
final static int END_WAITING = 11;
final static int ABRUPT_END_PLAYING = 12;
final static int ABRUPT_END_WAITING = 13;

///////////////////////////////
// global variables
///////////////////////////////
Minim minim;
boolean START_started_playing = false;
boolean START_started_waiting = false;
boolean CLUES_started_playing = false;
boolean CLUES_started_waiting = false;

boolean PAINT_started_playing = false;
boolean PAINT_started_waiting = false;
boolean CHEST_started_playing = false;
boolean CHEST_started_waiting = false;
boolean NO_CLUE_started_playing = false;
boolean NO_CLUE_started_waiting = false;
boolean END_started_playing = false;
boolean END_started_waiting = false;
boolean ABRUPT_END_started_playing = false;
boolean ABRUPT_END_started_waiting = false;
int clues_found = 0;
int waiting = 0;
Serial Pololu32;
Serial arduino66;
int arduino66_light53_value = 0;
int arduino66_flex54_value = 0;
int arduino66_touch55_value = 0;
// Start_mouth is declared in actions.pde
// Start_sides is declared in actions.pde
// Start_topdown is declared in actions.pde
// Clues_mouth is declared in actions.pde
// Clues_sides is declared in actions.pde
// Clues_updown is declared in actions.pde
// Paint_mouth is declared in actions.pde
// Paint_sides is declared in actions.pde
// Paint_updown is declared in actions.pde
// Chest_mouth is declared in actions.pde
// Chest_sides is declared in actions.pde
// Chest_updown is declared in actions.pde
// No_clue_mouth is declared in actions.pde
// No_clue_sides is declared in actions.pde
// No_clue_updown is declared in actions.pde
// End_mouth is declared in actions.pde
// End_sides is declared in actions.pde
// End_updown is declared in actions.pde
String Start_track2 = "/Users/asafhadari/Documents/Processing/Project1/Demo1/data/track2.wav";
String Clues_track3 = "/Users/asafhadari/Documents/Processing/Project1/Demo1/data/track3.wav";
String Paint_track4 = "/Users/asafhadari/Documents/Processing/Project1/Demo1/data/track4.wav";
String Chest_track5 = "/Users/asafhadari/Documents/Processing/Project1/Demo1/data/track5.wav";
String No_clue_track7 = "/Users/asafhadari/Documents/Processing/Project1/Demo1/data/track7.wav";
String End_track8 = "/Users/asafhadari/Documents/Processing/Project1/Demo1/data/track8.wav";
int time_ms = 0; // current time in ms
int time_elapsed = 0; // num of ms passed while in current state
int delta_t = 0; // num of ms since last loop
int current_state = START_PLAYING;

///////////////////////////////
// setup - initialize variables, start serial communication
///////////////////////////////
void setup() {
  size(400, 200);
  background(0);
  textSize(36);
  fill(255); // white text on black
  minim = new Minim(this);
  Pololu32 = new Serial(this, "/dev/tty.usbmodem00115411", 9600);
  arduino66 = new Serial(this, "/dev/tty.usbmodemfd121", 9600);

  time_ms = millis();
  START_started_playing = true;
}

///////////////////////////////
// main loop
///////////////////////////////
void draw() {
  delta_t = millis() - time_ms;
  time_ms = millis();
  arduinoRead();
  background(0);
  if (current_state == START_PLAYING) {
    text("playing Start", 20, 40);
    play_Start();
  }
  else if (current_state == START_WAITING) {
    text("wait after Start", 20, 40);
    wait_after_Start();
  }
  else if (current_state == CLUES_PLAYING) {
    text("playing Clues", 20, 40);
    play_Clues();
  }
  else if (current_state == CLUES_WAITING) {
    text("wait after Clues", 20, 40);
    wait_after_Clues();
  }
  else if (current_state == PAINT_PLAYING) {
    text("playing Paint", 20, 40);
    play_Paint();
  }
  else if (current_state == PAINT_WAITING) {
    text("wait after Paint", 20, 40);
    wait_after_Paint();
  }
  else if (current_state == CHEST_PLAYING) {
    text("playing Chest", 20, 40);
    play_Chest();
  }
  else if (current_state == CHEST_WAITING) {
    text("wait after Chest", 20, 40);
    wait_after_Chest();
  }
  else if (current_state == NO_CLUE_PLAYING) {
    text("playing No_clue", 20, 40);
    play_No_clue();
  }
  else if (current_state == NO_CLUE_WAITING) {
    text("wait after No_clue", 20, 40);
    wait_after_No_clue();
  }
  else if (current_state == END_PLAYING) {
    text("playing End", 20, 40);
    play_End();
  }
  else if (current_state == END_WAITING) {
    text("wait after End", 20, 40);
    wait_after_End();
  }

  else if (current_state == ABRUPT_END_PLAYING) {
    text("playing Abrupt_end", 20, 40);
    play_Abrupt_end();
  }
  else if (current_state == ABRUPT_END_WAITING) {
    text("wait after Abrupt_end", 20, 40);
    wait_after_Abrupt_end();
  }
  time_elapsed = time_elapsed + delta_t;
}

///////////////////////////////
// a function for each state
///////////////////////////////
void play_Start() {
  if (time_elapsed >= 16521) { // play is done - move to wait state
    current_state = START_WAITING;
    START_started_waiting = true;
    return;
  }
  if (START_started_playing) { // just started playing - initialize
    START_started_playing = false;
    time_elapsed = 0;
    // activate variables
  }
  // start audio play (based on timing)
  if ((time_elapsed <= 0) && (time_elapsed + delta_t >= 0)) {
    AudioPlayer track2 = minim.loadFile(Start_track2);
    track2.play();
  }
  // send value to actuators
  pololuWrite(Pololu32, 1, Start_mouth[time_elapsed]);
  pololuWrite(Pololu32, 2, Start_sides[time_elapsed]);
  pololuWrite(Pololu32, 3, Start_topdown[time_elapsed]);
}

void wait_after_Start() {
  if (START_started_waiting) { // just started waiting - initialize
    START_started_waiting = false;
    time_elapsed = 0;
  }
  // check the conditions for transitioning
  if ((time_elapsed >= 1000)) {
    current_state = CLUES_PLAYING;
    CLUES_started_playing = true;
    return;
  }
}

void play_Clues() {
  if (time_elapsed >= 3950) { // play is done - move to wait state
    current_state = CLUES_WAITING;
    CLUES_started_waiting = true;
    return;
  }
  if (CLUES_started_playing) { // just started playing - initialize

    CLUES_started_playing = false;
    time_elapsed = 0;
    // activate variables
    waiting++;
  }
  // start audio play (based on timing)
  if ((time_elapsed <= 0) && (time_elapsed + delta_t >= 0)) {
    AudioPlayer track3 = minim.loadFile(Clues_track3);
    track3.play();
  }
  // send value to actuators
  pololuWrite(Pololu32, 1, Clues_mouth[time_elapsed]);
  pololuWrite(Pololu32, 2, Clues_sides[time_elapsed]);
  pololuWrite(Pololu32, 3, Clues_updown[time_elapsed]);
}

void wait_after_Clues() {
  if (CLUES_started_waiting) { // just started waiting - initialize
    CLUES_started_waiting = false;
    time_elapsed = 0;
  }
  // check the conditions for transitioning
  if ((waiting == 5)) {
    current_state = ABRUPT_END_PLAYING;
    ABRUPT_END_started_playing = true;
    return;
  }
  if ((arduino66_touch55_value >= 20 && arduino66_touch55_value <= 1000)) {
    current_state = NO_CLUE_PLAYING;
    NO_CLUE_started_playing = true;
    return;
  }
  if ((arduino66_flex54_value >= 740 && arduino66_flex54_value <= 800)) {
    current_state = CHEST_PLAYING;
    CHEST_started_playing = true;
    return;
  }
  if ((arduino66_light53_value >= 250 && arduino66_light53_value <= 1000)) {
    current_state = PAINT_PLAYING;
    PAINT_started_playing = true;
    return;
  }
  if ((time_elapsed >= 10000)) {
    current_state = CLUES_PLAYING;
    CLUES_started_playing = true;
    return;
  }
}

void play_Paint() {
  if (time_elapsed >= 11652) { // play is done - move to wait state
    current_state = PAINT_WAITING;
    PAINT_started_waiting = true;
    return;
  }
  if (PAINT_started_playing) { // just started playing - initialize
    PAINT_started_playing = false;
    time_elapsed = 0;

    // activate variables
    clues_found++;
  }
  // start audio play (based on timing)
  if ((time_elapsed <= 0) && (time_elapsed + delta_t >= 0)) {
    AudioPlayer track4 = minim.loadFile(Paint_track4);
    track4.play();
  }
  // send value to actuators
  pololuWrite(Pololu32, 1, Paint_mouth[time_elapsed]);
  pololuWrite(Pololu32, 2, Paint_sides[time_elapsed]);
  pololuWrite(Pololu32, 3, Paint_updown[time_elapsed]);
}

void wait_after_Paint() {
  if (PAINT_started_waiting) { // just started waiting - initialize
    PAINT_started_waiting = false;
    time_elapsed = 0;
  }
  // check the conditions for transitioning
  if ((arduino66_touch55_value >= 20 && arduino66_touch55_value <= 1000)) {
    current_state = NO_CLUE_PLAYING;
    NO_CLUE_started_playing = true;
    return;
  }
  if ((arduino66_flex54_value >= 740 && arduino66_flex54_value <= 800)) {
    current_state = CHEST_PLAYING;
    CHEST_started_playing = true;
    return;
  }
  if ((clues_found == 3)) {
    current_state = END_PLAYING;
    END_started_playing = true;
    return;
  }
  if ((time_elapsed >= 5000)) {
    current_state = CLUES_PLAYING;
    CLUES_started_playing = true;
    return;
  }
}

void play_Chest() {
  if (time_elapsed >= 13043) { // play is done - move to wait state
    current_state = CHEST_WAITING;
    CHEST_started_waiting = true;
    return;
  }
  if (CHEST_started_playing) { // just started playing - initialize
    CHEST_started_playing = false;
    time_elapsed = 0;
    // activate variables
    clues_found++;
  }
  // start audio play (based on timing)
  if ((time_elapsed <= 0) && (time_elapsed + delta_t >= 0)) {
    AudioPlayer track5 = minim.loadFile(Chest_track5);
    track5.play();

  }
  // send value to actuators
  pololuWrite(Pololu32, 1, Chest_mouth[time_elapsed]);
  pololuWrite(Pololu32, 2, Chest_sides[time_elapsed]);
  pololuWrite(Pololu32, 3, Chest_updown[time_elapsed]);
}

void wait_after_Chest() {
  if (CHEST_started_waiting) { // just started waiting - initialize
    CHEST_started_waiting = false;
    time_elapsed = 0;
  }
  // check the conditions for transitioning
  if ((arduino66_touch55_value >= 20 && arduino66_touch55_value <= 1000)) {
    current_state = NO_CLUE_PLAYING;
    NO_CLUE_started_playing = true;
    return;
  }
  if ((arduino66_light53_value >= 250 && arduino66_light53_value <= 1000)) {
    current_state = PAINT_PLAYING;
    PAINT_started_playing = true;
    return;
  }
  if ((clues_found == 3)) {
    current_state = END_PLAYING;
    END_started_playing = true;
    return;
  }
  if ((time_elapsed >= 5000)) {
    current_state = CLUES_PLAYING;
    CLUES_started_playing = true;
    return;
  }
}

void play_No_clue() {
  if (time_elapsed >= 9985) { // play is done - move to wait state
    current_state = NO_CLUE_WAITING;
    NO_CLUE_started_waiting = true;
    return;
  }
  if (NO_CLUE_started_playing) { // just started playing - initialize
    NO_CLUE_started_playing = false;
    time_elapsed = 0;
    // activate variables
  }
  // start audio play (based on timing)
  if ((time_elapsed <= 0) && (time_elapsed + delta_t >= 0)) {
    AudioPlayer track7 = minim.loadFile(No_clue_track7);
    track7.play();
  }
  // send value to actuators
  pololuWrite(Pololu32, 1, No_clue_mouth[time_elapsed]);
  pololuWrite(Pololu32, 2, No_clue_sides[time_elapsed]);
  pololuWrite(Pololu32, 3, No_clue_updown[time_elapsed]);
}

void wait_after_No_clue() {
  if (NO_CLUE_started_waiting) { // just started waiting - initialize

    NO_CLUE_started_waiting = false;
    time_elapsed = 0;
  }
  // check the conditions for transitioning
  if ((arduino66_light53_value >= 250 && arduino66_light53_value <= 1000)) {
    current_state = PAINT_PLAYING;
    PAINT_started_playing = true;
    return;
  }
  if ((arduino66_flex54_value >= 740 && arduino66_flex54_value <= 800)) {
    current_state = CHEST_PLAYING;
    CHEST_started_playing = true;
    return;
  }
  if ((time_elapsed >= 5000)) {
    current_state = CLUES_PLAYING;
    CLUES_started_playing = true;
    return;
  }
}

void play_End() {
  if (time_elapsed >= 20217) { // play is done - move to wait state
    current_state = END_WAITING;
    END_started_waiting = true;
    return;
  }
  if (END_started_playing) { // just started playing - initialize
    END_started_playing = false;
    time_elapsed = 0;
    // activate variables
  }
  // start audio play (based on timing)
  if ((time_elapsed <= 0) && (time_elapsed + delta_t >= 0)) {
    AudioPlayer track8 = minim.loadFile(End_track8);
    track8.play();
  }
  // send value to actuators
  pololuWrite(Pololu32, 1, End_mouth[time_elapsed]);
  pololuWrite(Pololu32, 2, End_sides[time_elapsed]);
  pololuWrite(Pololu32, 3, End_updown[time_elapsed]);
}

void wait_after_End() {
  if (END_started_waiting) { // just started waiting - initialize
    END_started_waiting = false;
    time_elapsed = 0;
  }
  // check the conditions for transitioning
}

void play_Abrupt_end() {
  if (time_elapsed >= 0) { // play is done - move to wait state
    current_state = ABRUPT_END_WAITING;
    ABRUPT_END_started_waiting = true;
    return;
  }
  if (ABRUPT_END_started_playing) { // just started playing - initialize
    ABRUPT_END_started_playing = false;

    time_elapsed = 0;
    // activate variables
  }
  // start audio play (based on timing)
  // send value to actuators
}

void wait_after_Abrupt_end() {
  if (ABRUPT_END_started_waiting) { // just started waiting - initialize
    ABRUPT_END_started_waiting = false;
    time_elapsed = 0;
  }
  // check the conditions for transitioning
}

///////////////////////////////
// utility functions - communication with serial port
///////////////////////////////
void arduinoWrite(Serial port, int pin, int value) {
  // send a <pin:value> command to the Arduino
  port.write("<" + pin + ":" + value + ">");
}

void pololuWrite(Serial port, int pin, int value) {
  value = value*4; // convert to the quarter-microsecond units used by the Maestro
  int b = 132; // beginning of command (0x84, the Pololu "set target" command)
  int ch = pin;
  int lo = value&0x7f; // low 7 bits of the target
  int hi = value>>7;   // high 7 bits of the target
  port.write(char(b));
  port.write(char(ch));
  port.write(char(lo));
  port.write(char(hi));
}

void arduinoRead() {
  String in;
  if (arduino66.available()>0) {
    in = arduino66.readString();
    String[] m = match(in, "<(.*?)>");
    if (m != null) {
      for (int i = 1; i < m.length; i++) {
        int c = m[i].indexOf(":");
        if (c > -1) {
          int pin = int(m[i].substring(0,c));
          if (pin == 0) {
            arduino66_light53_value = int(m[i].substring(c+1));
          }
          if (pin == 1) {
            arduino66_flex54_value = int(m[i].substring(c+1));
          }
          if (pin == 2) {
            arduino66_touch55_value = int(m[i].substring(c+1));
          }
        }
      }
    }
  }
}