creation & computation final...
Post on 29-Aug-2019
Creation & Computation Final Project
Anne Stevens, Jessica Knox, Navaneethan Sivabalaviknarajah
Instructed by Kate Hartman & Jim Ruxton
OCAD University, December 4th, 2011
Initial Proposal

Concept

As they become more affordable, more diverse and more prevalent, cameras have come to carry a range of uses and meanings in society. One of the more traditional associations is the camera as eye; in many ways the two are similar in structure and function (see also: kino-glaz theory). Another association is with surveillance: Big Brother, the panopticon, and the closed-circuit cameras used in public and private spaces around the world are all rooted in surveillance. Sousveillance, by contrast, has become a tool of empowerment in which the camera is turned outward, watching the watchers; it is often used by activists to capture police activity during protests. Finally, the camera as documentation tool has become increasingly popular. "Lifeloggers" use cameras to document their daily existence; this information can be indexed and stored as a visual record of a person's life.

It is our intention to build on these traditions and offer a camera-based live-blogging tool that also tracks public vs. private moments throughout the day. We also hope that the visual outputs of this tool will have an aesthetic value: rather than offering a literal record, they will be abstracted to form an expressive record instead.

The basic premise is that a camera will be worn by the user, facing outwards to their environment. Photos will be taken automatically at regular intervals throughout the day. In response to sensors worn by the user, however, the frame of the photo will range from normal to only a small horizontal slice at the centre of the image, mimicking the blink of an eye. We want to build two versions of this tool: one that responds automatically to changes in activity (e.g. with accelerometers or proximity sensors) and one that is user controlled (e.g. with tension sensors attached to the user's hands), to see which set-up best supports our concept.

- On the one hand, the camera is a strong metaphor for sight and insight.
- On the other hand, surveillance technology and personal videos have made video recording ubiquitous and banal.

It is our intention to combine camera, sensor and computer technology in order to create a display that offers up our daily lives in a matter-of-fact way and, at the same time, conceals them or takes them away.
Hardware

Input
- wearable camera worn by Jenn, Nav and Anne for a period of time, such as an hour or a day. If we use a still camera, it will be for time-lapse photos
- accelerometer or other sensor attached to two parts of the body that move in distinctive ways (e.g. foot and head)
- to start with, the connection between Arduino and computer will be via USB cable
- from there, we will try to make this set-up portable (e.g. laptop in backpack)
- once we get that working, we'll try to make it wireless (e.g. XBee)
- we still need to identify how data storage will work in a wireless context (SD card or other method?)

Output
- two side-by-side projector or monitor displays (like a pair of eyes); this may change as the project progresses
- concealed laptop computer
Software

Input
- still photos or video (we'll start with still camera images)
- digital or analog sensor data
- time?

Output
- the camera images will become the background image in a Processing sketch
- the sensor data, via Arduino, will control an overlay image on top of the camera image. This overlay consists of a black screen with a full-width horizontal slit opening across the middle of the screen. To start with, the height of the slit will be controlled by the body sensor data so that it opens and closes in response to body movement. For example, a very narrow slit may represent slowness or fatigue, and a wide slit may represent a lot of activity. Once we get that working, we'll try to incorporate other input from the sensors and translate it to other filtering or aperture properties on screen. Looking at the screen image will be akin to looking out through a pair of blinking eyes.
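The slit-overlay idea above boils down to one mapping: a sensor reading sets the height of the "eye" opening, centred vertically on the screen. A minimal sketch of that logic in plain Java; the class name, the 0-1023 input range (as from an Arduino analog pin) and the screen height are assumptions for illustration, not our final settings.

```java
// Hypothetical slit-overlay logic: a sensor reading (0-1023 assumed) is
// mapped linearly to the height of the horizontal "eye" opening.
public class SlitOverlay {
    static final int SCREEN_HEIGHT = 480;

    // 0 (no movement) -> slit closed; 1023 (max movement) -> fully open
    static int slitHeight(int sensorValue) {
        int clamped = Math.max(0, Math.min(1023, sensorValue));
        return clamped * SCREEN_HEIGHT / 1023;
    }

    // Top edge of the slit, so it opens outward from the vertical centre.
    static int slitTop(int sensorValue) {
        return (SCREEN_HEIGHT - slitHeight(sensorValue)) / 2;
    }

    public static void main(String[] args) {
        System.out.println(slitHeight(0));    // 0 (closed)
        System.out.println(slitHeight(1023)); // 480 (fully open)
        System.out.println(slitTop(0));       // 240 (closed slit sits mid-screen)
    }
}
```

In the actual Processing sketch, slitTop and slitHeight would position the black overlay rectangles above and below the opening.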
Design Challenges
- what sensors and set-up will yield meaningful results?
- how do we link each image with sensor data? Will this happen in real time, or will the two data sets be merged once you're ready to view the output?
- display on screen: appearance of the display; how abstract or representational should it be? Are there any other abstractions we should use besides the "blink" to heighten the aesthetic experience? How do we accomplish this in Processing? How is the display affected by the sensor data? Do we have to average it out in order to make the on-screen movement smooth?
- starting small and then adding complexity in incremental, modular parts
- what quality of image or video will be captured? Will it be smooth, or smooth enough, or a jumpy blurry mess?
- what works best: time-lapse photography or video?
- how do we make this portable and functional? We are still unclear about how to make this truly wireless and compact
- in a nutshell, we have a lot of experimenting to do
Related Projects
- Steve Mann and his EyeTap mediated-reality technology: http://www.youtube.com/watch?v=DiFtmrpuwNY
- Lifelogging: capturing all, or a large part of, your life and uploading it to the web continuously
- Nokia LifeBlog: http://www.engadget.com/2004/03/10/nokias-lifeblog/
- Microsoft SenseCam: a digital camera you wear around your neck, which snaps up to 2,000 photos throughout the day. http://www.engadget.com/2004/03/05/microsofts-sensecam/
- using an Android phone for sensor data, incorporating it into Processing or Arduino
- the Canon camera hack that Maureen put up on the class blog: http://chdk.wikia.com/wiki/CHDK
Anticipated Behaviour

We anticipate that by tracking movement, or proximity to other objects, we will be able to capture a record of the user's experience. We hope that the framing of each picture will correspond to the user's private vs. public moments, or at least to levels of movement and/or activity. However, it is possible that we will end up with a seemingly random framing of pictures. As we alluded to previously, finding the right sensor set-up to yield meaningful outputs is one of the greatest challenges we face with this project. We would like to experiment with user-controlled sensors as well, to see if this will produce a more meaningful output. Two additional types of behaviour will also come into play:
- our behaviour: how conscious will we be of the fact that we are being recorded? Will we deliberately move a lot, change our behaviour, go to some places and avoid others, be on our best behaviour, etc.?
- the behaviour of other people who may be caught on camera: this will be affected by how conspicuous the camera is and what type of location we are in. A camera in the street is no big deal, but one in the women's change room at the Y tends to get attention.
Initial Feedback
- should the image capture be more deliberate and meaningful?
- what triggers the image capture? Can it be controlled by some aspect of our behaviour? e.g. the faster we walk, the faster the camera takes pictures
- how can we make the display more engaging/participatory/meaningful for the viewer? e.g. could they control the display?
- could the project become more of an interaction between our efforts to control what is captured on camera and the viewer's efforts to control what is displayed? This could apply to what is captured and displayed as well as how. Trying to play on this tension could suit live video capture more than display after-the-fact
- think about what form the display could take. e.g. it could be cool to make the viewer look through two peepholes at a display concealed behind them; this suggests voyeurism (which adds a layer of metaphor to our project) and is reminiscent of old-fashioned movie displays
- should it be more of a narrative than a straight diary?
Revised Concept

The concept evolved based on a number of factors, the most important being the feedback we received throughout the process. The largest criticism of our original idea was that the experience was geared towards the wearer of the camera, while the playback experience was completely passive. Based on this input, we decided to shift the concept towards a participatory experience, where users could interact with the media they were experiencing and choose moments of focus.
As we developed our project, there was a major decision we needed to make: do we create this as a physical DJ tool, or do we lean towards a more experimental aesthetic? This decision would impact the music we selected, the visualization, and ultimately the relationship between movement and sound/image modulation.
Ultimately, we decided to move towards the experimental, believing this would open up greater possibilities to the user and yield more interesting results. The final concept is to create sensors that can be used as a movement-based tool to generate sound and image art. The tool would make the art-making process accessible to more people and encourage users to see potential for expression in the world around them. This has traditions in the work of artists such as Alan Licht, who use strange objects to generate even stranger sounds. The difference with our tool is that the interface we created is very easy to use, though perhaps challenging to master.
New Feedback
- talked to Jim again; he thought the idea was interesting but suggested that we ensure the form of the interface (e.g. the sensors) is related to the content of the video
- in terms of separating video and audio, he said we should do this manually and feed each file as a separate input into Processing (but then start them simultaneously)
- in terms of having two screens (one with video and one with a tracker), he said he's never seen this before, so his advice was to create a single screen (with a wide aspect ratio) and then split it to show each component
- he believed that we should start by using filters to modify the image based on sensor data and then, if we're ambitious, move to pixelation with more controlled effects
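The filter-first approach suggested above can be sketched very simply: scale each pixel's brightness by a factor derived from a sensor reading. This plain-Java sketch is illustrative only; the grayscale pixel representation and the 0-1023 sensor range are assumptions, not part of the feedback.

```java
// Hypothetical sensor-driven brightness filter: each grayscale pixel
// (0-255) is scaled by a factor derived from a sensor value (0-1023).
public class BrightnessFilter {
    static int[] apply(int[] pixels, int sensorValue) {
        double factor = sensorValue / 1023.0; // 0 = black, 1 = unchanged
        int[] out = new int[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            out[i] = (int) Math.round(pixels[i] * factor);
        }
        return out;
    }

    public static void main(String[] args) {
        int[] row = {0, 128, 255};
        int[] dimmed = apply(row, 511); // roughly half brightness
        System.out.println(dimmed[0] + " " + dimmed[1] + " " + dimmed[2]);
    }
}
```

In Processing, the same idea would run over the pixels[] array of a PImage or video frame.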
Summary of Key Development Stages

Accelerometer
- Researching accelerometer options (range, axes, etc.).
- Capturing X-axis, Y-axis and Z-axis data from the accelerometer.

'Beads' library
- Researching different audio libraries and their options for controlling audio playback. The 'Beads' library permits audio playback speed in both directions.
- Developing code to control audio playback direction and speed as well as volume, initially using mouseX and mouseY, then using the stretch sensor.

Using the accelerometer to control audio playback
- Adapting the previous code to use accelerometer sensor data, and finessing settings and map ranges for subtle control and juicy feedback.

Create visual display
- Developing Rorschach-like visuals that can also be controlled by accelerometer sensor data, to illustrate and reinforce how the audio is being controlled and to make the experience multi-dimensional.

XBee radio
- It was important to make the equipment wireless so that the user would not feel tethered or leashed to the computer. We researched XBee radios and, in the end, used both Series 1 and Series 2 transmitter/receiver pairs.

Music experiments
- We experimented with different types of music and music combinations: from complex, layered jazz and pop, to more minimal electronic, to very minimal and abstract tones. We also experimented with combinations of bass/treble, rhythm/melody and music/spoken word.

Form
- We packaged the Arduino/accelerometer/XBee/battery sets inside colour-coded felt enclosures that could be held in the hand or worn on a Velcro strap around the wrist, ankle, shoulder, thigh, etc. for a hands-free experience.
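Much of the "finessing settings and map ranges" mentioned above comes down to one helper: Arduino's map(), which linearly rescales a value from one range to another. A minimal restatement in plain Java; the example ranges match the calibration constants in the accelerometer code below, but are otherwise illustrative.

```java
// Arduino-style map(): linearly rescales value from [inLow, inHigh]
// to [outLow, outHigh]. No clamping, just like the Arduino original.
public class MapRange {
    static double map(double value, double inLow, double inHigh,
                      double outLow, double outHigh) {
        return (value - inLow) * (outHigh - outLow) / (inHigh - inLow) + outLow;
    }

    public static void main(String[] args) {
        // e.g. raw accelerometer reading 265..402 -> tilt angle -90..90
        System.out.println(map(265, 265, 402, -90, 90));   // -90.0
        System.out.println(map(402, 265, 402, -90, 90));   // 90.0
        System.out.println(map(333.5, 265, 402, -90, 90)); // 0.0 (midpoint)
    }
}
```

Tuning the in/out ranges here is what turns jittery raw readings into "subtle control".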
Preliminary Code

Capturing accelerometer readings

We utilized this Arduino program, developed by michaelabrahamsen.com, to test the accelerometer.
/* This code reads data from the ADXL335 3-axis accelerometer */
const int groundpin = 18; // analog input pin 4 -- ground
const int powerpin = 19;  // analog input pin 5 -- voltage
const int xpin = A3;      // x-axis of the accelerometer
const int ypin = A2;      // y-axis
const int zpin = A1;      // z-axis
const String XHEADER = "X: ";
const String YHEADER = "Y: ";
const String ZHEADER = "Z: ";
const String TAB = "\t";

void setup() {
  // initialize the serial communications:
  Serial.begin(9600);
  // Power and ground the necessary pins. Providing power and ground from
  // the analog pins lets us use just the breakout board rather than
  // the normal 5V and GND pins
  pinMode(groundpin, OUTPUT);
  pinMode(powerpin, OUTPUT);
  digitalWrite(groundpin, LOW);
  digitalWrite(powerpin, HIGH);
}

void loop() {
  // print the values received from the sensors, separated by tabs
  Serial.print(XHEADER + analogRead(xpin) + TAB);
  Serial.print(YHEADER + analogRead(ypin) + TAB);
  Serial.print(ZHEADER + analogRead(zpin));
  Serial.println();
  delay(200);
}
Values when the sensor was parallel with the floor:
- x: 480, y: 480, z: 480

When I tilt the sensor away from me towards the floor (pivoting on x, moving y up and down along the z-axis):
- x: 480, y: 389, z: ranges between 380 and 420

And when I keep the sensor parallel to the floor but flick along the x-axis:
- x: quick spike at 720, then returns; y: 389, z: ranges between 380 and 420
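The "quick spike at 720" above is exactly the kind of event we later wanted to react to. One simple way to catch it is a threshold test against the resting value; this plain-Java sketch is a hypothetical illustration, and both the resting value (~480, from the readings above) and the threshold are assumptions to be tuned.

```java
// Illustrative flick detector: a sample counts as a flick when it jumps
// well past the resting reading (~480 in our bench tests).
public class FlickDetector {
    static final int REST = 480;
    static final int THRESHOLD = 150; // counts above rest; tune per sensor

    static boolean isFlick(int reading) {
        return reading - REST > THRESHOLD;
    }

    public static void main(String[] args) {
        System.out.println(isFlick(480)); // resting -> false
        System.out.println(isFlick(720)); // spike   -> true
    }
}
```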
Using the analog stretch sensor

Arduino code to use the stretch sensor to control the brightness of an LED on pin 9 and print outputs to the Serial Monitor:
int LEDpin = 9;
int potentiometerPin = 0;
int LEDvalue;
int potentiometerValue;

void setup(){
  pinMode(LEDpin, OUTPUT);
  Serial.begin(9600);
}

void loop(){
  potentiometerValue = analogRead(potentiometerPin);
  LEDvalue = map(potentiometerValue, 0, 1023, 0, 255);
  analogWrite(LEDpin, LEDvalue);
  Serial.println(potentiometerValue);
}

Arduino code to output to Processing with the stretch sensor:

int STRETCHpin = 9;
int potentiometerPin = 0;
int STRETCHvalue;

void setup(){
  pinMode(STRETCHpin, OUTPUT);
  Serial.begin(9600);
}

void loop(){
  STRETCHvalue = analogRead(potentiometerPin);
  Serial.println(STRETCHvalue);
  Serial.write(STRETCHvalue);
}
Stretch sensor interacting with video

Processing

import processing.video.*;
//import processing.serial.*;

//declare movie object
Movie movie;
//identify serial port (disabled for now, along with the serial import)
//Serial myPort;

void setup(){
  //set up size, frame rate and movie
  size(641, 481, P2D);
  frameRate(60);
  movie = new Movie(this, "Comp_Test.mov");
  //start playing movie
  movie.loop();
  // List all the available serial ports and take the first name as input into myPort
  //String portName = Serial.list()[0];
  //myPort = new Serial(this, portName, 9600);
}

void draw () {
  if (movie.available()) {
    movie.read();
  }
  //display movie
  image(movie, 0, 0, 640, 480);
  /*slow down by half if the mouse cursor is near the right-hand portion of the screen
  if(mouseX>200){
    movie.speed(0.5);
  } else {
    movie.speed(1);
  }*/
}

/*void serialEvent (Serial myPort) {
  // get the byte:
  int inByte = myPort.read();
  // print it:
  println(inByte);
}*/
Use the 'beads' library to control audio playback speed/direction and volume using mouseX and mouseY

// Sampling_03.pde
// this is a more complex sampler
// clicking somewhere on the window initiates sample playback
// moving the mouse controls the playback rate
import beads.*;

AudioContext ac;
SamplePlayer sp1;
// we can run both SamplePlayers through the same Gain
Gain sampleGain;
Glide gainValue;
Glide rateValue;

void setup() {
  size(800, 600);
  ac = new AudioContext(); // create our AudioContext
  // whenever we load a file, we need to enclose the code in a try/catch block,
  // which will inform us if the file can't be found
  try {
    // initialize the SamplePlayer
    sp1 = new SamplePlayer(ac, new Sample(sketchPath("") + "sgt_pepper.wav"));
  }
  catch(Exception e) {
    // if there is an error, show an error message (at the bottom of the processing window)
    println("Exception while attempting to load sample!");
    e.printStackTrace(); // then print a technical description of the error
    exit(); // and exit the program
  }

  // note that we want to play the sample multiple times
  sp1.setKillOnEnd(false);
  rateValue = new Glide(ac, 1, 30); // initialize our rateValue Glide object
  sp1.setRate(rateValue); // connect it to the SamplePlayer
  // as usual, we create a gain that will control the volume of our sample player
  gainValue = new Glide(ac, 0.0, 30);
  sampleGain = new Gain(ac, 1, gainValue);
  sampleGain.addInput(sp1);
  ac.out.addInput(sampleGain); // connect the Gain to the AudioContext
  ac.start(); // begin audio processing

  background(0); // set the background to black
  stroke(255);
  line(width/2, 0, width/2, height); // draw a line in the middle
  text("Click to begin playback.", 100, 100); // tell the user what to do!
  text("Move the mouse to control playback speed.", 100, 120); // tell the user what to do!
}

// although we're not drawing to the screen, we need a draw function
// in order to wait for mousePressed events
void draw() {
  float halfWidth = width / 2.0;
  gainValue.setValue((float)mouseY / (float)height); // set the gain based on the mouse Y position
  rateValue.setValue(((float)mouseX - halfWidth)/halfWidth); // set the rate based on the mouse X position
}

// this routine is called whenever a mouse button is pressed on the Processing sketch
void mousePressed() {
  // if the click is on the right half of the window, play the sound forwards
  if( mouseX > width / 2.0 ) {
    sp1.setPosition(000); // set the start position to the beginning
    sp1.start(); // play the audio file
  }
  // otherwise, play the sample backwards
  else {
    sp1.setToEnd(); // set the start position to the end of the file
    sp1.start(); // play the file in reverse (rate set in the draw routine)
  }
}
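The rate mapping in draw() above is worth spelling out: the horizontal centre of the window maps to rate 0, the right edge to +1 (forward at full speed) and the left edge to -1 (full-speed reverse). A plain-Java restatement of that one formula, assuming the sketch's 800-pixel window width:

```java
// Restates the rateValue mapping from the sketch above: mouseX relative
// to the window centre becomes a playback rate in [-1, 1].
public class RateMap {
    static float rate(float mouseX, float width) {
        float halfWidth = width / 2.0f;
        return (mouseX - halfWidth) / halfWidth;
    }

    public static void main(String[] args) {
        System.out.println(rate(800, 800)); // right edge -> 1.0 (forward)
        System.out.println(rate(400, 800)); // centre     -> 0.0 (paused)
        System.out.println(rate(0,   800)); // left edge  -> -1.0 (reverse)
    }
}
```

The same mapping is reused later with accelerometer values in place of mouseX.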
Accelerometer reading filtered to give workable values

//Analog read pins
const int groundpin = 18; // analog input pin 4 -- ground
const int powerpin = 19; // analog input pin 5 -- voltage
const int xPin = A3; // x-axis of the accelerometer
const int yPin = A2; // y-axis
const int zPin = A1; // z-axis

//The minimum and maximum values that came from
//the accelerometer while standing still
//You very well may need to change these
int minVal = 265;
int maxVal = 402;

//to hold the calculated values
double x;
double y;
double z;

void setup(){
  Serial.begin(9600);
  pinMode(groundpin, OUTPUT);
  pinMode(powerpin, OUTPUT);
  digitalWrite(groundpin, LOW);
  digitalWrite(powerpin, HIGH);
}

void loop(){
  //read the analog values from the accelerometer
  int xRead = analogRead(xPin);
  int yRead = analogRead(yPin);
  int zRead = analogRead(zPin);

  //convert read values to degrees -90 to 90 - needed for atan2
  int xAng = map(xRead, minVal, maxVal, -90, 90);
  int yAng = map(yRead, minVal, maxVal, -90, 90);
  int zAng = map(zRead, minVal, maxVal, -90, 90);

  //Calculate 360deg values like so: atan2(-yAng, -zAng)
  //atan2 outputs a value from -π to π (radians),
  //which we then convert to degrees
  x = RAD_TO_DEG * (atan2(-yAng, -zAng) + PI);
  y = RAD_TO_DEG * (atan2(-xAng, -zAng) + PI);
  z = RAD_TO_DEG * (atan2(-yAng, -xAng) + PI);

  //Output the calculations
  Serial.print("x: ");
  Serial.print(x);
  Serial.print(" | y: ");
  Serial.print(y);
  Serial.print(" | z: ");
  Serial.println(z);

  delay(100); //just here to slow down the serial output - easier to read
}
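The atan2-based conversion in the sketch above can be sanity-checked off-device. This plain-Java restatement uses the same formula (angle of the mapped axis values, shifted from the -π..π range of atan2 into 0..360 degrees); the specific test inputs are illustrative assumptions.

```java
// Mirrors the sketch's tilt formula for the x output:
// x = degrees(atan2(-yAng, -zAng) + PI), giving 0..360 degrees.
public class Tilt {
    static double tiltX(double yAng, double zAng) {
        return Math.toDegrees(Math.atan2(-yAng, -zAng) + Math.PI);
    }

    public static void main(String[] args) {
        System.out.println(tiltX(90, 90)); // 45 degrees
        System.out.println(tiltX(90, 0));  // 90 degrees
    }
}
```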
Accelerometer control to manipulate music

Arduino

//Analog read pins
const int groundpin = 18; // analog input pin 4 -- ground
const int powerpin = 19; // analog input pin 5 -- voltage
const int xPin = A3; // x-axis of the accelerometer
const int yPin = A2; // y-axis
const int zPin = A1; // z-axis

//The minimum and maximum values that came from
//the accelerometer while standing still
//You very well may need to change these
int minVal = 265;
int maxVal = 402;

//to hold the calculated values
float x;
float y;
float z;

void setup(){
  Serial.begin(9600);
  pinMode(groundpin, OUTPUT);
  pinMode(powerpin, OUTPUT);
  digitalWrite(groundpin, LOW);
  digitalWrite(powerpin, HIGH);
}

void loop(){
  //read the analog values from the accelerometer
  int xRead = analogRead(xPin);
  int yRead = analogRead(yPin);
  int zRead = analogRead(zPin);

  //convert read values to degrees -90 to 90 - needed for atan2
  int xAng = map(xRead, minVal, maxVal, -90, 90);
  int yAng = map(yRead, minVal, maxVal, -90, 90);
  int zAng = map(zRead, minVal, maxVal, -90, 90);

  //Calculate 360deg values like so: atan2(-yAng, -zAng)
  //atan2 outputs a value from -π to π (radians),
  //which we then convert to degrees
  x = RAD_TO_DEG * (atan2(-yAng, -zAng) + PI);
  y = RAD_TO_DEG * (atan2(-xAng, -zAng) + PI);
  z = RAD_TO_DEG * (atan2(-yAng, -xAng) + PI);

  //Output the calculations as comma-separated values
  //Serial.print("x: ");
  Serial.print(x);
  Serial.print(",");
  Serial.print(y);
  Serial.print(",");
  Serial.println(z);

  delay(100); //just here to slow down the serial output - easier to read
}

Accelerometer control to manipulate music

Processing

// Sampling_03.pde
// this is a more complex sampler
// clicking on the window initiates sample playback
// accelerometer data controls the playback rate
import processing.serial.*;
//beads is a realtime audio library for computer music and sonic art in
//realtime performance, installation artworks, web content and more
import beads.*;

Serial port;
float x;
float y;
float z;
//double is a float type with a greater range than float
double val;

AudioContext ac;
SamplePlayer sp1;
// we can run both SamplePlayers through the same Gain
Gain sampleGain;
Glide gainValue;
Glide rateValue;

void setup() {
  size(800, 600);
  port = new Serial(this, Serial.list()[0], 9600);
  ac = new AudioContext(); // create our AudioContext
  // whenever we load a file, we need to enclose the code in a try/catch block,
  // which will inform us if the file can't be found
  try {
    // initialize the SamplePlayer
    sp1 = new SamplePlayer(ac, new Sample(sketchPath("") + "metronomy.wav"));
  }
  catch(Exception e) {
    // if there is an error, show an error message (at the bottom of the processing window)
    println("Exception while attempting to load sample!");
    e.printStackTrace(); // then print a technical description of the error
    exit(); // and exit the program
  }

  // note that we want to play the sample multiple times
  sp1.setKillOnEnd(false);
  rateValue = new Glide(ac, 1, 30); // initialize our rateValue Glide object
  sp1.setRate(rateValue); // connect it to the SamplePlayer
  // as usual, we create a gain that will control the volume of our sample player
  gainValue = new Glide(ac, 0.0, 30);
  sampleGain = new Gain(ac, 1, gainValue);
  sampleGain.addInput(sp1);
  ac.out.addInput(sampleGain); // connect the Gain to the AudioContext
  ac.start(); // begin audio processing

  background(0); // set the background to black
  stroke(255);
  line(width/2, 0, width/2, height); // draw a line in the middle
  text("Click to begin playback.", 100, 100); // tell the user what to do!
  text("Move the mouse to control playback speed.", 100, 120); // tell the user what to do!
}

void serialEvent(Serial port) {
  String inString = port.readStringUntil('\n');
  if (inString != null) {
    // trim off any whitespace:
    inString = trim(inString);
    // split the string on the commas and convert the
    // resulting substrings into a float array:
    float[] controls = float(split(inString, ","));
    x = map(controls[0], 0, 20, 0, 500);
    y = map(controls[1], 0, 20, 0, 500);
    z = map(controls[2], 0, 20, 0, 500);
    println( "Raw Input :" +x);
  }
}

// although we're not drawing to the screen, we need a draw function
// in order to wait for mousePressed events
void draw() {
  float halfWidth = width / 2.0;
  gainValue.setValue((float)(x)/ (float)height); // set the gain based on the sensor x value
  rateValue.setValue(((float)(y) - halfWidth)/halfWidth); // set the rate based on the sensor y value
}

// this routine is called whenever a mouse button is pressed on the Processing sketch
void mousePressed() {
  // if the click is on the right half of the window, play the sound forwards
  if( mouseX > width / 2.0 ) {
    sp1.setPosition(000); // set the start position to the beginning
    sp1.start(); // play the audio file
  }
  // otherwise, play the sample backwards
  else {
    sp1.setToEnd(); // set the start position to the end of the file
    sp1.start(); // play the file in reverse (rate set in the draw routine)
  }
}
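The serialEvent() handler above hinges on splitting the "x,y,z" line the Arduino prints into three floats. A minimal restatement of that trim/split/parse step in plain Java (delimiter and field order as in our sketches; the sample line is illustrative):

```java
// Parses one comma-separated serial line ("x,y,z") into three floats,
// mirroring the trim/split logic of the Processing serialEvent() handler.
public class SerialLine {
    static float[] parse(String line) {
        String[] parts = line.trim().split(",");
        float[] out = new float[3];
        for (int i = 0; i < 3; i++) {
            out[i] = Float.parseFloat(parts[i].trim());
        }
        return out;
    }

    public static void main(String[] args) {
        float[] v = parse("12.50,3.00,270.25\n");
        System.out.println(v[0] + " " + v[1] + " " + v[2]); // 12.5 3.0 270.25
    }
}
```

Processing's float(split(...)) does the same conversion in one call; spelling it out makes the failure mode (a malformed line) easier to see.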
Visualization

Processing code

//visualization
//ellipse in the middle: http://www.openprocessing.org/visuals/?visualID=43718
//dial: http://www.openprocessing.org/visuals/?visualID=35445

void setup () {
  size(800, 600);
  background(0); // set the background to black
  stroke(255);
  frameRate (300);
}

void draw () {
  smooth ();

  //outer fade
  for (int i=600; i >= 0; i = i-5) {
    noStroke();
    fill(600 - i, 0, 255 - i);
    ellipse(width/2, height/2, i, i);
  }

  //if mouse Y is in the upper half, the image will pulse
  if (mouseY < height/2) {
    //for (int i=350; i< 450; i++)
    fill(0);
    float r = random(-50, 50);
    ellipse (width/2, height/2, 400 + r, 400 + r);
  }
  else {
    ellipse (width/2, height/2, 300, 300);
  }

  //bigger, outer circles
  for (int o=375; o <= 425; o = o + 8) {
    smooth();
    noFill();
    stroke(200);
    ellipse(width/2, height/2, o, o);
  }

  //inner faded series of circles
  for (int i=350; i >= 0; i = i-5) {
    //create the circle pattern in the middle of the screen
    noStroke();
    fill(350 - i);
    ellipse(width/2, height/2, i, i);
  }

  /*cursPosX = 350;
  cursPosY = 350;
  fill (200, 40, 80);
  ellipse (cursPosX, cursPosY, 20, 20);
  float angle = atan2(mouseY-y, mouseX-x);
  fill(255,255,255,200);*/

  //this is the dial in the middle
  float angle = atan2(mouseY-height/2, mouseX-width/2);
  fill(255,255,255,200);
  pushMatrix();
  translate(width/2, height/2);
  rotate( angle );
  stroke(0);
  strokeWeight(10);
  rect(0, 0, 150, 2);
  popMatrix();
  noStroke();
  fill(0);
  ellipse (width/2, height/2, 20, 20);
  strokeWeight(1);
}
Simple visualization with serial

// Sampling_03.pde
// this is a more complex sampler
// clicking on the window initiates sample playback
// sensor data controls the playback rate
import processing.serial.*;
//beads is a realtime audio library for computer music and sonic art in
//realtime performance, installation artworks, web content and more
import beads.*;

Serial port;
float x;
float y;
float z;
//double is a float type with a greater range than float
double val;
int graphXPos;

AudioContext ac;
SamplePlayer sp1;
// we can run both SamplePlayers through the same Gain
Gain sampleGain;
Glide gainValue;
Glide rateValue;

void setup() {
  size(800, 600);
  port = new Serial(this, Serial.list()[0], 9600);
  ac = new AudioContext(); // create our AudioContext
  // whenever we load a file, we need to enclose the code in a try/catch block,
  // which will inform us if the file can't be found
  try {
    // initialize the SamplePlayer
    sp1 = new SamplePlayer(ac, new Sample(sketchPath("") + "japenese.wav"));
  }
  catch(Exception e) {
    // if there is an error, show an error message (at the bottom of the processing window)
    println("Exception while attempting to load sample!");
    e.printStackTrace(); // then print a technical description of the error
    exit(); // and exit the program
  }

  // note that we want to play the sample multiple times
  sp1.setKillOnEnd(false);
  rateValue = new Glide(ac, 1, 30); // initialize our rateValue Glide object
  sp1.setRate(rateValue); // connect it to the SamplePlayer
  // as usual, we create a gain that will control the volume of our sample player
  gainValue = new Glide(ac, 0.0, 30);
  sampleGain = new Gain(ac, 1, gainValue);
  sampleGain.addInput(sp1);
  ac.out.addInput(sampleGain); // connect the Gain to the AudioContext
  ac.start(); // begin audio processing

  background(255); // set the background to white
  stroke(255);
  // line(width/2, 0, width/2, height); // draw a line in the middle
  //text("Click to begin playback.", 100, 100); // tell the user what to do!
  //text("Move the mouse to control playback speed.", 100, 120); // tell the user what to do!
}

void serialEvent(Serial port) {
  String inString = port.readStringUntil('\n');
  if (inString != null) {
    // trim off any whitespace:
    inString = trim(inString);
    // split the string on the commas and convert the
    // resulting substrings into a float array:
    float[] controls = float(split(inString, ","));
    x = map(controls[0], 0, 20, 0, 500);
    y = map(controls[1], 0, 20, 0, 500);
    z = map(controls[2], 0, 20, 0, 500);
    println( "Raw Input :" +x +", " +y);
    //float yPos = 10 - x/12;
    // draw the line in a pretty color:
    stroke(40, 150, 75);
    smooth();
    strokeWeight(2);
    /*line(graphXPos, height, graphXPos, height);
    // at the edge of the screen, go back to the beginning:
    if (graphXPos >= width) {
      graphXPos = 0;
      // clear the screen by resetting the background:
      background(255);
    } else {
      // increment the horizontal position for the next reading:
      graphXPos = graphXPos + 5;
    }*/
    //float a = random (0, 255);
    //float b = random (0, 255);
    //float c = random (0, 255);
    if (x>1600) {
      ellipse (width/2, height/2, x/6, y/6);
    }
    else {
      fill (255);
      ellipse (width/2, height/2, x/6, y/6);
    }
  }
}

// although we're not drawing to the screen in draw(), we need it
// in order to wait for mousePressed events
void draw() {
  float halfWidth = width / 2.0;
  gainValue.setValue((float)(x)/ (float)height); // set the gain based on the sensor x value
  rateValue.setValue(((float)(y) - halfWidth)/halfWidth); // set the rate based on the sensor y value
}

// this routine is called whenever a mouse button is pressed on the Processing sketch
void mousePressed() {
  // if the click is on the right half of the window, play the sound forwards
  if( mouseX > width / 2.0 ) {
    sp1.setPosition(000); // set the start position to the beginning
    sp1.start(); // play the audio file
  }
  // otherwise, play the sample backwards
  else {
    sp1.setToEnd(); // set the start position to the end of the file
    sp1.start(); // play the file in reverse (rate set in the draw routine)
  }
}

Rorschach visualization with serial

//http://www.openprocessing.org/visuals/?visualID=9112
// Sampling_03.pde
// this is a more complex sampler
// clicking on the window initiates sample playback
// sensor data controls the playback rate
import processing.serial.*;
//beads is a realtime audio library for computer music and sonic art in
//realtime performance, installation artworks, web content and more
import beads.*;

Serial port;
float x;
float y;
float z;
float a;
float b;
//double is a float type with a greater range than float
double val;
int graphXPos;

AudioContext ac;
SamplePlayer sp1;
// we can run both SamplePlayers through the same Gain
Gain sampleGain;
Glide gainValue;
Glide rateValue;

void setup() {
  size(800, 600);
  port = new Serial(this, Serial.list()[0], 9600);
  ac = new AudioContext(); // create our AudioContext
  // whenever we load a file, we need to enclose the code in a try/catch block,
  // which will inform us if the file can't be found
  try {
    // initialize the SamplePlayer
    sp1 = new SamplePlayer(ac, new Sample(sketchPath("") + "japenese.wav"));
  }
  catch(Exception e) {
    // if there is an error, show an error message (at the bottom of the processing window)
    println("Exception while attempting to load sample!");
    e.printStackTrace(); // then print a technical description of the error
    exit(); // and exit the program
  }

  // note that we want to play the sample multiple times
  sp1.setKillOnEnd(false);
  rateValue = new Glide(ac, 1, 30); // initialize our rateValue Glide object
  sp1.setRate(rateValue); // connect it to the SamplePlayer
  // as usual, we create a gain that will control the volume of our sample player
  gainValue = new Glide(ac, 0.0, 30);
  sampleGain = new Gain(ac, 1, gainValue);
  sampleGain.addInput(sp1);
  ac.out.addInput(sampleGain); // connect the Gain to the AudioContext
  ac.start(); // begin audio processing

  background(255); // set the background to white
  stroke(255);
  //line(width/2, 0, width/2, height); // draw a line in the middle
  //text("Click to begin playback.", 100, 100); // tell the user what to do!
  //text("Move the mouse to control playback speed.", 100, 120); // tell the user what to do!
}

void serialEvent(Serial port) {
  String inString = port.readStringUntil('\n');
  if (inString != null) {
    // trim off any whitespace:
    inString = trim(inString);
    // split the string on the commas and convert the
    // resulting substrings into a float array:
    float[] controls = float(split(inString, ","));
    x = map(controls[0], 0, 20, 0, 500);
    y = map(controls[1], 0, 20, 0, 500);
    z = map(controls[2], 0, 20, 0, 500);
    println( "Raw Input :" +x +", " +y);
    float yPos = 10 - x/12;
    // draw the line in a pretty color:
    stroke(0);
    strokeWeight(3);
    line(graphXPos, height, graphXPos, height);
    // at the edge of the screen, go back to the beginning:
    if (graphXPos >= width) {
      graphXPos = 0;
      // clear the screen by resetting the background:
      background(255);
    } else {
      // increment the horizontal position for the next reading:
      graphXPos = graphXPos + 5;
    }

    //Rorschach
    float r = random (width/4, width/2);
    //if the x value is very high, create a very large circle
    if (x>1500) {
      float var = x/5;
      float hop = random (-1, 1);
      fill(0, random(0,30));
      noStroke();
      ellipse(width/2 +r, height/2-(var-75)*hop, var, var);
      ellipse(width/2 -r, height/2-(var-75)*hop, var, var);
    }
    //otherwise, smaller circles closer to the middle
    else {
      float var = x/4-150;
      float rayon = random(50);
      float hop = random (-1, 1);
      fill(0, random(0,30));
      noStroke();
      ellipse(width/2 +var, height/2-var*hop, rayon, rayon);
      ellipse(width/2 -var, height/2-var*hop, rayon, rayon);
    }
  }
}

void draw() {
  float halfWidth = width / 2.0;
  gainValue.setValue((float)(x)/ (float)height); // set the gain based on the sensor x value
  rateValue.setValue(((float)(y) - halfWidth)/halfWidth); // set the rate based on the sensor y value
}

// this routine is called whenever a mouse button is pressed on the Processing sketch
void mousePressed() {
  // if the click is on the right half of the window, play the sound forwards
  if( mouseX > width / 2.0 ) {
    sp1.setPosition(000); // set the start position to the beginning
    sp1.start(); // play the audio file
  }
  // otherwise, play the sample backwards
  else {
    sp1.setToEnd(); // set the start position to the end of the file
    sp1.start(); // play the file in reverse (rate set in the draw routine)
  }
}
Final Code
Arduino
const int groundpin = 18; // analog input pin 4 -- ground
const int powerpin = 19; // analog input pin 5 -- voltage
const int xPin = A3; // x-axis of the accelerometer
const int yPin = A2; // y-axis
const int zPin = A1; // z-axis
//The minimum and maximum values that came from
//the accelerometer while standing still
//You very well may need to change these
int minVal = 265;
int maxVal = 350;
//to hold the calculated values
float x;
float y;
float z;
void setup(){
Serial.begin(9600);
pinMode(groundpin, OUTPUT);
pinMode(powerpin, OUTPUT);
digitalWrite(groundpin, LOW);
digitalWrite(powerpin, HIGH);
}
void loop(){
//read the analog values from the accelerometer
int xRead = analogRead(xPin);
int yRead = analogRead(yPin);
int zRead = analogRead(zPin);
//convert read values to degrees -90 to 90 - needed for atan2
int xAng = map(xRead, minVal, maxVal, -90, 90);
int yAng = map(yRead, minVal, maxVal, -90, 90);
int zAng = map(zRead, minVal, maxVal, -90, 90);
//Calculate 0-360deg values like so: atan2(-yAng, -zAng)
//atan2 outputs the value of -π to π (radians)
//We are then converting the radians to degrees
x = RAD_TO_DEG * (atan2(-yAng, -zAng) + PI);
y = RAD_TO_DEG * (atan2(-xAng, -zAng) + PI);
z = RAD_TO_DEG * (atan2(-yAng, -xAng) + PI);
//Output the calculations
//Serial.print("x: ");
Serial.print(x);
Serial.print(",");
Serial.print(y);
Serial.print(",");
Serial.println(z);
delay(100);//just here to slow down the serial output - Easier to read
}
Processing
// this is a more complex sampler
// clicking somewhere on the window initiates sample playback
//libraries
import processing.serial.*;
import beads.*;
//global variables
Serial port;
float x;
float y;
float z;
float a;
float b;
int graphXPos;
AudioContext ac;
SamplePlayer sp1;
// we can run both SamplePlayers through the same Gain
Gain sampleGain;
Glide gainValue;
Glide rateValue;
void setup() {
size(800, 600);
port = new Serial(this, Serial.list()[0], 9600);
ac = new AudioContext(); // create our AudioContext
// whenever we load a file, we need to enclose the code in a Try/Catch block
// Try/Catch blocks will inform us if the file can't be found
try {
// initialize the SamplePlayer
sp1 = new SamplePlayer(ac, new Sample(sketchPath("") + "interlude.wav"));
}
catch(Exception e) {
// if there is an error, show an error message (at the bottom of the processing window)
println("Exception while attempting to load sample!");
e.printStackTrace(); // then print a technical description of the error
exit(); // and exit the program
}
// note that we want to play the sample multiple times
sp1.setKillOnEnd(false);
rateValue = new Glide(ac, 1, 30); // initialize our rateValue Glide object
sp1.setRate(rateValue); // connect it to the SamplePlayer
// as usual, we create a gain that will control the volume of our sample player
gainValue = new Glide(ac, 0.0, 30);
sampleGain = new Gain(ac, 1, gainValue);
sampleGain.addInput(sp1);
ac.out.addInput(sampleGain); // connect the Gain to the AudioContext
ac.start(); // begin audio processing
}
// serialEvent is called automatically whenever new data arrives on the serial port
void serialEvent(Serial port) {
String inString = port.readStringUntil('\n');
if (inString != null) {
// trim off any whitespace:
inString = trim(inString);
// split the string on the commas and convert the
// resulting substrings into a float array:
float[] controls = float(split(inString, ","));
x = map(controls[0], 0, 20, 0, 500);
y = map(controls[1], 0, 20, 0, 400);
z = map(controls[2], 0, 20, 0, 500);
println( "Raw Input :" +z);
// Rorschach-style inkblots
float r= random (width/4, width/2);
//if the x value is very high, create a very large circle
if (x>1600) {
float var = x/5;
float hop = random (-1, 1);
fill(random(x/100),random(y/10), random(z/10), random(0,30));
noStroke();
ellipse(width/2 +r, height/2-(var-75)*hop,var,var);
ellipse(width/2 -r, height/2-(var-75)*hop,var,var);
}
//otherwise, smaller circles closer to the middle
else {
float var = x/4-200;
float rayon = random(50);
float hop = random (-1, 1);
fill(random(x/100),random(y/10), random(z/10), random(0,30));
noStroke();
ellipse(width/2 +var, height/2-var*hop,rayon,rayon);
ellipse(width/2 -var, height/2-var*hop,rayon,rayon);
}
}
}
void draw() {
float halfWidth = width / 2.0;
gainValue.setValue((float)(x) / (float)height); // set the gain from the accelerometer's x value
rateValue.setValue(((float)(y) - halfWidth) / width); // set the playback rate from the accelerometer's y value
}
// this routine is called whenever a mouse button is pressed on the Processing sketch
void mousePressed() {
// if the mouse is on the right half of the screen, play the sound forward
if (mouseX > width / 2.0) {
sp1.setPosition(0); // set the start position to the beginning
sp1.start(); // play the audio file
}
// otherwise, play the sample backwards
else {
sp1.setToEnd(); // set the start position to the end of the file
sp1.start(); // play the file in reverse (rate set in the draw routine)
}
}
Artefacts
Initial accelerometer test video: http://www.youtube.com/watch?v=QLbYEUEuhQg
Final accelerometer test videos:
http://www.youtube.com/watch?v=OkAv5EX4_n4&feature=youtu.be
http://www.youtube.com/watch?v=4QVZ_wOVJas&feature=youtu.be
http://www.youtube.com/watch?v=sOWAGjQpIS8&feature=youtu.be
Final design images
Equipment required for each handheld tool.
Transmitter: Arduino Uno, Xbee shield, Xbee radio (transmitter), accelerometer and battery
Xbee radio receiver (top) and Xbee Explorer (bottom) connected to serial port of laptop computer
Xbee radio receiver mounted on Xbee Explorer
Assembled components
Assembled components
Transmitter assembly fits inside felt enclosure
Transmitter assembly and felt enclosure
Enclosure fits easily in the hand. Handheld with blue buttons controls blue visuals, handheld with yellow buttons controls yellow visuals.
Twin screens in the exhibition space
Transmitter can be worn on the wrist, ankle, shoulder, etc. using a Velcro strap
Controlling audio playback and painting on the yellow screen using one wireless handheld transmitter
Controlling audio playback and painting on the blue screen with the other wireless handheld unit
Future Directions
Some next steps we would like to take are to significantly refine the response to
sensor movement, making it richer and even more intuitive. We'd also like to
experiment with the sounds, visuals and sensor casing to more clearly communicate
the concept of an experimental sound and image manipulation tool. Finally, we'd
love the opportunity to conduct some user testing: we'd like to see how musicians
and non-musicians use the tool, and which interactions they find most gratifying
and interesting.