
Designing Interactive Audiovisual Systems for Improvising Ensembles

William Hsu

Department of Computer Science, San Francisco State University

San Francisco, CA 94132, USA

whsu@sfsu.edu

Abstract. Since 2009, I have been working on real-time audiovisual systems, used in many performances with improvising musicians. Real-time audio from a performance is analysed; the audio descriptors (along with data from gestural controllers) influence abstract animations that are based on generative systems and physics-based simulations. The musicians are in turn influenced by the visuals, essentially in a feedback loop. I will discuss technical and aesthetic design considerations for such systems and their integration into the practices of improvising ensembles, and share some experiences from musicians and audience members.

Keywords: Interaction design, improvisation, audio descriptors, generative systems, physics-based simulations

Introduction

For years I have been involved in free improvisation with both acoustic instruments and live electronics. As a performer, software designer and listener, I have been very interested in the associations made by myself (and others) between sonic events/gestures and visual/physical phenomena. I am attracted to visual systems that exhibit the oppositions and tensions that I enjoy in improvised music.

Since 2009, I have been working on real-time audiovisual systems for non-idiomatic free improvisation. I have used these systems in over 50 performances with improvising musicians, including Chris Burns, John Butcher, James Fei, Gino Robair, Birgit Ulher, and many others; many such events were evening-long concerts involving up to six different systems/pieces. Performances have been hosted at a variety of venues, such as ZKM (Karlsruhe), STEIM (Amsterdam), CNMAT (Berkeley), the Songlines series at Mills College, the San Francisco Electronic Music Festival, and the NIME and SMC conferences.

Figure 1 shows a typical block diagram of my systems. In performance, two or more channels of audio enter the system from microphones or the venue's sound system. The audio is analyzed by a Max/MSP patch that extracts estimates of loudness and tempo, and some timbral features. Audio descriptors are sent via Open Sound Control messages to an animation environment, usually implemented in Processing or OpenFrameworks. These interactive animations are influenced by the real-time audio descriptors from the musicians' performance, and by physical gestures from controllers.
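To make the descriptor pipeline concrete, here is a minimal sketch of encoding an audio descriptor as an Open Sound Control message. This is my own illustration in Python, not the author's Max/MSP patch; the `/loudness` address and the values are assumptions for the example.

```python
import struct

def osc_pad(data):
    # OSC strings are null-terminated and padded to a 4-byte boundary
    return data + b"\x00" * (4 - len(data) % 4)

def osc_message(address, *values):
    """Encode a minimal OSC message whose arguments are all float32."""
    msg = osc_pad(address.encode("ascii"))
    msg += osc_pad(("," + "f" * len(values)).encode("ascii"))  # type tag string
    for v in values:
        msg += struct.pack(">f", v)  # big-endian float32, per the OSC spec
    return msg

# A descriptor packet could then be sent over UDP to the animation host, e.g.:
#   socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(
#       osc_message("/loudness", 0.42), ("127.0.0.1", 9000))
```

In practice a library such as python-osc (or Max's built-in `udpsend`) would handle this encoding; the point is only that each descriptor is a small, regularly timed network message that the animation process can consume independently of the audio analysis.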

Video generated by the animation environments is usually projected behind or above the musicians. Figure 2 shows a typical stage setup, from a set by EKG (Kyle Bruckmann and Ernst Karel) at the 2013 San Francisco Electronic Music Festival, with video generated by my system Tes. The animations are visible to the musicians and influence their performance, thus forming a feedback loop. Each system has both components that I control, and autonomous components with their own behavioural rules. Performing with one of my systems involves ongoing negotiation between the controllable components and the autonomous modules.

Figure 1. Block Diagram of Interactive Audiovisual Performance System

Design Goals and Practical Considerations

These are the initial goals for my systems:

• Each system will be primarily used in the context of abstract free improvisation.

• There will be minimal use of looping/pre-sequenced materials; each system will behave like a component in an improvising ensemble.

• Each system will be a "playable" visual instrument; system behaviour can be influenced by physical controllers.

• Each system has autonomous components that guide its behaviour; this behaviour may be influenced by real-time audio.

• Each system should evoke the tactile, nuanced and timbrally rich gestures common in free improvisation.

• There should be cross-references between audio and visual events. However, overly obvious mappings should be avoided.

Figure 2. Performance by EKG with video by Bill Hsu, 2013 San Francisco Electronic Music Festival. Photo: PeterBKaars.com

Non-idiomatic free improvisation is a social music-making practice, producing open sonic conversations that tend not to be easily interpreted or analysed in non-musical terms. Gaver et al. (2003) discuss the role of ambiguity in interactive design; some of their comments are useful for understanding the practice and experience of free improvisation, and how one might approach audiovisual design for concerts of improvised music. For example, the authors could easily have been referring to sonic gestures in improvisation, which "may give rise to multiple interpretations depending on their precision, consistency, and accuracy on the one hand, and the identity, motivations, and expectations of an interpreter on the other." Another comment would find resonance with many fans of improvised music: "…the work of making an ambiguous situation comprehensible belongs to the person, and this can be both inherently pleasurable and lead to a deep conceptual appropriation of the artefact."

To preserve the essential openness and ambiguity of a free improvisation, I feel that the visuals should not over-determine the narrative of the performance. Hence, most of my work utilizes generative abstract visual components that exhibit a range of behaviours. My preference is for unstable, evolving forms that facilitate setting up tensions between abstract and referential elements, for a richer visual experience. For example, a particle swarm or smoke simulation may move in complex, pseudo-random configurations, or self-organize into recognizable structures. These transitions evoke Friedrich Hayek's concept of "sensory order", wherein observers organize raw chaotic stimuli into perceptually recognizable objects (Hayek 1999).

Each of my systems is primarily based on a single complex process, usually a generative system or a physics-based simulation. The movement and evolution of the visual components in each system follow the rules of the underlying process. Audio descriptors from the real-time performance audio affect mostly high-level parameters of the base process; they do not map directly to low-level details of the visual components. From my experiments and observations, allowing the underlying process to determine the low-level details of the visuals results in more "organic" and aesthetically consistent movements and transitions.

Each system receives several input streams, representing events occurring at (often) widely different rates. For example, an audio descriptor representing a well-defined percussive onset may occur extremely infrequently, while one that represents loudness or a continuous timbral characteristic may be presented at regular intervals of (say) 100-200 milliseconds. In addition, events in the interactive animation subsystem also have a range of temporal distributions. A rapid-onset visual event, such as the sudden "birth" of a small cluster of tiny objects that expands into visibility, may take a few hundred milliseconds from the initial trigger to its final, relatively stable state. On the other hand, a broad sweeping gesture, representing (for example) a tidal flow in a fluid system, may take several seconds to complete, with its effects being visible long after the gesture initiation. Hence, care must be taken with each system to manage each event type based on typical rates of occurrence. For example, it may be intuitive for a percussive audio onset event to trigger the formation of a small object cluster in the animation. However, if the live performer is a busy percussionist, the visual environment may quickly become cluttered and overwhelmed with objects. A certain amount of thresholding is always necessary for managing the rates of automatic events; the system operator should also have the option to adjust the responsiveness of the animation system to selected event types.
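The thresholding and rate management described above could be sketched as follows. This is my own illustrative Python, not the logic of any of the author's systems; the loudness threshold and minimum interval are arbitrary placeholder values of the kind a system operator would adjust.

```python
class OnsetGate:
    """Pass a percussive onset through only if its loudness exceeds a
    threshold AND it arrives no sooner than min_interval seconds after
    the previously accepted onset. Both knobs are operator-adjustable."""

    def __init__(self, threshold=0.3, min_interval=0.25):
        self.threshold = threshold        # minimum loudness (hypothetical 0..1 scale)
        self.min_interval = min_interval  # seconds between accepted onsets
        self._last_accepted = float("-inf")

    def accept(self, loudness, now):
        if loudness < self.threshold:
            return False  # too quiet: ignore
        if now - self._last_accepted < self.min_interval:
            return False  # too soon after the last accepted onset
        self._last_accepted = now
        return True
```

With a gate like this in front of the cluster-spawning trigger, even a busy percussionist spawns at most one object cluster per interval, instead of flooding the scene.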

There is a core of experienced musical collaborators who have worked with my systems regularly. When working with regulars, it is easy for us to quickly converge on a "set list" of pieces for an evening's concert. A brief minute or so with each piece before the performance is usually sufficient for musicians to re-familiarize themselves with the system. I also collaborate with musicians who have never worked with audiovisual performance software, with minimal time for rehearsals before an evening's concert. Hence, interaction modalities have to be intuitive and easy to explain; with only a few minutes' preparation, a musician should understand a system's behaviour sufficiently to explore and improvise with it, on top of the cognitive demands of negotiating an improvisation.

Collaborating musicians have taken several approaches to working with my audiovisual systems. A few have chosen not to look at the video at all; they felt that they wanted to focus on sound. Some have mentioned the temptation to try to "push" the animations into specific outcomes via their sonic gestures; this temptation may obviously distract from music-making. Most of my fellow performers have preferred some conversations about the chosen systems before a performance, and actively engage with the video when playing.

Related Work

Audiovisual performance is widespread in the club music community. The VJ Labor website (http://vjlabor.blogspot.com), for example, showcases such work. The music is often beat/loop-based, with relatively stable tempos and event rates, and little variation in space or use of silence. The visuals often incorporate loops and simple cycles, and work with pre-recorded footage that may be manipulated in real time.

Interactive video is also a component in many compositions. For example, violinist Barbara Lueneburg's DVD Weapon of Choice (http://www.ahornfelder.de/releases/weapon_of_choice/index.php) includes a sampling of composed pieces with live or static video, by Alexander Schubert, Yannis Kyriakides, Dai Fujikura, and others. Some of the pieces incorporate live audio or sensor input.

In my experience, interactive visuals tend to be significantly less common in free improvisation. The strategies that work well in the VJ and composer communities, in my opinion, often do not map well to non-idiomatic free improvisation, with event densities that may vary widely in short time windows, use of space and silence, and overall conversations that develop from moment to moment, with little or no pre-arranged compositional structure. The technologies and internal details of these systems also tend to be poorly documented.

A major early audiovisual performance project involving improvisers is Levin and Lieberman's Messa di Voce (Levin and Lieberman, 2004). The project appears to be primarily designed around Joan La Barbara and Jaap Blonk, two vocalists who are renowned for their exploration of extended vocal techniques in performance. Real-time camera and audio input from the performers drive an array of generative processes, including particle systems and fluids. Messa comprises twelve short, theatrically effective sections over a total of 30-40 minutes; my own work tends toward longer sections of 8-10 minutes each, with each section based on a distinct generative process, for more extended "conversations". In Messa, La Barbara and Blonk are always the centers of visual attention on stage; camera-based tracking of their bodies is a significant component of the live interactions. With my systems, my collaborators tend to focus on working with abstract sound; while the musicians are visible on stage, their movements are not tracked, and the visual attention of the audience tends to be primarily on the live video. From the online documentation, it is not clear if Messa di Voce has been performed with improvisers other than La Barbara and Blonk, or whether it has been revived since the 2004/5 performances and installations.

More recent projects include the performance duo klipp av, focusing on live audiovisual cutup/splicing (Collins and Olofsson, 2006); trombonist Andy Strain's audiovisual pieces for schoolchildren (http://andystrain.com); Billy Roisz's live video work, which incorporates relatively minimalist transformations of still images, found footage and analog artifacts (http://billyroisz.klingt.org/video-works); and William Thibault (http://www.vjlove.com), who often manipulates dense data network visualizations in performance with free improvisers.

Example Systems

To date, I have built over a dozen distinct audiovisual performance systems, some with numerous variants. Most of these systems have been used in numerous performances with improvising musicians. The first was the particle system Interstices, introduced in (Hsu 2009). I will focus mostly on four of the later systems: Flow Forms, Flue, Fluke, and Leishmania.

Flow Forms is based on the Gray-Scott reaction-diffusion algorithm (Pearson 1993), which has been widely used in the generative art community. Two simulated "chemicals" interact and diffuse in a 2D grid, according to simple equations. Parameters that control the concentrations of the simulated chemicals are modified by, and track activity from, the real-time audio. Sonically active sections result in more robust visuals; long periods of silence will result in fragmentation of the patterns, eventually leaving a dark screen. In addition, hidden masks representing shapes or images can be introduced to guide the formation of visible patterns. Figure 3 shows an example of a simulated chemical flowing into a pre-loaded image mask.

Figure 3. Gray-Scott Reaction-Diffusion Process with Transition into Hidden Image Mask
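The Gray-Scott update itself is compact. Below is a minimal NumPy sketch of one simulation step, using commonly cited textbook parameter values; this is my own illustration, not the Flow Forms source, and the feed/kill values are assumptions.

```python
import numpy as np

def gray_scott_step(U, V, Du=0.16, Dv=0.08, f=0.035, k=0.065, dt=1.0):
    """One explicit-Euler step of the Gray-Scott reaction-diffusion system
    on a 2D grid with periodic boundaries. U and V are the two simulated
    chemicals; f (feed) and k (kill) are the kind of high-level parameters
    a performance system could modulate from audio descriptors."""
    def laplacian(Z):
        # 5-point stencil with periodic wraparound
        return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
                np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4.0 * Z)
    reaction = U * V * V              # U + 2V -> 3V
    U = U + dt * (Du * laplacian(U) - reaction + f * (1.0 - U))
    V = V + dt * (Dv * laplacian(V) + reaction - (f + k) * V)
    return U, V
```

Sweeping f and k moves the system between spots, stripes, and decay to a blank field, which is consistent with the behaviour described above: audio activity keeps the parameters in a pattern-sustaining regime, while silence lets the patterns fragment.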

Flue is a smoke simulation, based on a port of Jos Stam's stable fluids code (Stam 1999). Two smoke sources move through space, each activated and pushed by a real-time audio stream. Again, activity levels in the visual simulation roughly track activity levels in the audio, but there are no simple mappings of low-level behaviors. Hidden masks can be introduced to constrain the movement of the smoke. Figure 4 shows simulated smoke coalescing into the shape of a skull, then dispersing.
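Stam's solver is "stable" because each sub-step solves an implicit system iteratively rather than integrating forward explicitly, so it cannot blow up at large time steps. The following is a sketch of the diffusion sub-step in the spirit of (Stam 1999), simplified to Jacobi iteration on a periodic grid; it is my own illustration, not the Flue source.

```python
import numpy as np

def diffuse(d0, diff=0.0001, dt=0.1, iters=20):
    """Implicitly diffuse a density field d0 (e.g. smoke density), as in
    Stam's stable fluids: approximately solve (I - a*Laplacian) d = d0
    by fixed-point iteration, which stays stable for any time step."""
    n = d0.shape[0]
    a = dt * diff * n * n
    d = d0.copy()
    for _ in range(iters):
        neighbors = (np.roll(d, 1, 0) + np.roll(d, -1, 0) +
                     np.roll(d, 1, 1) + np.roll(d, -1, 1))
        d = (d0 + a * neighbors) / (1.0 + 4.0 * a)
    return d
```

The full solver adds advection and projection passes over a velocity field; in a performance setting, audio descriptors would plausibly drive the smoke source positions and injected velocities, not this inner numerical loop.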

Fluke is based on Stephan Rafler's extension of Conway's Game of Life (Rafler 2011). The algorithm is very compute-intensive; I adapted Tim Hutton's OpenCL implementation, which runs on the GPU. Real-time audio activity triggers the formation of structures in a 2D space; algorithm parameters are constantly modulated so that the visual activity level tracks activity levels in the performance audio.

Figure 4. Smoke Simulation with Smoke Sources Filling Hidden Skull Mask
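For readers unfamiliar with (Rafler 2011): SmoothLife replaces the discrete 8-cell neighbor count of Conway's rules with continuous integrals over disk-shaped neighborhoods, which is what makes a GPU implementation attractive. The discrete base case that it generalizes can be sketched in a few lines of NumPy; this is my own illustration, not Fluke's OpenCL code.

```python
import numpy as np

def life_step(grid):
    """One step of Conway's Game of Life on a periodic 2D grid of 0s and 1s.
    SmoothLife generalizes the neighbor count below to a smooth integral
    over a disk, so cell states and time can vary continuously."""
    neighbors = sum(np.roll(np.roll(grid, i, 0), j, 1)
                    for i in (-1, 0, 1) for j in (-1, 0, 1)
                    if (i, j) != (0, 0))
    # Birth on exactly 3 live neighbors; survival on 2 or 3
    return ((neighbors == 3) |
            ((grid == 1) & (neighbors == 2))).astype(grid.dtype)
```

In SmoothLife the birth/survival thresholds become smooth sigmoid intervals; those interval bounds are exactly the kind of continuously modulatable parameters that can be tied to activity levels in the performance audio.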

Leishmania is an interactive animation environment that visually resembles colonies of single-cell organisms in a fluid substrate. Each cell-like component has hidden connections to and relationships with other components in the environment. The colonies evolve and "swim" through the substrate, based on a combination of colonial structure and inter-relationships, and flows in the fluid substrate that might be initiated by gestural input. Leishmania has been used extensively in performance with Christopher Burns' Xenoglossia interactive music generation system. These two systems communicate with one another in a variety of ways. The animation is influenced by the real-time analysis of audio from Xenoglossia. In addition, the two systems exchange OSC network messages, informing each other of events that are difficult to extract from automatic analysis, such as pending section changes, structured repetitions with variations, and the configuration of animation components.

Reactions

So far, I have mostly focused on building complex audiovisual systems for live performances "in the wild"; little time has been spent setting up laboratory-like situations for more formal evaluations. Evaluation procedures such as those described in (Hsu and Sosnick, 2009), targeting interactive music systems from the points of view of both the performing musicians and audience members, might be adapted for interactive audiovisual systems; the presence of complex visual components, with disparate behavioural types, significantly complicates such evaluations. Instead, I will summarize some generally positive feedback on performances, from musicians and audience members; it appears to support some of my original design goals.

Frequent Oakland-based collaborator Gino Robair (percussion/electronics): "What I find fascinating is how the interactive pieces challenge the musicians who play them. One immediately wants to figure out how to 'game' the system, and control it with what we are playing. Yet the algorithms confound that, forcing the musicians to treat the computer system as a real duet partner who is 'listening' but not necessarily responding in a predictable way" (Robair, personal communication).

Another frequent collaborator, Hamburg-based Birgit Ulher (trumpet/electronics): "What I especially like about [Bill's animations] is the strong connection to the music without being too obvious or illustrative. Working with Bill's animations opens up a lot of new levels of communication; the audio input of the musicians is transformed into visuals which can be seen on a screen while playing, so it is a kind of seesaw of influences. The visuals influence the players and vice versa; also, the communication between the musicians has an additional level since their interaction is seen on the screen as well" (Ulher, personal communication).

London-based John Butcher (saxophones), who, with Robair and myself, makes up the audiovisual trio Phospheme: "The systems' responses may be subtle or striking, but always seem organic. There's a creative, two-way encounter where the musician is only partly driving the process, and the inspiration they take from the evolving visual reactions opens space for some of the more unpredictable consequences one hopes for with human interactions. From a structural point of view it's interesting that this can happen within the consistent 'flavour' of a particular visual idea, giving each improvisation a valuable coherence" (Butcher, personal communication).

Robair, Ulher and Butcher have all worked with a range of my audiovisual pieces. Chicago-based Chris Burns (electronics) has mostly worked with Leishmania, on a number of occasions; he shares his experiences: "[Leishmania] offers a rich variety of visual behaviors within a focussed, consistent, admirably stark and unapologetically digital aesthetic. That variety makes it a suitable accompaniment for a wide range of musical choices; it also makes the system unpredictable, with musical stimuli leading to a number of possible visual reactions. Leishmania provides stimulus as well as response. The animations are evocative without being commanding - the visuals inspire formal and gestural ideas in my performance, but they never feel constraining, limiting, or demanding of a particular type of musical response. In short, Leishmania feels like a very sophisticated, capable, and provocative duo partner - which has everything to do with the duo partner who constructed it" (Burns, personal communication).

Reviewer Stephen Smoliar wrote about a performance in 2014 with James Fei, Gino Robair, and Ofer Bymel: "…It seemed clear that [Hsu] was 'playing' his interactive animation software following the same logic and rhetoric of free improvisation… What was most striking… was how Hsu could use visual characteristics such as flow and pulse to achieve animations that probably would have been just as musical had they been displayed in silence." (Smoliar 2014)

Summary

I have been lucky to work with many inspiring improvising musicians on audiovisual projects. Their creative and fascinating approaches to performance situations, from managing textural and gestural materials, to working with space and pulse in the context of non-referential abstract materials, have informed my audiovisual system designs at many different levels. I'd like to thank my collaborators, who have been generous with their time and feedback. I have shared some notes and experiences here, but highly encourage viewing one of the performances on video or (especially) live; free improvisation, with or without visuals, is very much a social music-making practice that is best experienced in live performances. A selection of video documentation is available online:

Set with James Fei (reeds) and Gino Robair (percussion) at Outsound Summit Festival 2014, San Francisco: https://www.youtube.com/watch?v=NLFj26zfqsI

Performance with Chris Burns (electronics) of Xenoglossia/Leishmania in the Computer Music Journal online anthology (requires login): http://www.mitpressjournals.org/doi/abs/10.1162/COMJ_x_00276#.VJtz6LiALw

Short demo of Fluke (music by Birgit Ulher and Gino Robair): https://vimeo.com/106125702

Short demo of Flow Forms (music by Chris Burns): https://vimeo.com/78739548

References

Collins, Nick and Olofsson, Fredrik. 2006. klipp av: live algorithmic splicing and audiovisual event capture. Computer Music Journal, Vol. 30, No. 2 (Summer 2006), pp. 8-18.

Gaver, William, Beaver, Jacob and Benford, Steve. 2003. Ambiguity as a Resource for Design. In Proceedings of CHI '03.

Hayek, Friedrich. 1999. The Sensory Order: An Inquiry into the Foundations of Theoretical Psychology. University of Chicago Press, Chicago, IL.

Hsu, William. 2009. Some thoughts on visualizing improvisations / improvising visualizations. In Proceedings of the 6th Sound and Music Computing Conference (SMC '09).

Hsu, William and Sosnick, Marc. 2009. Evaluating Interactive Music Systems: An HCI Approach. In Proceedings of the International Conference on New Interfaces for Musical Expression (NIME 2009).

Levin, Golan and Lieberman, Zachary. 2004. In-situ speech visualization in real-time interactive installation and performance. In Proceedings of NPAR 2004, pp. 7-14.

Pearson, J. 1993. Complex patterns in a simple system. Science 261, 189-192.

Rafler, Stephan. 2011. Generalization of Conway's "Game of Life" to a continuous domain - SmoothLife. arXiv preprint arXiv:1111.1567.

Smoliar, Stephen. 2014. Bill Hsu brings musical rhetoric to his improvised interactive animations. Retrieved January 10, 2016 from http://exm.nr/1tzVSlp.

Stam, Jos. 1999. Stable fluids. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '99).
