
LJUDSKRAPAN/THE SOUNDSCRAPER: SOUND EXPLORATION FOR CHILDREN WITH COMPLEX NEEDS, ACCOMMODATING HEARING AIDS AND COCHLEAR IMPLANTS

Kjetil Falkenberg Hansen
KTH Royal Institute of Technology
[email protected]

Christina Dravins
Rīga Stradiņš University
kristina [email protected]

Roberto Bresin
KTH Royal Institute of Technology
[email protected]

ABSTRACT

This paper describes a system for accommodating active listening for persons with hearing aids or cochlear implants, with a special focus on children with complex needs, for instance at an early stage of cognitive development and with additional physical disabilities. The system is called Ljudskrapan (or the Soundscraper in English) and consists of a software part in Pure data and a hardware part using an Arduino microcontroller with a combination of sensors. For both the software and hardware development, one of the most important aspects was to always ensure that the system was flexible enough to cater for the very different conditions that are characteristic of the intended user group.

The Soundscraper has been tested with 25 children with good results. An increased attention span was reported, as well as surprising and positive reactions from children whose caregivers were unsure whether they could hear at all. The sound-generating models, the sensors and the parameter mapping were simple, but provided a controllable and sufficiently complex sound environment even with limited interaction.

1. INTRODUCTION

This paper describes a system for promoting listening and exploration of sounds based on playful audio interaction. The system is devised for children at an early stage of cognitive development, with a focus on children with hearing impairment and complex needs.

The aim is to reduce limitations of activity due to functional disability and to encourage curiosity toward active listening by providing good conditions for exploring, investigating and playing with a collection of synthesized and recorded sounds. The system, called “Ljudskrapan” in Swedish and “the Soundscraper” in English, consists of a software and a hardware part. The software is programmed in Pure data [1], and various sensors are used for interacting. A typical setup includes an Arduino [2] with gesture tracking sensors attached, but other input devices like game controllers, cameras and pointing devices can be connected.

Copyright: © 2011 Kjetil Falkenberg Hansen et al. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 Unported License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.


In the typical envisaged use scenario, the child interacts with the Soundscraper through gestures in a school or a clinic, supervised by a special needs teacher or a speech therapist, and with the software operated by a second person. In a simpler use case, a user interacts with and operates the software without supervision. In both cases, the Soundscraper can be classified as a sound exploration tool, a sound toy and a musical instrument.

The Soundscraper has ambitions to empower children with hearing impairment and complex needs in the following ways:

• make exploration of sound and music accessible,

• stimulate active listening in a playful and rewarding way,

• be inclusive by using easily adaptable sensor data for control,

• provide means for expanding orientation by listening, and promote audition as a means of orientation and communication.

The aim is also to offer novel possibilities for assessing hearing capabilities in children at an early stage of development. An important function is to improve the possibilities for the child to provide clear and unambiguous feedback to caregivers.

These are by no means modest goals, but we believe that the method may offer many possibilities to support the development of listening capabilities for children with complex needs and hearing impairment. The advantage of the proposed system is that the child can influence listening and choose which sounds are interesting, thus promoting active participation and supporting independence.

Pilot interventions and the first tests have been promising, and the potential future users have expressed a wish for further development of the Soundscraper. As will be discussed in the following section, the situation for many of the children we target is such that measuring the success of our intervention is impracticable or impossible in a short time perspective. The focus in this paper is on the conceptual framework and technical description, illustrated with a few examples from the first two years of small-scale testing.

1.1 Background

Young children with impaired hearing are often provided with hearing aids at an early age. The most common form of hearing impairment, sensorineural hearing loss, is caused by a lack of sensory cells in the inner ear. If there is sufficient function in the inner ear, an acoustic (traditional) hearing aid (HA) can be fitted so that the remaining sensory cells are stimulated in an optimal way. With profound hearing loss it is not possible to enhance the acoustic input to create a hearing experience. In these cases a cochlear implant (CI) can be used to make hearing possible.

1.1.1 Cochlear implants and hearing development

A CI consists of an electrode array that is surgically inserted into the inner ear (cochlea). The electrode array is fed with an electrical stimulus pattern generated by an externally worn sound processor. The sound processor is programmed to convert the acoustic signals into an electrical pattern that is continuously forwarded by the electrode array to the hearing nerve, and then further on into the central auditory system. The auditory processes in the brain convert the synthetically produced signal pattern into a meaningful listening experience.

The CI was initially designed for persons who had become deaf after acquiring spoken language, but the technology has later been successfully introduced to children born deaf as well. The surgical procedure by which the electrode array is inserted into the inner ear is often performed when the child is between six and nine months of age. The challenges are, however, quite different in the pediatric population, as hearing development depends on physiological maturation as well as exposure to stimulation.

In a typically developing child, hearing is a gradually emerging skill underlying the development of spoken language. By the age of 9–12 months, the majority understand several spoken words and expressions. This means that a child awaiting CI surgery is delayed several months in comparison to their hearing peers. In many cases, implantees go on developing hearing and spoken language with little or only minor difficulty. This does, however, not hold true for all individuals: an especially vulnerable group consists of children with complex needs.

1.1.2 Complex needs

Children with complex needs at an early stage of development constitute a small group in society. Within this group, many have limitations in vision and hearing as well as limitations in motor control. Children with complex needs are also considered for CI surgery, but the intervention required following the implantation may be different from that of typically developing children [3]. It has been suggested that the benefit of CI use for children with complex needs should be seen in a broader perspective than speech outcome [4].

Objective methods can be applied for adjusting and evaluating the HA/CI, but they are problematic for assessing performance in young children. Subjective methods target both qualitative and quantitative aspects of hearing. Primarily these are speech recognition and pitch perception [5, 6], but also melody perception [7] and music enjoyment [8]. However, children with hearing impairment and complex needs often show minimal response to auditory stimulation through HA/CIs.

In many cases the responses are unspecific and provide little information on sound appreciation. In a wider perspective, the lack of clear-cut feedback constitutes a serious problem. The capacity to hear is established by auditory stimulation enabling physiological maturation as well as higher-level learning. Due to the lack of reactions from the child, the caregivers may give up the endeavor to ensure that the HA/CI is used continuously. This may result in suboptimal stimulation and lead to further suppression of auditory processes.

Studies by Kraus et al. [9] have shown that active listening has a positive and lasting effect on perception after surgery, and therefore the child must be systematically exposed to sound in order to develop auditory capacity. Wiley et al. [10] concluded that caregivers reported a variety of benefits for children using CI, such as more awareness of environmental sounds, a higher degree of attention and clearer communication of needs. Studies also show that children unable to communicate with their environment in a meaningful way may have a disrupted emotional and cognitive development and experience their surroundings as being chaotic [11].

1.1.3 Special needs interfaces for sound manipulation

Within the expanding field of new musical instruments based on hybrid software/hardware solutions, work with special needs users has always been a prominent direction. Many of the products and prototypes that have been described (see for instance [12, 13, 14, 15, 16]) address classical music therapy needs and problems, and few look at sound perception specifically. Very little research has been done on the combination of severe hearing impairment and the other complex needs described above, in particular with the aim to assess hearing and stimulate active listening [17].

2. SOFTWARE AND HARDWARE

One characteristic of the target user group is that every individual has particularly demanding needs, and there is seldom much in common between the users. Additionally, a setup that works in one session might not be possible to use at all in the next. It is therefore necessary to be able to radically change the software behavior and hardware configuration within moments. For example, a child that is still and hardly able to move an object in one session may in another session display strong or involuntary movements. The software and hardware must correspond to such diverse states, and the technician must be ready to adapt to the situation quickly to ensure a working system and a safe environment for the involved persons.

In particular on the software side, sudden changes in amplitude or sound characteristics must happen in a controlled fashion. The sound should not under any circumstance exceed planned level restrictions. On the hardware side, safety must be ensured with regard to, for instance, sharp edges, loose parts, and wires that can strangle or unexpectedly return thrown sensors. As the user testing has shown, the hardware will often be handled vigorously.
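To make the level-safety requirement concrete, the sketch below shows one way a gain stage can enforce a hard ceiling while smoothing sudden gain requests. It is written in Python for readability; the actual limiter lives in the Pd patch and is not documented here, so the class name, ceiling and slew values are our own illustrative choices.

```python
import numpy as np

class SafeGain:
    """Slew-limited gain with a hard ceiling (illustrative sketch only;
    not the actual limiter used in the Ljudskrapan Pd patch)."""

    def __init__(self, max_gain=0.8, slew_per_block=0.02):
        self.gain = 0.0             # current smoothed gain
        self.max_gain = max_gain    # the planned level restriction
        self.slew = slew_per_block  # largest allowed gain step per block

    def process(self, block, requested_gain):
        # Clamp the request to the planned ceiling, whatever the sensors say.
        target = min(max(requested_gain, 0.0), self.max_gain)
        # Approach the target in limited steps, so erratic sensor input
        # can never produce a sudden jump in loudness.
        self.gain += float(np.clip(target - self.gain, -self.slew, self.slew))
        return block * self.gain
```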

To accommodate quick and considerable changes, the software was written in Pure data, which is an excellent environment for prototyping (and is open source software). The hardware has for the most part consisted of sensors on detachable units that are easy to place on clothing or objects, connected to an Arduino digital–analog board. This low-cost solution promotes both easy distribution to schools and individuals, and easy extension of the system.

2.1 Design method

Unlike typical situations where the software/hardware developer engages the user in a participatory design process, or feeds back responses from test sessions [18], here the decisions were mainly based on solid prior background knowledge about the users. Most importantly, the aforementioned need for quickly altering any part of the setup had to be attended to.

After the first trials, which took place in real settings, relatively little has been reprogrammed or redesigned. In future revisions, we expect that the main modifications will concern the software interface (which is not operated by the child) and the sensor hardware (which will need to become more durable).

Models for generating the sound output, described below, can easily be added. This was indeed done during the first sessions and is also foreseen to be a continuous process. New and replacement sensors are also expected to be added when needed. These processes possibly advocate a participatory design methodology.

2.2 Software and sound models

The sound interaction was initially inspired by the way scratch DJs treat records: how they drag the sound fast or slow over a certain spot and isolate small fragments of a sound recording [19] (hence also the name Soundscraper). Another inspiration from scratching, though more pragmatic, was that the audio signal of this instrument typically has a lot of broadband energy that swiftly sweeps the audible frequency range [20]. This, we argued, could be effective for listening with cochlear implants when we are not sure whether a child can perceive sound in a certain frequency region.

In the current implementation, the software interface offers a choice of around five sound models out of some ten available (see Figure 1). New models can easily be added to the interface, and they are typically based on existing Pure data abstractions, for instance from Pd’s patch example library or previous works by the authors. A few examples are given below.

The Looper model loops a segment of a recorded sound and was derived from the Skipproof application [21] and the Pd example B12.sampler.transpose. Loop segments can be varied from the whole file down to a few milliseconds, but typical loop lengths are at least 200–300 ms. Other parameters are the starting point and the playback speed (“pitch”).
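As an illustration of the Looper’s three parameters (loop length, starting point and playback speed), here is a minimal offline sketch in Python; the real model is a Pd patch derived from B12.sampler.transpose, and the function below is our own simplification without interpolation.

```python
import numpy as np

def loop_segment(samples, sr, start_s, length_s, speed, out_s):
    """Loop `length_s` seconds of `samples` starting at `start_s`,
    read at `speed` (>1 plays faster and higher in pitch).
    Assumes the chosen segment lies inside the file."""
    start = int(start_s * sr)
    length = max(int(length_s * sr), 1)
    segment = samples[start:start + length]
    # A phase pointer moves through the segment at `speed` and wraps
    # around, producing the loop; no interpolation, so only a sketch.
    phase = (np.arange(int(out_s * sr)) * speed) % len(segment)
    return segment[phase.astype(int)]
```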

The Vocoder model was added to be able to ‘freeze’ and move around in the sound file, and it was based on the patch I07.phase.vocoder. Available parameters are playback speed, pitch change (“tuning”), and playback position.

The Theremin was included to allow sweeping both pure tones and harmonic sounds across a broad frequency range to assess pitch perception. To introduce variations in tones that are kept at a stable pitch level, frequency modulation was added. The main parameters are pitch, harmonics, and frequency modulation speed and range.

Pulse trains of bandpass-filtered white noise bursts offer possibilities to manipulate parameters in a rhythmic sequence of tones. The model was added to explore temporal resolution, which can be problematic with CI. Adjustable parameters include tone attack steepness, tempo, tone duration, noise filter, and filter center frequency.
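A possible realization of such a pulse train, with parameter names and default values of our own choosing (the actual Pd model is not reproduced here), could look as follows:

```python
import numpy as np
from scipy.signal import butter, lfilter

def pulse_train(sr=44100, tempo_bpm=120, tone_dur=0.08, attack_s=0.005,
                center_hz=1000.0, bandwidth_hz=400.0, n_tones=8):
    """Rhythmic sequence of bandpass-filtered white noise bursts.
    Assumes tone_dur is shorter than the beat period."""
    period = int(sr * 60.0 / tempo_bpm)   # samples between tone onsets
    out = np.zeros(period * n_tones)
    nyq = sr / 2.0
    b, a = butter(2, [(center_hz - bandwidth_hz / 2) / nyq,
                      (center_hz + bandwidth_hz / 2) / nyq], btype="band")
    n = int(tone_dur * sr)
    for i in range(n_tones):
        burst = lfilter(b, a, np.random.uniform(-1.0, 1.0, n))
        # The linear onset ramp realizes the "attack steepness" parameter.
        ramp = np.minimum(np.arange(n) / max(int(attack_s * sr), 1), 1.0)
        out[i * period:i * period + n] += burst * ramp
    return out
```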

The music player, based on the oggread~ object, plays compressed audio files. A compressed format was chosen as this model uses whole songs. Only two parameters can be changed: track selection and track position. The music player was not included from the start, but when we devised the session program, we decided to include a mode with less interaction.

In addition to the parameters mentioned above, it is possible in all models to control the amplitude, to add echo for making sounds more complex, and to add filtering for amplifying or attenuating frequency ranges.
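Purely as an illustration of how such a post-effect thickens the output, a simple feedback echo could be sketched like this (the actual Pd effect and its parameter ranges are not documented here; delay, feedback and mix values are arbitrary examples):

```python
import numpy as np

def echo(signal, sr, delay_s=0.25, feedback=0.4, mix=0.5):
    """Simple feedback echo (illustrative sketch only)."""
    d = max(int(delay_s * sr), 1)
    wet = signal.astype(float).copy()
    # Each sample receives an attenuated copy of the output d samples
    # earlier, creating a decaying train of repetitions.
    for i in range(d, len(wet)):
        wet[i] += feedback * wet[i - d]
    return (1.0 - mix) * signal + mix * wet
```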

2.3 Hardware and motion sensors

All interaction with the software from the user was projected to be achieved through capturing body movement and gestures with sensors. Such sensor data include inertia measurements, proximity, bending and pressure. In addition to these we have used game controllers. Naturally, even analysing audio or video input would work well (for recognizing speech, facial expressions and body movements, see e.g. [22]), but this has not been tested with respect to personal integrity, and also since the included sensors already present a sufficiently rich environment for interaction.

In the latest versions, the hardware consisted of up to four sensor “bundles” that could be placed on or near the child. Each bundle was connected to the Arduino board with a 2–3 m flat cable or twinned wire to provide the child with a surrounding space. The bundles were enclosed in protective material and could easily be fastened with tape, rubber bands or velcro. Choosing sensors is still an ongoing process, but typically one of the bundles had an inertia sensor with up to six degrees of freedom (accelerometer, gyroscope), one bundle had analog sensors (pressure, light intensity, bending), and one bundle had one or more momentary buttons.
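The paper does not specify how the sensor readings enter the software; a common route is a serial connection from the Arduino. Purely as a hypothetical sketch of that data flow, assuming an Arduino sketch that prints one comma-separated line of analog readings per loop, the host side could look like this:

```python
import serial  # pyserial; illustrative only -- in the actual system the
               # readings go to Pure data, not to a Python script

def read_bundles(port="/dev/ttyUSB0", baud=57600):
    """Yield lists of raw analog readings (0..1023) from a hypothetical
    Arduino sketch printing lines such as '512,97,1023,0'."""
    with serial.Serial(port, baud, timeout=1) as board:
        while True:
            line = board.readline().decode(errors="ignore").strip()
            if not line:
                continue
            try:
                yield [int(v) for v in line.split(",")]
            except ValueError:
                continue  # skip malformed lines, e.g. during startup
```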

Figure 1. The software interface for the Soundscraper. The person operating the software chooses sound models to the left, mapping of parameters in the middle part, and sound files to the right. Incoming sensor data are mapped to the sliders in the lower part of the screen. The interface shows only the choices available for each sound model and sensor setup.

Momentary buttons were used with some sophistication and not as simple triggers. For many of the children, buttons are interfaces they are familiar with. We extract several control parameters from the pushed buttons, and they are associated with a sort of increasing and decaying energy measure. First, a parameter value increases in correspondence with the push duration. Second, a parameter value increases in correspondence with the push frequency; these can be computed for each separate button or accumulated. Third, a parameter value increases with the number of pushed buttons when there is more than one.
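The three button measures can be condensed into a small state machine; the following sketch is our reading of the description above (the decay rate and increments are invented for illustration, and the actual extraction is done in the Pd patch):

```python
import time

class ButtonEnergy:
    """Increasing and decaying 'energy' measure extracted from momentary
    buttons, combining push duration, push frequency and the number of
    simultaneously pushed buttons (sketch; constants are arbitrary)."""

    def __init__(self, decay_per_s=0.5):
        self.energy = 0.0
        self.decay = decay_per_s
        self.held = set()          # ids of currently held buttons
        self.t = time.monotonic()

    def _step(self):
        # Let the energy decay with elapsed time, never below zero.
        now = time.monotonic()
        self.energy = max(self.energy - self.decay * (now - self.t), 0.0)
        self.t = now

    def press(self, button_id):
        self._step()
        self.held.add(button_id)
        # Every push adds energy (push frequency), scaled by how many
        # buttons are down at once (the third measure).
        self.energy += 0.1 * len(self.held)

    def hold(self, dt_s):
        # Called periodically while buttons are held: push duration.
        self._step()
        self.energy += 0.2 * dt_s * len(self.held)

    def release(self, button_id):
        self._step()
        self.held.discard(button_id)
```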

The sensor bundles, including the buttons, were integrated in toys or familiar objects, or they were placed on or around the body. To place sensors on moving limbs, we used armbands, headbands and similar. For placement on toys or objects, the combination of expected movement, sensor type and fastening possibility had to be explored; two illustrative examples were an enticing rubber bathing duck with a light intensity sensor inside, and a steering wheel with an accelerometer inside permitting “steering” in any direction.

Unlike in a typical music interface application, with special needs children we can have a conflict of interest where a movement pattern generates good sensor readings but there is an incentive to suppress that activity. Likewise, limited output from a sensor may have to be tolerated, as a more pressing concern is to encourage a certain behavior.

A consequence of having to settle for tracking very limited movement patterns is the need to scale the input data to the movement range. In the mapping between input sensor data and output sound parameters, we aim to always deal with full-range signals (0..1). Due to the nature of the movements, it is necessary to rescale the sensor output during use. A problem with the rescaling is that there are not many typical movements, such as repeating gestures, in the user group. One common characteristic is that the individual might be motionless for a while, then move only momentarily before returning to a calm state.
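One way to realize such rescaling is a running min–max tracker whose range slowly contracts, so that a single momentary movement after a long still period still spans the output range. This is a sketch of the approach, not the Pd implementation; the forgetting factor is an invented example value.

```python
class AutoScaler:
    """Rescale raw sensor readings to the full 0..1 range by tracking the
    observed minimum and maximum during use (illustrative sketch)."""

    def __init__(self, forget=0.001):
        self.lo = None
        self.hi = None
        self.forget = forget  # how quickly the tracked range contracts

    def scale(self, x):
        if self.lo is None:
            self.lo, self.hi = x, x
        self.lo = min(self.lo, x)
        self.hi = max(self.hi, x)
        # Contract the range slowly so it re-adapts after long motionless
        # periods followed by only momentary movements.
        span = self.hi - self.lo
        self.lo += self.forget * span
        self.hi -= self.forget * span
        if self.hi <= self.lo:
            return 0.5  # no usable range observed yet
        return min(max((x - self.lo) / (self.hi - self.lo), 0.0), 1.0)
```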

2.4 Parameter mapping strategies

One of the cornerstones in making the software and hardware flexible and adaptable is to allow for easily changing the parameter mapping between input and output. The importance of parameter mapping has been thoroughly studied [23], and we can draw on experience from previous projects. In a typical new-instrument situation, a skilled musician performs practiced music for an informed audience [24].

Here instead, we face a situation where a child interacts with a system and there might not even be visible signs of any audio perception taking place; similarly, the motor control can often appear so erratic that it is not feasible to find proof of any interplay with the produced sounds. In these situations, guidelines for parameter mapping can be followed to ensure possibilities for rich and expressive interaction [14], for instance by defining “activity thresholds”, but the main challenge is to ensure that the control input is used effectively.
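An “activity threshold” in this sense can be as simple as a dead zone followed by renormalization; the sketch below is one hedged interpretation (the threshold and curve values are illustrative, not taken from the system):

```python
def map_with_threshold(x, threshold=0.15, curve=2.0):
    """Map a scaled sensor value in 0..1 to a sound parameter in 0..1,
    ignoring readings below an activity threshold so that resting noise
    produces no sound change (illustrative sketch)."""
    if x < threshold:
        return 0.0
    # Renormalize the active region and apply a gentle curve so that
    # small intentional movements remain clearly audible.
    y = (x - threshold) / (1.0 - threshold)
    return y ** (1.0 / curve)
```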

3. USER TESTING

For each session with a child, the caregivers and experimenters made careful predictions and plans based on the child’s physical and intellectual condition, personality and known preferences. Still, in most cases the expected or intended interaction was in at least some respects compromised by unforeseen behavior. The causes could be several and hard to explain: some factors include an unfamiliar environment for the child, sensed expectations from the people in the room, unusual exposure to sound, excitement or anxiety, insecurity towards new persons, or even factors unrelated to the experiment.

We have so far been conducting tests with a small population of children with severe multiple physical and cognitive impairments. The project has ethical approval, but most of the details and data from the tests are not available for inclusion in analyses. Future experiments are scheduled where more data can be included. Presumably, even results from these experiments will be challenging to use in a comprehensive analysis because of the extremely diverse nature of each session. A few of the aspects we will continue to look at are related to quantitative measures. Qualitative measures require a great deal of objective interpretation and observation, which in turn necessitates personnel closely acquainted with the child (these qualitative measures are naturally essential in the bigger perspective).

The first quantitative measure we look at is time spent on a task, which can be adapted to the situation. Typical time-measured tasks are

• preference of a frequency region over others,

• preference of a (musical) sound over others,

• preference of a motor activity over others,

• sound level preference,

• (…)

These measurements require little interaction, but it is necessary to set the conditions right so that data from a sensor do not coincide with a comfortable resting position. From the experiments, we have noted and been informed that the children in general show considerably more attention to the task than they normally would. Much data will likely need to be gathered before drawing conclusions from a time-measure method.

A second quantitative measure requires more developed motor control and intellectual capacity. By restricting the range where a sensor produces the desired result (for instance, making an appreciated sound louder), we can evaluate the determination to achieve the wanted sound, both by assessing the difficulty of using the sensor over a restricted range and by the time spent. Extensions of this method include evaluating whether a sound parameter is preferred over others in spite of being harder to attain.
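Both quantitative measures reduce to time spent inside a target region of the (scaled) sensor signal. Assuming a simple log of timestamped values (an assumed format, not the project’s), the measure could be computed as follows:

```python
def time_in_target(log, lo, hi):
    """Total time in seconds that a logged sensor signal stayed inside
    the target range [lo, hi], i.e. the restricted region where the
    appreciated sound is produced. `log` is a list of (time_s, value)
    pairs sorted by time (a hypothetical format for illustration)."""
    total = 0.0
    for (t0, v0), (t1, _) in zip(log, log[1:]):
        if lo <= v0 <= hi:
            total += t1 - t0  # the value held until the next sample
    return total
```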

3.1 Evaluation

The Soundscraper concept has been tested with the aim of identifying strengths and weaknesses in the design and the method. Twenty-five children with complex needs were included in the evaluation.

Before the test session, details and information on the children were collected, including general level of functioning, motor skills, personal interests and special fears (if any), and auditory functions. For each child a plan was outlined, including adaptation of sensor equipment and selection of auditory stimulation (see Figure 2). The children were seen together with a parent, caregiver or teacher. A video recording of each session was made. The testing included reactions to and handling of sensors, response to sound and sound manipulation, attention span, and mood following the session. All accompanying persons were asked to evaluate the child’s reactions.

3.1.1 Reactions to and handling of sensors

The children reacted differently to the sensors, both those attached to the body and those on the objects holding sensors. Generally, the impression was that they could be described as having a toy-like function. All children were able to either handle or move the sensor sufficiently for the purpose. One child rejected all types of sensors due to hypersensitivity in all extremities.

Sensors attached to arms or hands were generally less well tolerated than sensors manipulated purposely by the child (i.e. attached to an object or placed within reach). This was also true for children with a very limited range of movement. Sensors attached to the head were well tolerated and used in especially demanding situations. When the movements were voluntary, such as for one child who could control turning and tilting of the head, the child immediately grasped the link between movement and sound. However, when the movements were mainly involuntary, it became a challenge to make relevant mappings between movement and sound. Two children with autism and autism-like symptoms devoted all attention to the cords attached to the sensors. One child moved arms and hands freely but stopped moving the extremity when a sensor was placed in the hand or attached to the arm.

3.1.2 Response to sound and sound manipulation

All but five children showed clear reactions, and generally appreciation, to sound and sound manipulation. The group of five included the four children mentioned above and a child with profound hearing loss who had an HA but had not yet received a CI. Three children were intrigued by the sound changes but did not seem to make the connection between movement and sound.

3.1.3 Attention span

Five children showed a limited span of interest. Characteristic of this group was that they all clearly understood the task and made the coupling between their actions and the sounding result. However, they more quickly became bored and started to act without purpose, and would need a more complex mapping between gestures and the sound output. For the other children, an increased attention span was registered. It should be noted that these observations were based on the children’s first encounter both with us and the Soundscraper, and long-term effects on attention span were not investigated.

3.1.4 Evaluation of the children’s reaction

Two children showed mild negative reactions during the session. These were the child with hypersensitivity and one child who did not make the connection between movement and sound. The negative mood did not persist for any extended period. A few children fell asleep directly following the session, but this was reportedly due to reasons outside the test situation. Fourteen out of twenty-five caregivers were surprised by the interest in sound and the persistence shown by the child. One mother said that she did not believe that her son reacted to the sound but rather to the sensor as such. However, this impression was not shared by the audiologist and the other caregivers present.

Figure 2. The images show how the sensors can be adapted to different situations. In the left picture is a blind girl who disliked holding onto objects, and the sensor was thus placed on the head. In the middle is a girl with good grip and movement in her left hand. The right picture shows a boy who enjoyed waving a plastic rod used in gymnastics class, and the sensor was placed on the rod. For all three, the sensor is the same inertia unit that measures movement. (Video stills are printed with permission, but moments where the face was covered were chosen deliberately.)

One unexpected observation was that a girl with spastic tetraplegia and involuntary reflex movements was able to relax when listening to a particular sound. During this relaxed state she was able to move one hand voluntarily. This suggests that active manipulation and listening to preferred sounds may strongly influence the overall motor pattern, which will be explored further in forthcoming tests.

4. CONCLUSIONS

The Soundscraper was successfully applied in user tests involving 25 children with hearing aids or cochlear implants, in addition to other physical or cognitive impairments. Both the software and hardware parts could be adjusted to the conditions of each individual session. The results of the preliminary testing were encouraging, and the caregivers indicated that the concept is promising and could possibly be introduced in their school activities.

The concept was judged to be stimulating and rewarding for children with listening capabilities at an early level of auditory development. In contrast, children who already mastered conscious listening appeared to need a higher degree of interaction freedom, especially when they were not able to produce sufficient or controlled movement.

Sensors that require active handling by pushing, pulling or moving an object were more likely to be accepted than sensors that were directly attached to the body of the child. Children with an autism-spectrum diagnosis possibly need specially designed sensors to overcome problems not directly related to movement constraints.

The sound models were quite simple, but they provided a complex sound environment even through limited interaction. When the sensor readings were poor, creative mappings and data scaling were successfully used to compensate for the scarce input. Having a small but versatile arrangement of sensors that could be placed freely was a great advantage over sensors fixed to objects.

During a session with a boy with a hearing loss combined with blindness, it was noted that he was eager to explore objects with his hands. This behavior was seldom observed for this boy, who most of the time was reluctant to use his hands. This suggests that the Soundscraper could also be useful for normal-hearing children with blindness and complex needs.

4.1 Future work

User testing will continue with a group of children included in the present study. The Soundscraper will be included in the daily activities at school. Three different functions will be explored: listening for joy and amusement; active listening and training; and using sound for communication and as a means for raising context awareness.

Future analyses of the logged data will hopefully reveal characteristics of sound perception and listening preferences. Such efforts need to be carried out in collaboration with audiologists and caregivers who know the child well. Finally, the software and hardware components need to be developed further to provide an effective and stable environment for the involved caregivers.

Acknowledgments

We would like to sincerely thank the children and parents who participated in this study, and also the professionals who took part with great enthusiasm. We are grateful for the kind contributions to the development of the Soundscraper from the Promobilia Foundation and for the inspiration provided by the Swedish National Agency for Special Needs Education and Schools.

5. REFERENCES

[1] M. Puckette, “Pure data: Another integrated computer music environment,” in Proc. of the International Computer Music Conference. San Francisco: International Computer Music Association, 1996, pp. 269–272.

[2] Arduino, “Arduino open-source electronics prototyping platform.” [Online]. Available: http://www.arduino.cc/

[3] S. Wiley, M. Jahnke, J. Meinzen-Derr, and D. Choo, “Perceived qualitative benefits of cochlear implants in children with multi-handicaps,” International Journal of Pediatric Otorhinolaryngology, vol. 69, no. 6, pp. 791–798, 2005.

[4] T. P. Nikolopoulos, S. M. Archbold, C. C. Wever, and H. Lloyd, “Speech production in deaf implanted children with additional disabilities and comparison with age-equivalent implanted children without such disorders,” Int J Pediatr Otorhinolaryngol, vol. 72, no. 12, pp. 1823–1828, Dec 2008. [Online]. Available: http://dx.doi.org/10.1016/j.ijporl.2008.09.003

[5] L. L. Pretorius and J. J. Hanekom, “Free field frequency discrimination abilities of cochlear implant users,” Hear Res, vol. 244, no. 1-2, pp. 77–84, Oct 2008. [Online]. Available: http://dx.doi.org/10.1016/j.heares.2008.07.005

[6] W. D. Nardo, I. Cantore, F. Cianfrone, P. Melillo, A. R. Fetoni, and G. Paludetti, “Differences between electrode-assigned frequencies and cochlear implant recipient pitch perception,” Acta Otolaryngol, vol. 127, no. 4, pp. 370–377, Apr 2007. [Online]. Available: http://dx.doi.org/10.1080/00016480601158765

[7] R. P. Carlyon, C. J. Long, J. M. Deeks, and C. M. McKay, “Concurrent sound segregation in electric and acoustic hearing,” J Assoc Res Otolaryngol, vol. 8, no. 1, pp. 119–133, Mar 2007. [Online]. Available: http://dx.doi.org/10.1007/s10162-006-0068-1

[8] K. Veekmans, L. Ressel, J. Mueller, M. Vischer, and S. J. Brockmeier, “Comparison of music perception in bilateral and unilateral cochlear implant users and normal-hearing subjects,” Audiol Neurootol, vol. 14, no. 5, pp. 315–326, 2009. [Online]. Available: http://dx.doi.org/10.1159/000212111

[9] N. Kraus, E. Skoe, A. Parbery-Clark, and R. Ashley, “Experience-induced malleability in neural encoding of pitch, timbre, and timing,” Ann N Y Acad Sci, vol. 1169, pp. 543–557, Jul 2009. [Online]. Available: http://dx.doi.org/10.1111/j.1749-6632.2009.04549.x

[10] S. Wiley, J. Meinzen-Derr, and D. Choo, “Additional disabilities and communication mode in a pediatric cochlear implant population,” International Congress Series, vol. 1273, pp. 273–276, 2004. Cochlear Implants: Proceedings of the VIII International Cochlear Implant Conference. [Online]. Available: http://www.sciencedirect.com/science/article/B7581-4DNPJSX-2F/2/19888be6162034dc9a9b9a345af1fe45

[11] M. Wass, “Children with cochlear implants: Cognition and reading ability,” Ph.D. dissertation, Linköping University, Department of Behavioural Sciences and Learning, 2009.

[12] B. P. Challis and K. Challis, “Applications for proximity sensors in music and sound performance,” in Computers Helping People with Special Needs, ser. Lecture Notes in Computer Science. Berlin/Heidelberg: Springer, 2008, vol. 5105, pp. 1220–1227.

[13] S. Bhat, “Touchtone: an electronic musical instrument for children with hemiplegic cerebral palsy,” in Proceedings of the fourth international conference on Tangible, embedded, and embodied interaction, ser. TEI ’10. New York, NY, USA: ACM, 2010, pp. 305–306.

[14] A. Hunt, R. Kirk, and M. Neighbour, “Multiple media interfaces for music therapy,” IEEE Multimedia, vol. 11, no. 3, pp. 50–58, 2004.

[15] W. L. Magee and K. Burland, “An exploratory study of the use of electronic music technologies in clinical music therapy,” Nordic Journal of Music Therapy, vol. 17, no. 2, pp. 124–141, 2008.

[16] A. L. Brooks and E. Petersson, “Play therapy utilizing the Sony EyeToy,” in Proc. of the 8th Annual International Workshop on Presence, 2005, pp. 303–314.

[17] C. Dravins, R. van Besouw, K. F. Hansen, and S. Kuske, “Exploring and enjoying non-speech sounds through a cochlear implant: the therapy of music,” in 11th International Conference on Cochlear Implants and other Implantable Technologies, Karolinska Institutet, 2010, p. 356.

[18] L. Elblaus, K. F. Hansen, and C. Unander-Scharin, “Exploring the design space: Prototyping “The Throat v3” for the Elephant Man opera,” in Proceedings of the 8th Sound and Music Computing Conference, Padova, Italy, July 2011.

[19] K. F. Hansen, “The acoustics and performance of DJ scratching: Analysis and modeling,” Ph.D. dissertation, Kungl Tekniska Högskolan, 2010.

[20] K. F. Hansen, M. Fabiani, and R. Bresin, “Analysis of the acoustics and playing strategies of turntable scratching,” Acta Acustica united with Acustica, vol. 97, pp. 303–314, 2011.

[21] K. F. Hansen and R. Bresin, “The Skipproof virtual turntable for high-level control of scratching,” Computer Music Journal, vol. 34, no. 2, pp. 39–50, 2010.

[22] G. Castellano, L. Kessous, and G. Caridakis, “Emotion recognition through multiple modalities: Face, body gesture, speech,” in Affect and Emotion in Human-Computer Interaction, ser. Lecture Notes in Computer Science, C. Peter and R. Beale, Eds. Springer Berlin/Heidelberg, 2008, vol. 4868, pp. 92–103.

[23] E. R. Miranda and M. M. Wanderley, New digital musical instruments: Control and interaction beyond the keyboard. A-R Editions, 2006.

[24] S. Jordà, “Instruments and players: Some thoughts on digital lutherie,” Journal of New Music Research, vol. 33, no. 3, pp. 321–341, 2004.