Download by: [Wesleyan University] Date: 04 January 2016, At: 13:00
Contemporary Music Review
ISSN: 0749-4467 (Print) 1477-2256 (Online) Journal homepage: http://www.tandfonline.com/loi/gcmr20
Composing Sound Structures with Rules
Paul Berg
To cite this article: Paul Berg (2009) Composing Sound Structures with Rules, Contemporary Music Review, 28:1, 75–87, DOI: 10.1080/07494460802664049
To link to this article: http://dx.doi.org/10.1080/07494460802664049
Published online: 25 Feb 2009.
Composing Sound Structures with Rules
Paul Berg
This article discusses the composition of musical structures, in particular sound structures, using generative systems based on work done at the Institute of Sonology in the 1970s. A view of the aesthetic background of G. M. Koenig is presented and used as a framework for the description of other software projects which often focused on micro-structure.

Keywords: Composing Sound; Algorithmic Composition; Sonology; Non-standard Synthesis
G. M. Koenig (1978) has described three avenues to search for rules for composing: the analysis of existing music; introspection of one's own experiences; and the making and testing of limited models.

Koenig's method for his programs for instrumental composition is the third one. It assumes that a problem can be reduced to a simplified representation of a musical reality.

Repeated application of a model under changed circumstances makes its limits clearer: accumulation and correlation of the results cause the model to reveal itself and at the same time the extent to which it coincides with a part of musical reality. In this it may even transcend the reality experienced up to that point by exposing context which had escaped analytical scrutiny. The analytical task—given the music, find the rules—is reversed: given the rules, find the music. (Koenig, 1978, p. 10)
In 1971, the Institute of Sonology in Utrecht, the Netherlands, bought a PDP-15 mini-computer without mass storage. It had 12K words of 18-bit storage and 8 channels of 12-bit digital-to-analog conversion and worked in a single-user mode. Most computer music work at that time involved mainframe computers calculating offline. Some studios gradually gained access to PDP-11 mini-computers and
ultimately used Barry Vercoe's Music 11. Microcomputers had yet to become available.

It is obvious from the choice of computer that the Institute of Sonology would not be using Music V–style programs involving extensive event-list-based specification with the associated storage requirements. Instead it would be marching to a different computer. Koenig's statement 'given the rules, find the music' can also be used to describe how various activities followed the computer purchase.
The editors of this issue have suggested that 'generative music has a history of enquiry and practice worth appreciating, understanding and learning from'. In that vein, this article will discuss a few software projects related to music generation that occurred at the Institute of Sonology during the 1970s.

The 'Utrecht aesthetic' has been described as modernist (Thomson, 2004); it may have been. It could, however, be seen as a reflection of the fragmented, tolerant nature of Dutch society in the 1970s, where some were freely experimenting with new social structures, some were experimenting with drugs, and others were experimenting with trying to make music with a computer that had no memory. Given the computer, find the music.
Despite the diverse backgrounds of the composers doing program development, several notable features of this work can be summarized:

1. Sound was generated in real time. Barry Truax's program POD6 was the first real-time FM implementation in 1973 (Chowning, 2007).
2. Most programs were interactive in the sense of a dialog between the user and the program. This was a clear distinction from the batch-processing approach used on larger systems. Alternatively, there was no dialog but, instead, a brief compile time followed by some form of real-time sound production.
3. This interaction underscored the importance of listening during the process of generating music using rules.
4. Although there were digital signal processing developments such as VOSIM (Kaegi & Tempelaars, 1978) during this period, the primary emphasis was on the musical activities of programming sound and higher-level control structures.
5. Attention was focused on the rule-based generation of material, with form arising secondarily during that process.

This article will mention the instrumental composition programs of G. M. Koenig (Project 1 and Project 2), the composition and synthesis programs of Barry Truax (POD5 and POD6), the synthesis programs (ASP) and language (PILE) of Paul Berg, the sound composition program SSP of Koenig, and the library of synthesis experiments of Robert Rowe (SPA). Activities outside the scope of software development concerning generative music, such as research related to the previously mentioned VOSIM model, the activities of Otto Laske, as well as the compositional activities of various sonologists, are omitted from this discussion.
Background
Koenig moved to Utrecht from Cologne, where he had worked in the WDR studio on monumental electronic pieces with a programmatic aspect, such as Essay (1957/58). A comment by a friend of his that Essay could have been composed with the help of a computer supported Koenig's interest in exploring the possibility.

His experience in Cologne with serial composers influenced both his instrumental and his electronic music. This influence continued in the projects he later undertook in Utrecht. It provided a background (though not necessarily a foreground) for many sonological activities.

In Koenig's view, serialism is not just about the 'series' but also about quantization and differentiation. 'Whilst many composers soon tired of standardization, differentiation is still possible without strictest serial treatment. Since then serial ideas have been based less on the series and more on the concept of difference' (Koenig, 1971, p. 71).

He suggested that serialism is 'more of a world-view or an aesthetic doctrine than instructions for the right way to compose' (Koenig, 1992, p. 44). He sees serial and electronic music as related because they both require a degree of formalization and mechanization, and this entails planning. 'By planning I do not mean contriving systems which operate more or less automatically, but the translation of psychological perception values into technical work processes' (Koenig, 1992, p. 45). For Koenig, planning can involve rules. 'Composers who were unwilling to invent a personal serial system, perhaps even a new one for each new composition, could not really compose serially' (Koenig, 1992, p. 44).

According to Koenig, form emerges during planning and realization. 'I experience form as a process as soon as I start working in the studio or at my desk; every bar on paper, every sound on tape changes its formal function every time I look at it, like the light in a landscape under scudding clouds' (Koenig, 1987, p. 171). Since form appears during realization, it 'also emerges when musicians improvise, form always being both desired and born, desired by the composer, born during the performance' (Koenig, 1987, p. 172).

Finally, there is the relation of form to material. Koenig surprisingly mentions that form was not really a topic in Cologne because everyone was concerned with material. Form was taken for granted, because otherwise how could you recognize art if it had no form? Concern with material was paramount. 'Artistic endeavor seemed impossible without a precise definition of the material' (Koenig, 1987, p. 166). The concept of material was not limited to sounds but also included compositional methods and rules.

The extreme clarity of Koenig's treatment of material is a characteristic of both his electronic and his instrumental music. Appreciating this concern is a key to understanding his programs for instrumental composition.

The last insight into Koenig's view of serialism that is relevant to his composing programs is his acknowledgement of a relationship between serialism and aleatoric
composition. Based on his listening experience, he suggests that the rate at which intervals occur is more important than their order. This leads to the conclusion that 'it appears that the trouble taken by the composer with series and their permutations has been in vain; in the end it is the statistic distribution that determines the features of a composition. Seen from this viewpoint, serial technique appears as a special case of aleatoric compositional technique' (Koenig, 1970a, p. 33).

Since the programming of constrained random numbers is a manageable task, the ground was laid for Koenig to develop his programs for instrumental composition, Project 1 and Project 2.
Project 1
Project 1 (PR1) (Koenig, 1970a) is called a program for musical composition, which in this case is considered music for instruments with known timbral characteristics. Although it was originally developed in the period 1964–1966, which is outside the time frame of our discussion, it was later re-implemented on the Sonology computer in the 1970s.

PR1 is a closed system based on a rather simple model of compositional activity. User input was originally limited to specifying a seed for the random number generator, 28 entry delays (the time between event onsets), six metronome tempi, and the number of time points per section. The program fixed the number of sections to be seven. The parameters instrument, rhythm, pitch, register, dynamics, and sequence were calculated independently.

The program exploited the opposition between aperiodic and periodic. Given the requirements of differentiation and quantization, seven steps were assigned in the range between aperiodic (1) and periodic (7). Aperiodic processes involved random choices with a repetition check (to avoid repeating values). Periodic processes involved the repetition of values. Step 4 was one of balancing aperiodic with periodic. Koenig called this classical compromise 'proposition' and 'correction'. The branching table containing the random values of the degree of aperiodicity (1–7) per parameter could be considered a reflection of the structure of the entire output.

Rhythm was left to the user and the program's random generators. Pitch, however, was 'protected from his assaults' (Koenig, 1992, p. 47). Pitch was a variable 12-tone series calculated by the program.
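The aperiodic–periodic ladder described above can be illustrated with a small sketch. This is a speculative model, not PR1's actual algorithm: the way intermediate steps blend the two extremes is an assumption, as are all names and the value ranges.

```python
import random

def select(values, step, history, rng=random):
    """Illustrative model of PR1's aperiodicity degrees (1-7).

    Step 1 (aperiodic): random choice with a repetition check,
    i.e. no immediate repeats. Step 7 (periodic): repeat the
    previous value. Intermediate steps weight the chance of
    repetition (an assumption about how the balance might work)."""
    if history and step > 1 and rng.random() < (step - 1) / 6:
        return history[-1]                     # periodic tendency: repeat
    choice = rng.choice(values)
    while history and choice == history[-1]:   # repetition check
        choice = rng.choice(values)
    return choice

# A short run at step 4, the 'classical compromise' between the extremes:
history = []
for _ in range(8):
    history.append(select(list(range(12)), step=4, history=history))
```

At step 1 the repetition check guarantees no two consecutive values are equal; at step 7 the previous value is always repeated.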
The restrictions on the input data, the possible intervallic content, and the construction of the branching table were relaxed in subsequent versions of the program.

The program produced a list of events with values for each parameter. The events were sequential, and relationships within a section could be seen as the result of the different degrees of aperiodicity, while parameters could be seen as separate layers. 'Music progresses in layers, so to speak, which can be considered as single parameters: rhythm, harmony, melody, dynamics, scoring' (Koenig, 1991, p. 176). This kind of parameter recognition only works with known sound sources.
Particularly striking in PR1 is the quantization of structural aspects (the branching table) and the consideration of the resulting score table as an embodiment of musical meaning awaiting realization as a composition. Interpretation of the score table and its compositional strategy is a key issue for Koenig.
Project 2
Project 2 (PR2) (Koenig, 1970b) was originally implemented in the period 1966–1968. It too was re-implemented on the Sonology computer (which during the 1970s had increased its memory size and cloned itself, so that there were two essentially identical PDP-15 systems).

Whereas PR1 provided default input values and calculated structural relations for the parameters, PR2 can be considered an open system which required the user to construct a structural formula and specify a great deal of input data (about 60 questions needed to be answered). Input was not only extensive, it was also quite complicated.

The program allowed the use of more parameters, including durations and performance indications. The user could insert rests, choose among harmonic concepts, produce variants of a structural description, and create vertical layers.

The program uses the List–Table–Ensemble principle. For each parameter, a list contains the available values. These values are combined into groups in the table. These are both activities of the user. A group or a combination of groups can be chosen for the ensemble using a selection principle. From this ensemble, values are chosen for the score using a selection principle.

The available selection principles are sequence (a series of values) and a number of constrained random procedures: series (random choice with a repetition check—a generalization of the design of a series); alea (random with no repetition check); ratio (weighted random choice); group (a random value repeated a random number of times); and tendency (random between boundaries that change over time—this is an extension of Stockhausen's field concept). These selection principles are an important legacy of PR2, since Koenig gave a specific musical interpretation to the use of constrained random procedures.

Tendency was particularly influential since it provided a way to use random numbers to generate transitions. A tendency mask is specified by the number of desired divisions of the total area, the initial lower and upper boundaries, and the final boundaries. There is a linear interpolation between the boundaries. At each step in the process, a random choice is made between the current boundary values.

The use of selection principles to pick material groups and to choose values from this material aided in the making of structural variants. In Koenig's piano piece Übung (1969/70), the score provides three variants for each of the twelve sections of the piece. The performer is free to choose which variant to use for each section.
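The tendency principle is simple enough to sketch directly. The version below handles a single division of the total area with linear interpolation between initial and final boundaries; PR2's masks could have several divisions, each with its own boundary pairs, so treat this as a minimal illustration rather than Koenig's implementation.

```python
import random

def tendency(n, lo0, hi0, lo1, hi1, rng=random):
    """Koenig's 'tendency' selection principle (single division):
    at each of n steps, linearly interpolate the lower and upper
    boundaries between their initial (lo0, hi0) and final (lo1, hi1)
    values, then draw a uniform random value between the current
    boundaries."""
    out = []
    for i in range(n):
        t = i / (n - 1) if n > 1 else 0.0
        lo = lo0 + t * (lo1 - lo0)   # current lower boundary
        hi = hi0 + t * (hi1 - hi0)   # current upper boundary
        out.append(rng.uniform(lo, hi))
    return out

# A mask narrowing from the full range [0, 100] down to [40, 60]:
values = tendency(50, 0, 100, 40, 60)
```

A mask that narrows, as here, produces a transition from scattered to focused values; reversing the boundaries produces the opposite motion.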
POD
Barry Truax developed the computer composition/sound synthesis programs POD5 and POD6 at the Institute of Sonology in the period 1971–1973 (Truax, 1973). POD5 worked with fixed waveforms and POD6 with frequency modulation. His aims included:

1. Doing as much real-time synthesis as possible with the limited computer facilities.
2. Allowing the composer to direct the development activity from the teletype.
3. Operating at a higher, syntactical level than the separate specification of events.
4. Allowing the composer to build a 'more personal version of the materials available to him, particularly on the level of sound data' (Truax, 1973, p. i).
5. Stimulating the development of sound comprehending programs.

Sound objects were definitions with a few parameters, relevant to the means of sound production. For POD5, the object definition included a waveform, envelope values, and an amplitude modulation frequency. For POD6, up to thirty frequency modulation objects could be defined, each including envelope data, carrier–modulator ratios, and a maximum modulation index.

These objects were assigned to the frequency-time points calculated by the program using some of the selection principles Koenig described for PR2: sequence, alea, ratio, and tendency.

The most important feature was the higher-level control that produced the frequency-time points associated with the sound objects. Central to this was the specification of event densities within prescribed frequency-time areas. To do this, ideas from Koenig and Xenakis were combined.

The concept of using several tendency masks to describe a frequency-time area was adapted from Koenig to the needs of Truax's system. From Xenakis came the use of a Poisson distribution (Xenakis, 1992) to choose values within the boundaries of a tendency mask. Because Poisson's theorem was not intended for use in areas with changing boundaries, a certain tension between the user specification and the computed result appeared.

The output of the program was a monophonic composition section. The statistical calculations either determined the starting time of an event or the time after an event had finished sounding. The first case was more reminiscent of a Xenakis-like structure.
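The combination of density control and mask boundaries might be sketched as follows. This is only a sketch in the spirit of POD, not Truax's algorithm: it models the density specification as a Poisson process (exponential inter-onset times), places frequencies uniformly within linearly interpolated boundaries, and all names and numbers are assumptions.

```python
import random

def poisson_events(duration, density, f_lo, f_hi, rng=random):
    """POD-style frequency-time scattering, sketched: a Poisson process
    (exponential inter-onset times at the given mean density, in
    events/second) places event onsets in time; each event receives a
    frequency drawn uniformly between mask boundaries that interpolate
    linearly from (f_lo[0], f_hi[0]) to (f_lo[1], f_hi[1])."""
    events = []
    t = rng.expovariate(density)
    while t < duration:
        frac = t / duration
        lo = f_lo[0] + frac * (f_lo[1] - f_lo[0])   # lower boundary now
        hi = f_hi[0] + frac * (f_hi[1] - f_hi[0])   # upper boundary now
        events.append((t, rng.uniform(lo, hi)))
        t += rng.expovariate(density)               # next onset
    return events

# Ten seconds at roughly 5 events/second, with the frequency mask
# widening from [200, 400] Hz to [100, 800] Hz:
evts = poisson_events(10.0, 5.0, (200, 100), (400, 800), rng=random.Random(1))
```

The tension Truax noted is visible even here: the density is specified for the whole area, while the usable frequency region changes under the process as it runs.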
Truax's concerns were multi-faceted but included an explicit cognitive aspect.

If one is to become involved with complex sound production and at the same time be concerned with its result being understood as musical, it does not seem
inappropriate to use statistical distributions of events or parameters since it can be supposed that most observers are capable of sonological strategies arriving at the recognition of such structures. (Truax, 1973, pp. v–vi)

The issue of compositional and user strategies was important in the POD programs. The user was a catalyst for the compositional decisions made by the program. Given the rules, guide the program. The distinction of sound objects and their control was, however, unlike some later work done at the Institute.
ASP
Given my experience using the POD programs and my pre-sonological experience with Music V–style programs, I decided in 1974 to try to write some synthesis programs which allowed me more and less control. More control because I would write the program, but less control because the only possible user interaction would be to set an initial condition, then start, and later turn off the program. This collection of programs was called ASP (Automated Sound Programs) (Berg, 1975; see also Berg, 1979).

I had a background as an instrumentalist and was familiar with the concept of examining the idiom of a particular instrument. For a PDP-15, I decided an idiomatic usage would be producing numbers quickly and sending them to a digital-to-analog converter (DAC). To produce sound, a bit more work was required, but the idea was clear: try to develop series of computer instructions that would produce sound but would not be based on unit generators that I knew from Music V, such as oscillators and filters, or based on the kind of sound objects and control I knew from the POD programs.

A simple starting point was to produce random numbers and send them to a DAC. Fortunately, the Institute had a hardware random-number generator that could produce a random number in one machine cycle (of about 4 microseconds). It also had a programmable clock with which each sample could be delayed a variable amount.

Writing in Macro-15, the assembly language of the PDP-15, 22 programs were made in 1974–1975 and joined in a library exploring the idea of producing sound in real time without using a known acoustic model. These were rather idiosyncratic programs based on counting, comparing, arithmetic and logical operations, choosing among possible loops, filling buffers and changing single values, sending part of a number to a DAC and the other part to a clock, and varying the sample rate at arbitrary moments. These numbers were sent to a 12-bit DAC which was often manually disconnected from its smoothing filter, since smooth was not part of the concept.

Because the programs could run forever, mechanisms had to be programmed to provide change. A very useful technique was to derive values by masking a random number using a logical AND. This operation could produce a discontinuous range of numbers, depending on the bits chosen for the mask. This mask could be changed at
certain moments, usually chosen by counting program cycles. The value for that mask could be determined by yet another mask. This might give the impression of some kind of long-term developmental control. The double-layered change mechanism is reminiscent of the second-order random walks Xenakis later used in his Dynamic Stochastic Synthesis (Luque, 2006).
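The double-layered masking technique can be sketched in a few lines. The bit widths, change interval, and the particular "mask mask" below are assumptions for illustration; only the AND-masking scheme itself comes from the text.

```python
import random

RNG = random.Random(7)

def hardware_random():
    """Stand-in for the PDP-15's hardware random-number generator:
    one 18-bit random word per call."""
    return RNG.getrandbits(18)

def asp_masking(n_samples, change_every=512):
    """Sketch of the ASP change mechanism: each sample is a random word
    ANDed with a mask, giving a discontinuous range of values determined
    by the mask's bits. Every `change_every` program cycles the mask
    itself is re-derived by ANDing a random word with a second,
    higher-level mask (the double-layered change mechanism)."""
    mask_mask = 0o7777                              # bits the sample mask may use (assumed)
    mask = hardware_random() & mask_mask
    samples = []
    for i in range(n_samples):
        if i and i % change_every == 0:
            mask = hardware_random() & mask_mask    # second-order change
        samples.append(hardware_random() & mask)    # first-order masking
    return samples

buf = asp_masking(2048)
```

Each mask change shifts the set of reachable sample values at once, which is what lends the output its impression of long-term development.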
This fluid control at the middle level defies Roads' classification of time levels. Between the macro and micro level he places the meso level, with 'groupings of sound objects into hierarchies of phrase structures of various sizes, measured in minutes or seconds', and the sound object, 'a basic unit of musical structure, generalizing the traditional concept of note to include complex and mutating sound events on a time scale ranging from a fraction of a second to several seconds' (Roads, 2001, p. 3). Although ASP programs tended to change at different rates on different levels, there was no concept of a sound object and therefore no grouping of sound objects into phrase structures. Sample values were programmed which just happened to produce sound. Sample values did not necessarily have a fixed time interval between them. Repetitions of the same value produced silence.

These programs were quite compact. The shortest was 70 lines of assembler code and the longest was 340 lines. The conciseness of the code was possibly because Digital Signal Processing (DSP) concerns such as producing a desired frequency value or a specific envelope were ignored.

To give an impression of an ASP program, here is a description of ASP18 (the shortest of the programs):
shortest of the programs):
One number of clock pulses is chosen in the program and maintained. An 11-bit number N is chosen and an 8-bit factor. The factor is either added or subtracted to N, the result becoming the new value of N. Twelve bits of this N are sent to the converter. The program either repeats the addition subtraction process with the new value of N and the same factor, or a new factor is calculated, or a new factor mask and factor are calculated. (Berg, 1975, p. 7)
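A speculative reconstruction of that description might look like this. Only the stated bit widths (11-bit N, 8-bit factor, 12 bits to the converter) come from the quote; the branch probabilities, the add/subtract choice, and every name are assumptions, and the fixed clock-pulse count is omitted.

```python
import random

def asp18(n_samples, rng=random):
    """Speculative sketch of ASP18: an 11-bit number N is repeatedly
    incremented or decremented by an 8-bit factor; twelve bits of the
    result go to the converter. Occasionally a new factor, or a new
    factor mask and factor, is chosen (frequencies are assumptions)."""
    n = rng.getrandbits(11)
    factor_mask = rng.getrandbits(8)
    factor = rng.getrandbits(8) & factor_mask
    out = []
    for _ in range(n_samples):
        n = (n + factor) if rng.random() < 0.5 else (n - factor)
        out.append(n & 0xFFF)                  # twelve bits to the DAC
        r = rng.random()
        if r < 0.05:                           # new factor mask and factor
            factor_mask = rng.getrandbits(8)
            factor = rng.getrandbits(8) & factor_mask
        elif r < 0.15:                         # new factor, same mask
            factor = rng.getrandbits(8) & factor_mask
    return out

samples = asp18(1000, rng=random.Random(3))
```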
Since it may be hard to imagine what such a program would sound like (modulo the 18-bit word size), the programs (the material) were necessarily developed in a feedback process of editing, compiling, listening. The results were too unpredictable to work any other way. Given the rules, hope for sound.

The output could be used unedited and considered a composition, or moments of output could be gathered and mixed into a composition. Alternatively, the output could just be considered a sound generator, in need of further manipulation.

The programs were gathered into a library for consultation, and a monitor program was written to allow the execution of the programs in a (random) sequence. The next ASP program started when a computer switch was hit. Given the programs, enjoy the result.

Although I developed certain insights into what might happen with the use of various instructions, it was very hard to explain the process to other people. The description of ASP18 above demonstrates the problem. I also noticed that some
groups of instructions appeared often. These were two of the reasons that led to my abstracting ideas from these programs and including them in a language design called PILE.
PILE
PILE (Berg, 1979) was a higher-level language than Macro-15 assembler, though not full-featured. It was designed and implemented in 1976–1977. It did not support any common unit-generators but continued the ASP tradition of manipulating numerical systems to produce sound. Curtis Roads (1996) suggested the term 'instruction synthesis' to describe this approach. Sound was described purely in terms of computer instructions without reference to other models. As an alternative to the term instruction synthesis, the term 'non-standard synthesis' (Holtzman, 1978) is also often applied to ASP and PILE.

In PILE, more concise syntactic versions of the operations found in ASP were available, in addition to explicit instructions for manipulating lists and controlling program flow. ASP provided a starting point for PILE, but the use of PILE suggested its own developments. Four channels of DAC output were supported and several layers of sound could be produced simultaneously in real time.

Unlike an ASP program, a program written in PILE could stop. A possible scenario for developing a composition was to develop a section by writing code, listening, and refining. The program was deterministic, such that given the same initial values, the same result would always be reproduced. Several sections could be developed independently and their code concatenated to create larger structures. Different sections could be variants of a previous section but with different initial conditions. The final result (a complete composition) would be produced in real time.

PILE and ASP reflect the same mindset. But since PILE was a language, it provided a means to explore additional aspects in a more convenient way, and it provided opportunities for others to develop compositions.

A positive aspect of this approach to non-standard synthesis was a refreshingly noisy sound output generated by syntactic relationships. The material produced by exploring with PILE was both the instructions and the sound output. The composition was the material from which form emerged. A more challenging aspect was the heuristic nature of the working process.
SSP
Koenig designed his sound synthesis program SSP in 1972 (Koenig, 1978), though he had begun talking about and planning a synthesis program in the mid-1960s. A working implementation was available in 1977 (Berg et al., 1980; see also Banks et al., 1979). SSP is another example of a non-standard synthesis program.
For SSP, amplitude values (sample values) and time values were the raw material. In this discussion of SSP, the term amplitude values will be used, since that is the term Koenig used. The possible inappropriateness of that usage from a DSP perspective will be ignored. Similar to the List–Table–Ensemble principle of PR2, amplitude and time values were specified and selected. The 12-bit DAC allowed 4096 amplitude values. Time was expressed in microseconds, with a minimum value of 38 microseconds. Specified values could also include tables created in another program and sample values from an analog-to-digital converter. Amplitude and time values were then joined into (waveform) segments which started with zero (2048). Segments could be permuted into various sequences. The selection principles which were available in PR2 for composition on the macro-level (tendency, alea, series, etc.) were available for composing the micro-structure of a sound. The selection principles could be used, for example, in picking amplitude and time values for a segment or for determining the order of segments in a permutation.

The program was conversational. The user's task could be described as: list the source material (amplitude and time values); select all or part of that material; construct segments with material from the selection; order the segments; listen to the chosen order; and return to any of the preceding steps for further refinement. Given the rules, pick, choose, and permute.

The program produced one layer of sound output. To listen to the permutation, segments were represented in core memory as breakpoints. A permutation was played in real time, producing a continuous, composed waveform by interpolating between these breakpoints. The number and length of the segments was limited by available memory, but a varied sound output of a few minutes was possible.

The ordering of segments using tendency masks was particularly successful.
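As an illustration, ordering segments with a tendency mask over segment indices and playing the result by breakpoint interpolation might be sketched as follows. The data layout, units, and all numeric choices are assumptions, not SSP's actual representation.

```python
import random

def tendency_order(n_segments, n_picks, lo0, hi0, lo1, hi1, rng=random):
    """Order segments with a tendency mask over segment indices:
    boundaries interpolate linearly from (lo0, hi0) to (lo1, hi1);
    each pick is a random index between the current boundaries."""
    order = []
    for i in range(n_picks):
        t = i / (n_picks - 1) if n_picks > 1 else 0.0
        lo = int(lo0 + t * (lo1 - lo0))
        hi = int(hi0 + t * (hi1 - hi0))
        order.append(rng.randint(min(lo, hi), min(max(lo, hi), n_segments - 1)))
    return order

def render(segments, order):
    """Play a permutation: linear interpolation between the
    (amplitude, time) breakpoints of each chosen segment. Amplitudes
    are 12-bit DAC values with zero at 2048; times here are in
    abstract sample units (an assumption)."""
    out = []
    for idx in order:
        seg = segments[idx]
        for (a0, t0), (a1, t1) in zip(seg, seg[1:]):
            dur = max(int(t1 - t0), 1)
            for k in range(dur):
                out.append(a0 + (a1 - a0) * k / dur)   # linear interpolation
    return out

# Eight short segments, each starting at zero (2048); the index mask
# widens from a single segment to the whole pool:
pool = [[(2048, 0), (2048 + 200 * s, 6), (2048 - 150 * s, 16)] for s in range(8)]
order = tendency_order(8, 20, 0, 0, 0, 7, rng=random.Random(2))
wave = render(pool, order)
```

A wider index mask admits more distinct segments into the permutation; a narrow one keeps repeating a few.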
A wide selection of segments would result in a noisy sound structure. Narrow masks led to unstable sounds within a confined frequency region. Masks moving from narrow to wide could produce dramatic transitions between these two extremes.

The selection principle group was also very useful for permutations. Group would allow a random choice of segment to repeat some random number of times. Repeating a segment in effect treats it as a waveform with a certain perceptible duration.

In these successful cases, permutation was an effective generative mechanism. Users willing to accept this as a feature could compose interesting sound structures by creating strings of waveform segments. These monophonic outputs were later mixed in another studio into compositions.

A problem arose for those concerned with Koenig's original concept. The List–Select–Segment hierarchy, if considered as an analogy to PR2's List–Table–Ensemble principle, would require parameters to have a recognizable identity. For singular amplitude and time values, this is not possible. Segments typically had short
individual durations, which meant that permutations could be heard not as an arrangement of known events in a variant (as was the requirement in PR2), but as something new, generated with these segments. This led Koenig to bemoan ten years later that:

this shifted the form question into a twilight zone which lack of facilities and time prevented me from exploring properly. It would have taken bigger computers . . . to exploit the potential of such a method—especially in terms of musical language. Scrutiny of the method on which SSP is based reveals a chasm between the relatively simple strategy for describing the sound signal and the aural impression it creates. (Koenig, 1987, p. 173)
Spa
A last example of the 'guerilla sound synthesis' (Rowe, 2008) activities at Sonology is Robert Rowe's Spa project from 1980 (Rowe, 1980a, b). It was a collection of assembly-language programs intended for personal use which operated in real time. The final program, Spa14, produced the composition Spattering . . . a Shower without additional mixing or editing. Notable features included the fact that it did not use random numbers and the conscious attempt to utilize some computer science concepts involving structured programming. It did maintain the crunchy surface structure common to many sonological activities of that decade. The approach to rhythm, in retrospect, has similarities with some outputs of his later Cypher system (Rowe, 1993).
The most general description of Spa14 is that three lists are sampled for various purposes including envelope shaping, waveform production and increment selection. A controlling routine organizes several subroutines, some of which are executed to update global variables, some of which may call other subroutines. The controlling routine directs the subroutines to the appropriate memory areas, initializes output channels, and alters computational variants through the search of an event list. This event list is checked periodically and changes either the operation of some subroutines, or the logical structure by which they are called. At the end of the event list the program exits. (Rowe, 1980a, p. 3)
The tool was programmed during a few months and then the composition was made with it in a relatively short time. 'Spa's real-time sound production allowed a very immediate contact with the work in progress, and encouraged a kind of composition one is tempted to call improvisation' (Rowe, 1980b, p. 2). Given the rules, improvise.
Conclusion
Notable in the work presented here is an interest in the rule-based generation of musical structures, in particular sound structures. The application of and response to
the rules show a certain variety of compositional and musical approaches. During the decade, a shift from the macro-level organization of parameter distributions in time (PR1 and PR2) to the meso-level organization of POD to the organization of the macro level by explicitly dealing with the micro level (ASP, PILE, SSP, SPA) can be discerned.
Aspects of the working method, such as the refinement of real-time generative processes by listening and programming, are now commonplace. Interest in composing with microsound (including noises and glitches) is a presence in certain scenes. The awareness of the compositional significance of programming material, in particular as part of the process which creates musical form, is deserving of continued attention.
References
Banks, J. D., Berg, P., Rowe, R. & Theriault, D. (1979). SSP – a bi-parametric approach to sound synthesis. Sonological Reports, 5. Utrecht: Institute of Sonology.
Berg, P. (1975). ASP report. Utrecht: Institute of Sonology.
Berg, P. (1979). PILE – a language for sound synthesis. Computer Music Journal, 3(1), 30–41. Revised and updated version in C. Roads & J. Strawn (Eds.), Foundations of computer music (pp. 160–190). Cambridge, MA: MIT Press.
Berg, P., Rowe, R. & Theriault, D. (1980). SSP and sound description. Computer Music Journal, 4(1), 25–35.
Chowning, J. (2007). Fifty years of computer music: Ideas of the past speak to a future – immersed in rich detail. In Proceedings of the International Computer Music Conference 2007 (pp. 169–175). San Francisco: International Computer Music Association.
Holtzman, S. R. (1978). A description of an automatic digital sound synthesis instrument. DAI Research Report No. 59. Edinburgh: Department of Artificial Intelligence.
Kaegi, W. & Tempelaars, S. (1978). VOSIM – a new sound synthesis system. Journal of the Audio Engineering Society, 26(6), 418–426.
Koenig, G. M. (1970a). Project 1. Electronic Music Reports, 2. Utrecht: Institute of Sonology.
Koenig, G. M. (1970b). Project 2 – a programme for musical composition. Electronic Music Reports, 3. Utrecht: Institute of Sonology.
Koenig, G. M. (1971). Summary observations on compositional theory. Utrecht: Institute of Sonology.
Koenig, G. M. (1978). Composition processes. Paper presented at the UNESCO Workshop on Computer Music, Aarhus, Denmark. Also published in M. Battier & B. Truax (Eds.), Computer music. Canadian Commission for UNESCO.
Koenig, G. M. (1987). Genesis of form in technically conditioned environments. Interface, 16(3), 165–175.
Koenig, G. M. (1991). Working with Project 1: My experiences with computer composition. Interface, 20(3–4), 175–180.
Koenig, G. M. (1992). Segmente – a structured landscape. Interface, 21(1), 43–51.
Luque, S. (2006). Stochastic synthesis: Origins and extensions. Master’s thesis, Institute of Sonology, The Hague.
Roads, C. (1996). The computer music tutorial. Cambridge, MA: MIT Press.
Roads, C. (2001). Microsound. Cambridge, MA: MIT Press.
Rowe, R. (1980a). The Spa story. Utrecht: Institute of Sonology.
Rowe, R. (1980b). The Spa project: An overview. Utrecht: Institute of Sonology.
Rowe, R. (1993). Interactive music systems. Cambridge, MA: MIT Press.
Rowe, R. (2008). Presentation at the conference Live Electronics and the Traditional Instrument, March 29, Amsterdam, the Netherlands.
Thomson, P. (2004). Atoms and errors: Towards a history and aesthetics of microsound. Organised Sound, 9(2), 207–218.
Truax, B. (1973). The computer composition – sound synthesis programs POD4, POD5, POD6. Sonological Reports, 2. Utrecht: Institute of Sonology.
Xenakis, I. (1992). Formalized music (Rev. ed.). New York: Pendragon Press.