


YQX Plays Chopin

Gerhard Widmer, Sebastian Flossmann, and Maarten Grachten

This article is about AI research in the context of a complex artistic behavior: expressive music performance. A computer program is presented that learns to play piano with “expression” and that even won an international computer piano performance contest. A superficial analysis of an expressive performance generated by the system seems to suggest creative musical abilities. After a critical discussion of the processes underlying this behavior, we abandon the question of whether the system is really creative and turn to the true motivation that drives this research: to use AI methods to investigate and better understand music performance as a human creative behavior. A number of recent and current results from our research are briefly presented that indicate that machines can give us interesting insights into such a complex creative behavior, even if they may not be creative themselves.

Copyright © 2009, Association for the Advancement of Artificial Intelligence. All rights reserved. ISSN 0738-4602

Consider the following challenge: A computer program is to play two piano pieces that it has never seen before in an “expressive” way (that is, by shaping tempo, timing, dynamics, and articulation in such a way that the performances sound “musical” or “human”). The two pieces were composed specifically for the occasion by Professor N.N. One is in the style of nocturnes by Frédéric Chopin, the other is in a style resembling Mozart piano pieces. The scores of the compositions will contain notes and some expression marks (f, p, crescendo, andante, slurs, and so on) but no indications of phrase structure, chords, and so on. The expressive interpretations generated by the program must be saved in the form of MIDI files, which will then be played back on a computer-controlled grand piano. Participants are given 60 minutes to set up their computer, read in the scores of the new compositions (in MusicXML format), and run their performance computation algorithms. They are not permitted to hand-edit performances during the performance-rendering process nor even to listen to the resulting MIDI files before they are publicly performed on the computer grand piano. The computer performances will be graded by a human audience and by the composer of the test pieces, taking into account the degree of “naturalness” and expressiveness. An award will be presented to the system that rendered the best performance.

Now watch (and listen to) the two videos provided at www.cp.jku.at/projects/yqx. The videos were recorded on the occasion of the Seventh Performance Rendering Contest (Rencon 2008),¹ which was held as a special event at the 10th International Conference on Music Perception and Cognition (ICMPC 2008) in Sapporo, Japan, in August 2008. Rencon² is a series of international scientific competitions where computer systems capable of generating “expressive” renditions of music pieces are compared and rated by human listeners.

The previous task description is in fact a paraphrase of the task description that was given to the participants at the Rencon 2008 contest. A graphical summary of the instructions (provided by the contest organizers) can be seen in figure 1. What the videos at the website show is our computer program YQX—which was developed by Sebastian Flossmann and Maarten Grachten—performing the two previously mentioned test pieces before the contest audience. Three prizes were to be awarded:

“The Rencon Award will be given to the winner by the votes of ICMPC participants. The Rencon Technical Award will be given to the entrant whose system is judged most interesting from a technical point of view. The Rencon Murao Award will be given to the entrant that affects Prof. Murao (set piece composer) most.”³

    YQX won all three of them.

This article is about how one approaches such a task—and why one approaches it in the first place, why winning Rencon Awards is not the main goal of this research, and how AI methods can help us learn more about a creative human behavior.

    The Art of Expressive Music Performance

Expressive music performance is the art of shaping a musical piece by continuously varying important parameters like tempo, dynamics, and so on. Human musicians, especially in classical music, do not play a piece of music mechanically, with constant tempo or loudness, exactly as written in the printed music score.


[Figure 1 summarizes the contest workflow: score → preprocessing and annotation (including structural information) → performance rendering → postprocessing → standard MIDI file. Participants are permitted to add some additional information to the given music score, but they are not allowed to edit MIDI-level note data, to elaborate the rendering based on repeated listening to the music, or to output sound from their own PC (they may only watch the soundless key movement of the automatic piano for a system check).]

    Figure 1. Graphical Summary of Instructions Given to Rencon 2008 Participants.

    Based on a figure from www.renconmusic.org. Reproduced with permission.

Rather, they speed up at some places, slow down at others, stress certain notes or passages by various means, and so on. The most important parameter dimensions available to a performer (a pianist, in particular) are timing and continuous tempo changes, dynamics (loudness variations), and articulation (the way successive notes are connected). Most of this is not specified in the written score, but at the same time it is absolutely essential for the music to be effective and engaging. The expressive nuances added by an artist are what makes a piece of music come alive, what distinguishes different performers, and what makes some performers famous.

Expressive variation is more than just a “deviation” from or a “distortion” of the original (notated) piece of music. In fact, the opposite is the case: the notated music score is but a small part of the actual music. Not every intended nuance can be captured in a limited formalism such as common music notation, and the composers were and are well aware of this. The performing artist is an indispensable part of the system, and expressive music performance plays a central role in our musical culture. That is what makes it a central object of study in contemporary musicology (see Gabrielsson [2003] and Widmer and Goebl [2004] for comprehensive overviews of pertinent research, from different angles).

Clearly, expressive music performance is a highly creative activity. Through their playing, artists can make a piece of music appear in a totally different light, and as the history of recorded performance and the lineup of famous pianists of the 20th century show, there are literally endless possibilities of variation and new ways of looking at a masterpiece.

On the other hand, this artistic freedom is not unlimited. It is constrained by specific performance traditions and expectations of audiences (both of which may and do change over time) and also by limitations or biases of our perceptual system. Moreover, it is generally agreed that a central function of expressive playing is to clarify, emphasize, or disambiguate the structure of a piece of music, to make the audience hear a particular reading of the piece. One cannot easily play “against the grain” of the music (even though some performers sometimes seem to try to do that—consider, for example, Glenn Gould’s recordings of Mozart’s piano sonatas).

There is thus an interesting tension between creative freedom and various cultural and musical norms and psychological constraints. Exploring this space at the boundaries of creative freedom is an exciting scientific endeavor that we would like to contribute to with our research.

How YQX Works

To satisfy the reader’s curiosity as to how the Rencon 2008 contest was won, let us first give a brief explanation of how the performance rendering system YQX works. The central performance decision component in YQX (read: Why QX?⁴) is based on machine learning. At the heart of YQX is a simple Bayesian model that is trained on a corpus of human piano performances. Its task is to learn to predict three expressive (numerical) dimensions: timing—the ratio of the played as compared to the notated time between two successive notes in the score, which indicates either acceleration or slowing down; dynamics—the relative loudness to be applied to the current note; and articulation—the ratio of how long a note is held as compared to the note duration as prescribed by the score.
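To make these three target dimensions concrete, here is a minimal sketch of how such ratios could be computed from a matched pair of score and performance notes. It is not the authors' code: the data classes, field names, and normalization constants (beat period, average velocity) are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class ScoreNote:
    onset: float     # notated onset time, in beats
    duration: float  # notated duration, in beats

@dataclass
class PlayedNote:
    onset: float     # measured onset time, in seconds
    duration: float  # measured sounding duration, in seconds
    velocity: int    # MIDI velocity, used as a loudness proxy

def expressive_targets(s_cur, s_next, p_cur, p_next, beat_period, avg_velocity):
    """Compute YQX-style targets for one melody note (the normalizations
    are assumptions; beat_period is seconds per beat at the nominal tempo,
    avg_velocity the mean velocity over the piece)."""
    # Timing: played versus notated inter-onset interval (IOI).
    # A value > 1 means the gap to the next note was stretched (slowing down).
    notated_ioi = (s_next.onset - s_cur.onset) * beat_period
    timing = (p_next.onset - p_cur.onset) / notated_ioi

    # Dynamics: loudness of this note relative to the piece's average loudness.
    dynamics = p_cur.velocity / avg_velocity

    # Articulation: how long the note is held relative to its notated duration;
    # values >= 1 suggest legato, values clearly < 1 suggest staccato.
    articulation = p_cur.duration / (s_cur.duration * beat_period)

    return timing, dynamics, articulation
```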

The Bayesian network models the dependency of the expressive dimensions on the local score context. The context of a particular melody note is described in terms of features like pitch interval, rhythmic context, and features based on an implication-realization (IR) analysis (based on musicologist Eugene Narmour’s [1990] theory) of the melody. A brief description of the basic YQX model and the features used is given in The Core of YQX: A Simple Bayesian Model elsewhere in this article.

The complete expressive rendering strategy of YQX then combines this machine-learning model with several other parts. First, a reference tempo and a loudness curve are constructed, taking into account hints in the score that concern changes in tempo and loudness (for example, loudness indications like p (piano) versus f (forte), tempo change indications like ritardando, and so on). Second, YQX applies the Bayesian network to predict the precise local timing, loudness, and articulation of notes from their score contexts. To this end, a simple soprano voice extraction method—we simply pick the highest notes in the upper staff of the score⁵—and an implication-realization parser (Grachten, Arcos, and Lopez de Mantaras 2004) are used. The final rendering of a piece is obtained by combining the reference tempo and loudness curves and the note-wise predictions of the network. Finally, instantaneous effects such as accents and fermatas are applied where marked in the score.
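The article does not spell out how the reference curves and the note-wise predictions are combined; the sketch below shows one plausible, multiplicative combination. The function name, the clipping, and the combination rule itself are assumptions, not the authors' implementation.

```python
def render_note(duration_beats, ref_tempo_bpm, ref_loudness, prediction):
    """Turn a reference tempo/loudness value plus a note-wise prediction
    (timing, dynamics, articulation) into playback parameters for one note."""
    timing, dynamics, articulation = prediction

    # Local tempo: the reference tempo scaled by the predicted IOI ratio
    # (a ratio > 1 means a longer IOI, hence a slower local tempo).
    local_tempo = ref_tempo_bpm / timing

    # Loudness: the reference loudness scaled by the predicted relative
    # loudness, clipped to the valid MIDI velocity range.
    velocity = int(max(1, min(127, round(ref_loudness * dynamics))))

    # Duration: the notated duration (in beats) converted to seconds at the
    # local tempo, then stretched or shortened by the articulation ratio.
    seconds = duration_beats * (60.0 / local_tempo) * articulation

    return local_tempo, velocity, seconds
```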

In the case of Rencon 2008, YQX was trained on two separate corpora, to be able to deal with pseudo-Chopin and pseudo-Mozart: 13 complete Mozart piano sonatas performed by R. Batik (about 51,000 melody notes) and a selection of 39 pieces by Chopin (including various genres such as waltzes, nocturnes, and ballades) performed by N. Magaloff (about 24,000 melody notes).


The Core of YQX: A Simple Bayesian Model

The statistical learning algorithm at the heart of YQX is based on a simple Bayesian model that describes the relationship between the score context (described through a set of simple features) and the three target variables (the expressive parameters timing, dynamics, articulation) to be predicted. Learning is performed on the individual notes of the melody (usually the top voice) of a piece. Likewise, predictions are applied to the melody notes of new pieces; the other voices are then synchronized to the expressively deformed melody.

For each melody note, several properties are calculated from the score (features) and the performance (targets). The features include the rhythmic context, an abstract description of the duration of a note in relation to successor and predecessor (for example, short-short-long); the duration ratio, the numerical ratio of durations of two successive notes; and the pitch interval to the following note in semitones. The implication-realization analysis (Narmour 1990) categorizes the melodic context into typical categories of melodic contours (IR-label). Based on this, a measure of musical closure (IR-closure), an abstract concept from musicology, is estimated. The corresponding targets IOI-ratio (tempo), loudness, and articulation are directly computed from the way the note was played in the example performance.
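As an illustration of the score features just listed, the following sketch computes the pitch interval, duration ratio, and a short/long rhythmic-context label for a melody note and its two neighbors. The exact encodings (in particular the s/l coding of the rhythmic context) are assumptions guided by the description above, not the definitions used in YQX.

```python
def rhythmic_context(prev_dur, cur_dur, next_dur, tol=1e-6):
    """Label predecessor, note, and successor as short (s) or long (l)
    relative to the longest duration of the triple, e.g. 's-s-l'.
    The coding scheme is an assumption, not the YQX definition."""
    longest = max(prev_dur, cur_dur, next_dur)
    label = lambda d: "l" if d >= longest - tol else "s"
    return "-".join(label(d) for d in (prev_dur, cur_dur, next_dur))

def melody_features(prev_note, cur_note, next_note):
    """Compute a few YQX-style score features for cur_note; notes are
    (midi_pitch, duration_in_beats) tuples."""
    _prev_pitch, prev_dur = prev_note
    cur_pitch, cur_dur = cur_note
    next_pitch, next_dur = next_note
    return {
        # Pitch interval to the following note, in semitones.
        "pitch_interval": next_pitch - cur_pitch,
        # Numerical ratio of the durations of two successive notes.
        "duration_ratio": cur_dur / next_dur,
        # Abstract duration pattern over predecessor, note, and successor.
        "rhythmic_context": rhythmic_context(prev_dur, cur_dur, next_dur),
    }
```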

In figure 2 we show an example of the features and target values that are computed for a given note (figure 2a), and the structure of the Bayesian model (figure 2b). The melody segment shown is from Chopin’s Nocturne Op. 9, No. 1, performed by N. Magaloff. Below the printed melody notes is a piano roll display that indicates the performance (the vertical position of the bars indicates pitch, the horizontal dimension time). Shown are the features and targets calculated for the sixth note (G𝅗𝅥) and the corresponding performance note. The IR features are calculated from the note itself and its two neighbors: a linearly proceeding interval sequence (“Process”) and a low IR-closure value, which indicates a mid-phrase note; the note is played legato (articulation ≥ 1) and slightly slower than indicated (IOI-Ratio ≥ 1).

The Bayes net is a simple conditional Gaussian model (Murphy 2002), as shown in figure 2b. The features are divided into sets of continuous (X) and discrete (Q) features. The continuous features are modeled as Gaussian distributions p(xi), the discrete features through simple probability tables P(qi). The dependency of the target variables Y on the score features X and Q is given by conditional probability distributions p(yi | Q, X).

The model is trained by estimating, separately for each target variable, distributions representing the joint probabilities p(yi, X); the dependency on the discrete variables Q is modeled by computing a separate model for each possible combination of values of the discrete variables. (This is feasible because we have a very large amount of training data.) The actual predictions y′i are approximated through linear regression, as is commonly done (Murphy 2002).
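Read literally, this training scheme amounts to fitting, for each combination of discrete feature values, a joint Gaussian over the continuous features and one target, and then predicting with its conditional mean, which is exactly a linear regression. The sketch below implements that generic scheme; it is a reconstruction from the description above (with an added regularization term), not the authors' code.

```python
import numpy as np
from collections import defaultdict

def train_conditional_gaussian(Q, X, y):
    """Fit one joint Gaussian over (x, y) per combination of discrete
    feature values.  Q is a list of tuples of discrete feature values,
    X an (n, d) array of continuous features, y an (n,) array holding
    one expressive target (for example, the IOI ratio)."""
    groups = defaultdict(list)
    for i, q in enumerate(Q):
        groups[q].append(i)
    models = {}
    for q, idx in groups.items():
        idx = np.asarray(idx)
        Z = np.column_stack([X[idx], y[idx]])  # joint (x, y) samples
        mu = Z.mean(axis=0)
        # Regularize the covariance slightly; assumes enough data per context.
        cov = np.cov(Z, rowvar=False) + 1e-6 * np.eye(Z.shape[1])
        models[q] = (mu, cov)
    return models

def predict(models, q, x):
    """Conditional mean E[y | x, q] of the fitted joint Gaussian; within a
    discrete context q this is a linear regression of y on x."""
    mu, cov = models[q]
    d = len(x)
    mu_x, mu_y = mu[:d], mu[d]
    S_xx, S_yx = cov[:d, :d], cov[d, :d]
    return mu_y + S_yx @ np.linalg.solve(S_xx, np.asarray(x, dtype=float) - mu_x)
```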

[Figure 2a shows the score above a piano-roll rendering of the performance, with the score providing the discrete features Q and continuous features X and the performance providing the continuous targets Y; figure 2b shows the structure of the Bayesian model. The values computed for the example note are Features: IR-Label: Process; IR-Closure: -0.4332; Pitch-Interval: -1; Rhythmic context: s-s-l; Duration Ratio: 0.301. Targets: Timing: 0.036; Articulation: 1.22; Loudness: 0.2.]

    Figure 2. An Example of the Features and Target Values that Are Computed for a Given Note and the Structure of the Bayesian Model.


    Figure 3. Bars 1–18 from “My Nocturne,” by Tadahiro Murao (Aichi University of Education, Japan). Reproduced with permission.

(Phrase labels A1, A2, B1, and B2 are marked in the score.)


    A Brief Look at YQX Playing a “Chopin” Piece

In order to get an appreciation of what YQX does to a piece of music, let us have a brief look at some details of YQX’s interpretation of the quasi-Chopin piece “My Nocturne,” composed by Tadahiro Murao specifically for the Rencon 2008 contest.¹

Figure 3 shows the first 18 bars of the piece. In figure 4, we show the tempo and dynamics fluctuations (relating to the melody in the right hand) of YQX’s performance over the course of the entire piece. At first sight, the tempo curve (figure 4a) betrays the three-part structure of the piece: the middle section (bars 13–21) is played more slowly than the two outer sections, and the endings of all three sections are clearly articulated with a strong ritardando (an extreme slowing down). This is musically very sensible (as can also be heard in the video), but also not very surprising, as the tempo change for the middle section is given in the score, and also the ritardandi are prescribed (rit.).

[Figure 4 contains two panels plotted against score time in bars: (a) the YQX tempo curve for “My Nocturne” (soprano voice), with tempo in beats per minute on the vertical axis, and (b) the YQX dynamics curve (soprano voice), with loudness as MIDI velocity on the vertical axis. The passages discussed in the text are marked [a], [a’], [b], and [b’].]

    Figure 4. “My Nocturne” as Performed by YQX.

A. Tempo. B. Dynamics. The horizontal axes represent the “logical” score time, that is, the position in the piece, counted in terms of bars from the beginning. In the tempo plot, the vertical axis gives the local tempo YQX played at that point in time (in terms of beats [eighth notes] per minute). In the dynamics plot, the vertical axis gives the loudness of the current melody note (in terms of MIDI velocity). Only notes of the melody of the piece (the highest part in the right hand) are indicated. See the text for an explanation of the specially marked passages.


Visualizing Expressive Timing in the Phase Plane

In the phase-plane representation, tempo information measured from a musical performance is displayed in a two-dimensional plane, where the horizontal axis corresponds to tempo, and the vertical axis to the derivative of tempo. The essential difference from the common tempo-versus-time plots is that the time dimension is implicit rather than explicit in the phase plane. Instead, the change of tempo from one time point to the other (the derivative of tempo) is represented explicitly as a dimension. The abundance of names for different types of tempo changes in music (accelerando and ritardando being the most common denotations of speeding up and slowing down, respectively) proves the importance of the notion of tempo change in music. This makes the phase-plane representation particularly suitable for visualizing expressive timing.

Figure 5 illustrates the relation between the time series representation and the phase-plane representation schematically. Figure 5a shows one oscillation of a sine function plotted against time. As can be seen in figure 5b, the corresponding phase-plane trajectory is a full circle, starting and ending on the far left side (numbers express the correspondence between points in the time series and phase-plane representations).

This observation gives us the basic understanding to interpret phase-plane representations of expressive timing. Expressive gestures, that is, patterns of timing variations used by the musician to demarcate musical units, typically manifest in the phase plane as (approximately) elliptic curves. A performance then becomes a concatenation and nesting of such forms, the nesting being a consequence of the hierarchical nature of the musical structure. In figure 6 we show a fragment of an actual tempo curve and the corresponding phase-plane trajectory. The trajectory is obtained by approximating the measured tempo values (dots in figure 6a) by a spline function (solid curve). This function and its derivative can be evaluated at arbitrary time points, yielding the horizontal and vertical coordinates of the phase-plane trajectory. Depending on the degree of smoothing applied in the spline approximation, the phase-plane trajectory reveals either finer details or more global trends of the tempo curve.
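A minimal sketch of this construction, assuming strictly increasing score times and using a standard smoothing spline from scipy; the smoothing factor, grid density, and normalization are illustrative choices, not the settings used by the authors.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def phase_plane_trajectory(score_times, tempo_values, smoothing=1.0, n_points=200):
    """Approximate a measured tempo curve by a smoothing spline and return
    a (tempo, d tempo/d time) trajectory for phase-plane plotting."""
    t_meas = np.asarray(score_times, dtype=float)
    v_meas = np.asarray(tempo_values, dtype=float)

    # Smoothing spline through the measured tempo points; a larger smoothing
    # factor reveals global trends, a smaller one finer details.
    spline = UnivariateSpline(t_meas, v_meas, s=smoothing)
    dspline = spline.derivative()

    # Evaluate the spline and its derivative on a dense grid of score times.
    t = np.linspace(t_meas[0], t_meas[-1], n_points)
    tempo, dtempo = spline(t), dspline(t)

    # Normalize both dimensions to zero mean and unit variance, as done for
    # the phase-plane plots of YQX's performance later in the article.
    tempo = (tempo - tempo.mean()) / tempo.std()
    dtempo = (dtempo - dtempo.mean()) / dtempo.std()
    return tempo, dtempo
```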

[Figure 5 panels: (a) the time series, x(t) plotted against t, with points labeled 0–4; (b) the phase plane, dx/dt plotted against x(t), with the same points labeled. Figure 6 panels: (a) the time series, tempo against score time; (b) the phase plane, derivative of tempo against tempo, with the accelerando (upper) and ritardando (lower) half-planes and the slow/fast directions indicated.]

    Figure 5. Schematic Relation between the Time Series Representation and the Phase-Plane Representation.

    Figure 6. A Fragment of an Actual Tempo Curve and the Corresponding Phase-Plane Trajectory.


[Figure 7 panels: first-order phase plane for bars 3–5 (left) and bars 5–7 (right).]

    Figure 7. YQX’s Expressive Timing (Tempo Changes) in the Phase Plane.

Left: Bars 3–4 of “My Nocturne.” Right: Bars 5–6. Horizontal axis: (smoothed) tempo in beats per minute (bpm). Vertical axis: first derivative of tempo (that is, amount of accelerando or ritardando). Empty circles indicate the beginnings of new bars; black dots mark beat times (each of the 6 eighth note beats within a bar); the end of a curve is marked by a diamond (to indicate the direction). The tempo and acceleration values have been normalized to give a μ = 0.0 and a σ = 1.0 over the entire piece; the absolute values thus do not have a direct musical interpretation.

More interesting is the internal structure that YQX chose to give to the individual bars and phrases. Note, for instance, the clear up-down (faster-slower) tempo contour associated with the sixteenth-note figure in bar 3, which YQX accompanies with a crescendo-decrescendo in the dynamics domain (see indications [a] in figure 4). Note also the extreme softening of the last note of this phrase (the last note of bar 4), which gives a very natural ending to the phrase [b]. Both of these choices are musically very sensible. Note also how the following passage (bar 5—see [a’]) is played with an analogous crescendo-decrescendo, though it is not identical in note content to the parallel passage [a]. Unfortunately, there is no corresponding softening at the end [b’]: YQX chose to play the closing B♭ in bar 6 louder and, worse, with a staccato, as can be clearly heard in the video—a rather clear musical mistake.

The tempo strategy YQX chose for the entire first section (from bar 1 to the beginning of the middle section in bar 13) deserves to be analyzed in a bit more detail. To facilitate an intuitive visual understanding of the high-level trends, we will present this analysis in the phase plane (see figures 7 through 9). Phase-plane plots, known from dynamical systems theory, display the behavior of a system by plotting variables describing the state of the system against each other, as opposed to plotting each variable against time. It is common in phase-plane plots that one axis represents the derivative of the other axis. For example, dynamical systems that describe physical motion of objects are typically visualized by plotting velocity against displacement. In the context of expressive performance, phase-plane representations have been proposed for the analysis of expressive timing (Grachten et al. 2008), because they explicitly visualize the dynamic aspects of performance. Expressive “gestures,” manifested as patterns of variation in timing, tend to be more clearly revealed in this way. The sidebar Visualizing Expressive Timing in the Phase Plane gives some basic information about the computation and meaning of smoothed timing trajectories in the phase plane. A more detailed description of the computation process and the underlying assumptions can be found in Grachten et al. (2009). We will also encounter phase-plane plots later in this article, when we compare the timing behavior of famous pianists.

Figure 7 shows how YQX chose to play two very similar passages at the beginning of the piece: the first appearance of the main theme in bars 3–4, and the immediately following variation of the theme (bars 5–6); the two phrases are marked as A1 and A2 in the score in figure 3. Overall, we see the same general tempo pattern in both cases: a big arc—a speedup followed by a slow-down (accelerando-ritardando)—that characterizes the first part (bar), followed by a further ritardando with an intermediate little local speedup (the loop in the phase plot; note that the final segment of the curve (leading to the final diamond) already pertains to the first beat of the next bar). Viewing the two corresponding passages side by side, we clearly see that they were played with analogous shapes, but in a less pronounced or dramatic way the second time around: the second shape is a kind of smaller replica of the first.

A similar picture emerges when we analyze the next pair of corresponding (though again not identical) musical phrases: phrase B1 starting in the middle of bar 9, and its immediate variation B2 starting with bar 11 (see figure 8 for YQX’s renderings). Here, we see a speedup over the first part of the phrase (the agitated “gesture” consisting of six sixteenth notes in phrase B1, and the corresponding septuplet in B2), followed by a slowing down over the course of the trill (tr), followed by a speedup towards the beginning of the next phrase. The essential difference here is that, in repeat 2, YQX introduces a strong (and very natural-sounding) ritardando towards the end, because of the approaching end of the first section of the piece. To make this slowing down towards the end appear even more dramatic,⁶ the preceding speedup is proportionately stronger than in the previous phrase. Overall, we again see YQX treating similar phrases in analogous ways, with the second rendition being less pronounced than the first, and with some variation to account for a different musical context. This can be seen even more clearly in figure 9, which shows the entire passage (bars 9.5–12.5) in one plot. One might be tempted to say that YQX is seeking to produce a sense of “unity in variation.”

Of course, YQX’s performance of the complete piece is far from musically satisfying. Listening to the entire performance, we hear passages that make good musical sense and may even sound natural or “humanlike,” but we also observe quite a number of blatant mistakes or unmusical behaviors, things that no human pianist would ever think of doing.⁷ But overall, the quality is quite surprising, given that all of this was learned by the machine.

Is YQX Creative?

Surely, expressive performance is a creative act. Beyond mere technical virtuosity, it is the ingenuity with which great artists portray a piece in a new way and illuminate aspects we have never noticed before that commands our admiration. While YQX’s performances may not exactly command our admiration, they are certainly not unmusical (for the most part), they seem to betray some musical understanding, and they are sometimes surprising. Can YQX be said to be creative, then?

In our view (and certainly in the view of the general public), creativity has something to do with intentionality, with a conscious awareness of form, structure, aesthetics, with imagination, with skill, and with the ability of self-evaluation (see also Boden 1998, Colton 2008, Wiggins 2006). The previous analysis of the Chopin performance seems to suggest that YQX has a sense of unity and coherence, variation, and so on. The fact is, however, that YQX has no notion whatsoever of abstract musical concepts like structure, form, or repetition and parallelism; it is not even aware of the phrase structure of the piece. What seems to be purposeful, planned behavior that even results in an elegant artifact is “solely” an epiphenomenon emerging from low-level, local decisions—decisions on what tempo and loudness to apply at a given point in the piece, without much regard to the larger musical context.

With YQX, we have no idea why the system chose to do what it did. Of course, it would be possible in principle to trace back the reasons for these decisions, but they would not be very revealing—they would essentially point to the existence of musical situations in the training corpus that are somehow similar to the musical passage currently rendered and that led to the Bayesian model parameters being fit in a particular way.


[Figure 8 panels: first-order phase plane for bars 9.5–11 (left) and bars 11–12.5 (right); axes as in figure 7.]

Figure 8. YQX’s Expressive Timing (Tempo Changes) over Bars 9.5–11 (Left) and 11–12.5 (Right), Respectively.


[Figure 9 panel: first-order phase plane for bars 9.5–12.5; axes as in figure 7.]

    Figure 9. Overall View of YQX’s Expressive Timing (Tempo Changes) over Bars 9.5–12.5.

Other aspects often related to creativity are novelty, surprisingness, and complexity (for example, Boden 1998, Bundy 1994). YQX does a number of things that are not prescribed in the score, that could not be predicted by the authors, but are nevertheless musically sensible and may even be ascribed a certain aesthetic quality. YQX embodies a combination of hard-coded performance strategies and inductive machine learning based on a large corpus of measured performance data. Its decisions are indeed unpredictable (which also produced quite some nervous tension at the Rencon contest, because we were not permitted to listen to YQX’s result before it was publicly performed before the audience). Clearly, a machine can exhibit unexpected (though “rational”) behavior and do surprising new things, things that were not foreseen by the programmer, but still follow logically from the way the machine was programmed. Seemingly creative behavior may emerge as a consequence of high complexity of the machine’s environment, inputs, and input-output mapping.⁸ That is certainly the case with YQX.

The previous considerations lead us to settle, for the purposes of this article, on the pragmatic view that creativity is in the eye of the beholder. We do not claim to be experts on formal models of creativity, and the goal of our research is not to directly model the processes underlying creativity. Rather, we are interested in studying creative (human) behaviors, and the artifacts they produce, with the help of intelligent computer methods. YQX was developed not as a dedicated tool, with the purpose of winning Rencon contests, but as part of a much larger and broader basic research effort that aims at elucidating the elusive art of expressive music performance. An early account of that research was given in Widmer et al. (2003). In the following, we sketch some of our more recent results and some current research directions that should take us a step or two closer to understanding this complex behavior.

    Studying a Creative Behavior with AI Methods

As explained previously, the art of expressive music performance takes place in a field of tension demarcated by creative freedom on the one end and various musical and cultural norms and perceptual constraints on the other end. As a consequence, we may expect to find both significant commonalities between performances of different artists—in contexts where they all have to succumb to common constraints—and substantial differences in other places, where the artists explore various ways of displaying music, and where they can define their own personal artistic style.

Our research agenda is to shed some new light on this field of tension by performing focused computational investigations. The approach is data intensive in nature. We compile large amounts of empirical data (measurements of expressive timing, dynamics, and so on in real piano performances) and analyze these with data analysis and machine-learning methods.

Previous work has shown that both commonalities (that is, fundamental performance principles) and differences (personal style) seem to be sufficiently systematic and pronounced to be captured in computational models. For instance, in Widmer (2002) we presented a small set of low-level performance rules that were discovered by a specialized learning algorithm (Widmer 2003). Quantitative evaluations showed that the rules are surprisingly predictive and also general, carrying over from one pianist to another and even to music of a different style. In Widmer and Tobudic (2003), it was shown that modeling expression curves at multiple levels of the musical phrase hierarchy leads to significantly better predictions, which provides indirect evidence for the hypothesis that expressive timing and dynamics are multilevel behaviors. The results could be further improved by employing a relational learning algorithm (Tobudic and Widmer 2006) that also modeled some simple temporal context. Unfortunately, being entirely case based, these latter models produced little interpretable insight into the underlying principles.

Regarding the quest for individual style, we could show, in various experiments, that personal style seems sufficiently stable, both within and across pieces, that it can be picked up by machine. Machine-learning algorithms learned to identify artists solely from their way of playing, both at the level of advanced piano students (Stamatatos and Widmer 2005) and famous pianists (Widmer and Zanon 2004). Regarding the identification of famous pianists, the best results were achieved with support vector machines based on a string kernel representation (Saunders et al. 2008), with mean recognition rates of up to 81.9 percent in pairwise discrimination tasks. In Madsen and Widmer (2006) we presented a method to quantify the stylistic intrapiece consistency of famous artists.

And finally, in Tobudic and Widmer (2005), we investigated to what extent learning algorithms could actually learn to replicate the style of famous pianists. The experiments showed that expression curves produced (on new pieces) by the computer after learning from a particular pianist are consistently more strongly correlated to that pianist’s curves than to those of other artists. This is another indication that certain aspects of personal style are identifiable and even learnable. Of course, the resulting computer performances are a far cry from sounding anything like, for example, a real Barenboim or Rubinstein performance. Expressive performance is an extremely delicate and complex art that takes a whole lifetime of experience, training, and intellectual engagement to be perfected. We do not expect—to answer a popular question that is often posed—that computers will be able to play music like human masters in the near (or even far) future.

But we may expect to gain more insight into the workings of this complex behavior. To that end, we are currently extending both our probabilistic modeling approach and the work on expressive timing analysis in the phase plane.

Towards a Structured Probabilistic Model

Generally, the strategy is to extend YQX step by step, so that we can quantify exactly the relevance of the various components and layers of the model, by measuring the improvement in the model’s prediction power relative to real performance data. While the exact model parameters learned from some training corpus may not be directly interpretable, the structure of the model, the chosen input features, and the way each additional component changes the model’s behavior will provide musicologically valuable insights.

Apart from experimenting with various additional score features, two specific directions are currently pursued. The first is to extend YQX with a notion of local temporal contexts. To that end, we are converting the simple static model into a dynamic network where the predictions are influenced by the predictions made for the previous notes. In this way, YQX should be able to produce much more consistent and stable results that lead to musically more fluent performances.
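The dynamic network itself is not described in detail here; purely as an illustration of the intended effect (each note's value being influenced by the values chosen for previous notes), the toy recursion below smooths a sequence of independent note-wise predictions. It illustrates the idea only and is not a stand-in for the actual dynamic model; the blending weight is an arbitrary assumption.

```python
def temporally_smoothed(raw_predictions, weight=0.7):
    """Blend each raw note-wise prediction with the value chosen for the
    previous note, so that successive values cannot jump arbitrarily.
    This first-order recursion only illustrates the idea of temporal
    context; the weight of 0.7 is an arbitrary assumption."""
    smoothed, previous = [], None
    for value in raw_predictions:
        if previous is None:
            previous = value
        else:
            previous = weight * value + (1.0 - weight) * previous
        smoothed.append(previous)
    return smoothed
```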

The second major direction for extending YQX is towards multiple structural levels. Western music is a highly structured artifact, with levels of groups and phrases hierarchically embedded within each other. It is plausible to assume that this complex structure is somehow reflected in the performances by skilled performers, especially since musicology tells us that one function of expressive playing (and timing, in particular) is to disambiguate the structure of a piece to the audience (for example, Clarke [1991], Palmer [1997]). Indirect evidence for this was provided by our multilevel learning experiments. In Grachten and Widmer (2007) we demonstrated in a more direct way that aspects of the phrase structure are reflected in the tempo and dynamics curves we measure in performances. Our previous multilevel model was a case-based one. We now plan to do multilevel modeling in the probabilistic framework of YQX, which we consider more appropriate for music performance (see also Grindlay and Helmbold 2006).

Towards a Computational Model of an Accomplished Artist

We are in the midst of preparing what will be the largest and most precise corpus of performance measurements ever collected in empirical musicology. Shortly before his death, the Russian pianist Nikita Magaloff (1912–1992) recorded essentially the entire solo piano works by Frédéric Chopin on the Bösendorfer SE290 computer-controlled grand piano in Vienna, and we obtained exclusive permission to use this data set for scientific studies. This is an enormous data set—many hours of music, hundreds of thousands of played notes, each of which is precisely documented (the Bösendorfer computer-controlled piano measures each individual key and pedal movement with utmost precision). It will present us with unprecedented research opportunities, and also formidable data processing challenges. The ultimate challenge would be to build a structured, probabilistic model of the interpretation style of a great pianist. It remains to be seen how far machine learning can take us toward that goal.

Toward Characterizing Interpretation Styles

Representing expressive timing in the phase plane is a new idea that needs further investigation. In a strict sense, the representation is redundant, because the derivatives (the second dimension of the plane) are fully determined by the tempo curve (the first dimension). Still, as one can demonstrate with qualitative examples (Grachten et al. 2008), the representation has certain advantages when used for visual analysis.


[Figure 10 contains three panels, each plotting the first derivative of tempo (D1 Tempo) against normalized tempo for two pianists playing the same passage: Arrau ’73 (solid) and Kissin ’99 (dashed); Argerich ’75 (solid) and Harasiewicz ’63 (dashed); Pogorelich ’89 (solid) and Pollini ’75 (dashed).]

Figure 10. Three Distinct Strategies for Shaping the Tempo in Chopin’s Prélude Op. 28, No. 17, Bars 73–74, Discovered from Phase-Plane Data.

Claudio Arrau, 1973, and Evgeny Kissin, 1999 (top left); Martha Argerich, 1975, and Adam Harasiewicz, 1963 (top right); Ivo Pogorelich, 1989, and Maurizio Pollini, 1975 (bottom).

It permits the human analyst to perceive expressive “gestures” and “gestalts,” and variations of these, much more easily and directly than in plain tempo curves. Moreover, even if the second dimension is redundant, it makes information explicit that would otherwise be concealed. This is of particular importance when we use data analysis and machine learning methods for pattern discovery. In Grachten et al. (2009), a set of extensive experiments is presented that evaluate a number of alternative phase-plane representations (obtained with different degrees of smoothing, different variants of data normalization, first- or second-order derivatives) against a set of well-defined pattern-classification tasks. The results show that adding derivative information almost always improves the performance of machine-learning methods. Moreover, the results suggest that different kinds of phase-plane parameterizations are appropriate for different identification tasks (for example, performer recognition versus phrase context recognition). Such experiments give us a basis for choosing the appropriate representation for a given analysis task in a principled way.

As a little example, figure 10 shows how a particular level of smoothing can reveal distinct performance strategies that indicate divergent readings of a musical passage. The figure shows three ways of playing the same musical material—bars 73–74 in Chopin’s Prélude Op. 28, No. 17. They were discovered through automatic phase-plane clustering of recordings by famous concert pianists. For each cluster, we show two prototypical pianists. The interested reader can easily see that the pianists in the first cluster play the passage with two accelerando-ritardando “loops,” whereas in the second cluster, the dominant pattern is one large-scale accelerando-ritardando with an intermediate speedup-slowdown pattern interspersed, of which, in the third cluster, only a short lingering is left. In essence, pianists in the first cluster clearly divide the passage into two units, shaping each individual bar as a rhythmic chunk in its own right, while pianists in the third cluster read and play it as one musical gesture. This tiny example is just meant to exemplify how an intuitive representation coupled with pattern-detection algorithms can reveal interesting commonalities and differences in expressive strategies. Systematic, large-scale pattern-discovery experiments based on different phase-plane representations will, we hope, bring us closer to being able to characterize individual or collective performance styles.
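The article states that the clusters in figure 10 were found by automatic phase-plane clustering but does not name the algorithm. The sketch below shows one generic way such a clustering could be set up: each pianist's trajectory for the passage is resampled to a fixed length and the resulting vectors are grouped with k-means. The resampling scheme, the number of clusters, and the use of scikit-learn's KMeans are all assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_phase_plane_trajectories(trajectories, n_clusters=3, n_points=50, seed=0):
    """Cluster phase-plane trajectories of the same passage played by
    different pianists.  Each trajectory is a (tempo, dtempo) pair of 1-D
    arrays; both dimensions are resampled to a common length and
    concatenated into one feature vector per performance."""
    vectors = []
    for tempo, dtempo in trajectories:
        grid = np.linspace(0.0, 1.0, n_points)
        src = np.linspace(0.0, 1.0, len(tempo))
        # Linear resampling so every performance yields a vector of equal length.
        vectors.append(np.concatenate([np.interp(grid, src, tempo),
                                       np.interp(grid, src, dtempo)]))
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    return km.fit_predict(np.vstack(vectors))
```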

Conclusion

To summarize, AI may help us study creative behaviors like expressive music performance—or more precisely: artifacts that result from such creative behaviors—in new ways. That concerns both the “rational,” “rulelike,” or “norm-based” aspects, and the spaces of artistic freedom where artists can develop their very personal style. State-of-the-art systems like YQX can even produce expressive performances themselves that, while neither truly high-class nor arguably very creative, could pass as the products of a mediocre music student. We take this result primarily as evidence that our models capture musically relevant principles. Whether machines themselves can be credited with creativity, and whether musical creativity in particular can be captured in formal models, is a question that is beyond the scope of the present article. We will continue to test our models in scientific contests such as Rencon, and leave it to the audience to decide whether they consider the results creative.

Acknowledgments

This research is supported by the Austrian Research Fund (FWF) under grant number P19349-N15. Many thanks to Professor Tadahiro Murao for permitting us to reproduce the score of his composition “My Nocturne” and to the organizers of the Rencon 2008 contest for giving us all their prizes.

Notes

1. See www.renconmusic.org/icmpc2008.

    2. See www.renconmusic.org.

    3. From www.renconmusic.org/icmpc2008/autonomous.htm.

4. Actually, the name derives from the three basic sets of variables used in the system’s Bayesian model: the target (expression) variables Y to be predicted, the discrete score features Q, and the continuous score features X. See The Core of YQX: A Simple Bayesian Model in this article for an explanation of these.

5. Automatic melody identification is a hard problem for which various complex methods have been proposed (for example, Madsen and Widmer [2007]), which still do not reach human levels of performance. Given the specific style of music we faced at the Rencon contest, simply picking the highest notes seemed robust enough. The improvement achievable by better melody line identification is yet to be investigated.

6. We are insinuating purposeful behavior by YQX here, which we will be quick to dismiss in the following section. The point we want to make is that it is all too easy to read too much into the “intentions” behind the decisions of a computer program—particularly so when one is using very suggestive visualizations, as we are doing here.

7. Viewing the videos, the reader will have noticed that both in the Chopin and the Mozart, YQX makes a couple of annoying octave transposition errors (in the octava (8va) passages). This is not an indication of creative liberties taken by the system, but rather a consequence of a last-minute programming mistake made by one of the authors during the Rencon contest.

8. One might argue that the same holds for the creativity we attribute to human beings.

    References

Boden, M. 1998. Creativity and Artificial Intelligence. Artificial Intelligence 103(1–2): 347–356.

Bundy, A. 1994. What Is the Difference between Real Creativity and Mere Novelty? Behavioural and Brain Sciences 17(3): 533–534.

Clarke, E. F. 1991. Expression and Communication in Musical Performance. In Music, Language, Speech and Brain, ed. J. Sundberg, L. Nord, and R. Carlson, 184–193. London: Macmillan.

Colton, S. 2008. Creativity Versus the Perception of Creativity in Computational Systems. In Creative Intelligent Systems: Papers from the AAAI 2008 Spring Symposium, 14–20. Technical Report SS-08-03. Menlo Park, CA: AAAI Press.

Gabrielsson, A. 2003. Music Performance Research at the Millennium. Psychology of Music 31(3): 221–272.

Grachten, M.; Arcos, J. L.; and Lopez de Mantaras, R. 2004. Melodic Similarity: Looking for a Good Abstraction Level. In Proceedings of the 5th International Conference on Music Information Retrieval (ISMIR 2004), 210–215, Barcelona, Spain. Victoria, Canada: International Society for Music Information Retrieval.

Grachten, M.; Goebl, W.; Flossmann, S.; and Widmer, G. 2009. Phase-Plane Representation and Visualization of Gestural Structure in Expressive Timing. Journal of New Music Research. In press.

Grachten, M.; Goebl, W.; Flossmann, S.; and Widmer, G. 2008. Intuitive Visualization of Gestures in Expressive Timing: A Case Study on the Final Ritard. Paper presented at the 10th International Conference on Music Perception and Cognition (ICMPC10), Sapporo, Japan, 25–29 August.

Grachten, M., and Widmer, G. 2007. Towards Phrase Structure Reconstruction from Expressive Performance Data. In Proceedings of the International Conference on Music Communication Science (ICOMCS), 56–59. Sydney, Australia: Australian Music and Psychology Society.

Grindlay, G., and Helmbold, D. 2006. Modeling, Analyzing, and Synthesizing Expressive Performance with Graphical Models. Machine Learning 65(2–3): 361–387.

Madsen, S. T., and Widmer, G. 2007. Towards a Computational Model of Melody Identification in Polyphonic Music. In Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI 2007), 495–464, Hyderabad, India. Menlo Park, CA: AAAI Press.

Madsen, S. T., and Widmer, G. 2006. Exploring Pianist Performance Styles with Evolutionary String Matching. International Journal of Artificial Intelligence Tools 15(4): 495–514.

Murphy, K. 2002. Dynamic Bayesian Networks: Representation, Inference and Learning. Ph.D. dissertation, Department of Computer Science, University of California, Berkeley.

Narmour, E. 1990. The Analysis and Cognition of Basic Melodic Structures: The Implication-Realization Model. Chicago: University of Chicago Press.

Palmer, C. 1997. Music Performance. Annual Review of Psychology 48(1): 115–138.

Repp, B. 1992. Diversity and Commonality in Music Performance: An Analysis of Timing Microstructure in Schumann’s Träumerei. Journal of the Acoustical Society of America 92(5): 2546–2568.

Saunders, C.; Hardoon, D.; Shawe-Taylor, J.; and Widmer, G. 2008. Using String Kernels to Identify Famous Performers from Their Playing Style. Intelligent Data Analysis 12(4): 425–450.

Stamatatos, E., and Widmer, G. 2005. Automatic Identification of Music Performers with Learning Ensembles. Artificial Intelligence 165(1): 37–56.

Tobudic, A., and Widmer, G. 2006. Relational IBL in Classical Music. Machine Learning 64(1–3): 5–24.


Tobudic, A., and Widmer, G. 2005. Learning to Play like the Great Pianists. In Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI’05), 871–876, Edinburgh, Scotland. Menlo Park, CA: International Joint Conferences on Artificial Intelligence.

Widmer, G. 2003. Discovering Simple Rules in Complex Data: A Meta-learning Algorithm and Some Surprising Musical Discoveries. Artificial Intelligence 146(2): 129–148.

Widmer, G. 2002. Machine Discoveries: A Few Simple, Robust Local Expression Principles. Journal of New Music Research 31(1): 37–50.

Widmer, G.; Dixon, S.; Goebl, W.; Pampalk, E.; and Tobudic, A. 2003. In Search of the Horowitz Factor. AI Magazine 24(3): 111–130.

Widmer, G., and Goebl, W. 2004. Computational Models of Expressive Music Performance: The State of the Art. Journal of New Music Research 33(3): 203–216.

Widmer, G., and Tobudic, A. 2003. Playing Mozart by Analogy: Learning Multi-level Timing and Dynamics Strategies. Journal of New Music Research 32(3): 259–268.

Widmer, G., and Zanon, P. 2004. Automatic Recognition of Famous Artists by Machine. In Proceedings of the 16th European Conference on Artificial Intelligence (ECAI’2004), 1109–1110. Amsterdam: IOS Press.

Wiggins, G. 2006. A Preliminary Framework for Description, Analysis and Comparison of Creative Systems. Knowledge-Based Systems 19(7): 449–458.

Gerhard Widmer (www.cp.jku.at/people/widmer) is professor and head of the Department of Computational Perception at the Johannes Kepler University, Linz, Austria, and head of the Intelligent Music Processing and Machine Learning Group at the Austrian Research Institute for Artificial Intelligence, Vienna, Austria. He holds M.Sc. degrees from the University of Technology, Vienna, and the University of Wisconsin, Madison, and a Ph.D. in computer science from the University of Technology, Vienna. His research interests are in AI, machine learning, pattern recognition, and intelligent music processing. In 1998, he was awarded one of Austria’s highest research prizes, the START Prize, for his work on AI and music.

Sebastian Flossmann obtained an M.Sc. in computer science from the University of Passau, Germany. He currently works as a Ph.D. student at the Department of Computational Perception at Johannes Kepler University, Linz, Austria, where he develops statistical models and machine-learning algorithms for musical expression. His e-mail address is [email protected].

Maarten Grachten obtained an M.Sc. in artificial intelligence from the University of Groningen, The Netherlands (2001), and a Ph.D. in computer science and digital communication from Pompeu Fabra University, Spain (2006). He is presently active as a postdoctoral researcher at the Department of Computational Perception of Johannes Kepler University, Austria. His research activities are centered on computational analysis of expression in musical performances. His e-mail address is [email protected].

