
8/14/2019 daw2007 Video SA EAE

Designing a System for Supporting the Process of Making a Video Sequence

Shigeki Amitani
Creativity & Cognition Studios
Australasian CRC for Interaction Design
University of Technology, Sydney, AUSTRALIA
+61-(0)2-9514-4631
[email protected]

Ernest Edmonds
Creativity & Cognition Studios
Australasian CRC for Interaction Design
University of Technology, Sydney, AUSTRALIA
+61-(0)2-9514-4640
[email protected]

ABSTRACT
The aim of this research is to develop a system to support video artists. Design rationales for artists' software should be obtained by investigating artists' practice. In this study, we have analysed the process of making a video sequence in collaboration with an experienced video artist. Based on this analysis, we identified design rationales for a system to support the process of making a video sequence. A prototype system, Knowledge Nebula Crystallizer for Time-based Information (KNC4TI), has been developed. Further development towards a generative system is also discussed.

Categories and Subject Descriptors
H5.m. Information interfaces and presentation (e.g., HCI): Miscellaneous.

General Terms
Design

Keywords
Video making, cognitive process, sketching, software, time-based information, generative system

1. INTRODUCTION
Artists have used information technologies since computers became available (e.g. [14]). Those tools help artists to break new ground from an artistic perspective. However, these tools are not optimally designed for artists.

    This paper presents:

• Results of an investigation of the process of making a video sequence, conducted in collaboration with a professional video artist, to identify design rationales for a supporting system

• Development of a prototype system called Knowledge Nebula Crystallizer for Time-based Information (KNC4TI) that supports the process based on the investigation

• Plans for extending the KNC4TI system to a generative system

2. RELATED WORK
In composing a video sequence, an editing tool is indispensable. Traditionally, video editing equipment has been designed for industrial video production, whose needs are different from those of artists. While industrial video production needs tools to organise a video sequence along with a storyboard devised in advance, artistic video production tends to proceed more through interactions between an artist and a work, rather than by following a pre-defined storyboard.

Even so, artists have adopted that equipment so that they can present their art works. Recently, artists as well as industrial video producers have started to use computer software for their compositions. However, most video editing software has been developed as a metaphor of the traditional editing equipment such as film and VCRs, much as the general GUI operating systems adopted the desktop metaphor. This means that video editing software still does not provide suitable interactive representations for artists, since the editing process of industrial video producers is different from that of artists.

In order to understand design processes in detail, a number of analyses, especially of architects' design processes, have been conducted [3-5, 22]; however, most of these studies focused on the design process of non-time-based information. Few analyses have been conducted on time-based information, such as making a video sequence or composing music.

Tanaka [23] has pointed out a problem with the analyses conducted so far in the musical composition research field: although generic models of musical composition processes have been proposed based on analyses of those processes (macroscopic models and analyses), little has been done to investigate how each stage in those models proceeds and how transitions between stages occur (microscopic models and analyses). Amitani et al. [1] have conducted a microscopic analysis of the process of musical composition. However, few microscopic analyses have been conducted on the process of making a video sequence.

From the viewpoint of human-computer interaction research, Shipman et al. [17] have developed a system called Hyper-Hitchcock. This system has the required flexibility for video editing, but it aims to index video clips based on a detail-on-demand concept that facilitates efficient user navigation of video sequences.

Yamamoto et al. [24] have developed ARTWare, a library of components for building domain-oriented multimedia authoring environments. A system was developed particularly for empirical video analysis in usability studies.

Although the systems above have been developed from a design perspective, their focus is on supporting navigation processes and analyses of video contents. For authoring information artefacts, it is important to support the entire process, from the early stages where ideas are not clear to the final stages where a concrete work is produced.


Shibata [15, 16] has claimed the importance of an integrated environment that supports the entire process of creative activities, as the process is composed of inseparable sub-processes (e.g. the generating process and the exploring process [8]). In our study, we also regard this concept of integration as important, and we implemented our system to realise it.

3. A CASE STUDY
We have investigated the process of making a video sequence to identify design rationales for the development of a video authoring tool that fits designers' cognitive processes. It was collaborative work with an experienced video artist (we call the artist "the participant" in this paper). As the participant already had a plan to compose a video clip, we could observe a quasi-natural process of making a video sequence.

Retrospective protocol reports [7], questionnaires and interviews were analysed. The overall tendencies are summarised below:

• Conceptual work, such as considering the whole structure of a piece and semantic segmentation of a material movie, is conducted in the participant's sketchbook

• Software is mainly used for:
  o Observing what is really going on in a material video sequence
  o Implementing the results of his thoughts in his sketch in response to what is seen in the software

The analysis shows that conceptual design processes are separated from implementation processes, even though they cannot be separated from each other. The design process is regarded as a "dialogue" between the designer and his/her material [13]. Facilitating designers going back and forth between the whole and a part, and between the conceptual and the represented world, will support this design process.

3.1 Roles of Sketching
Sketching plays significant roles that existing software does not cover. Sketching allows designers to:

• Externalise the designer's multiple viewpoints simultaneously with written and diagrammatic annotations

• Visualise relationships between the viewpoints that the designer defines

    In the following sections we discuss how these two featureswork in the process of making a video sequence.

3.1.1 Written and diagrammatic annotations for designers' multiple viewpoints
Figure 1 shows the participant's sketch. Each of the six horizontal rectangles in Figure 1 represents the entire material video. They all refer to the same video sequence with different labels so that he can plan what should be done regarding each element that he decided to label.

From the top to the bottom they are labelled as follows (shown in (1) of Figure 2):

• Movements: movements of physical objects, such as a person coming in, a door opening, etc.

• Sound levels: changes of sound volume

• Sound image: types of sounds (e.g. "voices", etc.)

• Pic (= picture) level: changes of density of the image

• Pic image: types of the images

• Compounded movements: plans

Figure 1: The participant's sketch


These elements are visualised in the sketch based on timeline conventions. Although some existing video authoring tools present sound levels with timelines, as the second rectangle shows, existing software allows only limited written annotations on a video sequence, and consequently does not provide sufficient functionality for externalising multiple viewpoints. In particular, the top sequence, labelled "movements", is conceptually important in making a video sequence, and it is not supported by any video authoring tools. As (1) in Figure 2 shows, a mixture of written and diagrammatic annotations works for analysing what is going on in the material sequence.

Figure 2: Annotations. (1) The designer's own annotations; (2) the same object with different annotations.

3.1.2 Visualising relationships between multiple viewpoints
As shown in (2) of Figure 2, a certain part of the material sequence is annotated differently in each rectangle in order to describe the conditions represented by that rectangle, that is: speak; zero (with shading); null; black; T (or a T-shaped symbol); meter. This is the power of multiple viewpoints with written annotations.

These annotations are explanations of a certain part of a video sequence in terms of each corresponding viewpoint. For example, in terms of "Sound levels", the sketch shows that the sound level will be set to zero at this point of the sequence.

    The participant also externalises the relationships across theviewpoints in his sketch by using both written anddiagrammatic annotations as shown in Figure 3.

Sketching supports designers in thinking about semantic relationships, such as "voices lead pics" shown in (1) of Figure 3, as well as relationships among physical features, such as the timing between sounds and pictures.

(2) indicates that he visualised the relationships between picture images and his plan for a certain part of the material sequence by using written and diagrammatic annotations.

(3) shows that he was thinking about the relationships across the viewpoints.

The relationships that the participant visualised are both physical and semantic. Some authoring tools support visualising physical relationships; however, they have few functions to support the semantic relationships among designers' viewpoints. Sketching assists this process.

Figure 3: Relationships between multiple viewpoints. (1) Relationships between sound and vision; (2) relationships between vision and plan; (3) relationships across the viewpoints.

Sketching also provides a holistic view of time-based information. Implementing these features of sketching in software will facilitate designers going back and forth between the conceptual and the physical world, and between the whole and a part, so that the process of making a video sequence is supported.

3.2 Roles of Software
We investigated the process of making a video sequence with software. The participant was to edit a material video sequence composed of a single shot. The editing tool he used was FinalCut Pro HD, which the participant had been using for about five years. The duration of the session was up to the participant (eventually it was 90 minutes). The video editing was conducted at a studio at the Creativity & Cognition Studios, University of Technology, Sydney. It was the first time he had engaged with the piece; that is, the process was the earliest stage of using video-authoring software for the new piece.

The process of making a video sequence was recorded by digital video cameras. The following elements were recorded:

• The participant's physical actions while making a video sequence with the video editing software

• The participant's actions on computer displays

After authoring a video sequence, the participant was asked to give a retrospective report on his authoring process while watching the recorded video data. We adopted the retrospective report method so that we could capture the cognitive processes in the actual interactions as far as possible. The recorded video data was used as a visual aid to minimise the load on the participant's memory [22]. The participant was asked to report what he thought during editing while watching the recorded video data. Following this, the participant was asked to answer a free-form questionnaire via e-mail.

3.2.1 Observing "facts" in a material sequence
The participant reported that he was just looking at the film clip, as follows:

    [00:01:30] At this stage, I'm just looking again atthe film clip.

    [00:02:32] So again, still operating on this kind oflooking at the image over, the perceptual thing.


This was reported 18 times in his protocol data. These observations occurred at the early and late phases of the process, as shown in Figure 4.

Figure 4: The time distribution of the observation process (frequency of observation over the 90 minutes)

The observation of facts took 75 minutes and the exploration of possibilities took 5 minutes; the rest was spent on other events such as reading a manual to solve technical problems and talking to a person.

In this observation process, it was also observed that the participant was trying to find a "rhythm" in the material sequence, which the participant calls "metre".

[00:02:23] One of the things I've been thinking about ... is actually to, is actually well, what is the kind of metre, what is the rhythm that you are going to introduce into here

This type of observation is for checking the actual duration of each scene that the participant considered as "a semantic chunk".

The participant recorded precise time durations of the semantic chunks and listed them in his sketchbook. This means that the participant was trying to refine his idea by mapping conceptual elements to a physical feature.

[00:08:32] It's a matter of analysing each, almost each frame to see what's going on and making a decision of. Having kind of analysed what's going on and making a decision of, well therefore this duration works out of something like this. The durations are in seconds and frames, so that [...] 20 unit [...]. It counts from 1 to 24 frames, the 25th frame rolls over to the next second.
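The participant's durations are noted as seconds plus frames at 25 frames per second, with the frame count rolling over at the 25th frame. As a minimal sketch of that arithmetic (the helper names are ours, not part of any system described here):

```python
FPS = 25  # frame rate implied by the protocol: the 25th frame rolls over to the next second

def to_total_frames(seconds: int, frames: int) -> int:
    """Convert a duration noted as 'seconds + frames' into a total frame count."""
    if not 0 <= frames < FPS:
        raise ValueError("frame part must be between 0 and 24 at 25 fps")
    return seconds * FPS + frames

def to_seconds(total_frames: int) -> float:
    """Express a total frame count as a duration in seconds."""
    return total_frames / FPS
```

For example, a semantic chunk noted as 2 seconds and 12 frames corresponds to 62 frames, or 2.48 seconds.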

In the process of making a video sequence, the software plays the role of elaborating what the participant decided roughly in his sketch. This process has the features listed below:

• Transitions from the macroscopic viewpoints that appeared in his sketch to microscopic actions, such as focusing on time durations, were frequently observed

• Almost no transition in the opposite direction was observed, such as seeing the effect of microscopic changes on the entire concept

3.2.2 Trial-and-error processes
Video authoring software supports trial-and-error processes with "the shot list" as well as the "undo" function. Existing video editing software usually allows designers to list the files and sequences used for their current video compositions (this is called a shot list).

The list function in a video editing tool supports comparing multiple alternatives. It allows a designer to list not only files to be potentially used but also created sequences. In the retrospective report, the participant said:

[00:11:10] It would be a kind of parallel process where you make a shot list is causing what they call a shot list [pointing at the leftmost list-type window in FinalCut]. And essentially you go through on the list, the different shots, the different scenes as we would often call, um. Whereas I'm just working with one scene, dynamics within one scene. So, I'm working with a different kind of material, but it's related too.

Although this function helps designers conduct trial-and-error processes by comparing multiple possibilities, the participant mentioned a problem:

[A-4] Film dubbing interface metaphor [is inconvenient]. The assumption is that a TV program or a cinema film is being made, which forces the adoption of the system to other modes. For instance, why should there be only one Timeline screen? There are many instances where the moving image is presented across many screens.

Existing video editing software has adopted a metaphor of the tools used in the industrial film-making process. As a result, the software presents only the time axis of the sequence currently being composed. This problem was also reported in the context of musical composition [1].

3.3 Identified Design Rationales
Three design rationales have been identified based on our analysis:

• Allowing seamless transitions between a conceptual holistic viewpoint (overview) and a partial implementation of the concepts (detail)

• Visualising multiple viewpoints and timelines

• Enhancing trial-and-error processes

These three points are not mutually exclusive; we separated them in order to facilitate implementing a system based on the knowledge obtained through this study, and to contribute to a more generic design theory for creativity support tools.

3.3.1 Allowing Seamless Transition between Overview and Detail Representations
The process of making a video sequence, especially in an artistic context, is a design process with a hermeneutical feature: the whole defines the meaning of a part and, at the same time, the meanings of the parts decide the meaning of the whole [20]. So a video authoring tool should be designed to support this transition between the whole and a part.

Although the overview + detail concept is a generic design rationale applicable to various kinds of design problems, we consider it a very important strategy for the process of creating time-based information, because time-based information by nature takes a form of which it is difficult to have an overview


. For example, in order to see the effect on the whole caused by a partial change, you have to watch and/or listen to the sequence through from beginning to end, or you have to memorise the whole and imagine what impact the partial change has on it. In architectural design, the effects of partial changes are immediately visualised on the sketch, which makes it easy for designers to move between the whole and a part. This transition should also be supported in the process of making a video sequence.

The participant first conducted conceptual design in the sketching process by overviewing the whole, using written and diagrammatic annotations to articulate relationships among the annotated elements. Then the participant proceeded to detailed implementation of the video sequence in the software, which was a one-way process. As the conceptual design of the whole is inseparable from the detailed implementation in the software, they should be seamlessly connected.

The reason why this one-way transition occurs may be partially that this was the early stage of the process of making a video sequence. However, we consider that it is because the tools for conceptual design (sketch) and implementation (software) are completely separated. As a result, a designer does not modify a sketch once it is completed. This phenomenon was observed in the study on the musical composition process [1]. There, it was observed that comparison between multiple possibilities occurred when an overview was provided with the traditional score-metaphor interface. It is expected that providing an overview supports comparisons between multiple possibilities derived from partial modifications.

3.3.2 Visualising Multiple Viewpoints and Timelines
It was observed that the participant visualised multiple viewpoints and timelines; however, existing software presents only one timeline.

Amitani et al. [1] have claimed, based on their experiment, that a musical composition process does not always proceed along the timeline of a musical piece. They also claimed the importance of presenting multiple timelines in musical composition. Some musicians compose a musical piece along its timeline; however, we consider that tools should be designed to support both cases. This is applicable to the process of making a video sequence.

3.3.3 Enhancing Trial-and-Error Processes
As mentioned before, a shot list helps designers to understand relationships among sequences. In the questionnaire, the participant described how he uses the list:

[A-5] Selecting short segments into a sequence on the Timeline, to begin testing noted possibilities with actual practice and their outcomes.

Although the shot list helps designers to some degree, the list representation only allows designers to sort the listed materials along an axis such as alphabetical order. This is useful; however, designers cannot arrange the materials in their own semantic ways. This makes it difficult for designers to grasp relationships between files and sequences, so the list representation prevents designers from fully exploring multiple possibilities.

Instead of the list representation, a spatial representation is more suitable for this kind of information-intensive task [12]. While this comparison has been conducted in a designer's mind, externalisation of a designer's mental space is helpful for deciding whether an information piece is used or not. Shoji et al. [19] have investigated differences between a list representation and a spatial representation. They found that a spatial representation contributes to elaborating concepts better than a list representation. We believe that spatial representations will facilitate a designer's comparison of multiple possibilities.

    In the next section, a prototype system for supporting the process of making a video sequence with spatialrepresentations is presented.

4. KNOWLEDGE NEBULA CRYSTALLIZER FOR TIME-BASED INFORMATION
The Knowledge Nebula Crystallizer (KNC) was originally suggested by Hori et al. [10] as a prototype knowledge management system with a repository called a knowledge nebula. The knowledge nebula is an unstructured collection of small information pieces. The essential operations of the KNC system are crystallization and liquidization. During crystallization, information pieces from the nebula are selected and structured according to a particular context, resulting in a new information artefact. During liquidization, an information artefact is segmented into elements that are added to the knowledge nebula.

The Knowledge Nebula Crystallizer for Time-based Information has been developed with Java 1.4.2 on the Mac OS X platform. Figure 5 shows a snapshot of the KNC4TI system.

Figure 5: A snapshot of the KNC4TI system, showing the OverviewEditor, DetailEditor and ElementViewer (double-clicking a grouping opens the DetailEditor)

The interface part of the KNC4TI system is composed of: (1) the OverviewEditor; (2) the DetailEditor; (3) the ElementViewer; and (4) the ElementEditor. For a practical reason, we have adopted FinalCut Pro HD as the ElementEditor. The reason is described later in this section.

4.1 OverviewEditor
The OverviewEditor provides, as its name says, an overview of what movie objects are available at hand. They are added either by choosing a folder that contains movie objects to be


potentially used, or by dragging & dropping movie files into the OverviewEditor. Figure 6 shows a snapshot of the OverviewEditor.

Each object shows thumbnails on itself, in addition to its file name, so that a designer can grasp what the movie is about. When a movie object is double-clicked, the ElementViewer pops up and the corresponding movie file is played so that the designer can check the contents (right in Figure 5). The ElementViewer is a simple QuickTime-based viewer that plays a selected movie object on demand.

Figure 6: OverviewEditor, showing movie objects, a grouping by the user, and a comment by the user

The shot list in FinalCut Pro HD is a component similar to the OverviewEditor, in the sense that available movie objects are listed in the shot list; however, the following expected interactions are advantages of adopting a spatial representation:

Rearranging the positions of the objects: While a list representation provides designers with a mechanically sorted file list, a two-dimensional space allows designers to arrange movie objects according to their own viewpoints. For example, movie files that might be used in a certain video work can be arranged close together so that the designer can incrementally formalise his or her ideas about the video piece [18].

Annotations: An annotation box appears by drag & drop in a blank space in the OverviewEditor. A designer can put annotations in it and can freely arrange it anywhere on the OverviewEditor. This is an enhancement of the written annotation function.

Grouping: A designer can explicitly group movie objects on the OverviewEditor. Grouped movie objects are moved as a group. Objects can be added to and removed from a group at any time by drag & drop.

Copy & Paste: A movie object does not always belong to only one group when a designer is exploring which kinds of combinations are good for a certain video work. To facilitate this process, a copy & paste function was implemented. While only one possibility can be explored in the timeline representation and shot list of normal video editing software, copy & paste visually allows a designer to examine multiple possibilities.
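The grouping and copy & paste behaviour above can be captured by a simple data model. The paper does not specify the KNC4TI internals, so the names and structure below are our own sketch: a group moves its members together, and pasting creates a new object referring to the same movie file, so one file can appear in several groups at once.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MovieObject:
    filename: str          # the movie file this object refers to
    x: float = 0.0         # position on the OverviewEditor canvas
    y: float = 0.0

    def copy(self) -> "MovieObject":
        # Pasting yields a distinct object for the same file,
        # so the file can belong to more than one group.
        return MovieObject(self.filename, self.x, self.y)

@dataclass
class Group:
    name: str
    members: List[MovieObject] = field(default_factory=list)

    def add(self, obj: MovieObject) -> None:
        self.members.append(obj)

    def remove(self, obj: MovieObject) -> None:
        self.members.remove(obj)

    def move_by(self, dx: float, dy: float) -> None:
        # Grouped movie objects are moved as a group.
        for obj in self.members:
            obj.x += dx
            obj.y += dy
```

Moving one group leaves pasted copies in other groups untouched, which is what lets a designer lay out alternative combinations side by side.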

4.2 DetailEditor
The DetailEditor appears when a group on the OverviewEditor is double-clicked. The DetailEditor shows only the objects in the clicked group, as shown in Figure 5. Figure 7 shows a snapshot of the DetailEditor.

Figure 7: DetailEditor, showing the player and the grouped objects

In the DetailEditor, the horizontal axis is a timeline and the vertical axis is similar to tracks. It plays the grouped movies from left to right. If two objects overlap horizontally, as Figure 7 shows, then the first movie (Hatayoku.mpg) is played first, and partway through it the next one (Impulse.mpg) is played. The timing of the switch from the first movie to the second is defined by the following rule:

Movie 1 has a time duration d1 and is represented as a rectangle of width l1 pixels located at x = x1. Movie 2 has a duration d2 and is represented as a rectangle of width l2 pixels located at x = x2 (Figure 8).

When the play button is pushed, movie 1 is played in the ElementViewer, and after time t1 the second movie is played. The playing duration t1 is defined by equation (1) in Figure 8.

Figure 8: The Timing Rule for Playing Overlaps. Object 1 (duration d1, width l1 pixels, left edge at x = x1) horizontally overlaps Object 2 (duration d2, width l2 pixels, left edge at x = x2). The switch time is

    t1 = d1 (x2 - x1) / l1        (1)

Following this rule, the movie objects grouped into the DetailEditor are played from left to right in the ElementViewer. This allows designers to quickly check how a certain transition from one file to another looks.

Designers can open as many DetailEditors as they wish so that they can compare and explore multiple possibilities.
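Equation (1) maps the pixel offset of the second rectangle onto movie 1's own time scale. A minimal sketch of the rule (the function name is ours):

```python
def switch_time(d1: float, l1: float, x1: float, x2: float) -> float:
    """Time t1 after which playback switches from movie 1 to movie 2,
    per equation (1): t1 = d1 * (x2 - x1) / l1.
    d1: duration of movie 1 (seconds); l1: width of its rectangle (pixels);
    x1, x2: left edges of the two rectangles (pixels)."""
    if not x1 <= x2 < x1 + l1:
        raise ValueError("movie 2 must start inside movie 1's rectangle to overlap")
    return d1 * (x2 - x1) / l1
```

So if movie 1 lasts 10 seconds, is drawn 100 pixels wide at x1 = 0, and movie 2 starts at x2 = 40, the switch occurs 4 seconds in: the fraction of the rectangle to the left of movie 2 is played before switching.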

4.3 ElementEditor: Seamless Connection with FinalCut Pro HD
Starting from the OverviewEditor, a designer narrows down his focus with the DetailEditor and the ElementViewer; then the designer needs to work on the video piece more precisely. For this purpose, we adopted FinalCut Pro HD as the ElementEditor, and the KNC4TI system is seamlessly connected with FinalCut Pro HD via XML.

FinalCut Pro HD provides importing and exporting functions for .xml files of video sequence information. The


DetailEditor also exports and imports .xml files formatted in the Final Cut Pro XML Interchange Format [2] when the designer double-clicks any point on a DetailEditor. An XML file exported by the DetailEditor is automatically fed to FinalCut Pro HD. Figure 9 shows the linkage between the DetailEditor and FinalCut.

Figure 9: Linkage between the DetailEditor and FinalCut through XML import/export (triggered by a double click on the DetailEditor)
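The export step can be illustrated with a minimal sketch. The real Final Cut Pro XML Interchange Format [2] is considerably richer; the element layout below is a simplified approximation for illustration only, not the actual schema.

```python
import xml.etree.ElementTree as ET

def export_group_as_xml(sequence_name, clips):
    """Serialise a DetailEditor group as a simplified sequence document.
    `clips` is a list of (filename, start_frame, end_frame) tuples.
    NOTE: element names are illustrative, not the full FinalCut schema.
    """
    root = ET.Element("xmeml", version="1")
    seq = ET.SubElement(root, "sequence")
    ET.SubElement(seq, "name").text = sequence_name
    track = ET.SubElement(seq, "track")
    for filename, start, end in clips:
        clip = ET.SubElement(track, "clipitem")
        ET.SubElement(clip, "name").text = filename
        ET.SubElement(clip, "start").text = str(start)
        ET.SubElement(clip, "end").text = str(end)
    return ET.tostring(root, encoding="unicode")
```

The point of the design is that the interchange file, not a shared API, is the bridge: any tool that can read and write the format can sit alongside FinalCut in the workflow.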

Using FinalCut Pro is advantageous for the following reasons. First, it increases practicality. One of the most difficult things in applying a new system in practice is that practitioners are reluctant to change their tools, and FinalCut Pro is one of the most widely used video authoring tools. That is, the KNC4TI could potentially be used as an extension of an existing video authoring environment.

Second, it reduces the development load. It is not efficient to develop a system that beats, or even matches the quality of, a well-developed system such as FinalCut Pro. We are not denying the existing sophisticated systems, but extending what they can do for human designers.

5. TOWARDS A GENERATIVE VIDEO AUTHORING SYSTEM
Edmonds has suggested that a computer can certainly be a stimulant for human creative activities [6]. The important question is how we can design a computer system that supports people in increasing their capacity to take effective and creative actions. We are currently developing components that extend the current system to a generative system that stimulates designers' thinking.

Figure 10 shows the model of the generative system. First, information artefacts (existing ones and/or new pieces of information) are collected and stored (left in Figure 10). A system (top in Figure 10) generates possible information artefacts (right in Figure 10). These outputs work in two ways: (1) as final products that a user can enjoy; and (2) as draft materials that a user can modify (at the centre of Figure 10).

In order to deliver possible information artefacts to users, a component called the Dynamic Concept Base (DCB) is being developed. It is a concept base that holds multiple similarity definition matrices which are dynamically reconfigured through interactions. The more the number of objects increases, the more difficult it becomes to grasp their relationships on a physically limited display. In order to assist a designer in grasping an overview of a movie file space, the way objects are arranged in a two-dimensional space is critical. Sugimoto et al. [21] have shown statistically that a similarity-based arrangement works better than a random arrangement for the comprehension of information presented in a two-dimensional space. That is, the DCB potentially has the ability to help a designer understand an information space. Movie objects are arranged based on the similarities computed by the DCB.

Figure 10: How a Generative System Works. Information artefacts (texts, videos, images, etc.) are fed to a generative system, which generates possible information artefacts (multimedia composites, web pages, documents, etc.) that work as stimulants. An artist, information designer, or member of the public / active audience interacts with them to produce a final outcome, feedback flows back into the system, and the output becomes an input for the next loop.

Similarities between movies are computed based on physical features such as brightness and hue. While the arrangement is conducted by the system, it does not necessarily fit the designer's context [11]. So the system should allow end-user modification [9] for the incremental formalisation of information artefacts [18]. The DCB is reconfigured through interactions such as rearranging, grouping and annotating objects. If two objects are grouped together by a designer, then the DCB computes their similarity again (Figure 11) so that the similarity definition becomes more contextually suitable.
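The paper does not give the exact reconfiguration rule, so the sketch below shows one plausible reading under our own assumptions: similarities start from physical features, and grouping two objects pulls their similarity entry toward 1 so that subsequent arrangements reflect the designer's context.

```python
import numpy as np

def initial_similarity(features: np.ndarray) -> np.ndarray:
    """Similarity from physical features (e.g. rows of [brightness, hue]):
    1 / (1 + Euclidean distance), so identical features give similarity 1."""
    diff = features[:, None, :] - features[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    return 1.0 / (1.0 + dist)

def regroup(sim: np.ndarray, i: int, j: int, weight: float = 0.5) -> np.ndarray:
    """Hypothetical DCB update: grouping objects i and j moves their
    similarity a fraction `weight` of the way toward 1 (symmetrically)."""
    sim = sim.copy()
    new_value = sim[i, j] + weight * (1.0 - sim[i, j])
    sim[i, j] = sim[j, i] = new_value
    return sim
```

Each grouping thus nudges the similarity definition away from the purely physical one toward the designer's semantic one, which is the incremental-formalisation behaviour the DCB is meant to support.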

[Figure 11 illustrates how user interactions restructure the Dynamic Concept Base: grouping movie objects and annotating them with user comments cause the original similarity matrix to be recomputed into a new, restructured matrix that reflects the user's groupings.]

Figure 11 Reconfiguring the DCB through Interactions

6. CONCLUSION
In this paper, we presented: (1) an analysis of the process of making a video sequence to identify design requirements for a supporting system; (2) the system developed based on this analysis; and (3) plans for a generative system.


The design rationales for an appropriate video-authoring tool derived from our investigation are summarised as three inter-related features: (1) allowing seamless transition between a conceptual holistic viewpoint (overview) and a partial implementation of the concepts (detail); (2) visualising multiple viewpoints and timelines; and (3) enhancing trial-and-error processes. A prototype system, "Knowledge Nebula Crystallizer for Time-based Information (KNC4TI)", was developed based on this analysis.

We will evaluate the system through user studies and implement the generative function.

7. ACKNOWLEDGMENTS
This project was supported by the Japan Society for the Promotion of Science, and is supported by the Australasian CRC for Interaction Design and the Australian Centre for the Moving Image. The authors are also grateful to Dr. Linda Candy and Mr. Mike Leggett for their useful comments for improving our research.

8. REFERENCES
[1] Amitani, S. and Hori, K. Supporting Musical Composition by Externalizing the Composer's Mental Space. In Proceedings of Creativity & Cognition 4, Loughborough University, Loughborough, 13-16 October 2002, 165-172.
[2] Apple Inc. Final Cut Pro XML Interchange Format.
[3] Bilda, Z. and Gero, J. Analysis of a Blindfolded Architect's Design Session. In 3rd International Conference on Visual and Spatial Reasoning in Design, MIT, Cambridge, USA, 22-23 July 2004.
[4] Cross, N., Christiaans, H. and Dorst, K. Analysing Design Activity. John Wiley & Sons, 1997.
[5] Eckert, C., Blackwell, A., Stacey, M. and Earl, C. Sketching Across Design Domains. In 3rd International Conference on Visual and Spatial Reasoning in Design, MIT, Cambridge, USA, 2004.
[6] Edmonds, E. Artists augmented by agents (invited speech). In Proceedings of the 5th International Conference on Intelligent User Interfaces, New Orleans, Louisiana, United States, 2000, 68-73.
[7] Ericsson, A. and Simon, H. Protocol Analysis: Verbal Reports as Data. MIT Press, Cambridge, MA, 1993.
[8] Finke, R.A., Ward, T.B. and Smith, S.M. Creative Cognition: Theory, Research, and Applications. A Bradford Book, The MIT Press, 1992.
[9] Fischer, G. and Girgensohn, A. End-user modifiability in design environments. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Empowering People, Seattle, Washington, United States, 1990, 183-192.
[10] Hori, K., Nakakoji, K., Yamamoto, Y. and Ostwald, J. Organic Perspectives of Knowledge Management: Knowledge Evolution through a Cycle of Knowledge Liquidization and Crystallization. Journal of Universal Computer Science, 10 (3), 2004, 252-261.
[11] Kasahara, K., Matsuzawa, K., Ishikawa, T. and Kawaoka, T. Viewpoint-Based Measurement of Semantic Similarity between Words. Journal of Information Processing Society of Japan, 35 (3), 1994, 505-509.
[12] Marshall, C. and Shipman, F. Spatial Hypertext: Designing for Change. Communications of the ACM, 38 (8), 1995, 88-97.
[13] Schoen, D.A. The Reflective Practitioner: How Professionals Think in Action. Basic Books, New York, 1983.
[14] Scrivener, S. and Edmonds, E. The computer as an aid to the investigation of art exploration. In Proceedings of Euro IFIP, Amsterdam, Netherlands, North-Holland Publishing Company, 1979, 483-490.
[15] Shibata, H. and Hori, K. A Framework to Support Writing as Design. Journal of Information Processing Society of Japan, 44 (3), 2003, 1000-1012.
[16] Shibata, H. and Hori, K. Toward an integrated environment for writing. In Proceedings of the Workshop on Chance Discovery, European Conference on Artificial Intelligence (ECAI), 2004.
[17] Shipman, F., Girgensohn, A. and Wilcox, L. Hyper-Hitchcock: Towards the Easy Authoring of Interactive Video. In Proceedings of INTERACT 2003, 2003, 33-40.
[18] Shipman, F.M. and McCall, R.J. Incremental formalization with the hyper-object substrate. ACM Transactions on Information Systems, 17 (2), 1999, 199-227.
[19] Shoji, H. and Hori, K. S-Conart: an interaction method that facilitates concept articulation in shopping online. AI & Society (Special Issue on Social Intelligence Design for Mediated Communication), 19 (1), 2005, 65-83.
[20] Snodgrass, A. and Coyne, R. Is Designing Hermeneutical? Architectural Theory Review, 1 (1), 1997, 65-97.
[21] Sugimoto, M., Hori, K. and Ohsuga, S. An Application of Concept Formation Support System to Design Problems and a Model of Concept Formation Process. Journal of Japanese Society for Artificial Intelligence, 8 (5), 1993, 39-46.
[22] Suwa, M., Purcell, T. and Gero, J. Macroscopic analysis of design processes based on a scheme for coding designers' cognitive actions. Design Studies, 19 (4), 1998, 455-483.
[23] Tanaka, Y. Musical Composition as a Creative Cognition Process (in Japanese). Report of Cultural Sciences, Faculty of Tokyo Metropolitan University, 307 (41), 2000, 51-71.
[24] Yamamoto, Y., Nakakoji, K. and Aoki, A. Visual Interaction Design for Tools to Think with: Interactive Systems for Designing Linear Information. In Proceedings of the Working Conference on Advanced Visual Interfaces (AVI 2002), Trento, Italy, ACM Press, 2002, 367-372.