
MaquetistaVirtual

Interactive Procedural Modeling

Luis Miguel Saraiva Lopes

IST / Technical University of Lisbon, Lisbon, Portugal

[email protected]

ABSTRACT

The techniques used in existing modeling applications prove to be time-consuming, generic, and inappropriate for models that have similar components, such as buildings. This led to the creation of a new approach that generates models from a textual description, called procedural modeling. However, few users are able to write the necessary descriptions, so model creation remains long, tiring, and difficult to learn. From this emerges the motivation to create a new and expeditious procedural modeling environment that matches the user's skills. We aim to provide a procedural modeling approach that is more interactive, natural, and easier to learn than current ones. We support model editing by interacting directly with the model's representation, adapting a Wiimote to trigger procedural operators. To visualize the model we propose an active stereoscopy solution that allows the user to observe the virtual model on a table with a sense of depth, watching it as if it were a physical scale model. We also propose functionalities such as component duplication and a template library, which abstract the user from procedural concepts and speed up editing. We performed an experimental evaluation whose primary purpose was to check the feasibility of the proposed approach. The results of this assessment highlight the users' preference for our more immersive and interactive approach. We conclude that it is feasible to create an immersive modeling environment that allows for more interactive, enjoyable, and easy-to-learn procedural editing.

Keywords: Procedural Modeling, Stereoscopic Visualization, Natural Interaction, Immersive Environment.

1 INTRODUCTION

For architects, models are the best way to make a realistic representation of large-scale structures such as buildings. Recently, the introduction of computer-aided design (CAD) systems has enabled the spread of virtual models in architecture studios.

The techniques used in existing modeling applications build a model by specifying each component individually, which is too lengthy, generic, and interactively inappropriate for models that have many similar components, such as architectural models. This motivated a new way of modeling, called procedural modeling, which uses computational techniques to generate 3D models from a textual description, usually a series of rules that can be embedded in an algorithm, producing very realistic models with less time spent on the creation process.

Currently, the applications commonly used for three-dimensional modeling, such as AutoCAD, 3D Studio Max, or SolidWorks, are based on interaction via mouse and keyboard following the WIMP approach (window, icon, menu, pointing device), which is a time-consuming, tedious, and not very swift way of modeling; it limits the expression of the designer's ideas and requires long and slow learning.

However, despite the inappropriate interaction methods and little innovation, applications such as the iCube CAVE allow a better perception of 3D objects through immersive virtual reality environments, where the user feels more involved in the virtual world.

It is therefore important to know what is needed to build a system with appropriate user interfaces, in order to increase users' comfort during the design process of three-dimensional models of buildings.

Thus emerges the motivation to create a new and expeditious modeling environment that meets the user's skills, improving the interaction techniques used in procedural modeling.

In this paper we develop a modeling environment that integrates immersive virtual environments with procedural modeling, allowing a faster, more interactive, and less tedious editing of building models, with the aid of tools that speed up component changes.

We intend to provide a procedural modeling approach that is more interactive, natural, and easier to learn than the typical approaches, through auxiliary tools that increase user satisfaction with this type of modeling.

Hence, we propose a solution that encompasses three different contexts, each with its own strengths and challenges: the creation of more natural and quicker user interfaces for modeling; the development of features that allow the user to abstract from procedural concepts, enabling quicker editing of models; and, finally, the visualization of models in an engaging and natural way.

To visualize the model we provide an active stereoscopy solution that enables users to observe the virtual model on a table with a sense of depth.

We also allow editing by interacting directly with the model, which is made possible by tracking the movement of an interface whose buttons apply the modeling operators. As in the visualization, since there is no restriction of movement, the user can move freely in the environment to edit the model. With this solution, error conditions are avoided due to the reduced ambiguity in interpreting button commands, and we can change all the components of the model regardless of their size or location.

In order to validate the solution's options, verify the impact of the high-level editing features, and assess the feasibility and learning time of the proposed approach, we performed an experimental evaluation. For this evaluation we developed a prototype with two interfaces, one immersive and one non-immersive, so we could compare the two approaches.

The experimental evaluation consisted of three tasks given to ten users, divided into two groups, each assigned to one approach, followed by a questionnaire.

From the analysis of the results we highlight the fact that all users were able to finish the tasks within a time similar to the typical approach. We also highlight the users' preference for a more immersive and interactive approach such as the proposed one, and their satisfaction with the modeling support tools.

From these results we retrieve the following main contributions:

1. An immersive approach that allows typical procedural modeling operations, such as division, repetition, and extrusion, to be completely abstracted from the user and applied directly on the model.

2. The introduction, in a procedural modeling environment, of new high-level editing tools, such as a template library, duplication of model components, and a copy buffer.

3. Evidence that it is feasible to create an immersive modeling environment that allows for more interactive, enjoyable, and easy-to-learn procedural editing.

2 RELATED WORK

To build on a consistent effort and a good theoretical base, we looked carefully at systems developed in areas related to this subject. The analysis is divided into two main themes: procedural modeling and virtual modeling.

Procedural Modeling

With the advancement of modeling techniques, new approaches emerged that aim to automate modeling in order to reduce the time required to produce elaborate models that can be structured hierarchically and may contain many repeated patterns. In this context, the first systems to quickly create very realistic models of plants appeared, such as those of Power et al. [13] and Boudon et al. [1].

This type of modeling is called procedural, and its main objective is the creation of large models through the specification of rules or through algorithms.

In these systems it is usual to use grammars to describe the models that are generated procedurally. In the modeling of buildings and cities, the Grammar of Forms appeared, an adaptation of L-system grammars used specifically to generate geometric shapes.

However, grammars have a very significant learning curve, especially for users who are not accustomed to programming concepts. Moreover, grammars are limited to the symbols they interpret, and it is not possible to perform actions that do not exist in the grammar's symbols.

City models have some characteristics that favor the use of procedural modeling for their construction, such as the existence of several repeated patterns in their shapes or the ability to structure their content in a hierarchical manner.

Up to 2001, systems that built cities procedurally relied on too much geographical information (height, terrain, ...) and social information (density, height maps, ...). However, Parish and Mueller [12] showed with CityEngine that a complete city can be modeled using less information, which is mostly used to generate the road network, while the buildings are generated through user-defined rules.

Wonka et al. [15] reached the conclusion that defining grammar rules is not enough to obtain buildings that are consistent, credible, varied, and that meet certain criteria. To address this, they propose Division Grammars, an extension of the Grammar of Forms, where the application of rules splits shapes into smaller ones.

Division Grammars play an important role in simplifying the specification of rules. For example, suppose we want to specify the rules for constructing a facade with four windows. The original symbol is split into four "front" symbols; each "front" is in turn divided so that its center contains the "window" symbol, the area above the window contains the "adornment" symbol, and the rest is covered by terminal "wall" symbols. The adornment is itself a set of other terminal symbols. The "window" symbol is in turn divided into two terminal symbols, "sill" and "windows", which are stone and glass window textures, respectively. Since there are no more non-terminal symbols, the derivation is finished (fig. 1, left).
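To make the derivation concrete, the sketch below (in Python) expands such a rule set for the facade example above. The rule table, the TERMINALS set, and the derive helper are illustrative assumptions for this example only; they omit the spatial split information (axes and ratios) that a real division grammar carries and are not the grammar engine of any of the systems discussed here.

```python
# Minimal sketch of a division-grammar derivation, loosely mirroring the
# facade example above. Rule format and symbol names are illustrative only.

# Each rule rewrites a non-terminal symbol into a list of child symbols.
RULES = {
    "facade": ["front", "front", "front", "front"],      # split the facade into 4 fronts
    "front":  ["wall", "adornment", "window", "wall"],   # window centred, adornment above
    "window": ["sill", "windows"],                       # stone sill + glass texture
}

TERMINALS = {"wall", "adornment", "sill", "windows"}

def derive(symbol):
    """Recursively expand a symbol until only terminal symbols remain."""
    if symbol in TERMINALS:
        return [symbol]
    expansion = []
    for child in RULES[symbol]:
        expansion.extend(derive(child))
    return expansion

if __name__ == "__main__":
    # The derivation finishes when no non-terminal symbols are left.
    print(derive("facade"))
```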

Müller et al. [10] returned to the modeling of buildings, this time creating facades with a plausible resemblance to photographs of an actual building. One major difference of this system is the use of a 3D object library to insert low-level architectural elements, which gives the results higher visual quality without requiring more modeling time.

Figure 1: Example of division grammar

Several editing methods are available: manipulation of segments in the graph, handling of vertices, relocation of streets, and editing of streets in layers. There is also the possibility of applying tensor fields at a local level, allowing, for instance, the creation of a park within a city with a street typology different from the general topology, including the connection of the "local" streets to the "general" ones. This approach has the disadvantage that the results are hardly controllable, because the modifications are made in an indirect way.

Lipp et al. [6] introduced a system for visual editing of and interaction with shape grammars, inspired by the paradigm of providing direct rather than indirect control, which makes possible a graphical editor for procedural modeling.

One of the existing barriers was the difficulty of making local changes, for example, changing the size of a single window [15]. Therefore, they propose a mechanism called "accurate locators" that allows an instance to be clearly specified in the hierarchical tree, so that variables can be associated with the locator, that is, a local modification.

They also provide "semantic locators" which, as the name implies, allow selection by the semantics of the elements (for instance, an entire facade, floor, or column).

Another identified problem was the lack of persistence of the changes made to the grammar [1]. To resolve this, the locator + attribute pairs are stored externally and are reapplied during model generation.
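As an illustration of this persistence idea, the following sketch stores edited attributes outside the grammar, keyed by a locator path into the shape hierarchy, and overlays them during regeneration. The Locator type, record_edit, and apply_edits are hypothetical names for this example, not the data structures of the cited system.

```python
# Illustrative sketch of externally stored (locator, attributes) pairs.
# Names and structures are assumptions made for this example.
from typing import Dict, Tuple

# A locator identifies one instance in the derivation hierarchy,
# e.g. ("facade", 0, "floor", 2, "window", 1).
Locator = Tuple[object, ...]

# Edits survive regeneration because they live outside the grammar itself.
persistent_edits: Dict[Locator, Dict[str, float]] = {}

def record_edit(locator: Locator, attribute: str, value: float) -> None:
    """Store a local modification keyed by its locator."""
    persistent_edits.setdefault(locator, {})[attribute] = value

def apply_edits(locator: Locator, attributes: Dict[str, float]) -> Dict[str, float]:
    """Called during model generation: overlay stored edits on default attributes."""
    merged = dict(attributes)
    merged.update(persistent_edits.get(locator, {}))
    return merged

# Example: widen one specific window; the change is reapplied on every rebuild.
record_edit(("facade", 0, "floor", 2, "window", 1), "width", 1.4)
print(apply_edits(("facade", 0, "floor", 2, "window", 1), {"width": 1.0, "height": 1.2}))
```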

The visual editor lets the user edit the grammar through a toolbar with the grammar commands (Split, Repeat, Split Component, ...) and has a tree view of the model rules that allows editing of their attributes. In addition, the user can preview the model in 3D and edit it directly by manipulating the planes and divisions with the grammar commands (fig. 2). This system was the first editor with a fully procedural basis that allows a user to create and modify grammar rules without requiring any textual editing.


Figure 2: Interface of the editor by Lipp et al. [6]

Virtual Modeling

Usually, when we refer to a virtual reality environment, we mean one in which the user is totally immersed and is able to interact with a completely synthetic world. However, the concept of virtual reality has also been associated with other types of display to which the concepts of total immersion and a fully synthetic world do not apply [8].

Milgram et al. [9] introduced the notion of the Reality-Virtuality Continuum, whose extremes are the real environment on the left and the virtual environment on the right. Virtual environments can be classified as non-immersive or immersive. An immersive environment is one where the user feels involved in the virtual world and whose actions in the real world have some kind of direct correspondence with the virtual world. In a non-immersive environment the user can perfectly distinguish the real from the virtual world, viewing the virtual world through a window, without a direct correspondence between them.

In order for the user to have a better understanding of the connection between the real and the virtual world, it is usual to establish a correspondence between a physical object and a virtual object. Systems that map physical objects to virtual-world objects, where the user interacts with the virtual object by manipulating the physical one, are called tangible interfaces [2].

One type of tangible interface in augmented reality scenarios is the physical marker, which represents an object in the virtual world that can be handled according to the marker's movement. For this it is usual to use tracking techniques. These mechanisms can also be used to capture the movement of the user's hands, working as tangible interfaces by linking them to virtual representations of the hands [3] or of objects.

A recent approach is to use the Nintendo Wiimote as a tangible interface. Schlömer et al. [14] achieved good gesture-recognition results with this controller. However, one cannot make selections in a virtual environment with it, because it is not possible to determine the controller's location in the interaction space, making it necessary to use markers and infrared sensors. In addition, the controller also proved to be unsuitable for operations that require greater precision, such as selections and local modifications of the model.

3 SOLUTION

The developed solution offers a new and expeditious approach to editing models generated with procedural modeling techniques, one that facilitates learning the concepts of this type of modeling and abstracts the user from its specifics.

3.1 Overview

The approach is developed in three different contexts: Interaction - creating new ways of interacting with the model; Modeling - the development of new mechanisms for changing models in an abstract way; Visualization - the visualization of models in a more natural and engaging way (fig. 3).

Figure 3: Solution Components

The proposed interaction mode deviates from the one accomplished through traditional means such as mouse and keyboard, and is composed of means intended to engage the user in a more interactive modeling environment, providing a combination of interfaces in an immersive environment (3DUI).

These interfaces use editing mechanisms for procedural models that do not require knowledge of the specifics of the editing operators. This change management module (GA), together with the procedural generation module (WG), forms the modeling context of this proposal, where the internal representation of the model is created and changed.

From this representation, graphic shapes are created to make up the visual feedback given to the user (OSG); this feedback is constantly updated to match the internal representation that the model has at a given moment, providing real-time feedback of the model edits.

To view the model and its edits, it is necessary to eliminate the barrier that the monitor imposes on the visualization of the scene. Thus, the proposed approach allows the user to move naturally around the virtual model as if it were a physical scale model. Since the user does not lose track of the real space around him, he manages to stay involved with the modeling environment.

3.2 Visualization

The graphical visualization of models has a major impact on the user experience and influences the decisions made for the remaining components of the solution, such as the way of interacting with the models and, consequently, how they are edited.

We analyzed several visualization approaches and concluded that the most suitable type is a semi-immersive virtual reality environment, because it is possible to partially see the real world and, simultaneously, different angles of the virtual model. This allows us to avoid the use of a camera to film the scene, making the setup cleaner and more versatile.

To have a really effective stereoscopic effect in this scenario, it is necessary to adapt the visualization of the scene to the position and orientation of the user's head. Due to the characteristics of our approach, the adopted technology needs to allow natural and unrestricted movement, besides having a good degree of accuracy for some modeling operations (e.g., division).

For this, we use a tracking system that records and reconstructs motion based on image analysis, using ten optical-electronic devices (video cameras) and reflective infrared markers (passive markers). There are no restrictions on the form or type of movements that can be analyzed, although the factors that limit the motion analysis are imposed by the cameras, the lighting used, and the number and position of the markers.

We placed five markers on the glasses, forming a virtual rigid body that yields a single position and orientation. The markers were placed asymmetrically to avoid ambiguities when interpreting the orientation.
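For illustration only, the sketch below shows one standard way a rigid-body pose could be recovered from labeled marker positions, by aligning the measured marker layout with the tracked positions through an SVD-based (Kabsch) fit; the tracking system performs this internally, and the marker coordinates in the example are arbitrary.

```python
# Illustrative sketch (not the tracking system's actual algorithm): recover the
# position and orientation of a marker rigid body from its tracked 3D marker
# positions, given the marker layout measured in the body's local frame.
import numpy as np

def rigid_body_pose(local_pts: np.ndarray, tracked_pts: np.ndarray):
    """Return (R, t) such that tracked ~= R @ local + t (Kabsch / SVD method).

    local_pts, tracked_pts: (N, 3) arrays of corresponding marker positions.
    """
    c_local = local_pts.mean(axis=0)
    c_track = tracked_pts.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (local_pts - c_local).T @ (tracked_pts - c_track)
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_track - R @ c_local
    return R, t

# Example with an asymmetric 5-marker layout (arbitrary illustrative numbers).
local = np.array([[0, 0, 0], [8, 0, 0], [0, 5, 0], [3, 2, 4], [7, 4, 1]], float)
true_R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)   # 90 degrees about Z
tracked = local @ true_R.T + np.array([10.0, 2.0, 50.0])
R, t = rigid_body_pose(local, tracked)
print(np.round(R, 3), np.round(t, 3))
```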

3.2.1 Feedback Mechanisms

To achieve an easy and simple interaction, with reduced learning time and, consequently, a better average task duration, we still needed to provide the user with a visual perception of the consequences of their actions. To this end, the application gives the user visual clues about the action being performed at a given moment, especially during selection and shape editing.

Selection

There are three different ways to select components: the main, the auxiliary, and the semantic selections. Each has a different purpose and an appropriate visual feedback.

The most important way to select components is the main selection mechanism, which uses hierarchical selection. With this selection type we can apply all forms of transformation, and it is the only one to which we can apply the two major operators, division and repetition. All selected components are highlighted in red (fig. 4).

However, this selection is not enough, because sometimes we need two components to make a change. For example, duplicating a component from the middle of the hierarchy onto another component, also in the middle of the hierarchy, requires two components to be selected simultaneously: one as the origin and the other as the destination.

To handle such situations, an auxiliary selection is provided, which also uses the hierarchical selection mechanism. This selection can only be used if another component is already selected with the main selection. Since the red highlight is already taken, components selected this way are highlighted in yellow.

Even with these two types of selection, it would not be possible to make simultaneous changes to a set of components. For this reason, a semantic selection is available, which selects the components of the model that give the user the same semantic perception, such as all the windows of a floor (fig. 4).
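The following sketch summarizes how the three selection modes could fit together; the Component and Selection classes, field names, and colour strings are assumptions made for this example rather than the prototype's actual data structures.

```python
# Illustrative sketch of the three selection modes described above
# (main, auxiliary, semantic). Structures and names are assumptions.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Component:
    label: str                              # semantic label, e.g. "window"
    children: List["Component"] = field(default_factory=list)
    highlight: Optional[str] = None         # "red", "yellow" or None

    def walk(self):
        yield self
        for child in self.children:
            yield from child.walk()

class Selection:
    def __init__(self, root: Component):
        self.root = root
        self.main: Optional[Component] = None
        self.auxiliary: Optional[Component] = None

    def select_main(self, comp: Component):
        comp.highlight = "red"              # main selection: division/repetition allowed
        self.main = comp

    def select_auxiliary(self, comp: Component):
        if self.main is None:
            raise RuntimeError("auxiliary selection requires a main selection")
        comp.highlight = "yellow"           # e.g. destination of a duplication
        self.auxiliary = comp

    def select_semantic(self, scope: Component, label: str) -> List[Component]:
        """Select every component under `scope` with the same semantic label."""
        hits = [c for c in scope.walk() if c.label == label]
        for c in hits:
            c.highlight = "red"
        return hits

# Example: select all windows of one floor at once.
floor = Component("floor", [Component("window"), Component("wall"), Component("window")])
sel = Selection(floor)
print(len(sel.select_semantic(floor, "window")))   # -> 2
```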


Figure 4: Example of hierarchical and semantic selections

Editing Templates

We also developed forms of visual feedback for the editing operators that can be applied to the models, most notably resizing, splitting, and repetition.

In this approach the user can perform a division by indicating the cut at any point of the selected component, in a vertical or horizontal direction. While making the cutting motion with the controller, a black line appears along the cut being performed.

Similarly to division, for repetition the user makes a movement in the direction in which the selected component should be repeated. While performing this gesture, a blue line appears parallel to the movement axis, at the position where the repetition will occur.

There are also other operators and tools that require some kind of feedback, even if not a specifically dedicated one. Therefore, at the end of any attempted application of an operator, its success or failure is indicated by an icon next to the pointer.

The color and texture operators do not have a dedicated form of feedback; they are featured in a menu that eases access to the grammar tools.
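As an example of how such feedback could be driven, the sketch below maps the point where the cutting gesture hits the selected component to a split direction and a normalized ratio, which is the information the black feedback line needs. The Bounds2D type and cut_from_hit function are illustrative assumptions, not the prototype's code.

```python
# Illustrative sketch (assumed names): map the point where the cutting gesture
# hits a selected component to a split direction and ratio, which a feedback
# layer could use to draw the black cut line.
from dataclasses import dataclass

@dataclass
class Bounds2D:
    x_min: float
    x_max: float
    y_min: float
    y_max: float

def cut_from_hit(bounds: Bounds2D, hit_x: float, hit_y: float, vertical: bool):
    """Return (direction, ratio) for a division at the hit point.

    `vertical` reflects the direction of the cutting motion; the ratio is the
    normalized position of the cut inside the component, clamped to (0, 1).
    """
    if vertical:
        ratio = (hit_x - bounds.x_min) / (bounds.x_max - bounds.x_min)
    else:
        ratio = (hit_y - bounds.y_min) / (bounds.y_max - bounds.y_min)
    ratio = min(max(ratio, 0.01), 0.99)
    return ("vertical" if vertical else "horizontal"), ratio

# Example: a vertical cut one third of the way across a 3 m wide facade piece.
print(cut_from_hit(Bounds2D(0.0, 3.0, 0.0, 2.5), hit_x=1.0, hit_y=1.2, vertical=True))
```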

3.3 Interaction

It is intended that the user interacts with MaquetistaVirtual in a natural and quick way, with minimal errors; thus, as noted by Oh et al. [11], it is essential that the interaction is tolerant of errors unconsciously made by the user. To this end, the interfaces used must have good interaction completeness and interpret commands with low ambiguity.

The proposed interaction approach should be easy to learn; therefore, at most ten gestures should be available, so the user does not need to make an effort to memorize them [17].

Besides ease of learning, it is intended that the approach has enough precision to change low-level components such as a window, door, or balcony [7], that is, that it is sufficiently specific to make local manipulations in the model.

Model Editing

With the aim of directly editing the virtual model, it was necessary to establish an interactive method that could apply different editing operations on the model, or edit attributes such as color or texture, quickly and naturally.

We chose an approach that mixes a pointer-style interface with the Wiimote. Besides both being similar to interfaces that users are accustomed to as drawing tools (pencils and pens), the two are physically complementary: the Wiimote already has the necessary buttons and, if we track its movement, we can bypass its internal sensors and increase the controller's accuracy while editing the model. However, by choosing this solution we require the space surrounding the user in the interaction environment to be prepared to track the controller's movement.

The developed solution has two different components: the tracking of movements and the mapping of the controller's buttons to the developed editing features. In order to obtain a more precise position and orientation, markers were added to the Wiimote, allowing optical tracking of the controller, just as the stereoscopic glasses were enhanced for natural visualization (fig. 5).

To allow the user to see the exact spot of the model he is pointing at, we create a virtual extension of the controller, like a physical pointer. With this solution we increased the precision of handling the Wiimote, quickly identifying any model component, regardless of its size or the distance at which it is located.
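The sketch below illustrates the idea of the virtual pointer: the tracked controller pose defines a ray, and the pointed-at component is the nearest one the ray intersects. The axis-aligned bounding-box test and the function names are assumptions for this example; the prototype may resolve picking differently.

```python
# Illustrative sketch (assumed structures): extend the tracked Wiimote pose into
# a virtual pointer ray and find the closest model component it hits, using a
# simple ray / axis-aligned-box intersection test.
import numpy as np

def ray_hits_aabb(origin, direction, box_min, box_max):
    """Return the entry distance along the ray, or None if the box is missed."""
    t_near, t_far = -np.inf, np.inf
    for axis in range(3):
        if abs(direction[axis]) < 1e-9:
            if not (box_min[axis] <= origin[axis] <= box_max[axis]):
                return None
        else:
            t1 = (box_min[axis] - origin[axis]) / direction[axis]
            t2 = (box_max[axis] - origin[axis]) / direction[axis]
            t_near = max(t_near, min(t1, t2))
            t_far = min(t_far, max(t1, t2))
    return max(t_near, 0.0) if t_near <= t_far and t_far >= 0 else None

def pick_component(controller_pos, controller_forward, components):
    """components: list of (name, box_min, box_max); return the nearest hit."""
    best = None
    for name, box_min, box_max in components:
        t = ray_hits_aabb(controller_pos, controller_forward, box_min, box_max)
        if t is not None and (best is None or t < best[1]):
            best = (name, t)
    return best

# Example: the pointer ray selects the nearer of two window boxes.
scene = [("window_1", np.array([1.0, 0, 0]), np.array([1.5, 1, 1])),
         ("window_2", np.array([3.0, 0, 0]), np.array([3.5, 1, 1]))]
print(pick_component(np.array([0.0, 0.5, 0.5]), np.array([1.0, 0, 0]), scene))
```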

Model Space Manipulation

As with physical scale models, which can be positioned so that they are observed from a better perspective, our solution allows the user to manipulate the virtual model. To this end, it was necessary to establish a solution that lets the user manipulate the model in order to look at it from different perspectives, taking into account that this should be possible anywhere on the table, in a natural way, and with the non-dominant hand, since the dominant one is busy with the Wiimote applying operations to the model.


Figure 5: Wiimote and stereoscopic glasses with markers

In the adopted solution, the user places his fingers on the multi-touch surface, indicating the desired type of manipulation by the number of fingers placed. The user can change the number of fingers to switch the manipulation type as long as the fingers have not stayed in the same place for one second; after that, the manipulation type being interpreted is locked. Moving the fingers then manipulates the model in space.
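The following sketch captures this behaviour as a small state machine: the finger count proposes a manipulation type, and the type is locked once the count has stayed unchanged for one second. The mapping from finger counts to manipulation types and the class name are hypothetical; the paper does not specify them.

```python
# Illustrative sketch (assumed mapping and names) of the multi-touch behaviour
# described above: the finger count chooses the manipulation type, and the
# choice is locked after the fingers rest for one second.
import time

# Hypothetical mapping from finger count to manipulation type.
MANIPULATIONS = {1: "rotate", 2: "translate", 3: "scale"}

class TouchManipulator:
    DWELL_SECONDS = 1.0

    def __init__(self):
        self.locked_mode = None
        self.last_change = None

    def update(self, finger_count, now=None):
        """Feed the current finger count; returns the active manipulation mode."""
        now = time.monotonic() if now is None else now
        if self.locked_mode is not None:
            return self.locked_mode                      # mode already locked in
        mode = MANIPULATIONS.get(finger_count, "none")
        if self.last_change is None or mode != getattr(self, "_pending", None):
            self._pending, self.last_change = mode, now  # finger count changed
        elif now - self.last_change >= self.DWELL_SECONDS:
            self.locked_mode = mode                      # held for 1 s: lock it
        return mode

    def release(self):
        """All fingers lifted: reset for the next gesture."""
        self.__init__()

# Example: two fingers held still for one second lock the "translate" mode.
m = TouchManipulator()
m.update(2, now=0.0)
print(m.update(2, now=1.1))   # -> "translate" (locked)
```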

3.4 Modeling Operators

Procedural modeling techniques were chosen for model generation because of how easily they deal with repeated patterns, such as those found in buildings. Thus, the proposed system allows procedural models to be specified through a CGA Shape grammar, similar to the one proposed by Parish and Mueller in CityEngine [12].

We also provide a set of features to perform basic editing operations, which make procedural concepts transparent to the user and allow the immediate change of architectural components in the model. It is possible to perform operations that give volume to a shape (extrusion), change a shape's geometry (geometric transformations), divide a shape into two or more shapes (division), repeat a shape along a space (repetition), or simply change its color or texture.
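To illustrate how these operations could be exposed while keeping the procedural machinery hidden, the sketch below routes a user-facing operation name to a change on a simple component record. The dictionary-based component representation and the apply_operator signature are assumptions for this example, not the system's API.

```python
# Illustrative sketch (assumed names) of a single dispatch point for the basic
# editing operations listed above, hiding the underlying procedural rules.
from typing import Any, Dict

def apply_operator(component: Dict[str, Any], op: str, **params) -> Dict[str, Any]:
    """Apply a user-facing editing operation to a selected component record."""
    if op == "extrude":
        component["depth"] = params["depth"]                     # give volume to a shape
    elif op == "divide":
        ratio = params["ratio"]                                  # split into two children
        component["children"] = [
            {"label": component["label"], "size": component["size"] * ratio},
            {"label": component["label"], "size": component["size"] * (1 - ratio)},
        ]
    elif op == "repeat":
        count = params["count"]                                  # repeat along one axis
        component["children"] = [
            {"label": component["label"], "size": component["size"] / count}
            for _ in range(count)
        ]
    elif op in ("color", "texture"):
        component[op] = params["value"]                          # change appearance only
    else:
        raise ValueError(f"unknown operator: {op}")
    return component

# Example: a facade strip is divided, then one half is painted.
facade = {"label": "facade", "size": 6.0}
apply_operator(facade, "divide", ratio=0.5)
apply_operator(facade["children"][0], "color", value="#aa3333")
print(facade)
```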

Beyond these basic editing features, some tools are available that aim to facilitate and accelerate model editing. The purpose of some of them, like copy-paste or duplicate, is very similar to tools widely used in modeling applications or even in simple editors, with recognized benefits for the editing process.

While the application of a simple operator changes one component at a time, these auxiliary tools speed up model editing by applying a set of operators at once. Common to all of them is the process of copying the description associated with a selected component. This information is stored for later use or to be applied to any other component of the hierarchy, replacing everything below it.

Another important tool emerged in response to an obvious gap in other procedural approaches (e.g., in Yi et al. [16]): a repository of recurrently used architectural component models (windows, doors, moldings, etc.) that users can quickly apply to their models. Thus, a library of pre-defined architectural components corresponding to several architectural styles was introduced, to be used either in the specification of initial models or later while editing the generated model.

With all these features, it is possible to create complete and detailed models without resorting to textual descriptions, menus, or any representation of the model's internal hierarchy.
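As a sketch of the copy buffer and duplication described above, the code below copies the description subtree of a selected component and pastes it onto another component, replacing whatever that component had beneath it. The dictionary representation and the function names are illustrative assumptions.

```python
# Illustrative sketch (assumed structures) of the copy buffer and duplication
# tools: the procedural description below a selected component is copied and
# later pasted onto another component, replacing its existing hierarchy.
import copy

def copy_description(component: dict) -> dict:
    """Store a deep copy of the selected component's subtree in the buffer."""
    return copy.deepcopy(component)

def paste_description(buffer: dict, target: dict) -> None:
    """Apply the buffered description to the target, replacing its children."""
    target["children"] = copy.deepcopy(buffer.get("children", []))
    # Appearance attributes travel with the description as well.
    for key in ("color", "texture"):
        if key in buffer:
            target[key] = buffer[key]

# Example: duplicate a decorated window onto a plain one.
decorated = {"label": "window", "color": "#dddddd",
             "children": [{"label": "sill"}, {"label": "adornment"}]}
plain = {"label": "window", "children": []}
buf = copy_description(decorated)
paste_description(buf, plain)
print(plain)
```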

4 EVALUATION

To evaluate the prototype in terms of usability, a study was carried out with ten users (fig. 6), comparing execution results and satisfaction against a non-immersive editing method, similar to the procedural building-creation system proposed by Lipp et al. [6]. We wanted to see the level of learning, usability, and satisfaction that our system requires, and the extent to which users can build models faster using MaquetistaVirtual.

Figure 6: Two test users interacting with MaquetistaVirtual

We recall that the ultimate goal of this work is to provide an approach to editing procedural models that is faster, more interactive, and less tedious than the typical scripting-based approaches to this type of modeling. To achieve this goal, we developed a solution that combines stereoscopic visualization on a table, where operations are made directly on the model through a controller and a multi-touch surface.

It therefore becomes important to analyze all of the procedural features developed, drawing conclusions about user satisfaction with these tools and analyzing their contribution to the performance of the approach.

Since a major focus of the approach is on user interfaces, the objective of this evaluation is to draw conclusions about the naturalness and ease of use of the developed interfaces, as well as to demonstrate the feasibility of performing modeling tasks in a semi-immersive environment. We therefore intend to draw conclusions about how far this approach is able to replace other interactive procedural modeling solutions, and about how much learning is required before a user feels comfortable using the proposed solution. Some of these targets clearly imply a comparative analysis with other procedural modeling approaches, so we decided to make a comparison with an approach close to the one presented by Lipp et al. [6], where the user interacts directly with the model through its display on a monitor, using the typical mouse and keyboard interfaces.

Using the same metrics in the two approaches, we hope to better demonstrate the validity of the developed solution, comparing the gains achieved with our approach, which aims to be more interactive and natural and therefore faster. The tasks of the experimental evaluation cover the different features of the system and are organized with gradually increasing difficulty.

The motivation for each task and the goals that users should achieve were:

1. In the first task, we expected the user to have a first contact with the editing features, so the objective was to use the simplest editing operation and one auxiliary tool provided by our solution, using the simplest type of selection.

2. In this task, we wanted the user to perceive the practical effect of procedural concepts and draw conclusions about their benefits, so the objective was to perform two editing operations, one directly on the model and the other using semantic selection, allowing the user to understand procedural concepts.

3. In the last task, we wanted to analyse the proposed approach through the construction of a model from scratch, and also to analyse the benefits of direct interaction with the model and of the auxiliary tools. Thus, the task objective was to build a model, using a more advanced operator and the template library to build parts of it.

Figure 7: Average execution times for each test group

We recorded the time each user took to accomplish the identified tasks, in order to analyze the use of the developed interaction methods. The users started the tests in either the 2D or the 3D environment, so the results were divided into four groups: users that started with 2D versus users that started with 3D, and the corresponding results in each environment.

In figure 7 we can see the average execution time of each group. We observe that the execution times of the tasks in the first trial, when users experience their first approach, whether 2D or 3D, are very close, not revealing a significant difference. This indicates that there is not a big difference in task performance when users have their first contact with the system.

Although the execution times of the tasks in the 3D approach could be worse than expected, it is not possible to conclude that a typical procedural modeling approach is faster than the presented interactive and natural approach. This conclusion cannot be drawn because the comparison was made between the same set of features, with the approaches differing only in the interaction methods. Thus, the 2D approach does not really represent a typical procedural modeling system, since it allows, for example, the direct application of operators on the model, the absence of hierarchical graphs for selection, and the availability of auxiliary tools like the model library or duplication.

For the same reason, it is not possible to draw clear conclusions about the influence of the high-level procedural editing features on user performance; however, it is possible to make a simple comparison with the experimental data from the study by Lipp et al. [6], which was the first work to develop a fully procedural editor that enables a user to create and modify grammar rules without requiring any textual editing.

We can unequivocally state that users preferred the proposed approach in all measured parameters, which indicates the success of a more natural and engaging approach in the users' perception, confirmed by the good acceptance of particular points such as the modeling tools.

Thus, the most important conclusion to be drawn from these results is that it is possible to provide an immersive modeling environment that allows changes to be made to a procedural model in a way that is more interactive, enjoyable, and natural for the user. This conclusion stems mainly from the simple fact that users were able to finish the tasks in an average time similar to the 2D approach, but with far greater enthusiasm during their completion.

5 CONCLUSIONS

In this section we present the final conclusions, discussing the contributions of our work and points that might be addressed in the future.

5.1 Contributions

From the results obtained in the evaluation, we conclude that the proposed approach is a capable solution for editing a procedural model through an immersive environment interface, resulting in several contributions.

We found that users without any knowledge of procedural concepts were able to use the available operators and claimed to recognize characteristics of this type of modeling. Thus, we contribute an approach that allows typical procedural modeling operations, like division, repetition, and extrusion, to be performed in a completely abstract way and with direct manipulation of the model, without the explicit use of scripting for rule definition or of menus.

Another contribution was the introduction of new high-level editing tools, like the model library, duplication, and the copy buffer, into procedural modeling environments. These tools allowed better times in the performed tasks as well as good acceptance by the users.

Last but not least, through the questionnaires and the fact that all users accomplished the tasks in the experimental evaluation, we conclude that it is possible to build and edit procedural models with a multi-modal environment that combines stereoscopic visualization, interaction through the Wiimote, and a multi-touch surface.

So, we may state that the biggest contribution of our work is the conclusion that it is possible to build an immersive modeling environment that allows a procedural model to be edited in a more interactive, natural, and enjoyable way, and with easier learning.

5.2 Future Work

Throughout this research, we identified some ideas that can be explored in future work.

From the experimental evaluation (and its questionnaires), it would be useful to develop an undo tool, which allows the model to be returned to a former state. The mechanisms used in this work can be reused to provide this feature. Another feature consists in evolving the selection methods, allowing exceptions to be created for components, like the ones used in the OCTOR work of Jang and Rossignac [4].

Regarding modeling environments, we also discovered, through the evaluation tests, that the Wiimote is not a good approach, because it contains many buttons and, consequently, actions that turn out to be hard for novice users to internalize. Therefore, we suggest a more thorough study to discover a better way to interact with the hand on the model, whether using the tracking methods described in this work or depth sensors.

Another attractive approach is the use of tangible interfaces to represent components of the procedural model. This would result in a combination of the proposed approach and the one developed by Jota and Benko [5].

References

[1] F. Boudon, P. Prusinkiewicz, P. Federl, C. Godin, and R. Karwowski. Interactive design of bonsai tree models. Computer Graphics Forum (Proceedings of Eurographics), 22(3):591-599, 2003.

[2] J. M. S. Dias, P. Santos, and P. Nande. In your hand computing: Tangible interfaces for mixed reality. In Proceedings of the 2nd IEEE International Augmented Reality Toolkit Workshop, 2003.

[3] J. M. S. Dias, P. Santos, and R. Bastos. Gesturing with tangible interfaces for mixed reality. In International Gesture Workshop, pages 399-408, 2003.

[4] J. Jang and J. Rossignac. Octor: Occurrence selector in pattern hierarchies. In Shape Modeling and Applications, 2008 (SMI 2008), IEEE International Conference on, pages 205-212, June 2008.

[5] Ricardo Jota and Hrvoje Benko. Constructing virtual 3D models with physical building blocks. In Proceedings of the 2011 Annual Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '11), pages 2173-2178, New York, NY, USA, 2011. ACM.

[6] Markus Lipp, Peter Wonka, and Michael Wimmer. Interactive visual editing of grammars for procedural architecture. ACM Transactions on Graphics, 27(3):102:1-10, August 2008. Article No. 102.

[7] Wille Mäkelä, Markku Reunanen, and Tapio Takala. Possibilities and limitations of immersive free-hand expression: a case study with professional artists. In MULTIMEDIA '04: Proceedings of the 12th Annual ACM International Conference on Multimedia, pages 504-507, New York, NY, USA, 2004. ACM.

[8] Paul Milgram and Fumio Kishino. A taxonomy of mixed reality visual displays. IEICE Transactions on Information Systems, E77-D(12), December 1994.

[9] Paul Milgram, Haruo Takemura, Akira Utsumi, and Fumio Kishino. Augmented reality: A class of displays on the reality-virtuality continuum. pages 282-292, 1994.

[10] Pascal Müller, Gang Zeng, Peter Wonka, and Luc Van Gool. Image-based procedural modeling of facades. ACM Transactions on Graphics, 26(3), 2007.

[11] Ji-Young Oh, Wolfgang Stuerzlinger, and John Danahy. SESAME: towards better 3D conceptual design systems. In DIS '06: Proceedings of the 6th Conference on Designing Interactive Systems, pages 80-89, New York, NY, USA, 2006. ACM.

[12] Yoav I. H. Parish and Pascal Mueller. Procedural modeling of cities. In SIGGRAPH '01: Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, pages 301-308, New York, NY, USA, 2001. ACM Press.

[13] Joanna L. Power, A. J. Bernheim Brush, Przemyslaw Prusinkiewicz, and David H. Salesin. Interactive arrangement of botanical L-system models. In I3D '99: Proceedings of the 1999 Symposium on Interactive 3D Graphics, pages 175-182, New York, NY, USA, 1999. ACM.

[14] Thomas Schlömer, Benjamin Poppinga, Niels Henze, and Susanne Boll. Gesture recognition with a Wii controller. In TEI '08: Proceedings of the 2nd International Conference on Tangible and Embedded Interaction, pages 11-14, New York, NY, USA, 2008. ACM.

[15] Peter Wonka, Michael Wimmer, François Sillion, and William Ribarsky. Instant architecture. In SIGGRAPH '03: ACM SIGGRAPH 2003 Papers, pages 669-677, New York, NY, USA, 2003. ACM.

[16] Xiao Yi, Shengfeng Qin, and Jinsheng Kang. Generating 3D architectural models based on hand motion and gesture. Computers in Industry, 60(9):677-685, 2009.

[17] Robert C. Zeleznik, Kenneth P. Herndon, and John F. Hughes. SKETCH: An interface for sketching 3D scenes. In Proceedings of SIGGRAPH 96, Computer Graphics Proceedings, Annual Conference Series, pages 163-170, August 1996.
