UNIVERSITY OF CALGARY
Using Virtual Reality to Improve Design Communication
by
Chao Liu
A THESIS
SUBMITTED TO THE FACULTY OF GRADUATE STUDIES
IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE
DEGREE OF MASTER OF ENVIRONMENTAL DESIGN
FACULTY OF ENVIRONMENTAL DESIGN
CALGARY, ALBERTA
JANUARY, 2012
© Chao Liu 2012
Library and Archives Canada
Published Heritage Branch
395 Wellington Street
Ottawa ON K1A 0N4
Canada

Bibliothèque et Archives Canada
Direction du Patrimoine de l'édition
395, rue Wellington
Ottawa ON K1A 0N4
Canada

ISBN: 978-0-494-83582-1

NOTICE:
The author has granted a non-exclusive license allowing Library and Archives Canada to reproduce, publish, archive, preserve, conserve, communicate to the public by telecommunication or on the Internet, loan, distribute and sell theses worldwide, for commercial or non-commercial purposes, in microform, paper, electronic and/or any other formats.

The author retains copyright ownership and moral rights in this thesis. Neither the thesis nor substantial extracts from it may be printed or otherwise reproduced without the author's permission.

In compliance with the Canadian Privacy Act some supporting forms may have been removed from this thesis.

While these forms may be included in the document page count, their removal does not represent any loss of content from the thesis.
Abstract
A major challenge that faces designers is creating solutions within specific constraints.
Creating acceptable design proposals requires designers to work closely with their clients
to determine their exact needs and specifications. Good communication between clients
and designers is therefore essential to a favourable outcome. Traditional communication
approaches such as face-to-face discussions are sometimes insufficient for clarifying
clients’ requirements. In these cases, expressing design intent using graphics is more
intuitive and provides greater clarity. Unfortunately, clients who lack sophisticated
drawing skills may find it very difficult to express their ideas with freehand sketches.
To improve this situation, this thesis proposes a web-based communication tool to
help clients express design ideas using computer graphics. Currently, computer-aided
design (CAD) is typically limited to professional designers and engineers, due to its high
demands on operating skill and computer support. The premise of this thesis is that web-
based virtual reality (VR) can remove these technical barriers. By creating an easy-to-use
tool that works inside a web browser, there is the potential for clients to explore design
options early in the design process without having to learn CAD. To prove its technical
feasibility, this thesis presents a proof-of-concept, YY3D, which enables clients to
explore interior design solutions. YY3D is equipped with interactive functions that allow
its user to select and arrange furniture from an online catalogue into a 3D virtual interior
space. This thesis reviews the theoretical and technical background of virtual design,
introduces the methodology to develop the proposed communication tool, and walks
through the creation process of the proof-of-concept. The research results should also
prove useful in creating applications that address communication problems in other design fields.
Acknowledgement
I would like to thank all of those people who helped make this thesis possible.
First and foremost, I would like to express my gratitude to my supervisor, Dr.
Richard Levy, for his support, patience, and encouragement throughout my graduate
studies. His technical and editorial advice was essential to the completion of this thesis
and has taught me innumerable lessons on the workings of academic research in general.
My thanks also go to my committee members Dr. Faramarz Samavati, Gerald
Hushlak, and Dr. Larry Katz for reading previous drafts of this thesis and providing many
valuable comments that improved its content.
I am also grateful to Dr. Jeffrey Boyd, my first computer science teacher, whose
advice and encouragement have been invaluable to my research at the intersection of
design and computer science.
I would like to extend my appreciation to my friends Rozhen Mohammed-Amin,
Kamaran Noori, and Liraz Mor. My study life in the past two years would have been
much more difficult without your friendship, inspiration, and valuable input.
Finally, I would like to thank the Computational Media Design (CMD) graduate
group, not only for providing the funding which allowed me to undertake this research,
but also for giving me the opportunity to broaden my academic horizons and interact with
talented students from different faculties.
Dedication
I dedicate this thesis to my wife Yuyu and to my parents Qian and Xiuling, without whose
understanding and support I would not have been able to complete this work.
Table of Contents
Abstract .......... ii
Acknowledgement .......... iii
Dedication .......... iv
Table of Contents .......... v
List of Tables .......... vii
List of Figures .......... viii

CHAPTER ONE: INTRODUCTION .......... 1
  1.1 Context and Motivation .......... 1
  1.2 Goal and Method .......... 2
  1.3 Contribution .......... 6
  1.4 Overview of Thesis .......... 7

CHAPTER TWO: BACKGROUND .......... 8
  2.1 Interior Design .......... 8
    2.1.1 Typical Interior Design Process .......... 9
    2.1.2 Client's Role and Decision-making Factors .......... 10
    2.1.3 Designer's Communication Approaches .......... 13
    2.1.4 Discussion for Improvement .......... 15
  2.2 Virtual Reality .......... 19
    2.2.1 Terminology and Definition .......... 20
    2.2.2 VR Systems and Technology Requirements .......... 21
    2.2.3 Web-based VR .......... 24
    2.2.4 Application Areas of VR .......... 26

CHAPTER THREE: DESIGNING A WEB-BASED COMMUNICATION TOOL .......... 33
  3.1 Communication Framework .......... 33
  3.2 Converting CAD Models .......... 36
    3.2.1 Extensible 3D Graphics (X3D) Standard .......... 37
    3.2.2 General Workflow of CAD Model Conversion .......... 40
  3.3 Designing the User Interface .......... 42
    3.3.1 User Interface Sketch .......... 42
    3.3.2 Script Support .......... 45
  3.4 Preview of the Proof-of-Concept .......... 46

CHAPTER FOUR: VIRTUAL MODELS AND TEXTURE SAMPLES .......... 49
  4.1 Construction of Virtual Interiors .......... 49
    4.1.1 Coding a Scene in X3D .......... 49
    4.1.2 Creating a Scene based on CAD Models .......... 56
      4.1.2.1 Modeling and Optimization .......... 57
      4.1.2.2 Export .......... 58
      4.1.2.3 Additional Editing in the X3D Scene Graph .......... 60
  4.2 Creation of Furniture Models .......... 65
    4.2.1 Furniture Modeling .......... 66
    4.2.2 Programming the Transformation Functions .......... 67
      4.2.2.1 Translation .......... 68
      4.2.2.2 Rotation .......... 69
      4.2.2.3 Switch between the Translation and Rotation Functions .......... 71
      4.2.2.4 Scaling .......... 73
      4.2.2.5 Template for Integrating the Transformation Functions .......... 75
  4.3 Preparation of Texture Maps .......... 79

CHAPTER FIVE: WEB-BASED USER INTERFACE .......... 80
  5.1 User Interface Layout .......... 80
    5.1.1 Model Editor Window .......... 81
    5.1.2 Model Catalogue Window .......... 83
    5.1.3 Texture Catalogue Window .......... 85
    5.1.4 User Notes Window .......... 87
  5.2 Script Support .......... 89
    5.2.1 Importing Furniture into an Interior Scene .......... 89
    5.2.2 Attaching Furniture Textures .......... 97

CHAPTER SIX: OPERATION TEST AND CONCLUSION .......... 101
  6.1 Operation Test on the Proof-of-Concept .......... 101
    6.1.1 Testing Environment .......... 101
    6.1.2 Selecting the Virtual Interior .......... 102
    6.1.3 Importing the Furniture .......... 103
    6.1.4 Attaching the Textures .......... 104
    6.1.5 Arranging the Furniture .......... 105
    6.1.6 Filling out the User Form .......... 107
  6.2 Research Summary and Conclusion .......... 109
  6.3 Future Research .......... 111
    6.3.1 Application and Evaluation in Design Practice .......... 111
    6.3.2 Adding Other Design Components .......... 112

REFERENCES .......... 113
List of Tables
Table 2.1: The categories of current VR systems. ............................................................ 22
List of Figures
Figure 2.1: A mixture of various visual materials for the client to choose from, photo courtesy of the Australian Institute of Interior Design. ............................................ 16
Figure 2.2: Freehand sketches of a hotel lobby, photo courtesy of Jenny Gibbs. ............ 17
Figure 2.3: A cardboard model showing a residential interior, photo courtesy of the Howard Architecture Models.................................................................................... 18
Figure 2.4: A living room model built using CAD, photo courtesy of Microspot. ...... 18
Figure 2.5: Prototype of human-computer interaction in a general VR system. .............. 23
Figure 2.6: Prototype of data exchange in a general web-based VR system.................... 25
Figure 2.7: A virtual prototype for testing a vehicle interior, by the Virtual Reality Lab at the University of Michigan. ...................................................................................... 28
Figure 2.8: A virtual prototype for vehicle stability measurement, by the Robotic Mobility Group at the Massachusetts Institute of Technology....................................... 28
Figure 2.9: An experimental website for collaborative design review, by Cambell......... 30
Figure 2.10: An online product configurator, by Dauner et al.......................................... 31
Figure 3.1: Architecture of the proposed communication framework.............................. 34
Figure 3.2: An example of an X3D scene.............................................................. 39
Figure 3.3: Workflow of converting a CAD model into the X3D format. ....................... 40
Figure 3.4: Sketch of the web-based user interface. ......................................................... 44
Figure 3.5: Architecture of YY3D. ................................................................................... 47
Figure 4.1: Screenshot of the Welcome scene. ................................................................. 50
Figure 4.2: Coding snippet defining a text string. ............................................................ 51
Figure 4.3: Coding snippet defining two cubes that share the same attribute. .................... 52
Figure 4.4: Coding snippet defining the spatial relationship between two objects. ......... 53
Figure 4.5: Defining the animation chain for rotating a cube........................................... 54
Figure 4.6: Coding snippet defining the animation chain for rotating a cube. ................. 54
Figure 4.7: The living room built using Autodesk 3ds Max 2010.................................... 57
Figure 4.8: VRML97 Exporter in Autodesk 3ds Max 2010. ............................................ 58
Figure 4.9: Coding snippet describing a floor plane built using CAD. ............................ 59
Figure 4.10: Coding snippet defining the viewpoints in the living room scene. .............. 61
Figure 4.11: Relative position between the viewpoint Left and the left-side wall. .......... 61
Figure 4.12: Coding snippet defining the left-side wall using a <LOD> node. ............... 62
Figure 4.13: Coding snippet attaching a wood texture on the floor plane........................ 64
Figure 4.14: Coding snippet defining the lightings in the living room scene................... 65
Figure 4.15: Observing the living room scene from different viewpoints.................. 65
Figure 4.16: A furniture model created in YY3D............................................................. 66
Figure 4.17: Defining the translation function chain. ....................................................... 68
Figure 4.18: Coding snippet defining the translation function. ........................................ 69
Figure 4.19: Dragging and moving the test object............................................................ 69
Figure 4.20: Defining the rotation function chain. ........................................................... 70
Figure 4.21: Coding snippet defining the rotation function.............................................. 70
Figure 4.22: Dragging and rotating the test object. .......................................................... 71
Figure 4.23: Function chain to switch between the translation and rotation sensors. ...... 72
Figure 4.24: Coding snippet defining the switch mechanism between the translation and rotation sensors................................................................................................... 72
Figure 4.25: Defining the scaling function chain.............................................................. 73
Figure 4.26: Coding snippet defining the scaling function............................................... 74
Figure 4.27: Scaling the test object................................................................................... 74
Figure 4.28: Coding template integrating the transformation functions............................ 76
Figure 4.29: Results of applying the transformation template to a furniture model......... 78
Figure 4.30: Texture samples created in YY3D. .............................................................. 79
Figure 5.1: Framework of the YY3D’s user interface. ..................................................... 81
Figure 5.2: Model Editor window with an office scene being rendered........................... 82
Figure 5.3: Coding snippet embedding an X3D object within the web page. .................. 82
Figure 5.4: Coding snippet defining control buttons. ....................................................... 83
Figure 5.5: Model Catalogue window with a two-level menu structure........................... 84
Figure 5.6: Coding snippet defining the first level menu in the Model Catalogue window...................................................................................................................... 84
Figure 5.7: Texture Catalogue window with a three-level menu structure....................... 86
Figure 5.8: User Notes window in its initial state. ........................................... 87
Figure 5.9: Screenshot of the YY3D’s user interface. ...................................................... 88
Figure 5.10: Coding snippet inserting an <Inline> node in a virtual scene. ..................... 90
Figure 5.11: Chair catalogue shown in the Model Catalogue window............................. 92
Figure 5.12: Coding snippet defining the chair catalogue. ............................................... 92
Figure 5.13: Coding snippet retrieving and copying the selected model file. .................. 93
Figure 5.14: Defining function chains of using buttons to update the <Inline> node. ..... 93
Figure 5.15: Coding snippet defining the buttons............................................................. 94
Figure 5.16: Coding snippet connecting the buttons with the <Inline> node................... 95
Figure 5.17: Importing and removing a chair model. ....................................................... 96
Figure 5.18: Importing a group of furniture models. ........................................................ 96
Figure 5.19: Defining the function chain of texture attachment....................................... 98
Figure 5.20: Coding snippet defining the texture attachment for the chair leg. ............... 98
Figure 5.21: Coding snippet defining the model component with the same material. ..... 99
Figure 5.22: Attaching different textures to a chair model. ............................................ 100
Figure 6.1: Operating instruction menu. ......................................................................... 102
Figure 6.2: Selecting the virtual interior. ........................................................................ 102
Figure 6.3: Switching the viewpoints. ............................................................................ 103
Figure 6.4: Choosing and importing the furniture. ......................................................... 104
Figure 6.5: Attaching the textures................................................................................... 105
Figure 6.6: Arranging the furniture................................................................................. 106
Figure 6.7: Previewing the interior layout. ..................................................................... 106
Figure 6.8: Summarizing the selections.......................................................................... 107
Figure 6.9: Rendering the interior layout........................................................................ 108
Chapter One: Introduction
1.1 Context and Motivation
Effective communication between clients and designers is critical to design practice.
Designers may be equipped with industry-leading design skills but still struggle in client
communications; if they cannot communicate with their clients, there is no guarantee
that they will arrive at a successful design. Clients must feel comfortable and open with
their designers. Expectations, likes and dislikes must be openly shared. Without a
sufficient understanding of clients’ aspirations and requirements, the collaboration will
neither be fruitful nor profitable for either designers or clients, and customer satisfaction
will not be achieved.
The use of computer-aided design (CAD) software to build design models is
currently the most popular method of presenting and evaluating design proposals.
Compared to pencil-and-paper design, CAD modeling can convey design ideas
in a more efficient and precise way. Each model component can be built to an accurate
size and position in 3D. Additionally, a CAD model allows for quicker modifications
without having to re-draw designs from scratch. However, for clients to view and modify
a CAD model, both hardware and software support are prerequisites.
Furthermore, the most common CAD tools (such as AutoCAD, Revit Architecture, and
MicroStation) are highly specialized software packages that require substantial training to
operate properly. It is not practicable to get clients trained in CAD before beginning the
design process. Therefore, clients need technical support from designers to work with 3D
models. Traditionally, designers give clients only rendered views; it is rare for clients to
have access to a 3D interactive model.
Because of a lack of relevant skills (e.g., computer modeling or freehand
sketching), most clients have difficulty sharing their design intentions, relying instead on
written or oral descriptions. Though text descriptions are adequate for defining some
design specifications, such as material and budget requirements, these descriptions cannot
clarify detailed design specifications as efficiently as a graphic presentation. As a result,
designers have to spend extra energy creating rendered views of a proposed design model
in order to ensure that clients are in agreement with them. This can be a long and
repetitive process that requires many face-to-face meetings before a successful
conclusion is reached. This inefficiency prompts this research to propose a new
communication approach based on the Internet, one that allows design options to be
visualized with interactive digital graphics so that collaboration can take place virtually.
1.2 Goal and Method
This thesis presents a web-based tool which seeks to facilitate the communication of
design intention between clients and designers. The research approach is to employ web-
based virtual reality (VR): a computer simulation that uses 3D visualization technology
to create a sense of physical presence. Because web-based VR eliminates both time and
geographical constraints, it provides a direct and immediate visual experience for Internet
users with personal computers. A goal of this research is to create a VR-based application
for improving the efficiency of collaboration between clients and designers. This research
selects interior design as a case study for the following reasons:
- Interior design deals with a relatively simple problem. Compared to other design fields (e.g., building design), interior design does not require any engineering knowledge or advanced calculations.
- Interior design serves a large marketplace with many potential interior designers and sales professionals who can benefit from such a tool.
- The approach used in interior design can be applied to other design problems.
The proposed design process using this VR-based communication tool is as follows:
- The client poses a design problem and provides the necessary design information, including the room type (e.g., an office space), its dimensions and other specification requirements.
- The designer creates a virtual interior scene based on the room dimensions.
- The designer constructs a database of design components (e.g., furnishings and material textures) that match the client's specification requirements.
- The client logs into the designer's website and experiences different design possibilities by selecting, arranging and modifying design components in the given virtual interior scene. During this process, the client is able to work in both 2D and 3D modes: a 2D plan view allows the client to place components; a 3D perspective view visualizes the outcome.
- At the end of the virtual experience, the client has a list of their design choices and provides the interior layout renderings for the designer's review.
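The 2D/3D coupling in the process above can be sketched as a small mapping: a position chosen in the 2D plan view becomes the translation of the corresponding object in the 3D scene. X3D, the scene format used later in this thesis, is assumed; the function name, the handling of the y-up axis, and the fixed floor height are illustrative assumptions rather than part of the thesis implementation.

```javascript
// Sketch: map a 2D plan-view position (in metres) to the value of an
// X3D <Transform> node's 'translation' field. X3D scenes are y-up, so
// the plan's second coordinate becomes the scene's z coordinate.
// The floorHeight default is an illustrative assumption.
function planToSceneTranslation(planX, planY, floorHeight = 0) {
  // X3D SFVec3f values are written as three space-separated floats.
  return `${planX} ${floorHeight} ${planY}`;
}

// Example: a chair placed at (2.5, 1.0) on the plan would carry
// translation="2.5 0 1" in the 3D scene.
console.log(planToSceneTranslation(2.5, 1.0)); // → 2.5 0 1
```

A 3D drag, in the reverse direction, would discard the y component to update the plan view.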
Using this design approach, interior designers can better understand clients’ needs
and preferences. Based on the clients’ preliminary design, designers can provide further
professional advice throughout the design development process. Although it may appear
that designers spend more effort preparing a single design case, the VR-based
communication tool should enhance the decision-making process, as many of the virtual
design components can be re-used in future design scenarios. On the client side, clients
are expected to play a more active role during their collaboration with designers. Using
this new tool, clients are able to create a preliminary design solution based on the virtual
interior and design components. In this manner, their design intentions can be visualized
rather than merely described in text.
More importantly, the proposed web-based tool minimizes time and geographical
constraints. Because communication can be done via the Internet, clients and designers do
not have to meet in person to exchange ideas or review computer models. This helps
decrease the number of meetings and reduce the overall budget of a design project.
In developing this VR-based communication tool, this research proposes to
complete the following tasks:
Study the interior design process. Interior design is a multi-faceted
profession and its process follows a systematic and coordinated methodology. The study
of this process focuses on the parts where clients are involved. The study topics include
the clients’ roles, factors that influence clients’ decision-making behaviour, and the
traditional communication approaches between clients and designers. Based on this study,
the research suggests the functionality of the proposed communication tool. The work in
this section provides the theoretical basis for developing the proposed tool.
Review VR technologies and their applications. VR is widely applied in
product development, design review and e-commerce. Successful applications in these
areas serve as examples to inspire VR applications in interior design. The research
reviews the basic principles of VR and the technical requirements for creating a VR
system. In particular, to combine the advantages of VR and the World Wide Web (WWW)
for solving design problems, the research reviews the previous literature on web-based
VR technologies and applications. The work in this section provides the technological
basis for developing the proposed tool.
Derive guidelines for developing the proposed communication tool. Based on
the research work in the above tasks, the research sets up the framework of the proposed
tool using web-based VR technologies. From the designer's standpoint, the research
introduces the steps for converting computer graphics built with a general CAD
application into ready-to-use design data for the online VR version. From the client's
position, the research designs a web-based user interface for navigating and manipulating
the design data. In conducting this research, consideration must be given to the
operations required for experiencing a virtual environment.
Develop YY3D1, a proof-of-concept of the proposed communication tool. As a
starting point, YY3D is developed to deal with one of the most basic tasks in interior
design: the furniture and interior layout configurations. To achieve this objective, YY3D
includes a sample set of virtual rooms and databases of furniture models and material
textures. YY3D should allow a user to select furniture models from a catalogue and
import them directly into a 3D virtual room for examination. The user should be able to
move and rotate the furniture in both 3D and 2D plan views. By previewing the room
with different furniture combinations and layouts, the user should be able to explore and
evaluate possible design solutions. After completing the virtual interior design, a list of
the user’s selection of furniture and material textures could be emailed back to the
designer. The client should also be able to supply comments and annotations for the
designer’s review.
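As a preview of the import mechanism detailed in Chapter 5: an X3D scene pulls in an external model through an <Inline> node whose url field points at the model file, so selecting a catalogue entry amounts to generating (or updating) such a node. The sketch below produces that node from a hypothetical catalogue entry; the helper name and the entry's fields are assumptions for illustration, not the actual YY3D code.

```javascript
// Sketch: turn a catalogue selection into an X3D <Inline> node string.
// An <Inline> node loads an external .x3d file into the current scene;
// the DEF name lets later scripts address the imported model.
// Function and field names are illustrative assumptions.
function inlineNodeFor(item) {
  return `<Inline DEF="${item.id}" url='"${item.file}"'/>`;
}

// Hypothetical catalogue entry, as a client might pick it from the
// Model Catalogue window.
const chair = { id: "Chair01", file: "models/chair01.x3d" };
console.log(inlineNodeFor(chair));
// → <Inline DEF="Chair01" url='"models/chair01.x3d"'/>
```

In practice a script would also attach the transformation template from Chapter 4 so the imported model can be moved and rotated.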
Interior design can also include a larger list of design tasks, such as adding
partitions for dividing space or installing other interior facilities (e.g., lighting fixtures).
Compared to these design tasks, the furniture selection and layout is relatively simple.
This research hopes that YY3D demonstrates that clients do not need professional
knowledge to create a preliminary interior design solution. Based on YY3D, this research
expects that future applications include a range of building components (e.g., modular
partition walls, ceiling fixtures, and bathing equipment) and an assisting knowledge-
based system (to ensure that relevant design rules are obeyed when these building
components are added in a virtual interior). In doing so, the proposed web-based tool can
be developed and upgraded to solve more complex design problems.
1.3 Contribution
To summarize, there are two major contributions in this thesis. First, relying on a VR-
based approach, this thesis proposes a web-based tool that allows clients and interior
designers to communicate their design intent. It offers clients the opportunity to preview,
modify and experience design solutions in a virtual environment. This approach
demonstrates that it is possible for designers and their clients to use virtual prototyping
collaboratively.
The second contribution is YY3D, which is a proof-of-concept of the proposed
web-based tool. It demonstrates how to apply web-based VR technologies to the problem
of creating a design tool for furniture and interior layout configurations. YY3D is based
on standard web technologies, including Extensible 3D Graphics (X3D), Extensible
HyperText Markup Language (XHTML), and scripting technologies (JavaScript and
PHP). The approach adopted in the process of producing YY3D may be used as a
prototype to create more advanced web-based applications applicable to architecture,
engineering and urban design.
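As an illustration of how these technologies can combine, the sketch below uses JavaScript to assemble the markup for an X3D Transform node that places a furniture model (referenced by an Inline node) in a scene. The function name, file path, and attribute values are hypothetical assumptions, not taken from YY3D’s actual implementation.

```javascript
// Hypothetical sketch: build the X3D markup for placing one furniture
// model at a given position and yaw rotation. Transform and Inline are
// standard X3D nodes; the rotation field uses X3D's axis-angle form
// ("0 1 0 angle" rotates about the vertical axis).
function furnitureTransform(modelUrl, x, y, z, rotationRad) {
  return (
    `<Transform translation="${x} ${y} ${z}" ` +
    `rotation="0 1 0 ${rotationRad}">` +
    `<Inline url="${modelUrl}"/>` +
    `</Transform>`
  );
}

// The resulting string could be injected into an X3D <Scene> element
// rendered in a browser. The file name below is illustrative.
const node = furnitureTransform("models/sofa.x3d", 2, 0, 3, 1.57);
```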
1.4 Overview of Thesis
Chapter 2 begins with a background review of the interior design process and VR
technology, providing the research basis for this thesis. Chapter 3 describes the
methodology used to develop the proposed communication tool. Chapters 4 and 5 detail
the development of the proof-of-concept, YY3D. Chapter 4 focuses on the preparation of
the design data: the creation of virtual interiors, the furniture models and the material
texture samples. Chapter 5 concentrates on the implementation of the web-based user
interface, and explains the scripts programmed to support various user operations.
Chapter 6 reports an operational test of the completed proof-of-concept, summarizes the
research, and speculates on possible future developments.
1 YY3D is named after the author’s wife Yuyu.
Chapter Two: Background
This chapter provides relevant background in interior design and virtual reality to aid
comprehension of the theoretical and technical basis of this research. It begins with an
investigation of the interior design practice, focusing on the communication process
between designers and clients. It discusses the current situation and suggests directions
for improvement accordingly. This chapter then reviews some of the previous literature
to introduce the basic principles of the VR concept and the technical requirements of VR
systems. Finally, this chapter cites application examples to illustrate the feasibility of
applying VR to interior design problems.
2.1 Interior Design
Interior design, as a profession separate from architecture, can trace its beginnings to the
early 20th century [28]. Prior to the 20th century, interiors were put together instinctively
as a part of the process of building. As architecture grew more complex with the
development of industrial processes, building design gradually divided into more
specialized professions [36]. Among them, interior design “concerns itself with more
than just the visual or ambient enhancement of an interior space; it seeks to optimize and
harmonize the uses to which the built environment will be put” [28]. Interior design
serves in many building forms including houses, public buildings and commercial
properties. During design, an interior designer’s duty is “to work with clients to find out
what they need from the space they live or work in, and to research, create and deliver an
interior space that meets the project brief” [45]. Through the imaginative and efficient use
of colour, pattern, texture, light and space, interior design can make a real difference in
the building occupants’ impressions of their environment, which thereby affects their
mood, health, and productivity. In the words of the U.S. Bureau of Labor Statistics,
interior design is “practical, aesthetic, and conducive to intended purposes, such as
raising productivity, selling merchandise, or improving life style” [46].
2.1.1 Typical Interior Design Process
In an interior design process, designers draw upon many disciplines to enhance the
function, safety, and aesthetics of interior spaces. Regardless of the space type they work
on, almost all interior designers follow the same process, which is summarized as
follows [45, 46]:
Step 1: Programming (client brief) of the space is needed to determine the client’s
requirements based on use and preferences. During this stage, the designer usually meets
face-to-face with the client to find out how the space will be used and to learn about the
client’s preferences, budget, and time scale in which the project must be realised.
Step 2: Creative design is primarily concerned with the formulation of a design plan.
Based on the information collected from the client, the designer starts with design ideas
and concepts, which are usually presented to the client using freehand sketches or
computer models. If the client is satisfied with the design proposal, the designer will
move on to produce more detailed technical drawings. Otherwise, the designer will make
revisions based on the client’s input.
Step 3: Specification is the sourcing of appropriate materials, finishes and
furnishings. Based on the design plan, the designer suggests appropriate furniture,
lighting, floor and wall covering, and artwork. In this stage, depending on the complexity
of the project, the designer may also submit design drawings and specification for
approval by a construction inspector to ensure that the design adheres to building codes.
Step 4: Construction. The designer implements the design plan according to the
technical drawings and specifications. Depending on the complexity of the project, the
designer might collaborate with various specialists. For example, if a project requires
structural work, the designer can work with an architect or engineer for that portion of the
project. The designer can also hire contractors to do the technical work, such as lighting,
plumbing, and electrical wiring. During this stage, the designer oversees the
installation of the design elements, and coordinates contractor work schedules to ensure
the project is completed on time.
Step 5: Completion. After the project is completed, the designer, together with the
client, pays a follow-up visit to the building site to ensure that the client is satisfied. If the
client is not satisfied, the designer makes corrections.
2.1.2 Client’s Role and Decision-making Factors
Though the interior design process is primarily led by the designer, the client plays a
critical role. As described above, a complete design process begins with the client’s brief
and ends at the client’s project acceptance. During this process, the interior designer is
hired for their expertise in a variety of styles and approaches, not merely their own
personal vision. Because the ultimate measure of a successful design largely depends on
client satisfaction, the designer must be able to balance their own tastes and their clients’
tastes, and be willing to put their clients’ tastes first [43]. Therefore, a good design
process involves communication between the designer and the client; the designer needs
to walk the client through the design process. It is vital that the designer is a good
communicator and makes the client feel comfortable opening up with their ideas [8, 16].
Among the above five steps, the programming step is particularly important,
because it prescribes all the parameters of a client’s project and directly affects the
progress of later steps. If the designer cannot fully appreciate the client’s requirements,
adjustments and corrections are likely to be made throughout the project [8, 16, 45].
Therefore, the designer needs to identify what factors influence the client’s
decision-making behaviour, so that they can keep a clear objective in view when
communicating with the client.
From a client’s position, there are many factors that affect their decision-making
behaviour. Unlike the situation faced by professional designers, who are engaged in
design every day, most clients are only involved in interior design activity on rare
occasions. For clients, an interior design project usually represents a large investment of
time and money, which makes them very cautious in their decision making. If clients
make mistakes in their choices, they have to live with those mistakes every day [8].
Generally, primary issues that clients care about include budget and timeline, lifestyle
requirements, and disposal of existing items [8, 16, 31].
First, budget and timeline are a major part of the client brief, and are usually
prerequisites when a client considers design candidates. Martin Waller, director
of Andrew Martin Designs and organizer of the annual international Designer of the Year
Award, said that “Great schemes are soon forgotten if they are not implemented on time
and in budget” [16]. Brown [8] also wrote that “a good designer brings their project in on
budget, this keeps everyone happy”. Therefore, it is important for the designer to fully
understand the client’s available budget and expected project completion date, in order to
arrive at a realistic design proposal.
Second, studying the client’s lifestyle is central to design practice. People usually
want to express who they are through how they live [31]. Thus, the ability of a designer
to gain a thorough understanding of the client’s lifestyle is fundamental to the favourable
outcome of a project [16]. Depending upon the space type, the client has various concerns.
In a residential space, the client cares about whether the design proposal respects their
living habits and helps improve their quality of life. For example, people from different
cultural and religious backgrounds can attach different connotations to the same colour: red
is traditionally the colour of luck and happiness for the Chinese, yet for
American Indians it is a colour associated with death and disaster [16]. Clearly, an
inappropriate colour scheme will displease the client. When designing a kitchen, for
another example, the client may have specific requirements for the arrangement of
equipment in accordance with their cooking and eating habits. The designer should take
note of these requirements so that the functional parts (e.g., lighting or electrical sockets)
will be positioned properly in the design proposal. If there are elderly family members
living in the household, safety and access considerations will be of concern. For a
commercial project, the client’s requirements and concerns are more complex, which
usually include market factors, branding, consumer demands and competitors. The
designer should investigate and understand these factors, in order to decide on an
appropriate design motif [45]. In addition, the “key” people working in specialized
commercial areas (e.g., the customer service counter in a retail store, or the kitchen in a
restaurant) have particular requirements for the space in which they work [16].
All of these requirements should be studied fully prior to starting the design, so that the
design proposal can accommodate them.
Finally, in some refurbishment projects, the way in which existing items are used
is also an important factor. For sentimental and economic reasons, some clients may want
to keep certain furnishings when renovating their space. These old furnishings may
occupy a large proportion of the interior space, and the designer cannot neglect them. For
this reason, the designer needs to understand the importance of these furnishings in the
client’s daily life, and design and re-arrange the space to either emphasize or downplay
their presence. The selection of materials, fittings and fixtures may also take
these old items into account, so that a harmonious interior space can be achieved.
2.1.3 Designer’s Communication Approaches
Once the client’s main decision-making factors are clear, the designer needs to employ
various techniques to gain an understanding of the client’s needs. Most interior designers
use face-to-face meetings at the programming stage [8, 16]. At such meetings, the interior
designer will conduct a site survey and discuss the requirements of the project with the
client. Depending on the project, the interior designer may also spend time observing the
client’s daily life.
The site survey is a vital part of the fact-gathering operation. During a survey, the
designer measures the site systematically (e.g., floor area, ceiling height, positions of doors and
windows, window sill height), marks the positions of any existing services (e.g., gas
pipes and electrical sockets), and takes notes of the current layout and decoration style (if
it is a renovation project). All this information allows the designer to refer to the detailed
features of a space when working on the design proposal [16]. Moreover, the designer
can record those items the client wants to keep and form an impression of their mass,
colour, and material. Through an analysis of the existing decoration style of the client’s
current space, the designer may also gain valuable clues about the client’s preferences.
Though a site survey can provide satisfactory information about an interior
space itself, it cannot provide sufficient information about its occupants. The designer has
to combine it with other approaches to identify the client’s decision-making factors (i.e.,
budget, timeline, lifestyle, habits, and special concerns) described in the last section. A
face-to-face conversation is an effective way to begin the relationship between a designer
and a client. Any of the client’s considerations and requirements can be openly discussed
in a conversation. Sometimes, to help establish design problems, the designer also works
with a prepared questionnaire. Information regarding the decision-making factors can be
asked via a series of questions.
However, when using the Q&A method in a conversation or questionnaire, the
questions should be designed very specifically to ensure unambiguous answers. Closed-
ended questions, which often expect a number as the answer or provide multiple choices
for a check mark, readily elicit useful information. Examples of this type of question
include asking the available budget, the expected project completion date, the number of
occupants, occupants’ ages, the presence of pets, and the expected use of outdoor living
spaces. On the other hand, open-ended questions such as “What is the very essence of the
home you can see in your mind?” are sometimes less productive. Gibbs [16] finds that
“many clients have very confused ideas about what they want and are not able to
articulate their requirements fully”. As a result, the designer has to spend extra time and
care to confirm their own interpretation of the client’s answers.
To help identify design problems, some designers may also choose to spend time
observing the client’s daily life. When designing a residential space, for example, the
designer might spend an afternoon studying how the client’s family uses their existing
space: how much time each family member spends in each room, and how and where
they like to relax, eat, work, watch television, listen to music, cook, and entertain.
Compared to conversation or written answers (questionnaire), direct observations provide
a more vivid picture of the client’s lifestyle. Nevertheless, Brown [8] argues that such a
picture might not help the designer to discover the client’s real life. Based on her
professional experience, Brown points out that “people often have two sides - the side
that they show us when you let them know you are coming to visit and the real side that
you see when you just drop in”. For this reason, designers should work like detectives in
finding out the truth. Brown suggested that designers call in when clients are not
expecting them, in order to observe actual use of the living space; examples include
visiting at meal times or during rainy weather. However, the extent to which clients
would accept such an intrusive investigation method is questionable.
For all of these reasons, a designer using traditional communication approaches
has to spend a great deal of time and effort studying and understanding the client’s lifestyle and
requirements. Even so, the result is sometimes unreliable, and thus a satisfactory
proposal cannot be guaranteed. As a result, the designer and the client have to spend extra
time making corrections and remedying the situation in the following design stages.
2.1.4 Discussion for Improvement
To improve upon this situation, alternative communication approaches need to be
proposed to clarify design information that is not easy to express with language. Brown
[8] suggests that, after taking the client brief, the designer should meet with the client
again before starting design. In this meeting, the designer, together with the client, can
view photos of the existing space, flick through design examples in magazines and books,
look at colour, fabric and flooring samples, and scribble sketches. Brown
recommends that the designer lay out all of these visual materials haphazardly and pick
out the things that catch the client’s eye (Figure 2.1). By repeating this process a few times,
the designer can gradually narrow down the client’s options. Brown’s suggestion is
interesting as it helps establish design problems through a collection of visual materials,
which are usually more intuitive and discernible than textual information.
Figure 2.1: A mixture of various visual materials for the client to choose, photo courtesy of the Australian Institute of Interior Design.
Even following this methodology, there is still room for improvement. In Brown’s
meeting, the visual materials (e.g., photos, swatches, and design examples) are still
isolated from each other when the client looks at them. The client has to imagine the
final outcome once everything is integrated. However, there is a risk, as
Green [17] describes “Each witness extracts an interpretation that is meaningful in terms
of his own beliefs, experiences and needs”, and “since each person interprets the events
in terms of his own world view, different eyewitnesses observing the same event may
have different interpretations and different memories”. Thus, the designer’s creation
based on the client’s selections might not exactly reflect the client’s original expectations.
When their interpretations differ, corrections and
adjustments are likely to be made halfway through the design stage.
Such uncertainty about differences in interpretation may be resolved if the
client and the designer are able to try out combinations of visual materials in a realistic
setting, so that the designer can follow up on the client’s thoughts about how to use the
selected design materials. Traditionally, interior designers prefer to provide quick sketches made
by hand to confirm those selections (Figure 2.2). Nevertheless, these sketches have two
drawbacks. First, freehand perspectives do not allow the client to change the viewing
angle to look around, thus the spatial relationships sometimes can be unclear. Second, in
a case in which the client changes their mind and wants to consider an alternative
solution (e.g., different floor coverings), the designer has to redraw the entire design.
Figure 2.2: Freehand sketches of a hotel lobby, photo courtesy of Jenny Gibbs.
To overcome these limitations, some designers create physical models to study
design possibilities (Figure 2.3). The physical models are useful as they enable the client
to view, touch, and even change the model components. However, creating physical
models is generally time-consuming and expensive. Moreover, a bulky model is usually
inconvenient to carry about for meetings.
Figure 2.3: A cardboard model showing a residential interior, photo courtesy of the Howard Architecture Models.
Today, more and more designers employ CAD software to create design visuals
[46]. Compared to freehand sketches, computer-generated design drawings convey more
accurate spatial information and allow for quicker modifications (Figure 2.4). Digital
models are also more economical and environmentally friendly than physical prototypes.
The CAD files can be transferred to the client via email or on CD. Though 3D graphics
can only provide visual experiences, the visual sense plays the most important role in
perception. The style or look, which is the most important feature in design evaluation,
can be effectively presented using computer graphics [31]. Thus, CAD systems could be
an ideal tool that allows the client and the designer to try out combinations of different
visual materials that Brown suggested [8].
Figure 2.4: A living room model built using CAD, photo courtesy of the Mircospot.
In practice, the usability of CAD is currently limited to design professionals. Due
to constraints such as software price, hardware investment, and demands on the
user’s operating skills, CAD systems are mainly employed at the design development
stage rather than at the programming stage, when client involvement is frequent. In order
to enable non-professional clients to try out design possibilities using computer graphics,
it is important to eliminate these technical barriers.
Virtual Reality systems have attracted significant attention because they can
create realistic experiences, in which people can interact with virtual objects in a 3D
environment [15, 26]. With this potential, VR technologies may provide solutions to
many design problems. By providing visual materials in a virtual world, both the client
and the designer can select and try different design options in real time. To study these
possibilities, the following section provides the relevant literature review on VR.
2.2 Virtual Reality
The idea of “Virtual Reality” can trace its roots back to the 1960s, when Ivan Sutherland
began working toward what he termed “Ultimate Display”. With this term, Sutherland
[41] envisioned that:
“A display connected to a digital computer gives us a chance to gain familiarity with concepts not realizable in the physical world. It is a looking glass into a mathematical wonderland.”
“The ultimate display would, of course, be a room within which the computer can control the existence of matter.”
“With appropriate programming such a display could literally be the wonderland into which Alice walked.”
In 1968, Sutherland further implemented the Sword of Damocles, which is widely
considered to be the first VR system. Using wire-frame graphics and a head-mounted
display (HMD), this system allowed users to occupy the same space as virtual objects
[42]. Many authors mark Sutherland’s work as the starting event on the VR
historical timeline [5, 18, 48]. Strickland [40] believes that Sutherland’s work “guided
almost all the developments in the field of virtual reality”, and he further distils
Sutherland’s concept into three points:
“A virtual world that appears real to any observer, seen through a head-mounted
display (HMD) and augmented through three-dimensional sound and tactile
stimuli”
“A computer that maintains the world model in real time”
“The ability for users to manipulate virtual objects in a realistic, intuitive way”
Over the past decades, as computer technology has advanced in both hardware
and software, Sutherland’s concept has been enriched and developed, but its principles
still hold true.
2.2.1 Terminology and Definition
There have been different terms used to describe Sutherland’s concept and its variations
at different times. For example, the term “Artificial Reality” by Myron Krueger was used
in the 1970s and “Cyberspace” by William Gibson was used in the early 1980s. The
current widely accepted term “Virtual Reality” was coined in 1989 by Jaron Lanier, the
founder of VPL Research, which developed the first commercially available HMD [3, 38].
Today, although the term “Virtual Reality” has been accepted by the public, the
controversy about its definition still exists [7]. To some people, VR strictly refers to
“Immersive Virtual Reality”, where “the user becomes fully immersed in an artificial,
three-dimensional world that is completely generated by a computer” [3]. To others, VR
refers to “real-time interactive 3D graphics technology in general so that it has different
types of system environments including immersive VR or non-immersive VR” [31].
Though the debate over the exact meaning of VR is ongoing, in practice, one necessary
characteristic of VR is that a user “can navigate in a virtual world with some degree of
immersion, interactivity, and a speed close to real time” [7].
2.2.2 VR Systems and Technology Requirements
There are different types of VR systems at present, but most can be classified into one
of the following three categories: Immersive VR, Video Mapping VR and Desktop VR [3,
4, 39]. The characteristics of these VR systems are cited in Table 2.1.
Immersive VR “Immersive VR uses a HMD to project video directly in front of the user’s eyes, plays audio directly into the user’s ears, and can track the whereabouts of the user’s head. Then a data glove (or data suit) is used to track movements of the user’s body and then duplicate them in the virtual environment. When the user cannot distinguish between what is real and what is not, then immersive VR has succeeded.” [4]
Other immersive VR systems include:
BOOM (Binocular Omni-Orientation Monitor)
“The BOOM is a head-coupled stereoscopic display device. Screens and optical system are housed in a box that is attached to a multi-link arm. The user looks into the box through two holes, sees the virtual world, and can guide the box to any position within the operational volume of the device. Head tracking is accomplished via sensors in the links of the arm that holds the box.” [3]
CAVE (Cave Automatic Virtual Environment)
“The CAVE provides the illusion of immersion by projecting stereo images on the walls and floor of a room-sized cube. Several persons wearing lightweight stereo glasses can enter and walk freely inside the CAVE. A head tracking system continuously adjusts the stereo projection to the current position of the leading viewer.” [3]
Video Mapping VR “Video Mapping VR uses cameras to project an image of the user into the computer program, thus creating a 2D computer character. Although fully immersed in the environment, it is difficult to interact with the user’s surroundings.” [4]
Desktop VR “Desktop VR is when a computer user views a virtual environment through one or more computer screens. A user can then interact with that environment, but is not immersed in it.” [4]
Table 2.1: The categories of current VR systems.
To build a VR system, no matter which category it belongs to, the general
technology requirements fall into four groups [27]:
Hardware “capable of rendering real-time 3D graphics and high-quality stereo
sound”
Input devices “to sense user interaction and motion”
Output devices “to replace the user’s sensory input from the physical world with
computer-generated input”
Software “that handles real-time input/output processing, rendering, simulation,
and access to the world database in which the environment is defined”
In practice, most VR systems share a common prototype of human-computer
interaction. Based on the studies in [3, 4, 27, 39], this prototype is summarized in
Figure 2.5. For an immersive VR environment, all of the visual, auditory and tactile
information may be necessary, while for systems that are primarily visual experiences,
the devices and software dealing with sound and tactile sensation can be omitted.
Figure 2.5: Prototype of human-computer interaction in a general VR system.
To make use of VR technologies to solve design problems, designers first need to
choose a proper VR system. As summarized in Table 2.1, an immersive VR system often
provides the most comprehensive information, but its demands for devices and
equipment are also the highest. It is possible for an interior design company to build
an immersive VR system at their workplace; however, clients would have to physically visit the
workplace to use these devices. Considering the time and overall project budget,
employing an immersive VR system may not be the best choice at this point. The second
type of VR system, video mapping VR, is also inappropriate, because it
does not fully support the user’s interactions with virtual objects. To make it possible for
the client to try out different design options dynamically in a virtual environment, the
interactivity of virtual reality is indispensable.
Therefore, this research suggests that designers choose a desktop VR system.
Using a desktop VR system, the user can interact with the 3D graphics presented on a
computer screen, and experience some sensation of presence to a certain degree.
According to the study by Oh et al. [31], 3D desktop VR has become an affordable
system, and non-immersive VR is now available in common system environments, including
home PCs. The capacity and convenience of a desktop VR system should satisfy the
demand of providing non-professional clients with a virtual interior in which they can
try out combinations of various design materials provided by the designer.
2.2.3 Web-based VR
Based on the desktop VR system, an important question is how to share the 3D graphics
prepared by an interior designer with the design client, so that the client can try out design
possibilities independently. If the client still needs a professional’s involvement to view
the 3D graphics, there is little improvement over using a CAD
system. Fortunately, since the debut of the Virtual Reality Modeling Language (VRML) in
1994, 3D graphics have been available on the World Wide Web (WWW). VRML and
other Web3D technologies developed in the past two decades have made web-based VR
possible [2, 31, 32]. Web-based VR supports interactive model viewing within the
framework of a regular web browser. With web-based VR the user does not need to
purchase professional modeling software. They also do not need to learn how to use CAD
or acquire any specific design skill [2].
The delivery of a web-based VR system requires at least two computers, the
client’s PC and a web server. In most cases these two computers would be located at a
great distance from one another. This is unlike a standalone VR system, where all the
required data are generated and processed locally by a powerful computer and its
associated input/output devices (Figure 2.5). Consequently, in addition to meeting those
technical requirements for human-computer interaction as in a standalone VR system,
a web-based VR system must also handle data exchange over the Internet.
Figure 2.6: Prototype of data exchange in a general web-based VR system.
Based on the research work in [2, 25, 33, 37], the basic data flow prototype in a
web-based VR system is summarized in Figure 2.6. In this prototype, the web server is
mainly responsible for storing and providing data (3D computer model and other website
files) when receiving requests from a local computer. The local computer is the location
where all the data (including the data received from the web server and the events input
by the user) are collected and calculated for generating the 3D virtual world. During this
process, the network capacity and the local computational power are determinants of the
overall performance of a web-based VR system. Extended waiting times for file
downloads or low local computation speed can hinder real-time 3D graphics rendering and
interaction, and a satisfying virtual experience will not be achieved.
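To make the network-capacity determinant concrete, the toy calculation below estimates the client’s waiting time for a scene file; the connection speed and file size are assumed examples, not measurements from this research.

```javascript
// Toy estimate of client-side waiting time in the web-based VR data
// flow: a scene file of a given size downloaded over a given connection.
function downloadSeconds(fileSizeBytes, bandwidthBitsPerSec) {
  // 8 bits per byte; ignores protocol overhead and latency.
  return (fileSizeBytes * 8) / bandwidthBitsPerSec;
}

// Assumed example: a 2 MB virtual-room model over a 1 Mbit/s connection.
const waitTime = downloadSeconds(2 * 1024 * 1024, 1000000);
```

Even this rough figure shows why a designer publishing models for unknown client connections has an incentive to keep scene files small.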
To help improve data file transfer speed and rendering, the designer should
optimize their 3D graphics. Beier [2] pointed out that the core of most VR systems is a
computer model, which is a polygonal representation of all geometry involved. The
polygonal representation means using a collection of vertices, edges and faces to
approximate the surface of a geometric object for defining its shape. These faces usually
consist of triangles, quadrilaterals or other convex polygons to simplify the rendering.
The number of polygons in the entire model determines the rendering speed [21].
According to Beier [2], “Real-time rendering requires the generation of at least 20 to 30
frames (perspectives views) per second. A laptop computer can render several thousand
polygons in real-time”. If the models are too complex (including up to a million
polygons), powerful computer systems with special graphics hardware are required.
Therefore, when the client’s computer configurations are unknown, the interior designer
should optimize the model’s polygon count to ensure the basic rendering speed for the
real time visualization and interaction. In some instances when the client’s computational
resources permit, the design models may be created with a certain level of complexity, in
order that the client can experience more realistic visual effects [2, 13].
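To make the idea of a polygonal representation concrete, the following X3D fragment is a minimal sketch (the coordinates, colour value, and face indices are illustrative assumptions, not taken from the thesis models): it approximates a flat rectangular panel using only two triangular faces, keeping the polygon count low for fast rendering.

```xml
<!-- Minimal sketch: a flat panel approximated by two triangles.
     All coordinate and colour values here are illustrative. -->
<Shape>
  <Appearance>
    <Material diffuseColor="0.8 0.7 0.6"/>
  </Appearance>
  <!-- coordIndex lists vertex indices per face; -1 ends each face -->
  <IndexedFaceSet coordIndex="0 1 2 -1 0 2 3 -1">
    <Coordinate point="0 0 0, 2 0 0, 2 1 0, 0 1 0"/>
  </IndexedFaceSet>
</Shape>
```

A model's total polygon count is the sum of such faces over all its shapes; reducing that sum is what the optimization step targets.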
2.2.4 Application Areas of VR
Using a VR system, people can experience a real or abstract environment in the three
dimensions of width, height and depth, with interactivity. This strength opens many
possibilities for VR applications in a variety of fields. This section reviews three VR
application areas that inspired the development of the VR-based communication tool for
solving interior design problems.
(1) Virtual prototyping for product development
The term “virtual prototyping” refers to “the design, simulation, and testing of
new ideas, concepts, products, schemes, or processes in a synthetic but interactive
computer environment” [24]. For product development, virtual prototyping uses VR
technologies for the design presentation and evaluation, which is based on a digital model
in lieu of a physical prototype [11]. Traditionally, producing a physical prototype is time-
consuming and expensive. Additionally, only a limited number of participants who are
physically present can assess the physical prototype [31]. In contrast to this situation,
virtual prototyping allows designers and users to evaluate and analyze products using
computer graphics. Relevant product properties, such as functionality, safety, aesthetics,
and other aspects, can be studied in a virtual environment. With technical support, this
virtual environment can be accessed by one or more users at the same time and at
different locations [50].
Virtual prototyping is especially welcomed in the automotive and aircraft
industries where the investment requirements for building physical prototypes are high.
For example, many automotive companies and university laboratories have employed
virtual prototyping systems to study their candidate vehicle designs [1, 35, 50]. Such a
virtual prototyping system enables the designer and the manufacturer to focus on a
specific design problem at each step by creating a partial design model. For example,
Figure 2.7 shows an immersive VR system used to present a vehicle interior at the
driving position. Through virtual pop-up menus, people can examine the interior colours,
lighting environment and other interior settings. The virtual prototyping system can also
be applied to simulate the physical environment in which the product will be used. Figure
2.8 shows a simulated rough terrain on which the stability of a high speed vehicle is
examined. By means of virtual prototyping, people can test a design proposal and detect
design flaws before a product is manufactured.
Figure 2.7: A virtual prototyping for testing vehicle interior, by Virtual Reality Lab at the University of Michigan.
Figure 2.8: A virtual prototyping for vehicle stability measurement, by Robotic Mobility Group at Massachusetts Institute of Technology.
Interior design faces many similar problems to those found in the automotive
industry: the time and cost investments for creating a “physical prototype” for design
study are usually high. Moreover, unlike mass production in the automotive industry,
each interior design project is unique and serves few clients. Therefore, the return on
creating a physical prototype is relatively low. To improve this situation, virtual
prototyping can be considered as an alternative solution to study and compare design
solutions.
(2) Collaborative design review
In the design industry, perhaps the most promising application area of VR is the
collaborative design review. A complex design project usually involves individuals from
different disciplines. For example, the architectural design process requires the
collaboration of an architect, structural engineer, electrical engineer, building contractor,
urban planner, as well as the client. During design, individuals involved share a great
number of drawing files and building models. Sometimes, different components of the
project may be finished by different groups of designers and engineers at geographically
different regions. Thus, communication and coordination are crucial for disseminating
design knowledge and accelerating the design development process. Traditionally, these
problems are resolved through meetings or phone discussions; however, team members
cannot exchange their ideas easily when they work in different places [22, 37].
The Internet is a powerful tool for spreading and gathering information, but
before Web3D technologies, web pages could only render 2D content composed in
HTML, which constrained their application in the design industry. Though the designer
can pass CAD files to partners via email, this method has limitations when individuals
in different places use different CAD software packages [21]. Web-
based VR removes these obstacles by enabling the 3D graphics visualization and
interaction using a regular web browser. Information (including drawing, digital model as
well as text annotation) throughout different design stages can now be shared online.
Moreover, using web-based VR technologies, team members from different disciplines
are able to conduct design evaluation in parallel to each other instead of in a sequential
manner (in which design documents are typically circulated among the team members one
after another) [19].
One example of web-based VR applications is provided by Campbell [10], who
created an experimental website for disseminating architectural construction documents
to design clients, contractors, and fabricators. On this website, design data were described
using VRML model and HTML text (Figure 2.9(a)). Through the hyperlinks on the web-
based user interface, each design specification could be linked to a VRML model for
detailed illustration (Figure 2.9(b)). Clients and other team members did not need any
special technical support to view these documents, except for a regular web browser
(with a VRML plug-in installed) and the Internet connection.
Figure 2.9: An experimental website for collaborative design review, by Campbell.
Similarly, the design data in an interior design project can be presented using
Web3D technologies for the client’s review. As discussed, most clients have difficulties
in manipulating a design model independently with a CAD software package. Creating a
collaborative design review environment can help clear those technical barriers, so clients
can also have the opportunity to access 3D graphics to preview design possibilities.
(3) Online customization
The term “mass customization” means “mass production on the one hand vs.
products tailor-made for individual tastes on the other” [29]. Online customization is one
way to realize mass customization over the Internet. With mass customization, customers
take advantage of self-guided selection to choose products and services. For example, the
customers can configure their personal computers with a different motherboard, graphics
card, hard disk, monitor, and many other accessories at computer stores. Online
customization further provides these selection options on the web page, so customers can
select them by mouse clicking.
Figure 2.10: An online product configurator, by Dauner et al.
The advantage of using VR technologies in online customization is to provide
consumers with timely visual feedback. Instead of guessing at what their selections might look like,
customers are able to see their selection and configuration results on their computer
screen. Dauner et al. [13], for example, proposed an interactive lamp configurator for
their Virtual Design Exhibition (Figure 2.10). Using this tool, users were allowed to
select any combinations of lamp shape and colours (or surface patterns), and check the
resulting appearance of their selection in near real time.
In interior design, the selections of decorative materials and colour schemes are
also important aspects in a client’s decision-making process. Traditionally, the designer
brings a group of samples for the client to select. After the client’s selection, the designer
draws sketches or builds CAD models, so that the client can confirm the visual effects.
Inspired by this example, the designer can prepare various material and colour samples
online. By providing proper interactive support, the client can choose and assign these
samples to the interior model components (e.g., wall, floor, and furniture) by themselves.
In doing so, the client does not need to wait for any designer’s sketch or model rendering
to confirm selection results, while the designer can leave some interior design choices to
the client's imagination.
The above application examples suggest that VR-based systems can help non-
professionals to overcome the technical barriers encountered in using a CAD system. The
next chapter will discuss the methodology in using the VR technology to improve the
communication efficiency between clients and designers in interior design.
Chapter Three: Designing a Web-based Communication Tool
This chapter introduces the methodology for developing the web-based communication
tool. It begins with a research framework to simulate the communication process between
designers and clients in using web-based VR. This chapter then focuses primarily on two
major technical aspects in creating the proposed VR application: the creation of virtual
design data and the implementation of web-based user interface. Finally, this chapter
provides a preview of the proof-of-concept to be developed in the following chapters.
3.1 Communication Framework
Traditional communication approaches (such as conversation or questionnaire) may not
be detailed enough to discover the client’s requirements that are not easy to express using
verbal or written language (Section 2.1.3). In that case, Brown’s “puzzle-styled” method
[8] provides the client with visual design options (Section 2.1.4), which can help confirm
the client’s intention when textual messages are unclear or confusing. The proposed
communication framework suggests combining traditional communication approaches
with visual options based on computer graphics, so that the designer can more accurately
understand the client’s wishes and needs. It should be noted that this communication
framework serves the programming stage of a typical interior design process (Section
2.1.1). Within this framework, VR technologies are employed to allow the client to try
out different design options independently through home PCs. In this way, the research
hopes to show that the number of physical meetings in a design process can be reduced
while the communication efficiency can be improved. Figure 3.1 shows the overall
architecture of the proposed communication framework.
Figure 3.1: Architecture of the proposed communication framework.
Within this framework, the communication process at the programming stage is
divided into three sub-steps:
Step 1: Taking the client brief. During this step, the designer employs traditional
communication approaches. First, to enquire about the client’s available budget and
requirements for the project timeline, the designer creates a questionnaire. In addition, the
questionnaire can include questions in regards to the client’s background information,
such as the number of occupants, each individual’s needs and preferences, or other
requirements. The questionnaire is completed by the client online or via email prior to the
physical meetings with the designer, so the designer comes prepared. The answers
collected through the questionnaire may not be clear enough; in the following face-to-
face meeting, the designer can exchange ideas with the client based on this questionnaire.
In the interim, the designer needs to conduct a site survey. The designer might also spend
time observing how the client’s space is currently used.
Step 2: Creating design materials. Following the initial meeting, the designer
should not immediately suggest a solution in the early stage of the design process. Instead,
the designer begins work on a preliminary 3D model of the interior based on the initial
site data. Once completed, the designer can collect various design materials (e.g., samples
of colour, fabric, wall and floor coverings, as well as examples of furnishing, door, and
window designs) that would meet the client’s requirements. Next, the designer needs to
convert these selected items into digital formats if they are in physical form. For
example, the designer can take pictures of physical texture samples to prepare their 2D
image files, and make use of CAD software to create 3D models of design components
like fixtures and furniture.
Step 3: Creating web files. The final step is to convert the digital files (images and
models) into the web-based VR version. There are two advantages when a website is
involved. First, it eliminates the constraints of time and geography for scheduling another
physical meeting to view design options. Second, it removes technical barriers to enable a
client to access and view the 3D graphics created using CAD software. Because the
database of design options is stored on the website, the client can access and test different
options with home PCs. If the client is not satisfied with these options, the designer can
receive feedback from the client and make the appropriate changes to the design options.
Using a 3D virtual world to present design options, the designer can quickly clarify and
confirm the text comments provided by the client during the client brief step.
In creating the 3D models used online, the designer needs to apply Web3D
technologies to convert the CAD files to a format that can be viewed in a web browser.
Moreover, the designer needs to design a web-based user interface to display these digital
files, so that the client can navigate and locate design options on the web page. The
following sections discuss these two technical aspects in more detail.
3.2 Converting CAD Models
To realize a web-based VR application, a core problem lies in creating 3D models that
can be efficiently rendered within a web browser framework [2]. Since the birth of
VRML in 1994, many technologies have been developed for the purpose of delivering 3D
web content [32, 34]. Based on its application area, the proposed communication tool in
this research suggests using Extensible 3D Graphics (X3D) to prepare VR models. There
are two reasons for this selection.
From the designer’s point of view, X3D can minimize the workload for preparing
virtual models. X3D is an open standard XML-based file format for representing 3D
computer graphics on the Internet; it supports the 3D data exchange with most common
CAD applications [12]. Thus, the designer can make use of their existing CAD models to
prepare VR files instead of creating them from scratch.
From the client’s point of view, the content of a virtual model in X3D file format
can be viewed interactively through the use of an X3D plug-in. Such a plug-in is usually
available at no cost for most common web browsers on different computer platforms.
With the support of an X3D plug-in, the client can easily navigate and interact in a 3D
virtual scene using general input devices, such as a mouse and keyboard. There are no
complex operating skills required [2, 12].
To make use of the X3D technology, the following sections introduce its language
syntax and the workflow to convert a CAD model into the X3D file format.
3.2.1 Extensible 3D Graphics (X3D) Standard
X3D was released by the Web3D Consortium in 2001 as an XML encoding of VRML. In
2005, X3D became the ISO standard file format for viewing and archiving interactive 3D
models over the Internet. An X3D model is usually defined in a regular text file that
describes 3D graphics using a standardized syntax, an XML-based markup language. This
syntax defines most of the commonly used semantics found in 3D applications, including
hierarchical transformations, geometry, material properties, texture mapping, light
sources, and viewpoints. Therefore, in practice, the 3D content created with most CAD
applications can be exported into the X3D file format for publishing on the web. In
addition to the information describing the appearance of objects, X3D also supports
sound, time and tracking sensors, as well as script programming. Thus, an X3D scene
allows for animation and interaction. Finally, because it is XML-based, X3D can be used
for the cross-platform network environment; in other words, no matter what operating
platform the client uses, as long as the browser adheres to the X3D standard, the 3D
scene described in an X3D file can be parsed and rendered consistently [12].
When converting CAD models into the X3D format, the designer needs to
understand the basic syntax of X3D, so modifications or additional functionalities can be
edited in the X3D file. X3D adopts a hierarchical directed acyclic scene graph to describe
geometric objects’ location relationships and appearances. Within this scene graph, the
basic building blocks are X3D nodes. Each X3D node contains field parameters to
describe its attributes. X3D provides different types of nodes to conduct a variety of tasks.
For example, the <Transform> node represents the conversion of local coordinate system,
including moving, rotating and scaling; the <Shape> node describes geometry and its
appearance including colour and texture; and the <Viewpoint> node defines the user
viewing perspective. Sensor nodes are used in an X3D scene to track the passage of time
and the position of the user avatar or the user’s input device (e.g., a mouse cursor). When
certain conditions are met, these sensor nodes can trigger predefined events through the
routing mechanism (i.e., <ROUTE>). Moreover, the <Script> node enables a scene
developer to program customized scripts, which can help enrich the interactive features in
the virtual scene.
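As a sketch of the routing mechanism described above, the fragment below connects a TimeSensor to an OrientationInterpolator and then to a Transform so that a box rotates continuously around its vertical axis; the DEF names (TIMER, SPIN, BOX), timing, and colour values are arbitrary choices for this example, not taken from the thesis.

```xml
<Transform DEF="BOX">
  <Shape>
    <Appearance><Material diffuseColor="0.6 0.6 0.9"/></Appearance>
    <Box size="1 1 1"/>
  </Shape>
</Transform>
<!-- The sensor emits a 0..1 fraction every 4-second cycle -->
<TimeSensor DEF="TIMER" cycleInterval="4" loop="true"/>
<!-- The interpolator maps that fraction to a rotation about the y-axis -->
<OrientationInterpolator DEF="SPIN" key="0 0.5 1"
    keyValue="0 1 0 0, 0 1 0 3.14159, 0 1 0 6.28318"/>
<!-- ROUTE statements wire the sensor output through to the target fields -->
<ROUTE fromNode="TIMER" fromField="fraction_changed"
       toNode="SPIN" toField="set_fraction"/>
<ROUTE fromNode="SPIN" fromField="value_changed"
       toNode="BOX" toField="set_rotation"/>
```

The same pattern of sensor, interpolator, and ROUTE underlies most X3D animation, with <Script> nodes substituting for interpolators when custom logic is needed.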
The relationships between different X3D nodes are determined by their relative
positions in the scene graph. For example, Figure 3.2 provides a simple X3D scene,
which includes three geometries (box, sphere and cone). In this X3D scene, because the
node Shape A is nested under the node Transform A, the node Shape A is typically
referred to as a child node of Transform A. As a parent node, the movement or rotation of
Transform A affects its children in the coordinate system. In the cases of Shape B and C,
because they are not attached to Transform A, their local coordinates are not affected by
Transform A. Another important characteristic is that the influences from the parent-level
nodes are accumulated. For example, the node Shape C’s position in the coordinate
system is determined by the combination result (i.e., the sum of vectors) of Transform B
and C.
Figure 3.2: An example of X3D scene.
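One plausible X3D encoding of a scene graph with this structure is sketched below; the translation values are arbitrary and only illustrate how parent transformations accumulate. Shape A is nested under Transform A, while Shape C sits under both Transform B and Transform C, so its final position combines both translations.

```xml
<Scene>
  <Transform DEF="TransformA" translation="-2 0 0">
    <Shape DEF="ShapeA">
      <Box size="1 1 1"/>
    </Shape>
  </Transform>
  <Transform DEF="TransformB" translation="2 0 0">
    <Shape DEF="ShapeB">
      <Sphere radius="0.5"/>
    </Shape>
    <Transform DEF="TransformC" translation="0 1.5 0">
      <!-- Shape C ends up at (2,0,0) + (0,1.5,0) = (2,1.5,0) -->
      <Shape DEF="ShapeC">
        <Cone bottomRadius="0.5" height="1"/>
      </Shape>
    </Transform>
  </Transform>
</Scene>
```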
When the X3D scene is rendered, 3D browsers read and parse the scene graph in a
straightforward way. Starting at the scene root, the scene graph is traversed in a depth-
first manner. The <Transform> node is first parsed so the location and orientation of the
coordinate system in the current frame are updated. Following are the <Appearance> and
<Material> properties to identify the rendering parameters for the geometry. The
geometry nodes (e.g., <Box>, <Sphere> and <Cone> as in Figure 3.2) are finally parsed
to render polygons [12].
3.2.2 General Workflow of CAD Model Conversion
To convert a CAD model into the X3D format, the typical workflow consists of model
optimization, exporting, editing and testing. This process is summarized in Figure 3.3
[2, 31].
Figure 3.3: Workflow of converting a CAD model into the X3D format.
First, before converting a CAD model into the X3D format, the designer needs to
reduce the model complexity. This helps contribute to a faster file transfer over the
Internet and a smoother real time rendering on the client’s local computer (Section 2.2.3).
When the CAD model is ready, the designer can use the “Export” command, a built-in
function of most CAD applications, to export the model into the X3D format. There is a
possibility that some CAD applications only support the VRML format, the predecessor
of X3D. In that instance, the designer can translate the exported VRML file into the X3D
version using an X3D encoding converter [20, 49]. The work in this phase is finished
within a CAD application, such as AutoCAD, 3ds Max, and SketchUp.
After the X3D file is exported, the designer needs to do some editing work
before the model is ready to be posted on the website. Specifically, for a virtual scene, the
designer needs to add viewpoints, lighting, shading and shadows, as well as other
environmental elements if necessary, so the virtual scene can be realistic. All these
elements can be added using corresponding nodes offered by X3D. For a design
component (e.g., building component, fixture, and furniture), the designer needs to
program scripts to support the user’s various operational tasks, such as updating their
material attributes and changing their locations in the virtual interior. Generally, these
scripts are programmed and assembled inside <Script> nodes, which then influence the
target nodes (such as <Transform> and <TextureImage>) via routing mechanisms.
Because the X3D model is defined in a text file, the designer can use a regular text editor
(e.g., Windows Notepad) to edit the X3D nodes and program scripts. There are also
special authoring tools developed specifically for editing X3D code (e.g., X3D-Edit),
which can help a scene author confirm their editing results in real time [47].
Finally, to ensure that the user can view and interact with the X3D model as
expected, the designer needs to test the model locally before uploading the file to the web
server. The Web3D Consortium recommends a number of players and plug-ins for the
X3D model viewing, most of which can be downloaded free [47]. The above workflow of
CAD model conversion is explained with examples for creating the proof-of-concept in
Chapter 4. The BS Contact player is used to demonstrate the application results of the
X3D coding samples created for the proof-of-concept [6].
3.3 Designing the User Interface
After creating the virtual design files, the next step is to design and provide a web-based
user interface to operate on these files. The user interface should be designed in a way
that both 3D (interior scene and design components) and 2D (colour and material samples)
web content can be properly presented. Additionally, all the design materials should be
organized with a clear structure to facilitate a logical workflow during the user’s
operations on the interface.
3.3.1 User Interface Sketch
The purpose of the proposed tool is to enable the user to visualize design options in a 3D
virtual interior. The interior scene should be first rendered on the user interface, so other
design components can then be selected and imported for examination. To render the
interior scene, an X3D player is essential to retrieve the scene’s X3D file. When coding
the user interface page, an <object> tag can be used to embed the X3D player. Within
this <object> tag, the source value needs to be assigned the url of the scene’s X3D file
on the web server.
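A minimal sketch of such an embedding is given below; model/x3d+xml is the registered MIME type for X3D XML files, while the file name livingroom.x3d, the path, and the dimensions are placeholders invented for this example.

```xml
<!-- Hypothetical XHTML fragment embedding an X3D player via an <object> tag.
     The data/src values point at the scene's X3D file on the web server. -->
<object type="model/x3d+xml" data="scenes/livingroom.x3d"
        width="640" height="480">
  <param name="src" value="scenes/livingroom.x3d"/>
</object>
```

A script can later rewrite the source value of this <object> element to load a different interior scene on demand.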
Except for the virtual interior, all other design components should be first presented
using 2D images instead of 3D models. Otherwise, if all the X3D files are retrieved and
rendered at the same time, the loading time of the user interface page will be too long for
the user to operate. To select and view a design component, the procedure is simply as
follows: for a 3D component (e.g., fixture and furniture model), its X3D file is retrieved
from the web server when its image is selected; the retrieved model is then imported into
the interior scene for rendering. For a 2D component (e.g., colour and texture sample), its
image file (e.g., JPG or PNG) is retrieved when being selected, so the user can attach it to
the selected model surface being rendered in the virtual interior.
To give the user easy access to the different design components, it is important to
build a clear navigation structure. The <table> tags can help construct catalogues to sort
out design components according to their categories. For example, 2D colour themes and
material textures, in addition to 3D building components, fixtures, and furniture, can each
be grouped and listed inside a web table. Moreover, considering that some users may
have a low computer screen resolution, it is not advisable to spread out all the catalogues
on the user interface all at once; otherwise, the user interface layout becomes too large and
the user may get lost. Khoon and Ramaiah [23] warn that visitors who find themselves
lost in an online gallery often end their journey by simply clicking to leave the website.
To avoid this situation, the user interface can adopt multi-level menus to group and
support the switch between catalogues. For example, the two catalogues of colour themes
and material textures can share the same portion of the user interface page. By clicking on
the tabs that convey their respective catalogue names, the user can switch between these
two catalogues (Figure 3.4).
Finally, there is the possibility that many of the users will want to leave comments
after examining their design options. To meet this requirement, the user interface can
embed a web form using the <form> tag. Design options and message board can be
provided in this web form, so the user can check and fill out information and send it back
to the designer (Figure 3.4).
Figure 3.4: Sketch of the web-based user interface.
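The web form described above might be sketched as follows; the field names, option labels, and the server-side handler feedback.php are placeholders for this example, not part of the actual tool.

```xml
<!-- Hypothetical XHTML feedback form for Window 5 of the interface sketch -->
<form action="feedback.php" method="post">
  <p>Preferred interior scene:
    <input type="radio" name="scene" value="living-room"/> Living room
    <input type="radio" name="scene" value="office"/> Office
  </p>
  <p>Comments:<br/>
    <textarea name="comments" rows="4" cols="40"></textarea>
  </p>
  <p><input type="submit" value="Send feedback"/></p>
</form>
```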
3.3.2 Script Support
To function properly, the web-based user interface needs script support. These scripts are
programmed to perform the following tasks:
(1) Selecting and loading an interior scene. It is possible that the designer
provides more than one interior scene in the context of some complex projects. To open
the user’s selected interior scene, the source value of the X3D <object> needs to be
updated dynamically to locate the corresponding X3D file. This script is required to
handle the user’s clicking events on the buttons in Window 2 in Figure 3.4.
(2) Switching between the catalogues of design components. Catalogues for
different 3D and 2D design components are planned to be displayed in the same window
area (i.e., Window 3 and 4 in Figure 3.4). Consequently, script support is needed to open
the user’s selected catalogues in these two windows.
(3) Importing a 3D design component into the interior scene. To import an X3D
model into another, an <Inline> node is required to be placed inside the host X3D scene
graph (i.e., the interior scene in this case). This script is then programmed in the host
X3D file, so the address value (i.e., the url field) of the <Inline> node can be updated
according to the user’s selection of design component in Window 3 in Figure 3.4.
(4) Attaching a 2D design component to the geometry. To attach texture maps to
the different components of a model (e.g., the desk legs and desk top), <ImageTexture>
nodes are required to be placed in the model’s X3D file, following the geometry nodes
defining those model components. Script is then programmed to update the address value
(i.e., the url field) of each <ImageTexture> node, which modifies the model appearance,
according to the user’s selection in Window 4 in Figure 3.4.
(5) Processing the user’s inputs in the web form. Once the user submits the
completed form in Window 5 in Figure 3.4, all the inputs will be delivered to the web
server. To save space, the approach in developing this tool does not store the data
received by the server. Instead, the script is programmed to sort out the user’s inputs and
print them into a PDF file. Once this file is saved on the user’s local computer, the user
can email it back to the designer to provide feedback on design options.
The coding process to implement a web-based user interface is explained in detail
in Chapter 5.
3.4 Preview of the Proof-of-Concept
In order to verify the technical feasibility of the proposed tool, Chapters 4 and 5 present
the implementation process of a proof-of-concept, YY3D. As a simplified version of the
proposed tool, YY3D is designed for the furniture layout configuration. YY3D follows
the communication framework proposed in Section 3.1 but narrows its focus.
It assumes that the designer has finished the client brief step and moves on to Steps 2
and 3, as shown in Figure 3.1, where the designer begins to prepare virtual design
components according to the collected design information. Instead of providing the client
with a full list of design components, YY3D focuses on a group of furniture models (as
the example of 3D components), a group of furniture texture maps (as the example of 2D
components), and some virtual interior scenes. In the case of YY3D, an abbreviated
version of Figure 3.1 can be redrawn as in Figure 3.5.
Using YY3D, the user is allowed to select and arrange furniture models into the
virtual interiors. The user can modify furniture textures and move and rotate each piece
of furniture to preview their layout in 3D.
Figure 3.5: Architecture of YY3D.
The topic of furniture layout is straightforward. It does not require prior technical
knowledge (such as the structural design when adding or demolishing a partition wall).
This keeps the focus of YY3D's development on the basic principle of the proposed
tool, without considering the technical details of advanced features. The methods adopted
in the YY3D’s development process are universal and can be applied to other design
applications. These methods include:
• Converting CAD models to the online VR version using X3D technology
• Creating the web-based user interface using eXtensible HyperText Markup Language (XHTML)
• Programming web scripts to support the user's operations on the interface using JavaScript and PHP
The following Chapters 4 and 5 detail the YY3D’s implementation as an example
to illustrate these methods.
Chapter Four: Virtual Models and Texture Samples
This chapter describes the implementation of the proof-of-concept, YY3D. The focus of
this chapter is on the creation of virtual design data for online use, including virtual
interiors, furniture models, and texture maps. The creation process recorded in this
chapter follows the approach described in Section 3.2.
4.1 Construction of Virtual Interiors
In this proof-of-concept, two types of interior space are provided as a case study: one
includes a living room and the other an office space. A simple opening animation is
created to welcome the user. The two virtual interiors are first built using a CAD
application and then exported into the X3D format for further processing. The
“Welcome” scene is created using X3D language directly in a text editor (i.e., Windows
Notepad). This section documents the process of using these two methods to create the
virtual scenes in this research project.
4.1.1 Coding a Scene in X3D
When a virtual scene contains only a small number of geometric objects whose
spatial positions are clear, it is convenient to code all the scene elements directly in
X3D. This process of creating objects without involving CAD applications
should be easy for an interior designer to learn. The functions of the X3D nodes are
simple. Editing them requires only a basic knowledge of computer coding. In this manner,
when a CAD conversion is required, the designer is able to identify where in the X3D
code to look for the relevant modeling information. This sub-section takes the
“Welcome” scene, as shown in Figure 4.1, as an example to illustrate the modeling
process using X3D.
Figure 4.1: Screenshot of the Welcome scene.
The “Welcome” scene is designed to be rendered by default when a user opens
YY3D’s interface. It consists of two text fragments and four cubes. The text fragments
display the greeting message, and the cubes rotate continuously around their local axes
to signal that this is a web-based 3D presentation. To create this scene, three tasks must be completed:
defining the geometries (i.e., text and cube), defining their positions in the coordinate
system, and defining the animation parameters. As introduced in Section 3.2.1, an X3D
scene graph is built up with a series of X3D nodes; each piece of modeling information is
labelled by the relevant tags corresponding to a certain X3D node.
First, to define a geometric object, all the information regarding its shape, size,
and appearance is wrapped within a Shape node, labelled using a pair of <Shape> and
</Shape> tags. Between these two tags, the shape and size information is expressed
using a specific geometry node. The available geometry nodes include: (1) standard
primitives, including <Box>, <Cone>, <Cylinder>, and <Sphere>, each of which has a
built-in field size for specifying the geometry length, width, height or radius depending
on the node type; (2) irregular shapes, including <IndexedFaceSet>, <IndexedQuadSet>,
and <IndexedTriangleSet>, which retrieve the coordinates of the vertices for defining a
geometry shape; and (3) text block (i.e., <Text>) which nests a <FontStyle> node for
defining the font family, size, and alignment rules.
The appearance information is described using the <Appearance> node, which is
typically placed following the geometry node it describes. Inside the <Appearance>
node, the <Material> node is responsible for defining the colour of the geometry using the
Red-Green-Blue (RGB) colour model; the <ImageTexture> node is used for attaching a
texture map to the geometry. For example, the X3D coding snippet that defines the text
block in Figure 4.1, “Hello and Welcome to YY3D”, is shown in Figure 4.2.
<Shape>
  <Text solid='false' string=' "Hello and Welcome to YY3D" '>
    <FontStyle family='VERDANA, SANS-SERIF' size='0.8' justify='"MIDDLE"'/>
  </Text>
  <Appearance>
    <Material diffuseColor='1.0 0.3 0'/>
  </Appearance>
</Shape>
Figure 4.2: Coding snippet defining a text string.
In addition, when creating geometries that share the same attributes (e.g., size,
colour, or texture), geometries defined later can borrow the parameter settings of an
earlier node through the DEF and USE mechanism. DEF stands for definition and
assigns a name to a node so that other nodes can reference it. USE then carries the
name of the referenced node and is placed inside any node that borrows its parameter
settings. For example, the four cubes in
Figure 4.1 have the same size; once the first cube (which is red) is defined, the other three
cubes can copy the size parameter from the first cube without repeating the size definition
(Figure 4.3). The DEF and USE mechanism not only helps save typing but also improves
rendering performance, because the copied nodes require far less memory and
computation: the same modeling information needs to be created only once [9].
<!-- The red cube -->
<Shape>
  <Box DEF='defaultCube' size='2 2 2'/>
  <Appearance>
    <Material diffuseColor='1 0 0'/>
    <ImageTexture url='cubeRed.jpg'/>
  </Appearance>
</Shape>
<!-- The green cube -->
<Shape>
  <Box USE='defaultCube'/>
  <Appearance>
    <Material diffuseColor='0 1 0'/>
    <ImageTexture url='cubeGreen.jpg'/>
  </Appearance>
</Shape>
Figure 4.3: Coding snippet defining two cubes that share the same attribute.
Second, after creating the geometries, it is necessary to define their spatial positions
in the world coordinate system. This task is conducted using the <Transform> nodes. A
<Transform> node is a grouping node that defines a local coordinate system for its
children. By specifying its relevant fields, a <Transform> node can translate, rotate and
scale its nested geometry nodes relative to the parent coordinate system. For example,
Figure 4.4 shows the code defining the positions of the text block and the red cube in
Figure 4.1. It should be noted that the X3D scene follows a right-handed Cartesian
coordinate system: with the origin in the center of the computer screen, the positive X-
axis is horizontal pointing to the right, the positive Y-axis is vertical pointing to the top,
and the positive Z-axis comes straight out of the screen. Therefore, in Figure 4.4, when
the translation field of the <Transform> node that includes the text block is assigned “0
4 0” (in the order of “X_Y_Z”), that <Transform> node moves its child 4 units away from
the origin (0 0 0) in the positive Y-axis direction. Similarly, the second <Transform>
node moves the red cube 4.5 units away from the origin in the negative X-axis direction.
<!-- The text message -->
<Transform translation='0 4 0'>
  <Shape>
    <!-- The coding snippet defining the text message in Figure 4.2 is inserted here. -->
  </Shape>
</Transform>
<!-- The red cube -->
<Transform translation='-4.5 0 0'>
  <Shape>
    <!-- The coding snippet defining the red cube in Figure 4.3 is inserted here. -->
  </Shape>
</Transform>
Figure 4.4: Coding snippet defining the spatial relationship between two objects.
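The effect of a translation field can be sketched in plain JavaScript (illustrative only; an X3D player performs this arithmetic internally when rendering the <Transform> node, and the function name is hypothetical):

```javascript
// Applying a <Transform translation='...'> to a child vertex in X3D's
// right-handed coordinate system. Both arguments are [x, y, z] triples,
// matching the X3D "0 4 0" field syntax.
function applyTranslation(translation, point) {
  return [
    point[0] + translation[0],
    point[1] + translation[1],
    point[2] + translation[2],
  ];
}

// The text block's local origin, moved 4 units up the positive Y-axis:
console.log(applyTranslation([0, 4, 0], [0, 0, 0]));    // [0, 4, 0]
// The red cube, moved 4.5 units along the negative X-axis:
console.log(applyTranslation([-4.5, 0, 0], [0, 0, 0])); // [-4.5, 0, 0]
```

Nested <Transform> nodes compose: a child's coordinates are translated first by its own node, then by each enclosing parent.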
The final task in the “Welcome” scene is to set up the animation for the rotating
cubes. As discussed above, the <Transform> node determines a geometry’s position in
the coordinate system. To keep a cube rotating, the rotation field of its parent <Transform>
node must be updated continually with gradually increasing angles of rotation. To make
this happen, two more functions need to be added in the scene graph. The first function is
to track the passage of time when the scene is rendered, which can be conducted using a
<TimeSensor> node. The second function is to provide angles of rotation in accordance
with the tracked point in time, which can be realized using an <OrientationInterpolator>
node. With these two nodes, the logic chain for setting the animation is summarized as in
Figure 4.5. When the scene is rendered, the <TimeSensor> node tracks the passage of
time and sends out its record to the <OrientationInterpolator> node in each frame, which
in turn calculates the angle of rotation using the received point in time and sends out the
calculation result to the rotation field of the <Transform> node. As its local coordinate
system is updated, the orientation of the nested cube changes accordingly.
Figure 4.5: Defining the animation chain for rotating a cube.
<TimeSensor DEF='orbitalTimeInterval' cycleInterval='12' loop='true'/>
<OrientationInterpolator DEF='spinCubes'
    key='0.00 0.25 0.50 0.75 1.00'
    keyValue='1 0 1 0, 1 0 1 0.78, 1 0 1 1.57, 1 0 1 2.35, 1 0 1 3.14'/>
<Transform DEF='cubeRed' translation='-4.5 0 0'>
  <Shape>
    <!-- The coding snippet defining the red cube in Figure 4.3 is inserted here. -->
  </Shape>
</Transform>
<ROUTE fromNode='orbitalTimeInterval' fromField='fraction_changed'
       toNode='spinCubes' toField='set_fraction'/>
<ROUTE fromNode='spinCubes' fromField='value_changed'
       toNode='cubeRed' toField='set_rotation'/>
Figure 4.6: Coding snippet defining the animation chain for rotating a cube.
Figure 4.6 shows the coding snippet that implements this animation chain for the red
cube in Figure 4.1. Each node on the chain is named using DEF so that
each <ROUTE> connection can identify its endpoints (i.e., fromNode and toNode).
For the animation parameters, the cycleInterval field of the <TimeSensor> node
specifies the duration of one animation cycle in seconds, while the loop field
determines whether the cycle repeats.
The <OrientationInterpolator> node includes two interrelated arrays (key and keyValue)
for calculating the real time rotation values. Specifically, the key array stores the key
fractions of a loop duration and the keyValue array stores the corresponding rotation
values at each key fraction. Thus, there are exactly as many elements in the keyValue
array as there are elements in the key array. Each element in the keyValue array is
assigned a rotation value in the format of “X_Y_Z_rotation value in radians”, in which
the “X_Y_Z” accounts for the rotation axis. During an animation loop, the
<OrientationInterpolator> node receives the value t from the <TimeSensor> node and
processes t using the following linear equation to produce an interpolated value x. This
calculation result is then sent to red cube’s <Transform> node to update its orientation.
“The X3D computational model simply assumes that any arbitrary function of
output values over time can be approximated by piece-wise linear line segments” [9].
Therefore, to control the degree of fidelity or precision of functions, the scene author can
change the number of elements in the key and keyValue arrays. In the
“Welcome” scene, each cube completes a 3.14-radian (180-degree) rotation around its
local axis in each animation loop, and because of the cube’s symmetry the start and end
positions coincide; therefore, five elements in each array are enough to define the start
and end positions and the three intermediate orientations, as well as their order of occurrence.
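The interpolation just described can be sketched in plain JavaScript (illustrative only; a real X3D player implements this internally). The key and keyValue data come from the spinCubes node in Figure 4.6, with the rotation axis held fixed so only the angle varies:

```javascript
// Piece-wise linear interpolation as an <OrientationInterpolator> performs it.
// key: fractions of the loop; keyValue: rotation angles (radians) at each key.
function interpolateAngle(key, keyValue, t) {
  if (t <= key[0]) return keyValue[0];
  if (t >= key[key.length - 1]) return keyValue[keyValue.length - 1];
  // Find the segment [key[i], key[i+1]] containing t ...
  let i = 0;
  while (t > key[i + 1]) i++;
  // ... and interpolate linearly between the two bracketing keyValues.
  const f = (t - key[i]) / (key[i + 1] - key[i]);
  return keyValue[i] + f * (keyValue[i + 1] - keyValue[i]);
}

const key = [0.0, 0.25, 0.5, 0.75, 1.0];
const angles = [0, 0.78, 1.57, 2.35, 3.14]; // from the spinCubes node

// Halfway into the first segment: 0 + 0.5 * (0.78 - 0) = 0.39 radians
console.log(interpolateAngle(key, angles, 0.125));
```

Adding more key/keyValue pairs tightens the piece-wise approximation, exactly as the quoted passage suggests.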
This sub-section has discussed the approach of modeling a scene directly in
X3D; all of these concepts also apply to scene graphs exported from
CAD applications. The next sub-section describes the CAD model conversion process
and continues this discussion by introducing other X3D nodes that are essential to
creating virtual interiors for online use.
4.1.2 Creating a Scene based on CAD Models
Compared to the “Welcome” scene, the two virtual interiors created in YY3D are more
complex. These two scenes consist of many more geometric objects, such as floors, walls,
columns, ceilings, doors, windows, and other building components. Unlike those cubes
created in the previous section, these building components can be made of irregular
shapes instead of standard primitives, so the scene author has to refer to vertex
coordinates to define objects. In addition, as a large number of objects are involved, their
spatial relationships can become intricate. Thus, hard-coding each object’s position via
<Transform> nodes is not a straightforward task. For these reasons, coding these two
interiors directly in X3D can be time-consuming and error-prone.
Instead, using CAD is more desirable in this context. The CAD modeling
applications free the scene author from those coordinate numbers that are required by
X3D nodes and fields. The author can focus on the modeling task in a more intuitive and
efficient way by applying the tools offered in CAD applications. When the scene is built
up and exported, the CAD applications will take care of those coordinate numbers and
transfer them into the appropriate X3D nodes. The author therefore need not worry
about coding syntax during the modeling process. After the export, the author is still
able to access the X3D file for further editing based on the modeling work in CAD.
Creating a living room or office interior provides a useful illustration of how an
X3D scene is created from objects modeled in a CAD application. The following sub-
sections detail the process of creating the living room scene used in the proof-of-concept.
4.1.2.1 Modeling and Optimization
In this example, the living room space is 5m wide, 5m long and 3m high. This size is
based on an average living room [14]. This living room includes: floor, wall, door,
window, ceiling, and lights. The modeling tool employed is Autodesk 3ds Max 2010,
though any other CAD application can be used as long as it supports data exchange
with X3D or its predecessor VRML. The adopted modeling approach is Polygon
Modeling and the polygon count of the overall interior scene is less than 2000, which
ensures that real-time rendering speed can be achieved on the client’s local computer
(Section 2.2.3). Figure 4.7 shows the completed living room scene in the 3ds Max
environment.
Figure 4.7: The living room built using Autodesk 3ds Max 2010.
4.1.2.2 Export
After creating all the geometric objects, three more tasks must be completed before
exporting the scene. First, to facilitate later editing work in the X3D file, it is helpful to
name each geometric object in 3ds Max. For example, the geometric object used to stand
for the floor component can be properly named “floor”. When the scene is exported, the
given names will be kept in the corresponding X3D nodes using the DEF mechanism.
This helps the author easily retrieve modeling information in the exported X3D scene
graph. Second, to avoid redundant <Transform> definitions in the exported file, it is
useful to group all the geometric objects in the scene and locate their group center at the
coordinate origin. Also, those geometric objects that work together to stand for a building
component (e.g., lighting trough and down lamps) can be further grouped, so that they
can share a parent <Transform> node in the exported scene graph. Finally, to reduce the
file size, it is beneficial to clean the scene by removing any objects that were created for
auxiliary purposes (e.g., auxiliary line and reference plane) during the modeling process.
Figure 4.8: VRML97 Exporter in Autodesk 3ds Max 2010.
Currently, Autodesk 3ds Max only supports the data exchange with the VRML
format. The scene author needs to go through three steps to create an X3D file. First, the
author exports the model to a VRML file using the VRML97 Exporter, a built-in function
of 3ds Max (Figure 4.8). Second, the author copies the VRML code from the exported
file into an X3D encoding converter (Section 3.2.2). Third, after the conversion, the author
saves the X3D code into a plain text file with the file extension “.x3d”.
<!-- The floor in the living room scene -->
<Transform DEF="floor" translation="0.0 -1600.0 0.0">
  <Shape>
    <Appearance>
      <Material diffuseColor="1.0 1.0 1.0"/>
    </Appearance>
    <IndexedFaceSet DEF="floor-FACES"
        coordIndex="5 4 0 -1 0 1 5 -1 6 5 1 -1 1 2 6 -1 7 6 2 -1 2 3 7 -1
                    4 7 3 -1 3 0 4 -1 2 1 0 -1 0 3 2 -1 4 5 6 -1 6 7 4 -1"
        solid="true" ccw="true">
      <Coordinate DEF="floor-COORD"
          point="2740.0 0.0 -2740.0, -2740.0 0.0 -2740.0, -2740.0 0.0 2740.0,
                 2740.0 0.0 2740.0, 2740.0 100.0 -2740.0, -2740.0 100.0 -2740.0,
                 -2740.0 100.0 2740.0, 2740.0 100.0 2740.0"/>
    </IndexedFaceSet>
  </Shape>
</Transform>
Figure 4.9: Coding snippet describing a floor plane built using CAD.
Figure 4.9 shows a coding snippet in the X3D scene graph after the conversion,
which describes the floor plane of the living room. The applications of <Transform>,
<Shape>, <Appearance>, and <Material> nodes are the same as those introduced when
creating the “Welcome” scene. The only unfamiliar node is <IndexedFaceSet>, which
appears because of the Polygon Modeling approach adopted during the modeling
process. The <IndexedFaceSet> node records all the vertex coordinates of a geometric
object. Its coordIndex field, an index array, defines the connection order of the vertices
stored in the point field of the child <Coordinate> node, so that all the polygons defining
the geometry can be reconstructed when the scene is rendered. In addition, the solid and ccw fields
together determine which side of each polygon is rendered. Specifically, the ccw field
states whether the vertices are listed in counter-clockwise order, which in turn determines the
orientation of the polygons. For example, when ccw=“true”, the front side of a polygon is
determined from the normal-vector direction using the right-hand rule. The solid field
then determines whether one or both sides of the polygons are drawn. When
solid=“true”, only the front side is drawn while the back side is transparent. These field
values will follow the parameter settings specified during the modeling process in CAD,
when geometries are exported.
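The coordIndex and ccw behaviour described above can be sketched with two small JavaScript helpers (illustrative only, not X3D player code):

```javascript
// 1. coordIndex lists vertex indices, with -1 terminating each polygon;
//    faceList() splits the flat array back into per-face index lists.
function faceList(coordIndex) {
  const faces = [];
  let current = [];
  for (const i of coordIndex) {
    if (i === -1) { faces.push(current); current = []; }
    else current.push(i);
  }
  if (current.length) faces.push(current);
  return faces;
}

// 2. With vertices in counter-clockwise order (ccw='true'), the cross
//    product of two edge vectors points out of the front side -- the
//    right-hand rule mentioned above.
function faceNormal(a, b, c) {
  const u = [b[0] - a[0], b[1] - a[1], b[2] - a[2]]; // edge a->b
  const v = [c[0] - a[0], c[1] - a[1], c[2] - a[2]]; // edge a->c
  return [
    u[1] * v[2] - u[2] * v[1],
    u[2] * v[0] - u[0] * v[2],
    u[0] * v[1] - u[1] * v[0],
  ];
}

// The first two triangles of the floor's coordIndex in Figure 4.9:
console.log(faceList([5, 4, 0, -1, 0, 1, 5, -1])); // [[5, 4, 0], [0, 1, 5]]
// A triangle listed counter-clockwise as seen from +Z faces the viewer:
console.log(faceNormal([0, 0, 0], [1, 0, 0], [0, 1, 0])); // [0, 0, 1]
```

Reversing the vertex order flips the normal, which is why a wrong ccw setting makes faces render inside-out.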
4.1.2.3 Additional Editing in the X3D Scene Graph
After creating its X3D file, the author needs to supplement the scene with additional
information, including viewpoints, textures, and lighting, before the virtual interior is
ready to be posted on the website.
(1) Viewpoints
First of all, viewpoints need to be defined in the scene graph. Most X3D players
can discern the viewpoints predefined by the scene author and list them in a shortcut
menu. With this advantage, the user can easily navigate the virtual scene following the
route the author expects. In YY3D, to help the user view an interior from different angles,
five typical viewports are provided, including front, left, back, right, and top.
Figure 4.10 shows the coding snippet defining the five viewpoints in the living
room scene. The position field of the <Viewpoint> node defines the viewpoint location
(with X_Y_Z coordinates). The orientation field determines the direction in which the
user is looking (an X_Y_Z_angle value, where X_Y_Z defines the rotation axis and the
angle is in radians). If there are no viewpoints predefined in the scene graph, a default
viewpoint is used, which looks from position=“0 0 10” toward the origin (0 0 0) along the
negative Z-axis direction (the positive Z-axis comes straight out of the computer screen).
<!-- The viewpoints in the living room scene -->
<Viewpoint description='Front' position='0 0 5500'/>
<Viewpoint description='Left' position='-5500 0 0' orientation='0 1 0 4.71'/>
<Viewpoint description='Back' position='0 0 -5500' orientation='0 1 0 3.14'/>
<Viewpoint description='Right' position='5500 0 0' orientation='0 1 0 1.57'/>
<Viewpoint DEF='Top' description='Top' position='0 6500 0' orientation='1 0 0 -1.57'/>
Figure 4.10: Coding snippet defining the viewpoints in the living room scene.
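What the orientation values mean can be sketched in plain JavaScript (illustrative only; the function name is hypothetical): the default gaze direction (0 0 -1) is rotated about the given axis by the given angle, and the viewpoints above rotate about the Y-axis only.

```javascript
// Gaze direction after rotating the default (0, 0, -1) about the Y-axis
// by `angle` radians, as the orientation='0 1 0 <angle>' fields specify.
function gazeAfterYRotation(angle) {
  const r = v => Math.round(v * 100) / 100 + 0; // round off floating-point noise
  return [r(-Math.sin(angle)), 0, r(-Math.cos(angle))];
}

console.log(gazeAfterYRotation(0));    // [0, 0, -1]  Front viewpoint: looking toward -Z
console.log(gazeAfterYRotation(1.57)); // [-1, 0, 0]  Right viewpoint: looking toward -X
```

Each viewpoint is thus positioned outside the room and turned so its gaze points back toward the scene origin.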
Figure 4.11: Relative position between the viewpoint Left and the left-side wall.
To help the user get a whole picture of the interior space, each viewpoint defined
in Figure 4.10 is positioned outside the room boundary. Because the living room scene
was designed as 5m x 5m x 3m (Section 4.1.2.1) and its geometric center was located at
the scene origin (Section 4.1.2.2), the relative position between the viewpoint Left and the
left-side wall, for example, is as shown in Figure 4.11.
As a result, the left-side wall will block the line of sight when the user views the
interior from the viewpoint Left unless that wall is hidden. To solve this problem, the
<LOD> node is used, which stands for “level of detail”. The geometric objects grouped
under this node can have multiple representations based on the distance between the
<LOD> group center and the viewpoint being applied. For example, Figure 4.12 shows
the <LOD> node which is defined to group the elements of left-side wall. Because the
center field of this <LOD> group is set at (-2500 0 0) while the viewpoint Left is at
(-5500 0 0) (Figure 4.10), the distance between the <LOD> center and the viewpoint Left
is 3m. Meanwhile, the range field of the <LOD> node is assigned 3.5m. In doing so,
<!--The left-side wall in the living room scene -->
<LOD center='-2500 0 0' range='3500' forceTransitions='true'>
  <WorldInfo info='"Not visible at left view"'/>
  <Group DEF='leftElevationGroup'>
    <!-- Left wall -->
    <Transform DEF="wall_left" translation="0.0 -1500.0 2.441E-4">
      <Shape>
        <!-- The coding snippet defining the wall… -->
      </Shape>
    </Transform>
    <!-- Left pillar -->
    <Transform DEF="pillar_left" translation="-2450.0 -1500.0 -2000.0">
      <Shape>
        <!-- The coding snippet defining the pillar… -->
      </Shape>
    </Transform>
    <!-- Left baseboard -->
    <Transform DEF="baseboard_left" translation="-1450.0 -1500.0 -375.0">
      <Shape>
        <!-- The coding snippet defining the baseboard… -->
      </Shape>
    </Transform>
  </Group>
</LOD>
Figure 4.12: Coding snippet defining the left-side wall using a <LOD> node.
once the interior is observed from the viewpoint Left (3m < 3.5m), the left-wall elements
grouped inside <Group DEF=“leftElevationGroup”> will be hidden. When the interior
is viewed from other viewpoints (the distance between the adopted viewpoint and the
LOD center is greater than 3.5m), the left-side wall will be displayed again.
To match the five viewpoints, YY3D uses five <LOD> nodes to group the ceiling
and the four interior elevations respectively. In this manner, no matter which viewpoint
is applied, the user always has an unobstructed view of the entire interior space.
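The distance test behind this <LOD> behaviour can be sketched in plain JavaScript (illustrative only; an X3D player performs it internally every frame). With a single range value, child 0 (here the empty <WorldInfo> placeholder) is shown within range and child 1 (the wall group) beyond it:

```javascript
// Which <LOD> child is displayed, given the group center, the active
// viewpoint position, and the single range value (all in scene units, mm).
function lodChildIndex(center, viewpoint, range) {
  const dx = viewpoint[0] - center[0];
  const dy = viewpoint[1] - center[1];
  const dz = viewpoint[2] - center[2];
  const distance = Math.hypot(dx, dy, dz);
  return distance < range ? 0 : 1;
}

const center = [-2500, 0, 0]; // center field of the <LOD> node in Figure 4.12
const range = 3500;           // 3.5 m

console.log(lodChildIndex(center, [-5500, 0, 0], range)); // 0: viewpoint Left, wall hidden
console.log(lodChildIndex(center, [0, 0, 5500], range));  // 1: viewpoint Front, wall shown
```

From Left the distance is 3000 (3 m < 3.5 m), so the invisible placeholder is selected; from every other viewpoint the distance exceeds the range and the wall reappears.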
(2) Texture Information
The <ImageTexture> node is used for attaching texture maps to geometries (Section
4.1.1). To complete this task, each <ImageTexture> node requires a file address from
which to retrieve the texture map. The scene author may have assigned textures to the
model in CAD; however, when the texture maps are not saved in the same directory as
the exported X3D (or VRML) file, those texture paths are lost. In that instance, the
<ImageTexture> nodes are simply omitted from the exported scene graph. Therefore, it
is important to check the scene graph, insert the missing <ImageTexture> nodes, and
update their url fields with the new file addresses on the web server.
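As a quick post-export check, a rough script can list the <Shape> blocks that lack an <ImageTexture> node. The following JavaScript helper is a hypothetical illustration (a regular-expression scan, not a full XML parser, and not part of YY3D):

```javascript
// Return the positions (0-based) of <Shape> blocks in an X3D source
// string that contain no <ImageTexture> node.
function findUntexturedShapes(x3dSource) {
  const missing = [];
  // Split on <Shape ...> ... </Shape> blocks (a rough, non-nested check).
  const shapes = x3dSource.match(/<Shape[\s\S]*?<\/Shape>/g) || [];
  shapes.forEach((shape, i) => {
    if (!shape.includes('<ImageTexture')) missing.push(i);
  });
  return missing;
}

const sample = `
  <Shape><Appearance><Material diffuseColor="1 1 1"/></Appearance></Shape>
  <Shape><Appearance><ImageTexture url="floor.jpg"/></Appearance></Shape>`;

console.log(findUntexturedShapes(sample)); // logs [ 0 ]: the first Shape has no texture
```

Each flagged Shape then needs an <ImageTexture> inserted by hand, with its url field pointing at the map's location on the web server.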
Figure 4.13 shows an example of supplementing the missing texture information
for the floor plane described in Figure 4.9. Because the texture map is stored in the same
directory on the web server as the scene’s X3D file, a relative texture path is enough for
the url field. The repeatS and repeatT fields indicate whether to repeat the texture map
along the s axis (horizontally) and the t axis (vertically) on the surface of the floor plane.
If the texture map needs further modification (e.g., rotating and scaling on the geometric
surface), a <TextureTransform> node can be added following the target <ImageTexture>
node. For example, the <TextureTransform> node, as shown in Figure 4.13, scales the
texture map by 3 times along both s and t axes.
<!-- The floor in the living room scene -->
<Transform DEF="floor" translation="0.0 -1600.0 0.0">
  <Shape>
    <Appearance>
      <ImageTexture url="scene_livingroom_floor.jpg" repeatS="true"
          repeatT="true" containerField="texture"/>
      <TextureTransform scale="3 3"/>
    </Appearance>
    <IndexedFaceSet DEF="floor-FACES">
      <!-- The coding snippet defining the floor plane in Figure 4.9 is inserted here. -->
    </IndexedFaceSet>
  </Shape>
</Transform>
Figure 4.13: Coding snippet attaching a wood texture on the floor plane.
(3) Lighting
Lighting can be defined during the modeling process in a CAD application.
However, defining lighting this way may produce a discrepancy between the lighting
effects rendered in an X3D player and those rendered by the CAD rendering engine.
Therefore, the designer has to test and adjust the lighting settings in the
exported X3D file.
X3D provides three lighting nodes: <DirectionalLight> which illuminates the
scene in a single direction, <PointLight> which imitates a light source that radiates in all
directions, and <SpotLight> which lights up the scene within a conical beam. Depending
on the lighting settings in CAD, corresponding lighting nodes will be employed when the
scene is exported. In YY3D, the two interiors take the point-source lighting (i.e., Omni)
in 3ds Max to simulate the lighting effect of bulbs, <PointLight> nodes thus appear in
the exported scene graph (Figure 4.14). Configurable parameters of a lighting node
typically include the lighting colour, intensity, as well as the location of lighting source.
The color field specifies the RGB spectral components. For example, color=“1 0 0” is
pure full-intensity red. The intensity field defines the brightness of light rays, whose
value is in the range from 0 (no emission of light) to 1 (full intensity). The location field
locates the lighting source with the X_Y_Z coordinates.
<!-- The lights in the living room scene -->
<PointLight DEF="Omni_01" radius="14590.0" intensity="0.6" on="true"
    location="0.0 0.0 0.0" color="1.0 1.0 1.0"/>
<PointLight DEF="Omni_02" radius="10000.0" intensity="0.3" on="true"
    location="2000 0.0 2000" color="1.0 1.0 1.0"/>
<PointLight DEF="Omni_03" radius="10000.0" intensity="0.3" on="true"
    location="-2000 0.0 2000" color="1.0 1.0 1.0"/>
<PointLight DEF="Omni_04" radius="10000.0" intensity="0.3" on="true"
    location="-2000 0.0 -2000" color="1.0 1.0 1.0"/>
<PointLight DEF="Omni_05" radius="10000.0" intensity="0.3" on="true"
    location="2000 0.0 -2000" color="1.0 1.0 1.0"/>
Figure 4.14: Coding snippet defining the lightings in the living room scene.
Up to this point, the editing work based on the exported CAD model is finished.
Figure 4.15 displays the living room scene observed from the different viewpoints.
Figure 4.15: Observing the living room scene from the different viewpoints.
4.2 Creation of Furniture Models
YY3D provides six furniture categories, including desk, chair, office workstation, sofa,
bed, and storage cabinet. For each piece of furniture, there are two files:
(1) an X3D file used to store the furniture model’s 3D data; and
(2) a JPG file used to represent the furniture in the catalogue on the user interface page.
During the user’s operation, the X3D file of a furniture model is retrieved from the
web server when its image file is selected. To make the furniture files easy to retrieve, each
file is named with the furniture category as a prefix followed by a number that
differentiates it from other files in the same category; for example,
desk_01.x3d and desk_01.jpg.
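The naming convention can be sketched in JavaScript (illustrative only; YY3D's actual web scripts are described in Chapter 5, and the function name here is hypothetical):

```javascript
// Build the paired model and catalogue file names for a furniture item,
// using a two-digit, zero-padded index after the category prefix.
function furnitureFiles(category, index) {
  const id = `${category}_${String(index).padStart(2, '0')}`;
  return { model: `${id}.x3d`, image: `${id}.jpg` };
}

console.log(furnitureFiles('desk', 1)); // { model: 'desk_01.x3d', image: 'desk_01.jpg' }
```

Because the two names differ only in extension, the interface can derive the model's address directly from the selected catalogue image.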
4.2.1 Furniture Modeling
A furniture model usually comprises a variety of components. The relative positions
between different components can be complex when they are assembled. Thus, similar to
the modeling process of virtual interiors, the furniture models in YY3D are first created
in 3ds Max and then exported into the X3D format for further processing. The workflow
of modeling and export is the same as the one described in Sections 4.1.2.1 and 4.1.2.2.
Figure 4.16: A furniture model created in YY3D.
Figure 4.16 shows a furniture example in YY3D. Figure 4.16(a) shows its original
model built in Autodesk 3ds Max. Figure 4.16(b) shows its perspective view produced
using the 3ds Max rendering engine, which is saved as a JPG file to be displayed in the
catalogue on the user interface page. Figure 4.16(c) shows its raw X3D model rendered in
the BS Contact Player. There is no texture map assigned at this point.
4.2.2 Programming the Transformation Functions
The editing work in the exported X3D files of the furniture models differs from
that done for the interior scenes. The designer does not need to define viewpoints and
lightings for the furniture, because each furniture model is imported into the interior
scene for rendering. There is also no need to define texture information. Instead, texture
maps are provided on the interface page, so that the user can customize furniture textures
themselves. Because the selections of furniture model and texture are conducted on the
user interface, the web scripts to support the furniture import and texture attachment are
explained in Chapter 5, when implementing the user interface.
A furniture X3D model can be transformed using translation, rotation, and
scaling. To arrange the furniture in an interior scene, each furniture model should be able
to be moved and rotated according to the user’s input. To realize these transformation
functions, additional lines of code need to be added to the furniture’s X3D scene graph.
The following sub-sections use a test object (a red cube) to walk through the programs
developed for the translation, rotation, and scaling functions respectively. After testing
each function, a coding template is created to integrate them. By inserting its
X3D code into this template, each furniture model is equipped with these
transformation functions.
4.2.2.1 Translation
To dynamically move an object in the 3D scene, two things are required. First, the plane
on which the object is positioned must be identified. Second, the user’s input must be
converted into a 2D translation along that plane surface. Because all the furniture models
in YY3D are designed to be placed on the floor only, the 2D translation is confined to
the floor plane. To realize the conversion of the user’s input, the <PlaneSensor> node
is adopted, which converts the user’s mouse dragging motion into a pair of translation
coordinates on the local X-Y plane (in which the floor will be positioned). These X-Y
coordinates are continually sent to the <Transform> node of the red cube to update the
translation field in each rendering frame (Figure 4.17).
Figure 4.17: Defining the translation function chain.
Figure 4.18 shows the coding snippet defining the translation function, in which
the output value from the translation_changed field of the <PlaneSensor> node is passed
to the set_translation field of the <Transform> node of the test cube. The minPosition
and maxPosition fields of the <PlaneSensor> node determine the moving boundary on
the local X-Y plane. Because the floor plane is 5m x 5m and located at the scene origin,
the boundary values are set in accordance with the diagonal endpoints of the floor plane.
In practice, as the user drags the cube, the <PlaneSensor> node maps the user’s mouse
motion across the computer screen to the cube’s displacement on the floor plane. Figure
4.19 demonstrates the application result.
<PlaneSensor DEF='Mover' description='Click and drag to move the model'
    enabled='true' minPosition='-2500 -2500' maxPosition='2500 2500'/>
<Transform DEF='MoveModel'>
  <!-- The red cube -->
  <Shape>
    <Box DEF='TestModel' size='300 1200 600'/>
    <Appearance>
      <Material diffuseColor='1 0 0'/>
    </Appearance>
  </Shape>
</Transform>
<ROUTE fromNode='Mover' fromField='translation_changed'
       toNode='MoveModel' toField='set_translation'/>
Figure 4.18: Coding snippet defining the translation function.
Figure 4.19: Dragging and moving the test object.
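The clamping imposed by the minPosition and maxPosition fields can be sketched in plain JavaScript (illustrative only; the X3D player applies it internally, and the function name is hypothetical):

```javascript
// Confine a dragged translation to the rectangular boundary defined by
// minPosition and maxPosition on the <PlaneSensor>'s local X-Y plane.
function clampTranslation(x, y, min, max) {
  const clamp = (v, lo, hi) => Math.min(Math.max(v, lo), hi);
  return [clamp(x, min[0], max[0]), clamp(y, min[1], max[1])];
}

// Dragging past the floor edge stops the cube at the boundary:
console.log(clampTranslation(3100, -200, [-2500, -2500], [2500, 2500])); // [2500, -200]
```

With the boundary values of Figure 4.18, the cube can therefore never leave the 5 m x 5 m floor plane, no matter how far the mouse is dragged.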
4.2.2.2 Rotation
To dynamically rotate an object, the user’s mouse motion needs to be converted into a 3D
rotation. X3D includes two sensors to conduct this task: the <SphereSensor> node,
which converts the mouse motion into a rotation around the local origin; and the
<CylinderSensor> node, which generates a rotation around the local Y-axis. Because the
furniture in YY3D is to be placed on the floor only, the rotation around the axis that is
perpendicular to the floor plane is desired. Similar to the logic chain that defines the
translation function, the user’s mouse input is first sent to the <CylinderSensor> node,
which in turn sends the converted rotation value to the <Transform> node of the test
cube to update the rotation field (Figure 4.20).
Figure 4.20: Defining the rotation function chain.
Figure 4.21 shows the coding snippet defining the rotation function. Because the
floor was positioned on the local X-Y plane when defining the translation function, the
local coordinate system must be rotated by 90 degrees (1.57 radians) so that the local
Y-axis is perpendicular to the floor plane before the <CylinderSensor> node is applied.
When using the <CylinderSensor> node, the output value from its rotation_changed
field is passed to the set_rotation field of the <Transform> node of the cube. In practice,
as the user drags the cube, the cube rotates on the floor plane around the axis that goes
through its geometric center. Figure 4.22 demonstrates the application result.
<!-- Define the local Y-axis -->
<Transform rotation='1 0 0 -1.57'>
  <CylinderSensor DEF='Rotationer'
      description='Click and drag to rotate the model' enabled='true' />
  <Transform DEF='RotateModel'>
    <!-- The red cube -->
    <Shape>
      <Box DEF='TestModel' size='300 1200 600' />
      <Appearance>
        <Material diffuseColor='1 0 0' />
      </Appearance>
    </Shape>
  </Transform>
</Transform>
<ROUTE fromNode='Rotationer' fromField='rotation_changed'
    toNode='RotateModel' toField='set_rotation' />
Figure 4.21: Coding snippet defining the rotation function.
Figure 4.22: Dragging and rotating the test object.
4.2.2.3 Switch between the Translation and Rotation Functions
Because both the translation and rotation functions are driven by the mouse dragging
motion, there is a potential conflict in determining which function has the priority to
respond to the mouse input. Therefore, a switch mechanism between the <PlaneSensor>
and <CylinderSensor> nodes is required; when one function works, the other turns off.
To perform this switching task, two keystrokes are registered: the key “M” (for
“Moving”) activates the <PlaneSensor>, and the key “R” (for “Rotating”) activates
the <CylinderSensor>. A <KeySensor> node is then added to the scene graph to detect
which key the user presses. To process the input received by the <KeySensor> and
send out instructions that turn the target sensors on or off, a script connects the
<KeySensor> with the <PlaneSensor> and <CylinderSensor> nodes (Figure 4.23).
Figure 4.24 defines this switch function in the code. Specifically, the script in
the <Script> node converts the string value received by the <KeySensor> into a pair
of Boolean values, which update the enabled fields of the <PlaneSensor> (Figure 4.18)
and <CylinderSensor> (Figure 4.21) nodes respectively. When its enabled field is set
to true, a sensor is switched on.
Figure 4.23: Function chain to switch between the translation and rotation sensors.
<KeySensor DEF='KeyDetector' enabled='true' />
<Script DEF='KeyHandler' directOutput='true' mustEvaluate='true'>
  <field name='keyInput' type='SFString' accessType='inputOnly' />
  <field name='triggerMover' type='SFBool' accessType='outputOnly' />
  <field name='triggerRotationer' type='SFBool' accessType='outputOnly' />
  <![CDATA[
    ecmascript:
    function keyInput(value) {
      // Trigger the Mover mode
      if (value == 'M' || value == 'm') {
        triggerMover = true;
        triggerRotationer = false;
      }
      // Trigger the Rotationer mode
      if (value == 'R' || value == 'r') {
        triggerMover = false;
        triggerRotationer = true;
      }
    }
  ]]>
</Script>
<ROUTE fromNode='KeyDetector' fromField='keyPress'
    toNode='KeyHandler' toField='keyInput' />
<ROUTE fromNode='KeyHandler' fromField='triggerMover'
    toNode='Mover' toField='enabled' />
<ROUTE fromNode='KeyHandler' fromField='triggerRotationer'
    toNode='Rotationer' toField='enabled' />
Figure 4.24: Coding snippet defining the switch mechanism between the translation and rotation sensors.
4.2.2.4 Scaling
The scaling function is not typically used in YY3D, because all the furniture models
are built at their actual sizes. However, future applications may include decorative
elements whose sizes are user-defined (e.g., a frame hanging on the wall). Thus,
YY3D includes a scaling function.
YY3D registers two keystrokes to scale the model. The key “]” is assigned for
scaling up and “[” is for scaling down. Similar to the method adopted in the switch
function in Section 4.2.2.3, a <KeySensor> node is created to trace the user’s input and a
<Script> node is programmed to process the input value and update the object’s parent
<Transform> node accordingly (Figure 4.25).
Figure 4.25: Defining the scaling function chain.
Figure 4.26 defines the scaling function in the code. The scalingFactor field of
the <Script> node is updated based on the user’s keyboard input. The updated scaling
factor then multiplies the scaling reference values stored in the scalingBase array,
whose three elements correspond to the X, Y, and Z axes respectively. The results of
these calculations are finally sent to the set_scale field of the <Transform> node
to scale its nested geometry. This scaling function scales the model uniformly,
because all the elements in the scalingBase array are multiplied by the same
scalingFactor. If non-uniform scaling is required, extra scaling factors are needed
to update the scaling reference in each axis direction independently. Figure 4.27
shows the application result.
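The non-uniform extension mentioned above can be sketched by replacing the single scalingFactor with one factor per axis. Plain JavaScript stands in for the ecmascript inside the <Script> node; the function name and per-axis key bindings are illustrative assumptions, not part of YY3D:

```javascript
// Sketch of non-uniform scaling: each axis keeps its own factor.
// The names here are illustrative, not taken from YY3D's code.
const scalingBase = [1, 1, 1];          // reference scale on X, Y, Z
const factors = { x: 1, y: 1, z: 1 };   // independent per-axis factors

// Adjust one axis factor and recompute the vector that would be
// routed to the Transform node's set_scale field.
function scaleAxis(axis, delta) {
  factors[axis] += delta;
  return [
    scalingBase[0] * factors.x,
    scalingBase[1] * factors.y,
    scalingBase[2] * factors.z,
  ];
}
```

For example, a hypothetical key bound to `scaleAxis('x', 0.1)` would widen the model only along the X-axis while the Y and Z scales stay at 1.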
<Transform DEF='ScaleModel'>
  <!-- The red cube -->
  <Shape>
    <Box DEF='TestModel' size='300 1200 600' />
    <Appearance>
      <Material diffuseColor='1 0 0' />
    </Appearance>
  </Shape>
</Transform>
<KeySensor DEF='KeyDetector' enabled='true' />
<Script DEF='KeyHandler' directOutput='true' mustEvaluate='true'>
  <field name='keyInput' type='SFString' accessType='inputOnly' />
  <field name='scalingFactor' type='SFFloat' accessType='initializeOnly' value='1' />
  <field name='scalingBase' type='SFVec3f' accessType='initializeOnly' value='1 1 1' />
  <field name='scale_changed' type='SFVec3f' accessType='outputOnly' />
  <![CDATA[
    ecmascript:
    function keyInput(value) {
      // Scale the model up
      if (value == ']') {
        scalingFactor += 0.1;
        scale_changed[0] = scalingBase[0] * scalingFactor; // X-axis
        scale_changed[1] = scalingBase[1] * scalingFactor; // Y-axis
        scale_changed[2] = scalingBase[2] * scalingFactor; // Z-axis
      }
      // Scale the model down
      if (value == '[') {
        scalingFactor -= 0.1;
        scale_changed[0] = scalingBase[0] * scalingFactor;
        scale_changed[1] = scalingBase[1] * scalingFactor;
        scale_changed[2] = scalingBase[2] * scalingFactor;
      }
    }
  ]]>
</Script>
<ROUTE fromNode='KeyDetector' fromField='keyPress'
    toNode='KeyHandler' toField='keyInput' />
<ROUTE fromNode='KeyHandler' fromField='scale_changed'
    toNode='ScaleModel' toField='set_scale' />
Figure 4.26: Coding snippet defining the scaling function.
Figure 4.27: Scaling the test object.
4.2.2.5 Template for Integrating the Transformation Functions
Three transformation functions have been developed and tested separately. For
practical application, they need to be integrated into each furniture model’s X3D
file. To simplify this process, a coding template was created, as shown in Figure
4.28. Each time a furniture model is exported from a CAD application, its X3D code
can be inserted at the reserved position in this template. Explanations of this
template follow.
(1) Local X-Y plane and local Y-axis
The X3D scene follows the right-hand Cartesian coordinate system, in which the
positive X-axis points horizontally to the right, the positive Y-axis points up, and
the positive Z-axis comes straight out of the computer screen. The default X-Y plane
is therefore parallel to the screen.
In practice, in order to display a normal overview of the interior scene, the floor
is positioned on the X-Z plane. However, the <PlaneSensor> node produces translation
coordinates on the local X-Y plane; therefore, the interior scene must be rotated by
90 degrees (approximately 1.57 radians) around the X-axis before the <PlaneSensor>
node is applied. Otherwise, the furniture model will not move properly on the floor
plane.
In the case of the <CylinderSensor> node, the floor has to be rotated back onto the
X-Z plane, so that it is perpendicular to the Y-axis. Otherwise, the rotation axis of
the furniture model is parallel to the floor plane, resulting in an undesired
rotation effect.
(2) Scaling center
By default, the imported furniture models are scaled relative to the coordinate
origin (0 0 0), which is at the geometric center of the interior scene. Restrictions
are thus required to prevent furniture models from moving off the floor plane when
they are scaled.
<!-- Define the local X-Y plane -->
<Transform rotation='1 0 0 1.57'>
  <!-- Translation -->
  <PlaneSensor DEF='Mover' enabled='true'
      description='Click and drag to move the model'
      minPosition='-2500 -2500' maxPosition='2500 2500' />
  <!-- Define the local Y-axis -->
  <Transform DEF='MoveModel' rotation='1 0 0 -1.57'>
    <!-- Rotation -->
    <CylinderSensor DEF='Rotationer' enabled='false'
        description='Click and drag to rotate the model' />
    <Transform DEF='RotateModel'>
      <!-- Scaling -->
      <!-- Define the scaling center, assuming that the floor plane
           center is at (0 -1500 0) -->
      <Transform DEF='ScaleModel' center='0 -1500 0'>
        <!-- Locate the model on the floor plane -->
        <Transform translation='0 -1500 0'>
          <!-- The auxiliary cylinder -->
          <Group>
            <Shape>
              <Cylinder radius='450' height='1800' />
              <Appearance>
                <Material DEF='ControlCylinder' diffuseColor='0.5 1 0'
                    transparency='1.0' />
              </Appearance>
            </Shape>
          </Group>
          <!-- The furniture model -->
          <Group>
            <!-- Coding snippet defining the furniture model is inserted here. -->
          </Group>
        </Transform>
      </Transform>
    </Transform>
  </Transform>
</Transform>
<!-- The script node and the route declarations for the transform functions -->
<KeySensor DEF='KeyDetector' enabled='true' />
<Script DEF='KeyHandler' directOutput='true' mustEvaluate='true'
    url='../../yy3d_script/yy3d_script_sectionX3D_transformation.js'>
  <field name='keyInput' type='SFString' accessType='inputOnly' />
  <field name='triggerMover' type='SFBool' accessType='outputOnly' />
  <field name='triggerRotationer' type='SFBool' accessType='outputOnly' />
  <field name='scalingFactor' type='SFFloat' accessType='initializeOnly' value='1' />
  <field name='scalingBase' type='SFVec3f' accessType='initializeOnly' value='1 1 1' />
  <field name='scale_changed' type='SFVec3f' accessType='outputOnly' />
</Script>
<!-- The Routes: Switch between Mover and Rotationer -->
<ROUTE fromNode='KeyDetector' fromField='keyPress' toNode='KeyHandler' toField='keyInput' />
<ROUTE fromNode='KeyHandler' fromField='triggerMover' toNode='Mover' toField='enabled' />
<ROUTE fromNode='KeyHandler' fromField='triggerRotationer' toNode='Rotationer' toField='enabled' />
<!-- The Routes: Update the model location and orientation -->
<ROUTE fromNode='Mover' fromField='translation_changed' toNode='MoveModel' toField='set_translation' />
<ROUTE fromNode='Rotationer' fromField='rotation_changed' toNode='RotateModel' toField='set_rotation' />
<!-- The Route: Scale the model -->
<ROUTE fromNode='KeyHandler' fromField='scale_changed' toNode='ScaleModel' toField='set_scale' />
Figure 4.28: Coding template integrating the transformation functions.
To do so, the scaling center of each furniture model has to be located on the floor
surface. For example, if the floor plane is located 1.5 m below the origin, the
furniture’s scaling center is defined as <Transform DEF='ScaleModel' center='0 -1500 0'>.
(3) Separation of scripts from the X3D scene graph
The scripts programmed for the switch mechanism between the translation and
rotation sensors (Figure 4.24) and for the scaling function (Figure 4.26) are
independent of each furniture model’s X3D content. Therefore, these scripts can be
assembled into an external script file, which is then referenced by each furniture
model through the url field of the <Script> node. In this manner, the same scripts
are not repeated in each furniture model’s scene graph, which benefits future code
modification because any change to the scripts only needs to be carried out once.
(4) Auxiliary cylinder
As planned, the user is able to configure furniture textures by mouse clicking
(Section 5.2.2), which potentially conflicts with the execution of the
transformation functions (i.e., translation and rotation). In order to
differentiate the user’s intention when clicking, an auxiliary cylinder is
provided. If the user clicks and drags the auxiliary cylinder, the translation or
rotation function works (depending on which sensor is enabled). Alternatively, if
the user clicks on model surfaces that are not overlapped by the auxiliary
cylinder, the texture map is attached. To make this happen, the geometry node
defining the auxiliary cylinder is placed at the same level of the coding hierarchy
as the furniture model in the template. In this manner, the auxiliary cylinder is
bound to the furniture and moves with it. Also, the auxiliary cylinder is made
smaller than the furniture model to ensure that it only overlaps the central area
of the furniture.
Finally, to avoid visual interference, the transparency field of the auxiliary
cylinder’s <Material> node is initially set to 1.0, i.e., fully transparent. When a
transformation function is activated (by clicking on the central area of the
furniture model), the transparency field is reset to 0.5, so a semi-transparent
cylinder shows up to assist the user in moving or rotating the furniture. Once the
user releases the mouse button, the transparency field is restored to 1.0 and the
cylinder disappears again.
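The show/hide behaviour described above amounts to a small piece of Script-node logic. The sketch below uses plain JavaScript as a stand-in for the X3D ecmascript; the function name and its routing from the sensor’s isActive field are illustrative assumptions, since this snippet is not reproduced in the thesis:

```javascript
// Stand-in for the Script-node logic that shows and hides the
// auxiliary cylinder (names and routing are illustrative).
let transparency = 1.0;  // fully transparent while idle

// Routed from the sensor's isActive field: true while the user drags.
function isActive(dragging) {
  transparency = dragging ? 0.5 : 1.0;  // semi-transparent during a drag
  return transparency;  // would be routed to the Material's set_transparency
}
```

Routing the boolean event through a function like this keeps the cylinder invisible except while a drag is in progress.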
Figure 4.29: Results of applying the transformation template to a furniture model.
By employing this template, the exported furniture models can be equipped with
the transformation functions quickly. Figure 4.29 demonstrates the result of applying this
coding template to a desk model in the YY3D’s furniture database.
4.3 Preparation of Texture Maps
To match the furniture types, five categories of texture maps are provided in YY3D,
including fabric, leather, wood, metal, and glass. Each texture category collects a
group of sample maps. To make texture files easy to retrieve on the web server, each
file is named with its texture category as the prefix, followed by a number that
differentiates it from other files in the same category, for example, fabric_01.jpg
and wood_01.jpg.
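This naming convention can be expressed as a small helper, matching the category-prefix plus zero-padded-index pattern of the examples above (the function name is illustrative; YY3D simply stores files under such names):

```javascript
// Build a texture file name such as "fabric_01.jpg" from a category
// prefix and a zero-padded two-digit index (helper name is illustrative).
function textureFileName(category, index) {
  return category + '_' + String(index).padStart(2, '0') + '.jpg';
}
```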
Because JPG and PNG are the only image file formats guaranteed to be supported
by X3D players, YY3D stores texture maps in the JPG format [9]. To keep file
transmission and rendering fast during the user’s operations, both the width and
height of each texture map are set to 256 pixels, which guarantees a reasonable file
size for a web-based virtual application. Figure 4.30 displays a group of texture maps created
in YY3D. Among them, Figure 4.30(a) shows the available categories and Figure 4.30(b)
shows the texture collection under the “Wood” category.
Figure 4.30: Texture samples created in YY3D.
Chapter Five: Web-based User Interface
This chapter concentrates on the implementation of the YY3D’s user interface,
including a discussion of the interface layout, which consists of windows that
organize and display the virtual design data created in Chapter 4. The scripts
programmed to support the user’s operations on this interface are then explained,
including the furniture import and the texture attachment. The implementation
process recorded in this chapter serves as an example demonstrating the methods
proposed in Section 3.3.
5.1 User Interface Layout
The layout design of the YY3D’s user interface follows the prototype sketch (Figure 3.4)
proposed in Section 3.3.1. Specifically, the YY3D’s user interface is divided into three
parts: from top to bottom - the “title bar”, “navigation bar”, and “user’s operation area”
(Figure 5.1). The “title bar” displays the research project title and the “navigation bar”
provides links to the project information presented on other web pages. The “user’s
operation area”, the main part of this interface, is further divided into four sub-windows:
(1) “Model Editor” embeds an X3D player to retrieve and render virtual interiors. It is the
main operation area where the user examines and configures imported furniture models.
(2) “Model Catalogue” lists all the furniture models using their renderings. Based on their
categories, models are grouped into six catalogues. The window shows one catalogue
at a time, and the user can switch between catalogues using a built-in menu.
(3) “Texture Catalogue” lists all the texture maps. Similarly to the “Model Catalogue”
window, texture maps are categorized into five catalogues that can be switched using a
built-in menu.
(4) “User Notes” embeds a web form for the user to leave comments and annotations.
Figure 5.1: Framework of the YY3D’s user interface.
5.1.1 Model Editor Window
The “Model Editor” window occupies the upper left region in the user’s operation area.
This is where the user starts their workflow in YY3D. It includes two parts: the top part
embeds the X3D player and the bottom part provides control buttons to access certain
virtual interiors stored on the web server (Figure 5.2).
Figure 5.3 shows the coding snippet placing an X3D object within the web page.
To place an X3D object, the Web3D Consortium has specified a standard object type
(<object type="model/x3d+xml">). However, this object type is currently not supported
by Internet Explorer (IE), a widely used web browser. To solve this problem, another
<object type="application/x-oleobject"> is nested under the Web3D object to specify
the implementation of the BS Contact Player. In this manner, if IE is used, the BS
Contact Player can still be loaded properly [6].
Figure 5.2: Model Editor window with an office scene being rendered.
<!-- The Model Editor window, placing an X3D player -->
<div id="window_x3d">
  <div class="sectionTitle_left">Model Editor</div>
  <div id="x3d">
    <!--[if !IE]> -->
    <object type="model/x3d+xml" width="600" height="540">
      <param name="src" value="yy3d_archive/default/scene_welcome/welcomeToYY3D.x3d" />
    <!--<![endif]-->
      <object type="application/x-oleobject" width="600" height="540"
          classid="clsid:4B6E3013-6E45-11D0-9309-0020AFE05CC8">
        <param name="src" value="yy3d_archive/default/scene_welcome/welcomeToYY3D.x3d" />
      </object>
    <!--[if !IE]> -->
    </object>
    <!--<![endif]-->
  </div>
</div>
Figure 5.3: Coding snippet embedding an X3D object within the web page.
The src values of the <param> tags (Figure 5.3) initially point to the “Welcome”
scene. To switch to other virtual interiors, the src values need to be dynamically
updated. To achieve this, control buttons are provided below the X3D player (Figure
5.2). Each button carries a scene name; when it is pressed, the corresponding
scene’s url on the web server is used to update the src values. Figure 5.4 shows the
coding snippet defining these buttons. Each button is assigned an id, so the url of
the virtual scene that the button represents can be recorded inside the button’s
onclick event handler. When such a click event is fired, the src values in the
<div id="x3d"> container (Figure 5.3) are altered accordingly.
<!-- The Model Editor window, access buttons -->
<div id="window_x3d">
  <div class="sectionTitle_left">Model Editor</div>
  <div id="x3d">
    <!-- Coding snippet placing the X3D player -->
  </div>
  <div id="defaultScenes">
    Click the following buttons to select a scene.<br />
    <input type="button" value="Welcome screen" id="welcome" />
    <input type="button" value="5m x 5m work space" id="work" />
    <input type="button" value="5m x 5m living room" id="home" />
    <input type="button" value="5m x 5m empty scene" id="empty" />
  </div>
  <script type="text/javascript" src="yy3d_script/yy3d_script_sectionX3D.js"></script>
</div>
Figure 5.4: Coding snippet defining control buttons.
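The external script referenced in Figure 5.4 is not reproduced in the thesis; the sketch below suggests what each button’s handler might do. Only the Welcome scene path is taken from Figure 5.3; the other entry is an assumed path that merely follows the same directory pattern:

```javascript
// Sketch of the external yy3d_script_sectionX3D.js (not reproduced in
// the thesis). Scene paths other than "welcome" are assumptions.
const scenes = {
  welcome: 'yy3d_archive/default/scene_welcome/welcomeToYY3D.x3d',
  work: 'yy3d_archive/default/scene_work/workSpace.x3d',  // assumed path
};

// Rebuild the markup of the <div id="x3d"> container so that both
// nested object tags point at the selected scene's url.
function sceneMarkup(url) {
  return '<object type="model/x3d+xml" width="600" height="540">' +
    '<param name="src" value="' + url + '"/>' +
    '<object type="application/x-oleobject" width="600" height="540" ' +
    'classid="clsid:4B6E3013-6E45-11D0-9309-0020AFE05CC8">' +
    '<param name="src" value="' + url + '"/>' +
    '</object></object>';
}
```

In the browser, each button’s onclick handler would then assign this markup to `document.getElementById('x3d').innerHTML`.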
5.1.2 Model Catalogue Window
The “Model Catalogue” window is located on the upper right region of the user interface.
Using this window, the user selects and imports furniture into the interior scene rendered
in the “Model Editor” window. The “Model Catalogue” window adopts a two-level menu
structure to list furniture renderings. The first level menu shows furniture categories and
the second level menu shows available models under the selected category (Figure 5.5).
Figure 5.5: Model Catalogue window with a two-level menu structure.
<!-- The Model Catalogue window, first level menu -->
<div id="window_model">
  <div class="sectionTitle_right">Model Catalogue</div>
  <div id="catalogueModel">
    <table class="tableModel" cellspacing="2">
      <tr><th colspan="3">Click the model category to enter its submenus.</th></tr>
      <tr><td class="tableModel_subtitle" colspan="3">Office Furniture</td></tr>
      <tr>
        <td class="tableModel_label" id="modelLabelDesk">Desk</td>
        <td class="tableModel_label" id="modelLabelChair">Chair</td>
        <td class="tableModel_label" id="modelLabelWorkstation">Workstation</td>
      </tr>
      <tr>
        <td class="tableModel_image" id="modelImageDesk">
          <img src="yy3d_archive/model/desk/desk.jpg" alt="Desk" /></td>
        <td class="tableModel_image" id="modelImageChair">
          <img src="yy3d_archive/model/chair/chair.jpg" alt="Chair" /></td>
        <td class="tableModel_image" id="modelImageWorkstation">
          <img src="yy3d_archive/model/workstation/workstation.jpg" alt="Workstation" /></td>
      </tr>
      <tr><td class="tableModel_subtitle" colspan="3">Home Furniture</td></tr>
      <tr>
        <td class="tableModel_label" id="modelLabelSofa">Sofa</td>
        <td class="tableModel_label" id="modelLabelBed">Bed</td>
        <td class="tableModel_label" id="modelLabelStorage">Storage</td>
      </tr>
      <tr>
        <td class="tableModel_image" id="modelImageSofa">
          <img src="yy3d_archive/model/sofa/sofa.jpg" alt="Sofa" /></td>
        <td class="tableModel_image" id="modelImageBed">
          <img src="yy3d_archive/model/bed/bed.jpg" alt="Bed" /></td>
        <td class="tableModel_image" id="modelImageStorage">
          <img src="yy3d_archive/model/storage/storage.jpg" alt="Storage" /></td>
      </tr>
    </table>
  </div>
  <script type="text/javascript" src="yy3d_script/yy3d_script_sectionModel.js"></script>
</div>
Figure 5.6: Coding snippet defining the first level menu in the Model Catalogue window.
The furniture menus are constructed using the <table> tags. Inside a web table,
each furniture category (level 1) or each furniture model (level 2) is represented using
two table cells: a label cell showing the name and an image cell showing the rendering.
For example, Figure 5.6 shows the coding snippet defining the first level furniture menu,
which displays the six furniture categories.
In the first level menu, when a label or image cell is clicked, the selected
furniture category is opened in the second level menu. To accomplish this, each cell
in the first level menu table is assigned an id (Figure 5.6), so the second level
menu content of the selected category can be recorded in each cell’s onclick event
handler. When such an event fires, the predefined web content replaces the “old”
web content (i.e., the first level menu table) in the <div id="catalogueModel">
container (i.e., the “Model Catalogue” window) (Figure 5.6). Likewise, inside each
furniture category table in the second level menu, a “Return to the category list”
button is provided (Figure 5.5(b)), whose event handler records the web content of
the first level menu. Therefore, when this button is clicked, the first level menu
is restored in the “Model Catalogue” window.
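The menu-swapping mechanism can be sketched as follows. This is a simplified stand-in for yy3d_script_sectionModel.js (not reproduced in the thesis); the stored markup strings are placeholders, and `container` stands for the <div id="catalogueModel"> element:

```javascript
// Each handler stores prebuilt menu markup and swaps it into the
// catalogue window. The markup strings are placeholders only.
const levelOneMenu = '<table class="tableModel"><!-- six categories --></table>';
const levelTwoMenus = {
  chair: '<table class="tableModel"><!-- chair models --></table>',
};

// container is the <div id="catalogueModel"> element; any object with
// an innerHTML property works for testing outside a browser.
function openCategory(container, category) {
  container.innerHTML = levelTwoMenus[category];
}
function returnToCategoryList(container) {
  container.innerHTML = levelOneMenu;
}
```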
5.1.3 Texture Catalogue Window
Below the “Model Catalogue” window is the “Texture Catalogue” window, which
displays furniture texture maps. The organization structure in this window is similar to
the “Model Catalogue” window but with a three-level menu (Figure 5.7). Given that the
availability of each texture category is dependent on the selection of the furniture type,
the first level menu in the “Texture Catalogue” window acts as a cover page, reminding
the user to select a furniture type in the “Model Catalogue” window. Once the furniture is
selected, the available texture categories will be shown in the second level menu. By
further opening a texture category, the texture maps are listed in the third level menu.
Figure 5.7: Texture Catalogue window with a three-level menu structure.
The switch mechanism between the second and third level menus in the “Texture
Catalogue” window is the same as the one applied in the “Model Catalogue” window.
The additional step is to build a connection between the two catalogue windows, so
that the switch between the first and second level menus in the “Texture Catalogue”
window executes automatically when the user selects furniture in the “Model
Catalogue” window. To make this happen, scripts are programmed in the event handlers
of each furniture category’s table cell (Figure 5.5(a)). Previously, these table
cells’ event handlers only recorded the web content of the “Model Catalogue” window;
in this section, their event handlers also contain the web content of the second
level menus in the “Texture Catalogue” window. In this manner, when a furniture
category is selected, both catalogue windows release their second level menus
according to the user’s selection. Similarly, the event handlers of the “Return to
the category list” buttons in each furniture category’s table (Figure 5.5(b)) are
now also assigned the web content of the first level texture menu (Figure 5.7(a)).
Therefore, when the user switches back to the first level menu in the “Model
Catalogue” window, the “Texture Catalogue” window also restores its first level
menu.
5.1.4 User Notes Window
The “User Notes” window occupies the lower left region in the user’s operation area,
where the user can summarize their work in YY3D. The “User Notes” window provides
the user an opportunity to record their selection of furniture models and texture maps, as
well as leave comments (Figure 5.8).
Figure 5.8: User Notes window as its initial state.
To facilitate the selection of any design record, components in YY3D are listed
through drop-down menus built within a <form> tag. In addition, a text area is
provided to accommodate the user’s comments. The “Submit” button conveys the user’s
inputs to the web server, where a server-side script processes the data received
from the client. To save storage space on the web server, YY3D does not store the
user’s inputs. Instead, the script developed in this section collects the user’s
submission and wraps it into a PDF file; this file, created on the user’s computer,
can then be sent back to the designer.
At this point in the process, all the elements of the YY3D’s user interface have
been created. Figure 5.9 displays the user interface accessed via a web browser. In
practice, the user is expected to perform their operations in the order suggested by
the window layout, which starts at the upper left corner and ends at the lower left
corner, moving in a clockwise direction.
Figure 5.9: Screenshot of the YY3D’s user interface.
5.2 Script Support
To make the interface function, scripts are programmed to execute the following tasks:
(1) Importing the furniture model selected in the “Model Catalogue” window to the
virtual interior rendered in the “Model Editor” window;
(2) Attaching the texture maps selected in the “Texture Catalogue” window to the
furniture model rendered in the “Model Editor” window.
5.2.1 Importing Furniture into an Interior Scene
In X3D, to import an external scene A into a host scene B, all the nodes in scene
A’s X3D file need to be inserted into scene B’s scene graph. To avoid hard-copying,
X3D provides the <Inline> node: positioned within the host scene graph B, it
retrieves the external scene A via its url field [9].
In the case of YY3D, the virtual interior is the host scene graph in which the
<Inline> node is positioned, while the furniture model is the external scene to be
imported. Because it is not known which furniture model the user will select from the
catalogue window, the url field of the <Inline> node is assigned an empty string initially
(Figure 5.10). The subsequent task is to develop a function to dynamically update the url
field of the <Inline> node according to the user’s selection in the “Model Catalogue”
window. To accomplish this function, the following steps are attempted:
(1) Associate the table cell’s id of each piece of furniture on the user interface page
(Figure 5.5(b)) with its X3D file address on the web server. In doing so, when its table
cell is clicked, the url of the corresponding furniture model will be retrieved.
(2) Inside the scene graph of the virtual interior (Figure 5.10), assign the retrieved file
address from the previous step to the url field of the <Inline> node.
<scene>
  <!-- The viewpoints -->
  <Viewpoint description='Front' position='0 0 5500' />
  <!-- All other viewpoint definitions follow -->
  <!-- The lightings -->
  <PointLight DEF="Omni_01" radius="14590.0" intensity="0.6" on="true"
      location="0.0 0.0 0.0" color="1.0 1.0 1.0" />
  <!-- All other lighting definitions follow -->
  <!-- The floor -->
  <Transform DEF="floor" translation="0.0 -1600.0 0.0">
    <!-- Coding snippet defining the floor plane follows -->
  </Transform>
  <!-- The imported furniture model -->
  <Inline DEF='NewModel1' url=' ' />
  <!-- If more than one model is to be imported, corresponding Inline node statements follow -->
</scene>
Figure 5.10: Coding snippet inserting an <Inline> node in a virtual scene.
To implement the above scheme, two scripts are programmed to carry out the tasks in
each step. The first script handles the user’s input event on the XHTML page, and
the second script updates the <Inline> node in the X3D scene graph. In order for the
second script to act on the output of the first, the two scripts need to be
“connected”. However, such a connection faces a technical barrier. To update the
<Inline> node, the second script has to be wrapped within a <Script> node located in
the X3D scene graph. The <Script> node, however, does not support embedding any
scripts that are unrelated to the X3D scene. As a result, the first script, which
works with the XHTML page, cannot be embedded into the X3D scene and thereby cannot
exchange data directly with the second script.
To solve this problem, a new solution scheme is designed. Instead of trying to
update its value, the url field of the <Inline> node is assigned a fixed address value
pointing to a “cache” directory created on the web server; simultaneously, the furniture
file stored in this cache directory is updated based on the user’s selection in the “Model
Catalogue” window. In addition, to load or unload the furniture model from the cache
directory, a pair of buttons is created in the interior’s scene graph.
The following paragraphs take the chair category as an example to discuss the
coding implementation of the above scheme. Figure 5.11 shows the menu table of the
chair category in the “Model Catalogue” window and Figure 5.12 shows its coding
snippet.
(1) Copying the selected furniture model into the cache directory on the web server
There are two sub-steps in copying a furniture model into the cache directory:
selection and confirmation. First, the user selects a model from the furniture menu by
clicking the model’s table cell (either label or image cell). Once the user selects a model,
the model’s file name appears in the form box beside the “Confirm” button in Figure 5.11.
Second, by pressing the “Confirm” button, the selected model file is copied into the cache
directory on the web server.
To implement the first step, each chair model’s table cell is assigned an onclick
event handler, which updates the value of the form box (id="modelNo") with the
model’s file name (Figure 5.12). To execute the second step, a server-side script
processes the submitted file name when the “Confirm” button is pressed. Because each
model file’s storage path is derived from its file name, the script can use the
received file name to retrieve the model’s X3D file and copy it into the cache
directory (Figure 5.13).
Figure 5.11: Chair catalogue shown in the Model Catalogue window.
<!-- The Model Catalogue window, second level menu, chair catalogue -->
<table class="tableModel" cellspacing="2">
  <!-- Return to the first level menu -->
  <tr><th colspan="3">
    <input type="button" value="Return to the category list" id="mainModel_fromChair" />
  </th></tr>
  <!-- Confirm the selected furniture model -->
  <tr><td class="tableModel_subtitle" colspan="3">
    <iframe name="modelFrame" src="about:blank" style="width:0;height:0;border:none">
    </iframe>
    <form method="post" action="yy3d_action_model.php" target="modelFrame">
      Category <input type="text" name="modelCategory" value="Chair" disabled="disabled" />
      Number <input type="text" name="modelNumber" id="modelNo" />
      <input type="submit" value="Confirm" />
    </form>
  </td></tr>
  <!-- The available models in this catalogue -->
  <!-- One row for the model labels -->
  <tr>
    <td class="tableModel_label" id="chairLabel_01"
        onclick="document.getElementById('modelNo').value='chair_01'">01</td>
    <td class="tableModel_label" id="chairLabel_02"
        onclick="document.getElementById('modelNo').value='chair_02'">02</td>
    <td class="tableModel_label" id="chairLabel_03"
        onclick="document.getElementById('modelNo').value='chair_03'">03</td>
  </tr>
  <!-- Another row for the model images -->
</table>
Figure 5.12: Coding snippet defining the chair catalogue.
<?php // The code in 'yy3d_action_model.php'
if ($_REQUEST['modelNumber'] != "") {
    // Get the input value of the form field (id='modelNo')
    $model = $_REQUEST['modelNumber'];
    // Create a copy of the selected model file in the directory 'yy3d_cache/model/'
    $modelCategory = explode("_", $model);
    $x3dPath = "yy3d_archive/model/" . $modelCategory[0] . "/" . $model . "/" . $model . ".x3d";
    echo copy($x3dPath, "yy3d_cache/model/selectedModel.x3d");
}
?>
Figure 5.13: Coding snippet retrieving and copying the selected model file.
(2) Creating buttons in the interior’s scene graph to update the <Inline> node
Subsequently, to load or unload the furniture model from the cache directory, a
pair of buttons is added to the interior’s scene graph. To load the furniture, a green button
is used to update the url field of the <Inline> node with the file address in the cache
directory. Inversely, to remove the imported furniture from the scene, the other red button
is used to clear the url field (Figure 5.14).
Figure 5.14: Defining function chains of using buttons to update the <Inline> node.
Figure 5.15 shows the coding snippet creating these two buttons. The buttons are
located following the original <Inline> node in the scene graph (Figure 5.10). To ensure
that the buttons are always rendered at a fixed position (the upper left corner of the
rendering window) regardless of which viewpoint applies, all the nodes defining the
buttons are grouped under a <Billboard> node. Inside the <Billboard> node, each button
shape is bound with a <TouchSensor>, which makes the button clickable. To make the
buttons function, two <Script> nodes are programmed to connect each button’s
<TouchSensor> with the <Inline> node respectively (Figure 5.16). Once a button’s
touch sensor is activated, a Boolean value will be sent to its corresponding <Script> node,
which in turn updates the url field of the <Inline> node.
<scene>
  <!-- The X3D codes in Figure 5.10 are inserted here -->

  <!-- The imported furniture model -->
  <Inline DEF='NewModel1' url=' ' />

  <!-- The buttons -->
  <Billboard>
    <!-- The green button for importing the furniture model -->
    <Transform translation='-2300 2000 0'>
      <TouchSensor DEF="InsertModel1" />
      <Shape>
        <Sphere radius='90' />
        <Appearance>
          <Material DEF='Holder1a' diffuseColor='0 1 0' transparency='0' />
        </Appearance>
      </Shape>
    </Transform>
    <!-- The red button for removing the furniture model -->
    <Transform translation='-2050 2000 0'>
      <TouchSensor DEF="DeleteModel1" />
      <Shape>
        <Sphere radius='90' />
        <Appearance>
          <Material DEF='Holder1b' diffuseColor='1 0 0' transparency='0' />
        </Appearance>
      </Shape>
    </Transform>
  </Billboard>
</scene>
Figure 5.15: Coding snippet defining the buttons.
<!-- The script for the green button -->
<Script DEF='AddModelHandler1' directOutput='true' mustEvaluate='true'>
  <field name='addModel' type='SFBool' accessType='inputOnly' />
  <field name='targetURL' type='SFString' accessType='initializeOnly' value='' />
  <field name='url_changed' type='MFString' accessType='outputOnly' />
  <![CDATA[
    ecmascript:
    // When the green sphere button is clicked, assign the address of the
    // furniture model in the 'yy3d_cache' directory to 'url_changed'
    function addModel(value) {
      if (value) {
        targetURL = "http://www.chaoliucl.com/yy3d/yy3d_cache/model/selectedModel.x3d";
        url_changed = new MFString(targetURL);
      }
    }
  ]]>
</Script>
<!-- Connect the green button’s <TouchSensor> with the above <Script> node -->
<ROUTE fromNode='InsertModel1' fromField='isActive' toNode='AddModelHandler1' toField='addModel' />
<!-- Connect the above <Script> node with the <Inline> node -->
<ROUTE fromNode='AddModelHandler1' fromField='url_changed' toNode='NewModel1' toField='url' />
(a) The script programmed for the green button
<!-- The script for the red button -->
<Script DEF='DeleteModelHandler1' directOutput='true' mustEvaluate='true'>
  <field name='deleteModel' type='SFBool' accessType='inputOnly' />
  <field name='url_changed' type='MFString' accessType='outputOnly' />
  <![CDATA[
    ecmascript:
    // When the red sphere button is clicked, assign 'url_changed' an empty string list
    function deleteModel(value) {
      if (value) url_changed = new MFString();
    }
  ]]>
</Script>
<!-- Connect the red button’s <TouchSensor> with the above <Script> node -->
<ROUTE fromNode='DeleteModel1' fromField='isActive' toNode='DeleteModelHandler1' toField='deleteModel' />
<!-- Connect the above <Script> node with the <Inline> node -->
<ROUTE fromNode='DeleteModelHandler1' fromField='url_changed' toNode='NewModel1' toField='url' />
(b) The script programmed for the red button
Figure 5.16: Coding snippet connecting the buttons with the <Inline> node.
Each pair of buttons can be used to import or remove a single furniture model. To
accommodate more furniture models in the interior scene, a corresponding number of
<Inline> nodes, button groups, and <Script> nodes must be added to the interior’s
scene graph. When the user selects another furniture model from the catalogue, the old
furniture file stored in the cache directory is replaced, but the model already imported
into the interior scene remains. The user can then use a second group of buttons to
import the new furniture. Using this approach, the user can import more than one
furniture model into the interior scene and preview their combinations.
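Because the replication is mechanical, the per-slot markup lends itself to generation by script. The following Node.js sketch is an illustration only, not part of YY3D: the DEF naming follows the pattern of Figures 5.15 and 5.16, but the vertical button spacing is an assumed layout choice and the sphere shapes are omitted.

```javascript
// Sketch only: generate the X3D markup for n furniture "slots", each with
// its own <Inline> node and green/red button pair. DEF names follow the
// NewModel/InsertModel/DeleteModel pattern of Figures 5.15-5.16; the
// 250-unit vertical spacing is an assumption.
function furnitureSlot(i) {
  const y = 2000 - (i - 1) * 250; // place each button pair below the previous one
  return [
    `<Inline DEF='NewModel${i}' url='' />`,
    `<Transform translation='-2300 ${y} 0'>`,
    `  <TouchSensor DEF='InsertModel${i}' />`,
    `  <!-- green sphere Shape omitted -->`,
    `</Transform>`,
    `<Transform translation='-2050 ${y} 0'>`,
    `  <TouchSensor DEF='DeleteModel${i}' />`,
    `  <!-- red sphere Shape omitted -->`,
    `</Transform>`,
  ].join('\n');
}

// Concatenate the markup for slots 1..n.
function furnitureSlots(n) {
  const slots = [];
  for (let i = 1; i <= n; i++) slots.push(furnitureSlot(i));
  return slots.join('\n');
}

console.log(furnitureSlots(3));
```

Each generated slot would still need its matching pair of <Script> nodes and <ROUTE> connections, written as in Figure 5.16.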
Figure 5.17: Importing and removing a chair model.
Figure 5.18: Importing a group of furniture models.
Figure 5.17 and Figure 5.18 demonstrate the result of applying the scripts
programmed for the furniture import function. Assuming that the user has selected the
chair_01 in the “Model Catalogue” window (Figure 5.11), Figure 5.17(a) and (b) show
how the chair is imported and removed by clicking the buttons. Figure 5.18 shows the
effect of importing a group of furniture models as the scene provides more <Inline>
nodes and buttons.
5.2.2 Attaching Furniture Textures
After importing the furniture, the next task is to assign them textures. The script solution
for the texture selection in the “Texture Catalogue” window is the same as the furniture
selection described in Section 5.2.1. Each time a texture map is selected, a copy of that
image file will be created in the cache directory on the web server. When another texture
map is selected, the old file will be replaced.
To attach a texture map, the <ImageTexture> node in the furniture model’s scene
graph will need to be updated with the file address in the cache directory. To facilitate the
user’s operation, the texture attachment is designed to execute through a mouse click.
Since each furniture model may comprise components made of different materials, the
user is allowed to assign texture maps to each model component independently.
The following paragraphs take the model chair_01 (Figure 5.17) as an example to
explain the coding implementation. First, the model chair_01 includes four components:
chair leg, bracing bar, pad, and backrest. When this model is exported from 3ds Max,
each of these four components has a separate <Shape> node to record its own 3D data.
Therefore, to assign textures to these components, each of their <Shape> nodes has to
include an <ImageTexture> node. Second, in order to assign textures via mouse clicking,
Figure 5.19: Defining the function chain of texture attachment.
<!-- The chair leg -->
<Transform DEF="chair_leg">
  <TouchSensor DEF="TexturePart1" />
  <Shape>
    <Appearance>
      <ImageTexture DEF='ModelTexture1' repeatS='true' repeatT='true' containerField='texture' url='' />
    </Appearance>
    <IndexedFaceSet>
      <!-- The 3D data of the chair leg is omitted -->
    </IndexedFaceSet>
  </Shape>
</Transform>

<Script DEF='TextureHandler1' directOutput='true' mustEvaluate='true'>
  <field name='triggerTexture' type='SFBool' accessType='inputOnly' />
  <field name='targetURL' type='SFString' accessType='initializeOnly' value='' />
  <field name='url_changed' type='MFString' accessType='outputOnly' />
  <![CDATA[
    ecmascript:
    function triggerTexture(value) {
      if (value) {
        targetURL = "http://www.chaoliucl.com/yy3d/yy3d_cache/texture/selectedTexture.jpg";
        url_changed = new MFString(targetURL);
      }
    }
  ]]>
</Script>
<!-- Connect the chair leg’s <TouchSensor> with the <Script> node -->
<ROUTE fromNode='TexturePart1' fromField='isActive' toNode='TextureHandler1' toField='triggerTexture' />
<!-- Connect the <Script> node with the chair leg’s <ImageTexture> node -->
<ROUTE fromNode='TextureHandler1' fromField='url_changed' toNode='ModelTexture1' toField='url' />
Figure 5.20: Coding snippet defining the texture attachment for the chair leg.
the <Shape> node of each model component has to be bundled with a <TouchSensor>.
Finally, to handle the clicking event received by a <TouchSensor> and update the
relevant <ImageTexture> node, a <Script> node needs to be programmed to connect the
above two nodes (Figure 5.19). As an example, Figure 5.20 implements this logic chain
in the code, which defines the texture attachment mechanism for the chair leg of the
model chair_01.
Regarding the other chair components, if they are designed to be made of the
same material as the chair leg, their <TouchSensor> and <ImageTexture> nodes can
reuse the nodes defined for the chair leg. For example, the bracing bar has the same
material attributes as the leg, so its X3D code can be written as shown in Figure 5.21,
which adopts the USE mechanism to reference the node definitions in Figure 5.20. In this
manner, regardless of which of these two components (the chair leg or the bracing bar) is
clicked, the selected texture map will be attached to both of them at the same time.
<!-- The chair bracing bar -->
<Transform DEF="chair_bracing">
  <TouchSensor USE="TexturePart1" />
  <Shape>
    <Appearance>
      <ImageTexture USE='ModelTexture1' />
    </Appearance>
    <IndexedFaceSet>
      <!-- The 3D data of the chair bracing bar is omitted -->
    </IndexedFaceSet>
  </Shape>
</Transform>
Figure 5.21: Coding snippet defining the model component with the same material.
On the other hand, for those components whose materials are designed to be
different from the chair leg (e.g., the chair pad and backrest), new definitions for their
<TouchSensor> and <ImageTexture> nodes are required. In that instance, an additional
<Script> node needs to be created for processing the new nodes. In doing so, when the
user clicks the chair leg, the selected texture map will only be attached to the chair leg
and the bracing bar but not to the chair pad and the backrest, and vice versa.
Figure 5.22 demonstrates the result of applying the scripts programmed for the
texture attachment function. In Figure 5.22, the chair model (chair_01.x3d) has two
texture maps attached: the chair leg and the bracing bars have a metal texture while the
pad and the backrest have a leather texture.
Figure 5.22: Attaching different textures to a chair model.
Chapter Six: Operation Test and Conclusion
This final chapter begins with an operation test on the completed proof-of-concept. Based
on the testing result, this chapter revisits the research goal and summarizes the steps
through which the proposed idea has been achieved. This chapter concludes the thesis
with an outlook on future research.
6.1 Operation Test on the Proof-of-Concept
YY3D was proposed as a web-based tool for furniture layout configuration (Section 3.4).
The tool was designed to enable a user to apply 3D graphics in examining and comparing
design options. When working with YY3D, the user should be able not only to choose
furniture combinations but also to customize their textures. To examine whether the
research works have accomplished these functionalities, this section goes through an
operation test on the completed YY3D.
6.1.1 Testing Environment
In creating YY3D, a goal was to have an application that would run on most PCs. To load
YY3D, only two applications are needed: a web browser for retrieving the web content
from the server and an X3D player plug-in for rendering the 3D computer graphics. The
test is conducted on a Dell Studio XPS 13 (Windows 7 Ultimate operating system).
Internet Explorer 8.0 (web browser) and BS Contact 7.2 (X3D plug-in) are installed.
Because the operations of YY3D require script support, the “Active Scripting” setting
needs to be enabled within the IE browser. Also, to load the user’s selection of furniture
and texture in real time, the default “Cache Settings” for the BS Contact Player must be
changed. These steps are listed on the “Operating Instructions” page, which can be found
on the “Navigation Bar” of the user interface (Figure 6.1).
Figure 6.1: Operating instruction menu.
6.1.2 Selecting the Virtual Interior
The typical process of using YY3D begins with the “Model Editor” window (Figure
6.2(a)), in which the user selects and loads a virtual interior as the working context for the
later steps. The two interiors, “living room” and “office space” scenes, simulate two of
the spaces found in real life, while the “empty space” scene can be used when focusing
on the furniture placement without any reference to an actual interior. Each scene is
accessed by clicking the control button that conveys its name (Figure 6.2(b)).
Figure 6.2: Selecting the virtual interior.
When a virtual interior is rendered, the user is able to change viewing angles by
switching between the predefined viewpoints. These viewpoints are typically listed on a
shortcut menu offered by most X3D players (Figure 6.3(a)). Depending on the player, the
user may also change viewpoints using keyboard shortcuts. The BS Contact Player, for
example, supports the “PgUp” and “PgDn” keys for switching between the previous and
following viewpoints listed on the menu respectively. In the process of switching from
one viewpoint to another, the user can also press the “Esc” key to capture a customized
view (Figure 6.3(b)).
Figure 6.3: Switching the viewpoints.
6.1.3 Importing the Furniture
Once the virtual interior is opened in the “Model Editor” window, the user moves
forward to the “Model Catalogue” window, where they select the furniture. The user first
opens a furniture catalogue (Figure 6.4(a)) and then selects a piece of furniture from the
available list (Figure 6.4(b)). After confirming the selection (Figure 6.4(c)), the user
moves back to the “Model Editor” window, where they import the selected furniture into
the interior scene by clicking on the import button (Figure 6.4(d)). If the user is not
satisfied with their selection (Figure 6.4(e)), they can remove the model (Figure 6.4(f))
and reselect other furniture in the “Model Catalogue” window (Figure 6.4(a)). During the
furniture selection process, the user conducts all operations with a mouse click.
Figure 6.4: Choosing and importing the furniture.
6.1.4 Attaching the Textures
After the furniture model is imported, the next step is to assign textures. Each time a
furniture model is selected in the “Model Catalogue” window, the available texture maps
matching that furniture appear in the “Texture Catalogue” window. As in the previous
step, the user opens a texture catalogue, picks out the desired texture map, and confirms
their selection. Next, the user moves back to the “Model Editor” window and clicks on
the surface of the target furniture (e.g., the chair cushion shown in Figure 6.5(a)) to
assign the selected texture. Depending on the furniture, there may be model components
made of more than one material; in that case, the user can repeat the above process and
assign each model component a texture map independently (Figure 6.5(b)). The process
requires nothing more than a click of the mouse.
Figure 6.5: Attaching the textures.
6.1.5 Arranging the Furniture
By default, when a furniture model is imported, it is located on the floor plane at the
center of the space. The user can move and rotate the furniture model to the desired
position. To move the model, the user presses the “M” key to activate the translation
function and clicks on the model’s central area. When a green cylinder appears, the user
can drag it, moving the furniture model to any area of the virtual interior (Figure 6.6(a)).
To rotate the model, the user presses the “R” key to switch on the rotation function.
Clicking on the model’s central area causes the green cylinder to appear again, after
which the user can drag and rotate the model (Figure 6.6(b)).
Figure 6.6: Arranging the furniture.
Figure 6.7: Previewing the interior layout.
By repeating the above steps (Sections 6.1.3 to 6.1.5), the user can import furniture models into
the interior and examine various combinations of furniture. Although the transformation
functions can be executed in any viewport, the top view is typically recommended when
arranging the furniture (Figure 6.7(a)), so that the user can have an overall view of the
interior layout. After arranging the furniture, the user can switch to other viewports to
check the result (Figure 6.7(b)). During this process, the user combines the keyboard and
mouse inputs to create an interior design.
6.1.6 Filling out the User Form
After the design is finalized, the user can summarize their work in YY3D. The web form
in the “User Notes” window allows the user to record not only their selections of
furniture and texture but also their comments (Figure 6.8(a)). Based on the user’s
submission, YY3D creates a furniture inventory file in PDF format on the user’s local
computer (Figure 6.8(b)).
Figure 6.8: Summarizing the selections.
In addition, the user can render perspective views of the finished interior layout,
using the “Render Bitmap” function provided by the BS Contact Player (Figure 6.9). By
sending the interior renderings, together with the PDF file, back to the designer, the client
is able to clearly express their design intention with the assistance of YY3D.
Figure 6.9: Rendering the interior layout.
Up to this point, a typical project in YY3D has been shown in use. The functions
programmed for YY3D (Chapters 4 and 5) have been tested and have proven to execute
properly. With YY3D, an interior designer could create virtual interiors and furniture
models using a CAD application and could export them to the X3D format used online.
The client could then create and compare their own design solutions with these 3D
graphics. The advantage of this approach is that the client can try various design options
in a real setting using computer graphics without having to install and learn CAD. The
client’s design solution can be shared with the designer to help the latter better
understand and accomplish the client’s intention.
6.2 Research Summary and Conclusion
In design, traditional communication approaches may not reveal the details of a client’s
concerns or requirements. Without full understanding of the client’s intention, correcting
and adjusting a design can be a slow and painful process. Although designers often use
CAD to help visualize potential solutions, clients with limited technical skill cannot be
expected to properly use these advanced digital tools (Section 2.1).
To solve that problem, this thesis proposed a web-based interactive tool that
allows clients to visualize and comment on potential design solutions. In developing the
proposed tool, this research began with a review of the literature in virtual reality. It
found that a web-based VR system could be employed to enable non-professionals to
view and manipulate 3D graphics using a regular web browser. Given the access to an
interactive 3D environment, clients would be able to express their design intention
through assembling a virtual design using the standard design components created by
interior designers (Section 2.2).
Based on this premise, this research suggested a new communication framework
to enhance collaboration efficiency between clients and designers. In this framework, a
web-based VR application was implemented to confirm the client’s design intention.
Based on the information collected through traditional communication approaches such
as conversation and questionnaires, the interior designer prepared a series of digital
design materials and categorized them into catalogues to be displayed on the website. The
client then previewed different design options in a virtual environment rendered on a web
page. Each option was built using the virtual objects from the catalogues provided by the
designer. This process would allow the client sufficient time to communicate with the
designer should any concerns arise (Section 3.1).
To implement the VR application in the proposed communication framework, this
thesis presented a specific methodology. The first step was to prepare the virtual design
materials used online. Specifically, this thesis introduced X3D, the ISO standard XML-
based file format for representing 3D computer graphics over the Internet, and illustrated
the workflow in converting a CAD-based computer model into the X3D format used
online (Section 3.2). The second step was to work with these design materials in an
interactive virtual environment; to that end, this thesis sketched a web-based user interface and
specified the script support required for this interface to function (Section 3.3).
To examine the feasibility of this methodology, a proof-of-concept, YY3D, was
created, enabling a client to design an interior scene complete with furniture. Having
multiple viewpoints allowed the client the opportunity to preview each design option and
decide which solution best satisfied their requirements (Section 3.4). The creation process
of the virtual design materials of YY3D, including the interior scenes and furniture
models, verified the methods suggested in Section 3.2 (Chapter 4). YY3D’s web-
based user interface implemented the interface prototype and its scripts as outlined in
Section 3.3 (Chapter 5). Finally, a test of the completed proof-of-concept demonstrated
the functions developed in Chapters 4 and 5 (Section 6.1).
The results of this research confirm the technical feasibility of employing a web-
based application to facilitate the communication process between clients and designers.
Clients using YY3D should be able to express their design intention with an easy-to-
use web-based application.
6.3 Future Research
The communication framework presented in this thesis proposes a new approach to the
design of interior space. The proof-of-concept created in this research is the first step
toward putting this into practice. Future research is required to further develop the
functionality of this tool. Two points of interest warrant future investigation.
6.3.1 Application and Evaluation in Design Practice
Future research would include a user study of designers and clients; this would be an
effective approach to evaluating YY3D. Using YY3D to design an actual space would
require that designers replace the test models of this proof-of-concept with actual models
of interiors and furniture. Following the methods suggested in Chapter 4, the interior and
furniture models could be created using a CAD application and then exported into the
X3D format. Designers could reference the coding samples (shown in Chapter 4) as part
of the preparation process for creating online models.
The user study would include two parts. First, it would gauge how a professional
interior designer uses YY3D to solve real-life problems. Specifically, is it possible for an
interior designer to create the VR models needed in the design process of an actual space?
Collecting feedback from designers on the usability of the provided X3D codes would be
a major objective of this part of the user study. The second part of the user study would
follow up with clients on their experience operating the user interface. The
effectiveness of the functions developed for the user interface would be evaluated from
the client’s perspective. Such a user study of designers and clients would help to identify
future research directions to improve YY3D.
6.3.2 Adding Other Design Components
It is also possible to apply the approach used to create YY3D to the design of other types
of architectural space. Not limited to interior design, YY3D could be used to help
architects and their clients design complete buildings by selecting and assembling virtual
building components, including wall partitions, windows, doors, and a variety of fixtures
(e.g., the lighting and bath equipment in Figure 3.1). Unlike with furniture, however,
adding building components to a design would require special attention to mechanical
and structural engineering constraints. For example, when a client wants to add a window
opening to a wall, the structural role the host wall plays must be considered. Otherwise,
even if the client can create the building plan in a virtual world, it will be impractical in
the real world.
A knowledge-based system would enable future generations of YY3D to solve
such engineering design problems, possibly using pop-up messages to indicate an error
when a user attempts to add a door or window opening to a load-bearing wall. If the user
chooses to neglect the error message, the inserted component would then be removed
from the scene to eliminate the possibility of grave error; a similar approach is found in
many expert systems [30]. Having such a knowledge-based system would guarantee that
the client’s virtual design solution could actually be constructed. With continued
improvement, YY3D could become a practical tool for solving a variety of architectural
and interior design problems.
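As a minimal illustration of such a rule check, the sketch below uses entirely hypothetical data structures; the `loadBearing` flag, wall identifiers, and message text are assumptions, not part of YY3D.

```javascript
// Hypothetical rule check for the envisioned knowledge-based system:
// reject a door or window opening placed in a load-bearing wall.
// The wall/opening objects and the `loadBearing` flag are assumed data.
function checkOpening(wall, opening) {
  if (wall.loadBearing) {
    return {
      ok: false,
      message: `Cannot cut a ${opening.type} into load-bearing wall ${wall.id}`,
    };
  }
  return { ok: true, message: '' };
}

console.log(checkOpening({ id: 'W1', loadBearing: true }, { type: 'window' }).message);
// prints "Cannot cut a window into load-bearing wall W1"
```

In a future YY3D, a failed check of this kind would trigger the pop-up message and, if ignored, the removal of the inserted component described above.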
References
[1] Beier, K. (1994). Virtual Reality in Automotive Design and Manufacturing. Proceedings of International Congress on Transportation Electronics ’94, Society of Automotive Engineers, Dearborn, Michigan, USA.
[2] Beier, K. (2000). Web-based Virtual Reality in Design and Manufacturing Applications. 1st International EuroConference on Computer Applications and Information Technology in the Maritime Industries, COMPIT ’2000, Potsdam, Germany.
[3] Beier, K. (2008). Virtual Reality: A Short Introduction. Virtual Reality Laboratory, University of Michigan. http://www-vrl.umich.edu/intro/intex.html.
[4] Bilawchuk, M. (2004). Virtual Reality. Communication Studies 380, University of Calgary. http://www.bilawchuk.com/mark/index.html.
[5] Biocca, F. and Levy, M. (1995). Communication in the Age of Virtual Reality. L. Erlbaum Associates Inc. Hillsdale, New Jersey, USA.
[6] Bitmanagement. BS Contact Player. http://www.bitmanagement.com/.
[7] Briggs, J. (1996). The Promise of Virtual Reality. The Futurist, World Future Society, volume 30, issue 5, pp. 13.
[8] Brown, L. (2007). The Key to Successful Interior Design. Home and Garden, Squidoo. http://www.squidoo.com/successful-interior-design.
[9] Brutzman, D. and Daly, L. (2007). X3D: Extensible 3D Graphics for Web Authors. 1st edition, Morgan Kaufmann, San Francisco, California, USA.
[10] Campbell, D. (2000). Architectural Construction Documents on the Web: VRML as a Case Study. Automation in Construction, volume 9, issue 1, pp. 129-138.
[11] Dai, F., Felger, W., Fruhauf, T., Gobel, M., Reiners, D. and Zachmann, G. (1996). Virtual Prototyping Examples for Automotive Industries. Proceedings of Virtual Reality World ’96, Stuttgart, Germany.
[12] Daly, L. and Brutzman, D. (2007). X3D: Extensible 3D Graphics Standard. IEEE Signal Processing Magazine, volume 24, issue 6, pp. 130-135.
[13] Dauner, J., Landauer, J., Stimpfig, E. and Reuter, D. (1998). 3D Product Presentation Online: The Virtual Design Exhibition. Proceedings of the Third Symposium on Virtual Reality Modeling Language, Monterey, California, USA, pp. 57-62.
[14] Dimensions Guide, Living Room Size, http://www.dimensionsguide.com/living-room-size/.
[15] Freeman, J., Avons, S., Pearson, D. and Ijsselsteijn, W. (1999). Effects of Sensory Information and Prior Experience on Direct Subjective Presence Ratings. Presence: Teleoperators and Virtual Environments, volume 8, issue 1, pp. 1-13.
[16] Gibbs, J. (2005). Interior Design, Laurence King, London, UK.
[17] Green, M. (2009). Eyewitness Memory is Unreliable, Visual Expert 2009, http://www.visualexpert.com/Resources/eyewitnessmemory.html.
[18] Heim, M. (1993). The Metaphysics of Virtual Reality. Oxford University Press. USA.
[19] Huang, G. (2002). Web-based Support for Collaborative Product Design Review. Computers in Industry, volume 48, issue 1, pp. 71-88.
[20] InstantLabs. http://doc.instantreality.org/tools/x3d_encoding_converter/.
[21] Jezernik, A. and Hren, G. (2003). A Solution to Integrate Computer-aided Design (CAD) and Virtual Reality (VR) Databases in Design and Manufacturing Processes, The International Journal of Advanced Manufacturing Technology, volume 22, pp. 768-774.
[22] Kan, H., Duffy, V. and Su, C. (2001). An Internet Virtual Reality Collaborative Environment for Effective Product Design. Computers in Industry, volume 45, issue 2, pp. 197-213.
[23] Khoon, L. and Ramaiah, C. (2008). An Overview of Online Exhibitions. Journal of Library and Information Technology, volume 28, issue 4, pp. 7-21.
[24] LaCourse, D. (2003). Virtual Prototyping Pays Off. Cadalyst Magazine. May 2003.
[25] Lau, H., Mak, K. and Lu, M. (2003). A Virtual Design Platform for Interactive Product Design and Visualization. Journal of Materials Processing Technology, volume 139, issue 1-3, pp. 402-407.
[26] Li, H., Daugherty, T. and Biocca, F. (2001). Characteristics of Virtual Experience in Electronic Commerce: A Protocol Analysis, Journal of Interactive Marketing, volume 15, issue 3, pp. 13-30.
[27] Louka, M. (1998). An Introduction to Virtual Reality. Ostfold University College, Norway. http://www.ia.hiof.no/~michaell/home/vr/vrhiof98/index.html.
[28] Mazarella, F. (2011). Interior Design. Whole Building Design Guide. http://www.wbdg.org/design/dd_interiordsgn.php.
[29] Mello, A. (2001). Mass Customization Won’t Come Easy. ZDNet, http://www.zdnet.com/news/mass-customization-wont-come-easy/296569.
[30] Nousch, M. and Jung, B. (1999). CAD on the World Wide Web: Virtual Assembly of Furniture with BEAVER. Proceedings of the Fourth Symposium on Virtual Reality Modeling Language, Paderborn, Germany, pp. 113-119.
[31] Oh, H., Yoon, S. and Hawley, J. (2004). What Virtual Reality Can Offer to the Furniture Industry. Journal of Textile and Apparel, Technology and Management, volume 4, issue 1.
[32] Ortiz, S. (2010). Is 3D Finally Ready for the Web? IEEE Computer Society, volume 43, issue 1, pp. 14-16.
[33] Pan, Z., Chen, T., Zhang, M. and Xu, B. (2004). Virtual Presentation and Customization of Products Based on Internet. International Journal of CAD/CAM, volume 4, issue 1.
[34] Paulis, P. (2010). 3D Webpages. Študentská vedecká konferencia, FMFI UK, Bratislava, Slovakia, pp. 316–327.
[35] Peters, S. and Iagnemma, K. (2009). Stability Measurement of High Speed Vehicles. Vehicle System Dynamics, volume 47, issue 6, pp. 701-720.
[36] Pile, J. (2003). Interior Design. 3rd Edition, Prentice Hall Abrams, Pearson, New Jersey, USA.
[37] Smparounis, K., Mavrikios, D., Pappas, M., Xanthakis, V., Vigano, G. and Pentenrieder, K. (2008). A Virtual and Augmented Reality Approach to Collaborative Product Design and Demonstration. Proceedings of the 14th International Conference on Concurrent Enterprising, ICE 2008, Lisbon, Portugal.
[38] Steuer, J. (1992). Defining Virtual Reality: Dimensions Determining Telepresence. Journal of Communication, volume 42, issue 4, pp. 73-93.
[39] Stevens, L. (1994). Virtual Reality Now: A Detailed Look at Today's Virtual Reality. MIS Press, Inc.
[40] Strickland, J. (2008). How Virtual Reality Works. HowStuffWorks.com. http://electronics.howstuffworks.com/gadgets/other-gadgets/virtual-reality.htm.
[41] Sutherland, I. (1965). The Ultimate Display. Proceedings of the IFIP Congress, volume 2. pp. 506-508.
[42] Sutherland, I. (1968). A Head-mounted Three-dimensional Display. Proceedings of the AFIPS Fall Joint Computer Conference, volume 33. Arlington, Virginia, USA, pp. 757-764.
[43] The Princeton Review. (2011). A Day in the Life of a Interior Designer. Career Interior Designer. http://www.princetonreview.com/Careers.aspx?cid=82.
[44] Tseng, M., Jiao, J. and Su, C. Virtual Prototyping for Customized Product Development. Integrated Manufacturing Systems, volume 9, issue 6, pp. 334-343.
[45] UK Design Council, A Guide to Interior Design, http://www.designcouncil.org.uk/about-design/Types-of-design/Interior-design/.
[46] U.S. Bureau of Labor Statistics, Interior Designers, Occupational Outlook Handbook, 2010-11 Edition, http://www.bls.gov/oco/ocos293.htm.
[47] Web3D Consortium. X3D Resources. http://www.web3d.org/x3d/content/.
[48] Woolgar, S. (2002). Virtual Society? Technology, Cyberbole, Reality. Oxford University Press. USA.
[49] Xj3D. http://www.xj3d.org/.
[50] Zachmann, G. (1997). Real-time and Exact Collision Detection for Interactive Virtual Prototyping. Proceedings of ASME Design Engineering Technical Conferences ’97, Sacramento, California, USA.