ORIGINAL ARTICLE
Graphical and text-based design interfaces for parameter design of an I-beam, desk lamp, aircraft wing, and job shop manufacturing system
Timothy W. Simpson · Mary Frecker · Russell R. Barton · Ling Rothrock
Received: 1 September 2005 / Accepted: 2 June 2006 / Published online: 11 October 2006
© Springer-Verlag London Limited 2006

Engineering with Computers (2007) 23:93–107
DOI 10.1007/s00366-006-0045-7

T. W. Simpson (corresponding author)
Departments of Mechanical and Industrial Engineering and Engineering Design, The Pennsylvania State University, 329 Leonhard Building, University Park, PA 16802, USA
e-mail: [email protected]

M. Frecker
Department of Mechanical Engineering, The Pennsylvania State University, University Park, PA 16802, USA

R. R. Barton
Supply Chain and Information Systems, Smeal College of Business, The Pennsylvania State University, University Park, PA 16802, USA

L. Rothrock
Harold & Inge Marcus Department of Industrial & Manufacturing Engineering, The Pennsylvania State University, University Park, PA 16802, USA
Abstract In this paper we describe four design opti-
mization problems and corresponding design interfaces
that have been developed to help assess the impact of
fast, graphical interfaces for design space visualization
and optimization. The design problems involve the de-
sign of an I-beam, desk lamp, aircraft wing, and job shop
manufacturing system. The problems vary in size from 2
to 6 inputs and 2 to 7 outputs, where the outputs are
formulated as either a multiobjective optimization
problem or a constrained, single objective optimization
problem. Graphical and text-based design interfaces
have been developed for the I-beam and desk lamp
problems, and two sets of graphical design interfaces
have been developed for the aircraft wing and job shop
design problems that vary in the number of input vari-
ables and analytical complexity, respectively. Response
delays ranging from 0.0 to 1.5 s have been imposed in
the interfaces to mimic computationally expensive
analyses typical of complex engineering design prob-
lems, allowing us to study the impact of delay on user
performance. In addition to describing each problem,
we discuss the experimental methods that we use,
including the experimental factors, performance mea-
sures, and protocol. The focus in this paper is to publi-
cize and share our design interfaces as well as our
insights with other researchers who are developing tools
to support design space visualization and exploration.
Keywords Visualization · Design optimization · Metamodels · Simulation · Graphical user interface
1 Introduction
In 1984, Lembersky and Chi developed software and a
graphical user interface that incorporated artifact rep-
resentations of logs to enable timber buckers to posi-
tion cuts on a log and determine the use for each section
(e.g., plank, plywood veneer, pulp). The software pro-
vided immediate feedback on the resulting profit per
section and overall profit for the log. At the same time,
the software computed an optimal design via Dynamic
Programming (DP) for log segmenting and product
allocation and presented the alternative graphically,
adjacent to the cutter’s design, in real time. Invariably
the DP allocation produced higher profit, but an
interesting result of their study was that the timber
buckers using the software improved their own cutting
abilities. After 1 week of practice on the log simulator/
design interface, the timber buckers had developed new
strategies for cutting and product allocation based on
viewing the competing (and superior) DP solutions,
improving the profitability of their own ad-hoc cutting/
allocation performance [1].
In the two decades since, advances in computing
power and software sophistication have fostered in-
creased interest in visualization and interactive design
tools. Today, we find visualization and interactive
graphical user interfaces receiving considerable atten-
tion in facilitating decision-making and optimization
in engineering design [2–13]. A rationale for this con-
tinued interest is the lack of consensus on the best
computational method for design decisions that involve
multiple attributes, uncertain outcomes, and often
multiple decision makers [14, 15]. Zionts cites ten myths
of multiple criteria decision-making, including (#2) the
myth of a single decision maker (it is often a group),
(#4) the myth of an optimal solution, (#5) the myth
of limiting consideration to nondominated (Pareto-
optimal) solutions, and (#6) the myth of the existence
of a utility or value function. Competing approaches
include weighted objective functions and mathematical
programming [16–18], construction of utility functions
[19–23], quality function deployment and modifica-
tions [24, 25], game theory [26–28], fuzzy set methods
[29, 30], and other proxy functions [10, 31–33].
A study by the National Research Council high-
lighted three requirements for an effective design
interface: it must be (1) integrative, (2) visual, and (3)
fast, i.e., enable real-time response to user input [34].
Ullman [35] corroborates this, stating that ‘‘In order to
be useful to the short-term memory, any extension (in
the external environment) must share the characteristics
of being very fast and having high information content.’’
Despite the apparent advantages and recent advances of
visualization techniques for engineering design, we have
found limited evidence in the engineering literature that
assesses the impact of having a fast graphical design
interface on the efficiency and effectiveness of engi-
neering design or decision-making. Most research on
the effect of response delay on user productivity with
design interfaces has focused on simple placement,
searching, and editing tasks [36–39] or on the loss of
information held in short-term memory [40]. Goodman
and Spence [41] examined the effect of response time on
the time to complete an artificial task that was created
to mimic design activity; they found an increase in task
completion time of approximately 50% for response
delays of 1.5 s in the software. For more complex tasks,
Foley and Wallace [42] found that response delays of up
to 10 s did not have significant impact.
Unfortunately, many design analysis tasks may not
be instantaneous, even when calculated using state-
of-the-art software on state-of-the-art computers. For
instance, Boeing frequently uses simulation codes that
can take 15–18 h for analysis of some design applica-
tions [43] while researchers at Ford report that a crash
simulation of a full passenger car takes 36–160 h to
compute [44]. Therefore, we assert that a metamodel-
driven design interface provides a strategy for meeting
the challenge of creating an integrative, visual, and fast
graphical design environment. By metamodels we
mean simple mathematical approximations to the in-
put/output functions calculated by the designer’s
analyses and simulation models [45–47]. Metamodels
have been used in a variety of engineering design and
optimization applications, and recent reviews can be
found in [47–50]. Because the approximations are
simple, they are fast, virtually instantaneous, enabling
performance analyses to be computed in real-time
when design (input) variables are changed within a
graphical design interface; however, because they are
simple approximations, there is a tradeoff between
accuracy and speed. Hence, the overarching objective
guiding our research is to determine the efficacy of
metamodel-driven visualization for graphical design
and optimization as shown in Fig. 1.
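To make the idea of a metamodel concrete, the following minimal Python sketch (not the authors' Visual Basic implementation; the sample function, sample size, and polynomial basis are all illustrative assumptions) fits a second-order response surface to a handful of runs of an expensive analysis and then evaluates the resulting surrogate virtually instantaneously:

```python
import numpy as np

def expensive_analysis(x):
    """Stand-in for a long-running simulation (hours per run in practice)."""
    return np.sin(3 * x[..., 0]) + x[..., 1] ** 2

# Sample the expensive code offline at a handful of design points...
X = np.random.rand(30, 2)
y = expensive_analysis(X)

# ...then fit a second-order polynomial response surface by least squares.
basis = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                         X[:, 0] * X[:, 1], X[:, 0] ** 2, X[:, 1] ** 2])
beta, *_ = np.linalg.lstsq(basis, y, rcond=None)

def metamodel(x0, x1):
    """Fast surrogate suitable for real-time use inside a design interface."""
    return beta @ np.array([1.0, x0, x1, x0 * x1, x0 ** 2, x1 ** 2])
```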
At the highest level in Fig. 1, our investigations have
been divided into two categories: (1) assessing the
benefit of having a rapid response to user requests for
performance as a function of design parameters, and
(2) assessing the cost of lost accuracy due to the use of
approximations or metamodels. By working with the
metamodels themselves, we can impose artificial delays
in the software to simulate computationally expensive
analyses. The benefit of rapid response depends on the
nature of the design task, the ‘‘richness’’ of the design
interface (e.g., text-based versus graphical), and the
training received by the user. In the next section, we
describe the four design problems and corresponding
interfaces that have been developed as part of our re-
search. The experimental factors, measures, design,
and protocol are discussed in Sect. 3, and a brief
overview of our findings is given in Sect. 4.
2 Overview of design problems and interfaces
A summary of the design problems and interfaces
presented in this paper is given in Table 1. Each prob-
lem is formulated in terms of a numerical optimization
problem, the standard form of which is given in Eq. 1.
The function f is called the objective or cost function,
and x are the design variables. There can be a number
of inequality constraints, gj(x), and equality constraints,
hk(x), which may or may not be explicit functions of
x. The goal is to find the best set of variables x
that minimize f (or alternatively maximize –f) while
satisfying the constraints.
min f(x)
subject to: g_j(x) ≤ 0
            h_k(x) = 0                                (1)
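As an illustration of the standard form in Eq. 1 (the toy objective and constraints below are our own, and SciPy is not part of the interfaces described in this paper), a general-purpose solver can be applied directly to a problem written this way:

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2   # objective f(x)
cons = [
    # g(x) = x0 + x1 - 4 <= 0, expressed as 4 - x0 - x1 >= 0 for SciPy
    {"type": "ineq", "fun": lambda x: 4.0 - x[0] - x[1]},
    # h(x) = x0 - 0.5*x1 = 0
    {"type": "eq", "fun": lambda x: x[0] - 0.5 * x[1]},
]
result = minimize(f, x0=np.zeros(2), constraints=cons, method="SLSQP")
print(result.x, result.fun)
```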
The problems described in Table 1 vary in size from
2 to 6 input (design) variables and 2 to 7 outputs, where
the outputs are formulated as either a multiobjective
optimization problem or a constrained single objective
optimization problem. The source from which each
example has been derived is noted in the table along
with the paper(s) wherein we discuss results involving
each interface. The interfaces are either graphical or
text-based, and response delays within each interface
vary from 0.0 to 1.5 s as indicated in the table. Each
interface is developed using Visual Basic 6.0, which is
then compiled into an executable. The executables for
each interface are available at: http://www.edog.mne.psu.edu/visualization/.
The rationale for selecting these four problems is to
guard against a common misconception in human
subject testing, namely, generalizability of results. The
normative course of scientific investigations is to nar-
row the scope of a real-world task into laboratory tasks
that are generalizable. For example, one might expect
a functional relationship to exist between the number
of inputs and outputs and a subject’s performance on a
task without empirical investigation; however,
researchers have warned against such an assumption
because findings from a simple laboratory task cannot
Fig. 1 Overall experimentation strategy
Table 1 Overview of design problems and interfaces

Design problem [source]  | # Inputs | # Objectives | # Constraints | Type of interface        | Response delay (s) | Results
I-beam [51]              | 2        | 2            | 0             | Graphical and text-based | 0.0, 0.25, 0.5     | [52, 53]
I-beam [51]              | 2        | 1            | 1             | Graphical and text-based | 0.0, 1.5           | [54]
Desk lamp [55]           | 3        | 2            | 0             | Graphical and text-based | 0.0, 0.25, 0.5     | [53]
Aircraft wing [Boeing]   | 6        | 1            | 3             | Graphical                | 0.0, 0.25, 0.5     | [43]
Aircraft wing [Boeing]   | 2, 4, 6  | 1            | 3             | Graphical                | 0.0, 0.25, 0.5     | [56]
Job shop [57]            | 6        | 1            | 6             | Graphical                | 0.0, 1.5           | [58]
readily be transferred to tasks situated in dynamic and
complex environments (e.g., design of a desk lamp or a
wing, or job shop control) [59, 60]. By examining a
broad range of problems varying in size, scope, and
application, we can generalize our results to a greater
extent.
Before describing each design problem and its cor-
responding interfaces, we note that the basic func-
tionality of our graphical design interfaces (GDI) is as
follows.
1. After pushing the Start button, the user manipu-
lates the values of the design variables by moving
the slider bars that are located in the lower right
hand corner of the GDI, where the values of the
slider bars define a design alternative.
2. As the slider bars change,
(a) the picture of the geometry changes to reflect the
values of the new design variables,
(b) the objective and constraints are re-evaluated for
the new design variable values,
(c) the new values of the objective(s) and con-
straint(s) are displayed numerically in the table in
the upper right hand corner of the GDI, and
(d) the new values of the objective(s) and con-
straints(s) are plotted graphically in the 2-D
output display window in the middle of the GDI.
3. The user continues to manipulate the slider bars
until s/he determines that a good design is ob-
tained.
At any point during this process, the user can use the
mouse to select a point that is plotted in the 2-D output
display window. Whenever a point is selected, the sli-
der bars revert to the corresponding settings of
the design variables that yielded this design, and the
geometry and numerical display of the output(s) are
updated. This way, the user can always return to a
promising design with the click of a mouse, using the
selected point as the new starting point when manip-
ulating the slider bars to search the design space. If the
output display window gets too crowded, the user can
click the Clear button to clear the screen or the Zoom
button to zoom in (or out) around the point that is
selected or most recently plotted. Finally, we note that
the interfaces do not work in reverse, i.e., the user
cannot point the mouse to a good position in the out-
put display and have it ‘‘back-solve’’ to find the cor-
responding values of the design variables.
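The interfaces themselves are Visual Basic 6.0 executables; as a language-neutral illustration of the interaction pattern (slider change, re-evaluation, display update, with an optional artificial response delay), a minimal Python/tkinter sketch with a toy analysis function might look like this:

```python
import time
import tkinter as tk

DELAY = 0.0  # artificial response delay in seconds (e.g., 0.5 or 1.5)

def evaluate(h, w):
    """Toy stand-in for the interface's analysis or metamodel."""
    area = 2.0 * w * 0.25 + (h - 0.5) * 0.25
    stress = 1000.0 * (h / 2.0) / (w * h ** 3 / 12.0)
    return area, stress

root = tk.Tk()
root.title("Minimal slider-driven design interface")
output = tk.Label(root, text="move a slider")
output.pack()

def on_change(_value=None):
    time.sleep(DELAY)  # block briefly to mimic an expensive analysis
    a, s = evaluate(h_var.get(), w_var.get())
    output.config(text=f"A = {a:.3f}, sigma = {s:.3f}")

h_var, w_var = tk.DoubleVar(value=5.0), tk.DoubleVar(value=5.0)
for name, var, lo in (("h", h_var, 0.2), ("w", w_var, 0.1)):
    tk.Scale(root, from_=lo, to=10.0, resolution=0.1, orient="horizontal",
             label=name, variable=var, command=on_change).pack()
root.mainloop()
```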
The text-based design interfaces (TDIs) use the
same analyses as the GDIs, but two different methods
are used in the TDIs for changing the design variables:
slider bars and text boxes. Also, the user must click on
the Calculate button in order to evaluate a design
alternative. Because these interfaces are only text-
based, the picture of the design geometry does not
change as the design variables are changed, and there
is no graphical display that plots the output responses.
Instead, input-output response values are stored
numerically in a drop-down list from which users can
select the best design. The drop-down list can be
cleared of all but the last point selected or analyzed at
any time, but there is no zoom in a TDI as it is not
necessary. Descriptions of each design problem and
corresponding interfaces follow.
2.1 I-beam
The I-beam design problem is adapted from [51],
wherein the user varies the cross-sectional dimensions
of an I-beam that is subject to a bending moment.
Users can adjust the height, h, and width, w, of the
I-beam cross-section, which vary the cross-sectional
area, A, and the resulting bending stress, σ. The
objective is to minimize the cross-sectional area, A,
while satisfying a maximum constraint on the bending
stress, σ. Thus, the user is asked
to solve the following constrained, single-objective
optimization problem:
Minimize:   A
Subject to: σ ≤ σ_max
            0.2 in ≤ h ≤ 10.0 in
            0.1 in ≤ w ≤ 10.0 in            (2)
The analytical expressions for A and σ are taken di-
rectly from [51] and coded within the GDI that was
developed for the I-beam; no metamodels are used in
this problem. The GDI for the I-beam design problem is
shown in Fig. 2. The user manipulates the two slider bars
to vary the height and width of the I-beam. As these
values change, the resulting values for A and σ are
plotted in the graphical display window, and the picture
of the I-beam geometry changes accordingly. A numer-
ical display of A and σ is also provided for the user.
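The exact expressions are given in [51]; purely as a self-contained illustration, an analysis function of the same shape, assuming standard I-section formulas with a fixed flange/web thickness t and bending moment M (both made-up values, not taken from [51]), is:

```python
def ibeam_outputs(h, w, t=0.25, M=10000.0):
    """Illustrative I-beam analysis: cross-sectional area and max bending
    stress. t and M are assumed constants, not values from [51]."""
    A = 2.0 * w * t + (h - 2.0 * t) * t                      # area
    I = (w * h ** 3 - (w - t) * (h - 2.0 * t) ** 3) / 12.0   # second moment
    sigma = M * (h / 2.0) / I                                # stress M*c/I
    return A, sigma
```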
In addition to the GDI shown in Fig. 2, a text-based
design interface (TDI) was also created to serve as a
control in our experiments. The TDI for the I-beam
design problem is shown in Fig. 3 and is a modified
version from our earlier study [53], which used text
boxes and the keyboard to enter values for the input
variables. To ensure consistency with the allowable in-
put values between the three I-beam design interfaces,
the design variable input method for the TDI is through
slider bars rather than a keyboard. Thus, users can vary
w and h of the I-beam by moving the slider bars with the
mouse in the center of the GUI (see Fig. 3).
Since slider bars restrict design variable manipula-
tion to one-at-a-time variation, a ‘‘field box’’ GDI was
also created to allow users to simultaneously change
both input variables. The field box is simply a box with
w and h on the horizontal and vertical axes, respec-
tively, and a cursor inside the box, as shown in Fig. 4.
Users can move the cursor anywhere within the
boundaries of the box, which correspond to the design
variable bounds. An advantage of a simultaneous input
device such as the field box is that it allows users to
perform two-factor-at-a-time variation, which can
facilitate design space exploration. The field box input
method also helps reduce the gaps found in the
graphical window in the slider bar GDI, which can be
seen by comparing the A versus σ plots in Figs. 2 and 4.
All other functionality is identical between the field
box GDI and the slider bar GDI.
A multiobjective formulation for the I-beam design
problem has also been developed and tested [53]. The
objectives in this formulation are to simultaneously
minimize normalized measures of A and r using a
weighted-sum formulation:
Fig. 2 GDI for I-beam design problem

Fig. 3 TDI for I-beam design problem
Minimize:   F = α (A − A_min)/(A_max − A_min) + (1 − α) (σ − σ_min)/(σ_max − σ_min)
Subject to: 0.2 ≤ h ≤ 10.0
            0.1 ≤ w ≤ 10.0                  (3)
where α is a scalar weighting factor (0 ≤ α ≤ 1) that we
set at 0.1, 0.5, or 0.9; A and σ are the area and stress in
the I-beam; A_max and A_min are the maximum and
minimum possible areas, respectively; and σ_max and
σ_min are the maximum and minimum possible stresses,
respectively, based on the slider bar limits. For this
formulation, the I-beam GDI is modified to show
contour lines of constant F to facilitate the search for
the best solution (see Fig. 5a); note that the feasible
region is no longer highlighted since we do not have
any constraints in this formulation. In addition, the
numerical value of F is displayed when a design point is
selected. The corresponding TDI for this formulation is
shown in Fig. 5b. Note that text boxes are used in this
TDI for design variable input instead of the slider bars.
This TDI was actually developed prior to the slider bar
version shown in Fig. 3, and the slider bars were added
when the multiobjective optimization problem was
simplified to the constrained single-objective optimi-
zation problem of Eq. 2 in an effort to reduce the be-
tween-subject variability [53].
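To make the weighted-sum search concrete, the sketch below (reusing the illustrative ibeam_outputs function from Sect. 2.1; the normalization bounds are placeholders, not the values from [53]) evaluates F of Eq. 3 over the slider grid and keeps the best design:

```python
import numpy as np

def weighted_sum(A, sigma, alpha, A_bounds, s_bounds):
    """Eq. 3: convex combination of normalized area and stress."""
    (A_min, A_max), (s_min, s_max) = A_bounds, s_bounds
    return (alpha * (A - A_min) / (A_max - A_min)
            + (1.0 - alpha) * (sigma - s_min) / (s_max - s_min))

# Sweep the slider grid for alpha = 0.5 and keep the smallest F.
best = min((weighted_sum(a, s, 0.5, (1.0, 25.0), (10.0, 900.0)), h, w)
           for h in np.arange(0.2, 10.01, 0.1)
           for w in np.arange(0.1, 10.01, 0.1)
           for a, s in [ibeam_outputs(h, w)])
print(best)  # (F, h, w) of the best grid point
```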
2.2 Desk lamp
The desk lamp design problem is derived from [55] and
uses the radial basis function metamodels for analysis
that are developed in [61]. The objective is to maximize
normalized measures of the mean illuminance and
minimize the standard deviation of the illuminance on
a predetermined surface area (e.g., a paper or a book)
on a desk by changing three design variables: rod
length, L2, reflector length, L1, and reflector angle, h(see Fig. 6). A weighted-sum formulation is used for
the optimization:
Minimize:   F = −α (μ − μ_min)/(μ_max − μ_min) + (1 − α) (σ − σ_min)/(σ_max − σ_min)
Subject to: 50 ≤ L1 ≤ 100
            300 ≤ L2 ≤ 500
            0° ≤ θ ≤ 45°                    (4)
where F is a weighted sum of normalized measures of
μ and σ; μ_max and μ_min are the maximum and minimum
possible values for mean illuminance, respectively; and
σ_max and σ_min are the maximum and minimum possible
standard deviations for illuminance, respectively. The
scalar weighting factor, α, ranges from 0 to 1 (we use
α = 0.1, 0.5, 0.9), and the optimal design maximizes the
normalized μ by using −α for the first term in Eq. 4. The
GDI for the desk lamp problem is shown in Fig. 6. We
note that the axis for the mean illuminance has been
reversed so that the best designs reside in the lower left
hand corner of the 2-D output display to match the
location of optimal solutions in the I-beam GDI.
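The actual radial basis function metamodels for the lamp are developed in [61]; a generic sketch of the technique, with made-up sample data standing in for the lamp simulations, is:

```python
import numpy as np

def fit_rbf(X, y, eps=1.0):
    """Solve Phi w = y for Gaussian RBF weights centered at the samples."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.linalg.solve(np.exp(-(eps * d) ** 2), y)

def predict_rbf(X, w, x, eps=1.0):
    """Evaluate the fitted RBF surrogate at a new point x."""
    d = np.linalg.norm(X - x, axis=-1)
    return np.exp(-(eps * d) ** 2) @ w

# Made-up samples: 20 lamp configurations (L1, L2, theta) and a response.
X = np.random.rand(20, 3)
y = np.random.rand(20)
w = fit_rbf(X, y)
mu_hat = predict_rbf(X, w, np.array([0.5, 0.5, 0.5]))
```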
A text-based design interface (TDI) was also
developed for the desk lamp design problem (see
Fig. 7). The functionality is nearly identical to that of
the I-beam TDI except that the user enters the values
for each design variable into textboxes instead of
Fig. 4 I-beam GDI with ''field box'' for user input
changing them with slider bars. Also, the responses are
updated only after the user pushes the Calculate button,
as noted earlier.
2.3 Aircraft wing
The wing design problem involves sizing and shaping
the plan view layout of an aircraft wing to minimize its
cost subject to constraints on range, buffet altitude, and
takeoff field length. The aircraft wing design problem
was developed in conjunction with researchers at The
Boeing Company and is presented in detail in [43]. The
initial problem involved six design variables that could
be manipulated to design the wing; however, 2- and
4-variable versions of the problem have also been
created:
1. Semi-span, x1
2. Aspect ratio, x2        (x1–x2: 2-variable problem)
3. Taper ratio, x3
4. Sparbox root chord, x4  (x1–x4: 4-variable problem)
5. Sweep angle, x5
6. Fan diameter, x6        (x1–x6: 6-variable problem)
Bounds: 0 < x_i < 1

The definition of each variable with respect to the
wing's geometry is given in [43]. The objective and
constraints for the wing design problem are summa-
rized in Eq. 5.

Fig. 5 GDI and TDI for multiobjective I-beam design problem. a I-beam GDI with slider bar input. b I-beam TDI with textbox input
Minimize:   Cost
Subject to: Range > 0.589
            Buffet altitude > 0.603
            Takeoff field length < 0.377    (5)
The relationships between the design variables and
the objective and constraints are obtained using sec-
ond-order response surface models, which are given in
[43]. To maintain the proprietary nature of the data,
the cost, constraints, and design variables have all
been normalized to vary between [0,1] based on the
minimum and maximum values observed in the
sample data used to construct the response surface
models used for analysis within the GDI. Conse-
quently, the constraint limits on range, buffet altitude,
and takeoff field length are given as normalized values
in Eq. 5, and the bounds on each design variable are
normalized to [0, 1].
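The normalization itself is standard min–max scaling from observed sample data; a small sketch (the numbers below are invented, since the real wing data are proprietary):

```python
import numpy as np

def normalize(samples):
    """Scale each column to [0, 1] using the observed min/max per column."""
    lo, hi = samples.min(axis=0), samples.max(axis=0)
    return (samples - lo) / (hi - lo), lo, hi

# e.g., two raw response columns -> normalized values as used in Eq. 5
raw = np.array([[4200.0, 0.61], [3900.0, 0.55], [4600.0, 0.72]])
scaled, lo, hi = normalize(raw)   # each column now spans [0, 1]
```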
The GDI for the 6-variable wing design problem is
shown in Fig. 8. Simplified GDIs for the 2- and 4-var-
iable problems are identical except that they have
fewer slider bars, and each user only uses one of these
GDIs for the experiment. As with the other GDIs, the
user manipulates the slider bars to change the design
variable values, and the GDI updates as follows.
Fig. 6 GDI for desk lamp design problem

Fig. 7 TDI for desk lamp design problem
1. The picture of the wing geometry changes to reflect
the values of the new design variables.
2. The objective and constraints are re-evaluated for
the new design variable values.
3. The new values of the objective and constraints are
displayed numerically in the table in the upper
right hand corner of the GDI (green if all con-
straints are satisfied, red otherwise).
4. The new values of cost and range are plotted
graphically in the 2-D output display window
(green if all constraints are satisfied, red otherwise)
in the middle of the GDI.
The red and green color scheme was first introduced
in this interface since, unlike the I-beam and desk lamp
problems, the problem has more than two output
responses of interest. In general, user feedback was
positive on the use of the red and green color
scheme [43].
2.4 Job shop manufacturing system
The job shop design problem is adapted from Ref. [57].
The job shop manufacturing system consists of five
workstations, where the machines at a workstation are
identical and perform the same function(s). A first-in,
first-out (FIFO) queue exists at each workstation,
where the first part to enter the queue is the next one
to be processed. There are three different product
types that are manufactured in this job shop, and jobs
are moved from one station to another by a fork truck.
The user can vary the number of machines (from 2 to
6) at each workstation as well as vary the number of
fork trucks (from 1 to 3) transporting the parts. A
simulation model of the job shop system was created
using Arena 3.0, and the routing times, probabilities,
and mean service times for each job are given in [58],
along with the distances between workstations and
operating costs for each workstation.
Seven output responses are considered in the job
shop design problem: system operating cost, average
time a part is in the system, and average utilization at
each of the five workstations. The problem statement
for the job shop design problem is:
Minimize:   System operating cost
Subject to: Average time in the system ≤ 0.425, 0.100, 0.340
            Average utilization at workstation i ≤ 0.35, for i = 1, ..., 5
            2 ≤ Number of machines at workstation i ≤ 6, for i = 1, ..., 5
            1 ≤ Number of fork trucks ≤ 3   (6)

Fig. 8 GDI for wing design problem

The values for the seven performance measures
are all normalized to [0, 1] to alleviate scaling
inconsistencies between them. The values for each
performance measure are obtained through polyno-
mial regression models that were developed from the
simulation model using design of experiments and
least squares regression. First-order, stepwise, and
second-order polynomial regression models are used
to approximate the system responses to allow us to
investigate the impact of coupling within the
approximation model. All three sets of models can
be found in [58] along with details on how we sam-
pled the simulation model and fit each set of
regression models. Three GDIs were created for the
job shop design problem where each GDI used a
different set of the regression models; the controls,
layout, and capabilities of the three GDIs are iden-
tical otherwise.
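The actual models and sampling plan are given in [58]; the following sketch shows, with stand-in data, how first- and second-order polynomial regression metamodels of this kind can be fit by ordinary least squares (a stepwise model would start from the second-order basis and retain only statistically significant terms):

```python
import numpy as np
from itertools import combinations

def design_matrix(X, order=2):
    """Polynomial basis: intercept and linear terms, plus two-factor
    interactions and pure quadratics when order == 2."""
    cols = [np.ones(len(X))] + [X[:, j] for j in range(X.shape[1])]
    if order == 2:
        cols += [X[:, i] * X[:, j]
                 for i, j in combinations(range(X.shape[1]), 2)]
        cols += [X[:, j] ** 2 for j in range(X.shape[1])]
    return np.column_stack(cols)

X = np.random.rand(50, 6)                      # stand-in design points
y = X.sum(axis=1) + 0.1 * np.random.randn(50)  # stand-in simulation output
beta1, *_ = np.linalg.lstsq(design_matrix(X, 1), y, rcond=None)  # 1st-order
beta2, *_ = np.linalg.lstsq(design_matrix(X, 2), y, rcond=None)  # 2nd-order
```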
A screen shot of the GDI for the job shop design
problem is shown in Fig. 9. Similar to the aircraft wing
design problem, the constraints on the average work-
station utilizations are represented using a green (all of
the utilization constraints are satisfied) and red (one or
more constraints is violated) color scheme. This color
scheme is also applied to the Job Shop Layout figure
on the right of the GDI: workstations that do not sat-
isfy the utilization constraint are shown in red, while
all others are shown in green. Finally, the constraint on average time in
the system is highlighted in the objective plot by
shading the feasible region; savvy users will quickly
narrow their search to job shop designs located within
this feasible region.
3 Experimental methods
In this section, we overview the experimental factors,
performance measures, and experimental design typi-
cally used in our experiments and give a sample
experimental protocol for researchers to follow should
they desire to conduct additional experiments using
our design interfaces.
3.1 Experimental factors
• Response delay: the one experimental factor common to all of our design interfaces; the response-delay levels used with each interface are listed in Table 1.
In addition to response delay, the following factors
have also been studied:
• I-beam, single-objective case: type of interface (3 levels: TDI, GDI w/slider bars, or GDI w/field box)
• I-beam, multiobjective case: type of interface (2 levels: TDI or GDI w/slider bars) and α value (3 levels: 0.1, 0.5, or 0.9)
• Desk lamp: type of interface (2 levels: TDI or GDI)
• Aircraft wing: size of the problem (3 levels: 2, 4, or 6 variables)
• Job shop: level of coupling within the polynomial regression model (3 levels: first-order, second-order, or stepwise model)
Additional levels for many of these factors could be
easily added to any design interface by changing the
Visual Basic 6.0 code and recompiling it.
3.2 Performance measures
User performance is measured by percent error and
task completion time, which we use as surrogates for
Fig. 9 GDI for job shop design problem
design effectiveness and design efficiency, respec-
tively. Data transformations are commonly used
when model assumptions such as residual normality
are violated, and two of the more common data
transformations used to satisfy model assumptions
are the square root transform and the logarithmic
transform [62]. We have employed both within our
studies. Various aspects of the design search process
can also be evaluated as each design interface re-
cords the number of designs evaluated along with the
number of times each feature (i.e., clear and zoom
buttons) was used. Users are also asked to complete
pre- and post-test questionnaires to gather demo-
graphic information and evaluate various aspects of
the design interface, the design process, and the de-
sign problem itself. Responses to these questions
were used to test for significant correlation with user
performance. Finally, we have also administered the
NASA Task Load Index (NASA-TLX) [63] after
each trial of the experiment to study the perceived
workload of the user during the experiment. The
NASA-TLX is a widely used subjective workload
measure that gives users a direct way of providing
their opinions, is easy to use, has high
face validity, and has been shown to be sensitive to a
variety of task demands [64]. We have found that
users' perceived workload, in addition to their per-
formance, has been adversely affected by delay and
type of GDI [54, 65].
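Returning to the data transformations mentioned above, applying them is a one-line operation on the recorded responses (the times below are invented for illustration):

```python
import numpy as np

completion_times = np.array([42.0, 55.0, 131.0, 60.0, 210.0, 77.0])
sqrt_t = np.sqrt(completion_times)  # square-root transform
log_t = np.log(completion_times)    # logarithmic transform
# The transformed values, not the raw times, serve as the ANOVA response
# when residual normality is violated.
```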
3.3 Experimental design
We typically employ a between-subjects n × m fac-
torial design where there are n levels for response
delay and m levels for the other factor(s) being tes-
ted. We have also used a Graeco-Latin square design
to test three factors (response delay, α, and run
number) for the I-beam and desk lamp GDIs and
TDIs [53]. Pilot studies are strongly recommended to
assess the sensitivity of the experiment prior to
gathering final data. In most cases, we have found
that we need ~9 subjects per run condition, which
equates to approximately 60 subjects when using a
2 × 3 factorial design. We have also used pilot studies
to determine the amount of ‘‘training’’ necessary to
ensure a sufficient level of proficiency with the design
interface (typically 6–10 trials [54]). In our early
experiments, there was a significant learning effect,
indicating that users were insufficiently trained during
the demonstration trials and were still learning how
to use the software during the actual trials of the
experiment [52, 53].
3.4 Experimental protocol
Each experiment starts by giving subjects an overview
of the problem and a brief introduction on what they
will do and how long it will take. After reading the
overview and having any questions answered, subjects
are asked to sign an informed consent form. The sub-
jects are then given the pre-test questionnaire to
complete. Once this is done, they can begin using the
software by entering a tracking number and selecting
the experimental trial number in the upper left hand
corner of the design interface. After pushing the Start
button, pop-up windows guide the user through the
interface and its controls, demonstrating its capabili-
ties. Once comfortable with the interface, the user
completes a series of trials during which time data is
gathered. After each trial, the NASA-TLX is admin-
istered via computer, and after the final trial, subjects
complete a post-test questionnaire for the experiment.
A graduate student can be quickly trained to supervise
the experiment, administer questionnaires and NASA-
TLX, and answer questions. To compensate the subjects
for their time (experiments can take up to an hour to
complete, depending on the number of trials), we pay
them $10 per half hour. When used as a supple-
ment to in-class instruction, extra credit has been used
effectively to recruit subjects to participate in the
experiment outside of class [52].
4 Summary of results and future avenues of research
To date, we have run more than 330 subjects through
our experiments. The results from each experiment are
summarized in the papers noted in Table 1, of which a
brief summary follows. In our initial study involving
the multiobjective formulation of the I-beam [52], the
0.5 s response delay significantly increased error but
not affect the task completion time, and we noticed
that users considered fewer design alternatives as re-
sponse delay increased. Some users needed more time
to become familiar with the GDI as evidenced by the
significant learning effect that we found, which indi-
cated that users were not yet proficient with the
interface.
As a continuation of this study, we tested 133 stu-
dents using the multiobjective I-beam and desk lamp
GDIs and TDIs [53]. We found that GDI users per-
formed better (i.e., have lower error and faster com-
pletion time) on average than those using TDIs, but
these differences were not always statistically signifi-
cant. We also found that a response delay of 0.5 s
increased error and task completion time, on average,
but these increases were not always statistically sig-
nificant either due to high variability in user perfor-
mance. Our results indicated that the perceived
difficulty of the design task and using the graphical
interface controls were inversely correlated with design
effectiveness—designers who rated the task more dif-
ficult to solve or the graphical interface more difficult
to use actually performed better than those who rated
them easy.
In our follow-up study [54], we used the single-
objective I-beam design problem, and we lengthened
the response delay to 1.5 s and increased the number of
user trials for training to 8 in an effort to minimize
variability between subjects. We also studied the im-
pact of the ‘‘richness’’ of the I-beam design interfaces,
i.e., the TDI (Fig. 3) versus the GDI with slider bars
(see Fig. 2) and the GDI with the ‘‘field box’’ input
mechanism (see Fig. 4). After testing 60 subjects, we
found that the response delays of 1.5 s significantly
increased error and completion time and that users
performed better as the ‘‘richness’’ of the design
interface increased. We also found that the perceived
workload of the users increased as delay increased and
as the ‘‘richness’’ of the design interface decreased, as
measured by the NASA Task Load Index. Rothrock
et al. [65] investigate these latter findings in more
detail.
In an effort to study more complex problems with
larger dimensions, the manufacturing job shop and the
aircraft wing examples were developed. In the job shop
example [58], experimental results from 54 subjects
revealed that user performance deteriorates signifi-
cantly when a response delay of 1.5 s is introduced:
percent error and task completion time increased, on
average, by 9.4% and 81 s, respectively, when the delay
was present. The use of first-order, stepwise, and sec-
ond-order polynomial regression models was also
studied, and we found that user performance improved
when stepwise polynomial regression models were
used instead of first-order or second-order models. The
stepwise models yielded 12% lower error and 91 s
faster completion times, on average, over the first-or-
der models; error was 13.5% lower and completion
time was 62 s faster, on average, than when second-
order models were used. These findings corroborated
those of Hirschi and Frey [6] who found that user
performance deteriorates as the level of coupling in-
creases, but we were able to quantify the extent to
which this occurs by testing different levels of coupling
in the metamodels for the job shop design problem. As
noted in [58], future studies should investigate the
broader implications of this finding by examining
problems with larger and smaller numbers of input
variables and output responses. It is well known that
humans can effectively remember seven (plus or minus
2) pieces of distinct information in their short-term
memory [66], and the impact of coupling may become
negligible in smaller problems while being exacerbated
in larger ones.
Recent experiments with the aircraft wing design
problem have attempted to ascertain the effect of
problem size on user performance. The initial study at
Boeing involved only the 6-variable formulation, and
few significant results were achieved with the 6-vari-
able aircraft wing design example due to the small
sample sizes [43]. Delay did have a significant impact
on the number of points plotted, but not on the percent
error or completion time, whereas the constraint vio-
lation display (i.e., the number of constraints that were
plotted in red/green) did affect the search strategy
employed by the user in the GDI. We found that the
fewer constraint violations displayed, the more free-
dom users felt they had to explore the design space for
good designs. As a follow-on study, we created the 2-
and 4-variable versions of the aircraft wing design
problem and performed tests using 66 engineering
students to study the effect of problem size on user
performance [56]. We found that user performance
dropped off sharply as the number of variables in-
creased from 2 to 4 to 6 and that response delay only
had an impact on the smaller problems. We also
found a significant interaction between response delay
and problem size, such that response delay had less of
an effect as the size of the problem
increased: the 6-variable problem was so difficult to
solve that response delay did not impact user perfor-
mance with this GDI whereas its impact was statisti-
cally significant in the 2-variable problem.
While we have successfully demonstrated the use-
fulness of metamodel-driven graphical design inter-
faces in engineering design, we feel that we have just
‘‘scratched the surface’’ of a very large, complex, and
challenging problem, namely, how to develop effective
user interfaces to support engineering design and
decision-making. Our investigations into the specific
capabilities of design interfaces have been limited to
testing text-based versus graphical displays and differ-
ent user input methods (e.g., slider bars versus a 2-D
‘‘field-box’’). The TDIs and GDIs can be analyzed in
terms of user performance with respect to two general
graphical design principles, which provide insight into
types of things that trip up novice users. The first
principle, called the Proximity Compatibility Principle
[67], specifies that displays relevant to a common task
or mental operation should be rendered close together
in perceptual space. The second principle, called the
Control-Display Compatibility Principle [68], stipu-
lates that the spatial arrangement and manipulation of
controls should be easily distinguishable. A detailed
investigation of TDIs and GDIs in terms of adherence
to the display principles and the impact on user per-
formance is described in [65].
The populations on which each task was tested var-
ied from student novices (for all four tasks) to experts
(for the wing design problem). For student novices, we
found that training sessions generally mitigated the ef-
fects of user experience in terms of user errors. For
example, we found that the level of previous computer
usage, the frequency of playing video games, or famil-
iarity with single- and multi-objective optimization did
not have a significant effect on user’s error during the
trials [54]. In terms of response time, however, the re-
sults were mixed and warrant further investigation. For
the experts, we found that user groups tended to adopt
different strategies for solving the wing design
problem, and these strategies had an impact on user
performance [43]. We also found that specific types of
users (e.g., statisticians vs. mathematicians) desired
different features in their GDIs (e.g., interaction plots
vs. Lagrange multipliers), indicating the need for flexi-
bility within any GDI to be able to tailor it to the
application as well as to different users [43]. Hence,
there is a need for a better understanding of the inter-
action of the nature of the design task, the type of user,
and the design features of the GUI.
The advantage of metamodel-based design analyses
extends beyond the instantaneous (approximate) cal-
culation of responses, even beyond calculations of
statistical characterizations such as variance. Fast
function evaluations should permit the marriage of
instantaneous evaluation with optimization-assisted
and other computational design strategies, such as the
Dynamic Programming coupling in the interface em-
ployed by Lembersky and Chi [1] discussed at the start
of this paper. For that interface, the Dynamic Pro-
gramming solutions were pre-computed for a fixed set
of log datasets, but increasing computational capability
makes real-time optimization of metamodel functions
increasingly practical. An important future endeavor
will be to search for effective combinations of the fast-
response visual environment with strategic control and
use of optimization-based design suggestions.
Finally, for all of this work, we used metamodels as
surrogates of the original analyses. This introduces an
additional element of uncertainty in that the meta-
model is a ‘‘model of the model’’, and the tradeoff
between speed of response and lost accuracy needs to
be examined. Recent related research seeks to address
the added uncertainty in the metamodels themselves
[69]. We have also been working predominantly in a
deterministic setting by either ignoring any uncertainty
in the system or by creating metamodels of the mean
and variance of relevant system responses. Uncertainty
visualization is becoming an important area for future
research.
Acknowledgments This research was supported by the National Science Foundation under Grant No. DMI-0084918. We are indebted to the graduate students who worked on this project (Gary Stump, Martin Meckesheimer, Chris Ligetti, Britt Holewinski, and Param Iyer) as well as the undergraduate students, Kim Barron and Chris Ligetti, who were supported on REU supplements to our grant.
References
1. Lembersky MR, Chi UH (1984) Decision simulators speed implementation and improve operations. Interfaces 14:1–15
2. Burgess S, Pasini D, Alemzadeh K (2004) Improved visualization of the design space using nested performance charts. Des Stud 25(1):51–62
3. Dahl DW, Chattopadhyay A, Gorn GJ (2001) The importance of visualisation in concept design. Des Stud 22(1):5–26
4. Eddy J, Lewis K (2002) Visualization of multi-dimensional design and optimization data using cloud visualization. In: ASME design engineering technical conferences - design automation conference, Montreal, Quebec, Canada, ASME, Paper No. DETC02/DAC-02006
5. Evans PT, Vance JM, Dark VJ (1999) Assessing the effectiveness of traditional and virtual reality interfaces in spherical mechanism design. ASME J Mech Des 121(4):507–514
6. Hirschi NW, Frey DD (2002) Cognition and complexity: an experiment on the effect of coupling in parameter design. Res Eng Des 13(3):123–131
7. Jayaram S, Vance JM, Gadh R, Jayaram U, Srinivasan H (2001) Assessment of VR technology and its applications to engineering problems. ASME J Comput Info Sci Eng 1(1):72–83
8. Kelsick J, Vance JM, Buhr L, Moller C (2004) Discrete event simulation implemented in a virtual environment. ASME J Mech Des 125(3):428–433
9. Kodiyalam S, Yang RJ, Gu L (2004) High performance computing and surrogate modeling for rapid visualization with multidisciplinary optimization. AIAA J 42(11):2347–2354
10. Messac A, Chen X (2000) Visualizing the optimization process in real-time using physical programming. Eng Optim 32(6):721–747
11. Stump G, Yukish M, Simpson TW (2004) The advanced trade space visualizer: an engineering decision-making tool. In: 10th AIAA/ISSMO multidisciplinary analysis and optimization conference, Albany, NY, AIAA, AIAA-2004-4568
12. Maxfield J, Juster NP, Dew PM, Taylor S, Fitchie M, Ion WJ, Zhao J, Thompson M (2000) Predicting product cosmetic quality using virtual environments. In: ASME design engineering technical conferences - computers and information in engineering, Baltimore, MD, ASME, Paper No. DETC2000/CIE-14591
13. Winer EH, Bloebaum CL (2002) Development of visual design steering as an aid in large-scale multidisciplinary design optimization. Part I: method development. Struct Multidiscip Optim 23(6):412–424
14. Zionts S (1992) The state of multiple criteria decision making: past, present, and future. In: Goicoechea A, Duckstein L, Zionts S (eds) Multiple criteria decision making. Springer, Berlin Heidelberg New York, pp 33–43
15. Zionts S (1993) Multiple criteria decision making: the challenge that lies ahead. In: Tzeng GH, Wang HF, Wen UP, Yu PL (eds) Multiple criteria decision making. Springer, Berlin Heidelberg New York, pp 17–26
16. Athan TW, Papalambros PY (1996) A note on weighted criteria methods for compromise solutions in multi-objective optimization. Eng Optim 27(2):155–176
17. Charnes A, Cooper WW (1977) Goal programming and multiple objective optimization - part I. Eur J Oper Res 1(1):39–54
18. Wilson B, Cappelleri DJ, Frecker MI, Simpson TW (2001) Efficient Pareto frontier exploration using surrogate approximations. Optim Eng 2(1):31–50
19. Hazelrigg GA (1996) The implications of Arrow's impossibility theorem on approaches to optimal engineering design. ASME J Mech Des 118(2):161–164
20. Hazelrigg GA (1996) Information-based design. Prentice Hall, Upper Saddle River
21. Steuer RE, Choo EU (1983) An interactive weighted Tchebycheff procedure for multiple objective programming. Math Program 26:326–344
22. Thurston DL, Carnahan JV, Liu T (1994) Optimization of design utility. J Mech Des 116(3):801–808
23. Yang JB, Sen P (1994) Multiple objective design optimization by estimating local utility functions. In: Advances in design automation, ASME DE-Vol 69-2, pp 135–145
24. Hauser JR, Clausing D (1988) The house of quality. Harvard Bus Rev 66(3):63–73
25. Locascio A, Thurston DL (1998) Transforming the house of quality to a multiobjective optimization formulation. Struct Optim 16(2–3):136–146
26. Lewis K, Mistree F (1998) Collaborative, sequential, and isolated decisions in design. ASME J Mech Des 120(4):643–652
27. Lewis K, Mistree F (2001) Modeling subsystem interactions: a game theoretic approach. J Des Manuf Autom 1(1):17–36
28. Rao SS, Vankayya VB, Khot NS (1988) Game theory approach for the integrated design of structures and controls. AIAA J 26(4):463–469
29. Otto KN, Antonsson EK (1991) Trade-off strategies in engineering design. Res Eng Des 3(2):87–103
30. Wood KL, Antonsson EK, Beck JL (1990) Representing imprecision in engineering design: comparing fuzzy and probability calculus. Res Eng Des 1(3/4):187–203
31. Messac A (1996) Physical programming: effective optimization for computational design. AIAA J 34(1):149–158
32. Messac A (2000) From dubious construction of objective functions to the application of physical programming. AIAA J 38(1):155–163
33. Saaty T (1988) The analytic hierarchy process, revised and extended edition. McGraw-Hill, New York
34. National Research Council (1998) Visionary manufacturing challenges for 2020. Committee on Visionary Manufacturing Challenges, National Research Council, National Academy Press, Washington, DC
35. Ullman DG (2003) The mechanical design process, 3rd edn. McGraw-Hill, New York
36. Card SK, Moran TP, Newell A (1983) The psychology of human-computer interaction. Lawrence Erlbaum, Hillsdale
37. Sturman DJ, Zeltzer D, Pieper S (1989) Hands-on interaction with virtual environments. In: Proceedings of the 1989 ACM SIGGRAPH symposium on user interface software and technology, pp 19–24
38. Ware C, Balakrishnan R (1994) Reaching for objects in VR displays: lag and frame rate. ACM Trans Comput Hum Interact 1:331–356
39. Watson B, Walker N, Hodges LF, Worden A (1997) Managing level of detail through peripheral degradation: effects on search performance in head-mounted display. ACM Trans Comput Hum Interact 4:323–346
40. Waern Y (1989) Cognitive aspects of computer supported tasks. Wiley, New York
41. Goodman T, Spence R (1978) The effect of system response time on interactive computer-aided design. Comput Graph 12:100–104
42. Foley JD, Wallace JD (1974) The art of natural graphic man-machine conversation. Proc IEEE 4:462–471
43. Simpson TW, Meckesheimer M (2004) Evaluation of a graphical design interface for design space visualization. In: 45th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics & materials conference, Palm Springs, CA, AIAA, AIAA-2004-1683
44. Gu L (2001) A comparison of polynomial based regression models in vehicle safety analysis. In: ASME design engineering technical conferences - design automation conference, Pittsburgh, PA, ASME, Paper No. DETC2001/DAC-21063
45. Kleijnen JPC (1975) A comment on Blanning's metamodel for sensitivity analysis: the regression metamodel in simulation. Interfaces 5(1):21–23
46. Barton RR (1998) Simulation metamodels. In: Proceedings of the 1998 winter simulation conference (WSC'98), Washington, DC, IEEE, pp 167–174
47. Simpson TW, Peplinski J, Koch PN, Allen JK (2001) Metamodels for computer-based engineering design: survey and recommendations. Eng Comput 17(2):129–150
48. Sobieszczanski-Sobieski J, Haftka RT (1997) Multidisciplinary aerospace design optimization: survey of recent developments. Struct Optim 14(1):1–23
49. Haftka R, Scott EP, Cruz JR (1998) Optimization and experiments: a survey. Appl Mech Rev 51(7):435–448
50. Simpson TW, Booker AJ, Ghosh D, Giunta AA, Koch PN, Yang RJ (2004) Approximation methods in multidisciplinary analysis and optimization: a panel discussion. Struct Multidiscip Optim 27(5):302–313
51. Haftka R, Gurdal Z (1992) Elements of structural optimization, 3rd revised and expanded edn. Kluwer Academic Publishers, Boston
52. Frecker M, Simpson TW, Goldberg JH, Barton RR, Holewinski B, Stump G (2001) Integrating design research into the classroom: experiments in two graduate courses. In: 2001 annual ASEE conference, Albuquerque, NM, ASEE
53. Ligetti C, Simpson TW, Frecker M, Barton RR, Stump G (2003) Assessing the impact of graphical design interfaces on design efficiency and effectiveness. ASME J Comput Inform Sci Eng 3(2):144–154
54. Barron K, Simpson TW, Rothrock L, Frecker M, Barton RR, Ligetti C (2004) Graphical user interfaces for engineering design: impact of response delay and training on user performance. In: ASME design engineering technical conferences - design theory & methodology conference, Salt Lake City, UT, ASME, Paper No. DETC2004/DTM-57085
55. Barton RR, Limayem F, Meckesheimer M, Yannou B (1999) Using metamodels for modeling the propagation of design uncertainties. In: 5th international conference on concurrent engineering (ICE'99), The Hague, The Netherlands, Centre for Concurrent Enterprising, pp 521–528
56. Simpson TW, Iyer P, Barron K, Rothrock L, Frecker M, Barton RR, Meckesheimer M (2005) Metamodel-driven interfaces for engineering design: impact of delay and problem size on user performance. In: 46th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics & materials conference and 1st AIAA multidisciplinary design optimization specialist conference, Austin, TX, AIAA, AIAA-2005-2060
57. Law AM, Kelton WD (2000) Simulation modeling and analysis, 3rd edn. McGraw Hill, Boston, MA
58. Ligetti C, Simpson TW (2005) Metamodel-driven design optimization using integrative graphical design interfaces: results from a job shop manufacturing simulation experiment. ASME J Comput Inform Sci Eng 5(1):8–17
59. Hammond KR (1986) Generalization in operational contexts: what does it mean? Can it be done? IEEE Trans Syst Man Cybernet 16(3):428–433
60. Hammond KR, Hamm RM, Grassia J, Pearson T (1987) Direct comparison of the efficacy of intuitive and analytical cognition in expert judgment. IEEE Trans Syst Man Cybernet 17(5):753–770
61. Meckesheimer M, Barton RR, Simpson TW, Limayem F, Yannou B (2001) Metamodeling of combined discrete/continuous responses. AIAA J 39(10):1955–1959
62. Neter J, Kutner MH, Nachtsheim CJ, Wasserman W (1996) Applied linear statistical models, 4th edn. WCB/McGraw Hill, Boston, MA
63. Hart SG, Staveland LE (1988) Development of NASA-TLX (Task Load Index): results of empirical and theoretical research. In: Hancock PA, Meshkati N (eds) Human mental workload. North Holland, Amsterdam, pp 139–183
64. Wierwille WW, Eggemeier FT (1993) Recommendations for mental workload measurement in a test and evaluation environment. Hum Factors 35(2):262–282
65. Rothrock L, Barron K, Simpson TW, Frecker M, Barton RR, Ligetti C (2006) Applying the proximity compatibility and the control-display compatibility principles to engineering design interfaces. Hum Factors Ergonom Manuf 16(1):61–81
66. Miller GA (1956) The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychol Rev 63:81–97
67. Wickens CD, Carswell CM (1995) The proximity compatibility principle: its psychological foundation and relevance to display design. Hum Factors 37(3):473–494
68. Wickens CD (1992) Engineering psychology and human performance, 2nd edn. Harper Collins, New York
69. Martin JD, Simpson TW (2004) A Monte Carlo simulation of the kriging model. In: 10th AIAA/ISSMO multidisciplinary analysis and optimization conference, Albany, NY, AIAA, AIAA-2004-4483