Honours Project Report
CodeSketch:
Designing a Visual Programming Language for
Arduino
Graphical Implementation Approach
By: Eric Su
Supervisor: Gary Marsden
Category | Min | Max | Chosen
1 Requirement Analysis and Design | 0 | 20 | 10
2 Theoretical Analysis | 0 | 25 | 0
3 Experiment Design and Execution | 0 | 20 | 10
4 System Development and Implementation | 0 | 15 | 0
5 Results, Findings and Conclusion | 10 | 20 | 20
6 Aim Formulation and Background Work | 10 | 15 | 10
7 Quality of Report Writing and Presentation | 10 | 10
8 Adherence to Project Proposal and Quality of Deliverables | 10 | 10
9 Overall General Project Evaluation | 0 | 10 | 10
Total Marks | 80 | 80
Department of Computer Science
University of Cape Town 2013
Abstract
Arduino is a popular electronic prototyping platform that lets creative users design and
develop interactive objects. A key requirement of this development process is to
program the Arduino microcontroller using an extended version of C/C++. However, a
large portion of Arduino's user base consists of artists who have little to no
programming experience. To empower these users to develop with the Arduino
platform, a programming paradigm that does not require coding is needed. The
solution is to design and test a Visual Programming Language (VPL)
which makes use of interactive programming techniques. This project gathers the
essential design principles that are required to build such a language for the Arduino
platform. Using those design principles, a prototype of the interactive user interface
was implemented for this proposed VPL. The development phase of this project made
use of a user-centred design technique to iteratively evaluate and refine the design
until a final prototype was produced.
Acknowledgements
Firstly I would like to thank my supervisor, Professor Gary Marsden, for his guidance
and feedback during the course of this project, and for organising meetings for me to
speak with Arduino experts. I also thank him for providing me and my project partner
with the necessary Arduino equipment and tools to let us familiarise ourselves with the
technology before we could start designing for it.
Secondly I would like to thank the Arduino experts Johan Van der Schjiff, Jeff Dooley,
and Joshua Ginsberg who granted me access to attend their lectures at the Michaelis
Undergrad Computer Lab to observe and work with their students during this project's
design phase. They also provided valuable expert advice for the design of the system.
Lastly I extend my thanks to the third year art students who participated as users in
the evaluation phase of this project. They allowed me to gather valuable user
information which was necessary to complete the development phase. Without them,
the project would not have been possible.
Table of Contents
Abstract ............................................................................................................ 1
Acknowledgements .......................................................................................... 2
Table of Figures ............................................................................................... 6
Chapter 1 – Project overview ........................................................................ 7
1.1 Introduction ................................................................................................ 7
1.2 Research Questions .................................................................................. 8
1.3 Report outline ............................................................................................. 8
Section A: Research ...................................................................................... 9
Chapter 2 – Background and Related Work ................................................ 9
2.1 What is a VPL? .......................................................................................... 9
2.2 The Arduino Platform ................................................................................. 9
2.2.1 Developing with Arduino .................................................................. 9
2.2.2 The Arduino Development Environment ........................................ 10
2.3 Similar Projects ........................................................................................ 11
2.3.1 Modkit ............................................................................................ 11
2.3.2 OpenBlocks ................................................................................... 12
2.4 Other related work ................................................................................... 13
2.4.1 HI-VISUAL, an Iconic Programming System .................................. 13
2.4.2 Generation of Visual Language Environments ............................... 14
Chapter 3 – Design Theory and Principles ................................................ 15
3.1 What is interaction design? ...................................................................... 15
3.2 Understanding the users .......................................................................... 15
3.3 User-centred design ................................................................................. 16
3.4 Design Methodology ................................................................................ 16
3.4.1 Design Phase ................................................................................ 16
3.4.2 Prototyping Phase ......................................................................... 17
3.5 Information Gathering .............................................................................. 18
3.5.1 Recognizing User Goals ................................................................ 18
3.5.2 Identifying Requirements ............................................................... 18
3.5.3 Qualitative Research ..................................................................... 18
3.6 Design Heuristics ..................................................................................... 20
3.7 Design Principles ..................................................................................... 23
3.7.1 Goal-oriented design...................................................................... 23
3.7.2 Affordance ...................................................................................... 24
3.7.3 Mapping ......................................................................................... 24
3.7.4 Performance .................................................................................. 24
3.7.5 Rule-making and interaction constraints ........................................ 25
3.7.6 Cognitive thinking .......................................................................... 25
3.7.7 Technology Probing ....................................................................... 26
3.7.8 Visual noise and clutter .................................................................. 27
3.7.9 Contrast and Layering.................................................................... 27
3.8 Usability Evaluations ................................................................................ 28
3.8.1 Heuristic Evaluation ....................................................................... 28
3.8.2 Cognitive walkthrough.................................................................... 29
3.8.3 Expert Reviews .............................................................................. 29
Section B: Deployment ................................................................................ 30
Chapter 4 – Requirements Gathering ......................................................... 30
4.1 Defining System Requirements ................................................................ 30
4.1.1 Who are the Users? ....................................................................... 30
4.1.2 Usability requirements: .................................................................. 31
4.1.3 Functional Requirements ............................................................... 32
4.2 Design solutions and rationale ................................................................. 33
4.2.1 Navigation ...................................................................................... 33
4.2.2 Visual programming solution .......................................................... 34
4.3 Interface Design ideas ............................................................................. 35
4.3.1 Initial design sketch ....................................................................... 35
4.3.2 Improved design sketch ................................................................. 36
Chapter 5 – Low-fidelity prototyping .......................................................... 37
5.1 Paper prototype ....................................................................................... 37
5.2 Expert Review .......................................................................................... 38
5.3 Heuristic Evaluations and Findings .......................................................... 38
5.3.1 Evaluation checklist ....................................................................... 38
5.3.2 Evaluation results .......................................................................... 39
5.4 Design Implications .................................................................................. 40
5.4.1 Addressing expert review issues .................................................... 40
5.4.2 List of design changes and features added ................................... 41
5.4.3 List of design ideas not yet implemented ....................................... 41
5.5 Technology Probing ................................................................................. 42
5.5.1 Why use a technology probe? ....................................................... 42
5.5.2 The implemented probe ................................................................. 42
5.5.3 Probing the users ........................................................................... 43
5.5.4 Results and findings ...................................................................... 44
Chapter 6 – High-fidelity prototyping ......................................................... 46
6.1 Horizontal Prototype ................................................................................ 46
6.1.1 Design implications from probing ................................................... 46
6.1.2 Implementation .............................................................................. 47
6.1.3 Design limitations ........................................................................... 49
6.2 Evaluations .............................................................................................. 50
6.2.1 Structure ........................................................................................ 50
6.2.2 Results ........................................................................................... 51
6.2.3 List of required changes................................................................. 53
6.3 Final Prototype ......................................................................................... 54
6.3.1 Changes made .............................................................................. 54
6.3.2 Functions not implemented ............................................................ 55
6.3.3 Problems that are beyond the scope of this project ....................... 56
Chapter 7 – Conclusions ............................................................................. 57
7.1 Research questions addressed................................................................ 58
7.2 Possible Future Work ............................................................................... 58
References .................................................................................................... 59
Appendices .................................................................................................... 62
Appendix 1 – Arduino course breakdown (FIN3030H) ............................ 62
Appendix 2 – Enlarged photos and screenshots..................................... 63
Table of Figures
Figure 1: The Arduino UNO microcontroller and the Arduino IDE interface ................... 10
Figure 2: Modkit user interface for visual programming .................................................... 11
Figure 3: Modkit user interface for assigning pins ............................................................. 11
Figure 4: An OpenBlocks visual program with its corresponding source code ............. 12
Figure 5: HI-VISUAL, interpretation of icon behavior ........................................................ 13
Figure 6: Labyrinth, a generated visual environment ........................................................ 14
Figure 7: Diagram which depicts a user-centered design process .................................. 16
Figure 8: Transitive design process, the lower half of this process will be UCD ........... 17
Figure 9: iTunes media library .............................................................................................. 21
Figure 10: The “Back” button on the Google Chrome web browser ............................... 21
Figure 11: The conventional “save” icon ............................................................................ 22
Figure 12: Google’s autocomplete feature .......................................................................... 22
Figure 13: Redial button on conventional telephone ......................................................... 22
Figure 14: Windows error message suggesting recovery solution ................................. 23
Figure 15: Knob tap and lever tap showing affordance in usability ................................ 24
Figure 16: Kitchen stove with mapping controls ............................................................... 24
Figure 17: The file uploading tool for Vula .......................................................................... 25
Figure 18: Concepts related to cognitive psychology ....................................................... 25
Figure 19: An example of a cluttered interface................................................................... 27
Figure 20: Layered windows ................................................................................................ 28
Figure 21: Conventional icons representing Arduino hardware components ................ 31
Figure 22: Arduino program that blinks an LED................................................................. 32
Figure 23: first conceptual sketch ....................................................................................... 35
Figure 24: improved sketch .................................................................................................. 36
Figures 25 (a) and (b) respectively ...................................................................................... 37
Figures 26 (a) and (b) respectively ...................................................................................... 37
Figure 27: Screenshot of technology probe implemented in Java .................................. 43
Figure 28: Drop-zones shaped like buttons ........................................................................ 45
Figure 29: Screenshot of general interface ........................................................................ 47
Figure 30: Interface with added Event ................................................................................. 48
Figure 31: I/O properties menu ............................................................................................ 48
Figure 32: Progress tracking checklist ............................................................................... 49
Figure 33: "View code" button enable constraints ............................................................ 49
Figure 34: New events pane ................................................................................................. 54
Figure 35: Old progress tracking tool ................................................................................. 55
Figure 36: New progress tracking tool ................................................................................ 55
Chapter 1 – Project overview
1.1 Introduction
The Arduino microcontroller is one of the most popular prototyping platforms in the
world [12, p.4]. Designed for artists and hobbyists, it is intended for users who want to
create interactive objects or environments using a variety of input and output devices.
The Arduino platform has its own programming language based on an extended
version of C++. Currently, the only effective way to program the Arduino
microcontroller is to use the built-in text editor that comes with the free IDE
downloaded from the Arduino website.
The fact that there is no way to program the device other than writing source code
limits the range of users who are able to use the system. This is a problem for
inexperienced programmers, because making full use of the features that Arduino
has to offer requires experience in conventional text-based computer programming
(coding). Thus, to grant non-programmers the means to develop on the Arduino
platform, we needed a way for them to program the microcontroller without writing
code. The solution to this problem was to design and implement a new Visual
Programming Language (VPL) that gives end-users the ability to program the Arduino
microcontroller graphically.
We have proposed two different approaches to designing a VPL. The first is an HCI
participatory design approach and the second is a Graphical Implementation design
approach. HCI design focuses on the needs of the users by involving them in the
design process itself. Implementation design follows software design heuristics and
other design methodologies to build a functional prototype, which is then evaluated by
the users.
The HCI design approach was conducted as a separate project and can be found in a
separate report written by my project partner Bhavana Harrilal.
This report focuses on the Graphical Implementation solution to the problem using a
user-centred design (UCD) approach for software development.
1.2 Research Questions
There are two research areas involved in graphical implementation design. The first
research question focuses on design methodologies and the second focuses on
usability testing.
1. How should an interactive VPL be designed?
Research will be conducted on the design principles involved with designing
interactive user interfaces, particularly in the field of visual programming. Then, using
those design principles, a functional prototype of the interface will be developed.
2. How can the usability of the VPL be evaluated?
Research will be conducted on the different ways to evaluate the usability of
interactive software systems, particularly for visual programming systems. Then we
will use those techniques to evaluate our functional prototype.
1.3 Report outline
This report is separated into two sections: Section A: Research and Section B:
Deployment. Section A focuses on all the research done in this project with regards to
interaction design and VPL theory. Section B documents the actual development
process which was deployed based on the conducted research.
Chapter 2 will explore the concept of visual programming languages and describe the
Arduino prototyping platform in more detail. It will also highlight some previous works
that are related to this project. Chapter 3 will describe the design elements and
principles that were involved in this project. Next, chapter 4 will report on the steps
taken to gather system requirements and outline the initial interface designs.
Chapter 5 will describe the low-fidelity prototype and the technology probe built
from it. Chapter 6 will report on the high-fidelity prototype and the results of the
final evaluation after two UCD (See 3.3) iterations.
Finally, chapter 7 will present the final proposed system for our VPL interface as well
as this project's concluding discussion.
Section A: Research
Chapter 2 – Background and Related Work
The purpose of this chapter is to provide the background theory and related work
needed to understand what this project aims to achieve. Here we will define the
concept of a visual programming language and give a brief overview of the features
that the Arduino platform has to offer. The chapter will also describe some related
works that are similar to this project and discuss their differences as well as the
design opportunities they reveal.
2.1 What is a VPL?
A visual programming language is a programming language that lets the programmer
specify instructions visually instead of textually [21]. In order to build a new visual
programming language, it is required for the language designer to be familiar with
both user interaction design and programming language compilers. A conventional
programming language usually forms part of an integrated development environment
(IDE) that allows users to define sets of objects and functions that make up a
computer program. However, these conventional programming languages, such as
Java or C++, only allow the programmer to specify objects and functions textually.
Thus, we are in need of a programming language which makes use of visual or
graphical elements to define these objects which specify program functionality.
Furthermore, the main purpose of a VPL is to grant end-users the ability to program
without any knowledge of conventional text-based programming, otherwise known
as 'coding'.
2.2 The Arduino Platform
The introduction in the previous chapter briefly highlighted Arduino in a summarized
context. The following sections will describe in more detail what the Arduino is and
how to develop interactive systems with its IDE.
2.2.1 Developing with Arduino
As stated in the previous chapter, the Arduino microcontroller is a prototyping platform
designed for creative users who want to build fun and interactive projects using
a variety of sensors and output devices. To develop with Arduino, one needs to
be familiar with the logical flow of the development process, as well as the hardware
and software features of the platform. The Arduino development process generally
involves mapping the ports of input sensors to those of output devices, and then
programming the microcontroller by coding it in the Arduino IDE. The user then
physically connects all the sensors and devices to the specified ports on the Arduino
board, and uploads the Arduino code (known as a sketch) onto the device via the
USB port [12].
2.2.2 The Arduino Development Environment
The current Arduino development environment has a simple interface which includes
a text editor for writing code, a message area for feedback on system status, and a
toolbar with buttons that support the most common functions such as Verify, Upload,
and Save. The main purpose of the IDE is to upload text-based Arduino source code.
As mentioned before, source code written with the Arduino IDE is called a sketch.
The IDE also includes basic text editing features such as cut, copy and
paste. When the user uploads a sketch using the Arduino text editor, the text is
saved with a .ino extension, verified using an extended C++ compiler, and transferred
to the device's flash memory, where the instructions execute indefinitely for as long
as the device has power.
Figure 1: The Arduino UNO microcontroller and the Arduino IDE interface
Further information regarding Arduino hardware and IDE features can be found on the
official Arduino website: www.arduino.cc
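The sketch model described above — a one-time setup followed by an endlessly repeating loop — can be illustrated with a short simulation. The following Python snippet is purely illustrative and is not real Arduino code: the pin_mode and digital_write helpers are stand-ins for the corresponding Arduino API calls, and the pin number and iteration cap are arbitrary choices for the example.

```python
# Illustrative simulation of the Arduino sketch lifecycle (not real
# Arduino code): setup() runs once, then loop() repeats while the
# device has power. Helper names mimic the Arduino API.
HIGH, LOW = 1, 0
pin_states = {}  # records the last value written to each pin

def pin_mode(pin, mode):
    pin_states[pin] = None  # register the pin; mode handling omitted here

def digital_write(pin, value):
    pin_states[pin] = value

def setup():
    pin_mode(13, "OUTPUT")  # pin 13 drives the on-board LED on an UNO

def loop():
    digital_write(13, HIGH)  # LED on
    digital_write(13, LOW)   # LED off (delays omitted in the simulation)

def run_sketch(iterations=3):
    """The runtime calls setup() once, then loop() indefinitely; the
    iteration cap is only so this simulation terminates."""
    setup()
    for _ in range(iterations):
        loop()
    return pin_states

run_sketch()
```

On real hardware there is no equivalent of run_sketch: the compiled sketch sits in flash memory and the runtime repeats loop() for as long as the board is powered.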
2.3 Similar Projects
There have been similar research and development projects which aim to create an
effective visual programming language for Arduino. Two projects thus far have been
quite successful: Modkit and OpenBlocks.
2.3.1 Modkit
Modkit is a graphical programming environment for embedded systems that runs in a
browser. It was inspired by the Scratch programming environment, which is developed
by the Lifelong Kindergarten Group at MIT. It visualizes the Arduino source code as
blocks and, like Scratch, it makes use of a drag-and-drop interface to add components
onto a canvas to build your program. It also includes a graphical hardware setup
feature which lets the user select which device connects to which ports and pins
using a drop-down menu. Modkit is still in its alpha development stage and has not
yet been released to the public for free. However, you can sign up to be part of their
development process and get a subscription to use their products first [15].
Figure 3: Modkit user interface for assigning pins
Figure 2: Modkit user interface for visual programming
2.3.2 OpenBlocks
OpenBlocks is also a graphical programming language. It is very similar to Modkit as it
also uses visual blocks that represent code to build a sketch. This language is more
concise and easier to understand. It provides enough information to infer the setup
and loop methods, which form the bare minimum code required for an Arduino
program. The advantage of having a puzzle-like programming environment is that
it is easier to learn and understand. The disadvantage of OpenBlocks, however, is
that it takes a long time for the sketch to compile during runtime [16].
Figure 4: An OpenBlocks visual program with its corresponding source code
What is common between Modkit and OpenBlocks is that they both use the
same visualization to represent Arduino source code. By visualizing the objects and
methods as blocks resembling puzzle pieces, the user is able to piece together
their program just like a puzzle game. However, even though these two designs have
managed to achieve a functional programming language for Arduino, they will not be
effective if given to non-programmers who have no programming experience at all.
The reason is that these two visual languages are direct visual representations of
the Arduino source code. There is not enough visual abstraction to represent the
Arduino programming language at a high enough level. A visual language that
matches the source code too closely results in visual elements that are structured in
the same way as the original code. Thus, end-users who have never developed in
Arduino will find it just as difficult to understand the language. Therefore, these two
languages are only effective in helping experienced Arduino programmers to code
more efficiently, and do not help end-users such as artists to program at all.
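The one-to-one mapping criticised above can be made concrete with a small sketch. The following Python snippet is a hypothetical miniature of how a block-based tool might render its blocks as Arduino-style source; the block vocabulary and field names are invented for illustration and are not taken from Modkit or OpenBlocks.

```python
# Hypothetical miniature of a block-to-source translator, illustrating
# why block languages like Modkit and OpenBlocks mirror the code so
# closely: every block maps onto exactly one statement or construct.
def block_to_code(block, indent=0):
    pad = "  " * indent
    kind = block["kind"]
    if kind == "digital_write":
        return f'{pad}digitalWrite({block["pin"]}, {block["value"]});'
    if kind == "delay":
        return f'{pad}delay({block["ms"]});'
    if kind == "loop":
        body = "\n".join(block_to_code(b, indent + 1) for b in block["body"])
        return f"{pad}void loop() {{\n{body}\n{pad}}}"
    raise ValueError(f"unknown block kind: {kind}")

# A "blink the LED" program assembled from three puzzle pieces.
program = {"kind": "loop", "body": [
    {"kind": "digital_write", "pin": 13, "value": "HIGH"},
    {"kind": "delay", "ms": 1000},
    {"kind": "digital_write", "pin": 13, "value": "LOW"},
]}
print(block_to_code(program))
```

Because the block structure and the emitted code are isomorphic, a user who cannot read the right-hand side gains little from the left, which is the abstraction gap this project aims to close.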
2.4 Other related work
There has been a lot of research done in the last 20-30 years in the field of visual
programming [19]. This project focuses mainly on user interface and human
interaction in visual programming languages. Below are some of the related works
that have been done on human interaction design.
2.4.1 HI-VISUAL, an Iconic Programming System
HI-VISUAL is a visual programming system designed and built by a team of IEEE
members. The system makes use of icons to provide an effective means of
human-computer interaction (HCI) and was first proposed to support visual interaction
in programming. The system was then extended into a programming environment by
adding several useful features and facilities for iconic programming. Iconic
programming can be carried out visually by arranging icons on a 2D display and
specifying the relationships between them. The system attained a higher level of
human-computer interaction from the viewpoint of universality and efficiency.
However, its effectiveness depended on the icons themselves, and on whether they
could be understood intuitively by most users.
HI-VISUAL's framework used an object-oriented approach for icon management,
making it possible to change the behaviour of the system by defining new rules. A
programming system was then implemented on top of the existing framework and was
designed to carry out program development spatially by overlapping one icon over
another. The system responds to the user's actions at each step of programming,
making it user-interactive [14].
Figure 5: HI-VISUAL, interpretation of icon behavior
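The rule-based behaviour described above can be sketched as a lookup from icon pairs to result icons. The icon names and rules in this Python snippet are invented for illustration; they are not taken from HI-VISUAL itself.

```python
# Hedged sketch of HI-VISUAL's idea of programming by overlapping icons:
# dropping one icon onto another looks up a rule that yields a new icon.
# The icon names and rules here are invented examples.
rules = {
    ("image", "filter"): "filtered_image",
    ("filtered_image", "printer"): "printout",
}

def overlap(icon_a, icon_b):
    """Apply the rule for overlapping icon_a onto icon_b, if any."""
    result = rules.get((icon_a, icon_b))
    if result is None:
        raise ValueError(f"no rule for {icon_a} over {icon_b}")
    return result

# Two overlap steps build a small visual "program".
step1 = overlap("image", "filter")
step2 = overlap(step1, "printer")
```

Defining a new entry in the rules table changes the system's behaviour without modifying any code paths, which mirrors the extensibility that the object-oriented icon framework provides.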
2.4.2 Generation of Visual Language Environments
In this research, the researchers presented a formal approach to generating visual
language environments. It is based on meta-modelling and graph transformation and
defines the VL syntax by means of a meta-model. This enables the environments
generated to support multiple views. The meta-model can be expressed as a class
diagram that contains the entities and relations of the VL syntax.
For visual languages with multiple views, the views are part of the global meta-model.
Therefore, it is necessary to specify the classes, relations, and constraints that
make up each view. Consistency can be maintained by attaching these separate view
models to a global model. These concepts were used to implement a visual
language modelling environment called Labyrinth [13].
Figure 6: Labyrinth, a generated visual environment
Chapter 3 – Design Theory and Principles
In the previous chapter, we explained how Arduino projects are developed and
highlighted some projects that are similar with regards to VPL design. We also looked
at some related works that were done in the field of interactive user interface design.
This chapter will describe the principles and techniques which are most commonly
used to design interactive user interfaces. It aims to answer the research question of
how to design an interactive VPL and how to evaluate its effectiveness with regards to
usability. We start off this chapter by defining what interaction design is, then we follow
up by describing all the elements and principles that are associated with the
interaction design process. Finally, we will end this chapter by presenting and
describing a formal design model which will be used for deployment.
3.1 What is interaction design?
Interaction design, or interface design, is the process of defining the visual layout and
behaviour of artefacts, environments, and systems, along with the formal elements that
communicate that behaviour to the user. Because the behaviours of complex systems
are a matter of cognitive factors and logical processes, interaction design can be
greatly aided by a systematic approach [1]. This approach makes use of
goal-oriented design, which is heavily focused on satisfying the needs and desires of
the majority of people who will use the product.
3.2 Understanding the users
The product of any design effort will ultimately be evaluated by how successfully it
meets the requirements of the user. If the designer does not have a clear
understanding of the user for whom the interface is being designed, the product will
have little chance of success [1, p.39]. Thus, in the early stages of development, the
designer must decide who the software is being developed for.
Understanding the users requires investigation and research. This means making an
effort to find out the relevant characteristics of whoever the product is intended for [2,
p.10]. Understanding the users is best accomplished by working with them during the
design process, as they understand their own goals, experience, likes and dislikes
better than anyone else.
3.3 User-centred design
User-centred design (UCD) is an interaction design technique that gives extensive
attention to the needs, wants, and limitations of the users at each stage of the design
process. The purpose of using a UCD approach is to produce software that is more
user-friendly than other designs. The UCD design process is iterative. It addresses
the whole user experience and is driven and refined by user-centred evaluation. The
essential elements of UCD involve principles that consider visibility and accessibility.
Visibility aids the user with constructing a mental model of the interface [18]. It helps
the user to predict the outcomes of their actions while they interact with the interface.
Accessibility offers the user various ways to find information using consistent
navigational elements. UCD will be used in this project to aid a transitive design
approach (See 3.4.1).
Figure 7: Diagram which depicts a user-centered design process
3.4 Design Methodology
Development models are extensively used in software design. They represent a
structure for the development process of a software product, and include various
processes and methodologies that are selected for development depending on the
project‟s goals [11]. The development process for this project will follow a transitive
design approach (See Figure 8).
3.4.1 Design Phase
Before we begin, we will first need to gather the users and resources required to start
the design phase and plan evaluation sessions with the people involved. We will then
begin the design process by speaking to human access points to gather user and
17
system requirements. Human access points (HAP) are experts who are familiar with
the technology and have a good understanding of the users. They act as an
intermediary between the designer and the user in the early stages of the design
process.
Figure 8: Transitive design process, the lower half of this process will be UCD
3.4.2 Prototyping Phase
The purpose of the prototyping phase in software design is to allow the intended users
of the software to evaluate the developer‟s design by actually trying to use it. Using
prototypes, designers may be able to get a good idea of their design‟s performance
(See 3.7.4). There are two types of prototypes used in this project. Low-fidelity
prototypes are simple illustrations of the design concept usually laid out on paper. A
low fidelity prototype that acts as a conceptual model of the proposed interface will be
shown to the HAP for an expert review. High-fidelity prototypes are working versions
of the final system that only supports certain functionality. They will be used as part of
the UCD process which takes place after the expert review.
.
Unlike technology probes (See 3.7.7), prototypes aim to achieve specific usability
goals rather than finding out what the users intend to do with them. It can be used to
gather more information about user requirements that have been missed during
heuristic design [26].
18
3.5 Information Gathering
Interaction design isn‟t just based on a matter of aesthetic choice, but rather on an
understanding of users and cognitive principles. Anything that the designer can learn
from his intended users can help produce better designs.
3.5.1 Recognizing User Goals
Most of the time, users‟ goals aren‟t what we expect them to be. Most commercially
available software today fails to meet user goals. In fact, they routinely cause users to
make mistakes or bore users with navigational medium [1, p.12]. Software designed
and built to achieve business goals alone tend to fail more often than not. Thus,
personal goals of the users will need to be addressed.
3.5.2 Identifying Requirements
Most novice computer users will know of the frustration and disappointment when
learning a new interface. On the other hand, many experienced users find themselves
equally frustrated because the software treats them like beginners [20]. Thus, it is in
the best interest of interface designers to know who they are designing for before
starting the development process. Identifying user requirements involves finding out
who your users are and what their goals are. On the other hand, Identifying the
functional requirements of the system will involve defining the dynamic aspects of the
interface and can be done by following design heuristics such as the one described in
Section 3.6.
3.5.3 Qualitative Research
Qualitative research helps the designer identify patterns of behavior among a group of
users more quickly than quantitative approaches. It also helps the designer
understand the context and constraints of the intended software in more useful ways
than quantitative research. While quantitative research helps us understand the
technical, business, environmental and social aspects of new and existing products.
Qualitative research provides credibility and authority to the designer by allowing him
to make more informed decisions about design issues that would otherwise be based
on assumptions or personal preference [1, p.40]. There are many different types of
qualitative research methods that can be used to facilitate the design of user
interfaces. The qualitative research methods which are likely to be used when
designing our VPL interface are described below.
19
1. User Interviews and Focus groups
User interviews facilitate interaction and discussion about problems and issues
about the software [4]. With focus groups, the interaction between the users may
raise additional issues, or identify the common problems that they experience.
The information that designers are interested in gaining through this method is
how the software fits into the users‟ workflow. This method will be used in this
project to gain basic understanding of the users‟ tasks. Their patterns of behavior
will help the designer to determine what the user is expecting from the software
being designed.
2. Field and User Observations
Most people are incapable of accurately assessing their own behaviors [3],
especially outside the context of their usual activities. Interviews performed
outside of context to the situations which designers wish to document will yield
less accurate results. Thus it is sometimes better for the designer to personally
observe the users‟ behavior rather than asking them about how they think they
might behave. This method involves ethnographic studies, which is defined as the
systematic and immersive study of human cultures. However, instead of
understanding the behaviors of an entire culture, the goal here is to understand
the behaviors of people interacting with software interfaces.
3. Surveys and Questionnaires
These informal interviews are usually carried out after the interface has been
designed to assess the users‟ satisfaction [4]. Similarly, questionnaires are written
lists of questions that are given to users to fill out for feedback on the software.
Questionnaire often require the user to be more sincere and patient.
4. Contextual inquiry
Contextual inquiry is a structured interview method that helps designers to
understand the context in which a user task is being performed. The designer can
not only determine the users‟ opinions and experiences when using the software,
but also their motivations and context. It often involves observing the users and
questioning them while they‟re performing tasks in their own environments.
Contextual inquiry is a combination of immersive observation and directed
interview techniques and forms a solid theoretical foundation of qualitative
research [1, p.45].
20
5. Journal sessions and self-reporting logs
This involves the users having to write their own observations and comments to
evaluate the software‟s usability across long distances. The drawback of not being
able to observe the expressions of the users or interviewing them personally is
inherent in these methods. Thus, it is only used to gather preferential information,
rather than empirical information [4].
3.6 Design Heuristics
Design heuristics are a set of guidelines or principles that advise the designer about
good and bad design solutions based on practical experiences. A heuristic evaluation
is useful in the early stages of design in this project as it does not require user testing.
It can also help the designer to identify possible usability issues regarding the user‟s
interaction with the software. There have been many people who have proposed
guidelines for designing interactive systems which aims to provide effective usability.
The most popular of these guidelines are the Ten Usability Heuristics by Jakob
Nielsen and the Eight Golden Rules by Ben Shniederman. However, these two
guidelines are so similar that we can describe one and relate it‟s concepts to the other.
Thus, it is only required for the designer to be familiar with one of them. The section
below describes the set of heuristics which was used in this project [5].
Visibility of system status
The interface must allow the user to identify the current state of the system and
keep the user informed about the range of possible actions through appropriate
feedback. An example which people are most familiar with is the blinking cursor
in a word processor. It tells the user where text will appear next on the page
when they type.
Flexibility and efficiency of use
The designer should consider tracking usage information and aim to provide
direct access to most-used functions directly. Shortcuts and favourites are the
most common examples used by most software to improve efficiency.
Aesthetic and minimalist design
Dialogues and menus should not contain irrelevant information. It is important
that the design remains minimal and simplistic so that the user will not be
intimidated of its use.
21
Match between system and the real world
The system should communicate to the user with language, words, phrases and
concepts that are familiar to the user instead of using system-oriented terms. It
must follow real-world conventions, allowing information to be displayed in
logical order and sense from the user‟s perspective. An example of this can be
seen on iTunes, where media is organized as a list which the user understands.
Figure 9: iTunes media library
User control and freedom
Users must be able to quickly navigate through different system states at will. If
they arrive at an unwanted state, they must be able to easily revert back to the
previous state. An example which most people are most familiar with is the “back”
button on your web browser. It allows the user to go back to the previous web
page when they click on a wrong link.
Figure 10: The “Back” button on the Google Chrome web browser
Consistency and standards
Make sure the application is internally and visually consistent. Be consistent with
the user‟s general life knowledge and experience wherever possible. Users
22
should not have to wonder if different terms, actions, or states mean the same
thing. A popular example is the “save” button. A conventional icon is used for the
“save” button on any editing software across all platforms.
Figure 11: The conventional “save” icon
Error prevention
Eliminate error-prone conditions and states to prevent errors from occurring. The
interface must consistently check for sates which might lead to user errors and
lead users away from them. This is even better than having good error messages.
A good example here is Google‟s autocomplete feature. It causes spelling
mistakes to occur less frequently.
Figure 12: Google‟s autocomplete feature
Recognition rather than recall
The system should avoid burdening the user with memory load. The user should
not have to remember information about the system‟s state or dialog. Options
and controls to change the system state should always be visible. A good
example is a telephone‟s redial button, which lets a user call a previously dialled
number without needing to remember it.
Figure 13: Redial button on conventional telephone
23
Help users recognize, diagnose, and recover from errors
Error messages should precisely indicate and communicate the problem to the
user and constructively suggest a helpful solution. It may be necessary for the
designer to provide help and documentation to troubleshoot certain errors as
well.
Figure 14: Windows error message suggesting recovery solution
3.7 Design Principles
Principles and elements of design that describe the fundamental ideas, concepts, and
techniques will be used in this project to support the practice of good visual design.
More often than not, good designers will consider only a subset of design principles
and disregard the rest. When they do so, however, there is usually a compensating
merit or benefit that is attained at the cost of the principle violation. The section below
describes the most well-known and most successful design principles and design
elements used for interaction design. They will be used as guidelines in this project to
prevent usability issues and problems that commonly occur in user interfaces.
3.7.1 Goal-oriented design
This design technique presents a set of methods to address the needs of
behaviour-oriented design that focuses on the user‟s goals [6]. It is a subset of UCD
which aims to better understand user goals and determine how to design tasks that let
users achieve these goals with the least effort. Goals are not the same as tasks. A
goal is an end condition, an outcome that the user aims to achieve from using the
software. A task is an intermediate step that a user would take to achieve that goal.
For example, if the goal of a GPS user is to get directions to a particular destination,
the GPS will have tasks that require the user to type the address of the destination,
press the search button, and select from a list of possible routes.
24
3.7.2 Affordance
Affordance is a property or a visual element of an interface that gives the user
information on how to interact with it. If the affordance of a user interface is
perceptually obvious, it is easy to know how to interact use it. For example, it is easy
to tell how to open a tap whether it is fitted with a knob or a lever because of
affordance.
Figure 15: Knob tap and lever tap showing affordance in usability
3.7.3 Mapping
Mapping refers to the relationship between controls and their effects in context. It
ensures that there is a logical correlation between visual elements and the interactive
objects controlling them. Users will tend to look for a direct relationship between the
object and the interface controlling it. The example below shows the mapping of a
kitchen stove and the controls, it is easy to determine which knob will turn on which
stove.
Figure 16: Kitchen stove with mapping controls
3.7.4 Performance
Performance in interaction design is the way in which humans interact with an artefact,
whether it is a user interface or an interactive object. Allowing direct contact with the
artefact will encourage users to interact with it. For an interactive object to have good
performance, the designer will need to construct a set of rules to find a common
ground for interaction between the interface and the user.
25
3.7.5 Rule-making and interaction constraints
This process of rule-making and adding constraints to the interface helps the designer
to ensure that the software is used in the intended way. Incorrect interactions with the
interface must not be allowed by the user or provide no feedback. This will lead to the
user realizing that the particular interaction doesn‟t work and try something else. Here
is a good example of an interaction constraint: the user is unable to click the “Upload”
button before a file has been chosen to upload.
Figure 17: The file uploading tool for Vula
3.7.6 Cognitive thinking
Cognitive psychology is the study of how the human mind works [7]. Information about
the user‟s mental processes can help the designer to develop interfaces with
improved usability. Cognitive analysis can be used in all three stages of an iterative
design model to determine the information they seek, perceive, and remember. For
system testing, cognitive techniques can be used to measure the interface‟s ease of
use and ease of learning. The diagram below shows the mental concepts that are
associated with cognitive psychology.
Figure 18: Concepts related to cognitive psychology
26
3.7.7 Technology Probing
Technology probes are simple adaptable flexible technologies that have three main
goals: A social science goal, an engineering goal, and a design goal. The social
science goal focuses on collecting data about the users of technology in a real world
setting. Technology probes reject the strategy of introducing technology that only
gathers unbiased ethnographic data. Instead, designers must assume that technology
probes influence the behaviour of users. The engineering goal is to field test the
technology. The design goal is to inspire users and researchers to think of new
technology ideas by reflecting upon the usage of the probes [8]. A well designed
technology probe balances all three goals. Technology probes may be used in many
different implementations and settings. The only criteria is that they must be different
enough from commonly available technologies, that they provoke people to consider
how they do or do not fit into their lives. Technology probes also help to reveal
practical needs and playful desires for real life scenarios to motivate discussion during
user interviews and workshops. The distinguishing features of technology probes are
functionality, flexibility, usability, logging ability, and their use within the design phase.
Functionality
While prototypes may have many layers of functionality and address a range of
needs. Technology probes should be as simple as possible.
Flexibility
Technology probes should not offer many functionality choices, but should be
designed to be open-ended with respect to usability. Users should be
encouraged to re-interpret them and use them in unexpected ways. In general,
prototypes are mainly focused on purpose and expected use.
Usability
In prototypes, usability is a primary concern and the design is expected to
change during the usage period to accommodate input from users. Technology
probes are not primarily focused on usability and do not change during period of
use. A deliberate lack of certain functionality might be chosen to provoke users.
Logging Ability
Technology probes collect data about usage and help to generate ideas for new
technology. This allows researchers to create visualizations of the use of probes
to be discussed by both users and designers. Prototypes can collect data as well,
but it is not their primary goal.
27
Design Phase
Technology probes should be introduced early in the design phase as a tool for
challenging existing ideas and influencing future designs. Prototypes appear
later in the design process and are improved iteratively rather than thrown away.
A drawback of using technology probes is that they are only effective when there
are no significant problems with the implementation of the technology. When a
technology probe fails to function, it disrupts the flow of the design process.
3.7.8 Visual noise and clutter
The designer should avoid visual noise and clutter on the interface as the human
brain is easily distracted by visual elements which stand out. Visual noise can be
created by inappropriate use of colour, textures, and typography on the interface.
Interfaces become cluttered when designers attempt to provide excess functionality in
a constrained space, resulting in visual elements and controls that interfere with each
other, and causing the cognitive load of the user to increase [1, p.227].
Figure 19: An example of a cluttered interface
3.7.9 Contrast and Layering
Contrast provides a means of indicating the most or least important elements of an
interface‟s visual hierarchy [1, p.228]. Proper use of contrast will result in visual
patterns and elements that orient themselves rapidly. The designer should provide
visual contrast between active and inactive elements of the interface. Interfaces can
also be organized by layering visual cues for individual elements in the background on
which the active elements rest. The figure below shows contrast between active and
inactive elements by using layering.
28
Figure 20: Layered windows
3.8 Usability Evaluations
Evaluation is an essential part of the design process, an evaluation technique is
chosen depending on the type of results required. The designer must know exactly
what information will be useful to the design so that he can focus on attaining them.
The challenge for the designer is not just collecting user data, but making sense of it
and turning it into design solutions. There are many types of evaluation techniques
and guidelines that an interface designer can use, each one focuses on obtaining
different types of results. The structure of the design sessions are also important, as
the user must also be adequately informed about what they are expected to do so that
they would feel at ease when asked to perform tasks. It is also important to let the
users know that it is the software that is being evaluated and not their intelligence, so
that they would be more willing to provide the designer with information.
3.8.1 Heuristic Evaluation
Heuristic evaluations can form part of an iterative design process by having a small
number of evaluators examine the interface and judge its usability. Heuristic
evaluation is performed by inspecting the various dialogue elements of a user
interface and comparing them to a list of recognized usability principles or heuristics.
The results from using a heuristic evaluation method is a list of usability problems that
reference to heuristics (See 3.6) that were violated by the interface in the opinion of
the evaluator. Heuristic evaluation does not provide ways to provide a systematic way
to fix usability problems. However, it will be fairly easy to create a revised design
according to the results generated from the evaluation. In principle, it is possible to
perform heuristic evaluations on low fidelity prototypes that are paper based. Thus, it
is suited for use in the early stages of the design phase with the purpose of identifying
usability problems before implementation [9].
29
3.8.2 Cognitive walkthrough
A cognitive walkthrough is also a usability evaluation technique like heuristic
evaluation, but its emphasis is on tasks (See 3.7.1). The aim of this evaluation method
is to identify the users‟ goals, and determine the problems that they face while using
the interface when attempting to achieve those goals [28]. A cognitive walkthrough is
performed by giving users a goal or a use case. Then, for each action that the users
take to complete a task, the designer will ask two questions: 1) Does the user know
what to do at this step? 2) Is the user aware of the progress being made towards the
goal?
3.8.3 Expert Reviews
Expert reviews can provide quick and valuable insight into usability problems and can
often identify different problems than tests with end users. However, they may miss
important issues and find issues that may not present difficulties with end users. Thus,
expert reviews should complement formal user studies such as heuristic evaluations
or cognitive walkthroughs [10]. Expert reviews will be conducted with human access
points in the low prototyping phase of the design process (See 3.4.1).
30
Section B: Deployment
This section contains a detailed report of the development phase that was deployed
based on the theory gathered in the research phase of this project (Section A).
Chapter 4 – Requirements Gathering
Based on the theory and principles described in the previous section, the full design
and development phase mentioned in Section 3.4 was deployed. This chapter
documents on how the requirements of the system were gathered and the design
solutions that attempt to solve them. It also describes how a first conceptual model
was formed based on the design solution.
4.1 Defining System Requirements
As highlighted earlier in this report (Section 3.5.2), It is important to know who the
system‟s intended users are before the design process begins. Furthermore, it is also
essential at this stage to define the functional requirements of the system. The reason
is because the system‟s functionality will affect the structure of the user interface
which, in turn, can impact on usability. For this project, we will focus on user interface
requirements rather than system functionality requirements.
4.1.1 Who are the Users?
The user group that this system was initially intended for are artists and designers
who have little to no experience in computer programming. The users that contributed
to this project‟s design process are a class of fifteen third year art students who chose
an Arduino development course as an elective. The advantage of selecting this user
group was that the designer was able to hold the design sessions at the end of their
Arduino lecture period with the consent of their lecturers (who also provided valuable
expert feedback for the designs). The design sessions took place in their lecture
venue at the Michaelis Undergrad Computer Lab on Hiddingh Campus. It is also
worth mentioning that since they were receiving professional training on how to use
Arduino during the project‟s design phase. Their view as Arduino users would have
changed by the end of the course. As the design phase took place over several days
for the duration of their course, their change from being complete beginners to
intermediate Arduino users will have an impact on the project‟s usability evaluations.
31
4.1.2 Usability requirements:
The effectiveness of an interface‟s usability primarily depends on whether or not its
visual elements can be intuitively understood by most users. Designing these visual
elements require the use of visual abstraction techniques in order to define graphical
representations of real objects and functions. These visual objects and functions must
be based on the metaphor of the environment it represents. For example, the devices
and components of an Arduino kit can be visually presented by icons on the interface.
Users should be able to tell intuitively which icons represent which component or
device.
The interactive environment:
The programming environment for the user must be consistent and complete. It
must provide a uniform framework for different aspects of visual interaction.
Before defining any graphical elements, we first need to consider how the user
will interact with the system. This constitutes to the overall flow of the system‟s
task building process and gives the user a sense of progression has they
complete each task to reach their goal. The only ways to determine how a user
interacts with the system is by asking the users themselves through HCI
research or create a technology probe and observe what they do with it. The
purpose of the technology probe is gather requirements from the users
themselves so that we can define effective visual elements and interactive
functions which can be easily interpreted by the user and the software.
Icons and Labels
As previously mentioned, the icons and labels that visually represent each of the
Arduino‟s devices and components must be intuitive so that most users will be
able to discern what they are. We have used conventional icon standards where
applicable to represent each hardware component.
Figure 21: Conventional icons representing Arduino hardware components
32
4.1.3 Functional Requirements
Functional requirements define how the system behaves when a user completes an
action or task. They are the background processes that the software needs to perform
so that users can achieve their goal with least effort (See 3.7.1). The functional goal of
this system is to replace the coding phase of the Arduino development process (See
2.2.1). In order for the system to be defined as a visual programming language, it will
need to allow visual interactions with the user so that they can program their Arduino
projects without coding. This means that the design, layout, and behaviour of all visual
elements that make up the user interface environment must be clearly defined so that
the software‟s visual compiler can easily understand them.
Generating Source Code
Ultimately, the completed system is required to generate an output file that
contains source code for an Arduino program. This code must be the correct
textual representation of the user‟s sketch which was visually defined using the
VPL interface. Building such a system requires in-depth research in other fields
of visual programming language design. The research will need to involve topics
such as data abstraction [21], grammar parsing [22], and control flow [23].
Regrettably, these topics are beyond the scope of this project as it only focuses
on designing the visual interaction and user interface aspects of the system.
Visual Programming
Since the system is required to generate Arduino source code, we will need to
develop new visual programming techniques to replace coding. To do that, visual
abstraction techniques are needed to represent the functions that would
otherwise be defined by textual syntax. These visual abstractions do not have to
be direct visual representations such as the block language used in ModKit or
OpenBlocks (Section 2.3). The abstractions can also be a set of tasks that the
user is required to complete which then generates the intended code.
Figure 22: Arduino program that blinks an LED
33
For example, the development process to create an Arduino project involves
defining which pins on the microcontroller to use, then programming the
relationship of those pins using the Arduino IDE and finally connecting the
components which make use of those pins. The conventional structure of an
Arduino program includes a setup method where the user defines pin modes and
pin numbers, and a loop method where instructions inside it will run repeatedly.
For the blink example (Figure 22), the pin number used is 13 and the mode is set
to output. The instructions given in the loop method tells the microcontroller to
output a „High‟ and a „Low‟ voltage to pin 13 with a 100ms delay in between each
of them, effectively blinking the LED connected to the pin.
The interface must allow the user do all of the above without typing code.
4.2 Design solutions and rationale
This is a first attempt to provide design solutions to meet the requirements gathered
so far. At this stage, the designer has not yet met with any users, so the requirements
gathered so far were all defined from assumptions made using design heuristics.
Once these solutions have been made into a low-fidelity prototype, a heuristic
evaluation will be conducted to gather more information about the usability of the
design. The solutions given below will attempt to solve the interface design problems
described in Section 4.1.
4.2.1 Navigation
This solution attempts to solve the usability requirement of having a consistent
interactive environment. The user must be able to navigate efficiently through the
features and facilities of the program. This means that the user must always be aware
of the current state of the system and know what to do next.
The solution is to use a layered navigational standard for buttons, menus, frames, and
windows. There are many advantages associated with using software interaction
standards. The list below describes the design rationale for using these standards.
1. Familiarity – If it‟s a conventional software standard, there is a high chance that
the user will be familiar with standard interactive functions such as clicking visual
elements that look like buttons or right-clicking on objects to reveal menus and
options.
34
2. Learnability – The software will be relatively easy to learn since the users will
intuitively know most of the functions already. The users will not have to learn a
new way to navigate through the software interface. They only have to learn the
new functionalities that the software provides.
3. Implementation – It is easier for designers to implement user interfaces that use
conventional navigation standards. Since there are already existing libraries and
guidelines in place, the designer won't have to define new visual elements and
functions to support navigation.
4.2.2 Visual programming solution
This solution attempts to define a new visual programming technique which will let the
user program the Arduino microcontroller without coding. The programming interface
must make use of different visual interaction techniques to achieve this.
The solution is to use drag and drop object manipulation. Since we are attempting to
design a new way to 'build' programs, and the act of 'building' means picking up
components and putting them in place, we can represent this process visually on a
computer by using a drag and drop feature. A more accurate description of drag and
drop is "clicking on a visual object and moving it to imply a transformation". The list
below describes the design rationale for using a drag and drop interaction technique
for visual programming.
1. User Efficiency – Drag and drop is efficient because it combines the two command
components of mouse use (selecting and moving) into a single user action that is
smooth and accurate.
2. Metaphorically intuitive – This means that the user will be able to metaphorically
relate dragging and dropping icons representing Arduino components on the
computer to building the actual circuit in real life by moving components into place.
3. Visual Feedback – Visually indicating that a drop-zone will accept the object being
dragged is known as active visual hinting. This hinting, together with the dragging
motion itself, gives the user continuous feedback on their progress.
4.3 Interface Design ideas
The use of rough sketches can help the designer to build a mental model of how the
interface might look when finished. It also allows the designer to explore different
ideas and concepts quickly. Sketches are also low-fidelity prototypes, but they don't
show any functionality of the system; they only represent a conceptual model of what
the interface might look like.
4.3.1 Initial design sketch
From the features and solutions mentioned above, a sketch was created to show a
conceptual model of the interface. From this conceptual model, the designer was able
to explore new ideas and features that might improve the interface and build upon
them by making a low-fidelity prototype for expert reviews and evaluations.
Figure 23: first conceptual sketch
The first design sketch features a project workspace where the user will interact with
visual components to program their relationships with other components and with the
Arduino microcontroller board. The right hand side features a palette of input and
output devices which the user can drag and drop onto the workspace. The logic
behind this visual programming technique is based on the notion of triggers, where an
input device is set to trigger an output device by connecting them as shown above. To
program the setup function, the user will click on a device that has been added to
the workspace; an options menu will then appear, prompting the user to enter a pin
number and a pin mode.
4.3.2 Improved design sketch
An improved version of the first design sketch was made, showing more detail and
features that weren't accounted for in the first design sketch. This improved sketch will
be used as the model for a paper-based low-fidelity prototype.
Figure 24: improved sketch
The improved design sketch changed the workspace to add user constraints (See
3.6). The constraint involves limiting where input and output devices can be dropped
on the workspace, and defines input and output drop-zones which advise the user on
where input and output devices should be placed. This prevents the user from making
logical errors such as setting an output device to trigger an input device. Another
feature was added to this design that defines the notion of Events.
An Event, in this context, is described by an input device triggering an action from an
output device. This event is represented by a block that appears when the user clicks
on the 'add new event' button in the bottom right hand corner of the workspace. The
menu that appears when the user clicks on a device in the workspace has also been
modified to be simpler and more intuitive, with drop-down boxes that let users select
from a possible set of pin numbers and pin modes instead of typing them into a form.
This improved sketch will be made into a functional paper prototype which will be
presented in the next chapter.
Chapter 5 – Low-fidelity prototyping
This chapter presents a low-fidelity paper prototype based on the design ideas which
were sketched in the previous chapter. It also documents heuristic evaluations
performed by the designer and findings of the technology probing phase.
5.1 Paper prototype
A paper prototype was built based on the sketches shown in Section 4.3. Unlike the
sketches which only present an idea of what the interface might look like, this paper
prototype aims to show the usable functionality of the interface. The low fidelity
prototype was shown to experts to gather their opinions on the design and was also
used to perform usability evaluations based on design heuristics.
The figures below show how the functionality of the intended interface can be
represented using a paper prototype. In particular, Figure 25 (a) shows what the
interface will look like when the user opens the program. Figure 25 (b) shows the
pop-up menu that appears when a user selects an input device after dragging and
dropping it in the workspace.
Figures 25 (a) and (b) respectively
Figures 26 (a) and (b) respectively
Figure 26 (a) shows how the addition of a new event by clicking the 'add new event'
button was represented using the paper prototype, and Figure 26 (b) shows what the
interface will look like with more than one event added.
5.2 Expert Review
Expert reviews can provide quick and valuable insight into usability problems and can
often identify different problems than tests with end users. The paper prototype was
shown to experts who are also the Arduino course lecturers and the human access
points for the art students mentioned in Section 4.1.1. The goal was to get their
opinions of the design from an expert point of view, and to raise any issues that the
designer might have missed.
The experts found the design to be 'interesting' but highlighted some important issues
regarding the design project as a whole. The first issue raised was that Arduino has
too many complex features to be accounted for in the VPL design; it would be very
difficult to design and implement features such as loops for the VPL. The second
issue was that the design could be subjective if shown to users, since users may have
their own ideas for features or interaction techniques.
These issues will be addressed in Section 5.4 after heuristic evaluations have been
performed on the design.
5.3 Heuristic Evaluations and Findings
A heuristic evaluation was performed on the design using the paper prototype. The
goal of this evaluation is to test the usability of the interface before implementation.
Heuristic evaluations were performed by inspecting the interface's dialogues and
features and comparing them to a list of recognized design heuristics shown below:
5.3.1 Evaluation checklist
1. Visibility of system status: Does the interface inform the users of their
progress in the visual program? Does the interface provide feedback for each
user action that was performed on the objects on-screen?
2. Flexibility and efficiency of use: Does the interface provide shortcuts and
other usability performance enhancers for expert users?
3. Aesthetic and Minimalist design: Does the design contain any irrelevant
information or dialogues which may intimidate the user or cause any
ambiguity regarding its use?
4. Match between system and real world: Does the interface communicate
using language and concepts that are familiar to the user? Is information
displayed in a conventional and logical order from the user's perspective?
5. User Control and Freedom: Does the interface allow the user to efficiently
navigate through different system states at will? Can they easily revert to the
previous state?
6. Consistency and Standards: Is the interface internally and visually
consistent? Are the various menus and screens consistent with one another?
7. Error prevention: Does the design of the interface prevent user errors
from occurring? Does the design check for error-prone conditions and ask
the users for confirmation before they commit to an action?
8. Recognition rather than recall: Does the interface burden the user with
memory load? Are options and controls that change the system's state
always visible to the user?
9. Help users recognize, diagnose, and recover from errors: Do error
messages accurately indicate and convey the problem to the user? Do error
messages suggest a helpful solution?
5.3.2 Evaluation results
The results of the heuristic evaluation conducted on the paper prototype are listed
below:
1. Visibility of system status: The interface provided some information to
show users their current progress in the visual program, but it would be
impossible for them to know if the program was complete. The interface
provided feedback for only some user actions, not all user actions.
2. Flexibility and efficiency of use: The interface does not provide any
shortcuts to enhance user performance at all.
3. Aesthetic and Minimalist design: With the current features of the paper
prototype, the interface did not contain any ambiguous dialogues or irrelevant
information.
4. Match between system and real world: The interface uses concepts that
may be unfamiliar to users who are not experienced with using Arduino. The
concepts and words used in the interface were based directly on the Arduino
platform.
5. User control and freedom: The interface does feature an 'undo' button, but
this may prove to be difficult to implement. The user will be able to navigate
through the different states of the system easily because the interface uses a
standard layering navigation technique which does not change the view of
the screen (See 3.7.9 and 4.2.1); it only redirects the user's attention to
different parts of the screen using overlaying menus and panels. There is an
issue, however, of how to close these pop-up menus and panels if the user
did not intend to open them.
6. Consistency and standards: All screens, menus, and panels are internally
and visually consistent. However, the icons that were used to represent the
I/O components for dragging and dropping may not be consistent with
Arduino standards.
7. Error prevention: The design does prevent some errors from occurring, but
not all error cases had been considered at the paper prototype stage.
8. Recognition rather than recall: The interface does not burden the user with
memory load. However, not all information about the system's current state is
visible to the user, and controls that change the system's state are not
always available to the user.
9. Help users recognize, diagnose, and recover from errors: The error
messages that the current prototype features only indicate the error to the
user; they do not suggest a helpful solution for recovering from it.
5.4 Design Implications
Based on the issues raised in the expert reviews and the findings gathered from
heuristic evaluations, a list of design changes and added features was compiled.
These design implications will be used to build a technology probe that will be
presented to the users for the first time to gather further user requirements and other
qualitative information. Before describing the changes and features to be added to the
design, we first need to address the issues raised by the expert review.
5.4.1 Addressing expert review issues
The first issue was the fact that Arduino has too many complex features and functions
to account for; it would be impossible to implement them all in a VPL. To address this
issue, it was decided to implement only a small subset of the platform's functionality
and features. In
particular, the functionality implemented in the paper prototype was the simple if-else
statement, represented by the notion of trigger events described in Section 4.3.1. This
feature will be the main focus for the remaining design process.
The second issue raised by the experts was that the design shown to the users may
be subjective to the designer. The solution to address this issue was to implement a
technology probe to show the design to users instead of using a high fidelity prototype.
The reason is that technology probes reject the strategy of introducing
technology that only gathers unbiased ethnographic data (See 3.7.7). Section 5.5 will
describe in detail how the technology probe was implemented and how it differs from
using a high-fidelity prototype.
5.4.2 List of design changes and features added
Here we present a list of ideas for possible features and changes to add to the design.
This list is based on the results of heuristic evaluations:
1. A checklist to show the current progress – users will want to know how far their
program is from being completed. The checklist will advise users on tasks that
have yet to be completed, such as assigning a pin number for a device.
2. Shortcuts to complete certain tasks – intermediate users of the software will start
to look for faster ways to complete tasks so that they can reach their goal with less
time and effort.
3. Better icons that represent components – The icons that represent the I/O
components must be intuitive to the user interacting with the interface. Standard
icons aren't necessarily intuitive.
4. Error prevention and error messages – All cases of error prevention will now be
considered and error messages will now suggest helpful solutions to the user to
recover from those errors.
5.4.3 List of design ideas not yet implemented
1. The undo button – even though it is a feature of the system, it was difficult to
implement because it required the system to track every action completed by the
user and revert to the previous state when the button was clicked. The button will
still be visible on the interface, but without functionality.
2. The 'add new event' button – This feature was not implemented yet because the
designer prioritised implementing the main function of the interface: the visual
programming tasks that the user needs to complete.
5.5 Technology Probing
Technology probes are simple, flexible, adaptable technologies that help to reveal
practical needs and playful desires in real-life scenarios and to motivate discussion
during user interviews and workshops (See 3.7.7).
5.5.1 Why use a technology probe?
There are four main reasons that a technology probe was chosen to be used in this
part of the design phase. They are each listed with a description below:
1. To prevent showing users too many subjective design ideas – As mentioned in
Section 3.7.7, technology probes reject the strategy of introducing technology that
only gathers unbiased ethnographic data. This is why high-fidelity prototyping will
only be used later on in the design phase (See Chapter 6).
2. To collect data about the users in a real-world setting – A minimalist version of the
design will gather more unbiased user information than a more prescriptive
high-fidelity prototype would.
3. To inspire the designer and users to think of new technology ideas – New
technology ideas and design alternatives can be more easily considered by
reflecting on the usage of the technology probe.
4. To field test the design idea – Even though technology probes cannot include
many features, the designer will still want to know what users think of their design
and gather qualitative information.
5.5.2 The implemented probe
Here we present the technology probe that was shown to the users in the first
design session meeting. There are five distinguishing features of technology probes.
We will use these features as guidelines to build our own technology probe for this
project.
Functionality – They must be as simple as possible.
Flexibility – They must be expected to be used in unexpected ways.
Usability – They must be open-ended to provoke more ideas.
Logging ability – They must provide a means for the designer to collect useful data.
Use in design phase – They must be used early in the design phase as a tool for
challenging design ideas and influencing future designs.
Using the first four guidelines listed above and the design implications mentioned in
Section 5.4, a technology probe was implemented using Java. The figure below
shows the implemented technology probe with an interface that only supports the
drag and drop interaction feature. This was the medium that the designer used to
present ideas to the users for the first time.
Figure 27: Screenshot of technology probe implemented in Java
5.5.3 Probing the users
The next step is to see how users reacted when the technology probe was presented
to them. This step marks the beginning of the co-design process with the users. The
information which the designer hopes to gain from this step is qualitative information
about what the users do with the probe. More specifically, if the users interacted with
the probe in a way that the designer expected them to, then the idea of using a drag
and drop feature as a visual programming technique would have proven to be
successful. Alternatively, if the users interacted with the probe in a way that the
designer did not expect them to, then the designer would need to gather other
valuable information about the usability of the design. Both of these cases would
produce the qualitative results required for the designer to move forward in the
design process.
Two types of qualitative research methods were used to gather information in the
technology probing sessions: User Observation and Contextual Inquiry. These
sessions took place at the students' lecture venue in the Michaelis Undergrad
Computer Lab on Hiddingh campus with the consent of their lecturers. The structure
of the technology probe design session with each user was as follows:
1. The user was informed about what the designer‟s research topics are and what
their role was in the design sessions.
2. The user was presented with the technology probe and asked what they think the
program does from looking at it.
3. Next, they were asked to interact with the design in any way they liked until they
were satisfied.
4. They were then asked to perform each interaction again, but this time they must
tell the designer what they intended to do at each step.
5. After each step, they were asked why they thought the action that they performed
would produce any feedback from the software.
6. Lastly, they were told what the software is intended to do and asked if they thought
it was an effective solution.
7. For those who answered “No” in the previous step, they were asked if they could
think of a better way to program visually.
This process was performed with five different users with each session taking
between five and seven minutes to complete. The designer had planned to perform at
least seven of these sessions to get more data, but due to time constraints only five
could be performed.
5.5.4 Results and findings
Quantitative as well as qualitative data of the design session was collected and
analysed. The results are as follows:
Quantitative information:
- 3 users had a good idea of what the interface does.
1 user had some idea.
1 user had no idea.
- 4 users agreed that dragging and dropping was an effective solution.
1 user believed that it was not the best solution, and suggested a better solution.
- All 5 of the users were able to determine that they were supposed to interact with
the interface by dragging icons around, because the interface suggested that they
do so.
Qualitative information:
- When the one user was asked if he knew of a better way to 'build' programs
visually, he suggested an interesting solution. His solution involves using the same
interface layout as the original design, but instead of dragging and dropping
components into place, the user can click on a component and it would "just go
there". Since the interface has been constrained to let users drag input and output
components into designated drop-zones, why not skip the dragging action when a
clicked component has only one place to go? When the other four participating
users were asked what they thought of this new solution, one user agreed with it
and the other three said that they still preferred the drag and drop style of
interaction. The main reason was that the drag and drop style of interaction lets
them feel like they have more control over the interface.
- Since it was the second day of their course, most of the students were still novice
Arduino users. This means that they did not know enough about Arduino to
evaluate certain features of the design. However, it is likely that their opinions will
change over the duration of the course (See Appendix 1).
- Some of the users tried clicking on the drop-zones in the workspace because they
looked like buttons. This supported the affordance of the interface (See 3.7.2).
Figure 28: Drop-zones shaped like buttons
- Most users tried right-clicking on different areas of the screen to look for
functionality options, but were disappointed when nothing happened.
Chapter 6 – High-fidelity prototyping
This chapter focuses on the high-fidelity prototyping phase of the development
process. This phase makes use of a user-centred design methodology to gather
requirements, refine the prototype, and then evaluate it through an iterative co-design
process.
6.1 Horizontal Prototype
Unlike technology probes, prototypes aim to achieve specific usability goals rather
than finding out what the users intend to do with them [8]. Prototypes were used for
the user-centred design phase to evaluate the usability of the interface. While the
technology probe was used just to gather new ideas, the purpose of a Horizontal
prototype is to provide a broad view of the entire system, focusing on user interaction
rather than functionality [5]. First we look at the design implications that were revealed
from the technology probe (See 5.5.4). Then we show how a high-fidelity prototype
was implemented using Java.
6.1.1 Design implications from probing
Here we present a list of design implications that were derived from qualitative
information gathered during the probing phase:
- Drag and drop is only one of the many effective ways to interact with visual
programs.
- Efficiency does not necessarily mean better user experience.
- Visual cues provoke the user to try different interactions (affordance).
- Most users know that right-clicking on an object is likely to invoke some sort of
functionality.
Next we compile a list of features that will be implemented in the high-fidelity
prototype to be used in user evaluations:
- Right-click functionality: Right-clicking on the I/O buttons will also open the
properties menu prompting users to define pin numbers and I/O modes.
- Progress Checklist: A checklist to update the user on their current progress in the
visual program.
- Properties menu: The properties menu will contain options that prompt the user to
initialize the I/O pin values.
- View source code: The user will be able to view the code that the program
generates after all tasks have been completed.
6.1.2 Implementation
Like the technology probe, the horizontal prototype was implemented using the Java
Development Platform. The implementation of the prototype was based on the
technology probe, but with a lot more added functionality and features.
General Interaction Interface
Figure 29: Screenshot of general interface
The general interface of the system is the main interaction screen of the software,
and the screen that users will be presented with when it starts up. It consists of a
workspace located in the middle of the screen, a task progress checklist at the
bottom of the screen, and a list with separate panels for input and output devices
on the right of the screen.
Events
The notion of Events was defined previously in this context as an input condition
triggering an output response. This notion was created to represent the concept
of an IF statement for a conventional programming language. Users can add
Events to the workspace by clicking on the “add new event” button near the
bottom right corner. This will create a new Event Pane in the workspace that
contains the interaction area allowing the user to start the visual programming
process. More than one event can be added to the workspace.
Figure 30: Interface with added Event
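To make the mapping to an if statement concrete, an Event can be modelled as a plain data object whose completeness check mirrors the tasks the user must finish. This Java sketch is a hypothetical reconstruction; the prototype's actual class design is not documented in the report.

```java
// Hypothetical model of one trigger event: an input condition that, when
// met, fires an output response -- the visual analogue of
// "if (input condition) { output response; }".
class TriggerEvent {
    String inputDevice;    // e.g. "Button"
    int inputPin = -1;     // unset until the user picks one
    String triggerValue;   // e.g. "HIGH"
    String outputDevice;   // e.g. "LED"
    int outputPin = -1;
    String responseValue;  // e.g. "HIGH"

    // True once every property the user must set has been filled in.
    boolean isComplete() {
        return inputDevice != null && triggerValue != null
                && outputDevice != null && responseValue != null
                && inputPin >= 0 && outputPin >= 0;
    }
}
```

A check of this kind is also what a progress checklist can poll to decide which tasks remain outstanding.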
I/O Properties
After the user has dragged and dropped the I/O devices into their respective
drop-zones, the user will be able to open the I/O properties menu by clicking or
right-clicking on an I/O device in the Event Pane. The properties menu will allow
the user to select a pin number using a combo box. To program the behaviour of
the device, the properties menu will either have radio buttons for digital devices,
or a slider for analogue devices.
Figure 31: I/O properties menu
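The menu's controls map directly onto standard Swing components. In the hypothetical sketch below, the pin range (2–13) and the HIGH label are illustrative assumptions rather than the prototype's actual values.

```java
import javax.swing.JComboBox;
import javax.swing.JComponent;
import javax.swing.JRadioButton;
import javax.swing.JSlider;

// Hypothetical builder for the I/O properties menu: a combo box for the
// pin number plus either a radio button (digital) or a slider (analogue)
// for the device's value.
class PropertiesMenuControls {
    final JComboBox<Integer> pinBox = new JComboBox<>();
    final JComponent valueControl;

    PropertiesMenuControls(boolean analogue) {
        for (int pin = 2; pin <= 13; pin++) {  // assumed usable digital pin range
            pinBox.addItem(pin);
        }
        if (analogue) {
            valueControl = new JSlider(0, 1023);      // 10-bit analogue range
        } else {
            valueControl = new JRadioButton("HIGH");  // paired with a LOW button in practice
        }
    }
}
```

Selecting from a fixed set of pins in a combo box, rather than typing into a form, is itself an error-prevention measure: invalid pin numbers simply cannot be entered.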
Progress Tracking Tool
The interface features a progress tracking tool located at the bottom of the
screen. The tracking tool uses an automatically updated checklist to inform the
users of the tasks that have been completed and the tasks that still need to be
completed to finish the visual programming procedure.
Figure 32: Progress tracking checklist
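A minimal sketch of such a checklist follows; the ProgressTracker class and the task names are hypothetical examples, not the prototype's actual entries.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical progress tracker: each visual-programming task is marked
// done as the user completes it, and allDone() can gate features that
// require a finished program.
class ProgressTracker {
    private final Map<String, Boolean> tasks = new LinkedHashMap<>();

    ProgressTracker(String... taskNames) {
        for (String name : taskNames) {
            tasks.put(name, false);  // every task starts incomplete
        }
    }

    void markDone(String taskName) {
        if (tasks.containsKey(taskName)) {
            tasks.put(taskName, true);
        }
    }

    boolean allDone() {
        return !tasks.containsValue(false);
    }
}
```

In the interface, each call to markDone would also tick the corresponding checklist entry on screen, keeping the system status visible (heuristic 1).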
View code button
This button will allow the user to view the generated source code once all the
tasks of the visual programming process have been completed. The "view code"
button is visible under the "Add new event" button, but will be disabled until all the
tasks in the progress tracking tool have been marked as done.
Figure 33: "View code" button enable constraints
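To illustrate what the generated source might look like for a single trigger event, here is a hypothetical generator. The report does not specify the exact code the prototype emits, so the template below is an assumption based on the standard Arduino digitalRead/digitalWrite idiom.

```java
// Hypothetical generator behind the "view code" button: turns one trigger
// event (digital input pin -> digital output pin) into an Arduino sketch.
class ArduinoCodeGenerator {
    static String generate(int inputPin, int outputPin,
                           String triggerValue, String responseValue) {
        return "void setup() {\n"
             + "  pinMode(" + inputPin + ", INPUT);\n"
             + "  pinMode(" + outputPin + ", OUTPUT);\n"
             + "}\n"
             + "\n"
             + "void loop() {\n"
             + "  if (digitalRead(" + inputPin + ") == " + triggerValue + ") {\n"
             + "    digitalWrite(" + outputPin + ", " + responseValue + ");\n"
             + "  }\n"
             + "}\n";
    }
}
```

For example, generate(2, 13, "HIGH", "HIGH") would produce a sketch that drives the device on pin 13 whenever pin 2 reads HIGH, which is exactly the button-turns-on-LED use case given to the users.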
6.1.3 Design limitations
The goal of the designer was to implement a horizontal prototype so that the usability
of the interface could be evaluated. This means that certain features of the interface
that needed background functionality were not implemented. This included the undo
function, the view code function, and the save function.
6.2 Evaluations
A last round of usability evaluations was performed with the students on the
high-fidelity horizontal prototype. The method used for this round of evaluations is a
cognitive walkthrough (See 3.8.2). A cognitive walkthrough is a formalized way of
determining a user's thoughts and actions when they use the interface for the first
time [29].
6.2.1 Structure
This evaluation process took place over the last two days of the Arduino course (See
Appendix 1). A total of seven students participated in this evaluation: four on the first
day and three on the second day. Some of these sessions were conducted with two
users at once (to save time), and the rest were conducted with one user at a time. For
the evaluation sessions, the users were given a set of seven tasks to complete. For
each task, the designer asked the users a set of three yes/no questions about the
usability of the interactions they performed and recorded their response.
The seven tasks that were given to the users to complete:
1. Add a new event – For this task, the users were expected to click on the “add
new event” button.
2. Add an input device – To complete this task, the users were expected to drag
any input device into the dedicated drop-zone.
3. Add an output device – Similar to step 2, but with output device.
4. Set input-pin number – For this task, the users were required to left-click or
right-click on the input device in the Event pane, and select a pin number from
the combo box that appears on the properties menu.
5. Set output-pin number – Similar to step 4, but with output properties menu.
6. Set trigger condition/value – For this task, the users were expected to open
the input properties menu again and either: click on a radio button, or, if the
input device is analogue, move the slider to apply a value.
7. Set output response – Similar to step 6, but with output properties menu.
The three usability questions that were asked between each task:
1. Is the control for this action visible? – This question aims to identify
problems with hidden controls. It can also potentially highlight issues with
context-sensitive menus or controls that are buried too deep in the navigation
system.
2. Is there a strong link between the control and the action? – This question
aims to find problems with ambiguous language or elements. It also highlights
actions that are physically difficult to execute.
3. Is the feedback for the action appropriate? – This question aims to find
problems when feedback is missing, easy to miss, or ambiguous.
After the seven tasks were complete, the users were given a use case scenario to
attempt. The purpose of this part of the evaluation was to test the learnability of the
interface. The use case scenario that was given to the users was: make an Arduino
program that turns on an LED when a button is pressed.
6.2.2 Results
The evaluation sessions each took between seven and ten minutes to complete and
produced some interesting results, which are discussed in detail below.
Results of the evaluation session were recorded for each of the seven users and
analysed. The processed results are documented and presented in tables below.
Task 1: Add a new event
Usability Question: YES NO
1. Is the control for this action visible? 7 0
2. Is there a strong link between this control and the action? 7 0
3. Is the feedback for this action appropriate? 7 0
The interface performed exceptionally well for this task. All the users knew what to do
when they were asked to perform the task. It was fairly obvious since there was a
button labelled “add new event” visible on the screen. They all agreed that the
feedback was appropriate since a large visual element appeared on the screen.
Task 2: Add an input device
Usability Question: YES NO
1. Is the control for this action visible? 6 1
2. Is there a strong link between this control and the action? 5 2
3. Is the feedback for this action appropriate? 5 2
This task was not as obvious as the first task, but almost all the users managed to
figure out the appropriate interaction after pausing for a few seconds to think about it.
There were some users who did not think that there was a strong correlation between
the control and the action and believed they needed more information. Some users
also believed the feedback generated by this interaction was not sufficient. It is also
important to note at this point that three of the seven users also participated in the
technology probing session earlier on in the design phase. This means they were
aware of the drag and drop interaction to complete this task.
Task 3 produced similar user responses to task 2 with the exception that they all knew
what to do in task 3 because of what they had learnt from task 2.
Task 4: Set input-pin number
Usability Question: YES NO
1. Is the control for this action visible? 2 5
2. Is there a strong link between this control and the action? 4 3
3. Is the feedback for this action appropriate? 4 3
This task produced the most interesting results: only two users managed to complete
this task without the guidance of the designer, and they both managed to open the
properties menu by right-clicking on the input device shown in the workspace. No one
attempted to left-click on the devices that "supposedly" looked like buttons according
to the users who participated in the technology probing phase.
Task 5 produced similar user responses to task 4 with the exception that they all knew
what to do in task 5 because of what they had learnt from task 4.
Task 6: Set input trigger value or condition
Usability Question: YES NO
1. Is the control for this action visible? 5 2
2. Is there a strong link between this control and the action? 4 3
3. Is the feedback for this action appropriate? 5 2
Here the users were expected to select a radio button from the properties menu if the
input device they selected was digital, or a slider if the device was analogue. The
designer initially expected the interface to perform poorly here, as it did in task 4, but
the users had already seen this option when they opened the menu in task 4, so they
knew where to find it. However, not all of them agreed that this action had a strong
connection with the control that manipulates it. Another interesting finding was that
some of the users did not know the purpose of the task tracker at the bottom of the
screen, or did not notice that it had any functionality at all. This could be one of the
reasons that some of the users reported insufficient feedback for some of the tasks.
Task 7 produced similar user responses to task 6 with the exception that they all knew
what to do in task 7 because of what they had learnt from task 6.
The next set of results came from the use case scenario given to the users, which
required them to perform all seven tasks without the guidance of the designer. The
use case was to make an Arduino program that turns on an LED when a button is
pressed.
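Hand-written in Arduino's extended C/C++, this use case comes down to a single IF statement inside loop(). The sketch below is only an illustration of the code the users were spared from writing: the pin numbers are assumed wiring, not values from the report, and the Arduino API calls are stubbed here so the logic can be checked without a board.

```cpp
#include <cassert>

// Minimal stubs for the Arduino API so the logic can run off-board.
// On real hardware these constants and functions come from Arduino.h.
static const int HIGH = 1, LOW = 0, INPUT = 0, OUTPUT = 1;
static int simulatedButton = LOW;  // test double for the push button
static int ledState = LOW;         // records the last value written to the LED pin

void pinMode(int, int) {}                         // no-op off-board
int  digitalRead(int)  { return simulatedButton; }
void digitalWrite(int, int value) { ledState = value; }

static const int BUTTON_PIN = 2;   // assumed input-pin wiring (task 4)
static const int LED_PIN = 13;     // assumed output-pin wiring (task 5)

void setup() {
    pinMode(BUTTON_PIN, INPUT);
    pinMode(LED_PIN, OUTPUT);
}

// One pass of loop(): the IF statement the visual program represents.
void loop() {
    if (digitalRead(BUTTON_PIN) == HIGH) {
        digitalWrite(LED_PIN, HIGH);   // button pressed -> LED on
    } else {
        digitalWrite(LED_PIN, LOW);    // button released -> LED off
    }
}
```

On actual hardware the stubs would simply be replaced by Arduino.h, with setup() and loop() unchanged; each of the seven evaluation tasks maps onto one element of this sketch (choosing the input and output devices, setting their pin numbers, and setting the trigger condition).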
- 5 users managed to perform the entire programming process without the aid of
the designer.
- 2 users managed to perform most of the tasks by themselves, but required a few
hints from the designer.
In general, the interface performed quite well in these usability evaluations. The
results and findings gathered from this evaluation session showed that the interface
has a high degree of learnability and good mapping, but lacks affordance and
feedback in certain areas.
6.2.3 List of required changes
Following the results and usability information gained from the evaluation session.
The next iteration of the prototype will need to focus on affordance and interaction
feedback.
- Add visual cues to let users gain access to the properties menu for I/O devices
more easily.
- Upgrade the task tracking tool to be more visible to the users, especially after
each task has been performed.
- Improve the feedback generated by the interface when a drag and drop action is
performed.
- Use a different visual abstraction to show the relationship between input and
output devices rather than just a picture of an arrow.
Next we will try to implement these proposed changes to produce a final high-fidelity
prototype of the interface.
6.3 Final Prototype
The main focus of the final prototype is to improve usability by implementing the list of
changes recommended in the previous section. Using the list as guidelines, the
necessary changes for the final prototype were implemented using Java with the
process documented below.
6.3.1 Changes made
Events Pane
Figure 34: New events pane
The most significant changes made to improve the design were done on the
events pane. The first noticeable change is the visual abstraction between the
input and output devices: what used to be just an arrow is now an image of an
Arduino microcontroller, which improves the interface's mapping by representing
the real-world relationship between the hardware devices. The other significant
change in the events pane was to the input/output drop-zones. The drop-zones
themselves used to be the buttons that call the properties menu; now the
drop-zone and the button have been separated, and the control to invoke the
properties menu is clearly visible to the user. This increases the level of
affordance of the interface compared to the previous design.
User-Progress tracking tool
Results from the evaluations showed that some users did not know the
functionality of this tool or did not notice it updating after each completed task. The
challenge was to make the tasks that still needed to be completed more visible to
the user. The change made to the user-progress tracking tool was to invert the
visual feedback for completed tasks. The old design had the tasks greyed out in
the beginning, and as the user completed each task, its checkbox would be shown.
Figure 35: Old progress tracking tool
The new design has all the uncompleted tasks visible to the user, and greys them
out as they are completed. This improves the visual feedback received from
completing tasks, and it also makes more logical sense.
Figure 36: New progress tracking tool
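The inverted rule can be sketched as a small mapping from progress to display style. The names below are illustrative only and are not taken from the prototype's actual Java source.

```cpp
#include <vector>

// Sketch of the inverted feedback rule: tasks already completed are
// greyed out, so the remaining work stays visually prominent (the old
// design did the opposite, greying out the pending tasks instead).
enum class Style { Prominent, Greyed };

std::vector<Style> trackerStyles(int tasksCompleted, int totalTasks) {
    std::vector<Style> styles;
    for (int t = 0; t < totalTasks; ++t)
        styles.push_back(t < tasksCompleted ? Style::Greyed
                                            : Style::Prominent);
    return styles;
}
```

With two of the seven tasks done, the first two entries come back greyed and the remaining five prominent, which is exactly the emphasis the new design is after.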
Overall interface
In general, the overall layout of the interface did not change much as a result of
the modifications made. Only certain visual elements needed to be changed to
improve the system's usability. An overall view of the final prototype can be found
in Appendix 2 of this report.
6.3.2 Functions not implemented
The features that were not implemented in the final interface design were mostly
functional features such as the Undo function (found under the Edit tab in the menu
bar), the Save function (found under the File tab in the menu bar), the View code
function (see 6.1.2), and shortcuts (mentioned in 5.4.2). These features were not
implemented because they were either beyond the scope of the system or too
difficult to implement in the amount of time available.
The undo function would have improved usability by increasing user control and
freedom, allowing the user to go back to the state of the previous task in the visual
programming process.
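Such a function could be sketched as a snapshot-based (memento-style) history. Everything below is hypothetical, not taken from the prototype's source, and the workspace state is reduced to a string for illustration.

```cpp
#include <stack>
#include <string>

// Hypothetical snapshot-based undo history. After each completed task
// the interface would checkpoint the workspace; undo() discards the
// most recent snapshot and returns the one before it (or the empty
// initial state when nothing is left to undo).
class UndoHistory {
    std::stack<std::string> snapshots;
public:
    void checkpoint(const std::string& workspaceState) {
        snapshots.push(workspaceState);
    }
    std::string undo() {
        if (snapshots.empty()) return "";  // nothing to undo
        snapshots.pop();                   // drop the current state
        return snapshots.empty() ? "" : snapshots.top();
    }
};
```

A checkpoint would be taken after each of the seven tasks; undo() then discards the current snapshot and restores the state of the previous task, giving the user control and freedom described above.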
The save function and the shortcut features would increase the efficiency of the
user, leading to an improved user experience overall.
The view code function would have also improved user experience by showing the
users the results of their efforts after completing the visual programming process.
6.3.3 Problems that are beyond the scope of this project
The first problem was that the current drag and drop feature does not show much
visual feedback. The designer had planned to implement a new drag and drop feature
that improves visual feedback by using animations and real-time object tracking;
however, this required the use of a development platform other than Java.
The second problem was that the current interface was developed with a focus on a
small part of the intended system's overall functionality. The focus of this project was
to represent the IF statement in a visual program; the development of a generic VPL
that supports all the functions Arduino has to offer is beyond this project's scope.
Chapter 7 – Conclusions
The aim of this project was to design and develop an interactive visual programming
language to empower end-users with the ability to program the Arduino
microcontroller without writing code. The research questions posed in the beginning
of this project focused on attaining a suitable methodology to design and evaluate
such a system. Then, using the theories and principles gathered from the research, a
software development model was created to structure the design process. Following
this model, the researcher deployed the development process to design and
implement an interactive user interface for the visual programming language.
A number of low-fidelity prototypes were developed from the gathered system
requirements. The designer was then able to apply design heuristics to evaluate the
usability of the low-fidelity prototype. The results of the heuristic evaluation allowed
the designer to gather more user requirements and implement a technology probe,
which was then shown to the users. The probing session conducted with the users
generated new ideas and features that were used in later designs. It also gathered
qualitative user data that supported the designer's idea to use drag and drop as a
visual programming technique. The technology probe was then developed into a
high-fidelity horizontal prototype using the gathered user data.
The high-fidelity prototyping phase made use of a user-centred design methodology
to gather more usability information which focused on the needs of the users. A
cognitive walkthrough evaluation technique was then conducted on the prototype with
the students who were part of the Arduino introductory course offered by the university.
The results of the evaluation were interesting, as some of the findings contradicted
earlier results from the probing phase. The main reason for this contradiction was
that the users had gained more knowledge of Arduino through the two-week course.
This made using co-design methods for this project even more worthwhile, because
the designer was able to evaluate how changes in the users' knowledge affected the
design.
From the cognitive walkthrough evaluations conducted with the users, a final set of
changes were made to the design to improve its usability. Some major changes
included modifying visual elements to improve affordance and increasing visibility of
some objects to improve feedback. The changes were implemented to produce a final
prototype of the VPL interface.
7.1 Research questions addressed
This project has effectively addressed two research questions which focused on
methodologies that were needed to design an interactive visual programming system
and evaluate its usability.
The first research question asked in the beginning of this project was: How to design
an interactive VPL? The solution was to use interface design principles to gather initial
system requirements so that a technology probe could be built to gather the usability
requirements from users. This involved using information gathering techniques such
as contextual inquiry and field observations during the design session to obtain the
necessary data.
The second research question focused on evaluating the usability of the implemented
VPL. The solution involved using heuristic evaluation techniques and a cognitive
walkthrough evaluation with the users to gather information that can be analysed to
evaluate the design's usability. The results of the evaluations highlighted important
usability issues that were present in the design. Those issues were then addressed by
the designer between each iteration to improve the usability of the next prototype
implementation. This eventually produced a final prototype which was much more
user-friendly than the first prototype.
7.2 Possible Future Work
User-centred design is a never-ending process. The process only halts when users
are reasonably satisfied with the design. In theory, this iterative process can continue
indefinitely, improving the usability of the design bit by bit. There is still much that can
be done to improve the overall user experience of the system‟s interface.
With regards to the system as a whole, there is a lot of work that still needs to be done
in other fields concerning visual programming languages. This includes research
topics involving data abstraction techniques, visual grammar and parsing, control
flows, efficiency, liveness as well as visual debugging and exception handling.
References
1. Cooper, Alan and Reimann, Robert. (2003). About Face 2.0: The Essentials of
Interaction Design. Indianapolis: Wiley Publishing.
2. Johnson, Jeff. (2000). GUI Bloopers: Common user interface dos and don'ts. San
Francisco: Morgan Kaufmann Publishers.
3. Pinker, Steven. (1999). How the Mind Works. W. W. Norton & Company.
4. Chepuri, Suman. (n. d.). User Interface Methodologies. [Online] available at
http://www.chepuri.com/projects/doc/interface.html#method [accessed 10 October
2013].
5. Nielsen, Jakob. (1995). 10 Usability Heuristics for Interface Design. [Online] available at
http://www.nngroup.com/articles/ten-usability-heuristics/ [accessed 10 October 2013].
6. Ludolph, Frank. (1998). Model-Based User Interface Design: Successive Transformations
of a Task/Object Model. CRC Press.
7. Anderson, J.R. (2010). Cognitive psychology and its implications. New York, NY: Worth
Publishers.
8. Hutchinson, Hilary. (2003). Technology Probes: Inspiring Design for and with Families.
New York: ACM.
9. Lewis, C., Polson, P., Wharton, C. and Rieman, J. (1990). Testing a walkthrough
methodology for theory-based design of walk-up-and-use interfaces. In Proceedings of
the SIGCHI Conference on Human Factors in Computing Systems: Empowering People
(CHI '90). ACM, New York, NY, USA.
10. Rogers, Y. Sharp, H. Preece, J. (2011). Interaction Design: beyond human-computer
interaction. Chichester: Wiley & Sons.
11. ISTQB (n. d.). What are the Software Development Models? [Online] available at
http://istqbexamcertification.com/what-are-the-software-development-models/ [accessed
11 October 2013].
12. Leung, Kenneth. (n. d.). A History of the Arduino Microcontroller. [Online] available at
http://www.kenleung.ca/_portfolioassets/PDF/HistoryOfArduino_KenLeung.pdf
[accessed 09 October 2013].
13. Guerra, E. Diaz, P. de Lara J. (2005). A Formal Approach to the Generation of Visual
Language Environments Supporting Multiple Views. 2005 IEEE Symposium on Visual
Languages and Human-Centric Computing.
14. Hirakawa, M. Tanaka, M. Ichikawa, T. (1990). Hi-Visual: An Iconic Programming System.
IEEE Transactions on Software Engineering, October 1990.
15. Gomba, Davide. (2010). Visual Programming Arduino: Modkit. [Online] available at
http://blog.arduino.cc/2010/10/05/visual-programming-arduino-modkit-and-the-others/
[accessed 09 October 2013].
16. Xin, C.J. (2011). Design a Block Language for Arduino. [Online] available at
http://xinchejian.com/2011/02/07/design-a-block-language-for-arduino/ [accessed 09
October 2013].
17. Jeffries, R. Miller, J. Wharton, C. Uyeda, K. (1991). User Interface Evaluation in the Real
World: A Comparison of Four Techniques. Software and Systems Laboratory, Hewlett
Packard.
18. Greenbaum, J. Kyng, M. (1991). Design at Work: Cooperative Design of Computer
Systems. Taylor & Francis.
19. Burnett, M. Baker, M. (1994). A Classification System for Visual Programming Languages.
Journal of Visual Languages and Computing.
20. Cipan, Vibor. (2010). User interface design for beginners, intermediates, or experts?
[Online] available at
http://www.uxpassion.com/blog/strategy-concepts/user-interface-design-beginners-inter
mediates-experts [accessed 12 October 2013]. UX Design agency, 2013.
21. Burnett, M. Ambler, A. (1994). Interactive Visual Data Abstraction in a Declarative Visual
Programming Language. Journal of Visual Languages and Computing.
22. Costagliola, G., Deufemia, V., Ferrucci, F. and Gravino, C. (2001). Parsability of Visual
Languages. Stresa, Italy: IEEE Symposia on Human-Centric Computing.
23. Beaumont, M. and Jackson, D. (1998) Visualizing Complex Control Flow. Halifax,
Canada: IEEE Symposium on Visual Languages.
24. Sefelin, R., Tscheligi, M., & Giller, V. (2003). Paper Prototyping – What is it good for? A
Comparison of paper – and Computer – based Low fidelity Prototyping.
25. Repenning, A. (1994). Bending Icons: Syntactic and Semantic Transformation of Icons.
St. Louis: Proceedings of the 1994 IEEE Symposium on Visual Languages.
26. Erickson, T. (1995). Notes on Design Practice: Stories and Prototypes as Catalysts for
Communication. Scenario-Based Design: Envisioning Work and Technology in System
Development. New York: Wiley & Sons.
27. John, B. E. & Marks, S. J. (1997). Tracking the Effectiveness of Usability Evaluation
Methods. Behaviour and Information Technology.
28. Lewis, C., Polson, P., Wharton, C. and Rieman, J. (1990). Testing a walkthrough
methodology for theory-based design of walk-up-and-use interfaces. In Proceedings of
the SIGCHI Conference on Human Factors in Computing Systems: Empowering People
(CHI '90). ACM, New York, NY, USA.
29. Travis, David. (2010). The 4 questions to ask in a cognitive walkthrough. [Online]
available at http://www.userfocus.co.uk/articles/cogwalk.html [accessed 20 October
2013]. Userfocus ltd.
Appendices
Appendix 1 – Arduino course breakdown (FIN3030H)
Tuesday, Week 1:
• Project Introduction
• Arduino Hardware and Software overview
Wednesday, Week 1:
• Arduino Programming
Thursday, Week 1:
• Arduino Programming
Friday, Week 1:
• Concept Discussion
• Arduino Programming
Monday, Week 2:
• Final Concept Discussion
• Production of Arduino driven artwork
Tuesday, Week 2:
• Public Holiday – Heritage Day
Wednesday, Week 2:
• Production of Arduino driven artwork
Thursday, Week 2:
• Production of Arduino driven artwork
Friday, Week 2:
• Production of Arduino driven artwork
Monday, Week 3:
• Crit and hand-in
Appendix 2 – Enlarged photos and screenshots