Unit 23 – Task 3
Task 1 – P3 – Design input and output HCIs to meet given specifications
The HCI that I will be designing in Unity is a dark blue sphere in a virtual 3D space that I will be able to pick up and throw across the grid. The sphere should also roll when I throw it with an underarm throw or by slapping it. The input method used to control the sphere is a Leap Motion controller, and the output method used to display it is a computer monitor.
Task 2 – P4 – Create input and output HCIs to meet given specifications
When first creating my HCI, the first thing that had to be done was to plug the Leap Motion controller into the computer, as it was the input that would be used for the human/user side of the HCI; without it the design would not work at all. The grid was the second thing created when making the HCI. This was because my design stated that the sphere needed to roll when moving on it. The grid was created in Unity by right-clicking the Hierarchy panel and choosing '3D Object' and then 'Plane', as shown below.
After this, the grid/plane appeared on my screen; I then lowered its height and adjusted it so that it sat below the Leap Motion hands.
The sphere was created next, very similarly to the grid. As before, it was created by right-clicking the Hierarchy panel and choosing '3D Object', but this time choosing 'Sphere' instead of 'Plane', as shown below.
At first, when the sphere appeared, it was too large and had to be resized so that the Leap Motion hands could pick it up. This was done by changing the X, Y and Z scales of the sphere via the Transform section of the Inspector panel on the right side, as seen below.
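The same resize could also be done from a script instead of the Inspector. The following is only a rough sketch of that idea: the object name "Sphere" (Unity's default for a new sphere) and the 0.1 scale value are assumptions, not values taken from my actual project.

```csharp
using UnityEngine;

// Hypothetical helper: shrinks the sphere so the Leap Motion hands can
// grasp it. Attach to any object in the scene. The name "Sphere" and the
// 0.1 scale are assumed values for illustration only.
public class SphereResizer : MonoBehaviour
{
    void Start()
    {
        GameObject sphere = GameObject.Find("Sphere");
        if (sphere != null)
        {
            // Equivalent to typing 0.1 into the X, Y and Z Scale fields
            // of the Transform component in the Inspector.
            sphere.transform.localScale = new Vector3(0.1f, 0.1f, 0.1f);
        }
    }
}
```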
The sphere was then coloured blue by adding a component from the Project panel called 'IE example' and then choosing the dark blue version, as shown below.
At this point the sphere did not have any physics, which made it non-interactive. To fix this, I added a Rigidbody component to the sphere via the 'Add Component' button at the bottom of the Inspector. A Sphere Collider component was also added so that the sphere would be interactable with the Leap Motion controller and, in turn, my virtual hands.
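Adding these two components through the Inspector could equally be done in code. Below is a sketch of that equivalent setup; Rigidbody and SphereCollider are the real Unity component names, but the script itself and its mass value are illustrative assumptions rather than part of my project.

```csharp
using UnityEngine;

// Sketch of the same setup done in code rather than through the
// "Add Component" button. Rigidbody gives the sphere physics (gravity,
// rolling); SphereCollider lets other objects, such as the Leap Motion
// hand models, make contact with it.
public class SpherePhysicsSetup : MonoBehaviour
{
    void Start()
    {
        // Check first so we do not try to add a second Rigidbody,
        // which Unity does not allow on one GameObject.
        if (GetComponent<Rigidbody>() == null)
        {
            Rigidbody rb = gameObject.AddComponent<Rigidbody>();
            rb.mass = 1f; // assumed value, not taken from the original project
        }
        if (GetComponent<SphereCollider>() == null)
        {
            gameObject.AddComponent<SphereCollider>();
        }
    }
}
```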
When this was done, the sphere just rolled away and was not interactable. This was fixed by adding an Interaction Manager to both the sphere and the Leap Motion controller so that they could interact with one another properly. This was done by going to the Assets menu at the top of Unity and importing the Leap Motion Interaction Engine and all of its assets.
I then dragged the Interaction Manager onto the Hierarchy panel so that its effects would take hold. This was done by going to Modules, then Interaction Engine, then Prefabs. After that, the sphere's interactivity, and my objective with it, came to fruition.
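For reference, in the Leap Motion Interaction Engine an object is made grabbable by giving it an InteractionBehaviour component, which registers it with the InteractionManager in the scene. The sketch below assumes the Interaction Engine package has been imported and an InteractionManager prefab is already present; the script itself is illustrative, not part of my project.

```csharp
using UnityEngine;
using Leap.Unity.Interaction;

// Rough sketch: registers the sphere with the Interaction Engine so the
// Leap Motion hands can grab and throw it. Assumes the Interaction Engine
// module has been imported and an InteractionManager prefab is in the scene.
public class MakeSphereInteractable : MonoBehaviour
{
    void Start()
    {
        // InteractionBehaviour requires a Rigidbody on the same object;
        // it then handles grasping, throwing and contact with the hands
        // driven by the InteractionManager.
        if (GetComponent<InteractionBehaviour>() == null)
        {
            gameObject.AddComponent<InteractionBehaviour>();
        }
    }
}
```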
Task 3 – M2 – Explain the fundamental principles which have been applied to the designs
The first fundamental principle that I applied to the design was colour. I changed the sphere to blue to improve its aesthetic compared to its default colour, which was white.
The second principle that I applied to my designs was that of the human as a component. This is the principle of the human processing the information about what they intend to do and physically being the main input (with the assistance of the Leap Motion) of the application, hence it being HCI (Human–Computer Interaction). It was used because the Leap Motion controller was the input and was not a conventional one. It required the user (when using my design/application alongside the Leap Motion) to physically move their arms for any interaction to take place between them and the application/computer. This can be evidenced by the small video below, which shows how this principle has been implemented within my application's design.
20190603_125136.mp4
The third and final principle used is the key action model. Since this model is limited by human error, it cannot help in predicting that error when the user is interacting with the sphere with the assistance of the Leap Motion. It was mainly used in relation to the physics of the sphere: if a Leap Motion hand model were near it, the sphere would either roll away or, if the user picked it up, it would correspond with their hand movements. This can be evidenced by the image below, which shows an example of what may happen within the model, as explained with the two scenarios.
Task 4 – P5 – Test the HCIs created
Physics
After applying the Rigidbody to the sphere, its physics worked just as I intended within my design, in terms of it both rolling and doing so on the surface of the grid.
Interaction
At first, when I tried to interact with the sphere, it did roll just as expected from testing its physics; however, when I tried to interact with it further, it would erratically roll away from my Leap Motion hands very quickly.
20190605_134641.mp4 (video of the problem)
I then figured out that the reason this kept occurring was both that I had not installed the Rigidbody correctly and that the Leap Motion controller was smudged, which stopped it working. Unfortunately, the Rigidbody problem was so complex that my teacher recommended that I restart the whole process using the same method as before, but install the Rigidbody component in the right section. After following his instructions and installing the component in its right section, the problem seems to have been fixed and the sphere's interactivity returned to normal, as evidenced by the video below.
20190605_134650.mp4
Behaviour
When testing the behaviour of my HCI, I noticed that while everything seemed to be in order, when I tried to lift the sphere the Leap Motion controller often took around one to two seconds to recognise my hand movements and register them on my screen, which suggests it had a slow reaction time and was lagging. I tried to fix this issue by restarting the creation process for a second time and by also trying Unity and the Leap Motion controller on another computer, but nothing seemed to work and, in the end, I decided to accept its time-delay flaw.
Task 5 – M3 – Explain how the effectiveness of HCIs may be measured
There are two subgroups of how the effectiveness of HCIs can be measured. These are:
1. Quantitative measures of effectiveness: This is based on measurements obtained using a more rigid measurement process, which can be counted, such as:
• Throughput: Throughput is a measurement of how many units of information a system can process in a set time. Its main use has been to measure the effectiveness of computers that run many programs concurrently and to compare them. It is very effective in measuring the effectiveness of HCIs because, before a product designed with human interaction in mind is released, the HCI developers can stress-test it on computers that are running both the application for the HCI and other resource-intensive applications. This in turn helps them tune out any problems that may occur if left unchecked, such as working out which kinds of failures are the most valuable to plan for, fixing security flaws that are exposed under stressful conditions, and testing the HCI's robustness to see if it can be corrupted.
• Comparisons with other systems: This measurement method is an effective way to measure an HCI's effectiveness because it shows how much better or worse the product is at interacting with users compared to other software applications. This method also shows developers how they can improve their application (if it is for commercial use) and how they can better adjust it for consumer use in comparison to other applications intended for HCI, which in turn may help the sales of their software.
• Costs of staffing: This is an effective way of measuring an HCI's effectiveness because it factors in things such as the salaries of the developers who created the application and who may have to keep maintaining it for further enhancements to the HCI. This adds to the expense of the application and, when measured, lowers its effectiveness.
2. Qualitative measures of effectiveness: This is based on measurements obtained using a more subjective measurement process, which can be open to interpretation, such as:
• Ease of use: This is an effective way of measuring an HCI's effectiveness because it allows the application to reach a broader audience through its streamlined presence and interactivity. With this method, alongside comparing the software with other systems/software, the software's effectiveness can be measured through factors such as the ease of moving from one section of the software to another, the simplicity of the software's interface when a user is browsing it, and lastly how easy it is for a new user of the software to learn how to use it, at least to an intermediate level.
• Meets requirements: This is useful in measuring an HCI's effectiveness because if the software cannot meet a set number of tasks that it is required to do during its testing, then its effectiveness in being interactive with a user will be lower than that of software that does meet them.
• User satisfaction: This is important in measuring an HCI's effectiveness because if the user is not satisfied with the software that they are interacting with, then the effectiveness of the software is lowered. This can happen in a few scenarios, for example bugs within the software that diminish its interactivity, how simple it is to interact with, and finally how long the user enjoys interacting with the product and continues using it.
Task 6 – D2 – Evaluate the HCIs developed
Overall
I am very pleased with the HCI that I developed. This is because I struggled to build it due to a lot of technical errors on my part, and even when I didn't have those errors I struggled to place the parts in the right positions, such as the camera angle, the hand position, the Leap Motion controller, and how much space to give between the hand models, the controller and the sphere. My application meets all the requirements within my design, as evidenced throughout this draft's images and videos. There were things I wish I could have implemented but couldn't, due to the lack of online resources around that specific creation and because no spoon-feeding is allowed from teachers in terms of them directly doing the work for me (which would have lowered my grade for the unit), such as adding another object like a cylinder and multiplying it to turn the application into a sort of interactive bowling game/simulation.
Within my original mock-up/plan I wanted to create a slider, as suggested by you (or Mr James Holder, if he isn't the one reading this), but when I wanted to create it there was a lack of resources on how to do so. The only site with anything close to a tutorial was the main Leap Motion website, which in fact had no tutorial, just an example that I could download and play around with. The transition from the mock-up to the final design is quite drastic. The reason I drastically changed my idea of the application I wanted to create was that I wanted something that fit a simple set of criteria, was balanced in its complexity to create, and had a lot of online resources that I could check against to track my progress and to see other points of view, in case I didn't understand, or was confused by, how one person wrote their instructions compared to another.
I believe that the best things about my design are its ease of use and the small number of peripherals it needs. This is because when it comes to remote-less software applications there tend to be a lot of peripherals needed. For example, the shogi-playing AI robot found within the Barbican first needed a huge power supply to power it, a shogi board compatible with its pre-determined algorithms, and a second player willing to play and interact with it, while all my application requires is a desktop/laptop with Unity, my software and a Leap Motion controller inserted into one of its USB ports. On the other hand, this robot, compared to my application, is much more advanced in terms of its capabilities and interactivity with users, because it is able to learn to improve its capabilities and cater its strategy to a specific user in order to defeat them at their game.
In conclusion, if I had to restart the design of my application, I would increase its complexity by making a fully-fledged game out of it and adding an AI component if possible, like a game in Greenfoot. I would also take inspiration from games such as Play Go within the Barbican Centre, as it is like Othello but has both an AI component and a co-op mode for two players at a local level. I would also have tried to get two Leap Motion controllers working for a two-player mode, for full interactivity.