

Tips and Tricks for Measuring the User Experience

Tom Tullis and Bill Albert

Fidelity Investments

Usability and User Experience 2008

UPA-Boston's Seventh Annual Mini UPA Conference

May 28, 2008

2

Who we are

Tom Tullis, Senior Vice President, User Insight, Fidelity Investments

[email protected]

Bill AlbertDirector, User Insight, Fidelity Investments

[email protected]

3

Our Top 10 List

1. Know your data

2. Show your confidence (intervals)

3. Deal with binary success data (appropriately)

4. Compare means

5. Consider using expectation measures

6. Use the System Usability Scale (SUS)

7. Show frequency distributions

8. Combine different metrics

9. Use appropriate tools

10. Present data appropriately

4

Tip 1: Know your data (and what you can & can’t do with it)

5

Independent & Dependent Variables

Independent variables: The things you manipulate or control for, e.g.,

Characteristics of participants (e.g., age, sex, relevant experience)

Different designs or prototypes being tested

Tasks

Dependent variables: The things you measure, e.g.,

Task Success

Task Time

SUS score

etc.

6

Types of Data

Nominal (aka Categorical) e.g., Male, Female; Design A, Design B.

Ordinal e.g., Rank ordering of 4 designs tested from Most

Visually Appealing to Least Visually Appealing.

Interval e.g., 7-point scale of agreement: “This design is

visually appealing. Strongly Disagree . . . Strongly Agree”

Ratio e.g., Time, Task Success %

7

Ordinal vs. Interval Rating Scales

Are these two scales different?

Top scale is ordinal. You should only calculate frequencies of each response.

Bottom scale can be considered interval. You can also calculate means.

8

Statistics for each Data Type

9

Tip 2: Show your confidence (intervals)

10

Confidence Intervals

Assume this was your time data for a study with 5 participants:

Task Time (secs)

P1 43

P2 53

P3 59

P4 68

P5 49

What time would you report for this task? And if you knew the 95% confidence interval, would that make a difference in your answer?

11

Calculating Confidence Intervals

Task Time (secs)

P1 43

P2 53

P3 59

P4 68

P5 49

Mean: 54.4

Std Dev: 9.6

N: 5

95% Confidence Interval:

8.4

=CONFIDENCE(<alpha>,<std dev>,<n>)

<alpha> is normally .05 (for a 95% confidence interval)

<std dev> is the standard deviation of the set of numbers (9.6 in this example)

<n> is how many numbers are in the set (5 in this example)

Excel example
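If you are working in a script rather than a spreadsheet, the same calculation is easy to reproduce in Python. This is a minimal sketch (the function and variable names are ours, not part of the talk) mirroring Excel's CONFIDENCE, which uses the normal distribution:

from math import sqrt
from statistics import mean, stdev
from scipy.stats import norm

def confidence_interval(values, alpha=0.05):
    # Half-width of the interval, matching Excel's CONFIDENCE:
    # z critical value * sample standard deviation / sqrt(n)
    z = norm.ppf(1 - alpha / 2)    # about 1.96 for a 95% interval
    return z * stdev(values) / sqrt(len(values))

times = [43, 53, 59, 68, 49]       # the task times from this example
print(mean(times))                 # 54.4
print(confidence_interval(times))  # about 8.4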

12

Show Error Bars

Mean SUS Scores: Apollo Program Websites

(Error bars represent 95% confidence interval)

[Bar chart comparing the NASA and Wikipedia sites; y-axis: SUS Score, 0-80.]

Excel example
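If you are not charting in Excel, the same kind of error-bar chart can be sketched with matplotlib. The means and intervals below are placeholders, not the actual Apollo study numbers:

import matplotlib.pyplot as plt

sites = ["NASA", "Wikipedia"]
means = [62, 71]       # placeholder mean SUS scores
ci_95 = [8, 6]         # placeholder 95% confidence intervals

fig, ax = plt.subplots()
ax.bar(sites, means, yerr=ci_95, capsize=8)   # error bars show the intervals
ax.set_ylabel("SUS Score")
ax.set_ylim(0, 100)
ax.set_title("Mean SUS Scores (error bars represent 95% confidence interval)")
plt.show()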

13

How to Show Error Bars

14

Tip 3: Deal with Binary Success Data (appropriately)

15

Binary Success

Pass/fail (or other binary criteria)

1’s (success) and 0’s (failure)

Participant      Task 1   Task 2   Task 3   Task 4   Task 5
P1                  1        0        1        0        0
P2                  1        0        1        0        1
P3                  1        1        1        1        1
P4                  1        1        1        1        1
P5                  0        0        1        1        1
P6                  1        0        0        1        1
P7                  0        1        1        1        1
P8                  0        0        1        1        0
P9                  1        0        1        0        1
P10                 1        1        1        1        1
P11                 0        1        1        1        1
P12                 1        0        1        1        1
Average            67%      42%      92%      75%      83%
95% Conf. Int.     28%      22%      29%      29%      29%

16

Confidence Interval for Task Success

When you look at task success data across participants for a single task the data is commonly binary:

Each participant either passed or failed on the task.

In this situation, you need to calculate the confidence interval using the binomial distribution.

17

Example

Task 1

P1 1

P2 1

P3 1

P4 0

P5 1

P6 1

P7 0

P8 1

1=success, 0=failure. So, 6/8 succeeded, or 75%.

Easiest way to calculate confidence interval is using Jeff Sauro’s web calculator: http://www.measuringusability.com/wald.htm
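If you would rather compute it yourself than use the web calculator, here is a short Python sketch of an adjusted Wald interval, the variant generally recommended for small-sample binary success data (the function name is ours):

from math import sqrt
from scipy.stats import norm

def adjusted_wald(successes, trials, confidence=0.95):
    # Adjusted Wald confidence interval for a binary success rate
    z = norm.ppf(1 - (1 - confidence) / 2)
    n_adj = trials + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

print(adjusted_wald(6, 8))   # 6 of 8 participants succeeded (75%)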

18

Chi-square

Allows you to compare actual and expected frequencies for categorical data.

=CHITEST(<actual range>,<expected range>)

Excel example:
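The CHITEST comparison can also be done with scipy; the counts below are made up, and scipy's chisquare returns both the test statistic and the p-value (CHITEST returns only the p-value):

from scipy.stats import chisquare

observed = [14, 6]     # hypothetical observed frequencies
expected = [10, 10]    # expected frequencies (totals must match)

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(p_value)         # comparable to =CHITEST(actual range, expected range)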

19

Tip 4: Compare Means

20

Comparing Means

T-test:

Independent samples (between subjects)

Paired samples (within subjects)

Excel example:

Apollo websites, task times

Excel example:

Haptic mouse study

21

T-tests in Excel

Independent Samples: Paired Samples:

=TTEST(<array1>,<array2>,<tails>,<type>)

<tails> = 2 (for a two-tailed test) in almost all cases

<type> = 1 (paired samples); <type> = 2 (independent samples)
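Outside Excel, both kinds of t-test are available in scipy (the task times below are made up):

from scipy.stats import ttest_ind, ttest_rel

design_a = [43, 53, 59, 68, 49, 61, 55]   # hypothetical task times (sec)
design_b = [38, 44, 47, 52, 41, 45, 50]

# Independent samples (between subjects): different participants per design
print(ttest_ind(design_a, design_b))

# Paired samples (within subjects): same participants used both designs
print(ttest_rel(design_a, design_b))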

22

Comparing Multiple Means

Analysis of Variance (ANOVA)

“Tools” > “Data Analysis” > “Anova: Single Factor”

Excel example: Study comparing 4 navigation approaches for a website
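The same single-factor ANOVA can be run in Python with scipy's f_oneway; the four groups of task times below are made-up stand-ins for the four navigation approaches:

from scipy.stats import f_oneway

nav_a = [45, 52, 48, 60, 55]   # hypothetical task times (sec) per design
nav_b = [38, 41, 47, 44, 50]
nav_c = [62, 58, 65, 70, 61]
nav_d = [49, 53, 51, 57, 46]

f_stat, p_value = f_oneway(nav_a, nav_b, nav_c, nav_d)
print(f_stat, p_value)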

23

Tip 5: Consider Using Expectation Measures

24

Expectation Measure

Based on “expectancy disconfirmation” theory from Market Research

Basic approach:

Ask the users how easy/difficult they think each task is going to be, before doing them.

Ask them how easy/difficult it actually was, immediately after each task.

Same 7-point scales in both cases:

Very Difficult to Very Easy

25

Expectation Measure: Before/After

Before:

After:

26

Expectation Measure

For each task, calculate two averages:

Average expectation score

Average experience score

Visualize the data as a scatterplot with two axes:

Expectation rating

Experience rating

The four quadrants are interesting
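A sketch of that scatterplot in matplotlib, using the quadrant names shown on the next slide. The per-task averages are made up, and the quadrant boundaries are simply drawn at the scale midpoint of 4:

import matplotlib.pyplot as plt

expectation = [6.2, 5.8, 2.5, 3.0, 6.5]   # hypothetical average expectation ratings
experience  = [3.1, 6.4, 2.2, 5.9, 6.8]   # hypothetical average experience ratings

fig, ax = plt.subplots()
ax.scatter(expectation, experience)
ax.axvline(4, color="gray", linestyle="--")   # quadrant boundaries at the midpoint
ax.axhline(4, color="gray", linestyle="--")
ax.set_xlim(1, 7)
ax.set_ylim(1, 7)
ax.set_xlabel("Average Expectation Rating (1=Difficult, 7=Easy)")
ax.set_ylabel("Average Experience Rating (1=Difficult, 7=Easy)")
ax.text(6.5, 1.5, "Fix it Fast", ha="right")      # expected easy, was hard
ax.text(6.5, 6.5, "Don't Touch It", ha="right")   # expected easy, was easy
ax.text(1.5, 1.5, "Big Opportunity")              # expected hard, was hard
ax.text(1.5, 6.5, "Promote It")                   # expected hard, was easy
plt.show()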

27

Expectation Measure

[Scatterplot: Average Expectation and Experience Ratings by Task. X-axis: Average Expectation Rating; y-axis: Avg. Experience Rating; both scales 1=Difficult, 7=Easy. Quadrants labeled “Fix it Fast”, “Promote It”, “Big Opportunity”, and “Don’t Touch It”.]

28

Tip 6: Use the System Usability Scale (SUS)

29

SUS

Developed at Digital Equipment Corp.

Consists of ten items.

Adapted by replacing “system” with “website”.

Each item is a statement (positive or negative) and a rating on a five-point scale of “Strongly Disagree” to “Strongly Agree”.

For details see http://www.usability.serco.com/trump/documents/Suschapt.doc

30

SUS

31

SUS Scoring

SUS yields a single number representing a composite measure of the overall usability of the system being studied. Note that scores for individual items are not meaningful on their own.

To calculate the SUS score:

Each item's score contribution will range from 0 to 4.

For items 1,3,5,7,and 9 the score contribution is the scale position minus 1.

For items 2,4,6,8 and 10, the contribution is 5 minus the scale position.

Multiply the sum of the scores by 2.5 to obtain the overall SUS score.

SUS scores have a range of 0 to 100.

http://www.measuringux.com/SUS_Calculation.xls
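Those scoring rules translate directly into a short function. A minimal sketch (the function name is ours), assuming the ten responses are given as scale positions 1-5 in questionnaire order:

def sus_score(responses):
    # responses: ten scale positions (1-5), items 1-10 in order
    if len(responses) != 10:
        raise ValueError("SUS has exactly ten items")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items: position minus 1; even items: 5 minus position
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5    # overall SUS score, 0-100

print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))   # 85.0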

32

SUS Scoring Example

Total = 22; SUS Score = 22 * 2.5 = 55

33

SUS Usage

“SUS has been made freely available for use in usability assessment, and has been used for a variety of research projects and industrial evaluations; the only prerequisite for its use is that any published report should acknowledge the source of the measure.”

34

SUS Data from 50 Studies

[Histogram: Frequency Distribution of SUS Scores for 129 Conditions from 50 Studies. X-axis: Average SUS Scores in bins <=40, 41-50, 51-60, 61-70, 71-80, 81-90, 91-100; y-axis: Frequency, 0-50.]

Percentiles: 10th = 47.4, 25th = 56.7, 50th = 68.9, 75th = 76.7, 90th = 81.2; Mean = 66.4

http://www.measuringux.com/SUS-scores.xls

35

Tip 7: Show Frequency Distributions

36

Frequency Distributions

[Histogram: SUS Score per Person, comparing the Original and Redesign versions. X-axis: SUS Score in bins <=40 through 91-100; y-axis: Frequency, 0-9.]

Comparison of SUS scores across two tests in an iterative process (LeDoux, Connor, & Tullis, 2005)

Frequency distributions are a useful way of visualizing variability if you have a reasonably large number of data points.

37

Frequency Distributions

[Histogram: Frequency Distribution of SUS Scores, comparing the NASA and Wikipedia sites. X-axis: SUS Score in bins <=10 through 91-100; y-axis: Frequency, 0-12.]

Excel sample

38

How to: this must be entered as an array formula, by selecting the output range of cells and then pressing <Ctrl><Shift><Enter>.

=FREQUENCY(<data array>, <bins array>)
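The same counts can be produced in Python with numpy; the scores are made up and the bin edges roughly follow the <=40, 41-50, ... 91-100 buckets used on the earlier slides:

import numpy as np

sus_scores = [35, 48, 52, 55, 63, 66, 70, 72, 75, 78, 84, 88, 95]   # hypothetical

edges = [0, 40, 50, 60, 70, 80, 90, 100]    # bucket boundaries
counts, _ = np.histogram(sus_scores, bins=edges)
print(counts)   # frequency per bucket, like =FREQUENCY(data, bins)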

39

Tip 8: Combine Different Metrics

40

Combined Metrics

Often it’s useful to combine different metrics to get an overall usability measure.

Challenge is combining metrics that have different scales, e.g.

Task completion: % correct

Task time: Seconds

Subjective rating: SUS score

Two common techniques:

Combine using percentages

Combine using z-scores

41

Combine Based on Percentages

Basic idea is to convert each of the metrics to a percentage and then average those together.

For each metric to be transformed, you want:

0% to represent the worst possible score

100% to represent the best possible score

Some metrics already are a percentage:

SUS scores

% correct tasks

42

Sample Data

Original data:

Participant #   Time per Task (sec)   Tasks Completed (of 15)   Rating (0-4)
1               65                    7                         2.4
2               50                    9                         2.6
3               34                    13                        3.1
4               70                    6                         1.7
5               28                    11                        3.2
6               52                    9                         3.3
7               58                    8                         2.5
8               60                    7                         1.4
9               25                    9                         3.8
10              55                    10                        3.6
Averages        49.7                  8.9                       2.8

43

Sample Data

Excel spreadsheet

Original data with percentage transformations added:

Participant #   Time per Task (sec)   Tasks Completed (of 15)   Rating (0-4)   Time %   Tasks %   Rating %   Average
1               65                    7                         2.4            38%      47%       60%        48%
2               50                    9                         2.6            50%      60%       65%        58%
3               34                    13                        3.1            74%      87%       78%        79%
4               70                    6                         1.7            36%      40%       43%        39%
5               28                    11                        3.2            89%      73%       80%        81%
6               52                    9                         3.3            48%      60%       83%        64%
7               58                    8                         2.5            43%      53%       63%        53%
8               60                    7                         1.4            42%      47%       35%        41%
9               25                    9                         3.8            100%     60%       95%        85%
10              55                    10                        3.6            45%      67%       90%        67%
Averages        49.7                  8.9                       2.8            57%      59%       69%        62%
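A sketch of those transformations in Python. The task and rating percentages are simply each score divided by its maximum (15 tasks, a 4-point rating); the time percentage appears to be the fastest observed time divided by each participant's time, so that faster is better. Both interpretations are inferred from the table above rather than stated on the slide:

times   = [65, 50, 34, 70, 28, 52, 58, 60, 25, 55]            # sec per task
tasks   = [7, 9, 13, 6, 11, 9, 8, 7, 9, 10]                   # completed, of 15
ratings = [2.4, 2.6, 3.1, 1.7, 3.2, 3.3, 2.5, 1.4, 3.8, 3.6]  # 0-4 scale

best_time = min(times)
for i, (t, c, r) in enumerate(zip(times, tasks, ratings), start=1):
    time_pct   = best_time / t    # faster time = closer to 100%
    task_pct   = c / 15
    rating_pct = r / 4
    overall    = (time_pct + task_pct + rating_pct) / 3
    print(f"P{i}: {overall:.0%}")   # e.g., P1 is about 48%, P9 about 85%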

44

Combine Using Z-scores

Another method sometimes used is z-score transformation: convert each participant’s score for each metric to a z-score. Z-scores are based on the normal distribution.

They have a mean of 0 and standard deviation of 1.

Use the “standardize” function in Excel.

Average the z-scores for each person to get an overall z-score. Make sure all scales go the same direction.

Must decide whether each score is going to be given equal weight.

45

Z-score Transformation Example

=standardize(B2,$B$12,$B$13)
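The Excel STANDARDIZE approach maps onto scipy's zscore. A sketch reusing the Tip 8 sample data for illustration (time is negated first so that, for every metric, a higher z-score means better performance):

import numpy as np
from scipy.stats import zscore

times   = np.array([65, 50, 34, 70, 28, 52, 58, 60, 25, 55], dtype=float)
tasks   = np.array([7, 9, 13, 6, 11, 9, 8, 7, 9, 10], dtype=float)
ratings = np.array([2.4, 2.6, 3.1, 1.7, 3.2, 3.3, 2.5, 1.4, 3.8, 3.6])

z_time   = zscore(-times, ddof=1)    # negate: lower time = better performance
z_tasks  = zscore(tasks, ddof=1)     # ddof=1 matches Excel's sample STDEV
z_rating = zscore(ratings, ddof=1)

overall = (z_time + z_tasks + z_rating) / 3   # equal weight for each metric
print(np.round(overall, 2))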

46

A Real Example: Older Users

[Scatterplot: Overall Usability Index. X-axis: Age (in years), 0-90; y-axis: Performance Z-Score, -5.00 to 4.00. Two series (Study 1, Study 2), each with a linear trend line.]

47

Tip 9: Use Appropriate Tools

48

Morae and Camtasia

Excellent tools for capturing video, logging issues, and creating highlight tapes (www.techsmith.com).

49

Data Logger

Free Excel program used to collect and analyze usability test data.

Records task success, time-on-task, survey questions, and automatically generates charts.

(www.userfocus.co.uk/resources/datalogger.html)

50

Keynote (formerly Vividence)

Data collection and analysis tools designed to run automated usability studies.

Metrics include task success, time-on-task, clicks, and click-stream.

(www.keynote.com)

51

UTE (Usability Testing Environment)

Bob Bailey at Mind Design Systems (www.mindd.com)

Comprehensive tool for capturing and analyzing usability data

Automatically calculates success rates, time-on-task, and many other metrics

52

Ergo Browser

Modified Internet browser designed to collect usability data including time-on-task, clicks, pages, and scroll usage. Reasonably priced and easy to use. (www.ergolabs.com).

53

The Observer from Noldus

Sophisticated software for usability data collection, analysis and presentation.

Ability to integrate multiple video feeds, analyze eye-tracking data, and other physiological measurements.

(www.noldus.com)

54

Ovo Studios

Software for collecting and analyzing usability data. Allows you to log issues, collect time-on-task data, create video highlights, and generate reports.

(www.ovostudios.com)

55

“SUM” Calculator

Another free tool developed by Jeff Sauro to calculate a single usability score “SUM” (www.measuringusability.com).

56

Tip 10: Present Data Appropriately

57

Top 10 Mistakes in Data Graphs

1. Not labeling axes or units.

2. Implying more precision in your data than it deserves.

3. Not showing confidence intervals when you can.

4. Not starting a bar graph at the lowest possible value for the y-axis (usually 0).

5. Using a line graph when it should be a bar graph.

6. Using 3-D when it doesn’t add any value.

7. Trying to include too much.

8. Poor labeling of pie charts.

9. Using color coding as the only way to convey information.

10. Not knowing when to use stacked bar graphs.

58

What’s Wrong With This?

[Bar chart: Successful Task Completion Rate for Task 1 through Task 8; y-axis: 0%-80%.]

59

What’s Wrong With This?

[Bar chart: the same completion rates for Task 1 through Task 8, but with no title and a y-axis running from 20.0% to 80.0% rather than starting at 0%.]

60

Better Version

Successful Completion Rate, by Task

(Error bars represent 90% confidence interval)

[Bar chart: Task 1 through Task 8; y-axis: % of Users Who Successfully Completed, 0%-90%.]

61

What’s Wrong With This?

[Pie chart: % of Pages with Accessibility Errors, split into nine slices: 0 Errors, 1 Error, 2 Errors, 3-4 Errors, 5-6 Errors, 7-8 Errors, 9-10 Errors, 11-15 Errors, and >15 Errors, with no percentage labels.]

62

Better Version

[Pie chart: % of Pages with Accessibility Errors, grouped into four labeled slices: 0 Errors (10%), 1-2 Errors (12%), 3-10 Errors (53%), >10 Errors (25%).]

63

How Could This Be Better?

[Grouped bar chart: Breakdown of Task Completion Status for the NASA and Wikipedia sites; y-axis: % of Tasks, 0%-80%; separate bars for Correct, Incorrect, and Gave Up.]

64

Better Version

[Stacked bar chart: Breakdown of Task Completion Status; y-axis: % of Tasks, 0%-100%. NASA: 58% Correct, 18% Incorrect, 24% Gave Up. Wikipedia: 71% Correct, 11% Incorrect, 17% Gave Up.]

65

Thank You!

For more information: www.measuringux.com