Midterm Stats • Min: 16/38 (42%) • Max: 36.5/38 (96%) • Average: 29.5/38 (78%)

Upload: brittany-pierce

Post on 03-Jan-2016


Page 1

Midterm Stats

• Min: 16/38 (42%)

• Max: 36.5/38 (96%)

• Average: 29.5/38 (78%)

Page 2

Evaluation 2: Choosing a Strategy

Page 3

Objectives

For evaluation of a given application, you will be able to…

• Prioritize goals and specify usability requirements

• Decide what type of data you need to collect
• Decide who should take part
• Develop good tasks
• Decide where and for how long
• Justify your decisions for all of the above

Page 4

Why bother?

• Tied to the usability engineering lifecycle

• Pre-design
– investing in a new, expensive system requires proof of viability
• Initial design stages
– develop and evaluate initial design ideas with the user

[diagram: design → implementation → evaluation cycle]

Page 5

Why bother?

• Iterative design
– does system behavior match the user’s task requirements?
– are there specific problems with the design?
– what solutions work?
• Acceptance testing
– verify that the system meets expected user performance criteria
• e.g. 80% of 1st-time customers will take 1-3 minutes to withdraw $50 from the automatic teller

[diagram: design → implementation → evaluation cycle]
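An acceptance criterion like the ATM example above can be checked mechanically against observed task times. A minimal sketch, using made-up timing data (the times below are illustrative, not from any real study):

```python
# Hypothetical task times (seconds) for first-time customers withdrawing $50.
times = [75, 110, 95, 160, 200, 85, 140, 120, 90, 170]

# Criterion from the slide: 80% of users finish within 1-3 minutes (60-180 s).
within = [t for t in times if 60 <= t <= 180]
fraction = len(within) / len(times)
passed = fraction >= 0.80

print(f"{fraction:.0%} of users within 1-3 minutes -> {'pass' if passed else 'fail'}")
```

With this data, 9 of 10 users fall inside the window, so the criterion is met. The point of stating the criterion numerically up front is exactly this: acceptance becomes a yes/no computation rather than a judgment call.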

Page 6

Experimental approach

• Experimenter controls all environmental factors
– study relations by manipulating independent variables
– observe effect on one or more dependent variables
– nothing else changes

[figure: the same four commands (New, Open, Close, Save) shown in a pull-down File menu and in a pop-up menu]

There is no difference in user performance (time and error rate) when selecting an item from a pull down or a pop-up menu of 4 items
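A null hypothesis like this one is tested by comparing the dependent variable (selection time) across the two menu conditions. A minimal sketch, using fabricated selection times and a hand-rolled Welch's t statistic (both the data and the group sizes are assumptions for illustration):

```python
from statistics import mean, stdev

# Hypothetical selection times (seconds) for a 4-item menu, one group per condition.
pull_down = [1.2, 1.4, 1.1, 1.3, 1.5, 1.2, 1.4, 1.3]
pop_up    = [1.1, 1.3, 1.2, 1.4, 1.2, 1.1, 1.3, 1.2]

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances allowed)."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

t = welch_t(pull_down, pop_up)
print(f"mean pull-down = {mean(pull_down):.2f}s, "
      f"mean pop-up = {mean(pop_up):.2f}s, t = {t:.2f}")
```

In practice the statistic would be compared against a t distribution to decide whether to reject the null hypothesis; the sketch only shows how the two manipulated conditions map onto two samples of the dependent variable.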

Page 7

Usability engineering approach

• Observe people using systems in simulated settings
– people brought into an artificial setting that simulates aspects of a real-world setting
– people given specific tasks to do
– observations / measures made as people do their tasks
– look for problem areas / successes
– good for uncovering ‘big effects’

Page 8

Usability engineering approach

• Is the test result relevant to the usability of real products in real use outside the lab?

• Problems
– non-typical users tested
– non-typical tasks
– different physical environment
– different social context
• motivation towards experimenter vs motivation towards boss

• Partial solution
– use real users
– task-centered system design tasks
– environment similar to the real situation

Page 9

Conceptual model extraction

• How?
– show the user static images of
• the prototype or screens during use
– ask the user to explain
• the function of each screen element
• how they would perform a particular task

• What?
– Initial conceptual model
• how a person perceives a screen the very first time it is viewed
– Formative conceptual model
• how a person perceives a screen after it has been used for a while

• Value?
– good for eliciting people’s understanding before & after use
– poor for examining system exploration and learning

Page 10

Why are you evaluating?: 5 E’s of Usability

• Effective
• Efficient
• Engaging
• Error Tolerant
• Easy To Learn

• But, you don’t always want to test all of these!

Page 11

Why are you evaluating?: Identify your goals

• Identify your usability goals for the interface

• Then prioritize them to determine which you need to test

Page 12

Who?

• Actual end users
• Varied types of users
– Different user groups
– Different ages
– Different experience
– Etc.

Page 13

How Many Users?

• Depends on the variability of users and how stringently you want to test.

• Observing many users is expensive
• But individual differences matter
– best user 10x faster than slowest
– best 25% of users ~2x faster than slowest 25%

Solution?
• Textbook recommends starting with 5 from each user group, adding more as needed.
• Big problems are identified with a few users; smaller problems may need many.
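The "start with 5" recommendation can be motivated by a simple problem-discovery model: if each user independently reveals each problem with probability p, then n users find about 1 − (1 − p)^n of the problems. The value p ≈ 0.31 below is a commonly cited average discovery rate, not a figure from these slides:

```python
def proportion_found(n_users, p=0.31):
    """Expected share of usability problems found by n_users,
    assuming each user independently reveals each problem with
    probability p (p = 0.31 is an assumed average rate)."""
    return 1 - (1 - p) ** n_users

for n in (1, 3, 5, 10):
    print(f"{n:2d} users -> {proportion_found(n):.0%} of problems")
```

Under this assumption, 5 users already find the large majority of problems, which matches the slide's point that big problems surface with a few users while rarer, smaller problems need many more.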

Page 14

Where?: Field vs. Lab

• Field studies - realistic setting
– True to real life
– Hard to arrange and do
– Less control of variables
• Controlled studies - controlled environment
– More control but less realistic
– Often easier to set up, more efficient

Page 15

Validity

• External validity
– confidence that results apply to real situations
– usually good in natural settings
• Internal validity
– confidence in our explanation of experimental results
– usually good in experimental settings
• Trade-off: Natural vs Experimental
– precision and direct control over experimental design
versus
– desire for maximum generalizability in real-life situations

Page 16

How long should a session last?

• Usually 30-90 minutes / session

• May be longer, but beware of fatigue and learning effects

• Sometimes need several sessions (e.g. for tasks that require substantial training)

Page 17

What should the participants do?

• Think carefully about the tasks you ask people to do.

• Tasks should model real-world use.

• May also want to test rare / unusual situations.

Page 18

Key Points

• Plan - to ensure you are testing the right things with the right people.

• Consider the goals & usability requirements, data to collect, users, tasks, location, and length of the evaluation.