Towards Task Analysis Tool Support

Upload: suzanne-kieffer

Posted on 19-Jul-2015


TRANSCRIPT

Page 1: Towards Task Analysis Tool Support
Page 2

Suzanne Kieffer¹, Panos Markopoulos², Nikolaos Batalas²

¹ Université catholique de Louvain
Louvain School of Management
Louvain-la-Neuve, Belgium

² Eindhoven University of Technology
Industrial Design
Eindhoven, The Netherlands

Page 3

Task Analysis

User goals, tasks, and work environment

User errors, breakdowns in the task, and workarounds

Page 4

Task Analysis

Other Usability Engineering Tasks

Usability Goals Setting

Work Reengineering

User Interface Design

Page 5

Task Analysis remains resource-intensive

Data collection:

Face-to-face interaction

User observation

Note-taking

Audio/video recording and transcribing

Page 6
Page 7

Room for improvement

Analyst efficiency

Analyst workload

User time and effort

In situ data collection: Ambulatory Assessment methods

Page 8

Ambulatory Assessment (AmA)

Purpose: to assess the ongoing behaviour, knowledge, and experience of people during task execution in their natural setting

Examples: experience sampling, repeated-entry diaries, ecological momentary assessment, acquisition of ambient signals
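As a concrete illustration of one AmA method from the list above, the Python sketch below draws random, minimally spaced prompt times for signal-contingent experience sampling. All names and parameters are illustrative assumptions, not taken from the talk or any specific tool.

```python
import random

def sample_prompt_times(start_h=9, end_h=17, n_prompts=5, min_gap_min=30, seed=None):
    """Draw n_prompts random prompt times (minutes since midnight) within
    working hours, at least min_gap_min apart -- the core of
    signal-contingent experience sampling."""
    rng = random.Random(seed)
    window = list(range(start_h * 60, end_h * 60))
    while True:
        times = sorted(rng.sample(window, n_prompts))
        gaps = [b - a for a, b in zip(times, times[1:])]
        if all(g >= min_gap_min for g in gaps):
            return times

times = sample_prompt_times(seed=1)
print([f"{t // 60:02d}:{t % 60:02d}" for t in times])
```

Rejection sampling keeps the draw simple; with 5 prompts over an 8-hour window and 30-minute gaps, a valid schedule is found quickly.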

Page 9

To what extent can AmA methods support in situ data collection during task analysis procedures?

Page 10

Method

1. Task model hypothesis: analysis of procedures and artefacts; setting of questions and experimental design

2. Tool-supported in situ data collection: users (expertise and responsibility); tasks (frequency, criticality, and complexity); problems and errors

3. Contextual observations/interviews

Page 11

Case study

Page 12

Hot-Dip Galvanizing on Continuous Lines

Page 13
Page 14

Step 1 – Task model hypothesis

Page 15

Artefact: paper checklist

Page 16
Page 17

Setting of questions

Q1. Please indicate your degree of familiarity with this task

Q2. How frequently is this task executed?

Q3. Please indicate when it was executed for the last time

Q4. Please indicate when it will be executed next time

Q5. Please select all the possible contexts where it takes place

Q6. Why does it have to be executed?

Q7. Please indicate a means to facilitate or to improve this task

Q8. Please give an example of a possible problem during its execution

Q9. Please give an example of an error committed during its execution

Q10. Please select from the list all the participants in this task

Q11. Please indicate who asks for its execution

Q12. Please indicate to whom the related result is communicated
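For tool-supported delivery, a generic question set like Q1-Q12 can be encoded as data and instantiated once per task item. The Python sketch below is illustrative only: the field names and answer types are assumptions, not TEMPEST's actual schema.

```python
# Illustrative encoding of the Q1-Q12 question set; the answer types
# ("rating", "datetime", ...) are assumptions, not TEMPEST's schema.
QUESTIONS = [
    ("Q1",  "rating",      "Degree of familiarity with this task"),
    ("Q2",  "rating",      "How frequently is this task executed?"),
    ("Q3",  "datetime",    "When was it executed for the last time?"),
    ("Q4",  "datetime",    "When will it be executed next time?"),
    ("Q5",  "multichoice", "All the possible contexts where it takes place"),
    ("Q6",  "free_text",   "Why does it have to be executed?"),
    ("Q7",  "free_text",   "A means to facilitate or to improve this task"),
    ("Q8",  "free_text",   "An example of a possible problem during execution"),
    ("Q9",  "free_text",   "An example of an error committed during execution"),
    ("Q10", "multichoice", "All the participants in this task"),
    ("Q11", "choice",      "Who asks for its execution?"),
    ("Q12", "choice",      "To whom is the related result communicated?"),
]

def questionnaire_for(task):
    """Instantiate the generic question set for one task item."""
    return [(qid, f"[{task}] {prompt}") for qid, _, prompt in QUESTIONS]

print(len(questionnaire_for("task-01")))  # 12 questions per task item
```

Keeping the questions as data is what makes the 29-item-by-12-question cross product on the next slide mechanical rather than manual.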

Page 18

Experimental setup

30 items, 4 key users, 3 shifts, 12 questions, 12 participants

29 items x 12 questions + 1 question: about 350 questions per participant

12 participants: about 4200 questions overall

9 days: about 40 questions a day per participant
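The study-size figures above can be reproduced arithmetically; the slide's 350, 4200, and 40 appear to be rounded values.

```python
# Reproducing the study-size arithmetic from the slide.
items = 29            # task items sampled (the checklist has 30)
questions = 12        # generic questions per item
extra = 1             # one additional question
participants = 12
days = 9

per_participant = items * questions + extra   # 349 (slide: ~350)
total = per_participant * participants        # 4188 (slide: ~4200)
per_day = per_participant / days              # ~38.8 (slide: ~40 a day)

print(per_participant, total, round(per_day, 1))
```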

Page 19

Step 2 – In situ data collection

Page 20
Page 21

TEMPEST

1. Prepare your material (questions and protocol)

2. Program sequences of questions

3. Create participants

4. Fire questions

5. Analyze answers
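The five-step workflow above can be modelled minimally as follows. This is an illustrative Python sketch with invented class and method names; it is not TEMPEST's actual API.

```python
# Hypothetical, minimal model of the five-step workflow above.
# Class and method names are illustrative -- NOT TEMPEST's real API.
class Study:
    def __init__(self):
        self.sequences = {}      # step 2: programmed question sequences
        self.participants = []   # step 3: created participants
        self.answers = []        # step 5: collected answers

    def program_sequence(self, name, questions):
        self.sequences[name] = list(questions)

    def add_participant(self, pid):
        self.participants.append(pid)

    def fire(self, name):
        """Step 4: deliver a sequence to every participant. In a real
        tool, answers would come back asynchronously from devices."""
        return [(pid, q) for pid in self.participants
                         for q in self.sequences[name]]

study = Study()
study.program_sequence("morning", ["Q1", "Q2"])   # step 2
study.add_participant("operator-1")               # step 3
study.add_participant("operator-2")
deliveries = study.fire("morning")                # step 4
print(len(deliveries))  # 2 participants x 2 questions = 4
```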

Page 22

Step 3 – Contextual observations/interviews

Page 23

Observations/interviews

Key functions: team leader, mobile operator, bath operator

Key aspects: communication flow, countdown of items, intra-team collaboration, problems or errors

Page 24

Results & discussion

Page 25

Challenge

Unfriendly work environment

Complex work organization: collaborative, distributed in space and time, rotating shifts

Page 26

With vs. without tool support

Analyst's efficiency
  With TEMPEST: increased productivity, increased accuracy
  Without TEMPEST: limited productivity, risk of mistakes

Analyst's workload
  With TEMPEST: automated and remote; safe and comfortable; structured process
  Without TEMPEST: manual and face-to-face; difficult and tedious; unstructured process

User's time and effort
  With TEMPEST: 38 hours overall in 9 days; 20 minutes a day per user
  Without TEMPEST: 36 hours overall (estimation); 3 hours per user (estimation)

Questions
  With TEMPEST: timely, with snooze option; rather non-intrusive
  Without TEMPEST: disruptive, intrusive

Answers
  With TEMPEST: complete results
  Without TEMPEST: fragmented results

Page 27

Requirements

Supporting tools:

Analyst configurability

Real-time monitoring and traceability of responses

On-the-fly adaptation of the sampling protocol

Data collection across platforms (responsiveness)

Task model hypothesis:

Guidelines for analysts

Mapping with the sampling protocol

Mapping with the responses
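As a sketch of what analyst configurability, response traceability, and on-the-fly protocol adaptation could look like in a supporting tool, the Python fragment below models the protocol as a mutable object. All names are hypothetical; no real tool's API is implied.

```python
# Hypothetical sketch of an analyst-configurable sampling protocol;
# names are illustrative, not any real tool's API.
from dataclasses import dataclass, field

@dataclass
class SamplingProtocol:
    prompts_per_day: int = 5
    snooze_minutes: int = 10
    active_questions: list = field(default_factory=list)
    responses: list = field(default_factory=list)  # real-time traceability

    def adapt(self, **changes):
        """On-the-fly adaptation: adjust parameters mid-study."""
        for key, value in changes.items():
            setattr(self, key, value)

    def response_rate(self, expected):
        """Monitor completeness of responses against the expected count."""
        return len(self.responses) / expected if expected else 0.0

p = SamplingProtocol(active_questions=["Q1", "Q8"])
p.adapt(prompts_per_day=4)           # analyst lowers the daily load
p.responses += ["a1", "a2", "a3"]    # answers streaming in
print(p.prompts_per_day, round(p.response_rate(4), 2))
```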

Page 28

Take away

Task Analysis Tool Support (TATS)

Method and TEMPEST

Feasibility and cost-efficiency of TATS

Requirements for conducting TATS

Page 30

Definition of the key users

Page 31

Divergences

Page 32

Convergences

Reasons to execute a task (Q6): instructions, cleanliness and quality

Means to improve the tasks (Q7): automation, better care of the zinc bath and new equipment

Problems (Q8): technical problems and accidents

Errors (Q9): related to manipulation of the zinc bath and lack of time

Page 33
Page 34
Page 35

“The questions interfered with my schedule”

Satisfaction questionnaire, 5-point Likert scale:

Shift A = 3.50, equally distributed between "neutral" and "agree"
Shift B = 2.67
Shift C = 2.25

Most of the participants (10/12) thought they had answered between 15 and 30 questions a day, while they actually answered about 40.