090511 Appleby Magna Overview Presentation

DESCRIPTION

Slides used as an introduction to the E-Learning Resources: Evaluation course at Appleby Magna on 11 May 2009, run by Martin Bazley on behalf of Renaissance East Midlands.

TRANSCRIPT

1. User testing and evaluation: why, how and when to do it

  • Evaluating and user testing
  • Appleby Magna Centre, 11 May 2009
  • Martin Bazley, Martin Bazley & Associates, www.martinbazley.com

2. Intro: Martin Bazley

  • Consultancy / websites / training / user testing - ICT4Learning.com (10+ yrs)
  • Chair of E-Learning Group for Museums
    • Previously:
  • E-Learning Officer, MLA South East (3yrs)
  • Science Museum, London, Internet Projects (7yrs)
  • Taught Science in secondary schools (8yrs)

3. Why evaluate websites?

  • Why do evaluation and user testing?
  • Isn't it really expensive and time-consuming?
  • Save money - avoid substantial, hurried redevelopment later in the project
  • Audience feedback improves the resource in various ways - new activity ideas, etc.
  • Demonstrate involvement of key stakeholders throughout the project

4. Making websites effective

  • 3 key success factors
  • Understanding the audience
  • Learning experience and learning outcomes right for the audience and clearly stated
  • Evaluation, esp. in the classroom or at home (observe in the natural habitat wherever possible)

5. Who for, what for ...

  • Who for? (audience)
    • Need to be clear from the start, e.g. for teachers of yr 5/6 in the local area with whiteboards
  • What real-world outcomes? (learning outcomes)
    • What will they learn or do as a result? e.g. plan a visit to the museum, learn that Romans wore funny clothes, discover that they enjoy using a digital camera
  • How will they use it? (learning experiences)
    • What do they actually do with the site? e.g. work online or need to print it? In pairs or alone? With or without teacher help?
  • Where, when and why will they use it?
    • context is important

6-15. Website evaluation and testing

  • Need to think ahead a bit:
    • what are you trying to find out?
    • how do you intend to test it?
    • why? what will you do as a result?
    • The 'Why?' should drive this process

16. Test early

  • Testing one user early on in the project is better than testing 50 near the end

17. When to evaluate or test and why

  • Before funding approval - project planning
  • Post-funding - project development
  • Post-project - summative evaluation

18. Testing is an iterative process

  • Testing isn't something you do once
  • Make something
  • => test it
  • => refine it
  • => test it again

19. Before funding - project planning

  • *Evaluation of other websites
    • Who for? What for? How will they use it? etc.
    • awareness raising: issues, opportunities
    • contributes to market research
    • possible elements, graphic feel etc
  • *Concept testing
    • check idea makes sense with audience
    • reshape project based on user feedback

  (Methods: Focus group; Research)

20-21. Post-funding - project development

  • *Concept testing
    • refine project outcomes based on feedback from intended users
  • Refine website structure
    • does it work for users?
  • *Evaluate initial look and feel
    • graphics, navigation, etc.

  (Methods: Focus group; Focus group; One-to-one tasks)

22-26. Post-funding - project development 2

  • *Full evaluation of a draft working version
    • usability AND content: do activities work, how engaging is it, what else could be offered, etc

  (Method: Observation of actual use of the website by intended users, using it for the intended purpose, in the intended context - classroom, workplace, library, home, etc.)

27-32.

  • Video clip: Moving Here - key ideas, not lesson plans

33-37. Post-funding - project development 3

  • Acceptance testing of the finished website
    • last-minute check, minor corrections only
    • often offered by web developers
  • Summative evaluation
    • report for funders, etc
    • learn lessons at project level for next time

38. Two usability testing techniques

  • 'Get it' testing
    • do they understand the purpose, how it works, etc.
  • Key task testing
    • ask the user to do something, watch how well they do
  • Ideally, do a bit of each, in that order

39-40. User testing - who should do it?

  • The worst person to conduct (or interpret) user testing of your own site is
    • you!
  • Beware of hearing what you want to hear
  • Useful to have an external viewpoint
  • The first 5 mins in a genuine setting tells you 80% of what's wrong with the site
  • etc

41. User testing - more info

  • User testing can be done cheaply - tips on how to do it are available (MLA SE guide): www.ICT4Learning.com/onlineguide

42.

  • Strengths and weaknesses of different data gathering techniques

43. Data gathering techniques

  • User testing - early in development and again near the end
  • Online questionnaires - emailed to people or linked from the website
  • Focus groups - best near the beginning of a project, or at the redevelopment stage
  • Visitor surveys - link online and real visits
  • Web stats - useful for long-term trends/events, etc.

44.

  • Need to distinguish between:
  • Diagnostics - making a project or service better
  • Reporting - to funders, or for advocacy

45. Online questionnaires

  • (+) once set up, they gather numerical and qualitative data with no further effort - given time, they can build up large datasets
  • (+) the datasets can be easily exported and manipulated, can be sampled at various times, and structured queries can yield useful results (see the sketch below)
  • (-) respondents are self-selected and this will skew results - best to compare with similar data from other sources, like visitor surveys
  • (-) the number and nature of responses may depend on how the online questionnaire is displayed and promoted on the website
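A minimal sketch of the export-and-query idea above, in Python with pandas; the file name and the columns (date, age_group, rating, comments) are assumptions for illustration only, not part of the original slides.

    # Minimal sketch: working with an exported online-questionnaire dataset.
    # The CSV file name and columns are hypothetical - substitute whatever
    # your questionnaire tool actually exports.
    import pandas as pd

    responses = pd.read_csv("questionnaire_export.csv", parse_dates=["date"])

    # Sample the dataset for a particular period, e.g. after a site change.
    recent = responses[responses["date"] >= "2009-01-01"]

    # A simple structured query: response counts and average rating by audience group.
    print(recent.groupby("age_group")["rating"].agg(["count", "mean"]))

    # Set the free-text comments aside for qualitative review alongside the numbers.
    recent["comments"].dropna().to_csv("comments_for_review.csv", index=False)

Handled this way, even a small export can be compared against visitor-survey data to check for the self-selection skew mentioned above.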

46. Focus groups

  • (+) can explore specific issues in more depth, yielding rich feedback
  • (+) possible to control participant composition to ensure a representative sample
  • (-) comparatively time-consuming (expensive) to organise and analyse
  • (-) yield qualitative data only - small numbers mean numerical comparisons are unreliable

47. Visitor surveys

  • (+) possible to control participant composition to ensure a representative sample
  • (-) comparatively time-consuming (expensive) to organise and analyse
  • (-) responses can be affected by various factors, including the interviewer, weather on the day, day of the week, etc., reducing the validity of numerical comparisons between museums

48. Web stats

  • (+) Easy to gather data - can decide what to do with it later
  • (+) Person-independent data generated - it is the interpretation, rather than the data themselves, which is subjective. This means others can review the same data and verify or amend the initial conclusions reached

49. Web stats

  • (-) Different systems generate different data for the same web activity - for example, the number of unique visits measured via Google Analytics is generally lower than that derived from server log files (see the sketch below)
  • (-) Metrics are complicated and require specialist knowledge to appreciate them fully
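The discrepancy mentioned above arises largely from how each system counts a "visitor": Google Analytics relies on JavaScript and cookies running in the browser (so it misses non-JS clients and excludes most bots), whereas server log files record every request the server handles. Below is a minimal, hypothetical sketch of one log-derived count, assuming an Apache/nginx "combined" format log at an assumed path; it is an illustration, not a recommended metric.

    # Minimal sketch: deriving a "unique visitors" figure from a web server
    # access log (combined log format assumed; the file path is hypothetical).
    # A count like this usually differs from Google Analytics, which measures
    # a different population via cookies and JavaScript.

    unique_visitors = set()
    with open("access.log") as log:
        for line in log:
            parts = line.split('"')
            if len(parts) < 6:
                continue                 # skip malformed lines
            ip = parts[0].split()[0]     # first field: client IP address
            user_agent = parts[5]        # last quoted field: user agent
            # Treating each (IP, user agent) pair as one "visitor" is a crude
            # proxy: it merges users behind shared IPs and includes crawlers.
            unique_visitors.add((ip, user_agent))

    print(f"Unique visitors (log-derived): {len(unique_visitors)}")

Run over the same period that Analytics reports, the two figures will rarely match, which is why the slides stress specialist knowledge when comparing systems.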

50. Web stats

  • (-) As the amount of off-website web activity increases (e.g. Web 2.0-style interactions), the validity of website stats decreases, especially for reporting purposes, but also for diagnostics
  • (-) Agreeing a common format for the presentation of data and analysis requires collaborative working to be meaningful

51. Who for, what for ...

  • Who for? (audience)
    • Need to be clear from the start, e.g. for teachers of yr 5/6 in the local area with whiteboards
  • What real-world outcomes? (learning outcomes)
    • What will they learn or do as a result? e.g. plan a visit to the museum, learn that Romans wore funny clothes, discover that they enjoy using a digital camera
  • How will they use it? (learning experiences)