What determines student satisfaction with university subjects? A choice-based approach
Twan Huybers, Jordan Louviere and Towhidul Islam
Seminar, Institute for Choice, UniSA, North Sydney, 23 June 2014
Overview
1. Introduction (student perceptions of teaching)
2. Study details (study design, data)
3. Findings (ratings instrument, choice experiment)
4. Conclusion
1. Introduction
Higher education practice: widespread use of student perceptions of subjects/teaching
Scholarly research, (contentious) issues: formative vs. summative; effects of grades, class size, etc.; teaching "effectiveness"/"quality"; value of student opinions
Student satisfaction
• Student as a customer?
• Satisfaction vs. effectiveness
• Overall summary item in evaluation instruments (incl. CEQ)
Contribution of study is methodological: use of DCE vs. ratings method (response styles)
Use of DCE in student evaluation:
• NOT an alternative to classroom evaluation exercises (although BWS Case 1 could be)
• Instead, DCE as a complementary approach
2. Study details
Evaluation items used in the study:
• Wording of 10 items derived from descriptions in Australian university student evaluation instruments
• Subject and teaching of the subject

Subject and teaching items used in the study
1. The aims of this subject (learning outcomes and expected standards) were clear to me.
2. The teacher communicated and explained clearly in face-to-face, online, written and other formats.
3. The teacher was easily approachable and contactable to assist with student problems and needs.
4. The subject was challenging and interesting.
5. I received prompt and appropriate feedback on my work in this subject.
6. The assessment methods and tasks in this subject were appropriate given the subject aims.
7. The subject was effective for developing my skills in critical thinking, analysing, problem solving and communicating.
8. My learning in this subject was well supported by appropriate and effective teaching and learning methods and activities.
9. The workload in this subject was appropriate to the achievement of the subject aims.
10. The subject provided effective opportunities for active student participation in learning activities.
Evaluation items used in the study:
• Wording of 10 items derived from descriptions in 14 student evaluation instruments
• Covers the subject and the teaching of the subject
• Possible confounds in descriptions ("teaching and learning", "methods and activities")
Reflects evaluation practice; same items for ratings and DCE
Two survey parts:
• Evaluation instrument (rating scales) ("instrument"); and
• Evaluation experiment (choices in a DCE) ("experiment")

We controlled for order of appearance of the instrument and the experiment, and for respondent focus: 4 versions of the survey in the study design
Study design
Version     Content of questionnaire              Sample size
Version 1   Experiment only                       80
Version 2   Instrument only                       80
Version 3   Experiment first, instrument second   80
Version 4   Instrument first, experiment second   80
Total                                             320
Total instrument (Versions 2, 3, 4)               240
Total experiment (Versions 1, 3, 4)               240
PureProfile panel: 320 respondents randomly assigned to the 4 study versions, December 2010
Participant screening:
• student at an Australian-based university during the previous semester
• completed at least two university subjects (classes) during that semester (to allow comparison between at least two subjects in the instrument)

Instrument:
• Names of all subjects in previous semester
• Most satisfactory ("best") and least satisfactory ("worst") subject nominated
• Each attribute for the "best" and "worst" subjects rated on a five-point scale: -2 to +2 (SD, D, neither D nor A, A, SA)
Experiment:
• Pairs of hypothetical subjects described by rating scale categories as attribute levels (range -2 to +2); ratings assumed to be the respondent's own ratings
• Each participant evaluated 20 pairs:
  - 8 pairs: OMEP from 4^10 (8 blocks from the 64 runs)
  - 12 pairs: OMEP from 2^10 (all 12 runs)
• 4-level OMEP: -2, -1, +1 and +2
• 2-level OMEP: -2, +2
• Subject A had constant, "neutral" ratings descriptions
• Subject B ratings as per the above experimental design
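As a rough illustration of the pairing logic (not the study's actual OMEP), the choice tasks can be sketched in Python: Subject A is held at the neutral all-zero profile while Subject B takes its levels from a design matrix. The design rows below are randomly drawn stand-ins for the real orthogonal plan.

```python
import random

ITEMS = 10  # ten evaluation items
random.seed(1)
# Stand-in rows for the 12-run 2-level design (levels -2/+2); the real
# study used an orthogonal main-effects plan, not random draws.
design_2level = [[random.choice([-2, 2]) for _ in range(ITEMS)]
                 for _ in range(12)]

def make_choice_task(b_row):
    """Pair the constant 'neutral' Subject A (all ratings 0)
    with a Subject B profile taken from the design."""
    return {"A": [0] * ITEMS, "B": list(b_row)}

tasks = [make_choice_task(row) for row in design_2level]
```

Because Subject A never varies, each task is fully described by Subject B's row, which simplifies the later estimation.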
Example of choice task in the choice experiment
Please carefully evaluate the aspect ratings for hypothetical subjects A and B and answer the question below.
Rating scale: -2 Strongly Disagree, -1 Disagree, 0 Neither Disagree nor Agree, +1 Agree, +2 Strongly Agree

Aspects of this subject                                                                                  A     B
1. The aims of this subject (learning outcomes and expected standards) were clear to me.                 0    -2
2. The teacher communicated and explained clearly in face-to-face, online, written and other formats.    0    -1
3. The teacher was easily approachable and contactable to assist with student problems and needs.        0    -1
4. The subject was challenging and interesting.                                                          0    +2
5. I received prompt and appropriate feedback on my work in this subject.                                0    -1
6. The assessment methods and tasks in this subject were appropriate given the subject aims.             0    +2
7. The subject was effective for developing my skills in critical thinking, analysing, problem solving and communicating.   0   -2
8. My learning in this subject was well supported by appropriate and effective teaching and learning methods and activities.   0   +1
9. The workload in this subject was appropriate to the achievement of the subject aims.                  0    -1
10. The subject provided effective opportunities for active student participation in learning activities.   0   +2

Which of the two subjects do you think would be more satisfactory to you?
Subject A / Subject B

This example indicates, for instance, that subject B provided more effective opportunities for active student participation in learning activities compared with subject A, but that the aims of the subject (learning outcomes and expected standards) were clearer to you for subject A than for subject B. So you will have to decide whether "opportunities for active student participation in learning activities" matters more to you than "the aims of this subject (learning outcomes and expected standards)", and whether the rating score differences are large enough to matter to you. In this example question, a person ticked Subject A, indicating that, overall, subject A would be more satisfactory than subject B. The following questions are just like the above example except that the ratings of the aspects of subject B differ. All you have to do is answer the question about each subject pair.
University student characteristics: sample and Australia
                                                  Sample (n=320)         Australia
Gender            Female                          54.7%                  55.8%
                  Male                            45.3%                  44.2%
Age (mean, median, range, std. deviation)         27.5, 25, 18-59, 7.7   26.5 (mean)
Study mode        Full-time                       71.6%                  66.6%
                  Part-time                       28.4%                  33.4%
Study level       Undergraduate                   69.7%                  76.1%
                  Postgraduate                    30.3%                  23.9%
Student origin    Domestic                        90.0%                  74.8%
                  International                   10.0%                  25.2%
Number of subjects in semester 2, 2010                                   n.a.
                  2                               31.3%
                  3                               19.7%
                  4                               41.6%
                  >4                              7.5%
University        Group of Eight                  30.0%                  29.6%
                  Other                           70.0%                  70.4%
Discipline        Sciences/Engineering            44.7%                  37.3%
                  Business/Social Sciences/Humanities   55.3%            62.7%
3. Findings

Item ratings, instrument (n=240)
Item                              Best subject        Worst subject
                                  Mean*     SD        Mean      SD
1. Subject aims                   1.088     0.908     0.158     1.090
2. Teacher communication          1.025     0.924     0.092     1.120
3. Teacher approachability        1.029     0.939     0.138     1.162
4. Challenging and interesting    1.183     0.882     0.075     1.122
5. Feedback to students           0.825     0.987     0.033     1.093
6. Assessment                     0.983     0.951     0.125     1.059
7. Skills development             0.996     0.861     0.108     1.085
8. Methods and activities         0.900     0.972    -0.042     1.078
9. Workload                       0.800     0.960     0.183     1.063
10. Student participation         0.900     0.927     0.100     0.989
* Rating scale from -2 (strongly disagree) to +2 (strongly agree)
ANOVA: equal means for the three study versions, so pooled
Binary logistic regression: Best (1) and Worst (0) subjects as DV, item ratings as IVs
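The instrument analysis can be sketched as follows. This is a hedged illustration on synthetic data (the real inputs are each respondent's item ratings for their nominated best and worst subjects); the data-generating weights below are assumptions chosen so that item 4, 'challenging and interesting', dominates, echoing the reported result.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 480  # 240 respondents x (best + worst subject), as in the study
# Synthetic stand-in ratings on the -2..+2 scale
X = rng.integers(-2, 3, size=(n, 10)).astype(float)
# Illustrative (assumed) data-generating process: item 4 (index 3)
# drives the best/worst label, with a smaller role for item 1
utility = 0.6 * X[:, 3] + 0.3 * X[:, 0] + rng.normal(0.0, 1.0, size=n)
y = (utility > 0).astype(int)  # 1 = 'best' subject, 0 = 'worst'

model = LogisticRegression().fit(X, y)
coefs = model.coef_[0]  # one estimate per evaluation item
```

With enough observations the fitted coefficients recover the relative importance of the items, which is the role the instrument regression plays in the study.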
Binary logistic regression, instrument
Item                              Parameter   Std. Error   p-value
1. Subject aims                    0.329       0.146        0.025
2. Teacher communication           0.083       0.166        0.615
3. Teacher approachability         0.160       0.156        0.306
4. Challenging and interesting     0.603       0.145       <0.001
5. Feedback to students           -0.047       0.144        0.746
6. Assessment                      0.315       0.155        0.041
7. Skills development              0.069       0.164        0.674
8. Methods and activities          0.252       0.157        0.109
9. Workload                       -0.376       0.158        0.017
10. Student participation          0.112       0.161        0.487
Constant                          -5.462       0.585       <0.001
Number of observations: 480
LL (constants only): -332.711
LL (model): -252.111
McFadden pseudo R-squared: 0.242
Instrument, best vs. worst subject:
• Four items discriminate
• One item (workload) with a counter-intuitive sign
• High correlation between ratings (for Best, Worst and Best minus Worst)
Experiment
• Responses from 12 individuals deleted (always chose A or always chose B)
• Mean choice proportion for each choice option in each pair, for each of the three study versions (for the common set of 12 pairs): high correlation with sample proportions (≈ 0.94)
→ study versions pooled

Conditional binary logit estimation
• First: 4-level linear vs. 4-level non-linear (effects coded); LR test: no statistical difference, so 2-level and 4-level designs pooled
• Conditional logit for all 20 pairs of 228 respondents
• Model fit and prediction accuracy (in-sample, out-of-sample): comparing, for each choice option in each pair, the mean choice proportion with the predicted choice probability
Binary conditional logit estimation, experiment, sample aggregate
Item                              Parameter   Std. Error   p-value
1. Subject aims                    0.139       0.017       <0.001
2. Teacher communication           0.165       0.017       <0.001
3. Teacher approachability         0.094       0.017       <0.001
4. Challenging and interesting     0.178       0.017       <0.001
5. Feedback to students            0.107       0.017       <0.001
6. Assessment                      0.122       0.017       <0.001
7. Skills development              0.148       0.017       <0.001
8. Methods and activities          0.132       0.017       <0.001
9. Workload                        0.118       0.017       <0.001
10. Student participation          0.079       0.017       <0.001
Constant                           0.231       0.032       <0.001
Number of observations: 4560
LL (constants only): -3134.589
LL (model): -2851.968
McFadden pseudo R-squared: 0.090
All item parameter estimates discriminate with respect to satisfaction

Most important to student satisfaction: 'the subject was challenging and interesting', closely followed by 'the teacher communicated and explained clearly in face-to-face, online, written and other formats'

Some results similar to Denson et al. (2010) (final 'overall satisfaction' item in a SET instrument as DV, explained by subject ratings), in particular: the "challenging and interesting nature of a subject" (most important) and the "opportunities for active student participation" item (least important)
Instrument vs. experiment (approximation): R² of parameter estimates = 0.18
Overall: the experiment better distinguishes the relative contributions of items, i.e. better "diagnostic power"
Note: higher number of observations in the experiment
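The R² of 0.18 can be checked directly from the two parameter columns reported above, squaring the Pearson correlation between the instrument and experiment estimates:

```python
import numpy as np

# Parameter estimates as reported in the two tables above
instrument = np.array([0.329, 0.083, 0.160, 0.603, -0.047,
                       0.315, 0.069, 0.252, -0.376, 0.112])
experiment = np.array([0.139, 0.165, 0.094, 0.178, 0.107,
                       0.122, 0.148, 0.132, 0.118, 0.079])

r = np.corrcoef(instrument, experiment)[0, 1]
r_squared = r ** 2  # lands near the 0.18 reported on the slide
```

The low R² is the point: the two methods rank the items quite differently, so the choice of elicitation method matters.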
Scale-Adjusted Latent Class Models (SALCM)
• Identifying preference heterogeneity (co-variates) and variance heterogeneity simultaneously
• BIC used for model selection

SALCM for 12 common pairs (2-level):
• One preference class
• Two scale classes: male students more variable in their choices than females

SALCM for master set of 64 pairs (4-level): similar results
SALCM estimates, 12 common pairs
Item                              Parameter   Std. Error   p-value
1. Subject aims                    1.852       0.465       <0.001
2. Teacher communication           1.824       0.546        0.001
3. Teacher approachability         1.086       0.401        0.007
4. Challenging and interesting     1.729       0.360       <0.001
5. Feedback to students            0.537       0.227        0.018
6. Assessment                      1.174       0.356        0.001
7. Skills development              1.429       0.331       <0.001
8. Methods and activities          0.865       0.228       <0.001
9. Workload                        1.704       0.407       <0.001
10. Student participation          0.687       0.347        0.048
Number of observations: 2736
LL (constants only): -1884.078
LL (model): -1652.007
Pseudo R-squared: 0.123
SALCM, 12 common pairs, choice proportions vs. choice probabilities
SALCM estimates from the master design pairs
Item                              Parameter   Std. Error   p-value
1. Subject aims                    0.367       0.044       <0.001
2. Teacher communication           0.394       0.046       <0.001
3. Teacher approachability         0.270       0.041       <0.001
4. Challenging and interesting     0.482       0.045       <0.001
5. Feedback to students            0.249       0.040       <0.001
6. Assessment                      0.290       0.038       <0.001
7. Skills development              0.377       0.041       <0.001
8. Methods and activities          0.306       0.040       <0.001
9. Workload                        0.394       0.045       <0.001
10. Student participation          0.170       0.037       <0.001
Number of observations: 4560
LL (constants only): -3134.589
LL (model): -2772.016
Pseudo R-squared: 0.116
SALCM, master pairs, choice proportions vs. choice probabilities
Individual-level model using WLS
Empirical distribution of individual-level item parameter estimates
• Using 12 pairs from common design
• Small sample size for estimation
• Quite a few negative parameter estimates

WLS individual estimates; descriptive statistics (n=228)
Item                              Min      Max      Mean     SE      Adj. t
1. Subject aims                  -0.173    0.347    0.049    0.005   33.948
2. Teacher communication         -0.173    0.289    0.043    0.006   24.826
3. Teacher approachability       -0.173    0.231    0.022    0.005   15.242
4. Challenging and interesting   -0.289    0.347    0.061    0.007   30.187
5. Feedback to students          -0.289    0.347    0.029    0.006   16.743
6. Assessment                    -0.231    0.231    0.039    0.006   22.517
7. Skills development            -0.173    0.231    0.041    0.006   23.671
8. Methods and activities        -0.231    0.289    0.041    0.006   23.671
9. Workload                      -0.347    0.289    0.039    0.006   22.517
10. Student participation        -0.231    0.289    0.015    0.005   10.392
Constant                         -0.693    0.693   -0.066    0.017  -13.449
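One plausible reading of the WLS step (an assumption, not confirmed by the slides) is a Berkson-style minimum-logit chi-square fit per respondent, with a continuity correction for the binary choices. A minimal sketch for one synthetic respondent, with a stand-in design and assumed true weights:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the 12 common 2-level pairs (the real study used an OMEP)
X = rng.choice([-2.0, 2.0], size=(12, 10))

def wls_individual(y, X):
    """Berkson-style WLS fit for one respondent. Each binary choice is
    mapped to an empirical logit with a 0.5 continuity correction;
    weights are p(1 - p). With single binary observations the weights
    are all equal, one reason 12-pair individual estimates are noisy."""
    p = (y + 0.5) / 2.0            # 0 -> 0.25, 1 -> 0.75
    z = np.log(p / (1.0 - p))      # empirical logit
    w = np.sqrt(p * (1.0 - p))     # sqrt-weights for least squares
    beta, *_ = np.linalg.lstsq(X * w[:, None], z * w, rcond=None)
    return beta

# One synthetic respondent's 12 choices (1 = chose Subject B)
true_beta = np.full(10, 0.05)      # assumed weights, illustrative only
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ true_beta))))
beta_hat = wls_individual(y.astype(float), X)
```

Fitting 10 parameters from 12 choices is exactly the "small size for estimation" problem noted above, which is why some individual estimates come out negative.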
Close correspondence between the results of Cond. Logit, SALCM-64 pairs (4-level), WLS and (slightly less so) SALCM-12 pairs (2-level)
4. Conclusion
Ratings instrument and choice experiment to establish individual contributions of subject aspects to satisfaction
The experiment provided greater discriminatory power
‘Challenging and interesting’ and ‘Teacher communication’ major drivers of satisfaction
‘Feedback’ and ‘Student participation’ among the least important ones
Methodological contribution to higher education literature: novel application of DCE to student evaluation
Combine quantitative results with qualitative feedback
Limitations/further research:
• Relatively small sample size
• Potential confounding in items
• Application at university program level