

Mentor Self-efficacy and Perceived Program Support scale: M-SEPPS

Suzannah Vallejo Calvery, PhD

National Mentoring Summit
January 25, 2013

The Agenda

Down the Rabbit Hole: Lit Review and Design

Fun with Scales: Instrumentation

Psychometric Joy: Validity and Reliability

Back out of the looking glass: Implications and Applications

The Big Question

Funding is increasingly focused on:
Outcomes-based assessment
Best practices

Only proven interventions are receiving the funding necessary to implement solutions.

So what? Non-profit funding is increasingly tied to the proof of effectiveness an organization can provide.

Mentoring has historically been focused on putting more bodies in the field and not on assessing and ensuring the success of the intervention.

Best practices have been emerging for about 10 years, yet meta-analyses of mentoring studies indicate that best practices are not being implemented with fidelity in the field.

Research indicates that mentoring intervention benefits are negligible without best practices.

One of the biggest problems facing mentoring programs is mentor attrition. Some research indicates that mentor self-efficacy affects a mentor/tutor's commitment and motivation to continue. However, very few studies of mentor self-efficacy have been done.

This study isolates the best practice of program support for mentors and will aid the investigation of the relationship between mentor self-efficacy and program support.

Down the Rabbit Hole: Does Mentoring Work?

Best practices gleaned over time:
Match quality
Match length
Program infrastructure

2002 vs. 2011 findings of the DuBois et al. studies (2002, 2011)

What About the Mentor?

Dyadic construct with a monadic research base
Best practices tied to mentor self-efficacy

New Instrument Preparation & Validation

M-SEPPS Instrument

Research Questions:
What are the psychometric properties of the proposed measure?
Are there significant differences between demographic groups?

Instrument development and validation

A new instrument to assess the connection between mentor self-efficacy and perceived program support: the M-SEPPS.

The current study was designed to begin the validation process for a new instrument that targets building understanding of mentor self-efficacy, program support, and the possible interrelations between them.

Fun with Scales

1. Literature review
2. Item construction*
3. Pilot
4. Item refinement
5. Data collection
6. Analysis: assumptions, exploratory factor analysis, item analysis, reliability estimation

* Bandura, 2006; Fowler, 2009; Gefen & Straub, 2005; Netemeyer, Bearden, & Sharma, 2003; Nunnally, 1978; Nunnally & Bernstein, 1994; Pett, Lackey, & Sullivan, 2003.

Lit review: only two other studies on mentor self-efficacy; very few on the mentor experience.

Measures: one measure for teacher self-efficacy in the published literature; none for mentor self-efficacy; none for perceived program support.

Item construction: through literature recommendations, discussion with professionals, piloting, and refinement.

Study: conducted with 4 partner organizations over one school year.

Method


Original scale/item pool:
General self-efficacy
Personal teaching efficacy
Mentor/tutor self-efficacy
Program support

4 organizations, about 160 participants in the original pool.

27 items in the original scales, 4 subscales: 2 previously published, 2 self-devised according to best practices.

Principal Axis Factoring

104 participants in the remaining analysis
18 total items
3 latent constructs

Process: PAF (Pett, Lackey, & Sullivan, 2003; Tabachnick & Fidell, 2007)
Direct Oblimin rotation with a delta level of -.5*
5 original factors extracted, 3 retained
* Pett et al.

Multivariate outliers and missing data caused about 50 cases to be deleted.

PAF was chosen because its results account for shared variance but not error/unique variance, and variables are free to load on the most appropriate factor, whereas PCA analyzes total variance.

Factor 4 had only 2 items, Factor 5 only 1.
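The extraction step described above can be sketched with NumPy. This is a minimal, illustrative implementation of iterated principal axis factoring (my own sketch, not the study's code); the rotation step (direct oblimin) that would follow is omitted, and the function name and parameters are mine:

```python
import numpy as np

def principal_axis_factoring(data, n_factors, n_iter=100, tol=1e-6):
    """Iterated principal axis factoring: returns unrotated loadings.

    Starts from squared multiple correlations as communality estimates,
    then iterates eigendecompositions of the reduced correlation matrix.
    """
    R = np.corrcoef(data, rowvar=False)
    # Initial communalities: squared multiple correlations (SMC).
    h2 = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
    for _ in range(n_iter):
        R_reduced = R.copy()
        np.fill_diagonal(R_reduced, h2)              # communalities on the diagonal
        eigvals, eigvecs = np.linalg.eigh(R_reduced)
        top = np.argsort(eigvals)[::-1][:n_factors]  # largest eigenvalues first
        vals = np.clip(eigvals[top], 0.0, None)
        loadings = eigvecs[:, top] * np.sqrt(vals)
        h2_new = (loadings ** 2).sum(axis=1)         # updated communalities
        if np.max(np.abs(h2_new - h2)) < tol:
            return loadings, h2_new
        h2 = h2_new
    return loadings, h2
```

In practice a rotation (here, direct oblimin) is then applied to these unrotated loadings to make the factor structure interpretable.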

Psychometric Joy!

Reliability Estimates

Factor Correlations and Factor Alpha Coefficients for the M-SEPPS Scale

Factor                                   M       SD       1      2      3
1. General self-efficacy (n = 8)         64.21   8.091    .88
2. Perceived Program Support (n = 4)     32.53   4.973    .257   .83
3. Mentor/tutor self-efficacy (n = 6)    46.88   5.863    .511   .172   .78
Total Scale (n = 18)                     144     14.763   .892

Note: alpha coefficients appear on the diagonal; off-diagonal values are factor correlations.
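The alpha coefficients reported for each factor are Cronbach's alpha values. As a rough sketch of how such an estimate is computed from an item-score matrix (illustrative only; the function name and data layout are assumptions, not the study's code):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores.

    alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # per-item sample variances
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
```

Perfectly correlated items yield alpha = 1; uncorrelated items yield alpha near 0 (it can even be negative).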

Per Research Question #2, the original demographic variables were:

Age, Gender, Ethnicity, Level of education, Previous experience tutoring, Years tutoring.

Age was the only demographic variable with significant differences between levels on Factors 2 and 3.

The reliability estimates of the extracted factors show initial evidence of good reliability.

Test-retest reliability was not examined due to time constraints and limited access to participants.

Of note: the correlations between scales are high, perhaps too high between Factors 1 and 3.

Age had a significant effect on perceived program support, with younger mentor/tutors having significantly lower PPS than those in older age brackets.

Age also had a significant effect on mentor/tutor self-efficacy, with older mentor/tutors having significantly lower self-efficacy than younger groups.
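Group differences like these age effects are typically tested with a one-way ANOVA. A minimal sketch of the F statistic computation (assuming independent groups of scores; this is my illustration, not the study's analysis code):

```python
import numpy as np

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA across independent groups of scores."""
    data = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    grand_mean = data.mean()
    k, n = len(groups), data.size
    # Between-groups and within-groups sums of squares.
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g, dtype=float) - np.mean(g)) ** 2).sum()
                    for g in groups)
    ms_between = ss_between / (k - 1)   # between-groups mean square
    ms_within = ss_within / (n - k)     # within-groups mean square
    return ms_between / ms_within
```

A large F (relative to the F distribution with k-1 and n-k degrees of freedom) indicates that at least one group mean differs significantly.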

Back Out of the Looking Glass: Limitations and Future Research

Limitations:
Sample size
Test-retest reliability
Scale redundancy

Next Steps:
CFA
Larger sample
Scale reduction

And After That? Implications for Practice

Program evaluation
Dynamic program assessment
Building support for implementation of best practices

Thank you for attending.

Q & A

