Experimental and Single-Subject Design
PSY440
May 27, 2008


Page 1: Experimental and Single-Subject Design PSY440 May 27, 2008

Experimental and Single-Subject Design

PSY440

May 27, 2008

Page 2:

Definitions:

Consider each of the following terms and generate a definition for each:

• Research

• Empirical

• Data

• Experiment

• Qualitative Research

Page 3:

Definitions

Research

1. Scholarly or scientific investigation or inquiry

2. Close and careful study (American Heritage Dictionary)

3. Systematic (step-by-step)

4. Purposeful (identify, describe, explain, predict)

Page 4:

Definitions

Empirical: Relying upon or derived from observation or experiment; capable of proof or verification by means of observation or experiment. (American Heritage Dictionary)

Data: Information; esp. information organized for analysis or used as the basis of a decision. (American Heritage Dictionary)

Experiment: A method of testing a hypothesized causal relationship between two variables by manipulating one variable and observing the effect of the manipulation on the second variable.

Page 5:

Overview of Experimental Design

Based on Alan E. Kazdin (1982), Single-Case Research Designs: Methods for Clinical and Applied Settings, Chapter IV.

Page 6:

Independent & Dependent Variables

The independent variable (IV) is the variable that is manipulated in an experiment.

The dependent variable (DV) is the variable that is observed to assess the effect of the manipulation of the IV.

What are some examples of IV’s and DV’s that might be studied experimentally?

Page 7:

Internal and External Validity

Internal validity refers to the extent to which a study is designed in a way that allows a causal relation to be inferred. Threats to internal validity raise questions about alternative explanations for an apparent association between the IV and DV.

External validity refers to the generalizability of the findings beyond the experimental context (e.g. to other persons, settings, assessment devices, etc).

Page 8:

Threats to Internal Validity

History

Maturation

Testing

Instrumentation

Statistical Regression

Attrition

Diffusion of Treatment

Page 9:

History

Any event other than the intervention occurring at the time of the experiment that could influence the results.

Example in intervention research: Participant is prescribed medication during the time frame of the psychosocial treatment

Other examples?

How can this threat be ruled out or reduced by the experimental design?

Page 10:

Maturation

Any change over time that may result from processes within the subject (as opposed to the IV)

Example: Client learns how to read more effectively, so starts behaving better during reading instruction.

Other examples?

How can this threat be ruled out or reduced by the experimental design?

Page 11:

Testing

Any change that may be attributed to effects of repeated assessment

Example: Client gets tired of filling out weekly symptom checklist measures, and just starts circling all 1’s or responding randomly.

Other examples?

How can this threat be ruled out or reduced by the experimental design?

Page 12:

Instrumentation

Any change that takes place in the measuring instrument or assessment procedure over time.

Example: Teacher’s report of number of disruptive incidents drifts over time, holding the student to a higher (or lower) standard than before.

Other examples?

How can this threat be ruled out or reduced by the experimental design?

Page 13:

Statistical Regression

Any change from one assessment occasion to another that might be due to a reversion of scores toward the mean.

Example: Clients are selected for a depression group based on high scores on a screening measure for depression. When their scores (on average) go down after the intervention, the change could be due simply to statistical regression (more on this later in the course).

How can this threat be ruled out or reduced by the experimental design?
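Regression to the mean can be demonstrated with a small simulation (an illustrative sketch, not part of the lecture). Each score is modeled as a stable true level plus random measurement error; a "high scorer" group is selected on the screening measure, and its retest mean drifts back toward the population mean with no intervention at all:

```python
import random

random.seed(1)

# Hypothetical model: true depression level is constant for each person,
# but every measurement adds independent random error.
n = 10_000
true_level = [random.gauss(50, 10) for _ in range(n)]
screen = [t + random.gauss(0, 10) for t in true_level]
retest = [t + random.gauss(0, 10) for t in true_level]

# Select the "high scorers" on the screening measure (roughly the top 10%),
# as a clinician might when forming a depression group.
cutoff = sorted(screen)[int(0.9 * n)]
selected = [i for i in range(n) if screen[i] >= cutoff]

mean_screen = sum(screen[i] for i in selected) / len(selected)
mean_retest = sum(retest[i] for i in selected) / len(selected)

# With no treatment at all, the selected group's retest mean falls back
# toward the population mean of 50 -- pure regression to the mean.
print(round(mean_screen, 1), round(mean_retest, 1))
```

Because the group was selected partly on lucky measurement error, that error does not repeat at retest, which is why a no-treatment control condition is needed to distinguish a real effect from this artifact.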

Page 14:

Selection Biases

Any differences between groups that are due to the differential selection or assignment of subjects to groups.

Example: Teachers volunteer to have their classes get social skills lessons, and their students are compared to students in classrooms where the teachers did not volunteer (both teacher and student effects may be present).

Other examples?

How can this threat be ruled out or reduced by the experimental design?

Page 15:

Attrition

Any change in overall scores between groups or in a given group over time that may be attributed to the loss of some of the participants.

Example: Clients who drop out of treatment are not included in the posttest assessment, which may inflate the treatment group's posttest score.

Other examples?

How can this threat be ruled out or reduced by the experimental design?

Page 16:

Diffusion of Treatment

The intervention is inadvertently provided to part or all of the control group, or at the times when the treatment should not be in effect.

Example: Teacher starts token economy before finishing the collection of baseline data.

Other examples?

How can this threat be ruled out or reduced by the experimental design?

Page 17:

Internal validity and single-subject designs

In single-subject research, the participant is compared to him/herself under different conditions (rather than comparing groups).

The participant serves as his/her own control.

Selection biases and attrition are automatically ruled out by these designs.

Well-designed single-subject experiments can rule out (or reduce) history, maturation, testing, instrumentation, and statistical regression.

Page 18:

Threats to External Validity

Generality Across:
• Participants
• Settings
• Response Measures
• Times
• Behavior Change Agents

• Reactive Experimental Arrangements
• Reactive Assessment
• Pretest Sensitization
• Multiple Treatment Interference

Page 19:

Generality Across Subjects, Settings, Responses, & Times

Results do not extend to participants, settings, behavioral responses, and times other than those included in the investigation

Example: Couple uses effective communication skills in session, but not at home.

Other examples?

How can this threat be ruled out or reduced by the experimental design?

Page 20:

Generality Across Behavior Change Agents

Intervention results do not extend to other persons who can administer the intervention (special case of previous item)

Example: Parents are able to use behavior modification techniques successfully but child care providers are not (child responds differently to different person)

Other examples?

How can this kind of threat to external validity be ruled out or reduced?

Page 21:

Reactive Experimental Arrangements

Participants may be influenced by their awareness that they are participating in an experiment or special program (demand characteristics)

Example: Social validity of treatment is enhanced by the association with a university

Other examples?

How can this threat be ruled out or reduced by the experimental design?

Page 22:

Reactive Assessment

The extent to which participants are aware that their behavior is being assessed and that this awareness may influence how they respond (Special case of reactive arrangements).

Example: Child complies with adult commands when the experimenter is observing, but not at other times.

Other examples?

How can this threat be ruled out or reduced by the experimental design?

Page 23:

Pretest Sensitization

Assessing participants before treatment sensitizes them to the intervention that follows, so they are affected differently by the intervention than persons not given the pretest.

Example: Pretest administered before parenting group makes participants pay attention to material more closely and learn more

Other examples?

How can this threat be ruled out or reduced by the experimental design?

Page 24:

Multiple Treatment Interference

When the same participant(s) are exposed to more than one treatment, the conclusions reached about a particular treatment may be restricted.

Example: Clients are getting pastoral counseling at church and CBT at a mental health center.

Other examples?

How can this threat be ruled out or reduced by the experimental design?

Page 25:

Evaluating a Research Design

No study is perfect, but some studies are better than others. One way to evaluate an experimental design is to ask the question:

How well does the design minimize threats to internal and external validity? (In other words, how strong a causal claim does it support, and how generalizable are the results?)

Page 26:

Internal/External Validity Trade-Off

Many designs that are well controlled (good internal validity) are more prone to problems with external validity (generality across settings, behavioral responses, interventionists, etc. may be more limited).

Page 27:

Random selection/assignment

Random: Chance of being selected is equal for each participant and not biased by any systematic factor

Group designs can reduce many of the threats to internal validity by using random assignment of participants to conditions.

They can (in theory) also limit some threats to external validity by using a truly random sample of participants (but how often do you actually see this?)
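The mechanics of random assignment can be sketched in a few lines (illustrative Python, not from the lecture): shuffle the participant list, then deal participants into conditions so each person has an equal, unbiased chance of landing in either group.

```python
import random

def randomly_assign(participants, conditions=("treatment", "control"), seed=None):
    """Minimal sketch: shuffle, then deal participants out like cards so
    group sizes stay balanced and assignment is unbiased."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    return {c: shuffled[i::len(conditions)] for i, c in enumerate(conditions)}

# Hypothetical participant IDs; the seed is fixed only for reproducibility.
groups = randomly_assign([f"P{i:02d}" for i in range(20)], seed=42)
print({c: len(ids) for c, ids in groups.items()})  # 10 per condition
```

Note the distinction the slide draws: this handles random *assignment* (internal validity); random *selection* of the participant pool from the population (external validity) is a separate, and rarer, step.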

Page 28:

Single-Subject Designs

More modest in generalizability claims.

Can be very strong (even perfect) in reducing threats to internal validity.

Examples:
• Reversal (ABAB)
• Multiple Baseline
• Changing Criterion
• Multiple Treatment

Page 29:

Reversal Design (ABAB)

A=Baseline

B=Treatment

A=Return to Baseline

B=Reintroduce Treatment
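The logic of the reversal design can be sketched numerically. The data below are invented for illustration (frequency of a hypothetical disruptive behavior per session); the causal case rests on the effect appearing in each B phase and reversing in each A phase:

```python
# Hypothetical ABAB data: disruptive incidents per session in each phase.
phases = {
    "A1": [9, 8, 9, 10, 8],   # baseline
    "B1": [4, 3, 3, 2, 3],    # treatment
    "A2": [8, 9, 8, 9, 9],    # return to baseline
    "B2": [3, 2, 2, 3, 2],    # treatment reintroduced
}

def mean(xs):
    return sum(xs) / len(xs)

means = {phase: mean(data) for phase, data in phases.items()}

# The effect is replicated within the subject: behavior improves in each
# B phase and reverts when treatment is withdrawn in A2.
effect_replicated = means["B1"] < means["A1"] and means["B2"] < means["A2"]
print(means, effect_replicated)
```

Because history or maturation would have to coincide with both the introduction and the withdrawal of treatment to mimic this pattern, the replicated reversal is what carries the causal inference.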

Page 30:

Reversal Design

Baseline:

Stable baseline allows stronger causal inference to be drawn

Stability refers to a lack of slope, and low variability (show examples on white board)

Page 31:

ABAB Design: Baseline

If trend is in reverse direction from expected intervention effect, that’s OK

If trend is not too steep and a very strong effect is expected, that may be OK

For reversal design, relatively low variability makes it easier to draw strong conclusions

Page 32:

Threats to internal validity?

History & maturation (return to baseline rules these out)

Testing, instrumentation, and statistical regression (return to baseline rules these out, but use of reliable measures, trained & monitored observers strengthens the design)

Selection bias & attrition (not an issue with single-subject research!)

Diffusion of treatment (a concern but can be controlled in some cases)

Page 33:

What about generalizability?

Can’t easily generalize beyond the case.

Need to replicate under different circumstances and with different participants, or follow up with a group design.

Page 34:

Disadvantages to Reversal Design

Diffusion of treatment: If the intervention "really worked," removing it should not result in a return to baseline behavior!

Ethical concerns: If the behavior change is clinically significant, it may be unethical to remove the intervention, even temporarily!

Page 35:

Multiple Baseline Design

Collect baseline data on more than one dependent measure simultaneously

Introduce interventions to target each dependent variable in succession

Page 36:

Multiple Baseline Design

Each baseline serves as a control for the other interventions being tested (each DV amounts to a "mini" AB experiment against its own extended baseline).

May be done with one participant when multiple behaviors are targeted, or with multiple participants, each receiving the same intervention in succession.

Page 37:

Example with one participant

Multiple baseline design across behaviors to increase assertive communication:
– Increase eye contact
– Increase speech volume
– Increase number of requests made
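The staggered schedule behind such a design can be sketched as follows (session counts and intervention start points are invented for illustration): all behaviors enter baseline together, and at any given session, behaviors whose intervention has not yet begun serve as controls.

```python
# Hypothetical multiple-baseline-across-behaviors schedule.
START_SESSIONS = {"eye_contact": 5, "speech_volume": 10, "requests": 15}
TOTAL_SESSIONS = 20

def phase(behavior, session):
    """Baseline until the behavior's staggered intervention start session."""
    return "intervention" if session >= START_SESSIONS[behavior] else "baseline"

schedule = {
    b: [phase(b, s) for s in range(1, TOTAL_SESSIONS + 1)]
    for b in START_SESSIONS
}

# At session 7, only eye_contact has entered intervention; the other two
# behaviors are still in baseline and act as within-subject controls.
print({b: schedule[b][6] for b in schedule})
```

If each behavior changes only when its own intervention begins, and not when the others do, history and maturation become implausible explanations.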

Page 38:

Example across participants

• More than one client receives the same intervention; baseline begins for all participants at the same time, and the intervention is introduced to each in succession, so that each participant serves as a control for the others.

• Example from textbook: intervention to increase appropriate mealtime behavior in preschoolers

Page 39:

Example across situations, settings, and time

• Measure target behavior in multiple settings, and introduce the intervention into each setting in succession, while collecting baseline data on all settings

Page 40:

Advantages of Multiple Baseline

+ No need to remove intervention

+ Interventions can be added gradually (practical utility in clinical setting)

Page 41:

Disadvantages of Multiple Baseline

- Interdependence of baselines (change in one may result in change in another)

- Inconsistent effects of interventions (some are followed by changes in behavior and others are not)

- Prolonged baselines (ethical & methodological concerns)

Page 42:

Changing Criterion Designs

Intervention phase requires increasing levels of performance at specified times.

If the performance level increases as expected over several intervals, this provides good evidence for a causal relationship between intervention and outcome.
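The evidence pattern for a changing criterion design can be sketched with invented data (the behavior, criterion levels, and observations below are hypothetical): performance should step up with each new criterion and meet it in every phase.

```python
# Hypothetical changing-criterion data: required minutes of daily exercise,
# with the criterion raised at the start of each new phase.
criteria = [10, 15, 20, 25]
observed = [
    [10, 11, 12, 10],   # phase 1, criterion 10
    [15, 16, 15, 17],   # phase 2, criterion 15
    [21, 20, 22, 20],   # phase 3, criterion 20
    [25, 26, 25, 27],   # phase 4, criterion 25
]

def phase_mean(xs):
    return sum(xs) / len(xs)

# Causal evidence: performance tracks each criterion as it changes...
tracks_criterion = all(
    phase_mean(data) >= crit for crit, data in zip(criteria, observed)
)
# ...and steps up only when the criterion does.
steps_up = all(
    phase_mean(observed[i]) < phase_mean(observed[i + 1])
    for i in range(len(observed) - 1)
)
print(tracks_criterion, steps_up)
```

A steady drift upward regardless of the criterion (maturation) would fail this pattern; it is the repeated lock-step between criterion changes and behavior changes that supports the causal claim.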

Page 43:

Multiple Treatment Designs

More than one treatment is implemented during the same phase, in a manner that allows the effects of the treatments to be compared to each other.