Research Problems – Spring 2013: Willcutt

RESEARCH PROBLEMS IN CLINICAL PSYCHOLOGY

PSYC 5423

Spring Semester, 2013

Instructor: Erik Willcutt, Ph.D.
Office: Muenzinger D-313C
Work phone: 303-492-3304
Home phone: 303-926-8844
Cell phone: 303-916-9148 (it's fine to call or send text messages to my cell)
Email: [email protected]
Schedule: Tuesday 11:30 - 2:00; Muenzinger D318
Website: http://psych.colorado.edu/~willcutt/res_meth/index.htm

COURSE DESCRIPTION

Course Goals: Psyc 5423 is an intensive, upper-level graduate course that provides a survey of research design, research criticism, and proposal writing. The first primary aim of the course is to enable students to become proficient with the fundamentals of research design. Primary topics will include methods for systematic literature reviews, hypothesis formulation and operationalization, inference testing, advantages and disadvantages of design alternatives, measurement and assessment strategies for clinical research, and selection and implementation of appropriate statistical procedures. The second overarching goal of the course is to facilitate the development of writing skills that will enable students to write empirical papers and research proposals that will be selected for publication or external funding.

Course Format: To accomplish these objectives, students will be exposed to information through class lectures, assigned readings, and class discussion. These sources of information are designed to complement one another. The readings will broadly cover topics related to research design. The content of the lectures will overlap with the readings to a certain extent, but will also provide specific context and applied examples that will facilitate the learning process. The course will focus heavily on the application of research design and will emphasize class discussion.

Readings: There is no textbook. The reading list is maintained online at http://psych.colorado.edu/~willcutt/resmeth_schedule.htm, and the website also includes a link to each paper. For most class sessions there will be 3 - 5 required readings. In addition, the list also includes a number of supplemental readings for each major content area (the supplemental readings are clearly indicated by bold headers on the reading list). The supplemental readings are not required for the course. One of my primary goals for this course is to provide you with a list of resources on different aspects of research design that will be useful to you later in graduate school and in your future career as an independent researcher. Therefore, for each topic we cover, the reading list includes a range of "classic" papers that provide useful overviews or different perspectives, along with papers that provide a useful discussion of more specific topics that are only relevant for some specific study designs. On a related note, I am always looking for useful new articles to add to the list, so if you find any articles that are especially helpful during the course or later in your training, please forward them to me.


COURSE REQUIREMENTS

I. Class attendance and participation:

I.A. Overall attendance and participation (20% of final grade): Although the content of the course requires much of our time to be devoted to presentation of information by lecture (especially early in the semester), I have structured the course to emphasize discussion as much as possible. Students are expected to read the assigned materials prior to class and to be prepared to discuss those materials during class. In addition, you will complete several (very) brief assignments to help you to consolidate the information presented in class. These will be announced during class.

I.B. Discussion leader (10% of final grade; can be completed anytime during the semester): Once during the semester each student will lead a discussion of an empirical paper that utilized one of the methods we cover during the course. Ideally, I would like each of you to choose the paper that you would like to present so that you can select a paper that is relevant to your own research interests, but I am also happy to provide suggestions. Once you select a paper, please email it to me for final approval, and then I will post it on the website so all of us can read it prior to the discussion.

II. Research reviews:

II.A. Manuscript review #1 (10% of final grade; due January 29th). Read Reynolds & Nicolson (2007) and prepare a "bullet point" critique summarizing your reaction to the paper (positive and negative) for discussion in class. Think about both the specific content / logic of the paper and your more general "gut-level" reactions to the style of presentation. I realize that you don't have detailed knowledge about this area of research - the goal of this assignment is just to get us all thinking about these issues, so please don't let this one stress you out.

II.B. Manuscript review #2 (10% of final grade; due February 26th). You will review an empirical paper that I will distribute in the format that is used for blind reviews for a clinical psychology journal. I will provide several sample reviews as examples before you are required to write your own review.

II.C. Grant review (10% of final grade; due approximately March 19th). You will each write a review of one full application for a National Research Service Award (NRSA) fellowship. We will discuss the NIH review format in detail before this assignment.

II.D. Review of popular press article (10% of final grade; can be completed anytime, due by April 30th). As you read articles in the popular press, watch for articles that make an "error in thinking" that is relevant to the issues covered in this course ("relevant" can be interpreted liberally). Newspapers, magazines, and the internet are all fine - just make a copy of the article or send along the link so I know what you read. In no more than 2 - 3 pages, describe the error that you spotted, and explain why it is an error. Then, in your role as empathic skeptic, discuss why you think the error was made. Things to think about could include:

1. Why does the error matter in the big picture?

2. Why was the author of the article susceptible to the error? Did the original source of the information play a role in the error? (i.e., did the author of the article just misinterpret the source, miss a subtle point, or frame the information from the source in a way that contradicts its content, or was the original source material misleading?)

3. Why might members of the public be susceptible to believing the error?

4. How would you change the story? Is the information in the article simply wrong, or is it a more subtle mistake that could be presented more appropriately by providing adequate context, discussion of caveats, etc.?

III. Foundation of NIH F31 individual fellowship proposal (30% of final grade; due May 7th): The final assignment is to write the framework for a proposal for an NIH Ruth L. Kirschstein National Research Service Award Individual Predoctoral Fellowship. In a perfect world each of you will use the final product from this course as the foundation for a submitted proposal for the NRSA deadline in December of your second year in the program. We will discuss the specific details of this assignment extensively throughout the semester. For now, this is a summary of the sections you will complete:

Full written draft: Abstract, Specific Aims, Significance
Detailed outline: Approach (including data analyses), Training Plan, Human Subjects


POLICIES

Students with a disability

If you qualify for accommodations because of a disability, please submit to your professor a letter from Disability Services in a timely manner (for exam accommodations provide your letter at least one week prior to the exam) so that your needs can be addressed. Disability Services determines accommodations based on documented disabilities. Contact Disability Services at 303-492-8671 or by e-mail at [email protected].

Honor Code

All students of the University of Colorado at Boulder are responsible for knowing and adhering to the academic integrity policy of this institution. Violations of this policy may include: cheating, plagiarism, aid of academic dishonesty, fabrication, lying, bribery, and threatening behavior. All incidents of academic misconduct shall be reported to the Honor Code Council ([email protected]; 303-735-2273). Students who are found to be in violation of the academic integrity policy will be subject to both academic sanctions from the faculty member and non-academic sanctions (including but not limited to university probation, suspension, or expulsion). Other information on the Honor Code can be found at http://www.colorado.edu/policies/honor.html and at http://honorcode.colorado.edu

Classroom behavior policy

Students and faculty each have responsibility for maintaining an appropriate learning environment. Those who fail to adhere to such behavioral standards may be subject to discipline. Professional courtesy and sensitivity are especially important with respect to individuals and topics dealing with differences of race, color, culture, religion, creed, politics, veteran's status, sexual orientation, gender, gender identity and gender expression, age, disability, and nationalities. Class rosters are provided to the instructor with the student's legal name. I will gladly honor your request to address you by an alternate name or gender pronoun. Please advise me of this preference early in the semester so that I may make appropriate changes to my records.

The University of Colorado Boulder (CU-Boulder) is committed to maintaining a positive learning, working, and living environment. The University of Colorado does not discriminate on the basis of race, color, national origin, sex, age, disability, creed, religion, sexual orientation, or veteran status in admission and access to, and treatment and employment in, its educational programs and activities. (Regent Law, Article 10, amended 11/8/2001). CU-Boulder will not tolerate acts of discrimination or harassment based upon Protected Classes or related retaliation against or by any employee or student. For purposes of this CU-Boulder policy, "Protected Classes" refers to race, color, national origin, sex, pregnancy, age, disability, creed, religion, sexual orientation, gender identity, gender expression, or veteran status. Individuals who believe they have been discriminated against should contact the Office of Discrimination and Harassment (ODH) at 303-492-2127 or the Office of Student Conduct (OSC) at 303-492-5550. Information about the ODH, the above referenced policies, and the campus resources available to assist individuals regarding discrimination or harassment can be obtained at http://hr.colorado.edu/dh/


TENTATIVE COURSE SCHEDULE AND ASSIGNMENTS

Note: The deadlines listed below are estimates, and will be confirmed as each date approaches. Specific topics that will be covered each week will be updated regularly on the website throughout the semester, and articles for each class session will be available online.

1/15: WHAT IS THIS CAREER THAT WE’VE CHOSEN? (AND WHY IN THE WORLD DID WE MAKE THAT CHOICE?)

REQUIRED READINGS: NONE

Supplemental readings: page 8 and http://psych.colorado.edu/~willcutt/resmeth_career.htm

1/22: TOPIC I: THE NRSA AND OTHER TRAINING GRANT OPTIONS AND GENERAL GRANT-WRITING TIPS

TOPIC II: COMBATING ILLOGICAL THINKING: CAUSAL INFERENCE AS A CANDLE IN THE DARK

REQUIRED READINGS:

Cohen, J. (1990). Things I have learned (so far). American Psychologist, 45, 1304-1312.

Platt, J. R. (1964). Strong inference. Science, 146, 347-353.

Shermer, M. (1997). How thinking goes wrong: Twenty-five fallacies that lead us to believe strange things. In Why People Believe Strange Things (pp. 44 - 61). New York: Freeman.

Supplemental readings on grant writing: page 8 and http://psych.colorado.edu/~willcutt/resmeth_grants.htm

Supplemental readings on causal inference: page 9 and http://psych.colorado.edu/~willcutt/resmeth_inference.htm

1/29: DEFINING YOUR RESEARCH QUESTION AND REVIEWING THE LITERATURE

**DUE: ASSIGNMENT II.A.: BULLET-POINT CRITIQUE OF REYNOLDS AND NICOLSON (2007)**

REQUIRED READINGS:

Bem, D. J. (1995). Writing a review article for Psychological Bulletin. Psychological Bulletin, 118, 172-177. [even if you never have the slightest interest in writing an article for Psychological Bulletin, this paper provides a nice summary of things to think about for any literature review you write.]

Wicker, A. W. (1985). Getting out of our conceptual ruts: Strategies for expanding conceptual frameworks. American Psychologist, 40, 1094-1103. [this paper begins to address the importance of theoretical models to guide both the literature review and new empirical studies]

Supplemental readings, literature reviews: page 10 and http://psych.colorado.edu/~willcutt/resmeth_reviews.htm

Supplemental readings, meta analyses: pages 11-13 and http://psych.colorado.edu/~willcutt/resmeth_meta.htm

2/5: ERIK OUT OF TOWN: MEET WITHOUT ME FOR DISCUSSION OF INITIAL DRAFTS OF YOUR AIMS

2/12: OVERVIEW OF STUDY DESIGN AND SAMPLING ISSUES

REQUIRED READINGS:

Clarke, G. N. (1995). Improving the transition from basic efficacy research to effectiveness studies: Methodological issues and procedures. Journal of Consulting & Clinical Psychology, 63, 718-725.

Hsu, L. M. (1989). Random sampling, randomization, and equivalence of contrasted groups in psychotherapy outcome research. Journal of Consulting & Clinical Psychology, 57, 131-137.

Wainer, H. (1999). The most dangerous profession: A note on nonrandom sampling error. Psychological Methods, 3, 250 - 256. [describes potentially important implications of the cases that do not get included by your sampling design]

Supplemental readings, design: page 14 and http://psych.colorado.edu/~willcutt/resmeth_design.htm

Supplemental readings, longitudinal studies: page 15 and http://psych.colorado.edu/~willcutt/resmeth_long.htm

Supplemental readings, treatment: page 16 and http://psych.colorado.edu/~willcutt/resmeth_intervention.htm


2/19: MULTICULTURAL ISSUES

REQUIRED READINGS:

Cohen, A. B. (2009). Many forms of culture. American Psychologist, 64, 194-204. [describes the dramatic oversimplification of many of our measures of ethnicity, race, and culture]

Hartung, C. M., & Widiger, T. A. (1998). Gender differences in the diagnosis of mental disorders: Conclusions and controversies of the DSM-IV. Psychological Bulletin, 123, 260-278. [systematic review of the implications of gender differences in the prevalence of psychopathology. Many of the take-home points also apply to research with minority groups and other specific populations]

Okazaki, S. & Sue, S. (1995). Methodological issues in assessment research with ethnic minorities. Psychological Assessment, 7, 367 - 375.

Smedley, A., & Smedley, B. D. (2005). Race as biology is fiction, racism as a social problem is real: Anthropological and historical perspectives on the social construction of race. The American Psychologist, 60, 16-26. [Key points about the biological meaning of our racial categorizations (especially the "census level" categories). This has important implications for interpretation of several prominent (and inflammatory) hypotheses about racial differences.]

Supplemental readings: page 17 and http://psych.colorado.edu/~willcutt/resmeth_culture.htm

2/26: MEASUREMENT: INTERNAL VALIDITY, RELIABILITY, AND SCALE DEVELOPMENT

REQUIRED READINGS:

Green, C. E., Chen, C. E., Helms, J. E., & Henze, K. T. (2011). Recent reliability reporting practices in Psychological Assessment: recognizing the people behind the data. Psychological Assessment, 23, 656-669.

Kraemer, H. C., Kupfer, D. J., Clarke, D. E., Narrow, W. E., & Regier, D. A. (2012). DSM-5: how reliable is reliable enough? American Journal of Psychiatry, 169, 13-15. [Position paper describing why a diagnosis of a mental disorder with low reliability is not necessarily invalid]

Meade, A. W., & Craig, S. B. (2012). Identifying careless responses in survey data. Psychological Methods, 17, 437-455. [A lot of detailed models here - just focus on big picture points]

Podsakoff, P. M., MacKenzie, S. B., Lee, J., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88, 879-903.

Willcutt, E. G., Boada, R., Riddle, M. W., Chhabildas, N. A., & Pennington, B. F. (2011). A parent-report screening questionnaire for learning difficulties in children. Psychological Assessment, 778 - 791. [Don't worry about the substantive details of our results - this paper just illustrates a number of the methods we will cover in class.]

Supplemental readings: pages 18-19 and http://psych.colorado.edu/~willcutt/resmeth_internal.htm
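As a concrete illustration of this week's reliability topic, the short Python sketch below (illustrative only, not a course requirement; the three-item questionnaire data are entirely hypothetical) computes Cronbach's alpha, the most common internal-consistency statistic in the scale-development literature:

```python
# Illustrative sketch: Cronbach's alpha for a small hypothetical scale.
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
# Rows are respondents, columns are items.
from statistics import variance

def cronbach_alpha(rows):
    k = len(rows[0])                       # number of items
    items = list(zip(*rows))               # transpose: one tuple per item
    item_vars = sum(variance(col) for col in items)
    total_var = variance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical responses from three people to a three-item questionnaire
responses = [
    [2, 3, 2],
    [4, 4, 5],
    [3, 2, 3],
]
print(round(cronbach_alpha(responses), 3))  # -> 0.871
```

Note that alpha rises as the items covary more strongly: if every respondent gave identical answers across the three items, the same function would return exactly 1.0.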

3/5: NO CLASS, ERIK OUT OF TOWN.

3/12: EXTERNAL VALIDITY PART I

**DUE BY FRIDAY 3/15: DRAFT OF AIMS**

REQUIRED READINGS:

Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56, 81-105.

Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281-302.

Strauss, M. E., & Smith, G. T. (2009). Construct validity: advances in theory and methodology. Annual Review of Clinical Psychology, 5, 1-25.

Willcutt, E. G., Boada, R., Riddle, M. W., Chhabildas, N. A., & Pennington, B. F. (2011). A parent-report screening questionnaire for learning difficulties in children. Psychological Assessment, 778 - 791. [Don't worry about the substantive details of our results - this paper just illustrates a number of the methods we will cover in class.]

Willcutt, E. G., Nigg, J. T., Pennington, B. F., Carlson, C. L., McBurnett, K., Rohde, L. A., Solanto, M. V., Tannock, R., & Lahey, B. B. (2012). Validity of DSM-IV attention-deficit/hyperactivity disorder symptom dimensions and subtypes. Journal of Abnormal Psychology, 121, 991 - 1010. [same as previous]

Supplemental readings: page 20 and http://psych.colorado.edu/~willcutt/resmeth_external.htm


3/19: EXTERNAL VALIDITY PART II

**DUE: ASSIGNMENT II.B.: RESEARCH CRITIQUE #2: RIND 1998**

REQUIRED READINGS: NO NEW READINGS. COMPLETE ANY FROM LAST WEEK IF NECESSARY.

3/26: NO CLASS, SPRING BREAK

4/2: STATISTICAL INFERENCE AND INTERPRETATION I: DISTRIBUTIONS, ASSUMPTIONS, AND VIOLATIONS

DUE: OUTLINE OF SIGNIFICANCE SECTION

DUE: ASSIGNMENT II.C.: CRITIQUE OF F31 PROPOSAL

Required readings:

DeCoster, J., Iselin, A. M., & Gallucci, M. (2009). A conceptual and empirical examination of justifications for dichotomization. Psychological Methods, 14, 349-366. [examines researchers' justifications for dichotomization, then tests whether their rationale is supported empirically. Read this one for big picture points, and don't worry about the details of the simulation models.]

MacCallum, R. C., Zhang, S., Preacher, K. J., & Rucker, D. D. (2002). On the practice of dichotomization of quantitative variables. Psychological Methods, 7, 19-40. [A second examination of issues around dichotomization. Again focus on the big picture points.]

Scarr, S. (1997). Rules of evidence: A larger context for the statistical debate. Psychological Science, 8, 16-17.

Wilcox, R. R. (1998). How many discoveries have been lost by ignoring modern statistical methods? American Psychologist, 53, 300-314. [summarizes the loss of power (and potential implications for validity) if "old school" statistical procedures are applied when assumptions are not met]

Wilkinson, L., & the Task Force on Statistical Inference. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54, 594-604.

Supplemental readings: pages 21-22 and http://psych.colorado.edu/~willcutt/resmeth_statinf.htm
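The core point of the dichotomization readings is easy to demonstrate by simulation. The Python sketch below (my illustration, not part of the reading list; the sample size and the true correlation of .50 are arbitrary choices) median-splits a continuous predictor and shows how the observed correlation with the outcome is attenuated:

```python
# Illustrative simulation: median-splitting a continuous predictor
# attenuates its observed correlation with an outcome (the core point
# of MacCallum et al., 2002). Sample size and true r are arbitrary.
import math
import random

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy)

random.seed(1)
n = 20000
x = [random.gauss(0, 1) for _ in range(n)]
# Outcome constructed to have a true correlation of .50 with x
y = [0.5 * a + math.sqrt(1 - 0.25) * random.gauss(0, 1) for a in x]

median_x = sorted(x)[n // 2]
x_split = [1 if a > median_x else 0 for a in x]  # "high" vs. "low" groups

r_cont = pearson_r(x, y)        # close to the true value of .50
r_dich = pearson_r(x_split, y)  # attenuated to roughly .40
print(round(r_cont, 2), round(r_dich, 2))
```

For a normally distributed predictor, the expected attenuation factor for a median split is sqrt(2/pi), about .80, which is why the dichotomized correlation lands near .40 here.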

4/9: STATISTICAL INFERENCE AND INTERPRETATION II: SIGNIFICANCE TESTS, CONFIDENCE INTERVALS, AND EFFECT SIZES

DUE: ROUGH DRAFT OF DETAILED OUTLINE OF APPROACH

Required readings:

Cohen, J. (1994). The earth is round (p < .05). American Psychologist, 49, 997-1003. [Position paper by one of the most influential methodologists in our field. Argues against the value of null hypothesis significance testing.]

Cumming, G., & Finch, S. (2005). Inference by eye: confidence intervals and how to read pictures of data. American Psychologist, 60, 170-180. [focus on the big picture points about the utility of confidence intervals to simultaneously illustrate a point estimate of an effect size and the likely range of error around the estimate]

Krueger, J. (2001). Null hypothesis significance testing: On the survival of a flawed method. American Psychologist, 56, 16-26. [summarizes some of the key arguments that have been advanced for and against null hypothesis significance testing]

Schmidt, F. L. (1996). Statistical significance testing and cumulative knowledge in psychology: Implications for training of researchers. Psychological Methods, 1, 115-129. [argues against null-hypothesis significance testing and in favor of point estimates of effect size with confidence intervals]

Schmidt, F., & Hunter, J. (2002). Are there benefits from NHST? American Psychologist, 57, 65-71. [a brief response to Krueger]

Supplemental readings: pages 23-24 and http://psych.colorado.edu/~willcutt/resmeth_signif.htm
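The effect-size-plus-confidence-interval reporting style that several of these readings advocate involves only simple arithmetic. The Python sketch below (my illustration, not course material; the group means, SDs, and ns are hypothetical, and the standard error of d uses the common large-sample approximation) computes Cohen's d for two independent groups with a 95% confidence interval:

```python
# Illustrative sketch: Cohen's d for two independent groups, with an
# approximate 95% CI (large-sample normal approximation). The standard
# error uses the common approximation:
#   se = sqrt((n1+n2)/(n1*n2) + d^2 / (2*(n1+n2)))
import math

def cohens_d_with_ci(m1, s1, n1, m2, s2, n2, z=1.96):
    # Pooled standard deviation
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

# Hypothetical example: two groups of 50, means 105 vs. 100, SDs of 15
d, (lo, hi) = cohens_d_with_ci(105, 15, 50, 100, 15, 50)
print(round(d, 2), round(lo, 2), round(hi, 2))  # -> 0.33 -0.06 0.73
```

Note that the interval includes zero: a point estimate of d = .33 based on 50 participants per group would not reach significance at the .05 level, which illustrates why these readings argue that the interval conveys more than the significance test alone.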


4/16: STATISTICAL INFERENCE AND INTERPRETATION III: SPECIFIC PROCEDURES AND STATISTICAL POWER

DUE: FULL DRAFT OF DETAILED OUTLINE OF APPROACH

G*Power3 computer program: http://www.psycho.uni-duesseldorf.de/abteilungen/aap/gpower3/

Required readings:

Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155-159. [overview of power by one of the most influential figures in this specific area. Includes his take on estimates of small, medium, and large effect sizes]

Hallahan, M., & Rosenthal, R. (1996). Statistical power: Concepts, procedures, and applications. Behaviour Research and Therapy, 34, 489-499. [Nice overview of the issues with suggestions regarding ways to increase power]

Supplemental readings, statistical power: page 24 and http://psych.colorado.edu/~willcutt/resmeth_power.htm

Supplemental readings, regression: page 25 and http://psych.colorado.edu/~willcutt/resmeth_regression.htm

Supplemental readings, factor analysis: page 25 and http://psych.colorado.edu/~willcutt/resmeth_latent.htm

Supplemental readings on longitudinal designs: http://psych.colorado.edu/~willcutt/resmeth_long.htm

Supplemental readings on intervention: http://psych.colorado.edu/~willcutt/resmeth_intervention.htm

Supplemental readings on meta analysis: http://psych.colorado.edu/~willcutt/resmeth_meta.htm
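To demystify the kind of calculation G*Power automates, here is a rough Python sketch (my illustration, not course material) of the normal approximation to power for a two-sided, two-sample t-test; it ignores the small-sample t correction, so treat the result as approximate:

```python
# Illustrative sketch: approximate power of a two-sided, two-sample t-test
# via the normal approximation (the kind of calculation G*Power automates).
#   power ~ Phi(d * sqrt(n/2) - z_crit)
import math
from statistics import NormalDist

def approx_power_two_sample(d, n_per_group, alpha=0.05):
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)      # two-tailed critical value
    ncp = d * math.sqrt(n_per_group / 2)   # approximate noncentrality
    return z.cdf(ncp - z_crit)

# Cohen's (1992) benchmark: a "medium" effect (d = .5) needs roughly
# 64 participants per group for power of about .80 at alpha = .05
print(round(approx_power_two_sample(0.5, 64), 2))  # -> 0.81
```

Doubling the effect size or increasing n per group both raise the computed power, which is the tradeoff Cohen's tables summarize.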

4/23: ETHICAL ISSUES IN RESEARCH

DUE: ROUGH DRAFT OF OUTLINE OF TRAINING PLAN

Required readings:

Pachter, W. S., Fox, R. E., Zimbardo, P., & Antonuccio, D. O. (2007). Corporate funding and conflicts of interest: a primer for psychologists. American Psychologist, 62, 1005-1015. [report and recommendations of an APA Task Force on conflicts of interest in research]

Fine, M. A., & Kurdek, L. A. (1993). Reflections on determining authorship credit and authorship order on faculty-student collaborations. American Psychologist, 48, 1141-1147. [some suggestions may be a little too rigid, but brings up key points regarding a potentially awkward issue]

Supplemental readings, ethical issues: page 26 and http://psych.colorado.edu/~willcutt/resmeth_ethics.htm

4/30: DISSEMINATION OF RESULTS

DUE: ASSIGNMENT II.D.: REVIEW OF POPULAR PRESS ARTICLE

Required readings:

APA Publications and Communications Board Working Group on Journal Article Reporting Standards. (2008). Reporting standards for research in psychology: Why do we need them? What might they be? American Psychologist, 63, 839-851. [APA publication describing the rationale and requirements for reporting of research]

Kazdin, A. E. (1995). Preparing and evaluating research reports. Psychological Assessment, 7, 228-237. [hopefully this one will be largely review for you. He provides a nice summary of the components of an effective research report.]

Lilienfeld, S. O. (2012). Public skepticism of psychology: why many people perceive the study of human behavior as unscientific. American Psychologist, 67, 111-129. [excellent synopsis of the issues faced by psychologists when we present our results to those outside the field. May tie in nicely with your assignment on an error in thinking in a popular press article]

Supplemental readings, dissemination: page 27 and http://psych.colorado.edu/~willcutt/resmeth_dissem.htm

5/7: DUE: ASSIGNMENT III: FINAL DRAFT OF ALL PARTS OF F31 REQUIRED FOR THIS COURSE


FULL READING LIST (Once again, please note that the supplemental readings are not required for the course)

-----------------------------------------------------------------------------------------------------------------------------

CAREER DEVELOPMENT AND TRAINING

JANUARY 15, 2013

(webpage with links to most papers: http://psych.colorado.edu/~willcutt/resmeth_career.htm)

Supplemental readings:

Aiken, L. S., West, S. G., & Millsap, R. E. (2008). Doctoral training in statistics, measurement, and methodology in psychology: replication and extension of Aiken, West, Sechrest, and Reno's (1990) survey of PhD programs in North America. American Psychologist, 63, 32-50.

Aiken, L. S., West, S. G., & Millsap, R. E. (2009). Improving training in methodology enriches the science of psychology. American Psychologist, 64, discussion 51-52.

Bem, D. J. (1995). Writing a review article for Psychological Bulletin. Psychological Bulletin, 118, 172-177.
Bray, J. H. (2010). The future of psychology practice and science. American Psychologist, 65, 355-369.
Calof, A. (1999). Grant-writing amnesia. Current Biology, R869.
Eissenberg, T. (2003). Teaching successful grant writing to psychology graduate students. Teaching of Psychology, 30, 328 - 330.
Hitt, E. (2008). Seeking the skills for a successful career in academia. Science Careers, 499 - 502.
Illes, J. (1999). The strategic grant-seeker: A guide to conceptualizing fundable research in the brain and behavioral sciences (chapter 6 and chapter 9). Lawrence Erlbaum: London.
Nickerson, R. S. (2005). What researchers want from journal editors and reviewers. American Psychologist, 661-662.
Oetting, E. R. (1986). Ten fatal mistakes in grant writing. Professional Psychology: Research and Practice, 17, 570 - 573.
Rasey, J. S. (1999). The art of grant writing. Current Biology, R387.
Roberts, M. C. (2006). Essential tension: specialization with broad and general training in psychology. American Psychologist, 61, 862-870.
Zerhouni, E. (2003). The NIH Roadmap. Science, 302, 63 - 72.
Zimiles, H. (2009). Ramifications of increased training in quantitative methodology. American Psychologist, 64, 51.

-----------------------------------------------------------------------------------------------------------------------------

GRANT WRITING
JANUARY 22, 2013

(webpage with links to most papers: http://psych.colorado.edu/~willcutt/resmeth_grants.htm)

Supplemental readings:

Bordage, G., & Dawson, B. (2003). Experimental study design and grant writing in eight steps and 28 questions. Medical Education, 37, 376 - 385.
Calof, A. (1999). Grant-writing amnesia. Current Biology, R869.
Eissenberg, T. (2003). Teaching successful grant writing to psychology graduate students. Teaching of Psychology, 30, 328 - 330.
Illes, J. (1999). The strategic grant-seeker: A guide to conceptualizing fundable research in the brain and behavioral sciences (chapter 6 and chapter 9). Lawrence Erlbaum: London.
Marsh, H. W., Jayasinghe, U. W., & Bond, N. W. (2008). Improving the peer-review process for grant applications: reliability, validity, bias, and generalizability. American Psychologist, 63, 160-168.
Oetting, E. R. (1986). Ten fatal mistakes in grant writing. Professional Psychology: Research and Practice, 17, 570 - 573.
Rasey, J. S. (1999). The art of grant writing. Current Biology, R387.
Zerhouni, E. (2003). The NIH Roadmap. Science, 302, 63 - 72.

COMBATING ILLOGICAL THINKING: CAUSAL INFERENCE AS A CANDLE IN THE DARK JANUARY 22, 2013

(webpage with links to most papers: http://psych.colorado.edu/~willcutt/resmeth_inference.htm)

Required readings:
Cohen, J. (1990). Things I have learned (so far). American Psychologist, 45, 1304-1312.
Platt, J. R. (1964). Strong inference. Science, 146, 347-353.
Shermer, M. (1997). How thinking goes wrong: Twenty-five fallacies that lead us to believe strange things. In Why People Believe Strange Things (pp. 44-61). New York: Freeman.

Supplemental readings: Causal inference

Erceg-Hurn, D. M., & Mirosevich, V. M. (2008). Modern robust statistical methods: an easy way to maximize the accuracy and power of your research. American Psychologist, 63, 591-601.

Jo, B. (2008). Causal inference in randomized experiments with mediational processes. Psychological Methods, 13, 314-336.

Rubin, D. B. (2010). Reflections stimulated by the comments of Shadish (2010) and West and Thoemmes (2010). Psychological Methods, 15, 38-46.

Scarr, S. (1997). Rules of evidence: A larger context for the statistical debate. Psychological Science, 8, 16-17.
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.
Shadish, W. R. (2010). Campbell and Rubin: A primer and comparison of their approaches to causal inference in field settings. Psychological Methods, 15, 3-17.
West, S. G., & Thoemmes, F. (2010). Campbell's and Rubin's perspectives on causal inference. Psychological Methods, 15, 18-37.

Supplemental readings: Causal inference in the popular press

Gigerenzer, G. (2002). Calculated risks: How to know when numbers deceive you. New York: Simon & Schuster.
Paulos, J. A. (1995). A mathematician reads the newspaper. New York: Basic Books.
Paulos, J. A. (2001). Innumeracy. New York: Hill and Wang.
Sagan, C. (1996). The demon-haunted world: Science as a candle in the dark. New York: Random House.
Salsburg, D. (2001). The lady tasting tea: How statistics revolutionized science in the twentieth century. New York: W. H. Freeman.
Schick, T., & Vaughn, L. (1999). How to think about weird things. Mountain View, CA: Mayfield Publishing Company.
Silver, N. (2012). The signal and the noise. New York: Penguin Press.
Stanovich, K. E. (2001). How to think straight about psychology (6th ed.). Boston, MA: Allyn and Bacon.
Tal, J. (2001). Reading between the numbers: Statistical thinking in everyday life. New York: McGraw-Hill.
Taleb, N. N. (2004). Fooled by randomness: The hidden role of chance in life and in the markets. New York: Random House.
Taleb, N. N. (2010). The black swan: The impact of the highly improbable (2nd ed.). New York: Penguin.

Supplemental readings: the role of theory

Cacioppo, J. T., Semin, G. R., & Berntson, G. G. (2004). Realism, instrumentalism, and scientific symbiosis: Psychological theory as a search for truth and the discovery of solutions. American Psychologist, 214-223.

Meehl, P. E. (1967). Theory-testing in psychology and physics: A methodological paradox. Philosophy of Science, 34, 103-115.

Meehl, P. E. (1996). Appraising and amending theories: The strategy of Lakatosian defense and two principles that warrant it. Psychological Inquiry, 1, 108-141.

Meehl, P. E. (1990). Why summaries of research on psychological theories are often uninterpretable. Psychological Reports, 66, 195-244.

Meehl, P. E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. Journal of Consulting and Clinical Psychology, 46, 806-834.

Roberts, S. & Pashler, H. (2000). How persuasive is a good fit? A comment on theory testing. Psychological Review, 107, 358-367.

Trafimow, D. (2003). Hypothesis testing and theory evaluation at the boundaries: Surprising insights from Bayes's theorem. Psychological Review, 110, 526-535.


Supplemental readings: Philosophy of science
Dar, R. (1987). Another look at Meehl, Lakatos, and the scientific practices of psychologists. American Psychologist, 42, 145-151.
Gough, B., & Madill, A. (2012). Subjectivity in psychological science: From problem to prospect. Psychological Methods, 17, 374-384.
Haig, B. D. (2008). How to enrich scientific method. American Psychologist, 63, 565-566.
Lau, H. C. (2007). Should scientists think? Comment on Machado and Silva (2007). American Psychologist, 62, 686-688.
Machado, A., & Silva, F. (2007). Toward a richer view of the scientific method: The role of conceptual analysis. American Psychologist, 62, 671-681.
Meehl, P. E. (1993). Philosophy of science: Help or hindrance? Psychological Reports, 72, 707-733.
Sternberg, R. J., & Grigorenko, E. L. (2001). Unified psychology. American Psychologist, 56, 1069-1079.
Wampold, B. E., Davis, B., & Good, R. H. (1990). Hypothesis validity of clinical research. Journal of Consulting and Clinical Psychology, 58, 360-367.
Wilcox, R. R. (1998). How many discoveries have been lost by ignoring modern statistical methods? American Psychologist, 53, 300-314.

-----------------------------------------------------------------------------------------------------------------------------

SUMMARIZING THE LITERATURE JANUARY 29, 2013

(webpage with links to most papers: http://psych.colorado.edu/~willcutt/resmeth_reviews.htm)

Required readings:
Bem, D. J. (1995). Writing a review article for Psychological Bulletin. Psychological Bulletin, 118, 172-177. [even if you never have the slightest interest in writing an article for Psychological Bulletin, this paper provides a nice summary of things to think about for any literature review you write.]

Wicker, A. W. (1985). Getting out of our conceptual ruts: Strategies for expanding conceptual frameworks. American Psychologist, 40, 1094-1103. [this paper begins to address the importance of theoretical models to guide both the literature review and new empirical studies]

Supplemental readings: Literature review methodology

Cooper, H., & Koenka, A. C. (2012). The overview of reviews: unique challenges and opportunities when research syntheses are the principal elements of new integrative scholarship. American Psychologist, 67, 446-462.

Fehrmann, P., & Thomas, J. (2011). Comprehensive computer searches and reporting in systematic reviews. Research Synthesis Methods, 2, 15-32.

Hopewell, S., Clarke, M., Lusher, A., Lefebvre, C., & Westby, M. (2002). A comparison of handsearching versus MEDLINE searching to identify reports of randomized controlled trials. Statistics in Medicine, 21, 1625-1634.

Thomson, D., Russell, K., Becker, L., Klassen, T., & Hartling, L. (2010). The evolution of a new publication type: Steps and challenges of producing overviews of reviews. Research Synthesis Methods, 1, 198-211.

Supplemental readings: Literature review interpretation

Ioannidis, J. P. A. (2010). Meta-research: The art of getting it wrong. Research Synthesis Methods, 1, 169-184.
Meehl, P. E. (1990). Why summaries of research on psychological theories are often uninterpretable. Psychological Reports, 66, 195-244.


META-ANALYSES JANUARY 29, 2013

(webpage with links to most papers: http://psych.colorado.edu/~willcutt/resmeth_meta.htm)

Supplemental readings: Overviews and critiques of meta-analysis
Cohn, L. D., & Becker, B. J. (2003). How meta-analysis increases statistical power. Psychological Methods, 8, 243-253.
Cooper, H., & Koenka, A. C. (2012). The overview of reviews: Unique challenges and opportunities when research syntheses are the principal elements of new integrative scholarship. American Psychologist, 67, 446-462.
Ferguson, C. J., & Brannick, M. T. (2012). Publication bias in psychological science: Prevalence, methods for identifying and controlling, and implications for the use of meta-analyses. Psychological Methods, 17, 120-128.
Hofer, S. M., & Piccinin, A. M. (2009). Integrative data analysis through coordination of measurement and analysis protocol across independent longitudinal studies. Psychological Methods, 14, 150-164.
Ioannidis, J. P. A. (2010). Meta-research: The art of getting it wrong. Research Synthesis Methods, 1, 169-184.
Meehl, P. E. (1990). Why summaries of research on psychological theories are often uninterpretable. Psychological Reports, 66, 195-244.
Normand, S. T. (1999). Tutorial in biostatistics: Meta-analysis: Formulating, evaluating, combining, and reporting. Statistics in Medicine, 18, 321-359.
Rosenthal, R. (1984). Meta-analytic procedures for social research. Beverly Hills, CA: Sage.
Schimmack, U. (2012). The ironic effect of significant results on the credibility of multiple-study articles. Psychological Methods.
Sutton, A. J., & Higgins, J. P. (2008). Recent developments in meta-analysis. Statistics in Medicine, 27, 625-650.

Supplemental readings: Estimating effect sizes

Borenstein, M. (2009). Effect sizes for continuous data. In H. Cooper, L. V. Hedges & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (pp. 279-293). New York: Russel Sage Foundation.

Deeks, J. J. (2002). Issues in the selection of a summary statistic for meta-analysis of clinical trials with binary outcomes. Statistics in Medicine, 21, 1575-1600.

Gillett, R. (2003). The metric comparability of meta-analytic effect-size estimators from factorial designs. Psychological Methods, 8, 419-433.

Hedges, L. (1981). Distributional theory for Glass' estimator of effect size and related estimators. Journal of Educational Statistics, 6, 107-128.

Kraemer, H. C. (1983). Theory of estimation and testing of effect sizes: Use in meta-analysis. Journal of Educational and Behavioral Statistics, 8, 93-101.

Kuhnert, R., & Bohning, D. (2007). A comparison of three different models for estimating relative risk in meta-analysis of clinical trials under unobserved heterogeneity. Statistics in Medicine, 26, 2277-2296.

Morris, S. B., & DeShon, R. P. (2002). Combining effect size estimates in meta-analysis with repeated measures and independent-groups designs. Psychological Methods, 7, 105-125.

Rosenthal, R. (1994). Parametric measures of effect size. In H. Cooper & L. V. Hedges (Eds.), The handbook of research synthesis. New York: Russell Sage Foundation.
Rosenthal, R., & Rubin, D. B. (2003). r equivalent: A simple effect size indicator. Psychological Methods, 8, 492-496.
Sanchez-Meca, J., Marin-Martinez, F., & Chacon-Moscoso, S. (2003). Effect-size indices for dichotomized outcomes in meta-analysis. Psychological Methods, 8, 448-467.

Supplemental readings: Publication and other biases in the literature search

Cooper, H., DeNeve, K., & Charlton, K. (1997). Finding the missing science: The fate of studies submitted for review by a human subjects committee. Psychological Methods, 2, 447-452.

Edwards, P., Clarke, M., DiGuiseppi, C., Pratap, S., Roberts, I., & Wentz, R. (2002). Identification of randomized controlled trials in systematic reviews: accuracy and reliability of screening records. Statistics in Medicine, 21, 1635-1640.

Fehrmann, P., & Thomas, J. (2011). Comprehensive computer searches and reporting in systematic reviews. Research Synthesis Methods, 2, 15-32.

Hopewell, S., Clarke, M., Lusher, A., Lefebvre, C., & Westby, M. (2002). A comparison of handsearching versus MEDLINE searching to identify reports of randomized controlled trials. Statistics in Medicine, 21, 1625-1634.

Orwin, R. G. (1983). A fail-safe N for effect size in meta-analysis. Journal of Educational Statistics, 8, 157-159.
Rosenthal, R. (1979). The "file drawer problem" and tolerance for null results. Psychological Bulletin, 85, 638-641.
Rothstein, H. R., Sutton, A. J., & Borenstein, M. (2005). Publication bias in meta-analysis: Prevention, assessment, and adjustments. Chichester, England: Wiley and Sons.
Shrier, I. (2011). Structural approach to bias in meta-analyses. Research Synthesis Methods, 2, 223-237.


Thomas, J., McNaught, J., & Ananiadou, S. (2011). Applications of text mining within systematic reviews. Research Synthesis Methods, 2, 1-14.

Vevea, J. L., & Woods, C. M. (2005). Publication bias in research synthesis: sensitivity analysis using a priori weight functions. Psychological Methods, 10, 428-443.

Viechtbauer, W., & Cheung, M. W. L. (2010). Outlier and influence diagnostics for meta-analysis. Research Synthesis Methods, 1, 112-125.

Williamson, P. R., Gamble, C., Altman, D. G., & Hutton, J. L. (2005). Outcome selection bias in meta-analysis. Statistical Methods in Medical Research, 14, 515-524.

Supplemental readings: extensions of meta-analysis

Cooper, H., & Patall, E. A. (2009). The relative benefits of meta-analysis conducted with individual participant data versus aggregated data. Psychological Methods, 14, 165-176.
Curran, P. J. (2009). The seemingly quixotic pursuit of a cumulative psychological science: Introduction to the special issue. Psychological Methods, 14, 77-80.
Curran, P. J., & Hussong, A. M. (2009). Integrative data analysis: The simultaneous analysis of multiple data sets. Psychological Methods, 14, 81-100.
Jackson, D., Riley, R., & White, I. R. (2011). Multivariate meta-analysis: Potential and promise. Statistics in Medicine.
Shrout, P. E. (2009). Short and long views of integrative data analysis: Comments on contributions to the special issue. Psychological Methods, 14, 177-181.
Thomson, D., Russell, K., Becker, L., Klassen, T., & Hartling, L. (2010). The evolution of a new publication type: Steps and challenges of producing overviews of reviews. Research Synthesis Methods, 1, 198-211.
Wicherts, J. M., Borsboom, D., Kats, J., & Molenaar, D. (2006). The poor availability of psychological research data for reanalysis. American Psychologist, 61, 726-728.

Supplemental readings: Detecting and accounting for heterogeneity among studies
Hedges, L. V., & Pigott, T. D. (2004). The power of statistical tests for moderators in meta-analysis. Psychological Methods, 9, 426-445.
Higgins, J. P. T., Jackson, D., Barrett, J. K., Lu, G., Ades, A. E., & White, I. R. (2012). Consistency and inconsistency in network meta-analysis: Concepts and models for multi-arm studies. Research Synthesis Methods, 3, 98-110.
Higgins, J. P., & Thompson, S. G. (2002). Quantifying heterogeneity in a meta-analysis. Statistics in Medicine, 21, 1539-1558.
Higgins, J. P., Thompson, S. G., Deeks, J. J., & Altman, D. G. (2003). Measuring inconsistency in meta-analysis. British Medical Journal, 327, 557-560.
Howard, G. S., Maxwell, S. E., & Fleming, K. J. (2000). The proof of the pudding: An illustration of the relative strengths of null hypothesis, meta-analysis, and Bayesian analysis. Psychological Methods, 5, 315-332.
Huedo-Medina, T. B., Sanchez-Meca, J., Marin-Martinez, F., & Botella, J. (2006). Assessing heterogeneity in meta-analysis: Q statistic or I2 index? Psychological Methods, 11, 193-206.
Jackson, D., White, I. R., & Riley, R. D. (2012). Quantifying the impact of between-study heterogeneity in multivariate meta-analyses. Statistics in Medicine, 31, 3805-3820.
Kulinskaya, E., & Koricheva, J. (2010). Use of quality control charts for detection of outliers and temporal trends in cumulative meta-analysis. Research Synthesis Methods, 1, 297-307.
Lijmer, J. G., Bossuyt, P. M., & Heisterkamp, S. H. (2002). Exploring sources of heterogeneity in systematic reviews of diagnostic tests. Statistics in Medicine, 21, 1525-1537.
Pereira, T. V., Patsopoulos, N. A., Salanti, G., & Ioannidis, J. P. A. (2010). Critical interpretation of Cochran's Q test depends on power and prior assumptions about heterogeneity. Research Synthesis Methods, 1, 149-161.
Sterne, J. A., Juni, P., Schulz, K. F., Altman, D. G., Bartlett, C., & Egger, M. (2002). Statistical methods for assessing the influence of study characteristics on treatment effects in 'meta-epidemiological' research. Statistics in Medicine, 21, 1513-1524.

Terrin, N., Schmid, C. H., Lau, J., & Olkin, I. (2003). Adjusting for publication bias in the presence of heterogeneity. Statistics in Medicine, 22, 2113-2126.

Thompson, S. G., & Higgins, J. P. (2002). How should meta-regression analyses be undertaken and interpreted? Statistics in Medicine, 21, 1559-1573.

Supplemental readings: Presentation of meta-analyses
Anzures-Cabrera, J., & Higgins, J. P. T. (2010). Graphical displays for meta-analysis: An overview with suggestions for practice. Research Synthesis Methods, 1, 66-80.

Supplemental readings: Software for meta-analyses
Bax, L., Yu, L. M., Ikeda, N., & Moons, K. G. (2007). A systematic comparison of software dedicated to meta-analysis of causal studies. BMC Medical Research Methodology, 7, 40.


Supplemental readings: statistical procedures for meta-analyses

Bond, C. F., Jr., Wiitala, W. L., & Richard, F. D. (2003). Meta-analysis of raw mean differences. Psychological Methods, 8, 406-418.

Bonett, D. G. (2007). Transforming odds ratios into correlations for meta-analytic research. American Psychologist, 62, 254-255.

Bonett, D. G. (2008). Meta-analytic interval estimation for bivariate correlations. Psychological Methods, 13, 173-181.
Bonett, D. G. (2009). Meta-analytic interval estimation for standardized and unstandardized mean differences. Psychological Methods, 14, 225-238.
Bonett, D. G. (2010). Varying coefficient meta-analytic methods for alpha reliability. Psychological Methods, 15, 368-385.
Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2010). A basic introduction to fixed-effect and random-effects models for meta-analysis. Research Synthesis Methods, 1, 97-111.
Botella, J., Suero, M., & Gambara, H. (2010). Psychometric inferences from a meta-analysis of reliability and internal consistency coefficients. Psychological Methods, 15, 386-397.
Cai, T., Parast, L., & Ryan, L. (2010). Meta-analysis for rare events. Statistics in Medicine, 29, 2078-2089.
Cheung, M. W., & Chan, W. (2005). Meta-analytic structural equation modeling: A two-stage approach. Psychological Methods, 10, 40-64.
Darlington, R. B., & Hayes, A. F. (2000). Combining independent p values: Extensions of the Stouffer and binomial methods. Psychological Methods, 5, 496-515.
DerSimonian, R., & Laird, N. (1986). Meta-analysis in clinical trials. Controlled Clinical Trials, 7, 177-188.
Hedges, L. V., Tipton, E., & Johnson, M. C. (2010). Robust variance estimation in meta-regression with dependent effect size estimates. Research Synthesis Methods, 1, 39-65.
Henmi, M., & Copas, J. B. (2010). Confidence intervals for random effects meta-analysis and robustness to publication bias. Statistics in Medicine, 29, 2969-2983.
Rodriguez, M. C., & Maeda, Y. (2006). Meta-analysis of coefficient alpha. Psychological Methods, 11, 306-322.
Sanchez-Meca, J., & Marin-Martinez, F. (2008). Confidence intervals for the overall effect size in random-effects meta-analysis. Psychological Methods, 13, 31-48.
Thorlund, K., Wetterslev, J., Awad, T., Thabane, L., & Gluud, C. (2011). Comparison of statistical inferences from the DerSimonian-Laird and alternative random-effects model meta-analyses: An empirical assessment of 920 Cochrane primary outcome meta-analyses. Research Synthesis Methods, 2, 238-253.

Supplemental readings: statistical power in meta-analyses

Hedges, L. V., & Pigott, T. D. (2001). The power of statistical tests in meta-analysis. Psychological Methods, 6, 203-217.

Valentine, J. C., Pigott, T. D., & Rothstein, H. R. (2010). How many studies do you need? A primer on statistical power in meta-analysis. Journal of Educational and Behavioral Statistics, 35, 215-247.

Supplemental readings: meta-analysis of correlations

Field, A. P. (2001). Meta-analysis of correlation coefficients: a Monte Carlo comparison of fixed- and random-effects methods. Psychological Methods, 6, 161-180.

Field, A. P. (2005). Is the meta-analysis of correlation coefficients accurate when population correlations vary? Psychological Methods, 10, 444-467.

Furlow, C. F., & Beretvas, S. N. (2005). Meta-analytic methods of pooling correlation matrices for structural equation modeling under different patterns of missing data. Psychological Methods, 10, 227-254.

Hafdahl, A. R., & Williams, M. A. (2009). Meta-analysis of correlations revisited: attempted replication and extension of Field's (2001) simulation studies. Psychological Methods, 14, 24-42.


STUDY DESIGN AND SAMPLING ISSUES FEBRUARY 12, 2013

(webpage with links to most papers: http://psych.colorado.edu/~willcutt/resmeth_designs.htm)

Required readings:
Clarke, G. N. (1995). Improving the transition from basic efficacy research to effectiveness studies: Methodological issues and procedures. Journal of Consulting & Clinical Psychology, 63, 718-725.
Hsu, L. M. (1989). Random sampling, randomization, and equivalence of contrasted groups in psychotherapy outcome research. Journal of Consulting & Clinical Psychology, 57, 131-137.
Wainer, H. (1999). The most dangerous profession: A note on nonrandom sampling error. Psychological Methods, 3, 250-256. [describes potentially important implications of the cases that do not get included by your sampling design]

Supplemental readings: General research design
Keppel, G., & Wickens, T. (2004). Design and analysis: A researcher's handbook (4th ed.). Upper Saddle River, NJ: Prentice Hall.
McClelland, G. H. (1997). Optimal design in psychological research. Psychological Methods, 2, 3-19.
Rodgers, J. L. (2010). The epistemology of mathematical and statistical modeling: A quiet methodological revolution. American Psychologist, 65, 1-12.
Rutter, M. (2007). Proceeding from observed correlation to causal inference: The use of natural experiments. Perspectives on Psychological Science, 2, 377-394.

Supplemental readings: Epidemiological designs
Kraemer, H. C. (2010). Epidemiological methods: About time. International Journal of Environmental Research and Public Health, 7, 29-45.

Supplemental readings: Extreme groups analysis

Preacher, K. J., Rucker, D. D., MacCallum, R. C., & Nicewander, W. A. (2005). Use of the extreme groups approach: a critical reexamination and new recommendations. Psychological Methods, 10, 178-192.

Supplemental readings: Qualitative research designs
Kidd, S. A. (2002). The role of qualitative research in psychological journals. Psychological Methods, 7, 126-138.
Madill, A., & Gough, B. (2008). Qualitative research and its place in psychological science. Psychological Methods, 13, 254-271.
Rennie, D. L. (2012). Qualitative research as methodical hermeneutics. Psychological Methods, 17, 385-398.


LONGITUDINAL DESIGNS FEBRUARY 12, 2013

(webpage with links to most papers: http://psych.colorado.edu/~willcutt/resmeth_long.htm)

Supplemental readings: Overview
Kelley, K., & Rausch, J. R. (2011). Sample size planning for longitudinal models: Accuracy in parameter estimation for polynomial change parameters. Psychological Methods, 16, 391-405.
Masten, A. S., & Cicchetti, D. (2010). Developmental cascades. Development and Psychopathology, 22, 491-495.
Rutter, M., Kim-Cohen, J., & Maughan, B. (2006). Continuities and discontinuities in psychopathology between childhood and adult life. Journal of Child Psychology and Psychiatry, 47, 276-295.

Supplemental readings: Statistical issues

Blozis, S. A. (2004). Structured latent curve models for the study of change in multivariate repeated measures. Psychological Methods, 9, 334-353.

Cole, D. A., & Maxwell, S. E. (2009). Statistical methods for risk-outcome research: being sensitive to longitudinal structure. Annual Review of Clinical Psychology, 5, 71-96.

Enders, C. K. (2011). Missing not at random models for latent growth curve analyses. Psychological Methods, 16, 1-16.

Gibbons, R. D., Hedeker, D., & DuToit, S. (2010). Advances in analysis of longitudinal data. Annual Review of Clinical Psychology, 6, 79-107.

Hogan, J. W., Roy, J., & Korkontzelou, C. (2004). Handling drop-out in longitudinal studies. Statistics in Medicine, 23, 1455-1497.

Khoo, S.-T., West, S. G., Wu, W., & Kwok, O.-M. (2006). Longitudinal methods. In M. Eid & E. Diener (Eds.), Handbook of psychological measurement: A multimethod perspective. Washington, DC: American Psychological Association books.

Kuljanin, G., Braun, M. T., & Deshon, R. P. (2011). A cautionary note on modeling growth trends in longitudinal data. Psychological Methods, 16, 249-264.

Lix, L. M., & Sajobi, T. (2010). Testing multiple outcomes in repeated measures designs. Psychological Methods, 15, 268-280.

McArdle, J. J., Grimm, K. J., Hamagami, F., Bowles, R. P., & Meredith, W. (2009). Modeling life-span growth curves of cognition using longitudinal data with multiple samples and changing scales of measurement. Psychological Methods, 14, 126-149.

Muthen, B., Asparouhov, T., Hunter, A. M., & Leuchter, A. F. (2011). Growth modeling with nonignorable dropout: alternative analyses of the STAR*D antidepressant trial. Psychological Methods, 16, 17-33.

Nagin, D. S., & Odgers, C. L. (2010). Group-based trajectory modeling in clinical research. Annual Review of Clinical Psychology, 6, 109-138.

Nagin, D. S., & Tremblay, R. E. (2001). Analyzing developmental trajectories of distinct but related behaviors: a group-based method. Psychological Methods, 6, 18-34.

Overall, J. E., & Woodward, J. A. (1975). Unreliability of difference scores: A paradox for measurement of change. Psychological Bulletin, 82, 85-86.

Velicer, W. F., & Fava, J. L. (2003). Time series analysis. In J. A. Schinka & W. F. Velicer (Eds.), Handbook of psychology: Vol. 2. Research methods in psychology (pp. 581-606). New York: Wiley.

Venter, A., Maxwell, S. E., & Bolig, E. (2002). Power in randomized group comparisons: The value of adding a single intermediate time point to a traditional pretest-posttest design. Psychological Methods, 7, 194-209.

Wu, W., West, S. G., & Taylor, A. B. (2009). Evaluating model fit for growth curve models: Integration of fit indices from SEM and MLM frameworks. Psychological Methods, 14, 183-201.


INTERVENTION STUDIES FEBRUARY 12, 2013

(webpage with links to most papers: http://psych.colorado.edu/~willcutt/resmeth_intervention.htm)

Supplemental readings: Design issues
Bray, J. H. (2010). The future of psychology practice and science. American Psychologist, 65, 355-369.
Hsu, L. M. (1989). Random sampling, randomization, and equivalence of contrasted groups in psychotherapy outcome research. Journal of Consulting & Clinical Psychology, 57, 131-137.
Kazak, A. E., Hoagwood, K., Weisz, J. R., Hood, K., Kratochwill, T. R., Vargas, L. A., & Banez, G. A. (2010). A meta-systems approach to evidence-based practice for children and adolescents. American Psychologist, 65, 85-97.
Kazdin, A. E. (2011). Evidence-based treatment research: Advances, limitations, and next steps. American Psychologist, 66, 685-698.
Kraemer, H. C., & Kupfer, D. J. (2006). Size of treatment effects and their importance to clinical research and practice. Biological Psychiatry, 59, 990-996.
Nathan, P. E., Stuart, S. P., & Dolan, S. L. (2000). Research on psychotherapy efficacy and effectiveness: Between Scylla and Charybdis? Psychological Bulletin, 126, 964-981.
Persons, J. B., & Silberschatz, G. (1998). Are results of randomized controlled trials useful to psychotherapists? Journal of Consulting & Clinical Psychology, 66, 126-135.
Reichardt, C. S. (2006). The principle of parallelism in the design of studies to estimate treatment effects. Psychological Methods, 11, 1-18.

Supplemental readings: Efficacy and effectiveness
Clarke, G. N. (1995). Improving the transition from basic efficacy research to effectiveness studies: Methodological issues and procedures. Journal of Consulting & Clinical Psychology, 63, 718-725.

Supplemental readings: Outcome measures
Atkins, D. C., Bedics, J. D., McGlinchey, J. B., & Beauchaine, T. P. (2005). Assessing clinical significance: Does it matter which method we use? Journal of Consulting and Clinical Psychology, 73, 982-989.
Kraemer, H. C., & Frank, E. (2010). Evaluation of comparative treatment trials: Assessing clinical benefits and risks for patients, rather than statistical effects on measures. Journal of the American Medical Association, 304, 683-684.
Kraemer, H. C., Frank, E., & Kupfer, D. J. (2011). How to assess the clinical impact of treatments on patients, rather than the statistical impact of treatments on measures. International Journal of Methods in Psychiatric Research, 20, 63-72.
Kraemer, H. C., Morgan, G. A., Leech, N. L., Gliner, J. A., Vaske, J. J., & Harmon, R. J. (2003). Measures of clinical significance. Journal of the American Academy of Child and Adolescent Psychiatry, 42, 1524-1529.
Lambert, M. J., Okiishi, J. C., Finch, A. E., & Johnson, L. D. (1998). Outcome assessment: From conceptualization to implementation. Professional Psychology: Research & Practice, 29, 63-70.

Supplemental readings: Statistical issues
Crits-Christoph, P., Tu, X., & Gallop, R. (2003). Therapists as fixed versus random effects: Some statistical and conceptual issues. A comment on Siemer and Joormann (2003). Psychological Methods, 8, 518-523.
DerSimonian, R., & Laird, N. (1986). Meta-analysis in clinical trials. Controlled Clinical Trials, 7, 177-188.
Holmbeck, G. N. (1997). Toward terminological, conceptual, and statistical clarity in the study of mediators and moderators: Examples from the child-clinical and pediatric psychology literatures. Journal of Consulting & Clinical Psychology, 65, 599-610.

Kraemer, H. C., Frank, E., & Kupfer, D. J. (2006). Moderators of treatment outcomes: clinical, research, and policy importance. Journal of the American Medical Association, 296, 1286-1289.

Kraemer, H. C., Wilson, G. T., Fairburn, C. G., & Agras, W. S. (2002). Mediators and moderators of treatment effects in randomized clinical trials. Archives of General Psychiatry, 59, 877-883.

Muthen, B., Asparouhov, T., Hunter, A. M., & Leuchter, A. F. (2011). Growth modeling with nonignorable dropout: alternative analyses of the STAR*D antidepressant trial. Psychological Methods, 16, 17-33.

Nagin, D. S., & Odgers, C. L. (2010). Group-based trajectory modeling in clinical research. Annual Review of Clinical Psychology, 6, 109-138.

Supplemental readings: Statistical power
Raudenbush, S. W., & Liu, X. (2000). Statistical power and optimal design for multisite randomized trials. Psychological Methods, 5, 199-213.
Venter, A., Maxwell, S. E., & Bolig, E. (2002). Power in randomized group comparisons: The value of adding a single intermediate time point to a traditional pretest-posttest design. Psychological Methods, 7, 194-209.
Wilson, D. B., & Lipsey, M. W. (2001). The role of method in treatment effectiveness research: Evidence from meta-analysis. Psychological Methods, 6, 413-429.

CULTURE, RACE/ETHNICITY, GENDER, AND OTHER FORMS OF DIVERSITY FEBRUARY 19, 2013

(webpage with links to most papers: http://psych.colorado.edu/~willcutt/resmeth_culture.htm)

Required readings:

Cohen, A. B. (2009). Many forms of culture. American Psychologist, 64, 194-204.

Hartung, C. M., & Widiger, T. A. (1998). Gender differences in the diagnosis of mental disorders: Conclusions and controversies of the DSM-IV. Psychological Bulletin, 123, 260-278.

Okazaki, S., & Sue, S. (1995). Methodological issues in assessment research with ethnic minorities. Psychological Assessment, 7, 367-375.

Smedley, A., & Smedley, B. D. (2005). Race as biology is fiction, racism as a social problem is real: Anthropological and historical perspectives on the social construction of race. American Psychologist, 60, 16-26.

Supplemental readings: Race

Ossorio, P., & Duster, T. (2005). Race and genetics: controversies in biomedical, behavioral, and forensic sciences. American Psychologist, 60, 115-128.

Rowe, D. C. (2005). Under the skin: On the impartial treatment of genetic and environmental hypotheses of racial differences. American Psychologist, 60, 60-70.

Sue, S. (1999). Science, ethnicity, and bias: Where have we gone wrong? American Psychologist, 54, 1070-1077.

Supplemental readings: Gender

Zahn-Waxler, C., Crick, N. R., Shirtcliff, E. A., & Woods, K. A. (2006). The origins and development of psychopathology in females and males. In D. Cicchetti & D. Cohen (Eds.), Psychopathology, Vol. 1: Theory and Method (2nd Edition). Hoboken, NJ: Wiley.

MEASUREMENT: INTERNAL VALIDITY, RELIABILITY, AND SCALE DEVELOPMENT FEBRUARY 26, 2013

(webpage with links to most papers: http://psych.colorado.edu/~willcutt/resmeth_internal.htm)

REQUIRED READINGS:

Green, C. E., Chen, C. E., Helms, J. E., & Henze, K. T. (2011). Recent reliability reporting practices in Psychological Assessment: recognizing the people behind the data. Psychological Assessment, 23, 656-669.

Kraemer, H. C., Kupfer, D. J., Clarke, D. E., Narrow, W. E., & Regier, D. A. (2012). DSM-5: how reliable is reliable enough? American Journal of Psychiatry, 169, 13-15. [Position paper arguing that low reliability of a diagnosis of a mental disorder does not necessarily mean the diagnosis is invalid]

Meade, A. W., & Craig, S. B. (2012). Identifying careless responses in survey data. Psychological Methods, 17, 437-455. [A lot of detailed models here - just focus on big picture points]

Podsakoff, P. M., MacKenzie, S. B., Lee, J., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88, 879-903.

Willcutt, E. G., Boada, R., Riddle, M. W., Chhabildas, N. A., & Pennington, B. F. (2011). A parent-report screening questionnaire for learning difficulties in children. Psychological Assessment, 778-791. [Don't worry about the substantive details of our results - this paper just illustrates a number of the methods we will cover in class.]
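Several of the readings in this unit concern coefficient alpha. As a concrete illustration for class, the sketch below computes Cronbach's alpha for a small made-up response matrix; the data and the helper function are hypothetical, not taken from any assigned paper.

```python
# Illustrative sketch only: Cronbach's coefficient alpha for a tiny, made-up
# item-response matrix (rows = respondents, columns = items).

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(rows[0])  # number of items

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([r[i] for r in rows]) for i in range(k)]
    total_var = var([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

responses = [
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 5, 5, 4],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
]
print(round(cronbach_alpha(responses), 3))  # → 0.949 for these made-up data
```

Because the items here were invented to covary strongly, alpha comes out high; the Schmitt (1996) and Osburn (2000) readings below discuss why a high alpha alone does not establish unidimensionality.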

Supplemental Readings: reliability measures

Bonett, D. G. (2010). Varying coefficient meta-analytic methods for alpha reliability. Psychological Methods, 15, 368-385.

Botella, J., Suero, M., & Gambara, H. (2010). Psychometric inferences from a meta-analysis of reliability and internal consistency coefficients. Psychological Methods, 15, 386-397.

Feldt, L. S., & Charter, R. A. (2003). Estimating the reliability of a test split into two parts of equal or unequal length. Psychological Methods, 8, 102-109.

Green, S. B. (2003). A coefficient alpha for test-retest data. Psychological Methods, 8, 88-101.

Osburn, H. G. (2000). Coefficient alpha and related internal consistency reliability coefficients. Psychological Methods, 5, 343-355.

Overall, J. E., & Woodward, J. A. (1975). Unreliability of difference scores: A paradox for measurement of change. Psychological Bulletin, 82, 85-86.

Rodriguez, M. C., & Maeda, Y. (2006). Meta-analysis of coefficient alpha. Psychological Methods, 11, 306-322.

Schmidt, F. L., Le, H., & Ilies, R. (2003). Beyond alpha: An empirical examination of the effects of different sources of measurement error on reliability estimates for measures of individual-differences constructs. Psychological Methods, 8, 206-224.

Schmitt, N. (1996). Uses and abuses of coefficient alpha. Psychological Assessment, 8, 350-353.

Supplemental readings: scale development

Achenbach, T. M., Dumenci, L., & Rescorla, L. A. (2003). DSM-oriented and empirically based approaches to constructing scales from the same item pools. Journal of Clinical Child and Adolescent Psychology, 32, 328-340.

Bauer, D. J., & Hussong, A. M. (2009). Psychometric approaches for developing commensurate measures across independent studies: traditional and new models. Psychological Methods, 14, 101-125.

Clark, L. A., & Watson, D. (1995). Constructing validity: Basic issues in objective scale development. Psychological Assessment, 7, 309-319.

Meade, A. W., & Craig, S. B. (2012). Identifying careless responses in survey data. Psychological Methods, 17, 437-455.

Willcutt, E. G., Boada, R., Riddle, M. W., Chhabildas, N. A., & Pennington, B. F. (2011). A parent-report screening questionnaire for learning difficulties in children. Psychological Assessment, 778-791.

Supplemental readings: response styles

Weijters, B., Geuens, M., & Schillewaert, N. (2010). The stability of individual response styles. Psychological Methods, 15, 96-110.

Supplemental readings: interrater agreement

Bartels, M., Boomsma, D. I., Hudziak, J. J., van Beijsterveldt, T. C., & van den Oord, E. J. (2007). Twins and the study of rater (dis)agreement. Psychological Methods, 12, 451-466.

Hoyt, W. T. (2000). Rater bias in psychological research: when is it a problem and what can we do about it? Psychological Methods, 5, 64-86.

Kraemer, H. C., Measelle, J. R., Ablow, J. C., Essex, M. J., Boyce, W. T., & Kupfer, D. J. (2003). A new approach to integrating data from multiple informants in psychiatric assessment and research: mixing and matching contexts and perspectives. American Journal of Psychiatry, 160, 1566-1577.

Schuster, C., & Smith, D. A. (2002). Indexing systematic rater agreement with a latent-class model. Psychological Methods, 7, 384-395.

Vanbelle, S., Mutsvari, T., Declerck, D., & Lesaffre, E. (2012). Hierarchical modeling of agreement. Statistics in Medicine, 31, 3667-3680.

Supplemental readings: Item-response theory

Brown, A., & Maydeu-Olivares, A. (2012). How IRT can solve problems of ipsative data in forced-choice questionnaires. Psychological Methods.

Meijer, R. R. (2003). Diagnosing item score patterns on a test using item response theory-based person-fit statistics. Psychological Methods, 8, 72-87.

Reise, S. P., & Waller, N. G. (2003). How many IRT parameters does it take to model psychopathology items? Psychological Methods, 8, 164-184.

Reise, S. P., & Waller, N. G. (2009). Item response theory and clinical measurement. Annual Review of Clinical Psychology, 5, 27-48.

Waller, N. G., Thompson, J. S., & Wenk, E. (2000). Using IRT to separate measurement bias from true group differences on homogeneous and heterogeneous scales: an illustration with the MMPI. Psychological Methods, 5, 125-146.

EXTERNAL VALIDITY

MARCH 12, 2013

(webpage with links to most papers: http://psych.colorado.edu/~willcutt/resmeth_external.htm)

Required Readings:

Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56, 81-105.

Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281-302.

Strauss, M. E., & Smith, G. T. (2009). Construct validity: advances in theory and methodology. Annual Review of Clinical Psychology, 5, 1-25.

Willcutt, E. G., Boada, R., Riddle, M. W., Chhabildas, N. A., & Pennington, B. F. (2011). A parent-report screening questionnaire for learning difficulties in children. Psychological Assessment, 778-791. [Don't worry about the substantive details of our results - this paper just illustrates a number of the methods we will cover in class.]

Willcutt, E. G., Nigg, J. T., Pennington, B. F., Carlson, C. L., McBurnett, K., Rohde, L. A., Solanto, M. V., Tannock, R., & Lahey, B. B. (2012). Validity of DSM-IV attention-deficit/hyperactivity disorder symptom dimensions and subtypes. Journal of Abnormal Psychology, 121, 991-1010. [same as previous]

Supplemental readings: construct and content validity

Albright, L., & Malloy, T. E. (2000). Experimental validity: Brunswik, Campbell, Cronbach, and enduring issues. Review of General Psychology, 4, 337-353.

Meyer, G. J., Finn, S. E., Eyde, L. D., Kay, G. G., Moreland, K. L., Dies, R. R., Eisman, E. J., Kubiszyn, T. W., & Reed, G. M. (2001). Psychological testing and psychological assessment: A review of evidence and issues. American Psychologist, 56, 128-165.

Smith, G. T. (2005). On construct validity: Issues of method and measurement. Psychological Assessment, 17, 395-408.

Westen, D., & Rosenthal, R. (2005). Improving construct validity: Cronbach, Meehl, and Neurath's ship. Psychological Assessment, 17, 409-412.

Willcutt, E. G., Boada, R., Riddle, M. W., Chhabildas, N. A., & Pennington, B. F. (2011). A parent-report screening questionnaire for learning difficulties in children. Psychological Assessment, 778-791.

Supplemental readings: concurrent and predictive validity

Kraemer, H. C., Morgan, G. A., Leech, N. L., Gliner, J. A., Vaske, J. J., & Harmon, R. J. (2003). Measures of clinical significance. Journal of the American Academy of Child and Adolescent Psychiatry, 42, 1524-1529.

Willcutt, E. G., Nigg, J. T., Pennington, B. F., Carlson, C. L., McBurnett, K., Rohde, L. A., Solanto, M. V., Tannock, R., & Lahey, B. B. (2012). Validity of DSM-IV attention-deficit/hyperactivity disorder symptom dimensions and subtypes. Journal of Abnormal Psychology, 121, 991-1010.

Supplemental readings: Incremental validity

Hunsley, J., & Meyer, G. J. (2003). The incremental validity of psychological testing and assessment: Conceptual, methodological, and statistical issues. Psychological Assessment, 15, 445-455.

Kraemer, H. C., Measelle, J. R., Ablow, J. C., Essex, M. J., Boyce, W. T., & Kupfer, D. J. (2003). A new approach to integrating data from multiple informants in psychiatric assessment and research: mixing and matching contexts and perspectives. American Journal of Psychiatry, 160, 1566-1577.

AN OVERVIEW OF STATISTICAL INFERENCE APRIL 2, 2013

(webpage with links to most papers: http://psych.colorado.edu/~willcutt/resmeth_statinf.htm)

REQUIRED READINGS:

DeCoster, J., Iselin, A. M., & Gallucci, M. (2009). A conceptual and empirical examination of justifications for dichotomization. Psychological Methods, 14, 349-366. [examines researchers' justifications for dichotomization, then tests whether their rationale is supported empirically. Read this one for big picture points, and don't worry about the details of the simulation models.]

MacCallum, R. C., Zhang, S., Preacher, K. J., & Rucker, D. D. (2002). On the practice of dichotomization of quantitative variables. Psychological Methods, 7, 19-40. [A second examination of issues around dichotomization. Again focus on the big picture points.]

Scarr, S. (1997). Rules of evidence: A larger context for the statistical debate. Psychological Science, 8, 16-17.

Wilcox, R. R. (1998). How many discoveries have been lost by ignoring modern statistical methods? American Psychologist, 53, 300-314. [summarizes the loss of power (and potential implications for validity) if "old school" statistical procedures are applied when assumptions are not met]

Wilkinson, L., & the Task Force on Statistical Inference. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54, 594-604.
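The MacCallum et al. and DeCoster et al. papers above quantify the information lost when a continuous measure is dichotomized. The simulation below is a minimal, hypothetical sketch of that point (seeded, made-up data; not an analysis from either paper): median-splitting a continuous predictor attenuates its observed correlation with an outcome.

```python
# Illustrative sketch only: median-splitting a continuous predictor shrinks
# its observed correlation with an outcome (made-up, seeded data).
import random

random.seed(1)

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

n = 2000
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.5 * xi + random.gauss(0, 1) for xi in x]  # population r is about .45

median = sorted(x)[n // 2]
x_split = [1 if xi > median else 0 for xi in x]  # "high" vs. "low" groups

print(round(pearson_r(x, y), 2), round(pearson_r(x_split, y), 2))
```

The dichotomized correlation comes out reliably smaller (for a normally distributed predictor the expected attenuation is roughly 20%), which is the loss of power and precision the required readings describe.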

SUPPLEMENTAL READINGS: GENERAL STATISTICAL INFERENCE

Abelson, R. P. (1985). A variance explanation paradox: When a little is a lot. Psychological Bulletin, 97, 129-133.

Abelson, R. P. (1995). Statistics as Principled Argument. Hillsdale, NJ: Lawrence Erlbaum Associates.

Berger, J. O., & Berry, D. A. (1988). Statistical analysis and the illusion of objectivity. American Scientist, 76, 159-165.

Best, J. (2001). Damned Lies and Statistics: Untangling Numbers from the Media, Politicians, and Activists. CA: University of California Press.

Best, J. (2004). More Damned Lies and Statistics: How Numbers Confuse Public Issues. CA: University of California Press.

Cohen, J. (1994). The earth is round (p < .05). American Psychologist, 49, 997-1003.

Erceg-Hurn, D. M., & Mirosevich, V. M. (2008). Modern robust statistical methods: an easy way to maximize the accuracy and power of your research. American Psychologist, 63, 591-601.

Paulos, J. A. (1995). A Mathematician Reads the Newspaper. New York: Basic Books.

Platt, J. R. (1964). Strong inference. Science, 146, 347-353.

Schimmack, U. (2012). The ironic effect of significant results on the credibility of multiple-study articles. Psychological Methods.

Schmidt, F. L., & Hunter, J. E. (1997). Eight common but false objections to the discontinuation of significance testing in the analysis of research data. In Lisa A. Harlow, Stanley A. Mulaik, and James H. Steiger (Eds.), What if there were no significance tests? (pp. 37-64). Mahwah, NJ: Lawrence Erlbaum Associates.

Tryon, W. W. (2001). Evaluating statistical difference, equivalence, and indeterminacy using inferential confidence intervals: An integrated alternative method of conducting null hypothesis statistical tests. Psychological Methods, 6, 371-386.

Wilcox, R. R. (1998). How many discoveries have been lost by ignoring modern statistical methods? American Psychologist, 53, 300-314.

Supplemental readings: Distributions and assumptions

Wilcox, R. R., & Keselman, H. J. (2003). Modern robust data analysis methods: measures of central tendency. Psychological Methods, 8, 254-274.

SUPPLEMENTAL READINGS: APPROACHES TO DEAL WITH MISSING DATA

Allison, P. D. (2003). Missing data techniques for structural equation modeling. Journal of Abnormal Psychology, 112, 545-557.

Schafer, J. L., & Graham, J. W. (2002). Missing data: Our view of the state of the art. Psychological Methods, 7, 147-177.

SUPPLEMENTAL READINGS: DICHOTOMIZATION

Coghill, D., & Sonuga-Barke, E. J. (2012). Annual research review: categories versus dimensions in the classification and conceptualisation of child and adolescent mental disorders--implications of recent empirical study. Journal of Child Psychology and Psychiatry, 53, 469-489.

Helzer, J. E., Kraemer, H. C., & Krueger, R. F. (2006). The feasibility and need for dimensional psychiatric diagnoses. Psychological Medicine, 36, 1671-1680.

Kraemer, H. C., Noda, A., & O'Hara, R. (2004). Categorical versus dimensional approaches to diagnosis: methodological challenges. Journal of Psychiatric Research, 38, 17-25.

MacCallum, R. C., Zhang, S., Preacher, K. J., & Rucker, D. D. (2002). On the practice of dichotomization of quantitative variables. Psychological Methods, 7, 19-40.

McGrath, R. E., & Walters, G. D. (2012). Taxometric analysis as a general strategy for distinguishing categorical from dimensional latent structure. Psychological Methods, 17, 284-293.

Rhemtulla, M., Brosseau-Liard, P. E., & Savalei, V. (2012). When can categorical variables be treated as continuous? A comparison of robust continuous and categorical SEM estimation methods under suboptimal conditions. Psychological Methods, 17, 354-373.

Sanchez-Meca, J., Marin-Martinez, F., & Chacon-Moscoso, S. (2003). Effect-size indices for dichotomized outcomes in meta-analysis. Psychological Methods, 8, 448-467.

Widiger, T. A., & Trull, T. J. (2007). Plate tectonics in the classification of personality disorder: Shifting to a dimensional model. American Psychologist, 62, 71-83.

SIGNIFICANCE TESTS APRIL 9, 2013

(webpage with links to most papers: http://psych.colorado.edu/~willcutt/resmeth_signif.htm)

Required readings:

Cohen, J. (1994). The earth is round (p < .05). American Psychologist, 49, 997-1003. [Position paper by one of the most influential methodologists in our field. Argues against the value of null hypothesis significance testing.]

Cumming, G., & Finch, S. (2005). Inference by eye: confidence intervals and how to read pictures of data. American Psychologist, 60, 170-180. [focus on the big picture points about the utility of confidence intervals to simultaneously illustrate a point estimate of an effect size and the likely range of error around the estimate]

Krueger, J. (2001). Null hypothesis significance testing: On the survival of a flawed method. American Psychologist, 56, 16-26. [summarizes some of the key arguments that have been advanced for and against null hypothesis significance testing]

Schmidt, F. L. (1996). Statistical significance testing and cumulative knowledge in psychology: Implications for training of researchers. Psychological Methods, 1, 115-129. [argues against null-hypothesis significance testing and in favor of point estimates of effect size with confidence intervals]

Schmidt, F., & Hunter, J. (2002). Are there benefits from NHST? American Psychologist, 57, 65-71. [a brief response to Krueger]
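To make the Cumming and Finch recommendation concrete, here is a minimal sketch that computes the point estimate and 95% confidence interval an "inference by eye" plot would display. The scores are invented for illustration, and the critical t value for df = 9 is hard-coded rather than looked up.

```python
# Illustrative sketch only: mean and 95% confidence interval for made-up scores.
import math
import statistics

scores = [12, 15, 14, 10, 13, 16, 11, 14, 15, 12]
n = len(scores)
mean = statistics.mean(scores)
sem = statistics.stdev(scores) / math.sqrt(n)  # standard error of the mean
t_crit = 2.262                                 # two-tailed .05 critical t, df = 9
lo, hi = mean - t_crit * sem, mean + t_crit * sem
print(f"M = {mean:.1f}, 95% CI [{lo:.2f}, {hi:.2f}]")  # M = 13.2, 95% CI [11.82, 14.58]
```

The interval conveys both the effect estimate and its precision in a single pair of numbers, which is the core of the argument for reporting confidence intervals alongside (or instead of) p values.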

Supplemental readings: significance tests

Abelson, R. P. (1997). On the surprising longevity of flogged horses: Why there is a case for the significance test. Psychological Science, 8, 12-15.

Chow, S. L. (1988). Significance test or effect size? Psychological Bulletin, 103, 105-110.

Frick, R. W. (1996). The appropriate use of null hypothesis testing. Psychological Methods, 1, 379-390.

Gardner, M. J., & Altman, D. G. (1986). Confidence intervals rather than P values: Estimation rather than hypothesis testing. British Medical Journal, 292, 746-750.

Greenwald, A. G. (1975). Consequences of prejudice against the null hypothesis. Psychological Bulletin, 82, 1-20.

Hagen, R. L. (1997). In praise of the null hypothesis statistical test. American Psychologist, 52, 15-24.

Harris, R. J. (1997). Significance tests have their place. Psychological Science, 8, 8-11.

Howard, G. S., Maxwell, S. E., & Fleming, K. J. (2000). The proof of the pudding: An illustration of the relative strengths of null hypothesis, meta-analysis, and Bayesian analysis. Psychological Methods, 5, 315-332.

Hsu, L. M. (2000). Effects of directionality of significance tests on the bias of accessible effect sizes. Psychological Methods, 5, 333-342.

Jones, L. V., & Tukey, J. W. (2000). A sensible formulation of the significance test. Psychological Methods, 5, 411-414.

Keselman, H. J., Algina, J., Lix, L. M., Wilcox, R. R., & Deering, K. N. (2008). A generally robust approach for testing hypotheses and setting confidence intervals for effect sizes. Psychological Methods, 13, 110-129.

Keselman, H. J., Miller, C. W., & Holland, B. (2011). Many tests of significance: new methods for controlling type I errors. Psychological Methods, 16, 420-431.

Mulaik, S. A., Raju, N. S., & Harshman, R. A. (1997). There is a time and place for significance testing. In Lisa A. Harlow, Stanley A. Mulaik, and James H. Steiger (Eds.), What if there were no significance tests? (pp. 65-116). Mahwah, NJ: Lawrence Erlbaum Associates.

Nickerson, R. S. (2000). Null hypothesis significance testing: A review of an old and continuing controversy. Psychological Methods, 5, 241-301.

Rindskopf, D. M. (1997). Testing "small," not null, hypotheses: Classical and Bayesian approaches. In Lisa A. Harlow, Stanley A. Mulaik, and James H. Steiger (Eds.), What if there were no significance tests? (pp. 319-332). Mahwah, NJ: Lawrence Erlbaum Associates.

Schimmack, U. (2012). The ironic effect of significant results on the credibility of multiple-study articles. Psychological Methods.

Schmidt, F. L. & Hunter, J. E. (1997). Eight common but false objections to the discontinuation of significance testing in the analysis of research data. In Lisa A. Harlow, Stanley A. Mulaik, and James H. Steiger (Eds.) What if there were no significance tests? (pp. 37-64). Mahwah, NJ: Lawrence Erlbaum Associates.

Tryon, W. W. (2001). Evaluating statistical difference, equivalence, and indeterminacy using inferential confidence intervals: An integrated alternative method of conducting null hypothesis statistical tests. Psychological Methods, 6, 371-386.

Wainer, H. (1999). One cheer for null hypothesis significance testing. Psychological Methods, 4, 212-213.

Supplemental readings: confidence intervals

Cumming, G., & Maillardet, R. (2006). Confidence intervals and replication: where will the next mean fall? Psychological Methods, 11, 217-227.

Cumming, G., & Finch, S. (2001). A primer on understanding, use, and calculation of confidence intervals that are based on central and noncentral distributions. Educational and Psychological Measurement, 61, 532-574.

Supplemental readings: Effect sizes

Abelson, R. P. (1985). A variance explanation paradox: When a little is a lot. Psychological Bulletin, 97, 129-133.

Borenstein, M. (2009). Effect sizes for continuous data. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (pp. 279-293). New York: Russell Sage Foundation.

Chow, S. L. (1988). Significance test or effect size? Psychological Bulletin, 103, 105-110.

Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155-159.

Durlak, J. A. (2009). How to select, calculate, and interpret effect sizes. Journal of Pediatric Psychology, 34, 917-928.

Greenwald, A. G., Gonzalez, R., Harris, R. J., & Guthrie, D. (1996). Effect sizes and p values: What should be reported and what should be replicated? Psychophysiology, 33, 175-183.

Grissom, R. J., & Kim, J. J. (2001). Review of assumptions and problems in the appropriate conceptualization of effect size. Psychological Methods, 6, 135-146.

Hedges, L. (1981). Distribution theory for Glass's estimator of effect size and related estimators. Journal of Educational Statistics, 6, 107-128.

Olejnik, S., & Algina, J. (2003). Generalized eta and omega squared statistics: measures of effect size for some common research designs. Psychological Methods, 8, 434-447.

Rosenthal, R. (1994). Parametric measures of effect size. In H. Cooper & L. V. Hedges (Eds.), The Handbook of Research Synthesis. New York: Russell Sage Foundation.

-----------------------------------------------------------------------------------------------------------------------------

STATISTICAL POWER APRIL 16, 2013

(webpage with links to most papers: http://psych.colorado.edu/~willcutt/power.htm)

REQUIRED READINGS:

Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155-159. [overview of power by one of the most influential figures in this specific area. Includes his take on estimates of small, medium, and large effect sizes]

Hallahan, M., & Rosenthal, R. (1996). Statistical power: Concepts, procedures, and applications. Behaviour Research and Therapy, 34, 489-499. [Nice overview of the issues, with suggestions regarding ways to increase power]
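As a concrete companion to Cohen (1992), the sketch below estimates by brute-force simulation the power of a two-tailed, two-sample t test (alpha = .05) to detect a "medium" effect of d = 0.5 with 30 participants per group. The simulation and its numbers are illustrative, not taken from either reading.

```python
# Illustrative sketch only: Monte Carlo estimate of power for a two-sample
# t test with d = 0.5 and n = 30 per group (alpha = .05, two-tailed).
import math
import random
import statistics

random.seed(2)

def t_stat(a, b):
    na, nb = len(a), len(b)
    pooled = ((na - 1) * statistics.variance(a) +
              (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(pooled * (1 / na + 1 / nb))

def estimate_power(d, n_per_group, reps=2000, t_crit=2.0):
    # t_crit = 2.0 approximates the two-tailed .05 critical value for df = 58
    hits = 0
    for _ in range(reps):
        a = [random.gauss(d, 1) for _ in range(n_per_group)]  # treatment group
        b = [random.gauss(0, 1) for _ in range(n_per_group)]  # control group
        if abs(t_stat(a, b)) > t_crit:
            hits += 1
    return hits / reps

print(estimate_power(0.5, 30))  # lands near the ~.47 value in standard power tables
```

Re-running with 64 participants per group pushes the estimate to about .80, matching the sample-size benchmarks in Cohen's paper, and makes Hallahan and Rosenthal's suggestions for increasing power easy to verify for yourself.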

SUPPLEMENTAL MATERIALS:

Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Lawrence Erlbaum Associates.

Cohen, J. (1962). The statistical power of abnormal-social psychological research: A review. Journal of Abnormal and Social Psychology, 65, 145-153.

Maxwell, S. E. (2004). The persistence of underpowered studies in psychological research: causes, consequences, and remedies. Psychological Methods, 9, 147-163.

Rossi, J. S. (1990). Statistical power of psychological research: What have we gained in 20 years? Journal of Consulting and Clinical Psychology, 58, 646-656.

Sedlmeier, P., & Gigerenzer, G. (1989). Do studies of statistical power have an effect on the power of studies? Psychological Bulletin, 105, 309-316.

STATISTICAL METHODS: CORRELATION AND REGRESSION APRIL 16, 2013

(webpage with links to most papers: http://psych.colorado.edu/~willcutt/resmeth_regression.htm)

Supplemental readings: mediators/moderators

Holmbeck, G. N. (1997). Toward terminological, conceptual, and statistical clarity in the study of mediators and moderators: Examples from the child-clinical and pediatric psychology literatures. Journal of Consulting & Clinical Psychology, 65, 599-610.

Judd, C. M., Kenny, D. A., & McClelland, G. H. (2001). Estimating and testing mediation and moderation in within-subject designs. Psychological Methods, 6, 115-134.

Kraemer, H. C., & Blasey, C. M. (2004). Centering in regression analyses: a strategy to prevent errors in statistical inference. International Journal of Methods in Psychiatric Research, 13, 141-151.

Muller, D., Judd, C. M., & Yzerbyt, V. Y. (2005). When moderation is mediated and mediation is moderated. Journal of Personality and Social Psychology, 89, 852-863.

-----------------------------------------------------------------------------------------------------------------------------

FACTOR ANALYSIS AND LATENT VARIABLE APPROACHES APRIL 16, 2013

(webpage with links to most papers: http://psych.colorado.edu/~willcutt/resmeth_latent.htm)

Supplemental materials: Overview of latent variable approaches

Loehlin, J. C. (1992). Latent variable models: An introduction to factor, path, and structural analysis. Hillsdale, NJ: Lawrence Erlbaum Associates.

Muthen, L. K., & Muthen, B. O. (2009). Mplus User's Guide (Fourth Edition). Los Angeles, CA: Muthen and Muthen.

Supplemental readings: Exploratory and confirmatory factor analysis

Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4, 272-299.

Grice, J. W. (2001). A comparison of factor scores under conditions of factor obliquity. Psychological Methods, 6, 67-83.

Grice, J. W. (2001). Computing and evaluating factor scores. Psychological Methods, 6, 430-450.

Heene, M., Hilbert, S., Draxler, C., Ziegler, M., & Buhner, M. (2011). Masking misfit in confirmatory factor analysis by increasing unique variances: a cautionary note on the usefulness of cutoff values of fit indices. Psychological Methods, 16, 319-336.

Jackson, D. L., Gillaspy, J. A., & Purc-Stephenson, R. (2009). Reporting practices in confirmatory factor analysis: an overview and some recommendations. Psychological Methods, 14, 6-23.

Linting, M., Meulman, J. J., Groenen, P. J., & van der Kooij, A. J. (2007). Nonlinear principal components analysis: introduction and application. Psychological Methods, 12, 336-358.

Reise, S. P., Waller, N. G., & Comrey, A. L. (2000). Factor analysis and scale revision. Psychological Assessment, 12, 287-297.

RESEARCH ETHICS

APRIL 23, 2013

(webpage with links to most papers: http://psych.colorado.edu/~willcutt/resmeth_ethics.htm)

Required readings:

Pachter, W. S., Fox, R. E., Zimbardo, P., & Antonuccio, D. O. (2007). Corporate funding and conflicts of interest: a primer for psychologists. American Psychologist, 62, 1005-1015. [report and recommendations of an APA Task Force on conflicts of interest in research]

Fine, M. A., & Kurdek, L. A. (1993). Reflections on determining authorship credit and authorship order on faculty-student collaborations. American Psychologist, 48, 1141-1147. [some suggestions may be a little too rigid, but brings up key points regarding a potentially awkward issue]

Supplemental Materials: Conflicts of interest

Antonuccio, D. O., Danton, W. G., & McClanahan, T. M. (2003). Psychology in the prescription era: building a firewall between marketing and science. American Psychologist, 58, 1028-1043.

Supplemental materials: Ethics and IRBs

Fine, M. A., & Kurdek, L. A. (1993). Reflections on determining authorship credit and authorship order on faculty-student collaborations. American Psychologist, 48, 1141-1147.

Fly, B. J., Van Bark, W. P., Weinman, L., Kitchener, K. S., & Lang, P. R. (1997). Ethical transgressions of psychology graduate students: Critical incidents with implications for training. Professional Psychology: Research & Practice, 28, 492-495.

Pipes, R. B., Blevins, T., & Kluck, A. (2008). Confidentiality, ethics, and informed consent. American Psychologist, 63, 623-624; discussion 624-625.

Rosnow, R. L. (1997). Hedgehogs, foxes, and the evolving social contract in psychological science: Ethical challenges and methodological opportunities. Psychological Methods, 2, 345-356.

Steinberg, L., Cauffman, E., Woolard, J., Graham, S., & Banich, M. (2009). Are adolescents less mature than adults?: minors' access to abortion, the juvenile death penalty, and the alleged APA "flip-flop". American Psychologist, 64, 583-594.

DISSEMINATION OF RESULTS APRIL 30, 2013

(webpage with links to most papers: http://psych.colorado.edu/~willcutt/resmeth_dissem.htm)

Required readings:

APA Publications and Communications Board Working Group on Journal Article Reporting Standards. (2008). Reporting Standards for Research in Psychology: Why Do We Need Them? What Might They Be? American Psychologist, 63, 839-851. [APA publication describing the rationale and requirements for reporting of research]

Kazdin, A. E. (1995). Preparing and evaluating research reports. Psychological Assessment, 7, 228-237. [hopefully this one will be largely review for you. He provides a nice summary of the components of an effective research report.]

Lilienfeld, S. O. (2012). Public skepticism of psychology: why many people perceive the study of human behavior as unscientific. American Psychologist, 67, 111-129. [excellent synopsis of the issues faced by psychologists when we present our results to those outside the field]

Supplemental Readings: Authorship and publication issues

Fly, B. J., Van Bark, W. P., Weinman, L., Kitchener, K. S., & Lang, P. R. (1997). Ethical transgressions of psychology graduate students: Critical incidents with implications for training. Professional Psychology: Research & Practice, 28, 492-495.

Fine, M. A., & Kurdek, L. A. (1993). Reflections on determining authorship credit and authorship order on faculty-student collaborations. American Psychologist, 48, 1141-1147.

Supplemental Readings: Dissemination of results to the scientific community

Belia, S., Fidler, F., Williams, J., & Cumming, G. (2005). Researchers misunderstand confidence intervals and standard error bars. Psychological Methods, 10, 389-396.

Bem, D. J. (1995). Writing a review article for Psychological Bulletin. Psychological Bulletin, 118, 172-177.

Cumming, G., & Finch, S. (2005). Inference by eye: confidence intervals and how to read pictures of data. American Psychologist, 60, 170-180.

Nickerson, R. S. (2005). What researchers want from journal editors and reviewers. American Psychologist, 661-662.

Sommer, R. (2006). Dual dissemination: writing for colleagues and the public. American Psychologist, 61, 955-958.

Supplemental Readings: Dissemination of results to the public

Best, J. (2001). Damned Lies and Statistics: Untangling Numbers from the Media, Politicians, and Activists. CA: University of California Press.

Best, J. (2004). More Damned Lies and Statistics: How Numbers Confuse Public Issues. CA: University of California Press.

Gigerenzer, G. (2002). Calculated Risks: How To Know When Numbers Deceive You. New York: Simon & Schuster.

Paulos, J. A. (1995). A Mathematician Reads the Newspaper. New York: Basic Books.

Rosenthal, J. S. (2006). Struck by Lightning: The Curious World of Probabilities. Washington, DC: Joseph Henry Press.

Salsburg, D. (2001). The lady tasting tea: How statistics revolutionized science in the twentieth century. New York: W. H. Freeman.

Sommer, R. (2006). Dual dissemination: writing for colleagues and the public. American Psychologist, 61, 955-958.

Stanovich, K. E. (2001). How to think straight about psychology (6th Edition). Boston, MA: Allyn and Bacon.

Tal, J. (2001). Reading Between the Numbers: Statistical Thinking in Everyday Life. New York: McGraw-Hill.