
Cruel to Be Kind: A Neopragmatist Approach to Teaching Statistics for Public Administration Students

David Oliver Kasdan
Incheon National University

ABSTRACT

Many public administration students harbor doubts about their ability to learn statistics. Adoption of a tenet of neopragmatism can realign statistics with students’ cognitive interests and frame it as a method to advance social progress away from cruelty. This approach is rooted in John Dewey’s fusion of educational philosophy with scientific method and Richard Rorty’s postmodern upgrade of classical pragmatism. Neopragmatism recognizes that there are linguistic and contextual challenges to social science research, and that statistics is “translating” what happens around us into a language based on the math logic that is actually common to many of our social phenomena. This eases students’ arithmophobia so they can see the greater challenge as analyzing governance issues to take advantage of the explanatory powers of statistics. Students then focus on figuring out the words, rather than the numbers, that are necessary to improve administrative decisions and reduce cruelty in the world.

KEYWORDS

Neopragmatism, arithmophobia, statistics, public wellbeing

JPAE 21(3), 435–448

INTRODUCTION: STATISTICS IN THE PUBLIC ADMINISTRATION CURRICULUM

The first day of a statistics class is pivotal. Public administration students—some of whom may have chosen their field as much to avoid certain subject areas as to actively pursue an academic interest—await complex equations with severe trepidation. Imparting the relevance of statistics to public administration and bracing students for its technical dimensions is a challenge for instructors (Smith & Martinez-Moyano, 2012). Arithmophobia can block students’ cognitive pathways, and a case of “quantitative paralysis” may set in when students are confronted with the first mathematical problems they have encountered in several years (Adenay & Carey, 2011). Not every institution has a sufficient math proficiency requirement to ensure student success in a statistics course, but certain public administration curricula do have a statistics course requirement that proves to be a persistent obstacle to student success.

The teaching of statistics must contend with contemporary contexts. The information age is escorted by misinformation, with a widening gap between comprehension and cognition when it comes to processing social data (Silver, 2012; Tishkovskaya & Lancaster, 2012). The “pure” statistics taught for mathematics and the natural sciences does not suffer the many variations in presentation seen in the social sciences (Payne & Williams, 2011). Even within one political science department, an instructor with a public administration specialty will concentrate on and frame certain concepts differently than an instructor coming from the comparative politics perspective. Although values of the mean and standard deviation are calculated the same regardless of discipline, sampling methods and p-values hold different weight in disparate research fields (Gal, 2002).

An ideal curriculum would require a general statistics course to be paired with a research design class tailored to the students’ majors, but degree loads and departmental requirements do not allow for such extravagance. With a requirement of more than 30 classes for the bachelor’s degree, public administration students have neither the leisure nor the inclination to pursue statistics over multiple courses. The minimal math skills necessary are already off-putting to many students; asking them to endure pure statistics for a term before seeing how it actually applies to their chosen field would be too much.

Pedagogical theory, framed by the conflict of institutional themes of education as a consumer product (e.g., online degree programs) versus the traditional notion of the academic community (e.g., liberal arts colleges), has divided learning approaches. Do students learn what they are able to learn or what they are receptive to learning? Are some students wired for the abstract while others need practical examples? The trend seems to be that successful teaching depends on engaging students’ cognitive interests, which includes those things students think they will be most likely to use later (Zieffler et al., 2008). This controversy is larger than the discussion at hand, but it is worth noting that these views loosely correspond to the foundational logic behind statistics itself: deduction and inference. Some students are better at working from the population to the sample, while other students naturally reason from the sample to the population.

It is possible to conceive of the idea of an epistemic community for statistics in public administration education. That is, these students’ shared context affects their understanding of statistics (Adenay & Carey, 2011; Garfield, 1995; Tishkovskaya & Lancaster, 2012). That context necessarily includes the means of communication, the extent of application, and the weight that a statistical study holds in the field. For example, students expect to use surveys and census data to inform decisions, as well as to be educated consumers of public administration research that presents findings relevant to their eventual practices (Gal, 2002). At the street level, there are endemic misconceptions about some statistical concepts (Zieffler et al., 2008). For example, polls often portray the margin of error as the known interval around a mean value, rather than as the estimate’s sampling error (and thus a poll favoring one candidate over another at 55% to 45% with a 5% margin of error is thought to be a dead heat). Making students in public administration aware of their epistemic community helps to standardize statistical operations while still recognizing that those operations have an appropriate time and place in research according to the discipline’s objectives.
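
To make the misconception concrete in class, a short worked calculation can show why overlapping margins of error do not make a 55%–45% race a dead heat. The sketch below is illustrative only; the sample size of 400 and the 95% confidence level are assumptions chosen so that the per-candidate margin of error works out to roughly 5 points.

```python
import math

# Hypothetical poll: 400 respondents, 55% favor candidate A, 45% favor candidate B.
n, p = 400, 0.55

se_p = math.sqrt(p * (1 - p) / n)   # sampling error of one candidate's share
moe = 1.96 * se_p                   # ~0.049, the "5% margin of error" usually reported

lead = 2 * p - 1                    # A's lead over B = 0.10
se_lead = 2 * se_p                  # the two shares move together, so the lead's error doubles
z = lead / se_lead

print(f"margin of error per share = {moe:.3f}, z for the lead = {z:.2f}")
# z comes out near 2, so the 10-point lead sits at the edge of statistical significance,
# not the "dead heat" the interval-overlap heuristic suggests.
```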

After several terms teaching statistics to undergraduates majoring in public administration, political science, international relations, and history, I adopted an approach that helps to alleviate students’ fears by channeling their concerns into a reframing of the utility of statistics. For those who agree with Lindley’s (2000, p. 294) philosophical position that “statistics is essentially the study of uncertainty and that the statistician’s role is to assist workers in other fields…who encounter uncertainty in their work,” there is reason to customize statistics for those students who firmly believe themselves to be “workers in other fields.” Indeed, public administration turns to statistics to quell uncertainty, and it is the instructor’s responsibility to suggest when statistics can assist understanding of problems in “other fields.”


Neopragmatism is a philosophy that confronts uncertainty in the determination of utility. Utility is an indicator of social progress, defined as the alleviation of cruelty (Elshtain, 2003; Rorty, 1989, 1991, 1999; Shklar, 1984). From this vantage, statistics becomes a way to calculate utility as the likelihood that a course of action may alleviate conditions of cruelty—arguably a foundational precept of public administration. Introducing neopragmatism into a statistics course leads to homework and test problems that focus on issues such as inequity and the protection of democratic values. In my experience, this approach, paired with a substantial emphasis on working realistic examples in class to reinforce the pragmatist value of empirical experience, has yielded gains in student performance as well as better course evaluations from the students themselves. Students appreciate the outcome-based perspective, which relegates the mechanics of statistics to the realm of available methodologies useful in some instances (Garfield, 1995), rather than a deontological necessity for understanding phenomena.

This paper briefly outlines a philosophical perspective on statistics in social science before explaining the relevant aspects of neopragmatism in detailed terms. Next comes a description of melding neopragmatism with statistics, followed by a discussion of how this strategy serves the teaching of statistics to public administration students. Several statistical concepts will be “neopragmatized” to illustrate the transformation of a math problem into a governance problem, and thus position the numerate calculations as mere operational processes that assist in answering a greater concern. Given that students have ready access to calculators, spreadsheet programs, and other means of working the formulae, the teaching objective can now focus on how observations relate to conditions of cruelty and how the analysis can serve public administration to make the world a better place.

The objective of this paper is to introduce an alternative approach to teaching statistics—an approach that accounts for students’ interests and competencies while adequately applying the methodological lessons to the practical contexts of public administration. This approach takes a philosophical perspective by integrating a core idea of neopragmatism into the coursework. That idea—the inverse relationship between cruelty and social progress—has been tested in the classroom with encouraging results, as described at the end of the paper.

STATISTICS IN SOCIAL SCIENCE: A BRIEF PHILOSOPHICAL CONSIDERATION

The goal of statistics is to explain as much of a phenomenon as possible while also reducing the level of error as much as possible. Good statistics purport to increase the ratio of explanation to error up to some level of confidence that is necessarily shy of certainty. The error term—always present, never understood—eliminates big “T” truth from our vocabulary, but the concept somehow has remained in the ontology of statistics despite its history of pragmatic guesswork (Pearson, 1990). Statistical conclusions rely on a plethora of qualifying terms—likelihood, probability, uncertainty, margin of error, and so on—that, when used appropriately and consciously, can make the application of statistics more acceptable to the social sciences. Within the academy, Lindley (2000) and commentators give ample credibility to the continuing philosophical debate over statistics, while popular discourse takes issue with its use and abuse as well (Silver, 2012).

Assigning a number to a phenomenon helps to legitimize and deal with it, especially if the phenomenon concerns the weirdness of human behavior. At the beginning of the 20th century, soon after “Student” codified the values of inference for small samples of beer (Pearson, 1990), social scientists adopted statistics as a new tool for making societal advances. The pragmatists took note as well, for the basis of utility in their philosophy was the ability to apply lessons learned from their experiences to future practice. Their lessons were often of the trial-and-error variety, and the expediency of statistical inference for making broad assessments of utility fit their mission of social progress.

Of course, the social sciences ran up against behaviorist and humanist schools when logical positivism crept over from the hard sciences, making demands of rationality that conflicted with experience. The role of the counterfactual in social sciences became especially profound, as Popper demanded that falsification be a criterion of the empirical method against the inertia of conservatism (1959, p. 57). This is seen in how the null hypothesis is framed: it is the condition of the status quo. A scientific revolution (Kuhn, 1996) in the social sciences quietly shifted the idea of progress from an orientation that held the promise of enlightenment by accessing the big “T” truth toward the much more realistic objective of absolving researchers of any epistemological limitations that only enlightened the discipline to human errors. Kuhn (1996) proposed that development is truly pushed by dissatisfaction with current explanations. Thus, the counterfactual is needed to highlight the place that the field is developing away from; falsifiability ensures that possibilities are not limited to past negative experiences, but rather creates the potential for experiences beyond those circumscribed by the context of the hypothesis set. The goal should not be to seek final answers so much as new questions.

This lineage has left statistics in the social sciences with an unstable sense of identity. Abelson (1995) draws on his years of teaching statistics for psychology students to outline what can and cannot be done with its methods. His prescription for using statistics concludes with a Kuhnian prophecy: “Each new generation of research workers in the social sciences, therefore, is exposed to a more sophisticated scientific culture than the previous cohort” (Abelson, 1995, p. 198). Those cohorts are taking into account contemporary considerations from critics of scientific claims, such as the interpretivists’ warning that “methods-driven research narrows the range of questions that the social sciences can usefully entertain and explore,” and call for “sensitivity to contextually specific meanings” (Yanow & Schwartz-Shea, 2006, p. 382).

NEOPRAGMATISM: THEORETICAL AND OPERATIONAL DEFINITIONS

The story that leads to a neopragmatist approach goes back to the common history of statistics and pragmatism at the end of the 19th century. While Karl Pearson, William Sealy Gosset, and other early statisticians were working out the formulae and tables for consistent inference, William James, Charles Peirce, and John Dewey were trying to figure out a philosophy to synthesize experience, truth, and utility. Dewey (1916) was adamant that education be grounded in shared experience—what might be called consensus-based empiricism—meaning that the classroom lesson had to center on developing an epistemic community to understand what is useful in our lives. Dewey’s concept of education emphasizes practice as the object of empirical analysis, which translates to lessons that show concepts through real-world examples and a communal solidarity as to those concepts’ usefulness when seen in different contexts. Both statistics and classical pragmatism appreciate the value of inquiry, trial and error, and the need to qualify the instrumentality of scientific methods in the pursuit of knowledge.

Neopragmatism is based on regular old empiricism: experience and observation matter, but how we use them necessitates a postmodern update. Richard Rorty is the most prominent of neopragmatism’s champions; his core idea is to reject notions of big “T” truths that we can someday access through advances in science (Rorty, 1979). There are no ethereal foundations to knowledge, just as there is no objective vantage to contemplate them; we just have what we can talk about to get us along. Determining the utility of any bit of knowledge is an exercise in building consensus rather than an advancement in our grasp of reality.

Neopragmatism shifts focus from experience to language through the “linguistic turn” (Hildebrand, 2003) that makes the communication of empirical data into a contested task (Swartz, 1997). The means of representing experience is context specific; one person’s description of an experience may differ widely from another’s, even if they both witnessed the same event. For Rorty (1991), there is no meaningful distinction between the experience and the language because the former is nothing without being represented by the latter. Since the social sciences often observe phenomena that must be described in words rather than measured by an interval scale, it is reasonable to hinge evaluations of utility on reaching agreement about the language of the phenomena.

Moving from the high philosophical theory to social science proper, Rorty (1989, pp. 189–198) builds off Shklar’s (1984) proposition that cruelty is the worst thing that we can do, and goes so far as to propose that there is a level of cruelty that eventually, under the protections of near universal solidarity, becomes a functional (big “T”) Truth. Rorty’s argument is convincing enough to admit that there is some credence to the idea of a “final vocabulary” when it comes to cruelty. For instance, although the Nazi atrocities were widely decried, there were some sadistic SS officers who did not consider their acts to be cruel (e.g., killing a Jewish person was cruel to that person, but was offset by the benefit to Hitler’s regime and, for stout believers, the eventual fortunes of the world). Yet there is surely an unimaginably horrendous level of cruelty that would force those very officers to cry for mercy.

This possibility of a transcendent cruelty then serves as the closest thing we have to an objective vantage from which we can gauge all else. For practical social science considerations, all human activity can then be fixed at some relative distance from the “universally cruel” cruelty. Thus cruelty is operationalized as the antecedent of any proposition that, for general purposes of utility, is constructed to test one specific social context against another. The reference point for the test is the “true” cruelty that allows us to pass judgment, opening the door for methods of inquiry (such as statistics) that rely on relativism between disparate contexts.

Neopragmatism can become complicated quite quickly, of course, as the intricacies of postmodern, deconstructivist, antiessentialist language games riddle the social sciences with uncertainties. If neopragmatism and its notions of cruelty and progress take issue with the hard sciences’ claims to big “T” truth (Rorty, 1991), then how does this theory fare in the wispier study of human behaviors? One aspect to keep in mind is that cruelty is not always manifest as the übercruelty that even a Nazi fears. Cruelties may be slight, but the point is that they may extend eventually beyond local contexts and grow their influence toward others. No one wants to wait until another Holocaust is happening to address a cruelty. The key for neopragmatism is balancing limited experiences with a kind of antifoundational sensitivity as empirical background for action. This means that claims of cruelty are neither absolute nor completely relative; there is no standard of cruelty for reference.

When it comes to the value of science, neopragmatism’s antiessentialism implies that the natural “hard” sciences are no more valid than the social sciences; neither school has privileged access to reality nor a better record of accuracy when it comes to correspondence with truths (Rorty, 1979). Baert (2005, p. 141) adds that “from a neo-pragmatic angle, ontological assertions can never suffice, as methodological options are at least partly dependent on what is to be achieved.” The best that either school of science has to offer is the ability to identify a set of contextual practices that have helped us lessen cruelty and advance social progress, as enveloped in our experience as language. In other words, empiricism and scientific methods are communications about conditions of cruelty, not the recording and processing of perceptions with an instrument (Swartz, 1997).

Nonetheless, neopragmatism does value evidence and methods, whether they are presented by a doctor in a white lab coat or an academic sporting tweed and paisley. A semblance of reliability can be achieved with some nugget of science that produces the same outcome as we experience and talk about it, but reliability does not equal validity. At best, reliability lends support to notions of internal validity, but repeating scientific experiments only shows the ability to cope with a particular situation. The knowledge achieved with reliability is that the closed system and its proprietary logic work in their present context. This serves to bolster internal validity, but that validity has an expiration date when contexts change to such a degree that a paradigm shift is in order (Kuhn, 1996). The neopragmatist is interested in that kind of knowledge insofar as it can be generalized for the good right now.

Social scientists can take this opportunity to feel good about their fields. The natural sciences do help us get by in material ways, but all of that is secondary to the grander human experiment conducted in society, as Dewey (1916) and Rorty (1999) see it. This is where external validity is assessed: Does the result of an analytical method contribute to human welfare? The correspondence sought through scientific study is grasping reality as it is experienced.

If there is doubt as to the preeminence of social progress as the ultimate outcome of natural sciences, then consider that the actual cement that undergirds a classroom where social science ideas are theorized is a product of chemistry. Social science is enabled by natural science, but knowing the elasticity projections for poured concrete does not directly solve problems of equitable health policy or forge a service agreement between neighboring governments. To drive the point home, consider that the studies of astrophysics are done in the broad interests of advancing humankind, not for the sake of another alphanumeric mark on a map that is fully contained and explained within the closed system of instrument-enhanced human visual perception.

More often the social sciences are confronted with violations of civil rights, which neopragmatism views as cruelty by means of the marginalization of a group that does not buy into some predominant idea of objective truth (Abellanosa, 2010, p. 102). This usually occurs when a segment is excluded from participating in consensus building (democracy). For example, the Jim Crow laws of the South were based on protecting a foundational ontology full of big “T” truths—blacks were less than whites—that enabled marginalization. Whites felt that giving equal rights to blacks was cruel; it would upset the social order and be a cruelty to the white man’s natural superiority. In this case, social progress was obviously the reduction of the greater cruelty of violations against black people’s civil rights.

THE NEOPRAGMATIST APPROACH: HOW IS STATISTICS CRUEL?

The next step is taking the neopragmatist perspective to figure out how statistics might actually inhibit social progress and thus itself be a cruel practice. The broad-form cruelty of statistics is the assumption that its conclusions are logically sound indicators of the way things really are. Neopragmatism exposes the fallacy that truth is correspondent to reality if the logic behind the truth claims is held to be objectively indisputable. Since much of statistics works from the math logic that exemplifies Cartesian a priori knowledge and is the basis of so much Western thought, the neopragmatist perspective takes issue on behalf of any phenomenon that is not of this traditionalist mold.


This is not to say that math logic is useless—experience shows that it serves human purposes in many ways—but the blind application of math logic to all aspects of human lives is troublesome, especially when used in the context of the oddities of human behavior. The logic that denies us the ability to conceive of a triangle whose interior angles do not sum to 180 degrees, or demands that the product of an odd number and an even number be an even number, has little to offer in the way of understanding transgender political sensitivities1 or public safety policy compliance rates.

The following subsections divide perspectives on the cruelty of statistics into internal and external understandings. In this context, cruelty is not meant to evoke images of torture or distress, but rather the inhibiting of social progress for students and of the objectives of public administration. More generally, the idea is that the neopragmatist approach can alleviate some of the instances where the use of statistics may in fact produce outcomes that conflict with the intentions of the social sciences.

Internal: “Statistics Itself Is Cruel”

The internal understanding holds that statistics may produce cruel outcomes, intentional or not, by virtue of its epistemological nature. This understanding includes the notion that statistics is a cruel tool, insofar as it quantifies and generalizes things that may not really be countable or broadly applied. For example, statistics may be used incorrectly to impart influence, as told by the quip attributed to Benjamin Disraeli: “There are three kinds of lies: lies, damned lies, and statistics.” Forcing an observation into a value-scheme in order to make probability assertions can also be cruel. Consider the measurement of pain—a wholly subjective human experience—on a Likert scale, where the ordinal values are grossly insufficient to reliably convey the myriad characteristics of pain (Ariely, 2009, pp. xiii–xvii). This would be statistics as the agent of cruelty, causing some suffering to the world because it is forcibly employed in an inappropriate context.

At first blush, arithmophobic students may identify with this understanding because they think that having to learn the theory and methods of statistics is a cruel practice on the part of their institution. They may condemn statistics as abusive because it appears incommensurable with their cognitive interests (Zieffler et al., 2008). After all, statistics depends on rigid rules that the social sciences eschew, in contradistinction to the laws of natural science that frame the epistemologies of physics, chemistry, and the like. Furthermore, the statistics instructor may be seen as cruel because she is forcing a system into students’ lives that they expect never to use again. Add in that the abuse of statistics is rampant in social science—another quip that students appreciate is attributed to Mark Twain: “Statistics never lie, but liars use statistics”—and conveying the qualified utility of statistical methods becomes even more difficult.

The neopragmatist approach, as a creature of postmodern deconstructionism, avers that a binary view of the world is insufficient; the languages we use are built on the logic of dichotomies that do not do justice to our experiences in the world. Statistical probability is an obvious culprit of this offense. Yes, a coin flip necessarily turns up heads or tails and an event either happens or does not happen. The Type I and Type II errors are also dichotomous (to themselves), as is the dependent variable in a logistic regression. But the outcomes of these dichotomies are not the end of the story: Do we have to accept a two-sided coin as the decision maker? Can we imagine room for compromise in the world?

Mind, most statistics taught in public administration curricula are parametric; the distributions are assumed to be normal, Gaussian, and have a nice curve of probability. (Some specialized and higher-level statistics courses may introduce Bayesian probability, but students are not normally exposed to this.) Since this is not always the case for a distribution of a population, we are making an assumption about the world that may not hold up. It is a convenient assumption that can make for difficulties in the future when the model is not commensurate with observations.
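
A quick classroom simulation can show what is at stake when the normality assumption fails. The sketch below uses an invented, skewed outcome (exponentially distributed waiting times are an assumption for illustration, not data from any agency); a normal model fitted to the same mean and standard deviation understates how often extreme cases occur.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical skewed outcome: days a benefit application waits in a queue.
waits = rng.exponential(scale=10, size=10_000)

mu, sigma = waits.mean(), waits.std()
threshold = mu + 2 * sigma  # "extreme" delays, two standard deviations above the mean

normal_share = 1 - stats.norm.cdf(threshold, loc=mu, scale=sigma)  # what the bell curve predicts
actual_share = np.mean(waits > threshold)                          # what the skewed data show

print(f"Normal model predicts {normal_share:.1%} extreme delays; the data show {actual_share:.1%}.")
```

For exponential-like data the normal model predicts roughly 2% of cases beyond that threshold while the data contain closer to 5%, and it is exactly those underpredicted extreme cases that tend to carry the cruelty a public administrator should worry about.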

A relevant example of the potentially cruel outcomes from the use of statistics played out in my past teaching context in Michigan. Many students in the Detroit metro area come from families with auto industry backgrounds and are now looking at careers in government—these are two job sectors that have traditionally offered workers defined-benefit retirement plans. Both sectors have seen their pension funds dwindle far below their needs in recent years, however, causing a popular backlash against defined-benefit plans because they are now unsustainable. These plans depend on actuarial projections that have proven inadequate at accounting for the errors extant with such applications. The context of the retired American worker has changed—due to longer life expectancies and a sustained recessionary period—and the statistics used by the pension managers are underspecified because the needs are too dynamic for quantitative analysis alone.

The cruelty here is that the experienced results do not resemble the projections from the numbers; the projected likelihoods have fallen short, and the output-outcome discrepancies will mean tangible reductions in my students’ parents’ retirement quality of life, and perhaps even these students’ own ability to pay for their college education.

A more theoretical but nonetheless poignant instance of the cruelty in statistics may be found in the hypothesis test. It is cruel to students because it is counterintuitive; as Abelson (1995, p. 9) states, “A null hypothesis test is a ritualized exercise of devil’s advocacy.” (Indeed, “rejecting the null hypothesis” is one of the few times a double negative is institutionalized.) Consider this arrangement as akin to the justice system, albeit with the twist that a defendant is presumed guilty until proven innocent. The status quo condition in the null hypothesis is the default that would be more naturally positioned as the lesser option for an administrative application. It is a cruel evaluation process, cruel because it puts the burden of proof on imperfect forecasts of a decision’s chance for success—thus it stifles social progress when the null hypothesis is the condition of cruelty that is known and suffered right now. Better the devil that we know than the devil we do not, unless accompanied by a qualified p-value! This boils down to the conservative versus progressive debate that frustrates the student who hopes to improve society.
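
The devil’s-advocacy structure is easy to demonstrate with a small worked test. The numbers below are invented for illustration: suppose the status quo (and the known cruelty) is an average emergency response time of 8 minutes, and a pilot dispatch protocol yields a handful of faster observations. The null hypothesis defends the status quo until the sample argues it down.

```python
import numpy as np
from scipy import stats

# Hypothetical pilot data: response times (minutes) under a new dispatch protocol.
pilot = np.array([7.1, 8.3, 6.9, 7.6, 7.0, 8.1, 6.5, 7.4, 7.8, 6.8])

# Null (status quo): the protocol changes nothing; the mean response time is still 8 minutes.
# Alternative: response times are lower, i.e., the known and suffered cruelty is reduced.
t_stat, p_value = stats.ttest_1samp(pilot, popmean=8.0, alternative="less")

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value is evidence against the status quo, not proof that the protocol "works";
# the burden of proof sits entirely on the proposed improvement.
```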

External: Statistics Determines “What Is Cruel?”

The external understanding is more amenable to classroom instruction: in this approach, statistics can be used to assess cruelty by assigning indicators of cruelty to phenomena and then looking for patterns of predictability. If a phenomenon can be plausibly observed by a quantified measure of cruelty, then statistics is the principal of cruelty because it serves to provide an understanding of the cruel phenomenon. For example, public health studies often include a variable for the infant mortality rate in their calculus as a proxy for many types of cruel conditions (or at the least, conditions that do not foster social progress).

Although the previous section argued that the hypothesis test is structurally cruel because it puts the burden of proof on the wrong side of social progress, that same fault works to the neopragmatist’s advantage in determining what is cruel. The hypothesis test—as a discursive measure to gain consensus among those who are trying to lessen cruelty—is a clear way to make decisions in the face of uncertain contexts. Neopragmatism holds that we should avoid cruelty without dictating a more concrete objective to move toward. “Run away!” as the conclusion of a situational analysis is often a useful recommendation.

The goal of lessening cruelty in the world pins down human experiences of the past while leaving the future open for assessment. If there were five instances of beatings yesterday, then one might hope for fewer than five today. There is no replacement or exchange going on with this hope, but rather the meager desire to have less of something that is not desired. Mind, this approach avoids a dichotomous mind-set, which neopragmatism discredits for forcing a right-or-wrong view of the world. The opposite of five beatings is not five kisses, but entertaining the alternative of four beatings or fewer is most definitely a better outcome. The neopragmatist perspective can only say what is preferred in contrast to what has been experienced.

Neopragmatism has a grasp of the null that is as firm as the consensus behind its recognition. That is, all can agree that there were five unpleasant beatings yesterday. An alternative to the null is simply defined as being the “not null” condition—less cruelty is not the opposite of cruelty—and thus opens up options for social progress. Quibbling over this difference may seem like so many language games, yet that is exactly what is experienced of the world once an actor tries to do anything beyond just being in the world (Rorty, 1979, 1999). Those who embark on the study of phenomena that are contingent on words for their measurement, as is usually the case in the social sciences, are playing language games that cannot be justifiably summarized by a t-test for significance.
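
This directional, “not null” framing maps naturally onto a one-sided test. The sketch below is a hypothetical illustration with made-up numbers: the consensus null is that the known rate of incidents continues (five per day), and the alternative is simply “less cruelty,” not some opposite condition.

```python
from scipy import stats

# Consensus null: the known level of cruelty continues at 5 incidents per day.
baseline_rate = 5            # incidents per day, agreed upon from yesterday's shared experience
days = 7
observed = 24                # hypothetical count observed after an intervention

# One-sided alternative: fewer incidents, i.e., "less cruelty" rather than its opposite.
# Exact Poisson probability of a count this low if nothing has changed.
p_value = stats.poisson.cdf(observed, mu=baseline_rate * days)

print(f"P(count <= {observed} | rate unchanged) = {p_value:.3f}")
```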

Other components of statistics may be neopragmatized for public administration students. For example, randomness is a concession to the practical limits of the “language” of statistical methods, to represent how weird and unpredictable phenomena really are. Random is the side door to the error term; incorporating randomness into statistics is implicit agreement that the method does not access big “T” truth.

Similarly, a neopragmatist perspective reconciles its distrust of the truth-reality correspondence construct with statistical inference by positioning the p-value as the possibility that randomness overcame the methodology being used, within the acceptable allowance for things not being what they appear. Lindley (2000, p. 295) makes a special note that “the definition of randomness is subjective; it depends on you. What is random for one person may not be random for another.” For example, when considering what most polling services would call a random sample of respondents for a political survey, students know that the selection pool is inherently nonrandom. Americans have conflicted feelings about political privacy, and only certain types—often those whom we conventionally think of as outliers on the political spectrum—are wont to disclose their voting choices.
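
A permutation test makes this reading of the p-value literal: shuffle the labels, and count how often pure chance produces an effect at least as large as the one observed. The data below are invented placeholders (satisfaction scores from two hypothetical service centers), not results from any study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical satisfaction scores from two service centers.
center_a = np.array([6.1, 7.4, 5.8, 6.9, 7.2, 6.4, 5.9, 7.0])
center_b = np.array([5.2, 6.0, 5.5, 6.3, 5.1, 5.8, 6.1, 5.4])

observed_gap = center_a.mean() - center_b.mean()
pooled = np.concatenate([center_a, center_b])

# How often does a random relabeling of the same scores beat the observed gap?
n_reps = 10_000
count = 0
for _ in range(n_reps):
    rng.shuffle(pooled)
    if pooled[:8].mean() - pooled[8:].mean() >= observed_gap:
        count += 1

p_value = count / n_reps
print(f"observed gap = {observed_gap:.2f}, p = {p_value:.4f}")
# The p-value is literally the share of "random worlds" in which chance alone
# matched or exceeded what was observed.
```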

THE CLASSROOM PRACTICE OF NEOPRAGMATISM

Fitzpatrick (2000) and Smith and Martinez-Moyano (2012) suggested some neopragmatist values in their prescriptions for effective pedagogy: using real examples, keeping the utility of the practice in mind, and minding that the interpretation of results is paramount. What can be added, to fill out the neopragmatist approach, is realizing that all of these things are contestable and subject to democratic discourse.

That is not to say that my classroom is a forum for entertaining alternative mathematics, but rather that the answers to the questions do not end with “fail to reject the null.” The mayor does not care what the p-value is. The U.S. Census, despite its name and intentions, does not do an actual head count of every person in the country. A bad purchasing decision affects every citizen’s welfare, even for something as inconsequential as a pressure gauge at a nuclear reactor. These and myriad other factoids spurring off the intersection of statistics and public administration illustrate that the outputs of a study need to be closely related to the outcomes and then put into the context of the administrative objectives.

My teaching of statistics for social science students retains the formal trappings of the traditional approach, such as stating the alpha and insisting on a clear diagram of the confidence interval on a normal distribution curve. Yet these must be supplemented with not only an interpretation of the statistical answers, but also an extension of the conclusion into its potential effect on administrative practice and, ultimately, social welfare. I continually remind students that statistics will not reveal big “T” truths or illuminate metaphysical certainties, but it can demonstrate the utility of some things within qualified circumstances.

My classroom experience teaching in the “traditional” method—that is to say, without explicitly introducing and emphasizing the neopragmatist consideration of cruelty—felt disconnected. Students are always concerned with their grades, but earlier, non-neopragmatist iterations of the course found students to be more fixated on their answers being correct as determined by the answers on the key. Student responses to interviews and their open-ended comments on course evaluations expressed concerns with their technical aptitude for statistics. By contrast, in later sections of the class taught with an explicit neopragmatist approach, some students still struggled with the mechanics of statistics, but there was a palpable difference in that their frustrations were directed not so much at the complexities of calculating the standard error as at the fact that the difficulties of statistics could lead to cruelties. These students’ concern was for the appropriate application and interpretation of statistics, a long-term orientation that is much more satisfying than just mastering the operations to pass a test.

This shift in understanding was illustrated by the popularity of an assignment that I introduced under the neopragmatist approach to the class: to write a research proposal. I positioned the assignment as a “grade-saver” insofar as it was a sizable portion of the course grade that called on the students to apply the theoretical properties of statistics to an everyday problem without concern for doing the actual number crunching. In essence, I asked students to make a coherent argument to analyze a current issue of public administration using statistics, paying special attention to how the quantitative research design would fit the context with explicit consideration of the social progress implications of its potential conclusions. In other words, they had to propose the right tool for the job and give a thorough explanation of how it could improve the world.

In the context of public administration, this goes toward what Carl Friedrich (1940) called “publicity,” meaning the bureaucrat’s responsibility to provide clear justification for decisions and action. A result of this assignment (as an exercise in neopragmatism) was the interesting variables and models that students proposed as they sought ways to measure and reduce cruelty. For instance, one student operationalized a measure of “destitution” to indicate how susceptible immigrants would be to gang influences in metropolitan Detroit. Another student outlined a survey to capture the experienced effects of a neighborhood stabilization program on long-term residents in the neighborhood, a far cry from relying on median home values, tax assessments, and the proportion of owner-occupied properties to evaluate the equity of such public policies on distressed urban areas.

Statistics is a technique that must be justified for each use, regardless of whether the problem is a relatively straightforward calculation for personnel allocation or a murky quandary over sewer routing. An example of a classroom exercise that I liked to use would be to ask students to assess the effectiveness of police shifts (i.e., five 8-hour shifts or four 10-hour shifts). As this kind of administrative action is happening in almost every state, there are other studies and data available. But the issue is loaded with context-specific ramifications for a government as well as citizens that need to be considered when trying to design a model to capture dimensions of equity, social progress, and potential cruelties. These considerations could take form in police force absenteeism, increases in harassment complaints, or even unexpected wear on equipment (“Like coffeemakers!” one student joked). All of these aspects speak to broader concerns than the cost savings or other administrative indicators that most often drive such decisions. The neopragmatist approach serves to ensure that these things are given due consideration; the question of police shift length is multifaceted and open to the debate that is incumbent on public administration to entertain.
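
In class, the mechanical core of such an exercise can be kept deliberately small so that the discussion stays on the surrounding considerations. The sketch below compares one invented indicator (monthly harassment complaints) across the two schedules; the numbers and the choice of a two-sample Welch test are assumptions for illustration, not a recommended model of the problem.

```python
import numpy as np
from scipy import stats

# Hypothetical monthly harassment-complaint counts under the two shift schedules
# (placeholder numbers, not departmental data).
eight_hour = np.array([4, 6, 5, 7, 3, 5, 6, 4, 5, 6])
ten_hour = np.array([7, 8, 6, 9, 7, 8, 10, 7, 6, 8])

# Welch's two-sample t-test, which does not assume equal variances.
t_stat, p_value = stats.ttest_ind(eight_hour, ten_hour, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# The p-value settles only one narrow indicator; equity, officer fatigue, and citizen
# impact still require the discursive weighing the neopragmatist approach calls for.
```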

Another example of this approach is a lesson, always effective for students, about the implications of using the median or the mean to understand income inequality. News coverage and policy action taken with “the average income” in mind are fraught with problems, as the outliers are either marginalized (median) or overindulged (mean). To make the discussion in class especially pertinent, I use the university’s published salary data for faculty and staff to illustrate what it means when there are labor negotiations over a cost-of-living increase or a proposed increase in tuition. Suffice it to say, students often gain awareness of a previously unknown cruelty when they see the salary difference between a professor and a basketball coach. The neopragmatist angle to such a lesson is that summary statistics and inferences based on them can be horribly misleading if not outright wrong. The construction of a hypothesis around the issue, such as, “Faculty salaries are fairly competitive for university employees,” shows how statistics can contribute to—but not conclude—issues in the real world.
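
The arithmetic behind that lesson fits in a few lines. The salary figures below are invented stand-ins (in thousands of dollars) rather than any university’s actual payroll; the single large outlier plays the role of the coach.

```python
import numpy as np

# Hypothetical salaries in thousands of dollars: nine faculty/staff and one highly paid coach.
salaries = np.array([48, 52, 55, 60, 61, 63, 67, 70, 74, 2500])

print(f"mean   = {salaries.mean():.0f}k")      # 305k, dragged upward by the single outlier
print(f"median = {np.median(salaries):.0f}k")  # 62k, much closer to the typical employee
```

Whether the mean or the median stands in for “the average income” in a news story or a bargaining session changes which employees are marginalized by the summary, which is precisely the cruelty the lesson asks students to notice.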

The simplest neopragmatist advice for teaching statistics to public administration students is to enforce the objective through an appendage. That is to say, every question, problem, and answer should include the phrase “as it reduces cruelty in the world.” Before signing off on a community needs assessment survey, contextualize the questions as they would inform a course to improve social progress. After analyzing the racial demographics of a city to determine the effectiveness of an urban renewal campaign, ensure that the rejection of the null hypothesis is accompanied by the indication of confidence as well as a meaningful statement of how that campaign has affected the level of inequality for citizens. Students begin to anticipate and appreciate this predication as a sort of neopragmatist conditioning that keeps statistics within the realm of their academic and practical interests. In the spirit of the philosophy’s antifoundationalist attitude, the teaching approach is to provide tools (i.e., the formulae of statistics) and then open discourse to allow consensus by the students as to the utility of those tools for a variety of administrative contexts.

CONCLUSION…AS IT REDUCES CRUELTY IN THE WORLD

The statistics teacher has long struggled to apply lessons to students’ experiences (Smith & Martinez-Moyano, 2012; Zieffler et al., 2008); this struggle is exacerbated when the class is the sole quantitative study course in the public administration curriculum. In essence, there is an extenuating obstacle in what public administration students consider to be their cognitive interests and their need to pass a required course that, on the surface, might not directly inform those interests. For many students, numeracy ended with high school algebra and home economics. For many more, statistics is some sort of scientific witchcraft that is better left to specialists. Part of the challenge of teaching statistics is to engage the students in the subject’s pertinence to many endeavors, as well as to convey statistics’ accessibility to even the most arithmophobic learner.

As the neopragmatist approach demonstrates, a method’s validity and utility are a shared determination (Rorty, 1991); without a democratic discourse about statistics, it will remain a tool for a particular subset of intellects who will command the definition of cruelty within their own narrow interests. Neopragmatism is an approach to research design and justification in public administration, rather than a full-fledged call for a scientific revolution to re-create quantitative analysis in postmodern terms. Whereas traditional science puts validity first and foremost as its objective, a neopragmatist approach promotes the alleviation of cruelty as the ultimate goal and subjugates validity as an ancillary, context-dependent, and discursively determined outcome. This positioning helps students see the data of public policy problems as a potentiality for informing useful outcomes.

Neopragmatism’s ironic epistemology—that we are all in on the joke but still act as if our language has something to do with the world when all it really has to do with is communication about our language—lightens the mood of quantitative analysis. If students treat statistics as a language puzzle rather than a positivist chore battered by a never-ending list of eponymous tests, then they can more easily accept it in their cognitive framework. Dewey (1916) makes a concerted effort to discuss how “interest and discipline” hinge on students’ connectedness to the subject, which is the instructor’s responsibility to ensure. If challenged with a classroom of recalcitrant public administration students and the equation for the correlation coefficient, such an instructor would point out that “the problem of instruction is thus that of finding material which will engage a person in specific activities having an aim or purpose of moment or interest to him, and dealing with things not as gymnastic appliances but as conditions for the attainment of ends” (Dewey, 1916, p. 155). Teaching statistics for social science students can be a successful endeavor from Dewey’s perspective by bridging the divide, unifying “an independent mind on one side and an independent world of objects and facts on the other” (1916, p. 162).

For students, it is a far leap from deriving the standard error of the sampling mean to knowing that one can get an “irregular” group of respondents on any particular day that might warp one’s explanation of a phenomenon. Students sometimes feel as though statistics is something wholly different from their discipline, rather than accepting it as a tool within their field’s pursuit of understanding. Abelson notes, “For many students, statistics is an island, separated from other aspects of the research enterprise. Statistics is viewed as an unpleasant obligation, to be dismissed as rapidly as possible so that they can get on with the rest of their lives” (1995, p. xii). In other words, he holds that teaching is the illumination of how thinking and doing are connected, foreshadowing our contemporary pedagogical buzzword: engagement.

This paper offers an approach that extends the findings of previous studies about the difficulties of teaching statistics in both the general (Adenay & Carey, 2011; Garfield, 1995; Payne & Williams, 2011; Tishkovskaya & Lancaster, 2012; Zieffler et al., 2008) and specific (Fitzpatrick, 2000; Smith & Martinez-Moyano, 2012) public administration contexts. The findings from using a neopragmatist approach are anecdotal: I tried it in three iterations of an introductory statistics course at a single institution. Yet there has been positive feedback, if not a measurable difference in outcomes (analyzing grades is inconclusive and inappropriate). Responses to open-ended questions on the course evaluations for sections that emphasized a neopragmatist approach were encouraging: students acknowledged that framing statistics as a means to assess cruelty was helpful to their understanding, and recognized that the idea of statistics as just a tool to help make administrative decisions was conducive to their success in the course.2

Fixing the objective as “less cruelty” leads to utility and operational truths being assessed by their assistance in getting humankind closer to that goal. If quantitative empirical analysis can advance social progress to a state of less cruelty, then it is useful.


ACKNOWLEDGEMENT

This research was supported by the Korean National Research Fund through Incheon National University.

NOTES

1 A real example of this drove the point home: During my first term teaching the graduate course in quantitative methods, I intended to use some basic demographic data about the students in the class to introduce the concept of a dummy variable (0 = male, 1 = female). One of my students was transgender.

2 I taught three sections of the undergraduate introduction to quantitative methods course while conscientiously using a neopragmatist approach, in comparison to two earlier sections of the course that I taught in the “traditional” format. The “sample” was from a single institution from 2012–2013 with 58 students completing the course. An analysis of grades or other metrics to show a relationship between the neopragmatist and “traditional” approaches of teaching statistics to public administration students would be inappropriate, given the issue of academic records privacy and the myriad variables in the course content that would confound any results for the sample size. Nonetheless, direct conversations with select students after the course was completed reinforced the positive sentiments about the neopragmatist approach.

REFERENCES

Abellanosa, R. (2010). Rorty’s philosophy of education: Between orthodoxy and vulgar relativism. Kritike, 4(2), 87–104.

Abelson, R. (1995). Statistics as principled argument. Hillsdale, NJ: Lawrence Erlbaum Associates.

Adenay, K., & Carey, S. (2011). How to teach the reluctant and terrified to love statistics: The importance of context in teaching quantitative methods in the social sciences. In G. Payne & M. Williams (Eds.), Teaching quantitative methods (pp. 85–98). London, UK: Sage.

Ariely, D. (2009). Predictably irrational. New York, NY: Harper Collins.

Baert, P. (2005). Philosophy of the social sciences: Towards pragmatism. Malden, MA: Polity Press.

Dewey, J. (1916). Democracy and education. New York, NY: Macmillan.

Elshtain, J. (2003). Don’t be cruel: Reflections on Rortian liberalism. In C. Guignon & D. Hiley (Eds.), Richard Rorty (pp. 139–157). Cambridge, UK: Cambridge University Press.

Friedrich, C. (1940). Public policy and the nature of administrative responsibility. Public Policy, 1(1), 1–20.

Fitzpatrick, J. (2000). What are our goals in teaching research methods to public administrators? Journal of Public Affairs Education, 24(1), 173–181.

Gal, I. (2002). Adults’ statistical literacy: Meanings, components, responsibilities. International Statistical Review, 70(1), 1–51.

Garfield, J. (1995). How students learn statistics. International Statistical Review, 63(1), 25–34.

Hildebrand, D. (2003). The neopragmatist turn. Southwest Philosophy Review, 19(1), 79–88.

Kuhn, T. (1996). The structure of scientific revolutions (3rd ed.). Chicago, IL: University of Chicago Press.

Lindley, D. (2000). The philosophy of statistics. Journal of the Royal Statistical Society: Series D (The Statistician), 49, 293–337.

Payne, G., & Williams, M. (Eds.). (2011). Teaching quantitative methods. London, UK: Sage.

Pearson, E. (1990). ‘Student.’ Oxford, UK: Clarendon Press.

Popper, K. (1959). The logic of scientific discovery. New York, NY: Routledge.

Rorty, R. (1979). Philosophy and the mirror of nature. Princeton, NJ: Princeton University Press.


Rorty, R. (1989). Contingency, irony, and solidarity. Cambridge, UK: Cambridge University Press.

Rorty, R. (1991). Objectivity, relativism, and truth. New York, NY: Cambridge University Press.

Rorty, R. (1999). Philosophy and social hope. London, UK: Penguin Group.

Shklar, J. (1984). Ordinary vices. Cambridge, MA: Harvard University Press.

Silver, N. (2012). The signal and the noise. New York, NY: Penguin Press.

Smith, A., & Martinez-Moyano, I. (2012). Techniques in teaching statistics: Linking research production and research use. Journal of Public Affairs Education, 18(1), 107–136.

Swartz, O. (1997). Conducting socially responsible research. Thousand Oaks, CA: Sage.

Tishkovskaya, S., & Lancaster, G. (2012). Statistical education in the 21st century: A review of challenges, teaching innovations and strategies for reform [Online]. Journal of Statistics Education, 20(2). Retrieved from http://www.amstat.org/publications/jse/v20n2/tishkovskaya.pdf

Yanow, D., & Schwartz-Shea, P. (Eds.) (2006). Interpretation and method: Empirical research methods and the interpretive turn. Armonk, NY: M. E. Sharpe.

Zieffler, A., Garfield, J., Alt, S., Dupuis, D., Holleque, K., & Chang, B. (2008). What does research suggest about the teaching and learning of introductory statistics at the college level? A review of the literature [Online]. Journal of Statistics Education, 16(2). Retrieved from http://www.amstat.org/publications/jse/v16n2/zieffler.html

ABOUT THE AUTHOR

David Oliver Kasdan is assistant professor of public administration at Incheon National University in South Korea. His research interests include neopragmatism, behavioral economics, disaster management, and social justice.
