
The Good, the Bad, and the Ugly of Public Opinion Polls
Russell D. Renka
Professor of Political Science
Southeast Missouri State University
E-Mail: [email protected]
February 22, 2010

° Polls v. Reports from Polls  ° Sampling Error  ° Good Polls  ° Bad Polls  ° Ugly Polls  ° Conclusion  ° Polling Links  ° Notes  ° References

 

    Public opinion polls or surveys are everywhere today.  A nice sampling of professional surveyors is at Cornell Institute for Social and Economic Research (CISER), Public Opinion Surveys.  The Wikipedia Opinion poll site covers the history and methods of this emergent profession, pioneered in America, and its Polling organizations page lists some globally distributed polling organizations in other countries.  PollingReport.com compiles opinion poll results on a wide array of current American political and commercial topics.  USA Election Polls tracks the innumerable election-related polls in the election-rich American political system.  The National Council on Public Polls (NCPP) defines professional standards for its members and lists them--but many polls online and off do not adhere to such standards.

 

    Polls have become indispensable to finding out what people think and how they behave.  They pervade commercial and political life in America.  Poll results are constantly reported by national and local media to a skeptical public.  Seemingly everyone has been contacted by a pollster or someone posing as one.  There is no escape from the flood of information and disinformation from polls.  The internet has enhanced both the use and misuse of such polls.  Any student therefore should be able to reliably tell a good poll from a bad one.  Bad ones are distressingly commonplace on the web.  What is more, bad polls come in two forms. The more common one is the innocuous or unintended worthless poll.   But there is a far more malevolent form that I label "ugly" polls.  This is a manual for separating good polls from bad ones, and garden-variety bad from the truly ugly.

 

Polls v. Reports from Polls

 

    Rule One in using website polls is to access the original source material.  The web is full of polls, and of reports about polls.  They are not the same thing.  A polling or survey site must contain the actual content of the poll: specifically, the questions asked of participants, the dates when the poll was conducted, the number of participants, and the sampling error (see next section below).  Legitimate pollsters give you all that and more.  They also typically have a website page devoted to news reports based on their polls.  The page will include links to the parent website, including the specific site of the surveys being reported.  So anyone who wants to check directly whether a report is accurate may easily do so on the spot.


 

    But once polls are published, advocate groups rapidly put them to their own uses.  Sometimes they do not show links to the source.  For instance, see Scenic America's Opinion Polls: Billboards are Ugly, Intrusive, Uninformative.  This is a typical advocate-group site with a report based on several polls saying the American people consistently dislike highway billboards.  But the polls are not linked (although this group does cite them properly at the bottom of their file).  Therefore readers either must hunt these down or take this report's word for it--and that is never a good idea in dealing with advocate groups!  Advocate groups have a bad habit of selectively reporting only the information that flatters their causes.  That should not be accepted at face value.  It's best to draw no conclusion at all unless one can access the source information for oneself.

 

    Some advocacy groups attack legitimate pollsters and polls by distorting their data and purposes.  A Christian conservative group with the name Fathers' Manifesto produced The Criminal Gallup Organization to attack this well-known and reputable pollster for alleged misrepresentation of American public opinion on legalized abortion.  They said "The fact that almost half of their fellow citizens view the 40 million abortions which have been performed in this country as the direct result of an unpopular, immoral and unconstitutional act by their own government, as murder, is an important thing for Americans to know.  This is not a trivial point, yet the Gallup Organization took it upon itself to trivialize it by removing any and all references to these facts from their web site." (Abortion Polls by the Criminal Gallup Organization)  That was followed with a link to the offender's URL at www.gallup.com/poll/indicators/indabortion.asp, now dead.  The truth is far simpler than conspiracy.  In late 2002, Gallup took nearly all its regular issue sets private on the web, abortion included.  One will only know this by escaping the confines of the advocate group's narrow perspective and seeing the targeted pollster's own take on the issue.  That can now readily be done via the newer Gallup site's search using "abortion polls."  It produces an Abortion In Depth Review summary of numerous polls dating from 1975 at this URL:  www.gallup.com/poll/9904/Public-Opinion-About-Abortion-InDepth-Review.aspx.  It demonstrates that 12 to 21 percent of Americans would prefer that abortions be "illegal in all circumstances"; but of course (for reasons cited below), the word "murder" is not employed.

 

    The lesson is that any poll-based report must make the full source information available to its readership.  There is no excuse for failing to identify the source or to link directly to it.  A report that does neither gives grounds for suspicion that its authors want you to take their word as the final authority.  That is not acceptable conduct in the world of polls and surveys.  I do not mean the report must literally attach links, although that's never a bad idea.  But it must identify the source in such a way that anyone can then do a standard search and examine the original source material.1

 

Sampling Error

 

    This elementary term must be properly understood before we go further.  "Sampling error" is a built-in and unavoidable feature of all proper polls.  The purpose of polls is not to get direct information about a sample alone.  It is to learn about the "mother set" of all those from which a poll's sample is randomly drawn.2  This "population" consists of everyone or everything we wish to understand via our sample.  A particular population is defined by the questions we ask.  It might be "all flips of a given coin" or "all presidential election voters in the 2008 American general election" or "all batteries sold by our firm in calendar 2008" or "all aerial evasions of predatory bats by moths" or "all deep-sky galaxies" or any number of other targets.  The object is not to poll the whole population, but rather to draw a sample from it and poll that sample directly for the sake of drawing an "inference" or judgment about the population.  But all samples have an inherent property:  they fluctuate from one sample to the next as each is drawn at random from the elements of the targeted population.  This natural property is "sampling error" or "margin of error" (Mystery Pollster: What does the margin of error mean?).  These are not surveyor's mistakes, but rather inherent properties of all sampling (SESTAT's Understanding Sampling Errors).  Cautions on reading and interpreting these are at PollingReport's Sampling Error (Taylor 1998) or Robert Niles' Margin of Error.

 

    Sampling error tells us the possible distance of a population's true attribute from a directly measured sample attribute.  You cannot assume any sample's measured properties (such as mean and standard deviation) are exactly like the population's properties.  The sweet part of sampling error is that we can easily calculate how large it is.  It is chiefly determined by the number of units in the sample.  You can use the DSS Research Sample Error Calculator to determine this (also: American Research Group's Margin of Error Calculator).  Or first specify a desired accuracy level, and find out what size sample will achieve it (Creative Research Systems, Sample Size Calculator).
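    To see the arithmetic such calculators perform, here is a minimal Python sketch of my own (the function names are illustrative, not taken from any of the sites above).  It assumes simple random sampling, the worst-case proportion p = 0.5, and a 95 percent confidence level, for which the z value is 1.96.

import math

def margin_of_error(n, p=0.5, z=1.96):
    # Half-width of the confidence interval for a random sample of size n.
    return z * math.sqrt(p * (1 - p) / n)

def sample_size_needed(moe, p=0.5, z=1.96):
    # Smallest sample size whose margin of error is at most moe.
    return math.ceil((z / moe) ** 2 * p * (1 - p))

print(round(margin_of_error(1000) * 100, 1))  # 3.1 percent for a 1000-unit sample
print(sample_size_needed(0.05))               # 385 subjects for a 5 percent margin

    Running it reproduces the familiar figures: a 1000-unit sample carries about a 3.1 percent margin of error, and roughly 385 subjects suffice for a 5 percent margin, close to the "about 400" rule of thumb used later in this essay.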

 

    People tend to believe that samples must be a significantly large part of a population from which they're drawn.  That is simply wrong.  Asher (2001) cites the fallacy of thinking that cooks testing the broth or blood testers taking red and white cells must take some appreciable portion of the whole.  Thank goodness, neither of those is necessary.  I like to cite coin flips, because the population of "all flips of a coin" is some undefined huge number, yet we routinely test coins for heads-to-tails fairness with a mere 500 to 1000 flips.  Our sampling error for 1000 flips is just 3.1% or 31 flips; so we predict that a fair coin produces 500 heads plus-or-minus 31.  We don't mind the huge population size (all coin flips).  In fact, we prefer that it be very large, because that way our extraction of a sample has no appreciable effect on the leftover items from that population.3

 

    The DSS Calculator also permits us to seek different levels of assurance about the sampling error.  We call this the "confidence level" (the associated range of values is the "confidence interval").  Customarily we accept a 95% level, meaning that our 1000 flips will stray beyond the 3.1% band only 1 time in every 20 samples.  We get 500 heads plus-or-minus 31 on 19 trials out of 20.  If that isn't good enough for the cautious, they can select 99% instead, and that produces a larger sampling error (about 4.1%) for a more cautious inference about the mother set of flips; now we predict 500 heads plus-or-minus 41.  Polls can be custom-fit for different accuracy demands.
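    The "19 trials out of 20" claim can be verified by brute force.  The short simulation below, my own illustration rather than anything from the calculators above, draws ten thousand 1000-flip samples of a fair coin and counts how often the head count lands within 31 of 500.

import random

random.seed(1)  # fixed seed so the illustration is reproducible
trials = 10_000
within = sum(
    abs(sum(random.random() < 0.5 for _ in range(1000)) - 500) <= 31
    for _ in range(trials)
)
print(within / trials)  # close to 0.95, i.e. about 19 samples in every 20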

 

Good Polls

 

    All polls are surveys based on samples drawn from parent populations.  A poll's purpose is to make accurate inferences about that population from what is directly learned about the sample through questions the sampled persons answer.  Knowledge of the sample is just a means to that end.  All good polls follow three indispensable standard requirements of scientific polling.

 

    First, the questions must be worded in a clear and neutral fashion.  Avoid wording that will bias subjects toward or away from a particular point of view.  The object is to discover what respondents think, not to influence or alter it.  Along with clear wording goes an appropriate set of options for the subject to choose.  It makes no sense to ask someone's income level down to the dollar; just put in options that are sufficiently broad that most respondents can accurately place themselves.  A scan of good polls generally shows a "no opinion" option as well.  That's to capture the commonplace fact that many people have no feelings or judgments one way or the other on the survey question.  If obliged to choose only from "True" or "False," many who have no opinion will flip a coin and check off one of those options.  Thus a warning:  the business of fashioning truly effective survey questions is not easy.  Even the best polls have problems with fashioning their questions to avoid bias, confusion, and distortion (Asher 2001, 44-61).  Roper illustrates this via a confusing double negative that caused a high proportion of respondents to opt for a Holocaust-denial reply, whereas a more clearly worded question showed that this radical view is held by a tiny proportion of respondents (Ladd 1994, Roper Holocaust Polls; Kagay 1994, Poll on Doubt Of Holocaust Is Corrected - The New York Times).  It usually takes a professional like Professor Ladd to parse out such distinctions in question wording among valid polls.  This is where determined issue advocates can be valuable, because many watch out for subtle differences in question wording that can alter responses to the advocate's pet issue (for example, Mooney 2003, Polling for Intelligent Design).  But with some practice it's still feasible for any alert reader to see the difference between properly worded questions and the rest.

 

    The rest fall into two categories:  amateur work, and deliberate distortion.  A great many website polls exhibit amateurs at work, with highly imprecise or fuzzy wording of questions.  I'll not bother to show these by links, since their numbers are legion all over the web.  The deliberate abusers are less common.  These are discussed later on under "ugly polls."

 

    Second, the subjects in the sample must be randomly selected (Research Methods Knowledge Base:  Random Selection & Assignment).  The term "random" does not mean haphazard or nonscientific.  Quite the opposite, it means every subject in a targeted or parent population (such as "all U.S. citizens who voted in the 1996 general election for president") has the same chance of being sampled as any other.  Think of it like tumbling and pulling out a winning lottery number on a State of Kentucky television spot; they are publicly showing that winning Powerball numbers are selected fairly by showing that any of the numbers can emerge on each round of selection (Kentucky Lottery).  Fairness means every number has identical likelihood of being the winning number, no matter what players might believe about lucky or unlucky numbers.  So "random" means lacking a pattern (such as more heads than tails in coin flips, or more of one dice number than the other five on tumbled dice) by which someone can discover a bias and thereby predict a result (Random number generation - Wikipedia).  That's a powerful property, as only random selection is truly "fair" (unbiased on which outcome occurs).  Any deviation from random produces biased selection, and that's one of the hallmarks of bad polls.
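    One can watch the equal-chance property at work with a short simulation (mine, purely illustrative): draw many random samples from a small stand-in population and count how often each member appears.  No member is favored over any other, which is exactly what the lottery tumbler is demonstrating on television.

import random
from collections import Counter

random.seed(1)
population = list(range(1000))   # stand-in for a 1000-member target population
counts = Counter()
for _ in range(20_000):          # draw 20,000 independent 10-person samples
    counts.update(random.sample(population, 10))
# Each member's expected appearance count is 20_000 * 10 / 1000 = 200.
print(min(counts.values()), max(counts.values()))  # both fall in a modest band around 200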

 

    Granted, national pollsters cannot literally select persons at random from all U.S. citizenry or residents, because no one has a comprehensive list of all names (despite what conspiracy theorists want to believe).  So they substitute a similar method, of random digit dialing or "RDD" based on telephone exchanges (Random digit dialing - Wikipedia).  Or the U.S. Census Bureau will do block sampling; that is, they will randomly select city or town blocks for direct contact of sample subjects (Data Access Tools from the Census Bureau; or direct to Accuracy of the Data 2004).  Emergent web polls do the same from their mother population of potential subjects.  These honor the principle of pure random selection by coming as close to that method as available information allows.
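    In code, a toy version of RDD might look like the sketch below.  The area codes and exchanges are invented placeholders, and real RDD designs are more elaborate (for instance, weighting exchanges by the number of working residential numbers); the point is simply that appending random digits reaches unlisted households as readily as listed ones.

import random

# Hypothetical area-code/exchange pairs; real frames come from telephone databases.
AREA_EXCHANGES = [("573", "334"), ("573", "651"), ("314", "522")]

def rdd_number():
    area, exchange = random.choice(AREA_EXCHANGES)
    # The last four digits are drawn at random, listed or not.
    return f"({area}) {exchange}-{random.randint(0, 9999):04d}"

print([rdd_number() for _ in range(5)])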

 

    It is not perfect stuff.  Green and Gerber have long argued that there are better methods than RDD for pre-election polling (Green and Gerber 2002).  There are also serious issues among telephone pollsters over household reliance on cell phones only, as that is disproportionately true of younger households, which may therefore be excluded under landline RDD procedures (Blumenthal 2007, Mystery Pollster: Cell Phones and Political Surveys: Part I, 3 July 2007; Part II, 13 July 2007).  That problem is being handled in a manner resembling block sampling to approximate a true random sample (Asher 2005, 74-77; Pew Research Center - Keeter, Dimock and Christian 2008a, The Impact Of "Cell-Onlys" On Public Opinion Polling: Ways of Coping with a Growing Population Segment, 31 January 2008; Keeter 2008, Latest Findings on Cell Phones and Polling, 23 May 2008; Keeter, Dimock and Christian 2008b, Cell Phones and the 2008 Vote: An Update, 23 September 2008).  But this does not change the underlying principle of seeking a random sample.4

 

    Third, the survey or poll must be sufficiently large that the built-in sampling error is reasonably small.  Sampling error is the natural variation that occurs from taking samples.  We don't expect a sample of 500 flips of a coin to produce exactly the same heads/tails distribution as a second sample of 500.  But the larger the samples, the less the natural variation from one to another.  Common experience tells us this--or it should.  A sample of newborn babies listed in large-city birth registers will show approximately (but not exactly) the same proportion of boys and girls in each city, or in one city each time the register is revisited; but in small towns there is large variation in boy-to-girl ratios.  Generally, we do not want sampling error to be larger than about 5 percent.  That requires about 400 or more subjects, without subdivisions among groups within the sample.  If you divide the sample evenly into male and female subgroups, then you naturally get a larger sampling error for each 200-person subgroup.  Ken Blake's guide entitled "The Ten Commandments of Polling" provides a step-by-step guide to calculating sampling errors for any given sample size; and you can go online to the DSS Calculator for that.  The sound theoretical grounding is in any standard book on statistics and probability, in manuals for scientific calculators, and in several websites listed below.
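    A quick check of those numbers, using the same worst-case formula as the sketch in the Sampling Error section above (my own arithmetic, not Ken Blake's or DSS's):

import math

moe = lambda n: 1.96 * math.sqrt(0.25 / n)  # margin of error at 95% confidence
print(f"{moe(400):.3f}")   # 0.049 -- about 5 percent with 400 subjects
print(f"{moe(200):.3f}")   # 0.069 -- each 200-person subgroup carries more error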

 

    Remember another rule about sample size.  It does no harm that the sample is extremely small in number compared to the target population.  Consider coin flips as a sample designed to test the inherent fairness of a coin.  There is virtually no limit to number of possible flips of a coin.  You want to know if the coin is fair, meaning that half of all flips will be heads and half tails.  So "all flips" is the population you want to know about.  "Actual flips" are the sample.  You can never know what "all flips" looks like, but that's OK.  The key to accurate judgment of "all flips" is to make sure you have a large enough sample of actual flips.  Asher (2005, 78) gives a similar example of taking a small proportion of one's billions of red blood cells to take its profile, or a chef sampling soup before serving it.  Statisticians refer to a law of large numbers, and it's explained at many sites like The Why Files, Obey the Law.

 

    If all three of these criteria are met, you have reasonable assurance the poll is good.  How can you know this?  Expect all poll reports to honor the journalists' rule.  They must cite all the information necessary to let you confirm the three conditions.  Even a brief news report can cite the method of selection (such as "nationwide telephone sample obtained by random digit dialing, on October 5-6, 1996"), the sample size and sampling error (1000 subjects, with sampling error of plus-or-minus 3.1 percent at a 95% confidence level), and the questions used in that survey.  For more extended print articles there are fuller guidelines (Gawiser and Witt undated, 20 Questions A Journalist Should Ask About Poll Results, Third Edition).  Still, most reports of poll results will not reproduce the poll questions in full for you to see; too little space in papers, too little time on television or radio.  So they must provide a link to the original source for the full set of questions.  With websites now universally available, no pollster can plausibly shirk that responsibility.  Neither can any reputable news organization.

 

    The New York Times offers a brief review on how modern polling has expanded and been revised, at Michael Kagay's Poll Watch Looking Back on 25 Years of Changes in Polling.  I recommend this for those seeking more detail.

 


    In conclusion:  all three criteria must be met for a poll to be judged "good."  The burden of proof is on the pollster and on those who use and report from the poll.  In turn, students shouldn't report poll information only from a secondary source.  Instead, a web news source that summarizes the relevant information should also link to the primary source.  You should then check the primary source to ascertain that the reporter interpreted the information correctly.

 

Bad Polls

 

    So when is a poll not good?  Simply enough, it only has to violate any of the three rules specified above.  The one emphasized here is violation of random selection--because that's the prevalent website violation.

 

    The web is filled with sites inviting you to participate by posting your opinion.  This amounts to creation of samples via self-selection.  That trashes the principle of random selection, where everyone in a target population has the same likelihood of being in the sample.  A proper medical experiment never permits someone to choose whether to receive a medication rather than the placebo.  No; subjects are randomly placed in either the "experimental group" (gets the treatment) or the "control group" (gets the sugar-coated placebo).  If you can call or e-mail yourself into a sample, why would you believe the sample was randomly selected from the population?  It won't be.  It consists of persons interested enough or perhaps abusive enough to want their voices heard.  Participation feels good, but it is not random selection from the parent population.

 

    Next, remember this:  any self-selected sample is basically worthless as a source of information about the population beyond itself.  This is the single main reason for the famous failure of the Literary Digest election poll in 1936, where the Digest sampled 2.27 million owners of telephones and automobiles and concluded that Franklin Roosevelt would lose the election to Republican Alfred Landon, with Landon winning 57 percent of the national popular vote (History Matters, Landon in a Landslide: The Poll That Changed Polling).  Landon didn't!  Dave Leip's Atlas of Presidential Elections, 1936 Presidential Election Results, displays the 36.54% won by Landon against the 60.80% of the national popular vote won by the incumbent Roosevelt.  This even though the Digest had affirmed of its straw poll:  "The Poll represents the most extensive straw ballot in the field--the most experienced in view of its twenty-five years of perfecting--the most unbiased in view of its prestige--a Poll that has always previously been correct." (Landon in a Landslide)  Yeah, but a lot of 1936 depression-era Roosevelt voters didn't own telephones or automobiles, so they never received the opportunity to voice their opinions.

 

    So if they are worthless, why are they so commonplace?  Self-selected polls are highly useful for certain legitimate but limited purposes.  Sellers always want to know more about their customers; but such customer surveys are necessarily self-selected rather than randomly sampled.  Suppose you are an internet seller such as Amazon.  You try for a profile of customers by inviting them to give you some feedback.  This helps you discover new things about them, gives tips on who else you'd want to reach, alerts you to trouble spots in advance, and lets you decide how to promote new products.  But none of this is to discover the nature of the parent population.  It's to know more about those customers who care enough to respond.  All such samples are not random; they are biased via self-selection to include mostly the interested, the opinionated, the passionate, and the site-addicted.  All the rest are silent and therefore unknown.  So long as you understand this limitation, it is perfectly fine to invite the "roar of the crowd" from your customers.


 

    Now suppose your self-selected sample is very large, and you cannot study all of it.  Then define that total sample as your population (call it "all site visitors"), and seek a sample within it for intensive study.  But that takes random sampling from the population.  Inviting some of your site visitors to fill out surveys won't tell you about "all site visitors."  Instead you get the relative few who bother to reply, and they are probably untypical of the rest.  So smart sellers who really want to know all their traffic seek to establish a full list of all customers--by posting cookies to their computers, by collecting telephone numbers at checkout counters to produce comprehensive customer lists, or by requiring you to go online to validate a warranty, whereupon you must supply an email address and telephone number to get the job done.  Understand, though, that smart businesses do this to avoid hearing only from an untypical few of their customers.

 

    The dangers of self-selection may seem obvious by now, yet flagrant violations of random selection have sometimes received polite and promotional treatment in the press.  Shere Hite has made a successful career writing on the habits and mores of modern women.  In 1987 she hit the headlines and made $3 million selling a book based upon a mail survey of 4500 American women, drawn from a baseline sample of 100,000 women compiled from lists in various women's magazines.  The highlight was a report that well over half her sample of women married five or more years were having one or more extramarital affairs.  That got Hite oceans of free publicity and celebrity tours.  Yet the Hite 4500 were a heavily self-selected sample who chose to respond to Hite's invitation to disclose sensitive matters of private and personal beliefs and behavior.  This outraged legitimate surveyors, who know that any "response rate" (the percentage of those surveyed who complete the questions) below 60 percent invites distortion of the sample in favor of the vocal and opinionated few.  A response rate of 4.5 percent clearly will not do.

 

    That low response-rate samples invite bias is well known from congressional offices inviting citizen responses to franked mail inquiries.  Such mail mainly draws responses from those who have some knowledge of and interest in public affairs and who feel favorably toward that Member of Congress.  In Hite's case, most recipients knew and cared little about her or her very strongly held opinions on feminism and man-woman relations.  But a few did.  Those divided into persons who liked and shared Hite's basic views, and those who didn't.  The friendlies were far more likely to fill out and mail back the survey.  So Hite got a biased sample of Hite supporters.  This is non-response bias:  her sample was stacked with angry and dissatisfied women who were much more likely than the 95.5 percent non-responders to have had affairs outside of marriage and to say so (Singer in Rubenstein 1995, 133-136; T.W. Smith 1989, Sex Counts: A Methodological Critique of Hite's "Women and Love", pp. 537-547, with this conclusion:  "In the marketplace of scientific ideas, Hite's work would be found in the curio shop of the bazaar of pop and pseudoscience.").
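    Non-response bias of this kind is easy to reproduce in a simulation.  In the sketch below, every number is invented for illustration (these are not Hite's data): 30 percent of a 100,000-name mailing list holds some view, but holders of that view are ten times as likely to mail the survey back.  The respondents then badly overstate how common the view is.

import random

random.seed(2)
population = [random.random() < 0.30 for _ in range(100_000)]  # True = holds the view
# Holders return the survey 10% of the time, everyone else 1% of the time.
respondents = [v for v in population if random.random() < (0.10 if v else 0.01)]
print(len(respondents) / len(population))   # ~0.037: a very low response rate
print(sum(respondents) / len(respondents))  # ~0.81 of respondents hold the view
print(sum(population) / len(population))    # versus ~0.30 in the actual population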

 

    It could be that those who do not share Hite's views systematically select themselves out of her sample, while those sharing her views select themselves in.  Or it could be that her original sample was drawn in a way that violates random selection with respect to the questions about which she was inquiring.  Or some combination of these.  Whatever it is, we finish with a highly biased sample from which one cannot draw valid inferences on those questions about the population of all American women or even from her original 100,000 mail-list.  Low response rate is a well-known pitfall.  Alongside the Hite example, it is one of the many mistakes committed by the infamous Literary Digest polls (Squire 1988; Rubenstein 1995, 63-67).

 

    Biased samples are not automatically shunned by marketers.  Sometimes they are a welcome thing.  Members of Congress use their congressional franking privileges to conduct district mail surveys that are irredeemably flawed by self-selection of the samples (Stolarek, Rood and Taylor 1981).  Citizens who like that office holder are much more likely to respond to the query.  So are those with high interest in the subject matter.  Thus the sample leans heavily toward those who like its sponsor and care about its questions.  These queries produce a predictably biased set of responses favoring the point of view held by the politician.  This pleases most politicians, who are practiced in the arts of self-promotion and recognize a favorable data source when they see one.  Typically these franked-letter survey questionnaires are followed by another franked report summarizing the results in a way that validates the Member's policy program.

 

    The most spectacular example of deliberate creation of a biased sample is the annual voting, culminating each May from 2001 through 2008, on American Idol.  American Idol FAQs explains how to vote once an Idol show is completed.  Voting by voice is done to toll-free numbers, but there's also the option of text messaging.  The FAQ site says "if you vote using Cingular Wireless Text Messaging, standard Text Messaging fees will apply."  The show is tremendously popular, and voting requires waiting in line, unless the text message option is used.  Cingular does not disallow repeat messaging, for the baldly obvious reason that it charges a fee per message.  Thus the FAQ says "input the word VOTE into a new text message on your cell phone and send this message to the 4 digit short number assigned to your contestant of choice (such as 5701 for contestant 1).  Only send the word 'VOTE' to the 4 digit numbers you see on screen, you cannot send a text message to the toll-free numbers."  That's right: there are two separate procedures, one for toll-free lines with slow one-at-a-time votes and then slow waits for another crack at it, and another for fast repeat voting with fees to Cingular via text messaging.  That's an open invitation to the creation of a highly biased sample.

 

    Biased samples can also be dangerous to democratic standards of voting for public office.  The most important self-selected population in the political world is the voting citizenry in democratic elections.  Serious political elections are obliged to follow three strict standards of fairness:  each individual voter gets to vote only once, no voter's ballot can be revealed or traced back to that person, and every vote that is cast gets counted as a cast vote in the appropriate jurisdictional locale.  Internet voting is heralded as a coming thing, but so far the experience with it is studded with instances of ballot tampering by creative hackers.  That tampering is a violation of the third condition, that cast votes are counted properly.  ElectionsOnline.us--Enabling Online Voting (URL: www.electionsonline.us/) assures us that it "makes possible secure and foolproof online voting for your business or organization," but hackers have demonstrated that security is a relative term.  An AP wire story of 06-21-2003, UCR student arrested for allegedly trying to derail election, cites a campus hacker who demonstrated in 2003 how a student election for president could be altered through repeat voting.  That's documented online by Sniggle.net: The Culture Jammer's Encyclopedia, in their Election Jam section (URL: sniggle.net/election.php); and there are other sources as well.

 

    Indeed this campus hacker is not an isolated case.  After 2000 the U.S. Department of Defense set forth a Federal Voting Assistance Program project called SERVE (Secure Electronic Registration and Voting Experiment).  This was an ambitious pilot plan to enable overseas military personnel from seven states to vote online in the 2004 national election (formerly available at Welcome to the SERVE home page at www.serveusa.gov/public/aca.aspx, but now gone - RDR, September 2005).  The ultimate goal was to permit the several million overseas voters to register in their counties and vote by secure on-line links.  But on 20 January 2004, four co-authors with specialties in computer security produced a potent indictment of the shortcomings of SERVE in terms of potential election fraud.  The prospects of hacking into the system to stack the ballot box are daunting barriers to a system that must also secure the individual's anonymity.  Commercial security lacks any comparable requirement to ensure that the individual participant's true identity remain unknown.5 (Jefferson et al. 2004, A Security Analysis of the Secure Electronic Registration and Voting Experiment (SERVE); also John Schwartz, Report Says Internet Voting System Is Too Insecure to Use, New York Times, 1/21/04)  As a result, the Pentagon wisely scrapped plans to use online voting for 2004, in part due to a State of Maryland demonstration of how easily a skilled hacker can break locks and alter voter-identity paper trails (Report from a Review of the Voting System in The State of Maryland, 12 October 2006).  Yet even that damning evidence has not deterred one prominent manufacturer of on-line voting machines from claiming their system is foolproof.6

 

    So a 'vigorous debate' supposedly exists over how to insulate website voting against the danger of fraud and altered results via ballot box stuffing.  It clearly pays to be deeply skeptical of those who claim on-line voting is immune from dangers of getting a distorted sample.  That is an extreme form of the self-selection inherent to all elections, which count recorded votes rather than opinions from the whole electorate.

 

    Bad polls on the web, leaving aside election results, are remarkably abundant.  These fall into two basic categories.  First are amateur bad polls.  The web is positively overflowing with these.  They show self-selection and other errors like small sample sizes or badly worded questions.  Some are simply interactive web pages created for fun and dialogue with others.  They often make no pretense of being legitimate surveys.  Some are self-evidently not serious.  They all tend to have certain common signs of amateurs at work.  For one, there are frequent misspellings.  For another, the questions are worded in vague or unclear ways that may be typical of everyday speech but are strictly disallowed at legitimate polling sites.  Sometimes these are humorous sites with gonzo questions about a variety of current news items, especially those of a salacious or bizarre nature.  Others are accompanied by blogs that really amount to ranting licenses.  Amateur bad polls are very easy to recognize on a little inspection.  Their samples are running tallies determined by whoever has chosen to participate one or more times.  They lack any "sampling error" because they're just running tallies of recorded responses, not samples taken at random from a population.

 

    The second category is the sophisticated bad poll.  These are more serious.  Self-selection, along with a seller's denial of the problem, is their hallmark.  They are professionally presented on the web, they do not have the obvious spelling and grammatical failures, and they customarily ask questions in a manner similar to legitimate polls.  These are not the work of amateurs.  Their failings are much harder to recognize at surface level.  Shere Hite's 1987 poll is a pre-web-era example of this genre.  Its purveyor defended the poll vigorously and insisted upon its legitimacy as the real thing.  So do current offenders, as we shall see.

 

    A website example of this practice is the PulsePoll Community Network, which in spring 2000 ran four pre-primary polls for the New Hampshire, Arizona, Washington and Colorado presidential primaries (at PulsePoll Primary: Arizona Results).  They got results very similar to four scientific telephone-based polls taken on the eve of these four events.  So they concluded that "The PulsePoll has made Internet polling history" with a web poll emulating telephone surveys in its forecasting accuracy.  But this claim does not bear close examination.  Objections from professional survey sources came in immediately.  Some are captured in Jeff Mapes' article of 12 April 2000 in The Oregonian entitled "Web Pollster Hopes To Win Credibility."  Even if four spring 2000 primary polls did closely resemble legitimate survey results, that could be pure luck.  One should remember that the Literary Digest also used wrong sampling methods yet correctly picked presidential winners in four straight elections from 1920 through 1932 (Rubenstein 1995, 63-67).  But they made one major mistake.  In 1936 they predicted a fifth one--and got it spectacularly wrong.  Luck has a natural way of eventually running out.

 

    PulsePoll still relies on a self-selected sample rather than a randomly selected one.  The only defense for this is that internet users of this site were somehow typical of the larger population of citizens, or more particularly, of citizens who vote in presidential primaries.  The problem with this is already known:  internet users were not a random sample of all citizens, all voters, or all presidential primary voters.  See "The Digital Divide" spring 2003 theme issue of IT&Society (URL: www.stanford.edu/group/siqss/itandsociety/v01i04.html) for indications that digital users were still quite different from the non-digital population by factors such as wealth and political activism.  There is no doubt that digital users have been different, and often so in ways that especially attract both politicians and advertisers to them.  But even if the self-chosen PulsePoll sample somehow captured all the attributes of its parent population of digital users, those users still did not resemble the true target population of presidential primary voters.

 

    Another sophisticated bad poll is run by former President Clinton's ex-advisor Dick Morris at Vote.com (URL: www.vote.com).   Like PulsePoll, Vote.com is professionally presented in hopes of producing enough audience to interest advertisers in subsidizing the site.  The issues are current and interesting.  The site promises all participants that their opinions and votes truly count, since those in power will hear about the poll results.  That might satisfy the millions whom legitimate polls show are alienated from their own government.  But just like PulsePoll and its brethren, this site is irretrievably biased by its failure to do random sampling.  It does just the opposite, by inviting the opinionated to separate themselves from the silent and make their voices heard by those in power.

 

    Internet polling is nonetheless here to stay.  By 2003 it had taken a quantum jump in publicity and material impact.  Even groups that know better will use it.  The Berkeley, California organization known as MoveOn.org ran an online vote among its membership on June 24-25, 2003 to determine which of the Democratic presidential candidates its membership preferred (MoveOn.org PAC at URL: www.moveonpac.org/moveonpac/).  The result was a strong plurality for outspoken anti-Iraq War candidate Howard Dean, with 43.87% of the 317,647 members who cast votes in this 48-hour period (Report on the 2003 MoveOn.org Political Action Primary).  The second-place finisher was nearly unknown long-shot Dennis Kucinich, with 23.93% of the vote.  Near the bottom, the well-known candidates Joseph Lieberman and Richard Gephardt got 1.92% and 2.44% respectively!  What can be concluded from this?  Self-selection of a highly left-wing participant voter pool is dramatically obvious.  The stark distinction between this group and the actual 2004 Democratic presidential primary voters became evident soon thereafter (Democratic Party presidential primaries, 2004).  But the appeal of doing such polls is evident.

 

    Incidentally, MoveOn.org, a knowledgeable organization on survey methods, engaged the professional services of a telephone polling organization to verify that its 317,647 votes were not biased through "stacking the ballot box" by anyone voting more than once.  To check this, a randomly selected sample of 1011 people from those 317 thousand was directly surveyed by telephone, and the sample results proved remarkably close to those of the parent population.  That means if ballot stuffing were done at all, its effect was minor or negligible, since the sample of 1011 was fundamentally similar in result to the population of 317,647 (Greenberg Quinlan Rosner Research, Inc. - gqr at former URL:  www.moveon.org/moveonpac/gqr.pdf).  Nonetheless, this safeguard had no effect upon the original self-selected nature of the voting population of 317,647 web-surfing MoveOn.org participants compared to the target population of "all persons who will vote in 2004 Democratic presidential primaries and caucuses."  They remained as distinctive and politically untypical a group as ever.
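    The logic of that check can be sketched as follows.  This is my reconstruction for illustration only; the actual verification was a telephone survey, and the vote counts below are back-calculated from the percentages reported above.  If a random 1011-person subsample matches the full tally within sampling error, wholesale ballot stuffing concentrated on one candidate becomes implausible.

import math
import random

random.seed(3)
# Vote counts reconstructed from the reported shares of 317,647 total votes.
votes = ["Dean"] * 139_353 + ["Kucinich"] * 76_013 + ["Other"] * 102_281
subsample = random.sample(votes, 1011)
moe = 1.96 * math.sqrt(0.25 / 1011)       # ~0.031, i.e. about 3.1 points
full = votes.count("Dean") / len(votes)   # ~0.4387, the reported Dean share
sub = subsample.count("Dean") / len(subsample)
print(abs(sub - full) <= moe)             # True in about 19 runs out of 20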

 

    The conclusion is inescapable: no one to date has discovered a method of making web-based polls truly representative of a general parent population.  Amateur or sophisticated, these polls are not capable of accurately profiling a parent population beyond themselves.

 


Ugly Polls

 

    This is a special category of bad poll, reserved for so-called pollsters who deliberately use loaded or unfairly worded questions under disguise of doing an objective survey.  Some of these are done by amateurs, but the most notorious are produced by political professionals.  These include the infamous push polls.  I treat these first.  There are also comparable polls composed of subtle question biases that create a preconceived set of responses.  These fall into the category of hired gun polls.  I treat them second, but not least.

 

    A push poll is a series of calls, masquerading as a public-opinion poll, in which the caller puts out negative information about a target candidate (Push poll - Wikipedia).  Sometimes called robo-calls, these auto-calls from a supposed polling operation spew out derogatory information about a specific target.  They call very large numbers of households to disseminate as much derogation as possible (Blumenthal 2006b, "A Real Push Poll?", 8 September 2006).  They appear before presidential primary and general elections and in swing-district congressional or senatorial contests, always run by hard-to-trace, nominally independent organizations not directly linked to the beneficiary candidate or party.7  They are quite common in recent elections.  Obviously someone in campaigns makes use of these shadow practitioners.  The operative most closely identified with their use is former Bush political strategist Karl Rove, suspected as director of the infamous February 2000 South Carolina accusatory telephone "polls" maligning Bush primary rival John McCain (Push poll - SourceWatch; Green 2007, The Rove Presidency; Moore and Slater 2006; NPR Karl Rove, 'The Architect' interview with Slater, 2006; Green 2004, Karl Rove in a Corner; Borger 2004, The Brains; Davis 2004, The anatomy of a smear campaign; Suskind 2003, Why are These Men Laughing?; DuBose 2001, Bush's Hit Man; Snow 2000, The South Carolina Primary).  The exposé did not end the practice.  The 2006 midterm saw a spate of these (Drew 2006, New Telemarketing Ploy Steers Voters on Republican Path - New York Times, 12/6/06).  On the eve of the 3 January 2008 Iowa caucuses, Republican rivals of Mike Huckabee received such calls (Martin 2007, Apparent pro-Huckabee third-party group floods Iowa with negative calls - Jonathan Martin's Blog - Politico.com, 12/3/07).  One may expect another round of these in fall 2008 before the 4 November election of a 44th president and the 111th Congress.

 

    These dirty campaign practices masquerade as legitimate polls.  They are not inquiries into what respondents truly think.  Traugott and Lavrakas (2000, 165) define them as "a method of pseudo polling in which political propaganda is disseminated to naive respondents who have been tricked into believing they have been sampled for a poll that is sincerely interested in their opinions.  Instead, the push poll's real purpose is to expose respondents to information ... in order to influence how they will vote in the election."  Asher (2001, 19) concurs:  "push polls are an election campaign tactic disguised as legitimate polling."  Their contemporary expression through automated telephone calls led Mark Blumenthal of Mystery Pollster to call them "roboscam": an automated voice asks respondents to indicate a candidate preference, followed by a scathing denunciation of the intended target (Blumenthal 2006a, Mystery Pollster - RoboScam: Not Your Father's Push Poll, 21 February 2006).  After a couple of attack-statements, it's on to another number, hitting as many as possible for the sake of maximizing the damage to the intended political target.  That, of course, is not real polling at all, which explains why Blumenthal shuns the very term "push poll" for these.

 

    Legitimate polling organizations universally condemn push polls.  The National Council on Public Polls has shunned them since they masquerade as legitimate queries yet are intended to sway rather than discover the opinion of respondents (NCPP 1995, A Press Warning from the National Council on Public Polls).  So has the American Association for Public Opinion Research, which recommends that the media never publish them or portray them as polls (AAPOR 2007, AAPOR Statement on Push Polls).  Push polls are propaganda similar to negative advertising.  They are conducted by professional political campaign organizations in a manner that detaches them from the intended beneficiary of actions taken against a rival (see Saletan 2000, Push Me, Poll You in Slate Magazine).  Some political interest groups also use them, often in a hot-language campaign to raise money and membership by using scare tactics.  No matter the source, they treat their subjects with contempt.

 

    Hired gun polls are real polls, with limited-size samples and numerous questions.8  They have been defined as: "Polls commissioned and carried out to promote a particular point of view.  Hired gun polls are associated with reckless disregard for objectivity.  A synonym for the term hired gun poll is the term advocacy poll—although the hired gun metaphor connotes a much sleazier and less professional image.  Selective reporting of poll results is one mark of hired gun polls.  Another is questions worded to reflect the positions of sponsors.  Both practices blatantly violate accepted ethical standards in the polling field." (Young 1992, 85; The Polling Company™)

 

    Hired gun polls are not literally synonymous with advocacy polls, polls used by advocacy groups to promote their viewpoints.  Advocacy polls became very widespread in American politics over the past two decades (Beck, Taylor, Stanger, and Rivlin 1997 at REP16 - Issue Advocacy Advertising During the 1996 Campaign).  Issue advocacy is any communication intended to promote a particular policy or policy-based viewpoint.  Polls can be extremely helpful in doing this persuasively.  There is an important political market for legitimate poll-based issue information.  Advocacy groups often commission a poll and then selectively release the information that furthers their cause.  But usually they do not go further, into the realm of push polling.

 

    But there are apparently some exceptions.  In 2002 the professional golf tour witnessed a political fight which ultimately yielded a hired gun poll that quite deliberately violated all the standards enunciated in The Polling Company's own definition.  Chairman and CEO Hootie Johnson of the Augusta National Golf Club chose an aggressive counter-campaign against Martha Burk of the National Council of Women's Organizations, who sought to oblige the Masters Golf Tournament's host club to open its doors to women for the first time.  He hired The Polling Company and WomanTrend, a Washington, D.C. polling firm chaired by a prominent Republican woman named Kellyanne Fitzpatrick Conway (the polling company™, inc. - Kellyanne Conway).

 

    The result was satisfying for CEO Johnson and unsatisfying for Burk.  One conservative advocacy group took the survey and ran with it (Center for Individual Freedom, Augusta National Golf Club Private Membership Policies, under the title "Shoot-Out Between Hootie and the Blowhard Continues").  Conway herself accompanied Johnson at a November 13, 2002 press conference to announce the poll result, which was based on an 800-person sample with a sampling error of 3.5%.  As portrayed on the official PGA website (Poll shows support for Augusta's right to choose membership - PGATOUR.COM): "When asked whether -- like single-sex colleges, the Junior League, sororities, fraternities and other similar same-sex organizations -- 'Augusta National Golf Club has the right to have members of one gender only,' 74 percent of respondents agreed.  Asked whether Augusta National was 'correct in its decision not to give into Martha Burk's demand,' 72 percent of the respondents agreed."  That would appear to wrap the matter up.

 

    But a look at the poll questions is instructive.  They are clearly aimed at a push throughout the survey.  We get this language in Question 21 (Augusta National poll Part III - PGATOUR.COM; also CFIF, cfif_poll_data):

 


21.  As you may or may not know, Augusta National Golf Club is a private golf club in Augusta, Georgia that does not receive any type of government funding. Each year, the Masters Tournament is held at Augusta National Golf Club. Currently, only men are members.

Martha Burk, the President of the National Council of Women's Organizations, wrote a letter to the Augusta National Golf Club, saying that the Masters Golf Tournament should not be held at a club that does not have women members. She demanded that the Golf Club review its policy and change it immediately, in time for the tournament scheduled for April 2003.

Do you recall hearing a lot, some, only a little, or nothing about this?

Some 51 percent of the sample had heard nothing about this.  Normally that's a warning to pollsters not to proceed with further questions except with great caution.  But here, Question 22 proceeds immediately with this stem:

 

22.  And, as you may or may not know, the Chairman of Augusta National Golf Club, William Johnson, responded to Martha Burk by saying that membership to the club is something that is determined by members only, and they would not change their policies just because of Burk’s demand.

And, do you support or oppose the decision by Augusta National Golf Club to keep their membership policy as it is?

 

The result was net support by 62 percent, opposition by 30 percent, and a volunteered "do not know" from the remainder.  Then Question 23:

 

23. Although currently, there are no women members of the Augusta National Golf Club, the Golf Club does allow women to play on their golf course, and visit the course for the Masters Tournament.  In other words, women are welcome to visit the Club and often play golf there as guests.

Knowing this, would you say that you support or oppose the Augusta National Golf Club’s decision to keep their membership policy as it is?

 

That was a now-obvious push.  This time we get 60 percent net support for the status quo and 33 percent opposed.  Questions 24 and 25 then sally forth in this fashion:

 

Please tell me if you agree or disagree with the following statements:

24. “Martha Burk had the right to send a letter to Augusta National Golf Club about their membership policies, but if she really wanted to make some progress on behalf of women, she would have focused her time and resources on something else.”  [and]

25. “Martha Burk did not really care if the Augusta National Golf Club began allowing women members, she was more concerned with attracting media attention for herself and her organization.”


 

The replies to this being satisfactory, the key item 26 comes in:

 

26. “The Augusta National Golf Club was correct in its decision not to give into Martha Burk’s demand.  They should review and change their policies on their own time, and in their own way.”

 

That got 72 percent to agree to not bending to this awful woman's unreasonable demands against a selfless and public-minded private club that welcomes women golfers with open arms.  A little later, Question 29 kept up the drumbeat:

 

And please tell me if you agree or disagree with the following statement:

29. “Just like single-sex colleges, the Junior League, Boys and Girls Scouts, Texas Women’s Shooting Club, Sororities and Fraternities, and women business organizations, Augusta National Golf Club has the right to have members of one gender only.”

Lo, this produced a full 74 percent agreement with some form of defense for the existence of single-gender organizations in America.  That was the highest proportion of any of these leading items, and thus the single one Mr. Johnson seized upon for highlighting at his press conference, accompanied by this hired-gun poll's principal.

 

    But a rebuke to the "Hootie Poll" soon came from within the golf community itself.  The November 14, 2002 issue of PGA Tour's GolfWeb carried a piece entitled Is the Augusta National poll misleading?.  Its verdict:  "The 'Hootie Poll' is a mishmash of loaded statements and biased, leading questions that are unworthy of Johnson or Augusta.  It is a poll that is slanted to get the answers they wanted, and in that it succeeded."9

 

    All such ugly polls commit gross violations of ethical standards of behavior.  They masquerade as legitimate, objective surveys, but then launch into statements designed to prejudice respondents against a specific candidate or policy.  Alongside the Hootie Poll, the web has produced other direct examples for perusal.  The investigative left-wing magazine Mother Jones in 1996 published Tobacco Dole, by Sheila Kaplan, reproducing a poll whose target turned out to be Dan Morales, then Attorney General of Texas.  He and others are routinely brought forth as statewide office-holders, but from Question 24 onward the poll's true purpose is revealed in a series of relentlessly negative statements about Morales alone.  The reason: Attorney General Morales was then the point man for engaging the state in legal action against tobacco firms, and this alleged poll was designed to undermine that effort.

 

    How does one detect these false jewels?  Not simply by looking at the sample's selection.  The Hootie Poll obtained a proper sample in the proper way, thus avoiding the most common reason for a "bad poll" label.  It also did not launch an immediate attack on a target the way robo-calls do.  Instead, questions worded in a deliberately leading way are the surest sign of these ugly polls.  Watch for loaded or biased questions somewhere in the question sequence.  Of course, most of these polls are done by telephone, since that is still the prevalent means of doing legitimate surveys; so a respondent must wait out the innocuous queries before the push component shows itself.  Once it does, ask yourself whether that question or statement would be permitted in a court of law without an objection from the subject's counsel (or the judge).  If "Objection!" followed by "Sustained!" comes to mind, you are probably looking at an ugly poll.  These deserve no more of your time, and should be publicly given the contempt they so richly deserve.

 

    An online discussion of this technique is the Leading Questions section of The Business Research Lab's site (URL:  www.busreslab.com/tips/tip34.htm).  It cites a survey designed to move opinion toward a change in the location of a charitable-walk event whose professed objective was preventing teenage suicides.  The status quo was location A, but the survey's sponsor obviously wanted to switch to location B.  So the question was worded this way: "We are considering changing from location A to location B this year.  Would you be willing to walk starting from location B, if it meant that hundreds more teenage suicides would be avoided?"  Now that's an authentic leading question!

 

    There are ways to get even with these moral offenders.  Herbert Asher, author of the six-edition polling text Polling and the Public:  What Every Citizen Should Know, recommends that citizens who are push-polled alert their local media to that fact (Asher 2005, 140).  One might also consider self-policing by political consultants via their organization, the American Association of Political Consultants.  However, a 1998 survey of political consultants showed that few believe their organization's formal stance against push polls is an effective deterrent (Thurber and Dulio 1999, A Portrait of the Consulting Industry, p. 6).  Subsequent pre-election practice has verified that concern.  So citizen and media pressure is the only effective current avenue for curbing this practice--but that in turn requires wide public recognition of the ugly poll for what it truly is.  I offer this paper in pursuit of that worthy end.

 

    Remember also that questions are only half the story.  The other half is the set of responses available to the polled.  Another "ugly" sign is that respondents face choices designed to help ensure the pre-ordained response sought by the alleged pollster.  This is not done only by campaign organizations seeking to impeach a rival.  It is also done at web poll sites, sometimes in a rankly biased but amateur manner.  This is richly displayed at Opinion Center from Opinion Center.com.  One has to sample their fare to see how biased it truly is.  Here is one example that shortly followed the 2003 death of actress Katharine Hepburn:  "Everyone talks about how Katherine Hepburn was such a role model.  She wore pants, had a long affair with a married man, never had kids and never married.  Is this a good role model?"  The respondent is left to choose only a "yes" or "no" response to this rant.

 

    Another entry from them concerned the 2003 scandal revelations at The New York Times:  "Top management at The New York Times, including Howell Raines and Gerald Boyd, resigned / were asked to leave / were fired.  These two individuals were known for their curmudgeon-style of management.  Is there actually a curmudgeon-style of management or is that really just management by intimidation and a bad attitude toward employees?"  The respondents could choose among the following three responses:

 

    1) Curmudgeon-style management is a valid style.
    2) Curmudgeon-style management is not a valid style.
    3) Managers manage that way because they are insecure.


 

    As one can see, subtlety is not a long suit at Opinion Center.com.  They borrow from the legitimacy of real polls and profess this as their motto:  "Surveys are intended to elicit honest information for academic and consumer-oriented market research & entertainment."  Opinion Center falls alarmingly short of that.  But they do teach us how to recognize bias that is built straight into the questions and available responses.  The professional push polls and hired-gun polls are considerably more difficult to sniff out--but with a little practice and a skeptical eye, any layperson can get their drift too.

 

    Many issue advocacy groups routinely engage in blatantly biased polling on their pet topics.  Some are organizations that address "hot button" issues such as abortion or gun control in the U.S.  A website poll from pro-gun Keep and Bear Arms (Keep and Bear Arms - Gun Owners Home Page - 2nd Amendment Supporters, reviewed 10/14/03) had this survey question and result.

How do you feel about the blatant abuses being foisted upon lawful, peaceable gun owners by crooked politicians and the biased media?

Angry 26.1% 336 votes

Frustrated 3.0% 39 votes

Sad 0.8% 10 votes

Afraid 1.1% 14 votes

Ready for whatever comes our way 3.8% 49 votes

Empowered that we will be victorious 2.9% 37 votes

Amused -- they will never take our guns away 1.4% 18 votes

All of the above 60.9% 782 votes

Total Votes: 1285

    This is no attempt to discover public opinion.  It validates the sponsor's biases by using a self-selected audience, urged along in its slant by questions that would never gain admittance to a courtroom trial transcript.

 

    One effect of these slants is to invite skepticism about anyone who addresses hot-button political topics.  Students often mistakenly identify polls on controversial subjects as ugly polls.  That is patently incorrect.  It is perfectly legitimate for good polls to address the most touchy or delicate subjects.  In fact, those are often the things most worthwhile to know and understand.  Content addressing an explosive topic is not itself grounds for sensing "ugly" in a poll.  I recommend studying the legitimate polls to see how two or three of them address such hot-button topics as abortion or gun control.10  Once you see the nature of the wording, compare it to that of someone who is genuinely trying to sway you instead of learning what your opinions are.  With some practice and alertness, you won't find it difficult to tell good from ugly.

 

Conclusion

    Surveying of public opinion has become an important part of public life in democracies.  With just a little knowledge and practice, any student can master the distinctions between good and bad, or bad and ugly, public polls.  These polls are so pervasive in modern life that the need to accomplish this is self-evident.  Getting fleeced is not a good thing!  No citizen should wander into the public informational arena lacking the equipment for protection against false and misleading sales pitches.  In that spirit, I offer this piece as a shield against the bad and ugly of the survey world at large.

 

    Russell D. Renka

 

° Polling Links:
    ° Blogs and Commentary on Polls
    ° Data Sources
    ° Election Polls
    ° Embarrassments in polling history
    ° General Sources for Polls and Surveys
    ° How to interpret and judge polls
    ° New York Times Polling Standards
    ° Numeracy
    ° Numbskull abuses with numbers
    ° Push Polls
    ° Skepticism
    ° Specific polls (all "good" ones, of course)
    ° Statistical basis of polling
    ° Ugly Poll List

Polling Links:

 

Blogs and Commentary on polls:
    ° Mystery Pollster and Mystery Pollster - Pollsters by Mark Blumenthal is excellent for "Demystifying the Science and Art of Political Polling."

 

Data Sources:
    ° NORC--The General Social Survey at the University of Chicago; very widely used data source, abundant documentation
    ° Center for Political Studies at the University of Michigan, Ann Arbor - gateway to several major sources
    ° Public Opinion Quarterly - journal devoted to methodology and results of public opinion surveys
    ° CESSDA HomePage from Council of European Social Science Data Archives
    ° National Network of State Polls - "the largest available collection of state-level data," from the data archive of the Odum Institute for Research in Social Science
    ° RealClear Politics - Polls from John McIntyre and Tom Bevan

 

Election Polls:
    ° The Cook Political Report's National Poll - biweekly election-year polling, from Associated Press and Ipsos Public Affairs (also see Ipsos News Center - Polls, Public Opinion, Research & News)
    ° American Research Group Inc. is a clearinghouse site
    ° PollingReport.com - Public Opinion Online - "An independent, nonpartisan resource on trends in American public opinion"
    ° NAES 2004 Home Page (National Annenberg Election Survey) from The Annenberg Public Policy Center of the University of Pennsylvania
    ° Presidential Trial Heats: A Daily Time Series and Documentation for time series extraction from James Stimson, University of North Carolina

 

Embarrassments in polling history - Literary Digest of 1936 and other royal screw-ups:
    ° The Seattle Times Political Classroom Political Primer Polls
    ° Oops!! (Yes, it's the Digest again.  But there are others as well.)

 

General Sources for Polls and Surveys:
    ° Copernicus Election Watch: Public Opinion Polls
    ° National Council on Public Polls
    ° The archive of polls and surveys -- The Roper Center for Public Opinion Research
    ° Public Opinion from University of Michigan Documents Center
    ° Ruy Teixeira - Center for American Progress has weekly polling columns; parent site is Home - Center for American Progress with Ruy's columns shown under the heading of "Public Opinion Watch"
    ° SSLIS Public Opinion Guide from Yale University Social Science Libraries and Information Services (SSLIS)
    ° PRI - Links to Public Opinion Research
    ° Public Opinion Polling in Canada (BP-371E)
    ° Pew Forum on Religion & Public Life - American Religious Landscapes and Political Attitudes (in 2004)

 

How to interpret and judge polls:
    ° The Ten Commandments of Polling by Ken Blake, UNC-Chapel Hill
    ° 20 Questions A Journalist Should Ask from National Council on Public Polls (NCPP)
    ° A Press Warning on Push Polls from National Council on Public Polls
    ° Statement About Internet Polls from National Council on Public Polls - "there is a consensus that many web-based surveys are completely unreliable.  Indeed, to describe them as 'polls' is to misuse that term."
    ° NCPP Principles of Disclosure - a statement on ethics of proper polling
    ° Howard W. Odum Institute Poll Item Database Query Page has properly worded questions
    ° Answers to Questions We Often Hear from the Public from National Council on Public Polls
    ° If You're Going to Poll by The Why Files; see its Polling Glossary (for layman's explanation of standard terminology in polls), Serious Statistical Secrets page (good explanation of the basics), Obey the Law (law of large numbers, that is), Doing it wrong... (on subtle failures associated with not taking a truly random sample), Oops!! (screwing up royally), and A Little Knowledge ... on deliberative polling per James Fishkin of U of Texas (my alma mater) and his Goes a Long Way and Why Change (of opinions) on the National Issues Convention in Austin, TX.
    ° ABCNEWS.com ABCNEWS Polling Guide from Gary Langer, head of the ABC News Polling Unit

Numeracy - Competent interpretation of statistics and data-based information is essential for sifting out the good from the bad and the ugly.  Besides that, these sites contain rich collections of abuse and plain old bunkum that will delight and repel us in comparable proportions.  Here are some sites that promote literacy in handling numerical, statistical, and mathematical information:
    ° innumeracy.com - the home site; below are subcategories with extensive links to illustrative sources
    ° numeracy
    ° numeracy - Archives
    ° critical thinking
    ° Knowlogy

Numbskull abuses with numbers - Here's a rich category, probably unlimited in potential number of examples.
    ° Best, Joel. 2001.  Telling the Truth About Damned Lies and Statistics, The Chronicle Review, May 4.

 

Push Polls:
    ° 2003_pushpollstatement from AAPOR - American Association for Public Opinion Research (AAPOR)
    ° NCPP - National Council on Public Polls - Press WARNING on push polls as political telemarketing
    ° Campaigns & Elections: What Are Push Polls, Anyway? by Karl G. Feld, May 2000.  Campaigns & Elections 21:62-63, 70
    ° Push Polls - a bibliographical source compilation
    ° CBS News: The Truth About Push Polls, February 14, 2000, by Kathleen Frankovic
    ° pushpolls (The Case Against Negative Push Polls), from Michael Sternberg; a compilation of cases, including the one immediately below
    ° Public Opinion Strategies Push Poll from Mother Jones; a reproduced full poll that looks legitimate but is actually designed to condemn a specific candidate; the serious abuse starts with Question no. 24.
    ° Push Me, Poll You by William Saletan in Slate (February 15, 2000) on South Carolina push polling by the Bush campaign against presidential primary rival John McCain in February 2000

Skepticism - This combination of attitude and education is a mighty valuable approach for any who want to avoid fraud.
    ° The Skeptic's Refuge, including link to The Skeptic's Dictionary: A Guide for the New Millennium

Specific polls (all "good" ones, of course):
    ° American attitudes Program on International Policy Attitudes from Program on International Policy Attitudes (PIPA); intro at [PIPA] About Us says "This website will report on US public opinion on a broad range of international policy issues, integrating all publicly available polling data."
    ° Current Population Survey Main Page and CPS Overview - From the U.S. Bureau of the Census, the CPS has the special benefit of exceptionally large samples that can be subdivided almost endlessly.
    ° Eurobarometer - Monitoring the Public Opinion in the European Union
    ° European Public Opinion - Homepage
    ° The Gallup Organization - Gallup has gone commercial, limiting web access to subscribers only.  But a few recent summations are present at any given time.
    ° Welcome to the Harris Poll Online and Harris Interactive - online polling
    ° Knowledge Networks® - The consumer information company for the 21st century - online polling
    ° The NES Guide to Public Opinion and Electoral Behavior - American National Election Studies has queries in 9 categories from 1948 through 2002.
    ° The New York Times/CBS News Poll - a useful archive
    ° On Politics - Washington Post Archive
    ° The Pew Research Center for The People & The Press
    ° PollingReport.com - Public Opinion Online - This is a compilation site organized by subject.
    ° Program on International Policy Attitudes (PIPA)
    ° Public Agenda Online - Public Opinion and Public Policy
    ° SurveyUSA® Methodology - They do simultaneous 50-state surveys for presidential election forecasting, comparison of state and regional presidential approval, and other cross-unit comparative purposes.
    ° Zogby International

 

Statistical basis of polling:
    ° Statistics Every Writer Should Know from RobertNiles.com
    ° Statistics Every Writer Should Know - Margin of Error
    ° Statistics Every Writer Should Know - Standard Deviation
    ° Statistics Every Writer Should Know - Mean
    ° Statistics Every Writer Should Know - Median
    ° Sample Sizes
    ° Calculate a Sample
    ° The Stats Board
    ° Statistical Assessment Service

 

Ugly Poll List (I am always looking for these characters.)
    ° Opinion Center is from Opinion Center.com; these characters take the cake for slanted and biased questions.


 

Notes

 

1 This practice is noticeably violated in recent years by Investor's Business Daily and its polling agency, Technometrica Institute of Policy and Politics (IBD/TIPP).  TIPP does polls available only to IBD, which produces deeply biased reports based on TIPP surveys with no direct or full link to that surveyor's questions or methods of acquiring its samples.  Their practices and results are of doubtful value, to say the least.  Nate Silver reviews a notorious recent IBD/TIPP poll of doctors thusly:  "that special pollster which is both biased and inept.." (Nate Silver of FiveThirtyEight:  Politics Done Right at ibdtipp-doctors-poll-is-not-trustworthy, 9/16/2009).

 

2 The HIP-Sampling Error site defines Sampling Error as "That part of the total estimation error of a parameter caused by the random nature of the sample," where a Random Sample is "A sample that is arrived at by selecting sample units such that each possible unit has a fixed and determinate probability of selection."  In layman's terms, this means every sample unit has the same likelihood of being included in the sample, yet there is still error when making an inference about the population.  A self-selected sample that is not randomly selected from a population has no specification of sampling error--as the term is meaningless in that context.

A more technical online introduction, with a bit of math, from the National Science Foundation is SESTAT's Understanding Sampling Errors and What is the Margin of Error.

Good polls use computer-generated random numbering.  There is evidence that human beings cannot create truly random numbers very well.  See Nate Silver, FiveThirtyEight Politics Done Right:  Strategic Vision Polls Exhibit Unusual Patterns, Possibly Indicating Fraud, 9/25/2009, where an Atlanta polling firm called Strategic Vision, LLC is suspected of claiming poll results without doing the polls.  The results distribute in a markedly nonrandom way.
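    To make sampling error concrete, here is a minimal sketch in Python (my own illustration, not drawn from the sources above) of the standard margin-of-error formula for a simple random sample, assuming a 95 percent confidence level and the worst-case proportion of 0.5:

import math

def margin_of_error(n, p=0.5, z=1.96):
    # Margin of error for a simple random sample of size n, where p is
    # the estimated proportion (0.5 is the worst case) and z is the
    # critical value (1.96 for 95 percent confidence).
    return z * math.sqrt(p * (1 - p) / n)

# A typical national sample of about 1,000 respondents:
print(round(100 * margin_of_error(1000), 1))   # prints 3.1 (percentage points)

    Note that the formula depends on the sample size n, not on the population size; that is why a sample of roughly a thousand serves for a nation of hundreds of millions.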


 

3 The 1995 NPTS Courseware Interpreting Estimates - Sampling Error site shows that sampling error follows naturally from drawing out part of a population to create a sample.  In the DSS Calculator, entering a population size of 1000 and a sample size of 1000 produces a 0% sampling error, because the entire population went into that sample, so any second sample of 1000 cannot possibly vary from the first one.  That holds for any finite population sampled in its entirety, even one as large as the 2004 presidential election voter turnout of about 122,000,000.  But if you enter a population of 122,000,000 and a sample size of 1220, then you get a manageably small sampling error of about 3%, even though this sample consists of only 1 in every 100,000 voters-to-be from the population.
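    The calculator's behavior follows from the finite population correction, which shrinks the margin of error to zero as the sample grows to encompass the whole population.  A minimal sketch (again my own illustration, with the same assumptions as above) reproduces both of this note's results:

import math

def margin_of_error_fpc(N, n, p=0.5, z=1.96):
    # Margin of error with the finite population correction: the factor
    # sqrt((N - n) / (N - 1)) falls to zero as the sample size n
    # approaches the population size N.
    fpc = math.sqrt((N - n) / (N - 1))
    return z * math.sqrt(p * (1 - p) / n) * fpc

print(margin_of_error_fpc(1000, 1000))                          # 0.0 -- the whole population is the sample
print(round(100 * margin_of_error_fpc(122_000_000, 1220), 1))   # 2.8, i.e., about 3%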

 

4 Traditional response rates in randomly selected telephone exchange samples are declining, and those not called differ substantially from those called.  Cell-only households are younger, more affluent, more politically liberal, and less likely to be married or to own their home; so polling cannot be indifferent to their absence from landline samplings.  However, a May 2006 report cited "a minimal impact on the results" of surveys where cell-only users are excluded (Pew Charitable Trusts, The Cell Phone Challenge to Polling, 17 May 2006); and for now, random-digit dialing (RDD) is still widely employed.

The response-rate problem, along with rapidly spreading standard web access in U.S. households, has prompted the Stanford-based Polimetrix firm to abandon the telephone outright in favor of an internet-based Matrix database (Polimetrix, Scientific Sampling for Online Research).  They claim successes compared to traditional firms in forecasting California's 2004 statewide referenda, and they may well be the portent of future polling methods built from large online databases (Hill, Lo, Vavreck, and Zaller 2007).

However, their standard internet polling site (PollingPoint - A Nationwide Network of Millions of People Inspiring Public Debate) invites the usual website visitors' indulgence in online polling, with results showing almost nothing about resultant sample size, sampling error, or comparability to other polls.  This is still self-selected sampling rather than random selection.  I believe the jury is out; there is as yet no consumer-linked warrant to inspire confidence in the results obtained by this method.

 

5 Their warnings include this:  "The SERVE system might appear to work flawlessly in 2004, with no successful attacks detected. It is as unfortunate as it is inevitable that a seemingly successful voting experiment in a U.S. presidential election involving seven states would be viewed by most people as strong evidence that SERVE is a reliable, robust, and secure voting system. Such an outcome would encourage expansion of the program by FVAP in future elections, or the marketing of the same voting system by vendors to jurisdictions all over the United States, and other countries as well. However, the fact that no successful attack is detected does not mean that none occurred. Many attacks, especially if cleverly hidden, would be extremely difficult to detect, even in cases when they change the outcome of a major election. Furthermore, the lack of a successful attack in 2004 does not mean that successful attacks would be less likely to happen in the future; quite the contrary, future attacks would be more likely, both because there is more time to prepare the attack, and because expanded use of SERVE or similar systems would make the prize more valuable. In other words, a "successful" trial of SERVE in 2004 is the top of a slippery slope toward even more vulnerable systems in the future."  Jefferson et al., 2004, A Security Analysis of the Secure Electronic Registration and Voting Experiment (SERVE).

 

6 That would be Diebold Elections Systems.  Parent website is Welcome To Diebold Election Systems.  See Diebold Investor Relations News Release of January 29, 2004 - "Maryland Security Study Validates Diebold Election Systems Equipment for March Primary" at URL: www.corporate-ir.net/ireye/ir_site.zhtml?ticker=DBD&script=410&layout=-6&item_id=489744.  See also the New York Times Opinion piece on this bizarre claim:  How to Hack an Election - New York Times, 31 January 2004; and Trusted Agent_Report_AccuVote, 20 January 2004, a report to the state legislature on Diebold's Maryland experience.


 

7 There is one prominent exception to concealment.  FreeEats and its director Gabriel Joseph III were effectively outed after the 2006 midterm election as authors of robo-call attacks on the targets of conservative candidates and causes (Schulman 2006, 2007).  Those who hired FreeEats remained unknown, but this shadowy organization has assumed a certain notoriety.  That may only be good advertising for someone offering this product.  Joseph made himself well known in Indiana by counter-suing that state's attorney general (Schulman 2006).

 

8 The hired gun poll is succinctly described by Humphrey Taylor, chairman of the Harris Poll in the U.S., with journalist Sally Dawson.  See Public Affairs News - Industry - Polling:  Poll Position (June 2006) and scroll down to "hired gun" polling.  Taylor says "there is a long history of hired-gun polls which are actually designed to mislead people using every methodology. The prime offenders have included PR firms, and sometimes non-profit groups, who really more or less will come to you and say: 'I need a survey which shows that 80 per cent of people support our position – pro- or anti-abortion, or pro- or anti-globalisation, or whatever it is', or '80 per cent of people like my client's product more than they liked the other product.'"  That's right.  Here's an example from a religious right-wing group on the topic of abortion:  Faith 2 Action Abortion Poll, with Wirthlin Worldwide National Quorum serving as the hired gun.  (Thanks to my student Laura Muir for providing this example.  RDR, 10/4/07)

 

9 This is not the only occasion on which Conway's firm conducted polls with the deliberate intent to produce an ideologically conservative policy boost.  After the 2008 presidential election, The Federalist Society employed the firm to such effect, per Key Findings from a National Survey of 800 Actual Voters » Publications » The Federalist Society (November 7, 2008).  The full poll is labeled 2008 Post-Election Survey of 800 Actual Voters, with Questions 7 through 11 on judicial philosophy (located at pp. 5-6 of this 73-page Acrobat file).  The wording is designed to ensure that a high proportion of respondents will select the literalist approach strongly sought by the Society; and that mission was accomplished.

 

10 For the abortion issue, PollingReport.com has an Abortion and Birth Control site with years of legitimate polls showing typically worded legitimate questions on this topic.

 


 

References

 

American Association for Public Opinion Research.  2007.  Push Polls:  Not to be confused with legitimate polling (filename: AAPOR Statement on Push Polls).  URL: www.aapor.org/aaporstatementonpushpolls.

Asher, Herbert.  2001.  Polling and the Public:  What Every Citizen Should Know, 5th ed.  Washington, D.C.:  CQ Press.


Asher, Herbert.  2005.  Polling and the Public:  What Every Citizen Should Know, 6th ed.  Washington, D.C.:  CQ Press.

Beck, Deborah, Paul Taylor, Jeffrey Stanger, and Douglas Rivlin.  1997.  Issue Advocacy Advertising During the 1996 Campaign.  URL: www.annenbergpublicpolicycenter.org/03_political_communication/issueads/REP16.PDF.

Blake, Ken.  1996.  The Ten Commandments of Polling.  URL: facstaff.uww.edu/mohanp/methodspolls.html.

Blumenthal, Mark.  2006a.  Mystery Pollster - RoboScam: Not Your Father's Push Poll, 21 February 2006.  URL: www.mysterypollster.com/main/2006/02/roboscam_not_yo.html.

Blumenthal, Mark.  2006b.  A Real Push Poll?, 8 September 2006.  URL: www.pollster.com/blogs/roboscam/.

Blumenthal, Mark.  2007a.  Mystery Pollster:  Cell Phones and Political Surveys: Part I, 3 July 2007.  URL:  www.pollster.com/blogs/cell_phones_and_political_surv.php.

Blumenthal, Mark.  2007b.  Mystery Pollster:  Cell Phones and Political Surveys: Part II, 13 July 2007.  URL: www.pollster.com/blogs/cell_phones_and_political_surv_1.php.

Borger, Julian.  2004.  The Brains.  The Guardian, March 9, 2004.  URL: www.guardian.co.uk/uselections2004/story/0,13918,1165126,00.html.

Business Research Lab, The.  2004.  A Business Research Lab Tip, Leading Questions.  URL:  www.busreslab.com/tips/tip34.htm.

Davis, Richard H.  2004.  The Anatomy of a Smear Campaign, The Boston Globe, March 21, 2004.  URL:  www.boston.com/news/politics/president/articles/2004/03/21/the_anatomy_of_a_smear_campaign/.

Diebold Investor Relations.  2004.  News Release of January 29, 2004 - "Maryland Security Study Validates Diebold Election Systems Equipment for March Primary."  URL: www.corporate-ir.net/ireye/ir_site.zhtml?ticker=DBD&script=410&layout=-6&item_id=489744.

The Digital Divide.  2003.  IT&Society: A Web Journal Studying How Technology Affects Society, Volume 1, Issue 4, Spring 2003.  URL:  www.stanford.edu/group/siqss/itandsociety/v01i04.html.

Drew, Christopher.   2006.  New Telemarketing Ploy Steers Voters on Republican Path, New York Times, 12/6/06.  URL:  www.nytimes.com/2006/11/06/us/politics/06push.html.

DuBose, Louis.  2001.  Bush's Hit Man.  The Nation, February 15, 2001.  URL:  www.thenation.com/doc/20010305/dubose.

ElectionsOnline.us--Enabling Online Voting.  URL: www.electionsonline.us/.

Fathers' Manifesto.  The Criminal Gallup Organization.  URL:  www.christianparty.net/gallup.htm.

Fathers' Manifesto.  Abortion Polls by the Gallup Organization.  URL:  christianparty.net/abortiongallup.htm.

Gawiser, Sheldon R., and G. Evans Witt.  undated.  20 Questions A Journalist Should Ask About Poll Results, Third Edition.  URL: www.ncpp.org/?q=node/4.

GolfWeb Wire Services, PGATOUR.com - Is the Augusta National poll misleading? (November 14, 2002).  URL: images.golfweb.com/story/5888231.


Green, Donald P. and Alan S. Gerber.  2002.  Enough Already with Random Digit Dialing: A Proposal to Use Registration-Based Sampling to Improve Pre-Election Polling, May 5, 2002.  URL:  bbs.vcsnet.com/df/RegistrationBasedSampling.pdf.

Green, Joshua.  2004.  Karl Rove in a Corner.  Atlantic Monthly, November 2004.  URL:  www.theatlantic.com/doc/200411/green.

Green, Joshua.  2007.  The Rove Presidency.  Atlantic Monthly, September 2007.  URL: www.theatlantic.com/doc/200709/karl-rove.

Greenberg Quinlan Rosner Research, Inc. (with MoveOn.org).  Filename: gqr at URL:  www.moveonpac.org/moveonpac/gqr.pdf.

Hill, Seth J., James Lo, Lynn Vavreck, and John Zaller.  2007.  The Opt-in Internet Panel: Survey Mode, Sampling Methodology and the Implications for Political Research.  Annual meeting of the American Political Science Association, Chicago, IL.  URL:  web.mit.edu/polisci/portl/cces/material/HillLoVavreckZaller2007.pdf.

Hootie Poll:  See Helen Ross, Poll shows support for Augusta's right to choose membership - PGATOUR.COM, November 13, 2002, at URL: www.golfweb.com/u/ce/multi/0,1977,5885978,00.html.  Poll questions are listed sequentially in five files:  Augusta National poll Part I - PGATOUR.COM; Augusta National poll Part II - PGATOUR.COM; Augusta National poll Part III - PGATOUR.COM; Augusta National poll Part IV - PGATOUR.COM; and Augusta National poll Part V - PGATOUR.COM.  All were posted November 13, 2002 with respective URL suffixes: 0,1977,5886264,00.html; 0,1977,5886269,00.html; 0,1977,5886271,00.html; 0,1977,5886273,00.html; and 0,1977,5886278,00.html.

Jefferson, David, Aviel D. Rubin, Barbara Simons, and David Wagner.  2004 (January 20).  A Security Analysis of the Secure Electronic Registration and Voting Experiment (SERVE).  URL:  www.servesecurityreport.org/.

Kagay, Michael.  1994.   Poll on Doubt Of Holocaust Is Corrected, New York Times, July 8, 1994.  URL:  www.nytimes.com/1994/07/08/us/poll-on-doubt-of-holocaust-is-corrected.html.

Kagay, Michael.  2000.  Poll Watch Looking Back on 25 Years of Changes in Polling, New York Times, April 20, 2000.  URL:  www.nytimes.com/library/national/042000poll-watch.html.

Kaplan, Sheila.  1996.  Tobacco Dole, Mother Jones, May/June 1996.  URL:  www.motherjones.com/news/special_reports/1996/05/kaplan.html.

Keep and Bear Arms.com.  Keep and Bear Arms   -   Gun Owners Home Page - 2nd Amendment Supporters .  URL:  keepandbeararms.com/polls/pollmentorres.asp?id=10.

Keeter, Scott, Michael Dimock and Leah Christian.  2008.  Pew Research Center for the People & the Press.  The Impact Of "Cell-Onlys" On Public Opinion Polling:   Ways of Coping with a Growing Population Segment , 31 January 2008; Cell Phones and the 2008 Vote:   An Update , 23 September 2008.  URLs: people-press.org/report/391/  and pewresearch.org/pubs/964/.

Keeter, Scott.  2008.   Research Roundup: Latest Findings on Cell Phones and Polling, 22 May 2008.  URL:  pewresearch.org/pubs/848/cell-only-methodology.

Ladd, Everett Carl.  1994.  The Holocaust Poll Error:  A Modern Cautionary Tale.  Public Perspective, Vol. 5, No. 5 (July/August 1994).  Filename: Roper Holocaust Polls.  Reprinted at URL:  edcallahan.com/web110/articles/holocaust.htm, from Ed Callahan's STAT 110 Articles site at URL:  edcallahan.com/web110/articles/.


Leip, Dave.  Atlas of Presidential Elections:  1936 Presidential Election Results.  URL: www.uselectionatlas.org/RESULTS/national.php?f=0&year=1936.

Mapes, Jeff.  2000.  Web Pollster Hopes To Win Credibility. PulsePoll.com News: The Oregonian , April 12, 2000.  URL:  www.pulsepoll.com/news/pr/oregonian.html.

Martin, Jonathan.  2007.  Apparent pro-Huckabee third-party group floods Iowa with negative calls - Jonathan Martin's Blog, Politico.com, 12/3/07.  URL:  www.politico.com/blogs/jonathanmartin/1207/Apparent_proHuckabee_thirdparty_group_floods_Iowa_with_negative_calls.html.

Mooney, Chris.  2003.  Polling for Intelligent Design (Doubt and About).  September 11, 2003.  URL:  www.csicop.org/specialarticles/show/polling_for_id/.

Moore, James and Wayne Slater.  2006.  The Architect:  Karl Rove and The Master Plan for Absolute Power.  New York: Crown Publishers.

MoveOn.org.  2003.  Report on the 2003 MoveOn.org Political Action Primary.  URL:  moveon.org/pac/primary/report.html.

National Council on Public Polls.  1995.  A Press Warning from the National Council on Public Polls.  URL:  www.ncpp.org/push.htm.

Niles, Robert.  Margin of Error at RobertNiles.com.  URL:  www.robertniles.com/stats/margin.shtml.

NPR.  2006.  Karl Rove, 'The Architect', interview with coauthor Wayne Slater, WHYY, September 6, 2006.  URL: www.npr.org/templates/story/story.php?storyId=5775226.

Opinion Center.  URL:  www.opinioncenter.com/.

PulsePoll.  2000.  PulsePoll Primary: Arizona Results.  URL:  www.pulsepoll.com/primary/primary.html.

Rubenstein, Sondra Miller.  1995.  Surveying Public Opinion.  Belmont, CA:  Wadsworth Publishing.

Saletan, William.  2000.  Push Me, Poll You, Slate Magazine, February 15, 2000.  URL:  slate.msn.com/id/74943/.

Scenic America.  undated.  Opinion Polls:   Billboards are Ugly, Intrusive, Uninformative .  URL: www.scenic.org/billboards/background/opinion.

Schulman, Daniel.  2006.  Tales of a Push Pollster, Mother Jones, 29 October 2006.  URL: www.motherjones.com/news/update/2006/10/free_eats.html.

Schulman, Daniel.  2007.  i, robo-caller, Mother Jones, January/February 2007.  URL:  www.motherjones.com/news/outfront/2007/01/i_robo_caller.html.

Schwartz, John.  2004 (January 21).  Report Says Internet Voting System Is Too Insecure to Use.  URL:  www.nytimes.com/2004/01/21/technology/23CND-INTE.html?ex=1076821200&en=7d215de9386d6652&ei=5070  (Use the file name at a search engine or at the New York Times site should this URL fail.)

Silver, Nate.  2009.  ibdtipp-doctors-poll-is-not-trustworthy, 9/16/2009.  URL:  www.fivethirtyeight.com/2009/09/ibdtipp-doctors-poll-is-not-trustworthy.html.


Silver, Nate.  2009.  FiveThirtyEight Politics Done Right:  Strategic Vision Polls Exhibit Unusual Patterns, Possibly Indicating Fraud, 9/25/2009.  URL:  www.fivethirtyeight.com/2009/09/strategic-vision-polls-exhibit-unusual.html.

Singer, Eleanor.  1995.  The Professional Voice 3:  Comments on Hite's Women and Love.  In Rubenstein, Sondra Miller, Surveying Public Opinion, pp. 132-136.  Belmont, CA:  Wadsworth Publishing.

Smith, Tom W.  1989.  Sex Counts: A Methodological Critique of Hite's Women and Love.  Washington, D.C.:  National Academies Press.  Online:  National Academies Press, AIDS, Sexual Behavior, and Intravenous Drug Use (1989), Sex Counts: A Methodological Critique of Hite's Women and Love, pp. 537-547.  URL:  www.nap.edu/books/0309039762/html/537.html.

Sniggle.net:  The Culture Jammer's Encyclopedia, AP Wire 06-21-2003 UCR student arrested for allegedly trying to derail election.  URL:  web.archive.org/web/20030703124458/http://cbs11tv.com/national/HackerArrested-aa/resources_news_html.

Snow, Nancy.  2000.  The South Carolina Primary:   Bush Wins, America Loses .  CommonDream.org News Center.  URL:  www.commondreams.org/views/022100-106.htm.

Squire, Peverill.  1988.  Why the 1936 Literary Digest Poll Failed.  Public Opinion Quarterly 52:1 (Spring), 125-133.

Stolarek, John S., Robert M. Rood, and Marcia Whicker Taylor.  1981.  Measuring Constituency Opinion in the U.S. House:  Mail Versus Random Surveys.  Legislative Studies Quarterly 6:4 (November), 589-595.

Suskind, Ron.  2003.  Why are These Men Laughing?.  Esquire, January 1, 2003.  URL: www.ronsuskind.com/newsite/articles/archives/000032.html.

Taylor, Humphrey.  1998.  Myth and Reality in Reporting Sampling Error:   How the Media Confuse and Mislead Readers and Viewers. URL: PollingReport.com at www.pollingreport.com/sampling.htm.

The Polling Company TM and WomanTrend.  URL:  www.pollingcompany.com/resourcecenter.asp?FormMode=Call&LinkType=Text&ID=14 (or www.pollingcompany.com/resourcecenter.asp and link via "Polling Definitions").

The Why Files.  Obey the Law.  URL:  whyfiles.org/009poll/math_primer2.html.

Thurber, James A., and David A. Dulio.  1999.  A Portrait of the Consulting Industry.  Campaigns and Elections, July 1999.  URL:  Reprinted from the July 1999 Issue of Campaigns and Elections Magazine “A Portrait of the Consulting Industry” at 216.239.53.104/search?q=cache:WcajxnubstUJ:www.american.edu/spa/ccps/pdffiles/A_Portrait_of_the_Consulting_Industry.pdf+American+Association+of+Political+Consultants%2Bpush+polls&hl=en&ie=UTF-8 (better to find this via copy and paste of the filename to Google at www.google.com).

Traugott, Michael W., and Paul J. Lavrakas.  2000.  The Voter's Guide to Election Polls, 2d ed.  Chatham, NJ:  Chatham House.

Vote.com.   URL: www.vote.com.

Voting_System_Report_Final.  2003.  Risk Assessment Report:  Diebold AccuVote-TS Voting System and Processes, September 2, 2003.  SAIC (Scientific Applications International Corporation), for State of Maryland.  URL:  www.dbm.maryland.gov/dbm_search/technology/toc_voting_system_report/votingsystemreportfinal.pdf.


Welcome to the SERVE home page.  URL:  www.serveusa.gov/public/aca.aspx.

Young, Michael L.  1992.  Dictionary of Polling: The Language of Contemporary Opinion Research.  Westport, CT:  Greenwood Press.

 


 

Copyright © 2011, Russell D. Renka