message mapping explained
DESCRIPTION
Most research companies rely on voters to guess at what motivates them. What we know about how human beings evaluate information, make choices, and respond to survey questions tells us that the traditional approach to message testing, asking people to judge how effective a message would be, is not reliable. The problem: voters (and people in general) just aren't good at understanding the reasons they do things. Message Mapping is the only research technique proven to solve this problem. If you're relying on traditional message testing (more/less likely or vote for/against scales), you might as well be guessing. They don't work.
TRANSCRIPT
Message Maps
The Most Effective Technology for Assessing Message Effectiveness
Research shows that voters are unable to judge which messages actually motivate them during a telephone survey, so WPA uses a methodology that measures the actual effectiveness of each message without having to rely on a respondent's guesses.
The problem
Most research companies rely on voters to guess at what motivates them. What we know about how human beings evaluate information, make choices, and respond to survey questions tells us that the traditional approach to message testing, asking people how effective a message would be, is not reliable.
There are several reasons for this, including:
• Voters (or people in general) just aren't good at understanding the reasons they do things.
> There’s a reason that psychology and psychiatry are burgeoning industries—most people act without fully understanding why they act and often act in ways that are contrary to what they believe are their preferences and motivations.
> Voters just can’t differentiate among the importance of as many as a dozen distinct messages, so they wind up rating one high and the rest low, or all of them high, or another simple strategy. All of these can lead to us reaching the wrong conclusion when we rely only on respondent ratings to assess messages.
• Voters want to be liked by the interviewer.
> The foundation of telephone polling is the social exchange between interviewer and interviewee.
> While this is what allows us to conduct 20-minute surveys, it creates bias in message assessments.
> People will say one thing and do another on socially controversial topics such as race, class, honesty questions, and others.
• People tend to give the greatest weight to what they've been exposed to recently in the news (hot topics).
> While these messages sound familiar at the time, they may not have any impact at all on the vote.
> For this reason, voters will say a message matters a lot when it is already driving their choice on the ballot; if we repeat that message in an ad, we won't gain any ground because everyone already knows about it.
Traditional research does not identify messages that actually change opinions.
The solution to this problem is to measure the actual effect of hearing each message on a respondent’s vote choice.
We still ask respondents to rate messages because it gives them a cognitive task that causes them to listen to each message. We evaluate effectiveness, however, not by their responses, but by using observed changes from the pre-ballot to the post-ballot.
The way this works is as follows:
• Each respondent is asked to rate a random selection of messages and then the ballot is retested.
• We record which messages each respondent did and did not hear.
• We measure the actual behavioral response: the difference between their initial ballot vote and the informed ballot vote.
• We build a regression model with that response for each respondent as the dependent variable and a series of indicator variables for heard/did not hear each message as the independent variables.
• The coefficient on each heard/did-not-hear indicator is the measurement of the effectiveness of each message in changing votes.
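The steps above can be sketched in a few lines of code. This is a minimal illustration using simulated data; the respondent count, message effects, and noise level are invented for the example and do not reflect WPA's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_messages = 500, 8

# Indicator matrix: 1 if respondent i heard message j, else 0.
# Each respondent is randomly assigned a subset of messages.
heard = rng.integers(0, 2, size=(n_respondents, n_messages)).astype(float)

# Hypothetical "true" per-message effects on ballot movement
# (messages 2 and 5, indices 1 and 4, are the strong ones here).
true_effects = np.array([0.05, 0.30, 0.02, 0.01, 0.25, 0.04, 0.03, 0.02])

# Behavioral response: informed ballot minus initial ballot
# (e.g., +1 = moved toward our candidate, 0 = no change, -1 = moved away),
# simulated as the sum of heard-message effects plus noise.
response = heard @ true_effects + rng.normal(0.0, 0.2, n_respondents)

# OLS regression: response ~ intercept + heard/did-not-hear indicators.
X = np.column_stack([np.ones(n_respondents), heard])
coefs, *_ = np.linalg.lstsq(X, response, rcond=None)

# coefs[1:] estimate each message's effect on changing votes.
estimated_effects = coefs[1:]
```

Because messages are randomly assigned, the heard/did-not-hear indicators are uncorrelated with respondents' other characteristics, so each coefficient can be read as that message's average effect on ballot movement.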
We then use the actual effect of each message as the X axis of our Message Map™ and combine it with scales of stickiness (the ability to recall the message later in the survey) and believability of the message.
What our Message Maps reveal are the latent values that voters bring to elections, and the messages that appeal to them, that we would not find by looking just at message ratings.
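A chart like the Message Map described above can be sketched with matplotlib, assuming the three per-message scores have already been computed. All message names and score values below are illustrative, not actual results:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Hypothetical per-message scores: effectiveness = regression coefficient,
# stickiness = later recall, believability = share rating it believable.
messages = ["Balanced budget", "Cut DoE/DoEd funding",
            "Repeal Obamacare", "Privatize Social Security"]
effectiveness = [0.4, 1.2, 0.1, 1.0]
stickiness = [0.5, 0.9, 0.8, 0.7]
believability = [0.75, 0.50, 0.90, 0.45]

fig, ax = plt.subplots()
# Bubble size scales with believability, matching the map's legend.
ax.scatter(effectiveness, stickiness,
           s=[1000 * b for b in believability], alpha=0.5)
for x, y, label in zip(effectiveness, stickiness, messages):
    ax.annotate(label, (x, y), fontsize=8)
ax.set_xlabel("Increasing Effectiveness of a Message")
ax.set_ylabel("Increasing Memorability of a Message")
fig.savefig("message_map.png")
```

The best messages land in the upper right as large bubbles: effective, memorable, and believable all at once.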
The solution
WPA's message mapping methodology measures actual change in opinion.
[Message Map chart: X axis = Increasing Effectiveness of a Message; Y axis = Increasing Memorability of a Message; bubble size = Believability (0.25–1.0). Messages plotted: 1. Will fight for a balanced budget amendment; 2. Will cut Dept of Energy and Dept of Education funding; 3. Cut corporate taxes to spark economy; 4. Supports repeal of Obamacare; 5. Wants to privatize Social Security; 6. Pro-life champion; 7. Wants to greatly expand domestic oil drilling; 8. Will end pork spending.]
• Respondents' self-reporting gave the strongest ratings to messages 4 and 8, repealing Obamacare and ending pork spending. Messages 2 and 5 were self-reported as two of the lowest-rated messages.
> But our analysis shows that while respondents said statements 4 and 8 would motivate them, those messages ultimately had very little effect on their vote choice.2
• This example illustrates how respondents gave the "expected" conservative responses, ending pork spending and repealing Obamacare, to the interviewer while avoiding more controversial responses regarding eliminating the Departments of Energy and Education and privatizing Social Security.
> But in reality, these controversial topics were the winning messages for this particular campaign.3
WPA plots each message on a chart, showing the actual effectiveness on the X axis, the stickiness of the message on the Y axis, and the believability of a message represented by the size of the bubble. The best messages are large bubbles in the green area, balancing effectiveness, stickiness, and believability.
• On this Message Map, the most effective messages are numbers 2 and 5, cutting funding for the Departments of Energy and Education and privatizing Social Security.1
Examples
• In a recent Texas legislative primary, we found that voters who said they cared most about border security and illegal immigration really responded best to a message about life issues. Illegal immigration was a hot topic at the time, but Message Maps revealed that the enduring issue of protecting life really mattered more and helped our candidate win.
• In a competitive Congressional general election in Kansas last cycle, voters rated a message about balanced budgets most highly. But Message Maps revealed that a more aggressive message about fighting against the Governor's tax hike proposal won more votes. Voters wanted to believe they wouldn't respond to a "combative" message, but in reality they did.
324 Second Street SE, Washington, DC 20003
1319 Classen Dr, Oklahoma City, OK 73103
1005 Congress, Suite 495, Austin, TX 78701
www.WpAresearch.com
1. Krosnick, J.A., S. Narayan, W.R. Smith (1996). Satisficing in Surveys: Initial Evidence. New Directions for Evaluation 70: 29-44.
2. Fisher, R.J., J.E. Katz (2000). Social-desirability bias and the validity of self-reported values. Psychology and Marketing 17(2): 105-120.
3. Ashton, R.H., J. Kennedy (2002). Eliminating Recency with Self-review. Behavioral Decision Making 15(3): 221-231.