Improving Message Testing Polling

“Would You Still Hang Up on this Poll If You Knew…”
An Experiment to Improve the Design of Political Message Testing Polls
Presented at the 2009 AAPOR Annual Conference, May 15, 2009

Research conducted by:
• Thomas M. Guterbock, Ph.D., Director, Center for Survey Research, University of Virginia, [email protected]
• Deborah L. Rexrode, M.A., Research Analyst, Center for Survey Research, University of Virginia, [email protected]
• Samantha Luks, Ph.D., Vice-President for Special Projects, YouGovPolimetrix, [email protected]
Sponsored by: University of Virginia Center for Politics
Web survey services donated by: YouGovPolimetrix
Center for Survey Research, University of Virginia

Overview
• Background: how we got here
• Research objectives
  – Ideas for improvement
• The experiment
• Respondent reactions
• Party mismatch
• Treatment comparisons
• Conclusions & further research


Background


Message Testing ≠ Push Polling
• “Push polls” are not really surveys at all.
• Political advocacy calling conducted under the guise of research.
• Typically very short calls, communicating negative messages about a candidate.
  – Sometimes false messages
• Source of the call not clearly identified
• Aimed at altering the outcome of the election
• Sometimes live calls, sometimes outbound IVR

Push Polls are condemned
• AAPOR issued its first statement in 1997
  – Renewed statements in 2000, 2004
• National Council on Public Polls issued a statement in 1997.
• American Association of Political Consultants issued a statement as well.
  – The larger, reputable political consulting firms have refrained from actual push polling.


BUT: Message Testing Polls are OK?
• Political campaigns regularly use surveys to test positive and negative messages for their effect on voters.
• These are recognized as a legitimate form of research.
• These surveys contain ‘push questions.’
• The scripts are often unbalanced, treating one candidate positively and others negatively.
• The messages are often strongly partisan in tone.
  – Sometimes highly distorted, misleading, even false?

If the election were held today…

Why improve Message Test Practice?
• Message test polls often lead to trouble
  – For campaigns and candidates
  – For firms that conduct them
  – For AAPOR, which must handle complaints
• These polls make respondents angry
  – Lack of real informed consent
  – Reaction to the partisan, misleading nature of the messages
  – Feeling of being exploited
  – Confusion with ‘push polls’
• There is very little published, scientific research that can demonstrate the superiority of any specific technique in message-testing polls.

AAPOR: on Message Testing
“AAPOR recognizes that message tests may need to communicate positive or negative information in strongly political terms, in a tone similar to campaign advertisements. Still, these surveys should be judged by the same ethical standards as any other poll of the public: Do they include any false or misleading statements? Do they treat the respondent with fairness and respect?”
—AAPOR Statement on Push Polls

Background
• AAPOR task force worked on re-writing the ‘Push Poll’ statement in 2006.
  – Recognized message testing as a related issue.
• Guterbock organized a telephone ‘summit’ in December 2006 with polling practitioners.
  – Participants: Celinda Lake, Daniel Gotoff (LRP/Lake, Snell, Perry), Mark Mellman (Mellman Group), B. J. Martino (Tarrance Group), Jim Burton (Public Opinion Strategies), John Nienstedt (CERC: Competitive Edge Research and Communication)
• Meeting at AAPOR 2007 (Anaheim)
  – With academics, AAPOR Standards, practitioners

Message from the Pollsters
• Message-testing polls are used in nearly all serious campaigns.
• Message testing is essential to what the polling firms do.
• Pressures of cost and time limit what practitioners can do in these polls.
• Negative messages are carefully researched.
  – Campaign pollsters are ethically bound not to mislead.
  – Suggesting that anything in the poll script might not be true would jeopardize the campaign.
• Where’s the research?

Proposal and Support
• Obtained a real message-testing survey conducted by a major political polling firm in 2006
  – This clarified design issues of concern
  – Model for the “generic” questionnaire in our experiment
• Prepared a project description and rationale
• Received a grant from the UVA Center for Politics (Larry Sabato and Kenneth Stroupe)
• Offer of complimentary web survey services from YouGovPolimetrix (Doug Rivers, CEO)

Research Objectives


Purpose of the Research
• Show that improved design can improve the respondent experience in a message testing poll.
• If improved designs are used:
  – Cooperation will improve, costs will be lower
  – Data quality will improve
  – Fewer complaints, less confusion with ‘push polls’
  – Less possibility that public perception of surveys will suffer


The “Generic” Message Testing Survey
• Closely follows design of a real survey from a large firm
• Starts out as an ordinary pre-election poll
• No transitional introductions within the script
• No balance of positive and negative statements by party
  – Our party is all good; their party is all bad
• Repeated question after each statement: “Would you still vote for …?”
• Strong, increasingly personal, negative statements about the opposing candidate

Ideas for Improvement
• Improved transitional introductions
  – Start like an ordinary poll . . .
  – . . . but make the transition to message testing clear
• Improved balance of negative and positive statements by party
  – No campaign would want to run a fully balanced script
  – Test a partially balanced script
    • Sprinkle a few positives among the negatives
    • And a few negatives among the positives
• Improve the ‘test’ questions

Improve the transition . . .
• Lack of a transitional introduction before negative statements:
  – Respondent is caught off guard; strong statements are read without any forewarning.
  – Expectations of the respondent are violated.
    • Expects an ‘objective’ or ‘neutral’ survey
  – Respondent feels pressured.
  – These reactions are strongest when the negative party statements are mismatched with the party of the respondent.

Add a bit of balance . . .
• Lack of balance of messages by party creates a stressful experience for the respondent:
  – Respondents expect polls to be objective and neutral.
  – Strongly worded statements violate the respondent’s sense of fairness.
  – Respondents feel pressure to change their opinions.
  – Respondents loyal to the opposing party will realize that the results will aid the campaign of politicians they oppose.
  – Even partial balancing will signal the researchers’ intention to be fair and respectful of partisan differences.

Use alternate ‘test’ questions . . .
• Asking the respondent after each statement if they will change their vote puts unwelcome pressure on the voter:
  – Respondents feel they are being ‘pushed’
  – Especially if they favor the opposing party, or if they are independents
  – The poll fails to measure respondents’ true reactions to some of the messages:
    • “That’s distorted, even if somewhat true”
    • “I don’t believe that”
    • “That’s not fair”

The Experiment


Fielding the Experiment
• Special thanks to Doug Rivers and YouGovPolimetrix for their support in fielding this experiment.
• YouGovPolimetrix fielded the survey with pre-recruited PollingPoint panelists.
• Main focus: generic 2010 Congressional election
• Sample was well balanced among Democrats, Republicans, and Independents.
• Fielded March 2009.

Dependent Variables
• Self-reported measures of respondent experience
  – Measure of comfort or stress during the interview
  – Fairness of questions
  – Believability of partisan statements
  – Concern about use of results to aid the opposing party
• Were the questions what was expected?
• Willingness to participate in the future


The Experiment
• Full factorial design
• Thirty-six treatments (3 × 2 × 3 × 2)
  – Three introductions: none vs. two transitional intros
  – Unbalanced vs. partially balanced partisan statements
  – Three sets of ‘test’ questions (responses to statements)
  – Two political party versions
• Random assignment of treatments
• Party versions used equally
• ‘Generic’ treatment assigned a larger number of cases, to facilitate comparison with alternatives.
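The factorial layout above can be enumerated directly. A minimal sketch; the level labels here are illustrative, not the study’s internal codes:

```python
from itertools import product

# Factor levels from the design: 3 intros x 2 balance conditions
# x 3 test-question sets x 2 party versions = 36 treatment cells
intros = ["no_intro", "intro_1", "intro_2"]
balance = ["unbalanced", "partially_balanced"]
tests = ["test_one", "test_two", "test_three"]
party = ["favors_dem", "favors_rep"]

cells = list(product(intros, balance, tests, party))
print(len(cells))  # 36
```

Each respondent is randomly assigned one cell; the generic cell (no intro, unbalanced, test one) is oversampled to sharpen comparisons.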

Transitional Introductions
• Control: “When you hear the following statements, does knowing about this make you more likely or less likely to vote for this candidate?”
  – Wording used on ‘generic’ poll
• Intro 1: “Here are some statements you might hear from a political candidate running for office. Your thoughts and opinions in response to these statements are an important part of our research.”


Introduction – Treatment 1


Transitional Introductions • Intro 2: “The following statements are the sort that you might hear in a political campaign, or a campaign commercial. These statements are different from what you might expect from a usual poll or survey. You might not agree with these statements, and some are negative. We are testing people’s reactions to these statements and we would like to warn you that these statements could cause some people to react strongly. This is ok. Your thoughts and opinions in response to these questions are an important part of our research.”


Introduction – Treatment 2


Unbalanced Design
• Follows design of the ‘generic’ poll
• Questions were grouped into three blocks:
  – Series One: 6 positive statements about members of Congress from the favored party
  – Series Two: 7 negative statements about the opposing party
  – Series Three: another 7 negative statements about the opposing party
    • Series Three included cultural and social issues that are ‘hot-button’ issues for some voters
• Statements were randomized within blocks

Partially Balanced Design
• Questions were again grouped into three blocks:
  – Series One: 4 positive and 2 negative statements about the favored party
  – Series Two: 2 positive and 5 negative statements about the opposing party
  – Series Three: 1 positive and 6 negative statements about the opposing party
• Statements were randomized within blocks
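The partially balanced block structure, with its within-block randomization, can be written down as statement lists. A sketch; the block names are ours:

```python
import random

# Partially balanced design: (positive, negative) counts per series, per the slide
blocks = {
    "series_one_favored":    ["pos"] * 4 + ["neg"] * 2,
    "series_two_opposing":   ["pos"] * 2 + ["neg"] * 5,
    "series_three_opposing": ["pos"] * 1 + ["neg"] * 6,
}

rng = random.Random(2009)
for statements in blocks.values():
    rng.shuffle(statements)  # statements were randomized within each block
```

The block order is fixed; only the order of statements inside each block varies across respondents.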

Test One Questions
• “Does knowing about this make you more likely or less likely to vote for a [D/R] Party candidate, and how strongly do you feel about that?”
  – Much more likely
  – Somewhat more likely
  – Somewhat less likely
  – Much less likely
• Follows wording used on ‘generic’ poll


Test One Questions

Test Two Questions
• “How convincing is this statement as a reason to vote for a [D/R] candidate?”
  – “Very convincing” to “Not convincing”
  – Used to evaluate positive statements
• “How serious a doubt does this statement create about your voting for a [D/R] candidate?”
  – “Serious doubt” to “No doubt”
  – Used for negative statements
• Some firms currently use this wording.

Test Two Questions—Positive

Test Two Questions—Negative

Test Three Questions
• Two evaluative questions asked about each statement:
  1) “How believable do you think this statement is?”
     – “Very believable” to “Not believable”
  2) “For you as a voter, how important is it for you to know this information?”
     – “Very important” to “Not important”
• Problem: survey length increased by 2 minutes when Test 3 questions were used
  – More mid-survey break-offs with Test 3

Test Three Questions


Interview Dispositions
• 2,086 complete interviews
• 329 incomplete interviews
• 300 screenouts
  – Low interest in congressional elections
  – Industry
• Average interview length: 13 minutes
• Response Rate = 30.8%
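The mid-survey completion rate can be derived directly from these dispositions; the 30.8% response rate additionally depends on the number of panel invitations, which is not reported here, so this sketch computes only what the dispositions support:

```python
# Interview dispositions from the slide
completes, breakoffs, screenouts = 2086, 329, 300

started = completes + breakoffs        # panelists who began the main survey
completion_rate = completes / started  # share of starters who finished

print(f"{completion_rate:.1%}")  # 86.4%
```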

Demographic Targets
• Based on the 2006 American Community Survey
• Invitations based on cross-classifications of race, gender, education, and age, and on expected responsiveness
• No hard quotas employed; all respondents who met screening criteria were permitted to take the survey


Unweighted Respondent Demographics

Gender:
  Male           46%
  Female         53%

Education:
  HS or less     34%
  Some College   34%
  College Grad   19%
  Post Grad      13%

Age:
  18-34          21%
  35-54          40%
  55+            39%

Race:
  White, other   82%
  Black          10%
  Latino          8%

Party:
  Strong Rep.    16.3%
  Moderate Rep.  11.2%
  Indep. Rep.    11.2%
  Independent     9.7%
  Indep. Dem.     9.9%
  Moderate Dem.  10.8%
  Strong Dem.    24.0%
  No Preference   2.9%
  Other           2.6%
  Unsure/DK       1.4%

Respondent Reactions


Comments on the Survey
• “This is clearly the MOST BIASED survey your firm has presented. You need to rethink your questions about Republicans and present a moderate position rather than the extremist view you are espousing. Your questions leave no doubt in my mind as to your political position. Not a good thing in a survey.”
• “This is the most biased survey I ever read! Who wrote these questions? I'm going to take a guess that it was the Democrats and Acorn by putting out such lies about the Republicans. Maybe the Democrats should have inserted their name in where the Republicans were listed. I used to think that polling point was fair and conservative. Now I know it's just a front for those cheating, lying, manipulative phony Democrats.”
• “This is the single MOST BIASED survey I have taken for your firm. It is garbage!”
• “I refuse to finish this poll. The questions on the Republican congress are far too critical and on the Democrats far too kind. There is no possible way to do this poll fairly. It's the worst you've ever sent me!!!!!!!!!!!!” (note: survey was completed)
• “Go into a useful business. Your questions were left leaning and loaded. I am an independant. I vote for both parties. I will vote to preserve our God given rights and freedoms, and to seal our Southern border!!!!!!”

After launch, a debriefing was added at the end, explaining the purpose and design of the experiment.

After the Debriefing • “Be fair and truthful in your questions and you'll get fair and truthful answers; that statement about this survey not being for a particular political party... yeah right” • “Don't deliberately give me surveys that you know will stress me out, for one. That would "improve my experience". Seriously now I'm kind of pissed at you.” • “This poll was designed by the LIBERAL loons at the University of Virginia. How despicable. I'm surprised you went along with this charade.”


Results: The importance of party mismatch


Results
• Mismatched: respondent’s party is opposite of the party favored in the questionnaire version
  – Includes Independents who ‘lean’ R or D
• Unaffiliated: respondent is Independent (no leanings) or did not select a political party
• Findings indicate that negative reactions are strongest among mismatched or unaffiliated respondents.


Key to partisan mismatch

                       Respondent’s party ID (N)
Questionnaire version  Democrat          Independent         Republican
Favors Democrats       Matched (480)     Unaffiliated (152)  Mismatched (387)
Favors Republicans     Mismatched (460)  Unaffiliated (137)  Matched (413)
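The matched / mismatched / unaffiliated coding described above can be sketched as a small helper; the party-ID labels are our own, not the survey’s:

```python
def classify(party_id: str, version: str) -> str:
    """Code a respondent against the questionnaire version.

    party_id: one of 'strong_dem', 'moderate_dem', 'lean_dem', 'independent',
              'lean_rep', 'moderate_rep', 'strong_rep', 'no_preference'
    version:  'favors_dem' or 'favors_rep'
    """
    # Pure Independents and non-identifiers are 'unaffiliated';
    # leaners count toward the party they lean to.
    if party_id in ("independent", "no_preference"):
        return "unaffiliated"
    side = "dem" if party_id.endswith("dem") else "rep"
    return "matched" if version.endswith(side) else "mismatched"

print(classify("lean_rep", "favors_dem"))  # mismatched
```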

Percent saying survey was ‘stressful’

                       Respondent’s party ID
Questionnaire version  Democrat  Independent  Republican
Favors Democrats          8%        20%          26%
Favors Republicans       26%        17%           7%

Percent saying items were ‘unfair’

                       Respondent’s party ID
Questionnaire version  Democrat  Independent  Republican
Favors Democrats          8%        24%          47%
Favors Republicans       31%        15%           5%

Remaining analyses exclude ‘matched’

                       Respondent’s party ID (N)
Questionnaire version  Democrat          Independent         Republican
Favors Democrats       Matched (480)     Unaffiliated (152)  Mismatched (387)
Favors Republicans     Mismatched (460)  Unaffiliated (137)  Matched (413)

Comparison: Generic Treatment vs. Improved Balance and Improved Test Questions

Stress in 18 treatment pairs (D+R)
[Chart: stress levels by treatment (Generic vs. Balanced vs. Balanced + Alternate tests). Comparison: the 12 ‘better’ treatments (N = 503) vs. the 2 generic treatments (N = 385).]
[Charts: the same comparison on six outcome measures: Level of Stress, Concern about Misuse, Questions Expected?, Questions Fair?, Questions Believable?, Future Participation.]

Regression Results (Mismatched Only)

Design improvement  Stress   Misuse   Questions  Questions  Questions   Future
                                      Expected   Fair       Believable  Participation
Intro 2             -.036     .017     .029      -.006      -.017       -.043
Intro 3             -.026    -.008     .095*     -.001      -.025       -.050
Balance             -.163*   -.125*    .163*      .270*      .260*       .064+
Test Two            -.174*   -.112*    .120*      .269*      .202*       .059
Test Three          -.102*   -.079*    .052       .173*      .132*       .067+

* p<.05; + p<.10. Reference categories: no intro, unbalanced, test one.
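The coefficients above come from regressing each outcome on treatment dummies, with no intro, unbalanced, and test one as reference categories. A minimal sketch of that setup on synthetic data; the outcome, sample size, and effect sizes below are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Mutually exclusive treatment dummies (reference levels are the all-zero rows)
intro2 = rng.integers(0, 2, n)
intro3 = rng.integers(0, 2, n) * (1 - intro2)
balance = rng.integers(0, 2, n)
test2 = rng.integers(0, 2, n)
test3 = rng.integers(0, 2, n) * (1 - test2)

# Synthetic 'stress' outcome: balance and alternate test questions lower stress
stress = 0.5 - 0.16 * balance - 0.17 * test2 - 0.10 * test3 + rng.normal(0, 0.2, n)

X = np.column_stack([np.ones(n), intro2, intro3, balance, test2, test3])
coefs, *_ = np.linalg.lstsq(X, stress, rcond=None)
# coefs[3], coefs[4], coefs[5] estimate the Balance, Test Two, Test Three effects
```

With this setup, each column of the tables corresponds to one such regression with a different dependent variable.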

Regression Results (Mismatched and Unaffiliated)

Design improvement  Stress   Misuse   Questions  Questions  Questions   Future
                                      Expected   Fair       Believable  Participation
Intro 2             -.013    -.001     .057+      .022       .016       -.009
Intro 3             -.033     .002     .088*     -.003      -.014        .004
Balance             -.135*   -.133*    .129*      .234*      .228*       .042
Test Two            -.127*   -.083*    .090*      .255*      .180*       .059+
Test Three          -.079*   -.065*    .047       .150*      .098*       .041

* p<.05; + p<.10. Reference categories: no intro, unbalanced, test one.

Conclusions


Conclusion
• When the respondent’s party was a mismatch with the questionnaire, or the respondent was unaffiliated with either party, the experience was stressful for many respondents.
• Respondents’ assessment of the experience is strongly conditioned by the design features of the study.


Further Conclusion
• Better design can improve:
  – Fairness, believability, meeting expectations, concern about use of results to aid the opposing party, future willingness to participate
• Improvements gained through:
  – Better balance of positive and negative statements about the opposing party
  – Implementing different types of ‘test’ questions
  – Including a transitional introduction
    • Makes partisan statements less unexpected for the respondent

Further analyses: This study
• Further analysis is needed to:
  – Examine the interaction between the various treatment factors
  – Look at content found in the open-ended comments
  – Assess effects of the actual messages on vote intention
• We plan to measure future participation of our subjects as YouGov panelists.


Further Research: Future studies
• Test whether the observed effects operate in a specific, real campaign, with more personal messages
• Replicate the experiment in an actual phone survey of cold-called voters
• Explore additional measures of respondent experience
• We invite political polling firms to incorporate similar tests into their future message-testing polls.

In sum . . .
Message testing polls CAN be improved by better design.


“Would You Still Hang Up on this Poll If You Knew…”
An Experiment to Improve the Design of Political Message Testing Polls

Thomas M. Guterbock, [email protected]

Deborah L. Rexrode [email protected]

Samantha Luks [email protected]

Presented at the 2009 AAPOR Annual Conference May 15, 2009
