Program Evaluation Using Experiments

Jared Barton, George Mason University
APIA Vote
June 23, 2009, Washington, DC

Evaluators

• David W. Nickerson (Ph.D., Yale University)
  – Assistant Professor, Political Science, University of Notre Dame
  – Published dozens of articles on GOTV strategies using field experiments and consulted for several large-scale campaigns
  – A leading scholar in GOTV program evaluation

• Jared Barton
  – Graduate Student, Economics, George Mason University
  – Former political consultant, Global Strategy Group
  – Consulted for local and state candidates in

Evaluating Political Campaigns

• Did the program change behavior?
  – Did they vote?
  – Did they contribute?
  – Did they volunteer?

• If so, how effective was the program?
  – How many more votes?
  – How much more money (contributors)?

• What’s the best way to answer these questions?

In 2008, We Implement a GOTV Program in “Precinct 12”

• How should we assess the program?
  – Compare turnout in Precinct 12 across elections?
  – Compare turnout in Precinct 12 with another precinct where we didn’t implement a program?
  – Field experiment?

• Different methods make different assumptions!

Comparing Across Time

[Bar chart: voter turnout (percentage), 2006 vs. 2008]

It appears the program increased turnout by 10%. BUT! Assumes the political context is the same across elections.

Comparing Across Precincts

[Bar chart: voter turnout (percentage), Precinct 11 vs. Precinct 12]

It appears the program increased turnout by 5%. BUT! Assumes the voters and campaign activities are the same in both precincts.

What About An Experiment?

• Construct a perfect comparison group.

• Random group assignment: “flip a coin” to determine whether a voter or precinct is exposed to the program (see the sketch below).

• Since the decision is random, voters or precincts exposed to the program should be identical (on average) to those not exposed to the program.
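A minimal sketch of the “flip a coin” assignment, assuming the voter file is just a list of voter IDs (the list, the 50/50 split, and the fixed seed are illustrative assumptions, not details from the program):

```python
import random

def assign_groups(voter_ids, seed=2009):
    """Randomly split voters into treatment and control with a coin flip per voter."""
    rng = random.Random(seed)  # fixed seed so the assignment can be reproduced and audited
    treatment, control = [], []
    for voter in voter_ids:
        # "Flip a coin": each voter has a 50% chance of being contacted
        (treatment if rng.random() < 0.5 else control).append(voter)
    return treatment, control

# Illustrative voter file
voters = [f"voter_{i}" for i in range(10_000)]
treatment, control = assign_groups(voters)
print(len(treatment), len(control))  # roughly 5,000 voters in each group
```

Because each voter’s assignment is an independent coin flip, the two groups end up alike on average (age, past turnout, enthusiasm), which is exactly the comparability the following slides rely on.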

How Experiments Work

• Identify the “target population”

• Randomly divide the target population into:
  – a “treatment group” to contact
  – a “control group” that isn’t contacted by you

• Contact the treatment group

• Measure the outcomes in both groups

• Compare outcomes across the treatment and control groups
  – This difference is the effect of the treatment!

[Flow diagram: Target Population → Randomization → Control Group (leave alone) or Treatment Group (apply treatment) → individuals engage in behavior → Control Group Outcomes vs. Treatment Group Outcomes]

What Do We Need to Do a Good Experimental Evaluation?

• Identify a subject population
  – Voters of Asian or Pacific Island ancestry
  – We require a “large enough” population to invoke the key experimental assumption (see the sketch after this list)

• Randomize the population into control and treatment groups

• Measure the outcome of interest
  – The outcome must be measurable

• Leave the control group alone and contact the treatment group
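How large is “large enough” is usually settled with a standard sample-size calculation for comparing two proportions; the sketch below is one common approximation (the 45% baseline turnout, 5-point hoped-for effect, 5% significance level, and 80% power are illustrative assumptions, not numbers from this program):

```python
from math import sqrt, ceil
from scipy.stats import norm

def n_per_group(p_control, p_treatment, alpha=0.05, power=0.80):
    """Approximate voters needed per group to detect a given turnout difference."""
    z_a = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_b = norm.ppf(power)           # desired power
    p_bar = (p_control + p_treatment) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p_control * (1 - p_control)
                              + p_treatment * (1 - p_treatment))) ** 2
    return ceil(numerator / (p_treatment - p_control) ** 2)

# Illustrative: detect a 5-point boost from a 45% baseline turnout
print(n_per_group(0.45, 0.50))  # roughly 1,600 voters in each group
```

Smaller expected effects or lower baseline turnout push the required group sizes up quickly, which is why a single small precinct is often too small to evaluate on its own.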

Example Experiment: A GOTV Program

• You select areas with low voter turnout rates among Asian Americans

• We randomly assign the voters in those areas to control and treatment groups

• You execute the treatment you designed
  – e.g., flyers with election information (date, polling place location)

• Leave the control group alone

• We compare the rates of voter turnout between the treatment and control groups after Election Day (a sketch of this comparison follows below)
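A minimal sketch of that comparison, assuming the only inputs are head counts from the post-election voter file (the specific counts below are made up for illustration):

```python
from math import sqrt
from scipy.stats import norm

def turnout_effect(voted_t, n_t, voted_c, n_c):
    """Difference in turnout rates with a simple two-proportion z-test."""
    rate_t, rate_c = voted_t / n_t, voted_c / n_c
    effect = rate_t - rate_c                    # estimated effect of the GOTV treatment
    pooled = (voted_t + voted_c) / (n_t + n_c)  # pooled turnout rate under "no effect"
    se = sqrt(pooled * (1 - pooled) * (1 / n_t + 1 / n_c))
    z = effect / se
    p_value = 2 * norm.sf(abs(z))               # two-sided p-value
    return effect, p_value

# Illustrative post-election counts
effect, p = turnout_effect(voted_t=1_050, n_t=2_000, voted_c=950, n_c=2_000)
print(f"Turnout difference: {effect:.1%} (p = {p:.3f})")
```

Because the groups were formed by random assignment, the raw difference in turnout rates is itself the estimate of the program’s effect; the z-test only asks whether a difference of that size could plausibly be chance.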

But Enough about Us…

• What are some goals your groups have?

• What are some methods you use to achieve those goals?

• Can we identify any common goals and common methods to achieve them?
