Understanding the Sensitivity of Party Identification Questions in Polarized African Contexts, with Martha Wilfahrt
Survey researchers often take special care to obtain accurate information on sensitive topics. However, this hinges on our ability to identify which topics are sensitive and hence which questions merit special attention and techniques. In this paper, we propose that one of the most routine questions in political science – asking a respondent for their party identification (PID) – can be a highly charged question. We illustrate this in two ways. First, we turn to an original survey of enumerators in Côte d’Ivoire to show that the front-line workers in our data-collection efforts believe that conventional questions about partisanship are as sensitive as asking about exposure to violence. In fact, over 50% of enumerators felt uncomfortable asking about partisan identity, compared to 26% who felt uncomfortable asking about violent experiences. Enumerators believe that fewer respondents will honestly answer questions about party identification and that the very act of asking about PID can influence the research encounter. We then present evidence from the Afrobarometer showing that respondents display significant self-censorship on questions about political identification compared to questions about clientelism or violence experience, which are traditionally viewed as sensitive. These differences are especially acute when respondents reside in highly polarized countries. We conclude that political scientists need to interrogate what is sensitive, where, and to whom when designing survey research. Specifically, partisanship can be highly sensitive in contexts where polarization characterizes the political environment.
Consider the Source: Individual Variation in Reporting Electoral Violence, with Leonardo Arriola, Arsène Brice Bado, Allison Grossman, and Aila M. Matanock
Pre-Analysis Plan. The lack of locally sourced data remains an obstacle to improving knowledge about election violence around the world. Researchers continue to rely largely on secondhand forms of data, whether sourced from media reports or election monitors. But the uncertain accuracy and validity of such data sources raise critical questions about our understanding of fundamental dynamics relating to the victims and perpetrators of election violence. In this paper, we present a theoretical framework for assessing the likelihood that differently situated individuals report on violent events they have witnessed or experienced firsthand. Drawing on an original survey conducted across 289 urban and rural locations in the run-up to Côte d’Ivoire’s 2020 presidential elections, we show that there is no significant difference in reporting between citizens and leaders despite distinctions in social status. Instead, we find that key demographic factors consistently affect the likelihood of reporting: women and rural residents are less likely to report violence, while we find little systematic difference in reporting based on partisanship or ethnic identity. We show that violence reporting is correlated with exposure to other forms of conflict, namely ethnic, religious, or land conflict. We further show that there are few and generally small differences in the likelihood of reporting across forms of violence (i.e., property damage, killings, physical assaults, verbal threats). The findings presented here contribute to emerging discussions focused on improving data-collection methodologies for election violence and potential policy interventions aimed at reducing the outbreak of such violence.
Monitors Do It Better: Detecting Violence During Elections, with Leonardo Arriola, Arsène Brice Bado, Allison Grossman, and Aila M. Matanock
Pre-Analysis Plan. Deploying monitors to identify violence during elections is frequently critiqued for being costly, politicized, and possibly ineffective. In response, the international community has increasingly turned to digital platforms that crowdsource election violence reporting directly from citizens because they ostensibly sidestep the limitations associated with monitoring. We argue that monitoring improves the detection of election violence when compared to citizen reporting because the training and experience of monitors enable them to detect various forms of violence. To test our claims, we randomized whether the most populous areas of Côte d’Ivoire, a country long afflicted by political violence, were assigned to monitoring by civil society groups in addition to citizen reporting during the 2020 presidential elections. We find that the random assignment of a monitor to a location significantly increases the likelihood of election violence being reported compared to locations assigned only to citizen reporting. This differential rate of reporting is driven primarily by monitor efforts; the presence of a monitor does not induce citizens to report violence. In assessing potential mechanisms, we further find that monitor training and experience are correlated with a greater likelihood of reporting.