Understanding the Sensitivity of Party Identification Questions in Polarized African Contexts, with Martha Wilfahrt (Accepted at Political Behavior)
The global rise of political polarization has generated new insights into how polarization affects political behavior. But political polarization may also shape the dynamics of researching such behavior in the first place. This paper argues that in polarized contexts, asking about party identification (PID) during academic surveys can be highly sensitive. We demonstrate this in sub-Saharan Africa using three forms of evidence. First, a survey of enumerators in Côte d’Ivoire shows that over 50% feel uncomfortable asking about PID, linking their discomfort to the polarized political climate. Enumerators also believe that such questions reduce honest responses and influence the research encounter itself. Second, data from the Afrobarometer reveal that respondents in polarized countries often self-censor on PID questions more than on traditionally sensitive topics like clientelism, violence, or identity. Finally, interviews and focus groups with Ivorian citizens support these findings, highlighting the sensitivity around PID. These results indicate that in politically polarized environments, partisanship questions may be uniquely sensitive, affecting data quality and the research experience as well as political behavior itself. We conclude by discussing implications for survey design and analysis and by calling on political scientists to consider what is sensitive, in what context, and for whom when designing surveys.
Seeing What Citizens Miss: How Monitors Improve Election Violence Reporting, with Leonardo Arriola, Arsène Brice Bado, Allison Grossman, and Aila M. Matanock (under review)
Pre-Analysis Plan. Documenting election violence is central to safeguarding electoral integrity, but collecting such data is difficult. While citizen crowdsourcing is often seen as a cost-effective alternative to traditional monitoring, we argue that monitors provide unique advantages due to their training and insulation from local pressures. Our field experiment during Côte d’Ivoire’s 2020 presidential election assessed whether monitors enhance the documentation of election violence when used alongside citizen reporting. We found that the presence of a monitor increased the likelihood of violence being reported by 10.7 percentage points without affecting citizen behavior. Monitors with more geographic experience were more likely to report incidents, regardless of their proximity to home communities. These findings highlight the importance of monitors in revealing latent acts of violence, enhancing both the scope and depth of reporting, and demonstrate the value monitors offer even as crowdsourced data becomes more prevalent.
Consider the Source: Individual Variation in Reporting Electoral Violence, with Leonardo Arriola, Arsène Brice Bado, Allison Grossman, and Aila M. Matanock
Pre-Analysis Plan. The lack of locally sourced data remains an obstacle to improving knowledge about election violence around the world. Researchers continue to rely largely on secondhand data, whether sourced from media reports or election monitors. But the uncertain accuracy and validity of such sources raise critical questions about our understanding of fundamental dynamics relating to the victims and perpetrators of election violence. In this paper, we present a theoretical framework for assessing the likelihood that differently situated individuals report on violent events they have witnessed or experienced firsthand. Drawing on an original survey conducted across 289 urban and rural locations in the run-up to Côte d’Ivoire’s 2020 presidential election, we show that there is no significant difference in reporting between citizens and leaders despite distinctions in social status. Instead, we find that key demographic factors consistently affect the likelihood of reporting: women and rural residents are less likely to report violence, while we find little systematic difference in reporting based on partisanship or ethnic identity. Violence reporting is also correlated with exposure to other forms of conflict, namely ethnic, religious, or land-related conflict. We further show that differences in the likelihood of reporting across forms of violence (i.e., property damage, killings, physical assaults, verbal threats) are few and generally small. These findings contribute to emerging discussions focused on improving data-collection methodologies for election violence and potential policy interventions aimed at reducing the outbreak of such violence.