Enumerator Experiences in Violent Research Environments, with Martha Wilfahrt (R&R at Comparative Political Studies) Working Paper [Appendix]
Understanding the political and social effects of violence in local populations through public opinion surveys has become increasingly common across the globe. Yet while researchers are attuned to possible challenges induced during survey implementation, this work has focused almost uniformly on the respondent. This paper considers the experiences of survey enumerators as critical actors for data collection in violent research settings. We present the results from a survey of 245 enumerators in Côte d’Ivoire to show that their exposure to personal violence conditions the challenges they face and the compromises they make to collect data. Contrary to expectation, we find that individual enumerator experience with violence while collecting data is more consequential for the process than being an outsider in the communities where they work or surveying violence-affected participants. We shed light on how academic research in violent political settings poses unique security concerns for enumerators, with ramifications for data integrity. Summary of findings (French)
Sensitive to Whom? Understanding "Sensitive" Questions in Violent Contexts, with Martha Wilfahrt
In violent environments, survey researchers often take special care to obtain accurate information on sensitive topics without traumatizing participants. However, this research agenda fails to take into account how the individuals asking the questions might also be affected by sensitive topics. In this research letter, we propose that asking about partisanship is highly charged in insecure contexts for enumerators and respondents alike. Leveraging a survey of enumerators in Côte d’Ivoire, we show that enumerators believe that asking conventional questions about partisanship is as sensitive as asking about exposure to violence. In fact, over 50% of enumerators felt uncomfortable asking about partisan identity, compared to 26% who felt uncomfortable asking about violent experiences. Enumerators who feel uncomfortable asking about partisan identification (PID) believe that fewer respondents will answer this question honestly. We corroborate these findings with suggestive evidence from the Afrobarometer, showing that respondents in insecure environments are more likely to refuse to answer the partisanship question. We posit that asking about partisanship can be sensitive in contexts where violence and politics are intrinsically linked.
Consider the Source: Individual Variation in Reporting Electoral Violence, with Leonardo Arriola, Arsène Brice Bado, Allison Grossman, and Aila M. Matanock
Pre-Analysis Plan. The lack of locally sourced data remains an obstacle to improving knowledge about election violence around the world. Researchers continue to rely largely on secondhand forms of data, whether sourced from media reports or election monitors. But the uncertain accuracy and validity of such data sources raise critical questions about our understanding of fundamental dynamics relating to the victims and perpetrators of election violence. In this paper, we present a theoretical framework for assessing the likelihood that differently situated individuals report on violent events they have witnessed or experienced firsthand. Drawing on an original survey conducted across 289 urban and rural locations in the run-up to Côte d’Ivoire’s 2020 presidential elections, we show that there is no significant difference in reporting between citizens and leaders despite social status distinctions. Instead, we find that key demographic factors consistently affect the likelihood of reporting: while women and rural residents are less likely to report violence, we find little systematic difference in reporting based on partisanship or ethnic identity. We show that violence reporting is correlated with exposure to other forms of conflict, namely ethnic, religious, or land conflict. We further show that there are few, and generally small, differences in the likelihood of reporting across forms of violence (i.e., property damage, killings, physical assaults, verbal threats). The findings presented here contribute to emerging discussions focused on improving data-collection methodologies for election violence and potential policy interventions aimed at reducing the outbreak of such violence.
Documenting Electoral Violence: Assessing Tradeoffs in Data Collection Methodologies, with Leonardo Arriola, Arsène Brice Bado, Allison Grossman, and Aila M. Matanock
Pre-Analysis Plan. Using election monitors to identify violence during elections is frequently critiqued for being costly, politicized, and possibly ineffective. By contrast, digital platforms that crowdsource election violence reporting directly from citizens are increasingly promoted because they ostensibly sidestep the limitations associated with traditional monitoring. So, why bother with monitoring? We argue that monitoring can still improve the detection of violence when compared to citizen crowdsourcing. The training and experience of monitors enable them to detect various forms of violence, resulting in a higher reporting rate when compared to crowdsourcing alone. To test our claims, we randomized whether the most populous areas of Côte d’Ivoire, a country long afflicted by political violence, were assigned to monitoring by civil society groups in addition to citizen crowdsourcing during the 2020 presidential elections. In comparing reports of election violence from locations that had only citizen reporting to those that had both citizen and monitor reporting, we find that the assignment of a monitor significantly increases the likelihood of violence being reported compared to the citizen treatment alone. Our findings indicate that a location is more likely to be associated with at least one report of violence due to monitor reporting rather than citizen reporting. We further find suggestive evidence that the experience and training of monitors are correlated with a greater likelihood of issuing a report.