Sensitive to Whom? Understanding "Sensitive" Questions in Violent & Polarized Contexts, with Martha Wilfahrt
Survey researchers often take special care to obtain accurate information on sensitive topics. Doing so, however, hinges on our ability to identify which topics are sensitive and hence which questions merit special attention and techniques. In this research letter, we propose that one of the most routine questions in political science -- asking a respondent for their party identification (PID) -- can be highly charged. We illustrate this in two ways: first, we present evidence from the Afrobarometer showing that respondents display significant self-censorship on basic political questions when they reside in high-violence or highly polarized countries. We then turn to an original survey of enumerators in Côte d'Ivoire to show that the front-line workers in our data collection efforts believe that conventional questions about partisanship are as sensitive as questions about exposure to violence. In fact, over 50% of enumerators felt uncomfortable asking about partisan identity, compared to 26% who felt uncomfortable asking about violent experiences. Enumerators who feel uncomfortable asking about PID believe that fewer respondents will answer the question honestly and that the very act of asking about PID can influence the research encounter. We conclude that political scientists need to interrogate what is sensitive, where, and to whom when designing survey research. In particular, partisanship can be highly sensitive in contexts where violence and politics are intrinsically linked.
Consider the Source: Individual Variation in Reporting Electoral Violence, with Leonardo Arriola, Arsène Brice Bado, Allison Grossman, and Aila M. Matanock
Pre-Analysis Plan. The lack of locally sourced data remains an obstacle to improving knowledge about election violence around the world. Researchers continue to rely largely on secondhand forms of data, whether sourced from media reports or election monitors. But the uncertain accuracy and validity of such data sources raise critical questions about our understanding of fundamental dynamics relating to the victims and perpetrators of election violence. In this paper, we present a theoretical framework for assessing the likelihood that differently situated individuals report on violent events they have witnessed or experienced firsthand. Drawing on an original survey conducted across 289 urban and rural locations in the run-up to Côte d’Ivoire’s 2020 presidential elections, we show that there is no significant difference in reporting between citizens and leaders despite distinctions in social status. Instead, we find that key demographic factors consistently affect the likelihood of reporting: women and rural residents are less likely to report violence, while we find little systematic difference in reporting based on partisanship or ethnic identity. We show that violence reporting is correlated with exposure to other forms of conflict, namely ethnic, religious, or land-related conflict. We further show that there are few or small differences in the likelihood of reporting across forms of violence (i.e., property damage, killings, physical assaults, verbal threats). The findings presented here contribute to emerging discussions focused on improving data-collection methodologies for election violence and on potential policy interventions aimed at reducing the outbreak of such violence.
Documenting Electoral Violence: Assessing Tradeoffs in Data Collection Methodologies, with Leonardo Arriola, Arsène Brice Bado, Allison Grossman, and Aila M. Matanock
Pre-Analysis Plan. Relying on election monitors to identify violence during elections is frequently critiqued as costly, politicized, and possibly ineffective. By contrast, digital platforms that crowdsource election violence reporting directly from citizens are increasingly promoted because they ostensibly sidestep the limitations associated with traditional monitoring. So, why bother with monitoring? We argue that monitoring can still improve the detection of violence relative to citizen crowdsourcing. The training and experience of monitors enable them to detect various forms of violence, resulting in a higher reporting rate than crowdsourcing alone. To test our claims, we randomized whether the most populous areas of Côte d’Ivoire, a country long afflicted by political violence, were assigned to monitoring by civil society groups in addition to citizen crowdsourcing during the 2020 presidential elections. Comparing reports of election violence from locations with only citizen reporting to those with both citizen and monitor reporting, we find that the assignment of a monitor significantly increases the likelihood that violence is reported. Our findings indicate that a location is more likely to be associated with at least one report of violence because of monitor reporting rather than citizen reporting. We further find suggestive evidence that monitors' experience and training are correlated with a greater likelihood of issuing a report.