by Ofer Arazy, Keren Kaplan-Mintz, Dan Malkinson, Yiftach Nagar
The collective intelligence of crowds could be harnessed to address global challenges such as biodiversity loss and species extinction. For wisdom to emerge from the crowd, certain conditions must be met: importantly, the crowd should be diverse and people’s contributions should be independent of one another. Here we investigate a global citizen-science platform, iNaturalist, on which citizens report wildlife observations, collectively producing maps of species’ spatiotemporal distribution. The organization of global platforms such as iNaturalist around local projects compromises the assumptions of diversity and independence, and thus raises concerns regarding the quality of the collectively generated data. We spent four years closely immersed in a local community of citizen scientists who reported their wildlife sightings on iNaturalist. Our ethnographic study drew on questionnaires, interviews, and the analysis of archival materials. Our analysis revealed observers’ nuanced considerations as they chose where, when, and what types of species to monitor, and which observations to report. Following a thematic analysis of the data, we organized observers’ preferences and constraints into four main categories: recordability, community value, personal preferences, and convenience. We show that while some individual partialities can “cancel each other out”, others are commonly shared among members of the community, potentially biasing the aggregate database of observations. Our discussion draws attention to the ways in which such widely shared individual preferences can manifest as spatial, temporal, and, crucially, taxonomic biases in the collectively created database. We suggest avenues for continued research that would help better understand, and counteract, individual preferences, with the goal of attenuating collective bias in the data and facilitating the generation of reliable state-of-nature reports. Finally, we offer insights for the broader literature on biases in collective intelligence systems.