Many citizen science projects have a crowdsourcing component in which several different citizen scientists are asked to complete a micro-task (such as tagging an image as relevant or irrelevant for assessing damage after a natural disaster, or classifying a specimen into its taxon). How do we build a consensus from their different opinions/votes? Currently, simple majority voting is used most of the time. We argue that alternative voting schemes, which take into account the errors made by each annotator, could substantially reduce the number of citizen scientists required. This is a clear example of continuous human-in-the-loop machine learning, with the machine building a model of the humans it has to interact with.
We propose to study consensus building under two different hypotheses: truthful annotators (as a model for most voluntary citizen science projects) and self-interested annotators (as a model for paid crowdsourcing projects).
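To illustrate what an error-aware consensus model looks like in contrast to majority voting, the sketch below compares simple majority voting with a small EM procedure in the spirit of Dawid and Skene, in which a confusion matrix is estimated for each annotator (matching the truthful-annotator hypothesis) and votes are weighted accordingly. This is a minimal illustrative sketch, not the crowdnalysis API; the helper names majority_vote and dawid_skene and all parameters are assumptions made for this example.

```python
import numpy as np

def majority_vote(labels):
    """labels: (n_tasks, n_annotators) int array with values in {0..K-1}."""
    n_classes = labels.max() + 1
    counts = np.apply_along_axis(np.bincount, 1, labels, minlength=n_classes)
    return counts.argmax(axis=1)

def dawid_skene(labels, n_classes, n_iter=50):
    """Tiny EM for a Dawid-Skene-style model; returns the most probable true label per task."""
    n_tasks, n_annotators = labels.shape
    # Initialise the posterior over true labels from raw vote counts
    q = np.apply_along_axis(np.bincount, 1, labels, minlength=n_classes).astype(float)
    q /= q.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # M-step: class prior and one confusion matrix per annotator
        prior = q.mean(axis=0)
        conf = np.full((n_annotators, n_classes, n_classes), 1e-6)
        for a in range(n_annotators):
            for k in range(n_classes):
                conf[a, :, k] += q[labels[:, a] == k].sum(axis=0)
        conf /= conf.sum(axis=2, keepdims=True)
        # E-step: recompute the posterior over each task's true label
        log_q = np.tile(np.log(prior + 1e-12), (n_tasks, 1))
        for a in range(n_annotators):
            log_q += np.log(conf[a][:, labels[:, a]].T)
        q = np.exp(log_q - log_q.max(axis=1, keepdims=True))
        q /= q.sum(axis=1, keepdims=True)
    return q.argmax(axis=1)

# Tiny usage example with made-up annotations for 4 tasks and 3 annotators
labels = np.array([[1, 1, 0],
                   [0, 0, 0],
                   [1, 0, 0],
                   [1, 1, 1]])
print(majority_vote(labels))             # e.g. [1 0 0 1]
print(dawid_skene(labels, n_classes=2))  # error-aware consensus
```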

Output

Software and documentation for the two new consensus models, integrated into the crowdnalysis framework.

A case study applying the new consensus models in a citizen science project.

An algorithm for numerical simulations to evaluate the efficacy of the consensus models considered in crowdnalysis (a minimal simulation sketch follows this list).

A report on the simulation results, with suggestions for improving the consensus models.
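As a sketch of the kind of numerical simulation intended above (hypothetical code, not part of crowdnalysis; the error rates, task counts, and random seed are made up for illustration), the snippet below generates synthetic binary annotations from annotators with heterogeneous error rates and compares plain majority voting with a vote weighted by each annotator's known error rate, showing how many annotators each scheme needs to reach a given accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tasks, max_annotators = 2000, 15
truth = rng.integers(0, 2, size=n_tasks)
error_rates = rng.uniform(0.05, 0.45, size=max_annotators)   # per-annotator error probability
flips = rng.random((n_tasks, max_annotators)) < error_rates   # which individual votes are wrong
votes = np.where(flips, 1 - truth[:, None], truth[:, None])   # observed annotations

for n in range(1, max_annotators + 1):
    v = votes[:, :n]
    # Simple majority vote (ties broken towards class 1)
    maj = (v.mean(axis=1) >= 0.5).astype(int)
    # Error-aware vote: weight each annotator by the log-odds of being correct
    w = np.log((1 - error_rates[:n]) / error_rates[:n])
    score = (np.where(v == 1, 1, -1) * w).sum(axis=1)
    weighted = (score >= 0).astype(int)
    acc_maj = (maj == truth).mean()
    acc_w = (weighted == truth).mean()
    # The error-aware scheme typically reaches a target accuracy with fewer annotators
    print(f"{n:2d} annotators  majority={acc_maj:.3f}  error-aware={acc_w:.3f}")
```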

Project Partners:

  • IIIA-CSIC, Jesus Cerquides
  • CSIC, Jesus Cerquides
  • CNR, Daniele Vilone

Primary Contact: Jesus Cerquides, IIIA-CSIC