Human decision-making systems are the backbone of many applications: crowdsourcing, peer review, hiring, and employee performance evaluation. However, if not organized properly, such systems can produce unexpected outcomes that degrade the overall quality of the process. This talk presents principled approaches to the design and evaluation of large-scale human decision-making systems, focusing on three key questions:
How do we compensate for the mistakes of individual agents involved in the system?
Primary application: crowdsourcing.
How do we ensure that agents behave honestly and do not engage in strategic manipulations?
Primary applications: peer grading, employee performance evaluation.
How do we evaluate the impact of various biases (subtle cognitive biases, race/gender biases) on the decisions made?
Primary application: peer review.
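To make the first question concrete: a standard way to compensate for individual mistakes in crowdsourcing is to aggregate redundant labels, for example by majority vote. The sketch below is purely illustrative and not taken from the talk; the worker accuracies, task counts, and function names are all hypothetical assumptions.

```python
import random
from collections import Counter

random.seed(0)

def simulate_votes(truth, accuracies):
    # Each worker reports the true binary label with probability
    # equal to their (hypothetical) individual accuracy.
    return [truth if random.random() < acc else 1 - truth
            for acc in accuracies]

def majority_vote(votes):
    # Aggregate binary votes by simple majority.
    return Counter(votes).most_common(1)[0][0]

# Illustrative setup: 7 workers of varying reliability label 1000 binary tasks.
accuracies = [0.9, 0.8, 0.75, 0.7, 0.65, 0.6, 0.55]
tasks = [random.randint(0, 1) for _ in range(1000)]

correct = sum(majority_vote(simulate_votes(t, accuracies)) == t
              for t in tasks)
print(f"majority-vote accuracy: {correct / len(tasks):.3f}")
```

Even this naive aggregation typically beats most individual workers; weighting votes by estimated worker accuracy (as in Dawid-Skene-style models) improves on it further.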