Today, most recommender systems employ Machine Learning to recommend posts, products, and other items, usually produced by the users. Despite the impressive progress in Deep Learning and Reinforcement Learning, we observe that recommendations made by such systems still do not correlate well with actual human preferences.
In our tutorial, we will share more than six years of our crowdsourcing experience and bridge the gap between the crowdsourcing and recommender systems communities by showing how one can incorporate a human-in-the-loop component into a recommender system to gather real human feedback on ranked recommendations. We will discuss the ranking data lifecycle and walk through it step by step. A significant portion of the tutorial is devoted to hands-on practice, in which the attendees will, under our guidance, sample and annotate recommendations using real crowds, build a ground-truth dataset, and compute evaluation scores.
All the demonstrated methodology is platform-agnostic and can be freely adapted to a variety of applications. One can gather the judgments on any data labeling platform, from in-house setups to MTurk and Toloka. A related tutorial was previously presented at NAACL-HLT '21, WWW '21, CVPR '20, SIGMOD '20, WSDM '20, and KDD '19.
We expect the attendees to understand the core concepts of recommender systems and to be able to write short scripts in Python; we do not require any prior knowledge of crowdsourcing. We will provide all the necessary definitions and icebreakers to accommodate a wider audience. We recommend that attendees bring their laptops for the hands-on practice session.
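As a small preview of the final step of the hands-on session, the sketch below computes NDCG for a single ranked list from graded relevance labels. The item labels and the 0-2 grading scale here are purely hypothetical placeholders; in the session itself, the scores are computed over judgments collected from a real crowd.

```python
import numpy as np

def dcg(relevances):
    """Discounted cumulative gain for a list of graded relevance labels."""
    relevances = np.asarray(relevances, dtype=float)
    discounts = np.log2(np.arange(2, len(relevances) + 2))  # log2(rank + 1)
    return float(np.sum((2 ** relevances - 1) / discounts))

def ndcg(ranked_relevances):
    """NDCG: DCG of the system ranking divided by DCG of the ideal ranking."""
    ideal_dcg = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal_dcg if ideal_dcg > 0 else 0.0

# Hypothetical example: relevance labels on a 0-2 scale (e.g., aggregated
# crowd votes) for the top-5 items, in the order the recommender showed them.
crowd_labels_in_system_order = [2, 0, 1, 2, 0]
print(round(ndcg(crowd_labels_in_system_order), 3))
```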
- Recommender Systems
- Crowdsourcing
- Online and Offline Evaluation
- Problem of Learning-to-Rank
- Pointwise/Pairwise/Listwise Approaches
- Evaluation Criteria
- Core Concepts in Crowdsourcing
- Quality Control
Hands-on practice, in which the attendees will, under our guidance, sample and annotate recommendations using real crowds, build a ground-truth dataset, and compute evaluation scores (a minimal aggregation sketch follows this outline):
- Problem of Answer Aggregation
- Pairwise Comparisons
- Crowd-Kit Library
- Discussion of Results
- References
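To make the answer aggregation and pairwise comparison items above concrete, here is a minimal sketch that turns noisy pairwise judgments into a single ranking with plain Bradley-Terry MM updates. It deliberately avoids any library so the idea stays visible; the Crowd-Kit library covered in the tutorial provides ready-made aggregators for the same task, and the judgments below are hypothetical.

```python
from collections import defaultdict

def bradley_terry(comparisons, iterations=100):
    """Fit Bradley-Terry strengths from (winner, loser) pairs via MM updates."""
    items = {item for pair in comparisons for item in pair}
    wins = defaultdict(int)    # total number of wins per item
    games = defaultdict(int)   # number of comparisons per unordered pair
    for winner, loser in comparisons:
        wins[winner] += 1
        games[frozenset((winner, loser))] += 1

    strengths = {item: 1.0 for item in items}
    for _ in range(iterations):
        updated = {}
        for i in items:
            denom = 0.0
            for pair, count in games.items():
                if i in pair:
                    j = next(item for item in pair if item != i)
                    denom += count / (strengths[i] + strengths[j])
            updated[i] = wins[i] / denom if denom > 0 else strengths[i]
        total = sum(updated.values())
        strengths = {item: value / total for item, value in updated.items()}
    return strengths

# Hypothetical crowd judgments: each tuple is (preferred item, rejected item).
judgments = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "B"), ("A", "B")]
strengths = bradley_terry(judgments)
ranking = sorted(strengths, key=strengths.get, reverse=True)
print(ranking)  # "A" comes first: it won every comparison it appeared in
```

The fitted strengths induce a total order over the items, which can then serve as the ground-truth ranking against which evaluation scores such as NDCG are computed.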
Retail Week Live 2023
Next-level ecommerce: A winning formula to surpass your competitors
Data Council Austin 2023
How to ensure your model does not drift? From Human-in-the-Loop concept to building fully adaptive ML models using crowdsourcing