In this tutorial, we present a portion of our unique industry experience in efficient data labeling via crowdsourcing, shared by leading researchers and engineers from Toloka. Most ML projects require training data, and often this data can only be obtained through human labeling. As new applications of AI emerge, there is ever-growing demand for human-labeled data collected through nontrivial tasks. Large-scale data production requires a technological pipeline that can successfully manage quality control and smart distribution of tasks among performers.
We introduce you to data labeling via public crowdsourcing marketplaces and present the key techniques for efficiently collecting labeled data. This is followed by a practice session, where participants choose one real label collection task, experiment with selecting settings for the labeling process, and launch their own labeling project on Toloka, one of the world's largest crowdsourcing marketplaces. During the tutorial, all projects are run on the real Toloka crowd. Participants also receive feedback and practical advice on making their projects more efficient. We invite beginners, advanced specialists, and researchers to learn how to collect high-quality labeled data, and do so efficiently.
Part 0: Introduction
— The concept of crowdsourcing
— Crowdsourcing task examples
— Crowdsourcing platforms
— Yandex crowdsourcing experience
Part I: Main components of data collection via crowdsourcing
— Decomposition for an effective pipeline
— Task instruction & interface: best practices
— Quality control techniques
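One of the quality control techniques covered is checking performers against a golden set of tasks with known answers. The following is a minimal illustrative sketch, not Toloka's API; all task names, labels, and the accuracy threshold are hypothetical:

```python
# Hypothetical golden-set quality control: compare each performer's answers
# on tasks with known labels, and keep only performers whose accuracy on
# those tasks meets a threshold.

GOLDEN = {"task1": "cat", "task2": "dog", "task3": "cat"}  # known answers

answers = {  # performer -> {task: label}, hypothetical responses
    "alice": {"task1": "cat", "task2": "dog", "task3": "cat"},
    "bob":   {"task1": "dog", "task2": "dog", "task3": "dog"},
}

def golden_accuracy(performer_answers, golden):
    """Share of golden tasks the performer answered correctly."""
    checked = [t for t in performer_answers if t in golden]
    if not checked:
        return None  # the performer has not seen any golden tasks yet
    correct = sum(performer_answers[t] == golden[t] for t in checked)
    return correct / len(checked)

THRESHOLD = 0.7  # illustrative cutoff
trusted = set()
for performer, performer_answers in answers.items():
    acc = golden_accuracy(performer_answers, GOLDEN)
    if acc is not None and acc >= THRESHOLD:
        trusted.add(performer)

print(trusted)  # alice passes (3/3 correct), bob fails (1/3 correct)
```

In production, such a rule would typically run continuously as new answers arrive, so that low-accuracy performers stop receiving tasks early.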
Part II: Label collection projects (practical session)
— Dataset and required labels
— Discussion: how to collect labels?
— Data labeling pipeline for implementation
Part III: Introduction to Toloka for requesters
— Main types of instances
— Project: creation & configuration
— Pool: creation & configuration
— Tasks: uploading & golden set creation
— In-flight statistics & downloading results
Coffee Break
Part IV: Setting up and running label collection projects (practical session)
— You create, configure, and run data labeling projects on real performers in real time
Part V: Theory on efficient aggregation, incremental relabeling, and pricing
— Aggregation models
— Incremental relabeling to save money
— Performance-based pricing
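The simplest aggregation model discussed in this part is majority voting over overlapping answers. A minimal sketch with hypothetical data (the tutorial also covers more advanced, weighted models):

```python
from collections import Counter

# Hypothetical overlapping answers: each task was labeled by several performers.
votes = {  # task -> list of labels from different performers
    "img1": ["cat", "cat", "dog"],
    "img2": ["dog", "dog", "dog"],
}

def majority_vote(labels):
    """Return the most frequent label (ties broken arbitrarily)."""
    return Counter(labels).most_common(1)[0][0]

aggregated = {task: majority_vote(labels) for task, labels in votes.items()}
print(aggregated)  # {'img1': 'cat', 'img2': 'dog'}
```

Incremental relabeling builds on the same idea: instead of collecting a fixed overlap for every task, answers are gathered one at a time and collection stops as soon as the aggregate is confident enough, which saves money on easy tasks.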
Part VI: Discussion of results from the projects and conclusions
— Results of your projects
— Extensions to work on after the tutorial
— References to literature and other tutorials