AI development today rests on three pillars: algorithms, hardware, and data. Ironically, the further AI moves into new application areas, the more it depends on human effort: increasingly, the data for training and validating AI models can only be produced by humans.
AI solutions require training and validation data that is not only high-quality and scalable enough to meet growing industry needs, but also flexible enough to support a wide variety of use cases and data collection scenarios.
Toloka's mission is to create an environment for AI data production that is fully aligned with these industry needs: quality, scalability, and flexibility.
As a result, Toloka is a multifaceted solution with:
- a global pool of 9 million Tolokers, of whom around 200,000 are active on the platform every month
- multiple methods and mechanisms for advanced automated quality control at scale, available to any platform through the Crowd-Kit library for Python
- instruments for integrating the crowd into the ML production process using the Toloka-Kit library for Python
- academic research and education initiatives in the field of Crowd Science for ML specialists
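To make the quality-control idea concrete, here is a minimal pure-Python sketch of majority voting, the simplest of the label-aggregation methods that libraries like Crowd-Kit implement. The function name and data layout are illustrative assumptions for this sketch, not Crowd-Kit's actual API.

```python
from collections import Counter, defaultdict

def majority_vote(annotations):
    """Aggregate crowd labels per task by majority vote.

    `annotations` is an iterable of (task, worker, label) triples,
    mirroring the task/worker/label table that aggregation methods
    typically consume; ties are broken by first-seen label.
    """
    votes = defaultdict(Counter)
    for task, _worker, label in annotations:
        votes[task][label] += 1
    return {task: counter.most_common(1)[0][0]
            for task, counter in votes.items()}

# Three workers label two image-classification tasks;
# conflicting answers are resolved by the majority.
raw = [
    ("img1", "w1", "cat"), ("img1", "w2", "cat"), ("img1", "w3", "dog"),
    ("img2", "w1", "dog"), ("img2", "w2", "dog"), ("img2", "w3", "dog"),
]
print(majority_vote(raw))  # {'img1': 'cat', 'img2': 'dog'}
```

In production, majority voting is usually replaced by worker-skill-aware models (e.g. Dawid-Skene-style aggregation), which is exactly the kind of machinery such libraries package for use at scale.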
The Toloka workshop aims to cover these aspects and provide a comprehensive picture of how crowdsourcing can be applied to real-life AI production.