by Toloka Team
Large language models (LLMs) are changing the way people and companies do work — and data annotation is no exception. Text classification is a prime opportunity to benefit from LLMs.
Toloka applies commercial and open-source models (ChatGPT, GPT-4, LLaMA, and others) directly via prompt engineering, or fine-tunes them for your specific task. Our expertise helps teams reach their goals faster with more efficient data annotation.
How we use LLMs
We integrate LLMs into data annotation pipelines on multiple levels:
LLM annotation with human evaluation: The LLM automates all data annotation and our expert crowd evaluates the results for quality assurance.
LLM annotation alongside humans: The LLM handles part of the data and our expert annotators handle the rest to balance speed and quality.
LLM support for humans: The LLM speeds up human data annotation by providing suggestions for our global crowd of annotators.
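The "LLM annotation alongside humans" level above can be sketched as a confidence-based router: the LLM labels each item, and anything below a confidence threshold is sent to human annotators instead. This is a minimal illustrative sketch, not Toloka's actual pipeline; the function names, the threshold value, and the stand-in model are all assumptions.

```python
def route_items(items, llm_label, threshold=0.9):
    """Split items into an auto-labeled queue and a human-review queue.

    llm_label is any callable returning (label, confidence) for an item.
    The 0.9 threshold is an illustrative default, not a recommendation.
    """
    auto_labeled, needs_human = [], []
    for item in items:
        label, confidence = llm_label(item)
        if confidence >= threshold:
            auto_labeled.append((item, label))
        else:
            needs_human.append(item)
    return auto_labeled, needs_human


def fake_llm(text):
    """Stand-in for a real LLM call: a keyword rule with a made-up score."""
    if "refund" in text.lower():
        return "billing", 0.95
    return "other", 0.60


auto, manual = route_items(["Refund request", "Where is my order?"], fake_llm)
# "Refund request" is confidently auto-labeled; the ambiguous item
# goes to human annotators, balancing speed and quality.
```

In a real deployment, `fake_llm` would be replaced by an actual model call, and the human-review queue would feed into an annotation platform.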
Examples of successful cases
Text classification: for unambiguous classes, get labels with equal or higher quality at less than 10% of the cost of traditional data labeling.
Semantic similarity: detect similar product descriptions for e-commerce and search engines with the same quality at marginally lower cost and higher throughput.
Semantic search: evaluate product search relevance with the same quality at marginally lower cost and higher throughput.
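For the text classification case above, the prompt-engineering approach can be as simple as a zero-shot instruction asking the model to pick one label. The label set and prompt wording below are illustrative assumptions, not a specific Toloka prompt.

```python
LABELS = ["positive", "negative", "neutral"]


def build_prompt(text, labels=LABELS):
    """Build a zero-shot classification prompt to send to an LLM of your choice."""
    label_list = ", ".join(labels)
    return (
        f"Classify the following text into exactly one of these "
        f"categories: {label_list}.\n"
        f"Answer with the category name only.\n\n"
        f"Text: {text}"
    )


prompt = build_prompt("The delivery was fast and the packaging was great.")
```

The returned string would then be sent to a model such as GPT-4, and the single-word answer used as the label, optionally checked by human evaluators as described earlier.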
Ask our experts how to use LLMs in your data pipeline. We can help you optimize the speed and cost of data labeling while achieving the best data quality for your project.
Article written by: Toloka Team
Updated:
Aug 2, 2023