There are two ways to accept tasks completed by performers in Toloka: automatic acceptance (the response is accepted immediately after the performer submits it) and non-automatic acceptance (the response is accepted after the requester reviews it). Let's look at how these methods differ, which types of tasks they work best on, and how to review tasks with non-automatic acceptance quickly and efficiently.
Automatic acceptance works well for simple tasks with overlap, where you can automatically check the correctness of responses using control tasks and other quality control rules. You can use it in tasks for classifying images, videos, and texts, moderating content, or evaluating search relevance. As soon as the performer submits a response, the task is considered accepted and payment is credited to the performer's account. You can't get this money back, so it's important to monitor performers directly in the interface and set up quality control carefully.
You can check the following performer actions in the interface:
If you check responses in the interface, performers will be more likely to follow the instructions and less likely to make errors, so you'll get higher-quality responses. Using quality control rules, you can filter out performers who submit incorrect responses right while they're labeling data. You can ban these performers from the project and ignore their responses when downloading the results from the pool, or use result aggregation.
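For example, here's a minimal sketch of such a rule using the toloka-kit Python library (the token, pool ID, and thresholds are placeholders you'd adjust for your project). It suspends performers from the project for 10 days if their accuracy on control tasks drops below 70%:

```python
import toloka.client as toloka
from toloka.client import collectors, conditions, actions

client = toloka.TolokaClient('YOUR_OAUTH_TOKEN', 'PRODUCTION')
pool = client.get_pool('YOUR_POOL_ID')  # placeholder pool ID

# Track accuracy on the last 10 control tasks and ban performers
# from the project if their accuracy falls below 70%.
pool.quality_control.add_action(
    collector=collectors.GoldenSet(history_size=10),
    conditions=[conditions.GoldenSetCorrectAnswersRate < 70],
    action=actions.RestrictionV2(
        scope='PROJECT',
        duration=10,
        duration_unit='DAYS',
        private_comment='Low accuracy on control tasks',
    ),
)
client.update_pool(pool.id, pool)
```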
Automatic acceptance doesn't work for tasks where performers need to:
This is because control tasks and majority vote check for exactly matching responses. You can't automatically check the content of files or values like point coordinates in an image, because there can be a wide variety of correct responses. These types of tasks require non-automatic acceptance (manual review).
Non-automatic acceptance is used when responses can't be checked automatically. This is usually the case with field tasks, creative tasks, and tasks for creating content (video, audio, or photos). You review the performer's responses and decide whether to accept them. The performer is paid for the tasks you accept and isn't paid for those you reject.
You don't have to manually review all the tasks yourself. If a project involves reviewing thousands of tasks, you can simplify and automate the process. We'll walk you through the steps.
If there aren't too many tasks and this is a one-time labeling project, the easiest way is to review them manually:
Reviewing responses in a TSV file works well if you need to review a lot of responses, filter the results first, or process them programmatically. Use it to filter out meaningless comments, calculate the majority vote, or ignore responses submitted by cheating performers.
If each task page (suite) contains multiple tasks, they all share the same ASSIGNMENT:assignment_id, the unique ID of the page of responses. This means the verdict applies to the entire task suite, so you first need to decide whether to accept it. Mark each task on the page with a plus or minus, count the totals, and then leave only one line per assignment_id in the file. If the task suite contains more correct responses than incorrect ones, accept it; otherwise, reject it.
Make sure to remove the extra lines. If you upload a file with duplicate assignment_ids, the system will randomly select one of those lines and apply its verdict to the entire task suite, either accepting or rejecting it. As a result, some task suites may end up with a status different from the one you expected.
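Here's a minimal pandas sketch of that aggregation step. It assumes you've added a helper column (task_mark, a name made up for this example) with your per-task plus or minus marks, and writes the ACCEPT:verdict and ACCEPT:comment columns used for file-based review:

```python
import pandas as pd

df = pd.read_csv("assignments.tsv", sep="\t")  # results file downloaded from the pool

# 'task_mark' is a hypothetical helper column holding '+' or '-' per task
def suite_verdict(marks: pd.Series) -> str:
    # accept the suite when correct responses outnumber incorrect ones
    return "+" if (marks == "+").sum() > (marks == "-").sum() else "-"

verdicts = (
    df.groupby("ASSIGNMENT:assignment_id")["task_mark"]
      .apply(suite_verdict)
      .rename("ACCEPT:verdict")
      .reset_index()
)

# add a comment for rejected suites so performers know what went wrong
verdicts["ACCEPT:comment"] = ""
verdicts.loc[verdicts["ACCEPT:verdict"] == "-", "ACCEPT:comment"] = (
    "Most responses in this task suite are incorrect"
)

# exactly one line per assignment_id, avoiding the duplicate-verdict pitfall above
verdicts.to_csv("verdicts.tsv", sep="\t", index=False)
```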
If performers attach audio, video, or image files to their responses, it's not convenient to review them using a TSV file: you'd have to download the files, view them, and then enter the verdict in the appropriate line. For tasks like that, it's better to review responses in the interface or have other Tolokers review them (method 3).
Reviewing responses with the help of other Tolokers is worth using if your projects are large, with a continuous data labeling pipeline. You can automate this method via the API and free up time to focus on higher-level tasks.
To have responses reviewed by Tolokers, create a separate project where you'll show other performers the responses received and ask them to evaluate if the task was completed correctly. For example:
Project 1: the main task, where performers submit their responses (for example, photos, texts, or audio recordings).
Project 2: the review task, where other performers check whether those responses meet the requirements.
The process may differ depending on the type of task, but the general approach is the same: pass each submitted response as an input to the review project, collect the reviewers' verdicts, and then accept or reject the original assignments based on those verdicts.
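As a rough illustration, here's what the first step (passing submitted responses into the review project) might look like with the toloka-kit Python library. The pool IDs and the input field names are assumptions for this sketch and depend on your project specs:

```python
import toloka.client as toloka

client = toloka.TolokaClient('YOUR_OAUTH_TOKEN', 'PRODUCTION')

MAIN_POOL_ID = 'YOUR_MAIN_POOL_ID'      # pool of Project 1 (placeholder)
REVIEW_POOL_ID = 'YOUR_REVIEW_POOL_ID'  # pool of Project 2 (placeholder)

review_tasks = []
for assignment in client.get_assignments(pool_id=MAIN_POOL_ID, status='SUBMITTED'):
    for task, solution in zip(assignment.tasks, assignment.solutions):
        review_tasks.append(
            toloka.Task(
                pool_id=REVIEW_POOL_ID,
                input_values={
                    # show reviewers the original question and the response;
                    # these field names are hypothetical
                    'question': task.input_values['question'],
                    'response': solution.output_values['response'],
                    # keep the assignment ID so you can accept or reject it later
                    'assignment_id': assignment.id,
                },
            )
        )

client.create_tasks(review_tasks, allow_defaults=True)
```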
You can use the API to perform non-automatic acceptance. To accept or reject submitted responses, change the status of the task suite using a PATCH request to the resource /assignments/&lt;task suite assignment ID&gt;:
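For example, using Python's requests library (the token and assignment ID are placeholders; use sandbox.toloka.dev instead of toloka.dev if you work in the sandbox):

```python
import requests

TOLOKA_API = "https://toloka.dev/api/v1"
HEADERS = {"Authorization": "OAuth YOUR_OAUTH_TOKEN"}

assignment_id = "YOUR_ASSIGNMENT_ID"

# Accept the task suite; to reject it, send status "REJECTED" instead
# (a public_comment explaining the reason is required when rejecting).
response = requests.patch(
    f"{TOLOKA_API}/assignments/{assignment_id}",
    headers=HEADERS,
    json={"status": "ACCEPTED", "public_comment": "Well done!"},
)
response.raise_for_status()
print(response.json()["status"])  # "ACCEPTED"
```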
Note: You can't accept or reject all tasks by pool ID. You need to specify the assignment ID.
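So to review everything submitted in a pool, first list its assignments and then patch each one. Here's a sketch continuing the example above (the pool ID is a placeholder, and production code should also page through the results using the has_more flag in the response):

```python
# There is no bulk "accept the whole pool" endpoint, so list the
# submitted assignments in the pool and patch them one by one.
params = {"pool_id": "YOUR_POOL_ID", "status": "SUBMITTED", "limit": 100}
page = requests.get(f"{TOLOKA_API}/assignments", headers=HEADERS, params=params).json()

for item in page["items"]:
    requests.patch(
        f"{TOLOKA_API}/assignments/{item['id']}",
        headers=HEADERS,
        json={"status": "ACCEPTED", "public_comment": "Well done!"},
    )
```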
You can also combine approaches: some completed tasks are accepted automatically based on a quality control rule, and the rest are reviewed using one of the methods listed above. If you check performers using control tasks, add a pool rule that accepts all responses from performers who do the task well, for example with 80% or more correct responses to control tasks. You decide what level of performer quality you trust. Likewise, you can set up the "Majority vote" rule to accept all responses that match the responses of other performers.
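Here's a sketch of such a rule in toloka-kit, assuming your pool contains control tasks; the token and pool ID are placeholders, and the 80% threshold mirrors the example above:

```python
import toloka.client as toloka
from toloka.client import collectors, conditions, actions

client = toloka.TolokaClient('YOUR_OAUTH_TOKEN', 'PRODUCTION')
pool = client.get_pool('YOUR_POOL_ID')

# Automatically accept all responses from performers whose accuracy
# on control tasks is 80% or higher.
pool.quality_control.add_action(
    collector=collectors.GoldenSet(history_size=10),
    conditions=[
        conditions.GoldenSetAnswersCount >= 3,  # wait for a few control answers first
        conditions.GoldenSetCorrectAnswersRate >= 80,
    ],
    action=actions.ApproveAllAssignments(),
)
client.update_pool(pool.id, pool)
```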
Toloka gives you fast and convenient ways to review responses for any type of task. If you still have questions, contact us and we'll help you choose the right settings for your project.