Detecting inappropriate messages

In this talk, we describe a methodology for collecting and labelling a dataset for appropriateness, present two collected datasets, and release pre-trained classification models.


Overview

Not all topics are equally “flammable” in terms of toxicity: a calm discussion of turtles or fishing fuels inappropriate, toxic dialogue far less often than a discussion of politics or sexual minorities. We define a set of sensitive topics that can yield inappropriate and toxic messages and describe the methodology of collecting and labelling a dataset for appropriateness. While toxicity in user-generated data is well studied, we aim to define a more fine-grained notion of inappropriateness. The core of inappropriateness is that it can harm the reputation of a speaker. This differs from toxicity in two respects: (i) inappropriateness is topic-related, and (ii) an inappropriate message may not be toxic but is still unacceptable. We collect and release two datasets in Russian: a topic-labelled dataset and an appropriateness-labelled dataset. We also release pre-trained classification models trained on this data. The talk is based on the recent publication at the BSNLP workshop at the EACL-2021 conference.
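As a minimal sketch of how the released classifiers could be applied, the snippet below loads a sequence-classification checkpoint with Hugging Face Transformers and scores a message for inappropriateness. The model identifier and the label index are placeholders, not the authors' actual release names; substitute the checkpoint and label mapping published with the talk.

```python
# Hedged sketch: scoring a message with an inappropriateness classifier.
# MODEL_NAME is a hypothetical placeholder, not the authors' released checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "your-org/russian-inappropriate-messages"  # placeholder identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def inappropriateness_score(text: str) -> float:
    """Return the estimated probability that a message is inappropriate."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Assumption: label index 1 corresponds to the "inappropriate" class.
    return torch.softmax(logits, dim=-1)[0, 1].item()

print(inappropriateness_score("Пример сообщения для проверки"))
```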

Speaker

Nikolay Babakov
Research Engineer, Skoltech

Recording

