Fighting Online Harassment with State-of-the-art Artificial Intelligence
Technical talk | English
Theatre 20: Track 3
Thursday - 12.25 to 13.05 - Technical
We spend a great deal of our lives in digital ecosystems. Much of our personal and professional interaction now happens on social networks and other digital platforms. Unfortunately, this environment is not free of dangers, especially for the most vulnerable groups.
Language is the main vehicle of communication in the digital arena, and it is such a powerful tool that it can express brilliant ideas and lovely feelings but also be used to convey the most dangerous venoms. One of these venoms is sexism and male chauvinism (machismo), which is directly associated with gender-based violence.
Effectively detecting and neutralizing malicious use of language is not a straightforward task. Firstly, the volume of messages published every second on digital platforms runs into the millions, far beyond what human supervisors can review. Secondly, the complexity of grammatical structures, morphosyntactic variation, slang, semantic context and jargon renders traditional Natural Language Processing (NLP) techniques largely ineffective.
For these reasons we need powerful state-of-the-art Artificial Intelligence (AI) solutions able to automatically detect toxic language that may appear in many different forms, with literally billions of variations embedded in the nuances and subtleties of the extraordinarily rich human language.
In this work we present an Artificial Intelligence system based on Deep Learning approaches which makes use of recurrent networks and Google Research’s recently proposed BERT (Bidirectional Encoder Representations from Transformers), a Transformer model built on attention mechanisms, in order to develop a neural model able to read between the lines and perceive, as a human would, the presence of the following semantic content:
– Macho Language (male chauvinism, machismo, gender-based violence).
– Bully Language (threats, insults, violence, hostility and bullying).
– Obscene Language (sexually degrading, nastiness, obscenity).
– Hate Language (hatred, extremism, identity hate).
– Sour Language (resentment, antipathy and sourness).
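Since a single message can exhibit several of these categories at once, the detection problem is naturally multi-label: one independent sigmoid score per category rather than a single exclusive class. The BERT-based model itself is not reproduced here; the following is a minimal sketch of such a multi-label classification head, where the toy feature vector stands in for the sentence embedding an encoder would produce, and all weights, names, and thresholds are illustrative assumptions rather than the authors' actual parameters:

```python
import math

# The five categories from the talk abstract.
CATEGORIES = ["macho", "bully", "obscene", "hate", "sour"]

def sigmoid(x):
    # One sigmoid per category maps each raw score to an independent
    # probability in (0, 1), so labels are non-exclusive.
    return 1.0 / (1.0 + math.exp(-x))

def classify(features, weights, biases, threshold=0.5):
    """Multi-label head: a linear score plus sigmoid per category.

    `features` stands in for the sentence embedding a BERT-style
    encoder would produce; `weights` and `biases` are toy parameters.
    """
    scores = {}
    for cat in CATEGORIES:
        z = sum(w * f for w, f in zip(weights[cat], features)) + biases[cat]
        scores[cat] = sigmoid(z)
    # A message can trigger several categories at the same time.
    flagged = [c for c, p in scores.items() if p >= threshold]
    return scores, flagged

# Toy parameters over a 3-dimensional "embedding", for demonstration only.
weights = {c: [0.5, -0.2, 0.1] for c in CATEGORIES}
biases = {c: -0.1 for c in CATEGORIES}
scores, flagged = classify([1.0, 0.0, 2.0], weights, biases)
```

In a real system the linear head would sit on top of the encoder's pooled output and be trained jointly with it; the sketch only shows why the five categories are scored independently.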
Although the solution presented here is being developed to fight all these forms of malicious language, our first focus is on fighting gender-based violence on digital platforms, especially in the education system. Preventing gender-based violence in adults necessarily involves active intervention with children and adolescents.
Our main aim has been to develop a highly accurate automatic detector of male chauvinism for the Spanish language: specifically, a human-level macho language detector that can be deployed as a bot on different social networks and digital platforms. As in any other machine learning project, the availability of a training set is a key factor, and we did not have a tagged Spanish macho language dataset to start from. In this work we also describe how we used transfer learning techniques and crowdsourcing to build a valuable dataset.
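One practical step in building a labeled dataset from crowdsourced annotations is aggregating several annotators' votes per message into a single trusted label. The sketch below shows one common aggregation scheme, majority vote with minimum-support and minimum-agreement filters; the function name, thresholds, and data shapes are assumptions for illustration, not the authors' actual pipeline:

```python
from collections import Counter

def aggregate_labels(annotations, min_votes=3, min_agreement=0.66):
    """Collapse crowdsourced votes into one label per message.

    `annotations` maps a message id to a list of binary votes
    (1 = macho language, 0 = not). Messages with too few votes or
    too little agreement are discarded rather than mislabeled.
    """
    dataset = {}
    for msg_id, votes in annotations.items():
        if len(votes) < min_votes:
            continue  # not enough annotators to trust any label
        counts = Counter(votes)
        label, n = counts.most_common(1)[0]
        if n / len(votes) >= min_agreement:
            dataset[msg_id] = label
    return dataset

# Hypothetical votes from three crowdsourcing rounds.
annotations = {
    "m1": [1, 1, 1, 0],  # strong agreement -> kept with label 1
    "m2": [1, 0],        # too few votes -> dropped
    "m3": [1, 0, 1, 0],  # 50/50 split -> dropped
}
labels = aggregate_labels(annotations)
```

Discarding low-agreement messages trades dataset size for label quality, which matters when the resulting set is used to fine-tune a pretrained model via transfer learning.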
Over the past months we have run a crowdsourced science campaign in collaboration with different public and private organizations as well as schools, inviting the community to help us train EqualBot, an AI intended to help reduce gender-based violence by monitoring digital platforms and alerting parents and educators about possible risks and levels of machismo.
As future work, we are interested in building generative models that, in addition to detecting macho language, can offer teenagers non-violent alternatives for expressing their feelings in a healthy way, thus promoting emotional intelligence in the education system.