Symbolic knowledge representations for time-sensitive offensive language detection

Language models learned from data have become prevalent in AI systems, but they can exhibit undesired behaviour that poses risks to society, such as offensive language. The task of automatically detecting offensive language has attracted significant attention in Natural Language Processing (NLP) due to its high social impact. Policy makers and online platforms can leverage computational methods of offensive language detection to counter online abuse at scale. State-of-the-art methods for automatic offensive language detection typically rely on ensembles of transformer-based language models such as BERT, trained on large-scale annotated datasets.
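As a minimal sketch of how such detectors are typically applied (not this project's method), the snippet below scores texts with a publicly available transformer classifier using the Hugging Face transformers library; the model name is illustrative, and any classifier fine-tuned on OLID-style labels could be substituted.

```python
from transformers import pipeline

# Illustrative model name; assumed to be available on the Hugging Face Hub.
classifier = pipeline("text-classification",
                      model="cardiffnlp/twitter-roberta-base-offensive")

examples = ["Have a great day!", "You are such an idiot."]
for text, prediction in zip(examples, classifier(examples)):
    print(f"{text!r} -> {prediction['label']} ({prediction['score']:.2f})")
```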

Detecting offensive language is further complicated by the fact that the meaning of words changes over time: conventional, neutral language can evolve into offensive language over short time scales, following rapid changes in social dynamics or political events. The word karen, for example, evolved in 2020 from a neutrally connoted personal name into a “pejorative term for a white woman perceived as entitled or demanding beyond the scope of what is normal”. Adapting to the way the meaning of language changes is a key characteristic of intelligent behaviour, yet current AI systems developed to process language computationally are not equipped to react to such changes: the artificial neural networks they are built on do not capture the full semantic range of words, which only becomes available if we access additional knowledge (e.g. author, genre, origin, register) that is typically contained in external, symbolic, and linguistic world knowledge bases.
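To make the underlying idea concrete, the following minimal sketch (under simplifying assumptions, not the project's eventual method) estimates how much a word such as karen has shifted between two time periods: one embedding space is trained per period with gensim, the spaces are aligned with orthogonal Procrustes, and the word's vectors are compared by cosine distance.

```python
import numpy as np
from gensim.models import Word2Vec
from scipy.linalg import orthogonal_procrustes

def train_period_embeddings(sentences):
    # Toy settings for illustration; real diachronic corpora need far more data.
    return Word2Vec(sentences, vector_size=50, min_count=1, seed=0).wv

def semantic_shift(wv_old, wv_new, word):
    # Align the two spaces on their shared vocabulary, then compare the target word.
    shared = [w for w in wv_old.index_to_key if w in wv_new.key_to_index]
    rotation, _ = orthogonal_procrustes(
        np.stack([wv_old[w] for w in shared]),
        np.stack([wv_new[w] for w in shared]),
    )
    v_old, v_new = wv_old[word] @ rotation, wv_new[word]
    cosine = v_old @ v_new / (np.linalg.norm(v_old) * np.linalg.norm(v_new))
    return 1.0 - cosine  # larger value = stronger semantic shift

# Usage: semantic_shift(train_period_embeddings(corpus_2015),
#                       train_period_embeddings(corpus_2020), "karen")
# where corpus_2015 / corpus_2020 are lists of tokenised sentences per period.
```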

This project aims to develop new, time-sensitive computational methods for offensive language detection that combine distributional information from large textual datasets with symbolic knowledge representations. Specifically, the project will build representations of word meaning from textual data and from external knowledge bases containing relevant linguistic and world knowledge, such as lexicons, thesauri, semantic networks, knowledge graphs (e.g. Wikidata), and ontologies. It will embed this knowledge into distributional word vectors derived from time-sensitive text data (diachronic corpora) and explore various approaches for combining these representations. To achieve these goals, we envisage the following specific tasks:

  • The project starts with a classic knowledge engineering task, in which we will use description logics to formalise the dynamic semantics of offensive language as an ontology encoding the social and cultural phenomena that turn neutral words into offensive ones. To do so, we will draw on existing examples and well-understood use cases, following a partially inductive approach (see the ontology sketch after this list).
  • Using this ontology, we will annotate existing datasets and use reasoning to derive non-trivial inferences about the concepts motivating the formation of offensive language. From these derivations, we will project both asserted and inferred knowledge into geometric space as knowledge graph embeddings (see the embedding sketch after this list).
  • We will combine the embeddings derived from the reasoning process with embeddings trained on new textual datasets. We will explore and evaluate various techniques for this combination, including verbalisation/lexicalisation, joint specialisation methods, post-processing retrofitting models, post-specialisation approaches, and ways to inject external knowledge into pre-trained representations such as ELMo and BERT (a retrofitting sketch follows this list).
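To illustrate the first task, the sketch below drafts a toy fragment of such an ontology with the owlready2 library. All class and property names are hypothetical placeholders, not the ontology this project will produce, and the reasoning step assumes a local Java installation for the HermiT reasoner.

```python
from owlready2 import Thing, ObjectProperty, get_ontology, sync_reasoner

onto = get_ontology("http://example.org/offensive-language.owl")  # hypothetical IRI

with onto:
    class Term(Thing): pass
    class SocialPhenomenon(Thing): pass

    class pejorativelyAssociatedWith(ObjectProperty):
        domain = [Term]
        range = [SocialPhenomenon]

    # A term pejoratively associated with some social phenomenon is, by definition, offensive.
    class OffensiveTerm(Term):
        equivalent_to = [Term & pejorativelyAssociatedWith.some(SocialPhenomenon)]

    karen = Term("karen")
    meme_2020 = SocialPhenomenon("entitlement_meme_2020")
    karen.pejorativelyAssociatedWith = [meme_2020]

sync_reasoner()                    # runs the HermiT reasoner (requires Java)
print(OffensiveTerm.instances())   # 'karen' is now inferred to be an OffensiveTerm
```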
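For the second task, the following sketch shows one way such knowledge could be projected into geometric space as knowledge graph embeddings, assuming the PyKEEN library; the triples are hypothetical examples of asserted and reasoner-inferred statements, not actual project data.

```python
import numpy as np
from pykeen.pipeline import pipeline
from pykeen.triples import TriplesFactory

triples = np.array([
    # asserted
    ("karen", "pejorativelyAssociatedWith", "entitlement_meme_2020"),
    ("entitlement_meme_2020", "instanceOf", "SocialPhenomenon"),
    # inferred by the reasoner
    ("karen", "instanceOf", "OffensiveTerm"),
], dtype=str)

factory = TriplesFactory.from_labeled_triples(triples)
result = pipeline(training=factory, testing=factory, model="TransE",
                  training_kwargs=dict(num_epochs=50))
# result.model now holds entity and relation embeddings that can be
# combined with distributional word vectors downstream.
```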
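Finally, as one example of the combination techniques mentioned above, the sketch below implements a classic post-processing retrofitting scheme that iteratively pulls each distributional vector toward the vectors of its neighbours in an external lexical or knowledge resource; the data format and hyperparameters are illustrative assumptions.

```python
import numpy as np

def retrofit(vectors, lexicon, iterations=10, alpha=1.0, beta=1.0):
    """Pull each word vector toward the vectors of its lexicon neighbours.

    vectors: dict mapping word -> np.ndarray (distributional embeddings)
    lexicon: dict mapping word -> iterable of related words (e.g. from a thesaurus or KG)
    """
    retrofitted = {w: v.copy() for w, v in vectors.items()}
    for _ in range(iterations):
        for word, neighbours in lexicon.items():
            nbrs = [n for n in neighbours if n in retrofitted]
            if word not in retrofitted or not nbrs:
                continue
            # Weighted average of the original vector and the current neighbour vectors.
            numerator = alpha * vectors[word] + beta * sum(retrofitted[n] for n in nbrs)
            retrofitted[word] = numerator / (alpha + beta * len(nbrs))
    return retrofitted
```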

Datasets: 

  • The OffensEval 2019 shared task produced the Offensive Language Identification Dataset (OLID)
  • Rosenthal, S., Atanasova, P., Karadzhov, G., Zampieri, M., and Nakov, P. (2020). A Large-Scale Semi-Supervised Dataset for Offensive Language Identification. https://sites.google.com/site/offensevalsharedtask/solid
  • Offensive/Profane Word List by Luis von Ahn (https://www.cs.cmu.edu/~biglou/resources/)  
  • Wikidata (https://www.wikidata.org/wiki/Wikidata:Main_Page)

References: 

  • https://www.dictionary.com/e/slang/karen/
  • Kutuzov, A., Øvrelid, L., Szymanski, T., & Velldal, E. (2018). Diachronic word embeddings and semantic shifts: a survey. In Proceedings of the 27th International Conference on Computational Linguistics, 1384–1397  
  • Glavaš, G. & Vulić, I. (2018). Explicit Retrofitting of Distributional Word Vectors. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, 34-45  
  • Mihaylov, T. and Frank, A. (2018). Knowledgeable Reader: Enhancing Cloze-Style Reading Comprehension with External Commonsense Knowledge. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 
  • Shoemark, P., Ferdousi Liza, F., Nguyen, D., Hale, S., McGillivray, B. (2019). Room to Glo: A Systematic Comparison of Semantic Change Detection Approaches with Word Embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)  
  • Peters, M. E., Neumann, M., Logan, R., Schwartz, R., Joshi, V., Singh, S. and Smith, N. (2019). Knowledge Enhanced Contextual Word Representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)  
  • Pernisch, R., Dell’Aglio, D. and Bernstein, A. (2021). Toward Measuring the Resemblance of Embedding Models for Evolving Ontologies. In Proceedings of the 11th Knowledge Capture Conference (pp. 177-184).
  • Zhang, Z., Han, X., Liu, Z., Jiang, X., Sun, M. and Liu, Q. (2019). ERNIE: Enhanced Language Representation with Informative Entities. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics  
  • Wolf, M.J., Miller, K.W. and Grodzinsky, F.S. (2017). Why we should have seen that coming: comments on Microsoft’s Tay “experiment,” and wider implications. The ORBIT Journal, 1(2), pp. 1-12.

Project ID

STAI-CDT-2022-KCL-2