Conference on Harmful Online Communication
Cologne / online, November 16–17, 2023
A two-day hybrid conference with sessions on different aspects of harmful online communication, featuring talks from leading experts. The main event will take place in Cologne, Germany, with the option of online participation.
Harmful online communication refers to a variety of ongoing activities on communication platforms such as Twitter, Facebook, TikTok, Telegram, and many more. Independent of the platform, harm can occur, for example, in the form of hate speech towards different groups, including racist or sexist content. Harmful online communication can also include aspects of mis- and disinformation, or threats of physical violence. Depending on the type of content, different strategies may be needed to detect it and to apply appropriate countermeasures.
The aim of this conference is to bring together a group of experts in computer-based detection and analysis of harmful online communication to discuss new developments in the field. The focus will be on theoretical concept definitions, data quality, and comparative measurement tools. This will benefit the field of harmful online communication studies by building a community around validity and reliability and by creating a baseline for comparative research and shared knowledge. The output of the conference will inform future work in the Computational Social Sciences and help more traditional social scientists improve their use of data from online platforms.
Open questions to be discussed include, but are not limited to:
- What are the practical challenges in handling harmful online communication?
- Which theoretical concepts and tools can be used to detect and analyze harmful online communication?
- What are the academic challenges in detecting and analyzing harmful online communication?
- Which data quality measures should be employed?
- Which legal and ethical challenges does the field face (e.g. privacy/informed consent)?
- How can the challenges of detecting and analyzing harmful online communication be overcome?
- How can we improve the use of data from online platforms in the future?
Confirmed speakers:
- Isabelle Augenstein, University of Copenhagen
- Leon Derczynski, ITU Copenhagen & University of Washington
- Iginio Gagliardone, University of the Witwatersrand
- Elena Jung, modus | zad, Centre for Applied Research on Deradicalisation
- Libby Hemphill, University of Michigan
- Homa Hosseinmardi, University of Pennsylvania
- Paloma Viejo Otero, Center for Media, Communication and Information Research (ZeMKI), University of Bremen
- Tetsuro Kobayashi, Waseda University
- Anne Lauscher, University of Hamburg
- Philipp Lorenz-Spreen, Max Planck Institute for Human Development, Berlin
- Ilia Markov, Vrije Universiteit Amsterdam
- Diana Rieger, Ludwig-Maximilians-University München
- Björn Ross, University of Edinburgh
- Paul Röttger, Bocconi University
- Mattia Samory, Sapienza University of Rome
- Francielle Vargas, University of São Paulo
- Isabelle van der Vegt, Utrecht University
The conference is funded by the Fritz Thyssen Foundation.
CALL FOR ABSTRACTS
– submission deadline August 30, 2023 –
CHOC2023 welcomes proposals for an in-person poster session on November 16, 2023, at the Conference on Harmful Online Communication in Cologne, Germany. This conference seeks to bring together a community of researchers from the (Computational) Social Sciences and related disciplines to discuss data quality, methods, ethics, theoretical work, and practical challenges related to harmful online communication.
Topics may include, but are not limited to:
- Quantitative, qualitative, or mixed-methods research on topics subsumed under harmful online communication, including but not limited to abusive language, hate speech, misinformation, disinformation, and online harassment
- Computer-mediated approaches to tackling such communication, such as content moderation and policy making
- Computational methods for research on harmful online communication, such as network analysis, textual and image analysis, large language models, and machine learning
- Resource creation for studying harmful online communication, such as datasets, codebooks, annotation tasks, and taxonomies
- Theoretical discussions and practical concepts related to countering misinformation and harmful online communication
- Ethical and legal aspects of harmful online communication research
- Bias and inequalities in (automated) hate speech detection, datasets, and analysis methods
- Development of communal resources in harmful online communication research
Presentations at the poster session can cover published work, work in preparation for publication, or work in progress. Submissions are open to researchers at all career stages, including PhD candidates and Master's students. Abstracts of up to 500 words (excluding references) should be submitted by August 30, 2023 (AoE).
Please note that the number of poster presentations is limited, since the poster session will take place in person in Cologne only. In case of a high number of high-quality submissions, we may have to limit both the number of accepted posters and registration to the first authors of the posters. Co-authors and other attendees will be admitted if space permits and may be wait-listed.
Date: November 16–17, 2023
The conference will be hybrid. The on-site part will take place at GESIS – Leibniz Institute for the Social Sciences in Cologne:
Day 1 (November 16, 2023)
Day 2 (November 17, 2023)
On-site: EUR 60 for on-site participation in Cologne (poster presenters must present in person)
Organizers: Katrin Weller, Pascal Siegers, Indira Sen, Christina Dahn
Funding: Fritz Thyssen Foundation