Presented at
May Contain Hackers (MCH2022),
July 24, 2022, 7 p.m.
(60 minutes).
Help build a secure platform for domestic abuse research! We want to collect a dataset of anonymous screenshots of abusive messages, to help researchers understand the language of domestic abuse and controlling behaviour. To do so we are building a secure online dropbox where victim-survivors of abuse can safely and anonymously upload screenshots. This workshop will present this system – and ask you to hack into it! Help us make this system as secure as possible by suggesting potential exploits and vulnerabilities.
Recently there's been a lot of research concerning online abuse - for example hate speech and cyberbullying - which takes place in public online spaces, like Twitter or Reddit. But it's much more difficult to research interpersonal abuse that happens in private, such as domestic abuse. It's difficult to collect data about this kind of abuse, because it tends to happen behind closed doors, and there are still many barriers preventing victims from speaking about their experiences.
There isn’t an existing dataset of anonymised abusive text messages between intimate partners or family members. Creating such a dataset could help researchers learn a lot more about the language of abuse, and how abuse takes place. It could help create future educational tools to prevent abuse from taking place, and help to increase identification and prosecution of different types of abuse by the police.
As researchers, we want to build an online portal where victim-survivors of abuse can upload screenshots of messages that they believe to be abusive, to be anonymised and used in research about abuse. However, collecting this data raises a lot of ethical, data protection, and technical issues. Is it possible to secure a dropbox and a dataset like this? Is it ethical? What kinds of attacks would the platform be vulnerable to?
So we need your help to make this platform as secure as possible! This workshop will take the form of a short presentation about the research and some ideas for how such a portal could be implemented. The rest of the session will be a brainstorming session to try and source expertise and ideas from workshop participants about how such a platform could be hacked, as well as the ethical issues of this research. The workshop will be split into small groups, each with a set of questions, and participants can choose to join the group with the questions that most interest them.
Questions proposed to workshop participants would include:
- HACK & ATTACK: How would you hack this platform? What attacks is it likely to attract? What are its vulnerabilities?
- TOOLS TO DEFEND: What are the options for a secure database backend? What tools and platforms are available to scan for unwanted content (e.g. to protect researchers from seeing explicit images)? What methods are available to protect against spam? Are there other fields that use similar techniques, e.g. a secure dropbox for whistleblowers?
- DATA PROTECTION & ETHICS: What are the arguments for and against such a dropbox, from a data protection perspective? How could uploads to such a dropbox be completely and securely anonymised, e.g. by discarding IP addresses and device-identifying information from incoming uploads?
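As a starting point for the anonymisation discussion, here is a minimal sketch of one layer of that defence: dropping identifying request metadata before anything is written to storage. The field names and function are purely illustrative assumptions, not part of any real framework or of the platform as built.

```python
# Hypothetical sketch: scrub identifying fields from an incoming
# upload's request metadata before it reaches storage. Field names
# are illustrative; a real deployment would also need to handle
# image metadata (e.g. EXIF) and server access logs.

IDENTIFYING_FIELDS = {
    "remote_addr",       # client IP address
    "x-forwarded-for",   # proxy-added IP chain
    "user-agent",        # browser/device fingerprint
    "cookie",            # session identifiers
    "referer",           # originating page
}

def sanitise_upload_record(headers: dict) -> dict:
    """Return a copy of the request metadata with identifying
    fields removed, keeping only what is needed to process the file."""
    return {
        key: value
        for key, value in headers.items()
        if key.lower() not in IDENTIFYING_FIELDS
    }
```

Note this only covers transport-level metadata; the screenshots themselves, and the text they contain, raise separate anonymisation questions that the workshop groups would discuss.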
Presenters:
- Lilly Neubauer
Lilly Neubauer is a PhD student at UCL studying intimate partner abuse. Her research looks at the automatic detection of abuse, or descriptions of abuse, in text data (e.g. incident reports, case summaries, victim-survivor accounts, text messages) using a variety of machine learning techniques. She completed her BSc in computer science at UCL in 2021, before which she worked as an audio engineer.