
Protective measures

In this step, we will discuss two specific types of protective measures: the strengthening of the legal framework and the technological approach.

As discussed in Week 2 and step 3.2 on challenges, there are a host of legal complexities in tackling online abuse. Hence, strengthening the legal framework requires a multifaceted approach.

National governments need to implement laws and regulations in accordance with international law. Such laws need to be comprehensive, clear, and adaptable to evolving online behaviours and technologies. Governments also need tools to monitor those evolving behaviours, which in turn requires international collaboration. This could involve information-sharing across jurisdictional boundaries, harmonising laws, extradition agreements, and mutual legal assistance treaties. Legislation that encourages closer cooperation between Internet intermediaries and civil society is also advisable.

The rapid pace at which technology evolves, and at which the law has to adjust, means that training is essential if law enforcement agencies are to investigate and prosecute online abuse cases effectively. This includes understanding digital evidence collection, cybercrime techniques, and victim support.

Evidence suggests that there is a lack of trust within civil society, particularly among young Internet users, that the law can protect them from online abuse. Strengthening the legal framework in the ways mentioned above can help, but it is also important that legislation protects victims and provides support services, including counselling and legal assistance. Furthermore, there should be avenues for victims to seek civil remedies against perpetrators of online abuse, such as restraining orders, injunctions, or civil lawsuits. These remedies need to be accessible, efficient, and effective in addressing the harm caused.

The technological approach

Internet intermediaries are companies or services that enable online interactions between users and content. They include a wide range of entities, such as Internet Service Providers, social media platforms, and search engines, amongst others. As they enable users to access, create, and share content, communicate with others, and engage in a wide range of other activities, they can play a decisive role either in facilitating and amplifying online abuse or in combating it.

Internet intermediaries need to balance freedom of expression with freedom from harm. In many jurisdictions, new legislation, e.g. the EU Digital Services Act and the UK Online Safety Act, obliges them to act against the most severe types of online abuse. However, the majority of online abuse may not meet the threshold of severity required under existing laws and regulations. For such content, Internet intermediaries have a wide range of technological and policy mechanisms to combat online abuse, should they choose to use them. These include:

  • Content moderation – manual or automated systems and policies for temporarily or permanently removing content or tagging content as problematic. Moderation also includes de-listing leaked content and managing attempts to overload specific user accounts.
  • Content ranking – de-amplifying problematic content, i.e. demoting it in content feeds or search engine results.
  • Nudges and warnings – automated prompting of users before they post potentially abusive content, or warning them of the consequences of such behaviour. Similarly, Internet intermediaries could display warnings before users view content, or provide other such indicators.
  • Sensitivity settings – these are a set of configurable options in an app or service that allow users to control the type of content that they see, based on its nature and the potential for it to be offensive, disturbing, or otherwise inappropriate.
  • Automated detection and classification – advances in machine learning and AI can be used to classify abusive content and detect online abuse attacks, perpetrators, and at-risk users. These can be used to take preventative measures or accelerate response.
  • Reporting mechanisms – all services should have some mechanism to report and flag online abuse and problematic behaviour. Such reports should be adequately staffed by trained human operators with the ability to act while balancing the right to freedom of expression within the constraints of the terms and conditions of the service.
  • Acceptable use policies – every user typically agrees to terms and conditions including an acceptable use policy. These terms and conditions should include details on data protection but also specific policies with respect to the protection of human rights. Specifically, they should allow the Internet intermediary to flag potentially abusive content or problematic behaviour, and subject to human oversight, enforce their content moderation policy including reporting such content or behaviour to law enforcement. Such policies and terms should be in plain language.
  • Data access – Internet intermediaries hold substantial historical datasets that (a) contain instances of online abuse and (b) record their moderation decisions. Such data can be shared with researchers, technology vendors, and other stakeholders to support the development of better tools for content moderation and for the detection and classification of online abuse; to facilitate greater research into the triggers, types, targets, and effects of online abuse; and to inform education and other initiatives to combat it.
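Several of the interventions above (nudges, content ranking, removal) form a graduated response: the more severe the content, the stronger the action. The sketch below illustrates that idea only; the term list, weights, and thresholds are hypothetical, and a real platform would rely on trained classifiers with human oversight rather than a hand-made lexicon.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    WARN = "warn"      # nudge the poster before publishing
    DEMOTE = "demote"  # de-amplify in feeds and search results
    REMOVE = "remove"  # hide content pending human review

@dataclass
class ModerationResult:
    score: float
    action: Action

# Hypothetical lexicon and weights -- purely illustrative, not a real
# abuse-detection model.
ABUSIVE_TERMS = {"idiot": 0.4, "loser": 0.3, "threat": 0.8}

def score_text(text: str) -> float:
    """Crude severity score: sum of matched term weights, capped at 1.0."""
    words = text.lower().split()
    return min(1.0, sum(ABUSIVE_TERMS.get(w, 0.0) for w in words))

def moderate(text: str, warn_at: float = 0.3, demote_at: float = 0.5,
             remove_at: float = 0.8) -> ModerationResult:
    """Map a severity score to one of the graduated interventions."""
    score = score_text(text)
    if score >= remove_at:
        action = Action.REMOVE
    elif score >= demote_at:
        action = Action.DEMOTE
    elif score >= warn_at:
        action = Action.WARN
    else:
        action = Action.ALLOW
    return ModerationResult(score, action)
```

For example, `moderate("great match today")` allows the post, while text matching higher-weighted terms is progressively warned about, demoted, or removed. The key design point is the escalation itself, which lets a platform respond proportionately to content that falls below the legal threshold for removal.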
This article is from the free online course Online Abuse in Sport, created by FutureLearn - Learning For Life.
