
Online Safety Act | Consultation | Executive Summary

Juhi Kore, Broderick McDonald

Oxford Disinformation & Extremism Lab



Online Safety Act | Ofcom Consultation

The Oxford Disinformation and Extremism Lab recently submitted our response to the Office of Communications' (Ofcom) consultation on the Online Safety Act: Protecting People from Illegal Online Harms. As the UK's lead communications regulator, Ofcom will determine how the Online Safety Act is brought into effect over the coming 18 months, and input from individuals and organisations will be important in ensuring the codes of practice are responsive and appropriately scoped, avoiding an overly broad net or infringement of human rights such as free expression. While the full submissions will be published by Ofcom over the coming year and a half, OxDEL is including an executive summary of our concerns, recommendations, and input to help shape this process in a way that safeguards human rights and civil society.


Background


The Online Safety Act (OSA) was passed into law in the United Kingdom on October 26, 2023, with similar yet separate legislation introduced by the EU under the Digital Services Act in 2022. The Online Safety Act brings a wide range of policies, responsibilities, and regulatory powers to bear on providers of regulated user-to-user services (U2U services) and regulated search services (search engines). These rules apply beyond the UK, depending on the location of users, and represent an important development in online regulation that will shape the ecosystem in major ways. Given the importance of this legislation and its impact on online harms including disinformation and extremism, OxDEL believes it is important to accurately scope and safeguard protections during the implementation of the OSA.

 

The responsibilities of service providers affected by the Online Safety Act will come into effect gradually, with additional legislation pending further approval. Practical guidance and Codes of Practice will be published by the UK's communications regulator as these steps are taken. Given that the devil is in the details when implementing any major new piece of legislation, it is essential for government and regulators to consult with civil society, academia, and practitioners before specific guidance is finalised.


While OxDEL is supportive of greater regulatory clarity and of protections for civil society, researchers, and activists that are enshrined in both law and regulation, we remain concerned about certain aspects of the legislation.




Executive Summary


While we support greater regulatory clarity and protections, we remain concerned that the free expression of civil society organizations (CSOs), academic researchers, and human rights advocates will be curtailed by the existing proposals. While moderation of extremist groups and malicious actors operating on private platforms is necessary and legal, we worry about the scoping of the existing proposal and its potential to over-reach or be abused to target legitimate peaceful dissent, civil society, and academic researchers. To prevent this, we urge more targeted scoping and clear definitions, along with transparency and a robust appeals process in determining how content is classified and moderated.


Automated Content Moderation


Automated content moderation using AI tools or algorithmic detection extends the risks outlined above, and we are concerned that human review will be minimised. Human reviewers bring expertise, context, and subject-matter knowledge that automated systems can miss, and they are important in ensuring that content moderation does not overstep its mandate.
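To make this concern concrete, the sketch below (ours, not a mechanism specified in the Act or in Ofcom's proposals) shows one common pattern: automated classifiers act alone only on high-confidence decisions, while ambiguous content is routed to human reviewers who can weigh context such as journalistic or research use. All labels and thresholds here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ModerationResult:
    label: str         # e.g. "terrorist_content" or "benign" (hypothetical labels)
    confidence: float  # classifier confidence in [0, 1]

# Illustrative thresholds; a real system would tune these per harm type.
AUTO_ACTION_THRESHOLD = 0.98   # automate only the clearest cases
HUMAN_REVIEW_THRESHOLD = 0.60  # anything ambiguous goes to a human

def route(result: ModerationResult) -> str:
    """Decide what happens to a piece of flagged content."""
    if result.label == "benign":
        return "allow"
    if result.confidence >= AUTO_ACTION_THRESHOLD:
        return "remove"
    if result.confidence >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"   # context-sensitive cases get human judgement
    return "allow"              # weak signals should not trigger removal

# A borderline case, e.g. a researcher quoting extremist material:
print(route(ModerationResult("terrorist_content", 0.72)))  # -> human_review
```

The design point is that the thresholds encode where automation stops, which is exactly the scoping decision we argue should be transparent and appealable.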


Scoping


While we agree with the size- and risk-based approach to applying the OSA, we remain concerned that everyday users, and researchers in particular, will be affected. Existing examples of this risk include individual users, organizations, and researchers who have been caught up in enforcement for discussing or analysing content that falls outside the narrow bounds of platform guidelines. Moreover, when such violations have occurred, the appeals process has been unclear, slow, and difficult to navigate. Recent research from Dr Aaron Zelin (Brandeis University) has proposed white-listing certain users or organizations, along with broader solutions that include transparency reports and a robust appeals process.
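A minimal sketch of how such a white-list could sit in front of automated enforcement follows; the account identifiers and the vetting process behind them are entirely hypothetical, and nothing like this is currently specified in Ofcom's proposals. The key design choice is that vetted accounts are exempted from automated action, not from review.

```python
# Hypothetical allow-list of vetted researcher/CSO accounts (invented IDs).
VETTED_ACCOUNTS: set[str] = {"oxdel_research", "example_cso"}

def enforcement_action(account_id: str, flagged: bool) -> str:
    """Layer an allow-list check in front of automated enforcement."""
    if not flagged:
        return "none"
    if account_id in VETTED_ACCOUNTS:
        # Never auto-action vetted accounts: escalate to a human reviewer
        # and log the decision for the platform's transparency report.
        return "human_review"
    return "standard_enforcement"

print(enforcement_action("oxdel_research", flagged=True))  # -> human_review
```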


Content Search


In some cases, search functions continue to recommend content that is extremist but not branded as coming from a designated terrorist organization. As such, the proposals need to better account for the post-organizational structure of contemporary extremism, with clear definitions of what is and is not permitted. The concerns we raise above about scoping, transparency, and a robust appeals process apply equally to how search results are classified and moderated.


Impact on Smaller Platforms and Innovation


We are sensitive to the burdens placed on smaller platforms and urge the development of counter-measures to address these challenges. Industry-wide associations could lessen the burden on small and micro businesses that lack the resources to comply independently, and larger platforms could 'lend' expertise or mentorship to smaller platforms, supporting the overall health of the ecosystem. Future developments in this field could replicate the success of existing programmes in content moderation and trust & safety that have supported smaller companies lacking the in-house resources to ensure regulatory compliance. Additionally, the development of a 'one-stop' office within Ofcom for enquiries would further reduce the challenges facing small platforms.


Hash Sharing


Industry-wide associations such as the Global Internet Forum to Counter Terrorism (GIFCT) and Tech Against Terrorism are critical for ensuring that hashes from a wide range of platforms and services are included in the database. Strong public-private partnerships are important in maintaining the hash-sharing database, but these must also include smaller platforms that emerge in the future and do not always have an in-house Trust & Safety professional. Cryptographic hashing catches exact re-uploads, while perceptual and contextual hashing help detect 'fuzzy' or approximate matches; these methods will have to be further improved as Generative Artificial Intelligence (GenAI) applications expand the range of obfuscation tools available to extremists and malicious actors seeking to evade automated content moderation.

Lastly, the development of a hash database in partnership with academic researchers would help preserve this content for future analysis and enable analysts to flag emerging trends that begin on encrypted platforms to hash-sharing databases. Hash-sharing is effective, but only to the extent that the database is continually maintained and updated with new research on extremism and disinformation. The Global Network on Extremism and Technology (GNET) and the Extremism and Gaming Research Network (EGRN) have published significant research on hash-sharing, evasion, and generative artificial intelligence tools which would be helpful in formulating responses.
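The difference between exact and 'fuzzy' matching is easy to show in code. The sketch below uses the open-source imagehash library as a stand-in for production perceptual hashes such as PDQ; the Hamming-distance threshold is illustrative, not drawn from any hash-sharing consortium's actual settings.

```python
import hashlib

import imagehash          # pip install ImageHash
from PIL import Image     # pip install Pillow

def exact_hash(path: str) -> str:
    """Cryptographic hash: matches only byte-identical re-uploads."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def is_fuzzy_match(path: str, known_hashes: list[imagehash.ImageHash],
                   max_distance: int = 8) -> bool:
    """Perceptual hash: survives re-encoding, resizing, and small edits.

    Subtracting two ImageHash values gives their Hamming distance;
    a distance at or below the threshold counts as a match.
    """
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= max_distance for known in known_hashes)
```

GenAI-driven obfuscation (crops, filters, wholesale re-generation) can push altered images beyond any fixed distance threshold, which is why these databases need the continual maintenance and research input described above.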



While the passage of the Online Safety Act and its implementation represent an important step in providing greater clarity to the private sector, we remain concerned about the potential impact of these developments on civil society, researchers, and activists, depending on how they are implemented. We will continue to monitor this process and welcome Ofcom's efforts to engage a wide and diverse set of actors as the next stage of implementation is developed.

