Last updated on November 4, 2025
No, not without meeting specific duties. The Online Safety Act requires regulated platforms, and Category 1 services in particular, to operate proportionate systems that protect content of democratic importance. Removing political comments while leaving comparable content intact could breach these duties unless the removal follows clear, consistently applied policies and takes freedom of expression into account.
A New Era of Online Regulation
The Online Safety Act 2023 is one of the most significant pieces of digital legislation in UK history. It was introduced to make the internet safer for users while preserving fundamental rights such as freedom of expression. The Act applies to user-to-user services and search engines, including social media platforms, forums, and messaging apps, and it reaches services based outside the UK if they target UK users or pose a material risk of significant harm to people in the UK.
This law is not just about removing harmful content; it’s about creating a balanced framework where safety and democratic debate coexist. Platforms now have statutory duties to protect users from illegal content and harmful material, particularly for children, while ensuring political speech remains protected.
Key Principles of the Act
The Online Safety Act establishes a series of duties for regulated services:
- Illegal Content Duties: Platforms must take proportionate steps to identify illegal content and swiftly remove it, such as terrorist content or child sexual abuse material.
- Child Safety Duties: Strong protections for children, including risk assessments and age-appropriate design.
- Transparency and User Empowerment: Platforms must publish clear terms of service and provide tools for users to control what they see.
- Freedom of Expression Duties: Services must consider users’ rights when applying moderation policies.
These principles create a compliance framework that balances harm reduction with rights protection.
Protecting Democratic Content
Section 17 of the Act introduces duties, applying to Category 1 services, to protect content of democratic importance. This covers user-generated content intended to contribute to democratic political debate in the UK, whether about elections, government policy, or local governance. Such platforms must:
- Apply moderation policies consistently across political viewpoints.
- Consider freedom of expression when removing or restricting such content.
- Provide clear explanations in their terms of service.
Removing political comments while leaving other content untouched could breach these duties unless justified under transparent, proportionate policies. Platforms cannot discriminate against specific political opinions or apply rules arbitrarily.
What Counts as “Content of Democratic Importance”?
The Act defines this broadly: any content intended to contribute to democratic political debate in the UK. This includes commentary on elections, government policy, or local governance. News publisher content also falls under this protection, reinforcing the UK’s commitment to press freedom and democratic discourse.
Ofcom’s Role and Enforcement
Ofcom is the independent regulator tasked with enforcing the Act. Its responsibilities include issuing Codes of Practice, conducting audits, and investigating breaches. Ofcom’s enforcement powers are extensive:
- Fines of up to £18 million or 10% of qualifying worldwide revenue, whichever is greater, for serious non-compliance.
- Court-ordered business disruption measures, including blocking access to a service in the UK, for the most serious or repeated breaches.
- Mandatory risk assessments and transparency reports from regulated platforms.
Ofcom guidance emphasises proportionality and transparency in moderation decisions, especially for political content. Platforms must demonstrate that their systems respect democratic debate while addressing harmful material.
Cultural and Historical Context
The UK has a long tradition of protecting political speech, from parliamentary privilege to press freedom. The Online Safety Act reflects this heritage while adapting to the digital age. It signals that while harmful content must be tackled, democratic debate remains sacrosanct. This approach aligns with broader European principles of human rights and freedom of expression.
Practical Implications for Platforms
To comply with the Act, platforms should:
- Conduct risk assessments on how moderation affects democratic content.
- Publish clear policies explaining treatment of political speech.
- Offer appeals for users whose content is removed.
- Train moderation teams to apply rules consistently and proportionately.
Failure to do so risks regulatory action, reputational damage, and significant financial penalties.
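To make the record-keeping side of these steps concrete, here is a minimal, purely illustrative sketch of how a platform might log moderation decisions so that consistency, consideration of freedom of expression, and appeal rights can be audited later. Nothing in it is drawn from the Act or Ofcom guidance, and the class and field names (ModerationDecision, policy_clause, and so on) are hypothetical.

```python
# Illustrative sketch only: one way a platform might record moderation decisions
# so that consistency across political viewpoints, the freedom-of-expression
# rationale, and the availability of an appeal can be reviewed afterwards.
# All names are hypothetical, not taken from the Act or Ofcom guidance.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class ModerationDecision:
    content_id: str                       # identifier of the affected post or comment
    policy_clause: str                    # the published terms-of-service clause relied on
    action: str                           # e.g. "remove", "restrict", "no_action"
    democratic_importance_assessed: bool  # was the content reviewed as potentially democratic content?
    expression_rationale: str             # note of how freedom of expression was weighed
    appeal_available: bool = True         # the user should be able to challenge the decision
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    reviewer_id: Optional[str] = None     # who (or which system) made the call


# Example: a removal logged with the clause and rationale recorded, so the same
# clause can be shown to have been applied regardless of political viewpoint.
decision = ModerationDecision(
    content_id="comment-12345",
    policy_clause="TOS 4.2 (harassment)",
    action="remove",
    democratic_importance_assessed=True,
    expression_rationale="Removed for harassment, not for the political viewpoint expressed.",
)
print(decision)
```

The value of such a record is not the code itself but the discipline it encodes: every removal is tied to a published clause, a rationale, and an appeal route, which is the kind of evidence the risk assessments and transparency reports described above are meant to capture.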
Timelines and Compliance
The Act received Royal Assent in October 2023 and is being brought into force in phases through 2024 and 2025. Ofcom's Codes of Practice provide detailed compliance timelines. Category 1 services, the largest platforms with the greatest reach, face the most stringent requirements, including duties to protect democratic content and provide user empowerment tools.
The Bottom Line
Under the Online Safety Act, platforms cannot arbitrarily delete political comments while preserving other content. Moderation must respect statutory duties to protect democratic discourse, apply policies consistently, and uphold transparency. In the UK’s new online safety regime, free expression and safety are not rivals—they are twin pillars of a responsible digital society.
Sources
Online Safety Act 2023 – Full Text (legislation.gov.uk)
https://www.legislation.gov.uk/ukpga/2023/50/contents
Online Safety Act: explainer – GOV.UK (updated 24 April 2025)
https://www.gov.uk/government/publications/online-safety-act-explainer/online-safety-act-explainer