The enforcement of a new law to promote online safety in the United Kingdom has attracted widespread criticism from politicians, tech companies, digital rights advocacy groups, free-speech campaigners, and content creators, among others.
Certain provisions of the UK’s Online Safety Act (OSA) took effect on July 25. These provisions require companies behind websites accessible in the UK to shield minors from harmful content, including pornography and material related to self-harm, eating disorders, or suicide. They also require platforms to give minors age-appropriate access to other types of content, such as bullying and abusive or hateful material.
To comply with these provisions of the OSA and stay online in the country, platforms have implemented measures to verify the ages of users on their services. Those doing so include social media platforms Reddit, Bluesky, Discord, and X; porn websites such as Pornhub and YouPorn; and Spotify, which is requiring users to submit face scans to access explicit content.
In response, VPN apps have become the most downloaded apps on Apple’s App Store in the UK over the past few weeks. Proton VPN experienced a 1,800 per cent spike in UK daily sign-ups, according to a report by the BBC.
As one of the first major democracies, after Australia, to impose such strict content controls on tech companies, the UK has become a closely watched test case that could influence online safety regulation in other countries, including India.
“Since 25th July, users in the UK have certainly experienced a different version of the internet than that they were previously used to,” Paige Collings, Senior Speech and Privacy Activist at the Electronic Frontier Foundation (EFF), told The Indian Express.
“The OSA was first introduced in 2017 and politicians debated this legislation for more than four years and under four different Prime Ministers. Throughout this time, experts from across civil society, academia, and the corporate world flagged concerns about the impact of this law on both adults’ and children’s rights, but politicians in the UK decided to push ahead and enact one of the most contentious age verification mandates that we’ve seen,” she added.
What are the new OSA rules?
The Online Safety Act, which aims to make the UK the ‘safest place’ in the world to be online, was signed into law in 2023. The sweeping legislation includes provisions that place the burden on social media platforms and search services to take down illegal content and to adopt transparency and accountability measures.
However, according to the British government’s own website, the most stringent provisions in the OSA are aimed at enhancing the online safety of children.
These provisions apply to any website that “is likely to be accessed by children”, even if the company that owns the site is located outside the country. Companies had until April 16 to determine whether their websites were likely to be accessed by children, based on guidance published by the Office of Communications (Ofcom), the regulator overseeing the implementation of the OSA. The deadline for companies to complete their assessment of the risk of harm to children was July 24, 2025.
Sites that fall within the scope of the Act must take steps to prevent under-18 users from seeing harmful content, which the OSA defines in three categories:
– Primary priority content: Pornographic content; content which encourages, promotes, or provides instructions for suicide, self-harm, or an eating disorder, or behaviours associated with an eating disorder.
– Priority content: Bullying content; abusive or hateful content; content which depicts or encourages serious violence or injury; content which encourages dangerous stunts and challenges; and content which encourages the ingestion or inhalation of, or exposure to, harmful substances.
– Non-designated content: Any type of content that presents a material risk of significant harm to an appreciable number of children in the UK, provided the harm does not stem from the content’s potential financial impact, the safety or quality of goods featured in the content, or the way in which a service featured in the content may be performed.
Online service providers in scope of the Act can address these risks by implementing a number of measures, which include, but are not limited to:
– Robust age checks: Services must use “highly effective age assurance to protect children from this content. If services have minimum age requirements and are not using highly effective age assurance to prevent children under that age using the service, they should assume that younger children are on their service and take appropriate steps to protect them from harm.”
– Safer algorithms: Services “will be expected to configure their algorithms to ensure children are not presented with the most harmful content and take appropriate action to protect them from other harmful content.”
– Effective moderation: All services “must have content moderation systems in place to take swift action against content harmful to children when they become aware of it.”