The UK just took a leap of faith into the murky waters of internet safety legislation. But did they dive into a well-thought-out solution, or have they just cannonballed into a pool of unforeseen consequences and potential for abuse?
The Online Safety Act – What’s the big deal?
The law in CliffsNotes form
Though royal assent is technically still to come, that is a mere formality at this point; this is effectively the law of the land. The crux of that law:
- Classifies some online content, particularly on open platforms like social media, as either “harmful to children” or “legal but harmful”.
- Appoints Ofcom, the UK’s communications regulator, as the chief arbiter of content classification.
- Mandates entities “providing an internet service” to either block or filter the categorized content and to conduct periodic risk assessments and reporting.
- Levies penalties for non-compliance, including significant fines (£18 million / roughly $25M USD, or 10% of global annual revenue, whichever is greater) and potential imprisonment for senior managers in certain cases.
- Demands decrypted user communications upon request, laying the groundwork for potential government surveillance (or simple provider abuse).

So what’s going to happen now?
- Headline: This will cause many companies to either adopt UK-centric standards or abandon the UK market, given the steep non-compliance penalties.
- WhatsApp and Signal have already threatened to withdraw from the UK – No worries, I’m sure a pro-censorship replacement can be sourced – I hear the CCP has many offerings in this space…
- Is this concern real? YES.
- GDPR showed us how ‘local’ laws can reshape the global scene. Companies face an ‘all or nothing’ dilemma – comply or forgo doing business in the region.
- Less likely – companies might run a “UK version” of their service. This has been done by many for China to comply with the CCP’s (lack of) security requirements, but don’t forget that China’s economy is ~5x the size of the UK’s – the incentives are different.
- Again looking at GDPR, many large companies chose to scope themselves out of GDPR’s reach entirely (e.g., by geo-blocking EU users) rather than implement the changes necessary to comply, thus sidestepping the need for a “European version” of their services.
- It’s less “how do we comply” and more “how do we scope ourselves out of the need to comply?”
- For those that comply, broad age-gating will likely be the answer – “so long” anonymous content browsing, whether it be news, social media posts, other commentary, porn, or whatever.
- An overstatement? Given that restricted categories include things like “extreme sexual violence”, “animal cruelty”, etc., is it a stretch to think news stories previously prefaced with a note about “disturbing content” might now simply sit behind an account wall? And that, unable to easily filter those out of thousands or millions of other user posts, platform providers might simply settle on universal age verification as “the easy way out”?
- The easiest way to prevent minors from accessing problematic content will be to require a registered username with a valid form of payment on file. We’ll likely see some variation of this idea implemented – a rough sketch of the logic follows this list.
- Encrypted services must build a back door or get out – given the clause mandating that UK government agencies be furnished with decrypted user communications on demand “at a future date when feasible”, any messaging or similar apps will either need to bail on the UK or build in a back door.
- In the USA, the NSA has historically attempted to achieve this in multiple encryption algorithms through extralegal / side-channel means… This makes the UK the first western democracy to mandate purposely weakened encryption with the force of law.
- Increased scrutiny to identify “questionable” material is likely on all remaining platforms.
- Cost implications – administrative costs will likely be passed on to consumers. No right-minded business “eats” new government-imposed costs – it passes them down, knowing that consumers will grumble but ultimately still pay for their services, as there is often no alternative.
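To make the “card on file as age proof” idea concrete, here is a minimal, hypothetical sketch of the kind of gate a provider might bolt on. Nothing here comes from the Act or any real platform – the names (`User`, `AgeGateError`, `serve_content`) and the tag list are illustrative assumptions:

```python
# Hypothetical sketch: treat a verified payment method as proof of adulthood,
# and wall off anything that *might* fall into a restricted category.
# This is the "easy way out" described above, not anything mandated verbatim.

from dataclasses import dataclass


@dataclass
class User:
    username: str | None          # None => anonymous visitor
    has_verified_payment: bool    # card on file => assumed adult


class AgeGateError(Exception):
    """Raised when a request is blocked pending age verification."""


# Tags a risk-averse provider might sweep into the gate, including news
# items previously shown with a simple "disturbing content" notice.
RESTRICTED = {"extreme_violence", "animal_cruelty", "self_harm", "adult"}


def serve_content(user: User | None, content_tags: set[str]) -> str:
    if content_tags & RESTRICTED:
        if user is None or user.username is None:
            raise AgeGateError("Log in to view this content.")
        if not user.has_verified_payment:
            raise AgeGateError("Add a payment method to verify your age.")
    return "<rendered content>"


# An anonymous reader now hits the wall even for a news article:
try:
    serve_content(None, {"news", "extreme_violence"})
except AgeGateError as err:
    print(err)  # -> Log in to view this content.
```

Note the dynamic: the cheapest compliant implementation doesn’t classify individual items carefully – it ends anonymous browsing wholesale.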
Digging Deeper – Is Privacy Dead and Content Moderation a Mess?

Privacy Fail, Content moderation… not a win
This is a BIG blow to user privacy, and a questionable gain for content moderation and filtering – questionable both in terms of its efficacy today versus other solutions and in terms of how it will be used in the future. Solutions that put power in the hands of bureaucrats rarely end well.
The Privacy problem
The requirement for decrypted content to be provided “on demand” to Ofcom in the future essentially negates any assurance of privacy from providers. It’s akin to numerous attempts by the NSA to introduce backdoors into various encryption algorithms in the US, but backed by force of law.
As a comparison, imagine if all of your phone calls could be disclosed to one or more government agencies, with or without a warrant from law enforcement, with a simple request. That is effectively what this law puts in place for electronic communications.
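To see why “decrypted content on demand” is incompatible with genuine end-to-end encryption, consider this minimal sketch (it uses the real Python `cryptography` package; the escrow store and disclosure request are hypothetical). A provider can only satisfy such a mandate if it keeps a copy of users’ keys – at which point “end-to-end” is a label, not a guarantee:

```python
# Minimal sketch of why "plaintext on demand" implies key escrow.
# Uses the `cryptography` package (pip install cryptography); the escrow
# dict and the disclosure request below are hypothetical illustrations.

from cryptography.fernet import Fernet

# Genuine E2E: this key would live only on the user's devices.
alice_key = Fernet.generate_key()

# Escrowed "E2E": the provider quietly retains a copy of every key,
# because plaintext may be demanded "at a future date when feasible".
provider_escrow = {"alice": alice_key}

ciphertext = Fernet(alice_key).encrypt(b"meet at 7, bring the documents")


def fulfil_disclosure_request(user: str, blob: bytes) -> bytes:
    """Hand over plaintext: possession of the key is all it takes."""
    key = provider_escrow[user]       # the provider CAN decrypt...
    return Fernet(key).decrypt(blob)  # ...so privacy rests on policy, not math.


print(fulfil_disclosure_request("alice", ciphertext))
```

Whether the spare key sits in a provider database or a mandated “lawful access” module, the effect is the same: the cryptography no longer protects the user from anyone who can compel – or compromise – the provider.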
Likewise, the implied filtering and age gating likely mean that many anonymous-to-browse sites and services will now move to “accounts (with credit cards or more to prove age…) for everyone, for all activities.” No more anonymous reading of news articles, tweets, or blog posts. While there are ways to age-gate without something like a credit card, most companies will opt for the easy way out – mandating payment options available only to adults.
And hey, once you need a credit card for the service, why not add on a nominal fee? Surely that won’t happen…
Will it even work?
In terms of efficacy and “value vs. harm” – this is a case of a government stepping into an arena it has traditionally stayed out of – content – and making a mess of it.
Books, films, and other media forms have traditionally been self-regulated, with parents and guardians filtering content for children. So, what makes the internet warrant such stringent control? Is the real issue the unchecked access many caregivers allow, rather than the medium itself?
Books contain numerous topics that could be considered “harmful”: from graphic accounts of the horrors humanity has inflicted upon itself in war, to more personal works such as the writings of the Marquis de Sade or even Hitler’s Mein Kampf. These can be dark works, but they are not banned; rather, society steers them to appropriate audiences. Even questionable characterizations in classic works such as Peter Pan and Little House on the Prairie are retained (at least as of today) – the writings themselves are not filtered or banned, but we provide a more enlightened, modern context for the content – yesteryear’s terrible caricatures become conversation fodder for today’s instruction on equality and inclusion.
Film, in both the UK and the USA, is voluntarily rated and regulated by industry bodies, NOT by governments. And no one regulates what the extremist in the town square is spouting (provided it is not a direct call to violence). As with books, parents and caregivers select and filter what children consume. Is the government now “mom and dad”? Have we deemed actual caregivers so incompetent and devoid of responsibility that the government must now step in?
… So what is so uniquely terrifying about “the internets” to require this degree of enhanced control? Are children really in any more danger online than they are in a library? Or is the bigger problem the unfettered access to electronic information and tools many caregivers provide, out of laziness or lack of due care?

The forward-looking danger
Even assuming one agrees with the restrictions, this is another case of establishing loose guidelines in a law and allowing a regulatory body to “figure it out” over time as it sees fit. We see this pattern often in the US: 1,000-page laws with vague mandates that unelected bureaucrats must then interpret and enforce. The result is things like OSHA focusing on precise heights for handrails rather than actual incident and injury rates, and the Bureau of Alcohol, Tobacco, Firearms and Explosives spending time debating how to legally classify a rifle while a backdrop of illegal gun purchases and murders goes unaddressed.
More frightening still: with the power to define “harmful” in the hands of an appointed regulatory body, how might that definition change based on who is in power? Most western democracies have seen significant swings across both sides of the political spectrum in the last decade…
Do we really want censorship rules determined by appointees of the “party of the week” rather than by actual legislation? Today it’s “encouragement to suicide” that counts as objectionable speech, but what if tomorrow it were “acknowledging Judeo-Christian culture”, or “acknowledging the validity of LGBTQ+ individuals”? Both are extreme examples to make a point, but both could legally happen under the current law.
Wrap up
UK lawmakers have taken a historic step – but is it a step or a stumble, and in which direction? There is plenty of good intent “paving the road”, but where does it lead? That will ultimately come down to the people implementing and enforcing this new regime, and to how businesses react (or are forced to react) to it. And hey, humans in positions of authority generally don’t mess these things up, right?
… right?
Appendix: Regulated categories and bill / law details
Official Bill content link from the Parliament: Online Safety Bill – Parliamentary Bills – UK Parliament
- Content considered harmful to children (not literally spelled out in the bill, but based on the current reading, public dialogue, and Ofcom comments)
- child sexual abuse
- controlling or coercive behaviour
- extreme sexual violence
- illegal immigration and people smuggling
- promoting or facilitating suicide
- promoting self-harm
- animal cruelty
- selling illegal drugs or weapons
- terrorism
- cyberflashing
- “deepfake” porn
- Scope of impact as defined in the law – “Service” refers to any internet-based service offering access in the UK
- The search content of the service
- The design, operation and use of the search engine in the United Kingdom
- In the case of a duty that is expressed to apply in relation to users of a service, the design, operation and use of the search engine as it affects United Kingdom users of the service

