After years of debate, the government's controversial Online Safety Bill, which aims to make the internet safer for children, has become law.

It seeks to force tech firms to take more responsibility for the content on their platforms. Technology Secretary Michelle Donelan said it “ensures the online safety of British society not only now, but for decades to come.”

But critics have raised concerns about the implications for privacy. WhatsApp is among the messaging services to threaten to withdraw from the UK over the act.

What is the Online Safety Bill?

The new law puts the onus on firms to protect children from some legal but harmful material, with the regulator, Ofcom, being given extra enforcement powers. It introduces new rules such as requiring pornography sites to stop children viewing content by checking ages. Platforms will also need to show they are committed to removing illegal content including:

  • child sexual abuse
  • controlling or coercive behaviour
  • extreme sexual violence
  • illegal immigration and people smuggling
  • promoting or facilitating suicide
  • promoting self-harm
  • animal cruelty
  • selling illegal drugs or weapons
  • terrorism

Other new offences have been created, including cyber-flashing – sending unsolicited sexual imagery online – and the sharing of “deepfake” pornography, where AI is used to insert someone's likeness into pornographic material.

The act also includes measures to make it easier for bereaved parents to obtain information about their children from tech firms.

What else does the Online Safety Bill do?

Powers in the act that could be used to compel messaging services to examine the contents of encrypted messages for child abuse material have proved especially controversial.

Platforms like WhatsApp, Signal and iMessage say they cannot access or view anybody's messages without destroying existing privacy protections for all users, and have threatened to leave the UK rather than compromise message security. Proton, an email provider with a focus on privacy, says it would be prepared to fight the government in court if it is asked to alter its end-to-end encryption.

“The internet as we know it faces a very real threat,” said Proton CEO Andy Yen, who says the bill gives the government the power to access people's private messages. “No-one would tolerate this in the physical world, so why do we in the digital world?”

The government has said the regulator Ofcom would only ask tech firms to access messages once “feasible technology” had been developed. Wikipedia has also previously said it would not be able to comply with some of the act's requirements, such as age verification.

While the act is often spoken about as a tool for reining in Big Tech, government figures have suggested more than 20,000 small businesses will also be affected.

Who will regulate the Online Safety Bill?

Tech companies that break the rules could face fines of up to 10% of their global revenue, or £18m – whichever is bigger. Their bosses could also face prison.

Ofcom says it will draw up codes of conduct that will provide guidance on how to stay within the new rules, with its first draft codes coming on 9 November. Its boss has also addressed some of the concerns raised about its new role.

“Ofcom is not a censor, and our new powers are not about taking content down. Our job is to tackle the root causes of harm,” said the regulator's CEO, Dame Melanie Dawes.

“Importantly, we'll also take full account of people's rights to privacy and freedom of expression,” she added.

What do campaigners say?

The Equality and Human Rights Commission welcomed the law, calling it “a vital first step in addressing harmful content and behaviour online.” Sir Peter Wanless, NSPCC chief executive, said the law “will mean that children up and down the UK are fundamentally safer in their everyday lives.”

He added that this was partly “thanks to the incredible campaigning of abuse survivors and young people”. Campaigners have included Ian Russell, whose 14-year-old daughter Molly took her own life in 2017 after viewing suicide and self-harm content on sites such as Instagram and Pinterest.

However, fact-checking organisation Full Fact, which supported the bill, said “retrograde changes” made to it meant it did not go far enough “to address the way that platforms treat harmful misinformation and disinformation.”

Full Fact's head of policy and advocacy Glen Tarman continued: “Our freedom of expression is left in the hands of self-interested internet companies, while dangerous health misinformation is allowed to spread rampant.”
