Online Safety Bill

  • By Rachel Buttrick

Summary 

The Online Safety Bill was introduced into Parliament on 17 March 2022; it follows on from the draft bill published in May 2021. 

The bill contains a range of provisions relating to safety online, including the safety of children, safety from fraud, and safety from online harassment and sexual offences that disproportionately affect women. It will therefore be of interest to any councillors involved in work relating to violence against women and girls, child protection and consumer protection.

Key provisions of the bill

For all provisions, in-scope companies will have a legal obligation to have regard to the importance of freedom of expression when fulfilling their duties.

Provisions relating to online harassment and abuse

New criminal communications offences will be created:
• Harmful Communications Offence – a person commits an offence if they send a message where there is a real and substantial risk it would cause harm to a likely audience, and they intend to cause harm. This would cover, for example, sending flashing images to people with epilepsy with the intention of causing seizures.
• False Communications Offence – a person commits an offence if they knowingly send false information with the intention of causing non-trivial psychological or physical harm. This offence targets actions such as bomb hoaxes and dangerous fake medical advice.
• Threatening Communications Offence – a person commits an offence if they send a message that conveys a threat of death or serious harm, and either intend the recipient to fear the threat would be carried out, or are reckless as to whether the recipient experiences such fear. The government explicitly highlights the role this offence will play in supporting victims and survivors of domestic abuse who experience ongoing harassment or coercive control from perpetrators using online platforms.
• Cyberflashing – a person commits an offence if they send a photo or film of a person’s genitals with the intention of causing the victim humiliation, alarm or distress, or for their own sexual gratification while being reckless as to whether the victim is caused distress.
 
The largest social media companies (referred to in the bill as category one services) will be required to give users the power to control who interacts with them, including the ability to block users who have not verified their identity, in order to tackle anonymous harassment online. For example, platforms could give users the option to tick a box in their settings to receive direct messages and replies only from verified accounts. Social media companies will also be required to take action to tackle harmful content posted anonymously on their platforms and to manage the risks around the use of anonymous profiles. Platforms will have a duty of care to tackle racist abuse and will be required to have appropriate systems in place for preventing and removing hate speech.
 
The bill does not include provision to ban anonymity online, due to the negative impact that this would have on individuals who need to be anonymous for their own personal safety.
 
Platforms will have obligations to protect users communicating via private channels; details of this will be set out by Ofcom. Ofcom will be able to require a platform to use technology to scan public and private channels for child sexual abuse material.
 
Provisions relating to pornographic and harmful content 
Services will be required to prevent children from accessing pornography hosted on their sites. This includes commercial porn sites that allow user-generated content, social media and video-sharing sites, and providers who publish pornographic content on their services.
 
Companies will be required to proactively remove illegal content such as child sexual abuse imagery, the promotion of suicide, revenge porn, people smuggling and sexual exploitation, hate crimes and incitement to terrorism. Companies will also be legally required to report child sexual exploitation and abuse content they detect on their platforms to the National Crime Agency - reporting was previously voluntary.
 
Platforms likely to be accessed by children will also have a duty to protect children from encountering harmful content, including "legal but harmful" content such as the promotion of self-harm or anti-vaccination disinformation.
 
The largest online platforms (category one services) will also be required to give adult users the power to choose whether they want to be exposed to content classed as "legal but harmful". Sites will not have to remove legal content, but they must give adults the ability to filter it out, and they will have to make clear in their terms and conditions which content is and isn’t permitted.
 
Provisions relating to fraud 
Search engines and platforms that host user-generated content, video sharing or live streaming will have a duty of care to protect users of their services from fraud committed by other users, such as romance scams. Category one platforms and search engines will have a new legal duty to prevent fraudulent advertisements from appearing on their services.
 
Enforcement
If companies do not meet their responsibilities under the bill, Ofcom will have powers to impose fines of up to £18 million or 10 per cent of global annual turnover, whichever is higher – for example, a company with a global annual turnover of £1 billion could face a fine of up to £100 million – or to apply to the courts for business disruption measures (including blocking non-compliant services). Senior managers who fail to ensure their companies comply with information requests from Ofcom, or who deliberately withhold information, can face criminal sanctions.

Commentary

The Online Safety Bill has been long in development and is controversial on a range of fronts. Some elements of the bill are clearly welcome, including the introduction of cyberflashing as a criminal offence, which gives parity with laws on in-person flashing, and the requirement that platforms report child abuse and exploitation material to the National Crime Agency. Key concerns raised include the technological feasibility of the bill, as implementation relies on advanced technologies and high levels of moderation, and the capacity for enforcement, especially in relation to protecting children from accessing pornography and harmful material. Concerns have also been raised about the potential impacts on privacy and freedom of speech.
 
Commentary on proposals relating to VAWG
Women and girls are disproportionately impacted by online violence and harassment; according to research by Refuge, 1 in 3 UK women have experienced online abuse in their lifetime. The bill has been criticised by some in the violence against women and girls (VAWG) sector for failing to go far enough in tackling VAWG. The End Violence Against Women coalition criticised the bill for failing to explicitly define VAWG as a harm, and for not going far enough to force tech companies to tackle VAWG on their platforms. Concerns have also been raised about the capacity and ability of the police to investigate the new offences created by the bill, given the current police backlog and the challenges of investigating existing offences such as rape and assault.
 
Commentary on proposals relating to child safety
Concerns that the bill does not go far enough to protect children have been raised by children’s charities, among others. Specific concerns include gaps in addressing cross-platform abuse and content posted by abusers on social media that falls just short of illegality.
 
 
Rachel Buttrick, Principal Policy & Project Officer