


The UK Online Safety Bill becomes an Act (Law)

The UK Online Safety Bill became law on Thursday 26th October 2023. The UK Government says the Online Safety Act will protect people, particularly children, on the internet. The Act is intended to make social media companies keep the internet safe for children and give adults more choice over what they see online. Ofcom will immediately begin work on tackling illegal content and protecting children's safety.
The new laws take a zero-tolerance approach to protecting children from online harm, while empowering adults with more choice over what they see online. The Act places legal responsibility on tech companies to prevent and rapidly remove illegal content, such as terrorism material and revenge pornography. They must also stop children seeing material that is harmful to them, such as bullying, content promoting self-harm or eating disorders, and pornography.

Full Government Report: A guide to the Online Safety Bill.
The Act targets:
1. User-to-user services (with some exempted services, such as email and messaging platform providers)
2. Search services
3. User-to-user or search services that publish certain high-risk content

The duties of care listed in the Act are focused on the systems and processes that providers must have in place to ensure the safety of users. The Act provides Ofcom with powers to impose penalties of £18 million or 10 per cent of a company's global turnover, whichever is higher.
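The maximum-penalty rule above can be illustrated with a short arithmetic sketch (the function name `max_penalty` is ours, and this is an illustration of the headline figures only, not legal guidance):

```python
def max_penalty(global_turnover_gbp: float) -> float:
    """Illustrative maximum fine under the Online Safety Act:
    the greater of £18 million or 10% of global turnover."""
    return max(18_000_000, 0.10 * global_turnover_gbp)

# For a firm with £500m global turnover, 10% (£50m) exceeds the £18m floor:
print(max_penalty(500_000_000))  # 50000000.0

# For a smaller firm with £100m turnover, the £18m floor applies:
print(max_penalty(100_000_000))  # 18000000
```

In other words, the £18 million figure acts as a floor: the turnover-based percentage only becomes the binding figure for companies whose global turnover exceeds £180 million.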

The Act covers these main categories of harmful content:
  1. Illegal content: content relating to certain criminal offences (note that defamation is not included in this category)
  2. Content that is lawful but harmful to children
  3. Fraudulent advertising (although note that this duty applies only to providers of Category 1 or 2A services)
The Act ensures that social media platforms:
  • Remove illegal content quickly or prevent it from appearing in the first place, including content promoting self-harm
  • Prevent children from accessing harmful and age-inappropriate content, including pornographic content, content that promotes, encourages or provides instructions for suicide, self-harm or eating disorders, content depicting or encouraging serious violence, or bullying content
  • Enforce age limits and use age-checking measures on platforms where content harmful to children is published
  • Ensure social media platforms are more transparent about the risks and dangers posed to children on their sites, including by publishing risk assessments
  • Provide parents and children with clear and accessible ways to report problems online when they do arise

Ofcom will now work towards:

1. Registering relevant service providers
2. Creating codes of practice
3. Creating guidance for relevant service providers
4. Enforcement

Ofcom says that, while the onus is on companies to decide what safety measures they need given the risks they face, it expects implementation of the Act to make people in the UK safer online by delivering four outcomes:
  • Stronger safety governance in online firms.
  • Online services designed and operated with safety in mind.
  • Choice for users so they can have meaningful control over their online experiences.
  • Transparency regarding the safety measures services use.
Full Ofcom Article: Ofcom Roadmap to Regulation

The majority of the Act's provisions will be enforceable within two months.

The Act has taken a long time to get to this point, and the journey is summed up well by the NSPCC: