Guardians of Privacy: Social Media, Privacy, Children and the AI Threat
This article combines guidance from the Guardians of Privacy series, produced by Data Protection Education in collaboration with Litus Digital, with urgent new advice issued in May 2026 following confirmed blackmail attempts against UK schools using AI-manipulated images of children. Key sources include the UK Safer Internet Centre (8 May 2026) and the Internet Watch Foundation.
The Stakes Have Never Been Higher
Schools have always faced risks when posting children's images on social media. Those risks have now escalated sharply. In early 2026, the Internet Watch Foundation (IWF) confirmed that an unnamed UK secondary school had been the target of a blackmail attempt in which criminals took photographs of pupils directly from the school's website and social media accounts, used AI tools to create sexually explicit images, and then threatened to publish them unless a payment was made. The IWF assessed that 150 of the manipulated images met the legal threshold for child sexual abuse material (CSAM) under UK law.
This was not an isolated case. The IWF has confirmed awareness of further similar attempts involving UK schools, and the Early Warning Working Group (EWWG), an advisory body on online harms that includes the NSPCC, the National Crime Agency, Education Scotland and the Welsh Government, has warned that it is "only a matter of time" before more schools are targeted. The safeguarding minister Jess Phillips described it as a "deeply worrying emerging threat."
The Loughborough Schools Foundation has already redesigned its website to remove recognisable images of pupils. The Confederation of School Trusts, whose academies educate over four million children in England, said schools would "carefully consider" the guidance.
For Data Protection Officers, governors, headteachers and school business managers, this changes the risk calculus for social media fundamentally.
The EWWG, whose guidance was published via the UK Safer Internet Centre, is not a fringe body. Its membership includes Childnet, Education Scotland, Embrace (Child Victims of Crime), the IWF, the Lucy Faithfull Foundation, the Marie Collins Foundation, the National Crime Agency's CEOP Education team, the NSPCC, the Safeguarding Board for Northern Ireland, Samaritans, SWGfL, The Children's Society, and the Welsh Government. The guidance it has produced carries the weight of the UK's foremost child safety organisations.
The threat also resonates with children and families themselves. Research published alongside the guidance found that 60% of eight to 17-year-olds are worried that AI may be used to create inappropriate or sexual content of themselves or their peers. Some 65% of parents and carers share that concern. Schools operating without updated guidance and policy are doing so against a backdrop of widespread public anxiety about exactly this risk.
What This Means for Data Protection
Photographs from which an individual can be identified are personal data under UK GDPR. Schools have always been required to treat them accordingly. The emergence of AI-generated abuse material does not create new legal obligations, but it dramatically raises the consequences of getting it wrong, and it means that risk assessments which previously concluded that posting was acceptable may now need to be revisited.
The Lawful Basis Problem
Social media does not fall within a school's public task. That means there is no Article 6(1)(e) lawful basis for posting identifiable images of children on public-facing platforms. Consent is required from parents or guardians (and, where children are old enough to understand, from the children themselves). That consent must be:
- Freely given: not bundled with enrolment or implied by silence
- Specific: granular enough to cover the type of content, the platform, and the purpose
- Informed: which now means telling parents about the AI manipulation risk
- Withdrawable: schools must be able to act promptly when consent is withdrawn
Given what is now known about AI-enabled abuse, any consent obtained before this threat was publicly understood may not have been truly informed. Schools should consider whether existing consents remain valid, and whether their consent forms need updating to reflect the current risk landscape.
Retention and the Right to Erasure
A post cannot be completely removed from the internet even when deleted from the social media channel or school website. Images scraped by search engines, saved by users, or archived by third-party services may persist indefinitely. This is a data retention issue as well as a safeguarding one: schools must understand that once an image is published publicly, they lose practical control over it. The right to erasure under UK GDPR becomes extremely difficult to honour in practice.
This makes the decision to post in the first place the most important data protection decision, not what happens afterwards.
Data Breach Obligations
If a school's images are used in an AI blackmail attempt, this is likely to constitute a personal data breach under UK GDPR, potentially requiring notification to the ICO within 72 hours of the school becoming aware, and to the affected data subjects (children and their families) without undue delay. Schools should ensure their breach response procedures cover this scenario specifically, and that staff know to escalate immediately rather than attempting to handle it quietly.
If a blackmail attempt occurs:
- Do not pay the ransom. In some circumstances, paying may itself be illegal.
- Contact the police immediately.
- Preserve evidence: do not delete communications from the blackmailers.
- Remove the original images from the school's public-facing platforms.
- Contact the IWF (iwf.org.uk): it can convert manipulated images into digital fingerprints (hashes) and share them with major platforms to prevent further distribution.
- Consider whether a reportable breach has occurred and contact your DPO, if you have one, or the ICO.
- Inform affected families sensitively and promptly.
EWWG chair Will Gardner put the human dimension plainly: "It is incredibly sad to think that pictures of children taking part in school activities, showing their rightful and positive place in their communities, have now become a target for cynical scammers willing to exploit children to make money."
IWF Hotline Manager Tamsin McNally added: "These blackmail threats to schools feel very similar to the cases of financially motivated sexual extortion of children that we see every day in the IWF Hotline. However, owing to the rapid improvement in AI technology, schools and hundreds of children's images can now be used for blackmail by criminals. We feel it is only a matter of time before more schools are targeted in this manner, and our experience is that girls are usually the primary victims of image abuse."
The Specific Risks Schools Must Now Assess
When deciding whether to post any image or video involving children, schools should assess risk across several dimensions:
AI manipulation risk (new and elevated)
Any publicly available photograph of a child, face-on, in school uniform, identifiable by name or context, can be used as source material by AI tools to generate explicit images. The EWWG specifically warns against images that clearly show pupils' faces alongside identifying information such as names. The risk is not theoretical; it is documented and increasing: the Report Remove service received 394 reports of blackmail attempts from under-18s in 2025, a rise of 34% on the previous year.
Publicly available content
There is no access restriction on a public social media post. Anyone, including overseas criminal gangs, can view, download and repurpose it. The NCA has linked sextortion operations to organised criminal groups based in West Africa, including Nigeria, and the language used in the school attack matched scripts commonly used by such gangs.
Vulnerable children
For children with safeguarding concerns, including those recently adopted, resettled following domestic violence, or subject to court orders restricting disclosure of their whereabouts, even an apparently innocent photograph could reveal their location or identity to those from whom they need protection. Group photographs and school assembly videos are particularly risky because they can inadvertently capture children whose families have specifically withheld consent.
Captions and contextual disclosure
A data breach does not require a photograph. If a social media post reveals information about a child who was not photographed, through a caption naming them, a background detail identifying their presence at an event, or a reference to their academic achievement or participation in a specific activity, that too can constitute a breach if it contradicts the family's privacy wishes.
Account security
Schools are targets for cyber attacks, including credential theft affecting social media accounts. If an account is compromised, previously posted images remain accessible and can be weaponised. Strong access controls, multi-factor authentication and regular access reviews are essential.
Smarter Ways to Celebrate: With Privacy Built In
The desire to share end-of-year achievements, sports days, graduation ceremonies and leavers' events is entirely understandable and comes from genuine community spirit. Schools can still celebrate effectively while materially reducing risk.
Use secure, private platforms
Password-protected school portals or intranets, accessible only to verified parents and guardians, allow images to be shared without public exposure. Watermarking images discourages unauthorised re-sharing. This approach gives families the warmth of celebration without handing control of their children's images to the open internet.
Photograph from behind or at a distance
Images taken from behind, over the shoulder, or from a distance at which faces are not clearly distinguishable can convey the atmosphere of an event without providing usable source material for AI manipulation. The guidance endorsed by the EWWG specifically recommends this approach.
Use non-identifiable imagery
Close-ups of hands-on activities, decorated classrooms, project work, vibrant colours and movement can capture the spirit of an occasion without showing faces. Such images carry virtually no risk of misuse.
Blur or pixelate faces
Where event photography has already been taken, basic editing tools can blur or pixelate faces before publication. This is a practical middle ground for schools that wish to post but want to reduce the risk.
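For readers curious what pixelation actually does, the underlying technique is simple block averaging: any mainstream photo editor or imaging library achieves the same effect. A minimal illustrative sketch, assuming a grayscale image represented as a plain 2D list and a face bounding box supplied by hand (real workflows would use an image library and, optionally, automated face detection):

```python
def pixelate(image, box, block=8):
    """Pixelate the region box = (top, left, bottom, right) of a
    grayscale image (a 2D list of 0-255 values) by replacing each
    block x block tile with its average value."""
    top, left, bottom, right = box
    out = [row[:] for row in image]  # work on a copy
    for by in range(top, bottom, block):
        for bx in range(left, right, block):
            # Tile bounds, clipped to the box edges.
            y2, x2 = min(by + block, bottom), min(bx + block, right)
            tile = [out[y][x] for y in range(by, y2) for x in range(bx, x2)]
            avg = sum(tile) // len(tile)
            for y in range(by, y2):
                for x in range(bx, x2):
                    out[y][x] = avg
    return out

# Hypothetical example: pixelate a 4x4 "face" region inside an 8x8 image.
img = [[(y * 8 + x) % 256 for x in range(8)] for y in range(8)]
blurred = pixelate(img, box=(2, 2, 6, 6), block=4)
```

The larger the block size relative to the face, the less source material survives for AI tools to work with; a light blur that leaves features recognisable offers little protection.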
Celebrate achievements without naming individuals
Class or year-group achievements can be shared in aggregate. "Year 6 have raised £800 for charity" carries no personal data risk. "Congratulations to [name], our outstanding mathematician" accompanied by a face-on photograph in school uniform does.
Student-created content
Older students, with appropriate consent and supervision, can create written reflections, audio recordings or digital artwork that captures their experiences without raising image risks.
Offline and direct-to-parent sharing
Physical photo displays within school, printed photo albums distributed to families, and images shared through secure parent communication platforms (where access is limited to registered families) all achieve the celebratory purpose without public exposure.
What Schools Should Do Now
Immediate steps
- Audit existing public-facing images. Review school websites and social media accounts for identifiable photographs of current pupils. Consider whether the risk assessment that justified their publication remains valid in light of the AI blackmail threat. The Loughborough Schools Foundation's decision to remove such images from its website is a reasonable model.
- Check consent validity. Were parents informed of the AI manipulation risk when consent was obtained? If not, consider whether existing consents are truly informed and whether you need to re-seek consent with updated information.
- Update your photography and social media policy. Policies written before this threat emerged will not adequately address it. They should now explicitly cover AI risks, the EWWG guidance, and the school's response procedure for blackmail attempts. The official Image Guidance for Education Settings from the UK Safer Internet Centre provides a checklist specifically designed to help staff recognise and respond to incidents of image-based abuse.
- Review and update your Data Protection Impact Assessment (DPIA). Any processing that involves publicly posting identifiable images of children should be covered by a DPIA. Given the changed risk landscape, existing DPIAs need revisiting.
- Brief staff. Those who manage social media accounts, take photographs at events, or make decisions about publication need to understand the current threat. This includes the instruction not to engage with blackmailers and to contact the DPO and police immediately.
- Strengthen account security. Ensure all school social media accounts use strong, unique passwords and multi-factor authentication. Limit the number of people with access. Review whether former employees or governors still have access.
Ongoing practice
- Apply granular consent: parents should be able to consent or withhold consent separately for different uses and platforms, and should be able to withdraw consent at any time with a clear process for how the school will respond.
- Follow best practice guidance on photography: face-on images in school uniform with names attached are the highest-risk combination and should be avoided on public platforms.
- Include social media image risks in your annual safeguarding training.
- Know your escalation route: DPO → police → IWF → ICO (if a reportable breach).
The Legal Framework
The key legislation and guidance schools must have regard to includes:
- UK GDPR and the Data Protection Act 2018: the primary framework for lawful processing of children's personal data, including photographs
- The Children's Code (Age Appropriate Design Code): ICO guidance setting out how online services must protect children
- The Online Safety Act 2023: platforms hosting illegal content (including CSAM) face significant obligations; schools benefit from the IWF's work in taking down such material
- The Criminal Justice Act 1988 and the Protection of Children Act 1978: possession and distribution of CSAM is a serious criminal offence; AI-generated CSAM is captured by existing law
- Forthcoming legislation: the safeguarding minister has signalled that legislation on AI-generated explicit images will be updated further if necessary, building on the recently announced ban on possessing AI models designed to generate CSAM
Conclusion
The enthusiasm to celebrate children's achievements through social media comes from entirely the right place. But schools now operate in an environment where publicly posted photographs of pupils can be weaponised within hours by criminal gangs operating thousands of miles away. The legal obligations were always there; the consequences of ignoring them have never been more concrete.
The path forward is not to stop celebrating but to celebrate more thoughtfully, using approaches that keep joy and privacy in balance. Secure platforms, distance photography, non-identifiable imagery, and robust consent processes all make it possible to share what matters without handing source material to those who would exploit it.
The well-being and safety of every child must come first. In 2026, that means taking the AI blackmail threat seriously as a live and escalating risk, not a theoretical future concern.
For further guidance, see the full Guardians of Privacy series. The official image security guidance for schools is available at the UK Safer Internet Centre. Professional guidance on AI-generated CSAM from the IWF and NCA is available at the Internet Watch Foundation website. If you believe your school has been targeted by a blackmail attempt involving manipulated images, contact police immediately, do not pay any ransom, and report to the IWF at iwf.org.uk. The Childline/IWF Report Remove service (for under-18s who have been targeted personally) is available at childline.org.uk/report-remove.
Guardians of Privacy: Social Media Articles
Guardians of Privacy: 16. Social Media Checklist
Guardians of Privacy: 15. Navigating Social Media in Educational Settings Summary
Guardians of Privacy: 14. Social Media and Cyber Bullying
Guardians of Privacy: 13. Social Media, Copyright and Intellectual Property
Guardians of Privacy: 12. Social Media and Going Viral
Guardians of Privacy: 11. Staff Social Media Accounts
Guardians of Privacy: 10. Social Media and Cookies
Guardians of Privacy: 9. Social Media and Morality
Guardians of Privacy: 8. Social Media Policies
Guardians of Privacy: 7. Social Media Data Retention
Guardians of Privacy: 6. Posting Safely
Guardians of Privacy: 5. Social Media and Consent
Guardians of Privacy: 4. Social Media Access Control
Guardians of Privacy: 3. Social Media Channels
Guardians of Privacy: 2. Law and Regulations