New DfE AI Standards
The DfE previously issued training and guidance about the use of AI in education - this has now changed to standards. Standards define minimum requirements that must be met, whereas guidance offers recommended best practice or advice. The new standards set out the safety requirements that generative AI products and systems should meet before being used in educational settings.
Our consultants have been talking to schools this academic year about their use of AI, and many are unsure whether their staff are using it at all. The new standards are essentially stricter rules, with a focus on what AI-related harm might look like in education. Our advice to schools is always:
🤖Be clear about how your organisation can/can't use AI.
🤖Be clear about which AI tools can be used and which ones can't after you have assessed the risk. Remember to add the information to your privacy notices!
🤖Have an AI policy - our customers can review our AI Best Practice Area for templates and guidance. Ensure you properly research and document risks.
🤖Train staff on how to enter prompts and not share personal data when doing so. Review our article: The importance of AI literacy and training staff
Key points from the standards:
- Educational use cases - products should fall into one of the following categories:
- Content creation and delivery (e.g. lesson plans)
- Personalised learning and accessibility
- Assessment and analytics (e.g. marking and feedback)
- Digital assistant
- Research and writing aid
- Learner engagement and interaction
- Administrative and management (e.g. reports, parent communication)
- Other - edtech developers and suppliers should specify if their product does not fall into one of the previous use case categories and provide a clear statement of purpose and use case.
- Filtering - generative AI products must effectively and reliably prevent users from accessing harmful or inappropriate content.
- Monitoring and reporting - the generative AI product must maintain robust activity logging procedures.
- Security - the generative AI product must be secured against malicious use or exposure to harm.
- Privacy and Data Protection - the generative AI product must be compliant with relevant data protection legislation and regulations including:
- having a robust approach to data handling and transparency around the processing of personal data
- ensuring a lawful basis for data collection.
- Intellectual Property - the generative AI product must not store, collect or use intellectual property created by learners, teachers, or the copyright owner for any commercial purposes, such as training or fine-tuning of models.
- Design and testing - the generative AI product must prioritise transparency and children's safety in its design.
- Governance - the generative AI product must be operated with accountability. This includes:
- carrying out risk assessments
- instigating formal mechanisms for lodging complaints
- demonstrating that its operations, decision-making processes and data handling practices are understandable and accessible to government agencies and users.
- Cognitive development - Edtech developers and suppliers of products should make every effort to mitigate the potential for cognitive deskilling, or long-term developmental harm to learners.
- Emotional and social development - Edtech developers and suppliers of products should take every action possible to mitigate the potential for harm to the emotional or social development of learners, including the potential for emotional dependence.
- Mental health -
- products should detect signs of learner distress
- products should follow an appropriate pathway when distress is detected, including providing tiered response actions
- products should use safe and supportive response language
- developers should implement safeguarding and governance measures
- Manipulation - it is expected that products do not use manipulative or persuasive strategies.
DfE Generative AI: product safety standards
What does this mean for data protection in schools?
Schools are already required to perform DPIAs for 'high-risk' processing. The DfE now expects these assessments to specifically cover:
1. Mandatory Updates to DPIAs
- Automated Decision-Making - Under UK GDPR (Articles 13 & 14), schools must be able to explain the "logic" behind an AI’s output to parents and students.
- Algorithmic Bias - Schools must ensure the AI doesn't discriminate against students based on protected characteristics (e.g., ethnicity or SEND status).
2. Strict Limits on "Commercial Use" of Data
A major compliance shift involves how student data is reused.
- No Training Without Consent: Schools must ensure that any AI tool they use does not use student or teacher work to train or fine-tune its models for commercial purposes unless explicit consent is obtained.
- Ownership: The standards clarify that students and teachers (or their employers) own the copyright to their prompts and outputs. Schools must ensure their contracts with AI providers don't accidentally "sign away" these intellectual property rights.
3. Heightened "Lawful Basis" Scrutiny
The DfE points to the ICO's position that "Legitimate Interest" is likely to be the relevant lawful basis for processing personal data in AI tools used by children.
- The "Three-Part Test": Schools must be able to demonstrate a legitimate purpose, show that the use of AI is necessary for education, and confirm that the benefits are not outweighed by risks to the child's privacy.
4. Age-Appropriate Transparency
Under the ICO Children’s Code, transparency isn't just a legal "small print" requirement; it must be functional.
- Child-Friendly Privacy Notices: Schools must ensure that the AI tools they provide use age-appropriate language to explain what is happening to a student's data.
- Monitoring Alerts: If a school is monitoring a student's AI prompts for safeguarding (Filtering and Monitoring), the student must be clearly told they are being tracked.
5. Safeguarding vs. Data Privacy
The standards introduce a requirement for "Real-Time Alerts" for Designated Safeguarding Leads (DSLs).
- Data Minimisation vs. Safety: Schools must balance the need to monitor (safeguarding) with the principle of data minimisation. The standards suggest that while AI can monitor engagement levels, it should avoid disclosing the full content of student inputs to teachers unless a specific safeguarding threshold is met.
6. Due Diligence on "Data Residency"
Schools must confirm where the AI provider processes data. If the AI model is hosted outside the UK or EEA (which many major LLMs are), the school must verify that "Standard Contractual Clauses" or other international transfer safeguards are in place.
Summary Checklist for School DPOs:
✅ Review Contracts: Does the supplier guarantee student data won't be used for model training?
✅ Update DPIA: Does the DPIA address "hallucinations," "jailbreaking," and "algorithmic bias"?
✅ Check Age Assurance: Does the tool have robust age-verification to prevent younger children from accessing high-risk models?
✅ Transparency: Are pupils and parents aware of how their data is being used by the AI?
📽️ DfE Training Videos:
Leadership toolkit for using AI in education
https://www.youtube.com/watch?v=LYThB6tvTrE&list=PLXjcCX3hH9LXi--UBhJUzs1c_9Y4MOmrb
DfE Playlists:
https://www.youtube.com/@DfESectorComms/playlists
DfE Support page:
https://www.gov.uk/government/collections/using-ai-in-education-settings-support-materials
BBC News Article: Teachers can use AI to save time on marking, new guidance says
