Is your organisation at risk from Shadow AI?
Whether it's via a support ticket, an online data protection compliance meeting or a data walk around an organisation, we are having lots of conversations about Shadow AI: the use of AI tools, applications or models by staff or students without the formal approval, oversight or governance of the organisation.
This 'hidden adoption' of AI presents a significant security and compliance challenge for organisations, escalating the traditional risks associated with 'Shadow IT'.
Generally speaking, people use AI for efficiency, for convenience, or because of a perceived lack of suitable, approved tools. Staff members facing administrative burdens, or teachers seeking innovative teaching methods, often turn to readily accessible public AI tools.
Examples of Shadow AI Use:
- Teaching staff using AI to quickly draft lesson plans, create quiz questions or generate initial ideas for tailored feedback.
- Administrative staff might paste internal emails, draft newsletters or school policies into public AI tools for proofreading or to improve readability.
- Researchers and faculty might upload snippets of research data for quick analysis or drafting.
- Students might use unmonitored generative AI to complete assignments, essays or code, sometimes even where it is prohibited.
A key incentive is the ease of access and the perception that official approval processes are too slow or restrictive.
The Risks of Shadow AI
The use of unvetted AI tools, particularly when handling sensitive and personal data, introduces a range of risks:
1. Data Privacy and Data Protection Violations
This is the most significant risk. When staff or students input sensitive information into public, unapproved AI tools, that data is transferred to external servers, often with no clear information about where it is stored, how it is processed or how it will be used.
Entering personally identifiable information about pupils, staff or parents into such tools risks breaching UK GDPR and the Data Protection Act 2018. The organisation loses control of the data, which may then be used by the third-party AI provider to train its models, compromising the data's confidentiality and security.
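As a purely illustrative mitigation, some organisations screen text for obvious personal data before it is sent to any external AI service. The sketch below is a minimal Python example of that idea; the `screen_for_pii` helper and its regex patterns are hypothetical and would catch only the most obvious cases, not act as a complete detector.

```python
import re

# Illustrative patterns only; real PII detection needs far more than a few regexes.
PII_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "UK mobile number": re.compile(r"\b(?:\+44\s?7\d{3}|07\d{3})\s?\d{3}\s?\d{3}\b"),
    "National Insurance number": re.compile(
        r"\b[A-CEGHJ-PR-TW-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b", re.IGNORECASE
    ),
}

def screen_for_pii(text: str) -> list[str]:
    """Return the PII categories spotted in `text` (hypothetical helper)."""
    return [label for label, pattern in PII_PATTERNS.items() if pattern.search(text)]

draft = "Please proofread: contact Ms Smith on 07700 900123 or j.smith@school.example."
findings = screen_for_pii(draft)
if findings:
    print("Stop: remove", ", ".join(findings), "before using an external AI tool.")
else:
    print("No obvious personal data found; your AI policy still applies.")
```

A check like this is a prompt for the user, not a guarantee: it reduces accidental disclosure but does not make an unapproved tool compliant.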
2. Cyber Security Gaps
Shadow AI creates security vulnerabilities that an organisation's IT team often can't manage or track.
Unapproved software can introduce vulnerabilities and malware, increasing the risk of a cyber incident and subsequent data breach. Without a clear governance framework, there is no audit trail to demonstrate compliance should there be an incident.
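To make the audit-trail point concrete, the sketch below shows one hypothetical way an IT team might surface Shadow AI use from existing web proxy logs by counting requests to well-known public AI services. The CSV log format, the `proxy_log.csv` path and the domain list are all assumptions for illustration; a real environment would use its own proxy's export format and a maintained domain list.

```python
import csv
from collections import Counter

# Hypothetical watch list of public AI service domains.
AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
    "copilot.microsoft.com",
}

def summarise_ai_traffic(log_path: str) -> Counter:
    """Count requests per user to known AI domains.

    Assumes a CSV proxy log with 'user' and 'host' columns; adapt this to
    whatever format your proxy actually exports.
    """
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("host", "").lower() in AI_DOMAINS:
                hits[row.get("user", "unknown")] += 1
    return hits

if __name__ == "__main__":
    for user, count in summarise_ai_traffic("proxy_log.csv").most_common():
        print(f"{user}: {count} request(s) to known AI services")
```

Even a rough report like this gives governance discussions a starting point, though it should feed an approval process rather than a blanket ban.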
3. Bias, Inaccuracy and Ethical Harm
Generative AI is prone to producing factually incorrect information, known as hallucinations, and can reproduce biases present in its training data. If such output is dropped into a lesson plan without proper verification, misinformation could be shared with students.
4. Academic Integrity
The unmonitored use of AI by students poses a direct threat to academic integrity. It becomes difficult for an organisation to accurately assess a student's knowledge, critical thinking and original efforts. Research suggests that over-reliance on AI for tasks like summarising, writing or problem-solving can lead to a decline in essential skills.
The government recently announced changes to the school computing curriculum to include data literacy, AI and the ethical use of technology: Government announces changes to computing curriculum
How to Mitigate the Shadow AI Threat
1. Clear Guidance and Policy
Have clear guidance and policy on which software and tools the organisation allows, and for what purpose. Have an AI policy: DPE customers can find ours at Model AI Template Policy. It should be reviewed in line with the organisation's Acceptable Use Policy.
2. Training
Ensure staff understand the risks of using Shadow AI, particularly in relation to data privacy, responsible and ethical use, and how to write prompts. Consider using the DfE AI Checklist and displaying the 'Need, Read, Proceed' poster around your organisation.
3. Governance
Ensure the AI policy is part of the organisation's governance and compliance framework, and consider including it in your safeguarding, cyber security and data protection training. Provide continued awareness training rather than a single annual session. Review the AI policy often to keep pace with the rapid growth of AI.
4. Transparency
Be transparent with staff and parents. Explain what AI you are using and how you are using it. Ensure any due diligence has been completed.
AI Support & Guidance Resources
DPE customers can review our AI Best Practice Area for support, guidance and template policies. Also consider completing our AI Checklist.
Schools and colleges should review the DfE's AI in Education guidance.
Review the Public Sector Cyber Security in the Age of AI report.
Everyone can watch our Short Guide to AI video.
