
Best Practice Update

Image: a robot wearing an orange hoodie holding a piece of paper with the words "Data Protection Education".

IAPP looks at AI privacy risks

This week the IAPP published a set of AI privacy risks in the wake of concerns over how AI should be regulated. There are moves to regulate AI, such as the EU AI Act; however, because AI remains quite an unknown quantity, there is a lot of unease and uncertainty around its use, ethics, privacy and intellectual property.
AI is a rapidly emerging technology, growing faster than regulations can be reviewed or put in place, so what do we need to consider in the meantime about protecting an individual's privacy?

The IAPP article references research into AI incidents: Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks. By combining a regulation-insensitive approach with real-world, fact-checked incidents, the authors were able to compile a list of 12 risks:

  1. Surveillance: AI exacerbates surveillance risks by increasing the scale and ubiquity of personal data collection.
  2. Identification: AI technologies enable automated identity linking across various data sources, increasing risks related to personal identity exposure.
  3. Aggregation: AI combines various pieces of data about a person to make inferences, creating risks of privacy invasion.
  4. Phrenology and physiognomy: AI infers personality or social attributes from physical characteristics, a new risk category not in Solove's taxonomy.
  5. Secondary use: AI exacerbates the use of personal data for purposes other than those originally intended by making data easy to repurpose.
  6. Exclusion: AI worsens the failure to inform users, or to give them control over how their data is used, through opaque data practices.
  7. Insecurity: AI's data requirements and storage practices increase the risk of data leaks and improper access.
  8. Exposure: AI can reveal sensitive information, such as through generative AI techniques.
  9. Distortion: AI’s ability to generate realistic but fake content heightens the spread of false or misleading information.
  10. Disclosure: AI can cause improper sharing of data when it infers additional sensitive information from raw data.
  11. Increased Accessibility: AI makes sensitive information more accessible to a wider audience than intended.
  12. Intrusion: AI technologies invade personal space or solitude, often through surveillance measures.
The full IAPP article can be found here: Shaping the future: A dynamic taxonomy for AI privacy risks

The image for this article was created using Canva AI.