AI and privacy: risks and opportunities in IA

ID: 2177

Presenting Author: Zoe Mullard

Session: 591 - Managing the right to privacy in impact assessment

Status: pending


Summary Statement

This paper explores how AI can streamline IA while upholding privacy, recognizing cultural and linguistic diversity, and applying ethical frameworks.


Abstract

Artificial intelligence (AI) is reshaping impact assessment (IA) by enabling faster data synthesis, predictive modeling, and inclusive engagement. When responsibly designed, AI can uphold the right to privacy and support the respectful integration of language, culture, gender-based analysis, and health data. This paper explores how AI tools—such as language processing (e.g., for Indigenous language translation), voice and text recognition to support oral traditions, and bias detection to reduce systemic inequities—can enhance both the efficiency and equity of IA processes. Privacy-preserving techniques such as anonymization, federated learning, and secure data architectures are examined alongside culturally grounded governance frameworks, including the OCAP® principles, the United Nations Declaration on the Rights of Indigenous Peoples (UNDRIP), and the General Data Protection Regulation (GDPR). By weighing both the risks and opportunities of AI, the paper proposes guiding principles for ethical implementation that protect sensitive information, celebrate cultural diversity, and align with human rights standards.


Author Bio

Zoe Mullard is a Partner at ERM in the Social Performance and Stakeholder Engagement Team. She has more than 12 years of experience in impact assessment for projects around the world.
