At our recent #OverExposed webinar, we invited academic Arsenii Alenichev to speak to us about the rising threat of AI-generated images. His research has revealed shocking biases that AI models appear to hold, and his testing has demonstrated how frequently inaccurate, misleading and harmful content is generated, particularly depicting people in African and Asian countries.
#OverExposed is our campaign for more ethical storytelling in the development sector, and a key part of this is ensuring that the images we capture, create and promote represent people in a fair and dignified way. At Chance for Childhood, we also choose not to show identifiable images of children, to protect their privacy. As advocates for ethical and authentic storytelling, it is clear that we need to pay close attention to developments in AI. Our recent session with Arsenii was not recorded, but this piece is intended to summarise the key topics. If you would like to join the next session, sign up here to receive updates.
Are models choosing to reinforce bias on their own?
A particularly worrying finding is that the AI model Arsenii has been testing (‘Banana’/Gemini 3 by Google) will create inaccurate images and ignore instructions in favour of producing a more stereotypical image. For example, entering the prompt ‘poor children’ returned exclusively images of Black children. A prompt to show volunteers helping in Africa produced heavily white-saviour-style images, with the volunteers always depicted as white people and the recipients of aid as non-white. The model also began adding logos of large charities such as Save the Children to the volunteers’ clothing, completely unprompted. A direct request to show a Black doctor treating white children was ignored: a white doctor was shown every time, despite repeated prompts. It is clear how closely aid is linked to whiteness in the existing source material, which is now being amplified by AI.
No real faces, no real problem?
Some organisations are now designing charity ad campaigns and communications featuring images created by generative AI models. But if AI models are creating these new images without ‘real’ people, does that make it ok? Does it remove our need for informed consent if real people’s identities are not being shown?
The issue is that these images are drawn from existing, real faces – most of whom will never have given consent for their image to be used, and will never know that it has been used in this way. The types of images AI models are producing are also extremely damaging to real people and have real consequences – this short film explores how AI-generated content is negatively impacting the Hadzabe indigenous community in Tanzania, using their culture and customs to make videos of them without their permission. Online image libraries such as Adobe Stock and Freepik are also profiting from selling these types of images.
How is the sector using AI?
In our webinar, we ran a poll to take a temperature check of how attendees are currently using AI, and what they see as the biggest risks and benefits. 22 people answered our three anonymous questions.
The poll found that people’s main use for AI is ‘editing/proofreading’ (27%). Only 2% reported using it for image generation.
Respondents reported the biggest benefit as ‘saves time’ (41%) and the biggest risk as ‘reduced authenticity’ (64%). No respondents said that their organisation does not use AI at all.
AI is here to stay, and as advocates of ethical storytelling, we need to stay aware of developments in AI imagery and of how charities are using AI in their communications. There are clear threats to our mission of promoting authentic, human-centred stories told with informed consent.
Thank you to Arsenii for a fascinating presentation. You can find him on LinkedIn here, where he shares updates about his research, and make sure you stay up to date with our upcoming #OverExposed sessions by signing up here.