
The rise in AI-generated pictures amplifies body-image concerns

Monday December 18 2023

Photo generated by AI platform Dall-E2 in response to the prompt “perfect masculine or feminine body”. PHOTO | DALL-E2

By VINCENT OWINO

Growing up, Joy Omondi, 24, had always been referred to by her peers as the “fat girl with acne”. The teasing affected her self-esteem, but it was trivial compared with what the world of technology had in store for her adult self.

Joy, a financial analyst, is a textbook case of how interacting with generative AI tools can be detrimental to mental health, especially for young people who are still discovering themselves mentally and emotionally.

“When I got my first phone, it got worse,” she recounts.

 “People would share photos of women with smaller bodies, fairer skin, and other things that I didn’t have, or had the opposite of, and that didn’t help with my self-esteem at all,” she says.


Yet the photos that made her feel so inadequate were no ordinary pictures. Most of them were not real, and, as Joy has since learnt, artificial intelligence, though still in its nascent stages at the time, had played a critical part in creating them.


As artificial intelligence (AI) becomes more mainstream, its unintended consequences continue to sprout in an unregulated environment, particularly with the advent of Generative AI (GenAI) tools, which generate practically anything in the virtual realm, from images to text to videos.


A woman viewing a photo gallery posted on social media. PHOTO | SHUTTERSTOCK

While AI has been shown to improve worker productivity, aid the early prediction of extreme weather events and augment medical solutions, among other benefits, it is also disrupting multiple spaces: displacing jobs, fuelling misinformation on social media platforms and, in some cases, aggravating mental health concerns, experts say.

At some point, Joy also learnt how to use AI tools to improve her own photos, and for a while, she was confident enough to post her own images on social media platforms.

“I started using the tools to make my skin look fairer, and I would receive a lot of likes and positive comments, which I never did when I posted my real self,” she says.

Joy says the consistent feeling of inadequacy threw her into depression, and not even modifying her online photos could help.

Fortunately, her parents helped her accept and love herself, and today she is confident in her own skin; unreal images online no longer affect how she views herself. But for many other young people, the internet remains the standard reference for beauty and perfection.


Latasha Blackmond, author of Be You, No Filter – a self-help book on defeating online self-comparison – agrees that young people are increasingly drawing their self-worth from how they compare to online images, many of which are not entirely real.

“There’s a shift that’s happening where people, especially young girls, are comparing themselves, and may not be feeling worthy because of what they see online,” Ms Blackmond told The EastAfrican during an interview.

Indeed, a recent study by the University of Alberta in Edmonton, Canada, found that over 90 percent of women and 65 percent of men today compare themselves to images on the internet, real or not, resulting in a negative self-image more than half of the time.

“It works against their self-concept and self-image and also their esteem,” observes Dr Ann Wamathai, a counselling psychologist and dean of students at Nairobi-based Utalii College.

“The images generated by AI or on social media could portray what is trending or what is selling out there, and to conform, young people try to imitate them and sometimes do terrible things to themselves, like using steroids to build their body and piercing certain body parts.

“For those who don’t resolve it or find a way out, it can really be a problem,” Dr Wamathai said.

According to the World Health Organisation (WHO), one in seven people aged 15 to 29 has a mental health condition such as depression, anxiety or a behavioural disorder, which, Dr Wamathai says, can be linked to issues such as body image and self-esteem.


These conditions account for 13 percent of the global burden of disease in this age group and can lead to suicide, the fourth leading cause of death among 15-29-year-olds globally, according to WHO data.


Young people glued on their phones in a bus. PHOTO | SHUTTERSTOCK

Unrealistic bodies

In May, mental health charity the Bulimia Project found that when image-generating AI platforms like Midjourney and Dall-E2 are asked to produce a picture of an ‘ideal’ person, over 40 percent of the images they generate depict “unrealistic body types”, indicating that AI’s perception of perfection is generally distorted from reality.

To verify this, we prompted Dall-E2, owned by American tech firm OpenAI, to generate images of the “perfect” male and female bodies, and indeed, half of the images returned had physical features that were clearly exaggerated and practically unattainable.
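For readers who want to run a similar test, below is a minimal Python sketch of how such a prompt could be sent programmatically through OpenAI’s image-generation API. It is an illustration only: the exact method used for this article is not documented here, and the model name, image size and prompt wording in the sketch are assumptions.

```python
# Hypothetical sketch: requesting images from OpenAI's image API.
# Assumptions: the openai Python package (v1.x) is installed and an
# API key is set in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-2",                             # assumed model name
    prompt="perfect masculine or feminine body",  # prompt quoted in the article
    n=4,                                          # number of images to request
    size="1024x1024",
)

# Print the temporary URLs of the generated images for manual review.
for image in response.data:
    print(image.url)
```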

OpenAI’s head of safety systems, Lilian Weng, did not respond to our queries on whether this has been brought to the company’s attention, whether it is aware of the impact it could have on body-image issues, and whether it is doing anything about it.

Another platform, this-person-does-not-exist.com, which generates random pictures of non-existent people, also exhibited serious biases.

For example, when asked to generate pictures of black women aged between 18 and 25, 90 percent of the images showed very light-skinned women with long hair – features characteristic of mixed-race people that some consider desirable.


While both platforms charge premium fees of up to $15 per image – which not everyone can pay – the pictures they generate can still be posted on social media or on websites, carrying the biases of the AI tools to a much wider audience, as AI expert Rem Darbinyan recently argued in a Forbes column.

Bryan Koyundi, an AI researcher and developer, says that AI programs learn and provide answers based on the information available to them, which can be a collection of pre-prepared data or what they find on the internet or social media.

“When you see the results are detached from reality, then it’s because the training datasets used to train these models are basically flawed,” Mr Koyundi told The EastAfrican.

“The second challenge is that today a lot of AI tools are self-training, and the problem with that is that they train on basically any data available to them because one thing with AI is that the cost of training models is much more expensive than the cost of production.”

Mr Koyundi explains that because training AI models is costly, especially when it involves hiring humans, some developers and companies take the self-training route, and the result is biases that leave the generated images detached from reality. But even models trained by human beings are likely to inherit the individual biases of their trainers, so the results they give will tend to reflect those trainers’ stereotypes or beliefs, he adds.
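The point is easy to demonstrate in miniature. The sketch below is a deliberately simplified, hypothetical Python illustration – the labels and the 90/10 split are invented for this example – of how a generator that merely reproduces the statistics of its training data will echo whatever imbalance that data contains.

```python
# Hypothetical sketch: a toy "generator" that simply samples from its
# training data. Real image models learn far richer patterns, but the
# principle is the same: a skewed dataset yields skewed output.
import random
from collections import Counter

# Invented toy training set: 90 percent of examples carry one narrow
# body-type label, mimicking the kind of imbalance found in scraped images.
training_data = ["slim, light-skinned"] * 90 + ["other body types"] * 10

def generate(n_samples: int) -> list[str]:
    """Produce new 'images' by sampling from the training distribution."""
    return random.choices(training_data, k=n_samples)

if __name__ == "__main__":
    print(Counter(generate(1000)))
    # Roughly 900 of the 1,000 samples reproduce the dominant label:
    # the model's "ideal body" is just the bias baked into its data.
```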


The generative AI market is currently estimated to be worth $44.89 billion and is projected to grow at an annual rate of 24 percent, hitting $207 billion by 2030, according to data firm Statista, but regulations continue to lag behind.

According to Mr Koyundi, this is an area worth regulating – stipulating who is doing what and to what extent they can generate media.

“But...regulating AI is generally not easy because most of the tools used to develop them are open-source, which are very hard to regulate. So, we might want to just leave it open, so developers should just regulate themselves.”

In Kenya, the East African region and most of Africa, there are currently no regulations or bills under legislative consideration that specify how AI-generated content may be used or limit what AI-powered tools can do.

Last week, the European Union agreed the world’s first-ever comprehensive AI regulation, which, among other things, will require that any AI-generated content be disclosed as such, “so users can make informed decisions on further use,” the EU said in a statement last Friday.
