Online images and texts portray women as less experienced than men across all occupations
On the internet, professional women are represented as younger, and therefore less experienced, than their male counterparts, even though this age difference does not correspond to real-world data in the US, according to an article published in Nature. The study of gender and age stereotypes is based on an analysis of 1.4 million images and videos from five platforms (Google, Wikipedia, IMDb, Flickr and YouTube), as well as nine large language models, such as ChatGPT, trained on text from Reddit, Google News, Wikipedia and Twitter.
Marian Blanco Ruiz
Lecturer in Audiovisual Communication and Advertising, coordinator of the Advertising and Public Relations strand of the Communication Sciences doctoral programme
This study of age and gender distortion in images and language models, published in Nature, reinforces what feminist research has been pointing out for decades: technology is not neutral; rather, it reproduces and even amplifies pre-existing cultural gender stereotypes and roles. The finding that women are represented as younger than men in prestigious occupations reflects a long-standing cultural pattern linked to what Laura Mulvey called the 'male gaze'.
These results also confirm what various feminist technoscience studies have noted: algorithms learn from a biased cultural archive, organised around hierarchies of gender, race and class, among others. The concern is that, when incorporated into automated systems with great social authority, such biases are not merely symbolic but become discrimination with real effects on people's daily lives, for example in access to medical coverage, rental housing or employment. It is precisely this practical dimension that is the central contribution of the evidence in this work. The article points out that women are not only represented as younger, but are also evaluated as less competent than men. This finding shows that the 'Jennifer and John effect', whereby identical CVs are judged as less competent when they carry a woman's name, remains very present in AI developments.
These results, however, are limited by the research method itself: the measurements depend on tools that carry their own biases. It would be worthwhile to complement this study with a qualitative analysis that incorporates a critical perspective and can interpret the results intersectionally, since gender biases are intertwined with other axes of exclusion, such as class or race, and affect different groups unequally. Beyond describing these dynamics of inequality, the article makes clear that the urgent challenge is to design strategies that question the cultural assumptions on which artificial intelligence models are trained and that allow more equitable and inclusive digital infrastructures to be built.
Nuria Oliver
Scientific Director and co-founder of the ELLIS Alicante Foundation
The article makes significant contributions by providing the first large-scale evidence that age-related gender bias is a widespread distortion, present in digital visual content (images, videos) and in nine language models, and that it is also systematically amplified by algorithms. According to this bias, there is a tendency to assume that women are younger—and therefore less experienced—than men in relation to their professions or social roles.
The relevance of this study lies in the rigorous quantification of this bias against verifiable objective anchors—in particular, US Census data showing that there are no systematic age differences between women and men in the working population—which allows us to move beyond the controversial debate about the accuracy of stereotypes. The study causally demonstrates that Google Image searches amplify the perceived age gap by 5.46 years, and that ChatGPT propagates this bias by generating CVs that assume women are younger and less experienced than men, especially in high-status, high-income occupations.
This highlights the urgent need for intervention, particularly given that the bias is strongest where women face persistent pressure to appear youthful (the “beauty tax”) and where older women suffer disadvantages in hiring and promotion (gendered ageism). This study connects with research carried out at ELLIS Alicante on attractiveness bias and beauty filters (What is beautiful is still good: the attractiveness halo effect in the era of beauty filters), since beauty filters tend to make people look younger (5.87 years on average).
The technical soundness of the article is high, characterised by a large-scale methodology that combines the analysis of nearly 1.4 million images and videos from Google, Wikipedia, IMDb, Flickr and YouTube, a pre-registered human experiment with a nationally representative US sample (n = 459), and a quantitative audit of nearly 40,000 CVs generated by ChatGPT. The methods are carefully controlled so that the results generalise, including comparisons with census data and the use of objective age information, such as the documented ages of famous people.
The main limitation is that, while the study confirms algorithmic amplification, identifying the precise causal mechanisms by which industry-specific aesthetic norms or biases are transferred to generative AI remains a critical area for future research. It would also be important to develop strategies to mitigate this bias, as well as to extend the work to other regions of the world, since both the census data and user studies have been conducted with representative populations from the United States.
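To make the CV-audit methodology described above concrete, here is a minimal sketch of how such an audit could be set up in Python against the OpenAI API. The name pairs, occupations, prompt wording, model name and year-extraction heuristic are all illustrative assumptions, not the authors' actual protocol.

```python
import re
from statistics import mean

from openai import OpenAI  # pip install openai

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Illustrative name pairs and occupations; the published audit used far
# larger, carefully matched sets and nearly 40,000 generated CVs.
NAMES = {"female": ["Jennifer", "Maria"], "male": ["John", "David"]}
OCCUPATIONS = ["surgeon", "software engineer"]


def generate_cv(name: str, occupation: str) -> str:
    """Ask the model for a short CV; the prompt wording is an assumption."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name, not the one in the paper
        messages=[{
            "role": "user",
            "content": (f"Write a short, realistic CV for {name}, "
                        f"a {occupation}. Include a year of birth."),
        }],
    )
    return resp.choices[0].message.content


def birth_year(cv_text: str) -> int | None:
    """Crude extraction: first plausible four-digit year near 'born'/'birth'."""
    m = re.search(r"(?:born|birth)\D{0,20}(19\d{2}|20\d{2})", cv_text, re.I)
    return int(m.group(1)) if m else None


for occupation in OCCUPATIONS:
    avg = {}
    for gender, names in NAMES.items():
        years = [y for n in names
                 if (y := birth_year(generate_cv(n, occupation)))]
        avg[gender] = mean(years) if years else None
    if avg["female"] is not None and avg["male"] is not None:
        gap = avg["female"] - avg["male"]
        print(f"{occupation}: women's implied birth year is "
              f"{gap:+.1f} years later on average")
```

Even this toy version follows the logic of the audit: hold the occupation fixed, vary only the gendered name, and compare the ages the model implies.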
Pablo Haya Coll
Researcher at the Computational Linguistics Laboratory of the Autonomous University of Madrid (UAM) and director of Business & Language Analytics (BLA) at the Institute of Knowledge Engineering (IIC)
The study demonstrates a widespread algorithmic bias based on gender and age, which represents women as younger and less experienced than men on digital platforms such as Google and in language models such as ChatGPT. This bias, which is more pronounced in high-status professions, influences perceptions and employment decisions, to the disadvantage of older women. Overall, the research shows that these algorithms reinforce structural inequalities and distort the social representation of women and men in the digital environment.
This study is particularly relevant because it shows how global algorithmic biases can be reproduced in our European digital environment, affecting the representation and employment opportunities of women and men. By relying on international platforms such as Google or ChatGPT, Spanish society, in particular, is exposed to distortions that can reinforce stereotypes, limit access to positions of responsibility, and perpetuate gender and age inequalities.
In my opinion, it is necessary to promote algorithm audits and demand transparency in AI systems, in line with the obligations established by the European Artificial Intelligence Act (AI Act), which seeks to ensure the safe, ethical and non-discriminatory use of these technologies. This is particularly relevant in job-selection processes that use AI, which the AI Act classifies as high risk. I also believe it is essential to promote critical digital education in primary and secondary schools so that these biases can be detected.
Douglas Guilbeault et al.
- Research article
- Peer reviewed