Bad robots: Tackling AI’s gender bias

Artificial intelligence has a gender bias problem. This much was clear at the recent PPA Festival in London when Lisa Smosarski, Editorial Director at The Stylist Group, chairing a session on how to tackle the issue, explained to delegates what happened when she gave the job titles of her all-female panel to the image generator Hotpot.ai. The prompts ‘Editorial Director’, ‘CEO/Co-founder’ and ‘Managing Director of Digital Content’ all came back as men. Only when ‘Assistant General Counsel’ was entered did Hotpot.ai return a photo of a woman.

“It illustrates the problem we are seeing with generative AI,” said Smosarski. “When it comes to the content we are creating through AI, and particularly how AI is trained and what data sources it’s being trained on, it’s widely acknowledged by the tech platforms we speak to that bias does exist, and it’s something that they’re thinking about.

“We know that gender is at the forefront of that, but this really does extend to racial, political and any other biases that are already rife in the content the data is trained on.”

To discuss the AI biases we are faced with and what to do about them, Smosarski was joined at the PPA Festival by media leaders from Immediate Media, Shutterstock and the recently closed Untapped AI, a company that used a blend of human and technology-based approaches to help people and organisations untap their full potential.

From left: Kendal Parmar, Untapped AI; Hannah Williams, Immediate Media; Eleanor Krivicic, Shutterstock; Lisa Smosarski, The Stylist Group


So, let’s set the scene. What is the reality of the bias we are seeing in AI?

Eleanor Krivicic, VP and Assistant General Counsel, International at Shutterstock: Well, when we think about bias in AI, you first need to look at what the inputs were. How did we get here? You need to look at the ingredients that were put into those models in order to generate those outputs. Think of a model like a brain, and that brain needs to be trained on billions of pieces of data. So how was that brain fed? It was fed by companies and researchers basically indiscriminately scraping the web and grabbing images, text and data – completely unfiltered and unmoderated – and pumping that into their model. All the historical biases and societal inequalities get represented in that data, and that’s how you end up with what you’ve just shown, where CEOs are white and male, and women are underrepresented in any positions of authority.
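
Krivicic’s point about unfiltered inputs can be made concrete with a simple audit. The sketch below is purely illustrative – the records, field names (‘caption_role’, ‘perceived_gender’) and numbers are hypothetical, not drawn from any real scraped corpus – but it shows how tallying demographic tags per role surfaces exactly the kind of skew Smosarski’s Hotpot.ai experiment exposed.

```python
from collections import Counter

# Hypothetical rows from a scraped image-caption dataset; the field
# names and values here are illustrative, not from any real corpus.
records = [
    {"caption_role": "CEO", "perceived_gender": "man"},
    {"caption_role": "CEO", "perceived_gender": "man"},
    {"caption_role": "CEO", "perceived_gender": "woman"},
    {"caption_role": "nurse", "perceived_gender": "woman"},
    {"caption_role": "nurse", "perceived_gender": "woman"},
]

# Tally how each role is represented across the dataset.
by_role: dict[str, Counter] = {}
for r in records:
    by_role.setdefault(r["caption_role"], Counter())[r["perceived_gender"]] += 1

# Report each role's gender share so heavily skewed roles stand out.
for role, counts in by_role.items():
    total = sum(counts.values())
    shares = {g: round(n / total, 2) for g, n in counts.items()}
    print(role, shares)  # e.g. CEO {'man': 0.67, 'woman': 0.33}
```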

Kendal Parmar, Co-Founder & Chief Executive Officer of Untapped AI: The company I ran was a coaching business using AI, and the beauty of it was we were coaching lots of women. The data we were collecting, which was online, was diverse, and using diverse data to tune and change things is really important. We also created the product and fine-tuned it with EQ, which was the main skill of the women I recruited, because they were coaches – psychologists and psychotherapists. It’s not that women are better or worse than men when it comes to emotional intelligence, but as a woman, by default, given the society we live in, you’re more likely to be in a caring type of role. So, the EQ of the women I recruited in particular was really high, and we brought that into the product to make it more tuned. Women, or people of difference, all have so much to bring, and we have to bring it, because otherwise it’s just going to be so skewed.

If we don’t resolve these issues surrounding bias, where could it take us?

Hannah Williams, Managing Director, Digital Content at Immediate Media: To get a dystopian vision, I don’t think you have to stray too far from the reality we already see. The route to equity in power is all about who you encourage and have at the table for the conversation and the decision-making, and who you see as being relevant to this subject. At Immediate, the cornerstone of our AI programme has been that this is for everybody. So, we started with an immersion day where we had a whole host of speakers – from data scientists to ethicists and entrepreneurs – come and talk to the company, and they talked to everybody, from the post room to the C-suite. One of those talks was by Dr Kerry McInerney, who is a leading AI ethicist and founded the Good Robot podcast. She spoke about a study in which she audited a vast number of Hollywood blockbuster sci-fi films. She found that the role of chief AI scientist was played by a woman the same number of times it was depicted as a mole-rat. So, if you listen to Hollywood, you are as likely to be a leading AI scientist if you’re a mole-rat as if you’re a woman. That’s the reality we are consuming now, and that’s pretty worst case.

I wanted to touch on implicit bias in AI – for example, instances where ChatGPT or Bard might omit certain countries from a list, or women are not being served certain job ads on Facebook. How do you address these red flags?

Eleanor Krivicic: I think it’s indisputable that these models are generally created by the tech industry, typically in the Western world and by white men. That’s what’s contributing to these biases. We are missing training data inside those models. The internet advanced in the Western world far earlier than it did in other parts of the world, so there is certain data you won’t even find on the internet to scrape. The question is, how do you rectify that? At Shutterstock we do recognise that there are omissions in the data, so one thing you can do is try to attract underrepresented artists and underrepresented images – attract diversity and find people with different voices, different perspectives on life, different backgrounds and different emotions. We do that partly by going out to our contributor network and telling them this is what we’re looking for, and partly by promoting it through our Create Fund in order to fill content gaps with underrepresented content that we don’t have.

Hannah Williams: We’re probably at the point where we’re trying to encourage people to work with AI tools in content creation, or at least experiment, rather than getting to the point where we are publishing it. But I think there are two points when it comes to this. First, who are the people informing the use cases of AI in your company? Because if it’s happening in the tech department, separately from your journalists, those use cases aren’t going to be particularly powerful. Then there’s the wider conversation around content production more generally – how authentically representative is your content of your audience, and how diverse are your talent pools? Our output is only ever going to be as good at mitigating bias as the content we’re producing. So, we’re at that first step of making sure that our content production mitigates bias. We’re not quite at the output point yet.

Kendal Parmar: I think there is such a big issue of people not leaning into AI because they are not techies. They are intimidated by it. In my own experience of recruiting a lot of women, particularly mums, they thought: ‘Oh my God, I can’t go there. I can’t do AI.’ So, there’s that whole thing of encouragement, and that you don’t have to be a techie and don’t have to be a developer (to go into tech). Whilst I’m absolutely passionate about girls and people of difference, people of colour, people of different sexuality going into tech and the whole pipeline, I never thought I’d be running a tech company. My children continually take the mickey out of me for how un-techy I am – I can’t even turn on the projector at home. But it doesn’t matter. So, there’s that whole thing of confidence and leaning in.

What are the pertinent questions we have to ask about AI in the future?

Hannah Williams: They’re all variations on a theme, but firstly, what can the AI actually do well, and who’s informing those use cases? How are they truly adding value and not just reinventing the wheel? How are you ensuring that everybody who has a role to play in informing that debate is having a say in your programme? Also, if we are controlling what goes into these models, then they are going to be reflective of the goals and values of your organisation. They’re not going to invent them. So, what does the wider ecosystem of your content production look like? What does your ED&I programme look like, the diversity of your networks, the authenticity with which you are truly representing your communities? I think we need to be asking ourselves those questions.

Eleanor Krivicic: I think we’re just trying to be mindful of all the ways bias can be created in the output of AI. Some of it might be generated in the metadata. In visual imagery you might have biases reflected in how you’re capturing the image – but how is that image categorised, who’s categorising it, and through what lens? We’re taking a conscious approach, creating content guides that give our contributors guidance on how to write inclusive and diverse metadata, and how to make sure that whatever is in your image gets reflected in the metadata. Historically, though, a lot of images might not have been generated or categorised through that lens; they might have been categorised with a view to optimising SEO or some other algorithm, again contributing to the bias. And then on the other side, you can look at using technological means to try to mitigate bias. You might not have a lot of examples of women in senior positions in your data set, so we’re trying to use technological means to give those images more weight, to basically counterbalance some of the biases.
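
The reweighting Krivicic alludes to is often implemented as inverse-frequency sample weights, so that underrepresented groups contribute as much to the training signal as overrepresented ones. The sketch below is a minimal illustration under that assumption – the group labels and counts are hypothetical, and this is not a description of Shutterstock’s actual pipeline.

```python
from collections import Counter

# Hypothetical metadata groups for a training set: 900 images tagged
# as male CEOs, 100 as female CEOs. Labels are illustrative only.
labels = ["ceo_man"] * 900 + ["ceo_woman"] * 100

# Inverse-frequency weighting: each group's weight is scaled so that
# all groups contribute equally in aggregate to the training signal.
counts = Counter(labels)
n_groups = len(counts)
weights = {g: round(len(labels) / (n_groups * c), 2) for g, c in counts.items()}
print(weights)  # {'ceo_man': 0.56, 'ceo_woman': 5.0}

# Expand to one weight per sample; in a training loop these could be
# multiplied into the loss, or passed to a weighted sampler such as
# torch.utils.data.WeightedRandomSampler to rebalance minibatches.
sample_weights = [weights[g] for g in labels]
```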

How should publishers be working with those creating AI?

Hannah Williams: If an equitable, diverse understanding of the world relies on having quality data inputs, then I would absolutely love to see engaged conversation between quality publishers and diverse networks with big tech. And you can only presume that a collective conversation would hold more power. I don’t know how that gets facilitated, but I would love to see it. Then, in terms of the benefits for our industry: I’ve been working with editorial teams for the last 20 years, transforming what they do in terms of embracing new formats, and the same pushback I always get lambasted with is that we do not have enough time. We don’t have enough time to maintain the quality of what we know whilst also learning and upskilling relentlessly. So, absolutely, AI is going to be a huge tool in that arsenal, but it isn’t the saving grace on its own. For media companies and publishers, our biggest competitors at the moment are TikTok and YouTube, and they are people in their bedrooms with heaps of authenticity. They have loads of time, loads of passion and no massive corporate overheads to feed. So yes, I think AI will give us the gift of time, but if you really want to see whimsy back, or creativity and inspiration, or Hunter S Thompson vibes of lived experience, you’ve got to look at how you’re setting up your creators to truly create, and also the measures you’re holding them to in terms of success.
