AI: transformative or toxic?

Why we need an intervention to rebalance bias in tech

By Cat Dickie, Copywriter at Flourish

There’s much for marketers to embrace about ChatGPT and its counterparts, but saving time doesn’t need to cost us our values.

AI has the potential to transform the way we work in marketing. The time-saving benefits of copy-generating tools are undeniable. Need a 700-word SEO article on the benefits of embalming, complete with keywords? ChatGPT draws on its training data and spits out passable, grammatically correct content in under a minute. Meanwhile, the copywriter is still boiling the kettle.

However, when it comes to content, AI is far from a panacea. AI algorithms have been shown to perpetuate the same gender, racial and ability biases that are ingrained in society, and these biases cannot be ignored.

Unconscious bias is an underlying issue in AI development. If a team building an AI system is predominantly white and male, their unconscious biases will influence the way the system is designed. This is how we end up with facial recognition software that has lower success rates for people of colour, particularly those with darker skin tones. As well as being frustrating on a day-to-day basis – we’ve all heard by now about the hand-dryer that doesn’t recognise brown skin – this can have serious consequences. It can lead to false arrests and wrongful convictions, disproportionately impacting communities of colour.

Chatbots are another area where gender bias in AI is prevalent. A chatbot designed to assist with job interviews, for example, may favour male applicants or use biased language when interacting with female applicants. This perpetuates gender stereotypes and reinforces gender inequality in the workplace.

That’s just the tip of the iceberg. I use these examples to illustrate a problem that so often flies quietly under the radar: an invisible missile of discrimination. Institutional racism, sexism and ableism are real problems in tech, so it tracks that these issues continue to be perpetuated by human-built systems.

There are disrupters working to dismantle bias in tech. Take Misjourney, an AI image generator that creates artwork of women exclusively. Because right now, when you ask an AI to visualise a professional, only 20% of the images generated are women. The platform launched to coincide with International Women’s Day 2023, and its creators recognise that diversity goes beyond gender. Their aim is to inspire developers to build fundamentally inclusive technology that is representative of society as a whole.

As AI becomes omnipresent, the need for tools to be mediated by ethics increases. But until responsible AI becomes widespread, we marketers have a responsibility to be aware of these biases in order to mitigate them. As a copywriter, I’m still not ready to lobotomise my output in the name of saving time. Is it because I enjoy inserting gratuitous words like ‘lobotomise’ into my copy? Yes. But more than that, it’s because I feel strongly about dismantling unconscious bias, while AI currently perpetuates it. Right now, I trust myself more than an algorithm to deliver on the values of inclusivity that most of today’s brands are keen to uplift.
