Interviews 03.07.2025

Interview: Kay Firth-Butterfield

Insights into the ethical risks and complexities of bias in AI, the challenges of digital transformation and the regulatory implications of irresponsible AI

Kay Firth-Butterfield made history as the world’s first Chief AI Ethics Officer, later serving as the Head of AI and Machine Learning at the World Economic Forum. Eleanor Dallaway asks her to share her insights into the ethical risks of AI and picks her brain on navigating the complexities of bias in AI, the challenges of digital transformation and the regulatory implications of irresponsible AI

A former judge and barrister, Kay Firth-Butterfield made history as the world’s first Chief AI Ethics Officer and now leads Good Tech Advisory as CEO. With an impressive career spanning law, academia, and global policy, Kay has dedicated her life to ensuring AI is developed and deployed in ways that are both responsible and equitable. 


What are the ethical risks of AI and big data?

After my legal career, I entered the AI world and immediately began thinking about the risks AI poses to humanity, business, and governments. I was the world’s first Chief AI Ethics Officer. I had to come up with a name, and in hindsight, I perhaps should have gone for Trustworthy Technology Officer or Responsible AI Officer. 

However, in those days, we weren’t yet using terms like ‘trustworthy’ or ‘responsible’ — ethics was the word. The difficulty with ‘ethics’ is that people’s ethics differ, especially when you consider geographical diversity. The world agrees that we should prioritise safety and robustness, but we should also be concerned about bias in the system.

Bias is created in two ways: when biased historical data is input into the machine and affects decision-making, and through those who code. A lack of diversity among coders is problematic because coders bring their own values, and if most coders are males under the age of 30, bias will likely creep in. Only 22% of coders are currently female.

How can businesses successfully use AI? 

They need to be very aware of the responsible or trustworthy aspects of artificial intelligence. You should never deploy or create AI without ethics at the forefront of your mind. Failing to get it right can undermine a deployment or, worse, damage your brand or cause financial loss. Regulators are also starting to take action against those using AI irresponsibly or without trust built in.

“We witnessed a confidential Samsung memo being leaked globally when an employee used ChatGPT to transcribe it”

So, where is best to deploy AI? It’s often used in human resources to help with talent spotting, but bias needs to be considered. We’re seeing lawsuits in the US where companies that purchased AI for HR are being sued for using discriminatory technology.

It’s used in manufacturing across factory floors and in pharmaceutical companies to design medications. There’s generative AI, which everyone is talking about. You can use it in business, but be aware that if you use models like ChatGPT, the data you feed it goes in and could potentially come out anywhere. So don’t give it trade secrets.

We witnessed a confidential Samsung memo being leaked globally when an employee used ChatGPT to transcribe it. 

So, if you are using generative AI in business, it’s essential to understand what AI is. It just predicts the next word — it’s not actually intelligent. Let teams use it after your legal department has approved it, and your C-suite understands how you utilise it.

What advice would you give to businesses looking to transform digitally? 

My best advice would be to do it slowly and carefully. Start with data. You need to convert your data into a machine-readable format. Only once you have cleaned and prepared your data can you even contemplate using artificial intelligence. Question whether you want your data to be created by human beings or to be synthetic.

“There will be a law regulating AI in Europe, and if you trade with Europe, you will be required to comply”

There will be a law regulating AI in Europe, and if you trade with Europe, you will be required to comply. So, think not just about AI applications, but also risks, compliance, and law in the transition. Involve the Chief Strategy Officer too — where is this digital transformation going? What do you want to achieve from it? You can spend a lot without getting far if AI is not the solution. You need a true AI strategy to achieve your goals. If you decide that yes, you’re ready to use artificial intelligence, I would encourage you to hold your suppliers’ feet to the fire and ask the right questions.

What does ‘hallucinating’ mean in the context of AI?

It basically means that generative AI is lying to you — confidently presenting information that isn’t true. That’s another problem, because we need to reach a point where we can trust the machine’s outputs. And as those fabrications are published and fed back into AI systems, the models come across them and repeat them.

Kay Firth-Butterfield is a globally renowned authority on artificial intelligence, ethics, and governance, and is widely recognised as one of the world’s top speakers on machine learning.

Assured Intelligence extends its gratitude to Mark Matthews for facilitating this interview.
