Interviews 10.06.2025

Interview: Professor Dame Wendy Hall

Professor Dame Wendy Hall stands as a towering figure in British science and technology, an academic pioneer and a technology visionary. Eleanor Dallaway picks the brain of the woman considered a driving force behind national and international AI strategy

With a career spanning decades at the forefront of artificial intelligence and data science, Professor Dame Wendy Hall currently serves as Regius Professor of Computer Science at the University of Southampton and Chair of the Ada Lovelace Institute. 

How do global geopolitical tensions shape the development and deployment of cyber intelligence infrastructure and influence AI-driven surveillance policies?

In the book I co-wrote with Kieron O’Hara, Four Internets, we discuss the geopolitical forces driving the fragmentation of the internet today. The same geopolitical forces are at play as AI emerges, being trained on the data generated from the internet.

In a nutshell, the US is home to many of the largest companies in the Western world. Its approach is very market-driven: the big companies, based on the West Coast, lobby in Washington to obtain the rules and regulations they need to help generate profits. You are seeing that play out with AI, where former President Biden met with the vice presidents of the big tech companies to discuss how to regulate AI.

Self-regulation – is that a good idea? We do not know. We can debate the issues surrounding data protection, privacy, trust and security, but we can only partially influence them, because companies outside Europe are not required to abide by these rules; they only have to worry about them when trading within Europe. We have some soft power there, but we do not have the market power the US has, and those companies are expanding further around the world.

Of course, in China, the approach from the very beginning has been to use technology first to help people communicate, but secondly, and perhaps more predominantly, for surveillance and control. We see that creeping into the Western world. I see threats to the democratic process, as autocrats do well in this data-controlled world.

That is all underpinned by the original open internet, which remains intact. The open protocols, TCP/IP, were invented over 50 years ago and are still the primary model we use. On top of that sit the US, European, and Chinese models. Can we maintain the internet’s technical operation with a global agreement on the technical protocols? Without that agreement, we have no global internet. AI runs on top of all this, and it will put further political pressure on the entire ecosystem.

What guidance would you offer to organisations integrating generative AI into cyber intelligence workflows, while ensuring human oversight and decision-making remain central?

Generative AI is nothing to be frightened of. I think we will all start to use generative AI, software that helps us write, summarise, and argue about things.

I liken it to the time when calculators first appeared and people questioned how to trust the answers they produced. Now we have a finance industry run by computers. All the old ways of doing things by hand, the ledger systems, have gone. Yet we have more jobs than ever in the finance industry, and I expect that trend to continue with the rise of generative AI.

“We must view it as something that augments human intelligence, rather than taking control, because it currently can’t”

We will be relieved not to have to write essays about things, and it will undoubtedly aid creativity. But you need to view it as an adjunct, augmenting what we do and not taking over, because it is not clever enough to take over.

I do, however, see a future where AI becomes part of the team that decides how to handle a legal case, a medical diagnosis, or a problem pupil at school. There will be teams, and AI will be part of those teams. We will ask the AI questions, and it will come back with answers. However, we must view it as something that augments human intelligence, rather than taking control, because it currently can’t. We can’t trust the answers.

The data being fed into generative AI is highly biased, and because it is trained on the internet, a lot of it is incorrect. We have this lovely term ‘hallucinating’, meaning it will make things up if it does not know the answer. We have to think of it as part of the team, part of what we do, and use it to help us be more productive, more creative and have a better working life. Perhaps even a shorter working week – maybe we will get a four-day week out of all this!

As AI becomes more embedded in cybersecurity and surveillance systems, how can organisations ensure ethical use of these technologies within cyber intelligence operations?

Take face recognition, for example. We still haven’t fully determined the rules and regulations regarding when people can apply face recognition technology.

Did anyone ask you whether you wanted the face recognition technology on your phone? You get offered it as a system download, and then you can choose to use it.

We all know that face recognition in surveillance happens in China, but it’s also happening in a creeping way in Europe and the US. Our security forces are using it. I do like the fact that there’s a CCTV camera in the car park at night – it makes me feel safer.

All these new technologies, as well as the emerging AI technologies yet to come, have both positive and negative aspects. That’s the yin and the yang. With most technologies, there are benefits and threats. We must learn how to maximise the benefits for the good of humanity, society, ourselves, and our businesses, while also mitigating the threats. That is what we still must learn to do.


Professor Dame Wendy Hall is a true trailblazer in British science and technology. One of the first computer scientists to explore multimedia and hypermedia, she was a key architect in the development of web science. She currently serves as Regius Professor of Computer Science at the University of Southampton and Chair of the Ada Lovelace Institute, and is a driving force behind national and international AI strategy.

Assured Intelligence extends its gratitude to John Hayes for facilitating this interview.

 
