Paul Dongha is head of responsible AI and AI strategy at NatWest Group, where he leads the development of frameworks to ensure artificial intelligence is deployed safely, ethically and in line with regulatory expectations.
Having previously served as group head of data and AI ethics at Lloyds Banking Group, he has been at the forefront of embedding transparency, accountability and trust into enterprise AI systems.
With extensive experience shaping how financial institutions approach emerging technologies, Dongha offers a clear-eyed perspective on both the opportunities and the risks that AI presents for businesses and society.
In this exclusive interview with the Champions Speakers Agency, he discusses the ethical red flags CISOs and boards must monitor, the responsibilities of regulators and the real-world risks that demand attention today.
Q: What ethical red flags should CISOs and boards watch for when deploying AI inside their organizations?
Paul Dongha: “I think some of the standout risks with AI systems that have come to light recently are things like human agency.
“AI systems can create sophisticated outputs and, to some extent, that erodes humans’ ability to make the right decisions. The loss of human agency is something that we have to be very aware of and that risk has to be mitigated.
“Another risk is robustness. AI systems have the ability to sometimes give different answers to the same questions, so I think technical robustness — ensuring AI systems generate the same result for the same question over time — is something that has to be looked at as well.
“Data privacy is another. The ability of AI systems to inadvertently leak confidential or private information about individuals or organizations is something we also have to guard against.
“I think transparency is a really important one. The way a machine learning or an AI system works is nonlinear, so understanding how it arrives at a decision is hard to do. There are techniques that allow us to introspect how an AI system derived a particular answer, but they are only approximations. Transparency of the algorithm, whether it’s machine learning or generative AI, is something that we have to pay close attention to.
“Then there’s bias. We’ve seen bias creep into many systems and that really undermines their ability to support diversity and inclusion. Those biases can be inherent in the data that trains our AI systems or within the system development life cycle. It’s an ongoing area of work, and with generative AI it’s a particular problem because of the vast amount of training data involved.
“And finally, accountability. Organizations, particularly commercial organizations, need to demonstrate that they’ve got processes in place where, if people need to seek redress for the output of an AI system, they’re able to do so. Firms should take full accountability for how they create systems and how they operate.”
Q: Should every large enterprise have an AI ethics board — and what should its remit include?
Paul Dongha: “When it comes to the executives and decision-makers of large corporations, I think there are a few things here.
“Firstly, I believe an ethics board is absolutely mandatory. It should be made up of senior executives drawn from diverse backgrounds within the organization, where those participants have a real feel for their customers and what their customers want.
“Those members should be trained in ethics, should understand the pitfalls of artificial intelligence and should make decisions around which AI applications are exposed to customers.
“Importantly, those ethics boards shouldn’t rely just on IT systems to answer ethical questions. Ethics boils down to a discussion between different stakeholders. An ethics board is there to debate and to discuss edge cases — for example, the launch of an application where there may be disagreement over whether it could cause harm or whether it could be a surprise to customers.
“I also believe a chief responsible AI officer should be appointed to the board of every bank — and arguably every large organization — to oversee the end-to-end risk management of applications both during build and post-deployment. Ethics has to be considered at every stage of development and launch.
“Risk management practices and the audit function should all be folded into the remit of a responsible AI officer to ensure strong oversight.”
Q: Are regulators and governments moving fast enough to keep AI risks under control?
Paul Dongha: “I believe our governments and democratically elected institutions, as well as sectoral regulators, have a huge role to play in this.
“We as a society elect our governments to look after us. We have a legislative process — even with something as simple as driving, we have rules to ensure that vehicles are maneuvered correctly. Without those rules, driving would be very dangerous. AI is no different: Legislation and rules around how AI is used and deployed are incredibly important.
“Corporations are accountable to shareholders, so the bottom line is always going to be very important to them. That means it would be unwise to let corporations themselves implement the guardrails around AI. Governments have to be involved in setting what is and isn’t reasonable, what is too high a risk and what is in the public interest.
“Technology companies need to be part of that conversation, but they should not be leading it. Those conversations must be led by the institutions we elect to look after society.”
Q: How real is the threat of artificial general intelligence — and what risks demand our attention today?
Paul Dongha: “Artificial general intelligence, which is about AI approaching human-level intelligence, has been the holy grail of AI research for decades. We’re not there yet. Many aspects of human intelligence — social interactions, emotional intelligence, even elements of computer vision — are things the current generation of AI is simply incapable of.
“The recent transformer-based technologies look extremely sophisticated, but when you open the hood and examine how they operate, they do not work in the way humans think or behave. I don’t believe we’re anywhere near achieving AGI and in fact the current approaches are unlikely to get us there.
“So my message is that there’s no need to be worried about any imminent superintelligence or Terminator situation. But we do need to be aware that, in the future, it’s possible. That means we have to guard against it.
“In the meantime, there are real and pressing risks with today’s generation of AI: weaponization, disinformation and the ability for nefarious states to use generative AI to influence electorates. Even without AGI, current systems have great power — and in the wrong hands, that power can cause serious harm to society.”
This article is published as part of the Foundry Expert Contributor Network.