Crucial to strike right balance between ethics & regulation to build trust in AI tech

With the European Commission preparing its first set of rules to govern AI and aiming to boost private and public investment in the technology to €20 billion per year, what are the most important ethical questions to address in AI legislation? And how to get the balance right?
FIPRA International in partnership with the Technical University of Munich (TUM) Institute for Ethics in Artificial Intelligence (IEAI) brought together a panel of experts — moderated by Jacki Davis — to explore the issue from multiple angles.
Demystifying AI tech
Dr Christoph Lütge, Director of the TUM Institute for Ethics in Artificial Intelligence (IEAI), underlined the importance of being able to explain AI. “People tend to fear the ‘black box nature’ of AI. They do not trust what they don’t know. The relation between ethics and regulation is therefore crucial to finding the right balance,” he said.
In addition to explainability, transparency is important, pointed out Kilian Gross, Head of Unit, Technologies and Systems for Digitising Industry, DG Connect. “Regulation to ensure trust in AI would be useful. However, it must be future-proof and risk-based, looking at where it is important to intervene without preventing innovation or creating excessive burdens.
“AI has certain features which may cause specific risks, justifying a horizontal regulatory framework. A clear understanding of where concrete risks exist is needed to arrive at the balanced framework we need,” he added.
Elizabeth Crossick, Head of EU Government Affairs, RELX Group, highlighted that AI is, after all, a set of technologies that can be used to solve problems. The more we talk about AI applications, the more people start worrying about risks. “It is important to assess how to mitigate and mediate those risks, but it is also important to talk about the benefits of AI to bring people alongside. Context is really important,” she stated.
Addressing ethics & regulation
AI needs to be trustworthy in order to be used, said Dr Lütge, pointing to the level of detail at which the TUM Institute for Ethics in Artificial Intelligence (IEAI) now operates.
“We started with a general approach of wanting to regulate AI and defining ethical principles to guide it. Now we are down to sectors, and we need to be even more specific. We can have as many principles as we want, but we need to find ways of implementing them with developers and companies,” he said.
How different are the ethical challenges posed in this area? Robert Madelin, Chairman, FIPRA International, said AI is not unique in the amount of risk it carries, as there are always unintended consequences to new technology.
“The tragedy of ‘innovative regulation’ in Europe is that the Commission does not have enough power to take swift tertiary action, that is, subordinate legislation, where a new problem arises. Thus, a sectoral approach and an innovative approach may help maximize the success factors for AI,” said Madelin.
In other words, one must regulate where problems are identified, and regulation should be proportionate to the level of risk.
Madelin said there are overlaps between the Green Recovery and the digital agenda, but also risks that could hinder innovation.
“The more detail Europe can get on a possible sectoral regulatory approach, the more likely Europe is to be a thought-leader for the world as the safest, most ethical, yet pro-innovation regulatory jurisdiction,” he stated.
“The opportunity should be to connect the dots horizontally: the legal system should breathe and allow for flexibility,” he added, saying this Commission had the opportunity to get ahead on quantum AI.
Liability & learning
Crossick raised the important question of how to design AI systems that allow for tweaks and training to spot biases. “Part of this is also incumbent upon the ecosystem: how do we educate people in an organization and explain to strategic teams the importance of ethics? In the same way doctors are trained on ethics, perhaps data scientists should have similar training?”
She also spoke to the point of civil liability, saying it went hand in hand with insurance. To this, Gross interjected, saying there was indeed a need for a liability regime. “We have preventive instruments and liability to rectify potential harms. It is more difficult to identify rules for assessing who is responsible for damage. This is as important for users as it is for insurers,” he said.
Ultimately, public perception matters. But are the right mechanisms in place so that people feel they can challenge a decision if something goes wrong?
Crossick agreed on the need for a right to know when a decision has been taken by an automated system. “One would have the right to challenge such a decision, and that comes from the GDPR,” she said.
German Presidency influence
Dr Lütge said one specific area where regulation is urgently needed is autonomous driving. “We came up with ethical guidelines for the sector in 2017, but we have not yet managed to achieve a kind of meaningful regulation which will allow us to deploy these cars on the street, and I think this is what we urgently need. I hope the German presidency will keep working on that,” he said.
A particular challenge for the German EU presidency, Dr Lütge added, would be to keep younger generations on board when it comes to AI. “We need to find ways of building trust and shifting policies,” he said.
How else could the German presidency address the status quo? “I would love to see Angela Merkel bring to the EU level the consistent, sustained conversation she offers. More high-level people need to address this,” Madelin said, adding that further discussions on moving the dial on AI would take place at the AI4People Summit 2020.

