
Does the EU legislation create ethical AI?

A wave of technology regulation is sweeping the world. Legislators and regulators in the European Union, the US, India, and elsewhere have been working on new laws and regulations that will soon shape how organisations – and through them, all of us as citizens and consumers – can use artificial intelligence.

Like artificial intelligence itself, these regulations mostly work under the radar – but they will soon affect almost every aspect of our lives. Whether you know it or not, artificial intelligence is everywhere: it can help you discover new music, park your car, or play against you in a video game.

A recent survey of senior executives at more than 1,000 large organisations shows that the business community is already in favour of ethical artificial intelligence: 86% of respondents said their companies were taking active steps to use artificial intelligence responsibly.

This shows that companies are intrinsically motivated to behave responsibly. The challenge many executives report is the lack of clear standards – which leaves many of them no choice but to create their own.

Good regulation establishes common standards and reinforces transparency requirements, so customers can make up their own minds and reward companies for using responsible artificial intelligence. This will help to build public trust and show that responsible artificial intelligence is in line with our fundamental rights and values.

The EU’s anticipated regulation, which may become the de facto global standard, appears to be surprisingly smart. It only prohibits certain uses of artificial intelligence, not the technology itself. Think facial recognition, which can be used for mass surveillance – or to unlock your phone.

It is a thoughtful approach that activists have criticised for not being rigorous enough, but it has the benefit of leaving the door open for further research and new beneficial uses.

The legislation is also unusually tech-savvy. It requires artificial intelligence systems to be trained on high-quality data sets, to be transparent and subject to human oversight, and to be robust and accurate. These requirements will have to be defined more clearly – but it’s good to see regulators who know the ingredients for good artificial intelligence.

On the whole, however, the EU regulation could benefit from a more balanced view of artificial intelligence’s potential. The proposed regulation singles out high-risk artificial intelligence systems that are likely to cause physical or psychological harm through the use of subliminal techniques or by exploiting the vulnerabilities of a specific group of persons due to their age or physical or mental disability.

It also prohibits the use of artificial intelligence for social scoring by public authorities.

Every powerful technology has the potential for abuse, so we understand why the EU wants to implement democratic safeguards. But the language of the regulation risks amplifying the concerns citizens already have about the technology. Let’s remember that most artificial intelligence use cases are entirely innocuous – just ask Siri or Alexa, or look at your Discover Weekly playlist on Spotify. Regulators should strive for a balanced tone that encourages citizens to remain vigilant yet leaves room for all the positive impacts that artificial intelligence can deliver.

At heart, artificial intelligence is a computer-based method for reducing waste: for cutting back unnecessary time, effort, materials and energy. BCG studies show that applying artificial intelligence to corporate sustainability could reduce global emissions by 10%.

So artificial intelligence can do a lot of good. Lawmakers should focus on the relatively few applications that are associated with risk – and give the others space to develop.

EU regulators would likely argue that this is exactly why they differentiate between unacceptable, high-risk, and moderate- or low-risk use cases. But the regulation encourages voluntary compliance even for those in the low-risk category.

This puts companies in a tough spot: they will either have to take on great cost and complexity, or answer to customers who will ask them why they have not. Regulators should instead focus on a middle ground for the moderate- and low-risk categories that encourages transparency and accountability without the full set of onerous requirements.


Otherwise, the EU artificial intelligence regulation could end up stifling innovation, especially for small and medium-sized enterprises, while large platforms, with their armies of lawyers and lobbyists, emerge unscathed.

Adding to that fear is the vague language of the draft, which includes a wobbly definition of artificial intelligence itself. This imprecision is likely to lead to constant updating – and creates loopholes for those wanting to exploit the law.

This lack of precision could lead businesses into a no man’s land of legal uncertainty where they could face fines of up to 6% of their global turnover if they do not use complete data sets. But who will tell them what a complete data set is? Nobody. And that is just one of many unclear standards.

We would much prefer it if regulators worked in phases and required extensive transparency first – and from there, we could work out clear standards together.

We believe that in the end, companies will do a better job of proving that artificial intelligence can be used responsibly than legislation ever could. The new laws will set legal requirements – but clearing those will not be enough to gain society’s approval.

If you want the social license to operate artificial intelligence at scale, you will have to gain people’s trust. We advise businesses to take proactive steps towards using responsible artificial intelligence and be open and transparent about the steps they are taking. The best companies are already moving in that direction – and they will be greatly rewarded for doing so.


Key takeaways

  • The regulation prohibits the use of artificial intelligence for social scoring by public authorities.
  • The vague language of the draft, which includes a wobbly definition of artificial intelligence itself, adds to fears that innovation will be stifled.
  • This imprecision is likely to lead to constant updating and to create loopholes for those wanting to exploit the law.
  • Companies will do a better job of proving that AI can be used responsibly than legislation ever could.
  • The new laws will set legal requirements, but clearing those will not be enough to gain society’s approval.
  • If you want the social license to operate artificial intelligence at scale, you will have to gain people’s trust.
  • We advise businesses to take proactive steps towards using responsible AI and to be open and transparent about the steps they are taking.


Sylvain Duranton, Managing Director and Senior Partner at BCG, Global Head of BCG GAMMA.

Steven Mills, Managing Director and Partner at BCG, Chief AI Ethics Officer of BCG GAMMA.