The European Parliament on Wednesday voted in favor of regulating the use of artificial intelligence (AI) systems across the bloc.
The vote kick-starts the process toward passing the EU's Artificial Intelligence Act, the first of its kind globally.
The proposed law is aimed at protecting fundamental civil rights and safeguarding against AI threats to health and safety, while simultaneously nurturing innovation in the technology.
How exactly will the new EU measures work?
The measures, in discussion since 2021 but accelerated by the rapid recent emergence of popular AI chatbots such as ChatGPT, will classify AI systems according to four levels of risk, ranging from minimal to unacceptable.
Judged to pose an unacceptable level of risk are so-called "social scoring" systems that judge people based on their behavior or appearance, and applications that engage in subliminal manipulation of vulnerable people, including children.
Predictive policing tools, which crunch data to forecast potential future perpetrators of crimes, are also considered a no-go, and lawmakers have widened the ban on remote facial recognition and biometric identification in public.
What about ChatGPT?
The majority of AI systems that have already entered people's everyday lives, including video games, spam filters or text generators such as ChatGPT, fall into the low- or no-risk category.
"We don't want mass surveillance, we don't want social scoring, we don't want predictive policing in the European Union, full stop. That's what China does, not us," Dragos Tudorache, a Romanian member of the European Parliament who is co-leading its work on the AI Act, said Tuesday.
How will the EU enforce measures?
Violations will draw fines of up to 30 million euros ($33 million) or 6% of a company's annual global revenue, which in the case of tech companies like Google and Microsoft could amount to billions.
It will be up to the EU's 27 member states to enforce the rules. Lawmakers hope powerful AI developers will see them not as an arbitrary restriction but as a constructive framework for developing the technology responsibly within the EU single market.
"We as a union are doing something that I think is truly historic, which is to bring about rules and guardrails for the evolution of a technology that has a lot of potential for good in our economies," lawmaker Tudorache told DW.
"At the same time, we see more and more risks. What we're doing with this legislation is to try and curtail these risks, mitigate them and put very clear rules in place that will ensure transparency and trust."
What are the next steps?
The bloc's new AI laws, which still need final approval from member states and parliamentarians, will likely not kick in until 2025 – after next year's key European Parliament elections.
Until then, European parliamentarians are set to work with US counterparts to draw up a voluntary code of conduct that officials promised at the end of May would be drafted within weeks and could be expanded to other "like-minded countries."
"It's true that these laws will not come into effect by the elections next year, but we have other rules that are already in place," said Tudorache.
"But if we craft this legislation right, if we articulate the obligations and the rights in these regulations the right way, they will stand the test of time.
"We are not regulating the technology itself, we are regulating the use of technology – and that transparency is the same and it's going to be the same now or in five years' time."