STRASBOURG: European Parliament lawmakers will vote Wednesday to kickstart talks to approve the world’s first sweeping rules on artificial intelligence systems like ChatGPT, aiming to curb potential harms while nurturing innovation.
Although the EU’s plans date back to 2021, the draft rules took on greater urgency when ChatGPT exploded onto the scene last year, showing off AI’s dizzying development and the possible risks.
There is also growing clamour to regulate AI across the Atlantic, with pressure mounting on Western governments to act fast in what some describe as a battle to protect humanity.
While AI proponents hail the technology for how it will transform society, including work, healthcare and creative pursuits, others are terrified by its potential to undermine democracy.
Officials say that once the text is adopted by the EU parliament, negotiations on a final law with the bloc’s 27 member states will begin almost immediately, starting later Wednesday.
The race is on to strike an agreement on final legislation by the end of the year.
Even if that ambitious target is achieved, the law would not come into force until 2026 at the earliest, forcing the EU to push for a voluntary interim pact with tech companies.
Brussels and the United States agreed last month to release a common code of conduct on AI to develop standards among democracies.
Lawmakers have hailed the draft law as “historic” and pushed back against critics who say the EU’s plans could harm rather than encourage innovation.
“Is this the right time for Europe to regulate AI? My answer is resolutely yes – it is the right time because of the profound impact AI has,” MEP Dragos Tudorache said during Tuesday’s parliamentary debate in Strasbourg.
“What we can do here is to create trust, legal certainty, to enable AI to develop in a positive manner,” European Commission Vice President Margrethe Vestager said.
‘Common’ approach
The law will regulate AI according to the level of risk: the higher the risk to individuals’ rights or health, for example, the greater the systems’ obligations.
The EU’s proposed high-risk list includes AI in critical infrastructure, education, human resources, public order and migration management.
The parliament has added extra conditions before the high-risk classification would be met, including the potential to harm people’s health, safety, rights or the environment.
There are also special requirements for generative AI systems – those such as ChatGPT and DALL-E capable of producing text, images, code, audio and other media – that include informing users that a machine, not a human, produced the content.
Another MEP spearheading the law in parliament, Brando Benifei, called for a “common approach” to tackle AI risks.
“We need to compare notes with lawmakers all around the world,” he said.
Tudorache added that the law was needed “because hoping that companies will self-regulate is not enough to safeguard our citizens”.
Risks versus rights
Throughout the parliament’s scramble to reach an agreement, which began last year, rights defenders have urged the EU to safeguard fundamental rights.
Under the parliamentary committee text approved last month, lawmakers propose bans on AI systems used for biometric surveillance, emotion recognition and so-called predictive policing.
But Mher Hakobyan of Amnesty International warned those bans are not yet secure, cautioning that “parliament may upend considerable human rights protections” agreed by its committees last month.
There are still fears that, even if lawmakers agree on those bans, they may not make it into the final law after negotiations with EU member states.
“There’s a real risk that when the state representatives get involved, a lot of these protections could be removed or significantly watered down,” Griff Ferris, senior legal and policy officer at the non-governmental group Fair Trials, told AFP.