The rules aim to promote the uptake of human-centric and trustworthy AI and to protect health, safety, fundamental rights and democracy from its harmful effects.
On Wednesday, the European Parliament adopted its negotiating position on the Artificial Intelligence (AI) Act with 499 votes in favour, 28 against and 93 abstentions, ahead of talks with EU member states on the final shape of the law.
The rules would ensure that AI developed and used in Europe is fully in line with EU rights and values including human oversight, safety, privacy, transparency, non-discrimination and social and environmental wellbeing.
Prohibited AI practices
The rules follow a risk-based approach and establish obligations for providers and those deploying AI systems depending on the level of risk the AI can generate.
AI systems with an unacceptable level of risk to people’s safety would therefore be prohibited, such as those used for social scoring (classifying people based on their social behaviour or personal characteristics). MEPs expanded the list to include bans on intrusive and discriminatory uses of AI, such as:
- “Real-time” remote biometric identification systems in publicly accessible spaces;
- “Post” remote biometric identification systems, with the only exception of law enforcement for the prosecution of serious crimes and only after judicial authorization;
- biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
- predictive policing systems (based on profiling, location or past criminal behaviour);
- emotion recognition systems in law enforcement, border management, the workplace, and educational institutions; and
- untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases (violating human rights and right to privacy).
High-risk AI
MEPs ensured that the classification of high-risk applications will now include AI systems that pose a significant risk of harm to people’s health, safety, fundamental rights or the environment. AI systems used to influence voters and the outcome of elections, as well as recommender systems used by social media platforms with more than 45 million users, were added to the high-risk list.
Obligations for General Purpose AI
Providers of foundation models, a new and fast-evolving development in the field of AI, would have to assess and mitigate possible risks (to health, safety, fundamental rights, the environment, democracy and the rule of law) and register their models in the EU database before their release on the EU market.
Generative AI systems based on such models, like ChatGPT, would have to comply with transparency requirements (disclosing that content was AI-generated, which also helps distinguish so-called deep-fake images from real ones) and ensure safeguards against generating illegal content.
Detailed summaries of the copyrighted data used for their training would also have to be made publicly available.
Supporting Innovation and Protecting Citizens' Rights
To boost AI innovation and support SMEs, MEPs added exemptions for research activities and AI components provided under open-source licenses. The new law promotes so-called regulatory sandboxes, or real-life environments, established by public authorities to test AI before it is deployed.
Finally, MEPs want to boost citizens’ right to file complaints about AI systems and receive explanations of decisions based on high-risk AI systems that significantly impact their fundamental rights. MEPs also reformed the role of the EU AI Office, which would be tasked with monitoring how the AI rulebook is implemented.
Quotes
After the vote, co-rapporteur Brando Benifei (S&D, Italy) said: “All eyes are on us today. While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose. We want AI’s positive potential for creativity and productivity to be harnessed but we will also fight to protect our position and counter dangers to our democracies and freedoms during the negotiations with Council”.
Co-rapporteur Dragos Tudorache (Renew, Romania) said: “The AI Act will set the tone worldwide in the development and governance of artificial intelligence, ensuring that this technology, set to radically transform our societies through the massive benefits it can offer, evolves and is used in accordance with the European values of democracy, fundamental rights, and the rule of law”.
Next Steps
Negotiations with the Council on the final form of the law will begin later today.
Press Conference
Co-rapporteurs Brando Benifei (S&D, Italy) and Dragos Tudorache (Renew, Romania) will hold a press conference together with EP President Roberta Metsola today, 14 June at 13.30 to explain the outcomes of today’s vote and the next steps.
By advancing this legislation, MEPs respond to citizens' proposals from the Conference on the Future of Europe on ensuring human oversight of AI-related processes (proposal 35(3)); on making full use of the potential of trustworthy AI (35(8)); and on using AI and translation technologies to overcome language barriers (37(3)).
Source: European Parliament press release
What Next for the AI Industry if the European Parliament Regulates It?
If MEPs (Members of the European Parliament) decide to regulate the AI industry, the decision could have significant implications for the future of this rapidly advancing field. Introducing regulation would be a pivotal moment, shaping how AI technologies are developed, deployed, and used across sectors.
Several MEPs have shared their views on the potential outcomes. Brando Benifei of Italy said: "All eyes are on us today. While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose. We want AI's positive potential for creativity and productivity to be harnessed, but we will also fight to protect our position and counter dangers to our democracies and freedoms during the negotiations with Council".
On the other hand, Dragos Tudorache, an MEP from Romania, acknowledges AI's transformative potential but stresses that its development must remain within a lawful framework: "The AI Act will set the tone worldwide in the development and governance of artificial intelligence, ensuring that this technology, set to radically transform our societies through the massive benefits it can offer, evolves and is used in accordance with the European values of democracy, fundamental rights, and the rule of law."
If MEPs regulate the AI industry, we can expect a framework that promotes safety, transparency, and fairness. AI systems may be required to undergo rigorous testing and certification to ensure their reliability and minimize risks.
Additionally, guidelines may be established to govern the collection and use of data, protecting individuals' privacy rights. The regulations might also stipulate that AI algorithms should be explainable, enabling users to understand how decisions are made and avoid potential biases.
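To make the explainability requirement a little more concrete, here is a minimal sketch in Python of one way a provider might surface per-feature contributions behind an automated decision. It assumes scikit-learn is available; the loan-approval framing, the feature names, and the `explain_decision` helper are purely hypothetical illustrations, not anything prescribed by the AI Act.

```python
# Hypothetical sketch: surfacing a simple per-feature explanation for one decision.
# The loan-approval scenario and toy data are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "years_employed"]

# Toy training data (illustrative only): 1 = approved, 0 = declined.
X = np.array([[50, 0.4, 2], [80, 0.2, 10], [30, 0.7, 1],
              [90, 0.1, 15], [40, 0.5, 3], [70, 0.3, 8]])
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression(max_iter=1000).fit(X, y)

def explain_decision(x):
    """Print the decision for one applicant and each feature's linear contribution."""
    contributions = model.coef_[0] * x  # contribution of each feature to the score
    decision = "approved" if model.predict([x])[0] == 1 else "declined"
    print(f"Decision: {decision}")
    for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
        print(f"  {name}: {c:+.2f}")

explain_decision(np.array([60, 0.35, 5]))
```

In practice, providers of more complex models would likely rely on model-agnostic explanation techniques rather than raw coefficients, but the goal the regulation points to is the same: giving the affected person a readable account of which factors drove the decision.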
Furthermore, MEPs could encourage increased investment in AI research and development, fostering innovation within a regulated environment. Start-ups and established companies may be incentivized to align their AI practices with the regulations, enhancing consumer trust and attracting investments.
Overall, if MEPs choose to regulate the AI industry, the result is likely to be a more responsible, accountable, and inclusive AI ecosystem. By balancing innovation and societal well-being, these regulations would ensure that AI technologies contribute positively to our lives while minimizing potential risks and addressing ethical concerns.
What Challenges Can MEPs Expect to Face in Regulating AI?
Regulating artificial intelligence (AI) presents many challenges for Members of the European Parliament (MEPs). As they embark on this critical task, they can anticipate several hurdles that may impede their efforts to establish effective rules.
One significant challenge revolves around the rapid pace of technological advancement. AI technologies continue to evolve at a staggering rate, often faster than regulatory frameworks can keep up.
MEPs will need to find a delicate balance between allowing room for innovation and ensuring that AI systems are accountable, transparent, and safe. Striking this balance is essential to avoid stifling innovation while safeguarding individuals and society as a whole.
Another hurdle that MEPs may encounter is the complexity of AI systems. AI encompasses many technologies, ranging from machine learning algorithms to autonomous vehicles and facial recognition systems.
Understanding the nuances and intricacies of these technologies can be daunting, especially for policymakers who may not have a technical background. It is crucial for MEPs to engage in extensive consultations with experts, stakeholders, and industry representatives to grasp the implications of AI and make informed decisions.
Additionally, ensuring compliance with AI regulations poses a significant challenge. With the proliferation of AI systems across borders and sectors, enforcing rules becomes inherently complex.
MEPs will need to coordinate efforts at national, regional, and international levels to establish harmonized standards. Cooperation and information sharing between countries and organizations will be vital in addressing jurisdictional issues and preventing regulatory loopholes that could be exploited.
Ethical considerations also loom large in the regulation of AI. The potential impact of AI on privacy, bias, and fundamental rights requires careful attention.
MEPs must grapple with questions surrounding the ethical use of AI, such as preventing discriminatory practices, protecting personal data, and addressing the social implications of automation. Balancing the benefits of AI with ethical concerns is a delicate task that will require comprehensive legislation and robust oversight mechanisms.
Moreover, fostering international cooperation presents a challenge. AI is a global phenomenon, and its regulation should ideally transcend national boundaries. MEPs will need to collaborate with international counterparts to establish common frameworks and standards. This collaboration should encompass developed nations and emerging economies to ensure a cohesive global approach to AI regulation.
To sum up, MEPs face several challenges as they undertake the crucial task of regulating AI. These include keeping pace with technological advancements, understanding the complexities of AI systems, ensuring effective compliance and enforcement, addressing ethical concerns, and fostering international cooperation.
Overcoming these obstacles will require a thoughtful and inclusive approach, engaging various stakeholders, and leveraging expertise to strike the right balance between innovation and accountability in the rapidly evolving AI landscape.