The Tricky Task of Legislating AI

The European Union is grappling with the fast-paced evolution of artificial intelligence (AI) technologies such as OpenAI's ChatGPT, which makes drafting and finalizing comprehensive AI laws challenging. Although the draft rules were proposed two years ago, EU lawmakers remain deadlocked over several aspects of the AI Act.

Delayed Deliberations: The EU's AI Legislative Labyrinth

Hopes of reaching a consensus on the 108-page bill were dashed when a five-hour meeting in February failed to produce a resolution. While the industry anticipates an agreement by the end of the year, there are concerns that the complexity of the issues and the lack of progress could push the legislation into next year. To complicate matters further, upcoming European elections could usher in MEPs with entirely different priorities.

Brisk Developments: Outpacing the Lawmakers

The swift advancement of AI has outpaced regulatory efforts, as lawmakers struggle to work through more than 3,000 proposed amendments on topics ranging from the creation of a new AI office to the scope of the Act's rules. As Daniel Leufer, a senior policy analyst at rights group Access Now, puts it, "It's a fast-moving target, but there are measures that remain relevant despite the speed of development: transparency, quality control, and measures to assert their fundamental rights."

Balancing Act: Protecting Citizens and Encouraging Innovation

Legislators face the difficult task of striking a balance between nurturing innovation and safeguarding citizens' fundamental rights. This has led to AI tools being classified according to their perceived risk level: minimal, limited, high, and unacceptable. While high-risk tools won't be banned, companies will need to maintain high levels of transparency in their operations. However, these debates have left little room for addressing rapidly expanding generative AI technologies like ChatGPT and Stable Diffusion, which have captivated users and sparked controversy worldwide.

Big Tech, Big Problems: Regulation and the Competitive Landscape

The EU's discussions have raised concerns among companies, from startups to Big Tech, about how regulations might impact their businesses and whether they would be at a competitive disadvantage against rivals from other continents. Big Tech companies have lobbied extensively to keep their innovations outside the high-risk classification, which would entail more compliance, costs, and accountability. A recent survey by industry body appliedAI revealed that 51% of respondents anticipate a slowdown in AI development activities due to the AI Act.

Taming the AI Beast: Introducing General Purpose AI Systems

To tackle versatile tools like ChatGPT, lawmakers introduced the "General Purpose AI Systems" (GPAIS) category, describing tools that can be adapted for various functions. However, it remains unclear whether all GPAIS will be considered high-risk. Tech companies have pushed back against these moves, arguing that their internal guidelines are sufficient to ensure safe deployment of the technology. Some even suggest the Act should include an opt-in clause, allowing firms to decide for themselves whether the regulations apply.

The Double-Edged Sword: Balancing Risk and Innovation

Regulating multi-purpose AI systems is undoubtedly complex. Alexandra Belias, head of international public policy at Google-owned AI firm DeepMind, emphasized that creating a governance framework for GPAIS should be an inclusive process, involving all affected communities and civil society. She added, "The question here is: how do we make sure the risk-management framework we create today will still be adequate tomorrow?"

Daniel Ek, CEO of audio streaming platform Spotify, which recently launched its own "AI DJ" capable of curating personalized playlists, described AI technology as a "double-edged sword." He acknowledged the need to consider various factors, stating, "Our team is working very actively with regulators, trying to make sure that this technology benefits as many as possible and is as safe as possible."

Adapting to the Future: Regular Reviews and Updates

MEPs plan to subject the AI Act to regular reviews, allowing for updates as new issues with AI emerge. However, with the European elections approaching in 2024, there is pressure to deliver a substantial solution the first time around. Daniel Leufer warned, "Discussions must not be rushed, and compromises must not be made just so the file can be closed before the end of the year. People's rights are at stake."

Final Thoughts

The rapidly evolving landscape of AI technologies is proving to be a significant challenge for EU lawmakers. As they strive to protect citizens' rights and encourage innovation, they must find a way to keep up with the pace of change and address the concerns of various stakeholders. With the EU's AI legislation still in the works, the outcome remains uncertain, but its potential impact on the future of AI should not be underestimated.

Read More About the EU's AI Act

The AI Act: Europe’s Game-Changer for Artificial Intelligence
Explore the groundbreaking EU AI Act, reshaping Europe’s AI landscape with risk classification, regulatory oversight, and innovation balance.