Claudio Novelli
Postdoctoral Research Fellow, Department of Legal Studies, University of Bologna, and International Fellow, Digital Ethics Center, Yale University
Observing the negotiations on the AI Act proposal was like witnessing the construction of a bridge across a vast chasm of uncertainty and potential pitfalls. Compromise was key to reaching an agreement, yet some decisions were tougher than others. Moving away from self-regulation for AI models and opting for direct oversight was a necessary step. Banning most biometric systems was a prudent move. Law enforcement can still use them in specific situations, such as preventing terrorism or sexual exploitation, but only under judicial review and independent scrutiny.
This is just the beginning, though. Many details still need to be worked out, such as the AI Office's role and how it will coordinate with national authorities. Figuring out how deployers will assess the impact of their AI systems on fundamental rights is another crucial step. The next 24 months will be critical. They will determine whether the EU has truly achieved its goal of leading the way in regulating AI, setting a global benchmark for responsible and innovative AI governance.