AI policy in Europe: from principles to practices
Sep 09, 2021

IBM’s position on the EU Artificial Intelligence Act

 

The EU AI Act

 

As the business and societal benefits of Artificial Intelligence accelerate, it is up to those of us in industry, along with policymakers around the world, to ensure AI is used responsibly and in ways that put people and their interests first.

The European Commission recently published its draft Artificial Intelligence Act to establish clear rules to promote such trustworthy behavior. And much like the General Data Protection Regulation, the AI Act could be a global game changer.

 

The AI Act is built upon a set of well-established principles, and IBM welcomes the Commission’s risk-based approach, regulating specific uses of AI systems and not the AI technology itself.

 

This aligns with IBM’s previous calls for “Precision Regulation,” which takes a pragmatic approach to AI policy that emphasizes accountability, transparency, fairness and security. These elements are paramount to strengthening trust while promoting innovation and advancing AI’s potential to help us make the world smarter, healthier, and more sustainable.

 

While the AI Act is built upon the appropriate principles, we believe the text could be further clarified to better lay out how the rules would work in practice in a few areas. For example, the AI ecosystem involves many players, and clarity regarding the allocation of responsibility throughout the ecosystem and AI lifecycle would allow the various actors involved in supplying, training, deploying, and using AI systems to comply more effectively with the rules.

 

IBM’s detailed views on the draft Regulation are available here.

 

From Principles to Practices

 

While this Regulation is a positive step, no company or organization should idly wait for a new law to take effect before doing the hard work required to foster societal trust in AI technology. The stakes are just too high. We believe organizations have a fundamental responsibility to act now and to work with governments to ensure new technologies like AI are transparent, explainable, and employed in concert with processes and tools to rapidly identify and root out harmful and inappropriate bias.

 

Principles, though, are useless without practices to accompany them, which is why at IBM we have made trust the cornerstone of our leadership in AI innovation. IBM is ready and uniquely qualified to help companies develop trustworthy AI today, so they are better prepared when the EU AI Act takes effect.

 

IBM’s human-centered approach to trustworthy AI puts ethical principles at the core of our governed data and AI technology. With multiple open source IBM toolkits like AI Fairness 360, AI Explainability 360, and the Adversarial Robustness Toolbox, we have translated principles into practices and can help others build trustworthy and ethical AI implementations.
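To give a concrete sense of what "principles into practices" can look like, here is a minimal sketch using the open-source AI Fairness 360 (aif360) Python toolkit mentioned above: it measures a group-fairness metric on a small dataset and mitigates the gap with reweighing. The toy data, column names ("sex", "income"), and group definitions are invented for this illustration and are not drawn from any IBM guidance.

# Illustrative sketch: measuring and mitigating dataset bias with AI Fairness 360.
# The data and column names below are hypothetical, for demonstration only.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# A toy tabular dataset with a binary label and a binary protected attribute.
df = pd.DataFrame({
    "sex":    [0, 0, 1, 1, 0, 1, 1, 0],
    "age":    [25, 38, 41, 29, 52, 33, 47, 36],
    "income": [0, 0, 1, 1, 0, 1, 1, 1],   # 1 = favorable outcome
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["income"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Disparate impact compares favorable-outcome rates across groups (1.0 = parity).
metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print("Disparate impact before:", metric.disparate_impact())

# Reweighing adjusts instance weights so outcomes are balanced across groups.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
transformed = rw.fit_transform(dataset)

metric_after = BinaryLabelDatasetMetric(transformed,
                                        unprivileged_groups=unprivileged,
                                        privileged_groups=privileged)
print("Disparate impact after: ", metric_after.disparate_impact())

Comparable checks for explainability and robustness can be run with AI Explainability 360 and the Adversarial Robustness Toolbox, respectively, as part of a broader governance workflow.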

 

Advancing Trustworthy AI

 

IBM is a trusted partner, helping businesses use AI in ways that strengthen confidence in the technology as a force for positive change. Our Trust & Transparency Principles, together with our solutions for trustworthy AI, inform the ways we help clients: from auditing and mitigating risk and implementing governance frameworks, to operationalizing AI, providing education and guidance, and supporting organizational change.

 

New AI regulation, such as the EU AI Act, can help businesses, organizations, and individuals alike achieve their goals in an efficient, cost-effective, and responsible way.

 

But we’re not waiting. We’re taking steps now to make sure this powerful technology is used responsibly, and that its benefits are felt broadly across society.

 

Authored by:

Christina Montgomery, Vice President and Chief Privacy Officer, IBM

 

Francesca Rossi, IBM Fellow and AI Ethics Global Leader

 

 

Media contact:

michael.cloots@be.ibm.com

 
