IBM donates Trusted AI toolkits to the Linux Foundation AI


For over a century, IBM has created technologies that profoundly changed how people work and live: the personal computer, the ATM, magnetic tape, the Fortran programming language, the floppy disk, the scanning tunneling microscope, the relational database, and most recently, quantum computing, to name a few. With trust as one of our core principles, we’ve spent that century creating products our clients can trust and depend on, guiding their responsible adoption and use, and respecting the needs and values of all the users and communities we serve.

Our current work in artificial intelligence (AI) is bringing a transformation of similar scale to the world today. We infuse these guiding principles of trust and transparency into all of our work in AI. Our responsibility is not only to make the technical breakthroughs required to make AI trustworthy and ethical, but also to ensure that these trusted algorithms work as intended in real-world AI deployments.

Our theoretical work in fair machine learning led to one of the earliest and most cited bias mitigation algorithms. We applied advanced machine learning methods for mitigating unwanted discrimination against protected groups and individuals. The integration of our bias detection and mitigation algorithms into Watson OpenScale enabled IBM to be the first vendor to address AI bias in a real product. As a culmination of this work, we created AI Fairness 360, a comprehensive open source toolkit for handling bias in machine learning algorithms. Additionally, we have two more open source Trusted AI toolkits: Adversarial Robustness 360, a toolbox and library for defending AI from adversarial attacks, and AI Explainability 360, a toolbox that explains decisions of AI systems.

On June 18, 2020, the Technical Advisory Committee of the Linux Foundation AI Foundation (LFAI) voted to host and incubate these Trusted AI projects in LFAI. We are now going through the onboarding process to formalize the charter, governance, and IT logistics of these projects and to move them under the foundation. Stay tuned for an LFAI blog post announcing the details.

Donating these projects to LFAI will further the mission of creating responsible AI-powered technologies and will enable the larger community to come forward and co-create these tools under the governance of the Linux Foundation.

Trusted AI 360 Toolkits

  • The AI Fairness 360 (AIF360) Toolkit is an open source toolkit that can help detect and mitigate unwanted bias in machine learning models and datasets. It provides approximately 70 metrics to test for bias and 11 algorithms to mitigate bias in datasets and models. The AI Fairness 360 interactive experience provides a gentle introduction to the concepts and capabilities. Recently, AIF360 also announced compatibility with scikit-learn and an interface for R users. (A minimal usage sketch appears after this list.)

    [Image: overview of AIF360]

  • The Adversarial Robustness 360 (ART) Toolbox is a Python library for machine learning security. ART provides tools that enable developers and researchers to evaluate, defend, certify, and verify machine learning models and applications against the adversarial threats of evasion, poisoning, extraction, and inference. ART supports all popular machine learning frameworks (TensorFlow, Keras, PyTorch, MXNet, scikit-learn, XGBoost, LightGBM, CatBoost, GPy, etc.), all data types (images, tables, audio, video, etc.), and machine learning tasks (classification, object detection, generation, certification, etc.). (See the evasion-attack sketch after this list.)

    [Image: the Adversarial Robustness Toolbox]

    Last year, DARPA awarded IBM Research scientists a grant to advance research in adversarial AI.

  • The AI Explainability 360 (AIX360) Toolkit is a comprehensive open source toolkit of diverse algorithms, code, guides, tutorials, and demos that support the interpretability and explainability of machine learning models. The AI Explainability 360 interactive experience provides a gentle introduction to the concepts and capabilities by walking through an example use case for different consumer personas. (A prototype-based explanation sketch appears after this list.)

    [Image: AI Explainability 360 dashboard]
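
To make the AIF360 item concrete, here is a minimal sketch of its detect-then-mitigate loop. It assumes the aif360 Python package is installed and that the raw UCI Adult data files have been downloaded per AIF360's instructions; the choice of protected attribute, metric, and mitigation algorithm is illustrative only, not a recommendation.

```python
from aif360.datasets import AdultDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Load the UCI Adult dataset with 'sex' as the protected attribute.
dataset = AdultDataset(protected_attribute_names=['sex'],
                       privileged_classes=[['Male']])

privileged = [{'sex': 1}]
unprivileged = [{'sex': 0}]

# Detect: a disparate impact well below 1.0 indicates bias
# against the unprivileged group.
metric = BinaryLabelDatasetMetric(dataset,
                                  privileged_groups=privileged,
                                  unprivileged_groups=unprivileged)
print('Disparate impact before:', metric.disparate_impact())

# Mitigate: Reweighing adjusts instance weights so that outcomes
# become statistically independent of the protected attribute.
rw = Reweighing(unprivileged_groups=unprivileged,
                privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)

metric_transf = BinaryLabelDatasetMetric(dataset_transf,
                                         privileged_groups=privileged,
                                         unprivileged_groups=unprivileged)
print('Disparate impact after:', metric_transf.disparate_impact())
```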
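For ART, the following minimal sketch mounts the classic fast gradient method evasion attack against a scikit-learn classifier and compares clean versus adversarial accuracy. It assumes a recent ART 1.x release; the model and the attack budget (eps) are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import SklearnClassifier

# Train an ordinary scikit-learn model.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap it in an ART estimator and craft evasion examples with FGSM.
classifier = SklearnClassifier(model=model)
attack = FastGradientMethod(classifier, eps=0.3)
X_adv = attack.generate(x=X)

# Evaluate: accuracy should drop noticeably on the perturbed inputs.
clean_acc = np.mean(np.argmax(classifier.predict(X), axis=1) == y)
adv_acc = np.mean(np.argmax(classifier.predict(X_adv), axis=1) == y)
print(f'clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}')
```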
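And for AIX360, here is a minimal sketch of one of its data-level explainers, ProtoDash, which summarizes a dataset by selecting a few weighted prototypes. It assumes the aix360 package is installed; the dataset and the number of prototypes are arbitrary.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from aix360.algorithms.protodash import ProtodashExplainer

# A tabular dataset to summarize.
X, _ = load_breast_cancer(return_X_y=True)

# Select 5 prototypes of X drawn from X itself; the explainer
# returns the prototype weights, the selected row indices, and
# the values of the underlying set-function objective.
explainer = ProtodashExplainer()
weights, indices, _ = explainer.explain(X, X, m=5)

print('prototype rows:', indices)
print('normalized weights:', np.around(weights / weights.sum(), 2))
```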

Continuing the mission with the LFAI Trusted AI Committee

Last year, IBM worked with LFAI to establish the Trusted AI Committee, with a mission of advancing the practice of trustworthy AI. The committee has since grown to include more than 10 organizations working to define and implement principles of trust in AI deployments. One of the activities the committee has been driving is the integration of the Trusted AI toolkits into Apache NiFi and Kubeflow Pipelines as a way of powering trustworthy machine learning workflows.

[Image: machine learning workflows]
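
As a rough illustration of that integration, the sketch below wraps an AIF360 fairness check as a Kubeflow Pipelines component using the kfp v1 SDK. The CSV layout, column names, and protected attribute are hypothetical placeholders, and the single-step pipeline is only a skeleton of a fuller workflow.

```python
from kfp import dsl
from kfp.components import create_component_from_func

def fairness_check(scored_csv: str) -> float:
    """Return the disparate impact of a scored dataset via AIF360."""
    # Imports live inside the function so kfp can containerize it.
    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric

    # Hypothetical layout: a 'label' column plus a binary 'sex' column.
    df = pd.read_csv(scored_csv)
    dataset = BinaryLabelDataset(df=df, label_names=['label'],
                                 protected_attribute_names=['sex'])
    metric = BinaryLabelDatasetMetric(dataset,
                                      privileged_groups=[{'sex': 1}],
                                      unprivileged_groups=[{'sex': 0}])
    return metric.disparate_impact()

# Package the function as a pipeline step that installs its own deps.
fairness_check_op = create_component_from_func(
    fairness_check, packages_to_install=['pandas', 'aif360'])

@dsl.pipeline(name='trusted-ai-workflow')
def trusted_ai_workflow(scored_csv: str):
    # Downstream steps could gate deployment on the returned metric.
    fairness_check_op(scored_csv)
```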

We also recognize that when it comes to promoting trustworthy, beneficial, and equitable AI, technology is only one part of the equation. We have a responsibility to look at the broader context of how AI systems are designed and deployed, how they are used and by whom, and to evaluate their impact on users and communities.

On this mission, contributions from social science, policy and legislation, and diverse perspectives play a role just as important as the technology itself. We look forward to contributions from these different areas and stakeholders, and we invite them to join the community and contribute their work toward advancing the mission of the foundation and its Trustworthy AI open source projects.

More information on the activities of the LFAI Trusted AI Committee can be found in the presentation to the LFAI Governing Board. Those interested in joining can do so via the Trusted AI Committee page.

Todd Moore highlighted this news in his June 30 keynote at Open Source Summit North America.