Trustworthy AI

Watson Machine Learning for z/OS (WMLz) provides a set of tools and capabilities for evaluating, monitoring, and improving the trustworthiness of your AI models, in accordance with your organization's regulations and requirements. These capabilities, which include model explainability, drift detection, and fairness detection, are seamlessly integrated with AI model lifecycle management and operationalization in WMLz. You can install and configure the trustworthy AI tools as part of WMLz and use them to help ensure that AI systems built on WMLz are fair, robust, explainable, and aligned with the values they are designed for.

The trustworthy AI tools and capabilities are added to WMLz iteratively. The first capability available is model explainability, which helps you understand the important factors that influence the outcomes of your AI models.
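To illustrate the idea behind model explainability, the following sketch computes a simple permutation-based feature importance for a toy predictor. This is not the WMLz explainability API; the dataset, the `predict` function, and the `permutation_importance` helper are all hypothetical, standing in for any trained model you want to explain.

```python
import random

# Toy dataset: the target depends strongly on x0, weakly on x1,
# and not at all on x2.
random.seed(0)
X = [[random.random() for _ in range(3)] for _ in range(200)]
y = [3.0 * row[0] + 0.5 * row[1] for row in X]

def predict(row):
    # Stand-in for any trained model's prediction function.
    return 3.0 * row[0] + 0.5 * row[1]

def mse(data, targets):
    return sum((predict(r) - t) ** 2 for r, t in zip(data, targets)) / len(targets)

baseline = mse(X, y)

def permutation_importance(feature):
    """Shuffle one feature column and measure how much the error rises.

    A large rise means the model relies heavily on that feature.
    """
    col = [row[feature] for row in X]
    random.shuffle(col)
    X_perm = [row[:feature] + [v] + row[feature + 1:] for row, v in zip(X, col)]
    return mse(X_perm, y) - baseline

scores = [permutation_importance(i) for i in range(3)]
# Expect: x0 dominates, x1 contributes a little, x2 contributes nothing.
```

Permutation importance is only one of several explainability techniques; it reports which inputs most influence a model's output, which is the kind of insight the WMLz model explainability capability is designed to surface.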