Artificial intelligence (AI) is transforming society, including the very character of national security. Recognizing this, the Department of Defense (DoD) launched the Joint Artificial Intelligence Center (JAIC) in 2018, the predecessor to the Chief Digital and Artificial Intelligence Office (CDAO), to develop AI solutions that create competitive military advantage, establish the conditions for human-centric AI adoption, and increase the agility of DoD operations. However, the roadblocks to scaling, adopting, and realizing the full potential of AI in the DoD are similar to those in the private sector.
A recent IBM survey found that the top barriers preventing successful AI deployment include limited AI skills and expertise, data complexity, and ethical concerns. Further, according to the IBM Institute for Business Value, 79% of executives say AI ethics is important to their enterprise-wide AI approach, yet less than 25% have operationalized common principles of AI ethics. Earning trust in the outputs of AI models is a sociotechnical challenge that requires a sociotechnical solution.
Defense leaders focused on operationalizing the responsible curation of AI must first agree upon a shared vocabulary—a common culture that guides safe, responsible use of AI—before they implement technological solutions and guardrails that mitigate risk. The DoD can lay a sturdy foundation to accomplish this by improving AI literacy and partnering with trusted organizations to develop governance aligned to its strategic goals and values.
It’s important that personnel know how to deploy AI to improve organizational efficiencies. But it’s equally important that they have a deep understanding of the risks and limitations of AI and how to implement the appropriate security measures and ethics guardrails. These are table stakes for the DoD or any government agency.
A tailored AI learning path can help identify gaps and needed training so that personnel get the knowledge they need for their specific roles. Institution-wide AI literacy is essential so that all personnel can quickly assess, describe, and respond to fast-moving, viral, and dangerous threats such as disinformation and deepfakes.
IBM applies AI literacy in a customized manner within our own organization, because what constitutes essential literacy varies depending on a person’s position.
As a leader in trustworthy artificial intelligence, IBM has experience developing governance frameworks that guide the responsible use of AI in alignment with client organizations’ values. IBM also maintains frameworks for the use of AI within IBM itself, which inform policy positions such as its stance on facial recognition technology.
AI tools are now used in national security, including to help protect against data breaches and cyberattacks. But AI also supports other strategic goals of the DoD. It can augment the workforce, making personnel more effective and helping them reskill. And it can help create resilient supply chains that support soldiers, sailors, airmen, and marines in warfighting, humanitarian aid, peacekeeping, and disaster relief roles.
As part of its responsible AI toolkit, the CDAO has articulated five ethical principles: responsible, equitable, traceable, reliable, and governable. Based on the US military’s existing ethics framework, these principles are grounded in the military’s values and help uphold its commitment to responsible AI.
Making these principles a reality requires a concerted effort to address both the functional and non-functional requirements of the models and the governance systems around those models. Below, we provide broad recommendations for operationalizing the CDAO’s ethical principles.
“DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.”
Everyone agrees that AI models should be developed by personnel who are careful and considerate, but how can organizations nurture people to do this work? We recommend:
Note: These measures of responsibility must be interpretable by AI non-experts (without “mathsplaining”).
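In that spirit, the sketch below shows how a numeric measurement might be reported to a non-expert reviewer in plain language. It is a hypothetical illustration: the function name, wording, and the 0.8 cutoff (borrowed from the common “four-fifths rule” in US employment guidance) are our own choices, not a DoD or IBM standard.

```python
def explain_disparate_impact(ratio: float) -> str:
    """Translate a disparate-impact ratio into plain language.

    ratio = (favorable-outcome rate for the unprivileged group)
          / (favorable-outcome rate for the privileged group).
    A ratio near 1.0 means both groups receive favorable outcomes
    at similar rates.
    """
    if ratio >= 0.8:
        return (f"Favorable outcomes are reasonably balanced: the "
                f"unprivileged group receives {ratio:.0%} of the "
                f"privileged group's rate. No action required.")
    return (f"The unprivileged group receives only {ratio:.0%} of the "
            f"privileged group's favorable-outcome rate. Escalate to "
            f"the accountable model owner before deployment.")

print(explain_disparate_impact(0.72))
```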
“The Department will take deliberate steps to minimize unintended bias in AI capabilities.”
Everyone agrees that the use of AI models should be fair and not discriminate, but how does this happen in practice? We recommend:
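Among other measures, quantify fairness before a model is fielded. Below is a minimal, illustrative sketch of two widely used group-fairness measurements in plain NumPy; in practice, a maintained library such as AI Fairness 360 (one of the toolkits IBM donated to the Linux Foundation, noted later in this piece) provides these and many more metrics, along with mitigation algorithms.

```python
import numpy as np

def fairness_metrics(y_pred: np.ndarray, group: np.ndarray) -> dict:
    """Two standard group-fairness measurements.

    y_pred: 1 = favorable model outcome, 0 = unfavorable.
    group:  1 = privileged group, 0 = unprivileged group.
    """
    rate_priv = y_pred[group == 1].mean()    # favorable rate, privileged
    rate_unpriv = y_pred[group == 0].mean()  # favorable rate, unprivileged
    return {
        # Difference of favorable rates; 0.0 is perfectly balanced.
        "statistical_parity_difference": rate_unpriv - rate_priv,
        # Ratio of favorable rates; 1.0 is perfectly balanced.
        "disparate_impact": rate_unpriv / rate_priv,
    }

# Toy data: 100 synthetic decisions and group labels.
rng = np.random.default_rng(0)
print(fairness_metrics(rng.integers(0, 2, 100), rng.integers(0, 2, 100)))
```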
“The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedures and documentation.”
Operationalize traceability by providing clear guidelines to all personnel using AI:
IBM and its partners can provide AI solutions with comprehensive, auditable content grounding, which is imperative for high-risk use cases.
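To make auditability concrete, here is a minimal sketch of a model lineage record. Every name in it (the class, its fields, and the example values) is hypothetical; the point is that each model version should be tied to fingerprinted data sources, its design documentation, and an accountable human reviewer.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ModelLineageRecord:
    """One auditable entry tying a model version to its inputs."""
    model_name: str
    model_version: str
    data_sources: list        # provenance of every dataset used
    data_sha256: str          # fingerprint of the exact data snapshot
    methodology_doc: str      # link to design procedures and documentation
    approved_by: str          # accountable human reviewer
    recorded_at: str

def fingerprint(snapshot: bytes) -> str:
    """Hash a dataset snapshot so auditors can verify it later."""
    return hashlib.sha256(snapshot).hexdigest()

record = ModelLineageRecord(
    model_name="cargo-routing-classifier",   # hypothetical example
    model_version="1.4.2",
    data_sources=["logistics_db_export_2024_06"],
    data_sha256=fingerprint(b"...dataset bytes..."),
    methodology_doc="https://example.mil/docs/routing-model-design",
    approved_by="j.smith",
    recorded_at=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(record), indent=2))  # append to an audit log
```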
“The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life cycles.”
Organizations must document well-defined use cases and then test for compliance. Operationalizing and scaling this process requires strong cultural alignment so practitioners adhere to the highest standards even without constant direct oversight. Best practices include:
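Among them, automate the compliance check itself. The sketch below gates release on requirements drawn from the documented use case; the threshold names and values are placeholders, since real values would come from a program’s test and evaluation plan.

```python
# Hypothetical thresholds; real ones come from the documented use case.
REQUIREMENTS = {"accuracy_min": 0.90,
                "disparate_impact_min": 0.80,
                "max_latency_ms": 250}

def release_gate(measured: dict, req: dict = REQUIREMENTS) -> bool:
    """Block deployment unless every documented requirement is met."""
    failures = []
    if measured["accuracy"] < req["accuracy_min"]:
        failures.append("accuracy below documented threshold")
    if measured["disparate_impact"] < req["disparate_impact_min"]:
        failures.append("fairness below documented threshold")
    if measured["latency_ms"] > req["max_latency_ms"]:
        failures.append("latency above documented threshold")
    for f in failures:
        print(f"RELEASE BLOCKED: {f}")
    return not failures

# Example: measurements from a test harness; the fairness gate fails.
print(release_gate({"accuracy": 0.93, "disparate_impact": 0.76,
                    "latency_ms": 180}))  # -> False
```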
“The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.”
Operationalization of this principle requires:
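Among other things, it requires a concrete disengagement mechanism. The sketch below is one hypothetical shape such a mechanism could take in software: it assumes a model that returns a (label, confidence) pair and uses a rising rate of low-confidence predictions as a stand-in for whatever definition of unintended behavior a real program would adopt.

```python
from collections import deque

class GovernableModel:
    """Wrap a deployed model with a simple automatic kill switch.

    If the share of low-confidence predictions in a sliding window
    exceeds a tripwire, the wrapper deactivates the model and defers
    all subsequent decisions to a human operator.
    """
    def __init__(self, model, window=100, min_confidence=0.6,
                 max_low_conf_rate=0.2):
        self.model = model
        self.recent = deque(maxlen=window)   # rolling low-confidence flags
        self.min_confidence = min_confidence
        self.max_low_conf_rate = max_low_conf_rate
        self.active = True

    def predict(self, x):
        if not self.active:
            raise RuntimeError("Model disengaged; route to human review.")
        label, confidence = self.model(x)    # model returns (label, confidence)
        self.recent.append(confidence < self.min_confidence)
        window_full = len(self.recent) == self.recent.maxlen
        low_conf_rate = sum(self.recent) / len(self.recent)
        if window_full and low_conf_rate > self.max_low_conf_rate:
            self.active = False              # automatic disengagement
        return label

# Toy stand-in: a model that always answers with confidence 0.5.
wrapped = GovernableModel(lambda x: (1, 0.5), window=10)
for i in range(10):
    wrapped.predict(i)
print(wrapped.active)  # False: the tripwire disengaged the model
```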
IBM has been at the forefront of advancing trustworthy AI principles and a thought leader in the governance of AI systems since their nascence. We follow long-held principles of trust and transparency that make clear the role of AI is to augment, not replace, human expertise and judgment.
In 2013, IBM embarked on the journey of explainability and transparency in AI and machine learning. IBM is a leader in AI ethics, appointing an AI ethics global leader in 2015 and creating an AI ethics board in 2018. These experts work to help ensure our principles and commitments are upheld in our global business engagements. In 2020, IBM donated its Responsible AI toolkits to the Linux Foundation to help build the future of fair, secure, and trustworthy AI.
IBM leads global efforts to shape the future of responsible AI and ethical AI metrics, standards, and best practices:
Curating responsible AI is a multifaceted challenge because it demands that human values be reliably and consistently reflected in our technology. But it is well worth the effort. We believe the guidelines above can help the DoD operationalize trusted AI and help it fulfill its mission.
For more information on how IBM can help, please visit IBM AI Governance Consulting.