Why We Must Protect an Open Innovation Ecosystem for AI
Dec 05, 2023

By Joshua New, Senior Fellow, IBM Policy Lab

 

U.S. President Joe Biden recently remarked, “there will be more technological change in the next 10 years, maybe in the next 5 years, than in the last 50 years.” While it’s clear that the transformative potential of artificial intelligence will be the driver of these coming changes, it is not yet clear whether the benefits of AI will be broadly shared or controlled and doled out by just a few big stakeholders.

 

At the same time, policymakers around the world are focused on addressing the potential safety risks of AI and some have advocated for regulation that would clamp down on open innovation. For instance, some have proposed the idea of a licensing system for AI as a way to make AI safe. This would be a grave mistake for the entire technology ecosystem. Sacrificing an open ecosystem for AI in the name of safety would give only a select few firms the opportunity to hoard the benefits of innovation and fail to develop meaningful and sustainable solutions for potential safety risks that may arise. The only way to guarantee that the transformative changes of AI can be harnessed by all is to ensure that the future of AI is open.

 

That is why, today, IBM joined with over 50 organizations across industry, startups, academia, research, and government to launch the AI Alliance: from Berkeley to Cornell to Yale, to startups including Hugging Face and Anyscale, to established companies such as Meta, AMD, Intel, Dell, Oracle, and Sony, to scientific collaborators such as the European Organization for Nuclear Research (CERN), the National Aeronautics and Space Administration (NASA), the National Science Foundation (NSF), and the Cleveland Clinic. The AI Alliance’s mission is to build and support open technology for AI and the open communities that will enable it to benefit all of us.

 

AI, particularly the foundation models enabled by the current wave of generative AI, is a unique technology and prone to centralization. Many of these models require large amounts of data to train on, large amounts of compute power to train and operate, and large amounts of specialized human expertise to design, build, test, deploy, and monitor. Fierce competition over these limited resources makes them even more expensive, and makes the resulting models all the more valuable. Left unchecked, these factors would dictate that only the largest and most well-resourced firms could be competitive in AI. Even worse, these firms would have powerful incentives to keep barriers to entry high, preserve their market dominance, and steer innovation toward the resource-intensive approaches only they can pursue.

 

This is precisely why open innovation ecosystems – in which stakeholders recognize the value of community-built technology and the open exchange of information, ideas, and skills it cultivates – provide greater value and inclusivity for the technology industry, including private firms seeking to capitalize on AI-driven innovation. Open innovation doesn’t just mean open-source software. Open source and permissively licensed AI models are a key part of an open innovation ecosystem for AI, as are open-source toolkits and resources, open datasets, open standards, and open science.

 

This entire open ecosystem can offer tremendous benefits to competition and innovation; democratization and skills development; and safety and security. There are several steps policymakers can take to protect an open innovation ecosystem for AI, and recognizing just how beneficial an open ecosystem is will be critical to any future policymaking.

 

Competition and Innovation
The most obvious benefit of an open ecosystem is a lower barrier to entry for competition. By making many of the technical resources necessary to develop and deploy AI more readily available, open ecosystems enable small and large firms alike to develop new and competitive products and services without steep, and potentially prohibitive, upfront costs. And there is tremendous demand from businesses for these kinds of resources. Not only does IBM rely on collaborative technologies like open source, but we also make them available through our watsonx platform, as our clients value access to a variety of powerful tools and models from the open innovation community, like Meta’s Llama-2-chat 70-billion-parameter model, alongside our proprietary offerings for specialized enterprise use cases.

 

Openness and innovation go hand in hand, and open ecosystems can drive AI advancements in two key ways. First, more competition means everyone must raise the bar. Companies should compete based on how well they can tailor and deploy AI in valuable ways, rather than on how effectively they can hoard resources and erect barriers to entry. Second, more access to AI means more stakeholders can identify opportunities to improve AI technologies and more easily pursue novel and valuable applications for AI. Taken together, these dynamics make an open AI ecosystem dramatically more innovative, inclusive, and competitive than a closed one.

 

Democratization and Skills
Just as openness drives competition, it also drives democratization. Open ecosystems mean more opportunities for anyone to explore, test, modify, study, and deploy AI, which can dramatically lower the bar to deploying AI for socially beneficial applications. This is why IBM partnered with NASA to develop a geospatial foundation model and make it available under an open-source license. The model can analyze NASA’s geospatial data up to four times faster than state-of-the-art deep-learning models, with half as much labeled data, making it a powerful tool to accelerate climate-related discoveries. Since it is openly available, anyone – including non-governmental organizations, governments, and other stakeholders – can leverage this model freely to drive efforts to build resilience to climate change.
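
As one illustration of what that openness means in practice, the following is a minimal sketch, assuming the model is published under an open license on the Hugging Face Hub, of how any stakeholder could download the model’s weights to study, fine-tune, or deploy locally. The repository id used here is a hypothetical placeholder, not the model’s actual name.

```python
# A minimal sketch, assuming the model is published under an open license on the
# Hugging Face Hub. The repository id below is a hypothetical placeholder.
from huggingface_hub import snapshot_download

# Download every file in the model repository (weights, config, model card) to a
# local cache directory so it can be inspected, fine-tuned, or deployed offline.
local_dir = snapshot_download(repo_id="example-org/open-geospatial-model")
print(f"Model files downloaded to: {local_dir}")
```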

 

It is much easier to learn about a subject when you can access the materials for free. An open innovation ecosystem unleashes a significantly broader pool of AI talent, as students, academics, and existing members of the workforce can more easily access the resources necessary to acquire AI skills. This is part of the reason IBM contributes hundreds of open models and datasets to Hugging Face, the leading open-source collaboration platform for the machine learning community building the future of AI. IBM has also committed to training 2 million people in AI by 2026, with a focus on underrepresented communities, by partnering with universities around the world to provide AI training resources and by making AI coursework available for free on our SkillsBuild platform.

 

Safety and Security
Many policymakers are increasingly focused on the potential safety and security risks of generative AI. On November 1, 28 countries including the United States, United Kingdom, France, Germany, and China signed the Bletchley Declaration at the UK’s AI Safety Summit, to acknowledge that powerful AI could pose significant risks alongside significant benefits and commit to studying and mitigating these risks. Here, too, open innovation ecosystems can help. As with open-source software, open models can enable a higher level of scrutiny from the community, greatly increasing the likelihood that vulnerabilities are identified and patched. This means that open AI models can be as trusted, secure, and fit for use in critical infrastructure as proprietary models.

 

Some have raised concerns about the potential safety issues that may arise when granting bad actors access to open-source model weights – the unique parameters of a machine learning model – for powerful generative AI models. While it is true that openly available models will need to be matched by robust risk mitigation practices, open innovation is a boon to AI safety, not a burden. The creation of the UK and U.S. AI Safety Institutes signals a commitment to rapidly developing and maturing the science of testing and evaluating AI models for safety and security concerns, such as the authentication of AI-generated content. The mission of these safety institutes will be significantly better served if broad communities of diverse stakeholders contribute to this scientific advancement, scrutinize research, share discoveries and best practices, and adopt widely agreed-upon open standards.

 

Policy Recommendations
Many of the benefits of open innovation ecosystems for AI would also flow to the proprietary, closed ecosystems being developed in parallel. Open and proprietary models can coexist, and even have a symbiotic relationship. When it comes to AI policymaking, the goal should be to ensure that open and closed ecosystems for AI can both exist and remain competitive with each other. Fortunately, the benefits of open innovation have proven themselves time and time again, ensuring that in most markets, open ecosystems can thrive and be competitive right alongside closed ones.

 

Policymakers need not reinvent the wheel to ensure the future of AI remains open as well. To preserve an open innovation ecosystem for AI and capture its benefits, the IBM Policy Lab recommends policymakers:

 

1. Support precision regulation to address AI risk, and reject policies that would sacrifice openness. Policymakers are right to take steps to mitigate the risks of new technologies, and IBM has long advocated for “precision regulation” to address these risks. But certain proposals to address the safety risks of AI – such as regulating the technology rather than its application, or creating an AI licensing regime – are not helpful. These proposals would impose significant constraints on open innovation in AI, undermining competition and innovation, democratization and skills development, and even safety and security. Instead, policymakers should focus on regulating the application of AI, regardless of whether it is open or closed.

 

2. Enable open innovation ecosystems. While open innovation ecosystems are largely decentralized and self-directed, policymakers can still take steps to ensure they can flourish. Government efforts to facilitate the development of AI standards and advance the science of AI – such as through the UK and U.S. AI Safety Institutes – should encourage the adoption of openly developed and licensed standards, prioritize open access to AI safety research, and share technical resources and other inputs that enable broad collaboration in AI.

 

One particularly helpful way to enable open innovation in AI is to develop shared computing and data resources that can serve as the infrastructure for open innovation ecosystems. Policymakers should fund initiatives to make it easier to access the computing power necessary to develop and evaluate AI. For example, the UK recently announced £300 million (about $368 million) in investments to develop its AI Research Resource, which will leverage high-performance computing to evaluate AI models and drive research in drug discovery and clean energy. The United States has proposed a similar program, called the National AI Research Resource (NAIRR), to provide researchers with access to the computational, data, software, training, and educational resources necessary to power AI research. President Biden’s recent Executive Order on AI establishes a pilot for the NAIRR, but the initiative lacks the funding necessary to fully realize its benefits. Policymakers in the U.S. should fully fund the NAIRR, and policymakers globally should make similar investments in shared computing resources for AI.

 

3. Leverage open innovation for public benefit. Governments should recognize open innovation ecosystems for AI as a boon for public benefit and invest in developing and adopting open AI resources. Like IBM’s partnership with NASA, government agencies should proactively identify opportunities to develop valuable open AI resources that advance mission delivery and serve as tools the broader public can leverage.

 

Conclusion
A thriving and competitive open innovation ecosystem for AI is a priority for industry, civil society, and academia, and policymakers should take note. Policymakers are right to seek to mitigate the risks of new technologies, but they also have a duty to ensure technology can deliver broad economic and social benefits. These priorities need not be in conflict: by protecting open innovation in AI, policymakers can help mitigate the risks of AI while maximizing its benefits and ensuring that they are broadly distributed.

 
