A Precision Regulation Approach to Controlling Facial Recognition Technology Exports
November 11, 2020

America has historically employed controls on the export of advanced technologies developed here at home to ensure they are not misused abroad in ways that would run counter to national security interests, foreign policy priorities, or American values[1]. Consider a few examples: components that could be used to produce nuclear weapons, chemical weapons precursors, precision weapon guidance systems, and even fingerprint-matching technology. All of these technologies are carefully controlled, not to stifle the advancement of other nations, but to defend U.S. interests and democratic values and to minimize risks to the United States and its allies.


Late last year IBM called for another powerful category of innovation to be added to our country’s list of export-controlled technology: certain facial recognition systems. Facial recognition has many helpful and benign applications. It can be used to speed up airplane boarding, to let you quickly and easily unlock your mobile phone, or to let businesses more efficiently control access to their facilities. But facial recognition has other uses, too, including some that run counter to American interests and values. In the wrong hands, it can be used to suppress dissent, to infringe on the rights of minorities, or to erase basic expectations of privacy.


In a letter earlier this year to members of the U.S. Congress, IBM CEO Arvind Krishna shared that our company has sunset its own general-purpose facial recognition and analysis products. Arvind also emphasized that:


“IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency.”


IBM’s leadership position on facial recognition is part of a larger commitment to advancing the social and public policy dialogue about the potential impacts of advanced technologies, especially those underpinned by powerful AI algorithms. Our leadership in the field of AI ethics is well known. We participated in shaping and were one of the first two signatories to the Vatican’s Rome Call for AI Ethics, we partnered with the University of Notre Dame to establish a first-of-its-kind research lab dedicated to establishing best practices in technology ethics, and we continually provide expertise and guidance to help policymakers grappling with questions posed by emerging technologies. We believe that technology can have a positive impact on society, but only if it is deployed responsibly.


That is why IBM has today submitted specific recommendations to the U.S. Department of Commerce for limiting the export of facial recognition systems in specific cases. Consistent with our call last year for Precision Regulation, we have suggested that the tightest restrictions be placed on end uses and end users that pose the greatest risk of societal harm. We believe that to be most effective, U.S. export controls on facial recognition should:


  • Focus on facial recognition technologies used for “1-to-many” matching, the type of system most likely to be used in mass surveillance, racial profiling, or other human rights violations. These systems are distinct from “1-to-1” facial matching systems, such as those that unlock your phone or let you board an airplane; in those cases, facial recognition verifies that a consenting person is who they say they are. In a “1-to-many” application, by contrast, a system can pick a face out of a crowd by matching one image against a database of many others.
  • Limit the export of “1-to-many” systems by controlling export of both the high-resolution cameras used to collect data and the software algorithms used to analyze and match that data against a database of images.
  • Limit the ability of certain foreign governments to obtain the large-scale computing components required to implement an integrated facial recognition system.
  • Restrict access to online image databases that can be used to train “1-to-many” facial recognition systems, particularly where the explicit consent of the individuals pictured is unclear or non-existent.
  • Update the Commerce Department’s Crime Control country groups to reflect countries’ recent human rights records, and place the strictest controls on the export of facial recognition technology, especially “1-to-many” matching systems, to countries with a history of human rights violations or misuse of such technology.
  • Be implemented on a multilateral basis, in partnership with U.S. allies through a mechanism such as the Wassenaar Arrangement, in order to limit the ability of repressive regimes to simply obtain controlled technologies outside the U.S. market.
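To make the first distinction above concrete, here is a minimal, purely illustrative sketch (not any IBM product) of the operational difference between “1-to-1” verification and “1-to-many” identification. It assumes toy hand-made face embeddings, cosine similarity as the matcher, and an arbitrary 0.9 match threshold; real systems derive embeddings from trained neural networks and tune thresholds empirically.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify(probe, claimed_template, threshold=0.9):
    """1-to-1: does the probe match the single template the person claims to be?"""
    return cosine_similarity(probe, claimed_template) >= threshold

def identify(probe, database, threshold=0.9):
    """1-to-many: search an entire database for the best match above threshold."""
    best_id, best_score = None, threshold
    for person_id, template in database.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id  # None if nobody in the database clears the threshold

# Toy embeddings: the probe image closely resembles alice's stored template.
database = {"alice": [0.9, 0.1, 0.2], "bob": [0.1, 0.9, 0.3]}
probe = [0.88, 0.12, 0.21]

print(verify(probe, database["alice"]))  # 1-to-1 check against one consenting person
print(identify(probe, database))         # 1-to-many search over everyone enrolled
```

The policy-relevant point is visible in the signatures: verification compares against one template a person volunteers, while identification scans everyone in a database, which is what makes “1-to-many” systems suited to surveillance at scale.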


Our full submission to the Commerce Department is available here.


Through this rulemaking, the U.S. Government has a real and immediate opportunity to address serious and legitimate concerns that have been raised about certain uses of facial recognition worldwide. Additional Precision Regulation measures we have recommended may require more time, as well as legislative action, but we appreciate the Commerce Department’s focus on the issue and willingness to drive near-term progress. IBM stands ready to provide whatever expertise and support we can to help the Department bring these controls into effect.


– Christopher A. Padilla, Vice President, IBM Government and Regulatory Affairs


###


[1] I know about these controls from direct experience, because I was responsible for administering them when I served as Assistant Secretary of Commerce for Export Administration from 2006 to 2007. Today, the IBM team managing compliance with global export controls is part of my organization.



