Eshoo Urges NSA & OSTP to Address Biosecurity Risks Caused by AI

October 25, 2022
Press Release

PALO ALTO, CA – Today, U.S. Rep. Anna G. Eshoo (D-CA), incoming Chair of the AI Caucus, urged the National Security Advisor (NSA) and the Office of Science and Technology Policy (OSTP) to take action to address the risks dual-use AI poses to biosecurity.

Eshoo wrote:

AI has important applications in biotechnology, healthcare, and pharmaceuticals; however, we should remain vigilant against the potential harm dual-use applications represent for the national security, economic security, and public health of the United States, in the same way we would with physical resources such as molecules or biologics.

Eshoo also sent a letter to NSA and OSTP in September urging the Administration to address unsafe open-source AI models that lack safeguards to limit the harms their unchecked use presents, specifically the Stable Diffusion model released by Stability AI on August 22, 2022.

A PDF of the letter can be found HERE and the text of the letter is below:

Dear Advisor Sullivan and Director Prabhakar,

I’m writing to urge you to address the dual-use harm that wholly open-sourced artificial intelligence (AI) models can have with regard to biosecurity. The open-source nature of dual-use AI models, coupled with both the declining cost and skills required to synthesize DNA and the current lack of mandatory gene synthesis screening requirements for DNA orders, significantly increases the likelihood of the misuse of such models. I urge the Administration to include the governance of dual-use, open-source AI models in its upcoming discussions with our co-signatories at the Ninth Review Conference of the Biological Weapons Convention (BWC) and to investigate methods of governance such as mandating the use of application programming interfaces (APIs).

AI has quickly become ubiquitous in many industries, decreasing costs, increasing productivity, and fostering innovation. This is increasingly true in the biotechnology and pharmaceutical industries. Technological advances in recent years have made it easier to capture and store reams of digital patient data, resulting in rich troves of genomic data, health records, medical imaging, and other patient information that AI platforms can mine to help develop drugs faster and more successfully. Companies can also use AI to model new molecules for drug discovery.

As I stated in my previous letter to you regarding the open-source, generative model Stable Diffusion, AI models released without appropriate safeguards can lead to real-world harms. These risks are particularly acute regarding biosecurity. The same AI models designed to assist in the design of new molecules for drug discovery can be easily transformed and directed to design new, lethal molecules. The introduction and use of AI in biotechnology and drug discovery dramatically lowers the technical thresholds for creating toxic substances or biological agents that can cause significant harm. Open-source AI is the primary route for learning and creating new models, and the datasets needed to create harmful toxins are readily available, creating significant biosecurity risks.

Researchers at the drug discovery company Collaborations Pharmaceuticals, Inc. demonstrated this recently by simply inverting their open-source machine learning model and transforming their “innocuous generative model from a helpful tool of medicine to a generator of likely deadly molecules.” In less than six hours, the AI had designed one of the most toxic nerve agents and many other known chemical warfare agents, as well as new molecules that were predicted to be more toxic than publicly known chemical warfare agents.

The dual-use biosecurity risks of open-source AI models will become graver as such models provide the pieces needed to resurrect history's worst pathogens or engineer much worse. There is unfortunately already precedent for misuse of this kind in the scientific community. In 2018, Canadian researchers reconstituted an extinct virus (horsepox) for only $100,000 using mail-order DNA. In January 2021, researchers released a paper with a step-by-step guide on how to engineer COVID-19 and make other strains in a lab. Additional dual-use concerns include the synthesis of mousepox, the synthesis of poliovirus, and the generation of the 1918 influenza virus in lab settings. Today's incredibly useful AI models for tasks like predicting proteins' structures will soon be succeeded by models that predict proteins' interactions, viruses' health effects, and so on. Such models could be used not only to reconstitute smallpox but to engineer it to be even more communicable and deadly. Ungoverned proliferation of such AI models is untenable.

Another concern regarding ungoverned AI that is released open-source without appropriate safeguards is how it can and will be used by our adversaries. President Biden recently announced sweeping new limits on the sale of semiconductor technology to China, a step to protect the American semiconductor industry and slow the progress of Chinese military programs. It would be counterproductive to limit the availability of certain technology to China while allowing the highest-impact product of that technology to be transferred without government review via open sourcing.

These concerns raise the need for safe, transparent, and trustworthy AI. The Administration should encourage policymakers, academics, industry experts, and scientists to engage in open dialogue about the risks these models pose and the implications of computational tools. Increased visibility into the use of these models would raise awareness about potential dual-use aspects of cutting-edge technologies. Content controls, a free content filter, monitoring of applications, and a code of conduct are several other steps industry and academia, with the coaxing of the Administration and policymakers, could take to encourage responsible science and guard against the misuse of AI-focused drug discovery. Finally, requiring the use of an API, with code and data available upon request, would greatly enhance security and control over how published models are utilized without adding much hindrance to accessibility. APIs can: (1) block queries that have potentially dual-use applications; (2) screen users, such as requiring an institutional affiliation; and (3) flag suspicious activity. I urge you to explore this and any other viable methods within your authorities to reduce the likelihood of open-source AI models being misused for bioweapons.
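[Editor's note: The three API safeguards enumerated above describe a gatekeeping pattern. The sketch below is purely illustrative; all names (ModelGateway, DENYLIST) and the keyword-matching logic are hypothetical simplifications, not any real screening system.]

```python
# Illustrative sketch of the API gatekeeping pattern described in the letter:
# (1) block potentially dual-use queries, (2) screen users by institutional
# affiliation, and (3) flag suspicious activity for review.
# DENYLIST and all class/method names are hypothetical.

DENYLIST = {"nerve agent", "toxin synthesis"}  # stand-in for a real dual-use classifier

class ModelGateway:
    def __init__(self):
        self.flagged = []  # audit log of suspicious requests

    def screen_user(self, user):
        # (2) screen users, e.g. require an institutional affiliation
        return bool(user.get("institution"))

    def handle(self, user, query):
        if not self.screen_user(user):
            return "rejected: unverified user"
        # (1) block queries with potentially dual-use applications
        if any(term in query.lower() for term in DENYLIST):
            # (3) flag suspicious activity for later review
            self.flagged.append((user.get("name"), query))
            return "blocked: query flagged for review"
        return "ok: query forwarded to model"
```

In practice such a gateway would sit in front of the hosted model, so the weights themselves are never distributed and every request passes through the checks above.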

AI has important applications in biotechnology, healthcare, and pharmaceuticals; however, we should remain vigilant against the potential harm dual-use applications represent for the national security, economic security, and public health of the United States, in the same way we would with physical resources such as molecules or biologics. To mitigate these risks, I urge the Administration to include the governance of dual-use, open-source AI models in its upcoming discussions at the BWC Review Conference and investigate methods of governance such as mandating the use of APIs.

Most gratefully,


Anna G. Eshoo

###