FYI: Science Policy News

Biden Sets Priorities for Applications of AI to National Security

OCT 25, 2024
The directive creates guardrails for the safe use of AI by government agencies involved in national security and intelligence work.
Lindsay McKenzie, Science Policy Reporter, FYI, American Institute of Physics

National Security Adviser Jake Sullivan speaking at a press briefing in early October.

Oliver Contreras / White House

President Joe Biden issued the first-ever national security memorandum on artificial intelligence yesterday, outlining actions the federal government should take to ensure the U.S. leads the world in developing safe and trustworthy AI.

The memo aims to boost the adoption of AI by the federal government while creating guardrails to ensure that the technology is used responsibly. The memo warns that, if misused, AI could “bolster authoritarianism worldwide, undermine democratic institutions and processes, facilitate human rights abuses, and weaken the rules-based international order.”

A companion document, the “Framework to Advance AI Governance and Risk Management in National Security,” was published alongside the memo and details prohibited uses of AI as well as mechanisms for risk management, evaluation, accountability, and transparency.

While the memo sets implementation timelines for the federal government to adopt and regulate AI, most of the deadlines fall after Biden leaves office, leaving much of its implementation to the discretion of the next administration.

The directive was announced by National Security Adviser Jake Sullivan at the National Defense University in Washington, D.C., on Oct. 24. In his prepared remarks, Sullivan highlighted several AI challenges the federal government faces, including that the government owns little of the technology and that the speed of development far outpaces regulation.

“Our government took an early and critical role in shaping developments — from nuclear physics and space exploration to personal computing to the internet,” Sullivan said. “That’s not been the case with most of the recent AI revolution. While the Department of Defense and other agencies funded a large share of AI work in the 20th century, the private sector has propelled much of the last decade of progress.”

The order directs the AI Safety Institute at the National Institute of Standards and Technology to carry out voluntary unclassified safety testing of frontier AI models to check for risks relating to topics such as cybersecurity, biosecurity, and chemical weapons. Additionally, the memo states that NIST, through its AI Safety Institute, will be the primary point of contact for private sector AI developers, and will develop voluntary testing mechanisms for AI models pre- and post-public deployment. The Department of Energy is given this safety testing role for nuclear security risks and is also required to present an annual report to the president on the radiological and nuclear risks of AI models that may make it easier to assemble or test nuclear weapons. According to a fact sheet released by the White House, the order also “doubles down” on the National Science Foundation’s National AI Research Resource pilot project, which provides scientists access to supercomputers and datasets necessary for AI research.

The memo also directs the NSF and other agencies to convene academic research institutions and scientific publishers to develop best-practice standards for publishing computational biological and chemical models, datasets, and research methods that use AI. The order also recommends that DOD, DOE, and intelligence agencies consider future AI usage when building or updating their computational facilities.

Additionally, the memo emphasizes the need for the U.S. to increase domestic production of advanced chips and semiconductors and directs intelligence agencies to prevent adversaries from stealing U.S. technology, describing this as a “top-tier intelligence priority.” This protection applies to private sector advances, which the memo frames as national assets.

To increase the United States’ ability to draw AI experts and researchers from abroad, the memo directs relevant agencies to convene in the next 90 days and explore how to streamline visa processing for highly skilled applicants working with AI and other critical and emerging technologies. In particular, the order instructs those agencies to consider options for “narrowing the criteria that trigger secure advisory opinion requests for such applicants,” a visa vetting procedure that involves multiple agencies and can slow the approval process. This provision builds on other measures to streamline STEM immigration implemented through Biden’s AI executive order last year.

“America has to continue to be a magnet for global, scientific, and tech talent,” Sullivan said. “As I noted, we’ve already taken major steps to make it easier and faster for top AI scientists, engineers, and entrepreneurs to come to the United States, including by removing friction in our visa rules to attract talent from around the world. And through this new memorandum, we’re taking more steps, streamlining visa processing wherever we can for applicants working with emerging technologies. And we’re calling on Congress to get in the game with us, staple more green cards to STEM diplomas, as President Biden has been pushing to do for years.”
