November 2023

President Biden’s AI Executive Order Highlights Likely Impacts on Government Contractors

The development and implementation of Artificial Intelligence ("AI") are significant issues affecting government contractors. AI draws considerable controversy and skepticism from the general public, especially in the defense context, and continued public scrutiny of AI advancements will likely pose unique challenges across a range of industries.

In an effort to address public concern and initiate regulation of this new wave of technology, the White House announced its AI Executive Order ("EO") on October 30, 2023, and the full text was published shortly thereafter. EO 14110, 88 Fed. Reg. 75191 (Nov. 1, 2023). According to the administration, "[t]he EO establishes new standards for AI safety and security, protects Americans' privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more." White House Fact Sheet, October 30, 2023.

Aspirational goals set by the EO—along with the fast-developing technology—will result in tremendous opportunities for government contractors in the coming years. The EO reflects the growing importance and acceptance of AI by the federal government.  As AI technology is in its infancy, many questions remain as to its risks and benefits.  This update aims to highlight provisions likely to impact federal agencies and government contractors. 

The following are just a few mandates of the EO that contractors may consider relevant:

Section 4.3 re: Managing AI in Critical Infrastructure and Cybersecurity.  To ensure the protection of critical infrastructure[1], within 90 days of the EO, “the head of each agency with relevant regulatory authority over critical infrastructure and the heads of relevant Sector Risk Management Agencies, in coordination with the Director of the Cybersecurity and Infrastructure Security Agency within the Department of Homeland Security for consideration of cross-sector risks, shall evaluate and provide to the Secretary of Homeland Security an assessment of potential risks related to the use of AI in critical infrastructure sectors involved, including ways in which deploying AI may make critical infrastructure systems more vulnerable to critical failures, physical attacks, and cyber attacks, and shall consider ways to mitigate these vulnerabilities. Independent regulatory agencies are encouraged, as they deem appropriate, to contribute to sector-specific risk assessments."  Section 4.3(a)(i).

Section 4.4 re: Reducing the Risks at the Intersection of AI and Chemical, Biological, Radiological, and Nuclear ("CBRN") Threats.  To better mitigate the risks of CBRN threats, with a particular focus on biological weapons, within 150 days of the EO the Secretary of Homeland Security, in consultation with the Secretary of Energy and the Director of the Office of Science and Technology Policy, shall evaluate the potential for AI to be misused to enable the development or production of CBRN threats, while also considering the benefits and application of AI to counter these threats.  Section 4.4(a)(i).

Section 5.2 re: Promoting Innovation.  To advance responsible AI innovation by a wide range of healthcare technology developers in a way that promotes the welfare of patients and workers in the healthcare sector, the Secretary of Health and Human Services ("HHS") shall identify and, as appropriate and consistent with applicable law and the activities directed in section 8 of the EO, prioritize grant-making and other awards, as well as undertake related efforts, to support responsible AI development and use, including collaborating with appropriate private sector actors through HHS programs that may support the advancement of AI-enabled tools that develop personalized immune-response profiles for patients, consistent with section 4 of the EO.  Section 5.2(e)(i).

Section 7.3 re: Strengthening AI and Civil Rights in the Broader Economy.  Within 365 days of the date of the EO, to prevent unlawful discrimination from AI used for hiring, the Secretary of Labor must publish guidance for federal contractors regarding nondiscrimination in hiring involving AI and other technology-based hiring systems.  Section 7.3.

Section 10 re: Advancing Federal Government Use of AI.   The EO requires that, within 150 days of its issuance, the Director of the Office of Management and Budget ("OMB") must issue guidance to agencies to strengthen the effective and appropriate use of AI, advance AI innovation, and manage risks from AI in the federal government. The OMB’s guidance must include recommendations to agencies regarding “maximizing the value to agencies when relying on contractors to use and enrich Federal Government data for the purposes of AI development and operation.”  See Section 10(b)(viii)(G).

As indicated by the provisions referenced above, the EO is essentially a research vehicle to investigate AI's benefits and risks. While binding AI regulations have yet to be issued, the agency guidance that follows the EO should provide an early indication of how this emerging technology will be regulated. In addition to the Sections listed above, the EO provides insight into likely policy, regulatory, legal, and practical changes that will impact a broad range of industries, including defense, energy, infrastructure, healthcare, and technology, among others. Importantly, the EO tasks federal agencies with the responsibility to carry out its intent.

Government contractors that use AI or provide AI to the government should expect the impact of the EO and related agency-level policies to reach them through new contract and regulatory clauses.  These could include a litany of changes, including privacy and security standards, anti-discrimination requirements, and fraud detection and authentication efforts.

Gordon Rees Scully Mansukhani's Government Contracts practice group will continue to monitor AI impacts at the federal level, particularly as the administration’s guidance is implemented at the agency level.

 


[1] See the definition of "critical infrastructure" in 42 U.S.C. § 5195c(e) ("The term 'critical infrastructure' means systems and assets, whether physical or virtual, so vital to the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters.").

Patrick K. Burns
Meredith L. Thielbahr



Artificial Intelligence
Government Contracts