Will We See a National Policy for AI in 2026?

DECEMBER EXECUTIVE ORDER

On December 11, 2025, President Trump issued a new Executive Order, “Ensuring a National Policy Framework for Artificial Intelligence.”

This EO continues in the spirit of prior AI-related Executive Orders issued by President Trump during his current term, including Executive Order 14179 of January 23, 2025, “Removing Barriers to American Leadership in Artificial Intelligence,” which revoked President Biden’s prior Executive Order on AI, and Executive Order 14319 of July 23, 2025, “Preventing Woke AI in the Federal Government,” which addressed President Trump’s concerns that AI could be influenced to return results infused with ideological biases, including programming to reflect diversity, equity, and inclusion (DEI) principles and goals.

From a technological perspective, we should expect to see exponential advancement in AI over the next three years of President Trump’s term in office. The United States has multiple companies competing in the AI race against each other and foreign players, with significant capital investment across the sector. President Trump stated that his goal is to see these investments result in the United States dominating the AI landscape in the future:

“…To win [the AI race], United States AI companies must be free to innovate without cumbersome regulation.  But excessive State regulation thwarts this imperative.  First, State-by-State regulation by definition creates a patchwork of 50 different regulatory regimes that makes compliance more challenging, particularly for start-ups.  Second, State laws are increasingly responsible for requiring entities to embed ideological bias within models.  For example, a new Colorado law banning “algorithmic discrimination” may even force AI models to produce false results in order to avoid a “differential treatment or impact” on protected groups.  Third, State laws sometimes impermissibly regulate beyond State borders, impinging on interstate commerce.

My Administration must act with the Congress to ensure that there is a minimally burdensome national standard — not 50 discordant State ones.  The resulting framework must forbid State laws that conflict with the policy set forth in this order.  That framework should also ensure that children are protected, censorship is prevented, copyrights are respected, and communities are safeguarded.  A carefully crafted national framework can ensure that the United States wins the AI race, as we must.

Until such a national standard exists, however, it is imperative that my Administration takes action to check the most onerous and excessive laws emerging from the States that threaten to stymie innovation….”

FEDERAL AND STATE FRAMEWORK

The law-making process, particularly at the federal level, is notoriously slow. It is not uncommon for legislation to lag behind technology and innovation. We already see this in the data and privacy arena, where states have taken action to establish legal parameters when there is a federal vacuum of guidance. It can also be difficult to address any issue in an omnibus way, when different industries and facts require disparate treatment.

An example of the interplay of federal and state laws is in the health care industry. The United States has HIPAA at the federal level, designed to address the privacy and security of protected health information (PHI). Many states also have laws that regulate health-related or sensitive personal information, including information that may also be PHI under HIPAA. In most cases, these laws are written and interpreted in a way that allows them to effectively coexist. Either the federal law provides a floor of protection the states can enhance, or the state law explicitly provides that compliance with the federal law equates to compliance with the state law. HIPAA has limitations on which actors and which information it controls, and in those instances state laws can step in so that a scenario does not fall through the cracks.

ARTIFICIAL INTELLIGENCE EXISTS WITHIN OUR CURRENT LEGAL FRAMEWORK

AI does not create a compliance exception; if a law or rule applies to the underlying activity, it will generally apply to the AI-enabled or AI-facilitated iteration of that activity as well. AI does not exist outside the bounds of current law, which continues to apply to those who develop, train, and use AI. For better or worse, there is no requirement to first understand something in order to regulate it or to police it. The utilization or inclusion of AI will not absolve companies from complying with existing legal requirements.

We have already seen the interaction of AI and intellectual property laws at every level, including the development and training of the technology as well as its usage and disclosures. What happens when trade secrets are uploaded - intentionally or inadvertently - into AI technology? Does the information lose its trade secret protection? What about technology that generates an image after analyzing millions of other images - does that violate US copyright law? How do we know what the AI relied on to generate the image, and how does AI being influenced by existing works differ from a human artist being influenced by what came before?

Can AI meaningfully participate in the discovery process (e.g., depositions and interrogatories) - not as a tool, but as the witness or respondent?

WILL WE SEE A “NATIONAL POLICY FRAMEWORK” FOR ARTIFICIAL INTELLIGENCE IN 2026?

The legislative branch could pass comprehensive laws, or we could see a cascade of consistent judicial rulings, up to the Supreme Court, involving emerging technology and interpreting existing laws. Anything is possible, but either path is unlikely, as each would require meaningful consensus. Neither is likely to provide clarity in short order; as of January, consensus by year-end appears improbable.

It may be more likely that the path to national policy over the next few years will primarily be guided by the Executive Branch - through rulemaking, guidance documents providing revised interpretations and clarifications, and challenges to federal and state laws seen as contravening the President’s position. Guidance from the Executive Branch can be more consistent because federal agencies and the President are typically aligned in approach; the tradeoff, however, is stability. Executive Branch-driven policy can shift quickly from one administration to the next.

Laws can be repealed and rulings can be overturned - but typically not as quickly as Executive Branch policy can shift from one administration to the next.

Accordingly, companies should be thoughtful and document their review and analysis surrounding the development and integration of AI into their operations. Add AI-related terms to contracts where appropriate, and take reasonable steps to protect the business and position it for future administrations, shifting legal interpretations, and look-back enforcement efforts.

TAKEAWAYS FOR BUSINESSES AND ATTORNEYS

  1. Self-Audit - Identify AI used in the current technology stack and in tools and functionality adopted over time. What agreements are in place? How is the technology trained? What happens to the inputs and outputs (confidentiality, training, permitted use, ownership, retention, etc.)?

  2. Policies and Training - Implement company policies governing AI use (including treatment of sensitive information) and train employees on those policies and on use of the technology.

  3. AI Contract Terms - Review key and form contracts and add AI-specific contract terms where appropriate. January is an ideal time to revisit your contracts and engagement letters for needed updates.

We do not need to know the exact direction of AI technology or any state and/or national policy in order to take commercially reasonable steps now to prepare and mitigate risk for ourselves, our companies, and our clients.
