On March 20, 2026, the White House unveiled its National Policy Framework for Artificial Intelligence, providing a blueprint of legislative recommendations and urging Congress to act. It recommends that Congress create a unified federal standard to reduce the regulatory friction of competing state AI regimes, promote AI innovation, and develop an AI-ready workforce, while ensuring the protection of children, consumers, and intellectual property rights.

The Framework’s Seven Pillars

The recommendations cover seven core pillars:

  1. Protect children – Calls for age-assurance requirements, parental control tools, limits on data collection from minors, and features to reduce risks of exploitation and self-harm on AI platforms.
  2. Safeguard communities – Recommends “augmenting law enforcement efforts to combat” AI-related fraud, limiting energy cost impacts, streamlining federal permitting for AI infrastructure, providing AI resources to small businesses, and ensuring national security agencies have sufficient technical capacity to assess frontier AI model capabilities and associated risks.
  3. Respect intellectual property rights – Affirmatively states that AI training on copyrighted material does not violate copyright law but defers final resolution of that question to the courts. Encourages exploration of voluntary licensing frameworks for rights holders and protection against unauthorized AI-generated digital replicas.
  4. Encourage free speech – Urges preventing government coercion of AI providers to censor lawful expression and enabling consumers to seek redress against federal censorship efforts.
  5. Promote AI innovation and dominance – Proposes “regulatory sandboxes for AI applications,” accessible federal datasets for training AI models, and recommends that no new federal AI regulatory body should be created, relying instead on existing agencies and industry-led standards.
  6. Empower the workforce – Encourages AI educational training and support programs to develop an AI-ready workforce.  
  7. Preempt state laws – Seeks a uniform national standard that preempts potentially unduly burdensome state AI laws while preserving states’ traditional police powers, consumer protections, and zoning authority.

The Remaining Gaps

The framework understandably cannot cover every facet of AI policy, and it is largely silent on regulatory enforcement and a comprehensive data privacy regime (though it does address children’s data and privacy). It does not propose specific penalties, compliance mechanisms, or oversight structures for companies developing or deploying AI. Nor does it address potential AI-generated discrimination, algorithmic accountability, or how existing agencies should coordinate enforcement, if at all.

As previously published in Compliance & Enforcement, “the absence of a federal AI framework has left existing legal doctrines—privilege law, constitutional Commerce Clause analysis, decades-old fraud statutes—to absorb questions they were never designed to answer.” This remains an open issue, as illustrated by a recent Southern District of New York opinion applying attorney-client privilege and attorney work-product protections—traditional legal doctrines—to novel AI questions without legislative guidance.

Preemption Needed to Prevent Inconsistency

The framework tracks principles from the White House Executive Order on Ensuring a National Policy Framework for Artificial Intelligence (December 11, 2025), which invoked existing executive authority and general Commerce Clause preemption principles to check state AI regulation. The framework’s call for preemption goes further, noting that AI development is “an inherently interstate phenomenon with key foreign policy and national security implications.” This push for congressional action implicitly concedes that executive authority alone may be insufficient. Until Congress acts, states retain room to pursue their own AI regimes, and the AI legal landscape will remain in flux.

Conclusion

The framework is a serious, if incomplete, attempt to bring coherence to an enforcement landscape that has been left to improvise. The seven pillars address various pressure points, including preemption, IP, child safety, and censorship, but the absence of any enforcement architecture means that even if Congress acts, implementation questions will land back in the agencies and courts. In releasing this framework, the executive branch may be conceding, implicitly, that it cannot implement its objectives alone. Congress has been handed a blueprint, but whether it is able to enact comprehensive federal legislation is another matter. Companies utilizing AI should not wait for Congress to act before assessing their exposure.