Don’t Believe the Hype: The Impact of AI on Regulatory Policy and Process Plus Policy Recommendations
By Elizabeth "Bitsy" Skerry
Artificial intelligence (AI) is an ever-evolving technology that presents new challenges to regulatory policy and the regulatory process writ large. This memo serves as a primer on the intersection of AI and the regulatory process and provides policy recommendations. This memo:
- Discusses prior and existing federal AI executive orders and policy memoranda;
- Flags current uses of AI in rulemaking and where it could arise;
- Notes pending legislation and legislative initiatives related to AI in rulemaking; and
- Highlights key literature or other resources covering current discussions and debates.
Background: Setting the Stage
In February 2019, President Trump issued Executive Order (EO) 13859, “Maintaining American Leadership in Artificial Intelligence.” EO 13859 called for American leadership in AI to maintain “the economic and national security of the United States and to [shape] the global evolution of AI in a manner consistent with our Nation’s values, policies, and priorities.” The EO directed federal agencies “to prioritize AI research and development (R&D) in their annual budgeting and planning process.” For context, this EO was issued nearly four years before OpenAI released ChatGPT. This is important to note because one of the EO’s architects, Michael Kratsios, Trump’s current Science and Technology Advisor, has ties to Peter Thiel and Big Tech and therefore probably had a sense of what was coming.
The primary objectives of EO 13859 were to make the United States the driver of “technological breakthroughs in AI across the Federal Government, industry, and academia in order to promote scientific discovery, economic competitiveness, and national security;” train American workers to develop and use AI; reduce barriers to AI testing and deployment; and promote American research and innovation across the globe, while protecting AI technology from “acquisition by strategic competitors and adversarial nations.”
EO 13859 applied to agencies that conduct AI research and development, develop and deploy applications of AI, award educational grants, and regulate and create guidance for applications of AI. Critics of the EO at the time – AI experts in industry, government, and academia – said the document was not enough, expressing concern about China surpassing the United States in AI development and the EO’s lack of new research funds and details on implementation. Despite criticism, the EO was later codified into law as part of the bipartisan National AI Initiative Act of 2020, legislation aimed at “accelerat[ing] and coordinat[ing] Federal investments and facilitat[ing] new public-private partnerships in research, standards, and education in artificial intelligence.”
President Trump also issued EO 13960, “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government,” during his first term. The EO states in part, “Agencies must … design, develop, acquire, and use AI in a manner that fosters public trust and confidence while protecting privacy, civil rights, civil liberties, and American values….”
In October 2022, under President Biden, the White House Office of Science and Technology Policy published the “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People.” The blueprint outlined five guiding principles informed by public input about how governments and corporations can design, use, and deploy AI broadly while protecting the American people.
In October 2023, President Biden issued EO 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which instructed the federal government to address emerging AI issues, including directing agencies “to adopt new guidelines, rules, and policies, in addition to requiring them to employ AI officers, engage in international efforts, and in some cases move forward with regulatory proposals.” President Trump rescinded this EO on his first day in office.
On April 3, 2025, under President Trump, the White House Office of Management and Budget (OMB) issued Memorandum M-25-21, which encourages agencies to expand AI use across government, focusing on “innovation, governance, and public trust.” And in July 2025, President Trump issued an executive order titled “Preventing Woke AI in the Federal Government,” which builds on his EO 13960 from December 2020 and purports to address “ideological biases or social agendas . . . built into AI models [that] can distort the quality and accuracy of the output.” This EO represents a bogus attempt to legitimize and deploy Elon Musk’s Grok throughout the federal government, which Public Citizen has argued should not be deployed due to a plethora of safety, moral, and legal concerns.
Also in July 2025, President Trump issued “America’s AI Action Plan,” “to articulate policy recommendations” and set out a so-called “roadmap to victory” for America in “[t]he AI race.” Public Citizen opposes the AI Action Plan, as it was written by and benefits Big Tech. Under Trump’s plan, electricity bills will rise to “subsidize discounted power for massive AI data centers.”
While Biden’s “Blueprint for an AI Bill of Rights” and EO 14110 focused on the safe procurement and deployment of AI, Trump’s AI EOs, his Action Plan, and OMB’s Memorandum M-25-21 are built on a foundation of speed, innovation, and the interests of Big Tech billionaires above all. M-25-21, for instance, specifically pushes for agencies to swiftly adopt and employ AI.
The memo mentions that agencies “must ensure their use of AI works for the American people,” yet it simultaneously asserts that agencies are empowered “to drive AI innovation and seize the opportunity to apply the best of American AI.” In practice, public trust has proven to be of no consideration under the Trump administration. It’s worth noting that Elon Musk’s DOGE team used AI as a surveillance tool to monitor “at least one federal agency’s communications for hostility to President Donald Trump and his agenda.”
Congress has already required agencies – in the Advancing American AI Act, enacted on December 23, 2022, as part of that year’s National Defense Authorization Act – to publicize how they are using AI by preparing and maintaining “an inventory of the artificial intelligence use cases of the agency, including current and planned use.”
The Administrative Conference of the United States (ACUS), the executive branch agency that monitors and seeks to influence how federal agencies carry out their regulatory obligations and work in the public interest, has provided a series of recommendations to federal agencies on the use of AI in the regulatory process. ACUS recommendations echo the importance of disclosure from a public policy standpoint, suggesting that “agencies might prioritize transparency in the service of legitimizing its AI systems, facilitating internal or external review of its AI-based decision-making, or coordinating its AI-based activities.”
AI and Rulemaking
Federal agencies could use AI tools throughout the rulemaking process, from the initial stages through completion – and in some instances recently have – raising many legal and policy questions. For example, agencies could, theoretically, use AI tools to identify new rulemaking subjects or regulations to revise or expand upon; draft regulations, including preambles; identify patterns in data to assist with drafting and making legal and policy determinations; and respond to public comments. Public Citizen does not suggest that these use cases for AI in the rulemaking process are good or yield net benefits.
Proponents of AI in the regulatory process claim that regulators can use AI to enhance efficacy, efficiency, and focus.
Despite the optimism of proponents that AI has or is close to reaching sophisticated capabilities akin to human reasoning and decision-making, the use of AI in the rulemaking process raises significant legal and policy concerns that have yet to be resolved.
Critics warn that the process of agencies using AI in the notice-and-comment process “could be quite superficial,” leading to less agency engagement with public comments if, for instance, AI was used to read those comments and draft replies.
An AI model’s outputs are statistical predictions of which words are likely to appear together. They are not grounded in subject-matter expertise, institutional knowledge, or the judgment of career staff with decades of rule-writing experience.
AI functions much like predictive text on a smartphone, guessing what word comes next. But federal regulations are highly technical, legally binding documents that require scientific rigor, statutory interpretation, subject-matter expertise, and careful analysis. Because AI delivers erroneous information in the same confident tone as accurate information, it may give federal workers unwarranted confidence in false outputs.
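To make the predictive-text analogy concrete, here is a minimal sketch of a toy bigram model – the simplest form of statistical next-word guessing. Everything in it (the miniature corpus, the function names) is invented for illustration; real LLMs are vastly larger, but the core mechanism is the same: emit whichever word is statistically likeliest to come next, with no check against facts, science, or law.

```python
# A toy bigram "language model": purely statistical next-word guessing.
# The corpus and names below are invented for illustration.
from collections import Counter, defaultdict

corpus = (
    "the agency shall issue a rule . "
    "the agency shall issue a notice . "
    "the agency may publish a notice . "
).split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(prev_word: str) -> str:
    """Return the statistically likeliest next word -- a guess, not a fact."""
    return following[prev_word].most_common(1)[0][0]

word = "the"
sentence = [word]
for _ in range(5):
    word = predict(word)
    sentence.append(word)

print(" ".join(sentence))  # -> "the agency shall issue a notice"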
Moreover, AI systems are prone to error and “hallucination,” generating false or fabricated information. Agencies are not supposed to produce rules that gamble on safety, efficacy, and fact.
Identifying Subjects for Rulemaking
Although some proponents of AI believe it could be used to identify subjects for rulemaking, Public Citizen opposes this use case given AI’s current tendency to hallucinate and provide false information, as well as its weak grasp of law.
Proponents suggest, for example, that agencies could use AI tools to conduct retrospective reviews of their current regulations, consolidate and analyze high-volume complex economic and scientific data, assess the merits and lawfulness of existing regulations, and synthesize public petitions for rulemaking to identify patterns.
One proposal suggests that agencies could use machine learning tools “to identify trends and patterns in voluminous data suggestive of where standards should be set and what attributes of, say, a product or entity ought to be regulated.” The U.S. Food and Drug Administration (FDA), which began experimenting with AI in 2016, used it that year “to process reports of drug adverse events and identify ‘previously undetected relationships’ between certain adverse outcomes and particular drugs and drug combinations.”
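For a concrete sense of what this kind of pattern-finding can involve, below is a minimal, hedged sketch of a disproportionality screen using a proportional reporting ratio (PRR), a standard pharmacovigilance signal-detection measure. It is not the FDA’s actual system, and the report counts are invented.

```python
# Hypothetical sketch of a disproportionality screen over adverse-event
# reports. NOT the FDA's system; all counts below are invented.

def proportional_reporting_ratio(a: int, b: int, c: int, d: int) -> float:
    """PRR = [a / (a + b)] / [c / (c + d)]

    a: reports of the event for the drug of interest
    b: reports of other events for the drug of interest
    c: reports of the event for all other drugs
    d: reports of other events for all other drugs
    """
    return (a / (a + b)) / (c / (c + d))

# Invented example: 40 liver-injury reports out of 1,000 for "Drug X,"
# versus 200 out of 100,000 for all other drugs combined.
prr = proportional_reporting_ratio(a=40, b=960, c=200, d=99_800)
print(f"PRR = {prr:.1f}")  # 20.0 -- the event is reported 20x more often
```

A PRR well above 1 flags a possible signal for human experts to investigate; it says nothing about causation on its own, which is exactly why expert human review remains indispensable.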
Public Citizen is skeptical of the use of AI to identify subjects for rulemaking, whether by looking for patterns in regulatory petitions or through retrospective reviews, especially given how flawed AI is at this stage of its development. Further, agencies need no assistance with identifying subjects for rulemaking. While some proponents argue that AI could aggregate data expeditiously to help identify needs, the largest challenge for agencies is corporate and political opposition to moving rules to finalization, not the failure to identify those rules in the first place.
Agencies Writing Rules and Developing the Record
Similar to identifying subjects for rulemaking, Public Citizen opposes agencies using AI to develop regulatory proposals. Using AI in this way presents serious risks that should not be overlooked, including the risk that rules will rest on inadequate evidentiary support or flawed legal reasoning.
The Administrative Procedure Act (APA) may prove to be a useful lever in addressing legally vulnerable rules written with AI because agencies will need to comply with the APA and cannot rely on AI as justification for satisfying the law’s requirements.
In addition, AI “may introduce error, bias, or arbitrariness to administrative decisionmaking [sic].”
This is a major concern. Any large language model (LLM), such as Google Gemini, Claude, or ChatGPT, would be relying on the entire universe of its training data – not just the rulemaking record – and could pull in fake data, junk science, or information created for the purpose of skewing a rulemaking. Human verifiers would not only need to independently verify each item relied upon but also ensure that the AI tool didn’t miss critical information that should be in the rulemaking record. Accurate and reliable human verification of information in the record, as well as exploration of what may be missing from it, would likely take longer than having humans with expertise develop the rulemaking record in the first place, and would open the door to substantial errors that go uncorrected during verification.
The Trump administration has already begun to deploy AI in rule-writing, which highlights grave concerns. The U.S. Department of Transportation (DOT) has announced plans to use Google Gemini to write regulations. DOT’s general counsel, Gregory Zerzan, says the DOT is “the first agency that is fully enabled to use AI to draft rules,” and that DOT doesn’t “need the perfect rule on XYZ. We don’t even need a very good rule on XYZ.”
This alarming admission poses serious safety concerns about the regulations that will be promulgated by DOT moving forward. Using Google Gemini, the agency will be able to generate a proposed rule in mere minutes or seconds, although it is still unknown whether such a rule would survive a court challenge.
Public Citizen takes the position that LLMs are not subject matter experts and should not be used to replace expertise in regulatory decision-making and rule-writing.
It’s unsettling to imagine the deficiencies that could appear in an AI-generated rule crafted for DOT by Google Gemini, or by any federal agency using any existing LLM. An agency tasked with ensuring Americans’ safety on our streets and in the skies needs to move carefully in rulemaking. Yet according to one DOT employee, the only job left for the DOT employees responsible for writing rules will be proofreading Google Gemini’s output.
Rules written using LLMs could be “particularly vulnerable to APA challenge.” Governing for Impact notes that “the APA and the cases interpreting it are plausibly read to require certain forms of substantive human involvement in the rulemaking process, which would preclude agencies from entirely outsourcing their work to AI.” This means DOT’s plans likely will not pass muster under the APA.
Furthermore, there is concern about AI making decisions without the ability to make the human value judgments required in the rule-writing process. Dr. Martin Peterson, a professor of philosophy at Texas A&M University, posits that although AI can mirror human decision-making, it cannot make moral decisions like human beings can. When it comes to agencies writing rules using AI, the inability of AI to make moral decisions within the bounds of the regulatory process must be taken seriously.
Opponents state that “AI systems should not and likely cannot decide how to balance . . . competing priorities [in the decision-making process agencies undertake when writing rules] because they do not have human experiences, morals, or intrinsic beliefs to draw upon.”
For instance, humans at federal agencies writing regulations make moral decisions that help define the scope of any given rulemaking. Value judgments that go into regulatory decision-making can include anything from assessing the value of unquantifiable benefits of regulations to American society (such as clean air and clean water), to whether certain activities by regulated entities are morally wrong and therefore ought to be regulated to protect the public from harm. Additional examples include decisions between “more or less restrictive care placements, supportive or punitive treatment of immigrants, expedited or lethargic vetting of and release to family members, and reproductive health care options.”
Public Citizen takes the position that rules cannot be written by AI without comprehensive and accurate human verification, confirming that the record is complete, accurate, and morally justifiable – tasks that are impossible for AI at this time. Regulatory decisions should never be fully automated or rubber stamped.
Agencies Explaining and Supporting Rules and Regulatory Impact Analysis
Proponents of AI claim that it might help regulators when deciding whether to adopt a proposed rule or revise proposed regulatory requirements based on cost-benefit analysis and other policy concerns. For instance, some proponents suggest agencies could use AI to assess a proposed rule’s impact, stating that AI could process and compare a large number of variables, interdependencies, and assumptions “at a scale not possible for humans.”
Furthermore, AI might be used by agencies to complete tasks required by the APA and other federal laws. An agency might use LLMs to draft a final rule’s “concise general statement of their basis and purpose,” an analysis under the National Environmental Policy Act (NEPA), a cost-benefit analysis, or a preamble in plain English.
Questions and concerns remain about how accurate AI-generated regulatory analyses would be and whether the data generated would hold up in court.
As introduced in the sections above, existing provisions of the APA might apply to the use of AI in the federal rulemaking process, namely those governing disclosure and reasoned decision-making. For instance, 5 U.S.C. § 553 requires agencies to publish notice of a proposed rulemaking and provide an opportunity for public comment; agencies must also disclose the technical studies and data that informed their rulemaking. Additionally, the bar on “arbitrary” or “capricious” rulemaking under 5 U.S.C. § 706 imposes its own disclosure obligations, which extend to agency use of computer models.
One organization writes:
LLMs are, at present, subject to limitations and prone to systemic errors. For instance, LLMs have been found to “hallucinate” false information, a problem that has persisted even as technology has advanced in other respects. They might act sycophantically, validating or agreeing with even objectively incorrect user prompts. It is well known that LLMs frequently replicate biases or errors present in their training data. LLMs also have limited “context windows,” a term that broadly refers to the amount of text or information they can consider at one time. They may thus struggle to accurately process long documents, a problem of particular concern in rulemaking, which often requires analyzing complicated and extensive agency records and lengthy agency publications.
Given this, Public Citizen takes the position that regulatory impact analysis is not a good use case for AI as it currently exists.
AI in Notice-and-Comment
AI Use by the Public in Writing Comments
Some proponents believe that AI could benefit individuals writing comments to submit to agencies, for example by increasing participation and providing writing assistance. Proponents also state that agencies could use ChatGPT or other artificial intelligence tools to facilitate public understanding of certain issues or draft rules, and of the types of information that agencies are seeking to gather in the rulemaking process. For instance, AI could be used by organizations for mass comment campaigns.
At the same time, some critics have theorized that AI might only improve the ease of commenting, thereby increasing the quantity of AI-generated comments submitted rather than bolstering the quality of public comments. If quality is not improved, comments will not be more intelligible, and agencies will lack the information they need from the public. Inundating agencies with such comments can also lead to quality or model collapse: when an agency feeds derivative commentary that lacks a diversity of perspectives into its own LLM, the quality of the LLM’s analysis of those comments degrades.
Additionally, it may be more difficult to distinguish between comments written by individuals using AI and comments written exclusively by AI bots to skew the outcome of a rulemaking. In one recent example, Southern California’s top air pollution authority rejected a proposed rule to phase out gas-powered appliances after receiving a flood of over 20,000 AI-generated comments opposing the rulemaking.
Public Citizen does not seek any solution that would infringe on individuals’ ability to participate in the comment process or prohibit any use of AI to write or edit their comments (e.g., proofreading, formatting). Public Citizen also supports mass comment campaigns that facilitate, without using AI, members of the public exercising their democratic right to participate in government. However, Public Citizen has grave concerns about the use of AI bots to submit fraudulent comments that do not reflect the views of real Americans. Given these concerns, we would caution advocates and agencies against using vendors that deploy AI to help generate comments until further information is known.
In the meantime, technology to detect fake and fraudulent comments must be developed to assist agencies in distinguishing real comments from fake ones. Solutions, whether legal, technological, or both, are needed to preserve public comments related to individual rulemakings and to preserve the integrity of the public participation process more broadly.
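As one illustration of what such detection technology might involve, the sketch below flags near-duplicate comments by comparing overlapping word sequences (“shingles”) with Jaccard similarity, a common first-pass technique for spotting mass-generated text. The comments and the threshold are invented, and real screening would require far more – metadata analysis, provenance checks, and human review.

```python
# Toy sketch of near-duplicate detection for public comments using word
# shingles and Jaccard similarity. Comments and threshold are invented.
from itertools import combinations

def shingles(text: str, k: int = 3) -> set:
    """Return the set of k-word sequences appearing in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Overlap between two sets, from 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

comments = {
    "c1": "I strongly oppose this rule because it harms small businesses.",
    "c2": "I strongly oppose this rule because it harms small business owners.",
    "c3": "Clean air standards protect children in my neighborhood.",
}

for (id1, t1), (id2, t2) in combinations(comments.items(), 2):
    score = jaccard(shingles(t1), shingles(t2))
    if score > 0.6:  # invented threshold for this toy example
        print(f"{id1} and {id2} look near-identical (similarity {score:.2f})")
```

A tool like this can surface clusters of templated text, but it cannot tell a legitimate mass comment campaign from a bot-driven one – that judgment requires humans.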
Corporations already play an outsized role in notice-and-comment and should not be given yet another opportunity to capture agencies, especially when doing so delegitimizes the main pathway that exists for communities to make their voices heard about how a proposed regulatory action would benefit or harm them.
AI Use by Agencies Reviewing Comments
Proponents of AI state that AI could be used to analyze, assess, and draft responses to comments submitted by the public, as well as comments received in the interagency review process. A bill discussed in more detail later in this memo, H.R. 67, would add AI as a tool agencies can use to assess mass comments in the rulemaking process. Further, OMB has explored the use of AI to parse comments and organize them into “buckets” or topics, a use case it has suggested could be helpful (a minimal illustration follows below). This amplifies the importance of limiting AI to narrow use cases (if it is to be used at all) and having guardrails such as human verification in place.
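The sketch below shows, in deliberately simple form, what sorting comments into topic “buckets” might look like. It is hypothetical – the buckets, keywords, and sample comments are all invented – and it illustrates why human verification matters: a keyword match is a crude guess, not an understanding of what the commenter actually said.

```python
# Deliberately simple sketch of sorting comments into topic "buckets" by
# keyword matching. Buckets, keywords, and comments are invented.

BUCKETS = {
    "economic impact": {"cost", "jobs", "business", "price"},
    "public health": {"health", "asthma", "pollution", "safety"},
    "procedure": {"comment period", "deadline", "notice"},
}

def bucket_comment(text: str) -> str:
    """Assign a comment to the bucket with the most keyword hits."""
    lowered = text.lower()
    scores = {
        name: sum(kw in lowered for kw in keywords)
        for name, keywords in BUCKETS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"

comments = [
    "This rule will raise prices and cost jobs in my town.",
    "Cutting pollution will reduce asthma rates near the plant.",
    "Please extend the comment period deadline by 30 days.",
]
for c in comments:
    print(f"{bucket_comment(c):>15}: {c}")
```

Even this trivial classifier makes contestable calls (is a comment about job losses from pollution “economic” or “health”?), which is why any bucketed output must be checked by agency staff before it informs a rule.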
Distinct from this limited use case with robust human verification, DOGE’s efforts to use an AI tool to “automate” large portions of the federal regulatory process with limited staff feedback, including the analysis of thousands to hundreds of thousands of comments from members of the public, are of grave concern.
As with other stages of rulemaking, the APA may provide existing protection against unreasonable use of AI to review comments. Reasoned decision-making means agency action must be “reasonable and reasonably explained” in the eyes of the court for the agency action to be valid. Specifically, similar to when agencies use mathematical models to set regulatory standards, the reasoned decision-making requirement under the APA may require the agency to explain in court regarding its use of an AI tool: “the purposes for and means by which the AI product used was designed and finetuned; how the product was prompted; and whether and how agency staff oversaw, validated, and performed quality control on the product’s responses to ensure their reliability and compliance with statutory and constitutional requirements.”
Although Public Citizen opposes the use of AI by agencies at any stage of the rulemaking process, it is our position that any such use must comply with the Administrative Procedure Act.
AI and Retrospective Review
As mentioned above with regard to identifying new areas for rulemaking, AI proponents claim AI could be used in retrospective review of final rules – the process that agencies use to assess whether regulations already on the books need to be reevaluated – “to determine if regulations should be modified based on how they performed in reality once adopted.”
ACUS recommended best practices for agencies to use AI in retrospective reviews to “more efficiently, cost-effectively, and accurately identify rules that are outmoded or redundant, contain typographic errors or inaccurate cross-references, or might benefit from resolving issues with intersecting or overlapping rules or standards.” ACUS also recommended that the General Services Administration (GSA) look into opportunities to develop AI that could be used government-wide and that OMB provide guidance to agencies on those tools.
Some AI proponents believe that agencies could use AI in the retrospective review process by creating “regulatory decision trees rooted in AI” that analyze various real-world outcomes of a regulatory action. They argue that responsibly developed and deployed AI can be used to analyze the effectiveness of regulation, provide businesses and individuals with regulatory clarity, and reduce compliance costs. However, the same risks discussed above with regard to the use of AI throughout regulatory development would apply to retrospective review of final rules. If the risks can be overcome, it would only be with guardrails, including human verification.
Far from establishing norms of efficiency in the regulatory process, an anti-regulatory bill currently in Congress, the Modernizing Retrospective Regulatory Review Act (H.R. 67), would incorporate AI into existing retrospective review processes and unnecessarily expand retrospective review beyond what is already required by law by granting agency heads the power to mandate a retrospective review of any regulation of their choosing. H.R. 67 also would add AI as a tool agencies can use to assess mass comments in the rulemaking process. Public Citizen and the Coalition for Sensible Safeguards oppose this bill because it imposes a one-sided focus on retrospective review that encourages agencies to weaken rules to reduce burdens on regulated entities rather than strengthen rules to protect the public from harm.
Even worse, the Trump administration plans to unveil a new deregulatory AI tool called SweetREX Deregulation AI Plan Builder (SweetREX DAIP) to slash existing regulations. SweetREX was first discussed on an OMB video call in August 2025 and was still in development at the time. The purpose of SweetREX is to help bring to fruition the goals outlined in President Trump’s “Unleashing Prosperity Through Deregulation” executive order. It was created by DOGE associates operating out of the U.S. Department of Housing and Urban Development (HUD), with a plan for rollout across other federal agencies. Beyond this general information, the federal government has been unwilling to disclose to the public details about SweetREX’s development or the administration’s use of AI in carrying forward its deregulatory agenda.
Public Citizen opposes agency use of AI in retrospective review for any purpose. To the extent agencies engage in this practice, human verification must be employed. Decisions should never be fully automated or rubber stamped. And of course, AI should not be used for mass deregulation.
Policy Recommendations
Given the many deficiencies with AI in its current form, as well as a lack of political will to address the deficiencies, there is a strong case against using these systems in the regulatory process.
Algorithmic systems often replicate and amplify existing discrimination. Across sectors, AI tools have denied people government benefits, job interviews, and loans based on race, gender, and other protected characteristics. AI systems hallucinate case law, fabricate citations to the United States Code, and produce false information across subject matters. There is also concern that AI could generate fake data underlying a regulation or fake public comments in support of or opposition to a proposal. And evidence is lacking to support the claim that AI generates efficiency gains.
More broadly, AI systems impose substantial environmental and social costs. Data centers consume enormous amounts of water and energy, driving up electricity bills and increasing air pollution in surrounding communities — burdens that fall disproportionately on historically marginalized groups.
Nonetheless, AI is moving forward quickly, and the regulatory process appears unlikely to remain unaffected. Thus, it is imperative that additional research, safety assessments, transparency reporting, and enforceable guardrails are put in place now.
Public Citizen proposes the following guardrails:
- Disclosure: First and foremost, agencies must disclose when, how, and which AI is used in the rulemaking process, including plain-language explanations of why and how the system affects public outcomes. Documents produced in whole or in part by AI should include a clear disclosure, which goes beyond what is already required by Congress.
- AI systems must undergo pre-deployment testing, risk assessment, and continuous monitoring by humans. Systems deemed unsafe or unreliable, including those with documented safety concerns, such as Grok, must not be used. Public Citizen has advocated for the removal of Grok from federal agencies.
- Federal workers and their unions must be consulted regarding AI deployment, including whether AI use will diminish or eliminate their jobs at the agency.
- AI systems must be equitable and free from algorithmic bias that discriminates against protected classes under federal law – including race, sex, disability, age, national origin, religion, genetic information, and veteran status. Without this assurance, regulatory decisions could include bias and result in biased outcomes.
- Human verification must be employed when agencies use AI. Decisions should never be fully automated or rubber stamped.
- Agencies must prioritize data privacy, including meaningful consent and built-in data protections.
- Members of the public must have access to complaint and grievance procedures when they believe they have been harmed by AI-assisted decisions.
- Agencies must provide a human point of contact for technical errors and inquiries.
- Agencies should publish public reports evaluating the effectiveness, accessibility, timeliness, and outcomes of AI systems compared with human alternatives.
- Any AI-assisted decision must comply with the Administrative Procedure Act.
- The use of AI should take into account its social, economic, and environmental impacts.
AI systems should not be deployed throughout the federal government without the advice and consent of independent experts who understand their technical limitations and without substantial public support. Rules affecting millions of people must account for statutes, existing regulations, impact analyses, judicial precedent, research, and public input. There is little to no evidence that today’s AI systems can reliably perform those tasks.
Further, there is ample evidence that the public distrusts the use of AI by the federal government. Using AI in the regulatory process may consequently seed public distrust in federal agencies and in regulations more broadly.
The federal rulemaking process exists to protect the public. AI may seem to present an opportunity to improve that process, but despite the hype, the many challenges and unanswered questions suggest great risks. Until those challenges are resolved, the questions are answered, and the risks are minimal, AI in regulatory action must be approached cautiously or not at all.
When lives are on the line, experimentation is not innovation — it is reckless and unethical action that is not aligned with the public interest.