Public Citizen’s Comment on OMB’s Federal Agency AI Guidance

The Office of Management and Budget
c/o Cindy Martinez
725 17th Street, NW
Washington, DC 20503

Submitted electronically via regulations.gov

Re: OMB–2023–0020, Request for Comments on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence Memorandum

Dear Director Young,

On behalf of more than 500,000 members and supporters of Public Citizen, we offer the following comment on the Office of Management and Budget’s (OMB) draft guidance[1] directing federal agencies on artificial intelligence (AI) governance and innovation while managing the risks of AI use. The draft guidance’s stated aim is to establish new agency guidelines for AI governance, innovation, and risk management, including certain minimum risk management practices for AI use cases that affect the civil rights and safety of the public.[2]

Public Citizen has published several documents outlining our views, including our generative AI policy recommendations,[3] and we have filed a rulemaking petition[4] with the Federal Election Commission to prevent deepfakes in political ads. We are also organizing numerous public meetings on AI topics with thought leaders and policymakers, including a recent bipartisan forum[5] on AI and the threats it poses to democracy. We now offer commentary on the proposed guidance’s guardrails and staffing provisions, along with a recommendation that OMB require all federal agencies to publicly release an inventory of their existing AI regulatory authorities.

While AI may enhance government operations, particularly in scientific and research and development fields, the U.S. government’s use of AI could pose an enormous danger to our society unless regulators put significant guardrails in place. Establishing safety standards, implementing rules to prevent racial and other discriminatory practices, and adopting protections for people’s rights and dignity are crucial both for governmental use of AI tools and to set an example for the private sector.

We strongly support the proposed guidance’s imposition of guardrails on the federal government’s use of AI, particularly with respect to generative AI and algorithmic bias. We offer three proposals to strengthen the guidance: strengthening the labeling requirements for content produced with generative AI; modifying the powers and responsibilities of the Chief AI Officers to ensure the guidance’s safety and rights protections are upheld; and requiring agencies to publicly inventory their regulatory authorities relevant to AI.

A. The proposed labeling guardrails for generative AI use by the federal government are necessary but should be strengthened.

The OMB’s decision to create guardrails for generative AI use is a step in the right direction. While generative AI may help federal agencies streamline workflows, reduce operational costs, and make informed, data-driven decisions, it can also wreak havoc on our civil rights and erode trust in our democracy.

The President’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence and OMB’s draft guidance correctly recognize the transformative and worrisome power of generative AI to produce misleading or deceptive content. Generative AI has powerful potential to manipulate individuals, tailoring content based on user-specific data and even adjusting the mode of presentation, e.g., through individualized and customized avatars. This threatens both widespread manipulation and the destruction of social trust. The problem is particularly acute in civic spaces, where deepfakes can fundamentally and fraudulently distort policy debates and elections.

For example, generative AI can swamp societal discourse with misinformation. With AI able to generate highly realistic text, audio, and visual content, malicious actors can manipulate information at an unprecedented scale. A recent deepfake falsely depicted President Biden making transphobic remarks.[6] In Slovakia, a deepfake audio recording is reported[7] to have influenced the recent parliamentary election, and deepfake images reportedly[8] affected Argentina’s just-concluded presidential election.

Left unchecked, these generative AI creations will erode public trust, as it becomes increasingly difficult for ordinary people to distinguish authentic content from manipulated content, including content purporting to come from federal agencies. The OMB draft guidance correctly identifies labeling, content authentication, and provenance tracking as vital tools against deceptive AI-generated content.

AI labeling helps citizens contextualize and process AI-generated content. For example, after seeing a label stating that a video was produced with generative AI, a viewer can decide to ignore or discount it, seek additional non-AI sources on the subject, or share it with their network along with the context that AI created it. Federal agencies must do all they can to help citizens discern between authentic and AI-generated content, and the guidance’s labeling requirement is a good starting point toward that goal.

We applaud the guidance’s encouragement of federal agencies to include provisions in generative AI procurement contracts requiring that procured systems have labeling and content provenance capabilities.[9]

But OMB should insist on more. First, OMB should require agencies, not merely encourage them, to ensure that generative AI contracts include guarantees on labeling and provenance. Second, OMB should require every federal agency to label all content generated in part or entirely by AI. Third, OMB should set an expedited schedule for meeting this standard. While industry standards for provenance and labeling may take time to develop, the federal government should never rely on AI to generate content without disclosing that fact.

B. The proposed AI bias guardrails contained in the guidance are welcome and much needed.

We strongly applaud the inclusion of AI bias guardrails in the proposed guidance, which demonstrates the Biden Administration’s commitment to ethical AI deployment. By acknowledging the risks of bias and discrimination in AI systems, OMB’s emphasis on transparency, fairness, and accountability can serve as a north star guiding other AI stakeholders in addressing these issues in their own systems. We support the guidance’s framework for determining which AI is safety-impacting or rights-impacting and its requirement of minimum practices when using these types of AI. These proposed guardrails not only address the ethical implications of AI but will also help build public trust in federal agencies’ use of AI applications.

AI systems are susceptible to biases contained in their training data,[10] including biases based on race, gender, and income.[11] For example, a federal agency’s resume-review algorithm could drift (that is, begin behaving in unpredictable ways that depart from its original parameters) because of input data reflecting racial differences among applicants. This drift could cause the hiring algorithm to become biased against applicants of a certain race, and the resulting algorithmic decisions could produce discriminatory hiring outcomes in federal agencies, reinforcing cultural stereotypes and maintaining or deepening societal divisions.

We have already seen numerous examples of biased algorithms in the private sector. Amazon found that a hiring algorithm favored applicants who used words such as “executed” or “captured,” which appear more commonly on men’s resumes, and Google’s online advertising AI system showed ads for high-paying positions to men more often than to women.[12]

To address and prevent bias in AI systems, federal agencies must take care in selecting AI training data and in designing their algorithms. They must also continuously monitor these systems to ensure fair and unbiased outputs. We are pleased that the proposed guidance requires federal agencies to implement several AI bias guardrails for safety-impacting and rights-impacting AI systems: completing impact assessments; actively identifying and addressing elements of algorithms that contribute to discriminatory or biased outcomes; evaluating and minimizing disparate impacts; using inclusive and representative datasets; and seeking and integrating feedback from affected groups.[13] These guardrails will help protect citizens from the negative impacts of AI usage.

We particularly want to uplift the requirement to consult affected groups. We hope this proposed guardrail gives racial, ethnic, and other minority communities a full seat at the table in determining how the federal government uses AI.

C. The proposed waiver power of the Chief AI Officers should be weakened, and conflicts of interest frameworks must be introduced to avoid corporate capture and the misuse of AI within federal agencies.

We understand that a centralized source of AI decision-making power and knowledge is justified for federal agencies. However, under the current proposed guidance, we fear that the discretionary power of the Chief AI Officer (CAIO) to grant waivers from safety and rights protection requirements is too great.

Under the proposed guidance, a federal agency’s CAIO, in collaboration with other pertinent authorities, can grant an AI application or component a waiver from the guidance’s minimum practices for safety-impacting and rights-impacting AI.[14] The CAIO can justify a waiver by determining that following the minimum practices would increase safety or rights risks overall or would pose an unacceptable obstacle to vital agency functions.[15]

It is unclear who counts as “other pertinent authorities,” and the guidance does not define “unacceptable obstacle.” We are also concerned that a CAIO could grant a waiver for an AI use case that serves the agency’s vital functions while harming the general public’s rights. For example, the Department of Homeland Security’s CAIO could grant a waiver for facial recognition AI in its operations, and nothing in the current guidance would clearly block that decision.

We urge the OMB to narrow the grounds for waiver, establish sharper standards for when a waiver is permitted, and institute other procedural protections, perhaps including obtaining authorization from OMB.

Additionally, the guidance should address concerns about the CAIO’s interactions and relationships with the private sector. Because of the technical expertise the role demands, CAIOs are likely to come directly from the AI industry. A CAIO’s past industry work could invite undue industry influence, conflicts of interest, and favoritism toward private entities, compromising the public interest and public trust in the process. The final OMB AI guidance must contain clear and stringent ethical guidelines, disclosure requirements, and conflicts-of-interest frameworks to prevent the CAIO from becoming a conduit for private industry interests. At minimum, the standards of the Biden Day One Ethics Executive Order[16] must apply, but given the high likelihood of revolving-door issues in these positions, we recommend stronger safeguards. For example, upon leaving public office, CAIOs should be barred from lobbying on AI policy for at least one year, and CAIOs must clearly and publicly report all contacts and correspondence with private sector stakeholders.

D. To improve this guidance, OMB should require a public inventory from all federal agencies on their existing authority relevant to AI regulation.

To promote the public interest, OMB should update the proposed guidance to require federal agencies to publicly release a current inventory of their AI regulatory authorities, along with ideas for strengthening or adding to them. This requirement would encourage agencies to study their existing authorities that may apply to AI and to consider where additional regulatory powers might be needed. Conducted publicly, this exercise would help Congress, civil society, and the public evaluate the adequacy of existing laws and prioritize new regulations where they are needed.

We welcome the OMB’s proposed guidance on utilizing AI in a trustworthy and safe manner and look forward to working together to refine it. For questions, please contact Richard Anthony at ranthony@citizen.org.

Sincerely,

Public Citizen

[1] This notice was published in the Federal Register on Nov. 3, 2023 (88 Fed. Reg. 75625) and is available at https://shorturl.at/clwIO

[2] Staff of Office of Management and Budget, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence, The Office of Management and Budget (Nov. 3, 2023) https://www.whitehouse.gov/wp-content/uploads/2023/11/AI-in-Government-Memo-draft-for-public-review.pdf

[3] Staff of Public Citizen, Public Citizen’s Recommendations for Regulating Generative AI, Public Citizen (Sept. 8, 2023) https://www.citizen.org/article/public-citizens-recommendations-for-regulating-generative-ai/

[4] Craig Holman and Robert Weissman, Second Submission: Petition for Rulemaking to Clarify that the Law Against “Fraudulent Misrepresentation” Applies to Deceptive AI Campaign Communications, Public Citizen (July 13, 2023) https://www.citizen.org/article/second-submission-petition-for-rulemaking-to-clarify-that-the-law-against-fraudulent-misrepresentation-applies-to-deceptive-ai-campaign-communications/

[5] Staff of Public Citizen, Generative Artificial Intelligence and Threats to Democracy, Public Citizen (Sept. 21, 2023) https://www.citizen.org/article/generative-artificial-intelligence-and-threats-to-democracy/

[6] Reuters Staff, Video does not show Joe Biden making transphobic remarks, Reuters (Feb. 10, 2023) https://www.reuters.com/article/factcheck-biden-transphobic-remarks/fact-check-video-does-not-show-joe-biden-making-transphobic-remarks-idUSL1N34Q1IW

[7] Morgan Meaker, Slovakia’s Election Deepfakes Show AI Is a Danger to Democracy, Wired (Oct. 3, 2023) https://www.wired.co.uk/article/slovakia-election-deepfakes

[8] David Feliba, How AI shaped Milei’s path to Argentina presidency, Reuters (Nov. 21, 2023) https://jp.reuters.com/article/argentina-election-ai-idUSL8N3CM2MB

[9] Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence, at p. 22, https://www.whitehouse.gov/wp-content/uploads/2023/11/AI-in-Government-Memo-draft-for-public-review.pdf

[10] IBM Data and AI Team, Shedding light on AI bias with real world examples, IBM (Oct. 16, 2023) https://www.ibm.com/blog/shedding-light-on-ai-bias-with-real-world-examples/

[11] Id.

[12] Id.

[13] Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence, at pp. 15–20, https://www.whitehouse.gov/wp-content/uploads/2023/11/AI-in-Government-Memo-draft-for-public-review.pdf

[14] Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence, at p. 13, https://www.whitehouse.gov/wp-content/uploads/2023/11/AI-in-Government-Memo-draft-for-public-review.pdf

[15] Id. at p. 14

[16] Biden Administration Staff, Executive Order on Ethics Commitments by Executive Branch Personnel, WhiteHouse.gov (Jan. 20, 2021) https://www.whitehouse.gov/briefing-room/presidential-actions/2021/01/20/executive-order-ethics-commitments-by-executive-branch-personnel/