Public Citizen’s Recommendations for Regulating Generative AI

Generative artificial intelligence (AI) may provide significant benefits to society, but it also poses an array of foreseeable risks. These threats – not to mention other extraordinary but more speculative risks – are sufficient to merit an extended pause on further development and deployment of generative AI technologies. Society needs time to digest what these technologies portend, and policymakers need time to craft appropriate controls and guardrails. The social cost of delaying AI development, and the benefits it may offer, is small compared to the grave risks it poses, many of which may not be remediable.

With or without a moratorium, generative AI requires a series of strict guardrails to mitigate foreseeable risks. The recommendations here are not intended to be comprehensive, nor to address the interconnected policy challenges presented by widespread reliance on algorithms.

Build a New AI Federal Regulating Agency

Generative AI poses generalized and distinct threats that reach beyond the scope of any existing agency. A new agency – which should also be empowered to address decisional AI and data protection generally – should be created to address these specific challenges. That agency should complement the work of existing agencies. Whether or not regulators create a new agency, they must give some agency enhanced powers and vastly increased resources to deal with the specific issues posed by generative AI.

Recommendations for an AI Agency

  • Licensing Authority: The new agency should have the power to mandate licenses for specific AI technologies, like large language models and AI systems governing critical infrastructure, before they are trained and deployed.
  • Complementary Authority: The agency’s authority should complement existing agency powers and should not displace the existing authority of other agencies like the FDA.
  • Expansive Mandate: The new agency’s mandate should be broad and designed to address unforeseen problems related to AI deployment.
  • Resource Allocation: The agency must be well-resourced, with a minimum of 2,500 full-time employees to effectively carry out its responsibilities.
  • Enforcement Authority: The agency should have strong enforcement powers, including the ability to revoke licenses and impose significant civil fines for rule violations.
  • Preemption: The authority and activities of the new agency must not preempt state regulatory enforcement or private rights of action.

Protect Democracy

Deepfakes threaten to perpetrate massive fraud on voters, convincing them that candidates said or did things that they did not. And the spread of AI-generated content masquerading as authentic or human-authored risks a massive increase in disinformation and misinformation and an undermining of social trust. Urgent action is needed to protect democracy from these threats.

Policy Recommendations

  • Prohibit Deepfake Use in Political Advocacy: Regulators should ban the use of deepfakes in political advocacy because of their capacity to deceive voters.
  • Require Disclosure of all AI-Generated Content, Including Text: Disclosure can offset much of the intensified misinformation and disinformation. A prominent disclosure should be mandated for all AI-generated content, with limited exceptions.
  • Work Towards International Agreements Against AI Manipulation: Regulators should work with our allies abroad to create and implement agreements governing the use of AI in politics.

Protect Consumers and Market Fairness

Generative AI is poised to disrupt the consumer marketplace. It will offer new products and new ways to market, some of which will likely be highly manipulative in the absence of regulation.

Policy Recommendations

  • Ban Unmarked Deepfakes: Regulators should ban unmarked visual and audio deepfakes to combat fraud and deception in the marketplace.
  • Ban Fake Humans in Commercial Transactions: Generative AI-enabled tools should never present themselves as humans, whether through a chatbot or an avatar, and consumers should always know when they are engaging with an AI.
  • Protect Children from Unfair Marketing: Regulators should prohibit AI-powered advertising to children under 18. Data collection by generative AI tools should be prohibited for children under 18.
  • Ban AI-Powered Micro-Targeting of Advertising: The combination of AI-assisted data gathering about consumer profiles and targeted, individualized marketing techniques threatens to overwhelm consumers and is per se unfair.
  • Implement Civil Rights Protections: Regulators should incorporate civil rights protections from the American Data Privacy and Protection Act into any AI regulations.
  • Implement Data Minimization Routines: Regulators should require AI tools to collect only task-necessary data and delete it promptly after use.
  • Implement a Right to Explanation and Human Review: Consumers affected by AI decisions must have the right to an explanation of how the decision was made and to have that decision reviewed by a human.
  • Require Special Scrutiny for Health-Related Generative AI Tools: Consumer health-related AI tools and apps should be designated as Class III devices requiring pre-market FDA approval for safety and efficacy.
  • Prevent Concentration and Abuse of Market Power: Dominant tech firms should be prohibited from using AI tools to unfairly preserve or expand market power, for example, through self-preferencing or stripping the internet commons, and dominant firms should be prohibited from acquiring AI startups.
  • Prohibit Trade Agreement Provisions Limiting AI Regulation: For example, there should be no “digital trade” provisions limiting regulators’ authority to review AI source code.
  • Create a Private Right of Action: Parties harmed by AI companies’ non-compliance with regulatory standards should have the right to seek compensation, individually or on a class basis.