From Principles to Policy: Enabling 21st Century AI Innovation in Financial Services

J.B. Branch testified before the U.S. House Committee on Financial Services last week, urging lawmakers to reject efforts by large technology companies to roll back state-level artificial intelligence protections under the guise of promoting innovation.

The hearing, titled From Principles to Policy: Enabling 21st Century AI Innovation in Financial Services, came amid growing pressure from industry groups to preempt state AI laws and replace them with voluntary or temporary regulatory frameworks often referred to as “sandboxes.”

Public Citizen argued that this approach repeats a familiar and costly mistake. For years, Congress deferred to social media companies’ promises of self-regulation, a decision that contributed to widespread misinformation, harm to children, and erosion of democratic trust. Artificial intelligence, the organization warned, presents even greater risks due to its speed, scale, and increasing autonomy.

Public support for AI regulation is overwhelming. According to Gallup, 97 percent of Americans believe AI should be regulated. In response, states across the country — both red and blue — have enacted bipartisan laws addressing discrimination, consumer protection, and nonconsensual deepfake pornography. Public Citizen emphasized that these measures reflect basic American values: fairness, accountability, and protection from harm.

Despite this broad consensus, technology companies have mounted an aggressive campaign to override state authority. Over the summer, proposals to eliminate all state AI laws were rejected, only to resurface as part of the must-pass National Defense Authorization Act. After public backlash, similar ideas reemerged under new branding as regulatory “sandboxes.” Public Citizen described these efforts as deregulatory schemes designed to invalidate existing safeguards and prevent states from responding to emerging harms.

The organization also raised concerns about political efforts to frame civil rights protections in AI as “ideologically dangerous.” Public Citizen noted that preventing discrimination, sexism, and antisemitism in automated systems is not ideological, but a continuation of long-standing civil rights law. At the same time, federal agencies have entered into contracts for AI systems that have publicly produced racist and extremist content, raising questions about government procurement standards and accountability.

The testimony also highlighted the impact of AI infrastructure on workers and rural communities. Data centers, often promoted as engines of economic growth, typically receive significant tax incentives while creating relatively few jobs. Public Citizen cautioned that these projects often shift costs onto local communities without delivering meaningful economic revitalization.

Public Citizen urged Congress to adopt a clear principle moving forward: responsible AI innovation requires enforceable accountability. The organization called on lawmakers to reject blanket preemption of state laws, require transparency in AI systems, ensure accountability when harm occurs, protect children and workers, and invest in regulatory enforcement capacity.

As Congress debates the future of AI policy, Public Citizen emphasized that the stakes extend beyond innovation. The central question, the organization argued, is whether AI will be governed in the public interest—or primarily for the benefit of the most powerful corporations in the world.