The Risks of AI and What We Can Do

Public Citizen News / September-October 2023

By Robert Weissman

This article appeared in the September/October 2023 edition of Public Citizen News.

Big Tech companies are deploying generative Artificial Intelligence (AI) technologies – like ChatGPT – at a pace that is vastly outstripping the ability of regulators even to monitor what is happening, let alone address emerging risks. And the risks are extraordinary.

You may have heard of some of the science fiction-sounding scenarios, like AI becoming self-aware (“sentient”) and prioritizing its own needs over those of humans.

Those frightening scenarios can’t be dismissed out of hand, but there are much more immediate, and much more certain, threats that these new technologies pose.

Drawing on our longstanding work to defend consumer rights and hold corporations accountable, protect and expand our democracy, and take on Big Tech companies, we’re fast building a robust campaign to meet the AI moment.

A quick explanation of generative AI: Computers trained on lots of pictures, sounds or words, and the relationships between them, are able to generate their own content in response to natural-language instructions. You can ask ChatGPT or programs like it to write you a sonnet on climate change, and it will do so instantly. You can ask an image generator to draw you pictures of dogs playing a game of baseball, and it will. And you can instruct a video generator to develop a video of Barbra Streisand singing in the voice of Daffy Duck.

If you use any of these new tools, you’ll be amazed. And they are lots of fun. They also have a lot of potential positive uses, from facilitating drug development to empowering people with disabilities.

But these technologies are very powerful and come with immense risks and disruptive potential. They can make it easier to design new bioweapons or for people to figure out how to build nuclear weapons. They may displace millions of jobs – tens of millions by some estimates – in the United States alone, and severely intensify wealth inequality. By creating lots of junk content, they may pollute the internet to such an extent that it ceases to be the knowledge-sharing tool it now is.

One overarching problem is the ability of AI to create content that seems real. Consider the problem of “deepfake” videos, audio or pictures – AI-generated content that appears real.

In upcoming elections, we face the prospect of candidates and committees using AI technology to create a video or audio clip that, for example, purports to show an opponent making an offensive statement, speaking gibberish, falling down drunk or accepting a bribe.

A blockbuster deepfake video released shortly before an election could go “viral” on social media and be widely disseminated, with no ability for voters to determine that it is fake, no time for a candidate to deny it, and possibly no way for a candidate to show convincingly that it is fake. In addition, the public may become quickly conditioned to doubt the authenticity of real video or audio recordings. For example, it’s a near certainty that Candidate Trump would have denied the veracity of the Access Hollywood audio recording if deepfake technology had been pervasive and widely understood at the time.

But the problem extends far beyond deepfakes in elections, as momentous as that is. AI may generate more sophisticated disinformation than anything humans could produce; for example, it takes almost no time for AI tools to create a legitimate-appearing website that spreads falsehoods, and those same tools could generate emails, texts and social media posts directing people to look at the website. A corporation could deploy AI tools that appear to be human to market to you, customizing their marketing strategy based on the data they hold about your likes and preferences. Fake content could be used to manipulate the stock market. And on and on.

Topping all this off, there’s a potentially cumulative harm: the destruction of the social trust that undergirds a functioning society. If you’re not sure when you’re dealing with humans, if you can’t believe the things you see in front of you, what exactly are you to do?

These risks are very real. But they are not inevitable. At this moment of proliferation of generative AI technologies, we still have an opportunity to establish norms, standards and laws to prevent or mitigate the dangers we can foresee.

Which is exactly what we’re aiming to do. Here’s a brief overview of some of what we have done and are working on related to AI:

  • We petitioned the Federal Election Commission for a rule making the use of deepfakes illegal in elections. The petition has generated lots of attention and, after an initial positive vote in August, will be considered in October. We’re also supporting federal and state legislation to ban election deepfakes.
  • In April, we held a vital “Hit Pause” convening on generative AI with Rep. Ted Lieu of California and policy experts.
  • We pulled together a wide-ranging group of public interest allies to coordinate on AI issues.
  • Our researchers are preparing investigative reports on AI and consumer protection, health care, financial markets, military applications and more.
  • We are planning separate convenings on AI and democracy, consumer protection, legal issues and more – with the aim of developing public interest community consensus and strategies around ways to alleviate AI-related risks.
  • We are urging the Food and Drug Administration to demand scientific backing for AI tools that claim to provide therapy or other health care services to consumers.
  • We are developing campaign plans against new Pentagon budget lines for AI weaponry and the whole idea of “autonomous” weapons making decisions about use of lethal force without human intervention.
  • Our trade team is campaigning against trade deal terms that would inhibit countries from regulating AI.
  • We are preparing a major effort to require all AI-generated content to be labeled, an effort we think could meaningfully counter AI’s impact on social trust.

And that’s only a small sampling of the work underway!

We’re proud of what we’ve done so far. We identified a major new area of corporate-created risk. We leveraged our relevant expertise and have built out new expertise on the topic. We’ve pulled allies together, developed innovative solutions, lobbied policy makers, and more.

Proud, but not satisfied or complacent. We know very well the perils of letting the corporate purveyors of powerful and hazardous technologies operate without meaningful public safeguards – and we’re determined to make sure the necessary protections are put in place.