A.I. Is Already Harming Democracy, Competition, Consumers, Workers, Climate, and More

MEDIA ALERT

The buzz around tools like ChatGPT is that generative A.I. will transform the world in ways that increase productivity, spur innovation, and make businesses rich, even as detractors say A.I. could kill us all. Setting aside frightening threats that may materialize as the technology evolves, A.I. is already causing serious harms – documented in a new report from Public Citizen – harms that are all but certain to grow exponentially in the rush to rapidly deploy it.

Right now, businesses are deploying potentially dangerous A.I. tools faster than their harms can be understood or mitigated. History offers no reason to believe that corporations can self-regulate away the known risks – especially since many of these risks stem as much from generative A.I. itself as from corporate greed. Businesses rushing to introduce these new technologies are gambling with people's lives and livelihoods, and arguably with the very foundations of a free society and a livable world.

Please tell your readers about these harms and call for a pause on the new generative A.I. technologies until meaningful government safeguards are in place to protect the public.

Public Citizen is hosting a hybrid in-person/Zoom conference in Washington, D.C., on Thursday, April 27, featuring U.S. Rep. Ted Lieu (D-Calif.), at which leading academics, technologists, and public policy advocates will discuss the wide range of threats A.I. already poses. We hope you or a colleague can attend. Here are some of the biggest dangers and risks:

A.I. is already giving monopolies advantages and encouraging anticompetitive practices. The massive computing power required to train and operate large language models and other generative A.I. gives big corporations with the most resources a huge advantage. Products like ChatGPT have the potential to worsen self-preferencing by search engines – an anticompetitive practice companies like Amazon, Apple, and Microsoft have already abused. Moreover, OpenAI is developing plugins that will allow ChatGPT to carry out actions that can be performed on the web, such as booking flights, ordering groceries, and shopping. By structuring plugins as a kind of app store within ChatGPT, OpenAI is likely to reproduce Big Tech’s tendency to thwart and throttle competition, siphoning money from small and local businesses.

A.I. is already spreading misinformation. Misinformation-spreading spambots aren't new, but generative A.I. tools make it easy for bad actors to mass-produce deceptive political content. For example, OpenAI's newest large language model, GPT-4, produces misinformation more capably and more persuasively than its predecessors. One study found that text-based generative A.I. can help conspiracy theorists quickly generate polished, credible-looking messages to spread misinformation, sometimes citing evidence that doesn't even exist.

A.I. is already making convincing deepfakes. Increasingly powerful A.I. audio and video tools are making deepfakes harder to distinguish from authentic content. A.I. has already convincingly mimicked President Joe Biden and former President Donald Trump, as well as other high-profile candidates and media figures. The FBI warned in 2019 that scammers were using deepfakes to create sexually explicit images of teenagers in order to extort them for money. Even the U.S. military has used deepfakes.

A.I. is already exploiting artists and content creators. Works that artists and writers put online have been used without their consent to train generative A.I. tools, which then produce derivative material. For example, far-right trolls used A.I. to transform cartoonist Sarah Andersen's work into neo-Nazi memes. Artists have filed a class action lawsuit against Stability AI, as have engineers, who say the company plagiarizes source code they wrote. Voice actors are reportedly being subjected to contract language allowing employers to synthesize their voices using A.I. And Getty Images – whose watermark bleeds through in images purportedly "created" by A.I. – is also suing. No one gave OpenAI, valued at an estimated $29 billion, permission to use any of this work. And there is no definitive way to find out whether an individual's writing or creative output was used, to request compensation, or to withdraw material from OpenAI's data set.

A.I. is already exploiting workers. Companies developing A.I. tools use texts and images created by humans to train their models – and typically employ low-wage workers abroad to help filter out disturbing and offensive content. Sama, OpenAI's outsourcing partner, employs workers in Kenya, Uganda, and India for companies like Google, Facebook, and Microsoft. The workers labeling data for OpenAI reportedly took home an average of less than $2 per hour. Three separate Sama teams in Kenya spent nine-hour shifts labeling 150 to 250 passages of text, each up to 1,000 words, for sexual abuse, hate speech, and violence. Workers said the job left them mentally scarred.

A.I. is already influencing policymakers. A.I. can be used to lobby policymakers with authentic-sounding but artificial astroturf campaigns from machines masquerading as constituents. An early example of this: In 2017, spambots flooded the Federal Communications Commission with millions of comments opposing net neutrality. In response, the agency decided to ignore non-expert comments entirely and rely solely on legal arguments, thereby excluding nearly all public input from its rulemaking process.

A.I. is already scamming consumers. Scammers are already using ChatGPT and other A.I. tools for increasingly sophisticated rip-off schemes and phishing emails. In 2019, criminals used A.I. tools to impersonate the CEO of a U.K.-based energy company, successfully requesting a fraudulent transfer of nearly a quarter million dollars. And in 2022, thousands of people fell victim to a voice-imitation A.I. deepfake: Scammers used A.I. tools to pose as loved ones in an emergency situation – and ripped people off to the tune of more than $11 million.

A.I. is already fueling racism and sexism. When data shaped by pre-existing societal biases is used to train algorithmic decision-making machines, those machines replicate and exacerbate the biases. OpenAI’s risk assessment report released with GPT-4’s launch was forthright about the model’s tendency to reinforce existing biases, perpetuate stereotypes, and produce hate speech. And Lensa, an A.I.-powered tool for creating images of users based on selfies, has a tendency to produce overtly sexualized images of women, even more so if the woman is of Asian descent.

A.I. is already replacing media with bogus content. The use of A.I. in journalism and the media is accelerating with virtually no guardrails holding back abuse. BuzzFeed laid off 12% of its workforce and then announced plans to use ChatGPT to produce quizzes and listicles, alarming company staff. The A.I.-generated content's seemingly authoritative statements included worrisome errors that could confuse or mislead readers. Subsequent reporting revealed that BuzzFeed published dozens of travel articles written almost entirely by generative A.I. that were comically repetitive. Meanwhile, Arena Group, publisher of Sports Illustrated and Men's Journal, recently debuted its first A.I.-written story, which was criticized for several medical errors. And CNET, a once-popular consumer electronics publication acquired in 2020 by a private equity firm, has been quietly producing A.I.-generated content for more than a year, apparently to game Google search results and draw dollars from advertisers.

A.I. is already undermining privacy. ChatGPT has given rise to a host of new data security and surveillance concerns. Because A.I. is trained by scraping the internet for writing, it's likely that sensitive personal information posted online has been scooped up. Once that data is absorbed into ChatGPT, there's no way to know what, if anything, OpenAI does to keep it secure. Therapy chatbots collect data about users' mental health; A.I. tools that mimic deceased loved ones require training on personal and private interactions; virtual friends and romantic partners encourage levels of intimacy that make divulging sensitive information almost inevitable. Little existing regulation limits how businesses might monetize this sensitive data or how A.I. might wittingly or unwittingly misuse it.

A.I. is already contributing to climate change. Training and maintaining generative A.I. tools requires significant computing power and energy, and the more they need, the bigger their carbon footprint. The energy required to train large language models is comparable to the construction and lifetime use of five cars, or to a car driving back and forth between New York City and San Francisco 550 times. Adding generative A.I. to search engines is predicted to require Google and Bing to increase their computing power and energy consumption by four to five times.

These harms have arisen at the genesis of A.I. Scaling it up now necessarily means compounding all of them exponentially. The speed at which businesses are deploying new A.I. tools practically guarantees that the damage will be devastating and widespread – and that limiting the damage will be far harder after A.I. tools are deployed than before.

Please tell your readers that we need strong safeguards and a broad, agile regulatory regime in place before businesses disseminate A.I. widely. Until then, we need a pause.