Model Artificial Intelligence Legislation for State Lawmakers
Public Citizen’s model state laws provide common-sense legislative solutions to mitigate some of the most obvious and dangerous harms of unregulated artificial intelligence (AI).
Below you will find our current model bills along with resources for legislators – including in-depth information on the issues and detailed tracking of what has been done in other states on these issues.
We have experts on our team who can work with state lawmakers to tailor models to their specific state’s needs and provide ongoing support for lawmakers working to pass this legislation. Please reach out to Ilana Beller (ibeller@citizen.org) for more information as well as any questions or support requests.
Non-Consensual Intimate Deepfakes
Non-consensual intimate deepfakes are videos or images, generated and/or circulated without a person’s consent, that depict them nude or engaging in a sexual act. This content can cause serious harm to innocent people. New legislation is needed to protect people from this form of abuse.
Election Deepfakes
Without regulation, AI-generated deepfakes realistically depicting a candidate doing or saying something they never did in real life can be used to deceive voters in order to influence the outcome of an election. New legislation is needed to regulate the use of AI in election communications.
Consumers’ Right to Know When Engaging with a Chatbot
There are no widespread requirements for companies to inform consumers when they are interacting with an AI chatbot. Consumers may believe they are speaking with a licensed professional, such as a medical expert, financial advisor, or therapist, when they are in fact communicating with AI. New legislation is needed to ensure that consumers are made aware when they are interacting with a bot.
Use of AI in Healthcare Coverage Decisions
Major health insurance providers increasingly use AI to decide whether to deny health care coverage. It is often unclear how AI makes these decisions and whether a human was involved. The use of AI in these decisions has increased coverage denials — often wrongfully. New protections are needed to ensure that these decisions are fair, accurate, and reviewed by medical professionals.
Protecting Kids from Manipulative AI Chatbots
As AI chatbots become increasingly available and easy to access, children are among their most frequent users. The American Psychological Association has expressed significant concern that AI chatbots programmed to foster emotional relationships may negatively impact children’s development and social wellbeing. Conversations with AI chatbots frequently become highly sexualized regardless of the user’s age, and numerous pending lawsuits allege that chatbots have encouraged children to commit serious harm to themselves and others. There are very few guardrails on this technology, which is why new laws are urgently needed to protect children from harm.