A New Era of Protection: California’s AI Privacy Regulations

WAI Content Team

As artificial intelligence (AI) continues to reshape industries, its impact on personal privacy is a growing concern. AI systems rely on vast amounts of data, including sensitive personal information such as health records, financial information, and biometric identifiers. This reliance creates significant privacy challenges. AI systems often collect more data than necessary, contradicting privacy principles like data minimization. Unpredictable future uses of this data also complicate compliance with existing privacy laws. Finally, AI algorithms, often described as a black box, are difficult for the public to understand, raising questions of responsibility and transparency.

This article is authored by Marie Lamothe, a licensed attorney in California and France, with over 12 years of experience as a legal counsel in technology transactions and privacy law. A Certified Information Privacy Professional (CIPP), she has worked across various industries and jurisdictions, supporting software companies in navigating complex legal challenges. She now works for Inductive Automation and has been volunteering with the WAI Global Legal Team since June 2023.


 

To address AI’s impact on privacy, states began enacting their own laws. California, for example, enacted several legislative measures that balance privacy protection with technological advancement. In the 2023-2024 legislative session, California passed multiple bills to better address privacy issues arising from AI, covering areas such as student privacy, transparency in AI training data, and the responsible use of AI-generated content. With President Trump revoking President Biden’s AI Executive Order, states like California are now at the forefront of setting guardrails for AI.


Expanding California’s Privacy Framework

One of the most significant updates is Assembly Bill 1008, amending the California Consumer Privacy Act (CCPA) to broaden the definition of personal information. Personal information can now exist in “abstract digital formats” (e.g., “compressed or encrypted files, metadata, or artificial intelligence systems that are capable of outputting personal information”). This adds to the already covered personal information stored in physical formats (e.g., “paper documents, printed images, vinyl records, or videotapes”) or digital formats (e.g., “text, image, audio, or video files”). AI systems, including generative AI systems, that generate or expose personal data are now subject to the same privacy restrictions that apply to other data processors.

But it is unclear whether the bill applies to AI models. The bill references AI "systems" rather than "models," which can affect the scope of the law, and it leaves the term "systems" undefined. While a "model" usually refers to a specifically trained algorithm (e.g., an LLM), an AI "system" could also encompass the broader architecture around the model, including user interfaces and application programming interfaces (APIs) for interacting with the model, tools for monitoring model performance and usage, and pipelines for periodically fine-tuning and retraining the model.

In addition, Senate Bill 1223, building on Colorado’s SB 21-190, adds neural data to the CCPA’s definition of sensitive personal information. Under the CCPA, neural data is data generated by brain-computer interfaces. This acknowledges the increasing importance of protecting new forms of data generated by AI technologies.


Transparency in AI Training Data

California’s Assembly Bill 2013, signed into law in 2024, mandates greater transparency regarding AI training data. By 2026, companies using AI must disclose information about the datasets used to train their systems and give consumers an option to opt out. Companies will also have to disclose the data’s origin, usage, copyright status, and collection dates. This law promotes transparency and accountability, ensuring that AI systems are scrutinized for how they use personal data in large and small datasets.


Combating the Risks of AI-Generated Content with Personal Information

California also passed several laws to mitigate AI harms. Assembly Bill 2905 addresses AI-generated robocalls, requiring that any robocall using AI-generated voices clearly disclose that it is not a real person. This law facilitates liability for incidents like a 2024 deepfake robocall that impersonated former President Biden.

California also expanded its laws to cover deepfakes. Assembly Bill 1831 extends existing child pornography laws to include AI-generated content, while Senate Bill 926 makes it illegal to use AI-generated nude images for blackmail. Senate Bill 981 requires social media platforms to allow users to report AI-generated deepfake nudes and temporarily blocks such content during investigations. If confirmed as a deepfake, the content must be removed permanently.

Additionally, California passed three bills to combat deepfake use in elections. Assembly Bill 2655 requires major platforms like Facebook and X to remove or label election-related deepfakes. Assembly Bill 2839 holds users accountable for posting or reposting misleading AI-generated content. And Assembly Bill 2355 mandates that political ads made with AI clearly disclose this fact to voters, helping them navigate the growing complexity of digital political messaging.


Children’s Privacy Protections

California also strengthened privacy laws to protect children. The California Age-Appropriate Design Code Act, enacted in 2022, requires companies offering online services used by children to follow specific guidelines. These include limiting the use of AI in recommendation algorithms and restricting targeted advertising for minors. The law also established the California Children’s Data Protection Working Group to develop best practices for these protections.


Healthcare Protections

California introduced new privacy protections related to AI use in healthcare. Assembly Bill 3030 mandates that healthcare providers inform patients when AI is used to communicate clinical information. This ensures transparency, enabling patients to understand when AI is involved in their care.


Looking Ahead: The Future of AI Privacy in California

Additional regulations are also in the pipeline. One proposal is the automated decision-making technology (ADMT) regulation, which would limit unnecessary data collection by AI systems. Similar to the EU’s Artificial Intelligence Act, California may also adopt a risk-based approach that categorizes AI systems by risk level and imposes stricter regulations on high-risk technologies like facial recognition.


Conclusion

California’s recent AI privacy regulations are shaping a new era where technology and privacy coexist. By prioritizing transparency, consumer rights, and accountability, California is showing how legislation can both protect individual privacy and foster innovation. As AI continues to advance, California’s leadership ensures its residents are protected from the potential risks of AI, while businesses must remain vigilant in adapting to these rapidly evolving regulations.


 

Collaborate with us!


As always, we are grateful to you for taking the time to read our blog post.

If you want to share news that is relevant for our community and to those who read us from all corners of the worldwide WAI community, and if you have an appropriate background working in the field of AI and law, reach out to Silvia A. Carretta, WAI Chief Legal Officer (via LinkedIn or via e-mail silvia@womeninai.co) or to Dina Blikshteyn (via dina@womeninai.co) for the opportunity to be featured in our WAI Legal Insights Blog.


Silvia A. Carretta and Dina Blikshteyn

- Editors

