by Rebekah Ninan

This past month, California Governor Gavin Newsom signed a wave of artificial intelligence-related legislation into law. Much public debate has focused on SB 1047, a proposal ultimately vetoed by Governor Newsom, which would have held AI companies liable for “catastrophic harms” from AI models. Comparatively little attention has been paid to three new laws aimed at health care-related AI and data privacy: AB 3030, SB 1223, and SB 1120. AB 3030 requires that health care providers disclose when they have used generative AI to create communications with patients. SB 1223 amended the California Consumer Privacy Act of 2018 to include neural data as sensitive personal information, whose collection and use companies can be directed to limit. Finally, SB 1120 limits the degree to which health insurers can use AI to determine medical necessity for member health care services. This article summarizes these developments in the law.

AB 3030 focuses on the use of generative AI. More specifically, the legislation requires that health facilities, clinics, physician groups, and other health care providers reveal to patients when a communication with the patient was generated with artificial intelligence. The bill stipulates different disclaimers for different forms of communication. If a provider uses AI to send a patient a direct one-time communication, a disclaimer must appear at the top of a letter or at the beginning of any audio communication. For video communications or chat features, the warning must be featured prominently throughout the communication. The disclosure requirement has limits: no disclosure is needed if the communication does not include information relating “to the health status of a patient,” otherwise referred to as “patient clinical information.” Additionally, the use of generative AI does not need to be disclosed if the communication has been reviewed by a health care provider, though it is unclear what level of review is required. These exceptions suggest that the greater concern is generative AI that supplants a health care provider’s discretion in how to communicate health information. The bill follows in the footsteps of Utah, which required disclosure of the use of generative AI during health care services earlier this year.

In SB 1223, an amendment to the California Consumer Privacy Act (CCPA), California has protected neural data as sensitive personal information. Neural data, also known as “brain wave data,” is defined by the bill as “information generated by measuring the activity of a consumer’s central or peripheral nervous system.” This data has been used in everything from predicting consumer behavior to helping paralyzed individuals communicate, and the media has described it as providing the basis for AI-based mind-reading technology. The addition of such data to the CCPA means that companies cannot sell or share it and must make efforts to deidentify it if collected, and that individuals have the right to know whether such data is being collected and the right to delete it. Other types of data that already enjoy this protection include biometric and genetic information. Some scholars have argued that the amendment was perhaps unnecessary because neural data was already encompassed within California’s existing protections for biometric information. Others argue that the amendment, which protects raw neural data, does not go far enough, leaving unprotected the use of related but nonneural information containing inferences and conclusions that describe what one is thinking or feeling. On this view, the privacy right concerns not neural data itself but what neural data reveals about a person. California’s law is only the second of its kind in the country, following Colorado’s, but lawmakers from other states, including Florida, Texas, and New York, have begun to show interest in tackling the topic.

Finally, SB 1120 constrains the unfettered use of AI tools to approve or deny medical treatments by requiring that a licensed health care professional still make individualized determinations for each member of a health insurance plan. The law has been referred to as the “Physicians Make Decisions Act.” SB 1120 amends the California Health and Safety Code and the Insurance Code to ensure that AI, algorithms, and other software do not “deny, delay or modify health care services based, in whole or in part, on medical necessity.” This aligns with recent guidance from the Centers for Medicare and Medicaid Services that Medicare Advantage plans may not make determinations of medical necessity based solely on algorithms using broad data sets. That guidance came after a journalistic investigation revealed that unregulated predictive algorithms were driving coverage denials in Medicare Advantage, particularly for post-acute care. Furthermore, AI cannot be used in isolation to deny admission or downgrade hospital stays. Other states have pending bills that would mandate insurers’ disclosure to providers and individuals of their use of AI, and Georgia is considering banning the use of artificial intelligence alone to make coverage determinations.

California’s recent laws represent an emerging effort by states to regulate the use of artificial intelligence in health care. California’s laws can serve as a model for other states’ bills as well as an experiment to see to what degree states can actually constrain the use, and potential harms, of such technology.


Rebekah Ninan’s (J.D. 2025) research interests are focused on the intersection of health and the law. She is interested in mass torts, drug product liability, pharmaceutical antitrust, and the administrative law that affects health-related regulations promulgated by agencies.

The post Health Care, AI, and the Law: An Emerging Regulatory Landscape in California first appeared on Bill of Health.