This quarterly update highlights key legislative, regulatory, and litigation developments in the second quarter of 2024 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and data privacy and cybersecurity.
I. Artificial Intelligence
Federal Legislative Developments
- Impact Assessments: The American Privacy Rights Act of 2024 (H.R. 8818, hereinafter “APRA”) was formally introduced in the House by Representative Cathy McMorris Rodgers (R-WA) on June 25, 2024. Notably, while previous drafts of the APRA, including the May 21 revised draft, would have required algorithm impact assessments, the introduced version no longer has the “Civil Rights and Algorithms” section that contained these requirements.
- Disclosures: In April, Representative Adam Schiff (D-CA) introduced the Generative AI Copyright Disclosure Act of 2024 (H.R. 7913). The Act would require persons who create a training dataset that is used to build a generative AI system to provide notice to the Register of Copyrights containing a “sufficiently detailed summary” of any copyrighted works used in the training dataset and the URL for such training dataset, if the dataset is publicly available. The Act would require the Register to issue regulations to implement the notice requirements and to maintain a publicly available online database that contains each notice filed.
- Public Awareness and Toolkits: Certain legislative proposals focused on increasing public awareness of AI and its benefits and risks. For example, Senator Todd Young (R-IN) introduced the Artificial Intelligence Public Awareness and Education Campaign Act (S. 4596), which would require the Secretary of Commerce, in coordination with other agencies, to carry out a public awareness campaign that provides information regarding the benefits and risks of AI in the daily lives of individuals. Senator Edward Markey (D-MA) introduced the Social Media and AI Resiliency Toolkits in Schools Act (S. 4614), which would require the Department of Education and the Department of Health and Human Services to develop toolkits to inform students, educators, parents, and others on how AI and social media may impact student mental health.
- Senate AI Working Group Releases AI Roadmap: On May 15, the Bipartisan Senate AI Working Group published a roadmap for AI policy in the United States (the “AI Roadmap”). The AI Roadmap encourages committees to conduct further research on specific issues relating to AI, such as “AI and the Workforce” and “High Impact Uses for AI.” It states that existing laws (concerning, e.g., consumer protection, civil rights) “need to consistently and effectively apply to AI systems and their developers, deployers, and users” and raises concerns about AI “black boxes.” The AI Roadmap also addresses the need for best practices and the importance of having a human in the loop for certain high impact automated tasks.
Federal Regulatory Developments
- Federal Communications Commission (“FCC”): FCC Chairwoman Jessica Rosenworcel asked the Commission to approve a Notice of Proposed Rulemaking (“NPRM”) seeking comment on a proposal to require a disclosure when political ads on radio and television contain AI-generated content. According to the FCC’s press release, the proposal would require an on-air disclosure when a political ad—whether from a candidate or an issue advertiser—contains AI-generated content. The requirements would apply only to those entities currently subject to the FCC’s political advertising rules, meaning it would not encompass online political advertisements. Shortly after Chairwoman Rosenworcel’s statement, Commissioner Brendan Carr issued a statement indicating that there is disagreement within the Commission concerning the appropriateness of FCC intervention on this topic.
- Department of Homeland Security (“DHS”): DHS announced the establishment of the AI Safety and Security Board (the “Board”), which will advise the DHS Secretary, the critical infrastructure community, private sector stakeholders, and the broader public on the safe and secure development and deployment of AI technology in our nation’s critical infrastructure. In addition, DHS Secretary Alejandro N. Mayorkas and Chief AI Officer Eric Hysen announced the first ten members of the AI Corps, DHS’s effort to recruit 50 AI technology experts to play pivotal roles in responsibly leveraging AI across strategic mission areas.
- The White House: The White House issued a press release detailing the steps that federal agencies have taken in line with the mandates established by the 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI (the “AI Executive Order”). In sum, federal agencies reported that they had timely completed all of the 180-day actions mandated by the AI Executive Order. The White House also announced new principles to protect workers from dangers posed by AI, including ethically developing AI, establishing AI governance with human oversight, and ensuring responsible use of worker data.
- President’s Council of Advisors on Science and Technology (“PCAST”): PCAST released a report that recommends new actions that will help the United States harness the power of AI to accelerate scientific discovery. The report provides examples of research areas in which AI is already impactful and discusses practices needed to ensure effective and responsible use of AI technologies. Specific recommendations include expanding existing efforts, such as the National Artificial Intelligence Research Resource pilot, to broadly and equitably share basic AI resources, and expanding secure and responsible access of anonymized federal data sets for critical research needs.
- U.S. Patent and Trademark Office (“USPTO”): The USPTO published guidance on the use of AI-based tools in practice before the USPTO. The guidance informs practitioners and the public of the issues that patent and trademark professionals, innovators, and entrepreneurs must navigate while using AI in matters before the USPTO. The guidance also highlights that the USPTO remains committed to not only maximizing the benefits of AI and seeing them distributed broadly across society, but also using technical mitigations and human governance to cabin risks arising from AI use in practice before the USPTO.
- National Security Agency (“NSA”): The NSA released a Cybersecurity Information Sheet (“CSI”) titled “Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems.” As the first CSI led by the Artificial Intelligence Security Center, the CSI is intended to support National Security System owners and Defense Industrial Base companies that will be deploying and operating AI systems designed and developed by an external entity.
State Legislative Developments
- Algorithmic Discrimination & Consumer Protection: The Colorado AI Act (SB 205) was signed into law on May 17, making Colorado the first state to enact AI legislation addressing risks of algorithmic discrimination in the development and deployment of AI. The Act, which comes into effect February 1, 2026, primarily regulates the use of “high risk AI,” or AI systems that make or are a substantial factor in making consequential decisions on behalf of consumers. Key requirements include: a duty of care for AI developers and deployers to prevent algorithmic discrimination; developer disclosures of information about training data, performance, and discrimination safeguards; reporting to the state Attorney General of risks or instances of algorithmic discrimination; deployer “risk management policies and programs” for mitigating algorithmic discrimination risks; deployer algorithmic discrimination impact assessments; notices to consumers affected by AI consequential decisions; and opportunities for consumers to correct personal data and appeal adverse decisions. On June 13, Colorado Governor Jared Polis, Colorado Attorney General Phil Weiser, and Colorado Senate Majority Leader Robert Rodriguez issued a public letter announcing a “process to revise” the Act to “minimize unintended consequences associated with its implementation” and consider “delays in the implementation of this law to ensure . . . harmonization” with other state and federal frameworks.
- Election-Related Synthetic Content Laws: Alabama (HB 172), Arizona (SB 1359), Colorado (HB 1147), Florida (HB 919), Hawaii (SB 2687), Mississippi (SB 2577), and New York (A 8808) enacted laws regulating the creation or dissemination of AI-generated election content or political advertisements, joining Idaho, Indiana, Michigan, New Mexico, Oregon, Utah, Washington, Wisconsin, and other states that enacted similar laws in late 2023 and early 2024. New Hampshire (HB 1596) passed a similar law that is awaiting the Governor’s signature. These laws generally prohibit, within 90 days of an election, the knowing creation or distribution of deceptive content created or modified by AI if such content depicts candidates, election officials, or parties, or is intended to influence voting behavior or injure a candidate. Some of these laws permit the distribution of otherwise prohibited content if it contains an audio or visual disclaimer that the content is AI-generated. Other laws, like Arizona SB 1359, impose independent requirements that deepfakes of candidates or political parties contain AI disclaimers within 90 days of an election.
- AI-Generated CSAM & Intimate Imagery Laws: Alabama (HB 168), Arizona (HB 2394), Florida (SB 1680), Louisiana (SB 6), New York (A 8808), North Carolina (HB 591), and Tennessee (HB 2163) enacted laws regulating the creation or dissemination of AI-generated child sexual abuse material (“CSAM”) or intimate imagery, joining Idaho, Indiana, South Dakota, and Washington. These laws generally impose criminal liability for the knowing creation, distribution, solicitation, or possession of AI- or computer-generated CSAM, or the dissemination of AI-generated intimate imagery with intent to coerce, harass, or intimidate.
- Laws Regulating AI-Generated Impersonations & Digital Replicas: Arizona (HB 2394) enacted a law prohibiting the publication or distribution of digital replicas and digital impersonations without the consent of the person depicted. Illinois (HB 4875) passed a similar bill that is awaiting the Governor’s signature. Illinois (HB 4762) also passed a bill regulating services contracts that allow for the creation or use of digital replicas in place of work that the individual would otherwise have performed, rendering such provisions unenforceable if they do not contain a reasonably specific description of the intended uses of the digital replica and if the individual was not properly represented when negotiating the services contract. This bill also awaits the Governor’s signature.
- California AI Bills Regulating Frontier Models, Training Data, Content Labeling, and Social Media Platforms: On May 20, the California Assembly passed AB 2013, which would require AI developers to issue public statements summarizing datasets used to develop their AI systems, and AB 2877, which would require AI developers to receive affirmative authorization before using personal information from persons under sixteen years of age to train AI. On May 21, the California Assembly passed AB 1791, which would require social media platforms to redact personal provenance data and add content labels and “system provenance data” for user-uploaded content, and AB 2930, a comprehensive bill that would regulate the use of “automated decision tools” and, like Colorado SB 205, would impose impact assessment, notice, and disclosure requirements on developers and deployers of automated decision-making systems used to make consequential decisions, with the goal of mitigating algorithmic discrimination risks. On the same day, the California Senate passed the Safe & Secure Innovation for Frontier AI Models Act (SB 1047), which would impose sweeping regulations on developers of the most powerful AI models, and the California AI Transparency Act (SB 942), which would require generative AI providers to create “AI detection tools” and add disclosures to AI content. On May 22, the California Assembly passed the Provenance, Authenticity, and Watermarking Standards Act (AB 3211), which would require generative AI providers to ensure that outputs are labeled with watermarks and require large online platforms to add “provenance disclosures” to content on their platforms.
AI Litigation Developments
- New Copyright Complaints:
- On June 27, the Center for Investigative Reporting, a nonprofit media organization, filed a complaint against OpenAI and Microsoft alleging copyright infringement from use of the plaintiff’s copyrighted works to train ChatGPT. Center for Investigative Reporting, Inc. v. OpenAI et al., 1:24-cv-4872 (S.D.N.Y.).
- On June 24, Universal Music Group, Sony Music Entertainment, Warner Records, and other record labels filed complaints against Suno and Udio, companies that allegedly used copyrighted sound recordings to train generative AI models that “generate digital music files that sound like genuine human sound recordings in response to basic inputs.” UMG Recordings, Inc. et al. v. Suno, Inc. et al., 1:24-cv-11611 (D. Mass.), and UMG Recordings, Inc. et al. v. Uncharted Labs, Inc. et al., 1:24-cv-04777 (S.D.N.Y.).
- On May 16, several voice actors filed a complaint against Lovo, Inc., a company that allegedly uses AI-driven software to create and edit voice-over narration, claiming that Lovo used their voices without authorization. Lehrman et al. v. Lovo, Inc., 1:24-cv-3770 (S.D.N.Y.).
- On May 2, authors filed a putative class action lawsuit against Databricks, Inc. and MosaicML, and another against Nvidia, alleging that the companies used copyrighted books to train their models. Makkai et al. v. Databricks, Inc. et al., 4:24-cv-02653 (N.D. Cal.), and Dubus et al. v. Nvidia, 4:24-cv-02655 (N.D. Cal.).
- On April 26, a group of photographers and cartoonists sued Google, alleging that Google used their copyrighted images to train its AI image generator, Imagen. Zhang et al. v. Google LLC et al., 5:24-cv-02531 (N.D. Cal.).
- On April 30, newspaper publishers who publish electronic copies of older print editions of their respective newspapers filed a complaint in Daily News et al. v. Microsoft et al., 1:24-cv-03285 (S.D.N.Y.), alleging, among other things, that the defendants copied those publications to train GPT models.
- Copyright and Digital Millennium Copyright Act (“DMCA”) Case Developments:
- On June 24, the court in Doe v. GitHub, Inc. et al., 4:22-cv-6823 (N.D. Cal.), partially granted GitHub’s motion to dismiss. Among other things, the court granted the motion with prejudice as to plaintiffs’ DMCA claim for removal of copyright management information, again finding that plaintiffs had failed to satisfy the “identicality” requirement. The court declined to dismiss claims over breach of open-source license terms.
- On May 7, the court in Andersen v. Stability AI, 3:23-cv-00201 (N.D. Cal.), issued a tentative ruling on the defendants’ motions to dismiss the first amended complaint. Among other things, the court was inclined to deny all the motions as to direct and “induced” copyright infringement and DMCA claims, to rule that there were sufficient allegations to support a “compressed copies” theory (i.e., that the plaintiffs’ works are contained in the AI models at issue such that when the AI models are copied, so are the works used to train the model), to allow the false endorsement and trademark claims to proceed, and to give the plaintiffs a chance to plead an unjust enrichment theory not preempted by the Copyright Act. The court has yet to issue a final ruling.
- Class Action Dismissals: On May 24, the court in A.T. et al. v. OpenAI LP et al., 3:23-cv-04557 (N.D. Cal.), granted the defendants’ motion to dismiss with leave to amend, holding that the plaintiffs had violated Federal Rule 8’s “short and plain statement” requirement. The court described the plaintiffs’ 200-page complaint, which alleged ten privacy-related statutory and common law violations, as full of “unnecessary and distracting allegations” and “rhetoric and policy grievances,” cautioning the plaintiffs that if the amended complaint continued “to focus on general policy concerns and irrelevant information,” dismissal would be with prejudice. On June 14, plaintiffs notified the court that they did not intend to file an amended complaint. The plaintiff in A.S. v. OpenAI LP et al., 3:24-cv-01190 (N.D. Cal.), a case with similar claims, voluntarily dismissed the case after the decision in A.T.
- Consent Judgment in Right of Publicity Case: On June 18, a consent judgment was entered in the suit brought by the estate of George Carlin against a podcast company over its allegedly AI-generated “George Carlin Special.” Main Sequence, Ltd. et al. v. Dudesy, LLC et al., 24-cv-00711 (C.D. Cal.).
II. Connected & Automated Vehicles
- Continued Focus on Connectivity and Domestic Violence: Following letters sent to automotive manufacturers and a press release issued earlier this year, on April 23, 2024, the FCC issued an NPRM seeking comment on the types of connected car services in the marketplace today, whether changes to the FCC’s rules implementing the Safe Connections Act are needed to address the impact of connected car services on domestic violence survivors, and what steps connected car service providers can proactively take to protect survivors from being stalked or harassed through the misuse of connected car services. On April 25, Rep. Debbie Dingell (D-MI) wrote a letter to the Chairwoman of the FCC noting that she would like to “work with the FCC, [her] colleagues in Congress, and stakeholders to develop a comprehensive understanding of and solutions to the misuse of connected vehicle technologies” in relation to domestic abuse and “implement effective legislative and regulatory frameworks that safeguard survivors’ rights and well-being.”
- Updated National Public Transportation Safety Plan: On April 9, 2024, the Federal Transit Administration (“FTA”) published an updated version of the National Public Transportation Safety Plan. The FTA noted that the National Safety Plan “does not create new mandatory standards but rather identifies existing voluntary minimum safety standards and recommended practices,” but that FTA will “consider[] mandatory requirements or standards where necessary and supported by data” and “establish any mandatory standards through separate regulatory processes.”
- Investigations into Data Retention Practices: On April 30, Sen. Ron Wyden (D-OR) and Sen. Edward Markey (D-MA) sent a letter to the Federal Trade Commission (“FTC”) asking the FTC to investigate several automakers for “deceiving their customers by falsely claiming to require a warrant or court order before turning over customer location data to government agencies” and urging the FTC to “investigate these auto manufacturers’ deceptive claims as well as their harmful data retention practices” and “consider holding these companies’ senior executives accountable for their actions.” This letter follows other, similar, letters Sen. Markey sent to automakers and the FTC in December 2023 and February 2024, respectively. Following this activity, on May 14, the FTC published a blog post on the collection and use of consumer data in vehicles, warning that “[c]ar manufacturers–and all businesses–should take note that the FTC will take action to protect consumers against the illegal collection, use, and disclosure of their personal data,” including geolocation data.
- AI Roadmap – CAV Highlights: The AI Roadmap, discussed above, encourages committees to: (1) “develop emergency appropriations language to fill the gap between current spending levels and the [spending level proposed by the National Security Commission on Artificial Intelligence (“NSCAI”)],” including “[s]upporting R&D and interagency coordination around the intersection of AI and critical infrastructure, including for smart cities and intelligent transportation system technologies”; and (2) “[c]ontinue their work on developing a federal framework for testing and deployment of autonomous vehicles across all modes of transportation to remain at the forefront of this critical space.”
- Senate Hearing on Roadway Safety: On May 21, the Subcommittee on Surface Transportation, Maritime, Freight & Ports within the U.S. Senate Committee on Commerce, Science & Transportation convened a hearing entitled “Examining the Roadway Safety Crisis and Highlighting Community Solutions.” Sen. Gary Peters (D-MI), Chair of the Subcommittee, stated in his opening statement that “digital infrastructure that improves crash response to predictive road maintenance and active traffic management” are “essential to achieving safe system goals” and that “safe and accountable development, testing, and deployment of autonomous vehicles” can “help us reduce serious injuries and death on our roadways.”
- Connected Vehicle National Security Review Act: On May 29, Rep. Elissa Slotkin (D-MI) announced proposed legislation entitled the Connected Vehicle National Security Review Act, which would establish a formal national security review for connected vehicles built by companies from China or certain other countries. The legislation would allow the Department of Commerce to limit or ban the introduction of these vehicles from U.S. markets if they pose a threat to national security.
- Updates to Federal Motor Vehicle Safety Standards: On May 9, the National Highway Traffic Safety Administration (“NHTSA”) within the Department of Transportation (“DOT”) issued a Final Rule that adopts a new Federal Motor Vehicle Safety Standard requiring automatic emergency braking (“AEB”) systems, including pedestrian AEB, and forward collision warning systems on light vehicles weighing under 10,000 pounds manufactured on or after September 1, 2029 (September 1, 2030 for small-volume manufacturers, final-stage manufacturers, and alterers). The AEB system must “detect and react to an imminent crash with both a lead vehicle or a pedestrian.”
III. Data Privacy & Cybersecurity
Privacy Developments
- Proposed Comprehensive Federal Privacy Law: As noted above, in June, lawmakers formally introduced the APRA, which, if passed, would create a comprehensive federal privacy regime. The APRA would apply to “Covered Entities,” which are defined as “any entity that determines the purposes and means of collecting, processing, retaining, or transferring covered data” and is subject to the FTC Act, is a common carrier, or is a nonprofit. Covered Entities do not include government entities and their service providers, specified small businesses, or certain nonprofits.
- National Security & Privacy: In April, the President signed the Protecting Americans’ Data from Foreign Adversaries Act of 2024 (“PADFAA”) into law. Under the law, data brokers are prohibited from selling, transferring, or providing access to Americans’ “sensitive data” to certain foreign adversaries or entities controlled by foreign adversaries. Sensitive data includes identifiers such as social security numbers, as well as geolocation data, data about minors, biometric information, private communications, and information identifying an individual’s online activities over time and across websites or online services. Separately, the President signed legislation reauthorizing Section 702 of the Foreign Intelligence Surveillance Act, which permits the U.S. government to collect, without a warrant, the communications of non-Americans located outside the country to gather foreign intelligence.
- Health Data & Privacy: In April, the Department of Health and Human Services (“HHS”) published a final rule that modifies the Standards for Privacy of Individually Identifiable Health Information under the Health Insurance Portability and Accountability Act (“HIPAA”) regarding protected health information concerning reproductive health. Relatedly, the FTC voted 3-2 to issue a final rule that expands the scope of the Health Breach Notification Rule (“HBNR”) to apply to health apps and similar technologies and broadens what constitutes a breach of security, among other updates.
- New State Privacy Laws: Maryland, Minnesota, Nebraska, and Rhode Island became the latest states to enact comprehensive privacy legislation, joining California, Virginia, Colorado, Connecticut, Utah, Iowa, Indiana, Tennessee, Montana, Oregon, Texas, Florida, Delaware, New Jersey, New Hampshire, and Kentucky. In addition, Alabama enacted a new genetic privacy law, and Colorado and Illinois amended existing privacy laws.
Cybersecurity Developments
- CIRCIA: On July 3, the U.S. Cybersecurity and Infrastructure Security Agency closed the public comment period for the NPRM related to the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (“CIRCIA”). The final rule, expected in September 2025, will significantly alter the landscape for federal cyber incident reporting notifications, consistent with the Administration’s whole-of-government effort to bolster the nation’s cybersecurity.
- National Cybersecurity Strategy Implementation Plan: In May, the Administration added 65 new initiatives to the National Cybersecurity Strategy Implementation Plan.
We will continue to update you on meaningful developments in these quarterly updates and across our blogs.