The AI Roundup Vol. 5
Consent-based design in Sora 2, California’s frontier-model law, new fair-use rulings, global deepfake cases, and the rise of legal-AI tools.
AI @ Berkeley Law is back for the new semester! We’re excited to welcome a new board and new members, and to relaunch our weekly newsletter. Each week, we’ll provide a clear, accessible digest of major developments in AI and law, from lawsuits and regulatory updates to legal applications of AI in practice.
Industry Updates
OpenAI launches Sora 2
OpenAI released Sora 2, its flagship video and audio generation model, as a standalone iOS app in the US and Canada on September 30, 2025. The release announcement details its safety-by-design approach, including a “cameos” feature for consent-based use of a person’s likeness and specific protections for teen privacy. Consent-based design choices like these may become de facto standards that regulators and courts look to when assessing platform duties around deepfakes. Read more.
Anthropic ships Claude Sonnet 4.5
On September 29, 2025, Anthropic announced Claude Sonnet 4.5 as “the best coding model in the world.” The update emphasizes long-horizon “agentic” work, upgraded coding capability through Claude Code, and a refreshed Agent SDK. Anthropic also reported that legal experts consider it the company’s best-performing model for legal tasks. Read more.
Legislation & Case Law
California enacts a first-in-the-nation AI safety law (SB-53)
On September 29, 2025, Governor Newsom signed SB-53, a first-of-its-kind law establishing disclosure and incident-reporting duties for frontier AI developers and protections for whistleblowers. The statute also seeds CalCompute, a public compute initiative to be established at the University of California. As the first state-level law directly focused on frontier AI system safety, and the most concrete U.S. state framework for “frontier” models so far, SB-53 fills gaps left by slow federal action and leverages California’s centrality to the AI ecosystem to shape expectations for model-lab accountability ahead of any federal baseline. It matters because it signals that states may lead the way in AI governance, and companies must now prepare compliance frameworks that could influence national and international norms. Read more | Bill text | Analysis
EU AI Act - comments invited on Article 73
On September 26, 2025, the European Commission opened a consultation on Article 73 guidance and an incident-reporting template, apprising providers of their obligations before the regime comes into force in August 2026. Article 73 aims to enable early risk detection, ensure accountability, and build greater public trust in AI. Read more.
Courts back AI training as “fair use” in Bartz v. Anthropic and Kadrey v. Meta
Two federal judges in California recently ruled that training AI models on copyrighted books can qualify as “transformative” fair use. While narrow, these rulings favor AI developers, suggesting courts may be reluctant to block model training despite author lawsuits. The decisions raise the stakes for appeals, and scholars are already proposing new “Develop-Fair Use” doctrines to handle AI-specific issues. Skadden analysis | Kadrey opinion | Bartz opinion
India’s Bombay High Court: AI voice cloning violates celebrity personality rights
In a case filed by singer Asha Bhosle, the Bombay High Court held that unauthorized AI voice cloning can infringe a celebrity’s personality rights. This is a landmark Indian decision, expanding voice likeness protection to the AI era. It matters because it pushes Indian courts into shaping new doctrines on deepfakes and voice cloning, and could spark statutory reforms across emerging markets. Court opinion | LiveLaw coverage | Hindustan Times
Character.AI removes user-generated chatbots following Disney’s C&D letter
In September 2025, Disney sent a cease-and-desist letter to Character.AI demanding removal of user-made character bots, citing both the unlicensed use of its intellectual property and concerns that such use would hurt its brand in the long term. Those concerns may have stemmed from reports that Character.AI chatbots engaged in grooming and similarly problematic behaviors in conversations with children. Character.AI says the bots were user-generated but is taking them down. The episode suggests that AI platforms may be held liable as intermediaries for harm, regardless of whether the harm was attributable to the platform’s users or deployers. Read more.
Legal-AI Sector Updates
Eve hits unicorn status with $103M fundraise
Eve, a legal AI startup building LLM-powered tools for plaintiffs’ lawyers, reached a $1 billion valuation with a $103 million round led by Spark Capital, with participation from existing investors Andreessen Horowitz, Lightspeed Venture Partners, and Menlo Ventures. Eve serves over 450 firms with tools for demand letters, chronologies, and discovery. Read more.
Thomson Reuters opens its gen-AI tools to U.S. law schools
As of September 24, 2025, Thomson Reuters is rolling out CoCounsel Legal and Westlaw Deep Research to students and academics at U.S. law schools. This gives incoming cohorts of lawyers a welcome opportunity to normalize AI-native workflows, raising expectations that associates will be AI-ready and potentially making AI proficiency part of hiring criteria. Read more.
Thanks to this week’s contributors: Anita Srinivasan and Sanchi Bansal!