Welcome to The BLAI Roundup
Keep reading to learn more about current developments in the field!
Trump Promotes AI Images to Falsely Suggest Taylor Swift Endorsed Him
Former President Donald Trump shared AI-generated images that falsely suggested Taylor Swift endorsed him, raising questions about the legal and ethical challenges of using artificial intelligence to manipulate public perception. The images, shared on social media, depicted Swift supporting Trump, although they were labeled as "satire" by some. This incident highlights potential legal issues surrounding the misuse of AI to spread misinformation, especially as AI-generated content can blur the lines between parody and deception, potentially leading to defamation or fraud claims.
Area of Law: Copyright, Privacy, Constitutional Law
Legal Issues: Name/Image/Likeness, Right of Publicity, Fair Use, Political Speech
Read it at New York Times
Contributed by: Sienna Horvath
OpenAI Unveils New ChatGPT Model with Advanced Reasoning
Online chatbots like ChatGPT and Google's Gemini have faced issues with solving simple math problems and generating buggy or incomplete code. OpenAI's latest version of ChatGPT, powered by the new OpenAI o1 technology, aims to address these problems by allowing the chatbot to "think through" tasks more thoroughly and provide more accurate answers. Demonstrations of this new model have shown improvements in solving complex puzzles, answering advanced academic questions, and diagnosing medical conditions.
Area of Law: Patent, Trade Secrets, Intellectual Property
Legal Issues: Liability, Bias
Read it at New York Times
Contributed by: Chloe Tepper
China Opts Out of International Agreement on Use of AI in Military Applications
China opted not to sign an international agreement involving over 60 countries, including the U.S., that seeks to establish safeguards on the use of AI in military applications. The agreement, discussed at the Responsible AI in the Military Domain (REAIM) summit in South Korea, emphasizes keeping human control over AI-driven military systems, particularly when decisions involve human lives. The article suggests that the refusal by China, along with 30 other nations, may be linked to a desire to avoid constraints on military AI advancement. While many nations favor multilateral agreements to prevent fully autonomous AI weapons, experts caution that adversarial nations may not be deterred by these measures.
Area of Law: International, Cybersecurity
Legal Issues: Laws of War, Liability
Read it at Fox News
Contributed by: Sienna Horvath
Musician Charged with Fraud for Using Bots to Collect Millions in Royalties
Michael Smith, a North Carolina musician, has been charged with fraud for creating hundreds of thousands of fake songs and streaming them using bots to collect $10 million in royalties from platforms like Spotify and Apple Music. Over seven years, Smith used AI-generated music and fake streaming accounts to manipulate royalty payments, evading detection by diversifying his fake songs and streaming them repeatedly. Smith faces serious charges, including wire fraud and money laundering, and could receive up to 20 years in prison for each charge.
Area of Law: Entertainment
Legal Issues: Copyright, Fair Use
Read it at New York Times
Contributed by: Sienna Horvath
The AI Bill Driving a Wedge Through Silicon Valley
California's proposed AI regulation bill mandates that AI companies assess whether their models could cause harm, such as facilitating cyberattacks or producing biological weapons, and requires a "kill switch" for potential misuse. The bill enforces safety testing and risk mitigation for the largest AI developers. Supporters argue that it addresses critical safety concerns, while opponents worry it could stifle innovation, particularly for smaller or open-source developers. Governor Gavin Newsom has until September 30 to decide whether to sign the bill, which could have far-reaching implications nationally and globally.
Area of Law: Cybersecurity, Intellectual Property
Legal Issues: Regulatory Compliance, Administrative Burden
Read it at Financial Times
Contributed by: Sienna Horvath
How Health Insurance Companies Reject Patient Claims Without Reading Them
Health insurance companies such as Cigna are using algorithms to screen patients' health insurance claims, rejecting those that do not meet the algorithm's criteria without ever sending them to a medical provider for review. The industry claims this improves efficiency, but the use of algorithms to deny claims, and the push for efficiency behind it, raises questions of fairness for patients.
Area of Law: Privacy, Insurance, Health Care
Legal Issues: Regulatory Compliance, Consumer Protection, Discrimination
Read it at ProPublica
Contributed by: Leak Ly
OpenAI Acknowledges New Models Increase Risk of Misuse to Create Bioweapons
Large language models (LLMs) are advancing rapidly, nearing the threshold of artificial general intelligence (AGI). OpenAI continues to enhance its models' problem-solving and reasoning capabilities, bringing them closer to AGI. However, these advancements, along with the theoretical implications of AGI, raise significant ethical and security concerns, including OpenAI's own acknowledgment that its newest models increase the risk of misuse to create bioweapons. OpenAI is aligning its models with its safety policies. Nevertheless, increased safety considerations and policies often come at the cost of financial performance, competitive advantage, and innovation.
Area of Law: Patent, Human Rights, Ethics
Legal Issues: Regulatory Compliance, Product Liability
Read it at Financial Times
Contributed by: Kiyan Mohebbizadeh
We want to give a special thank you to the contributors for making this newsletter possible.
All images were generated with DeepAI. Most summaries were generated with ChatGPT.