FAQ: Artificial Intelligence (AI) and Machine Learning

Information on this page last updated: September 26, 2025. Please verify details independently, as laws/policies may have changed since this update.

    • Agentic AI (often called an AI agent) is software to which you give a goal; it plans the steps, uses tools/APIs, and takes actions autonomously rather than just chatting back. It can search, fill forms, send messages, move files, or trigger workflows while keeping short-term memory and adjusting its plan as it goes.

    • Artificial Intelligence (AI) is the field of computer science focused on creating systems that can perform tasks that typically require human intelligence. These tasks include things like learning from data, recognizing patterns, understanding language, solving problems, making decisions, and even perceiving the world through vision or sound.

      AI can range from simple rule-based systems to advanced machine learning models that adapt and improve over time. In practice, AI powers things like recommendation engines, voice assistants, self-driving cars, image recognition, fraud detection, and generative tools like ChatGPT.

    • Closed Garden / Walled Garden AI Model is an AI system whose model, data, and tooling are proprietary and not publicly released. Access is through a hosted product or API; the provider controls what you can see, do and ship.

    • Generative AI is software that creates new content, such as text, images, audio, video, or code, by learning patterns from large datasets and then producing original outputs that resemble what it learned (not simple copy-paste).

    • Large Language Model (LLM) is a type of AI trained on vast amounts of text so it can understand and generate language: it can answer questions, draft emails, summarize documents, translate, or write code by predicting the next word in context. Essentially, it is a text engine that learns patterns from examples.

    • Machine Learning (ML) is a branch of artificial intelligence that focuses on building systems that can learn from data and improve their performance over time without being explicitly programmed. Instead of following fixed rules, machine learning models identify patterns, make predictions, or generate insights by analyzing examples.

      For instance, ML powers tools like spam filters (which learn what looks like spam from past emails), recommendation systems (like those used by Netflix or Spotify), and image recognition software.
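      The "predicting the next word in context" idea from the LLM definition above can be illustrated with a toy bigram model. The corpus and function names here are invented purely for illustration; real LLMs use neural networks trained on vastly more data, but the core idea of learning next-word patterns from examples is the same.

      ```python
      from collections import Counter, defaultdict

      # Tiny example corpus; a real model trains on billions of words.
      corpus = "the cat sat on the mat the cat ran".split()

      # Count which word follows each word in the corpus.
      following = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev][nxt] += 1

      def predict_next(word):
          """Return the word most often seen after `word` in the corpus."""
          return following[word].most_common(1)[0][0]

      print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
      ```

      An LLM does something conceptually similar but with context windows of thousands of words and learned statistical representations rather than raw counts.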
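      The spam-filter example above can be sketched in a few lines of Python. The training messages and scoring rule here are made up for illustration; production filters use probabilistic models (such as naive Bayes) trained on large labeled datasets, but the idea of learning "what looks like spam" from past examples is the same.

      ```python
      from collections import Counter

      # Hypothetical labeled training messages.
      spam_messages = ["win cash now", "free cash prize now"]
      ham_messages = ["lunch meeting tomorrow", "project update attached"]

      def word_counts(messages):
          """Count how often each word appears across a list of messages."""
          counts = Counter()
          for message in messages:
              counts.update(message.split())
          return counts

      spam_counts = word_counts(spam_messages)
      ham_counts = word_counts(ham_messages)

      def classify(message):
          """Label a message by whichever class's vocabulary it matches more."""
          words = message.split()
          spam_score = sum(spam_counts[w] for w in words)
          ham_score = sum(ham_counts[w] for w in words)
          return "spam" if spam_score > ham_score else "ham"

      print(classify("free cash now"))     # overlaps spam vocabulary -> "spam"
      print(classify("meeting tomorrow"))  # overlaps ham vocabulary -> "ham"
      ```

      Notice that nothing here hard-codes a rule like "cash means spam"; the behavior comes entirely from the examples, which is the defining trait of machine learning.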

  • The Human Artistry Campaign is a coalition of over 180 member organizations advocating for responsible AI development and use that respects the rights of creators and rights holders and strives to enhance human creativity, not replace it. 

  • The Digital Creators Coalition (DCC) is a group of associations, companies, and organizations that represent individual creators, independent producers, small- and medium-size enterprises (SMEs), large businesses, and labor organizations from the American creative communities. Its members contribute significantly to U.S. GDP, exports, and employment, collectively employing or representing millions of American creators and contributing billions of dollars to the U.S. economy.

  • Perfect 10 v. Google (9th Cir. 2007)

    • Perfect 10 sued Google over Image Search, alleging that thumbnail copies and the display of full-size images via in-line linking infringed its copyrights. The Ninth Circuit held that Google’s thumbnails were fair use because they served a transformative search function, and adopted the “server test,” finding no direct infringement for embedded images not stored on Google’s servers. Rights holders argued this weakened online enforcement, while platforms saw it as clarifying how search and linking can lawfully display images. It remains an influential precedent that shapes web practices today.

  • Grecco v. RADesign, Inc. (SCOTUS cert. denied 2025)

    • Photographer Michael Grecco sued RADesign for unauthorized use of his images, raising a key question about the Copyright Act’s three-year statute of limitations, whether claims accrue under the discovery rule (when the owner discovers or should discover the infringement) or the injury rule (when the infringement occurs). After the Second Circuit allowed the case to proceed without rejecting the discovery rule, defendants sought Supreme Court review; the Court declined to hear it, leaving the discovery rule widely available to rights holders even as debate over accrual continues.

  • Warner Bros. v. Midjourney

    • Warner Bros. has filed a lawsuit against the AI company Midjourney, accusing it of illegally using copyrighted characters like Superman, Bugs Bunny, and Batman to train its system and generate new images and videos. The studio argues this misleads users and harms its intellectual property, while seeking up to $150,000 per violation. Midjourney denies wrongdoing, claiming its training methods are protected “fair use” and comparing its system to a search engine. This case follows similar lawsuits from Disney and Universal and could have major implications for how AI tools use copyrighted material.

  • Disney, Warner Bros., Universal sue MiniMax

    • Disney, Warner Bros. Discovery, and Universal Pictures have filed a joint lawsuit against Chinese AI company MiniMax, alleging its tool Hailuo AI illegally uses their movies and TV shows to generate images and videos of copyrighted characters like Minions, Superman, and Darth Vader. The studios call the practice an existential threat to Hollywood, warning the technology could soon create full-length unauthorized films. They’re seeking damages and a court order to stop MiniMax from exploiting their works.

  • Encyclopedia Britannica and Merriam-Webster sue Perplexity

    • Encyclopedia Britannica and Merriam-Webster have sued AI company Perplexity, alleging its “answer engine” copies and summarizes their content without permission. They argue this not only infringes copyright and trademarks but also misleads users by attributing inaccurate AI-generated information to their brands. The lawsuit seeks damages and a halt to Perplexity’s practices, highlighting growing concerns that AI tools can undercut publishers and creators by diverting traffic and revenue away from original sources.

  • Anthropic Settlement - $1.5 Billion

    • A group of authors sued Anthropic, alleging it used pirated books to train its AI chatbot, Claude. In a major ruling, the court said training on legally obtained books can qualify as fair use, but storing and using pirated copies is not. Anthropic agreed to a $1.5 billion settlement and must destroy the pirated files.

    • What this means: This case sets an important precedent. AI companies may defend training on lawful data as fair use, but they face serious liability if they rely on pirated or improperly sourced material. For creators, it underscores the need to protect how their work is acquired and used in AI training.

  • U.S. Legal Requirements: None Yet

    As of September 2025, there is no federal law in the U.S. explicitly requiring brands to disclose that artwork was generated using AI. Copyright law, however, places a strong emphasis on human authorship: works created entirely by AI are not eligible for copyright, and AI-generated material must be disclosed if submitted for registration.

    Upcoming / International Trends

    In the EU, the AI Act (with obligations phasing in starting in 2025) mandates disclosure/watermarking for AI-generated content. In the U.S., a congressional proposal, the AI Labeling Act of 2023, would require similar disclosures, but it has not been enacted.

    Brand Risk & Best Practices

    Even without a legal requirement, transparency is critically important. Brands that disclose AI usage, especially in marketing or creative contexts, build trust and align with growing consumer and regulatory expectations. Platforms like Google and Instagram are already pushing for more labeling of AI content for clarity and user trust.