
STEPS TO MONETIZING AND NEGOTIATING AI USE AND TERMS


As AI becomes increasingly integrated into creative production, it’s critical to define how commissioned work may be used and where boundaries should be set. Not all AI use carries the same level of risk, value, or impact on creative rights, and each category requires thoughtful consideration. The framework below outlines common AI use tiers, how they typically function in practice, and when additional fees, protections, or contract terms may be warranted.

1) Define the AI Use Tiers and Determine if Fees Are Warranted

AI can be applied to creative work in several distinct ways, each carrying different implications for control, value, and rights. The following categories outline common AI use cases, from internal review and post-production support to derivative creation and model training for long-term brand systems. Fees can be added as needed to address these uses.

Here is how agencies or brands may use AI with your content, and the usage rights associated with each.

  1. Internal Use: Allows AI to summarize, analyze, organize, or present commissioned work for internal review, internal decision-making, archival purposes, or potential future use. This is similar to non-AI internal usage.

  2. Creative Tool: Allows retouching, refinement, or controlled enhancements. This tier is similar to allowing clients to edit your imagery in post-production for minimal changes such as changing the color of wardrobe, removing talent, or extending backgrounds.

  3. Derivative Works: Allows creation of additional AI-generated deliverables for a defined project. In this category, clients can alter your content to create derivative works that extend the value of the project beyond the original assets, such as adding motion to images or changing backgrounds and/or talent to create new content. Licensing for this is equivalent to creating non-AI derivative works, similar to licensing additional usage, and should be priced accordingly.

Note: Where a client has licensed usage for the entire library, artists and agents should be aware that some clients may assume they can apply AI tools to that library in the same manner as traditional post-production, such as changing backgrounds, swapping wardrobe, or making other controlled edits. Artists should proactively determine whether this type of AI use is acceptable and, if so, under what conditions. Does this include adding motion? Swapping backgrounds or talent? Altering talent? Any AI use of licensed library content should be expressly defined, appropriately compensated where applicable, and limited in scope, rather than implied or assumed.

  4. Model Training: Allows new brand and creative development to create new visual systems that may substitute for future production. Allowing this use is the equivalent of granting non-AI Work for Hire and should be priced accordingly.

2) Negotiate AI Terms

When AI use is granted for commissioned assets, consider these parameters when negotiating the terms:

  • Limiting use or outputs to the specific project and defined time frame, territory and media, consistent with the original usage terms.

  • Limiting AI processing to a closed, walled-garden environment only and restricting commissioned assets from being uploaded to third-party AI platforms whose terms allow model training, onward data use, or retention beyond the agreed scope, unless expressly approved in writing.

  • Limiting AI creative tools to standard, customary post-production and prohibiting changes to talent, locations, composition, or scenarios that are materially different from how the image was shot.

  • Restricting the Licensed Materials from being used for data learning of any kind, including AI training or analysis, and from being used to establish or inform a new campaign look and feel, visual language, or creative system beyond the licensed project. If you do not restrict this portion of AI use, you are essentially granting a Work for Hire.

3) Understand Indemnification

Indemnification is one of the most critical and often misunderstood issues in AI-related agreements. As AI tools become more integrated into creative workflows, artists and their representatives need to understand where risk sits and who is responsible for it.

The AMA encourages members to treat indemnification as a foundational consideration in AI negotiations, not an afterthought. Clear alignment on tools, usage, and responsibility is essential to protecting both creative value and long-term careers.

Core Principle: If AI use expands scope, longevity, or uncertainty, then contracts, indemnification, and fees must expand with it. Creative upside never replaces the need for legal clarity.

  • Tool-specific risk matters.
    When a client or agency specifies the use of a particular AI tool, artists are generally required to operate within the terms and conditions of that platform. As of December 2025, many agency and brand legal teams have limited approval to a small number of tools, most commonly Adobe Firefly.

    This preference is driven by several legal and risk-management factors, including Firefly’s training on licensed and public-domain content, its integration within enterprise-controlled environments, and the availability of commercial indemnification for approved use cases. These factors help provide clearer chain of title, reduce uncertainty around third-party rights, and align with client insurance and compliance requirements.

By contrast, many other AI tools do not currently offer comparable transparency regarding training data, do not provide meaningful indemnification, or include terms that create ambiguity around ownership, reuse, or liability. As a result, these tools are frequently not approved by client legal departments, regardless of their creative capabilities. Tool approval is therefore typically based on legal defensibility and risk mitigation rather than creative preference.

Gut Check Tips:

If a client or agency specifies a particular AI tool, then the artist is generally required to operate within that tool’s terms and limitations.

If a tool lacks transparency around training data, ownership, or indemnification, then it is often not approved by client legal teams, regardless of creative quality.

Bottom line: Tool approval is about legal defensibility and risk mitigation, not creative preference.

  • Effectiveness vs. protection is a real tension.
    Some AI tools may offer greater creative flexibility, speed, or technical sophistication, but these advantages often come without the legal protections clients require for commercial use. In particular, the absence of clear training-data provenance, meaningful indemnification, or well-defined ownership terms introduces risk that both clients and artists must carefully evaluate.

Gut Check Tips:

If a tool offers greater speed, flexibility, or technical sophistication, then it may also introduce:

  • Unclear training-data provenance

  • No meaningful indemnification

  • Ambiguity around ownership or reuse

  • Increased exposure to third-party claims

If legal protections are missing, then both client and artist are assuming elevated commercial risk.

  • Know when risk is being shifted to the artist.
    If a client requires the use of a specific AI tool but is unwilling to indemnify the artist for that use, the artist should carefully assess whether the associated risk is acceptable and, where appropriate, consult their own legal counsel. In response to these risks, artists may choose to decline a project when the requested tools or workflows expose them to uninsurable or open-ended liability. Tool selection is therefore driven less by creative capability and more by whether the risk can be responsibly managed, contractually protected, and aligned with professional standards on both sides.

Gut Check Tips:

If a client requires use of a specific AI tool and refuses to indemnify the artist for that use, then the risk is being shifted downstream to the artist.

If the risk is uninsurable, open-ended, or outside professional norms, then the artist should:

  • Reassess whether the project is viable

  • Consult legal counsel where appropriate

  • Decline the project or request revised terms

Bottom line: Tool choice is driven by whether risk can be responsibly managed, not by what is technically possible.

  • Derivative use requires explicit protection.
    If a client is granted permission to use an artist’s work to create derivative works, whether through AI tools or other downstream processes, the artist should ensure they are fully indemnified for any subsequent use of that work. Without clear indemnification, liability may extend well beyond the original scope, timeline, or intent of the commissioned project, potentially exposing the artist to claims arising from future adaptations, training, or reuse over which they have no control.

    In these situations, artists should also consider whether additional fees are appropriate to account for derivative usage rights. Granting permission for derivative creation often represents a materially different value exchange than standard commissioned use, particularly when the work may inform or generate new outputs over time. Where indemnification, usage boundaries, or compensation do not adequately address this expanded risk and value, artists may reasonably decline participation or require revised terms before proceeding.

Gut Check Tips:

If a client is granted permission to create derivative works from an artist’s assets, then the artist should be fully indemnified for all downstream use.

If derivative use is allowed without clear indemnification, then liability may extend beyond:

  • The original project scope

  • The agreed timeline

  • The artist’s control or intent

If work may be reused, adapted, trained on, or used to generate future outputs, then that represents a materially different value exchange than standard commissioned use.

Artists should consider:

  • Additional derivative or access fees

  • Clear usage boundaries and timeframes

  • The right to decline if risk and compensation are misaligned