Walled Garden vs Open Systems
As AI tools become more common in creative workflows, it’s important to understand how different systems handle data, intellectual property, and risk. The distinction between a walled garden and an open system helps clarify how creative inputs are used, stored, and protected.
The AMA encourages open dialogue across the creative community to help establish shared definitions, best practices, and evolving standards as AI tools and policies continue to develop.
Understanding whether an AI tool operates as a walled garden or an open system helps all parties:
Align expectations around IP protection and risk
Make informed decisions about tool selection
Establish clearer contract language and usage terms
Protect both creative value and client trust
Walled Garden
A walled garden is a closed AI environment where creative inputs and outputs are isolated, protected, and excluded from broader model training or reuse.
When content lives in a closed garden:
The content is not ingested into the broader LLM
Any derivative outputs stay confined to that environment
The data is excluded from general model training, fine-tuning, or future reuse
No embeddings, weights, or learnings are propagated back into the main system
Technically, a walled garden rests on three practices: data isolation, training exclusion, and controlled inference. Requesting that a brand store an artist's imagery in a "closed garden" ensures those assets are used solely for the agreed use case and prevents them from influencing the broader AI model. This practice is common in AI-assisted post-production workflows, where it keeps the original assets secure and proprietary while still allowing controlled creative exploration with AI.
Why Walled Gardens Matter in Creative Production
As AI tools become more integrated into creative production, artists, agencies, and clients need to understand the distinction between walled garden AI environments and open AI systems. Each serves different purposes and carries different implications for intellectual property, risk, and creative control. Walled gardens are often required when agencies or clients need to minimize legal, ethical, or reputational risk, particularly around copyright, derivative works, and training data provenance.
Key Characteristics of a Walled Garden (LLM Context)
No external training or reuse
Content uploaded into the system (images, text, video, audio, IP) is not used to train or improve public or third-party models.
Restricted data flow
Inputs and outputs stay within the agreed environment and are not shared beyond the project, client, or vendor ecosystem.
Clear IP boundaries
Ownership, usage rights, and retention of both source materials and generated outputs are contractually defined and limited.
Access controls
Only approved users, teams, or partners can interact with the model or view the materials.
Auditability & traceability
The system allows visibility into how content is processed, stored, and deleted.
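The five characteristics above amount to a simple checklist a team can apply when evaluating a tool. Here is a minimal, purely illustrative Python sketch of that checklist; the class and field names are hypothetical and do not correspond to any vendor's API or contract language:

```python
from dataclasses import dataclass

# Hypothetical policy record for an AI environment -- illustrative only.
@dataclass
class AIEnvironmentPolicy:
    trains_external_models: bool   # content used to improve public/third-party models?
    data_leaves_environment: bool  # inputs/outputs shared beyond the agreed ecosystem?
    ip_terms_defined: bool         # ownership, usage, and retention contractually defined?
    access_restricted: bool        # only approved users, teams, or partners can interact?
    auditable: bool                # visibility into processing, storage, and deletion?

def is_walled_garden(policy: AIEnvironmentPolicy) -> bool:
    """A walled garden must satisfy all five characteristics."""
    return (not policy.trains_external_models
            and not policy.data_leaves_environment
            and policy.ip_terms_defined
            and policy.access_restricted
            and policy.auditable)

# A project-specific closed instance passes the checklist...
closed = AIEnvironmentPolicy(False, False, True, True, True)
print(is_walled_garden(closed))  # True

# ...while a shared, continuously learning system does not.
shared = AIEnvironmentPolicy(True, True, False, False, False)
print(is_walled_garden(shared))  # False
```

The point of the sketch is that a single "no" on any of the five questions means the environment is not a walled garden, which is why contract language typically addresses all five explicitly.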
Key considerations:
Inputs are not used to train or improve public or third-party models.
Data retention and deletion are defined in the contract.
Clear boundaries exist around ownership, usage, and derivative rights.
Access is limited to approved users or project-specific instances.
Typically lower legal, ethical, and reputational risk.
Common use cases:
Client work, branded content, pre-release creative, proprietary IP, talent likeness, and other sensitive or confidential materials.
How It’s Typically Described Contractually
You’ll often see language such as:
“Closed-environment AI tools”
“Non-training, non-retentive AI systems”
“Client-specific or project-specific AI instances”
“No cross-pollination with open models”
When discussing the storage of your or your artist's images, clarify which specific "walled garden" the client is using. You can request a walled garden dedicated exclusively to your project. This is preferable to a general "brand imagery" garden, where your assets might be stored alongside others and could be combined with them to create derivative works.
Open AI Systems
An open system refers to AI tools that operate within a shared or continuously learning environment, where inputs may contribute to model improvement unless otherwise restricted.
Key considerations:
Inputs may be retained and used for ongoing model training.
Data persistence may extend beyond the scope of a single project.
IP controls and visibility into downstream use may be limited.
Content may indirectly influence outputs generated for other users.
Higher uncertainty around copyright, likeness, and derivative use.