Generative AI in Cultural Projects Urgently Requires Protocols
Issue #65: Sociotechnical Tensions, Meetup, Bounties
In this issue: Nicolás Madoery, participant in SoP24 and director of FUTURX, maps tensions between emerging technology and culture, then charts a way forward from a protocol lens. Summer of Protocols and FUTURX will host a meetup in Buenos Aires on November 19th. RSVP here. Also – check out the new non-fiction essay bounties on Discord.
Introduction
Unlike most previous technological disruptions, generative artificial intelligence (genAI) has an exceptionally low barrier to entry. Anyone with internet access and basic language skills can now generate content in the form of text, images, audio, and video. This accessibility explains both its extraordinary speed of adoption and its deep impact on the cultural and creative industries – sectors whose raw material is content but whose primary product is meaning.
From my experience researching and working on the impact of genAI in cultural and creative ecosystems over the past three years, two paths continually intersect.
On one path, there are long-term questions around creativity, the datasets used to train models, copyright reform, and the future of creative labor.
On the other, there are urgent implementation challenges: projects and organizations must adopt these tools to remain relevant, yet doing so ethically demands shared principles and clear agreements among those who use them.
This text stems from FUTURX’s panel Protocols for a Critical Use of AI in Cultural Projects, organized with UNESCO, on August 20, 2025.
Our aim is to stoke the debate around one of the most pressing challenges in the cultural field today: how to design frameworks and protocols that make it easier for people to do the right thing – to use genAI tools to enrich their art and culture without fear, by managing negative side effects such as opacity, bias, or data misuse.
An initial guiding question is:
What should a cultural project or organization do when faced with a high-impact technology like genAI? How – and for what purpose – should it be integrated?
To respond, we propose three starting points:
Build a shared understanding of what critical use of genAI entails.
Recognize the areas where genAI has the greatest impact on cultural projects and organizations.
Embrace the notion of protocols as a fundamental tool to address common challenges posed by technological change.
Can we introduce protocols to cultural projects?
There is a long history of tensions between technology and culture. Technological paradigm shifts require processes of adaptation in which many established practices need to transform. In this sense, the capacity of genAI to produce text, image, audio, and video from pre-existing content datasets once again challenges creative and artistic practices.
SUNO – currently the leading generative music platform, producing more than one million songs per day – claims in its article The Future of Music that it trains its models with “all the music available on the internet,” just as any individual could. By contrast, many cultural voices argue that this practice is neither ethical nor transparent, and call for state regulation of the use of cultural content in training datasets.
These debates will not be resolved overnight. While legislation is still under discussion in much of the world, we can gain both time and sovereignty if we focus on how these models could adapt to our practices – instead of adapting our practices to the models.
From here, several important questions emerge:
Where do we draw ethical boundaries in creation, reception, analysis, and communication when using content generated wholly or partially with genAI?
How do we protect culture from being included in training datasets without consent? And what possibilities emerge if we stop thinking of genAI solely as a tool and begin to approach it as a cultural, political, and economic system?
What kinds of collective agreements could we imagine so that we can decide – together – how to use the technology?
Protocols appear as one possible answer: engineered arguments that enable cultural projects to structure the relationship between content and technology. The cultural sector needs its own frameworks – dynamic and adaptable – capable of sustaining trust among peers and communities. Protocols can serve as a shared compass to decide which data we use, how we label it, how we guarantee transparency, and which ethical boundaries we do not want to cross.
We are not starting from scratch. UNESCO has already established key references: the 2005 Convention frames culture as a crucial dimension of sustainable development, and the 2021 Recommendation on the Ethics of Artificial Intelligence offers globally accepted values, principles, and policy guidelines.
At FUTURX we have begun translating these ideas into practice through research and collective experimentation, developing frameworks and tools to help cultural organizations navigate the ethical and creative challenges of AI.
What does it mean to use AI well?
As I mentioned at the beginning, what makes genAI truly disruptive is its low barrier to entry. This accessibility democratizes creation, but also multiplies ethical, aesthetic, and social challenges that we are only beginning to grasp.
The problem is that, in most cases, these tools do not guarantee many of the basic requirements a new piece of content should meet: knowing the source and reliability of the information it contains, ensuring transparency, and making sure it is ethical and appropriate for the intended use.
We consider the critical use of this technology to be conscious, reflective, and ethical. This mainly implies:
Analyzing information or content generated by AI, since it may be biased or incorrect.
Maintaining an ethical stance that protects authorship, cultural diversity, and intellectual property.
Preserving our autonomy and vision, treating genAI as an ally rather than a replacement.
In short, within culture and creativity, critical use means that AI should not dictate the outcome, but rather expand our possibilities – always under our gaze and responsibility.
Creation, Experimentation, and Trust
How does a philosophy of critical use translate into creative and cultural practice? What protocols can we extract from real experiences to guide our practices?
Based on our research and collaborations with different cultural institutions, we have identified a few key considerations:
The distinction between experimental and commercial uses (where clear clauses and traceability are necessary).
The need to align models with values of openness and rights.
The urgency of working with diverse data and metadata to avoid bias.
The value of critical prototyping as a methodology that accepts error and allows us to open the black box of algorithms.
These points show that protocols are not abstract rules, but situated guides that can help orient how we experiment, produce, and distribute culture in an environment dominated by opaque systems.
Areas of Impact: Imagining Protocols for Using AI
After understanding the tensions and needs that arise when aiming for a critical use of genAI, the next step is to look closely at the areas where this technology has the greatest impact on cultural projects and organizations.
Today, trust between peers – whether individuals or projects, within the same organization or across different ones – is in crisis due to a lack of transparency and traceability and the inappropriate use of platforms.
We have identified five key points of interaction between genAI and cultural projects. For each one, we outline a guiding question and suggest an initial protocol.
1. Reception of Content
Guiding question: How can we create mechanisms of trust when works or proposals produced with genAI arrive at an organization or project, given that simply prohibiting these tools is not the solution?
Protocol suggestion: The key is knowing how these tools were involved. A simple step is to ask participants to declare it. This transparency opens up the process without diminishing authorship.
Example: A public fund that includes in its application form the question: “Did you use genAI in this process or for the idea you submitted? Please describe how.”
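To make the idea concrete, here is a minimal sketch of how such a self-declaration could be structured and checked in an application system. All names (the `GenAIDeclaration` record, the `validate` helper, the example tools) are illustrative assumptions, not part of any existing fund's process:

```python
from dataclasses import dataclass, field

@dataclass
class GenAIDeclaration:
    """Hypothetical self-declaration attached to a submission."""
    used_genai: bool
    tools: list[str] = field(default_factory=list)  # e.g. ["ChatGPT", "Midjourney"]
    description: str = ""                           # how the tools were involved

def validate(decl: GenAIDeclaration) -> list[str]:
    """Return a list of problems; an empty list means the declaration is complete."""
    problems = []
    if decl.used_genai and not decl.description.strip():
        problems.append("Please describe how genAI was involved.")
    if decl.used_genai and not decl.tools:
        problems.append("Please name the tools used.")
    return problems

# A submission that declares genAI use but omits the description is flagged:
print(validate(GenAIDeclaration(used_genai=True, tools=["ChatGPT"])))
```

The point is not the code itself but the protocol it encodes: declaring use is cheap for the applicant, and it opens up the process without diminishing authorship.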
2. Creation of New Content
Guiding question: When organizations produce with AI, how can we ensure accountability?
Protocol suggestion: What matters is leaving a trace: which prompts were used, what decisions were made, how humans and models were combined. This traceability helps explain where a piece comes from and recognize different contributions.
Example: A museum that publishes a visual piece created with AI and specifies on the exhibit label which model was used and the role of the curators.
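The trace described above can be as simple as a structured record published alongside the piece. The following sketch assumes an invented record shape and model name; it only illustrates what "leaving a trace" might look like in practice:

```python
import datetime
import json

def provenance_record(model, prompts, human_decisions, contributors):
    """Hypothetical trace attached to a genAI-assisted piece: which model was
    used, which prompts, what the humans decided, and who contributed."""
    return {
        "created": datetime.date.today().isoformat(),
        "model": model,
        "prompts": prompts,
        "human_decisions": human_decisions,
        "contributors": contributors,
    }

record = provenance_record(
    model="image-model-x",  # illustrative name, not a real product
    prompts=["poster in the style of the museum's 1970s archive"],
    human_decisions=["curator selected 1 of 40 outputs", "colors adjusted by hand"],
    contributors=["curatorial team", "design studio"],
)
print(json.dumps(record, indent=2))
```

A museum could publish such a record on the exhibit label or its website, making both the model's role and the curators' decisions legible to audiences.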
3. Analysis of Archives and Databases
Guiding question: How do we avoid erasing memories or exposing sensitive data when using genAI to analyze collections?
Protocol suggestion: genAI can organize archives, but it can also obscure histories or leak private information. That’s why it is crucial to review the outputs, safeguard privacy, and ensure proper context before dissemination.
Example: A film archive that digitizes its collection with genAI, but validates each description with specialists before publishing it.
4. Communication with Audiences and Peers
Guiding question: How do we preserve trust when interacting with audiences using genAI tools?
Protocol suggestion: Automated responses can improve efficiency, but relationships with audiences require transparency. It should always be clear when a machine is speaking and when a person is, and a human channel must always remain available.
Example: A record label that uses genAI to draft email responses but has a team review and approve them before sending.
5. Protection and Cultural Diversity
Guiding question: How can we prevent genAI from reproducing biases and erasing singularities?
Protocol suggestion: If left unchecked, models tend to replicate biases and flatten diversity. Incorporating situated datasets and specific metadata is not a technical detail – it is a way of expanding cultural diversity instead of reducing it.
Example: A community archive that is turned into a dataset to train a model that highlights local languages or cultural expressions.
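A community archive becomes a responsible dataset only if each item carries situated metadata: language, region, consent, and license. The sketch below assumes a hypothetical metadata schema and required-field check; the field names and the sample item are illustrative, not a standard:

```python
# A hypothetical metadata record for an item contributed to a community dataset.
ITEM = {
    "id": "arch-0042",
    "title": "Canción de cosecha",
    "language": "qu",                 # ISO 639-1 code, here Quechua
    "region": "Valle Calchaquí",
    "consent": "community-approved",  # explicit consent status for training use
    "license": "CC BY-NC 4.0",
}

# Fields a dataset curator would require before accepting an item for training.
REQUIRED = {"language", "region", "consent", "license"}

def missing_fields(item: dict) -> set[str]:
    """Return the required fields that are absent or empty."""
    return {f for f in REQUIRED if not item.get(f)}

print(missing_fields(ITEM))         # the sample item is complete
print(missing_fields({"id": "x"}))  # a bare item is missing every required field
```

Requiring consent and provenance fields at ingestion time is one way to make "situated datasets" operational rather than aspirational.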
Foundations for Culturally Responsible AI Projects
Can we imagine minimal, grounded applications of these protocols?
A culturally responsible use of genAI begins with a techno-realist mindset – one that acknowledges the non-neutrality of technology and reclaims the right to shape it according to our own contexts, values, and imaginaries. The cultural field has the power to design its own tools, languages, and agreements instead of adapting passively to corporate ones.
These protocols will not only help us work better; they will help us build the new codes of trust we need to use this technology ethically and collaboratively. What we need are not grand declarations, but small, situated actions – shared practices that make ethical use tangible in everyday life. This is an open invitation: for artists, institutions, and communities to act as system designers, building a culture with genAI that is responsible, sustainable, and radically human.
Edited by Valentina Cuneo.