Plagiarism vs homage in generative AI

How creativity gets orphaned between humans and machines

5 min read · Apr 17, 2025

By Yervant Kulbashian. You can support me on Patreon here.

Imitation is the sincerest form of flattery. Artists of all stripes have been known to reproduce parts of others’ works that inspired them, both within and across art forms, as a form of homage. The imitation ranges from full parodies like Spaceballs, Mel Brooks’ send-up of the Star Wars franchise, to an occasional element or line of text reminiscent of some earlier work. The artists in question often studied their predecessors in university, where those predecessors shaped their artistic symbology and language. And the targets of these homages have rarely expressed annoyance at the imitation (though some have), taking it instead as a compliment.

Same shot recreated across three pieces of media.

It is impossible to create something completely new from scratch. Few writers invent their own words or grammar. Beyond that, most creatives work within the constraints of a genre, incorporating the many tropes and trends common to its tradition. This is largely out of necessity: to be understood by your audience, you must on some level speak that audience’s language and reference the substance of the zeitgeist. To express peace, for example, you might depict a pastoral scene, a symbol readily recognized by your audience and one that is as much a part of the public language as the word “peace” itself.

Why, then, when generative AI replicate works they have been trained on, is it taken as plagiarism rather than homage? To answer this, we must understand how generative AI create something that has never been seen before. Modern AI that generate images and videos, e.g. OpenAI’s DALL·E, begin by taking random noise and slowly, pixel by pixel, adjusting that initial content to more closely match the patterns in the art they were trained on. The process is like taking the sand on a beach and gradually sculpting it into a sandcastle, at each step taking stock of how far you have come and making the next adjustment.

Source: https://developer.nvidia.com/blog/generative-ai-research-spotlight-demystifying-diffusion-based-models/
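This denoising loop can be sketched in a few lines. The following is a minimal, illustrative sketch in the style of DDPM-like samplers, assuming a hypothetical trained `denoiser` network that predicts the noise present in an image; the schedule and constants are stand-ins, not DALL·E’s actual implementation:

```python
import torch

def sample_image(denoiser, steps=1000, shape=(3, 64, 64)):
    """Start from pure noise and iteratively remove the noise the
    network predicts, step by step (simplified DDPM-style loop)."""
    # A simple linear noise schedule (illustrative values).
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(1, *shape)  # the "beach sand": pure random noise
    for t in reversed(range(steps)):
        # The network guesses which part of x is noise at this step.
        predicted_noise = denoiser(x, t)
        # Remove a fraction of that noise, nudging x toward the
        # patterns seen in training ("taking stock of how far you
        # have come, and making the next adjustments").
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * predicted_noise) / torch.sqrt(alphas[t])
        if t > 0:
            # Re-inject a little noise so the process stays stochastic.
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x  # the finished "sandcastle"
```

Calling `sample_image(lambda x, t: torch.zeros_like(x))` runs the loop end to end with a dummy denoiser; only a real trained network turns the noise into a coherent image.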

Any specification of what exactly the art turns out to be is done through “conditioning”, e.g. taking some text prompt or catalyst image and using it to nudge the product towards one outcome rather than another. This is like steering the builder of the sandcastle towards a particular type of castle. Beyond the user’s prompt, there is no additional “intention” on the part of the AI to express something important to it. The prompt is the only injection of goal or purpose into the work. After that, the only force guiding the AI is the drive to match what has already been seen.
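One standard mechanism for this nudging in diffusion models (not named above, but widely used) is classifier-free guidance: the network makes one prediction with the prompt and one without, and the sampler exaggerates the difference between them. A hedged sketch, where the `denoiser` signature and the guidance scale are assumptions for illustration:

```python
import torch

def guided_noise_estimate(denoiser, x, t, prompt_embedding, guidance_scale=7.5):
    """Classifier-free guidance: blend an unconditioned prediction
    with a prompt-conditioned one, amplifying the prompt's pull."""
    # What the model would do with no prompt at all.
    eps_uncond = denoiser(x, t, condition=None)
    # What the model does when nudged by the prompt.
    eps_cond = denoiser(x, t, condition=prompt_embedding)
    # Move further in the prompt's direction than the raw prediction;
    # a higher guidance_scale means the prompt steers harder.
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```

The prompt only biases which castle gets built; everything else in the estimate still comes from matching patterns in the training data.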

When we trust a human creator not to plagiarise, it is because we sense that they have their own motives for creating what they did. Those motives make it impossible to merely copy what they have previously seen. At the very least, they must themselves select the pieces they will put together, as befits their intention, like a collage. Even during an artist’s “training”, that is, when they are reading, viewing, or listening to others’ art, they don’t incorporate the entirety of what they see; they select from it only the parts that are in line with their personal motives.

Contrast this with how a pure, out-of-the-box language model (not a heavily refined chatbot) might respond when asked what its favourite movie is: it will replicate what others have said, with probabilities that mirror how common each response is. It has never seen a movie; and even if it had, it would have no personal preference for one over another. Modern chatbots have to be heavily restrained from pretending to be people in this way by “artificially”, post hoc, forcing the model to say that it has no such preferences. The default response without such adjustments would be to hallucinate a belief, to simulate a value it didn’t have.
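That base-model behaviour can be reduced to a toy illustration. The frequencies below are invented; the point is only that the “favourite” is sampled from the crowd’s answers rather than held by the model:

```python
import random

# Hypothetical frequencies with which answers follow the question
# "What's your favourite movie?" in the training data (invented here).
answer_probs = {
    "The Godfather": 0.30,
    "Star Wars": 0.25,
    "Casablanca": 0.20,
    "Spirited Away": 0.15,
    "Citizen Kane": 0.10,
}

def base_model_answer():
    """A base model has no preference of its own; it reproduces the
    statistics of what people have said, weighted by frequency."""
    answers = list(answer_probs)
    weights = list(answer_probs.values())
    return random.choices(answers, weights=weights, k=1)[0]

# Ask five times and you will likely get several different "favourites".
print([base_model_answer() for _ in range(5)])
```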

Generative AI are reminiscent of a younger cousin who, looking up to their older relatives, imitates their clothing, mannerisms, and lingo, with the sole goal of being “as cool as them”. And even this is an overstatement, since the cousin at least has a motive: to be cool. AI lack any internal drive to accomplish anything; they only act as prompted.

Trust is always instilled by seeing a desire. A person’s passion is the reason we trust them. This extends beyond generative AI to any form of AI. We would trust AI surgeons more if we knew that they sincerely wanted the operation to succeed, instead of merely replicating what they have seen others do — and thoughtlessly causing havoc when that imitation fails. We’d prefer our surgeon to adjust on the fly when they sense that something has gone wrong; this is why we trust humans. Or consider how, when you are driving and see something unexpected, you slow down to try to suss it out; you don’t just make a hard guess and blindly stick to it.

Momentary intentional adjustments make our behaviour properly our own. Abilities like these require something like Reinforcement Learning (RL), since RL is the only type of Machine Learning that learns based on values. However, RL has proven highly unreliable in complex situations, and so production models tend to fall back on imitating patterns in data, e.g. through Behavioural Cloning.
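The difference in training signal can be sketched abstractly. Behavioural cloning penalises any deviation from the demonstrator, while RL (shown here as a simple REINFORCE-style update, one of many possible forms) reweights the agent’s own actions by the reward they earned. The function names and tensor shapes are illustrative:

```python
import torch
import torch.nn.functional as F

def behavioural_cloning_loss(policy_logits, expert_actions):
    """Imitation: penalise the policy wherever it deviates from what
    the demonstrator did, regardless of whether that action worked."""
    return F.cross_entropy(policy_logits, expert_actions)

def reinforce_loss(policy_logits, taken_actions, rewards):
    """RL (REINFORCE-style): make the actions the agent itself took
    more likely in proportion to the reward they actually earned."""
    log_probs = F.log_softmax(policy_logits, dim=-1)
    chosen = log_probs.gather(1, taken_actions.unsqueeze(1)).squeeze(1)
    return -(rewards * chosen).mean()

# Toy shapes: 4 states, 6 possible actions.
logits = torch.randn(4, 6)
expert = torch.randint(0, 6, (4,))   # what a demonstrator did
taken = torch.randint(0, 6, (4,))    # what the agent itself tried
reward = torch.randn(4)              # how well each attempt went
print(behavioural_cloning_loss(logits, expert), reinforce_loss(logits, taken, reward))
```

Only the second loss contains a value term; the first can only ever reproduce the demonstration.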

No artistic AI yet incorporates directed trial and error into its initial training (RL is used only to adjust the base model), because no RL model yet exists that can learn to create art based on its own inherent values. Such models can only serve as a medium of transmission for the human creator’s goals. If, on the other hand, models had their own motives, and borrowed the right words, images, or tropes to achieve them, we would trust them more as creatives, since we would know they are doing more than copying.

This is the difference between plagiarism and homage. Homage uses others’ works and tropes as a language for communicating the creator’s own intentions. Plagiarism, on the other hand, copies both intention and language because it has no intention of its own. Artists are more comfortable being copied when they know that the perpetrator is trying to achieve something beyond what they did, picking and choosing the “words” or pieces that served those desires. In contrast, the creator of an AI prompt rarely knows anything about the work they are copying. Thus the selection process is entirely in the hands of the AI model, which is in essence not selective at all. The creative act of selection is orphaned — it is neither in the user nor in the AI itself.
