Servers in a Microsoft Bing data center power AI systems. Courtesy of Microsoft

Before there was generative artificial intelligence (GAI) art, a specific application within the broader world of AI, there was digital art, or CGI (computer-generated imagery). You’ve seen the movies whose credits list hundreds of digital and CGI artists creating backgrounds and special effects. Now the technology has evolved to the point where words can be extrapolated into images.

A core question for artists who use these technologies, as well as for those who appreciate their work, is one of trust. Is there a human in what we see and hear? Does our lack of understanding of how artists use these technologies give the appearance of magic? What if these technologies were understood as collaborators in art making? Some would say it doesn’t really matter. Maybe all that matters is that we are entertained.

In large part, digital editing tools have become accepted as just one more tool set for making art. Many of us understand this when we use our mobile phones as cameras and then use an app to edit the photos. That is how mainstream, and how simple, digital tools have become as they are integrated into what we might call the egalitarian side of art making. The same outcome is likely for imagery in an AI world.

It is worth taking a close look at how GAI art is made, especially to discover what we identify as the artist in these images. The answer turns on how much human authorship is involved. That is important for determining whether copyright would be granted to such images.

Other questions come into play. Is the artist exploiting images that belong to others, the thousands or even millions of images on which the GAI model was trained? The answers to these questions might turn on how much post-processing is applied to the initial GAI image, and on whether artists will train their GAI models on their own datasets: their own art and their own photos.

Here is where we can have a productive conversation about who the artist is in an AI world.

I used the following text as a prompt to make an image with an AI model: “Photo of a serious chatbot that is half human and half cyborg, sitting in front of a computer screen typing in a text prompt, appearing on the wall in front of the cyborg is an image of a reflective 80-year-old indigenous human.”

I used Bing’s DALL·E 3 text-to-image generator. One of the four images it generated held my interest.
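Bing’s image creator handles all of this through a simple web page, so no code is involved. For readers curious what the same kind of request looks like programmatically, here is a minimal sketch that sends a text prompt to a DALL·E 3 style endpoint through OpenAI’s Python SDK; the client setup, model name and image size are illustrative assumptions, not the exact pipeline behind the Bing tool I used.

```python
# A minimal sketch: submitting a text prompt to a text-to-image model
# through OpenAI's Python SDK. This is an illustrative analogue of the
# Bing interface described above, not the tool itself; it assumes the
# SDK is installed and an OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Photo of a serious chatbot that is half human and half cyborg, "
    "sitting in front of a computer screen typing in a text prompt, "
    "appearing on the wall in front of the cyborg is an image of a "
    "reflective 80-year-old indigenous human."
)

# DALL-E 3 generates one image per request; the response carries a URL
# to the finished picture.
response = client.images.generate(
    model="dall-e-3",
    prompt=prompt,
    size="1024x1024",
    n=1,
)

print(response.data[0].url)
```

Bing produced four candidates at once; with an API call like this one, you would simply repeat the request and keep the variation that holds your interest.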

Cyborg Imaging a Human Self. Joe Nalven + DALL·E 3

I could have stopped at this point with some minor lighting adjustments. However, I decided to open the image in Photoshop, which has a tool called Generative Fill. I enlarged the image’s canvas and let Generative Fill expand the image. I still wasn’t satisfied, so I decided to play with a storyline.

Here is a cyborg that has generated its own image of a human. It might also be interesting to have this “human” imagine itself with a memory of its youth. But no matter how many word prompts I used, no memories of youth were generated.

So I reviewed previous GAI images I had created with other words and found one that could fit into this emerging storyline. I copied and pasted this image into the one I had been working on. Now I was combining images, creating a collage, within Photoshop. Drawing on my previous incarnation as a digital artist, I proceeded to integrate the two images into a more robust composition.
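Photoshop does this compositing interactively, but the collage step itself can be described in a few lines of code. Here is a minimal sketch using the Pillow imaging library to place one generated image beside another on an enlarged canvas; the file names and placement are hypothetical stand-ins for the images discussed here, not the Photoshop workflow itself.

```python
# A minimal sketch of the collage step using the Pillow library, not the
# Photoshop workflow described above. File names and placement are
# hypothetical stand-ins.
from PIL import Image

base = Image.open("cyborg_imagining_human.png").convert("RGBA")
memory = Image.open("memory_of_youth.png").convert("RGBA")

# Enlarge the canvas so there is room for the second image.
canvas = Image.new(
    "RGBA",
    (base.width + memory.width, max(base.height, memory.height)),
    "black",
)
canvas.paste(base, (0, 0))

# Paste the "memory of youth" image beside the first; passing the image
# as the mask keeps any transparent edges clean.
canvas.paste(memory, (base.width, 0), memory)

canvas.convert("RGB").save("collage.jpg", quality=95)
```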

The Cyborg’s Imagined Human Has a Memory of Youth. Joe Nalven + DALL·E 3 + Photoshop

Putting aside the merits of these images, we might ask whether either image could be considered eligible for copyright. As an artist, I would consider my role in the first collaboration with DALL·E 3 minimal. As a collaboration between human and machine, I would concede that DALL·E 3’s image was not something I had anticipated, even though it was spun out of my text prompt.

In terms of human authorship, I participated more through the words than through the image. Can the words and the image be disentangled? In this analysis, it is likely that the words would be discounted and the issue reduced to the image.

But what of my subsequent image? In the revision, I expanded the canvas using another GAI tool; I brought in a separate image that, admittedly, was also a GAI image like the one I was working on; and I crafted a title to tell an unusual storyline. I placed both images on a single canvas to make an effective composition and continued with further lighting effects.

If I were asked about the human authorship of this revised image, I would contend that I moved the needle toward greater human involvement, toward more noticeable human authorship. Is that sufficient for copyright at this moment in the evolution of this new art-making technology? Perhaps there are questions more interesting than the issue of copyright.

Now, let’s have a conversation.

Joe Nalven is an adviser to the Californians for Equal Rights Foundation and a former associate director of the Institute for Regional Studies of the Californias at San Diego State University.