AI-Generated Art

What if I told you that anyone could be an artist? Becoming one no longer requires years of art school or even the most basic painting supplies, just a computer and the ability to describe what you want illustrated.

In April 2022, the artificial intelligence research laboratory OpenAI revealed DALL-E 2, the successor to its original DALL-E: an AI system that can create realistic images and art from a language description. OpenAI was co-founded in 2015 by a group including Elon Musk and Sam Altman with a mission of creating artificial intelligence that has a positive long-term impact on humanity, and the company has since released cutting-edge technologies such as the transformer language model GPT-3 and the music-generating AI MuseNet. DALL-E 2 changed the way people think about art and technology: two seemingly disparate worlds intersected like never before. Given a high-quality prompt, DALL-E 2 produces art at the level of a human artist in seconds and offers a series of pieces that vary in artistic style, composition, and color palette. DALL-E 2 can even edit pre-existing images based on simple phrases, like “remove the person behind me and add the Eiffel Tower in the background”.

Images generated by DALL-E 2 with various text prompts

Since the explosion of the generative-AI market following DALL-E’s popularization, new models have appeared frequently. Stable Diffusion, created by Stability AI, is not only free but open-source, which means it operates with fewer restrictions on using data from the open web to train its models. Furthermore, apps like Midjourney and Wombo Dream lower the technical barrier even further: because they are trained on images pleasing to the eye, almost any prompt will generate something that looks visually ‘artsy’ and ‘cool’.

Generative AI models rely on massive amounts of image data from the open web to teach algorithms how to recognize patterns. For instance, while the original DALL-E used a GPT-3-style transformer, DALL-E 2 pairs OpenAI’s CLIP model with a diffusion decoder, with roughly 3.5 billion parameters trained on hundreds of millions of text-image pairs scraped from the Internet. The ethics of using web images to train AI at such a large scale has been widely debated. Namely, these models repurpose data without permission and use it to mass-produce what artists would spend months making, often generating eerily similar copies at the expense of the original work. Claude Monet, Gustav Klimt, and Johannes Vermeer are just the tip of the iceberg of renowned artists whose work is, in effect, handed over for an AI to examine and reproduce. Less-acclaimed modern-day artists have it even worse: with less visibility and support from the public, generative-art enthusiasts may never even know that these artists’ creations have been taken in a way that lacks artistic integrity.
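As a rough illustration only (not OpenAI’s actual code, and with toy values chosen for the sketch), diffusion models like the one behind DALL-E 2 learn by progressively adding noise to training images and then training a network to reverse that corruption:

```python
import math
import random

random.seed(0)

# Stand-in for one 8x8 grayscale training image (pixel values in [0, 1]).
image = [random.random() for _ in range(64)]

# Toy noise schedule: 10 diffusion steps with linearly increasing noise.
betas = [1e-4 + i * (0.02 - 1e-4) / 9 for i in range(10)]

# Cumulative product of (1 - beta): how much of the original image survives
# after t steps of noising.
alpha_bar = []
prod = 1.0
for b in betas:
    prod *= 1.0 - b
    alpha_bar.append(prod)

def add_noise(pixels, t):
    """Forward diffusion: blend the image with Gaussian noise at step t."""
    keep = math.sqrt(alpha_bar[t])
    noise_scale = math.sqrt(1.0 - alpha_bar[t])
    return [keep * p + noise_scale * random.gauss(0.0, 1.0) for p in pixels]

# During training, a network would be shown `noisy` and learn to predict
# the noise that was added; generation runs that learned reversal from
# pure noise, conditioned on a text prompt.
noisy = add_noise(image, t=9)
```

A real model repeats this over hundreds of millions of image-caption pairs, which is exactly why the provenance of that training data is at the center of the debate.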

This appropriation perpetuates the idea that artists are products and styles rather than humans with skill and training. Many artists form a deep connection with themselves and their audiences through artistic expression, and this is not easily replicated by a machine. When South Korean illustrator Kim Jung Gi, beloved for his manhwa comic book style and highly detailed line art illustrations, passed away in 2022, an AI developer trained a model to imitate his work, sparking furious backlash from the online art community. Many were upset that, just days after the artist passed, his once outstanding and unique art became a commodity for training machines, and insisted that there was no replacement for Kim and his authentic pieces.

Since the anime art style is viewed as something of a cultural export in Japan, and AI-generated images can be nearly indistinguishable from original works to the naked eye, this poses a potential threat to the employment of Japanese anime artists. Critics have labeled generative AI like DALL-E 2 and Stable Diffusion as “anti-artist”, exploiting artists’ works and techniques in a plagiaristic manner without any safeguards to protect the original creator.

But does this mean we should not be optimistic about these new technologies? I don’t think so. AI like DALL-E has pushed past old limitations and has the potential to be used for good, such as generating references for human artists seeking inspiration. As with every other technology, what matters is how we control and regulate these models. Many generative-AI users (artists included) believe that AI will only advance from here, and that human artists and their AI counterparts should learn to coexist. Since this Pandora’s box has already been opened, we should instead address the discrediting and authorship of artists through legal restrictions and foster an audience-driven emphasis on human-made art.

This isn’t the first time a major disruptive force like AI has been released into the art world. From Daguerreotypes in the mid-nineteenth century to the invention of the digital camera to editing softwares like Adobe Photoshop and Illustrator, what we once deemed as traditional art has seen a multitude of changes as we shift to the modern world. Art dons a different purpose in each era and movement, adapting novel meanings as we advance in our ways of thinking. Art is never constant. It is reflective of the human condition and our particular context, and this allows the nature of human creativity to persist.