The word on the street treats Artificial Intelligence as a brand-new technology, but that rests on a critical misunderstanding: it has been here all along. So what is different now?

By Liz Ramsey, Grafill

I grew up in the nineties, an era of textbooks with kids riding keyboards like skateboards into a neon metropolis that represented the internet. Every one of those illustrated children had a look of jubilant awe glued to their face - it was impossible not to be swept up in the excitement this new technology brought to our fingertips.

My family was lower class. My father was a garbage man, and his superpower was his ability to find things that still had some kick and fix ‘em up. The things people would throw away! We had a pool, a slide for the pool, a menagerie of rust-riddled cars, boxes of granola bars only a few days expired, an army of old appliances and machines in various states of repair, and - my most prized possession - an old computer. Don’t misunderstand me, this wasn’t a good computer. It had no internet, no programs, and everything ran off of huge 8-inch floppy disks, of which we had only three. My favorite was ‘The Print Shop’, which kickstarted my fascination with design. Imagine it: I, a six-year-old girl, could make greeting cards for my mom at Christmas or invitations to my birthday party all by myself! My parents were so proud - their child was living in a dream era where things were possible.

This remains true today. Digitalization and global technologies have made the world more accessible than ever before. I can, at this exact moment, order a Pikachu toy directly from Japan. I can search for flights for a summer vacation to Madagascar in mere seconds, using advanced AI algorithms that I don't even need to know exist for them to work. I can call my trash-loving dad back in the USA, with the signals going to space and back faster than it took to read this sentence.

Technology is synonymous with today’s world, and outside of a global catastrophe like a massive solar flare or a zombie outbreak, this isn’t likely to change. Especially after the pandemic, when a great digitalization swept across nations, we are more tightly connected to the digital world than ever before. Elon Musk argues that our technological enhancements are so interwoven with our lives that we cannot live without them. “Most people don’t realize they’re already a cyborg. That phone is an extension of yourself,” said Musk. “You have more power in your hand than the president of the United States had 20 years ago.”

Although I am an internet kid, developing and evolving alongside the world wide web like science fiction siblings, I make no claims to be an expert in digital fields. But I do know design (thanks, Print Shop), and I am handy enough with a search engine to pretend to understand what is happening.

So. What is happening?

Liz as a young, nerdy child. (Photo: Liz Ramsey)

Liz's happy father amongst his broken machines. (Photo: Liz Ramsey)

What is AI? #

Before we begin, I’d like to clarify that this text focuses on image generators and their effect on designers. I make no attempt to broaden that scope by including things like self-driving cars, Boston Dynamics robots doing backflips, or facial recognition for Instagram filters. Much of this information may also not pertain to Norwegian law, where I live and work, but rather references international laws that apply where the technology is being developed.

Art and aesthetics were never easy to explain, and digital technologies have only complicated matters. Where we once thought that only labor-intensive or entry-level jobs were being affected, programs like DALL-E, Midjourney, and Stable Diffusion - developed under the mantra of democratizing image creation - have since forced their way into nearly every conversation in the creative sector. There are zealous arguments on both sides: those who are incredibly excited about these new programs versus those who see them as a very palpable threat. What is perhaps most interesting is that, despite our human instinct to polarize a controversial topic, the “gray area” in this discussion is huge. That is most likely due to a very non-controversial fact: AI is so very, very complicated.

Artificial Intelligence (AI) is an umbrella term for any digital technology built to mimic organic life - not just humans, but plants and animals as well. A subcategory of this is machine learning, where developers teach machines to learn using explorative algorithms. A type of machine learning is deep learning, where information is passed through layer upon layer of processing, comparing vast amounts of data in order to create new outputs.

A lot of this mirrors how human brains work - when we are children, we are shown images of an apple and then told that this is, indeed, an apple. Once we grasp that concept, we are given a pile of plastic fruit and told to find the apple, and perhaps the banana, then perhaps the carrot. Wait - a carrot? That isn’t a fruit. That goes in that other pile with the broccoli and potatoes. A machine learns in the same way - categorizing new examples based on known information.
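
To make that analogy concrete, here is a minimal sketch in Python of the “sort the plastic fruit” exercise: a toy nearest-neighbor classifier that has seen a handful of labelled examples (the picture-book stage) and then labels items it has never encountered. Every feature and number here is invented for illustration - real systems learn from millions of examples, not six.

```python
# A toy "sort the fruit" classifier: learn from a few labelled examples,
# then label new, unseen items. Features are invented for illustration:
# (roundness 0-1, length in cm, sweetness 0-1).
examples = [
    ((0.9, 8, 0.8), "apple"),
    ((0.8, 7, 0.7), "apple"),
    ((0.2, 18, 0.6), "banana"),
    ((0.3, 20, 0.7), "banana"),
    ((0.1, 17, 0.3), "carrot"),  # not a fruit - it gets its own label
    ((0.2, 15, 0.2), "carrot"),
]

def distance(a, b):
    """Plain Euclidean distance between two feature tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(item):
    """Label a new item by its single closest known example (1-nearest-neighbor)."""
    closest = min(examples, key=lambda example: distance(example[0], item))
    return closest[1]

# Two pieces of plastic fruit the "child" has never seen before:
print(classify((0.85, 7.5, 0.75)))   # -> apple
print(classify((0.15, 16.0, 0.25)))  # -> carrot
```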

The differences between a human brain and a machine are minimal. The most notable is that machines are trained to do highly specific tasks with highly specific datasets - like sorting fruit. If you ask that same machine to also vacuum your floor, it won’t be able to. We, on the other hand, receive most of our knowledge through context. Although many humans have formal schooling and teaching aids like picture books and educational YouTube clips, the vast majority of what we understand comes from what we absorb from our surroundings and by watching others - dialect and jargon, social skills, body language, or problem solving. Like a child mimicking their parents cooking dinner, we see, process, evaluate, and learn without needing a prompt to do so. And we do this all the time with everything. 

The other difference is that machines are far faster at processing than we are. The textbot on everyone’s lips, GPT-3, was trained on roughly 400 billion words, mostly taken from the internet. At the average human reading rate, it would take someone nearly 4,000 years to read that much text, and our comprehension rate would be much lower.
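
For the curious, the back-of-the-envelope arithmetic behind that figure looks roughly like this, assuming an average reading speed of about 200 words per minute and reading around the clock:

```python
# Rough arithmetic behind the "nearly 4,000 years" figure.
words = 400_000_000_000          # ~400 billion words of training text
words_per_minute = 200           # a typical adult reading speed (assumption)

minutes = words / words_per_minute
years = minutes / 60 / 24 / 365  # reading non-stop, day and night

print(f"{years:,.0f} years")     # roughly 3,800 years
```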

Similar to human brains - and particularly smart dogs - AI is also trained with reinforcement learning: if you do something good, you get a reward. This constant feedback is the biggest reason why AI image generators have become such a hot topic today, when only a few years ago the images they created were creepy, meme-worthy replications of children’s drawings. Every time we use these programs, we give them more information about what works and what doesn’t, which fuels further learning and produces better and better work.
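
As a hedged sketch of that feedback loop, the toy Python program below picks between two invented output “styles”, nudges its preference up whenever a simulated user gives a thumbs-up, and down when they don’t. Real systems adjust millions of parameters rather than two numbers, but the reward-driven principle is the same.

```python
import random

# Two imaginary output "styles" and the model's current preference for each.
scores = {"creepy-meme-style": 0.5, "polished-style": 0.5}

def user_feedback(choice):
    """Simulated users: they reward the polished style 90% of the time."""
    return 1 if choice == "polished-style" and random.random() < 0.9 else 0

learning_rate = 0.05
for _ in range(500):
    # Pick a style roughly in proportion to how well it has been received so far.
    choice = random.choices(list(scores), weights=list(scores.values()))[0]
    reward = user_feedback(choice)
    # Reward nudges the preference up; no reward nudges it down.
    scores[choice] += learning_rate * (reward - scores[choice])

print(scores)  # the polished style ends up with a much higher preference score
```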

Terms like machine learning or deep learning are sub-categories of the field of artificial intelligence, which aims to teach machines to mimic behaviours of organic life. (Illustration: Liz Ramsey)

Prompt - the new buzzword #

Most text-to-image generators work by translating a series of words or phrases given by the user into an image. This series of words is called a prompt, and it is where human creativity shines during the process of AI image generation. To get a good image, we need a good prompt, and this requires that the AI understands both the text (input) and the image (output) and can reliably match text data to visual data.

If I type in “Man eating an apple”, I will get an image of a humanoid figure with a sphere-shaped object close to its face. If the machine didn’t understand these terms (man, eating, and apple), or couldn’t connect them correctly to the visual elements that relate to those terms, I could end up with an image of an airplane riding a banana instead. Training these machines to understand shapes, colors, context, and everything that makes an image an image requires massive collections of data - called datasets.
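
Under the hood, these models learn to place texts and images in a shared mathematical space, so that “man eating an apple” lands close to pictures of exactly that and far from pictures of airplanes. Below is a minimal Python sketch of that matching step, with tiny made-up vectors standing in for the real learned embeddings, which have hundreds of dimensions and come from training on millions of image-caption pairs.

```python
from math import sqrt

def cosine_similarity(a, b):
    """How closely two vectors point in the same direction (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Made-up four-number "embeddings"; real models learn vectors with hundreds of
# dimensions from millions of image-caption pairs.
prompt_embedding = [0.9, 0.1, 0.8, 0.2]  # stands in for "man eating an apple"
image_embeddings = {
    "photo_of_man_with_apple.jpg":  [0.85, 0.15, 0.75, 0.25],
    "photo_of_airplane.jpg":        [0.05, 0.95, 0.10, 0.90],
    "photo_of_banana_on_table.jpg": [0.30, 0.60, 0.20, 0.70],
}

# The matching step: pick the image whose embedding sits closest to the
# prompt's embedding in the shared space.
best_match = max(image_embeddings,
                 key=lambda name: cosine_similarity(prompt_embedding, image_embeddings[name]))
print(best_match)  # -> photo_of_man_with_apple.jpg
```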

So where does this data come from? The answer lies in techniques called data crawling and data scraping. Data crawling is when a program “crawls” across the internet and simply gathers data on what is available - for example, a crawler may record that a given site has 100 links, 47 images, and 6,427 characters. It is unbiased and is just indexing information. Then, once that data is available and a human decides what is valuable, a scraper bot goes in and records it in more detail. If you have ever copied and pasted something, you have basically done the same job as a data scraper. It takes the URLs of images - not the actual images themselves, which is an important distinction - along with the metadata associated with each image (think date published, alt-text, SEO keywords, that sort of thing), and puts it all in a library. Outside of the initial instruction to “archive images”, there is no human directing these actions. They aren’t specifically targeting one artist or one website unless prompted to do so.
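
As an illustration of the scraper’s half of that job, the short Python sketch below (using only the standard library) reads a single page and records the address and alt-text of each image it finds - never the pixels themselves. A real pipeline would run this across millions of pages discovered by a crawler; the page address here is just a placeholder.

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class ImageMetadataScraper(HTMLParser):
    """Records the URL and alt-text of every <img> tag - never the pixels themselves."""

    def __init__(self):
        super().__init__()
        self.records = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attributes = dict(attrs)
            self.records.append({
                "url": attributes.get("src"),       # where the image lives
                "alt_text": attributes.get("alt"),  # the text a human wrote about it
            })

# Placeholder address - swap in any page you are allowed to index.
html = urlopen("https://example.com").read().decode("utf-8", errors="ignore")

scraper = ImageMetadataScraper()
scraper.feed(html)
print(scraper.records)  # a tiny "library" of image URLs and their metadata
```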

This is common practice in nearly every field, and you, dear reader, have already benefited from data mining at any given moment of using the internet. It is also legal in most countries; many courts have ruled that any information willingly placed on the internet, and not behind a paywall, is up for grabs.

LAION, the most widely used image dataset available today, has worked tirelessly to clean its data. Its images carry meticulous metadata connecting each image to its intended text. It works similarly to hashtags on social media; when I tag my illustration with “pen and ink”, “Liz Ramsey”, and “colorful”, I am doing these image generators a favor by telling them exactly what kind of text accurately relates to this image.
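
In practice, each entry in a dataset like this boils down to a small record pairing an image’s address with the text humans have already attached to it. The field names and values below are invented for illustration - they are not LAION’s actual column names:

```python
# One illustrative dataset entry: an image's address paired with the text a human
# already attached to it. Field names and values are invented for illustration.
entry = {
    "image_url": "https://portfolio.example/inktober-fox.png",
    "caption": "Pen and ink illustration of a fox, colorful",
    "tags": ["pen and ink", "Liz Ramsey", "colorful"],
    "width": 1200,
    "height": 1600,
}

# The caption and tags are exactly the "favor" described above: they tell a
# generator which words accurately describe this image.
print(entry["caption"], entry["tags"])
```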

Unsurprisingly, image-hosting websites are the main sources of image scraping. Sites like Pinterest, Wordpress, Fine Art America, Squarespace, Etsy, Shutterstock, DeviantArt, and ArtStation were scraped heavily; 47% of the 6 billion images in the LAION dataset came from only 100 domains. And this excludes big companies like Meta (Facebook and Instagram), Google, and Adobe, which are building their own datasets for their own AI image generators.

Because many of these images come from sites creatives use to show their work, it is very common to see human touches in AI-generated images, such as signatures or the “artist’s studio” in the background. This isn’t the result of a computer copying specific elements from originals, or purposefully trying to become human like a dystopian Pinocchio, but rather a result of the immense number of images posted online that are signed, or that feature a pen delicately placed over the final image for presentation purposes.

AI image generators do not copy and paste, nor do they collage images together. The AI is actually learning, at a pixel level, the correlations and patterns needed to create brand new works. They do not take a piece of one picture and overlay it with another. They look at and compare millions of images, recognize similarities - guided by user feedback about what works and what doesn’t - and put it all together one dot at a time. This is what many pro-AI artists argue: that although the data used to create these images comes from someone else, the images created (guided by their human hand) are completely new.

As I mentioned before, this is a complicated issue with pros and cons on both sides. It is completely valid to be unsettled by the implementation of AI into our lives. Most of what we humans fear falls into two categories: 1) the other humans - and their intentions - who have developed these programs, and, perhaps more simply, 2) the existential implications for what humans, reality, and life really are.

Because AI is such a complex topic, Grafill hosted an informative evening to discuss the new technology. (Photo: Ebba Morlin)

Liz lecturing about AI at Grafills Hus. (Photo: Ebba Morlin)

Legal and ethical implications, and what is being done #

Legal frameworks are notoriously difficult to summarize in black-and-white terms, and AI is no exception. Lawyers around the globe are still scratching their heads over all the legal implications of AI, and since this technology is developing at lightning speed, a lot of what I mention here will probably be outdated as soon as it is published. There are a number of court cases underway that aim to set precedents for the use of AI and how its data is obtained, but I won’t go deep into any particular case, as things are still so very unclear.

Let’s start with what most people agree on: transparency. You’ll be hard pressed to find even the most passionate tech user who doesn’t want to understand how something works. Despite many of these technologies, such as the LAION image dataset, being open source, the companies using them are notoriously tight-lipped about their uses and intentions. Many of these non-profit projects are later sold off for decidedly for-profit purposes, including police and military applications with controversial or unethical ends. Stability AI, the company behind Stable Diffusion, is currently valued at around $1 billion USD - a striking example of how much profit can be built on top of work labeled as non-profit.

As with the human brain, we do not truly understand how AI works. We understand what happens at the beginning and the end of an image generation: I put a prompt in and art comes out. But between those two points, for better or for worse, it is still magic.

Even the datasets are like a black box. Although strides are being made to understand where images have come from, it is still a challenge to pin down precisely what data exists in these datasets and where the images were mined from. Spawning.AI is an advocacy group working to help users understand AI and is behind the “HaveIBeenTrained.com” site, where users can check whether their images are included in datasets. It is working closely with Stability AI to implement an opt-out feature for future models. There are also tools, such as Glaze, being developed to subtly alter published images so that scraped copies are less useful for training, helping designers fight back against image scraping.

Privacy is a chief concern for most digital tools. Future developments of AI focus on making datasets bigger and the data they contain more precise, but understanding them better is not a priority. Approximately 2.9% of LAION’s images were deemed “unsafe”, meaning they contain explicit gore or pornography. One woman even discovered that private medical photos taken by her doctor were included in these sets. Since these scrapers do their jobs autonomously, with no regard for what the images contain, finding the cracks through which personal and private images slip in is a herculean task. Some of these image datasets offer an “opt-out” service, but there is no reliable way of confirming that the removal has actually been done. Simply put, the era of knowing who sees and uses our images is over.

Regulation is often what splits the techies from the technophobes. In 2023, there are very few companies with simple structures. Gone are the days of small mom-and-pop shops with a boss, an accountant, and some workers. Companies driving these technological shifts are international, with parent companies and subsidiaries, investors and lobbyists, and multi-headed hydra projects with 8 digit budgets.

If AI does something wrong, who is responsible? Is it the investor companies? The engineers or programmers developing the code? The non-profit open-source datasets? The end user playing with these tools?

There are some who believe this is by design - that these image generators are a smoke screen for data laundering - but the truth is much scarier: regulation is a logistical nightmare, decades behind. Imagine a box of Christmas lights, wires and bulbs wrapped impossibly tight, and when you finally follow one strand to its end to even start unraveling the mess, you find yourself in China. That is the kind of tangle we are talking about. Generations of development led by sentiments like “we don’t want to regulate too soon lest we stifle growth” quickly translate to “we won’t fix this unless it becomes a problem”, which in turn leads to an inability to predict the challenges and pain points of negative societal impacts. The field is driven by hype, not caution, which means we are swayed by potential positive applications rather than prioritizing safe and steady growth.

An illustration given the prompt: "a group of creatives with pencils and paint brushes, some have ipads or laptops. Some are drawing in a sketchbook. All look happy and are creating beautiful artworks. colorful, vibrant, stylized, good quality, well drawn, realistic, diverse, beautiful, energetic" (Illustration: Liz Ramsey / Midjourney)

Can passion be replaced? #

What makes AI scary for designers, outside of the small matter of potential societal collapse, is the fact that their passion is being taken from them and that the world doesn’t seem to care. Some say marketing, accounting, and project management are all fine to outsource to machines, but that creating art should never be handed to something that cannot live. Many creative works are closely linked to their designers’ lived experiences and the unconsciously curated preferences cultivated from living those lives.

When I use thick black strokes in my illustrations, it is because I like Mike Mignola’s work. I first heard of Mike Mignola through Hellboy in 1996, while in elementary school. I saw these comics when my local library expanded their collection to include comics, which was unheard of in rural USA back then. I was often at the library because we were poor, and books were free. The whole family would go together and each get their own stacks - me with my comics, and my dad with his guide to fixing 1978 Corvette motors. So when I draw harsh pen-and-ink lines in my work, it is in remembrance of my trash-loving dad who sat next to me at the library when I was a child getting lost in the fantasy of a badass crime-fighting demon. Many designers have a similar story, compounded by decades of full-time dedication to honing their craft, often without appropriate compensation or respect. And now creative lives are being compressed into algorithms with little regard for how that data was gained or used, while the creatives who object are called nasty names like “cowards” or “stingy”.

I understand why some people are angry. I empathize.

However, we cannot tackle this giant without taking a close look at all the ways AI has benefited our lives and will continue to do so. It is important to note that these tools are neutral; the technology has no desire to do anything other than what it was designed to do. Most of the tools we use today have put someone out of a job. We can both agree that the spell-check I used while writing this article - absolutely a tool built on AI - has saved us both a lot of head scratching. The website on which we publish this article replaces the Dickensian hard-working newspaper boy. There is incredible potential to save lives with the cutting-edge technologies being developed today, and that cannot be discounted or dismissed. In fact, an estimated 65% of the children entering primary school today will ultimately end up working in completely new job types that don’t even exist yet - the vast majority of which will be digitally based.

This requires a moment of pause to consider for both sides.

We at Grafill promise to continue monitoring developments, hosting informative classes and social discussions, and listening to your thoughts and wishes. As we navigate an uncertain future with a member base filled with folks on both sides of this tangle, we will continue to be an arena where everyone is seen and heard.

And we will do it together.

-----

Have an opinion? I'd love to hear what you have to say! Get in touch at liz@grafill.no

The Basics of Artificial Intelligence for Creatives (01:51:15)

Did you miss Liz's lecture about AI in February? Here is the recording of the event for you to check out.

Main image: Illustrations created using the AI image generator Midjourney. (Illustration: Liz Ramsey)