I Need to Talk About AI Again

If you’ve read my blog before, well first of all, thank you! But secondly, you may know I’ve written about AI in the past as well. My first look into the topic was hopeful but wary. Essentially, I was initially inspired by Yuval Noah Harari’s Homo Deus: A Brief History of Tomorrow. What was most exciting about that book was Harari’s optimistic outlook on how AI could tremendously help and advance medicine and science. This is still something to be excited and hopeful for. AI has the potential to move humanity forward in a wonderful way.

In that same post, I also wrote that many artists had noticed their art had been used without their permission to train the AI. But I know now that even that framing is misleading, because AI isn’t actually learning to draw; it’s just mashing other people’s art together. It’s stealing. But I still ended that post with a touch of hopeful idealism, a warning that personal data is really what’s at stake in all this, and a push to be more cognizant about how we use AI moving forward.

Almost a year later, I wrote again about AI, but this time my tone leaned more towards annoyance than optimism. That frustration came from the ubiquity of AI on stock image sites. Despite my more exasperated tone, my thought was, “It ain’t goin’ anywhere, so let’s try and figure out how to live with it.”

Now here we are again.

Let me say, my thoughts and opinions on generative AI have changed considerably in the last few years. The key phrase there, though, is generative AI. All the hope I had in regards to Yuval Noah Harari’s book remains firm in what is now more specifically referred to as non-generative AI. In those areas, AI still has the potential to do some real good, and in a lot of ways, it is. The popular genetic testing service 23andMe uses machine learning to help its users discover whether they may be more susceptible to certain diseases, as just one of the many additional perks you can opt into. This information doesn’t mean you have a certain disease or that you’ll definitely be diagnosed with it sometime in the future. It is, however, a means to take preventative steps to help reduce any risk you may have in relation to that specific disease. There are more problematic issues with how convoluted and difficult those terms of service and privacy policies are to understand, but pretending for just one hot second that capitalism is a caring and even altruistic system and that all of us get along happily, this idea is straight out of the pages of Homo Deus.

Generative AI—i.e., putting a prompt into a text box where AI then quickly creates an image, video, or even music—is none of that. It’s straight up theft. It’s not creative; it’s just mashing up other people’s art into an unholy amalgamation of mostly really, really messed up stuff. If what you put out into the universe is what you get back, generative AI is boiling, hateful sewage. I hope you had a light lunch!

Now to be fair, there is a very small number of artists who have started training AI on their own art to assist with their own efforts. This can include coloring line art (based on their specific styles and desired color palettes), refining rough concept sketches, or even changing the lighting and light source. Many will view this as just the newest thing that other people will incorrectly call out as cheating. Is digital art, art? Is an Apple Pencil the same as a paint brush? Is smart shaping that different from a protractor, compass, or set of French curves? They are all tools that assist the artist, and while they certainly help and make life easier, I can tell you firsthand that if you can’t draw to begin with, none of those tools are going to competently allow you to “fake it.”

Illustration by me

Filling in a text prompt and then checking TikTok for the few seconds it takes a generative AI program to give you some options based on said prompt is about as much of a tool as claiming that turning on your gas stove makes you Prometheus.

In that second post I wrote about AI, I noted that big name corporations like Adobe use good faith clauses to only allow generative AI to learn from their own curated stock. But many (if not most) generative AI programs are set loose to train on everything on the World Wide Web without permission, or at the very least with that detail buried in their user agreements. There are a whole lot of problems with that which are terrifying if you give them even a second of thought, but what is starting to happen now is inbreeding, where generative AI is learning from what it has already put out!

There are so many other creatives who have pointed this stuff out, and in a much more cohesive fashion with excellent research. I highly recommend watching both Drew Gooden’s video AI is ruining the internet and Gabi Belle’s video Everything Wrong with AI.

But the truth of the matter is that this blog post—as hopeless and depressing as it is—serves as my attempt to return to posting here more regularly. I’m excited to start writing more, and I hope that you’ll check out my future posts as I continue forward. I promise they will not be this bleak!

I’m also going to try and start posting more artwork again! How frequently that happens will probably not be as consistent as I’m hoping to be with this blog, but when I do it will definitely still be on Instagram. I’m not really loving X at all anymore (who is?!), but for now I’m still there @davehulteen. I’m also going to give TikTok a try, and I just joined Substack as drawingdave, so let’s try and connect!

How Much A.I. is Too Much A.I.?

I don’t blog enough to be topical or cover current trends, and I like to keep things light and funny, or positive and optimistic in general. In fact, at the end of 2022 when I actually tried to be topical and write about artificial intelligence, I did it from a pretty altruistic standpoint, with the hope that working with the changing landscape (accepting that a.i. isn’t going anywhere) is a better approach than railing against it. At my core, I still believe that, and I’ll put a pin in it for now, but today I’d like to revisit a.i. with a little more experience after having had the wonderful opportunity to attend the Adobe Max Conference in Los Angeles.

At the Adobe Max Conference, October 2023, where I got to meet Aaron Draplin!

I say, “a little more experience” because not long after my initial blog post about a.i. in December, I followed it up in April of this year (I told you I don’t blog enough to be topical!) with a post detailing my impressions of using ChatGPT and Dall–E, which very quickly became dated after both platforms made serious technological advancements. I think for most people at that time, a.i. was still a fun and funny little moment of pop culture that humorously added to the zeitgeist with things like Keaton Patti’s bananas Olive Garden commercials and Trump rallies—all created by feeding hours of those particular brands’ original content into a.i. bot programs. Maybe the undercurrent had a slightly worrisome tone about the inevitability of robot overlords, but it was still relatively light–hearted and quaint. Who could have ever imagined we’d be pining for the simpler times of 2022?

Of course, like most technological milestones, once something starts to get traction, it really takes off. Granted, a.i. has actually existed since the mid 20th century (starting with the Perceptron Mark I in 1957), but it really started to generate public interest in the 80s with the goal of revolutionizing computer processing. It’s directly because of this that artificial intelligence has thrived recently—not so much because tech geniuses have learned more about a.i. themselves, but because computer storage, memory, and speed have increased beyond what most average consumers even need. In 2023, Dall–E released its latest text–to–image model featuring significantly more nuance and detail, and ChatGPT became the fastest growing consumer software application, now offering its GPT–3.5 engine on a freemium model while ChatGPT Plus offers its GPT–4 engine for $20 USD a month. If that’s not impressive enough, as of this post, ChatGPT itself isn’t even a full year old yet, having launched in November of 2022! In March 2023, Adobe released its generative a.i. tools for Photoshop in beta. Today, those a.i. tools are fully integrated into Photoshop, and Firefly a.i. is in beta for Adobe Illustrator—creating editable and functional vector illustrations—as well as for Adobe Premiere Pro, which offers beta tools for speech–to–text editing of videos. It is insanely easy to use and saves lots of time.

All of these images on Adobe Stock are generative a.i.

Let’s step back and address a couple of quick bullet points I made previously regarding a.i.’s learning technique. Initially I made a mistake and thought the process was called stable diffusion, but that is in fact the name of a latent diffusion model released by a company called Stability AI. Artificial intelligence uses machine learning to develop a deep knowledge of whatever subject it’s tasked to create. I’m cutting out a lot of context here for the sake of brevity, but imagine a robot with the capacity to instantly read every single book on a particular subject so that it can then use that knowledge to compose its own creation based on that immediate education. It only has the information it has acquired, so it bases everything it can do on that information alone. I read hundreds of textbooks about various software programs when I was in school, but almost all of it felt useless after I actually started working and realized experience was vastly more important. As a result, most of my initial professional work looks ridiculous, much like what a robot that only learned from reading, and never from doing, might produce.

Now imagine that same robot is instructed to paint a masterpiece, but in order to do that, it has to visually take in every painting currently on display in the Louvre. So it rushes around the museum and sees works from da Vinci, Géricault, Michelangelo, and Jacques-Louis David. Afterwards, you ask the robot to paint its own masterpiece, but upon completion you notice it hasn't really painted anything original; rather, it has cut and pasted elements like the Mona Lisa's smile or Roman columns from the Oath of the Horatii. It's specifically these issues that have a lot of creative people feeling pretty upset, because the robot hasn't actually created anything, it's just stolen components from others. But then you also notice that because the a.i.'s creators have basically told this robot to go out and learn everything from the internet, you start getting into really troublesome areas, because if you haven't noticed, there's some pretty horrible stuff online and it's not exactly hidden either. So now on top of being an art thief, the robot has also learned to be racist, sexist, and creepy. You know, like actual real life people.

Again, I'm really compacting a lot here to keep things from getting bogged down in technical jargon, but these are real concerns that have led companies like Adobe to make serious public efforts to proactively promote responsible guidelines for generative a.i. learning and sharing, as well as to protect intellectual property and reflect diversity in a positive way. For the most part, this corporate responsibility and good faith approach has been necessary, not just because of the reactions from creatives worldwide, but because of the accountability these types of organizations can be held to legally. Remember, theft of any kind is generally frowned upon.

So now we get to the Max conference, where the undeniable star was Adobe's generative text–to–image a.i. program, Firefly. All over the conference floor, in its classrooms and displays, and promoted heavily at each keynote session were strikingly beautiful images all created by artificial intelligence. What Firefly offered was so prominent that it sometimes felt invasive. Is it cool, and will it save designers from doing tedious things like masking, editing, and rough concepts that require super quick turnaround? Oh man, you bet your a$$ it will. But when one classroom speaker jokingly noted at the beginning of his session that it was the only conference event not promoting any new artificial intelligence tools, he was met with a roaring standing ovation.

Adobe competently showed that its a.i. deep learning methods were trained solely on its own library of photographs, illustrations, images, and graphics (pretty much its entire stock library) and that its engineers were working tirelessly to integrate guardrails so that diversity and inclusion were represented equally and respectfully. So there you go! Problem solved. You can stop worrying now about everything. Robots are kind, love is love, intellectual property theft is a thing of the past, and the system works great.

Even if all that were true, there's still a hiccup or two. Now put your personal feelings about a.i. aside for just a bit and let's pull that pin out regarding my optimistic outlook from earlier. The cold hard fact is that a.i. isn't going anywhere, and just by comparing where it was a year ago to where it is today, it's pretty obvious that what a.i. can create is only going to get more impressive, and it's going to be up to everyone to ensure it's guided properly, safely, and responsibly. Up until the Max conference, I could be heard saying, "Man, if this is where it's at now, imagine what it'll be like in ten years!" But that kind of thinking was already outdated when I first wrote about a.i., especially when Adobe expects user-generated a.i. images to increase five times over in the next three years from a total that is already over 15 billion!

So while ethically we have a lot of work to do, I think it's also fair to say that we're still pretty early on in this saga and we're already experiencing some pretty heavy a.i. fatigue. To that point, I'd like to redirect your attention to stock images. I love stock images, like I adore stock images, and I've written about them before (and it's a funny article I'm really proud of too), but because I almost exclusively use Adobe Stock, and because Adobe Stock is the epicenter of Adobe a.i.'s learning process, it's kind of saturating the store with its own product and nothing else. Imagine going to your local grocery store, and all they sold was their brand of corn flakes. No produce, no deli, no butcher; just aisle after aisle of varying sized boxes of store brand corn flakes and nothing else. Because Adobe's generative a.i. has come so far and gotten so good, it offers generative a.i. images as stock image options. Originally you'd see one or two pop up, then it became the majority of what was offered. Now, depending on what you're looking for, a.i. generated images can be all that's available.

Real quick, if you haven't read my post on stock image sites: they provide designers access to photos, graphics, templates, or illustrations they wouldn't otherwise have time to create themselves. Creating an ad for a new coffee chain? I can search for something like "Friends enjoying coffee together in a cafe" on a stock image site just like I might look for something in a search engine, and I'll get various results that will hopefully match the look and vibe I'm going for.

So just to clarify, I don't have an issue with a.i. generated images. The quality is really good and getting better. There will be times when an a.i. generated image is much better than anything else the stock image site is offering, but it's frustrating how much of it there is. When searching for "Friends enjoying coffee together in a cafe", the language in the search itself refers to a very human experience. The generative a.i. image is good, but it's not perfect. Plus, I have a subconscious bias that (at least in this instance) using something that was not created by humans, but that is supposed to represent a human experience, does not feel like a genuine, intentional choice to encourage others to buy into the design I'm making.

This image was generated by a.i. Can you tell?

I realize there's so much to unpack there. I'm using a computer to design this resource; is it really that bad that I'm asking a computer to create one additional element of it? Will future designers be less likely to have such a bias if they grow up understanding how ubiquitous this technology is? How "human" does an image have to be to properly reflect a human experience when we're all already so familiar with the shorthand cues of seeing people in such settings—photographed, illustrated, or otherwise—that the connection is established the moment we see it?

For their part, Adobe has a very clear and up-front policy regarding its ethical standards and practices for posting, hosting, sharing, and creating generative a.i. images. Now, people will point out that a lot of that responsibility falls heavily on users respecting that system, and that, I personally believe, is at the heart of all of this and what creates the endless loop of debate surrounding it. Policing how people play with this technology may be noble, but it's also antithetical to creativity in general. I'll repeat it again:

Artificial intelligence is not going anywhere and will continue to develop, but a.i. is a mirror reflecting back on the people who use it, interact with it, and engage others with it. Just like the world we live in, what we put in will be synonymous with what it gives back.

A.I. images generated from Dall–E 2 and Firefly using the prompt, “Renaissance painting of a black cat in a pink tutu”

But what do you think? Is a.i. the cool new future that will bring the world peace and prosperity? Is this the beginning of the end? Will a.i. steal jobs from creative people as well as blue and white collar workers? Or am I just fixated on the loving memory of our cat Destiny and want a.i. to realize how special she was like my wife and I do?

Our actual (late) cat Destiny killing it better than any artist—a.i. generated or otherwise—ever could. Rest in Peace, sweet girl.

Thanks so much for stopping by, and I really would love to hear your thoughts on this. Please follow me on Instagram and let me know! Also like last time, here are more sources as well as some other great articles about a.i.:

A.I. Blog Post Fun Time

Last year I wrote a blog post about A.I. that dealt with how stable diffusion led to claims of art theft. I’m really proud of that article, and you should totally read it here. In short, I took a different stance than most of my peers and tried to approach the subject from a slightly less biased angle, looking at the idea of encouraging other artists to embrace A.I. or, at the very least, to try and find a way to live alongside it.

Today I thought I’d put my money where my mouth is and make a post almost completely made by artificial intelligence. The images are all created by Dall–E 2 and the post is written by the controversial Chat GPT. Now I’ve messed around with Dall–E 2 before, but this is my first time using Chat GPT. I figured the main reason I started this blog was to reflect how I have grown as a creative and as a father/uncle, and how kids help foster creativity in adults. So it seemed apropos that I should ask Chat GPT to tackle that very same thing and then pepper in images from Dall–E 2 with prompts like “creatively stifled adult finding inspiration by drawing simple pictures with a child” and “a children's drawing of a busy, frustrated adult.” Here’s how it went:

Chat GPT: Hello! How can I help you?

Dave: Please write a blog post for my website https://www.davehulteendesign.com/blog

Chat GPT: Sure, I'll be happy to write a blog post for your website. What topic would you like me to write about?

Dave: Finding creative inspiration by drawing with children

Chat GPT: Great topic! Here's the blog post:

Finding Creative Inspiration by Drawing with Children

As adults, we often get caught up in the daily grind and struggle to find creative inspiration. Whether you’re a professional artist, graphic designer, or just someone who enjoys doodling in their free time, it’s important to find ways to stay inspired and motivated. That’s why drawing with children can be such a magical and fulfilling experience.

Children have a unique perspective on the world, and their imagination is boundless. They see things differently than we do, and their creativity is not yet limited by experience or practicality. When you draw with a child, you’re invited into their world, and you get to see things from their point of view. This can spark new ideas and perspectives, and help you see your craft in a whole new light.

But drawing with children is not just about finding inspiration. It’s also about connecting with them on a deeper level. Drawing is a form of communication, and when you draw with a child, you’re sharing a moment of creativity and expression. This can build trust and foster a deeper relationship, as you’re creating something together that is unique to your bond.

So, how do you go about drawing with children? Here are a few tips to get you started:

  1. Choose a simple subject: Pick a subject that’s easy to draw, such as an animal, a flower or a tree. Avoid complicated shapes or structures, as it may be too overwhelming for a child.

  2. Let the child take the lead: Allow the child to choose the subject, the materials, and to create the initial sketch. This will empower them and make it more fun for them.

  3. Encourage experimentation: Children love to experiment, so encourage their ideas and let them explore different colors and techniques. Be open to their suggestions and don’t be afraid to try new things yourself.

  4. Make it fun: Keep the session light and fun, without any pressure to create a masterpiece. Think of it as a playful exercise in creativity, not a high-stakes art project.

Drawing with children can be a truly rewarding experience. It allows you to tap into your own inner child, while connecting with a younger generation. It’s a chance to create something beautiful together and to inspire each other to keep on creating.

So next time you’re feeling stuck or in need of a fresh perspective, grab a piece of paper and some crayons, and draw with a child. You may be surprised at what you’ll learn, and the inspiration it may spark.

Okay, Dave here again, and aside from being a little bland? I gotta say I’m impressed! Especially the 4th tip to “keep things light and fun” and not think of it as “a high–stakes art project.” The Dall–E 2 images are also not too shabby! I especially love this one; look at that kid’s face! He’s so angry!

Dall–E 2 generated image
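For anyone curious, none of this has to happen in a browser tab either. Below is a rough sketch of what the same experiment could look like scripted against OpenAI’s Python SDK. To be clear, this is not what powered the post above; the model names and prompt strings are just illustrative assumptions on my part.

```python
# A minimal sketch (not what actually powered this post): asking OpenAI's API for
# a blog post draft and an illustration. Model names and prompts here are
# illustrative assumptions, not the exact settings I used in the web interface.
from openai import OpenAI

client = OpenAI()  # expects an OPENAI_API_KEY environment variable

# Ask the chat model for a draft, the same way I typed the request into Chat GPT.
draft = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Write a blog post about finding creative inspiration "
                   "by drawing with children.",
    }],
)
print(draft.choices[0].message.content)

# Ask the image model for an illustration from a text prompt, Dall-E style.
image = client.images.generate(
    model="dall-e-2",
    prompt="a children's drawing of a busy, frustrated adult",
    n=1,
    size="1024x1024",
)
print(image.data[0].url)  # link to the generated image
```

Same inputs, same outputs; the text box is really just a friendlier wrapper around calls like these.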

The reason I was really curious about trying this out was A: it seemed like an easy layup that would require minimal work on my end (meh, not so much), and B: I’ve been seeing lots of ads for A.I. writing blog posts. I wanted to dig just a little deeper and find out if this is widespread, and the answer seems to be, maybe?

In an August 2022 article on Story Chief Insights, Dayana Mayfield writes that search engines like Google tend to be negatively biased towards A.I. written posts because they are very derivative, don’t represent a brand’s specific voice, and can produce false information. Most bloggers who do use A.I. use a service called Jasper, which allows for more control and uses a more intuitive A.I. Jasper also boasts that major companies like IBM, iHeartMedia, and HarperCollins use its services, which led me to believe this may be more ubiquitous than I assumed, though exact figures are understandably hard to determine. That being said, most media outlets feel A.I. is better used to assist writing rather than replace it, a consensus that seems to be echoed by graphic designers the world over as well.

I personally write because it’s therapeutic rather than being content driven; my three month hiatus is proof–positive of that. However, as I’ve said before, I do see the benefits of A.I. and try to stay open–minded while being responsible. I mean look at these A.I. generated pictures of a black cat I had Dall–E 2 create! Are they good? Meh. Are they entertaining? Absolutely.

Please follow me on Instagram and Twitter and thanks for stopping by! Please let me know your thoughts on A.I. and how it contributed to this article.

A.I. Art: Clarification & Controversy

I wasn’t even planning on writing a blog post this week, let alone one on such a topic as art generated by artificial intelligence, but as it is part of my industry and I’ve seen so much outrage from my peers (many of whom are friends), I thought I’d do a little digging and put something a little more comprehensive together than, “Support artists! Denounce technology!”

I’m going to try and keep all of this as brief, simple, and informative as I possibly can, but I’m also going to try and approach this from a (slightly) less biased angle.

What is A.I., and is it Bad?

You don’t have to be particularly well informed to know what AI is. It covers all aspects of our lives, from taking care of minor tasks we don’t even think about to the stuff of science fiction nightmares requiring Will Smith to show us that even machines can—and in fact do—love.

Recently I finished reading Yuval Noah Harari’s Homo Deus: A Brief History of Tomorrow, a follow up to his bestseller Sapiens: A Brief History of Humankind. Harari writes a lot about AI and its benefits. Without going into too much detail and wasting time here, the three big takeaways for now are that:

  • AI is actually a really good thing that can drastically help with all sorts of things to improve life everywhere for everyone 

  • Fundamentally, human life as we continue to understand how it works is not that different from how AI learns, adapts, and grows itself

  • It’s becoming clearer that the creative feats people believed AI could never reproduce aren’t far off, and in some areas, like classical music, AI is actually surpassing humans in quality, structure, and beauty

I know most people will not believe any of what I have just written, and that’s completely fine. I would never insist you take my word (or anyone else’s) on anything as gospel at a glance. I would strongly encourage you to do your own research though. The point, however, is that whether we like it or not, the world is going to continue to change (it always has), and it’s certainly not going to stop because a few of us don’t like the idea of being replaced by anyone or anything. Automation has been changing how we do our jobs and live our lives for centuries, and the pace has continued to increase at breakneck speed, especially since the 1980s. In other words, AI is most definitely here to stay, and it’s probably better we figure out how to change with it, rather than stubbornly anchor ourselves against it until the next generation sees us as living fossils who refuse to accept change.

But we’re not here for a lecture on science or philosophy from some Muppet–loving Jersey boy whose blog readership doesn’t extend far past his own family and close friends. So let’s instead talk about AI generated art, specifically the latest fad known as Lensa, or “Magic Avatars,” because there’s a lot to unpack and it gets complicated.

Avatar Insanity or High Art?

Remember Bitmojis? I hated Bitmojis when they first debuted. Not because of how well or poorly they’re drawn (depending on your tastes), but because I didn’t come up with the idea first. For those not familiar, Bitmojis aren’t too dissimilar from modern day avatars you create like Meta’s more CGI looking Facebook avatars or Apple’s Memojis. My argument at the time was also that they took away opportunities from artists like myself to create illustrations for profit. On the flip side, they give people who do not possess the skill to draw an opportunity to express themselves creatively quite literally. 

Bitmoji, Facebook, and Memoji avatars of yours truly, and one I drew myself (2017)

The point is that there will always be new technology that lets users engage with and experience something they would otherwise have to commission an artist for. I don’t know a single person who has ever created a digital avatar using some type of technology and then claimed that their “artwork” was anything other than a fun opportunity to represent themselves online or even just to “jump on the bandwagon.”

Now I recognize this is a slippery slope that could lead to something more nuanced down the road, but for the time being, let’s all try and remember a social media avatar is not the same as a portrait or caricature that you would give as a gift, hang in your living room, or rock on the side of your 1988 conversion van as you drive to your next gig.

AI Art Stink

The main focus of this article, and the commotion it’s caused, is a program called Lensa by Prisma Labs. At a glance, Lensa is a pretty standard photo and video app that lets you take and edit media with a variety of different features. Its most popular features, of course, are the filters that “turn your photos into works of art in the style of famous artists” and transform them “with popular art styles - anime, cartoons, sketches, watercolors...” all by using artificial intelligence. The app is free but offers premium monthly or annual subscriptions.

Okay, so nothing too new there. I am very far from having my finger on the pulse of any social media platform, but even I’m familiar with these types of things enough to know there are tons of them. I’ve even used some of them before myself, so why is this app getting artists in particular so upset?

The Lensa Learning Problem

Not too long ago (like literally just several months ago) Dall–E 2 debuted to slightly different fanfare. Dall–E 2 is also an AI art generator, but it seems its absolutely ludicrous creations were more comical than threatening (for the most part). Hence, many people looked at Dall–E’s attempts as technological proof that a computer could never imitate the skill of an experienced artist, or that, optimistically, it’s at least still a ways off.

Lensa uses Apple’s TrueDepth API, which most iPhone users know as the same technology that allows them to unlock their phones just by looking at them. Dall–E 2, for its part, learned how to create its images by studying tons of information so it could generate images based on text prompts. This AI learning technique is known as Stable Diffusion.

This is tricky, but basically when AI uses Stable Diffusion, it’s not just learning to recognize features and characteristics of someone’s art, but essentially manipulating and reproducing elements from that art. So the issue is that artists are accusing Lensa of using this learning technique to build its creations from existing art without permission from the artists it’s emulating. Now, it’s hard to find sources to corroborate some of the allegations I’ve read, but many artists have actively accused Prisma Labs of stealing their art specifically and have repeatedly asked the company to stop, with Prisma Labs allegedly refusing and even cyberbullying them over it. There is compelling evidence lending credibility to these claims: remnants of artists’ signatures from original works are sometimes still visible in the AI generated art.
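If the “type a prompt, get a picture” part still feels abstract, here’s roughly what it looks like in code. This is a minimal sketch using the open-source diffusers library and a publicly released Stable Diffusion checkpoint; it is not Lensa’s or Dall–E’s actual pipeline, and the prompt is just a made-up example:

```python
# Minimal sketch of text-to-image generation with an open-source Stable Diffusion
# checkpoint via Hugging Face's diffusers library. Illustrative only; this is not
# Lensa's or Dall-E's internal code.
import torch
from diffusers import StableDiffusionPipeline

# Download a pretrained checkpoint (several gigabytes) and move it to the GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# One sentence in, one image out. Everything in the result comes from patterns
# the model absorbed while training on its dataset of images and captions.
prompt = "a watercolor portrait of a designer drinking coffee in a sunlit cafe"
image = pipe(prompt).images[0]
image.save("generated.png")
```

That’s the whole user-facing experience; the debate is entirely about where those training images came from in the first place.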

Darker Secrets

Okay, maybe I’m coming across a little too unbiased by praising what AI could (or hopefully should) be even though I’ve stated in the past how infuriating and hurtful art theft can be. So let’s look at how one of the more nefarious problems with Lensa isn’t even allegations of art theft, but how it depicts your pictures when they become “art.”

A big problem with AI in general is that it learns its lessons from its creators, and even modern day AI—which is touted as pure—has repeatedly displayed racism, nepotism, and sexism. Lensa has lightened skin tones of people of color, struggled with (re)producing Asian features, and sexualized women and children. Other AI art generators have done similarly unwarranted things, like taking on macabre tones when “crossbreeding” images that did not previously convey violence.

I want to be clear: these are not alarmist warnings that AI will rise up and destroy humanity. Artificial intelligence reflects the zeitgeist, which includes everything from cute and fun to morally questionable. In other words, whether it’s an art–stealing bot, a perverted algorithm, or even an altruistic ghost in the machine, it’s all taking its cue from us.

Now What?

So where do we go from here? As artists, we tend to react more emotionally because, you know, suffering is kind of “our thing” (until artificial intelligence corners the market on angst too). But like I mentioned before, technology is going to keep moving forward regardless of how we feel about it, and that’s not necessarily a bad thing. A knee-jerk reaction is to call for regulation, but that’s way easier said than done for two big reasons.

While there have been initiatives in Congress to try and moderate how things like AI continue to develop, bureaucracy—love it or loathe it—purposely moves slowly to make sure it’s covering all its bases properly (and that is an exceptionally gracious and arguably naive platitude). On top of that, it’s an antiquated system that’s literally hundreds of years old. Technology moves ludicrously faster, meaning that by the time well intentioned and thought out legislation finally passes, even in the best of circumstances, the applied science behind that technology is usually obsolete, and any government progress was all for nothing.

The second problem is that government officials aren’t exactly young entrepreneurs who even understand the technology they’re hoping to regulate, prioritize, or control. So when you ask older people using an even older system to help answer these questions, you eventually have to consider whether the whole process wouldn’t be better served by the AI you want them to regulate in the first place! Understand though that this is not an endorsement to willfully hand the keys over to tech bros like Elon Musk or Sam Bankman–Fried. It’s pretty clear that being rich does not equal being responsible… or smart… or ethical… or competent… or sane.

So if creative people only know how to get upset over it, and our leaders only know how to politicize it, as usual, it all comes down to you, the user. AI really can do incredible things, but moving forward, it’s up to us to decide how we’ll use it. 

And this is something everyone really needs to learn how to do better. Consider whether that neat new AI avatar is worth the likes versus its privacy policy. Yeah, that’s a whole other kettle of fish, because one thing practically no one understands is what kind of personal information you allow software developers access to when you agree to their terms of service.

Like almost every app, Lensa uses legalese to ensure you maintain the rights to your photos, but then vaguely explains that it has the right to use those photos to independently aid in research, development, and the improvement of new and existing products.

This is not a dystopian outlook from a conspiracy theorist either. Your personal data is way more important to all of these developers than what kind of review you leave them on the App Store. A great rule of thumb when you download an app is its cost. If the app is free, then you’re the product that’s for sale.

Sources

I did a fair amount of research for this post, so if you’d like to read a bit more in depth on all of this, please check out these articles:

Lensa AI app: What to know about the self portrait generator by Meera Navlakha
Mashable

Understanding the impact of automation on workers, jobs, and wages by Harry J. Holzer
Brookings

Prisma is coming to Android, but there's a way to get it sooner by Stan Schroeder
Mashable

Careful — Lensa is Using Your Photos to Train Their AI by Shanti Escalante-De Mattei
ARTnews

Lensa, the AI portrait app, has soared in popularity. But many artists question the ethics of AI art by Morgan Sung
NBC News

‘Magic Avatar’ App Lensa Generated Nudes From My Childhood Photos by Olivia Snow
Wired

Stable Diffusion
Wikipedia

DALL–E
Wikipedia

If you’d like to track what some artists are saying, a lot of the insight on Lensa’s theft I read came from Jon Lam on Instagram. He credited Lauryn Ipsum for discovering the remnants of artists’ signatures on AI creations. Karla Ortiz is helping lead the fight for artists’ rights.