I don’t much like Generative LLM Content, or what most folks are calling “Generative AI.” Professionally, I deal with it, because it’s where everyone in my industry is currently looking. Personally, I have some problems.
I’m going to lay those problems out here, so that every time in the future I go “I don’t like AI” I can just link back to this.
1. Current LLMs for generative content were trained on copyrighted work, for which the creators were not compensated or informed.
Exactly what it says above: all of the current models are based on stealing other people’s shit. And before someone goes “Oh, but it hasn’t been proven to be theft in a court of law,” you can take that opinion and shove it where the sun does not shine.
The current models do not work without being “trained” on other people’s shit. And to be clear, these things are not intelligent. “Training” just means being fed data — data the companies do not own, and did not have permission to use in this way.
“Oh, but there are models that were trained ethically—” Yeah, but those aren’t the ones everyone is using.
2. The large scale intention to replace creatives with Generative LLM content.
This is one where my friends who enjoy AI more like to say “Well, it’s inevitable” which isn’t wrong, but also doesn’t make it less shit. There’s also the part where people go “Oh, but it won’t replace artists, it will just allow them to focus their time on more meaningful and larger scale parts of the process!”.
That second group of people are wrong, and I can point at Draconis 8 and Terraforming Mars as two large projects using AI where it did not result in more artists on those projects.
As a side note: I’ve seen some of even my more… creative/artistic friends imply that it’ll be fine to use this stuff to replace ad copywriting, to which I say “First they came for the ad copy, and I did not speak out, for I was not an ad copywriter.”
3. The intent of larger corporations to control these technologies in a permanent SaaS-style ecosystem scares me.
So, there’s an ongoing trend in software/computer companies to make everything into a “service” instead of a product. You don’t own your software, you rent it forever. Everyone is doing it, and frankly, I think it’s a pretty bad trend. Generally speaking, it incentivizes companies to create products that are easy to buy into but hard to get out of. The software equivalent of a lobster trap.
It was less of a problem when we all just pirated (sorry, legally purchased with money) the Adobe Suite, and cracked it (ran it legally with our licenses for the software), because we were running it on our own machines.
But ChatGPT, Meshy, every single one of these larger services runs on cloud machines, with the inputs and acceptable content controlled by the companies who own them. That means they decide what speech is acceptable and what these systems will generate. And I don’t trust them to do that.
Again, my pro-AI Linux friends will point out that models can be run locally. They’re not wrong, but I think the inevitable failure of the technology enthusiast is the inability to recognize that most people do not have the energy and time to devote to using more “pure” technology. There’s a reason the world runs on Linux, but personal computers don’t.
A Few Other Things
There’s a lot to be said on the nature of art, what art is, and the intrinsic human nature some folks might assign to it. I lack that capacity, so I’ll just link to this writeup from Space-Biff, who is a much better writer than me on that part of the discussion.
I’m slightly more ambivalent on this than they are. If every artist were getting fabulously wealthy, fairly compensated, and all of these models ran on local machines where users had control, I wouldn’t care as much, even if I might still argue the point. But they’re not, the ecosystems are closed, and everything is stolen. So I’ll argue against it anyway.