My Stance on "ai" "art"
In light of the admission by MythWeaver that they were using Stable Diffusion to generate content, I went ahead and officially made my position on generative machine learning and large language models clear.
TL;DR: the thing you call "ai" isn't "ai", but LLM and GML, and the systems used to build them were only possible because of theft. Please reconsider using them to generate stuff.
So uh. MythWeaver uses Stable Diffusion. Normally I wouldn't post stuff about LLM and GML [large language models and generative machine learning, respectively (it's not Artificial Intelligence, no matter how much they say it is)], but this one got me a bit riled. So here. Lemme make my stance clear.
If you are using any of the following systems to "create new content" [all in quotes because none of those three words necessarily belongs there], please stop.
- MidJourney
- Stable Diffusion
- ChatGPT
- any system based off of these or their derivatives
Yes. I have heard the arguments in favour, but allow me to present arguments against.
- but "ai" gives accessibility to art and writing -
- generative machine learning literally stole mountains of content from unwilling artists to create those models. Besides that, you should be striving to make the art you can, not to steal (and in some cases profit from) the art of others. Would I love to be able to create the kinds of things made by Shinga, Birbles, Vyper, and many others? Sure. Do I use those generative systems to pop out such pieces? Fuck no. I make the art I can, and over time I get better and develop my own style. If you look at my art from 15 years ago and compare it to now, there's marked improvement because I keep fucking trying.
- but art is expensive -
- damn right it is. Because the people who make it have honed their skills over years and learned to make incredible things. If you aren't willing to pay them, but you're willing to use systems that were built by stealing from them, then you are telling me you don't value art or artists.
- I'm using it for personal use -
- great. Fantastic. But if that personal use is so small that it will have no impact at all, then you could probably have just used public domain work or gone directly to the source.
- I'm not a good writer/coder/developer/editor/etc., and the "ai" is fast and cheap/free -
- for editing something you've written and for debugging code, it's a pretty okay starting point, but it should still go to a human for confirmation. For actually writing or coding something, you should not only NOT use generative machine learning, but you should shy away from anyone who says you should. Why? Because GML systems are made to hallucinate [the field's own term for inaccurate info the system presents as fact] and can only be trusted to do so. They are not intelligent, and they do not care whether the info they spit out is accurate. Already we are seeing harmful and dangerous articles popping up that were made by GML and never fact-checked. At this point you can only barely trust the internet, and that trust is getting shakier by the day.
- but it's just getting its feet under itself. We have to let it get better -
- nothing of this sort should ever have been released to the public, because it is theft. Releasing a skeleton that's untrained? Cool, no big. Then people can easily be held directly accountable for the active theft they're doing. Releasing the theory and making it clear that implementation is miles from true ai? Definitely. But releasing systems into the wild that fundamentally stole everything used to build them? No. Not at all. Because then the end user doesn't take accountability for their own complicity in the theft.
It's cool if you disagree with me, but if you share something with me that was made by LLM or GML, I'm going to dislike it, no matter how cool you think it is, unless you made it from scratch with data you created YOURSELF or received explicit consent from the originators to use theirs.
Author's Note
I make an important exception that isn't laid out above, but I want to make it explicit. Systems that are built entirely for accessibility are generally a force for good. An example, at the time of writing anyway, is Goblin Tools. Why? Because Goblin Tools is strictly an accessibility aid. It can help break down tasks, estimate time requirements, provide feedback about text, and give ideas for rewriting text to be more in line with specific goals. But with the exception of one feature (as of 07 January 2024), the system is almost entirely reactive and requires the user to have written something already (with the obvious caveat that the user could have pulled text from elsewhere). Goblin Tools is a starting point for what the technology COULD be, pointed in a positive direction.
Tags: machine-learning, generative-tools, art, ethical-coding, fake-ai, large-language-models, fb-rants
Words: 672
Date: 2024-01-07