

Imo the main difference is that genAI models have been trained on a whole lot of art without consent, and the few privileged companies able to do this are making a ton of money (mainly from investors, not sure how much from paying users). Which is very extractive and centralised. Using others’ art to make memes is at least distributed and not particularly remunerative.
Putting AI aside, seeing art used in a random shitposter’s meme feels different from a political party or a big corporation using that same art for meme propaganda or advertising.
Another interesting field for this is YouTube poops. They use tons of copyrighted material, from big movies to small youtubers to advertisements. I would consider that fair, but if instead a big television network ran a programme showing youtubers’ content without permission, that’s another story.
Another example: Undertale’s soundtrack being made with Earthbound’s sound effects and samples. If it hadn’t been an indie game, and especially if it had been a big publisher using an indie’s sounds, it wouldn’t have been as well received.
So back to AI: when it comes to a person using it for their own projects, the issue to me isn’t really the use of stolen art itself, but the use of a tool built through a big corporation’s extractive theft of art, instead of seeking collaboration with artists, using existing Creative Commons material, and so on.
We also have to keep the context in mind: copyright law mainly serves big publishers, and it hardly ever protects smaller creators from those same publishers, in any field. The genAI training race is built on a complete lack of interest in applying, or even discussing, the law.
I’m glad to see, though, that thanks to this phenomenon more and more people are realising that IP doesn’t make much sense to begin with. Just keep in mind that copyright and attribution are two different things.
You could try Google’s NotebookLM if the legal writing is a book, or even just a long document.
Otherwise, just use any LLM and go through it step by step, asking it to point you to the references so you can check them yourself.
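
If you’d rather script it than paste chunks by hand, here’s a rough sketch of what I mean, assuming the OpenAI Python SDK just as an example; the model name and the `legal_text.txt` file are placeholders, and any other LLM API would work the same way:

```python
# Rough sketch: split a long legal document into chunks and ask an LLM
# to explain each one step by step, quoting the passages it relies on.
# Assumes the OpenAI Python SDK (`pip install openai`) with an API key in
# the OPENAI_API_KEY environment variable; file name and model are placeholders.
from openai import OpenAI

client = OpenAI()

def chunk(text, size=8000):
    """Split the document into roughly size-character pieces."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def explain_chunk(piece, question):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder, swap in whatever model you use
        messages=[
            {"role": "system",
             "content": "You are helping a layperson read a legal document. "
                        "Reason step by step and quote the exact passage that "
                        "supports each claim. Say so explicitly if unsure."},
            {"role": "user",
             "content": f"Document excerpt:\n{piece}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

text = open("legal_text.txt").read()  # hypothetical input file
for i, piece in enumerate(chunk(text), 1):
    print(f"--- excerpt {i} ---")
    print(explain_chunk(piece, "Summarise this section and flag anything unusual."))
```

The point is really just the prompt: make it quote the passage it’s relying on, so you can verify the references yourself instead of trusting the summary blindly.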