

Nice! I have seen some people who make and sell content on Mastodon, and I appreciate those who do. IMO they help fill a gap, or just benefit the lives of people seeking that content or those services/interactions. I hope they're able to do it safely, seeing as they probably have to move off-platform for many things, including payment… and with how payment processors treat any adult-related transactions…
Unfortunately, that also means there's probably not a direct fedi alternative for OF. In my experience (more below) and from my research, it's pretty much the default for large creators to rely on paid actors impersonating them for messaging and interaction. OF also handles the payment details, which I don't know of any fedi platform doing.
Anecdote: I got lured into an OF subscription a while ago through a local meetup/dating/RP Discord server. I thought I was smarter than that, but weeks of talking to and trusting someone, with them slowly eroding boundaries and using emotional manipulation… it's really powerful, unfortunately.
Anyways, that OF page was 100% designed to milk people. A "$3 first month" followed by a recurring "$45" regular price if you don't cancel should've been a red flag. Once you subscribe there are only softcore posts, but as you talk to the actor in messages, they send more intimate images behind individual paywalls. They wouldn't talk to me anymore if I didn't pay. The first was $10, the next was $30. That's when I refused. It hurt, though; I felt like I knew the person. The whole time they're pushing this "if you don't buy it you don't love me, you don't want to support me, omg I need grocery money" angle.
I know not all creators use it for that. But the platform certainly enables it with its design and features. I just think a massive portion of the adult industry is founded on exploitation, unfortunately.

The difference between Gen AI and Sony v. Universal feels pretty substantial to me: VCRs did not require manufacturers to use any copyrighted material to develop and manufacture them. A VCR could only potentially infringe copyright if the user captured a copyrighted signal and used it for commercial purposes.
If you read the title and the description of the article, it admittedly does make it sound like the studios are taking issue with the ability to generate copyrighted IPs. But the first paragraph of the body states that the problem is actually the use of copyrighted works as training inputs:
You compare Gen AI to "magic boxes"… but they're not magic. They have to get their "knowledge" from somewhere. These AI tools pick up patterns far more subtle and complex than humans can recognize, and they aren't storing the training inputs they use; the data just strengthens connections within the neural net (afaik, as I'm not an ML developer). I think that's why it's so unregulated: how do you prove they used your content? And even if you could, they aren't storing or outputting it directly. Could it fall under fair use?
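To illustrate what I mean by "strengthening connections," here's a toy sketch (not how any real model is actually trained; the single weight, the learning rate, and the numbers are all made up for the example):

```python
# Toy gradient-descent step: one "neuron" with a single weight.
# The training example nudges the weight and is then discarded;
# only the updated weight lives on in the model.

weight = 0.5           # the model's only "connection"
learning_rate = 0.1

def train_step(x, target):
    global weight
    prediction = weight * x
    error = prediction - target
    gradient = error * x                 # direction to move the weight
    weight -= learning_rate * gradient
    # note: x and target are NOT stored anywhere after this returns

# a hypothetical "copyrighted" training example
train_step(x=2.0, target=3.0)
print(weight)  # only this number changed; the example itself is gone
```

Scale that up to billions of weights and you can see why it's so hard to point at any single work "inside" the model, even though every work it trained on left some trace in those numbers.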
Still, using copyrighted information in the creation of an invention has historically been considered infringement (I may not be using the correct terminology in this comparison, since maybe it’s more relevant to patent law), even if it didn’t end up in the invention— in software, for example, reverse engineers can’t legally rely on leaked source code to guide their development.
Also, using a VCR for personal use wouldn't be a problem, which I'd say was the prominent use-case. And using one commercially wouldn't involve any copyrighted material unless the owner fed it some. Neither is the case with Gen AI: regardless of what you generate, commercially or not, the neural network was built largely from unauthorized, copyrighted content.
That said, copyright law functions largely to protect corporations anyway: an individual infringing a corporation's copyright for personal or non-commercial use causes very little harm, but can usually be challenged and stopped. A corporation infringing an individual's copyright often can't be stopped; most individuals can't even afford the legal fees.
For that reason, I'm glad to see companies taking legal action against OpenAI and other megacorps that are (IMO) infringing the copyright of individuals and corporations at such a massive scale. Individuals certainly can't stop it, but corporations may be able to get some justice, or push for more to be done to safeguard the technology.
Much damage is already done, though. E-waste and energy usage from machine learning have skyrocketed. Websites struggle to fight off crawlers and end up locking down their APIs, both of which harm legitimate users. Non-consensual AI pornography is widely accessible. Many apps encourage people, including youth, to forgo genuine connection, both platonic and romantic, in exchange for AI chatbots. LLMs are also fantastic misinformation machines. And we've automated art, arguably the most "human" thing we do, putting many artists out of work in the process.
Whether the lack of safeguards comes down to government incompetence, corruption, or something inherent to free-market capitalism, I'm not sure. Probably all of the above.
In summary, I disagree with you. I think companies training AI with unauthorized material are at fault. And personally, I think the entire AI industry as it exists currently is unethical.