- cross-posted to:
- technology@beehaw.org
- fediverse@lemmy.ml
Maven, a new social network backed by OpenAI’s Sam Altman, found itself in a controversy today when it imported a huge number of posts and profiles from the Fediverse and then ran AI analysis to alter the content.
It sounds like they weren’t “being fed into an AI model” as in being used as training material, they were just being evaluated by an AI model. However…
Yeah, the general attitude is wild witch-hunts and instant zero-to-11 rage at the slightest mention of it. It doesn’t matter what you’re actually doing with AI; the moment the mob thinks it scents blood, the avalanche starts rolling.
It sounds like Maven wants to play nice, but if the “general attitude” makes playing nice impossible, why should they even bother to try?
This wasn’t always the case. A lot of NLP research in the 2010s used scraped social media posts, and people never had a problem with that (or at least the outrage wasn’t visible back then). The problem now is that our content is being used to create an AI product without any consent from the end user.
Source: my research colleagues used to work in NLP.
Consent isn’t legally required if the use qualifies as fair use. Whether it does remains for the courts to rule on.