Well, that’s awesome.
Having read the article:
I agree that the approach is no longer viable but I strongly disagree with the rationale. It boils down to three key aspects:
- Wordfreq works by scraping the “open web”. As a result, it is being inundated with massive amounts of gpt spam articles. This is problematic in that it is not “natural language” between people but… those articles never were. If you think anyone talks like the average SEO recipe blog then… more on that later.
- Sites are increasingly locking down access to scraping their text. This… I actually think is really good. I strongly dislike that the lockdown means “so that only people who pay us can train off of you,” but I have always disliked the idea that people just train models off of social media with no consent whatsoever.
- Funding for NLP research is basically dead. No arguments there and I have similar rants from different perspectives. But… that is when you learn how to call what you do AI to get back your old funding.
But I think the bigger part, which I strongly disagree with, is the idea that this is not the language of a post-2021 society. With points like:
“Including this slop in the data skews the word frequencies.”
But… look up “SoCal-ification” and how many people have some “valley girl” idioms and cadence to their normal speech because that is what we grew up on. Like, I say “like” a lot to chain thoughts together and am under no illusion that it came from anywhere but TV. Same with how you can generally spot someone who grew up reading SFF based on how they use some semi-obscure words and are almost guaranteed to mispronounce them.
Because it is the same logic as “literally there is no word that means literally anymore”. Yeah, it is true. Yeah, it is annoying. But language evolves and it doesn’t always evolve in ways that make sense.
Or, just look at how many people immediately started using the phrase “enshittification” every chance they got. Or who learned about the Ship of Theseus and apply it every chance they get.
Like (there it is again!), a great example is cell phones. Reality TV popularized the idea of putting your phone on speaker, holding it in the palm of your hand, and talking into it. That is fucking obnoxious and has made the world a worse place. But part of that was necessity (in reality TV it is so that the audience gets both perspectives. In real life it is because of shit like the iPhone having a generation or two that would drop calls if you held it like a god damned phone) and then it is just that feedback loop. Cell phone companies design their phones to look good on TV when held that way and people who watch TV start doing that because all the cool people do it. And so forth.
AI has already begun to change language and it will continue to do so in the future. That is just reality and it is no different than radio and especially television leading to many regional dialects being outright wiped out.
The problem is that LLMs aren’t human speech and any dataset that includes them cannot be an accurate representation of human speech.
It’s not “LLMs convinced humans to use ‘delve’ a lot”. It’s “this dataset is muddy as hell because a huge proportion of it is randomly generated noise”.
What is “human speech”? Again, so many people (around the world) have picked up idioms and speaking cadences based on the media they consume. A great example is that two of my best friends are from the UK but have been in the US long enough that their families make fun of them. Yet their kid actually pronounces it “al-you-min-ee-uhm” even though they both say “al-ooh-min-um”. Why? Because he watches a cartoon where they pronounce it the British way.
And I already referenced SoCal-ification, which is heavily based on screenwriters and actors who live in LA. Again, do we not speak “human speech” because it was artificially influenced?
Like, yeah, LLMs are “tainted” with the word “delve” (which I am pretty sure comes from youtube scripts anyway but…). So are people. There is a lot of value in researching the WHY a given word or idiom becomes so popular but, at the end of the day… people be saying “delve” a lot.
Speech written by a human. It’s not complicated.
It cannot possibly be human speech if it was produced by a machine.
This is just depressing.
That makes sense. Way too many web search results look and feel like they weren’t written by a human lately. It’s gotten even more difficult for me to figure out what’s trustworthy and what isn’t.
Yep, and the fact that they continue to feed these same results back to the AI is going to eventually make them lose their shit. I saw it mentioned in an article or video (can’t remember now which) that when AI starts taking AI-created output as input it starts hallucinating, almost like schizophrenia.
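That “AI eating its own output” failure mode (often called model collapse) can be sketched with a purely statistical toy, no LLM involved: if each “generation” is fit only to samples of the previous generation’s output, rare words that miss a single round are gone forever, so the vocabulary’s long tail erodes. The corpus and word names below are made up for illustration:

```python
import random
from collections import Counter

def resample(counts: Counter, n: int, rng: random.Random) -> Counter:
    """Draw n tokens from the current word distribution and re-count them."""
    pool = list(counts.elements())
    return Counter(rng.choice(pool) for _ in range(n))

rng = random.Random(0)

# Toy "corpus": a handful of common words plus a long tail of rare ones.
corpus = Counter({f"common{i}": 100 for i in range(5)})
corpus.update({f"rare{i}": 1 for i in range(50)})

generation = corpus
for _ in range(10):
    # Each "model generation" is trained only on the previous one's output.
    generation = resample(generation, sum(corpus.values()), rng)

# Once a rare word misses one generation it can never come back, so the
# vocabulary shrinks while the common words dominate even harder.
print(len(corpus), "->", len(generation))
```

Real training pipelines are obviously far more complicated, but the one-way ratchet on the long tail is the same basic mechanism.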
When the first three results look like high schoolers copied with slight wording changes from the same source and they are all written in an extremely passive tone, my assumption is AI. Questions on things like cooking temps are the worst in my experience, and I assume that is something which is easy to automate.
I was looking for tips on cutting acrylic sheets and everything I found seemed untrustworthy. Bad advice there could be hazardous.
That I feel is a case of people yearning for a day that never existed.
Like, every GenX/Older Millennial who had a modem too early in life has stories about The Anarchist’s Cookbook. And the thing you learn REAL fast is that people would edit and share MUCH more dangerous versions (and considering what the source was to begin with…). I remember being part of the mod staff for a couple DC++ hubs where we would check versions and tell anyone with a(n overly) dangerous edit to delete that shit or be banned.
Fast forward a couple decades and I needed to do a temporary repair on my car before I could get some “body” damage fixed (like two hours of effort but needed a part). Every attempt at searching, even on reddit, would talk about how you should use flexseal or the good duct tape or whatever. Only lucked out because I found one blog post that talked about how using any of those methods would guarantee you rip off the paint and drastically increase the cost of repairs and to instead use automotive masking tape unless you REALLY needed to drive in a heavy downpour.
Same with doing house work. Youtube is immensely useful for that. But there is a reason so many “maker” channels have “React to life hack” videos. Because if you don’t know what you are doing? Some whackjob using clever editing to make it look like they built a duct adapter out of elmer’s glue and an actual repair video are indistinguishable (especially after youtube hid the dislikes…). And that can range from wasting your time to outright fire hazards or frozen pipes.
The reality is that people have always been shits. And it REALLY fucking sucks when the LLMs designed to parse that, invariably, become shits too. But this has been a problem since people discovered SEO in the first place. Volume has gone up but the problem is not new.
And… late stage capitalism. But I find myself REALLY liking Kagi (libertarian tech bro CEO aside…) simply because it reduces the impact of my search history on results while also letting me manually emphasize some sites or outright block any that piss me off. Still get the SEO blogspam but a lot less.
“Generative AI has polluted the data,” she wrote. “I don’t think anyone has reliable information about post-2021 language usage by humans.”
That is fucking horrifying.
Yeah, the generative AI pollution feels a lot like the whole low-background steel thing - since the nuclear tests it’s been impossible for new steel to not be slightly radioactive, which means if they need uncontaminated steel they get it from ships that sank before those tests.
This is the exact metaphor I’ve been using when talking to people about the issue. Did we both get it from somewhere I can’t remember, or is it just perfect?
It’s the first thing I thought of when the articles about the generative AI polluting itself started coming out.
Luckily radiation levels have pretty much dropped back to pre-war levels now, so new steel can be low-background as well. It was possible to make new low-background steel from 1945 onward too; it just would have been more expensive than salvaging pre-war ships. I like the analogy though, it fits.
I’ve been comparing it to Kessler Syndrome, but for culture.
Taking a step back, I wonder… we are reading this stuff now, it affects us too. What if we have already stepped into a linguistic death-spiral of a telephone game where each generation gets rehashed garbage from the last?
AI language patterns are polluting the data, but are they influencing language usage by humans as well? We should delve into that.
Tie this in with the obvious oil pollution, and now Musk’s radio transmission pollution… Fucking corporations get to pollute the world in every way imaginable to chase a buck and we’re left having to cope with their waste…
Fucking bullshit society we made for ourselves…