I agree that the approach is no longer viable, but I strongly disagree with the rationale, which boils down to three key aspects:
Wordfreq works by scraping the “open web”. As a result, it is being inundated with massive amounts of GPT spam articles. The objection is that this is not “natural language” between people, but… those articles never were. If you think anyone talks like the average SEO recipe blog then… more on that later.
Sites are increasingly locking down access to scraping their text. This… I actually think is really good. I strongly dislike that the lockdown in practice means “only people who pay us can train off of you”, but I have always disliked the idea that people just train models off of social media with no consent whatsoever.
Funding for NLP research is basically dead. No arguments there, and I have similar rants from different perspectives. But… that is when you learn to call what you do “AI” to get your old funding back.
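To make the “skew” concern concrete, here is a minimal sketch of how a frequency list gets distorted. This is plain Python over two toy, made-up corpora (nothing like wordfreq's actual pipeline); the overused word “delve” is purely illustrative:

```python
from collections import Counter

def relative_freq(tokens, word):
    """Fraction of all tokens that are `word`."""
    counts = Counter(tokens)
    return counts[word] / len(tokens)

# Toy "human" corpus vs. the same corpus flooded with generated text
# that leans on a pet word (here "delve" -- hypothetical example).
human = "we looked into the data and wrote it up".split()
spam = "let us delve into the data and delve deeper".split() * 10

before = relative_freq(human, "delve")          # 0.0 in the human-only corpus
after = relative_freq(human + spam, "delve")    # ~0.20 once the spam floods in
print(before, after)
```

Because the generated text vastly outnumbers the human text, the combined corpus reports “delve” as one of the most common words even though no one in the human sample used it at all. That is the skew argument in miniature.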
But I think the bigger part, which I strongly disagree with, is the idea that this is not the language of a post-2021 society, with points like:
“Including this slop in the data skews the word frequencies.”
But… look up “so-cal-ification” and how many people have some “valley girl” idioms and cadence to their normal speech because that is what we grew up on. Like, I say “like” a lot to chain thoughts together and am under no illusions that came from TV. Same with how you can generally spot someone who grew up reading SFF based on how they use some semi-obscure words and are almost guaranteed to mispronounce them.
Because it is the same logic as “literally there is no word that means literally anymore”. Yeah, it is true. Yeah, it is annoying. But language evolves and it doesn’t always evolve in ways that make sense.
Or, just look at how many people immediately started using the phrase “enshittification” every chance they got. Or who learned about the Ship of Theseus and apply it every chance they get.
Like (there it is again!), a great example is cell phones. Reality TV popularized the idea of putting your phone on speaker, holding it in the palm of your hand, and talking into it. That is fucking obnoxious and has made the world a worse place. But part of that was necessity (in reality TV it is so that the audience gets both perspectives; in real life it is because of shit like the iPhone having a generation or two that would drop calls if you held it like a god damned phone), and then it is just that feedback loop. Cell phone companies design their phones to look good on TV when held that way, and people who watch TV start doing that because all the cool people do it. And so forth.
AI has already begun to change language and it will continue to do so in the future. That is just reality and it is no different than radio and especially television leading to many regional dialects being outright wiped out.
The problem is that LLMs aren’t human speech and any dataset that includes them cannot be an accurate representation of human speech.
It’s not “LLMs convinced humans to use ‘delve’ a lot”. It’s “this dataset is muddy as hell because a huge proportion of it is randomly generated noise”.
What is “human speech”? Again, so many people (around the world) have picked up idioms and speaking cadences based on the media they consume. A great example is that two of my best friends are from the UK but have been in the US long enough that their families make fun of them. Yet their kid pronounces “aluminium” as “al-you-min-ee-uhm” even though they both say “al-ooh-min-um”. Why? Because he watches a cartoon where they pronounce it the British way.
And I already referenced “so-cal-ification”, which is heavily based on screenwriters and actors who live in LA. Again, do we not speak “human speech” because it was artificially influenced?
Like, yeah, LLMs are “tainted” with the word “delve” (which I am pretty sure comes from YouTube scripts anyway, but…). So are people. There is a lot of value in researching WHY a given word or idiom becomes so popular but, at the end of the day… people be saying “delve” a lot.
Speech written by a human. It’s not complicated.
It cannot possibly be human speech if it was produced by a machine.