That’s a good text. I’ve been comparing those “LLM smurt!” crowds with Christian evangelists, due to their common use of fallacies like inverting the burden of proof, moving the goalposts, straw men, etc.
However it seems that people who believe in psychics might be a more accurate comparison.
That said, LLMs are great tools for retrieving info when you aren’t too concerned about accuracy, or when you can check the accuracy yourself. For example, ChatGPT’s output for prompts like
- “Give me a few [language] words that can be used to translate the [language] word [word]”
- “[Decline|Conjugate] the [language] word [word]”
- “Spell-proof the following sentence: [sentence]”
is really good. I’m still concerned about the sheer inefficiency of the process though, energy-wise.
This is absolutely in line with who buys into AI hype, and why it is infuriating to try to convince them that they are reading way too much into how it seems to know things, when all it is doing is returning results that are statistically likely to be found helpful by the audience it is designed for.
I have said that LLMs and other AI are designed to return what people want to see/hear. They don’t know anything and will never be useful as a knowledge base or as an independently functioning diagnostic tool.
It certainly has uses, but it certainly isn’t going to solve all the things that are promoted by the AI hype train.
I don’t buy into it, but it’s so quick and easy to get an answer that, if it’s not something important, I’m guilty of using an LLM and calling it good enough.
There are no ads and no SEO. Yeah, it might very well be bullshit, but most Google results are also bullshit, depending on the subject. If it doesn’t matter, and it isn’t easy to know whether I’m getting bullshit from a website, an LLM is good enough.
I took a picture of discolorations on a sidewalk and asked ChatGPT what was causing them because my daughter was curious. It said metal left on the surface rusts and leaves behind those streaks. But they all had holes in the middle, so we decided there were metallic rocks mixed into the surface that had rusted away.
Is that for sure right? I don’t know. I don’t really care. My daughter was happy with an answer and I’ve already warned her it could be bullshit. But curiosity was satisfied.
I’m not sure if you recognize this, but this is precisely how mentalism, psychics, and others in similar fields have always existed! Look no further than Pliny the Elder or Rasputin for folks who made a career out of magical and mystical explanations for everything and gained great status for it. ChatGPT is in many ways the modern version of these individuals, gaining status for having answers to everything which seem plausible enough.
She knows not to trust it. If the AI had suggested “God did it” or metaphysical bullshit I’d reevaluate. But I’m not sure how to even describe that to a Google search. Sending a picture and asking about it is really fucking easy. Important answers aren’t easy.
I mean I agree with you. It’s bullshit and untrustworthy. We have conversations about this. We have lots of conversations about it actually, because I caught her cheating at school using it so there’s a lot of supervision and talk about appropriate uses and not. And how we can inadvertently bias it by the questions we ask. It’s actually a great tool for learning skepticism.
But for some things, a reasonable answer just to satisfy your brain is fine whether it’s right or not. I remember spending an entire year in chemistry learning absolute bullshit, only for the next year to be told that was all garbage and here’s how it really works. It’s fine.
Yes, treating AI answers with the same skepticism as web search results is a decent way to make them useful. Unfortunately the popular AI systems seem to be using multiple times as much energy to give answers that aren’t even as reliable as Google used to be.
Back in the day, Google used the same ‘was this information useful’ signal to rank results, before the SEO craze took off.
And yes, if the stains look like rust and there is a gap then there was a ferrous rock in the mix that rusted away. I have a spot on my sidewalk and a stone slab thing, and found out what caused it from someone who works with those materials!
But there isn’t any mechanism inherent in large language models (LLMs) that would seem to enable this and, if real, it would be completely unexplained.
There’s no mechanism in LLMs that allows for anything. It’s a black box. Everything we know about them is empirical.
LLMs are not brains and do not meaningfully share any of the mechanisms that animals or people use to reason or think.
It’s a lot like a brain. A small, unidirectional brain, but a brain.
LLMs are a mathematical model of language tokens. You give an LLM text, and it will give you a mathematically plausible response to that text.
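To make that concrete (my own toy sketch, not anything from the thread): at its core, an LLM is a function from a token sequence to a probability distribution over the next token, which you then sample from. Here the “model” is a hand-written bigram table; a real LLM computes these probabilities from billions of learned weights instead, but the score-to-distribution-to-sample step is the same idea.

```python
import math
import random

# Hand-written scores standing in for a learned model: for each
# previous token, a raw score (logit) for each candidate next token.
BIGRAM_LOGITS = {
    "the": {"cat": 2.0, "dog": 1.5, "sidewalk": 0.5},
    "cat": {"sat": 2.0, "ran": 1.0},
    "dog": {"ran": 2.0, "sat": 0.5},
}

def softmax(logits):
    """Turn raw scores into a probability distribution summing to 1."""
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

def next_token(prev, rng):
    """Sample a 'mathematically plausible' next token given the previous one."""
    probs = softmax(BIGRAM_LOGITS[prev])
    toks, weights = zip(*probs.items())
    return rng.choices(toks, weights=weights, k=1)[0]

rng = random.Random(0)
print(next_token("the", rng))  # one of "cat"/"dog"/"sidewalk", weighted by probability
```

The whole “plausible, not true” behaviour people describe upthread falls out of this: the sampler ranks continuations by how likely they are, not by whether they are correct.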
I’ll bet you a month’s salary that this guy couldn’t explain said math to me. Somebody just told him this, and he’s extrapolated way more than he should from “math”.
I could possibly implement one of these things from memory, given the weights. Definitely if I’m allowed a few reference checks.
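At the smallest scale, that’s plausible: the core operation of a transformer-based LLM is a self-attention head, which fits in a few lines. Below is a rough NumPy sketch of one, with random matrices standing in for the pretrained weights you’d be handed; it illustrates the shape of the computation, not a faithful reimplementation of any particular model.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                      # tiny embedding dimension, for illustration
# Random stand-ins for the query/key/value projection weights
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

def attention(x):
    """x: (seq_len, d) token embeddings -> (seq_len, d) attention outputs."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)          # scaled pairwise similarity
    # Causal mask: each position may only attend to itself and earlier positions
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores[mask] = -np.inf
    # Row-wise softmax turns scores into attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                     # weighted mix of value vectors

x = rng.standard_normal((3, d))            # three "token" embeddings
out = attention(x)
print(out.shape)                           # (3, 4)
```

A real model stacks dozens of these (plus feed-forward layers, normalisation, and an output projection over the vocabulary), but nothing in the stack is anything other than this kind of matrix arithmetic.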
Okay, this article is pretty long, so I’m not going to read it all, but it’s not just in front of naive audiences that LLMs seem capable of complex tasks. Measured scientifically, there’s still a lot there. I get the sense the author’s conclusion was a motivated one.
There is no reason to believe that it thinks or reasons—indeed, every AI researcher and vendor to date has repeatedly emphasised that these models don’t think.
Geoffrey Hinton, for one
Can someone please paraphrase the following which I didn’t understand?
Somebody raised to believe they have high IQ is more likely to fall for this than somebody raised to think less of their own intellectual capabilities. Subjective validation is a quirk of the human mind. We all fall for it.
But if you think you’re unlikely to be fooled, you will be tempted instead to apply your intelligence to “figure out” how it happened. This means you can end up using considerable creativity and intelligence to help the psychic fool you by coming up with rationalisations for their “ability”.
And because you think you can’t be fooled, you also bring your intelligence to bear to defend the psychic’s claim of their powers. Smart people (or, those who think of themselves as smart) can become the biggest, most lucrative marks.
The author is suggesting that smart people are more likely to fall for cons: they try to dissect the trick, can’t find the specific method being used, and, supposedly because they consider themselves infallible, end up rationalising it instead.
I disagree with this take. I don’t see how that thought process is exclusive to people who are or consider themselves to be smart. I think the author is tying himself into a knot to state that smart people are actually the dumb ones, likely in preparation to drop an opinion that most experts in the field will disagree with.
I don’t see how that thought process is exclusive to people who are or consider themselves to be smart.
They aren’t saying that this is exclusive to people who consider themselves smart. They’re saying that such people are more likely to fall into the trap because they engage under the assumption that they aren’t susceptible to being tricked. Although I do think the author inappropriately conflates smart people with people who merely think of themselves as smart.
It’s not a take though, it’s a thing. The tendency to fall into irrational beliefs has been called “Dysrationalia” in psychology and is linked to higher education and intelligence. An example would be the tendency of Nobel prize winners to espouse crazy theories later in life, which is humourously referred to as the Nobel Disease.