Am I the only one getting agitated by the word AI (Artificial Intelligence)?

Real AI does not exist yet,
atm we only have LLMs (Large Language Models),
which do not think on their own,
but pass Turing tests
(fool humans into thinking that they can think).

Imo AI is just a marketing buzzword,
created by rich capitalistic a-holes,
who already invested in LLM stocks,
and now are looking for a profit.

  • Daxtron2@startrek.website · 7 months ago

    I’m more infuriated by people like you who seem to think that the term AI means a conscious/sentient device. Artificial intelligence is a field of computer science dating back to the very beginnings of the discipline. LLMs are AI, Chess engines are AI, video game enemies are AI. What you’re describing is AGI or artificial general intelligence. A program that can exceed its training and improve itself without oversight. That doesn’t exist yet. AI definitely does.

    • MeepsTheBard@lemmy.blahaj.zone · 7 months ago

      I’m even more infuriated that AI as a term is being thrown into every single product or service released in the past few months as a marketing buzzword. It’s so overused that formerly fun conversations about chess engines and video game enemy behavior have been put on the same pedestal as CyberDook™, the toilet that “uses AI” (just send pics of your ass to an insecure server in Indiana).

  • Gabu@lemmy.world · 7 months ago

    I’ll be direct: your text reads like you only just discovered AI. We have much more than “only LLMs”, regardless of whether or not these other models pass Turing tests. If you feel disgruntled, then imagine what people who’ve been researching AI since the 70s feel like…

    • dutchkimble@lemy.lol · 7 months ago

      It doesn’t rhyme, And the content is not really interesting, Maybe it’s just a rant, But with a weird writing format.

      • aulin@lemmy.world · 7 months ago

        I’m willing to bet that these people didn’t know anything about AI until a few years ago and only see it as this latest wave.

        I did AI courses in college 25 years ago, and there were all kinds of algorithms. Neural networks were one of them, but there were many others. And way before that, like others have said, it’s been used for simulated agents in games.

  • swordsmanluke@programming.dev · 7 months ago

    AI is a forever-in-the-future technology. When I was in school, fuzzy logic controllers were an active area of “AI” research. Now they are everywhere and you’d be laughed at for calling them AI.

    The thing is, as soon as AI researchers solve a problem, that solution no longer counts as AI. Somehow it’s suddenly statistics or “just if-then statements”, as though using those techniques makes something not artificial intelligence.

    For context, I’m of the opinion that my washing machine - which uses sensors and fuzzy logic to determine when to shut off - is a robot containing AI. It contains sensors, makes judgements based on its understanding of “the world” and then takes actions to achieve its goals. Insofar as it can “want” anything, it wants to separate the small masses from the large masses inside itself and does its best to make that happen. As tech goes, it’s not sexy, it’s very single purpose and I’m not really worried that it’s gonna go rogue.
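To make the washing-machine point concrete, here is a purely illustrative sketch of a fuzzy-logic rule of the kind such a controller might apply. The membership functions, thresholds, and sensor names are all made up for the example:

```python
def membership_high(x, lo, hi):
    """Fuzzy membership: 0.0 below lo, 1.0 above hi, linear in between."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def keep_spinning(water_turbidity, load_imbalance):
    """Fuzzy rule: keep washing if the water is still dirty AND the drum is balanced."""
    dirty = membership_high(water_turbidity, 0.2, 0.8)        # degree of "dirty", 0..1
    balanced = 1.0 - membership_high(load_imbalance, 0.3, 0.9)  # degree of "balanced"
    return min(dirty, balanced) > 0.5                          # fuzzy AND = min

print(keep_spinning(0.9, 0.1))  # dirty water, balanced load -> True
print(keep_spinning(0.1, 0.1))  # clean water -> False, shut off
```

No mystery, just arithmetic over graded truth values — which is exactly the point: it still makes judgements about "the world" from its sensors.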

    We are surrounded by (boring) robots all day long. Robots that help us control our cars and do our laundry. Not to mention all the intelligent, disembodied agents that do things like organize our email, play games with us, and make trillions of little decisions that affect our lives in ways large and small.

    Somehow, though, once the mystery has yielded to math, society doesn’t believe these decision-making machines are AI any longer.

  • ZzyzxRoad@sh.itjust.works · 7 months ago

    Yes, but I’m more annoyed with posts and conversations about it that are like this one. People on Lemmy swear they hate how uninformed and stupid the average person is when it comes to AI, they hate the click bait articles etc etc. Aaand then there’s at least 5 different posts about it on the front page every. single. day., with all the comments saying exactly the same thing they said the day before, which is:

    “Users are idiots for trusting a tech company, it’s not Google’s responsibility to keep your private data safe.” “No one understands what ‘AI’ actually means except me.” “Every middle-America dad, grandma and 10 year old should have their very own self hosted xyz whatever LLM, and they’re morons if they don’t and they deserve to have their data leaked.” And can’t forget the ubiquitous arguments about what “copyright infringement” means when all the comments are actually in agreement, but they still just keep repeating themselves over and over.

  • ℕ𝕖𝕞𝕠@midwest.social · 7 months ago

    AI isn’t reserved for a human-level general intelligence. The computer-controlled avatars in some videogames are AI. My phone’s text-to-speech is AI. And yes, LLMs, like the smaller Markov-chain models before them, are AI.
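For the curious, a word-level Markov-chain "language model" of the kind mentioned above fits in a few lines — a toy sketch, nothing like a production system:

```python
import random
from collections import defaultdict

def train(text):
    """First-order Markov chain: record which word follows which."""
    chain = defaultdict(list)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, length=5, seed=0):
    """Walk the chain, picking a random recorded successor at each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        successors = chain.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

chain = train("the cat sat on the mat and the cat slept")
print(generate(chain, "the"))
```

An LLM is vastly more capable, but the family resemblance — predict the next token from what came before — is real.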

  • angstylittlecatboy@reddthat.com · 7 months ago

    I’m agitated that people got the impression “AI” referred specifically to human-level intelligence.

    Like, before the LLM boom it was uncontroversial to refer to the bots in video games as “AI.” Now it gets comments like this.

    • Loki@feddit.de · 7 months ago

      I wholeheartedly agree. People use the term “AI” nowadays to refer to a very specific subcategory of DNNs (LLMs), but it used to refer to any more or less “smart” algorithm performing… something on a set of input parameters. SVMs are AI, decision forests are AI, freaking kNN is AI. “Artificial intelligence” is a loosely defined concept; any algorithm that aims to mimic human behaviour can be called AI, and I’m getting a bit tired of hearing people say “AI” when they mean GPT-4 or Stable Diffusion.

      • Kedly@lemm.ee · edited · 7 months ago

        I’ve had freaking GAMERS tell me that “it isn’t real AI” at this point… No shit, the Elites in Halo aren’t real AI either.

        Edit: Keep the downvotes coming anti LLMers, your tears are delicious

  • PonyOfWar@pawb.social · 7 months ago

    The word “AI” has been used for way longer than the current LLM trend, even for fairly trivial things like enemy AI in video games. How would you even define a computer “thinking on its own”?

  • Uriel238 [all pronouns]@lemmy.blahaj.zone · edited · 7 months ago

    AI has, for a long time, been a Hollywood term for a character archetype (usually complete with questions about whether Commander Data will ever be a real boy). I wrote a 2019 blog piece on what it means when we talk about AI stuff.

    Here are some alternative terms you can use in place of “AI” when people are actually talking about something else:

    • AGI: Artificial General Intelligence: The big kahuna that doesn’t exist yet and that many projects are striving for, yet is as elusive as fusion power. An AGI in a robot will be capable of operating your coffee machine to make coffee or assembling your flat-packed furniture from the visual IKEA instructions. Since we still can’t define sentience, we don’t know if AGI is sentient, or if we humans are not sentient but fake it really well. Might try to murder their creator or end humanity, but probably not.
    • LLM: Large Language Model: This is the engine behind digital assistants like Siri or Alexa, and it still suffers from nuance problems. I’m used to having to ask them several times to get the results I want (say, the Starbucks or Peets that requires the least deviation from the next hundred kilometers of my route. Siri can’t do that.) This is an application of learning systems (see below), but isn’t smart enough for your household servant bot to replace your hired help.
    • Learning Systems: The fundamental programmatic magic that powers all this other stuff, from simple data scrapers to neural networks. These are used in a whole lot of modern applications, and have been since the 1970s. But they’re very small compared to the things we’re trying to build with them. Most of the time we don’t actually call it AI, even for marketing. It’s just the capacity for a program to get better at doing its thing with experience.
    • Gaming AI Not really AI (necessarily) but is a different use of the term artificial intelligence. When playing a game with elements pretending to be human (or living, or opponents), we call it the enemy AI or mob AI. It’s often really simple, except in strategy games which can feature robust enough computational power to challenge major international chess guns.
    • Generative AI: A term for LLMs that create content, say, draw pictures or write essays, or do other useful arts and sciences. Currently it requires a technician to figure out the right set of words (called a prompt) to get the machine to create the desired art to specifications. They’re commonly confused by nuance. They infamously have problems with hands (too many fingers, combining limbs together, adding extra limbs, etc.). Plagiarism and making up spontaneous facts (called hallucinating) are also common problems. And yet Generative AI has been useful in the development of antibiotics and advanced batteries. Techs successfully wrangle Generative AI, and Lemmy has a few communities devoted to techs honing their picture generation skills, and stress-testing the nuance interpretation capacity of Generative AI (often to humorous effect). Generative AI should be treated like a new tool, a digital lathe, that requires some expertise to use.
    • Technological Singularity: A bit way off, since it requires AGI that is capable of designing its successor, lather, rinse, repeat until the resulting techno-utopia can predict what we want and create it for us before we know we want it. Might consume the entire universe. Some futurists fantasize this is how human beings (happily) go extinct, either left to retire in a luxurious paradise, or cyborged up beyond recognition, eventually replacing all the meat parts with something better. Probably won’t happen thanks to all the crises featuring global catastrophic risk.
    • AI Snake Oil: There’s not yet an official name for it, but it’s a category worth identifying. When industrialists look at all the Generative AI output, they often wonder if they can use some of this magic and power to enhance their own revenues, typically by replacing some of their workers with generative AI systems; instead of having a development team, they have a few technicians who operate all their AI systems. This is a bad idea, but there are a lot of grifters trying to suggest their product will do this for businesses, often with simultaneously humorous and tragic results. The tragedy is all the people who had decent jobs and no longer do, since decent jobs are hard to come by. So long as we have top-down companies doing the capitalism, we’ll have industrial quackery being sold to executive management promising to replace human workers or force them to work harder for less or something.
    • Friendly AI: What we hope AI will be (at any level of sophistication) once we give it power and responsibility (say, the capacity to loiter until it sees a worthy enemy to kill and then kills it.) A large coalition of technology ethicists want to create cautionary protocols for AI development interests to follow, in an effort to prevent AIs from turning into a menace to its human masters. A different large coalition is in a hurry to turn AI into something that makes oodles and oodles of profit, and is eager to Stockton Rush its way to AGI, no matter the risks. Note that we don’t need the software in question to be actual AGI, just smart enough to realize it has a big gun (or dangerously powerful demolition jaws or a really precise cutting laser) and can use it, and to realize turning its weapon onto its commanding officer might expedite completing its mission. Friendly AI would choose to not do that. Unfriendly AI will consider its less loyal options more thoroughly.

    That’s a bit of a list, but I hope it clears things up.

    • ipkpjersi@lemmy.ml · 7 months ago

      I remember when OpenAI were talking like they had discovered AGI, or were a couple of weeks away from discovering it; this was around the time Sam Altman was fired. Obviously that was not true. Honestly, we may never get there, but we might.

      Good list tbh.

      Personally I’m excited and cautious about the future of AI because of the ethical implications of it and how it could affect society as a whole.

  • viralJ@lemmy.world · 7 months ago

    I remember the term AI being in use long before the current wave of LLMs. When I was a child, it was used to describe the code behind the behaviour of NPCs in computer games, and I think it still is today. So, me, no, I don’t get agitated when I hear it, and I don’t think it’s a marketing buzzword invented by capitalistic a-holes. I do think that using “intelligence” in AI is far too generous, whichever context it’s used in, but we needed some word to describe computers pretending to think, and someone, a long time ago, came up with “artificial intelligence”.

    • Rikj000@discuss.tchncs.de (OP) · 7 months ago

      Thank you for reminding me about NPCs,
      we have indeed been calling them AI for years,
      even though they are not capable of reasoning on their own.

      Perhaps we need a new term,
      e.g. AC (Artificial Consciousness),
      which does not exist yet.

      The term AI still agitates me though,
      since most of these are not intelligent.

      For example,
      earlier this week I saw a post on Lemmy,
      where a LLM suggested to a user to uninstall a package, which would definitely have broken his Linux distro.

      Or my co-workers,
      who asked the development questions I had to the LLMs they use, which have yet to generate anything useful / anything that actually works for me.

      To me it feels like they are pushing their bad beta products upon us,
      in the hopes that we pay to use them,
      so they can use our feedback to improve them.

      To me they feel neither intelligent nor conscious.

  • MeetInPotatoes@lemmy.ml · 7 months ago

    Maybe just accept it as shorthand for what it really means.

    Some examples:

    We say Kleenex instead of facial tissue, Band-Aid instead of bandage, I say that Siri butchered my “ducking” text again when I know autocorrect is technically separate.

    We also say, “hang up on someone” when there is no such thing anymore

    Hell, we say “cloud” when we really mean “someone’s server farm”

    Don’t get me started on “software as a service” too… a bullshit fancy name for a subscription website that actually has some utility.

  • TrickDacy@lemmy.world · 7 months ago

    You’re not the only one but I don’t really get this pedantry, and a lot of pedantry I do get. You’ll never get your average person to switch to the term LLM. Even for me, a techie person, it’s a goofy term.

    Sometimes you just have to use terms that everyone already knows. I suspect that for decades we will have things that function in every way like “AI” but technically aren’t. Not saying that’s the current scenario, just looking ahead to what the improved versions of ChatGPT will be like, and other future developments that probably cannot be predicted.

  • flop_leash_973@lemmy.world · 7 months ago

    The term is so overused at this point that I could probably start referring to any script I write that has conditional statements in it and convince my boss I have created our own “AI”.
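To be fair, the joke practically writes itself. A completely hypothetical "enterprise AI" in three conditionals (the ticket-routing scenario is invented for the example):

```python
def enterprise_ai(ticket):
    """State-of-the-art decision engine (three if statements)."""
    text = ticket.lower()
    if "urgent" in text:
        return "escalate"
    if "password" in text:
        return "send reset link"
    return "close as duplicate"

print(enterprise_ai("URGENT: prod is down"))        # escalate
print(enterprise_ai("I forgot my password again"))  # send reset link
```

Slap "AI-powered triage" on the slide deck and nobody in the meeting will ask to see the source.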

    • TeckFire@lemmy.world · edited · 7 months ago

      For real. Some enemies in Killzone 2 “act” pretty clever, but aren’t using anything close to an LLM, let alone “AI” as people now mean it. Yet I bet if you implemented their identical behavior in a modern 2024 game and marketed the enemies as having “AI”, everyone would believe you in a heartbeat.

      It’s just too over-encompassing. Saying “large language model technology” may not be as eye-catching, but at least it tells me you actually used that technology. Anyone can market as “AI”, and it could be an Excel formula for all I know.

      • Gabu@lemmy.world · 7 months ago

        The enemies in Killzone do use AI… the Goombas in the first Super Mario Bros. used AI. This term has been used to refer to NPC behavior since the dawn of videogames.

        • TeckFire@lemmy.world · 7 months ago

          I know. That’s not my point. I know that technically, “AI” could mean anything that gives the illusion of intelligence artificially. My use of the term was more like the OP’s: that of a machine achieving sapience, not just the illusion of it. It just comes down to definitions. I prefer to use the term a different way, and wish it were used that way, but I accept that the world does not.