• over_clox@lemmy.world (+16/-1) · 1 year ago

    I have no idea. Last night I literally got AI to give me instructions on how to shave alligator hair and how to inflate a foldable phone.

    AI is not actually intelligent, it’s a word prediction model. It’s royally ignorant, actually.

      • iByteABit [he/him]@lemm.ee (+3/-1) · 1 year ago

        That’s the thing though, it’s not a search engine.

        It’s a language prediction model, if you ask something that it has learned well and predicts correctly you’ll get a nice answer that makes you feel like it’s a search engine.

        If you ask something more obscure, or confuse it with your wording, you’ll get back garbage that hopefully doesn’t look like a right answer, because it’s much better to get an obviously useless answer than a convincingly wrong one.
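The "language prediction" idea above can be sketched with a toy bigram model (invented two-sentence corpus, nothing like a real LLM): continuations of familiar words look fluent, while an unseen word gives the model nothing to work with.

```python
import random
from collections import defaultdict

# Toy illustration of "language prediction": a bigram model built from a
# tiny corpus. Real LLMs are vastly larger, but the principle is similar:
# pick a likely next word given the previous one, with no notion of truth.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def continue_text(word, steps=4, seed=0):
    rng = random.Random(seed)
    out = [word]
    for _ in range(steps):
        options = follows.get(out[-1])
        if not options:  # unseen word: the model has nothing to go on
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(continue_text("the"))        # fluent-looking: follows seen patterns
print(continue_text("alligator"))  # unseen word: no continuation at all
```

A real model, unlike this sketch, never stops at unseen input; it interpolates something plausible-sounding instead, which is where the "alligator hair" answers come from.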

  • people_are_cute@lemmy.sdf.org (+14/-1) · 1 year ago

    It’s my dream that AI takes over middle management and bureaucracy as a whole, and we get rid of all the societal evils that come from corrupt or incompetent management in both governments and companies. Imagine if every single working person had zero ambiguity in their jobs and complete clarity on when they have to work, and on what. The world would be so much happier!

    • Rentlar@beehaw.org (+5) · 1 year ago

      Ideally it would help put real, complicated but achievable solutions forward to some of the world’s toughest issues, like poverty, hunger, war and disaster. AI is but a tool, and on its current trajectory much of its use is to advance the interests of capitalist moguls. To heed the answers of improved AI models and achieve the ideal of a harmonious world, we need to start by changing our society to work towards it, and accept a shift away from purely monetary ends.

      • perviouslyiner@lemm.ee (+1) · 1 year ago

        Any problem that can be expressed mathematically, that has a huge search space, and where human intuition doesn’t necessarily help.

        For example, if a computer can solve chess, then that same line of programming should be able to solve quantum physics and gravity.

    • SirGolan@lemmy.sdf.org (+4) · 1 year ago

      You should check out the short story Manna. It’s maybe a bit dated now but explores what could go wrong with that sort of thing.

      • people_are_cute@lemmy.sdf.org (+2) · 1 year ago

        I just read the first two chapters. Yes, it doesn’t paint a pretty picture but the dystopia portrayed in that story started with Manna being an unregulated monopoly that was given power over everything.

        In real life it perhaps wouldn’t be taken that far. All decisions would still be made and signed off by humans; AI would just be the planner/scheduler. And no tech services firm would want to get into employability tracking; they’d quickly get chewed out by regulators if their AI product started discriminating against candidates in hiring.

        • SirGolan@lemmy.sdf.org (+1) · 1 year ago

          Yeah. I mean I started reading that story and was thinking how cool it would be… Until it started going bad. Something like a GPS for whatever task you were doing at work would be cool.

  • jsveiga@feddit.nl (+9) · 1 year ago

    Automatically respond to scam calls and emails, keeping scammers overwhelmed with useless work.

    • Rentlar@beehaw.org (+3) · 1 year ago

      The real robot wars will be the Scam Call AI vs. the Scam Call Answering AI.

      It will be like a new version of chess: Bobby Phisher vs. Magnus Callusthen

  • PonyOfWar@pawb.social (+9) · 1 year ago

    Tax it. If corporations use it to replace employees, they should at least also have to contribute to the improvement of society.

  • ClamDrinker@lemmy.world (+7) · 1 year ago

    I mostly see psychological benefits:

    • Building confidence in writing and (when roleplaying) in interacting with other people. LLMs don’t shame or get needlessly hostile, and since they follow your own style it can feel like talking to a friend.
    • Related to that, the ability to help in processing traumatic events by writing about them.

    For me personally, interacting with AI has helped me conquer some fears and shame that I buried long ago.

  • ArtemonBruno@lemmy.ml (+7) · 1 year ago

    Labour-less advancement. Humans have passed down centuries of advancement through language.

    • Version 1: automation of rough, individual labour steps, specified in restrictive detail.
    • Version 2: multiple labour steps compiled together, chosen from a restrictive set of options. (Less supervision per command)
    • Version 3: whole-task automation. (Communicate a wish and the entire chain of labour is taken care of; no supervision of production, only demands on the output)

    The automation has gone from “multiple coffee gadgets” to “one standard coffee button” to “a warm coffee with less sugar, please”. (Now, where is the human’s place in this picture? A balloon-person like in the Wall-E movie?)

  • Mothra@mander.xyz (+7) · 1 year ago

    Currently the obvious use is to help people express their thoughts in words. It’s helped me a lot writing out resumes and cover letters. This can be extended to languages other than our main/first one.

    It’s also great to narrow down research on a personal scale in areas where if you have no expertise it would be very hard for you to figure out what you are looking for. I’ve used it to ID plants, insects and diseases successfully. I didn’t get a precise result from ChatGPT, but that’s not what I asked. I just requested pointers in the right direction. It delivered.

    The next obvious implementation is with software interfaces. I’ve already used it (unsuccessfully) to work with Unreal Engine and other 3D software. I got half-baked results because the models were not trained specifically for the software in question. But if they were, it would be very easy to just ask the software how to do something instead of searching everywhere for potential answers. That doesn’t sound too far-fetched, and I hear it’s a feature that will become standard.

        • crazystuff@discuss.online (+1) · 1 year ago

          I’d say it works pretty well most of the time, probably depends on the coding language. I use it regularly for PHP/Laravel and JS, and still get surprised when it delivers full working functions from a comment.

          There’s a free trial, give it a try

    • howrar@lemmy.ca (+2) · 1 year ago

      I’ve never had much success having Copilot write actual code. Where it’s been very helpful is in writing documentation and boilerplate, and just being a very smart autocomplete. That alone has saved me so much time and energy already.

    • SirGolan@lemmy.sdf.org (+1) · 1 year ago

      I’m curious about this. What model were you using? A few people at my game dev company have said similar things about it not producing good code for unity and unreal. I haven’t seen that at all. I typically use GPT4 and Copilot. Sometimes the code has a logic flaw or something, but most of the time it works on the first try. I do at this point have a ton of experience working with LLMs so maybe it’s just a matter of prompting? When Copilot doesn’t read my mind (which tends to happen quite a bit), I just write a comment with what I want it to do and sometimes I have to start writing the first line of code but it usually catches on and does what I ask. I rarely run into a problem that is too hairy for GPT4, but it does happen.

      • Mothra@mander.xyz (+2) · 1 year ago

        I’m not sure if my answer is correct: I tried using ChatGPT to help me with Unreal in February/March this year, and I can’t recall which model.

        As for my query- I’m an artist, not a coder. I found ChatGPT would usually point me in the right direction if I had a simple interface question, but not when dealing with materials… Or the sequencer. I haven’t used Copilot though.

        • SirGolan@lemmy.sdf.org (+2) · 1 year ago

          Ahh ok that makes sense. I think even with GPT4, it’s still going to be difficult for a non-programmer to use for anything that isn’t fairly trivial. I still have to use my knowledge of stuff to know the right things to ask. In Feb or Mar, you were using GPT3 (4 requires you to pay monthly). 3 is much worse at everything than 4.

  • birdcat@lemmy.ml (+6) · 1 year ago

    I see endless possibilities, but it’s questionable if any of them are realistic before we overcome capitalism.

    But one idea I really like is AI helping with the implementation of sortition for democratic decision-making in government.

    Recently, the concept got some attention due to climate protesters demanding it, which I think is nice. So while I don’t want to discuss the concept and where it should be applied, here’s what (future) AIs could do:

    1. Enhanced Random Selection Process: AI can ensure a representative selection from the population for sortition by analyzing demographic data and employing stratified sampling algorithms.

    2. Personalized Education and Communication: Once participants are selected, AI could offer personalized learning paths to prepare them for their role, and adapt communication to suit each participant’s unique circumstances.

    3. Facilitating Communication and Mediation: AI can manage communication among the selected group by setting up secure environments for discussion, and serving as an impartial mediator to promote fairness and respectfulness during deliberations.

    4. Information Provision, Fact-Checking, and Bias Detection: AI can provide relevant, unbiased information on complex topics, perform real-time fact-checking, and monitor discussions for potential biases.

    5. Emotion and Sentiment Analysis: As discussions take place, AI could detect the emotional states and sentiments of participants, ensuring decisions are not overly influenced by emotional reactions.

    6. Advanced Simulation and Scenario Exploration: AI could create sophisticated simulations to help participants understand potential outcomes of the policies they are considering.

    7. Public Accountability and Feedback Collection: After decisions are made, AI can ensure transparency in decision-making by tracking and reporting the progress of the deliberations, and collecting public feedback on the decisions made.

    I should probably add that this list was made with the help of GPT 😅 so a more direct answer to your question might be: AI can help humans lay out their ideas and foster discussions.
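Point 1 of the list above (representative selection via stratified sampling) can be sketched in a few lines. This is a hypothetical toy: the population, the `area` field, and the seat-rounding rule are all invented for illustration.

```python
import random
from collections import defaultdict

# Stratified selection for sortition (toy sketch): draw panel seats from
# each demographic stratum in proportion to its share of the population,
# so the panel roughly mirrors the whole.
def stratified_panel(people, key, panel_size, seed=42):
    """people: list of dicts; key: function giving a person's stratum."""
    strata = defaultdict(list)
    for person in people:
        strata[key(person)].append(person)

    rng = random.Random(seed)
    panel = []
    for group in strata.values():
        # proportional allocation, rounded to the nearest seat
        seats = round(len(group) / len(people) * panel_size)
        panel.extend(rng.sample(group, min(seats, len(group))))
    return panel

# Invented population: 70% urban, 30% rural
population = (
    [{"name": f"urban-{i}", "area": "urban"} for i in range(70)]
    + [{"name": f"rural-{i}", "area": "rural"} for i in range(30)]
)
panel = stratified_panel(population, key=lambda p: p["area"], panel_size=10)
# a 70/30 split should yield roughly 7 urban and 3 rural panellists
```

A real implementation would stratify on several attributes at once (age, region, income) and handle rounding so seats always sum to the panel size; this only shows the core idea.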

    • Oyster_Lust@lemmy.world (+3) · 1 year ago

      That’s the scariest thing I’ve read in a long time. I’ve gotten so many completely made-up “facts” from AI that I wouldn’t want to hand it the keys to my car, much less my freedom. It even cites its sources, which don’t exist if you actually check them. The fact that the creators can’t even explain why this happens makes it even scarier. I’m not scared of AI. I’m just scared of people trusting it. It’s about as trustworthy as a politician, but arguably a lot smarter.

      • birdcat@lemmy.ml (+2/-1) · edited · 1 year ago

        Not generally disagreeing with you, but I doubt the following:

        “thing I’ve read”

      • howrar@lemmy.ca (+1) · 1 year ago

        What do you mean we can’t explain it? It’s designed specifically to make up text that is statistically likely. If it doesn’t have anything similar in its training data, it will try to extrapolate, and that gives you hallucinations.

    • berkeleyblue@lemmy.world (+2/-3) · 1 year ago

      Capitalism isn’t the problem here, it’s unregulated capitalism that doesn’t work.

      Also, you can dislike it, but so far capitalism tempered with socialism is the best system we have. The best countries in terms of human happiness and opportunity (think the Scandinavian states especially, and most of central Europe generally) are capitalist democracies. We, however, realised, unlike the US, that you can’t just let corporations do anything they want, and that the state has an obligation to provide services and help to its people.

      This anti-capitalist sentiment is so common, and so little founded in reality, that it feels like a mere buzzword at this point.

  • rufus@discuss.tchncs.de (+5) · 1 year ago

    We could outsource all the bureaucracy to machines. We could have entire data centres applying for things and sending the applications to another data centre, where they get denied, redone, and so on; doing contracts, billing people, paying bills by billing yet other people.

    Humankind would just need to supply power, and meanwhile I could go hiking in the mountains and have every Thursday and Friday off, because there would be no paperwork around anymore.

  • 0x4E4F@lemmy.rollenspiel.monster (+4) · 1 year ago

    I use it to summarize search results on a certain topic, like what packages hold this or that library, stuff like that… or as a more comprehensive man page generator.

  • UnsyllabledQuickies@lemmy.world (+3) · 1 year ago

    That’s a great question! It’s something I think about a lot. This is probably gonna sound sarcastic, but I mean it genuinely: Have you asked ChatGPT (or any other LLM) that question? I’d be curious to hear what it might have to say. Of course, its first few answers are probably gonna be just generic, useless stuff, so you’ll have to really drill down into details to find something useful. But you might be able to find some good ideas in there.

    Here are two things that immediately came to mind:

    • Democratization of knowledge and expertise. Think of the many people that now have access to (e.g.) a virtual doctor just because they have an internet connection. As with everything I’m going to say, this comes with the big caveat that nobody should trust LLMs unquestioningly and that they definitely hallucinate and confabulate frequently. Still, though, they can potentially provide quick diagnoses and relevant, immediate, life-saving information in situations where it’s difficult or impossible to get an appointment with a doctor.

    • Handling information problems. I heard someone say recently that because LLMs are likely to be used for spam, ads, propaganda, and other kinds of information distortions and abuses, LLMs will also be the only systems capable of combating those things. For example, if people start using LLMs to write spam emails, then LLMs will almost certainly have to become part of the spam detection process. But even in cases where information isn’t being used maliciously, we still struggle with information overload. LLMs are already being used to sift through (e.g.) the daily news, pick out the top few most important articles, and summarize them for readers. Finding a signal among the noise is actually quite important for all parts of life, so augmenting our ability to do that could be very useful.

    I suspect those answers might be broader and larger-scale than what you were asking for. If so, I apologize!
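The spam-fighting point in the second bullet has a long pre-LLM history: statistical filters already classify text by how its word frequencies compare across classes. A minimal classical Naive Bayes filter, with invented toy messages, sketches the family of techniques involved (real spam detection for LLM-generated text would be far more elaborate):

```python
import math
from collections import Counter

# Minimal Naive Bayes spam filter: a toy sketch of the "use models to
# catch model-generated text" idea. The training messages are invented.
spam = ["win free prize now", "free money click now", "claim your prize"]
ham = ["meeting moved to noon", "lunch tomorrow ?", "see you at noon"]

def train(docs):
    counts = Counter(w for d in docs for w in d.split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam)
ham_counts, ham_total = train(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_prob(words, counts, total):
    # Laplace smoothing so unseen words don't zero out the probability
    return sum(
        math.log((counts[w] + 1) / (total + len(vocab))) for w in words
    )

def is_spam(message):
    words = message.split()
    return log_prob(words, spam_counts, spam_total) > log_prob(
        words, ham_counts, ham_total
    )

print(is_spam("free prize now"))    # True
print(is_spam("see you at lunch"))  # False
```

The same compare-two-models structure underlies heavier approaches; only the probability model changes.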

    • perviouslyiner@lemm.ee (+1) · 1 year ago

      It’s curious to hear AI detection described as a feature, given that it’s just the same machine being used ‘in reverse’; that arms race will just leave humans unable to know what is real.

  • hoshikarakitaridia@sh.itjust.works (+1/-1) · 1 year ago

    For now, I think LLMs are not relevant in their current form, other than helping with writing out ideas.

    I think in the future there will be an LLM that uses Google searches to infer specific information: basically the assistant that every tech company on this planet pretended to have at some point, but one that actually exists.

    Other than that, probably not much. Maybe translation, maybe predicting the truthfulness of information, maybe converting data or writing code, but all these things require a variety of specialized AIs designed for those specific use cases. I don’t see that being commonly used until 2030. Maybe we’ll have some of these things by 2025, but I have a feeling they will be just okay, and not good enough to really substitute for humans.
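The "LLM that uses searches" idea is usually called retrieval-augmented generation: search first, then answer only from what was retrieved. The sketch below is hypothetical; `search` and `generate` are stand-in stubs, not a real search engine or model API.

```python
# Retrieval-augmented answering, sketched with stand-in stubs.
def search(query):
    # stand-in for a web search; returns snippets matching the query
    fake_index = {
        "eiffel tower height": ["The Eiffel Tower is 330 m tall."],
    }
    return fake_index.get(query.lower(), [])

def generate(prompt):
    # stand-in for an LLM call; here it just echoes the supplied context
    return prompt.split("Context: ", 1)[-1]

def answer(question, search_query):
    snippets = search(search_query)
    if not snippets:
        return "I don't know."  # refuse rather than extrapolate
    prompt = f"Question: {question}\nContext: {' '.join(snippets)}"
    return generate(prompt)

print(answer("How tall is the Eiffel Tower?", "eiffel tower height"))
```

The design point is the `if not snippets` branch: grounding the model in retrieved text, and refusing when nothing is found, is what separates this pattern from a bare chatbot guessing.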