It’s called Pi and it’s a conversational AI made to be more of a personal assistant. In the bit of time I’ve used it, it’s done far better than I expected at reframing and simplifying my thoughts when I’m overwhelmed.

Obviously, talking to a real person is much better if possible, but the reality is some of us don’t have the finances to pay for therapy or other ways to cope with the anxiety/depression that so often comes with ASD. What are your thoughts on this?

  • shootwhatsmyname@lemm.ee (OP) · 9 months ago
    Agreed. I’ve dabbled in it some, but I’m no expert; maybe someone else could chime in. I just haven’t found anything that works quite as well as Pi yet, and it was really intriguing, to say the least. You can even talk to it verbally, back and forth, like a phone call.

    • Nerd02@lemmy.basedcount.com · 9 months ago
      I am no expert either, but I once trained and ran an AI chat bot of my own. With a decently powerful Nvidia GPU it could output a message every 20 seconds or so (which is still too slow if you want to keep the conversation at a decent pace). I also tried it without a GPU, just running on my CPU (on a PC with an AMD GPU, which for ML applications is about the same as not having one), and it was of course noticeably slower: about 3 minutes per message, give or take.
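
      For anyone curious what running a chat bot of your own looks like in practice, here is a minimal sketch using the Hugging Face transformers library. The model name and generation settings are illustrative assumptions, not my exact setup; it just shows where the GPU-vs-CPU gap comes from.

      ```python
      # Minimal sketch of running a small language model locally with the
      # Hugging Face `transformers` library. The model and settings here
      # are illustrative assumptions, not a specific recommendation.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_name = "distilgpt2"  # a tiny model; real chat models are far larger

      # Use the GPU if one is available; otherwise fall back to the (much slower) CPU.
      device = "cuda" if torch.cuda.is_available() else "cpu"

      tokenizer = AutoTokenizer.from_pretrained(model_name)
      model = AutoModelForCausalLM.from_pretrained(model_name).to(device)

      prompt = "Hello, how are you today?"
      inputs = tokenizer(prompt, return_tensors="pt").to(device)

      # Generation is where the GPU/CPU gap shows up: every new token
      # requires a full forward pass through the model.
      outputs = model.generate(
          **inputs,
          max_new_tokens=50,
          pad_token_id=tokenizer.eos_token_id,
      )
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))
      ```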

      And bear in mind, this was with an old and comparatively tiny model; something like Pi would be much more demanding. The replies my model produced hardly made any sense most of the time.

    • TheBluePillock@lemmy.world · 9 months ago

      I would love to be corrected, but when I looked into it, it sounded like you’d probably want 32 GB of VRAM or better for actual chat ability. You have to have enough memory to load the model, and anything not handled by your GPU takes a major performance hit. Then you probably want to aim for a 72-billion-parameter model. That’s a decently conversational level and maybe close to the one you’re using (though it’s possible they’re higher? I’m just guessing). I think 34B models are comparatively more prone to hallucination and inaccuracy. It sounded like 32 GB of VRAM was kind of the entry point for the 72B models, so I stopped looking, because I can’t afford that. A rough back-of-the-envelope calculation is sketched below.
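
      These numbers are my own rough estimate, not anything official: model weights alone need roughly (parameter count) × (bytes per parameter), before any overhead for activations or context.

      ```python
      # Back-of-the-envelope VRAM estimate: weights alone need roughly
      # (parameter count) x (bytes per parameter). Overhead for activations
      # and the context cache comes on top, so treat these as lower bounds.
      def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
          return params_billions * 1e9 * bytes_per_param / 1024**3

      for params in (34, 72):
          for label, bytes_pp in (("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)):
              print(f"{params}B @ {label}: ~{weight_memory_gb(params, bytes_pp):.0f} GB")

      # 72B @ fp16 needs ~134 GB just for weights; even at 4-bit it needs
      # ~34 GB, which is why ~32 GB of VRAM is roughly the entry point.
      ```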

      So somebody with more experience or knowledge can hopefully correct me or give a better explanation, but just in case, maybe this is a helpful starting point for someone.

      You can download models from huggingface.co and interact with them through a web UI like this one.
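
      If you’d rather script it than click through a web UI, the download can also be done programmatically with the huggingface_hub library; the repo ID below is just an illustrative example, not a recommendation.

      ```python
      # Sketch of fetching model files from huggingface.co with the
      # `huggingface_hub` library; web UIs do something equivalent
      # under the hood.
      from huggingface_hub import snapshot_download

      # "gpt2" is an illustrative repo ID, not a model suited for chat.
      local_path = snapshot_download(repo_id="gpt2")
      print(f"Model files downloaded to: {local_path}")
      ```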