Direct link to the GitHub repo:
https://github.com/nickbild/local_llm_assistant?tab=readme-ov-file

It's a small model by comparison. If you want something offline that's actually closer to ChatGPT 3.5, you'll want the Mixtral 8x7B model instead (running on a beefy machine):
Sick, I only need 90 GB of VRAM!
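For anyone wondering where a number like that comes from, here's a rough back-of-envelope sketch (the parameter count is my own approximation, not from the repo above): weight memory is roughly parameter count times bytes per parameter, before you even count activations or the KV cache.

```python
def weight_gb(n_params: float, bytes_per_param: float) -> float:
    """Approximate memory for model weights alone, in GB."""
    return n_params * bytes_per_param / 1e9

MIXTRAL_PARAMS = 46.7e9  # total parameters for Mixtral 8x7B (approximate)

fp16 = weight_gb(MIXTRAL_PARAMS, 2.0)   # ~93 GB: hence "90 GB of VRAM"
q4   = weight_gb(MIXTRAL_PARAMS, 0.5)   # ~23 GB: why 4-bit quants are popular

print(f"fp16: {fp16:.0f} GB, 4-bit: {q4:.0f} GB")
```

That's also why quantized versions of big models can squeeze onto a single 24 GB card plus system RAM.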
I’ve got it running with a 3090 and 32GB of RAM.
There are some runtimes that let you split the model between system RAM and VRAM (it'll just be slower than running it entirely in VRAM).
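In llama.cpp-style runners, that hybrid split is controlled by how many transformer layers you offload to the GPU (the `-ngl` / `--n-gpu-layers` flag); the rest stay in system RAM and run on the CPU. A rough sketch for picking that number; the per-layer size here is an illustrative assumption, measure your own model:

```python
def layers_that_fit(vram_gb: float, layer_gb: float, overhead_gb: float = 2.0) -> int:
    """How many layers fit in VRAM after reserving some overhead
    for the KV cache, scratch buffers, and the desktop itself."""
    usable = max(vram_gb - overhead_gb, 0.0)
    return int(usable // layer_gb)

# Hypothetical numbers: a 24 GB RTX 3090 and ~0.7 GB per layer
# for a 4-bit quantized Mixtral-class model.
n_gpu_layers = layers_that_fit(24.0, 0.7)
print(n_gpu_layers)  # pass this as -ngl; the remaining layers run from system RAM
```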
Yeah but damn does it get slow.
I always find it interesting how text is so much slower than image generation. I can do a 1024x1024 in probably 20s, but I get like 1 word a second with text.
Languages are complex and, more importantly, much less forgiving of errors.
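There's also a mechanical reason for the gap (my own note, not claimed above): a diffusion image generator runs a fixed number of denoising steps per image, while an LLM is autoregressive and needs one full forward pass per generated token. With illustrative numbers:

```python
def seconds_per_output(passes_needed: int, passes_per_second: float) -> float:
    """Wall-clock time if each big forward pass costs roughly the same."""
    return passes_needed / passes_per_second

# Illustrative assumption: the hardware manages ~1.5 large forward passes/second.
image_time = seconds_per_output(30, 1.5)    # ~30 diffusion steps per image -> 20 s
text_time  = seconds_per_output(200, 1.5)   # 200 tokens, one pass each -> ~133 s

print(image_time, text_time)
```

So even at the same per-pass speed, a paragraph of text can take several times longer than an image.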
Hopefully we'll see more dedicated hardware for this. Like extension cards with pretty much just tensor cores and their own RAM.
I’d love to see some consumer level AI stuff, sadly it all seems to be designed for server farms and by the time it ages out into consumer prices it’s so obsolete there’s no point in getting it.
Do they want consumer AI cards to exist though?
Think about the data!
Card makers? They only want money; if there's enough consumer-level demand, they'll make them.
I guess you're right.
Nice! That's a cool project, I'll have to give it a try. I love the idea of self-hosting local LLMs. I've been playing around with https://lmstudio.ai/ and it downloads directly from Hugging Face.
There's also Ollama, which seems similar. Not sure if LM Studio is open source, but Ollama is.
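For reference, once Ollama is running it exposes a small local REST API (default port 11434). A minimal sketch of talking to it from Python; the model name and prompt are just examples:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> dict:
    """JSON body for Ollama's /api/generate endpoint.
    stream=False asks for one complete JSON response instead of chunks."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server."""
    body = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# To actually call it (requires `ollama pull mistral` and the server running):
# print(generate("mistral", "Why is the sky blue? One sentence."))
```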
Can we have smaller, more domain-specific models that don't require more than casual hardware? Like a small model for coding, one for medicine, one for history, and so on?
Check out Hugging Face! Honestly, fine-tuned models for specific domains seem very popular (if for nothing else, because training smaller models is just easier!).
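As a sketch of that workflow with the Hugging Face `transformers` library: pick a specialist model per domain and load it with a pipeline. The model IDs below are real but purely illustrative picks on my part (check sizes and licenses before using them), and actually loading one downloads several GB.

```python
# Map a domain to a smaller specialist model on the Hugging Face Hub.
# Illustrative suggestions only, not endorsements.
DOMAIN_MODELS = {
    "coding": "codellama/CodeLlama-7b-hf",
    "medicine": "microsoft/biogpt",
}

def model_for_domain(domain: str) -> str:
    """Look up the configured specialist model for a domain."""
    try:
        return DOMAIN_MODELS[domain]
    except KeyError:
        raise ValueError(f"no specialist model configured for {domain!r}")

# To actually run one (requires `pip install transformers` plus a download):
# from transformers import pipeline
# generator = pipeline("text-generation", model=model_for_domain("medicine"))
# print(generator("Common symptoms of influenza include", max_new_tokens=30))
```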
Unfortunately the roleplaying chatbot type models are typically fairly sizeable / demanding. I'm curious how this will develop with more specific AI hardware though, like extension cards with primarily tensor cores plus their own RAM, so that you don't have to use your GPU for that. If we can drag down the price of such hardware, then locally run models could become much more viable and mainstream.
Dude, sorry to say, but roleplay is not as important as medicine or coding XD
For me they are. I have no use for medicine or coding bots.
But you do have a use for the very software you rely on daily, and for developments in medicine.
I play D&D from time to time, but saying that roleplaying is more important than medicine is just nuts.
Not so much for the latter but I’m pretty specifically talking about my personal use case here. lol “Roleplaying” in this scenario isn’t really referring to actual tabletop type RPGs btw. It’s the LLM roleplaying specific characters or personas that you then chat with in specific (or not so specific) scenarios. Although that same tech is also experimented with to be used in video games for NPCs. But who knows. A specifically trained model could potentially make a half decent dungeon master too.
There's also a huge amount of training, medical and otherwise, that's done through role-playing. I could definitely see medical students getting use out of learning telemedicine with LLMs that were ultimately adapted from TTRPG character-generator schemas.
That’s gonna be a no from me dawg