You can try a smaller IQ3 imatrix quantization to speed it up, but 22B is indeed tight for 8GB.
If someone comes out with an AQLM for it, it might completely fit in VRAM, but I’m not sure it would even work for a Pascal card TBH.
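If you want to roll your own IQ3 imatrix quant with llama.cpp, the flow is roughly this. Binary names and flags are from recent llama.cpp builds, and the file paths plus the IQ3_M choice are just illustrative, so treat it as a sketch:

```shell
# Build an importance matrix from some calibration text, then quantize with it.
# IQ3_M is one of several IQ3 variants; smaller ones (IQ3_XS, IQ3_XXS) trade
# quality for VRAM.
./llama-imatrix -m model-f16.gguf -f calibration.txt -o imatrix.dat
./llama-quantize --imatrix imatrix.dat model-f16.gguf model-IQ3_M.gguf IQ3_M
```

The imatrix step is what keeps the low-bit quants usable; skipping it at IQ3 sizes usually hurts noticeably.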
I hate turn based combat too, but it was super enjoyable in coop. And it’s quite good for being turn based.
It’s also real-time outside of combat, FYI.
For solo, I’d probably get the mod that automates your companions, and reduce the difficulty to your taste to compensate.
I’m sorry, but Democratic leadership, the Washington Post, and the New York Times are not “right wing.” The last Republican presidential candidate the NYT endorsed was Dwight D. Eisenhower in 1956.
They’re right wing to you and Lemmy, but Lemmy is not the center of America’s political compass. And I’m speaking as a rabid DJT hater who votes straight-ticket Democrat, bar one primary I registered Republican in just so I could vote against DJT.
Especially if you’re mega rich.
“Well, one lesson I’ve learned is that just because I say something to a group and they laugh doesn’t mean it’s going to be all that hilarious as a post on X,” he said in a follow-up post early Monday. “Turns out that jokes are WAY less funny if people don’t know the context and the delivery is plain text.”
I knew people like this in real life, who’d say something horrible and follow it up with “It’s just a joke,” but only if they ‘lose’ and are called out on it.
They’re slimy jerks, and it’s utterly miserable to even be around them. And I don’t understand why so many would worship/follow Elon and dwell on Twitter for it.
Because gun violence is more of a risk than ever.
Honestly, I would feel better with the sniper there, so they could stop a random nut who shows up with an assault rifle. Which is really sad.
It says center and center-right outlets have a left bias.
Which outlets, specifically?
Still an understatement, it deserves it and more.
I don’t even like turn-based games. I don’t like most high fantasy. But holy moly, what a ride BG3 is.
I’m just gonna be pissed if their mixed support of modding (due to WotC) kills the modding community. If Skyrim and Rimworld can have a whole universe of fan content, BG3 should too.
She endorsed Biden before. It wasn’t really a surprise.
Is this why everyone is downvoting the fact checker? Because they don’t like it saying their preferred outlets have a bias?
The Guardian does have a left bias; it’s pretty obvious. That’s not a bad thing.
It’s still everywhere in my news/internet diet.
It’s bleeding, for sure, but it’s big. It’s gone bad. But I think it’s premature to say its collapse is a good thing, because it just won’t go away.
It’s not dead though, it’s still linked to everywhere, from big news to niche communities because it still has that critical mass and inertia.
And I have to be cynical of the Fediverse, but realistically, what replaces it, at least here in the US? Discord? No, thanks, I’d at least rather have information be public.
I’m speaking as someone who has never used Twitter, but I can’t ignore it, as much as I’d like to.
The behavior is configurable, just like it is on Linux; UAC can be set to require a password every time.
But I think it’s not set this way by default because many users don’t remember their passwords, lol. You think I’m kidding? You should meet my family…
Also, scripts can do plenty without elevation, on Linux or Windows.
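For anyone curious, the password-every-time behavior is a documented UAC policy. A minimal sketch, assuming you're editing the registry directly from an elevated prompt (the same setting is exposed in the Local Security Policy UI):

```shell
:: Make UAC prompt for credentials (not just a consent click) on elevation.
:: ConsentPromptBehaviorAdmin = 1 means "prompt for credentials on the
:: secure desktop"; the usual default (5) is a consent click for
:: non-Windows binaries.
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" ^
    /v ConsentPromptBehaviorAdmin /t REG_DWORD /d 1 /f
```

Takes effect on the next elevation prompt; no reboot needed in my experience.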
That actually is weird.
The problem is that splitting models up over a network, even over LAN, is not super efficient. The entire set of weights needs to be run through for every token (roughly half a word), so activations cross the network at every split for each one.
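Some back-of-envelope arithmetic on why that hurts. The numbers here are illustrative assumptions, not measurements of any real deployment:

```python
# Why pipeline-splitting an LLM over a network caps your speed:
# each generated token must traverse every pipeline boundary in sequence,
# so per-token latency is at least compute time plus one RTT per hop.
def max_tokens_per_sec(num_hops: int, rtt_s: float, compute_s: float) -> float:
    """Upper bound on generation speed for a pipeline-split model."""
    return 1.0 / (compute_s + num_hops * rtt_s)

# Hypothetical: a 70B model split across 8 peers, 50 ms RTT between peers,
# 100 ms of total compute per token:
print(round(max_tokens_per_sec(8, 0.050, 0.100), 2))  # 2.0 tokens/sec ceiling
```

Even with free compute, eight 50 ms hops alone limit you to ~2.5 tokens/sec, which is why a single local GPU running a heavily quantized model often wins.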
And the other problem is that Petals just can’t keep up with the crazy dev pace of the LLM community. Honestly, they should dump it and fork or contribute to llama.cpp or exllama, because TBH no one wants to split up Llama 2 (or even Llama 3) 70B and be a generation or two behind, running a base instruct model instead of a finetune.
Even the horde has very few hosts relative to users, even though hosting a small model on a 6GB GPU would get you lots of karma.
The diffusion community is very different, as the output is one image and even the largest open models are much smaller. LoRA usage is also standardized there, while it is not in LLM land.
If they silently ignore this (as they seem to be doing?), it just screams “have your cake and eat it too” in regards to whatever WotC imposed on them.
Technically they did not violate the contract. Maybe.
What? You want us to fix this, WotC? Well, you see, that would be quite expensive…
Facebook just didn’t release the code for llama imagegen.
The model you are looking for now is Flux.
TBH this is a great space for modding and local LLM/LLM “hordes”
Oh, and you HAVE to try the new Qwen 2.5 14B.
The whole lineup is freaking sick, with the 32B outscoring Llama 3.1 70B in a lot of benchmarks, and in personal use it feels super smart.