Note that this is not the only way to run Qwen 3.5 397B on consumer devices; there are excellent ~2.5 BPW quants available that make it viable for 128GB devices.
I've had great success (~20 t/s) running it on an M1 Ultra with room for 256k context. Here are some lm-evaluation-harness results I ran against it, plus more details of my experience:
- https://huggingface.co/ubergarm/Qwen3.5-397B-A17B-GGUF/discu...
- https://huggingface.co/ubergarm/Qwen3.5-397B-A17B-GGUF/discu...
- https://gist.github.com/simonw/67c754bbc0bc609a6caedee16fef8...
Overall an excellent model to have for offline inference.
Aurornis [3 hidden]5 mins ago
The method in this link is already using a 2-bit quant. They also reduced the number of experts per token from 10 to 4 which is another layer of quality degradation.
In my experience the 2-bit quants can produce sensible output for short prompts, but they aren’t useful for real work over longer sessions.
This project couldn’t even get useful JSON out of the model because it can’t produce the right token for quotes:
> 2-bit quantization produces \name\ instead of "name" in JSON output, making tool calling unreliable.
tarruda [3 hidden]5 mins ago
I can't say anything about the OP method, but I already tested the smol-IQ2_XS quant (which has 2.46 BPW) with the pi harness. I did not do a very long session because token generation and prompt processing get very slow, but I think I worked up to ~70k context and it maintained a lot of coherence in the session. IIRC the GPQA diamond is supposed to exercise long chains of thought and it scored exceptionally well with 82% (the original BF16 official number is 88%: https://huggingface.co/Qwen/Qwen3.5-397B-A17B).
Note that not all quants are the same at a certain BPW. The smol-IQ2_XS quant I linked is pretty dynamic, with some tensors having q8_0 type, some q6_k and some q4_k (while the majority is iq2_xs). In my testing, this smol-IQ2_XS quant is the best available at this BPW range.
Eventually I might try a more practical eval such as terminal bench.
Aurornis [3 hidden]5 mins ago
> I did not do a very long session
This is always the problem with the 2-bit and even 3-bit quants: They look promising in short sessions but then you try to do real work and realize they’re a waste of time.
Running a smaller dense model like 27B produces better results than 2-bit quants of larger models in my experience.
amelius [3 hidden]5 mins ago
> This is always the problem with the 2-bit and even 3-bit quants: They look promising in short sessions but then you try to do real work and realize they’re a waste of time.
It would be nice to see a scientific assessment of that statement.
singpolyma3 [3 hidden]5 mins ago
Lots of people seem to use 4bit. Do you think that's worth it vs a smaller model in some cases?
Aurornis [3 hidden]5 mins ago
4 bit is as low as I like to go. There are KLD and perplexity tests that compare quantizations where you can see the curve of degradation, but perplexity and KLD numbers can be misleading compared to real world use where small errors compound over long sessions.
In my anecdotal experience I’ve been happier with Q6 and dealing with the tradeoffs that come with it over Q4 for Qwen3.5 27B.
hnfong [3 hidden]5 mins ago
Generally the perplexity charts indicate that quality drops significantly below 4-bit, so in that sense 4-bit is the sweet spot if you're resource constrained.
simonw [3 hidden]5 mins ago
The project doesn't just use 2-bit - that was one of the formats they tried, but when that didn't give good tool calls they switched to 4-bit.
tarruda [3 hidden]5 mins ago
In my case the 2.46 BPW quant has been working flawlessly for tool calling, so I don't think 2-bit was the culprit for the JSON failures.
They did reduce the number of experts, so maybe that was it?
arjie [3 hidden]5 mins ago
What's the tok/s you get these days? Does it actually work well when you use more of that context?
By the way, it's been a long time since I last saw your username. You're the guy who launched Neovim! Boy what a success. Definitely the Kickstarter/Bountysource I've been a tiny part of that had the best outcome. I use it every day.
I ran llama-bench a couple of weeks ago when there was a big speed improvement on llama.cpp (https://github.com/ggml-org/llama.cpp/pull/20361#issuecommen...).
So it starts at 20 tps tg and 190 tps pp with empty context and ends at 8 tps tg and 40 tps pp with 250k prefill.
I suspect that there are still a lot of optimizations to be implemented for Qwen 3.5 on llama.cpp, wouldn't be surprised to reach 25 tps in a few months.
> You're the guy who launched Neovim!
That's me ;D
> I use it every day.
So do I for the past 12 years! Though I admit in the past year I greatly reduced the amount of code I write by hand :/
hnfong [3 hidden]5 mins ago
Apologies to others for the offtopic comment, but thank you so much for neovim. I started using Vim 25 years ago and I almost don't know how to type without a proper Vi-based editor. I don't write as much code these days, but I write other stuff (which definitely needs to be mostly hand written) in neovim and I feel so grateful that this tool is still receiving love and getting new updates.
tarruda [3 hidden]5 mins ago
> in neovim and I feel so grateful that this tool is still receiving love and getting new updates.
@justinmk deserves the credit for this!
terhechte [3 hidden]5 mins ago
Thank you for NeoVim! I also use it every day, mostly for thinking / text / markdown though these days.
Have you compared against MLX? Sometimes I’m getting much faster responses but it feels like the quality is worse (eg tool calls not working, etc)
tarruda [3 hidden]5 mins ago
> Have you compared against MLX?
I don't think MLX supports similar 2-bit quants, so I never tried 397B with MLX.
However I did try 4-bit MLX with other Qwen 3.5 models and yes, it is significantly faster. I still prefer llama.cpp due to it being an all-in-one package:
- SOTA dynamic quants (especially ik_llama.cpp)
- amazing web ui with MCP support
- Anthropic/OpenAI-compatible endpoints (meaning it can be used with virtually any harness)
- JSON constrained output, which basically ensures tool call correctness (quick sketch below)
- routing mode
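As a rough illustration of those endpoint and JSON-constraint points, here is a minimal sketch of requesting JSON-constrained output from a locally running llama-server through its OpenAI-compatible endpoint. The port, model name, and the exact response_format options accepted are assumptions and may vary by server version:

    import requests

    # Hypothetical local setup: llama-server listening on its default port.
    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",
        json={
            "model": "qwen3.5-397b",  # placeholder; many local servers ignore this
            "messages": [
                {"role": "user", "content": "Give me JSON like {\"name\": ...} for Ada Lovelace."}
            ],
            # Ask the server to constrain decoding to valid JSON (grammar-based
            # in llama.cpp); check your server version for json_schema support.
            "response_format": {"type": "json_object"},
        },
        timeout=600,
    )
    print(resp.json()["choices"][0]["message"]["content"])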
I don't think I've ever seen the M1 ultra GPU exceed 80w in asitop.
Update: I just did a quick asitop test while inferencing and the GPU power was averaging 53.55W.
woile [3 hidden]5 mins ago
Just a single m1 ultra?
tarruda [3 hidden]5 mins ago
Yes. Note that the only reason I acquired this device was to run LLMs, so I can dedicate its whole RAM to it. Probably not viable for a 128GB device that you are actively using for other things.
iwontberude [3 hidden]5 mins ago
Thank you, I have been using way too much credits for my personal automation.
Aurornis [3 hidden]5 mins ago
Reading the details, he is using 2-bit quantization and reduced the number of experts per token from 10 down to 4 to get 5 tokens/sec. Cool proof of concept but it’s far from the quality and performance of the 397B model as normally used. Dropping the number of experts is particularly misleading.
This is some interesting work, but applying such extreme measures to LLMs to get them to run severely degrades quality. I know he claims negligible quality loss, but in my experience 2-bit quantizations are completely useless for real work. You can get them to respond to prompts, but they lose their intelligence and will go around in circles.
He also shows 5-6 tokens per second. Again that’s impressive for a large model on limited hardware but it’s very slow. Between the severely degraded model abilities and the extremely slow output the 397B result should be considered an attempt at proving something can technically run, not evidence that it can run well and produce output you’d expect from a 397B model.
He even mentions the obvious problems with his changes:
> 2-bit quantization produces \name\ instead of "name" in JSON output, making tool calling unreliable.
So right out of the gate this isn’t useful if you want to do anything with it. He could have tried smaller models or less aggressive quantization to get actually useful output from the model, but it wouldn’t look as impressive. It’s honestly getting kind of exhausting to read all of these AI-coded (admitted in the link) and AI-written papers made more for resume building. It would have been interesting to see this work applied to running a useful model that hadn’t been lobotomized instead of applying tricks to get an impressive headline but useless output.
190n [3 hidden]5 mins ago
> 2-bit quantization produces \name\ instead of "name" in JSON output, making tool calling unreliable.
I was wondering about that statement. Shouldn't it restrict sampling to only tokens that produce valid JSON matching the schema during a tool call? On the other hand, I have heard a lot about how even production LLM providers don't always call tools accurately, so I suppose either it's hard to implement what I described or there's something I haven't thought of that makes it impossible.
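That is roughly how grammar-constrained decoding works: before sampling each token, mask out every candidate that would break the target grammar. A toy sketch of the idea (not the project's code, with a deliberately tiny hand-written validity check for one fixed JSON shape) showing why stray \name\ backslashes cannot survive the mask:

    # Toy sketch of constrained sampling: keep only tokens whose addition leaves
    # a string that can still be completed to the shape {"name": "<letters>"}.
    # Real implementations compile a grammar or JSON schema instead of this
    # hand-written check.

    def valid_prefix(s: str) -> bool:
        head = '{"name": "'
        if len(s) <= len(head):
            return head.startswith(s)
        if not s.startswith(head):
            return False
        body = s[len(head):]
        for i, ch in enumerate(body):
            if ch.isalpha():
                continue
            return body[i:] in ('"', '"}')  # only the closing quote/brace may follow
        return True

    # Scores a degraded model might assign right after the prefix '{"name": "'.
    raw_scores = {"Ada": 0.8, "\\": 3.0, '"': 0.2, "}": 0.4, '"}': 0.9}
    prefix = '{"name": "'

    allowed = {t: s for t, s in raw_scores.items() if valid_prefix(prefix + t)}
    print(allowed)                        # the '\' and bare '}' tokens are gone
    print(max(allowed, key=allowed.get))  # greedy pick among grammar-valid tokens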
jllyhill [3 hidden]5 mins ago
To be honest, I'm getting tired of the "laptop" in every one of these clickbait titles turning out to be a $3000 MacBook. Sure, it's impressive to achieve this degree of LLM compression, but I really don't like that the title implies local LLMs become viable for the average person when the actual hardware is out of reach for 99%.
throw284959 [3 hidden]5 mins ago
I ran the full version of this model without any swapping on a cluster of 2x $3000 laptops (Strix Halo ZBook, 128GB) at about 20 tokens per second.
I would say it is in reach for a normal person. If anything, buying it was a great investment: it is a work tool, and I will probably sell it for more than what I bought it for :)
Computer0 [3 hidden]5 mins ago
Yeah I understand the sentiment, I think it should’ve been “,on a laptop!” instead of “on a laptop”
The github page mentions that a naïve mmap approach is bottlenecked by per-page overhead. Can this be mitigated by setting up explicit "huge" pages? (2M using the CONT PTE feature if the "native" page size is 16k; 32M using a PMD level block mapping; or 1G using the CONT PMD feature.) Does macOS support this out of the box? Alternatively, one might use a simple mmap and then something like posix_fadvise to set up prefetching of the data.
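For the "plain mmap plus prefetch hints" fallback mentioned at the end, here is a minimal sketch using Python's stdlib wrappers. The file name is a placeholder, the MADV_* hints are purely advisory, and their availability and effect differ between Linux and macOS:

    import mmap
    import os

    path = "weights.gguf"  # placeholder: one quantized weight file
    fd = os.open(path, os.O_RDONLY)
    size = os.fstat(fd).st_size
    buf = mmap.mmap(fd, size, prot=mmap.PROT_READ)

    # Advisory hints only; the kernel may ignore them, and not every platform
    # exposes every MADV_* constant, so guard each one.
    for flag in ("MADV_SEQUENTIAL", "MADV_WILLNEED"):
        if hasattr(mmap, flag):
            buf.madvise(getattr(mmap, flag))

    # Stream through the mapping the way an inference loop would touch expert
    # weights; each access faults pages in at whatever granularity the kernel chose.
    checksum = sum(buf[i] for i in range(0, size, 1 << 20))  # touch one byte per MiB
    print(checksum)
    os.close(fd)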
justacatbot [3 hidden]5 mins ago
The quality degradation at 2-bit is a real issue. For actual work tasks, a well-tuned 30B at 4-bit usually outperforms a 70B+ at 2-bit in my experience. The expert reduction on top of that compounds things - you're essentially running a fairly different model. Still interesting to see the upper bound of what consumer hardware can attempt, even if the result isn't production-ready.
andai [3 hidden]5 mins ago
> Metal Compute Shaders — Hand-written Metal kernels
Hand written... by GPT? ;)
Aurornis [3 hidden]5 mins ago
He’s very clear that it was written by AI.
qiine [3 hidden]5 mins ago
It seems strange to me that the only way to use an LLM is to fit it entirely in volatile memory from the get-go.
To render movies we happily wait for the computer to calculate how lights bounce around, for hours even days.
So why not do the same with AIs? Ask big question to big models and get the answer to the universe tomorrow?
Aurornis [3 hidden]5 mins ago
If you don’t care about turnaround time you can do that.
Most LLM use cases are about accelerating workflows. If you have to wait all night for a response and then possibly discover that it took the wrong direction, misunderstood your intent, or your prompt was missing some key information then you have to start over.
I don’t let LLMs write my code but I do a lot of codebase exploration, review, and throwaway prototyping. I have hundreds to maybe thousands of turns in the LLM conversation each day. If I had to wait 10X or 100X as long then it wouldn’t be useful. I’d be more productive ignoring a slow LLM and doing it all myself.
zozbot234 [3 hidden]5 mins ago
> If you have to wait all night for a response and then possibly discover that it took the wrong direction, misunderstood your intent, or your prompt was missing some key information then you have to start over.
If you have to wait overnight because the model is offloading to disk, that's a model you wouldn't have been able to run otherwise without very expensive hardware. You haven't really lost anything. If anything, it's even easier to check on what a model is doing during a partial inference or agentic workload if the inference process is slower.
qiine [3 hidden]5 mins ago
"If you have to wait all night for a response and then possibly discover that it took the wrong direction, misunderstood your intent, or your prompt was missing some key information then you have to start over."
This exact problem exists for rendering, when you realize after a long render that an object was missing in the background and the costly frame is now useless. To counter that you make multiple "draft" renders first to make sure everything is in the frame and your parameters are properly tuned.
andoando [3 hidden]5 mins ago
There's definitely use cases for this for long running tasks, like doing research, but for typical use cases they require way too much constant supervision and interaction
bertili [3 hidden]5 mins ago
Very impressive! I wonder if there is a similar path for Linux using system memory instead of SSD? Hell, maybe even a case for the return of some kind of ROMs of weights?
Even with a MoE model, which has to move a relatively small portion of the weights around, you do end up quite bandwidth constrained though.
zozbot234 [3 hidden]5 mins ago
Loading experts to system memory is supported by most local-AI frameworks. But you do not gain much by running that part of the decode on GPU, since decode is not compute-limited and the CPU-GPU transfer involves overhead. It's best to use the GPU for speeding up the shared part of the model.
Aurornis [3 hidden]5 mins ago
Using system memory and CPU compute for some of the layers that don’t fit into GPU memory is already supported by common tools.
It’s workable for mixture of experts models but the performance falls off a cliff as soon as the model overflows out of the GPU and into system RAM. There is another performance cliff when the model has to be fetched from disk on every pass.
zozbot234 [3 hidden]5 mins ago
It's less of a "performance falls off a cliff" problem and more of a "once you offload to RAM/storage, your bottleneck is the RAM/storage and basically everything else no longer matters". This means if you know you're going to be relying on heavy offload, you stop optimizing for e.g. lots of VRAM and GPU compute since that doesn't matter. That saves resources that you can use for scaling out.
Aurornis [3 hidden]5 mins ago
It depends on the model and the mix. For some MoE models lately it’s been reasonably fast to offload part of the processing to CPU. The speed of the GPU still contributes a lot as long as it’s not too small of a relative portion of compute.
K0balt [3 hidden]5 mins ago
My thoughts exactly. Something like this could make big inference more practical for personal labs with modest GPU capacity, like a pair of 3090s, plus lots of RAM.
JSR_FDED [3 hidden]5 mins ago
This is a very impressive result. If I understand correctly the bottleneck is the SSD in this architecture - the author seems to get almost 15GB/s - but I seem to remember the max b/w was about 8GB/s. What am I missing?
Roxxik [3 hidden]5 mins ago
IO is very bursty in these setups. When the router results are in you can start loading experts from SSD. In this brief moment the SSD is saturated.
Outside of that the SSD is idling.
Table 3 shows, for K=4 experts, an IO of 943 MB/Tok at 3.15 Tok/s, giving an average IO of 2970 MB/s, far below what the SSD could do.
I'm not sure, but not all expert weights are used immediately. Maybe they could do async reads for the down tensors, parallelizing compute with IO.
Not sure if this works on Mac; I only tested my larger-than-RAM setup on Linux with io_uring O_DIRECT reads, and I saw that about 20% of total reads finish while my fused up/gate matmul is already running.
Edit: Typos
zozbot234 [3 hidden]5 mins ago
The github page mentions that you can't overlap SSD traffic and GPU compute on Apple Silicon, you get heavy contention for the shared hardware resources.
Aurornis [3 hidden]5 mins ago
PCIe 5 doubles the maximum throughput. That’s why the numbers for newer SSDs are about double what you recall for the old maximum.
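For reference, the usual nominal per-lane figures (after 128b/130b encoding overhead) line up with both numbers in this subthread; a quick back-of-the-envelope check:

    # Nominal PCIe per-lane bandwidth in GB/s after 128b/130b encoding overhead.
    gen4_lane = 1.97   # 16 GT/s
    gen5_lane = 3.94   # 32 GT/s

    print(f"PCIe 4.0 x4 NVMe: ~{4 * gen4_lane:.1f} GB/s")  # ~7.9  -> the 'about 8GB/s' figure
    print(f"PCIe 5.0 x4 NVMe: ~{4 * gen5_lane:.1f} GB/s")  # ~15.8 -> the 'almost 15GB/s' figure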
rado [3 hidden]5 mins ago
The MacBook Pro M5 Pro and M5 Max have SSDs with that kind of speed.
selimthegrim [3 hidden]5 mins ago
I have an MBP M4 Pro and a WD Black SN850x in an external TB5 enclosure and I easily get 6-7 GB/s
mkw [3 hidden]5 mins ago
TLDR I took a stab at leveraging Dan's work and making it more practical: https://github.com/matt-k-wong/mlx-flash
2 bit quantization lobotomizes the model but is impressive nonetheless! Maybe one day we'll be able to have intelligent 2 bit quants... I wonder.
My version supports 4-bit quantization, hybrid streaming (disk + RAM), and arbitrary model compatibility (tested on Mamba2), and sets up the framework for LM Studio integration.
I leveraged this work (credit to Danveloper) and am in the middle of making it work on more practical models and quants. It still uses flash streaming, but with a control knob so you can choose how much or how little RAM to use. In the most extreme case it uses as little RAM as possible but is very slow; in the balanced case you use some RAM and it's much faster.
I designed it around the intelligence-dense Nemotron 3 Nano 30B and Nemotron Cascade 2 30B models (smaller, with higher intelligence density), which can run on low-end 16GB machines, though you can run arbitrarily large models on larger machines (designed for the very low end, but capable of the high end).
maxloh [3 hidden]5 mins ago
Can you add a license to the repo? Legally we couldn't run any code without a license attached to it.
Wowfunhappy [3 hidden]5 mins ago
...you can't redistribute code without a license, but surely you can legally run it, can't you?
Like, if I write a blog post and put it on my blog, you're allowed to read it, right?
Heck, if my blog contains some Javascript code I wrote, I would imagine your web browser is allowed to run that code without opening you up to copyright infringement, even if I didn't provide an explicit license.
haomingkoo [3 hidden]5 mins ago
Really interesting approach. Curious how the 2-bit quantization affects the model's reasoning ability on longer chains of thought vs shorter prompts. The benchmarks look solid but real-world usage seems like a different story based on the comments here.
m-hodges [3 hidden]5 mins ago
As frontier models get closer and closer to consumer hardware, what’s the moat for the API-driven $trillion labs?
stri8ted [3 hidden]5 mins ago
48 GB is not consumer hardware. But fundamentally, there are economies of scale due to batching, power distribution, better utilization, etc., that mean data center tokens will be cheaper. Also, as the cost of training (frontier) models increases, it's not clear the Chinese companies will continue open sourcing them. Notice, for example, that Qwen-Max is not open source.
zozbot234 [3 hidden]5 mins ago
Nothing obviously prevents using this approach, e.g. for 3B-active or 10B-active models, which do run on consumer hardware. I'd love to see how the 3B performs with this on the MacBook Neo, for example. More relevantly, data-center scale tokens are only cheaper for the specific type of tokens data centers sell. If you're willing to wait long enough for your inferences (and your overall volume is low enough that you can afford this) you can use approaches like OP's (offloading read-only data to storage) to handle inference on low-performing, slow "edge" devices.
m-hodges [3 hidden]5 mins ago
> 48 GB is not consumer hardware.
It’s a MacBook.
OJFord [3 hidden]5 mins ago
Assuming 'moat' – they'll push the frontier forward; they don't really have to worry until progress levels off.
At that point, I suppose there's still paid harnesses (people have always paid for IDEs despite FOSS options) partly for mindshare, and they could use expertise & compute capacity to provide application-specific training for enterprises that need it.
Everyone is focused on the bad 2 bit result but who cares? He says don’t use it because it’s bad.
Aurornis [3 hidden]5 mins ago
If you don’t care about the output, why not reduce to 1-bit and only 1 active expert? It will be completely useless but it will be faster!
383toast [3 hidden]5 mins ago
yeah 4tok/s is kinda unusable though
matchbox [3 hidden]5 mins ago
this is awesome Dan!
spwa4 [3 hidden]5 mins ago
Does this mean that it should be possible to load up a system with ~10 SSDs (which seems to be at least the number of active experts) to get 40 tok/s even on truly gigantic models?
zozbot234 [3 hidden]5 mins ago
SSD bandwidth will ultimately be limited by the amount of PCIe lanes you have available (for something other than the Apple Silicon internal storage). So the approach has inherent limitations. You can of course scale out to multiple systems to get more throughput.
You can use this approach with Intel Optane, which is wearout-resistant unlike NAND and can thus substitute for RAM. Last I checked, it was available quite cheap on the secondary market, ~$1/GB as opposed to ~$15/GB or more for DRAM. (Of course that's nowhere near as cheap as NAND, which is around ~$0.1/GB but quite wearout-prone with heavy writes.)
spwa4 [3 hidden]5 mins ago
Yeah, PCIe is the bottleneck. The point being that whether the data originates from RAM or from NVME or Optane, you cannot get data to the GPU faster with RAM than with SSDs.
Meanwhile PCIe switches exist. So why not build:
1 CPU + memory + ...
N PCIe switches, each with 1 low-memory GPU + 6 NVMe drives (in theory 5 can saturate the GPU)
Each of those should only bother the CPU when they have some tokens produced and have plenty of PCIe lanes to get at their data.
Such a setup should be able to get a 6 to 8 times speedup from the solution detailed here, and a model compute increase should make relatively little difference in performance.
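Napkin math for that claim, under assumed PCIe 4.0 numbers and reusing the 943 MB/token figure quoted upthread (all rough assumptions, not measurements):

    # Assumed figures: PCIe 4.0 is ~1.97 GB/s per lane nominal; a good x4 NVMe
    # sustains ~6.5 GB/s of real reads; the post's K=4 config reads ~943 MB/token.
    lane_gbps = 1.97
    gpu_link = 16 * lane_gbps          # x16 slot: ~31.5 GB/s into one GPU
    drive_sustained = 6.5              # GB/s, sustained large reads per drive

    drives_to_saturate = gpu_link / drive_sustained
    io_bound_tok_s = gpu_link * 1000 / 943   # MB/s divided by MB per token

    print(f"~{drives_to_saturate:.1f} drives to saturate one x16 link")   # ~4.8
    print(f"~{io_bound_tok_s:.0f} tok/s if purely IO-bound at that rate") # ~33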
lostmsu [3 hidden]5 mins ago
How large is the KV cache?
xbar [3 hidden]5 mins ago
0.1 GB per full-attention layer and "The model has 60 transformer layers: 45 GatedDeltaNet (linear attention) + 15 standard full attention." So, 1.5 GB.
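For anyone wanting to sanity-check that 0.1 GB/layer figure: the per-layer cost is 2 (K and V) x kv_heads x head_dim x bytes x context length. The numbers below are placeholders chosen only to be consistent with that figure, not Qwen 3.5's actual configuration:

    # Placeholder values, NOT the model's real config -- just one parameter set
    # that reproduces ~0.1 GB per full-attention layer.
    kv_heads   = 6         # GQA key/value heads (assumed)
    head_dim   = 128       # per-head dimension (assumed)
    ctx        = 32_768    # cached context length (assumed)
    bytes_each = 2         # fp16/bf16 cache
    full_attn_layers = 15  # from the quoted architecture

    per_layer = 2 * kv_heads * head_dim * bytes_each * ctx   # K and V
    print(per_layer / 1e9, "GB per full-attention layer")    # ~0.10
    print(full_attn_layers * per_layer / 1e9, "GB total")    # ~1.5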
pdyc [3 hidden]5 mins ago
Impressive. I wish someone would take a stab at using this technique on mobile GPUs; even if it doesn't use storage it would still be a win. I am running llama.cpp on an Adreno 830 with OpenCL and I am getting a pathetic 2-3 t/s for output tokens.
vilequeef [3 hidden]5 mins ago
Why so much RAM?
vilequeef [3 hidden]5 mins ago
Oh Mac, unified. Sometimes it takes a downvote
harshhhhhhhhh [3 hidden]5 mins ago
Seems promising, this is the way. Can someone benchmark this?
frwickst [3 hidden]5 mins ago
I'm getting 6.55t/s using the Qwen3.5-397B-A17B-4bit model with the command:
./infer --prompt "Explain quantum computing" --tokens 100
MacBook Pro M5 Pro (64GB RAM)
j45 [3 hidden]5 mins ago
Appreciate the data point. M5 Max would also be interesting to see once available in desktop form.
logicallee [3 hidden]5 mins ago
can you post the final result (or as far as you got before you killed it) to show us how cohesive and good it is? I'd like to see an example of the output of this.
The original tokens have Ġ instead of space. I had this issue too when writing an inference engine for Qwen. You have to "normalize" those special characters.
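A quick illustrative sketch of that normalization: byte-level BPE (GPT-2 style) marks a leading space with U+0120 "Ġ". A complete detokenizer maps every byte through the bytes-to-unicode table, but for plain ASCII text this substitution is the visible part:

    # Byte-level BPE renders a leading space as "Ġ" (U+0120).
    tokens = ["Explain", "\u0120quantum", "\u0120computing"]
    text = "".join(tokens).replace("\u0120", " ")
    print(text)  # "Explain quantum computing"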
rvz [3 hidden]5 mins ago
The technical write-up is great, but Mac users should not get too excited just yet about running 300B+ parameter models locally, as the TPS isn't that good.
>...at 4.4+ tokens/second
That is with 4-bit quantization, and it is still only at that speed.
> The entire 209GB model streams from SSD through a custom Metal compute pipeline.
This is my main problem.
If I were to run this on a Mac SSD, 24/7 for heavy usage such as Openclaw, that is going to significantly reduce the lifetime of the SSD.
Can't imagine using this in the long term right now, but improvements will follow. Still a great write up anyways.
Roxxik [3 hidden]5 mins ago
Does an SSD meaningfully degrade by read only workloads?
JSR_FDED [3 hidden]5 mins ago
Nope, reads don’t cause wear
zozbot234 [3 hidden]5 mins ago
No appreciable wear of course, but read disturb (requiring occasional rewrites) becomes more of an issue as NAND fabrication advances.
etiam [3 hidden]5 mins ago
> If I were to run this on a Mac SSD, 24/7 for heavy usage such as Openclaw, that is going to significantly reduce the lifetime of the SSD.
How sure are you about that? I've never looked closely at how a large mixture-of-experts LLM switches between expert modules, but staying on roughly the same topic (as it often would when editing the same codebase), I wouldn't be surprised if the switches in composition are fairly rare and fairly small, and to the extent they happen, they cause repeated reads from the flash disk rather than writes.
frotaur [3 hidden]5 mins ago
Afaik the experts are not usually very interpretable, and I would generally be surprised if at least one did not change every token. I don't know what happens in practice, but I know that at least during training, nothing is done to minimize the number of expert switches between tokens.
Wowfunhappy [3 hidden]5 mins ago
Eh. I mean, 4 tokens a second works fine if you're patient. Go do something else while you wait.
I feel like whenever I'm trying to find information on which local models will work on my hardware, I have to overestimate because people don't know how to wait for things.
Also, reading data doesn't cause SSD wear.
hrmtst93837 [3 hidden]5 mins ago
If you want decent throughput and do not care about burning SSD write cycles on a box that was never meant to act like a tiny inference server, a used server with actual RAM is still the cheaper and less silly option. I wouldn't expect Apple's warranty team to be much help.
K0balt [3 hidden]5 mins ago
Is it doing a bunch of ssd writes?
mkw [3 hidden]5 mins ago
stream from the SSD, perform the calculation, discard, repeat