HN.zip

Claude Code: connect to a local model when your quota runs out

56 points by fugu2 - 9 comments
alexhans 5 mins ago
Useful tip.

From a strategic standpoint of privacy, cost and control, I immediately went for local models, because that allowed me to baseline tradeoffs, made it easier to understand where vendor lock-in could happen, and kept my perspective from getting too narrow (e.g. llama.cpp/OpenRouter depending on local/cloud [1]).

With the explosion in popularity of CLI tools (claude/continue/codex/kiro/etc.) it still makes sense to be able to do the same, even if you can use several strategies to subsidize your cloud costs (while being aware of the privacy tradeoffs).

I would absolutely pitch that, together with evals, as one small practice that will have compounding value for any "automation" you want to design in the future, because at some point you'll care about cost, risk, accuracy and regressions.

[1] - https://alexhans.github.io/posts/aider-with-open-router.html

[2] - https://www.reddit.com/r/LocalLLaMA

mogoman 5 mins ago
Can you recommend a setup with Ollama and a CLI tool? Do you know if I need a licence for Claude if I only use my own local LLM?
alexhans 5 mins ago
What are your needs/constraints (hardware is definitely a big one)?

The one I mentioned, continue.dev [1], is easy to try out to see if it meets your needs.

Hitting local models with it should be very easy (it just calls an API at a specific port); see the sketch below the link.

[1] - https://github.com/continuedev/continue
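
To make that concrete, here's a minimal sketch of what "calling an API at a specific port" looks like against a local Ollama server. It assumes Ollama's default port (11434) and its OpenAI-compatible chat endpoint; the model name is just an example, substitute whatever you've pulled:

    import json
    import urllib.request

    # Ollama's default local endpoint; any OpenAI-compatible
    # server (e.g. a llama.cpp server) works the same way.
    URL = "http://localhost:11434/v1/chat/completions"

    payload = {
        "model": "qwen2.5-coder:7b",  # example; use a model you have pulled
        "messages": [
            {"role": "user", "content": "Reverse a string in Python, one line."}
        ],
    }

    req = urllib.request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)

    print(body["choices"][0]["message"]["content"])

The CLI tools in this thread are doing essentially this under the hood, just with their own config layered on top.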

raw_anon_1111 5 mins ago
Or just don’t use Claude Code and use Codex CLI. I have yet to hit a quota with Codex working all day. I hit the Claude limits within an hour or less.

This is with my regular $20/month ChatGPT subscription and my $200/year (company-reimbursed) Claude subscription.

esafak 5 mins ago
Or they could just let people use their own harnesses again...
hkpatel3 5 mins ago
OpenRouter can also be used with Claude Code. https://openrouter.ai/docs/guides/claude-code-integration
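
Under the hood, integrations like this rely on Claude Code honoring ANTHROPIC_BASE_URL and ANTHROPIC_AUTH_TOKEN from the environment, so you can point it at any Anthropic-compatible endpoint. A rough sketch, assuming the claude binary is on your PATH; the base URL below is a placeholder, use whatever the linked guide specifies:

    import os
    import subprocess

    # Override where Claude Code sends its API traffic.
    env = os.environ.copy()
    env["ANTHROPIC_BASE_URL"] = "https://openrouter.ai/api"  # placeholder endpoint
    env["ANTHROPIC_AUTH_TOKEN"] = os.environ["OPENROUTER_API_KEY"]  # your key

    # Launch Claude Code with the overridden environment.
    subprocess.run(["claude"], env=env)

You could equally just export the two variables in your shell before running claude.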
zingar 5 mins ago
I guess I should be able to use this config to point Claude at the GitHub Copilot licensed models (including Anthropic models). That's pretty great. About 2/3 of the way through every day I'm forced to switch from Claude (Pro license) to Amp free, and the different ergonomics are quite jarring. Open-source folks get Copilot tokens for free, so that's another pro license I don't have to worry about.
baalimago 5 mins ago
Or better yet: Connect to some trendy AI (or web3) company's chatbot. It almost always outputs good coding tips
swyx 5 mins ago
i mean the other obvious answer is to plug into the claude code proxies that other model companies have made for you:

https://docs.z.ai/devpack/tool/claude

https://www.cerebras.ai/blog/introducing-cerebras-code

or i guess one of the hosted gpu providers

if you're basically a homelabber and wanted an excuse to run quantized models on your own device, go for it, but don't lie and mutter under your tin foil hat that it's a realistic replacement