Show HN: Moltis – AI assistant with memory, tools, and self-extending skills
Hey HN. I'm Fabien, a principal engineer with 25 years shipping production systems (Ruby, Swift, now Rust). I built Moltis because I wanted an AI assistant I could run myself, trust end to end, and make extensible in the Rust way, using traits and the type system. It shares some ideas with OpenClaw (same memory approach, Pi-inspired self-extension) but is Rust-native from the ground up. The agent can create its own skills at runtime.

Moltis is one Rust binary, 150k lines, ~60MB, web UI included. No Node, no Python, no runtime deps. Multi-provider LLM routing (OpenAI, local GGUF/MLX, Hugging Face), sandboxed execution (Docker/Podman/Apple Containers), hybrid vector + full-text memory, MCP tool servers with auto-restart, and multi-channel (web, Telegram, API) with shared context. MIT licensed. No telemetry phoning home, but full observability built in (OpenTelemetry, Prometheus).

I've included 1-click deploys on DigitalOcean and Fly.io, but since a Docker image is provided you can easily run it on your own servers as well. I've written before about owning your content (https://pen.so/2020/11/07/own-your-content/) and owning your email (https://pen.so/2020/12/10/own-your-email/). Same logic here: if something touches your files, credentials, and daily workflow, you should be able to inspect it, audit it, and fork it if the project changes direction.

It's alpha. I use it daily, and I'm shipping because it's useful, not because it's done.

Longer architecture deep-dive: https://pen.so/2026/02/12/moltis-a-personal-ai-assistant-bui...

Happy to discuss the Rust architecture, security model, or local LLM setup. Would love feedback.
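The "extensible in the Rust way using traits" idea can be sketched roughly like this. This is a minimal illustration under my own assumptions, not Moltis's actual API: the `Skill` trait and `SkillRegistry` names are hypothetical, and a real agent would plug LLM-generated skills into such a registry at runtime.

```rust
use std::collections::HashMap;

// Hypothetical sketch: a skill is anything the agent can invoke
// by name with a text argument. Names are illustrative only.
trait Skill {
    fn name(&self) -> &str;
    fn run(&self, input: &str) -> Result<String, String>;
}

// A built-in skill implemented directly in Rust.
struct Echo;
impl Skill for Echo {
    fn name(&self) -> &str {
        "echo"
    }
    fn run(&self, input: &str) -> Result<String, String> {
        Ok(input.to_string())
    }
}

// The registry owns boxed trait objects, so new skills can be added
// at runtime without callers knowing their concrete types.
#[derive(Default)]
struct SkillRegistry {
    skills: HashMap<String, Box<dyn Skill>>,
}

impl SkillRegistry {
    fn register(&mut self, skill: Box<dyn Skill>) {
        self.skills.insert(skill.name().to_string(), skill);
    }

    fn invoke(&self, name: &str, input: &str) -> Result<String, String> {
        self.skills
            .get(name)
            .ok_or_else(|| format!("unknown skill: {name}"))?
            .run(input)
    }
}

fn main() {
    let mut registry = SkillRegistry::default();
    registry.register(Box::new(Echo));
    println!("{}", registry.invoke("echo", "hello").unwrap());
}
```

The appeal of this shape is that the compiler enforces the contract: any skill, whether hand-written or generated at runtime and loaded through some bridge, must satisfy the same trait, and unknown skill names fail with a `Result` rather than a crash.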
46 points by fabienpenso - 16 comments
It's a different take, heavily inspired at first by OpenClaw, which is a great product, and Peter, the founder, is an amazing human being. I'm adding features that I want, since I build Moltis for my own use, but I also try to add features that others will enjoy.
I think Rust makes a lot of sense security-wise, and it adds benefits like being a single binary that's very easy to install. I also tried to make it easy to try out, with a 1-click deploy on the cloud.
I'm not sure this is convincing enough, but I think you can only judge by trying it yourself, and I'd love feedback.
One pain point I have with OpenClaw is compaction. It uses so many tokens that compaction happens often, but I'd say it's not great at keeping the thread. Higher-quality continuity could be a nice little benefit you offer folks.
- Cybersecurity (you can't expect a non-technical person to read a skill)
- Token usage (without a flat fee subscription it'll become expensive very fast)
I understand that security is a hard problem to solve, but having a single binary + containers should definitely help! I'll keep an eye on this.
edit: There is a gpt-5.3 model, but selecting that gives me the error:
Error The model `gpt-5.3` does not exist or you do not have access to it. Provider: openai
I don't see a 5.3-codex, and no opus 4.6...
- you can have models listed that you can't actually use (that gpt-5.3 response)
- you can have models that aren't listed
Those all come from the provider, based on your API key.
Though, I am looking forward to the next generation of AI agents that aren't named after a lobster