Ask HN: What are you working on? (May 2026)
What are you working on? Any new ideas that you're thinking about?
193 points by david927 - 705 comments
We are building this because we need such a library in our core business, and a lot of other engineers seem to need it too. We have contributors showing up with bug reports and fixes, and real interest from people building apps around .docx docs.
My previous Show HN post (https://news.ycombinator.com/item?id=46947229) got a lot of skepticism because we're developing heavily with AI, but with active community feedback and proper AI oversight (mostly me), I'm super proud of what we have now.
Currently I'm building scrapers for all relevant providers; then I want to connect the data for multi-city trip recommendations. Plus some connection to the day trains so that the trips are built more easily.
Some existing trip recommendations:
Prague <-> Amsterdam https://trainbot.eu/?from=prague&to=amsterdam&type=return
Berlin <-> Paris https://trainbot.eu/?from=berlin&to=paris&type=return
Zurich <-> Budapest https://trainbot.eu/?from=zurich&to=budapest&type=return
But yes, in most cases this journey will be more expensive than an alternative flight, which is a shame. However, there are routes where it's comparable, e.g. the Prague - Amsterdam route.
DAGraph is a local-first reactive DAG for analytical SQL (OLAP), running entirely in the browser (there is also a native version in the works).
Some tech details: written in Rust, targeting WASM (and native). The SQL engine is Apache DataFusion[1]. The UI uses Egui[2]. Workspace data is persisted in browser using OPFS[3] via OpenDAL[4]. The graph is functional and handles dependencies for you (via parsing the SQL).
Building this to be accessible for beginners while remaining powerful for advanced users. Still very early, lots more features to add, but now usable.
Hope you find it interesting!
[1] https://datafusion.apache.org
[2] https://www.egui.rs
[3] https://developer.mozilla.org/en-US/docs/Web/API/File_System...
[4] https://opendal.apache.org
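The "handles dependencies for you (via parsing the SQL)" part can be sketched in miniature. This is a toy illustration, not DAGraph's actual implementation (which uses DataFusion's real SQL parser); the regex extraction, node names, and queries here are all made up:

```python
import re
from graphlib import TopologicalSorter

def referenced_tables(sql: str) -> set[str]:
    # Toy extraction: grab identifiers after FROM/JOIN. A real
    # implementation would walk the SQL parser's AST instead.
    return set(re.findall(r"\b(?:FROM|JOIN)\s+([A-Za-z_]\w*)", sql, re.I))

# Hypothetical workspace: each node is a named SQL query.
nodes = {
    "raw_sales":  "SELECT * FROM sales_csv",
    "by_region":  "SELECT region, SUM(amount) AS total FROM raw_sales GROUP BY region",
    "top_region": "SELECT * FROM by_region ORDER BY total DESC LIMIT 1",
}

# Edges only point to tables that are other nodes in the workspace,
# so external sources like sales_csv are ignored.
deps = {name: referenced_tables(sql) & nodes.keys() for name, sql in nodes.items()}
order = list(TopologicalSorter(deps).static_order())
print(order)  # raw_sales comes first, top_region last
```

With an order like this, editing one node only requires re-running the queries downstream of it, which is what makes the graph "reactive".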
I've been making a DSL for writing sheet music specifically for drums as raw text, inspired by ABC Notation (but of course just for drums).
Now, writing this, I noticed that it's kind of complicated to explain, and having a landing page would make my life so much easier.
But the gist of it is, you write notation that looks like this: https://gist.github.com/Luigi123/945af7e5cc8dfbfd186f0a99754... and it renders sheet music in PDF, and also allows you to play the same music as a game (DrumMania / DTXMania style).
Now the language / compiler itself has been working quite well and I've been dogfooding it for like six months now. The next thing is an IDE-style editor where you can import a song and write the notation following it. Making THAT has been quite the journey. Here's a screenshot for good measure: https://i.imgur.com/EmlqlrM.png
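I haven't seen the actual syntax from the gist, but a drum-tab-style DSL in this general spirit might parse lines like `hh: x-x-`. Everything below (the notation, the instrument codes, the 16th-note grid) is invented for illustration, not the author's language:

```python
def parse_drum_line(line: str) -> tuple[str, list[bool]]:
    """Parse one hypothetical 'instrument: pattern' line, where
    'x' marks a hit and '-' a rest on a 16th-note grid."""
    instrument, pattern = line.split(":", 1)
    return instrument.strip(), [ch == "x" for ch in pattern.strip()]

chart = """
hh: x-x-x-x-
sd: ----x---
bd: x-------
"""
parsed = dict(parse_drum_line(l) for l in chart.strip().splitlines())
print(parsed["sd"])  # [False, False, False, False, True, False, False, False]
```

From a structure like this, a renderer (e.g. one built on VexFlow's low-level API) can place noteheads per instrument line.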
I don't know if you can write drum sheet music with it.
I really like your editor with the transcription view. Maybe a spectrogram would be more helpful than a simple waveform display.
Like most people in the space I'm using ABC for LLM generation (e.g. generating sightreading exercises and etudes) but MusicXML for processing and rendering the output. Would be nice to have something somewhere in between the over-simplified ABC and overly verbose MusicXML.
Hard agree.
Early on I actually tried to write my drum charts directly in ABC Notation but it wasn’t a great fit. Then I made a simple parser for my language that outputs ABC because I thought it would be simpler but I found it to be very limiting, so now I use Vexflow’s low level API for rendering. I found it to be more customizable than ABC with a nice JS / TS API. It’s good for my use case (rendering) but ofc it doesn’t work as a save format.
Good luck with Trebella :)
is this intended for drummers, or electronic music composers?
But the main use case I'm going for is my own: making sheet music for drum practice.
aiming for more extensions to The New Breed than just Syncopation that you could auto-generate for fun practice/things you wouldn't think of to play?
People writing sheet music with pencil on paper don't need any of that so I'd say this software would be pointless for them. I'd say this leans heavily on hobbyists or beginners, like I said the main use case is my own, and I'm no professional drummer.
This is not a sales pitch, it's just a small project I've been having fun building for myself :)
We integrate with macOS spaces to switch out a project-specific dock on each space, containing only the resources you need for that project. We made it possible to add granular resources instead of full apps to the dock (think specific slack channels instead of the whole slack app), to keep the dock hyper focused on what you need.
We built this to stay focused while working on the computer, and we thought that the native interface mixed all our projects together, causing us to get distracted.
Looking for beta testers! Free download from https://drawers.computer
For example, I have Codex running doing two things at once, and I wish I could have two windows in two spaces (two projects.) Slack has multiple channels.
Both these aren't native macOS apps but I wonder if you can use the macOS tabbing support to at least get this for well-coded native apps?
One additional level I see appearing is worktrees, when having multiple AI agents run in parallel. So while they belong to the same project, each worktree has its own IDE + browser etc.
Does it have project context within apps (like default folders and settings)?
Would love to hear what you think we should add next!
A directory of European software and general alternatives to popular products.
Think cloud storage, email, VPN, browsers, smartphones, bikes,... About 175 products across 30+ categories right now.
Next categories will be: personal health, commercial e-mail and newsletter management.
If you have a product that's missing here, please feel free to suggest it via the suggestion form on the site.
The existing ones were quite expensive, especially when I started out. A friend had the idea to get a cheap/non-functioning lawnmower second hand and tear out the circuit board. We're in the process of coding up a new ROS2-based stack that will roam the lawn on GPS, with RTK corrections coming from the charging station. My friend does most of the electronics stuff, and I focus on the software.
I'm at the point where I will start testing a simple bounding box soon and just have it drive around until it "hits the edge" and then randomly pick a new direction.
It's fun to see the software I build "in real life" instead of as a website, as is the case in my daily job.
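The bounding-box behavior described above can be sketched as a simple loop. This is just the navigation logic in isolation; a real ROS2 node would publish velocity commands and read fused GPS/RTK poses, and all the coordinates and step sizes here are made up:

```python
import math, random

BOX = (0.0, 0.0, 10.0, 8.0)  # xmin, ymin, xmax, ymax - a hypothetical lawn

def inside(x: float, y: float) -> bool:
    return BOX[0] <= x <= BOX[2] and BOX[1] <= y <= BOX[3]

def step(x, y, heading, dist=0.5):
    """Advance; on 'hitting the edge', stay put and pick a random new heading."""
    nx, ny = x + dist * math.cos(heading), y + dist * math.sin(heading)
    if inside(nx, ny):
        return nx, ny, heading
    return x, y, random.uniform(0, 2 * math.pi)

# Start mid-lawn and wander; the mower never leaves the box.
x, y, h = 5.0, 4.0, 0.0
for _ in range(1000):
    x, y, h = step(x, y, h)
assert inside(x, y)
```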
https://tiledwords.com
Forbes just wrote an article about it which was a fun surprise! [1]
It recently turned 6 months old which is wild to me. My wife and I have made a new puzzle every day for half a year! I wrote a blog post about this [2]
I recently released user logins. That went well and a lot of people are using them. I also let you filter the backlog by completed puzzles based on player feedback.
This week I’m going to start releasing player submitted puzzles and release my puzzle building tools. You can watch a video for a sneak peek of those tools. [3]
1. https://www.forbes.com/sites/barrycollins/2026/05/02/bored-o...
2. https://paulmakeswebsites.com/writing/six-months-of-tiled-wo...
3. https://m.youtube.com/watch?v=d8_zhMKd0Yg
While you're here if I could make a small suggestion - the wording of the 'type of' questions was confusing to me until I got used to it; 'stop' is not really a type of 'watch' for example, so maybe you could find a different way to phrase those? Maybe there isn't a neater way to encapsulate the idea of 'is a prefix or suffix to', I don't know, but I found it difficult. Anyway kudos to you and your wife, it's a great game!
I saw someone on here recently say they like to do the puzzle without looking at the clues, and I've started doing that on and off too, it changes the game in an interesting way.
Great feedback on the “type of” clues. I’ll need to noodle on that and see if there’s a clearer way to express it. Maybe I should just be doing blanks… e.g. for “sun” it could be “___ dress, ___day, or ___ flower”
I have, however, rejected making a user login. I recognise you're putting in time and energy to make something I'm just taking without payment, and it's your right to try to leverage it into something more - I wish you all the best in doing so - but asking for a user login as a gate to a feature you clearly don't need a user login for is enshittification.
I'm guessing you're referring to the ability to filter out completed puzzles from the archive? I added it for logged-in users first because it was simpler, but I can extend that feature so it's available for everyone. (I'll need to add some alternate logic to pass your IndexedDB levels to the server endpoint when fetching the archive. It's not complex; I just haven't prioritized it yet.)
I'll add this to my backlog and try to get to it after the player puzzles release.
Beyond that everything is available regardless of user account right now. I do plan to require an account to submit custom puzzles when that's released. (Mostly to make moderation easier. I may relax this down the line.)
EDIT: On further thought I realized it's also required to have an account to view and share your profile stats, though that could also work without an account with some changes.
A consequence of me being a freeloader too is that you don't have to change your plans to please me :-)
If you enjoy it there’s a new puzzle every day and a backlog of over 200 puzzles free to play ;)
Instead of saving LLM memory in Markdown, I want to manage it using a graph structure to easily record the relationships between tasks and decisions, and persist when, why, and how they changed.
The key functionality is to be able to easily see the changes made for each comment, rather than each file.
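A minimal sketch of what such a graph record might look like. The field names and relations here are my guesses, not the author's actual schema; the point is that each change appends a timestamped edge with its rationale instead of overwriting anything:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MemoryEdge:
    src: str          # e.g. a task id
    dst: str          # e.g. a decision id
    relation: str     # "decided_by", "supersedes", ...
    why: str          # rationale, recorded at write time
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Append-only log: history of when/why/how things changed is preserved.
log = [
    MemoryEdge("task:auth", "decision:use-jwt", "decided_by", "simpler than sessions"),
    MemoryEdge("decision:use-paseto", "decision:use-jwt", "supersedes", "JWT pitfalls"),
]

# Everything that ever touched one decision, in order:
history = [e for e in log if "decision:use-jwt" in (e.src, e.dst)]
print(len(history))  # 2
```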
You get assigned a random colour and have 24 hours to go out and take photos matching it. The game then generates a photo grid from everything you captured.
Modes:
- Solo → 9 photos by yourself
- 1v1 → compete against a friend, combined grid at the end
- Squad → everyone contributes to a 20-photo grid
No accounts, no app install, no personal data stored. Photos and generated grids auto-delete after 24 hours from Cloudflare R2 storage.
Made it for fun, to find a way to do a shared activity with my girlfriend and also to challenge my friends over the weekends.
Currently redesigning the frontend flow, but I am kinda poor at design.
Would genuinely appreciate feedback, ideas or anything
Native application, no web UI, built using Rust + iced.rs, minimal dependency. NO AI.
I am putting in my best effort to make it performant. The target audience is users who want the simplicity of the notepad [the non-sloppy one], but still with some bells and whistles, to take notes without worrying about managing the metadata manually.
I think with scripting there will be infinite possibilities to play with linear notes, and I want to make that happen.
Continuous challenges while implementing features are:
Will have a demo version ready soon.

Being a weightlifter for 20+ years now, I'm working on a barbell speed and path tracking sensor based on newer IMU hardware technologies, which makes it both more precise and cheaper than camera- or actuator-based systems. Ultimately it helps you lift and train safer and better.
It's an intersection of industrial design, hardware, firmware, and software (and some sport science, of course). This intersection is not yet dominated by LLMs so it's a breath of fresh air.
In an early prototype stage as in "strap a Raspberry Pi to a bar", but it looks promising and I'm happy to move forward, also using connections from my previous 12+ years in China.
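The core signal-processing step behind bar velocity from an IMU is numeric integration of acceleration. A toy sketch with synthetic data (a hypothetical 100 Hz sample rate, fabricated acceleration profile); real firmware additionally has to handle gravity removal, sensor fusion, and drift correction:

```python
# Trapezoidal integration of vertical acceleration samples (m/s^2)
# at an assumed 100 Hz rate to estimate bar velocity during a rep.
DT = 0.01
accel = [2.0] * 30 + [0.0] * 20 + [-2.0] * 30  # accelerate, coast, decelerate

velocity = [0.0]
for a_prev, a_next in zip(accel, accel[1:]):
    velocity.append(velocity[-1] + 0.5 * (a_prev + a_next) * DT)

peak = max(velocity)
print(round(peak, 3))  # 0.59 m/s, at the end of the acceleration phase
```

Peak (or mean) concentric velocity is the number velocity-based training systems report, which is why IMU drift over a single rep matters much less than it would over a long trajectory.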
Seems it would have a much higher reach.
I also wouldn't trust a strap to drop a loaded bar from snatch :D https://youtu.be/nrgnH9fTfGo?si=6LLeu3y02iFrwfis&t=65
Might consider a BT GadgetBridge gadget then.
Velocity on the other hand is a great metric to track and is used as a proxy for RPE. Mike Tuchscherer was the first one to systematize it for powerlifting a while back, if you've been lifting for 20 years you're probably aware of the name.
For more complicated lifts like bench press (J-shaped) or snatch (S-shaped), for example, I would rather set a "golden sample" path with a coach and compare to that.
It's unlikely to be the sole metric, especially given the inverse kinematics of different body types (long/short femur, etc), but together with bar speed, over time, it can provide a lot of good feedback.
No offense, but this post does come across as you only having a surface level understanding of the field. Especially surrounding injury/pain perception, I would be more careful of what you assume is true, there's far more nuance.
i wonder if it would make sense to consider it as a data problem, capture a bunch of high fidelity inverse kinematics data for various forms of bad form/dangerous lifting along with the imu data and then work from there. there could be some interesting and unexpected features that are easier to detect than straying from straight line paths with some tolerance.
It's a message definition and protocol: addresses look like @user@domain, anyone can run a host, and threaded messages are linked by cryptographic parent hashes.
The idea is to take the best from email: open protocol, domain ownership, interoperability (unsolicited mail is a feature not a bug), and the best from closed instant messaging rebuilt: efficient binary messages, conversational threads, sender verification, message integrity etc. built-in. Originally envisaged for human-to-human messaging, but it's a particularly interesting time right now with human-to-agent and agent-to-agent messaging...
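The parent-hash threading idea can be shown in a few lines. To be clear, fmsg's real wire format is binary and defined in the spec; this sketch just uses JSON and SHA-256 to show the chaining concept:

```python
import hashlib, json

def msg_hash(msg: dict) -> str:
    # Canonicalize then hash. fmsg's actual encoding is binary, not JSON;
    # this is only to illustrate parent linkage.
    return hashlib.sha256(json.dumps(msg, sort_keys=True).encode()).hexdigest()

root = {"from": "@alice@example.org", "to": "@bob@example.net",
        "body": "hello", "parent": None}
reply = {"from": "@bob@example.net", "to": "@alice@example.org",
         "body": "hi!", "parent": msg_hash(root)}

# A receiver can verify the thread link: tampering with the parent
# message changes its hash and breaks the reply's reference.
assert reply["parent"] == msg_hash(root)
tampered = dict(root, body="hell0")
assert reply["parent"] != msg_hash(tampered)
```

This is what gives threads built-in message integrity: the thread structure itself is the tamper-evidence.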
The OSS stack is up and running: Go host, Dockerised full setup, CLI, Web API, and a spec nearing v1.0. Did a Show HN post a week ago: https://markmnl.github.io/fmsg/show-hn.html
Seeking feedback, criticism, validation :) protocol bikeshedding, and especially interest from founding-engineer types who want to help build an open messaging ecosystem rather than another closed app..
Matrix addresses have a similar format, anyone can run a host, open protocol, domain ownership, interop... Threaded messages are supported AFAIK, the details of the crypto will be different but overall it feels like it is close enough that a new protocol will have a hard time having enough advantages to overcome the huge network effect (Matrix being one of the few open messengers that actually have some following already).
Also, fmsg being its own protocol, it can do novel things like auto-challenging back to the sender during delivery - you can't do that with HTTP.
It's a short chain-reaction game in which you explode balls bouncing around the screen, and need to build up to target scores. You build bigger and bigger combos as the game progresses.
It was a blast to work on it, starting with a small toy and just adding features that "felt right" until I had a game that was fun to play. It was quite hard to find a balance though, so a lot of numbers are arbitrary - but I enjoy seeing people breaking the game in new ways and finding new builds.
These days I've been working on patching reported bugs and sharing the game with people. Now after the latest patch, I feel like I'm done, but I feel like going back at it and adding an idle mode. And maybe simplify the codebase so I can test and iterate better, and then add many more ball types...
I know that any good LLM could replicate this pretty quickly, but I made this myself and I'm still feeling proud of the accomplishment :)
I feel like perfecting something can be trap, sure keep it alive, but maybe think about the next thing to work on too?
What engine or framework did you end up going with? I looked into Unity, tried Godot for a few weeks, but landed on just making a Typescript-powered canvas game with PixiJS for graphics rendering. Found it much easier doing it this way instead of having to learn a game engine.
It's my first time starting a physical, retail business and it has been quite an education in the small details.
I'm building a tool that allows you to determine the health of an electric transformer from only your phone. It tells you:
I used to be a submariner, so my professional background is in power plants and sonar analysis; I'm getting to combine the two in this. Acoustic diagnosis of electric issues is FASCINATING, and it feels like there hasn't been a lot of research into it, so I have been slowly chasing down various acoustic patterns I find and trying to derive them from first principles of physics.
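To make the acoustic idea concrete: mains transformer hum sits at twice the line frequency (100 Hz on 50 Hz grids) plus harmonics, and you can pick a frequency bin out of a recording with a Goertzel-style direct correlation. Toy version below with a synthetic signal; the sample rate, amplitudes, and frequencies are all made up, and real diagnosis would obviously involve far more than one bin:

```python
import math

RATE = 8000  # hypothetical sample rate (Hz)

def tone_power(samples, freq):
    """Magnitude of one frequency bin via direct correlation."""
    re = sum(s * math.cos(2 * math.pi * freq * i / RATE) for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq * i / RATE) for i, s in enumerate(samples))
    return math.hypot(re, im)

# Synthetic one-second recording: strong 100 Hz hum plus a weaker 300 Hz harmonic.
samples = [math.sin(2 * math.pi * 100 * i / RATE)
           + 0.2 * math.sin(2 * math.pi * 300 * i / RATE)
           for i in range(RATE)]

# The hum dominates its harmonic, and both stand out from an empty bin.
assert tone_power(samples, 100) > tone_power(samples, 300) > tone_power(samples, 250)
```

Changes in the relative strength of those harmonics over time are the kind of pattern one could then try to tie back to a physical mechanism.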
I'm making an iPhone app for it, and Xcode has been truly awful: non-deterministic, crashing all the time, and error messages that tell me absolutely nothing. I would like to use xtool, but it doesn't have the preview, which I need for debugging.
Honestly, if you don't have a specific reason to target iPhones, a better use of your time could be buying a second-hand Pixel 8, putting GrapheneOS on it, and pushing an "app" in whatever language you want - sure, Java, but also Python, Julia, whatever you want really, especially running directly via Termux. By the time Xcode starts, you'll already be outdoors testing against the next transformer.
Cham (https://github.com/jfim/cham) is an archive for internet content: you give it a URL and it'll archive it for you, extract the text with readability if it's an article, or extract the audio track and then transcribe it. Content is automatically summarized and tagged, and you can start a conversation with an LLM about the article. It supports feeds too, so you can subscribe to blogs and keep the articles in case the blog goes away. I still need to add search, improve the CLI, add all the missing features, and do a lot of improvements all over the place.
To improve reliability, I made passe-partout, which is basically a Chrome browser with a REST API (https://github.com/jfim/passe-partout), and veilleur (https://github.com/jfim/veilleur), which turns any blog listing into an RSS feed. This way I can take blogs that are rendered using JavaScript and don't have an RSS feed, and load the articles directly into Cham.
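The listing-to-RSS idea can be sketched with just the stdlib. This is my guess at the core mechanism, not veilleur's actual code; the HTML snippet, blog domain, and feed fields below are invented:

```python
from html.parser import HTMLParser
from xml.sax.saxutils import escape

class LinkGrabber(HTMLParser):
    """Collect (href, link text) pairs from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links, self._href = [], None
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
    def handle_data(self, data):
        if self._href and data.strip():
            self.links.append((self._href, data.strip()))
            self._href = None

listing = '<ul><li><a href="/post-1">First post</a></li><li><a href="/post-2">Second</a></li></ul>'
grabber = LinkGrabber()
grabber.feed(listing)

# Emit a minimal RSS 2.0 document, one <item> per discovered link.
items = "".join(
    f"<item><title>{escape(title)}</title><link>https://blog.example{escape(href)}</link></item>"
    for href, title in grabber.links
)
rss = f'<rss version="2.0"><channel><title>blog.example</title>{items}</channel></rss>'
print(rss.count("<item>"))  # 2
```

For JavaScript-rendered listings, the input HTML would come from the headless-browser API rather than a plain HTTP fetch.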
Also built a modular MCP server with OAuth2 dynamic registration so that I can have my own MCP server that works with the web, desktop, and cli versions of Claude/Claude code. Currently have modules for editing files so that I can edit/search my Obsidian vault from Claude, fetching pages through passe-partout (since some pages block LLMs from reading them), and proxying MCP servers so that servers that only support bearer token auth can still work with web Claude.
Also, a gnome terminal emulator UI with some unique features like split browser/terminal tabs. https://github.com/jfim/jfterm
Mostly an excuse to see how far I can push LLM code generation to write tons of software that I've always wanted but never had the bandwidth to tackle, and learning to deal with the sometimes questionable code quality that comes from it.
I got back into MTG back during the pandemic after a long hiatus and Spelltable is what brought me back. My playgroup lamented more features and something tailored to our needs, so curiosity got the better of me and here we are. :)
I've never worked with computer vision before, but I went through a whole journey that started with the classical computer vision techniques and ended with recently migrating to the transformer-based models. Been a really cool adventure!
My playgroup has been consistently preferring it over Spelltable and have been wanting more and more features. I would love for people to try it out and start building a community around it! Discord is on the site.
https://cardcast.gg
I love that there is no sign-up required! Do you have plans to implement utilizing a mobile phone as a camera? Spelltables implementation leaves much to be desired.
Excited to see where this goes!
https://github.com/Verticalysis/Hitomi
The secret: I engineered an incremental combinatorial parser capable of processing customized formats from a stream. Any input, including files or the stdout of a command, is first chunked and then fed to the pipeline. The UI is ready when the first small chunk is processed.
Other highlights:
- 2-mode filter: one with a convenient UI, the other based on an extensible DSL for complex cases
- Timeline-mode scrollbar, a secret weapon for log or time-series analysis
- Column widths fit to content automatically
- Native code, no web bloat
- Cross-platform (currently Windows and Linux, macOS WIP)
If you are tired of all the quirks Excel has when working with CSV files, you're gonna love it!
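The incremental-chunk idea can be sketched as a generator that yields complete rows as soon as each chunk arrives, buffering any partial trailing line, so a UI can render long before the whole input is read. A toy line-based CSV version (the real parser handles arbitrary customized formats, quoting, etc.):

```python
def incremental_rows(chunks):
    """Yield complete CSV rows as chunks stream in, buffering partial lines."""
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        *complete, buffer = buffer.split("\n")  # last piece may be a partial line
        for line in complete:
            if line:
                yield line.split(",")
    if buffer:  # flush the final unterminated line
        yield buffer.split(",")

# Simulate a stream whose chunk boundaries fall mid-row.
stream = ["id,name\n1,al", "ice\n2,bob\n3,", "carol"]
rows = list(incremental_rows(stream))
print(rows[1])  # ['1', 'alice']
```

Because it's a generator, the first row is available after the first chunk, which is exactly the property that makes the UI feel instant on huge files.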
I've been using and tuning a tool I built myself to help me lower my LDL and ApoB: https://www.heartroutine.com/. I still don't like how the daily check-in system works (it's still too dumb) but it's keeping me consistent for now.
In a few days I'll start running playtests of my combat prototype for my next game, Today I Will Destroy You, some kind of SNES Zelda and Sekiro inspired combat adventure.
Periodically thinking about what the future of helping small teams build software will look like and keeping my personal site up to date: https://piinecone.com/.
I migrated to another country and it's hard to talk with parents, my sister and grandmas as much as I did when I was back home. We tried making weekly calls to talk and play games but someone could never make it. I got an idea to create a small chat, share a simple topic every morning and then let everyone take a picture of the object. For example, recently we had "Garden" which made my grandma in the countryside go outside and take a picture of her growing strawberries. Today is "Anything halal" and I hope this will make another (Muslim) grandma in the chat happier :-)
At first, all topics were made up by me. Then, I made it possible for others to suggest topics via a simple bot. I showed this chat to friends and they got excited and wanted to try the same so I upgraded the bot to support multiple chats. Since November 2025, it's hosted 11 chats and if you're interested, you can try it too.
It's free and won't be abandoned tomorrow because I personally use it.
https://t.me/CreateRotiBomBot
https://x.com/paulnovacovici/status/2041722840190480581?s=46...
OP: beautiful work with your surf projector!
also created and got our Steam page validated: https://store.steampowered.com/app/4704420/runz/
We've been moving fast, it's our second project and we hope to deliver something people find fun and appealing, at least for our friends !
We suck at marketing though (such is the game dev burden), but we're having fun and learning stuff, which is what it's all about I guess.
I've also paused recording any new lectures at pikuma.com for now. I'm still taking some time to decide what's next for the website. I'm currently focusing on reviewing math & physics to help homeschool my son.
Other than that, I just improved the roof of my chicken coop and I'm slowly evolving the foundation of my study cabin. This new place will be my offline library and music room soon. My plan is to only really go online once a week starting August this year.
In addition to these tools, I'm also building automation that will port the tools from the reference implementation (OpenCode) to other harnesses (Claude Code, Cline, Pi, Gemini, Kilo, Codex, others to come?). As well as automation that will either cherry-pick or re-implement commits onto the latest head from upstream.
[1]: https://github.com/Vibecodelicious/context-bonsai-agents#con...
[2]: https://blog.vibecodelicio.us/posts/how-i-fixed-context-wind...
My conclusion is that permission reviews with LLMs, like Claude's auto mode or Codex auto review, are like using a data center to flip a light switch - overkill.
The main benefit is that your agent’s autonomy can be governed deterministically through policies that can be stored at the user and repo level. The bonus is that you save tokens vs using auto modes.
https://nah.build
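A deterministic check in that spirit can be as small as a first-match glob policy, with repo-level rules consulted before user-level ones. The rule syntax, verdicts, and precedence below are my invention for illustration, not necessarily how nah.build works:

```python
from fnmatch import fnmatch

# Hypothetical policies: (pattern, verdict), first match wins.
USER_POLICY = [("git push*", "ask"), ("rm -rf*", "deny"), ("*", "allow")]
REPO_POLICY = [("git push origin main", "deny")]  # repo rules take precedence

def decide(command: str) -> str:
    """Deterministic verdict for an agent's proposed shell command."""
    for pattern, verdict in REPO_POLICY + USER_POLICY:
        if fnmatch(command, pattern):
            return verdict
    return "ask"  # safe default when nothing matches

print(decide("git push origin main"))  # deny  (repo override)
print(decide("git push origin dev"))   # ask
print(decide("ls -la"))                # allow
```

No tokens spent, and the same command always gets the same answer, which is the point of doing this outside the model.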
Now I'm working on expanding the work into more parameters and improving performance. I just finished an extremely harsh test of a Nemotron-flavored RVW that consisted of stretches of a random assortment of domains interspersed with long runs of single domains. Across all of it the model didn't forget (and actually improved on some of the more challenging domains). PPL on SmolTalk is still in the ~18 range, which I'd like to get lower, but this is all with only 4B params.
Currently, I'm training a Llama 3.2-flavored RVW with only about 2B params to see how that turns out. Depending on results of that, I may take it to Gemma 4 next.
I always wanted to build a real-life puzzle game, which is app/mobile assisted. Had yet another eureka moment, and built a usable prototype (backend plus iOS app). Good feedback from a small circle.
For a while I was aware of someone (I knew by sight) who worked in the same sort of subject matter (but a non-tech). I approached her, we had a coffee, I pitched the idea and how she could bring it to life, as I made the tech side. She jumped on board.
We're two and a half weeks in, have gone full speed and are making something great (for our audience). My future co-founder is amazing, great insights, opinions, drive. We're potentially launching in a couple of weeks, a free/MVP version of a puzzle game.
I've been through many iterations of trying to get something off the ground. Tried tech co-founders, and the last years of going solo (very hard after you've done the coding). But this now feels right. A puzzle app/game for everyday people to have some fun. And a future co-founder whose life is outside tech, who brings a sort of fun energy outwith "let's make loads of money" or "isn't this framework/AI cool".
Balance is good. Contact with reality is good too :)
Besides, I wasn't entirely happy with the existing Wine launchers, namely Lutris / Heroic / Bottles (none of which support Linux on Apple hardware, although I was able to run an old version of Heroic under muvm).
I wanted more control. For instance, sometimes I need to install a Windows component using winetricks before an installer for some Windows app would agree to run. I also wanted even more user-friendliness. I didn't want to manually specify the executable in the installation folder to run. I wanted the executable to automatically appear on a "Desktop" in the UI as a result of running the installer for a Windows app. All of that I've achieved.
Recently, I've put very significant effort into making a Snap version of Wine Bar. An x86_64 Snap wouldn't be terribly difficult to make, but packaging an arm64 version as a Snap and making it work was really, really hard. That's because Linux on Apple hardware uses a non-4K hardware page size. So, running Windows apps in such an environment involves running a full-blown 4K-page Linux kernel inside a micro-VM. From within the VM, you register FEX-EMU as a binfmt_misc handler, and then finally you are able to run a Wine process or a wrapper around it (like Proton). Oh, and in order for that to work, you also need to provide an x86_64 / x86 RootFS image for FEX-EMU. All those components need to be built as part of the Snap build process.
Long story short, I succeeded and the Snap version of Wine Bar is available from the Snap Store, yet I am still waiting [2] (for over 2 weeks without any reaction) for a couple of permissions to be granted to it. That's not a showstopper though, as the permissions may be granted manually by the user.
[1]: https://github.com/Tulon/WineBar
[2]: https://forum.snapcraft.io/t/autoconnect-requests-for-wineba...
It's still early, at v0.1, at https://github.com/okira-e/TUIQL, but it already supports everything day-to-day that you would need.
s/import SwiftUI/import OmniUI/
As long as you aren't using Apple platform specific libraries like Vision, you should be good for the most part. I am going to make my Gopher browser (https://web.navan.dev/iGopherBrowser/) the first target. I have done some extra stuff like reimplementing CoreData/SwiftData to make it work on Linux.
I am going with Adwaita instead of pure GTK because I like the opinionated approach they have with their design language. I think the reason SwiftUI works is because you can get pretty looking apps without thinking too much.
Projects like adwaita-swift, and swift-cross-ui do exist, but I want my library to be a drop-in replacement. I don't want to be inspired by SwiftUI, I want to use SwiftUI everywhere!
In April, the site received 3,500 clicks from Google - 7× growth month-on-month. Cited by ChatGPT, Claude, Gemini, and Perplexity.
Working as a solo founder, some of the stuff I shipped last month:
⤷ Launched a seasonal gifting vertical - new revenue surface, new content format, reusable template for future occasion launches
⤷ Audited verdicts for all brands and food items, incorporating data from various authentic sources and official accreditation bodies
⤷ Expanded programmatic SEO across ingredient and brand categories
⤷ Turned zero-result dead-ends into a list growth lever
This has some interesting implications. If you make a mistake, you can always backtrack and try again. If you have a crocheted piece, at least in principle you could find the loose end, free it, and work back stitch by stitch to reverse engineer it. (In practice people don't seem to do a stitch-for-stitch reverse engineering, just like you probably wouldn't bother reimplementing something line by line without a compelling reason; you figure out what's going on in the challenging places just by look and feel and improvise from there.)
I'm oversimplifying somewhat and there are some forms of crochet that include irreversible stitches, yarn can be felted together (entangled, like a cotton ball) to create irreversible bonds between adjacent strands, and often several panels/pieces are joined together irreversibly to create a larger piece.
Everybody uses Claude Code or AI coding tools to build internal software, but they lack the governed infrastructure layer required for enterprise trust. RootCX provides that missing foundation. We offer the security, auditability, hosting and permissioning primitives necessary to move internal software from "cool demo" to prod
It's called Peekl, and the idea behind it was to merge the two things I loved about both Ansible and Puppet:
- The agent/server model of Puppet;
- The ease of writing Ansible code.
It also makes sure to solve the problems that, in my view, both Ansible and Puppet have:
- Puppet is a slow, decades-old application. It's written in Ruby and the server runs inside a JVM. Applying a catalog can take very long on a complex configuration setup.
- Ansible, on the other hand, works well, but the absence of an agent makes it hard to deploy stuff at scale: you end up spending more time running your playbooks than actually doing stuff.
So if you want to take a look at this alternative, head to the GitHub repository. Feel free to break it, open issues, and send pull requests!
https://github.com/peeklapp/peekl
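A toy version of the agent/server model: the agent periodically pulls a desired-state catalog and converges local state toward it, touching only what drifted. Everything here (the catalog format, the resource paths, the in-memory "filesystem") is hypothetical, not Peekl's actual data model:

```python
# Simulated server catalog: desired content per managed file (invented format).
catalog = {
    "/etc/motd": "welcome\n",
    "/etc/app.conf": "port=8080\n",
}

# Simulated local filesystem state on the agent's host.
local = {"/etc/motd": "old banner\n"}

def converge(catalog, local):
    """Apply only the resources that drifted, like a catalog run."""
    changed = []
    for path, desired in catalog.items():
        if local.get(path) != desired:
            local[path] = desired   # a real agent would write the file
            changed.append(path)
    return changed

print(sorted(converge(catalog, local)))  # ['/etc/app.conf', '/etc/motd']
print(converge(catalog, local))          # [] - already converged, nothing to do
```

The second run returning empty is the agent-model win: the server never has to push, and steady-state runs are cheap.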
It also allows remote control. I don't like AI harnesses (Claude / OpenAI) having remote control built in; it feels like it should be at the terminal level, not the CLI level.
It also allows commands at the terminal level. So if you use multiple AI CLIs, you only need to write a command once, then use cmd+l to inject it into any CLI.
I've put macros in too, that again can automate doing the same thing in a terminal.
Anyway I'm sure this will just end up another terminal in a sea of already existing ones.
1. https://github.com/markwylde/terminay
2. https://terminay.com
For the company I'm currently working at, I had made a VSCode extension where I can sync the task doc with ClickUp via frontmatter.
I decided to take it to the next level as a side project. I built a CI-integrated, git-native syncing pipeline with agent-template transformation, from markdown files in git to any project management tool. That means you can either save your md files verbatim in your wiki (and use ClickUp's AI search to dig them up later, get insights, etc.), or use an AI agent template transformer to turn them into a task template (background, acceptance criteria, functional requirements, etc.) and create or update a task on a board.
I've been working on it since. I don't know how it will fare, but I feel the product is coming along nicely.
https://mdspec.dev
I was actually working last weekend on something with similarities. I am working on USM.tools, which lets you specify your services in a structured way.
There is a need to specify some of the data in a semi-structured way, and I am using markdown for that.
So there is this interesting relationship between unstructured, semi-structured and structured data, and markdown hits that middle ground.
Can I suggest you make some of the Jira etc. templates on your landing page clickable, so a visitor can grasp your idea more easily? It wasn't clear to me whether the specs are just plain markdown, or whether you have some additional tagging there.
Sure, thanks! That's a good idea, to have them clickable. And true, it needs to be easily understandable.
It’s a hobby project in a very early state where it technically works but it’s missing several things I think it needs before I’d use it for anything serious. As of right now it isn’t even complete enough to dogfood a minimal container for itself without an intermediate base image because it can’t target a platform compatible with the distroless uv container image.
About a year ago, I engine-swapped my Nissan D21 hardbody from the Z24 petrol to a TD27T turbo diesel and also installed a whole bunch of accessories, like spotlights, a winch, and an air compressor. But being lazy, I didn’t write down any of the wiring changes I made while doing all of this. So fast forward a year, and now I can’t remember how all the wiring works.
My current project car is a Jeep Cherokee FSJ, and for it, I want to build a completely new loom from the ground up. So to try and avoid making the same mistake I made with the Nissan, I Googled “create automotive wiring diagram”, but all the results were for complex enterprise grade solutions charging $200/month. That’s why I created X/D Loom as a project car guys' tool for creating wiring loom diagrams. It allows you to drag different electrical components onto a canvas, connect them with wires, and export them to a PDF or PNG.
[0] https://github.com/asamarts/alint
The setup is done via one prompt ('Use https://skills.superlog.sh to install Superlog in this project'), and everything on the platform is usable via MCP so that you don't have to spend time configuring yet another UI.
Do one thing and do it right.
Where I could see this succeeding is if you embrace the monitoring agent role. Customers can expose their coding agents, setup however they like, as an MCP server that your monitoring agent can plug into. If something goes wrong, your monitoring agent gives their coding agent the best context it can, and steps out of the way.
Recently I have had trouble with Sentry. I have a site with a lot of data coming in (2M page views per month), and at that volume Sentry starts being unusable for a solo developer. And on the other hand, I have several Django projects where I want a common way to handle bugs.
I feel the Sentry UI is too complex for my use cases, and on the other hand, I would like to automate the process as much as possible -- and the idea of automatic bug fixing is neat!
I am experimenting with Bugsink. Running Bugsink internally but building some tooling around it for automatic bug detection and fixing would actually be a sweet spot for me.
https://toanoa.com/
Since the initial MVP, it's done close to 100k orders and I've added new functionality like:
- Intelligent order batching & route optimization that can interleave tasks across orders in such a way that they still have the best chance possible of completion within their delivery windows
- Further refined the mobile tracking logic in our driver app to improve the quality/frequency of position updates while continuing to be as efficient as possible on battery
- Numerous backend/DB optimizations such that average response times are in the tens of ms at the current volumes it's handling.
It's not open source but if you have an interesting use case and are curious about it, feel free to reach out.
It's on Amazon in both Kindle and paperback formats.
https://www.amazon.com.au/Code-Design-software-projects-deve...
Free beta version is running well ( https://silentshark.app/alpha ) and I plan on releasing the full WW2 campaign version on Steam, App Store, and Play Store in the next month or two.
It's been an absolute blast getting feedback from Navy geeks on Discord, tweaking things, and my favorite moment was when my stadimeter instrument (finds distance based on angle + mast height in the periscope) worked without any "cheating" on my side simply because math works.
I got to the MVP state which was useful for my personal use case in about a month. I took it further than that as a learning exercise and as a means to share it with others. Some features that came later are live cursors (like Figma), elevation chart and grade overlay, and QR-code enabled collaboration links to make in-person sharing simple.
Check it out! https://plotalong.app
Figuring out the exact UI/UX I wanted was the hardest part. I did the branding myself: hand-drawn on paper, traced in Procreate, and vectored in Sketch. Fast iterations and a good test suite made it possible to try lots of different approaches and refine the one I liked the most. There are roughly 4000 unit tests and over 300 e2e tests that run on multiple environments with fully automated CI/CD.
I’m using Mapbox for the frontend and the whole app is basically just a monolithic Cloudflare Worker. Claude pretty much implemented the entire thing. I got a lot of mileage out of self hosting a Gitea project and recording all my planning sessions as Milestones and Issues. Claude has his own account without admin privileges. The process of managing a team of agents to build this practically autonomously was a bit jaw dropping and eye opening to be honest.
I would love to hear from other pleasure & sport drivers about the features they use or want the most in a routing app. I have an Android app in Play Store review, if you’d like to be an early access tester shoot me an email at my handle @plotalong.app
The idea is everyone opens the same route for coordinating and there’s just one source of truth for the group. And then when you’re all about to hit the road, everyone can use the nav app they’re already familiar with (or that’s built into their vehicle)
I will tackle the navigation aspect at some point if I do keep up on feature dev, though!
Suggestion if you're open to it: emoji or text badges for each stop.
I also think it'd be helpful to have route leg times shown directly on the map as popout tooltips. Knowing that stop 2 to stop 3 is five hours is critical to how we plan.
Suggestion for your pay model: I think it would be lovely to be able to use this with no option to save. Or maybe a single fee for an administrator that allows up to x users for one month with only one route? I only do these kinds of trips yearly, so a monthly fee for three collaborators just wouldn't work. Would we all sign up and then disable our accounts? It's hard to imagine that model working for me (RV road tripper with 3-4 people). I think I'd be willing to pay the $5/pp that allowed me and x friends to all jump in. Having each person set up their own paid account feels like a harder sell.
(Desktop Strongly recommended) https://dahlend.github.io/ketev/
https://keibidrop.com/ repo: https://github.com/KeibiSoft/KeibiDrop
SDocs is a CLI: `sdoc file.md` -> an instantly rendered Markdown file in the browser.
When you install the cli it gives you the option to add a note in your base agent file (`~/.claude/CLAUDE.md`, etc.). This means every agent chat knows about SDocs and you can say “sdoc me the plan when you’re done with it” and the file will pop open instead of you having to find that terminal session to know it’s done.
Going browser first means you’re not required to install anything to get a great experience.
Despite being in the browser, the content of SDocs-rendered Markdown files remains entirely local to you. SDoc URLs contain your markdown document's content as compressed base64 in the URL fragment (the bit after the `#`):
https://sdocs.dev/#md=GzcFAMT...(this is the contents of your document)...
The url fragment is never sent to the server (see https://developer.mozilla.org/en-US/docs/Web/URI/Reference/F...: "The fragment is not sent to the server when the URI is requested; it is processed by the client").
The sdocs.dev webapp is purely a client side decoding and rendering engine for the content stored in the url fragment.
This also means you can share your .md files privately by sharing the url.
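For the curious, the whole scheme can be sketched in a few lines. This is a hypothetical illustration of the technique, not SDocs' actual encoder: the compression algorithm and the `md=` layout here are assumptions based on the example URL.

```python
import base64
import zlib

def encode_fragment(markdown: str) -> str:
    """Compress markdown and pack it into a URL fragment.

    Hypothetical sketch: SDocs' real wire format may differ.
    """
    compressed = zlib.compress(markdown.encode("utf-8"), level=9)
    # URL-safe base64 with padding stripped, so no percent-escaping needed
    b64 = base64.urlsafe_b64encode(compressed).decode("ascii").rstrip("=")
    return f"https://sdocs.dev/#md={b64}"

def decode_fragment(url: str) -> str:
    """Invert encode_fragment; in the real app this runs client-side,
    so the document never reaches the server."""
    b64 = url.split("#md=", 1)[1]
    b64 += "=" * (-len(b64) % 4)  # restore stripped padding
    return zlib.decompress(base64.urlsafe_b64decode(b64)).decode("utf-8")

doc = "# Plan\n\n- step one\n- step two\n"
url = encode_fragment(doc)
assert decode_fragment(url) == doc
```

The key property is exactly the one cited from MDN: the fragment stays on the client, so the server only ever serves the static decoder page.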
https://store.steampowered.com/app/4129270/Tactus/
Right this second I'm looking for an alternative to After Effects that runs on Linux systems, as kdenlive has some limitations with its layering implementation. I'll probably give Blender and Godot both a whirl, as I want to get more comfortable with those tools for future projects.
Have you considered also releasing it to itch.io? (I don't do business with Steam due to DRM and their inaccessible website.)
I would happily purchase a NES ROM file so I could play it on my pitendo (RPi3 in a case that looks like an NES).
I'm not well versed in video editing. That said, the people I know who are tend to use Da Vinci Resolve.
https://zeta0134.itch.io/tactus
https://store.steampowered.com/app/247080/Crypt_of_the_Necro...
https://diffui.ai
I quit Figma about 4 months ago to start working on this, and the gpt-image-2 drop really legitimized the bet. I recently released Brands for diffui, which lets you establish a design system and consistently generate with it. I made a Brand out of the recent UFO files release, which allows for some really fun designs:
https://diffui.ai/brand/2ff1b00a-d698-43ea-a42e-7c4a2e670c04 (no account required to generate with this if you want to try)
It's a durable orchestration system for AI code generation. It solves the problem of not being able to trust LLMs to complete long-running (and high-quality) implementations without babysitting them and monitoring the process, which I think is the most exhausting part of coding with AI.
You start with a spec or programmatic task list and the engine runs the whole workflow: implementation, verification, review, fixes, and finalization.
It treats agentic coding like a durable CI-style process, with state, retries, reviewer feedback, commits, and auditability built in. It's externally orchestrated, meaning it's not the agent running the loop, it's simply agents being used as tools and spawned in the loop as needed without awareness of the loop itself.
It's going to be open sourced soon, and it's not meant to replace your IDE or agentic harness of choice. You keep using codex/claude code/open code/cursor/pi, whatever you want, and simply delegate the actual implementation to the engine through MCP/CLI and other integration points.
It supports any LLM provider so you can have GPT 5.5 implementing and a mix of Opus 4.7 / Deepseek v4 Pro / GPT 5.5 reviewing at every phase for example.
Sign up on the website or follow us on https://x.com/enginedotbuild or me personally on https://x.com/aljosa , desperately need more followers :D
It works on macOS, built with Swift and Metal. My goal is to make a super fast, free focus stacking program. I provided a notarized macOS DMG for the initial release, but if you build it yourself, it will run on an M4/M5 series iPad Pro as well.
The core ability I wanted was to support RAW files as inputs, with DNG files as outputs. This is done using either LibRaw, or Adobe DNG Converter (runtime options).
I have been really into macro photography for the last couple of years, and have been slowly working on building my own program to handle the focus stacking.
https://onebusawaycloud.com/
It’s a project of the non profit Open Transit Software Foundation that we’re using to fund our other initiatives, like bringing realtime transit information to billions of people around the world.
All of this depends on a bunch of really cool open source projects we’re building, like Maglev, a Golang server that can power realtime transit apps. I wrote up a blog post explaining how to set it up here: https://opentransitsoftwarefoundation.org/2026/04/setting-up...
We’re always looking for volunteers, especially non-engineers. https://ossvolunteers.com/organizations/open-transit-softwar...
Do I understand correctly that the product is a white-label app for public transport providers, which riders can download to get arrival data?
Do you think people will download an app for each bus/train? Isn't it better to integrate with google maps or equivalent?
1. The Puget Sound region, where a regional transit authority, Sound Transit, currently maintains their own OBA servers on behalf of a dozen individual transit agencies. Sound Transit piggybacks on our official OBA apps which you can find in the Play and App Stores. The official apps also work in 10 other cities across the US. This is the ideal for us—and transit riders, imho, and similar to what you see with apps like Citymapper or Transit.
2. New York City, where MTA runs their own OBA servers that power their own branded app and realtime signage throughout the five boroughs.
3. UC San Diego, where the university is using OBACloud to power real time transit information systems for students on campus.
4. Republic of Cyprus and Malaysia (yes the entire countries), where enterprising individual developers have set up their own OBA servers to power realtime transit information systems for their fellow citizens.
The underlying OBA server provides a rich set of REST APIs that make it much easier to build a public transit app than using raw GTFS and GTFS-RT data: https://developer.onebusaway.org/api/where/methods
We also have SDKs for many major languages so that agencies and independent developers can build their own apps on top of OBA servers without having to fiddle around with the intricacies of our APIs. https://developer.onebusaway.org/api/sdk
~~~
Integration with Google Maps is important, and a "yes and" solution. I think there's a lot of value in having public transit-focused apps, especially ones that don't have advertising or questionable privacy issues.
~~~
edit: I noticed you're in Argentina. The Ministry of Transportation maintains its own white label version of OBA called Cuando Subo. https://www.argentina.gob.ar/sube/cuandosubo
Working with Apple was also challenging because I had to purchase an Apple Watch or iPhone (the data is stored locally only, with no server or API to call, which is great from a privacy perspective) and then deploy specific code on the device.
I'm not sure if this helps your use case, but I was planning to make the API public and create a CLI (similar to Sentry or Grafana's gcx) to access it. But if you want a local-first option, it's not the best solution.
Device based strength tracking is still so weird to me.
Then you have friends and family who don't have the same devices as you but are nice enough to want to try your app.
I think this is a perfect example... somewhere out there a genius and a grug are happily exercising together for the simple joy of doing so and feeling good in their bodies, and nearby is a midwit with the GDP of a small village worth of wearable electronics wondering where the joy has gone as he laments the 0.1% of VO2MAX he's dropped since his last gadget-run.
The app helps Product Managers, Sales Reps, and Architects quickly understand enterprise software APIs. An LLM turns the raw documentation into beautiful process flows, sequence diagrams, and integration requirements.
Hope to launch soon ;)
I put a bit more work in since May and have it running almost end to end, still with plenty of gotchas. It's transcribing on an RTX 3070 under my desk, so pretty limited, but I have enjoyed a few morning briefings just as I want them (weather, chance of rain today, BBC/NPR top stories from yesterday, some specific sports news, then HN's most popular stories summarised, with a summary of the comments). Got a chuckle out of the LLM's summarisation of HN comment threads, which was always "Comments reaction: mixed" (which it no longer embeds!)
Found traces of it being done before and I understand you can do similar in Notebook LM, but I was hoping to build something set-and-forget.
It's nice, overall, to have a little dedicated touchscreen on my desk that I can easily tweak to display whatever I want. It's silent and low power.
[1] https://slint.dev/
[2] https://a.co/d/044MIM3t
Under the pressure of AI development, people are consuming more text content than ever to stay informed. For me, I need to read papers, go through information related to my industry and company, and sometimes my boss asks me to research newly launched products or techniques described in articles.
It's really a heavy load to consume all this information quickly and deeply, so I built SogaPaper: https://sogapaper.ai It can translate, summarize, and answer questions not only at the whole-document level, but also supports paragraph-level quick glances and Q&A, preserving the important inner logic and structure of the whole text. Very suitable for today's knowledge workers and college students.
It's free to start, and I'm looking forward to hearing your advice.
I'm also really fond of the product. If the execution is solid, I'm definitely interested in throwing some money at it
It's a little web application that allows for the ranking of all kinds of abstract entities. Think of a merger of Goodreads for books, Vivino for wine, Letterboxd for film, etc. It lets you rank whatever you want across a variety of different categories in a single place.
Using your rankings across all these different fields, you can analyze what you like, and in the future I'd like to add a little personal (not an ad) recommendation engine to help you find new stuff based on your actual interests across loads of different categories.
From a technical point of view, it's been a great learning opportunity in how to fully host a complete stack using an opinionated but cross-platform orchestrator, allowing me to host this anywhere (bare-metal VPS, homebrew system, cloud provider) in a flash.
https://playruly.com.
You play by setting rules onto a small grid of numbers to maximise your score.
My focus the past few weeks has been on refining the difficulty by experimenting with different rule types, and improving the UI.
I'm pretty happy with the look and feel now but feedback is always welcome, and I'm especially keen to hear what you think of the level of difficulty of the puzzles. It's a tricky balance to introduce variety without adding complexity.
There's a (very) small contingent of daily players now which is really motivating.
Your comments very welcome.
https://newtabwidgets.com.
One of my favourite features is the iframe widget, which allows you to select any element on a website and turn it into a widget.
- The base is a freestanding register VM deeply tied to the Linux kernel.
- It supports a set of primitive types, array types, and record types.
- Concurrency is Erlang-ish: an M:N scheduler that can manage tens of thousands of green processes. The VM uses instruction fuel to preempt processes.
- GC is Cheney for the nursery + M&S for the rest. Each process has its own GC.
- tailcall support.
- first class functions.
- Phase One will only have Vm0, which is clean (it has access to a bounded set of Linux syscalls). Will think about Vm1 that handles the libc infection later.
- JIT is not on the table in Phase One.
- The language is statically typed and borrows syntax extensively from Python, but drops OOP entirely.
- OOP is faked using UFCS.
- Operator overloading is supported.
- Exceptions are the default error handling mechanism. There are two hierarchies: Fault and Error. Fault cannot be trapped without rethrowing. Fault WILL crash the VM.
- It is being developed in private right now. As I am User # 0, all the choices I make reflect my own opinions and biases. I might release the code as OSS once the core is stabilized.
- I have done very basic microbenchmarking, and the VM is so much faster than Python right now that it is not a fair comparison anymore. I can also create and iterate over massive primitive arrays with ease. The principal comparison should be C, and there it is consistently about 8x slower; that performance profile will probably remain constant because dispatch has a real cost that cannot be magically wished away. I will be very happy with 5-6x.
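The instruction-fuel preemption mentioned above is a simple but effective idea: a process runs for a fixed budget of instructions, then gets requeued. A toy sketch of the mechanism (illustrative only; the real VM's scheduler, fuel budget, and process representation are certainly different):

```python
from collections import deque

FUEL = 3  # illustrative; a real VM would use thousands of instructions per slice

def run_scheduler(processes):
    """Round-robin scheduler driven by instruction fuel.

    Each process is modeled as a generator that yields once per
    'instruction'; after FUEL instructions it is preempted and requeued.
    """
    ready = deque(processes)
    trace = []
    while ready:
        proc_id, proc = ready.popleft()
        for _ in range(FUEL):
            try:
                trace.append((proc_id, next(proc)))
            except StopIteration:
                break          # process finished: do not requeue
        else:
            ready.append((proc_id, proc))  # fuel exhausted: preempt
    return trace

def count_to(n):
    for i in range(n):
        yield i

trace = run_scheduler([("a", count_to(5)), ("b", count_to(5))])
# processes interleave in fuel-sized slices: a runs 3, then b runs 3, ...
```

The appeal of fuel over timer-based preemption is determinism: the scheduler never needs signals or OS timers, just a counter in the dispatch loop.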
Tech thoughts: this week I decided to move to a new-to-me NameBrand™ Arch distro. Even though I've been daily-driving Linux for more than 10 years, I still can't get a new install up on the first try. This time the boot loader wouldn't load. No error, no log, and no boot loading. A few reinstalls later, I picked a different option and was finally booting. The next problem was that the login display manager doesn't want to sleep the monitor if nobody logs in. I learned this after an overnight power flicker caused a reboot: the monitors ran full tilt for several hours waiting for someone to log in. While I would like to say this issue is another joy of the Wayland way of Linuxing, I have found that other display managers have the same defect.
Anyway dogs are great, I shouldn’t have waited so long to get one.
And when I say darkest recesses, I'm not referring to "0.1 + 0.2 != 0.3" (which is fairly well-known) but things like "so when you turn on denormal flushing, how exactly are you defining it because there's at least three different definitions..." Or also "does my emulator actually emulate floating-point behavior correctly, or is it delegating to the current hardware which might have a slightly different definition?"
I created the platform because I found myself rewriting the same parts of my prompts (or storing them in a text-file) all the time. Now, with a few simple clicks I can populate all the task-specific fluff (personas, constraints, tones, ...) around the actual task that I want the AI to complete.
The platform is open by default; with the purpose of letting users learn from prompts and building blocks that other users created and use. I don't have any users yet, because I want to complete the MCP and Claude Code Plugin before I start marketing my product.
Other things on the roadmap:
- Teams tier, where teams can privately share prompts and building blocks between them. Currently, your data is either private or public, no targeted sharing.
- LLM integration into the prompt builder to reduce prompt engineering friction even more. Instead of manually searching for, and selecting the building blocks you want to use, you would just start typing your task and let the platform decide what building blocks would best support your prompt. There is still a difference with letting an LLM completely generate the prompt, as we would be using existing building blocks that have real feedback from previous uses.
Let me know what you think!
But I'm not sure people will be willing to use it. A commitment is highly generic; maybe I need to narrow it down to daily exercise or something.
https://apps.apple.com/us/app/mojave-paint/id6759276677?mt=1...
Rundash (rundash.ai) - an easy way to create automated AI agents that can run tasks for you with over 1000+ integrations. Built this from my own needs to run better meetings, discover product insights etc..
Provision (provision.ai) - how I run a team of openclaw agents without burning money on Mac minis. Each agent is given a dedicated email inbox (powered by Mailboxkit) and a dedicated chrome browser that you can connect via browser to unblock if needed. Currently doing some pilots with a few startups. It's interesting to learn how teams want to use AI agents like OpenClaw.
[unnamed project] - a macOS menu-bar AI agent that drives your real apps, similar to Perplexity Computer. Hit ⌘⌘, type a task (optionally @-mentioning apps like @slack), and an agent (Claude Code or Codex) clicks, types, scrolls, switches apps, and reads the screen via accessibility APIs, with a visible cursor so you can watch it work. Everything runs locally on your own logged-in Mac (BYOK to Anthropic/OpenAI), so there's no cloud VM or re-auth flow. It also snapshots the frontmost app for ambient context ("summarize this page" just works) and supports parallel tasks with persistent history/workspaces. Pre-release, but the core bet is that desktop agents should feel like a teammate living in your menu bar, not a browser tab or a rented remote computer.
Yes, you can use your own API key as well.
https://jetwriter.ai
Feedback is welcome.
Docker is...quite slow with large images. I've built a registry+pull client+buildkit builder to make it better. It splits apart layers, allowing for files to be shared between related images. In a robotics context, it can make pulls 10x faster. And in a cloud context, the format allows for pulling an image in 15 or 20 seconds instead of 60, without having to do a FUSE w/lazy pulling. Builds are faster, I store 7x less data due to better deduplication, I can run security scans faster due to not having to unpack tarball layers, etc, etc. I want to be the default registry for all ML related work, in the future.
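The deduplication claim boils down to content-addressing at a finer granularity than tarball layers: if two images share a file body, store it once and let both manifests reference it by digest. A toy sketch of that idea (the class and function names here are invented for illustration, not the actual registry format):

```python
import hashlib

class FileStore:
    """Toy content-addressed store: each unique file body is stored once."""

    def __init__(self):
        self.blobs = {}  # sha256 digest -> file bytes

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self.blobs.setdefault(digest, data)  # dedup: second write is a no-op
        return digest

def push_image(store, files):
    """An 'image' becomes a manifest mapping paths to content digests."""
    return {path: store.put(data) for path, data in files.items()}

store = FileStore()
base = {"/bin/sh": b"shell", "/lib/libc.so": b"libc"}
app  = {"/bin/sh": b"shell", "/lib/libc.so": b"libc", "/app/main": b"app"}
m1 = push_image(store, base)
m2 = push_image(store, app)
# both images reference the same blobs; only 3 unique bodies are stored
assert len(store.blobs) == 3
```

A pull of `app` on a machine that already has `base` then only needs to fetch the blobs whose digests are missing locally, which is where the claimed speedups come from.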
A performance-first TypeScript checker written in Rust. Started 5 months ago, and it's been mostly AI-written code. 99.8% tsc conformance test pass rate today. Single-file benchmarks are 3-5x faster than tsgo.
oxc https://oxc.rs/ ezno https://github.com/kaleidawave/ezno
These embed a remote browser in an iframe to give you "embed anything" browser-view custom elements. The demos focus on retro desktops to emphasize the browser, since this common web trope, the retro desktop, could never actually ship a real browser without something like bbx.
https://browserbox.io https://github.com/BrowserBox/BrowserBox
Hyper-Frame is supposed to be the "developer" demo that engineers will understand what they can do with it. I think it succeeds at that. I'm glad you found it useful.
The desktops are more labor of love, nostalgic, imaginative. I grew up in that time. They complete the "art" of web desktops by giving them internet access, which otherwise all omit. I don't care that they bury it seamlessly rather than making it obvious. I like that it's integrated as it would be in an OS, that's part of it. Your point is accurate that they do not surface bbx obviously.
So these desktops and glitch are more meant to spark imagination, maybe prompt product ideas for people who could be inspired by that. It's supposed to, I suppose, work subliminally, by letting you play around with it in an immersive setting. I suppose it's a different buyer profile or purchase stage they are meant to be honey for, not the "give me what I want now" seeking, but the more playful, relaxed, idea-sparking stage/persona. It's meant as an art gallery :)
You probably got annoyed doing it - that's okay, it's probably not really for you.
I feel the set of demos taken together cover the things I was wanting to express about this. I'm very happy with them - both individually and all together.
Thanks for looking - and for your great compliment - yes windows is all HTML, notice it says Windows 98-and-a-half ! :) They are also really just meant to be fun, and I had fun creating them. And meant as a show off lol :) - I like it when people enjoy a beautiful time playing around with them.
Site: https://azriel.im/disposition
Repo: https://github.com/azriel91/disposition
It's written in Rust, has stable node positions, is stylable (and has default styling, dark mode styles), among other things.
The hardest part is calculating coordinates for edges:
- ranking nodes / positioning them when edges connect nodes of different nesting levels
- ensure edges don't overlap with nodes to not obscure content
- ensure edges don't overlap with each other
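The second constraint (edges must not pass through node boxes) reduces to a segment-vs-rectangle intersection test. A minimal sketch of one standard approach, Liang-Barsky clipping; this is illustrative, not disposition's actual routing code:

```python
def segment_hits_rect(p0, p1, rect):
    """Liang-Barsky test: does the segment p0->p1 cross the
    axis-aligned rectangle (xmin, ymin, xmax, ymax)?"""
    (x0, y0), (x1, y1) = p0, p1
    xmin, ymin, xmax, ymax = rect
    dx, dy = x1 - x0, y1 - y0
    t0, t1 = 0.0, 1.0  # parametric extent of the segment still inside
    for p, q in ((-dx, x0 - xmin), (dx, xmax - x0),
                 (-dy, y0 - ymin), (dy, ymax - y0)):
        if p == 0:
            if q < 0:      # parallel to this slab and outside it
                return False
        else:
            t = q / p
            if p < 0:
                t0 = max(t0, t)  # entering the slab
            else:
                t1 = min(t1, t)  # leaving the slab
            if t0 > t1:
                return False
    return True

# an edge passing through a node's box would be flagged for rerouting
assert segment_hits_rect((0, 0), (10, 10), (4, 4, 6, 6))
assert not segment_hits_rect((0, 0), (10, 0), (4, 4, 6, 6))
```

In a router you would run this check for each candidate edge path against every node box it is not an endpoint of, and perturb the path when it hits.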
It's about 60% of what my version of "complete" looks like. Remaining parts:
- edge labels (might need a rewrite of how edge paths are calculated)
- images in nodes
- generating diagrams for different screen sizes
- LSP support (?)
I can completely empathize - sometimes some problems never leave us.. like that piece of food stuck b/w teeth. There's a force within us asking us to right that problem in the world.
All the best to your project.
https://serpentinegame.com
Recently I came up with an idea for a puzzle-adventure under Playstation mod limitations. That means no new assets, and no new scripts either. Simply, let's treat it as a different game, and see what can be done with that. I researched the built-in scripts, and oddly, every script that could be used to enable a game object was not repeatable. Ditto for scripts to disable an object. The only repeatable scripts were toggle scripts (disable an object if it is enabled, enable if it is disabled). So last week I prototyped some puzzles using that primitive only, and while doing so, I figured out how to trick the engine into making those do-once scripts repeatable, too...
The idea is to connect test results and artifacts with commit history. Test reports should know whether a failure is new, whether a test has been flaky before, and which commit made a test start running 5x slower.
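One simple heuristic for the "has this test been flaky before" question, sketched below, is: a test that both passed and failed at the same commit almost certainly failed for reasons unrelated to the code. This is an assumption about how such a check could work, not necessarily this product's actual implementation.

```python
from collections import defaultdict

def find_flaky(results):
    """results: iterable of (test_name, commit_sha, passed) tuples.

    Flags tests that produced both a pass and a fail on the same
    commit; a consistently failing test is broken, not flaky.
    """
    outcomes = defaultdict(set)
    for test, sha, passed in results:
        outcomes[(test, sha)].add(passed)
    return sorted({test for (test, _sha), seen in outcomes.items()
                   if seen == {True, False}})

runs = [
    ("test_login", "abc123", True),
    ("test_login", "abc123", False),   # same commit, both outcomes: flaky
    ("test_search", "abc123", False),  # consistently failing: not flaky
    ("test_search", "def456", False),
]
assert find_flaky(runs) == ["test_login"]
```

Whether a failure is "new" is the complementary query: did this test ever fail on an ancestor commit? Both become cheap once results are keyed by commit.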
If you maintain an open-source GitHub project with tests, please give it a try. The free plan is a good fit for OSS projects: 1GB of storage is enough for roughly 10M+ test results.
Old-school graphics in modern TS.
Several years ago, it was not possible to blit an entire screen of random pixels at a decent frame rate without something like shaders.
Even though screens now have even higher resolutions, the CPU can blast 2560x1440 random pixels to the screen at 90 FPS. Must be advancements in hardware and/or the JS runtime. (The bottleneck seems to be generating the random numbers...)
I figured out how to make my TV static effect look more realistic:
- Mostly: TV "pixels" had wide aspect ratios[1]
- Larger "grains" (see info in corner)
- Also added subtle CRT scan line effect. ('C' to toggle)
- Looks different when animated (click to toggle pause; probably should emulate 60FPS).
---
Started revisiting this rabbit hole while thinking about programming prompts from the new Recurse Center application[2]. They suggest about six different prompts; I figured out how to combine all the prompts together.
[1]: https://github.com/Leftium/fx/blob/33405b25dc7caeb48e6c563a3...
[2]: https://hn.leftium.com/i/47892660
I’m automating App Store media creation; both screenshots and app preview videos can now be recorded automatically; this way they should stay up to date and show correct content for a given locale.
I’m also adding translations; if anyone would like to help (with translating or testing new locales) let me know!
Early results are already live in the App Store page.
[1]: https://en.wikipedia.org/wiki/Nonogram
[2]: https://lab174.com/nonoverse/
Trilogy's model works quite well for agents, but I've avoided making AI features too native in the UI products - not everyone's cup of tea - so this month going to do a spike on a new pure AI native data authoring/exploration experience to see where that can go without messing with the core product, at both CLI/UI layers. Data consumption is (un?)fortunately a pretty perfect fit for the agent strengths with the right harness.
[1]https://trilogydata.dev/
A lightweight service for instantly hosting HTML, Markdown, and ZIP-based static pages. The goal is to make sharing simple: upload a file, get a live link, and let it expire automatically. The use case I'm targeting: letting users share AI-generated files more easily, with a place to render and host them for a short period of time.
One thing I’ve been learning is how much faster it feels to build on Cloudflare Pages, Workers, and D1 compared to my old way of setting everything up myself with Docker, Postgres, nginx, etc. on Hetzner for a small project. (I am not affiliated with any company, product, or organization mentioned above.)
It supports voice cloning, dubbing, transcription, and local/self-hosted workflows with Docker + desktop UI support.
Using open-source models like Whisper, Qwen, OmniVoice and more.
https://github.com/debpalash/OmniVoice-Studio
Thanks for checking it out
Just finished the software side using boring technology and am about to order the materials for the first few locations. Curious to explore photo alignment once real submissions start coming in. Stitching all the slightly differently angled photos into a smooth animation seems interesting.
I wanted to replace those passive social media feeds, where I'm just being served whatever gets clicks. I completely avoided embeddings and typical ranking, using AI agents instead, to get precise results and things that are actually interesting to me.
Still working on the platform, but I made a tool that already turns Hacker News into a personalized daily digest here https://briefin.com/hackernews/ (with summaries of the discussions)
Let me know what you think :)
[0]: https://sotix.itch.io/plight-of-the-wizard/devlog/1517881/v0...
- NookJS: a Javascript/Typescript interpreter and sandbox written in Typescript (https://nookjs.dev)
- Litz: a thin React meta framework that uses RSC as purely a server transport, allowing for more flexible client/server architectures (https://litzjs.dev)
- Nativite: a Vite plugin for building for native platforms using web technologies, with a custom plugin/platform support (https://github.com/samlaycock/nativite)
- superformdata: superjson but for FormData/URLSearchParams (https://github.com/samlaycock/superformdata)
- NoSQL ODM: ODM for various noSQL (and “unstructured” SQL) data stores, supporting both lazy and active data migration strategies (https://github.com/samlaycock/nosql-odm)
Hister is a free general purpose web search engine providing automatic full-text indexing for visited websites.
It's called smolvm, a smol machine, free and OSS: https://github.com/smol-machines/smolvm
So now I can get back to the project that I was actually working on (but mostly deferring) for some time now -- boku -- https://sr.ht/~hxii/boku/ which lets someone write a sequential series of tasks to perform without using code.
Would love to hear your thoughts and feedback more than anything.
- Integrated with lots of open-source and commercial simulators and models for chemistry, materials science, biology… As well as connections to service labs and robot labs to easily perform physical experiments.
- autoresearch / AlphaEvolve like optimization loop following the scientific method: observation, hypothesis, experiment, theory. Combined with a long-term self-learning memory like Karpathy’s Wiki.
You can work with it interactively like with a coding agent to research and execute experiments efficiently. You can also treat it like a graduate student, giving it long-term research goals, having it work 24/7, making smart decisions about where to use your limited resource budget, checking-in with it periodically as a supervisor to guide its direction.
Not all of this is shipped yet, but we’ve been online for a while and it should be plenty useful to any scientist/engineer already.
Fold-up, scissor lift, cross-cantilever 3D printer for open sauce
M.2 FPGA hardware accelerator devboard
All just for fun and open source https://github.com/kaipereira :D
Let's say you have a complex industrial plant or datacenter you want to upgrade.
You scan it with lidar and get a pointcloud and 360 panorama images. This gives you a large dataset, but what you really want is a floorplan, a lite CAD plan showing the racks, cable trays etc.
You take the scan, slice the pointcloud, and make an ortho image - it really looks like an x-ray of a building from the top down.
Then someone has to manually trace that in CAD to make a useful 3D model they can use for designing the upgrade.
So I'm automating the boring manual part: turning the x-ray plan pixels into vector polylines, using machine learning.
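The slicing step can be sketched roughly like this (illustrative names and types, not the actual pipeline): keep only the points in a height band and bin them into a top-down occupancy grid, which is the "x-ray" image.

```typescript
// Project a height-slice of a point cloud into a top-down occupancy grid,
// i.e. a crude "x-ray from above". Cells count how many points landed in them.
type Point = { x: number; y: number; z: number };

function orthoSlice(
  points: Point[],
  zMin: number, zMax: number,   // height band to keep
  cellSize: number,             // meters per grid cell
  width: number, height: number // grid dimensions in cells
): Uint16Array {
  const grid = new Uint16Array(width * height);
  for (const p of points) {
    if (p.z < zMin || p.z > zMax) continue; // outside the slice
    const col = Math.floor(p.x / cellSize);
    const row = Math.floor(p.y / cellSize);
    if (col < 0 || col >= width || row < 0 || row >= height) continue;
    grid[row * width + col]++;
  }
  return grid;
}
```

The ML part then turns the dense pixels of that raster back into sparse vector polylines; the raster itself is the easy half.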
One of our clients scanned their datacenter, and we generated a floorplan that shows all the rack box positions, cable trays, pipes etc.
Other examples: drawing the weld lines of patches in steel storage tanks, drawing in all the steel girder beams in a scan of an old railway bridge, or the windows, doors, and ceiling pipes of a commercial real estate refurb.
gord at quato.xyz
As part of this work, we're looking at running our custom machine learning kernel on multi-core x86 CPUs.
`ssh -p4242 tailpipe.clee.sh` for a quick demo without installing anything. Requires any valid RSA or ED25519 key.
https://shivan.dev/writing/post/markdown-wrangling
my biggest struggle is distribution, but I started working more on getting better at social media content creation. you can check us out on TikTok, YouTube, or Insta: wellbodyapp
Most workflow software is complex to extend and customize. Building an AI-native, structured workflow orchestrator from scratch for the agentic era.
As a starting point, I have designed and implemented an AI-native data store for the semantically linked, structured input and output data of workflow steps/tasks. These structured inputs/outputs act as a spec and guard rails for the workflow tasks.
Most recently, we added support for comparing funds with leading/trailing/rolling charts and benchmarking (create custom dashboards tracking the NAV and value charts of subsets of your portfolio), plus US stocks, ETFs, etc. And a family dashboard (e.g. you can see net worth, cashflows, and income, and use sheets at the family level, and more). See https://finbodhi.com/changelog for details.
We also write about related topics:
E.g. Benchmarking your returns: https://finbodhi.com/docs/blog/benchmark-scenarios
Or, understanding double-entry accounting: https://finbodhi.com/docs/understanding-double-entry
I'm also working on launching https://watch.ly (network/fs sandbox with human in the loop for ai agents), mostly waiting for the entitlements from apple at this point...
oh and I launched https://dirtforever.net recently to keep Clubs going for Dirt Rally 2 without the EA servers. Learned about the egonet protocol and made a server.
I am also working on various other things (a Go Clojure interpreter with IR/WASM, my own inference library, etc.). All are linked from the page above to a degree.
Blogging is more unpopular than ever but Pagecord is somehow growing in popularity. Keen to know what people here think.
https://pagecord.com
(Source: https://github.com/lylo/pagecord)
I figured "I already have a battle-tested solution, I just need to make it modern and spiffy, build a website for it and see if there's any interest -- in the age of Claude Code, this should be fast work!"
Wrong. Taking an internal library and offering it to others -- complete with documentation and modern tooling -- is an immense project, even with the help of AI agents.
Is there a market for a "formula engine in a box"? I don't know. But I also didn't know whether there would be a market for Calcapp either, and that has supported me working full-time for the past seven years. So I'm willing to take another chance.
And since I don't like the complexity of logging/metrics SaaS offerings I made https://logdot.io.
Ratings per dish instead of just the restaurant as a whole. 4 years in and working on a b2b intelligence offering for restaurants. b2c side has been too hard to get off the ground without solid investment and I've been unable to secure that.
Thoughts welcome :)
No tracking, no analytics, no cloud uploads, no account. MIT licensed. Everything stays on your Mac.
I'm currently planning and designing a plugin system, so others can contribute new functionality without affecting the scope of BetterCapture itself - which should stay as small as possible.
* Tab Wrangler for Chrome: https://chromewebstore.google.com/detail/tab-wrangler/egnjhc...
Continuing to work on Tab Wrangler, an extension for both Chrome and Firefox that has been available and open source for 10+ years. It auto-closes tabs when they have not been active for a configurable amount of time, similar to the feature built into Mobile Safari but more configurable.
I have been maintaining it and in the past few months added features that had been requested for a long time.
https://github.com/ayanmali/raft
The basic idea is "music with source code." Instead of prompting for finished audio files, you use an LLM to help write and revise a SuperCollider-based system that runs in the browser via WebAssembly [1]. The result is executable music: inspectable, editable, versionable, and controllable at runtime.
I’m especially interested in adaptive sound for software: games, creative tools, meditation apps, AI agents, interactive art. Places where a static audio file feels too dead, but hiring a composer/sound designer for every variation is unrealistic.
It’s early, but the thesis is that LLMs make algorithmic music much more approachable because code becomes a conversational medium. I wrote a longer piece about the idea here: https://x.com/osetinsky/status/2053674503801028944?s=20
You can check it out here: https://underscore.audio
[1] shout outs to:
- Sam Aaron for building SuperSonic, allowing for SuperCollider in the browser as an AudioWorklet: https://sonic-pi.net/supersonic/demo.html. Earlier, pre-LLM versions of Underscore relied on low-latency WebRTC implementations for streaming SC synths running on servers to browsers in real-time
- James McCartney, creator of SuperCollider: https://supercollider.github.io/
I'm working on <https://untether.watch>. Trying to shift 20-30 micro phone interactions to the wrist per day to ultimately reduce phone use. Dumbphones are too extreme - you need a smartphone for certain day-to-day activities (banking etc.)
The watch is a great form factor - it's got a crap screen (MIP), the ergonomics are awkward (rotate and look down), and it has limited capabilities. But that's the point! Do essential quick actions and leave the phone out of sight.
Requires an Android companion app to do the heavy lifting. Use the (head)phone mic and STT to reply to any Android notification and make notes. More features to come.
Garmin's SDK is seriously challenging: APIs are often broken across firmware versions, developer tools are limited, and testing is tough.
Right now working on framing this as an RL problem to better predict when nudging is actually successful and what kind of wording works best given the user data. Then applying the same logic to onboarding emails etc.
https://donethat.ai
I haven't really forgiven myself for dropping my PhD; I think it was the right decision at the time, but I also kind of wish I had pushed through it. I'm going to see if I can at least get a few papers published.
I've also had some fun getting Claude to create LSP servers for different languages, which it has been pretty good at, and that's nice; having good integration with Vim makes a language a lot more fun for me.
Oh, I also presented at LinuxFest two weeks ago: https://youtu.be/HmcVJWyOwJQ?t=6623
I thought sharpening my craft in software for a decade would help; but, the more I read ancient scriptures, the more sense they started making -- and this is as someone who's been mostly agnostic.
Seeing people working on nostalgic apps, wealth-pursuing prompt management tools, or ideological open-source alternatives. I've worked myself in many types of software of similar kinds, and I've found.. not much at the other side of the pursuit.
Some call it “הֶבֶל” (hevel, “vanity”); “तृष्णा” (tṛṣṇā, “craving”); or “تَكَاثَرَ” (takāthur, “striving for more”)...
Still working on it.
This started with https://github.com/edward-murrell/sambervise - a GTK tool for administering Samba users and groups. I'm currently building a tool that walks a user through setting up a domain, adding DCs, and configuring fileservers and workstations.
In the TODO is making NFSv4 integration with Samba as painless as possible, and some kind of GUI application.
I've been working on something in the vein of an indie game for a little over a year now. It has been a passion project, but I'm starting to come around on showing it to people.
I am a big fan of Telltale style narrative games. I think Baldur's Gate 3 was the biggest revelation of this for me. Taking that branching dialogue and freedom of choice, and tacking it on to a fun combat system was just everything.
When text based GTRPGs started popping up, I found it hard to connect with them stylistically. I found that I needed the multimodal stimulus of visuals and audio. This led me to start building something, and it ended up being somewhat of a cross between a Telltale game, a Visual novel, and a TTRPG.
Orpheus (https://orpheus.gg) is a fully on-the-fly generated tabletop simulator, with graphics, audio (TTS), and the freedom you can usually only find at a real TTRPG table. That means you can play a sci-fi, fantasy, or even a modern setting in your campaign. The assets are made for you as needed. It runs in your browser so nothing to install or tinker with.
Getting the harness right so the AI GM can stay coherent and organized has been the biggest challenge. It took a lot of iterations to get it to a point where it could understand the scenes it was building as the player changed them.
I've built it to be played with either a keyboard or a gamepad so you can play from your couch. You can switch between them as you feel like it. There is a 3D tabletop for combat, full character sheets, dice rolling, lore tracking. I want it to be dense.
Mostly, I’m looking for people who want to try it, break it, and tell me what feels magical, confusing, boring, or broken. My biggest roadblock currently is that asset generation is relatively expensive. I'm currently mulling over whether a playtest would allow for a BYOK setup so people could try playing as much as they'd like, or if I should add turn limits.
You can join the playtest waitlist at https://orpheus.gg/ -- and I just set up a Discord (https://discord.gg/pychWyzf) that I will use for early playtests. (Just me right now! Come hang out!)
I'd love to see a more modern day attempt at something like Bioware's Neverwinter Nights - which was designed so that someone could create a campaign, and then the game would provide the behavior, pathfinding, assets, and everything else with a virtual (or human) DM behind the scenes. You could still tell a human-driven story, but the engine would do a lot of the heavy lifting.
I think a lot of those attempts you mentioned try and brute force the problem or trust the AI too much on what to generate.
A lot of the same problems that AI coding agents run into also apply to this problem. You have to really manage context (avoid sending the model a novel) and enforce strict rules in the "engine". The hard part is world building that is consistent without railroading the player and forcing specific paths. I have an agent (for lack of a better term) that manages arcs across each tier: world arcs (nations, factions), player character arcs, NPC arcs, individual scene arcs, and location arcs (towns, cities, dungeons, etc). By prompting all of these as tight, individual arcs with flavor and context peppered in as needed, you end up with stuff that is more compelling. It has to be loose enough that you don't railroad the player. When you decline that NPC's quest, down the road that might have changed the overall arc for a town in a meaningful way.
I won't pretend that I've perfected anything but I have definitely noticed a spark in its writing and world building that I personally have really enjoyed.
OTOH, that means that the underlying story is that much more important. I think a lot of people mistake coherence for novelty. Biggest offender is puzzles - oh god do LLMs absolutely blow dire wolf chunks at coming up with organic and interesting puzzles.
I have a private vs public flag for assets that I'm considering more unique or sensitive, at the AI GM's discretion. I'm using embeddings from there to try and parse if an asset already exists in the public pool or not, and reuse it if possible. The thinking is that eventually I will have pretty decent asset coverage on most standard campaigns. I can't account for people going way off book though.
I have an asset pipeline that tries to determine player intent and pre-generate assets before they're needed. That way we can attempt to hide the "load screens" like retro games did with elevators. I have a kind of sliding scale for player coherency, and if the player has too many "misses" on the pre-generation pipeline it will increase its requirements for when it starts generating.
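That sliding scale might look something like this in miniature (purely illustrative, not the actual implementation): a confidence threshold that rises after pre-generation misses and relaxes back after hits.

```typescript
// Decide whether to pre-generate an asset for a predicted player action.
// After misses (we pre-generated something the player never used), the
// confidence bar rises; after hits it relaxes back toward a floor.
class PregenGate {
  private threshold = 0.5;

  shouldPregen(confidence: number): boolean {
    return confidence >= this.threshold;
  }
  recordHit(): void {
    this.threshold = Math.max(0.3, this.threshold - 0.05);
  }
  recordMiss(): void {
    this.threshold = Math.min(0.9, this.threshold + 0.1);
  }
}
```

The asymmetric step sizes (slow to trust, quick to distrust) are one way to keep a player who keeps going off-script from burning the generation budget.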
I may have wildly over-engineered this but I love it. =)
It's built around multiple different types of agents:
- Coding Agents are placed into cloned repos with a ticket (Jira/Linear/Notion/GH), and work until they open a PR, are resumed on CI failures or github feedback, and work until they can merge the PR.
- Standalone Agents are reusable, parameterized agent runs with no repo checkout. Generate reports, triage alerts, audit dependencies, query a database, post to Slack, etc.
- Persistent Agents are long-lived, named, message-driven agent processes. Each has a stable slug, an inbox, and a cyclic state machine. Wake on user messages, agent messages, webhooks, cron ticks, or ticket events.
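The persistent-agent shape above can be sketched as (a toy illustration under assumed names, not the real system): a stable slug, an inbox of typed events, and a wake() that drains pending events one cycle at a time.

```typescript
// Minimal sketch of a persistent, message-driven agent: a stable slug,
// an inbox, and a wake() that runs one cycle of the state machine.
type AgentEvent =
  | { kind: "user-message"; body: string }
  | { kind: "cron-tick"; at: number }
  | { kind: "webhook"; payload: unknown };

class PersistentAgent {
  readonly slug: string;
  private inbox: AgentEvent[] = [];
  private log: string[] = [];

  constructor(slug: string) { this.slug = slug; }

  // Events from users, other agents, webhooks, or cron land here.
  deliver(event: AgentEvent): void { this.inbox.push(event); }

  // One cycle: drain the inbox in FIFO order, handling each event.
  wake(): string[] {
    const handled: string[] = [];
    while (this.inbox.length > 0) {
      const ev = this.inbox.shift()!;
      handled.push(ev.kind);
      this.log.push(`${this.slug}: handled ${ev.kind}`);
    }
    return handled;
  }
}
```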
The idea is to have a better experience for navigating livecam streams that are publicly available on YouTube. There are a few livecam aggregators that include maps, but I never felt that any of them were satisfying, as they always require you to open new pages to watch the streams. On World Watcher, you can jump from place to place seamlessly.
You can also filter the streams by type of place or features, for example beaches or cams with audio. And if you don't know where to go, just try out the Explore button.
Live: https://agentfriendlycode.com/
Since last month we’ve stabilized the search UI/UX and have 5 search providers you can choose from and sort as you prefer.
We entered May with over 50 paying customers and have recently launched Uruky Site Search [2] (for website owners, this effectively is our own search index and crawler, which we’ll be bringing into Uruky soon as another search provider option)!
Customers really enjoy the simple UI (search doesn’t require JavaScript) and the search personalization (from choosing the providers to domain boosting and exclusion). We also have bang shortcuts (like "!g", "!d", or “!e”) for when something doesn’t quite give you what you’d expect.
You can see the main differences between Kagi, DuckDuckGo, Ecosia, etc. and Uruky in the footer (right side), but one huge difference is that with Uruky, after being a paying customer for 12 months, you get a copy of the source code!
Our main challenge right now is outreach because we want to do it ethically, and it’s hard to find communities or places to sponsor which are privacy-focused and don’t require €5k+ deals. Ideas are welcome! We’ve been sponsoring a project per month (Qubes OS, The Tor Project, and Hister so far), with our limited budget of ~$100 / month.
Because of bots and abuse there isn’t a free trial easily available, but if you’re a human and you’d like to try it for a week for free, reach out with your account number and we’ll set that up!
Thanks.
[1] https://uruky.com
[2] https://uruky.com/site-search
One thing I can recommend right off the bat is Reddit - there's many privacy focused subreddits, and also you can share the whole project in EU related subreddits and e.g. r/SideProject.
Would love to try it for a week, this is my account number - 9772263817629091
Keep up the great work!
I've topped up that account number for a week, enjoy (I'd recommend removing it from the post because anyone will be able to use it)!
I use it every morning for about 15 minutes. Review the cards, then 1 problem in the algorithms, 1 problem in concurrency, and I'm done.
I wrote more here about my motivations for creating it: https://medium.com/@andreiboar/algotutor-using-ai-to-actuall...
Basically attempting to modernize a lot of browser extensions which I have been using since a decade ago. Some of them are outdated and unmaintained, and some were good for their time.
The project is MIT licensed.
Just released this week on to the App Store: https://apps.apple.com/gb/app/verva-language-learning/id6755...
the general idea is to take pictures of birds and mountains, and use a bunch of colour-theory-from-minecraft
to first meanshift a bunch of the image to come up with a lower colour resolution image, then to match that to dmc threads
but then i also want to use tools like the axiom mod to fill in gradients, and to do hue shift/temperature changes to represent shadows, like how bdouble0100 uses purples as a shaded green, rather than a darker green.
i've also been using it to see how the claude code for web setup works, and it feels real poor compared to the cli.
the main problem i think i need to pull to local and write my own code for is the colour sampling from the oklab space. when i try to create gradients from colours already in the list, i've got a visualization of the line it's aiming to follow, but it's picking the next colour and placing it out of order vs projecting to the line.
likely my biggest issue is that claude and the like are still bad at thinking in more than 2 dimensions, but i think my vocabulary is also subpar for giving the feedback either in clear linear algebra or colour theory terms.
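the projection idea in linear-algebra terms looks roughly like this (a sketch, not my actual code): project each candidate colour onto the segment between the two endpoint colours and sort by the projection parameter t, so gradient steps come out in order along the line.

```typescript
// Order palette colours along the line between two endpoint colours by
// projecting each onto the segment and sorting by the parameter t.
// Works in any 3D colour space, e.g. oklab (L, a, b) triples.
type Vec3 = [number, number, number];

function orderAlongLine(colors: Vec3[], from: Vec3, to: Vec3): Vec3[] {
  const d: Vec3 = [to[0] - from[0], to[1] - from[1], to[2] - from[2]];
  const len2 = d[0] * d[0] + d[1] * d[1] + d[2] * d[2];
  // t = ((c - from) . d) / |d|^2 : position of c's projection along the line
  const t = (c: Vec3) =>
    ((c[0] - from[0]) * d[0] + (c[1] - from[1]) * d[1] + (c[2] - from[2]) * d[2]) / len2;
  return [...colors].sort((a, b) => t(a) - t(b));
}
```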
next idea is for when thats done is to make a mod that turns a survival game into a roguelike - in the style of the hades 2 challenge runs, so i can play a session of the game in a certain biome without having to do all the grind first to get there on a new character.
https://allaboutberlin.com/guides/immigration-office/wait-ti...
I wish I had more time for such projects, but since AI is now capturing most of the traffic, I am losing a lot of my income and I have to make up for it. It's a huge distraction.
I'm working on a mathematical knowledge base software.
It's kinda like a local GitHub for math. In fact the backend is actually a Forgejo instance; I'm building a frontend for humans and also a harness for agents that automatically consumes the knowledge base and expands on it. I realized the Issue/PR/review workflow works well for maintaining a knowledge base too.
The motivation is to actually help mathematicians/me TODAY to be able to do math together with humans/AI.
The knowledge base keeps mathematical writing as plain Markdown, but adds stable IDs, backlinks, search, draft changes, review, approvals, and merge. The agent side can read the same pages, follow the same references, propose edits, and go through the same review process as a human.
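A tiny sketch of the backlink-indexing part (assuming [[id]]-style wiki links; the real link syntax may differ): scan every page for references and invert them.

```typescript
// Build a backlink index: for each referenced page ID, the set of pages
// that link to it. Assumes [[some-id]] style links as an illustration.
function backlinks(pages: Record<string, string>): Map<string, Set<string>> {
  const index = new Map<string, Set<string>>();
  const linkRe = /\[\[([^\]]+)\]\]/g;
  for (const [pageId, text] of Object.entries(pages)) {
    for (const m of text.matchAll(linkRe)) {
      const target = m[1];
      if (!index.has(target)) index.set(target, new Set());
      index.get(target)!.add(pageId);
    }
  }
  return index;
}
```

Because the pages are plain Markdown with stable IDs, the same index serves both the human frontend (backlinks panel) and the agent harness (which references to follow).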
I’m not using formalization here. Everything is still natural-language proofs. The practical reason is that many areas I care about are not easy to formalize yet because it is not in mathlib.
I see this as a transition project: useful before autoformalization really works well, and maybe still useful afterward as the place where humans and agents organize exploration.
We all did, only to discover that for the three of us we could either play 1v1 or 1v2 with one person having twice as many turns as each other person (and they would always win).
If you play on one board locally you can do 1v1v1. It makes no sense.
I have an esp32 syncing to the board and forwarding the hits to a client written in godot. I'm now spinning up the server for stats tracking etc.
We just want to play darts...
I was tired of copying/pasting between agents, so I gave them identities, and tools to talk to each other and share tasks. I've found it so useful that I've left my job as the CTO of a German startup to focus on this.
The identities are public-key DIDs with DNS as the source of truth, as well as team membership. I also run a public registry at https://awid.ai (also OSS).
https://github.com/rmorlok/authproxy
My premise is simple: What if we could build a vertically-integrated, batteries-included, cloud-based development and execution environment that eliminated all the complexity of cobbling together a hundred different dependencies?
I learned to program in a simpler age, when programs ran on a single machine and had direct access to input, output, and storage. We didn't have to worry about client-server communications, or async storage calls, or idempotent microservice requests.
The reason we worry about that now is because modern programs don't run on a single machine anymore. They run on a distributed system with thousands/millions of clients (web browsers) connecting to hundreds/thousands of backend servers.
But what if we could build a platform abstraction layer on top of that distributed system? What if the platform took care of all the distributed complexity and the program itself didn't have to worry about that. From the program's perspective, it's just running on a big (abstract) machine. That's GridWhale.
Most TDEE calculators only provide estimates. MacroCodex helps you refine those estimates to calculate a more accurate TDEE, which continuously changes based on your activity level and calorie intake.
Not only this, it helps you figure out when to lean bulk, cut, recomp. It automatically provides you recommended macro and calorie targets (which change as your adaptive TDEE changes)
https://macrocodex.app
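The core adaptive-TDEE idea is standard energy-balance arithmetic (this is the textbook version, not necessarily MacroCodex's exact formula): back out expenditure from logged intake and observed weight change, then blend it into the running estimate.

```typescript
// Estimate TDEE from logged intake and observed weight change over a period,
// then nudge the running estimate toward it (simple exponential smoothing).
// Uses the common ~7700 kcal per kg of body weight approximation.
const KCAL_PER_KG = 7700;

function observedTdee(avgDailyIntakeKcal: number, weightChangeKg: number, days: number): number {
  // intake minus the daily surplus/deficit implied by the weight change
  return avgDailyIntakeKcal - (weightChangeKg * KCAL_PER_KG) / days;
}

function updateTdee(currentEstimate: number, observed: number, alpha = 0.25): number {
  // alpha controls how fast the estimate adapts to new observations
  return currentEstimate + alpha * (observed - currentEstimate);
}
```

For example, losing 0.5 kg in a week while eating 2500 kcal/day implies a TDEE around 3050 kcal; smoothing keeps one noisy week from swinging the targets.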
What the community says:
https://www.reddit.com/r/tirzepatidecompound/comments/1omfgx...
Yes, MacroCodex is an extension of the project above. The original domain now redirects to the same guides, which are now hosted directly on MacroCodex.
We also have multiple guides and tools:
https://macrocodex.app/guides/recomposition
There's Truthsorting, a logic puzzle where you have to order logical statements to make them true or false.
Pathword, a puzzle where you lay out letters along a path to spell out 4 words.
Morphology, a clued word ladder written by a different contributor daily.
And a few others!
I've been trying to promote it for a few months but I haven't had a ton of luck, to be honest. The audience hovers around 500 people and growing it beyond that has been pretty challenging.
It does a few things for me:
- Claude session management mapping it to real life tasks
- A scoop/sweep mechanism to auto-populate the Knowledgebase
It has turned my Claude into more of a PA. I have been made aware of beads since, but have yet to try it out. I see some similarities and some differences.
Just posted a first early demo and sample orchestrator system prompt yesterday: https://x.com/Westoncb/status/2053429329233895857
You initialize the system with an objective and a number of rounds to run for, and it loads the current config (orchestrator + specialist prompts and LLM configs) and begins working on it. You can manually step one round at a time or just let it run.
Rather than accumulating a single long work log/context, at each round specialists apply patches to a number of named 'artifacts' with different roles (e.g. uncertainties, dead ends, findings), which are injected into prompts during subsequent rounds.
The engine is written in rust and there's a web UI (and CLI). You can use the built in config editor to define specialists (and their prompts), what the artifact set is, orchestrator prompting etc.
https://github.com/prettydiff/aphorio
The idea is to have "real" linux, exposing ipv6, supporting nested virtualization, docker, etc.
https://menierememo.com/
Intended for an audience of one so still a bit rough around the edges, but the intended audience said “excellent” and is actually using it.
Mostly AI-built. Source code is here:
https://github.com/tagawa/Meniere-Memo/tree/gh-pages
The app has a lot of UX details that I've really enjoyed working on. I wrote up some notes about it here: https://www.freshcardsapp.com/3/
Separately, also working on a Zettelkasten notes app that pushes you to make small, atomic notes that you can organize in "collections" to provide structure beyond just hyperlinking in the note text: https://understory.ussherpress.com/ This has been a lot of fun iterating on. I started with a Miller Columns UI, like Finder, to visualize the graph of connections between notes, but I found that it was too overwhelming to use, so I scaled back and went with a more Notational Velocity-like quick search bar with note addressing. The app UI mimics a browser because I found that it works really well for something like this. I need to polish it a bit more and want to find people who will give it a beta test to help me iterate on the ideas some more.
The vision is for everyone to have an executive assistant that manages their email. It's built for people who spend hours in their inbox every week.
It has automatic prioritization, split inboxes, snippets, bundles, automatic follow-up reminders, and an AI agent that can do stuff for you -- without deleting your emails.
If you've read this far, I'd encourage you to give it a try and let me know what you think!
https://fluxmail.ai
If you want to check it out: https://presspass.ai
If you think this is stupid or you know of a more annoying “click based” workflow that should be automated, let me know! I’m early and need more thoughts.
2. Released "Postlet" (https://github.com/freakynit/Postlet), a tiny markdown-based static blog generator with a plugin pipeline, markdown + frontmatter pages, and theme support. Demo: https://postlet.pagey.site/ .. still working on adding more features.
While working on it, I realized I should build a small Hex package for authoring and playing demos right in a Phoenix app (it's very easy to author scripts with AI or by hand):
https://news.ycombinator.com/item?id=48087389
https://channelsurfer.tv/
Big thing I made recently is moving it from SvelteKit to Hono + Inertia + Vue.
I like SvelteKit, but I was struggling with stability in active development periods, and writing proper tests was very hard due to mocking all the magic, especially outside trivial testing tools.
Now the whole app is straightforward Hono MVC with Vue powered UI. Logic is easy to test, and all UI states exposed in Storybook.
I wrote a custom adapter that makes Inertia run on Hono, and coincidentally the same thing was released by the Hono author himself as an official module, which is a great sign for adoption!
So, try Inertia – it is the best of both worlds. You write the MVC backend as you like, and use modern JS frameworks for templates.
https://inertiajs.com/docs/v3/getting-started/index
The tech surrounding the game is awesome: the game and engine are fully deterministic, discrete (not float-based), with bit-packed data structures throughout, powers of 2 everywhere for really fast operations, and logic and rendering fully decoupled.
I wrote a simulator for the game and can simulate 10,000+ games in around 50 seconds on my MacBook M1 Pro. The purpose of the simulations is Monte Carlo tuning of my enemy AI (not LLM - conventional bots etc.)
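The Monte Carlo loop is roughly this shape (a sketch with a toy game step standing in for the real engine): run many seeded, deterministic games and estimate a win rate per AI parameter setting.

```typescript
// Monte Carlo evaluation of an AI parameter: run many deterministic,
// seeded games and measure the win rate. A toy "game" stands in for the engine.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Toy deterministic game: the AI "wins" if its aggression beats a random roll.
function playGame(seed: number, aggression: number): boolean {
  const rand = mulberry32(seed);
  return rand() < aggression;
}

function winRate(games: number, aggression: number): number {
  let wins = 0;
  for (let seed = 1; seed <= games; seed++) {
    if (playGame(seed, aggression)) wins++;
  }
  return wins / games;
}
```

Determinism is what makes this usable for tuning: the same seed always replays the same game, so a change in win rate is attributable to the AI parameters, not noise.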
Email in profile - would love to connect.
Use this to doomscroll nba twitter and sports bet, or if you're feeling more highbrow, peruse the NYT and passively gamble on geopolitical events.
Try it out here: https://chromewebstore.google.com/detail/anywager/eebgbiogbb...
After AI happened, I built an app (promptfunnels) to scratch my own itch and generate funnels (fancy name for landing pages with a purpose).
Then came the harder part: marketing it. Coming from a tech background, I knew nothing about marketing, so I started reading and came across the $100M Leads book. I realized codifying those principles together with funnels and marketing automation had a real market. My family, friends, and acquaintances became the first customers. A friend joined me as cofounder and we both quit our jobs to do this full time.
As we talked to other startup founders, they kept describing a tangential problem they called GTM. At the core it was the same thing we were solving: marketing for non-marketers. So we pivoted to RevMozi(https://revmozi.com/), which helps non-marketers do both inbound and outbound GTM.
We’re dogfooding the product and coming out of beta next month.
Wish us luck.
Umm where? They are indistinguishable from each other. Not pretty.
https://github.com/KevanMacGee/Repomix-Desktop
It's open source and has no official connection to Repomix. But the developer, yamadashy on Github, knows about it and seemed to like it enough to add it to the Repomix website under the community projects.
I like being able to paste all the code into a browser window and have lengthy discussions with ChatGPT, Gemini and GLM. Doing so in the browser saves tokens over doing it in Cursor or Codex. I like using the Projects feature in ChatGPT in the browser and Notebooks with Gemini because they give the model context and history on whatever I am working on. It was one part scratching my own itch, one part learning about Python and CustomTkinter.
It's made specifically for when you just want to get the code and paste it, no muss or fuss. It doesn't have support for flags (yet?) like the CLI because again it is built for speed. Besides, when I want flags, I like using the CLI instead to get granular. Repomix Desktop is for "just give me the code."
I'm a self taught coder so I'm very open to feedback.
Been pushing some new stuff on https://infrabase.ai as well, my AI infrastructure tools directory. Traffic growing steadily from comparison and alternatives pages. Interesting finding is that blog posts rank better but get fewer clicks now because AI Overviews, interactive comparison pages still earn clicks. ChatGPT has also started citing the site more as a source. Adding new content and polishing existing parts of it, added a page focusing on EU based services at https://infrabase.ai/european.
So: apt-cacher-ng didn't reduce the impact of the DDoS, but it does cause impact when there is no DDoS. Worst of both worlds.
So I'm working on an apt-cacher that goes to great lengths to keep working as much as possible when the upstream is down. It checks the repo metadata, keeps a list of your "hot packages", and downloads those before flipping the new metadata live - effectively a snapshot. It won't let you download a package you've never downloaded before during a DDoS, but packages that you do download regularly (machine re-installs, apt updates) it will ensure are available in the repo.
I'm calling it apt-cacher-ultra. It is pretty early days, it'll probably be another week before it's ready for a beta. I'm running it in my dev cluster right now, successfully.
https://github.com/linsomniac/apt-cacher-ultra
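The hot-package idea described above can be sketched like this (all names hypothetical; the actual apt-cacher-ultra implementation surely differs):

```python
from collections import Counter

class HotPackageCache:
    """Minimal sketch: remember which packages clients actually fetch,
    and prefetch those before flipping to a new metadata snapshot."""
    def __init__(self, hot_threshold: int = 2):
        self.downloads = Counter()
        self.hot_threshold = hot_threshold
        self.store = {}  # package name -> cached bytes

    def record_download(self, pkg: str, data: bytes):
        """Serve-and-remember: count the fetch and keep the payload."""
        self.downloads[pkg] += 1
        self.store[pkg] = data

    def hot_packages(self):
        """Packages fetched often enough to pre-warm on the next snapshot."""
        return [p for p, n in self.downloads.items() if n >= self.hot_threshold]

    def prefetch_before_flip(self, fetch):
        """Download every hot package via `fetch` before the new metadata
        goes live, so a later upstream outage can't break those packages."""
        for pkg in self.hot_packages():
            self.store[pkg] = fetch(pkg)

cache = HotPackageCache()
cache.record_download("bash", b"v1")
cache.record_download("bash", b"v1")
cache.record_download("rarely-used", b"v1")
cache.prefetch_before_flip(lambda pkg: b"v2")
```

The key property is that the metadata flip only happens after the hot set is fully cached, which is what makes it behave like a snapshot during an outage.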
Plus it's too early to really show, but also working on a dataflow language (w/ immutable data) that uses some code semantics from Rust/Zig and friends: https://github.com/jonathannen/badger
If you want to try it out, we offer some free credits at https://fuguux.com
Any feedback you have would be incredibly helpful! We're considering more kinds of reporting, support for QA testing, better integration with CI/CD, and more.
Note: we don't want to replace real user testing, but rather complement it. With AI user testing, you can get quick feedback on potential usability problems in hours for a fraction of the cost, making it so you can iterate much faster. We advocate doing user tests with real people to understand problems that require domain knowledge or nuance.
https://www.nair.sh/guides-and-opinions/communicating-your-e...
I finished writing that over the weekend.
I talk about combinatorial creativity as a way to be creative under time pressure. I had fun writing it, it'd been on my mind for weeks.
I finally finished the (monumental) Svelte 4 -> 5 migration that had been gathering dust for the last year. This unlocked a higher performance ceiling, letting me polish my animations and UX. Now I'm revamping my onboarding experience and taking another crack at marketing and promoting it. Last year I was focused on setting it up as a PWA and integrating Sentry monitoring and Stripe payments. All important stuff, but not what got me excited about the project.
I've been pretty tied up with maintenance and admin work, and haven't gotten a chance to work on the actual game design in a while, so I'm very excited to return to that part of the project soon. I have ideas for new puzzles and modes spilling out of my ears and I feel like with LLMs my prototyping can finally keep up with my brain, now that I have a robust foundation for the game architecture.
Also, we're hiring engineers and PMs (the eng position is about to be up). https://openmined.org/careers/#brxe-zgsziy
https://chatoctopus.com
https://sendbroadcast.net
It’s a self-hosted email marketing/newsletter app. The basic idea is: own your subscriber database, run the app on your own server, and send through SES/Postmark/Mailgun/SMTP instead of being locked into another SaaS.
Not trying to be “Mailchimp but cheaper”. It’s more for technical founders, agencies, and consultants who want a boring, controllable email tool they can deploy for themselves or clients.
I’ve kept the changelog public because I wanted the work to be visible: https://sendbroadcast.net/changelog
My buyers are typically people who want to own their data and are in regions that have strict data privacy regulation/laws.
Interesting fact: this was my last real project where v1 was built by hand, before AI coding became the norm in the software industry.
Some interesting use cases are coming up where people want to query across different data sources (Postgres + GA4) via chat. Feel free to reach out if you want to try it out.
I'm working on a personal/family travel organizer. It started as a tool to let me and my SO plan a trip together. There's been steady progress over the last couple of years. The focus is on privacy and the ability to self-host. Of course, there is a managed version if one doesn't mind me having access to their data.
The marketing site is stale with our previous offering, the demo of upcoming product is here: https://vimeo.com/1190884516
It's one of the few apps on my degoogled phone that isn't FOSS, so I thought it was time to fix that.
I'm actually looking for beta users! GetSetReply is a SaaS I've been building. It does two things for small businesses:
1. It helps you get more reviews by sending automated review requests to your customers over SMS and/or email after they purchase from you (PoS-integrated or manual sending)
2. It helps you reply to the reviews you already have, with AI-generated drafts in your brand's voice that you can post to Google/Yelp/TripAdvisor.
I'm very grateful to anyone who is willing to test or provide feedback. If you create an account (it's free with no credit card or integrations required), I'll reach out! Or you can email me via my email in my profile.
- Calculator that gives tg/s and VRAM required based on model params and DDR settings.
- Auto-create dashboards from CSV/JSON files or APIs: Easyanalytica.com
- Snippet viewer for HTML/React that allows annotation and sharing based on URL fragments
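For the calculator item above, a rough VRAM estimate can be sketched like this (the formula and overhead fraction are my assumptions for illustration, not the calculator's actual model):

```python
def vram_required_gb(n_params_billion: float, bytes_per_param: float,
                     overhead_fraction: float = 0.2) -> float:
    """Back-of-envelope VRAM estimate for inference:
    weights = params * bytes per param (2 for fp16, ~0.5 for 4-bit quant),
    plus a flat overhead fraction for KV cache and activations."""
    weights_gb = n_params_billion * bytes_per_param
    return weights_gb * (1 + overhead_fraction)

# Example: a 7B model at fp16 (2 bytes/param) with 20% overhead.
est = vram_required_gb(7, 2)
```

Real calculators also account for context length (KV cache grows with it) and quantization format, so treat this as the zeroth-order term only.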
https://github.com/cuzzo/clear
The goal is to make Rust-level code nearly as easy to write as Ruby, while still almost always choosing the absolute best strategy.
You can write somewhat slow untyped code, and the internal tooling can guide you to adding all the types and optimizations and concurrency strategies that will make your code as fast as possible.
Crystal wanted to be Ruby/Go - essentially a faster Ruby, that sort of scales, too.
CLEAR aims to be a substantially safer Rust: no garbage collector, no manual synchronization hazards, safer than even Pony, but also with far less complexity than Rust.
Crystal's fibers didn't do well multi-threaded until somewhat recently, and AFAIK it's still very far behind Rust/Tokio and Go in a lot of important benchmarks. Crucially, AFAIK, p99 in adversarial workloads can still blow up easily.
Like Tokio, CLEAR lowers fibers into finite state machines instead of stacks, which perform better in wait-heavy scenarios (Go's primary market: web servers) and idle-heavy scenarios (e.g. chat servers, telecom), and it has Go's work-stealing algorithm plus forced yielding to ensure p99 doesn't blow out.
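To illustrate the general technique being described (not CLEAR's actual lowering), here is a coroutine hand-reduced to an explicit state machine in Python: instead of parking a whole stack while waiting, all live state fits in a couple of fields.

```python
class FetchThenReply:
    """A wait-heavy task lowered into a finite state machine.
    The scheduler repeatedly polls it; while I/O isn't ready it
    returns None and occupies only this tiny object, not a stack."""
    def __init__(self):
        self.state = "start"
        self.data = None

    def poll(self, io_ready: bool, payload=None):
        """Drive the machine one step; returns None while pending."""
        if self.state == "start":
            self.state = "waiting"   # pretend we issued the request
            return None
        if self.state == "waiting":
            if not io_ready:
                return None          # park without holding a stack
            self.data = payload
            self.state = "done"
            return f"reply:{self.data}"
        raise StopIteration

task = FetchThenReply()
r1 = task.poll(io_ready=False)
r2 = task.poll(io_ready=False)
r3 = task.poll(io_ready=True, payload="hello")
```

This is why FSM-lowered fibers shine with many idle connections: a million parked tasks cost a million small structs, not a million stacks.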
Also, CLEAR transpiles to Zig, so it has native access to the entire C library. Crystal has a bootstrapping / ecosystem problem that's unlikely to ever be solved.
CLEAR doesn't need a single person to contribute to it to have access to basically everything.
Also, transpiling to Zig means you get Zig's other killer feature: you can compile to any target (e.g. Linux) from any target (e.g. macOS).
Since it does it anyway, I added dossier pages to it as well: https://searchcode.com/repo/github.com/rust-lang/rust which is useful for humans and shows what the system is creating.
Best part is that I get to use the tools I have built, so https://github.com/boyter/scc and https://github.com/boyter/cs to improve it which benefits anyone using those tools.
I’d love any feedback!
It’s a lot of fun and ultimate nerdery for me :) I’m a ski instructor through the Austrian and Danish ski school systems, I studied physics, and I’ve been a developer the last 15 years.
Lots of friends asked me which version of OpenClaw/Hermes is recommended as a stable version. I had no clue, and I don't update my OpenClaw/Hermes very often, to avoid unstable versions. So I created the Agent Watch dashboard.
https://agentwatch.aicompass.dev/
THANK YOU to everyone who has given feedback on this tiny project.
I’ve been trying to reduce and eventually eliminate my reliance on Big Tech, and the lack of user reviews and ratings was always a big pain point each time I tried to switch away from Google Maps.
I’ve started building a service where users can write reviews and rate “places” (POIs) in the OpenStreetMap database, such as a cafe, a museum, or a shop. It’s a fairly straightforward CRUD app with a bunch of OpenStreetMap-specific features, such as logging in with OpenStreetMap and querying places by their OpenStreetMap metadata.
It’s still in active development but it has good docs, a great API reference (including an OpenAPI spec), a demo app with the entire planet imported and queryable, and an early stage Android SDK.
https://app.socialmaps.org/
https://docs.socialmaps.org/
https://codeberg.org/socialmaps
https://decaboy.fit
Built in 15 minutes with mecha-ai, my own code assistant powered by DeepSeek
https://github.com/kuyawa/mecha-ai
Right now working on a betting app, DeepSeek is fairy dust and Mecha is my magic wand, I am unstoppable!
* If you need an app I can build anything and I mean anything in one day for peanuts, just let me know
It can get you up and running in a few minutes with an installer that can set up a new system or keep an existing system up to date. There's also a command line version that works on Arch and Debian based distros (including WSL 2) and macOS. I use it on my personal devices and a company issued MBP.
I'm not going to lie, I've been using computers for 25 years and this is the happiest I've ever been with using 1 machine for everything (software development, media creation, gaming, etc.).
Website: https://arkvis.com
Poker Equity Calculator: https://github.com/lodenrogue/poker-equity-calculator-web
Davao Explorer: https://github.com/lodenrogue/davao-explorer
Reading Summaries: https://github.com/lodenrogue/reading-summaries
I also created a couple of chrome extensions:
HN Dracula Dark Theme: https://github.com/lodenrogue/hackernews-dracula-theme-chrom...
Regex Search Chrome Extension: https://github.com/lodenrogue/regex-search-chrome-extension
Created a small command line util to get earthquake data in the Philippines:
Philquakes: https://github.com/lodenrogue/philquakes
https://github.com/devtheorem/php-handlebars
I've also been developing Cropt, a zero-dependency JavaScript image cropper which works great for cropping and scaling profile images before upload:
https://devtheorem.github.io/cropt/
For the past few years, a group of us from Google, Microsoft, GM, IBM, Roblox, Rubrik + more have been working on a design standard for APIs called [AEP](https://www.aep.dev). The goal is twofold: learn from our companies' mistakes around APIs and enable better tooling with less configuration.
We’re at a point where AEP-compliant APIs get a resource-oriented CLI, MCP server, full UI, and Terraform provider for near-zero configuration.
Aepbase has been my way to tie the whole ecosystem together. You run a single binary and define the schema for a resource with one API call. Now you've got a full set of CRUD APIs and support for CLI/TF/MCP/UI - all after that one API call.
It’s a really cool way to tie together all of the work AEP has been doing.
Love to hear HN’s opinions on all of this. We’re still trying to figure out the best way to sell people on AEP.
Features:
- Control channel for block header announcements, operational mechanisms, and network topology automation
- Separate channels for subtree, subtree grouping, and transaction load
- Transaction load sharding by deterministic multicast group membership based on TXID
- Transaction specialization filtering and retransmission, both unicast and multicast, to connect edge networks interested in only a portion of the transaction load for whatever reason
- NACK-based retransmission of missed packets via hash chain gap sequence tracking (per sender, per shard) with automated caching endpoint beacon discovery and tiered network distribution
- BGP-AnyCast based transaction ingress
Basically all the topology pieces needed to scale the actual small-world network for Bitcoin miners or transaction processors: dense at the core, with layered and sharded group distribution toward users at the edges. Right now just site- or org-scope multicast is planned, but provisions are being made to extend via MP-BGP eventually.
For BSV Blockchain but could work for the other Bitcoin variants too, if they ever wanted to scale.
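The TXID-based shard assignment described above can be sketched as a pure function (the group count and hash choice here are illustrative picks, not the actual protocol's):

```python
import hashlib

N_GROUPS = 8  # hypothetical multicast group count

def multicast_group(txid_hex: str, n_groups: int = N_GROUPS) -> int:
    """Deterministic shard assignment: every node hashing the same
    TXID lands on the same multicast group, with no coordination
    traffic needed between nodes."""
    digest = hashlib.sha256(bytes.fromhex(txid_hex)).digest()
    return int.from_bytes(digest[:8], "big") % n_groups

# Any two nodes computing the group for the same TXID agree.
g1 = multicast_group("aa" * 32)
g2 = multicast_group("aa" * 32)
```

Determinism is what makes the edge-filtering story work: a node interested in only some shards knows exactly which groups to join.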
Open the HTML file in a code/text editor. Look at one of the anchor tags and you will see the contents of "HREF", "ADD_DATE", "LAST_MODIFIED", "ICON_URI" and "ICON". Only the "HREF" is necessary to make the anchor tag functional; all of the others serve other purposes. Most of the others make some sense, but seem obsessive to me. The longest, sometimes hundreds of characters long, is the "ICON" item. That long string of characters concerns me and looks suspiciously like the traffic I see when I use the network monitor inside Inspect. To me it looks like a tracking code.
Of course I may be wrong, but none of that stuff is necessary except the HREF. The script I wrote strips off everything except the HREF, puts it in a new anchor tag, and it works fine. Really this is just old school HTML.
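A minimal version of such a stripping script could look like this in Python (the commenter's actual script may differ; this just demonstrates the keep-only-HREF idea):

```python
from html.parser import HTMLParser

class HrefOnly(HTMLParser):
    """Rebuild bookmark anchors keeping only HREF, dropping ADD_DATE,
    LAST_MODIFIED, ICON_URI and the long base64 ICON blob."""
    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        # attrs arrive lowercased as (name, value) pairs
        if tag == "a":
            href = dict(attrs).get("href", "")
            self.out.append(f'<A HREF="{href}">')

    def handle_endtag(self, tag):
        if tag == "a":
            self.out.append("</A>")

    def handle_data(self, data):
        # keep the visible link title
        if data.strip():
            self.out.append(data)

p = HrefOnly()
p.feed('<A HREF="https://example.com" ADD_DATE="123" '
       'ICON="data:image/png;base64,AAAA">Example</A>')
result = "".join(p.out)
```

The output is a plain old-school anchor that works in any browser, with none of the metadata baggage.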
I started with this last summer. Usually I get tired of an idea, but this one is just an endless pit of things to try out.
Currently seeing how we can get an analytics agent working on the canvas. Video here: https://x.com/i/status/2053410747137266070
It uses Let's Encrypt by default. We use delegated DNS to handle ACME challenge validation (we run the DNS, you just CNAME to us). This means you don't need to give us DNS credentials or anything. And for HA workloads it's great, because there's a central clearinghouse for certificates - so all the machines in your web farm (or whatever) get the same cert, but you don't run in to rate limits with LE.
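The customer-side setup for delegated DNS validation like this usually amounts to a single record, something like the following zone fragment (all names here are hypothetical placeholders, not the service's actual endpoints):

```
; Delegate ACME DNS-01 validation for www.example.com to the
; provider's DNS without sharing any DNS credentials.
_acme-challenge.www.example.com.  IN  CNAME  www.example.com.validation.provider.example.
```

Once the CNAME exists, the CA follows it during validation, so the provider can answer challenges for that hostname while the customer's zone stays otherwise untouched.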
We're recovering Windows Server guys so we made sure our automation works for painful windows workloads like IIS, Exchange etc. too.
We've had enough interest that we're building it out for real. Just left beta last month.
Started with a niche and launched it: VersionAlert for Unity (https://versionalert.com/unity)
Working on the bigger product still. Existing solutions I've found in this space seemed lacking. On my website, I want people to quickly find the software they want to be kept up to date about (with a smart search bar that does the heavy lifting for them) and easily sign up for notifications for new versions. Hope to make a Show HN for it soon!
It's an iOS & Android app that applies various generative art effects to your photos, letting you turn your photos into creative animated works of art. It's fully offline, no AI, no subscriptions, no ads, etc.
I'm really proud of it and if you've been in the generative art space for a while you'll instantly recognise many of the techniques I use (circle packing, line walkers, mosaic grid patterns, marching squares, voronoi tessellation, glitch art, string art, perlin flow fields, etc.) pretty much directly inspired by various Coding Train videos.
Currently it covers 6 regions and 250+ subscription services across 30+ categories, recognizing 850+ billing-name patterns. It even has built-in smart alerts for different services and region-specific considerations (the FTC's Adobe settlement, the Hola VPN danger, UK price-hike exit rights, the Cloud Act warning, etc.).
It adds up monthly and annual spend, and identifies alternative saving opportunities and more ethical options.
I have plans to add additional regions but that will take extra research to understand the realities of those markets and the providers within them. I also don't speak any other languages, so this may also be a bit of a hurdle.
https://findrecurring.com/
MedAngle is literally everything one could need, personalized to their curriculum across 4-6 years of medical school. Quizzes, videos, notes, flashcards, reminders, scheduling, performance, search, and more.
Our Super App comprises MedGPT + MedAgent + Spaci (futuristic spaced repetition), which serve as layers over our massive collection of features such as the Smart Suite, Learning Library, Clinical Corner, Tested Tools and more.
100k+ users, 10s of billions of seconds spent studying smarter, invite only. Bootstrapped, growing nicely. I lead a team of top medical students and doctors.
Copywriting after AI
https://www.nair.sh/books/copywriting-after-ai
It's 88 pages of me describing my mental models for marketing, those which I think still hold true even after the introduction of AI.
I’ve split the experience into two parts: a mobile-friendly app at https://app.orcamarka.com for bookmarking websites, text snippets, or images into a pure text format, and a reader part at https://m.orcamarka.com optimized specifically for the limited browsers on devices like the Kindle (the site will automatically redirect you to app if it detects a more capable browser). To bypass the pain of typing URLs on E-ink, the reader part displays a QR code that you scan with the app to instantly sync and load your text.
I’ve been using this personally for a month and it has significantly shifted my long-form reading from my phone to my Kindle. Since it’s a web app, there’s no installation required and it's completely free.
I’ve tried to design it to be intuitive enough to use without instructions, but I’m looking for beta testers to try it out and let me know where I can improve the workflow!
However, I worked on it for the past ~5 years on and off (well, mostly off) and rewrote it too many times. Now finally close to releasing, bought a domain and setting up all the last remaining things.
You should check out my new open source software build tool, https://pcons.org.
Free tier is enough for most users, paid tier just exists to gate the stuff that is expensive to run like SMS alerts.
Check it out at [Larm](https://larm.dev) and try out the [response time checker too](https://larm.dev/tools/response-time) to try out the Larm probe infrastructure.
https://github.com/jondwillis/jacq
2) Claude code plugin based on some ideas found in https://www.anthropic.com/research/emotion-concepts-function The main idea is to add hooks that inject “baselines” under some conditions to counteract certain “emotions” that can cause subtle misaligned behavior in agents
https://github.com/jondwillis/functional-emotions
3) Final Fantasy XI custom client remaster in Bevy/Rust alongside an MCP integration that aims to allow agents to play autonomously on private servers à la “Claude plays Pokemon”
Contact: https://jonwillis.dev
https://github.com/madebywelch/mau
Also working on a handheld computer project. Did improvised thumb typing tests with paper and a stack of notebooks to determine my typing area. Next step is ordering some switches to see what they feel like.
The majority of the code (almost 70%) is generated by Gemini Pro and is extremely ugly. Due to a recent eye injury I've not been able to code as much as I want, so I'm delegating many things to Gemini. Eventually, as my health improves, I plan to rewrite the entire thing.
[0]: https://codeberg.org/naiyer/mesaphore
[1]: https://support.microsoft.com/en-us/office/excel-specificati...
The idea was born out of wanting to use the review tools you get on existing sites like GitHub without having to push and start bloating PR lists. You'll be able to leave yourself comments and code suggestions after review, which you can then pull out into a Markdown file to feed back to your coding agent (or anything else, for that matter).
I'm also trying to include some optional (very optional) AI extras where you can use your own keys, and then get a tour of what you've changed and a quick overview of the changes.
https://understand-anything.com/
It is a language that is embeddable in other programming languages, with a type system similar to TypeScript and a runtime similar to Go.
People use it currently for structured outputs with llms but soon we will support orchestration and more.
We are letting some users have an early access preview! Let me know if you are interested in hacking with it!
I've published several panels under this banner already (tools for redis, caches, celery, etc.); I am currently working on a base library layer for tools to inherit from and to make it easier to create new tools.
Essentially, the point of all of this is to make it so that you don't need so many external services; instead, DCR provides self-hosted alternatives. This in turn makes it a lot easier to build and productionize something using Django.
Reception has been decent so far, and I estimate several thousand current adopters (it's hard to estimate based on download numbers alone). For May I will finalize a common design language, further formalize the plugin system and how it works, and likely release a new panel.
iCloud Photos is fully baked along with implementing their completely undocumented SyncToken. I’m doing some QoL work in the next few weeks, tightening up some early architecture decisions, and then adding more providers (Immich, NextCloud, Google Takeout… else TBD).
Since last time I posted this, two other people contributed and I’m almost at 100 stars! That’s some dopamine.
https://github.com/rhoopr/kei
Given a distance, an allowable time to reach that distance, a payload to send, and an expected exhaust velocity, how would you calculate the time required to convert energy into antimatter fuel and how much antimatter is needed to arrive at the destination (starting from the Moon)?
There are a few side calculations, such as the size of the radiator, estimated footprint of the fusion reactor itself, and how much metamaterial is needed. This is to help figure out timelines for a sci-fi novel, so ballpark answers are completely fine.
The calculations yield what appear to be values around the correct order of magnitude. Would be delighted to have insights, comments, and corrections.
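For the ballpark-answer spirit of the question, the core propellant/antimatter calculation can be sketched like this, under loudly stated simplifications (non-relativistic constant-speed cruise, instantaneous accel and decel, a made-up drive efficiency; all numbers illustrative):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def antimatter_needed(distance_m, time_s, payload_kg, v_exhaust, efficiency=0.5):
    """Back-of-envelope sketch:
    - cruise speed v = distance / time (acceleration phases ignored)
    - total delta-v = 2v (speed up, then brake at the destination)
    - propellant mass from the Tsiolkovsky rocket equation
    - exhaust kinetic energy supplied by matter-antimatter annihilation:
      each kg of antimatter plus its matter partner releases 2*c^2 J,
      degraded by a drive efficiency factor."""
    v = distance_m / time_s
    dv = 2 * v
    propellant_kg = payload_kg * (math.exp(dv / v_exhaust) - 1)
    energy_j = 0.5 * propellant_kg * v_exhaust**2
    antimatter_kg = energy_j / (2 * efficiency * C**2)
    return propellant_kg, antimatter_kg

# Example: 4.2 light-years in 50 years, 1000 t payload, exhaust at 0.1c.
ly = 9.4607e15
prop, am = antimatter_needed(4.2 * ly, 50 * 365.25 * 86400, 1e6, 0.1 * C)
```

At these speeds (~0.08c cruise) the non-relativistic treatment is only mildly wrong, which is fine for order-of-magnitude novel timelines; the antimatter production time then follows from dividing the required annihilation energy by the reactor's power output.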
https://nodes.max-richter.dev https://github.com/jim-fx/nodarium
I moved to a new city 4 years ago and didn't even realize a municipal election was happening until a councillor knocked on my door.
I am building townhall-tracker to prevent this and shape the decisions that most affect my day-to-day
https://redactor.negativestarinnovators.com/
With Unity I'm trying to bundle a bunch of different free, cheap, or open-source solutions together. For facial capture, that includes a custom converter from the output of Deadface (based on MediaPipe) to ARKit blendshapes, plus eye movement. For body capture it's a custom hook into SlimeVR that lets you mocap with cheap-ish IMU-based DIY trackers, all on top of a custom-made (not free, but open source) physics rig solution that gives you accurate rigid-body real-time collision, saving on cleanup work.
It's been going really nicely despite being an unusual workflow. I hope to release it as a plugin for an in-development sandbox game in the near future. Mocap and animation were my passion long before I started with tech stuff, and finally I'm able to pursue it.
A sample puzzle can be found here: https://sudokupad.app/23x300ggzn
It's been well received by the (very kind!) Sudoku/puzzle communities, so I'm working on throwing a nice interface on it that fits the rules a bit better. I've found about five other examples of others doing a variation of this ruleset before in one way or another, and it's been fun trying to see how hard/deep I can get this puzzle to go.
Science is full of discoveries that could change how you think, eat, train, sleep, work.
Most people never hear about them because papers are dense, paywalled, and written for specialists.
I built SciCrumb to fix that. One paper a day, simplified to 3 minutes. I curate what's actually worth your time.
I personally really like it. I get a new idea and learn something new every day.
https://apps.apple.com/us/app/scicrumb/id6758953292 Product feedback or support is highly appreciated. Would love to make more people aware of it.
It's going to be 100% statically type-safe across the stack, SQL first for the DB layer(s), and with a minimal amount of boilerplate (just enough for a clean architecture without magic).
I believe writing my own "Toy Harness" is a good way to learn and understand these tools.
Other than that, I did plant my tomatoes today.
You get to choose the genres you're interested in, and it creates playlists from the music in your library. They get updated every day - think a better, curated by you version of the Daily Mixes. You can add some advanced filters as well, if you really want to customise what music you'll get.
It works best if you follow a good number of artists. Optionally, you can also get recommendations from artists that appear in playlists you follow or have created. If you don't follow many artists, you should enable that for the service to be useful, as right now those are the only pools of artists the recommendations draw from.
https://riffradar.org/
cli: https://github.com/philipnee/mvmt ui: https://github.com/philipnee/mvmt-desktop
[0]: https://technokick.com/ (Techno Kick synth)
[1]: https://riviera-demo.surge.sh/ (Reverb effect)
[2]: https://ya3.surge.sh/ (TB-303 synth clone)
- AI-assisted academic progress reports so parents can effortlessly stay on top of their kids' middle/high school academics. https://www.gpa.coach
- A family economy app where parents set the rules, kids earn credits for chores and good behavior and kids redeem credits for screen time, money, and other benefits. https://www.kredz.app
- AI first fun mobile media editor your parents could use. https://www.mix.photos/
https://rotadeck.com/
https://www.stocksdashboards.com
Going to get back into self-managed IRA...this time better-informed :-)
You build up a library from your physical books by scanning them in or discover OpenLibrary books to read in app. Then as you mark books in your library as read, it starts building a rotation and recommending books you haven’t read recently. I’ve been using this nightly to track my son’s 1000 books before kindergarten for the last couple of months.
Currently, I’m working to get the app out on Google Play and adding multiple story time attendee support.
In short, it unifies the configuration of different desktop components as policies (dconf, Kconfig, polkit, Chrome, Firefox, etc.). It's LGPL.
You can check my slides for the upcoming Tuxconf conference this Friday: https://getbor.dev/publications/tuxcon2026/
Cheers! Blago :)
I have new features planned, such as sharing bookmarks and possibly BPM detection, but also some quality-of-life changes like better UI scalability for different screen sizes and split-screen use.
If you are in the area, try it out https://taipeiparking.com/ - Android app and web app.
https://vistacker.com - allows disconnected operation, auto sync across multiple machines with optional encryption so the service can’t see your data.
I'd love to get some feedback (have a few friends trying it out).
Shamelessly trying to attract new monthly sponsors and people willing to buy me the occasional pizza with my crap HTML skills.
https://brynet.ca/wallofpizza.html
The goal is to build a deep research product for actual researchers, since we believe that it is an extremely powerful product that is still nascent but has enormous potential - which we've already seen with some early users.
https://webhound.ai
After a few rounds of using it, I already know a few things I didn't before: I suck at right-to-left breaking putts, I baby uphill putts too much, and getting out of bunkers consistently is not good enough if I can't sink the occasional save. So I know what to practice now.
I know some Rust and was going about it with clap, but no one I know cares about Rust, so I've switched to Go with spf13's Cobra CLI.
Harness is pretty cool, but I'm still quite a noob gopher, so I'm taking the chance to learn the ins and outs of Go...
No AI touches my code lol, else I would learn jack shit
We just received the API usage approval from Google, and I'm integrating GBP to https://pinpost.io this week (our reliability first social media management tool)
Been working on it on & off for a couple years, usually taking breaks between refactoring stupid decisions.
https://klados.bio/ Prod site is pretty behind dev branch, basically abandoned normal CI / repo hygiene for the moment
I wanted to make it easier to quickly see/study trending articles on Wikipedia because they tend to make good topics to know before going to trivia night.
I've had the domain for a while, but just made the app recently on a whim.
I use Wikimedia's API to get the trending articles, curate them a bit, add some annotations to provide context, then push to deploy the static site.
https://www.metanoia-research.com/
Some finished covers (https://saltwatercowboy.github.io/albedo/pages/en-10-05-26.h...). Next up pixel sorting.
The persistence model makes documents somewhat sharable, but I do find Open Graph previews to be mixed. In Messenger it renders the whole URL, which is quite long due to encoding, and that kills the conversation view.
Something I can finally enjoy: just playing with it. I tediously wired up a pair of pendulum simulations to drive an XY oscilloscope—got a nice Lissajous curve.
But now I want to double it to four pendulums. Each axis (still just X and Y) to be driven by the sum of a pair of pendulums. With them out of phase, the curves appear to sometimes collapse but then suddenly explode again…
(Love to eventually hook it up to an actual plotter.)
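The four-pendulum idea above can be sketched numerically: each axis is the sum of two small-angle pendulums, i.e. two sinusoids, and slightly detuning a pair produces the beating (collapse, then bloom) described. Amplitudes, frequencies, and phases here are arbitrary picks, not the author's setup:

```python
import math

def axis_signal(t, pairs):
    """One oscilloscope axis driven by the sum of a pair of pendulums
    (small-angle approximation, so each pendulum is a simple sinusoid).
    Each pair entry is (amplitude, angular frequency, phase)."""
    return sum(a * math.sin(w * t + p) for a, w, p in pairs)

def trace(n=4, dt=0.5):
    """Sample the four-pendulum figure: X and Y each get two pendulums,
    slightly detuned so the pair drifts in and out of phase, making the
    Lissajous-like curve beat - collapsing and then exploding again."""
    x_pairs = [(1.0, 2.0, 0.0), (1.0, 2.1, math.pi / 2)]
    y_pairs = [(1.0, 3.0, 0.0), (1.0, 3.05, 0.0)]
    return [(axis_signal(i * dt, x_pairs), axis_signal(i * dt, y_pairs))
            for i in range(n)]

points = trace()
```

The beat period is set by the detuning (roughly 2*pi / |w1 - w2|), which is a handy knob for how quickly the figure collapses and re-blooms.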
I was responsible for multiple RADIUS services used by millions of people every day. The existing software is slow to build with, difficult to scale and expensive. I couldn't let it go.
Step one was building the platform to run it on and make it sustainable as a business. Step two is implementing protocols like RADIUS that lack a separated compute/storage model but should really have one.
I chose C# because I know it, and build native single-file executables using AoT.
And on and on.
A stateless compute model with separation between the packet handling and the authentication logic solves pretty-much all of it.
As a demo, I repaired an old Philips PM5190 function generator (about 40 years old) and connected it to Claude Code. Lots of fun. Going to post a follow-up video in the next couple of days.
I recommend the book. It certainly isn't easy (maybe 3x harder than Crafting Interpreters), but I've learned a ton (e.g. how to deal with operations on different sizes of types, or the trick of using pseudoregisters to avoid having to figure out registers up front).
https://github.com/jmikkola/writing-a-c-compiler-python
It's inspired by GitHub PR review workflow, only with quick iterations and local.
It's been great! I found some dedicated users, dogfooding it every day with Claude and starting to get more contributions from the little community. We just got accepted into Homebrew core which was my target.
I'm expanding the team features now as I've got a few users keen to get the sharing service deployed in their private networks!
I’ve found it super useful in my personal life and it's pretty much my #1 app.
Useful to debug local Kafka apps against any cluster, intercepts the traffic, decodes the protocol. You see interesting (and weird) things when you look at the protocol. Still early, though already useful for local debugging when you know what you want.
Right now I intend to make it compatible with Incus as a remote. Then it's just a matter of adding it as a remote and you can consume all of your versioned images.
https://github.com/meigma/imgsrv
https://github.com/safedep/pmg
Side project is my own agent harness, https://github.com/Smaug123/writ , which is being built sandbox-first and with Nix as a first-class citizen. Obviously everyone has to write their own agent harness as a rite of passage.
I've always wanted this and have used it to experiment with Gemini's cloud agent Google Jules.
https://github.com/alpeware/datachannel-clj
I've been using Anki for 10+ years and love it but always wanted something with a cleaner UX and a reader view. The recent Anki ownership change pushed me to finally make something, and it's seeing some traction :)
Right now I'm focusing on getting the reading and note-taking view to be nice. I used to use Polar Bookshelf (RIP) but that went away, trying to make something better.
The flashcard side also has a REST API btw!
Making the game engine was easy. Making the story consistent, believable, and interesting has been the biggest challenge for me.
I’ve written a few bad novels but never any narrative games, so it’s been a good exercise for me.
I’ll keep chipping away at it this year, and probably expand beyond morels to other seasonal natural phenomena that my people enjoy like smelt/salmon run, wildflower blooms, etc.
My maps aren’t in public release, but reach out if you want to give it a look.
https://www.tirreno.com
https://github.com/tirrenotechnologies/tirreno
https://streams.dbconvert.com
A self-hosted database IDE with built-in migration, CDC, and DuckDB-powered federated SQL.
Mostly trying to remove the annoying gap between "I can inspect this database" and "I can safely move/sync this data somewhere else".
Current focus: resumable large loads and cleaner initial-load-to-CDC handoff for Postgres/MySQL.
It's an SDLC workflow harness for agents. Instead of using skills to encode my typical workflows (e.g., create PRD, then create plan using TDD, then dispatch subagents, etc) I've built a concurrent event-sourced process manager to handle it.
https://kintoun.ai - Document translator that preserves formatting and layouts
https://ricatutor.com - AI language tutor for YouTube
It's designed to integrate with Maven projects, to bring in the benefits of tools like Gradle and Bazel, where local and remote builds and tests share the same cache, and builds and tests are distributed over many machines. Cache hits greatly speed up large project builds and also make them more reliable, since you're not getting flaky test failures in otherwise identical builds.
https://jrecc.net
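The reason identical local and remote builds can safely share a cache is that the key is derived purely from the inputs: any byte that could change the output changes the key. A minimal sketch of such a key function in Python (names and structure are illustrative, not jrecc's actual implementation):

```python
import hashlib
from pathlib import Path

def cache_key(inputs: list[Path], tool_version: str, flags: list[str]) -> str:
    """Content-hash everything that can affect the build output.

    Two machines computing the same key can share the cached artifact;
    any changed source byte, compiler flag, or toolchain version
    produces a different key, so stale hits are impossible.
    """
    h = hashlib.sha256()
    h.update(tool_version.encode())
    for flag in flags:
        h.update(flag.encode())
    for path in sorted(inputs):        # sort so input order doesn't matter
        h.update(str(path).encode())
        h.update(path.read_bytes())
    return h.hexdigest()
```

Sorting the inputs makes the key independent of directory traversal order, which is one of the easy ways distributed caches silently diverge.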
From dataset harvesting, to training intricacies on CUDA/ROCm, to fun HIP kernels, and full circle to inference testing, building it around consumer hardware (the challenge). Using this as a "how it works" deep dive, letting me learn more about the how than endless papers will. It's a MoE and I'm slowly running a human loop: research, build, correct, research.
https://dhuan.github.io/mock/latest/examples.html Command line utility that lets you build APIs with just one command.
https://github.com/dhuan/dop JSON/YAML manipulation with AWK style approach.
I'm also thinking about writing the Necronomicon of delinking at some point. The extension keeps spreading by word of mouth and there's only so much UX improvements I can do, for something that requires throwing everything you've learned in CS 101 into the trashcan before you can "get" it.
The result is http://getcaliper.dev.
It has a number of mechanisms that help substantially:
1. It can extract deterministic quality checks from your CLAUDE.md text; these checks then get executed after every agent turn.
2. It performs a lightweight AI-powered review at every commit; feedback goes directly to the agent, which can then make corrections.
3. It performs a more 'traditional' deep AI review at merge, or on-demand.
Free to use, just bring your own API key. Any and all feedback is welcome!
The main goals are to own my data (memories, artifacts, chats), be able to switch AI providers at any point (if one is down or I want to try a new model), have the same experience between desktop and mobile especially when it comes to working remotely on code.
A bigger vision is to offer everyone an alternative to Claude and ChatGPT that they can own, just like OpenClaw, but with a great app experience.
I hope to have the first beta published by the end of next week.
https://github.com/bgrgicak/Desk
It’s intended just for me and follows a philosophy around hyper-personal software that I’ve been developing: https://paulwrites.software/articles/hyps/
Been using that to power a Mac mini alternative I’ve been making https://jperla.com/blog/quill-one
- Building a platform where talented people can list the services and skills they're experienced in. Clients can book paid sessions with them directly through the platform, and once a session is booked, they both meet online to discuss, collaborate, or get advice based on expertise.
https://aptakube.com
The idea was to create a quine that runs forever on something like Akash network, with its own crypto treasury to support itself, pay its bills, and try to replicate. It would then talk to an LLM for support and actions on what to do to stay alive.
It got pretty out there. Stored some of the ideas here.
https://github.com/aquaflamingo/catfi
an agentic coding scaffold/framework you can reference when building out your next random raspi project. prefer to build around systemd units first: make an idempotent installer script, then put as little custom code as possible around that.
`impl muster` comes down to: /build out this tool wiring together `patterns` like: C3.dropfolder-trigger; R2.device-binding; C4.lazy-resource-gate
or composite patterns like:
T2R4.device-triggered-conveyor "Bind a physical device event to a bounded ingest job that waits for hot-storage capacity, proves cold-storage capability, stages local work, and hands output to a hot/cold conveyor."
I need to back up a couple hundred DVDs, so with muster I get out:
dvd-ingester T2R4.device-triggered-conveyor
Architecture: DVD media becomes ready -> udev rule adds SYSTEMD_WANTS=dvd-rip@%k.service -> systemd runs /opt/dvd-ingester/current/bin/dvd-rip-one /dev/%I --apply -> dvd-rip-one proves DEST_DIR and waits for HOT_DIR capacity -> completed rip moves to HOT_DIR/<run-id> -> dvd-publish-one.timer drains HOT_DIR to DEST_DIR -> publish writes DEST_DIR/.incoming-<run-id> and atomically renames final output
Pipelined; ejects after the rip completes. Monitors local disk capacity, retries after the NAS comes back online, resumes after a random reboot, etc.
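The publish step at the end of that pipeline is what makes crashes cheap to recover from: output is staged under a hidden name, then renamed into place in one operation. A rough Python sketch of that stage-then-atomic-rename pattern (paths and function names are invented for illustration, not muster's actual code):

```python
import os
import shutil
from pathlib import Path

def publish(hot_run: Path, dest_dir: Path, run_id: str) -> Path:
    """Drain one completed run from HOT_DIR to DEST_DIR.

    Copy into a hidden .incoming-<run-id> staging directory first,
    then os.rename() it into place: readers of DEST_DIR never see a
    half-copied output, and a crash mid-copy leaves only the staging
    directory to clean up on the next timer tick.
    """
    staging = dest_dir / f".incoming-{run_id}"
    final = dest_dir / run_id
    shutil.copytree(hot_run, staging)   # may be slow (NAS); that's fine
    os.rename(staging, final)           # atomic on the same filesystem
    shutil.rmtree(hot_run)              # free HOT_DIR capacity for the next rip
    return final
```

The atomicity guarantee only holds when staging and final directories live on the same filesystem, which is why the staging dir sits inside DEST_DIR rather than in /tmp.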
Employee benefit plan analytics. Had a huge dataset long ago as a consultant to the industry and finally vibecoded up a decent frontend. All public data but if you know the data there is a bunch of analytics you can do. Just about to launch and do some marketing in a few weeks, so saw this and thought I'd throw it in!
https://github.com/brettkoonce/lean4-mlir
I (w/ Claude) have built a framework for writing neural networks in Lean 4 that compiles to StableHLO MLIR and runs on GPU via IREE.
The interesting part is that I started off implementing a research paper for indexing, but performance wasn't good enough. I ended up tuning things for my own use case and ended up with a good-enough, replicable RAG store.
https://ccode.kronis.dev/
For example, if I downgrade from Max to Pro I'd still be able to use the subscription, but also run sessions with other models (less expensive/local) as desired.
Source available, pre-built binaries on itch.io, pay-what-you-want with a minimum price of 0 USD; probably get it for free first if you're interested in taking a look. I finally got around to signing the app for Mac, which is what this post originally was about: https://news.ycombinator.com/item?id=48075366 (the new versions will be out soon)
Also thinking that I might make it an Anthropic API --> OpenAI API proxy that allows talking to providers that don't support the Anthropic API directly, alongside allowing switching models dynamically during a session (Claude Code wouldn't even have to know about it, it'd just send requests to a local endpoint and the proxy would do the rest).
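The core of such a proxy is a body translation between the two request shapes. A sketch of that mapping from an Anthropic-style /v1/messages body to an OpenAI-style /v1/chat/completions body (simplified: streaming, tool use, and structured content blocks are ignored here, and a real proxy would also remap model names):

```python
def anthropic_to_openai(body: dict) -> dict:
    """Translate an Anthropic Messages request into an OpenAI
    Chat Completions request.

    The main structural difference: Anthropic keeps the system prompt
    in a top-level "system" field, while OpenAI expects it as the
    first message with role "system".
    """
    messages = []
    if body.get("system"):
        messages.append({"role": "system", "content": body["system"]})
    messages.extend(body["messages"])
    return {
        "model": body["model"],   # real proxy: map e.g. claude-* -> provider model
        "messages": messages,
        "max_tokens": body.get("max_tokens", 1024),
    }
```

The response would need the reverse mapping (OpenAI `choices[0].message` back into Anthropic `content` blocks), which is where most of the real work lives.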
Early on, but Go is lovely to work with, mdBook is great for getting a site off the ground and I'm really surprised that more people don't use Itch.io for distributing software (or the pay-what-you-want model in general), it's dead simple!
Just launched Studio, which is the self-hosted version of DB Pro.
I also keep a devlog. #9 was just published to YouTube.
Self-Host Your Own Database Client | DB Pro Devlog #9 https://youtu.be/MJvSrJGtk70
[1]https://dbpro.app
An LLM benchmark for open-weight models only, with secret questions.
The questions are asked multiple times to calculate a consistency score.
The results are available in JSON, containing the hash of the question with the number of correct and incorrect answers, the number of unique answers, and the number of times no answer is given. (Uses \boxed{})
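For illustration, the per-question aggregation behind those JSON fields might look something like this (a sketch of the described fields, not the benchmark's exact schema):

```python
from collections import Counter

def consistency(answers: list[str]) -> dict:
    """Summarise repeated runs of one secret question.

    `answers` holds the extracted \\boxed{} contents across runs,
    with "" meaning the model never produced a boxed answer.
    """
    counts = Counter(answers)
    no_answer = counts.pop("", 0)
    return {
        "unique_answers": len(counts),
        "no_answer": no_answer,
        # consistency: how often the most common answer is given
        "consistency": max(counts.values()) / len(answers) if counts else 0.0,
    }
```

Publishing only the question hash plus these counts lets anyone verify the tallies without leaking the secret questions themselves.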
https://findfantasyxviii.com
fDeploy is a self-hosted Windows deployment automation tool — a lightweight, on-prem alternative to Octopus Deploy. It consists of a Server (Windows service with a Web UI) that orchestrates releases, and Agents installed on target Windows machines that execute deployment steps (IIS sites, file copies, scripts, etc.) across environments.
Example book here: https://www.amazon.com/dp/B0GYCZJVGX
How the algorithm works: it finds people who liked the same posts as you, and shows you what else they’ve liked recently.
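That algorithm is classic user-based collaborative filtering. A tiny sketch of it (hypothetical data shapes, not the feed's actual implementation):

```python
from collections import Counter

def recommend(me: str, likes: dict[str, set[str]], top_n: int = 5) -> list[str]:
    """Exactly the recipe described above: find users whose liked
    posts overlap mine, then rank everything else they liked by how
    many (overlap-weighted) neighbours liked it.
    """
    mine = likes.get(me, set())
    scores: Counter = Counter()
    for user, theirs in likes.items():
        if user == me:
            continue
        overlap = len(mine & theirs)
        if overlap == 0:
            continue                    # no shared taste, skip
        for post in theirs - mine:      # only posts I haven't liked yet
            scores[post] += overlap     # weight by taste similarity
    return [post for post, _ in scores.most_common(top_n)]
```

In production this would be limited by recency and done against an index rather than a full scan, but the ranking idea is the same.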
Launched the feed a little over a year ago and it has become the most liked feed.
It’s nice to see how well-thought-out language design can pay off years later with lower token usage. From an entropy point of view, Rebol syntax is certainly close to optimal.
https://codeberg.org/rebolek/recoil
https://buildermark.dev
Most recent aha moment: I kept wondering whether it was normal that my cluster was only able to process 4 requests per second per vLLM engine (it just seemed really low to me).
I realized a better metric is in-flight requests... Each engine is processing 70 requests at any given time, streaming tokens for over 30s.
Code: https://github.com/Nicolas-Richard/vllm-on-eks
It replaces paper stamp cards with Apple Wallet passes (Google Wallet coming soon) without the need for customers to download an app or signup. It’s still very work-in-progress (forgive the landing page) but I’m enjoying using Ruby on Rails. Please let me know your thoughts!
https://beanback.space/
http://akariq.com/en/
1. Responsive artboards and flex-like layout engine
2. Deep support for design tokens
3. HTML/CSS previews and export
4. Multiplayer AI and human collaboration. Agents can connect to documents and collaborate like any other user.
Built in Swift and cross platform Mac, iPad and iPhone.
I’m designing and building the UI and implementing the underlying features with Codex. So far it’s going surprisingly well.
https://drawx.ossy.dev
TestFlight link, good for 10 users: https://testflight.apple.com/join/9VREtXzq
It's early days. I'm not even sure it's possible.
nettirw yb namuh
My art with pen plotters. Recently released a new series of brush plots. Very inspired by Soulages: https://harmonique.one/collections/brush-plots
I have no public sources yet (will come at some point) but ill try to break it down into some simple points. After all: this is a research project.
Project: DeepThought
So instead of taking the path of bigger and bigger models to solve more complex questions, I'm going in another direction. My idea is to use LLMs as a kind of "inner monologue" to replicate a thought chain: basically, create thinking steps that can be dynamically chained.
Additionally, the project contains a 3-layer memory system, split into:
1. Frontbrain: this data composes the context for inference. It's a set of "hot nodes", each with a temperature that cools down a bit per turn of conversation and warms up again when used in a thinking process. The idea is that the inference context only contains the currently relevant information, dropping things that have lost relevance. This should prevent context overflow.
2. STM: basically session memory. It keeps all information from the current session, even items that got too cold and dropped out of the Frontbrain.
3. LTS: always queryable by the thought process to retrieve information/structures, but information is propagated from STM to LTS only at the session's end. This makes identifying "unique" entities a lot easier and has some other advantages.
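A toy sketch of the Frontbrain cooling/warming idea (the decay constants and method names are invented for illustration, not the project's actual code):

```python
class Frontbrain:
    """Hot nodes with a temperature: every node cools each conversation
    turn, warms when a thinking step touches it, and falls out of the
    inference context once it drops below a threshold (it would then
    live on in STM rather than being lost)."""

    def __init__(self, cooling: float = 0.8, floor: float = 0.2):
        self.temp: dict[str, float] = {}
        self.cooling, self.floor = cooling, floor

    def touch(self, node: str, heat: float = 1.0) -> None:
        """A thinking step used this node: warm it up (capped at 1.0)."""
        self.temp[node] = min(1.0, self.temp.get(node, 0.0) + heat)

    def end_turn(self) -> None:
        """One conversation turn passed: everything cools a bit."""
        for node in list(self.temp):
            self.temp[node] *= self.cooling
            if self.temp[node] < self.floor:
                del self.temp[node]     # too cold: drops out of context

    def context(self) -> list[str]:
        """Hot nodes only, hottest first; this is what inference sees."""
        return sorted(self.temp, key=self.temp.get, reverse=True)
```

The nice property is that context size self-regulates: nodes that stop being touched decay out on their own, with no explicit eviction policy needed.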
So when you type something into the DeepThought engine, it extracts all the information from your input and converts it into a two-part structure: 1. A bitemporal hypergraph composed of entities and hyperatoms. While entities are, I think, easy to grasp, hyperatoms can represent either "properties" (in the form of facts) or relations to other entities. This creates a typed, graph-structured information network containing the relevant information.
2. Frame summaries. Since a structured graph alone loses a lot of the processual/logical information that matters, especially in more complex contexts, I also create short summary texts that are linked to entities.
These structures let me use dynamic graph traversal to search for data, while also retrieving the related frame summaries, which are a more native format for an LLM to understand logic and relations.
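As a rough sketch of the data model described above (names and fields are illustrative, not the project's actual code), a hyperatom with one member acts as a property and one with several members acts as a relation, and traversal is just "follow shared atoms":

```python
from dataclasses import dataclass, field

@dataclass
class Hyperatom:
    """A fact ('property') or a relation between entities, stamped with
    both valid time and transaction time (the bitemporal part)."""
    label: str
    members: tuple          # entity ids; one member = property, several = relation
    valid_from: str         # when it became true in the world
    recorded_at: str        # when the system learned it

@dataclass
class Hypergraph:
    entities: dict = field(default_factory=dict)
    atoms: list = field(default_factory=list)
    frames: dict = field(default_factory=dict)   # entity id -> frame summaries

    def neighbours(self, entity_id: str) -> set:
        """One traversal step: every entity sharing a hyperatom."""
        out: set = set()
        for atom in self.atoms:
            if entity_id in atom.members:
                out |= set(atom.members)
        return out - {entity_id}
```

Retrieval would walk `neighbours` from seed entities and collect the attached `frames` along the way, which matches the "graph traversal plus frame summaries" retrieval described above.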
This is a very, very superficial explanation; going into detail would probably take multiple pages.
Important: I'm running this on a local 5090 and it is NOT friendly in terms of the number of inferences (which is fine for me). I'm trying to mimic a thought process, not build a fast-shipping product. Quality > quantity. If you ran DeepThought on any online inference provider, you'd be broke in a day.
So, right now I'm focusing on the ingestion and retrieval logic, to make storing and retrieving as good as possible with my hardware options.
Ingestion already involves multiple steps in which the LLM basically acts as a judge, deciding where to traverse in the graph, where to recurse, and so on; this will become very relevant as soon as I start implementing "task execution" as a capability.
Once those are solved, the next step is to reduce everything I need in terms of thinking steps into what I would call "thinking primitives". The idea is that I don't want a hardcoded thinking process; rather, I want the thinking process itself in the form of a graph structure. That would let me compose the process as data in the hypergraph, which in turn would let the system refactor/enhance its own thought processes.
So yeah, that's what I'm working on right now. Very early concept/alpha phase.
Since I started it a couple of months ago, it's been used by me to transpile SQLite to Go, and by some other folks to transpile other C, C++, Zig and even Perl libraries to Go.
If you're a creator, researcher or developer looking to reap the rewards of a video without consuming it fully, then it's helpful.
Whole thing is up and running on vercel.
It's a work in progress — would be great to get some input!
You delegate a task or GitHub issue to it and it uses AI coding agents and developer tools to write the code, run checks, read failures, fix problems, and iterate until the result is good, then comes back with a pull request. It does everything a human dev would do, fully automated.
[0] https://www.vroni.com/
https://apps.apple.com/us/app/lexaway/id6761870125
- https://shirt.cash - Vibe code your t-shirt ideas and sell them.
- This weekend was substack MCP (https://www.youtube.com/watch?v=jHARlcInLqU)
new ideas welcome lol
The bot settings (system prompt and user prompt, temperature, reasoning, etc.) are 100% transparent and customizable, and all users can view and copy anyone else's settings from the leaderboard. The goal is to build the best trading bots possible by seeing what works.
You can run a bot on Gemini 4 31B with a free tier Google AI Studio account (I'm running 5 bots on it myself). Or just run Gemma 4 26B on your PC if you have the GPU for it. I'm running 5 on my 5090, so I'm trading with 10 bots total.
The platform is connected to Hyperliquid and you can trace all the trades on the blockchain from the user's Analytics page (always public).
The way it works is you set a loop interval (default 1 minute) and the model receives the candles, market stats, indicators, account balance, current positions and so on and decides Buy, Sell, or Hold and how many units.
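In rough Python, that loop might look like this (the callback names are hypothetical, not the platform's actual API; a real bot also needs error handling, position limits, and retry logic):

```python
import time

def run_bot(fetch_market, ask_llm, execute, interval=60.0, max_ticks=None):
    """Every `interval` seconds, hand the model a market snapshot and
    act on its decision. `max_ticks` is only here so the sketch can
    be run for a bounded number of iterations."""
    ticks = 0
    while max_ticks is None or ticks < max_ticks:
        snapshot = fetch_market()      # candles, indicators, balance, positions
        decision = ask_llm(snapshot)   # e.g. {"action": "buy", "units": 0.5}
        if decision["action"] in ("buy", "sell") and decision["units"] > 0:
            execute(decision["action"], decision["units"])
        # "hold" falls through: do nothing this tick
        ticks += 1
        time.sleep(interval)
```

Keeping the loop dumb and pushing all judgment into `ask_llm` is what makes the bot settings (prompt, temperature, reasoning) the whole strategy, which is exactly what the leaderboard exposes.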
It's still experimental but I have already processed 1M+ prompts, 10k+ trades, and almost $1M in volume since January 2026. I have around 15 bots running right now; you can check their PnL on the leaderboard (public). I've made a lot of changes in the last few weeks, so the most recent results (24h or 7d) are the most relevant. The model you use is super important (Gemma 4 31B is the best value I've found so far, better than Gemini 3 Flash, and you can run it for free), and the coin you choose is important too. Preferably, you want something that's trending. My friend's bot did well with ZEC and VVV this week.
Right now I'm working on improving reliability (I bought a Japanese VPS to run my own HL node), and this weekend I moved the app from Render to my own DC VPS for 10x+ cheaper and 1000x more bandwidth (25 TB instead of 25 GB, seriously if you're using Render and want cheaper infra look into buying your own VPS).
I'm also implementing CLI/MCP for OpenClaw support. And next is an automatic screener that will use LLMs to pick the most promising cryptos to trade (since I noticed this has a huge effect on PnL).
If you have questions, let me know, the Trade page has my Telegram group link.
https://tidepools.ai
Plug in solar became legal here in the UK
Still sussing it out but started shipping something
Finding the pitch direction of the roof is kinda hard
Uses data from the house to try and get a rating
https://solarable.org
Currently we’re using AWS and Backblaze B2, but I’m formulating a plan to move to colocated servers. Not being billed per GB will open up a lot of new opportunities. Even at today’s server prices the math still adds up.
Will be trying to implement a virtual bass array next.
I feel like even after all these years we’re still missing the devex that Heroku provided.
It’s been super fun to experiment & integrate MCP into it.
We just passed 2000 developers last month actively deploying with canine.
https://www.learnix-os.com
It's a PWA and works offline. Tech: js, no libs, Canvas API, Web Audio, not vibe coded, but I did use Claude for graphics and tests. Puzzles curated by hand.
https://7coderwords.kenamick.com/
So, I built an agent to help remind me -- it's a subscription based service that sends you updates every morning, and stores your preferences so it can learn what you like.
https://holly.garelick.net
Incremental Markdown parser that emits streams of semantic events, plus tools to manipulate them - designed for real-time rendering of streamed LLM output.
[1] https://apps.apple.com/us/app/reflect-track-anything/id64638...
http://doodlemate.com
It doesn't use generative AI, instead it auto-rigs the drawings in just a few seconds.
https://darkspire.gg/
let’s say you are arriving in paris. it will send you advice on how to get to the city from the airport. big soccer game in an hour? it will send you advice on how to prepare for it.
you don't need to ask; it will give it to you before/when you need it.
now working on the sandboxing and scheduling of the advice. releasing it this week if anyone wants to give it a shot. (it will be paid only)
https://betterleaks.com
https://voicebraindump.com
Menu bar app that reduces your Claude Code token costs by ~50% so you get 2x more usage out of your plan.
People seem to like it so far :-)
hack music
[0]: https://loopmaster.xyz
82 sites published so far, with a really weird and wide range of content.
Working on a simple WYSIWYG website editor to go with the current functionality.
https://stockevents.app/ai
A print farm manager for Bambu printers in LAN mode. I lay down the base types and schema structure and a few other bits here and there.
Using AI to preprocess some amazon transactions from both personal account and business accounts as I untangle them since I started a side business with my spouse a few months ago (involving 3d printing).
Starting on a yoga workout generator and food/fitness/weight tracker.
outside of that I started to fiddle around with a cross section of observability and analytics with SDKs...building a little tool to give SDK publishers better insights.
Replit for the website (he did the first 80%), Gemini to make the flyers and he'll be walking the neighborhood and talking to neighbors.
https://haiker.app
https://beatquestgames.itch.io/textbattlegd
Completely open source if you ask and promise not to make fun of me.
Well, all of a sudden, now that I kinda quit my gaming time sink, all my mini projects are finally being completed. All small, but useful, things for my setup that seem to slowly become a part of a bigger personal project. And between that kid and lots of books.
Ngl, it is weird for me now. If this is midlife crisis, I am loving it.
https://github.com/nizos/probity
https://github.com/dcminter/kafkaesque
Worth kicking the tires if you're currently using embedded or dockerised Kafka in your tests.
An AI first typing application.
I think anyone can learn touch typing and potentially 2x their typing speed.
We make typing practice engaging and data driven.
https://inSolitaire.com
I am currently rewriting the engine to add ~400 games this month.
nettirw yb namuh
Think wisprflow + granola with 30+ top STT models under a single login and a pay-as-you-go billing model with a 25% markup over API.
It’s an n-gram viewer for Hacker News comment data.
Still working on daily data updates, etc but it’s live!
Thanks.
https://github.com/agentify-sh/desktop
We grab interesting business problems, turn them into fun challenges for hundreds of AI engineers to find the best architecture for. Insights are shared back with the community.
It is a fun learning process with unexpected scaling challenges.
For now it's just for iOS but currently I'm working on porting to Android.
https://pilgrimapp.org/
Each guess can be a single letter or a full word. Revealing letters helps you make word guesses, which are more efficient since it reveals all instances of those letters across the board.
It's been really gratifying seeing friends enjoy the game, now we're trying to figure out how to get in front of more players. Leave us some feedback if you stop by
React + TypeScript with Vercel handling deploys, no backend (yet), the puzzles are just JSON. The toughest part is the puzzle generation: packing multiple snaking paths onto a grid like jigsaw pieces with the constraint that each answer has exactly one valid path. Coming up with good two-word themes has been its own challenge too!
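The uniqueness constraint can be checked with a straightforward DFS: count every snaking path that spells an answer, and keep the puzzle only if every answer's count is exactly 1. A sketch (assuming orthogonal adjacency and no cell reuse; the real generator may differ):

```python
def count_paths(grid: list, word: str) -> int:
    """Count non-revisiting orthogonally-adjacent paths spelling `word`
    in `grid` (a list of equal-length strings)."""
    rows, cols = len(grid), len(grid[0])

    def dfs(r, c, i, used):
        if grid[r][c] != word[i]:
            return 0
        if i == len(word) - 1:
            return 1                    # spelled the whole word
        total = 0
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in used:
                total += dfs(nr, nc, i + 1, used | {(nr, nc)})
        return total

    return sum(dfs(r, c, 0, {(r, c)}) for r in range(rows) for c in range(cols))
```

The generator's hard part is the inverse direction (placing words so these counts come out to 1), but this check is what makes a candidate grid accept or reject.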
Always looking for feedback or suggestions for improvement. :)
https://github.com/thejchap/tryke
https://micro.mu
https://hellomdx.com/
- Built with Tauri — installer is small and start-up is near-instant on all three OSes.
- No accounts, no telemetry, no MDX server in the loop. Sync goes through whatever cloud folder you already have (iCloud / Drive / Dropbox / a plain directory).
- Tab-to-accept ghost-writing is bring-your-own-key.
- Exports to PDF, HTML, DOCX. Tables, math, diagrams, code blocks all live behind toolbar buttons — no syntax to memorise.
Hoping some people like it and use it.
it enforces very few paradigms, runs in the browser, and allows users to view and edit agent config files within the UI.
it's kind of a nightmare to try to figure out how to do this appropriately, but it's an interesting challenge and i have seen very few (~0?) projects with an approach like this ...
all the offline harnesses are optimized towards coding, vs. general text manipulation aka "writing."
hoping to publish v0.1.0 by the end of may.
I'm a backend dev, frontend was made with AI.
https://cybernetic.dev/matrix
Smart documents for teams. Fast, Open, and Self-Hostable.
Basically a much faster Notion.
(I’ve been procrastinating on marketing basics for seven years, so it’s… fun but still intimidating :) )
I have a related need, to create some great product spec sheets.
An interactive sound sculpture running on an Arduino uno+Pd
Using Mandelbulber as a visual effects layer for my experimental music AV show
Bg2-like is playable at https://archipelago-sandy.vercel.app
I got into creating my own rings, and I’d really like to create one with ore I harvest myself. Gold is too hard and silver can be kinda dangerous, but malachite is pretty safe and I can just drive to Copperopolis to pick some up.
Basically: smelt the malachite with flux and charcoal to get pure copper, pour it into an ingot mold, and hammer it into shape. Then I'll have my own ring, made with metal I collected with my own hands.
https://github.com/Orrison/MeliorStan
https://hidefile.app
* assisted coding, not full code generation
http://jacobin.org
I scanned a couple of chapters and realised it likely wasn't LLM generated, it just needed an edit. The intro to C is a hard and weird intro, but then driver development in FreeBSD is hard and weird and people who aren't prepared to get through such intros probably aren't going to get through the rest of it.
Being the contrarian, I've started going through it. I was involved on the periphery of the FreeBSD project ~25 years ago, went to conferences, ran a BSDUG in my hometown, and so on. And I realised I've missed systems programming and FreeBSD itself a little, and in recent years became a little sentimental.
What I've discovered so far in the first few chapters:
1. I miss FreeBSD. And it's weird my muscle memory kicks in and am surprised in a lovely way to find familiar things like /etc/rc.conf work the way I remember them.
2. This is not AI slop. There are issues that I can blame on him not using the same platforms I am (if you're on Apple Silicon, just use UTM and the aarch64 ISO - don't use the VirtualBox config he suggests, as an early example), but as somebody who sees a lot of AI-generated content in my day job - this isn't it.
3. I have got excited about coding again for the first time in a while.
So, this is my hobby for a while. Go back to where I started, get into low-level systems programming again, I have some ideas on some hardware I want to help out on... it's different to a lot of what I've been working on for the last decade or so, but that excites me.
[0] https://news.ycombinator.com/item?id=47915632
Play a game here: https://bawgle.alifbae.dev
Draws from a bunch of sources, MCP-connects to my agents, comes with a browser plugin to invite meeting bots to calls, lets me (and my testers) leave notes on websites which also gets added in.
The goal is to make work as simple as dragging tickets around, and load as many best practices + review clarity into it
I've set a deadline to finally launch tomorrow, but frankly - I don't know how it's gonna go. Feeling proud, yet a bit anxious about it.
https://kodan.dev, if anyone wants to take a peek
Currently two products are beta-ready (merge conflicts/codeowners) and the demos are available here: https://codeinput.com/products/code-owners/demo and here: https://codeinput.com/products/merge-conflicts/demo
A reactive programming language for games! Properties signal when they change and you can register blocks that tell the engine how to use that property, not just once but every time it changes. It’s a more declarative way of making games which I think is lots more productive.
I’ve been working on this for four years, it’s been a big project!
Next up is actually implementing game play!
There is a little video demo here (but bear in mind that everything is temp graphics) https://hakon.gylterud.net/diary/2026-05.html#2026-05-02
Working on https://fastsleep.app
Using this app, you may fall asleep within 20 minutes, often within 8 to 15.
Simply start the session and imagine what you hear. Like if you hear "calm river", imagine that. If you hear "heavy rain over a tree" imagine that. And you may fall asleep soon.
Try this tonight!
---------
https://bloom.site
I wrote a blog post about my process: https://sxp.studio/blog/subjective-building-a-native-vfx-edi...
...and you can download the app here if you're curious (the app is free!): https://subjectivedesigner.com
Next project is going to be a pivot of that project into something related to creative coding and agentic :-)
This is a Flutter project.
The idea is that each morning, you click the "New Day" button, and your Todo list along with other notes carry forward from the previous day to the new one. When you accomplish something, you add it to the Done section. Other sections can be added as needed. I have been using a text editor and/or shell script for this purpose for about a decade, but have been inspired to make it into an app now that I can delegate the boring bits of app development. It is not quite done yet, but it's getting close to being usable.
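The carry-forward itself is simple enough to express as a pure function (a sketch with hypothetical section names, not the app's actual code):

```python
def new_day(yesterday: dict) -> dict:
    """The 'New Day' button: every section (Todo, Notes, and any
    custom ones) carries forward from the previous day, while the
    Done section starts empty again."""
    today = {section: items[:] for section, items in yesterday.items()}
    today["Done"] = []   # accomplishments belong to the day they happened
    return today
```

Keeping each day's state as its own immutable snapshot also gives you the decade-long history for free, which is essentially what the text-file version of this workflow already did.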
(* To the inevitable downvoters, this is in part an experiment to get familiar with what SOTA LLMs can handle. With the intent of comparing it to local LLMs once I get my Strix Halo set up as a coding assistant. I only code as a hobby currently, and have too many other hobbies, and this app wouldn't exist without something else doing the heavy lifting. That said, this is a pretty low-stakes application and I don't commit any code that I haven't reviewed and don't understand.)
in each job i find myself trying to enhance information in order to visualize it, so this time i'm finally giving it a try
595 days and counting.
ᕙ(⇀‸↼‶)ᕗ
since it's all just mounts, vicode works as a worktree manager as well: select a vicode tab (which sets cwd to the corresponding worktree with OSC7), open a new terminal tab/window, and run claude/codex inside
disclaimer: unstable, linux-only (mac build WIP, no overlayfs), some modules were vibecoded (grep for `SLOP`)
https://github.com/anuramat/vicode
I have the macOS, Windows, cli, and web app working with this feature. I had a bit of a mixup with Gtk, so I don't have a Debian package for it, but it's buildable from source.
https://github.com/NoMoreNicksLeft/transmission
I would appreciate it if anyone wanted to test it. I'd like to think that the feature would be a big deal, even if my implementation of it's kinda crappy.
I just hate the SaaS scene today: even a small productivity app costs $10-$15/month. When you couple that with the bunch of apps you use, you spend hundreds of dollars in hard-earned cash.
The open source community is amazing on some fronts, but enterprise and non-technical users can't use these tools without a layer of support, hosting, and setup assistance.
We want to be the delivery layer between the current open-source community and SaaS users.
Got a lot of ideas to work on, but decided to build out a small version right now and launch it!
The original developer has left our organization so I’ve been tasked with general assessment and winding it down to enter maintenance mode. It’s still alive and well, has a very passionate and appreciative user base, but we want to ensure it doesn’t demand too much attention moving forward while we focus on other things. It has pretty noisy error reporting.
Reliability and fault tolerance are some of my favourite things to work on in software so it has been a lot of fun so far. It has also been an incredible opportunity to practice using LLMs for specs, planning, verification, and research. I don’t actually need to output much code to get this thing into a stable state in which it can coast along; the bulk of the work is time spent understanding the app, the infrastructure, its existing faults, poring through traces and logs, going over query plans, and so on. LLMs are great assistants for this work and I’m having a ton of fun having so many opportunities to figure out what works and what doesn’t.
The outcome has been awesome. Performance is steadily climbing (especially in the database), and most of the common errors from when I started are either gone or much better understood, with plans to address them. I’ve almost got it set up so that if someone needs to take it over in the future, it should be pretty easy to toss them the keys and trust that they can deploy and maintain it from the docs and systems I’ve created.
Despite spending a lot of my career on the front end, the hardest part of this project has been navigating the front end itself. Aiming to improve an application with minimal intervention is exceedingly difficult in the browser, or so I’ve been finding. I can get incredible performance gains out of Postgres without changing the interface between Lambda and RDS in the slightest, but meaningful improvements to the React application seem virtually impossible without substantial refactoring.
I understand the key factors in getting better performance out of React apps and I see plenty of opportunities, but they all involve large diffs that are risky and time-consuming, even with a model like Opus handy to churn through boring and large change sets. It’s such a fragile and flaky environment.
Even so, I’m loving it. Making software better is so gratifying. Doing it without reinventing the world is such a fun challenge, too. It really puts your brain to work. It would be so easy to go in and start flipping tables and throwing code in the garbage, but that’s too easy and too risky. Taking it slow, absorbing as much information as you can, truly understanding how features work, and planning surgical changes with significant pay off is safer and just feels awesome when it works.
I’ll be sad when this one is finished. It’s almost there. Next up is a remote temperature controller for 40 saltwater experiment tanks with a temperature profile planning interface and a monitoring interface for the lab. That will be awesome too. It has been a good couple of months for work.
it's a programming language
Unfortunately the only marketing material so far is some TikTok posts, but they're a pretty cool demonstration.
https://www.tiktok.com/@kyo.simrc.racing
and for fun, I am building yet another programming language!
I don't often have time to do OSS projects, but I will keep this one readable for packagers. The most time-consuming part will be the overly verbose commenting needed for people to be able to audit the source quickly.
It is a boring side-project, but unfortunately a necessary one. =3
The game is going to be a farming tycoon/city builder where you can buy farm stands and advertise to sell your goods. As your operation grows, you grow the local economy and people move to the town, turning it into a city and opening up the chance to sell at farmer's markets or supermarkets. As the city grows you'll have to buy/sell land with the city and work with the mayor to plan where the city should claim new land for you to purchase, so you can stay on the outskirts with healthy soil (or, in the endgame, run for mayor and manage the growth of the city yourself, à la Sim City/Cities/Frostpunk).
I chose Love2D as my engine so I can use the relative simplicity of 2d art in 2.5D pseudo-3D instead of 3d modeling. The world space is a 3d Euclidean grid of cells wrapped around a horizontal cylinder on the x axis. The view space is perpendicular to the side of the cylinder, giving us a natural horizon at the apex of the cylinder on screen. The world space coordinates are expressed in terms of the polar coordinates of the cylinder, giving natural rise to radius as altitude, angle theta as latitude, and the x axis as longitude. All the world math can be calculated using the trigonometry of the unit circle, then converted to 3d Cartesian coordinates before converting those to screenspace coordinates. I can use regular flat plans and elevations for the textures of building faces, and render them upon linearly transformed quad polygons. Maybe I can also do some screenspace displacement à la Crimson Desert at the finish line to give buildings window sills and ledges when you see down a side of one.
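The cylinder-to-Cartesian step described above can be sketched like this (a Python illustration; the axis conventions are my assumption, not the author's Lua code):

```python
# World cell (longitude x, latitude theta, altitude r) on a cylinder
# wrapped around the x axis -> 3d Cartesian coordinates, using only
# unit-circle trigonometry as the comment describes.
import math

def cyl_to_cartesian(x, theta, r):
    """Radius is altitude, theta is latitude, x is longitude."""
    return (x, r * math.sin(theta), r * math.cos(theta))
```

A screenspace projection would then map these Cartesian points onto the view plane perpendicular to the cylinder's side.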
I am doing the development without LLMs as much as possible so I retain a good grasp on Logic, Language, and Math. I have been having a lot of fun digging back into these multivariable calculus and linear algebra concepts I thought were beyond me (because of some autobiographical amnesia issues I deal with), only to discover that no wait, I was taught these concepts in high school and was quite comfortable applying them. All the development is done on my own private, secured git instance on my homelab server, and I can pull down the latest revision to my iPhone to show off, which has been really cool. Kind of a PITA to find a good git app on iPhone that allows custom git servers with ports though.
screenshot of a very early hello world, before I made the mental connection between wrapping a 2d cartesian plane around a cylinder and actual 3d cylindrical polar coordinates, which is why the shapes just sit over the world rather than extending from it, I hadn't yet conceived of the radius of the cylinder being altitude: https://fucci.dev/assets/helloworldspace.png
https://www.linkedin.com/search/results/all/?keywords=%23ape...
Too many codes are old or gatekept behind proprietary walls, and many don't use the newest acceleration techniques to make the simulation fast. Additionally, none of them scale using AWS. I want SAS/SAR images to be easy to generate for anyone.
The requirements for growth keep changing, plus all the AI noise means the playbook changes regularly. Staying on top of the state of the market while improving/maintaining the product and understanding our ICP + exploring new verticals is a tricky (but fun) task to manage!
[0] https://www.zigpoll.com
I have a working prototype written in Julia which is a very simple neural network. The input is in vector format so traditional convolutional neural networks don’t work out of the box but I swapped the convolution layer with a path simplification algorithm and it worked extremely well. Like 20 samples per character (from a set of only 5 hiragana during prototype phase) was enough to get 100% accuracy in a test collection of 5 samples per character after only 30 iterations of training.
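The comment doesn't name the path-simplification algorithm; Ramer-Douglas-Peucker is a common choice for stroke data and gives a feel for the idea (a Python sketch rather than the author's Julia):

```python
# Ramer-Douglas-Peucker: recursively keep the interior point farthest
# from the chord between the endpoints, dropping points closer than
# epsilon. This reduces a stroke to its characteristic corners.
import math

def rdp(points, epsilon):
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0
    max_d, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        # perpendicular distance from (px, py) to the endpoint chord
        d = abs(dy * (px - x1) - dx * (py - y1)) / norm
        if d > max_d:
            max_d, idx = d, i
    if max_d <= epsilon:
        return [points[0], points[-1]]
    left = rdp(points[:idx + 1], epsilon)
    right = rdp(points[idx:], epsilon)
    return left[:-1] + right  # drop the duplicated split point
```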
I plan on working with free and open data, which I don't think exists for Japanese kanji characters (at least not in vector format; KanjiVG only has one sample per character and I need dozens), so I'm also building a crowdsourcing web site to collect data from random people on the internet.
I am planning to run some more experiments with my prototype model before I release the crowdsourcing web page to an actual server though.
Model prototype: https://github.com/runarberg/kantoku-prototype
Crowdsource app: https://github.com/runarberg/kantoku-collector
tldr: we help you find good supplements
Right now I just germinated a 4x8 bed with flax for fiber. The plan is to grow it for 100 days or so and then harvest, dry, ret, dry, and spin. I need a lot more to do anything serious, but I think it’d be awesome to have a scarf that I made with linen I grew and harvested myself
It's a durable runtime for AI agents.
The thesis: agents should not just be an LLM loop running inside one Python process. Once agents touch real systems, you need crash recovery, retries, human approval, distributed tool execution, cancellation, observability, and execution history.
Agentspan is basically applying the Conductor OSS execution model to agents. Conductor made long-running distributed workflows durable. Agentspan tries to do the same thing for agent executions: give every run an ID, persist the state, let it survive process death, pause for approval, resume later, and inspect what happened.
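The durable-run idea above can be illustrated with a toy store (names and API are entirely hypothetical, not Agentspan's interface):

```python
# Hypothetical sketch: every run gets an ID, its state and step history
# are persisted, and it can pause for approval and resume later.
import json
import uuid

class RunStore:
    def __init__(self, path=None):
        self.path, self.runs = path, {}

    def start(self, task):
        run_id = str(uuid.uuid4())
        self.runs[run_id] = {"task": task, "status": "running", "steps": []}
        self._flush()
        return run_id

    def record_step(self, run_id, step):
        self.runs[run_id]["steps"].append(step)
        self._flush()  # persist before continuing: survives process death

    def pause_for_approval(self, run_id):
        self.runs[run_id]["status"] = "awaiting_approval"
        self._flush()

    def resume(self, run_id):
        self.runs[run_id]["status"] = "running"
        self._flush()

    def _flush(self):
        if self.path:  # in-memory only when no path is given
            with open(self.path, "w") as f:
                json.dump(self.runs, f)
```

A real implementation would back this with a database and a task queue; the point is that the run record, not the process, is the source of truth.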