I turned Markdown into a protocol for generative UI
There's a lot of work happening around both generative UI and code execution for AI agents. I kept wondering: how do you bring them together into a fully featured architecture? I built a prototype:

- Markdown as protocol — one stream carrying text, executable code, and data
- Streaming execution — code fences execute statement by statement as they stream in
- A mount() primitive — the agent creates React UIs with full data flow between client, server, and LLM

Let me know what you think!
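The streaming-execution bullet can be sketched roughly like this: buffer the code fence as chunks stream in and hand off each statement the moment its terminator arrives, instead of waiting for the closing fence. This is a hypothetical helper for illustration (`statementsFrom` is not the project's actual API), and it naively splits on `;`:

```typescript
// Sketch only: emit each statement as soon as its ';' arrives,
// so execution can begin before the fence has finished streaming.
// Real parsing would need to respect strings, comments, and nesting.
function* statementsFrom(chunks: Iterable<string>): Generator<string> {
  let buffer = "";
  for (const chunk of chunks) {
    buffer += chunk;
    let end: number;
    while ((end = buffer.indexOf(";")) !== -1) {
      yield buffer.slice(0, end + 1).trim();
      buffer = buffer.slice(end + 1);
    }
  }
}
```

With chunks like `["const a = 1", "; const b", " = 2;"]`, the first statement is yielded as soon as the second chunk lands, while the rest is still in flight.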
39 points by FabianCarbonara - 15 comments
It embodies the whole idea of having data, code, and presentation in the same place.
If you're open to contributions, I already have a cascading styles system in mind.
Maybe one day someone will invent a rounder wheel.
The wheel is what I would call passé.
I’m building an agentic commerce chat that uses MCP-UI and want to start using these new implementations instead, but I can’t wrap my head around how button onClick events and actions work. MCP-UI supports onClick events because you’re “hard-coding” the UI from the get-go, versus relying on AI to generate nondeterministic JSON and turning that into a UI that might be different on every use.
So the UI is generated dynamically by the LLM, but the interactions are real server-side code, not just display. Forms work the same way — "await form.result" pauses execution until the user submits.
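The "await form.result" pattern described above can be sketched as a promise that server-side code awaits and that only resolves when the client submits. The names here (`createForm`, `submit`) are hypothetical, not the project's actual API:

```typescript
// Sketch of the form pattern: server code pauses at `await form.result`
// until the client-side submit handler fires.
type Form<T> = {
  result: Promise<T>;         // server-side code awaits this
  submit: (value: T) => void; // invoked when the user submits on the client
};

function createForm<T>(): Form<T> {
  let resolve!: (value: T) => void;
  const result = new Promise<T>((r) => { resolve = r; });
  return { result, submit: resolve };
}

// Agent-emitted server code would look something like this:
async function handler(form: Form<{ name: string }>): Promise<string> {
  const data = await form.result; // execution is suspended here
  return `Hello, ${data.name}!`;  // resumes only after the user submits
}
```

The key point is that the interaction is real control flow on the server, not a display-only widget: whatever follows the `await` simply doesn't run until the user acts.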
The article has a full walkthrough of the four data flow patterns (forms, live updates, streaming data, callbacks) with demos.
https://markdown-ui.com/
Markdown UI is declarative — you embed predefined widget types in markdown. The LLM picks from a catalog. It's clean and safe, but limited to what the catalog supports.
My approach is code-based — the LLM writes executable TypeScript in markdown code fences, which runs on the server and can render any React UI. It also has server-side state, so the UI can do forms, callbacks, and streaming data — not just display widgets.
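The contrast between the two approaches can be made concrete. In the declarative model the LLM can only select from a closed catalog of widget types; in the code-based model it emits a render function, so any markup is reachable. Both shapes below are illustrative assumptions, not either project's real API:

```typescript
// Declarative: output is constrained to a fixed widget catalog.
type CatalogWidget =
  | { type: "button"; label: string }
  | { type: "select"; options: string[] };

function renderDeclarative(w: CatalogWidget): string {
  switch (w.type) {
    case "button":
      return `<button>${w.label}</button>`;
    case "select":
      return `<select>${w.options.map((o) => `<option>${o}</option>`).join("")}</select>`;
  }
}

// Code-based: the LLM writes the render function itself, so it is not
// limited to the catalog's vocabulary.
type RenderFn = () => string;

function renderCodeBased(render: RenderFn): string {
  return render();
}
```

The trade-off is the usual one: the catalog is safe and predictable but capped at what it enumerates, while executing generated code buys unbounded UI at the cost of needing sandboxing and validation.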