Short essays on engineering, tooling, and stopping on time.
People keep asking which mobile stack I prefer, as if I picked one. I did not. I use both, and the line between them is mechanical — not aesthetic, not ideological.
If the app also needs a web UI, I build it in SvelteKit and wrap it with Capacitor. Tsukime is the obvious example: there is a real reason to open it in a browser, so the web build is not a nice-to-have, it is a first-class surface. Once that is true, putting the same codebase on mobile through Capacitor costs me almost nothing, and I keep one design system, one router, one set of components.
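To make “costs me almost nothing” concrete: the wrapping step is mostly one config file. Here is a minimal sketch, assuming SvelteKit’s static adapter and its default build output directory; the appId and appName are hypothetical, not pulled from a real project.

```ts
// capacitor.config.ts — point Capacitor at the SvelteKit build output.
import type { CapacitorConfig } from '@capacitor/cli';

const config: CapacitorConfig = {
  appId: 'app.example.tsukime', // hypothetical bundle identifier
  appName: 'Tsukime',
  webDir: 'build', // assumes @sveltejs/adapter-static's default output dir
};

export default config;
```

From there, `npx cap add ios` (or `android`) and `npx cap sync` wrap the same web bundle in native shells, and the one codebase ships to every surface.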
If the app is mobile only — no browser, no marketing site that doubles as the product — I reach for React Native. The webview tax is not worth paying for an app that will never run in a browser. Native components, native gestures, native modules when I need them. No translation layer pretending to be a phone.
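For contrast, the equivalent screen in React Native renders straight to native views. A small illustrative sketch; the component and its names are mine, not from any shipped app.

```tsx
import { Pressable, StyleSheet, Text, View } from 'react-native';

// Illustrative component. Pressable renders to a native touch target;
// there is no webview or DOM underneath it.
export function SaveButton({ onSave }: { onSave: () => void }) {
  return (
    <View style={styles.row}>
      <Pressable onPress={onSave} android_ripple={{ color: '#ddd' }}>
        <Text style={styles.label}>Save</Text>
      </Pressable>
    </View>
  );
}

const styles = StyleSheet.create({
  row: { padding: 16 },
  label: { fontSize: 16, fontWeight: '600' },
});
```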
That is the whole rule. Web UI in scope: SvelteKit and Capacitor. No web UI in scope: React Native. Everything else — TypeScript, the design system, the way I structure state — stays the same on both sides.
I started writing code at twelve. Game dev first, because that is what twelve-year-olds want to make. Then Discord bots, then Telegram bots, then desktop clients, then web apps, and now mobile. Eleven years, give or take, with the shape of the work moving every couple of years.
Design showed up later. I was sixteen when I started taking it seriously, which means I have been designing for about seven years and coding for eleven. The order matters. By the time I cared about type, color, and layout, I already had years of muscle memory for how software actually gets built, and design slid in next to that instead of replacing it.
The reason I think this is worth writing down is that the two skills have very different decay rates. Code skill rots fast — the framework I was excited about at fourteen is gone, the bots I wrote at fifteen run on APIs that no longer exist. Design skill compounds slowly. The hierarchy decisions I was making at seventeen still hold up. The instinct for what is loud and what is quiet on a page does not need to be re-learned every two years.
So I keep both halves running. The code half stays sharp because the work demands it. The design half stays sharp because, once you have it, it is cheap to keep — and it makes everything the code half produces look like it was made on purpose.
I am not against MVPs. Shipping the smallest version of an idea, putting it in front of people, learning from what comes back — that is still the right loop. I follow it. I just notice that when I run that loop, the M in my MVPs ends up larger than the M in most other people’s.
UmtilSuite is the clearest example. It is, by my own honest accounting, a minimum viable thing. I cut everything that did not need to be in the first version. And the result was still a multi-tool suite, because the smallest viable answer to “what does this product need to do?” was not a single feature — it was a small but coherent set of them.
Tsukime is a different category. Tsukime is not an MVP. It is a deliberately big project, built slowly, and I am not pretending otherwise.
I think the trap with the “ship small” rule is taking it as an aesthetic — like the smaller the launch, the better the engineer. The actual rule is “ship the smallest version that is still a real answer to the problem.” Sometimes that is one screen. Sometimes that is a suite. The discipline is being honest about which one you are looking at, and not shrinking past the point where the thing stops working.
Early on, design felt like the layer you put on top of an app once the app worked. Pick a font, pick a color, round some corners, ship. I do not think that anymore.
The thing that changed my mind was watching the same engineering decision get easier or harder depending on whether the design was in place first. A screen that has been laid out properly tells you what state it needs. A screen that has not been laid out yet asks you to invent state speculatively, and most of the state I have ever invented speculatively has been wrong.
So now I treat design as architecture. The wireframes are the rough draft of the data model. The empty states are the rough draft of the error handling. The transitions are the rough draft of the loading logic. By the time I open the editor, half of the engineering decisions are already pinned down, and the half that remain are the ones I actually want to spend time on.
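To make that concrete: once a screen’s loading, empty, error, and loaded layouts exist, the state type is close to a transcription exercise. A minimal TypeScript sketch; the names are illustrative, not from a real codebase.

```ts
// Each variant below corresponds to a layout the design already committed to.
type ScreenState<T> =
  | { kind: 'loading' }                 // the skeleton layout
  | { kind: 'empty' }                   // the designed empty state
  | { kind: 'error'; message: string }  // the designed error banner
  | { kind: 'loaded'; data: T };        // the main layout

// The switch is exhaustive because the design enumerated the states first.
function render(state: ScreenState<string[]>): string {
  switch (state.kind) {
    case 'loading': return 'skeleton';
    case 'empty':   return 'empty illustration, call to action';
    case 'error':   return `error banner: ${state.message}`;
    case 'loaded':  return state.data.join(', ');
  }
}
```

None of those variants were invented speculatively; each one is a screen somebody already had to draw.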
This is, I think, the unfair advantage of building both halves yourself. You do not have to translate between two people who are looking at the product through different lenses. You only have to be honest with yourself about which lens you are using when, and switch when the work asks you to.
I am not anti-AI. I use it every day, and the work I ship is better and faster for it. What I am against is the thing I see happening to a specific cohort of newer developers — the ones who started coding one or two years ago, in the middle of the AI boom, and who never built the muscle that comes before the AI does anything useful.
That muscle is reading code. Debugging code. Sitting with a bug for an uncomfortable hour and walking out of it actually understanding why it happened. If you skipped that part, AI does not make you faster. It makes you a printer for code you cannot read, and a printer that confidently prints the wrong thing is a worse tool than a slow human who can tell when something is off.
I have been writing code for eleven years. The reason I can use AI well now is that I spent the first nine without it. I know what a fishy stack trace looks like. I know what “this function is doing too much” feels like before I can articulate why. When the model hands me forty lines, I am not accepting them — I am reviewing them, the same way I would review a teammate’s PR, and I reject things constantly.
The cohort I worry about cannot do that review. They paste, they run, it works, they move on. When it breaks — and it always breaks eventually — they paste the error back into the chat and hope. There is no mental model under the hood. There is no instinct for where bugs live. There is just a slowly growing pile of code nobody, including the person who shipped it, actually understands.
This is not a moral failing. It is a training-data problem in their own heads. They are missing the years of exposure that turn raw output into a thing you can evaluate, and the only fix is the boring one: write code without the model sometimes, on purpose, and sit with what is hard about it. Not because AI is bad. Because being able to read what it gives you is the entire job now.