michael-dean-k/


Heuristics for systems

· 526 words

I declared to my wife this morning that DeantownOS is getting retired. It’s been 3 months since I spiraled into Claude Code for personal systems, and I’m at the point in the curve where the amazement has normalized and I’ve accepted the fact that I’m in a trough of disillusionment. The question now is revise or abort.

The case for aborting ties back to Oliver Burkeman’s Four Thousand Weeks, which popularized the idea that all systems are methods to procrastinate from making hard decisions. They give the illusion that you can do everything, and since AI can meaningfully leverage the volume and range of things you can do, it tempts you to build galaxy-brained systems. The thing I think we fail to realize while in a vibe coding frenzy is the psychic cost of remembering and maintaining the stuff you build. Yes, it is appealing to “reclaim my computer” and rebuild everything I use as personal software (from Obsidian to Gmail), and it’s even possible, but it’s a new breed of Sisyphean struggle. Once you can mold your own software around you, it’s too easy to endlessly mold, to lose sight of the work and just tinker on your exoskeleton.

I’m obviously skeptical, but I’m still a believer; if I were to revise, to rebuild my Claude stack from scratch, I would have to develop a few heuristics to keep me from short-circuiting.

The first one that comes to mind is “will this matter once I’m dead?” I.e., writing an essay matters, because I imagine one day my daughter will read that and get to know me better, or at the very least, future Me in 35 years may enjoy reading words of my past self. But to create detailed daily files that get spliced into atomic “routing files” that then get saved again to a new destination folder, which exist either as (a) just context for AI, or (b) require some manual effort to prune into something that matters once I’m dead, is to create waaaay too many layers of abstraction between the source and the Work. When I read back my writing from the last few months, only a small fraction is valuable enough to be saved as "logs" in my archive. I was writing for AI, not for my future self.

I made this assumption that atomic daily files are the kernel of a system, and it was an axiom I could never undo. There’s maybe another principle on “don’t build load-bearing infrastructure on an unproven axiom.”

Another one could be “don’t assume future you will have bandwidth” to do X every day/week/month. Every day I had to review how my AI system proposed to route my logs, and eventually I'd ignore it and get backed up. This means that if something isn’t truly automated, I should be very cautious of it. It's possible to do one little step forever, but not a hundred. Not every promise has brush-your-teeth-scale reliability.

What I’m getting at is that it’s not about maximizing or neglecting systems, but about understanding the right principles so you build something that is actually in service of your life.

Systems skeptic

· 380 words

I don't know if I buy the quote: "you don't rise to the level of your goals, you fall to the level of your systems." (And this is coming from a systems guy.) It's a beautiful piece of rhetoric. The rise/fall structure. The humility to stay grounded. But I just think when you really want to make sense of how to pull off hard things, the answer should be a little complex, a little more than what can be packaged into a meme.

Two opposite things need to happen at once: top-down destiny forging, and bottom-up monk-like routines. It's a negotiation: "What will I want to complete in 100 days?" is a very different question from, "What should I be doing today?" and you can try to force alignment, but that's not always easy, because what you feel like doing often diverges from what the goal demands.

The quote above simplifies this whole dance into a blind trust in systems. A system is a servant, not a master! I write this to remind myself as I'm immersed in probably one of the biggest system rebuilds in my life (one where I'm suddenly able to fluidly create the containers I work within) ...

It is wild to think that probably 50% of my computer use these days is within GUIs I've designed for myself. To me, liquid GUIs are a bigger deal than autonomous agents. My whole conception of what personal computing can be is changing very fast, and it becomes alluring, almost addicting, to continuously evolve my own OS, to see what's possible. It's very easy now to get tangled in knots of systems and software that are all very impressive, lead nowhere, and become chores. What leads to aliveness, to your intentions?

An emerging maxim for me is to start with the goal and let the system emerge around it; otherwise, you feel the cold of the infinite tinker, especially if you are quarantining in the attic from COVID and you can't go touch grass because there appear to be feet of snow outside and you are too achy to shovel out your car to go anywhere, and so one way to relax when you're sick is to live-clone all incoming Substack posts into local JSON folders and redesign a better algorithm. But to what end?

Deantown OS

· 211 words

Weird post-midnight project: built myself an operating system. Not really, but really. It's just an app that finds all the other apps I've built in my 80_code folder, but then displays them as icons in a Mac dock + desktop GUI. It’s an easy way to see/use/remember what would otherwise be scattered. Lots of weird features, like the clock changes to a random time every 0.5 seconds, and instead of the date it tells me how many thousand days old I am. If you click the "Fun?" toggle, it lets snakes loose. What's trippy is I also built a multi-tab terminal inside of it, so I can use Claude Code to code the code I'm coding (actually writing 0 code). Seriously though, this is becoming my Notion replacement, a place to write/plan/do, except with complete interface flexibility, and all-local data. Currently writing this note from within the OS. The unlock for me was in realizing the power of local data over cloud apps. Feels like owning vs. renting. When you have everything in a single sandbox on your computer, you can spawn interfaces to help you with anything, and they can be far more idiosyncratic than anything you'd ever find in a mass-market product. Notion doesn't have snakes.

→ source

Makers and the Managerial Goon Loop

· 390 words

Paul Graham’s idea of makers/managers is helpful when thinking about AI agents. The cost of being unreasonably productive is that all your time will go into management. I’ve heard people celebrate this, as if elevating above the work itself and only making high-leverage decisions based on taste is the place we want to be. Disagree. Without actually being in the weeds and making thousands of unbearably slow decisions, you won’t develop taste, and (probably) won’t be a great manager either. I guess the ideal (for me) is to be in maker mode as often as possible, and then let my synthetic managers come in to process my deep work. (Currently have a “proseOS” where I can riff 5k words into a daily note, and then agents come in to route my logs to different interfaces). Ideally, you build the manager once and forget about it. But realistically, a maker can find fun in making manager bots and management apps, and it’s quite easy to slip into a managerial goon loop. What I mean is, similar to masturbating with no intention of ever finishing (aka gooning), it’s very possible to make your own task manager app, and a writing app, and an idea Kanban linked to Obsidian, and why not a new personal website, and a 1,000 day calendar because you can, and seriously anything you can think of, and it’s very possible to just numb out over how unbelievable it is that code, markdown, and interface are now liquids that shape around your every intention, but actually, you never quite finish anything. PKM procrastination is timeless, except now it’s multiplied to new levels. The brute velocity of execution means you’re bound to make many little mistakes, which eventually compound into your own megamachine that traps you with endless bugs and feature ideas and system decay. This is all quite dramatic. I love Claude Code and insist everyone IRL and IFL try it. 
But now that it’s shockingly trivial to build your own personal software for free, I imagine there will be all sorts of unanticipated psychic costs. For one, it’s dangerous if building your own tools is equal to or more fun than the work the tools are for. I’m sure that wears off. But I generally think this all leads to both extremes: individuals who are unbelievably prolific, and individuals stuck in a goon loop who feel unbelievably prolific.

→ source

Software Incentives

· 449 words

One of the thrills of the AI revolution will be how it untangles software from bad incentives. Today, software is expensive to build and maintain, and so it needs returns to fund itself. The big social media companies have annual expenses of $50m-$50b; they are in no position to operate from virtues, or to deliver on their stated aspirations of “connecting the world,” because they need to optimize for attention and convert it to revenue to fund the ridiculous scale of the operation.

But now we’ve hit the point where autonomous coding is real: Claude Opus 4.5 can code for 30 hours straight. I am currently “rebuilding Circle,” the community platform, except not as a platform, but as a single customized instance for my community (Essay Club). I am maybe 4 hours in and halfway done. Circle wanted $1k/year, so I built my own with a $20/mo Cursor subscription.

When you can just prompt software into existence, you don’t need fundraising, an expanding team, and all the sacrifices that come with capital. Software can start reflecting the will of visionaries, rather than the exploited psyches of the masses. Of course, AI coding will also enable huckster bot swarms to sell Candy Crush clones and other brain rot variants, but more importantly I think we’re entering a new era of techno-activism.

Millions will use their weekends to spin up apps, sites, tools, platforms, and networks, not for the sake of colonizing the planet’s attention, but for the sake of gift-giving or mischief-making or culture-shaping. It could mean that we shift our attention from hyper-commoditized feeds to mission-driven places.

Today, I think a single person could spin up a million-person writing-based network for under $100k/year (my guess is that’s <0.2% of Substack’s cost). If you clone something exactly (like Twitter>Bluesky), there’s little reason to switch because you lose the network effects. But the oozification of code & interface means that we can start experimenting with better social architectures. How might a network built for human flourishing actually function? A novel concept paired with a small critical mass (just a few hundred people) might be enough to trigger a cascade of platform switching.

The irony is that AI coding is only possible because big companies have been able to amass extreme amounts of capital, resources, and data, but in doing so they’ve released something that could erode their own monopolies on attention, the last scarce resource. Now I think it comes down to what people decide to build. If everyone can build anything, will we each try to build our own empire of extraction, or will we contribute to a culture we want to live in ourselves?

→ source

Writer as Technoactivist

· 153 words

02:32 PM – There’s something to the phrase “writer as technoactivist” that is appealing as we inch towards the 2030s. The word activism has gone sour for me, because it’s a stand-in for laziness, whining, and opinions. But there’s a history of technological activism that goes back to the 1980s and still continues today. I guess there was always a limit on what could be achieved through open-source software movements compared to market hounds. But if AI makes the cost of building things irrelevant, and any “revolutionary” suddenly has a 100-person “workforce” at their whims, then there might be a rise of new kinds of founder-driven institutes with missions you’d never see in the 2000s-2020s. Up until now, there was a fixed band of company types: unicorns, a $10-100m business for VC, a $1m narrowly-optimized market niche business, or a side passion. Feels like we’re entering an exciting new moment where mission-driven people can scale in ways that weren’t possible before.

UI as attention guardrails

· 113 words

Whenever you open an app you give it permission to shape the grooves of your attention. Through its interface, it suggests and implies a limited range of ways you can interact. This all sounds very abstract, and what I really want to say is that I think my Things app (the #1 best-selling productivity app, I assume) keeps me in a kind of productivity hell. I have, what, 84 things to do today? Task lists should not be ambient all-day guides. I should leave it in the closet, go in there and whiff it for 5 minutes, max 10, commit to memory whatever is important, and then not go back until tomorrow.

Reading in public is rude too

· 166 words

My head is tilted down 60 degrees, and I’m cut off from the people and world around me. My cousin’s cousin was actually in the shop, and I almost missed her. Reading Emerson while waiting online feels extremely rude. Isn’t reading a physical book in public just as bad as reading on your smartphone?

Of course, books aren’t evil. Neither are screens. It’s the action/context mismatch that’s wrong. I guess the problem is that screens make it easy to have all your books with you at all times, and so it’s convenient and normal to be rude.

What you reveal when you say screens are bad for society is that you don’t have the ability to wield tremendous power. It’s not the smartphones to blame, but the apps on them, and we rarely realize how mindlessly we install them, or how long we’re willing to be mesmerized by a bad information architecture. When we reach the iOS vibe code singularity, there will be no excuses.

Curating the infinite

· 474 words

If you give an infinite number of monkeys typewriters, with an infinite amount of time (obviously theoretical because neither beings nor time can be infinite), not only will one of them produce Shakespeare, but the entire Western Canon would be re-derived from scratch in every moment of reality. This captures the difference between astronomical values and infinite values. With astronomical values, given an absurd amount of time, one monkey will eventually do the impossible and write Shakespeare. But with infinite values, monkeys are inventing Shakespeare as the grammar of space-time. The astronomical shows that the impossible could happen once, but the infinite shows that the impossible could become the fabric of a reality.
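The astronomical/infinite split can be made precise with the standard probability sketch behind the infinite monkey theorem (Borel). With $k$ possible keys and a target text of length $n$:

```latex
% Probability of typing the n-character target in one n-keystroke attempt:
%   p = k^{-n}   (astronomically small, but strictly positive)
% Probability of never succeeding across t independent attempts:
\[
  \Pr[\text{no success in } t \text{ attempts}] = (1 - p)^t
\]
% Astronomical t: (1-p)^t stays positive, so success merely *could* happen.
% Infinite t: \lim_{t \to \infty} (1-p)^t = 0, so success happens with probability 1.
```

The crossover from "could happen" to "happens with probability 1" is exactly the jump from astronomical to infinite.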

And Sora is, like the 2005 Facebook feed, just the start of something new, but something that might actually be as nauseating as the infinite. If you have agents that can reproduce endlessly (potentially infinite “creators”), with the ability to remix/generate one piece of content against every other node in a growing cultural matrix (actually infinite), with limited time/cost (not infinitesimal, but fractional), that leads to every possible reality happening in every moment, at a cost that’s bearable to tech corporations.

I think I find this all interesting now, because something as abstract as the infinite might shape the future of creation/consumption. And to tie this to our talk last night about optimism/pessimism, I think the difference comes down to those who have the agency and discernment to plug in to the infinite on their own terms. It could be as simple as, if you plug in to OpenAI, Meta, or X, and let them use your data to create a generative algorithm for you, you will be swept away in limitless personalized TV static. But if you know how to build your own tools (hardware, software, social communities), then you have a chance to harness it.

In Sora, I’m currently in a Bob Ross K-Hole, and it triggered an unexplainable interest in trying to explore the edges of Bob Ross lore, which is, now that I write this, so random and pointless and misaligned, but when I do it I’m cracking up and can’t really stop.

Contrast that with my own theoretical "infinite system," where every new log surfaces the 100 most related logs, and then each of those logs becomes the seed for an essay generator, each of which gets rewritten endlessly (for hours, days, or weeks) via an EA software feedback loop, until I decide I want to read it.
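The first step of that pipeline (surface the most related logs) is easy to sketch. Here's a minimal bag-of-words version standing in for whatever embedding search a real system would use; `most_related` and the archive-as-dict shape are my own hypothetical, not an existing system:

```python
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    """Crude bag-of-words vector; a real system would use embeddings."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def most_related(new_log: str, archive: dict[str, str], k: int = 100) -> list[str]:
    """Return the ids of the k archive logs most similar to the new log."""
    query = vectorize(new_log)
    ranked = sorted(archive, key=lambda i: cosine(query, vectorize(archive[i])), reverse=True)
    return ranked[:k]
```

Each id returned by `most_related` would then seed the essay generator, which is where the "rewritten endlessly until I decide to read it" loop takes over.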

And so if you dive into the infinite, even if it’s something you love, it can easily destroy you, and instead we need to make our own systems/agents that can surf those edges for us, and bring back just the right amount of information that we can meaningfully work with.