
Archive

January 2026

7 pieces

Do paid subscribers influence discovery on Substack?

· 546 words

Chris Best, founder of Substack, posted that they caught “President Plump,” the #1 growing account on Substack, using fake subscriptions to boost discovery. I think this was intended to comfort everyone that they caught a scammer (justice!), but actually it confirmed what many were starting to notice: discovery is contingent on you making money. If you have paid subscriptions turned off, no algorithmic wind will blow your way. But if you get a spike of paid subscribers in a month, suddenly your old posts start to go viral, presumably in hopes that even more paid subscribers will bring the platform its 10% cut (this has happened to me before). This isn’t inherently bad. For every President Plump, there is an earnest person trying to finance their creative project.

But at scale I fear it creates a bad pattern, because the accounts that everyone sees will be the ones making the most, and generally these will be marketers and growth hackers more than artists. I think you will find better writing in the gutters of Substack than on their rising leaderboard. If authentic culture emerges outside of monetization, then there’s a real rift between what Substack wants to be (“an engine for culture”) and what it actually is (an algorithm that only rewards monetization).

I think the best we can do is use this information to our advantage. For example, I could have new Essay Club members pay directly through Stripe, but by handling payments through my Founding Members tier on Substack, I get a discovery boost, which is worth the 10% fee. Similarly, if you make small digital products, it might make sense to bundle them into a subscription instead of charging per item.

Should you use a credit card masking service to give yourself 20 paid subscriptions at $5 each? It depends. You’d be paying yourself $100/month, and since most of that comes back to you, the real cost is roughly Substack’s 10% cut: about $10/month for a probably noticeable increase in discovery. The question is, will you get caught? Maybe they are on the lookout now, but my guess is they would only penalize it at a certain scale. Sam Kriss speculated that President Plump was paying himself around $5,000 per month to reach #1. I’ve never done this, and wouldn’t necessarily recommend it unless you have a hacker mentality and really need the growth.

At the very least, you should consider having paid subscriptions turned on. Cate Hall found success in charging $1/month and getting to #1 rising. Our very own Yehudis Milchtein also set up $1/month subscriptions and is now #91 rising in literature.

However you approach this, it brings up a bigger question for me on how to build a real engine for culture. It seems like you can’t have an algorithm optimize for a single reward (popularity or money) without it being gamed; instead you could give everyone curatorial power relative to their cultural reputation, however you measure that. For example, if we all trust Ted Gioia, then somehow Ted’s like should count more than 10,000 bot likes or $1,000 in fake subscriptions.
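
Here’s a minimal sketch of what reputation-weighted curation could look like (the scoring rule, the reputation numbers, and the accounts are all hypothetical, not anything Substack actually implements):

```python
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    reputation: float  # hypothetical cultural-reputation score in [0, 1]

def curation_score(likes: list[Account]) -> float:
    # Weight each like by the liker's reputation: one trusted curator
    # outweighs any number of zero-reputation bot accounts, and money
    # spent on fake subscriptions never enters the score at all.
    return sum(a.reputation for a in likes)

ted = Account("Ted Gioia", reputation=0.9)
bots = [Account(f"bot{i}", reputation=0.0) for i in range(10_000)]

print(curation_score([ted]))  # 0.9 (one trusted like wins)
print(curation_score(bots))   # 0.0 (ten thousand bot likes count for nothing)
```

The design choice that matters is that money never touches the score, so a President Plump scheme buys nothing; the hard problem just moves to measuring reputation itself without creating a new thing to game.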

I hope this triggers more transparency from Substack on how their algorithm works, and also hope for a new generation of platforms where each person has visibility into and control of the thing that is routing them information.

Organic Voice

· 207 words

Good voice is writing that’s unchained from a single register. This is why default AI sounds so robotic: even if you prompt it with the precise style you want, it applies the same approach to every single sentence, producing a monotonous caricature. Whatever the style, the result is numbingly uniform.

I find that if a writer gets stuck in any one register (only hilarious, only referencing Aristotle, only confessing terrible things, every sentence a metaphor), the writing becomes annoying and unbelievable. We probably all have our default register. I get annoyed when I catch myself stuck in an analytical register. People don’t act like this IRL. People are 75-sided and context-dependent.

As a writer skirts over different objects of focus, the tone should alternate between opposite modes: certainty and doubt, anger and love, approachability and authority, active voice and passive voice. There’s obviously no single tone that’s better than any other, but adaptive tone is better (=more organic) than drone tone. 

Organic voice is, I think, one of the hallmarks of the essay. While other genres are locked into specific registers (research papers are certain, neutral, and authoritative, with terrible passive constructions to capture every nuance), essays are exciting because they capture the multitudes of expression.

→ source

Self-Deception

· 387 words

I've always thought 'writing shows you what you think and editing helps you change your mind'—and maybe that’s a decent heuristic—but it’s more complicated than that. I think it’s possible for writing to do the opposite of what we hope, to lead to self-deception. A few thoughts on how:

  1. Premature convergence: When you start drafting, you unlock a new stream of thoughts, but once you find a new center of gravity (a potential thesis), it’s common for all further thoughts to reinforce the thing you happened to stumble on, regardless of its substance. Beyond a point, writing can ossify & lock you into a frame.

  2. Aesthetic attachment: Once you’re trying to make a ‘good’ essay around your thesis, it’s easy to become enamored by phrases, sentences, images, and sources. Expression (vibes/voice) is an entirely different thing than thinking. You can dress up a static/wrong thought to be beautiful/persuasive.

  3. The sunk cost fallacy: After you spend hours on an essay and share it, it’s likely that you’ll continue to believe it. If you’re wrong, you’ll have ‘wasted’ that time. If you change your mind, your readers will have an outdated model of you (OFC, views evolve over time, but I wonder if publishing leads to short-term friction in your evolution).

One possible way around this is, as soon as you think you’ve found your thesis, to rigorously consider and explore the antithesis (not as a rhetorical strawman, but to really, earnestly, consider the opposite). It means a given draft will be scatter-brained and contradictory, but it’s how you find a synthesis, a more refined thesis. And once you find that, you start over, and repeat, until you end up somewhere that is far more nuanced, interesting, and weird than where you started.

The thing I’m grasping at is that thinking & expression are often at odds, and before you commit to an idea worth expressing, you need to go through rounds of unglamorous self-interrogation. There is probably a mode where thinking _is_ expression, but the risk is not wanting to shed something that is elegantly said. One way through this is to get meta and explicitly express your doubt and your evolving POV; I think this is what separates essays from articles and propaganda, and it stops you from brainwashing yourself.

→ source

Software Incentives

· 449 words

One of the thrills of the AI revolution will be how it untangles software from bad incentives. Today, software is expensive to build and maintain, and so it needs returns to fund itself. The big social media companies have annual expenses of $50m-$50b; they are in no position to operate from virtues, or to deliver on their stated aspirations of “connecting the world,” because they need to optimize for attention and convert it to revenue to fund the ridiculous scale of the operation.

But now we’ve hit the point where autonomous coding is real: Claude’s Opus 4.5 can code for 30 hours straight. I am currently “rebuilding Circle,” the community platform, except not as a platform, but as a single customized instance for my community (Essay Club). I am maybe 4 hours in and halfway done. Circle wanted $1k/year, so I built my own with a $20/mo Cursor subscription.

When you can just prompt software into existence, you don’t need fundraising, an expanding team, or all the sacrifices that come with capital. Software can start reflecting the will of visionaries, rather than the exploited psyches of the masses. Of course, AI coding will also enable huckster bot swarms to sell Candy Crush clones and other brain rot variants, but more importantly I think we’re entering a new era of techno-activism.

Millions will use their weekends to spin up apps, sites, tools, platforms, and networks, not for the sake of colonizing the planet’s attention, but for the sake of gift-giving or mischief-making or culture-shaping. It could mean that we shift our attention from hyper-commoditized feeds to mission-driven places.

Today, I think a single person could spin up a million-person writing-based network for under $100k/year (my guess is that’s <0.2% of Substack’s cost). If you clone something exactly (like Twitter>Bluesky), there’s little reason to switch because you lose the network effects. But the oozification of code & interface means that we can start experimenting with better social architectures. How might a network built for human flourishing actually function? A novel concept paired with a small critical mass (just a few hundred people) might be enough to trigger a cascade of platform switching.

The irony is that AI coding is only possible because big companies have been able to amass extreme amounts of capital, resources, and data, but in doing so they’ve released something that could erode their own monopolies on attention, the last scarce resource. Now I think it comes down to what people decide to build. If everyone can build anything, will we each try to build our own empire of extraction, or will we contribute to a culture we want to live in ourselves?

→ source

Fever Dream

· 317 words

Over the weekend I had a 101+ fever, and so I was banished to an airbed in the attic to not infect the baby. Wrapped in blankets, I found myself in a sequence of near-identical “fever dreams.” Before this, I hadn’t thought about the phrase much. As a metaphor—“the president’s plan is a fever dream”—it implies a delusional desire, but real fever dreams tap into a different thing: for me, they’re about absurd procedural loops. I found myself deeply concerned with the layers of blankets around me: I had the urge to unfold them, visualize each one as a heat map, extract the cold parts with a boxcutter, restitch them into a new blanket, shape this new perfectly cold blanket into an animal sculpture, and then sell it on Etsy. I can’t remember the sequence exactly—it only made sense on the inside—but it was a cold-side harvesting operation for sure. I’d wake up and realize, oh, this whole scheme is stupid and pointless, and now that I know this I can sleep peacefully. Yet as soon as I went back under, I slipped back into this incoherent non-problem. It’s not uncommon to fall asleep and re-enter the same dream, but with a fever dream, I find that all I can do is return to my miscognitions, 5-10 times, until the fever breaks. It’s not scary, but repetition can be hellish (like the Teletubbies DO IT AGAIN! sequences). My guess is that an overheated brain that’s deprived of REM will linger on thoughts it can’t digest. It becomes a type of lucid dream, a lame one with no visuals, where awareness of the loop can’t break the loop. There are probably situations better suited for the fever dream metaphor, but I can’t think of them now. Until then, no takeaways other than don’t get a fever, and if you do, stay away from blankets.

→ source

The p(doom) of higher education

· 782 words

A few months ago I saw a YouTube video titled something like, “A child born in 2025 is more likely to get killed by AI than graduate college.” What a ridiculous claim. I assumed it was clickbait and didn’t click, but it has jingled around my head to the point where I think I can make sense of its argument:

  • The average p(doom) of an AI engineer is 16%, meaning there’s roughly a 1 in 6 chance of human extinction (put another way, companies have morally rationalized the need to play Russian roulette—if we don’t do it, the bad guys will—without acknowledging that if they survive and win, they get the consolation prize of commandeering the whole economy).

  • 40% of US adults aged 25-34 today have a bachelor’s degree. If there’s massive job automation and unemployment, a college degree would be unaffordable for most, and an unreasonable investment even for those who could afford it. It’s not unthinkable that <15% of the next generation gets a college degree, which makes that sensational claim, weirdly, plausible (see the arithmetic sketched below).
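
To see why the numbers work out, here’s the back-of-envelope version (a sketch that reads the <15% figure as conditional on surviving, and takes the 16% p(doom) at face value; you can only graduate in the worlds where doom doesn’t happen):

```latex
P(\text{degree}) = P(\text{no doom}) \cdot P(\text{degree} \mid \text{no doom})
                 \approx 0.84 \times 0.15 \approx 0.13 < 0.16 \approx P(\text{doom})
```

Under those assumptions the two probabilities genuinely cross, which is all the video’s title needs.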

I still think it’s a shaky comparison, confusing two different types of probability, and assuming extreme ASI turbulence. But as someone with a daughter born in 2025, it has gotten me to think about how the societal backdrop to her upbringing could be especially weird. Our circumstance already gets slightly weirder with each generation. Except maybe the next loop will be an unavoidable and disorienting flurry of change that will confuse parents and rewrite all of the conditions for the typical coming-of-age moment (all the teen movies will be sci-fi, the popular memoirs could be written by transhumanists who have upgraded in unimaginable ways, like they no longer need to sleep because of a new pill, or they can control the genitals of their peers with an app, who knows).

And so now, I find myself drawn to a 2045 forecasting project. Trying to predict the future is typically a huge waste of time (unless you’re gambling and win), which is why I’m going to have AI write the whole thing. This is a rare exception where a writing project makes little sense for a human to do. All I’m going to write are the upfront origin documents, and then Claude Opus 4.5 will read 25,000 sources, write a million words or so, and then organize it all into an interactive, oatmeal-looking website called 2045predictions.com (got it).

Before I run it, here’s something I’m currently thinking through:

What is the omega state? When I look at the popular AI forecasts from 2025, they read to me like they have a pre-determined end state, which detailed forecasting is then used to make convincing. The AI-2027 forecast presents its conclusion as emerging from very detailed calculations on how a hivemind of 200,000 autonomous coders would evolve month by month, but I suspect they picked the year 2027 because the following year, 2028, is a US election year, and they want the next administration to take AI safety far more seriously (instead of just insisting we have to beat China). I don’t think there’s anything wrong with this. You kind of have to start with an omega state. The future is so boundless that you need to begin with a guess, a bold outline on the general direction of things.

Here’s my omega: let’s assume humanity survives, and let’s assume technology does unlock hyperabundance that leads to a post-scarcity world, HOWEVER, it’s not utopian because it simultaneously unlocks a new cascade of moral, social, and spiritual crises, dilemmas that will test the timeless primitives of humanity (sex, life, death, consciousness, religion, home, etc.). This omega state makes sense for me because (1) we already know that ethical dilemmas scale with technology, and (2) according to the Strauss-Howe generational theory (from the same guys who coined “Millennials,” “Gen-Z,” etc.), this already tends to happen every 80 years (the length of a human lifespan). A new techno-political order creates a spiritual crisis that generates an Awakening, a new value system that shapes society for the next century or so. You know what’s 80 years before Kurzweil’s “singularity” of 2045? The counter-cultural revolutions of the 1960s. What I’m getting at is that the 2040s might have echoes of the 1960s, where demographics are divided on core issues and LSD is replaced with consciousness-altering machines (Terence McKenna said that computers are drugs, you just can’t swallow them yet).

We currently define the singularity as “the moment when a computer is smarter than all humans combined,” but that effectively means nothing, and it’s far more useful to have some guesses on how we all might freak out about that happening.

Phantom Infant Syndrome

· 748 words

A few days after my daughter was born, I had something which I’m describing as “phantom infant syndrome.” When I was away from her, holding a phone, or fork, or some other manufactured object, I’d get a tactile hallucination in my hands of the softness of her skin and hair. I imagine this is nature’s way of saying go be with your kid (made possible by mild sleep deprivation). And so this is symbolic of one of the many biological drives pulling me away from writing in recent weeks.

This is happening around my five-year anniversary of being online, and it’s probably the longest stretch I’ve gone without feeling the urgency to write. It’s probably healthy and helpful to be relatively non-linguistic for a few weeks, once in a while (I usually write on vacations, so I never really take breaks from it). We’ll see. It’s possible that I’ve thought myself into a trench, and the best way forward is a proper break (I once said the best editors are friends, time, and weed—although less weed in recent years). Now that I’m immersed, familiar, and comfortable with the rigamarole of infant care (and all the wonder it brings, too), I feel bandwidth opening to write, and I’m curious to see how my practice takes shape from these new constraints. There are real deadlines now. Baby wakes up in … 30 minutes … and I’d like to post this by then.

Last weekend I read through all my writing from 2025, and after the typical EOY reflections and word count calculations, I realized that something has to change. In 2025, I published 12 essays, 10 about Essay Architecture, totaling ~64k words (re: the other two … one was a first-person TikTok odyssey, the other was about the role of psychedelics in evolution). But I also published 150k words in logs, 2.5x the volume. Logs are notes to myself, mild epiphanies through the day written in complete sentences, all ghost-posted to a monthly Substack post. Unlike my focused and convergent writings about EA, my logs are far more random: recurring topics included the Grateful Dead, movie reviews, notes from a day at the zoo, dream journal entries, usage debates, new architectures for social media, overheard conversations, etc. My logs, in theory, are a low-stakes breeding ground for essay ideas to emerge, but given the demands of my other projects (the textbook, software, and essay prize), my logs stayed unread and undeveloped last year. Now, with parenting in the mix, it makes sense to me to stop logging, or at least, reconfigure it.

Over 4 years, I wrote 8k+ logs, added to the archive on 95% of days (avg. 5.6 per day), and the whole archive is 650k words. It’s a very personal corpus, one that documents my thoughts and life at a sometimes OCD-level of detail. I thought I’d do this forever, and it sort of stings to stop. I guess I’m not “stopping” as much as setting a stronger filter: I can still capture whatever I want, but I can only save whatever I publish on Notes. I used to argue for the importance of having a low-visibility space where you can publish whatever you want without self-consciousness or the need to set context with strangers, but maybe that’s a luxury I’ve outgrown. This is perhaps a long-winded way to announce something that probably doesn’t need announcing: expect to get a lot more diddles and spontaneous essays like this in the Feed. I figure my email essays can be more on topic (I have a few slotted for January re: Essay Architecture, the club, and visual breakdowns), while these can be chaotic.

Technically, I’m still logging, but it’s for my daughter and those are private. Every day I write simple journal entries or letters about what happened. I figure one day, when she’s 15 or so, I’ll just hand over The Files and blow her mind. My dad did this for me: a few years ago, after my nephew was born, he sent me 8k words from my first 4 years. It was uncanny to see that he had a logging impulse too, and to learn about all these small events that everyone in the family would have otherwise forgotten (things that were not captured in pictures, like me trying to brush the teeth of a stray cat). All this reminds me that writing isn’t just an act of thinking or communicating, it’s an act of memory.

→ source
