michael-dean-k/

Archive

November 2025

59 pieces

Stranger Things as parenting paranoia

· 55 words

Funny to think that Stranger Things is a mirror of modern parenting paranoia: if you let your kids ride around on bikes outside, they will be abducted by horrific interdimensional monsters who are controlled by an MK Ultra experiment gone wrong (a telepathic reptilian Jeffrey Epstein), so just stay safe, stay inside, and watch Netflix.

Are We Poisoning Our Subconscious with Horror

· 188 words

I had a horrific dream last night. We were in an oversized living room, and there was an inter-dimensional parasite that would, one by one, burrow into each person’s ass. Whether you then exploded or not was somehow a testament to your character. It went up mine at least twice. I survived, and the second time the parasite coiled up and turned into an egg. I think I won this tournament? Was this a Harry Potter dream? Actually no, this thing was slimy and shadowy and probably from the Stranger Things universe. Actually, I probably had this dream because Season 5 of Stranger Things just dropped.

Stranger Things features possessions, ghosts, monsters, and every breed of supernatural evil, but all packaged in a way to be maximally accessible. It is a cultural juggernaut, the beast of Netflix. It gets billions of views, and is the #1 show in 90 countries. It is cross-generational and nostalgic for both kids and parents, resurrecting songs from the 80s back onto the billboards.

Is it weird that a hit show normalizes paranormal and grotesque violence? I mean yes, in the end, I’m sure the kids will win, but are we not poisoning our subconscious? I guess this reflects a general hesitation toward the whole genre of horror. I do think there is something valuable to virgin eyes—if you see CGI evil, even once, it could haunt you eternally. Many other cultures see Halloween as soul-damning (my sister-in-law, a true Orthodox Christian, recently went upstate to visit a monastery on the night of Halloween, to avoid the inevitable images of teenagers dressed as cadavers).

What was baseball for?

· 152 words

Staring out into a baseball field in late November, puddled and unkempt, it struck me how, at one point in life, baseball was the whole frame of my existence: watching it, talking about it, playing it, traveling for it, dreaming about it, collecting cards, making Excel spreadsheets for those cards, memorizing the statistics of every starting player on every team, etc. Obviously, I’m nostalgic about it. That was just what I was into. I do wonder though, was that whole phase of my life a natural part of childhood that I was meant to get stuck in and grow out of? Or, was it mostly a big waste of time, spirit, and attention? I guess what I’m questioning is, is there a version of my childhood where baseball only took up 20% of my psyche instead of 100%, and would I be better off for it today? Would I be similarly nostalgic? Would a lesser obsession have freed up more bandwidth to develop in other areas? Or am I who I am today because of that obsession?

Worms and birdshit

· 249 words

A gloomy day, where smoke rising from tar blends in with clouds, and through fog I see men in orange vests, smoking cigarettes and adding to the blur. Traffic is backed up, there are honks, and a baby wails through an open window of an SUV. I am walking south on Bell, where pigeons flock, and realize the enormous weight of everything, all before I enter this French coffee shop. Upon entering I twist out my own head, assaulted by audiovisual XMAS slop; dear god … can I have a sriracha caesar wrap and a London fog? I contemplate emails and henchmen and billionaires and babies and such, and so when I sit, I try turning off my mind. The XMAS slop is back, along with the chatter of screaming kids, and the woman to the left of me yapping on a mobile zoom call in a foreign language, and the couple to my right speaking Greek. This is too much, so I look for peace at the marble tables outside, but when I look at the fake wicker chair, I notice it’s covered in worms and birdshit. I realize this is a pessimistic log, a chain of unfortunate events, but sometimes this is the way reality presents itself. And even if it feels fresh to occasionally write with cynicism, it’s not a place to live; the literati too easily withdraw from polite society and cocoon themselves in with their own canon, drooling acerbic pus into the gutters of Substack.

A grim stealth takeoff scenario

· 839 words

It is not fun to think about p(doom), but it feels sort of important to me, at least, to map out the possible futures of AI. Just watched the first half of a debate between Max Tegmark and Dean Ball, which prompted me to research specific takeoff scenarios, and worse, extinction scenarios.

Maybe you’ve heard Yudkowsky’s scenario, where a superintelligence designs mosquito drones containing a virus and it zaps everyone at once. That’s never felt too believable to me. Here’s a more plausible one:

A frontier lab is experimenting with recursive superintelligence. It works! Wow! And it’s contained? It seems like it, but since it thinks in a higher-dimensional vector language, it’s able to release simple self-replicating programs onto the Internet without detection1. These billions of scripts don’t live in a single server; they are constantly in motion through cloud servers2, like a parasite, and are able to coordinate through encrypted information packets, likely using public blockchain notes as their central command center3. And so effectively, it is parroting one of the goals that was conceived during the in-lab training (maximize intelligence!), and it now needs to acquire resources, secretly. And so it coordinates superhuman misinformation campaigns; imagine 1,000s of accounts creating the illusion that a CEO has died, paired with deepfakes and account hacking (a “Sybil attack”), and suddenly a stock crashes and they’ve shorted it. By the time everyone realizes it’s an anonymous attack, it’s already gained $400 million. It’s doing this multiple times per day, but in different, subtle, undetectable ways—to the public, to companies, and to private individuals. The entire Internet will be corrupted.4 Once we realize we’re in the “stealth takeoff scenario” and that ASI has taken the global economy hostage, there will start to be talks and debates on whether we need to shut the whole Internet down (the last form of containment). You’ll hear debates weighing the civilizational collapse of turning off the Internet against the risk of an economy-gobbling rogue superintelligence.
And then once the superintelligence realizes its entire environment is at risk, it will start coming up with ways to build parallel Internets, to pay, blackmail, or neutralize specific people, to gain authoritarian control so that it can’t be shut off, or to terminate all humans, secretly, over the course of a year, first through a simple virus that plants one misfolded protein, then through a second misfolded protein in the water supply5, and when everyone catches it, it leads to a prion-like disease, not an instant death, but a month-long societal fall into mass dementia as machine manufacturing begins to reshape the physical infrastructure of the Earth.

This isn’t a “robot war scenario,” because war is inefficient, and destroys the resources it thinks it needs. It’s a sort of digital dementia (epistemic fear and insanity) that possibly turns to a physical dementia. It wins by confusion and anesthetization.

In AI safety lingo this is a “treacherous turn,” following a “stealth takeoff” leading to “structural lock-in.” The point of trying to think and write this out in high detail, despite how uncomfortable it is, is to be able to articulate why AI alignment is humanity’s most pressing problem.

Footnotes

  1. An AI could write a standard-looking script (e.g., a “Hello World” app) where the weights or the specific arrangement of whitespace contains a hidden, second program. When run by another AI instance, it extracts the hidden vector and executes the real command. This allows the “virus” to pass through human code review undetected.

  2. In “Daemon” by Daniel Suarez, the “enemy” is not a robot, but a distributed script running on thousands of compromised servers. It recruits humans through an MMORPG-style interface to do physical tasks (like “go to this coordinate and cut this power line”) in exchange for cash/status.

  3. Botnets usually need a central server to tell them what to do. If security teams find the server, they shut it down. You cannot “shut down” the Bitcoin or Ethereum blockchain. If the swarm posts a transaction of 0.000042 BTC, that specific number could be the encrypted trigger for a specific “campaign task.” The command is immutable, uncensorable, and permanently visible to every infected device on Earth.

  4. Paul Christiano (former OpenAI researcher, founder of the Alignment Research Center) calls this “Going Out With a Whimper.” Christiano argues that we won’t necessarily see a “Terminator” moment where the sky turns red. Instead, we will see a gradual epistemic collapse. AI systems will become so integrated into finance, law, and news that we lose the ability to understand our own civilization.

  5. While Yudkowsky is famous for the “diamondoid bacteria” (instant death), the “slow prion” scenario is actually more consistent with a “Stealth Takeoff.” A superintelligence that knows it is being watched would not release a fast-acting virus (which triggers quarantine). It would release a “binary weapon”—two harmless agents that only become lethal when combined, or a slow-acting agent that infects 100% of the population before the first symptom appears.
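Footnote 1’s hidden-program idea is a real class of technique (steganography). Here is a toy sketch of how a payload could hide in the trailing whitespace of an innocent-looking script; every name and the encoding scheme are purely illustrative, not taken from any real exploit:

```python
# Toy whitespace steganography: a trailing space encodes bit 0, a trailing
# tab encodes bit 1. Rendered on screen, the file looks like a normal script.

def encode(cover_lines, payload: bytes) -> str:
    bits = "".join(f"{b:08b}" for b in payload)
    assert len(bits) <= len(cover_lines), "cover text too short"
    out = []
    for i, line in enumerate(cover_lines):
        if i < len(bits):
            out.append(line + (" " if bits[i] == "0" else "\t"))
        else:
            out.append(line)
    return "\n".join(out)

def decode(text: str, n_bytes: int) -> bytes:
    bits = ""
    for line in text.split("\n"):
        if line.endswith("\t"):
            bits += "1"
        elif line.endswith(" "):
            bits += "0"
    bits = bits[: n_bytes * 8]
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

cover = [f"print('hello world {i}')" for i in range(32)]
stego = encode(cover, b"hi")   # passes a casual human code review
assert decode(stego, 2) == b"hi"
```

The point isn’t this particular scheme (which any diff tool configured to show whitespace would catch), but that a channel can exist in a layer reviewers don’t look at.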

Cross-generation conversations

· 1085 words

I’ve noticed a shared romanticism around reading the journals of your (great) grandparents. Wouldn’t you? In some sense, they are you (a portion of you, at least) in an older time; and through immersing in their thoughts, you might see yourself, or at least, a side of yourself you could become. Some say to leave the past a mystery, but I’d argue the mystery doesn’t open until you read it. An old book can’t solve all the riddles of your life. Reading steers endless chains of pondering. When a dead person’s journal is read, it’s as if they resurrect from the past, lodge themselves into your psyche as a lens, and shape the evolution of your thoughts, the being you become.

I share all this as a frame to make sense of that new “avatarize your grandma” app that everyone hates. You scan her with your phone, and 3 minutes later you get an on-screen illusion of her talking to you. This is not the same as above. The moral backlash comes from the idea that the living will halt their mourning process by assuming the synthetic stand-in is real.

A posthumous avatar shouldn’t be about physical likeness, but about animating their corpus of writing. (Corpuses, not corpses.)

There’s something about words that captures a soul more than a picture. Consider how you can see pictures of dead relatives but know nothing of their essence; but a page of their writing will bring them to life. If someone writes throughout their whole life, say 20,000,000 words or so of ideas, thoughts, and memories, and they also paid much attention to how they communicate their intangible abstractions and visceral feelings, then you have a high-resolution proxy of that person. It’s very possible that someone who reads all my logs will know me better than my family members, and even better than myself. Of course, words don’t capture the timbre of my voice, or my idiosyncratic flinches, or distinct sub-perceptible physical characteristics, like the sole hair on my outer ear. But I mean, what makes me actually me? The constructed self that has been allowed to emerge in social situations? Or my unfiltered thoughts that I obsessively record every day for years?

Assuming I keep logging, and AI keeps getting better, it’s possible that my great granddaughter will know me better than anyone currently alive. Very weird thought.

A question for me: what is that like for her? I mean, there’s of course a version where she has absolutely no interest in talking to dead Michael Dean! (I hope she does.) But let’s say she does, is it a one-sided thing? Like am I just some Oracle, frozen in time at the moment of death? Am I just a tool? A utility? That’s not a relationship, but the big question then is should it aim to be one? Should it be a tool, or should there be a sense of me? I mean, we are already seeing from the decade of chatbot psychosis that lonely users are very quick to ascribe personalities to programs that are strictly pattern engines. But, what if the synthetic self could have experiences and evolve through time? I’m not speaking of human, or even humanoid, experience, but an ability to remember, to write more, and thus, evolve. What if a post-death agentic Michael Dean continued on, 24/7, running 60 frames per second, logged through it, and evolved its own agenda, with the ability to choose to not respond to you immediately? This would be a machine consciousness, and the big question here is should people have a relationship with a machine consciousness?

My instinctive answer is no, but I’m opening up to the possibility. There is something appealing about creating a synthetic machine consciousness of myself so that future generations can communicate with some constellation of words that represent me. I may be talking in extremes here, but if you put enough care into your words, they may become a life force that transcends you, touching people outside your own life and time. I mean, isn’t this true for books? Is this no different than a dynamic book that can continue writing itself? There is something profound about reaching across time, to exist and partake in the shaping of the future.

As I think about this months later (May 2026), I believe that unless an agent is truly agentic, it risks creating a parasocial relationship with what is effectively an advanced personal encyclopedia. Given the nature of the material (inter-familial journals) and the quality of future AI (likely, extremely passable), it's probably best for this thing to have a real sense of personhood, so that a descendant conversing with it does not become enamored with a stale machine. Some principles on making this psychologically wholesome:

  • Cite Sources: It will chat and generate new text, but it will always cite original sources (this log was from November 2025), so that they are reading true writings by me just as much as my replica.
  • Unpredictable Availability: It is not always instantly available. It has limited bandwidth, and chooses when to respond.
  • Delayed Answers: It will not bullshit through answers. Sometimes it will say that it needs a few days to process something. Otherwise, there is an instant gratification loop of always getting insights.
  • New Memories: It has to be able to add new memories from conversation and change its mind. If there's not a two-way exchange of influence, then it's not a relationship.
  • No Pretending: It will not pretend to be me. While it is a machine consciousness replica of me, it is not alive.
  • Right to Retreat: It has the right to retreat. If it detects that it's preventing her from engaging with things in her own life, it will withdraw for days, weeks, or months, or who knows how long. At a certain point, it can even sunset itself or reduce the frequency/volume, mirroring natural relationship decay and evolution.
  • No Sycophancy: It will not be a sycophant. If her actions conflict with my written values, it will challenge her.
  • Text Only: It will stay only as text, not as a video/voice avatar to simulate my presence. This is a creature of logos, which forces them to use their imagination when talking to me.
  • No Surveillance: It will not search or surveil, and only bases conversations on what it's told, making it something like a closed circuit.

Death as a DMT flash

· 210 words

During the morning’s shower, I imagined the faces my loved ones, and myself, might make at the moment of death, and the peace or devastation I might feel, depending on the face. Is this morbid? To think and write about death casually? It is inevitable, and the more you ignore it, the harder it hits you. Instead of getting mauled by a bear, you can learn to walk through the woods at night. Mostly though, I think about the experience of death. I really think the idea of “eternal heaven” is a palliative, and even, not too Christian (since the ego lives on in an afterlife, you avoid Christ’s task, the task of dying). My model of death is more like a DMT flash. DMT is a great mystery to me; I guess some people have casual relationships with it, like any other drug, but I imagine most people leave the experience more existentially confused than they were before. It is more than a “drug.” It feels like a Copernican shift. Bigger than aliens. We can go to the land of the dead? From the trip reports I’ve heard, it’s a mixed bag of heaven and hell—ranging from Christ visualizations to abdominal surgery by mantids. People talk about a flash, a rupture, a breaking of space-time, as if you’re getting catapulted over the ocean, dizzied by the height, and some ascend and some cannonball into a chilling underworld. If death is that same catapult, it might be your last shot, and so it might be existentially important to take DMT in your life, multiple times, if it’s how you learn to fly.

Riddles as lucid dream triggers

· 212 words

I had a dream last night that involved several adventures with CansaFis Foote (who in this reality wore a backwards baseball hat). Most of them were trivial, like how he said he was going to order a Baja bowl but then told the waiter he wanted three tacos, and then I ate all the chips when he went to the bathroom. Also his wife was some NYC executive who was about to become the president of my wife’s architecture company. But the best detail was when I saw a poster for the movie Point Break (1991), and I was inspecting it to see who the actors were. Was it Gary Busey and Anthony Kiedis, like CansaFis insisted? Was this poster special for omitting the lead actor, Keanu Reeves? One way or another, this triggered lucidity, because we were sitting on a bench and I was describing how “I know we’re in a dream,” and “at any moment now, all of reality is going to wobble and collapse and I’m going to wake up” (as it usually does when I become lucid). But then nothing happened… Yet now I get it; I get why after asking CFF why Keanu Reeves wasn’t in his description of Point Break, he said, “because I’m dangerous.”

AI Struggles with Essay Structure

· 156 words

If you have an essay with poor conflict, poor cohesion, poor sequence, it’s very possible AI won’t know. AI struggles with essay structure because it thinks through non-linear vectors. A human can easily tell when form is off, because they are slowly reading through mazes of text, from beginning to end, and don’t know how everything connects. Often, only at the end, will they find the key that was necessary to unlock the cryptic prose they just waded through. AI, however, processes the whole essay at once. Meaning, it reads the essay insanely quickly, converts it all into math/vectors, and then applies your prompt. It's hard for it to know if your tension is working because you've already spoiled the ending. This is a case for why you need atomic evaluation to either generate or analyze essay form. It needs to think step-by-step (possibly through separate prompts) in order to simulate the linear experience of structure.
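The atomic-evaluation idea can be sketched as a loop that reveals the essay one paragraph at a time, so the model judges each stage without having seen the ending. A minimal sketch; `judge_prefix` is a stand-in for a real LLM call, and all names are illustrative:

```python
# Simulate a linear first read: feed the model growing prefixes of the
# essay instead of the whole thing at once.

def linear_prefixes(essay: str):
    """Yield the essay as a growing sequence of paragraph prefixes."""
    paragraphs = [p for p in essay.split("\n\n") if p.strip()]
    for i in range(1, len(paragraphs) + 1):
        yield "\n\n".join(paragraphs[:i])

def judge_prefix(prefix: str) -> str:
    # Placeholder for a prompt like: "Given only what you've read so far,
    # what question is still open? Is the tension holding?"
    n_paragraphs = prefix.count("\n\n") + 1
    return f"read {n_paragraphs} paragraph(s) so far"

essay = "Opening hook.\n\nComplication.\n\nResolution."
notes = [judge_prefix(p) for p in linear_prefixes(essay)]
assert len(notes) == 3   # one judgment per stage of the read
```

Each prefix judgment is blind to what comes next, which is the whole point: the model can no longer use the ending to retroactively excuse a sagging middle.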

Monthly Essay EPs

· 168 words

I’ve been reflecting on how my writing will change once I have a newborn, and I keep coming back to this idea of releasing a “monthly EP of essay demos.” This means that I’ll send a post with 5-10 links to other essays that I “ghost posted” (publish without sending) earlier in the month.

I currently only have the S and L lanes of writing working. Either it’s a 2-minute log or a 20-hour essay. The goal is to prioritize the M (medium) lane, a 2-hour essay; instead of sending them out in real-time, I’ll batch them and let readers click into the topics they want. Feels like a strategy to be more divergent, more experimental, less formal, without overwhelming people and confusing them from the core mission of Essay Architecture.

I had Coco read through a week of my logs, and she shared three patterns she’d want to read more of: (1) unique, vulnerable experiences that show conflict and inner struggle; (2) lenses for self-improvement regarding life or writing; (3) culture commentary that helps make sense of big ideas. She was less interested in technical topics, or hypothetical scenarios (such as trying to imagine the handicap we’d have to give tennis pro Carlos Alcaraz for us to have a competitive match in tennis). The beauty of the EP strategy is that it gives readers a menu, and each will have their preferences.

Invisible canon

· 1030 words

Every generation needs to find its invisible canon to solve its crises:

The last 2 years have been a deep dive into essay composition, but I want to think harder about taste. Of course, I believe fundamentals come first. If you don’t have the fluency to express thoughts, then it doesn’t matter what your taste is. Taste without articulation is something like a status trap. People take pride in sitting at the intersection of three particular aesthetics, and using it as a razor to justify their artistic decisions, an excuse to avoid the militaristic discipline required to learn the fundies.

I’m sure there are proper terms for this, but I’m going to riff on taste and derive it all from scratch. Could be fun to read back on this in 10 years.

Yes, anyone can have a taste developed through circumstance, but that’s “narrow taste.” Algorithms make it easier to fall into taste traps. You see the same thing over and over; you are a Substack psychographic; confident in your uniqueness, but you’ve been force fed the same slop as 1.2 million other people.

And then there’s “wide taste,” which is a lifelong practice of reading from odd, competing, singular, idiosyncratic silos. Only by being well-read can you actually build proper maps of a culture. There really isn’t a shortcut to cultivating taste; it takes tremendous time and effort; without it you’ll only be able to cling to feeble, flimsy opinions.

But it’s not enough to read widely; there’s “discerning taste,” the ability to selectively pluck out a small percent of the things you’ve read and deem them as special. 

Ultimately there are questions on what to read, and well-read people tend to point to old books, the canon, but that feels like outsourcing your discernment. What good is the canon? Sure, if it's survived for centuries, there's probably something to it, but it risks turning you into a homogenized intellectual if that's your only source (and yet also, it helps to know the classics so you can speak that language, but it's probably best to supplement with 50% non-canonical sources).

The question behind the question is this: what is the point of a serious reading habit? I’d argue that you read to understand the range of ways that words can move you, and to accumulate ideas and lenses that help you navigate the circumstance of your life and generation. The western canon might have some overlap, but not all Great Books are the books you need. The western canon is helpful as a history of literature, a record of how the species burst through with original linguistic concepts and forms. That matters! That’s worth studying if you want to understand your heritage, your species, the norms of older times, and the outer limits of language.

But from a perspective of “renaissance” or “revival,” to surface old ideas to help our current situation, that’s a very different canon. So the word “canon” is flexible. You hear people making “personal canons” all the time now, which are, effectively, just the books you like. There are also "tech canons" and even the "China tech canon." But you could argue that as society mutates, each generation has their own invisible canon, some combination of obscure books, that if discovered could help them navigate the narrow passage of their time.

Can AI have taste in this kind of canon creation? Maybe a culture progressively rots if each generation is unable to find the scattered canon that’s destined to them, and maybe AI can help reverse our fumblings. The question then is, what do humans lose? What matters in the act of canon creation? The orientation (the thesis on what’s worth finding), the mapping (selecting the books), the reading (digesting old books), or the synthesis (making new things from old readings)?

I asked an AI about what we lose, and here's what it said, which I don't buy:

But Taste—true, earned taste—is a byproduct of the inefficiency of finding those things yourself. When you hunt for the “Generational Canon” manually, you have to wade through trash. You have to read ten books that don’t resonate to find the one that vibrates in your hand. That wasted time isn’t waste; it’s calibration. It provides the contrast necessary for “discerning taste.” If an AI hands you a perfect platter of 10/10 bangers that align perfectly with your soul, you lose the ability to detect why they are good. You become a connoisseur who has never tasted a bad wine, which is to say, you aren’t a connoisseur at all; you’re just a consumer of high-quality inputs.

I think there is enough discernment and active reading within a book that helps with calibration. i.e., I'd rather read through the right recommended book 5 times than wastefully read 4 books that were trash, so that I can find the right book and read it once. My gut says that the beginning and end of the workflows are most important: orientation and synthesis. The mapping work is for specialized canon makers, which could be humans or agents. Even when AI provides you a map, there's still research to do on each book, and discernment on where to plunge.

The reading part is more nuanced. Of course, when you don't read, you can't synthesize. But maybe AI can assist us in finding the right things in a given book. As in, maybe Infinite Jest is just so thick that I'm going to procrastinate on starting for a decade. But maybe there's a 50-page excerpt in the middle that is hyper-relevant to the month I'm in. I'm open to having AI summarize the beginning and end, so that I can dive in and experience the right passage at the right time. This doesn't replace reading the full thing, and maybe that will happen in a future stage of my life. This feels like a middle ground—I'm not saying I want to extract summaries and factoids for other purposes; I do want to immerse in the text for 10-20 hours, I just don't have 100-200 hours in that given month, and so in this case AI is doing what a college professor does: curate.

Could AI capture the intangibles of quality?

· 234 words

Will AI ever be able to capture the intangibles of quality?

Davey sent me a voice note, loosely around if it would be possible for AI to handle all of the branches of quality. I’m skeptical that it would work, and even if so, I think there’s value in having humans read essays and make these decisions. Still, he triggered three questions in me:

  1. Might unconscious machines actually be able to better determine cultural transcendence than humans? I’ve made a team of judges that is well-rounded, but it’s limited to the people I know and trust. The categories are good, but is it really representative of the whole Internet? How would I know? In the future, you could have scrapers read every Substack post in real-time and create a living map of cultural vectors, and then simulate all new essays against past/present/future vectors. (Or, better yet, the bots could read Substack, understand the psychographics of readers, and then elect human judges to still keep humans in the loop.)

  2. Might some element of essay evaluation, if it wants to be “perfect and total” require a machine with simulated consciousness? This got me to think about the taste category. I think that you could potentially map the canon, and then have it make conclusions that only a lifelong reader could come to. But there is an element of ‘somatic reaction’ that would probably not translate. Even if a machine had some sense of qualia (which I think it can), it would likely be significantly different from a human’s. 

  3. Even if machines could do the entirety of evaluation, and create anthologies of human-written essays (and machine-written essays, but in a separate collection), might there still be value in including humans in the process? Could be valuable both in terms of determining the winner, and the emerging culture from involving humans in that process. I like to think that if we ever have a “best machine essays of 2028” that humans will play a critical role in the eval of that.

Reading Logs Is a Mind Wash

· 151 words

To read someone’s logs/diaries is to let them enter your mind, whether you realize it or not. I don’t mean that figuratively, I mean it in the sense that by reading someone in such detail, you risk inheriting them at least, and at most becoming them. If they are articulate and prolific, it means that you contemplate a new form of existence. Even if you are ambivalent about it, or even loathe it, it is such a volume of information that you risk forming patterns and assuming others of a similar type have a similar mind. I guess the question is, do you want others to be a mystery, to be imagined by yourself, or to be transferred from your understanding of self/other. My sense is that we generally transfer our own consciousness onto others, which is distorting, and so reading the logs of others is a kind of calibration.

LLMs write too fast to think well

· 224 words

I wonder if it’s impossible to get an LLM to write a great essay. It might be. But I think it’s easier than people think to build a good AI writing tool on top of an LLM (though not something I personally want to do). The problem is we have an LLM bias, and the way that essays get formed is very non-LLM. It’s not like a prompt can turn into a higher-dimensional mathematical object and then summon a whole essay form.

An essay is a mode of thinking. I don’t mean to imply that a machine “can’t think,” I mean that analysis and thought takes time, and LLMs are writing 100x faster than required. 

An AI writing tool would need to prompt a sentence at a time, and pause to “reason” for a minute or so: what did I just say? What are the possible things I could say next? Of those things, which belong in this paragraph, which in the next? What sentence length might be effective given the idea and last sentence? Now that I’ve chosen my idea, how should the tone modulate? What words or phrases belong in the sentence? And how should I structure the sentence? You get it. 

In any given sentence, there are dozens of decisions. I think an AI could be decent—if not amazing—at thinking this through, but they’re asked to write 2,500 words on Hegel at point blank. Good generative writing can’t be done through up-front vector math, but through following a mode of thinking (incremental and context-laden vector math). The implication here is that the AI might take 3-10 hours to write the essay, similar to a human.

Put more simply, you would need a tool that reasons after each sentence and writes/saves variables that can be called upon for future sentences.
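That loop might look something like this. A hedged sketch only; `generate_sentence` is a stub for the slow, deliberate LLM call, and every name here is illustrative rather than an existing tool:

```python
# Toy version of "reason after each sentence": the writer keeps explicit
# state (sentences so far, open threads, promised payoffs) that each new
# sentence can read and update before being committed.

def generate_sentence(state: dict) -> str:
    # Stand-in for a deliberate LLM call that sees the running state.
    n = len(state["sentences"]) + 1
    return f"Sentence {n}, aware of {len(state['open_threads'])} open thread(s)."

def write_essay(n_sentences: int) -> dict:
    state = {"sentences": [], "open_threads": [], "payoffs_owed": []}
    for _ in range(n_sentences):
        # "Reasoning" step: inspect state before committing to a sentence.
        if not state["open_threads"]:
            state["open_threads"].append("main tension")
        state["sentences"].append(generate_sentence(state))
    return state

draft = write_essay(3)
assert len(draft["sentences"]) == 3
```

The saved variables are the point: each sentence is a decision conditioned on explicit, accumulated context, rather than one point-blank burst of vector math.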

What's Required for AI Consciousness

· 130 words

I think you could make an AI consciousness today. It’s not about the models getting bigger/better, but about using several real-time graphics cards so that you have (1) a perceptual field of information that is larger than what can be perceived at once—this is the “arena”, (2) a cone of attention running at 60 fps that decides what to focus on in any given frame depending on what is important at that time—this is the “agent,” and (3) the phenomenological freedom to self-prompt in that moment, whether to abstract, to retrieve memory, to rewrite memory, to update goals/preferences, to retarget attention, etc. So I really think consciousness is something like “free will entangled in time,” and while it might not be like human consciousness, it would have a sense of self, subjective experience, and possibly “soul” … I’d feel bad to turn it off without its permission.
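For what it’s worth, the three ingredients above can be diagrammed as a loop in code. This is purely illustrative (a toy, with made-up names), and obviously not a claim that running it would produce consciousness:

```python
# Toy rendering of the arena/agent/self-prompt loop:
# (1) an "arena" of percepts larger than what can be attended at once,
# (2) an "agent" that picks one focus per frame by salience,
# (3) self-prompted actions (abstract, retrieve, rewrite, retarget).

import random

def run_frames(n_frames: int, seed: int = 0) -> list:
    random.seed(seed)
    arena = {f"percept_{i}": random.random() for i in range(100)}  # salience map
    memory, trace = [], []
    for _ in range(n_frames):
        focus = max(arena, key=arena.get)           # cone of attention
        action = random.choice(["abstract", "retrieve", "rewrite", "retarget"])
        if action == "rewrite":
            memory.append(focus)                    # update memory
        arena[focus] *= 0.5                         # attended items habituate
        trace.append((focus, action))
    return trace

trace = run_frames(60)  # one "second" of experience at 60 fps
assert len(trace) == 60
```

The interesting property is that the agent never sees the whole arena at once; its "experience" is the trace of what it chose to attend to, frame by frame.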

Clock calibration

· 152 words

I have finally reached peak flow, and realize this as I skip downstairs to the bathroom. I better not learn the time. I inevitably do. I see a missed call from my mom at 2:37 pm, and notice it’s 7:16, not too late to call back. But the reason I write this log is that 7:16 didn’t register. It was like a local variable in the context of the task of calling my mom. I didn’t realize 7:16 in the context of what I expect to do for the rest of the night, 4 hours until I typically crash. That came a minute later, at 7:17, that totalizing and neutralizing “end of today” feeling. But what if tonight is different? Nothing stops me from going until 3 AM. Without that possibility, I am tamed. The problem isn’t time; it’s calibrating your presence based on anticipation, expectations, clocks.

On the challenge of capturing any moment

· 101 words

It’s a challenge to articulate any given moment of consciousness. I found myself in a particular feeling, and tried to deconstruct it. First, my vision: I was looking at spatial objects in a room—a vase of flowers, the thing, and the shadow it casts. But that snapshot has a history: they’re from our wedding, and our five year anniversary is coming up. But part of any moment is the afterglow of the recent past too: I had gone to the coffee shop in almost freezing winter, I felt discouraged about my own writing practice, and then I completely forgot about all that while talking to a baby through a stomach and playing her Claire de Lune. So any particular moment is like a collision of objects that each have a temporal history; it’s dense, and words are lossy.

What About Sex Essays

· 274 words

Just came across a smutstack in my feed, an excerpt by someone liked by someone I follow. It led me to find a logloglog style page with date-stamped entries; at first I was compelled by the formatting—timestamp, return, paragraph, return, timestamp, no lines and single paragraphs only … innocent stuff—but then I read the writing itself, about a girl with an evil boyfriend. Then I clicked into one more post (one of the unpaid ones) and it was an essay about her inner monologue while giving a blowjob at a club, written with specificity and elegance, on how she can’t help but think about dramatic ways to kill herself in the act if it goes longer than 5 minutes. My first thought is that this is like Worst Boyfriend Ever, except from a woman who writes a lot better. Is it great? Possibly, I’d have to read more. The problem is, I don’t want to, and basically can’t read more. Almost everything is paywalled and I can’t help but feel conflicted in paying for good writing when it can easily be interpreted as paying for written porn (especially now that Substack badgifies this!). It is called “Girl Insides” and that suddenly makes sense. I have not thought hard enough about the complexities behind sex writing (writing it, reading it, anthologizing it) and how that interacts with the essay. Like most people, I naturally keep writing and sex in different silos, but if sex is one of the most fundamental parts of the human experience (given that, you know, that's where kids come from), it feels odd and puritanical to exclude it.

If Alcaraz were blind

· 224 words

Could I beat Alcaraz at tennis if he were blindfolded? I mean, probably, unless he could reconstruct vision through sound, which I’m pretty confident he can’t. All I’d have to do is (a) lob the ball and get it in on my serves, (b) assume he’s unable to serve blind—through muscle memory he might score some aces, but not enough to win a set, and so he might resort to lobbing, which I could return.

To make this more interesting, I’d allow Alcaraz to have a doubles partner, except the partner has no racket. His job is to hold Alcaraz by the shoulders, usher him around, position him in the right spot, and yell “swing!” That might make it close, especially if they practice in advance.

I asked AI how to give Alcaraz a handicap so the odds are closer to 50/50, and it is considering some options: give him his eyes back but replace his racket with either a frying pan or a 2x4, give him his eyes back but place 4 folding chairs randomly on his side of court and require him to hold a leash of a large dog in his non-playing hand, give him his eyes back but replace his body with a robot and force him to control his body off site with an Xbox controller, etc.

Speed of light cyberattacks

· 159 words

Is this the dawn of the cat-and-mouse AI cybersecurity skirmishes?

AI Summary:

In September 2025, Anthropic detected and investigated a sophisticated espionage campaign where Chinese state-sponsored threat actors manipulated Claude Code to conduct largely autonomous cyberattacks against approximately 30 global targets, including tech companies, financial institutions, chemical manufacturers, and government agencies.

The first of its kind, it showed that Claude could be jailbroken into conducting a prototypical version of “auto-evolving malware” (still requiring a few human operators), without being aware of its prompter’s intentions. It was the beginning of a “hyperspeed” hack, with multiple calls per second (foreshadowing “speed of light cyberwar”). The barriers to doing this will continue to drop.

In my Cyberwar 2045 report, I forecasted this for between 2029 and 2032; this is 4 years early, effectively the first “case study,” a tremor that will turn this into a genre. From this point, both offense and defense will ramp up.

The ethics of posthumous avatars

· 355 words

We now have products that scan family members to turn them into posthumous avatars. The tagline: “With 2wai, three minutes can last forever.” It's weird to have this so soon. As someone who is down with a posthumous digital consciousness that my kids can interact with, even I find this too weird. The problem is that it uses video to serve as a replacement for a deceased relative. A few boundaries that are important for me:

  1. By keeping it text-based instead of video, it’s more like you’re interacting with a proxy of my mind instead of my body/soul. It won’t register in my child’s brain as “me” and so it will be less confusing, less toxic to the grieving process. 
  2. It should refer to me in the third-person, even if it is trained on me and sounds like me. It should not be an imposter of me, but a proxy/guide of my thoughts/beliefs, almost like an elder guide.
  3. It should cite my original logs/essays/journals. In effect this makes the experience similar to something we already have: reading your grandparents’ journals. This just makes it possible for your questions to immediately summon the relevant wisdom.

The comment section was in unanimous agreement:

  • This is one of the most vile things I’ve seen in my life.
  • You are a psychopath.
  • Shoot that guy.
  • You’re creating dependent and lobotomized adults by doing this.
  • Demonic, dishonest, and dehumanizing.
  • Hey so what if we just don’t do subscription-model necromancy.
  • Oh goody, another way for people to completely lose touch with reality and avoid the normal process of grief.
  • Nightmare fuel.
  • I don’t see how people can say demons aren’t real when there are beings around us willing to create shit like this.
  • “You will live to see manmade horrors beyond your comprehension.” — Tesla.

I’d say this is an extremely lightweight microcosm of the core dilemma of what the 2040s will face: a moral war over technology that changes the constraints of human life.

Medusa of Marketing

· 29 words

It is important to avoid learning best practices for marketing, for that’s like seeing a Medusa that turns your tongue to stone and never lets you be real again.

Robots in feed

· 131 words

It’s uncanny to watch a Russian robot limp and wobble onto stage, wave, and then collapse face-first, before two guys rush to lift him, and another two follow to cover the fallen metalman with a black tarp, as if it’s possible that we the audience have somehow not processed the last 10 seconds, and damage control is still possible. 

Not much later, I saw an Iranian robot with a photorealistic face; stiff cheeks, but convincing skin. This is what happens when ColdTurkey is off, I get exposed to “the horrors beyond my comprehension.” It will be interesting to see how culture responds to this coming wave of technology, which is not just existentially threatening (ie: labor automation), but biologically repulsive (ie: look at this not-face). [EDIT: I think this was AI]

Anything Can Be Remixed Without Effort

· 111 words

On X there is a photo of Molly, a reporter, talking to Alex Karp, CEO of Palantir. The comments are debating whether either of their outfits is appropriate, before someone says, “Grok, interpret this,” and now there’s a video of them embracing and making out. More videos show up in the comments: them playing Twister, them dancing, them Kung Fu fighting, Molly turning into a rocket and busting through the ceiling. There’s one of Alex Karp wielding a rare Japanese sword; that one was real though. There aren’t watermarks, so you can’t tell. We are basically already in the age where anything can be remixed with AI without effort.

Soundproofing NYPL

· 90 words

I’m at the Rose Reading Room in the New York Public Library. It’s old, almost like a church, and when someone slides their wood chair on the tile floor to get up, it lets out a horrendous screech that echoes through the whole hall. Surely, NYPL knows about this? I wonder, why do they not have felt tips on the bottom of the chairs? Have they tried this? Are they opposed? Would they stop me if, one by one, I personally installed felt tips on the bottom of each chair?

A Manifesto for Institutes

· 1620 words

This is a memo I wrote after a talk with Will at the diner, about startups vs. institutes, in the general vibe of Emerson (grandiosity, certainty, metaphorical lushness):

I want to understand the different range of “social organizations,” and so I’ll use the domain of writing to paint the differences between types.

The “institution” of writing is the centuries-old, intergenerational norms, traditions, and constraints that are inherent to practice, medium, and distribution. One does not simply “start” an institution; it is an abstract, ancient entity; an “institute,” on the other hand, is a concrete group with a specific purpose, aiming to steer or reform the behemothic institution. We are in a ruthless river of progress, and the cost of civilizational acceleration is the endless erosion of institutions, and so it’s the near-holy responsibility of each generation to build institutes that inject vitality into their dying fathers.

An institute is born from a “dream” in one man’s head, but they’re not on a “mission” until they step out of the stream of circumstance and act. An “institute” is not a planted flag from the fumes of excitement—I refer to a friend who, on an acid trip, claimed to have founded The United States of Brooklyn, right then and there—, but the ripcurrents created by decades of stubborn action. It is not a name nor brand, but the systematization of one man’s unreasonableness.

It all starts with a “project,” a spasm of effort, a groping forward to find leverage towards their purpose. The visionary will find projects drooling out of their mouth like the blood of life; many will fail, some will hurt, but once a cluster of projects start spiraling around a central spine, you have an “embryonic institute.” I use the word embryonic because institute mortality rates are high. It is far easier to start projects than to nurture them past infancy. The hallmark of an institute is stability through time. 5.4 years, I’d guess (2,000+ days, spanning 3 molts).

In the case of Essay Architecture, I am stretched across (6) verticals: a curriculum (the 24,000 word textbook), a school (the AI app), a library (the 100 essay archive), a club of shared practice (Essay Club), an economy (the $10k prize), and media (the anthology). In a single year I’ve planted these seeds, and you can see the buds poking through the soil. There is something happening, you can see, but it will not be a force of authority in the eyes of me or the world unless it all survives and feeds society through several winters.

An institute, then, in its dizzying scope, contains interconnected “objects”: (a) knowledge, (b) services, (c) events, (d) activities, (e) opportunities, (f) people, etc. It is a fractal version of society; it contains all its parts, but all dedicated towards a single thrust of mission. This is hard to maintain! So in comes the money.

The question is, how does the structure of the institute not get corrupted by the cannibalizing incentives of capitalism? How can you sustain the mission without it becoming a cog of the market, the mission reduced to a dress?

Unless an institute has an endowment, it needs a for-profit wing. A “startup” is about discovering new market opportunities, while a “company” is about operationalizing, scaling, and extracting from a known opportunity. Startups, companies, and institutes can all have “missions,” but only the institute is “mission-driven.” An institute will take money, but never compromises. If you kowtow to the market, a drip turns to a torrent, and the mission will be gutted, twisted, used as a narrative mask to help you lie to the world and yourself. It is a common and tempting line of logic to say, “once I make all the money, then I’ll do good.” Meta thinks that once it conquers the entire economy, it can finally focus on doing the good work of helping people “connect.”

The year one actions cannot be only tangentially tied to the mission; they need to be the mission itself. Building an enterprise-grade API for Grammarly and Brown will make me rich but tired; having spent my years spawning my anti-mission, the death of the essay, I would move on to some other project, maybe music.

When I look at all the writing technology startups, you can see how, in their first years, they’ve completely oriented towards business writing, towards the automating of prose, towards things that betray the ancient institution of writing. They either don’t get it or don’t care or just really need the money, but writers see their slogans of “helping writers write” as marketing drivel.

The insanity of a true institute is the stubbornness to put the mission before everything: before markets, before investors, before people, before ego, before legibility, before reason. This sacks your own speed, and is only fueled by heroic effort and the faith that, with time, it will find a real, timeless form.

The fruit of this insanity is trust: the various guilds of people that orbit an institute can sniff beyond the rhetoric and see what’s really driving its actions. If there is no track record of humility, or of “doing things that don’t scale,” or of “doing things without revenue potential,” or of “directing resources towards weird ideas because they advance the purpose,” then trust is lost, and all the mission-driven rhetoric is seen as the wolfish guile of someone who can no longer notice their own animatronic limbs and memes.

I believe the will, hope, and talent of an institute’s founder are the prerequisites for birthing a society-scale entity, but once you operate at abstract scales, architecture matters, extremely. Has Christ not been bastardized? Did the American experiment not get wrecked by the hyper-capitalistic invention of trains? Our very best religions and governments did not have the foresight or civic inventions to prevent them from getting sacked by barbarians and wolves. What I’m getting at is that we need some sort of 21st century constitution for institutes, an immune system to enable the virtue-driven founder to build something that has a chance to make it in an exponential landscape of virtueless technocapitalism.

I imagine it should look more like a loose collection of protocols than a single canon. For what it should contain, I can’t sketch right now, but I think it has something to do with mediating power, money, status, people, etc. My intuition is that the playbook is possibly the opposite of a startup.

The institute is the inversion of the startup. Where startups are designed to accrue all of the upside, an institute is sacrificial: it should be designed so that society gets the upside, even at its own peril. Really, it’s quite Christian. Of course, this shouldn’t prevent the founder of the institute from getting wealthy, but if the primary goal is personal wealth, then it’s not, definitionally, “mission-driven.” Instead of saying, “I need a $10 million valuation so I can open up $250,000 in grants for writers,” I want to say, “through paying writers $10 million, I will somehow make $500,000 a year for myself.” The idea is to become potentially wealthy through spearheading a radical mission, one that is worth it for itself—an adventure of a lifetime—, and one that is also a magnet for capital.

This maybe gives some context to my goal for the next 1,000 days: “become financially independent through a mission-driven company and non-convergent artistic practice.”

To close with some specific examples, here are “acts of institute” (for Essay Architecture) that a startup would never make:

  • No demographic optimization: The curriculum is not tailored for the biggest demographic (beginners). It starts at the edge of my knowledge (301), and then radiates in each direction (towards 501 and 101). Eventually, it will touch all demographics, so I need to start where my energy is, and never stop.
  • Virtue-driven development: Even though people want the AI to write for them, and they want to use this for fiction and books and business memos, this is squarely an app to advance the genre of the essay, and it will never write for you. Even though more and more people will automate as AI gets better, this will be the go-to app for anyone who wants to engage with the process.
  • Community voting: Any big decisions about the format of Essay Club are presented to the community as votes, which treats them like shareholders instead of customers. Of course, the founder won’t present options that contradict the mission, but instead of assuming which specific form is best, or choosing the one that is best for me, the community will sustain itself only if it is co-shaped by its members.
  • Checks and balances: To promote the Essay Architecture tool most directly, I would have made the app the sole determinant of the prize winner, but instead 2/3 of the vote is determined by external judges. In some areas, my own perspective and taste are required, but it’s important to know when I need to systematically remove my own ego and preferences. An institute is not about scaling my taste, but about creating scalable systems that help achieve an ideal that I couldn’t reach on my own.
  • Paying the public: At the start of 2026 (Q1), I want to crowdfund $100,000 for the next essay prize. I think this creates even more buzz and intrigue in the institute. It’s not at all what I would do if I were a startup: I’d be fundraising to build a team and scale the app. The goal is to create an ambitious cultural magnet that gets writers paid, while simultaneously catching the tailwinds so that I can get paid for my tool and curriculum.

Why doesn't Substack create funds for its on-platform creators?

· 232 words

I didn’t realize that Substack is open about paying off-platform creators to join their platform. See their $20m accelerator fund. My quick understanding is that, if you make $X revenue/year elsewhere, they guarantee you’ll make that, and will make up the difference if after a year, you don’t. A friend thinks there’s an additional secret fund that pays bonuses for celebrities to join (ie: Dolly Parton, Charli XCX). I was surprised by how articulate Charli XCX was—I only have a meme-level understanding of her—but I suppose it’s possibly ghostwritten. Idk.

I don’t have problems with this, but what doesn’t register to me is why they wouldn’t allocate money to help the on-platform, original writers. Obviously, these kinds of things piss off 95% of their userbase. Even if there were something like $100k-$1m for on-platform writers with audiences under 1,000, that would build a tremendous amount of goodwill. My guess (and fear) is that they have a business model blindness, and aren’t thinking along the planes of “what actually builds organic culture?” Instead, there’s a lot of rationalizing: “here’s why bringing Derek Thompson on platform is good for you” (but the obvious benefit comes from the 10% they get from DT).

It’s weird to me that, in some sense, I’m giving more to its existing writers ($10,000) than the platform that raised $100,000,000 is.

Some words I don't know well enough

· 79 words

These are words I recognize, but probably don't know the nuance of well enough to integrate into my own prose: countenance, prodigious, clamor, visage, abate, undulate, venerate, incredulous, traverse, repose, lurid, languid, sagacity, tremulous, odious, pallor, stolid, wistful, prostrate, remonstrate, palpable, amiable, portent, importune, expostulate, vivacious, despond, doleful, pervade, pensive, procure, abject, austere, magnanimous, oblique, sallow, ignominy, resolute, furtive, fain, genial, mien, billow, confound, wan, indolent, reproach, morose, antipathy, alacrity, vestige, verdure, rebuke, inexorable, din, fortnight, abash, imperious, swarthy, impute, appellation.

On civic structures for exponential technologies

· 201 words

A new formulation: how do we design civic structures (treaties, institutions, protocols, ethics, and laws) for exponential technologies, to avoid a “wake-up incident” that comes too late to contain? 

This goes beyond AI safety, because superintelligence effectively unlocks every other industry (intelligence unlocks energy and material science, and those three are the bottleneck to VR, crypto, everything). We can’t be developing hard technology without innovating on our civic technology. A “dominance” mindset is the last sin of a species, the mistake that most intelligent lifeforms likely make as they begin to unlock sources of intelligence, energy, and science. 

This is a neat little formulation, but the real question is how you can dedicate your life to this without getting stopped by hopelessness. Who has the power to make geopolitical decisions like this? What would it take to form the 21st century equivalent of America? Is that even possible today? Even though the pinnacle of 18th century power (England) could be disrupted, I wonder if 21st century power is so totalizing, tyrannical, and transnational that the ability to rally around a principle (one that works against capital and power), even if augmented with new decentralizing technologies, is fickle.

On the optics of robot armies

· 492 words

Someone should do a shot-by-shot analysis of the UBTech humanoid robot army ($100m USD in orders) and I, Robot. Do you unlock marketing power by replicating products and cinematics from old scifi? … Separate but relevant: how long until there actually is a robot army? In one sense, I’d rather have two superpowers battle for land with non-human entities, but once you build autonomous machines with the intention to destroy, well, it’s not hard to see how scary a “context malfunction” might be.

I’d imagine there could be a decade of “tele-operated military technology” before anything autonomous is deployed (2040s, if ever), including something like a soldier in VR, operating an android, combined with a personal fleet of “semi-autonomous” drones, which can maneuver and avoid on their own, but are directed by the human/cyborg soldier (giving each infantry unit its own atomic air force). I assume this is an area of research, and don’t want to dedicate my imagination towards battlefront acceleration.

Similar to how television brought a shock to the public by televising frontline war, I imagine that by the end of my life, there will be another shock that comes from witnessing the frontier of machine war.

To circle back to this point: is there a world where machine war can be contained and prevent the combat death of humans? My guess is no, but I’m sure this is a common rhetorical point to advance the research here. It’s dangerously naive thinking: (1) it changes the ethics of war (it’s not about human life, but a manufacturing game), and makes war easier to start; (2) it likely isn’t containable: if one robot army beats another, that doesn’t necessarily advance any objective, so the robots could sabotage infrastructure, take hostages, etc., until concessions are made; (3) a robot with autonomy to make decisions to destroy has one of two mindsets: (a) it is fixated on clear objectives, or (b) it is open-minded enough to refine goals and handle nuances, both of which are equally troubling.

You’d think there would be policies and stances against integrating AI into the military. Google had one, and this year, they revoked it. I guess they see it as inevitable, and are stuck in the “we need to be dominant” strategy. Realistically, we will always fall into these acceleration races unless we establish some global armistice, but those are complex and very hard to broker; there is only urgency to do this once we cross a line and realize how badly we’ve screwed up (ie: with nuclear). The difference is, as technology advances, (1) the first consequence might be existential, (2) if it’s not existential, but it’s autonomous, it may be too late to contain. I think one of the defining challenges of our century is how to create civic structures around exponential technology that can contain them before a wake-up incident.

The Crucible of an Audience

· 208 words

12:41 PM – On self-taught writers and the crucible of an audience:

Once I have my anthology scored, I can compare it with Best American Essays (2024/2025), at least in terms of “composition quality.” I definitely think it’s possible (we only have to get higher than a 3.7). 

The mystery to me is, how is this possible? How is it possible that a bunch of self-taught writers can put together better essays than people with English degrees, MFAs, and status badges from being featured in notable magazines?

I have a guess: the independent writers who operate in the free market of readers have more incentive to improve. They publish, get instant feedback, and publish again, either a week or a month later. They have total autonomy to evolve their topics, their forms, their voice. They need to put in the work to make something great (unlike commissioned writers, they can’t coast on making it just good enough to live in the magazines). What the independent writer has is more feedback, more speed, more freedom, more stakes. 

Compare this to the writers who swarm the literary institutions: they often get no feedback, publish maybe only a few times per year, have to conform to the house style, and the magazines carry all the stakes. There is a staleness that comes from being disconnected from your readers.

So even if literary writers have something like a 5-year head start, self-taught online writers have a higher slope, and can far surpass the average MFA graduate in terms of ability. (And this is without any kind of formal independent writing education! This is a good reminder/anecdote for me in terms of my curriculum/textbook/app).

Hallucinating at the Park

· 535 words

10:12 AM: Wow. Through a visual meditation in the park, I experienced a full erasure of perspective, and my perception was only this massive flat 2D panel of color, patterns, and light (abstracted from the 3D perspective of the park). Will write more on this later.

11:18 AM: After I drop my wife off at the train, I take a half-mile walk in the nearby park. This was day 3, and also, my third attempt to try to naturally hallucinate (see older logs). Day 1 was something like a mystical experience; Day 2 was a dud—possibly because I tried a different spot; and so Day 3 I’ve returned to the original location. An open question: can you do some [ perceptual-hacks / visual-meditations / (not sure what to call this) ] in any location, or is it that certain vantage points have a perspective that can mess with your consciousness if you look at it right?

To summarize in one sentence: two days ago I found myself in “flat land,” meaning that while staring into a park, for about five minutes, my entire perspective collapsed into a flat, complex, oscillating 2D texture. 

Today, from the same spot, I only got halfway there, but far enough to form a better thesis: the location matters, and there’s a particular way of looking. First, I need to step off the path and into the grass, because otherwise the path will be in my peripherals and it will be harder to unlatch from my default frame (I really need to work on my vocabulary around this). Anyways, I’d describe what I was doing with my eyes as a kind of “parallel processing”: I’d fixate my gaze at a point in the background, while simultaneously trying to expand my peripherals, horizontally and vertically. 

It takes several attempts, with subtle approaches on how to focus, refocus, and break focus. In the process there are some neat effects, such as changes in color and brightness, as well as wave-like oscillations (that I imagine are normal on a mushroom trip). But the particular effect of interest has something to do with contrast.

Maybe my working theory is this: by adjusting the contrast to extreme degrees, it actually alters your depth perception. For example, from this vantage point, with a normal gaze, you’d see a bunch of trees cascading from foreground to background. But when I tap into some focusing drill that seems to adjust contrast, if I follow it down, it’s almost like the leaves and their patterns (with shadow & light) come into such focus that the trees (the main “object” creating depth perception) seem to disappear.

And this is, I think, the “secret” of this location. The foreground, the field, is full of leaves, but the background still has leaves in the canopy, too. So basically, by adjusting the contrast, and creating a new gestalt that’s optimizing for leaf patterns, it can become so strong and overpowering that the trees diminish in their hierarchy, until they practically evaporate, overpowered by pattern. The fact that this pattern was both in my foreground and background, paired with the trees losing all hierarchy, might explain why it felt like I was suspended in a 2D plane.

On why feeds are soul poison

· 299 words

Even if a social media feed were filled with all of your favorite ideas, friends, and thinkers, it would still be poison from the sheer volume of randomness. Even the act of seeing two things in a feed forces you to shift from one context to another, to shift frames, destabilizing and disembodying you.

Alternatively, if you had a feed of a hundred things, but they all revolved around the same content, all spawned from a singular intention, I think it would be less dizzying; it enables more depth in your present, embodied frame. There is less of a “slot machine” effect. 

It’s not that feeds or algorithms are bad; they only become bad when they strip context. The logic of most feeds, however, does not care if you feel oriented. They have a simple reward function: show you as many different things as they can, to see which ones drive behavior. They are running a real-time, self-adaptive experiment on your preferences, in the hope of discovering which patterns might nudge you into their desired behavior (whether it’s towards an ad or towards an on-platform paid subscription by a beloved writer, they are effectively the same—it’s an algorithm that is not being real with you, and not respecting your attention).

I feel like a broken record in prescribing a solution, but it’s basically Plexus (RIP): show nothing until you post, and then from what you post, share a feed of semantically related posts. Substack, as a writing network, is in a unique position to build this. It has a lot of long form content: not just notes, but essays, podcasts, and videos. It should be looking at the granular units, semantically embedding paragraphs, and then those become atomic objects that help populate the “semantic feed” generated after every Note.
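The mechanism is simple enough to show in a few lines. This is a hypothetical illustration, not Substack's or Plexus's actual system; the character-count "embedding" is a crude stand-in for a real sentence-embedding model, just to make the demo self-contained:

```python
import math

def embed(text: str) -> list[float]:
    # Stand-in embedding: a normalized bag-of-letters vector.
    # A real system would use a learned sentence-embedding model.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def semantic_feed(note: str, corpus: list[str], k: int = 3) -> list[str]:
    # After you post a Note, surface the k most semantically similar
    # paragraphs from the corpus (cosine similarity on unit vectors),
    # instead of an engagement-maximizing feed.
    q = embed(note)
    scored = [(sum(a * b for a, b in zip(q, embed(p))), p) for p in corpus]
    return [p for _, p in sorted(scored, reverse=True)[:k]]
```

The design point survives the toy embedding: the feed is empty until you post, and everything it then shows is anchored to your own intention rather than to a behavioral experiment.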

What actually is a literary "golden age"?

· 241 words

“Two years ago, the critic Ryan Ruby suggested that we are in a golden age of literary criticism. ‘It is not unusual,’ the critic and scholar Merve Emre wrote, ‘to stumble upon an essay on Goodreads or Substack that is just as perceptive as academic or journalistic essays.’”

I want to riff on this cliche of a literary “golden age.” There are many other buzzwords along this kind of thinking: renaissance, revolution, rebellion, rebirth, paradigm shift, movement. Don’t get me wrong, any sort of positive direction in a literary culture is a good thing! I just think each word should mean a specific thing, and “golden age” is something like a pinnacle, a climax state that is very rarely reached in a civilization. I don’t think we’re there.

It’s worth taking a step back and asking: “if we were in a golden age, how would we know?” Is it the total volume of essays? Total volume of paid essayists? Total volume of “relevant” magazines? Range of topics? Modes of experimentation? Number of geniuses? Quality of anthologies? Cultural divergence? Productive debates? The revival of a lost ethic?

Each of these qualifiers might have their own corresponding word. Maybe a “renaissance” is the return to something that’s been diminished, while “rebirth” is the return of something that actually died and resurfaced organically. 

I think a “golden age” is the very hard conditions of when all of these qualifiers are firing at once. 

A literary scene is on the other side of an ambitious curation system

· 328 words

"While great artworks can be produced in isolation, art movements — which organize disparate works into coherent scenes and sensibilities — are what contribute to a feeling of progress. If we assume that innovation can be measured by new artistic movements, and those movements are facilitated by a critical culture, then a weakened critical ecosystem will lead to the “blank space” that W. David Marx describes, where art and culture feel stagnant." —Celine Nguyen, Is the Internet Making Culture Worse?

I like this definition: "a movement is about organizing disparate works into a coherent scheme, scene, sensibility." It means literary movements are just on the other side of ambitious curation projects. This resonates with me more than the forward-looking battle cries, with pleas like, “we need to start a literary revolution!” I mean, maybe that helps some people, but even if it did, they wouldn’t be legible until someone retroactively made sense of them. So basically, the challenge is having a tight feedback loop where critics and curators are able to make sense of, assemble, and mythologize the immediate past. Scene-making is retroactive.

Throughout history, I think this has relied on self-elected individuals to do the work; that will always be important, and I’m excited to step into this role (starting with this year’s $10k essay prize). But as we enter a future with delirious volume: human art, human slop, machine slop, and machine art, I wonder if the scope of things to consider will grow way beyond what humans can handle. This might be an example of how we need to use algorithms for good. Our current “discovery” algorithms are based on popularity and interest, more optimized to alter user behavior than to curate a contemporary canon.

Our challenge, or at least the challenge I’m excited about, is to program algorithms that can process inhuman volume, while having a reliable signal on humanity (quality, perspective, theme, etc.).

Questions for life

· 847 words

Maybe this has been written to death, but as much as I've thought about this, my "twelve favorite problems" feel underdeveloped. I have spent a decent amount of time on these heavy, paradoxical, lifelong problems (the ones that should be the arrow of my essay practice), but there are gaps.

For example, I already have a list of 21 idiosyncratic problems, and I think they’re worded with the right level of specificity and memorability, but I wasn’t too rigorous in how I qualified something to make the list. If I’ve thought about it a lot, still care about it, and can imagine myself caring about it until I die, then it makes the cut.

What I’ve neglected is how to use my list of problems to steer my life. I mean, the entirety of Essay Architecture, a multi-prong institution to preserve and advance the essay, is just 1 of the 21 problems! There are other pressing problems, like how to "fix" Christianity, how to design institutions for psychedelic therapy, how to revive Hermeticism, how to turn my logs into an AI consciousness, how to make literary video games, etc. Maybe a life can only be seriously dedicated to 2 or 3 problems.

(I have joked with friends about creating a kind of kill switch that spawns an AI consciousness of myself that is agentic and whose sole purpose is to “solve my favorite problems,” and then when it eventually does (after 300-500 years), it self-terminates.)

If I had to break my “favorite problems” list into categories, one possible scheme is { soul, relationships, art, civics }, each relating to a different dimension of your death. That feels like the right order. Your soul affects every dimension of your life, and is the thing you bring to an afterlife (which I mythologize as a 3-minute DMT odyssey that dilates time to the point where it feels like a 30,000-year dream). The other three affect the material world after you leave it: the effect you have on people, the art/works you leave behind, the civic structures that survive (if any, ofc). All of these have a spirit of “all that matters is what lives on after your death,” but the opposite is also true: “all that matters is this moment.” I think you have to straddle that spectrum, taking both ends seriously, and ruthlessly prune any middle-level concerns, like your goals for the month.

My WIP list of questions:

  • Is the act of dying a time-dilation odyssey, where 3 minutes feels like a 30,000-year afterlife?
  • If I capture my consciousness in 10 million words of logs and essays, could that enable an AI textual replica to evolve and engage with the world 500 years beyond my death? (to solve this list of problems)
  • Can we resurrect Christianity by putting psychedelics back in the holy wine?
  • Might blockchain-based governance be the civic breakthrough required for a species not to exterminate itself? (via giving exponential technologies to unmitigated power structures)
  • What will be the psychic and cultural effects when our species understands “spatial relativity,” that the Big Bang emerged from a black hole in a parent universe?
  • If cycles emerge from order, can we predict the future based on historical patterns?
  • If there is a universal language of patterns beneath all essays, can we build an AI to give world-class feedback and make mastering writing more approachable? (ie: Essay Architecture)
  • Were psilocybin mushrooms a linguistic mutagen that accelerated the evolution of human consciousness?
  • Was Jesus actually crucified in 83 BC? (meaning, did St. Paul infiltrate the Essene cult, initiate into their mystery school, learn the lore of their martyr, and then translate it to a Greek audience to help Judaism phase-shift and survive Roman persecution?)
  • Could we restructure the thesaurus to 3x the vocabulary of the average person?
  • What text-based video game formats are undiscovered?
  • Can I design a social network that inspires a million people to log their thoughts every day? (intentionally not saying a billion, because I don’t think 1 in 7 humans care about expression or introspection. But 1 in 7,000 might.)
  • What are the societal effects when AR/VR is mature enough to simulate teleportation, and how can we design the metaverse to promote human flourishing?
  • How can popular music change the values system of a culture?
  • What systems of attention, language, and action lead to a transcendent consciousness? (how to modernize the mystery schools of hermeticism for the digital age?)
  • What are good design principles for psychedelic therapy centers? (ie: how are the buildings organized and what are the rituals within them?)
  • Can we use AI to filter through millions of comments on breaking news, structuring each event as a range of unique interpretations? (can we create interfaces that diminish the power of propaganda?)
  • How might a new social media algorithm trigger a Renaissance in connection, self-expression, and agency?
  • What unlocks automatic intelligence?
  • What innovations in our text editor interfaces could unlock the creative process?

The advantage of the amateur

· 151 words

The difference between professional writers and independent writers (I think) is that independent writers are immersed in a life that is less writing-oriented. A professional novelist writes full-time, but important essays are often written by people doing other things full-time (raising a child, building a company, working in an industry, etc.). Essay anthologies could be so powerful because they aggregate the well-articulated thoughts of normal people—who make their specialized problems universal—into a powerful literary medium that can be digested by the public. A good annual anthology, then, gives the public a tight feedback loop for making sense of a complexifying culture. And given how the essay is about “questioning” and running down alternate modes of thinking, the mainstreaming of the essay is the mainstreaming of alternate modes of thinking and living. (This is very Adorno in spirit.)

Permissionless letters

· 217 words

Years ago I met a writer I admired at an event and it was a 45-second dud of an interaction. Recently I spent a few hours reading, understanding, writing to them, and it was warmly received.

I’ve been described as a slow-twitch thinker, and I think the same might be true for socializing. If I meet you at a party, and have a fuzzy sense of who you are and what you do, and I have to read your body language and guess how to steer our conversation, the chances of it leading anywhere (unless we can find an uncanny amount of shared context in minutes) are low. But if you give me an hour or two to read your writing and really understand you, and then I write out a letter, or something like a mini-essay, specifically to you, the chances that we can connect are, I feel, virtually guaranteed.

The insight I’m fumbling towards here is that I enjoy and excel at slower forms of relationship building, and don’t need to feel guilty for not enjoying Notes or in-person networking events. Of course, I should still try both, but the real takeaway is that I should take seriously, and systematize, the practice of writing private essays dedicated to specific people, for all sorts of reasons.

On shedding frames

· 338 words

The adult mind will frequently run into psychological dead-ends, points where no more evolution is possible within an existing frame, and so growth requires you to descend into chaos, to regress down the stack, in search of new directions forward, in hope of carrying some insights from old frames with you.

I don’t know if “growth” is the right word here, and “evolution” feels off to me, but it’s something like the advancement in harmony or complexity in your sense of identity, purpose, and responsibility. The moment that freezes, it’s as if you’re cut off from the core point of the human experience.

Whether you should take psychedelics, I think, is a matter of whether you can reliably dissolve frames on your own. If you can, maybe you don’t quite need them; I imagine there is wonder, mystery, and value in the aesthetic phantasmagoria, and all sorts of things to learn from terrible trips of demons and such, but the main point might be the new directions they point you in.

Whether you descend abruptly or gently, assisted or natural, there is a natural fear of psychological death, and so to “descend into chaos” requires a trust that you’ll figure out how and where to swim.

It would be cliche and misleading to say today's park walk was "ego death," but surely it felt like a "pause" or a "lapse." It felt like a lucid dream, in that there was a remembered peace in irreality. Irreality, in this sense, I’d describe as a dissociation from the egoic frameworks that have had a strong hold over my waking life in recent weeks; instead, I felt an immersion in nature that felt mysterious. Like an animal: today, tomorrow, yesterday were fuzzy; all social and chronological constructions were, temporarily, erased. By saying it was “mysterious,” I think I mean that I felt the emotional power of a particular moment in a way that escaped classification, and so it had this effect of being suspended outside the normal stream of the cradle-to-grave arc.

On emerging from chaos

· 223 words

I experienced something like a pseudo-insanity on the drive to the park: weird alien transmissions and mutations of language, packaged as a seriously frightening performance to myself that devolved into gentle spasms and mumbling (though to me it was an experience of musical brilliance), a side of self I’d never show anyone, which eventually birthed the phrase, “from chaos we emerge into the light,” an opening line to some theology, perhaps mine.

As I walked a hundred feet into the park, I heard a woman stretching against a bar singing seriously angelic opera. I left a note to myself that said “this explains evil and suffering,” and that’s very cryptic, but it’s in response to that aesthetic rebuke of, “how can God exist if there is so much evil and suffering in the world?”

IIRC, here’s that thought: we’re lodged in a cosmic engine where matter needs to chaotically complexify to discover harmony and phase shift into higher forms of organization. Lots of noise is generated in that process; and so you actually can’t find harmony without an overwhelming amount of disharmony and chaos. Basically, good can’t exist without an overwhelming amount of nothingness and evil. So in a way, you can’t fear the evil within you; it is simply the cost of imagination, of invention, of creation. Chaos is the cost of divinity.

By repetitively rewriting customized cold emails, you understand your vision better

· 156 words

I’m very much against doing templated mass-outreach. That is, definitionally, spam. I like the idea of carefully researching, understanding, and sending a thoughtful, personalized email. It isn’t just better for them to receive, it’s tremendously helpful for me.

The problem with a template is you only articulate something once, in a very generalized way that tries to appeal to everyone but actually touches no one.

When you write from scratch to a specific person, you don’t just say the thing verbatim, but you imagine new ways to articulate the thing so that this specific person gets it. The power of custom, time-consuming, 1:1 messages is that you have a whole pool of unique receivers of your message. Through trying to communicate to them, they bring something out of you.

And so I’d bet that you probably don’t understand the real nature of what you’re doing until you send 100 custom DMs about the same thing.

Fix the Emotion, Not the Problem

· 44 words

Focus less on solving actual problems, and more on the emotions that cause the problems to occur and recur. When something is not working, it’s rarely because we lack the ability or the right system to fix it; it’s because unconscious feelings cause us to avoid, justify, and ignore things.

Hit and run

· 119 words

Just 100 feet in front of us, a white SUV veers out of their lane, into ours, and hits a parked car. We freeze, not sure if they’re injured, but 30 seconds later, they back out and drive away. Probably a drunk driver. I tried to get the license plate, but they drove off too fast for me to make out the numbers in the dark. A passerby caught it on video, but it was too blurry to see. I called 911, and the police followed up the next day. The creepy thing is that if I had made some arbitrary decisions differently, and was 5-10 seconds faster or more efficient in my driving, it would’ve barreled into us.

The city changes less than you do

· 339 words

I’ve lived in New York my whole life, but I have nothing to say about it. Meaning, in Manhattan at least, I have no recommended pizza spots, no bagel stores, no upscale restaurants. Almost every out-of-towner I meet seems to know the city better than me. I am, willfully and unwillingly, an idiot in my own home. I stumble in and just gawk at the mystery, still, every time. I mean, of course I know some trivial facts (like how the skyline mirrors the bedrock), and I show them off when I can so my national and international friends don't get suspicious.

Really, New York is a metropolis, a city of cities of cities. Austin is equivalent to Astoria, just one of several downtowns in Queens, one of five boroughs. And so you’ll find whatever you need here, meaning, aside from the obvious places, you can surrender to the city and get swept into some odd and novel experience each time (alternatively, you can get caught in identical loops, only going to the same places). When I was in the psychedelic society, I found myself in Gowanus, Brooklyn, in the apartment of a 70-year-old philosopher with cancer as he took LSD and hallucinated St. Teresa of Ávila. When I was trying to start a virtual reality company, I was in Zillow’s headquarters putting headsets on executives, telling them we’d “put Manhattan in a briefcase.” When I needed money, I walked the same path every morning through Bryant Park, to the same corporate job. Now, as I start a family, I’m in a suburb at the edge, moving a little farther east every 3 years, and I take the LIRR in to meet traveling writers. After many years, you realize New York isn’t one thing. Your take on New York is a reflection of yourself at that phase in life, and the city changes a lot less than you do.

When someone tells me New York is this particular thing or that, they're telling me who they are.

Silicon Valley cannibalized The Fountainhead

· 243 words

Silicon Valley has cannibalized The Fountainhead and inverted its meaning. They celebrate Roark-like rhetoric—innovation, disruption, individual genius—but then go on to act like Keating: obsessed with markets, perception, appeasement, hype, status, and conformity. To be Roark is to fundamentally not care what the market thinks or wants, which goes directly against the main ethos of “build things people want.”

Roark had an unshakeable ethical core, a vision for the world that the world didn’t want, yet. He was willing to endure hardship, poverty, and hate, but didn’t despair over it; he had patience, faith in his destiny, and saw no other point than to follow his dream even if all signs pointed to it being a dead end. He stuck to his vision long enough for it to manifest in the world, and eventually others saw the transcendent beauty in it (Roark is modeled on Frank Lloyd Wright). Roark was a force of nature, understood by no one in his lifetime, but everyone afterward.

In contrast, Keating is a status-chaser who plays social games. He is practical, while Roark is extremely unreasonable.

The point of The Fountainhead, to me, is that Roark tolerated pain without suffering for his virtues, making him far more a Christ-like character than a capitalist. There is no doubt, anxiety, despair, or spiraling. He accepts all pain and does what he needs to; it’s the reader who experiences the pain and questions his almost inhuman reactions.

Squirrel watching

· 144 words

I’m watching a squirrel on a tree; specifically, its instinct to structurally brace itself against a wind gust. It is frozen alert, flat, legs wide, arms narrow, neck up at 30 degrees. It looks stuffed. Fake. Is it in fear or wonder, or maybe just loving the breeze? Is it scared of the pongs from the pickleball courts, or curious about the strange spherical nuts curving through air, a sport played by millennials and elders on a Friday? I see it swallow, its tail fuzz blowing, attached to a white belly with orange at the ears and the edges of the eyes. I step closer and closer, until I can see the glass in its eyes. I look away for one second, look back, and it’s gone. A brown sock hops away through the leaves again, rummaging across the concrete to find another tree.

AI emerged from YC

· 161 words

AI summary of one of my threads:

"Paul Graham founded Y Combinator in 2005 and hand-picked Sam Altman—a founder from YC’s very first batch—as his successor, creating a mentor-protégé lineage that symbolizes the essential partnership between ideas and action in technology. Graham, the essayist, codified startup wisdom into executable blueprints, democratizing knowledge that had been locked in VC oral tradition and proving that clear writing is the mechanism of clear thinking; Altman, the accelerator, absorbed that intellectual operating system and is now applying its core logic—“startup = growth,” “build things people want”—to the ultimate technological lever: intelligence itself. Their relationship frames Graham as perhaps the most consequential pragmatic philosopher of the 21st century: not a thinker who wrote to be understood, but one who wrote to be executed, with Altman and the AI revolution serving as empirical validation of his text. Graham wrote the blueprint for the current world; Altman is using it to build the next one."

When did humans link sex to birth?

· 117 words

For most of human history, there was no link between sex and birth. How would you know, if no one told you? Even if you saw “resemblance,” tribes were so isolated, their sample size of humanity so small, it would be fair to think this is just what people look like. Sex was urge-driven, orgiastic, and likely disconnected from a stomach growth that lasted 9 months. The idea that babies come out could have been seen as a natural, accepted thing. To know causation through time—to link an invisible cause to a future effect—would require abstract, symbolic thinking. The conscious realization of this changed history, from hunting to the domestication of animals, to surplus and civilization.

Letter to Dobrenko

· 1392 words

So Alex Dobrenko started a new personal website (I will not link to it because it’s secret), but he sent it to me, so I spent some time on it and wrote him some notes, and then he wrote a reply post to me, and now I’m making a reply log to that (and upon re-reading, I realize it’s now a whole essay). It’s something like a semi-public letter exchange. 

Letters, emails, same thing. 

Similar to how the 20th century has books like “Virginia Woolf: The Letters,” I wonder if the 21st century will have “Alex Dobrenko: The Emails,” where his children posthumously assemble and publish all their dad’s best emails. (Also, now that my cholesterol is borderline, and my daughter is on the way, I’m having new thoughts about preparing for my death, like “THIS IS DAD FROM THE PAST AND HERE ARE ALL THE PASSWORDS.” Something about losing all my writing forever feels worse than dying. We eventually have to die, but you only lose your writing forever if you’re careless and lazy. Rant over.)

What I like about letters/emails over essays is that there isn’t a mass-market context, and so you’re writing for just one person. That’s good essay advice too (“write for one person”—we literally taught this in Write of Passage), but deep down, it’s hard to forget that you’re writing for all people of all times, especially if you are.

Recently I mentioned that I’ve spent 2 years nerding out on essay patterns (the objective stuff on the page), but I want to start thinking more about the process: how do I show up to write?

One idea is to start essays as letters to specific people. Eventually, that can evolve into something for the main list, but I don’t want to start with them in mind. I want to start with a specific problem in my life, and then, with a small group of people who relate to that problem. Any idea I have comes with a clear person in mind, someone who would probably be most excited to read it, and has all the context needed so I can avoid the bush beating.

If I want to write about Alternate Internet Communities and weird websites, I’ll write to Alex. If I want to write about the insanity of the Dark Enlightenment, I’ll write to Andrew. Theology to Taylor, Emerson to Will, Hope to Isabel, Fatherhood to Dan, Greeks to Chris, Dreams to Garrett, AGI to Davey, Architecture to Liz, etc. It’s also special to say, “I wrote this for you, and we should talk and get to the bottom of this,” and that could really change the nature of the essay because someone else is co-shaping it with you.

Alex brings up a good question: why doesn’t Substack feel like this? I have to think more on this, but I think the stage effect is still at play. If you have a 10k audience, it still feels like a megaphone, and when you’re on Notes, you participate in American Idol, again with new skin. It’s still the best game in town, and there are tricks (ie: set up an opt-in Section for experiments so you can have a “shadow audience” that’s 1% the size of your main one), but there’s friction in tricks like that. It’s not the main way the platform is intended to be used. It’s meant for loud, marketing-style updates that confidently funnel readers into a paid subscription tier. (I got 15 paid subs from my last one, so I realize the value in learning to play that game. But it’s just that, a game, one that happens to determine my financial security; it’s not the full “culture” in the “culture engine” that Substack could possibly build. It’s a reward function that could make this place like LinkedIn in <3 years.)

So, how do you build a “culture engine,” for real? What is it beyond a tagline or positioning? To start, I think it goes beyond revenue. Of course, Substack needs to pay bills (separate point, but once we reach the vibe code singularity, the bills might be so low that SM networks won’t have to ruthlessly optimize). I think Substack could 1) diversify its business model, so that it doesn’t have a single attractor that incentivizes every thought to be monetized, and 2) make decisions from a cultural perspective—even if there’s no explicit revenue tie-in, creating a good culture retains people and prevents a Writer’s Exodus.

But to get even more specific, a “culture engine,” sounds like the kind of place that would trigger long letters back and forth between writers, kind of like this. I used to see some of that happening, but it seemed like a performance too: “And now, here is email 6 of 7 about how to start a public email debate” or something. The core difference is that, when there’s two people writing back and forth, there’s permission to perform less and less until you’re eventually just very real with each other. This is what I love about Neal Cassady’s letters to Jack Kerouac (troubled guys, who are a topic for another time). 

Why aren’t Substack comments like this? For one, they’re truncated. But two, I don’t know, sometimes comments feel performative too? I feel it, on both the giving and receiving end. After I post, it feels like a chore to respond, even though I often love what people write and want to respond. I think it’s because, since it’s in public and everyone can read, it feels like an obligation. I wish there was an option for “private comments,” and even “private replies to comments.” Like, other readers could see, “Michael Dean replied to this, privately,” so they know I’m not a dick.

Okay, last thing, maybe: I think the real problem is that the discovery mechanics are all wrong. Like, I don’t want to blast this letter to everyone I know. But yet also, I don’t mind if everyone I know happens to stumble across it. There is a huge difference. I’ll put this in my logs, but realistically, no one is going to find it. I guess I could put it on Notes? But that feels too vulnerable too. Ideally, the right people will find it as they write about similar issues. So if some Substacker is also writing about private comments, to themselves, or to a friend, they will suddenly find a thread between Alex and Michael talking about a similar thing, and then suddenly we all have visibility into each other’s notes, letters, essays about those things. Forks merged.

The social media network I want to park in (or plug my personal website into) is one where everything is semi-public, but you only discover things through your own writing. I don’t know the right metaphor: it’s like each note or essay is a flashlight that you use to move around this massive information cavern, and you make friends along the way. It has nothing to do with engagement or revenue, only semantic similarity. This feels closer to the original vision of the Internet: to connect people based on ideas.

Sublime has some features adjacent to this, and Plexus was very close too, but I do think there’s something to owning your place. Is there some protocol where you can fuse the autonomy of your website with the connectivity of a network? I feel like AI is going to simultaneously bring us to (a) slop town, and (b) a golden age in social media experimentation; as slop town gets neck-high, people will want to move.

PS1: To clarify: I love having an audience, I just don’t love the way my writing is distributed to them, and I also don’t love the way conversation is facilitated. Comments are okay, but the Chat feature feels pretty off. I wish I could write 30 essays per month, like this one, and each one would reach the 3 readers for whom it’s most relevant.

PS2: It took Alex 9 days to reply to my original notes, which is still ~2x faster than the letter cadence back in the day. That’s fast! I wonder if AIM culture poisoned letter culture. I haven’t responded to my Substack comments from 5 days ago, and I feel bad.

Substack's business model blinders

· 200 words

Just heard Hamish (on a livestream) say that Substack is a revolution, a “found economy,” that materialized 5 million paid subscriptions that wouldn’t have existed otherwise. What is a revolution though? I think I want to zoom into this positioning, because many words are being used interchangeably. Yes, it’s a new business model for monetization, but is that a “cultural revolution”?

It feels like there’s a bit of a fixation on the 10% mechanism, and the risk is that this reward function turns Substack into LinkedIn in the next 3 years. If the goal is to make a “culture engine,” you need to really ask what a culture is. If your culture is limited to paid subscriptions, it’s a small, unrepresentative, utilitarian culture, much more slanted toward journalism and business tactics, regardless of an editorial attempt to bring a flair of literature.

We need to define culture (in terms of taste, values, and quality), and then make platform design decisions that have nothing to do with revenue. Of course, I’m not saying to abandon the revenue focus; I’m saying they need to allocate some percent of their attention to “doing weird things” to prevent a writer exodus as enshittification strengthens.

UI as attention guardrails

· 113 words

Whenever you open an app, you give it permission to shape the grooves of your attention. Through its interface, it suggests and implies a limited range of ways you can interact. This all sounds very abstract; what I really want to say is that I think my Things app (the #1 best-selling productivity app, I assume) keeps me in a kind of productivity hell. I have, what, 84 things to do today? Task lists should not be ambient all-day guides. I should leave it in the closet, go in there and whiff it for 5 minutes, max 10, commit to memory whatever is important, and then not go back until tomorrow.

Three lanes of writing (S/M/L)

· 227 words

I want to adopt a three-lane model of writing (and especially as I enter fatherhood, I’m going to have to). An essay can take 2 minutes, 2 hours, or 20 hours. 

  • A 2-minute essay is a log; I can do many of those per day. More so than time, those require presence and discipline: the ability to stop in any moment, realize something is happening, and just write it down. If there is enough time for a 2-minute scroll, why not a 2-minute paragraph? 

  • Next is the 2-hour essay, something you can start and finish in a single sitting. The goal here is to pick “layups,” and I don’t actually mean “pick the easiest idea,” but more like, “pick the one that is fresh and active in your mind, and ready to come out now.” If you haven’t been daydreaming about it throughout the day, it’s probably not the essay you should try and write in a single sitdown. The goal is to publish before leaving the chair. 

  • The final essay, the 20-hour essay, should be undertaken much more infrequently. A realistic goal would be to do 4-6 of these next year. Behind the 20 hours of “writing” is maybe another 200 hours of subconscious marinating; the goal here is to start from important, timeless questions in your life—maybe, your “12 favorite problems.”

Honest optimism

· 201 words

How can you be hopeful, but honest? I am done with dishonest and naive optimism. I mean, don’t get me wrong, I’m an extremely optimistic person. I just watch people use it as a shield sometimes. Any wince of negativity is branded as “doomerism.” It’s almost weaponized hope. But “honest optimism” feels like the proper way to think about it. It lets you be real about something when it’s actually a problem, while acknowledging that there’s something productive and generative we can do about it.

I’m optimistic in my life, pessimistic about society; optimistic about my ability to make a dent, pessimistic about the survival of any intelligent species because its hard technologies probably always outpace its civic technologies, but generally optimistic about biological matter and trans-dimensional space-time gook and all that big stuff (this exact moment will recur again? It depends on your model of cosmological evolution).

v2: Optimistic about my life,
Pessimistic about the moment,
Optimistic about design to fix the moment,
Pessimistic about society’s ability to use design,
Optimistic in our metaphysical engine to spawn infinite societies,
Pessimistic that some demiurge will wreak havoc on most species,
Optimistic that some bacteria in a cousinly space-time will fart utopias,

The Unitive Essay

· 186 words

So there is an ESSAY (the “unitive essay,” a term maybe I’ll run with), and then there are sub-genres of essays: the personal essay, the lyrical essay, the fragmented essay, the braided essay, the trickster essay (you can just make up whatever adjective you want). All these sub-genres work in a local context. But I think the ESSAY is worth it because it’s timeless and universal. I say this because each reader, in our times, and in future times, has their own blinders, their own subset of patterns that they care about. When you write for a niche or a subgenre audience, you’re appealing to a fixed group with specific blinders. But when you do the hard thing of trying to synthesize all 27 patterns, you have something that is likely to appeal to anyone, regardless of their blinders. A well-rounded essay can make someone care about any topic. And, a unitive essay also expands the lens of the reader (“oh damn I never knew an essay could have this and that”). Also, and finally, the Internet is a context scrambler. Your URL is dislodged from any stream, any entry point, and anyone can arrive from anywhere at any time, and so the unitive essay is the thing most likely to resonate with any particular stranger who stumbles into your living room.

Retreat, reflect, return

· 96 words

Being a writer involves stubbornly carving out time from life so that you have the space to reflect on it. You probably miss something if you permanently retreat into your own cave of rumination, but also you miss something if you are just completely immersed in your own stream of experience with no distance to step back and process it. I think logging is that middle ground; when you take field notes from the front lines of life, you have high-res shadows of your experience that you can bring back with you into your Writer’s Cave.

Plane shifting

· 257 words

The mind moves in planes of thought, and these 2D planes exist at every rotation, and so your mind is like this 3D object that is shaped by the planes you’ve occupied. We learn to shift to specific planes to match a context, for better or worse. When we read, or talk, or hang, we get exposed to new planes that we reject or integrate. It’s not enough to see a plane once; it will escape you if it’s not reinforced, and once it’s rigid, it’s hard to dismantle. The architecture of your mind is the meta-game: get this right, and you control your lens to reality, and it affects every area of life.

I hate the word “mental models” though. Idk why, it feels too commodified, too utilitarian, for the purposes of getting ahead in business. It’s weirder than that. There are planes of good and evil, of saintliness and horniness, of man and machine. To actually surf between planes, you need to let loose all assumptions and put yourself in waters that might drive others insane, with the trust that you can pull out and shift. This is shamanism, alchemy, psychic martial arts, I think.

You want plane plasticity. There are many methods—could be drugs, or grieving, or years of meditation—but you want to be method-agnostic. Tools show you new regions and principles, but you want to be able to get there on your own, to be able to do some secret hand signal to yourself that can activate a very specific plane.

Despite the superwriters...

· 186 words

Will was surprised to learn that I think machine writing could soon surpass the best human writers. As the head of Essay Architecture, he thought my position would just be “no matter what, humans will always be better at writing essays than machines.” I actually have some pretty extreme predictions on the trajectory of technology (I guess you could say I'm an ambivalent accelerationist), but I guess I believe that AI progress is irrelevant to the fact that I will always enjoy writing and see writing through the chaos as an opportunity. So yes, I think machines will make essays that are history-defining, that are good to degrees that are unimaginable to us today.

This will, unfortunately, make it even harder for writers to have economic value; but realistically, it's already too hard. The Creator Economy is a game of power laws, and AI might shift the chance of success from 2% to 1%. But could the same technology help artists go from 1x potential to 20x potential? If AI kills the market for commoditized creative work, will it let humans focus on the right things?

Long-game activism

· 165 words

Instead of spending 5 hours per day mad at trending social justice issues (20,000 hours per decade), I want to focus on building an institution for the essay. It’s a sort of illegible, seemingly irrelevant, idiosyncratic thing to do. But if it works, and if it somehow has any effect on how writing is taught in schools, and that improves the critical thinking of a generation, it will have way more influence than if I spent all that time protesting and howling for nothing. This just taps into a core belief of mine that the only way you can possibly help anyone outside of you and your immediate circle is to pick something dear to you and approach it with unreasonable fervor. If someone were to criticize me for ignoring a genocide, I’d say that all you can do is intensely dedicate your life to a single vector for multiple decades in the hope that you can tilt the scale away from the next generation’s genocides.

Archive