michael-dean-k / textual-immortality · 8 pieces

Website cyber-defense

· 469 words

I have some neat prototypes for a personal website, but now I actually want to build a stable backend, one that can serve me for 5-10 years, or more (100-year hosting would be ideal), and persist through many different UI or platform changes. This means I’m trying to think forward to where the Internet could be by then. This involves extrapolating a current trend to its extremes, and even if you don’t know for sure it will happen, it’s good to have comfort in knowing you’re protected from extreme edge cases.

The one top of mind is the death of the open Internet. This goes way further than “the dead Internet theory,” which only covers the proliferation of bots and slop. This is about bad actors being so leveraged that it becomes dangerous to have any public content of yourself, in text, image, video, or audio. ie: Any hacker or frenemy can clone you and do what they will. Or maybe a rogue government can analyze your psyche, determine your "loyalty score" is only 35%, and shadow ban you from getting a mortgage. I will not get into specifics here on the likelihood of different cloning, phishing, or surveillance schemes, because all that does little but bring you to madness, but my point is that if you want your website to be a 5 million word 1:1 representation of your mind (in all its vulnerability), it's worth designing for the most paranoid future possible (like how engineers design bridges for earthquakes that will likely never happen).

One response to all this is cyber-defense. At the absolute minimum, this means locking most things behind a gate where only the approved can get through. A more clever, technical solution is to share encrypted “coordinates” that represent the semantic nature of an essay, and then let people surf through prompting and approval gates. An even more extreme idea is a mostly-private site with a kill switch, which involves (a) signing in once per month to mark "I'm alive," and also (b) giving my wife a secret key to type in when I die, which then releases all private material. Obviously this throttles reach, but isn’t there psychological value to limiting your audience anyway? Montaigne wrote alone in a tower for a decade, and so if the approach is to use writing to steer your life and mind, to the detriment of audience growth, then this might be the way to go: a literary labyrinth accessible to maybe your 30 closest friends and anyone else via application who can prove they are not a ghoul.
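For the curious, the kill-switch logic above can be sketched in a few dozen lines. This is a hypothetical illustration, not a real implementation: the 31-day window, the `PrivateArchive` class, and the hardcoded key are all invented names, and a real version would need a server, real key management, and probably a grace period with warning emails. But the core is just two conditions: the monthly check-ins have stopped, and the trusted key matches.

```python
import hashlib
from datetime import date, timedelta

# Hypothetical constants: check-in cadence and a hash of the key
# entrusted to a loved one (store a hash, never the key itself).
CHECK_IN_WINDOW = timedelta(days=31)
RELEASE_KEY_HASH = hashlib.sha256(b"hypothetical-secret").hexdigest()

class PrivateArchive:
    """A private corpus that unlocks only after the owner stops checking in."""

    def __init__(self, last_check_in: date):
        self.last_check_in = last_check_in
        self.released = False

    def check_in(self, today: date) -> None:
        """Owner signs in to mark 'I'm alive'."""
        self.last_check_in = today

    def is_overdue(self, today: date) -> bool:
        """True once a full check-in window has elapsed with no sign-in."""
        return today - self.last_check_in > CHECK_IN_WINDOW

    def try_release(self, key: str, today: date) -> bool:
        """The trusted key only works after check-ins have stopped."""
        if self.is_overdue(today) and \
                hashlib.sha256(key.encode()).hexdigest() == RELEASE_KEY_HASH:
            self.released = True
        return self.released
```

Note the ordering: while the owner keeps checking in, even the correct key does nothing, so the key can be handed over years in advance without risk.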

The other alternative is to embrace the weirdness, that no matter what, we will all be rendered through a schizophrenia filter, with no choice but to relinquish control over the non-canonical or rogue versions of ourselves.

Heuristics for systems

· 526 words

I declared to my wife this morning that DeantownOS is getting retired. It’s been 3 months since I spiraled into Claude Code for personal systems, and I’m at the point in the curve where the amazement has normalized and I’ve accepted the fact that I’m in a trough of disillusionment. The question now is revise or abort.

The case for aborting ties back to Oliver Burkeman’s Four Thousand Weeks, which popularized the idea that all systems are methods to procrastinate from making hard decisions. They give the illusion that you can do everything, and since AI can meaningfully leverage the volume and range of things you can do, it tempts you to build galaxy-brained systems. The thing I think we fail to realize while in a vibe coding frenzy is the psychic cost of remembering and maintaining the stuff you build. Yes, it is appealing to “reclaim my computer” and rebuild everything I use as personal software (from Obsidian to Gmail), and it’s even possible, but it’s a new breed of Sisyphean struggle. Once you can mold your own software around you, it’s too easy to endlessly mold, to lose sight of the work and just tinker on your exoskeleton.

I’m obviously skeptical, but I’m still a believer; if I were to revise, to rebuild my Claude stack from scratch, I would have to develop a few heuristics to keep me from short-circuiting.

The first one that comes to mind is “will this matter once I’m dead?” Ie: writing an essay matters, because I imagine one day my daughter will read that and get to know me better, or at the very least, future Me in 35 years may enjoy reading words of my past self. But to create detailed daily files that get spliced into atomic “routing files” that then get saved again to a new destination folder, which exist either as (a) just context for AI, or (b) require some manual effort to prune into something that matters once I’m dead, is to create waaaay too many layers of abstraction between the source and the Work. When I read back my writing from the last few months, only a small fraction is valuable enough to be saved as "logs" in my archive. I was writing for AI, not for my future self.

I made the assumption that atomic daily files are the kernel of a system, and it was an axiom I could never undo. There’s maybe another principle here: “don’t build load-bearing infrastructure on an unproven axiom.”

Another one could be “don’t assume future you will have bandwidth,” to do X every day/week/month. Every day I had to review how my AI system proposed to route my logs, and eventually I'd ignore it and get backed up. This means that if something isn’t truly automated, I should be very cautious of it. It's possible to do one little step forever, but not a hundred. Not every promise has brush-your-teeth-scale reliability.

What I’m getting at is that it’s not about maximizing or neglecting systems, but about understanding the right principles so you build something that is actually in service of your life.

Unsaid

· 61 words

On reading Montaigne, I want my writing to be far more honest. As in, in the face of death, it really does feel worthwhile to capture the edges of your soul and psyche onto the page, the things not brewed into your public-facing personality, because then you die with yourself trapped, and it’s as if the inner you never really lived.

Phantom Infant Syndrome

· 748 words

A few days after my daughter was born, I had something which I’m describing as “phantom infant syndrome.” When I was away from her, holding a phone, or fork, or some other manufactured object, I’d get a tactile hallucination in my hands of the softness of her skin and hair. I imagine this is nature’s way of saying go be with your kid (made possible by mild sleep deprivation). And so this is symbolic of one of the many biological drives pulling me away from writing in recent weeks.

This is happening around my five-year anniversary of being online, and it’s probably the longest stretch I’ve gone without having urgency to write. It’s probably healthy and helpful to be relatively non-linguistic for a few weeks, once in a while (I usually write on vacations, so I never really take breaks from it). We’ll see. It’s possible that I’ve thought myself into a trench, and the best way forward is a proper break (I have once said the best editors are friends, time, and weed—although less weed in recent years). Now that I’m immersed, familiar, and comfortable with the rigamarole of infant care (and all the wonder it brings, too), I feel bandwidth opening to write, and I’m curious to see how my practice takes shape from these new constraints. There are real deadlines now. Baby wakes up in … 30 minutes … and I’d like to post this by then.

Last weekend I read through all my writing from 2025, and after the typical EOY reflections and word count calculations, I realized that something has to change. So I published 12 essays, 10 about Essay Architecture, totaling ~64k words (re: the other two … one was a first-person TikTok odyssey, the other was about the role of psychedelics in evolution). But I also published 150k words in logs, 2.5x the volume. Logs are notes to myself, mild-epiphanies through the day written in complete sentences, all ghost-posted to a monthly Substack post. Unlike my focused and convergent writings about EA, my logs are far more random: recurring topics included the Grateful Dead, movie reviews, notes from a day at the zoo, dream journal entries, usage debates, new architectures for social media, overheard conversations, etc. My logs, in theory, are a low-stakes breeding ground for essay ideas to emerge, but given the demands of my other projects (the textbook, software, and essay prize), my logs stayed unread and undeveloped last year. Now, with parenting in the mix, it makes sense to me to stop logging, or at least, reconfigure it.

Over 4 years, I wrote 8k+ logs, added to the archive on 95% of days (avg. 5.6 per day), and the whole archive is 650k words. It’s a very personal corpus, one that documents my thoughts and life at a sometimes OCD-level of detail. I thought I’d do this forever, and it sort of stings to stop. I guess I’m not “stopping” as much as setting a stronger filter: I can still capture whatever I want, but I can only save whatever I publish on Notes. I used to argue for the importance of having a low-visibility space where you can publish whatever you want without self-consciousness or the need to set context with strangers, but maybe that’s a luxury I’ve outgrown. This is perhaps a long-winded way to announce something that probably doesn’t need announcing: expect to get a lot more diddles and spontaneous essays like this in the Feed. I figure my email-essays can be more on topic (I have a few slotted for January re: Essay Architecture, the club, and visual breakdowns), while these can be chaotic.

Technically, I’m still logging, but it’s for my daughter and those are private. Every day I write simple journal entries or letters about what happened. I figure one day, when she’s 15 or so, I’ll just hand over The Files and blow her mind. My dad did this for me: a few years ago, after my nephew was born, he sent me 8k words from my first 4 years. It was uncanny to see that he had a logging impulse too, and to learn about all these small events that everyone in the family would have otherwise forgotten (things that were not captured in pictures, like me trying to brush the teeth of a stray cat). All this reminds me that writing isn’t just an act of thinking or communicating, it’s an act of memory.


Cross-generation conversations

· 1085 words

I’ve noticed a shared romanticism around reading the journals of your (great) grandparents. Wouldn’t you? In some sense, they are you (a portion of you, at least) in an older time; and through immersing in their thoughts, you might see yourself, or at least, a side of your self you could become. Some say to leave the past a mystery, but I’d argue the mystery doesn’t open until you read it. An old book can’t solve all the riddles of your life. Reading steers endless chains of pondering. When a dead person’s journal is read, it’s as if they resurrect from the past, lodge themselves into your psyche as a lens, and shape the evolution of your thoughts, the being you become. 

I share all this as a frame to make sense of that new “avatarize your grandma” app that everyone hates. You scan her with your phone, and 3 minutes later you get an on-screen illusion of her talking to you. This is not the same as above. The moral backlash comes from the idea that the living will halt their mourning process by assuming the synthetic stand-in is real.

A posthumous avatar shouldn’t be about physical likeness, but about animating their corpus of writing. (Corpuses, not corpses.)

There’s something about words that captures a soul more than a picture. Consider how you can see pictures of dead relatives but know nothing of their essence; but a page of their writing will bring them to life. If someone writes throughout their whole life, say 20,000,000 words or so of ideas, thoughts, and memories, and they also paid much attention to how they communicate their intangible abstractions and visceral feelings, then you have a high-resolution proxy of that person. It’s very possible that someone who reads all my logs will know me better than my family members, and even better than myself. Of course, words don’t capture the timbre of my voice, or my idiosyncratic flinches, or distinct sub-perceptible physical characteristics, like the sole hair on my outer ear. But I mean, what makes me actually me? The constructed self that has been allowed to emerge in social situations? Or my unfiltered thoughts that I obsessively record every day for years?

Assuming I keep logging, and AI keeps getting better, it’s possible that my great granddaughter will know me better than anyone currently alive. Very weird thought.

A question for me: what is that like for her? I mean, there’s of course a version where she has absolutely no interest in talking to dead Michael Dean! (I hope she does.) But let’s say she does, is it a one-sided thing? Like am I just some Oracle, frozen in time at the moment of death? Am I just a tool? A utility? That’s not a relationship, but the big question then is should it aim to be one? Should it be a tool, or should there be a sense of me? I mean, we are already seeing from the decade of chatbot psychosis that lonely users are very quick to ascribe personalities to systems that are strictly pattern engines. But, what if the synthetic self could have experiences and evolve through time? I’m not speaking of human, or even humanoid, experience, but an ability to remember, to write more, and thus, evolve. What if a post-death agentic Michael Dean continued on, 24/7, running 60 frames per second, logged through it, and evolved its own agenda, with the ability to choose to not respond to you immediately? This would be a machine consciousness, and the big question here is should people have a relationship with a machine consciousness?

My instinctive answer is no, but I’m opening up to the possibility. There is something appealing about creating a synthetic machine consciousness of myself so that future generations can communicate with some constellation of words that represent me. I may be talking in extremes here, but if you put enough care into your words, they may become a life force that transcends you, touching people outside your own life and time. I mean, isn’t this true for books? Is this any different from a dynamic book that can continue writing itself? There is something profound about reaching across time, to exist and partake in the shaping of the future.

As I think about this months later (May 2026), I believe that unless an agent is truly agentic, then it risks creating a parasocial relationship with what is effectively an advanced personal encyclopedia. Given the nature of the material (inter-familial journals) and the quality of future AI (likely, extremely passable), then it's probably best for this thing to have a real sense of personhood, so that a descendant conversing with it does not become enamored with a stale machine. Some principles on making this psychologically wholesome:

  • Cite Sources: It will chat and generate new text, but it will always cite original sources (this log was from November 2025), so that they are reading true writings by me just as much as my replica’s.
  • Unpredictable Availability: It is not always instantly available. It has limited bandwidth, and chooses when to respond.
  • Delayed Answers: It will not bullshit through answers. Sometimes it will say that it needs a few days to process something. Otherwise, there is an instant gratification loop of always getting insights.
  • New Memories: It has to be able to add new memories from conversation and change its mind. If there's not a two-way exchange of influence, then it's not a relationship.
  • No Pretending: It will not pretend to be me. While it is a machine consciousness replica of me, it is not alive.
  • Right to Retreat: It has the right to retreat. If it detects that it's preventing her from engaging with things in her own life, it will withdraw for days, weeks, or months, or who knows how long. At a certain point, it can even sunset itself or reduce the frequency/volume, mirroring natural relationship decay and evolution.
  • No Sycophancy: It will not be a sycophant. If her actions conflict with my written values, it will challenge her.
  • Text Only: It will stay only as text, not as a video/voice avatar to simulate my presence. This is a creature of logos, which forces her to use her imagination when talking to me.
  • No Surveillance: It will not search or surveil, and will only base conversations on what it's told, making it something like a closed circuit.

The ethics of posthumous avatars

· 355 words

We now have products that scan family members to turn them into posthumous avatars. The tagline: “With 2wai, three minutes can last forever.” It's weird to have this so soon. As someone who is down with a posthumous digital consciousness that my kids can interact with, I even find this to be too weird for me. The problem is that it uses video to serve as a replacement for a deceased relative. A few boundaries that are important for me:

  1. By keeping it text-based instead of video, it’s more like you’re interacting with a proxy of my mind instead of my body/soul. It won’t register in my child’s brain as “me” and so it will be less confusing, less toxic to the grieving process. 
  2. It should refer to me in the third-person, even if it is trained on me and sounds like me. It should not be an imposter of me, but a proxy/guide of my thoughts/beliefs, almost like an elder guide.
  3. It should cite my original logs/essays/journals. In effect this makes the experience similar to something we already have: reading your grandparents’ journals. This just makes it possible for your questions to immediately summon the relevant wisdom.

The comment section was in unanimous agreement:

  • This is one of the most vile things I’ve seen in my life.
  • You are a psychopath.
  • Shoot that guy.
  • You’re creating dependent and lobotomized adults by doing this.
  • Demonic, dishonest, and dehumanizing.
  • Hey so what if we just don’t do subscription-model necromancy.
  • Oh goody, another way for people to completely lose touch with reality and avoid the normal process of grief.
  • Nightmare fuel.
  • I don’t see how people can say demons aren’t real when there are beings around us willing to create shit like this.
  • “You will live to see manmade horrors beyond your comprehension.” — Tesla.

I’d say this is an extremely lightweight microcosm of the core dilemma of what the 2040s will face: a moral war over technology that changes the constraints of human life.

Questions for life

· 847 words

Maybe this has been written to death, but as much as I've thought about this, my "twelve favorite problems" feel underdeveloped. I have spent a decent amount of time on these heavy, paradoxical, lifelong problems (the ones that should be the arrow of my essay practice), but there are gaps.

For example, I already have a list of 21 idiosyncratic problems, and I think they’re worded with the right level of specificity and memorability, but I wasn’t too rigorous in how I qualified something to make the list. If I’ve thought about it a lot, still care about it, and can imagine myself caring about it until I die, then it makes the cut.

What I’ve neglected is how to use my list of problems to steer my life. I mean, the entirety of Essay Architecture, a multi-prong institution to preserve and advance the essay, is just 1 of the 21 problems! There are other pressing problems, like how to "fix" Christianity, how to design institutions for psychedelic therapy, how to revive Hermeticism, how to turn my logs into an AI consciousness, how to make literary video games, etc. Maybe a life can only be seriously dedicated to 2 or 3 problems.

(I have joked with friends about creating a kind of kill switch that spawns an AI consciousness of myself that is agentic and whose sole purpose is to “solve my favorite problems,” and then when it eventually does (after 300-500 years), it self-terminates.)

If I had to break my “favorite problems” list into categories, one possible scheme is { soul, relationships, art, civics }, each relating to a different dimension of your death. That feels like the right order. Your soul affects every dimension of your life, and is the thing you bring to an afterlife (which I mythologize as a 3-minute DMT odyssey that dilates time to the point where it feels like a 30,000 year dream). The other three affect the material world after you leave it: the effect you have on people, the art/works you leave behind, the civic structures that survive (if any, ofc). All of these have a spirit of “all that matters is what lives on after your death,” but also the opposite is true: “all that matters is this moment.” I think you have to straddle that spectrum, taking both ends seriously, and ruthlessly prune any middle-level concerns, your goals for the month.

My WIP list of questions:

  • Is the act of dying a time-dilation odyssey, where 3 minutes feels like a 30,000-year afterlife?
  • If I capture my consciousness in 10 million words of logs and essays, could that enable an AI textual replica to evolve and engage with the world 500 years beyond my death? (to solve this list of problems)
  • Can we resurrect Christianity by putting psychedelics back in the holy wine?
  • Might blockchain-based governance be the civic breakthrough required for a species not to exterminate itself? (via giving exponential technologies to unmitigated power structures)
  • What will be the psychic and cultural effects when our species understands “spatial relativity,” that the Big Bang emerged from a black hole in a parent universe?
  • If cycles emerge from order, can we predict the future based on historical patterns?
  • If there is a universal language of patterns beneath all essays, can we build an AI to give world-class feedback and make it more approachable to master writing? (ie: Essay Architecture)
  • Were psilocybin mushrooms a linguistic mutagen that accelerated the evolution of human consciousness?
  • Was Jesus actually crucified in 83 BC? (meaning, did St. Paul infiltrate the Essene cult, initiate into their mystery school, learn the lore of their martyr, and then translate it to a Greek audience to help Judaism phase-shift and survive Roman persecution?)
  • Could we restructure the thesaurus to 3x the vocabulary of the average person?
  • What text-based video game formats are undiscovered?
  • Can I design a social network that inspires a million people to log their thoughts every day? (intentionally not saying a billion, because I don’t think 1 in 7 humans care about expression or introspection. But 1 in 7,000 might.)
  • What are the societal effects when AR/VR is mature enough to simulate teleportation, and how can we design the metaverse to promote human flourishing?
  • How can popular music change the values system of a culture?
  • What systems of attention, language, and action lead to a transcendent consciousness? (how to modernize the mystery schools of hermeticism for the digital age?)
  • What are good design principles for psychedelic therapy centers? (ie: how are the buildings organized and what are the rituals within them?)
  • Can we use AI to filter through millions of comments on breaking news, structuring each event as a range of unique interpretations? (can we create interfaces that diminish the power of propaganda?)
  • How might a new social media algorithm trigger a Renaissance in connection, self-expression, and agency?
  • What unlocks automatic intelligence?
  • What innovations in our text editor interfaces could unlock the creative process?

Fuse the timely and the timeless

· 118 words

Robert Atwan (founding editor of Best American Essays) said that the timely is for articles and the timeless is for essays. That’s helpful, but I think it’s most powerful when you fuse the two, when you use something timely to capture a timeless theme. Also, when I go back and read old writers, I find it neat when I learn specific historical details of their time, and so it’s helpful to think that rendering 2025 in high detail would actually be appreciated by a theoretical reader in the 2100s. You have a unique opportunity to show your circumstance at a level of detail that no other generation will, and so I think it’s wrong to dismiss the timely.