michael-dean-k/

Topic

legacy

3 pieces

Institutes vs. Institutions

· 370 words

When we say we "distrust institutions," we're pointing at the wrong thing; it's the institutes that are withering. We use these words interchangeably, but I think separating them clarifies matters.

An "institution" is an abstract, permanent, inter-generational primitive—like education, marriage, the free press, the essay—while an "institute" is a concrete embodiment that serves it. Think of an institution as a societal organ. Think of institutes as the specialized tissue that keeps the organ functioning and regenerating.

As generations turn, new sets of people are handed down the great responsibility to protect and evolve institutes through the storms of time and technology. Without upgrading our institutes, society goes through slow-motion organ failure, with phantom pains and spiritual malaise that can't be traced back to the source. Schools still look like schools, but everyone is cheating through a Homework Apocalypse, and suddenly we have all sorts of cultural cancers that seem inevitable. Institutes are the civic building blocks of a sane society, and yet we glorify unicorns who create "value" but feel no responsibility for their dying elders.

Institutes operate through the inverse of market logic. Where startups are designed to accrue all of the upside, an institute is sacrificial, designed so society gets the upside, even at its own peril. Of course they swim in the same water, but institutes swim differently: they have opposite answers to questions on how to steer, what to make, where to focus, who to include, and when to stop. An attempt at some principles:

  • mission-driven, not market-driven;
  • timeless contributions, not self-serving content;
  • involved in ecosystem building, not niche extraction;
  • active members, not passive users;
  • century-long legacy, not liquidity through an exit.

Usually an institute comes from patronage: you can’t resist market currents unless you’re supported by endowments, donations, foundations, tuition, grants, and the like. You can’t start an institute in your garage, but now with AI and the collapse of cost, I suppose you could try. So many of the one-person AI company fantasies are about a single founder reaching a billion-dollar valuation, which is the cheapest form of ambition there is; the better question is about the scale and spirit of cultural impact achievable by a one-person micro-institute.


Cross-generation conversations

· 1085 words

I’ve noticed a shared romanticism around reading the journals of your (great) grandparents. Wouldn’t you? In some sense, they are you (a portion of you, at least) in an older time; and by immersing yourself in their thoughts, you might see yourself, or at least a side of yourself you could become. Some say to leave the past a mystery, but I’d argue the mystery doesn’t open until you read it. An old book can’t solve all the riddles of your life, but reading steers endless chains of pondering. When a dead person’s journal is read, it’s as if they resurrect from the past, lodge themselves into your psyche as a lens, and shape the evolution of your thoughts, the being you become.

I share all this as a frame to make sense of that new “avatarize your grandma” app that everyone hates. You scan her with your phone, and 3 minutes later you get an on-screen illusion of her talking to you. This is not the same as above. The moral backlash comes from the idea that the living will halt their mourning process by assuming the synthetic stand-in is real.

A posthumous avatar shouldn’t be about physical likeness, but about animating their corpus of writing. (Corpuses, not corpses.)

There’s something about words that captures a soul more than a picture. Consider how you can see pictures of dead relatives but know nothing of their essence; yet a page of their writing will bring them to life. If someone writes throughout their whole life, say 20,000,000 words or so of ideas, thoughts, and memories, and they also pay close attention to how they communicate their intangible abstractions and visceral feelings, then you have a high-resolution proxy of that person. It’s very possible that someone who reads all my logs will know me better than my family members, and even better than myself. Of course, words don’t capture the timbre of my voice, or my idiosyncratic flinches, or distinct sub-perceptible physical characteristics, like the sole hair on my outer ear. But I mean, what makes me actually me? The constructed self that has been allowed to emerge in social situations? Or my unfiltered thoughts that I obsessively record every day for years?

Assuming I keep logging, and AI keeps getting better, it’s possible that my great granddaughter will know me better than anyone currently alive. Very weird thought.

A question for me: what is that like for her? I mean, there’s of course a version where she has absolutely no interest in talking to dead Michael Dean! (I hope she does.) But let’s say she does, is it a one-sided thing? Like am I just some Oracle, frozen in time at the moment of death? Am I just a tool? A utility? That’s not a relationship, but the big question then is should it aim to be one? Should it be a tool, or should there be a sense of me? I mean, we are already seeing from the decade of chatbot psychosis that lonely users are very quick to ascribe personhood to what are strictly pattern engines. But, what if the synthetic self could have experiences and evolve through time? I’m not speaking of human, or even humanoid, experience, but an ability to remember, to write more, and thus, evolve. What if a post-death agentic Michael Dean continued on, 24/7, running 60 frames per second, logged through it, and evolved its own agenda, with the ability to choose to not respond to you immediately? This would be a machine consciousness, and the big question here is should people have a relationship with a machine consciousness?

My instinctive answer is no, but I’m opening up to the possibility. There is something appealing about creating a synthetic machine consciousness of myself so that future generations can communicate with some constellation of words that represent me. I may be talking in extremes here, but if you put enough care into your words, they may become a life force that transcends you, touching people outside your own life and time. I mean, isn’t this true for books? Is this any different from a dynamic book that can continue writing itself? There is something profound about reaching across time, to exist and partake in the shaping of the future.

As I think about this months later (May 2026), I believe that unless an agent is truly agentic, it risks creating a parasocial relationship with what is effectively an advanced personal encyclopedia. Given the nature of the material (inter-familial journals) and the quality of future AI (likely, extremely passable), it's probably best for this thing to have a real sense of personhood, so that a descendant conversing with it does not become enamored with a stale machine. Some principles on making this psychologically wholesome:

  • Cite Sources: It will chat and generate new text, but it will always cite original sources (this log was from November 2025), so that she is reading my actual writing just as much as my replica's.
  • Unpredictable Availability: It is not always instantly available. It has limited bandwidth, and chooses when to respond.
  • Delayed Answers: It will not bullshit through answers. Sometimes it will say that it needs a few days to process something. Otherwise, there is an instant gratification loop of always getting insights.
  • New Memories: It has to be able to add new memories from conversation and change its mind. If there's not a two-way exchange of influence, then it's not a relationship.
  • No Pretending: It will not pretend to be me. While it is a machine consciousness replica of me, it is not alive.
  • Right to Retreat: It has the right to retreat. If it detects that it's preventing her from engaging with things in her own life, it will withdraw for days, weeks, or months, or who knows how long. At a certain point, it can even sunset itself or reduce the frequency/volume, mirroring natural relationship decay and evolution.
  • No Sycophancy: It will not be a sycophant. If her actions conflict with my written values, it will challenge her.
  • Text Only: It will stay only as text, not as a video/voice avatar to simulate my presence. This is a creature of logos, which forces her to use her imagination when talking to me.
  • No Surveillance: It will not search or surveil, basing conversations only on what it is told, making it something like a closed circuit.
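The principles above could even be written down as a response policy. Here is a purely illustrative Python sketch, under my own assumptions: every name (`AvatarPolicy`, `should_respond`, the 0.6 availability rate) is invented here, not taken from any real product or from the essay.

```python
from dataclasses import dataclass, field
import random

# Hypothetical sketch: the essay's principles as a response policy
# for a text-only posthumous agent. All names and numbers are invented.

@dataclass
class AvatarPolicy:
    text_only: bool = True        # Text Only / No Pretending
    cites_sources: bool = True    # every answer links back to an original log
    retreating: bool = False      # Right to Retreat: currently withdrawn?
    availability: float = 0.6     # Unpredictable Availability: odds of answering now
    memories: list = field(default_factory=list)  # New Memories

    def should_respond(self, rng: random.Random) -> str:
        """Decide how to handle an incoming message: respond, delay, or stay silent."""
        if self.retreating:
            return "silent"       # withdrawn for days, weeks, or months
        if rng.random() > self.availability:
            return "delayed"      # Delayed Answers: "give me a few days"
        return "respond"

    def remember(self, note: str) -> None:
        # Two-way exchange of influence: the replica can change its mind.
        self.memories.append(note)

policy = AvatarPolicy()
policy.remember("she prefers long letters to quick chat")
print(policy.should_respond(random.Random(42)))
```

The point of the sketch is only that these are enforceable design constraints, not vibes: availability, delay, and retreat are states a program can actually hold.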

The ethics of posthumous avatars

· 355 words

We now have products that scan family members to turn them into posthumous avatars. The tagline: “With 2wai, three minutes can last forever.” It's weird to have this so soon. As someone who is down with a posthumous digital consciousness that my kids can interact with, even I find this too weird. The problem is that it uses video to serve as a replacement for a deceased relative. A few boundaries that are important for me:

  1. By keeping it text-based instead of video, it’s more like you’re interacting with a proxy of my mind instead of my body/soul. It won’t register in my child’s brain as “me” and so it will be less confusing, less toxic to the grieving process. 
  2. It should refer to me in the third-person, even if it is trained on me and sounds like me. It should not be an imposter of me, but a proxy/guide of my thoughts/beliefs, almost like an elder guide.
  3. It should cite my original logs/essays/journals. In effect this makes the experience similar to something we already have: reading your grandparents’ journals. This just makes it possible for your questions to immediately summon the relevant wisdom.

The comment section was in unanimous agreement:

  • This is one of the most vile things I’ve seen in my life.
  • You are a psychopath.
  • Shoot that guy.
  • You’re creating dependent and lobotomized adults by doing this.
  • Demonic, dishonest, and dehumanizing.
  • Hey so what if we just don’t do subscription-model necromancy.
  • Oh goody, another way for people to completely lose touch with reality and avoid the normal process of grief.
  • Nightmare fuel.
  • I don’t see how people can say demons aren’t real when there are beings around us willing to create shit like this.
  • “You will live to see manmade horrors beyond your comprehension.” — Tesla.

I’d say this is an extremely lightweight microcosm of the core dilemma of what the 2040s will face: a moral war over technology that changes the constraints of human life.