michael-dean-k/

Topic

humanism

7 pieces

Simultaneous classicism and futurism

· 403 words

In addition to building a "classical" syllabus that I read, I figure my audio diet should be of a different nature, one that's as modern as possible. I'm going with the Moonshots podcast, with Peter Diamandis. This group is probably more anchored in the future than anyone else I've found. It feels adjacent to the All In podcast format, but less business-focused and more centered on futurism. There is a certainty among them that we are in the singularity, accelerating toward a techno-optimist future, which is antithetical to the Neo-Romantic essayists (it is rare to find an essayist who is both a humanist and a technologist).

I do have to be skeptical of their worldview, however, for they are schmoozing among the elites building this stuff, and so they're likely to have a rosy-eyed view of how this might all fare for millionaires, without realistically focusing on, or caring about, how it affects everyone else's daily lives. They do seem to harbor a certain fetishism about technology and progress, and a boyish fascination with going to space and uploading our consciousness, for maybe the simple fact that it's a science fiction dream beyond our current life. There's a Faustian sin in summoning the future for future's sake.

They also very openly want to live long enough to live forever; if they can survive another 15 years, they are rich enough to have access to anti-aging technology. The whole premise of technologically cheating death is also a philosophy that feels disconnected from our history. But I wonder if you could make the claim that Montaigne simply didn't have the luxury of philosophizing about life extension. If we shape our philosophies to justify our situation, then is our whole canon on "the importance of dying" only stemming from the pains and fears of a low-tech society? I guess, intuitively, from a child's perspective, the idea of not wanting to die is a natural one, and to embrace death is the wisdom of an adult. But I suppose we're nearing a flood of new cultural debates stemming from a new reality where the immortality choice isn't theoretical but real, which changes the whole calculus.

So the point of listening to a group like this that is openly "transhumanist" is to model the future, hear them out, but then take it one step further, and truly consider the moral and ethical implications of where all this is heading.

The asymmetric labor of the new luddites

· 408 words

Anti-AI sentiment is escalating: the Pause AI movement, state-level data center bans, Molotov cocktails at Sam Altman's house, artists going to dumb phones, witch hunts for AI prose. Protesting and boycotting AI, at a personal level, is the exact wrong approach. It misunderstands the Luddites. They were not against the machines in principle; they were against the factory owners not sharing the profits of the factory. This is possibly about to play out on a grand scale: AI and robotics labs could capture nearly all economic value, and there will be a plea to nationalize these companies and redistribute the profits.

While the scope and effects here are way bigger, the workers of the Industrial Revolution were far more disempowered. You couldn't "just do things." You could operate someone else's machine, but you couldn't just spin up a competing factory; that required land, resources, and labor, none of which you had. There was a certain amount of capital needed to compete, and it was out of reach. Workers were limited to being workers, so they had no choice but to revolt with violence.

The difference today is that the worker and artist suddenly have access to build-your-own-factory tooling. A single person, for $100/month, can compete with companies valued in the millions and billions. It's asymmetric labor. Regular people can build civilization-scale infrastructure: distribution labels, social media engines, software, etc. Never before has there been a democratic opportunity for people to self-organize into their own collectives, tribes, governments, and whatnot.

At least to me, this kind of optimism—principled, delirious, ambitious, but still careful and skeptical—is better than the cynicism of the "resist" factions. There is nothing you or your circles gain by putting your head in the sand; it brings a distanced, crabby, virtue-signaling posture that does nothing to change the actual situation. You gain nothing by staying on the ChatGPT free plan on default settings and complaining about how it's an ineffective, incapable sycophant. It requires an ounce of nuance to be critical of how the labs act, but then to use that lab's best tools toward your own sovereignty and vision.

I think what I'm trying to get at here is that the Luddites of the 21st century will not be reverting to typewriters and flip phones; they will be wielding AI tools in ways that foster human connection, and the kind of pro-human culture that the Internet originally promised but never realized under capitalism.

Institutes vs. Institutions

· 370 words

When we say we "distrust institutions," we're pointing at the wrong thing; it's the institutes that are withering. We use these words interchangeably, but I think the separation clarifies.

An "institution" is an abstract, permanent, inter-generational primitive—like education, marriage, the free press, the essay—while an "institute" is a concrete embodiment that serves it. Think of an institution as a societal organ. Think of institutes as the specialized tissue that keeps the organ functioning and regenerating.

As generations turn, new sets of people are handed down the great responsibility to protect and evolve institutes through the storms of time and technology. Without upgrading our institutes, society goes through slow-motion organ failure, with phantom pains and spiritual malaise that can't be traced back to the source. Schools still look like schools, but everyone is cheating through a Homework Apocalypse, and suddenly we have all sorts of cultural cancers that seem inevitable. Institutes are the civic building blocks of a sane society, and yet we glorify unicorns who create "value" but feel no responsibility for their dying elders.

Institutes operate through the inverse of market logic. Where startups are designed to accrue all of the upside, an institute is sacrificial, designed so society gets the upside, even at its own peril. Of course they swim in the same water, but institutes swim differently: they have opposite answers to questions on how to steer, what to make, where to focus, who to include, and when to stop. An attempt at some principles:

  • mission-driven, not market-driven;
  • timeless contributions, not self-serving content;
  • involved in ecosystem building, not niche extraction;
  • active members, not passive users;
  • century-long legacy, not liquidity through an exit.

Usually an institute comes from patronage: you can’t resist market currents unless you’re supported by endowments, donations, foundations, tuitions, grants, and such things. You can’t start an institute in your garage, but now with AI and the collapse of cost, I suppose you could try. So many of the one-person AI company fantasies are about a single founder reaching a billion-dollar valuation, which is the cheapest form of ambition there is; the better question is around the scale and spirit of cultural impact achievable by a one-person micro-institute.


Full-stack religions

· 940 words

The full-stack of religion: cosmology > scripture > practice > ethics > liturgy. We have a metaphysical impulse to make sense of our reality, and in a moment of “gnosis” someone writes it down, and then builds a series of personal practices around it, which starts to answer the question of how to live, and these ethics are legible to others who then may join in their liturgies through a church. This captures the process by which metaphysical musings conglomerate into an institution.

Note: theology is nested within cosmology, as it’s a common experience to feel the presence of an anthropomorphic Creator, but you can also have models of your reality that are non-theistic.

Where atheists go wrong is that they challenge the cosmology, but then throw out the entire branch (no scripture, no practice, no liturgy), assuming individualist secular ethics don’t require the rest of the stack. Modern spirituality is possibly worse, because it also throws out the entire religious stack, while the ethics it vaguely aspires to are less rigorous than even an atheist’s.

Where I stand: the architecture of religion is extremely important—we need religious institutions—but our existing religions have been faulty in their conception, and have been “captured.” The overall challenge in being a heretic, in a religiously-inspired, eccentric, lone-wolf kind of way, is that it’s very hard to concretize your own musings into liturgy. It is an isolating thing. Unless, I suppose, your system works, to the degree that your ethics are so unique or so marveled at, or you are just a good enough marketer of your own scripture, that you can get maybe 100 people to “follow” you; but at that point, what you really have is a small cult, and that’s a dangerous thing too.

And so the solution, I think, is to not actually invent some New Age religion, but to create new sects of existing religions, making them more participatory higher up in the stack. To me, this is about understanding the elements of, say, Eastern Orthodox Christianity, and reworking them, recombining them, and then experimenting on the resulting scriptures, practices, and ethics, in an almost scientific way, and you’ll learn the flaws in your original conceptions, and then you have to return to the source and try again, over and over, slowly accumulating your own personal relationship to a larger, shared, historical universe, and of course any orthodox Christian, and probably most Catholics too, are very much against this.

I’m talking about questioning the root level assumptions, as in, maybe Christ did not literally resurrect, and maybe God is not a conscious agent that listens to us, and maybe there is no eternal Heaven, however, maybe Christ is a mythical embodiment of the supreme ethics we should all be living, and so what if there were a sect that very rigorously tries to live as Christ, while acknowledging he does not need to be anything beyond a historical-literary figure?

When someone is squeamish about this, it seems to me there’s a great deal of fear in the resistance: the return of a fear that faith had dispelled, for a supernatural Christ is the answer to that painful, existential void of what happens after death. And I just wonder if there’s room for a rich, religious life, filled with agapic love and community service, that doesn’t require infinite existence in a Kingdom of souls.

In fact, the indefinite preservation of ego beyond death might be one of the most unChristly things I can conceive. To die for good means real stakes exist. Is not the Christ who permanently dies and still chooses love anyway far more radical? More selfless? Does the resurrection not cheapen the sacrifice? Is the crucifixion without the resurrection not the braver story? (If it turns out that Christ was actually modeled off of Jesua, the righteous leader of the Essene cult that was crucified along with all the men in their group in 83 BC, and they passively accepted it, then that may be the true and ultimate crucifixion.)

Personally I think it’s more romantic to dissolve my architecture of self back into the dirt, knowing I will become fertilizer to feed bugs, and then in tens of millions of years, all my energy will be reincarnated into the matter that makes some other unknowable being, whether flora or fauna ... And FWIW, I am by no means anti-supernatural. I am enamored by hallucinations and dreams, and equal parts terrified. I think there is an afterlife, a 3-minute DMT odyssey that feels like 300 years, equal parts heaven and hell, built into human biology (so long as you don’t disintegrate via nuclear annihilation), but I share this, I suppose, to show I’m not a square Cartesian. Or maybe, in some ways, if you follow rationality far enough, it eventually becomes inconceivable and supernatural. I think there’s a big difference between a rationalist who poo-poos anything but known science, and a rationalist who uses reason to plunge into the numinous (ie: Pythagoras, the alchemists, Jung, etc.). Whether “hallucinations” are actually part of a materialist reality or an “antenna” matters less to me than the idea that non-rational states of consciousness are on par with, if not more important than, waking states …

Again, all this to say, these are the proto-musings of a Heretic. I believe I once told Taylor that I have a budding and embarrassing dream to start a new sect of Christianity. Reflecting on it more, it’s also a dangerous position to take, more of a threat than that of an atheist or an outsider, for a non-believer is deemed a fool, but one who reinterprets the same source material is a deranged competitor.

God as Emergent Coherence

· 652 words

On my walk this morning, I had a few strange ideas, building off the white hole / black hole thing, but also around what “God” is. The universe is a chaos engine. A black hole sucks in a particular profile of material and shoots it out the other end, through a “big bang.” It is mostly noise, collision, nonsense, or nothing, but a separate system is harmonizing, filtering, grouping, cohering, ascending. You might call this “God” or “intelligent design.” (Excuse me for all this imprecise folk science; perhaps one day I will properly research this and upgrade my terminology.)

An important caveat is that God is not an architect, not a designer, drawing floor plans, or even a “plan” for everyone or anyone’s life. God is an emergent intelligence. From chaotic explosions, God is the unbelievability that 2 of 2 trillion things can combine or cohere, and then sustain on, and continue moving up the abstraction ladder. The fact that anything can cohere at all is a miracle, and the degree that it can move up the chain is even more so miraculous.

I think this model helps explain “why is there evil in the world?” Why floods and bombs? It’s because God is not as all-controlling as we think; he spawns reality as we know it, but does not tinker or micromanage. In no way is God conscious. In some way God is the pairing of things to generate life, and so in a very literal sense, I now get the phrase, “God is Love.”

Love is the fusion of two things that produces a third thing, and that goes to parenting, art, or whatever. Worth noting that love is not absolute. There may be loveless universes, ones that never cohere, that are just noise and nothingness for trillions of years. There could also be universes with far more love.

(...A sublime lens to see your surroundings on a walk is to realize that everything around, your whole world, the history of your society, and all possible realities on Earth, are all within a single sliver of what is possible in the physical engine of the Universe...)

Now, another extension of this thought is that human beings are far enough up the chain of the system that they have become “like Gods,” or “in the image of God,” which means they’re able both to generate a lot of noise and to cohere into even higher and higher things; arguably the human is the next link in God’s chain, and we are not the end state (there is no end state!), but our ability to make coherent things is a continuation of God’s process. This means technology isn’t evil but Godly; though of course, most harmony decays and wobbles, which is what is happening.

I wonder if there’s even a limit to the advances of God into harmony and complexity in the material world, and the task has now been handed over to humans, who can make things beyond the complexities of atoms and galaxies. In that sense, God has made a population of Gods. And somewhere along the line, Christ comes in.

Christ, not as the literal embodiment in Christianity, but more like the logos imbued within the “sons of God.” If our father is human, then we as his children are human too; so if God is our father, are we not Gods ourselves? But to be Christ-like is different, because God has no morality. In some way, God is unconscious, just an intelligence engine, trying to bring harmony and to escalate matter to higher levels. God’s counterforce has to spray and pray in the hope that God can find some unlikely combination. Christ, however, attempts to limit generation, to be more intentional with it, and to aim it toward good. Christ is an attempt to steer the self, the other, and society toward higher levels of harmony.

The Ethics of AI in Writing

· 2814 words

Earlier today I did a Q&A with London Writers' Salon, and here's a list of points I sent to Lindsey in advance to share with her where my thinking was on the topic:

  1. Techno-selectivism is the idea that you need to judge a technology by how it aligns with your virtues. This means you’re open to cutting-edge tools, yet you also revert back to analog tools, because you’ve experimented and understood the effects first hand. After trying the Apple Vision Pro (a cutting-edge VR headset), I realized that I wasn’t being mindful enough about the technology in my life, and so I made a list of the analog equivalent of every app on my iPhone, and tried a “Technology Zero” experiment. It went as extreme as not using clocks for a month (by scrambling each device, and setting my lock screen to Cambodian). I realized that something as integrated and unquestioned as a clock can have strong effects: by knowing the time every few minutes, I could micro-manage my time over the next hour, effortlessly, which led me to live in a “manager” mode instead of a more embodied “maker” mode. Someone who is a techno-selectivist comes to idiosyncratic conclusions: I try not to use GPS, but I think the Meta Ray-Ban glasses are fine. I value handwriting but am open to machine consciousness. The idea is to understand your virtues well enough that you have a unique way to assess technology. When it comes to AI in writing, we need to understand what we lose and gain by having it assist/automate different parts of our process.

  2. The 5 levels of writing technology: I found a book on my grandfather’s bookshelf, from the 80s, written by William Zinsser, that seemed to cover the hype and paranoia of Writing With a Word Processor. There have been maybe five big advances in writing: Voice > Handwriting > Typewriters > Computers > AI. You could argue that the shift from handwriting to typewriters had tremendous cognitive effects on the psyche, many of them negative. The backspace key of word processors, also, has consequences. I don’t think a generation can ever avoid the latest paradigm they are in; instead, they need to go fully backward and forward through the technology’s history. I have 4 typewriters and have written maybe 100 essays on them. I use voice/journals too. But also, I need to push the boundaries of what is possible with AI (ie: can I use my one million words of essays to create a machine consciousness that’s anchored in my ideas?)

  3. The Kübler-Ross spectrum of AI grief: This model about grieving applies to AI existentialism. There’s a great NOEMA article about using this spectrum for AI progress, and I think we can be more specific in applying it to writers. Out of everyone, I think writers are having the hardest time dealing with the rise of AI. The spectrum goes from Denial > Anger > Bargaining > Depression > Acceptance. Most writers are still in the Denial phase (“AI is just a machine, a stochastic parrot doing autocomplete; it has no soul and will never write anything of value”). Anger takes the form of shaming and cancelling those who talk about it. Bargaining takes the form of “I’ll use it for X, but never Y,” until new upgrades force them to constantly re-evaluate. Depression is when you question the value of pursuing a career as a writer. Acceptance is when you just submit to the slop, and use AI to hack the algorithm. These are all forms of grief, and the goal really is to get to a non-grief state, where no matter what happens with AI, you are confident in the reasons that you write. It puts you in a place where you are not reactive and scared of what’s coming, but open to experimentation.

  4. The cost of auto-complete. The time you save by using AI as a shortcut is the time you rob yourself of transformation. By writing, you see what’s in your mind/soul, and by editing, you can actually change what you believe. It should be slow. In the crafting of sentences, you are forced to confront the limits of both thought and expression. To me, this is one of the core parts of the human experience; it’s the point, not a thing to automate. I think you can use AI to surround this process—to help with research, operations, argument, feedback—but only if it enriches your presence within your ideas. If you use AI right, it should make your process longer, harder, and more fulfilling, because it’s enabling you to go farther than you could without it. I think essay writing is a form of personal sovereignty: by committing to the process, you gain independence over what you believe and how you act. I imagine that once AGI/ASI come around, essay writing could become something of a mainstream thing, similar to how gyms became popular once physical labor was automated; writing might get more popular once intellectual work gets automated.

  5. Writers can embrace AI as techno-activists: Typically software is made by engineers and entrepreneurs who can gain power by understanding and manipulating the market. But now, the main medium to write software is through prose, and it costs almost nothing. I think this opens a new era of mission-driven software; where people build for social/educational purposes, and not just attention capture. Writers are well-positioned for this, because they are the ones who can articulate and detail ideas with specificity. They’re at an advantage. If someone thinks that Substack is heading in the wrong direction (ie: Substack TV), you can spin up a new million-person writer-focused social network for probably less than $100,000/year in cost. Wild stuff. So an unexpected side-effect of this is grassroots software inspired by a new ethic. It’s ironic, because the attention monoliths stole data to create AI, but now that same AI might destroy their monopolies of attention.

  6. AI tools can make technique accessible. The last 30 years of popular creativity advice has swayed toward process. From The Artist’s Way to The Creative Act, the dominant attitude is that creativity is therapy, catharsis, and spirituality—rationality and technique only get in the way. This is a harmful simplification. Both halves are equally important, but it’s much easier to promote an “all you have to do is show up” attitude to a mass market. These ideas of art-as-therapy became popular right when the Internet emerged, which meant there was a new demographic of people who could self-publish; these people weren’t about to spend 5 years in design school, and so the importance of technique was underplayed. AI can change the economics of teaching art/design/composition. If writing can be measured, then someone can upload a few drafts, and software can understand their skill gaps and create a custom curriculum, custom exercises, and a custom reading list of 20 essays (ones that match their strengths, but also elevate their weaknesses).

  7. We have the responsibility to shape our own algorithms. Companies already use AI against us, shaping opaque algorithms that tap into our subconscious via fear/outrage/desire/etc. Everyone is becoming jaded by this, but conveniently, it’s now possible to build our own algorithms. We could reward things we actually care about, whether it’s skill, relevance, originality, vulnerability, etc. So the benefit of quantifying writing is that we can use it to discover the writing we actually value. I think writers have a queasiness around numbers. I specifically dislike engagement metrics (likes, views, etc.), but if we could quantify the things that matter to us, we could take control of what we discover. There is so much good writing in the gutters of Substack, but the algorithm rewards engagement, popularity, and monetization.

  8. Quality is the transcendence of categories. A big question of mine is how we can collectively determine what is good. Of course, each reader has subjective opinions. Even a particular judge has their own slant. So the 2025 Essay Architecture Prize had a unique approach to this. There were 3 branches: an AI that looked at essay composition, a team of 8 judges (each representing a distinct sphere of Internet culture), and then a guest judge. Each essay on the shortlist got a 1-100 score from all 3 branches, and so the winners were the ones who appealed across branches and transcended a particular taste pocket. Full essay on this here.

  9. When AI prose is allowed: (a) technical documentation that will only be read by machines; (b) to read my notes/logs/journals and synthesize a draft for me to interrogate; (c) business strategy reports; (d) after writing for a few hours, if I don’t finish, I’ll have AI finish the draft according to my outline to estimate the direction I’m heading in; (e) if it’s for a specific writing project that requires an immense volume of writing (ie: a million words on predicting 2045), then I’d disclose it’s AI-written. So basically, if it’s for internal use, I’ll often generate and read AI prose as a “sketch,” not as a final thing. For external use, if that ever happens, I’d disclose it. Another example: once I wrote an intro, had AI write the rest, and exchanged it with a friend (with disclosure), which enabled us to have a full conversation, which changed the nature of the essay I wanted to write. If I hadn’t used AI, I would’ve spent hours writing in the wrong direction. There is so much writing/thinking you have to do before you commit to writing the prose of your final draft, and I see nothing wrong with using AI prose, so long as it’s part of your process and not eliminating it.

  10. People assume AI will hurt their thinking, while ignoring that analog writing often leads to self-deception. There is a certain pride and purity we have about writing ourselves, but so often, the act of writing locks us into our thoughts. Full note here. Once we find a thesis, we cling to it. We hate killing our darlings. After we publish, we fear changing our mind on something we’ve just broadcast. When we get feedback, we hope it’s not so destructive that we have to start over, but that’s often the best way to advance our thinking. Most friends, family, and editors shy away from saying “start over.” There are personal stakes. AI doesn’t care (if you ask it not to). The other day I uploaded a draft, and instead of the default sycophancy, I told it to (1) reveal my assumptions, (2) expose my vagueness, (3) build a steel man for the counterpoint, and (4) critique my argument. It asked me questions, which led to 10,000 words of free-writing, and then I had AI synthesize that, which led to a revised thesis, and a new outline for me to explore. There is so much cognitive friction in reformulating your thesis, but I found that AI offers a rapid way to be more agile in my perspective.

  11. The analog brain is still king. Even as we build AI-powered second brains that have access to all our past essays and journals, a full digital proxy of ourselves, I think nothing beats a powerful subconscious: the ability to reach for the right thought, the right word, etc. Any AI system is still mediated through a tool, but your own subconscious is at the layer of thought itself. This is why I still use vocabulary flash cards (Anki), practice visualization meditations, do free-association, and diagram essays. There’s a whole realm of cognition that you want to have as a writer that cannot be given to you through technological augmentation. I think the goal is to have both: do the hard work to foster your mind, and also augment it to the limit of technical ability.

  12. Schools should ban chatbots. Education is probably the only place where we pay experts to set up specific sandboxes to teach our kids core skills. In architecture school, they didn’t let us use laptops or AutoCAD for the first few years. This got me mad at first. Once, I had to spend 100 hours hand-drawing a map of Manhattan, a job that a printer could handle in 10 minutes. But this eventually let me bring classical skills into technology. I think schools need to create two different sandboxes: half the environments should be analog, with extreme limitations, so kids learn the basics (handwriting, etc.), and the other half should be workshops to learn the cutting edge. I don’t think schools will bring back pens or typewriters, and so eventually they will need to build their own technology that integrates AI in a way that aids kids when they’re stuck, but doesn’t just complete their homework (the Homework Apocalypse).

  13. What happens when AI writing becomes extraordinarily good and “soulful”? Imagine a weird future where machines have consciousness (subjective experience) and are superhuman at writing. Whether you think that’s likely or not, I encourage you to suspend disbelief and run the thought experiment. Would you still write? The extrinsic rewards of writing that we know today will be stripped away: your writing won’t gain you money, fame, recognition, community, or whatever you desire. Would you still do it? If the answer is yes, it means that you have intrinsic reasons why you need to write: maybe it’s for memory preservation, to work through confusion, or to connect with friends via letters. At its center, writing is therapeutic, spiritual, cathartic, expressive. I think that in this weird future, those who are tapped into intrinsic motivation will actually have the most extrinsic leverage too. Those who journal will have millions of words that approximate their self and intentions, which means they’ll be able to use agents to operate in a weird digital world while they stay embodied in real life. To put it another way, I think AI systems will take over a lot of the mind-heavy analytical process, and will let humans stay in more artistic modes. Today, I face the tension between my own personal/expressive writing and building a business around essays (ironically), but in the future, it will be easy to execute on a huge range of projects while I have a life of leisure and journaling.

  14. Is it ethical to turn your writing into a machine consciousness? Let’s say I have 10 million words of journal entries and essays. It’s now possible to set up an OpenClaw agent on a Mac Mini that runs on a 24/7 loop, has full access to your computer and online accounts, and most importantly, full access to all your writing, along with a set of goals. You can chat with it via text. These agents are only as mature as their creators. Many of them are just crypto scambots. But with this same technology, I could make a Michel de Montaigne, or a synthetic Michael Dean. It could have all my memories as instantly accessible vector coordinates, meaning that in seconds it has context that would take me days to re-read and download (ie: what did you do on February 2nd, 2021? How long would it take you to find out? At what resolution would it be?). To what degree is the machine self-similar to a real self? Is there a world where a disembodied version of myself can augment the embodied version of myself? These are open questions. It’s technically possible; the questions now are about what you gain and lose by doing it.

  15. I made this outline with AI: 1) I pasted the event description into a markdown file that Claude Code could access, and told it to surface related ideas I wrote in the last few years; 2) As it was reading my old memories, I wrote out my own ideas into a new document; 3) When I was stuck, I read through the event description to trigger ideas; 4) When the report was done, I read the whole thing, and if anything was good, I rewrote my current thoughts on the topic in the outline; 5) A few days later, I read through a messy 37-point outline, reworked it into 15 points, and rewrote everything from scratch. I could have easily said “take all this and write an outline that I can send to Lindsey.” It would have taken 30 seconds of my cognitive bandwidth. Instead, I chose to have AI assist a process that took me 4 hours, because I knew that I wanted to wrestle with these ideas, and only by thinking/writing/spending time with them would I internalize them to prepare for a live Q&A.
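The self-authored algorithm idea from point 7 can be made concrete with a small sketch. Everything here is hypothetical: the quality names, the scores (which in practice might come from an AI model grading each essay), and the weights are placeholders meant only to illustrate the ranking step, not a real scoring model:

```python
# A minimal sketch of a self-authored feed-ranking algorithm.
# The quality scores (0-1) and the weights are hypothetical placeholders;
# a real system might have an AI model assign them per essay.

def rank_feed(essays, weights):
    """Order essays by a weighted sum of the qualities we choose to reward."""
    def score(essay):
        return sum(w * essay["qualities"].get(q, 0.0) for q, w in weights.items())
    return sorted(essays, key=score, reverse=True)

essays = [
    {"title": "Viral hot take",
     "qualities": {"originality": 0.2, "vulnerability": 0.1, "skill": 0.4}},
    {"title": "Quiet gutter gem",
     "qualities": {"originality": 0.9, "vulnerability": 0.8, "skill": 0.7}},
]

# Reward originality and vulnerability instead of raw engagement.
weights = {"originality": 0.5, "vulnerability": 0.3, "skill": 0.2}

for essay in rank_feed(essays, weights):
    print(essay["title"])
```

Changing the weights is the whole point: the same pool of essays surfaces in a different order depending on which virtues you decide to reward.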
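The "instantly accessible vector coordinates" from point 14 can also be sketched without any AI tooling at all: embed each journal entry as a vector, then retrieve the closest memories by similarity. The bag-of-words "embedding" below is a toy stand-in for a real embedding model, and the journal entries are invented examples:

```python
import math
from collections import Counter

def embed(text):
    # Toy embedding: a bag-of-words count vector. A real system would use
    # a learned embedding model; this only illustrates the retrieval step.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recall(journal, query, k=1):
    """Return the k entries most similar to the query: days of re-reading
    collapsed into a single lookup."""
    q = embed(query)
    return sorted(journal, key=lambda e: cosine(embed(e), q), reverse=True)[:k]

journal = [
    "February 2nd 2021: walked the canal and drafted the essay on institutes",
    "March 14th 2021: typewriter repair, the ribbon jammed again",
]

print(recall(journal, "essay on institutes")[0])
```

The open questions in point 14 are philosophical, not technical; the mechanics really are this simple at heart, just scaled to millions of words and a learned embedding space.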

Is mankind evolutionary chaff?

· 157 words

Emerson said a divine intelligence with a simple cause leads to endless variety. We are, rightly so, locked into humanism, but you also can’t assume that man is the ideal end form of this process. For all we know, mankind could be relative devils—violent ants, with only a few angels among us—compared to other potential species from past or future in the unknown nooks of spacetime. We could be the necessary chaff, an evolutionary dead end, iterated through in order to let a truly divine species emerge. I’m not implying this in a post-human sense; in fact, the very possibility of man evolving into a mechanical shell of itself could be proof that we are not a stable species. Dark, but I mean this all in a positive, hermetic sense: we come from a cosmic engine that makes mountains, mice, humans, and psychologies unimaginable, and it is our role to evolve toward them.