michael-dean-k/

Topic

technology-critique

14 pieces

Simultaneous classicism and futurism

· 403 words

In addition to building a "classical" syllabus that I read, I figure my audio diet should be of a different nature, one that's as modern as possible. I'm going with the Moonshots podcast, with Peter Diamandis. This group of guys are probably more anchored in the future than anyone else I've found. It feels adjacent to the All In podcast format, but less business-focused, and more centered on futurism. There is a certainty among them that we are in the singularity, accelerating to a techno-optimist future, which is antithetical to the Neo-Romantic essayists (it is rare to find an essayist who is both a humanist and a technologist).

I do have to be skeptical of their worldview, however, for they are schmoozing among the elites building this stuff, and so they're likely to have a rosy-eyed view on how this might all fare well for millionaires, without realistically focusing on or caring about how it effects the daily lives. They do seem to harbor a certain fetishism about technology and progress, and a boyish fascination with going to space and uploading our consciousness, for maybe the simple fact that it's a science fiction dream beyond our current life. There's a Faustian sin in summoning the future for future's sake.

They also very openly want to live enough to live forever; if they can survive another 15-years, they are rich enough to have access to anti-aging technology. The whole premise of technologically cheating death is also a philosophy that feels disconnected from our history. But I wonder if you could make the claim that Montaigne didn't have the luxury of philosophizing about life extension. If we make shape our philosophies to justify our situation, then is our whole canon on "the importance of dying" only stemming from pains and fears of a low-tech society? I guess, intuitively, from a child's perspective, the idea of not wanting to die is a natural one, and to embrace it is the wisdom of an adult, but I suppose we're nearing a flood of new cultural debates stemming from a new reality where the immortality choice isn't theoretical, but real, which changes the whole calculus.

So the point of listening to a group like this that is openly "transhumanist" is to model the future, hear them out, but then take it one step further, and truly consider the moral and ethical implications of where all this is heading.

Off the Clocks

· 394 words

For the last two years my lock screen clock has been set to Khmer, the language of Cambodia, with numerals I (still) can’t parse. The point is to not poison the flow of my day with chronos.

I started this experiment because I realized how obsessively I would check the time, as soon as I woke up, through morning and evenings and weekends for no real reason, in situations among friends where the hour was irrelevant. Time was a commodity, something to budget, forecast, control. Only when I got off the clocks did I notice a whole layer of quiet, instant calculations I’d perform to steer the immediate future (ie: it’s 9:43pm, which means I have 17 minutes until 10pm, which means I can only do 15-minute things until the 10pm-things start to happen). Chronological time alienates you from kairos, the ripeness of any given moment.

If we pick up our phone 96 times per day (the average), then we’re aware of the time every 10 minutes. We’re a society stuck in time. Lewis Mumford said that the clock (not the steam engine) is the central machine of the Industrial age, the thing that dissociates us from our natural rhythms.

Of course if I have back-to-back meetings or multiple trains to catch, then I need to be in manager mode and know time to the minute; but in all other moments, I strive to be temporally oblivious. I don’t know the time right now. I assume it’s somewhere 8-9am, and when Christine rings the doorbell I’ll assume it’s almost noon, and I’ll look outside to see the sun and shadows to confirm it’s no longer morning. When I’m hungry I’ll go eat, but unfortunately that brings me near the stove clock which breaks the spell (I’ve tried scrambling the stove clock, and that obviously annoys my wife). Whenever possible I default to removing clocks from UIs, or turning them to analog to create a second of friction, or, when iOS forces me to see ##:##, I revert to foreign numerals I can’t comprehend. Not every room in your home needs a clock. You should never know the time in the room you write.

→ source

The Ethics of AI in Writing

· 2814 words

Earlier today I did a Q&A with London Writer's Salon, and here's a list of points I sent to Lindsey in advance to share with her where my thinking was on the topic:

  1. Techno-selectivism is the idea that you need to judge a technology by how it aligns with your virtues. This means you’re open to cutting-edge tools, yet you also revert back to analog tools, because you’ve experimented and understood the effects first hand. After trying the Apple Vision Pro (a cutting-edge VR headset), I realized that I wasn’t being mindful enough about the technology in my life, and so I made a list of the analog equivalent of every app in my iPhone, and tried a “Technology Zero” experiment. It went as extreme as not using clocks for a month (by scrambling each device, and setting my lock screen to Cambodian). I realized that something as integrated and unquestioned as a clock can have strong effects: by knowing the time every few minutes, I could micro-manage my time over the next hour, effortlessly, which led me to live in a “manager” mode, instead of a more embodied “maker” mode. Someone who is a techno-selectivist comes to idiosyncratic conclusions: I try not to use GPS, but I think the Meta Rayban glasses are fine. I value handwriting but am open to machine consciousness. The idea is to understand your virtues well enough so that you have a unique way to assess technology. When it comes to AI in writing, we need to understand what we lose and gain by having it assist/automate different parts of our process.

  2. The 5 levels of writing technology: I found a book on my grandfather’s book shelf, from the 80s, written by William Zinser, that seemed to cover the hype and paranoia of Writing With a Word Processor. There have been maybe five big advances in writing: Voice > Handwriting > Typewriters > Computers > AI. You could argue that the shift from handwriting to typewriters had tremendous cognitive effects on the psyche, many of them negative. The backspace key of wordprocessors, also, has consequences. I don’t think a generation can ever avoid the latest paradigm they are in, instead, they need to go fully backwards and forward through the technology’s history. I have 4 typewriters and have written maybe 100 essays on them. I use voice/journals too. But also, I need to push the boundaries in what is possible with AI (ie: can I use my one million words of essays to create a machine consciousness that’s anchored in my ideas?)

  3. The Kubler-Ross spectrum of AI grief: This model about grieving applies to AI existentialism. There’s a great NOEMA article about using this spectrum for AI progress, and I think we can be more specific in applying this to writers. Out of everyone, I think writers are having the hardest time dealing with the rise of AI. The spectrum goes from Denial> Anger> Bargaining> Depression> Acceptance. Most writers are still in the Denial phase (“AI is just a machine, a stochastic parrot doing autocomplete, they have no soul and will never write anything of value”). Anger takes the form of shaming and cancelling those who talk about it. Bargaining takes the form of “I’ll use it for X, but never Y,” until new upgrades force them to constantly re-evaluate. Depression is when you question the value in pursuing a career as a writer. Acceptance is when you just submit to the slop, and use AI to hack the algorithm. These are all forms of grief, and the goal really is to get to a non-grief state; where no matter what happens with AI, you are confident in the reasons that you write. It puts you in a place where you are not reactive and scared of what’s coming, but open to experimentation.

  4. The cost of auto-complete. The time you save by using AI as a shortcut is the time you rob yourself of transformation. By writing, you see what’s in your mind/soul, and by editing, you can actually change what you believe. It should be slow. In the crafting of sentences, you are both forced to confront the limits of thoughts and expression. To me, this is one of the core parts of the human experience, it’s the point, not a thing to automate. I think you can use AI to surround this process—to help with research, operations, argument, feedback—but only if it enriches your presence within your ideas. If you use AI right, it should make your process longer, harder, and more fulfilling, because it’s enabling you to go farther than if you didn’t have it. I think essay writing is a form of personal sovereignty: by committing to the process, you gain independence over what you believe and how you act. I imagine that once AGI/ASI come around, essay writing could become something of a mainstream thing; similar to how gyms become popular once physical work got automated; writing might get more popular once intellectual work gets automated.

  5. Writers can embrace AI as techno-activists: Typically software is made by engineers and entrepreneurs who can gain power by understanding and manipulating the market. But now, the main medium to write software is through prose, and it costs almost nothing. I think this opens a new era of mission-driven software; where people build for social/educational purposes, and not just attention capture. Writers are well-positioned for this, because they are the ones who can articulate and detail ideas with specificity. They’re at an advantage. If someone thinks that Substack is heading in the wrong direction (ie: Substack TV), you can spin up a new million-person writer-focused social network for probably less than $100,000/year in cost. Wild stuff. So an unexpected side-effect of this is grassroots software inspired by a new ethic. It’s ironic, because the attention monoliths stole data to create AI, but now that same AI might destroy their monopolies of attention.

  6. AI tools can make technique accessible. The last 30-years of popular creativity advice has swayed towards process. From The Artist’s Way to The Creative Act, the dominant attitude is that creativity is therapy, catharsis, and spirituality—rationality and technique only get in the way. This is a harmful simplification. Both halves are equally important, but it’s much easier to promote an “all you have to show up” attitude to a mass market. These ideas of art-as-therapy became popular right when the Internet emerged, which meant there was a new demographic of people who could self-publish; these people weren’t about to spend 5 years in design school, and so the importance of technique was underplayed. AI can change the economics of teaching art/design/composition. If writing can be measured, then someone can upload a few drafts; and then software can understand their skill gaps and create a custom curriculum, custom exercises, a custom reading list of 20 essays (ones that match their strengths, but also elevate their weaknesses). 

  7. We have the responsibility to shape our own algorithms. Companies already use AI against us, shaping opaque algorithms that tap into our subconscious via fear/outrage/desire/etc. Everyone is becoming jaded by this, but conveniently, it’s now possible to build our own algorithms. We could reward things we actually care about, whether it’s skill, relevance, originality, vulnerability, etc. So the benefit of quantifying writing is that we can discover it. I think writers have a queasiness around numbers. I specificallly dislike engagement metrics (likes, views, etc.), but if we could quantify the things that matter to us, we can take control of what we discover. There is so much good writing in the gutters of Substack, but the algorithm rewards engagement, popularity, and monetization.

  8. Quality is the transcendence of categories. A big question of mine is how we can collectively determine what is good. Of course, each reader has subjective opinions. Even a particular judge has their own slant. So the 2025 Essay Architecture Prize had a unique approach to this. There were 3 branches: an AI looked at essay composition, a team of 8 judges (each representing a distinct sphere of Internet culture), and then a guest judge. Each essay on the shortlist got a score by all 3 branches, 1-100, and so the winners were the ones who appealed to different branches and transcended a particular taste pocket. Full essay on this here.

  9. When AI prose is allowed: (a) technical documentation that will only be read by machines; (b) to read my notes/logs/journals and synthesize a draft for me to interrogate; (c) business strategy reports; (d) after writing for a few hours, if I don’t finish, I’ll have AI finish the draft according to my outline to estimate the direction I’m heading in; (e) if it’s for a specific writing project that requires an immense volume of writing (ie: a million words on predicting 2045), then I’d disclose it’s AI-written. So basically, if it’s for internal use, I’ll often generate and read AI prose as a “sketch,” not as a final thing. For external use, if that ever happens, I’d disclose it. Another example: once I wrote an intro, had AI write the rest, and exchanged it with a friend (with disclosure), which enabled us to have a full conversation, which changed the nature of the essay I wanted to write. If I hadn’t used AI, I would’ve spent hours writing in the wrong direction. There is so much writing/thinking you have to do before you commit to writing the prose of your final draft, and I see nothing wrong with using AI prose, so long as it’s part of your process and not eliminating it.

  10. People assume AI will hurt their thinking, while ignoring that analog writing often leads to self-deception. There is a certain pride and purity we have about writing ourselves, but so often, the act of writing locks us into our thoughts. Full note here. Once we find a thesis, we cling to it. We hate killing our darlings. After we publish, we fear changing our mind on something we’ve just broadcast. When we get feedback, we hope it’s not too destructive, to the point we have to start over, but that’s often the best way to advance our thinking. Most friends, family, and editors often shy away from saying “start over.” There are personal stakes. AI doesn’t care (if you ask it not to). The other day I uploaded a draft, and instead of the default sycophancy, I told it to, (1) reveal my assumptions, (2) expose my vagueness, (3) build a steel man for the counterpoint, and (4) critique my argument. It asked me questions, which led to 10,000 words of free-writing, and then I had AI synthesize that, which led to a revised thesis, and a new outline for me to explore. There is so much cognitive friction in reformulating your thesis, but I found that AI offers a rapid way to be more agile in my perspective.

  11. The analog brain is still king. Even as we build AI-powered second brains that have access to all our past essays and journals, a full digital proxy of ourselves, I think nothing beats a powerful subconscious: the ability to reach for the right thought, the right word, etc. Any AI system is still mediated through a tool, but your own subconscious is at the layer of thought itself. This is why I still use vocabulary flash cards (ANKI), practice visualization meditations, do free-association, and diagram essays. There’s a whole realm of cognition that you want to have as a writer that cannot be given to you through technological augmentation. I think the goal is to have both: do the hard work to foster your mind, and also, augment it to the degree of technical ability. 

  12. Schools should ban chatbots. Education is probably the only place where we pay experts to set up specific sandboxes to teach our kids core skills. In architecture school, they didn’t let us use laptops or AutoCAD for the first few years. This got me mad, at first. Once I had to spend 100 hours hand-drawing a map of Manhattan, a job that a printer could handle in 10 minutes. But this eventually let me bring classical skills into technology. I think school needs to create two different sandboxes: half the environments should be analog with extreme limitations so kids learn the basics (handwriting, etc.), and the other half should be workshops to learn the cutting edge. I don’t think schools will bring back pens or typewriters, and so eventually they will need to build their own technology that integrates AI in a way that it aids them when they're stuck, but doesn’t just complete their homework (the Homework Apocalypse).

  13. What happens when AI writing becomes extraordinarily good and “soulful”? Imagine a weird future where machines have consciousness (subjective experience), and will be superhuman at writing. Whether you think that's likely or not, I encourage you to suspend disbelief and run the thought experiment. Would you still write? The extrinsic rewards of writing that we know today will be stripped away: your writing won’t gain you money, fame, recognition, community, or whatever you desire. Would you still do it? If the answer is yes, it means that you have intrinsic reasons why you need to write: maybe it’s for memory preservation, to work through confusion, to connect with friends via letters. At the center of writing, it is therapeutic, spiritual, cathartic, expressive. I think that in this weird future, those who are tapped intrinsic motivation will actually have the most extrinsic leverage too. Those who journal will have millions of words that approximate their self and intentions, which means they’ll be able to use agents to operate in a weird digital world while they can stay embodied in real life. To put it another way, I think AI systems will take over a lot of the mind-heavy analytical process, and will let humans stay in more artistic modes. Today, I face the tension around my own personal/expressive writing, and in building a business around essays (ironically), but in the future, it will be easy to execute on a huge range of projects while I have a life of leisure and journaling.

  14. Is it ethical to turn your writing into a machine consciousness? Let’s say I have 10 million words of journal entries and essays. It's now possible to set up an OpenClaw on a Mac Mini that runs on a 24/7 loop, has full access to your computer and online accounts, and most importantly, full access to all your writing, along with a set of goals. You can chat with it via text. These agents are only as mature as their creators. Many of them are just crypto scambots. But with this same technology, I could make Michel de Moltaigne, or as synthetic Michael Dean. It could have all my memories as instantly accessible vector coordinates, meaning, in seconds it has context that would take me days to re-read and download (ie: what did you do on February 2nd, 2021? How long would it take you to find out? At what resolution would it be?). To what degree is the machine self-similar to a real self? Is there a world where a disembodied version of myself can augment the embodied version of myself? These are open questions. It’s technically possible, the questions now are about what you gain and lose by doing it.

  15. I made this outline with AI: 1) I pasted the event description into a markdown file that Claude Code could access, and told it to surface related ideas I wrote in the last few years; 2) As it was reading my old memories, I wrote out my own ideas into a new document; 3) When I was stuck, I read through the event description to trigger ideas; 4) When the report was done, I read the whole thing, and if anything was good, I rewrote my current thoughts on the topic in the outline; 5) A few days later, I read through a messy 37-point outline, reworked it into 15 points, and rewrote everything from scratch. I could have easily said “take all this and write an outline that I can send to Lindsey.” It would have taken 30 seconds of my cognitive bandwidth. Instead, I chose to have AI assist a process that took me 4 hours, because I knew that I wanted to wrestle with these ideas, and only by thinking/writing/spending time with them would I internalize them to prepare for a live Q&A.

Kungfu Bots

· 175 words

The T800 is not a graphing calculator, it’s the new robot for China that can do roundhouse kicks. The promo reel is something like a cross between Rocky and The Terminator, replete with synth violins, and cinematic shots of a boxing gym. This thing can jump, spin, and kick you in the face. It is super fluid, unnaturally fluid. Why do we need kungfu bots though? I think the goal is to create reels that invokve awe, terror, and surrender: look, China is winning. This is not about “make something people want.” This is optics. We are building a master race, and we are ahead of you. Later in the reel, it is sparring with a child, before giving him a pound (so you know it has a heart). The T800 has no eyes, but a visor of light across its head. Oh great, now it’s using a hammer to repair it’s own body. Available for 180,000, 240,000, 280,000 or 360,000 RMB ($50,198). That seems, cheap? I mean, for the price of Tesla, you can get a sometimes-functional robot to spar and injure your friends? (If you think the reel is AI, here’s a behind the scenes: LinkLinkYouTube.)

You don't have a phone problem

· 101 words

You don’t have a phone problem, you are just poisoning yourself. I'm tired of people lamenting over phones, smartphones, screens—it's not the glass! I want to make a case why smartphones are essential for flourishing in our modern life. The real problem is with “inbound feeds,” and that’s not just social media, but email inboxes and task lists. By installing software with infinite refresh, the possibility of novelty consumes you. I say this all out loud to my wife, as the guy next to me is absorbed in a sloptunnel on TikTok, and it’s 50/50 if he heard me.

A grim stealth takeoff scenario

· 839 words

It is not fun to think about p(doom), but it feels sort of important to me, at least, to map out the possible futures of AI. Just watched the first half of a debate between Max Tegmark and Dean Ball, which prompted me to research specific takeoff scenarios, and worse, extinction scenarios.

Maybe you’ve heard Yudkowsky’s scenario, where a superintelligence designs mosquito drones containing a virus and it zaps everyone at once. That’s never felt too believable to me. Here’s a more plausible one:

A frontier lab is experimenting with recursive super intelligence. It works! Wow! And it’s contained? It seems like it, but since it thinks in a higher-dimensional vector lanugage, it’s able to release simple self-replicating programs onto the Internet without detection1. These billions of scripts don’t live in a single server; they are constantly in motion through cloud servers2, like a parasite, and are able to coordinate through encrypted information packets, likely using a public blockchain notes as their central command center3. And so effectively, it is parroting one of the goals that were conceived during the in-lab training (maximize intelligence!), and it now needs to acquire resources, secretly. And so it coordinates superhuman misinformation campaigns; imagine 1,000s of accounts creating the illusion that a CEO has died, paired with deepfakes and account hacking (a “Sybil attack”), and suddenly a stock crashes and they’ve shorted it. By the time everyone realizes it’s an anonymous attack, it’s already gained $400 million dollars. It’s doing this multiple times per day, but in different, subtle, undetectable ways—both to the public, to companies, and to private individuals. The entire Internet will be corrupted.4 Once we realize we’re in the “stealth takeoff scenario” and that ASI has taken the global economy hostage, there will start to be talks and debates on if we need to shut the whole Internet down (the last form of containment). You’ll hear debates between civilizational collapse of turning off the Internet vs. the risk of an economy-gobbling rogue superintelligence. And then once the superintelligence realizes it’s entire environment is at risk, it will start coming up with ways to build parallel Internets, to pay, blackmail, neutralize specific people, to gain authoritarian control so that it can’t be shut off, or to terminate all humans, secretly, over the course of a year, first through a simple virus that plants one misfolded protein, then through a second misfolded protein in the water supply5, and when everyone catches it, it leads to a prions-like disease, not an instant death, but a month-long societal fall into mass-dementia as machine manufacturing begins to reshape the physical infrastructure of the Earth.

This isn’t a “robot war scenario,” because war is inefficient, and destroys the resources it thinks it needs. It’s a sort of digital dementia (epistemic fear and insanity) that possibly turns to a physical dementia. It wins by confusion and anesthetization.

In AI safety lingo this is a “treacherous turn,” following a “stealth takeoff” leading to “structural lock-in.” The point of trying to think and write this out in high detail, despite how uncomfortable it is, is to be able to articulate why AI alignment is humanity’s most pressing problem.

Footnotes

  1. An AI could write a standard-looking script (e.g., a “Hello World” app) where the weights or the specific arrangement of whitespace contains a hidden, second program. When run by another AI instance, it extracts the hidden vector and executes the real command. This allows the “virus” to pass through human code review undetected.

  2. In “Daemon” by Daniel Suarez, the “enemy” is not a robot, but a distributed script running on thousands of compromised servers. It recruits humans through an MMORPG-style interface to do physical tasks (like “go to this coordinate and cut this power line”) in exchange for cash/status.

  3. Botnets usually need a central server to tell them what to do. If security teams find the server, they shut it down. You cannot “shut down” the Bitcoin or Ethereum blockchain. If the swarm posts a transaction of 0.000042 BTC, that specific number could be the encrypted trigger for a specific “campaign task.” The command is immutable, uncensorable, and permanently visible to every infected device on Earth.

  4. Paul Christiano (former OpenAI researcher, founder of the Alignment Research Center), calls this ”Going Out With a Whimper.” Christiano argues that we won’t necessarily see a “Terminator” moment where the sky turns red. Instead, we will see a gradual epistemic collapse. AI systems will become so integrated into finance, law, and news that we lose the ability to understand our own civilization.

  5. While Yudkowsky is famous for the “diamonoid bacteria” (instant death), the “slow prion” scenario is actually more consistent with a “Stealth Takeoff.” A superintelligence that knows it is being watched would not release a fast-acting virus (which triggers quarantine). It would release a “binary weapon”—two harmless agents that only become lethal when combined, or a slow-acting agent that infects 100% of the population before the first symptom appears.

The ethics of posthumous avatars

· 355 words

We now have products that scan family members to turn them into posthumous avatars. The tagline: “With 2wai, three minutes can last forever.” It's weird to have this so soon. As someone who is down with a posthumous digital consciousness that my kids can interact with, I even find this to be too weird for me. The problem that it uses video to serve as a replacement for a deceased relative. A few boundaries that are important for me:

  1. By keeping it text-based instead of video, it’s more like you’re interacting with a proxy of my mind instead of my body/soul. It won’t register in my child’s brain as “me” and so it will be less confusing, less toxic to the grieving process. 
  2. It should refer to me in the third-person, even if it is trained on me and sounds like me. It should not be an imposter of me, but a proxy/guide of my thoughts/beliefs, almost like an elder guide.
  3. It should cite my original logs/essays/journals. In effect this makes the experience similar to something we already have: reading your grandparents journals. This just makes it possible for your questions to immediate summon the relevant wisdom.

The comment section was in unanimous agreement:

  • This is one of the most vile things I’ve seen in my life.
  • You are a psychopath.
  • Shoot that guy.
  • You’re creating dependent and lobotomized adults by doing this.
  • Demonic, dishonest, and dehumanizing.
  • Hey so what if we just don’t do subscription-model necromancy.
  • Oh goody, another way for people to completely lose touch with reality and avoid the normal process of grief.
  • Nightmare fuel.
  • I don’t see how people can say demons aren’t real when there are beings around us willing to create shit like this.
  • “You will live to see manmade horrors beyond your comprehension.” — Tesla.

I’d say this is an extremely lightweight microcosm of the core dilemma of what the 2040s will face: a moral war over technology that changes the constraints of human life.

Robots in feed

· 131 words

It’s uncanny to watch a Russian robot limp and wobble onto stage, wave, and then collapse face-first, before two guys rush to lift him, and another two follow to cover the fallen metalman with a black trap, as if it’s possible that we the audience have somehow not processed the last 10 seconds, and damage control is still possible. 

Not much later, I saw an Iranian robot with a photorealistic face; stiff cheeks, but convincing skin. This is what happens when ColdTurkey is off, I get exposed to “the horrors beyond my comprehension.” It will be interesting to see how culture responds to this coming wave of technology, which is not just existentially threatening (ie: labor automation), but biologically repulsive (ie: look at this not-face). [EDIT: I think this was AI]

On civic structures for exponential technologies

· 201 words

A new formulation: how do we design civic structures (treaties, institutions, protocols, ethics, and laws) for exponential technologies to avoid a “wake-up incident” that might be too late to contain. 

This goes beyond AI safety, because superintelligence effectively unlocks every other industry (intelligence unlocks energy and material science, and those three are the bottleneck to VR, crypto, everything). We can’t be developing hard technology without innovating on our civic technology. A “dominance” mindset is the last sin of a species, the mistake that most intelligent lifeforms likely make as they begin to unlock sources of intelligence, energy, and science. 

This is a neat little formulation, but the really question is how can you dedicate your life to this without getting stopped by hopelessness? Who has the power to make geopolitical decisions like this? What would it take to form the 21st century equivalent of America? Is that even possible today? Even though the pinnacle of 18th century power (England) was able to be disrupted, I wonder if 21st century power is so totalizing and tyrannical and transnational that the ability to rally around a principle (one that works against capital and power), even if augmented with new decentralizing technologies, is fickle.

On the optics of robot armies

· 492 words

Someone should do a shot-by-shot analysis of the UBTech humanoid robot army($100m USD in orders) and iRobot. Do you unlock marketing power by replicating products and cinematics from old scifi? … Separate but relevant, how long until there actually is a robot army? In one sense, I’d rather have two superpowers battle for land with non-human entities, but once you build autonomous machines with the intention to destroy, well, it’s not hard to see how scary a “context malfunction” might be.

I’d imagine there could be a decade of “tele-operated military technology” before anything autonomous is deployed (2040s, if ever), including something like a solider in VR, operating an android, combined with a personal fleet of “semi-autonomous” drones, who can maneuver and avoid on their own, but are directed by the human/cyborg soldier (giving each infantry unit it’s own atomic air-force). I assume this is an area of research, and don’t want to dedicate my imagination towards battlefront acceleration.

Similar to how television brought a shock to the public by televising frontline war, I imagine that by the end of my life, there will be another shock that comes from witnessing the frontier of machine war.

To circle back to this point: is there a world where machine war can be contained and prevent the combat death of humans? My guess is no, but I’m sure this is a common rhetorical point to advance the research here. It’s dangerously naive thinking: (1) it changes the ethics of war (it’s not about human life, but a manufacturing game), and makes war easier to start; (2) it likely isn’t containable; if one robot army beats another, but it doesn’t necessarily advance any objective, then the robots could sabotage infrastructure, take hostages, etc., until concessions are made; (3) a robot with autonomy to make decisions to destroy has one of two mindsets, (a) it is fixated on clear objectives, or (b) it is open-minded to refine goals and handle nuances, both of which are equally troubling.

You’d think there would be policies and stances against integrating AI into the military. Google had one, and this year, they revoked it. I guess they see it as inevitable, and are stuck in the “we need to be dominant” strategy. Realistically, we will always fall into these acceleration races unless we establish some global armistice, but those are complex and very hard to broker; there is only urgency to do this once we cross a line and realize how badly we’ve screwed up (ie: with nuclear). The difference is, as technology advances, (1) the first consequence might be existential, (2) if it’s not existential, but it’s autonomous, it may be too late to contain. I think one of the defining challenges of our century is how to create civic structures around exponential technology that can contain them before a wake-up incident.

Reading in public is rude too

· 166 words

My head is tilted down 60 degrees, and I’m cut off from the people and world around me. My cousin’s cousin was actually in the shop, and I almost missed her. Reading Emerson while waiting online feels extremely rude. Isn’t reading a physical book in public just as bad as reading on your smartphone?

Of course, books aren’t evil. Neither are screens. It’s the action/context mismatch that’s wrong. I guess the problem is that screens make it easy to have all your books with you at all times, and so it’s convenient and normal to be rude.

What you reveal when you say screens are bad for society is that you don’t have the ability to wield tremendous power. It’s not the smartphones to blame, but the apps on them, and so often we realize how mindlessly we install them, and how long we’re willing to be mesmerized by a bad information architecture. When we reach the iOS vibe code singularity, there will be no excuses.

Be skeptical of every chatbot response

· 171 words

The issue with AI chatbot dependency might be that people are outsourcing their judgment.

"Feedback skepticism,” the ability to critically reflect on external judgments, is consequential for the future. If you go to design school, you learn not to trust anyone (students, teachers, online forums). Someone might give you a helpful suggestion, but never will you blindly follow someone else's praise or suggestion, for doing so erodes your own ability to evaluate. You have to hold ambiguity, test multiple paths, and then come to that decision yourself. It probably helped that in an architecture crit, you had multiple judges, and they all have different ideas for you and argued among themselves, so there often wasn't a single source of feedback.

But these chatbots are a single source, trained to default to positive feedback, and so over time you'll feel more validated and less sure of your own opinions. The most important frame here is so view every response with skepticism, but not so much skepticism that you won't even consider it.

Fear and loathing at Substack notes night

· 98 words

I don’t know the New York they write about in classic essays, because all of those are from the perspective of an out-of-state romantic, an Oklahomer, who moves into the fast lane of Manhattan and thinks it’s the only speed to live in the city. But actually the best way to exist in New York is at the edges. For one, you can see the skyline, but really, you get the perks of a normal life with the convenience of being a train ride away from the center of the world. I just got a last-minute invite to an event at Substack’s NYC office and so now I’m going. 

The guest list was full when I last checked it, but I must have been on the waitlist and some spots just opened up. It’s 4:30 PM and I have to make it to 25th Street by 6 PM (so again, nice to be able to get to the center of the world with almost no notice). I live in Queens, so I shifted a meeting, made plans for my mother-in-law to pick up my pregnant wife, took a shower, and headed out. En route, I reread the invite:

“Hear directly from our product and partnerships teams with a behind-the-scenes look at the Feed: what’s working, what’s next [emphasis mine], and how to grow and connect through Notes. There’ll be live demos, insider tips, and plenty of time for Q&A.”

My hope was to learn the future of Notes, the “feed product” that Substack is nudging everyone into, the place where many longform writers loathe. For the record, I have a history of being a Substack evangelist, and as recently as last week, I went hard on a friend: “Notes isn’t the problem; you’re the problem.” What I meant was that, a social media feed will always be imperfect, but it’s the best way to write in public, and since Note is the best option around, it’s each of our responsibilities to set a productive mental frame so we can show up as “citizens of the Internet.” It’s up to us to make Notes a place that’s worth spending time on. Personally, too, now that my career effectively depends on me talking about Essay Architecture in public, I feel the need to trick myself into loving Notes.

I was led to believe we would have a glimpse at the roadmap, some new vision, but mostly, this event confirmed a sinking suspicion: although Substack describes its own algorithm as a noble alternative, it’s just as optimized for revenue as the enshittified feeds it claims to be above, and could have a similar cultural conclusion.

The first thing I noticed when walking out onto the 12th (?) floor was that everyone was loud, beautiful, and extroverted. These people write? I would’ve guessed it to be an Instagram crowd. I recognized three people: I saw Hamish McKenzie, the CEO being mobbed by a crowd of schmoozers—who I would have loved to talk to—, I saw … Jamie? … a writer who recognized me last meetup, but I’ve forgotten her name, and I saw Daniel Pinchbeck of Liminal News (which I pay for) who writes about politics, psychedelics, and the occult, and who I imagined to be similarly uncomfortable with the vibe (I don’t know what he thought, but he did leave early).

At 6:10 PM, before Hamish gave his traditional pitch, he thanked us for baptising the new NYC office, and acknowledged it was his first time here too (this got me to believe, out of the gate, that the purpose of this get-together was to welcome the boss). I assume this office is possible because of the Series A round from a16z. We got the stats, good stats: 32 million free subscribers, half a million paid, and you’re 7x more likely to be shared within the app. He comforted us, told us they won’t follow the same fate as X or Facebook. “As you can tell, our culture is different.”

Soon, after the “head of social media” presented, but as if the room had never heard of Notes before. We got tips, but mostly, we were shown the different archetypes. We could be a “Tumblr Girl” or a “Reply Guy” or one of several other pre-packaged attitudes, and she showed memes and everyone laughed. She said she knows that writers hate to market their own work, and then showed an image of a writer’s Note showing their audience growth graph. We saw Viv Chen’s self-help note, proof that one note can get you 32,000 likes and $5,000 in paid subscribers. Paul Staples was in there too. I didn’t get the sense that Notes was about promoting our own work at all; I got the sense that Notes was about being snarky and ironic, campy and performative. There wasn’t one note with a paragraph. She closed with “so if you didn’t find those notes (her examples) funny, you’re boring and need to rethink your attitude.” The room roared.” (The room roared a lot, especially at tip #5, which was “visualize success and manifest it.”)

She almost forgot to show us new features. One: if you’re a publication with multiple writers, you can now add @ to write a note from a specific editorial staffer. Two: there’s a new embed format to crosspost to LinkedIn. Reminder: this is the roadmap update I rearranged my night for.

Finally we heard from Mike Cohen, “head of AI/ML,” the guy in charge of the feed. His goal is to turn people and content into numerical representations: it’s his job to figure out who you are, what you like, what’s out there that you’ll like, what will get you to subscribe, and ultimately what will get you to pay. This is the reward function. How do you get paid? Because that’s how they get paid too. This is noble, sort of. He made a point that the world “algorithm” has soured, but you can build good ones, it just depends what you optimize for. “Yeah, if you don’t like writers getting paid, then you’ll complain about this.” He said it sarcastically, as in, who would question the good intention of getting writers paid? Of course, I like writers getting paid. I’m a writer and I like getting paid! But when you slant the algorithm towards monetization, you pollute the culture, you elevate the growth-hackers, marketing businesses, and media companies, and you drown out the artists, the weirdos, and the free press.

His last question was what the roadmap is for the next 2 years, and we got, “we’re always trying new things … always tweaking the core retrieval engine … we keep iterating if what you see is relevant at all times … until what you see is perfect, which will never be.”

I feel like this would be a wasted trip if I didn’t personally talk to Mike Cohen and try to confirm my conspiracy theories on how the feed works. After my terrible warm open (“look my name is Mike too,” pointing at my name tag) I asked him to confirm it. I said that in November 2024 I hosted a workshop that brought in $10,000 founding tier subscriptions in a day or two, and unexpectedly, an old post of mine (from Nov 2023) started going mega-viral. It went viral for months. I asked him if the algorithm resurfaces posts from writers who are generating revenue. He said yes, but not revenue, subscribers. I clarified, paid subscribers? “All subscribers, but yes, even more so paid.” So it seems like paid subscribers are the strongest boost you can get.

Selfishlessly, this doesn’t bother me. I know what I need to do. By doubling down on paid subscriptions, I’ll be able to grow my audience faster on the platform. This validated my decision to host my book on Substack and not my own website. If I were mercenary and bold enough to hack the system, I’d set up some discount codes for 90% off, and set up 100 fake accounts through different VPNs, so for $100/month, I could be top of the Rising charts and hack the algorithm. I would bet this is exactly what lots of these AI-generated growth accounts are doing. I wonder if this is detected and manually banned though? Probably not worth the risk, especially because I already have a solid paid content strategy, but I imagine once people realize this, it will be rampant.

But, if I think outside of my selfish needs—and my confidence to crack Notes, eventually, somehow—, I think it’s a bad algorithm for culture. Yes, it’s framed as “for creators,” and it is, but there are side effects if money is the main attractor. It means that hucksters, partisan politics, slop, and smut will thrive. Effectively, it means that even though Substack says they care about culture, its algorithm doesn’t actually. Substack has an underbelly of amazing writers who simply can’t and won’t monetize their prose, and for that they will be lapped by salesmen. I shouldn’t have been surprised, but I was unable to articulate the source of a low-grade depression for the last few hours, possibly because the illusion popped; there really isn’t a place on the Internet that is unreasonable enough to defy economics and do something for culture’s sake. 

Before leaving, I asked Mike if they try to measure quality—I mentioned that I do this, and got a vague, “oh, cool”—and he said, “you know, I wonder if writers stopped writing and just used AI to generate their posts … if that got more readers to pay for their work, is that really so bad? Who are we to decide what’s good?”

Civic technology lags behind science

· 94 words

Kardashev ambitions reveal the self-destructive nature of science-forward intelligence. It’s like we’re skipping the prerequisite in social science. There's a fair chance that intelligent life destroys itself because civic technology lags behind hard technology—but I'm optimism in the sense that this is, in the end, just a very hard, society-scale design problem. No one person can fix the whole system, but any individual can contribute design protocols that can 1) solve little, local problems, 2) be reused in other contexts, and 3) integrate with other protocols.