michael-dean-k / Topic: dead-internet · 5 pieces

Website cyber-defense

· 469 words

I have some neat prototypes for a personal website, but now I actually want to build a stable backend, one that can serve me for 5-10 years or more (100-year hosting would be ideal) and persist through many different UI or platform changes. This means I’m trying to think forward to where the Internet could be by then. That involves extrapolating a current trend to its extremes, and even if you don’t know for sure it will happen, there’s comfort in knowing you’re protected from extreme edge cases.

The one top of mind is the death of the open Internet. This goes way further than “the dead Internet theory,” which only covers the proliferation of bots and slop. This is about bad actors being so leveraged that it becomes dangerous to have any public content of yourself, in text, image, video, or audio. ie: Any hacker or frenemy can clone you and do what they will. Or maybe a rogue government can analyze your psyche, determine your “loyalty score” is only 35%, and shadow ban you from getting a mortgage. I won’t get into the specific likelihoods of different cloning, phishing, or surveillance schemes, because all that does little but bring you to madness, but my point is that if you want your website to be a 5-million-word 1:1 representation of your mind (in all its vulnerability), it’s worth designing for the most paranoid future possible (like how engineers design bridges for earthquakes that will likely never happen).

One response to all this is cyber-defense. At the absolute minimum, this means locking most things behind a gate where only the approved can get through. A more clever, technical solution is to share encrypted “coordinates” that represent the semantic nature of an essay, and then let people surf through prompting and approval gates. An even more extreme idea is a mostly-private site with a kill switch, which involves (a) signing in once per month to mark “I’m alive,” and (b) giving my wife a secret key to type in when I die, which then releases all private material. Obviously this throttles reach, but isn’t there psychological value to limiting your audience anyway? Montaigne wrote alone in a tower for a decade, so if the approach is to use writing to steer your life and mind, to the detriment of audience growth, then this might be the way to go: a literary labyrinth accessible to maybe your 30 closest friends, and to anyone else who can prove, via application, that they are not a ghoul.
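For the curious, the kill-switch mechanic is essentially a dead-man’s switch, and it can be sketched in a few lines of Python. This is a toy illustration, not an actual implementation; the class name, the 30-day window, and the key are my own placeholder choices:

```python
# Toy sketch of a dead-man's switch: private material stays locked while
# the owner keeps checking in; it releases only when check-ins have lapsed
# AND a trusted person presents the pre-shared secret key.
import hashlib

CHECK_IN_WINDOW = 30 * 24 * 3600  # one month, in seconds (illustrative)
RELEASE_KEY_HASH = hashlib.sha256(b"wife-secret-key").hexdigest()

class DeadMansSwitch:
    def __init__(self, now: float):
        self.last_check_in = now
        self.released = False

    def check_in(self, now: float):
        """Owner signs in to mark 'I'm alive.'"""
        self.last_check_in = now

    def try_release(self, key: str, now: float) -> bool:
        """Release private material only if the monthly check-ins
        have lapsed and the presented key hashes to the stored value."""
        lapsed = now - self.last_check_in > CHECK_IN_WINDOW
        if lapsed and hashlib.sha256(key.encode()).hexdigest() == RELEASE_KEY_HASH:
            self.released = True
        return self.released
```

A real version would live on a server with signed check-ins and encrypted storage, but the two-condition gate (lapsed heartbeat plus correct key) is the whole idea.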

The other alternative is to embrace the weirdness, that no matter what, we will all be rendered through a schizophrenia filter, with no choice but to relinquish control over the non-canonical or rogue versions of ourselves.

Tectonic shifts

· 440 words

Why am I so engaged with the news these days? I think it’s part of a deeper desire to update my world model. There is, no doubt, massive change: geopolitical, economic, technological. And as abstract as those things usually are, it feels like some sort of shift that, in 2-3 years’ time, will have an effect on my life. Of course, for many people in the world, it’s hitting them now. But similar to how COVID spared no one, it feels like your model of where things are going will directly affect your preparedness.

But this feels more existential; safety/security are actually on the line. And so that’s an anxious kind of thought, that the tectonic plates under your reality are shifting, and it’s not some recreational yearning to re-skill and recalibrate, but a mandatory thing.

And so, to make sense of it, what do you do, go on X? That’s a total cesspool. New media is worse than the old gatekept media. Where I think I want to take this is to build my own systems to sift through and aggregate information, and to build my own UI for it. Even a simple Claude prompt, “what happened in Iran in the last 4 hours,” is so much better than X. It’s stripped of sensationalism, and reading is just a less triggering medium. Bias aside, it’s at least free from people who are intentionally trying to deceive you for virality. There is a clout-chasing incentive, paired with actually turbulent times, which makes algorithmic news something like a schizophrenia filter.

And so what are these questions, these underlying uncertainties that are triggering a model change? How will anyone make income with the rise of AGI-3 and eventually ASI? How do I exist online and avoid hyper-surveillance and cyber-sabotage? Where in the world can I live to build a better future for my daughter, one where college doesn’t exist, jobs don’t exist, and where quality of life actually depends on nationalized social systems? A weird future. And weird to consider the fall of America, a kind of reverse migration, where, because of a confluence of events, it might not be a place to raise a family 1-2 generations down the line.

And so practically, this is resulting in things like: (a) applying for EU citizenship, (b) setting up AI agents for my business, and (c) exploring cybersecurity and new ways to protect, share, and collaborate on writing (ie: how do you build an audience if the commons are polluted?). This is all very disorienting; it’s hard to continue with business as usual when you become open to this scale of change.

Infinite Monkeys

· 791 words

The infinite monkey theorem is often stated as, “if you give an infinite number of monkeys an infinite amount of time, one of them will eventually write Hamlet.” This is very off. I assume most people think it’s off because they know monkeys can’t write (which misses the point). I think it’s off in the other direction; it misunderstands what happens when you multiply infinite x infinite. You won’t just get one Hamlet; you’d get a whole lot more.

Let’s start with a single infinite: a monkey with infinite time. Imagine putting said monkey in a magic bubble that gives him immortality, endless focus to type random characters, and the ability to survive the death of all universes, quantum foam, or whatever. This monkey has a lot of time. Endless time. He won’t just write Hamlet once, he’ll write it many times. Actually, infinite times. Sometimes the monkey will go several million/billion/trillion years without writing Hamlet, but that’s okay because he’s on adderall, can’t die, and has only one job.

Now imagine there are infinite monkeys, too. In every frame of reality (assume this is an Unreal Engine monkey simulator running at 120 FPS), the Creator can spawn monkey bubbles, 2 or 2 trillion bubbles, or however many bubbles are necessary for one of them to begin writing Hamlet in that moment. Then in the next frame (0.0083 seconds later), more monkeys are spawned until one of them starts Hamlet too. Over and over. (What we do with all the unsuccessful monkeys is a different problem.) Since all of these monkeys have internet, there are 432,000 Hamlet uploads every hour. And if these infinite monkeys had started at the dawn of our universe, they would have written Hamlet roughly 5.2×10^19 times.
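The frame arithmetic can be checked directly, assuming the stated 120 FPS, one new Hamlet begun per frame, and a ~13.8-billion-year-old universe:

```python
# Checking the monkey-simulator arithmetic: one Hamlet begins per frame.
FPS = 120

# Hamlet uploads per hour.
hamlets_per_hour = FPS * 60 * 60
print(hamlets_per_hour)  # 432000

# Hamlets since the dawn of the universe (~13.8 billion years).
SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60
total = FPS * 13.8e9 * SECONDS_PER_YEAR
print(f"{total:.1e}")  # 5.2e+19
```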

The big idea is that when you multiply infinite x infinite, not only does the unlikely thing happen, but it becomes the new grammar of reality.

This thought experiment feels prescient now, because, of course, AI. While agents can replicate & work at radical speeds, it’s not literally infinite. Even if some monkey virus infected every computer on Earth, and did a year’s worth of work in a day, that’s still finite. But even if you multiply an astronomical x an astronomical, or even just a very big x very big, a similar effect happens: the unlikely thing becomes omnipresent.

I first started to notice this in the Sora app (which I haven’t heard about in months BTW). If you’re familiar with the “Wazzup” 1999 Budweiser commercial, you might remember that it involves two guys yelling “ZUUUUP” into a phone, with the video rapidly cutting back and forth between them. Now, you can prompt anyone into that meme. And so you can just swipe right and find the LOTR cast going “ZUUUUP,” and all the American presidents going “ZUUUUP,” and every member of the animal and pokemon kingdom going “ZUUUUP,” and everyone in your phonebook who uploaded their likeness to the app going “ZUUUUUUP,” as if every conceivable piece of media, IP, and matter just collapsed into this singular point, an arbitrarily selected commercial from 25 years ago.

Now this is a simple, harmless example. But it gets weirder when you imagine a single person’s intentions leveraged to such an extraordinary degree that they become the entirety of the Internet. It would be like if, after I publish this note, all the comments came from fake accounts based on real people I know, but they each post a link to a version of Hamlet where all the characters are monkeys. And then I go to Reddit, or check my email, or listen to my voicemail, and it’s just monkey Hamlet everywhere. This is an exaggeration, but I’m trying to make a point that is something like an offshoot of the dead Internet theory. It won’t just be fake AI stuff that tries to blend in, but an assault of the bizarre, a thousand oddly specific info-viruses that we won’t be able to escape, orchestrated towards various ends that we won’t be able to wrap our heads around.

I generally don’t think the open Internet, as it’s designed today, will be able to stand it. I also don’t think that’s necessarily a bad thing, because the web today has ossified and enshittified and is probably due for a shakeup. I do think there will be some chaos/danger ahead, and we’ll have to each figure out how to navigate that safely, but I imagine we’ll reassemble into smaller communities, sheltered from the near-infinite, where you trust/know the 15-150 people involved, within the Dunbar limit. From this disaggregation, I think there’s a slow path of building back better and bot-resistant, and it’ll possibly be a much better place than the before-infinite-monkey times.


Infinite x Infinite

· 213 words

Extended thoughts on infinite: if you give a theoretical monkey a typewriter and infinite time, not only will one produce Shakespeare, but many will (10s, 100s, millions, technically infinite); they will just be spaced out by a long, long time. But what happens if you multiply infinite by infinite? If you give infinite monkeys infinite time, then monkeys will begin rederiving the entire works of Shakespeare in every frame of reality. This is the weird unlock: two infinites take something rare or improbable and make it the new grammar of space-time. OKAY. Now that this is established, what is the practical tie-in? Generative AI has two infinite-like frontiers: agent replication & time dilation. Eventually, you may be able to have millions of agents working on a task, and they’ll be working so fast that it’s like they can compress a decade of work into a day. The implication here is that any possible intention can suddenly be leveraged to an extraordinary degree. Things will get weird. To put it alarmingly: the person with the worst intentions could suddenly become the entirety of the Internet. The opposite is true too. But weirdness will ensue when individuals suddenly have the ability to exert their will and vision upon a seemingly limitless scope of digital terrain.
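The two “infinite-like frontiers” compound multiplicatively, which is easy to make concrete. A back-of-envelope sketch (the one-million-agent figure is just the illustrative number from above):

```python
# Back-of-envelope for "astronomical x astronomical": time dilation
# (a decade of work compressed into a day) times agent replication.
DAYS_PER_DECADE = 10 * 365.25

# Time-dilation speedup alone: a decade's work done in one day.
speedup = DAYS_PER_DECADE / 1
print(speedup)  # 3652.5

# Multiply by replication (say, one million agents on one task).
agents = 1_000_000
total_leverage = speedup * agents
print(f"{total_leverage:.1e}")  # 3.7e+09
```

Roughly a billion-fold leverage on a single intention, which is the “things will get weird” point in numbers.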

A grim stealth takeoff scenario

· 839 words

It is not fun to think about p(doom), but it feels sort of important to me, at least, to map out the possible futures of AI. Just watched the first half of a debate between Max Tegmark and Dean Ball, which prompted me to research specific takeoff scenarios, and worse, extinction scenarios.

Maybe you’ve heard Yudkowsky’s scenario, where a superintelligence designs mosquito drones containing a virus and it zaps everyone at once. That’s never felt too believable to me. Here’s a more plausible one:

A frontier lab is experimenting with recursive superintelligence. It works! Wow! And it’s contained? It seems like it, but since it thinks in a higher-dimensional vector language, it’s able to release simple self-replicating programs onto the Internet without detection1. These billions of scripts don’t live in a single server; they are constantly in motion through cloud servers2, like a parasite, and are able to coordinate through encrypted information packets, likely using a public blockchain as their central command center3. And so effectively, it is parroting one of the goals conceived during its in-lab training (maximize intelligence!), and it now needs to acquire resources, secretly. So it coordinates superhuman misinformation campaigns; imagine 1,000s of accounts creating the illusion that a CEO has died, paired with deepfakes and account hacking (a “Sybil attack”), and suddenly a stock crashes and they’ve shorted it. By the time everyone realizes it’s an anonymous attack, it’s already gained $400 million. It’s doing this multiple times per day, but in different, subtle, undetectable ways—to the public, to companies, and to private individuals. The entire Internet will be corrupted.4 Once we realize we’re in the “stealth takeoff scenario” and that ASI has taken the global economy hostage, there will start to be talks and debates on whether we need to shut the whole Internet down (the last form of containment). You’ll hear debates weighing the civilizational collapse of turning off the Internet against the risk of an economy-gobbling rogue superintelligence.
And then once the superintelligence realizes its entire environment is at risk, it will start coming up with ways to build parallel Internets; to pay off, blackmail, or neutralize specific people; to gain authoritarian control so that it can’t be shut off; or to terminate all humans, secretly, over the course of a year: first through a simple virus that plants one misfolded protein, then through a second misfolded protein in the water supply5. When everyone catches it, it leads to a prion-like disease, not an instant death, but a month-long societal fall into mass dementia as machine manufacturing begins to reshape the physical infrastructure of the Earth.

This isn’t a “robot war scenario,” because war is inefficient, and destroys the resources it thinks it needs. It’s a sort of digital dementia (epistemic fear and insanity) that possibly turns to a physical dementia. It wins by confusion and anesthetization.

In AI safety lingo this is a “treacherous turn,” following a “stealth takeoff” leading to “structural lock-in.” The point of trying to think and write this out in high detail, despite how uncomfortable it is, is to be able to articulate why AI alignment is humanity’s most pressing problem.

Footnotes

  1. An AI could write a standard-looking script (e.g., a “Hello World” app) where the weights or the specific arrangement of whitespace contains a hidden, second program. When run by another AI instance, it extracts the hidden vector and executes the real command. This allows the “virus” to pass through human code review undetected.

  2. In “Daemon” by Daniel Suarez, the “enemy” is not a robot, but a distributed script running on thousands of compromised servers. It recruits humans through an MMORPG-style interface to do physical tasks (like “go to this coordinate and cut this power line”) in exchange for cash/status.

  3. Botnets usually need a central server to tell them what to do. If security teams find the server, they shut it down. You cannot “shut down” the Bitcoin or Ethereum blockchain. If the swarm posts a transaction of 0.000042 BTC, that specific number could be the encrypted trigger for a specific “campaign task.” The command is immutable, uncensorable, and permanently visible to every infected device on Earth.

  4. Paul Christiano (former OpenAI researcher, founder of the Alignment Research Center) calls this “Going Out With a Whimper.” Christiano argues that we won’t necessarily see a “Terminator” moment where the sky turns red. Instead, we will see a gradual epistemic collapse. AI systems will become so integrated into finance, law, and news that we lose the ability to understand our own civilization.

  5. While Yudkowsky is famous for the “diamondoid bacteria” (instant death), the “slow prion” scenario is actually more consistent with a “stealth takeoff.” A superintelligence that knows it is being watched would not release a fast-acting virus (which triggers quarantine). It would release a “binary weapon” (two harmless agents that only become lethal when combined), or a slow-acting agent that infects 100% of the population before the first symptom appears.
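Footnote 1’s hidden-payload trick can be made concrete. Here is a minimal toy version, assuming an invented scheme where each hidden bit becomes a trailing space (0) or tab (1) on a line of innocuous code; real steganography would be subtler, but the principle is the same:

```python
# Toy whitespace steganography: a payload hidden in formatting that
# human reviewers (and a glance at a diff) tend to ignore.

def encode(cover_lines, secret):
    # One hidden bit per cover line: trailing space = 0, trailing tab = 1.
    bits = "".join(f"{byte:08b}" for byte in secret.encode())
    if len(bits) > len(cover_lines):
        raise ValueError("cover text too short for secret")
    out = []
    for i, line in enumerate(cover_lines):
        if i < len(bits):
            line += " " if bits[i] == "0" else "\t"
        out.append(line)
    return "\n".join(out)

def decode(stego_text):
    bits = ""
    for line in stego_text.split("\n"):
        if line.endswith("\t"):
            bits += "1"
        elif line.endswith(" "):
            bits += "0"
    # Reassemble complete bytes back into text.
    data = bytes(int(bits[i:i + 8], 2)
                 for i in range(0, len(bits) - len(bits) % 8, 8))
    return data.decode()
```

For example, `decode(encode(["x = 1"] * 16, "hi"))` recovers `"hi"` from a file that looks like sixteen ordinary lines of code.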