Topic: ai-psychosis · 5 pieces

A personal labyrinth

· 1287 words

My personal website is “out of the bag.” Meaning, it’s not a private thing shared among 3-5 friends anymore; I excitedly shared it with Essay Club yesterday (60 people or so). I am leaking it prematurely out of the giddy hope that personal websites are the new paradigm for writers, an escape from the enshittified commons. But I have to admit that I haven’t thought through two important questions yet, so here goes:

1) Does this kill discovery?

If I instead published all my ideas in real-time on Substack Notes, would my audience grow more? Probably. The reality is we all self-censor in public feeds, in a thousand different ways, so it’s not like all of this could naturally emerge in a feed. I tried this in January: I killed my logging practice with the goal of doing it all on Notes. For two weeks I was able to post spontaneously, but I found that once you stop momentum, it’s very hard to get out of your head and back into that groove. Overall, I just wrote less. I wonder if there’s truth to the idea that all writing practices grow/incubate/evolve better in semi-public spaces. It’s not that you should ignore the occasional public blast; it’s that there’s a natural progression to nurturing ideas.

Another angle is, “I’m not interested in audience growth,” which is true in that growth isn’t motivating for me, but I am in several ways entangled with growth, meaning a complete lack of it could threaten the sustainability of my writing. So a middle ground is to incubate on my website and then selectively drip ideas through Notes and newsletters. I could do a weekly or bi-weekly digest, Austin Kleon-style (“10 logs from last week” + essay visualization + updates, etc.). I’m less sure how I would do it on Notes. Daily? Sporadically? Something else? Either way, this brings back the whole “public-to-private bridge” concept from Write of Passage. I think some people abandoned websites and just accepted the feeds. I know in 2023 I shifted entirely to Substack thinking it could be my entire digital home, but now it feels like rented land.

So my website gets maybe an A- in unlocking my writing practice, only a C in growth, and maybe a B in conversion? As in, if someone spends a lot of time on my site (and people have told me they’ve spent hours in my logs), they’re more likely to trust me—due to the sprawling, unoptimized, honest nature of things—and more likely to get a paid subscription or join Essay Club. Unexpectedly, personal writing could be a more honest and more effective form of “marketing” than strategic value-focused content (“Are you in hell? Well I’ve got the thing for you…”).

2) Is there risk in having all my ideas public?

Now that I’m in my own place, relatively unchained, saying what I want, and reading and writing about political science a bit more (I have a draft comparing Karp’s Technorepublic to Hobbes’s Leviathan), I’m a bit paranoid about sharing ideas so openly. It’s hard to imagine facing any real-life consequences for the words I write; I’m just a nobody! It feels hubristic to think I’d be considered a threat to the state for my thinking, but maybe these thoughts are natural, considering we’re being urged to accept an AI-powered surveillance state in exchange for security. (It’s not that I think any of my writing is particularly rogue, but say I start thinking through a scheme to organize a million swing-state voters around a single-issue voting boycott to pass an election campaign reform bill; you can see how democratic ideas might seem threatening to a state.)

It’s effortless for a state agency to scrape the Internet, build psychographic profiles of its citizens, and give each a “loyalty score.” Let’s imagine they also have an “influence score,” determining how much sway you hold over other citizens. If you have medium levels of loyalty and influence, you’re probably not being actively monitored. But if you have extremely low loyalty (L=5/100), you’re a threat even at low influence (I=0), because you might be a terrorist; and if you have extremely high influence (I=95), even slight disloyalty (L=45) is a risk too. And if it’s not the state absorbing my context, it could be independent actors scraping my site to clone me and do what they will…
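The two-score monitoring logic above can be sketched as a toy function. Everything here is invented for illustration, with thresholds chosen only to match the hypothetical numbers in the paragraph:

```python
# Toy sketch of the hypothetical loyalty/influence monitoring rule
# described above. All thresholds are invented for illustration,
# not drawn from any real system.

def flagged_for_monitoring(loyalty: int, influence: int) -> bool:
    """Both scores on a 0-100 scale."""
    if loyalty <= 5:
        # Extreme disloyalty is flagged even at zero influence.
        return True
    if influence >= 95 and loyalty < 50:
        # Very high reach plus even slight disloyalty is also flagged.
        return True
    # Medium loyalty and influence: probably not actively monitored.
    return False

print(flagged_for_monitoring(5, 0))    # extremely low loyalty, no influence -> True
print(flagged_for_monitoring(45, 95))  # slight disloyalty, very high influence -> True
print(flagged_for_monitoring(60, 50))  # middle of the pack -> False
```

The unsettling part is how few lines it takes: once the scores exist, the monitoring policy is trivial.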

I guess the point is that AI creates such leverage over information that your own personal data becomes extremely valuable. It can be leveraged not just by you, but by anyone who has it. A personal website of an unfiltered nature is a higher-resolution signal than a social media profile, where most interactions are shallow.

Grasping at a solution

If all these concerns are justified (and maybe they’re not), then what are the practical methods of maintaining privacy? I’ve already written ideas about security gates and embedding-based encryption, and that’s all technologically neat, but it creates friction for the readers! Maybe that’s okay? But then this ignores the “entangled with growth” constraint from above…

And so maybe the third and only way through is to build an encryption solution whose UX is both alluring and enjoyable for the reader.

This starts by understanding how websites get scraped, building solutions to avoid it, and then shaping those solutions to be reader-first. You can only really do this by scraping yourself. I’ve scraped full portfolios from Substack in two different ways, and even a decade’s worth of Marginal Revolution posts. At a minimum, this means avoiding RSS and plain static HTML, which this (current) site already violates (ie: ideally it would sit behind a server and require permissions to load).
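As a tiny illustration of how little work an RSS feed leaves a scraper, here is a sketch using only Python’s standard library and a made-up feed; the entire content arrives as machine-readable XML:

```python
# Minimal sketch of why an RSS feed is a scraper's best friend:
# every post is already structured XML, parseable with the standard
# library alone. The feed below is a made-up example.
import xml.etree.ElementTree as ET

FEED = """<rss version="2.0"><channel>
  <title>Example Personal Site</title>
  <item><title>A personal labyrinth</title>
    <description>Full post text, free for the taking...</description></item>
  <item><title>Website cyber-defense</title>
    <description>Also fully exposed...</description></item>
</channel></rss>"""

root = ET.fromstring(FEED)
posts = [item.findtext("title") for item in root.iter("item")]
print(posts)  # ['A personal labyrinth', 'Website cyber-defense']
```

Swap the string for one `urlopen` call against a real feed URL and you have a working portfolio scraper, which is exactly why a privacy-first site would not publish one.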

Scraper defenses can prevent automated gathering, but not a person or agency that has already found your site and is willing to sit through slower, manual methods to extract information. A defense there would require gating and admin approval, another hindrance. There is something here about taking monetization mechanics (paywalls) and reinventing them for privacy’s sake. Maybe the way around this is to only encrypt a portion of the content, say 50%, with cryptic previews of what lies beyond (through titles, redactions, or chaos).

To try to synthesize this all together, what if a website were a video game?

Website as gamified maze?

As smart as today’s AIs are, they still can’t beat Pokemon. They can transform text and code better than the world’s best engineers, but ask them to navigate an environment where vision and long-term memory are required and they bomb. Pokemon has very simple inputs, too: four navigational directions and a Click/Cancel boolean. If you made navigation more challenging, with inputs that required hand-eye coordination, that could solve two problems: it scrambles existing scrapers, and it creates a novel UX.

I also sense there’s something to turning a website into a literal maze: not just an overwhelming sprawl of hyperlinks, but an actual video game you have to navigate (it would be neat if notes were somehow semantically distributed across a map, so there are “towns” of ideas). Can friction be made gamified, exploratory, enjoyable? Maybe it’s not only a matter of walking around, but of solving puzzles/riddles at gates to advance deeper into the labyrinth and find more sensitive ideas. Maybe some gates require passphrases, or interactions with me. There could even be a Minotaur at the center who holds my deepest memories, aspirations, and fears, and if you can kill the Minotaur you get the passphrase to my Bitcoin wallet.

Website cyber-defense

· 469 words

I have some neat prototypes for a personal website, but now I actually want to build a stable backend, one that can serve me for 5-10 years or more (100-year hosting would be ideal) and persist across many different UI or platform changes. This means trying to think forward to where the Internet could be by then. That involves extrapolating current trends to their extremes; even if you don’t know for sure they’ll happen, there’s comfort in knowing you’re protected from extreme edge cases.

The one top of mind is the death of the open Internet. This goes way further than “dead Internet theory,” which only covers the proliferation of bots and slop. This is about bad actors being so leveraged that it becomes dangerous to have any public content of yourself, in text, image, video, or audio. ie: Any hacker or frenemy can clone you and do what they will. Or maybe a rogue government analyzes your psyche, determines your “loyalty score” is only 35%, and shadow-bans you from getting a mortgage. I won’t get into the likelihood of specific cloning, phishing, or surveillance schemes, because that does little but drive you to madness, but my point is that if you want your website to be a 5-million-word 1:1 representation of your mind (in all its vulnerability), it’s worth designing for the most paranoid future possible (like how engineers design bridges for earthquakes that will likely never happen).

One response to all this is cyber-defense. At the absolute minimum, this means locking most things behind a gate only the approved can get through. A more clever, technical solution is to share encrypted “coordinates” that represent the semantic nature of an essay, and then let people surf through prompting and approval gates. An even more extreme idea is a mostly-private site with a kill switch, which involves (a) signing in once per month to mark “I’m alive,” and (b) giving my wife a secret key to type in when I die, which then releases all private material. Obviously this throttles reach, but isn’t there psychological value in limiting your audience anyway? Montaigne wrote alone in a tower for a decade. If the approach is to use writing to steer your life and mind, to the detriment of audience growth, then this might be the way to go: a literary labyrinth accessible to maybe your 30 closest friends, plus anyone who can prove by application that they are not a ghoul.
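The kill-switch mechanism can be sketched as a toy check. All names, dates, and the 31-day window here are invented for illustration; a real dead man’s switch would also need tamper-resistant storage and real key management:

```python
# Toy sketch of the dead man's switch described above: private
# material unlocks only if the owner has stopped checking in AND
# the designated keyholder supplies the secret key.
from datetime import datetime, timedelta

CHECK_IN_WINDOW = timedelta(days=31)  # one monthly "I'm alive" sign-in

def vault_unlocked(last_check_in: datetime, entered_key: str,
                   secret_key: str, now: datetime) -> bool:
    owner_presumed_gone = now - last_check_in > CHECK_IN_WINDOW
    return owner_presumed_gone and entered_key == secret_key

now = datetime(2026, 3, 1)
# Owner checked in 9 days ago: still locked, even with the right key.
print(vault_unlocked(datetime(2026, 2, 20), "hunter2", "hunter2", now))  # False
# Owner silent for 90 days and the key matches: vault opens.
print(vault_unlocked(datetime(2025, 12, 1), "hunter2", "hunter2", now))  # True
```

The two conditions are deliberately conjunctive: a stolen key alone does nothing while the owner keeps checking in, and a lapsed check-in alone releases nothing without the key.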

The other alternative is to embrace the weirdness, that no matter what, we will all be rendered through a schizophrenia filter, with no choice but to relinquish control over the non-canonical or rogue versions of ourselves.

Heuristics for systems

· 526 words

I declared to my wife this morning that DeantownOS is getting retired. It’s been 3 months since I spiraled into Claude Code for personal systems, and I’m at the point in the curve where the amazement has normalized and I’ve accepted the fact that I’m in a trough of disillusionment. The question now is revise or abort.

The case for aborting ties back to Oliver Burkeman’s Four Thousand Weeks, which popularized the idea that all systems are methods to procrastinate on hard decisions. They give the illusion that you can do everything, and since AI meaningfully multiplies the volume and range of things you can do, it tempts you to build galaxy-brained systems. What I think we fail to realize while in a vibe-coding frenzy is the psychic cost of remembering and maintaining the stuff we build. Yes, it is appealing to “reclaim my computer” and rebuild everything I use as personal software (from Obsidian to Gmail), and it’s even possible, but it’s a new breed of Sisyphean struggle. Once you can mold your own software around you, it’s too easy to endlessly mold, to lose sight of the work and just tinker on your exoskeleton.

I’m obviously skeptical, but I’m still a believer. If I were to revise, to rebuild my Claude stack from scratch, I would have to develop a few heuristics to keep myself from short-circuiting.

The first one that comes to mind is “will this matter once I’m dead?” Ie: writing an essay matters, because I imagine one day my daughter will read it and get to know me better, or at the very least, future me in 35 years may enjoy reading the words of my past self. But to create detailed daily files that get spliced into atomic “routing files” that then get saved again to a new destination folder, existing either as (a) mere context for AI, or (b) material requiring manual effort to prune into something that matters once I’m dead, is to create waaaay too many layers of abstraction between the source and the Work. When I read back my writing from the last few months, only a small portion is valuable enough to be saved as “logs” in my archive. I was writing for AI, not for my future self.

I made the assumption that atomic daily files are the kernel of a system, and it was an axiom I could never undo. There’s maybe another principle here: “don’t build load-bearing infrastructure on an unproven axiom.”

Another one could be “don’t assume future you will have bandwidth” to do X every day/week/month. Every day I had to review how my AI system proposed to route my logs, and eventually I’d ignore it and get backed up. This means that if something isn’t truly automated, I should be very cautious of it. It’s possible to do one little step forever, but not a hundred. Not every promise has brush-your-teeth-scale reliability.

What I’m getting at is that it’s not about maximizing or neglecting systems, but about understanding the right principles so you build something that is actually in service of your life.

Apocalyptic Wonder

· 683 words

An otherwise simple walk to catch a train into the city had a dimension I guess I’ll describe as “apocalyptic wonder.” I don’t mean that in the “end of the world” sense, but in the “unraveling” sense of the word. It was like every phenomenon—a passerby’s limp, a tasteless building, Broadway advertisements—came with a decision: I could see it through my usual categories, almost like through a foggy glass of analysis, or I could imagine and wholeheartedly believe the most generous and profound interpretation possible. And when you adopt that second option as a lens, one thing builds off another until there’s a cascade and you have chills over extremely ordinary things. A grumpy commuter is not someone to judge, but someone deserving of parental love; you imagine the two of you as if you’d been close for a lifetime, and just for a second you infer some emotional dimension you would’ve never otherwise known. It feels very Scroogish, like you’re a dead man with just one evening to remember life from its most charitable angle. I don’t know why I’m feeling this lucidity: could be a new surge of dad hormones, or the frigid weather, or the tie around my neck being too tight, or maybe this new frenzy of spawning software to wrap around my problems is priming me to believe I can spin up my own mental frames to see anything anew, as I please, whenever.

My friend Andrew, I imagine, would read this and joke that it’s a low-grade form of Claude psychosis. Maybe, but maybe the good kind? I’ve always thought there was something slightly off about seeing normal life with ecstatic wholeness, and that the line between psychosis and mysticism is thin. When LSD was first synthesized, it took researchers a decade or so to shift the framing from psychosis—they called it “psychotomimetic,” a madness simulator—to psychedelic (“mind-manifesting”), and eventually mystical, transcendental, entheogenic, etc.

I don’t know what it was, but now that I write this on the train, I’m right back in my regular head. And obviously I love writing, but it makes me think I really need chunks of boredom each day, non-linguistic moments in between things. Infant care sort of produces this feeling too, but it’s different because that is about fusing attention with another being; what I just experienced was something like full immersion in a chaotic environment. Pure Horus. I guess I’ve found it hard to make time for this because, with time so limited, there’s pressure to prioritize and converge: I have a book to launch! (I will be announcing the essay prize winners in early March.)

Anyway, I think I’ll post this to Notes. Usually I’d post a riff like this to a secret corner of my website, but in January I stopped logging and said I’d try to use Notes as my public note-taker. So if I want to really remember anything, I have to share it. I think the practice of sabotaging the thing I love—capturing fleeting thoughts in prose—by forcing it through the thing I’m scared of—public judgment of my every idea through metrics—is worth doing more often. It’s weird to take something that really is more like a journal entry and open it up to strangers. I’d basically be okay sharing this with anyone I know, but it makes me anxious to think a stranger could find this, that this would be 100% of what they know about me, and that they’d have no idea about Essay Architecture or whatever. But I think that kind of disregard is exactly what I’m going for on Notes. If my email essays are on-topic and polished and narrative-building, then each Note should be its own thing, out of context, unrelated to the last. And so I’m glad to share something like this after a shitpost about snakepit.

Your probability of AI psychosis

· 50 words

For every 1,000 ChatGPT users, one will go insane (ie: “AI psychosis”). It’s like Russian roulette for your mental health, but to the 4th power [(1/6)^4 ≈ 1/1,296]. Put another way, it’s like playing Russian roulette with four lives and losing all four in a row.
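For what it’s worth, the arithmetic roughly checks out: four consecutive Russian-roulette losses come to about 1 in 1,296, the same ballpark as the quoted 1 in 1,000. A quick check:

```python
# Four consecutive losses at 1-in-6 odds, versus the quoted
# 1-in-1,000 rate of "AI psychosis".
p = (1 / 6) ** 4
print(p)             # about 0.00077, i.e. roughly 1 in 1,296
print(round(1 / p))  # 1296 -- close to, though a bit rarer than, 1 in 1,000
```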