The p(doom) of higher education
A few months ago I saw a YouTube video titled something like, “A child born in 2025 is more likely to get killed by AI than graduate college.” What a ridiculous claim. I assumed it was clickbait and didn’t click, but it has jingled around my head to the point where I think I can make sense of its argument:
- The average p(doom) among AI engineers is 16%, meaning roughly a 1 in 6 chance of human extinction (put another way, companies have morally rationalized the need to play Russian roulette, on the logic that if we don’t do it the bad guys will, without acknowledging that if they survive and win, they get the consolation prize of commandeering the whole economy).
- About 40% of US adults aged 25-34 today have a bachelor’s degree. If there’s massive job automation and unemployment, a college degree could become unaffordable, and an unreasonable expense even if it weren’t. It’s not unthinkable that fewer than 15% of the next generation gets a college degree, which makes that sensational claim, weirdly, plausible (the back-of-envelope arithmetic is sketched just below).
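For what it’s worth, the whole argument compresses to a two-number comparison. Here’s a minimal Python sketch of that arithmetic, using only the figures quoted above; the 15% graduation share is the hypothetical guess from the bullet, not a forecast:

```python
# Back-of-envelope version of the video's implied comparison.
# These are the rough figures from the bullets above, not real estimates.

p_killed_by_ai = 0.16   # average AI-engineer p(doom), roughly 1 in 6
p_degree_today = 0.40   # US adults aged 25-34 with a bachelor's degree
p_degree_2045 = 0.15    # hypothetical share for a child born in 2025

print(f"P(killed by AI):      {p_killed_by_ai:.0%}")
print(f"P(graduates college): {p_degree_2045:.0%}")
print("Title 'plausible'?", p_killed_by_ai > p_degree_2045)  # True under these assumptions
```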
I still think it’s a shaky comparison: it confuses two different types of probability and assumes extreme ASI turbulence. But as someone with a daughter born in 2025, it has gotten me thinking about how the societal backdrop to her upbringing could be especially weird. Our circumstances already get slightly weirder with each generation. Except maybe the next loop will be an unavoidable and disorienting flurry of change that will confuse parents and rewrite all of the conditions for the typical coming-of-age moment (all the teen movies will be sci-fi, the popular memoirs could be written by transhumanists who have upgraded in unimaginable ways, like they no longer need to sleep because of a new pill, or they can control the genitals of their peers with an app, who knows).
And so now, I find myself drawn to a 2045 forecasting project. Trying to predict the future is typically a huge waste of time (unless you’re gambling and win), which is why I’m going to have AI write the whole thing. This is a rare exception where a writing project makes little sense for a human to do. All I’m going to write are the upfront origin documents, and then Claude Opus 4.5 will read 25,000 sources, write a million words or so, and then organize it all into an interactive, oatmeal-looking website called 2045predictions.com (got it).
Before I run it, here’s something I’m currently thinking through:
What is the omega state? When I look at the popular AI forecasts from 2025, they read to me like they have a pre-determined end state, which detailed forecasting is then used to make convincing. The AI-2027 forecast reads like its authors reached their conclusion through very detailed calculations on how a hivemind of 200,000 autonomous coders would evolve month by month, but I also suspect they picked the year 2027 because the following year, 2028, is a US election year, and they want the next administration to take AI safety far more seriously (instead of just insisting we have to beat China). I don’t think there’s anything wrong with this. You kind of have to start with an omega state. The future is so boundless that you need to begin with a guess, a bold outline of the general direction of things.
Here’s my omega: let’s assume humanity survives, and let’s assume technology does unlock hyperabundance that leads to a post-scarcity world, HOWEVER, it’s not utopian because it simultaneously unlocks a new cascade of moral, social, and spiritual crises, dilemmas that will test the timeless primitives of humanity (sex, life, death, consciousness, religion, home, etc.). This omega state makes sense to me because (1) we already know that ethical dilemmas scale with technology, and (2) according to the Strauss-Howe generational theory (from the same guys who coined “Millennials,” “Gen-Z,” etc.), this already tends to happen every 80 years (the length of a human lifespan). A new techno-political order creates a spiritual crisis that generates an Awakening, a new value system that shapes society for the next century or so. You know what’s 80 years before Kurzweil’s “singularity” of 2045? The counter-cultural revolutions of the 1960s. What I’m getting at is that the 2040s might have echoes of the 1960s, where demographics are divided on core issues and LSD is replaced with consciousness-altering machines (Terence McKenna said that computers are drugs, you just can’t swallow them yet).
We currently define the singularity as “the moment when a computer is smarter than all humans combined,” but that definition effectively means nothing on its own, and it’s far more useful to have some guesses about how we all might freak out when it happens.