Topic: agi-economics · 8 pieces

Notes on the permanent underclass

· 2006 words
  1. A HYPE TERM: "Permanent underclass" is a dramatic mutation of an old term: class inequality. "Underclass" was coined in 1963 (Gunnar Myrdal in Challenge to Affluence) and captured the anxiety of automation destroying common jobs. Now that AI is here in a real way, we can't help but imagine the irreversible evisceration of all jobs. When people say "you have 2-3 years to escape the permanent underclass," they mean that this is your last chance to build wealth, because in post-AGI economics, humans have no economic relevance anymore. Employers employ agents (and eventually robots) instead. And so what will we do with all the meat bodies? The speculation has shades of darkness that start with mass unemployment and spiral into feudalism, slavery, and even genocide. The uncertainty is real, but it gets delirious, and it often ignores history, as well as the many self-stabilizing mechanisms that get triggered en route to a collapse.
  2. MIDDLE CLASS ANOMALY: The real fear here is "the collapse of the middle class," which sounds like a news headline. But separate from AI, my generation is certainly already feeling it. My wife's grandfather was a painter (of houses) and got a million-dollar house (in today's terms) for $10,000. Now people are saying $100k/yr is the new poverty line. While this certainly feels like "the system has screwed us," middle classes are an anomaly, and a mass middle class (what we had post-WW2) is extremely rare. Middle classes existed in Athens, Rome, Byzantium, etc., but often in isolated cities (e.g., Florence at 70,000 people), compared to Han Dynasty China (100,000,000 people in a two-tier system). The total share of human-years spent in a middle class is probably around 5%. The relative size of our middle class is even rarer: pre-industrialization it was 10-30% of society, where ours is 50-70%. And finally, a middle class rarely persists: it either disintegrates back into a two-tier king/serf system, or it's forced to transform its method of work.
  3. FROM WORK TO PERSONAL WORKFORCE: AI will force a change in how the next generation's middle class works: from employment to entrepreneurship. I think this is the unspoken tension between elites (who aren't worried, because they see a future filled with new opportunities) and the normal person (who has never earned a dollar outside of a W2 job). Entrepreneurship is maybe the greatest force for class mobility. This is where "new money" comes from. A poor person can become a billionaire if they know how to work the OS of the market. That is an anomaly, and it's not going away! What's changing, though, is the economic mobility of non-entrepreneurs. The rising tide is reversing (92% of children born in 1940 earned more than their parents, and the trend is now shifting the other way), and the rapid automation of jobs via AI certainly won't help. I personally don't doubt that most jobs will get automated away, because I run a small business and I don't have the financial abundance to hire humans at the price they need. I've hired graphic designers, editors, and almost software designers, but found that today's AI models could do equal or better work, for a fraction of the cost, and were far more nimble at evolving with my evolving needs. Won't every rational business make this tradeoff? The consolation is that the "end of work" brings a new era where every person has a personal workforce. It may be hard to find a job, but for $100/month you'll have 10-100 agents on hand; the question becomes, do you have a vision? So, no, no one will be in a permanent underclass, so long as they can succeed as an entrepreneur. It's as if the rise of AI has taken the startup/entrepreneur model of Silicon Valley, once and still a minority path, and scaled it up to become the new paradigm of work. That is better than nothing, but the odds aren't good.
Only 0.05% of startups get funding, and maybe 20% of those get a return; small businesses, the more likely path for the average person, also have only a 20% survival rate after 20 years. So again, it's not the decimation of a middle class, but a contraction of the rare post-war middle class (and most middle classes do emerge after wars) from 60% down to the historical norm of 20%.
  4. REVOLUTION UNLIKELY: The relative size of the lower class isn't necessarily tied to unemployment or the risk of revolution. Consider how Mexico has ~70% lower class but only 3% unemployment. The important question for stability in America is whether, after AI automation, gig jobs can sustain the people who lose their current jobs. 10-20% unemployment would lead to political instability, and 20-30% would create the conditions where a revolution could form. If you read Tocqueville (or Brinton or Goldstone, whom I haven't read), he says that beyond economics, a few things are required for revolution: an under-utilized but educated youth, elite extraction during widespread suffering, failed reform attempts, defection of intellectuals, coordination capacity... we seem to have all of these. He also notes that revolutions don't come from a collapse of the middle class, but from a perceived sense of exclusion from a new economic order (ie: massive gains from AI, hoarded by a few companies). But Tocqueville also says the original American Revolution succeeded because the colonists could retreat to open space, whereas the French Revolution failed because it was an open clash within the territory of the aristocracy. If there were a revolution here, it would almost certainly be thwarted, considering NSA surveillance, military power, geographic dispersion, and how most conflict is absorbed into left-right political feuds instead of up-down class feuds. So instead of class war, what's more likely in America is political warfare (already underway), which in the worst case leads to authoritarian capture and state fragmentation. A civil war is a distraction from a revolution. The eeriness of all this is that it's right on schedule according to Strauss-Howe theory; they mapped crises going back in 80-year cycles (American Revolution > Civil War > WW2), and predicted 2026 as a crisis that would spawn the next world order.
  5. PROPHETS OF REDISTRIBUTION: So if there is massive job loss and social strife, but no potential for revolution, how will the elites respond? The cynical view is that they will retreat into their already-constructed, drone-protected bunkers and let the mess sort itself out. The optimistic view is that the entrepreneurs triggering the AI revolution are actually problem solvers at heart, and once (or if) the AI race is ever "over," they will be unimaginably wealthy and eager to play the role of utopian planners, restructuring society in their image. Will elites side with the common man? It's happened. Voltaire was a French intellectual who died a decade before the French Revolution, but through his salons he injected ideas of equality, liberty, and reason into the aristocracy. It was like a Trojan Horse: the elites became enamored with ideas that undermined aristocracy without realizing it, and so they were quick to defect and enable the revolution. In terms of the Strauss-Howe cycle, Voltaire was a Second Turning "awakening prophet" who laid the spiritual grounding for the Fourth Turning of his time. The parallel in our time is the 1960s, when counter-cultural ideas about communal living, redistribution, and the end of work were forged; the very fabric of computing, the Internet, and AI also came out of that consciousness revolution, and the sway of egalitarian-minded intellectuals could determine how the elite allocate their trillions. What we're facing is something like a crisis in capitalism. If the market is left to its own terms, with everyone on Polymarket "trading the madness," then it could turn Landian (re: Nick Land's vision of markets as inhuman, alienating forces). Or, hyper-capitalism pushed to its limits just turns into Marxism, and the counter-cultural ethos of the 60s gets fully mainstreamed (it's already in progress: hitchhiking turned into Uber, free love into Tinder, pad crashing into Airbnb, freak foods into Whole Foods).
  6. PAID TO SCROLL: But who will be doing the redistribution, and why? I'm skeptical of a "universal basic income," which implies a world government (if you take "universal" seriously). Each country will have different policies on distribution (aka: welfare). We'll likely see a range of implementations, some being highly dysfunctional welfare states, and others being prototypes of a modern democratic socialism. Realistically, though, governments will only have the means to redistribute wealth if they seize and nationalize the AI companies (which Palantir's Karp has suggested needs to happen). But if we go the way of The Sovereign Individual (the book Thiel wrote the foreword to), companies will replace governments and lead us to a kind of lawless "anarcho-capitalism." And so in this model, what would elites do? Bunkers or philanthropy? Will Anthropic be anthropic? (We already know OpenAI didn't live up to their name.) I think there's a more practical middle, where companies will be incentivized to provide "UBI" themselves. Assuming everything doesn't collapse into a singleton-powered mono-corp, there will still be 3-10 big companies competing, but now with massive budgets. What they used to spend on employees is now automated for a fraction of the cost, and so they might choose to re-allocate that budget to paying citizens, or really, their users. Attention is the last scarce resource, and so by paying users to lock into their platforms (using their feeds, apps, cars, etc.), they hold that advantage over their competitors. I know that sounds extremely circular, but isn't the current AI economy already circular? Isn't NVIDIA paying OpenAI to buy their chips? And so why wouldn't OpenAI pay users to pay for their AGI?
  7. NOT SERFS, BUT HIPPIES: If AGI/ASI does bring about all the sci-fi advances we dream of, then we could see a dramatic cost collapse in everything: materials, medicine, food, energy. It could be trivial for a company to provide all the basic luxuries of living for little or no cost, but in exchange for loyalty. So to bring this back to the permanent underclass: the elite-backed companies, in order to prevent revolution and beat competitors, could be rationally incentivized to offer a luxury quality of life to their users. What's strange, though, is that it's luxury without mobility. Meaning, the average person could be provided a sweet apartment and unlimited Grubhub, in exchange not for labor, but loyalty. They might not have the discretionary freedom to do things outside of what's in "the contract" (it rings of indentured servitude, but with air conditioning!). ie: Your plan might include a free train and bus pass, but if you want to fly to Europe, you need to grind at gig work for 6 months to get actual money, since the plan offers only amenities. Different communes, I mean... companies... will offer different deals, and if one offers a yearly international vacation (made possible by some fuel breakthrough), the others will follow. Citizens would be free to pledge to whichever company they choose, which would make this not socialism, but the first real manifestation of communism. We confuse those terms: socialism is when all power is absorbed by the state, whereas communism is actually stateless and decentralized. North Korea, the USSR, and Maoist China were not communist, but socialist. Communism was Marx's ideal, and he never would've conceived that the path to its first instance ran through hyper-capitalism (though of course this is an alien, bastardized version that he would probably hate).
And to bring this back to the spirit of the 1960s, heavily anchored in communal ideas: the "permanent underclass" will be a lot less like being a serf and a lot more like being a hippie. Except a state-sponsored, highly-surveilled, find-your-meaning-through-our-menu-of-options hippie, with, of course, competing hippie factions: the permaculturists, the hedonists, the transhumanists, the bloboids, the transcendentalists, the academics. Shared among all of them is a new identity that is decorrelated from economic value, and anchored instead to new social systems of vainglory that are hard to imagine.

The asymmetric labor of the new luddites

· 408 words

Anti-AI sentiment is escalating: the Pause AI movement, state-level data center bans, molotov cocktails at Sam Altman's house, artists retreating to dumb phones, witch hunts for AI prose. Protesting and boycotting AI at a personal level is the exact wrong approach. It misunderstands the Luddites. They were not against the machines in principle; they were against the factory owners not sharing the profits of the factory. This is possibly about to play out on a grand scale: AI and robotics labs could capture nearly all economic value, and there will be a plea to nationalize these companies and redistribute the profits.

While the scope and effects today are far bigger, the workers of the Industrial Revolution were far more disempowered. You couldn't "just do things." You could operate someone else's machine, but you couldn't just spin up a competing factory; that required land, resources, and labor, none of which you had. Competing required a threshold of capital, and clearing it simply wasn't possible. Workers were limited to being workers, so they had no choice but to revolt with violence.

The difference today is that the worker and artist suddenly have access to build-your-own-factory tooling. A single person, for $100/month, can compete with companies valued in the millions and billions. It's asymmetric labor. Regular people can build civilization-scale infrastructure: distribution labels, social media engines, software, and so on. Never before has there been such a democratic opportunity for people to self-organize into their own collectives, tribes, governments, and whatnot.

At least to me, this kind of optimism (principled, delirious, ambitious, but still careful and skeptical) is better than the cynicism of the "resist" factions. There is nothing you or your circles gain by putting your head in the sand; it breeds a distanced, crabby, virtue-signaling posture that does nothing to change the actual situation. You gain nothing by staying on the ChatGPT free plan on default settings and complaining about how it's an ineffective, incapable sycophant. It takes an ounce of nuance to be critical of how the labs act, and then to use those labs' best tools toward your own sovereignty and vision.

I think what I'm trying to get at is that the Luddites of the 21st century will not be reverting to typewriters and flip phones; they will be wielding AI tools in ways that foster human connection, and the kind of pro-human culture that the Internet originally promised but never realized under capitalism.

Tectonic shifts

· 440 words

Why am I so engaged with the news these days? I think it's part of a deeper desire to update my world model. There is, no doubt, massive change: geopolitical, economic, technological. And as abstract as those things usually are, it feels like some sort of shift that, in 2-3 years' time, will have an effect on my life. Of course, for many people in the world, it's hitting them now. But similar to how COVID spared no one, it feels like your model of where things are going will directly affect your preparedness.

But this feels more existential; safety/security are actually on the line. And so that’s an anxious kind of thought, that the tectonic plates under your reality are shifting, and it’s not some recreational yearning to re-skill and recalibrate, but a mandatory thing.

And so, to make sense of it all, what do you do, go on X? That's a total cesspool. New media is worse than the old gatekept media. Where I think I want to take this is to build my own systems to sift through and aggregate information, and my own UI on top. Even a simple Claude prompt, "what happened in Iran in the last 4 hours," is so much better than X. It's stripped of sensationalism, and reading is just a less triggering medium. Bias aside, it's at least free of people intentionally trying to deceive you for virality. There is a clout-chasing incentive, paired with actually turbulent times, which makes algorithmic news something like a schizophrenia filter.
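A minimal sketch of what that "simple Claude prompt" system might look like. The prompt wording and the `build_news_prompt` helper are my own illustration, and the commented-out API call (model id included) is an assumption to be swapped for whatever client and model you actually use.

```python
def build_news_prompt(topic: str, hours: int) -> str:
    """Compose a terse, sensationalism-stripped news query."""
    return (
        f"What happened regarding {topic} in the last {hours} hours? "
        "Plain prose, no hype. Stick to sourced facts and flag anything "
        "that is disputed or unverified."
    )

# The actual call, via the Anthropic Python SDK (requires ANTHROPIC_API_KEY;
# the model id below is a placeholder -- substitute a current one):
#
#   import anthropic
#   client = anthropic.Anthropic()
#   reply = client.messages.create(
#       model="claude-sonnet-4-5",
#       max_tokens=600,
#       messages=[{"role": "user", "content": build_news_prompt("Iran", 4)}],
#   )
#   print(reply.content[0].text)
```

The point isn't the two lines of plumbing; it's that the prompt itself encodes the editorial policy (no hype, flag the unverified) that the feed algorithms won't.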

And so what are these questions, these underlying uncertainties that are triggering a model change? How will anyone make income with the rise of AGI-3 and eventually ASI? How do I exist online and avoid hyper-surveillance and cyber-sabotage? Where in the world can I live to build a better future for my daughter, one where colleges don't exist, jobs don't exist, and quality of life actually depends on nationalized social systems? A weird future. And it's weird to consider the fall of America, a kind of reverse migration, where, because of a confluence of events, it might not be a place to raise a family 1-2 generations down the line.

And so practically, this is resulting in things like: (a) applying for EU citizenship, (b) setting up AI agents for my business, and (c) considering cybersecurity, new ways to protect, share, and collaborate on writing (ie: how do you build an audience if the commons are polluted?). This is all very disorienting; it's hard to continue with business as usual when you become open to this scale of change.

An Intelligence Framework

· 703 words

The AI takeoff hysteria is hard to avoid these days, and I'm realizing we don't have clear distinctions between AGI/ASI. I wanted to revisit an old framework of mine to see if anyone finds it helpful (and if it's worth developing). There are some existing classification frameworks, but they're low-resolution. My basic idea is to break AI into three eras: ANI (narrow intelligence), AGI (general intelligence), ASI (superintelligence). Then, you can break each era into 3 tiers. You only shift from one tier to the next when you make breakthroughs across different criteria (let's say, (a) generality, (b) transfer, (c) autonomy, (d) learning, (e) self-modeling). I think the last few weeks are the collective hype of us all realizing we're shifting from AGI-1 to AGI-2. It's exciting/scary, but I think the paranoia mostly comes from not realizing how big the gap is between AGI-2 and ASI-1. (Spoiler: ASI might arrive slower than we think.)
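The framework above can be encoded in a few lines. The era names, tiers, and five criteria come straight from this essay; the data layout itself is just my illustration of how the pieces relate.

```python
from dataclasses import dataclass
from typing import Optional

# The five breakthrough criteria named in the essay; a tier shift requires
# progress across these, not just more scale.
CRITERIA = ("generality", "transfer", "autonomy", "learning", "self_modeling")

@dataclass(frozen=True)
class Stage:
    era: str   # "ANI", "AGI", or "ASI"
    tier: int  # 1, 2, or 3 within the era

    def label(self) -> str:
        return f"{self.era}-{self.tier}"

# The nine stages in order: ANI-1 ... ASI-3
STAGES = [Stage(era, tier) for era in ("ANI", "AGI", "ASI") for tier in (1, 2, 3)]

def next_stage(current: Stage) -> Optional[Stage]:
    """Advance one tier (or era); returns None past ASI-3."""
    i = STAGES.index(current)
    return STAGES[i + 1] if i + 1 < len(STAGES) else None
```

Nothing deep here, but it makes the essay's central claim concrete: the distance from AGI-2 to ASI-1 is four tier-shifts, each gated on breakthroughs, not one "takeoff" event.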

ANI-1 is scripted logic, the lowest form of "artificial intelligence," basically Goombas. ANI-2 might cover Google Maps or AlphaGo, intelligences that excel at a single function, traffic or chess. Siri is ANI-3; even though it feels broad, it really uses voice to route you to 20 or so pre-defined tricks. The chasm between Goomba and Siri is similar to the chasm between early AGI and late AGI. ChatGPT and the multi-modal models that followed capture AGI-1: a single neural network that can do basically anything, even if it sucks: essays, songs, video, code. The newest models (and their agentic harnesses) feel like AGI-2. They're significantly better at coding, can run for hours at a time, and are starting to make contributions to machine learning itself.

AGI-2 could last a couple years. As agentic AI matures, I'm sure there will be a few "takeoff" scares, but they'll probably feel more like a flood of a trillion midwits than real ASI (still, that could be enough to break the economy/internet). While we went from AGI-1 to AGI-2 through data, scale, and engineering, it seems like we'll need research breakthroughs to get to AGI-3. It won't be through scaling alone. Whenever and however we get to "human complete" intelligence, the apex of AGI is a single agent that is a master of all human domains, a Nobel Prize winner in every field at once, seamlessly transferring knowledge between them, unlocking a cascade of civilization-altering inventions.

As crazy as AGI-3 could be, it still isn't superintelligence. That has its own era, and the chasm between early ASI and late ASI will be as big as the gap between the chatbots that can't count the R's in "strawberry" and the agents that cure cancer. We can only really speculate on ASI (because it would be truly alien), but we can imagine it as step changes in recursion, scope, and complexity. Imagine ASI-1 as an agent that, as it's working, can infer its own limits and self-modify its learning paradigms in ways we can't understand. Imagine ASI-3 as something that can monitor reality in real time and reconfigure its hardware in real time (some hydra of graphics cards, quantum computers, and neuromorphic wetware) to run simulations at unfathomable scales in unimaginable fields, on a hardware stack so big we have to put it in space and run it on fusion. This goes far beyond my ability to not bullshit, but I think something as insane as this, thankfully, is still far away, which points to the real question nested in my framework:

Could the rise of AGI/ASI be linear? People gravitate towards "AI will plateau" or "the singularity is imminent," but the conservative middle ground is more boring: linear progress. Maybe the exponential advances are real, but so are the extreme frictions of research, infrastructure, and social effects. If AGI-1 arrived in 2022, and AGI-2 arrived in 2026, maybe we'll keep ascending tiers in 4-year intervals: AGI-3 in 2030, the first true "superintelligence" by 2034, and ASI-3 by 2042. This shift from AGI-1 to ASI-1 (12 years) is considered a "slow takeoff" scenario, even though the ANI era took around 70 years. Zoomed to the scale of a human life, linear progress will still feel like centuries of change within a single turning of generations.
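The linear-takeoff schedule above reduces to one line of arithmetic: anchor AGI-1 at 2022 and add four years per tier. Illustrative bookkeeping, not a prediction.

```python
# One tier every 4 years, starting from AGI-1 in 2022 (the essay's assumption).
TIERS = [f"{era}-{t}" for era in ("AGI", "ASI") for t in (1, 2, 3)]
TIMELINE = {tier: 2022 + 4 * i for i, tier in enumerate(TIERS)}
# AGI-1: 2022, AGI-2: 2026, AGI-3: 2030, ASI-1: 2034, ASI-2: 2038, ASI-3: 2042
```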


The p(doom) of higher education

· 782 words

A few months ago I saw a YouTube video titled something like, "A child born in 2025 is more likely to be killed by AI than to graduate college." What a ridiculous claim. I assumed it was clickbait and didn't click, but it has jingled around my head enough that I think I can make sense of its argument:

  • The average p(doom) of an AI engineer is 16%, meaning there's roughly a 1 in 6 chance of human extinction (put another way, companies have morally rationalized the need to play Russian roulette, "if we don't do it, the bad guys will," without acknowledging that if they survive and win, they get the consolation prize of commandeering the whole economy).

  • 40% of US adults age 25-34 today have a bachelor's degree. If there's massive job automation and unemployment, a college degree would be unaffordable for most, and an unreasonable cost even for those who could pay. It's not unthinkable that <15% of the next generation gets a college degree, which makes that sensational claim, weirdly, plausible.

I still think it's a shaky comparison, confusing two different types of probability and assuming extreme ASI turbulence. But as someone with a daughter born in 2025, it has gotten me thinking about how the societal backdrop to her upbringing could be especially weird. Our circumstances already get slightly weirder with each generation. Except maybe the next loop will be an unavoidable and disorienting flurry of change that will confuse parents and rewrite all of the conditions for the typical coming-of-age moment (all the teen movies will be sci-fi; the popular memoirs could be written by transhumanists who have upgraded in unimaginable ways, like no longer needing to sleep because of a new pill, or controlling the genitals of their peers with an app, who knows).

And so now, I find myself drawn to a 2045 forecasting project. Trying to predict the future is typically a huge waste of time (unless you’re gambling and win), which is why I’m going to have AI write the whole thing. This is a rare exception where a writing project makes little sense for a human to do. All I’m going to write are the upfront origin documents, and then Claude Opus 4.5 will read 25,000 sources, write a million words or so, and then organize it all into an interactive, oatmeal-looking website called 2045predictions.com (got it).

Before I run it, here’s something I’m currently thinking through:

What is the omega state? When I look at the popular AI forecasts from 2025, they read to me like they have a pre-determined end state, which detailed forecasting is then used to make convincing. The AI-2027 forecast seems to reach its conclusion through very detailed calculations on how a hivemind of 200,000 autonomous coders would evolve month by month, but I also suspect they picked the year 2027 because the following year, 2028, is a US election year, and they want the next administration to take AI safety far more seriously (instead of just insisting we have to beat China). I don't think there's anything wrong with this. You kind of have to start with an omega state. The future is so boundless that you need to begin with a guess, a bold outline of the general direction of things.

Here's my omega: let's assume humanity survives, and let's assume technology does unlock hyperabundance that leads to a post-scarcity world. HOWEVER, it's not utopian, because it simultaneously unlocks a new cascade of moral, social, and spiritual crises, dilemmas that will test the timeless primitives of humanity (sex, life, death, consciousness, religion, home, etc.). This omega state makes sense to me because (1) we already know that ethical dilemmas scale with technology, and (2) according to the Strauss-Howe generational theory (from the same guys who coined "millennials," "Gen-Z," etc.), this already tends to happen every 80 years (the length of a human lifespan). A new techno-political order creates a spiritual crisis that generates an Awakening, a new value system that shapes society for the next century or so. You know what's 80 years before Kurzweil's "singularity" of 2045? The counter-cultural revolutions of the 1960s. What I'm getting at is that the 2040s might have echoes of the 1960s, where demographics are divided on core issues and LSD is replaced with consciousness-altering machines (Terence McKenna said that computers are drugs, you just can't swallow them yet).

We currently define the singularity as “the moment when a computer is smarter than all humans combined,” but that effectively means nothing, and it’s far more useful to have some guesses on how we all might freak out about that happening.

On civic structures for exponential technologies

· 201 words

A new formulation: how do we design civic structures (treaties, institutions, protocols, ethics, and laws) for exponential technologies, so that we avoid a "wake-up incident" that arrives too late to contain?

This goes beyond AI safety, because superintelligence effectively unlocks every other industry (intelligence unlocks energy and material science, and those three are the bottleneck to VR, crypto, everything). We can’t be developing hard technology without innovating on our civic technology. A “dominance” mindset is the last sin of a species, the mistake that most intelligent lifeforms likely make as they begin to unlock sources of intelligence, energy, and science. 

This is a neat little formulation, but the real question is: how can you dedicate your life to this without getting stopped by hopelessness? Who has the power to make geopolitical decisions like this? What would it take to form the 21st-century equivalent of America? Is that even possible today? Even though the pinnacle of 18th-century power (England) could be disrupted, I wonder if 21st-century power is so totalizing, tyrannical, and transnational that the ability to rally around a principle (one that works against capital and power), even if augmented with new decentralizing technologies, is fickle.

Despite the superwriters...

· 186 words

Will was surprised to learn that I think machine writing could soon surpass the best human writers. Since I'm the head of Essay Architecture, he assumed my position would just be "no matter what, humans will always be better at writing essays than machines." I actually have some pretty extreme predictions on the trajectory of technology (I guess you could say I'm an ambivalent accelerationist), but I believe that AI progress is irrelevant to the fact that I will always enjoy writing, and that I see writing through the chaos as an opportunity. So yes, I think machines will make essays that are history-defining, good to degrees that are unimaginable to us today.

This will, unfortunately, make it even harder for writers to have economic value; but realistically, it's already too hard. The Creator Economy is a game of power laws, and AI might shift the chance of success from 2% to 1%. But could the same technology help artists go from 1x potential to 20x potential? If AI kills the market for commoditized creative work, will it let humans focus on the right things?

Wicked problems require paradoxical solutions

· 470 words

In "wicked domains," the only solutions are paradoxes. It requires you to sleep with the enemy. If a problem is wicked, it means no single solution can unfuck it. It's an imbroglio. In every solution, everyone dies (in the extreme). Politically, the solution to wickedness is to somehow become all sides at once. We need to become far more authoritarian than is comfortable, AND simultaneously, far more libertarian than is comfortable (these are opposites on the Nolan chart). It's the paradox of being both far left and far right. We can no longer exist at any one point on the Nolan chart; we need to straddle the entire diamond. We need unexpected fusions to solve the hardest problems: harnessing the best parts of each extreme while, somehow, devising incredibly nuanced architectures to prevent the known and likely abuses.

Instead of a diamond, visualize it as a ring around the “radical center” that aims to synthesize all opposites.

Let's assume authoritarianism and libertarianism are opposites. We have kings, and we have markets. How do you subsume a free market within a benevolent tyrant? I know the K-word (king) has a charge now, and so by even bringing this up, I assume you assume I'm a Trump apologist or something. But actually no. Rather, this comes from the fear of acceleration and Nick Land's conclusions on capitalism. A free market pushed to the extremes of automation creates an inhuman and pulverizing force. Alternatively, as we approach AGI/ASI, it's possible for someone to create an open-source machine God to follow their whims. In this paradigm, decentralization might actually be more dangerous than tyranny, and so we'll all need to unite under some centralized system that has antibodies to protect against the worst possible viruses (please bear with the oversimplifications here...).

The general gist comes in this question: can we recreate a free-market economy within a one-world-government system, and design it in a way to prevent abuses from both ends of the spectrum? Obviously, not an ideal situation, but I think accepting paradox is the only way through.

Another problem: How do we fix the debt? Extreme taxation. But then how do we make it worthwhile to pay taxes? The rich gain formal power in government (via equity?) and the ability to control the budget (after base expenses are paid). But then how do you prevent abuses from the wealthy? You could have citizens operate as a check, to vote on and weight final allocations.

If it were ever possible to rebuild a political system from scratch, I suppose it would look something like this. Paradoxical. Extreme on both poles. Obvious downsides, but then complex architecture to mitigate them. This is the nature of how our species will have to respond to wicked problems and mitigate the abuses of power in the age of exponential tech.