Topic

civic-technology

7 pieces

The Semantic Press

Reimagining Tocqueville's remedy to tutelary power in the age of AI

· 500 words

Submitted to an essay prize by the Cosmos Institute. The prompt: Tocqueville warned of a “tutelary power” that would keep citizens in perpetual childhood. How have Tocqueville’s concerns migrated from institutions to algorithms, and does AI fulfill or transform this fear?

"Equality isolates and weakens men, but the press places at the side of each[...] a very powerful arm that [...they...] can make use of. [... It] permits him to call to his aid all[...] fellow citizens and all who are like him. Printing hastened the progress of equality, and it is one of its best…

On civic structures for exponential technologies

· 201 words

A new formulation: how do we design civic structures (treaties, institutions, protocols, ethics, and laws) for exponential technologies, so that we avoid a “wake-up incident” that arrives too late to contain?

This goes beyond AI safety, because superintelligence effectively unlocks every other industry (intelligence unlocks energy and material science, and those three are the bottleneck to VR, crypto, everything). We can’t be developing hard technology without innovating on our civic technology. A “dominance” mindset is the last sin of a species, the mistake that most intelligent lifeforms likely make as they begin to unlock sources of intelligence, energy, and science. 

This is a neat little formulation, but the real question is how you can dedicate your life to this without getting stopped by hopelessness. Who has the power to make geopolitical decisions like this? What would it take to form the 21st century equivalent of America? Is that even possible today? Even though the pinnacle of 18th century power (England) could be disrupted, I wonder if 21st century power is so totalizing, tyrannical, and transnational that the ability to rally around a principle (one that works against capital and power), even if augmented with new decentralizing technologies, is fickle.

On the optics of robot armies

· 492 words

Someone should do a shot-by-shot analysis of the UBTech humanoid robot army ($100M USD in orders) and I, Robot. Do you unlock marketing power by replicating products and cinematics from old sci-fi? … Separate but relevant: how long until there actually is a robot army? In one sense, I’d rather have two superpowers battle for land with non-human entities, but once you build autonomous machines with the intention to destroy, well, it’s not hard to see how scary a “context malfunction” might be.

I’d imagine there could be a decade of “tele-operated military technology” before anything autonomous is deployed (2040s, if ever), including something like a soldier in VR, operating an android, combined with a personal fleet of “semi-autonomous” drones, which can maneuver and evade on their own but are directed by the human/cyborg soldier (giving each infantry unit its own atomic air force). I assume this is an area of research, and I don’t want to dedicate my imagination towards battlefront acceleration.

Similar to how television brought a shock to the public by televising frontline war, I imagine that by the end of my life, there will be another shock that comes from witnessing the frontier of machine war.

To circle back to this point: is there a world where machine war can be contained and can prevent the combat death of humans? My guess is no, but I’m sure this is a common rhetorical point to advance the research here. It’s dangerously naive thinking: (1) it changes the ethics of war (it’s not about human life, but a manufacturing game), and makes war easier to start; (2) it likely isn’t containable; if one robot army beats another, that doesn’t necessarily advance any objective, and the robots could sabotage infrastructure, take hostages, etc., until concessions are made; (3) a robot with the autonomy to make decisions to destroy has one of two mindsets: (a) it is fixated on clear objectives, or (b) it is open-minded enough to refine goals and handle nuances, both of which are equally troubling.

You’d think there would be policies and stances against integrating AI into the military. Google had one, and this year, they revoked it. I guess they see it as inevitable, and are stuck in the “we need to be dominant” strategy. Realistically, we will always fall into these acceleration races unless we establish some global armistice, but those are complex and very hard to broker; there is only urgency to do this once we cross a line and realize how badly we’ve screwed up (ie: with nuclear weapons). The difference is, as technology advances: (1) the first consequence might be existential; (2) if it’s not existential, but it’s autonomous, it may be too late to contain. I think one of the defining challenges of our century is how to create civic structures around exponential technologies that can contain them before a wake-up incident.

Questions for life

· 847 words

Maybe this has been written to death, but as much as I've thought about this, my "twelve favorite problems" feel underdeveloped. I have spent a decent amount of time on these heavy, paradoxical, lifelong problems (the ones that should be the arrow of my essay practice), but there are gaps.

For example, I already have a list of 21 idiosyncratic problems, and I think they’re worded with the right level of specificity and memorability, but I wasn’t too rigorous in how I qualified something to make the list. If I’ve thought about it a lot, still care about it, and can imagine myself caring about it until I die, then it makes the cut.

What I’ve neglected is how to use my list of problems to steer my life. I mean, the entirety of Essay Architecture, a multi-prong institution to preserve and advance the essay, is just 1 of the 21 problems! There are other pressing problems, like how to "fix" Christianity, how to design institutions for psychedelic therapy, how to revive Hermeticism, how to turn my logs into an AI consciousness, how to make literary video games, etc. Maybe a life can only be seriously dedicated to 2 or 3 problems.

(I have joked with friends about creating a kind of kill switch that spawns an AI consciousness of myself that is agentic and whose sole purpose is to “solve my favorite problems,” and then when it eventually does (after 300-500 years), it self-terminates.)

If I had to break my “favorite problems” list into categories, one possible scheme is { soul, relationships, art, civics }, each relating to a different dimension of your death. That feels like the right order. Your soul affects every dimension of your life, and is the thing you bring to an afterlife (which I mythologize as a 3-minute DMT odyssey that dilates time to the point where it feels like a 30,000-year dream). The other three affect the material world after you leave it: the effect you have on people, the art/works you leave behind, the civic structures that survive (if any, of course). All of these have a spirit of “all that matters is what lives on after your death,” but the opposite is also true: “all that matters is this moment.” I think you have to straddle that spectrum, taking both ends seriously, and ruthlessly prune any middle-level concerns, like your goals for the month.

My WIP list of questions:

  • Is the act of dying a time-dilation odyssey, where 3 minutes feels like a 30,000-year afterlife?
  • If I capture my consciousness in 10 million words of logs and essays, could that enable an AI textual replica to evolve and engage with the world 500 years beyond my death? (to solve this list of problems)
  • Can we resurrect Christianity by putting psychedelics back in the holy wine?
  • Might blockchain-based governance be the civic breakthrough required for a species not to exterminate itself? (via giving exponential technologies to unmitigated power structures)
  • What will be the psychic and cultural effects when our species understands “spatial relativity,” that the Big Bang emerged from a black hole in a parent universe?
  • If cycles emerge from order, can we predict the future based on historical patterns?
  • If there is a universal language of patterns beneath all essays, can we build an AI to give world-class feedback and make it more approachable to master writing? (ie: Essay Architecture)
  • Were psilocybin mushrooms a linguistic mutagen that accelerated the evolution of human consciousness?
  • Was Jesus actually crucified in 83 BC? (meaning, did St. Paul infiltrate the Essene cult, initiate into their mystery school, learn the lore of their martyr, and then translate it to a Greek audience to help Judaism phase-shift and survive Roman persecution?)
  • Could we restructure the thesaurus to 3x the vocabulary of the average person?
  • What text-based video game formats are undiscovered?
  • Can I design a social network that inspires a million people to log their thoughts every day? (intentionally not saying a billion, because I don’t think 1 in 7 humans care about expression or introspection. But 1 in 7,000 might.)
  • What are the societal effects when AR/VR is mature enough to simulate teleportation, and how can we design the metaverse to promote human flourishing?
  • How can popular music change the values system of a culture?
  • What systems of attention, language, and action lead to a transcendent consciousness? (how to modernize the mystery schools of hermeticism for the digital age?)
  • What are good design principles for psychedelic therapy centers? (ie: how are the buildings organized and what are the rituals within them?)
  • Can we use AI to filter through millions of comments on breaking news, structuring each event as a range of unique interpretations? (can we create interfaces that diminish the power of propaganda?)
  • How might a new social media algorithm trigger a Renaissance in connection, self-expression, and agency?
  • What unlocks automatic intelligence?
  • What innovations in our text editor interfaces could unlock the creative process?

Honest optimism

· 201 words

How can you be hopeful, but honest? I am done with dishonest and naive optimism. Don’t get me wrong, I’m an extremely optimistic person. I just watch people use optimism as a shield sometimes. Any whiff of negativity is branded as “doomerism.” It’s almost weaponized hope. But “honest optimism” feels like the proper way to think about it. It lets you be real about something when it’s actually a problem, while acknowledging that there’s something productive and generative we can do about it.

I’m optimistic about my life, pessimistic about society; optimistic about my ability to make a dent, pessimistic about the survival of any intelligent species because its hard technologies probably always outpace its civic technologies, but generally optimistic about biological matter and trans-dimensional space-time gook and all that big stuff (will this exact moment recur again? It depends on your model of cosmological evolution).

v2: Optimistic about my life,
Pessimistic about the moment,
Optimistic about design to fix the moment,
Pessimistic about society’s ability to use design,
Optimistic in our metaphysical engine to spawn infinite societies,
Pessimistic that some demiurge will wreak havoc on most species,
Optimistic that some bacteria in a cousinly space-time will fart utopias.

Wicked problems require paradoxical solutions

· 470 words

In "wicked domains," the only solutions are paradoxes. It requires you to sleep with the enemy. If a problem is wicked, it means no single solution can unfuck it. It's an imbroglio. In every solution, everyone dies (in the extreme). Politically, the solution to wickedness is to somehow become all sides at once. We need to become far more authoritarian than is comfortable, AND, simultaneously, far more libertarian than is comfortable (these are opposites on the Nolan chart). It’s the paradox of being both far left and far right. We can no longer exist at any one point on the Nolan chart; we need to straddle the entire diamond. We need unexpected fusions to solve the hardest problems: harnessing the best parts of each extreme, while, somehow, devising incredibly nuanced architectures to prevent the known and likely abuses.

Instead of a diamond, visualize it as a ring around the “radical center” that aims to synthesize all opposites.

Let’s assume authoritarianism and libertarianism are opposites. We have kings, and we have markets. How do you subsume a free market within a benevolent tyrant? I know the K-word (king) has a charge now, and so by even bringing this up, I assume you assume I’m a Trump apologist or something. But actually, no. Rather, this comes from the fear of acceleration and Nick Land’s conclusions on capitalism. A free market pushed to the extremes of automation creates an inhuman and pulverizing force. Alternatively, as we approach AGI/ASI, it’s possible for someone to create an open-source machine God to follow their whims. In this paradigm, decentralization might actually be more dangerous than tyranny, and so we’ll all need to unite under some centralized system that has antibodies to protect against the worst possible viruses (please bear with the oversimplifications here...).

The general gist comes in this question: can we recreate a free-market economy within a one-world-government system, and design it in a way that prevents abuses from both ends of the spectrum? Obviously not an ideal situation, but I think accepting paradox is the only way through.

Another problem: How do we fix the debt? Extreme taxation. But then how do we make it worthwhile to pay taxes? The rich gain formal power in government (via equity?) and the ability to control the budget (after base expenses are paid). But then how do you prevent abuses from the wealthy? You could have citizens operate as a check, to vote on and weight final allocations.

If it were ever possible to rebuild a political system from scratch, I suppose it would look something like this. Paradoxical. Extreme on both poles. Obvious downsides, but then complex architecture to mitigate them. This is the nature of how our species will have to respond to wicked problems and mitigate the abuses of power in the age of exponential tech.

Civic technology lags behind science

· 94 words

Kardashev ambitions reveal the self-destructive nature of science-forward intelligence. It’s like we’re skipping the prerequisites in social science. There's a fair chance that intelligent life destroys itself because civic technology lags behind hard technology, but I'm optimistic in the sense that this is, in the end, just a very hard, society-scale design problem. No one person can fix the whole system, but any individual can contribute design protocols that can (1) solve little, local problems, (2) be reused in other contexts, and (3) integrate with other protocols.