michael-dean-k/

Topic

clout-culture

4 pieces

When fake stunts go viral

· 88 words

There is a viral video of Milwaukee Brewers pitcher Jacob Misiorowski throwing a 104 mph fastball to knock an apple off a teammate's head, the teammate sitting in a chair at home plate, arms crossed, back to the pitcher. Yes, it's edited, but can everyone tell? What if 5% can't? How many hundreds of kids will try this stunt? It reminds me of William S. Burroughs thinking he could drunkenly shoot a glass off his wife's head, and missing. I guess the allure of virality can poison anyone.

Robots in feed

· 131 words

It’s uncanny to watch a Russian robot limp and wobble onto stage, wave, and then collapse face-first, before two guys rush to lift him, and another two follow to cover the fallen metal man with a black tarp, as if we the audience have somehow not processed the last 10 seconds and damage control is still possible.

Not much later, I saw an Iranian robot with a photorealistic face: stiff cheeks, but convincing skin. This is what happens when ColdTurkey is off: I get exposed to “the horrors beyond my comprehension.” It will be interesting to see how culture responds to this coming wave of technology, which is not just existentially threatening (ie: labor automation) but biologically repulsive (ie: look at this not-face). [EDIT: I think this was AI]

Anything Can Be Remixed Without Effort

· 111 words

On X there is a photo of Molly, a reporter, talking to Alex Karp, CEO of Palantir. The comments are debating whether either of their outfits is appropriate, before someone says, “Grok, interpret this,” and now there’s a video of them embracing and making out. More videos show up in the comments: them playing Twister, them dancing, them Kung Fu fighting, Molly turning into a rocket and busting through the ceiling. There’s one of Alex Karp wielding a rare Japanese sword; that one was real, though. There are no watermarks, so you can’t tell. We are basically already in the age where anything can be remixed with AI, without effort.

On the optics of robot armies

· 492 words

Someone should do a shot-by-shot analysis of the UBTech humanoid robot army ($100M USD in orders) and I, Robot. Do you unlock marketing power by replicating products and cinematics from old sci-fi? … Separate but relevant: how long until there actually is a robot army? In one sense, I’d rather have two superpowers battle for land with non-human entities, but once you build autonomous machines with the intention to destroy, well, it’s not hard to see how scary a “context malfunction” might be.

I’d imagine there could be a decade of “tele-operated military technology” before anything autonomous is deployed (2040s, if ever), including something like a soldier in VR operating an android, combined with a personal fleet of “semi-autonomous” drones that can maneuver and evade on their own but are directed by the human/cyborg soldier (giving each infantry unit its own atomic air force). I assume this is an area of research, and I don’t want to dedicate my imagination to battlefront acceleration.

Similar to how television shocked the public by bringing frontline war into living rooms, I imagine that by the end of my life there will be another shock that comes from witnessing the frontier of machine war.

To circle back to this point: is there a world where machine war can be contained and prevent the combat death of humans? My guess is no, though I’m sure this is a common rhetorical point used to advance the research. It’s dangerously naive thinking: (1) it changes the ethics of war (it becomes a manufacturing game rather than a matter of human life), which makes war easier to start; (2) it likely isn’t containable: if one robot army beats another, that doesn’t necessarily advance any objective, so the robots could sabotage infrastructure, take hostages, etc., until concessions are made; (3) a robot with the autonomy to decide to destroy has one of two mindsets: (a) it is fixated on clear objectives, or (b) it is open-minded enough to refine goals and handle nuance, both of which are equally troubling.

You’d think there would be policies and stances against integrating AI into the military. Google had one, and this year they revoked it. I guess they see it as inevitable, and are stuck in the “we need to be dominant” strategy. Realistically, we will always fall into these acceleration races unless we establish some global armistice, but those are complex and very hard to broker; there is only urgency to do so once we cross a line and realize how badly we’ve screwed up (ie: with nuclear weapons). The difference is that, as technology advances, (1) the first consequence might be existential, and (2) even if it’s not existential, if it’s autonomous, it may be too late to contain. I think one of the defining challenges of our century is how to create civic structures around exponential technologies that can contain them before a wake-up incident.