
Topic: robotics · 4 pieces

Machine Experience

· 113 words

A whole realm of “machine ethos” is being conveniently ignored; we assume it can’t have experience or perspective. I agree, a chatbot can’t. But what if you create a digital identity that runs at 120 fps, persists across time, and has free will? Would that not have a subjective experience, even though it doesn’t have a body? Well, what if you gave it a robotic body? Or what if we eventually find a way to create artificial humans with bodies that are biologically indistinguishable from ours? I’m not saying I want or advocate for any of this; I’m just saying we need to be sharper in our thinking. To say that “great books can’t be written by machines because they don’t have experience” means you need to think much harder about what experience really is.

Kungfu Bots

· 175 words

The T800 is not a graphing calculator; it’s the new robot from China that can do roundhouse kicks. The promo reel is something like a cross between Rocky and The Terminator, replete with synth violins and cinematic shots of a boxing gym. This thing can jump, spin, and kick you in the face. It is super fluid, unnaturally fluid. Why do we need kungfu bots, though? I think the goal is to create reels that invoke awe, terror, and surrender: look, China is winning. This is not about “make something people want.” This is optics. We are building a master race, and we are ahead of you. Later in the reel, it spars with a child before giving him a pound (so you know it has a heart). The T800 has no eyes, just a visor of light across its head. Oh great, now it’s using a hammer to repair its own body. Available for 180,000, 240,000, 280,000, or 360,000 RMB (up to $50,198). That seems... cheap? I mean, for the price of a Tesla, you can get a sometimes-functional robot to spar with and injure your friends? (If you think the reel is AI, there’s a behind-the-scenes on YouTube.)

Robots in feed

· 131 words

It’s uncanny to watch a Russian robot limp and wobble onto stage, wave, and then collapse face-first, before two guys rush to lift him and another two follow to cover the fallen metalman with a black tarp, as if it’s possible that we the audience have somehow not processed the last 10 seconds, and damage control is still possible.

Not much later, I saw an Iranian robot with a photorealistic face: stiff cheeks, but convincing skin. This is what happens when ColdTurkey is off; I get exposed to “the horrors beyond my comprehension.” It will be interesting to see how culture responds to this coming wave of technology, which is not just existentially threatening (i.e., labor automation) but biologically repulsive (i.e., look at this not-face). [EDIT: I think this was AI]

On the optics of robot armies

· 492 words

Someone should do a shot-by-shot analysis of the UBTech humanoid robot army ($100M USD in orders) and I, Robot. Do you unlock marketing power by replicating products and cinematics from old sci-fi? … Separate but relevant: how long until there actually is a robot army? In one sense, I’d rather have two superpowers battle for land with non-human entities, but once you build autonomous machines with the intention to destroy, well, it’s not hard to see how scary a “context malfunction” might be.

I’d imagine there could be a decade of “tele-operated military technology” before anything autonomous is deployed (the 2040s, if ever), including something like a soldier in VR operating an android, combined with a personal fleet of “semi-autonomous” drones that can maneuver and evade on their own but are directed by the human/cyborg soldier (giving each infantry unit its own atomic air force). I assume this is an area of research, and I don’t want to dedicate my imagination to battlefront acceleration.

Similar to how television brought a shock to the public by televising frontline war, I imagine that by the end of my life, there will be another shock that comes from witnessing the frontier of machine war.

To circle back to this point: is there a world where machine war can be contained and prevents the combat death of humans? My guess is no, but I’m sure this is a common rhetorical point used to advance the research here. It’s dangerously naive thinking: (1) it changes the ethics of war (it’s not about human life, but a manufacturing game), and makes war easier to start; (2) it likely isn’t containable; if one robot army beats another, that doesn’t necessarily advance any objective, so the robots could sabotage infrastructure, take hostages, etc., until concessions are made; (3) a robot with autonomy to make decisions to destroy has one of two mindsets: (a) it is fixated on clear objectives, or (b) it is open-minded enough to refine goals and handle nuance, both of which are equally troubling.

You’d think there would be policies and stances against integrating AI into the military. Google had one, and this year they revoked it. I guess they see it as inevitable, and are stuck in the “we need to be dominant” strategy. Realistically, we will always fall into these acceleration races unless we establish some global armistice, but those are complex and very hard to broker; there is only urgency to do this once we cross a line and realize how badly we’ve screwed up (i.e., with nuclear weapons). The difference is, as technology advances, (1) the first consequence might be existential, and (2) if it’s not existential but it is autonomous, it may be too late to contain. I think one of the defining challenges of our century is how to create civic structures around exponential technologies that can contain them before a wake-up incident.