Baymax as tech optimism

A very large humanoid robot, the white, vinyl Baymax, hugs the teenaged boy Hiro in his attic workshop.
Baymax hugs Hiro. Still from Big Hero 6 (2014).

There are some clichés that just won’t go away. Robots in movies must turn evil. Technology is dangerous. Caring is weakness. In cinema, that’s a great set of dynamics to turn into inherent conflict: humans must be at odds with robots, as if by law. A robot can only betray us.

And plenty of stories reinforce this. For as long as we’ve had machines, we’ve had stories of people being crushed by them. In Fritz Lang’s Metropolis (1927), the utopian megacity is powered by human-operated machines (a clear extension of the centuries of extractive capitalism preceding it, echoing coal mining and the dangerous devices of the industrial revolution) that invariably kill those human operators. The power loom, powered by water and then steam, would rip off the scalps of unsuspecting operators – weavers now disenfranchised by a machine that would end up physically harming them. To build tall cathedrals, scaffolding would raise people to new heights, from which even the most skilled masons would fall. There is something about the scale of industry, and the machinery that makes it tick, that has always hurt people.

I don’t mean to be cynical about the machine just yet. There are plenty of other stories that take it to the next step. The golem, a flexible metaphor meaning anything from people’s desire for control to the cold, inhuman relentlessness of a machine, goes back to at least the Talmud. The metaphors of Metropolis extend to a machine that is designed to deceive: the Maschinenmensch is given the likeness of Maria to trick people into trusting it. The word robotnik finally grounds this metaphor of slavery, drudgery and repetitive work as a task suited only for machines, and in the play that coined the term, Rossum’s Universal Robots, it becomes the canonical idea of a human-shaped but explicitly non-human machine. The deception was always part of the story.

The most common depiction in modern times is the robot as cinematic villain, and often, by extension, technology itself as the antagonist. 2001: A Space Odyssey (1968) features a thinking computer, HAL 9000, which turns on the astronauts aboard the ship and is defeated by having its memory modules pulled out one by one. The Alien series features several androids with ulterior motives – multiple instances of the same Ian Holm-portrayed[1] android in fact betray the protagonists (once in Alien (1979), once in the unremarkable Alien: Romulus (2024)). In the more obviously anti-technology Terminator (1984) the titular android time-travels to kill humans, this time not incidental to any other mission but as its explicit task. Killer robots are also core to the plot of I, Robot (2004), where the clean, shiny, white humanoid robots turn on their maker.

It can be tempting to see Steven Spielberg’s A.I. Artificial Intelligence (2001) as a sympathetic portrayal of a humanoid robot – but even so, it is a tragedy through and through, with misfortune, jealousy, the nature of identity, and deep loneliness inside a facsimile of love. Even a movie that loves the android sees it as tragic. And that’s, roughly, the state of things. Technology makes us think about being human, if it doesn’t try to kill us.

The aesthetic drift that we see in these many fictional androids, robots and humanoid machines is reflected in a lot of modern technology. At the time of writing, every computer is aluminum, glass, black and grey, hard and cold. Every piece of technology has flattened into a touch screen slab, and the human–computer interface is reduced to ‘person uses screen’.

Nothing escapes this flattening. Even scams hide under the guise of technology[2]. Theranos, the discredited medical company and complete scam, is the most notorious of the modern age. Snake oil in a sleek new bottle, technology as the magic word to cover up a flimflam, a lie big enough to land the founder in federal prison. This sleekness of design is now pervasive: biotechnology, personal computing, cars, vapes, refrigerators, everything has the same shape, with a computer built in. Everything is a little bit ‘technology’, and everything looks a little bit the same.

The one exception is Baymax, the iPod of human-shaped robots, the Theranos-that-worked-and-wasn’t-a-scam, the balloon that walks and talks and heals. And the first time I saw Big Hero 6 (2014), I felt a rush of melancholy. So it is possible? So we can have technology that is truly for us, that has personality, purpose, kindness, and isn’t inherently a risk? Can it be that simple?

The plot of Big Hero 6 revolves around two technologies that are, at their core, trying to do the same thing: to help people. The health care robot Baymax is programmed to care and cure, a walking and talking first aid kit. And the microbots, tiny independent mechanisms that can combine freely, work as a sort of universal toolkit, a multitool to the power of ten. They both have a utopian intent, to ‘solve problems’[3] and to be of help to humanity. And like every other story about robots – humanoid or not, autonomous or just as a powerful extension of our own human hands – they are also used to harm humanity.

The microbots are stolen – spoilers – and weaponised immediately. They can assemble and reassemble in any shape, wirelessly connected and all serving one task, essentially a robot-as-cloud, a swarm that can take any form and do anything, particularly harm. They are mind-controlled – a neural interface that confirms that, whatever evil the microbots can do, humans are the ones with evil intent, and the tool is just a tool.[4]

Baymax also turns to evil uses, and this time it happens explicitly against its design intent. Unlike Ash, the android from Alien, who was always ultimately a tool for the Weyland-Yutani corporation, Baymax is a tool for health care, and its corruption is caused by a hack. New code is inserted that turns Baymax into a war robot, intended to destroy the antagonist and the microbots. But this goes against its own code, and the second time the protagonist tries to reprogram Baymax, he is stopped. Baymax refuses to be used for human evil.

Storytelling is often about tension: tension between wanting and getting, between seeing in the distance and getting there, and between helping and hurting – sometimes these things are in a constant, unresolvable balance. So I understand why the robot has to turn evil: for the same reason a British actor with a German accent has to be the villain, which is to say, because that’s the story we keep telling. And the fair question could be: is it interesting to have good technology in our stories? Or does that just become background? Is there a plot to draw from good technology?

Baymax for me proves that there is more to get from “the good robot”. Baymax is like the healer in a role-playing campaign: essential for balance, someone who strengthens the whole team, and even a tempering voice to calm down heated situations. Perhaps that’s a little boring, but boring is essential for survival. Boring is the shape of the world between wars, arguments, hurt, love, innovation and art. Technology, at its finest, is as simple as a paintbrush and as complex as a robot that can heal wounds.

It’s hard to look at today’s attempts at human imitations – at the humanoid robots, at the chatbots writing with cloying sweetness, at the voice assistants telling jokes – and not feel sad. The story has always been the same: the imitation human will always become a threat to the human, will go from imitation to replacement, and eventually we will all be replaced by a mechanical, unfeeling, methodical, uncaring machine that pretends to be your friend.


  1. There’s more to say about his deepfake resurrection in the latter film, but for now it feels a touch ironic – using modern technology to bring back a dead man so he can pretend, once again, to be a double-crossing robot. Did the filmmakers understand the nasty little trick they pulled? ↩︎

  2. Not that that’s new: when modern, evidence-based medicine was still young and heroic, snake oil salesmen relied on the magic of new cures to sell bullshit. ↩︎

  3. I’ve complained at length about the idea that design solves problems, when most design displaces, shapes, redirects, reframes or simplifies problems, but rarely solves them. Truly solving a problem would mean there’d be no more designs for it in future; most problems cannot be solved, only accommodated, reframed, lightened. ↩︎

  4. Not that I’m sympathetic to this argument. Gun violence is exacerbated by guns, or it wouldn’t be gun violence. The tool makes the violence worse. If you create an amplifier, and it amplifies violence, you are not clean. Your hands are bloody too. I don’t buy it. But this is the conceit in the movie. ↩︎