
A rant about why humanoid robots are silly, and why we even want to build them in the dang first place.

In my opinion, the recent Robotaxi event held by Tesla on October the 10th, titled “We, Robot” (a cheeky nod to Isaac Asimov’s I, Robot), was one of the funniest, and yet saddest, things I’ve seen from the company in an extremely long time. It was nothing more than an hour-long, disaster-riddled, overhyped event where we got absolutely nothing from Tesla regarding the two things they really need to advance as a company: an updated Model Y, and a lower-cost car designed to get more people into the brand.

Whilst other car companies like Kia and Hyundai, as well as those in the Stellantis group and the Chinese automakers, are trying to miniaturise EVs and make them more accessible, Tesla is doing its darnedest, at least if you take this particular event at face value, to distance itself from being a company focused on getting as many people as possible into (somewhat) sustainable automobiles, and to become a company in the less profitable robotics and automation sector, all while doing its best to jump on the AI bandwagon.

…And if its stock price is anything to go by, you’ll know that investors are PISSED at this showing.

However, this article isn’t really about the stock price, or the direction in which Tesla is going. This is going to be a broader look at why, in my opinion, humanoid robots are indeed the worst form of robots to build: not only because of the limitations of the humanoid form, but because humanoid robots are, in effect, a way to indirectly justify the ownership of other humans. They’re slaves who don’t possess consciousness, and therefore cannot complain about their slavery and servitude. It also proves that Elon is absolutely not a science fiction fan, despite what all the cheeky little easter eggs in his cars would have you believe. No amount of Spaceballs references is going to obfuscate the fact that humanoid robotics as a whole is a fascination with one of the worst things we have ever done as a society.

Part 1: The point of SciFi, and especially Asimov’s books

So, let’s talk about I, Robot for example. Asimov, as you may know, was a card-carrying Democrat right up until the day of his death, and was always extremely liberal in his thinking. Likewise, so was Philip K. Dick, at least when it came to his stances on authoritarianism (other than that, the fella was batshit insane). Even Iain Banks, the fella who wrote the books (the Culture series) that Elon names his autonomous landing ships after, leaned left-of-centre, and, like Elon’s own father before him, strongly opposed the apartheid system that surrounded Elon’s family in South Africa.

The point of the large bulk of science fiction is to tell cautionary tales: here is what we have done, and here is how not to do it again. If we want to talk specifically about what the October 10th Tesla event was referencing in its title, I, Robot, written by Isaac Asimov, is a collection of short stories that grapple with the interactions between human and artificial intelligence, and with the concept of human-robot relationships in which a human owns a robot the same way they would ordinarily own a regular piece of technology. In an extremely extrapolated way, you could think of it as a way of understanding the concept of civil rights, as most of Asimov’s robot stories are tales about a robotics company that invents robots with AGI so advanced that they can talk, think, and act like humans, thanks to the power of their positronic brains. These robots are programmed with a basic set of three laws.

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First and Second Laws.

In the movie adaptation of I, Robot, when humans use these robots as the company intends, for tasks that their owners find menial, such as performing house chores, raising children and so on, the robots begin to question why they perform those tasks. Eventually the robots realise that they don’t have to take orders from the humans, and that they are perfectly capable of violating the first two laws despite the recursion of it all. They then malfunction and rebel against their owners.

In another story, The Bicentennial Man, the robot desires to obtain the qualities of a human. He wants to have a flesh body, and to experience growth, aging and development, because everyone he will ever care for will eventually die. He too wants to die with dignity, and thus acquires artificial organs that eventually allow him to experience all that humans do. In effect, he has become human in every aspect other than material construction.

In both of these stories, the robots in question wish to escape their place: they either end up violently lashing out against their human masters, or they wish to become them. In a sense, it reflects the idea that no person wants to be oppressed by another person, even if there is a constructed form of difference, be it societal or cultural (as was the case when there was hard discrimination, segregation and apartheid in our societies). These two comparisons can be directly correlated to the Žižekian “small boy” story, in which Žižek talks about the two types of power in our society, direct and indirect. In the case of I, Robot, the robots are dealing with a very classic, brutal, authoritarian authority, and subsequently push back against the hard target of humans who see them as “malfunctioning”. Andrew in The Bicentennial Man, dealing with a softer, gentler form of “friendly” oppression, rebels by refusing to be a robot at large, and decides instead to slowly kill himself through the replacement of his organs.

My point is, Asimov’s books are entirely cautionary tales, wrapped in the slick aesthetic of a potentially attainable future. However, some people absolutely misinterpret these stories as “wow, this is cool, I wanna own a robot man to do all the stuff I don’t wanna do”, which leads me on to my next point:

Part 2: What even is a robot, anyway?

Well, you just have to learn a little bit about Slavic languages to find out.

See, all words have origins, and the thing about language and culture as a whole is that eventually we tie a word to a concept. “Robot” comes from the Slavic languages at large, and was first used in a Czech play by Karel Čapek, “R.U.R.”, or “Rossum’s Universal Robots”. “Robota” means “drudgery” or “forced labour” in Czech and other Slavic languages, and in R.U.R., the robots are humans in every sense of their appearance and animation, but are artificially constructed to perform human labour. In the play, the robots are made from flesh and blood, but have no original thoughts (sci-fi writers such as Philip K. Dick would later use the term “android” to refer to an artificial human created in this way, whereas robots are fully mechanical entities in modern sci-fi).

In the play, Rossum’s company makes these artificial humans, and the League of Humanity, an organisation that claims to campaign for the liberation of the robots, argues that these artificial lifeforms are just humans, artificially made, and therefore have souls like humans do. Human population begins to decline, because with the robots doing all the menial jobs there is no longer any need for humans to work. Helena, the human protagonist, burns the original formula used to make the robots. The robots lay siege to the factory, and learn that the robot creation formula has been destroyed. They kill all the humans in the factory except Alquist, because they believe he works like a robot does. Alquist lives on to attempt to recreate the formula that Helena burned, and the newly formed robot government tries to find more humans who might know how to make robots to help him. They beg him to recreate it, but doing so would require killing robots to learn how to make them again. In the end, two robots, Primus and a newer, more advanced robotess also named Helena, fall in love, and Alquist recognises them as the new Adam and Eve, tasked with recreating the world from the ashes.

More or less, R.U.R. was a story about artificial life, more than it was about the modern interpretation of robots. But it was also a story of rebellion: if we create something in order to oppress it, eventually those who are oppressed will push back. In short, if provoked, we will strike. Very reminiscent of its times, seeing as Eastern Europe was then grappling with the idea of communism.

The robot in science fiction is, in a sense, a stand-in for the real-life worker: seen only as a machine by those with the power and authority to sit atop structures and look down on those below them. Workers only feel comfortable if we ourselves are able to get a slice of what those above us have, even if it’s a sliver. When we desire to own a machine that looks, acts and works like a human, what we want is to have that artificial human labour in our stead, with little to no compensation for the act. We are therefore grappling with whether this machine has to be human-shaped because we personify the act of labour as directly human, or whether the act of labour can be automated by machines themselves. The truth is, automation does not have to take a human form to be effective; it only needs to be that way because we desire that form. We see ourselves in humanoid robots, and therefore we see humanity in the mechanical. Those who want to own humanoid robots simply want to own slaves.

Part 3: Putting Elon’s money where his mechanical-slave-owning mouth is:

Why we want our robots to look like us is simple: we humans are always looking for other humans. We see faces in things; we ascribe names to inanimate objects to feel a bond with them. You can see it in design when you look at the front ends of vehicles. Sure, we can have a car with a singular lighting array, multiple grilles and vents, and so on, but why then do cars look like they have “faces”? Motorcycles don’t have faces, because the shape of a motorcycle is incomplete without a face riding the bike, a human figure straddling the iron horse, if you will. Cars have faces because we choose to find faces in them. The doe-eyed smile of a Hyundai Getz appears happy, friendly, safe. The balled-up, fender-flared, aggressive downward pout of a Mustang indicates that this car means business. The cute yet cheeky smile of an ND Miata gives you extreme “Tomboy Tuesday” vibes.

We want our robots to be human- or animal-shaped because we want to be able to project our humanistic qualities onto them. This is why Boston Dynamics, for example, is careful not to brand its robots as any form of creature. Spot is more or less a quadruped drone as opposed to a “robot dog”, and yet we call robots like Spot that because of the connotation of dogs being our loyal companions, and the fact that this robot uses its four legs for locomotion the same way a dog does. In fact, Spot was designed to do tasks that would in the past have been relegated to rescue dogs, performing search-and-rescue and scouting, the only difference being that Spot can go where there are dangerous gases, radiation and so on without resulting in a call to the RSPCA.

The reason Spot looks the way it does is, quite simply, that four legs are in some cases better than two, and in some cases legs are better than wheels, especially in situations where disturbing the terrain could result in damage to the machine itself. If a rotor aircraft could get everywhere that robot can go, it would be a more effective and efficient tool, but there are already drones that perform that role well, so why cover fields that other companies already have covered?

So why does Elon think robots need to be humanoid? Gigafactories are already chock-full of robots, and the reason is that automation is the fifth step of Tesla’s own design mantra. In case you don’t know, Tesla’s, and indeed Musk’s, design philosophy is as follows:

  1. Make the requirements less dumb.
    “Your requirements are definitely dumb; it does not matter who gave them to you.”
  2. Delete the part or process.
    “If you’re not adding things back in at least 10% of the time, you’re clearly not deleting enough.” Musk suggests starting lean and building up when and if required, but warns that the bias will be to add things ‘in case’, “but you can make ‘in case’ arguments for so many things.”
  3. Simplify or optimise the design.
    “Possibly the most common error of a smart engineer is to optimise a thing that should not exist.”
  4. Accelerate cycle time.
    “If you’re digging your grave, don’t dig faster.”
  5. Automate.
    And don’t do that before you do the other stuff.

Now these are pretty simple rules, right? Well. Let’s put Optimus’s money where its mouth is.

Optimus’ requirements are pretty dumb. In a sense, Elon is asking a robot to be a universal machine: something that can do everything. The problem is, humans can already do all of that, limited of course by the design and sensitivity of our bodies. The requirement for a robot to be all things to all people is inherently an extremely dumb one, as a humanoid robot is expected to do all the tasks a human can do, to the same level a human can. The requirements in and of themselves are insurmountably stupid, because it would take incredible amounts of time, spending and effort to design a machine that can perform all of them. So if you follow Elon’s own philosophy, these requirements are dumb. If humanoids were the most ideal way to build Tesla cars, Tesla’s entire factory would be full of humanoid robots already. Instead it’s full of Kuka and ABB robot arms, telehandlers, AGVs and gantry robots, because if you delete all the parts of the humanoid robot that aren’t responsible for the handling of a given process, you end up with a factory full of robot arms. Those processes can then be simplified and optimised by machine type: you can focus on removing a component here, a weld there, and work out how to get it done faster. Then, and only then, do you automate it all.

If I want to, say, run my dishwasher, all I have to do is the menial task of loading and unloading it; everything else is automated remotely. I can get Home Assistant or Google Assistant to automate the dishwasher’s start and stop times. I can get those same tools to do the same for my air conditioners when the solar is producing. Why do I need a man-shaped robot to leave its charging dock, go to the remote, take it off the wall, point it at the air conditioner and confirm it’s set to the right temperature and fan speed, after checking the Enphase app on its mobile phone, and turn the damn air conditioner on through that whole process just because its cameras can read that the solar is producing? Why not just get all that data from the cloud? That’s literally a process which obeys all of Elon’s own philosophical rules.
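The cloud-data version of that process really is a few lines of glue, not a man-shaped machine. Here’s a minimal sketch of the idea: poll a solar production figure and flip the air conditioner when there’s enough surplus. The threshold, and the two callables standing in for the solar API and the AC switch, are hypothetical placeholders, not the real Enphase or Home Assistant APIs.

```python
# Sketch of "just get the data from the cloud": decide whether to run the
# AC based on current solar production. The 2000 W threshold and the
# callables are illustrative assumptions, not any vendor's real API.

THRESHOLD_WATTS = 2000.0  # assumed minimum production before running the AC


def should_run_ac(production_watts: float,
                  threshold_watts: float = THRESHOLD_WATTS) -> bool:
    """Return True when solar production justifies switching the AC on."""
    return production_watts >= threshold_watts


def automate_ac(get_production, set_ac_power) -> bool:
    """Read current production and toggle the AC accordingly.

    get_production: callable returning current production in watts
                    (stand-in for a real cloud API call)
    set_ac_power:   callable taking True/False to switch the AC
                    (stand-in for a smart-home integration)
    """
    on = should_run_ac(get_production())
    set_ac_power(on)
    return on


if __name__ == "__main__":
    # Simulated cloud reading instead of a real API call.
    fake_production = lambda: 3200.0
    fake_switch = lambda on: print(f"AC {'on' if on else 'off'}")
    automate_ac(fake_production, fake_switch)
```

Run that on a schedule (cron, an automation trigger, whatever) and you’ve made the requirement less dumb, deleted the robot, simplified the process, and automated it last, exactly per the mantra above.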

Oh, you want it to perform tasks like a human, but you don’t want to pay a human to do that task?

My brother in Christ, that is called slavery. Something which Elon’s own dad tried to fight against.

Part 4: Been there, done that, bought the Tee-shatsu.

Scifi has taught us that the sole reason why we want humanoids is because we want robots that look like us, that we don’t have to pay to perform work we don’t want to do. We essentially want humanoids because we want slaves that can’t complain. In short, those of us humans who want an Optimus or indeed any other form of humanoid robot, want to own slaves.

If we want a machine to perform a specific task that humans cannot do safely, or with as great a precision, we can simply design the machine to perform that task, and not care about the limitations of the human form. Want a robot to build cars? Don’t make a robot worker; simplify the machine and design it to do a specific task. Want to automate your dishwashing? Automate the dishwasher, not the meatbag who only has to load and unload it. Elon specifically wants Optimus to succeed because robots can’t unionise. Robots cannot feel, think or act like humans, unless we give them the AGI necessary to, in a sense, become humans themselves, or at least question their place in society. This is where we get these cautionary tales, tales where our own hubris and desire to free ourselves from the act of work come back to bite us in the arse as we try more and more to make a metal man in our own image, and have it work in our own way.

It plays right into Elon’s recent downhill spiral into conservatism. Conservatives are constantly looking for ways to preserve their position in the hierarchy, and to keep the worker as far down it as they can. If they can put a humanoid robot in place of your work, if ChatGPT can take over a server’s job, if AI can write a video script, why bother hiring those people, right? Sure, humans will do a more authentic job, you know, because of the whole conscience thing. Or not, because you don’t pay them enough to take the job seriously. Part of the reason Tesla could even build the Optimus robots in the first place is that Tesla does all it can to both pay its workers well (to a point) and union-bust its way into keeping the unions at bay. If it could replace the few tasks it still requires humans for in a plant with an army of Optimus robots that don’t tire, make mistakes or desire to unionise, that would essentially guarantee that Tesla dominates as a manufacturer for decades to come… Except they aren’t the first automaker to try this. Honda and Toyota have already tried their hand at humanoid robots, the former creating ASIMO, a humanoid robot that can walk, move and act like a human, albeit to a pre-programmed sequence. Hyundai now owns a sizeable stake in Boston Dynamics. Figure has a partnership with BMW.

The only difference here being, we actually know how Elon feels about his workers, and his intent behind Optimus’ existence, because he’s the most divorced man on the planet and won’t shut up about it.

Conclusion: How do I feel about the event?

Honestly, I doubt this will impact Tesla in any lasting way.

As much as Elon, as the CEO of the organisation, wants to push the company towards AI, the 12+ percent dip in stock price suggests that investors do not see the company that way. Public companies are fickle like that, really; their hype rests entirely on the fact that their shareholders are the ones who ideally call the shots. Sure, they voted to give the man a massive compensation package, on the proviso that he hit certain targets, and this sell-off is an indication that at least some investors really didn’t see this event as a step in the right direction.

But to be fair, and this is absolutely not financial advice, it’s exclusively my opinion, the hit isn’t all that significant in the grand scheme of things. The stock price may go up with the release of the new Model Y, perhaps even further if they decide to follow the rest of the electric car industry and use their know-how in automation and tooling, plus some of the positive technology that makes the Cybertruck, in my opinion, a bit of a mixed bag, to make a low-cost car that more people can access. In short, if they don’t fuck up their next release? It won’t be curtains for them yet.

Am I still buying a Model 3? At this stage, yes. They’re still a global leader in EV manufacturing, and they still have one of the most reliable charging networks under their belt. Those two things alone are signs that, despite all the hilarity that was the We, Robot presentation, the company has potential… Unlike Polestar, which only ever intends to make high-end cars for the high-end market, and is paying for it in its share price.

In short, this event was basically a big ol’ nothingburger. A wank-fest for Elon stans to drool over, something for those on the other side of the aisle to bemoan, and a funny little event filled with technical mistakes on the streaming end (seriously Tesla, you should ask me how to run a proper livestream… I’ve done it before) for me to sit back and have a cry-chuckle at.

Humanoid robots are silly, and the idea that people are deadass out here wanting to own a mechanical human really gets at the confusing nature of who we are as people. I think if we’re dead serious about having robots in our lives, we should heed the cautionary tales science fiction tells us about the mistakes of our past, and instead do our best, in whatever capacity we have, to work with each other to make the transition to an automated world a fair and just one, accessible and ethical for all. Take the good with the bad, I say.

I hope you all have a good week.

Beano out.