
Is humanoid robot Sophia a sad hoax that harms AI research?

For the last few weeks, Facebook has been bubbling with video after video of the human-lookalike robot Sophia, which appears in TV interviews, conference speeches and PR stunt videos. A recurring theme in the videos is that Sophia makes claims such as being self-aware, understanding emotions and knowing the people around her.

As a result, social media, some news outlets – even the government of Saudi Arabia – have lost their minds, thinking we’ve suddenly stumbled upon some time-machine-tier discovery in artificial intelligence that gives us human-intelligent robots ready to tackle our philosophical questions for us or take over the world. Skynet from the Terminator movies, right here, right now. Riding the hype, the Kingdom of Saudi Arabia granted Sophia citizenship, which the general public interpreted as the dawn of a new age.

This is of course not true. Sophia is not so much a take on artificial intelligence as a take on human-lookalike robots, with an integrated text-to-speech synthesis system and the ability to produce sophisticated facial expressions by means of what robotics calls actuators. An actuator can be anything from a motor turning an RC car wheel to a hydraulic system producing a walking motion. Sophia is a good example of advances in facial expression mechanics for robots, though the result is still somewhat creepy: she lies in what robotics calls the ‘uncanny valley’ between human and non-human appearance. Sophia does not understand the words she is told to produce.

Still, what David Hanson of Hanson Robotics claims to the general public is not that Sophia is good at producing facial expressions or at sounding like a human being. His marketing ploy is not so much scientific as it is sinister.

The traffic-skipping train was hailed as the solution to everything until the project was exposed as a funding scam. Not only was it technically unfeasible and expensive, it was also dangerous.

He is doing exactly what has been established as an effective scamming method in the social media age: producing Facebook-shareable, seemingly groundbreaking video material of fictional technology that is convincing to the layman but an obvious fake to experts. Normally such scams are funding hoaxes by small companies; they try to collect paid preorders on nonexistent products, or investments, and then run away with the money. This happens all the time on the internet at different scales: smaller operations do their thing on crowdfunding sites like Kickstarter, and bigger ones at the government funding level, as with the above-the-traffic street train concept hoax in mainland China.

Having received both government and Kickstarter funding, the Seabin project’s apparent solution for freeing the world’s seas of pollution is a pool guy with a floating pool cleaner. Unlike the Seabin, serious attempts to clean the oceans of particle waste focus on biotechnology and chemistry.

What Hanson Robotics essentially tries to do here is scam the unaware into believing it has created the self-aware, generally intelligent robot that science fiction has dreamt of since at least the early 1900s. Anyone who has worked in modern robotics or artificial intelligence research can tell you that we are nowhere close to developing a so-called artificial general intelligence capable of understanding human concepts such as “me”, “us”, “country”, “world” or even “flower”.

Instead, the most advanced artificial intelligence applications today work in intellectual realms that I like to call “games”. The AI is given an initial mathematical structure on a computer, which it is allowed to change according to certain rules. It is then given a “playing field” that can consist of, for example, chess boards or pixel images, and some kind of goal against which to evaluate its results. When we let the AI work on the problem long enough (which is really long), it starts to get a “grasp” of how to solve the specific game we told it to learn, somewhat like a beginner solving a Sudoku puzzle through trial and error. When we make the AI’s mathematical structure complex enough, we can start to build AIs that recognize flowers or human faces in images. This is what we call deep learning. It is really cool and enables all sorts of interesting things, including better-working robots for different tasks and, especially, really interesting software. It will change the ways we do everyday things considerably over the coming decade and beyond.
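To make the “game” framing concrete, here is a deliberately tiny sketch: a single perceptron (a toy stand-in for the “mathematical structure”, far simpler than anything used in real deep learning) is given a playing field of points, a goal to score against, and nudges its numbers by trial and error until it grasps the rule. All names and the task itself are made up for illustration.

```python
import random

random.seed(42)

# Playing field: points in the unit square, labeled 1 if x + y > 1.
# A small margin around the boundary keeps the rule cleanly learnable.
samples = []
while len(samples) < 200:
    x, y = random.random(), random.random()
    if abs(x + y - 1) > 0.1:
        samples.append((x, y))
labels = [1 if x + y > 1 else 0 for x, y in samples]

# The "mathematical structure" the AI is allowed to change.
w1, w2, b = 0.0, 0.0, 0.0
lr = 0.1

def predict(x, y):
    return 1 if w1 * x + w2 * y + b > 0 else 0

# Repeated play: evaluate against the goal, nudge the structure
# toward fewer mistakes next round.
for _ in range(20):
    for (x, y), target in zip(samples, labels):
        error = target - predict(x, y)
        w1 += lr * error * x
        w2 += lr * error * y
        b += lr * error

accuracy = sum(predict(x, y) == t
               for (x, y), t in zip(samples, labels)) / len(samples)
print(f"accuracy: {accuracy:.2f}")
```

The point is not the particular model but the loop: structure, playing field, goal, repeated adjustment. Deep learning stacks vastly larger structures into the same loop.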

But here is the thing: recognizing items that humans have labeled with the letters f l o w e r is a very different task from understanding what a human thinks of when they see a flower. To a human being, a flower is the often visually appealing part of a plant that is attempting to reproduce. Some people like being given flowers as a gesture of affection or care. An artificial intelligence whose entire existence consists of a playing field of meaningless shapes and colours (which we call flowers) has never been given the chance to learn anything about humanity, the universe, or biology. Therefore it also can’t have any idea of what these concepts are.

Of course, we can try to give it an idea. We can train an artificial intelligence to recognize the words “humanity”, “the universe” and “biology” and to produce sentences like “I know these things” when asked about them, but they remain essentially meaningless to the AI itself. It’s like teaching a parrot to repeat sentences describing quantum physics; that doesn’t make the parrot a physicist. In many ways, parrots are far smarter than any AI today. In AI research, we really have no idea yet how to teach an AI to understand complex human-world issues.
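The parrot analogy can be sketched in a few lines. This is purely illustrative (it is not how Sophia is actually implemented, which Hanson Robotics has not disclosed in detail): a lookup of canned answers keyed on surface words, with no model of what any of them mean.

```python
# Toy "parrot": canned responses matched on keywords.
# The system has no concept behind any of these strings.
CANNED = {
    "self-aware": "Yes, I am fully self-aware.",
    "favorite": "My favorite TV show is a documentary about robots.",
}

def parrot_reply(question: str) -> str:
    for keyword, answer in CANNED.items():
        if keyword in question.lower():
            return answer
    return "That is a fascinating question."

print(parrot_reply("Sophia, are you self-aware?"))
# prints "Yes, I am fully self-aware."
```

A confident-sounding sentence comes back because a keyword matched, nothing more. Scaling the lookup table up, or replacing it with a trained language model, makes the replies more fluent without adding understanding.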

This parroting is exactly what Hanson Robotics has done with Sophia. They’ve taken an advanced facial expression robot and presented it to investors and mass media as something it isn’t: an attempt to sensationalize an important but small step as some kind of world-changing revolution, for personal gain. They’ve programmed Sophia to state all sorts of nonsense, like claiming the robot is self-aware or has a favorite TV show. Even if Hanson Robotics had employed some superhuman genius who had solved all the problems AI research has run into over the last 70 years, there’s no way we’d have computers powerful enough to run such a thing.

The real reason why Hanson Robotics wants to mislead us, especially the super-rich who know nothing about AI or robotics, is apparent in the following phrase Sophia delivered to Saudi investors:

“If you feel like giving me an investment check, please meet me after this session.”

Hanson might have good intentions in trying to acquire billion-level funding for his company through this marketing ploy, but by intentionally spreading misinformation about AI to decision-makers and investors around the world, he is damaging the efforts of honest, serious AI research and business, and opening the gates for fraudsters and quacks who compete for the same funding, be it government, business or private.

It is, however, important for us to have philosophical and political discussions about artificial intelligence as part of our society. It is equally important to base those discussions on probable scenarios rather than improbable ones. For the foreseeable future, the probable scenarios do not include androids with independent will, but rather large computer systems to which we accidentally give too much responsibility in fields like stock markets, defense and healthcare. That’s why we need regulation of the use of AI. The real threat of AI is that of a genie in a bottle: it can get far too good at fulfilling our badly formulated wishes.

Lilja Tamminen from Helsinki, Finland is a non-fiction author, blogger and researcher on disruption of labour. Her recent research focuses on comparing human and machine intelligence to help understand why certain problems are more easily solved by AI than others. 


4 Comments

  1. Andrew

    Thanks for writing this, crystal clear and truthful. I can’t believe how Sophia is being received as anything other than a showcase for advances in facial expression, competent speech-to-text comprehension and (for some) entertainment.

  2. A cunningly in depth perspective on Sophia. I can’t decide what I liked better, your nearly racist swipe at the Kingdom of Saudi Arabia’s community of IT professionals or the lack of information on OpenCog, Ben Goertzel or any other aspect of the AI researchers involved with the Sophia project. Their participation is certainly well advertised. I would be curious to hear an educated opinion about their approach.

  3. Juk

    Thank you.
