Last week I delved into the conflation of terminologies around autonomous vehicles. This week I’m going to explore a somewhat broader situation. It involves two different emerging fields that I’m equally interested in – robotics and artificial intelligence – and even touches on those technologies from last week. I’ll try to make this discussion substantive as opposed to just a rant. But this one is less about honest differences of opinion and more about the misuse of generally accepted terminology.
Most people agree that robotics deals with the building of mechanical systems that can mimic manual tasks that humans can do. In practice, especially in factory automation, like the robot pictured above, these systems have no intelligence per se, but rather follow pre-programmed logic. The same is true of Roomba vacuum cleaners and kit robots like Lego Mindstorms and similar educational robots.
Great leaps have been made in robotics in recent years, notably by Boston Dynamics (acquired by Google in 2013) with their animal-like robots such as BigDog. But the creation of a bipedal humanoid robot is still incredibly difficult, and even state-of-the-art robots drastically underperform humans at most tasks. And that’s to say nothing of the uncanny valley problem when we try to make robots look more human.
Artificial intelligence is a branch of computer science. In general it deals with trying to solve computational problems that traditional computing methods cannot easily solve but that humans can easily handle. Good examples are processing visual information and interpreting natural language. There are countless approaches to making machines perform such tasks better.
One such approach is machine learning – this is a statistical approach that relies on vast amounts of “training data” to help a computer program “learn” the differences between data and accurately categorize new data that is sufficiently similar. Advances in the field of machine learning (primarily the use of neural networks) have drastically improved the performance of such systems for many narrow applications such as facial recognition.
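To make the train-then-categorize pattern described above concrete, here is a purely illustrative sketch: a toy nearest-centroid classifier. All names are hypothetical, and real machine learning systems (neural networks especially) are far more sophisticated, but the shape is the same – fit a model to labeled training data, then use it to label new data that is sufficiently similar.

```python
# Toy "machine learning" sketch (illustrative only): learn one centroid
# per label from training data, then categorize new points by nearest
# centroid. Hypothetical names; not any production ML library's API.

def train(samples):
    """Compute one centroid (mean feature vector) per label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Assign the label whose centroid is closest to the new point."""
    def dist_sq(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(centroids, key=lambda label: dist_sq(centroids[label]))

# Toy training data: two clusters in a 2-D feature space.
training = [([1.0, 1.0], "cat"), ([1.2, 0.9], "cat"),
            ([5.0, 5.0], "dog"), ([4.8, 5.2], "dog")]
centroids = train(training)
print(classify(centroids, [1.1, 1.0]))  # a point near the "cat" cluster
```

The "learning" here is just averaging, but it captures the key idea: the program's behavior on new inputs is determined by the data it was trained on, not by hand-written rules for each case.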
This has resulted in some amazing advances for AI. IBM’s Watson can access the entire world’s body of written medical knowledge to assist physicians in making difficult diagnoses. Google DeepMind’s AlphaGo beat one of the world’s top Go players last year and continues to improve. AI assistants like Siri and Alexa (pictured above) can use conversational interactions to answer user questions. But no matter the application, AI is all about the data and the processing of it.
So very clearly robotics is about the hardware and artificial intelligence is about the software. Why, then, does the terminology get confused so often? I’ve read countless articles about “robots” that were chatbots or algorithms that wrote financial articles or did paralegal work. Clearly these things were just computer programs with no physical manifestation besides maybe an avatar. So they are no more robots than your Facebook profile.
The term robot also gets applied to autonomous vehicles. That makes a bit more sense since these are complex mechanical systems. But the addition of autonomy primarily adds sensors for capturing data that gets analyzed by AI – it’s the processing power of humans being replaced, not the motive power. So an autonomous car isn’t really much more robotic than a regular car. Which, if you think about it, serves pretty much the same purpose as the robotic exoskeletons in Aliens and Avatar. But we don’t think of cars that way since they don’t have a humanoid form factor – they’re just machines.
My overarching point, of course, is that AI does not in and of itself make a robot. Nor does a robot like an Avatar exoskeleton or industrial robot arm necessarily require AI. The current success of AI in the market is encouraging electronics manufacturers everywhere to claim that even simple algorithmic tech products are “AI powered”, as was very much in evidence at CES this year.
Of course in the popular imagination and in science fiction (and to some extent in real life) these fields do converge. In their quest to achieve better robots, roboticists increasingly imbue their creations with machine learning capabilities in order to give them autonomy. And purveyors of AI systems like to provide some sort of physical manifestation to give their systems more personality. The marriage of Watson AI with SoftBank’s Pepper robots is a perfect example.
Given the current states of the fields of AI and robotics, we are likely to see world-changing AI applications long before we will see the sorts of believable biomimicry in robots that are depicted in popular culture like Westworld and Ex Machina. But remember that not everything called a robot is really a robot and not everything called AI is really AI – caveat emptor.