The most basic definition of a robot is a machine capable of carrying out pre-programmed tasks automatically. To be fair, though, even roboticists argue about what exactly constitutes a robot, as opposed to a simpler machine.

Now that we’re building robots designed for human interaction, there are more exciting possibilities than ever – and more than a little fear and skepticism. Robots conjure up different images for different people. Some are influenced by science fiction, while others picture industrial robots performing tasks previously done by humans. But most negative impressions are based on ideas that are unlikely to come to fruition.

The history of automation is key to understanding people’s attitudes towards robots. Ever since the Luddites destroyed textile machines in the early 1800s, there have been people hostile to, or threatened by, automation that replaces human labor. When the first food delivery robots were sent out onto the streets, some people kicked them as they wheeled by.

It’s true that robots can bring about changes that some will dislike. But they can also make life easier and more enjoyable. In fact, from entertainment to elder care, we’re socializing with robots more than ever. We’re also building them for education, disaster relief, healthcare, research, and conservation. And perhaps most importantly, robots are not unleashed upon innocent people or given free rein to take over the world. Rather, we are collaborating with robots in ways that drive innovation in just about every area of life.

Teamwork

To be sure, there are jobs that no longer exist because we’ve built robots to take on those tasks. This is especially true in the industrial sector. And it’s a sure bet that this will continue to happen, although probably to a lesser extent than most people fear. But it’s also true that those replaced tasks were often dangerous, dirty, and unpleasant jobs for humans. And some of those displaced workers have been retrained to take on higher-skill jobs, such as controlling those very robots.

The point is, we’re in control. And while it may be fruitful territory for science fiction writers to imagine doomsday scenarios like Skynet, the rest of us are much better off thinking of robots as tools. Yes, they’re programmed with artificial intelligence algorithms of varying degrees of sophistication. But that’s a separate issue. We need to be careful not to discount the benefits of robots just because we’re concerned about AI. The most controversial uses of AI thus far have been well outside its use in robotics.

We’ll be interacting with robots more and more in the future, and for the most part that’s a good thing. Take just one form of robot as an example – exoskeletons. By their very nature, exoskeletons don’t function autonomously – they serve only to augment our own capabilities. They can help us do backbreaking labor in factories (without breaking our backs). They help soldiers traverse rough terrain. And they can even help the paralyzed become mobile again. Yet exoskeletons are also a type of robot whose image we’ve let science fiction dictate to some extent. They’re not turning us into Transformers; they’re just aiding our mobility.

Looking in the mirror

While it’s not always the case in the real world, we tend to think of robots as having a humanoid form. “The Cylons look like us now” was the ultimate “oh, frak” moment for many sci-fi fans. The idea that robots will become indistinguishable from humans is a well-worn trope. No longer would humanoid robots be friendly housekeepers like Rosie from The Jetsons. Instead, they would be out to replace us. But such narratives are pure fiction. We’re no closer to creating Westworld than we are to building a real-life Marvel universe.

So before you give your toaster the side-eye, it’s important to remember that robots will probably always be quite different from us. No matter how many somersaulting, stair-climbing robots Boston Dynamics showcases in their videos, there’s a reason we’re not rubbing shoulders with them – a lot of them just aren’t viable right now.

That’s not to say there are no legitimate concerns when it comes to making robots that look like humans or animals. But when researchers make a robot with skin that feels like a real person’s, for example, it’s so those robots can interact with us in a way that improves our lives. For example, we might make a bionic arm for an amputee that looks just like a real one. It will also have sensory capabilities to allow the user’s brain to get the same signals it would have received with their original arm.

Humanoid robots are currently used primarily in healthcare to assist patients emotionally as well as physically. These machines can interact with vulnerable populations. They encourage social behaviors in children with severe autism or developmental disabilities. Their mere presence can reduce stress and loneliness in elderly populations. Robots can patiently sit at bedsides to comfort patients with no family. Headlines like “The artificial skin that allows a robot to feel” or “A robot in human skin” might creep you out, but don’t mistake a robotic component’s ability to use sensors to “feel” for any kind of self-awareness. Robotic senses simply give these machines more functionality; they don’t bring us any closer to a robo-dystopia.

Getting around

One important aspect of robotics – and of how we think about machines that act upon the world – is the way robots move. One reason we design robots in humanoid forms is that we expect them to navigate our built environments, which are full of obstacles like curbs and stairs. Walking is the form of locomotion that seems most natural. But sophisticated engineering is involved in getting robots to move smoothly and to mimic creatures in the natural world.

While walking, especially bipedal walking, is quite challenging, roboticists have made enormous progress on that front. Boston Dynamics’ clunky Atlas can even do a backflip now. Another approach entirely is robots that can “snake” through disaster areas looking for survivors or even slither up ladders. Rolls-Royce has engineered robots based on the movement of both snakes and swarming bugs. Such machines are capable of moving around inside the company’s jet engines for repair purposes. We’ve even developed amphibious snake robots that can navigate extreme environments.

DARPA has long been at the forefront of robotic technology and has developed multiple robots with interesting locomotive capabilities. While known for its weapons development programs, the agency’s work with robots often involves improving a machine’s dexterity or getting it to handle something gently. It has built robots that fly and robots that swim, and it’s aiming to develop robots that can change their locomotion based on the surrounding environment.

Some university research groups are making robots that move like jellyfish and can monitor and protect coral reefs without altering the ecosystem too much. Others are making robots that we can swallow to do everything from delivering drugs to monitoring our gut health. These “soft” robots are incredible feats of engineering.

Robot locomotion can also serve simply to entertain. Disney’s imagineers know this well, having used animatronics in their attractions for decades. Their latest iteration is an untethered, interactive Groot robot that will no doubt soon be interacting with the most vulnerable audience of all – children. And their Stuntronics autonomous stunt doubles have the potential both to improve theme park attractions and to make filming superhero movies safer for human stunt performers.

Behavior and engagement

While “cobots” can interact with people for collaboration purposes in the workplace, other robots are built to interact with us as well. SoftBank’s Pepper robot has done everything from serving as a priest to sitting at the dinner table with people. In an effort to become a more acceptable presence (something that is often unhelpfully phrased as “fitting in” or “gaining trust”), Pepper will even vocalize the programming rules it’s following when asked to perform a task. That transparency has proved important in alleviating people’s concerns about working with and around robots.

But sometimes instead of transparency, it’s forging a human-robot emotional connection that’s desirable. Columbia Engineering has made a robot that can make facial expressions based on different social situations. The goal is to approximate friendliness, because we humans are socialized to respond to such cues. Here again, we shouldn’t mistake such features as any kind of ability for robots to feel the emotions they are meant to convey. They are merely a way to make robots more acceptable and more useful to us. And while skeptics have argued against any attempt at “humanizing” robots, we tend to do it anyway, for example holding funerals for fallen bomb-defusing robots.

It’s important to remember that the ways in which a robot might act human, such as making a face or changing its posture or tone, are simply bits of programming. Robots don’t have mouths that they use to smile, they have approximations of mouths that can be tugged and pulled to look friendly based on computer code. If they seem to “read their environment” in order to learn how to respond appropriately, that’s only because they have sensors built in to measure things outside themselves, producing data that gets fed into their algorithms. Those algorithms work faster than our brains do, but they don’t have the same nuance. And while a goal of many AI researchers is to develop neural networks that more closely mimic the human brain, those same researchers will likely tell you we’re a long way off from making a robot version of an organ we don’t really understand to begin with.

All that said, robots can and do make us feel. They elicit emotions that might make us feel both vulnerable and manipulated. But even in cases when they ask (or beg) humans to perform an action with a pre-programmed tone of voice, no one is hiding the fact that they’re machines. Even Pepper, designed to be a real team player, is just collecting data to produce personalized responses. This is why we study how these robots produce emotions in humans instead of simply reducing their capacity for interaction.

In-store experiences

Retail is one area in which we’ll likely find many uses for social robots. Putting robots into physical shops can potentially boost revenue. The machines can interact with customers when staff members are busy, for example. And getting information on the location of an item from a retail robot might be welcomed.

Walmart began experimenting with robots in their stores several years ago, as did Lowe’s with their LoweBot. Grocery chains like Giant Food Stores have also begun testing out the utility and challenges of having robots roam the aisles. The Covid pandemic has encouraged many of these retail chains to expand their use of robots, and has also encouraged others to begin exploring the possibilities.

Robots can reliably help keep shelves stocked and clean up spills. And it won’t be long before their improving social skills (and people’s increasing comfort interacting with them) allow them to take on more customer service roles. And robots can also measure customer behavior and collect data, a capability that could allow retailers to continually improve their products and services.

The next wave of retail technology might have us thinking of entire stores as giant robots, in a way. Amazon has popularized the notion of autonomous retail with its Amazon Go stores, which promise an end to checkout lines and cash registers. Instead, products are tracked via cameras and sensors so that customers are simply charged appropriately as they walk out. Other retailers are getting in on the autonomous action as well. Convenience store chain Circle K and Dutch convenience store operator Wundermart are but two examples, each using a different vision-based system that’s competing with Amazon for dominance in the nascent autonomous retail market.

Where we’re going

It’s interesting to see in our current post-pandemic environment that jobs which pay little and require plenty of human interaction or hard labor are going unfilled. Will we be angered by further waves of automation if robots are replacing jobs that people didn’t want anyway? Will our feelings about robots that smile change when they end up in customer service positions? Will we find soft robot hands creepy when they’re the reason our fruit doesn’t get bruised between being plucked from the tree and bagged at the grocery store?

It’s clear that in places like hospitals and elder care facilities, robots will continue to positively impact human lives. In places like theme parks, we are going to get ever more sophisticated animatronic entertainment. We may be slower to welcome robots into our stores and onto our sidewalks, but when they’re bringing us our delivery orders, we’ll likely come around. We’ll probably get used to seeing them in hotels and serving as docents at museums and galleries.

As long as we remember that we’re the ones in control and that we have choices in how we use robots, we should feel empowered to keep on innovating. Robots will ultimately help us in many ways and improve the common good. And our own ingenuity can help us design them in ways that are less alienating to the people they are meant to help.

Featured Image: Pepper Robot via Shutterstock