Researchers and media outlets have drawn attention to racial and gender bias in artificial intelligence algorithms and in the data used to train large language models such as ChatGPT. The same issues arise, however, with social robots: machines designed to interact with people, with physical bodies modeled on benign human or animal forms.
Socially assistive robotics is a subset of social robotics that aims to engage increasingly varied groups of people. One of its pioneers, Maja Matarić, says the field’s lofty goal is “to create machines that will best help people help themselves.” These robots are already assisting people on the autism spectrum, children with special needs, and stroke victims who need physical rehabilitation.
However, neither their appearance nor their interactions with humans represent even the most fundamental facets of the diversity that exists in our society. As a sociologist who focuses on the interaction between humans and robots, I think that this issue will only worsen. In the United States, children of color are being diagnosed with autism at a higher rate than white children. It’s possible that many of these kids will communicate with white robots.
So why are #RobotsSoWhite, to adapt the popular Twitter hashtag from the 2015 Oscars?
Why most robots are white
Why does Kaspar, a robot designed to engage with children with autism, have rubber skin that resembles a white person’s, given the diversity of the children it will encounter? Why are robots used in education and museum exhibits, such as Nao, Pepper, and iCub, covered in glossy white plastic? In “The Whiteness of AI,” technology ethicist Stephen Cave and science communication researcher Kanta Dihal address racial bias in robotics and AI. They also note that stock photos of robots with shiny white surfaces are ubiquitous online.
What’s happening in this situation?
One problem is which robots are available. Most robots are not designed from the ground up; instead, engineering labs buy them for projects, customize them with their own software, and occasionally combine them with other hardware, such as robotic hands or skin. Robotics teams are therefore constrained by the design decisions of the original developers (Aldebaran for Pepper, the Italian Institute of Technology for iCub). Those designs typically favor a sleek, clinical look in glossy white plastic, evoking the aesthetic of other tech products like the first iPod.
I refer to this as “the poverty of the engineered imaginary” in a paper I gave at the American Sociological Association meeting in 2023.
How people view robots in society
Anthropologist Lucy Suchman addresses a “cultural imaginary” of what robots should look like in her seminal work on human-machine interaction, which was revised with chapters on robotics. The common representations seen in books, pictures, and movies create a cultural imaginary that influences people’s attitudes and perceptions as a whole. The cultural imaginary of robots is rooted in science fiction.
Neda Atanasoski and Kalindi Vora set this cultural imaginary against what they call the “engineered imaginary”: the more pragmatic concerns that shape how computer science and engineering teams conceive of robot bodies. The design of service robots, which are meant to perform routine tasks, is a topic of intense debate in feminist science studies. Works such as Jennifer Rhee’s “The Robotic Imaginary” and Atanasoski and Vora’s “Surrogate Humanity” challenge the racial and gendered presumptions that drive this design.
The cultural imaginary that renders robots as white, and typically female, has its roots in European antiquity and in an explosion of works at the height of industrial modernity. The term “android” first appeared in Auguste Villiers de l’Isle-Adam’s 1886 novel “The Future Eve.” Karel Čapek introduced the word “robot” in his 1920 play “Rossum’s Universal Robots.” And Thea von Harbou’s 1925 novel “Metropolis,” the inspiration for her husband Fritz Lang’s famous 1927 film of the same name, featured a sexualized robot named Maria.
Perhaps ancient Rome provided the template for this cultural fantasy. A passage in Ovid’s “Metamorphoses” (8 C.E.) tells of Pygmalion’s infatuation with a statue of Galatea carved “of snow-white ivory.” Pygmalion asks Venus to bring Galatea to life, and she grants his wish. The story has been retold in many literary, artistic, and cinematic forms; Georges Méliès’ 1898 film adaptation contains some of the earliest special effects in motion pictures. Paintings that capture this moment, such as those by Raoux (1717), Regnault (1786), and Burne-Jones (1868–70 and 1878), highlight the white purity of Galatea’s flesh.
The multidisciplinary path to inclusivity and diversity
How might this cultural legacy be challenged? After all, engineers Tahira Reid and James Gibert argue that diversity and inclusivity should be built into the design of any human-machine interaction. Yet non-white robot designs are rare, aside from robots made in Japan to look Japanese, and even those often reflect stereotypes of submissive femininity.
Simply covering devices in black or brown plastic is not enough; the issue runs deeper. Consider Bina48, a “custom character robot” modeled on the head and shoulders of Bina Aspen, the African American wife of a millionaire. Though impressive, Bina48 is limited in its speech and interaction. African American artist Stephanie Dinkins created a video project based on her conversations with Bina48.
One such exchange highlights the absurdity of discussing racism with a disembodied animated head: the AI-generated responses allude to an anonymous person’s childhood experiences of racism, though the robot has no lived experience of its own. These are implanted memories, like the “memories” of the replicant androids in the “Blade Runner” films.
At the November 2022 Being Human festival in Edinburgh, I spoke about how social science methods can contribute to a more inclusive “engineered imaginary.” For instance, together with Guy Hoffman, a Cornell roboticist, and Caroline Yan Zheng, then a Ph.D. candidate at the Royal College of Art, I issued a call for papers for a book titled Critical Perspectives on Affective Embodied Interaction.
A recurring theme in that collaboration and in related work is how much people’s bodies convey to others through gesture, expression, and vocalization, and how this varies across cultures. Making robots’ appearances reflect the diversity of their users is one thing; expanding the range of interactions they can support is another. Beyond making robots less uniformly white and female, social scientists, interaction designers, and technologists can collaborate to build more cross-cultural sensitivity into gestures and touch, for example.
For the people who rely on the newest socially assistive robots in particular, such work holds the promise of making human-robot interaction less unsettling and frightening.