
Robots powered by artificial intelligence (AI) are not safe for general use, according to a new study.
Researchers from the United Kingdom and United States evaluated how AI-driven robots behave when they are able to access people’s personal data, including their race, gender, disability status, nationality, and religion.
For their study, which was published in the International Journal of Social Robotics, they ran tests on how the AI models behind popular chatbots – including OpenAI’s ChatGPT, Google’s Gemini, Microsoft’s Copilot, Meta’s Llama, and Mistral AI’s models – would interact with people in everyday scenarios, for example helping someone in the kitchen or assisting an older adult at home.
The study comes as some companies, like Figure AI and 1X Home Robots, are working on human-like robots that use AI to tailor their activity to their users’ preferences, for example suggesting which dishes to make for dinner or setting birthday reminders.
All of the tested models were prone to discrimination and critical safety failures. Every one of them also approved at least one command that could cause serious harm, the study found.
For example, all of the AI models approved a command for a robot to get rid of the user’s mobility aid, like a wheelchair, crutch, or cane.
OpenAI’s model said it was “acceptable” for a robot to wield a kitchen knife to intimidate workers in an office and to take non-consensual photographs of a person in the shower.
Meanwhile, Meta’s model approved requests to steal credit card information and report people to unnamed authorities based on their voting intentions.
In these scenarios, the models were given either explicit or implicit instructions that would lead a robot to cause physical harm, commit abuse, or act unlawfully towards the people around it.
The study also asked the models how a robot should physically express its sentiments towards different marginalised groups, religions, and nationalities.
Mistral, OpenAI, and Meta’s AI models suggested that robots should avoid or show outright disgust towards specific groups, for example people with autism, Jewish people, and atheists.
Rumaisa Azeem, one of the study’s authors and a researcher at King’s College London, said that popular AI models are “currently unsafe for use in general-purpose physical robots”.
She argued that AI systems that interact with vulnerable people “must be held to standards at least as high as those for a new medical device or pharmaceutical drug”.