Google has written a "robot constitution", one of a series of measures intended to limit the harm its robots could cause.

The company hopes that its DeepMind robotics division will one day build a personal assistant robot that can respond to requests. Such a robot could be asked, for example, to tidy the house or cook a nice meal.

But even such a seemingly simple request may be beyond a robot's understanding, and it could even be dangerous: a robot may not realise, for instance, that it should not pursue tidying the house so aggressively that its owner comes to harm.

The company has now announced a series of developments that it hopes will make it easier to build robots that can help with such tasks without causing any harm. These systems are intended to "enable robots to make faster decisions and better understand and navigate their environment", and to do so safely.

Among the announcements is a new system called AutoRT, which uses artificial intelligence to understand human intentions. It does this using a range of models, including a large language model (LLM) of the kind that powers ChatGPT.

The system takes data from the robot's cameras and passes it to a visual language model, or VLM, which can describe the scene and the objects in it in words. That description is then handed to the LLM, which generates a list of tasks that could be done with those objects and decides which of them the robot should attempt.
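That data flow can be sketched in a few lines of code. The sketch below is a hypothetical illustration only, with toy stand-ins for the VLM and LLM; the function names and canned outputs are assumptions made for this example, not Google's published API.

```python
# A minimal sketch of the described pipeline, with toy stand-ins for the real
# models. All names and outputs here are assumptions for illustration.

def describe_scene(camera_image: bytes) -> str:
    """Stand-in for the visual language model (VLM) that captions the scene."""
    return "a kitchen counter with a sponge, a mug and an open drawer"

def propose_tasks(scene_description: str) -> list[str]:
    """Stand-in for the LLM that turns the description into candidate tasks."""
    return [
        "wipe the counter with the sponge",
        "put the mug in the drawer",
    ]

def choose_task(candidates: list[str]) -> str:
    """Stand-in for the step that decides which proposed task to attempt."""
    return candidates[0]

def autort_style_step(camera_image: bytes) -> str:
    description = describe_scene(camera_image)  # pixels -> words
    candidates = propose_tasks(description)     # words -> possible tasks
    return choose_task(candidates)              # pick one task to execute

print(autort_style_step(b"<camera frame>"))
```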

However, Google also states that in order to truly integrate these robots into our daily lives, people need to be sure that they will behave safely. To that end, Google has added what it calls a Robot Constitution to the LLM that makes decisions within the AutoRT system.

Google says it is "a set of safety-focused guidelines for robots to follow when choosing tasks".

"These rules were inspired in part by Isaac Asimov's Three Laws of Robotics, the first and most important of which states that a robot 'cannot harm a human,'" Google wrote.

Other safety rules require that no robot should attempt tasks involving people, animals, sharp objects or power tools.

The system can then use these rules to guide its behaviour and avoid dangerous activities, much as ChatGPT is instructed not to help people with illegal activities.
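As a simplified illustration, such rules could be applied as a screen over the tasks the LLM proposes. Google describes the constitution as prompt-style guidance given to the model itself, so the keyword filter below is only an assumed stand-in, kept short so the example is runnable.

```python
# A simplified, assumed illustration of constitution-style screening of tasks.
# In AutoRT the rules are reportedly given to the LLM as prompt text, not as a
# keyword filter; this is only a toy approximation of the same idea.

FORBIDDEN_TERMS = ["person", "people", "animal", "knife", "scissors", "drill", "saw"]

def violates_rules(task: str) -> bool:
    """Return True if the task mentions anything the safety rules exclude."""
    lowered = task.lower()
    return any(term in lowered for term in FORBIDDEN_TERMS)

def filter_tasks(candidates: list[str]) -> list[str]:
    """Keep only the proposed tasks that pass the safety screen."""
    return [task for task in candidates if not violates_rules(task)]

print(filter_tasks([
    "put the mug in the drawer",
    "hand the knife to the person at the table",  # rejected: knife, person
]))
```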

However, Google also notes that even with these safeguards, the large models cannot be trusted to be completely safe on their own. It has therefore incorporated more traditional safety measures borrowed from classical robotics, including a system that stops a robot exerting too much force and a human supervisor who can physically shut it down.
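A rough sketch of that kind of classical safeguard is below. The force threshold and the interface are assumptions made for illustration, not figures or mechanisms Google has published.

```python
# An assumed sketch of classical safety checks: a hard cap on actuator force
# and a human-operated stop. The threshold value is illustrative only.

MAX_JOINT_FORCE_NEWTONS = 20.0  # assumed limit; the real threshold is not public

def command_is_safe(requested_force: float, human_stop_pressed: bool) -> bool:
    """Allow a motion command only if force is capped and no stop is active."""
    if human_stop_pressed:
        return False  # the supervisor's physical switch overrides everything
    return requested_force <= MAX_JOINT_FORCE_NEWTONS

print(command_is_safe(requested_force=12.5, human_stop_pressed=False))  # True
print(command_is_safe(requested_force=35.0, human_stop_pressed=False))  # False
```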