Disturbing images appeared this week of a chess robot breaking the finger of a seven-year-old child during a tournament in Russia.
Public comments on the event reflect concern in the community about the increasing use of robots in our society. Some people joked on social media that the robot was a “bad loser” and in a “bad mood”.
Of course, robots can’t express real human characteristics like anger (at least, not yet). But these comments do show growing concern in the community about the “humanisation” of robots.
While these comments may be made in jest and some of popular culture’s depictions of robots are exaggerated, they do highlight uncertainty about what our future with robots will look like.
We have to ask ourselves: are we ready to face the moral and legal complexities of human-robot interaction?
Many of us have basic forms of artificial intelligence at home. Robot vacuum cleaners, for example, are very popular items in homes across Australia and help us with chores we’d rather not do ourselves.
But as we increase our interaction with robots, we have to consider the dangers and unknown elements when developing this technology.
If we examine the Russian chess incident, we may wonder why the robot acted as it did. The answer is that robots are designed to operate in conditions of certainty; they do not cope well with unexpected events.
In the case of the child with the broken finger, Russian chess officials stated the incident occurred because the child “broke” the safety rules by taking their turn too quickly. One explanation is that when the child moved quickly, the robot misinterpreted the child’s finger as a chess piece.
Whatever the technical reason for the robot’s action, the incident shows there are particular dangers when robots interact directly with humans. Human communication is complex and requires attention to voice and body language. Robots are not yet advanced enough to process those signals and respond appropriately.
What does the law say about robots?
Despite the dangers of human-robot interaction demonstrated by the chess incident, these complexities have not yet been sufficiently considered in Australian law and policy.
A fundamental legal question is who is liable for a robot’s actions. Australian consumer law sets strict requirements for the safety of goods sold in Australia, including provisions for safety standards, safety warnings and manufacturer liability for product defects. Under these laws, the manufacturer of the robot in the chess incident would ordinarily be liable for the harm caused to the child.
However, there are no provisions in our product laws that deal specifically with robots. This is problematic because Australian consumer law provides a defence to liability, which robot manufacturers could use to evade legal responsibility. The defence applies if:
the state of scientific or technical knowledge at the time the goods were supplied by the manufacturer was not such that the safety defect could be discovered.
Simply put, a robot manufacturer could argue that it was not aware of the safety defect and could not have been aware of it. It could also argue that the consumer used the product in an unintended way. For these reasons, I would argue Australia needs more specific laws dealing directly with robots and other emerging technology.
Law reform agencies have done some work to guide our legislators in this area. For example, the Australian Human Rights Commission delivered a landmark Human Rights and Technology report in 2021. The report recommended the Australian government establish an AI Safety Commissioner focused on promoting safety and protecting human rights in the development and use of AI in Australia. The government has not yet implemented this recommendation, but it would provide a way to hold robot manufacturers and suppliers accountable.
Consequences for the future
The chess robot’s actions this week have highlighted the need for greater legal regulation of artificial intelligence and robotics in Australia. This is particularly the case as robots are increasingly used in high-risk settings such as aged care and supporting people with disability. Sex robots are also available in Australia, and they look very human-like, raising ethical and legal concerns about the unforeseen consequences of their use.
The use of robots clearly has some benefits for society: they can increase efficiency, fill staff shortages and do dangerous work for us.
But the issue is complex and requires a considered response. While a robot breaking a child’s finger may be seen as a one-off, it should not be ignored. This event should prompt our legal regulators to implement more sophisticated laws that deal directly with robots and AI.