Saturday, May 21, 2022

Self-driving cars will soon be able to easily hide in plain sight. We must not allow them.


Self-driving cars will soon be able to easily hide in plain sight. The lidar sensors on the roof that currently mark many of them are likely to get smaller. Mercedes vehicles with the new, partially automated Drive Pilot system, which carries its lidar sensors behind the grille of the car, are already indistinguishable from ordinary human-driven vehicles with the naked eye.

Is this a good thing? As part of our Driverless Futures project at University College London, my colleagues and I recently completed the largest and most comprehensive survey of citizens' attitudes toward self-driving vehicles and traffic rules. One of the questions we decided to ask, after conducting more than 50 in-depth interviews with experts, was whether autonomous cars should be labeled. The consensus of our sample of 4,800 UK citizens is clear: 87% agreed with the statement "It should be clear to other road users whether a vehicle is driving itself" (only 4% disagreed; the rest were unsure).

We sent the same survey to a smaller group of experts. They were less convinced: 44% agreed and 28% disagreed that a vehicle’s status should be advertised. The question is not easy. There are valid arguments on both sides.

One could argue that people should, in principle, know when they are interacting with robots. That was the argument put forward in a 2017 report commissioned by the UK Engineering and Physical Sciences Research Council. "Robots are manufactured artefacts," it said. "They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent." If self-driving cars are actually being tested on public roads, other road users could be considered subjects in that experiment and should be given something like informed consent. Another argument for labeling, a practical one, is that, as with a car driven by a student driver, it is safer to give a wide berth to a vehicle that may not behave like one operated by a well-trained human.

There are also arguments against labeling. A label could be seen as a waiver of innovators' responsibilities, implying that it is up to others to recognize and accommodate a self-driving vehicle. And it could be argued that a new label, without a clear shared understanding of the technology's limits, would only create confusion on roads that are already full of distractions.

From a scientific point of view, labels also affect data collection. If a self-driving car is learning to drive, and others know this and behave differently, this could taint the data it collects. Something like this seemed to be on the mind of a Volvo executive who told a reporter in 2016 that "just to be on the safe side," the company would use unmarked cars for its proposed self-driving trial on UK roads. "I'm pretty sure that people will challenge them if they are marked by doing really harsh braking in front of a self-driving car or putting themselves in the way," he said.

On balance, the arguments for labeling, at least in the short term, are more convincing. This debate is about more than just self-driving cars. It goes to the heart of the question of how new technologies should be regulated. Developers of emerging technologies, who often portray them initially as disruptive and world-changing, tend to recast them as merely incremental and unproblematic once regulators come knocking. But new technologies don't just fit into the world as it is. They reshape worlds. If we are to realize their benefits and make good decisions about their risks, we have to be honest about them.

To better understand and manage the deployment of autonomous cars, we need to dispel the myth that computers will drive just like humans, only better. Management professor Ajay Agrawal, for example, has argued that self-driving cars basically just do what drivers do, but more efficiently: "Humans have data coming in through the sensors – the cameras on our face and the microphones on the sides of our heads – and the data comes in, we process the data with our monkey brains and then we take actions and our actions are very limited: we can turn left, we can turn right, we can brake, we can accelerate."
