“They probably saved millions of lives using those technologies,” he says, “and the result is that it sold… [the necessity of] state surveillance to many Chinese.”
Is there ‘good’ surveillance technology?
Once a person or entity starts using surveillance technology, the slope is extremely slippery: no matter how noble the motive for developing and deploying it, the technology can always be put to darker purposes. For Chin and Lin, China shows how the ‘good’ and ‘bad’ uses of surveillance technology are always intertwined.
They report extensively on how a surveillance system in Hangzhou, the city where Alibaba, Hikvision, Dahua, and many other technology companies are based, was built on the benevolent premise of improving city management. Here, with a dense network of street cameras and a cloud-based ‘city brain’ processing data and issuing orders, the ‘smart city’ system is used to monitor disasters and enable rapid emergency responses. In a notable example, the authors talk to a man who accompanied his mother to the hospital in an ambulance in 2019 after she nearly drowned. The city was able to turn the traffic lights along their route green, reducing the time it took to reach the hospital. It is impossible to argue that this is not a good use of the technology.
But at the same time, it has come to a point where ‘smart city’ technologies are almost indistinguishable from ‘safe city’ technologies, which aim to strengthen the police force and track down suspected criminals. The surveillance company Hikvision, which partially powers the life-saving system in Hangzhou, is the same company that facilitated the mass incarceration of Muslim minorities in Xinjiang.
China is far from the only country where the police rely on a growing number of cameras. Chin and Lin show how New York City police have used and misused cameras to build a facial recognition database and identify suspects, sometimes using legally questionable tactics. (MIT Technology Review also reported earlier this year on how Minnesota police built a database to observe protesters and journalists.)
Chin argues that, given this track record, the technology itself can no longer be considered neutral. “Certain technologies naturally lend themselves to harmful use. Especially with AI being applied to surveillance, they lend themselves to authoritarian outcomes,” he says. And, like nuclear researchers, scientists and engineers in these fields should be more careful about the potential harm of their technology.
It is still possible to disrupt the global supply chain of surveillance technology
There is a sense of pessimism in any conversation about where surveillance technology in China is headed: invasive deployment has become so widespread that it is hard to imagine the country changing course.
But that doesn’t mean people should give up. One key way to intervene, Chin and Lin argue, is to cut off the global surveillance technology supply chain (a network MIT Technology Review wrote about last month).