By Anders Corr*
China is now arguably the leader in biometric surveillance technologies that use artificial intelligence, including facial recognition and all the other biometrics one can imagine — genes, voice, gait, beard, iris, and the old standby, fingerprints.
AI biometrics may even be able to identify someone by the shape of an ear in a crowd of Hong Kong protesters, for example. When the frontliners realized this, they started covering their ears, not just their faces, with the now signature black neck gaiters.
Hong Kong is at the cutting edge of protest technology meant to beat AI biometric surveillance, including the use of laser pointers and spray paint to blind surveillance cameras, and the destruction of “smart” lamp posts. Protesters toppled one such post in August and ripped out the electronic innards of at least 19 others, publishing what they found on social media. The lamp posts are designed to include cameras, image processors, wifi and 4G/5G connections.
A September study by the Carnegie Endowment for International Peace found the following: “Technology linked to Chinese companies — particularly Huawei, Hikvision, Dahua, and ZTE — supply AI surveillance technology in 63 countries, 36 of which have signed onto China’s Belt and Road Initiative (BRI).”
Huawei alone supplies AI surveillance technology to at least 50 countries worldwide. Only Japan’s NEC Corporation comes close, at 14 countries. China subsidizes exports of its surveillance technology through soft loans to countries such as Kenya, Laos, Mongolia, Uganda, and Uzbekistan, where facial recognition is often deployed in prisons, airports, and on public city streets.
China is taking charge not only of the international market for AI biometric surveillance, but also of its governance. The authoritarian country, for example, is systematically influencing the setting of biometric standards at the United Nations International Telecommunication Union (ITU). Multiple Chinese companies are seeking to shape ITU standards so that their facial recognition services interoperate in a standard manner across interlinked platforms.
They seek maximum competitive advantage: ITU standards that reflect what Chinese companies are already doing and that exclude the methods and technologies of competitors. If the ITU rewards China by adopting its preferred standard, China will use this to turbo-charge its own marketing of surveillance technology. That could be a double-edged sword for countries that adopt Chinese technology, as China could build back-doors into the hardware, software or even the standards themselves, making a uniform international surveillance network transparent to Chinese hackers.
China is installing AI surveillance systems at a record pace in China itself. In 2018, China’s market for AI surveillance increased 13.5 percent, compared to a still strong growth of 5 percent for the rest of the world. In other words, the biggest global victims of AI biometric surveillance technology are Chinese citizens themselves, who have no chance to vote for the technology that is used against their attempts to improve their human rights, including the right to vote.
Facial surveillance technology is drawing criticism for privacy violations from organizations like the Electronic Privacy Information Center (EPIC). In October, Washington DC-based EPIC, along with over 90 other non-governmental organizations, called for a global “ban on facial surveillance”.
The EPIC website claims: “In 2019 the Swedish Data Protection Authority prohibited the use of facial recognition in schools. The state of California prohibited the use [of] facial recognition on police-worn body cameras. Several cities in the United States have banned the use of facial recognition systems, and there is a growing protest around the world.”
On Dec. 12, EPIC President Marc Rotenberg told a Boston Global Forum audience at Harvard that facial recognition was used for political control and violated privacy rights.
However, in his prescription to ban face surveillance, he did not sufficiently distinguish between authoritarian and democratic uses of the technology. While China certainly uses facial recognition for political control, helping to enforce laws against public protest, for example, democracies typically protect freedom of assembly. They could even use AI biometric surveillance to protect it, for instance by identifying a terrorist seeking to bomb a protest. Terrorists in Afghanistan, for example, frequently target peaceful political protests by feminists, anti-corruption activists, and veterans.
While Chinese citizens have not consented to AI biometric surveillance that indubitably violates their privacy, citizens in democracies do consent, through their votes, when they choose political representatives who deploy such systems to combat terrorism or espionage.
An old form of biometric surveillance long accepted in democracies is dusting for fingerprints near a public crime scene, such as the Boston Marathon bombing in 2013. Consent could not have been gained from all the innocent people whose fingerprints might have been picked up that day. Yet fingerprint collection would have been accepted as legally and ethically proper, without undue imposition on the privacy rights of innocent individuals whose fingerprints might have been found.
Biometric surveillance is a powerful tool in the hands of democratic or authoritarian governments. Countries like China will not give it up easily, if at all. So if democracies give it up unilaterally, it could weaken them with respect to their more authoritarian peers.
When confronted with this argument at the Harvard conference, Rotenberg said he thought that privacy was fundamental to democracies and so strengthening it could not “weaken” them. But he did not address the argument that a ban on facial surveillance would make it more difficult for democracies to identify terrorists, spies, or school shooters.
While it is true that authoritarian governments use biometric surveillance to violate the privacy and political rights of their citizens, they also use espionage and violence to violate the rights of citizens in democracies. Many Chinese citizens have been caught conducting industrial and other espionage in the United States. Pakistani citizens have utilized terrorist tactics in India. Russian citizens have been caught seeking to assassinate political rivals in Britain and Germany. Allowing such individuals to commit crimes with impunity would seriously weaken the social fabric of our democracies in which the rule of law is decided and enforced by the citizens themselves.
We should oppose the use of biometric surveillance in autocracies to the extent it is used against democracy advocates, and thereby protect the rights of citizens who lack the vote and the right to assemble in those countries. But we should welcome biometric surveillance that is accountable to the public through democratic law, because these powerful technologies can protect democratic societies and their rights from the terrorism, espionage and more mundane crimes that might otherwise destabilize them and make them easier for authoritarian states to infiltrate, exploit or influence.
The bottom line is that if we want democratic forms of governance to prevail over autocratic forms globally, or even if we just want to protect democratic governance from autocrats who seek to destabilize it, we should trust our democratically elected representatives to use the most advanced technological tools available to defend us and our democratic institutions. Trust in a government that is our government by virtue of our votes is the foundation of democracy.
*Anders Corr holds a Ph.D. in government from Harvard University and has worked for U.S. military intelligence as a civilian, including on China and Central Asia.
The views and opinions expressed in this article are those of the author and do not necessarily reflect the official editorial position of UCA News.