By Karen Allen*
When several United States (US) companies withdrew their facial recognition software products amid concerns about flaws, biases and misuse in the wake of the killing of George Floyd, did leaders in sub-Saharan Africa take note? Have events in the US served as a clarion call for governments to ensure that regulations are in place before the rollout of what many see as one of the most intrusive forms of surveillance?
Emerging biometric technologies have become ubiquitous across many parts of Africa, including facial recognition technologies in Zimbabwe, Uganda and South Africa. They’re used to help combat identity theft, fraud and other threats, including terrorism. Much of the technology is being developed by the private sector as well as Chinese state entities as part of the drive to develop ‘smart cities’ across Africa.
High-speed internet has made it possible to collect vast amounts of data that must be recorded, analysed and stored. Although internet usage in Africa still lagged behind global averages in 2017, one in five African households now uses the internet, according to the World Bank, and the figure is rising.
Where infrastructure permits, biometric data is being used to monitor borders, to grant access to government services such as welfare payments, and to protect commercial entities from fraud. Facial recognition, or its close cousin facial authentication technology, is used by law enforcement and private security companies for security, digital forensics and predictive policing. In business it is among the technologies deployed for access control and client registration.
Facial recognition technology is considered by many to be more reliable than fingerprint technology, and it helps to reduce fraud. In South Africa, the South African Banking Risk Information Centre reported a 20% year-on-year increase in digital banking fraud last year.
Artificial intelligence-driven one-to-one authentication, where someone’s identity is matched against an ID document or another identifier, is less susceptible to abuse as it requires prior consent. This is according to Gur Geva, CEO of iiDENTIFii, a South African biometrics company, speaking at a June Institute for Security Studies (ISS) webinar on the issue.
In contrast, facial recognition technology, where a ‘match’ is made against a database, doesn’t depend on this. Therefore facial recognition technology has attracted the greatest controversy. In the extreme, it has seen US companies such as Clearview AI face legal challenges by civil liberty groups. The company is accused of amassing a database of billions of faces, captured from images placed on social media platforms and other websites, and selling an app to provide access to law enforcement agencies.
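The distinction between one-to-one authentication and one-to-many recognition drawn above can be sketched in code. This is a purely illustrative sketch, not any vendor's actual system: it assumes face images have already been reduced to fixed-length feature vectors ("embeddings") by some upstream model, and the function names, database layout and decision threshold are hypothetical.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Similarity between two face embeddings (illustrative measure)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

THRESHOLD = 0.9  # hypothetical decision threshold

def verify(probe, enrolled):
    """1:1 authentication: compare a probe against one template the
    person knowingly enrolled, e.g. alongside an ID document."""
    return cosine_similarity(probe, enrolled) >= THRESHOLD

def identify(probe, database):
    """1:N recognition: search an entire database of templates --
    the people in it need never have consented to this comparison."""
    best_name, best_score = None, 0.0
    for name, template in database.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= THRESHOLD else None
```

The structural difference is what drives the controversy: `verify` touches only one consented record, while `identify` scans every face in the database, which is why mass-harvested databases of the kind Clearview AI is accused of building raise distinct legal questions.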
Central to this challenge is the issue of presumed consent. South Africa’s Protection of Personal Information (POPI) Act of 2013 sets out the circumstances under which data can be collected, processed and stored. Although much of the law has only just been rolled out and is yet to be tested, such harvesting of data made public for one intended use, and sold on for a different purpose, would almost certainly be deemed illegal under the act.
The potential harms that facial recognition technology exposes citizens to include hacking, invasion of privacy and bias. ‘Cybersecurity is a huge problem,’ says Dr Brett van Niekerk, a cyber expert at the University of KwaZulu-Natal, ‘because biometric technologies operate within the system of cyberspace,’ and the data stored, if not sufficiently secured, can be leaked, altered or stolen.
A stolen identity could then be used almost as a digital balaclava to perpetrate further crimes, for instance gaining access to a building, a computer network or someone’s bank accounts. Furthermore, when data is centralised, as it would be under Kenya’s proposed Huduma Namba digital ID scheme, a single point of failure creates a particular risk of hacking attacks.
The denial of privacy is another potential harm. The right to privacy is enshrined in numerous international conventions and national constitutions. There are concerns that the technology, if not properly checked, is prone to ‘function creep’ and being deployed as a tool of mass surveillance.
This could mean identifying individuals at protests, for example, or, as reported in Uganda, identifying and tracking opposition politicians. There, Huawei has installed facial recognition systems in closed-circuit television cameras as part of its Safe City initiative.
The third threat is algorithmic bias: repeated studies have shown facial recognition technologies to have higher error rates when identifying people of colour. Renée Cummings, a US criminologist and advocate of ethical artificial intelligence, said at the ISS webinar that such biases had led to an ‘over-policing of black and brown communities in the US by law enforcement.’
It prompted a debate about whether countries such as South Africa needed to develop context-specific algorithms before the technology was deployed. This would help ensure that the database against which a face is matched is an accurate reflection of local demographics.
In South Africa, a raft of legislation, including the 2013 POPI Act and the Cybercrimes Bill of 2017, which is yet to become law, seeks to mitigate the unintended consequences of emerging technologies that could offer positive transformations in African states. Yet the speed of digital innovation threatens to outpace the law and the lawmakers.
Regular audits of facial recognition databases, context-specific algorithms and robust cybersecurity measures to guard against intrusion are among the steps policymakers should consider. At the United Nations level, discussions on cybersecurity must ensure that networked biometric technologies are included.
*About the author: Karen Allen, Senior Research Adviser, Emerging Threats in Africa, ISS Pretoria
Source: This article was published by ISS Today