Here are some observations about a Press Association article published in the Guardian newspaper about the police trials of facial recognition technology. It highlights some of the ways expansions of police power are rendered necessary and common-sense, even in articles that are posed as criticisms of them. This is very much a snap response of a non-expert in this field. More can be found about facial recognition technology on Liberty’s website.
A police force has defended its use of facial recognition technology after it was revealed that more than 2,000 people in Cardiff during the 2017 Champions League final were wrongly identified as potential criminals.
The frame is primarily about false positives and developing an ‘it could be you’ affect. The audience therefore identifies with innocence and being on the right side of the law, obscuring the ways everyone is in breach of the law in some way but only some are held accountable for it.
South Wales police began trialling the technology in June last year in an attempt to catch more criminals. The cameras scan faces in a crowd and compare them against a database of custody images.
The article describes trialling a technology without saying whether such trials have a legal basis. It normalizes an increase in police power without public consultation or accountability. The database is one of ‘custody images’ – presumably this includes people who were arrested but not charged, but even if not, the technology seems targeted at recriminalising an already criminalised population.
As 170,000 people arrived in the Welsh capital for the football match between Real Madrid and Juventus, 2,470 potential matches were identified.
It’s relevant that it was used in the context of an influx of Italian and Spanish people.
However, according to data on the force’s website, 92% (2,297) of those were found to be “false positives”.
The 92% figure seems shocking, but it sets up an acquiescence if new deployments of the technology produce less shocking figures.
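It is worth making the arithmetic behind such headline figures concrete. A minimal sketch of the base-rate effect, using the article’s crowd size but otherwise hypothetical numbers (the watchlist size and per-face accuracy below are assumptions, not figures reported by South Wales police), shows why scanning a large crowd against a small watchlist produces mostly false alarms even when per-face accuracy sounds high:

```python
# Base-rate sketch. Only the crowd size comes from the article;
# the watchlist size and accuracy figures are assumptions for illustration.
crowd = 170_000              # people arriving in Cardiff (from the article)
on_watchlist = 50            # assume 50 people in the crowd are on the watchlist
false_positive_rate = 0.01   # assume the system flags 1% of innocent faces
true_positive_rate = 0.90    # assume it catches 90% of watchlisted faces

false_alarms = (crowd - on_watchlist) * false_positive_rate
true_alarms = on_watchlist * true_positive_rate
share_false = false_alarms / (false_alarms + true_alarms)

print(f"False alarms: {false_alarms:.0f}")                      # 1700
print(f"True alarms:  {true_alarms:.0f}")                       # 45
print(f"Share of alerts that are false: {share_false:.0%}")     # 97%
```

Under these assumed numbers, a system that misreads only 1% of faces still generates alerts that are overwhelmingly false — so a high false-positive share at a mass event is a structural feature of crowd scanning, not merely a teething problem that better accuracy figures would dissolve.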
South Wales police admitted that “no facial recognition system is 100% accurate”, but said the technology had led to more than 450 arrests since its introduction. It also said no one had been arrested after an incorrect match.
This compares the failure of this technology to other facial recognition systems rather than to other methods of policing. It also does not indicate the scale of non-arrest interactions with the police, such as stop and searches, that incorrect matches have led to.
A spokesman for the force said: “Over 2,000 positive matches have been made using our ‘identify’ facial recognition technology, with over 450 arrests.
“Successful convictions so far include six years in prison for robbery and four-and-a-half years imprisonment for burglary. The technology has also helped identify vulnerable people in times of crisis.
This identifies serious-sounding offences by the length of jail time to give the impression the technology leads to justice being done. No explanation is given of what the second sentence means – neither who the vulnerable people were nor why the police’s interaction with vulnerable people was a positive thing. It justifies policing on the basis of humanitarian control.
“Technical issues are common to all face recognition systems, which means false positives will be an issue as the technology develops. Since initial deployments during the European Champions League final in June 2017, the accuracy of the system used by South Wales police has continued to improve.”
Framing ‘issues’ as technical removes debate about the use, effect and function of the technology to a depoliticised and technocratic sphere.
The force blamed the high number of false positives at the football final on “poor quality images” supplied by agencies, including Uefa and Interpol, as well as the fact it was its first major deployment of the technology.
The problem is blamed not on the technology itself but on the practices of other agencies.
Figures also revealed that 46 people were wrongly identified at an Anthony Joshua fight, while there were 42 false positives from a rugby match between Wales and Australia in November.
All six matches at a Liam Gallagher concert in Cardiff in December were valid.
Much lower numbers of false positives at later events (without telling us the rate of false positives) create the impression that the trial is working. The result is that the article, while set up around a shocking statistic that seems critical of the technology, ends up alleviating concerns about it. The examples are all sporting and entertainment events – it doesn’t mention the use of the technology at Notting Hill Carnival, for example.
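The point about counts versus rates can be made concrete. The article reports 46 false positives at the Anthony Joshua fight but not the total number of matches, so no rate can be computed. A small sketch, pairing that count with invented totals (the totals below are hypothetical, purely for illustration), shows how the same count is compatible with wildly different rates:

```python
# The 46 false positives come from the article; the totals are invented
# for illustration, since the article does not report them.
false_positives = 46

for hypothetical_total in (50, 100, 500, 1000):
    rate = false_positives / hypothetical_total
    print(f"if total matches were {hypothetical_total}: rate = {rate:.0%}")
```

If the total were as low as 50, the rate would be 92% – the same order as the Champions League figure – while a total of 1,000 would make it 5%. Without the denominators, the lower raw counts at later events tell us nothing about whether the system actually improved.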
The chief constable, Matt Jukes, said the technology was used where there were likely to be large gatherings, because they were “potential terrorist targets”.
Mention of terrorist targets – just in case you hadn’t already linked the increased police power with dangerous, racialised threats.
“We need to use technology when we’ve got tens of thousands of people in those crowds to protect everybody, and we are getting some great results from that,” he told the BBC. “But we don’t take the use of it lightly and we are being really serious about making sure it is accurate.”
This draws on a fear of crowds and masses to justify the technology – a vague category that could cover protests, parties, high streets. It also underscores the common-sense connection between safety and increased police power, and provides assurances that they are taking it ‘seriously’ without any content given about what accountability measures direct the use of this technology.
The force said it had considered privacy issues “from the outset”, and had built in checks to ensure its approach was justified and proportionate.
Does not unpack and make concrete what ‘privacy’ issues these were nor what ‘checks’ are put in place.
However, the civil liberties campaign group Big Brother Watch criticised the technology.
In a post on Twitter, the group said: “Not only is real-time facial recognition a threat to civil liberties, it is a dangerously inaccurate policing tool.”
The only group quoted in response is Big Brother Watch, on Twitter – invoking abstract ‘civil liberties’ and also repeating the line that the main objection is inaccuracy.