OPINION: AI in front line policing

NSWCCL has long been troubled by the privacy concerns associated with the largely unregulated use of AI, biometrics and facial recognition technology in frontline policing. Many high-risk applications of this technology are being implemented without sufficient public discussion or consent. Its use in CCTV ‘mood tracking’ cameras at the 2023 Mardi Gras Parade[1], in monitoring that people were at home during COVID-19 quarantine[2] and as a tool in controversial predictive policing[3] demonstrates the virtually unstoppable, ever-expanding scope creep of surveillance capacity. The law has notoriously been unable to keep pace with AI, and NSWCCL observes that this leaves the window open for law enforcement to race ahead of regulation and use people’s data in an ever-increasing surveillance capacity.


Of particular concern to NSWCCL was the lack of transparency and accountability shown by police following the rejection, and call for a complete overhaul, of the Identity-Matching Services Bill in 2019. The Bill proposed to introduce the National Facial Biometric Matching Capability (Capability): a national, interoperable biometric database through which all driver’s licence, visa, passport and citizenship photos could be accessed by state and federal security and law enforcement agencies. It was rejected by the Parliamentary Joint Committee on Intelligence and Security because it failed to protect citizens’ privacy rights with proper safeguards. In the wake of this, the Australian Human Rights Commission called for a freeze on ‘high-risk’ facial recognition, citing concerns about the high level of discretion and almost unrestricted power given to law enforcement and intelligence bodies over the sharing of personal data[4]. NSWCCL continues to back this position.

The Bill has yet to be reintroduced and passed, and yet Australian police agencies have proceeded in the absence of any clear legislative framework. Victoria, South Australia and Tasmania have already fed driver’s licence photos into the database, and NSW Police are continuing to conduct a “limited (low volume) trial” of the Commonwealth Face Matching Services in a “phased roll-out”[5]. In addition, in 2020 Australian police agencies were reported to have trialled a private, unaccountable facial recognition service called Clearview AI[6]: a service that collects more than 30 billion publicly available facial images from the web and social media, then uses machine learning to create a biometric template for each face and match faces against it. Australian police agencies initially denied using the service, but a list of Clearview AI’s customers was stolen and disseminated, revealing users within the Australian Federal Police as well as the state police forces of Queensland, Victoria and South Australia[7]. In 2021, the OAIC found that the AFP had failed to comply with its privacy obligations by using Clearview AI’s tool on a trial basis[8]. Clearview AI itself was also found to have breached privacy by scraping Australians’ biometric information from the internet and disclosing it through a facial recognition tool, without obtaining consent and without taking reasonable steps to notify those whose data was scraped.[9]

Biometric technology has been used by NSW Police since 2004. However, most Australians have little first-hand knowledge of policing practices, as they do not often come into direct contact with police. It is difficult to hold police accountable if the public is not aware of the software tools in use, their security measures, how data is collected and stored, and under what conditions those tools are deployed. With biometrics, AI and facial recognition now ubiquitous in the surveillance of nearly every ordinary aspect of everyday life, this lack of transparency and accountability is especially concerning.

Currently, there is little in place beyond the Privacy Act to safeguard against the encroachment on the individual’s right to privacy and autonomy, and even less to ensure that police use of facial recognition and biometric technology is transparent and accountable. NSWCCL therefore strongly maintains its call for a moratorium on the use of such surveillance technology in frontline policing unless and until effective legal safeguards are in place.


[1] Grubb, Ben (2023) ‘How your phone and mood will be tracked at Mardi Gras’ (The Sydney Morning Herald)

[2] Kaye, Byron (2021) ‘Australia’s two largest states trial facial recognition software to police pandemic rules’ (Reuters)

[3] Singer, Natasha (2019) ‘Many Facial-Recognition Systems Are Biased, Says U.S. Study’ (The New York Times)

[4] Barbaschow, Asha (2021) ‘Human Rights Commission calls for a freeze on ‘high-risk’ facial recognition’ (ZDNet)

[5] NSW Police, ‘Facial Recognition’ (Link no longer available)

[6] Taylor, Josh (2021) ‘Calls to stop NSW police trial of national facial recognition system over lack of legal safeguards’ (The Guardian)

[7] Goldenfein, Jake (2020) ‘Australian police are using the Clearview AI facial recognition system with no accountability’ (The Conversation)

[8] Office of the Australian Information Commissioner (2021) ‘Commissioner Initiated Investigation into the Australian Federal Police (Privacy)’

[9] Office of the Australian Information Commissioner (2021) ‘Clearview AI breached Australians’ privacy’