The use of facial recognition by law enforcement in NSW requires urgent review

It is troubling that the NSW Police Minister, Paul Toole, has not ruled out expanding the use of facial recognition technology currently deployed by NSW Police to include predictive policing. In a recent NSW Parliament budget estimates hearing, Minister Toole said: “The NSW Police Force continually reviews new technology to assist police in their role, and will consider expanding the use of technology, as required”.

AI algorithms and facial recognition systems have repeatedly failed to ensure a basic standard of equality, particularly by discriminating against people of colour.[1] In criminal justice, AI is typically used in two areas: risk scoring, which assesses whether a defendant is likely to reoffend in order to inform sentencing and bail decisions, and predictive policing, which uses so-called insights drawn from various data points to predict where or when crime will occur and to direct law enforcement action accordingly.[2]

The fundamental problem remains: it simply does not work. The unexamined bias of these tools puts people at greater risk of being wrongly classified as high-risk offenders, further entrenching racial discrimination in the justice and prison systems. Racial discrimination embedded in AI undermines its claimed transformative benefits for society and violates the rights to equal treatment and equal protection.[3]

Ed Santow is a professor at the University of Technology Sydney who focuses on the responsible use of technology. A former Australian Human Rights Commissioner, he led the Commission’s work on artificial intelligence. Santow says facial recognition technology raises serious questions for our society.

“Even if that technology was perfectly accurate, and it’s not, but even if it were, it also takes us into the realm of mass surveillance,” he says. “And I think there will be great concern in the Australian community about walking down that path.”

The NSW Council for Civil Liberties has long raised concerns about the adequacy of laws protecting individuals against breaches of their privacy rights in the artificial intelligence space. Australian law has simply not kept pace with these technologies: there has been a “drift towards self-regulation in the technology sector, as laws and regulators have not effectively anticipated or responded to new technologies”.[4]

While there will always be some degree of regulatory lag in policy design and implementation, capacity-building programs should specifically target policy makers to ensure the development of a policy framework that remains relevant as technology progresses. The long-overdue review of Australia’s Privacy Act should go some way to addressing this, but our concern remains that it is too little, too late.

For more information, read the article from Innovation Aus.

[1] Singer, N. and Metz, C., 2019. ‘Many Facial-Recognition Systems Are Biased, Says U.S. Study’, The New York Times.
[2] A 2016 ProPublica investigation revealed that COMPAS, a machine-learning tool widely used in the U.S. criminal justice system, was both inaccurate at forecasting future crime and heavily biased against black defendants. The investigators examined the risk scores of over 7,000 people arrested in Broward County, Florida, and compared them with subsequent criminal records. They found that only 20% of the people predicted to commit violent crimes went on to do so, and when the full range of crimes was considered, only 61% of defendants deemed likely to reoffend were actually arrested for a future crime. Larson, J. and Angwin, J., 2016. ‘Machine Bias’, ProPublica, 23 May, <https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing>.
[3] Baweja, S. and Singh, S., 2019. ‘Beginning of Artificial Intelligence, End of Human Rights’, LSE Human Rights Blog.
[4] Farthing, S., Howell, J., Lecchi, K., Paleologos, Z., Saintilan, P. and Santow, E., 2019. Human Rights and Technology: Discussion Paper, Australian Human Rights Commission, <https://humanrights.gov.au/sites/default/files/document/publication/techrights_2019_discussionpaper_0.pdf>.