Policy: Human Rights and Technology

2020 NSWCCL AGM

Item 8.3        Policy on Human Rights and Technology

Human Rights and Digital Technology

Australia has experienced a rapid uptake and growing sophistication of surveillance methods, AI-informed decision making, and other modern technologies that collect vast amounts of data (together, Digital Technology). At the same time, laws protecting individuals against breaches of their privacy rights have not kept pace with those technologies. There has been a “drift towards self-regulation in the technology sector, as laws and regulators have not effectively anticipated or responded to new technologies”.[1] While there will always be some degree of regulatory lag in policy design and implementation, capacity-building programs should specifically target policy makers to ensure the development of a policy framework that remains relevant as technology progresses.

NSWCCL acknowledges that “digital technologies have the potential to facilitate efforts to accelerate human progress, to promote and protect human rights and fundamental freedoms” but also that “the impacts, opportunities and challenges of rapid technological change […] are not fully understood”.[2] Indeed, surveys show that community trust in new and emerging Digital Technologies is diminishing; most Australians, for example, are concerned about their online privacy.[3]

Safeguards are necessary to ensure that the liberties and rights of Australians are not unreasonably curtailed by Digital Technology. As a society, we need to avoid a situation in which people feel unable to go about their normal business because they are constantly being watched or tracked. Once collected, used and stored by third parties, personal private information becomes increasingly difficult to protect and regulate. Often that personal private information is collected or used without the knowledge or consent of the individual.

NSWCCL policy, in the face of the expansion of Digital Technology, includes:

  1. A national strategy on new and emerging Digital Technologies that promotes effective regulation, consistent with Article 22 of the EU General Data Protection Regulation (GDPR).

    Australian government policy on Digital Technology has tended towards self-regulation, which is also, inevitably, fragmented. The Australian Productivity Commission has called for fundamental, systematic change in the way governments, businesses and individuals handle data.[4] As a starting point, Australian legislators should adopt the substance of Article 22 of the GDPR as best practice. Article 22 provides the right not to be subject to a decision based solely on ‘automated processing, including profiling’ which has a legal or similarly significant effect on the individual. A minimal sketch of how such a safeguard could operate in software follows below.
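
    To illustrate the substance of Article 22, a minimal sketch of an automated-decision guard, in Python. The field names (solely_automated, legal_or_significant) and the finalise() routine are hypothetical, chosen for illustration; this is a sketch of the safeguard, not an implementation of the GDPR itself.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    solely_automated: bool      # no meaningful human involvement in the decision
    legal_or_significant: bool  # e.g. a loan refusal or benefit cancellation

def finalise(decision: Decision) -> str:
    """Hold purely automated decisions with legal or similarly significant
    effect until a human has reviewed them (the substance of Art 22 GDPR)."""
    if decision.solely_automated and decision.legal_or_significant:
        return "HELD: route to a human reviewer before the decision takes effect"
    return f"FINAL: {decision.outcome}"

print(finalise(Decision("loan refused", solely_automated=True, legal_or_significant=True)))
print(finalise(Decision("newsletter preference updated", solely_automated=True, legal_or_significant=False)))
```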

  2. A National Bill of Rights. One of the most significant policy gaps with regard to the protection of human rights, data collection and AI-informed decision making is the absence of legislated human rights protection, particularly through a national Human Rights Act or charter. As a corollary, international policies and treaties around human rights and Digital Technology protection need to be more effectively implemented.
     
  3. Implementation of a legislative framework with a human-rights-centred approach.

    Australia needs “greater statutory clarity regarding the ambit of responsibility and consequence of automated decision making”.[5] The overarching framework should provide for Digital Technology being designed and applied around principles of transparency, accountability, responsibility, mitigation of risk, fairness and trust. It should provide clear and enforceable laws as a principal means of ensuring and promoting accountable and responsible use of Digital Technology, aiming to foster innovation while also protecting human rights.

  4. Accountability of institutions for decisions that are made using Digital Technology and liability for the consequences of those decisions.

    Exclusion and discrimination can be exacerbated by the “feedback loop of injustice”.[6] An AI system tasked with making a decision will base that decision on past data; where the person affected belongs to a group sharing a characteristic such as race, age or gender, the system is likely to replicate the past imbalances and injustices that group has experienced, as shown in the sketch below.

    This problem concerns society-defining areas, such as capital distribution (who gets the home loan?), employment (who gets the job?), and criminal justice (who goes to jail?). While the public discussion of the human rights implications of Digital Technology has tended to focus on the rights to privacy and non-discrimination, other areas are also engaged, such as the right to equality, the right to work, the right to justice, and the right to health.
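
    A minimal sketch of this feedback dynamic, using made-up approval rates and a deliberately naive frequency “model”; the group labels, the figures and the four-fifths threshold in the audit line are illustrative assumptions only.

```python
import random

random.seed(0)

# Synthetic history: equally capable applicants, but past decisions approved
# group "A" far more often than group "B" (hypothetical figures).
history = [("A", random.random() < 0.7) for _ in range(1000)] + \
          [("B", random.random() < 0.3) for _ in range(1000)]

def train(records):
    """A naive 'model' that learns from outcomes alone: the per-group
    approval frequency observed in the historical data."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [approved for g, approved in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

model = train(history)

# The model now recommends approvals wherever past approvals were common,
# reproducing the historical imbalance: the feedback loop of injustice.
for group, rate in sorted(model.items()):
    print(f"group {group}: predicted approval rate {rate:.0%}")

# A selection-rate audit of the kind a regulator could run (four-fifths rule).
ratio = min(model.values()) / max(model.values())
print(f"selection-rate ratio: {ratio:.2f} (below 0.8 would flag adverse impact)")
```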

  5. Notification to the impacted individual when Digital Technology-facilitated decision making occurs.

    The Council of Europe Commissioner for Human Rights considers that those who have had a decision made about them by a public authority that is solely or significantly informed by the output of an AI system should be promptly notified.[7] In the context of public services, especially justice, welfare and healthcare, the individual user needs to be notified in clear and accessible terms that an AI system will be interacting with them and that prompt recourse to a complaints person is available. Specific information about the processing, its purpose and its legal basis should be available to the individual, whether that information is retrieved directly or from other sources.

  6. A Consumer Protection approach to Consent. While consent of the user is a necessary condition for the use and decision-making processes of Digital Technology, it is not sufficient. A user should not be able to consent to waive rights under consumer law, which provides that the data controller must do certain specified things. Where consent is required and sought, that consent needs to be express, voluntary, specific and unambiguous;[8] not bundled consent, and not opt-out. Any change in the use of information collected or stored should trigger a requirement for renewed express consent.

  7. Reform to make it easier to assess the lawfulness of decision making by Digital Technology. Access to the technical information used in decision making, and open-source AI, are methods for doing so.

  8. Easily accessible complaints and independent appeal processes, and remedies for the benefit of the adversely affected individual user.

    Digital Technologies are still developing, and high error margins need to be accounted for. At any stage, a user affected by automated decision making should have the right to human intervention.[9] The appeal systems that will need to be established must be easily and cheaply accessible, so that those in vulnerable positions have a real chance to contest contentious decisions.

  9. A moratorium on the use of a technology in any situation where that use is not clearly regulated by the policy and/or legislative framework.

    Digital Technology needs to be continuously assessed for accuracy and reliability, as the software behind facial recognition, for example, can still show high error margins and substantial systemic bias. Misidentification and bias affecting citizens have led various city and state governments, international organisations and software companies to impose, or call for, a moratorium on the technology’s use until its functionality and the laws around it meet certain conditions.[10]

  10. The establishment of a Digital Regulatory Body (DRB) tasked with developing policies around the design and application of big data, AI-informed decision-making systems and advanced surveillance technologies. Its powers should include:

    a) Enforcement of policies. The DRB should be tasked with supervising compliance with data protection regulations by government and the private sector.[11] The powers vested in the body should, as in European models, include investigation and access to premises and data-processing equipment for the purposes of verifying compliance with regulations. There should be authority to impose a fine and/or a ban on processing.[12]

    b) Regular auditing of public and private organisations’ systems to ensure high rates of policy compliance. Regular auditing also serves to detect potential bias in Digital Technologies. The DRB, given appropriate expertise, should be able to keep intellectual property confidential and yet recognise where algorithms reinforce social differences and discrimination.

    c) Advocacy, encouraging laws and practices around technologies to be human rights compliant and used for the public good. Soft measures could take the shape of targeted education and training for decision makers and leaders in Australian private and public institutions, to build capacity around existing and new laws in the context of new technologies.

    d) Fostering innovation and technological progress. To achieve both human rights compliance and technological progress, the regulatory body could be tasked with implementing ‘regulatory sandboxes’, in which “new products or services can be tested in live market conditions but with reduced regulatory or licensing requirements and exemption from legal liability, and with access to expert advice and feedback”.[13]

    e) Research into making AI more privacy-friendly. Privacy-friendly AI systems can more easily comply with regulations, use anonymisation techniques and explain how data is processed.[14] A minimal sketch of one such anonymisation technique follows below.
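
    One anonymisation technique often grouped under privacy-friendly AI is differential privacy. The sketch below adds calibrated Laplace noise to a count query; the records, the predicate and the epsilon value are illustrative assumptions, not a prescription.

```python
import math
import random

random.seed(1)

def laplace(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon: float) -> float:
    """Differentially private count: a count query has sensitivity 1, so
    Laplace noise with scale 1/epsilon masks any one person's presence."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace(1.0 / epsilon)

# Hypothetical records: (age, has_condition)
records = [(34, True), (29, False), (41, True), (57, True), (23, False)]
print("noisy count:", round(private_count(records, lambda r: r[1], epsilon=0.5), 1))
```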

  11. A limited statutory cause of action to sue for serious breach of privacy, where there is a reasonable expectation of privacy. The existing privacy legislation at Commonwealth and State levels does not provide protection or remedy for many kinds of invasion of personal privacy. Any cause of action needs to be broadly formulated to capture future forms of privacy infringement.[15]

    In 2019, the Australian Competition and Consumer Commission recommended that a new statutory cause of action be created to cover serious invasions of privacy, with the aim of reducing the “bargaining power imbalance” between individuals and digital platforms.[16]

Resolution

That the proposed policy on Human Rights and Technology be adopted.

Moved at the NSWCCL AGM October 21st 2020 by: Michelle Falstein

Seconded by: Stephen Blanks


[1] Farthing, S., Howell, J., Lecchi, K., Paleologos, Z., Saintilan, P. and Santow, E., 2019. Human Rights and Technology: Discussion Paper. Available at: <https://humanrights.gov.au/sites/default/files/document/publication/techrights_2019_discussionpaper_0.pdf> [Accessed 13 September 2020] p. 38

[2] UN Human Rights Council, 2019. New and emerging digital technologies and human rights: 41st session. [online] Available at: <https://documents-dds-ny.un.org/doc/UNDOC/LTD/G19/208/64/PDF/G1920864.pdf?OpenElement> [Accessed 13 September 2020] p. 2

[3] Goggin, G., Vromen, A., Weatherall, K., Martin, F., Webb, A., Sunman, L. and Bailo, F., 2017. Digital Rights in Australia. Departments of Media and Communications, and Government and International Relations, Faculty of Arts and Social Sciences, and the University of Sydney Law School, University of Sydney. Available at: <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3090774> [Accessed 25 February 2020]

[4] Australian Productivity Commission, 2017. Data Availability and Use Report, p. 12, cited in Goggin, G., Vromen, A., Weatherall, K., Martin, F., Webb, A., Sunman, L. and Bailo, F., 2017. Digital Rights in Australia, op. cit., pp. 21-22

[5] Murray, A., 2019. Legal technology: Computer says no… but then what? The Proctor, 39(8), 48-49.

[6] Eubanks, V., 2017. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York, NY: St. Martin's Press.

[7] Council of Europe Commissioner for Human Rights, May 2019. Unboxing Artificial Intelligence: 10 Steps to Protect Human Rights. Available at: <https://rm.coe.int/unboxing-artificial-intelligence-10-steps-to-protect-human-rights-reco/1680946e64>; see also Art 13 GDPR

[8] The Norwegian Data Protection Authority (Datatilsynet), January 2018. Artificial Intelligence and Privacy, p. 29

[9] Art 22 GDPR

[10] Conger, K., Fausset, R. and Kovaleski, S. F., 2019. San Francisco Bans Facial Recognition Technology. The New York Times. [online] 14 May. Available at: <https://www.nytimes.com/2019/05/14/us/facial-recognition-ban-san-francisco.html> [Accessed 14 September 2020]; Kelion, L., 2019. MPs call for halt to police's use of live facial recognition. BBC. [online] 18 Jul. Available at: <https://www.bbc.com/news/technology-49030595> [Accessed 14 September 2020]; Larson, N., 2020. UN urges 'moratorium' on facial recognition tech use in protests. AFP. [online] Available at: <https://news.yahoo.com/un-urges-moratorium-facial-recognition-tech-protests-142542401.html> [Accessed 26 June 2020]

[11] European Commission, 8 April 2019. Ethics Guidelines for Trustworthy AI. Shaping Europe's Digital Future. Available at: <https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai>

[12] The Norwegian Data Protection Authority, op. cit., p. 23

[13] Farthing et al., 2019, op. cit., p. 118

[14] The Norwegian Data Protection Authority, op. cit., p. 28

[15] Witzleb, N., 2011. A statutory cause of action for privacy? A critical appraisal of three recent Australian law reform proposals. Torts Law Journal, 19, 104-134. DOI: 10.13140/2.1.3159.1684

[16] Australian Competition and Consumer Commission (June 2019) Digital Platforms Inquiry- Final Report <https://www.accc.gov.au/system/files/Digital%20platforms%20inquiry%20-%20final%20report.pdf>