Submission: AI technologies in Australia

The NSW Council for Civil Liberties (NSWCCL) submits that the proliferation of Artificial Intelligence (AI) poses significant risks to the civil rights of the Australian public, even as it creates many new social and economic opportunities. As it stands, Australia's regulatory system fails to fully address these risks or balance them against the wealth of opportunities, a gap that will only widen as use of these technologies increases.

Our submission responds to two issues arising from the Terms of Reference of the Select Committee on Adopting AI: (e) opportunities to foster a responsible AI industry in Australia; and (f) potential threats to democracy and trust in institutions from generative AI.

In light of these risks and opportunities, the NSWCCL's overarching submission is that an effective regulatory framework for AI must be grounded in human rights and adopt principles of accountability and a proportionate, risk-based approach, minimising risks to Australians without stymieing innovation or the beneficial application of these technologies. We also recommend that:

  1. Civil society education programs be introduced, along with education for various regulators;
  2. Reform of the existing patchwork of legislation that covers AI regulation be undertaken, including improved privacy protections for citizens;
  3. Bespoke AI regulation be introduced that adopts a risk-based approach, with graduated obligations for developers, deployers and users of AI according to risk; and
  4. A statutory office of an AI Safety Commissioner be established to lead regulation and research of new AI risks and to coordinate the responses of different government bodies and agencies.

The bespoke AI regulation recommended above should include:

  1. transparency requirements for all deployers of AI, which become more onerous as the risk associated with the kind of AI increases;
  2. distinct and more onerous transparency requirements for public sector organisations that use AI and automated decision-making (ADM);
  3. prohibitions on some kinds of AI use in decision-making (differing between the private and public sectors);
  4. flexibly defined prohibitions on AI that poses an unacceptable risk of harm; and
  5. a regime that assigns specific compliance responsibilities to developers (of upstream and downstream applications), deployers and users.

Appended to our submission are the NSWCCL's previous submissions on AI regulation made to the Department of the Prime Minister and Cabinet's Digital Technology Taskforce[1] and the Department of Industry, Science and Resources.[2] These submissions remain relevant to the Select Committee's current inquiry, and we submit that the Select Committee should consider the recommendations outlined therein.

Read our submission here.


[1] NSWCCL, Submission to Department of the Prime Minister and Cabinet, Digital Technology Taskforce, ‘Positioning Australia as a Leader in Digital Economy Regulation – Automated Decision Making and AI Regulation – Issues Paper’ (20 May 2022) (NSWCCL Submission to ADM and AI Regulation Issues Paper).

[2] NSWCCL, Submission to Department of Industry, Science and Resources, ‘Safe and Responsible AI in Australia – Discussion Paper’ (26 July 2023) (NSWCCL Submission to Safe and Responsible AI in Australia – Discussion Paper).