In our submission we highlight that the proliferation of Artificial Intelligence (AI) could pose significant risks to the civil rights of the Australian public. As it stands, Australia’s regulatory system fails to fully address these risks – an issue that will grow as the use of these technologies increases.
In our submission we recommend that the Australian Government:
1. Establish a statutory office of AI Safety Commissioner to lead regulation and research of new AI risks and to coordinate the responses of different government bodies and agencies;
2. Reform the existing patchwork of legislation that covers AI regulation, including strengthening privacy protections for citizens;
3. Introduce bespoke AI regulation that adopts a risk-based approach, with obligations for developers, deployers and users graduated according to risk. This should include:
a. Transparency requirements for all deployers of AI, which become more onerous with the risk associated with the kind of AI;
b. Distinct and more onerous transparency requirements for public sector organisations that use AI and automated decision-making (ADM);
c. Prohibitions on some kinds of AI use in decision-making (differing between the private and public sectors);
d. Flexibly-defined prohibitions on AI that poses an unacceptable risk of harm; and
e. A regime that delegates specific compliance responsibilities for developers (of upstream and downstream applications), deployers and users.
The Australian Government is considering how to mitigate the potential risks of AI and how to support safe and responsible AI practices. You can read more about the consultation here.
For more information, read our full submission.