The Challenges of AI-Decision-Making in Government and Administrative Law: A Proposal for Regulatory Design

Authors

  • Benedict Sheehy
  • Yee-Fui Ng

DOI:

https://doi.org/10.18060/28360

Abstract

Artificial Intelligence (“AI”) decision-making is rapidly becoming a key technology in government. It offers a number of significant benefits, including reduced costs, consistency in decision-making, and the potential for reduced bias. It is not, however, an unalloyed good: the technology carries a number of risks. For example, it may have inherent biases or be prone to developing them. These issues are most evident when AI is used to deal with vulnerable populations, that is, populations unaware of, or less able to deal with, advanced technologies and sophisticated systems in general. With these populations, AI has the potential to do significant harm. Governments have struggled to understand and address AI decision-making appropriately, and nowhere is this clearer than in dealing with vulnerable populations. This Article addresses these issues using theories of effective regulation and two case studies of AI decision-making in Australia. It proposes two regulatory innovations: a “Golden Rule” providing a default advocate for negative decisions, and a monitoring body that, among other things, implements Professor Aziz Huq’s proposed “right to a well-calibrated decision.”

Published

2024-06-10

Section

Articles