AI in Legal Systems: A Principled Approach
As artificial intelligence continues to evolve, it does so with a double edge, presenting great opportunities but also potential risks. How do we know for sure if the good outweighs the harm of various AI applications? On what basis should we trust (or mistrust) AI to make decisions that impact people?
In our legal system, the answers to these questions are of significant import. An improper or negligent use of AI has the potential to create a legal system that sacrifices humanity for perceived efficiencies and perpetuates biases rooted in skewed or incomplete data. As applications multiply, the possibility of unintended negative consequences arising from complex algorithms used for, say, sentencing and other important legal functions may extend well beyond our comfort level – or well beyond what is just, for that matter.
In light of this reality, various organizations have begun to address, if in fragmentary ways, what the norms for the trustworthy adoption of AI should be. Recently, however, a groundbreaking, multidisciplinary effort has stood out above the rest by identifying foundational principles that move the world toward established norms for governing the rise of AI in society.
The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems has tackled the challenge head-on in Ethically Aligned Design, First Edition (EAD1e), “A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems,” a report comprising contributions from international experts across various domains. With regard to AI in legal systems, the report articulates four principles for the dependable adoption of AI that are designed to be individually necessary and collectively sufficient, globally applicable but culturally flexible, and capable of being put into action.
These four principles—which are elaborated upon in the Bloomberg Insights piece, Four Principles for the Trustworthy Adoption of AI in Legal Systems, by Eileen M. Lach and Nicolas Economou—can be briefly summed up as follows:
- Effectiveness: Does the technology succeed in meeting its intended purpose? How can we know?
- Competence: Are the operators of AI competent to address the scientific requirements the field demands?
- Accountability: Can we appropriately apportion responsibility for AI applications among those who create, procure, deploy and operate it?
- Transparency: Is there sufficient information to determine the extent to which an AI-enabled process can be trusted?
If we’re to reap the benefits of AI and mitigate the risks, a principled approach is mandatory. As Lach and Economou discuss in their article, a thorough understanding and adoption of these principles will set the development of AI on sound footing, not just for legal systems but well beyond.
Read more about this topic here.