Autonomous and Intelligent Systems are Now in Sight of the IEEE Standards Association. Here's What That Means.
An ethics certification program for A/IS is now in the works.
Mysteries abound in life, and some, no doubt, add spice to the mundane. But the mysteries of the impenetrable algorithms that drive so much of our evolving “smart” world demand our attention.
If it were just the novelty of smart appliances chatting among themselves while Alexa orchestrates, it wouldn’t much matter. But it’s not. It’s the fact that the algorithmic processes required for such intelligence are snaking their way into the most foundational aspects of our lives.
How are we to make sense of how AI is impacting us? Should we trust our lives to the AI-enabled diagnostics that our doctors rely upon? Accept the conclusions of financial apps that decide our borrowing opportunities? Abide the legal ones that may be in part relied upon to determine our freedom—or our prison sentence? How can we know how accurate these things are—or even what (or whose) goal is being fulfilled?
The honorable intentions of developers notwithstanding, the proliferation of fate-determining algorithms and so-called “smart” devices that interact with us and each other has moved along with few (if any) formal standards and little or no ethical oversight. Yet our lives are being forever altered by them, with little hope for most of us that we can ever understand how they reach their decisions or whether they work as intended.
So far, no organizations have formalized the development of ways to validate processes that advance transparency, accountability, reliable measurement of effectiveness and accuracy, standards of competence, and reduction in algorithmic bias in autonomous and intelligent systems (A/IS). No entities have developed ways to let us know whether such products are safe or trusted by any body of experts, nor any that could provide a publicly available and transparent series of marks.
The IEEE, the world’s largest technical professional organization dedicated to advancing technology for humanity, and the IEEE Standards Association (IEEE-SA) have announced that they are taking concrete action to establish one of the world’s first programs dedicated to the creation of an A/IS certification process and marking methodology supported by a global standards development organization. The effort is embodied in the launch of the Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS), whose goal is to create specifications for certification and marking processes that advance transparency, accountability, and reduction in algorithmic bias in these systems. As members of a legal community that is coming to rely on such algorithms, H5 is proud to be among the founding members of this initiative.
According to the ECPAIS webpage, the initiative intends to “offer a process and define a series of marks by which organizations can seek certified A/IS products, systems, and services.” Their stated goal is to deliver the following outcomes:
- Criteria and process for a Certification / mark focused on Transparency in A/IS
- Criteria and process for a Certification / mark focused on Accountability in A/IS
- Criteria and process for a Certification / mark focused on Algorithmic Bias in A/IS
ECPAIS is promoting an open, inclusive, global initiative that will engage “experts that span the fields of engineering, law, science, economics, ethics, philosophy, politics, and health,” and it is inviting interested experts to join. Valued expertise includes specialists developing A/IS-based products and services, academics with expertise in A/IS, and government organizations involved with A/IS policy and/or regulations.
H5 believes that this important endeavor will advance the trustworthy adoption of TAR and other forms of AI in the legal system.
Click here to learn more about this initiative.