As artificial intelligence (AI) algorithms become commonplace in everything from streaming music and movies to social media, autonomous vehicles, and various legal technology use cases, their ubiquitous use has inevitably become a cause for growing concern. As we learn more about the potential pitfalls – the invasiveness of AI with respect to our privacy, for example, or the risk that AI algorithms will introduce bias rather than work as intended – we should pay closer attention to whether and how the technology's use may be regulated.
Although there have been regulatory efforts in the past, more are coming in 2021.
Recent AI Regulation Activities Before 2021
The stage for some of the AI regulations coming our way in 2021 was set at least a couple of years ago. As we discussed in this article (available for download here), the American Bar Association (ABA) House of Delegates addressed the question of ethical responsibility regarding AI when it passed Resolution 112 in August 2019. The resolution urged “courts and lawyers to address the emerging ethical and legal issues related to the usage of artificial intelligence (“AI”) in the practice of law including: (1) bias, explainability, and transparency of automated decisions made by AI; (2) ethical and beneficial usage of AI; and (3) controls and oversight of AI and the vendors that provide AI.” Resolution 112 also comes with a terrific 15-page report that provides supporting information and guidance to courts and lawyers.
In addition, as noted in this article from Fast Company, US Senators Ron Wyden and Cory Booker, along with Rep. Yvette Clarke, introduced the Algorithmic Accountability Act in 2019, which proposed that companies with more than $50 million in revenue (or possession of more than 100 million people’s data) would have to conduct algorithmic impact assessments of their technology. However, the bill was never read in committee, much less on the Senate floor (Clarke’s companion bill in the House also didn’t advance). The article noted that Wyden, Booker, and Clarke plan to reintroduce their bills in the Senate and House this year.
Europe’s Artificial Intelligence Act
Earlier in 2021, the European Commission unveiled the first-ever proposed legal framework on artificial intelligence: the Proposal for a Regulation laying down harmonized rules on artificial intelligence (the Artificial Intelligence Act). As this article notes, the AI Act attempts to find a middle ground by adopting a risk-based approach: it bans specific unacceptable uses of AI, heavily regulates other uses that carry significant risks, and imposes nothing beyond voluntary codes of conduct on uses that pose limited or no risk at all. The gradation of risk is represented as a four-level pyramid: unacceptable risk, high risk, limited risk, and minimal risk. And, similar to the GDPR, the regulation would apply to AI systems whose output is used within the EU, even if the provider or user is located outside the EU.
US AI Initiatives in 2021
Initiatives are taking shape in the US as well. In addition to the expected resubmission of the Algorithmic Accountability Act, Senator Edward Markey and Congresswoman Doris Matsui introduced the Algorithmic Justice and Online Platform Transparency Act of 2021 to prohibit harmful algorithms, increase transparency into websites’ content amplification and moderation practices, and commission a cross-government investigation into discriminatory algorithmic processes throughout the economy.
From a research standpoint, the National Science Foundation (NSF), the Department of Energy (DOE), and the National Oceanic and Atmospheric Administration (NOAA) all have initiatives in progress that could play a significant role in the development of AI.
These are just a few examples of initiatives shaping how AI technology is implemented and used in today’s society. With so much attention focused not only on getting the most out of AI technology but also on implementing it ethically, it’s more important than ever to stay abreast of changing legislation and regulations governing the technology’s use. It’s also vitally important to work with experts who understand all of the considerations involved in implementing AI into legal workflows effectively and ethically. After all, it’s their job to keep track of these developments!
For more information on H5’s Advanced AI and Matter Analytics®, click here.