Regulatory Oversight Podcast

AI: Overview and Current Regulatory Landscape (Part One)

Episode Summary

In this episode, Stephen Piepgrass and colleagues Michael Yaghi and Trey Smith provide an overview of AI, including uses and risks, and the increased focus on AI by various regulators, including state attorneys general, federal agencies, and local governments.

Episode Notes

Join us for the first in a series of episodes covering artificial intelligence (AI). As technology continues to develop, more companies are using AI in their day-to-day business, and with increased use comes increased risk. In this episode, Stephen Piepgrass and colleagues Michael Yaghi and Trey Smith provide an overview of AI, including uses and risks, and the increased focus on AI by various regulators, including state attorneys general, federal agencies, and local governments.

Our panel also offers insights on the potential legal pitfalls that businesses using AI should understand, and what they can do to prevent significant liability exposure to regulators.

Part two of this podcast series will focus on the technology, opportunities, risks, and best practices associated with the use of AI.

Episode Transcription

Regulatory Oversight – AI: Overview and Current Regulatory Landscape (Part One)

Stephen Piepgrass:

Welcome to another episode of Regulatory Oversight, a podcast that focuses on providing expert perspective on trends that drive regulatory enforcement activity. I'm Stephen Piepgrass, one of the hosts of this podcast and leader of the firm's Regulatory Investigations Strategy + Enforcement practice group. This podcast features insights from members of our practice group, including its nationally ranked State Attorneys General practice, as well as guest commentary from business leaders, regulatory experts, and current and former government officials. We cover a wide range of topics affecting businesses operating in highly regulated areas.

Before we get started today, I wanted to remind all our listeners to visit and subscribe to our blog at RegulatoryOversight.com so you can stay up to date on developments and changes in the regulatory landscape. 

Today, I'm joined by my colleagues Mike Yaghi and Trey Smith to discuss the trending topic of artificial intelligence. This episode will be the first in a series dedicated to artificial intelligence. This episode is meant to give you, our listeners, an overview of the current regulatory landscape in the world of AI. We invite you to come back in the coming weeks to hear more of our episodes in this series that will be dedicated to the technical aspects of AI and to the regulatory implications of its use in healthcare, privacy, and financial services. Mike and Trey, thank you for joining us today. I know this is a topic that both of you have been following closely, and I'm very much looking forward to our discussion.

Trey Smith:

Great to be here with you today. Thanks for having me on the podcast. Looking forward to digging into the topic of AI with you both.

Michael Yaghi:

Likewise, thank you, Stephen, for the intro. And Trey, I look forward to having this important discussion with you and hopefully providing some insights to clients and prospects about how important artificial intelligence technology is in the regulatory space, primarily from the state AG perspective, but also at the federal level, since we're seeing a lot of activity in this space.

Stephen Piepgrass:

To kick things off, why don't I ask, Trey, if you might be able to provide us a little bit of a background on AI, the technical side of it, and particularly algorithms and the role that they play, for those of our listeners who may not be as familiar with the tech side of all of this?

Trey Smith:

Sure. AI, artificial intelligence, is essentially technology that seeks to mimic or imitate human behavior. It's principally run by algorithms, which are just a series of logical statements, mathematical instructions, if you will, programmed to solve a specific problem by instructing computers to carry out predefined decisions that a human typically designs. Computers have had the ability to process algorithms since the 1970s, but modern AI layers these instructions, in almost unfathomable complexity, on top of enormous pools of data. So unlike the simple programs of the 1970s, modern AI adapts and autonomously learns from a data set.

The way that happens is that the artificial intelligence is trained on a large data set surrounding a decision; it identifies patterns, if any exist, within that data and uses those insights to make a decision, undertake some action, or demonstrate some type of human-like reasoning. The algorithms of artificial intelligence are trained by those data sets, but many of them are autonomous in that they continue to look for insights and learn how to render better decisions based on the data the AI encounters. As these technologies go through more data and seek to implement new insights, we're still trying to understand how the AI chooses certain variables to make its decisions. Because that is lost on a lot of technology designers, we see these decisions being made entirely by the AI without any insight on the human end as to how they're made.
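
To make the distinction Trey is drawing a little more concrete, here is a minimal Python sketch. It is not taken from the episode; the loan-approval scenario, function names, and numbers are purely hypothetical. It simply contrasts a decision rule a human writes explicitly with a toy rule whose cutoff is derived from example data, which is the basic idea behind training a system on a data set.

```python
# A minimal, illustrative sketch (hypothetical scenario, not from the episode)
# contrasting a hand-written decision rule with a rule learned from data.

# 1) A "classic" algorithm: a human writes the decision logic explicitly.
def approve_loan_rule(income: float, debt: float) -> bool:
    """Predefined decision a human designed: approve if the debt ratio is low."""
    return debt / income < 0.4


# 2) A toy "learned" rule: the cutoff is derived from example data rather than
#    chosen by a person, a stand-in for training an AI on a large data set
#    and letting it find the pattern itself.
def learn_threshold(examples: list[tuple[float, bool]]) -> float:
    """Pick the debt-ratio cutoff that best separates past good/bad outcomes."""
    candidates = sorted(ratio for ratio, _ in examples)

    def accuracy(cut: float) -> int:
        # Count how many historical outcomes this cutoff classifies correctly.
        return sum((ratio < cut) == outcome for ratio, outcome in examples)

    return max(candidates, key=accuracy)


# Hypothetical historical outcomes: (debt-to-income ratio, loan repaid?)
history = [(0.10, True), (0.25, True), (0.35, True), (0.45, False), (0.60, False)]
learned_cutoff = learn_threshold(history)

print(approve_loan_rule(income=50_000, debt=15_000))   # rule a human wrote -> True
print(f"learned cutoff: {learned_cutoff}")             # rule the data produced -> 0.45
```

In a real system the learned rule would involve far more variables and far more data, which is exactly where the "black box" concern discussed next comes from.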

Stephen Piepgrass:

And that's why a lot of people refer to that as a black box, because at the end of the day, it can be hard to know why the AI is making the particular decisions that it is, because that aspect of the process is being done by the AI rather than with human involvement. Is that right?

Trey Smith:

That's exactly right.

Michael Yaghi:

Yeah. I think it's critical technology. With AI, technology companies are trying to minimize cost, right, and expand efficiencies, and that's why this is so important and so pervasive. It really segues into what Trey was talking about: using those algorithms and technology to analyze large data sets, right, so intelligent computer systems can make decisions. But the problem is that sometimes those decisions can have adverse impacts on society. So at a high level, we're seeing a lot of AI technology being used by companies primarily to help predict events and outcomes, for example, or to help companies execute contracts or transport cargo from one place to another. In the healthcare industry, we see it assisting doctors and patients in making clinical decisions. It's a broad way for companies to leverage massive data sets to help them make decisions more efficiently in whatever industry they're in.

And it's so pervasive that a lot of consultants and experts are predicting AI will be very widely used. For example, PricewaterhouseCoopers has opined that AI will contribute more than $15 trillion to the global economy by the end of this decade. Deloitte has similarly noted that companies are leveraging AI and automated processes to increase efficiency, building AI into most of their business models and using it in some of their core business functions and decisions. And that's where I think the tension comes from with respect to regulators and state AGs: they're really focusing on company use of AI and how it might adversely impact consumers and build in biases based on the underlying data set. That's the high-level issue being framed right now that we're seeing from regulators.

Stephen Piepgrass:

And I know, Mike, since we attend a lot of these meetings and know a lot of these regulators, the areas where we have seen them voicing concerns about AI include, as you mentioned, the medical field obviously, and we also see it in background screening, tenant screening, and credit screening. All of those areas that touch very closely on consumers and have a consumer impact lend themselves to AI, because they involve large data sets and predictive behavior technology. But at the same time, as we know from working in this space, because they're consumer facing, they are apt to draw scrutiny from regulators, and that's really where the intersection of this emerging technology and regulatory activity takes place.

Can you, Trey, talk a little bit about some real-world examples where AI has been criticized for predictive biases or other issues that have led to regulatory scrutiny?

Trey Smith:

I'd like to acknowledge that with AI technology, we're going to see a real increase in its usage as we go forward. It's technology that is readily available. You have companies like Amazon Web Services offering it in an easy-to-consume format to potential business customers. And so as more businesses adopt it, I think we're going to see it used in a lot more circumstances. One of the principal uses we see today is decision making in healthcare. Healthcare companies utilize algorithms in different ways, whether that's administrative work, analyzing medical imaging, or determining how to allocate their scarce resources, whether that might be doctors, specialists, whatever the case is. You also see it, like you mentioned, in insurance and the risk analysis context, and in countless other places.

Michael Yaghi:

If I could add to that, you see it in areas like background screening, right? Companies will take millions and millions of pages of data sets or information and try to use AI technology to make background screening decisions about consumers or maybe employees. And the key issue, I think, with utilizing underlying data sets is that if there are biases built into that data set, then you're essentially introducing the same bias into the AI technology and the algorithm. So there are a lot of regulators looking at different industries; like Trey mentioned, healthcare is one of them, but financial services is obviously another. And companies using AI technology, regardless of their industry, need to be cognizant of those issues, right? They need to make sure that, in training a computer system to do human work, they're not replicating the human biases that exist in the underlying data.

Stephen Piepgrass:

One issue that this raises in my mind is the question of transparency, which is something you'll hear regulators talk a lot about, and I can see where there may be a tension between regulators' desire for transparency in decision-making and the use of AI, which sometimes is referred to as a black box. Is that something where we are seeing regulators really focus, and is there a way to resolve that tension?

Michael Yaghi:

Transparency is an area where regulators, especially state AGs, are focused, right? Because in certain contexts when companies are making decisions, for example, about consumers, if you don't know how the underlying data is being used and whether there are built-in biases, a consumer may have no knowledge that a decision impacting them was based on that bias. So transparency about how AI technology is being used by companies and the type of data they're relying on is critical to that process. And that's what a lot of the AGs are focused on; they want to have that transparency. Stephen, you're right in raising that, it's a very critical piece of this. Regulators want to understand and have that transparency, not just for themselves but for consumers, right? We want to know what kind of information companies have about us and what they're utilizing to make decisions about things we're seeking, whether it's a credit application for a loan or whether it's seeking out healthcare or medical services. It's important that people have that transparency and understand how AI is being used and whether or not it's adversely impacting segments of society or groups of consumers.

Stephen Piepgrass:

So we've talked a little bit about some background issues: what is AI, how is it being used, and what are some of the issues on which regulators are focused. I think it may be helpful for us to talk a little bit about some specifics and get into what particular AGs, what states, and maybe also what federal agencies are doing about AI, and the ways they seem to be tackling and thinking about this. I know one of the AGs who's been a leader in a lot of different areas, and who is going to be leaving office at the end of this month, at the time this recording is taking place in December of 2022, is Karl Racine from Washington, DC. He's taken some very recent action on this. Trey, maybe you could fill us in a little bit on General Racine's latest action on AI.

Trey Smith:

Attorney General Racine introduced the Stop Discrimination by Algorithms Act of 2021 in December of last year. If that bill is successful in becoming law, it would impose a duty on companies to ensure their algorithms don't generate biased outcomes for consumers. It would require things like an annual audit of their algorithms for any discriminatory patterns. The reporting requirements would include documenting how the algorithms are built and the kinds of variables they use to make determinations, and then reporting the results and any corrective steps taken to the Attorney General. The Act also provides the Attorney General of DC with enforcement authority and its own cause of action to obtain civil penalties for each violation. We're seeing a lot of attorneys general expressing concern about how algorithms and machine learning are being implemented. In addition to Attorney General Racine, we're also seeing Attorney General Rob Bonta out of California take some action in this sphere. In August, he sent letters to 30 hospital CEOs seeking information about how their hospitals and related providers are identifying and addressing racial and ethnic disparities, specifically in hospital decision-making tools.
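
The bill describes what an audit must cover rather than how to perform one, but one common starting point for an algorithmic bias audit is simply comparing how often an algorithm produces a favorable outcome for different groups. The Python sketch below is purely illustrative and is not drawn from the Act or the episode; the group labels, sample decisions, and the 0.8 "four-fifths" benchmark (a rule of thumb borrowed from U.S. employment-discrimination practice) are assumptions used only to show the idea.

```python
from collections import defaultdict


def selection_rates(records: list[tuple[str, bool]]) -> dict[str, float]:
    """Rate of favorable outcomes (e.g., approvals) for each group."""
    totals: dict[str, int] = defaultdict(int)
    favorable: dict[str, int] = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        favorable[group] += outcome  # True counts as 1
    return {group: favorable[group] / totals[group] for group in totals}


def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Lowest group's selection rate divided by the highest (1.0 means parity)."""
    return min(rates.values()) / max(rates.values())


# Hypothetical audit sample: (demographic group, did the algorithm approve?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

rates = selection_rates(decisions)
print(rates)                                                  # {'group_a': 0.75, 'group_b': 0.25}
print(f"impact ratio: {disparate_impact_ratio(rates):.2f}")   # 0.33, well below the 0.8 rule of thumb
```

A real audit would examine far more records and variables, and would also document how the model was built, which is why the bill's documentation and reporting requirements are significant.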

Stephen Piepgrass:

That act in DC was one of the very first, and one that has been used or looked to as a model in other states and in discussions at the federal level. It is interesting that it requires significant disclosure about how these algorithms are made. And in AI, as in other areas we see, there does seem to be, as I've mentioned before, a tension between the businesses that want to use this proprietary technology, the work they have built and invested in creating these AI and machine learning technologies, and the desire of regulators for transparency and for getting that information out to the public. It will be very interesting, I think, to see how that plays out. We've written in other contexts about things like the clash between the right-to-repair movement and the desire of auto manufacturers to keep their technologies secret; they raise all sorts of public safety and other reasons for doing that, but one is simply so that they're able to compete in the marketplace. Clearly there is a tension here, and we will see how it ends up being resolved.

Trey Smith:

And that tension is even magnified by the fact that, I'm sure, a lot of these hospital CEOs and anyone else deploying AI don't themselves know how the algorithm is making decisions. That poses a challenge on top of the reporting requirements themselves and the desire to protect their proprietary technology.

Stephen Piepgrass:

That classic black box we were talking about earlier.

Michael Yaghi:

That's a good point, because most companies aren't even intending to be biased; they're just not aware that the underlying data may have those biases built into it. There was a healthcare study that highlighted a bias in clinical algorithms that, for example, required Black patients to be sicker than white patients before they were recommended for care. That's typically not something the company was trying to accomplish; it's a matter of not being aware of the underlying data and the bias in the data fed into the AI. And that's the type of information companies really need to be aware of.

Stephen Piepgrass:

Trey, why don't you fill us in on a couple more activities by states. Obviously NAAG, the National Association of Attorneys General, is one of the leading associations of AGs. Just this year, they announced the Center on Cyber and Technology, or CyTech, and one of its focuses is going to be supporting AGs in understanding emerging technologies like machine learning and AI, and the potential bias and discrimination that can result from them. So it will be interesting to see how that resource is used and how AGs coordinate action through NAAG, as we know they do. Are there any other state activities, Trey, that you'd like to highlight for the audience?

Trey Smith:

There are a few I'd like to comment on. In June 2022, we saw Colorado pass a law requiring that state agencies' facial recognition technology first be tested in operational settings and that agencies take a number of reasonable steps to ensure the facial recognition achieves the best possible results. In addition, we saw the former Vermont Attorney General, Thomas Donovan, file a lawsuit against a company's use of facial recognition technology, seeking to determine whether using that technology to map the faces of individuals and children, and selling that data to businesses and law enforcement, violated Vermont consumer protection laws.

Stephen Piepgrass:

One of the trends we have seen recently is local governments getting into the regulatory act. And I thought it was very interesting that in May of 2020, New York City amended its administrative code to add a subchapter covering automated employment decision tools, or AI. That particular statute prohibits employers from using AI to screen candidates or employees for employment decisions unless the algorithm is subject to a bias audit, and the results of that audit have to be made available on the employer's website. So it's very interesting to see some of these more progressive cities beginning to take action. New York is often somewhat of a canary in the coal mine, and with localities flexing their regulatory muscle more and more, I wouldn't be surprised if we see more of that.

Moving on from the state and local action, though, we are also seeing activity at the federal level on AI. Mike, maybe you can talk a little bit about recent activity by the administration or by federal agencies?

Michael Yaghi:

Yeah. We've seen, for example, the Federal Trade Commission warn Congress about the dangers of AI and issue guidance to companies that use AI, for example, in the background screening context. We also know the EEOC published guidance on how an employer's use of AI could adversely impact Americans with disabilities. And in October of this year, the administration, through the White House's Office of Science and Technology Policy, issued what is called the Blueprint for an AI Bill of Rights. It flags five potential harms that AI can cause. One harm is unsafe or ineffective systems: systems should be designed to foresee the possibility of endangering an individual's or a community's safety, and depending on how a system is being used, it may not be safe or effective for that purpose.

The second one is algorithmic discrimination. We've sort of talked about that: does the AI discriminate against certain individuals or groups? Is there inherent bias in the historical data fed into the AI? Is there algorithmic discrimination in certain areas? The third item raised is intrusion upon data privacy. Privacy is a huge issue right now; we see a lot of states, like Utah, California, and Colorado, passing laws protecting consumer privacy. So one question is whether user data can be collected and used in a way that violates those privacy rights, and whether the type of data being collected is strictly necessary for the AI's purpose. That's an area of potential harm. A fourth area of potential harm is the lack of notice and explanation. That's the transparency we talked about earlier: whether people know what's being fed into the AI and how these decisions are being made.

The fifth and final area of harm raised in the blueprint deals with the lack of alternatives to AI: if flaws arise with the AI and it risks harming individuals, people may lack the opportunity to engage with a human decision maker, right? Because a computer is deciding, like I mentioned earlier, whether their application for a loan is approved or whether they're approved for medical care covered under their health plan. If we're stuck with AI only and the AI is producing a bias, how does that lack of an alternative impact consumers, and what are their options? That's an area regulators will really want to focus on and protect against, making sure people have options so that they're not adversely impacted.

Stephen Piepgrass:

Very interesting to hear the steps the administration is taking and the areas they're really focused on in tackling this subject. Are there any areas, Trey or Mike, that we haven't had a chance to talk about that you think our listeners might be interested in hearing about on the AI front before we wrap things up?

Trey Smith:

I think that regulators are going to focus on any potential harms they see, whether those harms are already discovered or still being understood, and use those as their guide. Which is why I think the blueprint the White House released, even though it may be challenging to implement, and it is only a blueprint after all, provides really great insight into the areas that states are focusing on. The same is true of the Stop Discrimination by Algorithms Act: they make it clear that they're focused on the areas Mike just discussed.

Michael Yaghi:

I'll just add, I think this is important: companies are using AI to streamline costs and improve efficiencies, right? That's the whole point, to try to make things more efficient and reduce overall costs. But they really need to be aware of the potential legal pitfalls, and they need to take the time to work with their counsel to go through those pitfalls before they deploy something like this. Because a cost-saving measure such as AI could really blow back on a company if it exposes them to significant liability to regulators. That's something they should build into their cost-savings analysis, making sure they take that extra time with their legal teams so they're not stepping into pitfalls and having something blow up in their face. That's just a critical area here, because we see so many state AGs and the federal government focused on this issue.

Stephen Piepgrass:

Yeah, as with so many emerging technologies, AI can be wonderful and great for consumers: decisions and approvals can be made faster than ever before, and large numbers of consumers can be served using AI technology. But at the same time, there is a risk of discrimination slipping in, especially if you're dealing with bad data, the classic garbage-in, garbage-out computer scenario. Given the strong focus by regulators, any business that is using AI on a regular basis needs to be thinking all of these issues through and following very closely what the states, the federal government, and some localities are doing in this area.

Mike and Trey, once again, thanks for joining us today. I know our listeners very much enjoyed your insights and I really enjoyed our conversation. I want to thank our audience for tuning in today as well. As a reminder, please keep an eye out for our other episodes in this series on AI that we will be releasing in the coming weeks. And please make sure to subscribe to this podcast through Apple Podcasts, Google Play, Stitcher, or whatever platform you use, and we look forward to talking with you next time.

Copyright, Troutman Pepper Hamilton Sanders LLP. These recorded materials are designed for educational purposes only. This podcast is not legal advice and does not create an attorney-client relationship. The views and opinions expressed in this podcast are solely those of the individual participants. Troutman Pepper does not make any representations or warranties, express or implied, regarding the contents of this podcast. Information on previous case results does not guarantee a similar future result. Users of this podcast may save and use the podcast only for personal or other non-commercial, educational purposes. No other use, including, without limitation, reproduction, retransmission or editing of this podcast may be made without the prior written permission of Troutman Pepper. If you have any questions, please contact us at troutman.com.