Regulatory Oversight Podcast

12 Days of Regulatory Insights: Day 2 - AI Under Scrutiny

Episode Summary

Stephen Piepgrass, Mike Yaghi, and Cole White analyze the rising regulatory scrutiny of artificial intelligence (AI) technologies by state attorneys general.

Episode Notes

In the second episode of our special 12 Days of Regulatory Insights podcast series, Cole White, a member of Troutman Pepper's Regulatory Investigation, Strategy, and Enforcement (RISE) practice group, is joined by colleagues Stephen Piepgrass and Mike Yaghi to analyze the rising regulatory scrutiny of artificial intelligence (AI) technologies by state attorneys general (AGs).

The conversation kicks off with an analysis of a recent settlement by the Texas AG's office with a health care technology company, which was the first such AG settlement pursuant to a state consumer protection act involving generative AI. Mike provides insights into how state consumer protection laws are being utilized to regulate AI, while Stephen highlights the broader implications for public policy and enforcement.

Following a discussion on the regulation of AI in telemarketing, which highlights a recent Federal Communications Commission (FCC) ruling influenced by a bipartisan group of state AGs, Mike and Stephen conclude with a forward-looking analysis of the legislative and enforcement activities related to AI that can be anticipated over the next year.

Episode Transcription

Regulatory Oversight Podcast: 12 Days of Regulatory Insights: Day 2 - AI Under Scrutiny
Hosts: Stephen Piepgrass, Mike Yaghi, Cole White
Date Aired: December 6, 2024

Cole White:

Hello everyone and welcome back to Troutman's special holiday edition of the Regulatory Oversight podcast, “The 12 Days of Regulatory Insights.” These twelve episodes focus on highlighting key trends from the past year across several different legal areas of interest with the goal of keeping you informed and engaged during this festive season.

I'm Cole White, a member of Troutman's Regulatory Investigation, Strategy, and Enforcement, or RISE, practice group and our nationally recognized state attorneys general team. Before I get started today, I want to remind all our listeners to visit and subscribe to our blog at RegulatoryOversight.com so you can stay up to date on the latest developments and changes in the regulatory landscape.

Today I'm joined by my colleagues, Stephen Piepgrass and Mike Yaghi, to discuss the increase in regulatory investigation and enforcement activity among state attorneys general related to artificial intelligence technologies over the past year. Stephen leads the firm's RISE practice and represents clients in single and multi-state enforcement actions, including inquiries and investigations, as well as litigation involving state attorneys general and other state and federal governmental enforcement bodies.

Mike, also in our RISE practice group, represents high-profile clients in regulatory enforcement investigations involving all facets of their business operations. Both are also members of Troutman's state attorneys general practice group. Stephen and Mike, thanks for joining me today.

Stephen Piepgrass:

Great to be with you, Cole, and good to see you, Mike.

Michael Yaghi:

Yes, agree. Thank you both and it's great to join you both today on this excellent topic.

Cole White:

Absolutely, I'm very excited. As I said, our conversation today is focused on state AGs and AI over the past year. So, the first thing I want to talk to you both about is a recent settlement agreement that was reached by the Texas Attorney General's Office with a company called Pieces Technologies, a healthcare technology company. After an investigation, the AG found that the company had made deceptive claims about the accuracy of their AI product that may have deceived hospitals into purchasing and using the products. Specifically, the company made claims that the products were “highly accurate” and had an extremely low hallucination rate.

Mike, with your expertise in representing clients in these types of consumer protection investigations, I'm curious about your thoughts on the AG bringing this action under these sorts of traditional consumer enforcement mechanisms. Can you talk about how state consumer protection laws provide flexibility for AGs to regulate novel legal areas like artificial intelligence?

Michael Yaghi:

Yes. This is a great area, and this settlement is an example of how state AGs like AG Paxton and others are not going to wait for Congress or state legislatures to pass new laws to regulate artificial intelligence. I think this is a great example of how broad the states' unfair and deceptive trade practices act statutes are and how they give state AGs, as the top law enforcement officers of their states, the power to go after companies if they perceive something to be allegedly deceptive or unfair in the marketplace.

This settlement really was interesting because the AG perceived and was alleging that the AI model the company was marketing to the public, marketing to hospitals, to basically gather and summarize patient information for medical staff, had a hallucination rate, or an error rate, we'll call it an error rate, that was higher than what they claimed the model would have, right?

So, the attorney general basically said, "We think that's a deceptive and/or an unfair practice," and was alleging those claims under that broad statute in the state of Texas. The company denied the allegations and didn't concede any liability, of course, but it just represents how states aren't going to wait if they perceive AI being used or deployed in a way that could potentially be deceptive or unfair in the marketplace.

These statutes are extremely broad, as you noted, and cases really come on a case-by-case basis. The state is going to look at the key facts in any situation, but it'll be based on the totality of whatever those circumstances are. I think this is a good signal to all industries, right? Because it's not limited to any particular industry. All industries that want to use artificial intelligence: don't think you're in some sort of holding pattern, waiting for federal or state laws to regulate this space. The states are definitely going to rely on their UDAP or consumer protection statutes to regulate the marketplace, and this is a perfect example of that.

Cole White:

In many ways, Mike, I'm sure this was very familiar to you given your advertising and marketing practice. Although it is an AI case, and really the first major AI settlement to come out of an AG's office, in another way it really is just a classic advertising and marketing settlement. Is that characterization pretty accurate?

Michael Yaghi:

It's 100% accurate. If you're advertising a product and arguably or allegedly misrepresenting that product, the use of that product, or how the product functions, which is essentially what the state was alleging here, that you were marketing this to hospitals claiming it was more accurate than it allegedly was. It's absolutely right. Any kind of advertising that could be deceptive or misleading or confusing is precisely the type of challenge that the states would bring under their broad consumer protection laws, and it was a healthcare company, right?

So again, that's why I get back to it's not AI-specific or healthcare-specific. It's just a traditional advertising case, like you noted, Cole, and the state saying, "Hey, we're going to protect our patients and our hospital system because we allege some violations here and we're going to pursue it." So again, the message, I think, should be: don't think you're in some sort of safe harbor until more laws and regulations come in the artificial intelligence space. Be prepared and ensure the accuracy of your AI models, how you're presenting them to the public, how you're marketing and promoting them, because other states will certainly follow if they perceive similar issues in their own jurisdictions.

Cole White:

One of the interesting things I noted about the settlement was, I don't believe there was a dollar value in the settlement. I believe it was purely injunctive relief. Is that right, Mike? If so, what's the significance of that? Was the AG trying to send a message here?

Michael Yaghi:

It's basically saying that they couldn't quantify restitution or specific harm. That's actually not very typical, right? States typically do ask for monetary relief, so it is unique in this context. But I think they wanted a settlement where Pieces, the company, was correcting the way it was marketing its AI model and ensuring safety. I think, and I don't know this for certain, but I would say that because this did touch on the healthcare space, there was probably a sense of urgency to get some settlement out there and get the company to correct its business model and correct what it was doing in the marketplace, since it touches on patient care, summaries of patient care to help medical staff. Again, I think the state was probably quick to get a resolution given the sensitivity around a healthcare-related product and the potential adverse impact, allegedly, on patients.

Stephen Piepgrass:

I think you're exactly right, Mike, and I'll jump in here. I found that really telling, Cole, the fact that there was no monetary payment here. As our listeners know, AGs are, yes, focused on the law, and that was the first point we were making, Mike, as we were talking about this really being an advertising and marketing case, but they're also very focused on public policy. There is what I would characterize almost as a race to create the law of settlements around AI issues. This matter presented a perfect opportunity for the Texas AG's office to step in and jump to the fore in this area, addressing really two hot-button subjects, AI and healthcare. That, I think, is really what drove this settlement and created a settlement that didn't have a monetary value to it, but it does send a message to the industry.

It's a message that's being sent using laws that we're all very familiar with, UDAP statutes, which are common across all AGs' offices, and this was the Texas AG really putting a stake in the ground as to the direction he sees AI going and wants companies in the industry to really focus on. So, I think it's a fascinating settlement for that reason too. It reflects both the legal hook that the AGs have with the current laws they are empowered to enforce and the public policy issues that they're very much focused on.

Cole White:

Absolutely. So, AI providers in Texas, you have been warned: expect no more non-monetary settlements, and you don't need to wait for AI laws to be enacted in order to be held responsible. Great answers from both of you on that. I appreciate it.

Moving to our next topic, I wanted to talk a little bit about AI in telemarketing. In January, Pennsylvania Attorney General Michelle Henry led a bipartisan group of 26 state AGs in sending a comment letter to the FCC, asking that the agency regulate the use of AI by telemarketers under the federal Telephone Consumer Protection Act (TCPA). Then, just a few short weeks later, the FCC issued a ruling recognizing that calls made with AI-generated voices are "artificial" under the TCPA, in keeping with the AGs' request.

Michael, the TCPA is obviously a federal law here. Can you talk about the importance of federal action in this case and why the AGs sent a letter rather than, for example, acting to regulate these calls themselves?

Michael Yaghi:

This is a critical and, I think, a perfect example of how technology is changing, of technology not fitting within a specific existing regulation or law, right? I'll start with the states. The states do have the ability to enforce the federal TCPA; there are provisions that give them that type of authority. So the hole here wasn't a lack of authority to enforce the TCPA. I think the hole here was the technology being utilized in the marketplace, and I'll explain what that means.

The TCPA prohibits artificial or pre-recorded voices from being sent to consumers through a recorded message in a telemarketing context, a telemarketing call. Those types of artificial and pre-recorded messages are prohibited under the TCPA unless you have written consent. You have to have express written consent from the consumer to send him or her those calls.

I think what the states were seeing is: what happens if AI, a fake human voice, is utilized to make a phone call, to engage in telephone solicitations? The states clearly saw that as a potential gray area, or at least an area where, arguably, companies could claim it's a gray area. So, they wanted to have the FCC weigh in on that under the TCPA. They asked the federal agency to conclude, which it did later, like you noted, Cole, that artificial intelligence creating human voices to make telephone solicitations should be treated the same way a pre-recorded or artificial call is handled. That's why they wanted the FCC to weigh in on that.

Each state also has its own sort of TCPA, its own telephone solicitation laws and regulations, and those probably have the same hole. I mean, I haven't done a 50-state survey on those laws in a while, but I can assure you, in the years we've been doing this, I've never seen any provisions that address artificial intelligence. So, I'm certain they have the same holes at the state level, and the states wanted some sort of uniformity across the country, so they reached out to the FCC to make that very clear.

Now, with that in place, if the states see this activity in their jurisdictions, they clearly could pursue it under both the federal TCPA and their state laws and say, "Hey, you're using AI in a way that violates the law, you don't have express written consent from consumers to send those calls, and we want it to stop."

Cole White:

I think that makes perfect sense, a great example of federal-state collaboration. Stephen, from your perspective on the state attorneys general team, what impact do you think the AGs' letter had on the FCC's ultimate decision here? Do letters from state AGs to the federal government have any kind of outsized impact on federal policy? And if so, what does that look like?

Stephen Piepgrass:

I think this is a great example of exactly that, the fact that they do have such an impact. Here, when the FCC announced its declaratory ruling, it specifically referenced the fact that at least one of the purposes of the ruling was to give state AGs across the country new tools to go after bad actors making robocalls. They called it out specifically. Clearly, this was a response to the state AGs' request, and it really does highlight the outsized role state AGs have, not just at the state level, but when they work together to make changes at the federal level with agencies. This is a perfect example.

The other reason I really like this, and again, this ties back into Mike's advertising and marketing practice, is that the FCC specifically references the use of AI to create celebrity voices and politician voices and to confuse consumers. When you're dealing with the use of celebrities, the very first thing we all think about in this space is endorsements. There are all sorts of requirements that I think are beginning to be built into the law, for example, that AI-generated pictures must have watermarks on them so that you can tell they were AI-created.

In this situation, though, you're talking about a mimicked celebrity voice or a mimicked politician's voice. How do you protect against that? Is there an obligation on the part of businesses or companies that want to use AI to generate that type of buzz to reveal that that's what they're really doing? So, that's something I think, Mike, you might be able to speak to, and I'm sure something your clients would be interested in hearing about.

Michael Yaghi:

That is definitely a hot area, endorsements and testimonials in advertising. We've seen a lot of increased activity from the feds, the FTC, in that space, and from state AGs, actually. States have partnered with the FTC in that very arena to tackle that issue. So, using AI to make consumers think that a specific celebrity or well-known public figure is promoting a product is a major concern. And as we'll talk about next, there are examples of states raising that in the political context as well, saying we're concerned about AI misleading voters, for example, by putting statements in politicians' mouths that weren't true.

So similarly, if AI is being used to portray celebrities or public figures promoting products and services, that's a huge no-no. The FTC's endorsement and testimonial rules would be violated, and it's misleading the consumer. But I also think that's an example like the Texas law: the FTC will say its endorsement and testimonial rules are broad enough for it to tackle that problem; it doesn't need a specific AI regulation or law on the books to do it, right? So, if AI is being used in that context, you could expect the FTC to certainly investigate and make sure that people aren't being misled.

So yes. I mean, what do you do in that context? There's a lot of dialogue at the federal and state level. Do you put watermarks on? Do you require watermarks on AI-generated video, statements, content, things like that? But as that gets worked out at the federal and state level with new laws, as we've been saying, the theme will be that existing laws are going to be enforced by federal and state regulators. They're not going to wait for legislatures or Congress to act.

Cole White:

Exactly. I think that's a great takeaway here: states are taking an active hand, they've expressed their priorities with respect to certain consumer protection rules, and they're going to step up and serve the community as needed. Thanks so much for those answers, guys. Absolutely insightful. Like I said, I think the key thing here is that states have enforcement mechanisms they're going to use regardless of a lack of specific AI legislation in their states.

Before we close out today, I want to pose one final question to you both. Looking into your crystal ball for a moment, what types of activity do you expect that we'll see from the states related to the regulation of AI over the next year? So, this could be legislative initiatives, enforcement actions, consumer advisories, multi-state litigation, open to whatever you all think is coming next in the world of AI.

Michael Yaghi:

Yes, it's a great question. In fact, California is a pretty good example of it. The look ahead, I would say, is that there's going to be a lot of increased activity at the state legislature level, where governors, attorneys general, and legislators are trying to put some guardrails around AI, right? And passing laws. California is a perfect example of this from back in September. There was a Senate bill, SB 1047, that was introduced to put some guardrails around AI, and Governor Newsom vetoed it; he was very concerned.

Well, let me back up. Some of the top 30 or 35 AI companies, the companies that are developing large AI models, are in California, right? So, there was a lot of concern about rushing to legislation to regulate the space, which might dampen that innovation and the technology's growth and expansion, and Newsom vetoed the bill. It was basically going to put some pretty big safeguards around the very large companies developing large AI models, while excluding smaller companies, which was one area the governor was a little concerned about, right?

You could see that if you're regulating only part of the market, smaller entities could come out and create the same concerns, and the concerns are pretty similar to what we were talking about earlier, right? Misusing AI to misrepresent people's identities, confusing voters in elections, spreading misinformation. All of those are areas of concern that these laws are trying to regulate, along with risks like attacks on critical infrastructure, so public safety issues, privacy issues, things like that.

The governor was a little concerned that maybe it wasn't being applied to the marketplace properly. He had concerns that the law wasn't really based on empirical or scientific evidence either. Is it really addressing specific issues, or are we just trying to broadly say companies have to have "safeguards" to ensure that AI protects the marketplace in all these different areas?

So, he was concerned about that. Like I mentioned, he was really concerned about smaller companies emerging and not being captured by the law, in which case all those same problems would exist, right? Privacy, critical infrastructure, interfering with the democratic process, misleading people, consumers, voters, et cetera. And Attorney General Bonta, earlier this year, sent a letter to Congress, with other states, by the way, about concerns of AI being used, for example, in child sexual abuse contexts, child protection contexts. So, I think the point is that in the year ahead and beyond, you're going to see more activity at the state level for sure, with bills trying to get passed to put some guardrails around AI in all these different contexts.

Stephen Piepgrass:

So, I think Mike did a great job of covering the waterfront when it comes to potential legislative efforts, particularly at the state level, around AI. My guess is technology companies took the veto in California as a good sign, suggesting that states are going to be deliberate about this, really trying to think about not just consumer protection but also the potential benefits of AI to industry, and making sure a balanced approach is adopted, because that first piece of legislation that comes out often ends up being the model for other states.

On the enforcement side, this is very much on state AGs' radars, and on federal regulators' radars as well, as we've already seen with the FCC and the FTC. I know, having been to many state AG conferences over the past year and really over the last several years, that AI has been one of the consistent topics on their agendas. I would not be surprised if there were multiple multistate investigations going on right now looking at AI issues and hot-button topics, particularly around bias in decision-making in multiple different industries.

Healthcare is obviously one; we talked about that earlier in this conversation. Financial services is another. Any area that is consumer-facing where AI is being used to help make decisions is one that is likely to draw scrutiny from attorneys general. There's also the criminal angle. Because state AGs play both a civil and a criminal role, when it comes to the criminal side of things, they're focused on fraud, abuse, and scams, which can be civil or criminal. And obviously election interference was a huge topic right before this last election, and that will continue to be the case as state elections come up in the coming years.

So, I think we're really at the tip of the iceberg when it comes to AI and enforcement, and this will be an area I know we are watching very closely in the coming year, and I know our clients will be as well.

Cole White:

Absolutely. I think that focus is only going to grow. So, Stephen and Mike, I want to thank you both so much for joining me today. This was an awesome conversation. I know our listeners enjoyed your valuable insights. I also want to thank our audience for tuning in to this special holiday series. Stay tuned for more episodes of “The 12 Days of Regulatory Insights,” and please make sure to subscribe to this podcast via Apple Podcast, Google Play, Stitcher, or whatever platform you use, and we look forward to hosting you again soon. Thanks, everyone.

Copyright, Troutman Pepper Hamilton Sanders LLP. These recorded materials are designed for educational purposes only. This podcast is not legal advice and does not create an attorney-client relationship. The views and opinions expressed in this podcast are solely those of the individual participants. Troutman Pepper does not make any representations or warranties, express or implied, regarding the contents of this podcast. Information on previous case results does not guarantee a similar future result. Users of this podcast may save and use the podcast only for personal or other non-commercial, educational purposes. No other use, including, without limitation, reproduction, retransmission or editing of this podcast may be made without the prior written permission of Troutman Pepper. If you have any questions, please contact us at troutman.com.