AI continues to capture the headlines. One recent headline noted that ChatGPT passed the medical boards. In this third episode, Stephen Piepgrass and colleagues Michael Yaghi and Barry Boise discuss the potential risks health care companies may face with increased reliance on AI, as well as the increased focus on AI by various regulators and state attorneys general, particularly in the health care space.
Our panel also offers suggestions on how health care companies can engage with state regulators to discuss the issues, share their perspective on how AI could assist the health care industry, and create an ongoing dialogue with regulators to make sure that private industry and the government are working together.
In part four of this podcast series, we will look at the use and impact of AI in the financial services industry.
Regulatory Oversight Podcast – AI: Impact and Use in the Health Care Industry (Part Three)
Host: Stephen Piepgrass
Guests: Michael Yaghi and Barry Boise
Stephen Piepgrass:
Welcome to another episode of Regulatory Oversight, a podcast that focuses on providing expert perspective on trends that drive regulatory enforcement activity. I'm Stephen Piepgrass, one of the hosts of the podcast, and the leader of the firm's Regulatory Investigations, Strategy + Enforcement practice group. The podcast features insights from members of our practice group, including its nationally ranked state attorneys general team, as well as guest commentary from business leaders, regulatory experts, and current and former government officials. We cover a wide range of topics affecting businesses operating in heavily regulated areas.
Before we get started today, I want to remind all our listeners to visit and subscribe to our blog at regulatoryoversight.com, so you can stay up to date on developments and changes in the regulatory landscape. Today, in this third episode focused on the trending topic of artificial intelligence, I'm joined by my colleagues Mike Yaghi and Barry Boise.
Mike is part of our Regulatory group and the AG team, and Barry is a partner in our Health Sciences group, focusing on litigation and investigations. We'll discuss the potential risks health care companies may face with increased reliance on AI, as well as the increased focus on AI by various regulators and state attorneys general, particularly in the health care space. Mike and Barry, thank you for joining us today. I know this is a topic both of you are following closely, and I'm very much looking forward to our discussion.
Barry, maybe we will kick this off with you. As you know, AI is in the headlines today a great deal. But for those listeners who are maybe joining this series for the first time, can you talk a little bit about why companies are interested in using AI and a bit about how it is being used?
Barry Boise:
Thank you, Stephen, and thank you for having me on this podcast. You're right that AI is capturing the headlines, and one of the more recent headlines is that ChatGPT passed the medical boards: the Step 1, Step 2, and Step 3 exams that all medical students have to pass. And that was untrained AI. There's also machine learning, where algorithms improve over time, sharpening their ability to predict events and assisting with a whole range of different tasks. So, for example, in the context of health care, you have AI being used to predict outcomes using algorithms and machine learning. This February, MIT researchers revealed Sybil, an AI program that can take low-dose CT scans and predict lung cancer risk over the next six years. Detecting lung cancer in its early stages is challenging, and early detection is the difference between life and death for so many people.
So the ability to take large data sets and apply algorithms beyond what we were previously able to do enables a far greater ability to predict events and outcomes. We also see this predictive ability in drug discovery, where it's critical. It helps researchers predict surrogate markers to treat disease, and it also assists in predicting which existing assets, investigated or used to treat one disease, may be effective for other uses. We saw this with the lightning-fast discovery of certain treatments for COVID. So again, the ability to take large data sets and apply algorithms and machine learning really advances health care in many different ways. We see it in the ability to manage contracting and supply chain issues, and to assist doctors and patients in clinical decision-making. We know, for example, that the FDA has cleared over 500 medical devices that incorporate AI. Another good example is the radiology space, where most of these clearances have occurred.
In radiology, AI is able to find patterns and associations not easily detected by the human eye, such as nodules that may indicate lung cancer, and to assist radiologists in evaluating a wide range of disease states. So really, it's a technology that allows a broader use of big data, and big data is certainly prevalent in the health care setting. Hospitals and payers are also using AI to help make population health determinations about how to use scarce resources. The deployment of ventilators during COVID was one example where AI was used, as were decisions about which patient populations should receive more intensive engagement based on certain baseline characteristics. So it really is prevalent in the health care space.
Stephen Piepgrass:
All of those are great things about AI. Sounds like it is a huge boon for productivity, and a lot of potential here for really helping consumers and helping patients and researchers. At the same time, we have been hearing about some concerns about AI. And Mike, maybe you can talk a little bit about that.
Michael Yaghi:
Yeah, thanks, Stephen. There's no question that the boom in the use of artificial intelligence helps companies marshal large data sets and make decisions more efficiently. But we've also seen a lot of attention from regulators focused on making sure that AI isn't used, either intentionally or unintentionally, in a way that harms consumers. There's no doubt that artificial intelligence directly impacts consumers in many ways, including in the health care context. For these reasons, AI technology is really under scrutiny by a lot of regulators. We've seen many state AGs, state legislatures, the Federal Trade Commission, the Biden administration, and, as noted, the FDA all share concerns over the use of AI. In fact, California Attorney General Rob Bonta last year sent a letter to hospital CEOs raising exactly this issue and concern.
His concern was with respect to whether there were underlying biases in the data sets about patient populations, for example, that were being used by AI systems, and whether those AI decision-making tools would somehow perpetuate some unfair bias toward, let's say, non-white patients versus white patients in making some of their decisions. He was really focused on making sure that data used by AI is not used in a way that will adversely impact disadvantaged groups, for example by excluding them from health care. And so state regulators, through the AGs, as well as the federal government, are going to remain focused on the use of AI to make sure that companies aren't unintentionally using data in a way that could adversely impact different segments of our society.
Stephen Piepgrass:
Barry, can you maybe elaborate on that a little more? Not just on the point about Bonta, but how other regulators are looking into AI, particularly in health care?
Barry Boise:
Certainly, Stephen. It's interesting. The issue that Bonta raised was first identified in an article published in the journal Science, which found that hospitals and payers were using data sets to assess who would receive more intensive care and treatment. The data sets they were using were really payer data sets. So the problem was that there was a historical bias concerning the use of health care, a disparity in the use of health care among minority patient populations. That disparity plays out and gets exacerbated when those data sets are used to project who should receive care in the future. So the underlying thread of regulators' concerns with AI really gets to a couple of issues. One, are the data sets being used appropriate, or do they reflect inherent biases that will only get repeated? Two, are the algorithms appropriate, and are they tested in a way that will make sure they're not creating unintended consequences and bias?
And we've seen these very concerns played out by state AGs. As Michael said, state legislatures are evaluating certain legislation. The FTC has put out notice about the need to make sure that data sets and algorithms are appropriate and are not being misrepresented in any way. The Biden administration has spoken on this issue and some of the hazards of AI, and FDA has also expressed concern about a different bias. They call it automation bias, seen when doctors are more likely to follow recommendations presented by AI even if they don't fully understand the source data or how those recommendations came to be.
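To make the data set concern concrete, here is a minimal sketch of the dynamic Barry describes: when historical spending is used as a proxy label for health need, a model that ranks patients by predicted cost simply reproduces a historical access gap. Everything here is hypothetical, with synthetic data and an assumed 30% spending gap chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two groups with an identical distribution of true clinical need.
group = rng.integers(0, 2, size=n)      # 0 = majority, 1 = minority
need = rng.normal(50, 10, size=n)       # true need, same for both groups

# Hypothetical historical access gap: the minority group generated ~30%
# less spending for the same level of need.
cost = need * np.where(group == 1, 0.7, 1.0) + rng.normal(0, 5, size=n)

# A "risk model" trained with cost as its label effectively ranks by cost.
# Flag the top 10% of predicted cost for more intensive care management.
flagged = cost >= np.quantile(cost, 0.90)

for g in (0, 1):
    print(f"group {g}: {flagged[group == g].mean():.1%} flagged, "
          f"mean true need {need[group == g].mean():.1f}")

# Despite identical need, group 1 is flagged far less often: the historical
# disparity baked into the training label is repeated by the algorithm.
```

Notably, in the published study the disparity shrank substantially when the model's label was changed from cost to more direct measures of health, which underscores that the choice of data set, not just the algorithm, drives the outcome.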
Michael Yaghi:
Yeah, and to add to that, what Barry said makes sense and is important. We've also seen, for example, that California has previously proposed regulations to restrict automated decision systems in the employment context, and we have reason to believe that similar restrictions may be introduced or considered for the health care industry, especially given Attorney General Bonta's position and his letter.
And it's many states, right? We've seen, for example, the Ohio attorney general and the former Vermont attorney general weigh in. We've seen Colorado pass laws requiring that state agencies' use of facial recognition, for example, be tested under operational conditions and that agencies take reasonable steps to ensure the best quality of results. So in many different contexts, states and regulators are looking at this issue. And it's just going to grow in the health care industry, mainly because there's a lot of information the health care industry could use through AI to make decisions more efficiently.
But I think going through the pandemic, and especially coming out of it, there's a lot of focus among regulators on making sure that patients have adequate access to health care, and they're certainly going to make sure that AI isn't used in a way that either intentionally or unintentionally excludes any population groups from gaining access to care. As one example, the California attorney general is also investigating mental health parity issues. There's a focus on making sure that patients have adequate access to health care, and the added complication of AI has raised, and will continue to raise, regulator scrutiny in that context.
Stephen Piepgrass:
It might be helpful for our listeners to understand a little bit more about the authority under which agencies are regulating AI. Barry, maybe you could talk a little bit about sources of authority for regulating in this space.
Barry Boise:
Sure, Stephen. Each state has consumer protection laws that are incredibly broad, reaching virtually any activity that can impact consumers. Each state's law is a little bit different, but at a high level, interactions or transactions that are deemed unfair, deceptive, or unconscionable can all be deemed a consumer protection violation. And the attorneys general have historically claimed extraordinarily broad authority and power to regulate a wide range of activity that they believe is harming consumers.
Stephen Piepgrass:
And Mike, maybe you could speak a little bit to FTC authority in this space as well, and what they cite.
Michael Yaghi:
Yes. Congress has given the Federal Trade Commission very broad authority under Section 5 of the FTC Act. Section 5 gives the federal government and the FTC the ability to investigate and seek redress for essentially any type of unfair or deceptive act or practice. So if health care companies are using AI in a way that unfairly excludes certain populations from, for example, access to health care while giving access to other populations, especially if it's along racial lines, that's something the FTC has the authority to investigate to evaluate whether a company is actually engaging in that type of conduct. The FTC can serve a civil investigative demand (CID) on a company, and these demands are usually very broad. They can dive right into a company's use of artificial intelligence and really get at the algorithms the company is using, the underlying data it relies on, and whether the company is misrepresenting to patients and the public how its systems work or how its decisions are being made. It can be really difficult for a company to deal with those issues after receiving a CID.
It really makes sense for companies to look at this issue on the front end and get the right advice to make sure they're vetting their use of AI, the data they're relying on, and the algorithms they're relying on upfront, doing a compliance check so they can say, "We've really looked at these issues. We're deploying this in a thoughtful way. We've put guardrails around making sure there is no bias, intentional or unintentional," so that in the event a civil investigative demand is served on the company, it has some defenses and the ability to respond. And hopefully, if they do that compliance work upfront, they may never, and should never, get a CID. But the FTC definitely has broad authority to look at these issues and make sure companies aren't engaging in any type of unfair conduct, which is a very broad standard under Section 5 of the FTC Act.
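As a companion to that advice, here is one small, hypothetical example of what a front-end compliance check might include: a pre-deployment disparity audit that compares selection and error rates across groups. The function name and metrics are our own choices for illustration; a real audit in health care would be far broader and would involve domain experts and counsel.

```python
import numpy as np

def disparity_report(y_true, y_pred, group):
    """Print per-group selection and false-negative rates for a binary model.

    A large gap between groups is a signal to pause and investigate the
    training data and the algorithm before deployment.
    """
    y_true, y_pred, group = (np.asarray(a) for a in (y_true, y_pred, group))
    for g in np.unique(group):
        m = group == g
        positives = np.sum(y_true[m] == 1)   # patients who truly need care
        fn = np.sum((y_true[m] == 1) & (y_pred[m] == 0))
        fnr = fn / positives if positives else float("nan")
        print(f"group {g}: selection rate {y_pred[m].mean():.1%}, "
              f"false-negative rate {fnr:.1%}")

# Example with made-up labels and predictions:
disparity_report(
    y_true=[1, 0, 1, 1, 0, 1, 0, 0],
    y_pred=[1, 0, 0, 1, 0, 0, 0, 1],
    group=["A", "A", "A", "A", "B", "B", "B", "B"],
)
```

Running a check like this, and documenting the results and any remediation, is one way to build the record of thoughtful deployment Michael describes.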
Stephen Piepgrass:
Thank you, Mike. Really appreciate, and I know our listeners do, too, the practical tips there. Barry, especially in the heavily regulated health care space, what tips would you offer for companies operating in this space that are thinking about or maybe already using AI?
Barry Boise:
There are a couple of things I would suggest. First, and I hope this podcast has helped a little bit on this score, understand what the regulators' concerns are and the tools they have for enforcement, and address those. So if there is novel technology, something where regulators won't understand how a certain product works, and you think it will draw regulatory scrutiny given the nature of the patient population or the product, there are ways to proactively reach out to regulators to educate them. In fact, the National Association of Attorneys General has an educational component on technology that is focused on keeping attorneys general, as regulators, up to speed on issues of cybersecurity but also AI. They would welcome the opportunity to learn more from the people they're regulating. So proactive outreach isn't right in every circumstance, but there are circumstances where it makes a lot of sense.
Stephen Piepgrass:
And Barry, I'd echo that, and also add that the Republican Attorneys General Association, the Democratic Attorneys General Association, and the Attorney General Alliance all offer different ways of reaching out to AGs on issues like this. This is an issue that, as you both discussed today, they are very focused on, and there are lots of different ways you can, and we could help with, reaching out to them and explaining a business's position in this area. Mike, any additional thoughts on your end, whether with the AGs in particular or with other regulators, for companies operating in the AI space?
Michael Yaghi:
Actually, to your point, Stephen, the different AG associations are great vehicles for health care companies that want to engage with these state regulators. They're a great way to meet the different AGs and their staff lawyers. And they give companies an opportunity to talk about some of these issues, share their perspective on how AI could assist the health care industry, and have a dialogue with regulators to make sure that private industry and the government are working together. So those associations are a great resource for any health care company interested in these issues that wants to stay on top of what regulators are thinking and saying. Join those associations, go to the conferences, and meet the regulators.
Stephen Piepgrass:
Barry, you had mentioned FDA earlier. Maybe you could go into a little more detail about what FDA's current position is on the use of AI; I know it's been changing over the years.
Barry Boise:
Sure. FDA has broad authority to regulate what are defined as medical devices, which can include just about anything that's used to treat, cure, prevent, mitigate, or diagnose disease in humans. And you might be thinking right now, "Well, maybe all my health apps could qualify and meet that technical definition. Is FDA even regulating those?" The answer is that it has been a little confusing at times, and there has been guidance over the years. But in 2016, as part of the sweeping 21st Century Cures Act, Congress sought to spur innovation by excluding certain health applications and other innovations it considered lower risk from the definition of a medical device, and hence from FDA regulation. Over the past six years, the FDA has mightily struggled to provide meaningful guidance to industry as to where those lines are drawn, particularly when it comes to clinical decision support software, over the same period in which AI innovation has really accelerated.
Maybe it's hard to imagine Congress or FDA completely foresaw AI passing medical exams, but FDA had the opportunity to provide greater direction to industry in its most recent guidance, which it put out in September. It provided some clarity, but it also signaled that FDA will be engaging in more regulation, not less, of clinical diagnostic and clinical decision support software. I mentioned earlier FDA's concern about automation bias, which in my opinion reflects a distrust of physicians: the worry that physicians will place too much weight on AI-based clinical decision support software, when in FDA's view physicians would need to fully understand the data sets, the algorithms, and how the process worked, and would need time to consider how all of that fits into a differential diagnosis or a treatment course. That is unrealistic in emergency medicine and in many other hospital settings. But nonetheless, FDA has issued this guidance, which is not binding law but indicates where FDA is taking its regulatory authority, signaling greater regulation of, and concern about, the use of AI in apps, medical devices, and clinical decision support software.
Stephen Piepgrass:
Barry and Mike, I want to thank you again for joining us today. I know our listeners very much appreciated your insights as well, and I want to thank our audience for tuning in, too. As always, we appreciate you listening, and don't hesitate to reach out to the Troutman team if we can help. I hope you'll join us for our fourth AI podcast episode, where we discuss AI's impact on the financial industry. Please make sure to subscribe to this podcast through Apple Podcasts, Google Play, Stitcher, or whatever other platform you use, and we look forward to having you join us next time.
Copyright, Troutman Pepper Hamilton Sanders LLP. These recorded materials are designed for educational purposes only. This podcast is not legal advice and does not create an attorney-client relationship. The views and opinions expressed in this podcast are solely those of the individual participants. Troutman Pepper does not make any representations or warranties, express or implied, regarding the contents of this podcast. Information on previous case results does not guarantee a similar future result. Users of this podcast may save and use the podcast only for personal or other non-commercial, educational purposes. No other use, including, without limitation, reproduction, retransmission or editing of this podcast may be made without the prior written permission of Troutman Pepper. If you have any questions, please contact us at troutman.com.