Many companies use machine learning algorithms and artificial intelligence (AI) to assist with employment decisions and tenant screening. In our final episode, Stephen Piepgrass and colleagues Ron Raether and Dave Gettings examine the use and impact of AI in background screening, including the potential risks companies may face with increased reliance on AI.
Our panel also explores the increased focus on AI by various regulators and state attorneys general and offers some best practices to consider when developing or adopting machine learning models into business processes.
Regulatory Oversight Podcast — AI: Impact and Use in Background Screening (Part Five)
Stephen Piepgrass:
Welcome to another episode of Regulatory Oversight, a podcast that focuses on providing expert perspective on trends that drive regulatory enforcement activity. I'm Stephen Piepgrass, one of the hosts of the podcast and the leader of the firm's Regulatory Investigations Strategy + Enforcement practice group.
This podcast features insights from members of our practice group, including its nationally ranked State Attorneys General team, as well as guest commentary from business leaders, regulatory experts, current and former government officials, and Troutman Pepper colleagues.
We cover a wide range of topics affecting businesses operating in heavily regulated areas. Before we get started today, I want to remind all our listeners to visit and subscribe to our blog at regulatoryoversight.com, so they can stay up to date on developments and changes in the regulatory landscape.
Today, in our final episode focused on the trending topic of artificial intelligence, I'm joined by my colleagues Ron Raether, leader of our Privacy + Cyber team, and Dave Gettings, a partner in our Consumer Financial Services practice group.
We'll discuss how companies are using AI in background screening, and the potential risks they may face with increased reliance on AI.
We'll also discuss the increased focus on AI by various regulators, including state attorneys general, and some best practices to consider. Ron and Dave, thank you very much for joining us today. I know this is a topic both of you have been following closely, and I'm very much looking forward to our conversation.
Ron Raether:
It's great to be here.
Dave Gettings:
Thanks for having us, Stephen.
Stephen Piepgrass:
Ron, I thought maybe we could kick it off with you, and this is a question we've asked on most of the podcasts in this AI series, but can you give just a little bit of background, and this also helps our listeners get up to speed if they're joining us for the first time, on what is AI? What are machine learning algorithms, particularly as it pertains to the background screening world?
Ron Raether:
Stephen, it's wonderful to be on this podcast and be able to participate in addressing a very interesting topic, and one that in this particular industry we've been addressing for numerous years. In fact, probably over a decade.
And as somebody familiar with technology and who follows technology trends, I'll avoid the debate over the difference between machine learning and artificial intelligence. Because when you speak with techies, there is an important distinction between the two.
I think for the purpose of this podcast and our discussion today, it's really about using technology and computers to do things that humans just aren't capable of doing within the practical limitations of being a human, whether that's time, processing power, or the like.
So it begins with a number of different inputs that help the software solution learn and develop answers to specific questions. Let me put it simply.
Normally a piece of software is written with very logical rules. If A, then B. If C, then D. If you can't find D, then go back to A. There's a very strict set of rules that the program is written to follow.
With artificial intelligence or machine learning, what you're doing is you're asking the software to create its own rules, to create that logic for itself. So you build the software in a way that says, I'm going to give you all these inputs, and with these inputs, I want you to start working towards answering some specific questions.
How you go about doing that, I want you to figure out on your own. And that's sort of how machine learning or artificial intelligence works.
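To make that contrast concrete, here is a minimal, hypothetical sketch in Python. Nothing in it comes from the panel: the feature names, thresholds, and toy data are invented, and scikit-learn is used only as a convenient example of a library that learns rules from data. The first function is traditional hand-written logic; the second hands past examples to a model and lets it derive its own decision rules.

```python
# Hypothetical contrast between hand-written rules and learned rules.
# Feature names, thresholds, and data are invented for illustration only.
from sklearn.tree import DecisionTreeClassifier

def rule_based_decision(rent_to_income: float, prior_eviction: bool) -> str:
    """Traditional software: the programmer writes every rule explicitly."""
    if prior_eviction:
        return "decline"
    if rent_to_income > 0.40:
        return "review"
    return "approve"

# Machine learning: we supply example inputs and outcomes and ask the
# software to derive its own decision logic from them.
X = [[0.25, 0], [0.55, 0], [0.30, 1], [0.60, 1], [0.20, 0], [0.45, 1]]
y = ["approve", "review", "decline", "decline", "approve", "decline"]
learned_model = DecisionTreeClassifier(max_depth=2).fit(X, y)

print(rule_based_decision(0.35, prior_eviction=False))  # rule a human wrote
print(learned_model.predict([[0.35, 0]])[0])            # rule the model derived
```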
Stephen Piepgrass:
Great. Thanks, Ron. I appreciate the primer on that, and at some point, we'll have to talk a little more about the difference between machine learning and AI since I am one of those nerds, but I had not focused on that.
Dave, maybe you could talk a little bit about the use cases for algorithms, for AI, particularly as they pertain to background screening.
Dave Gettings:
Yeah, absolutely. Happy to, and appreciate you having me here. I will note that I typed in your first question into ChatGPT, and I said, please answer that question in the voice of Ron Raether.
And I got exactly the answer Ron provided. So we know artificial intelligence is working, especially in the podcast space.
There are a few different use cases for machine learning, for AI, for just computer-assisted decisioning in the screening space. And the thing is, it changes every day and every year. Because as soon as a consumer reporting agency or other technology company figures out a better way to do things, then other companies are copying them and you move on to the next development, the next better way to do things, so to speak.
A few areas where we tend to see AI or machine learning in the screening space are one, match logic, and what we're talking about with respect to match logic is how does a screening company, for example, pair a public record with the applicant or with the consumer on whom the report is being prepared to make sure it is the best match possible? To try to reduce false positives and also false negatives.
And so a lot of companies are using machine learning for that. Programming variables into the machine learning, like name, date of birth, potentially social security number, commonness of name. And logic related to how to deal with commonness of name, location of residence, location of public records, frankly, any variables they can think of to better predict whether a public record is going to be an accurate match to a consumer.
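As one way to picture the match logic Dave describes, here is a hypothetical weighted-scoring sketch. Real screening systems are far more sophisticated and often learn their weights from data; the fields, weights, and threshold below are invented purely for illustration.

```python
# Hypothetical illustration of match logic: score how well a public record
# matches an applicant. Fields, weights, and the threshold are invented.
def match_score(applicant: dict, record: dict, name_commonness: float) -> float:
    score = 0.0
    if applicant["last_name"].lower() == record["last_name"].lower():
        # A shared surname means less when the name is very common.
        score += 0.35 * (1.0 - name_commonness)
    if applicant["date_of_birth"] == record.get("date_of_birth"):
        score += 0.30
    if applicant.get("ssn_last4") and applicant["ssn_last4"] == record.get("ssn_last4"):
        score += 0.25
    if applicant["state"] == record.get("state"):
        score += 0.10
    return score

applicant = {"last_name": "Smith", "date_of_birth": "1990-01-01",
             "ssn_last4": "1234", "state": "VA"}
record = {"last_name": "Smith", "date_of_birth": "1990-01-01", "state": "VA"}

# A common surname like Smith drags the name component down, so the record
# needs corroborating identifiers before it clears the (invented) threshold.
score = match_score(applicant, record, name_commonness=0.9)
print(score, "match" if score >= 0.5 else "no match")
```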
Another area we've seen it, once you've got the match, is in the decisioning process, specifically for example, recommendations as to whether to employ an applicant or whether to rent a property to an applicant.
Many customers use background reports to help inform their decisions with respect to rental or employment, and some consumer reporting agencies have designed models to help customers in making those decisions. Those models, for example, may analyze factors that could lead to a negative rental outcome, meaning a default on the rent, or even negative employment outcomes.
And then these models analyze different factors in the consumer's background that might lead to those outcomes and try to mitigate them.
So aspects, for example, in the tenant screening space, like rent-to-income, debt-to-income, different potential factors which could lead to negative outcomes, and try to optimize the decisioning process to avoid those outcomes. And constantly developing the model from both positive and negative outcomes in order to learn.
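A rough, hypothetical sketch of the kind of decisioning model described above: it learns from fabricated past rental outcomes and then estimates default risk for a new applicant. The features, data, and use of scikit-learn's logistic regression are illustrative assumptions, not a description of any vendor's actual model.

```python
# Hypothetical decisioning-model sketch: learn from past rental outcomes,
# then estimate default risk for a new applicant. Data are fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: rent-to-income ratio, debt-to-income ratio, prior eviction flag.
X = np.array([
    [0.25, 0.10, 0],
    [0.30, 0.20, 0],
    [0.55, 0.45, 1],
    [0.60, 0.50, 0],
    [0.20, 0.15, 0],
    [0.50, 0.40, 1],
])
y = np.array([0, 0, 1, 1, 0, 1])  # 1 = defaulted on rent in this toy history

model = LogisticRegression().fit(X, y)

new_applicant = np.array([[0.35, 0.25, 0]])
default_probability = model.predict_proba(new_applicant)[0, 1]
print(f"Estimated default risk: {default_probability:.0%}")

# In practice the model would be retrained as new positive and negative
# outcomes arrive, which is the constant learning Dave describes.
```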
Another related aspect is the riskiness of a rental. If you've decided you are going to rent to a tenant, you still need to factor in what the pricing for that tenant is going to look like. Maybe it's a greater security deposit, maybe it's a slightly increased rent to account for the potential riskiness of a tenant.
We've seen consumer reporting agencies offer those types of models to their customers, which again leverage data to try and reach the best outcome and mitigate the potential risk of a tenant.
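And a minimal, hypothetical sketch of that risk-based pricing idea: translating an estimated default risk into a deposit or rent adjustment. The tiers and numbers are invented for illustration and do not reflect any actual product.

```python
# Hypothetical risk-based pricing: map an estimated default risk to a
# security deposit and rent adjustment. Tiers and numbers are invented.
def price_for_risk(default_probability: float, base_rent: float) -> dict:
    if default_probability < 0.10:
        return {"security_deposit": 0.5 * base_rent, "rent": base_rent}
    if default_probability < 0.25:
        return {"security_deposit": 1.0 * base_rent, "rent": base_rent}
    # Higher risk: larger deposit and a modest premium rather than a denial.
    return {"security_deposit": 2.0 * base_rent, "rent": round(base_rent * 1.05, 2)}

print(price_for_risk(0.18, base_rent=1500.0))
```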
Stephen Piepgrass:
Thanks, Dave. That's a great background. And you may have answered at least some of this question, but I think it's still one worth asking and getting Ron's perspective on. And that is, Ron, why are companies interested in using machine learning algorithms or artificial intelligence, which I now know are two different things, in the background screening process?
Ron Raether:
Actually, Dave's response gives me a great springboard to talk a little bit about the difference between machine learning and artificial intelligence. Although I will say, Dave, that I think South Park has the best explanation of ChatGPT and how to use it or not use it.
But what Dave really described was machine learning. So with machine learning, what you're doing is you're taking set variables that are traditionally used within the background screening process, for example, to determine whether a candidate is the appropriate person for an apartment or for a job, and then having the software learn from that data.
In other words, we predicted that based on variable X, a tenant would be more likely to default. So they're going back and looking at that data to see if in fact there is a correlation between X and Y, whether having been evicted previously relates to the likelihood they're going to default on their rental payment this go-around.
AI answers the question of, more broadly, what will be relevant to somebody as a job applicant or somebody as my tenant? We're not pre-judging whether that's going to be an eviction or prior payments. The AI solution would actually come up with what it thinks are the variables that will impact whether somebody's going to be a good tenant or good job applicant.
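As a loose analogy for Ron's distinction, the sketch below hands a model a set of candidate variables, including some a human probably would not have pre-selected, and then asks which ones the model actually found useful. The column names, random data, and choice of a random forest are all fabricated for illustration; real systems differ considerably.

```python
# Loose, hypothetical analogy: instead of humans fixing the variables up
# front, give the model every candidate column and let it surface which
# ones matter. All data here are random; only the first two columns are
# wired to the toy outcome.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
columns = ["prior_eviction", "rent_to_income", "pet_owner", "application_hour"]
X = rng.random((200, len(columns)))
# Toy ground truth: only the first two columns actually drive "default".
y = ((X[:, 0] > 0.7) | (X[:, 1] > 0.8)).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
for name, importance in sorted(zip(columns, model.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name:18s} {importance:.2f}")
```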
I think we use a lot of machine learning right now, not a lot of AI currently, in the background screening process. And part of the reason is that the technology that's been in play, and I think the issues that have been most pressing for our clients, have been more around regulatory and litigation enforcement: trying to improve matching, but likewise wanting to serve their industries and doing so in a way that's going to be trusted by that employer or by that property manager.
Our clients want to deliver accurate information, right? That's what their customers want. They want to put the right person in the job. They want to put the right tenant in that property. They want that property to have a certain occupancy rate and a certain profitability, and they want their data and their background screening provider to be able to deliver on that.
But it takes time for industries to get comfortable with the use of technology. What we've found, though, and I think Dave's going to talk a little bit more about this, is that if you can remove some of the inherent bias in the programming process, if you begin to trust and feed the right data into the systems, the overall process is improved. And that's really what our clients and ultimately their customers want.
The one thing that I'll continue to try to inform people on is that those systems can only work as well as the input data or the source data. And by that, I mean if courts or if other sources have inaccurate information, if they're transposing middle name and last name, or they're transposing social security numbers or entering the wrong dates of birth, they're feeding inaccurate information into the system, and that has an effect.
What's going to be really interesting is to see whether our clients in the background screening industry and the further adoption of artificial intelligence specifically is going to allow us to have a feedback loop to the courts and other sources to help them improve their data, and thus overall enhance the background screening process.
Stephen Piepgrass:
Thanks, Ron.
Dave, maybe you could talk a little bit more about the regulatory risks that our clients face. Obviously, that's a lot of the reason why many of them are listening to this podcast today, particularly in the background screening world.
Dave Gettings:
I'd love to say there are no regulatory risks, but unfortunately there are, and they're really developing as we speak. We've had regulatory investigations where even the regulators were developing their theories of what the client potentially did that was risky throughout the entire investigation. So it's a little difficult to put a fine point on exactly what the risks are, except to say that regulators are focusing on a lot of different aspects and are developing their theories as they go.
With respect to some of the areas we've seen to date, I would say there have generally been two main areas of focus from a number of regulators. The first is whether the technology-assisted screening is leading to inaccurate results, and whether that's a violation of state law, of UDAAP requirements, or even of federal law, for example, the FCRA. How can it potentially lead to inaccurate results?
Well, Ron obviously had a great point: if there's bad data coming in, that can potentially lead to bad data going out. But besides that, for aspects like matching, whether the matching logic algorithms are leading to inaccurate results for individuals with common names, because they're not adequately adjusting based on the commonness of a name, especially given some of the limited items of personally identifiable information that are available in public records.
Or with respect to accuracy, whether scoring algorithms are leading to artificially low scores for certain tenants who may have otherwise been model tenants in the past. If an algorithm is focused really heavily on debt-to-rent ratios, even though the consumer has a really high income and is able to pay that rent, maybe the regulator thinks the algorithm is not treating the consumer's ability to pay as a strong enough factor.
Another area that we've seen regulators start focusing on increasingly, and it's really a developing area of the law, especially for consumer reporting agencies and companies in the tenant screening space, is whether technology-assisted screening is leading to a disparate impact on certain protected classes.
And that can even apply potentially, or at least regulators are focusing on it, even when it's clear that the consumer reporting agency or the other company in the screening space has no intention whatsoever of creating a disparate impact on a certain protected class.
For example, regulators may be looking at whether the algorithms have a bias built into them, subjectively or objectively, that could negatively and unfairly impact an entire class of people. Regulators are sort of learning as they go what those algorithms look like, what potential factors they consider, and how those factors can potentially impact protected classes in ways that companies frankly never envisioned.
Stephen Piepgrass:
Thanks, Dave.
Speaking of regulators, I know we've talked a little bit about what they're interested in and focused on. Any guidance that we have from regulators so far on the use of algorithms, AI, in background screening or more generally?
Ron Raether:
Really, no. And part of what's going on right now in the regulatory space is that we have individuals who believe there's something broken and perceive that the problem is somehow focused on consumer reporting agencies and the background screening industry, without any objective facts to support that.
So for example, we've heard from the CFPB and to a certain extent the FTC about what Dave was saying, that somehow there's a disparate impact with respect to these decisioning algorithms, the algorithms that are deciding whether to rent a property to somebody with conditions or not conditions.
They claim that there's some data, some analysis, to back up that position. We've asked for that information via a FOIA request and have been denied access to it. I've not seen the data, and personally, based on my experience working with this industry for 20-plus years, I'm also not convinced that these decisioning algorithms actually have a disparate impact.
What we try to do and what I think would be more productive is if there was a conversation with the regulators about working towards solving what I think is the common issue, and the common issue is to have a system that is going to put qualified people in jobs, a system that's going to put qualified people in properties.
The misperception is that tenant screening or background screening companies don't care, and I think that's a mistake by the regulators. But as a consequence of what I think to be some of this misinformation, we have, for example, Senator Sherrod Brown asking the CFPB to further investigate the tenant screening industry and the algorithms that are used.
We have papers coming out of the CFPB and reports that are suggesting that there is something going on in the industry, but in my interactions with the regulators, what you have is this inexplicable tension between the suggestion that somehow humans being involved in the process is going to improve the accuracy or reduce the inherent or potential bias, unconscious bias or conscious bias within the systems.
I don't think any of that's been frankly established by the data. It's going to be interesting to see how continued interactions with the regulators on these points play out. But as a consequence, we haven't gotten any real guidance from the regulators as to what's permissible, what's not permissible, beyond just these broad statements that we should avoid disparate impact.
Or that the matching should somehow be even more accurate than what's already being provided by the existing tools, even though, for example, the Fair Credit Reporting Act only requires reasonable procedures to assure maximum possible accuracy. It seems to me the regulators continue to advocate for perfection, no inaccuracy at all.
Dave Gettings:
It's a good point, Ron, on the maximum possible accuracy. Something we've also seen in FOIA requests, to Ron's point, is that there is cooperation between the regulators and the plaintiff's bar, and so it sort of becomes a bit of a self-licking ice cream cone, so to speak, where the plaintiff's bar is pushing regulators to look into issues and take action.
The regulators look into issues, take action, issue rules, have consent orders, and then the plaintiff's bar cites those rules and consent orders as the standard of what is reasonable procedures to assure maximum possible accuracy.
From a defense perspective and a defense bar perspective, clients are really having to push back on both plaintiff's counsel and regulators working together.
Stephen Piepgrass:
Those are great points, and I think the scrutiny from the plaintiff's bar and from regulators working together makes this last question a really important one. And that is advice on best practices, especially in a situation where we don't have a lot of guidance, but we know close scrutiny is occurring.
If you're a company thinking about using AI or machine learning, what are some of the practices you should consider employing? Ron, maybe I'll let you address that, and Dave, chime in as well if you'd like.
Ron Raether:
So there are a couple of different steps that we would suggest companies engage in. The first is to have good documentation around your models: how they were built, the steps that went into them, good empirical evidence in terms of the testing that was done. And critical to this would be having a third party come in and test those models, independent of the data scientists within the company, for things like unconscious bias or disparate impact.
Testing whether the logic and the rules that the machine learning product was built on, or as we get to artificial intelligence, some of those basic rules that we're providing to the software are meeting the concepts and the objectives that again, I think the industry shares with the regulators.
Accuracy, being able to put qualified people in jobs or apartments regardless of their ethnicity, race, sexual orientation, or other potentially protected categories. But being able to then present objective, empirically based findings to either the regulator or, if the regulator's unwilling to be objective and demonstrate a willingness to listen, ultimately to a judge or a jury.
And I think that's what's going to make any practice more effective for a consumer reporting agency or a background screening company.
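One concrete screen an independent tester might run, offered here as a hypothetical sketch rather than a compliance recommendation, is the commonly cited "four-fifths" adverse impact ratio, which compares approval rates across groups. The counts below are fabricated, and real model validation would go much further than a single ratio.

```python
# Hypothetical screen a third-party tester might run: the "four-fifths"
# adverse impact ratio, comparing approval rates across two groups.
# Counts are fabricated for illustration.
def adverse_impact_ratio(approved_a: int, total_a: int,
                         approved_b: int, total_b: int) -> float:
    """Ratio of the lower group's approval rate to the higher group's."""
    rate_a = approved_a / total_a
    rate_b = approved_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

ratio = adverse_impact_ratio(approved_a=720, total_a=1000,
                             approved_b=560, total_b=1000)
print(f"Adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the commonly cited four-fifths threshold
    print("Flag for further review and documentation.")
```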
Dave Gettings:
The only other item I would add is especially for consumer reporting agencies, it's always helpful for the customers to have input into how they're going to customize their own models to work for their own property or employment decisions. Because we've seen regulators take the position that the consumer reporting agency is actually the one making the decision on a rental or actually the one making a decision on an employment application because it's their models being utilized.
And as a result, the consumer reporting agency is the entity that's creating the disparate impact. And the more the consumer reporting agency can have the customer involved in that process, the customer creating their criteria by which applicants are measured, the more the consumer reporting agency will be protected from that type of claim.
Stephen Piepgrass:
Great.
Well, Dave and Ron, thank you very much. I really enjoyed having you on the podcast this afternoon. I know our listeners very much appreciated your insights.
And I want to thank our audience for joining us as well. As always, we appreciate you listening, and don't hesitate to reach out to the Troutman Pepper team if we can help.
Please make sure to subscribe to the podcast at regulatoryoversight.com, and today's podcast as well, FCRA Focus, through Apple Podcasts, Google Play, Stitcher, or whatever platform you use.
We look forward to having you join us next time. Thank you.
Copyright, Troutman Pepper Hamilton Sanders LLP. These recorded materials are designed for educational purposes only. This podcast is not legal advice and does not create an attorney-client relationship. The views and opinions expressed in this podcast are solely those of the individual participants. Troutman Pepper does not make any representations or warranties, express or implied, regarding the contents of this podcast. Information on previous case results does not guarantee a similar future result. Users of this podcast may save and use the podcast only for personal or other non-commercial, educational purposes. No other use, including, without limitation, reproduction, retransmission or editing of this podcast may be made without the prior written permission of Troutman Pepper. If you have any questions, please contact us at troutman.com.