In this episode of Regulatory Oversight, host Ashley Taylor is joined by Colorado Senate Majority Leader Robert Rodriguez and Troutman Pepper Locke Privacy + Cyber partner David Stauss for an in‑depth discussion of the Colorado AI Act—widely viewed as the nation's first comprehensive legislative framework focused on high‑risk AI systems and algorithmic discrimination. Senator Rodriguez explains how Colorado's work on consumer privacy laid the groundwork for AI regulation and walks through the origins, goals, and core provisions of the Act, including its emphasis on transparency, risk assessments, and protecting consumers in sectors such as employment, housing, health care, education, finance, and government services.
Stauss situates the Colorado AI Act within the rapidly evolving state, federal, and international AI landscape, describing how lawmakers have sought to avoid a "Wild West" of conflicting state requirements by coordinating through a multi-state work group, and how that effort mirrors the development of state privacy laws. The conversation then turns to the practical questions companies are asking—how to approach and structure AI risk assessments, the role of attorney-client privilege, how state attorneys general are likely to enforce these laws, and how to navigate growing tensions between state innovation and federal preemption efforts, including reported moves by the Trump administration to curb state AI regulations.
Regulatory Oversight Podcast — AI, Algorithms, and Accountability: Unpacking the Colorado AI Act With Senator Rodriguez
Host: Ashley Taylor
Guests: David Stauss and Senator Robert Rodriguez
Aired: January 8, 2026
Ashley Taylor (00:04):
Welcome to another episode of Regulatory Oversight, a podcast dedicated to delivering expert analysis on the latest developments shaping the regulatory landscape. I'm Ashley Taylor, one of the hosts of the podcast, the co-leader of our firm's State Attorneys General Team, and a member of our Regulatory Investigations, Strategy + Enforcement Practice. This podcast features insights from members of our practice group, including its nationally ranked state attorney general practice, as well as guest commentary from business leaders, regulatory experts, and current and former government officials. We cover a wide range of topics affecting businesses operating in highly regulated areas. Before I begin, I want to encourage all of our listeners to visit and subscribe to our blog at regulatoryoversight.com to stay current on the latest regulatory news. Today, I'm joined by Colorado State Senator Robert Rodriguez and my colleague, Dave Stauss, to discuss the Colorado AI Act, widely described as the nation's first legislative framework governing high-risk AI systems.
We'll unpack the law's origins, core obligations, and its place in the emerging state and federal AI regulatory landscape, and discuss practical compliance steps, real-world challenges for businesses, and what the future of AI governance may look like. Senator Rodriguez was first elected to the Colorado Senate in 2018 and currently serves as Senate majority leader. During the 2024 legislative session, he was a prime sponsor of the Colorado AI Act. Dave Stauss is a member of our firm's Privacy + Cyber team, advising clients on existing and emerging state, federal, and international privacy, AI, and information laws and associated regulations. Senator Rodriguez and Dave, thank you both for joining me today. I've been looking forward to our conversation. Thanks for having us.
Robert Rodriguez (01:53):
Thank you for having us.
Ashley Taylor (01:54):
Gentlemen, why don't we start with you all discussing the significance of AI regulation in today's technological landscape?
Robert Rodriguez (02:02):
Yeah, I think a lot of this, at least for me coming into my eighth year of legislating, started with the consumer data privacy policies that we passed in Colorado. I sponsored a bill in 2021, which was the third comprehensive consumer privacy law in the nation, and I think we're at 27 or 28 states now that have a privacy framework. Some of the arguments we're seeing now are the same arguments we saw then: that we need federal-level legislation so states aren't hodgepodging it. From that, AI became the new evolving thing. AI has been around for a long, long time, but with the emergence of ChatGPT and more uses of it, it became a thing for some of us in the privacy space to start looking at regulating AI. Many of us in this space felt that we were behind the eight ball on privacy regulations, and the federal government still hasn't done anything nationally on privacy, so we needed to take the step of looking at regulating AI. This is new, evolving, and changing so fast, and we wanted to provide some good, transparent consumer protections in law for Colorado citizens.
David Stauss (03:15):
Yeah, and I can jump in, Senator. I think it's important to set the table a little bit. The Colorado AI Act is what I would colloquially call an algorithmic discrimination law. At its core, it's about algorithmic discrimination. And there's a piece of it that's about notice, notice to users who are interacting with AI, which is something that Senator Rodriguez felt passionately about during the drafting of the law. Essentially, if you can't tell it's AI on the other end, you've got to tell people it's AI on the other end. But as you dig into what an AI law is, and we saw this last year tracking all the different bills and regulations, it's a ton of different topics. It's election interference, like deepfakes, the use of AI to make it look like your opponent said something they did not say.
Health laws, which I know, Senator Rodriguez, is probably something you're very passionate about, and the legislature is as well: the use of AI with respect to health and prior authorization, and who gets to decide, the computer versus the human, whether somebody gets a surgery, for example. And even chat features, like we mentioned before with respect to the Colorado AI Act, but that's become even more nuanced now with respect to companion chatbots. California tried to pass a companion chatbot law this past year, and I think it got vetoed by the governor. And even pricing algorithms as well, the use of algorithms to do pricing. So it's almost too multifaceted; it's almost all over the place right now in the different types of laws and regulations that are out there, and that's only to scratch the surface. But as the Senator mentioned, I think we're seeing the same exact thing.
And I think it's apropos to have Senator Rodriguez talk about this because the pattern has followed exactly how it followed with privacy. With privacy, you saw a few state lawmakers who were interested in bridging that gap between federal inactivity and the fact that something needed to be done. And now you see a lot of those same lawmakers, like Senator Rodriguez and Senator Maroney in Connecticut, coming back and trying to do something in the AI space. That makes sense because, again, the federal government has been unwilling to take steps. And even more so, before the current administration, there was at least a Biden executive order that talked about things like algorithmic discrimination, and that's been rolled back under the Trump administration. I'm sure we'll get into it a bunch in the conversation on what the Trump administration's done. But in some respects, it feels like the band is back together again, doing AI laws as opposed to doing privacy laws.
And a lot of people have just moved on from doing privacy regulation to doing AI regulation.
Ashley Taylor (05:41):
Well, let's drill down, if we could, into the Colorado AI Act, and let's talk about its specific purpose and a few of the key provisions since this is clearly becoming a framework for how other states are thinking about this issue. So let's start there.
Robert Rodriguez (05:56):
So I mean, I think the bill, and it started through the multi-state group that we came up with when we dealt with privacy, was trying to set up streamlined definitions and uses of a policy that could be utilized. A lot of the work we did in drafting it drew on the EU AI Act. It took some of the definitions from there, trying to come up with uniform definitions that could work across the country as this built up. The core of the bill is basically a disclosure and transparency bill with some risk-based assessments and a duty to do no harm from the developers and deployers. It requires developers, and defines who a developer is, to do risk assessments on their own tools and to disclose information that was used to train their AI for consequential decisions.
The bill's key focus is on healthcare, housing, finance, education, and government services. Those are the main buckets it addresses. And it's basically about making a decision that affects somebody's life: whether they get a job, whether they get a loan, whether they get housing, whether they get assessed for acceptance to a college, as well as government services like law enforcement and Medicaid. It really requires them to give some disclosures. A deployer, somebody who buys and uses these systems, is required to get information about how the system was trained and the risks and possible harms that could come from it, so that the deployer can provide that information to the front-end consumer: so the consumer knows that a system made a decision about them, hopefully gets the reason for the decision, and gets the opportunity to correct or appeal the decision if some of the information was false or not accurate.
It has AG-only enforcement; it doesn't have a private right of action. And part of it for us, when we were doing the policy, was that while we were trying to regulate and create transparency, we were also trying not to stifle innovation in a new, innovative technology. So the bill has provisions that say you get a rebuttable presumption and an affirmative defense as long as you notify the AG and, if you catch an error or some type of harm being created, you found it and fixed it. For me, at the core of this bill, and this is my struggle with industry having so much pushback on it, it's really asking them to try. I think people know these systems are going to make errors. It's new, it's evolving, it's still learning; it's bad data in, bad data out.
And as it advances, hopefully we get to a point where we incorporate more liability, but right now we're just trying to get them moving in a direction where they're looking at their systems and trying to make thoughtful decisions about how the algorithms make decisions that could affect somebody's life.
Ashley Taylor (08:41):
Well, Senator Rodriguez, I wanted to pick up on something you said at the beginning of your comments. You mentioned a multi-state group. Would you share with our listeners the multi-state structure that you operate within? And Dave, I want you to pick up on something Senator Rodriguez said around industry. One of the first questions that someone in industry will have relates to a risk assessment and the work product related to that risk assessment, right? Is it protected? How can it be used? Can it be used against you? Must it be disclosed in litigation? Are those issues, Dave, addressed in the statute, or have those issues been addressed in real time on the ground?
Robert Rodriguez (09:22):
Yeah. So as I alluded to earlier, in 2020, 2021, when I was doing consumer data privacy, other legislators across the country reached out to me: "Hey, I saw you passed a bill and I'd love to learn how you did it." It was Senator Maroney out of Connecticut who reached out to me after my bill had passed, and he's like, "I've been trying to run one here for a couple years. How do we get it done?" From that started a group of us in other states. He passed his bill a year later, other states started following suit, and a lot of it became us interacting with each other: What did I deal with when I passed my bill? How could it be improved? What pitfalls did you see? How did you use these definitions, and why?
And from that, we got a group of legislators that I think in 2022 started talking about AI, and we wanted to get the states together. We have a multi-state group now of probably about 30 legislators across the country, bipartisan, Republicans and Democrats from multiple states that have either looked at this policy or are trying to. We started doing a deep dive into AI and how to do a policy, where we had group meetings with think tanks, educators, and industry people coming to talk to us to teach us the basics of AI: how it's used, the history of it, and some of the problems they foresee, where it's either not ready to use yet or they have concerns that it's not ready. And from that, we still continue to meet to this day.
It was moderated originally by the Future of Privacy Forum. Now I think Senator Maroney has it being moderated by Princeton; some students out of Princeton are helping moderate with some of the professors and facilitators there. We have a meeting every two weeks now where we still talk about AI. We're currently talking about chatbots, data centers, AI policy, pricing models. We just get into all the specific buckets as we see legislation. Years ago, the easy low-hanging fruit obviously was the deepfakes, the political deepfakes. That's evolved into algorithmic pricing, and now the hot topic of this year, I think, is going to be chatbots. One of the biggest ones you're going to see, I think, in Colorado is a couple of bills on AI and healthcare, whether it's therapy and treatment and chatbots being used for them and making decisions on healthcare problems.
But as it evolves, many people miss this: the Colorado AI Act was always meant to be some type of disclosure bill with some type of risk assessments, but it wasn't intended to be an all-encompassing bill. You need an employment bill specific to that industry. You need health and finance bills specific to those industries. It was meant to create a chassis of assessment and transparency policy to build off of, industry by industry.
David Stauss (12:00):
Maybe I could pick up on that note and then circle back, Ashley, to the question you asked me about risk assessments. It's almost like a miracle that this country has the privacy laws that it does and that they actually manage to be interoperable. I mean, if you circle back five, six years ago when California passed the CCPA, it was the wild west. You had states running different types of privacy laws. Oklahoma was doing a full opt-in one; Collin Walke was running that one. Senator Rodriguez's Colorado became the third state to pass a consumer data privacy law, using the Washington Privacy Act model, which Virginia had used, but even Colorado was different from Virginia. And you had other states just going about it with different definitions and different concepts. The fact that we got some level of commonality and interoperability on the privacy level was quite miraculous when you look back on it, because it was very diffuse, and you had lobbyists running around the country just trying to say, "Please don't break the system. Please don't break the system."
And so the concept at core behind the AI work group was, "Hey, let's not do it the hard way again. Let's get the people together who care about this topic, have them get educated on these issues, and find commonality in things like definitions. What does AI mean at core? What should be the definition of AI, which is a huge battle when you're writing these laws?" Over the last two or three legislative cycles that's been solidified to a large extent, whether we're talking about AI or generative AI. But it was really this grand opportunity for industry and civil society to educate lawmakers, and for lawmakers to educate each other, almost as the federal government would work if it actually functioned properly, where people across state lines get educated and pass laws. So anyway, it's been much maligned.
The AI work group has been much maligned in some respects, but I always thought it was this great forum for people to have an opportunity to learn and be educated. So turning the page to the risk assessment, to your question about that: there are requirements, as the Senator said, in the Colorado AI Act to conduct risk assessments. This is nothing new to privacy professionals. GDPR requires data protection impact assessments. Even the Colorado Privacy Act requires data protection assessments. The Connecticut kids bill, SB 3, requires risk assessments with respect to children's privacy issues. Senator Rodriguez's SB 41, the Colorado kids amendment that was adapted from Connecticut, also requires risk assessments. So this is a concept that's very well known to privacy professionals and AI professionals and the like. There is a way in which the Attorney General's office can request the risk assessments, but they don't have to be submitted or filed with the Attorney General's office.
That was actually a specific issue that got raised along the way; there was a suggestion that maybe all the risk assessments would have to get filed to prove that you'd done them. But there are practical realities. Does the AG and the state have the server space to actually accept so many risk assessments? What is the dollar figure to receive all of that information? And what's the manpower to review risk assessments? Would it just be some sort of process without a point at the end of the day that costs the state a lot of money? So that was shelved, and the concept was basically that you have to do them and the AG's office can request them, which is the exact same process you'll find under the Colorado Privacy Act. You can maintain attorney-client privilege; there's a provision in the statute that says you do not waive privilege if you submit them to the AG's office.
I know companies, when they're addressing this, also, just practically speaking, will do two different types of risk assessments. One is the public-facing risk assessment and one is the attorney-client privileged risk assessment. So there are ways of trying to maintain privilege while still taking that hard look at your processes and making sure you're not developing evidence that could be used against you, so to speak. So I do think there was a thoughtful process in drafting the AI Act that really tried to maintain that balance, making sure you don't have to put something like that on paper. And you see that in litigation with subsequent-remedial-measures-type issues, where there's not a duty to disclose that type of stuff.
Ashley Taylor (16:10):
Dave, I want to pick up on your comments about what appears to be the widening division between state and federal regulators in this area. President Trump is apparently, according to news reports, considering an executive order to block state AI laws as the White House pushes for a federal framework on the technology. According to reports from The Hill, the order would direct Attorney General Pam Bondi to establish a task force focused on challenging state AI measures and would seek to restrict some federal funds to states that pass laws deemed "onerous." It also directs the Federal Trade Commission to issue a policy statement on how a law prohibiting unfair and deceptive practices would apply to AI models and how it could potentially preempt state AI laws. I wonder if you all could comment on what I hear companies talking about as the tension between complying with and understanding the crosscurrents of state and federal regulators in this area.
Robert Rodriguez (17:16):
I think for me, and I bring a lot of this back to privacy, many of us have done it. And at the same time, it was the same argument, that we can't have a hodgepodge of laws, which I find humorous, because as we were the third state and other states have done it, industry tends to go with the stronger rule and then just applies it across the board. Many of us probably saw the cookie announcements and stuff on the internet while browsing before it was even a law in our state, whether we noticed or not, because that's just how it flows and it's easier to do. And I think any of us in the privacy and/or AI space would love to see some kind of federal law to build off of, hopefully a floor versus a ceiling. But you see that fight going on with privacy: there have been multiple iterations of privacy laws that they've tried to pass in the federal government that can never get through, because they always get caught up on whether there's going to be preemption.
Are we going to override the other states? Which gets more complicated, because California's probably not going to let them override the laws and the work they've done for years. And I see a similar problem with AI: I think they're going to want states to do nothing instead of something. And I think if they did a moratorium, there'd be no incentive for them to do anything then, unless there's some kind of major harm. As we talked about before we got on this call, right now they're still doing children's COPPA privacy laws that they're trying to update from decades ago, from before we had the internet, and they still can't come to agreement on those. But I would hope any type of moratorium doesn't apply unless they come up with some kind of framework, because, and this comes from executive orders from the president and from Senate studies on AI, before Trump and during Trump, you need a data privacy law before you can get into AI. A lot of this stuff gets into the data: how the data's collected, how it's being used, whether you can correct it or change it.
I don't know how you do an AI law without having those tools, unless you're just doing some type of disclosure and/or ban. I think the federal government needs to take a step on it, but I just don't know if there's the will to take a good step, with history showing that they haven't even been able to pass a privacy law. And with an AI moratorium, whether the federal one in the one large bill or this one, it's like, what are we talking about as far as onerous? My bill requires a risk assessment. Is that onerous? Is it too much to ask? Is every administration going to define what's onerous if they go down this lane with the AGs? I don't know how industry would like that either, because it'll change every time you get a new administration.
So unless you get something written on paper, I don't think this will be useful, and I don't think states would like being stepped over. And I think legislators in Congress overall probably have mixed feelings about it too. They just got a lot of pressure to do something, and this, for them, seems like something, which to me is doing nothing, and I don't understand it myself.
David Stauss (20:12):
Yeah. At core, the issue comes down to innovation versus regulation. We're seeing that play out in the EU with the AI Act and the movement now to pull back on some of those requirements. The Biden administration was much more regulation friendly; the Trump administration is obviously much more innovation friendly. But like most things in life, I think it's a much more complicated discussion. We do have parallels in the privacy space. For example, kids, children's privacy, which we've mentioned a few times. I said a few minutes ago that we were really lucky to get privacy done correctly, because the laws that are out there are interoperable. Kids is not the same. Kids is a complete disaster with the laws that are out there, the social media laws and the children's privacy laws.
There is zero interoperability. There are instances, in particular with what Arkansas did last year in passing a kids law, where it is one or the other: you can pick to comply with one state or another state, where you have to have state-specific things. And so that's obviously the fear of the federal government, that the states, left to their own devices, will come up with stuff that business can't deal with. That was the idea of the multi-state work group, to try to find that commonality. Now, the flip side, though, is when you start thinking about things like data brokers, right? Data brokers were heavily unregulated, and people find out, "Hey, my data's being sold left and right." That's a real big problem for a lot of people. And now the data broker industry is being heavily regulated, to the point where I think it's going to drive many out of business.
So it's almost like, if you wait too long to regulate, you can drive entire sectors out of business. I think the answer at the end of the day is thoughtful regulation. Let me give you an example. There was a conference I attended a couple months ago where the keynote speaker said he went to a high school and asked the kids in the audience, "Raise your hand if you have an AI friend." And he said a third of the high school students in the audience raised their hands and said they had an AI friend. That is a recipe for disaster. It is, at core, an absolute recipe for disaster. Every parent out there can recognize that's a problem. What is that AI friend saying to my son or my daughter? What is the conversation? I want to know more. Where am I involved?
All those types of things. Thoughtful regulation in that space is going to be something that gets 95% of the people in the room saying, "We've got to do it." And industry is going to say they want to do it as well. So I think at core, when we get into this discussion of innovation versus regulation, it's not regulation for the sake of regulation. It is regulation on very specific, discrete issues that address concrete harms that we all recognize need to be hemmed in, to make sure that the innovation can happen properly without harming people at the end of the day.
Ashley Taylor (23:00):
Yeah. An observation from my perspective as a practitioner, as I listen to this conversation, is that in the absence of that thoughtful regulatory process where you bring all the stakeholders to the table, what I see developing is a regulatory framework in large part informed by enforcement settlements, where companies are forced to divine from these public settlements the issues that drive regulators, which may be, to your point, Dave, slightly different than the statute. And you end up with compliance efforts that are responding to enforcement matters as driven by the enforcer rather than the drafters of the legislation, like Senator Rodriguez. The difference between the two creates what we see at a practical level as potential conflicts, not because anybody's doing anything wrong, but because people have different priorities. You have one priority from an enforcer in one state, which is different than the drafter of the legislation in another state.
And so that gap is being filled, but it's being filled on an ad hoc basis and driven by the particular facts of an enforcement matter rather than a thoughtful discussion of policy at a higher level.
David Stauss (24:22):
I absolutely agree. I mean, we're talking primary laws at the end of the day: the Colorado AI Act, and other states that have passed things, like California and Utah and Maine, which have passed AI bills. But you still have UDAP laws, state UDAP laws, which I know, Ashley, you deal with a lot, that apply to these things, and they talk about concepts like fairness that can be applied differently in a red state versus a blue state. The fundamental concept, the regulator's idea of what's fair. We came up against this in the kids space with the concept of the best interest of a child. Well, I'll tell you what, best interest of a child means a heck of a lot different thing in California than it means in Kentucky. I mean, it just does.
And that's not to say that one's right and one's wrong. It just fundamentally means different things depending upon the beliefs of the people. And we are a large country, and it is very hard for people to operate across this incredibly large country with exactly what you suggested, that sort of regulatory enforcement action defining what means what.
Ashley Taylor (25:27):
This may surprise some of our listeners, but state AGs are not only enforcers; they are regulators, and they play an important role in the legislative process as well. They have what are known as state attorney general legislative packages, and they are asked to opine on significant pieces of legislation in every state. So I've always been a strong advocate for allowing an AG to discuss significant bills, to have both input and a public position, so that they are stating, "Here's how we view the bill. Here's what we view as a ball and a strike under the bill," so that companies can understand, even during the regulatory process, how the enforcer views the bill. And I think that's particularly important, Senator Rodriguez, in the multi-state context you were discussing. Since states have a similar multi-state enforcement mechanism, it would be, I think, beneficial to all parties if that multi-state enforcement protocol were adapted to the legislative process as well, so that, frankly, states could learn from each other during that process.
I think there'd be a lot of similarities, and it would help to smooth out some of the differences, which would benefit everyone.
Robert Rodriguez (26:41):
I think that's correct. At least my attorney general is very consumer-advocate friendly and also worked in the privacy space under the Obama administration, and he was very useful in his rulemaking. And I do know for a fact that they meet and work with a lot of the other AGs across the country, at least in the privacy space, not as much on AI because it hasn't fully evolved yet, but they'll get into it. Our AG was very engaged with the rent algorithm bill here, because he was also part of that lawsuit over rent algorithms that was going on. While the bill got vetoed, he still stood up for a law like that. And I know that he worked with us and many other states on the privacy things as tweaks come up. It's like, "Hey, we have this problem here. What are you seeing?
How are you enforcing it, and how do you tweak it?" And they'll either come to us as legislators to change it, or he's got some pretty broad rulemaking authority under the Privacy Act that he can use to tweak it himself.
Ashley Taylor (27:32):
Well, in our last couple of minutes, I'd like both of you to help us look to the future. What are you expecting in terms of additional AI bills in Colorado or in other states? And what is the issue du jour that our listeners should be thinking about in the AI context?
Robert Rodriguez (27:51):
I think chatbots are going to be a very big topic in 2026. As you're seeing lawsuits and harms to children, I think chatbots are a big thing. There are a lot of discussions in the healthcare/psychotherapy spaces about chatbots doing treatment: whether they should be licensed, whether they should be approved by the FDA because they're a product. I think there are a lot of different policies coming out in that space. Obviously, chatbots with children is going to be a big one. The pricing, the surge pricing and the algorithmic pricing models, I think those are bills that are still going to be looked at in Colorado, as well as the rent algorithms. Anything with any type of AI pricing, I think, is something that will be looked at or regulated. But the ones I've heard the most about are probably going to be the chatbots and the healthcare.
David Stauss (28:38):
Yeah. And I'd love to hear, Senator Rodriguez, what you think is going to happen in Colorado on the AI Act before we go. But I would echo the chatbots. There are a couple different flavors. You've got your website where you're interacting with consumers and it's AI; you've got to tell people it's AI. That, at its core, is the law Maine passed. And then you've got things like companion chatbots, where AI is your friend, your girlfriend, your boyfriend. When I give speeches on this, I typically say, "Hey, if you're building a chatbot that's going to be somebody's girlfriend or boyfriend, you're going to get regulated. That's life." That's a high-risk activity, and you're going to get regulated in it. So I think we'll see those; New York passed one, as I mentioned, last year. And the pricing ones, I mean, there were tons of those bills floating around last year.
New York passed the disclosure one, where you have to disclose whether personal data was used in coming up with your price. I think that was 3008; it got put into the omnibus bill. So I think you might see more along those lines. I think lawmakers, like Senator Rodriguez, look around at what other lawmakers have been able to get across the finish line, and they pick those things and say, "Hey, maybe my state too." But I do want to be quiet for a second, because I want to hear Senator Rodriguez's latest update on what might happen in Colorado when you all jump back into session in, what, six weeks or so, I guess?
Robert Rodriguez (30:01):
It feels like that. So, the bill was signed in 2024, the first in the nation, and I was surprised it got signed; whether that was a gift or a curse still remains to be seen for me. But I think we took a stab at it with a task force, and then early in the session last year we tried to work on a policy, Senate Bill 25-318, where we tried to do some tweaks to address some things. While this bill was stakeholdered pretty aggressively with the tech industry, we probably didn't do the work that we needed for the small mom-and-pop startup industries. And obviously the newer group that's gotten into this fight has been the venture capitalists, which is a whole different beast of a conversation. We made an attempt at that, which we couldn't get to the finish line. And then we made an attempt in our special session a month or two ago, which was a five, six day session, and tried to gut the bill down to just pure disclosures and a right to correct, getting rid of the risk assessments and all the duty of care, and just said, "Hey, just disclose it and let them correct it."
Every one of those has been a move forward, with things that I've learned as a policymaker. Right now, from conversations I had with the governor going into that special session, I think the civil society groups and some of the tech industry groups agreed that maybe we need to bring in a third-party facilitator. So the governor has hired a facilitator, a moderator, to negotiate with some of the tech groups, the VC groups, the government groups, education, healthcare, some of the buckets we weren't able to fill over the last couple of years, to get input from them and from deployers, more information on what's burdensome for them to do, because a small mom-and-pop probably isn't going to have the bandwidth to do a mass amount of risk assessments. And if they modify a system, then it gets wonky in the policy if you're a deployer, because of what happens when you change it.
But right now they're meeting; they meet once a week for two, two and a half hours. We're going to see where that goes. During the special session, I extended the implementation date from February 2026 to June 30 of 2026 to give us another session to see if we can figure it out. Some of the industry wasn't ready to implement it in February; some were, some weren't. So we gave the extension. I'm hopeful. There are a few tweaks that we need to clean up that we've identified over the years. We've had conversations with the banks on fraud detection, making sure the intent wasn't to prevent them from doing their fraud work. So that's something we've got to tweak. Some of the reporting requirements, and the AG maybe needs a little bit broader rulemaking authority to give more guidance.
He can define what the risk assessments and such are. But if I got really lucky, I wouldn't have to change anything in Senate Bill 205, though I do know there are a few things I have to change, like the fraud information systems for the financial sector, which need to be clarified and probably narrowed down. I think one of the big things we found with industry, and like I said, God bless you lawyers, you all have an opinion, is that many lawyers gave people the idea that my bill applied to them. There was somebody in our task force who said, "I do photographs for medical equipment on my website. My lawyer says this bill encompasses me because it says healthcare." And we're like, "No, because you're not discriminating." But somebody charged them to say that there's exposure there. So clarifying the intent around consequential decisions and bias and disparate impact and discrimination just needs to be fine-tuned, so people know what it applies to and what it's intended for.
That's what I think we need to do.
Ashley Taylor (33:30):
Well, I want to thank both of you all for joining us today, and I really appreciate you taking the time to discuss these issues with us. And I want to thank our listeners for tuning in as well. Remember to subscribe to this podcast via Apple Podcast, Google Play, Stitcher, or whatever platform you use. And we look forward to having you join us next time. Thanks again.
Copyright, Troutman Pepper Locke LLP. These recorded materials are designed for educational purposes only. This podcast is not legal advice and does not create an attorney-client relationship. The views and opinions expressed in this podcast are solely those of the individual participants. Troutman does not make any representations or warranties, express or implied, regarding the contents of this podcast. Information on previous case results does not guarantee a similar future result. Users of this podcast may save and use the podcast only for personal or other non-commercial, educational purposes. No other use, including, without limitation, reproduction, retransmission or editing of this podcast may be made without the prior written permission of Troutman Pepper Locke. If you have any questions, please contact us at troutman.com.
---------------------------------------------------------------------------
DISCLAIMER: This transcript was generated using artificial intelligence technology and may contain inaccuracies or errors. The transcript is provided “as is,” with no warranty as to the accuracy or reliability. Please listen to the podcast for complete and accurate content. You may contact us to ask questions or to provide feedback if you believe that something is inaccurately transcribed.