S2 E32: AI and Compliance with Privacy Regulations and the TCPA

Audio version

AI and Compliance with Privacy Regulations and the TCPA

Transcript

Jordan Eisner: All right. Welcome to Compliance Pointers. I’m your host, Jordan Eisner, VP of Sales at CompliancePoint. I’m very excited today to be joined by a first-time guest on the Compliance Pointers podcast, but a familiar face here at CompliancePoint, Kara Urbaniak. Urbaniak, am I saying it right, Kara?

Kara Urbaniak: You got it.

Jordan Eisner: Marketing Compliance Consultant here at CompliancePoint. Kara’s been with the organization for, I can’t believe it’s already been three and a half years, Kara. It seems like just yesterday, but you are coming fresh off employee of the quarter. You’re rising up and comer here, obviously at CompliancePoint. I should not be surprised at the fact that you’ve been here three and a half years and you started in our Marketing Compliance Group and now you’re assisting with not only that group, but also our Data Privacy Group, I think for 18 months, maybe even more.

Before that, you worked with Vanguard in finance, a different line of work, but used to different types of clients and the challenges you might encounter there.

But you’re a good fit for this podcast today because of the marketing compliance background, because of the data privacy background, because we’re talking about everybody’s favorite topic, artificial intelligence, AI for short, and how its use could jeopardize your organization’s compliance with the TCPA, hence where the marketing compliance background comes in or privacy regulations. Also a result, potentially in your company being sued as we’re starting to see.

Kara, good to have you on.

We’ll start with what was probably the catalyst for this podcast and blog post that you put together, and that’s what Patagonia is facing. Give us the details on that. That’s what spurred this conversation, and well, I guess that’s what spurred the legal action against Patagonia that then created this conversation and many about what’s going on there. Give us the details.

Kara Urbaniak: Yeah. Thank you for the intro.

The lawsuit against Patagonia is essentially alleging that the company violated California’s Invasion of Privacy Act, and they’re saying that they did so by using AI tools in their customer service operations. The key thing here is that they did so without informing customers that their conversations were being recorded and monitored.

The company that hosts the AI tools, they’re not being sued directly in this lawsuit, but they are named. TalkDesk is the company that’s named that makes an AI software that does enable companies to do things like intercept or record and analyze customer interactions, and this would include phone calls.

The lawsuit saying that Patagonia intentionally installed TalkDesk’s products knowing that the products would intercept and record callers’ conversations, specifically claiming that they use these AI tools for things like monitoring and analyzing customer service conversations. Sometimes this could be used to do things like improve customer support or identify issues, or we’ve seen companies integrate these products into their overall quality assurance process.

But the main thing here is that the complaint is arguing that customers weren’t properly notified that their calls were being monitored or recorded by any AI technology. Failing to disclose this surveillance could end up being a violation under California’s Invasion of Privacy Act, which as we’re starting to see can lead to some significant legal and financial troubles, and definitely starting to see more companies run into these issues.

Yeah. Because it doesn’t seem like too uncommon of a story.

Jordan Eisenr: For organizations to scramble in to figure out where can they use AI anywhere, where can they make things more efficient. To quote Jeff Goldblum, I guess from Jurassic Park, so busy thinking about if they could, not stopping to think about if they should, or how they should in some of these instances.

Okay. Anything else on that you would add? There’s the background or need to know about it before we dive into other questions?

Kara Urbaniak: No. I think just the main thing here is that not just the fact that they use the AI, I think it really comes down to that they shared this information and didn’t have the consent to do so.

Jordan Eisner: So we just mentioned, this is not too uncommon. We see other organizations looking to implement and leverage AI accordingly. So are other companies facing issues like this?

Kara Urbaniak: Yeah. So we have started to see a few other cases like this that have come up in, I would say, the past year or so, it’s been more prevalent.

So in this case with Patagonia, TalkDesk was named, but they weren’t the direct company being sued. However, there is another case where TalkDesk is even being sued directly, and the plaintiffs are claiming that they’ve been profiting from the use of data from their clients.

So not just TalkDesk, we’ve seen other big companies been involved as well. So Home Depot and Google are both involved in something similar as well where the plaintiffs are claiming that Home Depot had utilized Google’s Cloud Contact Center AI and then allowed Google to access, record, read, learn the contents of the call without ever having getting consent or disclosing the sharing of this information with a third party.

So this one’s pretty similar. The plaintiff in this case is also alleging that Home Depot was using Google’s AI tools to monitor and transcribe calls with customer service, and that Google then further used that data to further train the AI model.

So this one’s turned into a class action complaint. They’re seeking $5,000 per violation with the claim that Google has been doing this since 2021. So for a company like Google may not be too much of a big deal, but for a lot of others, that could be pretty damaging.

So there’s just a couple that have been coming up like this, but we definitely think we’ll start to see a few more now that AI has become so prevalent.

Jordan Eisner: There’s awareness around from plaintiff standpoint, what they can bring up.

Google doesn’t surprise me because it seems like Google and Facebook, or Meta, and all these other big organizations are usually some of the first to be trialed on this, but Home Depot is a little surprising. Patagonia was very surprising, I’m sure for them, but I think also for a lot of consumers as well as a brand that you don’t typically see involved in this sort of thing.

Okay. So there are some other companies, and we talked about the main suit that brought this together.

What about other states? We’ve mentioned California, and this one stems from California. You’ve seen other states push forth, I guess, cases like this against organizations or consumers in those states?

Kara Urbaniak: Yes. So in the cases I just talked about with Patagonia and Talk Desk, Home Depot, Google, these are all being brought under California’s Invasion of Privacy Act, but we are starting to see other states take action on passing laws to regulate AI.

So Colorado was one of the first and introduced some additional consumer protections that would require developers of what they’re calling high-risk AI systems to use reasonable care to protect consumers from any foreseeable risk or risk of discrimination. A lot of this is going to center around disclosures and transparency, but there will be some additional obligations.

In Utah, they’re doing something similar, so they’ve passed their own AI Policy Act after Colorado. There are some similar requirements with transparency and accountability, but at a high level, essentially, any business that’s subject to the Utah law would need to disclose when a consumer’s interacting with what we’re calling a generative AI system, so anything like a chat bot.

This would be administered and enforced by Utah’s Consumer Protection Division. That gives them pretty broad reach in requiring these disclosures.

There are some key differences between the Utah law for certain businesses that may be defined as what they’re calling a regulated occupation, and that would be something that’s required to have a license by the State Department of Commerce.

But overall, I think these two laws are highlighting a trend that we’re starting to see with AI regulations at the state level. I think we’re probably going to start to see more states follow suit just since this is becoming such a huge part of our world, and especially in the absence of overarching federal rules, states are probably going to start to take action.

Jordan Eisner: Yeah. I would imagine so. Well put.

We talked at the top of this call about your experience in TCPA and working in our marketing compliance group, which a majority of the work we do in that group is stemmed around TCPA. But this also doesn’t involve TCPA, but there are some lessons related to TCPA compliance that can probably be learned and consent.

Give us that sense or tell us about where you see those or lessons learned from a TCPA standpoint.

Kara Urbaniak: Yeah. These cases are really related to the privacy laws in California, but I think it could set precedent for others to sue for violations under the TCPA.

I’m sure a lot of contact centers and people that are making a lot of outbound phone calls are going to be enticed to use these different types of AI technologies to improve performance and just create efficiencies overall, but they may not understand potential implications or privacy concerns.

It’s important to note that the TCPA requires businesses to obtain express written consent. If you’re going to be engaging in certain types of automated communications like AI-driven interactions.

In these situations, customers would have to clearly and conspicuously agree, and this is usually done through an electronic web form or written document to receive communications from a business that may be automated or involve data collection and analysis by AI tools.

While we haven’t seen a case under the TCPA like this yet, we think that this is going to be a topic and concern that’s on the radar of not just professional plaintiffs but the FCC as well.

In general, we always see professional plaintiffs creatively coming up with new ways to go after companies. Once they catch wind that people are doing this under a certain privacy law, they’ll be happy to find ways to exploit that under the TCPA, especially as more and more contact centers or outbound sales centers are going to start making a lot of calls using AI technologies.

Jordan Eisner: That goes back to the mad rush of implementing, leveraging AI in combination with a very scrutinized space, very litigant space, and well-put. Once plaintiffs start wising up to avenues to put these together, they will. TCPA, as we know, has been very high volume class actions over the past several years.

What about from a privacy perspective? What are some key takeaways from this case?

Kara Urbaniak: From a privacy perspective, I would say ensuring that your privacy notices are transparent about any use of AI is key, and I think also understanding consumer sentiment.

There was a recent study from KPMG earlier in the year, and they found that three out of five Americans feel unsure of AI still, and about only half of those surveyed feel that the benefits actually outweigh the risk.

In these newer times with AI, we think maintaining consumer’s trust is incredibly important. We suggest doing things like reviewing your current language, and privacy notices, and disclosures, ensuring that vendors that a company is sharing data with are acting in the role of a service provider.

I would say it’s important to note that this is all still alleged, but these are lawsuits that are going to be costly for some companies, and while they haven’t been proven in court, they can still damage brand reputation, and it can be a bad look for a company as well.

Consent and data sharing aren’t just unique to AI, but under California’s Invasion of Privacy Act, we are definitely starting to see more claims specific to the use of that. Consent may not be the only way for companies to use AI tools, but again, we are really seeing courts focus on this right now, so that could set further precedent.

Again, just really ensuring that disclosures and privacy notices are transparent, or when obtaining consent to make automated calls or anything using AI, that that’s clear to the consumer.

Jordan Eisner: When in doubt, ask permission.

Kara Urbaniak: Yeah, exactly.

Jordan Eisner: Okay. This has been a pretty good overview. Anything else you’d add?

Kara Urbaniak: I think one of the big things is just understanding that consumer sentiment towards AI is still pretty new. I’m sure just people are used to in their day-to-day life, these things being new topics, and so it’ll probably take some time before these technological advancements catch up with society and the laws as well. I think just being extra careful and being extra transparent is the right stance for companies to take as we navigate this.

Jordan Eisner: Okay. Well, Kara, thanks for taking the time to be on the podcast. Welcome to the Compliance Pointers guest list. I’m sure we’ll hit you back up for another podcast in the near future.

For our listeners, thank you for watching and or listening. As a reminder, we’re always producing content like this in the world of marketing compliance, data privacy, information security, cybersecurity. So continue to check back, subscribe if you haven’t already.

If you’re interested in learning more about CompliancePoint, please don’t hesitate to reach out. You can find us on our website. There’s an email address, connect@compliancepoint.com.

Kara is on LinkedIn, I’m on LinkedIn. We welcome any interaction there. Until next time. Thanks everyone.

Let us help you identify any information security risks or compliance gaps that may be threatening your business or its valued data assets. Businesses in every industry face scrutiny for how they handle sensitive data including customer and prospect information.