S3 E6: Using AI in Security and Compliance Assessments
Transcript
Jordan Eisner: Welcome back to Compliance Pointers for our viewers and our listeners. Compliance Pointers is a podcast brought to you by CompliancePoint, the company where I, Jordan Eisner, work, alongside today's guest, Brandon Breslin, our Director of Assurance Services.
Hey, Brandon.
Brandon Breslin: Hey, everybody. How’s it going?
Jordan Eisner: Long time no see.
Brandon Breslin: Yeah, exactly. Exactly. Thanks for having me on, Jordan.
Jordan Eisner: Now, you’ve done video with us, right?
Brandon Breslin: Yeah, yeah. I have.
Jordan Eisner: You talked about that. I can’t remember the topic because we’ve done several of these podcasts together.
Brandon Breslin: Maybe one of the consolidated audit topics.
Jordan Eisner: Yeah. Usually when speaking with you, it’s somewhere in the realm of security assurance, which is a group you run, and probably has something to do with PCI or ISO or SOC 2 or a whole host of different responsibilities you have.
But today, we're going to be talking about the benefits and potential risks of using AI when conducting security and compliance assessments. It's something you're seeing a lot more of, I'm sure. For those that don't know, Brandon's been in the industry, and when I say the industry, I mean security assurance, PCI, SOC 2, ISO, for more than 10 years. He's a graduate of UGA. I know he'd want me to throw that out there.
Brandon Breslin: Go dawgs.
Jordan Eisner: Go dawgs. There you go.
And I’m sure you’ve seen a lot of trends over definitely last year with AI, but even prior to that, and you probably have some thoughts on the future. So today we’re going to talk about how it’s being used, the pros, the cons, and what to look for in the future.
Brandon Breslin: It’s a great topic. It’s just continuously changing at a rapid pace.
Jordan Eisner: So break it down. Tell us how AI is being used today in security and compliance assessments.
Brandon Breslin: So I think AI is being used on both sides of the coin, right? And I would say, before we even get into it, we use the term AI so loosely now, right? What does that even mean? There are so many different applications and methods for that. I would say right now in the cybersecurity and compliance space, most of the usage of AI is around traditional tasks that people on your team would do from a documentation perspective: gathering information, completing documents, and gathering evidence.
But there are some more complex tasks, right? So control implementation, there’s compliance monitoring tools that we’ve seen out there that are now AI-driven. We’ve even seen predictive analytics that are run.
But you're still, for the most part, seeing the traditional, I don't want to use the term mundane, but recurring tasks that could be easily alleviated by AI tools: reporting, policy and procedure development, right? Or even taking a first pass at that and then having somebody who's more equipped review it.
Jordan Eisner: Clean it up a little bit, make it sound a little bit more human.
Brandon Breslin: Right. And I think with any of these uses of AI, you want to be careful about the information that's put into it. And you also want to be careful about the information that's generated out of the tool. I know we'll get into some of those risks a little more here shortly, but you really want to be careful. What platform are you entering your data into, right? Are you actually inputting your client or customer information, your company's information? You really want to be cognizant of that. And you want to make sure that you're following organizational policy around that.
Jordan Eisner: That's what I was going to ask as a follow-up. We talked about how it's being used. I think there were some clear points on the mundane tasks and perhaps accomplishing some of that more efficiently.
So talk about the benefits of using AI in security and compliance assessments. You alluded to it a little bit there. What else do you see as a benefit?
Brandon Breslin: You hit the nail on the head. The efficiency piece is number one, and I feel confident that many organizations using AI right now, or tools that leverage AI, are driven by the efficiency gains there.
A few other areas, right, around continuous monitoring, I mentioned things that are simpler tasks that can be easily achieved that don’t require a lot of time for a human to do or an employee to do. If you can automate that process, why not?
Scalability, consistency, reducing human error, reducing costs in general. Time is money, so reducing costs, finding new avenues, risk management. How do you prioritize certain risks to the organization? You can use those tools to help establish what those risks are.
And I think a bigger core avenue for AI that we're going to see next is adaptability, right? Right now, AI is used for simpler things, but we're just in the alpha stages of its use. When we start getting into these more complex avenues of incorporating AI into the seamless workspace that we have now, and again, we're just scratching the surface right now, it's going to become extremely integrated into our daily lives, to the point where we're probably not even going to be able to live without it in the near future.
Jordan Eisner: Yeah, that scares me. But I mean, there are things you can do that we just can’t, you know, and it would be foolish to not pursue those things.
Brandon Breslin: Yeah, I think one area that's interesting specifically for the use of AI is accuracy. Because I touched on it a little bit earlier, it's both a benefit and a risk, right? The accuracy of the data that comes out is only as good as what you put in, right? Good inputs result in good outputs. That concept holds for AI models as well.
So you really have to be cognizant and understand what data you're putting into the tool so that you're getting the correct outputs. Because if you're not using the tool in the right way, it's not going to improve your process, right? The technology should not be the process; it should empower the process, enhance the process. That's the same for any AI tool as well.
So it's just interesting to think about the accuracy piece, because anybody can use the tools, right? However, you have to have a decent level of expertise to understand how to actually engage with the tool, how to write the correct prompts that will give you the information that you're looking for based on whatever task or concept you're looking to solve for.
Jordan Eisner: So again, you're hinting at the questions that I have with each of these answers, and the next one is really: what are the potential risks? We talked about the good. What are the risks, as you see them today, in using AI in security and compliance assessments? You talked a little bit about it becoming the process as opposed to enhancing the process. So maybe it's too much reliance on it, or just letting it run rampant, right, without having a human check or review in some of those instances.
But what are the potential risks that you see in using it?
Brandon Breslin: Yeah, you’ve got to set the guardrails right. You hit on that. The overreliance, you need the boundaries from an organizational perspective. If you have not as an organization even talked about AI or considered establishing boundaries, that needs to be a critical number one priority right now. You need to develop an AI policy or procedure or at least establish the governance structure that you’re going to do around handling of AI.
If you don’t implement anything, your employees will start to use it with no regard. You really need to establish the boundaries of what is allowed, what is not allowed, and maybe not even getting to the granularity of which specific tools are allowed, but the concept of what type of data can we use? Can we input into the tool? What type of data are we able to use out of the tool?
I talked about accuracy. You want to be careful about what actually comes out and double-check the information to make sure that it's actually accurate for what you're looking for. So I would definitely say setting boundaries; too much over-reliance is a risk.
Bigger picture, the legal and regulatory compliance aspect of AI is something that's ongoing. I think there are two court cases right now that are at the Supreme Court, or about to be, around data ownership. They involve an organization and its customers' data that was used in AI, and who ultimately owns the information that came out of that tool. That's a pretty contentious piece right now. Who owns the tool may be clear, but what about the actual data that's inputted and that comes out of it? Especially if there's a decision made based on that, those cases can impact the landscape of who owns the data for AI, or for the specific tools that data comes out of.
Limited contextual understanding is something else though that comes to mind. We as a society have already started to use AI in a limited capacity and we tend to give prompts based on the certain situation that we’re in. However, it’s only learning based on the input that you give it. So it may not have the full context of the situation that you are trying to get information out of.
It's similar to telling somebody a story without giving them the full picture. You only tell them one piece of the story, and their answer about what may happen in the story may be different than if they had the full context. So I think it's a similar situation that you want to be cognizant of.
Jordan Eisner: That’s a good point. So the risks are probably key considerations you would give to organizations that are looking to adopt AI for these assessments. So is it, you know, if I were to ask that question, hey, I’m an organization, I’m going to leverage AI, what do I need to look out for or what would you recommend? Is it mostly guardrails around those risks you just talked about and ensuring you have a plan from the get-go on how you’re going to tackle X, Y, and Z?
Brandon Breslin: That’s definitely the first step. Just like any decision-making process in an organization, make sure you’re following your standard procedure that’s been outlined by your executive board or board of directors or executive committee, whoever is in charge of your organization. Make sure you’re following that standard process for decision-making and strategy development.
I don’t think anything’s different from an AI perspective, right? It’s just a little bit more of a complex issue because of, you know, there’s so many unknowns at this point of who owns the data, what are the boundaries, right?
And I will say for organizations of different sizes and complexities, the boundaries may be different, right? Not every organization is going to take the same approach to AI. One organization may have a stronger risk appetite to be willing to use AI in a more integrated manner. Maybe they’re building a platform that has AI built in from the core, right? Maybe there’s other organizations that are very risk-averse and they’re only going to use AI in a very controlled manner, in a very monitored sense.
So there are completely different ways that you can establish the boundaries for using the tools. I think, you know, make sure you have a careful selection process, right? Really do your research if you're an organization that's looking at different LLMs, or large language models. Really understand what you're looking at. Be an informed consumer, right?
And then from a transparency perspective, make sure you're working with those in your organization who are making those decisions, and that you really talk through all the strategies that make sense for your future goals as an organization.
I mean, there's other things out there, right? Make sure you're prioritizing security, make sure regular audits and testing are being done on the tool, along with data quality and accuracy checks. And there are, of course, ethical guidelines you want to be cognizant of as well.
Jordan Eisner: No, that's a good point. You're bringing on the tool to help with your security and compliance assessments, but then you're creating a situation where you also need to ensure that the tool itself isn't causing any disruption to your compliance and your security posture. So it starts to create a little bit of work for itself, too, and you want to make sure that the efficiencies it's going to realize are worth that.
Brandon Breslin: Yeah. And you want to incorporate vendor management selection processes into this as well, right? We haven’t talked about that. Just like any common frameworks that you see out there, PCI, SOC, NIST, CIS, ISO, they all have vendor management guidelines around due diligence, ongoing monitoring, contracting, making sure you have specific language in there that protects the organization and its customers around sensitive data handling, confidentiality, availability, security, all of those things. That should be applied to the same AI tool use selection process as well.
There are so many tools out there now already. You want to understand who the vendors developing these are, right? Are you comfortable with where they work and the environments they're operating in? Is their reputation strong? All of those considerations from a vendor selection process.
Jordan Eisner: Yeah. All good stuff. I think our listeners will really appreciate that if they’re embarking on incorporating AI into these assessments, which I’m sure a lot of them are.
Brandon Breslin: Absolutely. And to kind of bring it back around: I know we're wanting to incorporate this into assessments, and we talked at the beginning about how it can be used to implement controls and things like that. I see so many benefits in that. If you're in a compliance officer role, or you're on a compliance team, or you're charged with governance of compliance, yes, there are risks here with AI, but I think it's also worth exploring some of these benefits. How can you take your organization to the next level in ways that maybe you didn't have the capability for in the past, right?
Now based on some of these tools, you can start to look at overlap across frameworks. You can look at overlap of policies, procedures, operational controls, technical controls, all the security configurations that are set within the environment, understanding where the gaps are, where can you take that to the next level, implement new controls, modify current controls, all of those things.
Jordan Eisner: So I've got a question for you in closing. What's in the future for leveraging AI in security and compliance assessments? What's next to come out? Obviously, it's your opinion.
Brandon Breslin: Yeah, exactly. I wish I had a crystal ball I could look into.
Jordan Eisner: Where do you see it going? Because some of the stuff it can do now probably wasn't imagined in the years that preceded it. What do you think takes it to the next level in terms of leveraging AI in these assessments?
Brandon Breslin: Yeah, that’s a great question. I mentioned earlier that I think that we’re just scratching the surface, right? I do think there’s going to be, and I strongly think this, that there’s going to be an explosion of the adoption of AI. And there’s also going to be a significant increase in the number of available tools, number of available LLMs to use, number of available use cases with the understanding or the caveat that there could be more risks out there.
So you really want to be careful as you're going down this path, right? Don't just adopt something for the sake of adoption. Really look at your business strategy, look at your alignment of IT and business goals for the organization, and work with your executive management team. Understand: is AI relevant for our organization? Should we adopt it? If we do adopt it, what's our vendor selection process? What's our plan for incorporating it into the organization? What type of sensitive data do we want to include in there? How do we want to evaluate it? How do we make sure that it's continuously operating in its intended manner? Are we going to do regular audits? Are we going to do testing on it? Are we going to create our own custom platform based on open source LLMs that are available to download and modify?
And then from a use case standpoint, I do see a continued increase in capabilities there, and increased automation. My hope is that even though AI may reduce the number of jobs available, it creates more jobs as a new line of service or line of business in the marketplace.
And I do think there are probably going to be some AI-powered compliance platforms. There are already common GRC platforms out there in the marketplace, and I do think that AI will continue to be integrated into those at a core level.
Jordan Eisner: Well, you heard it here first from Brandon Breslin, February 2025 with his future AI predictions. So don’t say he didn’t tell you so.
Well, appreciate your time, Brandon. I think that's a good point to wrap up for our viewers and listeners. Don't forget to follow us on LinkedIn. Don't forget to subscribe to this podcast if you're not already. And if you are subscribed and you like what you're hearing, please comment, leave a review, make suggestions, right? We're very open to that, and to different topics that we can get into.
We've got a wide variety of expertise here at CompliancePoint, not only in information security, but in data privacy and other regulatory areas.
And then if you have specific questions about your organization, what you’re going through, how you’re leveraging AI, you want to talk to Brandon, you want to talk to myself or any of our experts, don’t hesitate to reach out. You can find all sorts of ways to contact us on our website or you can find Brandon and I directly on LinkedIn.
Until next time. Thanks, Brandon.