Video: AI in Financial Services and Insurance: A Word On Risk

In this video, Nate Buchanan, the COO and Co-Founder of Pathfindr, addresses the issue of AI risk with particular regard to financial services and insurance.



Set up a call with Nate

Please submit this form if you would like to set up a call with Nate.





VIDEO TRANSCRIPT

I wanted to touch briefly on risk, because this is an area that is near and dear to banking, insurance, capital markets, and wealth management in particular. In any of those highly regulated financial industries, risk is very, very important. And there's a perception, in some circles anyway, that AI is too risky to be used with customer data, or sometimes too risky for banking in general.

What I want to talk about here are the eight areas that most AI ethics and compliance frameworks around the world adhere to. These specific categories were published by the Australian government back in 2019, so before the Gen AI revolution, so to speak. But these eight areas also align with the policies and regulations currently being debated and rolled out across other jurisdictions such as the EU, the UK, and the US.

I don't need to walk through the whole slide, but it's everything from the idea that AI systems should benefit, and not harm, individuals, society, and the environment, to the idea that AI should not be discriminatory. It needs to be inclusive and accessible. It needs to protect people's data, uphold their privacy rights, and ensure that their data is secure. It needs to be auditable.

If you're using AI to make a decision internally, particularly a decision around something like whether to extend somebody credit, that decision needs to be explainable and contestable in the right context, in case somebody feels they were unfairly treated due to an AI-based decision.

And then finally, accountability.

It's important to keep in mind that leaders who implement AI for their organizations are ultimately accountable for the outcomes of that AI process or tool. It's been established in court that "the AI made the decision" is not a defense: if you've implemented the system, a human is ultimately accountable somewhere. So what does all this mean for FSI?

Even though it can sound a bit scary from a risk perspective, there are absolutely ways to get value from AI without taking on unnecessary risk.

One, and this might seem obvious, but you'd be surprised at how often we discuss it: you can focus on internal, meaning internally facing, not customer facing, or otherwise low-risk use cases first. Enterprise knowledge search is one example. Say you have a lot of different types of documents in a SharePoint site somewhere, or PDF documents that detail the different products you offer, or your internal policies.

It's relatively straightforward to create a chatbot-based experience for your employees that puts that knowledge at their fingertips without putting the wrong knowledge in the wrong hands. Meaning, it's not that difficult to create a chatbot that lets somebody search for, say, the leave policy or how much leave they have, while not allowing them to ask what Gary in the cubicle next door makes every year. There are ways to avoid that, such as the role-based filtering sketched below.
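[Editor's note, not part of the video: one common way to avoid that is to tag each document in the knowledge base with the roles allowed to see it, and have the chatbot only retrieve what the signed-in employee is cleared for. The documents, role names, and search function below are hypothetical, purely for illustration.]

# Minimal sketch: role-based filtering before retrieval.
# All documents, roles, and the search function are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class Document:
    title: str
    text: str
    allowed_roles: set[str]  # which roles may see this document

KNOWLEDGE_BASE = [
    Document("Leave policy", "Employees accrue 20 days of annual leave...", {"employee", "hr"}),
    Document("Salary bands", "Confidential compensation data...", {"hr"}),
]

def search(query: str, user_roles: set[str]) -> list[Document]:
    """Return only documents the user is permitted to see that match the query."""
    visible = [d for d in KNOWLEDGE_BASE if d.allowed_roles & user_roles]
    q = query.lower()
    return [d for d in visible if q in d.title.lower() or q in d.text.lower()]

# An employee can find the leave policy...
print([d.title for d in search("leave", {"employee"})])        # ['Leave policy']
# ...but compensation data is invisible to anyone without the HR role.
print([d.title for d in search("compensation", {"employee"})]) # []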

Another thing to consider is who your cloud provider is. Most folks are on Microsoft, AWS, or GCP, and all three of those providers offer native, inbuilt AI services, particularly Gen AI services, that can be delivered entirely within your firewall. I'm using Microsoft as the example because that's what we have, but the same applies to the others. If you have an Azure OpenAI instance powering a lot of your AI use cases, you can essentially opt out of your enterprise data being used to train OpenAI's models.

That's one of the fundamental things they offer. So you can safely provide your data within your Azure tenant and gain the benefits of the inbuilt LLM without worrying about your data escaping and being used for training purposes.
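[Editor's note, not part of the video: in practice, "within your Azure tenant" means the application calls a model deployment inside your own Azure OpenAI resource rather than a public endpoint. The endpoint URL, environment variable, and deployment name below are placeholders for your tenant's values.]

# Minimal sketch: calling a chat model through your own Azure OpenAI resource.
# Endpoint, key variable, and deployment name are placeholders.

import os
from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # your tenant's resource
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # the deployment created in your Azure resource
    messages=[{"role": "user", "content": "Summarise our internal leave policy."}],
)
print(response.choices[0].message.content)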

Another point I would make is that it's important to be mindful of the roles participating in any pilots you're running. I mentioned not being able to look up Gary's salary; depending on what use case you're testing out, it's a good idea to have a think about who is going to be involved in that pilot, who is going to benefit most from it, and how you're going to structure the rollout across the organization so that you're not letting everybody off the leash at once to use the tool. You're building an understanding.

This is what it's good at, here's what it's not good at, here's what we need to fix. And then, as you get more confident in the capabilities of what you've built, you'll be able to more carefully control who's doing what.