Great AI, Great AI Responsibility

Nate Buchanan, COO & Co-Founder, Pathfindr

For every article, post, or video excitedly talking about the potential of AI, there is another one warning about its dangers. Given the press and hype around each new AI breakthrough, it’s no surprise that governments, business leaders, and academics are closely tracking the development of the technology and trying to put guardrails in place to ensure public safety.

At Pathfindr, we’re keenly aware of the importance of responsible, ethical AI. It’s central to our core values and we’re very proud to have Dr Darryl Carlton on our team leading our AI Policy and Compliance capability.

But wait, you might be thinking. Why would I need an AI policy in the first place? What do ethics mean in the context of AI? How is it possible for someone to “AI responsibly”? It’s not like drinking or gambling, after all.

There are lots of ways to answer those questions, many of which are outside the scope of this week’s newsletter. Perhaps the best place to start is by breaking down some categories of things to watch out for in AI implementations. The Australian government published a list of eight AI Ethics Principles back in 2019 (well before the Gen AIssance) that broadly align with the principles and regulations being considered by other governments around the world, including the EU, UK, and US.

Human, Societal, & Environmental Wellbeing

AI systems should benefit - not harm - individuals, societies, and the environment.

This is a lofty goal, and perhaps the most important of these principles given what people are most concerned about when it comes to AI. In terms of individual harm, deepfakes and other manipulations of a person’s likeness have the potential to cause serious problems. Societies are also at risk from this kind of misuse, as the concerns around politically charged misinformation have made clear. From an environmental standpoint, AI workloads can be very compute-heavy and, in some cases, carry a large carbon footprint.

However, the flipside of this principle is the BENEFIT that AI can provide. So much value and so many new possibilities can be unlocked by AI. If governments and companies are able to join forces to put appropriate guardrails in place - and if LLM technology continues to become more efficient, with models delivering “good enough” results on less power - we’ll be able to mitigate the harm while maximizing the benefit.

Human-Centered Values

AI systems should respect human rights, diversity, and the autonomy of individuals.

This one is a bit more nebulous - it sounds good, but can be challenging to put into practice when you consider that AI requires vast amounts of data to operate, and most of the data it pulls from is created by humans. And humans, as we all know, tend to have a spotty record on things like rights and diversity. But if we take a step back and think about what human-centered values could mean, we can see it as an aspiration: that AI systems strive to represent the “better angels of our nature” (to quote Abraham Lincoln) rather than simply regurgitate what has come before.

Fairness

AI systems should be inclusive and accessible, and should not drive discriminatory practices.

As implied above, if AI is discriminatory, it’s not its “fault” - the fault lies in the data it was trained on. There is also the possibility that AI systems will over-index on anti-discrimination, with problematic results. The important thing to remember, as with the principle on human-centered values, is that teams implementing AI need to do their best to give as many people as possible a great experience when they use their solution.

Privacy Protection & Security

AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.

This seems counterintuitive. If data feeds AI - and the more data, the better - how can we be sure that an AI system that’s powerful and easy to use can also keep data secure? For companies concerned about data protection and privacy (i.e., all of them), one way to adhere to this principle is to keep your AI experimentation within the “walls” of your cloud provider. Azure, AWS, and GCP all provide options to use their hosted LLMs without your data being used to train the underlying models. Longer-term and larger-scale implementations can make things a bit more complicated in this regard, but there are ways to begin your AI journey without exposing yourself to this kind of risk.
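To make that concrete, here’s a minimal sketch of what “staying inside the walls” can look like in practice. It assumes an Azure OpenAI deployment in your own tenant; the endpoint, key variable names, and deployment name are placeholders for whatever your environment actually uses, and the same idea applies to AWS Bedrock or GCP Vertex AI.

```python
# A minimal sketch: calling an LLM through your own cloud tenant so prompts stay
# inside your provider's boundary. Endpoint, key, and deployment name below are
# placeholders (assumptions), not real values.
import os
from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<your-resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

# Requests made this way are processed within your Azure resource, and Azure OpenAI
# does not use customer prompts or outputs to train the underlying models.
response = client.chat.completions.create(
    model="my-gpt-4o-deployment",  # hypothetical deployment name in your resource
    messages=[{"role": "user", "content": "Summarise this supplier contract clause..."}],
)
print(response.choices[0].message.content)
```

The point isn’t the specific SDK - it’s that your prompts and data never leave the cloud environment you already control and audit.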

Reliability & Safety

AI systems should reliably operate in accordance with their intended purpose.

The classic (fictional) cautionary tale here is Skynet of Terminator franchise fame - an AI system designed for national defense that goes rogue and starts a nuclear war. But the more immediate truth is that AI, and particularly LLMs, often behaves in unpredictable ways. This is a feature, not a bug, so in that sense we can’t necessarily call these systems “unreliable”, but it does mean that teams need monitoring and continuous improvement processes in place to ensure that the AI solutions they’ve implemented continue to add value and aren’t causing major issues. Problems begin when unpredictability turns into unusability, and that’s what you need to watch out for.
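As a rough illustration of what that monitoring might look like, here’s a small sketch (an assumed setup, not a specific product or Pathfindr implementation) that logs every prompt/response pair and flags questionable outputs for human review. The specific checks are placeholders; a real pipeline would track whatever quality, cost, and safety signals matter for your use case.

```python
# A rough sketch of a monitoring wrapper around an LLM call. The quality checks
# and review queue are illustrative placeholders, not a production design.
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
REVIEW_QUEUE = []  # in practice: a dashboard, ticket system, or labelled dataset


def monitored_call(llm_fn, prompt: str) -> str:
    """Call the model, log the exchange, and flag suspect outputs for review."""
    start = time.time()
    answer = llm_fn(prompt)
    record = {
        "prompt": prompt,
        "answer": answer,
        "latency_s": round(time.time() - start, 2),
    }
    logging.info(json.dumps(record))  # feeds dashboards and periodic reviews

    # Crude quality gates; real ones might check groundedness, policy, or cost.
    if not answer or len(answer.strip()) < 10:
        REVIEW_QUEUE.append(record)
    return answer
```

Even something this simple gives a team the raw material for continuous improvement: a log of what the model actually did, and a queue of cases where unpredictability started to look like unusability.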

Transparency & Explainability

People need to know when they are being impacted by AI, and when an AI system is engaging with them.

It’s one thing for a customer to be notified that the chatbot they’re talking with on their bank’s website is powered by an LLM. You’re opting in at that point. But what about companies that use their knowledge of your purchase history to automatically generate marketing emails targeted at you? You’d be receiving communications that you might not have otherwise, because the company used AI to infer what you’d be likely to buy and created an offer for you. In this scenario, you’re being impacted by AI, and the company would have an obligation to note that on the communication itself.

Contestability

There needs to be a timely process in place for people to challenge the use or outcomes of AI systems.

I believe this principle will be one of the more difficult ones to adhere to or enforce. The potential applications of AI are so varied that it would be very challenging to have a process whereby, every time AI makes a decision “about” or “for” someone, that person has a mechanism to question or reverse it. For use cases such as credit decisioning at a bank, it’s easy to see how someone who feels their case was decided incorrectly could appeal it. A facial recognition system at an airport falsely matching a law-abiding citizen to a no-fly list is somewhat more complicated to contest.

Accountability

People responsible for AI systems should be accountable for their outcomes.

To me, this one is perhaps the easiest to abide by. If you’ve implemented an AI system, the onus is on you to maximize the benefits and minimize the risks. Make sure the right guardrails are in place from a technical and process standpoint, and your downside can be very small. But make no mistake - when it comes to AI, the buck stops with humans.

I hope you’ve enjoyed this brief tour of ethical AI principles and some musings on each of them.

Other Blogs from Nate


AI for Quality Engineering

Continuing our AI series that we began in last week’s edition with our deep-dive on how AI can make a difference in private equity, this week we’ll focus on a capability instead of an industry.

AI for Private Equity

Occasionally at The Path, we like to take a break from our regular, Pulitzer-worthy content to write a deep dive on how AI can make a difference in a particular industry. This week we’re focusing on private equity and how GPs and their management teams can use AI to manage risk, optimize performance, and seize opportunities that others might miss.

It's not too late

Specifically, we’re going to unpack a particular finding in The State of Generative AI in the Enterprise, a report based on data gathered in 2023 and published by Menlo Ventures. Over 450 enterprise executives were surveyed to get their thoughts on how Gen AI adoption has been going at their companies.

Good AI Governance

It may not be everyone's favorite corporate function... but it's very necessary. No corporate buzzword elicits as many reactions - most of them negative - as “governance”. Whether it’s a Forum, Committee, or Tribe, anything governance-related is often perceived as something that gets in the way of progress, even if people acknowledge that it’s necessary.

AI for CFOs

For those who think about corporate financials all day, it’s tough out there right now. That won’t come as a surprise to CFOs, or people who work in a CFO’s organization, but it was certainly a wake up call for me as I started learning on the job at Pathfindr.

Bang for your AI Buck

In this blog, we will show you how to put together a value framework that will help your team decide where to invest in AI capabilities and how to maximize the return on that investment.

Build vs. Buy vs. Wait

In this blog, Nathan Buchanan explains why strategic decisions around AI implementation can be so difficult to make.

Know What You're Signing Up For

Previously, we talked about different ways to calculate value from AI implementation. We focused on the different types of value, where they can be found across an organization, and the things to keep in mind when you’re trying to track them. What we DIDN’T focus on was the other side of the discussion.

Righting the AI Ship

In this week’s edition of The Path we’ll talk about some ways that AI efforts go wrong, and what teams can do about them.

AI for Purpose

If you're a Not For Profit, you've probably heard that AI can help you address these needs, but you’re not sure where to start, or how to afford it even if you did. What can you do?