Video: Developing an AI Strategy in Healthcare

In this video, Michael Ashby, the former Head of AI at Telstra Health, uses a short case study to make the point that implementing a successful AI project is absolutely not just about the technology.



Ready for a chat?

Please complete this form if, having watched the video, you would like to have a chat with Michael Ashby.





VIDEO TRANSCRIPT

The particular project that I want to talk about is one I ran at Telstra Health.

Telstra Health, as you may or may not know, is a large healthcare organization in Australia.

It's a wholly owned subsidiary of Telstra.

And when I joined in 2015, Telstra Health was awarded the running of the National Cancer Screening Register. That was my role: to set up the infrastructure, the application, all of the security, all of that sort of stuff. So, essentially, the operating model for the National Cancer Screening Register, for both cervical cancers and bowel cancers.

As a part of that register, we received reports from pathologists, and we're looking at one on the screen here on the right-hand side: this is a histopathology report. I'm probably preaching to the choir, but for the purposes of the video, histopathology is where a histopathologist looks down a microscope at cells and writes a report on what he or she sees down that microscope.

The key thing about this is that there is no standard format for these. It is largely free text. We do receive them electronically, and that's an HL7 report there.

And the free text goes in the FT field on the HL7 message. But the format of it is another matter. I've deliberately chosen a small one there so it would fit on the screen, but these can go for pages. The language, the headings, all of those sorts of things are absolutely, completely up to the pathologist, in what they choose to do.

Some interesting things there from a data point of view. Looking at that cervical biopsy, I'm reading from the clinical notes there: CXVX stands for cervical biopsy at nine o'clock. Look at the way that that's being formatted. How do you do data validation, rules-based validation, on nonsense like that that you're receiving? And this is just a small example.
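As background, HL7 v2 messages are pipe-delimited text segments, with the report body carried as a free-text (FT) observation value. A minimal sketch of pulling that field out might look like the following; the message, its code, and the report text are invented for illustration, and real histopathology reports run to pages and vary from lab to lab:

```python
# Invented example message for illustration only.
raw = "\r".join([
    "MSH|^~\\&|LAB|PATHLAB|REG|NCSR|202001011200||ORU^R01|0001|P|2.4",
    "OBX|1|FT|HISTO^Histopathology report||CXVX at 9 o'clock. No dysplasia seen.||||||F",
])

def extract_free_text(message: str) -> list[str]:
    """Collect the free-text (FT) observation values from OBX segments."""
    notes = []
    for segment in message.split("\r"):
        fields = segment.split("|")
        # After splitting, fields[2] is OBX-2 (value type)
        # and fields[5] is OBX-5 (the observation value itself).
        if fields[0] == "OBX" and len(fields) > 5 and fields[2] == "FT":
            notes.append(fields[5])
    return notes

print(extract_free_text(raw))
```

The hard part, as the talk makes clear, is not pulling the text out; it's making sense of the unstructured language once you have it.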

So what, essentially, is the problem?

Well, these come in, as I said, electronically, and they are manually coded. In other words, there is a coder. You'll know what coders are: they sit there and they read these reports, and they determine a diagnosis code, ICD-10 coding or SNOMED coding, whatever we've deemed to be used, and they apply the appropriate diagnosis code to the report.

We were, at the time, receiving around seventeen and a half thousand, almost twenty thousand, reports per month, which roughly equates to around a thousand a day. The coders at peak speed, fully trained, could do about sixty-five of these a day.

So essentially it takes them about an hour, and that included the validation: one coder would code it, and a second coder would come in and validate. So for the entire process, from receiving a report to a validated diagnosis code, we were doing sixty-five per day.

Now, coders cost roughly around a hundred and fifty k each. You can do the maths there and work out that it was costing us roughly two point four million dollars a year in coders just to keep up. And when we took on the register, there was a huge backlog, huge, and I'm talking hundreds of thousands of reports that needed to be coded. So it was costing us two point four million dollars just to keep up.
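The figures quoted above line up; a quick back-of-the-envelope check, with all numbers rounded as stated in the talk:

```python
# Back-of-the-envelope check of the figures from the talk; all approximate.
reports_per_day = 1_000          # roughly 17,500-20,000 reports per month
coded_per_coder_per_day = 65     # peak speed, fully trained, incl. validation
coder_cost_per_year = 150_000    # "a hundred and fifty k each"

coders_needed = -(-reports_per_day // coded_per_coder_per_day)  # ceiling division
annual_cost = coders_needed * coder_cost_per_year

print(coders_needed)  # 16
print(annual_cost)    # 2400000, matching the ~$2.4M a year quoted
```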

The business case for this was actually quite extraordinary. It was going to pay back in less than nine months, if we could get it working. We entered into a co-development partnership with IBM and started the project.

So was this project successful?

The original target was above ninety percent accuracy. We absolutely achieved that; we knocked it out of the park, very, very clearly out of the park.

I learned that accuracy was an ill-defined concept.

It took many, many discussions to take people through this. It's not actually an ill-defined concept; it is very well defined. It was ill understood, or not very well understood, in our organization, and we had different measurements. This led to a disagreement on what the goalposts were.
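The talk doesn't say which measurements the teams were using, but a toy example shows how easily "accuracy" numbers diverge: the very same predictions score differently depending on whether you count overall agreement or per-class precision and recall. The labels below are invented purely for illustration:

```python
# Invented toy labels: 1 = abnormal report, 0 = normal report.
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
preds = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]

tp = sum(t == 1 and p == 1 for t, p in zip(truth, preds))  # true positives: 3
fp = sum(t == 0 and p == 1 for t, p in zip(truth, preds))  # false positives: 1
fn = sum(t == 1 and p == 0 for t, p in zip(truth, preds))  # false negatives: 1

overall = sum(t == p for t, p in zip(truth, preds)) / len(truth)
precision = tp / (tp + fp)
recall = tp / (tp + fn)

print(overall)    # 0.8   overall agreement, the usual headline number
print(precision)  # 0.75  share of flagged reports that really were abnormal
print(recall)     # 0.75  share of abnormal reports actually caught
```

Two teams quoting 0.8 and 0.75 for the "same" system are both right; agreeing on the measurement up front is the point.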

And I will say that, as of this date, this project hasn't gone into production. It's sitting there, ready to go into production, but it hasn't gone in yet.

So why did I stand up and introduce to you a project which, by normal measures (i.e. it hasn't gone into production yet, it's not saving any money), is not a successful project?

I did it because the technology succeeded. The technology was an absolutely outstanding success by any measure: the speed, the accuracy, the volume, all of those sorts of things, and really the clinical safety, everything. But there were some key things that we neglected to do along the way, and that brings out some really good learnings for us.

So, get your organization ready for an AI project. The first one there: have a project governance framework. Actually, if you look along there, not too many of these are unique, new things for AI projects, and that's my big underlying point: don't treat AI projects too differently to any others. So have a good project governance framework, for example Agile.

Agile is really useful in this one. Have a data governance policy, because AI and machine learning require a large amount of data, and having data governance in place (where do you collect the data, where do you store it, how is it going to be used, all of those sorts of things) will underpin the project. Have security policies in place. What about applications in the cloud, sending data off-site, sending clinical data into the cloud? Where is that cloud?

Is it within Australia? Is it overseas? A lot of our healthcare requirements are that the data remain within the sovereign borders of the country we're operating in. So having a security policy coupled tightly with your data governance is necessary, not just a good thing.

But this one is the new one: make sure you've got an AI governance and ethical use policy. The field is changing so fast that you're not going to be able to get too specific in this policy, but lay down some principles, some guidelines, some guardrails as it were, for your project, to say: as an organization, this is what we will do with AI, and this is what we won't do. We won't do these things.

And in healthcare, one of the key ones that came across, and at Telstra Health we developed this policy: we're not going to build self-learning, autonomous AI clinical decision machines. They need to be explainable. They need to be controlled and trained in an appropriate environment. So having those guardrails is very important, and then engage with your clinical safety committee.