Dr. Muhammad Mamdani stands near St. Michael's Hospital and the Li Ka Shing Knowledge Institute.


What is generative artificial intelligence (AI) and how will it – and other AI technologies – transform health care? Over the past year, chatbots like ChatGPT and image generators like DALL-E have become household names. At the same time, people are seeking to understand the impact that these AI-powered tools will have on daily life.

Unity Health is home to the first hospital in Canada with a dedicated applied AI team. Since 2017, the Data Science and Advanced Analytics team has launched more than 50 innovations into use. As pioneers in the field, the network is already seeing the positive impact AI can make on hospital operations and patient care.

We sat down with Dr. Muhammad Mamdani, the VP of Data Science and Advanced Analytics at Unity Health, to learn more about the future of AI in health care and how Canada can maximize the potential of these emerging technologies.

What is generative AI?

Generative AI typically refers to machine learning algorithms that enable computers to ‘learn’ from existing data – such as text, images, voice and videos – in order to create, or ‘generate’, new content. Generative AI can produce content that is sometimes indistinguishable from the original data; it can also answer questions, concisely summarize complex information and create new material – like text or images – from simple prompts.

Ten years from now, what role do you think AI will play in hospitals and other healthcare organizations?

Ten years from now, we’re probably going to see AI much more ingrained in what we do in day-to-day health care. When we look at generative AI, it will be involved in pulling data from the systems we have and being able to make sense of it.

I’m hoping, at that point, it will be reliable enough to help with diagnosis and treatment plans. I also see it being much more involved in predicting what may happen to a given patient. We already have quite a few algorithms around this, but I expect it will be better able to highlight at-risk patients for us.

I also see a big role for AI around automation. Right now, the big discussion is around how AI can help us with menial tasks, like scheduling. In 10 years, I think the definition of menial will change.

What system level changes need to happen for AI to make a broad impact in health care?

The first thing is a social change – AI and data literacy have to be there. From a system or organizational perspective, we need to embrace digital even more. ‘Axe the fax’ needs to be taken seriously.

We also have to take more of a systems approach. Unity Health is doing the best we can to embrace digital. Other hospitals are as well, but coordination and consolidation are ideal. The more data you have, oftentimes, the more things you can do. Take a rare condition we only see a couple of times a year at our hospital: multiply that by a hundred hospitals and you would have enough data to build algorithms for that rare condition.

As a health system in Canada, we have to be more disciplined about how we address data. We need alignment across the provinces on things like data standards. We would all benefit from the AI that could be created with high-quality, nationwide health data sets.

What changes in health care do you foresee in the near future as the field of generative AI grows?

You need a lot of data to make generative AI models work well. A model is only going to be as good as the data you feed it. This is why, when we look at ChatGPT, it will ‘hallucinate’ or make things up. It’s using data that is often incomplete or data of variable quality and reliability from the internet and trying to sift through all the nonsense that’s there.

For AI in health care, the sources need to be much more credible, such as peer-reviewed and trusted publications. Big tech companies are already working on training algorithms with that kind of data to make tools that will be a lot more reliable and accurate for medical use. There are still going to be things that these new models get wrong, but they will be much better than what is currently available.

At Unity Health, we don’t have those kinds of large data sets in-house, but there are other AI tools that we can build internally. For example, something we’re talking about creating very soon – with our team and a bit of external help – is an algorithm using generative AI that will help draft some of our clinicians’ notes for them.

Clinicians spend hours every day writing admission notes, discharge notes, progress notes and the like, which sucks up a lot of time they could be spending with patients. If we automated a significant proportion of it so the clinician only has to review, edit and approve the notes, it could hugely reduce the administrative burden on clinicians.

Are there any myths about AI that you think need to be debunked?

One thing I hear all the time is: There’s so much research, why don’t you just deploy and use the AI? I think people conflate AI research with applying AI in real life. Responsible AI is also about humans and the social context. The myth is that AI is just about the tech; I would argue it is much more about humans and how we interact with AI than it is about the technology itself.

Myth number two is that you can build AI for practically anything, and you can’t. You have to be very disciplined and focused about what you develop AI for. If you’ve got garbage data sets or you’ve compromised on your algorithm, it’s going to be just awful.

Myth number three is a fundamental misunderstanding of how much work it takes to develop and deploy AI, especially in health care. You have to have the right systems in place, including the right staffing supports and teams, to properly resource an AI solution before, during and after launch.

How do we balance the risks and benefits of AI in health care?

It’s a balance between embracing AI and putting guardrails around it. It is a really tough balance because if you put too many guardrails around AI, you’re not realizing its potential and can stifle innovation. On the other hand, blindly trusting AI could also be bad because it does make mistakes.

I think the biggest threat to us right now is a lack of data and AI literacy. Understanding what these technologies can and cannot do is critical.

There have already been some strong developments in the health AI space to build that shared understanding and literacy. For example, Health Canada worked together with the U.S. Food and Drug Administration and the United Kingdom’s Medicines and Healthcare products Regulatory Agency in 2021 to develop guiding principles for good machine learning practices in health care. The guiding principles offer a values-oriented approach that can be applied to a variety of health AI projects.

Canada is also home to many experts in AI – both academically and those who are involved in deploying and using AI like we do here at Unity Health. Many of these experts, myself included, are involved in discussions and offering their practical knowledge to help inform regulations so Canadians can benefit from responsibly developed AI that helps to improve our health system.

By: Robyn Cox