Photo: Kim Perry
There’s been a lot of hype around generative artificial intelligence tools such as ChatGPT over the past year or so. But for all the emerging healthcare use cases involving large language models, it’s worth remembering that one major value of LLMs is their ability to improve natural language processing capabilities that have been around for decades.
Healthcare organizations need AI systems that can ingest and understand an entire patient chart, or even an entire organization's EHR system, and genAI tools are well positioned to help.
We interviewed Kim Perry, chief growth officer at emtelligent, a medical NLP company, about the relationship between generative AI, large language models and NLP; the need for AI systems that can understand an entire chart or an entire EHR system; and how "medical-grade AI" can improve patient outcomes and help prevent clinician burnout.
Q. There is something of a misunderstanding about the power of generative AI and large language models: in fact, they are now powering natural language processing, another form of AI that predates them. Please talk about this relationship.
A. It's amazing to think that less than a year ago, ChatGPT and generative AI were essentially unknown to the general public. Now ChatGPT has more than 100 million users around the world, spanning consumers and entire industries. Unfortunately, a lot of people form opinions on emerging technologies based on initial impressions and casual reading, and because genAI and large language models are new topics for many consumers and companies, some confusion is inevitable.
Natural language processing predates generative AI and recent advances in LLMs. However, we now are seeing genAI and LLMs being used to create what we call “medical-grade AI,” or medical-grade NLP. LLMs are being trained specifically on clinical data and, unlike traditional NLP, medical-grade AI is capable of unlocking the 80% of medical data currently hidden in unstructured text.
The ability of medical-grade AI to process and understand millions of documents will transform how clinicians do their jobs at the point of care. But its value will extend across the healthcare continuum to benefit not just providers, but payers, pharma, life sciences and academic researchers.
Q. AI like ChatGPT is great for answering a single question, but you suggest provider organizations need systems that can understand an entire chart or an entire EHR system. Please elaborate.
A. GenAI is great for answering a single question, but only if the answer actually is correct. Too often, applications like ChatGPT invent facts, or "hallucinate." That's simply unacceptable in a situation where clinicians must make evidence-based decisions at the point of care.
If providers can't trust a source of information, they'll stop using it. Further, genAI can drown users in a firehose of keyword-driven data, making it difficult for clinicians to find the information they need.
Providers need tools that can access, understand and contextualize patient information from a single chart or across an EHR system. This is where traditional NLP has fallen short; it can’t understand all that unstructured data.
Medical-grade AI leverages the advances in machine learning and LLMs I mentioned earlier to allow provider organizations to unlock the value of unstructured data such as free-text notes. For example, medical-grade AI can understand medical terminology, including acronyms and slang, that is indecipherable to traditional NLP.
And since medical-grade AI is built for enterprises, it can process and understand millions of documents. This ability to scale is critical as the volume of health data continues to grow.
Q. How can what you call “medical-grade AI” improve patient outcomes?
A. Medical-grade AI can improve patient outcomes by giving clinicians the information they need, when they need it, and in an easily consumable form, thus enabling them to provide more effective care.
When clinicians are unable to access patient information buried in unstructured data, such as a previous procedure, a chronic condition or a severe allergy to a medication, they lack a holistic view of the patient. This can lead to mistakes in diagnosing and treating patients that result in negative outcomes.
Conversely, when medical-grade AI generates automated patient history summaries, clinicians at the point of care have immediate access to information that enables insights into that patient’s health and well-being. This is especially valuable when a doctor is seeing a patient for the first time.
As I mentioned earlier, clinicians won’t use tools they don’t trust because they can’t verify the information they’re being given. Medical-grade AI overcomes that concern by linking information in a patient summary back to the original data in that patient’s chart, allowing clinicians to review context and verify sources for accuracy.
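To illustrate what that kind of source linking can look like, here is a minimal, hypothetical sketch in Python. It is not emtelligent's implementation; the field names and chart document IDs are invented for illustration. The idea is simply that every generated summary statement carries pointers back to the spans in the chart that support it, so a clinician can verify the claim.

```python
# Illustrative only: one hypothetical way a generated summary statement could
# carry links back to its source evidence in the patient chart.
from dataclasses import dataclass, field

@dataclass
class SourceSpan:
    document_id: str  # hypothetical chart document identifier
    start: int        # character offset where the supporting text begins
    end: int          # character offset where the supporting text ends

@dataclass
class SummaryStatement:
    text: str                                                 # the generated summary sentence
    sources: list[SourceSpan] = field(default_factory=list)   # evidence a clinician can review

statement = SummaryStatement(
    text="Documented severe penicillin allergy (noted April 2019).",
    sources=[SourceSpan(document_id="allergy_note_2019-04-12", start=112, end=170)],
)

# A reviewing clinician (or the UI) can walk each statement's sources to verify it.
for span in statement.sources:
    print(f"Verify against {span.document_id}, characters {span.start}-{span.end}")
```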
Q. How can “medical-grade AI” prevent clinician burnout?
A. Clinicians spend far too much time looking for and sifting through patient data. This creates a lot of stress for providers because they want to be interacting with patients, not staring at a computer screen or scanning for specific information buried in a lengthy patient chart.
Medical-grade AI offers powerful capabilities such as context-sensitive search and automated summaries that substantially improve workflows. By striking the right balance between recall and precision, medical-grade AI enables clinicians to be more efficient and effective in treating patients.
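For readers who want to make the recall-and-precision trade-off concrete, here is a short, purely illustrative Python sketch; the note IDs and penicillin-allergy scenario are hypothetical, not drawn from any product. Recall measures how many of the truly relevant notes a search surfaces, while precision measures how many of the surfaced notes are actually relevant.

```python
# Illustrative only: toy precision/recall arithmetic for a chart-search scenario.
# The note IDs and allergy example are hypothetical, not real patient data.

def precision_recall(retrieved: set[str], relevant: set[str]) -> tuple[float, float]:
    """Return (precision, recall) for a set of retrieved documents."""
    true_positives = len(retrieved & relevant)
    precision = true_positives / len(retrieved) if retrieved else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    return precision, recall

# Suppose 4 notes in a patient's chart truly mention a penicillin allergy...
relevant_notes = {"note_03", "note_11", "note_27", "note_40"}

# ...and a search tool surfaces 5 notes, 3 of which are truly relevant.
retrieved_notes = {"note_03", "note_11", "note_27", "note_08", "note_19"}

precision, recall = precision_recall(retrieved_notes, relevant_notes)
print(f"precision = {precision:.2f}, recall = {recall:.2f}")  # precision = 0.60, recall = 0.75
```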
Better workflows help minimize burnout because clinicians don’t feel as if they are struggling constantly to keep up with their patient workloads. It only makes sense that when clinicians have smarter digital tools at their desks and at the examination table, they will be less frustrated and better able to practice at the top of their license. And that’s what all doctors want to do.
Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: [email protected]
Healthcare IT News is a HIMSS Media publication.