AI won’t deliver care, but it will augment it

Large language models (LLMs) like ChatGPT are being used across the economy. But while they can make healthcare better, how can you use them ethically and safely? We chat to Charles Cross, co-founder of Emma AI, to find out more.
Cross says the sector has, in general, come on leaps and bounds, with an increasing amount of digitalisation. However, he says it still makes use of only 3% of its healthcare data.
He explains this remaining 97% is being unlocked by LLMs: “That’s because the majority of healthcare data is stored in natural language, i.e. it’s written down. We have the ability to analyse numbers, but until recently didn’t have the same ability to analyse words.”
Emma AI had impressive results even before the advent of LLMs. “We initially started Emma as a 24/7 care team support app. Care professionals could ask any question they had about the organisation or the people they were caring for. Later on we started sending predictive and preventative insights to team members at the time they were delivering care, based on the last seven days of care delivery.
“Through analysing this natural language data we could give insight into things like skin viability, the progression of pressure sores, changes in mood, their interaction with family members, changes in medication, their hydration, nutrition, and reactions to their daily routine. It helps care workers see what’s worked well and where there are concerns.
“This was of huge value to the frontline care team. We found there was a 59% reduction in out-of-hours calls to the central office because through the app the care workers could get the right information at the right time.”
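Emma AI’s internal pipeline is not public, but a minimal sketch of this kind of workflow, under our own assumptions, might look like the following: gather the last seven days of free-text notes and prompt a model to flag the concerns Cross describes. The field names and the `llm_complete` client are illustrative, not Emma AI’s actual code.

```python
from datetime import date, timedelta

def recent_notes(all_notes: list[dict], days: int = 7) -> list[dict]:
    """Keep only the notes written within the last `days` days."""
    cutoff = date.today() - timedelta(days=days)
    return [n for n in all_notes if n["date"] >= cutoff]

def insight_prompt(notes: list[dict]) -> str:
    """Ask the model to flag care concerns across free-text notes."""
    joined = "\n".join(f"{n['date']}: {n['text']}" for n in notes)
    return (
        "You are assisting a care team. From the notes below, flag any "
        "changes in skin viability, pressure sores, mood, medication, "
        "hydration, nutrition or daily routine, citing note dates.\n\n"
        + joined
    )

# insights = llm_complete(insight_prompt(recent_notes(notes)))  # assumed LLM client
```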
LLMs make this process quicker, but the new technology has also enabled Emma AI to take on new capabilities. In the area of administration and improving care quality, Cross says Emma AI can compile six months of care data, alongside any written external feedback and the audio of a meeting between a manager and worker, to create documents reviewing a care worker’s performance.
LLMs can also audit the impact of medications and operational efficiencies.
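Taking the performance review example, and again as a hedged sketch rather than Emma AI’s actual method: compiling such a review amounts to assembling the disparate sources into one grounded prompt. The function below is illustrative and assumes the meeting audio has already been transcribed by a separate speech-to-text step.

```python
def build_review_prompt(care_notes: str, external_feedback: str,
                        meeting_transcript: str) -> str:
    """Combine six months of care records, written feedback and a
    manager-meeting transcript (transcription assumed upstream) into
    one grounded prompt for a review draft."""
    return (
        "Draft a balanced performance review for a care worker using:\n\n"
        "--- Six months of care records ---\n" + care_notes + "\n\n"
        "--- Written external feedback ---\n" + external_feedback + "\n\n"
        "--- Manager meeting transcript ---\n" + meeting_transcript + "\n\n"
        "Highlight what has worked well and any concerns, with evidence."
    )

# review_text = llm_complete(build_review_prompt(notes, feedback, transcript))
```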
While there has been some talk of using LLMs for clinical purposes, such as talk therapy, Cross sees limited application for LLMs on the delivery side of social care. He says: “AI can’t deliver care, and even if it could we shouldn’t allow it to impact the personal side of care. We have to remember what it is. But it can augment delivery.”
He believes this is essential: “Over the next 20 years, the cost of providing healthcare is forecast to double. We currently have over 300,000 vacancies within health and social care. With the projected increase in the number of people we need to care for, we’re going to be in an even greater deficit. I don’t think augmenting people with AI, and actually filling some of these vacancies with AI, is the problem.
“I actually think we’ve got a much bigger problem of trying to drive supply in a really tough macroeconomic environment. AI can actually redirect resources to frontline care delivery, which is great because having that personal touch is essential to providing any personalised care.”
How about the safety side of this? Cross says Emma AI is compliant with both GDPR and SOC 2 Type 1 and Type 2. But more interesting is the “zero data retention policy”, which he says requires more thought than care providers sometimes appreciate.
He explains: “A lot of providers who are using platforms like ChatGPT or Gemini are unknowingly using the public access, which means those foundational models train on their data. We have gone through the process and safeguards with the foundational models to give us zero data retention policies. This is important because these platforms don’t automatically ringfence and delete data, as it costs them money to do that.”
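Zero data retention is agreed contractually with the model provider rather than switched on in code, but one safeguard a provider could build, sketched here purely as our own assumption with hypothetical endpoint names, is an application-level gate that refuses to send care data anywhere not covered by such an agreement.

```python
# Illustrative allow-list gate: refuse to send care data to any LLM endpoint
# that is not covered by a contractual zero-data-retention agreement.
ZDR_APPROVED_ENDPOINTS = {
    "https://llm.example-provider.com/v1",  # hypothetical ZDR-covered endpoint
}

def send_to_llm(endpoint: str, payload: str) -> str:
    if endpoint not in ZDR_APPROVED_ENDPOINTS:
        raise PermissionError(
            f"{endpoint} has no zero-data-retention agreement; refusing to send."
        )
    # The actual API call would go here; ZDR itself is agreed with the
    # provider contractually, not enabled by a code flag.
    return "sent"
```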
There is also the matter of ensuring the LLM doesn’t hallucinate information. Successful adoption of AI requires the information to be visible, transparent and explainable. Cross tells us: “We have 98.7% context recall, but in 1.3% of cases Emma will say she doesn’t have the information when she actually does.
“As such we have an in-house expert on the books who analyses all of Emma’s answers, taking Emma AI’s responses to 100% accuracy. To help with this, under every single answer Emma gives you can go into the backend and see the source, and with another click you can see that source within the context of the full document it was taken from. This transparency and visibility allows confidence in adoption, which is vital to seeing a positive impact from this technology.”
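As a rough illustration of how a figure like 98.7% context recall might be measured (the metric definition and field names here are our assumption, not Emma AI’s published methodology): score labelled test cases by whether the model answered when the answer genuinely was in its context, and hand the false “don’t know” cases to the human reviewer.

```python
def context_recall(cases: list[dict]) -> tuple[float, list[dict]]:
    """Share of answerable cases actually answered, plus false refusals."""
    answerable = [c for c in cases if c["answer_in_context"]]
    false_refusals = [c for c in answerable if not c["model_answered"]]
    recall = 1 - len(false_refusals) / len(answerable)
    return recall, false_refusals

cases = [
    {"id": 1, "answer_in_context": True, "model_answered": True},
    {"id": 2, "answer_in_context": True, "model_answered": False},  # false "don't know"
]
recall, to_review = context_recall(cases)
print(f"context recall: {recall:.1%}, cases for human review: {len(to_review)}")
```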
He adds: “There has been a lot of discussion about the ethics of AI, but for me the key thing is whether you’re using your own database and dataset, whether you can train a model with extremely high accuracy and context recall, and whether it can give useful information. We’ve gone through that process, and in health and social care you need to.”
Could Emma AI be carried into other sectors of health? And will it be? Cross agrees that much of the data collected by, for example, a GP will also be natural language. While he doesn’t directly confirm that Emma AI will expand into these sub-sectors, he points out that the flow of data between health and social care needs to improve so that quality of care and patient experience can too.