Healthcare not only accounts for a large part of the US economy (about 20%) but also, usually, enjoys decent profit margins. So it should be no surprise that entrepreneurs have an interest in using AI to tackle healthcare problems.
From reading X-rays, to predicting heart disease, to modeling protein binding, AI applications have enjoyed considerable success in specialized areas. This article discusses the potential for using AI in medical transcription which, if successful and interconnected, could improve healthcare across the board.
The Problem
For medical, legal, billing, and payment purposes, accurate medical records are essential. And in the current era of ‘big data’, such records, if accurate and available, should yield significant advances in medical knowledge when mined. Even before data mining was a consideration, many medical procedures were already coded, and this will aid future data mining.
But this raises the question: are the records accurate? The question has been researched for at least 50 years. On the patient side, a recent survey
finds that almost half of patients had to correct some sort of inaccuracy in their medical data. Of course, this only reflects data that they have actually seen. The fact that the correction rate falls with age and, presumably, with computer savvy (61% for Gen Z versus 11% for Boomers) strongly suggests the actual error rates are much higher. And again, this refers only to data patients can see, which rarely includes transcripts.
On the doctor end, despite decades of attention, significant error rates are still found in transcripts, as in
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7647276/
Given that the errors reported there lean strongly to over-billing rather than under-billing, it is conceivable that these errors are features, not bugs. Thus, you would not expect them to go away without monitoring.
Positives
1) The technology is already here. Current large language models (LLMs) can process sophisticated text. And while the precursor to text analysis, speech recognition, is still not perfect, it has improved considerably (e.g.,
https://www.abbadox.com/blog/impact-voice-recognition-radiology-report-accuracy).
Most importantly, on many benchmarks automated speech recognition now matches or outperforms human transcribers. (A minimal sketch of such a transcription pipeline appears after this list.)
2) Doctors need not take notes and so can focus on listening to and examining the patient and discerning their apparent affect (frail, nervous, slow, afraid, stoic, brave, etc.). They will also simply have more time to talk with and examine the patient.
3) The records will be more accurate (as described earlier, current records have significant errors). This will not only allow the doctor to assess a patient’s progression over time more accurately, but will also allow the data to be aggregated across patients and studied statistically. I suspect an enormous number of unknown disease markers and inter-disease, inter-symptom, and symptom-disease correlations will be discovered this way (a toy example of such mining is sketched after this list).
Improved data accuracy starts a virtuous cycle. Once statisticians believe the data is good enough to allow reliable inferences, given the huge sums available for disease cures, there will be a rush to integrate and mine this data.
4) The record may be discoverable legally. This works both ways --- it could indict doctors for missing something or exonerate them. In either case, it should speed litigation and identify problem doctors.
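To make point 1 concrete, here is a minimal sketch of the kind of transcription pipeline involved, using the open-source Whisper model for speech recognition and a placeholder for the LLM step. The audio file name, the model choice, and the shape of the draft note are illustrative assumptions, not a description of any particular product.

```python
# Minimal sketch of a visit-transcription pipeline (illustrative only).
# Assumes the open-source `openai-whisper` package is installed; the audio
# file name and the note-structuring step are placeholders.
import whisper


def transcribe_visit(audio_path: str) -> str:
    """Convert a recorded doctor-patient conversation to raw text."""
    model = whisper.load_model("base")   # small, CPU-friendly model
    result = model.transcribe(audio_path)
    return result["text"]


def structure_note(transcript: str) -> dict:
    """Placeholder for the LLM step: turn the raw transcript into a draft
    structured note (symptoms, diagnoses, billing codes) for the doctor
    to review and sign. A real system would call an LLM here."""
    return {"transcript": transcript, "symptoms": [], "draft_codes": []}


if __name__ == "__main__":
    text = transcribe_visit("visit_recording.wav")   # hypothetical file
    note = structure_note(text)
    print(note["transcript"][:200])
```

The design point is that speech recognition only produces raw text; the value comes from the LLM step that turns that text into a structured, auditable note the doctor reviews and signs.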
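And to illustrate the kind of statistical mining point 3 envisions: once transcripts are reduced to structured records, symptom-disease associations can be measured directly. The column names and data below are invented purely for illustration.

```python
# Toy sketch of symptom-diagnosis association mining over structured
# records extracted from transcripts. Column names and data are invented.
import pandas as pd

records = pd.DataFrame(
    {
        "patient_id":  [1, 2, 3, 4, 5, 6],
        "fatigue":     [1, 1, 0, 1, 0, 0],
        "joint_pain":  [1, 0, 0, 1, 0, 1],
        "diagnosis_X": [1, 1, 0, 1, 0, 0],
    }
)

# Pairwise correlations between symptom flags and the diagnosis flag.
print(records[["fatigue", "joint_pain", "diagnosis_X"]].corr())
```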
Problems
1) Hospitals: Knowledge is power. Hospitals have it, you don’t, and they want to keep it that way. Hospitals resist posting information concerning patient outcomes. And even when the law requires posting information, as it does with respect to prices, they routinely flout the requirement. Data mining hospital records opens up a whole new area of resistance.
2) Doctors: Very few of us sound particularly intelligent in transcripts, especially in unscripted conversation. Doctors may fear their words, or lack thereof, could be used against them in reviews or in court. And, as suggested by the paper cited above (and more recent findings are more than suggestive), some doctors are creating fake diagnoses to increase billing. Regardless of whether this is due to pressure from above or pure self-interest, having AI audit the transcript to see whether diagnoses and treatments are justified may be resisted.
3) Privacy: Ostensibly, privacy is easy --- patients get a new, made-up number that identifies their records but not them (a minimal sketch of such pseudonymization appears below). But somewhere, there is at least one record that matches this number with their actual identity. And most likely there will be many such records in the databases of multiple providers, because that’s just easier. Odds are some will be hacked. Of course, this is not news, e.g.,
https://www.wired.com/story/hospital-hack-300-million-patient-records-leaked/
But the transcripts will be more personal than a mere test result: a transcript can reveal a lot about a person’s character. This applies to patients and doctors alike.
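Here is a minimal sketch of the pseudonymization described above, assuming a keyed HMAC: the same identity always maps to the same record number, and the number cannot be reversed without the key. The key handling and identity format are illustrative assumptions; the key (or any assignment table) is exactly the record that still links number to identity, and therefore the thing that can be hacked.

```python
# Minimal sketch of keyed pseudonymization for patient records.
# The secret key is the "one record" that still links pseudonym to identity;
# whoever holds it (or a table of assignments) can re-identify patients.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-securely-stored-key"  # illustrative only


def pseudonym(patient_identity: str) -> str:
    """Derive a stable, non-reversible record number from a patient identity."""
    digest = hmac.new(SECRET_KEY, patient_identity.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]


# Example: the same identity always yields the same pseudonym,
# but the pseudonym reveals nothing without the key.
print(pseudonym("Jane Q. Public, 1980-04-01"))
```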