
‘Extremely unwise’: Warning over use of ChatGPT for medical notes


Jolyon Attwooll


12/06/2023 1:43:28 PM

The use of tools like ChatGPT for medical notes has already been banned in places. Here, newsGP looks at the implications for general practice.

Since the launch of ChatGPT, there has been much discussion about its implications for healthcare. Image: REUTERS/Florence Lo.

Amid the ubiquitous coverage of ChatGPT and other large language models (LLMs) in recent months, the ABC ran a report on a Perth health service that has banned use of the software to write medical notes.

According to the broadcaster, the chief executive of Perth’s South Metropolitan Health Service sent the following message to staff after medical notes compiled using ChatGPT were uploaded to patient record systems.

‘Crucially, at this stage, there is no assurance of patient confidentiality when using AI bot technology, such as ChatGPT, nor do we fully understand the security risks,’ he wrote.

‘For this reason, the use of AI technology, including ChatGPT, for work-related activity that includes any patient or potentially sensitive health service information must cease immediately.’

To get a sense of the implications for general practice, newsGP asked Dr David Adam, a member of the RACGP Expert Committee – Practice Technology and Management (REC–PTM), for his view.
 
While Dr Adam does not believe the use of LLMs such as ChatGPT is common in general practice, he warned it would be ‘extremely unwise’ for doctors to use them to write referrals or medical notes. He elaborates on the risks in detail below.
 
The World Health Organization (WHO) expressed similar concern over the use of LLMs last month, when it published a call for the ‘safe and ethical’ use of AI in healthcare.
 
‘While WHO is enthusiastic about the appropriate use of technologies, including LLMs, to support healthcare professionals, patients, researchers and scientists, there is concern that caution that would normally be exercised for any new technology is not being exercised consistently with LLMs,’ a media release from the organisation stated.  
 
‘This includes widespread adherence to key values of transparency, inclusion, public engagement, expert supervision, and rigorous evaluation.’

Dr Owen Bradfield, a GP and Chief Medical Officer at indemnity insurance organisation MIPS, had the following view on the ABC report.

‘From a medicolegal perspective, a doctor tasked with completing a discharge summary already has a legal, ethical, and professional obligation to ensure that it is accurate and correct and that its creation and disclosure does not result in a breach of the doctor’s duty of confidentiality,’ he told newsGP.

‘These duties exist whether or not AI is used.’

Dr Bradfield says regulation of AI is complex and evolving.

‘I note that the Commonwealth Attorney-General’s department recently released a report into a Review of the Privacy Act that seeks to regulate data flows, which might include future uses of AI, by ensuring that people are made aware of how their data is being accessed and used,’ he said.
 
As well as putting questions to Drs Adam and Bradfield, newsGP also spoke to David Vaile, a UNSW lecturer in big tech, AI and the law, and the current Chair of the Australian Privacy Foundation. His views are below.
 
Dr David Adam’s thoughts in full:
Current large language models (LLMs) such as ChatGPT and GPT-4, marketed as ‘Artificial Intelligence’, are built by training on large amounts of public text, producing statistical models that can respond to prompts of various kinds.

A mobile phone’s predictive text can suggest the next word in a sentence; LLMs can suggest whole paragraphs or more at a time.
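
To make the analogy concrete, here is a toy illustration (the vocabulary and probabilities below are invented for illustration, not taken from any real system): a minimal Python sketch in which the ‘model’ is a hand-written table of next-word probabilities. Real LLMs learn these distributions from enormous corpora using neural networks, but the generation loop is the same: sample a likely next token, append it, repeat.

    # Toy next-word predictor. The vocabulary and probabilities are invented;
    # real LLMs learn distributions over tens of thousands of tokens.
    import random

    next_word_probs = {
        'patient': {'presented': 0.5, 'reports': 0.3, 'denies': 0.2},
        'presented': {'with': 0.9, 'to': 0.1},
        'with': {'chest': 0.4, 'fever': 0.35, 'headache': 0.25},
    }

    def generate(start, n_words):
        # Repeatedly sample a continuation, one word at a time.
        words = [start]
        for _ in range(n_words):
            dist = next_word_probs.get(words[-1])
            if dist is None:
                break  # no continuation learned for this word
            choices, weights = zip(*dist.items())
            words.append(random.choices(choices, weights=weights)[0])
        return ' '.join(words)

    print(generate('patient', 3))  # e.g. 'patient presented with fever'

Note that the model has no concept of truth: it only knows which words tend to follow which.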

Unfortunately, it is easy to infer understanding or competence from these models, but there have been many examples in recent months where the output that ChatGPT or similar programs generate is misleading, incomplete or plain wrong.

It is easy to confuse confidence with competence, and the output of these programs certainly appears extremely confident.

Users describe ‘hallucination’ or ‘confabulation’, where the program outputs plausible but false statements. A recent high-profile example is the lawyer who submitted court documents containing a number of fabricated arguments and references.

Importantly, tools like ChatGPT have not been designed or tested to provide diagnostic or therapeutic advice. We would expect doctors not to prescribe untested medications; we should expect the same restraint in using these tools for clinical work without appropriate safeguards. For now, they should largely be confined to research settings.

There are serious ethical questions regarding the production of these programs. The models are created by feeding huge amounts of text from the Internet and other sources to an algorithm. The provenance of that text is not well documented (the paper ‘On the Dangers of Stochastic Parrots’ is probably the best reference) and some have argued that this use breaches copyright provisions.

The output has been shown to hold significant racist and sexist bias, which won’t surprise anyone who has spent much time on social media. I encourage doctors to consider the ramifications of using programs which we know have been generated in a potentially harmful way and that can produce potentially harmful output.

Finally, there is a major privacy aspect to doctors using ChatGPT and other LLMs. The privacy policies of these programs often state that data submitted to them can be used for further training, and Samsung has banned their use after confidential data was exposed to others. Using ChatGPT to write referrals or notes on the presumption that leaving out the patient’s name is enough to preserve privacy is foolhardy.
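
A contrived example (the note and redaction pattern below are wholly invented) shows why stripping a name is not de-identification: the quasi-identifiers that make a note clinically useful can still identify the patient.

    import re

    note = ('Jane Citizen, 54F, reviewed 12/06/2023. Works as principal of '
            'Smithville Primary School. BRCA1 carrier, post mastectomy.')

    # Naive redaction: remove the name and nothing else.
    redacted = re.sub(r'Jane Citizen', '[NAME]', note)
    print(redacted)
    # Age, sex, review date, occupation and workplace all remain, so
    # submitting even the 'redacted' text to a third-party service is
    # still a disclosure of re-identifiable health information.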

Your practice’s standard consent form is unlikely to cover the use of these tools.

Overall, unless a patient has specifically consented to the use of an experimental service, with potential privacy implications, I would encourage doctors to avoid LLMs despite the allure of the flowery text they produce.
 


Interview with David Vaile, Chair of the Australian Privacy Foundation
According to Mr Vaile, who lectures in big data, ethics and AI at UNSW, much of the recent coverage of the technology has lacked balance.
 
‘One criticism I would note is the generally breathless, uncritical tone towards the prospects of various sorts of AI,’ he told newsGP. ‘They’re very different, they’re very variable.’
 
He labels those behind LLMs such as ChatGPT as ‘practitioners of the cult of disruption’.
 
‘They’re about distraction, and breathless hype and promotion … it’s off the dial,’ he said.
 
‘There’s a thing called the hype cycle … and it reaches a peak of inflated expectations, then it drops off when people realise nothing could be as good as what they promoted.
 
‘We’re sort of at that peak at the moment.’
 
Mr Vaile said he helped set up a new course on big tech, AI and the law, which ran for the first time about four months ago.
 
‘Everything was changing under your feet every day,’ he said. ‘The regulatory environment is not clear, that’s another issue.
 
‘But the tech, the implications, the functionality, basically it’s been overturned in the last six months.’
 
Like Dr Adam, he raises the issue of false statements generated by LLMs, comparing the technology to a ‘very confident and randomly completely wrong con artist’.
 
‘It’s digesting a vast amount of material, a lot of it rubbish, a lot of it private, a lot of it proprietary,’ he said.
 
‘Of the many different issues and regulatory and legal problems that are thrown up, the copyright, intellectual property and confidentiality ones are huge.’
 
While Mr Vaile acknowledges the benefits of using AI for very specific purposes, giving the example of its use in radiology to pick up subtle imaging changes that even experienced humans may miss, he makes the point that it is not generative AI at play in these instances.
 
‘The more generalised you are expecting it to do … the range of ways that something can go wrong are wider,’ he said.
 
Like Dr Adam, Mr Vaile sounds a strong note of caution about the level of transparency applied by companies behind the LLMs.
 
‘The secrecy behind which the very small number of international mega corporations that are doing this stuff … you basically can’t tell what they intake, how they process it, what they do with your stuff,’ he said.
 
‘They’re completely opaque.
 
‘There’s the secrecy but also the omnivorous, indiscriminate sort of appetite.
 
‘That’s what you’re dealing with. Personal medical information is the most sensitive form of data, it’s got a special legal category [in] privacy law all around the world.’
 
He describes the use of such data as ‘the nuclear fusion version of information in terms of potential richness and value, but also hazard and risk, and potential harm’.
 
The speed of change should be another reason to pause, he says.
 
‘It suggests we should be using the precautionary principle, rather than move fast and break things.’
 
The RACGP’s position statement on the use of artificial intelligence in primary care was published in March 2021 and is available on the college’s website.
 
Comments

Dr GdM   13/06/2023 7:07:20 AM

Concerns about AI in healthcare are valid, but what about the risks of not evolving with technology?

Healthcare, like any other industry, must evolve and adapt to technological advancements. By embracing these technological innovations while simultaneously addressing security concerns, healthcare can unlock the transformative power of AI, ultimately leading to better patient outcomes and improved efficiency.

We can't overlook the risks of not evolving with technology. By striking the right balance between technological progress and risk management, we can ensure that healthcare remains at the forefront of innovation while maintaining high standards of patient care and ethical practice. It is essential to actively engage with technology, evaluate its impact, and adapt policies accordingly.


Dr Matthew Piche   13/06/2023 7:18:34 AM

Good points raised in this article, but it’s pretty one-sided. Has anyone asked ChatGPT its opinion?


Dr Jimmy Tseng   13/06/2023 8:07:12 AM

As usual, a very biased article about new technology. There is nothing noted here about turning off ChatGPT’s Chat History & Training setting, which OpenAI explicitly states prevents prompts and results from being sent for training or recording.

Neither does this article explain how newer models, such as GPT-4, are less “hallucinogenic”.

Other than referrals, there doesn’t seem to be anything here about ways ChatGPT can routinely help the GP. What about writing NDIS or WorkCover letters? Writing up factsheets - especially for topics that don’t have an existing fact sheet. Also how about summarising and formalising medical notes?

Similar to when Google started offering search, it’s a new technology that needs to be understood to be used correctly. Google search has the same issues with privacy; worse still, your location is recorded.


Dr WC   13/06/2023 9:26:23 AM

Agree with a few comments here - right to be cautious, but really the focus should be on how to use ChatGPT/LLM technology better for our work (med notes or otherwise). And as GPs, to be part of that conversation & development of new tools.

We wrote a letter to AJGP about this recently (June 2023 correspondence):

" Yet a look at history reveals that disruptive technology, such as the computerised medical record in the 1980s and the ‘World Wide Web’ in the 1990s, once posed similar challenges to the medical community...

To conclude, we pose the questions – how do we optimise the use of ChatGPT in general practice to benefit us, our trainees, and patients? Will these tools impact the doctor–patient interaction differently to existing web resources? We call on GPs to be part of the conversation and research in shaping the future use of ChatGPT and similar LLM technology in healthcare."

https://www1.racgp.org.au/ajgp/2023/june/june-2023-corresponden


Dr E   13/06/2023 8:23:24 PM

Interesting article, interesting times.

I programmed a basic GPT model from scratch by following this guide: https://www.youtube.com/watch?v=kCc8FmEb1nY, which was created by one of the original programmers. The key point is that there’s nothing stopping a health agency from training a GPT model on high-quality data, such as a State EMR, with appropriate legal/data protection measures in place. Or they could use a selected portion of those records. This, coupled with alignment by medical specialists, could result in a highly tuned GPT model that provides exceptionally accurate answers.
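
For readers curious about the ‘from scratch’ step, the linked guide begins with a bigram language model before building up to a transformer. The sketch below (the tiny corpus and settings are invented here, not taken from the guide or any EMR) shows that first step in PyTorch.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    text = 'plan: review bloods. plan: repeat ecg. plan: review bloods.'
    chars = sorted(set(text))
    stoi = {ch: i for i, ch in enumerate(chars)}
    data = torch.tensor([stoi[ch] for ch in text])

    # Bigram model: each token's embedding row holds the logits for the
    # next token, so the lookup table itself is the entire 'language model'.
    model = nn.Embedding(len(chars), len(chars))
    opt = torch.optim.AdamW(model.parameters(), lr=0.1)

    for step in range(200):
        logits = model(data[:-1])                 # predict token i+1 from token i
        loss = F.cross_entropy(logits, data[1:])
        opt.zero_grad()
        loss.backward()
        opt.step()

    print(loss.item())  # falls as the model memorises bigram statistics

Scaling this up to real clinical data would, of course, raise exactly the consent, provenance and evaluation issues discussed in the article above.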