Will generative AI have a role in general practice?

Jolyon Attwooll

21/06/2023 3:27:16 PM

When a newsGP article looked into privacy concerns around the use of tools such as ChatGPT, readers wondered about the broader picture.

The potential for tools such as ChatGPT to help GPs and patients is being assessed. Image: AAP Photos

Last week, newsGP published an article examining the use of tools such as ChatGPT for writing medical notes and referrals, prompted by an ABC report of a ban on the technology in five Perth hospitals.
Three interviewees with expertise in technical, privacy and medico-legal fields shared their views, with the consensus being that the concerns are valid and the technology should be used with considerable caution.
While its focus was on the narrow area of privacy, the article led several readers to comment on the wider potential uses for the technology.
One prompted a laugh.
‘Good points raised in this article, but it’s pretty one sided,’ a reader wrote. ‘Has anyone asked ChatGPT its opinion?’
Others expressed frustration that the potential of the technology was not addressed in detail.
‘Biased… and overly simplistic,’ one said.
‘Concerns about AI in healthcare are valid, but what about the risks of not evolving with technology?’ another responded.
‘By striking the right balance between technological progress and risk management, we can ensure that healthcare remains at the forefront of innovation while maintaining high standards of patient care and ethical practice.
‘It is essential to actively engage with technology, evaluate its impact, and adapt policies accordingly.’
Correspondence in the latest edition of The Australian Journal of General Practice (AJGP) had a similar message.

Dr Winnie Chen, a GP completing her PhD in health informatics and health economics, and Associate Professor Asanga Abeyaratne, a Digital Health and Informatics Principal Research Fellow at the Menzies School of Health Research, co-authored a letter outlining possibilities and concerns.
They reported an ‘informal experiment’ in which ChatGPT correctly answered four out of five multiple choice questions from the recent AKT 2023 exam – noting too that different answers could be generated for the same question, thus ‘limiting the repeatability of such testing’.
They also wrote of ‘exciting opportunities to boost clinician efficiency’, which could include summarising individual electronic health records, assisting with documentation such as radiology reports, and multilingual communication.
Acknowledging concerns about ‘AI hallucinations’, biases, and the potential for unethical uses, they called on GPs ‘to be part of the conversation and research’ in shaping the healthcare uses of ChatGPT and technology like it.
‘How do we optimise the use of ChatGPT in general practice to benefit us, our trainees, and patients?’ they wrote.
To delve into that further, newsGP approached Dr Chen to expand on those views, which she did, with some input from her co-author.
Here are the questions we sent, and the answers in full.

Uses for general practice 
newsGP: I won’t ask you to look into a crystal ball too much as I know how difficult predictions are with a technology that is evolving like this. But if you can offer any broad sense of how likely it is that large language models (LLMs) will be used as part of general practice, that would be great.
Dr Chen says there are several possible areas for its use. Below is her reply in full:
I see immediate uses for GP researchers. As a GP researcher, I use LLMs every day to assist with coding for data analysis and with writing tasks (e.g. brainstorming, outlining documents, wordsmithing final drafts).
Now or in the near future, I see patient-facing LLM chatbots being an area of interest. Many medical-specific chatbots are being developed – e.g. Med-PaLM (Google). As mentioned in the letter, patients already seek answers from ‘Dr Google’, so it is unclear whether such chatbots will have additional impacts on the doctor-patient relationship.
Medical education
Now or in the near future, I see LLMs being used for education purposes – on the student’s part to study medical content or on the educator’s part to assist with delivering education: e.g. creating multiple choice questions, short answer questions, clinical cases. JMIR Publications is running a series about use of LLMs in medical education.
This is of most interest to clinicians, but I think developing tools for routine clinical use still requires some work.
For GPs, the greatest potential time-saver, in my view, is incorporation into clinical software to summarise individual information, assist with documentation, and letter/report generation.
There is also potential for new forms of clinical decision support incorporated into our clinical software. But this may take years to develop from both a technological and governance perspective.
Right now, stakeholders in Australia have just started to discuss a national approach to governance of the AI sector (see this recent MJA article), including regulations to mitigate risks of AI (see reference from our letter).
Data and privacy are difficult issues to navigate in healthcare, and change can take time. We are still using faxes (see previous newsGP article)!

 New technology can help healthcare progress, but it often takes time to adopt.

Associate Professor Asanga Abeyaratne also writes:
LLMs will be widely adopted in every domain where synthesis is required, and medicine is no exception.
The important thing is that the entity providing the service (for instance OpenAI) should not disclose (or sell) user data to third parties with other interests. It is important that this is made clear in legislation.
Until such [time], it is advisable to strip out personally identifiable information from data submitted.
Being part of the discussion
newsGP: You mentioned optimising the use of ChatGPT for the benefit of general practice – do you have any views on this yourself? What about the measures that should be taken to ensure general practice is part of the conversation?  
Dr Chen replies:
As a starting point I think GPs – and doctors in general – should be familiar with the technology. The best way is to try it for ourselves, to test what it can and can’t do well.
Not all individual GPs will be interested in using or developing LLM technology for routine clinical use, and that is okay. 
But let’s not stop at ‘AI isn’t as good as a clinician’, ‘AI is inaccurate’, ‘AI is dangerous’. No tool is perfect and all new health technology brings new challenges.
Let’s think together – how can it help with my work? What needs to be developed next? What needs to be done to make it more accurate? How do we use it ethically and appropriately?
Without familiarity with LLMs we are missing out on opportunities to contribute to discussions with our colleagues (medical students, trainees, other doctors, health managers etcetera) about its use.
Being well informed also means we have our input into industry developing tools that might benefit general practice, as well as have a say in professional bodies seeking to have a position on the ethics/regulations of LLMs, and shape the research and evaluation of such technologies.
Privacy concerns
newsGP: Please feel free to comment on the area of privacy raised in the original article.
Dr Chen replies:
Entering patient information into ChatGPT, even with the chat history turned off, is a legitimate concern. Therefore, any identifiable information should be removed in prompts (e.g. dot points to generate a patient letter).
When we consider potential breaches of privacy, let’s consider ChatGPT/LLMs in light of what’s already being used.
What about search engines? GPs use them every day. When you are logged in, search engines record your search history unless you specify otherwise.
What about social media platforms where GPs discuss anonymised cases with patient permission (e.g. GPs Down Under on Facebook)?
I think it is right to be aware and concerned about the potential privacy breaches and be reminded of a doctor’s obligation to protect patient privacy – particularly identifiable patient information.
However, that should not mean we shun all such technology and miss out on its potential benefits in our field. Furthermore, LLMs can be implemented within rather than external to health services, and that’s a potential area to explore.
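Dr Chen’s advice above – remove identifiable information before prompting – can be sketched as a simple pre-processing step. The sketch below is a minimal illustration only, and the patterns, placeholder labels and coverage are assumptions rather than a vetted de-identification tool:

```python
import re

# Illustrative patterns for obvious identifiers in an Australian GP context.
# These are assumptions for demonstration, not a complete or validated set.
PATTERNS = {
    "NAME": re.compile(r"\b(?:Mr|Mrs|Ms|Dr)\.?\s+[A-Z][a-z]+\b"),  # title + surname
    "MEDICARE": re.compile(r"\b\d{4}\s?\d{5}\s?\d\b"),             # 10-digit Medicare-style number
    "PHONE": re.compile(r"\b(?:\+61|0)\d(?:[ -]?\d){8}\b"),        # Australian phone formats
    "DOB": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),             # dd/mm/yyyy dates
}

def redact(prompt: str) -> str:
    """Replace matched identifiers with placeholder tags before the prompt leaves the clinic."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

note = "Dr Smith, DOB 03/04/1957, phone 0412 345 678, presents with cough."
print(redact(note))  # [NAME], DOB [DOB], phone [PHONE], presents with cough.
```

In practice, pattern-based redaction misses free-text identifiers such as names without titles or street addresses, so manual review of anything pasted into an external tool remains essential.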
Dr Bruce Maybloom   22/06/2023 9:28:19 AM

20 years ago, General Electric had a really cool, short upbeat clip set to clubbing music about diagnostics with the message "Give a drop of blood and walk through a door". The doorway being rapid sequencing whole-body MRI. Note taking is child's play for modern technologies. Of course there is more to general practice than that. But as the GP shortage increases, there will be a greater need and role for AI. Glad I didn't do pharmacy, pathology or non-interventional radiology as those jobs may get slammed by AI.

Dr Jimmy Tseng   22/06/2023 9:33:18 AM

Thank you for this article. The general problem I see is anxiety among tech-illiterate doctors and owners, who try to place restrictions without understanding the uses and limitations of the technology. It is understandable too – all these acronyms (LLM, GPT).

Restrictions can be appreciated in a hospital environment due to its complexity, but in a community setting many policies are dictated by someone who has read only the headline – and unfortunately many recent articles gloss over the benefits and use cases, especially when there are solutions to manage the privacy and accuracy problems.