Letters
Volume 52, Issue 6, June 2023

June 2023 correspondence

GP supervisor professional development

The article by Ingham et al, ‘Finding and addressing weaknesses in GP supervisor professional development’, is excellent and thought-provoking.1 I hope that the transition to profession-led training may lead to The Royal Australian College of General Practitioners and The Australian College of Rural and Remote Medicine contributing to, and agreeing on, a national curriculum for GP supervisor professional development. The in-practice quality improvement (QI)-based approach discussed in the article is well suited for use in any general practice, with the authors noting ‘that there is no single, correct way of supervising and teaching’. I look forward to the in-practice QI-based approach being adopted in a range of general practices soon, with the results evaluated and published. Such an approach would seem to fit neatly into the continuing professional development categories of ‘reviewing performance’ and ‘measuring outcomes’ for supervisors.

Author

Murry Ludington MB, BS, FRACGP, MRCGP, DA(UK), DRCOG, External Clinical Teaching Visitor, The Royal Australian College of General Practitioners, East Melbourne, Vic

Reference
  1. Ingham G, Clement T, Anderson K, Plastow K, Ruth D, Hayes A, Connor W. Finding and addressing weaknesses in GP supervisor professional development. Aust J Gen Pract 2023;52(1-2):70–74. doi: 10.31128/AJGP-03-22-6374.

A quick chat about ChatGPT

We are writing to start a chat about ChatGPT, discussing its potential uses and limitations in healthcare. It is important that general practitioners (GPs) in Australia are aware of the technology and have input into the debates that shape its future use.

ChatGPT is an artificial intelligence (AI) chatbot that has taken the world by storm since its release in November 2022. It is one of several iterations of the generative pre-trained transformer (GPT) series developed by OpenAI. It uses large language models (LLMs), trained on large volumes of internet text to ‘learn’ and predict patterns of human language to generate human-like responses.1 As of today (28 March 2023), a search of ‘ChatGPT’ already yields over 166 PubMed articles, and thousands more in Google Scholar.

ChatGPT and similar LLMs have wide implications for the medical field. ChatGPT has approached the ‘passing mark’ of the United States Medical Licensing Examination (USMLE).2 We conducted an informal experiment with ChatGPT on the recent AKT 2023.1 public exam report and found that it correctly answered four out of five multiple choice questions, providing plausible explanations for each answer. However, it is important to note that the responses generated are probabilistic, meaning that the same question can yield different answers – limiting the repeatability of such testing. Proposed clinical uses of ChatGPT include summarising individual electronic health records, assisting with documentation such as radiology reports, and assisting with multilingual communication.3 These uses pose exciting opportunities to boost clinician efficiency.

Although routine clinical use remains in its infancy, anecdotally, we have encountered patients who have started to symptom-check on ChatGPT. In our setting, medical students and GP registrars have started to experiment with ChatGPT as a study aid – with our local university explicitly banning ChatGPT for assessments. We also know of GP academics who have started to use ChatGPT for brainstorming, editing, and generating data analysis code across various programming languages (eg R, Python, Stata).

Nevertheless, ChatGPT is not without its limitations. Currently, it cannot access the internet, and its training data does not extend beyond 2021. More concerning are issues relating to ‘AI hallucinations’ and other factual inaccuracies, biases, and unethical uses of such technology.3–5 This has prompted calls for caution, urging for ethical and regulatory safeguards to be in place prior to clinical use.5 Yet a look at history reveals that disruptive technologies, such as the computerised medical record in the 1980s6 and the ‘World Wide Web’ in the 1990s,7 once posed similar challenges to the medical community.

To conclude, we pose the following questions: how do we optimise the use of ChatGPT in general practice to benefit us, our trainees, and our patients? Will these tools affect the doctor–patient interaction differently to existing web resources? We call on GPs to be part of the conversation and research shaping the future use of ChatGPT and similar LLM technology in healthcare.

Authors

Winnie Chen MBBS, MPH, FRACGP, Menzies School of Health Research, Darwin, NT; Lecturer, Flinders NT, Darwin, NT

Asanga Abeyaratne MBBS, MRCP(UK), FRACP, Digital Health and Informatics Principal Research Fellow, Menzies School of Health Research, Darwin, NT

References
  1. Eloundou T, Manning S, Mishkin P, Rock D. GPTs are GPTs: An early look at the labor market impact potential of large language models. arXiv preprint, 2023. Available at https://arxiv.org/abs/2303.10130 [Accessed 28 March 2023].
  2. Kung TH, Cheatham M, Medenilla A, et al. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digit Health 2023;2(2):e0000198. doi: 10.1371/journal.pdig.0000198.
  3. Shen Y, Heacock L, Elias J, et al. ChatGPT and other large language models are double-edged swords. Radiology 2023;307(2):e230163. doi: 10.1148/radiol.230163.
  4. The Lancet Digital Health. ChatGPT: friend or foe? Lancet Digit Health 2023;5(3):e102. doi: 10.1016/S2589-7500(23)00023-7.
  5. Harrer S. Attention is not all you need: The complicated case of ethically using large language models in healthcare and medicine. EBioMedicine 2023;90:104512. doi: 10.1016/j.ebiom.2023.104512. 
  6. de Dombal FT. Ethical considerations concerning computers in medicine in the 1980s. J Med Ethics 1987;13(4):179–84. doi: 10.1136/jme.13.4.179.
  7. Sonnenberg FA. Health information on the Internet. Opportunities and pitfalls. Arch Intern Med 1997;157(2):151–52.