News

Regulation ‘must keep pace’ with AI revolution: RACGP


Alisha Dorrigan


31/07/2023 4:30:10 PM

While AI may significantly reduce the administrative burden for GPs, a college submission says regulatory concerns must be addressed.

The college is calling for GPs to be directly involved in any AI regulatory changes that will impact their work or their patients.

As the world scrambles to develop policies and regulations that keep up with the rapidly developing artificial intelligence (AI) industry, risk mitigation is a primary concern.
 
With this in mind, the Department of Industry, Science and Resources recently released a discussion paper, inviting responses to assist in the development of a governance arrangement to protect people from the risks posed by AI, while continuing to allow the technology to provide benefits.
 
The safety concerns posed by AI are perhaps most critical in the healthcare sector, with AI-enabled robots used for surgery listed in the highest risk category in the draft risk management approach outlined by the Government.
 
The RACGP has responded with a submission that supports a risk-based approach with mandated regulations and calls for GPs to be involved directly in changes that will impact their work and their patients.
 
‘Our view is that AI has the potential to revolutionise the delivery of medicine, and regulation must keep pace with these technologies to keep patients safe,’ RACGP President Dr Nicole Higgins wrote.
 
‘Further, GPs must be included and involved in the development and implementation of relevant AI technologies, as well as the regulatory approaches that govern their use in Australia.’
 
She said using the expertise of GPs in the design, testing, implementation, regulation, and post-market surveillance of AI products ‘will give the best hope these technologies are appropriate for use with patients by clinicians in the Australian primary care setting’.
 
‘The RACGP would support cross-industry development of a framework for the use of AI in medical settings, where GPs have a seat at the table and can bring their vast knowledge to bear on this topic,’ Dr Higgins wrote.  
 
While there are wide-ranging health concerns related to AI use, including threats to privacy, safety, employability, health behaviours and access to medical care, many are excited by its potential.
 
Dr Rob Hosking, GP and chair of the RACGP Expert Committee in Practice Technology, told newsGP that regulatory frameworks need to reflect the application and risk profile of the types of AI products used by GPs.
 
‘We need some form of regulation to ensure we are using tools that are valid and reliable,’ he said.
 
‘We are aware that AI can be biased depending on the algorithms that are used to train it.
 
‘Administrative AI may not need to be so tightly regulated, [whereas] clinical interpretation and treatment recommendations need tighter regulation which will be a challenge in the rapidly developing area of AI.’
 
Despite the regulatory issues, Dr Hosking also believes AI has ‘great potential’ in primary care.
 
‘AI will make a huge difference to the way GPs practice in future; it is a matter of whether the software we use is safe and appropriate for the people we look after and to protect ourselves,’ he said.
 
Dr Chris Irwin, GP and co-founder of AI software application ConsultNote.ai, has been using AI in clinical practice to reduce his administrative burden.
 
By using AI to write notes and referral letters, along with flagging potential differential diagnoses, the technology ‘will absolutely revolutionise doctor training and what happens in the consult room,’ according to Dr Irwin.
 
‘The primary feature is writing the doctors’ notes for them,’ he said.
 
‘This is not about replacing doctors, it’s about giving doctors tools for reflection and reducing the administrative burden.’
 
He also believes the technology could reduce burnout and improve Medicare compliance by assisting GPs with their record keeping.
 
He says patients have been accepting of using an AI microphone in the consulting room that ‘intelligently listens’ but does not record, and reports positive feedback as doctors are able to spend more time focused on their patients.
 
‘The whole point of this app is building human connections that are aided by AI,’ Dr Irwin said.
 
‘The AI is doing the mind-numbing, crushing paperwork so that the doctor can focus on the art of medicine.’
 
In terms of safety, Dr Irwin said doctors will continue to bear responsibility for their records or clinical decision making. 
 
‘AI is a tool, and like any tool it needs to be used properly and you need to understand the limitations of that tool,’ he said.
 
Dr Irwin predicts uptake of the technology will be rapid, and expects a shift similar to the move from written records to computer-based systems.
 
‘This will be the second IT revolution in medicine,’ he said.
 
Submissions on safe and responsible AI use in Australia have been extended, with the Government considering voluntary approaches to AI regulation by offering tools and principles that can guide users, as well as enforceable regulatory approaches that could include legislative changes and mandatory standards.
 
The ‘Supporting responsible AI: discussion paper’ is on the Department of Industry, Science and Resources’ website. Consultation is open until 4 August.
 
The RACGP’s position statement on the use of artificial intelligence in primary care is available on the College’s website.
 



A.Prof Christopher David Hogan   1/08/2023 8:24:53 PM

Historically, technical advances have always outpaced ethics, regulations & the law.
*
The most memorable recent example was the development of assisted reproductive technology & the attempts to regulate it. The initial attempts at regulation were ad hoc, incomplete & later needed significant revision as countries realised that their initial predictions of the impact of ART were grossly inaccurate.
This supported the adage that the best way to commit reputational suicide is to make a prediction!
I recommend the initial regulations all have a sunset clause, meaning they would be reviewed & revised after an interval of, say, 1-2 years.
This would be supported by a significant exercise in data collection & analysis.


A.Prof Christopher David Hogan   1/08/2023 8:54:40 PM

When it comes to using AI to ‘intelligently listen’ there are many traps, as I learned when I was involved in conversational analysis of GP consultations. It is not just the words that are important, but also body language, intonation, accent, phrasing, timing & pausing. These are also influenced by cultural expectations & practices.

Using AI to write patient notes should be an aid to GP input, not a replacement, & the notes should be reviewed by the GP before they become part of the patient’s permanent record.


A.Prof Christopher David Hogan   1/08/2023 8:54:59 PM

I agree that the first issues to address should include safety for all concerned, access to accurate information, patient & clinician privacy (you can learn a lot about people by analysing what they write), employability of all involved, health behaviours & timely access to medical care.
***
Logic or deductive reasoning has a role in medicine but it also has limitations. It is only as good as the information that is provided to make decisions and sadly, medical knowledge is incomplete with so many “unknown unknowns.”
If logic were sufficient, there would be no need for drug trials or therapeutic trials.
Medicine relies on what can be proved not what is believed or expected.
