How GPs can use epidemiology to help diagnose coronavirus


Matt Woodley


28/10/2020 4:41:05 PM

COVID tests are not foolproof, so doctors must take steps to ensure false-negatives do not result in hidden transmission, researchers say.

Epidemiological factors can be more relevant than test results when attempting to diagnose patients.

No one really knows how accurate ‘gold standard’ polymerase chain reaction (PCR) tests are when it comes to diagnosing COVID-19.
 
They achieve 100% sensitivity in laboratory conditions, but in the hands of individual clinicians with varying experience administering such tests, some estimates suggest they could miss as many as one in three cases.
 
Considering Australia has conducted nearly 8.6 million COVID tests since January – more than 27,000 of which have been positive – such a failure rate could mean thousands of potentially infectious people have been, and will be, missed if clinicians rely on the results of the test alone.
 
For this reason, University of Sydney researchers are urging frontline practitioners to apply clinical epidemiology methods when making a COVID-19 diagnosis for an individual patient.
 
Such an approach, according to a pre-print paper published recently in the Medical Journal of Australia (MJA), requires clinicians to estimate the probability of disease before and after a test result in order to make decisions based on all of the available evidence.
 
Dr Fiona Stanaway, Senior Lecturer and Course Coordinator of Clinical Epidemiology at the University of Sydney’s School of Public Health, co-wrote the paper. She told newsGP that while testing is important, results need to be combined with other contextual information, such as the presence of symptoms or close contact with a known case, to give a better indication of whether someone is carrying the virus.
 
‘This is a new virus and these are new tests so there’s not much good quality research on what the actual sensitivity and specificity of the test is, particularly in those with minimal or no symptoms,’ she said.
 
‘What’s more relevant is that if you’re a close contact or you’ve recently travelled from a region with high rates of transmission, your chances of having COVID are higher, so even with a negative test result you need to quarantine for the recommended 14 days.’
 
Lead author of the MJA paper Associate Professor Katy Bell from the University of Sydney’s Faculty of Medicine and Health and School of Public Health told newsGP the consequences for clinical and public health decision making also need to be taken into account when interpreting test results.
 
‘We want to be especially sure that people who work with high risk populations – for example, in aged care or in healthcare – do not have the infection before they return to work after being tested for SARS-CoV-2 infection,’ she said.
 
‘This might mean that more than one negative test result is needed to be sufficiently confident that infection has been ruled out.’
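The arithmetic behind serial testing can be sketched with a short Bayesian calculation. This is an illustration, not the authors' method: the sensitivity (70%), specificity (99%) and pre-test probability (30%) are assumed figures chosen to echo the estimates quoted above, and the calculation treats successive swabs as statistically independent, which repeat swabs from the same patient may not be.

```python
def update_prob(pretest_prob, sensitivity, specificity, result_negative=True):
    """One Bayesian update of disease probability after a test result.

    Converts probability to odds, multiplies by the test's likelihood
    ratio, then converts back to a probability.
    """
    if result_negative:
        lr = (1 - sensitivity) / specificity   # negative likelihood ratio
    else:
        lr = sensitivity / (1 - specificity)   # positive likelihood ratio
    pre_odds = pretest_prob / (1 - pretest_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Close contact of a known case: assume a 30% pre-test probability,
# 70% sensitivity, 99% specificity (illustrative figures only).
p = 0.30
for n in (1, 2):
    p = update_prob(p, sensitivity=0.70, specificity=0.99)
    print(f"after negative test {n}: {p:.1%}")
```

With these assumed figures, a single negative test still leaves roughly an 11% chance of infection, while a second (independent) negative brings it below 4% — which is the intuition behind requiring more than one negative result before ruling infection out in high-risk settings.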
 
As a result, the paper emphasises the importance of gaining a clearer picture of the probability of a disease by applying the test results to the clinical context of the particular patient sitting in front of the doctor.
 
‘That is why clinicians need to bring together multiple sources of information and leave no stone unturned [when] making a decision on whether a disease is likely to be present or absent, and the need for management options such as quarantine,’ Dr Stanaway said.
 
‘Clinicians also need to consider patient symptoms, risk-factor information, as well as medical test results such as virus and imaging tests.’

‘Clinicians need to bring together multiple sources of information and leave no stone unturned [when] making a decision on whether a disease is likely to be present or absent,’ co-author Dr Fiona Stanaway said.

One method suggested by the authors is to use Fagan’s Nomogram, developed in 1975 as a ‘paper calculator’ to help clinicians graph how a diagnostic test result changes the probability that a patient has a given condition.
 
‘Using the nomogram helps clinicians to quantify what the chances are of the patient truly having COVID-19 based on both their test results and other risk factors such as close contact with a known case,’ Dr Stanaway said.
 
‘This provides clearer guidance to clinicians and public health professionals making decisions about how to manage positive and negative test results in different patients.
 
‘There is often poor understanding of how imperfect tests can be and that tests can both miss people, as well as falsely diagnose people that have nothing wrong.
 
‘The principles outlined in [our] paper help demonstrate why clinicians need to take into account more than just test results when deciding how likely it is that someone has a disease.’
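Fagan’s nomogram is a graphical shortcut for simple odds arithmetic: convert the pre-test probability to odds, multiply by the test’s likelihood ratio, and convert back. A minimal sketch of that calculation, with illustrative sensitivity, specificity and pre-test figures that are not taken from the paper:

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios of a diagnostic test."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

def post_test_probability(pretest_prob, lr):
    """The calculation Fagan's nomogram performs graphically:
    pre-test odds x likelihood ratio = post-test odds."""
    pre_odds = pretest_prob / (1 - pretest_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Illustrative figures: sensitivity 70% (one in three missed),
# specificity 99%; close contact with an assumed 30% pre-test probability.
lr_pos, lr_neg = likelihood_ratios(0.70, 0.99)
print(f"after a negative result: {post_test_probability(0.30, lr_neg):.1%}")
print(f"after a positive result: {post_test_probability(0.30, lr_pos):.1%}")
```

Under these assumptions a negative result still leaves the close contact with more than a one-in-ten chance of having COVID-19, which is why a negative test alone does not release them from quarantine.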
 
But while agreeing with the ‘general logic’ and premise of the paper, RACGP Expert Committee – Quality Care (REC–QC) member Dr Michael Tam told newsGP he believes it needs to be understood in a wider context. 
 
‘It is mostly implausible that a clinician in frontline clinical practice with a patient in front of them will quantitatively estimate the pre-test probability, use the positive and negative likelihood ratios of a specific test to compute the posterior probability, and from this translate the result into clinical actions,’ he said.
 
‘However, I would expect this sort of reasoning to guide and justify clinical policy decisions and guideline algorithms.’
 
Instead, Dr Tam prefers the method described in a 2011 paper published in BMJ Evidence-Based Medicine.
 
‘This qualitative approach enhances the reasoning heuristics that many experienced clinicians will employ, rather than circumventing it,’ he said.
 
Regardless of the approach used, Associate Professor Bell hopes the article will help clinicians to provide evidence-based patient-centred care, while preventing the spread of the pandemic.
 
‘The importance of accurate medical tests has been brought to the fore with the pandemic, where a key strategy to control spread of the infection is through robust testing and contact-tracing programs,’ she said.
 
‘But given the known inaccuracies of the tests developed to diagnose the new disease, clinicians may find it difficult to decide whether or not the patient in front of them actually has COVID-19, whether this possibility has been adequately ruled out, and the safety of releasing them from self-isolation or quarantine.
 
‘By following clinical epidemiology methods for estimating probability of disease in the patient, and then revising this as test results and other information become available, clinicians and public health professionals can make informed decisions about the management of their patients that balance potential benefits against potential harms.’
 


A.Prof Christopher David Hogan   29/10/2020 9:40:04 AM

<sigh> How nice of colleagues from other disciplines to tell us how to do our jobs<sigh>
When we were taught the use of epidemiology in clinical practice, we were told that if it looks like a duck, walks like a duck, and quacks, it is a duck even if it is labelled as a pigeon.
In other words, before you look at a test, look at the patient.


Dr Peter Angus MacIsaac   29/10/2020 2:01:24 PM

The previous comment is harsh and dismissive: the Bell paper is republishing the well described "science" of diagnostic testing that underpins clinical practice and the quacking duck analogy. The inclusion of the perspective of authors from a consumer, general practitioner and public health specialist on current COVID-19 testing strategies would have been helpful.
Yet:
1. How can the best evidence of test effectiveness be based on the experience in Wuhan in the early days of the pandemic? We have had nine months to get a better handle on that.
2. We need clear advice to the public and GPs about what is a higher-risk (pre-test probability) situation, to know when to be sceptical of a negative result and isolate and retest.

Doctors order low-value tests for unlikely clinical scenarios every day, e.g. in my family, an MRI for a migraine with a truly awful incidental finding and consequences to put that to rest. Tests and their interpretation can do harm. Kudos to Dr Bell and her team.
