One thing I will say about where things stand in reports of anything working or not working for COVID-19 treatment:
We're often looking for very easy, black-and-white answers to things, and we try to fit each intervention into some kind of dichotomous framework of "it works" or "it doesn't," yet we fail to synthesize the information enough to know what it is or isn't telling us.
For instance, all of this is better viewed not through a lens of "does it work or not," but through a lens of sensitivity and specificity. That framing helps us appraise information rather than feeling compelled to either accept or dismiss it. All across medicine, tests are used as information to aid decision-making. Tests are rarely, if ever, used in isolation; clinical correlation is warranted.

A sensitive test is good at catching whatever problem you are testing for: if you have the condition, you are very likely to test positive. But because it casts a wider net, sensitivity generally comes at the cost of specificity, and sensitive tests generally produce many more false positives. So testing positive on a sensitive test doesn't necessarily mean you have the condition. A specific test, on the other hand, means that if you test positive, you very likely have the condition (there are few false positives). Neither sensitivity nor specificity is inherently good or bad; both are important. Obviously, in an ideal world, you'd have tests that are both sensitive and specific, but that's not the world we live in. We have to use each type of test in different circumstances to extract particular information about a problem.
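To make the tradeoff concrete, here's a minimal sketch of the two definitions. The counts are invented for illustration, not drawn from any real test: sensitivity is the fraction of true cases the test catches, and specificity is the fraction of healthy people it correctly clears.

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """P(test positive | condition present): how well the test catches disease."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """P(test negative | condition absent): how well the test avoids false alarms."""
    return true_neg / (true_neg + false_pos)

# Hypothetical screening test: a wide net misses few cases (high sensitivity)
# but raises many false alarms (lower specificity).
tp, fn, tn, fp = 95, 5, 700, 300
print(f"sensitivity = {sensitivity(tp, fn):.2f}")  # 0.95
print(f"specificity = {specificity(tn, fp):.2f}")  # 0.70
```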
In many circumstances, a sensitive test helps rule out conditions rather than diagnose them. Sensitive tests are often used as screening tests, since they tend to be cheap relative to the more expensive tests you escalate to later in the workup. This has led some to think the results of a sensitive test are worthless.

For instance, a D-dimer is a lab value that is elevated in circumstances where there may be clots, such as a pulmonary embolism. The test is sensitive, but it is not specific. There's sometimes hesitation in ordering it, because it's not uncommon to have an elevated D-dimer even when you're not having a pulmonary embolism, which then leaves the physician having to chase down why the D-dimer is elevated. An elevated D-dimer doesn't tell you that you've got a pulmonary embolism. So why order it? Is it worthless? If someone comes in with shortness of breath and chest pain, and the D-dimer is negative, you can pretty reasonably conclude that the cause of their presentation is not a pulmonary embolism, unless some other relevant clinical information overrides that. So even though the test isn't specific for anything (multiple different things can cause its elevation), it can still provide very useful information that guides management and treatment.
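This rule-out logic is just Bayes' rule in action. A rough sketch, using made-up performance numbers rather than real D-dimer figures: a negative result on a highly sensitive test drives the post-test probability low enough to reasonably move on.

```python
def post_test_prob_after_negative(pretest: float, sens: float, spec: float) -> float:
    """P(disease | negative test) via Bayes' rule."""
    p_neg_given_disease = 1.0 - sens   # false-negative rate
    p_neg_given_healthy = spec         # true-negative rate
    numerator = p_neg_given_disease * pretest
    denominator = numerator + p_neg_given_healthy * (1.0 - pretest)
    return numerator / denominator

# A hypothetical highly sensitive, poorly specific test (D-dimer-like):
print(post_test_prob_after_negative(pretest=0.15, sens=0.97, spec=0.40))
# ~0.013 -- a 15% suspicion of PE drops to about 1% after a negative
# result, low enough to reasonably rule it out absent other red flags.
```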
Now, shifting from sensitive tests to specific ones, the example I would give here is Alzheimer's dementia. The only truly specific (i.e. confirmatory) test is a brain biopsy. That is the "gold standard": not standard in the sense that it's routinely done, but in that it's the only way to definitively determine whether someone has Alzheimer's dementia. Clearly that's not something that's being done. With dementia, you first do a screening test (i.e. a sensitive test), such as the Montreal Cognitive Assessment. The Montreal Cognitive Assessment is more sensitive than it is specific, meaning people can score in the dementia range without actually having dementia. And for Alzheimer's dementia specifically, since you're not going to do a brain biopsy, you're left with more flawed ways of reaching a conclusion, relative to the definitive test. All across medicine, information from both sensitive and specific tests is used to guide the next steps in management. A positive D-dimer doesn't at all mean that someone with a suspected pulmonary embolism automatically gets anticoagulated, but the presence (or absence) of an elevated D-dimer may change the next steps in management.
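The screen-then-confirm escalation can also be put in numbers. A small sketch under the usual serial-testing assumptions (the confirmatory test is only run on screen positives, and the two tests err independently); all performance figures here are invented for illustration:

```python
def serial_sensitivity(sens_screen: float, sens_confirm: float) -> float:
    # A case is caught only if BOTH tests flag it, so sensitivities multiply.
    return sens_screen * sens_confirm

def serial_specificity(spec_screen: float, spec_confirm: float) -> float:
    # A healthy person is cleared either by the screen, or by the
    # confirmatory test after a false-positive screen.
    return spec_screen + (1.0 - spec_screen) * spec_confirm

print(serial_sensitivity(0.95, 0.90))   # 0.855 -- small sensitivity cost
print(serial_specificity(0.60, 0.98))   # 0.992 -- large specificity gain
```

The cheap, sensitive screen filters the population; the expensive, specific test recovers the specificity the screen gave up.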
Now, my point in how this all relates to what's going on now. Given where we are in the midst of all this, it should come as no surprise that the kind of information we can get about any potentially beneficial management of COVID-19 and its sequelae is analogous to being sensitive but not necessarily specific for good management practices. What I mean is that you will hear a lot of different things that may have potential. Not all of them will actually bear out as beneficial interventions. The evidence supporting them will be far from conclusive; the best available evidence may just be reports from people on the ground. That certainly doesn't mean those reports will end up proving true, any more than an elevated D-dimer means you actually have a pulmonary embolism. In this sense, what we're currently hearing at the ground level may be a sensitive test for finding potentially beneficial interventions, but it is certainly not a specific one.

What this tells us is that the data on the ground will precede the more conclusive kinds of evidence that unfold over longer intervals of time, but the fact that ground-level reports come first does not mean that everything preceding the conclusive evidence will go on to be validated by it. I trust that everybody understands that part. What I see people struggling with is a backwards rationalization: because the ground-level evidence is flawed, it is therefore evidence against something. This is very flawed reasoning. The equivalent would be saying that because a D-dimer is so poorly specific, the presence of an elevated D-dimer is evidence that one does not have a pulmonary embolism. That's essentially where I feel we are in all of this. We're suggesting that because the kind of evidence on the ground is too sensitive (it is) and often misleading (it often is), the presence of this evidence is somehow evidence against it. Rather, it needs to be approached the same way one would actually approach managing a patient with an elevated D-dimer. The person who tells everyone "well, you've got an elevated D-dimer, so you probably don't have a pulmonary embolism" is going to end up with a lot of dead patients.
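To put the fallacy in numbers one last time: as long as a test's sensitivity exceeds its false-positive rate, a positive result raises the probability of disease; it can never lower it. A sketch with the same illustrative figures as before:

```python
def positive_likelihood_ratio(sens: float, spec: float) -> float:
    """LR+ = P(positive | disease) / P(positive | no disease)."""
    return sens / (1.0 - spec)

def post_test_prob_after_positive(pretest: float, sens: float, spec: float) -> float:
    # Convert probability to odds, apply the likelihood ratio, convert back.
    odds = pretest / (1.0 - pretest)
    post_odds = odds * positive_likelihood_ratio(sens, spec)
    return post_odds / (1.0 + post_odds)

# Even with specificity as poor as 0.40, a positive result moves a 15%
# pre-test probability UP, not down:
print(positive_likelihood_ratio(0.97, 0.40))            # ~1.62, i.e. > 1
print(post_test_prob_after_positive(0.15, 0.97, 0.40))  # ~0.22
```

A noisy signal is weak evidence for something; it is not evidence against it.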