Month: September 2015

Amorolfine, nail varnish, and the solvent solution

One of the treatments for fungal nail infections is amorolfine nail lacquer. It’s medicated, but as far as I know it’s not a pretty colour. There’s also a bit in the instructions about not using nail varnish while you’re using amorolfine. The question I got asked (by a man) was, why should this be so?

If he’d been a woman, he’d probably have known the answer. Not because men are inherently stupid, or because women are inherently good at organic chemistry, but because more women wear nail varnish than men. I hardly ever wear it myself, but I still have a couple of bottles which are lasting quite well, considering I don’t remember when I bought them.

But the thing that you know, if you’ve used nail varnish more than once or twice, is that a new layer of nail varnish can dissolve an older layer. It’s useful if you’re caught short without any nail varnish remover, but not so good if you’re trying to repair chips or make really great designs.

It works because of the solvent: most nail varnishes use ethyl acetate or butyl acetate as the solvent. This evaporates as the nail varnish dries, leaving a hard layer. If, however, you add more solvent (as in, more nail varnish), the original layer will dissolve again and you can wipe it off.

And when it comes to amorolfine, it’s just nail varnish with an antifungal in it instead of pretty colours – and it uses the same solvents as ordinary nail varnish. So if you put ordinary nail varnish on over the top of your medicated antifungal nail varnish, you risk your antifungal nail lacquer either coming off or getting diluted. And if that happens, there’s the risk that it won’t work as well. So, it’s safer all round just to have boring nails until the infection is gone.


The end is nigh? Homeopathy on the NHS in Glasgow

Fairy Gold

A judicial review challenge to NHS Lothian’s decision to stop funding referrals to the Glasgow Homoeopathic Hospital has failed. You can read the judgement here.

This is a considerable relief to those of us who support evidence-based medicine – that is, who believe that the NHS should spend its limited resources on treatments that have good evidence that they actually work, or at least probably work.

I’ve been the person trawling through the research papers, trying to figure out whether a particular treatment is beneficial or not. I’ve written the final report for the formulary committee, so that they can decide whether the evidence is good enough to justify committing patients’ health, and NHS money, to it. It’s therefore a continual astonishment to me that “alternative” treatments seem to get a free pass.

There is no good evidence that homeopathy works. Any non-alternative treatment that had as little going for it as homeopathy would have been ditched years ago. Yes, I can understand that homeopathy was popular in the 19th century, when it actually was a good option. When the alternative is medicines containing mercury or arsenic, then little sugar pills start to look really good. I’d pick the sugar pill over the mercury chloride myself. But modern medicine has moved on.

Nowadays, we’ve got beyond the stage of “Just don’t kill the patient, eh?” Nowadays, we actually aim to make people better, and a lot of the time we even achieve it. And, since the NHS is paid for by the taxpayer, we also try to do it in a cost-efficient manner.

Why is it that “alternative” treatments get held to a lower standard than mainstream medicine? Why is it that all you have to do is tell some kind of mystical cock-and-bull story, and suddenly you don’t have to jump through all the hoops that the boring old scientists do?

But what really gets my goat about homeopathy is that it is based on a fundamental error. The chap who started all this – a Dr Hahnemann – noticed that preparations of cinchona bark cured malaria. He further noted that taking cinchona bark induced malaria-like symptoms in himself. He therefore came up with the theory – not unnaturally, given the state of medicine at the time – that a thing that induced symptoms in large quantities would cure them in small quantities. Thus homeopathy.

Unfortunately for Hahnemann, and the theory of homeopathy, the reason cinchona bark cures malaria is because it contains quinine. Which is still one of the main drugs for treating malaria. Cinchona bark cures malaria not because of any homeopathic principles but because it’s chock-full of a powerful drug that kills the malaria parasite.

So, despite the fact that the entire theory of homeopathy is based on a massive (though understandable at the time) error, it’s still surviving. I wonder if it’s because most people want to believe in the magic, the fairy-dust? Maybe the people holding the budget want to believe in little sugar pills because, compared to monoclonal antibodies, they’re cheap. If we could all just take a little sugar pill and get better, or if the fairy tale about the gold at the end of the rainbow were real, the NHS would not have the financial woes that it does.

But the NHS does not waste time searching for fairy gold, and it should not waste money on homeopathy. This judgement represents one more step taken in the fight against waste and inefficiency.

Good work, NHS Lothian.

Fun with statistics: Low numerators

Last week, I came across a great statistical trick that I had to rush out and share with everyone, because it was so incredibly cool.

It’s about ratios with low numerators.

The problem you sometimes have with interpreting clinical trial results is that the event you are looking for either didn’t happen, or only happened once or twice. This means that it would seem to be quite difficult to calculate a reliable risk for that event – particularly if it didn’t happen at all.

Luckily, help is at hand, and there is a simple statistical method to obtain an upper 95% confidence limit for a zero numerator (when the event doesn’t happen at all), or 95% confidence intervals for when you have a numerator between 1 and 4 (i.e., the event happened between 1 and 4 times).

This enables you to judge how reliable your study results are – or, at least, what’s the worst that could happen.

Numerator is zero

For example,

“We reviewed 14,455 eye examinations done with the drug fluorescein, and nobody died.”(1) Surely, a death rate of 0/14,455 means that either nobody ever dies after a fluorescein eye examination – or, alternatively, that the risk of death is incalculable?

Well, the former is difficult to believe, and the latter is unacceptable. So does this mean that in order to calculate a risk of death, you have to keep doing whatever it is, until somebody dies?

Well, we could probably make some estimates about the maximum risk.

Obviously, we are not going to be able to calculate an accurate chance-of-death if nobody has died yet. However, if 14,455 patients in a row had the examination and they all survived, then the risk can’t be too high – for instance, it couldn’t be 1/100, or even 1/1,000, because we probably wouldn’t be so lucky as to get all the way to 14,455 without somebody dying if that were the case. On the other hand, we couldn’t be quite so confident about saying that the risk of dying must be less than 1/10,000 – because our first death might just be a bit late. And we certainly couldn’t say that the risk must be less than 1/15,000. So we know there must be an upper limit where we can say “we’re pretty sure that the chance of dying isn’t any more than X”.
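
To put numbers on that intuition, here’s a quick back-of-the-envelope sketch (in Python – my own illustration, not from any of the papers): if the true risk of death were p, the chance of seeing n examinations in a row with no deaths at all is (1 − p)^n.

    n = 14455  # fluorescein examinations, none followed by a death

    # If the true risk of death were p, the chance of getting n
    # death-free examinations in a row is (1 - p)^n.
    for p in [1/100, 1/1000, 1/4818, 1/10000, 1/15000]:
        prob_no_deaths = (1 - p) ** n
        print(f"true risk 1/{round(1/p):>6,}: P(no deaths in {n:,}) = {prob_no_deaths:.2g}")

A true risk of 1/100 or 1/1,000 would make our run of 14,455 survivors astronomically unlikely, whereas 1/10,000 or 1/15,000 is perfectly consistent with it.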

So, if we can work it out like that, there must be a proper way of doing it. Fortunately, Hanley and Lippman-Hand(2) come to our rescue.

In medicine, we tend to deal with 95% confidence intervals a lot. Basically, your 95% confidence interval is where you can say “I’m 95% confident that the real result – if we checked the whole population and not just a sample – would be within this range.” (It’s a bit more complicated than that, but this is a useful way of thinking of it.)

We use 95% because it’s a convention that a 5% chance that the results of your study are completely due to chance, or otherwise unrepresentative of reality, is low enough that we can live with it. Hanley and Lippman-Hand report a simple way of finding out where the upper limit of your 95% confidence interval is (i.e., the point at which you can say “There’s only a 5% chance that the real number is beyond this point”).

All you have to do is:

Upper limit of 95% confidence interval = 3/n, where n is the number of people in your group.

So, for the fluorescein patients above, we do 3/14,455 = 1/4818. So, we can be 95% sure that the risk of death after an eye examination with fluorescein is less than 1/4818. It might be a lot less – but it probably won’t be any more than that.
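
In case you’re wondering where the 3 comes from: the exact upper limit is the value of p that solves (1 − p)^n = 0.05, and since ln(0.05) ≈ −3, that works out at p ≈ 3/n. Here’s a minimal Python sketch comparing the approximation with the exact calculation (the function names are mine, purely for illustration):

    def rule_of_three(n):
        """Hanley & Lippman-Hand's approximation: upper 95% limit for 0 events in n."""
        return 3 / n

    def exact_zero_numerator_limit(n, alpha=0.05):
        """Exact version: the p that solves (1 - p)**n = alpha."""
        return 1 - alpha ** (1 / n)

    n = 14455
    print(f"rule of three: 1/{round(1 / rule_of_three(n)):,}")               # 1/4,818
    print(f"exact limit:   1/{round(1 / exact_zero_numerator_limit(n)):,}")  # about 1/4,826

As you can see, the simple 3/n rule lands within a whisker of the exact answer.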

And that’s a very comforting thing. Now we have some real numbers.

This is important, because there’s a very real difference between a risk of approximately 1/5000, and a risk of zero.

Low numerator

But what if we tested a lot more patients, and one died? Would our problems be over at that point?

Yannuzzi et al(3) said “We looked at 221,781 eye examinations done with the drug fluorescein, and only one patient died.” So, that gives a chance-of-death of 1:220,000. Fantastic!

However, what if the 221,782nd patient (who didn’t quite make it into the study), also died?

That would be a chance-of-death of 2:221,782, or round about 1:110,000. Twice as often. Just with one more patient. This makes those numbers seem suddenly less comforting.

But fortunately, there is a mathematical workaround for this as well.(4) The following table gives a “fudge factor” numerator to use for different sizes of observed numerator and different sizes of denominator group.

Upper Limit of Exact 95% Confidence Intervals From the Binomial Distribution

Denominator    Observed numerator*
                  0      1      2      3      4
10               2.6    4.5    5.6    6.5    7.4
20               2.8    5.0    6.3    7.6    8.7
50               2.9    5.3    6.9    8.3    9.6
100              3.0    5.4    7.0    8.5    9.9
200              3.0    5.5    7.1    8.7   10.1
500              3.0    5.5    7.2    8.7   10.2
1000             3.0    5.6    7.2    8.8   10.2

*For a zero numerator this is a single (one-sided) upper confidence limit; for the others it is the upper end of a two-sided 95% confidence interval.
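
If you want to check those numbers yourself, the table can be reproduced with the exact (Clopper–Pearson) binomial interval. A sketch, assuming you have Python with scipy available – multiplying each upper limit by the denominator recovers the “fudge factor”:

    from scipy.stats import beta

    def upper_limit(x, n, alpha=0.05):
        """Upper end of the exact (Clopper-Pearson) binomial 95% interval.
        For x == 0 this is a one-sided upper limit, as in the table footnote."""
        if x == 0:
            return beta.ppf(1 - alpha, 1, n)          # one-sided, equals 1 - alpha**(1/n)
        return beta.ppf(1 - alpha / 2, x + 1, n - x)  # two-sided upper limit

    for n in [10, 20, 50, 100, 200, 500, 1000]:
        row = [upper_limit(x, n) * n for x in range(5)]
        print(f"{n:>5}:", "  ".join(f"{v:.1f}" for v in row))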

So, for a group of 221,781 patients, of whom 1 died… well, the table doesn’t quite go that far. But the “fudge factor” numerator to use for an observed numerator of 1 and a denominator of >1000 is going to be at least 5.6.

So, if we use 5.6: 5.6/221,781 = 1/39,604.

So, at the 95% confidence level, the worst the risk of death is likely to be is approximately 1/40,000.
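
And just to double-check, running the same exact calculation on Yannuzzi’s figures gives much the same answer (again assuming Python with scipy):

    from scipy.stats import beta

    n, x = 221781, 1  # Yannuzzi: one death in 221,781 examinations
    upper = beta.ppf(0.975, x + 1, n - x)  # exact two-sided upper 95% limit
    print(f"upper limit: 1/{round(1 / upper):,}")  # roughly 1/39,800

That’s comfortably close to the 1/39,604 we got from the 5.6 “fudge factor”.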

And how accurate is that?

Well, in 1983, Zografos(5) reported on 594,687 angiographies with fluorescein. In his study, he found that 12 patients had died. And that results in a risk of death of 1/49,557.

Zografos didn’t give us any confidence intervals either, but his observed frequency of death after fluorescein angiography is very close to (and on the right side of) our estimated upper confidence limit from the smaller study by Yannuzzi.

REFERENCES

  1. Beleña JM, Núñez M, Rodríguez M. Adverse reactions due to fluorescein during retinal angiography. JSM Ophthalmol. 1:1004.
  2. Hanley JA, Lippman-Hand A. If nothing goes wrong, is everything all right? Interpreting zero numerators. JAMA. 1983 Apr 1;249(13):1743–5.
  3. Yannuzzi LA, Rohrer KT, Tindel LJ, Sobel RS, Costanza MA, Shields W, et al. Fluorescein angiography complication survey. Ophthalmology. 1986 May;93(5):611–7.
  4. Newman TB. If almost nothing goes wrong, is almost everything all right? Interpreting small numerators. JAMA. 1995 Oct 4;274(13):1013.
  5. Zografos L. [International survey on the incidence of severe or fatal complications which may occur during fluorescein angiography]. J Fr Ophtalmol. 1983;6(5):495–506.