The recent conversation on Twitter between Fraser Nelson of the Spectator and Graham Medley, Chair of the SAGE modelling committee, was illuminating, to say the least.
Here are the screenshots so you can read it for yourself:
The gist of the conversation appears to be:
Graham Medley: we only model bad stuff because decisions are only made if things are bad
Fraser Nelson: run that one by me again
Yes, run it by us all again, Graham. Please. We’re still trying to get over our incredulity here.
It doesn’t start well, does it?
The point being missed is that these scenarios are not predictions
Well, we’d kind of figured that one out after seeing several hundred wrong “predictions” over the last 21 months (and no correct ones), but no matter. It’s such a strange thing to say, though. He’s basically saying that the models don’t predict anything.
What are they then? Isn’t the whole purpose of these things to say “if the virus has properties X and we don’t implement policies Y, then this is what we will see”?
If they’re not predictions, then what in the name of Batman’s soiled underpants are they?
Fraser then asks a very reasonable question (paraphrased): other people have looked at the very realistic possibility of lower virulence here, and that changes things massively. So why not include that?
The answer is another question:
What would be the point of that? What would decision-makers learn from that scenario?
Maybe they would learn that the best decision to make is simply to do nothing at all, perhaps?
I mean, colour me stupid, but if someone is trying to sell me insurance against being abducted by aliens, I might want to ask one or two questions about the likelihood of such a scenario.
Here are all the really bad things that might happen - we’re just going to completely ignore all the good stuff that might happen.
These answers from the modeller are just so freakin’ bizarre. The guy is arguing that the only scenarios (they are predictions; let’s not disingenuously play with words here) that need to be presented are the ones where some decision has to be made.
I took my car in for a service and the guy told me an aircraft might land on it, so I needed to decide whether to strengthen the shell, at great cost. He also said the UK might suffer a rapid freeze like the one in the movie The Day After Tomorrow, so I should invest in some snow tyres. The car was perfectly fine - he just told me about all the stuff that might go wrong.
See how crazy it sounds?
Fraser does a great job of asking sane questions. Graham does a similarly great job of providing insane answers.
The key comes at the end, where Graham admits:
We generally model what we are asked to model
In a nutshell, then, they’ve been asked to model only the scary stuff. Graham is trying (and failing) to give some post-hoc justification by waffling on about the difference between scenarios and predictions and how only things that require decisions need to be presented.
But it’s clear to see they’ve been instructed to be a vehicle for fear porn.
I’m taking you to the dress shop, dear - but you can only try on those dresses that make your arse look absolutely HUGE.
He’s desperately trying not to say the obvious: this is about propaganda.
Lysenko is asking him about the harvest.