At this afternoon’s clinical session of the International Symposium, on translating evidence into practice, we heard two very different takes on the topic.
The opening presentation, questioning whether the evidence leads clinicians or lags behind them, was given by Professor Jeffrey Rosenfeld, a long-standing member of our Symposium Programme Committee. “When I agreed to talk on this topic, I was very enthusiastic, but now it seems less of a good idea,” he confessed at the beginning of the talk. “I’m not questioning the value of evidence, but it is not always available. There are many situations in MND clinics where we use our experience of managing the disease as well as the evidence – it is the difference between the art and the science of medicine.”
He illustrated his point in two ways. Firstly, he listed the methods he might use to treat various symptoms in the clinic. Judging by eye, the list on Prof Rosenfeld’s slide was perhaps four columns wide and ten lines deep. Of all these, perhaps four or five strategies appear in practice guidelines.
Secondly, Prof Rosenfeld took these same four or five strategies and talked about the evidence behind them. They were the result of two publications on practice parameters, ten ‘class 1’ studies (the most rigorous way of gathering evidence), 13 ‘class 2’ studies… and my writing wasn’t quick enough to catch the number of studies at the next level down of evidence gathering – but I hope you get his (and my) point: even the evidence we do have represents a huge amount of work!
He also discussed the question of the audience and purpose of the guidelines: who are they written for? Is it the experienced MND clinicians sitting in the audience at the Symposium? Is it to set out the minimum requirements for good management of MND for community neurologists? Or perhaps it is to highlight, and challenge researchers in the field to plug, the large gaps in this evidence?
Towards the end of his presentation he touched on a topic discussed in more detail later in the session. Having evidence of good practice is not new; what is new is the increasing reliance on this evidence. Perhaps what differs between two clinicians is how each applies the guidelines.
Professor Rosenfeld’s comments were acknowledged by his peers as a way of helping us reconcile the paucity of evidence with doing our best for people with MND.
In contrast, Professor Ben Brooks presented what appeared to be a very quantitative, performance-measurement approach to implementing the published evidence. We received a whistle-stop tour of a ‘Plan’, ‘Do’, ‘Study’ (i.e. what happened) and ‘Act’ (to sustain methods and improvements) cycle being piloted at the Carolinas Neuromuscular Centre in Charlotte, USA.
I’m still trying to digest the implications of all of this…
Your feedback on ReCCoB reporting means a lot to us. Please spare 5-10 minutes to complete our short questionnaire about our reporting before you leave our blog: http://surveymonkey.com/s/ReCCoB