14 January 2008 @ 03:23 am
I've gone over to Wordpress - sorry 'bout that, but it lets me do much more interesting & useful things.

There is a very long & geeky new post on the subject of the book 'Impossible Cure' - case study of homeopathy curing autism. Read it & subsequent entries at: http://brainduck.wordpress.com .

Thanks, & please do let me know if you've any problems with the new site,
This post is a bit outside of my usual areas of interest, so there's bits of background that I'd really appreciate comments on. It does give the chance for a nice introduction to some interesting areas of psychological testing & health economics, and it's even somewhat topical, so here goes.

I'm a regular listener to 'The Archers', a long-running radio soap opera on BBC Radio 4 'An everyday tale of simple country folk'. I'm maybe 40 years younger than most of the demographic, but blame that one on Mum.

An ongoing storyline is that one of the characters, 'Jack Woolley', has dementia, probably Alzheimer's. In Monday's episode, he was given what appears to be the Mini Mental State Exam ( http://www.patient.co.uk/showdoc/40000152/ ), a psychological test used to measure the progress of dementia. The examining doctor also talked about his medication, though without specifying what Jack was taking. Episode here if you are interested: http://www.bbc.co.uk/radio4/archers/catch/this_week.shtml?mon

Obviously, the full 10-minute test wasn't on air in a 12-minute episode, but parts were. Now, I'm not trained to work clinically with patients, or to administer the MMSE, but I've had a go at scoring what there is.
Orientation - Jack needed a bit of prompting to say where he was, but he eventually came out with 'Hospital, in Borchester, England' - 3 points of 5?
Registration - here Jack had to remember 3 words ('apple', 'picture', 'table'). This took him 2 attempts, first managing two words, then all three. 3 points of 3?
Attention & calculation - Jack had to spell 'WORLD' backwards (a working memory task - more on that in my next post), but he only got as far as 'D' - is this 1 point of 5?
I'm not too sure on the scoring for these tasks, & we didn't get an overall numerical score, only 'in line with what we would expect'. But messing around with percentages, and assuming that Jack did equally well on all areas of the test, Jack would have scored 16 out of 30. This would put him just over the border from 'moderate' into 'severe impairment'.
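Just to show my working, the back-of-envelope extrapolation runs like this (the section scores are my own guesses from the episode, not anything clinical):

```python
# Rough extrapolation of Jack's MMSE score from the sections heard on
# air. The (scored, out_of) pairs are my guesses, not a real assessment.
observed = {
    "orientation": (3, 5),   # 'Hospital, in Borchester, England', with prompting
    "registration": (3, 3),  # all three words, on the second attempt
    "attention": (1, 5),     # only got as far as 'D' spelling WORLD backwards
}

scored = sum(got for got, _ in observed.values())          # 7
possible = sum(out_of for _, out_of in observed.values())  # 13

# Assume the same hit rate across the full 30-point test
estimate = round(30 * scored / possible)
print(estimate)  # 16
```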

Whilst throughout the episode Jack is somewhat confused & disorientated, rather obviously has problems, and is 'wandering' problematically at times, he is able to talk, interact with people, manage basic self-care, & isn't nearly at the worst dementia can do to people. I have worked in a few elderly care homes and watched my grandfather go steadily downhill, and at its worst dementia can leave people completely unable to communicate, with no idea where they are, not recognising close family, doubly incontinent, etc. It's horrible. The problem with using the Mini Mental State Exam alone to track the progress of dementia is that there's a long way to get worse from a score of 'severe impairment', and even a score of 0 can cover a fair range of problems. It doesn't adequately differentiate between people who aren't 'in the middle'.

This effect of not adequately differentiating between people at the bottom of a measuring scale is called by psychologists a 'floor effect'. Imagine if you wanted to test the fitness of a group of 'couch potatoes', and did so by making them all run a mile. Now imagine you stopped timing after 7 minutes. Most of the group would not have got to the end in that time. Even though there would be an important difference in fitness between those who took 10 mins & those who took 30, your test would group them all together as having the lowest possible score - this is a 'floor effect', because the test is too hard.

You can also get 'ceiling effects' - for this, imagine you have a group of PhD physicists, and gave them all a lower tier GCSE Maths paper. Even though some of the physicists would be cleverer than others, they would probably all still get the highest possible score of a C, because the test is too easy. The MMSE also has ceiling effects, because most people will need to lose a fair bit of function before they start having problems with the simple tasks on it.
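A quick sketch of the floor effect in the couch-potato example, with entirely made-up numbers (a ceiling effect is just the mirror image, clipping at the top of the scale instead of the bottom):

```python
import random

random.seed(1)

# Simulated 'true' mile times in minutes for a group of couch potatoes,
# spread over a wide range of fitness. All numbers are made up.
true_times = [random.uniform(8, 30) for _ in range(100)]

# The test stops timing at 7 minutes, so everyone slower than that gets
# the same bottom score - the floor effect collapses all the variation.
recorded = [min(t, 7.0) for t in true_times]

distinct_true = len({round(t, 1) for t in true_times})
distinct_recorded = len(set(recorded))
print(distinct_true > 1, distinct_recorded)  # True 1
```

Lots of genuinely different fitness levels, one recorded score: the test can't tell a 10-minute miler from a 30-minute one.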

NICE (National Institute for Health and Clinical Excellence) basically decide whether new treatments provide enough benefit for their costs to be prescribed on the NHS (oversimplifying massively here, sorry). They look at Quality Adjusted Life Years, QALYs, which are a measure of years of life gained from a treatment, adjusted by the quality of the life gained - so if with one intervention you live for an extra 10 years in severe pain, it might be more worthwhile to have an intervention which will give you an extra 8 years of pain-free life. The way they do the adjustments is fascinating, but perhaps for a different post. More on QALYs here: http://www.library.nhs.uk/healthmanagement/ViewResource.aspx?resID=123545 [there's a brilliant BMJ article somewhere written by someone from NICE explaining this in more detail, but I can't seem to find it - anyone?]

QALYs are rationing, although the government don't really like to say so. NICE decide how much a QALY is worth, and (at least in theory) try to spend the same amount per QALY across every illness. Last I heard, a year of good quality life was worth ~£30-35,000, though that's probably a bit out of date & an oversimplification. More than that, and the money could be better spent elsewhere - which is why all the fuss about prescribing things like Herceptin, 'cos they cost too much per QALY. (I'll scrimp on the general ethical debates around QALYs - try here: http://www.ethics-network.org.uk/ethical-issues/resource-allocation/ethical-considerations for some points).
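The basic QALY arithmetic from the pain example can be sketched like this - the quality weights, costs, and threshold here are all illustrative numbers I've made up, not real NICE figures:

```python
# Toy cost-per-QALY arithmetic. Quality weights run from 0 (dead) to
# 1 (full health). All numbers are illustrative, not real NICE values.
THRESHOLD = 30_000  # rough 'worth it' cost per QALY, in pounds

def qalys(years, quality_weight):
    """Years of life gained, weighted by the quality of that life."""
    return years * quality_weight

# 10 extra years in severe pain vs 8 pain-free years:
in_pain = qalys(10, 0.4)    # 4.0 QALYs
pain_free = qalys(8, 0.9)   # 7.2 QALYs
print(pain_free > in_pain)  # True - fewer years, but more QALYs

# A treatment costing 50,000 pounds that buys 1.2 QALYs:
cost_per_qaly = 50_000 / 1.2
print(cost_per_qaly <= THRESHOLD)  # False - too dear per QALY
```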

Alzheimer's disease isn't curable. There are some drugs which might be able to slow its progress. I'll skip the details though again they are rather cool (see here for starters: http://www.alzheimers.org.uk/site/scripts/documents_info.php?documentID=147 ). However, there has been a lot of argument over whether they can be prescribed on the NHS, as they cost a lot per QALY. NICE decided that only those with 'moderate' dementia should get acetylcholinesterase inhibitors. This is because people with moderate dementia show the biggest change on the MMSE when given the drugs. This could be because the drugs work best in moderate dementia, or it could be because of the floor & ceiling effects in the MMSE - people who have 'mild' dementia quickly get up to the top score and can't go further, or people with severe dementia get a lot better but still don't manage to score much more than bottom on the MMSE.

There are two big issues here really. Firstly, is the MMSE really measuring improvements properly, or is the lack of measurable improvement just a task artefact? That's the sort of thing which makes psychologists very interested. Is the MMSE a valid measure of changes in early & severe dementia?
The other issue is much bigger and more complicated, and it's around what & how we pay for healthcare. Dementia is a complicated one. Because people with dementia will eventually need very expensive residential care, it might be worth spending more on drugs to save on care. But then you get into arguments about who pays for care - in England, 'social' care, help with things like eating & toileting, is means-tested, so does not always come from NHS budgets. There are also issues around how much we should pay for health overall, the balance between preventative measures and treatments, and how worthwhile it is to develop expensive new drugs when most people can't access the ones we've got - it's a lot to think about.

Anyway, I'm surprised & slightly disappointed that Jack is still getting his medication on the NHS, & despite being rather well-off seems to be coming up for social help too. 'The Archers' has a special place in British life, and I suspect that were Jack not to be getting help then questions would be asked in Parliament. At least it would get people thinking and talking about this sort of thing - it's an uncomfortable thing to think about but too important to be swept under the carpet because of that.

Comments welcome, I'm sure some of you will know much more about this than me.

21 November 2007 @ 11:08 pm

When I first heard of DORE, I thought they sounded like a good idea. There is not a lot of research into dyspraxia in particular (which I have), so anything that produces good results would be very welcome.

The problem is that they keep publishing papers which are a bit rubbish really, with silly mistakes, and where the results don't quite say what their write-up implies. They charge a lot of money for the treatment, & it requires much time & effort from children & parents. I did lots of chucking beanbags around as a Duckling, I know how pants it is to be made to stand on one leg throwing beanbags into shoeboxes & being reminded how rubbish you are every day while the other children are out playing or learning things that are actually useful. It has never been useful for me in my adult life to try to throw a beanbag into a shoebox with the wrong hand standing on one leg. These things are not harmless, and should be properly researched before being sold for a lot of money to desperate parents & children who already have enough to deal with.

If DORE was the 'Miracle Cure' they claim in a book title, it would be fantastic. So why is DORE research so badly done & inadequate? They have money, they have people doing the programme to test, so if it works & they want to encourage its wider use, they should do a better job of proving it.

The latest research is here: http://www.dore.com.au/researchscience/DORE_MATCHED_DATA_STUDY.pdf

Quick summary for those of you with better things to do than read a point-by-point ramble through a 30-page paper. This research was not designed in such a way as to be able to show that DORE works. There is no control group, no attempt to look at whether their measures correlate with real-world success in school etc, no follow-up, and only three in five of their participants were actually diagnosed with dyslexia. Even if you take the research on its own terms, it shows that for most people DORE doesn't work - only the bottom few % showed any improvement, and this was mostly to do with stuff like bead-threading, not measures of reading and writing. Without a control group it is not possible to tell whether this would have happened anyway, for example as the children got older or if they were being given extra help in school too. It is not surprising if children get better at things over time, especially if they have parents who are willing to put a lot of effort & money into helping them learn.
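To see why the missing control group matters, here's a toy simulation (entirely made-up numbers) where a 'treatment' with zero effect still looks like it works, purely because children's scores improve as they get older:

```python
import random
from statistics import mean

random.seed(42)

# Toy simulation: reading ability improves with age regardless of any
# intervention, so an uncontrolled before/after design 'shows improvement'
# even when the treatment does nothing. All numbers are made up.
def reading_score(age):
    return 20 + 5 * age + random.gauss(0, 3)

before = [reading_score(8) for _ in range(50)]  # tested at age 8
after = [reading_score(9) for _ in range(50)]   # re-tested a year later, no treatment

print(mean(after) - mean(before) > 0)  # True - 'improvement' with no intervention at all
```

A control group of untreated children tested at the same two ages would show the same gain, which is exactly the comparison this paper never makes.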

The Holfordwatch blog has had a look at this paper too, in perhaps a slightly more amusing fashion than my usual undergrad plodding through (I have to read lots of papers on Dyslexia every week because I am studying it as part of an undergraduate psychology course). They can be found here: http://holfordwatch.info/2007/11/23/dore-research-paper-shows-that-dore-is-not-useful-for-a-substantial-proportion-of-potential-clients/

I am feeling particularly geeky today, so I have read it all & put down a point-by-point criticism. I might have missed things or got them wrong, please add a comment if you think I have.


Wenjuan Zhang: http://www2.warwick.ac.uk/fac/sci/statistics/staff/research/wenjuan_zhang
'Her involvements in recent consultancy projects include contracts with LSC, DDAT, Dft, Education Walsall, and MG Rover'.
I haven't heard about her before. Seems to have just done the stats. Now, Maths is not my strongest point, but can any readers please explain why they are using 'deciles'? The stats are put together in a way I'm not really familiar with, but that could be just because I am but a humble & not-very-clever undergrad who does not yet know enough about such things.

Professor David Reynolds: has done lots of pro-DORE research, and caused mass resignations from the editorial board of the journal 'Dyslexia'. Has been previously heavily criticised for financial links to DORE. He's published similar-sounding stuff before.

Dr Roy Rutherford. He is the 'Global Medical Director for Dore' http://www.dore.co.uk/KeyPeople.aspx (it is rather odd for an organisation which claims a miracle cure for dyslexia to have quite so many spelling and grammatical mistakes on that webpage, including confusing homophones, but there you go).
Now, I should admit that I have not been so keen on Dr Rutherford since he called one of my lecturers a 'very aggressive lady' in the Times newspaper, without actually having met her. http://www.timesonline.co.uk/tol/life_and_style/health/child_health/article1344439.ece

My first grumbles on a very quick skim-read through the paper:

The usual gripes about them using their own tests. Some of them are fairly standard assessment tools, some aren't.

'More than 60% of subjects in this cohort have been previously assessed by a specialist and diagnosed with dyslexia prior to attending Dore'. Hmmm - 40% haven't? Only 50% of people in my final-year project will have been diagnosed, but I would expect to get different results on a range of measures - some of them similar to what DORE use - for the 50% who do have a diagnosis & the 50% who don't.

'Currently between 70-80% of Dore clients complete the program and receive final DST testing'
Have to check, but I expect this is poorer follow-up than you'd get with a school-based programme (the usual for phonics research), & obviously the people who drop out will be more likely to be the ones showing no improvement.
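A toy sketch of why that drop-out rate matters - if children who aren't improving are more likely to quit before the final test, the completers' results flatter the programme even when it does nothing (numbers made up):

```python
import random
from statistics import mean

random.seed(0)

# Toy attrition-bias sketch: everyone's true average gain is zero, but
# children who happen to improve are more likely to stay for final testing.
gains = [random.gauss(0, 5) for _ in range(1000)]

def completes(gain):
    # improvers stay to the final test 90% of the time, non-improvers 50%
    return random.random() < (0.9 if gain > 0 else 0.5)

completers = [g for g in gains if completes(g)]

# The completers look better than the whole cohort, despite a useless programme
print(mean(completers) > mean(gains))  # True
```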

'In fact in the literature on interventions for literacy the opposite is found to be the case i.e. that those with the more severe deficits in both cognitive and literacy performances make the smallest responses in terms of literacy improvement. Thus even when we look at relatively crude data assessing dyslexia based changes with Dore it is immediately obvious that the reverse is true.'
??? which literature and which interventions are they looking at ??? Their paper has no references at all. I have to put lots of references in my work, and only some poor underpaid postgrad will ever read most of it. I wonder what happened to their references?

p11: 'It can be seen that looking at the data in this way seems to suggest that there are highly significant improvements is a range of important cognitive skills related to dyslexia but not in performance in literacy based tests (OMR, TMS, NWR and OMW).'
This is what you would expect if the intervention did not work. People get better on the things they are trained on, but this does not make them better at the things they aren't trained on but are actually important. (though later p12 onwards they claim actual improvements in useful endpoints for lowest few %).

p11: 'Using this overall analysis literacy scores hold their own over time which is not what is usually seen in practice where there is a tendency to decline down the scale with time and subjects tend to fall further and further behind their peer groups.'
This does not happen using an effective intervention, where catch-up can take place - showing DORE performs worse than phonics [I'm most familiar with phonics-based approaches so I will use them as a comparison - other ideas are available...]

p17: 'Even taking this into account we still see significant improvements in most areas with occasional exceptions in one minute reading and spelling (age group 12.5-16.5) and nonsense word passage (age groups 6.5-9.5, 9.5-1.5). One minute writing does not show much change throughout the age groups but the mean performances are high initially and well into the normal range.'
So some of the most ecologically valid stuff is what isn't changing?
'Ecological Validity' is not about making sure you print your reports on recycled paper. It is a term psychologists use to describe how well what you are measuring fits with the real world. So if you say that your results show that children have improved by 2 academic years in just 6 months, but the only thing you have used to test them on is a block design task and they are not getting better marks at school, the block design task would have poor ecological validity because it would not relate to anything which is important in the real world. It is quite easy to teach people to do better on the sorts of tests used in psychological research. A bit of practice often does the trick on its own. However, making improvements in real life is what's important, and how much you can write in one minute (how fast you can write) may be an important skill in real life - more so in most careers than bead threading.

p19: 'It has been recognised in these peer studies that tests of full reading skill (i.e requiring reading of word passages and comprehending written language) that subjects using Dore are shown to make considerable progress. We expand on this whole issue here as it has caused considerable debate amongst reading academics and has led to inappropriate criticism and ignoring the highly positive outcomes of the Dore research work so far.'
Suggest you read the whole paragraph. Not utterly implausible, but would like a reference or further work to back this up before accepting. Taking such a personal tone isn't usual in an academic paper - looks like someone has hurt their feelings?

p19: 'We can also announce that the majority of children making up this group have been previously formally diagnosed with dyslexia. This fact rather discounts prior criticism of the peer reviewed studies where not all children had a previous formal diagnosis.'
A 60% majority is still pants. Diagnosed by who? 'Peer reviewed studies' are all very well, but when your 'peer reviewed study' leads to 5 resignations from the board of a prestigious journal and 9 published rebuttals, maybe quite a lot of 'peers' disagree with you.

p20: 'Postural stability forms part of the battery of assessments as many studies show that balance and posture can be impaired in dyslexia. In fact Stoodley showed precision balance performance and reading performance are linked across the spectrum. Balance is of course a fundamental area of cerebellar control.'
Co-morbidity between dyslexia & dyspraxia (often finding both in the same person) does not mean that there is one underlying dysfunction. This is a 3rd variable problem. For example, both dyslexia and dyspraxia could come from a genetic or developmental problem with brain development, but different areas of the brain could be affected in both. Assuming that they are directly linked is like saying that meningitis and head injury are the same thing because they share the symptom of headache. Having a motor control problem does not automatically imply 'cerebellar dysfunction'. 

p20: 'This test used here is a rather crude screening tool useful for more significant postural deficits. In fact many studies have shown that it is with precision balance testing and often under dual tasking conditions where postural control is found to be deficient in dyslexics and ADHD children.'
There's not so much wrong with making up your own 'ultra-precise' tests, psychologists usually have to give tasks which are more difficult than you would do in real life to find out how far your mind can go. But beware of tests with no real-world implications. It is a bit like a cosmetics company promising 'microscopically smooth skin' - unless you usually look at people's skin with a microscope, it may make no difference.

p20: 'Some argue that verbal working memory is deficient due to poor underlying phonological skills. However we are aware of few studies which suggest that phonological training enhances working memory skills.'
There's a fair bit out there on working memory in dyslexia. 5-second search of Web of Science for 'phonolog* AND working AND memory AND dyslexi*' kicks up 154 hits, including stuff like:

Savage R, Lavers N, Pillay V. Working memory and reading difficulties: What we know and what we don't know about the

Smith-Spark JH, Fisk JE. Working memory functioning in developmental dyslexia. MEMORY 15 (1): 34-56 JAN 2007

Conti-Ramsden G, Durkin K. Phonological short-term memory, language and literacy: developmental relationships in early adolescence in (2): 147-156 FEB 2007

Brambati SM, Termine C, Ruffino M, et al. Neuropsychological deficits and neural dysfunction in familial dyslexia. BRAIN RESEARCH 1113: 174-185 OCT 3 2006

McCallum RS, Bell SM, Wood MS, et al. What is the role of working memory in reading relative to the big three processing variables (orthography, phonology, and rapid naming)? JOURNAL OF

Savage RS, Frederickson N. Beyond phonology: What else is needed to describe the problems of below-average readers and spellers?

Berninger VW, Abbott RD, Thomson J, et al. Modeling phonological core deficits within a working memory architecture in children and adults with developmental dyslexia

Thomson JM, Richardson U, Goswami U. Phonological similarity neighborhoods and children's short-term memory: Typical development and dyslexia. MEMORY & COGNITION 33 (7): 1210-1219 OCT 2005

Savage R, Frederickson N, Goodwin R, et al. Evaluating current deficit theories of poor reading: Role of phonological processing, naming speed, balance automaticity, rapid verbal perception and working memory
bored now, but there's more...
My current textbook is 'Alloway, T. P. & Gathercole, S. E. (2006). Working memory in neurodevelopmental conditions. Psychology Press.', which would not be a bad place to start should someone want an intro to what's out there. *More* research would be nice (want to fund my PhD?) but there's already a fair bit.
Phonological training may or may not enhance working memory skills (actually, that's very close to my lecture topic next week - maybe I should go & do a bit of the reading to check), but there is a lot of research on working memory in dyslexia.

p27: 'This is a very large study of consecutively completing clients from Dore centres who are essentially receiving no 'special attention' (as is the case with many controlled studies) but are experiencing the typical Dore product.'
This be Stoopid. Of course DORE is 'special attention', & it is hardly beyond the realms of possibility that more effort is being put into reading & writing skills too when people are doing DORE. There is something called the 'Hawthorne Effect', where the act of measuring something & paying extra attention to it in itself causes a change (it's a bit like psychology's version of the Heisenberg Uncertainty Principle). It's named after the Hawthorne Works factory, where researchers were studying the effects of lighting on productivity. When they turned the lights up, people worked more. When they turned the lights down, people worked more. When they took all their clothes off & did the can-can, people worked more (ok, I made that last one up). But whatever they changed, people worked more. Being measured & having changes going on changed the behaviour the researchers were trying to measure. It mucked their studies up, but the factory did get a cool effect named after it - fair swap I think.
What do the authors mean by 'controlled study'? There *is no* control group for comparison, everyone is getting DORE.

p27: 'which later transfer solidly to responses to literacy support and practice.'
Haven't actually shown this in any ecologically valid way. They've shown that for a particular subgroup (lowest %) there's some improvement on some tests, not that this translates into stuff like doing better in class.

'As Dore involves no specific literacy or cognitive based training of any sort then the improvements are theorised as being directly linked to the observed neurological improvements in cerebellar function.'
They haven't controlled for attention, maturation, placebo, Hawthorne, cohort effects, even proper diagnosis, and a whole bunch of stuff. I'd expect a good GCSE student to have a better understanding of the need for a 'fair test'.

p27: 'However they differ in as much as we have been able to reduce the watering down effect of those subjects with initial normal or superior performances in some tests.'
Why were you treating people who performed above average in your tests?

The whole conclusion is semi-detached from the report's actual findings, & reads as a sales pitch.

p28: 'The previously published research studies'
You mean the ones that caused 5 Dyslexia editors to resign, have been subject of multiple rebuttals, etc etc? Oh, *those* studies.

p28: 'anecdotally from Dore clients over much longer time spans.'
Why can't they FOLLOW UP. It's a bit trickier, but not actually impossible!

p28: 'One of the original criticisms of the published research studies was that not every child who participated was diagnosed as  dyslexic. In this study we know that at least 60% of subjects were diagnosed with dyslexia prior to attending Dore. It was also found that the dyslexic group showed initial literacy test performances which were slightly weaker than the non-diagnosed group. However the outcomes in both groups were equivalent after Dore. This tends to dismiss initial criticisms and additionally suggests that an initial diagnosis of dyslexia is not essential to benefit from the Dore program.'
60% diagnosed is not adequate to draw conclusions. Still, for two in five children, DORE has cost lots of time, money, commitment, practice, parental attention, and the opportunity to learn useful things or do what normal children do, and they have NOT been diagnosed with dyslexia. DORE is not a miracle cure. Even its supporters agree that it is hard work & expensive. To put two in five children through that when their difficulties weren't significant enough for a formal diagnosis is IMO unethical. This paragraph scares me. Is *everyone* going to be able to benefit from DORE? Is the plan to make *every* parent feel guilty for not paying lots of money to DORE & making their child throw beanbags around, even if they don't have a diagnosis?

p28: 'What is exciting about these findings is that Dore appears, by stimulating and improving cerebellar function, to impact on core cognitive skills associated with dyslexia. Correcting these learning related skills rather than focussing on training literacy skills directly leads to transfer to literacy acquisition without any specific intensive training in literacy.'
They have NOT demonstrated this. There has been no demonstration of a 'transfer to literacy acquisition', just some proxy endpoints (~surrogate measures) which may or may not have anything to do with a child's ability to read & learn in a classroom. Is it included specially to be quotable? This is a conclusion detached from the paper's actual findings. 

p28: 'The sad part is that rather than embrace this intervention the reading industry led by the phonological theorists have chosen to severely criticise and ridicule it through manipulation of information and hiding behind authoritative academic positioning.'
My lecturer may or may not be 'aggressive', but I'm afraid I rather like challenging people's assertions with data or methodology. There's nothing wrong with criticising work that just isn't adequate to show what it claims to show. That's how science works.

I'd love DORE to be proved right, because if it worked it would make life easier for me & a lot of my family. Doing poor research isn't a good way to prove that your treatment works, and by not doing a decent job on the research they are wasting money that could be used for good research, they are making it harder to have good research accepted, and they are wasting time when good research could get an effective intervention to more people sooner.
What does the 'reading industry' mean? DORE costs a lot of money. Most of the stuff done by 'the phonological theorists' is provided through schools for free. Come & look round the department car park, those who don't have bicycles aren't exactly driving BMWs.
'Academic positioning'? I can't hide behind authoritative academic positioning. I'm but a humble undergrad, the lowest of the low, not worthy so much as to clean out the cages of the lab rats. But even I know that to show an intervention is effective you need a comparison group.

Whether you think DORE works or not, this latest bit of research won't shed much light on the matter. It should be a disappointment to everyone on any side of the argument. Doing things properly, with a control / comparison group, proper diagnosis, follow-up, and a real-world measure of how the children did in school, isn't that difficult to do. One really good study is worth more than dozens which are so badly designed & run that they can't show anything, however good the treatment is. Whether you are a 'supporter' or a 'sceptic', you should be angry about this waste in an area that urgently needs research.

21 November 2007 @ 09:57 pm
It is probably a better idea than me rambling at you in person or on t'internet at less convenient times.

The first few posts will be cut & paste from emails, forum posts, etc, so sorry if they aren't that tidy - if enough people read them I might even get round to making them pretty one day.

This is mostly for me to ramble about brain-geeky topics - mostly Specific Learning Difficulties, SpLDs (basically things beginning with dys-, like dyslexia & dyspraxia), and Autistic Spectrum Disorders, ASDs, & related topics.

There's a lot of stuff said about them in the popular press that really is not supported by the evidence, & given that I'm learning how to read & interpret the evidence then I thought I'd try to share some of that.

I'm a final year Psychology undergrad, & I'm also looking for adults (14+), diagnosed with dyslexia, dyspraxia, etc, to help with my final year project. More on that after Christmas when I get ethical clearance, but if you live in NE England, London, South Wales, or anywhere on a train line between them, and can spare an hour to help my degree & Science, please bung me an email or a comment - thanks!

The usual stuff - don't take psychological advice from people you don't know on LJ, go and see your GP, school learning support people, or a BPS-registered Clinical or Educational Psychologist.