Dementia diagnosis: a decade of evidence

In this blog for clinical, research and lay readers, Dr Terry Quinn, the coordinating editor of Cochrane Dementia and Cognitive Impairment, looks at the latest Cochrane evidence around testing for dementia.  Terry describes the Dementia Group’s ongoing work on dementia assessment and shares some thoughts on reviews of test accuracy. 

“Dear Dr McShane, I am spending some time in Oxford and wondered if I could help with one of your Cochrane reviews, I am particularly interested in diagnosis of dementia….”

I sent this email to Rupert McShane, the coordinating editor of the Cochrane Dementia Group, in 2009. At the time, I thought that working on a review would be a useful way to spend my evenings while working in old age psychiatry in Oxford.  Almost ten years, multiple reviews and two kids later, I am still working on dementia diagnosis reviews with the team in Oxford.

As we publish our latest reviews and protocols on dementia screening tests (AD-8 [1], Mini-cog [2], ACE [3]) and collate many of our diagnosis reviews for a Cochrane Special Collection to coincide with World Alzheimer’s Day [4], I am feeling reflective.  What have I learned from all this time spent working on dementia test reviews?  It is perhaps a sign of how much Cochrane has influenced my thought processes that I will structure my blog using the PICO (Population; Intervention; Comparator; Outcome) format.

Who is being tested?

When we talk about dementia diagnostic tests we need to consider the population being tested.  Related to this are questions around the setting in which the test is used and the purpose of the test.[5]  A test that performs well in a dedicated memory service may not work so well in a General Practice surgery, and we should not conflate a very brief test designed for initial screening with more detailed neuropsychological batteries or structured interviews that try to give a formal diagnosis. When we designed our first test accuracy reviews we had separate titles for community, primary care and secondary care settings.  However, even within these settings there are variations in how a test performs and why the test is being used.  For example, within a secondary care category we could include both memory clinics and the Emergency Department, but clearly the testing performed in these areas is very different.  Looking across all of our dementia screening test reviews, there seems to be a disconnect between setting and evidence.  In practice, short screening tests are most likely to be used in acute settings, but the majority of research on these tools comes from very specialist memory clinics.


The intervention (the test)

The intervention in these reviews is the test.  Although we use the term ‘diagnostic test accuracy’ to describe our reviews, many of the assessments we study are better thought of as screening or triage tests rather than definitive diagnostic tools; we would not make a diagnosis of dementia on the basis of a five-minute pencil-and-paper test.

There are many dementia screening tests to choose from.  In fact, when we reviewed tests used in dementia research it seemed that almost every research team was using a different tool.[6]  So, do our Cochrane reviews help us choose the best test for a particular person?  Well, hopefully they help a bit, and it is reassuring that our reviews have been used to inform guidelines.[7]  However, to be really useful for clinical practice, I think the reviews need to become more sophisticated.  Rather than the current approach of looking at the accuracy of tests in isolation, we need to compare tests and perhaps consider more than just test accuracy. Issues such as feasibility, acceptability and cost of the test may be just as important when deciding on the best test for a certain situation.  With colleagues in the NIHR Complex Reviews Support Group we are helping develop methods to allow for comparative analyses of multiple tests.  We hope to apply these methods to our next test accuracy reviews; watch this space.[8]

Gold standard comparator?

In our reviews we assess the test of interest against a gold standard comparator.  But what is the gold standard for dementia diagnosis?  More than any other aspect of our test accuracy work, this is the question that has generated the most debate, and there have been some ‘robust’ conversations with review authors and peer reviewers.  Some would argue that autopsy assessment of the brain is the true gold standard for dementia diagnosis, but few studies have this information, and we know that older adults can have the brain changes of dementia with no associated memory and thinking problems, and vice-versa.  More recently there has been interest in using brain scans, blood tests or fluid from the spine to diagnose dementia.[9]  The enthusiasm for these biomarkers has not always matched the research evidence, and more than once Cochrane Dementia has had to sound a note of caution around use of these tests.[10]  The landscape is evolving rapidly, but at the moment we still think that the gold standard for dementia diagnosis is good old-fashioned clinical assessment: talking to the person and those who know them well.

Outcomes: looking beyond accuracy

It would seem obvious that the outcome of interest in a test accuracy study would be accuracy.  This is certainly the approach we have taken in our reviews, describing tests in statistical terms such as sensitivity/specificity and likelihood ratio. However, as with many aspects of dementia diagnosis research, the reality is more complex than we first anticipated.[11]  Accuracy is useful, but it doesn’t tell us about the benefits and harms that come from making a diagnosis, whether the diagnosis is accepted by the patient or clinician, or anything about what happens once the diagnosis is made.  We need to consider the effect that testing has on the person’s future health.[5]  This is an issue that we are starting to grapple with in Cochrane Dementia, and it will probably keep me busy for another ten years.
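For readers less familiar with these statistical terms, the sketch below shows how sensitivity, specificity and likelihood ratios are calculated from a 2x2 table of test results against a reference standard. The numbers are entirely invented for illustration and do not come from any of the reviews discussed here.

```python
# Illustrative only: hypothetical counts for a short screening test
# scored against a clinical reference standard (all numbers invented).
tp = 80   # test positive, dementia present (true positives)
fp = 30   # test positive, dementia absent  (false positives)
fn = 20   # test negative, dementia present (false negatives)
tn = 170  # test negative, dementia absent  (true negatives)

sensitivity = tp / (tp + fn)  # proportion of true cases the test detects
specificity = tn / (tn + fp)  # proportion of non-cases the test rules out

# Likelihood ratios: how much a positive (or negative) result shifts
# the odds that the person actually has dementia.
lr_positive = sensitivity / (1 - specificity)
lr_negative = (1 - sensitivity) / specificity

print(f"Sensitivity: {sensitivity:.2f}")  # 0.80
print(f"Specificity: {specificity:.2f}")  # 0.85
print(f"LR+: {lr_positive:.2f}")          # 5.33
print(f"LR-: {lr_negative:.2f}")          # 0.24
```

As the blog argues, these numbers describe only how often the test agrees with the reference standard; they say nothing about what the diagnosis means for the person tested.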

These are my personal observations, but I would be really keen to hear what you think. So, please get in touch with your thoughts on dementia diagnosis.

Take-home points

  • The Cochrane Dementia and Cognitive Impairment Group have a suite of reviews looking at the accuracy of dementia tests; some of these reviews are featured in a recent Special Collection.

  • The design, conduct and interpretation of dementia test accuracy reviews have evolved over the last decade.

  • New directions for dementia test accuracy include comparing different tests in one review and describing the effect of testing on subsequent treatment and outcomes.

Join in the conversation on Twitter with @CochraneDCIG @DrTerryQuinn @CochraneUK or leave a comment on the blog.

References may be found here.

Terry Quinn has nothing to disclose.


About Terry Quinn


Dr Terry Quinn is Stroke Association / Chief Scientist Office Senior Clinical Lecturer based in the Institute of Cardiovascular and Medical Sciences, University of Glasgow. Terry has a broad research portfolio; his principal research interests are trial methodology, functional/cognitive assessment and the neuropsychological consequences of cardiovascular disease. Recent notable outputs include co-authoring best practice guidance for test accuracy studies in dementia, creating online training for stroke trials and developing short-form assessment scales. Terry has published extensively on stroke, cognition and test accuracy. He is Principal Investigator for a number of studies and holds a programme grant to look at cognitive outcomes following stroke. Terry has editorial board positions with PLOS, Frontiers and Stroke, and is coordinating editor of the Cochrane Dementia Group. He is part of the NIHR Complex Reviews Support Group and is founder and co-chair of the Scottish Care-Home Research Group. Terry’s work has always maintained a clinical focus and he combines research activity with teaching and clinical commitments in the wards of Glasgow Royal Infirmary. Twitter: @DrTerryQuinn

1 Comment on this post

  1. Thanks for a clear, full and helpful review of a very complex area!
    You mention cost very briefly. In many cash-strapped provider units (e.g. NHS depts!) this is a major factor. Identifying which tools are ‘open source’ and which are copyrighted, and thus only commercially available, is a major factor in usage.
    Unless a commercial screening or diagnostic tool is dramatically better than an open source one, it will rarely be used, except for research. If it *is* manifestly better, then it may be allowed by commissioners to be part of patient management guidelines.
    That then raises the question as to what constitutes ‘dramatically better’ and ‘manifestly better’, and a specific health economic methodology will need to be applied (or invented) to answer this. This is a whole separate discussion!

    Kit Byatt
