Monday, August 31, 2015

Cats on Treadmills (and the plasticity of biological motion perception)


Cats on a treadmill. From Treadmill Kittens.


It's been an eventful week. The 10th Anniversary of Hurricane Katrina. The 10th Anniversary of Optogenetics (with commentary from the neuroscience community and from the inventors). The Reproducibility Project's efforts to replicate 100 studies in cognitive and social psychology (published in Science). And the passing of the great writer and neurologist, Oliver Sacks. Oh, and Wes Craven just died too...

I'm not blogging about any of these events. Many, many others have already written about them (see selected reading list below). And The Neurocritic has been feeling tapped out lately.

Hence the cats on treadmills. They're here to introduce a new study which demonstrated that early visual experience is not necessary for the perception of biological motion (Bottari et al., 2015). Biological motion perception involves the ability to understand and visually track the movement of a living being. This phenomenon is often studied using point light displays, as shown below in a demo from the BioMotion Lab. You should really check out their flash animation that allows you to view human, feline, and pigeon walkers moving from right to left, scrambled and unscrambled, masked and unmasked, inverted and right side up.






Biological Motion Perception Is Spared After Early Visual Deprivation

People born with dense, bilateral cataracts that are surgically removed at a later date show deficits in higher visual processing, including the perception of global motion, global form, faces, and illusory contours. Proper neural development during the critical (or sensitive) period early in life is dependent on experience, in this case visual input. However, it seems that the perception of biological motion (BM) does not require early visual experience (Bottari et al., 2015).

Participants in the study were 12 individuals with congenital cataracts that were removed at a mean age of 7.8 years (range 4 months to 16 yrs). Mean age at testing was 17.8 years (range 10-35 yrs). The study assessed their biological motion thresholds (extracting BM from noise) and recorded their EEG in response to point light displays of a walking man and to scrambled versions of the walking man (see demo).





Behavioral performance on the BM threshold task didn't differ much between the congenital cataract (cc) and matched control (mc) groups (i.e., there was a lot of overlap between the filled diamonds and the open triangles below).

Modified from Fig. 1 (Bottari et al., 2015).


The event-related potentials (ERPs) averaged to presentations of the walking man vs. scrambled man showed the same pattern in cc and mc groups as well: larger to walking man (BM) than scrambled man (SBM).

Modified from Fig. 1 (Bottari et al., 2015).


The N1 component (the peak at about 0.25 sec post-stimulus) seems a little smaller in cc but that wasn't significant. On the other hand, the earlier P1 was significantly reduced in the cc group. Interestingly, the duration of visual deprivation, amount of visual experience, and post-surgical visual acuity did not correlate with the size of the N1.

The authors discuss three possible explanations for these results:
(1) The neural circuitry associated with the processing of BM can specialize in late childhood or adulthood; that is, visual input, as soon as it becomes available, initiates the functional maturation of the BM system. Alternatively, the neural systems for BM might mature independently of vision: either (2) they are shaped cross-modally, or (3) they mature independent of experience.

They ultimately favor the third explanation, that "the neural systems for BM specialize independently of visual experience." They also point out that the ERPs to faces vs. scrambled faces in the cc group do not show the characteristic difference between these stimulus types. What's so special about biological motion, then? Here the authors wave their hands and arms a bit:
We can only speculate why these different developmental trajectories for faces and BM emerge: BM is characteristic for any type of living being and the major properties are shared across species. ... By contrast, faces are highly specific for a species and biases for the processing of faces from our own ethnicity and age have been shown.

It's more important to see if a bear is running towards you than it is to recognize faces, as anyone with congenital prosopagnosia ("face blindness") might tell you...


Footnote

1 Troje & Westhoff (2006):
"The third sequence showed a walking cat. The data are based on a high-speed (200 fps) video sequence showing a cat walking on a treadmill. Fourteen feature points were manually sampled from single frames. As with the pigeon sequence, data were approximated with a third-order Fourier series to obtain a generic walking cycle."
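The footnote's curve-fitting step can be sketched in a few lines. This is a minimal stand-in, assuming a single marker coordinate sampled over one gait cycle; the trajectory, sampling rate, and noise level below are made up for illustration, not taken from Troje & Westhoff's data:

```python
import numpy as np

def fit_fourier_series(t, y, order=3, period=1.0):
    """Least-squares fit of a truncated Fourier series:
    y(t) ~ a0 + sum_k [a_k cos(k*w*t) + b_k sin(k*w*t)]."""
    w = 2 * np.pi / period
    cols = [np.ones_like(t)]
    for k in range(1, order + 1):
        cols.append(np.cos(k * w * t))
        cols.append(np.sin(k * w * t))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef, A @ coef  # coefficients and the smoothed "generic" cycle

# Hypothetical marker: vertical position of one feature point over a
# one-second gait cycle sampled at 200 fps, plus measurement noise.
t = np.arange(200) / 200.0
truth = 0.5 + 0.3 * np.sin(2 * np.pi * t) + 0.1 * np.cos(4 * np.pi * t)
rng = np.random.default_rng(0)
y = truth + rng.normal(0, 0.01, size=t.shape)
coef, smooth = fit_fourier_series(t, y, order=3, period=1.0)
```

Averaging noisy frame-by-frame samples into a low-order Fourier series like this is what turns raw video coordinates into a clean, repeatable walking cycle.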


Reference

Bottari, D., Troje, N., Ley, P., Hense, M., Kekunnaya, R., & Röder, B. (2015). The neural development of the biological motion processing system does not rely on early visual input. Cortex, 71, 359-367. DOI: 10.1016/j.cortex.2015.07.029






Links to Pieces About Momentous Events

Remembering Katrina in the #BlackLivesMatter Movement by Tracey Ross

Hurricane Katrina Proved That If Black Lives Matter, So Must Climate Justice by Elizabeth Yeampierre

Project Katrina: A Decade of Resilience in New Orleans by Steven Gray

Hurricane Katrina, 10 Years Later, Buzzfeed's Katrina issue

ChR2: Anniversary: Optogenetics, special issue of Nature Neuroscience

ChR2 coming of age, editorial in Nature Neuroscience

Optogenetics and the future of neuroscience by Ed Boyden

Optogenetics: 10 years of microbial opsins in neuroscience by Karl Deisseroth

Optogenetics: 10 years after ChR2 in neurons—views from the community in Nature Neuroscience

10 years of neural opsins by Adam Calhoun

Estimating the reproducibility of psychological science in Science

Reproducibility Project: Psychology on Open Science Framework

How Reliable Are Psychology Studies? by Ed Yong

The Bayesian Reproducibility Project by Alexander Etz

A Life Well Lived, by those who maintain the Oliver Sacks, M.D. website.

Oliver Sacks, Neurologist Who Wrote About the Brain’s Quirks, Dies at 82, NY Times obituary

Oliver Sacks has left the building by Vaughan Bell

My Own Life, Oliver Sacks on Learning He Has Terminal Cancer



Sunday, August 09, 2015

Will machine learning create new diagnostic categories, or just refine the ones we already have?


How do we classify and diagnose mental disorders?

In the coming era of Precision Medicine, we'll all want customized treatments that “take into account individual differences in people’s genes, environments, and lifestyles.” To do this, we'll need precise diagnostic tools to identify the specific disease process in each individual. Although focused on cancer in the near-term, the longer-term goal of the White House initiative is to apply Precision Medicine to all areas of health. This presumably includes psychiatry, but the links between Precision Medicine, the BRAIN initiative, and RDoC seem a bit murky at present.1

But there's nothing a good infographic can't fix. Science recently published a Perspective piece by the NIMH Director and the chief architect of the Research Domain Criteria (RDoC) initiative (Insel & Cuthbert, 2015). There's Deconstruction involved, so what's not to like? 2


ILLUSTRATION: V. Altounian and C. Smith / SCIENCE


In this massively ambitious future scenario, the totality of one's genetic risk factors, brain activity, physiology, immune function, behavioral symptom profile, and life experience (social, cultural, environmental) will be deconstructed and stratified and recompiled into a neat little cohort. 3

The new categories will be data driven. The project might start by collecting colossal quantities of expensive data from millions of people, and continue by running classifiers on exceptionally powerful computers (powered by exceptionally bright scientists/engineers/coders) to extract meaningful patterns that can categorize the data with high levels of sensitivity and specificity. Perhaps I am filled with pathologically high levels of negative affect (Loss? Frustrative Nonreward?), but I find it hard to be optimistic about progress in the immediate future. You know, for a Precision Medicine treatment for me (and my pessimism)...

But seriously.

Yes, RDoC is ambitious (and has its share of naysayers). But what you may not know is that it's also trendy! Just the other day, an article in The Atlantic explained Why Depression Needs A New Definition (yes, RDoC) and even cited papers like Depression: The Shroud of Heterogeneity. 4

But let's just focus on the brain for now. For a long time, most neuroscientists have viewed mental disorders as brain disorders. [But that's not to say that environment, culture, experience, etc. play no role! cf. Footnote 3]. So our opening question becomes: How do we classify and diagnose brain disorders (or rather, neural circuit disorders) in a fashion consistent with RDoC principles? Is there really One Brain Network for All Mental Illness, for instance? (I didn't think so.)

Our colleagues in Asia and Australia and Europe and Canada may not have gotten the funding memo, however, and continue to run classifiers based on DSM categories. 5 In my previous post, I promised an unsystematic review of machine learning as applied to the classification of major depression. You can skip directly to the Appendix to see that.

Regardless of whether we use DSM-5 categories or RDoC matrix constructs, what we need are robust and reproducible biomarkers (see Table 1 above). A brief but excellent primer by Woo and Wager (2015) outlined the characteristics of a useful neuroimaging biomarker:
1. Criterion 1: diagnosticity

Good biomarkers should produce high diagnostic performance in classification or prediction. Diagnostic performance can be evaluated by sensitivity and specificity. Sensitivity concerns whether a model can correctly detect signal when signal exists. Effect size is a closely related concept; larger effect sizes are related to higher sensitivity. Specificity concerns whether the model produces negative results when there is no signal. Specificity can be evaluated relative to a range of specific alternative conditions that may be confusable with the condition of interest.

2. Criterion 2: interpretability

Brain-based biomarkers should be meaningful and interpretable in terms of neuroscience, including previous neuroimaging studies and converging evidence from multiple sources (eg, animal models, lesion studies, etc). One potential pitfall in developing neuroimaging biomarkers is that classification or prediction models can capitalize on confounding variables that are not neuroscientifically meaningful or interesting at all (eg, in-scanner head movement). Therefore, neuroimaging biomarkers should be evaluated and interpreted in the light of existing neuroscientific findings.

3. Criterion 3: deployability

Once the classification or outcome-prediction model has been developed as a neuroimaging biomarker, the model and the testing procedure should be precisely defined so that it can be prospectively applied to new data. Any flexibility in the testing procedures could introduce potential overoptimistic biases into test results, rendering them useless and potentially misleading. For example, “amygdala activity” cannot be a good neuroimaging biomarker without a precise definition of which “voxels” in the amygdala should be activated and the relative expected intensity of activity across each voxel. A well-defined model and standardized testing procedure are crucial aspects of turning neuroimaging results into a “research product,” a biomarker that can be shared and tested across laboratories.

4. Criterion 4: generalizability

Clinically useful neuroimaging biomarkers aim to provide predictions about new individuals. Therefore, they should be validated through prospective testing to prove that their performance is generalizable across different laboratories, different scanners or scanning procedures, different populations, and variants of testing conditions (eg, other types of chronic pain). Generalizability tests inherently require multistudy and multisite efforts. With a precisely defined model and standardized testing procedure (criterion 3), we can easily test the generalizability of biomarkers and define the boundary conditions under which they are valid and useful.
[Then the authors evaluated the performance of a structural MRI signature for IBS presented in an accompanying paper.]
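The quantities in Criterion 1 are simple functions of a confusion matrix. A minimal sketch (the counts below are hypothetical, not drawn from any study discussed here):

```python
def diagnostic_performance(tp, fn, tn, fp):
    """Sensitivity, specificity, and overall accuracy from confusion counts."""
    sensitivity = tp / (tp + fn)   # detect signal when signal exists
    specificity = tn / (tn + fp)   # negative result when there is no signal
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical counts: 40 of 50 patients detected, 45 of 50 controls rejected.
sens, spec, acc = diagnostic_performance(tp=40, fn=10, tn=45, fp=5)
```

Note that overall accuracy blends the two: a classifier can post a respectable accuracy while being weak on either sensitivity or specificity, which is why biomarker papers should report all three.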

Should we try to improve on a neuroimaging biomarker (or “neural signature”) for classic disorders in which “Neuroanatomical diagnosis was correct in 80% and 72% of patients with major depression and schizophrenia, respectively...” (Koutsouleris et al., 2015)? That study used large cohorts and evaluated the trained biomarker against an independent validation database (i.e., it was more thorough than many other investigations). Or is the field better served by classifying when loss and agency and auditory perception go awry? What would individualized treatments for these constructs look like? Presumably, the goal is to develop better treatments, and to predict who will respond to a specific treatment(s).

OR should we adopt the surprisingly cynical view of some prominent investigators, who say:
...identifying a genuine neural signature would necessitate the discovery of a specific pattern of brain responses that possesses nearly perfect sensitivity and specificity for a given condition or other phenotype. At the present time, neuroscientists are not remotely close to pinpointing such a signature for any psychological disorder or trait...

If that's true, then we'll have an awfully hard time with our resting state fMRI classifier for neuro-nihilism.


Footnotes

1 Although NIMH Mad Libs does a bang-up job...

2 Derrida's Deconstruction and RDoC are diametrically opposed, as irony would have it.

3 Or maybe an n of 1...  I'm especially curious about how life experience will be incorporated into the mix. Perhaps the patient of the future will upload all the data recorded by their memory implants, as in The Entire History of You (an episode of Black Mirror).

4 The word “shroud” always makes everything sound so dire and deathly important... especially when used as a noun.

5 As do many research groups in the US. This is meant to be snarky, but not condescending to anyone who follows DSM-5 in their research.


References

Insel, T., & Cuthbert, B. (2015). Brain disorders? Precisely. Science, 348 (6234), 499-500 DOI: 10.1126/science.aab2358

Woo, C., & Wager, T. (2015). Neuroimaging-based biomarker discovery and validation. PAIN, 156 (8), 1379-1381 DOI: 10.1097/j.pain.0000000000000223



Appendix

Below are 34 references on MRI/fMRI applications of machine learning used to classify individuals with major depression (I excluded EEG/MEG for this particular unsystematic review). The search terms were combinations of "major depression" "machine learning" "support vector" "classifier".

Here's a very rough summary of methods:

Structural MRI: 1, 14, 22, 29, 31, 32

DTI: 6, 12, 18, 19

Resting State fMRI: 3, 5, 8, 9, 11, 16, 17, 21, 28, 33

fMRI while viewing different facial expressions: 2, 7, 10, 24, 26, 27, 34

comorbid panic: 13

verbal working memory: 25

guilt: 15 (see The Idiosyncratic Side of Diagnosis by Brain Scan and Machine Learning)

Schizophrenia vs. Bipolar vs. Schizoaffective: 16

Psychotic Major Depression vs. Bipolar Disorder: 20

Schizophrenia vs. Major Depression: 23, 31

Unipolar vs. Bipolar Depression: 24, 32, 34

This last one is especially important, since an accurate diagnosis can avoid the potentially disastrous prescribing of antidepressants in bipolar depression.

Idea that may already be implemented somewhere: Individual labs or research groups could perhaps contribute to a support vector machine clearing house (e.g., at NITRC or OpenfMRI or GitHub) where everyone can upload the code for data processing streams and various learning/classification algorithms to try out on each others' data.
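Such a clearing house would essentially pass a frozen model plus a precisely defined testing procedure between sites (Woo & Wager's criterion 3). A toy sketch of the idea, using simulated data and a nearest-centroid classifier as a stand-in for the fancier algorithms in the list above:

```python
import numpy as np

def fit_centroids(X, y):
    """Fit a nearest-centroid classifier: one mean feature vector per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    """Assign each row of X to the class with the nearest centroid."""
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

# Hypothetical shared workflow: a model fit on one site's (simulated) data
# is applied, unchanged, to a second site's data.
rng = np.random.default_rng(1)
X_site1 = np.vstack([rng.normal(0, 1, (50, 20)), rng.normal(1, 1, (50, 20))])
y_site1 = np.array([0] * 50 + [1] * 50)
X_site2 = np.vstack([rng.normal(0, 1, (30, 20)), rng.normal(1, 1, (30, 20))])
y_site2 = np.array([0] * 30 + [1] * 30)

model = fit_centroids(X_site1, y_site1)
acc = (predict(model, X_site2) == y_site2).mean()
```

The point of the exchange format is that `model` and `predict` travel together: the receiving lab runs the frozen procedure on its own data with no tuning, which is exactly the generalizability test most single-site studies skip.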

1.
Brain. 2012 May;135(Pt 5):1508-21. doi: 10.1093/brain/aws084.
Multi-centre diagnostic classification of individual structural neuroimaging scans from patients with major depressive disorder.
Mwangi B, Ebmeier KP, Matthews K, Steele JD.

2.
Bipolar Disord. 2012 Jun;14(4):451-60. doi: 10.1111/j.1399-5618.2012.01019.x.
Pattern recognition analyses of brain activation elicited by happy and neutral faces in unipolar and bipolar depression.
Mourão-Miranda J, Almeida JR, Hassel S, de Oliveira L, Versace A, Marquand AF, Sato JR, Brammer M, Phillips ML.

3.
PLoS One. 2012;7(8):e41282. doi: 10.1371/journal.pone.0041282. Epub 2012 Aug 20.
Changes in community structure of resting state functional connectivity in unipolar depression.
Lord A, Horn D, Breakspear M, Walter M.

5.
Neuroreport. 2012 Dec 5;23(17):1006-11. doi: 10.1097/WNR.0b013e32835a650c.
Machine learning classifier using abnormal brain network topological metrics in major depressive disorder.
Guo H, Cao X, Liu Z, Li H, Chen J, Zhang K.

6.
PLoS One. 2012;7(9):e45972. doi: 10.1371/journal.pone.0045972. Epub 2012 Sep 26.
Increased cortical-limbic anatomical network connectivity in major depression revealed by diffusion tensor imaging.
Fang P, Zeng LL, Shen H, Wang L, Li B, Liu L, Hu D.

7.
PLoS One. 2013;8(4):e60121. doi: 10.1371/journal.pone.0060121. Epub 2013 Apr 1.
What does brain response to neutral faces tell us about major depression? evidence from machine learning and fMRI.
Oliveira L, Ladouceur CD, Phillips ML, Brammer M, Mourao-Miranda J.

8.
Hum Brain Mapp. 2014 Apr;35(4):1630-41. doi: 10.1002/hbm.22278. Epub 2013 Apr 24.
Unsupervised classification of major depression using functional connectivity MRI.
Zeng LL, Shen H, Liu L, Hu D.

9.
Psychiatry Clin Neurosci. 2014 Feb;68(2):110-9. doi: 10.1111/pcn.12106. Epub 2013 Oct 31.
Aberrant functional connectivity for diagnosis of major depressive disorder: a discriminant analysis.

10.
Neuroimage. 2015 Jan 15;105:493-506. doi: 10.1016/j.neuroimage.2014.11.021. Epub 2014 Nov 15.
Sparse network-based models for patient classification using fMRI.
Rosa MJ, Portugal L, Hahn T, Fallgatter AJ, Garrido MI, Shawe-Taylor J, Mourao-Miranda J.

11.
Proc IEEE Int Symp Biomed Imaging. 2014 Apr;2014:246-249.
ELUCIDATING BRAIN CONNECTIVITY NETWORKS IN MAJOR DEPRESSIVE DISORDER USING CLASSIFICATION-BASED SCORING.
Sacchet MD, Prasad G, Foland-Ross LC, Thompson PM, Gotlib IH.

12.
Front Psychiatry. 2015 Feb 18;6:21. doi: 10.3389/fpsyt.2015.00021. eCollection 2015.
Support vector machine classification of major depressive disorder using diffusion-weighted neuroimaging and graph theory.
Sacchet MD, Prasad G, Foland-Ross LC, Thompson PM, Gotlib IH.

13.
J Affect Disord. 2015 Sep 15;184:182-92. doi: 10.1016/j.jad.2015.05.052. Epub 2015 Jun 6.
Separating depressive comorbidity from panic disorder: A combined functional magnetic resonance imaging and machine learning approach.
Lueken U, Straube B, Yang Y, Hahn T, Beesdo-Baum K, Wittchen HU, Konrad C, Ströhle A, Wittmann A, Gerlach AL, Pfleiderer B, Arolt V, Kircher T.

14.
PLoS One. 2015 Jul 17;10(7):e0132958. doi: 10.1371/journal.pone.0132958. eCollection 2015.
Structural MRI-Based Predictions in Patients with Treatment-Refractory Depression (TRD).
Johnston BA, Steele JD, Tolomeo S, Christmas D, Matthews K.

15.
Psychiatry Res. 2015 Jul 5. pii: S0925-4927(15)30025-1. doi: 10.1016/j.pscychresns.2015.07.001. [Epub ahead of print]
Machine learning algorithm accurately detects fMRI signature of vulnerability to major depression.
Sato JR, Moll J, Green S, Deakin JF, Thomaz CE, Zahn R.

16.
Neuroimage. 2015 Jul 24. pii: S1053-8119(15)00674-6. doi: 10.1016/j.neuroimage.2015.07.054. [Epub ahead of print]
A group ICA based framework for evaluating resting fMRI markers when disease categories are unclear: Application to schizophrenia, bipolar, and schizoaffective disorders.
Du Y, Pearlson GD, Liu J, Sui J, Yu Q, He H, Castro E, Calhoun VD.

17.
Neuroreport. 2015 Aug 19;26(12):675-80. doi: 10.1097/WNR.0000000000000407.
Predicting clinical responses in major depression using intrinsic functional connectivity.
Qin J, Shen H, Zeng LL, Jiang W, Liu L, Hu D.

18.
J Affect Disord. 2015 Jul 15;180:129-37. doi: 10.1016/j.jad.2015.03.059. Epub 2015 Apr 4.
Altered anatomical patterns of depression in relation to antidepressant treatment: Evidence from a pattern recognition analysis on the topological organization of brain networks.
Qin J, Wei M, Liu H, Chen J, Yan R, Yao Z, Lu Q.

19.
Magn Reson Imaging. 2014 Dec;32(10):1314-20. doi: 10.1016/j.mri.2014.08.037. Epub 2014 Aug 29.
Abnormal hubs of white matter networks in the frontal-parieto circuit contribute to depression discrimination via pattern classification.
Qin J, Wei M, Liu H, Chen J, Yan R, Hua L, Zhao K, Yao Z, Lu Q.

20.
Biomed Res Int. 2014;2014:706157. doi: 10.1155/2014/706157. Epub 2014 Jan 19.
Neuroanatomical classification in a population-based sample of psychotic major depression and bipolar I disorder with 1 year of diagnostic stability.
Serpa MH, Ou Y, Schaufelberger MS, Doshi J, Ferreira LK, Machado-Vieira R, Menezes PR, Scazufca M, Davatzikos C, Busatto GF, Zanetti MV.

21.
Psychiatry Res. 2013 Dec 30;214(3):306-12. doi: 10.1016/j.pscychresns.2013.09.008. Epub 2013 Oct 7.
Identifying major depressive disorder using Hurst exponent of resting-state brain networks.
Wei M, Qin J, Yan R, Li H, Yao Z, Lu Q.

22.
J Psychiatry Neurosci. 2014 Mar;39(2):78-86.
Characterization of major depressive disorder using a multiparametric classification approach based on high resolution structural images.
Qiu L, Huang X, Zhang J, Wang Y, Kuang W, Li J, Wang X, Wang L, Yang X, Lui S, Mechelli A, Gong Q.

23.
PLoS One. 2013 Jul 2;8(7):e68250. doi: 10.1371/journal.pone.0068250. Print 2013.
Convergent and divergent functional connectivity patterns in schizophrenia and depression.
Yu Y, Shen H, Zeng LL, Ma Q, Hu D.

24.
Eur Arch Psychiatry Clin Neurosci. 2013 Mar;263(2):119-31. doi: 10.1007/s00406-012-0329-4. Epub 2012 May 26.
Discriminating unipolar and bipolar depression by means of fMRI and pattern classification: a pilot study.
Grotegerd D, Suslow T, Bauer J, Ohrmann P, Arolt V, Stuhrmann A, Heindel W, Kugel H, Dannlowski U.

25.
Neuroreport. 2008 Oct 8;19(15):1507-11. doi: 10.1097/WNR.0b013e328310425e.
Neuroanatomy of verbal working memory as a diagnostic biomarker for depression.
Marquand AF, Mourão-Miranda J, Brammer MJ, Cleare AJ, Fu CH.

26.
Biol Psychiatry. 2008 Apr 1;63(7):656-62. Epub 2007 Oct 22.
Pattern classification of sad facial processing: toward the development of neurobiological markers in depression.
Fu CH, Mourao-Miranda J, Costafreda SG, Khanna A, Marquand AF, Williams SC, Brammer MJ.

27.
Neuroreport. 2009 May 6;20(7):637-41. doi: 10.1097/WNR.0b013e3283294159.
Neural correlates of sad faces predict clinical remission to cognitive behavioural therapy in depression.
Costafreda SG, Khanna A, Mourao-Miranda J, Fu CH.

28.
Magn Reson Med. 2009 Dec;62(6):1619-28. doi: 10.1002/mrm.22159.
Disease state prediction from resting state functional connectivity.
Craddock RC, Holtzheimer PE 3rd, Hu XP, Mayberg HS.

29.
Neuroimage. 2011 Apr 15;55(4):1497-503. doi: 10.1016/j.neuroimage.2010.11.079. Epub 2010 Dec 3.
Prognostic prediction of therapeutic response in depression using high-field MR imaging.
Gong Q, Wu Q, Scarpazza C, Lui S, Jia Z, Marquand A, Huang X, McGuire P, Mechelli A.

30.
Neuroimage. 2012 Jun;61(2):457-63. doi: 10.1016/j.neuroimage.2011.11.002. Epub 2011 Nov 7.
Diagnostic neuroimaging across diseases.
Klöppel S, Abdulkadir A, Jack CR Jr, Koutsouleris N, Mourão-Miranda J, Vemuri P.

31.
Brain. 2015 Jul;138(Pt 7):2059-73. doi: 10.1093/brain/awv111. Epub 2015 May 1.
Individualized differential diagnosis of schizophrenia and mood disorders using neuroanatomical biomarkers.
Koutsouleris N, Meisenzahl EM, Borgwardt S, Riecher-Rössler A, Frodl T, Kambeitz J, Köhler Y, Falkai P, Möller HJ, Reiser M, Davatzikos C.

32.
JAMA Psychiatry. 2014 Nov;71(11):1222-30. doi: 10.1001/jamapsychiatry.2014.1100.
Brain morphometric biomarkers distinguishing unipolar and bipolar depression. A voxel-based morphometry-pattern classification approach.
Redlich R, Almeida JJ, Grotegerd D, Opel N, Kugel H, Heindel W, Arolt V, Phillips ML, Dannlowski U.

33.
Brain Behav. 2013 Nov;3(6):637-48. doi: 10.1002/brb3.173. Epub 2013 Sep 22.
A reversal coarse-grained analysis with application to an altered functional circuit in depression.
Guo S, Yu Y, Zhang J, Feng J.

34.
Hum Brain Mapp. 2014 Jul;35(7):2995-3007. doi: 10.1002/hbm.22380. Epub 2013 Sep 13.
Amygdala excitability to subliminally presented emotional faces distinguishes unipolar and bipolar depression: an fMRI and pattern classification study.
Grotegerd D, Stuhrmann A, Kugel H, Schmidt S, Redlich R, Zwanzger P, Rauch AV, Heindel W, Zwitserlood P, Arolt V, Suslow T, Dannlowski U.


Saturday, August 01, 2015

The Idiosyncratic Side of Diagnosis by Brain Scan and Machine Learning


R2D3 recently had a fantastic Visual Introduction to Machine Learning, using the classification of homes in San Francisco vs. New York as their example. As they explain quite simply:
In machine learning, computers apply statistical learning techniques to automatically identify patterns in data. These techniques can be used to make highly accurate predictions.
You should really head over there right now to view it, because it's very impressive.


Computational neuroscience types are using machine learning algorithms to classify all sorts of brain states, and diagnose brain disorders, in humans. How accurate are these classifications? Do the studies all use separate training sets and test sets, as shown in the example above?

Let's say your fMRI measure is able to differentiate individuals with panic disorder (n=33) from those with panic disorder + depression (n=26) with 79% accuracy.1 Or with structural MRI scans you can distinguish 20 participants with treatment-refractory depression from 21 never-depressed individuals with 85% accuracy.2 Besides the issues outlined in the footnotes, the reality check is that the model must be able to predict group membership for a new (untrained) data set. And most studies don't seem to do this.

I was originally drawn to the topic by a 3 page article entitled, Machine learning algorithm accurately detects fMRI signature of vulnerability to major depression (Sato et al., 2015). Wow! Really? How accurate? Which fMRI signature? Let's take a look.
  • machine learning algorithm = Maximum Entropy Linear Discriminant Analysis (MLDA)
  • accurately predicts = 78.3% (72.0% sensitivity and 85.7% specificity)
  • fMRI signature = guilt-selective anterior temporal functional connectivity changes (seems a bit overly specific and esoteric, no?)
  • vulnerability to major depression = 25 participants with remitted depression vs. 21 never-depressed participants
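Those headline figures are at least internally consistent, as a quick back-of-the-envelope check shows:

```python
# Reported figures from Sato et al. (2015): 72.0% sensitivity over 25 remitted
# subjects and 85.7% specificity over 21 never-depressed controls should
# reproduce the reported 78.3% overall accuracy.
n_pos, n_neg = 25, 21
tp = round(0.720 * n_pos)   # remitted subjects correctly detected
tn = round(0.857 * n_neg)   # controls correctly rejected
accuracy = (tp + tn) / (n_pos + n_neg)
```

So the classifier got 18 of 25 remitted and 18 of 21 controls right; with groups this small, each subject swings accuracy by more than two percentage points.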
The authors used a standard leave-one-subject-out procedure, in which the classification is cross-validated iteratively by excluding one subject, fitting the model on the remaining sample, and then predicting the held-out subject's group membership. But they did not test their fMRI signature in completely independent groups of participants.
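In code, leave-one-subject-out looks something like the toy sketch below, with simulated data and a simple nearest-centroid classifier standing in for the authors' MLDA (the group sizes mirror the study; nothing else does):

```python
import numpy as np

def loo_accuracy(X, y):
    """Leave-one-subject-out: refit on n-1 subjects, predict the held-out one."""
    hits = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        # Nearest-centroid model fit on everyone except subject i
        centroids = {c: X[mask][y[mask] == c].mean(axis=0)
                     for c in np.unique(y[mask])}
        pred = min(centroids, key=lambda c: np.linalg.norm(X[i] - centroids[c]))
        hits += pred == y[i]
    return hits / len(y)

# Simulated stand-in: 25 "remitted" vs 21 "never-depressed" subjects,
# 10 made-up connectivity features each.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.8, 1, (25, 10)), rng.normal(0, 1, (21, 10))])
y = np.array([1] * 25 + [0] * 21)
acc = loo_accuracy(X, y)
```

Note what this does and does not buy you: every prediction is made on a subject the model never saw, but model selection and feature selection still happen within the same 46 people, which is why a truly independent replication sample is the stronger test.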

Nor did they try to compare individuals who are currently depressed to those who are currently remitted. That didn't matter, apparently, because the authors suggest the fMRI signature is a trait marker of vulnerability, not a state marker of current mood. But the classifier missed 28% of the remitted group, who did not have the “guilt-selective anterior temporal functional connectivity changes.”

What is that, you ask? This is a set of mini-regions (i.e., not too many voxels in each) functionally connected to a right superior anterior temporal lobe seed region of interest during a contrast of guilt vs. anger feelings (selected from a number of other possible emotions) for self or best friend, based on written imaginary scenarios like “Angela [self] does act stingily towards Rachel [friend]” and “Rachel does act stingily towards Angela” conducted outside the scanner (after the fMRI session is over). Got that?

You really need to read a bunch of other articles to understand what that means, because the current paper is less than 3 pages long. Did I say that already?


Modified from Fig. 1B (Sato et al., 2015). Weight vector maps highlighting voxels among the 1% most discriminative for remitted major depression vs. controls, including the subgenual cingulate cortex, both hippocampi, the right thalamus and the anterior insulae.


The patients were previously diagnosed according to DSM-IV-TR (which was current at the time), and in remission for at least 12 months. The study was conducted by investigators from Brazil and the UK, so they didn't have to worry about RDoC, i.e. “new ways of classifying mental disorders based on behavioral dimensions and neurobiological measures” (instead of DSM-5 criteria). A “guilt-proneness” behavioral construct, along with the “guilt-selective” network of idiosyncratic brain regions, might be more in line with RDoC than past major depression diagnosis.

Could these results possibly generalize to other populations of remitted and never-depressed individuals? Well, the fMRI signature seems a bit specialized (and convoluted). And overfitting is another likely problem here...

In their next post, R2D3 will discuss overfitting:
Ideally, the [decision] tree should perform similarly on both known and unknown data.

So this one is less than ideal. [NOTE: the one that's 90% in the top figure]

These errors are due to overfitting. Our model has learned to treat every detail in the training data as important, even details that turned out to be irrelevant.
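The overfitting R2D3 describes is easy to demonstrate with a classifier that memorizes its training data, such as a one-nearest-neighbor rule applied to simulated, heavily overlapping classes (all data made up):

```python
import numpy as np

def one_nn_predict(X_train, y_train, X):
    """1-nearest-neighbor: memorizes every training point, details and all."""
    d = np.linalg.norm(X[:, None, :] - X_train[None, :, :], axis=2)
    return y_train[d.argmin(axis=1)]

# Two simulated classes that overlap heavily, so fine details are mostly noise.
rng = np.random.default_rng(3)
X_train = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(0.3, 1, (100, 5))])
y_train = np.array([0] * 100 + [1] * 100)
X_test = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(0.3, 1, (100, 5))])
y_test = np.array([0] * 100 + [1] * 100)

train_acc = (one_nn_predict(X_train, y_train, X_train) == y_train).mean()
test_acc = (one_nn_predict(X_train, y_train, X_test) == y_test).mean()
# train_acc is perfect by construction; test_acc falls back toward chance.
```

Training accuracy is perfect because every training point is its own nearest neighbor; the gap between it and test accuracy is the overfitting. High-dimensional neuroimaging data with small samples are especially prone to exactly this.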

In my next post, I'll present an unsystematic review of machine learning as applied to the classification of major depression. It's notable that Sato et al. (2015) used the word “classification” instead of “diagnosis.”3


ADDENDUM (Aug 3 2015): In the comments, I've presented more specific critiques of: (1) the leave-one-out procedure and (2) how the biomarker is temporally disconnected from when the participants identify their feeling as 'guilt' or 'anger' or etc. (and why shame is more closely related to depression than guilt).


Footnotes

1 The sensitivity (true positive rate) was 73% and the specificity (true negative rate) was 85%. After correcting for confounding variables, these numbers were 77% and 70%, respectively.

2 The abstract concludes this is a “high degree of accuracy.” Not to pick on these particular authors (this is a typical study), but Dr. Dorothy Bishop explains why this is not very helpful for screening or diagnostic purposes. And what you'd really want to do here is to discriminate between treatment-resistant vs. treatment-responsive depression. If an individual does not respond to standard treatments, it would be highly beneficial to avoid a long futile period of medication trials.

3 In case you're wondering, the title of this post was based on The Dark Side of Diagnosis by Brain Scan, which is about Dr. Daniel Amen. The work of the investigators discussed here is in no way, shape, or form related to any of the issues discussed in that post.


Reference

Sato, J., Moll, J., Green, S., Deakin, J., Thomaz, C., & Zahn, R. (2015). Machine learning algorithm accurately detects fMRI signature of vulnerability to major depression. Psychiatry Research: Neuroimaging. DOI: 10.1016/j.pscychresns.2015.07.001

