Common Factors Theory, Models, and Data

     In the Budd & Hughes (2009) article, the authors discuss the ongoing debate over the Dodo bird verdict, the idea that the evidence-based psychotherapies compared in many meta-analyses produce similar outcomes for clients. For many researchers and practitioners grounded in CBT, the idea that different treatments do not provide an advantage for certain disorders runs against their understanding of treatment and psychopathology. Accordingly, many meta-analyses conducted by CBT researchers conclude that CBT is more effective than other psychotherapies for diagnoses such as depression. I wonder how CBT theorists would explain issues such as regression to the mean. Would they say it is possible, but that their techniques add something unique that expedites it? Budd and Hughes (2009) argue that this conclusion may be due more to the loyalties of the researchers than to the data themselves. I wonder if this issue could be addressed by running the studies double blind. If graduate students with little theoretical training were supervised to conduct both the research and the treatment sessions, theoretical allegiance could be avoided. I liked the point the authors made about the lack of emphasis that RCTs place on the therapist-client relationship. Almost every theory involves adaptation to the client's symptoms, desires, and specific problems, meaning that no therapy could or should be applied in a uniform way to every client. I wonder if there are variables that could be measured at this individual level, such as deviation from protocol, symptoms targeted, and reduction in certain symptoms, that would point to more of the active ingredients in psychotherapy. 

    The Marcus et al. (2014) article is a meta-analysis of different psychotherapy treatments, focusing on an updated test of the Dodo bird hypothesis as well as on how cognitive behavioral therapy compares to other treatments. Overall, the study was very thorough: the authors described their selection criteria in detail, noted issues that arose in coding different outcomes as primary or secondary, and took care to justify and explain their statistical methods. In the introduction, the authors discuss an idea proposed by Rounsaville and Carroll (2002) that if common factors are truly the key to psychotherapy, training programs should emphasize common factors instead of intelligence. This echoes the Budd and Hughes (2009) article, for it addresses (although from the opposite side) the debate between theoretical allegiance and common factors and their value in clinical practice and training. However, Marcus and colleagues offer a happy medium between these points of view. Toward the end of the article, they propose that students should learn to discern when clients need a more general approach and when they need manualized treatment. This would fit with the Budd and Hughes (2009) article, for factors such as effective interpersonal characteristics and the therapeutic relationship could be researched. At the same time, well-researched and empirically supported treatments would still have their place and could continue to expand to a multitude of problems. I appreciated that this article differentiated cognitive behavioral therapy from behavioral therapy; however, I am surprised the authors did not run an analysis with the two combined so as to compare with the Tomlin article. In general, I wonder if there is research on how practitioners decide when to use a manualized treatment versus a more general approach. Although this clearly depends on theoretical orientation, it may be another study worth doing on graduate students. 

    The Vaidyanathan et al. (2015) article discusses various methods for testing the etiology of psychopathology and their respective disadvantages. The authors further argue that strict research standards should not just be implemented within each field; instead, findings from different research strategies should be combined to avoid incorrect conclusions. I think the idea of combining methodologies and conclusions from various fields could also help unify the different camps of psychology research that exist to prove their particular theory or method correct. Although there is value in understanding treatment effectiveness, many of the Dodo bird studies illustrate how little fruit is borne by comparing small differences in theory. Concerning funding, the authors propose that many research funds go to waste because of methodological concerns, poor research designs, and the use of new technology that does not produce useful or practical results. I think this is an interesting and warranted critique and something to keep in mind when considering the use of AI in psychology research moving forward. I also like their proposed solution, for it seems like it could save money and use limited funds more effectively. Toward the beginning of the paper, the authors describe the phenomenon of "technomyopia": being so amazed at new developments in technology that one uses them for projects without deeply considering whether they will be useful. Although I do think researchers bear a burden in this circumstance, I would also argue that this phenomenon is exacerbated by large funding groups, conferences, and grant sponsors who encourage the use of new and exciting technology as a path to securing money or an important conference talk. Altogether, I liked the authors' discussion of the role theory should play in a broader world of statistics and data. Additionally, this felt like a practical example of the standards set out by Meehl (notwithstanding the complaint against null hypothesis significance testing). The section on depression emphasized that, when many different types of data are taken together, depression seems to differ between a single episode and many episodes. Additionally, persistent depression may be due to a vulnerability and should be considered distinct. In the section on substance use in teens, the article argues that the gateway idea is not well supported in quasi-experimental studies. In addition, there seems to be evidence for cognitive issues preceding drug use, which may explain why teens begin using in the first place and continue to. The article's quote, "This should be a very clear sign that what we lack is not information, but rather how to deal with it" (Vaidyanathan et al., 2015), summarizes their key concern with the state of the research and how to fix it. 

    In the podcast on the Meehl paper, the hosts discussed the implications of Meehl's recommendations for theories and his critiques of null hypothesis significance testing. On the surface, testing a theory by its ability to make risky, unobvious, and correct predictions is a good idea. However, I agree with the hosts that much psychological research often cannot predict even the direction of an effect, never mind the precision of an estimate. I also wonder: if psychology could in fact predict an effect size or a conclusion with sufficient accuracy, would those results apply to real-world scenarios in which many other variables are at play? The hosts seemed to wrestle with this topic, which I thought ended up as a debate over the internal-external validity trade-off. One solution may be to design noisier studies that still require a high degree of prediction accuracy; if this could be done, a theory could apply to many settings and units and retain high internal validity despite all the noise. Toward the end of the podcast, the hosts questioned whether surprise necessarily implies real-world impact. I think the answer is no, for some of the most useful psychological studies seem to make sense and are almost obvious, yet can be applied to new contexts, situations, and problems. For example, it may seem obvious that a person who blames a human for their actions rather than the surrounding environment differs from someone who blames the environment for human action. However, taking this theory and applying it broadly to community, political, and legal principles is where it becomes impactful and surprisingly useful. In this way, the value of a theory may lie not in its initial shock but in its application to diverse parts of human nature; it is the application that would be surprising. 


Grade: 23/25


