86. Blind faith in the medical authority has ended

Welcome to the Radically Genuine podcast. I'm Dr. Roger McFillin. We are making some changes with the Radically Genuine brand, and I use that term loosely. We've got a candy bar. I am starting a Substack. You can visit drmcfillin.substack.com and sign up for the free weekly newsletter. The reason I'm doing that is a lot of people have reached out wanting some more information regarding our content, whether it's my Twitter posts or some of our podcast episodes. This is an opportunity for me to delve into some of these topics, even promote some of the research that we're using either in a podcast episode or on my Twitter. And it really does speak to the episode that we're going to have today. There's so much, well, we'll just use the term medical misinformation. I hate that term because it got hijacked during COVID by special interests and the media and government. But I was listening to an amazing podcast, Joe Rogan's, and I'm a fan of Joe Rogan's podcast. He's popular; I know a lot of people are listening to that podcast. A very interesting gentleman, Dr. Aseem Malhotra, an esteemed cardiologist out of the UK, has really become very vocal because of vaccine injuries. Prior to that, he was somebody who was exposing a lot of the misinformation regarding statins and saturated fat and their correlation with heart disease, misinformation that was driving poor health practices, speaking pretty openly about how the sugar industry funded some of this research and how so much misinformation is spread. A lot of practicing physicians, in fact most practicing physicians, are getting information through research that is filtered through conflicts of interest and biases. And on this podcast, he mentioned a 2017 paper by John Ioannidis and colleagues titled How to Survive the Medical Misinformation Mess. He was referring to this paper, and it set the stage for a number of conversation points throughout his podcast. I thought it was really important to download that paper and do a deep dive into it. And in order to have an intelligent conversation on this, I invited Dr. Susan Hannan back on the podcast. I know a lot of people have reached out wanting her back on, both for a female perspective, but also because she's very articulate and very bright and really adds to the conversation. And given that she is an academic and a researcher, and she's the director of clinical research here at our center, I think she provides an area of expertise for discussing evidence-based practice, some of the challenges in utilizing research findings to drive clinical practice, the current state of the academic field, and the limitations in science, how it's promoted, and the challenges systemically. Also on the podcast today, Sean is back with us, of course. For the non-intelligent contributions. In fact, Sean is a former executive in the advertising field. He's also currently in the health field; he is Chief Financial Officer of the Center for Integrated Behavioral Health. So from a business perspective, a marketing perspective, he brings a lot of important wisdom and information about how industry works, especially how the advertising field works. I believe we are honestly behind what I always call the pharma iron curtain. And the pharma iron curtain is the limitation of information through the lens of industry manipulation for profit.
And in an allopathic medical system like the United States and most of Western society, what drives practice is a lot of information that is filtered through special interests. And I think that is the point of today's podcast. I'm just gonna throw this general question out; I'd like Dr. Hannan, you know, just to get your thoughts on this. I think we both believe in the evidence-based movement. Evidence-based practice should be at the forefront of any intervention, psychological or medical. And what that does is it should protect patients. It should protect patients from treatments that have not been proven to work and are not supported by any sound scientific background, a foundation of empirical scrutiny. Instead, I believe it's turned into a scam, to be honest with you. Evidence-based practice is a term that is marketed to physicians and psychologists under the guise that it meets the highest standard of research at this particular stage and that we're providing treatments based on the best available evidence. Thoughts? Well, that's a great way to start. I agree with that statement, that evidence-based care, evidence-based practice, has become this trendy word, where now I think so many, both academics and clinicians, are almost primed when they see that label; the brain thinks this must work because it's labeled as such. And I think anything now could almost be labeled as evidence-based treatment without the proper scrutiny, like you said. So I want to start with this. The paper opens with the claim that 20 to 50 percent of all healthcare services delivered in the United States are inappropriate, wasting resources and/or harming patients. That's upward of half. That's shocking. Because I thought that was just in the mental health field. I thought much of the harm was in my field, the one I've examined most closely. But it's really widespread. It's widespread, I think, across the system for multiple reasons. As far as this paper is concerned, they identified four key problems that I'd like to get into. The first one I think is most important: that much published medical research is not reliable or is of uncertain reliability, offers no benefit to patients, or is not useful to decision makers. So let's talk about what that actually means. With so many publications out there, there is a real and growing need for well-designed and conducted systematic reviews and meta-analyses to provide valid cumulative evidence on relevant topics. And this standard is really, really not easy to meet: in one survey of 60,352 studies, a meager 7% passed criteria for high-quality methods and clinical relevancy, and fewer than 5% passed a validity screening for an evidence-based journal. So that means the overwhelming majority of the research does not meet high-quality methods. And so what is high-quality research from your perspective? That's another great question. So you mentioned in that first bullet point in the article: reliability. For those folks who maybe aren't aware what we mean by reliability when we're talking about scientific studies, we're really talking about consistency. So imagine you're throwing darts at a dartboard. Are you hitting the same section of the dartboard every time? That would be reliable. But you could be reliable and not accurate. Right? So the goal when you're playing darts is to hit the bullseye. Right? Um, so you could be throwing the darts.
Well, actually, I mean, maybe sometimes the goal isn't always to hit the bullseye. I don't play darts, but let's just pretend that that's always what you want to do. I know there's scoring and other things involved, but let's just say you want to hit the bullseye, but you're consistently hitting the bottom-left section. You're reliable, but you're not accurate. You're not valid. So for sound scientific studies, you need to be concerned about both. You need to be concerned about consistency: are we getting the same measurement, the same thing, over time? But also validity, accuracy: what is it that we're measuring? And I think for me, at least, as an academic, that has been the hardest thing to capture, to think about, in the field of mental health. Because I think right now there's disagreement about what depression even is, or what anxiety is. And so oftentimes we're using these self-report measures in randomized controlled trials and other studies as a way to say, this is depression. But what if it's not? And if that foundation, right, of what it is that we're measuring is not true, then everything built on top of it is likely invalid as well. That actually raises questions I had too. So when you talk about evidence-based research, there are different types of studies, in the way they're structured and the way they're implemented, that are definitely of better quality. And even when you're just doing general research and you're Googling and trying to find information, for a layperson, you don't know what you're reading. So at the top of that chain, what's the highest-quality level of research that is valid? And what's it called? What's the name of it? Yeah, so in my opinion, I would say it's the RCTs, the randomized controlled trials. And so these are typically, it depends on what phase. There's phase one, phase two, phase three; there are different levels of these trials. But as you move into phase three and phase four, these are large-scale studies. So I would say typically, and I wouldn't say this is necessarily my personal opinion, but typically good evidence-based science means large studies with a large number of participants. There's randomization to groups. So whether folks are in treatment one or treatment two, or active treatment, let's say vaccine versus placebo, how someone is assigned to a group is completely by random chance, not by any characteristic of their own. So those are RCTs, randomized controlled trials. And what about meta-analyses, those kinds of reviews? A meta-analysis is a review of multiple RCTs. So would anything that comes out as a meta-analysis be the gold standard, the one that you can read and trust and interpret accurately? Yes and no. I mean, the quality of a meta-analysis is based on the quality of the RCTs. It's only as good as what's feeding it, which is really where it gets problematic. So I want to jump in; I don't want to deviate too far, but that to me is very confusing. They're good questions. And that's where this systemic bias and the conflicts of interest come into play. So I mean, I'm just going to stay in my wheelhouse and I'll stay with depression, for example. Okay, so I want to go back to Dr. Hannan's point. First of all, people don't realize that the idea of depression as an illness is a modern construction, right? And it's been identified as a modern illness basically through the use of categories, symptom checklists.
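To put rough numbers on Dr. Hannan's dartboard analogy, here is a minimal sketch in Python; the throw positions and spreads below are illustrative assumptions, not data from any study. Low spread stands in for reliability; a small distance between the average throw and the bullseye stands in for validity.

```python
# Dartboard analogy: a "reliable but not valid" measure is consistent
# (low spread) yet systematically off-target (biased).
import numpy as np

rng = np.random.default_rng(0)
bullseye = np.array([0.0, 0.0])  # the construct we actually intend to measure

# Thrower A: tight cluster aimed bottom-left (reliable, NOT valid)
throws_a = rng.normal(loc=[-4.0, -4.0], scale=0.5, size=(100, 2))
# Thrower B: centered on the bullseye but scattered (valid on average, NOT reliable)
throws_b = rng.normal(loc=[0.0, 0.0], scale=3.0, size=(100, 2))

def describe(throws, label):
    spread = throws.std(axis=0).mean()                      # low = reliable
    bias = np.linalg.norm(throws.mean(axis=0) - bullseye)   # low = valid
    print(f"{label}: spread (unreliability) = {spread:.2f}, bias (invalidity) = {bias:.2f}")

describe(throws_a, "A (reliable, not valid)")
describe(throws_b, "B (valid on average, not reliable)")
```

A questionnaire can score well on the first number and terribly on the second, which is exactly the trap with a consistent self-report measure aimed at the wrong construct.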
And was that construct developed by strong science? No, it's not. It's developed by a bunch of people sitting around saying, hey, you know, people have come to see you clinically and they present themselves as depressed, which is a label. What do you generally see? And so they might say prolonged periods of sadness. Well, how long is long enough to meet that criterion? And it's kind of arbitrary. Let's try two weeks. Completely arbitrary. And if it's two weeks, okay, well, what if they're not sad all the time? Like, all right, how about some of the time, or most of the time? Most of the time sounds good. Okay, most of the time for two weeks. All right, let's start there. All right. Yes, depression means there must be some sadness. And let's add some other things. Could there be appetite changes in someone who's depressed? Yeah, I mean, some people have a really poor appetite and they're at risk of losing weight, which complicates other problems. Okay, that's true. Well, some other people, you know, eat too much food. Okay, well, let's add that in there too. And then some people might say sleep. Some people sleep too much. Other people don't sleep enough. And so here's the problem: eating too much or not eating enough, or not sleeping enough or sleeping too much, could be symptoms of other conditions. So when you try to lump it all in under a general category, you are at risk of false positives: identifying people with a condition that you made up based on a symptom checklist, people who don't really meet what would be considered a definition of that condition. And there could be multiple factors, multiple reasons why someone might feel fatigued or have sleep problems or a number of things that have nothing to do with what we would consider a psychiatric illness. So that in itself is problematic. But we're staying within the depression world, and we're going to understand today how a lot of the information that we assume to be evidence-based treatment is not. What gets published matters. So Sean, if you make the statement that a meta-analysis is going to include the published research, and Dr. Hannan says yes, but it has to be high-quality published research, well, what happens if research that didn't yield favorable results never gets published? You do not have the totality of the actual science. And that's the problem with antidepressants for the treatment of the constructed, labeled concept of depression. Would those then not be randomized controlled trials? Sounds like it would be controlled if you don't have a favorable result. Controlled has to do with what group you're placed in. Whether your trial yielded a significant result or not is a different matter. Yeah, so it's called the file drawer problem. So especially the top-tier journals, the journals that have what's called a high impact factor, where if you publish in that journal you'll likely be cited, I don't know, maybe 20 times per year on average; a lot of those journals will only publish significant results, meaning results showing that there was a significant difference between your conditions, whatever groups you had. You could run a very scientifically sound, robust study and have non-significant findings, right? The data are what they are, but you are significantly less likely to get that published, especially in a top-tier journal. So what happens? It just gets filed away in the file drawer and no one sees it. The public doesn't see it.
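The file drawer problem is easy to simulate. Here is a toy sketch, with every number an illustrative assumption: a drug with no true effect is tested in many small trials, and only the favorable, "significant" ones get published. The published record then shows a benefit that does not exist.

```python
# File-drawer simulation: true drug effect is ZERO, but selective
# publication of significant, favorable trials manufactures an "effect".
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_trials, n_per_arm = 200, 50
published_effects = []

for _ in range(n_trials):
    drug = rng.normal(0.0, 1.0, n_per_arm)      # true effect = 0
    placebo = rng.normal(0.0, 1.0, n_per_arm)
    t, p = stats.ttest_ind(drug, placebo)
    # only significant AND favorable results make it out of the file drawer
    if p < 0.05 and drug.mean() > placebo.mean():
        published_effects.append(drug.mean() - placebo.mean())

print(f"published {len(published_effects)} of {n_trials} trials")
print(f"mean published 'effect': {np.mean(published_effects):.2f} (true effect is 0)")
```

A meta-analysis pooling only those published trials would inherit exactly this bias, which is the point made above: the meta-analysis is only as good as what feeds it.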
And imagine two things. One, it's an industry-run trial where potentially billions and billions of dollars are riding on getting a favorable result for that drug. Or two, you're an academic in a publish-or-perish environment where getting grant money or achieving tenure really depends on having a lab that produces high-quality science. When we talk about bias, that's what we mean. Even when we stay away from fraud, and clear fraud does exist, the bias is simply that you really want your study to turn out well. And you have to get published in order to continue with your career. That influences what scientific research becomes part of the published record and what is promoted in the media and elsewhere. Our desire to see something work can influence how we interpret data. There's such a conflict of interest there, right? And I've talked about this openly with my students, how I am not at an R1 institution, so I do not have to worry about securing external grants, but I do have a publishing minimum. They don't tell you exactly what it is, but they say on average you should be publishing one scientific paper in a peer-reviewed journal per year. And so, right, there's on one hand, obviously, as a researcher, the draw and the passion to conduct sound research. And at the same time, I have to keep pushing this out in order to keep my job or attain tenure. And especially, I think, in our field, in the mental health field, a lot of the quality research is longitudinal in nature. It takes time, right? To really understand what's going on, the phenomenon that's happening, to see how it changes over time. But then there's this: to keep my job, I have to keep pushing out papers. So conducting purely longitudinal research is really hard. So is the unintended consequence of that just a lot more research out there that's of poor quality? That is my opinion, yes. And according to this article, that's what that 7% figure points to: a meager 7% passed criteria for high-quality methods and clinical relevancy. So the majority of research that's coming out doesn't meet the standard to really say it's reliable and valid. And that's mine included. I'm certainly trying not to sit here on a high horse. Especially, I think, when I was in graduate school and just out of graduate school, right? There's such a draw to just get something out that I'm sure if you were to examine some of my papers, they would not meet that criteria. Sure, yeah. And then the problem, when you're talking about industry and the pharmaceutical industry, is that they're trying to get their drugs to market. And so everything they're going to do design-wise is to try to demonstrate some statistically significant response in comparison to placebo, placebo being the control group. And if the FDA requires only two positive results in order to achieve FDA approval, they can continue to run studies in different ways and use some form of statistical manipulation to try to develop that difference between groups. And one of the ways to develop that statistical difference is to really control the final measure that you're using, so that the final measure can be created to somehow amplify a response from the drug group, by using a symptom checklist, right?
So for example, this is what happened with the original trials for antidepressants: they would take the control group and withdraw them from their current medications. So if they were on psychiatric drugs, they would withdraw them to nothing and use them as the comparison group, not acknowledging that that abrupt withdrawal from the drug was going to create withdrawal symptoms, which would intensify mood and other physiological symptoms being measured on the checklist. The other thing is, if the drug itself created something like increased agitation that led to violence, suicide, or self-injury, they removed them from the trial. They got them off the drug and they removed them from the trial. In science, you have to report that, right? That was not submitted to the FDA. So essentially what you're doing is amplifying any response from the drug group and creating a sicker control group for comparison. And then there's the difference between what is statistically meaningful and what is clinically relevant, which is what we saw and continue to see with antidepressant drugs. You can create a statistical difference by doing those things, through publication bias, all the different tactics the pharmaceutical industry uses, but even then, it doesn't create a real clinically relevant response. We can't really identify through published research that either short or long term, the drug itself creates this antidepressant effect in a way that is really meaningful to us as clinicians. Yeah, we've gotten to this culture now where we see something is statistically significant and we celebrate that so much without actually questioning and looking at the data, looking at the numbers, and asking what that means. Because Roger, like you're saying, it's not hard to manipulate statistical significance. So it depends on what type of analyses they're using, but commonly, inferential statistics use what are called p-values to determine whether or not something is significant; that's still very common. And some of the components that go into calculating a p-value are the sample size, so the number of participants in your study, as well as something called the standard error: think how noisy or how messy your data are. So Roger, as you were talking, I was thinking, yes, of course, if we make our data as clean as possible, that absolutely does not have external validity, meaning it doesn't actually represent the population we're testing this against. The cleaner the data, the less noise there is, and so the more likely you are to have statistical significance. And also the sheer number of participants: the higher the number of participants, the more likely you'll hit that prized p less than 0.05, meaning it's statistically significant. But that's just math, that's just mathematics. It doesn't actually mean that it's clinically meaningful or clinically significant, Roger, like you're saying. Take the TADS study, which we've talked a lot about, because I'm interested in antidepressants for children and adolescents, who are a very vulnerable population. And it's part of my mission that parents be able to get this information in order to make a more informed decision, because children can't consent for themselves. Children and teenagers are relying upon adults. And so we have to get that information out there.
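Dr. Hannan's point that significance "is just math" can be shown directly. In this sketch, a one-point drug-placebo difference on a depression scale, far smaller than what clinicians would usually call meaningful, tends to cross p < 0.05 once the arms get large enough. All numbers here (scale, means, spread) are illustrative assumptions, not trial data.

```python
# Statistical vs. clinical significance: a clinically trivial difference
# becomes "significant" purely by increasing the sample size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
true_diff = 1.0   # one point on a ~52-point scale: clinically negligible (assumption)
sd = 8.0          # typical-looking spread of scores (assumption)

for n in (25, 100, 1000):
    drug = rng.normal(20 - true_diff, sd, n)     # drug arm scores
    placebo = rng.normal(20, sd, n)              # placebo arm scores
    t, p = stats.ttest_ind(drug, placebo)
    flag = "  <- 'significant'" if p < 0.05 else ""
    print(f"n per arm = {n:>4}: p = {p:.4f}{flag}")
```

The effect never changed; only the arithmetic around it did. That is the gap between a p-value and a clinically relevant response.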
But what's interesting about TADS is that in the analysis of the data, the original endpoint measure, and I can't remember exactly what the measure was, a child depression checklist, something like that, that original endpoint that was going to measure whether the antidepressant had any effect showed no difference between it and the other treatment arms, which included cognitive behavioral therapy; I can't remember what the other control group was. Bottom line, there really was no statistical difference. So they just created other checklists. You can just create another measure to try to show the drug worked on that instead. Which completely defeats the point of having an endpoint. Like, the endpoint is to say, okay, we've decided that this equals a significant reduction in depression symptoms or alleviation of depression. So if you just completely ignore the endpoint, what's the point of having it to begin with? Are those typically defined in advance? They should be. They have to be. Well, they should be. Because I think about when I worked in marketing: we would have marketing plans we were proposing, or we were trying something new, and leadership would always say, what's your measurement of success? And we would say, we're anticipating this happening, or we want to see this get up to this level. And that would define whether our attempt was judged a success or a failure later on. And then, you know, that whole margin of error would always come in, plus or minus a couple percentage points, and that would always be what we were debating. You know, did it really do anything? We don't know, because the plus or minus was so high that we just didn't feel confident. Yeah. I mean, it's like one of those things where you might say, oh, well, there's really no difference between the drug group and these control groups on this depression scale. But look at this: we found that there is a difference here on this item, which is hopelessness, and then this item, which is concentration. And the next thing you know, the authors' conclusions are something like, fluoxetine in comparison to placebo is significantly better at decreasing symptoms of hopelessness and improving concentration, right? And then all of a sudden you see that as part of the abstract and part of the conclusion, and that seems to somehow drive clinical practice; the pharmaceutical companies will use it in their pamphlets. So this goes back to problem number two: most healthcare professionals are not aware of this problem. And this kind of ignorance emerges in several studies and surveys. One that's listed in this paper: in a study of journal reading habits, internists, approximately half of whom were alumni of the Robert Wood Johnson Clinical Scholars Program, reported that they obtained information mostly from abstracts and not the full articles, stating that they relied on editors to assure rigor and study quality. Such trust may be misplaced. For example, recent studies showed that several editors of peer-reviewed journals could not tell whether a trial was randomized without a special checklist. Even then, of the 324 studies editorial staff considered as randomized trials, 127, or 39 percent, were actually not randomized.
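That item-by-item hunting, finding a "significant" hopelessness item here and a concentration item there, is a textbook multiple-comparisons problem, and it can be quantified under simple assumptions. With 20 independent checklist items and no true drug effect anywhere, the chance that at least one item comes up p < 0.05 is about 1 - 0.95^20, roughly 64%. A toy simulation, with all numbers illustrative:

```python
# Outcome switching / multiple comparisons: test enough checklist items
# and something "significant" will usually appear, even with zero effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_items, n_per_arm, n_sims = 20, 100, 2000
hits = 0
for _ in range(n_sims):
    drug = rng.normal(0, 1, (n_per_arm, n_items))     # true effect = 0 on every item
    placebo = rng.normal(0, 1, (n_per_arm, n_items))
    _, p = stats.ttest_ind(drug, placebo, axis=0)     # one t-test per item
    hits += (p < 0.05).any()                          # did any item "win"?

print(f"trials with >=1 'significant' item: {hits / n_sims:.0%} (expected ~64%)")
```

That is why a pre-specified primary endpoint matters: it is the one comparison that was not fished for after the fact.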
Continuing from the paper: many healthcare professionals put too much trust in abstracts for filtering the literature, or expect that systematic reviews or guidelines will get rid of the unreliability and non-utility problem. One study found that nearly half of study reports implied benefit when there was no statistically significant difference in the primary endpoint between study arms. Flawed primary studies are compounded by flawed systematic reviews and lead to flawed clinical guidelines. Most healthcare professionals are not even minimally aware of these issues. Yeah. I don't think most people are aware of what the peer review process actually looks like. It depends on the journal that you're submitting to as well, which we can maybe get into in a little bit. But let's say I've written up my research project into a manuscript and I'm getting ready to submit it to a journal. I will submit it, and either it will immediately get desk-rejected by the editor, who maybe says this is just poor quality or it doesn't fit the aims of the journal, or, if it doesn't get desk-rejected, it will be sent out to, I'd say on average, between two and four peer reviewers. Now these peer reviewers ideally are other academics who have knowledge and expertise in the area; I typically publish in traumatic stress and post-traumatic stress disorder, so ideally expertise in that. But especially recently, I would say over the past two or three years, sometimes it's taking the journals months to find folks who are willing to peer review. And I think there's a couple of reasons for that, right? And listen, you know that peer reviewers get supported financially significantly, right? Yeah, they get supported with, like, begging emails. Like, nothing. They don't get anything. It's completely free labor. Isn't that part of the problem? When you talk about something systemic, it's a for-profit system where the people doing the work don't get paid for it. That's just messed up. It's part of the service component of the academic's job. But if they were getting paid for it, would there then be more bias? Well, someone's getting paid. Remember, it's usually the publishers, and the companies that stand to gain from a successful outcome. You have to pay for the journal. Yes. Right. So I don't know, and this is something I should probably look into: what are these companies that run these journals? I mean, that's a for-profit business. So whenever there is money involved, there's going to be some form of bias. So the academics, the ones doing the research themselves, the ones who are reviewing the studies, that's free work. Yes. Yeah, but those publications aren't selling advertising. So what's the additional source of revenue for a journal? Maybe, actually, some journals do run advertising, but a journal is usually like a printed magazine, which is dated; everybody searches online. There has to be some revenue model for all the work that goes into the publication of a journal. Institutions pay for them, okay, individuals pay for them. Like, if I wanted to get some journals, the access is only through a paywall. That's it. That's the other thing that's a problem. And the only reason why doctors are reading abstracts is because a lot of things are behind a paywall, unless their medical facility or whatever has these journals lying around for them to really read everything.
But most people are doing a quick Google search. They're relying on guidelines more than ever. We've talked about this on the podcast previously, especially with our primary care centers: they are relying on guidelines published by major medical organizations. And I've spoken about this with the American Academy of Pediatrics. I read their guidelines and I reviewed their research for the recommendation of fluoxetine, Prozac, for children and adolescents. And it was such a limited review of the totality of the science that I see it as criminal, absolutely criminal behavior. Because I don't see, in any way based on science, why we should be recommending SSRIs for children and adolescents at all. Were there like two studies that they mentioned? It was interesting. They took a review study from around 2000, 2001, and they used a couple of studies there to promote the drug as safe and effective without in any way recognizing the limitations of that research. And then, we've talked about this, the TADS study had two publications. It had an original publication, and then a second publication where another group was able to get the data, and they spoke to what was missing from the first paper. So basically, think of it as some form of negligence in the first paper, where they left out important data and the conclusions didn't match the data. And then the second paper came out a couple of years later, a re-analysis, and said, no, we can't come to that conclusion. And so we have different conclusions based on the same data set. The American Academy of Pediatrics only included the first one. And this is why replication is so important in science. It's a part of science, but again, these top-tier journals seem to prefer the new trendy statistic or the new trendy finding and... I completely blanked on what I was gonna say there. It'll come back. We're a clickbait culture. So new trendy research or a new idea is going to potentially get more views or end up on a national news network as an important new finding. Yeah. Although it's not valid. And this is what I was gonna say: p-values were never intended to be this one-and-done thing. It was never intended that you run, let's say, a t-test or an ANOVA and say, oh, we've got p less than 0.05, this means we've proved this to be true. We can never prove anything to be true with statistics. It's all probability we're talking about here. We never actually know the truth. I think that makes people really uncomfortable. A p less than 0.05 means that we're pointing at something. So let's continue to do research, to replicate, to see if we continue to point at that thing. And then there's the issue of the diversity of who you're studying and whether the results could be generalizable to other groups. So the question always is, who is more likely to be part of a study in the first place? And then, are we doing our research on a group of people that doesn't represent the greater population? You know, things like that. Often you see findings generalized in ways the research didn't support. I think especially in the mental health field, with randomized controlled trials, that is one of the biggest flaws and limitations: again, with these experiments, we want high control.
And so high control means we are going to select only participants who, let's say, meet DSM criteria for major depression and nothing else. I mean, Roger, you know, that is like the needle in the haystack in actual practice. There are so many comorbidities, because these categories are bullshit anyway. And so, right, how does this generalize to what you actually see in practice? Which was the problem in that TADS study: the researchers who were in charge of the cognitive behavioral therapy arm of that study would later say, this wasn't cognitive behavioral therapy. We had kids in there who had conduct disorder problems, ADHD. It was really challenging. They had all these comorbidities. Some were depressed, some acted out, some were internalizers, others were externalizers. It wasn't one uniform group of kids; it was just a mishmash of kids who were having problems. And as you know, CBT is kind of an evidence-based treatment umbrella, where there are all these treatments for specific problems. So if someone comes in to me and says, you know, I'm just experiencing these symptoms of depression, we might turn to that science. If someone's just saying, I have an eating disorder, we might turn to that science. But most people are a combination of a lot of things. And you can't just take a standardized treatment in a manual, apply it to all people, and expect that to have much of a result. Any result that occurs from that is probably just the interaction effect between doctor and patient, or the fact that they are in something that's designed to help them. I thought this was very important. So just throwing it out there: what do you think the percentage of kids and teens is that can be labeled with depression at any given time? That depression that we've made up, that we've modernly constructed? What do I think? 98%. Well, because you can be... I could go into a physician's office... Wait, wait, are you talking... Yeah, I think he misunderstood. Let's go through the question again, because I think I could get labeled as depressed at any time during my teenage years. I know what you're referring to, and I'm gonna get to that later. Okay, thank you. But if you opened up a textbook in a medical school, and we know how those textbooks are influenced, where they identified depression to be an illness, what would they give as the prevalence rate for children and teenagers at any given time for that illness? Dr. Hannan, what would you say that was, percentage-wise? 14%. That's high, right? That's really high. In fact, it's 3.2%. Okay, so according to the Centers for Disease Control, traditionally about 3.2% of children have been labeled with depression. Assuming that is accurate, that means 32 out of every 1,000 children and adolescents would be experiencing that problem. And this is considered a low base rate occurrence, okay? Now, the American Academy of Pediatrics wants to screen everyone for depression. Okay, everyone. So take a reasonably strong screening tool, and this is a strong screening tool, one that would accurately identify about 80% of a clinical population, roughly. Let me tell you what that means. Of the 32 kids out of 1,000 who have depression, that screening tool will accurately identify 26 of them. It will fail to identify six of them. This is called a false negative.
The problem with screening tools is false positives. So of the remaining 968 kids who don't have depression, the screening tool, a reasonably strong one, will identify 194 of them as having depression when they do not. Now, imagine if we followed the American Academy of Pediatrics guidelines and just screened everyone. Extrapolate that to the entire population of children, and you will falsely identify millions of kids with depression, a mental illness, an illness that they do not have. Now, if 3.2% of children at any one time might really meet that criteria for depression, what is happening recently? In 2019, that number ballooned to one in five: 20% of adolescents aged 12 to 17 are now identifying as depressed. This is what we would call a problem of diagnostic inflation. And one of the reasons for diagnostic inflation is how we are representing the condition: through the categories themselves, and then through screening tools that are developed in ways that inflate the number in the population. In our local area, what is the screening tool for depression? Lehigh Valley Health Network, which is our large local health network, is mandating that their pediatricians screen for depression. What are they using? The PHQ-9. And the PHQ-9 for adolescents, who was that developed by? Pfizer. Pfizer pharmaceuticals. So, talking about conflicts of interest. And I don't know how strong the PHQ-9 is, but it's not 80% accurate. So I mean, common sense would say: you put a screening tool out there, and all of a sudden your percentage of depressed people jumps up so significantly that you take a step back and say, all right, hold on a minute, there's something wrong with the screening tool. Not if you want to increase the sales of your drug. But if I'm not a pharmaceutical sales rep, and I'm a doctor, and all of a sudden my patient population is a lot more depressed than it was prior to administering this PHQ-9, wouldn't you stop, pause, and say, hold up a second? Nope, you blame it on COVID, you blame it on... Society, yeah. You don't look at the underlying core problem with the diagnostic category itself. And all the PHQ-9 is, is the nine symptoms according to the DSM. That's literally all it is. It's just the checklist according to the DSM. Which takes me to problem number three: most healthcare professionals lack the skills to evaluate the reliability and usefulness of evidence. And this is the problem with the American expert culture that exists. We are assuming that the physician, the doctor, the therapist, the psychologist we go to, number one, understands the research literature in its totality; number two, is aware of any problems that might exist; and number three, has the skills to critically evaluate the reliability and usefulness of evidence. So here are some interesting findings I should probably talk about. According to this article, researchers found a lack of the basic skills required for determining a study's reliability and applicability. For example, in a pre-test administered to a sampling of more than 500 physicians, clinical pharmacists, and other healthcare professionals attending an evidence-based medicine training program, 70% failed a simple three-question critical appraisal test.
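To make the screening arithmetic above concrete, here is the same calculation as a short Python sketch. The 3.2% base rate is the CDC figure quoted a moment ago; treating the "reasonably strong" tool as 80% sensitive and 80% specific is an assumption, but it is consistent with the 26, 6, and 194 worked out in the conversation.

```python
# Base-rate arithmetic for universal depression screening of 1,000 kids.
population = 1000
base_rate = 0.032     # CDC prevalence figure cited above
sensitivity = 0.80    # share of truly depressed kids flagged (assumed)
specificity = 0.80    # share of non-depressed kids correctly cleared (assumed)

depressed = population * base_rate                # 32 kids
true_pos = depressed * sensitivity                # ~26 correctly flagged
false_neg = depressed - true_pos                  # ~6 missed
not_depressed = population - depressed            # 968 kids
false_pos = not_depressed * (1 - specificity)     # ~194 wrongly flagged

ppv = true_pos / (true_pos + false_pos)           # chance a positive screen is real
print(f"true positives: {true_pos:.0f}, false negatives: {false_neg:.0f}")
print(f"false positives: {false_pos:.0f}")
print(f"positive predictive value: {ppv:.0%}")
```

The punchline is the positive predictive value: at this base rate it comes out around 12%, so only about one in eight positive screens is a true case, which is the diagnostic inflation machine described above.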
As for that critical appraisal pre-test: the three questions were designed to determine whether attendees could recognize the absence of a control group, understand the issue of overestimating benefit when provided with relative risk reduction information without absolute difference information, and determine whether an intention-to-treat analysis was performed. Surprisingly, among those who reported feeling confident in their ability to evaluate the medical literature, 72% failed the test, even with generous criteria for correct answers. According to the study's authors, "We have repeated the same pre-test with various groups each year with similar results." A well-designed and conducted trial reported similar findings: clinicians without formal evidence-based medicine training score poorly on a 15-question test. And we just continue to see this problem grow, right? Which is why they rely upon abstracts. They rely, I think, upon guidelines now. I don't think they have time to study research. Tell me how someone on the front lines of healthcare, with the amount of hours they're working, can dedicate time to reviewing the research literature. So where are they getting their findings from? And not just reviewing the research literature, but critically analyzing it themselves and questioning, do I have the knowledge to understand this literature? Even with just the statistical tests that are used, there's a lot of faith that the researcher is running the analyses correctly. The peer reviewers, from my experience, sometimes have questions about the analyses, but oftentimes the results section is pretty much glossed over, and they're focusing on the intro or the discussion. I'm not saying they shouldn't focus on those things, but if I made a flaw in my analysis and did not catch it, it might never get caught, and that could be published. Yeah, I've looked at a lot of those studies and you're right, I do that. I'm not an expert by any means. I'll read the intro, the abstract, some of the findings, and then you get down to the section where all those analyses are performed, and I'm looking at symbols I don't even recognize. And I'm like, hmm, how can I critically evaluate this more thoroughly? And I can't. I would want somebody sitting next to me saying, all right, explain what this means. And there's no way to really do that. I'm not saying to have no faith in the researcher, but I think a healthy dose of skepticism, just questioning whether this is valid, is needed, especially now. My concern, when I was doing a lot of the reviews of antidepressants in children and adults, is how many times the authors of a paper took liberties in their conclusions. I would read it and sometimes think, I have to be missing something. I just read it five times in a row. And then I would examine the data. I would read how they constructed the study, what the data said, and then I would see their conclusions. How in the world can you come to that strong a conclusion based on that evidence? And my opinion is there should be a really high threshold, a really high standard: when you're talking about a medical intervention that could potentially create harm in a wide group of people, you have to be very careful about what you say. You can't use words that are so definitive and so confident when the data doesn't suggest that, and when the totality of the research tells you to be very skeptical of that one result.
And that goes back to what you said before: even if you created some statistical difference, I don't think you then go and make these widespread conclusions that we now know this drug is safe and effective compared to... Yep, that's never what it was intended for. And this is exactly what this Ioannidis paper communicates: we're getting false conclusions. The confidence of a lot of medical professionals: they are overestimating benefits and underestimating risks, really based on ignorance, ignorance of how the system works. And unfortunately, I think there's an awakening occurring, because we can look back at the COVID debacle. They made mistakes in messaging, they being the major players who could benefit from mass vaccination, and they promoted theory as fact. The language they used turned a lot of people off when they would so definitively state something that was clearly experimental. And then you get more information from journalists who would report, and this is accurate, that the pharmaceutical companies were pushing for their data not to be revealed for 70 years, and that they wouldn't be liable for any lawsuits, wouldn't be liable for any harms created by the vaccine. So when that happens, when you already have an experimental vaccine, mRNA, and we use the word vaccine very loosely because it didn't have the qualities of a vaccination, but it is a new technology; when it's experimentally provided to large groups of people, and then they tell you it's safe and effective when it hasn't been studied for the long term and we can't come to those conclusions; and then you see governments or even media trying to chastise those who won't take it: I mean, that in itself created a widespread awakening for a lot of people. So can I jump in here, because this ties into something. Even prior to that, over a seven-year period, 2006 to 2013, a team of reviewers from HealthNewsReview.org, many of whom were physicians, evaluated the reporting by U.S. news organizations on new medical treatments, tests, products, and procedures. After reviewing 1,899 stories (43% newspaper articles, 30% wire or news services, 15% online pieces, and 12% network television stories), the reviewers graded most stories unsatisfactory on five of 10 review criteria: costs, benefits, harms, quality of evidence, and comparison of the new approach with alternatives. Drugs, medical devices, and other interventions were usually portrayed positively; potential harms were minimized, and costs were ignored. Yeah, it's time for everyone to wake up, and some institutions are going to have to crumble in order for us to rebuild this in a way that is humane, that is honest, and that allows people to be ethical, medical professionals to be ethical. I am just shocked at how many medical professionals, really smart people, and it's really hard to get into medical school and graduate from medical school, just repeat messaging as if it were science. And that's always a red flag for me. But, I mean, we lost complete confidence in the allopathic medical environment when we would take our kids in and they were pushing the COVID vaccine on my teenage son, my teenage son who had already had COVID, where there's a significant risk of myocarditis in teenage boys in particular, on an experimental vaccine. And they told us that the reason he should take it was because they're told to recommend it.
Because the CDC said so. Or they say, would you like a vaccine? And that's the question: would you like one? What, like this is candy now? I'll take the green lollipop. Yeah. And luckily, you know, we're pretty educated on these things. And you just ask the questions: why would you recommend it when he's already had COVID? Well, the CDC is recommending it. But it's an experimental vaccine with unproven efficacy, and there really seems to be a significant risk of adverse events for someone his age. Why would he take it? And they just look at you with a blank stare. And this is a medical professional. And you realize then that they are trained to follow guidelines right now. And it's providing healthcare on an assembly line. Yeah, I've been amazed, after working here at the practice for a few years, and I know, Roger, you've had the same experience, at how many clients have come in and said, yeah, like a year ago, I met with my family doctor and I filled out a questionnaire, like the PHQ-9, and within 10 minutes I was prescribed an antidepressant. And, you know, for a lot of the clients I've spoken to, this is their family doctor, so they have a lot of faith that if their doctor is recommending a treatment, it must be beneficial, it must be effective. How is that not working outside of your bounds, number one, to be able to diagnose? They're not diagnosing depression. The doctor isn't. Essentially, the client is giving themselves a diagnosis by filling out a self-report measure and answering it in that way. The client's saying, I have depression. And the physician is saying, well, you must, because you have a 20 out of 40 on this measure, so here, let's take some Zoloft. Like, that is just insane to me. Yeah, we had to meet with our intake coordinators, those who are on the front lines taking the phone calls, because they're new and we have to train them. They're getting the depth of the client's problem as they're calling in, so we can really match them with an appropriate clinician and know upfront if we're the right group for them. And you just see, like, three quarters of the people call up and say, I have ADHD, major depressive disorder, obsessive compulsive disorder, and anxiety. And so they would just write that down, not knowing that it gives us zero information about how to help that person, absolutely nothing. That's just getting attached to the label they've been provided. And where are they getting these labels from? They're getting them from doctors who believe it's a discrete and identifiable medical illness. Come on, these are constructs. The DSM was never meant for this. Even the original authors of the DSM, and why they did it: it was in order to align with categorizing these problems for healthcare purposes, for reimbursement purposes, to be part of the system, not to identify them as real medical illnesses with biomarkers and a number of other things. They're just constructs and a way to communicate. And if you thought about it as only a shorthand, a way to communicate a general kind of problem, then it would be fine. But it hasn't turned into that. It's morphed into this idea where people believe they have a mental health diagnosis with the same legitimacy as if they had type one diabetes. Well, and it's morphed into identity, unfortunately, right? People are fed this lie that these are observable diseases.
And look, we are not saying that depression isn't a thing. It's something, right? Absolutely, people are suffering, and there is so much pain. We are not trying to deny that at all. But what I've seen happen so frequently now is folks will get this diagnosis, let's just say depression, and then that becomes part of their identity and their whole worldview. I'm a depressed person, therefore I can't do this. I'm a depressed person, and therefore I can do this. This is a biochemical imbalance, and so therefore I'm going to be like this forever unless I take medications. It just becomes their whole world. It's like you get stuck in this box, and it's almost impossible to even question the validity of it. So bring this back to the normal person, maybe just a patient or a client or a family: knowing all these things, and that all these layers of problems exist, what the hell do we do? How do we navigate this entire system knowing that it has these flaws at multiple levels? It starts first with education, and, you know, the mainstream media has absolutely failed us because they have been so influenced by finances. So, I mean, you can't really publish stories that are going to in any way harm the business of your advertisers. And when you are that influenced by the pharmaceutical industry, the food industry, or other government entities with aligned interests, you know, then you're getting filtered information. It doesn't matter whether it's the left or the right; it's the same exact thing. And so that's why I love the idea of podcasts as more of a freedom approach to being able to have conversations. Because ultimately what you're gonna have to do is determine who you trust. And the person you're gonna trust, when they're a medical provider, is somebody who has a pretty thorough grasp of risks and benefits and then has an ethical commitment to respecting your right to choose based on the best available evidence. And we all know that informed consent is both a legal and an ethical imperative. And so this was another major point in this article: basically, how we're not able to make informed healthcare decisions. When it comes to discussions regarding a treatment intervention, physicians frequently described the nature of the decision to be made, 83% of the time, so pretty high, right? Like, do this or not do this. But they infrequently elicited their patients' preferences, only 19% of the time. They discussed alternatives only 14% of the time. They discussed risks and benefits of the procedure only 9% of the time. Oh, that's really low. Which is the law. And they discussed uncertainties only 5% of the time, when we have a lot of uncertainties. It's all uncertain. And rarely, 2% of the time, did they assess the patient's understanding of the decision. So we're not getting informed consent. But you also cannot get informed consent if the physicians themselves are not informed. And that goes back to what your concern is: where are you gonna get information from? So I do think it starts with this, and hopefully, if we're able to promote this, the physicians out there and the mental health professionals who really do care about ethical practice are going to look into it. And they're going to have different conversations with their patients. So I thought: what if I created a book that became widely popular? And so, Dr.
Hannan, you tell me, the book is going to be under this premise: your depression is a gift of gratitude. And there's an entire book that changes the way depression is viewed. Instead of an illness that is outside of your control, that you somehow just obtained because of poor genetics and bad circumstances, it is instead viewed as a gift provided to you, through evolution or through spiritual means or whatever it may be, where each experience is there to serve you. And if you feel poorly, whether that's physically, emotionally, or a combination of both, it is designed to drive change: change that would enhance learning, improve your quality of life, and improve your entire experience as a human being. And if there were therapies developed for it, or programs developed for it, or certain protocols to really utilize the gift that depression has provided you, it changes everything, because how you think about what you experience influences the outcome. So if you just identify depression as an illness, well, then it's going to be treated as a medical illness and people are going to look at it through that lens. And that's what drives drugs, and that's what drives one's coping with it. Depression isn't the same as stage four cancer. Depression traditionally is episodic, and episodic means that there is a beginning and an end. Even without formal intervention, it was going to end at some point, because you adapted or life circumstances changed. And if we took away these categories that people try to fit themselves into, that doctors try to fit their clients into, and instead we saw the individual exactly as they are, someone who is unique in that moment; no one else could experience them the same way ever again, because you'll never meet someone who is the same as that person in their experience. And if professionals saw it as an opportunity to learn and grow from the gift that's provided to them in order to help other people, it changes everything. And so that's the language and the messaging and how things are created in the sick care system. It's a sick care system, with poor science driving this idea of evidence-based practice that we know is doing more harm than good. And I do believe institutions have to fall in order to be recreated. Yeah, what you just described is so much more empowering than the current model that we have. And as you were speaking, I was just thinking how, like, I can understand how the medical model came to be. So, you know, 200 years ago, if someone was experiencing what we now call symptoms of maybe mania or psychosis, maybe even depression, the explanation at the time was, maybe that person is possessed by a demon or by the devil, or they're a witch, you know, we need to burn her at the stake. So I can appreciate moving from that to a medical model, right? Like, okay, maybe there's not some demonic possession going on; maybe there's something biological. But I think, like you're saying, we're at this paradigm shift now where we're recognizing this medical model is no longer serving us and it is not empowering people. Instead of labeling these things as diseases, what if we described them as experiences? Experiences that, yes, carry a lot of pain, but that people can learn from, can make meaning out of, so that they can grow even more. Additionally, I think it's not always fair to describe, and we're using depression today, but we could probably use a lot of these other diagnoses.
It's not always fair to say it's purely psychological. So for example, when we see somebody who's maybe experiencing metabolic illness, right? They're obese, they don't exercise, they eat poor foods, they're nutrient deficient. They're going to present with symptoms on that checklist that will get categorized as depression. It could be fatigue, it could be sleep problems, it could be low mood, but the problem is not psychological. The problem is with lifestyle, and that lifestyle will influence those symptoms on the symptom checklist. So if you look at a psychiatric diagnosis as some umbrella term with unknown origin, well, then you can actually take the steps to identify what is actually happening or contributing to why that person is presenting that way. But if you present it as an illness in itself, then it's like, oh, well, you just have depression; here's the treatment for it, it's this drug and cognitive behavioral therapy, which is, you know, widely misapplied to everyone, or some other form of talk therapy. But if you open it up, if you are more flexible in your thinking and you're open-minded and you just shift out of that mindset, which we've been conditioned into, then we see it differently. We see it as an opportunity. It doesn't matter if it's the relationships you have, or the food that you eat, or the lifestyle you live, or how you sleep, or what you watch, or the family you grew up in. All of it is an opportunity for change. And it's just your body telling you something. It's your body giving you very important messages, and we have to listen to that. Yeah, I so appreciate you saying that, Roger. As you were speaking earlier, I was thinking, when you made the comment about who do we trust, I would love for people in general to just learn to trust themselves more, and to be so in tune with their body and their mindset that they can make decisions for themselves. Because at the end of the day, sure, as therapists we can help coach people and cheerlead them on, but it's up to the client to change their emotions and their behaviors. We cannot do that for them. And I've worked with so many clients who have said, oh yeah, I'm on this antidepressant, I have all of these side effects. They know it's not working, and yet, because their doctor told them to take it, they will continue to take it. They're completely ignoring their own experience, just because an expert, a medical professional, whatever you want to call it, wrote them a prescription. Yeah, well said. I mean, definitely well said. So to wrap this up: this is a great article, How to Survive the Medical Misinformation Mess, John Ioannidis and colleagues, 2017. Really, really important, and important for us to be able to communicate it today. I want to thank both of you for your contributions. I want to go back to what I opened with, the Substack. I am going to be a messenger of this. I don't profess to be an expert myself; I'm a clinician, but I do really enjoy reading and thinking about this from different perspectives. So if you sign up for our newsletter, drmcfillin.substack.com, we'll take some of this research and I'll disseminate it for you to make an informed decision. It's another perspective, questions to bring to your doctors. We don't get informed consent. Well, let's try to be informed, and let's spread this information.
It's the only way institutions are going to crumble. Improvement is only going to be made if we highlight what the harms are. And when you're talking about 20 to 50% of healthcare services delivered in the United States being inappropriate, wasting resources, or even harming patients, the things that stand out in my mind immediately are things like the overprescribing of antibiotics and antidepressants and other pills when lifestyle interventions are what's needed. We need to start promoting health, not drugging symptoms.

Creators and Guests

Dr. Roger McFillin
Host
Clinical Psychologist/Executive Director @cibhdr | Coach & Consultant @ McFillin Coaching & Consultation | Radically Genuine Podcast⭐️top 5% in global downloads

Kel Wetherhold
Host
Teacher | PAGE Educator of the Year | CIBH Education Consultant | PBSDigitalInnovator | KTI2016 | Apple Distinguished Educator 2017 | Radically Genuine Podcast

Sean McFillin
Host
Radically Genuine Podcast / Advertising Executive / Marketing Manager / etc.

Susan Hannan, Ph.D.
Guest
Psychologist / Director of Clinical Research