PsyDactic

What is a placebo?

T. Ryan O'Leary Episode 30


What is a placebo?  You may already be thinking something like: A placebo is an imitation, fake, sham, decoy, or trick treatment that we give to people in studies to see if the treatment under investigation is any better or worse.  Placebos are supposed to be both benign and inert, meaning they should neither harm nor help a patient beyond the patient feeling or reporting that they are better or worse after they received some kind of treatment.  It seems strange that there is something that can take innumerable forms and still seems to work at least a little bit on so many different things. Placebos are like an all-powerful potion or magic spell.  For some treatments, even active treatments, placebo effects account for the vast majority of the effect size and it is not just an illusion.


Please leave feedback at https://www.psydactic.com.

References and readings (when available) are posted at the end of each episode transcript, located at psydactic.buzzsprout.com. All opinions expressed in this podcast are exclusively those of the person speaking and should not be confused with the opinions of anyone else. We reserve the right to be wrong. Nothing in this podcast should be treated as individual medical advice.

Welcome to PsyDactic - Residency Edition.  Today is March 13, 2023, and I am Dr. O’Leary, a third year psychiatry resident in the National Capital Region.  This is a podcast for psychiatry residents and others interested in the same kinds of things that we are.  There are a few things that this podcast is not.  It is not medical advice.  It is not produced by experts in recording, editing, and mixing sound.  I do all of that in my very amateurish way.  It is also not the opinion of the Federal Government, the Department of Defense, the Defense Health Agency, the American Psychiatric Association, or the League of Nations.  It is my own opinion.  Just me.  Only, lonely me.  So let's get started.


Today I want to explore a question that may seem to have an obvious answer on the surface, but if we take some time to struggle with it a little bit more, we might learn something about what it is we are doing as psychiatrists.  That question is: What is a placebo?  You may already be thinking something like: A placebo is an imitation, fake, sham, decoy, or trick treatment that we give to people in studies to see if the treatment under investigation is any better or worse.  Placebos are supposed to be both benign and inert, meaning they should neither harm nor help a patient beyond the patient feeling or reporting that they are better or worse after they received some kind of treatment.


This seems straightforward, especially if, for example, you have two identical capsules, one of which contains a little sugar and the other a compound that is supposed to be pharmaceutically active in some way, like an antipsychotic.  But a placebo doesn’t have to be a sham pill.  It could be a device like a TMS machine that makes noise but produces no magnetic field, or acupuncture needles that retract and stick to the skin instead of puncturing it.  It could be an action such as eye movements or hovering hands over a person’s body.  It seems strange that something can take innumerable forms and still seem to work at least a little bit on so many different things.  Placebos are like an all-powerful potion or magic spell.  For some treatments, even active treatments, placebo effects account for the vast majority of the effect size, and it is not just an illusion.  There are a number of reasons a person may report feeling better, or may actually improve, after taking a placebo.


One reason is a statistical phenomenon called regression to the mean.  It basically means that if someone is reporting feeling really bad, then whether you give them something or not, they are more likely to randomly report feeling better next time than feeling worse or the same, simply because there is more statistical space near the mean.  Think of it like this: you are in a long hallway with a bunch of doors.  You notice that there are a lot more doors near the center of the hallway than there are near the edges.  That is where most people are: near the center.  That is also where you normally live, but for some reason you are now in a room near the edge of the hallway.  Not only are you kinda lonely now, because fewer people live there, but if you were to wake up randomly in another room, that room would likely be closer to the middle of the hallway, where you used to live and where most people currently live, than the room you are in now.  That is a statistical phenomenon that happens when you measure a lot of people who are suffering.  In aggregate, they regress to the mean.  There is less space to get worse than there is to get better, so by random chance most will get better and fewer will get worse.  The average suffering will get closer to the population average.  If you conduct a study on people near the edges without proper controls, then it will look like the treatment worked by random chance alone.
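To make that concrete, here is a minimal simulation sketch (my own illustration in Python, not something from the episode, and all of the numbers are made up).  It enrolls only the people with the worst-looking baseline scores, gives them nothing at all, and remeasures them: their average score still "improves."

import random

# Regression to the mean with no treatment at all: each person has a stable
# "true" symptom level plus random day-to-day noise (higher score = worse).
random.seed(0)
N = 10_000
true_level = [random.gauss(50, 10) for _ in range(N)]       # stable trait
baseline = [t + random.gauss(0, 10) for t in true_level]    # noisy first measurement

# "Enroll" only the people who look sickest at baseline (worst 10% of scores).
cutoff = sorted(baseline)[int(0.9 * N)]
enrolled = [i for i in range(N) if baseline[i] >= cutoff]

# Second measurement: same true level, fresh noise, and no treatment given.
followup = {i: true_level[i] + random.gauss(0, 10) for i in enrolled}

mean_baseline = sum(baseline[i] for i in enrolled) / len(enrolled)
mean_followup = sum(followup[i] for i in enrolled) / len(enrolled)
print(f"Enrolled baseline mean:  {mean_baseline:.1f}")
print(f"Enrolled follow-up mean: {mean_followup:.1f}  (closer to the mean by chance alone)")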


Instead of talking about population means, we could focus on an individual’s own average.  If an individual is feeling bad, there is less chance that they will feel worse after a treatment than that they will feel better, so whatever you give them, they are more likely to get better than worse.  Think about someone who had a bad score when bowling.  Usually they bowl a 150, but this time they bowled a 110.  You give them a green M&M and tell them that it will help them perform better.  In the next game, they bowl a 145.  They regressed to their average.  The confidence boost they got may also have helped, but it is not necessary.  Mere chance is enough.  Now think of the patient who comes into your office reporting the worst mood they can ever remember having.  What is the most likely thing they will report after you start them on Prozac?


Interestingly, a nocebo can work the same way, only in the opposite direction.  A nocebo is like a placebo in that it is not expected to have any real specific effects, but it may make someone feel worse because they expect it to.  Imagine the same bowler just bowled a 210 and you slip them a red M&M, stating that it is going to take away their magnificent bowling powers.  This person’s average is 150, so regardless of what you give them, they are more likely to bowl something close to 150 next time than not.  However, they are cursing you as the cause of their sudden worsening, because it was you who slipped them the red M&M.


But regression to the mean is only one explanation, and it relies on probability alone.  There is also the natural history of most disorders.  When someone’s body or brain is malfunctioning, there are more than just random processes at work.  There are active processes within our bodies trying to restore something approaching our own normal, or homeostasis.  When we are far from our own averages or from population averages, there are both random reasons why our next measurements will be closer to the mean and active processes pulling us back toward it.  The mean is often a place humans are programmed to be.
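As a toy model (again my own illustration, with made-up numbers, not anything from the episode), you can picture each new measurement as random noise plus an active pull back toward a personal set point:

import random

# Symptoms drift randomly, but an "active" homeostatic term also pulls each
# new measurement back toward the person's own set point.
random.seed(1)
set_point = 50.0   # the person's own baseline (assumed)
pull = 0.3         # strength of the restorative pull (assumed)
level = 80.0       # start far from baseline, i.e., feeling very bad

for week in range(1, 9):
    noise = random.gauss(0, 5)                          # the merely statistical part
    level = level + pull * (set_point - level) + noise  # the active, homeostatic part
    print(f"week {week}: symptom level {level:.1f}")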


A person may get better due to random (merely statistical) and active (homeostatic) processes working at the same time.  That is one of the reasons we randomly assign people to treatment groups in studies.  The randomized controlled trial is supposed to let our statistics control for the unmeasured factors that could affect the effect size.  We want our groups to start at about the same distance from the mean, and we want other active factors distributed randomly across the groups so that, overall, they have the same combined effect in both groups.
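Here is a minimal sketch of that idea (my own example, not from the episode): with enough participants, a coin flip tends to balance both a measured factor, like baseline severity, and an unmeasured one, say social support, across the two arms.

import random
import statistics

random.seed(2)
n = 2_000
severity = [random.gauss(70, 10) for _ in range(n)]   # measured baseline severity
support = [random.random() for _ in range(n)]         # unmeasured factor (0 to 1)

# Flip a fair coin for each participant.
arm = [random.choice(("active", "placebo")) for _ in range(n)]

for group in ("active", "placebo"):
    idx = [i for i in range(n) if arm[i] == group]
    print(group,
          "mean severity:", round(statistics.mean(severity[i] for i in idx), 1),
          "mean support:", round(statistics.mean(support[i] for i in idx), 2))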


Included in this random assignment is the factor of expectation of benefit.  A patient or participant might report a benefit of a treatment because idiosyncratic individual or social factors are involved in their improvement.  There is something powerful about wanting to get better and expecting to.  Participants may report getting better because they want the provider to feel good about themselves: “You are doing good, doctor, don’t worry.”  Their tendency to please the provider might count for something.  There is also a more complex social phenomenon in which a study participant does not want other people who might benefit from the treatment to be affected by their own lack of response; to protect others from their own failure, they might feel pressure to report good news.  Conversely, there may be many social factors contributing to a nocebo effect, including dislike of the provider, suspicion of medicine or Big Pharma, or a feeling of loss of control.


Another factor could be how we perceive our own internal state.  When we are being cared for, we might report things like improvement in pain, improvement in mood and motivation, better sleep, and less guilt and worthlessness.  We might be coming from a place where we felt alone, but now we are in a place where people seem to care.  We may have hope where there seemed to be none before.  We may also now understate or underreport things that seemed more disturbing before, when our minds were in an uncared-for or isolated state and we were trying to make sense of our distress on our own.  When care is provided, anxiety can be reduced, concentration can improve, and obsessions and compulsions may be less distressing and easier to resist.  Delusions might be farther from the surface of our thoughts.  Urges to cut or pick may be less intense.


To say that a placebo effect is all in somebody’s head is both non-specific and often unhelpful, because it inspires us to stop trying to figure out why someone improved.  Let me take the example of something that might take advantage of some of the same internal processes that placebos work on: imaginal therapies.  In short, imaginal therapies ask a patient to imagine themselves in a certain situation and then report back to the therapist.  Some relaxation and mindfulness techniques utilize this.  Imaginal exposure may be used in exposure and response prevention: “I want you to imagine touching a door handle and walking through the door.  Now imagine that you don’t wash your hands, but instead you take off your shoes.  After you do this, you again do not wash your hands, but instead you take off your jacket.”  In trauma-based therapies, imaginal work can bring a patient back to a period of trauma and allow them to experience it again in an environment where they can work through their feelings and reach a very different outcome this time.


Is this merely taking advantage of the placebo response?  As therapists we might say something like, “We are processing trauma and encouraging extinction learning,” or “The patient is practicing distress tolerance and allowing their neural networks to learn to respond to a normally distressing situation in a new way.”  Is placebo all that different?  Are patients imagining improvement when taking that pill?  There is a reason that many therapies are so often tested against no treatment or a wait list, or in non-inferiority trials against other therapies.  The mechanism being tested may not be all that different from a sham treatment.  It might just be a more sophisticated version of the placebo response.


There are some interesting phenomena that I want to discuss briefly.  One is that some placebo responses can be attenuated by giving opioid antagonists like naltrexone or naloxone.  This is most prominently reported in pain studies, in which a response to a sham analgesic disappears when an opioid antagonist is given.  It suggests that endogenous opioids may be involved in some placebo responses.  There may also be a strong dopaminergic response that gives someone a feeling of well-being.  There are also studies showing that patients who normally receive an active treatment might have the same physiological response when given a placebo as they did when given the active treatment previously.  This suggests a kind of conditioned response in the body that develops during the course of active treatment and can be triggered by a fake treatment.


Other interesting phenomena are that the placebo response seems to increase depending on factors such as the color of the pill given (blue pills are better than white ones, and gold pills are the best), whether there is a medical device involved (a machine hooked up to your ear may give more pain relief than a fake pill), and whether there is surgery involved (chest pain can be reduced by pretending to ligate the internal mammary artery).  The more complicated the proposed treatment appears, the more difficult it seems to be to find an effect size for the active treatment that differs from the sham treatment.


Many meta-analyses have also reported an apparent increase in the placebo response over time, which makes it more difficult in contemporary studies to differentiate a placebo from an active treatment than it used to be.  Maybe the expectations of participants in more recent times are so high that they report more benefit simply because they feel they were better treated than participants 30 or 40 years ago felt.  Maybe we are also better at blinding patients than we used to be.  This increase in the placebo effect could have at least two consequences.  It could augment an active treatment, in effect increasing the total response to it.  It could also make it harder to find a difference between an active treatment and a placebo, requiring higher-powered studies that produce less impressive-looking results.  In that case, it may be more appealing to researchers to test non-inferiority against other existing treatments than to test against a placebo.
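To see why a rising placebo response demands more power, here is a rough back-of-the-envelope sketch (my own made-up rates, using the standard normal-approximation sample-size formula for comparing two response rates): as the placebo rate creeps toward a fixed active rate of 50 percent, the required sample size per arm balloons.

from statistics import NormalDist

z_alpha = NormalDist().inv_cdf(0.975)  # two-sided alpha = 0.05
z_beta = NormalDist().inv_cdf(0.80)    # 80% power

def n_per_arm(p_active: float, p_placebo: float) -> int:
    """Approximate sample size per arm to detect a difference in response rates."""
    variance = p_active * (1 - p_active) + p_placebo * (1 - p_placebo)
    delta = p_active - p_placebo
    return round((z_alpha + z_beta) ** 2 * variance / delta ** 2)

for p_placebo in (0.30, 0.35, 0.40, 0.45):
    print(f"placebo {p_placebo:.2f} vs active 0.50 -> about {n_per_arm(0.50, p_placebo)} per arm")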


It has also been reported that some people are placebo responders and some are not, which has prompted researchers to propose that studies include at least two phases.  The first phase attempts to identify who is a placebo responder and who is not.  In the second phase, placebo responders are excluded.  By excluding placebo responders, it is assumed that we might be able to better measure the actual effect of a treatment.  I’m not sure about this, in part because the placebo response is so complex.  For example, some of those identified as responders may have just regressed to the mean or improved because of the natural course of the disease.  They are then labeled placebo responders and excluded.


It is unclear how this is going to help identify the actual effects of the medication in the later phase, because there is no a priori reason why some people in that phase would not also improve due to regression or active internal processes.  Also, I don’t think it has been established that placebo non-responders do not have non-specific biases of their own in how they report.  Non-responders may be improving but not perceiving a difference (for example, because they are convinced they cannot get better) and therefore not reporting it.  Not reporting a response to a placebo may also mean that there was a placebo response but the condition itself worsened at the same time, so in the end it looks like there was no response at all.  And there are certainly factors that exacerbated some participants' symptoms that were not captured by the metrics the researchers were using.  This is all to say that I do not think it is established that someone who does not appear to respond to a placebo initially won’t respond later.  Likely, we are all placebo and nocebo responders in some or most situations, and we just don’t have the tools to tell what is what in any particular study.  That is why we randomize.
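Here is a small simulation of that worry (my own sketch; the design and numbers are assumptions, not from any actual trial): in a placebo run-in, people whose scores simply regressed to the mean get labeled "placebo responders" and excluded, even though their underlying symptom levels are no different from everyone else's, and the remaining "non-responders" still improve a little on placebo in the next phase.

import random

random.seed(3)
N = 5_000
true_level = [random.gauss(60, 8) for _ in range(N)]   # stable underlying severity

def measure(i):
    """One noisy symptom rating for person i (higher = worse)."""
    return true_level[i] + random.gauss(0, 8)

def avg(xs):
    return sum(xs) / len(xs)

baseline = [measure(i) for i in range(N)]
run_in = [measure(i) for i in range(N)]   # everyone gets placebo during the run-in

# Label anyone who improved by at least 10 points on placebo a "responder" and exclude them.
responders = [i for i in range(N) if baseline[i] - run_in[i] >= 10]
remaining = [i for i in range(N) if baseline[i] - run_in[i] < 10]

print("true severity of 'responders':  ", round(avg([true_level[i] for i in responders]), 1))
print("true severity of everyone else: ", round(avg([true_level[i] for i in remaining]), 1))

# Next phase: the remaining sample still gets only placebo, and still improves on average.
followup = {i: measure(i) for i in remaining}
change = avg([run_in[i] - followup[i] for i in remaining])
print("average further improvement among 'non-responders' on placebo:", round(change, 1))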


There are tons of non-specific responses to the treatments we give people that might make it seem like they are getting better or worse, and it is often very hard to tell those apart from the intended or unintended effects of our drugs and therapies.  It is also easier to recommend that a patient continue a treatment when their reported response is good than to try to convince a patient to power through a side effect.  In the end, if we aim to do things that maximize the placebo response of our patients in our clinic, we are likely to do them good, even if the same technique, applied in research, would cripple our ability to find a statistically significant difference in a randomized controlled trial.  Whether a response is primarily a placebo response or not is important for a researcher who is struggling to understand how their treatment actually works.  Clinicians primarily need to know whether or not their patient is benefiting from the treatment.  With this in mind, I encourage you to do whatever you can to maximize that placebo response.  It is not just an illusion.


Thank you for listening.  I am Doctor O and this has been an episode of PsyDactic - Residency Edition.

