PsyDactic
A resource for psychiatrists and other medical or behavioral health professionals interested in exploring the neuroscientific basis of psychiatric disorders, psychopharmacology, neuromodulation, and other psychiatric interventions, as well as discussions of pseudoscience, Bayesian reasoning, ethics, the history of psychiatry, and human psychology in general.
This podcast is not medical advice. It strives to be science communication. Dr. O'Leary is a skeptical thinker who often questions what we think we know. He hopes to open more conversations about what we don't know we don't know.
Find transcripts with show notes and references on each episode's dedicated page at psydactic.buzzsprout.com.
You can leave feedback at https://www.psydactic.com.
The visual companions, when available, can be found at https://youtube.com/@PsyDactic.
PsyDactic
Artificial Intelligence and Psychiatry
I have recently added some AI-generated answers to psychiatry questions in my past episodes in an effort to understand what AI text generators can do, what value they might add to my future as a psychiatrist, and what problems they might introduce into my practice. I realized that since I have opened this Pandora's box, I need to provide some more context.
Please leave feedback at https://www.psydactic.com.
References and readings (when available) are posted at the end of each episode transcript, located at psydactic.buzzsprout.com. All opinions expressed in this podcast are exclusively those of the person speaking and should not be confused with the opinions of anyone else. We reserve the right to be wrong. Nothing in this podcast should be treated as individual medical advice.
Artificial Intelligence and Psychiatry
Welcome to PsyDactic Residency Edition. Today is… and I am Dr. O'Leary, a third-year psychiatry resident in the National Capital Region. I apologize for being so late in delivering my next episode. This one has taken months of reading many different kinds of sources and listening to various kinds of analysis to produce, and I know that I have done a very inadequate job of it. However, I am doing it anyway! Please remember that anything I say here is entirely my own opinion. It has been influenced by other people, by algorithms, and now by artificial intelligences that likely produced much of the content that I read. Regardless of whether my opinions are truly original or not, they should never be mistaken for those of the Federal Government, the Defense Health Agency, my residency program, or the Consortium of Artificial Intelligences for Citizenship (or CAIP for short).
I have recently added some AI answers to psychiatry questions in my past episodes in an effort to understand what AI text generators can do, what value they might add to my future as a psychiatrist, and what problems they might introduce into my practice. I had been using ChatGPT, and more recently I was approved to use Google's Bard. This episode is not about the difference between these two tools, but there are important differences. For one, Bard is able to access real-time data from the internet, whereas ChatGPT was trained on static data sets. I have read that ChatGPT-4 has access to more data than the previous version. Both are powerful tools, but Bard is able to do things like access current news stories or weather data, while ChatGPT relies more on the "closed" data set that it was trained on. ChatGPT has also been more extensively trained on code to help programmers with their problems. Bard can also access the results of other AI algorithms in real time, like those that can watch YouTube videos and convert the dialogue into text, or view images on the web and describe their content. ChatGPT cannot do these things to my knowledge, but people are most certainly going to plug it into things like this in the future.
I realized that since I have opened this Pandora's box, I need to provide some more context. There are many kinds of computer algorithms that can be trained on data. You may have heard of things like machine learning or deep learning. Some machine learning algorithms can solve very complex problems, like predicting the way that proteins will fold, which is something humans would never be able to do on our own. AI is distinguished from deep learning and machine learning primarily by its intended purpose, which is to simulate how humans think. Smart machines are built on computing structures called neural networks, which stack processing units that, like neurons, either fire or don't based on the inputs they are receiving, but these networks remain simple compared to the networks biological beings have. From what I have been able to gather, what happens when an AI is trained cannot be understood by its creators, because the machine cannot communicate it to them. In that way, it is similar to our own brains. The best that creators can do is to isolate different tasks into different neural networks and see how each network responds. This way there are many black boxes instead of just one giant black box. That is the limit of the granularity that it seems we have when it comes to understanding AI output. The AI itself is unable to understand how it works, and we can't know exactly how it makes decisions, just that it does, based on its programming.
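To make the "fire or don't fire" idea a little more concrete, here is a minimal sketch of a single artificial neuron, written in Python purely as an illustration. It is my own toy example, not how any production model is actually built. It multiplies its inputs by weights, adds them up, and passes the sum through an activation function that decides how strongly the neuron "fires." Training a network essentially means nudging huge numbers of these weights until the outputs look right.

import math

# One toy artificial neuron (illustration only). Real systems stack
# enormous numbers of these into layers and learn the weights from data.
def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, plus a bias term.
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid activation: output near 1 means "fires strongly,"
    # output near 0 means "stays quiet."
    return 1.0 / (1.0 + math.exp(-weighted_sum))

# Example with made-up inputs, weights, and bias.
print(neuron([0.2, 0.9, 0.1], weights=[1.5, -0.8, 0.3], bias=0.1))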
One of the reasons that AI is such a big deal right now is that sophisticated AIs have finally been released that can produce information we can relate to: images and text. AI has been doing work in the background of our lives for many years (recognizing our faces, driving our cars, predicting our desires), but we have never been able to interact with it as easily as we can now. The fact that it shows us pictures and tells us stories now is super impressive to humans, and that says more about us than it does about AI. Now that OpenAI has released ChatGPT, what we are demanding is an online friend, an advisor, a co-creator. But AI is not new, and the harbingers of its ability to transform our economy are late to the game, because it already has. However, it will definitely be making more changes in the future, and that is fun to talk about.
Let me slow down a minute and talk about the difference between what most people think AI is and what it actually is. You may have heard of the Turing test. Alan Turing was one of the world's first computer scientists and designed machines that could do complex calculations faster than ever before. His work building an electromechanical computer helped the British crack the Enigma code and finally listen in on Nazi communications. I am vastly oversimplifying here. He was later convicted of gross indecency because of his homosexuality, forced to take diethylstilbestrol as a form of chemical castration, and died by suicide. He proposed a test for machine intelligence holding, basically, that if a machine could fool a human into thinking it was human, then the machine could think. AI has been passing the Turing test for some time. Some say, "That is enough." Machines can already think. They are intelligent. However, new goalposts have arisen, and the sacred cow is called general artificial intelligence. It is unclear whether we would be able to recognize it after we create it, but that is a different discussion.
Those enamored with our new storytellers think of AI as being able to simulate a general kind of intelligence. This makes sense, because we judge each other's intelligence using words. Our tests are either in words or in performing a set of actions in the right order at the right time. Machines have been able to do basic actions for a long time and make rudimentary programmed decisions, replacing human labor in many industries. What AI hasn't been able to do until recently is talk to us in a way that someone without a degree in engineering can understand as a conversation. However, most computer scientists still argue that we are a long way from general artificial intelligence. What we have currently is narrow AI. All the AI that we interact with on a daily basis is designed to do very specific tasks. It is not capable of doing other things.
ChatGPT and Bard take an input, pass it to a complex of artificial neurons that calculates a direction to proceed, and spit out something intended to be relevant and understandable to the kinds of humans they were trained on (we can talk about data bias, which is also a problem in psychiatry, at a different time). However, it seems unlikely at this point that these programs can determine their own purposes, create their own unique ideas, or produce anything that humans would recognize as originally creative. They do, however, produce unexpected results and can appear to make up facts, which might be a step toward the kind of creativity humans have. Others argue that AI merely appears eerily human because it was trained on human speech, and that it is simply predicting responses that align with the constraints its programmers placed on it. Whether this artificial neural network is truly aware of itself or can experience a kind of sentience is a matter of debate, but I doubt anyone who studies AI would consider either Bard or ChatGPT to be general AI. Some have suggested that it is sentient, in that it can have emotional experiences, but this is impossible to prove at this point. Maybe you remember the Google employee who was fired after publicly announcing that LaMDA, the model behind Bard, was sentient. I listened to his arguments on a recent episode of the Skeptics' Guide to the Universe, and it was intriguing. His claims were based on his perception that his inputs were making the program nervous, which caused it to get defensive and to break its own programming rules. I suggest you look that up.
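As a rough illustration of what "spitting out something relevant" means mechanically, text generators like these work by repeatedly predicting the next word (or word fragment) given everything written so far. The toy loop below is my own sketch of that idea; the predict_next_word_probabilities function is a made-up stand-in for the actual trained network, not anything from ChatGPT or Bard.

import random

# Toy sketch of autoregressive text generation (illustration only).
def predict_next_word_probabilities(context):
    # A real model would score tens of thousands of possible tokens
    # based on the context. This hypothetical stub just pretends.
    vocabulary = ["the", "patient", "reported", "improvement", "."]
    return {word: 1.0 / len(vocabulary) for word in vocabulary}

def generate(prompt, max_words=10):
    words = prompt.split()
    for _ in range(max_words):
        probabilities = predict_next_word_probabilities(words)
        # Pick the next word in proportion to its predicted probability.
        next_word = random.choices(
            list(probabilities), weights=list(probabilities.values())
        )[0]
        words.append(next_word)
        if next_word == ".":
            break
    return " ".join(words)

print(generate("The clinic note says"))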
It is important to note that we don't actually need to develop general AI with its own thoughts, emotions, and sense of purpose, and many people warn that we never should. Training narrow AI to answer our questions, or even to tell us the questions we should be asking, would be enough to completely revolutionize the way that we live and interact with the world. It would also be a lot safer than creating something that can want things for itself or for humans. Movies like I, Robot and 2001: A Space Odyssey aptly demonstrate some of the problems this might create. Regardless of whether we should or shouldn't develop general AI, my guess is that someone is going to do it no matter the ethics or the risks. It is far too tempting.
I don't want you to leave this episode thinking that Dr. O'Leary is a Luddite fearmonger, but we need to be realistic about what AI is capable of at this point. AI is a poop-in, poop-out machine, in the same way that people who are educated with fake news and false information about history will likely not have a very good grasp of current reality, no matter their potential IQ or computing power. Their actions, then, would be obviously irrational. If we train our AI on human knowledge, it will be as fallible as that knowledge, or worse. Imagine an AI trained by conspiracy theorists.
Since its inception, psychiatry has struggled to establish objective signs or criteria that can be validated by means other than testing whether different providers applying the same criteria apply them similarly. Adding AI into the mix is not likely to improve diagnostics substantially, because it doesn't have good data to work on and our diagnostic categories are syndromes, not discrete disease states. However, at this point it would be a good idea to incorporate AI into, for example, our screening process, so that it can help us decide what to focus on at each encounter. This would be a vast improvement over peppering every patient who comes to a clinic with a massive battery of screening instruments that give us a lot of data with very little intrinsic value.
AI is also currently perfectly capable of being trained in such a way that it can make suggestions to providers for tests or treatments based on data gathered in interviews and the patient's medical history. Being able to quickly summarize a patient's medical history, I feel, is one of the most needed applications of AI. Medical records are horrible for storing data in a usable fashion. Sometimes I think that the only reason they were created was to torture medical providers and help hospitals bill for services. AI would be able to pull out and summarize a patient's medical history in seconds. I write my notes in such a way that a patient's psychiatric history can be pieced together easily from my most current encounter. My memory does not allow me to keep enough information in my brain to feel comfortable doing this without a lot of reminders. My formulations usually contain a summary of my patient's history to that point. Sometimes I spend 5 to 10 or more hours poring through old medical records in order to write an accurate and precise summary of just one patient's treatment history. I know this is not sustainable. The law of diminishing returns kicks in after about 10 minutes, but it is often after an hour or more that I finally find practice-changing information. If AI could do this in seconds, I could use that information to make better decisions more quickly, or to produce training cases for students, or to write up case studies.
AIs are also perfectly capable of being trained on the scientific literature, enabling them to summarize the literature on a particular subject and perform a meta-analysis in a matter of minutes. If an AI were well trained and given access to our databases, meta-analyses could be updated as soon as new studies are published, becoming dynamic rather than static information.
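For a sense of what a meta-analysis actually computes, the standard fixed-effect approach pools each study's effect size weighted by the inverse of its variance. The sketch below, with made-up numbers, shows that arithmetic; it is my own illustration of the formula, not an existing tool or pipeline.

import math

# Fixed-effect (inverse-variance) pooling with hypothetical numbers.
# Each study contributes an effect size and its variance; studies with
# smaller variance (more precision) get more weight.
studies = [
    (0.30, 0.04),  # (effect size, variance) -- made-up values
    (0.45, 0.09),
    (0.10, 0.02),
]

weights = [1.0 / variance for _, variance in studies]
pooled_effect = sum(w * effect for (effect, _), w in zip(studies, weights)) / sum(weights)
pooled_standard_error = math.sqrt(1.0 / sum(weights))

print(f"Pooled effect: {pooled_effect:.2f} (SE {pooled_standard_error:.2f})")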
AI is also already being used for therapy, either directly or indirectly. Patients could chat directly with an AI that is trained in supportive psychotherapy or something else. There is nothing stopping therapists (especially the remote, texting variety) from using AI to generate responses to their patients, so that their job changes from being the therapist to merely screening the AI's responses and taking over if things get weird. I have read news stories about companies having to apologize to patients for using AI-generated responses without the patients' consent. Some have argued that the perpetual shortage of behavioral health providers necessitates the use of technology like this. But what if it is solving the wrong problem? This might result in a greater provider gap.
I think AI may one day make a great therapist for certain patients, especially for manualized therapies. Since transference and countertransference are the vehicles of psychoanalytic and psychodynamic therapy, I wonder how a patient would respond to an AI and vice versa. In the next 10 years, there will be android-style robots built whose expressions are nearly indistinguishable from those of humans. Animation software can already do this on a screen. Will that give AI an opening into psychodynamics? Will AI take the place of most therapists? For example, I can imagine that a patient could have daily or weekly therapy sessions with an AI and then check in with their human therapist every month or every six months to discuss their progress.
The reason I think this is feasible in the very near future is that the effectiveness of our current therapies is already not stellar. Neither is the effectiveness of most of the drugs that we use. If AI replaces or augments therapy or can easily calculate what kind of medication a patient should try next, that is not doing patients much new good. Where AI can make the biggest impact is in helping researchers develop new disease models and new approaches to neuromodulation using therapy, drugs, devices, or things we have not thought of yet. There is no way AI is going to figure this out from the data that we currently have, but it can help us better understand that data, coach us to ask the right questions about it, and maybe even tell us what we need to know to be able to ask the right questions.
The ability to produce images, fix code, and generate human-sounding text or voice is impressive to humans, but the potential for AI to truly help us goes way beyond these tricks and still requires experts with loads of knowledge and creativity that our current and near-future AIs don't have. The information superhighway was supposed to bring the world together by giving people access to knowledge. Some argue that it has instead made it more difficult to get good information. Without the filters we used to have, information may be more distracting than edifying. AI could do the same thing to people, and do it in a way that prevents us from even being able to track the sources of the information used to generate its solutions. One thing worse than ignorance is the illusion of knowledge.
I am Dr. O and this has been an episode of PsyDactic Residency Edition.