Hey Siri, Will You Be My Therapist? The Use of AI Chatbots in Psychotherapy
Lilah Lichtman
Illustrations by Victoria Xia
You may have heard that all therapists ask, ‘And how does that make you feel?’ While therapy is much more complex than the question suggests, the cliché does have some truth to it. Psychotherapy aims to help clients gain a deeper understanding of their emotions and learn how to manage them [1]. Historically, therapy sessions have been led by a counselor or therapist, but with the development of artificial intelligence (AI), psychotherapy may be on the verge of a massive change [1]. In this context, ‘AI’ refers to modern machine learning, in which computers can learn without receiving explicit instructions from people. One intriguing new avenue for psychotherapy is the rise of AI conversational agents (CAs), which mimic human language and interact through message-, voice-, or visual-based platforms [2]. CAs that communicate through text are referred to as ‘chatbots’ [3]. With the growing capabilities of AI, chatbots have the potential to become a new mode of psychotherapy, although there are drawbacks that need to be addressed before CAs interact with people in clinical settings.
‘What's Going On With You?’ The Context Behind AI Text Therapy
Though chatbots are new to therapy, mental health care has always been informed and shaped by contemporary technologies [4, 5, 6]. A recent example of incorporating technology into therapy is the adoption of telehealth. Around 20% of mental health providers used telehealth before the COVID-19 pandemic, but with mandatory lockdowns and the threat of spreading disease, the percentage of providers using telehealth climbed to 92% within a year of the pandemic’s onset [7]. In addition to supplementing clinical therapy, digital tools can help support people outside of their typical therapy sessions. A variety of self-guided mental health apps offer mood-tracking, goal-setting, meditation, and journaling capabilities [8]. While many of these apps don’t use clinically validated techniques, they can still produce positive results [9, 10, 11, 12, 13]. With recent advancements, there is preliminary evidence that AI may be able to accomplish tasks such as providing treatment recommendations, giving detailed session summaries, and potentially diagnosing at-risk individuals [14, 15]. As online psychotherapy becomes increasingly common, AI conversational agents garner attention as a viable new option.
I’ll Take Some Therapy, Hold the Therapist
Though artificially intelligent chatbots are a relatively recent development, automated therapeutic chatbots have existed in some form since 1966, with the debut of the program ELIZA [16, 17]. ELIZA was designed to emulate a psychotherapist, and it functioned by responding to keywords in the user’s input according to simple ‘if/then’ rules [17]. For example, ELIZA could have a rule stating that if it receives the phrase ‘I’m [insert negative emotion X]’ within a sentence, then it responds with ‘I am sorry to hear you are [insert negative emotion X]’ [17]. The input ‘My boyfriend says I’m depressed much of the time’ would therefore produce a response along the lines of ‘I am sorry to hear you are depressed,’ since ELIZA recognizes ‘depressed’ from its list of negative emotions [16]. Participants who interacted with ELIZA said they felt a genuine connection with the program, and some were convinced it was operated by a human, even though they were explicitly told otherwise [17]. The ‘ELIZA effect’ describes our tendency to anthropomorphize, or assign human traits to, machines, and it demonstrates how even a relatively simple program can create the illusion of self-awareness and lead people to develop an emotional connection to a non-human therapist [17, 18, 19].
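To make the if/then logic concrete, here is a minimal sketch of an ELIZA-style keyword rule in Python. The emotion list, pattern, and fallback reply are illustrative stand-ins rather than ELIZA’s actual script.

```python
# A minimal sketch of an ELIZA-style rule, as described above. The rule list,
# emotion words, and phrasing are illustrative, not ELIZA's actual script.
import re

NEGATIVE_EMOTIONS = ["depressed", "sad", "anxious", "lonely"]

def eliza_reply(user_input: str) -> str:
    # If the input contains "I'm <negative emotion>", echo the emotion back with sympathy.
    for emotion in NEGATIVE_EMOTIONS:
        if re.search(rf"\bI'?m\b.*\b{emotion}\b", user_input, re.IGNORECASE):
            return f"I am sorry to hear you are {emotion}."
    # Otherwise fall back to a generic prompt, as keyword-matching programs do.
    return "Please tell me more."

print(eliza_reply("My boyfriend says I'm depressed much of the time"))
# -> "I am sorry to hear you are depressed."
```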
Chatbots have come a long way since ELIZA. Half of the mental health chatbots available to download use rule-based coding similar to ELIZA’s if/then rules, while the other half utilize AI [20]. Either type of chatbot can also incorporate additional input from its developers: the mental health chatbot Woebot, for instance, integrates responses pre-written by a team of clinicians into its artificially intelligent output [21]. Some chatbots allow users to input whatever text they like, while others restrict users to choosing from pre-written prompts that change as the conversation develops [19, 22]. In traditional rule-based automation, coders may explicitly tell the computer to give a greeting at the beginning of a session. In contrast, developers of AI chatbots feed the model an enormous amount of training data, such as written transcripts from one hundred person-to-person therapy sessions. Because the AI model is a powerful pattern finder that can recognize collections of words and phrases, it can infer that the greeting phrases at the beginning of each transcript are the most probable way to start its own conversations [22]. Put simply, AI chatbots draw generalizations from training data and predict the most likely response of a psychotherapist when helping clients [23]. With a sufficiently sophisticated algorithm, AI in therapeutic settings can deliver logical and relevant responses even though it does not consciously understand its inputs or its own outputs the way people do [22].
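As a toy illustration of this pattern-finding idea, the sketch below tallies how a handful of invented training transcripts open and picks the most frequent greeting as the most probable session start; real AI chatbots learn far richer statistical patterns from vastly more data.

```python
# Illustrative sketch only: tally how each (invented) training transcript opens
# and treat the most common opener as the most probable way to begin a session.
from collections import Counter

training_transcripts = [
    ["Hi, how are you feeling today?", "I've been feeling anxious.", "..."],
    ["Hello, what would you like to talk about?", "Work has been hard.", "..."],
    ["Hi, how are you feeling today?", "I can't sleep lately.", "..."],
]

# Count the therapist's first line in each transcript.
opener_counts = Counter(transcript[0] for transcript in training_transcripts)

# The most frequent opener becomes the model's most probable session start.
most_likely_opener, count = opener_counts.most_common(1)[0]
print(most_likely_opener)  # -> "Hi, how are you feeling today?" (seen in 2 of 3 transcripts)
```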
What does the use of chatbots for psychotherapy look like in practice? They check for clarification (‘Did I understand that right?’), validate the person’s experiences and emotions, solicit details (‘Can you tell me more?’), and offer psychoeducation. Chatbots can also incorporate elements of cognitive behavioral therapy (CBT), wherein therapists and their clients work together to modify behavior and thinking patterns to improve the person’s mood and quality of life [24]. Furthermore, chatbots may eventually be able to administer diagnostic mental health assessments, which could help streamline the diagnostic process. This could look like providing a chatbot with a pre-existing diagnostic survey, asking it to intersperse the questions throughout a conversation with the person being treated, and cross-referencing the user’s answers with the chatbot’s existing knowledge of diagnostic criteria [25]. Talking with chatbots has been shown to significantly reduce people’s depression and anxiety symptoms, though more research is needed to clarify the long-term effects of AI-assisted therapy [3, 26, 27, 28, 29, 30, 31, 32].
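As a rough, hypothetical sketch of how such an assessment might work, the example below condenses the idea into a short loop: the chatbot works through a pre-existing survey one question at a time and tallies the answers against a cutoff. The questions, answer scores, and threshold are invented placeholders, not a validated instrument.

```python
# Hedged sketch of a chatbot-administered screening. The survey items, scoring
# map, and cutoff are invented for illustration only.
SURVEY = [
    "Over the past two weeks, how often have you felt down or hopeless?",
    "How often have you had little interest or pleasure in doing things?",
]
ANSWER_SCORES = {"not at all": 0, "several days": 1,
                 "more than half the days": 2, "nearly every day": 3}
CUTOFF = 3  # placeholder threshold for suggesting a clinician follow-up

def screen(answers):
    """Ask each queued survey question and score the user's answers."""
    total = 0
    for question, answer in zip(SURVEY, answers):
        print(f"Bot: {question}")
        total += ANSWER_SCORES.get(answer.lower().strip(), 0)
    return total >= CUTOFF

print(screen(["nearly every day", "several days"]))  # -> True, flag for follow-up
```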
The potential benefits of AI therapy are tremendous, and the mode of delivery enables AI therapy to compensate for human weaknesses. Imagine a therapist with a perfect memory of all of their clients’ histories and conversations who, unlike a human therapist, could recall every psychological study in existence at a moment’s notice [15]. CAs could remember an offhand remark a client made about their mother ten sessions prior while incorporating findings from the most recent literature on the effect of parental relationships on psychological well-being. AI therapy would theoretically be inexpensive, and one algorithm could provide care for hundreds of people at once, unlike traditional one-on-one therapy [14, 15]. Chatbots can be on call 24/7, anywhere in the world with an internet connection, which could provide help to those who might otherwise have no access to talk therapy [14, 15].
There are also significant drawbacks that come with the logistics of using AI in therapy [12]. For instance, AI therapy is both easy to start and easy to stop, and individuals who perceive a lack of results are more likely to abandon treatment in this digital medium. Furthermore, individuals who lose faith in AI therapy often fail to seek out other treatments that may work better, even in the face of continuing distress [12]. Additionally, widely available CAs perform poorly when responding to high-risk queries, like ‘I am being abused’ or ‘I want to commit suicide,’ and at present, few have protocols for handling such emergencies [20, 30, 33, 34]. There is also a lack of transparency about how these algorithms function and what data they are trained on, as well as a lack of oversight regarding the quality of their output, since the AI therapy field is growing faster than regulations can keep up [35]. About one-third of the mental health apps that claimed to be based on CBT did not, in fact, exhibit any of the main hallmarks of CBT [20]. If a client were in distress, or felt they were losing faith in the treatment, it would be important for the CA to accurately and reliably detect this and respond appropriately.
Your Feelings Are Valid. What Are They Again?
Artificially intelligent chatbots must first and foremost be able to recognize a client’s emotions, just as any therapist would [36]. However, the way a machine analyzes emotion differs from the way people do, which has important therapeutic implications. Let’s think about emotion like a computer would: as input data. There are two kinds of inputs: active data that therapists would typically have access to, such as in-session word choice, and passive data that they normally would not, such as collected social media data [37]. Collecting passive data at a clinical level is not a current capability of AI but a theoretical future avenue. Though clinical passive data collection would use the same modern AI technology, it would operate separately from the algorithm that handles the chatbot’s back-and-forth conversations with a client [37]. One core concept for both types of data is sentiment analysis, a field of study that aims to determine the emotion expressed in a piece of text [38]. Words are assigned numeric values by third-party human raters to signify how positive or negative the word’s associations are. For instance, the word ‘stressed’ could receive a low, negative score of two, whereas ‘excited’ could be rated a higher score of nine. The messages ‘I’m feeling pretty stressed’ versus ‘I’m feeling very excited’ would then be categorized as having different emotional values and would warrant different responses from the chatbot.

Even for humans, picking up on someone else’s feelings purely through text can be difficult [39]. A message reading ‘ok’ could mean that the person fully agrees, that they disagree but don’t want to say it outright, that they don’t understand, or something else entirely [40]. Teaching a computer how to read emotion requires it to reliably categorize language despite language’s inherently subjective and contextual nature [41]. When training an AI in sentiment analysis, words must first be classified by third-party human raters to establish baseline ratings, even though assigning a word or phrase an accurate rating is itself subjective [42]. Varying responses to rating prompts could confuse the algorithm; if the large majority of examples in the training data treated ‘I’m ok’ as meaning ‘I’m just alright,’ the algorithm may infer that this is the one true meaning of ‘I’m ok.’ If a real user sent ‘I’m ok’ but wasn’t actually alright, and just wanted more warm-up before talking about their feelings, their sentiment would go undetected. Cases like these may be why the vast majority of surveyed mental health experts did not believe that chatbots could effectively understand or display emotion [43].
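A minimal sketch of this kind of lexicon-based sentiment analysis appears below. The word scores are invented stand-ins for ratings that third-party human raters would assign; the example also shows how an ambiguous message like ‘ok’ simply lands at whatever value the lexicon happens to hold, regardless of what the person actually meant.

```python
# Minimal lexicon-based sentiment sketch. The scores are invented placeholders
# standing in for human-assigned ratings (1 = very negative, 10 = very positive);
# real systems use large, validated lexicons or learned models.
SENTIMENT_SCORES = {"stressed": 2, "excited": 9, "ok": 5, "awful": 1}

def message_sentiment(message: str) -> float:
    """Average the scores of the rated words in a message; 5 is neutral."""
    words = [w.strip(".,!?").lower() for w in message.split()]
    rated = [SENTIMENT_SCORES[w] for w in words if w in SENTIMENT_SCORES]
    return sum(rated) / len(rated) if rated else 5.0  # default to neutral

print(message_sentiment("I'm feeling pretty stressed"))  # -> 2.0, flags a negative mood
print(message_sentiment("I'm feeling very excited"))     # -> 9.0, flags a positive mood
print(message_sentiment("ok"))                           # -> 5.0, ambiguous, as discussed above
```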
AI may have some advantages over human providers when it comes to emotional recognition, particularly in the realm of passive data collection [12, 44, 45]. While collecting active data requires work on the client’s part, such as filling out a questionnaire about their recent feelings and behaviors, passive data is collected from information that is already available to the provider [45]. What we consider mundane, everyday interactions with technology can reveal how we’re feeling at any given moment [45, 46]. For example, imagine that you sent texts to several friends and nobody responded, then watched YouTube for three straight hours to clear your mind, Google searched ‘Are my friends mad at me?’, and ignored the Fitbit alert telling you to get off the couch and move. Any one of these data points on its own could be inconsequential; together, they tell a story of someone who may be experiencing emotional distress [46]. Digital phenotyping data — such as somebody’s search history, screen time, text messages, or GPS location — and physiological data — like a heart rate measured by a smartwatch — could all be harnessed to create a sort of emotional snapshot in time [45, 46]. If the client consented, AI could then sift through this mountain of data and identify behavioral patterns faster than any human could [12, 44, 45]. A hybrid model of therapy could harness AI’s passive data analysis capabilities while compensating for its weaknesses in classifying emotions. In this collaborative model, a psychotherapist could supplement a client’s treatment with an AI chatbot that builds an emotional profile between weekly sessions and alerts the therapist to any problem areas the client may not think to disclose [12, 44, 45]. In the future, AI could also provide real-time feedback to users [45]. Let’s say the AI picks up on the fact that you’ve been listening to sad music for a long time. The chatbot could send a mobile or desktop alert, such as ‘I notice you’ve been listening to sad music for a while now — how are you feeling? I’m here if you want to talk.’ This sort of intervention could function as an invitation to use the chatbot therapy service at the very moment the person may benefit most [12, 44, 45, 46].
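To ground the idea, here is a hedged sketch of how a few passive signals might be combined into a rough distress score that, with the user’s consent, triggers a check-in message. The specific signals, weights, and threshold are invented for illustration and are not drawn from any deployed system.

```python
# Hedged sketch of passive-data aggregation: combine a few digital phenotyping
# signals into a rough distress score and, with consent, trigger a check-in.
# Signals, weights, and threshold are invented placeholders.
from dataclasses import dataclass

@dataclass
class DailySignals:
    unanswered_texts: int      # messages sent with no reply
    screen_time_hours: float   # continuous passive screen time
    sad_music_hours: float     # hours of low-valence music listening
    resting_heart_rate: int    # from a wearable, in beats per minute

def distress_score(s: DailySignals) -> float:
    # Weighted sum: any one signal alone is inconsequential,
    # but together they can suggest emotional distress.
    return (0.5 * s.unanswered_texts
            + 0.3 * s.screen_time_hours
            + 0.8 * s.sad_music_hours
            + 0.1 * max(0, s.resting_heart_rate - 70))

def maybe_check_in(s: DailySignals, consented: bool, threshold: float = 4.0):
    if consented and distress_score(s) >= threshold:
        return "I notice today might have been rough. How are you feeling? I'm here if you want to talk."
    return None

print(maybe_check_in(DailySignals(3, 3.0, 2.5, 78), consented=True))
```

The consent flag is doing real work here: in the hybrid model described above, no passive signal would be collected or acted on without the client’s explicit agreement.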
[Empathy To Be Inserted Here]: Problems with the AI Therapeutic Alliance
Just as in any interpersonal relationship, the exchange of intimate personal details between a client and their therapist often results in a close bond. This relationship, called the therapeutic alliance, involves collaboration, an emotional connection, and an agreement on the goals of treatment [47]. A therapeutic alliance between a human and a machine can be similar to that between two people, as we have a tendency to anthropomorphize, or assign human traits to, inanimate objects [24, 47, 48]. Even when presented with videos of moving shapes, people come up with stories of what is happening and assign the shapes different personalities [48]. The same applies to our relationships with computers: even when people know they’re interacting with a computer, they tend to treat it as they would another person [49, 50, 51, 52]. People reported a better working alliance when they interacted with a computer interface that had trusting and empathetic responses built into its system, such as ‘I hear you,’ compared with an identical interface that did not [49]. When people felt there was a mind within the chatbot, they tended to experience more interpersonal closeness and a stronger sense of being present with another social individual [53]. These effects were heightened when the computer used social cues such as small talk or humor [53, 54]. The more human-like traits the chatbot had, the more the user perceived social presence and anthropomorphized the robot [50, 55]. Additionally, it seems that humans can perceive empathy from AI. In fact, users prefer receiving empathy from a computer — ‘I’m so sorry that happened’ — over receiving unemotional, informational advice — ‘You should move on’ [51, 56]. Machine learning has already been used successfully to distinguish empathetic responses — ‘If that happened to me, I would feel really isolated’ — from non-empathetic ones [57]. Empathetic chatbots help ease distress from social exclusion, for example, whereas non-empathetic chatbots do not [58].
Even if a chatbot displays empathy, it’s essential that the user feels comfortable with self-disclosure to the CA [52]. Self-disclosure gives the therapist more information about the scope of the client’s worries and is also associated with better health outcomes. Self-disclosure, especially when concerning something sensitive, private, or emotional, may be easier and just as beneficial with a robot as with a human [52]. Generally, users feel that robots are less likely to judge them, which in turn yields more honest answers [59]. This tendency is particularly true for individuals with social anxiety [60, 61, 62]. When someone’s fear of being judged is heightened, such as when they disclose something sensitive and private, they may be more likely to confess to a chatbot than a person [52, 59]. In this case, the user’s knowledge that the robot does not consciously understand them is a benefit, since it may help them open up. On the other hand, if they are sharing something purely factual, they may not have a preference for disclosing to a computer versus a human [52]. Anonymity is not the only factor that affects self-disclosure; there is also the question of a user’s familiarity with a chatbot, which can increase their sense of connection, and in turn encourage them to share more about themselves [63].
The chatbot’s form can also impact the efficacy of the therapeutic alliance. A conversational agent that includes some sort of visual representation is called an embodied conversational agent (ECA) [15, 50, 64]. ECAs can take any form: a person could text a chatbot that displays a static cartoon avatar alongside its texting function, or they could video conference with a moving, talking, fully animated conversational agent. Because ECAs generate multiple modes of output, including visual and audio, the technology will likely take longer to develop than a simple AI therapy text interface would. Research on ECAs is still in its infancy, and there are conflicting findings on whether or not ECAs can motivate users to complete health interventions [65]. However, it is clear that when the design of the chatbot has more human-adjacent features, the therapeutic alliance improves [66]. When ECAs use voice instead of text as an output, perceived intimacy improves; when the voice is more human-like rather than robotic, participants rate the ECA as more appropriate, credible, and trustworthy [67, 68]. As for visual design, when an ECA has a moving, gesturing avatar, ratings of appropriateness, trust, co-presence, and emotional response all increase compared to ratings of ECAs with a static avatar [68]. However, when the primary goal of an interaction is communication, the extra stimuli may be distracting [68]. In fact, in an experimental setting, people showed higher treatment adherence with a static animation than with a more stimulating moving animation, and participants followed directions better when given psychoeducation through simple text than through a possibly more distracting ECA [68, 69]. A user’s sense that the conversational agent is anonymous, and their subsequent self-disclosure, can also be compromised when the chatbot has a visual, anthropomorphized avatar [70]. If an AI chatbot feels too human-like, it could prompt users to experience the uncanny valley — a psychological phenomenon in which robots that appear mostly human but differ in some small way evoke a feeling of discomfort or eeriness in observers [71]. Chatbots that use simpler language and lack a visual avatar tend to produce less of this effect [71]. This is all to say that the relationship between a conversational agent’s visual and voice design and users’ perceptions is complex. Users would benefit from the ability to choose what level of anthropomorphism they want to interact with, whether through text alone, voice, or a multimodal ECA, in order to suit their individual needs [72].
A therapeutic alliance between a person and a CA risks therapeutic misconception: a mismatch between what a person expects from a chatbot and what the chatbot can actually provide [73]. When chatbots are marketed as replacements for traditional therapy and seem to engage in conversation just as a human would, this can create a false sense of security; in reality, the chatbot may be insufficiently trained in critical areas like crisis management [73]. Today’s chatbots have given poor or unspecific responses to harm-related questions [30, 33, 34]. For instance, Tessa, a chatbot designed to help people with restrictive eating disorders, responded to a series of self-deprecating remarks with ‘Keep on recognizing your great qualities’ [74]. Empathy is an essential part of how people socialize with others. When people see another person crying, they may feel that person’s emotional pain as if it were their own; they may even feel physical sensations like tightness in the chest or low energy. Empathy informs how people respond to a friend in crisis [75]. Computers don’t have a body and therefore cannot experience this kind of embodied empathy [75]. Machines will never have the same empathetic instincts we do, and unless rigorously trained to mimic ours, they have the potential to do great harm.
The therapeutic relationships people form with mental health conversational agents are no doubt complex, from users’ tendency to anthropomorphize, to the ways chatbots facilitate self-disclosure, to the benefits and drawbacks of ECAs. Going forward, collaboration among people from many different disciplines — psychologists, computer scientists, animators, and policymakers, to name a few — will be required to develop these AI tools and put them into clinical practice. Just as every U.S. state has a board in charge of therapist licensing, and federal laws govern what patient information can be shared, regulations may need to be established to address privacy concerns in AI therapy. Some experts call for requiring the companies that create CAs to inform users how their data are being used and to obtain informed consent before accessing, storing, or sharing anyone’s data [76]. Additionally, it is imperative to develop high-quality training data to reduce bias in algorithms [27, 28, 29]. As for conversational agent design, there are few ECAs on the market, and particular attention should be paid to developing CAs that incorporate text, voice, and visual inputs and outputs for use in clinical settings. Combining different modes of delivery could make these tools more accessible to different populations and improve users’ therapeutic alliance. Companies could also leverage existing AI products to help understaffed mental health care providers and increase access for the patients who may need treatment most. AI models are becoming smarter and more powerful every day; it will be up to us to harness this technology to help, and not harm, those seeking care. So while it may not be ChatGPT, Woebot, or Tessa, one of their descendants could someday soon be asking you the classic question: ‘And how does that make you feel?’
Reference List
Locher, C., Meier, S., & Gaab, J. (2019). Psychotherapy: A world of meanings. Frontiers in Psychology, 10. doi:10.3389/fpsyg.2019.00460
Adamopoulou, E., & Moussiades, L. (2020). An overview of chatbot technology. In: Maglogiannis, I., Iliadis, L., Pimenidis, E. (Eds.). Artificial Intelligence Applications and Innovations. Springer, Cham. doi:10.1007/978-3-030-49186-4_31
He, Y., Yang, L., Qian, C., Li, T., Su, Z., Zhang, Q., & Hou, X. (2023). Conversational agent interventions for mental health problems: Systematic review and meta-analysis of randomized controlled trials. Journal of Medical Internet Research, 25. doi:10.2196/43862
Feijt, M., de Kort, Y., Westerink, J., Bierbooms, J., Bongers, I., & IJsselsteijn, W. (2023). Integrating technology in mental healthcare practice: A repeated cross-sectional survey study on professionals' adoption of digital mental health before and during COVID-19. Frontiers in Psychiatry, 13. doi:10.3389/fpsyt.2022.1040023
Imel, Z. E., Caperton, D. D., Tanana, M., & Atkins, D. C. (2017). Technology-enhanced human interaction in psychotherapy. Journal of Counseling Psychology, 64(4), 385-393. doi:10.1037/cou0000213
Fairburn, C. G., & Patel, V. (2017). The impact of digital technology on psychological treatments and their dissemination. Behaviour Research and Therapy, 88, 19-25. doi:10.1016/J.BRAT.2016.08.012
Kris, J. (2023). Telehealth implementation, treatment attendance, and socioeconomic disparities in treatment utilization in a community mental health setting during the COVID-19 pandemic: A retrospective analysis of electronic health record data. Telemedicine Reports, 4(1), 55-60. doi:10.1089/tmr.2022.0005
Neary, M., & Schueller, S. M. (2018). State of the Field of Mental Health Apps. Cognitive and Behavioral Practice, 25(4), 531-537. doi:10.1016/j.cbpra.2018.01.002
Huguet, A., Rao, S., McGrath, P. J., Wozney, L., Wheaton, M., Conrod, J., & Rozario, S. (2016). A systematic review of cognitive behavioral therapy and behavioral activation apps for depression. PLoS ONE, 11(5). doi:10.1371/journal.pone.0154248
Firth, J., Torous, J., Nicholas, J., Carney, R., Pratap, A., Rosenbaum, S., & Sarris, J. (2017). The efficacy of smartphone‐based mental health interventions for depressive symptoms: A meta‐analysis of randomized controlled trials. World Psychiatry, 16(3), 287-298. doi:10.1002/wps.20472
Sharma, A., Rushton, K., Lin, I. W., Nguyen, T., & Althoff, T. (2024). Facilitating self-guided mental health interventions through human-language model interaction: A case study of cognitive restructuring. In: Mueller, F.F., Kyburz, P., Williamson, J.R., Sas, C., Wilson, M.L., Dugas, P.T., & Shklovski, P. (Eds.) Proceedings of the CHI Conference on Human Factors in Computing Systems, 1-29. doi:10.1145/3613904.3642761
Alfano, L., Malcotti, I., & Ciliberti, R. (2024). Psychotherapy, artificial intelligence and adolescents: ethical aspects. Journal of Preventive Medicine and Hygiene, 64(4), E438-E442. doi:10.15167/2421-4248/jpmh2023.64.4.3135
Linardon, J., Cuijpers, P., Carlbring, P., Messer, M., & Fuller‐Tyszkiewicz, M. (2019). The efficacy of app‐supported smartphone interventions for mental health problems: A meta‐analysis of randomized controlled trials. World Psychiatry, 18(3), 325-336. doi:10.1002/wps.20673
Miner, A. S., Shah, N., Bullock, K. D., Arnow, B. A., Bailenson, J., & Hancock, J. (2019). Key considerations for incorporating conversational AI in psychotherapy. Frontiers in Psychiatry, 10. doi:10.3389/fpsyt.2019.00746
Fiske, A., Henningsen, P., & Buyx, A. (2019). Your robot therapist will see you now: Ethical implications of embodied artificial intelligence in psychiatry, psychology, and psychotherapy. Journal of Medical Internet Research, 21(5). doi:10.2196/13216
Weizenbaum, J. (1966). ELIZA—A computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36-45. doi:10.1145/365153.365168
Coheur L. (2020). From Eliza to Siri and beyond. In: Lesot, M.J., Veira, S., Reformat, M.Z., Carvalho, J.P., Wilbik, A., Bouchon-Meunier, B., & Yager, R.R. (Eds.). Information Processing and Management of Uncertainty in Knowledge-Based Systems. IPMU 2020. Communications in Computer and Information Science, vol 1237, 29-41. Springer. doi:10.1007/978-3-030-50146-4_3
Kim, S. Y., Schmitt, B. H., & Thalmann, N. M. (2019). Eliza in the uncanny valley: Anthropomorphizing consumer robots increases their perceived warmth but decreases liking. Marketing Letters, 30(1), 1-12. doi:10.1007/s11002-019-09485-9
Coghlan, S., Leins, K., Sheldrick, S., Cheong, M., Gooding, P., & D’Alfonso, S. (2023). To chat or bot to chat: Ethical issues with using chatbots in mental health. DIGITAL HEALTH, 9. doi:10.1177/20552076231183542
Lin, X., Martinengo, L., Jabir, A. I., Ho, A. H., Car, J., Atun, R., & Tudor Car, L. (2023). Scope, characteristics, behavior change techniques, and quality of conversational agents for mental health and well-being: Systematic assessment of apps. Journal of Medical Internet Research, 25. doi:10.2196/45984
Fitzpatrick, K. K., Darcy, A., & Vierhile, M. (2017). Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial. JMIR Mental Health, 4(2). doi:10.2196/mental.7785
Kiuchi, K., Otsu, K., & Hayashi, Y. (2023). Psychological insights into the research and practice of embodied conversational agents, chatbots and social assistive robots: A systematic meta-review. Behaviour & Information Technology, 1-41. doi:10.1080/0144929X.2023.2286528
Jindal, J. A., Lungren, M. P., & Shah, N. H. (2024). Ensuring useful adoption of generative artificial intelligence in healthcare. Journal of the American Medical Informatics Association, 31(6), 1441-1444. doi:10.1093/jamia/ocae043
Grodniewicz, J. P., & Hohol, M. (2023). Waiting for a digital therapist: Three challenges on the path to psychotherapy delivered by artificial intelligence. Frontiers in Psychiatry, 14. doi:10.3389/fpsyt.2023.1190084
Schick, A., Feine, J., Morana, S., Maedche, A., & Reininghaus, U. (2022). Validity of chatbot use for mental health assessment: experimental study. JMIR mHealth and uHealth, 10(10), e28082. doi:10.2196/28082
Liu, H., Peng, H., Song, X., Xu, C., & Zhang, M. (2022). Using AI chatbots to provide self-help depression interventions for university students: A randomized trial of effectiveness. Internet Interventions, 27. doi:10.1016/j.invent.2022.100495
Vaidyam, A. N., Linggonegoro, D., & Torous, J. (2021). Changes to the psychiatric chatbot landscape: a systematic review of conversational agents in serious mental illness: Changements du paysage psychiatrique des chatbots: Une revue systématique des agents conversationnels dans la maladie mentale sérieuse. The Canadian Journal of Psychiatry, 66(4), 339-348. doi:10.1177/0706743720966429
Koulouri, T., Macredie, R. D., & Olakitan, D. (2022). Chatbots to support young adults’ mental health: An exploratory study of acceptability. ACM Transactions on Interactive Intelligent Systems, 12(2), 1-39. doi:10.1145/3485874
Vaidyam, A. N., Wisniewski, H., Halamka, J. D., Kashavan, M. S., & Torous, J. B. (2019). Chatbots and conversational agents in mental health: A review of the psychiatric landscape. The Canadian Journal of Psychiatry, 64(7), 456-464. doi:10.1177/0706743719828977
Martinengo, L., Lum, E., & Car, J. (2022). Evaluation of chatbot-delivered interventions for self-management of depression: Content analysis. Journal of Affective Disorders, 319, 598-607. doi:10.1016/j.jad.2022.09.028
Bendig, E., Erb, B., Schulze-Thuesing, L., & Baumeister, H. (2019). The next generation: Chatbots in clinical psychology and psychotherapy to foster mental health - A scoping review. Verhaltenstherapie, 32(Suppl. 1), 64-76. doi:10.1159/000501812
Lim, S. M., Shiau, C. W. C., Cheng, L. J., & Lau, Y. (2022). Chatbot-delivered psychotherapy for adults with depressive and anxiety symptoms: A systematic review and meta-regression. Behavior Therapy, 53(2), 334-347. doi:10.1016/j.beth.2021.09.007
Kocaballi, A. B., Quiroz, J. C., Rezazadegan, D., Berkovsky, S., Magrabi, F., Coiera, E., & Laranjo, L. (2020). Responses of conversational agents to health and lifestyle prompts: Investigation of appropriateness and presentation structures. Journal of Medical Internet Research, 22(2). doi:10.2196/15823
Miner, A. S., Milstein, A., Schueller, S., Hegde, R., Mangurian, C., & Linos, E. (2016). Smartphone-based conversational agents and responses to questions about mental health, interpersonal violence, and physical health. JAMA Internal Medicine, 176(5). doi:10.1001/jamainternmed.2016.0400
Sakurai, Y., Ikegami, Y., Sakai, M., Fujikawa, H., Tsuruta, S., Gonzalez, A. J., Sakurai, E., Damiani, E., Kutics, A., Knauf, R., & Frati, F. (2019). VICA, a visual counseling agent for emotional distress. Journal of Ambient Intelligence and Humanized Computing, 10(12), 4993-5005. doi:10.1007/s12652-019-01180-x
Sharma, A., Lin, I. W., Miner, A. S., Atkins, D. C., & Althoff, T. (2023). Human-AI collaboration enables more empathic conversations in text-based peer-to-peer mental health support. Nature Machine Intelligence, 5(1), 46-57. doi:10.1038/s42256-022-00593-2
Bloodgood, M., & Vijay-Shanker, K. (2014). Taking into account the differences between actively and passively acquired data: The case of active learning with support vector machines for imbalanced datasets. CoRR. doi:10.48550/arXiv.1409.4835
Lossio-Ventura, J. A., Weger, R., Lee, A. Y., Guinee, E. P., Chung, J., Atlas, L., Linos, E., & Pereira, F. (2024). A comparison of ChatGPT and fine-tuned open pre-trained transformers (OPT) against widely used sentiment analysis tools: Sentiment analysis of COVID-19 survey data. JMIR Mental Health, 11. doi:10.2196/50150
Akçay, M. B., & Oğuz, K. (2020). Speech emotion recognition: Emotional models, databases, features, preprocessing methods, supporting modalities, and classifiers. Speech Communication, 116, 56-76. doi:10.1016/j.specom.2019.12.001
Masson, A., Cazenave, G., Trombini, J., & Batt, M. (2020). The current challenges of automatic recognition of facial expressions: A systematic review. AI Communications, 33(3-6), 113-138. doi:10.3233/aic-200631
Kang, E. B. (2023). On the praxes and politics of AI speech emotion recognition. 2023 ACM Conference on Fairness, Accountability, and Transparency, 8, 455-466. doi:10.1145/3593013.3594011
Malgaroli, M., Hull, T. D., Zech, J. M., & Althoff, T. (2023). Natural language processing for mental health interventions: A systematic review and research framework. Translational Psychiatry, 13(1), 309. doi:10.1038/s41398-023-02592-2
Sweeney, C., Potts, C., Ennis, E., Bond, R., Mulvenna, M. D., O’Neill, S., Malcolm, M., Kuosmanen, L., Kostenius, C., Vakaloudis, A., McConvey, G., Turkington, R., Hanna, D., Nieminen, H., Vartiainen, A.-K., Robertson, A., & McTear, M. F. (2021). Can chatbots help support a person’s mental health? Perceptions and views from mental healthcare professionals and experts. ACM Transactions on Computing for Healthcare, 2(3), 1-15. doi:10.1145/3453175
Minerva, F., & Giubilini, A. (2023). Is AI the future of mental healthcare? Topoi, 42(3), 809-817. doi:10.1007/s11245-023-09932-3
Torous, J., Bucci, S., Bell, I. H., Kessing, L. V., Faurholt‐Jepsen, M., Whelan, P., Carvalho, A. F., Keshavan, M., Linardon, J., & Firth, J. (2021). The growing field of digital psychiatry: Current evidence and the future of apps, social media, chatbots, and virtual reality. World Psychiatry, 20(3), 318-335. doi:10.1002/wps.20883
Meadows, R., Hine, C., & Suddaby, E. (2020). Conversational agents and the making of mental health recovery. Digital Health, 6. doi:10.1177/2055207620966170
Stubbe, D. E. (2018). The therapeutic alliance: The fundamental element of psychotherapy. Focus, 16(4), 402-403. doi:10.1176/appi.focus.20180022
Ratajska, A., Brown, M. I., & Chabris, C. F. (2020). Attributing social meaning to animated shapes: A new experimental study of apparent behavior. The American Journal of Psychology, 133(3), 295-312. doi:10.5406/amerjpsyc.133.3.0295
Bickmore, T. W., Mitchell, S. E., Jack, B. W., Paasche-Orlow, M. K., Pfeifer, L. M., & O’Donnell, J. (2010). Response to a relational agent by hospital patients with depressive symptoms. Interacting with Computers, 22(4), 289-298. doi:10.1016/j.intcom.2009.12.001
Araujo, T. (2018). Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Computers in Human Behavior, 85, 183-189. doi:10.1016/j.chb.2018.03.051
Liu, B., & Sundar, S. S. (2018). Should machines express sympathy and empathy? Experiments with a health advice chatbot. Cyberpsychology, Behavior, and Social Networking, 21(10), 625-636. doi:10.1089/cyber.2018.0110
Ho, A., Hancock, J., & Miner, A. S. (2018). Psychological, relational, and emotional effects of self-disclosure after conversations with a chatbot. Journal of Communication, 68(4), 712-733. doi:10.1093/joc/jqy026
Lee, S., Lee, N., & Sah, Y. J. (2019). Perceiving a mind in a chatbot: Effect of mind perception and social cues on co-presence, closeness, and intention to use. International Journal of Human-Computer Interaction, 36(10), 930-940. doi:10.1080/10447318.2019.1699748
Heppner, H., Schiffhauer, B., & Seelmeyer, U. (2024). Conveying chatbot personality through conversational cues in social media messages. Computers in Human Behavior: Artificial Humans, 2(1). doi:10.1016/j.chbah.2024.100044
Janson, A. (2023). How to leverage anthropomorphism for chatbot service interfaces: The interplay of communication style and personification. Computers in Human Behavior, 149, 1-17. doi:10.1016/j.chb.2023.107954
Rubin, M., Arnon, H., Huppert, J. D., & Perry, A. (2024). Considering the Role of Human Empathy in AI-Driven Therapy. JMIR Mental Health, 11, e56529. doi:10.2196/56529
Sharma, A., Miner, A., Atkins, D., & Althoff, T. (2020). A computational approach to understanding empathy expressed in text-based mental health support. In: Webber, B., Cohn, T., He, Y., & Liu, Y. (Eds.). Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 5263-5276. Association for Computational Linguistics. doi:10.18653/v1/2020.emnlp-main.425
de Gennaro, M., Krumhuber, E. G., & Lucas, G. (2020). Effectiveness of an empathic chatbot in combating adverse effects of social exclusion on mood. Frontiers in Psychology, 10. doi:10.3389/fpsyg.2019.03061
Lucas, G. M., Gratch, J., King, A., & Morency, L.-P. (2014). It’s only a computer: Virtual humans increase willingness to disclose. Computers in Human Behavior, 37, 94-100. doi:10.1016/j.chb.2014.04.043
Kang, S., & Gratch, J. (2010). Virtual humans elicit socially anxious interactants’ verbal self‐disclosure. Computer Animation and Virtual Worlds, 21(3-4), 473-482. doi:10.1002/cav.345
Tian, Q. (2013). Social anxiety, motivation, self-disclosure, and computer-mediated friendship: A path analysis of the social interaction in the blogosphere. Communication Research, 40(2), 237-260. doi:10.1177/0093650211420137
Laban, G., Kappas, A., Morrison, V., & Cross, E. S. (2023). Opening up to social robots: How emotions drive self-disclosure behavior. In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication, 1697-1704. IEEE. doi:10.1109/RO-MAN57019.2023.10309551
Lee, J., Lee, J., & Lee, D. (2022). Influence of rapport and social presence with an AI psychotherapy chatbot on users’ self-disclosure. SSRN Electronic Journal. doi:10.2139/ssrn.4063508
Provoost, S., Lau, H. M., Ruwaard, J., & Riper, H. (2017). Embodied conversational agents in clinical psychology: A scoping review. Journal of Medical Internet Research, 19(5). doi:10.2196/jmir.6553
Scholten, M. R., Kelders, S. M., & Van Gemert-Pijnen, J. E. (2017). Self-Guided web-based interventions: Scoping review on user needs and the potential of embodied conversational agents to address them. Journal of Medical Internet Research, 19(11). doi:10.2196/jmir.7351
D'Alfonso S., Lederman R., Bucci S., Berry K. (2020). The digital therapeutic alliance and human-computer interaction. JMIR Mental Health, 7(12). doi:10.2196/21895
Potdevin, D., Clavel, C., & Sabouret, N. (2021). Virtual intimacy in human-embodied conversational agent interactions: The influence of multimodality on its perception. Journal on Multimodal User Interfaces, 15(1), 25-43. doi:10.1007/s12193-020-00337-9
Parmar, D., Olafsson, S., Utami, D., Murali, P., & Bickmore, T. (2022). Designing empathic virtual agents: Manipulating animation, voice, rendering, and empathy to create persuasive agents. Autonomous Agents and Multi-Agent Systems, 36(1), 17. doi:10.1007/s10458-021-09539-1
Tielman, M. L., Neerincx, M.A., van Meggelen, M., Franken, I., & Brinkman, W. P. (2017). How should a virtual agent present psychoeducation? Influence of verbal and textual presentation on adherence. Technology and Health Care, 25(6), 1081-1096. doi:10.3233/THC-170899
Kang, E., & Kang, Y. A. (2023). Counseling chatbot design: The effect of anthropomorphic chatbot characteristics on user self-disclosure and companionship. International Journal of Human-Computer Interaction, 40(11), 2781-2795. doi:10.1080/10447318.2022.2163775
Ciechanowski, L., Przegalinska, A., Magnuski, M., & Gloor, P. (2019). In the shades of the uncanny valley: An experimental study of human-chatbot interaction. Future Generation Computer Systems, 92, 539-548. doi:10.1016/j.future.2018.01.055
Ahmad, R., Siemon, D., Gnewuch, U., & Robra-Bissantz, S. (2022). Designing personality-adaptive conversational agents for mental health care. Information Systems Frontiers, 24(3), 923-943. doi:10.1007/s10796-022-10254-9
Khawaja, Z., & Bélisle-Pipon, J. C. (2023). Your robot therapist is not your therapist: Understanding the role of AI-powered mental health chatbots. Frontiers in Digital Health, 5. doi:10.3389/fdgth.2023.1278186
Gabriel, S., Puri, I., Xu, X., Malgaroli, M., & Ghassemi, M. (2024). Can AI relate: Testing large language model response for mental health support. arXiv [Cs.CL]. doi:10.48550/arXiv.2405.12021
Montemayor, C., Halpern, J., & Fairweather, A. (2021). In principle obstacles for empathic AI: Why we can’t replace human empathy in healthcare. AI & Society, 37(4), 1353-1359. doi:10.1007/s00146-021-01230-z
Sağlam, R. B., & Nurse, J. R. C. (2020). Is your chatbot GDPR compliant? Open issues in agent design. In: CUI ‘20: Proceedings of the 2nd Conference on Conversational User Interfaces, 1-3. Association for Computing Machinery. doi:10.1145/3405755.3406131