The overarching research goal is to develop and deploy science-based, AI-driven solutions to address mental health challenges across underserved communities. The GCAIMH is working on numerous initiatives, with several proposals in preparation or under review. Below we describe several illustrative initiatives focused on three major areas of AI for Mental Health:
Generative AI Foundation Models for Mental Health
Large language foundation models, such as GPT-4, have revolutionized various fields with their broad applicability. Training these models on domain-specific data significantly enhances their accuracy, relevance, and user trust. Foundation models trained on mental health-specific multimodal data could transform mental health care by enhancing applications for virtual therapy, early screening and diagnosis, and tailored patient education. Benefits include improved accessibility, constant availability, anonymity, and cost-effectiveness.
The GCAIMH is focused on exploring these applications while addressing ethical concerns and mitigating bias. Multidisciplinary teams of GCAIMH researchers and clinicians are leveraging Generative AI foundation models in three initiatives that aim to improve: 1) palliative care for cancer patients, 2) interventions for autism spectrum disorders, and 3) psychological first aid in crises. The latter involves integrating data from scientific papers on mental health interventions, clinical trial data, electronic health records, public health databases, and real-time intervention data to train a comprehensive and robust large language foundation model for psychological first aid. This model will power multiple applications, such as virtual remote therapy and a mental health expert chatbot, that accelerate and improve mental health assistance for people affected by crises across the globe.
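Concretely, adapting a foundation model to a domain such as psychological first aid typically begins with continued training on curated domain text. The sketch below, using the Hugging Face transformers and datasets libraries, illustrates what such a fine-tuning step could look like; the base model name, corpus snippets, output paths, and hyperparameters are illustrative assumptions, not the GCAIMH pipeline.

```python
# Minimal sketch of domain-specific fine-tuning, assuming text from sources such as
# de-identified clinical notes or intervention guidelines has already been collected
# and cleaned. All names and hyperparameters below are illustrative placeholders.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "gpt2"  # placeholder; a larger open foundation model would be used in practice

# Hypothetical corpus: each record is a passage from a domain source (papers, guidelines, notes).
corpus = [
    {"text": "Psychological first aid begins with ensuring safety and active listening..."},
    {"text": "For acute stress reactions, grounding techniques such as paced breathing..."},
]
dataset = Dataset.from_list(corpus)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # GPT-style models have no pad token by default

def tokenize(batch):
    # Truncate long passages; padding is handled dynamically by the data collator.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="pfa-foundation-model",
        num_train_epochs=1,
        per_device_train_batch_size=2,
        learning_rate=5e-5,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("pfa-foundation-model")
```

In practice, the same adapted model could then serve several downstream applications (for example, a chatbot or a triage assistant) through further instruction tuning and rigorous clinical evaluation.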
Digital Twins for Precision Mental Healthcare
Accurately diagnosing mental health problems and identifying the optimal treatment for each individual remains an unsolved challenge. In mental healthcare, a Digital Twin is a virtual model of an individual's mental health states, continuously updated from various data sources such as wearable devices, medical records, and patient self-reports. Digital Twins employ AI tools and multiscale mechanistic modeling to simulate different scenarios and predict outcomes, providing personalized, data-driven diagnosis and treatment plans. Continuous monitoring and predictive analytics significantly enhance proactive care as well as patient engagement and education. Digital twins promise to revolutionize the field of mental health, as they are already doing in other fields of medicine, including oncology and cardiology, where they have been successfully deployed.
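As a rough illustration of the update-and-predict loop described above, the sketch below models a twin as a per-patient state that ingests observations from heterogeneous sources and answers risk queries. The field names, toy scoring rule, and example values are hypothetical; a real twin would combine validated predictive models with mechanistic simulation.

```python
# Simplified, illustrative digital-twin loop: a per-patient state continuously
# refreshed from heterogeneous data streams and queried for risk predictions.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Observation:
    source: str          # e.g. "wearable", "ehr", "self_report"
    metric: str          # e.g. "sleep_hours", "phq9_score"
    value: float
    timestamp: datetime

@dataclass
class MentalHealthDigitalTwin:
    patient_id: str
    observations: list[Observation] = field(default_factory=list)

    def ingest(self, obs: Observation) -> None:
        """Continuously update the twin as new data arrives."""
        self.observations.append(obs)

    def latest(self, metric: str) -> float | None:
        """Most recent value of a metric, across all sources."""
        matching = [o for o in self.observations if o.metric == metric]
        return max(matching, key=lambda o: o.timestamp).value if matching else None

    def relapse_risk(self) -> float:
        """Toy risk score in [0, 1]; stands in for a trained predictive model."""
        sleep = self.latest("sleep_hours") or 7.0
        phq9 = self.latest("phq9_score") or 0.0
        # Less sleep and higher PHQ-9 scores raise the (illustrative) risk estimate.
        return min(1.0, max(0.0, 0.5 * (7.0 - sleep) / 7.0 + 0.5 * phq9 / 27.0))

# Example: simulate a scenario by feeding hypothetical data and querying the twin.
twin = MentalHealthDigitalTwin(patient_id="anon-001")
twin.ingest(Observation("wearable", "sleep_hours", 4.5, datetime(2024, 5, 1)))
twin.ingest(Observation("self_report", "phq9_score", 14, datetime(2024, 5, 2)))
print(f"Estimated relapse risk: {twin.relapse_risk():.2f}")
```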
The GCAIMH is actively working on developing AI-driven Mental Health Digital Twins, including four funded initiatives, three under review, and four in preparation. These address wide-ranging conditions such as depression, addiction and substance use disorders, autism spectrum disorders, schizophrenia, bipolar disorder, epilepsy, and dementia and Alzheimer's disease. One exciting GCAIMH initiative involves adapting a digital twin already validated and successfully deployed in Sweden to underserved communities across Brooklyn. This novel mental health digital twin could then be deployed to developing countries with the support of partnering humanitarian organizations.
Ethical and Safe AI for Mental Health
AI in mental healthcare presents a transformative opportunity to improve access, diagnosis, and treatment, but it is essential to ensure responsible, ethical, and safe implementation of the technology. This includes developing unbiased and accessible AI tools that prioritize diversity, equity, and inclusion; establishing regulatory frameworks and ethical guidelines for AI and related technologies; and providing AI literacy and ethics training for professionals, patients, and communities across the healthcare ecosystem.
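One concrete component of developing unbiased AI tools is auditing a model's behavior across demographic groups before deployment. The sketch below, using invented records and a plain subgroup comparison, only illustrates the idea; real audits rely on validated fairness metrics and clinically meaningful outcome definitions.

```python
# Illustrative subgroup audit of a hypothetical screening model.
from collections import defaultdict

# Hypothetical screening results: (group, model_flagged, clinician_confirmed)
records = [
    ("group_a", True, True), ("group_a", False, False), ("group_a", True, False),
    ("group_b", True, True), ("group_b", False, True), ("group_b", False, True),
]

stats = defaultdict(lambda: {"flagged": 0, "confirmed": 0, "caught": 0, "n": 0})
for group, flagged, confirmed in records:
    s = stats[group]
    s["n"] += 1
    s["flagged"] += flagged
    s["confirmed"] += confirmed
    s["caught"] += flagged and confirmed

for group, s in stats.items():
    # Selection rate: how often the model flags this group.
    selection_rate = s["flagged"] / s["n"]
    # Sensitivity: how many clinician-confirmed cases the model actually caught.
    sensitivity = s["caught"] / s["confirmed"] if s["confirmed"] else float("nan")
    print(f"{group}: selection rate {selection_rate:.2f}, sensitivity {sensitivity:.2f}")
```

A large gap in selection rate or sensitivity between groups would signal that the tool needs retraining or recalibration before it could be responsibly deployed.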
The GCAIMH prioritizes ethical and safety issues in AI through multiple initiatives, including two funded projects and four in preparation. These address issues related to substance abuse among students, autism spectrum communities, training programs for underserved populations, health disparities and equity in ADHD populations, and the ethics and regulation of, and equitable access to, neurotechnologies and AI. The latter has recently become a high-priority issue for many international organizations, including the United Nations (see the UNHRC resolution and UNESCO initiatives). AI-driven neurotechnologies can directly access and manipulate the brain and, in controlled settings, have demonstrated great potential to improve the well-being of people with neurological disorders and mental illnesses. However, they also pose a threat to notions of human identity, human dignity, freedom of thought, autonomy, mental privacy, and well-being, particularly with the rise of poorly regulated direct-to-consumer products. GCAIMH researchers, in collaboration with UNESCO and international law experts, have edited and published a book on the topic as well as a research paper exploring how human rights systems can protect against risks from consumer neurotechnologies, and are expanding this work to global communities at risk.