Vana.Guide

make your chatbot safe, effective and transparent

without a backbone, gen-AI bots remain a messaging service

  • Gen-AI chatbots are trained on general, public data and respond in a correspondingly impersonal way.

  • Gen-AI chatbots can’t handle vulnerable and at-risk patients, due to their fundamentally unpredictable nature.

  • Gen-AI chatbots lack interpretability due to their reliance on complex neural network architectures trained on billions of data points.

Vana.Guide makes your chatbot safer, more effective and more transparent

  • Users or clients who are vulnerable, whether from the start, later on, or upon entering a vulnerable ‘state’, are detected as they enter that state, flagged, and redirected to the proper service or caregiver. These state measurements are valid, as they are based on validated and thoroughly researched questionnaires.

  • Because the mental state of each user is modeled (see network-oriented modeling), Vana.Guide understands how the user feels on an hourly or daily basis and directs the chatbot’s questions, exercises and interventions accordingly, substantially improving the chatbot’s effectiveness.

  • Because the specific mental state of each user is made explicit (see network-oriented modeling), the actions of the connected chatbot can be explained. Improved or worsened outcomes can be tied to the administered exercise or intervention, creating an explainable pathway for practitioners, clients and other stakeholders.

a web of symptoms forms an interactive network

  • Instead of tracing mental disorders like depression or anxiety back to a single underlying cause, Vana.Guide models them as a web of interacting symptoms (see network-oriented modeling). For example, low energy might lead to inactivity, which worsens mood. These connections are made explicit in a visual network.

  • Some symptoms are more ‘influential’ than others: they keep other symptoms going or act as bridges between anxiety and depression. The model helps pinpoint which symptoms to target for the biggest impact in therapy.

  • The model doesn’t just use symptoms. It also incorporates contextual influences like stress, sleep, or social contact: factors clinicians already consider, now formalized in a model that can simulate their effects.

  • The model can simulate what happens over time when a symptom or external factor changes, such as improving sleep or introducing mindfulness. This helps forecast treatment impact and explore “what-if” scenarios.

  • The model can guide a chatbot to personalize support, choosing exercises, timing, or the tone of messages based on the user’s current symptom network and projected changes, creating more relevant and supportive interactions.
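The dynamics described above can be sketched in a few lines of code. Below is a minimal toy example in the spirit of network-oriented (temporal-causal) modeling; all symptom names, connection weights and speed factors are illustrative assumptions for demonstration, not Vana.Guide’s actual clinical model.

```python
# Toy temporal-causal symptom network. All symptoms, weights and
# speed factors are made-up illustrative values.

SYMPTOMS = ["low_energy", "inactivity", "low_mood", "poor_sleep"]

# WEIGHTS[src][tgt]: strength of the causal link src -> tgt (0..1).
WEIGHTS = {
    "low_energy": {"inactivity": 0.8},
    "inactivity": {"low_mood": 0.6},
    "low_mood":   {"poor_sleep": 0.5, "low_energy": 0.4},
    "poor_sleep": {"low_energy": 0.7},
}

SPEED = {s: 0.3 for s in SYMPTOMS}  # how fast each symptom adapts


def step(state, dt=0.5):
    """One Euler step: each symptom moves toward the average weighted
    impact of its incoming connections."""
    new = {}
    for sym in SYMPTOMS:
        incoming = [w * state[src]
                    for src, outs in WEIGHTS.items()
                    for tgt, w in outs.items() if tgt == sym]
        impact = sum(incoming) / len(incoming) if incoming else state[sym]
        new[sym] = state[sym] + SPEED[sym] * (impact - state[sym]) * dt
    return new


def simulate(state, steps=40):
    """Run the 'what-if' forward in time and return the final state."""
    for _ in range(steps):
        state = step(state)
    return state


# What-if scenario: a user starts with very low energy.
init = {"low_energy": 0.9, "inactivity": 0.2, "low_mood": 0.2, "poor_sleep": 0.2}
after_one_step = step(init)   # inactivity is already being pulled up
final = simulate(dict(init))  # the whole web settles over time
```

Each step moves every symptom toward the weighted impact of its incoming connections, so a spike in one symptom (here, low energy) propagates through the web over time, exactly the low energy → inactivity → worse mood chain sketched above.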

how to apply this to your chatbot?

See also De Bruijn et al. 2025

  • The chatbot can ‘predict’ what will happen if a certain intervention is applied, and select interventions accordingly.

  • Find which factors (e.g. symptoms, positive emotions, attitude or social interactions) need to be regulated to avoid an undesired state (such as stress or anger).

  • Add this as a module and give your chatbot a deep understanding of the underlying patient dynamics.
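The first two points can be sketched as a simple “what-if” loop: simulate each candidate intervention on the user’s factor network and pick the one that best suppresses the undesired state. The three factors, their weights and the candidate interventions below are hypothetical illustrations, not validated clinical parameters.

```python
# Hypothetical "what-if" intervention picker. Factors, weights and
# candidate interventions are illustrative, not validated values.

FACTORS = ["stress", "poor_sleep", "mindfulness"]

# EFFECT[src][tgt]: signed per-step influence of src on tgt.
EFFECT = {
    "stress":      {"poor_sleep": 0.5},   # stress worsens sleep
    "poor_sleep":  {"stress": 0.4},       # poor sleep raises stress
    "mindfulness": {"stress": -0.6},      # mindfulness lowers stress
}

def simulate(state, steps=10, rate=0.3):
    """Project factor values forward in time; factors without incoming
    influences (here: mindfulness) simply hold their value."""
    for _ in range(steps):
        nxt = {}
        for tgt in FACTORS:
            incoming = [w * state[src] for src, outs in EFFECT.items()
                        for t, w in outs.items() if t == tgt]
            if incoming:
                impact = sum(incoming)
                nxt[tgt] = min(1.0, max(0.0, state[tgt] + rate * (impact - state[tgt])))
            else:
                nxt[tgt] = state[tgt]
        state = nxt
    return state

# Each candidate intervention nudges one factor up or down.
CANDIDATES = {
    "sleep_hygiene_exercise": ("poor_sleep", -0.5),
    "mindfulness_exercise":   ("mindfulness", +0.5),
}

def best_intervention(state, candidates, target="stress"):
    """Simulate every candidate and return the one that leaves the
    undesired target state lowest."""
    def outcome(name):
        factor, delta = candidates[name]
        what_if = dict(state)
        what_if[factor] = min(1.0, max(0.0, what_if[factor] + delta))
        return simulate(what_if)[target]
    return min(candidates, key=outcome)

user = {"stress": 0.8, "poor_sleep": 0.6, "mindfulness": 0.1}
choice = best_intervention(user, CANDIDATES)
```

In this toy network the mindfulness exercise wins, because its direct dampening link to stress outweighs the indirect sleep route; a real module would of course use the user’s own fitted network rather than hard-coded weights.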

it is based on Dr. Jabeen’s doctoral research

Dr. Jabeen’s PhD research was part of Prof. Dr. Treur’s research group ‘Social AI’ at VU University Amsterdam. The research applies Network Analysis (Borsboom, 2013) to areas in mental health care (thesis).

Team

Michiel van Vliet
Founder

With a background in pharmacy, Michiel became interested in mental health after completing 18 months of intensive psychotherapy. He saw first-hand the incredible power of mental health care and made it his mission to support therapists in offering care to many more people. Previously, he worked as a product owner in hospital software (ChipSoft) and led a team developing an AI-based application at his startup, Deep Dynamics.

Fakhra Jabeen
Co-founder

Dr. Fakhra Jabeen is a researcher in mental health and wellbeing. During her PhD at VU University Amsterdam, she studied human behaviour on social media using computational AI (thesis). She built and tested several ML models and designed a chatbot to improve the mental health of university students. She has experience using computational AI to improve mental health and wellbeing in the healthcare sector.

Advisors

  • Robert-Jan van der Horst

    Advisor

    Robert brings a wealth of experience, having worked for over 14 years as Business Information Manager (DSM) and over 10 years as Chief Information Officer (DSM and Centrient Pharmaceuticals). Currently, he is a Managing Partner at Q7 Consulting (https://www.q7-consulting.com/), offering CIO-as-a-Service to Life Sciences and Pharma companies.

  • Marina Borges

    Advisor

    Marina has 11 years of experience building technology to realize business value, as a senior manager for data science and former chief product officer for AI products at EY. She believes in solving hard problems for a better world and sees mental health as a priority.