Multi-context architectures of neuro-symbolic AI systems for Mental Health

Mental health care systems require patient assessment and diagnosis. Creating a reliable artificial intelligence that supports a mental state examination (MSE) therefore requires verification that guarantees accuracy and robustness. Both aims are reachable only if the system explains its decisions and states its criteria at a professional level; it therefore needs to learn proper evaluation methods from clinical experts. Collecting labelled data and providing an accurate but opaque response is not enough to guarantee trust. Trust requires joining learning with knowledge, and symbolic AI systems are the ones that balance both. However, current symbolic systems such as neuro-symbolic AI cannot mix different knowledge contexts in a single decision-making architecture, as the human mind requires. There are a few attempts, such as multi-context systems, which are still in their infancy [1].

The mental states related to an MSE, concretely stress, anxiety and depression, relate to multiple behavioural contexts. They involve complex individual behaviours that change dynamically. This makes their identification challenging, since it requires continuous monitoring through sensors focused on behaviour, speech, emotion, perception, thought content, thought process, insight and cognition [2,3].

This project aims to develop new multi-context methods for neuro-symbolic systems that combine different levels of sensing intrusiveness. It will provide a taxonomy of diverse devices related to the evolution of a person's mental state. This taxonomy will combine the strength of a professional psychological background with different neuro-symbolic AI techniques, aiming to guarantee privacy, accuracy and robustness. The project will focus strongly on the explainability of complex multi-context decisions, which is an open problem [4].

The student will start with the following research questions:

RQ1: How can the neuro-symbolic system combine different intrusion levels for suitable prediction?
The researchers have collected datasets [5-9] of MSE-relevant sensor data coming from smartphones, smartwatches, and dedicated brain-signal and heart-rate devices. The datasets cover different intrusion levels for forecasting a person's mental state over multiple weeks. We will create new approaches to multi-context neuro-symbolic AI combined with time-series analysis to measure: 1) how people's states evolve; 2) how long it takes to provide a reliable prediction; and 3) which features require more attention.
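As a rough illustration of the time-series side of this question (all streams, window sizes and weights below are synthetic and hypothetical, not drawn from the datasets above), sliding-window features from sensor contexts with different intrusion levels might be extracted and fused as follows:

```python
import numpy as np

def sliding_features(stream, window):
    """Mean/std features over a sliding window of a 1-D sensor stream."""
    n = len(stream) - window + 1
    feats = np.empty((n, 2))
    for i in range(n):
        w = stream[i:i + window]
        feats[i] = w.mean(), w.std()
    return feats

def fuse_contexts(context_feats, weights):
    """Weight each context's features by a (hypothetical) reliability factor,
    which in this project could reflect its intrusion level."""
    return np.concatenate(
        [w * f for f, w in zip(context_feats, weights)], axis=1)

# Synthetic streams standing in for smartwatch heart rate and phone activity.
rng = np.random.default_rng(42)
heart = rng.normal(70, 5, 100)          # resting heart rate (bpm), higher intrusion
activity = rng.normal(8000, 1500, 100)  # daily step counts, lower intrusion

hf = sliding_features(heart, 7)     # weekly windows over 100 days -> 94 rows
af = sliding_features(activity, 7)
fused = fuse_contexts([hf, af], weights=[0.7, 0.3])
print(fused.shape)  # (94, 4): one row per window, two features per context
```

The fused feature matrix is what a downstream learner would consume; how quickly its predictions stabilise as the window grows is one way to quantify point 2) above.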

RQ2: How can a multi-context architecture explain its decisions from a psychological perspective?
We will create a new architecture to combine the explanations of the neuro-symbolic AIs according to our psychological criteria. We will also include profiling mechanisms in the architecture, via psychological metrics, to improve the explanations. The thesis includes support from Dr Mariana Pinto da Costa, a Consultant Psychiatrist at South London and Maudsley NHS Foundation Trust and a Senior Lecturer at the Psychological Medicine Department at King's College London, whose experience will provide a strong background for defining reasonable metrics.
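One minimal sketch of such a fusion step (the context names, scores, rules and threshold are all invented for illustration, not project outputs) is to let each context emit a score together with the symbolic rules that fired, and build the explanation from the rules that supported the final decision:

```python
from dataclasses import dataclass

@dataclass
class ContextDecision:
    context: str        # e.g. "sleep", "speech", "heart rate"
    score: float        # evidence for elevated stress, in [0, 1]
    rules: list         # human-readable symbolic rules that fired

def explain(decisions, threshold=0.5):
    """Fuse per-context decisions into one prediction plus an explanation
    assembled from the rules that supported it (illustrative only)."""
    score = sum(d.score for d in decisions) / len(decisions)
    supporting = [r for d in decisions if d.score >= threshold for r in d.rules]
    label = "elevated stress" if score >= threshold else "baseline"
    return label, supporting

decisions = [
    ContextDecision("sleep", 0.8, ["sleep duration < 6h for 5 nights"]),
    ContextDecision("heart rate", 0.7, ["resting HR > personal baseline + 1 sd"]),
    ContextDecision("activity", 0.2, []),
]
label, why = explain(decisions)
print(label, why)
```

The psychological metrics mentioned above would replace the naive average and threshold here, deciding how much each context's rules should weigh in the final explanation.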

RQ3: How can the system help to improve a person’s mental state?
This part focuses on providing recommendations and making the multi-context architecture dynamic, so that the learning and decision-making process evolves based on the outcomes and explanations of the neuro-symbolic AI. For the recommendations, the support of a mental health expert is essential to assess their reliability and to phrase them in language suitable for the people receiving them. The success of the recommendations will feed back into the original architecture, dynamically improving its understanding of individuals.
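The feedback loop could, for instance, re-weight the contexts whose evidence backed a recommendation once its outcome is known. The multiplicative update and the numbers below are a hypothetical sketch, not the project's mechanism:

```python
def update_weights(weights, supported, success, lr=0.2):
    """After a recommendation outcome is observed, nudge the weight of each
    context up if it supported a successful recommendation (or down if the
    recommendation failed), then renormalise. Purely illustrative."""
    new = {}
    for ctx, w in weights.items():
        if ctx in supported:
            w *= (1 + lr) if success else (1 - lr)
        new[ctx] = w
    total = sum(new.values())
    return {ctx: w / total for ctx, w in new.items()}

# The "sleep" context supported a recommendation that helped this person,
# so its influence on future decisions grows slightly.
weights = {"sleep": 0.4, "heart rate": 0.4, "activity": 0.2}
weights = update_weights(weights, supported={"sleep"}, success=True)
print(weights)
```

Over many outcomes, such per-person weights are one simple way the architecture could "dynamically improve its understanding of individuals", while keeping each adjustment inspectable.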

Some relevant papers serving as starting points for the project are:
[1] Bianchi, F., Palmonari, M., Hitzler, P., & Serafini, L. (2019, September). Complementing logical reasoning with sub-symbolic commonsense. In International Joint Conference on Rules and Reasoning (pp. 161-170). Springer, Cham.
[2] Hickey, B. A., Chalmers, T., Newton, P., Lin, C. T., Sibbritt, D., McLachlan, C. S., … & Lal, S. (2021). Smart devices and wearable technologies to detect and monitor mental health conditions and stress: A systematic review. Sensors, 21(10), 3461.
[3] Kolenik, T. (2022). Methods in digital mental health: Smartphone-based assessment and intervention for stress, anxiety, and depression. In Integrating Artificial Intelligence and IoT for Advanced Health Informatics (pp. 105-128). Springer.
[4] Garcez, A. d'A., Bader, S., Bowman, H., Lamb, L. C., de Penning, L., … & Zaverucha, G. (2022). Neural-symbolic learning and reasoning: A survey and interpretation. Neuro-Symbolic Artificial Intelligence: The State of the Art, 342, 1.
[5] Boukhechba, M., Daros, A. R., Fua, K., Chow, P. I., Teachman, B. A., & Barnes, L. E. (2018). DemonicSalmon: Monitoring mental health and social interactions of college students using smartphones. Smart Health, 9, 192-203.
[6] Wang, R., Chen, F., Chen, Z., Li, T., Harari, G., Tignor, S., … & Campbell, A. T. (2014, September). StudentLife: assessing mental health, academic performance and behavioral trends of college students using smartphones. In Proceedings of the 2014 ACM international joint conference on pervasive and ubiquitous computing (pp. 3-14).
[7] Parent, M., Albuquerque, I., Tiwari, A., Cassani, R., Gagnon, J. F., Lafond, D., … & Falk, T. H. (2020). Pass: a multimodal database of physical activity and stress for mobile passive body/brain-computer interface research. Frontiers in Neuroscience, 14, 542934.
[8] Garcia-Ceja, E., Riegler, M., Jakobsen, P., Torresen, J., Nordgreen, T., Oedegaard, K. J., & Fasmer, O. B. (2018, June). Depresjon: a motor activity database of depression episodes in unipolar and bipolar patients. In Proceedings of the 9th ACM multimedia systems conference (pp. 472-477).
[9] Schmidt, P., Reiss, A., Duerichen, R., Marberger, C., & Van Laerhoven, K. (2018, October). Introducing WESAD, a multimodal dataset for wearable stress and affect detection. In Proceedings of the 20th ACM international conference on multimodal interaction (pp. 400-408).

Project ID

STAI-CDT-2023-KCL-15