Ethical considerations for the use of AI in mental health care settings
Recent research has shown that some clinicians are reluctant to adopt AI tools in general health care settings, due to concerns about the quality and regulation of AI systems, as well as unresolved legal and ethical questions.
When it comes to adopting AI tools in mental health care settings, there are likely to be additional factors to consider, such as the sensitivity of the information shared and the potential vulnerability of service users. It is therefore crucial to include the perspectives of both the service users and the staff who will be using such AI systems. To this end, in a project funded by the British Academy and the Leverhulme Trust, Adferiad partnered with a multidisciplinary research team specialising in Law, Computer Science, and Medicine, drawn from Swansea University, the University of Southampton, and Nottingham Trent University, to identify service user and staff concerns around the potential uptake and use of AI systems in mental health care services.
Through surveys, one-to-one interviews, and focus groups, we engaged with over 100 Adferiad service users and staff. From their feedback we identified several key concerns: issues of accessibility (e.g., access to the internet, computer literacy), developer bias, a lack of personalised care, issues around data privacy and confidentiality, and the loss of the human element. However, several perceived benefits of AI were also identified, including reduced waiting times, increased contact hours, and enhanced patient choice for those who may prefer to engage with AI rather than with a human therapist.
Overall, respondents felt that AI had a number of potential uses in mental health care settings, such as streamlining routine administrative tasks and providing generic signposting advice. However, there was a clear consensus that AI should serve as a complementary tool supporting existing services, and should not replace the human role in mental health care.
AI technologies are advancing at a pace that is difficult to keep up with, and while this rapid progress offers exciting possibilities, it is important that we are well informed before proceeding. It is essential that AI technologies deployed in mental health care settings are safe, suitable, and developed with input from both staff and service users. Ongoing research in this area is needed to establish robust ethical guidelines for the use of AI in mental health care, ensuring that these tools are not only effective but also transparent and trustworthy.