In a recent pilot study, Research Fellow Caroline Green found that some care providers have been using generative AI chatbots (such as ChatGPT) to create care plans. This not only poses risks to patient confidentiality, since personal data is entered into the language model, but could also lead to carers inadvertently causing harm by acting on faulty guidance generated by the chatbot.
However, there are also potential benefits to using AI technology in this way, such as helping to relieve the pressure of heavy administrative work. The phone app PainCheck, for example, uses AI-trained facial recognition to identify whether someone who is unable to speak is in pain.
It could also help someone without relevant experience or medical expertise to care better for a loved one.
This research has helped to open up the discussion of the benefits, risks and ethical challenges of using AI in social care.
Read the full article in The Guardian.