Lessons from the Literature

Volume 24, Number 5: May 2023

Artificial Intelligence in Palliative Medicine: A Review of the Literature

By Barry Ashpole
IAHPC Board Member; communications consultant

While coverage of artificial intelligence (AI) in the health care literature is fairly extensive in general, its possible application in end-of-life care has received less attention. The following is a summary of key open access journal articles on the topic, plus a reading list.

For a broad overview of the potential for AI to shape and transform clinical practice and public health, see a 2021 essay published by a United States think tank, The Hastings Center.1 The authors emphasize that AI applications invite reflection on the extent to which they rely upon seemingly opaque black box technologies; reduce opportunities for human assessment and intervention; and automate situations, decisions, and allocations that can have significant consequences.

A conversation with "Cicely"

Palliative care physician Justin Sanders was inspired by Dame Cicely Saunders, the pioneer of hospice and palliative care, who died in 2005. Using an AI program designed to create opportunities to converse with historical and fictional characters, Dr. Sanders set out to have a conversation with her. He was struck by the cogency with which the AI agent explained many of the issues we contend with daily in seeking and providing care in the context of serious illness. Access it here.

Doctors as "handmaidens" to AI?

Doctors’ potential reliance on opaque algorithms—which might add to clinician workloads or even render a doctor “handmaiden to an AI” that is actually making treatment decisions—could affect patients’ attitudes. In light of challenges like these, the ideal of trustworthy AI figures prominently in documents concerning its ethical use. In the 2019 guidelines of the High-Level Expert Group on Artificial Intelligence, trustworthy AI is the target notion around which other principles and requirements are centered.2 The European Commission, in a 2020 white paper, proposed the development of an “ecosystem of trust” in which AI is to be pursued and realized.3 

Ethical considerations dominate the discussion of AI in palliative care, primarily in the context of predicting mortality and decision-making. Automated algorithms could cause subtle shifts in how shared decision-making plays out.

Prognosis

The use of AI to identify people approaching the end of life, so as to prevent unwanted and non-beneficial care, is an important goal. People with serious illnesses are at risk of physical and psychological suffering at the end of life in large part due to care that does not align with their priorities.4 For patients who desire it, accurate prognostic information should ideally help them make decisions about treatments, prepare for the future, and focus on their priorities. 

However, AI algorithms are susceptible to socio-economic or racial bias, and the resulting information can exacerbate disparities of care.4

Further research is required

A 2022 Anglo-American study took a broader view, identifying 16 priority areas for palliative care research involving many applications of technology, including care for patients and caregivers, self-management and reporting of disease, education and training, communication, care coordination, and research methodology. The authors grouped the priority areas into eight topics: big data, mobile devices, telehealth and telemedicine, virtual reality, artificial intelligence, smart homes, biotechnology, and digital legacy.4

Human-centered design and robust governance systems should be considered in future research. There is considerable risk at present if AI algorithms are running in the background or used without the knowledge of patients and families. It is important that the risks of using these technologies in palliative care are properly addressed to ensure that these tools are used meaningfully, wisely, and safely.4

In summary

Figuring out who should receive palliative care, and when, is one of the field’s most pressing questions.

Machine learning technologies can be useful, especially as clinicians and health systems seek to allocate and improve access to scarce resources. Machine learning models, with their ability to rapidly analyze data from various sources, can predict who is likely to progress to unacceptable functional dependence, or even die. This could signal who might need additional palliative care services that, when appropriately timed, improve outcomes such as quality of life, patient and caregiver satisfaction, and health care spending efficiency.

References
  1. Braun M, Bleher H, Hummel P. A Leap of Faith: Is There a Formula for “Trustworthy” AI? Hastings Center Report 2021; 51(3): 17-22.
  2. High-Level Expert Group on Artificial Intelligence. Ethics Guidelines for Trustworthy AI. European Commission, 2019.
  3. European Commission. White Paper on Artificial Intelligence: A European Approach to Excellence and Trust. European Commission, 2020.
  4. Nwosu AC, McGlinchey T, Sanders J, Stanley S, Palfrey J, et al. Identification of Digital Health Priorities for Palliative Care Research: Modified Delphi Study. JMIR Aging 2022; 5(1): e32075.
  5. Kindvall C, Cassel CK, Pantilat SZ, DeCamp M. Ethical Considerations in the Use of Artificial Intelligence Mortality Predictions in the Care of People with Serious Illness. Health Affairs, September 2020. 
  6. Porter AD, Harman S, Lakin JR. Power and Perils of Prediction in Palliative Care. Lancet 2020; 395(10225): 680-681.


Additional readings/resources

  1. Machine yearning: How advances in computational methods lead to new insights about reactions to loss. Current Opinion in Psychology, 2022; 43(2): 13-17. https://www.sciencedirect.com/science/article/abs/pii/S2352250X21000683?via%3Dihub#! 
  2. Natural language word embeddings as a glimpse into healthcare language and associated mortality surrounding end of life. BMJ Health & Care Informatics, 2021;28:e100464. https://informatics.bmj.com/content/28/1/e100464 
  3. Association between high cost user status and end-of-life care in hospitalized patients: A national cohort study of patients who die in hospital. Palliative Medicine, 2021;35(9):1671-1681. https://journals.sagepub.com/doi/full/10.1177/02692163211002045
  4. What will emerging technologies mean for the future of palliative care? Palliative Care Australia (2019). https://palliativecare.org.au/story/palliative-matters-what-will-emerging-technologies-mean-for-the-future-of-palliative-care/ 
  5. Of Slide Rules and Stethoscopes: AI and the Future of Doctoring. Hastings Center Report, 2019;49(5):3. https://onlinelibrary.wiley.com/doi/10.1002/hast.1041 

Lay press perspective: a sampling

  1. An invisible hand: Patients aren’t being told about the AI systems advising their care. STAT, 15 July 2020. https://www.statnews.com/2020/07/15/artificial-intelligence-patient-consent-hospitals/ 
  2. An experiment in end-of-life care: Tapping AI’s cold calculus to nudge the most human of conversations. STAT, 1 July 2020. https://www.statnews.com/2020/07/01/end-of-life-artificial-intelligence/

Read Barry Ashpole's bio.

Comments? Send them to Barry Ashpole.




The contents of this newsletter, including (but not limited to) all written material, images, photos are protected under international copyright laws and are property of the IAHPC. You may share the IAHPC newsletter preserving the original design, the IAHPC logo, and the link to the IAHPC website, but you are not allowed to reproduce, modify, or republish any material without prior written permission from the IAHPC.