Chair(s)
Mr Lars-Åke Söderlund, co-chair of the FIP Technology Advisory Group, Sweden, and Ms Emanuella Nzeribe, acting president of the FIP Early Career Pharmaceutical Group (ECPG), Nigeria
Introduction
Generative artificial intelligence (AI) chatbots are computer programmes built on deep neural networks that simulate human conversation, engaging in dialogue with users via text or voice. Since ChatGPT, there has been an explosion of new large language models (LLMs) on the scene. Each LLM is distinct and performance varies between them. However, all LLMs share the same fundamental limitation: they are trained not to be factual but to sound credible, which raises concerns particularly in high-stakes environments like health care. "Hallucinations" — outputs that are confidently stated but false or nonsensical — are a known weakness of generative AI chatbots. Lack of explainability and trustworthiness are major concerns for LLMs.
ChatGPT lacks the capacity for self-correction, leading to a lack of accountability. Inaccurate or misleading model responses can perpetuate misinformation and hinder the vital learning and improvement processes that AI systems depend on.
Deep learning systems have billions of parameters, enabling them to learn complex, non-linear relationships about the world. That same scale means we cannot intuitively understand how or what they are learning. They are the ultimate "black box": once unleashed, their outputs are difficult to analyse, even by their developers. This makes it difficult to trust these models to behave consistently and safely. The question we must seek to answer is how we can leverage the usefulness of AI while guarding against its flaws and misinformation. ChatGPT and other LLMs are already at large and have seen unprecedented adoption over the past year. Some businesses that began by "experimenting" with ChatGPT now use it, often unchecked, to save on labour costs. Yet it remains a technology without regulatory safeguards, highlighting the need for human oversight.
How this technology is likely to impact the world of health care and pharmacy, and whether it represents an opportunity or a threat, are questions worth exploring.
Programme
11:00 – 11:05 |
Introduction by the chairs |
11:05 – 11:35 | The need for trust — Artificial intelligence, health and collaboration Dr Ricardo Baptista Leite, HealthAI, Switzerland |
11:35 – 12:05 | AI, chatbots and ChatGPT — Threat to knowledge work or a “dancing bear”? Dr Siraaj Adams, Digital Health, South Africa |
12:05 – 12:25 | Panel discussion |
12:25 – 12:30 | Closing |
Learning objectives
- To understand the pros and cons of the use of AI in pharmacy and health care, including risks.
- To identify the ethical considerations of using chatbots and ChatGPT.
- To outline how AI and digital tools such as chatbots and ChatGPT can improve patient outcomes.
Take home messages
- Artificial intelligence (AI) and chatbots are becoming omnipresent in our daily lives. Despite rapid improvements in natural language processing in recent years, the technology behind chatbots is still not fully mature, and chatbots still make many mistakes in their interactions with users.
- Chatbots have the potential to revolutionise pharmacy practice by providing personalised and accessible health information, promoting medicine adherence, and improving patient outcomes. However, several concerns must be addressed, such as privacy and security, bias and discrimination, responsibility and liability, and autonomy and decision-making.
- To ensure that chatbots are implemented ethically and effectively, future research should focus on developing more advanced natural language processing and machine learning algorithms, integration with wearable technology and electronic health records, collaboration with other healthcare providers, and a stronger focus on privacy and security measures. The implications for the future of pharmacy practice are significant: chatbots can improve access to care, reduce healthcare costs and increase patient satisfaction.