Guide

AI in healthcare: navigating the noise

A comprehensive guide supporting healthcare leaders to make sense of AI and explore the art of the possible.
Sarah Ferry, Chibuchi Amadi-Livingstone

20 September 2024

Background

The rapid expansion of artificial intelligence (AI) in 2023 propelled it into the spotlight, prompting increasing questions over how AI could be used to support the healthcare system. Deployed well and in the appropriate contexts, AI has the potential to help the healthcare system overcome a range of challenges.

With the field of AI evolving at pace, this primer has been developed to support healthcare organisations to navigate the noise. Intended for board members and wider teams, it demystifies the language of AI and showcases how organisations and systems are using it – and to what end. With more and more organisations exploring AI’s potential, this primer also suggests how you can use early conversations with suppliers to consider whether an AI solution is what you need.

Demystifying AI in healthcare: the jargon buster

Please note that not all definitions have an associated example, as in many cases an example would simply repeat the definition. 

The basics

  • Automation: The use of specialised software and technology to carry out repetitive tasks, following a set of instructions and workflows set out by humans. These tasks typically remain consistent over time and include actions like sending appointment reminders, missed appointment notifications, or even receipts after online purchases. If a task is not explicitly outlined in the instructions, the machine cannot perform it. 

    Example: In healthcare, automation extends to patient monitoring, medication management and administrative tasks in hospitals and clinics.

  • Algorithm: A set of well-defined rules or processes used by an AI system to conduct tasks such as discovering new insights, identifying patterns, predicting outcomes and solving problems.

  • Artificial intelligence (AI): The capability of a computer system to mimic human cognitive functions such as learning, problem-solving, interpreting visual information, and understanding and responding to spoken or written language. AI uses maths, logic and patterns learned from data to simulate human reasoning and make decisions and recommendations. 

    Example: In healthcare, AI can be used to enhance diagnostic processes, personalise treatment plans and manage healthcare data efficiently.

  • Data: Any information that can be processed or analysed to gain insights. Data can take the form of numbers and statistics, text, symbols, or multimedia such as images, videos, sounds and maps.

    Example: In the context of healthcare, data can encompass patient records, clinical studies and real-time health monitoring outputs.

  • Machine learning (ML): A subset of AI that enables machines to automatically learn and improve from experience without explicit programming. By using set processes to analyse large amounts of data, ML systems can identify patterns, help make decisions, and improve their performance with little to no human intervention. 

    Example: In healthcare, ML applications include predicting disease progression, analysing medical images and optimising clinical workflows.

  • Model: A simple representation of an aspect of the real world. It is a programme that has been trained on a set of data to recognise certain patterns or make certain decisions without further human intervention.

  • Prompt: A question, command or statement input into an AI model to initiate a response or action, facilitating interaction between a human and the AI to generate the intended output.

Interpreting models

  • Accuracy: A metric in machine learning that measures how often the model correctly predicts the outcome. It is the fraction of predictions that the model got right, indicating the overall correctness of a classification model's predictions. Accuracy is useful when the classes are balanced (ie the number of instances in each class is roughly the same), but it can be misleading in cases of imbalanced classes. Ideally, accuracy should be as close to 100 per cent as possible; 70-90 per cent is often cited as an acceptable range. It is important to remember that, for a balanced binary task, 50 per cent accuracy means the model gets only half of its predictions right, which is essentially the same as random classification. 

  • Bias: A systematic prejudice in an AI system's results, caused by flawed assumptions in the machine learning process. This bias can reflect and perpetuate human biases and social inequalities present in the initial training data, the algorithm itself, or the predictions it generates.

    Example: Pulse oximeters are less accurate for people with darker skin tones, so AI drawing on their readings can underestimate illness severity in these patients; similarly, skin cancer detection models trained on data containing few darker skin tones can under-detect skin cancer in those groups.

  • Explainability: A measure of how understandable, or explainable, the decisions of an AI system are to humans.

    Example: An AI system may predict which patients are most in need of surgery but should be able to explain why it has prioritised patients in a certain way.

  • Interpretability: Where humans can understand how the results of an AI model were obtained.

  • Model drift: The degradation of a machine learning model's predictive accuracy over time, caused by changes in real-world environments or new input data differing from the data used during training.

    Example: A new bus route opens to a hospital, making a model trained before the change less accurate at predicting did-not-attends, because attendance patterns now differ from those seen during training.

  • Precision: The proportion of positive class predictions that were actually correct. For instance, if the model predicts 100 instances as positive and 70 of them are truly positive, the precision is 70 per cent. Precision shows how often an ML model is correct when predicting the target class.

  • Scalability: The capability of a machine learning system to handle increasing amounts of data and computational resources without compromising performance or precision. It involves the ability to process large datasets while still producing accurate results in a reasonable amount of time.

  • Sensitivity (also known as recall): The proportion of actual positive class instances that the model correctly identified. For example, if a dataset has 100 positive instances and the model correctly identifies 60 of them, the sensitivity is 60 per cent. It measures the ability of the machine learning model to identify all instances of the target class. 

  • Specificity: The model's ability to accurately predict true negatives for each category. In other words, specificity assesses how well the model correctly identifies instances that do not belong to the target class.

  • Training: The process of teaching a machine learning algorithm to make predictions or decisions based on data. 

    Example: In healthcare, this often involves training with clinical data to improve accuracy in diagnosis and treatment efficacy.
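For readers who want to see how the four headline metrics above relate to one another, they can all be computed from the counts of true/false positives and negatives in a confusion matrix. The following Python sketch is illustrative only; the counts are invented:

```python
# Illustrative sketch: computing accuracy, precision, sensitivity and
# specificity from a binary classifier's confusion matrix.
# tp = true positives, fp = false positives,
# fn = false negatives, tn = true negatives.

def metrics(tp, fp, fn, tn):
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,   # all correct predictions / all predictions
        "precision": tp / (tp + fp),     # correct positives / predicted positives
        "sensitivity": tp / (tp + fn),   # correct positives / actual positives (recall)
        "specificity": tn / (tn + fp),   # correct negatives / actual negatives
    }

# Invented example: 70 true positives, 30 false positives,
# 40 false negatives, 60 true negatives.
print(metrics(70, 30, 40, 60))
```

Note that a model can score well on one metric and poorly on another, which is why suppliers should be asked about all four rather than accuracy alone.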

Types of data

  • Big data: Extremely large and rapidly growing collections of diverse data types, both structured and unstructured, which are so complex that traditional data processing software cannot handle them.

    Example: In healthcare, big data can include processing multiple structured and unstructured data sources, such as genetic data, medical history and lifestyle factors, to support personalised medicine.

  • Structured data: Data that is organised and formatted in a specific way, making it easily readable and understandable by both humans and machines, allowing viewers to immediately recognise the type of data they are looking at. 

    Example: A patient's electronic health record (EHR) that includes fields for name, age, blood pressure and diagnosis codes is structured data. 

  • Synthetic data: Artificially generated data produced by computer algorithms or simulations, designed to mimic the patterns and characteristics of real-world data, and often used as an alternative to actual data.

  • Test data: An unseen dataset used as a final check to confirm that the ML algorithm was trained effectively and validate that the model can make accurate predictions.

  • Training data: The data used to train machine learning models. Curated training datasets are fed to machine learning algorithms to teach them how to make predictions or perform a desired task.

  • Unstructured data: Data that does not have a predefined structure or organisation. Unlike structured data, which is organised into neat rows and columns in a database, unstructured data is an unsorted and vast collection of information. 

    Example: In healthcare, unstructured data often includes medical notes, audio recordings of patient interactions and images from various diagnostic procedures.

  • Validation data: Data not included in the training set of the model, allowing data scientists to evaluate how well (using metrics like accuracy, precision, sensitivity and specificity) the model makes predictions on data it has not seen while it is being trained.
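In practice, the training, validation and test datasets are usually carved out of one original dataset. A minimal Python sketch of such a split is shown below; the 70/15/15 proportions are an assumption for illustration, and real projects typically use library helpers such as scikit-learn's train_test_split:

```python
import random

# Illustrative sketch: splitting one dataset into training, validation
# and test sets (70/15/15 here; the proportions are an assumption).
def split(records, seed=0):
    rng = random.Random(seed)   # fixed seed so the split is reproducible
    shuffled = records[:]       # copy so the original order is untouched
    rng.shuffle(shuffled)
    n = len(shuffled)
    train = shuffled[: int(n * 0.70)]
    val = shuffled[int(n * 0.70) : int(n * 0.85)]
    test = shuffled[int(n * 0.85) :]
    return train, val, test

train, val, test = split(list(range(100)))
print(len(train), len(val), len(test))  # 70 15 15
```

The key point for adopters is that the test set must stay unseen during training, otherwise the reported metrics will overstate real-world performance.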

Types of machine learning

  • Neural network: A type of machine learning programme that makes decisions similarly to the human brain. It processes data using interconnected units called neurons, which work together to identify patterns, weigh options, arrive at conclusions, and learn and improve over time. This method, inspired by how biological neurons function, teaches computers to handle complex problems by mimicking the brain's layered structure.  

  • Reinforcement learning: A subset of machine learning that allows an AI-driven system to learn through trial and error, using feedback from its actions. 

    Example: This is particularly useful in personalised medicine, where systems learn to optimise treatments based on individual patient responses.

  • Semi-supervised learning: A type of machine learning that falls in between supervised and unsupervised learning. It is a method that uses a small amount of labelled data and a large amount of unlabelled data to train a model. 

    Example: This approach is beneficial for patient data where obtaining fully-labelled datasets can be costly or impractical.

  • Supervised learning: A category of machine learning where labelled datasets (each input has a known output) are used to train algorithms to predict outcomes or recognise patterns. By studying these datasets, the computer learns to predict the output given new input data. It is like teaching a computer by showing it many examples and letting it figure out how to do things correctly. 

    Example: In healthcare, this method is extensively used for diagnostics, such as identifying diseases from medical imaging data.

  • Unsupervised learning: A type of machine learning that does not need labelled data or human guidance. It works with unlabelled data to discover patterns and insights within the dataset. The algorithms explore the dataset without explicit instructions to find unknown relationships or insights independently. It is like letting the computer explore the dataset without a teacher, uncovering patterns and structures by itself. 
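As a concrete, deliberately toy illustration of supervised learning, the sketch below "trains" a one-feature classifier from labelled examples and then predicts labels for new inputs. All feature values and labels are invented for illustration:

```python
# Illustrative toy example of supervised learning: learn a decision
# threshold for one feature from labelled examples (label 0 or 1).

def train(examples):
    # examples: list of (feature_value, label) pairs with labels 0 or 1
    zeros = [x for x, y in examples if y == 0]
    ones = [x for x, y in examples if y == 1]
    # "Learning" here is simply taking the midpoint between class means.
    return (sum(zeros) / len(zeros) + sum(ones) / len(ones)) / 2

def predict(threshold, x):
    # Inputs at or above the learned threshold are classed as 1.
    return 1 if x >= threshold else 0

# Train on six labelled examples, then classify two new inputs.
model = train([(1, 0), (2, 0), (3, 0), (7, 1), (8, 1), (9, 1)])
print(predict(model, 2), predict(model, 8))  # 0 1
```

Real supervised models work with many features and far more sophisticated algorithms, but the pattern is the same: labelled examples in, a predictive rule out.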

Types of model

  • Deep learning: A form of machine learning that employs artificial neural networks, inspired by the human brain, to learn from vast amounts of data (including labelled and unlabelled, structured and unstructured data). These networks enable digital systems to learn and make decisions automatically and independently, without human intervention. 

    Example: These models are increasingly used in areas such as pathology, radiology and genomics.

  • Foundation model: A machine learning model trained on a vast amount of data so that it can be easily adapted for a wide range of applications. A common type of foundation model is the large language model, which powers chatbots such as ChatGPT.

  • Human in the loop: A system comprising a human and an AI component, in which the human can intervene in some significant way, such as by training, tuning or testing the system’s algorithm, so that it produces more useful results. It is a way of combining human and machine intelligence, helping to make up for the shortcomings of both.

  • Large language model (LLM): A machine learning model capable of performing various natural language processing tasks. These tasks include generating and classifying texts and images, answering questions conversationally, translating between languages, predicting, and summarising content. It uses deep learning algorithms and a vast dataset to achieve these capabilities. 

    Example: In healthcare, these models assist in clinical decision support and patient interaction.

  • Multimodal model: A machine learning model that processes and combines different types of data, such as images, videos and text, to make more accurate determinations, draw insightful conclusions, or make precise predictions about real-world problems.

    Example: Multimodal models can combine data from an electronic health record, an image captured by an X-ray and a radiologist's written description of an X-ray to derive conclusions around diagnoses.

Applications of AI

  • AI hallucination: When an AI, such as a large language model, produces false or misleading information that seems factual but is actually inaccurate or nonsensical. This can occur through identifying patterns that do not exist in real life.

    Example: An AI model suggesting the wrong medication for a patient based on hallucinated data. 

  • Ambient AI: A type of AI that blends into the environment to improve human interaction without being noticeable. It works quietly in the background, using sensors to understand and predict human behaviours, and continuously collects data from those sensors to make real-time decisions.

    Example: In healthcare, ambient AI can be used to monitor patient conditions in real time, optimise hospital operations and deliver personalised healthcare services, all while minimising the need for direct human command or intervention. It enhances patient care by predicting needs and intervening proactively, thereby improving patient outcomes and operational efficiency.

  • Computer vision: A field of AI that trains computers to interpret and understand the visual world. Machines can accurately identify and locate objects and then react to what they ‘see’ using digital images from cameras, videos and deep learning models.

    Example: Computer vision is used in tools that automatically screen for diabetic retinopathy from retinal images.

  • Decision support system: A computer-based system that helps users make decisions by analysing large amounts of data, providing insights and suggesting possible courses of action. It combines data, analytical models and user-friendly software to support problem-solving and decision-making.

    Example: In healthcare, this could include a machine learning algorithm that analyses radiology images to provide a diagnosis to support physician decision-making.

  • Digital twin: A computer model that simulates an object in the real world, such as a biological system. Analysing the model’s output can tell researchers how the physical object will behave, helping them to improve its real-world design and/or functioning. 

  • Generative AI: Algorithms capable of creating new original content, including text, images, audio, simulations and software code, in response to user prompts or requests. 

    Example: In the medical field, generative AI is used to simulate patient data, develop virtual models for training, and generate synthetic biological data for research.

  • Predictive analytics: The process of using data to forecast future outcomes. The process uses data analysis, machine learning, AI and statistical models to find patterns that might predict future behaviour. 

    Example: Healthcare applications include predicting disease outbreaks, patient deterioration and therapy outcomes.

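To make predictive analytics concrete, here is a deliberately simple Python sketch that estimates a patient's risk of missing an appointment from their attendance history. The records and the simple frequency-based model are invented for illustration; real systems use far richer data and models:

```python
# Illustrative sketch of predictive analytics: estimate the risk that a
# patient misses their next appointment from past attendance alone.

def dna_risk(history):
    # history: list of past appointments, True = attended, False = missed
    if not history:
        return 0.5  # no history: assume average risk (an assumption)
    return history.count(False) / len(history)

# Invented attendance histories for two hypothetical patients.
print(dna_risk([True, True, False, True]))    # 0.25
print(dna_risk([False, False, True, False]))  # 0.75
```

A service could use such scores to target reminder calls at the highest-risk patients, which is the kind of approach the did-not-attend case study below describes at much greater sophistication.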

How the healthcare system is using AI

The potential uses of AI are widespread, from clinical uses to administrative support, in both primary care and hospital settings. Below we have highlighted examples from within our membership that showcase the breadth of potential AI use within the healthcare system. These examples also put the AI jargon into a real-life context to help further understanding of key terminology. 

Reducing DNAs and last-minute cancellations

Background

Mid and South Essex (MSE) NHS Foundation Trust supports a population of 1.2 million people. It had an average did-not-attend (DNA) and short-notice cancellation rate of 8 per cent (the average in England is approximately 7.6 per cent). 

DNAs are more common among patients grappling with work and caring responsibilities, who find it difficult to travel to the hospital at a specific time. This disproportionately affects patients from minority and marginalised communities and can thereby lead to a widening of health inequalities. MSE found that its DNA rate for the most deprived was consistently higher than that for the least deprived across all age groups. 

Previously

Many practices and hospitals send letters, emails, text and calls to remind patients of their appointments. These blanket approaches do not account for individual patient behaviours. 

Healthcare staff may manually review appointment histories and contact patients who had missed appointments in the past, but this is time and resource intensive. Some organisations may overbook certain clinics to compensate for anticipated no-shows.

Solution

MSE rolled out Deep Medical, which uses a model that integrates machine learning to predict patient no-shows and short-notice cancellations (<48 hours). Using AI, the system processes both structured data (like patient demographics and appointment history) and unstructured data (like electronic health record notes) to identify relevant patterns. 

The predictive analytics model is developed through a deep learning model which is a subset of supervised machine learning. This model is trained using training data, fine-tuned with validation data and evaluated with the test data to ensure the model is accurate and precise and has high sensitivity and specificity. The neural network format of this model allows it to recognise complex patterns in real-world data. 

It uses AI to understand patient engagement and develop personalised reminder schedules for patients. It then uses automation to send these reminders and manage patient appointments. It also identifies frail patients where low compliance could potentially be of clinical concern and highlights them to the relevant clinical teams. 

Outcomes

Based on a six-month pilot scheme at MSE, the model:  

  • created an additional 1,910 patient visits into clinic 
  • prevented 377 DNAs 
  • filled 217 last-minute cancellation slots
  • reduced the DNA rate by up to 50 per cent when patients were proactively contacted two to three weeks ahead of their appointments
  • is predicted, when used at full scale, to allow an additional 80,000 patients to be seen each year at the trust, significantly increasing productivity.

Top tips

Make sure you are clear on:

  • what the specificity and sensitivity of the model are, and what the impact of misclassified cases would be
  • where the training data has come from and how valid it is in your local population or whether revalidation is required
  • how to audit the machine learning to ensure it doesn’t drift from the primary use or introduce new biases.

Transforming wound care

Background

In North Cumbria approximately 50 per cent of the community nurses' workload involves managing both acute and chronic wounds, at a cost of around £41.7 million a year. 

Previously

Prior to using the AI model, community nurses would assess wounds with a manual tape measure. Band 3 and 4 staff are sometimes unsure how to dress and treat more complex wounds, so they would take a photo and send it back to base, where a band 6 or 7 would look at the photos and advise remotely on how to change and apply the dressing. In more severe cases the band 6/7 would need to go out to treat the wounds themselves, reducing their time to care.

This was a very subjective way to treat and dress wounds, with a lot of room for interpretation and no consistency across community teams. There was also no way to keep a record of the wounds and how they changed. 

Solution

North Cumbria Integrated Care (NCIC) Foundation Trust, as part of North East and North Cumbria (NENC) ICB, transformed its approach to wound care by introducing a digital tool, Minuteful for Wound by Healthy.io, which uses an AI algorithm to support clinicians at various experience levels to assess wounds confidently, consistently and safely. 

This model offers a higher degree of accuracy and precision than the current standard care. The digital tool enables consistent wound imagery, measurements and assessments to be captured easily at the point of care. The AI-powered colour recognition technology automatically detects wound area and tissue types within the wound bed from a three-second smartphone video scan of the wound. The AI also ensures image quality by recognising lighting conditions, distance and scan technique. Assessment prompts within the app guide clinicians of all experience levels to record assessments aligned to best-practice guidance according to wound type.

The approach offers a live caseload review portal to provide central access to wound data. With this easily accessible data, clinical teams are able to optimise care plans.

The data gathered from AI measurements and assessments enables wounds that may be deteriorating or static over time, and that might otherwise go unnoticed, to be identified and flagged for review. The new approach includes assigning clinicians to perform regular virtual caseload reviews within the tool's portal. This earlier detection and ability to review the caseload virtually leads to earlier intervention and faster healing for patients.

Outcomes

Since using the model, NCIC has made the following positive impact for both staff and patients. 

For the workforce, it has: 

  • opened access to data to help them understand wound care admission and case loads 
  • upskilled and retrained the workforce to dress and treat wounds with confidence, which has led to greater empowerment of junior clinical staff to undertake assessments
  • provided assurance for staff as they are creating a log of the wound which provides detail on how the wound was treated if the patient is admitted into hospital
  • released time to care for band 6/7 staff
  • reduced admin burden for staff.

For patients, it has: 

  • reduced healing time and improved patient outcomes 
  • reduced hospital admissions 
  • supported self-care 
  • minimised the likelihood of future infections, admissions to hospital and amputations. 

By working in partnership with the company, NENC ICB is hoping to: 

  • reduce incidents by early identification of deteriorating wounds and clinical risks
  • standardise clinical data collection and reporting
  • improve consistency of adherence to formulary and best-practice guidelines
  • increase system-wide collaboration with ICS partners to expand digital wound management beyond the trust, to primary care, acute services and nursing/care home settings
  • enable access to services, including emergency departments/vascular specialists both via app and the web page portal for greater information sharing. 

Top tips

Make sure you are clear on:

  • what IT infrastructure and resources are required for roll out and whether you have these already
  • what training, standard operating procedures and ways of working are required for an effective roll out.

Impact on the workforce

To find out more about the benefits of this project for the workforce as well as patients, please read this case study.

Improving patient triage and optimising staff time

Background

Chapelford Medical Centre wanted to update its systems to triage patients in a smarter way, helping them get the care they need while also optimising use of staff time. 

Previously

Chapelford Medical Centre had a triage tool and questionnaire to collate information that supported staff with triaging patients. This used automation and operated like a decision tree. The centre wanted to move to an approach that allowed evidence-based and AI-supported decisions about the best course of action for a patient’s care. The ultimate goal was to make informed decisions and spot opportunities that would not be possible with a human-only approach, based on combining the information presented by the patient with information within patient records. 

Solution

Anima, an integrated care platform, has both automation and AI modules. In the automation module, patients complete a questionnaire to support with triage and based on their answers, they will be directed to an appropriate care pathway. 

In the AI module, Anima uses analysis of the structured and unstructured data input into the questionnaire by patients, and machine learning based on the outcomes of patients who go through the pathway to predict the best course of action for future patients. It uses a model and algorithm to follow the pathway of patients through triage and their outcomes, and refine the course of action for future patients with similar characteristics who fill in the questionnaire. 

Outcomes

Anima went live within the GP practice in August 2023 and the ambition is to expand across the primary care network from August 2024. 

The Chapelford Medical Centre team plans to be in regular contact with Anima and provide mutual training support. The team will review whether improvements can be made to the AI's suggestions, and also build their own experiential learning for next time by using a trusted source of information.

They are aiming to create an enhanced access system based on need, and not ‘first in the queue’. This includes better management of patients, ultimately less demand, an improved workload profile for clinicians, and a fully-functional multi-disciplinary team. AI will support the team at each stage, enabling staff to signpost effectively, and employ more junior team members to make effective and supported decisions. Access increases, patient need decreases, clinician workload levels out, and work-life balance improves.

Chapelford hopes to achieve a position where all patients get the care they need every time, first time. This is a challenge, but one that will significantly improve patient journey, and also maximise the potential of appointments by managing patients within one consultation. They aim to maximise individual clinician’s skills and achieve a 0 per cent need for referral between teams for the presenting condition. 

Numbers of staff are unlikely to increase, so Chapelford hopes to increase effective capacity through more efficient use of the team and getting it right first time. By feeding outcomes back into the AI system, the team expects the system to continually improve through a real-time feedback loop.

Top tips

Make sure you are clear on: 

  • which elements of the solution use AI, and whether it really is AI
  • what the AI is improving or making better, and how it is measured
  • what the clinical safety case is for the solution you are buying
  • what your North Star is and whether the AI product feeds into a bigger and wider solution.

Transforming the cataract care pathway

Background

The cataract care pathway at Chelsea and Westminster Hospital was experiencing long waiting times and limited capacity.

Previously

The pathway required patients to attend a mandatory in-person pre-operative appointment before being placed on the surgery waiting list for their first eye. The did-not-attend (DNA) rate for these pre-operative clinics ranged from 30 per cent to 50 per cent, resulting in wasted clinical capacity.

Post-surgery, although the risk of complications and adverse outcomes from cataract surgery is very low (less than 5 per cent), all patients were asked to attend an in-person appointment four weeks later. After consultation, patients were listed for their second eye surgery or discharged to the community.

Solution

Chelsea and Westminster Foundation Trust leveraged Ufonia’s Dora, an AI clinical assistant that can telephone patients and have a routine clinical conversation using AI-enabled automation, to transform their cataract care pathway. 

Dora has a telephone-based voice conversation with patients at multiple timepoints in their cataract journey. This includes pre-operative assessment health screening, surgery and appointment reminders, post-operative checks and patient-reported outcome measures. The technology uses AI natural language processing to be able to interpret a patient's responses. It focuses on making AI accessible, meaning patients do not require any technology understanding, user accounts, hardware devices or training – they simply have a conversation. 

Outcomes

Dora was initially deployed as a pilot, with pre-defined technical and operational success criteria. These were met during the initial phase, meaning the trust now plans to adopt Dora to support its cataract patient pathways, to advocate for regional implementation across North West London ICB, and to consider expanding the solution to other clinical pathways. 

Successes include:

  • 65 per cent call completion rate at pre-operative assessment stage, surpassing the original target of 60 per cent
  • 91 per cent agreement rate between Dora and Chelsea and Westminster Hospital clinical staff, exceeding the goal of 90 per cent
  • on-the-day cancellation rate dropped significantly to just one patient in six months, compared to four patients per month before implementing Dora, surpassing the target of two patients per month
  • fewer than 2.5 per cent of patients at the post-operative check experienced an unexpected management change, well below the target of 10 per cent
  • 63 per cent of completed calls passed the Dora assessment, exceeding the target of 50 per cent 
  • honoured as finalists in the HSJ Digital Awards 2024 for the Driving Change through AI and Automation Award.

Top tips

Make sure you are clear on: 

  • what implementation support exists from the tech side to ensure key milestones can be met
  • what resources might be required and what lessons could be learned from previous implementations, to support internal resource allocation
  • which metrics are being collected and when, to ensure robust evaluation following implementation.

To AI or not to AI? Key questions to consider

It is important to remember that AI may not be the appropriate solution to the problem your organisation is facing. Start with the challenge and ensure you select a solution that solves the issue. To support with innovation adoption, please refer to our Scaling Innovation guide. 

To help determine whether AI is the right fit for solving the problem, and whether further exploration of the solution is required, we have prepared a set of key questions to ask suppliers of AI solutions in early discussions:

Questions to ask suppliers, and what to look out for

  1. What is this AI solution designed to do?

     • Do the intended uses of the AI solve the issues you are trying to solve?

     • What uses does the AI have regulatory approval and certifications for? Are there any exclusions or limitations you need to be aware of?

  2. What evidence is there supporting the effectiveness of the AI? Does evidence exist in real-world settings?

     • Has the solution been tested in a real-world clinical setting? 

     • How do the results compare to effectiveness without use of the AI component? Is the AI adding value?

     • How similar is the training data cohort to your population? 

     • Are there potential biases that have been raised that you need to consider and explore in more depth? Does the supplier indicate if and how they mitigate against any biases?

     • Has another organisation rolled out the product that you could contact to gather their experience?

  3. What implementation support within your existing infrastructure is required?

     • What level of effort will be required to integrate the solution into existing digital pathways and for ongoing monitoring? 

     • Will any changes be required in IT and digital systems and data management? 

     • Has another organisation rolled out the product that you could contact to gather their experience?

These questions can be used to gather an understanding of the purpose of the proposed AI, the evidence supporting its use and the implementation support required. 

If after hearing the responses you feel the product may be a good solution to the problem you are trying to solve, deeper exploration is required before a final procurement decision can be made. 

Next steps if moving forward with an AI solution

Following the initial conversation, if you have decided the product could be a good solution, assemble a team of experts from within your organisation covering data and IT infrastructure, information governance, implementation and transformation, cyber security and clinical leadership (if the AI has a clinical component) to further explore the feasibility of adopting the AI into your organisation. Several resources can support AI adopters with navigating the complex environment of adopting AI within the NHS, including: 

  • A Buyer’s Guide to AI in Health and Care, developed by the NHS Transformation Directorate in 2020, contains additional questions and detailed answers. The majority of the information is still relevant in the 2024 environment; however, elements of the regulatory environment section may not reflect the current situation. 
  • Developing Healthcare Workers’ Confidence in AI, a report developed by the NHS AI Lab and Health Education England in 2022, contains additional questions to consider when conversations with suppliers develop (pages 88-91).
  • All Adopters' Guidance lists all the guidance and regulations that apply to adopters of digital technologies in health and social care.
  • Artificial Intelligence, where the NHS Digital Academy has curated resources, including a framework to support healthcare workers in assessing their level of understanding of AI.
  • Adopt AI, on the NHS England Transformation Directorate website, curates a selection of resources.

If you are interested in sharing your experience of using AI within your organisation, or getting involved in future work the NHS Confederation is doing in this space, please contact: Rezina Hakim at rezina.hakim@nhsconfed.org