The dilemma of using AI in the ICU: vulnerable to triggering erroneous medical actions

Ali Akbar Septiandri

Writing in The Conversation, Ali Akbar Septiandri (UCL Statistical Science) explains how we can prevent bias in the use of AI in the medical world.

Patients treated in the intensive care unit (ICU) are usually connected to an IV line and a number of medical devices. Devices such as vital-sign monitors track the patient’s vital signs, such as heart rate, blood pressure, and oxygen saturation (the level of oxygen in the blood).

These medical devices are rich in data that can support diagnosis, a process now greatly aided by artificial intelligence (AI). For example, AI can help monitor the condition of patients after cardiac surgery, predict the progression of COVID-19 patients’ conditions, identify observable signs (clinical phenotypes) of sepsis, and determine the optimal treatment strategy for a patient.

The problem is that not all AI models (programs trained to recognize patterns) can produce the help doctors need to diagnose patients. Moreover, the data sources that doctors must process are very diverse, ranging from medical device readings in the ICU and laboratory analyses of blood samples to the patient’s health history.

The diversity of data sources makes the medical decision-making process more complex. And as the complexity of the cases handled increases, the results of AI readings become less and less transparent.

This opacity can allow bias, whether we realize it or not, to affect the medical actions that doctors perform.

AI is vulnerable to exacerbating bias in medical decisions

In the medical world, several studies have highlighted biases that can influence doctors’ decisions. A 2019 study, for example, revealed that female patients were at greater risk than men of having their intensive care reduced or discontinued. This trend held even after other factors, such as old age and the severity of the patient’s condition, were taken into account.

Another study found inequality in health care between Black and white patients in the United States (US). This inequality contributes to a higher death rate for Black patients in the ICU than for white patients.

Bias in the medical world is likely to get worse if doctors rely solely on AI models to diagnose diseases, without critically analyzing the data themselves. This is especially true when AI models learn patterns from biased data, whether because of suboptimal sampling processes or because of social inequalities that limit people’s access to health care facilities.

This phenomenon is like students completing college assignments with ChatGPT without critically reviewing the results.
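To see how this happens mechanically, below is a minimal sketch using synthetic data: a model is trained on historically biased “escalation” decisions and then reproduces the disparity. The group attribute, effect size, and escalation task are illustrative assumptions, not drawn from any real data set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# True severity of illness, identical in distribution across both groups.
severity = rng.normal(size=n)
group = rng.integers(0, 2, size=n)  # hypothetical demographic attribute

# Historically biased labels: at the same severity, group 1 was escalated
# to intensive care less often. This is the bias hidden in the data.
p_escalate = 1.0 / (1.0 + np.exp(-(severity - 0.8 * group)))
escalated = rng.binomial(1, p_escalate)

# Train on the biased labels, with the group attribute as a feature.
X = np.column_stack([severity, group])
model = LogisticRegression().fit(X, escalated)

# The model reproduces the disparity: for two patients with identical
# severity, it predicts a lower chance of escalation for group 1.
same_severity = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(same_severity)[:, 1])
```

Nothing in the training pipeline flags this as a problem: the model is “accurate” with respect to the biased labels, which is exactly why human scrutiny of the data remains essential.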

AI models do promise better accuracy and personalization in aiding diagnosis. However, their results are derived from large and very complex data, so human analysis is still needed before any medical action is taken.

Without skepticism about this technology, the impact could be fatal.

Preventing bias in the use of AI in the medical world

Unfortunately, in practice there is often a push to automate the decision-making process for medical actions on the grounds of using intensive care resources efficiently.

During the COVID-19 pandemic, for example, pressure on ICU capacity encouraged the automation of decision-making processes such as patient bed allocation. In this kind of precarious situation, the temptation to take shortcuts by relying on AI models without scrutinizing their outputs grows. Meanwhile, a number of barriers to accessing health services, such as limited insurance coverage, the rigid work schedules of health workers, and long distances to the nearest health facilities, are thought to exacerbate bias in the medical world.

AI models trained on decisions shaped by existing biases will learn those patterns as the norm, then treat them as important knowledge in the decision-making process.

Therefore, preventive measures are needed to avoid bias due to the adoption of AI-based medical technology, including:

1. AI education for healthcare practitioners

Health practitioners need to be educated about AI so that they are more alert to both the potential and the limitations of this rapidly developing technology.

Health education institutions should overhaul their curricula and make AI one of the subjects taught. This can help change how health practitioners think about the use of AI, which is developing at great speed.

2. Driving more clinical AI innovation

Healthcare practitioners should encourage AI developers to build models whose data analysis can be interpreted clinically, simply, and transparently.

Policymakers need to require AI developers to create transparent models before the technology is widely adopted.
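As one illustration of what “clinically interpretable” can mean in practice, the sketch below fits a simple model whose learned weights can be read directly as odds ratios. The feature names and data are hypothetical placeholders, not a real clinical model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical vital-sign features; the data is synthetic and wired to an
# invented ground truth, purely for illustration.
feature_names = ["heart_rate", "mean_bp", "spo2", "lactate"]
rng = np.random.default_rng(1)
X = rng.normal(size=(500, len(feature_names)))
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(1.2 * X[:, 3] - 0.8 * X[:, 2]))))

model = LogisticRegression().fit(X, y)

# Each weight maps to a single named measurement, so a clinician can see
# which signals drive the prediction and in which direction.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: odds ratio per unit increase = {np.exp(coef):.2f}")
```

A more complex model may be more accurate, but it cannot be audited this directly; that trade-off between accuracy and transparency is what such requirements have to weigh.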

3. Research collaboration

Another crucial issue that is often overlooked is the limited diversity of ICU data available to the public. A 2022 review revealed that the publicly available ICU data used in many studies came from only four different data sets, all from Europe and the United States.

Therefore, global research collaboration on sharing ICU data is needed. As mentioned earlier, a model trained on biased data will only perpetuate that bias. As an illustration, an AI model’s performance can drop drastically when it is used in a different hospital, even within the same country. Local data from a wider range of countries is therefore urgently needed to reduce the severity of such biases.
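The sketch below illustrates this failure mode with synthetic data standing in for two hospitals: the model learns a site-specific shortcut that does not transfer. Both “hospitals” and the spurious feature are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

def make_hospital(n, spurious_strength):
    # The same underlying illness-outcome relationship at both hospitals...
    severity = rng.normal(size=n)
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-2.0 * severity)))
    # ...but one hospital has a site-specific signal (say, a local protocol
    # code) that happens to track the outcome there and nowhere else.
    spurious = y * spurious_strength + rng.normal(size=n)
    X = np.column_stack([severity + rng.normal(size=n), spurious])
    return X, y

X_a, y_a = make_hospital(2000, spurious_strength=3.0)  # source hospital
X_b, y_b = make_hospital(2000, spurious_strength=0.0)  # external hospital

model = LogisticRegression().fit(X_a, y_a)

print("AUC at source hospital:  ",
      roc_auc_score(y_a, model.predict_proba(X_a)[:, 1]))
print("AUC at external hospital:",
      roc_auc_score(y_b, model.predict_proba(X_b)[:, 1]))
```

External validation of this kind, testing a model on data from hospitals and countries it was not trained on, is only possible if that data is shared in the first place.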

The medical community can take a cue from the computational linguistics community, which holds the Language Resources and Evaluation Conference (LREC) every two years. This forum aims to publish data and research results from various regions of the world.

Meanwhile, the global medical community needs to hold conferences and create journals that focus on communicating medical data and research from countries that are underrepresented in the global academic world.

4. Regulation of the use of AI

It is not enough to stop there; governments need to implement regulations on the ethics of using AI in the medical world. The WHO and the European Union, for example, have formulated regulations to ensure and evaluate the performance, safety, and accuracy of AI models in the health sector, so that they are easy to use and free from bias and discrimination.

Ensuring bias-free intensive care requires a comprehensive effort involving a diverse range of stakeholders, including doctors, health workers, academics, technology developers, and governments.

Commitment from all stakeholders is needed to start this collaboration. This step can begin with understanding, acknowledging, and evaluating the existence of invisible bias in health services. 

This article was originally published in The Conversation on 26 January 2025.

