A Marymount High School student publication

The Anchor



Addressing Bias in AI: Is AI Racist, and What Can We Do About It?


Summary of Artificial Intelligence in the Medical Field

As artificial intelligence becomes more advanced and integrated into modern society, this type of technology can be beneficial, but also harmful, in the medical field. Scientists today are developing artificial intelligence to diagnose patients; however, with that type of technology there is an inherent risk of bias. Computer scientist Marzyeh Ghassemi, Ph.D., aims to combat this exact problem through her research at the Massachusetts Institute of Technology. In the early stages of her journey, Ghassemi researched how artificial intelligence is used in the medical industry and how limitations in training data create disparities, causing machine learning models to exhibit shortcomings.

During her research, Ghassemi worked with two MIT Ph.D. students, Yuzhe Yang and Haoran Zhang, and EECS computer scientist Dina Katabi, the Thuan and Nicole Pham Professor. In their paper, they examined how machine learning models respond to different subpopulation shifts; by learning more about these mechanisms, researchers can build more equitable artificial intelligence models. More importantly, uncovering the mechanisms can help artificial intelligence correctly identify patients in underrepresented subgroups.

The four shifts that the group identified are spurious correlations, attribute imbalance, class imbalance, and attribute generalization. In the paper, the group used an example of camels and cattle: shown a camel on sand and a cow on grass, the machine generalizes the animals' attributes by identifying the type of ecosystem each inhabits. In the setting of the medical field, this type of generalization can have detrimental effects, as an individual's medical condition can be specific, not as simple as the type of environment they live in. For example, if males in a data set were diagnosed with pneumonia more often than females, the machine would perform better for males. Though the researchers applied classifiers to the machine learning model, there has unfortunately been no improvement in attribute generalization. The ultimate goal of this research, however, is achieving equity in healthcare for all populations.
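The pneumonia example above can be sketched in a few lines of code. This is a minimal illustration with invented numbers, not the researchers' actual model: it imagines a model that, after training on skewed data, learned the spurious shortcut "male means pneumonia," and shows how that shortcut makes the model perform better for males than for females on a balanced test group.

```python
def shortcut_model(sex):
    # The spurious rule learned from imbalanced training data:
    # predict pneumonia for males, healthy for females.
    return sex == "male"

# Hypothetical toy test set: 10 patients of each sex,
# 7 of each sex actually have pneumonia.
test = ([("male", True)] * 7 + [("male", False)] * 3 +
        [("female", True)] * 7 + [("female", False)] * 3)

def accuracy(patients, sex):
    # Fraction of patients of the given sex the model diagnoses correctly.
    group = [(s, sick) for s, sick in patients if s == sex]
    correct = sum(shortcut_model(s) == sick for s, sick in group)
    return correct / len(group)

print(f"male accuracy:   {accuracy(test, 'male'):.0%}")    # 70%
print(f"female accuracy: {accuracy(test, 'female'):.0%}")  # 30%
```

Even though pneumonia is equally common in both groups of this toy test set, the shortcut model catches 7 out of 10 sick males but misses all 7 sick females, which is exactly the kind of life-threatening disparity the researchers warn about.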

Reflection 


The practice of artificial intelligence is fascinating. As a student who uses artificial intelligence tools such as TurnItIn and Parlay, I find it interesting how artificial intelligence is applied not only in academia but also in medicine. However, with the encroaching presence of artificial intelligence, this type of technology concerns me, as it has the potential to save or kill someone. According to the article, the main concern with this type of technology is bias, and as history shows (such as Henrietta Lacks or the Tuskegee Syphilis Study), bias in the medical field is life-threatening. Though I admire that researchers are addressing the inherent bias of artificial intelligence in the medical field, it concerns me that if problems such as attribute generalization (which, in my opinion, is just a fancy term for stereotyping) are already an issue, would it even be beneficial to give artificial intelligence a place? One may argue that having artificial intelligence in the medical field can bring efficiency and productivity, allowing for the advancement of both technology and medicine. However, it is important to note that many of the delays and issues faced in the medical field are rooted in bias and discrimination. If we add artificial intelligence to the equation, this raises the question of who will be held accountable if a patient is inaccurately diagnosed due to bias in the code. At the current stage of artificial intelligence, I believe it is too early for it to take a place in the medical industry.

Though not explicitly mentioned in the article, it is important to note that the current STEM field is dominated by white men. This can cause an imbalance in diagnosing patients from minority groups, as most artificial intelligence code is written by white male scientists and can carry negative implicit bias. This type of unconscious prejudice can have detrimental effects, such as failing to identify a patient's medical condition correctly. This is why diversity in the STEM field is important: especially in this case, it could be a matter of life or death.

Link

https://news.mit.edu/2023/how-machine-learning-models-can-amplify-inequities-medical-diagnosis-treatment-0817