AI, machine learning algorithms are susceptible to biased data
Harmful stereotypes and inequalities influencing healthcare can create potentially damaging biases in the data on which artificial intelligence and machine learning algorithms are trained.
That’s the contention of Pilar Ossorio, professor of law and bioethics at the University of Wisconsin-Madison Law School.
According to Ossorio, gender and race affect how patients are treated in the U.S. healthcare system, and those social problems carry over into the data, producing algorithmic bias.
She points to studies that have shown statistically significant differences in treatment: women are under-treated compared with men for conditions such as heart disease, even when they present to physicians with the same set of symptoms.
“That leads to poor treatment, and that’s going to be reflected in essentially all healthcare data that people are using when they train their algorithms,” Ossorio told last week’s Machine Learning for Health Care conference in Ann Arbor, Mich. “You need to be thinking about this. It’s not just a technical problem. It’s also an ethical problem. We as a community need to be developing some guidelines about how we deal with problems like this.”
Ossorio made the case that race also affects how patients are treated in healthcare, citing the lack of diversity in genomic data. In particular, she noted that genomic datasets, such as those assembled through genome-wide association studies (GWAS), are heavily skewed toward people of Northern European descent.
“There are still large groups of people for whom we have almost no genomic data,” added Ossorio. “This is another way in which the datasets that you might use to train your algorithms are going to exclude certain groups of people altogether.”
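One practical implication of that warning is auditing a training set's demographic makeup before model development begins. The sketch below is a minimal illustration, not a tool Ossorio described; the pandas DataFrame, the "ancestry" column, and the 5 percent threshold are all hypothetical stand-ins:

```python
import pandas as pd

def audit_representation(df: pd.DataFrame, group_col: str,
                         min_share: float = 0.05) -> pd.DataFrame:
    """Report each group's share of the dataset and flag groups whose
    representation falls below a minimum threshold."""
    shares = df[group_col].value_counts(normalize=True)
    report = shares.to_frame(name="share")
    report["under_represented"] = report["share"] < min_share
    return report

# Illustrative, made-up cohort: real genomic metadata would be loaded
# from study files, not constructed inline like this.
cohort = pd.DataFrame({
    "ancestry": ["European"] * 880 + ["East Asian"] * 70 +
                ["African"] * 30 + ["South Asian"] * 20,
})
print(audit_representation(cohort, "ancestry"))
```

On data skewed the way Ossorio describes, a check like this would flag the smaller ancestry groups before a model is ever trained on them.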
Separately, Ossorio pointed out that people of color are under-treated for pain compared with white patients undergoing similar kinds of medical procedures.
“There are enormous amounts of data on this,” she said. “This is often framed as people of color being under-treated for pain. Perhaps there are times when white people are being over-treated for pain. It’s not always straightforward and easy to understand where the injustice even lies when you see a disparity.”
At the same time, Ossorio acknowledged that not all gender and racial differences in health or healthcare reflect injustice. Nonetheless, she emphasized that machine learning could either help ameliorate injustice in healthcare or contribute to it.
“If the healthcare system gets serious about making sure that people with similar disease presentations get similar care, you can make almost all of those disparities go away,” Ossorio concluded. “If our algorithms can identify those kinds of situations and they are used in the right way, they could actually help us to practice better, fairer and more just medicine.”
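Her closing suggestion, that algorithms could flag cases where similar disease presentations receive dissimilar care, can be sketched as a simple disparity check. The example below is illustrative only; the function name, the patient counts, and the choice of a two-proportion z-test are assumptions, not a method Ossorio proposed:

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

def treatment_disparity(treated_a: int, total_a: int,
                        treated_b: int, total_b: int) -> dict:
    """Compare treatment rates between two patient groups presenting
    with the same condition, using a two-proportion z-test."""
    counts = np.array([treated_a, treated_b])
    totals = np.array([total_a, total_b])
    _, p_value = proportions_ztest(counts, totals)
    rate_a, rate_b = counts / totals
    return {"rate_a": rate_a, "rate_b": rate_b,
            "gap": rate_a - rate_b, "p_value": p_value}

# Made-up numbers: 300 of 400 patients in group A treated, versus
# 240 of 400 in group B with the same presentation.
print(treatment_disparity(300, 400, 240, 400))
```

A statistically significant gap from a check like this would not by itself say where the injustice lies, echoing Ossorio's caution, but it would identify the situations worth human review.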