Scientists at Conicet and the Universidad Nacional del Litoral (UNL) have developed a research project aimed at identifying and predicting artificial intelligence (AI) bias in diagnostic imaging, thereby minimizing the margin of error in medical diagnosis. A researcher from Tandil led the initiative, and the team was recently selected for a Google Award for Inclusion Research (Google AIR).
The Argentine-led project, “Unsupervised Bias Discovery: Predicting the Problem of Algorithmic Fairness in Machine Learning Models for Medical Image Analysis Without Reference Annotations”, was selected by Google AIR, the company’s international recognition for researchers whose work positively impacts the public interest.
Enzo Ferrante, a Conicet researcher at the Institute for Signals, Systems and Computational Intelligence (sinc(i), Conicet-UNL), led the research, which aims to identify and predict artificial intelligence (AI) bias in diagnostic images, thereby minimizing the margin of error in medical diagnosis.
Ferrante is from Tandil, Buenos Aires province, where he graduated as a systems engineer. He later specialized in France and England, and his work involves significant international collaboration and exchange.
“We have been working for many years on the development of artificial intelligence systems for image-based diagnosis that assist radiologists and doctors, who always make the final decision. This assistance takes many forms. For example, the neural network technology we use can detect the presence or absence of a pathology in medical images, or tell from an X-ray whether a person has pneumonia. We have been using artificial intelligence for this kind of development for a long time. We also work with magnetic resonance imaging of the brain to locate tumors and measure them. It is a whole line of work on artificial intelligence for medical image analysis, an area I have been working in since my PhD in Paris,” explains Ferrante.
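To illustrate the kind of assistance system he describes, the sketch below classifies a chest X-ray as pneumonia or not using a pretrained convolutional network. It is a minimal example assuming a PyTorch/torchvision setup; the model choice, preprocessing, and file path are illustrative assumptions, not details of the sinc(i) team’s actual pipeline.

```python
# Minimal sketch of a chest X-ray pneumonia classifier (illustrative only;
# not the researchers' actual system). Assumes PyTorch and torchvision.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Pretrained backbone with a binary classification head
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # [no pneumonia, pneumonia]
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),  # X-rays are single channel
    transforms.ToTensor(),
])

def predict(path: str) -> float:
    """Return the model's estimated probability of pneumonia for one X-ray."""
    x = preprocess(Image.open(path)).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)
    return probs[0, 1].item()

print(predict("chest_xray.png"))  # hypothetical file path
```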
He added: “In this particular project, which Google is funding through this call, the idea is to be able to predict the emergence of these biases. That is, to try to assess whether the system is biased before it is put into use.”
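How might bias be probed before deployment, when no reference annotations are available? One simplified possibility, sketched below, is to compare the model’s output score distributions across demographic groups on unlabeled pre-deployment data and flag large gaps for investigation. This is an illustrative assumption for this article, not the method the project developed; the data and group labels are hypothetical.

```python
# A simplified, label-free bias probe (illustrative only): compare the
# model's output score distributions across demographic groups. A large
# distributional gap is a warning sign worth investigating.
import numpy as np
from scipy.stats import ks_2samp

def score_gap(scores: np.ndarray, groups: np.ndarray, a: str, b: str):
    """Kolmogorov-Smirnov distance between prediction scores of two groups."""
    stat, p_value = ks_2samp(scores[groups == a], scores[groups == b])
    return stat, p_value

# Hypothetical model outputs on unlabeled pre-deployment data
scores = np.random.rand(1000)
groups = np.random.choice(["female", "male"], size=1000)
print(score_gap(scores, groups, "female", "male"))
```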
In this context, the term “biased” refers to artificial intelligence models whose performance differs depending on the demographic group to which a patient belongs. When analyzing health data with AI, this issue looms large in medical image analysis tasks such as computer-aided diagnosis.
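On a labeled test set, this kind of bias can be quantified by scoring the same model separately for each demographic group, for example with a per-group ROC-AUC as in the sketch below. The arrays and group names are illustrative assumptions.

```python
# Per-group performance audit: one model, scored separately per group.
# Variable names and data are illustrative assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_by_group(y_true, y_score, groups):
    """ROC-AUC of one model, computed separately for each demographic group."""
    return {g: roc_auc_score(y_true[groups == g], y_score[groups == g])
            for g in np.unique(groups)}

y_true = np.random.randint(0, 2, 500)        # hypothetical diagnosis labels
y_score = np.random.rand(500)                # hypothetical model scores
groups = np.random.choice(["female", "male"], 500)
print(auc_by_group(y_true, y_score, groups)) # a large gap suggests bias
```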
Asked what the researchers had in mind when starting the project, he replied: “As for the type of images we work with, we mostly look at X-ray images, in this case images of the torso, which we first used to show that such bias may exist. These are chest X-ray images from which different diagnoses, such as pneumonia, pneumothorax, and an enlarged heart, can be made.
“Generally, we use publicly available databases created by other universities. For example, one we are working with was created by Stanford University, and another by the National Institutes of Health across several different sites. In Argentina, we have collaborated with the Italian Hospital of Buenos Aires, especially on this project, so we are working with them and some of their radiologists,” he said.
One of the key questions is whether the margin of error can be reduced. In this regard, the researcher pointed out: “There are so-called bias mitigation techniques, which are among the commonly used approaches.”
“Some have to do with rebalancing the database. To do this, we first determine whether there is a bias in the database we use for training, such as the model performing worse on women than on men. With that information, you can go to the database and check: are women really well represented?” he described.
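The rebalancing idea he describes could look like the following sketch, which oversamples under-represented groups in a training table until all groups appear in equal numbers. The column names and data are hypothetical.

```python
# Database rebalancing sketch: oversample minority groups so each group
# contributes equally to training. Columns and data are hypothetical.
import pandas as pd

def rebalance_by_group(df: pd.DataFrame, group_col: str = "sex") -> pd.DataFrame:
    """Oversample under-represented groups until all groups have equal counts."""
    target = df[group_col].value_counts().max()
    parts = [g.sample(n=target, replace=len(g) < target, random_state=0)
             for _, g in df.groupby(group_col)]
    return pd.concat(parts).sample(frac=1, random_state=0)  # shuffle

train = pd.DataFrame({"image": [f"xray_{i}.png" for i in range(6)],
                      "sex": ["male"] * 4 + ["female"] * 2})
print(rebalance_by_group(train)["sex"].value_counts())  # male 4, female 4
```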
He continued: “Working at the data level is always one of the easiest ways to address the problem. What we then develop at a methodological level are new algorithms in which the automatic learning process carried out by the model is constrained in some way so that it does not perform differently across groups. It is something that can be modified during training.”
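One generic way to impose such a constraint during training, sketched below under our own assumptions rather than as the group’s specific algorithm, is to add a penalty on the gap between the average losses of the demographic groups.

```python
# Sketch of a fairness-constrained training loss (a generic group-loss-gap
# penalty, not necessarily the researchers' specific method). Assumes PyTorch.
import torch
import torch.nn.functional as F

def fairness_aware_loss(logits, labels, groups, lam=1.0):
    """Cross-entropy plus a penalty on the loss gap between groups 0 and 1."""
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    loss_g0 = per_sample[groups == 0].mean()
    loss_g1 = per_sample[groups == 1].mean()
    return per_sample.mean() + lam * (loss_g0 - loss_g1).abs()

# Hypothetical batch with both groups represented
logits = torch.randn(8, 2, requires_grad=True)
labels = torch.randint(0, 2, (8,))
groups = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
fairness_aware_loss(logits, labels, groups).backward()
```

The larger the weight lam, the more the optimizer is pushed toward equalizing the groups’ losses, at some possible cost to overall accuracy.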
The project is funded and supported by Conicet, the Faculty of Engineering and Water Sciences of the Universidad Nacional del Litoral, and the National Agency for the Promotion of Research, Technological Development and Innovation.