Luke Oakden-Rayner, a PhD student in radiology, writes:
a preprint* titled Reading Race: AI Recognises Patient's Racial Identity in Medical Images. ...
But why is it bad that medical instruments are automatically getting accurate results? In every other case that is a good thing, not a bad thing. A comment notes:
The more clinical and safety/bias related researchers were shocked, confused, and frankly horrified by the results we were getting. Some of the computer scientists and the more junior researchers, on the other hand, were surprised by our reaction. They didn't really understand why we were concerned. ...
AI can trivially learn to identify the self-reported racial identity of patients to an absurdly high degree of accuracy ...
Despite many attempts, we couldn’t work out what it learns or how it does it. ...
We are talking about racial identity, not genetic ancestry or any other biological process that might come to mind when you hear the word “race”. Racial identity is a social, legal, and political construct that consists of our own perceptions of our race, and how other people see us. In the context of this work, we rely on self-reported race as our indicator of racial identity. ...
An urgent problem
AI seems to easily learn racial identity information from medical images, even when the task seems unrelated. We can’t isolate how it does this, and we humans can’t recognise when AI is doing it unless we collect demographic information (which is rarely readily available to clinical radiologists). That is bad.
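To make the setup being described concrete: the experiment amounts to ordinary supervised image classification. The sketch below is not the preprint's code; the dataset path, folder layout, class names, and training settings are hypothetical placeholders. It fine-tunes a stock pretrained CNN on chest X-ray images labelled with self-reported race and reports validation accuracy, which is roughly the kind of probe the quoted researchers describe running.

```python
# Hedged sketch only: hypothetical data layout xray_data/{train,val}/<self_reported_race>/*.png
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models, transforms
from torchvision.datasets import ImageFolder

device = "cuda" if torch.cuda.is_available() else "cpu"

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel; backbone expects 3
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = ImageFolder("xray_data/train", transform=preprocess)
val_set = ImageFolder("xray_data/val", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32)

# Standard ImageNet-pretrained backbone with a new classification head.
model = models.resnet34(weights=models.ResNet34_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):  # short demonstration run
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Validation accuracy: how often the model recovers the self-reported race label.
model.eval()
correct = total = 0
with torch.no_grad():
    for images, labels in val_loader:
        preds = model(images.to(device)).argmax(dim=1).cpu()
        correct += (preds == labels).sum().item()
        total += labels.size(0)
print(f"validation accuracy: {correct / total:.3f}")
```

Nothing exotic is involved; the quoted claim is that a generic pipeline like this, given the labels, learns the signal "trivially" and to high accuracy.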
Interesting article. I wish you had explained why you believe that the algorithm detecting race was a bad thing. You wrote at length about health disparities, but never closed the loop on what that means for the algorithms. What is a simple case where, had the algorithm not detected race, it would have made the right choice but didn't?
I think the problem here is that the BLM allies say that race is a social construct, and that any inequitable racial outcomes are the result of bad policies.
This research, if correct, proves otherwise. Note that the AI accurately identified the patient's self-reported race from X-rays and other images. It should enable better diagnoses and treatments.
Go ahead and read his convoluted and nonsensical argument. Somehow it is touchy that Blacks get identified as Blacks.