Various approaches are being pursued. One involves incorporating constraints that essentially nudge the machine-learning model to ensure it achieves equitable performance across different subpopulations and between similar individuals [8]. A related approach involves changing the learning algorithm to reduce its dependence on sensitive attributes such as ethnicity, gender and income, and on any information that is correlated with those characteristics [9].

Note that it won't work to simply omit ethnicity and gender data from the databases and assume that the resulting AI system will be ethnicity- and gender-neutral. That is because the authors see the "world as it is" as biased, so ethnicity and gender have to be put into any AI model, and then experts need to fudge the data and cook the algorithms so that the results look more like the world that good lefty social justice warriors aspire to.
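To make the "constraints" idea in the quoted passage concrete, here is a minimal sketch of one common form of it: a logistic regression trained with an added demographic-parity penalty that pushes the mean predicted scores of two groups toward each other. The synthetic data, the squared-gap penalty, and all hyperparameters are illustrative assumptions on my part, not the article's actual method.

```python
# A sketch of fairness-constrained training: logistic loss plus a
# demographic-parity penalty. Everything here is illustrative, not
# the article's method.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the feature x is correlated with group membership g,
# so merely dropping g from the inputs would not remove the group signal.
n = 2000
g = rng.integers(0, 2, size=n)              # sensitive attribute (0 or 1)
x = rng.normal(loc=1.5 * g, scale=1.0)      # feature correlated with g
y = (x + rng.normal(scale=1.0, size=n) > 0.75).astype(float)

X = np.column_stack([x, np.ones(n)])        # feature + bias column

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def score_gap(p):
    """Absolute difference in mean predicted score between the groups."""
    return abs(p[g == 1].mean() - p[g == 0].mean())

def train(lam, steps=2000, lr=0.1):
    """Gradient descent on logistic loss + lam * (parity gap)^2."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / n            # gradient of mean logistic loss
        # Gradient of (mean score of group 1 - mean score of group 0)^2:
        gap = p[g == 1].mean() - p[g == 0].mean()
        s = p * (1.0 - p)                   # derivative of the sigmoid
        d_gap = (X[g == 1] * s[g == 1, None]).mean(axis=0) \
              - (X[g == 0] * s[g == 0, None]).mean(axis=0)
        grad += lam * 2.0 * gap * d_gap
        w -= lr * grad
    return sigmoid(X @ w)

gap_plain = score_gap(train(lam=0.0))       # unconstrained model
gap_fair = score_gap(train(lam=5.0))        # parity-penalized model
```

The point of the sketch is the trade the article describes: the penalized model gives up some fit to the raw labels in exchange for a smaller between-group score gap, and it needs the group labels `g` at training time to do so.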
Such nascent de-biasing approaches are promising, but they need to be refined and evaluated in the real world.
An open challenge with these types of solutions, however, is that ethnicity, gender and other relevant information need to be accurately recorded. Unless the appropriate categories are captured, it’s difficult to know what constraints to impose on the model, or what corrections to make. The approaches also require algorithm designers to decide a priori what types of biases they want to avoid. ...
As computer scientists, ethicists, social scientists and others strive to improve the fairness of data and of AI, all of us need to think about appropriate notions of fairness. Should the data be representative of the world as it is, or of a world that many would aspire to?
This article is probably not even controversial in academic circles. They take it for granted that data and algorithms need to be manipulated to achieve their left-wing goals. They can't get there if machines deal honestly with raw facts.