Are Algorithms Value-Free? Feminist Theoretical Virtues in Machine Learning

By Gabbrielle M. Johnson

The main purpose of this article is to examine the biases that shape machine learning algorithms and to argue that no algorithm can truly be value-free: at multiple stages of development, value-laden decisions must be made, so the resulting systems inevitably reflect the values of their designers. Johnson begins by challenging the idea that scientific theories can be purely objective, since moving from raw data to conclusions always requires human reasoning, and the same evidence can support more than one interpretation; because multiple conclusions are possible, the output depends heavily on the individuals who analyze the research. She grounds this claim in two concepts: underdetermination, the thesis that multiple scientific theories can be equally well justified by the same evidence, and inductive risk, the potential for harmful consequences when an inductive generalization, the kind of inference machine learning systems routinely make, turns out to be wrong.

These concepts set up two distinct arguments from feminist philosophy of science, both of which oppose the ideal of value-free scientific knowledge and value-free algorithms. The first, the argument from demarcation, notes that philosophers of science have historically separated science from non-science using criteria presumed to be value-free and unbiased. Feminist philosophers challenge this presumption, arguing that such criteria are themselves shaped by social, ethical, and political values. Johnson extends the point to machine learning: many stages of the development process, such as choosing training data and selecting which variables to measure, require value-based decision-making, and even settling on the evaluation criteria is itself a choice based on values, which undermines any claim that the criteria are neutral. The second, the argument from inductive risk, holds that wherever there is uncertainty, there is a risk of error. In scientific practice and in real-world applications, the possible consequences of those errors, both positive and negative, must be weighed against one another, and that weighing demands personal judgment that violates the value-free ideal.

From this analysis, Johnson arrives at an intriguing solution: instead of striving for an unachievable value-free ideal, society should promote greater transparency about the values and biases involved in these supposedly objective decisions. Encouraging ethical oversight of algorithms and machine learning invites awareness and the much-needed inspection of what is considered a truly impartial system.
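To make the inductive-risk argument concrete, here is a minimal sketch in Python. It is not from Johnson's paper; the data, cost weights, and function name are all hypothetical. The point it illustrates is that even a mechanical-seeming step like picking a classifier's decision threshold requires deciding how costly a false positive is relative to a false negative, and that weighting is not supplied by the data itself:

```python
# A minimal sketch, with entirely hypothetical data and cost weights,
# of how inductive risk enters a classifier's design: the threshold
# that "minimizes cost" depends on how we value different errors.

def choose_threshold(scored_examples, cost_fp, cost_fn):
    """Pick the cutoff that minimizes total weighted error cost.

    The relative weights cost_fp and cost_fn are not given by the data;
    deciding how bad a false positive is compared to a false negative is
    exactly the value judgment the inductive-risk argument points to.
    """
    best_t, best_cost = None, float("inf")
    for t in sorted({score for score, _ in scored_examples}):
        fp = sum(1 for s, y in scored_examples if s >= t and y == 0)
        fn = sum(1 for s, y in scored_examples if s < t and y == 1)
        cost = cost_fp * fp + cost_fn * fn
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

# Toy (score, true outcome) pairs that are not perfectly separable.
data = [(0.2, 0), (0.4, 1), (0.5, 0), (0.7, 1), (0.9, 1)]

print(choose_threshold(data, cost_fp=1.0, cost_fn=1.0))   # -> 0.4
print(choose_threshold(data, cost_fp=10.0, cost_fn=1.0))  # -> 0.7
```

Weighing false positives more heavily pushes the cutoff upward; neither choice is dictated by the evidence alone.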

Reflecting on all that I read, I found this paper to be unique in its perspective and argument structure. The author's points were strong and clear, and they introduced many ideas that were new to me. I could also make connections to previous readings through examples she mentions, such as risk-assessment algorithms used to predict recidivism. She crafts a similar argument on that subject: which fairness criterion to prioritize is a subjective choice, as is how to measure the societal impacts of risk factors on different groups of people. In general, this research topic sparked a lot of thinking for me, and I definitely came away knowing more about algorithmic bias!
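To see why prioritizing among fairness criteria is a substantive choice rather than a technical detail, consider a toy sketch (all of the data below is invented for illustration): when base rates differ across groups, a single shared cutoff can produce different false positive rates for each group, so equalizing one fairness metric generally means sacrificing another.

```python
# Invented toy data showing how one shared decision rule can burden
# groups unequally: tuples of (group, risk score, true outcome).
from collections import defaultdict

records = [
    ("A", 0.3, 0), ("A", 0.6, 0), ("A", 0.7, 1), ("A", 0.8, 1),
    ("B", 0.3, 0), ("B", 0.6, 1), ("B", 0.7, 1), ("B", 0.8, 1),
]

threshold = 0.5  # the same cutoff applied to everyone

false_pos = defaultdict(int)  # non-reoffenders flagged as high risk
negatives = defaultdict(int)  # all non-reoffenders in each group

for group, score, outcome in records:
    if outcome == 0:
        negatives[group] += 1
        if score >= threshold:
            false_pos[group] += 1

for group in sorted(negatives):
    print(group, "false positive rate:", false_pos[group] / negatives[group])
# A false positive rate: 0.5
# B false positive rate: 0.0
```

Equalizing false positive rates here would require different thresholds for each group, which would in turn mean treating identical risk scores differently; deciding which trade-off is acceptable is precisely the kind of value judgment Johnson highlights.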