Accountability in Algorithmic Decision Making — A view from computational journalism.

By Nicholas Diakopoulos

This article examines the ethical concerns surrounding algorithms that play an increasingly large role in our daily lives by dispensing information and making judgments about content. First, the author, Diakopoulos, introduces four common types of decisions that algorithms make, and where potential biases within them can arise.

The first is prioritization. He argues that prioritization is inherently a form of discrimination: algorithms emphasize information about certain groups at the expense of others, which can be an unintended consequence of the design. The second is classification, in which entities are grouped together based on key characteristics; here it is important to consider potential human biases in the selected training data and the balancing of overall accuracy against false positives and false negatives. The third is association, which creates connections between entities and can sway human perception of a situation if the accuracy of the association is treated carelessly. The fourth is filtering, which carries the danger of escalating into something greater, such as censorship or discrimination.

Next, Diakopoulos compares the responsibilities of the government and of private-sector companies regarding accountability for their algorithms. He argues that the algorithms the government employs are usually expected to be more accountable and transparent than those of private companies because of the standards of our democratic society, in which we elect our representation. However, many government algorithms still lack true transparency and regulation, even when they feed very important policy decisions. Private corporations face less pressure for public accountability, but concerns about customer satisfaction and data quality can be addressed in part by letting users themselves identify and fix mistakes in the data.
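The balancing of false positives against false negatives that Diakopoulos raises for classification can be made concrete with a small sketch. The scores, labels, and thresholds below are invented toy data, not anything from the article: they simply show that moving a classifier's decision threshold trades one kind of error for the other.

```python
def confusion_counts(scores, labels, threshold):
    """Count false positives and false negatives at a given threshold."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

# Made-up risk scores (0-1) with true labels (1 = positive class).
scores = [0.95, 0.80, 0.70, 0.55, 0.40, 0.30, 0.20, 0.10]
labels = [1,    1,    0,    1,    0,    1,    0,    0]

for threshold in (0.25, 0.50, 0.75):
    fp, fn = confusion_counts(scores, labels, threshold)
    print(f"threshold={threshold:.2f}  false positives={fp}  false negatives={fn}")
```

On this toy data, raising the threshold from 0.25 to 0.75 eliminates the false positives but doubles the false negatives; which trade-off is acceptable is exactly the kind of design judgment the article argues should be accounted for.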
In the following section, Diakopoulos outlines the aspects of an algorithm that most need to be made transparent. These include human involvement (which encourages accountability for the humans who control the algorithm), the data (how it is collected and edited, its accuracy, and privacy risks around personal information), the model (its inputs and weighted features, as well as the rationale and assumptions behind them), inferencing (the potential for error, the accuracy rate, and the level of uncertainty), and algorithmic presence (whether and when an algorithm is used, including personalization and content filtering). Finally, he notes several challenges researchers will encounter along the way. For example, how do we balance transparency with user enjoyment and experience? How can we disclose information about these valuable algorithmic processes without falling victim to manipulation or gaming techniques? He poses questions like these that must be considered in order to create a fair and functional system of transparency.
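To make the transparency checklist above more tangible, here is a hedged sketch (not Diakopoulos's code, and not a real deployed system) of what disclosing a model's inputs, weighted features, and per-inference uncertainty might look like. The feature names, weights, and the simple confidence proxy are all invented for illustration.

```python
import math

# Hypothetical weighted features of a simple linear risk-scoring model.
WEIGHTS = {"prior_offenses": 1.4, "age_bracket": -0.6, "employment": -0.9}
BIAS = 0.2

def score(features):
    """Logistic score in [0, 1]; higher means the model infers higher risk."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))

def disclose(features):
    """Report the inference alongside the information needed to audit it."""
    p = score(features)
    print("model inputs and weights:", WEIGHTS)
    # Crude illustrative uncertainty proxy: 1 near p=0.5, 0 near p=0 or 1.
    print(f"inference: risk={p:.2f} (uncertainty proxy: {1 - abs(2 * p - 1):.2f})")
    for name, value in features.items():
        print(f"  {name}={value} contributes {WEIGHTS[name] * value:+.2f}")

disclose({"prior_offenses": 2, "age_bracket": 1, "employment": 0})
```

The point of the sketch is not the model itself but the disclosure: each inference is published together with the weights and per-feature contributions that produced it, which is the kind of information the article argues auditors and the public need.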

In closing, I found this a concise argument that sparked my curiosity and imagination about algorithmic bias. I admired how the author split the main topics into sections and explained each in detail, deepening the audience's comprehension of every idea. His real-world call for more transparency in machine learning and more accountability among its creators made me realize how many flaws exist in our current practices and how much we can strive to improve them.