Transdisciplinary Design

Reflections on the Objectivity and Human Values of Technology

Posted on December 3, 2017

What is technology?

Is technology potentially freeing us, or repressing us?

Our recent discussions around technology, particularly algorithmic models and artificial intelligence, have raised many thoughts about the objectivity we routinely attribute to technology, as well as about our human values.

Mathwashing [1] and Objectivity

Almost within the blink of an eye, algorithmic technology has permeated all areas of our lives. Think about how many decisions, made either by ourselves or by other agencies, are based on the output of some kind of program or software. We rely so heavily on technological agents to make decisions because we assume artificial intelligence is unbiased, but how often do we question whether these results are truly objective? Is artificial intelligence objective? It seems so, because at the core of every algorithm is math. Who would argue that math is subjective?

What we often forget to take into consideration is who created these programs. Machine learning depends on very subjective human training for pattern recognition, which reflects and magnifies the trainers' biases and assumptions. The majority of programmers, especially machine learning experts, are male. Would some algorithmic models be different from what they are now if more female programmers were involved in the programming process? If so, does this mean there has long been an underrepresentation of female standpoints?
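To make this concrete, here is a minimal, hypothetical sketch (in Python, with invented data and variable names such as `quality` and `proxy`; it does not describe any real system) of how biased human labels propagate into a model trained on them:

```python
# Toy illustration: if the humans who label the training data carry a bias
# against one group, a model trained on those labels reproduces that bias,
# even though the learning algorithm itself is "just math".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical applicants: one legitimate signal of quality, plus a group
# attribute (0 or 1) that should be irrelevant to the decision.
quality = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Biased human labels: raters systematically under-rate group 1, so the
# "ground truth" the model learns from is already skewed.
label = (quality - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

# The model never sees the group attribute directly, only quality and a
# proxy feature correlated with group (e.g., a zip code).
proxy = group + rng.normal(scale=0.3, size=n)
X = np.column_stack([quality, proxy])

model = LogisticRegression().fit(X, label)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
# Even though the two groups have identical quality distributions, their
# predicted approval rates diverge: the raters' bias has been learned.
```

The point of the sketch is simply that the math faithfully optimizes whatever the training data encodes, including the prejudices of the people who produced it.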

Already, mathematical models are being used to help determine who gets hired for a job, who is approved for a loan, or even who makes parole. For example, COMPAS, a risk assessment model developed by a company called Northpointe, offers to predict defendants' likelihood of reoffending and is used by some judges to determine whether an inmate is granted parole [2]. An investigation conducted by ProPublica found that the system is biased against black defendants, who were far more likely than white defendants to be incorrectly flagged as high risk [3]. The investigation also reported that defendants who have been scored by these assessments rarely have a chance to challenge their scores, and the data behind the results are rarely revealed.
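The disparity ProPublica measured is, at its core, a comparison of error rates across groups. The sketch below (hypothetical numbers, not their actual data or code) shows the kind of calculation involved: among people who did not reoffend, how often did the tool still label them "high risk"?

```python
# Rough sketch of a false-positive-rate comparison between two groups.
# All values here are made up for illustration.
import pandas as pd

# Hypothetical records: risk score (1-10), group, and whether the person
# actually reoffended within two years.
df = pd.DataFrame({
    "score":      [8, 3, 9, 2, 7, 4, 8, 1, 6, 2],
    "group":      ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
    "reoffended": [0, 0, 1, 0, 0, 0, 1, 0, 0, 0],
})

df["high_risk"] = df["score"] >= 5  # the tool's "high risk" label

# False positive rate per group: labeled high risk, but did not reoffend.
no_reoffense = df[df["reoffended"] == 0]
print(no_reoffense.groupby("group")["high_risk"].mean())
```

If the resulting rates differ sharply between groups, the scores are not neutral, whatever the mathematics behind them.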

Even though investigations have revealed risk assessment software to be highly problematic, these models are used in courtrooms across the country to assist judges in making decisions. For centuries, decisions in the legal process have been guided by human instincts and personal bias; that bias has also been carried into the design of these risk assessment models.

Evidence of bias has been found in areas such as online advertising, recruiting, and pricing strategies driven by presumably neutral algorithms [1]. Consumers, and the vulnerable "other" who are unaware of or fail to recognize the complexity behind these programs, are most likely being capitalized on by companies or not treated fairly.

Human Values

Much of the political content Americans see on social media every day is not produced by human users. According to a recent report, about one in every five election-related tweets from Sept. 16 to Oct. 21 was generated by bots [4]. These bots are designed by people with specific political standings and are used to automatically produce content that participates in online discussions. The content produced by bots is almost impossible for humans to identify as machine-generated. While it's hard to quantify the influence these bots have had on the election results, it's clear that they have altered public perceptions.

A recent article by James Bridle unveiled some weird things happening on YouTube Kids [5]. Bots have been used to inflate view counts for many videos so that they appear at the top of search results. It's natural for parents to assume that the higher the view count, the more credible the video. Since bots are used to push nursery videos containing violent or abusive content to the top of search results, it's easy for kids to be exposed to such content. Although YouTube Kids is considered a relatively credible source, the lack of transparency keeps parents from being in full control of what their children are watching on the internet, and the possibility of children being exposed to unhealthy content has greatly increased.

In another example, it took less than 24 hours for Twitter to turn @TayandYou, an innocent chatbot created by Microsoft, into a racist. Because the bot learned through its interactions with other Twitter users, its responses reflected the prejudices of our society and its own inability to filter out inappropriate content. Tay's adventure on Twitter shows that even big corporations like Microsoft can forget to take preventative measures against these problems [6].

Algorithmic generation of perverse content is a special case of a much more general problem. A paradox unfolds in front of us: while these algorithmic systems are designed by human beings, in some cases even with good intentions, the systems follow their own internal logic, producing results that don't always align (at all) with human values. How can we, as designers, tackle this problem? I do not have an answer at this moment, but I will keep these thoughts in mind as I move along in my studies in the Transdisciplinary Design program and explore possible solutions.

Yuxin Cheng

 

References

 

  1. Byrnes, Nanette. "Are Machine Learning Algorithms Biased?" MIT Technology Review, 24 June 2016, www.technologyreview.com/s/601775/why-we-should-expect-algorithms-to-be-biased/.
  2. Knight, Will. "Google's AI Chief Says Forget Elon Musk's Killer Robots, and Worry about Bias in AI Systems Instead." MIT Technology Review, 3 Oct. 2017, www.technologyreview.com/s/608986/forget-killer-robotsbias-is-the-real-ai-danger/.
  3. Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. "Machine Bias." ProPublica, www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
  4. Ferrara, Emilio. "How Twitter Bots Affected the US Presidential Campaign." The Conversation, 1 Dec. 2017, theconversation.com/how-twitter-bots-affected-the-us-presidential-campaign-68406.
  5. Bridle, James. "Something Is Wrong on the Internet." Medium, 6 Nov. 2017, medium.com/@jamesbridle/something-is-wrong-on-the-internet-c39c471271d2.
  6. Vincent, James. "Twitter Taught Microsoft's Friendly AI Chatbot to Be a Racist Asshole in Less than a Day." The Verge, 24 Mar. 2016, www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist.