Transdisciplinary Design

Implications behind behavioral design – Part 2: Applications in machine learning

Posted on December 21, 2017

My first introduction to machine learning (ML) came during my undergraduate studies, in a cognitive neuroscience lecture on the design of psychology and decision-making experiments. The experiments I studied used predictive algorithms to understand how situational factors (e.g. the transparency or timing of information delivery) influence perceptual processing in decision-making behavior.

An example of one of these experiments is a digital bargaining scenario between two players who must decide how to divide a lump sum of money (the pie). One player is told how much money is in the pie, while this information is withheld from the other. Within these constraints, a lump sum ranging from $1 to $6 is chosen for each round. To predict whether the two players will reach a bargaining agreement, one can estimate the likelihood with a statistical regression based on the size of the initial lump sum. With ML tools, however, additional behavioral features, such as decision timing or computer cursor position, can be included to add predictive power. [1]

Figure 1: Experimental bargaining scenario

(A) The informed player is prompted to select an initial offer on the screen and is shown the bargaining pie size. (B) An example of cursor locations for both players 3 seconds into the bargaining process (the red bar represents the informed player’s activity and the blue bar the uninformed player’s). (C) After 6 seconds, both players come to a bargaining agreement. (D) The informed player is notified how much money they won in the bargain.

Figure from “Artificial intelligence and behavioral economics” by Colin Camerer (2017)

Three datasets were fed through a predictive ML algorithm. The first asked the algorithm to predict whether a scenario ended in a bargaining agreement based only on the size of the pie. The second included only high-precision behavioral process data (e.g. behavioral variables such as decision timing). The third combined both types of data.
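To make the comparison concrete, here is a minimal sketch of what feeding the three datasets to a predictive algorithm might look like, written in Python with scikit-learn. Everything here is an assumption for illustration: the data are synthetic stand-ins, and the feature names (pie_size, decision_time, cursor_dispersion) and the choice of logistic regression are mine, not the actual pipeline used in Camerer’s lab.

```python
# Illustrative sketch only: synthetic data standing in for the bargaining
# experiment, with logistic regression as a simple stand-in classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500

# Synthetic trial-level features (assumed, not from the actual study)
pie_size = rng.integers(1, 7, size=n)             # lump sum between $1 and $6
decision_time = rng.exponential(3.0, size=n)      # seconds until an offer is made
cursor_dispersion = rng.normal(0.5, 0.2, size=n)  # stand-in for cursor movement data

# Synthetic outcome: agreement made more likely by larger pies and faster decisions
logit = 0.6 * pie_size - 0.4 * decision_time + rng.normal(0.0, 1.0, size=n)
agreed = (logit > 0.5).astype(int)

feature_sets = {
    "pie size only": np.column_stack([pie_size]),
    "behavioral process only": np.column_stack([decision_time, cursor_dispersion]),
    "pie size + behavioral process": np.column_stack([pie_size, decision_time, cursor_dispersion]),
}

# Compare the predictive power of the three datasets via cross-validated ROC AUC
for name, X in feature_sets.items():
    auc = cross_val_score(LogisticRegression(max_iter=1000), X, agreed,
                          cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.2f}")
```

In a setup like this, the combined feature set would be expected to score highest, which is the pattern the figure below reports for the actual experiment.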

Figure 2: An ROC curve showing significantly higher predictive accuracy when both pie size data and behavioral process data were fed through the ML algorithm. Basically, the closer the curve moves to the upper left corner of the graph, the better the algorithm is at predicting the outcome.

Figure from “Artificial intelligence and behavioral economics” by Colin Camerer (2017)
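For readers unfamiliar with ROC curves, the sketch below shows how a plot like Figure 2 is drawn: for every possible classification threshold, the true-positive rate is plotted against the false-positive rate, so a curve that hugs the upper left corner reflects better discrimination. The scores are synthetic, since the study’s data are not reproduced here.

```python
# Illustrative sketch: drawing ROC curves from synthetic predicted scores
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=300)  # 0 = no agreement, 1 = agreement

# Simulated predicted probabilities: one informative model, one near-chance model
scores_combined = np.clip(0.6 * y_true + rng.normal(0.2, 0.25, size=300), 0, 1)
scores_pie_only = np.clip(0.15 * y_true + rng.normal(0.4, 0.30, size=300), 0, 1)

for label, scores in [("pie size + behavioral data", scores_combined),
                      ("pie size only", scores_pie_only)]:
    fpr, tpr, _ = roc_curve(y_true, scores)
    plt.plot(fpr, tpr, label=f"{label} (AUC = {auc(fpr, tpr):.2f})")

plt.plot([0, 1], [0, 1], linestyle="--", color="grey")  # chance diagonal
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```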

Though this experiment was conducted in the neuroeconomics lab of Colin Camerer at Caltech, this type of study could easily be applied in industry settings, especially with the interests of powerful technology companies in mind. As I mentioned in my previous post, behavioral design, as informed by behavioral economics theory, takes on the notion of “nudging” individuals to modify their behaviors based on subliminal, non-conscious cues. Therefore, the implications of this research applied to the design of interfaces, experiences, and products by companies such as Google, Facebook, and Amazon could have a massive impact on shaping user behavior and influencing perceived decision-making agency.

As mentioned in a FastCo.Design article last year:

“The steady march toward a world where machines taught by humans make our lives easier, smoother, and more delightful has gone largely unquestioned. Now, as algorithms make their way into systems that deeply affect our democracy–not only the way we access journalism and discuss politics, but also how criminals are convicted and other fundamental mechanisms of our government–it’s time to begin reckoning with AI”.[2]

If we consider humans to be non-free agents, lacking the capacity to reflect consciously on the intentions behind their actions, we succumb to the idea that these emergent digital power structures within our environment have complete agency over our well-being and our reality. We begin to believe that free will is non-existent and that acting on good intentions is no longer a possibility. In other words, “we eliminate uncertainty, we forfeit the human replenishment that attaches to the challenge of asserting predictability in the face of an always-unknown future in favor of the blankness of perpetual compliance with someone else’s plan”.[3]

Having previously introduced the argument that behavioral design presupposes a controlling agent acting upon the unconscious and unreflective attitudes of individuals, I question how, and whether, we can conceive of an alternative form of behavioral design in a capitalist society. As argued by Dunne and Raby:

“The design profession needs to mature and find ways of operating outside the tight constraints of servicing industry. At its worst product design simply reinforces global capitalist values. Design needs to see this for what it is just one possibility and to develop alternative roles for itself. It needs to establish an intellectual stance of its own, or the design profession is destined to lose all intellectual credibility and viewed simply as an agent of capitalism”.[4]

In my global flows debate several weeks ago, I argued that, on a global scale, neoliberalism can be analyzed as a continued form of natural selection. The amplification of artificially intelligent applications has the potential to bring consumptive and profit-driven behavior to a whole new level. However, staying in line with my debate argument: are these amplified and dangerous repercussions really much different from the patterns of human behavior we have observed throughout history?

Having spent many weeks thinking about this topic, I introduce and leave you with a new idea.

I believe that the emergent and creative potential of ML technologies could surface awareness of our implicit attitudes and behaviors on a scale that we as individuals can reckon with. These technologies could surface “evidence” of our behavioral tendencies and biases. They could harness immediate and direct feedback mechanisms, feeding us information that we would not recognize through our own cognitive capacities alone. Though I am skeptical that such a nascent technology will fundamentally change the systems we live in, I believe that applying expansive and unusual approaches can provoke us to consider alternative realities with potential for change.

– J Liu

[1] Camerer, Colin F. “Artificial intelligence and behavioral economics.” Manuscript in progress, National Bureau of Economic Research, 2017.

[2] Campbell-Dollaghan, Kelsey. “The Algorithmic Democracy.” FastCo.Design, 2016.

[3] Zuboff, Shoshana. “The Secrets of Surveillance Capitalism.” Frankfurter Allgemeine, 2016.

[4] Dunne, Anthony, and Fiona Raby. Design Noir: The Secret Life of Electronic Objects. Birkhäuser, 2001.