An overview of Algorithmic Experience (AX)
Artificial intelligence is increasingly becoming an integral part of everyday life through a wide variety of implementations, such as e-commerce recommender systems, movie recommendations, tailored content aggregation services and navigation systems. While the rapid adoption of algorithmic technologies has the potential to greatly improve users’ experience and the quality of services, it is still unclear how users cognitively accept such recommender systems. In other words, what factors affect our satisfaction with and adoption of these recommender systems? And how could we improve users’ algorithmic experience?
The problem of AI: from users’ perspectives
By analysing 35,448 user reviews of Facebook, Netflix and Google Maps, Eiband et al. (2019) found that not only accuracy but also the interaction between users and systems influences users’ experience. On the one hand, the study suggests that algorithmic and knowledge-based problems such as biased content curation (Facebook), mismatches between recommendations and user interests (Netflix) or inaccurate destinations (Google Maps) are common across these systems. On the other hand, users may feel annoyed when they have limited control over the way systems work (user choice). For example, users may feel disappointed when Google Maps keeps overwriting their manually selected routes without informing them. In addition, the way systems handle users’ feedback also matters: with a binary rating system (like/dislike), users may struggle to provide meaningful feedback to Netflix, which in turn affects the relevance of its recommendations.
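To make the feedback problem concrete, here is a minimal Python sketch (all names are hypothetical, not Netflix’s actual API) of how a binary rating collapses into a two-valued training signal, while a richer feedback record would let users say how much they liked something and why:

```python
from dataclasses import dataclass
from typing import Optional, Union

# Binary feedback, as in the Netflix example: a single bit per title.
@dataclass
class BinaryFeedback:
    item_id: str
    liked: bool                      # like / dislike, nothing in between

# A hypothetical richer record: the user can express degree and reason,
# and can explicitly steer future curation.
@dataclass
class RichFeedback:
    item_id: str
    rating: int                      # e.g. 1..5 instead of a single bit
    reason: Optional[str] = None     # "wrong genre", "already watched", ...
    hide_similar: bool = False       # explicit user control over curation

def to_training_signal(fb: Union[BinaryFeedback, RichFeedback]) -> float:
    """Collapse feedback into a relevance score in [0, 1] for the recommender."""
    if isinstance(fb, BinaryFeedback):
        return 1.0 if fb.liked else 0.0       # only two possible signals
    return (fb.rating - 1) / 4.0              # graded signal preserves nuance

print(to_training_signal(BinaryFeedback("show-42", liked=False)))                   # 0.0
print(to_training_signal(RichFeedback("show-42", rating=2, reason="wrong genre")))  # 0.25
```

The point of the sketch is simply that a binary signal can only ever produce two values, so nuances like “almost right, wrong genre” are invisible to the system.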
Given these human-centered problems, a growing body of research has started to explore “algorithmic experience” or “AX” (Alvarado & Waern 2018), an overarching view of user interaction with intelligent systems. This notion includes fostering user control over algorithmic decision making, transparently increasing awareness of how the system works, and letting users deliberately activate or deactivate algorithmic influence.
Algorithmic experience and user control
Human agency and oversight over the system is an emerging topic. By allowing users to corroborate, manage and stay in control of the algorithm, a system can make users feel empowered and capable of managing what it thinks about them (Alvarado & Waern 2018; Kumar et al. 2020). In the survey by Alvarado and Waern (2018), 56% of users wished they had options to control the Facebook newsfeed and filter the content themselves, for example by turning the “People You May Know” feature on or off, or adjusting how it works. In the same vein, researchers have suggested involving humans in the process of designing AI systems: engaging end-users in the development process may improve perceived fairness and trust, and increase algorithm awareness and understanding of algorithmic decision making, thus leading to a more empathetic stance (Lee et al. 2019).
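As an illustration only, the following sketch shows what such user-facing controls might look like, assuming a hypothetical settings object that gates features like “People You May Know” and lets users opt out of algorithmic ranking entirely:

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical per-user settings that gate algorithmic features,
# in the spirit of letting users switch suggestions on or off.
@dataclass
class AlgorithmControls:
    people_you_may_know: bool = True
    personalised_feed: bool = True
    keyword_filters: List[str] = field(default_factory=list)

def build_feed(posts: List[Dict], controls: AlgorithmControls) -> List[Dict]:
    """Apply the user's explicit choices before any algorithmic ranking."""
    feed = [p for p in posts
            if not any(k.lower() in p["text"].lower()
                       for k in controls.keyword_filters)]
    if controls.personalised_feed:
        # Algorithmic ranking by the system's predicted interest score.
        feed.sort(key=lambda p: p.get("predicted_interest", 0.0), reverse=True)
    else:
        # User opted out: plain reverse-chronological feed.
        feed.sort(key=lambda p: p["timestamp"], reverse=True)
    return feed
```

The design choice here is that user controls are applied *before* the ranking model runs, so opting out genuinely removes algorithmic influence rather than merely hiding it.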
Algorithmic experience and transparency
Recent research on algorithmic adoption and algorithmic behavioural actions has drawn on perceived transparency and fairness to explain users’ attitudes, actual use, level of acceptance, satisfaction and continuance intention (Shin 2020; Shin, Zhong & Biocca 2020). Transparency refers to how the system makes visible what the algorithm knows about a user and explains why the algorithm presents results based on that profiling (Alvarado & Waern 2018), which in turn improves the algorithmic experience. In the same vein, Shin’s (2020) trust feedback loop suggests that perceived transparency and accuracy assure trust, which in turn facilitates intention and satisfaction.
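One way to picture this kind of transparency is a recommendation that carries its own provenance. The sketch below is purely illustrative, with hypothetical names; it attaches the inferred profile signals and a plain-language reason to each result, so the user can see both what the system knows and why an item was shown:

```python
from dataclasses import dataclass
from typing import Dict, List

# A hypothetical recommendation that carries its own provenance:
# what the system inferred about the user, and why this item was shown.
@dataclass
class ExplainedRecommendation:
    item_id: str
    score: float
    profile_signals: Dict[str, float]   # what the algorithm "knows" about the user
    reason: str                         # why the result was presented

def recommend(user_profile: Dict[str, float],
              item_tags: Dict[str, List[str]]) -> List[ExplainedRecommendation]:
    """Score items against the inferred profile and expose the match."""
    results = []
    for item_id, tags in item_tags.items():
        used = {t: user_profile[t] for t in tags if t in user_profile}
        results.append(ExplainedRecommendation(
            item_id=item_id,
            score=sum(used.values()),
            profile_signals=used,
            reason="Matched your inferred interests: " + (", ".join(used) or "none"),
        ))
    return sorted(results, key=lambda r: r.score, reverse=True)

profile = {"sci-fi": 0.8, "cooking": 0.3}
items = {"dune": ["sci-fi"], "bake-off": ["cooking"], "news": ["politics"]}
for rec in recommend(profile, items):
    print(rec.item_id, rec.score, rec.reason)
```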
Algorithmic experience and algorithmic awareness: the use of explanation
Another substantial body of work focuses on helping users make sense of intelligent systems, for example through explanations. Such explanations often target the algorithmic decision-making process or a particular output instance. On the one hand, some authors find that explanations positively influence perceived transparency (Brunk, Mattern & Riehle 2019). On the other hand, others argue that the effectiveness of an explanation depends on the transparency mechanisms involved (awareness, correctness, accountability and interpretability) as well as on how the system explains (Rader, Cotter & Cho 2018). For example, differences in explanation style (input influence, sensitivity, case-based or demographic), delivery method and modality affect users’ perceptions of justice differently (Binns et al. 2018).
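To illustrate how the explanation styles compared by Binns et al. (2018) differ, here is a toy sketch built around an invented linear scoring model; the input-influence, sensitivity and demographic styles each explain the same decision from a different angle (the case-based style, which points to a similar past case, is omitted for brevity):

```python
# Toy linear scoring model; weights and inputs are invented for illustration.
weights = {"income": 0.5, "age": 0.2, "postcode": 0.3}
applicant = {"income": 0.9, "age": 0.4, "postcode": 0.1}

def input_influence_explanation(w, x):
    """Input influence: rank each feature's contribution to this decision."""
    contrib = {f: w[f] * x[f] for f in w}
    return sorted(contrib.items(), key=lambda kv: kv[1], reverse=True)

def sensitivity_explanation(w, x, feature, delta=0.1):
    """Sensitivity: how the score would shift if one input changed slightly."""
    return w[feature] * delta

def demographic_explanation(outcome_rate: float) -> str:
    """Demographic: describe outcomes for people with similar attributes."""
    return f"{outcome_rate:.0%} of applicants with a similar profile got this outcome."

print(input_influence_explanation(weights, applicant))
print(sensitivity_explanation(weights, applicant, "income"))
print(demographic_explanation(0.72))
```

Even in this toy form, the three functions make clear why users might judge the same decision differently depending on which style of explanation they receive.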
In conclusion…
Algorithms are becoming an integral part of most everyday services, and algorithmic experience is emerging as a key topic for approaching human-centered AI. In other words, creating AI from the perspective of what satisfies human and societal needs should concern us far more than pushing what is technically possible.
Recent research has explored different ways to increase users’ satisfaction with and adoption of AI, including improving user participation, increasing transparency and fairness, and using explanations. However, our understanding of these possible solutions is still limited, which calls for further examination and attention. By understanding user cognition and perception, future work can be dedicated to designing insightful, human-centred algorithmic systems.
Written by Diem-Trang Vo
Edited by Duy Dang-Pham
References
Alvarado, O & Waern, A 2018, ‘Towards algorithmic experience: Initial efforts for social media contexts’, in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1-12.
Binns, R, Van Kleek, M, Veale, M, Lyngs, U, Zhao, J & Shadbolt, N 2018, ‘“It’s Reducing a Human Being to a Percentage”: Perceptions of Justice in Algorithmic Decisions’, in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1-14.
Brunk, J, Mattern, J & Riehle, DM 2019, ‘Effect of Transparency and Trust on Acceptance of Automatic Online Comment Moderation Systems’, in 2019 IEEE 21st Conference on Business Informatics (CBI), vol. 1, pp. 429-35.
Eiband, M, Völkel, ST, Buschek, D, Cook, S & Hussmann, H 2019, ‘When people and algorithms meet: user-reported problems in intelligent everyday applications’, in Proceedings of the 24th International Conference on Intelligent User Interfaces, pp. 96-106.
Kumar, A, Braud, T, Tarkoma, S & Hui, P 2020, ‘Trustworthy AI in the Age of Pervasive Computing and Big Data’, arXiv preprint arXiv:2002.05657.
Lee, MK, Kusbit, D, Kahng, A, Kim, JT, Yuan, X, Chan, A, See, D, Noothigattu, R, Lee, S & Psomas, A 2019, ‘WeBuildAI: Participatory framework for algorithmic governance’, Proceedings of the ACM on Human-Computer Interaction, vol. 3, no. CSCW, pp. 1-35.
Rader, E, Cotter, K & Cho, J 2018, ‘Explanations as mechanisms for supporting algorithmic transparency’, in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1-13.
Shin, D 2020, ‘How do users interact with algorithm recommender systems? The interaction of users, algorithms, and performance’, Computers in Human Behavior, p. 106344.
Shin, D, Zhong, B & Biocca, FA 2020, ‘Beyond user experience: What constitutes algorithmic experiences?’, International Journal of Information Management, vol. 52, p. 102061.