What the human perspective reveals about news recommender systems, and how it can be integrated into media practices

Activity: Talk or presentation type: Lecture and oral contribution

Arni Már Einarsson - Other


AI systems have become accessible to organizations beyond big-tech companies, offering democratic news institutions new means to automate tasks and enrich services. However, AI systems are often treated as complex systems that citizens without technical expertise cannot comprehend. In our case of news recommender systems (NRS), the systems differ from traditional user interfaces by their adaptive and continuous nature: NRS adapt news exposure to individual preferences, change in accordance with their data foundation, and are continuously A/B tested and optimized (Seaver, 2019). This adaptive nature hinders the involvement of users in the development process (Hartikainen et al., 2022; Xu, 2019; Yang et al., 2020). Instead, the user perspective is framed more speculatively as imaginaries (Bucher, 2017) or folklore (Ytre-Arne & Moe, 2021), to which we seek to devise an alternative. In the context of Platform Intelligence in News – a project implementing and evaluating NRS in a Danish legacy news organization – this paper asks: “What does user involvement reveal about news recommender systems, and how can the users’ perspective be integrated into everyday media practices?” Addressing these questions, we interviewed twelve active users of a Danish news outlet currently implementing NRS about their media diets, familiarity with recommender systems, perceived potentials, and perceived concerns. A topical analysis of the interview transcripts reveals five main findings:
1. Users seek efficient ways to find relevant content.
2. Users expect recommendations of high quality.
3. Users expect, and are capable of identifying, commercial intentions structuring recommendations.
4. Users display diverse media diets and welcome serendipity but fear filter bubbles.
5. Users fear that accountability is the cost of personalization.

Although most users consider themselves technical novices, their reflections on system logics, opportunities, and consequences correspond with discussions in the literature on relevance and diversity (Bodó et al., 2019; Mattis et al., 2022), control (Harambam et al., 2019), trust (Bodó, 2021; Shin, 2020), and editorial priorities (Møller, 2022; Vrijenhoek et al., 2022). Hence, in a development process, users disclose meaning-making that can inform the appropriate balancing of, for example, personal relevance versus diversity, free versus paid content recommended, and recommended versus editorially curated content in the news interface. As systems continuously evolve, news reading values can be operationalized in recurring questionnaires to complement existing system- and industry-centered metrics for recommendations (Ge et al., 2010; Vrijenhoek et al., 2022).
We also suggest that user involvement is about empathizing with users and understanding news reading holistically rather than as isolated behavior. For instance, users describe using various news outlets selectively for different purposes (e.g., sports in one venue and politics in another). Therefore, filter bubble effects appear less prevalent in a Danish context, whereas concerns about media accountability and misalignment with users’ expectations appear more severe. As personalization is becoming normalized in news media (Møller, 2022), devising cross-media auditing methods (e.g., Heuer et al., 2021; Sandvig et al., 2014) can be a promising path to monitor the development of news personalization and to ensure that algorithmic systems serve not only the commercial but also the democratic aspirations of news institutions.
References
Bodó, B. (2021). Mediated trust: A theoretical framework to address the trustworthiness of technological trust mediators. New Media & Society, 23(9), 2668–2690. https://doi.org/10.1177/1461444820939922
Bodó, B., Helberger, N., Eskens, S., & Möller, J. (2019). Interested in diversity: The role of user attitudes, algorithmic feedback loops, and policy in news personalization. Digital Journalism, 7(2), 206–229. https://doi.org/10.1080/21670811.2018.1521292
Bucher, T. (2017). The algorithmic imaginary: Exploring the ordinary affects of Facebook algorithms. Information, Communication & Society, 20(1), 30–44. https://doi.org/10.1080/1369118X.2016.1154086
Ge, M., Delgado-Battenfeld, C., & Jannach, D. (2010). Beyond accuracy: Evaluating recommender systems by coverage and serendipity. Proceedings of the Fourth ACM Conference on Recommender Systems, 257–260. https://doi.org/10.1145/1864708.1864761
Harambam, J., Bountouridis, D., Makhortykh, M., & van Hoboken, J. (2019). Designing for the better by taking users into account: A qualitative evaluation of user control mechanisms in (news) recommender systems. Proceedings of the 13th ACM Conference on Recommender Systems, 69–77. https://doi.org/10.1145/3298689.3347014
Hartikainen, M., Väänänen, K., Lehtiö, A., Ala-Luopa, S., & Olsson, T. (2022). Human-centered AI design in reality: A study of developer companies’ practices. Nordic Conference on Human-Computer Interaction, 1–11. https://doi.org/10.1145/3546155.3546677
Heuer, H., Hoch, H., Breiter, A., & Theocharis, Y. (2021). Auditing the biases enacted by YouTube for political topics in Germany. Mensch Und Computer 2021, 456–468. https://doi.org/10.1145/3473856.3473864
Makhortykh, M., de Vreese, C., Helberger, N., Harambam, J., & Bountouridis, D. (2021). We are what we click: Understanding time and content-based habits of online news readers. New Media & Society, 23(9), 2773–2800. https://doi.org/10.1177/1461444820933221
Mattis, N., Masur, P., Möller, J., & van Atteveldt, W. (2022). Nudging towards news diversity: A theoretical framework for facilitating diverse news consumption through recommender design. New Media & Society, 14614448221104412. https://doi.org/10.1177/14614448221104413
Møller, L. A. (2022). Recommended for you: How newspapers normalise algorithmic news recommendation to fit their gatekeeping role. Journalism Studies, 23(7), 800–817. https://doi.org/10.1080/1461670X.2022.2034522
Sandvig, C., Hamilton, K., Karahalios, K., & Langbort, C. (2014). Auditing algorithms: Research methods for detecting discrimination on internet platforms. Data and Discrimination: Converting Critical Concerns into Productive Inquiry, 22, 4349–4357.
Seaver, N. (2019). Knowing Algorithms. In J. Vertesi, D. Ribes, C. DiSalvo, Y. Loukissas, L. Forlano, D. K. Rosner, S. J. Jackson, & H. R. Shell (Eds.), DigitalSTS (pp. 412–422). Princeton University Press. https://doi.org/10.2307/j.ctvc77mp9.30
Shin, D. (2020). How do users interact with algorithm recommender systems? The interaction of users, algorithms, and performance. Computers in Human Behavior, 109, 106344. https://doi.org/10.1016/j.chb.2020.106344
Vrijenhoek, S., Bénédict, G., Gutierrez Granada, M., Odijk, D., & De Rijke, M. (2022). RADio – Rank-aware divergence metrics to measure normative diversity in news recommendations. Proceedings of the 16th ACM Conference on Recommender Systems, 208–219. https://doi.org/10.1145/3523227.3546780
Xu, W. (2019). Toward human-centered AI: A perspective from human-computer interaction. Interactions, 26(4), 42–46. https://doi.org/10.1145/3328485
Yang, Q., Steinfeld, A., Rosé, C., & Zimmerman, J. (2020). Re-examining whether, why, and how human-AI interaction is uniquely difficult to design. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–13. https://doi.org/10.1145/3313831.3376301
Ytre-Arne, B., & Moe, H. (2021). Folk theories of algorithms: Understanding digital irritation. Media, Culture & Society, 43(5), 807–824. https://doi.org/10.1177/0163443720972314

24 May 2023

Event (Conference)

Title: ICA Pre-conference: Building the Conditions for Responsible Human-Centric AI Systems
Date: 24/05/2023
City: Toronto
Country/Territory: Canada
Degree of recognition: International event