BORIS Theses
Bern Open Repository and Information System

Fair AI-Based Voice Agents for Product Recommendations. Design Measures for Socio-Technical Fairness

Weith, Helena Victoria Katharina (2024). Fair AI-Based Voice Agents for Product Recommendations. Design Measures for Socio-Technical Fairness. (Thesis). Universität Bern, Bern

24weith_hvk.pdf - Thesis
Available under License Creative Commons: Attribution-Noncommercial (CC-BY-NC 4.0).

Abstract

Artificial intelligence (AI) offers extensive opportunities in customer services, not only for businesses but also for customers. Recommender systems and voice agents are examples of such opportunities, enabling personalized recommendations and eyes-off, hands-off interaction. However, while AI presents substantial benefits, it also comes with risks, for example in the form of unfairness (Weith et al., 2023). It is thus crucial to understand the application of AI for customer services, such as voice agents providing product recommendations, and how fairness is affected. Drawing on four distinct papers, this dissertation proposes measures to achieve fairness of AI for voice agent product recommendations (VAPRs). Based on both qualitative and quantitative methodological approaches, it provides conceptual and empirical contributions on how to achieve fairness of VAPRs. It offers empirical evidence on the effect of design measures on users’ fairness perceptions and their subsequent consequences, and thus also actionable measures for businesses to achieve fairness of VAPRs. Table 1 provides an overview of the four papers, including their title, authorship, methodological approach, publication medium, and publication status.

Paper One, “Artificial Intelligence for Customer Services – Focusing on Recommender Systems and Voice Agents”, is a single-authored paper based on a qualitative literature review. It served as input for a later adapted version, co-authored with Christian Matt, which was accepted for publication within a book series on Human-Computer Interaction in 2024. Paper Two, “Faire KI-basierte Sprachassistenten – Handlungsfelder und Massnahmen zur Erzielung einer sozio-technischen Fairness von Sprachassistenten” (Fair AI-Based Voice Assistants – Fields of Action and Measures for Achieving Socio-Technical Fairness of Voice Assistants), also single-authored, is based on 12 expert interviews. It was accepted for publication in HMD – Praxis der Wirtschaftsinformatik.
Paper Three, “When Do Customers Perceive Artificial Intelligence as Fair? An Assessment of AI-based B2C E-Commerce”, is co-authored with Christian Matt and based on 20 user interviews. It was published in the Proceedings of the 55th Hawaii International Conference on System Sciences in 2022. Paper Four, “Information Provision Measures for Voice Agent Product Recommendations – The Effect of Process Explanations and Process Visualizations on Fairness Perceptions”, is also co-authored with Christian Matt and based on two online between-subjects experiments with 553 participants in total. It was published after one review round in Electronic Markets – The International Journal on Networked Business in November 2023.

Figure 1 serves as a guiding overview, outlining the contextual relationship between the four papers constituting this dissertation. Paper one serves as a foundation, providing an overarching view of AI in customer services. It highlights the functions and benefits of two core AI-based systems, recommender systems and voice agents, and their joint application for voice agent product recommendations. Alongside these benefits, however, paper one also introduces the challenges associated with AI for customer services, focusing especially on the fairness of AI-based systems. Recognizing the multifaceted nature of fairness for VAPRs, papers two, three, and four delve further into this challenge. While paper one outlines that fairness of VAPRs can be split into two perspectives, social fairness and technical fairness, paper two focuses on the overarching concept of socio-technical fairness, introducing both perspectives and their interplay. It provides a comprehensive framework delineating five areas for action, with associated measures for businesses to ensure the fairness of their voice agents. The framework underscores the inseparable interplay between the social and technical perspectives.
Papers three and four further narrow their focus to the social fairness perspective of VAPRs, i.e., users’ perceptions of fairness. Paper three presents a structured set of 19 comprehensive and actionable rules for achieving fairness of AI. These rules rest on a qualitative basis, paving the way for paper four to empirically measure the impact of two key design measures (process explanations and process visualizations) that address VAPRs’ shortcomings.

Paper One: Artificial Intelligence for Customer Services – Focusing on Recommender Systems and Voice Agents.

Paper one focuses on the application of AI for customer services. It outlines the technical foundations of AI, its application along the customer journey when using customer services, and managerial implications for the implementation and management of AI. It focuses especially on the application of AI-based recommender systems, voice agents, and their combination for customer services. While machine learning enables recommender systems to provide personalized product recommendations to customers based on vast and complex data (Zhang et al., 2021; Ilkou et al., 2020), natural language processing enables voice agents to interact with customers solely through speech (Diederich et al., 2022). While enabling these advantages, AI-based systems also face challenges and risks, which are further outlined in paper one. One core risk is a lack of fairness of AI-based systems. Negative examples of unfair AI have attracted public attention, such as Amazon’s decommissioned HR recruiting tool preferring men over women (Dastin et al., 2018), the e-commerce shop Staples offering the same product to different customers at different price points based on their ZIP codes (Valentino-DeVries et al., 2012), or voice agents providing unsatisfying responses to people with accents (Harwell, 2018).
However, businesses might not be aware of such issues, even though a lack of fairness can cause negative economic, reputational, or legal effects (Dolata et al., 2021). It is thus important for businesses to understand fairness of AI and how to achieve it. The following papers shed light on the fairness of voice agents for product recommendations.

Paper Two: Faire KI-basierte Sprachassistenten – Handlungsfelder und Massnahmen zur Erzielung einer sozio-technischen Fairness von Sprachassistenten.

Fairness of artificial intelligence can be considered from two perspectives, a social and a technical fairness perspective, combined as socio-technical fairness (Dolata et al., 2021). The technical perspective employs mathematical methods to mitigate biases and disparities in data and algorithms (Barocas et al., 2021; Feuerriegel et al., 2020). In contrast, the social fairness perspective adopts a user-centered approach, exploring perceptions of fairness, the factors influencing those perceptions, and subsequent behavioral responses. Ultimately, the technical and social perspectives are closely related, and addressing them requires a holistic understanding (Dolata et al., 2021). With this in mind, paper two provides a conceptual framework based on 12 expert interviews, outlining areas of action and specific measures to ensure socio-technical fairness of voice agents. It thereby extends the conceptual socio-technical perspective on AI with specific measures for voice agents. Paper two specifically targets practitioners, providing them with an overview of the variety and interdependencies of measures for socio-technical fairness.

Paper Three: When Do Customers Perceive Artificial Intelligence as Fair? An Assessment of AI-based B2C E-Commerce.
While a holistic understanding of the socio-technical fairness perspective is essential, a thorough qualitative and quantitative understanding of each of the two fairness perspectives individually is also required. While the technical fairness perspective aims at algorithmic accuracy, the social fairness perspective delves into perceptions of fairness and examines users’ reactions. In contrast to the notable attention the technical fairness perspective has received, research on the social fairness perspective is rather underrepresented (Kordzadeh et al., 2021; Feuerriegel et al., 2020). Recognizing the importance of the social dimension in VAPRs, paper three addresses this gap by shedding light on the social fairness perspective. Its focus lies on the user-centric and subjective aspects influencing individuals’ perceptions of the fairness of VAPRs. The social perspective consists of four well-established, interrelated fairness dimensions: procedural, distributive, interpersonal, and informational (Carr, 2007; Beugré et al., 2001). To clarify how to design fair voice agents for e-commerce, paper three draws on 20 in-depth interviews with regular e-commerce users. The interviews offer a qualitative exploration of users’ perceptions, resulting in 19 AI fairness rules outlining when customers perceive AI-based voice agents for product recommendations as fair. These rules build on the previously established fairness dimensions (Colquitt et al., 2015; Carr, 2007) while specifically focusing on the social fairness perspective and reflecting the specific characteristics of voice agents. They provide guidelines for practitioners developing and designing voice agents for product recommendations.

Paper Four: Information Provision Measures for Voice Agent Product Recommendations – The Effect of Process Explanations and Process Visualizations on Fairness Perceptions.
While paper three applies a qualitative approach, outlining overall design measures, it also lays the foundation for paper four. Paper four provides quantitative empirical evidence on users’ fairness perceptions of VAPRs and their subsequent behavioral responses. While VAPRs offer two key advantages, hands-free and eyes-off ubiquitous voice commerce transactions, they also exhibit two major shortcomings (recommendation engine opacities and audio-based constraints) that limit users’ information level. It can thus be challenging for users to assess recommendations, provoking perceptions of being treated unfairly (Dolata et al., 2021; Kordzadeh et al., 2021). Drawing on information processing theory (Sweller et al., 2011; Atkinson et al., 1968) and stimulus-organism-response theory (Kordzadeh et al., 2021; Mehrabian et al., 1974), paper four empirically validates the effect of two information provision measures, process explanations and process visualizations, on customers’ fairness perceptions. Based on two online between-subjects experiments with a total of 553 participants, paper four demonstrates that process explanations have a positive impact on perceptions of fairness, whereas process visualizations do not. Process explanations based on users’ profiles and their purchase behavior show the strongest effects in improving perceptions of fairness. Paper four contributes to the literature on fair and explainable AI by extending the rather algorithm-centered perspectives (Kordzadeh et al., 2021; Rai, 2020), considering the audio-based constraints of voice agents and directly linking them to users’ perceptions and responses. It also provides guidance for practitioners on designing suitable process explanations rather than process visualizations.

Item Type: Thesis
Dissertation Type: Cumulative
Date of Defense: 5 March 2024
Subjects: 000 Computer science, knowledge & systems
300 Social sciences, sociology & anthropology
300 Social sciences, sociology & anthropology > 330 Economics
600 Technology > 650 Management & public relations
Institute / Center: 03 Faculty of Business, Economics and Social Sciences > Department of Business Management > Institute of Information Systems
Depositing User: Hammer Igor
Date Deposited: 15 Oct 2025 07:14
Last Modified: 15 Oct 2025 07:40
URI: https://boristheses.unibe.ch/id/eprint/5990

Actions (login required)

View Item