The Role of AI in Shaping Trust: Group Identity as a Moderator in Trust Toward Negative Feedback Providers
MPhil Thesis Defense
04 Aug 2025 (Mon)
2:30pm – 4:30pm
LSK Rm5047
Ms Yu (Aria) Guo, HKUST

This thesis investigates how feedback providers' use of artificial intelligence (AI) influences recipients' trust in organizational settings. While prior research has primarily examined trust in AI as an autonomous agent, less is known about the interpersonal consequences of AI-assisted task completion for trust between colleagues. Drawing on the multidimensional model of trust proposed by Mayer, Davis, and Schoorman (1995), which comprises ability, benevolence, and integrity, and on attribution theory, we explore how AI usage shapes recipients' perceptions of feedback providers, with feedback valence and social identity as key moderators.

Across three experimental studies, we tested these relationships in organizational and student samples. Study 1 employed a two-cell design comparing trust perceptions when feedback providers used AI versus no AI in completing tasks. Results showed that AI usage increased perceived integrity, which in turn mediated the effect on trust, especially in the context of negative feedback. Study 2 incorporated feedback valence (positive vs. negative) in a 2×2 design and found that AI usage reduced perceived integrity and trust under positive feedback but not under negative feedback. Study 3 examined the moderating role of social identity (ingroup vs. outgroup) in a student sample and revealed no significant moderation effect, suggesting potential cultural or contextual boundary conditions.

Overall, this research advances our understanding of how AI integration in workplace tasks reshapes interpersonal evaluations and trust formation, highlighting the nuanced role of perceived integrity as a key mediator. The divergent results across studies, however, suggest that contextual factors may substantially shape these dynamics. These findings offer practical insights for organizations seeking to manage trust and feedback processes in increasingly AI-augmented work environments, and they underscore the need for future research to probe the boundaries of these mechanisms.