LSST Luton Researchers Reveal How Generative AI Shapes Trust and Bias in Digital Platforms

By Kunal Chan Mehta | Article Date: 19 March 2026

Hararia Ijaz, Lecturer and Teaching Fellow in Business, LSST Luton (l) and Kiran Arooje, Teaching Fellow and Module Leader (r), LSST Luton, identify how AI-generated content interacts with human cognitive bias, shaping engagement and trust online. Photo: LSST.
 
An innovative study led by Hararia Ijaz, Lecturer and Teaching Fellow, and Kiran Arooje, Teaching Fellow in Business, Coordinator of Personal Academic Tutors and Module Leader (both at LSST Luton), in collaboration with experts across the UK and Pakistan, has uncovered how generative AI subtly influences users’ cognitive biases, shaping engagement and trust on digital platforms.

Published in TPM – Testing, Psychometrics, Methodology in Applied Psychology, the research draws on survey data from 386 UK participants and employs Structural Equation Modelling (SEM) – a sophisticated statistical technique that allows researchers to map complex relationships between observed behaviours and underlying psychological factors. The findings reveal that transparency in AI systems is key to reducing bias and boosting user trust, and underscore the urgent need for responsible AI practices as digital platforms increasingly integrate generative technologies.

“As generative AI becomes more pervasive in online communications, understanding its impact on human cognition is essential,” said Hararia Ijaz. “Our research provides empirical evidence of how AI-generated content interacts with cognitive biases, influencing not just what users see, but how they interpret and respond to information. This knowledge is vital for designing transparent and ethical AI systems that maintain user trust while harnessing the potential of emerging technologies.”

Drawing on evidence from the UK, the research highlights the increasingly complex relationship between technology and trust in the digital economy. As businesses integrate AI tools into customer service, marketing and online communication, the findings suggest that consumers’ responses are not driven solely by the information they receive but also by the psychological cues and biases that shape how that information is interpreted.

“At LSST, we champion research that advances academic understanding while addressing the current challenges of the modern digital world,” said Ali Jafar Zaidi, CEO of LSST. “This study exemplifies our commitment to exploring how emerging technologies can be used responsibly to benefit society.”

“Generative AI is transforming how organisations engage with consumers online,” said Charlie Tennant, Vice Principal of LSST. “But as this research shows, maintaining transparency and trust will be essential as digital platforms become increasingly AI-driven.”

Full Author List:

The study is the result of a multidisciplinary collaboration with experts in business, IT, and clinical psychology:

Hararia Ijaz – Lecturer and Teaching Fellow in Business, LSST Luton.

Ahmed Touqeer – Digital Marketing Manager, Jarvis Technologies, UK.

Kiran Arooje – Teaching Fellow in Business / Coordinator of Personal Academic Tutors / Module Leader, LSST Luton.

Muhammad Maaz Ul Haq – Assistant Manager IT, University of Education, Lahore, Pakistan.

Syeda Manal Fatima – PhD Scholar in Clinical Psychology, University of Gujrat, Pakistan.

Saira Majid – Head of Department, Clinical Psychology, The Superior University, Lahore, Pakistan.

While generative AI offers powerful opportunities to personalise content and boost engagement, the study suggests that transparency and credibility will be essential for maintaining consumer trust on increasingly automated digital platforms.

Looking ahead, the study also opens avenues for further exploration in this rapidly evolving field. “As generative AI continues to develop, future research should investigate practical interventions, such as AI transparency tools and user education strategies to reduce cognitive bias and improve trust,” said Kiran Arooje. “It is equally important to explore how organisations can implement responsible AI guidelines in real-world settings, helping to bridge the gap between theory and practice.”

The full article is available here: https://www.tpmap.org/article-view/?id=4236 / DOI: https://doi.org/10.5281/zenodo.19007061

For additional information or interviews, please direct questions to LSST’s Public Relations Manager via kunal.mehta@lsst.ac.

We hope you enjoyed reading LSST News. Join our vibrant academic community and explore endless opportunities for growth and learning at www.lsst.ac/fe, www.lsst.ac/courses or via admissions@lsst.ac. Discover your path at LSST and embark on a transformative educational journey today.

Think Further. Think Higher. Think LSST.
