News

LSST Wembley Academic to Present AI and Ethics Research at Oxford University Conference

By Kunal Chan Mehta | Article Date: 4 July 2025

 

We are delighted to announce that a leading research paper authored by Dr Elaheh Barzegar, Graduate Trainee Lecturer at LSST Wembley, has been accepted for presentation at the 9th International Conference on Modern Research in Education, Teaching and Learning (ICMETL 2025) at Oxford University.  

 

The conference (22–24 August 2025) will convene leading scholars from around the globe to examine pressing developments in pedagogy and academic innovation.

 

Titled “Balancing Ethics and Mental Health: The Influence of AI Tool Use in University Learning Environments”, Dr Barzegar’s paper interrogates the increasingly complex relationship between AI, student wellbeing and academic integrity.

 

In a rapidly digitising educational landscape, where AI-assisted tools have become embedded in everyday learning practices, Dr Barzegar’s research confronts a pivotal question: At what point does AI support cross the threshold into ethical ambiguity and psychological strain? 

 

“As an educator, I’ve witnessed a rapid shift in how students engage with AI to manage academic demands,” Dr Barzegar explains. “While these tools offer undeniable benefits, I observed patterns of over-reliance, increased stress and ethical confusion – especially in the absence of clear guidance. This duality is what prompted my inquiry.” 

 

Ethics at the Edge of Innovation 

The research highlights a central ethical dilemma: students are often unsure where legitimate AI assistance ends and academic dishonesty begins.

“Many students operate in a grey zone,” Dr Barzegar observes. “They are uncertain whether using AI constitutes innovation or infringement. This lack of clarity – particularly when institutional messaging is vague – can lead to unintentional breaches of academic integrity.” 

 

Moreover, the research raises concerns about the erosion of critical thinking skills. With generative AI tools offering instant output, the temptation to bypass original cognitive effort is real. Dr Barzegar advocates for discipline-specific ethical guidelines, noting that universal policies often fail to account for nuanced academic contexts. 

 

Mental Health in the Machine Age 

Perhaps most striking is the research’s psychological dimension. The study finds that unguided or excessive use of AI correlates with declining student wellbeing.

“Students reported both relief and anxiety,” Dr Barzegar explains. “On one hand, AI helped them manage time and workload. On the other, its unchecked use caused stress, particularly when students felt unsure whether their usage was ‘allowed’. This mirrors the principles of cognitive load theory and the transactional model of stress: ambiguity increases anxiety.”

 

The emotional toll, Dr Barzegar notes, is compounded by inconsistent messaging across higher education institutions, leading to diminished academic confidence and heightened emotional fatigue. 

 

A Framework for Responsible Integration 

In response to these challenges, the paper proposes a three-pronged institutional framework for the ethical and sustainable adoption of AI in higher education: 

Clear Institutional AI Policies – Develop and disseminate transparent, accessible guidelines that define ethical AI use, with discipline-specific examples to eliminate ambiguity. 

Education and Awareness – Deliver targeted workshops and digital literacy training to build students’ understanding of AI’s capabilities, limitations, and academic boundaries. 

Integrated Mental Health Support – Equip student wellbeing services to recognise AI-related stressors and provide tailored support, ensuring students feel safe, informed and empowered in their learning. 

 

“By embedding ethics, education and emotional support into the AI integration process, we can cultivate learning environments where students feel both empowered and protected,” Dr Barzegar concludes. “AI should augment education and not erode its integrity or its humanity.” 

 

Mr Ali Jafar Zaidi, LSST’s Deputy CEO, added: “Dr Barzegar’s research addresses one of the most intellectually and ethically exigent questions facing higher education today. We are profoundly honoured that such a contribution, emerging from LSST’s academic community, will be presented at a forum as esteemed as Oxford University.”

 

The acceptance of this research by ICMETL 2025 highlights LSST’s ongoing commitment to scholarship that is both socially relevant and academically rigorous. As universities worldwide grapple with the challenges posed by AI, this work offers an urgently needed blueprint for balancing innovation with responsibility. 

  

Further details about ICMETL 2025 can be found at www.icmetl.org. 

 

For additional information, please direct questions to LSST’s Public Relations Manager via kunal.mehta@lsst.ac. 

 

We hope you enjoyed reading LSST News. Join our vibrant academic community and explore endless opportunities for growth and learning at www.lsst.ac/courses or via admissions@lsst.ac. Discover your path at LSST and embark on a transformative educational journey today.  Think Higher. Think LSST.    

 

For more LSST News visit www.lsst.ac/life  


