Beyond the Numbers: How Artificial Intelligence is Redefining the Financial Analyst and How LSST Students Can Turn Insight into Impact
15 December 2025
By Diwa Ojha, Lecturer in Business, LSST Elephant and Castle
“AI will not replace financial analysts, but analysts who understand AI will replace those who don’t.”
A New Era of Financial Understanding
The finance profession is experiencing one of the most significant periods of transformation in its history. Artificial Intelligence (AI) has moved from an emerging concept to a practical tool embedded across core financial activities, including automated investment modelling, real-time fraud detection, and advanced forecasting (Afua et al., 2024). While these developments have improved speed and accuracy, they have also reshaped expectations of financial professionals. Technical competence alone is no longer sufficient; effective financial practice now requires ethical awareness, critical judgement, and the ability to interpret data within its wider social and organisational context. Human-centred financial intelligence, grounded in ethical judgement, critical thinking, and professional responsibility, will ultimately determine whether AI is integrated responsibly and sustainably within modern finance.
Within this changing landscape, institutions such as LSST occupy an increasingly important role. As a provider of inclusive, practice-focused higher education, LSST supports non-traditional students who are preparing to enter professions shaped by rapid technological change. From my perspective, this context demands an educational approach that extends beyond technical training. It requires an emphasis on reflective thinking, ethical awareness, and professional judgement: capabilities that are essential in a financial sector where automated systems increasingly influence decision-making.
Although AI can analyse thousands of variables within seconds, it cannot interpret the human context behind financial decisions. This limitation reinforces the continuing importance of professional judgement in financial analysis. Several studies suggest that while AI enhances forecasting accuracy and supports risk management, its outputs still require careful human interpretation to ensure responsible and ethical decision-making (Celestin & Mishra, 2025). From my perspective, this highlights a broader reality: AI may strengthen analytical capability, but its true value depends on the ethical judgement and critical thinking applied by those who use it.
As a result, the role of modern financial analysts is evolving. Analysts increasingly act as a bridge between digital systems and human values, translating algorithmic outputs into informed, accountable decisions. This requires the confidence to question technology, the skill to interpret complex information, and the judgement to consider wider social and ethical implications. Through applied teaching, ethical discussion, and reflective learning, LSST supports students in developing the judgement and confidence needed to use AI responsibly within modern finance.
Developing the Skills of Future Analysts
The role of the financial analyst has evolved fundamentally from record-keeping and reporting to insight generation and ethical judgement. This article argues that human-centred financial intelligence, rather than technical capability alone, will determine whether AI is integrated responsibly within modern finance.
Much of the discussion around AI highlights its ability to improve efficiency and automate complex processes, and there is no doubt that these technologies have made financial work faster and, in many cases, more accurate. However, Mohsen et al. (2024) emphasise that technical expertise without ethical awareness can increase the risk of bias and weaken accountability in financial decision-making. While AI can support analytical capability, it does not remove the need for judgement, reflection, and responsibility. In practice, it is these human skills that determine whether AI leads to better decisions or simply faster ones.
This reveals an important contrast within the literature. While Celestin and Mishra (2025) focus on AI's capacity to enhance forecasting accuracy and risk management, Mohsen et al. (2024) caution that such technical improvements may heighten ethical and governance risks if human oversight is insufficient. Together, these perspectives highlight a central tension in AI adoption: performance gains do not automatically translate into responsible financial decision-making.
Drawing on my experience as a lecturer in business, I have observed that many non-traditional learners approach AI with hesitation, not because they lack ability, but because prior educational and professional experiences did not emphasise digital confidence. When AI is presented as a complex technical system, engagement can decline. However, when it is framed as a decision-support tool that raises ethical, regulatory, and human questions, engagement and confidence increase. This reinforces a key insight: effective financial education begins with understanding and judgement, not technical expertise alone.
Ethics, Education, and Human Judgement
AI is now widely used across the financial sector, and there is broad agreement that it has made many processes more efficient. Financial institutions increasingly use AI to support tasks such as fraud detection, risk assessment, and forecasting. However, reliance on AI alone can lead to unclear or incomplete decisions if outputs are accepted without question. As a result, human oversight remains essential.
This issue is also visible in educational contexts. During teaching activities in my Finance and Reporting for Management Accounting classes, I have encountered situations where students used AI tools to solve financial problems but struggled to explain how or why a particular answer was reached. In some cases, the solution provided by AI was inaccurate; in others, it was correct but poorly understood. This raises an important ethical concern. When AI is used without reflection, learning becomes superficial and decision-making loses depth. Understanding the reasoning behind a calculation is as important as arriving at the correct result.
This perspective is reflected across both academic research and professional practice. Rather than replacing human decision-making, AI is most effective when used as a support tool that informs judgement and requires ethical interpretation (Deloitte, 2025). Regulatory guidance reinforces this position, with bodies such as the UK Financial Conduct Authority making clear that responsibility for automated decisions continues to rest with human decision-makers. For example, major UK banks such as HSBC have adopted AI-driven fraud detection and transaction-monitoring systems while retaining human review panels for high-risk or ambiguous cases. This approach reflects regulatory expectations that AI should enhance, rather than replace, professional judgement, ensuring accountability remains clearly assigned to human decision-makers.
From Artificial Intelligence to Human Understanding
AI has undoubtedly transformed financial analysis by enabling faster calculations and more efficient data processing. However, speed and efficiency alone do not guarantee sound decisions. Financial judgement requires context, ethical awareness, and an understanding of consequences, all of which remain human responsibilities. This emphasis on human accountability is increasingly reinforced through regulation. The UK Financial Conduct Authority has consistently stressed that firms remain responsible for decisions supported by algorithms (FCA, 2025), while the EU Artificial Intelligence Act classifies many financial AI applications as high-risk, requiring transparency, strong governance controls, and meaningful human oversight (European Union, 2024). Together, these frameworks signal a clear regulatory expectation: ethical responsibility in finance cannot be automated.
For future financial professionals, this has important implications. The ability to interpret AI outputs, challenge automated recommendations, and justify decisions ethically is no longer optional; it is a core professional competency. Graduates entering the finance sector will be expected not only to understand how AI tools work, but to demonstrate accountability for how they are used in practice.
The future of finance will therefore depend not on how advanced AI becomes, but on how thoughtfully it is used. Professionals must be able to question outputs, recognise limitations, and apply judgement where automated systems fall short. Artificial intelligence can support financial capability, but it is human understanding that ensures decisions are responsible, fair, and meaningful.
References
Afua, W., Ajayi-Nifise, A.O., Bello, G., Tubokirifuruar, S., Odeyemi, O. and Falaiye, T. (2024). Transforming financial planning with AI-driven analysis: A review and application insights. World Journal of Advanced Engineering Technology and Sciences, 11(1), pp.240–257. doi: https://doi.org/10.30574/wjaets.2024.11.1.0053.
Celestin, M. and Mishra, A.K. (2025). AI-Driven Financial Analytics: Enhancing Forecast Accuracy, Risk Management, and Decision-Making in Corporate Finance. Janajyoti Journal, 3(1), pp.1–27. doi: https://doi.org/10.3126/jj.v3i1.83284.
Deloitte (2025). Trustworthy and Ethical AI Thought Leadership. [online] Available at: https://www.deloitte.com/us/en/what-we-do/capabilities/applied-artificial-intelligence/articles/trustworthy-ethical-ai-thought-leadership.html (Accessed: 12 December 2025).
European Union (2024). Regulation (EU) 2024/1689. [online] eur-lex.europa.eu. Available at: https://eur-lex.europa.eu/eli/reg/2024/1689/oj (Accessed: 12 December 2025).
FCA (2025). AI and the FCA: our approach. [online] Available at: https://www.fca.org.uk/firms/innovation/ai-approach (Accessed: 12 December 2025).
Mohsen, S.E., Hamdan, A. and Shoaib, H.M. (2024). Digital transformation and integration of artificial intelligence in financial institutions. Journal of Financial Reporting and Accounting, 23(2), pp.680–699. doi: https://doi.org/10.1108/jfra-09-2023-0544.