The Futility of Resisting AI in Education: A Historical and Critical Analysis
Article Date | 12 May, 2025
By Dr Muhammad Emdadul Haque (SFHEA), Senior Lecturer at LSST Wembley
Abstract
The rapid emergence of Artificial Intelligence (AI) in the educational realm marks a critical juncture in the history of pedagogy. Its ability to tailor learning, optimise administrative processes, and revolutionise assessment has long been recognised, yet the implementation of AI in education is confronted by intense opposition. Echoing past opposition to calculators in the 1980s and the internet in the early 2000s, this resistance is founded on fears of the erosion of critical thinking, ethical concerns, cheating, and the dehumanisation of learning environments.
This essay critically discusses the historical patterns of resistance to educational technology and situates the backlash against AI in this context. Drawing on theories of technological determinism (McLuhan, 1964) and moral panic (Cohen, 1972), it interrogates the socio-political processes behind resisting AI and evaluates the pedagogical implications of ignoring this dislocating shift. The article includes comparative international case studies (China’s policy-driven deployment of AI, Finland’s decentralised and ethical adoption, and the United States’ fragmented but innovation-led uptake) to contrast global approaches.
In addition, the article offers strategic advice for UK universities to adopt AI responsibly, harmonising technological progress with humanistic pedagogy. The conclusion posits that opposing AI in learning is not only futile but could be counterproductive to learner equity and institutional relevance. The emphasis must therefore shift from containment to co-creation, in which educators are empowered to ethically craft the future of AI-enriched learning.
1. Introduction
The 21st century has witnessed a convergence of rapid technological change and growing demands on education systems to prepare learners for an increasingly digital world. At the heart of this shift lies Artificial Intelligence (AI), a suite of computational technologies capable of learning, reasoning, and adapting, which are now being integrated into classrooms, learning platforms, and administrative systems worldwide (Chen, Chen and Lin, 2020). Defined broadly, AI in education (AIEd) refers to the use of machine learning, natural language processing, predictive analytics, and intelligent tutoring systems to enhance the effectiveness and efficiency of teaching and learning (Lin, Huang and Lu, 2023).
Despite its potential, the integration of AI into education has been met with apprehension. Educators express concerns about loss of pedagogical control, student dependency on automation, and the erosion of traditional academic values (Beirat et al., 2025). These anxieties are exacerbated by popular media portrayals of AI as a threat to human agency, and by legitimate fears regarding data privacy, algorithmic bias, and the commodification of education. Yet such reactions are not new. As will be shown throughout this paper, the history of educational technology is characterised by waves of resistance, followed by eventual assimilation.
This introduction sets the stage for a broader interrogation of why AI in education provokes resistance, what underlying societal and pedagogical concerns it surfaces, and how these concerns can be addressed. Drawing upon historical analogies and critical theory, the discussion foregrounds the urgent need to move beyond binary debates of “AI versus teacher” or “innovation versus tradition.” Instead, the focus must be on developing a nuanced understanding of AI as a tool that, when responsibly designed and implemented, can uphold and even enrich the core values of education: equity, creativity, criticality, and community.
To this end, this paper will first explore the historical antecedents of technological resistance in education, analysing parallels with the introduction of the printing press, calculators, and digital platforms. It will then examine the current landscape of AI adoption in education through global case studies, highlighting both best practices and ethical tensions. The final sections will propose a roadmap for UK higher education institutions to embrace AI not as a threat, but as a catalyst for pedagogical transformation. In doing so, the paper calls for a paradigm shift from reactive caution to strategic empowerment in how educational leaders, policymakers, and practitioners approach AI in learning environments.
2. Historical Resistance to Educational Technology
Resistance to educational technology is a recurrent historical trend. Each major technological development, from the printing press to web-based technology, has been met with scepticism born of fears over the loss of established pedagogy, the commodification of knowledge, or the displacement of teachers. This section explores three paradigmatic examples: the printing press, the calculator, and the internet.
2.1 The Printing Press: Memory Against Mass Learning
When Johannes Gutenberg invented the movable type printing press in the 15th century, scholars and clergy feared that it would weaken memory, promote heretical thinking, and overwhelm society with unregulated information. Prior to its invention, knowledge was stored and shared through oral culture and hand-written manuscripts, which were only available to the elite (Füssel, 2020). Critics felt that books would make students mentally lazy, relying on external sources rather than internal memory (Greenblatt, 2015; Finley, Naaz and Goh, 2018).
In spite of these fears, the printing press prompted an unprecedented expansion of literacy and scholarly study. It catalysed the Reformation, the Scientific Revolution, and ultimately, modern education. Today’s resistance to AI recalls similar concerns, especially regarding critical thinking, the retention of facts, and epistemic power. The historical irony is apparent: what was feared as a danger to education is now its foundation.
2.2 The Calculator: Automation vs. Cognition
The introduction of electronic calculators into schools in the 1970s and 1980s provoked widespread backlash from educators and policymakers who feared that dependence on machines would inhibit mental arithmetic and diminish mathematical reasoning (Cohen, 1987; Banks, 2011). One of the most publicised demonstrations of this resistance occurred in April 1986, when a group of elementary and secondary school teachers in Sumter, South Carolina, picketed against the use of calculators in grade schools. A widely circulated Associated Press photograph, published in The Daily Item on April 5, 1986, captured protestors holding signs that read “Turn them OFF until upper grades.” The teachers argued that premature reliance on calculators would hinder students’ understanding of core mathematical principles (Lawrence, 1986).
Although the identities of the protest leaders are not individually documented in the article, the event gained national attention and reignited debates within teacher unions, particularly the American Federation of Teachers, which would later debate the appropriate age for calculator use in classrooms.
Ironically, subsequent empirical research demonstrated that calculators, when used appropriately, enhanced students’ ability to focus on higher-order problem-solving rather than rote computation (Hennessy, 1999). Today, calculators are not only accepted but considered essential in most math curricula. The contemporary debate around AI closely mirrors this historical resistance: Are students bypassing intellectual engagement, or are they evolving how they learn?
2.3 The Internet and Wikipedia: Information Access Versus Authority
The large-scale adoption of the internet between the 1990s and early 2000s heralded a revolutionary shift in how pupils gained access to knowledge. Yet this revolution was resisted. Educators, scholars, and policymakers opposed digital encyclopaedias, most notably Wikipedia, on the grounds that the open-editing process would undermine academic accuracy and authority (Yun, Lee and Jeong, 2016). Most schools and colleges prohibited Wikipedia as a credible source, warning students that relying on it would weaken their research and critical thinking skills.
This opposition was largely grounded in fears of misinformation, doubts about source credibility, and the potential dilution of traditional academic standards. Today, Wikipedia is widely used as a starting point for learning about unfamiliar topics, yet it remains unacceptable as a citable source in formal academic work at most institutions. Rather than dismissing it as unrigorous scholarship, teachers now use Wikipedia to teach critical research practices, helping students evaluate credibility, trace chains of primary sources, and navigate the politics of knowledge construction (Ho, 2024; Malik, Rafiq and Mahmood, 2023).
The Wikipedia case illustrates how resistance to technological change can be channelled into critical engagement, policy development, and pedagogical innovation. Similar controversies surround AI in education today. As with Wikipedia, AI has the potential to both mislead and empower, depending on how it is taught and governed. Building AI literacy, rather than resorting to blanket prohibitions, may therefore be the more realistic and educationally productive response.
3. Case Studies of Global AI Integration in Education
AI integration into education is following divergent trajectories across national contexts. The following case studies illustrate how sociopolitical arrangements, pedagogical philosophies, and cultural values inform the design and deployment of AI technologies in education systems.
3.1 China: AI as Statecraft in Education
China has pursued a comprehensive, government-driven strategy to integrate AI in education. Huang and Gadavanij (2025) state that from September 2025, all students at the primary and secondary levels must receive a minimum of eight hours of AI education annually. The curriculum integrates basic coding and machine learning concepts with AI ethics, delivered through stand-alone courses and integration with STEM subjects.
The initiative falls under China’s ambition to become the global AI leader by 2030. Educationally, the plan resonates with Confucian principles of discipline and state-led learning, coupled with centralised control over curriculum design. Facial recognition software to track student attention, AI-powered homework markers, and intelligent classroom monitoring are all part of China’s AI educational toolkit (Bhutoria, 2022).
Critical Evaluation: Efficient and scalable though China’s system is, it also raises grave ethical issues. The use of biometric data and constant monitoring risks normalising surveillance from an early age (Lähteenmäki, Hakkala and Koskinen, 2025). Critics argue that the model prioritises compliance over creativity and puts AI in a disciplinary instead of a pedagogical role.
However, China’s rapid development of teacher training modules, textbooks, and AI integration standards offers valuable lessons in policy coordination and resource allocation, domains in which Western education systems lag behind.
3.2 Finland: Ethical Pedagogy Meets Adaptive AI
By contrast, Finland’s approach is decentralised and ethically attuned. AI is viewed as a support system, not a substitute for human teaching. AI platforms such as ViLLE and Claned are employed in schools to personalise learning and support independent development, particularly in maths and language learning (Vella, 2025).
Crucially, teachers in Finland participate in the co-design of AI instruments to ensure pedagogical consistency and transparency. Teachers are encouraged to introduce AI only where it is demonstrably shown to enhance student engagement and inclusivity (Koskinen, 2023). Additionally, Finland’s national curriculum embraces ethical discussions on automation and data protection, forging a generation of critically aware learners.
Critical Evaluation: Finland’s approach exemplifies democratic values in technology integration. It prioritises dialogue, reflection, and experimentation over standardisation. Its success is also partly attributable to Finland’s well-funded education system and high levels of teacher autonomy, conditions not readily reproducible elsewhere.
3.3 United States: Innovation and Inequity in Higher Education
In the US, AI in education has been largely market-driven. Ed-tech companies and universities have driven the creation of tools such as:
Automated grading systems for large-scale testing
AI-powered chatbots for 24/7 student assistance
Predictive analytics for students at risk of dropping out
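To make the predictive-analytics item above concrete, the sketch below flags students whose engagement falls below a weighted threshold. It is purely illustrative: the metrics, weights, and threshold are invented for this example, and real systems (including commercial ed-tech platforms) train statistical models on far richer data.

```python
# Hypothetical sketch of a dropout-risk flagging rule: a weighted score
# over simple engagement metrics. The weights and threshold are invented
# for illustration, not drawn from any real deployed system.

def risk_score(attendance, logins_per_week, avg_grade):
    """Combine normalised engagement metrics into a 0-1 risk score."""
    # Scale each metric to 0-1, then invert so that higher means riskier.
    attendance_risk = 1.0 - min(attendance, 1.0)         # fraction of sessions attended
    login_risk = 1.0 - min(logins_per_week / 10.0, 1.0)  # 10+ logins/week counts as fully engaged
    grade_risk = 1.0 - min(avg_grade / 100.0, 1.0)       # percentage grade
    return 0.4 * attendance_risk + 0.3 * login_risk + 0.3 * grade_risk

def flag_at_risk(students, threshold=0.5):
    """Return the names of students whose risk score exceeds the threshold."""
    return [s["name"] for s in students
            if risk_score(s["attendance"], s["logins"], s["grade"]) > threshold]

students = [
    {"name": "A", "attendance": 0.9, "logins": 8, "grade": 72},
    {"name": "B", "attendance": 0.3, "logins": 1, "grade": 40},
]
print(flag_at_risk(students))  # only B crosses the threshold
```

Even at this toy scale, the design choice matters: the institution, not the tool, sets what counts as "risk", which is why the paper later argues for stakeholder co-design and auditing of such systems.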
Top universities such as MIT and Stanford are creating state-of-the-art AI curricula and ethics frameworks (Cantú-Ortiz et al., 2020). Online platforms such as Coursera and Khan Academy use AI to personalise learning globally.
Critical Evaluation: The U.S. model promotes innovation but lacks systemic coherence. Better-endowed institutions can experiment with AI, while under-resourced schools may lack the infrastructure to adopt such tools. Furthermore, the commercialisation of AI raises risks of student data exploitation and the privatisation of education.
Based on the insights from the international case studies, the following section considers forward-thinking steps that education systems can follow to harness the potential of AI and counter its intrinsic risks.
4. Future Directions and Strategic Response
With AI being progressively embedded within societal infrastructure, education systems must actively shape strategic responses to both its opportunities and threats. The future of AI in education should not be relinquished to technological determinism nor market-driven innovation. Instead, it requires concerted, inclusive, and ethically attuned planning across four important domains: pedagogy, policy, research, and institutional capacity.
4.1 Beyond Opportunistic Experimentation to Strategic Design
Universities must move beyond opportunistic experimentation and begin to design AI integration for long-term impact. This requires establishing the pedagogical goals that AI should serve, rather than adapting teaching to the requirements of existing technologies (George, 2023). Furthermore, AI integration needs to enhance, not replace, learner-centred approaches emphasising scaffolding, formative feedback, and metacognitive development.
4.2 Developing AI Literacy at Every Level
AI literacy must be developed as a fundamental competency similar to media or digital literacy. Students must learn not only to use AI tools, but to critically evaluate them, identify bias, and understand their ethical implications (Akgun and Greenhow, 2022). Professional development for educators is needed to enable confident, critical, and creative deployment of AI technologies in practice.
Table 1: AI Literacy Framework for Stakeholders
| Group | Core Competencies | Example Activities |
| --- | --- | --- |
| Students | Understanding AI basics, bias detection | Simulations, case study discussions |
| Teachers | Ethical tool use, pedagogical integration | AI-focused CPD, interdisciplinary units |
| Administrators | Procurement literacy, policy alignment | Vendor evaluation rubrics, audits |
4.3 Cross-Sector Collaboration
No single actor (university, state, or tech company) can ensure responsible AI deployment on its own. Collaboration between higher education institutions and developers is critical to building AI tools that align with academic needs, protect privacy, and respect diversity. Universities also need to be involved in developing regulatory frameworks, leveraging their role as knowledge institutions (European Commission, 2023).
Country Examples
> Canada: Canada supports AI education programs focusing on inclusive innovation through its Pan-Canadian AI Strategy, financing AI research institutions like Mila and Vector Institute.
> Singapore: Singapore’s SkillsFuture initiative integrates AI into lifelong learning, and the Ministry of Education’s AI@Education program pilots AI literacy from primary to tertiary levels (Frana, 2024).
> Australia: The University of Queensland and CSIRO’s Data61 have developed AI ethics modules and contributed to national curriculum trials through the Australian Curriculum, Assessment and Reporting Authority (ACARA) (Knight et al., 2023).
4.4 Global Benchmarks and Inclusive Policy Design
Countries should draw upon well-tested AI strategies, such as the OECD AI Principles or UNESCO’s AI and education guidelines, as blueprints for national policy. Such guidelines foster transparency, fairness, accountability, and human-centred design. Policymakers must involve a broad range of stakeholders (students, parents, and civil society) if public confidence and inclusiveness are to be secured.
UK Context: Gherhes et al. (2023) highlight the importance of developing an AI talent ecosystem through AI conversion courses as well as sector-led education reforms. Khawaja (2024) has also produced guidance for embracing responsible AI in higher education, supporting institutions with data ethics, assessment reform, and institutional preparedness frameworks.

Figure 1 (provided as an embedded image or in the appendix) presents an integrated framework for the inclusive integration of AI into education. It depicts four core domains (AI Literacy, Ethics & Regulation, Pedagogical Innovation, and Cross-Sector Partnerships), each interfacing with students, teachers, developers, and policymakers.
4.5 Mitigating Key Risks and Emerging Issues
While AI holds great promise, there are dangers that must be actively mitigated:
> Algorithmic Bias: AI systems can worsen existing educational disparities if trained on biased data, affecting grading, admissions, or study assistance (Boateng and Boateng, 2025).
> Over-Automation: Over-reliance on automation can erode teachers’ skillsets, reduce personalisation, and risk depersonalising the learning process.
> Educational Technology Monopolies: Dependence on a limited number of tech providers can limit institutional autonomy and raise concerns over data ownership and commercial surveillance (Komljenovic and Williamson, 2024).
Mitigation of these risks can be achieved through active auditing, open-source alternatives, and stakeholder co-design.
4.6 Maintaining the Human in the Loop
Finally, the future of AI in education must defend the special place of human connection. Education is not merely the transmission of content; it involves guidance, compassion, adaptability, and care, qualities that no program can fully replace. AI should complement human capabilities, not substitute for them.
Building on the international practices described above, the next section focuses on the UK higher education landscape, offering strategic directions for the responsible incorporation of AI.
5. Embracing AI in UK Higher Education
UK universities and colleges are at the heart of the world’s education ecosystem. With a reputation for academic excellence and innovation, the UK must move quickly to embrace AI not only to stay ahead of the competition but to enhance equity, pedagogy, and institutional viability. The task is to create a balanced strategy that embraces innovation while safeguarding academic values and ethical norms.
5.1 Strategic Integration and Institutional Leadership
Southworth et al. (2023) identify education as a strategic area in AI talent development. They call on universities to develop interdisciplinary AI curricula and invest in AI literacy for all graduates. Leading universities such as University College London (UCL), the University of Oxford, and Imperial College London have already launched AI-dedicated degrees and responsible AI research centres.
To introduce AI across disciplines, HEIs should include fundamental AI modules in non-STEM courses to build awareness among future lawyers, teachers, social workers, and artists. An example of such an interdisciplinary initiative is the University of Edinburgh’s Centre for Data, Culture and Society.
5.2 Personalised Learning and Assessment Reform
AI supports adaptive learning systems that can tailor content to each student’s individual pace and style. Vashishth et al. (2024) recommend that institutions pilot AI-driven platforms offering real-time feedback, learning analytics, and adaptive pathways. For example, the Open University is using predictive analytics to identify students at risk of disengagement and introduce targeted support.
Assessment, too, can be rethought. Instead of standardised testing, AI offers hope for formative, competency-based assessment that reflects students’ learning and creativity. Yet this must be accompanied by strong safeguards against over-surveillance and automation bias.
5.3 Staff Development and Ethical Governance
A key blocker to AI adoption in UK HEIs is staff preparedness. Universities must fund continuous professional development (CPD) programs to equip educators with the competence and confidence to pedagogically use AI. CPD must address not only technical skill, but also ethical concerns, algorithmic transparency, and inclusive design.
At a governance level, universities must create explicit frameworks for responsible AI use. These must address:
Ethics review boards for AI applications in research and teaching
Data protection policies mapped to UK GDPR
Student participation in tech procurement and implementation decisions
5.4 Inclusive Infrastructure and Equity Issues
While AI can help support diverse learners, it can also increase inequality if access is not equal. UK HEIs will have to ensure digital inclusion by:
Providing hardware and high-speed internet for students who require them
Providing AI tools with multilingual and accessibility features
Ensuring algorithms are audited for cultural and gender bias
Initiatives such as the OfS Digital Capability Framework can guide institutions towards equitable implementation. In addition, UKRI and Innovate UK funding should be expanded to support inclusive AI education pilots.
5.5 A Culture of Co-Design and Innovation
To succeed in the long term, students, educators, and developers must collaborate in identifying AI tools based on pedagogical needs. Hackathons, user-testing workshops, and interdisciplinary innovation hubs can help create a vibrant culture of co-design.
Case Snapshot: At the University of Bristol, the AI in Education Lab has partnered with student unions to test AI feedback systems, enabling real-time dialogue between learners and developers.
UK HEIs must also engage with global networks like the UNESCO AI Competency Framework and the European University Association’s AI working group to share knowledge and set standards (Ul Hassan, Murtaza and Rashid, 2024).
The UK has the intellectual capital and institutional drive to be a world leader in ethical, efficient AI-driven higher education. Leadership, however, involves more than words: it requires investment in infrastructure, educational creativity, and long-term governance. By harnessing AI as a collaborator in learning rather than a competitor, UK institutions can future-proof their mission while equipping the next generation of learners.
Conclusion
This article has critically examined the historical, cultural, and policy-driven dimensions of resistance to AI in education. Through comparative case studies and forward-looking strategies, it has argued that technological resistance is neither new nor necessarily irrational, but that it becomes problematic when it impedes pedagogical innovation and systemic development.
Historically, every educational technology, from the printing press to the internet, has encountered resistance based on apprehensions of dehumanisation, academic laziness, or loss of scholarly control. With time and evidence, however, these tools proved their potential to increase educational access, equity, and efficacy. The present resistance to AI repeats this pattern and, if not critically reoriented, risks repeating earlier errors by clinging to outmoded paradigms.
AI today has transitioned from a buzzword of the future to an existing reality within learning management systems, assessment platforms, and classroom discussions. Rather than demonise or prohibit AI tools, education stakeholders must accept a vision of co-agency, a vision where human teachers guide, refine, and shape AI to augment and complement their efforts, not devalue them.
China, Finland, the United States, and the United Kingdom each represent different patterns of AI adoption. None is perfect, but all provide insights into the policy, ethics, infrastructure, and pedagogy that must come into play. The UK is at a turning point: with the resources, talent, and institutional frameworks to lead globally, it must choose between gradual adjustment and transformative change.
Looking ahead, education systems must build AI literacy, advocate for data rights, support inclusive access, and foster human-machine collaboration grounded in the principles of care, justice, and curiosity. Resistance to AI is understandable but not sustainable. What we require today is thoughtful, ethical, and equitable leadership that will enable AI to become not the end of education as we know it, but the beginning of its most human-centred chapter yet.
References:
Akgun, S. and Greenhow, C. (2022). Artificial intelligence in education: Addressing ethical challenges in k-12 settings. AI and Ethics, [online] 2(3), pp.431–440. doi:https://doi.org/10.1007/s43681-021-00096-7.
Banks, S. (2011). A Historical Analysis of Attitudes Toward the Use of Calculators in Junior High and High School Math Classrooms in the United States Since 1975. Master of Education Research Theses. [online] doi:https://doi.org/10.15385/tmed.2011.1.
Beirat, M.A., Tashtoush, D.M., Khasawneh, M.A., Az-Zo’bi, E.A. and Tashtoush, M.A., 2025. The Effect of Artificial Intelligence on Enhancing Education Quality and Reduce the Levels of Future Anxiety among Jordanian Teachers. Appl. Math, 19(2), pp.279-290. http://dx.doi.org/10.18576/amis/190205
Bhutoria, A. (2022). Personalized education and artificial intelligence in United States, China, and India: A systematic Review using a Human-In-The-Loop model. Computers and Education: Artificial Intelligence, 3, p.100068. doi:https://doi.org/10.1016/j.caeai.2022.100068.
Cantú-Ortiz, F.J., Galeano Sánchez, N., Garrido, L., Terashima-Marin, H. and Brena, R.F. (2020). An artificial intelligence educational strategy for the digital transformation. International Journal on Interactive Design and Manufacturing (IJIDeM), 14(4), pp.1195–1209. doi:https://doi.org/10.1007/s12008-020-00702-8.
Chen, L., Chen, P. and Lin, Z., 2020. Artificial intelligence in education: A review. IEEE Access, 8, pp.75264–75278. doi:10.1109/ACCESS.2020.2988510.
Cohen, D.K. (1987). Educational Technology, Policy, and Practice. Educational Evaluation and Policy Analysis, 9(2), pp.153–170. doi:https://doi.org/10.3102/01623737009002153.
Finley, J.R., Naaz, F. and Goh, F.W., 2018. Memory and technology. Cham: Springer International Publishing.
Frana, P.L., 2024. Assessing Smart Nation Singapore as an International Model for AI Responsibility. International Journal on Responsibility, 7(1), p.2.
Füssel, S., 2020. Gutenberg and the Impact of Printing. London: Routledge. https://doi.org/10.4324/9781315253718.
George, A.S. (2023). Preparing Students for an AI-Driven World: Rethinking Curriculum and Pedagogy in the Age of Artificial Intelligence. Partners Universal Innovative Research Publication, [online] 1(2), pp.112–136. doi:https://doi.org/10.5281/zenodo.10245675.
Gherhes, C., Yu, Z., Vorley, T. and Xue, L. (2023). Technological trajectories as an outcome of the structure-agency interplay at the national level: Insights from emerging varieties of AI. World Development, 168, p.106252. doi:https://doi.org/10.1016/j.worlddev.2023.106252.
Greenblatt, S., 2015. Learning to curse: Essays in early modern culture. New York: Routledge. https://doi.org/10.4324/9780203823736.
Hennessy, S. (1999). The potential of portable technologies for supporting graphing investigations. British Journal of Educational Technology, 30(1), pp.57–60. doi:https://doi.org/10.1111/1467-8535.00090.
Ho, M. (2024). Facilitating Knowledge Exchange, Knowledge Building, Research Competence, and Digital Literacy: Student-Led Wikipedia Group Project in an Undergraduate Class of Public Administration in Hong Kong. Journal of Political Science Education, pp.1–22. doi:https://doi.org/10.1080/15512169.2024.2356625.
Huang, X. and Gadavanij, S. (2025). Power and marginalization in discourse on AI in education (AIEd): social actors’ representation in China Daily (2018–2023). Humanities and Social Sciences Communications, 12(1). doi:https://doi.org/10.1057/s41599-025-04621-5.
Khawaja, S. (2024). Transforming Higher Education Institutions into Entrepreneurial Hubs: The Evolving Role of Business Incubation in Public and Private higher education Sectors. SSRN Electronic Journal. doi:https://doi.org/10.2139/ssrn.4969935.
Komljenovic, J. and Williamson, B. (2024). Behind the platforms: Safeguarding intellectual property rights and academic freedom in Higher Education. [online] www.research.ed.ac.uk. Education International. Available at: https://www.research.ed.ac.uk/en/publications/behind-the-platforms-safeguarding-intellectual-property-rights-an.
Knight, S., Dickson-Deane, C., Heggart, K., Kitto, K., Dilek Çetindamar Kozanoğlu, Maher, D., Narayan, B. and Zarrabi, F. (2023). Generative AI in the Australian education system: An open data set of stakeholder recommendations and emerging analysis from a public inquiry. Australasian Journal of Educational Technology, 39(5), pp.101–124. doi:https://doi.org/10.14742/ajet.8922.
Lähteenmäki, C., Hakkala, A. and Koskinen, J. (2025). Evaluating the Impact of Mass Surveillance Through Ethical Theories. [online] Available at: https://www.utupub.fi/bitstream/handle/10024/179944/Lahteenmaki_Camilla_opinnayte.pdf?sequence=1.
Lawrence, J. (1986) ‘Math teachers protest against calculator use’, The Daily Item, 5 April. Sumter, S.C.: Associated Press.
Lin, C.C., Huang, A.Y. and Lu, O.H., 2023. Artificial intelligence in intelligent tutoring systems toward sustainable education: a systematic review. Smart Learning Environments, 10(1), p.41.
Malik, A., Rafiq, M. and Mahmood, K. (2023). Wikipedia and academia: University faculty patterns of use and perceptions of credibility. Journal of Librarianship and Information Science. doi:https://doi.org/10.1177/09610006231190652.
Obed Boateng and Bright Boateng (2025). Algorithmic bias in educational systems: Examining the impact of AI-driven decision making in modern education. World Journal of Advanced Research and Reviews, 25(1), pp.2012–2017. doi:https://doi.org/10.30574/wjarr.2025.25.1.0253.
Southworth, J., Migliaccio, K., Glover, J., Glover, J., Reed, D., McCarty, C., Brendemuhl, J. and Thomas, A. (2023). Developing a model for AI Across the curriculum: Transforming the higher education landscape via innovation in AI literacy. Computers and Education: Artificial Intelligence, 4(1), p.100127. doi:https://doi.org/10.1016/j.caeai.2023.100127.
Ul Hassan, M., Murtaza, A. and Rashid, K. (2024). Redefining Higher Education Institutions (HEIs) in the Era of Globalisation and Global Crises: A Proposal for Future Sustainability. European Journal of Education. doi:https://doi.org/10.1111/ejed.12822.
Vashishth, V. K., Sharma, V., Sharma, K. K., Kumar, B., Panwar and Chaudhary, S. R. (2024). AI-Driven Learning Analytics for Personalized Feedback and Assessment in Higher Education. Advances in media, entertainment and the arts (AMEA) book series, pp.206–230. doi:https://doi.org/10.4018/979-8-3693-0639-0.ch009.
Vella, O., 2025. The Future of Maths Learning: Personalised and AI-Driven. eBookIt. com.
Yun, J., Lee, S.H. and Jeong, H. (2016). Intellectual interchanges in the history of the massive online open-editing encyclopedia, Wikipedia. Physical Review E, 93(1). doi:https://doi.org/10.1103/physreve.93.012307.