Opinion
How generative AI is really changing education by outsourcing the production of knowledge to big tech
Generative AI tools such as ChatGPT, Gemini and Claude are now used by students and teachers at every level of education.
According to a report by Anthropic, the company behind Claude, 39% of student interactions with the AI tool involve creating and improving educational content, such as practice questions, essay drafts and study summaries. A further 34% of interactions seek technical explanations or solutions for academic assignments – in effect, producing student work.
Schools and universities have mostly responded by focusing on immediate concerns: plagiarism, how assessments are conducted and job displacement. These responses include teaching AI literacy and developing courses for students on how to use and understand AI tools.
While these are important, what’s being overlooked is how evolving generative AI systems are fundamentally changing our relationship with knowledge itself: how we produce, understand and use knowledge.
This isn’t just about adding new technology to classrooms. It changes how we think about learning and challenges the core ideas behind education. And it risks granting power over how knowledge is created to the tech companies producing generative AI tools.
The bigger shift
Generative AI tools, including ChatGPT, Claude and Gemini, can now create content, combine information and even mimic reasoning. As these AI systems are used more in classrooms and lecture theatres, they start to challenge the traditional ways we think about knowledge and learning.
My research focuses on what’s known as AI epistemology. Epistemology is the study of the origins and nature of knowledge. AI epistemology in education means grappling with new questions about how knowledge is produced.
Generative AI can instantly generate seemingly authoritative text on any subject. This forces us to reconsider what constitutes “original thought” versus “assisted thinking”. Traditional skills such as source evaluation, logical reasoning and weighing up evidence, need to be reconsidered when the “source” is a complex AI system trained on huge amounts of data that we can’t fully see or understand.
This represents a profound departure from centuries of education built on human-to-human knowledge transmission. Generative AI doesn’t just change what students learn but fundamentally alters how they come to know anything at all.
Students are increasingly likely to validate ideas by how well generative AI explains them, and less and less through their own analysis.
Traditional education relies on learning activities and assessments that align with what teachers want students to be able to do or understand. For example, if the goal is critical thinking, students practise analysing texts and are tested on their analysis skills, not just memorisation, to build deep understanding. But this framework assumes students construct knowledge independently through experience and reflection.
Generative AI fundamentally disrupts this model. Students can produce sophisticated outputs without the cognitive journey traditionally required to create them.
Students are now becoming co-creators of knowledge in a machine-mediated system. My research on value co-creation and co-destruction in higher education reveals that students, educators, administrators and technology providers with competing interests are shaping educational value. Co-creation means these groups work together to produce learning outcomes. Co-destruction occurs when their conflicting goals undermine the educational experience.
For example, students might want efficiency, educators want deep learning, and tech companies want engagement metrics. These tensions can either enhance or erode learning quality. This framework now applies to AI integration. When generative AI helps students genuinely understand concepts, it creates value. When it enables shortcuts that bypass learning, it destroys value. Co-creation in education isn’t new, but generative AI as a co-creator changes everything.
Human thought survival
While often framed as the fourth industrial revolution, the current AI shift is more accurately an intellectual revolution. When we outsource thought unthinkingly to machines, we hand unprecedented power to shape knowledge to the technology companies developing this evolving technology.
Tech companies already turn our online behaviour into profit by collecting data to predict and influence what we do next, but we now face something deeper. If a handful of companies own the primary means of knowledge production, they control how we understand the world. Their algorithms’ biases, training data choices, and commercial incentives will determine what is produced and disseminated.
We’ve been here before. Social media exploits our cognitive vulnerabilities to capture our attention. But this time, the stakes are higher. It’s not just our attention at risk, but our capacity to think independently.
This isn’t about whether traditional education remains relevant. It absolutely does. It’s about educators defining what meaningful learning looks like now that it’s accompanied by AI.
Generative AI isn’t just a sophisticated calculator; it changes how we understand knowledge. It’s reshaping how students conceptualise expertise, creativity, and their own cognitive capabilities. This will fundamentally change how young people think and learn.
Educators must ensure pedagogical wisdom, not commercial interests, guides this transformation. Encouragingly, this work has started. Centres for responsible AI, such as the one at my own university, ensure educators are driving these critical conversations rather than simply responding to them.