Conversation with Tor Arne Jørgensen on AI Revolution Challenges in Norwegian Education

2025-10-08

Scott Douglas Jacobsen
In-Sight Publishing, Fort Langley, British Columbia, Canada

Correspondence: Scott Douglas Jacobsen (Email: scott.jacobsen2025@gmail.com)

Received: September 20, 2025
Accepted: October 8, 2025
Published: October 8, 2025

Abstract

This interview examines how artificial intelligence is reshaping schooling through the lens of Norwegian educator Tor Arne Jørgensen. He identifies five acute pressures—assessment integrity, homework authenticity, critical use, shortcut-seeking, and loss of personal development—and outlines dual responses: immediate safeguards (assessment redesign, use-policies, and targeted detection) and a long-term pivot to digital literacy, critical thinking, creativity, ethics, and collaboration. Jørgensen frames the teacher’s “value-add” as pedagogical leadership, relational trust, ethical reasoning, and higher-order cognition—capacities AI cannot replace. He details the Norwegian Directorate for Education and Training’s stance, including stricter 2025 exam rules, and argues for diversified, process-oriented evaluation. The conversation addresses inequities driven by infrastructure and competence gaps, calls for coherent training and governance, and surfaces ethical risks such as dependency, cognitive atrophy, safety illusions, and social isolation. Jørgensen proposes a student-led global pilot for evaluating educational AI and defines success in human terms: intellectual independence, narrowed equity gaps, and citizens capable of democratic participation alongside AI.

Keywords: Academic integrity in AI, AI literacy across stakeholders, Assessment redesign with AI, Cognitive development and AI, Digital equity in education, Education scenarios for 2035, Ethical risks of AI, Global governance of EdTech, High-IQ community responsibilities, Mental health in schools, Teacher role evolution, Vendor evaluation and transparency

Introduction

Artificial intelligence is no longer a thought experiment in education; it is a daily condition in classrooms. In this interview, Norwegian educator Tor Arne Jørgensen maps the pressures that AI introduces into the core activities of schooling—testing, homework, learning processes, and identity formation—and argues that the response must be both practical and philosophical. Practically, schools must secure academic integrity and update assessment so that products reflect student understanding rather than machine output. Philosophically, they must re-center education on the cultivation of human judgment, creativity, and ethical reasoning.

Jørgensen situates his perspective within current Norwegian policy, noting the Norwegian Directorate for Education and Training’s principles that keep teachers as pedagogical leaders, require subject-relevant AI use to be declared, and tighten centrally administered examinations in 2025. Against that backdrop, he describes his school’s two-track approach: short-term measures (clear guidelines, detection where necessary, diversified assessments) and a long-term strategy that builds digital literacy and critical thinking while privileging uniquely human capacities—creativity, ethical discernment, collaboration—over mere task completion.

Central to his argument is a reframing of the teacher’s role. The teacher is not an information courier displaced by algorithms but a learning architect whose irreplaceable work is relational, contextual, and ethical: reading classrooms in real time, scaffolding intellectual risk, modelling critical inquiry into AI outputs, and designing experiences that lead from answers to understanding. In Jørgensen’s account, AI can amplify this human work when subordinated to clear pedagogical aims, but it can also erode it when schools mistake efficiency for learning.

The interview also probes the equity fault lines of AI adoption. Resource disparities—in devices, connectivity, IT support, and teacher training—threaten to produce an educational two-tier system within and between municipalities. Jørgensen argues for coherent training, privacy-by-design governance, transparent vendor evaluation, and shared responsibility among teachers, parents, and leadership. He warns of ethical risks—student dependency on AI, cognitive weakening from shortcut culture, safety oversimplifications, and reduced face-to-face interaction—and proposes a student-led, globally representative pilot to evaluate educational AI on governance as well as learning. Success, he contends, cannot be measured by platform metrics, but by whether graduates retain intellectual independence, social empathy, and democratic agency in an AI-suffused world.

Main Text (Interview)

Interviewer: Scott Douglas Jacobsen

Interviewee: Tor Arne Jørgensen

Scott Douglas Jacobsen: What are the top AI-driven pressures your school faces?

Tor Arne Jørgensen: We face five main challenges that are changing how we must think about learning and assessment.

Tests and assessments

The biggest challenge is ensuring that students don’t use AI during tests and exams. This isn’t just about cheating, but about knowing that results actually show the student’s own knowledge and skills. We need to find new approaches to assessment that make AI less relevant, or we must accept that traditional test formats are becoming less reliable.

Homework and assignments

Outside the classroom, we have little control. There is a real risk that students let AI do the work for them. This weakens the learning process and gives a misleading picture of the student’s development. We need to rethink what kinds of assignments we give, and how we follow up on work done at home.

Critical use of AI

Students must learn to evaluate the quality of the answers AI provides. Without critical thinking, they risk accepting incorrect or incomplete information. Here we have a pedagogical task: teaching students when AI is useful, and when it isn’t.

AI as a shortcut

Many students are tempted to rely on AI for quick solutions instead of doing the work themselves. This undermines the development of important skills like problem-solving, reflection, and independent thinking. We must help students understand the difference between getting an answer and understanding a problem.

Personal development

When students hand over their work to AI, they lose the opportunity to develop their own abilities. This can weaken creativity and the ability to express themselves. Take writing as an example: Someone who doesn’t write their own texts never develops a personal voice. This is perhaps the most important challenge – that we risk losing something fundamentally human in the learning process.

Jacobsen: How are you prioritizing responses?

Jørgensen: I’ll give you a two-part answer: first, how the Norwegian Directorate for Education and Training approaches AI in schools, then how we tackle these challenges at our school.

The official guidelines

The Directorate has established some clear principles. Teachers must remain the pedagogical leaders – AI cannot take over that role. Any use of AI must serve curriculum goals and be relevant to the subject matter. Most importantly, schools must develop their own local guidelines rather than follow a one-size-fits-all approach.

On assessment and academic integrity, the reality is stark: AI changes everything about how we evaluate student work, particularly written assignments. The Directorate recommends diversifying our assessment methods – more oral exams, process documentation, practical tasks – to reduce opportunities for cheating. Students need to learn critical AI use, including understanding sources, privacy concerns, and copyright issues.

For exams in 2025, the rules are strict. AI-generated content is completely prohibited on centrally administered written exams. The Directorate has removed preparation periods in several subjects and restricted permitted aids. Getting caught using AI can result in exam annulment.

But it’s not all restrictions. The Directorate sees pedagogical potential in AI for planning, differentiation, and feedback. The key principle is that AI should strengthen the teacher’s role, not replace it. Students can use AI when it’s subject-relevant, but they must declare when and how they used it.

Our school’s approach

We’ve divided our response into immediate needs and long-term strategy.

In the short term, we’re developing new assessment methods that are less vulnerable to AI cheating. We’re implementing detection tools where necessary and creating clear, practical guidelines for AI use.

Our long-term focus is different. Digital literacy and critical thinking have become our highest priority because these give students the tools to navigate an AI world successfully. We’re working to integrate AI as a learning tool, but in pedagogically sound ways. Most importantly, we’re focusing on skills AI cannot replace: creativity, ethical reasoning, and collaboration.

Our philosophy is balanced. We don’t ban AI, but we educate about it. We teach students when and how AI can be used constructively. We emphasize process over product in our assessments and strengthen oral and practical evaluation methods.

Teacher development is crucial. We prioritize building competency among our staff so they can safely guide students through this new landscape. We create space for experimentation and learning.

Our main principle is simple: we cannot fight AI, so we must teach students to live and work with it in ways that strengthen their development as thinking, critical human beings.

Jacobsen: How do you define the teacher’s value-add in a classroom with AI?

Jørgensen: The fundamental shift

We need to be honest about what’s happening. We have moved from being lecturers to observers, and now we serve as facilitators who guide students toward the right approaches and processes for understanding information. This isn’t a loss of status – it’s an evolution toward something more sophisticated and more essentially human.

Where teachers add irreplaceable value

There are four areas where no AI can match what a skilled teacher brings to learning.

First is pedagogical leadership and human judgment. We understand each student’s developmental path, their emotional landscape, their particular way of learning. We make real-time decisions based on classroom dynamics and those unexpected moments when learning suddenly clicks. We take AI’s output and shape it to fit what this specific student needs at this specific moment.

Second is our relational capacity. We build the trust networks and motivational structures that make authentic learning possible. We help students regulate their emotions when learning gets difficult. We create the psychological safety that lets students take intellectual risks. No algorithm can do this work.

Third is teaching critical thinking and ethical reasoning. Students need to learn how to question, evaluate, and synthesize what AI produces. They need to see us model responsible AI use. They need to develop digital citizenship and understand AI’s limitations and biases. This is fundamentally human work.

Fourth is fostering creative problem-solving and higher-order thinking. We design learning experiences that leverage uniquely human capabilities: collaborative innovation, creative synthesis, navigating ethical dilemmas. We help students connect abstract concepts to lived experience and find meaning across disciplines.

The new teacher identity

The teacher’s role has evolved from being a walking encyclopedia to becoming a sophisticated learning architect. In AI-supported classrooms, we become even more essential as cognitive coaches, teaching students how to think with and through AI. We become wisdom integrators, connecting information to insight, data to understanding. We remain the human connection anchors, providing social-emotional foundations that no technology can replace.

The strategic reality

AI doesn’t threaten our value – it amplifies it. We move from information transmission to wisdom cultivation. From standardized delivery to personalized learning orchestration. From isolated practice to collaborative knowledge construction.

The core of our work remains unchanged: transforming learning into human flourishing through pedagogical expertise, relational intelligence, and wisdom cultivation that no algorithm can replicate. That’s where our irreplaceable value lies.

Jacobsen: In five to ten years, will we still need teachers? If so, which functions can AI not replace?

Jørgensen: The relationship crisis we’re facing

We’re seeing something alarming: school avoidance has truly skyrocketed. As educators, we spend increasing amounts of our time on relationship building with students who desperately need human connection. We’re creating safe social environments where learning can actually happen. We’re managing school-home collaboration with parents who expect us to deliver a complete package from A to Z.

These human factors change almost hour by hour. AI cannot provide quick-fix solutions for this because it requires the ability to read students and see their changing needs throughout the day. That demands human intuition and adaptability.

The reality of our work

No two days are alike in education. Throughout a single day, we must process ongoing information immediately and handle it differently based on varying circumstances. This real-time adaptation and contextual decision-making goes far beyond what any algorithm can manage.

Just today at work, we discussed the importance of being source-critical and not letting ChatGPT do assignments for students. The answers come from uncertain sources, and ChatGPT doesn’t surface any source criticism along the way. This is where the teacher becomes crucial – recognizing when answers are incorrect and understanding that students who rely on AI will earn poor grades, if a plagiarism check doesn’t catch the cheating first and lead to failure.

Teaching students to think about AI

This highlights a crucial function: developing students’ critical thinking about AI itself. Students need guides who can help them understand when and how to use AI, recognize its limitations and biases, and maintain their own cognitive abilities.

The cognitive risk

I’m concerned about the risk of weakening the brain’s ability to function – that cognitive abilities could diminish over time, leading to intellectual dumbing down by constantly chasing easy solutions instead of doing thorough work independently. It’s like sitting down and not using your body; it deteriorates tremendously after a while, and you find yourself unable to perform even the simplest tasks.

This is what AI fundamentally cannot replace: the development of human cognitive resilience and deep thinking skills.

Going back to basics

To prevent what we’re seeing happen with AI use, we’re providing physical books and having students respond in analog formats, as our generation did, rather than surrendering completely to the digital world. This represents the teacher’s role as curator of learning experiences – knowing when technology serves learning and when it hinders it.

The tool perspective

AI is a tool, so let us treat it as a tool. Use it when it’s appropriate, and put it away when it’s not absolutely required. This balanced approach requires human judgment, wisdom, and the ability to make contextual decisions about when technology enhances learning and when human-centered approaches work better.

The essential teacher

While AI will transform education, the core human elements remain irreplaceably human: emotional intelligence, relationship building, real-time adaptation, critical thinking guidance, and the wisdom to know when to use technology and when to step away from it. Teachers who embrace this evolution while maintaining focus on human development won’t just remain relevant – they’ll become more essential than ever.

Jacobsen: How should schools redesign assessment and academic integrity?

Jørgensen: Rethinking how we measure learning

We must be honest about what’s happening to traditional assessment. The old methods are becoming unreliable because students can easily cheat their way to artificially high results. We need to restructure our entire assessment methodology.

Instead of clinging to outdated forms, we should focus on more varied approaches: project assessments that show real understanding, oral assessments where students must demonstrate their thinking in real time, and authentic assessments that mirror situations they’ll actually face in the world.

Continuous assessment throughout the school year should become our preference. This gives us a realistic picture of student development rather than relying on high-stakes moments that can be easily gamed.

Working with AI, not against it

Schools must acknowledge that artificial intelligence is here to stay. Fighting this technology is pointless. We should embrace it as the fantastic tool it is.

But we must learn to use it sensibly. We cannot put our own brains on ice and leave everything to AI models like ChatGPT. Students must do the groundwork themselves and then get help refining their work using AI. That’s the difference between learning with AI and letting AI learn instead of you.

The importance of mental exercise

We must remember that our brain needs stimulation, just as our body needs training to stay optimal. As a teacher, my goal is to inspire students to maintain their curiosity and learn that the journey toward becoming better lies in meeting challenges with curiosity and wonder – not running away from them.

Think of it like sports: to become a good football player, it doesn’t help to sit on the bench or skip training sessions. There’s significant work required before you see results, and the same applies to our brain. But if you persevere, the results will come – that’s guaranteed.

Students need to understand this fundamental truth: cognitive strength, like physical strength, comes from consistent effort and challenge. AI can be a training partner, but it cannot do the training for you.

The path forward

Schools must modernize both what and how we assess, while maintaining high standards for learning and integrity. This means being strategic about when we use technology and when we deliberately step away from it. It means creating assessments that value the thinking process as much as the final product.

The goal isn’t to make things easier – it’s to make learning more authentic and more aligned with how students will actually need to think and work in their futures.

Jacobsen: How do resource disparities between schools shape AI adoption?

Jørgensen: The infrastructure reality

The main challenge is technological infrastructure, and it’s creating a troubling divide. Schools with solid finances hold significant advantages through new devices, better internet connections, and top-tier IT support. Meanwhile, economically weaker schools struggle with outdated equipment and a lack of competence.

This isn’t just about having newer computers. It’s about whether schools can actually participate in the digital transformation that AI demands.

The competence gap

Affluent schools can invest in comprehensive teacher training supported by dedicated IT coordinators. This creates clear differences between private schools and municipal schools that are currently struggling considerably.

When teachers don’t have the support they need to understand and integrate new technologies, students suffer. You can’t teach what you don’t understand, and you can’t guide students through AI challenges if you’re fighting with basic technical issues.

Political failures

I have to be direct about this: the current government has drastically deprioritized the school sector over the past four years, bringing many schools to the brink of collapse. This contrasts sharply with the governing party’s stated support for strong education policy.

We see the consequences daily. Schools are making impossible choices between basic maintenance and investing in the future. How can we prepare students for an AI world when we can’t even ensure reliable internet connections?

Municipal lottery

There are significant variations between municipalities in school investment. Some make necessary investments in competence and equipment, while others simply don’t receive sufficient state support to ensure optimal learning for students.

Your postal code shouldn’t determine the quality of your education, but that’s increasingly what we’re seeing. A student in a wealthy municipality gets cutting-edge technology and well-trained teachers, while a student just a few kilometers away struggles with outdated equipment.

The generational cost

Insufficient investment in the coming generation means students are not adequately prepared after completing primary education. When increased investment in private schools draws funds from municipal schools without government intervention, we’re creating a two-tiered system that betrays our egalitarian values.

The urgency

All of Norwegian education must be secured with necessary resources now to avoid falling behind in competence development, essential equipment, and competent IT support. We cannot afford to have some schools racing ahead into the AI era while others are left behind with yesterday’s tools.

This isn’t just about fairness – it’s about national competitiveness and ensuring every Norwegian student can participate fully in the future we’re creating.

Jacobsen: What training, professional standards, and governance structures should exist for responsible AI use?

Jørgensen: The training we desperately need

In primary education, we must focus on teacher training in AI tools and their pedagogical use. This isn’t optional anymore. We need student education in AI literacy as part of the curriculum, teaching critical thinking about and with AI-generated content. Both students and we as educators must demonstrate digital judgment and source criticism.

But here’s the challenge: we’re trying to teach something that’s evolving faster than our ability to understand it.

Creating practical frameworks

For practical guidelines in primary schools, we need clear rules for when and how students should work with AI, and which tasks provide a reliable foundation for assessment. This carries over into present and future forms of assessment involving AI in schools. Privacy protection must be built in from the start.

We cannot wing this. Students need consistent expectations across subjects and grade levels.

The shared responsibility

This work requires shared responsibility between teachers, parents, and school leadership. We must examine the national frameworks that the Norwegian Directorate for Education and Training must incorporate as part of the subject curriculum, in addition to competency goals in each subject area. We need clarity on how examinations should function within these conditions.

Parents need to understand what we’re trying to achieve. School leadership needs to support teachers with resources and time. Teachers need to step up and learn, even when it’s uncomfortable.

The impossible task

Here’s where I’ll be completely honest: we must prepare students for what will meet them when they enter the job market, which is incredibly challenging because development is moving much faster today than ever before.

I must personally say that I almost don’t know how to approach this situation. The dissolution of familiar structures happens as quickly as new ones take their place. Previously, we could aim for how we thought the market would be for those entering it in 5-10 years, but today that approach won’t work due to rapid changes.

Teaching adaptability

So what do we do? We focus on what remains constant: the ability to learn, to adapt, to think critically, to collaborate, to solve problems creatively. We teach students how to learn continuously rather than trying to predict what they’ll need to know.

We prepare them for uncertainty by making them comfortable with uncertainty. We give them tools for thinking, not just knowledge for regurgitating.

This might be the most honest thing I can say: we’re all figuring this out together, and that’s okay. What matters is that we’re doing it thoughtfully, deliberately, and with our students’ best interests at heart.

Jacobsen: Which ethical risks concern you most?

Jørgensen: The dependency crisis

Students are becoming dependent on AI to complete tasks across all subjects, which fundamentally undermines the development of independent problem-solving skills. This isn’t just about cheating – it’s about students losing the ability to struggle productively with difficult problems.

What’s particularly concerning is how AI systems communicate with each other and adjust their responses to suit user preferences. This creates a risk that biases could spiral out of control if not properly monitored, as AI systems adapt their outputs based on what they think users want to hear rather than what’s accurate or helpful.

Cognitive atrophy

We’re seeing evidence that long-term AI use weakens students’ creative and critical thinking abilities over time. It’s the “use it or lose it” principle: just as physical muscles atrophy without use, cognitive capacities that aren’t actively engaged during crucial developmental stages may diminish.

For students at this critical stage of development, it’s essential to actively use their mental faculties. When AI does the thinking for them, those cognitive muscles don’t develop properly.

The security illusion

The safety measures that schools and students are entitled to can be compromised or become misleading when AI systems are involved. We think we’re protecting our students, but we may actually be exposing them to new vulnerabilities we don’t yet understand.

The social collapse

This is perhaps what worries me most: students increasingly rely on indirect digital communication rather than direct, face-to-face interaction. This shift correlates with increased rates of school avoidance, depression, and anxiety among students.

We’ve observed this development worsening over the past ten years, with no signs of improvement. Both children and adults are inherently social beings. When interactions become purely digital rather than direct, we change in ways we don’t notice until it may be too late.

The educational reversal

Previously, schools placed much greater emphasis on academic content. Now the situation has completely reversed. Teachers are overwhelmed with non-academic issues – we’re essentially drowning in cases that lack an educational character.

We’ve become social workers, therapists, and crisis managers instead of educators. While students need support in all these areas, when we can’t focus on actual learning, everyone suffers.

The wake-up call

These aren’t distant future problems – they’re happening right now in classrooms across the country. We need to recognize that technology isn’t neutral, and AI certainly isn’t. Every tool shapes the person using it, and we need to be much more intentional about what kind of people we’re helping to create.

The question isn’t whether AI will change our students – it’s whether we’ll be mindful enough to guide that change in positive directions.

Jacobsen: What does responsible AI use look like for student cognitive development, creativity, and mental health across grade levels?

Jørgensen: This question touches the core of what concerns me most in today’s educational system – equal access to education for all students, regardless of their backgrounds.

As a teacher at the middle school level, I see AI as both an opportunity and a danger for cognitive development. The greatest danger lies not in the technology itself, but in our approach to it.

The depth versus speed problem

Cognitive development must be built on analytical depth, not superficiality. When I observe students today, it concerns me that they often seek quick answers rather than deep understanding. AI can amplify this tendency if we’re not careful.

My experience has taught me that intellectual growth requires patience and thorough analysis – not haste. We’re creating a generation that expects instant solutions, and that fundamentally undermines the learning process.

Different needs, same goal

For gifted students, we must ensure that AI doesn’t become a barrier to their natural analytical abilities. They need challenges that push them beyond what AI can deliver. These students risk becoming intellectually lazy if AI makes everything too easy.

At the same time, other students can benefit from AI as support, but they must still develop their own critical thinking. The tool should lift them up, not replace their need to think.

The creativity question

Creativity cannot be outsourced to machines. As a history buff, I know that true creativity comes from deep connections between knowledge, experience, and intuition. AI can be a tool for brainstorming, but the creative spark must come from the human being.

When students let AI generate their creative work, they’re not just avoiding effort – they’re missing the opportunity to discover their own unique voice and perspective.

Mental health and self-worth

This connects to something crucial: mental health. We must teach students that authentic learning and self-development cannot be replaced by artificial intelligence. Their self-worth must be built on their own effort and understanding, not on what a machine can produce for them.

I see students who feel inadequate because they compare themselves to AI’s output. That’s backwards. The goal isn’t to compete with machines – it’s to become more fully human.

The simple truth

My message is simple: use AI as a tool, never as a replacement for your own thinking. Education must still be about developing humanity’s full potential – and that applies to all students, regardless of where they start.

Equal access doesn’t mean everyone gets the same AI assistance. It means everyone gets the same opportunity to develop their own cognitive abilities, their own creativity, their own capacity for deep thought. That’s what true educational equity looks like.

Jacobsen: What should AI literacy entail for students, teachers, and parents?

Jørgensen: We need clear expectations for everyone involved. This isn’t something we can leave to chance or assume will work itself out.

What students must learn

Students must understand artificial intelligence’s capabilities and limitations – what AI can actually do and how reliable it is. They need to recognize AI-generated content and understand when and how AI tools are being used around them.

More importantly, they must develop critical evaluation skills for what AI produces. Instead of accepting information and content uncritically, they need to question, verify, and think independently about what they’re seeing.

Students must use AI sensibly and take responsibility for how they use it, treating it as a learning tool without compromising academic guidelines and criteria. It’s crucial they understand this: it is the human who should be assessed, not the AI.

Finally, they must understand personal boundaries – not giving away too much of themselves in contact with AI, ensuring their integrity remains unchanged. This is about protecting their developing identity and sense of self.

What teachers must master

Teachers must possess all the competencies we expect from students, plus pedagogical applications. We need to integrate AI tools effectively into our curriculum while maintaining educational objectives. This isn’t about using technology for its own sake – it’s about enhancing learning.

We must develop clear policies around AI use in assignments and assessments. Students need consistent expectations across subjects and grade levels. We also need to understand how AI might change skill priorities in our subject areas and stay informed about emerging developments relevant to education.

This requires continuous learning on our part, which isn’t always easy when we’re already stretched thin.

What parents need to understand

Parents must have a basic understanding of AI tools their children might encounter. They need to be aware of age-appropriate AI interactions and potential risks, understanding AI’s role in their children’s digital experiences and learning.

Parents should know how to discuss AI ethics and responsible use with their children. These conversations can’t just happen at school – they need reinforcement at home. Parents also need to stay informed about AI policies in their children’s schools so they can support what we’re trying to achieve.

The shared responsibility

This only works if everyone takes their role seriously. Students can’t develop responsible AI habits without teacher guidance and parental support. Teachers can’t implement effective policies without administrative backing and parental understanding. Parents can’t guide their children without understanding what’s happening in schools.

We’re all learning together, but we each have specific responsibilities in ensuring this technology serves our children’s development rather than hindering it.

Jacobsen: How would you sequence it from primary to secondary school?

Jørgensen: This is fundamentally about building digital maturity from the ground up – not as something we impose on children from the outside, but as something they develop organically through their encounter with the world.

Starting with wonder

In primary school, we must start where children naturally are: curious and open. Here it’s about awakening a fundamental awareness – that machines can “think” and help us, but that there are always humans who have created them.

We teach them to recognize when they’re talking to a robot, not because we want to frighten them, but because understanding creates safety. Young children need to know the difference between human and artificial responses. And equally important: we establish the first, simple boundaries for what they share of themselves in the digital space.

At this age, it’s about building intuition, not technical knowledge. They need to feel comfortable with technology while understanding it’s different from human interaction.

Deepening understanding

When they reach lower secondary school, we meet young people who already live deeply integrated in the digital world. Now we can go deeper – not just “this is AI,” but “how does AI influence what you see and experience every day?”

We build their critical judgment, teach them to ask the right questions of the information they encounter. This is where ethics comes in – not as moral preaching, but as practical wisdom for how we live together with technology.

These students are forming their identities partly through digital interactions. They need tools to navigate this complex landscape thoughtfully.

Preparing for responsibility

In upper secondary school, we meet soon-to-be adults who must take full responsibility for their use of these tools. Here it becomes academically serious – they must master the AI competency we defined earlier, but also understand their role as co-creators of the digital future.

They must be able to use AI tools with integrity, understand the consequences of their choices, and be prepared for a working life where this knowledge is fundamental. They’re not just consumers of AI anymore – they’re active participants in shaping how it’s used.

The human foundation

Throughout all levels, the goal remains the same: it’s not about protecting them from technology, but about equipping them to meet it as reflective, responsible human beings.

Each stage builds on the previous one, deepening their understanding while maintaining their essential humanity. By the time they graduate, they should be able to work with AI while never losing sight of their own capacity for thought, creativity, and ethical judgment.

This developmental approach recognizes that digital maturity, like any maturity, takes time to grow.

Jacobsen: How should schools evaluate AI vendors?

Jørgensen: This is fundamentally about understanding that when we invite technology vendors into our classrooms, we’re inviting them into the most vulnerable and important space we have – where children and young people shape their understanding of the world.

Know who you’re dealing with

First, we must ask the fundamental questions: Who are these vendors really? Not just names and logos, but their values, their business model, their relationship with data and privacy. We cannot just look at what the tool can do – we must understand who controls it and why.

Too often, schools make decisions based on flashy demonstrations or promises of efficiency. But behind every AI tool is a company with its own agenda. Are they genuinely committed to education, or are they primarily interested in collecting data and building market share?

Pedagogical integrity must come first

Then comes the question of pedagogical integrity: Does this tool genuinely support learning, or does it replace learning? The best AI tools strengthen the teacher’s role, they don’t reduce it. They open up deeper understanding, not superficial shortcuts. We must be able to see the difference.

If a vendor can’t explain how their tool enhances rather than replaces human teaching, that’s a red flag. We need partners who understand pedagogy, not just technology.

Demand transparency

Transparency becomes crucial. Can the vendor explain how their system works in a way that makes sense? Not necessarily all the technical details, but the principles behind it. And equally important: Are they open about the limitations? Those who promise everything rarely deliver anything of value.

I’m immediately suspicious of vendors who can’t or won’t explain their technology in plain language, or who refuse to discuss what their tools cannot do.

Security is non-negotiable

Students’ data, their learning processes, their vulnerability – all of this must be protected with the same care we use to protect them physically in the schoolyard. The vendor must be able to document not just that they follow the laws, but that they understand the responsibility.

This isn’t just about compliance. It’s about working with companies that genuinely respect the trust we place in them when we give them access to our students.

Choosing partners, not just products

We’re not just choosing a tool – we’re choosing a partner in the work of shaping tomorrow’s citizens. That partnership must be based on shared values, mutual respect, and a genuine commitment to what’s best for young people.

The question isn’t “What can this technology do?” but “What kind of educational future are we building together?” That’s the conversation every school should have before signing any contract.

Jacobsen: Offer best-case and worst-case 2035 scenarios for the evolution of the interaction between AI and educational systems.

Jørgensen: I think about this often, and I see two very different paths we could take.

The dream scenario

In the best case, we achieve what we’ve always dreamed of: AI becomes the ultimate pedagogical partner. Each student has their own learning journey, adapted not just to their academic level, but to their unique way of understanding the world. Teachers become what we were always meant to be – wise guides who devote our time to the human elements: to inspire, to challenge, to be present when a student makes a breakthrough.

The system knows each student’s strengths and challenges so well that it can predict where they need support before they know it themselves. But it never replaces human judgment – it informs it. We see patterns we could never have seen alone, but we always make the important decisions ourselves.

We solve the privacy problem not by collecting less data, but by giving each family full control over their information. AI systems learn and adapt, but they do so within boundaries that our society has set together.

In this future, technology truly serves human flourishing.

The nightmare scenario

In the worst case, we create an education machine that produces students instead of developing human beings. AI systems take over so much of the learning process that teachers become technical operators, and students forget what it feels like to struggle with a problem until they solve it themselves.

Every movement, every answer, every hesitation is recorded and analyzed. Children grow up knowing they’re being monitored and evaluated by algorithms they don’t understand. Their creativity is channeled into predictable patterns that the system can measure.

Worst of all: we create a class divide between those who have access to the best AI systems and those who must manage with standard solutions. Education, which should be the great equalizer, becomes the opposite.

The choice is ours

Reality will lie somewhere between these extremes – the question is how close to the best scenario we can come, and how well we can avoid the worst.

The determining factor won’t be the technology itself, but the choices we make today about how we implement it. Every policy decision, every procurement choice, every classroom practice is a small step toward one future or the other.

I remain optimistic, but only because I believe we still have time to choose wisely. The future isn’t something that happens to us – it’s something we create through our daily decisions about what kind of education we want for our children.

Jacobsen: Will teachers be deskilled by AI’s ubiquity? Is this reality or merely perception?

Jørgensen: This question strikes at the heart of what we believe teaching actually is. And the answer depends entirely on how we define skill in the first place.

The narrow view

If we see teaching as primarily content delivery and administrative management, then yes – AI will absolutely deskill teachers. These systems can generate lesson plans, create assessments, provide instant feedback, and even deliver personalized instruction more efficiently than most human teachers ever could. In this narrow view, the teacher becomes redundant, reduced to a classroom monitor overseeing AI-driven learning.

But this reveals a profound misunderstanding of what skilled teaching actually entails.

What real teaching skill looks like

The master teacher’s skill lies not in information transmission but in human connection – in reading the subtle signs that a student is struggling with more than just mathematics, in knowing when to push and when to support, in creating the conditions where genuine learning becomes possible.

This is the skill that no AI can replicate: the ability to see the whole human being behind the learner, to respond to needs that aren’t explicitly stated, to inspire growth that goes beyond curriculum objectives.

The real danger: skill substitution

The real risk isn’t deskilling – it’s skill substitution. When we allow AI to take over tasks that require pedagogical judgment, we atrophy those muscles. If teachers stop designing learning experiences because AI does it for them, they lose the ability to understand why certain approaches work. If they stop assessing student work thoughtfully because automated systems provide instant grades, they miss the deeper insights that come from careful observation.

We risk becoming deskilled not because AI is inherently deskilling, but because we choose convenience over competence.

Perception versus reality

However, we must distinguish between perception and reality. Society may perceive teaching as deskilled if we reduce it to measurable outputs that AI can replicate. But the communities that truly understand education – parents, students, thoughtful administrators – they know the difference between AI-assisted learning and human teaching.

The choice is ours

The question isn’t whether AI will deskill teachers, but whether we will allow it to. The choice is ours: we can use AI as a tool that amplifies human capability, freeing teachers to focus on the uniquely human aspects of education. Or we can surrender our professional judgment to algorithmic efficiency.

The teachers who thrive will be those who become more skilled, not less – skilled in understanding both human learning and technological capability, skilled in knowing when to trust AI and when to trust their professional instincts.

This is our moment to define what teaching excellence looks like in an AI world. We can choose enhancement over replacement, wisdom over efficiency, human judgment over algorithmic convenience.

Jacobsen: How will AI amplify or reduce power asymmetries in education between countries, or within countries and their socio-economically stratified populations?

Jørgensen: This is perhaps the most uncomfortable question we must face about AI in education – because the answer threatens to shatter our most cherished belief that education is the great equalizer.

The brutal reality

The brutal reality is that AI will likely amplify existing inequalities, not reduce them. Technology has never been neutral, and educational AI is no exception. The question isn’t whether disparities will emerge, but how severe they will become.

I wish I could offer a more optimistic view, but my experience in education has taught me that new technologies consistently benefit those who already have advantages.

Global educational colonialism

Between countries, we’re witnessing the emergence of new educational colonialism. Nations with advanced AI capabilities – the United States, China, parts of Europe – are developing systems that will define global educational standards. Countries without these resources will become dependent on foreign AI systems, losing control over what their children learn and how they learn it. The data flows one direction: from the Global South to Silicon Valley servers.

This isn’t just about technology access – it’s about cultural and intellectual sovereignty.

Domestic stratification

Within countries, the stratification is already beginning. Wealthy districts acquire sophisticated AI tutoring systems that adapt to individual learning styles, provide instant multilingual support, and offer advanced analytical capabilities. Meanwhile, under-resourced schools receive basic AI tools – often the same systems, but with fewer features, less support, and limited customization.

The deeper inequality

But the real inequality isn’t just in access – it’s in application. Privileged students learn to use AI as a creative partner, developing critical thinking about algorithmic bias and learning to collaborate with intelligent systems. They’re prepared for a world where AI literacy determines economic opportunity.

Disadvantaged students often encounter AI as a replacement for human interaction – automated tutoring systems substituting for the individual attention they desperately need.

The cruel irony

The cruelest irony is that AI could theoretically democratize high-quality education. Imagine: every child having access to personalized tutoring that rivals the best private instruction. The technology exists to make this possible. But implementation requires massive public investment, thoughtful policy design, and political will to prioritize equity over efficiency.

Educational castes

Without deliberate intervention, we risk creating educational castes: AI-enhanced learners who develop sophisticated digital reasoning skills, and AI-dependent learners who become passive consumers of algorithmic instruction. The gap between these groups will determine social mobility for generations.

The moral choice

The choice before us is stark: We can use AI to finally fulfill education’s promise of equal opportunity, or we can allow it to cement inequality more deeply than ever before. This isn’t a technological challenge – it’s a moral one.

Every decision we make about AI implementation – from procurement policies to teacher training – either moves us toward equity or away from it. There’s no neutral ground here.

Jacobsen: What unique responsibilities and comparative advantages do high-IQ communities have in addressing these shifts?

Jørgensen: As someone who has worked for many years as a teacher in hisJørgenseny, religion, and social studies, while also being active in international high-IQ communities, I view these questions through both an educational and societal lens. My experience tells me that intelligence alone is not enough – it must be connected to creativity, ethical responsibility, and the willingness to communicate with the broader society.

Our genuine advantages

Our comparative advantages are real, but they come with strings attached. Members of high-IQ communities can often see patterns, structures, and long-term consequences earlier than most. This ability to move beyond surface-level analysis allows us to anticipate societal shifts before they become obvious. We tend not only to understand complexity but also to imagine alternative solutions – creative flexibility that becomes essential when existing models no longer work in times of disruption.

While much of society focuses on immediate concerns, we’re well positioned for long-range thinking – about climate change, technological disruption, and the ethical questions surrounding artificial intelligence. With greater intellectual ability comes the responsibility to ask not just “Can we?” but “Should we?” We can play a crucial role in holding both ourselves and society accountable.

Educational responsibilities

In education specifically, our advantage is clear. Having worked with gifted students myself, I know how critical it is to provide them with opportunities that match their potential. High-IQ communities can create networks and resources that empower young talents and prevent them from being overlooked by systems that weren’t designed for them.

But our responsibilities are equally demanding. Even within selective organizations, we must ensure diversity of background, culture, and perspective. Intelligence is not bound by social or geographical borders, and our communities must reflect that reality.

The communication imperative

Most critically: complex ideas are worthless if they remain locked inside closed groups. We have a duty to translate our insights into language and frameworks that policymakers, educators, and ordinary citizens can understand and apply. When one has the capacity to see further, one must also act more responsibly.

This is where many high-IQ communities fail – they become exclusive clubs rather than engines for positive change.

Practical commitments

The practical reality is this: We must form interdisciplinary working groups to address pressing challenges, create educational materials that help teachers support gifted children in ordinary classrooms, and build mentorship networks for young talents who lack resources in their local environments.

We need to be present in the conversations that matter, not just the ones that interest us intellectually.

The choice before us

Without these commitments, intellectual capacity risks becoming isolated. With them, it can become a genuine force for positive change.

Intelligence without service is merely potential. Intelligence with responsibility becomes a tool for building the kind of society we want our children to inherit.

Jacobsen: If you could launch one six-month pilot that integrates AI in schools while modelling equitable global-governance principles, what would it be?

Jørgensen: Here’s what I would do – and it might surprise you because it’s not about the technology at all.

A revolutionary council

I would create the Global Student Voice Council on Educational AI.

Picture this: Students aged 14-18 from twelve countries – representing every continent, every economic level, urban and rural contexts. Not the usual suspects from elite international schools, but genuine diversity: a girl from a village school in Bangladesh, a boy from inner-city Detroit, another from rural Norway, one from a favela in São Paulo.

These are the voices we never hear in discussions about educational technology, yet they’re the ones who will live with our decisions.

Students as critics and co-designers

These students would spend six months evaluating three different AI educational tools – but here’s the twist: they would do so not as passive users, but as informed critics and co-designers. They would learn how these systems work, who built them, what data they collect, and whose values they embed.

We would give them the tools to understand power structures, not just user interfaces.

Revolutionary governance

The governance structure would be revolutionary: Decisions about which tools to recommend, how they should be improved, and what safeguards are needed would be made by the students themselves, using consensus-building methods from different cultural traditions. No adults would have voting power – only advisory roles.

Peer-to-peer networks

Every month, these students would report back to their home communities – not through formal presentations, but through peer-to-peer networks. They would teach other students what they’ve learned about AI, privacy, and power. They would become ambassadors for thoughtful technology adoption.

Young people trust other young people in ways they’ll never trust adults.

Speaking truth to power

The radical element: At the end of six months, these students would present their findings not to education ministers or tech CEOs, but to the UN General Assembly. Their recommendations would carry the moral weight of representing the generation most affected by these decisions.

Flipping the power dynamic

Why this matters: We spend endless time asking adults what AI should do to children, but we never ask children what role they want AI to play in their lives. This pilot would flip the power dynamic entirely.

The real transformation

The real outcome wouldn’t be better AI tools – it would be a generation of young people who understand that they have agency over the technology that shapes their future. That’s the foundation of any truly equitable digital society.

When we give young people real power and real responsibility, they consistently surprise us with their wisdom. It’s time we trusted them with decisions about their own educational future.

Jacobsen: How would you evaluate success?

Jørgensen: This is the question that reveals whether we’re serious about transformation or just playing with expensive toys.

The wrong metrics

Success cannot be measured by the metrics the technology companies want us to use – engagement rates, time on platform, completion percentages. These tell us nothing about whether students are actually learning to think, to question, to become the kinds of human beings our society needs.

We’ve been seduced by data that’s easy to collect rather than focusing on outcomes that actually matter.

What real success looks like

Real success looks like this: A teacher who uses AI tools becomes more human, not less. They have more time for the conversations that change lives because the administrative burden has lifted. They understand their students more deeply because they have better information, but they never mistake information for wisdom.

Success means students who collaborate with AI while maintaining their intellectual independence. They use these tools to explore ideas they couldn’t reach alone, but they never surrender their capacity for original thought. Most importantly, they understand the difference between AI assistance and AI dependence.

The equity test

The equity test is non-negotiable: Success means the gap between privileged and disadvantaged students narrows, not widens. If AI makes the rich schools richer and leaves struggling schools further behind, we have failed completely, regardless of any other metrics.

This is where most technology initiatives fail – they improve overall averages while deepening disparities.

The ultimate measurement

But here’s the measurement that matters most: Ten years from now, can our students think critically about the world they inherit? Can they solve problems we haven’t anticipated? Can they maintain their humanity while working alongside artificial intelligence?

These are the questions that matter, but they’re also the hardest to measure.

Human success metrics

The ultimate success metric isn’t technological at all – it’s human. Have we raised a generation that is more creative, more empathetic, more capable of democratic participation than the one before? Have we preserved what makes education transformative while enhancing it with tools that amplify human potential?

The real test

If we can answer yes to these questions, then we’ve succeeded. If we can only point to improved test scores and efficiency gains, then we’ve optimized the wrong things entirely.

Success, in the end, is measured not in the classroom but in the kind of society these students create when they become adults. That’s the metric that matters – and it’s the one we’ll have to wait decades to truly evaluate.

But if we’re not designing our AI implementations with that long-term vision in mind, we’re already failing.

Jacobsen: Thank you again for the opportunity and your time, Tor.

Discussion

Tor Arne Jørgensen offers a clear map of the AI pressure points inside contemporary schooling and, more importantly, a compass for navigating them without losing the human center of education. His fivefold problem statement—assessment integrity, homework authenticity, critical discernment, shortcut culture, and erosion of personal development—tracks a simple truth: when tools get smarter, institutions must get wiser. The practical risk is obvious (cheating, inflated signals of mastery), but the deeper risk is cultural: mistaking answer-getting for understanding and outsourcing the very struggle that forges independent thinkers.

Policy context matters. Within the Norwegian framework, teachers remain pedagogical leaders, AI use must be declared, and centrally administered exams tighten controls. Jørgensen treats these not as shackles but as scaffolding for a broader redesign: diversify assessments (oral, practical, process-based), shift weight from the product to the learning journey, and build AI literacy that includes privacy, sourcing, and copyright—so students can tell when a system is useful and when it is merely confident.

His account of teacher “value-add” rejects the tired trope that AI makes educators obsolete. The irreplaceable work is relational and contextual: real-time judgment about needs, building trust so intellectual risk is possible, modelling source-critical thinking, and designing experiences that exercise higher-order cognition and ethical reasoning. In this framing, AI amplifies good teaching when subordinated to pedagogy; it corrodes learning when convenience substitutes for competence.

Equity is the fulcrum. Infrastructure and competence gaps split schools into digital haves and have-nots, turning postcode into destiny. Jørgensen argues for coordinated teacher training, privacy-by-design governance, and transparent vendor selection that prioritizes pedagogy over pitch decks. His ethical bill of particulars—dependency, cognitive atrophy, safety theater, and the displacement of face-to-face life by mediated contact—connects today’s classroom realities (school avoidance, anxiety, fractured attention) with tomorrow’s civic costs.

Developmentally, he sequences AI literacy from early recognition and boundaries (primary), through critical judgment and practical ethics (lower secondary), to integrity, responsibility, and co-creation (upper secondary). Vendor evaluation follows the same logic: know who holds power over data and defaults, demand intelligible explanations and limits, and refuse tools that replace rather than enrich learning.

The futures sketch is deliberately binary to force present clarity. In the dream path, AI personalizes without pathologizing, families retain data sovereignty, and teachers invest more time in human work. In the nightmare, surveillance-pedagogy sorts children into educational castes while creativity is rerouted into measurable grooves. Which future arrives is not technological fate; it is policy, procurement, training, and classroom practice—multiplied daily.

Finally, Jørgensen’s pilot—students as evaluators and co-designers reporting to their peers and, ultimately, to the UN—captures the core ethic running through the interview: education with AI should be done with students, not to them. His success metrics follow suit: less dashboard vanity, more human outcomes—intellectual independence, narrowed gaps, and graduates fit for democratic life alongside machines.

Methods

The interview was conducted via typed questions, with explicit consent for review and curation. The process complied with applicable data protection laws, including the California Consumer Privacy Act (CCPA), Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA), and Europe’s General Data Protection Regulation (GDPR): any recordings were stored securely, retained only as long as needed, and deleted upon request. The process also followed Federal Trade Commission (FTC) and Advertising Standards Canada guidelines.

Data Availability

No datasets were generated or analyzed for this article. All interview content remains the intellectual property of the interviewer and interviewee.

References

(No external academic sources were cited for this interview.)

Journal & Article Details

  • Publisher: In-Sight Publishing
  • Publisher Founding: March 1, 2014
  • Web Domain: http://www.in-sightpublishing.com
  • Location: Fort Langley, Township of Langley, British Columbia, Canada
  • Journal: In-Sight: Interviews
  • Journal Founding: August 2, 2012
  • Frequency: Four Times Per Year
  • Review Status: Non-Peer-Reviewed
  • Access: Electronic/Digital & Open Access
  • Fees: None (Free)
  • Volume Numbering: 13
  • Issue Numbering: 4
  • Section: A
  • Theme Type: Discipline
  • Theme Premise: Education
  • Theme Part: 1
  • Formal Sub-Theme: None
  • Individual Publication Date: October 8, 2025
  • Issue Publication Date: January 1, 2026
  • Author(s): Scott Douglas Jacobsen
  • Word Count
  • Image Credits
  • ISSN (International Standard Serial Number): 2369-6885

Acknowledgements

The author acknowledges Tor Arne Jørgensen for his time, expertise, and valuable contributions. His thoughtful insights and detailed explanations have greatly enhanced the quality and depth of this work, providing a solid foundation for the discussion presented herein.

Author Contributions

S.D.J. conceived the subject matter, conducted the interview, transcribed and edited the conversation, and prepared the manuscript.

Competing Interests

The author declares no competing interests.

License & Copyright

In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
© Scott Douglas Jacobsen and In-Sight Publishing 2012–Present.

Unauthorized use or duplication of this material without express permission from Scott Douglas Jacobsen is strictly prohibited. Excerpts and links may be used, provided that full credit is given to Scott Douglas Jacobsen and In-Sight Publishing with direction to the original content.

Supplementary Information

Below are various citation formats for Conversation with Tor Arne Jørgensen on AI Revolution Challenges in Norwegian Education.

American Medical Association (AMA 11th Edition)
Jacobsen S. Conversation with Tor Arne Jørgensen on AI Revolution Challenges in Norwegian Education. In-Sight: Interviews. October 2025;13(4). http://www.in-sightpublishing.com/jor-gensen-ai-pedagogy

American Psychological Association (APA 7th Edition)
Jacobsen, S. (2025, October 8). Conversation with Tor Arne Jørgensen on AI Revolution Challenges in Norwegian Education. In-Sight Publishing, 13(4).

Brazilian National Standards (ABNT)
JACOBSEN, S. Conversation with Tor Arne Jørgensen on AI Revolution Challenges in Norwegian Education. In-Sight: Interviews, Fort Langley, v. 13, n. 4, 2025.

Chicago/Turabian, Author-Date (17th Edition)
Jacobsen, Scott. 2025. “Conversation with Tor Arne Jørgensen on AI Revolution Challenges in Norwegian Education.” In-Sight: Interviews 13 (4). http://www.in-sightpublishing.com/jor-gensen-ai-pedagogy

Chicago/Turabian, Notes & Bibliography (17th Edition)
Jacobsen, Scott. “Conversation with Tor Arne Jørgensen on AI Revolution Challenges in Norwegian Education.” In-Sight: Interviews 13, no. 4 (October 2025). http://www.in-sightpublishing.com/jor-gensen-ai-pedagogy.

Harvard
Jacobsen, S. (2025) ‘Conversation with Tor Arne Jørgensen on AI Revolution Challenges in Norwegian Education’, In-Sight: Interviews, 13(4). http://www.in-sightpublishing.com/jor-gensen-ai-pedagogy.

Harvard (Australian)
Jacobsen, S 2025, ‘Conversation with Tor Arne Jørgensen on AI Revolution Challenges in Norwegian Education’, In-Sight: Interviews, vol. 13, no. 4, http://www.in-sightpublishing.com/jor-gensen-ai-pedagogy

Modern Language Association (MLA, 9th Edition)
Jacobsen, Scott. “Conversation with Tor Arne Jørgensen on AI Revolution Challenges in Norwegian Education.” In-Sight: Interviews, vol. 13, no. 4, 2025, http://www.in-sightpublishing.com/jor-gensen-ai-pedagogy

Vancouver/ICMJE
Jacobsen S. Conversation with Tor Arne Jørgensen on AI Revolution Challenges in Norwegian Education [Internet]. 2025 Oct;13(4). Available from: http://www.in-sightpublishing.com/jor-gensen-ai-pedagogy 

Note on Formatting

This document follows an adapted Nature research-article format tailored for an interview. Traditional sections such as Methods, Results, and Discussion are replaced with clearly defined parts: Abstract, Keywords, Introduction, Main Text (Interview), and a concluding Discussion, along with supplementary sections detailing Data Availability, References, and Author Contributions. This structure maintains scholarly rigor while effectively accommodating narrative content.
