17 min read · By Priscilla Han

Is AI Making Our Kids Smarter — Or Just Faster?

The first generation with universal school technology scored lower than the one before it. As AI enters classrooms, the research suggests the same pattern is repeating. Here's what it means for how you choose a school — and a framework you can use this week.

AI · Education · Critical Thinking · School Selection

The Short Answer

For the first time in the modern history of standardized testing, a generation has scored lower than its predecessors across attention, memory, executive function, and general IQ. Gen Z — the cohort that grew up with universal screen access and technology-saturated classrooms — is measurably less cognitively capable than Millennials. Now AI tools are entering schools and offices at an even greater scale than tablets did a decade ago, and early workplace data suggests the same pattern is repeating: output rises, but the thinking underneath it erodes.

This article is not an argument against technology. It is an argument for intentional education choices — backed by specific data, and ending with a framework you can use on your next school tour.


Part I: The Evidence

A Generation Broke the Pattern

On January 15, 2026, cognitive neuroscientist Dr. Jared Cooney Horvath, director of LME Global, submitted written testimony to the U.S. Senate Committee on Commerce, Science, and Transportation. He testified alongside Dr. Jenny Radesky (University of Michigan), Dr. Jean Twenge (San Diego State University), and Emily Cherkin (The Screentime Consultant). His central claim: Gen Z — people born roughly between 1997 and 2010 — represents the first generation in modern measurement to score lower than their parents on basic attention, memory, literacy, numeracy, executive function, and IQ-linked measures.

Horvath tied the decline to large-scale classroom screen adoption starting around 2010, the same moment 1:1 tablet programs began rolling out globally. In a follow-up Substack post published in March 2026, he wrote: "When NAEP performance is aligned with state-level digital adoption, scores plateau and then decline."

The data underneath his claim is not one study. It is a convergence across multiple independent datasets.

The PISA Evidence

The OECD's Programme for International Student Assessment (PISA) tests 15-year-olds in over 80 countries every three years. The 2022 results, published in December 2023, documented an unprecedented decline:

| Measure | Change (2018 → 2022) | Equivalent |
|---|---|---|
| Mathematics | −15 points | ~¾ of a school year |
| Reading | −10 points | ~½ of a school year |
| Science | Smaller decline | — |

In PISA's scoring system, 20 points equals roughly one year of learning. The decline was not confined to one country or region — it was global.

When the OECD examined the role of digital devices specifically, the findings were stark. Their 2024 report Students, Digital Devices and Success found:

  • 30% of students across OECD countries reported being distracted by their own digital device use in "every or most" math lessons. A further ~25% reported distraction from other students' devices.
  • Students distracted by smartphones in math class scored 15 points lower than undistracted peers — equivalent to three-quarters of a year of learning.
  • Students using phones 5 to 7 hours per day scored 49 points lower in math than those using phones for up to one hour — equivalent to nearly 2.5 years of learning.
  • Moderate device use (up to one hour per day for learning) was associated with a 14-point increase in math scores. The relationship is not linear: a little helps; a lot harms.
  • 45% of OECD students reported feeling nervous or anxious when their phone was not near them.
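The OECD rule of thumb cited above — roughly 20 PISA points per school year — makes these deltas easy to translate. A minimal sketch of the conversion (the function name and constant are illustrative, not from the OECD report):

```python
# Convert PISA score differences into approximate years of learning,
# using the OECD rule of thumb that ~20 points equals one school year.
POINTS_PER_SCHOOL_YEAR = 20

def points_to_years(delta_points: float) -> float:
    """Approximate learning-years equivalent of a PISA point difference."""
    return delta_points / POINTS_PER_SCHOOL_YEAR

# Figures quoted in the OECD findings above:
print(points_to_years(15))  # distraction gap in math lessons -> 0.75 of a year
print(points_to_years(49))  # heavy (5-7 h/day) vs light phone use -> 2.45 years
```

This is how the article's "three-quarters of a year" and "nearly 2.5 years" equivalences are derived from the raw point gaps.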

A separate peer-reviewed study by Dr. Jean Twenge, published in the Journal of Adolescence in early 2026, analyzed 1,788,128 students across 36 countries using PISA data from 2000 to 2022. Her finding: declines in academic performance and increases in loneliness were larger in countries where adolescents had greater smartphone access and spent more time using electronic devices for leisure during school hours. The pattern was international, not isolated.

The Sweden Experiment

Sweden was, until recently, a global leader in digital schooling. In 2009, the country swapped printed textbooks for computers and tablets across schools. By 2019, even preschoolers were required to use digital tools. Then reading scores started falling.

In 2021, Swedish fourth-graders scored 544 on the IEA's PIRLS international reading assessment — down from 555 in 2016, an 11-point decline. Sweden's Minister for Schools, Lotta Edholm, announced a reversal that has since become one of the most significant education policy shifts in Europe:

  • SEK 685 million in 2023, stepping down to SEK 555 million annually from 2026, allocated for physical textbooks — one per pupil per subject.
  • Mandatory phone collection for the entire school day in compulsory schools, expected before autumn 2026.
  • Preschool screen restrictions: no digital learning tools required; only analogue tools (books) for children under 2; greatly restricted non-analogue tools for all other preschool children. Entered into force July 1, 2025.
  • The government's previous digitalisation strategy was formally rejected after criticism from neuroscientists and paediatricians.

Norway, Finland, and France have since followed with similar restrictions. The Netherlands restricted phones in classrooms from 2024. The global direction of policy-making has quietly but firmly shifted — even as edtech marketing budgets remain enormous.

Why Technology Doesn't Improve Learning: The Wiliam Synthesis

Few researchers have studied classroom outcomes as long as Professor Dylan Wiliam, Emeritus Professor of Educational Assessment at UCL's Institute of Education. In his January 2026 essay, Wiliam summarized decades of observation:

"A number of ways that this might be done have been attempted, including changes to the structure of schooling, to the governance of schools, and to the curriculum, and an increased role for digital technology. While each of these approaches has produced some successes, the net impact at system level has been close to, if not actually, zero."

His explanation is simple and difficult to dispute. Citing Larry Cuban's Oversold and Underused: Computers in the Classroom (2002), Wiliam argues that the only thing that reliably improves student outcomes is teacher quality. In the best teachers' classrooms, students learn at twice the rate of average classrooms. Technology changes delivery. It does not change the mechanism. When it crowds out the mechanism — when screens replace the interaction that actually builds thinking — outcomes decline.

The Neuroscience of Handwriting

One specific finding underpins Sweden's reversal. In January 2024, Professors Audrey van der Meer and Ruud van der Weel at the Norwegian University of Science and Technology (NTNU) published a high-density EEG study in Frontiers in Psychology comparing brain activity during handwriting versus typing. Using 256 sensors on 36 university students, they found:

  • Handwriting produced widespread theta and alpha wave connectivity (3.5–12.5 Hz) across central and parietal brain regions linked to memory formation, sensory processing, and attention.
  • Typing produced minimal connectivity in those same regions.
  • Van der Meer's summary: "Handwriting activates almost the whole brain as compared to typewriting, which hardly activates the brain as such."

Children who learned to write and read exclusively on tablets, van der Meer noted, "can have difficulty differentiating between letters that are mirror images of each other, such as 'b' and 'd'. They literally haven't felt with their bodies what it feels like to produce those letters."

This builds on Mueller & Oppenheimer's influential 2014 study "The Pen Is Mightier Than the Keyboard", which found that students who took longhand notes performed better on conceptual questions than laptop note-takers — because handwriting forced them to process and summarize rather than transcribe.


Part II: The Same Pattern Is Now Forming in Offices

If this only affected schools, parents could compensate at home. But the same dynamic is now emerging in workplaces, which means it will shape the world your children graduate into.

The UC Berkeley Study

In February 2026, Aruna Ranganathan (Associate Professor, Haas School of Business, UC Berkeley) and Xingqi Maggie Ye (PhD student, Berkeley Haas) published "AI Doesn't Reduce Work — It Intensifies It" in Harvard Business Review. They tracked 200 employees at a U.S. technology company over eight months and found:

  • AI tools did not reduce work — they intensified it.
  • After an initial productivity surge, employees worked longer hours, at a faster pace, on a broader range of tasks.
  • By month six, burnout, anxiety, and decision paralysis spiked.
  • "Workload creep": employees took on more tasks than were sustainable, leading to cognitive fatigue and lower-quality work.

The "Brain Fry" Study

In March 2026, a team led by Julie Bedard and Gabriella Rosen Kellerman (a psychiatrist and co-author of Tomorrowmind with Martin Seligman) at Boston Consulting Group published "When Using AI Leads to 'Brain Fry'" in Harvard Business Review. Surveying nearly 1,500 full-time U.S. employees, they found:

  • 14% of AI-using workers reported experiencing "brain fry" — defined as mental fatigue from excessive use of, interaction with, and oversight of AI tools beyond one's cognitive capacity.
  • Symptoms included mental fog, headaches, slower decision-making, and what one participant described as "a dozen browser tabs open in my head."
  • Brain fry is distinct from burnout: it is acute cognitive strain, not chronic emotional exhaustion.
  • Critically, when AI was used to eliminate repetitive tasks, workers reported lower burnout and greater engagement. The problem is not AI itself — it is AI used without boundaries.

Business Insider called the broader trend "The Great AI Deskilling" in March 2026: AI creates an illusion of expertise while quietly eroding the judgment that built it.

The uncomfortable question for parents: if the adults in your child's future workplace are losing the ability to reason under their own power, the skill that will distinguish your child's career is not AI fluency. It is the thinking that AI cannot replace.


Part III: What This Means by Age

The concerns are not the same for a six-year-old and a sixteen-year-old. Here is what the research says at each stage.

Ages 6–10: The Foundation Years

This is when working memory expands rapidly, executive function develops, reading fluency is established, and fine motor skills — including the handwriting neural pathways documented by van der Meer — are forming. The WHO recommends no more than one hour of recreational screen time per day for children under five; for ages 6–10, no major body provides a specific limit, but the evidence is clear that displacement is the primary risk. Every hour on a screen is an hour not spent on physical play, social interaction, reading, or hands-on exploration.

What to prioritize: Handwriting instruction. Physical manipulatives for math (the Singapore Math Concrete-Pictorial-Abstract approach, where students use blocks and bar models before symbols). Read-alouds and independent reading with physical books. Unstructured play.

Ages 11–13: The Critical Window

The prefrontal cortex — the brain region responsible for goal planning, impulse inhibition, and abstract thinking — undergoes peak remodeling between ages 13 and 15. Synaptic pruning accelerates: the brain is literally deciding which neural connections to keep and which to eliminate, based on use. "Use it or lose it" is neurologically literal at this age.

This is when students can first genuinely engage with hypothetical reasoning, philosophical questions, and multi-step logical arguments. It is also the age when most children get smartphones, and when PISA data shows distraction effects begin to bite.

What to prioritize: This is the window where Socratic questioning, discussion-based learning, and philosophical inquiry become developmentally appropriate and maximally impactful. If critical thinking neural pathways aren't exercised during this pruning window, they may be permanently weakened. Students need to practice independent reasoning before AI tools become available to them.

Ages 14–16: Responsible Integration

By this age, the question shifts from "should students use technology?" to "how should they use it?" The IB's approach is instructive: AI-generated content must be cited like any other source — credited in the body of text and referenced in the bibliography. The IB explicitly stated in 2023 that it will not ban AI, calling bans "an ineffective way to deal with innovation." Instead, it requires disclosure, evaluation, and original thinking on top of whatever AI provides.

What to prioritize: AI literacy as a skill (understanding what LLMs can and cannot do). Mandatory disclosure of AI use. Process documentation — drafts, thinking journals, revision histories. Significant in-class, handwritten assessment to verify authentic capability. The goal is students who can use AI as a thinking partner, not a replacement.


Part IV: A Framework for Evaluating Schools

The research above is not an argument for sending your child to a school that bans technology. It is an argument for choosing a school where technology is a tool inside a curriculum that builds thinking — rather than a substitute for one.

The distinction matters because technology marketing in education looks identical in both cases. A school that has deployed iPads in every classroom and a school that has integrated iPads into a rigorous, teacher-led curriculum will use the same language in their brochure.

The 5-Minute Classroom Test

If you get a school tour, observe any classroom for five minutes:

  1. Look at the walls. Is student thinking displayed (reasoning, questions, drafts, "I wonder..." statements) or just finished products and decorations?
  2. Count the ratio. How many minutes is the teacher talking versus students talking? Elite discussion-based schools (Harkness method, used at Phillips Exeter and Wellington College) target 70–80% student talk. Average schools are 80%+ teacher talk.
  3. Check the desks. Are students facing each other (collaborative, discussion-ready) or all facing front (lecture)?
  4. Ask a student. "What are you working on and why?" If they can explain the purpose, the teaching is working. If they say "because the teacher told us to," it isn't.
  5. Look for handwriting. Are students writing by hand at all, or is everything on screens?

10 Questions That Reveal a School's Real Approach

On teaching methodology

  1. "Can you walk me through how a typical lesson on a difficult concept is taught?" Listen for questioning sequences, student discussion, worked examples, and formative assessment. Be skeptical if the answer centers on which app or platform is used.

  2. "How is student understanding assessed beyond standardized tests?" Look for extended essays, oral defenses, Socratic seminars, project-based assessment, and teacher-written feedback. Schools using Harvard Project Zero's Visible Thinking routines — protocols like See-Think-Wonder and Claim-Support-Question — are structurally building metacognition.

  3. "What does your school do that a well-resourced online platform cannot?" A good school will have a confident, specific answer. A weak school will struggle.

On technology policy

  4. "At what ages are devices used, and for how long per day?" Compare to the PISA evidence: moderate use (up to one hour per day) is associated with modest gains; heavy use is associated with steep declines.

  5. "What is your policy on smartphones during the school day?" Schools that allow phones in class are ignoring the PISA data. The gap between phone-free and phone-permissive schools is now measurable in test scores.

  6. "How do you handle AI in student work?" A nuanced answer distinguishes between AI as a learning aid (acceptable with disclosure) and AI as a substitute for original thought (unacceptable). Schools that say "we ban it" haven't thought it through. Schools that say "we don't have a policy yet" are behind.

On curriculum design

  7. "How does your curriculum develop critical thinking and independent reasoning?" For IB schools: ask about Theory of Knowledge (TOK) — a mandatory course where students write a 1,600-word essay analyzing how knowledge is constructed, plus a new TOK Exhibition (since 2022) linking real-world objects to epistemological concepts. For A-Level schools: ask about depth and assessment style. For AP: ask specifically about AP Seminar and AP Research (the Capstone program) — without these, AP is primarily content-and-exam.

  8. "How much writing do students do by hand, particularly in early grades?" The van der Meer research is clear: handwriting activates memory, attention, and sensory processing networks that typing does not.

  9. "Can I see a sample of student work from two years ago?" Schools that build thinking will show extended written work, science fair projects, and analytical essays. Schools optimizing for output will show polished presentations with thin argumentation underneath.

  10. "How do you handle a student who gets the right answer but can't explain why?" This reveals whether the school values understanding or compliance.

Red Flag Bingo

If you hear three or more of these on a school tour without substantive explanation, be skeptical:

  • "We're a 1:1 school" (without explaining why devices improve learning)
  • "AI-powered personalized learning" (usually means branching logic from the 1960s with a modern UI)
  • "21st century skills" (without naming specific skills or how they're measured)
  • "Future-ready" (without defining what that means)
  • "Our students are digital natives" (this is a debunked myth — being born near technology does not equal understanding it)
  • "Data-driven instruction" (without showing you the data or how it changes teaching)
  • "Meets every student where they are" (marketing phrase used by virtually every edtech vendor)

A Note on Curriculum Choice

Families choosing between IB, A-Levels, and AP often ask which is best. All three can produce excellent outcomes, but they distribute cognitive demand differently:

| Dimension | IB Diploma | A-Levels | AP |
|---|---|---|---|
| Core critical thinking requirement | Theory of Knowledge (mandatory, externally assessed) | None mandatory | AP Seminar / Research (optional Capstone) |
| Mandatory original research | Extended Essay (4,000 words, externally assessed) | EPQ available but optional | AP Research (optional) |
| Breadth vs depth | Breadth: 6 subjects + core | Depth: typically 3 subjects | Modular: 4–10 courses |
| AI-resistance of assessment | High (oral defenses, extended essays, internal assessments) | High (written exams) | Moderate (many exams are MCQ-heavy) |
| Ethical reasoning requirement | Embedded in every Area of Knowledge (post-2022 curriculum) | Subject-dependent | Subject-dependent |
| Global recognition | Universal | Strongest in UK / Commonwealth | Strongest in US |

If your primary concern is developing independent thinking that AI cannot substitute for, the IB Diploma's core components (TOK, Extended Essay, CAS) are the most structurally aligned with that goal. This is not a claim that every IB school delivers this well — only that the curriculum is designed to require it.


The Bottom Line

Dr. Horvath closed his Senate testimony with a line that applies well beyond the classroom:

"We are redefining education to better suit the tool. That's not progress. That is surrender."

The children who will thrive in an AI-saturated world will not be the ones who learned to use AI earliest. The research is consistent across two decades, 80+ countries, and now early workplace data: tools raise output; they do not raise thinking. The children who thrive will be the ones who learned to think without them — and then chose, consciously and with judgment, when to pick the tools up.

As a parent, your leverage is at the point of school choice, and in the questions you ask your child each week. The framework above is a starting point. If you want help applying it to the specific schools on your shortlist — comparing curricula, evaluating teaching methodology, and navigating admissions — book a free 30-minute consultation.


Sources

  1. Horvath, J.C. (2026). Written testimony, U.S. Senate Committee on Commerce, Science, and Transportation, January 15, 2026.
  2. Horvath, J.C. (2026). "The NAEP Evidence Behind My Senate Testimony." The Digital Delusion (Substack), March 2026.
  3. OECD (2023). PISA 2022 Results (Volume I and II). OECD Publishing, Paris.
  4. OECD (2024). Students, Digital Devices and Success. PISA, OECD Publishing, Paris.
  5. Twenge, J.M. (2026). "International Declines in Academic Performance and Increases in Loneliness Are Linked to Electronic Devices." Journal of Adolescence, 98(1):250-261.
  6. IEA (2023). PIRLS 2021 International Results in Reading.
  7. Swedish Government (2024). "Government investing in more reading time and less screen time."
  8. Wiliam, D. (2026). "How do we prepare students for a world we cannot imagine?" Substack, January 7, 2026.
  9. Van der Meer, A. & van der Weel, R. (2024). "Handwriting but not typewriting leads to widespread brain connectivity." Frontiers in Psychology.
  10. Mueller, P.A. & Oppenheimer, D.M. (2014). "The Pen Is Mightier Than the Keyboard." Psychological Science, 25(6):1159-1168.
  11. Ranganathan, A. & Ye, X.M. (2026). "AI Doesn't Reduce Work — It Intensifies It." Harvard Business Review, February 9, 2026.
  12. Bedard, J. et al. (2026). "When Using AI Leads to 'Brain Fry'." Harvard Business Review, March 5, 2026.
  13. Business Insider (2026). "The Great AI Deskilling Has Begun." March 2026.
