
Stop Using Tomorrow's Tools to Solve Yesterday's Problems

Written by Ashish Bansal | Feb 12, 2026 10:51:48 PM

The hottest trend in edtech right now? AI that generates video lectures and podcasts from textbooks.

Upload your curriculum. Get a slick two-host podcast in minutes. Investors gasp. Audiences clap.

Here's the problem: if lectures worked, YouTube would have replaced schools by now.

YouTube has millions of free lectures from brilliant educators, from Sal Khan and 3Blue1Brown to Professor Leonard and MIT OpenCourseWare. They're world-class, accessible, and available to every student on earth.

And yet, only 26% of U.S. eighth graders are proficient in math.

So why are we building AI that generates more of what already isn't working?


Decades of research point in the same direction

A meta-analysis of 225 studies found that students in active learning environments score nearly half a standard deviation higher on exams than those in lectures, and that students in traditional lectures are 1.5 times more likely to fail. And here's the kicker: a Harvard study published in PNAS found that students in lectures believed they learned more, while actually learning less.

Lectures may feel productive, but in reality, they aren't. They create an illusion of comprehension. Students get a sense of clarity without ever having to retrieve the idea on their own or apply it to a new problem.

Benjamin Bloom demonstrated this in 1984. Students with one-on-one tutors outperformed 98% of lecture-taught students. The advantage wasn't better content or better delivery. It was that tutors probed misconceptions, responded to confusion in real time, and changed course when something wasn't clicking.

Tutoring is a fundamentally different activity: the student thinks, the tutor guides. That's the opposite of a lecture.


"But AI video saves students from searching YouTube for hours."

Yes, this is true. But it solves the wrong problem.

A student stuck on quadratic equations searches for "quadratics help," thinking a better explanation will make it click. But the real reason they're struggling? Factoring, a skill from two grades ago that they never quite mastered.

A perfectly matched AI video doesn't fix that learning gap. The student watches it, still doesn't get it, and comes to the conclusion they're "just bad at math."

The issue wasn't today's lesson. The breakdown happened before the current lesson began: a gap that surfaced and was left unaddressed.

"What about interactive video? Pause and ask questions. Generate quizzes."

Closer, but still backwards.

A student pauses a video to ask, "I don't get step 3," and gets another explanation: a mini lecture about the lecture. The system rephrases the procedure without diagnosing the student's reasoning. Then comes a quiz, testing recall, not understanding. Knowing you got a question wrong doesn't teach you why.

The quiz flags the answer as incorrect, without identifying the underlying misconception. Was it a calculation mistake? A misunderstanding of the equation? A gap from years earlier? A red X marks the outcome, but it doesn’t clarify the reasoning that led there. Over time, this vagueness leads to frustration.

And here's the deeper issue: math is cumulative. A student struggling with the Pythagorean theorem might be missing square roots from 6th grade, or the ability to rearrange an equation to isolate a variable. No video on Pythagoras, interactive or not, will surface that. An AI tutor asks two questions and finds it.

By probing with follow-up questions, AI tutors narrow down where the logic breaks and adjust instruction from that moment onward.

Carnegie Mellon research confirms this: AI-based virtual helpers that question students and encourage critical thinking produce better learning outcomes than passive content delivery. Even Khan Academy's own efficacy data shows it's the practice-to-mastery component — not the videos — that drives learning gains in their ~350K-student studies. The gains come from guided practice with feedback, not from watching more video explanations.

"But videos help motivated kids!"

Exactly. And that's the problem.

Education Next calls it the 5 Percent Problem: across major online math platforms like Khan Academy, DreamBox, i-Ready, and IXL, only about 5% of students use the programs as recommended. The gains only show up for that group. The other 95%? Minimal impact, if any.

Worse, the 5% who succeed skew toward higher-income, higher-performing students — kids who would likely find a way to learn regardless. As MIT neuroscientist John Gabrieli put it, he is "impressed how education technology has had no effect on outcomes."

A highly motivated student will learn from a YouTube video, a textbook, or a cereal box. Schools don't buy tools for the top 5%. They buy them for everyone else. We don't have an explanation shortage. We have a practice-with-feedback shortage.

The challenge is sustained, structured engagement for students who lack the knowledge to push themselves through confusion, not access to information.


The real question isn't "how do we generate better content?"

It's "how do we replicate the interaction that makes tutoring work — at scale?"

That means AI that:

  • Asks questions instead of giving answers
  • Detects gaps through conversation, not self-diagnosis
  • Spirals back to prerequisites — even from previous grades — and returns
  • Adapts in real time to each student's level and language

Each response from the student changes what happens next. This isn't a feature you bolt onto a video. It's a different paradigm.
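To make the loop concrete, here is a minimal sketch, in Python, of the prerequisite-tracing idea described above. This is an illustration, not StarSpark's actual system: the skill graph, the `diagnose` function, and the `mastered` map are all hypothetical, standing in for what a real tutor would infer from a student's answers to probing questions.

```python
# Hypothetical prerequisite graph: each skill lists the skills it builds on.
# A real system would infer mastery from the student's answers, not a dict.
PREREQS = {
    "quadratic_equations": ["factoring"],
    "factoring": ["multiplying_binomials"],
    "multiplying_binomials": [],
}

def diagnose(skill, mastered):
    """Return the deepest unmastered prerequisite of `skill`.

    `mastered` maps skill name -> bool, e.g. inferred from short
    probing questions. Instead of re-explaining `skill`, the tutor
    walks backwards until it finds where the logic actually breaks.
    """
    for prereq in PREREQS.get(skill, []):
        if not mastered.get(prereq, False):
            # The gap may sit even further back, so keep walking.
            return diagnose(prereq, mastered)
    return skill  # every prerequisite holds; teach the skill itself

# The student from earlier: stuck on quadratics, never mastered factoring.
mastered = {"multiplying_binomials": True, "factoring": False}
print(diagnose("quadratic_equations", mastered))  # -> factoring
```

The point of the sketch is the control flow: the student's responses steer the traversal, so two wrong answers about factoring redirect instruction two grades back, then return. A video cannot branch like this; a conversation can.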

Meanwhile, ACM research on Google's NotebookLM found that LLM-generated summaries "get things wrong, make things up, and do so in complex, non-obvious ways" — mimicking but not producing the outcomes of human comprehension. Generating more AI content isn't just ineffective. It can actively mislead a student.

Fluency can resemble understanding without guaranteeing it.

At StarSpark AI, we built this. An AI math teacher, not an AI lecture generator.

Students answer 30 questions per week. No videos. No podcasts. Just guided, Socratic practice with an AI that teaches the way great tutors do. The system evaluates each response and adjusts difficulty and questioning accordingly.

The result: a one standard deviation improvement in 3 months, measured against baseline assessments in real classrooms, not a demo. C students became B students. B students became A students.

The next time you see an edtech demo that generates a slick video, ask one question:


"What does the student do?"

If the answer is "watch," you're looking at yesterday's solution in tomorrow's packaging.

If a student is actively responding, revising, and thinking through each step, the learning experience looks very different.

We should stop solving yesterday's problems with tomorrow's tools.

Ashish Bansal is CEO and Co-Founder of StarSpark AI, an AI-powered math learning and tutoring platform using the Socratic method for personalized K-12 instruction. Previously ML at Google, Amazon, and Twitter. IIT and Kellogg MBA.

 

References

  1. Bloom, B.S. (1984). "The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring." Educational Researcher, 13(6), 4–16.
  2. Deslauriers, L. et al. (2019). "Measuring Actual Learning Versus Feeling of Learning in Response to Being Actively Engaged in the Classroom." Proceedings of the National Academy of Sciences.
  3. Freeman, S. et al. (2014). "Active Learning Increases Student Performance in Science, Engineering, and Mathematics." PNAS, 111(23), 8410–8415.
  4. Carnegie Mellon University (2021). "New Research Shows Learning Is More Effective When Active."
  5. Sinka, J. (2025). "Thinking Smarter, not Harder? Google NotebookLM's Misalignment Problem in Education." Proceedings of the 43rd ACM International Conference on Design of Communication.
  6. Khan Academy (2024). "Khan Academy Efficacy Results, November 2024."
  7. National Center for Education Statistics (2022). "NAEP Mathematics: Mathematics Highlights 2022."
  8. Holt, L. (2024). "The 5 Percent Problem: Online mathematics programs may benefit most the kids who need it least." Education Next, 24(4), 26–31.