Learning Science Overview

The Fluency API is built on research-backed learning science principles. This section explains the "why" behind the algorithm's decisions.

Core Philosophy

The Fluency API rests on three research-backed but controversial principles:

  1. Algorithm-controlled curriculum: The system controls which facts students practice and when. No user choice within a skill—the algorithm manages interleaving and spaced repetition.

  2. Separate assessment from practice: Daily Speedrun diagnostics identify current knowledge. Practice sessions build automaticity for specific gaps. Clear separation, not a blended mode.

  3. Timed practice forces automaticity: Students move from slow strategy-based calculation to instant automatic recall only through time pressure that makes multi-step reasoning non-viable.

These decisions go against some of the conventional wisdom in EdTech. The sections below explain the research and counterarguments.

Core Approach

The Fluency API builds mathematical automaticity through:

  1. Strategy-driven fact sequencing: Facts are grouped and ordered to leverage cognitive strategies (doubles → near-doubles → making ten). This respects how the brain naturally builds automatic recall.

  2. Spaced repetition with expanding intervals: Facts are reviewed at increasing intervals (1 day, 3 days, 1 week) to maximize long-term retention while minimizing practice time.

  3. Adaptive interventions: When students make errors, the algorithm selects remediation strategies based on error patterns—from light cues to intensive production-based practice.

  4. Dual-mode design: Daily "Speedrun" diagnostics identify current knowledge. "Practice" mode targets specific gaps with timed questions that force automatic retrieval.

  5. Time-based XP with architectural safeguards: Students earn 1 XP per minute of active practice. Gaming is prevented architecturally (game-controlled speed, completion caps, daily limits) rather than behaviorally.
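The expanding-interval schedule in point 2 can be sketched in a few lines. This is an illustrative model, not the actual implementation; the function name, streak-based indexing, and interval cap are assumptions.

```python
from datetime import date, timedelta

# Expanding review intervals from the text: 1 day, 3 days, 1 week.
# Names and structure are hypothetical, for illustration only.
INTERVALS = [timedelta(days=1), timedelta(days=3), timedelta(days=7)]

def next_review(last_review: date, streak: int) -> date:
    """Return the next review date for a fact. The interval grows with
    each consecutive successful review, capping at the longest interval."""
    interval = INTERVALS[min(streak, len(INTERVALS) - 1)]
    return last_review + interval
```

A fact reviewed correctly for the first time comes back the next day; after two successes it waits three days, then settles at weekly review, which is what keeps total practice time low while retention stays high.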

What Makes This Different

Most math practice apps use random question selection or let students choose which facts to practice. The Fluency API takes control away from the user and manages the entire curriculum sequence, mixing new material with long-term review according to spaced repetition schedules.

This "algorithm knows best" approach is controversial but effective. Students can't practice only easy facts, can't skip ahead with gaps, and can't avoid necessary review. The trade-off: less user choice, faster path to genuine automaticity.

Three Deep Dives

We've documented the research, reasoning, and controversial decisions in three "BrainLift" documents:

Math Fluency

The foundation: What is automaticity? Why does it matter? How do you build it?

Covers the cognitive science of working memory, the three phases of fact learning (procedural → derived strategies → automatic retrieval), and why timed practice is the catalyst that forces the leap from calculation to recall.

Key insight: The endless runner's game-controlled variable speed isn't just for fun—it's the pedagogical timer that creates retrieval pressure while preventing gaming.

Counterargument: Jo Boaler's research on math anxiety from timed tests. Our response: stealth assessment in a game context reduces anxiety while maintaining the cognitive benefits of time pressure.

Fact Set Sequencing

The curriculum: Which facts should students practice first? Why does order matter?

Explains how facts are grouped by cognitive strategy (not just numeric similarity), why "doubles" must be mastered before "near-doubles," and how inverse operations (division, subtraction) leverage their primary operation counterparts.

Key insight: Random practice is pedagogically inefficient. Strategy-driven sequencing builds anchor facts first, then uses those anchors to automate derived facts.

Spiky opinion: We deliberately restrict user control over the curriculum sequence. Students select a skill (multiplication), but the algorithm controls the order of fact sets within that skill.
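One way to picture the algorithm-controlled ordering is as a prerequisite graph over fact sets: anchor sets first, derived sets after. The set names and dependency data below are invented examples, not the real curriculum; only the doubles-before-near-doubles relationship comes from the text.

```python
# Hypothetical prerequisite map: each fact set lists the anchor sets
# it derives from. Example data only, not the actual curriculum.
PREREQS = {
    "doubles": [],
    "near_doubles": ["doubles"],    # 6+7 is derived from 6+6
    "making_ten": [],
    "bridge_ten": ["making_ten"],   # 8+5 via 8+2+3
}

def unlock_order(prereqs: dict[str, list[str]]) -> list[str]:
    """Topologically order fact sets so every anchor set is practiced
    before the sets that derive from it."""
    ordered: list[str] = []
    seen: set[str] = set()

    def visit(name: str) -> None:
        if name in seen:
            return
        seen.add(name)
        for dep in prereqs[name]:
            visit(dep)
        ordered.append(name)

    for name in prereqs:
        visit(name)
    return ordered
```

Whatever the real data looks like, the invariant is the same: the student never meets a derived set before its anchors are automatic.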

XP System

The motivation: How do you reward learning without enabling gaming?

Documents our controversial time-based XP system (1 XP = 1 minute) and explains why it works despite violating every best practice in the field.

Key insight: Time-based XP is normally terrible (rewards dawdling, punishes efficiency), but becomes viable when the algorithm controls pacing architecturally. Students can't rush (game-controlled speed), can't farm (completion caps), and can't dawdle (cooldowns exhaust practice).

Trade-off: XP becomes a time metric rather than an achievement signal. We accept this because the system is "embarrassingly simple" for third graders to understand, and Progress % separately tracks actual learning.
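The cap mechanics described above can be sketched as a single clipping function. The constants here are invented for illustration; the real limits would live in the game configuration, and this ignores completion caps and cooldowns, which the system enforces separately.

```python
# Hypothetical sketch of time-based XP with an architectural daily cap.
XP_PER_MINUTE = 1
DAILY_XP_CAP = 30  # assumed value, for illustration

def award_xp(active_minutes: int, xp_earned_today: int) -> int:
    """XP for a session: 1 XP per active minute, clipped so the daily
    cap can never be exceeded. Gaming is prevented by the cap itself,
    not by monitoring student behavior."""
    remaining = max(0, DAILY_XP_CAP - xp_earned_today)
    return min(active_minutes * XP_PER_MINUTE, remaining)
```

Because the award is a pure function of time and the remaining cap, there is nothing for a student to optimize against: rushing doesn't earn more, and dawdling runs into the cap.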

The Complete Specification

For implementation-level detail, see the Complete Learning Science Specification. This document covers every Important Technical Decision (ITD) with context, alternatives considered, and rationale.

Fair warning: it's dense. Start with the BrainLifts above unless you need implementation-level detail.