TrashCat Learning Science Spec (Semi-Condensed)
TrashCat is an educational game designed to build mathematical fluency through engaging, research-backed gameplay. It combines the excitement of an endless runner game with a sophisticated spaced-repetition learning algorithm. This document outlines the core learning science principles and the key decisions that shape the educational experience, providing a clear view into the pedagogical foundation of the game.
Scope and Assumptions
Motivational Framework: XP System
We use effort-intrinsic XP (time-based: 1 XP = 1 minute of active run time) rather than content-intrinsic XP (milestone-based). This simplified approach relies on our learning algorithm to control pacing, practice availability, and session duration. XP is awarded for run time only (excluding interventions), with daily caps on Speedrun XP and completion caps on Practice XP once skills reach 100%. Cosmetic rewards (in-game items, visual unlocks) provide immediate acknowledgment of milestone completion.
For comprehensive documentation of the XP system design decisions, research backing, and implementation rationale, see the XP System BrainLift.
Educational Scope: Building Fluency, Not Conceptual Understanding
TrashCat builds fluency through automaticity for students who have already received direct instruction on basic operations externally. Our goal: move students from strategy-based retrieval to automatic, reflexive recall. We provide point-wise reinforcement (targeted interventions) to address gaps, not comprehensive instructional sequences or conceptual explanations.
For comprehensive documentation of the cognitive science principles, instructional models, and research backing our fluency-building approach, see the Math Fluency BrainLift.
Assessment Scope: Integration with TimeBack Ecosystem
TrashCat operates within the TimeBack educational ecosystem, which handles standardized pre- and post-testing. We do not provide comprehensive standardized diagnostic assessments. Our internal assessment mechanisms (Speedrun mode, practice performance tracking) are designed for in-app adaptation and motivation, not external reporting of learning outcomes.
Reporting Scope: Teacher and Parent Visibility via TimeBack
TrashCat does not provide in-app teacher dashboards or parent reporting interfaces. All learning events are published to the TimeBack platform using the 1EdTech Caliper standard, providing unified reporting, standardized data format, reduced redundancy, and comprehensive visibility without TrashCat-specific interfaces.
Overarching Decisions
ITD: We embed mathematical practice within an endless runner where game-controlled variable speed enforces pedagogical timers without player speed control.
- Context: Building fluency requires repeated timed practice; traditional methods like flashcards are monotonous and fail to maintain engagement.
- Alternatives:
- Static flashcard app: focused but lacks engagement needed for sustained practice.
- Untimed/turn-based game: supports accuracy but fails to develop speed component of fluency.
- Player-controlled speed: creates perverse rushing incentives, undermining learning quality.
- Rationale: The endless runner's continuous movement and time pressure naturally align with timed recall needs for fluency building. By making all speed control automatic and game-determined (never player-controlled), we maintain complete pedagogical control over pacing, prevent inappropriate rushing, and create performance-based competitive differentiation while preserving motivational benefits.
ITD: We separate diagnostic assessment (Speedrun) from fluency building (Practice) into two distinct, sequential modes.
- Context: Students need to both demonstrate current knowledge and practice what they're still learning; mixing these creates confusion and penalizes students for facts they haven't learned.
- Alternatives:
- Single adaptive mode: blurs purpose for students ("Am I being tested or practicing?").
- One-time pre-test: fails to capture learning from outside the app.
- Rationale: Two modes establish clear daily rhythm: "Show us what you know today" (Speedrun), then "Train to get faster" (Practice). Speedrun provides daily baseline enabling highly targeted Practice, reducing cognitive load and improving engagement.
ITD: We support all four basic mathematical operations aligned with Common Core standards (Multiplication, Addition, Subtraction, Division).
- Context: While the prototype focused on multiplication, comprehensive math fluency requires covering all foundational operations students learn concurrently.
- Alternatives:
- Focus only on multiplication: simpler product but severely limited market and educational impact.
- Custom non-standard curriculum: difficult for educators to integrate into existing lesson plans.
- Rationale: Covering full basic fluency scope makes the product a complete solution for schools. Aligning with Common Core adopts a widely-recognized, research-backed framework that builds trust and simplifies adoption for educators.
ITD: We scope all learning sessions at the full skill level, not at specific fact sets or families.
- Context: A student's knowledge is holistic; scoping at smaller levels (e.g., just 'x5' table) creates artificial silos preventing effective mixing of new material with long-term review.
- Alternatives:
- Session per fact set/family: gives users control but breaks spaced repetition model and leads to inefficient practice.
- Rationale: Full-skill scoping empowers spaced repetition and fact selection algorithms to work as intended, creating seamless adaptive experience covering entire curriculum. While scope is full skill, practice sorting logic naturally focuses and scaffolds practice, starting with earliest fact sets and progressing logically.
ITD: We use multiple-choice questions with in-game answer objects for timed gameplay, reserving alternative formats for untimed interventions.
- Context: Building fluency requires measuring rapid recall; open-ended input introduces confounding variables (typing speed) unrelated to mathematical ability, and doesn't integrate with endless runner mechanics.
- Alternatives:
- Open-ended text input: closer to worksheets but conflates math recall with typing speed.
- Handwriting recognition: technologically complex and not suitable for fast-paced game.
- Rationale: MCQs eliminate input mechanics friction, providing pure signal of mathematical fluency without confounding variables. Alternative formats (fill-in-the-blank, spinner) are reserved for interventions where goals shift to varied reinforcement—these provide deeper engagement with specific facts after errors without compromising timing precision needed for fluency measurement during gameplay.
ITD: We generate pedagogically-informed distractors based on common mathematical errors, filtered for validity (positive, reasonable range).
- Context: Random distractors don't help diagnose specific misunderstandings; pedagogically-informed distractors representing common mistakes provide stronger diagnostic signals.
- Alternatives:
- Random distractors: simple but provides little diagnostic value.
- Fixed pre-authored distractors: time-consuming and difficult to maintain.
- No filtering: creates implausible options reducing diagnostic value.
- Rationale: Algorithmic generation of error-based distractors provides the best balance of pedagogical value and scalability: common errors are created programmatically for any fact, then filtered so every option is valid. Questions therefore surface student misconceptions effectively without manual authoring or implausible choices.
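To make the pipeline concrete, a minimal sketch in TypeScript (the `Fact` shape, the specific candidate error patterns, and the range heuristic are illustrative assumptions, not the shipped rule set; multiplication is used as the example operation):

```typescript
// Sketch: error-based distractor generation followed by validity filtering.
interface Fact {
  a: number;
  b: number;
  answer: number;
}

function candidateDistractors(f: Fact): number[] {
  return [
    f.answer + f.a,   // off-by-one-operand error, e.g. 6x7 answered as 48
    f.answer - f.a,   // the same error in the other direction
    f.a + f.b,        // operation confusion: added instead of multiplied
    (f.a + 1) * f.b,  // neighboring fact
    (f.a - 1) * f.b,  // neighboring fact
  ];
}

function pickDistractors(f: Fact, count = 3): number[] {
  const valid = candidateDistractors(f).filter(
    (d) =>
      d > 0 &&                            // must be positive
      d !== f.answer &&                   // never the correct answer
      Math.abs(d - f.answer) <= f.answer, // assumed "reasonable range" rule
  );
  return [...new Set(valid)].slice(0, count); // deduplicate, keep `count`
}
```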
Practice Mode
ITD: We group facts into pedagogically-sound fact sets to manage cognitive load and implement structured curriculum sequence.
- Context: Presenting all problems simultaneously (e.g., all 100 multiplication facts) would be cognitively overwhelming; learning is more effective with small, coherent chunks building on prior knowledge.
- Alternatives:
- Random presentation: simplest approach but pedagogically ineffective, causing frustration and cognitive overload.
- Single long sequence: provides order but lacks logical grouping for pattern recognition.
- Rationale: This approach respects cognitive load theory by breaking large skills into manageable chunks with enforced curriculum sequence ensuring coherent pedagogical progression. Specific groupings (doubles, near-doubles, make-10) are based on established arithmetic teaching strategies, helping students achieve automaticity efficiently by practicing related facts together.
ITD: We move individual facts bidirectionally through discrete, research-backed learning stages based on performance.
- Context: Learning isn't monolithic; students move from exposure to accuracy to speed but can also forget—purely linear models don't account for this reality.
- Alternatives:
- Promotion-only model: simpler but fails to address knowledge decay when students forget.
- Single adaptive loop: less transparent and harder to implement distinct pedagogical rules for different learning phases.
- Rationale: Individual fact-level tracking provides granularity needed for true personalization; discrete stages map directly to learning science of fluency development. Demotion capability is critical for responding to forgotten knowledge, ensuring mastery is not just achieved but durably retained over time.
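A minimal sketch of bidirectional stage movement. Only Assessment, Review, and Mastered are named in this spec; the intermediate "Fluency" stage name and the array-based ordering are placeholders for illustration:

```typescript
// Sketch: discrete per-fact stages with promotion and demotion.
type Stage = "Assessment" | "Fluency" | "Review" | "Mastered";
const STAGE_ORDER: Stage[] = ["Assessment", "Fluency", "Review", "Mastered"];

function promote(stage: Stage): Stage {
  const i = STAGE_ORDER.indexOf(stage);
  return STAGE_ORDER[Math.min(i + 1, STAGE_ORDER.length - 1)]; // cap at Mastered
}

function demote(stage: Stage): Stage {
  const i = STAGE_ORDER.indexOf(stage);
  return STAGE_ORDER[Math.max(i - 1, 0)]; // floor at Assessment: forgetting is real
}
```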
ITD: We implement timers through game-controlled dynamic speed and distance-based spawning rather than independent countdown clocks.
- Context: Timing can be implemented through abstract UI timers or physical game mechanics (speed/distance); traditional educational games use countdown timers divorced from the game world.
- Alternatives:
- Independent countdown timer UI: makes timing explicit but decouples from gameplay, reducing urgency.
- Constant speed with variable distance: works but creates visual inconsistency.
- Player-controlled speed: undermines pedagogical pacing goals, creates perverse incentives.
- Rationale: Game-controlled variable speed creates seamless integration of pedagogy and gameplay, with target-tracking smoothing ensuring natural transitions. Different implementations between Practice (variable distance, variable speed) and Speedrun (fixed distance, variable speed) optimize each mode's specific goals while maintaining automatic, game-controlled speed in both modes.
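A back-of-the-envelope sketch of how a pedagogical timer becomes a physical speed, under the assumption that speed is simply spawn distance divided by the target answer window (the numeric values are illustrative):

```typescript
// Sketch: the timer lives in the physics, not in a countdown UI. If a fact
// grants `targetSeconds` of think time and its answer objects spawn
// `spawnDistance` world units ahead, the game sets the run speed so the
// player reaches them exactly when the timer would expire.
function runSpeed(spawnDistance: number, targetSeconds: number): number {
  return spawnDistance / targetSeconds; // world units per second, game-controlled
}

// Practice mode (per this spec): variable distance and variable speed per fact.
const practiceSpeed = runSpeed(120, 6); // 20 units/s for a 6-second window

// Speedrun mode: fixed distance, so speed alone encodes the time window.
const speedrunSpeed = runSpeed(100, 4); // 25 units/s for a 4-second window
```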
ITD: We use simplified sequential answer presentation for Assessment stage to reduce initial cognitive load.
- Context: First in-app assessment involves high cognitive load as students must retrieve answers and adapt to assessment format; evaluating four options simultaneously can be overwhelming.
- Alternatives:
- Same representation for all stages: simpler to implement but creates steep difficulty curve.
- Shorter timer before first answer: rushes students before they've retrieved the fact.
- Strict 4-second total timer: physically impossible at game-controlled running speed.
- Rationale: Altering visual presentation and structuring timer around first answer appearance effectively lowers Assessment stage cognitive load without breaking game flow or creating physically impossible timing constraints. It's a UX-driven solution achieving a pedagogical goal: making first in-app assessment less intimidating and more accessible while respecting endless runner format physical constraints.
ITD: We prioritize fact selection by recency of error, curriculum sequence, and fluency stage using multi-level priority system.
- Context: Many potential facts could be shown in any practice session; purely random or sequential approaches would be pedagogically inefficient.
- Alternatives:
- Random selection: simple but fails to reinforce recent learnings or correct mistakes timely.
- Strict sequential order: too rigid, doesn't adapt to actual performance.
- Rationale: The prioritized approach creates a highly personalized, efficient learning experience: mistakes are corrected immediately (highest priority), curriculum sequence is respected (foundational facts before advanced), learning progression stays logical (earlier-stage facts first), cognitive load is managed (specific ratios of fluency stages), and practice stays varied (sorting by last-asked time plus randomization).
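A hedged sketch of the multi-level sort (the field names and the exact shape of the per-fact state are assumptions; the real model tracks richer state):

```typescript
// Sketch: multi-level priority sort for fact selection.
interface Candidate {
  missedRecently: boolean; // recency of error: highest priority
  factSetIndex: number;    // curriculum sequence: earlier sets first
  stageIndex: number;      // fluency stage: earlier stages first
  lastAskedAt: number;     // epoch ms: least-recently-asked surfaces sooner
}

function nextFact(candidates: Candidate[]): Candidate | null {
  if (candidates.length === 0) return null;
  // Attach random tie-break keys up front so the comparator stays consistent.
  const keyed = candidates.map((c) => ({ c, r: Math.random() }));
  keyed.sort(
    (x, y) =>
      Number(y.c.missedRecently) - Number(x.c.missedRecently) ||
      x.c.factSetIndex - y.c.factSetIndex ||
      x.c.stageIndex - y.c.stageIndex ||
      x.c.lastAskedAt - y.c.lastAskedAt ||
      x.r - y.r, // randomization keeps practice varied
  );
  return keyed[0].c;
}
```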
ITD: We provide adaptive reinforcement interventions on every incorrect answer, in parallel with state demotion, with intervention type selected based on error patterns.
- Context: An incorrect answer provides multiple signals: a data point of decreased mastery, a real-time moment that needs a reminder, and the specific nature of the error (slip vs. struggle vs. misconception), which suggests different intervention intensities.
- Alternatives:
- Intervene instead of demoting: creates forgiving UX but corrupts learning model integrity.
- Demote without intervention: accurate but pedagogically weak, penalizes without support.
- Random intervention selection: simpler but doesn't match support intensity to error severity.
- Rationale: Parallel adaptive approach provides both data integrity (via demotion) and appropriately-calibrated pedagogical support (via intervention). When mistakes occur, demotion applies immediately for accurate long-term knowledge modeling while adaptive intervention selection analyzes error patterns to choose most appropriate type, separating long-term state tracking from immediate instructional support while matching intervention intensity to diagnosed needs.
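One possible shape of the adaptive selection, sketched with the intervention formats named elsewhere in this spec; the thresholds and the mapping from error pattern to format are assumptions, and demotion runs in parallel regardless of which branch fires:

```typescript
// Sketch: choosing intervention intensity from the error pattern.
type Intervention = "audio-reinforcement" | "fill-in-the-blank" | "spinner";

function chooseIntervention(
  factAccuracy: number, // 0..1 over this fact's recent attempts
  missesInARow: number,
): Intervention {
  if (missesInARow === 1 && factAccuracy > 0.8) {
    return "audio-reinforcement"; // likely a slip: lightest touch
  }
  if (missesInARow >= 3 || factAccuracy < 0.3) {
    return "spinner";             // likely a misconception: deepest support
  }
  return "fill-in-the-blank";     // likely a struggle: medium support
}
```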
ITD: We ensure continuous gameplay by hierarchically degrading fact selection rules (general cooldown → Review cooldown → working memory limits) while preserving Repetition cooldowns.
- Context: In intense sessions, students might answer so quickly that all eligible facts are temporarily on cooldown; having no question stalls the game and breaks UX.
- Alternatives:
- End the session: frustrating and confusing UX.
- Show "come back later" message: breaks core gameplay loop.
- Bypass all cooldowns including Repetition: compromises long-term retention.
- Rationale: Hierarchical degradation guarantees a continuous question stream while preserving the most pedagogically critical constraint (cross-session spaced repetition). This pragmatic trade-off momentarily bends lower-priority pedagogical rules (general cooldown first, then Review cooldown, then working memory limits) for the larger goal of keeping students engaged and practicing, never compromising the cross-session repetition schedule that drives long-term retention.
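A sketch of the degradation loop, assuming the selector takes a set of rules to ignore; note that the Repetition cooldown never appears in the relax list:

```typescript
// Sketch: relax constraints one at a time, least critical first, until a
// fact becomes eligible. Cross-session Repetition cooldowns are never bypassed.
type RelaxableRule = "generalCooldown" | "reviewCooldown" | "workingMemoryLimit";
const RELAX_ORDER: RelaxableRule[] = [
  "generalCooldown",
  "reviewCooldown",
  "workingMemoryLimit",
];

function selectWithDegradation<T>(
  select: (ignoredRules: Set<RelaxableRule>) => T | null,
): T | null {
  const ignored = new Set<RelaxableRule>();
  let pick = select(ignored); // first try with every rule intact
  for (const rule of RELAX_ORDER) {
    if (pick !== null) break;
    ignored.add(rule);        // bend one more rule
    pick = select(ignored);
  }
  return pick;
}
```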
ITD: We limit working memory load by dynamically capping concurrent facts building fluency based on recent performance.
- Context: Working memory has strict capacity limits; simultaneously building automaticity for too many facts prevents effective automatic recall for any of them.
- Alternatives:
- No limit: leads to cognitive overload and poor retention.
- Fixed limit for all students: too restrictive for advanced learners, too aggressive for struggling students.
- Rationale: Dynamic working memory limit protects against cognitive overload while adapting to individual capacity, allowing high-performing students to work on more facts simultaneously while struggling students get more focused, manageable sets.
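A minimal sketch, assuming a linear mapping from recent accuracy to the cap; the 2-to-7 bounds are invented for illustration:

```typescript
// Sketch: dynamic cap on facts concurrently building fluency.
function workingMemoryCap(recentAccuracy: number): number {
  const MIN_CAP = 2; // struggling students: small, focused set
  const MAX_CAP = 7; // high performers: more facts in flight
  const t = Math.min(Math.max(recentAccuracy, 0), 1); // clamp to [0, 1]
  return Math.round(MIN_CAP + t * (MAX_CAP - MIN_CAP));
}
```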
ITD: We adapt difficulty parameters dynamically based on recent accuracy and initial knowledge level, filtering Expert difficulty by first Speedrun performance.
- Context: Students' cognitive capacity fluctuates moment-to-moment; students entering with perfect knowledge require fundamentally different learning paths than those building automaticity from scratch.
- Alternatives:
- Manual difficulty selection: burdens users with decisions they may not be equipped to make.
- Static algorithm: fails to provide appropriate challenge or support.
- No initial knowledge filtering: allows students needing practice to bypass necessary spaced repetition.
- Rationale: Dynamic difficulty maintains the optimal challenge zone for each student at each moment by adjusting parameters based on real-time performance. Filtering difficulty eligibility by initial knowledge level reserves aggressive progression (Expert difficulty) for students who have already built automaticity externally and don't need the app's spaced repetition: students who know the material finish quickly, while those needing systematic practice keep full pedagogical rigor.
ITD: We apply bulk promotion to accelerate progress when mastery is demonstrated across a fact set in higher difficulty levels.
- Context: Advanced learners recognize patterns and generalize knowledge; if a student correctly answers significant portions of a fact set with high accuracy, it signals automaticity for the underlying pattern, not just individual facts.
- Alternatives:
- Require individual proof for every fact: ensures complete coverage but creates unnecessary busywork for students with clear automaticity.
- Skip fact sets entirely: too aggressive, might miss edge cases.
- Rationale: Bulk promotion is pedagogically sound when applied carefully; curriculum sequencing ensures "skipped" facts naturally surface in subsequent practice when algorithm selects next unknown fact from that set. Students ultimately practice all facts, but system respects demonstrated mastery by not forcing redundant repetition—only enabled in higher difficulty levels where students have proven ability to generalize.
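A sketch of what the bulk-promotion gate could look like; the 60% coverage and 90% accuracy thresholds are assumptions, while the higher-difficulty restriction is from this spec:

```typescript
// Sketch: promote a whole fact set when broad, accurate coverage at a
// higher difficulty signals automaticity for the underlying pattern.
function shouldBulkPromote(
  difficultyIsHigher: boolean, // only higher difficulty levels are eligible
  factsAnswered: number,
  factsInSet: number,
  setAccuracy: number,         // 0..1 across the answered facts
): boolean {
  const broadCoverage = factsAnswered / factsInSet >= 0.6; // assumed threshold
  return difficultyIsHigher && broadCoverage && setAccuracy >= 0.9;
}
```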
ITD: We provide timed recall from the start with no untimed Grounding stage, using interventions to address revealed gaps.
- Context: Untimed initial exposure applies to initial concept learning when students encounter entirely new ideas; our goal is building automaticity for facts students have already encountered and should retrieve (even if slowly or via strategy).
- Alternatives:
- Include untimed Grounding stage: appropriate for initial concept learning but unnecessary for automaticity building, doesn't create retrieval pressure.
- Separate non-game mode for Grounding: adds complexity for no pedagogical benefit.
- Rationale: Absence of untimed stage is pedagogically correct for automaticity goals—students entering the app have received instruction on these facts; they need timed retrieval practice to move from strategy-based (slow) to automatic (fast) retrieval. When timed assessment reveals gaps (via incorrect answer), we immediately provide untimed intervention formats (fill-in-the-blank, spinner, audio reinforcement) giving scaffolded, pressure-free exposure for that specific fact—more efficient than providing untimed exposure to all facts preemptively.
ITD: We transition to a no-reward random review mode after all facts in a skill reach Mastered stage.
- Context: Students will eventually master all material in a skill; we need clear end-state acknowledging this achievement while allowing long-term knowledge maintenance.
- Alternatives:
- Lock the skill: demotivating, removes long-term review opportunity critical for retention.
- Continue normal algorithm: confusing as there are no more stages to progress to, devalues XP by rewarding already-mastered content.
- Rationale: This approach provides satisfying conclusion to active learning while encouraging long-term retention. Switching to no-reward, random-review mode clearly communicates primary goal achievement, signals skill completion for congratulatory messaging, and keeps skill available for low-stakes practice.
Speedrun Mode
ITD: We design Speedrun as a finite, no-lives race where automatic performance-based speed penalties create competitive differentiation without player speed control.
- Context: Speedrun serves as diagnostic assessment and competitive challenge; if students could fail/die, some would never complete the assessment; if speed weren't affected by performance, there'd be no competitive incentive.
- Alternatives:
- Lives system in Speedrun: incomplete diagnostic data for struggling students, demotivating.
- No speed penalty for errors: uniform completion time regardless of performance, eliminating competitive differentiation.
- Player-controlled speed: allows intentional rushing creating "death spiral".
- Rationale: No-lives, time-based design creates low-stakes diagnostic with natural competitive differentiation—every student completes assessment (ensuring complete data) but high performers achieve better times (creating leaderboard motivation). Fixed base speed with automatic performance-based slowdowns prevents intentional rushing while incentivizing correct answers and obstacle avoidance, with finite nature (one correct or two mistakes per fact) keeping runs short and manageable.
ITD: We use random fact selection across the full skill in Speedrun, ignoring fact sets, cooldowns, difficulty management, and working-memory limits.
- Context: Stage-based practice algorithm optimizes for learning via strategic ordering and pacing; Speedrun optimizes for demonstration speed where any incomplete fact is acceptable at any moment.
- Alternatives:
- Reuse practice selection with flags: increases conditional complexity, couples experiences.
- Post-completion randomizer: works only for mastered facts, retains cooldowns.
- Rationale: Pure random selection maximizes throughput, is easy to reason about and explain, and keeps practice engine uncompromised by Speedrun-specific conditionals.
ITD: We define Speedrun completion as exactly one correct answer per fact regardless of previous incorrect attempts.
- Context: More complex criteria (e.g., 3 correct after a miss) complicate tracking and user comprehension.
- Alternatives:
- Variable or higher fixed counts: more rigorous but slower, harder to explain, less competitive.
- Rationale: One-and-done demonstrates basic fluency while keeping runs brisk; we can raise rigor later without architectural churn.
ITD: We scale time-to-answer inversely with recent accuracy through automatic game-controlled speed adjustments in Speedrun.
- Context: Speed should reflect fluency; as accuracy rises, time pressure should increase to differentiate performance, but this must be game-controlled to prevent diagnostic manipulation.
- Alternatives:
- Tiered time windows: easier to message but less responsive and smooth.
- Player-controlled speed boost: allows students to manipulate diagnostic results.
- Rationale: Continuous game-controlled inverse relationship rewards every incremental accuracy gain with tangible speed benefit while maintaining diagnostic integrity by preventing player manipulation.
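A sketch of the continuous inverse mapping, assuming linear interpolation between two illustrative endpoints:

```typescript
// Sketch: higher recent accuracy yields a tighter answer window.
function answerWindowSeconds(recentAccuracy: number): number {
  const SLOWEST = 8; // seconds granted at 0% recent accuracy (assumed)
  const FASTEST = 3; // seconds granted at 100% recent accuracy (assumed)
  const t = Math.min(Math.max(recentAccuracy, 0), 1);
  return SLOWEST - t * (SLOWEST - FASTEST);
}

// With Speedrun's fixed spawn distance, the window maps directly to speed.
const SPEEDRUN_DISTANCE = 100; // world units, assumed
const currentSpeed = SPEEDRUN_DISTANCE / answerWindowSeconds(0.9);
```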
ITD: We use fact families (same three numbers) and sample one per family for breadth in Speedrun.
- Context: Fact families are well-established educational constructs highlighting inverse relationships and number sense.
- Alternatives:
- Include all facts: maximal coverage but impractically long runs.
- Pure random without family constraint: simple but risks skew, repetition, poor coverage.
- Rationale: Sampling one per family balances breadth and run length, matches educator expectations for fact-family coverage, and creates consistent behavior across all four operations.
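A sketch of one-per-family sampling, assuming a family is keyed by its unordered operand pair within a single operation (this covers commutative pairs like 3x4 and 4x3; non-commutative operations would need an operation-specific key):

```typescript
// Sketch: group a skill's facts by family, then sample one per family.
interface SpeedrunFact { a: number; b: number; answer: number; }

function familyKey(f: SpeedrunFact): string {
  return [Math.min(f.a, f.b), Math.max(f.a, f.b), f.answer].join(":");
}

function sampleOnePerFamily(facts: SpeedrunFact[]): SpeedrunFact[] {
  const byFamily = new Map<string, SpeedrunFact[]>();
  for (const f of facts) {
    const key = familyKey(f);
    const members = byFamily.get(key) ?? [];
    members.push(f);
    byFamily.set(key, members);
  }
  // One random representative per family keeps runs short but broad.
  return [...byFamily.values()].map(
    (m) => m[Math.floor(Math.random() * m.length)],
  );
}
```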
ITD: We retire a Speedrun fact after two mistakes in the same run.
- Context: Repeatedly surfacing the same missed item can stall progress and harm flow in a timed challenge.
- Alternatives:
- Never retire: maximizes exposure but increases churn and frustration.
- Retire after one mistake: faster runs but less evidence of eventual correctness.
- Rationale: A two-mistake limit strikes a balance between giving the student a fair second chance and avoiding run-stalling repetition; errors are still recorded for downstream placement and practice.
ITD: We apply Placement promotions to Practice only on the first completed Speedrun per skill per day, strictly within the same operation, with promotion target based on initial knowledge level.
- Context: Speedrun serves as once-daily diagnostic for specific operation; both XP rewards and placement logic must be limited to first completed run to protect learning integrity and prevent gaming.
- Alternatives:
- Placement on every run: allows farming placement by repeated guessing.
- Placement only for exact facts: narrower mapping, misses transfer within related facts.
- Placement across operations: overgeneralizes knowledge.
- Fixed promotion target: forces high-performers through unnecessary practice.
- Rationale: Limiting placement to once per day aligns with XP rewards and maintains Speedrun's role as a daily diagnostic tool: the first completed run provides fresh knowledge evidence, while subsequent runs add no new diagnostic value. The operation-scoped boundary reflects the pedagogical reality that students develop fluency at different rates across operations; using the first Speedrun to detect initial knowledge level lets us differentiate between students who need to build automaticity and those with existing mastery who need only minimal maintenance.
ITD: We adapt placement promotion targets based on initial Speedrun performance (Perfect: to Mastered; Near-Perfect: first-attempt to Mastered, retry to Review; Needs Practice: to Review).
- Context: High school students using elementary math apps already know the material through years of prior education; forcing them through full spaced repetition wastes time and creates disengagement, but students with incomplete knowledge need the full learning path.
- Alternatives:
- Single promotion target for all students: treats advanced students same as beginners, creates frustration.
- Manual difficulty selection: cognitive burden, doesn't respond to performance data.
- Gradual promotion only: forces students with demonstrated mastery through unnecessary repetition.
- Multiple fine-grained thresholds: more complex, three-level system captures meaningful pedagogical distinctions.
- Rationale: Using first-Speedrun accuracy as a permanent diagnostic makes pedagogical sense: students entering with perfect accuracy have already built automaticity externally and don't need spaced repetition to consolidate already-durable knowledge. Students with very high accuracy (>95%) have most facts mastered but a few gaps; promoting first-attempt-correct facts to Mastered acknowledges existing knowledge while enrolling retry-required facts in Review for targeted practice.
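The three tiers reduce to a small mapping; the >95% threshold is stated above, while the tier names and exact comparisons here are written for illustration:

```typescript
// Sketch: mapping first-Speedrun accuracy to placement behavior.
type PlacementTier = "perfect" | "near-perfect" | "needs-practice";

function placementTier(firstSpeedrunAccuracy: number): PlacementTier {
  // Perfect: promoted facts go to Mastered.
  if (firstSpeedrunAccuracy === 1) return "perfect";
  // Near-Perfect (>95%): first-attempt-correct -> Mastered, retried -> Review.
  if (firstSpeedrunAccuracy > 0.95) return "near-perfect";
  // Needs Practice: proven facts go to Review.
  return "needs-practice";
}
```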
ITD: We advance Speedrun-proven facts to Review or Mastered stages based on initial knowledge level, promoting entire fact families strictly within the operation being practiced.
- Context: Correct Speedrun answer signals existing knowledge for that specific operation; forcing re-learning from scratch in Practice for that operation is inefficient and demotivating.
- Alternatives:
- Promote only specific fact: fails to credit automaticity across related family facts.
- Smaller promotion: less efficient, forces practicing already-demonstrated knowledge.
- Always promote to Review: the conservative traditional choice, but it forces perfect-knowledge students through unnecessary repetition.
- Always promote to Mastered: too aggressive for students needing practice.
- Promote across operations: overgeneralizes knowledge.
- Rationale: We credit entire fact family within operation to acknowledge demonstrated automaticity across related facts in that operation. For students needing practice, promoting to Review respects demonstrated knowledge by skipping introductory stages but maintains spaced repetition for durable automaticity; for students with perfect/near-perfect initial knowledge, promoting to Mastered acknowledges they've built automaticity through external mechanisms and don't need the app to rebuild what's already automatic.
Motivation and Rewards
ITD: We award XP based on active time (1 XP per minute) with mode-specific caps: Practice XP stops at 100% skill completion; Speedrun XP awarded once per day.
- Context: We need to incentivize time investment in building automaticity while preventing endless grinding through completion caps (Practice) and daily caps (Speedrun); time-based approach is maximally simple but requires architectural controls.
- Alternatives:
- XP per milestone/score: more pedagogically sophisticated but requires complex tracking, less immediate feedback, or creates high-stakes anxiety.
- Unlimited XP after completion or per Speedrun: enables XP farming on mastered material or repeated diagnostic grinding.
- No XP after first mastery: too restrictive, prevents motivated students from continuing to earn rewards.
- Rationale: Time-based XP is viable because we control problematic conditions: students cannot rush (game-controlled speed), cannot farm (completion caps + daily Speedrun caps), and cannot dawdle (algorithmic targeting + cooldowns). Simplicity and immediacy of "run time = XP time" creates tight motivational loop while sophisticated algorithm ensures quality practice beneath the surface; daily Speedrun cap guides students back to Practice Mode for main learning.
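A minimal sketch of the award rule with both caps; the field names are assumptions, and active run time is taken as already excluding interventions per the XP System description above:

```typescript
// Sketch: time-based XP gated by the completion cap and the daily Speedrun cap.
interface XpContext {
  activeRunMinutes: number;      // interventions already excluded
  mode: "practice" | "speedrun";
  skillCompletion: number;       // 0..1, progress toward 100% skill completion
  speedrunXpAwardedToday: boolean;
}

function xpForRun(ctx: XpContext): number {
  if (ctx.mode === "practice" && ctx.skillCompletion >= 1) return 0; // completion cap
  if (ctx.mode === "speedrun" && ctx.speedrunXpAwardedToday) return 0; // daily cap
  return Math.floor(ctx.activeRunMinutes); // 1 XP per minute of active run time
}
```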
ITD: We always award cosmetic rewards for first-time Practice stage completion of fact sets, including placement-triggered completions, but never award XP for placement.
- Context: Completing a fact set is significant milestone; placement system can cause multiple fact set completions at once—major achievement deserving immediate acknowledgement.
- Alternatives:
- No reward for placement completion: misses key opportunity to reward demonstrated Speedrun knowledge.
- Award XP for placement completion: violates principle of awarding XP only for building new automaticity through practice, creates "double-dipping".
- Rationale: We create a tight feedback loop between achievement and reward; awarding a cosmetic item for every first-time fact set completion provides immediate positive feedback that reinforces daily Speedrun value and celebrates the student's existing knowledge, while keeping it separate from XP prevents corrupting our metric for building new automaticity.
ITD: We provide dedicated progress screen visualizing status and rewards for every fact set within the skill being practiced.
- Context: To stay motivated, students need to see their own growth; while the game provides immediate question-level feedback, it's hard to see the bigger picture of the learning journey without a dedicated summary view.
- Alternatives:
- No progress screen: leaves students without overall accomplishment sense or clear roadmap, highly demotivating.
- In-game progress indicators only: useful but don't provide detailed reflective overview needed to understand curriculum scope.
- Rationale: Dedicated progress screen makes mastery visible and motivating, serving as primary tool for students to reflect on learning and see direct effort results, critical for long-term engagement.
ITD: We provide competitive leaderboards for Speedrun across four configurations (Global/Weekly, Global/All-Time, School/Weekly, School/All-Time), including all completed runs regardless of retired facts.
- Context: Competition and social comparison are powerful motivators for target age group, but different students are motivated by different competitive contexts (global vs. school-level) and temporal scopes (weekly vs. all-time).
- Alternatives:
- Single global leaderboard only: demotivating for students not in top tier or in smaller/lower-performing schools.
- School-only leaderboards: limits aspirational goals.
- Weekly-only leaderboards: erases historical achievement.
- Exclude runs with retired facts: penalizes attempting challenging material.
- Rationale: Four-category system ensures more students find resonant leaderboard context, maximizing motivational reach by providing multiple achievement/recognition pathways. Including all completed Speedruns (even with retired facts) rewards effort and completion rather than perfection, which is more pedagogically sound—we want students attempting all material, not just facts they're confident about.
ITD: We display an always-visible Fact Grid during all gameplay with the current question's fact highlighted, using single grid format for all four operations.
- Context: While progress screen provides high-level overview, students benefit from immediate context during gameplay; grids are standard educational tools for multiplication and addition showing patterns and relationships.
- Alternatives:
- Show grid only between questions or on-demand: adds friction, fails to make grid constant peripheral learning support.
- Use different visualizations for different operations: adds complexity, breaks visual consistency.
- Rationale: Always-visible grid provides constant, low-cognitive-load context; coercing all four skills into same grid format prioritizes simple, consistent UX over adhering to traditional representations for each operation—benefit of single, predictable interface outweighs awkwardness of representing subtraction or division in grid.
Gameplay and Feedback
ITD: We use in-game consequences (lives system) in Practice Mode where incorrect answers lose lives and 5 consecutive correct answers grant lives.
- Context: In game environments, students need immediate feedback connecting learning performance to game outcomes; pure pedagogical feedback can feel disconnected from core gameplay loop.
- Alternatives:
- No game consequences: purely informational feedback, reduces motivational power.
- Score-only consequences: less visceral, doesn't create same tension or achievement.
- Life on every correct answer: too generous, removes challenge.
- Rationale: Lives system creates tight coupling between learning performance and game survival, making every answer feel consequential and transforming abstract right/wrong feedback into tangible game stakes. Bi-directional system (lose on mistakes, gain on streaks) mirrors pedagogical promotion/demotion concept in intuitively understood game mechanics.
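A sketch of the bi-directional lives rule; the starting life count and the absence of a maximum are assumptions, while the lose-on-miss and five-streak-gain rules are from this spec:

```typescript
// Sketch: lives lost on mistakes, gained on 5-correct streaks.
class Lives {
  private lives = 3; // assumed starting count
  private streak = 0;

  onAnswer(correct: boolean): void {
    if (correct) {
      this.streak += 1;
      if (this.streak === 5) { // 5 consecutive correct answers grant a life
        this.lives += 1;
        this.streak = 0;
      }
    } else {
      this.streak = 0;         // mistakes break the streak
      this.lives -= 1;         // and cost a life
    }
  }

  get isAlive(): boolean {
    return this.lives > 0;
  }
}
```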
ITD: We provide multi-modal feedback (visual and audio) for every answer with redundant success/error indicators.
- Context: Multi-sensory feedback is more effective than single-modality feedback; students process information through multiple channels, and redundant feedback across modalities reinforces messages more strongly.
- Alternatives:
- Visual feedback only: misses auditory learners and students not looking at screen.
- Audio feedback only: misses students with sound off or hearing impairments.
- No immediate feedback: violates fundamental learning principles about timely correction.
- Rationale: Multi-modal feedback is well-established best practice in educational technology, maximizing likelihood that every student receives and processes correctness signal regardless of sensory preferences or momentary attention state.
ITD: We play audio of the complete equation when presenting each question in both Practice and Speedrun modes.
- Context: Reading fluency and mathematical fluency are separate but related skills; some students may struggle to decode written equations quickly, confounding mathematical fact recall measurement.
- Alternatives:
- Text only: creates barrier for students with reading difficulties or auditory learning preferences.
- Audio only: creates barrier for students with hearing difficulties or visual learning preferences.
- Optional audio: adds cognitive load deciding whether to use it, creates friction activating it.
- Rationale: Automatic audio narration removes reading as confounding variable in measuring mathematical fluency, supports universal design for learning principles by providing multiple means of representation, and automatic nature removes decision friction ensuring consistent multi-sensory presentation.
ITD: We play two-part audio correction feedback on incorrect answers (brief "wrong" cue followed by correct equation with answer).
- Context: Incorrect answers represent both failure points and learning opportunities; simply indicating "wrong" misses the chance to teach correct answer at the exact moment student is most receptive.
- Alternatives:
- Visual correction only: misses auditory encoding pathway and students not looking at screen.
- No correction, just "wrong" feedback: misses learning opportunity entirely.
- Delay correction until intervention: loses immediate, context-rich moment when student is most primed to learn.
- Rationale: Immediate audio correction capitalizes on error-driven learning opportunity—research shows errors, when immediately corrected, are powerful learning moments. Audio modality ensures correction is received even if visual attention is elsewhere, with two-part structure clearly separating evaluative feedback from instructional content.
ITD: We wait for all feedback audio to complete before presenting the next question, but each question receives its full timer allocation starting fresh.
- Context: When students answer incorrectly, we play correction audio; immediately displaying next question while this plays would create confusing audio overlap.
- Alternatives:
- Allow audio overlap: creates confusing audio collisions where correction audio from Question A plays while Question B is presented.
- No feedback audio: removes valuable multi-sensory learning channel, reduces accessibility.
- Rationale: The audio gate is a simple, practical way to prevent audio overlap. Since each question gets its full timer regardless of when it appears, the fluency-building objective is preserved: every recall is still strictly timed. The variable delay between questions is merely a buffer for feedback processing and doesn't affect the core pedagogical mechanic of timed mathematical recall.
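A sketch of the gating loop, assuming two engine hooks that resolve when the question is answered and when feedback audio finishes (both hooks are hypothetical names, declared here only to make the sketch type-check):

```typescript
// Sketch: await feedback audio before presenting the next question; each
// presentQuestion call starts a fresh, full timer for that fact.
interface Question { prompt: string; }
declare function presentQuestion(q: Question): Promise<boolean>; // resolves with correctness
declare function playFeedback(correct: boolean): Promise<void>;  // resolves when audio ends

async function questionLoop(next: () => Question | null): Promise<void> {
  let q = next();
  while (q !== null) {
    const correct = await presentQuestion(q); // full timer starts here
    await playFeedback(correct);              // gate: wait for audio to finish
    q = next();                               // only then present the next fact
  }
}
```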