# API Documentation > Complete API documentation ## Overview ### Introduction # TrashCat Docs **TrashCat** is a math fluency platform with two parts: a game (3D endless runner) and the **Fluency API** (the adaptive learning engine that powers it). The API handles spaced repetition, diagnostic assessment, adaptive question selection, and targeted interventions—so you can build engaging educational experiences without implementing the learning science from scratch. ## For Developers You want to integrate adaptive math fluency into your game, web app, or educational product. **What you'll find here:** - How to authenticate and integrate the Fluency API - The complete learning loop: questions → answers → feedback → interventions - Full API reference with endpoints and examples - How TrashCat implements the API (as a worked example) **Start here**: [Developer Guide](https://docs.trashcat.learnwith.ai/guides-getting-started.guide.txt) → [API Reference](https://docs.trashcat.learnwith.ai/api-reference.txt) ## For Educators Evaluating You're deciding whether to adopt TrashCat (or another app using this API) for your classroom or school. You want to understand the research foundation. **What you'll find here:** - The cognitive science behind automaticity and fluency building - Why facts are sequenced the way they are (strategy-driven curriculum) - How spaced repetition works and why it's effective - The controversial decisions we made and the research backing them **Start here**: [Learning Science Overview](https://docs.trashcat.learnwith.ai/guides-learning-science-overview.txt) → [Math Fluency BrainLift](https://docs.trashcat.learnwith.ai/brain-lifts-math-fluency.brainlift.txt) ## For Teachers & Parents You're a teacher or parent whose students are already using TrashCat. You need practical guidance on how to support them. **What you'll find here:** - How the daily learning journey works (Speedrun → Practice) - What XP and progress mean - How long students should practice each day - What to do when they're frustrated or breezing through - How to handle skill completion **Start here**: [Teacher/Parent Guide](https://docs.trashcat.learnwith.ai/guides-teacher-parent.guide.txt) ## For LLMs We provide this documentation in LLM-friendly format for usage with any AI: - **Navigation index** (links with descriptions): https://docs.trashcat.learnwith.ai/llms.txt - **Full documentation** (all content in one file): https://docs.trashcat.learnwith.ai/llms-full.txt ## User Guide ### Teacher/Parent Guide # TrashCat: A Parent & Teacher Guide Welcome to TrashCat! We're excited to partner with you to help your student build mathematical fluency. This guide explains what TrashCat is, how it works, and how you can support your student's learning journey. ## What is TrashCat? TrashCat is an educational game designed to build **mathematical fluency**: the ability to recall basic math facts (like 7×8=56) quickly and accurately. It follows the story of a cat trying to find his way home, solving math puzzles to navigate obstacles. The platform combines this engaging game with a sophisticated, research-backed learning algorithm to make practice effective and fun. **Important Note:** TrashCat is **not** designed to teach mathematical concepts from scratch. It assumes students have already been introduced to _what_ multiplication is (for example). Our goal is to take that conceptual knowledge and build automatic, fast, and accurate recall. 
## Key Terms Understanding a few terms will help you support your student's learning: - **Skills:** The four mathematical operations covered by TrashCat (Addition, Subtraction, Multiplication, Division). - **Facts:** Individual math problems like 7×8=56 or 12-5=7. Each skill contains many facts. - **Fact Sets:** Small, logical groups of related facts (like "doubles" or "×5 facts") that are practiced together. - **XP:** Experience Points. A measure of active practice time, where 1 XP equals 1 minute of the cat running and answering questions. - **Speedrun:** A fast-paced daily challenge that serves as a diagnostic assessment. - **Practice:** The main learning mode where students build fluency through adaptive, spaced repetition. ## Getting Started: The First-Time Experience The very first time your student opens the app, they will be guided through a short, mandatory tutorial. This teaches them the core game mechanics: how to move the cat, how to select answers, and what obstacles look like. This one-time tutorial ensures they understand _how_ to play before they _start_ learning. After the tutorial, all skills are immediately available to practice. ## The Daily Journey: How TrashCat Works ### The Home Screen The main screen of the game shows the available mathematical skills. We recommend progressing in this order, which is how they are displayed: 1. **Addition** 2. **Subtraction** 3. **Multiplication** 4. **Division** From here, your student will follow an intentional two-step "Daily Journey" for the skill they are working on. ### State 1: Start of Day → The Challenge (Speedrun) When your student starts, the game will highlight the **Speedrun** button. - **What it is:** A fast-paced, timed challenge. The student races through a selection of facts that covers the entire skill. - **Purpose:** This is a quick daily _diagnostic_ to see what your student knows _today_. - **How it Works:** This is a finite race where mistakes or hitting obstacles trigger automatic time penalties (slowing the cat down). Faster times indicate higher accuracy and fluency. A typical run takes less than 10 minutes. - **Protective Fail-Safe:** If the system detects your student is struggling significantly (answering many questions incorrectly in a row), the run will **automatically stop** to prevent frustration. We will guide them to Practice mode, which is the better place to build fluency. A stopped run awards no XP and doesn't count as their daily attempt. - **Rewards:** To get rewards, the student must **complete the entire run**. _Only_ the **first completed Speedrun of the day** (per skill) provides: 1. **XP:** Equal to the completion time in minutes (e.g., a 3-minute run awards 3 XP, a 7-minute run awards 7 XP). 2. **Placement:** If your student answers facts correctly, the system "places" them out of the early practice stages for those facts. This respects their knowledge and saves them time. Subsequent Speedruns on the same day are just for fun and to compete on the leaderboards. They do not grant more XP or placement. ### State 2: Speedrun Complete → The Training (Practice) After the daily Speedrun, the home screen will highlight the **Practice** button. This is the core learning experience. - **What it is:** A "smart" endless runner game where the system's algorithm selects the _perfect_ fact for your student to practice at the _perfect_ moment. - **Purpose:** This is where new fluency is built and long-term memory is strengthened. 
- **How it Works:**
  - **Fact Sets:** Individual facts are organized into small, logical groups called "fact sets" (like "doubles" or "×5 facts"). These fact sets move through several pedagogically informed learning stages designed to build fluency step-by-step.
  - **Lives System:** Your student starts with a set number of lives. Each mistake costs one life. If they run out of lives, the Practice session ends.
  - **Dynamic Difficulty:** The game adapts to your student's performance. If they are struggling, the system will adjust to a more supportive learning path. Mastering material may take longer on this path, but the difficulty will always stay appropriate.
- **How it Ends:** Practice sessions can end in a few ways:
  1. Your student runs out of lives.
  2. All available facts are on cooldown (the system uses strict spacing to build long-term memory, so if no facts are ready, the session ends naturally).
  3. Your student reaches the recommended daily practice time (about 20 minutes). A message will appear encouraging them to rest, but they can _choose_ to continue if they wish.
  4. Your student decides to pause and exit the session. Any rewards for milestones completed _during_ that session are saved.

### State 3: Practice Complete → The Rest

Once your student has completed both their daily Speedrun and Practice, the home screen will show a message encouraging them to "Come back tomorrow!" This "rest" is a critical part of the learning process, allowing their brain to consolidate new memories.

## Motivation & Rewards

TrashCat uses several systems to keep students motivated.

- **XP (Experience Points):** Students earn XP based on active practice time:
  - **Daily Speedrun:** Completing their first Speedrun each day awards XP equal to the completion time (1 minute = 1 XP).
  - **Practice Sessions:** During Practice, students earn XP continuously as the cat runs and answers questions (1 minute of run time = 1 XP). Time spent in interventions (error corrections) does not count toward XP—only active running time.
  - **Completion Cap:** Once a skill reaches 100% (all facts mastered), students can still practice it for maintenance, but no more XP is awarded for that skill. This encourages students to work on new skills while maintaining learned material.
- **Cosmetic Rewards:** Students earn fun in-game items (like new outfits for their cat) for completing fact set milestones. These provide immediate acknowledgment of achievement.
- **Transparent Rewards:** All rewards are awarded **automatically** the instant they are earned. You can track all potential and earned rewards on the **Rewards Screen**, which is accessible from the game's home screen.
- **Leaderboards:** Students can compete for the fastest Speedrun times across four different categories:
  - **Weekly Leaderboards:** Reset each week, providing fresh opportunities for achievement
  - **All-Time Leaderboards:** Track permanent records for long-term excellence
  - **School-Level Competition:** Compete with classmates and friends
  - **Global Competition:** Compare performance with students worldwide

## A Supportive Learning Environment

The platform is designed to be a positive and effective learning tool.

- **Audio Narration:** Every question is automatically read aloud by the system. This removes reading speed as a barrier, ensuring that students are demonstrating mathematical knowledge, not reading ability. It also supports different learning styles.
- **Gradual Speed Progression:** The game starts with generous time windows for new facts and gradually tightens the timers as students build fluency. This gradual progression prevents overwhelming students with speed pressure before they're ready.
- **Intelligent Practice Capping:** We want to prevent "cramming." After sufficient effective practice, a dialog will appear encouraging your student to stop for the day, explaining that rest is crucial for learning. They can choose to continue if they wish, but the system guides them toward healthy learning habits.
- **Targeted Interventions:** When a student makes a mistake, the game doesn't just say "wrong." It pauses to provide a brief, interactive "intervention" (like re-building the fact with digital manipulatives) to reinforce the correct answer before they continue. The type of intervention adapts to the student's error pattern. Simple mistakes get quick corrections, while repeated struggles receive more intensive support.

## Your Role as a Parent or Teacher

1. **Monitor Progress:** TrashCat does not have its own built-in parent/teacher dashboards. All student progress, activity, and mastery data are sent to the central **TimeBack** platform, where you can access detailed mastery data, session history, and time-on-task metrics for each skill.
2. **Encourage Consistency:** Your main role is to help build a consistent habit. Help them follow the "Daily Journey" (Speedrun first, then Practice) for approximately 20-30 minutes each day. Short, daily practice is far more effective for long-term memory than one long session per week.

## Common Questions

### How long should my child play TrashCat each day?

**Recommended: 20-30 minutes maximum per day.**

In a typical session:
- **Speedrun:** 5-10 minutes
- **Practice:** 15-20 minutes
- **Total:** About 20-30 minutes

**Important boundaries:**
- **Minimum:** Practicing for less than 10 minutes per day (if they still have skills to work on) is unlikely to yield productive results. We want them at about the 20-minute mark.
- **Maximum:** We do not recommend more than 60 minutes per day, even if your child is highly engaged. Massed practice (playing for hours at once) is not conducive to long-term retention. Short, consistent daily sessions are far more effective.

### What if my child skips the Speedrun?

They can skip it if they want—the Speedrun is not mandatory. However, we recommend completing at least one Speedrun per week.

**Why the Speedrun matters:** The Speedrun re-diagnoses what your child knows today. Since children learn math in other contexts (school, homework, other apps), they may gain fluency through those activities. The Speedrun detects this progress and prevents wasting time practicing facts they already know by "placing them out" of early practice stages.

### How long does it take to complete a skill?

It depends on your child's starting point:
- **Already fluent (just brushing up):** 1-2 days
- **Building fluency from scratch (but has conceptual understanding):** 1-2 weeks of consistent daily practice

### What should I do if my child is constantly frustrated?

The system adapts difficulty automatically, but if your child is getting questions wrong consistently, it may indicate a lack of **conceptual grounding**.

**Important:** TrashCat does not teach _what_ multiplication is. It makes multiplication _fluent_. Your child needs to understand the underlying concept (e.g., what multiplication means, how it works) before using TrashCat.
If they're struggling persistently, they may need more direct conceptual instruction from other resources (Math Academy, Khan Academy, etc.) before continuing.

### What if my child is breezing through everything?

This is not a problem! The game automatically adapts to high performance by increasing difficulty. Let the system handle it.

### What if my child wants to play for hours?

Even if they're obsessed and highly motivated, we do not recommend more than 60 minutes per day. Spaced practice over multiple days is what builds long-term retention—not marathon sessions. The system will encourage them to rest after sufficient practice, but you may need to enforce this boundary.

### What happens when my child reaches 100% on a skill?

**Congratulations!** 100% completion means your child has achieved genuine fluency for that skill. All facts have been practiced to automaticity through our spaced repetition system.

**What this means:**
- They can recall facts instantly (under 2 seconds) without calculation
- Their working memory is freed up for higher-order math problems
- Practice mode remains available for maintenance, but no longer awards XP for this skill (this rewards _building_ new fluency, not repeating mastered content)

**What they should do next:**
1. **If they haven't completed all skills:** Move to the next skill (Addition → Subtraction → Multiplication → Division). They can still earn XP and unlock new rewards there.
2. **If they've completed all four skills:** They're done with active learning in TrashCat! They can compete on leaderboards for fastest Speedrun times or move on to higher-order math in their regular curriculum.
3. **For maintenance:** Run a Speedrun once every 1-2 weeks (per skill) to prevent knowledge decay.

**If your child wants to keep practicing a completed skill:** Some students get hooked and want to keep playing even after 100% completion. Here's how to handle it:
- **What they'll see:** A dialog: "Congratulations! No more XP is awarded for practice in this skill." with options to continue without XP or return to the main menu.
- **How to redirect them:** Celebrate the achievement ("You've graduated from this skill!"), explain that XP rewards building new skills, and redirect to the next uncompleted skill or to competing for faster Speedrun times on leaderboards.

### What order should my child work through the skills?

**Recommended sequence:** Addition → Subtraction → Multiplication → Division

However, if your child is already confident in earlier skills, they can skip ahead. **We softly recommend doing at least one Speedrun on each skill**, as it may uncover unexpected gaps.

### Can my child work on multiple skills at the same time?

Yes, they can switch between skills freely. However, consistency within a single skill typically yields faster progress than constantly switching.

### How does TrashCat fit with my child's school math curriculum?

TrashCat is a **supplement**, not a replacement. It focuses exclusively on building automaticity with basic math facts. Most schools include some form of fact fluency practice, and TrashCat is a faster, more engaging, gamified tool for achieving the same goal.

### Should I sit with my child while they use TrashCat?

No. TrashCat is designed to work independently. The system provides all necessary instruction, feedback, and support. Your role is to encourage consistent daily practice and monitor their progress through the TimeBack platform.
### Data Analyst Guide

# TrashCat: A Data Analyst Guide

This guide helps data analysts understand TrashCat's analytics data published via Caliper (for educational analytics) and MixPanel (for product analytics).

## Analytics Systems Overview

TrashCat publishes data through two distinct analytics pipelines:

### Caliper (via TimeBack)

**Purpose:** Standards-based learning analytics for educational reporting.

**Integration:** Events are sent to the TimeBack Platform, which handles Caliper 1.2 event formatting and QuickSight data set generation.

**Key Event Types:**
- **AssessmentItemEvent** - Question display and completion
- **GradeEvent** - Question scoring and XP awards
- **AssessmentEvent** - Competition lifecycle (started, submitted)
- **AssignableEvent** - Skill and fact set lifecycle (started, completed)

**Common Use Cases:**
- Standards-based learning record tracking
- Skill and fact set progress tracking
- Competition session tracking

### MixPanel

**Purpose:** Product analytics and user behavior tracking.

**Integration:** Events are sent directly from both the backend (learning events) and frontend (UI/loading events) to MixPanel.

**Key Event Types:**
- Learning events (question answered, fact progression, rewards)
- Session events (competition completion, practice milestones)
- UI events (page loads, game loading, authentication)

**Common Use Cases:**
- Student engagement analysis
- Session duration and completion tracking
- Performance trend analysis
- Feature usage monitoring

## Understanding XP (Experience Points)

**XP awards time-on-task, with architectural controls to ensure quality practice.**

XP is calculated as **1 XP per 60 seconds of active running time**. "Active running time" means the cat is moving and the student is answering questions. Time spent in interventions (error corrections) does NOT count toward XP.
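This rule is simple enough to reproduce directly in analysis code. Below is a minimal TypeScript sketch using the constants and names from the implementation details that follow; the standalone function is an illustration, not the platform's actual API:

```typescript
// Minimal sketch of the XP rule: 1 XP per full 60 seconds of active
// running time. The constants mirror the documented implementation
// details; the function itself is a hypothetical illustration.
const XP_PER_TIME_UNIT = 1;
const TIME_UNIT_SECONDS = 60;

// The caller is assumed to have already excluded intervention time
// from activeDurationSec, since interventions never count toward XP.
function totalXp(activeDurationSec: number): number {
  return Math.floor(activeDurationSec / TIME_UNIT_SECONDS) * XP_PER_TIME_UNIT;
}

console.log(totalXp(59));  // 0 (less than one full time unit)
console.log(totalXp(150)); // 2 (the trailing 30 seconds do not count)
```

The mode-specific caps described below then override this value: a fully mastered skill or an already-rewarded daily Speedrun yields 0 XP regardless of duration.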
**Implementation details:** - `XP_PER_TIME_UNIT = 1` - `TIME_UNIT_SECONDS = 60` - Formula: `totalXp = Math.floor(activeDurationSec / 60) * 1` **Mode-Specific Caps:** **Practice Mode:** - XP is awarded continuously during practice sessions - Once a skill reaches 100% completion (all facts mastered), **no more XP is awarded** for that skill - Students can still practice for maintenance, but `xpAwarded` will remain at 0 - This prevents XP farming on mastered content **Speedrun (Competition) Mode:** - XP is awarded **only once per day** per skill - XP amount equals completion time in minutes (e.g., 7-minute run = 7 XP) - Only **completed** runs award XP (aborted runs award 0 XP) - Subsequent completions on the same day award 0 XP **Why students might not get XP despite long practice:** - Skill is already 100% complete (Practice mode) - Already completed a Speedrun today for this skill (Competition mode) - Session was aborted before completion (Competition mode only) - Student is spending time in interventions (doesn't count as active running time) ## Caliper Events Published ### Event Schema Overview All Caliper events follow the IMS Global Caliper Analytics 1.2 specification and include these base fields: - `@context` - Always `http://purl.imsglobal.org/ctx/caliper/v1p2` - `id` - Unique event identifier (URN UUID format) - `type` - Event type (AssessmentItemEvent, GradeEvent, AssessmentEvent, or AssignableEvent) - `actor` - Student user ID as URN UUID string (e.g., `urn:uuid:student-platform-id`) - `action` - Action performed (Started, Completed, Skipped, Graded, Submitted) - `edApp` - Application identifier string (e.g., `https://trashcat.learnwith.ai`) - `session` - Session identifier (currently `urn:tag:auto-attach`) - `eventTime` - ISO 8601 timestamp when event occurred **Recent Schema Changes (November 2024):** 1. **Simplified actor and edApp fields** - Changed from nested objects to simple string identifiers. This reduces redundancy since these identifiers are constant per user/application and can be resolved by the consumer. - Old: `actor: { id: "urn:uuid:123", type: "Person" }` - New: `actor: "urn:uuid:123"` 2. **Removed profile field** - The profile field was redundant with the type field and has been removed from all event schemas. 3. **Added lifecycle tracking** - New events track the complete lifecycle of skills, fact sets, and competitions (start/complete events) for comprehensive learning progress tracking. 4. **Question isPartOf now distinguishes Practice vs Speedrun** - Question events (AssessmentItemEvent) now use different `isPartOf` parent types based on the learning mode: - **Practice questions:** `isPartOf` links to the Fact Set as `AssignableDigitalResource` with `mediaType: curriculum/section` - **Speedrun questions:** `isPartOf` links to the Speedrun Assessment as `Assessment` type - This allows TimeBack to filter/group questions by parent type to analyze Practice engagement vs Speedrun performance separately. **Available Actions:** - `Started` - Resource/question was started or viewed - `Completed` - Question was answered or resource was completed - `Skipped` - Question was skipped or timed out - `Graded` - Question or XP was scored - `Submitted` - Competition was submitted **Score Types:** - `QUESTION_RESULT` - Individual question score (0 or 1) - `XP` - Experience points awarded - `MASTERY` - Fact set mastery achievement (100/100) ### Event Types #### 1. 
Question Viewed (AssessmentItemEvent) **Action:** `Started` **When Published:** When a question is displayed to the student **Key Fields:** - `actor` - Student user ID (URN UUID string) - `object.id` - Question ID (URN UUID) - `object.type` - `AssessmentItem` - `object.name` - Question text (e.g., "7 × 8") - `object.isPartOf` - Parent resource (varies by mode, see below) - `object.dateToStartOn` - Question display timestamp - `object.dateToSubmit` - Expected answer deadline (if timed) - `object.maxAttempts` - Always 1 - `object.maxSubmits` - Always 1 - `object.maxScore` - Always 1 - `object.isTimeDependent` - Whether question is timed - `object.version` - Always "1.0" - `eventTime` - Event timestamp **isPartOf by Mode:** - **Practice Mode:** - `object.isPartOf.id` - Fact Set URL (e.g., `https://api.trashcat.learnwith.ai/learning/v1/skills/{skillId}/factsets/{factSetId}`) - `object.isPartOf.type` - `AssignableDigitalResource` - `object.isPartOf.mediaType` - `curriculum/section` - `object.isPartOf.name` - Fact Set ID - **Speedrun Mode:** - `object.isPartOf.id` - Skill URL (e.g., `https://api.trashcat.learnwith.ai/learning/v1/skills/{skillId}`) - `object.isPartOf.type` - `Assessment` - `object.isPartOf.name` - Skill name #### 2. Question Answered (AssessmentItemEvent) **Action:** `Completed` (for answered) or `Skipped` (for timed out/skipped) **When Published:** When student submits an answer or times out **Key Fields:** - `actor` - Student user ID (URN UUID string) - `object` - Same assessment item as viewed event (including mode-specific `isPartOf`, see Question Viewed above) - `generated.id` - Response ID (URN UUID) - `generated.type` - `Response` - `generated.attempt.id` - Attempt ID (URN UUID) - `generated.attempt.type` - `Attempt` - `generated.attempt.assignee` - Student user ID (URN UUID) - `generated.attempt.assignable` - Assessment item ID (URN UUID) - `generated.attempt.count` - Number of attempts (includes interventions) - `generated.attempt.dateCreated` - Response creation timestamp - `generated.attempt.startedAtTime` - When question was displayed - `generated.attempt.endedAtTime` - When answer was submitted - `generated.dateCreated` - Response creation timestamp - `generated.startedAtTime` - When response started - `generated.endedAtTime` - When response ended - `generated.extensions.answerType` - `CORRECT`, `INCORRECT`, `TIMED_OUT`, or `SKIPPED` - `generated.extensions.choiceId` - Selected answer choice ID - `generated.extensions.timeTookToAnswerMs` - Response time in milliseconds - `eventTime` - Event timestamp **Note:** Skipped/timed-out questions do not include `generated` fields. #### 3. Question Graded (GradeEvent) **Action:** `Graded` **When Published:** Immediately after a question is answered (not for skipped questions) **Key Fields:** - `actor` - Student user ID (URN UUID string) - `object` - Attempt ID from the answer event (URN UUID string) - `generated.id` - Score ID (URN UUID) - `generated.type` - `Score` - `generated.scoreType` - `QUESTION_RESULT` - `generated.attempt` - Attempt ID reference (URN UUID) - `generated.maxScore` - Always 1 - `generated.scoreGiven` - 1 for correct, 0 for incorrect - `generated.comment` - Human-readable result (e.g., "Correct=56, User Answer=56, Result=CORRECT") - `generated.dateCreated` - Grading timestamp - `eventTime` - Event timestamp #### 4. 
XP Awarded (GradeEvent) **Action:** `Graded` **When Published:** When XP is earned during practice or competition **Key Fields:** - `actor` - Student user ID (URN UUID string) - `object.id` - Attempt ID (URN UUID) - `object.type` - `Attempt` - `object.assignee` - Student user ID (URN UUID) - `object.assignable.id` - Course ID (URN UUID) - `object.assignable.type` - `AssignableDigitalResource` - `object.assignable.mediaType` - `curriculum/course` - `object.assignable.name` - Course name - `object.assignable.isPartOf.id` - Subject ID (URN UUID) - `object.assignable.isPartOf.type` - `AssignableDigitalResource` - `object.assignable.isPartOf.mediaType` - `curriculum/subject` - `object.assignable.isPartOf.name` - Subject name (e.g., "Math") - `generated.id` - Score ID (URN UUID) - `generated.type` - `Score` - `generated.scoreType` - `XP` - `generated.attempt` - Attempt ID reference (URN UUID) - `generated.scoreGiven` - Amount of XP awarded - `generated.dateCreated` - Award timestamp - `eventTime` - Event timestamp #### 5. Competition Started (AssessmentEvent) **Action:** `Started` **When Published:** When a student begins a competition (Speedrun) session **Key Fields:** - `actor` - Student user ID (URN UUID string) - `object.id` - Assessment ID (skill API URL, e.g., `https://api.trashcat.learnwith.ai/learning/v1/skills/{skillId}`) - `object.type` - `Assessment` - `object.name` - Skill name - `generated.id` - Attempt ID (URN UUID) - `generated.type` - `Attempt` - `generated.assignee` - Student user ID (URN UUID) - `generated.assignable` - Assessment ID (skill API URL) - `generated.startedAtTime` - Competition start timestamp (ISO 8601) - `eventTime` - Event timestamp #### 6. Competition Submitted (AssessmentEvent) **Action:** `Submitted` **When Published:** When a student completes a competition (Speedrun) session **Key Fields:** - `actor` - Student user ID (URN UUID string) - `object` - Assessment ID as string (skill API URL, e.g., `https://api.trashcat.learnwith.ai/learning/v1/skills/{skillId}`) - `generated.id` - Attempt ID (URN UUID) - `generated.type` - `Attempt` - `generated.assignee` - Student user ID (URN UUID) - `generated.assignable` - Assessment ID (skill API URL) - `generated.startedAtTime` - Competition start timestamp (ISO 8601) - `generated.endedAtTime` - Competition end timestamp (ISO 8601) - `eventTime` - Event timestamp (same as endedAtTime) **Note:** Competition aborted events do not generate Caliper events. #### 7. Skill Started (AssignableEvent) **Action:** `Started` **When Published:** When a student answers their first question in a skill (tracked across both practice and competition modes) **Key Fields:** - `actor` - Student user ID (URN UUID string) - `object.id` - Skill resource URL (e.g., `https://api.trashcat.learnwith.ai/learning/v1/skills/{skillId}`) - `object.type` - `AssignableDigitalResource` - `object.mediaType` - `curriculum/course` - `object.name` - Skill name - `object.isPartOf.id` - Parent subject ID (URN UUID) - `object.isPartOf.type` - `AssignableDigitalResource` - `object.isPartOf.mediaType` - `curriculum/subject` - `object.isPartOf.name` - Subject name (e.g., "Math") - `generated.id` - Attempt ID (URN UUID) - `generated.type` - `Attempt` - `generated.assignee` - Student user ID (URN UUID) - `generated.assignable` - Skill resource URL - `generated.startedAtTime` - First answer timestamp (ISO 8601) - `eventTime` - Event timestamp **Note:** This event is triggered once per student per skill, regardless of mode (practice or competition). #### 8. 
Skill Completed (AssignableEvent) **Action:** `Completed` **When Published:** When a student masters all facts in a skill (reaches 100% completion) **Key Fields:** - `actor` - Student user ID (URN UUID string) - `object` - Skill resource URL as string (e.g., `https://api.trashcat.learnwith.ai/learning/v1/skills/{skillId}`) - `generated.id` - Attempt ID (URN UUID) - `generated.type` - `Attempt` - `generated.assignee` - Student user ID (URN UUID) - `generated.assignable` - Skill resource URL - `generated.startedAtTime` - First answer timestamp (ISO 8601) - `generated.endedAtTime` - Completion timestamp (ISO 8601) - `eventTime` - Event timestamp (same as endedAtTime) **Note:** Skill completion represents mastery of all fact sets within the skill. #### 9. Fact Set Started (AssignableEvent) **Action:** `Started` **When Published:** When a student answers their first question in a fact set **Key Fields:** - `actor` - Student user ID (URN UUID string) - `object.id` - Fact set resource URL (e.g., `https://api.trashcat.learnwith.ai/learning/v1/skills/{skillId}/factsets/{factSetId}`) - `object.type` - `AssignableDigitalResource` - `object.mediaType` - `curriculum/section` - `object.name` - Fact set ID - `object.isPartOf.id` - Parent skill URL - `object.isPartOf.type` - `AssignableDigitalResource` - `object.isPartOf.mediaType` - `curriculum/course` - `object.isPartOf.name` - Skill name - `generated.id` - Attempt ID (URN UUID) - `generated.type` - `Attempt` - `generated.assignee` - Student user ID (URN UUID) - `generated.assignable` - Fact set resource URL - `generated.startedAtTime` - First answer timestamp (ISO 8601) - `eventTime` - Event timestamp #### 10. Fact Set Completed (AssignableEvent) **Action:** `Completed` **When Published:** When a student masters all facts in a fact set (all facts reach final mastery stage) **Key Fields:** - `actor` - Student user ID (URN UUID string) - `object` - Fact set resource URL as string (e.g., `https://api.trashcat.learnwith.ai/learning/v1/skills/{skillId}/factsets/{factSetId}`) - `generated.id` - Attempt ID (URN UUID) - `generated.type` - `Attempt` - `generated.assignee` - Student user ID (URN UUID) - `generated.assignable` - Fact set resource URL - `generated.startedAtTime` - First answer timestamp (ISO 8601) - `generated.endedAtTime` - Completion timestamp (ISO 8601) - `eventTime` - Event timestamp (same as endedAtTime) **Note:** Fact set completion represents mastery of all individual facts within that set. #### 11. Fact Set Mastery Graded (GradeEvent) **Action:** `Graded` **When Published:** Immediately after a fact set is completed (mastered), alongside the Fact Set Completed event **Key Fields:** - `actor` - Student user ID (URN UUID string) - `object.id` - Attempt ID (URN UUID) - `object.type` - `Attempt` - `object.assignee` - Student user ID (URN UUID) - `object.assignable` - Fact set resource URL (section, e.g., `https://api.trashcat.learnwith.ai/learning/v1/skills/{skillId}/factsets/{factSetId}`) - `object.startedAtTime` - First answer timestamp (ISO 8601) - `object.endedAtTime` - Completion timestamp (ISO 8601) - `generated.id` - Score ID (URN UUID) - `generated.type` - `Score` - `generated.scoreType` - `MASTERY` - `generated.attempt` - Attempt ID reference (URN UUID) - `generated.maxScore` - Always 100 - `generated.scoreGiven` - Always 100 (indicates full mastery) - `generated.dateCreated` - Grading timestamp - `eventTime` - Event timestamp (same as endedAtTime) **Note:** This event is always fired alongside the Fact Set Completed event. 
The `object.assignable` references the section (fact set) to indicate what was mastered. ### Lifecycle Tracking Implementation TrashCat tracks learning activity lifecycles across both practice and competition modes to provide comprehensive analytics: **Skill Lifecycle:** - **Started:** Triggered on the first question answered in a skill, whether in practice or competition mode. The system checks both practice and competition states to determine the earliest start time. - **Completed:** Triggered when all fact sets within a skill reach mastery (100% completion). This represents full skill mastery. **Fact Set Lifecycle:** - **Started:** Triggered on the first question answered in a specific fact set. The start time is tracked per fact set. - **Completed:** Triggered when all facts in a fact set reach the final mastery stage. This indicates complete mastery of that fact set. **Competition Lifecycle:** - **Started:** Triggered when a student begins a Speedrun (competition) session. - **Submitted:** Triggered when a student successfully completes a Speedrun session. Aborted competitions do not generate a submitted event. **Key Implementation Details:** - Start times are calculated by looking at the earliest answer timestamp across both practice and competition states - All lifecycle events include precise ISO 8601 timestamps for duration analysis - The system ensures each "started" event is only sent once per student per entity (skill/factset/competition) - Completion events include both start and end timestamps for accurate duration tracking ## MixPanel Events Published ### Backend Events #### question_displayed **When Published:** When a question is generated and shown **Key Properties:** - `user_id`, `skill_id`, `session_id`, `algorithm_id` - `question_id`, `question_text`, `fact_id`, `fact_set_id` - `stage_type` / `learning_mode` - e.g., "introduction", "practice", "review" - `time_started`, `time_to_answer` - `choices`, `choice_count` - `timestamp` #### question_answered **When Published:** When a student answers or times out on a question **Key Properties:** - `user_id`, `skill_id`, `session_id`, `algorithm_id` - `question_id`, `question_text`, `fact_id`, `fact_set_id` - `answer_type` - "correct", "incorrect", "timed_out", "skipped" - `answer_value`, `correct_answer_value` - `time_started`, `time_ended`, `actual_time_to_answer_seconds` - `intervention_type` - e.g., "build_fact", "show_answer" (if applicable) - `stage_type` / `learning_mode` - `timestamp` #### individual_fact_progression **When Published:** When a single fact advances or regresses between stages **Key Properties:** - `user_id`, `skill_id`, `session_id`, `algorithm_id` - `fact_id`, `fact_set_id` - `from_stage`, `to_stage` - Stage IDs (e.g., "intro_stage_1" → "practice_stage_1") - `answer_type` - Trigger for progression - `consecutive_count` - Number of consecutive correct answers - `timestamp` #### bulk_promotion **When Published:** When multiple facts in a fact set are promoted due to demonstrated mastery (e.g., from Speedrun placement) **Key Properties:** - `user_id`, `skill_id`, `session_id`, `algorithm_id` - `fact_set_id` - `promoted_facts_count` - Number of facts promoted - `consecutive_correct_count` - Trigger threshold - `coverage_percentage` - Percentage of fact set answered correctly - `timestamp` #### fact_set_review_ready **When Published:** When all facts in a fact set complete their introduction stage and are ready for review **Key Properties:** - `user_id`, `skill_id`, `session_id` - `fact_set_id`, 
`next_fact_set_id` - `total_facts_count`, `total_answer_count` - `timestamp` #### fact_set_completion **When Published:** When a fact set reaches full mastery (all facts in final stage) **Key Properties:** - `user_id`, `skill_id`, `session_id` - `completed_fact_set_id`, `next_fact_set_id` - `total_facts_count`, `total_answer_count` - `started_at_time` - When first question in fact set was answered - `ended_at_time` - When fact set was completed - `timestamp` #### fact_set_started **When Published:** When a student answers their first question in a fact set **Key Properties:** - `user_id`, `skill_id`, `session_id`, `algorithm_id` - `fact_set_id` - `started_at_time` - Timestamp of first answer - `timestamp` #### reward_event **When Published:** When XP or cosmetic rewards are earned **Key Properties:** - `user_id`, `skill_id`, `session_id`, `algorithm_id` - `reward_xp_amount` - XP awarded (may be 0 if only cosmetic) - `reward_cosmetic` - Cosmetic item ID or 0 - `timestamp` #### competition_started **When Published:** When a Speedrun (competition) session begins **Key Properties:** - `user_id`, `skill_id`, `session_id`, `algorithm_id` - `started_at_time` - Competition start timestamp - `timestamp` #### competition_completion **When Published:** When a Speedrun is successfully completed **Key Properties:** - `user_id`, `skill_id`, `session_id`, `algorithm_id` - `successful` - Always true for completed events - `time_spent_sec` - Total elapsed time including penalties - `active_duration_sec` - Active running time (basis for XP) - `questions_answered`, `correct_count`, `incorrect_count` - `xp_earned` - XP awarded (0 if not first completion today) - `items_earned` - Cosmetic items unlocked - `total_facts`, `total_mastered_facts` - Skill-level stats - `started_at_time` - Competition start timestamp - `ended_at_time` - Competition end timestamp - `timestamp` #### competition_aborted **When Published:** When a Speedrun is automatically stopped due to too many errors **Key Properties:** - `user_id`, `skill_id`, `session_id`, `algorithm_id` - `reason` - Abort reason (e.g., "too_many_errors") - `questions_answered` - Questions attempted before abort - `incorrect_in_window` - Number of errors in recent window - `window_size` - Size of error window checked - `started_at_time` - Competition start timestamp - `timestamp` #### skill_started **When Published:** When a student answers their first question in a skill (across practice and competition modes) **Key Properties:** - `user_id`, `skill_id`, `session_id`, `algorithm_id` - `started_at_time` - Timestamp of first answer - `timestamp` #### skill_completed **When Published:** When a student masters all facts in a skill (reaches 100% completion) **Key Properties:** - `user_id`, `skill_id`, `session_id`, `algorithm_id` - `started_at_time` - When first question was answered - `ended_at_time` - When skill was completed - `timestamp` ### Frontend Events #### page_load **When Published:** When the application loads **Key Properties:** - `user_id`, `organization_id`, `organization_name` - `timestamp` #### sign_up / sign_in / sign_out **When Published:** During authentication flows **Key Properties:** - `user_id`, `organization_id`, `organization_name` - `timestamp` #### game_loading_started / game_loading_success / game_loading_failure **When Published:** During Unity game initialization **Key Properties:** - `gameType` - "unity" - `gamePath` - Unity build path - `loadingTimeMs` - Time to load (success only) - `timestamp` #### unity_error **When Published:** When 
Unity encounters a runtime error

**Key Properties:**
- `error` - Error message
- `timestamp`

#### Custom Unity Events

Unity can send arbitrary MixPanel events via the bridge. Event names and properties are defined in the Unity codebase. These appear in MixPanel with:
- Standard user context (`user_id`, `organization_id`, etc.)
- Custom properties from Unity (converted from strings to appropriate types)

## Session Data Structure

Session data is stored in DynamoDB and queryable via the backend API.

**Key Fields:**
- `userId`, `skillId`, `date` (YYYY-MM-DD)
- `sessionId` - Unique session identifier
- `algorithmId` - "practice" or "competition"
- `createdAt` - Session creation timestamp (ISO format)
- `lastActiveDurationSec` - Most recent active duration reported
- `lastXpAwardedDurationSec` - Duration at which XP was last calculated
- `xpAwarded` - Total XP earned in this session
- `isCompleted` - (Competition only) Whether run finished
- `completedAt` - (Competition only) Completion timestamp
- `hasRecommendedSessionCompletion` - Whether system recommended rest

## Common Analysis Patterns

### Calculating Accuracy

**From MixPanel:**
```
Accuracy = (COUNT WHERE answer_type = "correct") / (COUNT WHERE answer_type IN ["correct", "incorrect"])
```

**Note:** Exclude `timed_out` and `skipped` from the denominator, or include them as errors, depending on your analysis goal.

### Verifying XP vs Time Spent

**Expected XP:**
```
expectedXp = Math.floor(lastActiveDurationSec / 60)
```

**Actual XP:**
```
actualXp = xpAwarded (from session data or reward_event)
```

**If they don't match:**
- Skill may be 100% complete (Practice mode)
- Daily Speedrun already completed today (Competition mode)
- Session was aborted (Competition mode)

### Identifying Learning Patterns

**Questions to ask:**
- How many interventions per student per session?
- Average time to answer by stage type?
- Fact progression velocity (intro → practice → review → mastery)?
- Drop-off points (which fact sets cause students to quit)?

**Key Events:**
- `question_answered` - Filter by `intervention_type IS NOT NULL` for intervention frequency
- `individual_fact_progression` - Track stage transitions
- `fact_set_completion` - Measure milestone achievement rate

### Monitoring Engagement

**Session Duration:**
```
sessionDuration = lastActiveDurationSec (from session data)
```

**Session Completion Rate:**
```
completionRate = (COUNT WHERE isCompleted = true) / (COUNT sessions) (Competition only)
```

**Daily Active Users:**
- Count distinct `user_id` per day with any session activity

**Retention:**
- Track users with sessions on consecutive days
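These patterns are easy to script against exported MixPanel events and session records. Here is a hedged TypeScript sketch of the accuracy and XP-verification patterns above; the interfaces are trimmed to the fields used, and the `accuracy` and `xpMismatch` helpers are hypothetical names, not an SDK:

```typescript
// Sketch of the accuracy and XP-verification patterns, using field names
// from the question_answered event and the session data structure above.
type AnswerType = "correct" | "incorrect" | "timed_out" | "skipped";

interface QuestionAnswered { answer_type: AnswerType; }
interface SessionRecord { lastActiveDurationSec: number; xpAwarded: number; }

// Accuracy over attempted questions. timed_out/skipped are excluded here;
// add them to the denominator if your analysis counts them as errors.
function accuracy(events: QuestionAnswered[]): number {
  const attempted = events.filter(
    (e) => e.answer_type === "correct" || e.answer_type === "incorrect",
  );
  if (attempted.length === 0) return 0;
  const correct = attempted.filter((e) => e.answer_type === "correct").length;
  return correct / attempted.length;
}

// Expected XP from active duration; a shortfall vs. xpAwarded usually
// points to a completion cap, a daily Speedrun cap, or an aborted run.
function xpMismatch(session: SessionRecord): boolean {
  const expectedXp = Math.floor(session.lastActiveDurationSec / 60);
  return session.xpAwarded !== expectedXp;
}
```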
### "Student has many sessions on the same date" **Expected:** Students can have multiple sessions per skill per day: - Multiple practice sessions (ending due to lives, cooldown, or exit) - One scored Speedrun + unlimited unscored attempts - Sessions across different skills Each session has a unique `sessionId`. Aggregate by date for daily activity summaries. ### "Caliper events missing for some questions" **Check:** 1. Does user have a `platformId`? (Required for Caliper, not for MixPanel) 2. Was the answer skipped/timed-out? (No graded event is sent for non-attempts) 3. Check error logs for TimeBack API failures ## Data Access **MixPanel Dashboard:** - Access product analytics and user behavior data - Log in with your organization credentials - Contact trashcat@trilogy.com if you need access **Caliper Events (via TimeBack):** - Events are forwarded through the TimeBack learning records platform to QuickSight - Access data sets via AWS QuickSight - Contact timeback@trilogy.com for QuickSight access or TimeBack platform credentials ## Learning Science ### Overview # Learning Science Overview The Fluency API is built on research-backed learning science principles. This section explains the "why" behind the algorithm's decisions. ## Core Philosophy The Fluency API is built on three research-backed but controversial principles: 1. **Algorithm-controlled curriculum**: The system controls which facts students practice and when. No user choice within a skill—the algorithm manages interleaving and spaced repetition. 2. **Separate assessment from practice**: Daily Speedrun diagnostics identify current knowledge. Practice sessions build automaticity for specific gaps. Clear separation, not a blended mode. 3. **Timed practice forces automaticity**: Students move from slow strategy-based calculation to instant automatic recall only through time pressure that makes multi-step reasoning non-viable. These decisions go against some of the conventional wisdom in EdTech. The sections below explain the research and counterarguments. ## Core Approach The Fluency API builds **mathematical automaticity** through: 1. **Strategy-driven fact sequencing**: Facts are grouped and ordered to leverage cognitive strategies (doubles → near-doubles → making ten). This respects how the brain naturally builds automatic recall. 2. **Spaced repetition with expanding intervals**: Facts are reviewed at increasing intervals (1 day, 3 days, 1 week) to maximize long-term retention while minimizing practice time. 3. **Adaptive interventions**: When students make errors, the algorithm selects remediation strategies based on error patterns—from light cues to intensive production-based practice. 4. **Dual-mode design**: Daily "Speedrun" diagnostics identify current knowledge. "Practice" mode targets specific gaps with timed questions that force automatic retrieval. 5. **Time-based XP with architectural safeguards**: Students earn 1 XP per minute of active practice. Gaming is prevented architecturally (game-controlled speed, completion caps, daily limits) rather than behaviorally. ## What Makes This Different Most math practice apps use random question selection or let students choose which facts to practice. The Fluency API takes control away from the user and manages the entire curriculum sequence, mixing new material with long-term review according to spaced repetition schedules. This "algorithm knows best" approach is controversial but effective. 
Students can't practice only easy facts, can't skip ahead with gaps, and can't avoid necessary review. The trade-off: less user choice, faster path to genuine automaticity.

## Three Deep Dives

We've documented the research, reasoning, and controversial decisions in three "BrainLift" documents:

### [Math Fluency](https://docs.trashcat.learnwith.ai/brain-lifts-math-fluency.brainlift.txt)

**The foundation**: What is automaticity? Why does it matter? How do you build it?

Covers the cognitive science of working memory, the three phases of fact learning (procedural → derived strategies → automatic retrieval), and why timed practice is the catalyst that forces the leap from calculation to recall.

**Key insight**: The endless runner's game-controlled variable speed isn't just for fun—it's the pedagogical timer that creates retrieval pressure while preventing gaming.

**Counterargument**: Jo Boaler's research on math anxiety from timed tests. Our response: stealth assessment in a game context reduces anxiety while maintaining the cognitive benefits of time pressure.

### [Fact Set Sequencing](https://docs.trashcat.learnwith.ai/brain-lifts-fact-set-sequencing.brainlift.txt)

**The curriculum**: Which facts should students practice first? Why does order matter?

Explains how facts are grouped by cognitive strategy (not just numeric similarity), why "doubles" must be mastered before "near-doubles," and how inverse operations (division, subtraction) leverage their primary operation counterparts.

**Key insight**: Random practice is pedagogically inefficient. Strategy-driven sequencing builds anchor facts first, then uses those anchors to automate derived facts.

**Spiky opinion**: We deliberately restrict user control over the curriculum sequence. Students select a skill (multiplication), but the algorithm controls the order of fact sets within that skill.

### [XP System](https://docs.trashcat.learnwith.ai/brain-lifts-xp-system.brainlift.txt)

**The motivation**: How do you reward learning without enabling gaming?

Documents our controversial time-based XP system (1 XP = 1 minute) and explains why it works despite violating every best practice in the field.

**Key insight**: Time-based XP is normally terrible (rewards dawdling, punishes efficiency), but becomes viable when the algorithm controls pacing architecturally. Students can't rush (game-controlled speed), can't farm (completion caps), and can't dawdle (cooldowns exhaust practice).

**Trade-off**: XP becomes a time metric rather than an achievement signal. We accept this because the system is "embarrassingly simple" for third graders to understand, and Progress % separately tracks actual learning.

## The Complete Specification

For implementation-level detail, see the [Complete Learning Science Specification](https://docs.trashcat.learnwith.ai/specs-learning-science.spec.txt). This document covers every Important Technical Decision (ITD) with context, alternatives considered, and rationale.

Fair warning: it's dense. Start with the BrainLifts above unless you need that level of depth.

### Interventions

# TrashCat Interventions

## Intervention Categories & Selection Logic

The system uses **adaptive intervention selection** based on error patterns:

1. **Cross-operation error detected** (e.g., answering 7 for 3×4, the result of 3 + 4) → **Specialist Tools** (deterministic)
2. **Repeated error in same session** (same fact wrong twice) → **Production/High-Effort Tools** (random selection)
3. **First error** → **Scaffolded/Low-Effort Tools** (random selection)
4. **If the student fails an intervention** → **Reuse the same intervention type** (avoid intervention hopping)
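In code terms, the selection logic might look like the minimal TypeScript sketch below. The function shape, the `ErrorContext` fields, and the random pick are illustrative assumptions; the tool names come from the catalog that follows:

```typescript
// Hypothetical sketch of the adaptive intervention selection rules above.
type ToolCategory = "specialist" | "production" | "scaffolded";

interface ErrorContext {
  isCrossOperationError: boolean;       // e.g., added instead of multiplied
  isRepeatedErrorThisSession: boolean;  // same fact wrong twice this session
  previousFailedIntervention?: string;  // intervention the student just failed
}

const TOOLS: Record<ToolCategory, string[]> = {
  specialist: ["TurnWheels.All"],
  production: [
    "DragDrop.FITB",
    "CueFading.ListenMCQFRQ",
    "TimedRepetition.Recall.FRQ",
    "AnswerFirst.MCQ.FRQ",
  ],
  scaffolded: [
    "CueFading.FRQ",
    "TurnWheels.AnswerOnly",
    "Retry.MCQ",
    "DragDrop.AnswerOnly",
  ],
};

function pickIntervention(ctx: ErrorContext): string {
  // Rule 4: no intervention hopping; repeat the failed intervention.
  if (ctx.previousFailedIntervention) return ctx.previousFailedIntervention;
  // Rule 1: diagnosed misconception gets the deterministic specialist tool.
  if (ctx.isCrossOperationError) return TOOLS.specialist[0];
  // Rule 2: repeated error draws from production/high-effort tools;
  // Rule 3: a first error draws from scaffolded/low-effort tools.
  const pool = ctx.isRepeatedErrorThisSession ? TOOLS.production : TOOLS.scaffolded;
  return pool[Math.floor(Math.random() * pool.length)];
}
```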
---

## Scaffolded/Low-Effort Tools

_For simple slips—brief corrections with minimal cognitive load_

### `CueFading.FRQ`

**How it works:** Flash the correct answer (2-3 seconds), hide it, ask the student to recall it from memory.

**Learning science:** Implements **cued recall** transitioning to **free recall**. The brief exposure primes working memory, then the immediate retrieval demand strengthens the memory trace through the **testing effect**. Minimizes production cost—students don't generate the answer, just recognize and retrieve it.

### `TurnWheels.AnswerOnly`

**How it works:** Display the correct answer using scrollable digit wheels/spinners.

**Learning science:** **Passive exposure** with an interactive element. The student manipulates the interface to reveal the answer, creating light **motoric encoding** without production demands. The scrolling action adds mild engagement beyond pure visual presentation.

### `Retry.MCQ`

**How it works:** Show the exact same MCQ question again.

**Learning science:** **Immediate retrieval practice** while the memory trace is hot. The first error eliminates the wrong answer from consideration (negative priming), making the second attempt easier. Tests whether the error was a momentary lapse or a genuine knowledge gap.

### `DragDrop.AnswerOnly`

**How it works:** Show the problem with factors visible (e.g., "7 × 8 = \_\_\_"); the student drags only the answer digits into the blank. Provides draggables for answer choices only (correct + distractors).

**Learning science:** **Production effect lite**—the student generates the answer through assembly rather than pure recall, but with the problem structure visible as support. Lower cognitive load than full equation reconstruction since the factors remain displayed. Digit manipulation creates **motoric memory traces** and forces attention to answer structure (place value, digit order) without computational or relational assembly demands.

---

## Production/High-Effort Tools

_For deeper encoding when the student is stuck—requires generative effort_

### `DragDrop.FITB`

**How it works:** Show the equation with ALL parts as blanks (e.g., "\_\_\_ × \_\_\_ = \_\_\_"); the student drags digits to reconstruct the entire equation. Provides draggables for all equation parts (factors and answer) plus distractors.

**Learning science:** **Deep generative learning**—the student must reconstruct the complete relational structure, not just retrieve the answer. Requires holding the problem in working memory while assembling all three components. The empty equation forces attention to **how all three numbers relate through the operation**. High production demand—physically manipulating all parts creates strong motoric encoding and forces active processing of the mathematical relationship.

### `CueFading.ListenMCQFRQ`

**How it works:** Play the equation as audio, show an MCQ, then ask for a free response.

**Learning science:** **Multi-modal encoding** (auditory + visual) creates multiple retrieval paths. Audio narration supports the **phonological loop** in working memory, helpful for auditory learners. The MCQ→FRQ progression is **scaffolded production**, moving from recognition to generation.

### `TimedRepetition.Recall.FRQ`

**How it works:** Show the answer, let the student copy it, cover it, ask for recall.

**Learning science:** Classic **copy-cover-recall** method from reading fluency research.
Copying creates **dual encoding** (visual perception + motor production). Immediate covered recall is **retrieval practice** with minimal retention interval, strengthening the fresh memory trace. ### `AnswerFirst.MCQ.FRQ` **How it works:** Display full equation with answer, then quiz with MCQ followed by free response. **Learning science:** **Pre-exposure + retrieval practice**. Seeing the complete fact first establishes the memory trace, then the MCQ→FRQ sequence provides **graduated retrieval difficulty**. The full equation display emphasizes the relationship between all three numbers. --- ## Specialist Tools _For diagnosed misconceptions requiring targeted remediation_ ### `TurnWheels.All` **How it works:** Show entire equation using interactive digit wheels for all values. **Learning science:** Reserved for **cross-operation errors** (e.g., adding when should multiply). Interactive manipulation of all equation components forces attention to the **operator** and **structural relationships**. The wheels allow exploration of how changing each element affects others, supporting **relational understanding** rather than rote memorization. Addresses **systematic misconceptions**, not random errors. --- ## Key Design Principles 1. **Interventions are untimed** - removes retrieval pressure, allows encoding-focused practice 2. **No XP during interventions** - architectural penalty for errors (time cost) without punitive feel 3. **Parallel with demotion** - interventions provide support, but fact still demotes in learning stages (maintains data integrity) 4. **Adaptive intensity** - light corrections for slips, heavy production for struggles, diagnostic tools for misconceptions 5. **No intervention hopping** - if student fails an intervention, they get the same type again (consistency aids learning) ### Math Fluency ### Math Fluency BrainLift - **Owner** - Serban Petrescu - **Purpose** - To create a definitive, research-backed knowledge base on the cognitive science and instructional design principles required to build effective, engaging, and scalable math fluency products. - This document is designed to align product, design, and engineering decisions with a shared understanding of what works in learning science, moving from foundational facts to actionable, opinionated strategy. - It is NOT an implementation spec or a product roadmap, but the scientific and pedagogical "constitution" that governs those execution-level documents. ### Experts **Core Proponents & Researchers** - **Siegfried Engelmann** - **Why:** The originator of Direct Instruction (DI). His work provides the foundational model for explicit, structured, mastery-based teaching that prevents knowledge gaps from forming before fluency practice begins. - **Locations:** [National Institute for Direct Instruction (NIFDI)](https://www.nifdi.org/) - **Ogden R. Lindsley** (1922-2004) - **Why:** The pioneer of Precision Teaching. His work on using daily, timed measurements (1-minute timings) and charting (the Standard Celeration Chart) is the basis for data-driven methods that rapidly accelerate fluency. - **Locations:** [Precision Teaching Resources](https://psych.athabascau.ca/open/lindsley/) - **Daniel Willingham** - **Why:** A cognitive scientist who excels at translating research into practical advice for teachers. His work clearly explains the cognitive science behind _why_ practice and automaticity are critical for freeing up working memory for higher-order thinking. 
- **Locations:** [Personal Blog](https://danielwillingham.com/), [X/Twitter](https://x.com/dtwillingham) - **Valerie Shute** - **Why:** A leading researcher in stealth assessment. Her work provides the scientific basis for designing assessments embedded within games that measure student knowledge without causing the anxiety of traditional timed tests. - **Locations:** [Google Scholar](https://scholar.google.com/citations?user=VfKOZ5IAAAAJ&hl=en) **Modern Implementers & Practitioners** - **Jennifer Bay-Williams** - **Why:** A prominent voice in modern math education who focuses on "basic fact fluency that is foundational, not tedious." She provides models for moving from conceptual understanding, through strategy use, to automaticity. - **Locations:** [Corwin Press Author Page](https://www.corwin.com/author/jennifer-m-bay-williams), [X/Twitter](https://x.com/JBayWilliams) - **Sara VanDer Werf** - **Why:** A classroom-focused expert who provides immediately actionable, research-backed guidance on using tools like Curriculum-Based Measurement (CBM), spaced practice, and building effective routines for fluency. - **Locations:** [Personal Blog](https://www.saravanderwerf.com/), [X/Twitter](https://x.com/saravdwerf) - **Greg Tang** - **Why:** An expert in creating visually-driven games, puzzles, and activities that build number sense and accelerate fact recall, demonstrating how to make practice engaging and effective. - **Locations:** [TangMath.com](https://tangmath.com/), [X/Twitter](https://x.com/gregtangmath) - **Sal Khan** - **Why:** As the founder of Khan Academy, he is an expert in the large-scale implementation of technology-based learning systems that blend conceptual instruction with practice. His work provides insights into motivation and engagement at scale. - **Locations:** [Khan Academy](https://www.khanacademy.org/), [X/Twitter](https://x.com/khanacademy) - **Michael Orosco** - **Why:** An expert in the cognitive science of math learning, particularly for students with learning difficulties. His 2025 research provides a direct link between specific instructional interventions (like using familiar contexts and worked examples), their impact on reducing working memory load, and subsequent improvements in math performance. - **Locations:** [University of Kansas](https://epsy.ku.edu/people/dr-michael-j-orosco), [Google Scholar](https://scholar.google.com/citations?user=UiKOJ90AAAAJ&hl=en) **Counter-Expert** - **Jo Boaler** - **Why:** A professor of mathematics education at Stanford and a leading voice for an alternative approach. She argues that an over-emphasis on timed tests and rote memorization can cause math anxiety and undermine deep, flexible, and creative mathematical thinking. Her work provides the strongest research-based counter-argument to a fluency-first model. - **Locations:** [youcubed.org](https://www.youcubed.org/), [X/Twitter](https://x.com/joboaler) ### DOK4: Spiky Points of View - **The most effective way to build fluency is to make time pressure concrete. An endless runner game where game speed is the timer is superior to a simple clock.** - We believe that students build speed most effectively when they _feel_ the pressure. In our model, the character's running speed is the timer—it is entirely game-controlled and accelerates as students become more accurate. This makes the need for faster recall a tangible, in-the-moment reality, which is more motivating and effective than watching a number count down. 
- _Derived from Insight:_ An effective learning system's primary goal is to keep students in the 80-95% accuracy "sweet spot." - **Why it's controversial:** The standard approach is to use abstract UI elements like countdown timers. The opposing view is that integrating time pressure into a core game mechanic could be distracting or add extraneous cognitive load. We believe the motivational gain from making the timer a concrete part of the game world outweighs this risk. - **The algorithm knows best. The system must control the curriculum to ensure interleaved practice and spaced repetition.** - Many apps allow users to practice in narrow silos (e.g., "just the x7 table"). We believe this is suboptimal. To ensure facts are constantly revisited at expanding intervals, our system operates on the full skill domain (e.g., all of multiplication), algorithmically mixing new facts with crucial long-term review of older ones. The user's job is to play; our job is to manage their learning path. - _Derived from Insights:_ Short, daily practice sessions are superior...; The most efficient path to fluency is to diagnose and target a student's specific weaknesses... - **Why it's controversial:** The mainstream approach in many consumer apps is to maximize user choice and control. Our view is that for effective learning based on cognitive science principles like spaced repetition, taking away that choice is a necessary, pedagogically sound trade-off. - **Learning and assessment must be separate, explicit modes. A daily diagnostic "Speedrun" followed by targeted "Practice" is superior to a single, blended mode.** - While many adaptive systems try to invisibly mix assessment and practice, we believe this blurs the student's purpose. Our model is explicit: first, a low-stakes diagnostic to show us what you know today. Then, a separate practice mode to work on your specific weaknesses. This clarity of purpose reduces cognitive load and makes progress feel more tangible. - _Derived from Insight:_ The most efficient path to fluency is to diagnose and target a student's specific weaknesses... - **Why it's controversial:** The prevailing trend in "intelligent tutoring" is to create a single, seamless, adaptive experience. We believe this seamlessness comes at the cost of clarity. We are betting that making the seams visible—separating the "test" from the "training"—is ultimately more effective and motivating for the learner. - **Skip untimed practice stages. Start directly with timed practice and use untimed interventions only when data proves a gap exists.** - Because our product assumes students have received prior conceptual instruction, our focus is on building automaticity. Therefore, we start immediately with timed practice to apply retrieval pressure. We handle conceptual gaps not by default, but by exception: when performance data (e.g., repeated errors) provides clear evidence of a misunderstanding, we then provide a targeted, untimed intervention. - _Derived from Insights:_ If a student repeatedly fails a fact, the problem is likely a weak memory trace...; Mastery gates—requiring high accuracy and speed... are critical. - **Why it's controversial:** Many educators and researchers argue strongly for an untimed initial learning phase to reduce anxiety and build confidence. Our approach is a calculated bet based on our product's specific scope: we are not the primary instructional tool.
We are a targeted practice tool, and in that context, we believe our "timed-first, intervene-when-necessary" model is more efficient. - **Rewards must incentivize progress on new material, not grinding on known facts.** - Many learning games reward any correct answer, which encourages students to stick to easy problems they've already mastered. We believe this is a flawed incentive structure. Our system awards valuable XP only for the one-time achievement of bringing a _new_ set of facts to a state of fluency. Reviewing mastered content is critical for retention, but it is its own reward. - _Derived from Insight:_ Giving students a sense of agency... is a low-cost, high-impact method for boosting engagement. - **Why it's controversial:** The common wisdom in game design is to reward all positive actions to maximize engagement. Our model is a bet that students are more motivated by the genuine accomplishment of mastering difficult new material than by earning points for trivial work. It prioritizes long-term learning goals over short-term engagement metrics. - **Engineer out exploits by making them physically impossible. Pace is dictated by the game world, not player reaction time.** - Many learning games are vulnerable to students rapidly guessing to get through content faster. We eliminate this exploit by design. Answers are physical objects on the track that the player must run to. Since running speed is 100% game-controlled, there is no way for a student to "answer faster" to speed up the game. The minimum time to answer is dictated by the physics of the game, not the student's input speed. This enforces a deliberate pace and ensures our performance data is a true signal of knowledge. - _Derived from Insight:_ An effective learning system's primary goal is to keep students in the 80-95% accuracy "sweet spot." - **Why it's controversial:** Conventional game design often links player speed to rewards, allowing skilled players to advance more quickly. Our model intentionally decouples player input speed from game progression speed. We are betting that the pedagogical benefit of high-fidelity data and an enforced, thoughtful pace is more valuable than a traditional "race-to-the-finish" reward mechanic. ### DOK3: Insights - **An effective learning system's primary goal is to keep students in the 80-95% accuracy "sweet spot," as this is the zone of maximum learning velocity.** - An adaptive engine that dynamically adjusts difficulty to maintain this accuracy rate will outperform a static one. When accuracy exceeds 95%, the system should increase the challenge (introduce new facts, shorten timers); when it drops below 80%, it should reduce the challenge (focus on known facts, provide scaffolding); a sketch after this list makes the rule concrete. - _Grounded in Facts:_ Zone of Proximal Development, Working Memory Limits. - **Mastery gates—requiring high accuracy and speed on one set of skills before unlocking the next—are critical for preventing cumulative knowledge gaps.** - Students should not be allowed to race ahead with a shaky foundation. Enforcing a clear, high bar for proficiency (e.g., 95% accuracy at 30 CQPM) ensures that every layer of knowledge is solid before the next is built upon it. - _Grounded in Facts:_ Mastery Learning, Learning Goals & Benchmarks.
- **The most efficient path to fluency is to diagnose and target a student's specific weaknesses, not to practice all facts uniformly.** - An algorithm that uses error analysis to identify a student's 3-5 most fragile facts and focuses practice on them will produce faster gains than one that simply presents random problems from a large pool. - _Grounded in Facts:_ Practice Techniques (Targeted Practice). - **Every error is a critical learning opportunity that must be addressed immediately with corrective feedback and a forced redo.** - To prevent errors from being encoded into long-term memory, the system must (1) provide the correct answer immediately, (2) explain it if necessary, and (3) require the student to correctly answer the same question before moving on. Delaying this feedback is significantly less effective. - _Grounded in Facts:_ Practice Techniques (Error Correction), Implementation Pitfalls (Delayed Feedback). - **Short, daily practice sessions are superior to longer, infrequent ones for building durable, long-term memory.** - The total weekly dosage of practice is the key driver of gains. This dosage is most effectively delivered in short, concentrated bursts (10-15 minutes) that leverage the principles of spaced repetition, rather than in marathon sessions that lead to cognitive fatigue. - _Grounded in Facts:_ Practice Techniques (Session Length), Core Instructional Models (Total Dosage), Cognitive Science Principles (Spaced Repetition). - **If a student repeatedly fails a fact, the problem is likely a weak memory trace, not a deep conceptual gap. The system must escalate from simple cues to production-based practice.** - After 2-3 consecutive errors on the same fact, a simple cue or reminder has proven insufficient. The system must escalate the cognitive demand by switching from recognition-based tasks (like MCQ) to production-based interventions (like `CopyCoverRecall` or `CueNoCue`) that force active, effortful recall to build a more durable memory. True conceptual interventions (like `DigitalManipulative`) are reserved for specific, diagnosable errors like cross-operation confusion. - _Grounded in Facts:_ Bridging Conceptual Gaps, Practice Techniques (Targeted Practice). - **Giving students a sense of agency, even through small cosmetic choices, is a low-cost, high-impact method for boosting engagement.** - Intrinsic motivation is significantly increased when students feel a sense of control. Allowing them to choose an avatar, a background theme, or a daily goal provides this agency without compromising the core pedagogical loop, leading to increased time-on-task. - _Grounded in Facts:_ Motivation & Student Agency. ### DOK1-2: Facts #### Aligned **General Concepts & Definitions of Fluency** - Math fluency is the ability to solve problems accurately, efficiently, and flexibly. - Automaticity is the component of fluency where basic facts are recalled instantly (typically in 2 seconds or less) without conscious calculation. - The primary goal of automaticity is to free up working memory. When basic calculations are effortless, a student can devote their full cognitive resources to understanding complex, multi-step problems without interrupting their flow of thought. - Strong math fact fluency is a key predictor of student confidence and positive mathematical identity. - Difficulties with fact retrieval are often a root cause of broader math learning difficulties. 
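A minimal sketch of the first insight in this list, the 80-95% "sweet spot" controller. `SessionWindow`, `adjustDifficulty`, and the adjustment labels are hypothetical illustrations, not Fluency API names.

```typescript
// Hypothetical difficulty controller for the 80-95% accuracy "sweet spot".
interface SessionWindow {
  correct: number;
  attempted: number;
}

type Adjustment = "increase-challenge" | "hold" | "reduce-challenge";

function adjustDifficulty(window: SessionWindow): Adjustment {
  if (window.attempted === 0) return "hold";
  const accuracy = window.correct / window.attempted;
  // Above 95%: too easy. Introduce new facts, shorten timers.
  if (accuracy > 0.95) return "increase-challenge";
  // Below 80%: too hard. Fall back to known facts, add scaffolding.
  if (accuracy < 0.8) return "reduce-challenge";
  // 80-95%: the zone of maximum learning velocity. Stay the course.
  return "hold";
}
```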
### DOK1-2: Facts #### Aligned **General Concepts & Definitions of Fluency** - Math fluency is the ability to solve problems accurately, efficiently, and flexibly. - Automaticity is the component of fluency where basic facts are recalled instantly (typically in 2 seconds or less) without conscious calculation. - The primary goal of automaticity is to free up working memory. When basic calculations are effortless, a student can devote their full cognitive resources to understanding complex, multi-step problems without interrupting their flow of thought. - Strong math fact fluency is a key predictor of student confidence and positive mathematical identity. - Difficulties with fact retrieval are often a root cause of broader math learning difficulties. - Fluency development follows a clear progression: first comes **Conceptual Understanding** (knowing what numbers and operations mean), which leads to **Procedural Fluency** (using strategies to find an answer), which is finally trained into **Automaticity** (instant recall). **Mastery** is achieved when this fluency can be transferred and applied to novel problems and contexts. - Meaningful memorization, which connects facts to the underlying concepts, is more effective and durable than rote memorization. **Learning Goals & Benchmarks** - A standard benchmark for automaticity is 30–40 correct questions per minute (CQPM) with at least 95% accuracy. - Reaching these benchmarks strongly predicts better performance on more complex math tasks. - Timed assessments are a necessary tool for measuring the speed and accuracy components of automaticity. - Foundational fluency is a prerequisite for success in higher-level math like fractions and algebra. - Common Core State Standards define specific grade-level fluency endpoints: - `CCSS.MATH.CONTENT.1.OA.C.6`: Fluently add and subtract within 20. - `CCSS.MATH.CONTENT.3.OA.C.7`: Fluently multiply and divide within 100. - `CCSS.MATH.CONTENT.5.NBT.B.5`: Fluently multiply multi-digit whole numbers. **Cognitive Science Principles** - Human working memory is severely limited. A student can typically only attend to 3-5 new, unmastered pieces of information at once before cognitive overload occurs. Automatic fact recall bypasses this bottleneck. - Retrieval practice (actively recalling information from memory) is significantly more effective for building long-term memory than passively reviewing material. - **Spaced repetition** with **expanding intervals** (e.g., 1 day, 3 days, 1 week) between practice sessions is essential for durable, long-term retention. Practicing at fixed, short intervals (e.g., every day) leads to rapid forgetting (a scheduling sketch follows this list). - The **Zone of Proximal Development** describes the optimal level of difficulty for learning. For fluency tasks, this is often cited as the "85% Rule": if accuracy is above 95%, the task is too easy and little learning occurs; if accuracy drops below 80-85%, the task is too hard, causing frustration and reducing the rate of learning. - Interleaving (mixing different types of problems) improves a student's ability to distinguish between concepts and enhances long-term retention compared to practicing one skill at a time (blocked practice).
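To make the expanding-interval idea concrete, here is a minimal scheduling sketch. The 1-day/3-day/1-week ladder comes from the example intervals above; `FactReviewState`, `scheduleNextReview`, and the reset-on-error rule are assumptions for illustration, not the Fluency API.

```typescript
// Hypothetical expanding-interval scheduler; real systems tune these values.
const INTERVALS_DAYS = [1, 3, 7]; // the interval expands with each successful review

interface FactReviewState {
  intervalIndex: number; // position on the expanding ladder
  nextReview: Date;
}

function scheduleNextReview(state: FactReviewState, wasCorrect: boolean, now: Date): FactReviewState {
  // Success expands the interval; an error resets to the shortest one.
  const intervalIndex = wasCorrect
    ? Math.min(state.intervalIndex + 1, INTERVALS_DAYS.length - 1)
    : 0;
  const nextReview = new Date(now.getTime());
  nextReview.setDate(nextReview.getDate() + INTERVALS_DAYS[intervalIndex]);
  return { intervalIndex, nextReview };
}
```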
**The Science of Working Memory in Math** - **Working memory limitations are a primary driver of math difficulties.** The different components of working memory (phonological, visuospatial, and central executive) collectively explain a very large portion of individual differences in children's math performance—as much as 56%. This quantifies the importance of the concept and strengthens the rationale for freeing up cognitive resources. - **Instruction can be designed to deliberately reduce working memory load.** Research highlights two evidence-based strategies for this: 1) Using familiar contexts (e.g., a local store) requires less working memory than unfamiliar ones, and 2) Using worked examples (showing a fully solved problem) serves as a blueprint that prevents students from overloading working memory searching for a correct procedure. - **Math anxiety directly consumes working memory resources.** The link between anxiety and poor performance isn't just about feelings. Attentional Control Theory, supported by meta-analyses, shows that anxious, threat-related thoughts compete for the exact same limited working memory capacity that is needed to perform calculations. This creates a cognitive—not just emotional—bottleneck. **Core Instructional Models** - **Direct Instruction** is a highly effective, teacher-led model that uses explicit, systematic, and scaffolded teaching. The instructor shows the student exactly what to do, guides them through structured practice, and then has them practice independently until the skill is solid. - **Mastery Learning** requires students to achieve a high level of proficiency (e.g., 90%+) on one topic before being allowed to advance to the next. This model is effective at preventing the cumulative knowledge gaps that can form in traditionally-paced classrooms. - Combining explicit strategy instruction with timed practice improves fluency, transfer, and long-term retention compared to timed drills alone. - The **total dosage** of practice (total time-on-task over a period) is a major predictor of fluency gains, more so than the length or frequency of individual sessions. A typical effective dosage is 60-80 minutes per week. **Practice Techniques** - Practice sessions are most effective when they are short and frequent (e.g., 10-15 minutes daily). - Practice is significantly more efficient when it adaptively targets a student's specific, diagnosed weaknesses rather than covering all material uniformly. - Gradually fading scaffolds and hints (e.g., visual aids, worked examples) is a proven technique for transitioning students from guided practice to independent recall. - Forcing a student to correct an error immediately after making it is an effective technique for preventing the error from being encoded in memory. - Timed practice is necessary to build the speed component of fluency, but it should only be introduced after a student has demonstrated high accuracy. **Assessment & Progress Monitoring** - Timed probes (e.g., 1-3 minute assessments) are standard tools for measuring automaticity. - Separating diagnostic assessment from fluency-building practice provides clarity for the student and allows for more targeted interventions. - Stealth assessment, which embeds measurement invisibly within gameplay, can reduce test anxiety and provide continuous, rich data on student performance. - Visible progress indicators (e.g., dashboards, progress charts) are strong motivators and help students self-monitor their learning. - Without continued maintenance practice, fluency declines over time (e.g., a 15% drop in CQPM after 3 months). **Bridging Conceptual Gaps** - When students make repeated errors in procedural practice, it often signals an underlying conceptual misunderstanding. - For these cases, visual models are a well-documented, effective tool for remediation. Representing addition/subtraction on a number line or multiplication/division as an array helps connect abstract symbols to concrete quantities, correcting the root misconception. **Motivation & Student Agency** - A body of research in self-determination theory shows that providing students with simple forms of choice is a proven driver of intrinsic motivation. Even minor choices, such as selecting a game avatar or setting a personal goal, can significantly increase engagement and effort. - This sense of agency helps shift the student's mindset from being a passive recipient of instruction to an active participant in their own learning.
**Transfer of Skills** - Automaticity in basic facts is a primary predictor of success in complex, multi-step problem-solving. The cognitive load freed up by not having to calculate simple facts is directly reallocated to analyzing the structure of a complex problem. - A lack of basic fact fluency is a significant bottleneck that prevents students from succeeding in higher-order tasks like word problems and algebraic reasoning, even when they understand the abstract concepts. **Implementation Pitfalls** - Insufficient total practice time is a primary cause of failure in fluency-building programs. - Low-fidelity implementation of research-backed models (e.g., shortening sessions, not following procedures) negates their effectiveness. - Poor user experience in digital tools (e.g., confusing interfaces, technical glitches) leads to frustration and disengagement. - Delayed or unclear feedback allows errors to become ingrained. When a student makes a mistake, the brain's memory reconsolidation process can strengthen the incorrect pathway if it is not immediately and clearly corrected. - Student anti-patterns like rapid guessing or hint abuse can invalidate assessment data and hide critical learning gaps. - Over-reliance on multiple-choice questions can create an illusion of fluency. This is because recognition (picking the right answer) is cognitively less demanding than production (generating the answer independently). - Poorly designed incentive systems can undermine learning. If rewards (like points or badges) can be earned more easily through low-effort behaviors (like guessing) than through genuine effort, students will naturally optimize for the reward, not the learning. #### Counter - Some educational models prioritize **strategic flexibility** over the speed of automaticity. They argue that deep understanding involves knowing multiple ways to solve a problem and choosing the best one for the context, a skill that pure speed drills do not develop. - Timed tests can induce **math anxiety**. The time pressure can trigger a threat response in the brain, flooding working memory with anxious thoughts and inhibiting the very cognitive functions needed for mathematical reasoning, leading to poor performance and a negative cycle of avoidance. - Extrinsic rewards, especially competitive ones like **leaderboards**, can decrease intrinsic motivation. Research shows they primarily motivate students who are already high-achievers and can demotivate the majority, who may feel perpetually ranked at the bottom. - Mandated, uniform drills can be less effective than practice that is more **voluntary or integrated into authentic, problem-solving activities**. When practice feels disconnected from a meaningful goal, engagement can drop, reducing time-on-task. ### Fact Set Sequencing ### Fact Set Sequencing BrainLift - **Owner** - Serban Petrescu - **Purpose** - To establish a definitive, research-backed knowledge base on the optimal sequencing and grouping of arithmetic facts for building mathematical fluency. - This document will explain the cognitive science and pedagogical rationale behind _why_ facts are introduced in a specific order, moving from foundational principles to an actionable, opinionated strategy for curriculum design. - It will serve as the governing "constitution" for how we structure learning paths in our products. ### Experts **Aligned** - **Jennifer Bay-Williams** - **Why:** A leading voice in modern math education who focuses on "basic fact fluency that is foundational, not tedious." 
Her work provides models for moving from conceptual understanding, through strategy use, to automaticity, directly informing the pedagogical basis for strategy-driven fact grouping. - **Locations:** [University of Louisville](https://louisville.edu/education/faculty/bay-williams), [X/Twitter](https://x.com/JBayWilliams) - **Arthur Baroody** - **Why:** A prominent researcher in the development of children's mathematical thinking. His work emphasizes a "phases of learning" model for basic facts, which outlines a progression from counting strategies, to reasoning strategies, to mastery (automatic recall). This provides a cognitive science framework for why sequencing matters. - **Locations:** [University of Illinois](https://education.illinois.edu/profile/art-baroody), [Google Scholar](https://scholar.google.com/citations?user=bVe1ZeQAAAAJ&hl=en) - **Daniel Ansari** - **Why:** A cognitive neuroscientist who studies the brain basis of numerical and mathematical skills. His research helps explain the cognitive mechanisms underlying how children develop number sense and automaticity, providing a neurological basis for curriculum design. - **Locations:** [Western University](https://www.edu.uwo.ca/about/faculty-profiles/daniel-ansari/index.html), [Numerical Cognition Lab](http://www.numericalcognition.org/) - **Nicole M. McNeil** - **Why:** The lead author of a comprehensive 2025 paper, "What the Science of Learning Teaches Us About Arithmetic Fluency." Her work synthesizes the current state of research on sequencing, retrieval practice, and explicit instruction, confirming that deliberately organized, strategy-driven practice is superior to random practice. - **Locations:** [University of Notre Dame](https://psychology.nd.edu/people/nicole-mcneil/), [Google Scholar](https://scholar.google.com/citations?user=XtA968IAAAAJ&hl=en&oi=ao) **Counter** - **Jo Boaler** - **Why:** A professor of mathematics education at Stanford who argues that an over-emphasis on timed, rote memorization can cause math anxiety. Her work provides the strongest research-based counter-argument, emphasizing deep, flexible number sense over speed, which forces us to justify our focus on automaticity. - **Locations:** [youcubed.org](https://www.youcubed.org/), [X/Twitter](https://x.com/joboaler) ### DOK4: Spiky Points of View - **To ensure the most effective path to automaticity, we deliberately restrict user control over the intra-skill curriculum sequence.** - **Why it's controversial:** The mainstream approach in many educational and consumer apps is to maximize user agency, allowing students or teachers to select specific facts or sets to practice (e.g., "just the x7 table"). This is believed to increase engagement and honor learner choice. - **Our Bet:** While we allow users to select a skill domain (e.g., Multiplication), we are betting that the massive efficiency gains from a perfectly sequenced, machine-controlled curriculum within that skill will create more long-term motivation through genuine progress than the fleeting engagement of intra-skill choice. Our facts show that derived strategies depend on automated anchors; allowing a user to practice `x7`'s before `x2`'s and `x5`'s are automatic encourages brittle rote memorization, which is pedagogically unsound and undermines the entire strategy-driven model. 
- **A fact is automated once via its simplest path, then maintained via spaced repetition; we do not re-teach known facts with new strategies.** - **Why it's controversial:** A common pedagogical view is to revisit facts with multiple strategies to build flexible thinking (e.g., re-teaching `2+8` as a "making ten" problem after it was already learned via "counting on"). This is thought to deepen conceptual understanding. - **Our Bet:** We are betting that for the specific goal of automaticity, this approach is inefficient. A fact should be automated once via the earliest, simplest applicable strategy. After that, it enters a pool of mastered facts to be reviewed through spaced repetition to ensure long-term retention. We do not move it back to an earlier strategic phase to re-practice it with a new strategy. This maximizes efficiency and accelerates the path to 100% automaticity across all facts. While this may trade some strategic flexibility for speed, that trade-off is acceptable for the primary goal of achieving automaticity. - **To maintain student motivation, we will enforce a maximum size on fact sets; pedagogically-pure 'catch-all' sets that become too large must be broken down into smaller, digestible chunks.** - **Why it's controversial:** A pure instructional designer might argue that all remaining "Think-Addition" facts belong to the same strategic category. Splitting them is an artificial distinction based on surface-level characteristics rather than the underlying cognitive strategy. - **Our Bet:** We are betting that the motivational benefit of completing several small, manageable "levels" (e.g., 10-15 facts) far outweighs the cost of this pedagogical impurity. A student who sees they have 40+ facts left in one giant "All Else" set may feel overwhelmed, whereas a student who sees a few smaller, achievable sets feels a constant sense of progress. The faster feedback loop of completion is the stronger motivator. ### DOK3: Insights - **Strategy-driven sequencing is the only effective path to automaticity.** Random practice is inefficient. The most effective path to automaticity is to sequence practice in a deliberate order that mirrors the logic of derived strategies. By practicing "doubles" before "near-doubles," the brain first strengthens and automates the anchor facts, making the derived strategy easier and faster to apply. Repeated, timed execution of the derived strategy is what eventually "chunks" the process into a single, automatic retrieval. - **Inverse operations must be automated as an extension, not a separate topic.** Subtraction and division automaticity are not developed in isolation. They are built almost exclusively by leveraging the inverse relationship with addition and multiplication ("Think-Addition/Multiplication"). Therefore, to be effective, practice for an inverse operation fact set (e.g., `÷2`'s) must be tightly coupled with and sequenced immediately after the mastery of the corresponding primary operation fact set (e.g., `x2`'s). - **Procedurally simple 'rule-based' facts should be automated first to build momentum.** For the specific goal of automaticity, a rule's procedural simplicity is more important than its conceptual complexity. Facts governed by simple, consistent rules (e.g., `n+0`, `nx1`, `nx0`) should be front-loaded in the curriculum. While the underlying concepts (like zero) can be abstract, the procedures are trivial to execute. 
Automating this large block of facts quickly provides students with early success, builds momentum, and significantly reduces the total cognitive load of un-mastered facts. - **'Doubles' are the foundational anchor facts and must be mastered before their derivatives.** The "Doubles" facts in addition (`7+7`) and their multiplication equivalent (`x2`) are the most critical strategic pillar in the curriculum. They are memorable, pattern-based, and serve as the cognitive anchor for a large family of more difficult derived facts (`near-doubles`, `x4`, `x8`). A curriculum must therefore ensure that these anchor facts are practiced to a high degree of automaticity before introducing practice for the facts that are derived from them. - **Subtraction practice must be sequenced to automate two distinct strategies: "Counting Back" for small subtrahends and "Think-Addition" for larger ones.** Because subtraction strategies are asymmetrical, a one-size-fits-all approach is inefficient. Practice must be structured to first automate the simple, procedural "Counting Back" strategy (`n-1`, `n-2`). For all other facts, practice must be designed to explicitly automate the more powerful "Think-Addition" strategy, leveraging the existing network of automated addition facts. - **Fact sets must be grouped by a single, independent strategy and sequenced by dependency.** To minimize cognitive load, each fact set should represent a single, discrete cognitive strategy (e.g., 'doubles', 'making ten'). These sets should then be sequenced based on their strategic dependencies. Anchor fact sets (like `x2` and `x5`) must be automated before the derived sets that rely on them (like `x4` and `x6`). Once a critical mass of anchor facts and strategies is automated, the final remaining facts are grouped to encourage the rapid selection between already-mastered strategies. - **Cognitive interference between similar-looking facts can be minimized through deliberate set design.** In addition to grouping by strategy, research suggests avoiding the inclusion of facts with high surface-level similarity in the same practice set (e.g., practicing `6x7=42` and `6x8=48` together). This reduces the risk of "interference," where the memory of one fact disrupts the retrieval of a similar one, thereby accelerating the path to automation for both. - **Timing is a tool for consolidation, not a performance evaluation; it must be introduced carefully to mitigate anxiety.** While timed practice is the necessary catalyst to force the leap from strategy to retrieval, its primary risk is affective (creating math anxiety), not cognitive. To mitigate this, timing must be introduced only _after_ a student demonstrates high accuracy with a strategy. All feedback related to timing must focus on progress and mastery, not on speed as a measure of worth. - **The Commutative Property must be explicitly leveraged to maximize practice efficiency.** A curriculum that treats `3x7` and `7x3` as two distinct facts to be learned is inefficient. An effective system must be designed to explicitly reinforce the Commutative Property, ensuring that practice on one form of the fact (e.g., `3x7`) contributes directly to the automaticity of the other (`7x3`). This effectively halves the required practice time for non-square facts. - **Timed practice is the mechanism that forces the leap from strategy to retrieval.** To prevent "strategic entrenchment"—where students simply get faster at calculation instead of automating retrieval—practice must be strictly timed. 
The time limit must be short enough to make multi-step derived strategies non-viable. This controlled time pressure is the essential catalyst that forces the brain to abandon the slower, effortful Phase 2 strategy and build the fast, direct neural pathway required for Phase 3 automatic retrieval. ### DOK1-2: Facts #### The Cognitive Path to Automaticity The primary goal of fluency practice is not to teach concepts, but to convert slow, effortful, and working-memory-intensive strategies into fast, effortless, and automatic retrieval from long-term memory. Cognitive science shows this happens in a predictable, three-phase developmental sequence. Our product's role is to provide structured practice that accelerates students from Phase 2 to Phase 3. - **Phase 1: Procedural Strategies (Counting & Modeling).** This is the starting point for a child's understanding of an operation. They solve problems by physically or mentally modeling the action, for example by counting on their fingers or drawing groups. This phase is accurate but extremely slow and cognitively demanding. _We assume students have largely passed this phase before using our product._ - **Phase 2: Derived Fact Strategies (Reasoning & Relating).** In this critical intermediate phase, students use a toolkit of reasoning strategies to solve problems. They leverage known facts to solve unknown ones (e.g., solving `6+7` by reasoning from the known double `6+6=12`). This is faster than counting but still requires conscious effort and consumes working memory. _Our curriculum should be designed to take students who are reliant on these strategies and guide them to the final phase._ - **Phase 3: Automatic Retrieval (Automaticity).** This is the end goal. The fact is no longer "solved" or "derived"; the answer is retrieved directly from a well-consolidated, long-term memory network, typically in under a second. This process is fast, effortless, and consumes virtually no working memory, freeing the student's cognitive resources to focus on higher-order problem-solving. - **The Commutative Property Halves the Learning Load.** A foundational principle of arithmetic is the Commutative Property (`a+b = b+a` and `axb = bxa`). By having students understand that `3x8` is the same as `8x3`, the number of unique multiplication and addition facts a student must automate is nearly cut in half. - **The Risk of "Strategic Entrenchment".** A primary risk in fluency practice is that students can become extremely fast and efficient at executing _derived fact strategies_ (Phase 2) without ever making the cognitive leap to _automatic retrieval_ (Phase 3). For example, a student might get very fast at solving `6+7` by thinking `(6+6)+1`, but they are still performing a multi-step calculation. This "strategic entrenchment" occurs when practice lacks sufficient time pressure to make the slower, procedural strategy non-viable. - **Subtraction Involves Two Distinct and Asymmetrical Strategies.** Unlike addition and multiplication, subtraction is not commutative. Research shows that students naturally develop two different strategies based on the relationship between the minuend and subtrahend: - **"Counting Back" (or "Take Away"):** Used when subtracting a small number (e.g., `10-2`). The student starts at the whole and counts backward. This is a procedural, counting-based strategy. - **"Counting Up" (or "Think-Addition"):** Used when the numbers are close together (e.g., `10-8`). 
The student starts at the part and counts up to the whole, reframing the problem as a missing addend problem (`8+?=10`). This is a more advanced, relational strategy. #### Optimal Sequencing for Automating Strategies - **Addition: From Counting to Retrieving** - **Automating `+0` (The Identity Rule):** Practice begins by automating the Additive Identity Property. While the concept of zero is abstract, the rule ("the number stays the same") is procedurally the simplest in arithmetic. Its consistency allows for rapid automation, even if deep conceptual understanding is still developing. - **Automating `+1` and `+2` (The "Counting On" Strategy):** Practice next focuses on making the foundational strategy of "Counting On" so fast it becomes automatic. This leverages a child's most basic, concrete understanding of the number line. - **Automating the "Doubles" as Anchors:** The `Doubles` facts (e.g., `7+7`) are practiced next. Because they are memorable and pattern-based, they are the easiest set of non-counting facts to commit to long-term memory. They become the primary "anchor facts" from which other facts are derived. - **Automating "Near-Doubles" via the Doubles Anchor:** Practice then moves to `Near-Doubles` (e.g., `7+8`). The practice is explicitly designed to leverage the now-automatized "Doubles" facts. By repeatedly and quickly executing the procedure `(7+7)+1`, the brain eventually "chunks" this two-step process into a single, automatic retrieval: `7+8=15`. - **Automating "Making Ten":** Practice for this strategy is sequenced later as it's more cognitively demanding. It requires repeatedly practicing the decomposition of a fact (e.g., `9+7` -> `9+1+6` -> `10+6`) until the top-level association (`9+7=16`) is stored and retrieved as a single unit. - **Automating `+9` and `+10` (The Place Value Strategies):** The `+10` facts are practiced to automaticity based on their simple place-value pattern. Then, the `+9` facts are practiced using the efficient derived strategy of "add 10, subtract 1." - **Multiplication: From Patterns to Properties** - **Automating the "Rules" and "Easy Patterns" (`x0, x1, x10, x5`):** Practice starts here because these facts either follow simple, memorable rules (Zero and Identity properties) or leverage highly familiar patterns (skip-counting by 10s and 5s). Automating this large block of foundational facts first provides the student with early success and a strong base of known facts. - **Automating the "Doubles" Family (`x2`, `x4`, `x8`):** The `x2` facts are practiced first in this group to build a bridge from the known addition "Doubles." Practice then proceeds to `x4` and `x8`. The goal of this sequenced practice is to make the derived strategies ("double-double" for x4, "double-double-double" for x8) so rapid that the original fact (e.g., `8x7`) becomes a single, retrieved memory, rather than a multi-step calculation. - **Automating with the Distributive Property (`x3`, `x6`, `x9`):** Practice for these final sets leans heavily on automating derived strategies that use the distributive property. `x3` is practiced as "double the number plus one more group" (`2n + n`). `x6` is practiced as "five groups plus one more group" (`5n + n`). `x9` is practiced as "ten groups minus one group" (`10n - n`). Repeatedly executing these specific, efficient strategies under time pressure is what consolidates the multi-step process into a single, automatic retrieval for each fact. 
- **Division: From Sharing to Thinking Multiplication** - **Prerequisite: Multiplication Automaticity.** Division automaticity is almost exclusively built by leveraging the inverse relationship between multiplication and division. Research and pedagogical consensus show that students achieve fluency by reframing division problems as "missing factor" multiplication problems. Therefore, the practice sequence for division must directly mirror and follow the sequence for multiplication. - **Automating `÷1` and "Divide by Itself":** Practice begins by automating two simple rules: dividing by 1 (Identity Property, e.g., `8÷1=8`) and dividing a number by itself (e.g., `8÷8=1`). - **Automating via "Think-Multiplication":** The core of division fluency is converting division problems into missing factor problems (e.g., `56÷8` becomes `8x?=56`). The practice sequence is therefore designed to automate this conversion, leveraging the fact families that have already been practiced to automaticity during multiplication. The sequence of practice (by `÷10`, `÷5`, `÷2`, etc.) directly follows the multiplication sequence to ensure the anchor facts are in place. For example, a student masters `x2` facts, then immediately practices `÷2` facts to automate the inverse relationship. - **Subtraction: From Counting Back to Thinking Addition** - **Prerequisite: Addition Automaticity.** Research strongly indicates that students learn very few subtraction facts independently. Automaticity with subtraction is achieved by leveraging the inverse relationship between addition and subtraction. Therefore, the practice sequence for subtraction must mirror and follow the sequence for addition. - **Automating `-0, -1, -2` (The "Counting Back" Strategy):** Practice begins by automating the inverse of "Counting On." This is the most basic procedural strategy for subtraction and is practiced first to build initial confidence and success. - **Automating "Minus Itself" and "Neighbors":** Practice on `n-n` (e.g., `8-8`) automates a simple rule. Practicing "Neighbors" (e.g., `9-8`) automates the "one difference" rule, which is a simple extension of counting back. - **Automating via "Think-Addition":** The core of subtraction fluency is converting subtraction problems into missing addend problems (e.g., `15-7` becomes `7+?=15`). The practice sequence is therefore designed to automate this conversion. - **Automating with "Making Ten":** This is a powerful derived strategy for subtraction. `14-9` is practiced by thinking "first take away 4 to get to 10, then take away 5 more." Repeatedly practicing this decomposition under time pressure automates the top-level fact `14-9=5`. - **Automating with Place Value (`-10`, `-9`, `-8`):** Practice for `-10` reinforces place value. Practice for `-9` and `-8` automates a derived strategy that leverages the `-10` anchor (e.g., to solve `17-9`, think `17-10` is 7, so `17-9` must be 8). ### Addendum: Recommended Fact Set Sequences #### Multiplication | Order | Fact Set | Rationale (Strategy to Automate) | | :---- | :------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------- | | 1 | **x0** | **The Zero Property.** Automate the `nx0=0` rule. It is procedurally the simplest rule and provides a foundational "win." | | 2 | **x1** | **The Identity Property.** Automate the `nx1=n` rule. Like the Zero Property, it is a simple, independent rule to be automated early. 
| | 3 | **x10** | **Place Value.** Leverage the simple base-10 pattern. Provides a strong anchor for the `x9` strategy. | | 4 | **x5** | **Skip Counting.** Leverage familiar skip-counting patterns and its relationship to `x10` (half of ten). | | 5 | **x2** | **The Addition Bridge.** Automate by connecting to the already-known addition "Doubles." This is the foundation of the doubles family. | | 6 | **x4** | **The "Double-Double" Strategy.** Practice is sequenced to automate the derived strategy of `(n x 2) x 2`, building directly on the `x2` anchor facts. | | 7 | **x8** | **The "Double-Double-Double" Strategy.** Extends the doubling strategy (`(n x 4) x 2`), reinforcing the power of using known facts to solve unknown ones. | | 8 | **x9** | **The "Ten-Minus-One" Strategy.** Automate the powerful derived strategy of `(n x 10) - n`, leveraging the already-mastered `x10` anchor facts. | | 9 | **x3** | **The "Double-Plus-One" Strategy.** Automate the derived strategy of `(n x 2) + n`, leveraging the `x2` anchor facts. | | 10 | **x6** | **The "Five-Plus-One" Strategy.** Automate the derived strategy of `(n x 5) + n`, leveraging the `x5` anchor facts. | | 11 | **x7** | **The "Five-Plus-Two" Strategy.** Automate by practicing decomposition (`7n` as `5n + 2n`) using other known facts. | #### Addition | Order | Fact Set | Rationale (Strategy to Automate) | | :---- | :--------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------ | | 1 | **+0** | **The Identity Rule.** Automate the `n+0=n` rule. Procedurally the simplest and provides an early win. | | 2 | **+1** | **The "Counting On by One" Strategy.** Automate the most foundational, concrete strategy of counting on by a single step. | | 3 | **+2** | **The "Counting On by Two" Strategy.** Extend the "Counting On" skill to two steps, providing a gentle increase in procedural complexity. | | 4 | **Doubles** | **The First Anchor.** Automate `n+n` facts. These are memorable and serve as the primary anchor for derived strategies. | | 5 | **Near-Doubles** | **The First Derived Strategy.** Automate `n+(n+1)` facts by leveraging the now-mastered "Doubles" facts (e.g., `(7+7)+1`). | | 6 | **Making Ten** | **The "Bridge Ten" Strategy.** Automate by repeatedly practicing the decomposition of facts to bridge through 10 (e.g., `9+7` -> `(9+1)+6`). | | 7 | **+10** | **The Place Value Anchor.** Automate `n+10` facts by leveraging the simple, pattern-based nature of our base-10 system. | | 8 | **+9** | **The "Ten-Minus-One" Strategy.** Automate the derived strategy of `(n+10)-1`, building directly on the `+10` anchor facts. | | 9 | **+3,+4** | **Strategic Consolidation.** Consolidate automaticity by encouraging rapid, flexible application of previously mastered strategies to the final set of facts. | | 10 | **+5,+6** | **Strategic Consolidation.** Consolidate automaticity by encouraging rapid, flexible application of previously mastered strategies to the final set of facts. | #### Subtraction | Order | Fact Set | Rationale (Strategy to Automate) | | :---- | :---------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | 1 | **-0** | **The Identity Rule.** Automate the `n-0=n` rule. Mirrors the `+0` rule. | | 2 | **-n (Minus Itself)** | **Simple Rule.** Automate the `n-n=0` rule. 
| | 3 | **-1** | **"Counting Back" by One.** Automate the simplest procedural subtraction strategy. | | 4 | **-2** | **"Counting Back" by Two.** Extend the "Counting Back" skill to two steps. | | 5 | **-3** | **"Counting Back" by Three.** Further extend the procedural "Counting Back" skill. | | 6 | **Neighbors** | **"Counting Up" by One.** The first and simplest application of the "Think-Addition" strategy (`n-(n-1)`). | | 7 | **"Think-Addition" via Making Ten** | **Inverse of Making Ten.** Leverage automated `pairs-to-10` addition facts (e.g., for `10-7`, think `7+?=10`). | | 8 | **-10** | **Place Value Anchor.** Automate `n-10` by leveraging place value understanding. This is the anchor for `-9` and `-8`. | | 9 | **-9** | **"Think-Addition" via the Ten Anchor.** Automate the derived strategy that uses the `-10` anchor (e.g., for `17-9`, think `9+?=17` by relating it to `10`). | | 10 | **-8** | **"Think-Addition" via the Ten Anchor.** Extend the derived `ten-anchor` strategy. | | 11 | **-7,-6** | **"Think-Addition" via the Ten Anchor.** Further extend the derived `ten-anchor` strategy. | | 12 | **-4,-5** | **Strategic Consolidation via Think-Addition.** Consolidate automaticity by encouraging flexible application of the "Think-Addition" strategy to the final set of subtraction facts. | #### Division | Order | Fact Set | Rationale (Strategy to Automate) | | :---- | :------- | :---------------------------------------------------------------------------------------------------------------------------------- | | 1 | **÷1** | **The Identity Property.** Automate the `n÷1=n` rule. Taught immediately after the `x1` rule to establish the inverse relationship. | | 2 | **÷n=1** | **Dividing by Itself.** Automate the `n÷n=1` rule. A simple, independent rule that is the inverse of `1xn=n`. | | 3 | **÷10** | **Place Value.** Practice is sequenced to automate "Think-Multiplication" by leveraging the now-mastered `x10` facts. | | 4 | **÷5** | **Skip Counting Inverse.** Automate "Think-Multiplication" by leveraging the now-mastered `x5` facts. | | 5 | **÷2** | **Halving.** Automate "Think-Multiplication" by leveraging the now-mastered `x2` (doubles) facts. | | 6 | **÷4** | **Halving the Half.** Automate "Think-Multiplication" by leveraging the now-mastered `x4` facts. | | 7 | **÷8** | **Halving the Quarter.** Automate "Think-Multiplication" by leveraging the now-mastered `x8` facts. | | 8 | **÷9** | **The `x9` Inverse.** Automate "Think-Multiplication" by leveraging the now-mastered `x9` facts. | | 9 | **÷3** | **The `x3` Inverse.** Automate "Think-Multiplication" by leveraging the now-mastered `x3` facts. | | 10 | **÷6** | **The `x6` Inverse.** Automate "Think-Multiplication" by leveraging the now-mastered `x6` facts. | | 11 | **÷7** | **The Final Set.** The last set, automated by leveraging the final mastered `x7` facts. |
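The ordering in these tables, together with the commutative-property and "Think-Multiplication" insights above, translates directly into data. Below is a minimal sketch under stated assumptions: the sequences are taken from the tables, while the constant names, the derivation of the division order, and `canonicalFactKey` are hypothetical illustrations, not the Fluency API.

```typescript
// Hypothetical encoding of the recommended sequences (see tables above).
// Rule-based sets come first, then anchors, then the derived sets.
const MULTIPLICATION_SEQUENCE = ["x0", "x1", "x10", "x5", "x2", "x4", "x8", "x9", "x3", "x6", "x7"];

// Division mirrors multiplication: rule sets first, then each ÷n set
// immediately leverages the now-mastered xn facts ("Think-Multiplication").
const DIVISION_SEQUENCE = [
  "÷1", "÷n=1",
  ...MULTIPLICATION_SEQUENCE
    .filter((s) => !["x0", "x1"].includes(s)) // x0 has no division inverse; x1 maps to ÷1
    .map((s) => s.replace("x", "÷")),         // x10 -> ÷10, x5 -> ÷5, ...
];
// => ["÷1", "÷n=1", "÷10", "÷5", "÷2", "÷4", "÷8", "÷9", "÷3", "÷6", "÷7"]

// Commutative canonical key: practice on 3x7 credits 7x3 and vice versa,
// nearly halving the number of distinct facts to automate for + and x.
function canonicalFactKey(a: number, b: number, op: "x" | "+"): string {
  const [lo, hi] = a <= b ? [a, b] : [b, a];
  return `${lo}${op}${hi}`; // e.g., canonicalFactKey(7, 3, "x") === "3x7"
}
```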
### XP System ### XP System BrainLift - **Owner** - Serban Petrescu - **Purpose** - To establish a definitive, research-backed knowledge base on XP (eXperience Points) systems for educational games, specifically for designing TrashCat's XP system. - This document will analyze and synthesize insights from three established XP systems: Math Academy (research-backed adaptive learning platform), Alpha Andy M / TimeBack (mastery-focused multi-app ecosystem), and LearnWithAI / Athena (content-specific assessment-heavy approach). - It will serve as the authoritative reference for making XP design decisions in TrashCat, ensuring our system incentivizes genuine learning, prevents gaming behaviors, and aligns with our dual-mode (Speedrun + Practice) pedagogical model. - The goal is to extract universal principles and identify context-specific adaptations needed for a game-controlled speed, fact-level progression system where students cannot control pacing. ### Sources - [LearnWithAI XP Brainlift](https://docs.google.com/document/d/1l0Nib4MMXmvt_aH6tTWhlZ-iQAO9Lhgy498d99maT8M/edit?tab=t.0#heading=h.xtglzohn0siz) - [Andy M's XP Brainlift](https://workflowy.com/s/xp-system-brainlift/pHUfABBoeCF5et2Z) - [Math Academy's Way](https://docs.google.com/document/d/1LLZK_34Oer9LwuqAv-pqxfXlR8n7V8zJ_MO323R7egI/edit?tab=t.0#bookmark=id.8xtr7duh3gfd) ### DOK4: Spiky Points of View - **We use effort-intrinsic XP (time-based) instead of content-intrinsic XP (milestone-based), directly contradicting universal best practice.** - **Why it's controversial:** Every examined system (Math Academy, Alpha Andy M, LearnWithAI) explicitly rejects time-based XP. Alpha Andy M states: "Expected XP belongs on each piece of course content and is student independent." The research consensus is that time-based XP punishes efficiency, rewards dawdling, and enables gaming. Content-intrinsic XP (fixed per task) is considered the only viable approach. - **Our Bet:** We bet on the sophistication of our learning algorithm to prevent gaming behaviors architecturally, making time-based XP viable where it would fail in other systems. Our algorithm controls: (1) which facts are eligible for practice (spaced repetition cooldowns), (2) when practice sessions end (completion caps, cooldown exhaustion), (3) how fast students can answer (game-controlled speed), and (4) daily diagnostic frequency (Speedrun caps). These constraints ensure that "active play time" is always productive learning time - students cannot dawdle, farm, or game the system because the algorithm controls pacing and availability. Given this architectural protection, "1 minute of work = 1 XP" becomes maximally simple and understandable for third graders without sacrificing integrity. We accept that XP becomes a time metric rather than an achievement signal, but rely on Progress % to track actual learning outcomes. - **We award zero XP for intervention time, even though students spend real effort on error correction.** - **Why it's controversial:** In a time-based XP system (1 minute = 1 XP), excluding intervention time creates complexity. Students are actively engaged during interventions (TurnWheels, DragDrop, CueFading), spending ~30 seconds of real effort per intervention. Why does this time not count when all other active time counts? - **Our Bet:** XP tracks run time only - when the cat is moving and students are answering math facts. Interventions pause the run, creating a natural time penalty for errors. This architectural decision aligns XP with forward progress (facts answered) rather than total engagement time. The simplicity of "run time = XP time" is clearer than "active time = XP time," and it preserves the pedagogical principle that interventions are corrective work, not rewarded work. Students understand: mistakes cost you time (paused XP accumulation) plus effort (completing the intervention). - **We award XP for Speedrun despite it being purely diagnostic assessment.** - **Why it's controversial:** Alpha Andy M explicitly states "Do NOT award XP for passive activities...
Wait until learning is verified." Speedrun is assessment (showing what you know), not practice (building what you don't). Traditional thinking says assessment shouldn't earn XP unless tied to performance thresholds. - **Our Bet:** Speedrun is active work requiring focus and effort. It's the critical daily ritual that drives Practice mode targeting. In our time-based system, the daily cap (only the first Speedrun per skill awards XP) prevents farming while ensuring students are incentivized to complete the diagnostic properly. The fixed time-based reward doesn't create sandbagging incentives (poor performance doesn't extend the run) or answer-lookup incentives (performance doesn't affect XP earned). - **We stop awarding XP when skills reach 100% completion in both Practice and Competition modes, hard-capping progression despite unlimited available practice.** - **Why it's controversial:** Math Academy allows unlimited XP earning per day. Their system uses pace-based efficiency adjustments but doesn't cap absolute XP. Alpha Andy M sets daily targets rather than caps. Preventing students from earning XP for continued practice on mastered material could feel punishing to motivated students who want to keep playing. - **Our Bet:** Without this cap, our time-based system enables trivial XP farming: students could grind completed skills indefinitely, earning 60 XP/hour with zero cognitive load. This would corrupt XP as a learning metric entirely. The cap ensures XP tracks genuine learning effort, not mindless repetition. Students can still practice completed skills for maintenance (pedagogically valuable), they just don't earn XP for it. This applies to both Practice mode and Competition mode - completed skills award zero XP regardless of game mode. This forces students to engage with new, challenging content to earn XP, maintaining integrity of the reward signal. - **We award no bonuses for exceptional performance, creating zero quality-based multipliers.** - **Why it's controversial:** All examined systems award 20-25% bonus XP for exceptional performance (>95% accuracy). Research (Egram, 1979) empirically validates that performance-contingent bonuses increase future performance. Math Academy states "quality impacts efficiency 50x more than pace." Flat-rate XP regardless of accuracy ignores quality entirely as a factor in earning rewards. - **Our Bet:** In our time-based system, quality already affects XP rate indirectly: accurate students avoid slowdown penalties and intervention time, completing more questions per minute. Adding explicit bonuses would reintroduce complexity that the executive decision explicitly rejected. We accept that our XP system provides weaker incentives for careful work than research-backed alternatives, prioritizing "embarrassingly simple" over "pedagogically optimal." Game-controlled speed provides some quality signaling; we rely on that rather than XP multipliers. - **Game-controlled speed is our anti-gaming advantage that eliminates rush-farming exploits architecturally.** - **Why it's controversial:** Most educational games and apps must invest heavily in detecting rush/gaming behaviors (answer time analysis, pattern detection, anti-cheat systems). We're claiming we don't need most of this despite having a time-based XP system, which is traditionally the most gameable approach. - **Our Bet:** By removing player control over running speed, we architecturally eliminate the "rush to farm XP" exploit despite using time-based XP. 
    Students physically cannot answer faster than the game allows. Combined with (1) daily Speedrun caps and (2) completed-skill caps, we prevent the main gaming vectors that make time-based XP problematic in other systems. Time-based XP in a player-controlled environment is terrible; time-based XP in a game-controlled environment is merely mediocre. Our architectural advantage lets us use a simpler system that would fail elsewhere.
- **Progress percentage counts all stage advancements holistically, not just final Mastery, keeping Progress and XP fully decoupled.**
  - **Why it's controversial:** Math Academy can count only completed lessons as progress because lessons are daily-completable units. TrashCat's Mastered stage requires 11+ days of spaced repetition. Counting only Mastered facts would show 0% progress for the first 11 days, making the system feel broken. With time-based XP showing constant accumulation regardless of learning milestones, Progress % becomes the only metric tracking actual curriculum advancement.
  - **Our Bet:** Progress should reflect visible forward movement through the learning pipeline. We weight all stages (Assessment=0%, Practice=20%, Review=40%, Repetition=70%, Mastered=100%) and average across all facts. This creates an honest progress metric that moves daily while still emphasizing durable achievement. Progress % measures current knowledge state; XP measures time invested. These are intentionally decoupled: a student can earn 60 XP on a skill while Progress increases from 35% to 45%, or earn 60 XP while Progress stays at 95% (grinding late-stage repetitions). Different metrics for different purposes.
- **We gate time-based XP on 80% accuracy per minute in Practice mode only, keeping Competition mode purely time-based.**
  - **Why it's controversial:** Our time-based XP system (1 minute = 1 XP) was designed to be "embarrassingly simple." Adding an accuracy threshold reintroduces complexity and creates a scenario where students can play for a minute but earn zero XP if their accuracy is below 80%. This contradicts our stated goal of simplicity and could feel punishing to struggling students.
  - **Our Bet:** Pure time-based XP without any quality signal creates a gaming vector in Practice mode: students could randomly guess answers, maintaining active session time while learning nothing. The 80% accuracy threshold (aligned with Alpha Andy M's mastery threshold) ensures XP represents productive learning time, not just time spent. We measure accuracy using active session time (excluding paused time and interventions) in 60-second windows matching our 1 XP = 1 minute baseline. This preserves the time-based simplicity while adding a minimal quality gate. Students who are genuinely trying will naturally exceed 80% accuracy; only deliberate gaming or severe struggle triggers the gate. For struggling students, the algorithm's difficulty adjustment and intervention system should bring them above 80% naturally. Competition mode remains purely time-based (1 minute = 1 XP) without accuracy gating, as the competitive context and different question selection logic provide sufficient anti-gaming protection.

### DOK3: Insights

- **Never decrement XP retroactively for knowledge decay or fact demotions.** XP is a permanent record of effort expended. Progress/Mastery tracks current knowledge and can decrease. When facts demote from forgetting, XP earned during the original promotion stays. Award negative XP only at task completion for detected gaming, never later.
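To make the 80%-per-minute gate from the bets above concrete (it also anticipates the active-session-time insight later in this list), here is a minimal TypeScript sketch; the event shape and names are illustrative assumptions, not the shipped implementation:

```typescript
// Hypothetical sketch: 1 XP per 60-second window of *active* session time,
// awarded only when accuracy within that window is at least 80%.
interface AnswerEvent {
  correct: boolean;
  activeSessionSec: number; // active time at submission (pauses/interventions excluded)
}

function practiceXp(answers: AnswerEvent[], activeSessionTotalSec: number): number {
  const WINDOW = 60;     // matches the 1 XP = 1 minute baseline
  const THRESHOLD = 0.8; // the 80% accuracy gate
  let xp = 0;
  for (let start = 0; start + WINDOW <= activeSessionTotalSec; start += WINDOW) {
    const inWindow = answers.filter(
      (a) => a.activeSessionSec >= start && a.activeSessionSec < start + WINDOW
    );
    const correctCount = inWindow.filter((a) => a.correct).length;
    if (inWindow.length > 0 && correctCount / inWindow.length >= THRESHOLD) xp += 1;
  }
  return xp;
}
```

Because answers are bucketed by the active-session clock rather than wall-clock timestamps, pauses and interventions never dilute a window's accuracy.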
- **Never track actual time spent. Always use a fixed expected XP per task.** Effort-intrinsic XP is fundamentally broken: it punishes efficiency, rewards dawdling, and enables gaming. Content-intrinsic XP (fixed per task) is the only viable approach. Calibrate by speed-running competent users.
- **Mastery thresholds gate when to award XP, not whether intermediate milestones deserve XP.** Math Academy's 60% floor allows partial XP (50-75%) for partial lesson completion. Alpha Andy M's 80% floor awards zero below and full above for complete lesson mastery. The threshold determines the performance bar for earning XP on a specific learning event (lesson completion, stage transition, assessment). Higher thresholds enforce rigor but increase frustration; lower thresholds allow progression while struggling. For fluency apps, intermediate stage transitions (Slow→Fast Practice) are genuine milestones deserving XP, unlike comprehensive learning apps, where intermediate understanding isn't sufficient.
- **Negative XP targets gaming patterns only, never individual mistakes or poor performance.** Detect patterns across multiple events: systematic rapid guessing, idle timeouts, obvious cheating. Calibrate so genuine students rarely see penalties.
- **Prevent gaming architecturally first, detect behaviorally second.** Individualized paths, randomized questions, and changed questions on reattempts eliminate cheating vectors structurally. Build architectural defenses first. Add behavioral detection as a secondary layer.
- **Daily cap diagnostics. Never cap practice.** Fixed-XP diagnostics can be farmed; cap them to once per day per skill. Variable-XP practice self-limits (you run out of content to progress). Caps prevent farming; targets drive consistency. Use each appropriately.
- **Display expected XP upfront. Hide calculation details.** Show what's earnable ("Up to 10 XP"). Hide how it's calculated (multipliers, penalties). Transparency for goal-setting, opacity for anti-gaming.
- **Award 20-25% bonus XP for exceptional performance.** Egram (1979) empirically validates that performance-contingent bonuses increase future performance. A meaningful bonus (20%+) incentivizes careful work without creating risk-aversion. The threshold for "exceptional" depends on assessment noise: traditional UIs can require 100% (pure knowledge signal), while game-based environments should use 95%+ (allowing margin for game-induced errors like obstacle interference or navigation mistakes). Low-hanging motivational fruit.
- **Pace-efficiency gains require deep prerequisite graphs with implicit review credit.** Math Academy achieves efficiency ∝ pace^0.1 because its knowledge graph has "encompassings": advanced topics that implicitly practice many simpler prerequisites. Fast students learn advanced topics before reviews are due on prerequisites, so one advanced task knocks out multiple reviews, reducing total XP needed. TrashCat's structure is flat: facts within a skill have minimal dependencies (only fact families). A student learning 8×7 doesn't implicitly practice 8×6. Conclusion: expect a weak pace-efficiency relationship in TrashCat. Doubling pace doubles speed but doesn't reduce total XP needed.
- **Award fixed XP for diagnostics with a minimum completion threshold.** Performance-based diagnostic XP creates perverse incentives (sandbagging for easier placement, or answer-lookup for harder placement). Fixed XP + a ≥50% completion requirement + a daily cap prevents all gaming vectors, as sketched below.
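A minimal sketch of that diagnostic rule; the 10 XP figure and the names are illustrative assumptions, not values specified anywhere in this document:

```typescript
// Hypothetical sketch: fixed-XP diagnostic gated by completion and a daily cap.
const DIAGNOSTIC_XP = 10;   // fixed and performance-independent (assumed value)
const MIN_COMPLETION = 0.5; // at least 50% of the run must be completed

function diagnosticXp(
  completionRatio: number,             // 0..1, fraction of the run completed
  alreadyAwardedTodayForSkill: boolean // daily cap: one award per skill per day
): number {
  if (alreadyAwardedTodayForSkill) return 0;      // farming blocked by the cap
  if (completionRatio < MIN_COMPLETION) return 0; // too little effort to credit
  return DIAGNOSTIC_XP; // fixed award: no sandbagging or answer-lookup incentive
}
```

Because the award ignores performance, deliberately missing questions (sandbagging) or looking up answers changes nothing; the only lever is completing the run, once per day.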
- **Never award XP for automated placement consequences.** Assessment effort earns XP once. Placement promotions from that assessment earn zero. This prevents double-dipping and maintains XP time-equivalence. Use cosmetics for placement celebration.
- **Optimize quality first, pace second. Quality is the dominant factor.** Math Academy states, "the quality of your work is the single greatest factor that affects your learning efficiency," while pace changes efficiency by only 1.07x per doubling. Invest in pedagogy and behavioral coaching before motivational systems. Careful work >> fast work.
- **Measure accuracy using active session time, not wall-clock time.** When calculating per-minute accuracy for XP gating, use the actual active session duration (excluding paused time, interventions, app backgrounding) rather than wall-clock timestamps. This ensures accuracy windows reflect genuine active time and prevents clock manipulation or pause-related edge cases. Each answer should record the active session time at submission, enabling precise time-window calculations on the backend.
- **Educate about session length; don't restrict.** Show diminishing-returns data. Don't cap total daily XP. Students grinding for hours are still learning. Respect the user’s agency while providing guidance.
- **XP works best as part of a comprehensive motivational ecosystem, not in isolation.** Math Academy's success comes from combining XP with weekly leagues, student choice, social leaderboards, fun emphasis, and optional opt-outs. Focusing only on XP calibration (thresholds, bonuses, penalties) without considering the broader motivational architecture may miss why gamification succeeds. The whole system must balance extrinsic rewards (XP, leagues) with intrinsic motivation (autonomy, mastery, curiosity) and provide escape valves for students who find gamification demotivating.
- **Beware the dark side of XP gamification.** While effective for many students, XP systems can create unhealthy addiction patterns, feel punishing rather than rewarding, and force rigid progressions that frustrate curious learners. Design with awareness that gamification is a powerful tool that can backfire if not carefully balanced with genuine learning goals and student agency. Include opt-outs and escape valves for students whose learning styles clash with points-based systems.

### DOK1-2: Facts

#### **Core XP Philosophy Across Systems**

- **All three systems use a 1 XP = 1 minute baseline.** Math Academy, Alpha Andy M, and LearnWithAI all calibrate XP so that 1 XP represents approximately 1 minute of focused, productive effort from an average serious student. This baseline provides intuitive time-equivalence for goal-setting and progress tracking.
- **XP is content-intrinsic (fixed per task), not effort-intrinsic (actual time tracked).** Each piece of content (lesson, quiz, test) has a fixed "expected XP" value that is the same for all students, determined by speed-running a competent person through the content. Students don't earn more XP by spending more time on a task. If a slow learner takes 2 hours to complete a 10 XP lesson and a fast learner completes it in 20 minutes, both earn the same 10 XP (plus any performance bonuses). The slow learner is simply working at 0.08 XP/minute while the fast learner is working at 0.5 XP/minute. Alpha Andy M explicitly states: "Expected XP belongs on each piece of course content and is student independent. Expected XP is the same no matter which student is engaging with the content."
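The slow-learner/fast-learner numbers above translate directly into code; a minimal sketch, with an assumed lesson shape:

```typescript
// Hypothetical sketch: expected XP lives on the content, never on the clock.
interface Lesson {
  id: string;
  expectedXp: number; // calibrated by speed-running a competent user
}

// Time spent is deliberately absent from this signature.
function xpEarned(lesson: Lesson, masteryDemonstrated: boolean): number {
  return masteryDemonstrated ? lesson.expectedXp : 0;
}

const lesson: Lesson = { id: "example-lesson", expectedXp: 10 };
// Slow learner: 120 minutes -> 10 XP (~0.08 XP/min).
// Fast learner:  20 minutes -> 10 XP (0.5 XP/min).
console.log(xpEarned(lesson, true)); // 10, for both students
```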
- **XP is explicitly decoupled from progress percentage.** Math Academy and LearnWithAI explicitly separate XP (effort + quality metric) from Progress % (curriculum completion metric). Math Academy states that "a student's progress (percent of topics completed) in a course is highly correlated with, but fundamentally different from, the amount of XP that they have earned." Alpha Andy M conflates these less clearly but recognizes XP includes time spent on tests/quizzes that don't directly advance the curriculum.
- **XP is cumulative and monotonically increasing; knowledge state can regress.** Across all systems, XP functions as a permanent, ever-increasing record of learning effort over time. Knowledge/mastery state, however, can decrease when students forget material or fail to demonstrate retained knowledge. The two metrics serve different purposes: XP tracks historical effort, while progress/mastery tracks current capability.
- **XP serves as both a measurement tool and an incentive.** All systems recognize XP's dual role: (1) measuring learning effort/quality objectively, and (2) incentivizing desired behaviors through gamification. Math Academy uses XP for weekly leagues, Alpha Andy M ties it directly to daily time goals and rewards, and LearnWithAI uses XP for daily/weekly tracking.

#### **Quality-Based XP Scaling**

- **Math Academy awards graduated XP below mastery.** Math Academy awards XP on a sliding scale: 100% accuracy gets bonus XP (+2-3 points), 90-99% gets full (100%) XP, 67-89% gets most (75%) XP, 60-66% gets little (50%) XP, and <60% gets zero or negative XP. The cutoff for earning any XP is 60% accuracy.
- **Alpha Andy M uses a binary threshold at 80% for mastery.** Alpha Andy M awards XP only when students demonstrate mastery at 80%+ accuracy (90%+ for assessments). Below this threshold, students receive 0 XP if effort was sincere, or negative XP if effort was insincere (cheating/gaming). No partial credit is awarded below the mastery threshold because "we don't believe the student has learned at the rigor necessary for a mastery-based system."
- **LearnWithAI varies XP thresholds by activity type.** LearnWithAI varies its approach by activity type: lessons require 100% mastery of MCQ assessments to award XP, writing skills award XP on each correct attempt even if not first-try, and tests award XP proportional to score (2 XP per correct MCQ question, variable XP per FRQ point).
- **Awarding bonus XP for perfect performance is standard practice.** All three systems award bonus XP for 100% accuracy: Math Academy gives +2-3 points on tasks, Alpha Andy M gives a +20-25% bonus, and LearnWithAI awards bonus XP for 100% on tests. Research (Egram, 1979) shows that awarding bonus points for high performance increases future performance.

#### **Reattempt and Retry Policies**

- **Math Academy delays reattempts but doesn't explicitly reduce XP.** When students fail tasks, Math Academy changes questions and requires a delay period before reattempt. The system allows students to continue on other learning paths while waiting. No explicit XP reduction for reattempts is documented in the analyzed materials, though the system does halt failed lessons temporarily.
- **Alpha Andy M implements steep XP reductions on reattempts.** Alpha Andy M implements strict reattempt penalties: 50% of expected XP on the first redo, 25% on the second redo, and 0% on the third attempt. This discourages reliance on multiple attempts and incentivizes careful, focused work on the first try.
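Alpha Andy M's redo schedule is simple enough to state as a lookup; a sketch with hypothetical names:

```typescript
// Hypothetical sketch of the reattempt schedule described above:
// full XP on the first attempt, then 50%, 25%, and 0% on successive redos.
function reattemptMultiplier(redoNumber: number): number {
  // redoNumber 0 = first attempt, 1 = first redo, 2 = second redo, 3+ = later
  const schedule = [1.0, 0.5, 0.25];
  return redoNumber < schedule.length ? schedule[redoNumber] : 0;
}

const awards = [0, 1, 2, 3].map((n) => 10 * reattemptMultiplier(n));
console.log(awards); // [10, 5, 2.5, 0] for a 10 XP lesson
```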
- **LearnWithAI allows XP to be earned on multiple attempts, but with safeguards.** LearnWithAI awards XP each time students correctly answer writing skill assessments, even after incorrect attempts. However, they acknowledge the gaming risk and plan to use TimeBack's anti-pattern detection. They intentionally don't cap this in MVP to observe student behavior first.

#### **Demotions, Forgetting, and Knowledge Regression**

- **Knowledge state can regress without affecting cumulative XP.** When students forget material or fail reviews, their internal knowledge state (which topics are mastered, stage positions, etc.) can be demoted or regressed. However, this state change does not trigger XP decrements. The XP they earned when originally learning the material remains in their total.
- **Math Academy explicitly discusses knowledge profile "peel backs" without mentioning XP loss.** Their FAQ asks, "Why doesn't it just peel back their knowledge profile immediately?" when students fail. The answer discusses carefully removing topic credit from knowledge profiles based on failure patterns, but never mentions removing the XP that was earned when those topics were learned initially. This confirms XP is a permanent effort record, not a knowledge state indicator.
- **Systems award XP for reattempts even though the knowledge was supposedly already learned.** When a student has to re-learn material they forgot, they can earn new XP for the reattempt (though often at reduced rates per Alpha Andy M's policy). This further confirms that XP represents effort expended, not unique knowledge acquired.

#### **Negative XP and Gaming Prevention**

- **All systems penalize gaming behaviors with negative XP.** Math Academy, Alpha Andy M, and LearnWithAI all implement negative XP penalties for detected gaming, cheating, or deliberately poor effort. The penalty ranges from -2 to -5 XP depending on severity.
- **Negative XP is awarded for specific completion events, not retroactively for knowledge decay.** When students complete a task with gaming/cheating behaviors, the system awards negative XP at the moment of completion. XP is never decremented retroactively because a student later forgot material or regressed in their knowledge. XP represents effort expended (good or bad), not current knowledge state.
- **XP penalties are calibrated to avoid punishing genuine students.** Alpha Andy M explicitly states: "The system must be calibrated to ensure that well-intentioned learners are rarely penalized." LearnWithAI echoes this by saying penalties should "reflect the level of frustration a teacher would feel toward poor effort."
- **Systems use pattern recognition to detect gaming, not individual mistakes.** Systems look for patterns (multiple rapid answers, systematic wrong answers, idling) rather than penalizing individual mistakes. Math Academy's XP penalty system is designed so that "students who used the system properly and truly gave their best effort rarely (if ever) experienced penalties."

#### **Daily Caps and Frequency Limits**

- **Math Academy does not enforce daily XP caps.** Math Academy allows unlimited XP earning per day. Their system uses pace-based efficiency adjustments (efficiency ∝ pace^0.1) but doesn't cap absolute XP. Students can work multiple hours and continue earning XP, though long sessions trigger efficiency warnings after 45+ minutes.
- **Alpha Andy M sets daily XP targets rather than caps.** Alpha Andy M requires 120 XP daily (2 hours), divided into 4 subject blocks of 25 minutes each, plus a 20-minute buffer. This is a target/requirement rather than a cap: students need to earn this minimum to meet their daily goal.
- **LearnWithAI uses activity-specific daily XP caps.** LearnWithAI caps placement tests to once (fixed XP regardless of retakes) and only awards XP for unit test retakes when "this is their recommended activity." This prevents farming through repeated test-taking.

#### **Assessment and Diagnostic XP**

- **Math Academy awards XP for diagnostic exams.** Students earn XP during placement diagnostics based on their performance answering questions. The diagnostic is treated as active learning work that deserves XP, even though it's assessing existing knowledge rather than building new knowledge.
- **Alpha Andy M awards XP for passive activities only after verification.** Alpha Andy M explicitly states: "Do NOT award XP for passive activities alone (e.g. articles, videos). Wait until a lesson quiz verifies the learning or test to award XP." They award XP retroactively for reading/video time once learning is verified through assessment.
- **LearnWithAI awards a fixed XP amount for placement tests.** LearnWithAI gives fixed XP for placement test completion (30 XP for MCQ placement, variable for FRQ placements) regardless of performance. This recognizes the time/effort of the assessment without creating incentives to perform poorly to retake.

#### **Transparency and Display**

- **Math Academy shows XP values upfront to students.** Students see XP values on tasks before attempting them. This transparency helps with goal-setting and creates clear expectations. The system also shows XP earned immediately after task completion.
- **Alpha Andy M displays the expected XP per lesson.** Each lesson has an associated expected XP value (e.g., 10 XP or 15 XP) that students see before starting. This helps students plan how many lessons they need to complete to reach their daily 120 XP goal.
- **LearnWithAI shows available XP at task entry points.** Their system displays "XP to earn" for lessons (on 100% mastery), "Up to X XP" for tests (if full marks are achieved), and fixed XP for placement tests. This gives students clear expectations about potential XP gains.
- **Detailed XP calculation rules are not shown to users.** LearnWithAI explicitly states: "The detailed rules and calculation of XP is intentionally hidden to maintain simplicity." This transparency-without-complexity approach prevents students from gaming the system while keeping the experience understandable.

#### **Progress Metrics and Reporting**

- **Math Academy's progress metric is nonlinear and slows over time.** Math Academy explains that "Progress is nonlinear. Students make progress very quickly at the beginning of a course because they can focus primarily on learning new topics... But the more they learn, the more there is to review – so progress slows down." Progress % is deliberately kept separate from XP to avoid confusion.
- **Math Academy's progress percentage can decrease, but XP earned never decreases.** When students fail tasks on "conditionally completed" topics (low-confidence diagnostic placements), Math Academy "peels back" the student's knowledge profile, removing credit for those topics and decreasing the progress percentage. However, XP already earned is never taken away; it remains as a permanent record of effort expended.
  Math Academy's FAQ explicitly states that they will "peel back their knowledge profile" in response to failures, but there is no mention anywhere of retroactively removing earned XP.
- **Progress decrease happens only for low-confidence placements, and is intentionally slow.** Math Academy is deliberately slow to peel back knowledge profiles to prevent gaming: "If we peeled back a student's knowledge profile quickly in response to failing a task, even when there is strong evidence that a student knows the prerequisite content, then it would create an exploit: whenever tasks begin to feel challenging, an adversarial student could intentionally fail a number of tasks to peel back their knowledge profile until they reach the point where they have days of super easy work ahead of them."
- **Math Academy enforces a minimum for new content exposure.** The system ensures that "on average, students have the opportunity to work on a lesson at least ~25% of the time or so at a minimum." This prevents students from getting stuck in pure review mode.
- **Alpha Andy M uses XP as the main progress metric.** While they acknowledge lessons are useful for tracking progress, XP is the primary metric students, guides, and parents use to understand daily/weekly learning effort. The 120 XP daily requirement provides a clear, simple target.

#### **Competitive and Motivational Features**

- **Math Academy keeps motivation high with a weekly league system.** Students are grouped into leagues based on XP earning speed. Weekly promotions/demotions occur based on league position. Each week provides a fresh start while maintaining cumulative XP. This creates ongoing competitive motivation without permanent discouragement. Students can opt out if leagues aren't motivating for them. Math Academy states: "What we have seen in the performance of hundreds of kids using the system is that they really enjoy gaining points, getting in higher and higher leagues and racing against each other on the leaderboard. They are excited about the learning process."
- **Math Academy structures quizzes as recurring assessment gates.** A quiz is assigned every 150 XP covering recent topics. Quizzes are timed (though timing can be adjusted for accommodations) and students cannot refer back to examples during quizzes. Topics missed on quizzes trigger immediate review assignments. After adequate review, the quiz becomes available for an optional retake earning additional XP. This creates low-stakes, frequent assessment with immediate feedback and opportunities for improvement.
- **Math Academy emphasizes fun and avoiding burnout explicitly.** Their stated goal is to "be as efficient as possible and have fun. We never want to push students too far or burn them out." This balances rigor with enjoyment as a design principle, not an afterthought.
- **Math Academy provides student choice within structured progression.** Students are given "an array of diverse, non-overlapping learning tasks" to choose from at any time. This autonomy allows students to select between shorter vs. longer lessons based on their current state, creating a sense of agency while the algorithm ensures all choices are pedagogically sound.
- **LearnWithAI encourages consistency with streak bonuses.** TeachTales awards a +10% XP multiplier for maintaining a 5-day streak (at least 1 quiz completed each day with 80%+ accuracy). This incentivizes consistency and daily engagement.
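The streak rule above reduces to a single threshold check; a sketch with assumed names:

```typescript
// Hypothetical sketch: +10% XP once a student has completed at least one
// qualifying quiz (80%+ accuracy) on 5 consecutive days.
function streakMultiplier(consecutiveQualifyingDays: number): number {
  const STREAK_LENGTH = 5;
  const BONUS = 0.1;
  return consecutiveQualifyingDays >= STREAK_LENGTH ? 1 + BONUS : 1;
}

console.log(24 * streakMultiplier(5)); // 24 base XP -> 26.4 with the bonus (modulo float rounding)
```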
#### **XP Enforces Good Math Habits Beyond Pure Accuracy**

- **Math Academy gates XP on both correctness and good math habits.** The system doesn't just check answers—it enforces "diligent efforts such as reading example problems carefully, using pencil and paper, and checking incorrect answers alongside fully worked out solutions." XP is withheld or reduced when students exhibit rush patterns, skip steps, or show other indicators of poor learning behaviors. This architectural coupling makes it impossible to earn high XP through shortcuts.
- **Math Academy can cut lessons short when detecting struggle.** If the system detects a student is struggling to comprehend a new concept, it will "cut the lesson short and save it for another time. The student will not get any XP for this lesson." Students almost always earn full XP on the second attempt. This prevents students from grinding through material they're not ready for.
- **On rare occasions, Math Academy assigns negative XP for rushing and guessing.** When the system detects clear patterns of rushing through tasks without genuine engagement, it can assign negative XP to discourage the behavior. This is described as rare, indicating the threshold is calibrated to avoid false positives on genuine students.

#### **Critical User Feedback on XP Gamification**

- **Not all users find Math Academy's XP system motivating.** Some users report the XP system feels "non-rewarding when I get points and it feels like I get punished when I answer something wrong." Others describe it as "the forever unimpressed tutor—quick to penalize and very light on encouragement." Competitors like Brilliant are cited as having more engaging gamification despite being less pedagogically efficient.
- **XP systems can create unhealthy addiction patterns.** Some users report feeling "hooked" on earning XP in ways that concerned them, comparing it to Duolingo's addictive but shallow engagement. The numerical feedback loop can capture attention in ways that don't necessarily align with deep learning goals. One reviewer was asked by a stranger "Are you hooked?" after noticing their high XP accumulation rate.
- **The rigidity of XP-gated progression frustrates some learners.** Users report frustration that Math Academy "doesn't let you skip anything, and forces you down a rigid, unchangeable path." While mastery learning has benefits, the strict dependency graph can prevent students from exploring topics they're curious about if prerequisites aren't yet satisfied.
- **XP systems work better for some personality types than others.** Math Academy explicitly allows students to opt out of leagues if competitive gamification isn't motivating for them. This acknowledges that extrinsic motivation through points and competition doesn't universally drive engagement across all learners.

#### **XP Source Attribution and Tracking**

- **Systems track the source of XP for analytics.** Math Academy tracks XP from lessons vs. reviews vs. quizzes. LearnWithAI plans to show an XP source breakdown. This allows students to understand where their effort is going and enables coaches to identify issues (e.g., a student spending too much time on reattempts).
- **Daily and weekly XP total breakdowns are standard.** All systems show daily XP totals and weekly XP totals. Alpha Andy M resets daily at midnight local time and weekly at the Sunday/Monday boundary. This creates natural goal-setting periods and fresh starts.
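A minimal sketch of source attribution and daily/weekly rollups as described above; the source taxonomy and names are assumptions:

```typescript
// Hypothetical sketch: tag each XP event with a source, then roll up totals.
type XpSource = "lesson" | "review" | "quiz" | "speedrun" | "practice";

interface XpEvent {
  amount: number;
  source: XpSource;
  at: Date;
}

// Per-source breakdown, e.g. to spot a student over-spending time on reattempts.
function totalsBySource(events: XpEvent[]): Record<string, number> {
  const totals: Record<string, number> = {};
  for (const e of events) totals[e.source] = (totals[e.source] ?? 0) + e.amount;
  return totals;
}

// Daily/weekly totals: pass local midnight or the start of the week as the cutoff.
function totalSince(events: XpEvent[], cutoff: Date): number {
  return events.filter((e) => e.at >= cutoff).reduce((sum, e) => sum + e.amount, 0);
}
```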
#### **Expected XP Setting Methodology**

- **Expected XP is set by timing expert completion, not by tracking student time.** Systems do not measure how long individual students actually spend on tasks. Instead, each task is assigned a fixed "expected XP" value upfront by having a competent person complete it and timing them. This expected XP becomes the baseline that all students can earn for that task, regardless of how long it actually takes them to complete it.
- **Alpha Andy M uses speed-running to calibrate expected XP for lessons.** They state: "Manually 'speed running' (timing a motivated human as they complete material correctly) is the best way to set expected XP for each lesson." They explicitly reject using averages or student actuals because "Using student actuals to set expected XP always overstates because it includes students who aren't using the app correctly."
- **Math Academy benchmarks courses based on a 40 XP per weekday pace.** Math Academy models their average student on "a serious (but imperfect) student who works an average of 40 XP per weekday." Their courses are benchmarked assuming this pace. For example, AP Calculus BC is 6,000 XP, which would take 150 weekdays at 40 XP/day.
- **Total XP required for a course is always an estimate, not a fixed number.** LearnWithAI explicitly states: "An important implication... is that the total number of XP for a student to complete a course is an estimate, not a fixed value." This is because spaced repetition requirements vary by student performance.

#### **Learning Efficiency and Pace Relationships**

- **Math Academy defines "pace" as the average XP earned per weekday.** Pace is simply a measure of how much focused learning time a student puts in each day. For example, a student earning 40 XP per weekday has a "pace of 40," meaning they spend about 40 minutes of productive learning time each weekday on average.
- **Math Academy defines "learning efficiency" as curriculum progress per XP spent.** Learning efficiency is a multiplier that determines how much curriculum completion (e.g., topics mastered) a student achieves for each XP they earn. Higher efficiency means less total XP is needed to complete a course. Efficiency is primarily determined by the quality of a student's work (accuracy and pass rates on tasks).
- **Higher pace leads to higher learning efficiency through a mathematical relationship.** Math Academy discovered that learning efficiency ∝ pace^0.1. This means if you double your daily pace (e.g., from 20 to 40 XP/weekday), your learning efficiency increases by approximately 7%. The mechanism is that faster-paced students build new knowledge ahead of their review schedule, which allows the system to find more opportunities to "knock out" multiple reviews implicitly through single advanced tasks, reducing total XP needed.
- **The pace-efficiency relationship makes course completion time non-linear.** A 3000 XP course takes 15 weeks at 40 XP/weekday (baseline), but only 7 weeks at 80 XP/weekday (not 7.5 weeks as simple division would suggest), because the efficiency multiplier reduces total XP needed. Conversely, at 20 XP/weekday it takes 32 weeks (not 30 weeks), due to efficiency loss from the system being unable to optimize reviews as effectively.
- **Quality of work has a much larger impact on efficiency than pace.** Math Academy states that "the quality of your work is the single greatest factor that affects your learning efficiency."
  Poor performance (low accuracy, many failures) forces the adaptive system to assign more remediation and reattempts, substantially increasing the total XP needed to complete a course. This quality effect dwarfs the quantified ~7% efficiency gain from doubling pace.

#### **Anti-Cheating Through Individualization**

- **Math Academy individualizes learning paths to reduce cheating opportunities.** They state: "Math Academy customizes its learning path to each individual student, so it's unusual for classmates to have the opportunity to work on the same topic at the same time – and even if they do, then they are served different questions." This architectural approach to anti-cheating is more robust than behavioral detection alone.
- **Use of question banks and randomization is a common anti-cheating measure.** Math Academy uses "a large bank of questions for each topic" and assessments are "fully individualized and even randomized." This prevents students from gaining an edge by seeing classmates' work.
- **Math Academy changes reattempt questions to discourage memorization, not just repetition.** Math Academy changes questions when students reattempt failed tasks and waits for a delay period. This prevents students from memorizing specific question answers rather than learning the underlying concept.

#### **XP for Placement and Credit-by-Exam**

- **Math Academy does not award extra XP for topics placed out of via diagnostics.** While students earn XP for answering diagnostic questions during the exam, they don't receive additional "placement XP" for topics that are automatically marked as mastered based on their diagnostic performance. The XP was already earned during the assessment itself.
- **LearnWithAI prevents double-dipping by not awarding XP for bypassed material.** They state: "Students are only awarded XP for the learning progress they make in Athena, so we don't award them XP for the 70% of the material they have already mastered" (if they place out of it). This prevents awarding XP both for the placement test and for the bypassed content.

#### **Typical XP Benchmarks Across Systems**

- **Math Academy courses generally total around 3000 XP.** Most Math Academy courses contain about 3000 XP assuming prerequisite mastery (ranging from 2000 XP for Pre-Algebra to 4000 XP for Pre-Calculus). AP Calculus BC is 6000 XP, about twice an average course.
- **Alpha Andy M has a 120 XP per day requirement.** Their system requires 120 XP daily (2 hours), structured as 4 subjects × 25 min each + a 20 min buffer. Over a school year (180 days), this totals 21,600 XP.
- **Math Academy finds that 50-60 XP per day is the maximum sustainable rate.** Based on their analysis, "Maximum sustainable rate is approximately 50-60 XP per day" for long-term consistency. The highest recorded monthly total was 5,728 XP (about 200 XP/day), but this is not sustainable long-term.

#### **Session Length and Efficiency Considerations**

- **Math Academy signals that efficiency may decline after 45+ minute sessions.** They don't cap sessions but note: "there is room for us to more precisely calibrate our spaced repetition system in the future, and it's on our to-do list." Long sessions can reduce retention effectiveness.
- **Alpha Andy M structures daily learning into 25-minute subject blocks.** Their 120 XP daily requirement is broken into 4 subjects of 25 minutes each. This structured approach aligns with attention-span research and Pomodoro-style focused work blocks.
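As a back-of-envelope check on the pace-efficiency figures quoted in the Learning Efficiency subsection above (assuming 5 weekdays per week, and using only numbers already stated there):

```latex
% Efficiency scales with pace^0.1, so doubling pace buys about 7% efficiency:
\[
\frac{\mathrm{eff}(80)}{\mathrm{eff}(40)} = \left(\tfrac{80}{40}\right)^{0.1} = 2^{0.1} \approx 1.07
\]
% For a 3000 XP course benchmarked at 40 XP/weekday (15 weeks):
\[
\text{at } 80\ \mathrm{XP/day}:\ \frac{3000/1.07}{80} \approx 35 \text{ weekdays} = 7 \text{ weeks},
\qquad
\text{at } 20\ \mathrm{XP/day}:\ \frac{3000 \cdot 2^{0.1}}{20} \approx 161 \text{ weekdays} \approx 32 \text{ weeks}.
\]
```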
- **Short, frequent study sessions result in better retention than long, infrequent ones.** LearnWithAI's FAQ asks: "If I have a limited amount of time to devote each week, should I allocate that time into longer, less-frequent sessions or shorter, more-frequent sessions?" The research-backed answer consistently favors shorter, more-frequent sessions for better retention. ### Complete Specification # TrashCat Learning Science Spec (Semi-Condensed) TrashCat is an educational game designed to build mathematical fluency through engaging, research-backed gameplay. It combines the excitement of an endless runner game with a sophisticated spaced-repetition learning algorithm. This document outlines the core learning science principles and the key decisions that shape the educational experience, providing a clear view into the pedagogical foundation of the game. ## Scope and Assumptions ### Motivational Framework: XP System We use effort-intrinsic XP (time-based: 1 XP = 1 minute of active run time) rather than content-intrinsic XP (milestone-based). This simplified approach relies on our learning algorithm to control pacing, practice availability, and session duration. XP is awarded for run time only (excluding interventions), with daily caps on Speedrun XP and completion caps on Practice XP once skills reach 100%. Cosmetic rewards (in-game items, visual unlocks) provide immediate acknowledgment of milestone completion. For comprehensive documentation of the XP system design decisions, research backing, and implementation rationale, see the XP System BrainLift. ### Educational Scope: Building Fluency, Not Conceptual Understanding TrashCat builds **fluency through automaticity** for students who have already received direct instruction on basic operations externally. Our goal: move students from strategy-based retrieval to automatic, reflexive recall. We provide **point-wise reinforcement** (targeted interventions) to address gaps, not comprehensive instructional sequences or conceptual explanations. For comprehensive documentation of the cognitive science principles, instructional models, and research backing our fluency-building approach, see the Math Fluency BrainLift. ### Assessment Scope: Integration with TimeBack Ecosystem TrashCat operates within the TimeBack educational ecosystem, which handles standardized pre- and post-testing. We do **not** provide comprehensive standardized diagnostic assessments. Our internal assessment mechanisms (Speedrun mode, practice performance tracking) are designed for **in-app adaptation and motivation**, not external reporting of learning outcomes. ### Reporting Scope: Teacher and Parent Visibility via TimeBack TrashCat does not provide in-app teacher dashboards or parent reporting interfaces. All learning events are published to the TimeBack platform using the **1EdTech Caliper** standard, providing unified reporting, standardized data format, reduced redundancy, and comprehensive visibility without TrashCat-specific interfaces. ## Overarching Decisions ### ITD: We embed mathematical practice within an endless runner where game-controlled variable speed enforces pedagogical timers without player speed control. - **Context:** Building fluency requires repeated timed practice; traditional methods like flashcards are monotonous and fail to maintain engagement. - **Alternatives:** - Static flashcard app: focused but lacks engagement needed for sustained practice. - Untimed/turn-based game: supports accuracy but fails to develop speed component of fluency. 
- Player-controlled speed: creates perverse rushing incentives, undermining learning quality. - **Rationale:** The endless runner's continuous movement and time pressure naturally align with timed recall needs for fluency building. By making all speed control automatic and game-determined (never player-controlled), we maintain complete pedagogical control over pacing, prevent inappropriate rushing, and create performance-based competitive differentiation while preserving motivational benefits. ### ITD: We separate diagnostic assessment (Speedrun) from fluency building (Practice) into two distinct, sequential modes. - **Context:** Students need to both demonstrate current knowledge and practice what they're still learning; mixing these creates confusion and penalizes students for facts they haven't learned. - **Alternatives:** - Single adaptive mode: blurs purpose for students ("Am I being tested or practicing?"). - One-time pre-test: fails to capture learning from outside the app. - **Rationale:** Two modes establish clear daily rhythm: "Show us what you know today" (Speedrun), then "Train to get faster" (Practice). Speedrun provides daily baseline enabling highly targeted Practice, reducing cognitive load and improving engagement. ### ITD: We support all four basic mathematical operations aligned with Common Core standards (Multiplication, Addition, Subtraction, Division). - **Context:** While the prototype focused on multiplication, comprehensive math fluency requires covering all foundational operations students learn concurrently. - **Alternatives:** - Focus only on multiplication: simpler product but severely limited market and educational impact. - Custom non-standard curriculum: difficult for educators to integrate into existing lesson plans. - **Rationale:** Covering full basic fluency scope makes the product a complete solution for schools. Aligning with Common Core adopts a widely-recognized, research-backed framework that builds trust and simplifies adoption for educators. ### ITD: We scope all learning sessions at the full skill level, not at specific fact sets or families. - **Context:** A student's knowledge is holistic; scoping at smaller levels (e.g., just 'x5' table) creates artificial silos preventing effective mixing of new material with long-term review. - **Alternatives:** - Session per fact set/family: gives users control but breaks spaced repetition model and leads to inefficient practice. - **Rationale:** Full-skill scoping empowers spaced repetition and fact selection algorithms to work as intended, creating seamless adaptive experience covering entire curriculum. While scope is full skill, practice sorting logic naturally focuses and scaffolds practice, starting with earliest fact sets and progressing logically. ### ITD: We use multiple-choice questions with in-game answer objects for timed gameplay, reserving alternative formats for untimed interventions. - **Context:** Building fluency requires measuring rapid recall; open-ended input introduces confounding variables (typing speed) unrelated to mathematical ability, and doesn't integrate with endless runner mechanics. - **Alternatives:** - Open-ended text input: closer to worksheets but conflates math recall with typing speed. - Handwriting recognition: technologically complex and not suitable for fast-paced game. - **Rationale:** MCQs eliminate input mechanics friction, providing pure signal of mathematical fluency without confounding variables. 
Alternative formats (fill-in-the-blank, spinner) are reserved for interventions where goals shift to varied reinforcement—these provide deeper engagement with specific facts after errors without compromising timing precision needed for fluency measurement during gameplay. ### ITD: We generate pedagogically-informed distractors based on common mathematical errors, filtered for validity (positive, reasonable range). - **Context:** Random distractors don't help diagnose specific misunderstandings; pedagogically-informed distractors representing common mistakes provide stronger diagnostic signals. - **Alternatives:** - Random distractors: simple but provides little diagnostic value. - Fixed pre-authored distractors: time-consuming and difficult to maintain. - No filtering: creates implausible options reducing diagnostic value. - **Rationale:** Algorithmic generation of error-based distractors provides best balance of pedagogical value and scalability, programmatically creating common errors for any fact then filtering to ensure all options are valid, ensuring questions effectively identify student misconceptions without manual authoring or invalid choices. ## Practice Mode ### ITD: We group facts into pedagogically-sound fact sets to manage cognitive load and implement structured curriculum sequence. - **Context:** Presenting all problems simultaneously (e.g., all 100 multiplication facts) would be cognitively overwhelming; learning is more effective with small, coherent chunks building on prior knowledge. - **Alternatives:** - Random presentation: simplest approach but pedagogically ineffective, causing frustration and cognitive overload. - Single long sequence: provides order but lacks logical grouping for pattern recognition. - **Rationale:** This approach respects cognitive load theory by breaking large skills into manageable chunks with enforced curriculum sequence ensuring coherent pedagogical progression. Specific groupings (doubles, near-doubles, make-10) are based on established arithmetic teaching strategies, helping students achieve automaticity efficiently by practicing related facts together. ### ITD: We move individual facts bidirectionally through discrete, research-backed learning stages based on performance. - **Context:** Learning isn't monolithic; students move from exposure to accuracy to speed but can also forget—purely linear models don't account for this reality. - **Alternatives:** - Promotion-only model: simpler but fails to address knowledge decay when students forget. - Single adaptive loop: less transparent and harder to implement distinct pedagogical rules for different learning phases. - **Rationale:** Individual fact-level tracking provides granularity needed for true personalization; discrete stages map directly to learning science of fluency development. Demotion capability is critical for responding to forgotten knowledge, ensuring mastery is not just achieved but durably retained over time. ### ITD: We implement timers through game-controlled dynamic speed and distance-based spawning rather than independent countdown clocks. - **Context:** Timing can be implemented through abstract UI timers or physical game mechanics (speed/distance); traditional educational games use countdown timers divorced from the game world. - **Alternatives:** - Independent countdown timer UI: makes timing explicit but decouples from gameplay, reducing urgency. - Constant speed with variable distance: works but creates visual inconsistency. 
- Player-controlled speed: undermines pedagogical pacing goals, creates perverse incentives. - **Rationale:** Game-controlled variable speed creates seamless integration of pedagogy and gameplay, with target-tracking smoothing ensuring natural transitions. Different implementations between Practice (variable distance, variable speed) and Speedrun (fixed distance, variable speed) optimize each mode's specific goals while maintaining automatic, game-controlled speed in both modes. ### ITD: We use simplified sequential answer presentation for Assessment stage to reduce initial cognitive load. - **Context:** First in-app assessment involves high cognitive load as students must retrieve answers and adapt to assessment format; evaluating four options simultaneously can be overwhelming. - **Alternatives:** - Same representation for all stages: simpler to implement but creates steep difficulty curve. - Shorter timer before first answer: rushes students before they've retrieved the fact. - Strict 4-second total timer: physically impossible at game-controlled running speed. - **Rationale:** Altering visual presentation and structuring timer around first answer appearance effectively lowers Assessment stage cognitive load without breaking game flow or creating physically impossible timing constraints. It's a UX-driven solution achieving a pedagogical goal: making first in-app assessment less intimidating and more accessible while respecting endless runner format physical constraints. ### ITD: We prioritize fact selection by recency of error, curriculum sequence, and fluency stage using multi-level priority system. - **Context:** Many potential facts could be shown in any practice session; purely random or sequential approaches would be pedagogically inefficient. - **Alternatives:** - Random selection: simple but fails to reinforce recent learnings or correct mistakes timely. - Strict sequential order: too rigid, doesn't adapt to actual performance. - **Rationale:** Prioritized approach creates highly personalized, efficient learning experience ensuring mistakes are corrected immediately (highest priority), curriculum is respected above all else (foundational facts before advanced), learning progression is logical (earlier-stage facts first), cognitive load is managed (specific ratios of fluency stages), and practice is varied (sorting by last asked time plus randomization). ### ITD: We provide adaptive reinforcement interventions on every incorrect answer, in parallel with state demotion, with intervention type selected based on error patterns. - **Context:** Incorrect answers provide multiple signals: decreased mastery data point, real-time moment needing reminder, and specific error nature (slip vs. struggle vs. misconception) suggesting different intervention intensities. - **Alternatives:** - Intervene instead of demoting: creates forgiving UX but corrupts learning model integrity. - Demote without intervention: accurate but pedagogically weak, penalizes without support. - Random intervention selection: simpler but doesn't match support intensity to error severity. - **Rationale:** Parallel adaptive approach provides both data integrity (via demotion) and appropriately-calibrated pedagogical support (via intervention). 
When mistakes occur, demotion applies immediately for accurate long-term knowledge modeling while adaptive intervention selection analyzes error patterns to choose most appropriate type, separating long-term state tracking from immediate instructional support while matching intervention intensity to diagnosed needs. ### ITD: We ensure continuous gameplay by hierarchically degrading fact selection rules (general cooldown → Review cooldown → working memory limits) while preserving Repetition cooldowns. - **Context:** In intense sessions, students might answer so quickly that all eligible facts are temporarily on cooldown; having no question stalls the game and breaks UX. - **Alternatives:** - End the session: frustrating and confusing UX. - Show "come back later" message: breaks core gameplay loop. - Bypass all cooldowns including Repetition: compromises long-term retention. - **Rationale:** Hierarchical degradation guarantees continuous question stream while preserving most pedagogically critical constraint (cross-session spaced repetition). This pragmatic trade-off momentarily bends lower-priority pedagogical rules (general cooldown first, then Review cooldown, then working memory limits) for the larger goal of keeping students engaged and practicing, never compromising cross-session repetition schedule driving long-term retention. ### ITD: We limit working memory load by dynamically capping concurrent facts building fluency based on recent performance. - **Context:** Working memory has strict capacity limits; simultaneously building automaticity for too many facts prevents effective automatic recall for any of them. - **Alternatives:** - No limit: leads to cognitive overload and poor retention. - Fixed limit for all students: too restrictive for advanced learners, too aggressive for struggling students. - **Rationale:** Dynamic working memory limit protects against cognitive overload while adapting to individual capacity, allowing high-performing students to work on more facts simultaneously while struggling students get more focused, manageable sets. ### ITD: We adapt difficulty parameters dynamically based on recent accuracy and initial knowledge level, filtering Expert difficulty by first Speedrun performance. - **Context:** Students' cognitive capacity fluctuates moment-to-moment; students entering with perfect knowledge require fundamentally different learning paths than those building automaticity from scratch. - **Alternatives:** - Manual difficulty selection: burdens users with decisions they may not be equipped to make. - Static algorithm: fails to provide appropriate challenge or support. - No initial knowledge filtering: allows students needing practice to bypass necessary spaced repetition. - **Rationale:** Dynamic difficulty maintains optimal challenge zone for each student at each moment by adjusting parameters based on real-time performance. Filtering difficulty eligibility by initial knowledge level ensures aggressive progression (Expert difficulty) is reserved for students who've already built automaticity externally and don't need the app's spaced repetition, allowing students who know material to finish quickly while maintaining pedagogical rigor for those needing systematic practice. ### ITD: We apply bulk promotion to accelerate progress when mastery is demonstrated across a fact set in higher difficulty levels. 
- **Context:** Advanced learners recognize patterns and generalize knowledge; if a student correctly answers significant portions of a fact set with high accuracy, it signals automaticity for the underlying pattern, not just individual facts. - **Alternatives:** - Require individual proof for every fact: ensures complete coverage but creates unnecessary busywork for students with clear automaticity. - Skip fact sets entirely: too aggressive, might miss edge cases. - **Rationale:** Bulk promotion is pedagogically sound when applied carefully; curriculum sequencing ensures "skipped" facts naturally surface in subsequent practice when algorithm selects next unknown fact from that set. Students ultimately practice all facts, but system respects demonstrated mastery by not forcing redundant repetition—only enabled in higher difficulty levels where students have proven ability to generalize. ### ITD: We provide timed recall from the start with no untimed Grounding stage, using interventions to address revealed gaps. - **Context:** Untimed initial exposure applies to initial concept learning when students encounter entirely new ideas; our goal is building automaticity for facts students have already encountered and should retrieve (even if slowly or via strategy). - **Alternatives:** - Include untimed Grounding stage: appropriate for initial concept learning but unnecessary for automaticity building, doesn't create retrieval pressure. - Separate non-game mode for Grounding: adds complexity for no pedagogical benefit. - **Rationale:** Absence of untimed stage is pedagogically correct for automaticity goals—students entering the app have received instruction on these facts; they need timed retrieval practice to move from strategy-based (slow) to automatic (fast) retrieval. When timed assessment reveals gaps (via incorrect answer), we immediately provide untimed intervention formats (fill-in-the-blank, spinner, audio reinforcement) giving scaffolded, pressure-free exposure for that specific fact—more efficient than providing untimed exposure to all facts preemptively. ### ITD: We transition to a no-reward random review mode after all facts in a skill reach Mastered stage. - **Context:** Students will eventually master all material in a skill; we need clear end-state acknowledging this achievement while allowing long-term knowledge maintenance. - **Alternatives:** - Lock the skill: demotivating, removes long-term review opportunity critical for retention. - Continue normal algorithm: confusing as there are no more stages to progress to, devalues XP by rewarding already-mastered content. - **Rationale:** This approach provides satisfying conclusion to active learning while encouraging long-term retention. Switching to no-reward, random-review mode clearly communicates primary goal achievement, signals skill completion for congratulatory messaging, and keeps skill available for low-stakes practice. ## Speedrun Mode ### ITD: We design Speedrun as a finite, no-lives race where automatic performance-based speed penalties create competitive differentiation without player speed control. - **Context:** Speedrun serves as diagnostic assessment and competitive challenge; if students could fail/die, some would never complete the assessment; if speed weren't affected by performance, there'd be no competitive incentive. - **Alternatives:** - Lives system in Speedrun: incomplete diagnostic data for struggling students, demotivating. 
- No speed penalty for errors: uniform completion time regardless of performance, eliminating competitive differentiation. - Player-controlled speed: allows intentional rushing creating "death spiral". - **Rationale:** No-lives, time-based design creates low-stakes diagnostic with natural competitive differentiation—every student completes assessment (ensuring complete data) but high performers achieve better times (creating leaderboard motivation). Fixed base speed with automatic performance-based slowdowns prevents intentional rushing while incentivizing correct answers and obstacle avoidance, with finite nature (one correct or two mistakes per fact) keeping runs short and manageable. ### ITD: We use random fact selection across the full skill in Speedrun, ignoring fact sets, cooldowns, difficulty management, and working-memory limits. - **Context:** Stage-based practice algorithm optimizes for learning via strategic ordering and pacing; Speedrun optimizes for demonstration speed where any incomplete fact is acceptable at any moment. - **Alternatives:** - Reuse practice selection with flags: increases conditional complexity, couples experiences. - Post-completion randomizer: works only for mastered facts, retains cooldowns. - **Rationale:** Pure random selection maximizes throughput, is easy to reason about and explain, and keeps practice engine uncompromised by Speedrun-specific conditionals. ### ITD: We define Speedrun completion as exactly one correct answer per fact regardless of previous incorrect attempts. - **Context:** More complex criteria (e.g., 3 correct after a miss) complicate tracking and user comprehension. - **Alternatives:** - Variable or higher fixed counts: more rigorous but slower, harder to explain, less competitive. - **Rationale:** One-and-done demonstrates basic fluency while keeping runs brisk; we can raise rigor later without architectural churn. ### ITD: We scale time-to-answer inversely with recent accuracy through automatic game-controlled speed adjustments in Speedrun. - **Context:** Speed should reflect fluency; as accuracy rises, time pressure should increase to differentiate performance, but this must be game-controlled to prevent diagnostic manipulation. - **Alternatives:** - Tiered time windows: easier to message but less responsive and smooth. - Player-controlled speed boost: allows students to manipulate diagnostic results. - **Rationale:** Continuous game-controlled inverse relationship rewards every incremental accuracy gain with tangible speed benefit while maintaining diagnostic integrity by preventing player manipulation. ### ITD: We use fact families (same three numbers) and sample one per family for breadth in Speedrun. - **Context:** Fact families are well-established educational constructs highlighting inverse relationships and number sense. - **Alternatives:** - Include all facts: maximal coverage but impractically long runs. - Pure random without family constraint: simple but risks skew, repetition, poor coverage. - **Rationale:** Sampling one per family balances breadth and run length, matches educator expectations for fact-family coverage, and creates consistent behavior across all four operations. ### ITD: We retire a Speedrun fact after two mistakes in the same run. - **Context:** Repeatedly surfacing the same missed item can stall progress and harm flow in a timed challenge. - **Alternatives:** - Never retire: maximizes exposure but increases churn and frustration. 
- Retire after one mistake: faster runs but less evidence of eventual correctness.
- **Rationale:** A two-mistake limit strikes a balance between giving the student a fair second chance and avoiding run-stalling repetition; errors are still recorded for downstream placement/practice.

### ITD: We apply Placement promotions to Practice only on the first completed Speedrun per skill per day, strictly within the same operation, with promotion target based on initial knowledge level.

- **Context:** Speedrun serves as once-daily diagnostic for specific operation; both XP rewards and placement logic must be limited to first completed run to protect learning integrity and prevent gaming.
- **Alternatives:**
- Placement on every run: allows farming placement by repeated guessing.
- Placement only for exact facts: narrower mapping, misses transfer within related facts.
- Placement across operations: overgeneralizes knowledge.
- Fixed promotion target: forces high-performers through unnecessary practice.
- **Rationale:** Limiting placement to once per day aligns with XP rewards and maintains Speedrun's role as daily diagnostic tool—first completed run provides fresh knowledge evidence while subsequent runs add no new diagnostic value. Operation-scoped boundary reflects pedagogical reality that students develop fluency at different rates across operations; using first Speedrun to detect initial knowledge level allows differentiating between students needing to build automaticity versus those with existing mastery needing minimal maintenance.

### ITD: We adapt placement promotion targets based on initial Speedrun performance (Perfect: to Mastered; Near-Perfect: first-attempt to Mastered, retry to Review; Needs Practice: to Review).

- **Context:** High school students using elementary math apps already know the material through years of prior education; forcing them through full spaced repetition wastes time and creates disengagement, but students with incomplete knowledge need the full learning path.
- **Alternatives:**
- Single promotion target for all students: treats advanced students same as beginners, creates frustration.
- Manual difficulty selection: cognitive burden, doesn't respond to performance data.
- Gradual promotion only: forces students with demonstrated mastery through unnecessary repetition.
- Multiple fine-grained thresholds: more complex; three-level system captures meaningful pedagogical distinctions.
- **Rationale:** Using first Speedrun accuracy as permanent diagnostic makes pedagogical sense—students entering with perfect accuracy have already built automaticity externally and don't need spaced repetition to consolidate already-durable knowledge. Students with very high accuracy (>95%) have most facts mastered but a few gaps; promoting first-attempt-correct facts to Mastered acknowledges existing knowledge while enrolling retry-required facts in Review for targeted practice.

### ITD: We advance Speedrun-proven facts to Review or Mastered stages based on initial knowledge level, promoting entire fact families strictly within the operation being practiced.

- **Context:** Correct Speedrun answer signals existing knowledge for that specific operation; forcing re-learning from scratch in Practice for that operation is inefficient and demotivating.
- **Alternatives:**
- Promote only specific fact: fails to credit automaticity across related family facts.
- Smaller promotion: less efficient, forces practicing already-demonstrated knowledge.
- Always promote to Review: traditional conservative, forces perfect-knowledge students through unnecessary repetition. - Always promote to Mastered: too aggressive for students needing practice. - Promote across operations: overgeneralizes knowledge. - **Rationale:** We credit entire fact family within operation to acknowledge demonstrated automaticity across related facts in that operation. For students needing practice, promoting to Review respects demonstrated knowledge by skipping introductory stages but maintains spaced repetition for durable automaticity; for students with perfect/near-perfect initial knowledge, promoting to Mastered acknowledges they've built automaticity through external mechanisms and don't need the app to rebuild what's already automatic. ## Motivation and Rewards ### ITD: We award XP based on active time (1 XP per minute) with mode-specific caps: Practice XP stops at 100% skill completion; Speedrun XP awarded once per day. - **Context:** We need to incentivize time investment in building automaticity while preventing endless grinding through completion caps (Practice) and daily caps (Speedrun); time-based approach is maximally simple but requires architectural controls. - **Alternatives:** - XP per milestone/score: more pedagogically sophisticated but requires complex tracking, less immediate feedback, or creates high-stakes anxiety. - Unlimited XP after completion or per Speedrun: enables XP farming on mastered material or repeated diagnostic grinding. - No XP after first mastery: too restrictive, prevents motivated students from continuing to earn rewards. - **Rationale:** Time-based XP is viable because we control problematic conditions: students cannot rush (game-controlled speed), cannot farm (completion caps + daily Speedrun caps), and cannot dawdle (algorithmic targeting + cooldowns). Simplicity and immediacy of "run time = XP time" creates tight motivational loop while sophisticated algorithm ensures quality practice beneath the surface; daily Speedrun cap guides students back to Practice Mode for main learning. ### ITD: We always award cosmetic rewards for first-time Practice stage completion of fact sets, including placement-triggered completions, but never award XP for placement. - **Context:** Completing a fact set is significant milestone; placement system can cause multiple fact set completions at once—major achievement deserving immediate acknowledgement. - **Alternatives:** - No reward for placement completion: misses key opportunity to reward demonstrated Speedrun knowledge. - Award XP for placement completion: violates principle of awarding XP only for building new automaticity through practice, creates "double-dipping". - **Rationale:** We create tight feedback loop between achievement and reward; awarding cosmetic for every first-time fact set completion provides immediate positive feedback reinforcing daily Speedrun value and celebrating student's existing knowledge, while separating from XP prevents corrupting our metric for building new automaticity. ### ITD: We provide dedicated progress screen visualizing status and rewards for every fact set within the skill being practiced. - **Context:** To stay motivated, students need to see their own growth; while game provides immediate question-level feedback, it's hard to see bigger journey picture without dedicated summary view. - **Alternatives:** - No progress screen: leaves students without overall accomplishment sense or clear roadmap, highly demotivating. 
- In-game progress indicators only: useful but don't provide detailed reflective overview needed to understand curriculum scope. - **Rationale:** Dedicated progress screen makes mastery visible and motivating, serving as primary tool for students to reflect on learning and see direct effort results, critical for long-term engagement. ### ITD: We provide competitive leaderboards for Speedrun across four configurations (Global/Weekly, Global/All-Time, School/Weekly, School/All-Time), including all completed runs regardless of retired facts. - **Context:** Competition and social comparison are powerful motivators for target age group, but different students are motivated by different competitive contexts (global vs. school-level) and temporal scopes (weekly vs. all-time). - **Alternatives:** - Single global leaderboard only: demotivating for students not in top tier or in smaller/lower-performing schools. - School-only leaderboards: limits aspirational goals. - Weekly-only leaderboards: erases historical achievement. - Exclude runs with retired facts: penalizes attempting challenging material. - **Rationale:** Four-category system ensures more students find resonant leaderboard context, maximizing motivational reach by providing multiple achievement/recognition pathways. Including all completed Speedruns (even with retired facts) rewards effort and completion rather than perfection, which is more pedagogically sound—we want students attempting all material, not just facts they're confident about. ### ITD: We display an always-visible Fact Grid during all gameplay with the current question's fact highlighted, using single grid format for all four operations. - **Context:** While progress screen provides high-level overview, students benefit from immediate context during gameplay; grids are standard educational tools for multiplication and addition showing patterns and relationships. - **Alternatives:** - Show grid only between questions or on-demand: adds friction, fails to make grid constant peripheral learning support. - Use different visualizations for different operations: adds complexity, breaks visual consistency. - **Rationale:** Always-visible grid provides constant, low-cognitive-load context; coercing all four skills into same grid format prioritizes simple, consistent UX over adhering to traditional representations for each operation—benefit of single, predictable interface outweighs awkwardness of representing subtraction or division in grid. ## Gameplay and Feedback ### ITD: We use in-game consequences (lives system) in Practice Mode where incorrect answers lose lives and 5 consecutive correct answers grant lives. - **Context:** In game environments, students need immediate feedback connecting learning performance to game outcomes; pure pedagogical feedback can feel disconnected from core gameplay loop. - **Alternatives:** - No game consequences: purely informational feedback, reduces motivational power. - Score-only consequences: less visceral, doesn't create same tension or achievement. - Life on every correct answer: too generous, removes challenge. - **Rationale:** Lives system creates tight coupling between learning performance and game survival, making every answer feel consequential and transforming abstract right/wrong feedback into tangible game stakes. Bi-directional system (lose on mistakes, gain on streaks) mirrors pedagogical promotion/demotion concept in intuitively understood game mechanics. 
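To make the mechanic concrete, a client might implement the rule along these lines (an illustrative sketch: the 5-streak threshold is the decision above, while the starting value and life cap are assumptions, not API-defined):

```typescript
// Practice Mode lives rule: lose a life per incorrect answer,
// gain a life after every 5 consecutive correct answers.
interface LivesState {
  lives: number;
  streak: number; // consecutive correct answers
}

const MAX_LIVES = 3; // assumption: the cap is game-specific, not part of the API

function applyAnswer(state: LivesState, correct: boolean): LivesState {
  if (!correct) {
    // Mistake: lose a life and reset the streak
    return { lives: state.lives - 1, streak: 0 };
  }
  const streak = state.streak + 1;
  if (streak % 5 === 0) {
    // Streak reward: gain a life, capped
    return { lives: Math.min(state.lives + 1, MAX_LIVES), streak };
  }
  return { ...state, streak };
}
```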
### ITD: We provide multi-modal feedback (visual and audio) for every answer with redundant success/error indicators. - **Context:** Multi-sensory feedback is more effective than single-modality feedback; students process information through multiple channels, and redundant feedback across modalities reinforces messages more strongly. - **Alternatives:** - Visual feedback only: misses auditory learners and students not looking at screen. - Audio feedback only: misses students with sound off or hearing impairments. - No immediate feedback: violates fundamental learning principles about timely correction. - **Rationale:** Multi-modal feedback is well-established best practice in educational technology, maximizing likelihood that every student receives and processes correctness signal regardless of sensory preferences or momentary attention state. ### ITD: We play audio of the complete equation when presenting each question in both Practice and Speedrun modes. - **Context:** Reading fluency and mathematical fluency are separate but related skills; some students may struggle to decode written equations quickly, confounding mathematical fact recall measurement. - **Alternatives:** - Text only: creates barrier for students with reading difficulties or auditory learning preferences. - Audio only: creates barrier for students with hearing difficulties or visual learning preferences. - Optional audio: adds cognitive load deciding whether to use it, creates friction activating it. - **Rationale:** Automatic audio narration removes reading as confounding variable in measuring mathematical fluency, supports universal design for learning principles by providing multiple means of representation, and automatic nature removes decision friction ensuring consistent multi-sensory presentation. ### ITD: We play two-part audio correction feedback on incorrect answers (brief "wrong" cue followed by correct equation with answer). - **Context:** Incorrect answers represent both failure points and learning opportunities; simply indicating "wrong" misses the chance to teach correct answer at the exact moment student is most receptive. - **Alternatives:** - Visual correction only: misses auditory encoding pathway and students not looking at screen. - No correction, just "wrong" feedback: misses learning opportunity entirely. - Delay correction until intervention: loses immediate, context-rich moment when student is most primed to learn. - **Rationale:** Immediate audio correction capitalizes on error-driven learning opportunity—research shows errors, when immediately corrected, are powerful learning moments. Audio modality ensures correction is received even if visual attention is elsewhere, with two-part structure clearly separating evaluative feedback from instructional content. ### ITD: We wait for all feedback audio to complete before presenting the next question, but each question receives its full timer allocation starting fresh. - **Context:** When students answer incorrectly, we play correction audio; immediately displaying next question while this plays would create confusing audio overlap. - **Alternatives:** - Allow audio overlap: creates confusing audio collisions where correction audio from Question A plays while Question B is presented. - No feedback audio: removes valuable multi-sensory learning channel, reduces accessibility. 
- **Rationale:** Audio gate is simple practical solution to prevent audio overlap; since each question gets full timer regardless of when it appears, fluency-building objective is preserved—every recall is still strictly timed. Variable delay between questions is merely buffer for feedback processing, doesn't affect core pedagogical mechanic of timed mathematical recall. ## Developer Guide ### Getting Started # Getting Started: Using the Fluency API This guide shows you how to build a learning application or game on top of the Fluency API. ## What the API Provides Building an effective learning system requires deep expertise in cognitive science: spaced repetition algorithms, diagnostic assessment, error pattern analysis, and adaptive intervention selection. **The Fluency API handles all of this complexity for you.** You focus on building an engaging game or app. We provide the adaptive learning engine that determines which question to ask next, when to review previous material, and how to remediate errors. Whether you're building a 3D runner, a mobile quiz app, or a web-based practice tool, you get research-backed learning science without implementing any of it yourself. The API provides two learning modes (**Speedrun** for diagnostic assessment, **Practice** for adaptive training), targeted interventions for incorrect answers, and built-in progress tracking with XP and leaderboards. ## Environments We provide two environments for development and production: ### Integration (Testing) - **API URL**: `https://api-integration.trashcat.rp.devfactory.com` - **User Pool ID**: `us-east-1_cfmKouGuW` - **User Pool Domain**: `trashcat-integration-userauth.auth.us-east-1.amazoncognito.com` - **Region**: `us-east-1` ### Production - **API URL**: `https://api.trashcat.learnwith.ai` - **User Pool ID**: `us-east-1_NBNupqDm5` - **User Pool Domain**: `trashcat-production-userauth.auth.us-east-1.amazoncognito.com` - **Region**: `us-east-1` ## Step 1: Choose Your Authentication Model ### Option 1: Client Credentials **Use this if:** - You have your own backend server - You have your own user authentication system - You want full control over user management **How it works:** 1. Contact trashcat@trilogy.com to request client credentials (Client ID + Client Secret) 2. Your backend authenticates users using your own auth system 3. Your backend exchanges client credentials for an access token via Cognito OAuth endpoints: ``` POST https://{USER_POOL_DOMAIN}/oauth2/token Content-Type: application/x-www-form-urlencoded grant_type=client_credentials& client_id={CLIENT_ID}& client_secret={CLIENT_SECRET} ``` 4. Your backend makes API requests with: - `Authorization: Bearer {access_token}` header - `X-User-Email: {user_email}` header (to identify which user) **Advantages:** - Full control over user management - Can integrate with existing auth systems - Centralized token management ### Option 2: Use Our Authorization Server **Use this if:** - You're building a frontend-only game (web, Unity WebGL, mobile) - You want to minimize backend infrastructure - You want us to handle user management **How it works:** 1. Contact trashcat@trilogy.com to request an App Client ID and provide your redirect URLs 2. Integrate AWS Amplify or implement OAuth 2.0 flow in your game 3. Users sign in via our Cognito User Pool (supports email/password, Google SSO, magic links) 4. 
Your game receives an ID token and uses it for all API calls **Example using AWS Amplify**: ```typescript import { Amplify } from 'aws-amplify'; import { fetchAuthSession, signIn } from 'aws-amplify/auth'; // Configure Amplify const currentOrigin = window.location.origin; Amplify.configure({ Auth: { Cognito: { userPoolId: 'us-east-1_cfmKouGuW', userPoolClientId: '', loginWith: { oauth: { domain: 'trashcat-integration-userauth.auth.us-east-1.amazoncognito.com', scopes: ['openid', 'email', 'profile'], redirectSignIn: [`${currentOrigin}/`], redirectSignOut: [`${currentOrigin}/`], responseType: 'code', }, }, }, }, }); // Sign in await signIn({ username: email, password }); // Get ID token for API calls const session = await fetchAuthSession(); const idToken = session.tokens?.idToken?.toString(); ``` **Advantages:** - No backend required - User management handled automatically - Token refresh handled by auth library ## Step 2: Make API Requests Once you have a token, include it in all API requests: ``` Authorization: Bearer {id_token} ``` For client credentials flow, also include: ``` X-User-Email: {user_email} ``` **Example API call:** ```typescript const response = await fetch('https://api-integration.trashcat.rp.devfactory.com/learning/v1/skills', { headers: { Authorization: `Bearer ${idToken}`, 'Content-Type': 'application/json', }, }); const { skills } = await response.json(); ``` ## Step 3: Explore the API The [API Reference](https://docs.trashcat.learnwith.ai/api-reference.txt) documents all available endpoints with request/response schemas. You can make live requests directly from the interactive playground in your browser. **Alternative tools:** - **Postman/Insomnia**: Import the OpenAPI spec from https://docs.trashcat.learnwith.ai/openapi.json. - **LLM integration**: Use https://docs.trashcat.learnwith.ai/llms.txt for AI-friendly documentation ## Example: Basic Learning Loop This example demonstrates the core request/response flow: fetch a question, display it to the user, submit their answer, and show feedback. ```typescript // 1. Get available skills const skillsRes = await fetch(`${API_URL}/learning/v1/skills`, { headers: { Authorization: `Bearer ${idToken}` }, }); const { skills } = await skillsRes.json(); // 2. Initialize session const sessionId = crypto.randomUUID(); // 3. Get a question const questionRes = await fetch(`${API_URL}/learning/v1/skills/${skills[0].id}/algorithms/practice/questions`, { method: 'POST', headers: { Authorization: `Bearer ${idToken}`, 'Content-Type': 'application/json', }, body: JSON.stringify({ sessionId }), }); const { question } = await questionRes.json(); // 4. Display question to user console.log(question.text); // "7 × 8 = ?" console.log(question.choices); // [{ id: "1", value: 56, label: "56" }, ...] // 5. Submit answer (after user selects) const answerRes = await fetch( `${API_URL}/learning/v1/skills/${skills[0].id}/algorithms/practice/questions/${question.id}/answers`, { method: 'POST', headers: { Authorization: `Bearer ${idToken}`, 'Content-Type': 'application/json', }, body: JSON.stringify({ sessionId, answer: { choiceId: selectedChoiceId }, timeTookToAnswerMs: 3000, state: 'Answered', capabilities: { intervention: false }, // Disable interventions for MVP activeSessionDurationSec: 60, }), }, ); const feedback = await answerRes.json(); // 6. 
Show feedback
console.log(feedback.answerType); // "Correct" or "Incorrect"
console.log(feedback.correctAnswer); // { value: 56 }
```

## Error Handling

If a request fails, the API returns standard HTTP error codes with descriptive messages:

```typescript
// Example error response (401 Unauthorized)
{ "statusCode": 401, "message": "Unauthorized", "error": "Invalid or expired token" }

// Example error response (400 Bad Request)
{ "statusCode": 400, "message": "Bad Request", "error": "sessionId is required" }
```

Always check the HTTP status code before parsing the response. For production error handling patterns, see the [Complete Example](https://docs.trashcat.learnwith.ai/guides-complete-example.guide.txt) guide.

---

## Related Docs

### [API Reference](https://docs.trashcat.learnwith.ai/api-reference.txt)
Complete endpoint documentation with request/response schemas. Use the interactive playground to make live requests directly from your browser.

### [Complete Example](https://docs.trashcat.learnwith.ai/guides-complete-example.guide.txt)
Production-ready code showing error handling, session management, and integration patterns for building robust learning applications.

### [Interventions Guide](https://docs.trashcat.learnwith.ai/guides-interventions.guide.txt)
Deep dive into the adaptive intervention system. Learn when each intervention type triggers and how to implement the UI for personalized error correction.

### [Timing Fields Guide](https://docs.trashcat.learnwith.ai/guides-timing-fields.guide.txt)
Critical timing implementation details. Understanding `timeTookToAnswerMs` and `activeSessionDurationSec` is essential for the learning algorithm to function correctly.

### Timing Fields

# Timing Fields

When submitting answers, accurate timing data is critical for the learning algorithm to function correctly. The API uses this data to determine spaced repetition intervals, assess fluency progress, and calculate XP rewards.

## Required Fields

### `timeTookToAnswerMs`

Time in milliseconds for this specific question.

**How to track it:**
- Start timing when the question is displayed
- Stop timing when the user submits an answer
- Include think time and interaction time for this question only

**Example:**
```typescript
const questionStartTime = Date.now();
// ... show question, wait for user answer
const timeTookMs = Date.now() - questionStartTime;
```

### `activeSessionDurationSec`

Total active play time in seconds since the session started.

**How to track it:**
- **Only count active learning time** (when the user is answering questions)
- **Exclude**: menu navigation, pauses, game-over screens, loading times, etc.
- Track using a game timer that pauses during non-learning activities (see the sketch below)

**Example:**
```typescript
let activeSessionTime = 0; // seconds

function onQuestionDisplayed() {
  const questionStartTime = Date.now();
  // ... wait for user answer
  const questionDurationMs = Date.now() - questionStartTime;
  activeSessionTime += questionDurationMs / 1000; // Add to total
}
```
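The per-question accumulator above works for simple flows. For games with pause menus, loading screens, and game-over states, a small pause-aware timer is easier to keep honest. This is an illustrative sketch; the `ActiveTimer` class and its method names are not part of the API:

```typescript
// Minimal pause-aware timer: accumulates elapsed time only while running.
class ActiveTimer {
  private accumulatedMs = 0;
  private runningSince: number | null = null;

  start(): void {
    if (this.runningSince === null) this.runningSince = Date.now();
  }

  stop(): void {
    if (this.runningSince !== null) {
      this.accumulatedMs += Date.now() - this.runningSince;
      this.runningSince = null;
    }
  }

  // Value to send as activeSessionDurationSec
  get seconds(): number {
    const liveMs = this.runningSince !== null ? Date.now() - this.runningSince : 0;
    return (this.accumulatedMs + liveMs) / 1000;
  }
}

// Usage: start when a question appears, stop on pause/menu/game-over.
const timer = new ActiveTimer();
// onQuestionDisplayed: timer.start();
// onPauseMenuOpened:   timer.stop();
// when submitting:     activeSessionDurationSec: timer.seconds
```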
## Full Example

```typescript
const sessionId = crypto.randomUUID();
let activeSessionTime = 0;

async function askQuestion() {
  const questionStartTime = Date.now();

  // Get question
  const questionRes = await fetch(`${API_URL}/learning/v1/skills/${skillId}/algorithms/practice/questions`, {
    method: 'POST',
    headers: { Authorization: `Bearer ${idToken}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({ sessionId }),
  });
  const { question } = await questionRes.json();

  // Show question and wait for answer
  const selectedChoiceId = await getUserAnswer(question);

  // Calculate timing
  const timeTookMs = Date.now() - questionStartTime;
  activeSessionTime += timeTookMs / 1000;

  // Submit answer with timing
  const answerRes = await fetch(
    `${API_URL}/learning/v1/skills/${skillId}/algorithms/practice/questions/${question.id}/answers`,
    {
      method: 'POST',
      headers: { Authorization: `Bearer ${idToken}`, 'Content-Type': 'application/json' },
      body: JSON.stringify({
        sessionId,
        answer: { choiceId: selectedChoiceId },
        timeTookToAnswerMs: timeTookMs,
        state: 'Answered',
        activeSessionDurationSec: activeSessionTime,
        capabilities: { intervention: false },
      }),
    },
  );
  return answerRes.json(); // feedback for the caller
}
```

## Common Mistakes

**Including pause time in active session duration:**
```typescript
// ❌ Bad: Counts everything
const activeTime = Date.now() - sessionStart;

// ✅ Good: Only counts learning time
let activeTime = 0;
onQuestionShow(() => startTimer());
onPause(() => stopTimer());
```

**Using fixed timing instead of actual measurements:**
```typescript
// ❌ Bad: Fake timing
const timeTookMs = 3000; // Breaks algorithm

// ✅ Good: Real timing
const start = Date.now();
// ... wait for the user's answer
const timeTookMs = Date.now() - start;
```

---

## Related Docs

### [Getting Started](https://docs.trashcat.learnwith.ai/guides-getting-started.guide.txt)
Authentication and basic learning loop showing where timing fields fit into answer submission.

### [Complete Example](https://docs.trashcat.learnwith.ai/guides-complete-example.guide.txt)
Production code demonstrating proper timing tracking with pause/resume handling and state persistence.

### [API Reference](https://docs.trashcat.learnwith.ai/api-reference.txt)
Complete schema documentation for the answer submission endpoint, including timing field requirements and validation rules.

### Interventions

# Interventions: Adaptive Error Correction

When a student answers incorrectly, the Fluency API analyzes their error pattern and recommends a specific intervention type. You implement the UI; the API provides the intelligence.

## Why Interventions Matter

Research shows that immediate, targeted error correction is one of the most powerful tools for building durable memory. Without intervention, students may encode incorrect associations (like "7 × 8 = 54") into long-term memory, creating persistent errors that are hard to unlearn.

**Key learning science principles:**
- **Immediate feedback** prevents incorrect answers from being stored in memory
- **Active recall** (reproducing the answer) builds stronger memory traces than passive review
- **Varied practice modalities** (visual, audio, kinesthetic) engage multiple memory pathways
- **Escalating support** adapts to error patterns—simple mistakes get quick corrections, persistent errors get deeper remediation

The intervention system implements these principles automatically.
The API detects error patterns and selects the appropriate intervention type. You implement the UI for each type. ## How the API Selects Interventions The API uses error history to determine which intervention to apply: **First error on a fact:** - Light intervention (e.g., `CueFading.FRQ` - briefly show answer, then test recall) **Second consecutive error:** - Medium intervention (e.g., `DragDrop.FITB` - reconstruct answer through interaction) **Third or more consecutive errors:** - Heavy intervention (e.g., `TimedRepetition.Recall.FRQ` - extended practice with copy-cover-recall) **Cross-operation confusion** (e.g., answering 7+8 when asked 7×8): - Conceptual intervention (not currently implemented, reserved for future) The selection algorithm adapts based on the student's current session performance, their historical accuracy with this fact, and the type of error made. You don't control which intervention is selected—the API provides this intelligence. ## Intervention Types Reference Each intervention type serves a specific pedagogical purpose. The API returns an `interventionType` string; you implement the corresponding UI flow. **Note:** TrashCat's game shows one way to implement these interventions. Your implementation can differ—use your app's visual style, interaction patterns, and technical constraints. The core learning mechanics should remain consistent, but the presentation is yours to design. ### `None` **What it is:** No intervention needed. Student continues to the next question. **When triggered:** First-time correct answer, or after a failed intervention when the system decides to move forward. **Implementation:** No special UI. Simply proceed to the next question. ### `AnswerFirst.MCQ.FRQ` **What it is:** Show the answer first with a reverse equation format ("? × ? = 56"), then have student identify which fact produces that answer from multiple choices, then type the answer. **When triggered:** Student's first or second error on this fact during the current session. **Reference Video:** [▶️ Watch AnswerFirst.MCQ.FRQ](http://wseng-docs.s3-website-us-east-1.amazonaws.com/trashcat/docs/scalar/AnswerFirst.MCQ.FRQ.mp4) **Implementation guidance:** 1. Show reverse equation: "? × ? = 56" (answer visible, factors hidden) 2. Display multiple fact choices (e.g., "7 × 8", "6 × 9", "8 × 7") 3. Student selects the correct fact that produces the answer 4. If correct, flip to show original question with keypad for free response 5. Student types the answer to confirm recall 6. If wrong at any step, repeat from step 1 ### `CueFading.FRQ` **What it is:** Progressive cue fading where students first see and click the full answer, then type with partial cues (one digit shown), then type with no cues. **When triggered:** First error on a fact the student has seen before (not brand new). **Reference Video:** [▶️ Watch CueFading.FRQ](http://wseng-docs.s3-website-us-east-1.amazonaws.com/trashcat/docs/scalar/CueFading.FRQ.mp4) **Implementation guidance:** 1. Show equation with highlighted answer button (e.g., "7 × 8 = **56**") 2. Student clicks the answer button to acknowledge 3. For multi-digit answers: show partial cue (e.g., "5\_") and student types the missing digit 4. Finally: show no cue and student types the full answer 5. If wrong at any step, return to step 1 ### `CueFading.ListenMCQFRQ` **What it is:** Multi-modal reinforcement. Play the fact as audio ("seven times eight equals fifty-six"), then test with MCQ, then free response. 
**When triggered:** Second error on a fact, particularly when audio reinforcement may help. **Reference Video:** [▶️ Watch CueFading.ListenMCQFRQ](http://wseng-docs.s3-website-us-east-1.amazonaws.com/trashcat/docs/scalar/CueFading.ListenMCQFRQ.mp4) **Implementation guidance:** 1. Play audio narration: "7 × 8 = 56" 2. Show visual equation simultaneously 3. Test with MCQ 4. Test with free response 5. Both must be correct to proceed Audio can be generated using text-to-speech or pre-recorded. The key is multi-sensory encoding (auditory + visual + recall). ### `DragDrop.AnswerOnly` **What it is:** Student reconstructs the correct answer by dragging digits into place. **When triggered:** Second or third error when kinesthetic interaction may strengthen memory. **Reference Video:** [▶️ Watch DragDrop.AnswerOnly](http://wseng-docs.s3-website-us-east-1.amazonaws.com/trashcat/docs/scalar/DragDrop.AnswerOnly.mp4) **Implementation guidance:** 1. Show the question: "7 × 8 = ?" 2. Provide draggable digit tiles (0-9) 3. Student drags digits to build answer (e.g., "5" then "6" for 56) 4. Validate when complete 5. Provide immediate feedback if incorrect For multi-digit answers, students must place each digit in the correct position. The physical act of "building" the answer reinforces memory through motor interaction. ### `DragDrop.FITB` **What it is:** Fill-in-the-blank using drag-and-drop for the **entire equation**. Student reconstructs all parts: both factors AND the answer. **When triggered:** Second or third error when full equation reconstruction helps reinforce the relationship. **Reference Video:** [▶️ Watch DragDrop.FITB](http://wseng-docs.s3-website-us-east-1.amazonaws.com/trashcat/docs/scalar/DragDrop.FITB.mp4) **Implementation guidance:** 1. Show equation structure with blanks: "\_\_ × \_\_ = \_\_" 2. Provide draggable number tiles (factors, answer, and distractors) 3. Student drags numbers into all three positions 4. Validate the complete equation (supports commutative property: 7×8 = 8×7) 5. Provide immediate feedback This differs from `DragDrop.AnswerOnly`—here the student must reconstruct the entire equation, not just the answer. ### `Retry.MCQ` **What it is:** Show the same question again with the same multiple-choice options. **When triggered:** First error when the mistake appears to be a mis-click or attention lapse (not conceptual confusion). **Reference Video:** [▶️ Watch Retry.MCQ](http://wseng-docs.s3-website-us-east-1.amazonaws.com/trashcat/docs/scalar/Retry.MCQ.mp4) **Implementation guidance:** 1. Show error feedback briefly 2. Re-display the exact same question with same choices 3. Wait for second attempt 4. Proceed if correct, escalate if incorrect again This is the lightest intervention—simply giving the student another chance without additional scaffolding. ### `TimedRepetition.Recall.FRQ` **What it is:** Copy-Cover-Recall pattern. Show the answer, let student view it, cover it, then test recall after a brief delay. **When triggered:** Third or more consecutive errors—indicates need for extended, deliberate practice. **Reference Video:** [▶️ Watch TimedRepetition.Recall.FRQ](http://wseng-docs.s3-website-us-east-1.amazonaws.com/trashcat/docs/scalar/TimedRepetition.Recall.FRQ.mp4) **Implementation guidance:** 1. Show the complete equation: "7 × 8 = 56" 2. Allow student to study for 3-5 seconds 3. Optionally: have student "copy" by typing/selecting it once 4. Cover the answer 5. Wait 2-3 seconds 6. Ask for free response recall 7. 
Repeat if incorrect This intervention is more time-intensive but builds stronger memory through deliberate, effortful recall. ### `TurnWheels.All` **What it is:** Interactive digit wheels for the **entire equation**. Student scrolls wheels to set all parts: both factors AND the answer. **When triggered:** Second or third error when interactive manipulation may help. **Reference Video:** [▶️ Watch TurnWheels.All](http://wseng-docs.s3-website-us-east-1.amazonaws.com/trashcat/docs/scalar/TurnWheels.All.mp4) **Implementation guidance:** 1. Show equation structure with three sets of digit wheels: "[ ] × [ ] = [ ]" 2. Each wheel scrolls through digits 0-9 3. Student scrolls all wheels to form the complete equation 4. Validate when all wheels are set (supports commutative property: 7×8 = 8×7) 5. Auto-submit when correct equation is formed This combines visual and kinesthetic learning—students physically manipulate all parts of the equation. ### `TurnWheels.AnswerOnly` **What it is:** Scrollable digit wheels showing only the answer (not full equation). **When triggered:** Similar to `TurnWheels.All` but with reduced context. **Reference Video:** [▶️ Watch TurnWheels.AnswerOnly](http://wseng-docs.s3-website-us-east-1.amazonaws.com/trashcat/docs/scalar/TurnWheels.AnswerOnly.mp4) **Implementation guidance:** 1. Show question: "7 × 8 = ?" 2. Show digit wheels pre-set to correct answer: "56" 3. Student can scroll wheels to view/confirm 4. Require explicit acknowledgment before continuing ## Implementation Strategies ### MVP Approach: Skip Interventions For your first integration, disable interventions entirely: ```typescript capabilities: { intervention: false; } ``` The API will never return intervention types. Show simple correct/incorrect feedback and move to the next question. This lets you validate the core learning loop before adding intervention complexity. ### Progressive Approach: Implement Subset Implement interventions in phases based on complexity: **Phase 1:** Simple interventions (minimal UI work) - `None` (default) - `Retry.MCQ` (reshow question) - `CueFading.FRQ` (flash answer, test recall) **Phase 2:** Interactive interventions (moderate UI work) - `DragDrop.AnswerOnly` - `DragDrop.FITB` **Phase 3:** Advanced interventions (significant UI work) - `AnswerFirst.MCQ.FRQ` - `CueFading.ListenMCQFRQ` - `TimedRepetition.Recall.FRQ` - `TurnWheels.All` - `TurnWheels.AnswerOnly` For unimplemented types, fall back to a simpler intervention (e.g., treat `TurnWheels.All` as `CueFading.FRQ`). ### Full Approach: All Intervention Types Implement all intervention types to provide the complete adaptive learning experience. This gives the API maximum flexibility to select the optimal remediation strategy for each student's error pattern. ## Reference Implementation TrashCat's Unity game provides a complete reference implementation of all intervention types. You can review the code, see the UI patterns, and watch the embedded videos above to understand how each intervention works in practice. **Important:** Your implementation doesn't need to match TrashCat's visual design. The core learning mechanics (what information is shown, how long it's visible, what the student must do) should remain consistent, but adapt the presentation to your app's style, technical platform, and user experience. 
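As a concrete version of the fallback strategy from the progressive approach above, a dispatch table keyed by `interventionType` works well. This is a sketch only: the type strings are the API's, but the `Fact` shape is abbreviated from the fact-set schema and the handler names are hypothetical UI flows you would implement yourself.

```typescript
interface Fact {
  factorA: number;
  factorB: number;
  operator: string;
  result: number;
}

type InterventionHandler = (fact: Fact) => Promise<void>;

// Illustrative UI flows (names are hypothetical):
declare function showRetryMcq(fact: Fact): Promise<void>;
declare function showCueFading(fact: Fact): Promise<void>;

const handlers: Record<string, InterventionHandler> = {
  'None': async () => {},         // proceed to the next question
  'Retry.MCQ': showRetryMcq,      // Phase 1
  'CueFading.FRQ': showCueFading, // Phase 1
};

// Unimplemented types degrade to a simpler intervention you have built.
const fallbacks: Record<string, string> = {
  'TurnWheels.All': 'CueFading.FRQ',
  'DragDrop.AnswerOnly': 'Retry.MCQ',
};

async function runIntervention(interventionType: string, fact: Fact): Promise<void> {
  const handler =
    handlers[interventionType] ??
    handlers[fallbacks[interventionType] ?? 'None'];
  await handler(fact);
}
```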
**Access the reference implementation:** - [TrashCat Unity Intervention System](https://github.com/learnwith-ai/trashcat/tree/main/game/Assets/ReusablePatterns/FluencySDK/Scripts/Runtime/InterventionSystem/Interventions) --- ## Related Docs ### [Getting Started](https://docs.trashcat.learnwith.ai/guides-getting-started.guide.txt) Authentication and basic learning loop. Start here if you haven't integrated the API yet. ### [Complete Example](https://docs.trashcat.learnwith.ai/guides-complete-example.guide.txt) Production code showing how to integrate interventions into a full learning session with error handling and state management. ### [API Reference](https://docs.trashcat.learnwith.ai/api-reference.txt) Complete documentation of the answer submission endpoint, including all `capabilities` options and feedback response structure. ### Leaderboards # Leaderboards The leaderboard system is flexible and allows learning apps to define custom leaderboards based on their needs. ## Features - **Categories & Subcategories**: Use `categoryId` (e.g., skillId) to create leaderboards per skill, and optionally filter by `subcategory` (e.g., organizationId) - **Flexible Units**: Track any metric (points, seconds, stars, coins, etc.) - **Sort Directions**: Ascending (lower is better, for times) or descending (higher is better, for scores) - **Weekly & Global Rankings**: Each leaderboard maintains both weekly and all-time rankings ## List Available Leaderboards ```typescript const res = await fetch(`${API_URL}/miscellaneous/v1/leaderboards`, { headers: { Authorization: `Bearer ${idToken}` }, }); const { leaderboards } = await res.json(); // Returns: [{ id: 'practice-scores', name: 'Top Scores', unit: 'points', sortDirection: 'desc' }, ...] ``` **Current leaderboards:** - **`practice-scores`**: Top scores (points, higher is better) - **`competition-times`**: Best times (seconds, lower is better) Additional leaderboards can be defined as needed. Contact trashcat@trilogy.com to request new leaderboard configurations. ## Submit a Score ```typescript // Example: Submit competition time const totalTimeSeconds = activePlayTime + penaltyTime; await fetch( `${API_URL}/miscellaneous/v1/leaderboards/competition-times/scores?categoryId=${skillId}&subcategory=${organizationId}`, { method: 'POST', headers: { Authorization: `Bearer ${idToken}`, 'Content-Type': 'application/json', }, body: JSON.stringify({ score: Math.round(totalTimeSeconds), }), }, ); ``` ## Retrieve Rankings ### Weekly Leaderboard ```typescript const weeklyRes = await fetch( `${API_URL}/miscellaneous/v1/leaderboards/competition-times/weekly/current?categoryId=${skillId}&subcategory=${organizationId}&limit=20`, { headers: { Authorization: `Bearer ${idToken}` }, }, ); const { entries, currentUserEntry, totalCount } = await weeklyRes.json(); ``` ### Global Leaderboard ```typescript const globalRes = await fetch( `${API_URL}/miscellaneous/v1/leaderboards/competition-times/global?categoryId=${skillId}&subcategory=${organizationId}&limit=20`, { headers: { Authorization: `Bearer ${idToken}` }, }, ); const { entries, currentUserEntry, totalCount } = await globalRes.json(); ``` ### Response Format Both endpoints return: ```typescript { "entries": [ { "rank": 1, "score": 180, "playerName": "Alice", "unit": "seconds" }, { "rank": 2, "score": 195, "playerName": "Bob", "unit": "seconds" }, // ... 
up to limit ], "currentUserEntry": { "rank": 47, "score": 240, "playerName": "CurrentUser", "unit": "seconds" }, "totalCount": 156 } ``` **Key fields:** - `entries`: Top performers (limited by `limit` parameter) - `currentUserEntry`: Current user's ranking (included even if outside top entries) - `totalCount`: Total participants in this leaderboard ## Leaderboard Scoping Use `categoryId` and `subcategory` to create focused competitions: **School-level competition:** ```typescript // Only students from the same organization compete ?categoryId=${skillId}&subcategory=${organizationId} ``` **Global competition:** ```typescript // All students compete (omit subcategory) ?categoryId=${skillId} ``` **Skill-specific:** ```typescript // Different leaderboards per skill ?categoryId=multiplication // vs ?categoryId=division ``` --- ## Related Docs ### [Getting Started](https://docs.trashcat.learnwith.ai/guides-getting-started.guide.txt) Authentication setup required for accessing leaderboards and submitting scores. ### [Complete Example](https://docs.trashcat.learnwith.ai/guides-complete-example.guide.txt) Production code showing how to integrate leaderboard submissions into your learning session flow. ### [API Reference](https://docs.trashcat.learnwith.ai/api-reference.txt) Complete endpoint documentation for leaderboard operations, including all query parameters and response schemas. ### User Storage # User Storage The storage API is designed for frontend-only apps that don't have their own backend. If you have your own backend, you should store game state there instead. ## When to Use **Use the storage API if:** - You're building a frontend-only game (web, Unity WebGL, mobile) - You don't have your own backend - You need to store simple game state per user **Don't use it if:** - You have your own backend (store data there instead) - You need complex queries or relationships - You need to store large amounts of data ## Available Endpoints ### Store Data ```typescript await fetch(`${API_URL}/miscellaneous/v1/storage/game-settings`, { method: 'PUT', headers: { Authorization: `Bearer ${idToken}`, 'Content-Type': 'application/json', }, body: JSON.stringify({ volume: 0.7, difficulty: 'medium', lastPlayedSkill: 'multiplication', }), }); ``` ### Retrieve Data ```typescript const res = await fetch(`${API_URL}/miscellaneous/v1/storage/game-settings`, { headers: { Authorization: `Bearer ${idToken}` }, }); const settings = await res.json(); console.log(settings.volume); // 0.7 ``` ### Delete Data ```typescript await fetch(`${API_URL}/miscellaneous/v1/storage/game-settings`, { method: 'DELETE', headers: { Authorization: `Bearer ${idToken}` }, }); ``` ## Common Use Cases **Game settings:** ```typescript await fetch(`${API_URL}/miscellaneous/v1/storage/settings`, { method: 'PUT', headers: { Authorization: `Bearer ${idToken}`, 'Content-Type': 'application/json' }, body: JSON.stringify({ volume: 0.7, musicEnabled: true }), }); ``` **Current progress:** ```typescript await fetch(`${API_URL}/miscellaneous/v1/storage/current-progress`, { method: 'PUT', headers: { Authorization: `Bearer ${idToken}`, 'Content-Type': 'application/json' }, body: JSON.stringify({ currentSkill: 'multiplication', currentLevel: 5 }), }); ``` **Unlocked cosmetics:** ```typescript await fetch(`${API_URL}/miscellaneous/v1/storage/cosmetics`, { method: 'PUT', headers: { Authorization: `Bearer ${idToken}`, 'Content-Type': 'application/json' }, body: JSON.stringify({ unlocked: ['hat-1', 'hat-2', 'shirt-3'] }), }); ``` ## Data Persistence Stored 
data is:

- **Per-user**: Each user's storage is isolated
- **Persistent**: Data survives app closes and reinstalls
- **Simple key-value**: No complex queries or relationships supported
- **Size limits**: Store reasonable amounts (< 100KB per key recommended)

If your data doesn't meet these constraints, use your own backend storage.

---

## Related Docs

### [Getting Started](https://docs.trashcat.learnwith.ai/guides-getting-started.guide.txt)
Authentication required for storage access. See auth setup if you're using the OAuth flow.

### [Complete Example](https://docs.trashcat.learnwith.ai/guides-complete-example.guide.txt)
Production patterns showing how to use storage for session persistence and recovery.

### [API Reference](https://docs.trashcat.learnwith.ai/api-reference.txt)
Complete endpoint documentation for storage operations, including size limits and data format requirements.

### Complete Example

# Production Integration Patterns

This guide shows production-ready patterns for integrating the Fluency API. All examples assume you have an OpenAPI-generated client from the [API Reference](https://docs.trashcat.learnwith.ai/api-reference.txt).

## Session State Management

Track session state across questions and handle pause/resume:

```typescript
interface SessionState {
  sessionId: string;
  skillId: string;
  algorithmId: 'speedrun' | 'practice';
  startedAt: number; // epoch ms, used to detect stale saved sessions
  activeSessionTime: number; // seconds
  lives: number;
  isPaused: boolean;
  questionCount: number;
}

// Initialize (let, not const: a saved session may be restored into it)
let session: SessionState = {
  sessionId: crypto.randomUUID(),
  skillId: selectedSkill.id,
  algorithmId: 'practice',
  startedAt: Date.now(),
  activeSessionTime: 0,
  lives: 3,
  isPaused: false,
  questionCount: 0,
};

// Pause tracking
function pauseSession() {
  session.isPaused = true;
  stopTimer();
}

// Resume with same sessionId
function resumeSession() {
  session.isPaused = false;
  startTimer();
}
```

### Session Persistence

For apps that may close mid-session:

```typescript
// Save after each answer
localStorage.setItem('current_session', JSON.stringify(session));

// Restore on reopen
const saved = localStorage.getItem('current_session');
if (saved) {
  const state = JSON.parse(saved);
  // Validate not stale (< 1 hour old)
  const age = Date.now() - state.startedAt;
  if (age < 60 * 60 * 1000) {
    session = state; // Resume
  }
}
```

## Error Handling

### Network Errors with Retry

```typescript
const delay = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function fetchWithRetry<T>(fetchFn: () => Promise<T>, maxRetries = 3): Promise<T> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fetchFn();
    } catch (error) {
      const isLastAttempt = attempt === maxRetries - 1;
      // Don't retry client errors (4xx)
      if (error.statusCode >= 400 && error.statusCode < 500) {
        throw error;
      }
      if (isLastAttempt) throw error;
      // Exponential backoff: 1s, 2s, 4s
      await delay(Math.pow(2, attempt) * 1000);
    }
  }
  throw new Error('unreachable');
}

// Usage
const { question } = await fetchWithRetry(() => api.getNextQuestion(skillId, algorithmId, sessionId));
```

### Auth Token Expiry

```typescript
async function makeRequest<T>(requestFn: () => Promise<T>): Promise<T> {
  try {
    return await requestFn();
  } catch (error) {
    // Token expired - refresh and retry once
    if (error.statusCode === 401) {
      await refreshToken();
      return await requestFn();
    }
    throw error;
  }
}

// With Amplify
async function refreshToken() {
  await fetchAuthSession({ forceRefresh: true });
}
```
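The two wrappers compose naturally. For instance (an illustrative sketch; the `api.submitAnswer` call stands in for your generated client method):

```typescript
// Refresh the token once on a 401, and retry transient failures with backoff.
const feedback = await makeRequest(() =>
  fetchWithRetry(() => api.submitAnswer(/* skillId, algorithmId, questionId, payload */)),
);
```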
### Handling API Errors

```typescript
// Error response formats
// Standard error:
// { statusCode: 400, message: "Bad Request", error: "sessionId is required" }
// Validation error:
// { statusCode: 400, message: "Validation failed", errors: [{ field: "timeTookToAnswerMs", message: "..." }] }

try {
  const feedback = await api.submitAnswer(/* ... */);
} catch (error) {
  if (error.statusCode === 400) {
    // Bad request - likely our bug
    console.error('Invalid request:', error.message);
    showMessage('Something went wrong. Please restart.');
  } else if (error.statusCode === 401) {
    // Auth expired
    showMessage('Session expired. Please sign in again.');
    redirectToLogin();
  } else if (error.statusCode >= 500) {
    // Server error - retryable
    showMessage('Server error. Retrying...');
    await retry();
  }
}
```

### Handling `skillAvailability`

```typescript
// Case labels match the skillAvailability enum in the API Reference.
const { question, skillAvailability } = await api.getNextQuestion(/* ... */);

switch (skillAvailability) {
  case 'Available':
    displayQuestion(question);
    break;
  case 'AllFactsOnCooldown':
    // Spaced repetition enforcing wait
    showMessage('All facts are resting. Come back later!');
    endSession('cooldown');
    break;
  case 'SkillCompleted':
    // 100% mastery
    showMessage('Skill mastered! 🎉');
    endSession('completed');
    break;
  case 'ExceededStrugglingThreshold':
    // Struggling fail-safe triggered; end gracefully
    showMessage('Time for a break from this skill.');
    endSession('struggling');
    break;
  case 'SessionCompletionRecommended':
    // Algorithm suggests wrapping up this session
    showMessage('Great session! Wrap up when ready.');
    endSession('recommended');
    break;
}
```

## Best Practices

### Timing Accuracy
```typescript
// ✅ Good: Track per-question timing
const start = Date.now();
// ... wait for the user's answer
const timeTookMs = Date.now() - start;

// ❌ Bad: Use fixed values
const timeTookMs = 3000; // API needs real timing
```

### Active Session Time
```typescript
// ✅ Good: Only count learning time
onQuestionDisplay(() => startTimer());
onPauseMenu(() => stopTimer());

// ❌ Bad: Count everything including menus
const activeTime = Date.now() - sessionStart; // Wrong!
```

### Session ID Persistence
```typescript
// ✅ Good: One sessionId per session
const sessionId = crypto.randomUUID();
// Use for all questions in this session

// ❌ Bad: New sessionId per question
await askQuestion(crypto.randomUUID()); // Wrong!
```

### Error Recovery
```typescript
// ✅ Good: Save state, offer recovery
try {
  saveState(session);
  await api.submitAnswer(/* ... */);
} catch (error) {
  showDialog(['Retry', 'End Session', 'Menu']);
}

// ❌ Bad: Silent failure
catch (error) {
  console.error(error); // User sees nothing
}
```

---

## Related Docs

### [Getting Started](https://docs.trashcat.learnwith.ai/guides-getting-started.guide.txt)
Authentication setup and basic learning loop. Start here if you're new to the API.

### [Interventions Guide](https://docs.trashcat.learnwith.ai/guides-interventions.guide.txt)
Detailed breakdown of each intervention type with UI implementation guidance and reference videos.

### [Timing Fields Guide](https://docs.trashcat.learnwith.ai/guides-timing-fields.guide.txt)
Critical timing implementation details for `timeTookToAnswerMs` and `activeSessionDurationSec`.

### [API Reference](https://docs.trashcat.learnwith.ai/api-reference.txt)
Complete endpoint documentation. Generate a typed client from the OpenAPI spec to use in your code.

## API References

### API Reference

# Fluency API

- **OpenAPI Version:** `3.1.1`
- **API Version:** `0.0.1`

This API provides reusable functionality for building fluency using games or other timed learning applications. The API currently supports math-based common core standards but might be extended in the future to support other types of fluency.

The main learning algorithm is exposed via the operation tagged with the `Learning` tag. The other APIs are convenience APIs in support of client applications.
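As a concrete example of the "generate a typed client" suggestion above, one workable setup is a sketch using the third-party `openapi-typescript` and `openapi-fetch` packages (these tools are not part of this API; the generated type and file names are illustrative):

```typescript
// Generate types once, e.g.:
//   npx openapi-typescript https://docs.trashcat.learnwith.ai/openapi.json -o fluency-api.d.ts
import createClient from 'openapi-fetch';
import type { paths } from './fluency-api';

const client = createClient<paths>({
  baseUrl: 'https://api-integration.trashcat.rp.devfactory.com',
  headers: { Authorization: `Bearer ${idToken}` },
});

// Fully typed request/response for the skills endpoint
const { data, error } = await client.GET('/learning/v1/skills');
```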
## Servers - **URL:** `https://api-integration.trashcat.rp.devfactory.com` - **Description:** integration ## Operations ### Sign up a new user - **Method:** `POST` - **Path:** `/authentication/v1/sign-up` - **Tags:** Authentication Create a new user account with email and password #### Request Body ##### Content-Type: application/json - **`email` (required)** `string`, format: `email` - **`fullName` (required)** `string` - **`password` (required)** `string` **Example:** ```json { "email": "user@example.com", "password": "SecurePass123!", "fullName": "John Doe" } ``` #### Responses ##### Status: 201 Created ###### Content-Type: application/json - **`email` (required)** `string`, format: `email` - **`fullName` (required)** `string` - **`id` (required)** `string` — Unique user identifier within our app - **`platformId`** `string` — External platform identifier (e.g., LMS user ID) **Example:** ```json { "id": "", "email": "", "fullName": "", "platformId": "" } ``` ##### Status: 400 Bad Request ###### Content-Type: application/json - **`message` (required)** `string` **Example:** ```json { "message": "" } ``` ##### Status: 401 Unauthorized ###### Content-Type: application/json - **`message` (required)** `string` **Example:** ```json { "message": "" } ``` ##### Status: 404 Not Found ###### Content-Type: application/json - **`message` (required)** `string` **Example:** ```json { "message": "" } ``` ##### Status: 500 Internal Server Error ###### Content-Type: application/json - **`message` (required)** `string` **Example:** ```json { "message": "" } ``` ### LTI 1.3 launch endpoint - **Method:** `POST` - **Path:** `/authentication/v1/lti/1.3/launch` - **Tags:** Authentication Handle LTI 1.3 launch requests from learning management systems #### Request Body ##### Content-Type: application/x-www-form-urlencoded - **`id_token` (required)** `string` — LTI 1.3 ID token from the learning management system **Example:** ```json { "id_token": "" } ``` #### Responses ##### Status: 302 Redirect ##### Status: 400 Bad Request ###### Content-Type: application/json - **`message` (required)** `string` **Example:** ```json { "message": "" } ``` ##### Status: 401 Unauthorized ###### Content-Type: application/json - **`message` (required)** `string` **Example:** ```json { "message": "" } ``` ##### Status: 404 Not Found ###### Content-Type: application/json - **`message` (required)** `string` **Example:** ```json { "message": "" } ``` ##### Status: 500 Internal Server Error ###### Content-Type: application/json - **`message` (required)** `string` **Example:** ```json { "message": "" } ``` ### List available skills - **Method:** `GET` - **Path:** `/learning/v1/skills` - **Tags:** Learning Get a list of all available learning skills #### Responses ##### Status: 200 Success ###### Content-Type: application/json - **`skills` (required)** `array` **Items:** - **`displayName` (required)** `string` — Short display name of the skill (e.g., Multiplication, Addition) - **`id` (required)** `string` — Skill identifier, based on Common Core standards - **`metadata` (required)** `object` — Metadata about the equation structure and naming for this skill - **`factorAName` (required)** `string` — Display name for the first operand (e.g., "Addend", "Factor", "Minuend", "Dividend") - **`factorBName` (required)** `string` — Display name for the second operand (e.g., "Addend", "Factor", "Subtrahend", "Divisor") - **`operator` (required)** `string`, possible values: `"x", "+", "-", "÷"` — Mathematical operator for this skill - 
**`resultName` (required)** `string` — Display name for the result (e.g., "Sum", "Product", "Difference", "Quotient") - **`name` (required)** `string` — Display name of the skill - **`progress` (required)** `number` — Progress as decimal from 0-1 based on facts mastery - **`state` (required)** `string`, possible values: `"NOT_STARTED", "IN_PROGRESS", "COMPLETED"` — Current state of the skill for the student - **`availabilityStatus`** `string`, possible values: `"Available", "SkillCompleted", "ExceededStrugglingThreshold", "AllFactsOnCooldown", "SessionCompletionRecommended"`, default: `"Available"` — Current practice availability status - **`competitionsCompletedToday`** `number`, default: `0` — Number of competitions completed today for this skill - **`lastWorkedOn`** `string` — Date and time the student last worked on the skill, in ISO format **Example:** ```json { "skills": [ { "id": "", "name": "", "displayName": "", "metadata": { "operator": "x", "factorAName": "", "factorBName": "", "resultName": "" }, "state": "NOT_STARTED", "availabilityStatus": "Available", "lastWorkedOn": "", "progress": 0, "competitionsCompletedToday": 0 } ] } ``` ##### Status: 400 Bad Request ###### Content-Type: application/json - **`message` (required)** `string` **Example:** ```json { "message": "" } ``` ##### Status: 401 Unauthorized ###### Content-Type: application/json - **`message` (required)** `string` **Example:** ```json { "message": "" } ``` ##### Status: 404 Not Found ###### Content-Type: application/json - **`message` (required)** `string` **Example:** ```json { "message": "" } ``` ##### Status: 500 Internal Server Error ###### Content-Type: application/json - **`message` (required)** `string` **Example:** ```json { "message": "" } ``` ### Get fact sets for a skill - **Method:** `GET` - **Path:** `/learning/v1/skills/{skillId}/fact-sets` - **Tags:** Learning Retrieve fact sets and their facts for a specific skill #### Responses ##### Status: 200 Success ###### Content-Type: application/json - **`factSets` (required)** `array` **Items:** - **`facts` (required)** `array` **Items:** - **`factorA` (required)** `number` — First operand in the math fact - **`factorB` (required)** `number` — Second operand in the math fact - **`factSetId` (required)** `string` - **`id` (required)** `string` - **`operator` (required)** `string`, possible values: `"x", "+", "-", "÷"` — Mathematical operator - **`result` (required)** `number` — Correct answer to the math fact - **`text` (required)** `string` — Fact display text (e.g., "7 x 8") - **`relationships`** `array`, default: `[]` — Related facts (e.g., fact family members) **Items:** - **`factId` (required)** `string` — ID of the related fact - **`type` (required)** `string`, possible values: `"FACT_FAMILY"` — Type of relationship between facts - **`id` (required)** `string` - **`name` (required)** `string` — Display name of the fact set - **`skill` (required)** `object` - **`displayName` (required)** `string` — Short display name of the skill (e.g., Multiplication, Addition) - **`id` (required)** `string` — Skill identifier, based on Common Core standards - **`metadata` (required)** `object` — Metadata about the equation structure and naming for this skill - **`factorAName` (required)** `string` — Display name for the first operand (e.g., "Addend", "Factor", "Minuend", "Dividend") - **`factorBName` (required)** `string` — Display name for the second operand (e.g., "Addend", "Factor", "Subtrahend", "Divisor") - **`operator` (required)** `string`, possible values: 
`"x", "+", "-", "÷"` — Mathematical operator for this skill - **`resultName` (required)** `string` — Display name for the result (e.g., "Sum", "Product", "Difference", "Quotient") - **`name` (required)** `string` — Display name of the skill **Example:** ```json { "skill": { "id": "", "name": "", "displayName": "", "metadata": { "operator": "x", "factorAName": "", "factorBName": "", "resultName": "" } }, "factSets": [ { "id": "", "name": "", "facts": [ { "id": "", "text": "", "factorA": 1, "factorB": 1, "operator": "x", "factSetId": "", "result": 1, "relationships": [] } ] } ] } ``` ##### Status: 400 Bad Request ###### Content-Type: application/json - **`message` (required)** `string` **Example:** ```json { "message": "" } ``` ##### Status: 401 Unauthorized ###### Content-Type: application/json - **`message` (required)** `string` **Example:** ```json { "message": "" } ``` ##### Status: 404 Not Found ###### Content-Type: application/json - **`message` (required)** `string` **Example:** ```json { "message": "" } ``` ##### Status: 500 Internal Server Error ###### Content-Type: application/json - **`message` (required)** `string` **Example:** ```json { "message": "" } ``` ### Get next question - **Method:** `POST` - **Path:** `/learning/v1/skills/{skillId}/algorithms/{algorithmId}/questions` - **Tags:** Learning Get the next question for a learning session using specified algorithm #### Request Body ##### Content-Type: application/json - **`sessionId` (required)** `string` — The session ID for the current session - **`factSetId`** `string` — If provided, the learning state will be initialized for the specified fact set **Example:** ```json { "sessionId": "", "factSetId": "0" } ``` #### Responses ##### Status: 200 Success ###### Content-Type: application/json - **`question` (required)** `object | null` - **`choices` (required)** `array` — Available answer choices **Items:** - **`correct` (required)** `boolean` — Whether this choice is the correct answer - **`id` (required)** `string` — Identifier for this answer choice, unique within the question - **`label` (required)** `string` — Display text for the answer choice - **`value` (required)** `number` — Numeric value of the answer choice - **`factId` (required)** `string` - **`factSetId` (required)** `string` - **`id` (required)** `string` - **`stageType` (required)** `string`, possible values: `"grounding", "assessment", "practice", "review", "repetition", "mastered"` - **`text` (required)** `string` — Question text displayed to the user - **`premature`** `boolean` — Whether the question was raised prematurely - **`timeToAnswer`** `number` — Time limit to answer in seconds - **`sessionId` (required)** `string` — The session ID for the current session - **`algorithmId`** `string`, possible values: `"practice", "competition", "review"`, default: `"practice"` - **`skillAvailability`** `string`, possible values: `"Available", "SkillCompleted", "ExceededStrugglingThreshold", "AllFactsOnCooldown", "SessionCompletionRecommended"`, default: `"Available"` — Current availability status of the skill **Example:** ```json { "question": { "id": "", "stageType": "grounding", "factId": "", "factSetId": "", "text": "", "choices": [ { "id": "", "value": 1, "label": "", "correct": true } ], "timeToAnswer": 1, "premature": true }, "sessionId": "", "algorithmId": "practice", "skillAvailability": "Available" } ``` ##### Status: 400 Bad Request ###### Content-Type: application/json - **`message` (required)** `string` **Example:** ```json { "message": "" } ``` ##### 
### Get fact sets for a skill

- **Method:** `GET`
- **Path:** `/learning/v1/skills/{skillId}/fact-sets`
- **Tags:** Learning

Retrieve fact sets and their facts for a specific skill.

#### Responses

##### Status: 200 Success

###### Content-Type: application/json

- **`factSets` (required)** `array`
  **Items:**
  - **`facts` (required)** `array`
    **Items:**
    - **`factorA` (required)** `number` — First operand in the math fact
    - **`factorB` (required)** `number` — Second operand in the math fact
    - **`factSetId` (required)** `string`
    - **`id` (required)** `string`
    - **`operator` (required)** `string`, possible values: `"x", "+", "-", "÷"` — Mathematical operator
    - **`result` (required)** `number` — Correct answer to the math fact
    - **`text` (required)** `string` — Fact display text (e.g., "7 x 8")
    - **`relationships`** `array`, default: `[]` — Related facts (e.g., fact family members)
      **Items:**
      - **`factId` (required)** `string` — ID of the related fact
      - **`type` (required)** `string`, possible values: `"FACT_FAMILY"` — Type of relationship between facts
  - **`id` (required)** `string`
  - **`name` (required)** `string` — Display name of the fact set
- **`skill` (required)** `object` — The skill's `displayName`, `id`, `metadata`, and `name`, with the same meanings as in **List available skills**

**Example:**

```json
{
  "skill": { "id": "", "name": "", "displayName": "", "metadata": { "operator": "x", "factorAName": "", "factorBName": "", "resultName": "" } },
  "factSets": [
    {
      "id": "",
      "name": "",
      "facts": [
        { "id": "", "text": "", "factorA": 1, "factorB": 1, "operator": "x", "factSetId": "", "result": 1, "relationships": [] }
      ]
    }
  ]
}
```

**Error responses:** `400 Bad Request`, `401 Unauthorized`, `404 Not Found`, and `500 Internal Server Error`, each returning `application/json` with a required `message` string, e.g. `{ "message": "" }`.
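The same pattern fetches the fact sets for one skill; the `skillId` comes from the skill list above.

```typescript
declare const BASE_URL: string; // as defined in the sign-up sketch

interface Fact {
  id: string;
  text: string; // e.g., "7 x 8"
  factorA: number;
  factorB: number;
  operator: string;
  factSetId: string;
  result: number;
  relationships?: { factId: string; type: "FACT_FAMILY" }[];
}

interface FactSet {
  id: string;
  name: string;
  facts: Fact[];
}

async function getFactSets(token: string, skillId: string): Promise<FactSet[]> {
  const res = await fetch(
    `${BASE_URL}/learning/v1/skills/${encodeURIComponent(skillId)}/fact-sets`,
    { headers: { Authorization: `Bearer ${token}` } }, // scheme assumed
  );
  if (!res.ok) throw new Error(`getFactSets failed: ${res.status}`);
  const body = (await res.json()) as { factSets: FactSet[] };
  return body.factSets;
}
```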
### Get next question

- **Method:** `POST`
- **Path:** `/learning/v1/skills/{skillId}/algorithms/{algorithmId}/questions`
- **Tags:** Learning

Get the next question for a learning session using the specified algorithm.

#### Request Body

##### Content-Type: application/json

- **`sessionId` (required)** `string` — The session ID for the current session
- **`factSetId`** `string` — If provided, the learning state will be initialized for the specified fact set

**Example:**

```json
{ "sessionId": "", "factSetId": "0" }
```

#### Responses

##### Status: 200 Success

###### Content-Type: application/json

- **`question` (required)** `object | null`
  - **`choices` (required)** `array` — Available answer choices
    **Items:**
    - **`correct` (required)** `boolean` — Whether this choice is the correct answer
    - **`id` (required)** `string` — Identifier for this answer choice, unique within the question
    - **`label` (required)** `string` — Display text for the answer choice
    - **`value` (required)** `number` — Numeric value of the answer choice
  - **`factId` (required)** `string`
  - **`factSetId` (required)** `string`
  - **`id` (required)** `string`
  - **`stageType` (required)** `string`, possible values: `"grounding", "assessment", "practice", "review", "repetition", "mastered"`
  - **`text` (required)** `string` — Question text displayed to the user
  - **`premature`** `boolean` — Whether the question was raised prematurely
  - **`timeToAnswer`** `number` — Time limit to answer in seconds
- **`sessionId` (required)** `string` — The session ID for the current session
- **`algorithmId`** `string`, possible values: `"practice", "competition", "review"`, default: `"practice"`
- **`skillAvailability`** `string`, possible values: `"Available", "SkillCompleted", "ExceededStrugglingThreshold", "AllFactsOnCooldown", "SessionCompletionRecommended"`, default: `"Available"` — Current availability status of the skill

**Example:**

```json
{
  "question": {
    "id": "",
    "stageType": "grounding",
    "factId": "",
    "factSetId": "",
    "text": "",
    "choices": [ { "id": "", "value": 1, "label": "", "correct": true } ],
    "timeToAnswer": 1,
    "premature": true
  },
  "sessionId": "",
  "algorithmId": "practice",
  "skillAvailability": "Available"
}
```

**Error responses:** `400 Bad Request`, `401 Unauthorized`, `404 Not Found`, and `500 Internal Server Error`, each returning `application/json` with a required `message` string, e.g. `{ "message": "" }`.
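Requesting the next question is a POST because it advances server-side learning state. A sketch, under the same bearer-token assumption; a `null` question means the algorithm has nothing to serve right now, so check `skillAvailability`.

```typescript
declare const BASE_URL: string; // as defined in the sign-up sketch

interface Choice { id: string; value: number; label: string; correct: boolean; }

interface Question {
  id: string;
  stageType: "grounding" | "assessment" | "practice" | "review" | "repetition" | "mastered";
  factId: string;
  factSetId: string;
  text: string;
  choices: Choice[];
  timeToAnswer?: number; // seconds
  premature?: boolean;
}

interface NextQuestionResponse {
  question: Question | null;
  sessionId: string;
  algorithmId?: "practice" | "competition" | "review";
  skillAvailability?: string;
}

async function getNextQuestion(
  token: string,
  skillId: string,
  algorithmId: "practice" | "competition" | "review",
  sessionId: string,
  factSetId?: string,
): Promise<NextQuestionResponse> {
  const res = await fetch(
    `${BASE_URL}/learning/v1/skills/${skillId}/algorithms/${algorithmId}/questions`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json", Authorization: `Bearer ${token}` },
      body: JSON.stringify(factSetId ? { sessionId, factSetId } : { sessionId }),
    },
  );
  if (!res.ok) throw new Error(`getNextQuestion failed: ${res.status}`);
  return (await res.json()) as NextQuestionResponse;
}
```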
### Submit answer

- **Method:** `POST`
- **Path:** `/learning/v1/skills/{skillId}/algorithms/{algorithmId}/questions/{questionId}/answers`
- **Tags:** Learning

Submit an answer to a question and get feedback using the specified algorithm.

#### Request Body

##### Content-Type: application/json

- **`capabilities` (required)** `object` — Client capabilities for receiving various features
  - **`intervention` (required)** `boolean` — Whether the game can currently receive interventions (e.g., not if the player has already died or the game is over)
- **`sessionId` (required)** `string` — The session ID for the current session
- **`state` (required)** `string`, possible values: `"Answered", "Skipped", "TimedOut"` — Final state of the question (answered, skipped, or timed out)
- **`timeTookToAnswerMs` (required)** `number` — Time taken to answer in milliseconds, as measured by the client
- **`activeSessionDurationSec`** `number`, default: `0` — Total active session duration in seconds until the current answer was submitted
- **`answer`** `object` — User answer (omitted if skipped or timed out)

**Example:**

```json
{
  "sessionId": "",
  "answer": { "choiceId": "" },
  "timeTookToAnswerMs": 1,
  "state": "Answered",
  "capabilities": { "intervention": true },
  "activeSessionDurationSec": 0
}
```

#### Responses

##### Status: 200 Success

###### Content-Type: application/json

- **`answerType` (required)** `string`, possible values: `"Correct", "Incorrect", "Skipped", "TimedOut"`
- **`correctAnswer` (required)** `object` — The correct answer choice
  - **`correct` (required)** `boolean` — Whether this choice is the correct answer
  - **`id` (required)** `string` — Identifier for this answer choice, unique within the question
  - **`label` (required)** `string` — Display text for the answer choice
  - **`value` (required)** `number` — Numeric value of the answer choice
- **`factId` (required)** `string`
- **`factSetId` (required)** `string`
- **`interventionType` (required)** `string`, possible values: `"None", "Retry.MCQ", "TurnWheels.All", "CueFading.ListenMCQFRQ", "AudioSegmented", "DragDrop.FITB", "NumberLine", "CueFading.FRQ", "TimedRepetition.Recall.FRQ", "AnswerFirst.MCQ.FRQ", "TurnWheels.AnswerOnly", "DragDrop.AnswerOnly"` — Learning intervention applied (if any)
- **`sessionId` (required)** `string` — The session ID for the current session
- **`stageType` (required)** `string`
- **`timeToNextQuestionSec` (required)** `number` — Seconds to wait before showing the next question
- **`algorithmId`** `string`, possible values: `"practice", "competition", "review"`, default: `"practice"`
- **`consecutiveCorrect`** `number`, default: `0`
- **`consecutiveIncorrect`** `number`, default: `0`
- **`reviewStats`** `object` — Review session statistics (only present during review sessions)
  - **`attempts` (required)** `number` — Number of questions answered so far
  - **`correctFast` (required)** `number` — Number of correct answers within the fluency threshold (3 seconds)
  - **`correctSlow` (required)** `number` — Number of correct answers slower than the fluency threshold
  - **`incorrect` (required)** `number` — Number of incorrect answers
  - **`skipped` (required)** `number` — Number of skipped questions
  - **`totalFacts` (required)** `number` — Total number of facts in the review session
  - **`passed`** `boolean` — Whether the review was passed (only set when skillAvailability is SkillCompleted)
  - **`reward`** `object` — Rewards earned from the review session (only set when the session completes with passed=true)
    - **`cosmetic`** `number`, default: `0` — Number of in-game cosmetic items awarded
    - **`xp`** `number`, default: `0` — Experience points earned
- **`reward`** `object` — The rewards earned for this answer (if any)
  - **`cosmetic`** `number`, default: `0` — Number of in-game cosmetic items awarded
  - **`xp`** `number`, default: `0` — Experience points earned
- **`skillAvailability`** `string`, possible values: `"Available", "SkillCompleted", "ExceededStrugglingThreshold", "AllFactsOnCooldown", "SessionCompletionRecommended"`, default: `"Available"` — Status of skill availability or completion
- **`startReview`** `object` — Data to start a review session (returned when practice triggers a review)
  - **`factSetId` (required)** `string` — The fact set ID to start a review session for
- **`statistics`** `object` — Statistics
  - **`totalAnswersSubmitted` (required)** `number` — Total number of answers submitted so far in this skill
  - **`totalEstimatedAnswersRemaining` (required)** `number` — Estimated number of questions remaining to complete the skill
  - **`totalFacts` (required)** `number` — Total number of facts in the skill
  - **`totalIncorrectAnswers` (required)** `number` — Total number of incorrect answers submitted in this skill
  - **`totalMasteredFacts` (required)** `number` — Number of facts that have been mastered
- **`updatedFacts`** `array` — Facts that had their stages updated as a result of this answer
  **Items:**
  - **`factId` (required)** `string` — ID of the fact that had its stage updated
  - **`factSetId` (required)** `string` — ID of the fact set containing the updated fact
  - **`newProgressWeight` (required)** `number` — Progress weight of the new stage from the stage configuration
  - **`newStageType` (required)** `string` — Type of the new stage that the fact was promoted to

**Example:**

```json
{
  "consecutiveCorrect": 0,
  "consecutiveIncorrect": 0,
  "stageType": "",
  "factId": "",
  "factSetId": "",
  "answerType": "Correct",
  "correctAnswer": { "id": "", "value": 1, "label": "", "correct": true },
  "timeToNextQuestionSec": 1,
  "interventionType": "None",
  "reward": { "xp": 0, "cosmetic": 0 },
  "skillAvailability": "Available",
  "statistics": { "totalAnswersSubmitted": 1, "totalEstimatedAnswersRemaining": 1, "totalFacts": 1, "totalMasteredFacts": 1, "totalIncorrectAnswers": 1 },
  "updatedFacts": [ { "factSetId": "", "factId": "", "newProgressWeight": 1, "newStageType": "" } ],
  "reviewStats": { "totalFacts": 1, "attempts": 1, "correctFast": 1, "correctSlow": 1, "incorrect": 1, "skipped": 1, "passed": true, "reward": { "xp": 0, "cosmetic": 0 } },
  "startReview": { "factSetId": "" },
  "sessionId": "",
  "algorithmId": "practice"
}
```

**Error responses:** `400 Bad Request`, `401 Unauthorized`, `404 Not Found`, and `500 Internal Server Error`, each returning `application/json` with a required `message` string, e.g. `{ "message": "" }`.
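Submitting an answer closes the loop: the client reports what happened (answered, skipped, or timed out) plus its intervention capability, and the response carries feedback, rewards, and pacing. A sketch under the same auth assumption:

```typescript
declare const BASE_URL: string; // as defined in the sign-up sketch

interface SubmitAnswerRequest {
  sessionId: string;
  answer?: { choiceId: string };           // omit if skipped or timed out
  timeTookToAnswerMs: number;              // measured by the client
  state: "Answered" | "Skipped" | "TimedOut";
  capabilities: { intervention: boolean }; // can the game show an intervention right now?
  activeSessionDurationSec?: number;
}

async function submitAnswer(
  token: string,
  skillId: string,
  algorithmId: "practice" | "competition" | "review",
  questionId: string,
  payload: SubmitAnswerRequest,
) {
  const res = await fetch(
    `${BASE_URL}/learning/v1/skills/${skillId}/algorithms/${algorithmId}/questions/${questionId}/answers`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json", Authorization: `Bearer ${token}` },
      body: JSON.stringify(payload),
    },
  );
  if (!res.ok) throw new Error(`submitAnswer failed: ${res.status}`);
  const feedback = await res.json();
  // Honor the server's pacing hint before asking for the next question.
  await new Promise((r) => setTimeout(r, feedback.timeToNextQuestionSec * 1000));
  return feedback;
}
```

When the response carries `startReview`, practice has triggered a review of that fact set; a client would typically start a new question loop with the `review` algorithm and the returned `factSetId`.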
### Get progress summary

- **Method:** `GET`
- **Path:** `/learning/v1/skills/{skillId}/reports/progress-summary`
- **Tags:** Learning

Get a summary of learning progress for a skill.

#### Responses

##### Status: 200 Success

###### Content-Type: application/json

- **`progressSummary` (required)** `object`
  - **`factSets` (required)** `array` — Progress information for each fact set in the skill
    **Items:**
    - **`factSetId` (required)** `string`
    - **`factSetName` (required)** `string`
    - **`progress` (required)** `number` — Progress as a decimal from 0 to 1, based on weighted stage completion
    - **`stageType` (required)** `string`, possible values: `"grounding", "assessment", "practice", "review", "repetition", "mastered"` — The lowest learning stage among all facts in this set
    - **`reward`** `object` — Reward earned for this fact set
      - **`cosmetic`** `number`, default: `0` — Number of in-game cosmetic items awarded
      - **`xp`** `number`, default: `0` — Experience points earned
- **`skill` (required)** `object` — Identical to a skill item from **List available skills** (including `progress`, `state`, `availabilityStatus`, `competitionsCompletedToday`, and `lastWorkedOn`)

**Example:**

```json
{
  "skill": { "id": "", "name": "", "displayName": "", "metadata": { "operator": "x", "factorAName": "", "factorBName": "", "resultName": "" }, "state": "NOT_STARTED", "availabilityStatus": "Available", "lastWorkedOn": "", "progress": 0, "competitionsCompletedToday": 0 },
  "progressSummary": {
    "factSets": [ { "factSetId": "", "factSetName": "", "progress": 0, "stageType": "grounding", "reward": { "xp": 0, "cosmetic": 0 } } ]
  }
}
```

**Error responses:** `400 Bad Request`, `401 Unauthorized`, `404 Not Found`, and `500 Internal Server Error`, each returning `application/json` with a required `message` string, e.g. `{ "message": "" }`.
"", "factorBName": "", "resultName": "" }, "state": "NOT_STARTED", "availabilityStatus": "Available", "lastWorkedOn": "", "progress": 0, "competitionsCompletedToday": 0 }, "progressDetails": { "factSets": [ { "factSetId": "", "factSetName": "", "progress": 0, "stageType": "grounding", "reward": { "xp": 0, "cosmetic": 0 }, "facts": [ { "id": "", "text": "", "factorA": 1, "factorB": 1, "operator": "", "factSetId": "", "result": 1, "progress": 0, "stageType": "" } ] } ] } } ``` ##### Status: 400 Bad Request ###### Content-Type: application/json - **`message` (required)** `string` **Example:** ```json { "message": "" } ``` ##### Status: 401 Unauthorized ###### Content-Type: application/json - **`message` (required)** `string` **Example:** ```json { "message": "" } ``` ##### Status: 404 Not Found ###### Content-Type: application/json - **`message` (required)** `string` **Example:** ```json { "message": "" } ``` ##### Status: 500 Internal Server Error ###### Content-Type: application/json - **`message` (required)** `string` **Example:** ```json { "message": "" } ``` ### Get learning science deep dive - **Method:** `GET` - **Path:** `/learning/v1/skills/{skillId}/reports/learning-science-deep-dive` - **Tags:** Learning Get detailed learning science analytics and insights for a skill #### Responses ##### Status: 200 Success ###### Content-Type: application/json - **`learningScienceDeepDive` (required)** `object` - **`difficultyAdaptation` (required)** `object | null` — Dynamic difficulty adaptation metrics (null if not configured) - **`adaptationDirection` (required)** `number` — Direction of difficulty adaptation (-1: easier, 0: no change, 1: harder) - **`currentDifficulty` (required)** `object` — Current difficulty level configuration - **`label` (required)** `string` — Human-readable difficulty level name - **`value` (required)** `number` — Numeric difficulty level (higher = more difficult) - **`recentDifficultyAccuracy` (required)** `number` — Recent accuracy rate used for difficulty adaptation decisions - **`factItems` (required)** `array` — Detailed statistics for individual facts currently being learned **Items:** - **`accuracyPercentage` (required)** `number` — Historical accuracy rate for this fact as a decimal from 0 to 1 - **`consecutiveCorrect` (required)** `number` — Current streak of consecutive correct answers for this fact - **`consecutiveIncorrect` (required)** `number` — Current streak of consecutive incorrect answers for this fact - **`factId` (required)** `string` - **`factSetId` (required)** `string` - **`factSetName` (required)** `string` - **`lastSeenUtc` (required)** `string` — ISO timestamp when this fact was last presented to the user - **`nextReviewTime` (required)** `string | null` — ISO timestamp when this fact should next be reviewed (null if not applicable) - **`repetitionState` (required)** `string`, possible values: `"None", "TooSoon", "Optimal", "TooLate"` — Spaced repetition timing state for this fact - **`stageType` (required)** `string`, possible values: `"grounding", "assessment", "practice", "review", "repetition", "mastered"` — Current learning stage of this fact - **`overallStats` (required)** `object` — Overall learning statistics and progress metrics - **`completedFacts` (required)** `number` — Number of facts that have reached mastered stage - **`completedFactSets` (required)** `number` — Number of fact sets where all facts have reached mastered stage - **`currentStreak` (required)** `number` — Current streak of consecutive correct answers for this skill 
### Get learning science deep dive

- **Method:** `GET`
- **Path:** `/learning/v1/skills/{skillId}/reports/learning-science-deep-dive`
- **Tags:** Learning

Get detailed learning science analytics and insights for a skill.

#### Responses

##### Status: 200 Success

###### Content-Type: application/json

- **`learningScienceDeepDive` (required)** `object`
  - **`difficultyAdaptation` (required)** `object | null` — Dynamic difficulty adaptation metrics (null if not configured)
    - **`adaptationDirection` (required)** `number` — Direction of difficulty adaptation (-1: easier, 0: no change, 1: harder)
    - **`currentDifficulty` (required)** `object` — Current difficulty level configuration
      - **`label` (required)** `string` — Human-readable difficulty level name
      - **`value` (required)** `number` — Numeric difficulty level (higher = more difficult)
    - **`recentDifficultyAccuracy` (required)** `number` — Recent accuracy rate used for difficulty adaptation decisions
  - **`factItems` (required)** `array` — Detailed statistics for individual facts currently being learned
    **Items:**
    - **`accuracyPercentage` (required)** `number` — Historical accuracy rate for this fact as a decimal from 0 to 1
    - **`consecutiveCorrect` (required)** `number` — Current streak of consecutive correct answers for this fact
    - **`consecutiveIncorrect` (required)** `number` — Current streak of consecutive incorrect answers for this fact
    - **`factId` (required)** `string`
    - **`factSetId` (required)** `string`
    - **`factSetName` (required)** `string`
    - **`lastSeenUtc` (required)** `string` — ISO timestamp when this fact was last presented to the user
    - **`nextReviewTime` (required)** `string | null` — ISO timestamp when this fact should next be reviewed (null if not applicable)
    - **`repetitionState` (required)** `string`, possible values: `"None", "TooSoon", "Optimal", "TooLate"` — Spaced repetition timing state for this fact
    - **`stageType` (required)** `string`, possible values: `"grounding", "assessment", "practice", "review", "repetition", "mastered"` — Current learning stage of this fact
  - **`overallStats` (required)** `object` — Overall learning statistics and progress metrics
    - **`completedFacts` (required)** `number` — Number of facts that have reached the mastered stage
    - **`completedFactSets` (required)** `number` — Number of fact sets where all facts have reached the mastered stage
    - **`currentStreak` (required)** `number` — Current streak of consecutive correct answers for this skill
    - **`overallAccuracy` (required)** `number` — Overall accuracy rate as a decimal from 0 to 1
    - **`overallProgressPercent` (required)** `number` — Overall progress as a decimal (0-1 scale) based on weighted stage completion
    - **`stageDistribution` (required)** `object` — Count of facts at each learning stage (`grounding`, `assessment`, `practice`, `review`, `repetition`, `mastered`, each a `number`)
    - **`strugglingFactsCount` (required)** `number` — Number of facts with 2 or more consecutive incorrect answers
    - **`totalAttempts` (required)** `number` — Total number of answer attempts across all sessions for this skill
    - **`totalFacts` (required)** `number` — Total number of math facts in the skill
    - **`totalFactSets` (required)** `number` — Total number of fact sets in the skill
  - **`sessionStats` (required)** `object | null` — Statistics for the most recent learning session (null if no sessions)
    - **`accuracyPercentage` (required)** `number` — Session accuracy rate as a decimal from 0 to 1
    - **`correctAnswers` (required)** `number` — Number of unique questions answered correctly
    - **`correctQuestionsPerMinute` (required)** `number` — Rate of correct answers per minute
    - **`endTime` (required)** `string | null` — ISO timestamp when the session ended (null if ongoing)
    - **`incorrectAnswers` (required)** `number` — Number of unique questions answered incorrectly
    - **`questionsAnswered` (required)** `number` — Number of unique questions that received answers (correct or incorrect)
    - **`questionsDisplayed` (required)** `number` — Total number of questions shown in this session
    - **`questionsPerMinute` (required)** `number` — Rate of questions displayed per minute
    - **`questionsSkipped` (required)** `number` — Number of unique questions that were skipped
    - **`sessionDurationSec` (required)** `number` — Total session duration in seconds
    - **`sessionId` (required)** `string` — Unique identifier for the learning session
    - **`startTime` (required)** `string` — ISO timestamp when the session started
  - **`spacedRepetition` (required)** `object` — Spaced repetition system metrics
    - **`totalFactsInInterSessionRepetition` (required)** `number` — Number of facts in the repetition stage (inter-session repetition)
    - **`totalFactsInIntraSessionRepetition` (required)** `number` — Number of facts in the review stage (intra-session repetition)
    - **`totalFactsPastTheirDueTime` (required)** `number` — Number of review/repetition facts that are past their scheduled review time
  - **`workingMemory` (required)** `object` — Working memory load and capacity metrics
    - **`factsBeingLearned` (required)** `number` — Current number of facts being actively learned (not yet mastered)
    - **`factsBeingLearnedPercentage` (required)** `number` — Percentage of working memory capacity being used
    - **`maxFactsBeingLearned` (required)** `number` — Maximum number of facts that should be learned simultaneously at the current difficulty
    - **`state` (required)** `string`, possible values: `"Underloaded", "Optimal", "Overloaded"` — Current working memory load state
- **`skill` (required)** `object` — The skill's `displayName`, `id`, `metadata`, and `name`, with the same meanings as in **List available skills**

**Example:**

```json
{
  "skill": { "id": "", "name": "", "displayName": "", "metadata": { "operator": "x", "factorAName": "", "factorBName": "", "resultName": "" } },
  "learningScienceDeepDive": {
    "overallStats": {
      "totalFactSets": 1, "completedFactSets": 1, "totalFacts": 1, "completedFacts": 1,
      "overallProgressPercent": 1, "currentStreak": 1,
      "stageDistribution": { "grounding": 1, "assessment": 1, "practice": 1, "review": 1, "repetition": 1, "mastered": 1 },
      "totalAttempts": 1, "overallAccuracy": 1, "strugglingFactsCount": 1
    },
    "spacedRepetition": { "totalFactsInIntraSessionRepetition": 1, "totalFactsInInterSessionRepetition": 1, "totalFactsPastTheirDueTime": 1 },
    "difficultyAdaptation": { "currentDifficulty": { "value": 1, "label": "" }, "adaptationDirection": 1, "recentDifficultyAccuracy": 1 },
    "workingMemory": { "factsBeingLearned": 1, "maxFactsBeingLearned": 1, "factsBeingLearnedPercentage": 1, "state": "Underloaded" },
    "factItems": [
      { "factId": "", "factSetId": "", "factSetName": "", "stageType": "grounding", "accuracyPercentage": 1, "consecutiveCorrect": 1, "consecutiveIncorrect": 1, "lastSeenUtc": "", "repetitionState": "None", "nextReviewTime": null }
    ],
    "sessionStats": { "sessionId": "", "startTime": "", "endTime": null, "questionsDisplayed": 1, "questionsAnswered": 1, "questionsSkipped": 1, "correctAnswers": 1, "incorrectAnswers": 1, "sessionDurationSec": 1, "accuracyPercentage": 1, "questionsPerMinute": 1, "correctQuestionsPerMinute": 1 }
  }
}
```

**Error responses:** `400 Bad Request`, `401 Unauthorized`, `404 Not Found`, and `500 Internal Server Error`, each returning `application/json` with a required `message` string, e.g. `{ "message": "" }`.
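One concrete use of the deep dive: surfacing facts whose spaced-repetition review is overdue. A sketch:

```typescript
declare const BASE_URL: string; // as defined in the sign-up sketch

interface FactItem {
  factId: string;
  factSetName: string;
  repetitionState: "None" | "TooSoon" | "Optimal" | "TooLate";
  nextReviewTime: string | null;
}

async function listOverdueFacts(token: string, skillId: string): Promise<FactItem[]> {
  const res = await fetch(
    `${BASE_URL}/learning/v1/skills/${skillId}/reports/learning-science-deep-dive`,
    { headers: { Authorization: `Bearer ${token}` } }, // scheme assumed
  );
  if (!res.ok) throw new Error(`deep dive fetch failed: ${res.status}`);
  const body = await res.json();
  const items: FactItem[] = body.learningScienceDeepDive.factItems;
  return items.filter((f) => f.repetitionState === "TooLate");
}
```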
"Minuend", "Dividend") - **`factorBName` (required)** `string` — Display name for the second operand (e.g., "Addend", "Factor", "Subtrahend", "Divisor") - **`operator` (required)** `string`, possible values: `"x", "+", "-", "÷"` — Mathematical operator for this skill - **`resultName` (required)** `string` — Display name for the result (e.g., "Sum", "Product", "Difference", "Quotient") - **`name` (required)** `string` — Display name of the skill **Example:** ```json { "skill": { "id": "", "name": "", "displayName": "", "metadata": { "operator": "x", "factorAName": "", "factorBName": "", "resultName": "" } }, "learningScienceDeepDive": { "overallStats": { "totalFactSets": 1, "completedFactSets": 1, "totalFacts": 1, "completedFacts": 1, "overallProgressPercent": 1, "currentStreak": 1, "stageDistribution": { "grounding": 1, "assessment": 1, "practice": 1, "review": 1, "repetition": 1, "mastered": 1 }, "totalAttempts": 1, "overallAccuracy": 1, "strugglingFactsCount": 1 }, "spacedRepetition": { "totalFactsInIntraSessionRepetition": 1, "totalFactsInInterSessionRepetition": 1, "totalFactsPastTheirDueTime": 1 }, "difficultyAdaptation": { "currentDifficulty": { "value": 1, "label": "" }, "adaptationDirection": 1, "recentDifficultyAccuracy": 1 }, "workingMemory": { "factsBeingLearned": 1, "maxFactsBeingLearned": 1, "factsBeingLearnedPercentage": 1, "state": "Underloaded" }, "factItems": [ { "factId": "", "factSetId": "", "factSetName": "", "stageType": "grounding", "accuracyPercentage": 1, "consecutiveCorrect": 1, "consecutiveIncorrect": 1, "lastSeenUtc": "", "repetitionState": "None", "nextReviewTime": null } ], "sessionStats": { "sessionId": "", "startTime": "", "endTime": null, "questionsDisplayed": 1, "questionsAnswered": 1, "questionsSkipped": 1, "correctAnswers": 1, "incorrectAnswers": 1, "sessionDurationSec": 1, "accuracyPercentage": 1, "questionsPerMinute": 1, "correctQuestionsPerMinute": 1 } } } ``` ##### Status: 400 Bad Request ###### Content-Type: application/json - **`message` (required)** `string` **Example:** ```json { "message": "" } ``` ##### Status: 401 Unauthorized ###### Content-Type: application/json - **`message` (required)** `string` **Example:** ```json { "message": "" } ``` ##### Status: 404 Not Found ###### Content-Type: application/json - **`message` (required)** `string` **Example:** ```json { "message": "" } ``` ##### Status: 500 Internal Server Error ###### Content-Type: application/json - **`message` (required)** `string` **Example:** ```json { "message": "" } ``` ### Get aggregated daily activity - **Method:** `GET` - **Path:** `/learning/v1/dates/{date}/activity` - **Tags:** Learning Get the aggregated daily activity across all skills for a specific date #### Responses ##### Status: 200 Success ###### Content-Type: application/json - **`completedCompetitions` (required)** `number` - **`date` (required)** `string` - **`xp` (required)** `number` **Example:** ```json { "date": "", "xp": 1, "completedCompetitions": 1 } ``` ##### Status: 400 Bad Request ###### Content-Type: application/json - **`message` (required)** `string` **Example:** ```json { "message": "" } ``` ##### Status: 401 Unauthorized ###### Content-Type: application/json - **`message` (required)** `string` **Example:** ```json { "message": "" } ``` ##### Status: 404 Not Found ###### Content-Type: application/json - **`message` (required)** `string` **Example:** ```json { "message": "" } ``` ##### Status: 500 Internal Server Error ###### Content-Type: application/json - **`message` (required)** `string` 
### Force set stage

- **Method:** `PUT`
- **Path:** `/learning/v1/skills/{skillId}/fact-sets/{factSetId}/debug/force-set-stage`
- **Tags:** Debug

Debug endpoint to force-set the learning stage for a fact set.

#### Request Body

##### Content-Type: application/json

- **`stageType` (required)** `string`, possible values: `"grounding", "assessment", "practice", "review", "repetition", "mastered"`

**Example:**

```json
{ "stageType": "assessment" }
```

#### Responses

##### Status: 204 No Content

**Error responses:** `400 Bad Request`, `401 Unauthorized`, `404 Not Found`, and `500 Internal Server Error`, each returning `application/json` with a required `message` string, e.g. `{ "message": "" }`.

### Submit feedback

- **Method:** `POST`
- **Path:** `/miscellaneous/v1/feedback`
- **Tags:** Miscellaneous

Submit user feedback to the system.

#### Request Body

##### Content-Type: application/json

- **`message` (required)** `string`

**Example:**

```json
{ "message": "" }
```

#### Responses

##### Status: 204 No Content

**Error responses:** `400 Bad Request`, `401 Unauthorized`, `404 Not Found`, and `500 Internal Server Error`, each returning `application/json` with a required `message` string, e.g. `{ "message": "" }`.
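During integration testing it can help to jump a fact set straight to a target stage via the debug endpoint. A sketch, for debug environments only:

```typescript
declare const BASE_URL: string; // as defined in the sign-up sketch

type StageType = "grounding" | "assessment" | "practice" | "review" | "repetition" | "mastered";

async function forceSetStage(
  token: string,
  skillId: string,
  factSetId: string,
  stageType: StageType,
): Promise<void> {
  const res = await fetch(
    `${BASE_URL}/learning/v1/skills/${skillId}/fact-sets/${factSetId}/debug/force-set-stage`,
    {
      method: "PUT",
      headers: { "Content-Type": "application/json", Authorization: `Bearer ${token}` },
      body: JSON.stringify({ stageType }),
    },
  );
  if (res.status !== 204) throw new Error(`forceSetStage failed: ${res.status}`);
}
```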
### Get storage data

- **Method:** `GET`
- **Path:** `/miscellaneous/v1/storage/{key}`
- **Tags:** Miscellaneous

Retrieve user-specific storage data by key.

#### Responses

##### Status: 200 Success

###### Content-Type: application/json

- **`data` (required)** `string` — JSON string or text data. For non-sensitive client data such as game preferences, UI settings, or other state. Not for educational progress data.
- **`storageKey` (required)** `string` — Unique key for the stored data
- **`userId` (required)** `string` — ID of the user who owns this data

**Example:**

```json
{ "data": "", "storageKey": "", "userId": "" }
```

**Error responses:** `400 Bad Request`, `401 Unauthorized`, `404 Not Found`, and `500 Internal Server Error`, each returning `application/json` with a required `message` string, e.g. `{ "message": "" }`.

### Store data

- **Method:** `PUT`
- **Path:** `/miscellaneous/v1/storage/{key}`
- **Tags:** Miscellaneous

Store user-specific data by key.

#### Request Body

##### Content-Type: application/json

- **`data` (required)** `string` — JSON string or text data to store. For non-sensitive client data such as game preferences, UI settings, or other state. Not for educational progress data.

**Example:**

```json
{ "data": "" }
```

#### Responses

##### Status: 200 Success

###### Content-Type: application/json

- **`data` (required)** `string` — The stored JSON string or text data
- **`storageKey` (required)** `string` — Unique key for the stored data
- **`userId` (required)** `string` — ID of the user who owns this data

**Example:**

```json
{ "data": "", "storageKey": "", "userId": "" }
```

**Error responses:** `400 Bad Request`, `401 Unauthorized`, `404 Not Found`, and `500 Internal Server Error`, each returning `application/json` with a required `message` string, e.g. `{ "message": "" }`.

### Delete storage data

- **Method:** `DELETE`
- **Path:** `/miscellaneous/v1/storage/{key}`
- **Tags:** Miscellaneous

Delete user-specific storage data by key.

#### Responses

##### Status: 204 No Content

**Error responses:** `400 Bad Request`, `401 Unauthorized`, `404 Not Found`, and `500 Internal Server Error`, each returning `application/json` with a required `message` string, e.g. `{ "message": "" }`.
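A round-trip through the storage API, for non-sensitive client state only. The key name `game-prefs` is an arbitrary example, not one the API defines:

```typescript
declare const BASE_URL: string; // as defined in the sign-up sketch

async function savePrefs(token: string, prefs: unknown): Promise<void> {
  // "game-prefs" is an arbitrary example key
  const res = await fetch(`${BASE_URL}/miscellaneous/v1/storage/game-prefs`, {
    method: "PUT",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${token}` },
    body: JSON.stringify({ data: JSON.stringify(prefs) }), // data must be a string
  });
  if (!res.ok) throw new Error(`savePrefs failed: ${res.status}`);
}

async function loadPrefs<T>(token: string): Promise<T> {
  const res = await fetch(`${BASE_URL}/miscellaneous/v1/storage/game-prefs`, {
    headers: { Authorization: `Bearer ${token}` }, // scheme assumed
  });
  if (!res.ok) throw new Error(`loadPrefs failed: ${res.status}`);
  const { data } = (await res.json()) as { data: string };
  return JSON.parse(data) as T;
}

// DELETE on the same path removes the key and returns 204 No Content.
```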
### List leaderboards

- **Method:** `GET`
- **Path:** `/miscellaneous/v1/leaderboards`
- **Tags:** Leaderboard

Get a list of all available leaderboards.

#### Responses

##### Status: 200 Success

###### Content-Type: application/json

- **`leaderboards` (required)** `array` — Available leaderboards
  **Items:**
  - **`id` (required)** `string` — Unique leaderboard configuration identifier
  - **`name` (required)** `string` — Human-readable leaderboard name
  - **`sortDirection` (required)** `string`, possible values: `"asc", "desc"` — Sort direction; ascending means a lower score is better
  - **`unit` (required)** `string` — Unit of measurement (points, seconds, etc.)

**Example:**

```json
{ "leaderboards": [ { "id": "", "name": "", "unit": "", "sortDirection": "asc" } ] }
```

**Error responses:** `400 Bad Request`, `401 Unauthorized`, `404 Not Found`, and `500 Internal Server Error`, each returning `application/json` with a required `message` string, e.g. `{ "message": "" }`.

### Submit score

- **Method:** `POST`
- **Path:** `/miscellaneous/v1/leaderboards/{leaderboardId}/scores`
- **Tags:** Leaderboard

Submit a new score to the leaderboard. Example: `POST /leaderboards/practice-scores/scores?category=addition&subcategory=school123`

#### Request Body

##### Content-Type: application/json

- **`score` (required)** `number` — Score to submit

**Example:**

```json
{ "score": 1 }
```

#### Responses

##### Status: 201 Created

###### Content-Type: application/json

- **`achievedAt` (required)** `string`, format: `date-time` — When the score was achieved
- **`categoryId` (required)** `string` — Category identifier
- **`id` (required)** `string` — Unique score identifier
- **`leaderboardId` (required)** `string` — Leaderboard configuration identifier
- **`score` (required)** `number` — Score achieved
- **`userId` (required)** `string` — User who achieved this score
- **`weekStart` (required)** `string` — Week start date (YYYY-MM-DD)
- **`subcategory`** `string` — Optional subcategory identifier (e.g., organizationId)

**Example:**

```json
{ "id": "", "userId": "", "weekStart": "", "score": 1, "categoryId": "", "subcategory": "", "leaderboardId": "", "achievedAt": "" }
```

**Error responses:** `400 Bad Request`, `401 Unauthorized`, `404 Not Found`, and `500 Internal Server Error`, each returning `application/json` with a required `message` string, e.g. `{ "message": "" }`.
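Submitting a score, with `category` and `subcategory` passed as query parameters as in the endpoint's own example:

```typescript
declare const BASE_URL: string; // as defined in the sign-up sketch

async function submitScore(
  token: string,
  leaderboardId: string,
  score: number,
  category: string,
  subcategory?: string,
) {
  const params = new URLSearchParams({ category });
  if (subcategory) params.set("subcategory", subcategory);
  const res = await fetch(
    `${BASE_URL}/miscellaneous/v1/leaderboards/${leaderboardId}/scores?${params}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json", Authorization: `Bearer ${token}` },
      body: JSON.stringify({ score }),
    },
  );
  if (res.status !== 201) throw new Error(`submitScore failed: ${res.status}`);
  return res.json();
}
```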
### Get weekly leaderboard

- **Method:** `GET`
- **Path:** `/miscellaneous/v1/leaderboards/{leaderboardId}/weekly/{weekStart}`
- **Tags:** Leaderboard

Get top performers and the current user's position for the week. Example: `GET /leaderboards/practice-scores/weekly/current?category=addition&subcategory=school123&limit=20`

#### Responses

##### Status: 200 Success

###### Content-Type: application/json

- **`entries` (required)** `array` — Top entries with ranks
  **Items:**
  - **`achievedAt` (required)** `string`, format: `date-time` — When the best score was achieved
  - **`playerName` (required)** `string` — Player display name
  - **`rank` (required)** `number` — Current rank
  - **`score` (required)** `number` — Best score for this period
  - **`unit` (required)** `string` — Unit of measurement for the score
  - **`userId` (required)** `string` — User ID
  - **`playerImage`** `string` — Player avatar URL
- **`totalCount` (required)** `number` — Total number of players with scores in this category
- **`current`** `object` — Current user entry (if any); same fields as an item of `entries`

**Example:**

```json
{
  "entries": [ { "userId": "", "score": 1, "achievedAt": "", "rank": 1, "playerName": "", "playerImage": "", "unit": "" } ],
  "current": { "userId": "", "score": 1, "achievedAt": "", "rank": 1, "playerName": "", "playerImage": "", "unit": "" },
  "totalCount": 1
}
```

**Error responses:** `400 Bad Request`, `401 Unauthorized`, `404 Not Found`, and `500 Internal Server Error`, each returning `application/json` with a required `message` string, e.g. `{ "message": "" }`.
### Get global leaderboard

- **Method:** `GET`
- **Path:** `/miscellaneous/v1/leaderboards/{leaderboardId}/global`
- **Tags:** Leaderboard

Get all-time top performers and the current user's position. Example: `GET /leaderboards/practice-scores/global?category=addition&subcategory=school123&limit=20`

#### Responses

##### Status: 200 Success

###### Content-Type: application/json

- **`entries` (required)** `array` — Top entries with ranks; same item fields as the weekly leaderboard
- **`totalCount` (required)** `number` — Total number of players with scores in this category
- **`current`** `object` — Current user entry if not in the top entries; same fields as an item of `entries`

**Example:**

```json
{
  "entries": [ { "userId": "", "score": 1, "achievedAt": "", "rank": 1, "playerName": "", "playerImage": "", "unit": "" } ],
  "current": { "userId": "", "score": 1, "achievedAt": "", "rank": 1, "playerName": "", "playerImage": "", "unit": "" },
  "totalCount": 1
}
```

**Error responses:** `400 Bad Request`, `401 Unauthorized`, `404 Not Found`, and `500 Internal Server Error`, each returning `application/json` with a required `message` string, e.g. `{ "message": "" }`.
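The weekly and global boards share a response shape, so one helper can serve both; `weekStart` accepts `current` per the weekly endpoint's example.

```typescript
declare const BASE_URL: string; // as defined in the sign-up sketch

interface LeaderboardEntry {
  userId: string;
  playerName: string;
  playerImage?: string;
  rank: number;
  score: number;
  unit: string;
  achievedAt: string;
}

interface LeaderboardResponse {
  entries: LeaderboardEntry[];
  current?: LeaderboardEntry;
  totalCount: number;
}

async function getLeaderboard(
  token: string,
  leaderboardId: string,
  scope: "global" | "weekly",
  category: string,
  weekStart = "current",
): Promise<LeaderboardResponse> {
  const path =
    scope === "global"
      ? `${BASE_URL}/miscellaneous/v1/leaderboards/${leaderboardId}/global`
      : `${BASE_URL}/miscellaneous/v1/leaderboards/${leaderboardId}/weekly/${weekStart}`;
  const params = new URLSearchParams({ category, limit: "20" });
  const res = await fetch(`${path}?${params}`, {
    headers: { Authorization: `Bearer ${token}` }, // scheme assumed
  });
  if (!res.ok) throw new Error(`leaderboard fetch failed: ${res.status}`);
  return (await res.json()) as LeaderboardResponse;
}
```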