00 Overture
Built with Opus 4.7 · A Claude Code Hackathon
Apr 21–28, 2026 · Anthropic × Cerebral Valley

Mora.

A different door into reading — for kids who learn differently, in a language that isn’t home yet.

An iPad-first, fully on-device dyslexia + ESL tutor for L1-Japanese learners. Orton-Gillingham phonics, a yokai mentor, and a CoreML wav2vec2 phoneme model that hears L1-Japanese substitutions and turns them into coaching — never red Xs.

Mentor 01 · /ʃ/ · a pale teal seal-like yokai with a captain's hat
Mentor 02 · /r/ · a red horned-rabbit yokai with a rainbow scarf
Mentor 03 · /æ/ · a red apple-shaped yokai with leaves and an axe

01 The Ask

I built Mora for one kid I love.

A child close to me lives with dyslexia. Last year, our family moved across an ocean, and English became the second language they'd learn to read. The local school's IEP, the structured-literacy support plan that would actually help, is gated behind finishing ESL first: a one-year delay before the help they need can even begin.

Barton tutoring, the gold-standard Orton-Gillingham program, is wonderful and out of reach for a family-funded effort. We were the only ones in a position to fix this. So I built the tool I wished existed — built around the way this kid learns, the language they came from, and a daily mentor they want to come back to.

Because the user is a child, the answer to “does any of this leave the iPad?” has to be no. No raw audio. No transcripts. No per-trial details. The pronunciation model runs entirely in CoreML on the device. The only cloud touchpoint we’ve even considered is a future Parent Mode sync over CloudKit private DB — and that isn’t in v1.


02 Three-Minute Demo

One iPad. Five days. The whole loop.

Parent → child handoff. Yokai weekly intro, tile-board decode, on-device pronunciation feedback, decodable sentences. Recorded on a real iPad, no cuts to a simulator.

Demo video · 3:00 · 1080p, 16:9 · YouTube unlisted, for judge review

03 What It Does

Four pieces. One daily quest.

Each day is a short A-Day session: warmup → decode → sentences → summary. The yokai shell turns it into a quest the kid wants to come back to, instead of a drill they have to be coaxed into.
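
For the code-curious, that fixed daily order could be modeled as a plain enum. A minimal sketch with illustrative names (SessionPhase, next); this is not Mora's actual orchestrator API.

// Hypothetical sketch of the daily loop's fixed phase order.
enum SessionPhase: CaseIterable {
    case warmup, decode, sentences, summary

    // Next phase in the quest, or nil once the summary is done.
    var next: SessionPhase? {
        let all = Self.allCases
        guard let i = all.firstIndex(of: self), i + 1 < all.count else { return nil }
        return all[i + 1]
    }
}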

01 · WARMUP

Yokai mentor

Each week, a hand-illustrated yokai mentor appears with a voiced clip, designed around the kid's actual interests. The mentors are what make daily phonics practice feel like a quest rather than a drill.

02 · DECODE

Tile-board decoding

Multisensory phonics — Orton-Gillingham’s core — means moving the letters yourself. Every word is built only from graphemes the learner has explicitly mastered. Decodable, never guessed.
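
What "built only from mastered graphemes" could look like in code: a minimal sketch, assuming a mastered-grapheme set on the learner profile; isDecodable and the greedy longest-match are illustrative, not Mora's actual engine.

// Hypothetical sketch: a word is decodable only if it can be assembled
// entirely from graphemes the learner has already mastered. Greedy
// longest-match is a simplification of a real decomposition.
func isDecodable(_ word: String, mastered: Set<String>) -> Bool {
    var rest = word.lowercased()[...]
    while !rest.isEmpty {
        // Prefer multi-letter graphemes ("sh", "th") over single letters.
        guard let match = mastered
            .filter({ rest.hasPrefix($0) })
            .max(by: { $0.count < $1.count })
        else { return false }
        rest = rest.dropFirst(match.count)
    }
    return true
}

// "ship" decodes against a mastered set of sh, i, p;
// "chip" won't until "ch" is introduced.
isDecodable("ship", mastered: ["sh", "i", "p"])  // true
isDecodable("chip", mastered: ["sh", "i", "p"])  // false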

03 · SPEAK

Pronunciation feedback

An INT8-quantized wav2vec2 phoneme-posterior model runs in CoreML on-device. The engine catches L1-Japanese substitutions and surfaces them as coaching, not red Xs.
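
Roughly what the inference step looks like in CoreML terms. A hedged sketch: the feature names ("audio", "logits") and the [frames, classes] shape are assumptions; only the general MLModel prediction flow is standard CoreML.

import CoreML

// Hedged sketch: score one window of audio and take the top phoneme per
// frame. Feature names and shapes are assumptions, not Mora's interface.
func topPhonemes(model: MLModel, audio: MLMultiArray, labels: [String]) throws -> [String] {
    let input = try MLDictionaryFeatureProvider(dictionary: ["audio": audio])
    let output = try model.prediction(from: input)
    guard let logits = output.featureValue(for: "logits")?.multiArrayValue else { return [] }

    // Assumed shape: [frames, phoneme classes]; argmax over classes.
    let frames = logits.shape[0].intValue
    let classes = logits.shape[1].intValue
    return (0..<frames).compactMap { f in
        (0..<classes).max { a, b in
            logits[[f, a] as [NSNumber]].doubleValue < logits[[f, b] as [NSNumber]].doubleValue
        }.map { labels[$0] }
    }
}

Comparing the per-frame winners against the target phonemes, filtered through the L1Profile described in the next section, is what turns a raw posterior into coaching instead of a binary verdict.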

04 · SENTENCES

Decodable sentences

Each sentence is generated against the learner's mastered grapheme set. The yokai voices the line first; tiles fill as the kid speaks. Difficulty ramps from short lines on Day 1 to full sentences on Day 5.
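
The ramp itself is simple to state. A sketch with made-up numbers; the real Day 1 → Day 5 pacing isn't specified here.

// Hypothetical sketch of the Day 1 → Day 5 ramp as a per-day cap on
// sentence length. The specific numbers are illustrative only.
func maxSentenceWords(onDay day: Int) -> Int {
    switch day {
    case 1:    return 3   // short openers
    case 2, 3: return 5
    case 4:    return 7
    default:   return 9   // Day 5: full decodable sentences
    }
}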

04 L1-Aware Coaching

What L1-Japanese sounds like, mathematically.

English-only ASR systems flag a Japanese-speaking kid’s pronunciation as “wrong.” Mora’s L1Profile protocol encodes the substitutions a Japanese-speaking child predictably produces — so the model treats them as known interferences to coach through, not failures.

heard /b/ ← target /v/ · No /v/ in JP; defaults to bilabial
heard /r/ ← target /l/ · JP ら-row → English /l/, /ɹ/, or tap
heard /s/ ← target /ʃ/ · /ʃ/ neutralized before /i/
heard /s/ ← target /θ/ · /θ/ absent; alveolar fricative substitutes
heard /u/ ← target schwa · Vowel reduction unfamiliar to JP phonotactics
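
A sketch of how a table like this could sit behind MoraCore's L1Profile protocol. The source confirms the protocol and the substitutions; every name below (Substitution, knownSubstitutions, JapaneseProfile) is illustrative.

// Hypothetical shape for the L1Profile seam; API names are illustrative.
struct Substitution {
    let target: String  // the English phoneme the learner aimed for
    let heard: String   // what an L1-shaped articulation registers as
    let note: String    // one-line coaching rationale
}

protocol L1Profile {
    var languageCode: String { get }
    var knownSubstitutions: [Substitution] { get }
}

struct JapaneseProfile: L1Profile {
    let languageCode = "ja"
    let knownSubstitutions = [
        Substitution(target: "v", heard: "b", note: "No /v/ in JP; defaults to bilabial"),
        Substitution(target: "l", heard: "r", note: "JP ら-row: /l/, /ɹ/, or a tap"),
        Substitution(target: "ʃ", heard: "s", note: "Contrast neutralized before /i/"),
        Substitution(target: "θ", heard: "s", note: "/θ/ absent; fricative substitutes"),
        Substitution(target: "ə", heard: "u", note: "Vowel reduction unfamiliar in JP"),
    ]
}

A Korean or English profile conforms to the same protocol, which is the multilingual seam Section 07 returns to.
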
05 The Numbers

Five days. A working app.

Hackathon constraints met head-on. Every number measured, not estimated.

100+ Merged PRs · Apr 21–26
5 SPM packages · Core ← Engines ← UI
6→1 Hours saved · by parallel sub-agents
25.84 MB CoreML model · INT8 wav2vec2
0 Cloud calls · at runtime
· Times we'll iterate next

06 Built with Opus 4.7

The model isn’t a coworker. It’s a workshop.

I’m not a Swift engineer by trade. Three Claude Code workflow patterns let one person ship a real iPad app in five days.

i.

Spec → Plan → Execute, repeat.

Every PR is backed by a written spec under docs/superpowers/specs/ and an implementation plan under docs/superpowers/plans/. The repo doubles as its own architecture record.

$ git log --oneline | wc -l
112
$ ls docs/superpowers/specs | wc -l
38

ii.

Parallel sub-agents on isolated worktrees.

A four-phoneme decodable-sentence content batch that would have taken six hours sequentially finished in one. Each phase ran as a sub-agent in its own git worktree: no cross-batch state, no merge conflicts.

$ git worktree list
main           [base]
wt-phoneme-r   ▸ batch-r
wt-phoneme-l   ▸ batch-l
wt-phoneme-sh  ▸ batch-sh
wt-phoneme-th  ▸ batch-th

iii.

OSLog → Claude’s context, live.

I authored a Claude Code skill, oslog-stream-to-file, that pipes Apple OSLog from the running iPad into /tmp/<repo>.log, which Claude reads directly. The debug round-trip collapsed from "screenshot Console.app" to a sentence in chat.

[skill] oslog-stream-to-file ► tail -F /tmp/mora.log
15:02:11 [Orchestrator] phase=decode word=ship
15:02:13 [EngineA] ∕ʃ∕ matched · conf=0.92
15:02:14 [EngineA] ∕v∕ → ∕b∕ substitution
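
The app side of that pipe is plain OSLog. A minimal sketch, assuming a subsystem string of my own invention; the real identifiers aren't in this document.

import OSLog

// Minimal sketch of the app-side logging the skill streams out.
// Subsystem/category values are assumptions, not Mora's actual strings.
let log = Logger(subsystem: "dev.mora.app", category: "Orchestrator")

func beginDecode(word: String) {
    // Lands in /tmp/mora.log via the oslog-stream-to-file skill.
    log.info("phase=decode word=\(word, privacy: .public)")
}

The .public annotation matters: OSLog redacts interpolated values by default, and a redacted word would be useless in Claude's context.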

07 Architecture

Five Swift packages. One way down.

A thin iOS app target wires SwiftData and presents RootView. All real logic lives in five local SPM packages with strictly one-way dependencies — the kind of layering that lets a single author keep moving without untangling cycles every other day.

UI · MoraUI · SwiftUI, observes orchestrator
02 · MoraEngines · SessionOrchestrator · Engine A · Engine B
01 · MoraCore · Domain · L1Profile · SwiftData
 × · MoraTesting · cross-cutting
 × · MoraMLX · cross-cutting

Core ← Engines ← UI, with MoraTesting and MoraMLX cross-cutting both. No upward edges. No cycles. The thin Mora/ app target wires SwiftData persistence and presents the root view.
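
In SPM terms, the layering reduces to dependency edges that only point downward. A sketch collapsed into one manifest for brevity (the repo uses five separate local packages); everything beyond the package names above is an assumption.

// swift-tools-version:5.9
import PackageDescription

// Sketch of the one-way layering: UI → Engines → Core, with the two
// cross-cutting packages hanging off the same spine. Illustrative only.
let package = Package(
    name: "Mora",
    targets: [
        .target(name: "MoraCore"),                                 // 01: depends on nothing
        .target(name: "MoraEngines", dependencies: ["MoraCore"]),  // 02: Core only
        .target(name: "MoraUI", dependencies: ["MoraEngines"]),    // UI: Engines, transitively Core
        .target(name: "MoraMLX", dependencies: ["MoraEngines"]),   // cross-cutting model host
        .testTarget(name: "MoraTesting",
                    dependencies: ["MoraCore", "MoraEngines", "MoraUI"]),
    ]
)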

MoraMLX hosts the bundled INT8 wav2vec2 CoreML model used at runtime, exposed through narrow protocols defined in MoraEngines. MoraUI deliberately doesn’t depend on MLX — the rendering layer never knows what model is running.
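
What a narrow seam like that can look like: a sketch assuming names (PhonemeScorer, Wav2Vec2Scorer) that are not from the repo; only the dependency direction is from the source.

// Defined in MoraEngines: the seam the rest of the app sees.
public protocol PhonemeScorer {
    // Per-phoneme posterior scores for one recorded attempt.
    func score(samples: [Float]) async throws -> [String: Float]
}

// Defined in MoraMLX: the only module that knows a CoreML model exists.
// MoraUI never imports this; it talks to PhonemeScorer alone.
final class Wav2Vec2Scorer: PhonemeScorer {
    func score(samples: [Float]) async throws -> [String: Float] {
        // ...run the bundled INT8 wav2vec2 model here...
        return [:]
    }
}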

MoraCore’s L1Profile protocol is the seam that makes Mora multilingual without rewrites. Korean and English profiles, age-driven difficulty (entry / core / advanced), and an in-app language switch are all queued behind it.

// invariant
No raw audio, transcripts, or per-trial details
leave the device. Pronunciation feedback runs
entirely in CoreML, on the iPad you hold.

08 Try It

Clone, generate, build.

No build server. No paid services. xcodegen regenerates the project; xcodebuild runs it against the iOS Simulator. Targets iPad-first; runs on iPhone for development.

# 1. clone the repo
git clone https://github.com/youtalk/mora.git
cd mora

# 2. fetch + SHA-verify the wav2vec2 CoreML model
bash tools/fetch-models.sh

# 3. regenerate Xcode project from project.yml
xcodegen generate

# 4. open in Xcode → run on iPad sim
open Mora.xcodeproj