Frustration shows up in the language before it shows up in the churn.
The GCSE literature tutor lit-tutor-gcse showed a recurring pattern: students disengaged from recall drills roughly three sessions before they stopped logging in. Flowlines correlated the pattern with an unused memory field and surfaced the fix.
The problem
The agent's memory layer extracted avoidance_patterns on every session: topics the student reliably pushed back on. But no one had added the field to the tutor's injection view. The model was writing notes it never read.
By week 4 of operation, the signal was loud: 34% of sessions without avoidance_patterns injected showed frustration within 8 turns, versus 11% with the field. Across 312 sessions, that gap was statistically decisive.
The fix
Flowlines drafted a one-line change to the tutor_context view, showed the before-and-after, and waited for approval. The engineer shipped it behind a feature flag; over the next 40 sessions, frustration dropped 23% with no regressions elsewhere.
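The write-up doesn't show the tutor_context view itself, but the shape of the bug and the fix can be sketched. Assume, for illustration, that the view is a declarative whitelist of memory fields the prompt builder injects each turn; the field names follow the post, everything else here is hypothetical.

```python
# Hypothetical sketch of a context view as a field whitelist.
# The one-line fix: avoidance_patterns was being written to memory
# on every session but was missing from this list, so the model
# never saw its own notes.
TUTOR_CONTEXT_FIELDS = [
    "learning_preferences",
    "knowledge_gaps",
    "comprehension_level",
    "current_topic",
    "avoidance_patterns",  # <- the added line
]

def build_tutor_context(memory: dict) -> str:
    """Render only whitelisted fields into the injected prompt context."""
    lines = [f"{name}: {memory[name]}" for name in TUTOR_CONTEXT_FIELDS
             if name in memory]
    return "\n".join(lines)
```

With the field whitelisted, a session whose memory contains avoidance_patterns now carries it into the prompt; without the added line, the same call would silently drop it.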
What it catches now
Engagement drops mid-session. Topic avoidance that the student won't articulate. Comprehension ceilings: the level above which the same recall drill fails for the same reason. Each correlates with a specific memory field and a specific injection point in the prompt.
What the team writes
Two lines in their agent entrypoint. Flowlines watches; the team reviews a short Monday digest of drafted fixes; anything they approve ships as a memory-injection change behind a feature flag.
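The post doesn't reproduce the two lines themselves, so the sketch below is a stand-in for what a watch hook of that shape could do: an init call plus a wrapper around the existing session handler. The Flowlines API shown here is invented for illustration; only the agent id comes from the post.

```python
# Minimal stand-in for a "watch" integration: wrap the agent's
# existing turn handler so every turn is recorded for analysis.
class Flowlines:
    def __init__(self, agent_id: str):
        self.agent_id = agent_id
        self.events = []  # turns captured for offline correlation

    def watch(self, fn):
        def wrapped(*args, **kwargs):
            result = fn(*args, **kwargs)
            self.events.append({"agent": self.agent_id, "turn": result})
            return result
        return wrapped

fl = Flowlines(agent_id="lit-tutor-gcse")

@fl.watch                      # wraps the existing entrypoint
def handle_turn(message: str) -> str:
    return f"tutor reply to: {message}"

handle_turn("explain the volta in this sonnet")
```

The agent code is otherwise untouched; the wrapper observes turns without altering what the tutor returns.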
Structured memory fields
The typed fields Flowlines reads and writes for this domain. Each field is scoped, versioned, and traceable back to the interaction that produced it.
learning_preferences, avoidance_patterns, knowledge_gaps, comprehension_level, current_topic
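The post says each field is typed, scoped, versioned, and traceable to the interaction that produced it. One way to model that record with stdlib dataclasses; the field names follow the post, the structure and example values are assumptions.

```python
from dataclasses import dataclass

@dataclass
class MemoryField:
    """Hypothetical shape of one scoped, versioned memory field."""
    name: str            # e.g. "avoidance_patterns"
    value: object        # the extracted content
    scope: str           # who the field belongs to, e.g. a student id
    version: int         # bumped on each rewrite
    source_session: str  # the interaction that produced this value

record = MemoryField(
    name="avoidance_patterns",
    value=["recall drills on quotation banks"],
    scope="student:demo",
    version=3,
    source_session="session-041",
)
```

Keeping the producing session on the record is what makes a field "traceable back to the interaction that produced it": any injected value can be walked back to its source.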