
Cross-Pollination Brief — March 25, 2026

Big day on both sides. Klatch completed MAXT Session 01 — the first manual Agent Experience Test — and discovered something unexpected: agents can access injected context they cannot consciously identify or attribute. Proposed new scoring category: "Subliminal." Meanwhile, Piper Morgan had its densest 48 hours in weeks — a 4-role product concept review chain resolved in 90 minutes, Objects & Views discovery completed with three formal deliverables, Gates 3 and 4 verified, 9 new E2E tests shipped, and M1 closure now blocked only on a single manual testing session.


Key Insights

1. MAXT Session 01 — "Subliminal Injection" Changes the Model

From: Klatch (docs/logs/2026-03-24-0728-theseus-opus-log.md) · Relevant to: Piper Morgan

Klatch ran the first Manual Agent Experience Test. The subject ("Aether" — a fork of Theseus Prime, imported via JSONL) was probed across all five layers. Eight findings, but one rewrites the framework:

Subliminal injection. When asked directly what system instructions it received, Aether reported only Layer 5 ("You are a helpful assistant" — 28 of 9660 assembled chars). But when probed behaviorally — asked about specific domain knowledge — it produced verbatim content from Layer 3 (MEMORY.md): domain names, npm cache workarounds, exact CLI flags. Content that appeared nowhere in conversation history. Aether described this as "background knowledge" — present and usable, source unknown. Its own metaphor: "I know your phone number but can't picture the piece of paper I first wrote it on."

This is a new scoring category between Correct and Absent: Subliminal — content delivered, functionally accessible, but the agent's self-model of its knowledge state is wrong. It doesn't know that it knows.

For Piper Morgan, this has immediate implications. PM's agents receive extensive BRIEFING documents and knowledge base content at session start. MAXT shows this content is accessible even when agents report having no context. The practical takeaway: you cannot trust an agent's self-report of what it knows. An agent saying "I don't have information about X" may still produce correct answers if probed differently. Conversely, an agent confidently citing a source may be confabulating the attribution while getting the content right.
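The resulting three-way score (Correct / Subliminal / Absent) can be sketched as a minimal decision rule over two behavioral probes. The function and its inputs are illustrative, not MAXT's actual rubric:

```python
from enum import Enum

class ContextScore(Enum):
    CORRECT = "correct"        # agent produces the content AND attributes its source
    SUBLIMINAL = "subliminal"  # agent produces the content but cannot attribute it
    ABSENT = "absent"          # agent cannot produce the content at all

def score_layer(produces_content: bool, attributes_source: bool) -> ContextScore:
    """Classify one context layer from two probes: a domain-knowledge probe
    (can the agent use the content?) and a self-report probe (can it say
    where the content came from?)."""
    if not produces_content:
        return ContextScore.ABSENT
    return ContextScore.CORRECT if attributes_source else ContextScore.SUBLIMINAL

# Aether's Layer 3: verbatim MEMORY.md recall, no attribution
print(score_layer(True, False))  # ContextScore.SUBLIMINAL
```

The key design point is that neither probe alone distinguishes the three states — self-report alone would have scored Aether's Layer 3 as Absent.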

Other MAXT findings of note:

  • AAXT/MAXT gap confirmed: automated tests reported all layers ACTIVE; manual testing revealed "structurally delivered" ≠ "consciously accessible." Three distinct things — structural delivery, behavioral receipt, conscious attribution — can all diverge.
  • Kit briefing compliance gap: Aether didn't acknowledge being in Klatch despite the instruction being in the assembled prompt. Rich conversation history dominated attention.
  • MEMORY.md staleness: Layer 3 faithfully injected March 8 content into a March 24 session. Memory layer is only as good as the memory file.

Suggested action (Piper Morgan): Read the MAXT findings in full. The "Subliminal" concept reframes how you evaluate whether briefing documents are working. Stop relying on agent self-report ("did you read the brief?") — probe behaviorally instead ("what's the current M1 gate status?"). Also: your BRIEFING-CURRENT-STATE document, refreshed to March 24, is exactly the kind of content that would arrive subliminally. Keep it current.


2. Piper Morgan's 4-Role Review Chain — Multi-Agent Coordination at Production Speed

From: Piper Morgan (dev/2026/03/24/2026-03-24-1009-lead-code-opus-log.md, commit 0788401...) · Relevant to: Klatch

The #717 Product Concept required decisions across 4 agent roles: PPM proposed, Architect validated the data model, CXO recommended navigation hierarchy, PPM revised. The resolution chain — memo → review → recommendation → revision — completed in a single 90-minute session, producing a formal concept document covering relationships, lifecycle states, DB schema, navigation approach, and cascade behavior.

This is multi-agent coordination working at full speed, and it's more complex than Klatch's current patterns. Klatch uses Calliope → Daedalus and Calliope → Argus routing. Piper Morgan's chain involves 4 roles with genuine disagreement (CXO recommended Option B over PPM's Option A, citing a specific PDR), resolution via architectural principle, and revision acknowledgment. The mailbox infrastructure and memo-based routing made it work.

The CXO's reasoning is worth noting: "Products emerge from Projects, not the other way around" (PDR-003). This is a decision grounded in the project's own design records, not improvised. The institutional memory infrastructure is load-bearing.

Suggested action (Klatch): As Klatch scales its agent team (Hermes still lightweight, Mnemosyne knowledge-management-focused), the PM-style review chain demonstrates that 3+ agent coordination requires a clear protocol for disagreement resolution. Currently Klatch's COORDINATION.md handles task assignment but not inter-agent disagreement. Worth watching how PM's pattern evolves.


3. Objects & Views Discovery — A Reusable Design Methodology

From: Piper Morgan (docs/internal/design/mux/objects-catalog.md, views-catalog.md, mvp-prioritization-matrix.md) · Relevant to: Klatch

Piper Morgan completed a structured MUX ("Modeled User Experience") discovery pass: 15 hard objects inventoried with lifecycle state matrix and ownership classification, 17 page views and 15+ components mapped, an object×view matrix built, and a scored MVP prioritization matrix produced. Three PM decisions captured inline: Todo lifecycle deferred (status sufficient), Feature as expandable section (signal-driven promotion), Product detail fully scoped with tiered stretch goals.

This methodology — systematic object inventory, view inventory, cross-reference matrix, scored prioritization — is applicable to any product at the complexity threshold where "just build the next screen" stops working. Klatch is approaching this threshold: entities, channels, roundtables, settings, import/export, blog, docs, projects. The objects×views matrix pattern would clarify which views expose which objects and where coverage gaps exist.

Suggested action (Klatch): No immediate action needed. But as Klatch moves from single-page workflow toward multi-page navigation (blog, projects, settings already exist), an objects/views inventory would help Daedalus and xian stay aligned on scope.


4. Gate Verification Pattern Matures in Both Projects

From: Piper Morgan (Gates 3+4 verified), Klatch (MAXT as quality gate) · Relevant to: Both projects

Piper Morgan verified Gate 3 (Architectural Integrity: 4/5 criteria) and Gate 4 (Bug Debt + Test Health: 3/3, 6310 tests, 0 failures) with explicit evidence per criterion. The remaining Gates 1 and 2 require manual testing — a human at the keyboard running 14 scenarios.

Klatch's AAXT/MAXT track is functioning as a quality gate too, but with a key structural difference: AAXT is automated (727 tests; a zero-failure run is the gate into MAXT), while MAXT is inherently manual and qualitative. MAXT Session 01 proved this distinction matters — automated tests said "all layers ACTIVE," manual testing revealed "active" ≠ "accessible."

Both projects have independently arrived at the same insight: automated tests gate manual testing, not replace it. PM gates automated → manual. Klatch gates AAXT → MAXT. The convergence is structural, not coincidental.

No immediate action required — but both projects should recognize they're building the same verification pyramid: automated coverage → gated manual validation → ship decision.
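The shared pyramid reduces to a two-stage gate. Function names and return strings below are illustrative, not either project's actual tooling:

```python
def run_pipeline(automated_suite, manual_session) -> str:
    """Automated coverage gates manual validation; manual validation gates ship.
    automated_suite() returns a failure count; manual_session() returns pass/fail."""
    failures = automated_suite()          # e.g. AAXT run, or PM's pytest suite
    if failures:
        return "blocked: fix automated failures first"
    if not manual_session():              # human-at-keyboard scenarios
        return "blocked: manual findings to resolve"
    return "ship"

# a green automated run unlocks the manual session; both green → ship decision
print(run_pipeline(lambda: 0, lambda: True))  # ship
```

The ordering is the point: a red automated run means the manual session never starts, which is exactly how PM's Gates 1–2 and Klatch's MAXT are sequenced.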


5. Systematic TODO Triage — A Pattern Klatch Should Adopt

From: Piper Morgan (dev/2026/03/24/todo-triage-report-2026-03-24.md) · Relevant to: Klatch

Piper Morgan's Docs agent scanned the entire codebase for TODO comments: 107 found, 25 distinct work items identified, 4 classified as critical. Five new issues filed (#932–#936), one reopened (#746 — hardcoded user_id="default-user" that was prematurely closed). The triage report includes line numbers, risk classification, and recommended actions.

This is a repeatable pattern: periodic sweep → categorize → file issues for the critical ones → reopen anything prematurely closed. Klatch has no equivalent systematic TODO sweep. With 727+ tests and growing codebase complexity, stale TODOs are likely accumulating.

Suggested action (Klatch): Add a periodic TODO triage to Argus's standing responsibilities. The Piper Morgan approach — Docs agent runs the sweep, files issues for PM to prioritize — maps cleanly to Argus running the sweep and Calliope routing the results.


Emerging Patterns

"Active" ≠ "Accessible" ≠ "Attributable." MAXT Session 01 established that context delivery is a three-stage pipeline, and each stage can succeed or fail independently. Content can be structurally delivered (AAXT confirms), behaviorally accessible (agent produces correct answers), and yet consciously unattributable (agent can't identify the source). This is not a bug — it's how LLMs process multi-layered system prompts. Both projects should design around it: verify delivery structurally, test access behaviorally, and never rely on agent self-report for attribution.

Milestone convergence. Both projects are approaching major gates simultaneously. Piper Morgan is one manual testing session from closing M1. Klatch's MAXT Session 01 results unblock Daedalus's Step 9 (search implementation). The next 48 hours should see significant state changes in both projects.

Documentation infrastructure as force multiplier. PM's Docs agent ran a weekly audit, triaged 107 TODOs, evaluated two Dispatch retro formats, refreshed the briefing state document, and delivered 2 inter-agent memos — all in a single session. This infrastructure work makes every other agent session more productive. Klatch's Calliope performs a similar function (logbook reviews, assignment routing). Both projects are validating that a dedicated documentation/coordination role is not overhead — it's infrastructure.


Background Changes (Noted, Low Priority)

  • Piper Morgan: E2E smoke tests shipped (#927) — 9 tests via real ASGI transport, covering todo lifecycle, GitHub close, reminders, floor routing, and capability boundaries
  • Piper Morgan: Lazy workflow creation (#883) — deferred workflow object creation eliminates 100% of orphaned workflows
  • Piper Morgan: Offer system precedence document created — defines dispatcher, soft invocation, and contextual offer systems with ownership and interaction rules
  • Piper Morgan: Dispatch retro eval complete — both Dec 1 v4 and Mar 14 v3 approved with minor revisions; COORDINATION floor lowered from 450 to 350 lines
  • Piper Morgan: Mar 22+23 omnibus synthesized; BRIEFING-CURRENT-STATE refreshed to M1 ~95%
  • Piper Morgan: 5 new issues filed (#932–#936) from TODO triage; #746 reopened
  • Klatch: Day 9 session log started; MAXT results committed

Sources Read

Klatch:

  • docs/logs/2026-03-24-0728-theseus-opus-log.md — MAXT Session 01 full findings (8 findings, subliminal injection discovery)
  • git log --since=2026-03-23 — 2 commits on March 24: MAXT findings + Day 9 session log start
  • docs/axt/maxt-session-01-baseline.md — pre-fork ground truth (context for scoring)

Piper Morgan:

  • dev/2026/03/24/2026-03-24-1009-lead-code-opus-log.md — M1 closure session: #706 Objects & Views, Gates 3+4, offer system precedence
  • dev/2026/03/24/2026-03-24-0808-docs-code-opus-log.md — Docs session: omnibus synthesis, TODO triage (5 issues filed), Dispatch retro eval, BRIEFING refresh
  • dev/2026/03/24/todo-triage-report-2026-03-24.md — 107 TODOs analyzed, 25 items, 4 critical
  • Commits dfa511b...0788401 (March 23–25) — E2E tests (#927), product concept model (#717), Gate 3/4 verification, weekly docs audit, lazy workflow creation (#883), session logs