Pineal Capital
Mindful Investing · Fund I

Pineal Capital — FPR Module Build Dashboard

GCCP × Pineal Capital · Financial Performance & Returns Module · v1.3.1 · 16 Apr 2026
AI-Draft Plan Mode

Weekend Review

What's outstanding · the v1.3.2 patch list · three questions worth sitting with
Where we are. The methodology scaffolding is in and 8 phases of architecture are documented. One test ticker (TDOC) passed 5 of 5 validation gates. The compiled Python engine shipped separately in the sandbox and has been run against MSFT at Band A. The module is not yet live for real tickers or investor-facing output. Below is the honest list of what remains and the decisions worth making before Monday.

Completed yesterday · vs · Still to do

Completed yesterday · 15 items
  • 9-sub-agent chain documented · Ingest → Reconstruct → Returns → WACC → Sanity Gate → Sector → Forecast → Score → Change Detect
  • 8 sector adapters drafted · default_eu, tech_saas, reit, hotels_hospitality, industrial_logistics, banks_financials, consumer_staples, healthtech (BETA)
  • 15 governance rules codified · No fabrication, Reported + Adjusted + Normalised in parallel, source log mandatory, EQ cap, coverage abort, halt taxonomy
  • TDOC validation run · 5/5 gates passed on Tier 4 data; 2 bonus methodology gaps surfaced (Band-B cap inertness, accrual/impairment contamination)
  • Dalata Hotel Group dry-run · hotels_hospitality adapter exercised end-to-end; output shape validated
  • Conor handoff package · One-slide PDF with the 5-gate grading rubric and a 20-minute test ask
  • Partner overview deck · 2-page plain-English PDF with paste-ready Claude AI prompts
  • Compiled Python engine (sandbox) · Shipped in parallel: 22/22 tests, 88.6% coverage on the MSFT Band-A run via SEC EDGAR
Still outstanding · 11 items
  • Data tier subscription decision · Bloomberg + AlphaSense + SEC EDGAR is the agreed stack; procurement sequence and budget not yet locked
  • 6 JSON schemas · Phase 7 deliverable — unlocks schema validation (Rule E2) and machine-level reproducibility (Rule #7)
  • Python-computes / LLM-interprets enforcement · Rule #11 is honour-system today; a regex validator to reject prose-computed arithmetic needs to ship with the schemas
  • Reference portfolio (10 tickers) · Score calibration needs a seed basket so median = 50 isn't arbitrary
  • Band-B EQ test (Palantir) · v1.3.2 graduated cap logic (Band-A hard 60 / Band-B soft 75) needs live validation on PLTR
  • Healthtech adapter → Production · BETA status; needs 4+ reference runs (DOCS, HIMS, EVH, AMWL) before graduation
  • Private-company mode (Phase 2) · Parameterised inputs, no sell-side consensus, wider N/A tolerance — out of v1 scope
  • Universe size lock · Default assumption is 8–12 concentrated names; Pineal to confirm before calibration
  • IMPAIRMENT_EVENT tag promotion · Currently informational; needs to move into a halt-taxonomy category with a numeric trigger
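The Rule #11 enforcement item above is easy to picture: the validator only needs to catch arithmetic performed in prose rather than cited from the engine. A minimal sketch — the pattern and function names are illustrative, not the shipped spec:

```python
import re

# Hypothetical Rule #11 check: flag prose that appears to compute a result
# ("12.5 * 2 = 25.0") instead of citing a Python-computed value.
PROSE_MATH = re.compile(
    r"\d[\d,.]*\s*[×x*+\-/]\s*\d[\d,.]*\s*[=≈]\s*\d"  # number OP number = number
)

def check_prose(text: str) -> list[str]:
    """Return the offending snippets; an empty list means the prose passes."""
    return [m.group(0) for m in PROSE_MATH.finditer(text)]
```

A real validator would need more patterns (percent deltas, spelled-out operators), but even this crude form turns an honour-system rule into a mechanical gate.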

v1.3.2 Patch List — six items already scoped

# · Priority · Patch · Why
1 · High · Rule #13 graduated cap (Band-A hard cap 60 / Band-B soft cap 75) · Binary cap is inert for tickers with SBC 12–15%. PLTR is the test case.
2 · High · Accrual ratio decomposition (split operating ΔWC vs non-operating impairments) · TDOC goodwill writedowns fire the accrual flag incorrectly — non-operating contamination.
3 · Medium · IMPAIRMENT_EVENT tag promotion (from informational to halt-taxonomy category) · Currently an orphan; needs a numeric trigger and a single source of truth.
4 · Medium · Healthtech adapter → Production · BETA until 4 clean runs logged (DOCS, HIMS, EVH, AMWL).
5 · Medium · Reference portfolio seed (10-ticker basket for calibration) · Score median = 50 needs an anchored, not theoretical, distribution.
6 · Low · Halt taxonomy cross-check (METHODOLOGY.md ↔ SKILL.md) · Two files list halts; collapse to a single source of truth.
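The intent of patches #1 and #2 can be sketched in a few lines. Two loud assumptions: the soft cap halving the excess above 75, and isolating the operating accrual component by adding back impairments, are illustrative choices for the sketch, not the scoped implementation:

```python
def apply_eq_cap(score: float, band: str) -> float:
    """Patch #1 sketch: graduated EQ cap instead of a binary one.

    Band A: hard cap at 60 (score truncated outright).
    Band B: soft cap at 75 — the excess is halved rather than discarded
    (halving is this sketch's assumption), so high-SBC tickers are
    penalised but not flattened.
    """
    if band == "A":
        return min(score, 60.0)
    if band == "B":
        return score if score <= 75.0 else 75.0 + (score - 75.0) * 0.5
    return score  # unrecognised band: pass through unchanged

def decompose_accruals(net_income: float, cfo: float, impairments: float) -> dict:
    """Patch #2 sketch: split total accruals (NI - CFO) into parts.

    An impairment depresses NI without touching CFO, so it shows up as a
    negative non-operating accrual; subtracting it out leaves the
    operating (working-capital-driven) component.
    """
    total = net_income - cfo
    non_operating = -impairments
    return {"total": total,
            "operating": total - non_operating,
            "non_operating": non_operating}
```

Under this decomposition a TDOC-style goodwill writedown lands entirely in the non-operating bucket, so the operating accrual flag no longer fires on it.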

Open decisions — need Pineal input

Decision 1

Data tier — procurement order

The agreed stack is Bloomberg + AlphaSense + SEC EDGAR. The question is which subscription comes first, and whether Bloomberg covers Band A (quant) only or also Band B (qualitative).

  • AlphaSense first (qualitative depth, cheaper)
  • Bloomberg first (quant breadth, single source)
  • EDGAR-only interim (US-listed only, zero cost)
Decision 2

Universe size — 8–12 or wider

Pineal strategy says 8–12 concentrated names. FPR calibration works better with a wider reference portfolio. Question is whether to run the module monthly across a wider watchlist or strictly on active names.

  • Strict 8–12 (matches portfolio, light compute)
  • Watchlist of 30 (broader calibration)
  • Rolling 50 (full universe signal, heavier)
Decision 3

Private-company mode — v1 or v2?

Private mode removes sell-side consensus inputs, loosens N/A tolerance, uses parameterised assumptions. Useful for the PE-style theses Pineal runs. Currently out of v1 scope.

  • In v1.3.x — accelerate now
  • v1.4 milestone — after live-ticker pass
  • v2 — after Valuation module ships

Three things worth thinking about over the weekend

Questions for Peter, Ciaran, Conor, Ruby Mae

1

Is the "halt rather than fabricate" discipline set at the right level?

The TDOC run halted at 52% coverage — correct behaviour. But on Tier 2 data the threshold becomes easier to hit, and the module will produce more output. Should the coverage gate stay at 60%, or scale with data tier (e.g., 50% on Tier 4, 75% on Tier 1)? Over-discipline kills throughput; under-discipline kills credibility.
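A tier-scaled gate is simple to express. The 50% and 75% endpoints come from the question above; the Tier 2–3 thresholds below are hypothetical linear fill-ins, not agreed values:

```python
# Hypothetical tier-scaled coverage gate. Tier 1 (75%) and Tier 4 (50%)
# are the endpoints named in the question; Tiers 2-3 are linear fill-ins.
TIER_THRESHOLDS = {1: 0.75, 2: 0.667, 3: 0.583, 4: 0.50}

def coverage_gate(coverage: float, tier: int) -> bool:
    """True = proceed; False = halt rather than fabricate."""
    return coverage >= TIER_THRESHOLDS[tier]
```

Note that under this scaling the TDOC run (52% coverage on Tier 4 data) would have proceeded rather than halted — which is exactly the throughput-vs-credibility trade-off the question asks about.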

2

What does "good enough" look like for the Financial Quality Score?

Score bands are A (85+), B (70–84), C (55–69), D (40–54), F (<40). Median is calibrated to 50. Question: is the pass-the-filter threshold a B, or does Pineal want to engage with Cs where the catalyst is strong enough? The answer shapes how aggressive the EQ cap should be — and whether Band-B graduated cap is the right design.
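The stated bands map to code directly, and the pass-the-filter question can be expressed as a single dial. The `passes_filter` helper and its `min_band` parameter are hypothetical, added only to make the B-vs-C choice concrete:

```python
def score_band(score: float) -> str:
    """Letter band from the stated scale: A 85+, B 70-84, C 55-69, D 40-54, F <40."""
    if score >= 85:
        return "A"
    if score >= 70:
        return "B"
    if score >= 55:
        return "C"
    if score >= 40:
        return "D"
    return "F"

def passes_filter(score: float, min_band: str = "B") -> bool:
    """Hypothetical filter: does the score's band meet the chosen bar?"""
    order = "FDCBA"  # worst to best
    return order.index(score_band(score)) >= order.index(min_band)
```

Setting `min_band="C"` is the "engage with Cs when the catalyst is strong" position; the stricter default is the "B or better" position.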

3

Where does the forecast-vs-consensus delta belong in the IC process?

Rule #5 makes the alpha signal mandatory — our 3-year forecast vs sell-side consensus. That's the variant view. Question: does it feed Section 5 of the IC paper (returns/capital) or Section 6 (valuation), and who owns adjudication when our forecast and consensus materially disagree? Both are defensible; the answer shapes the module's downstream integration.

Pointers

If you want to go deeper, the source files are at pineal-capital/.claude/skills/fpr-module/. The most useful ones to dig into over the weekend: SKILL.md and LIMITS.md (start here), METHODOLOGY.md (the full spec), and the TDOC reference run (proof the discipline works). Each page in this dashboard points you at the right file with a paste-ready Claude AI prompt — no special tooling needed, just claude.ai in the browser.