Financial Performance & Returns · v1.3.1 · What we've built, what's left, how to review it
FPR_v1.3.1
16 April 2026
2-page briefing · Plan mode · Not live yet
This is a build project, not a production system. No live analyses have been produced for investors. Everything here is methodology scaffolding and a test-ticker validation run. Partner sign-off is required before we turn it on for real names.
In one paragraph
If you only read one thing
The FPR Module is the first block of the Pineal AI Equity Analyst. It takes a listed stock ticker and, in a disciplined chain of 9 steps, reconstructs 10 years of financials, strips out the noise (share-based comp, one-offs), computes proper returns on capital, benchmarks against peers, builds its own 3-year forecast, and scores the business out of 100. It ends before valuation — no Buy/Sell yet. Its job is to answer one question: is this business actually good, or does it just look good?
It is deliberately over-disciplined. It refuses to produce output when the underlying data is too thin (the test run halted at 52% coverage — correct behaviour). It caps scores when earnings quality is weak. It declares its sector adapter up front. It cannot silently fabricate.
The 9 steps, in sequence
#1 Data Ingest
#2 Clean & Adjust
#3 Returns & DuPont
#3B Cost of Capital
#3C Sanity Check
#4 Sector & Peers
#5 Forecast Gap
#6 Score & Report
#7 Change Detect
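The sequence above can be sketched as an ordered dispatch table; the step IDs mirror the list, while the function and placeholder handlers are hypothetical illustrations, not the engine's real code:

```python
# Hypothetical sketch of the 9-step FPR chain as an ordered dispatch table.
# Step IDs follow the briefing's own numbering; handler logic is a stub.

PIPELINE = [
    ("1",  "Data Ingest"),
    ("2",  "Clean & Adjust"),
    ("3",  "Returns & DuPont"),
    ("3B", "Cost of Capital"),
    ("3C", "Sanity Check"),
    ("4",  "Sector & Peers"),
    ("5",  "Forecast Gap"),
    ("6",  "Score & Report"),
    ("7",  "Change Detect"),
]

def run_pipeline(ticker: str) -> dict:
    """Run every step in order; a real halting step would stop the chain."""
    state = {"ticker": ticker}
    for step_id, name in PIPELINE:
        state[step_id] = f"{name}: ok"   # placeholder for the real step logic
    return state
```

An ordered table like this is one simple way to make the discipline explicit: no step can be skipped or reordered without the change being visible in one place.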
What this does for Pineal
What it does
Industrialises the historical financial work that sits behind every IC paper. Feeds directly into Section 3 (historical performance) and Section 5 (returns, capital structure, earnings quality) of the IC template. Valuation happens in a separate module downstream.
Why it matters
A portfolio manager or research director can start every new idea with a defensible, reproducible quality baseline — not a fresh blank spreadsheet. Bear cases and bull cases diverge from a shared, audited set of numbers. Time-to-thesis drops materially.
Where we are
Spec is written, 8 sector adapters are drafted, one test-ticker run (Teladoc / TDOC) has validated the discipline. The compiled Python engine is being built in parallel; Pineal-aligned data stack (Bloomberg + AlphaSense + SEC EDGAR) is agreed.
15 governance rules (no fabrication, coverage gate, EQ cap)
TDOC test ticker — 5/5 gates passed
Halt taxonomy & cold-start tags
Still to come
Compiled Python engine (in build this week)
6 JSON schemas & bit-for-bit reproducibility
Data tier lock-in (Bloomberg / AlphaSense)
Reference portfolio for score calibration (10 tickers)
Band-B EQ cap test (Palantir)
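One generic way to check "bit-for-bit reproducibility" once the JSON outputs exist is to fingerprint a canonical serialisation of each run and compare hashes. This is a standard-library sketch of the idea, not the module's actual mechanism:

```python
# Generic reproducibility check: hash a canonical JSON serialisation of a run.
# This illustrates the concept only; it is not the FPR module's implementation.
import hashlib
import json

def run_fingerprint(run_output: dict) -> str:
    """Stable SHA-256 over canonical JSON.
    sort_keys plus fixed separators make the hash independent of dict order."""
    canonical = json.dumps(run_output, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Two runs are bit-for-bit reproducible iff their fingerprints match,
# regardless of the order in which keys were emitted.
a = run_fingerprint({"ticker": "TDOC", "score": None, "halt": "coverage"})
b = run_fingerprint({"halt": "coverage", "ticker": "TDOC", "score": None})
assert a == b
```
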
What to Review — and How
The 5 files worth reading in detail, with paste-ready Claude AI prompts
How to use this page
For each file below: open the file, then paste the suggested prompt into Claude AI (claude.ai) along with the file contents. Claude will explain it in plain English at whatever depth you want. You don't need Claude Code or Cowork to do this — the regular Claude web app is fine.
Open claude.ai in your browser
Attach the file (or paste its contents) into a new chat
Paste the suggested prompt from the grey box
Ask follow-ups in plain English — Claude will go as deep as you want
5 files to review — in order
SKILL.md · Start here · 5 min
The one-page skill definition. Tells you what the module does, how it's triggered (/fpr TICKER), and the 15 rules it operates under. Best quick orientation.
Paste this into Claude AI
"I'm a portfolio manager / research director. Read this FPR Module skill definition and explain in plain English what the module does, who it's for, and what makes it different from just asking an LLM for financial analysis. Then tell me what I should sanity-check as a reviewer — where would you expect this to go wrong?"
LIMITS.md · Read second · honest disclosure
What's NOT built yet. We wrote this before we built the rest, to keep ourselves honest. Read this to understand the gap between v1.3.1 (today) and the full compiled system.
Paste this into Claude AI
"This is the LIMITS file for a financial-analysis AI module in build. Summarise what is NOT yet working, what 'spec-interpretive mode' means in plain English, and which of these limitations would materially affect output quality if I ran the module today on a live ticker."
METHODOLOGY.md · Deep dive · the spec
The full 13-section governance spec. This is the "how the sausage is made" file — every rule, every halt, every adjustment. Dense. Don't try to read it cold; use the prompt below.
Paste this into Claude AI
"This is the methodology spec for an AI equity analyst module. Give me a 1-page plain-English summary of: (a) the 15 rules and which ones are the most important, (b) the halt taxonomy — when does it stop and why, (c) the scoring system and how the Earnings Quality cap works. Then flag any rules that feel redundant or unclear to you."
reference_runs/TDOC_v1.3.1_validation.md · Proof it works · 10 min
The test-ticker run on Teladoc Health (TDOC). Shows how the module behaves on thin data — it correctly halted at 52% coverage, routed through a new BETA adapter, and flagged two methodology gaps (both now patched in v1.3.2). This is the evidence that the discipline works.
Paste this into Claude AI
"This is a validation run for an AI financial analysis module on Teladoc (TDOC). Tell me: (1) did the module behave correctly given that data was thin? (2) what did it find that surprised you? (3) if you were QA'ing this, what would you push back on?"
One of 8 sector adapters — this one is BETA, built from the TDOC validation gap. Shows how the module overrides default metrics with sector-specific KPIs (members, ARPU proxy, clinician utilisation) when a hybrid subscription+visit business is identified.
Paste this into Claude AI
"This is a sector adapter for digital health / telemedicine companies in an AI equity analyst. Explain in plain English how sector adapters work, why healthtech needed its own adapter (rather than using SaaS), and whether the KPI thresholds in the table look reasonable to you for companies like Teladoc, Doximity, HIMS."