Bolnet · Private Equity Workshop

AI for private equity, worked out in code.

A working toolkit for private-equity firms that want to turn AI from a deck slide into measurable portfolio value. Twelve operations, all shipped on real public data — pandas math, row-level evidence on every dollar, MIT-licensed, air-gappable.

Volume I · Number 12 of 12 · Imprint MIT-licensed · Issue 2026-04-26
Mixed-vertical fund: $1.14B (7-portco BX corpus, identifiable)
Procurement spread: $936M (28 federal agencies, 132 vendors)
Plan drift: $121.6M (5 of 7 initiatives off-track)
CIM red-flags: 49 (single 10-K, section-cited)
Citation accuracy: 87% (across 13-memo eval corpus)
Public datasets: 5 (HF · Kaggle · CFPB · SEC · USAS)
Open the catalogue · View on GitHub
The problem

Private equity is two AI cycles behind its own best portcos.

Most "AI for PE" today is slides, vendor demos, and a 100-day plan template no one operationalizes. Value gets left on the table at three layers — deal, portco, fund. Each layer has a partner-grade artifact missing on the desk. This stack ships those artifacts.

Deal team
What's broken today: DD memos hand-assembled; market scans take days; no consistent prospect scoring; CIM red-flag schedules built by associates over two-night reads.
What ships here: /cim /explainer /exit-proof-pack /dd-checklist · plus a 17-command deal workbench in Claude Code

Portco operating
What's broken today: Operating partners can't compare decisions across companies; AI roadmaps drift inside a PDF; 100-day plans surface at the QBR, six weeks late.
What ships here: /diagnose-decisions /plan-drift /normalize /procurement · top-3 dollar-quantified opportunities from the portco's own data

Fund / LP
What's broken today: No view of which portcos share the same leakage pattern; LP letters can't quantify operational alpha; DDQ packets contradict each other across vintages.
What ships here: /benchmark-corpus /ddq /eval /ai-act-audit · fund-wide rank, archetype index, consistency-checked LP packets
Twelve operations in the workshop

Each tool ships one artifact. Each artifact lands on one seat in the org chart.

Every tool ships a static, auditable artifact a partner can defend. Every figure traces to a row in the underlying source data; every assumption sits in plain Python. Click "Watch the demo" for the partner-grade walkthrough; click "Open the example" to inspect the rendered report a portco would receive today.

I. Wedge · Layer 1 · Portco operating

Decision Diagnostic (DX)

Ingest a portco's CSVs; surface the top-3 repeatable, high-volume decisions being made badly — each with a counterfactual, a time-stability score, and row-level evidence. Pure pandas; the LLM never touches arithmetic.

$1.12B/yr identified on real US mortgage data — 12 portcos diagnosed across Lending Club, Yasserh, CFPB HMDA. Fund-level corpora built at $3.2M, $184M, $1.14B.
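
A minimal sketch of the DX math, assuming hypothetical column names (the shipped tool infers its schema per vertical); the counterfactual benchmarks each decision segment against the portfolio's median outcome:

```python
import pandas as pd

# Hypothetical columns -- one row per repeated decision (here: a loan).
loans = pd.DataFrame({
    "grade":       ["A", "A", "D", "D", "D", "C"],
    "purpose":     ["auto", "auto", "small_biz", "small_biz", "small_biz", "auto"],
    "loan_amnt":   [10_000, 12_000, 25_000, 30_000, 27_000, 15_000],
    "charged_off": [0, 0, 1, 1, 0, 0],
})

# Volume, realized charge-off rate, and exposure per decision segment.
seg = (loans.groupby(["grade", "purpose"])
            .agg(n=("loan_amnt", "size"),
                 co_rate=("charged_off", "mean"),
                 exposure=("loan_amnt", "sum"))
            .reset_index())

# Counterfactual: dollars at stake if each segment charged off at the
# portfolio median rate. Pure pandas -- the LLM never touches this math.
baseline = seg["co_rate"].median()
seg["counterfactual_usd"] = (seg["co_rate"] - baseline).clip(lower=0) * seg["exposure"]
print(seg.nlargest(3, "counterfactual_usd"))
```
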
II. Wedge · Layer 1 · Deal team

Model-to-Narrative Explainer

Take any DX OpportunityMap and render a board-defendable memo — the why, the counterfactual, the risk of inaction, the rollout plan. Letterpress design; every figure traceable to a source field; whitelisted prose patterns.

13 board memos rendered across 12 portcos. Eval corpus: 87% mean citation accuracy, 100% coverage, 100% consistency board-vs-operator.
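
A sketch of the whitelisted-pattern contract, with hypothetical pattern names and OpportunityMap fields; a missing field fails loudly rather than shipping an ungrounded figure:

```python
# Whitelisted prose patterns: the narrator may only emit sentences whose
# figures are filled from named fields of the OpportunityMap.
PATTERNS = {
    "headline": "Re-pricing {segment} recovers ${usd:,.0f}/yr against the portfolio median.",
    "evidence": "Grounded in {n_rows:,} decision rows ({row_ids}).",
}

def render(pattern: str, opportunity: dict) -> str:
    # KeyError here is the contract: a figure with no source field never ships.
    return PATTERNS[pattern].format(**opportunity)

opp = {"segment": "grade D · small_biz", "usd": 41_300_000,
       "n_rows": 12_481, "row_ids": "rows 102..12583"}
print(render("headline", opp))
print(render("evidence", opp))
```
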
III. Moat · Layer 2 · Fund / LP

Cross-Portco Benchmarking (BX)

Roll up N portcos into a fund-level rank table, archetype index, peer groups by cosine similarity, and quarter-over-quarter persistence — the LP-letter exhibit no scorecard vendor ships, because no scorecard vendor sees the row data.

3 fund corpora shipped — 5-region Lending Club ($3.19M), 5-state CFPB HMDA ($183.83M), 7-portco mixed-vertical ($1.14B).
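
A sketch of the peer-grouping step, computed by hand in numpy on a hypothetical portco-by-opportunity feature matrix (random here, for illustration only):

```python
import numpy as np
import pandas as pd

# Hypothetical feature matrix: one row per portco, columns are normalized
# opportunity intensities from each portco's DX run.
features = pd.DataFrame(
    np.random.default_rng(0).random((7, 4)),
    index=[f"portco_{i}" for i in range(1, 8)],
    columns=["pricing", "credit", "procurement", "churn"],
)

# Cosine similarity by hand -- deterministic, no model dependency.
X = features.to_numpy()
X = X / np.linalg.norm(X, axis=1, keepdims=True)
sim = pd.DataFrame(X @ X.T, index=features.index, columns=features.index)

# Nearest peer per portco (excluding self) -> peer groups for the LP exhibit.
peers = sim.mask(np.eye(len(sim), dtype=bool)).idxmax(axis=1)
print(peers)
```
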
IV. Moat · Layer 2 · Fund / LP

LLM Eval for PE

The QA layer every PE-grade AI artifact needs. Citation accuracy, hallucination rate, coverage, consistency — four numbers, one rubric, deterministic scoring. Pure pandas plus regex; no LLM at runtime in the scoring layer.

13 memos scored, 87% citation accuracy. The contract every memo crosses before an LP sees it.
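
A sketch of one of the four numbers. The "[row N]" citation format and the evidence-sidecar shape are assumptions for illustration; the scoring itself is regex plus set math, no LLM:

```python
import re

# Citation accuracy = share of cited rows that exist in the evidence sidecar.
CITE = re.compile(r"\[row (\d+)\]")

def citation_accuracy(memo_text: str, evidence_row_ids: set[int]) -> float:
    cited = [int(m) for m in CITE.findall(memo_text)]
    if not cited:
        return 0.0
    return sum(r in evidence_row_ids for r in cited) / len(cited)

memo = "Charge-offs cluster in grade D [row 4812], concentrated in small business [row 99]."
print(citation_accuracy(memo, evidence_row_ids={4812, 4813}))  # 0.5
```
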
V. Diligence · Layer 3 · Deal team

CIM Red-Flag Extractor

Drop a CIM, an S-1, or any 10-K. Get back a section-cited list of red flags an associate would otherwise spend two nights surfacing by hand — eight heuristic flag families, every flag carrying a citation a reviewer can verify in fifteen seconds.

49 red flags surfaced, 17 sections, 1 page — Sotera Health 10-K, EDGAR-fetched, regex-deterministic.
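
A sketch of the flag-family mechanic; the regexes below are illustrative, not the shipped set of eight:

```python
import re

# Deterministic regexes over section text, one pattern per flag family.
FLAG_FAMILIES = {
    "customer_concentration": re.compile(
        r"(one|single|largest) customer .{0,80}?(\d{2})%", re.I | re.S),
    "going_concern": re.compile(r"substantial doubt .{0,40}going concern", re.I),
    "material_weakness": re.compile(
        r"material weakness(es)? in .{0,60}internal control", re.I),
}

def scan(section_name: str, text: str) -> list[dict]:
    flags = []
    for family, pattern in FLAG_FAMILIES.items():
        for m in pattern.finditer(text):
            # Every flag carries its section and character offsets -- the
            # citation a reviewer verifies in fifteen seconds.
            flags.append({"family": family, "section": section_name,
                          "span": m.span(), "excerpt": m.group(0)[:80]})
    return flags

print(scan("Item 1A", "Our largest customer accounted for 31% of revenues."))
```
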
VI. Diligence · Layer 3 · Deal team at exit

Seller-Side AI Diligence Pack

Buyer diligence will ask: prove the AI EBITDA. This is the trail you give them, before they ask, so the number doesn't take a haircut at LOI. Provenance ledger, sensitivity table, defensibility checklist — a schedule to the SPA, not the marketing deck.

$1.12B base · $560M-$1.46B sensitivity for MortgageCo. Every claim cites the OpportunityMap row + evidence row IDs.
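
A sketch of the sensitivity exhibit, flexing the $1.12B base case on two hypothetical axes; the realization multiples are chosen so the full-ramp column reproduces the published $560M to $1.46B band:

```python
import pandas as pd

base_usd = 1.12e9
grid = pd.DataFrame(
    [(m, r, base_usd * m * r)
     for m in (0.5, 1.0, 1.3)    # realization multiple: 0.5x..1.3x
     for r in (0.5, 0.75, 1.0)], # ramp: share of run-rate captured in year one
    columns=["multiple", "ramp", "ebitda_usd"],
)
print(grid.pivot(index="multiple", columns="ramp", values="ebitda_usd"))
```
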
VII. Diligence · Layer 3 · Fund / IR

DDQ Automation + Consistency

The Q1 2026 ILPA AI diligence questions, answered against the actual artifacts in the fund's working directory — not against marketing copy. Every answer cites the file it came from. Cross-answer consistency checks run before the LP sees the packet.

12 questions answered · 38 artifacts indexed · 26 consistency flags surfaced before send.
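
A sketch of one cross-answer consistency check, on hypothetical answers; the deliberately inconsistent memo counts trip the flag before the packet ships:

```python
import re

# Every figure quoted in the packet must agree with every other answer
# quoting the same metric. Regex extraction plus set math, nothing else.
answers = {
    "Q4_model_governance": "The fund evaluates 13 memos at 87% citation accuracy.",
    "Q9_audit_trail":      "All 13 memos carry evidence sidecars.",
    "Q11_risk":            "Across 12 memos, hallucination rate is 0%.",
}

counts = {q: re.findall(r"(\d+) memos", text) for q, text in answers.items()}
seen = {n for hits in counts.values() for n in hits}
if len(seen) > 1:
    print(f"CONSISTENCY FLAG: memo counts disagree across answers: {sorted(seen)}")
```
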
VIII. Diligence · Layer 3 · Portfolio analytics

Portfolio Normalization

Three portcos. Three different charts of accounts. One unified view, every cell traceable to its source field. The pre-step every cross-portco analysis needs and almost never gets — a defensible roll-up, every cell tracing home.

3 portcos folded · 195,473 rows · 10 canonical fields · 9 anomalies flagged before rollup.
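
A sketch of the mapping mechanic, with hypothetical native column names; the lineage column records which source field produced each canonical cell:

```python
import pandas as pd

# Per-portco mapping from native chart-of-accounts columns to canonical fields.
MAPPINGS = {
    "portco_a": {"loan_amnt": "principal_usd", "int_rate": "rate_pct"},
    "portco_b": {"LoanAmount": "principal_usd", "InterestRate": "rate_pct"},
}

def normalize(portco: str, df: pd.DataFrame) -> pd.DataFrame:
    out = df.rename(columns=MAPPINGS[portco])[["principal_usd", "rate_pct"]].copy()
    out["portco"] = portco
    # Lineage: the native field that produced principal_usd for this portco.
    out["principal_src"] = next(k for k, v in MAPPINGS[portco].items()
                                if v == "principal_usd")
    return out

a = pd.DataFrame({"loan_amnt": [10_000], "int_rate": [13.5]})
b = pd.DataFrame({"LoanAmount": [25_000], "InterestRate": [9.9]})
print(pd.concat([normalize("portco_a", a), normalize("portco_b", b)]))
```
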
IX. Operate · Layer 4 · Portco operating

100-Day Plan Drift Monitor

Day 60 of a 100-day plan. The board is in seven weeks. This is the page the operating partner walks in with — diffed against real public 10-Q actuals, fetched from SEC EDGAR, no manual reconciliation. The page the consultant didn't ship.

5 of 7 initiatives off-track · –$121.6M EBITDA at risk against the plan signed at close.
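
A sketch of the drift diff on hypothetical plan and actual figures; the shipped tool pulls the actuals from SEC EDGAR, inlined here for self-containment:

```python
import pandas as pd

plan = pd.DataFrame({
    "initiative": ["pricing", "procurement", "churn"],
    "ebitda_plan_usd": [40e6, 25e6, 15e6],   # signed at close
})
actuals = pd.DataFrame({
    "initiative": ["pricing", "procurement", "churn"],
    "ebitda_actual_usd": [12e6, 26e6, 3e6],  # from the latest 10-Q
})

drift = plan.merge(actuals, on="initiative")
drift["gap_usd"] = drift["ebitda_actual_usd"] - drift["ebitda_plan_usd"]
drift["status"] = drift["gap_usd"].apply(lambda g: "on-track" if g >= 0 else "OFF-TRACK")
print(drift)
print(f"EBITDA at risk: ${-drift.loc[drift.gap_usd < 0, 'gap_usd'].sum():,.0f}")
```
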
X. Operate · Layer 4 · Portco operating

Procurement Benchmarking

Apollo's flagship value-creation lever, productized for the rest of mid-market PE. Public federal-contract data, no auth, no vendor. Same data Apollo's procurement team uses; same math; one-hundredth of the headcount.

28 buyers · 132 vendors · $936M cross-buyer price spread on real USAspending.gov FY2024 data.
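
A sketch of the spread math on USAspending-shaped rows (the values below are made up): price each award per unit, then benchmark every buyer against the best price paid for the same product code:

```python
import pandas as pd

awards = pd.DataFrame({
    "psc": ["7510", "7510", "7510"],   # product/service code
    "agency": ["DOD", "HHS", "VA"],
    "vendor": ["VendorX", "VendorX", "VendorY"],
    "total_usd": [9.6e6, 7.0e6, 8.8e6],
    "quantity": [80_000, 70_000, 80_000],
})
awards["unit_price"] = awards["total_usd"] / awards["quantity"]

# Spread = what each buyer overpaid vs the best price for the same product.
best = awards.groupby("psc")["unit_price"].transform("min")
awards["spread_usd"] = (awards["unit_price"] - best) * awards["quantity"]
print(awards[["agency", "unit_price", "spread_usd"]])
print(f"Cross-buyer spread: ${awards['spread_usd'].sum():,.0f}")
```
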
XI. Operate · Layer 4 · Compliance

EU AI Act Compliance Pack

Regulation (EU) 2024/1689. Article 6 high-risk classification. Deadline: 2 August 2026. Documentation skeleton, every obligation tracing to a public article — sized for the GC's red-pen pass, not for the law-firm memo.

High-risk verdict · Annex III §5(b) · 8 articles addressed end-to-end for LendingCo-EU.
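
A sketch of the skeleton's spine: a plain mapping from the high-risk obligations of Regulation (EU) 2024/1689 (Chapter III, Section 2) to the artifacts that evidence them. The artifact file names are hypothetical:

```python
# Article titles abbreviated; each obligation traces to one working-directory artifact.
OBLIGATIONS = {
    "Art. 9 (risk management)":          "risk_register.md",
    "Art. 10 (data governance)":         "data_lineage.json",
    "Art. 11 (technical documentation)": "annex_iv_tech_doc.md",
    "Art. 12 (record-keeping)":          "run_logs/",
    "Art. 13 (transparency)":            "deployer_instructions.md",
    "Art. 14 (human oversight)":         "override_procedure.md",
    "Art. 15 (accuracy & robustness)":   "eval_report.json",
}
for article, artifact in OBLIGATIONS.items():
    print(f"{article:38s} -> {artifact}")
```
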
XII. Operate · Layer 4 · Fund CFO

Agent Sprawl Auditor

The audit no fund runs on its own AI deployments and every fund should. Per-agent: model, run cost, run count, last-run date, health verdict — zombie, runaway, misaligned, or healthy. Three deterministic checks; no LLM at runtime.

23 agents inventoried · 16 flagged · $19,880 annual savings if pruned. Real registry, modeled telemetry from Anthropic's published pricing.
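
A sketch of the three checks on a hypothetical registry; the thresholds are illustrative, not the shipped defaults:

```python
import pandas as pd

agents = pd.DataFrame({
    "agent": ["triage-bot", "report-gen", "scraper"],
    "runs_90d": [0, 4_200, 310],
    "cost_90d_usd": [0.0, 1_870.0, 42.0],
    "days_since_last_run": [140, 0, 3],
})

def verdict(row) -> str:
    if row.days_since_last_run > 90:                      # check 1: zombie
        return "zombie"
    if row.cost_90d_usd / max(row.runs_90d, 1) > 0.25:    # check 2: runaway cost/run
        return "runaway"
    if row.runs_90d > 3_000:                              # check 3: volume out of policy
        return "misaligned"
    return "healthy"

agents["verdict"] = agents.apply(verdict, axis=1)
print(agents[["agent", "verdict"]])
```
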
Where to start

Find your seat, open your tool.

Every operation lands on one role in the firm. Below is the partner-grade triage — the one tool to open in your first thirty minutes with this stack.

Operating partner · Run DX on one portco · /diagnose-decisions · One-page memo for the next board.
Deal team · CIM scan + IC memo · /cim /explainer · Section-cited red-flag schedule + IC-grade memo.
Managing partner · BX across the fund · /benchmark-corpus · Operational-alpha exhibit for the LP letter.
IR / fund CFO · DDQ + Eval · /ddq /eval · ILPA-shaped packet, consistency-checked before send.
Portco CEO / CFO · DX on your own data · /diagnose-decisions · 90-day plan grounded in your data, not a vendor pitch.
How it runs

Four design principles. Non-negotiable.

The artifact a partner can defend is built by the math underneath, not the prose on top. Every tool in the workshop honors the same four contracts.

I. Deterministic core
Pandas, not LLMs, for the math. Run the same input twice — identical output, every time. The language model only narrates what the math already proved.
II. Auditable output
Every dollar number traces to row-level evidence. The renderer raises before it ships a number it can't ground (see the sketch after this list). JSON sidecars travel with every report.
III. Air-gappable
No portco data leaves the operator's machine. All twelve tools run end-to-end on a laptop, with no external API calls except documented public endpoints (EDGAR, USAspending).
IV. Extensible by one file
A new vertical template is one Python file. A new tool is a new MCP module. The 17-skill, 18-command surface is the working directory, not a vendor's roadmap.
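
A sketch of the grounding contract from principle II, built around a hypothetical emit_figure helper: the renderer raises on any figure that arrives without evidence rows, so an ungrounded number can never reach the page:

```python
def emit_figure(usd: float, evidence_rows: list[int]) -> str:
    # The contract: no evidence rows, no rendered figure.
    if not evidence_rows:
        raise ValueError(f"ungrounded figure: ${usd:,.0f} has no evidence rows")
    return f"${usd:,.0f} (rows {evidence_rows[0]}..{evidence_rows[-1]})"

print(emit_figure(121_600_000, evidence_rows=[4_812, 4_813, 9_901]))
```
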

Twelve tools. One stack. One auditable contract.

Open the catalogue. Drop a portco's CSVs. Watch the partner-grade memo render in seconds. Every number traces to a row.