What This Factory Must Deliver

The Hilbert AI Software Factory has one job: build the HIRO trading platform. HIRO is a fully autonomous algorithmic trading system specified across 77 sections in the Genesis Framework. The Factory reads those specifications, generates production code, tests every component, and verifies that what was built delivers the outcome the specification requires.

But the Factory cannot build HIRO until the Factory itself is built and working correctly. This page tracks that process. The sequence is:

1
Build the Factory
10 build steps, 70 packets. Core infrastructure, schemas, AI agents (Builder, Verifier, Chief Engineer), the Build Orchestrator, canary testing, and governance dashboards. Everything on this page. Single server: Helsinki.
2
Verify the Factory Works
Run the bench test to calibrate the verification pipeline. Confirm the Builder Agent generates correct code, the Verifier Agent catches errors, the Orchestrator sequences builds correctly, and the Chief Engineer handles failures. If the Factory is miscalibrated, nothing it builds can be trusted.
3
Consume the Genesis Framework
The Factory reads the 77 Genesis Framework sections. Each section carries a set of Build Specification Key Points (SBS) defining what to build — classes, data structures, business rules, API endpoints, tests, and dashboards. The Requirements Compiler converts each SBS into a build packet. Sections without a complete SBS are flagged and skipped until the specification is ready.
3A
Create the HIRO Build Dashboard
Before building HIRO, create the build tracking dashboard — a replica of this Factory Structure page but for the Genesis Framework. Same packet slots, same verification pipeline, same quality checks, same Playwright testing, same brand gates — but for the 77 Genesis sections. All 108 features must be replicated. The HIRO Build Dashboard is the container that prevents drift and hallucination during HIRO construction.
4
Build HIRO in the Correct Order
The Build Orchestrator sequences the Genesis packets according to the dependency matrix (Factory §26) and build order (Factory §39J). Data layer first, then market intelligence, then discovery engines, then execution, then risk and governance. Each packet is built by the Builder Agent, tested by the Verifier Agent, and independently verified by Claude and GPT-4o before being marked complete.
5
Verify HIRO Delivers the Outcome
Each built component is tested not just for code correctness but for whether it delivers the outcome the Genesis Framework specifies. Does the Market State Engine correctly classify regimes? Does the Execution Module place orders within the latency budget? Does the Guardian kill operators that breach risk limits? The system is not complete until every component works AND the components work together.
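The build-and-verify loop in steps 4-5 can be sketched as a dependency-ordered queue. This is a minimal illustration, not the Factory's actual API: the `build`, `verify_claude`, and `verify_gpt4o` helpers and the packet names are assumptions.

```python
from graphlib import TopologicalSorter

def sequence_and_build(packets, deps, build, verify_claude, verify_gpt4o):
    """Build packets in dependency order; a packet is COMPLETE only when
    both independent verifiers confirm it, otherwise it needs rework."""
    order = TopologicalSorter(deps).static_order()  # data layer first, etc.
    status = {}
    for name in order:
        if name not in packets:
            continue
        artifact = build(packets[name])             # Builder Agent output
        if verify_claude(artifact) and verify_gpt4o(artifact):
            status[name] = "COMPLETE"               # both reviewers agree
        else:
            status[name] = "FAILED"                 # flagged for rework
    return status
```

The key design point the sketch captures is that completion requires agreement from two independent verifiers, so a single miscalibrated reviewer cannot mark bad code complete.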

Current status: The Factory is being built (Phase 1). No Genesis Framework packets have been consumed yet. The sections below track Factory construction progress. Once all 10 Factory build steps are complete and the bench test passes, the Factory will begin consuming Genesis Framework specifications and building HIRO.
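Step 3's flag-and-skip rule could look like the following. The field names are hypothetical — the real SBS schema is defined by the Genesis Framework, not here.

```python
# Assumed SBS fields, taken from the categories listed in step 3 above.
REQUIRED_SBS_FIELDS = ("classes", "data_structures", "business_rules",
                       "api_endpoints", "tests", "dashboards")

def compile_packets(sections):
    """Convert each section's SBS into a build packet; sections with an
    incomplete SBS are flagged and skipped until the spec is ready."""
    packets, flagged = [], []
    for section in sections:
        sbs = section.get("sbs", {})
        missing = [f for f in REQUIRED_SBS_FIELDS if not sbs.get(f)]
        if missing:
            flagged.append({"section": section["id"], "missing": missing})
        else:
            packets.append({"section": section["id"], "spec": sbs})
    return packets, flagged
```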

Factory Specification: The complete 57-section specification that defines how this Factory works — agent definitions, build pipeline phases, validation rules, governance, and capability blocks — is in the Hilbert AI Software Factory Specification →

HIRO Build Dashboard Checklist: The 108-item feature checklist that must be replicated when creating the HIRO build tracking dashboard is documented in HIRO Build Dashboard Checklist →

Build Initiation
Sequenced build queue — dependency order, not manual selection
Build Sequence (what happens when you press Initiate)
1 Pre-flight check
2 Generate prompt
3 Copy to Claude Code
4 Builder writes code
5 Mark BUILT
6 Verify & complete

Phase 1 workflow: Steps 1-2 happen here. In step 3, you paste the prompt into Claude Code. Steps 4-5 happen in Claude Code on Helsinki. Step 6 uses the Verification pipeline above. Once the Builder Agent (F-05-xx) is built, steps 3-5 become automatic.
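The six steps above can be modeled as a simple per-packet state machine. The state names other than BUILT are illustrative assumptions, not the Factory's real status values.

```python
# Legal transitions for one packet through the build sequence above.
TRANSITIONS = {
    "QUEUED":    "PREFLIGHT",   # 1 Pre-flight check
    "PREFLIGHT": "PROMPTED",    # 2 Generate prompt
    "PROMPTED":  "IN_CLAUDE",   # 3 Copy to Claude Code
    "IN_CLAUDE": "BUILT",       # 4-5 Builder writes code, mark BUILT
    "BUILT":     "COMPLETE",    # 6 Verify & complete
}

def advance(state: str) -> str:
    """Move a packet one step forward; refuses to skip or repeat steps."""
    if state not in TRANSITIONS:
        raise ValueError(f"no transition out of {state!r}")
    return TRANSITIONS[state]
```

Encoding the sequence as data makes "steps 3-5 become automatic" a local change: the Builder Agent simply drives those transitions instead of a human.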

Deployment Prompt — paste into Claude Code
Dashboard Consistency Checker
Cross-page checks + cursor/agent verification — nav, version, brand, links, truncation, duplicates, hallucination
Verification Status
Packets Built by Factory (code generated, awaiting verification)
0
Packets Verified by Claude & GPT-4o (code confirmed to match specification)
0
Packets Failed Verification (code does not match specification — needs rework)
0
Files Checked for Existence on Server & GitHub
0
Files Confirmed Identical on Server and GitHub (hash match)
0
Files Where Server Copy Differs from GitHub (hash mismatch — needs sync)
0
Dashboard Pages Passing All AI Quality Checks
0
Dashboard Pages with Minor Quality Issues Found by AI Review
0
Dashboard Pages Failing Quality Checks (needs rework)
0
AI API Cost Today — Claude & GPT-4o Verification Calls
$0.00
Remaining Daily AI Verification Budget ($20/day limit)
$20.00
Last Time a Packet Was Sent for AI Verification
Never
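The server/GitHub hash-match counters above amount to comparing content digests. A sketch using SHA-256; the `read_server` and `read_github` fetch helpers are placeholders for however file contents are actually retrieved.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Content digest used for the server-vs-GitHub comparison."""
    return hashlib.sha256(data).hexdigest()

def classify_files(paths, read_server, read_github):
    """Bucket each file into the three counters shown above:
    checked on both sides, identical (hash match), differing (needs sync)."""
    checked, identical, differing = 0, 0, 0
    for path in paths:
        server, github = read_server(path), read_github(path)
        if server is None or github is None:
            continue                      # missing on one side: not checked
        checked += 1
        if sha256_of(server) == sha256_of(github):
            identical += 1
        else:
            differing += 1                # hash mismatch — needs sync
    return checked, identical, differing
```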

Verification Pipeline Calibration

Runs 3 known test cases through the verification pipeline to confirm it works correctly. Like calibrating a scale with a known weight.
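The "known weight" idea could look like this in code. The three cases are illustrative stand-ins; the real bench test has its own fixtures and verdict format.

```python
# Three cases with known-correct verdicts — like known weights on a scale.
BENCH_CASES = [
    {"code": "def add(a, b): return a + b", "expected": "PASS"},
    {"code": "def add(a, b): return a - b", "expected": "FAIL"},
    {"code": "def add(a, b): raise NotImplementedError", "expected": "FAIL"},
]

def run_bench(verify):
    """Return True only if the pipeline gives the expected verdict on
    every known case; any miss means the Factory is miscalibrated."""
    return all(verify(case["code"]) == case["expected"] for case in BENCH_CASES)
```

A verifier that rubber-stamps everything fails the bench immediately, which is the point: calibration must precede trusting any real verdict.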

Not calibrated — run bench test first
Last run: Never
Factory Packet Library
97 build packets across 10 construction steps — the complete build manifest
Completed Sections Library
Verified packets with file locations on server, GitHub, and local
Knowledge Base
Pre-flight prompts for reuse and build learnings for transfer to Genesis Framework
