AI-Assisted First-Pass Assessment
A public method for producing initial dossiers on inspectability, novelty signals, and possible impact using only public materials.
What This Protocol Does
The Panta Rhei Research Program makes unusually large claims across mathematics, physics, life sciences, and metaphysics. For readers, critics, journalists, and domain specialists who want a structured first signal before committing deeper human time, we provide an AI-assisted first-pass assessment protocol.
The protocol is designed to help outsiders evaluate three preliminary questions using only public materials and a frontier AI model:
- Research-form legitimacy — Does the work appear to be a serious, inspectable research artifact that has earned external scrutiny?
- Novelty and relevance — Do its claims appear potentially novel and materially relevant relative to frontier work in the domains addressed?
- Impact and salvage value — If the strongest claims held, or partly failed, what would remain scientifically, formally, or methodologically significant?
The output is a typed dossier — not a verdict, not a truth-probability, not a peer-review substitute. It is a structured first signal designed to make the preliminary question more disciplined.
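To make "typed dossier" concrete, here is a minimal sketch of what such a structured record could look like in code. The field names (`scope`, `gates`, `caveats`) and the `GateScore` shape are illustrative assumptions, not the protocol's actual schema; the real schema is defined by the dossier template in the Downloads section.

```python
# Hypothetical sketch of a typed dossier record. All field names are
# illustrative assumptions, not the protocol's official schema.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class GateScore:
    gate: str              # e.g. "research-form legitimacy"
    criteria_scored: int   # number of rubric criteria evaluated under this gate
    notes: str = ""        # free-text observations, never a verdict


@dataclass
class Dossier:
    scope: str                                   # "series" | "book" | "domain"
    gates: list = field(default_factory=list)    # list of GateScore entries
    caveats: list = field(default_factory=list)  # explicit limits of the pass

    def to_json(self) -> str:
        # Serialize the whole record, including nested GateScore entries.
        return json.dumps(asdict(self), indent=2)


d = Dossier(
    scope="series",
    gates=[GateScore("research-form legitimacy", criteria_scored=5)],
    caveats=["not peer review; first-pass signal only"],
)
print(d.to_json())
```

The point of typing the output is that every dossier carries its scope and its caveats alongside its scores, so a reader can never mistake it for a verdict.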
Who This Is For
The protocol is intended for anyone who needs an initial, structured public assessment of whether the framework appears serious enough to warrant deeper attention:
- Journalists covering the research or its claims
- Critics evaluating the work’s methodological seriousness
- Policymakers assessing potential significance for science-policy decisions
- Investors conducting due diligence on the research program
- Domain experts deciding whether the work merits a closer look from their specialty
Three Assessment Modes
The protocol provides three prompt templates, each scoped to a different level of analysis.
Series-level assessment
Evaluates the entire seven-book monograph series and the research architecture as a whole. This is the recommended starting point for a general first-pass assessment.
Book-level assessment
Evaluates a single book, its corresponding Guided Tour, and its Lean companion. Use this when you want a focused assessment of one domain (e.g., only the physics books, or only the foundations).
Domain-level assessment
Evaluates the framework from the perspective of a particular discipline — pure mathematics, particle physics, cosmology, philosophy of science, or any other relevant field. Use this when you want a dossier written from a specialist's point of view.
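The three modes above can be sketched as a simple enumeration, each value recording the scope its prompt template covers. The names and descriptions here are paraphrases of the text, not official identifiers from the protocol.

```python
# Hedged sketch: the three assessment modes as an enum. Member names and
# scope strings are illustrative, not the protocol's official labels.
from enum import Enum


class AssessmentMode(Enum):
    SERIES = "entire seven-book series and research architecture"
    BOOK = "one book, its Guided Tour, and its Lean companion"
    DOMAIN = "the framework viewed from a single discipline"


def recommended_default() -> AssessmentMode:
    # The series-level prompt is the recommended starting point
    # for a general first-pass assessment.
    return AssessmentMode.SERIES


print(recommended_default().name)
```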
Start Here
If you are new to the protocol, work through these in order:
- Read the Methodology — understand the three-gate structure, what AI can and cannot do, and what question this protocol actually answers.
- Review the Three-Gate Rubric — the 17-criterion scoring framework that structures every dossier.
- Review the Usage Rules — the 10 rules that govern responsible use.
- Choose a prompt — series-level, book-level, or domain-level — and run it on a frontier model with the public materials loaded.
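The final step — choosing a prompt and running it with the public materials loaded — can be sketched as follows. The prompt text and the source list here are placeholders assembled from this page; the real prompt templates are supplied by the protocol itself.

```python
# Minimal sketch of assembling a first-pass prompt from public sources.
# The template wording is an assumption; real templates come from the
# protocol's prompt library.
PUBLIC_SOURCES = {
    "Atlas (main site)": "https://panta-rhei.site",
    "TauLib (Lean 4 library)": "https://github.com/Panta-Rhei-Research/taulib",
}

VALID_MODES = {"series", "book", "domain"}


def build_prompt(mode: str) -> str:
    """Compose a placeholder prompt for the chosen assessment mode."""
    if mode not in VALID_MODES:
        raise ValueError(f"unknown mode: {mode!r}")
    sources = "\n".join(f"- {name}: {url}"
                        for name, url in PUBLIC_SOURCES.items())
    return (
        f"Run a {mode}-level first-pass assessment using only these "
        f"public materials:\n{sources}\n"
        "Output a typed dossier, not a verdict."
    )


print(build_prompt("series"))
```

Restricting the source list to public URLs at assembly time is one way to enforce the rule, stated below, that no confidential or third-party restricted material reaches a general-purpose AI system.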
Public Materials for Pre-Loading
When running the protocol, provide the model with the relevant public sources. Suggested URLs:
- Atlas (main site): https://panta-rhei.site
- TauLib (Lean 4 library): https://github.com/Panta-Rhei-Research/taulib
- Guided Tours
- Books
Do not upload confidential, unpublished, or third-party restricted materials into general-purpose AI systems. The protocol is designed to work entirely with public materials.
Downloads
- Scorecard template (CSV) — a blank three-gate scorecard for recording assessment results
- Dossier template (JSON) — the structured output schema for typed dossiers
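As a sketch of what the blank scorecard download contains, the snippet below generates a minimal three-gate CSV. The column names and the single placeholder row per gate are assumptions for illustration; the actual template enumerates all 17 rubric criteria.

```python
# Hedged sketch: generating a blank three-gate scorecard as CSV.
# Column names and row layout are placeholders; the real template
# defines the full 17-criterion rubric.
import csv
import io

GATES = [
    "research-form legitimacy",
    "novelty and relevance",
    "impact and salvage value",
]


def blank_scorecard() -> str:
    """Return a blank scorecard as a CSV string."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["gate", "criterion", "score", "notes"])
    for gate in GATES:
        # One blank row per gate; the assessor fills in criteria and scores.
        writer.writerow([gate, "", "", ""])
    return buf.getvalue()


print(blank_scorecard())
```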
This protocol is not peer review. It is a first-pass assessment method. Any serious judgment about correctness, novelty, or scholarly priority must ultimately be made by human experts. A positive outcome means the work appears serious enough to deserve structured scrutiny — it does not mean the claims are proven true or that expert review is no longer necessary.