From research idea to reviewed draft
corpus + evidence
Your research topic is matched against a domain-specific corpus. Relevant papers are ranked, and an evidence context is built with verified citations.
draft + verification
The proposal section is drafted against the NIH structure, then checked for citation support, evidence gaps, and structural weaknesses before it is treated as usable output.
simulated study section
A simulated reviewer scores each criterion (1-9), identifies weaknesses, and provides revision instructions.
$ axion-grants --topic "mGlu2/3 antagonist ketamine temporal stacking" \
    --corpus psychedelics-mental-health \
    --section full
[corpus] 1,538 papers loaded · 30 matching
[generate] full research strategy... done
[verify] DOIs: 66/67 valid (99%)
[review] NIH study section... done
Overall Impact: 6/9 — satisfactory (not funded)
$ axion-grants --revise full_proposal.md
[revise] addressing 10 critical + 8 major issues... done
[verify] DOIs: 13/13 valid (100%)
[re-review] NIH re-scoring... done
Overall Impact: 6 → 4/9 — very good (+2 improvement)
From "not funded" to "borderline fundable" in one revision.
A full Research Strategy was generated for a novel ketamine dosing protocol. The reviewer identified 10 critical and 8 major issues. The revision loop addressed all of them, and the re-review confirmed measurable improvement.
[Score chart: NIH scale, 1 = exceptional (best) to 9 = poor (worst); original vs. revised scores]
What the revision loop fixed: critical gaps before; addressed after revision.
The reviewer doesn't give generic advice. It identified the exact NIH policy being violated (NOT-OD-15-102), proposed specific sample sizes with power calculations, caught a sample-size arithmetic error in the original proposal, and estimated that the timeline was 17 months short. This is the same rigor a real study section would apply.
Cross-domain proof: same pipeline, different field
To prove Axion-Grants works beyond one domain, we ran the same pipeline on an immunotherapy topic — CAR-T cell exhaustion reversal via NR4A pathway inhibition. Different corpus, different field, same verification rigor. 51 citations, all verified. Score improved after one revision cycle.
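Assuming the same flag convention as the first run (the corpus name below is illustrative, not a confirmed identifier), the cross-domain invocation would look like:

$ axion-grants --topic "CAR-T cell exhaustion reversal via NR4A pathway inhibition" \
    --corpus oncology-immunotherapy \
    --section full

Only the topic and corpus change; the generate, verify, and review stages run unmodified.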
[Score chart: original vs. revised scores on the same NIH 1–9 scale]
What the revision loop fixed: critical gaps before; addressed after revision.
Different corpus (8,269 oncology papers), different domain (immunotherapy), same verification pipeline. The reviewer caught compound availability as a fatal flaw — exactly what a real study section would flag. The revision addressed it with validated preliminary data.
The cost comparison
Input and output
You provide: your Specific Aims in a structured format. The system challenges your proposal — uncomfortable by design.
You get: simulated reviewer objections before submission. Revision rates drop when objections are pre-addressed.
Start with an audit or one bounded proposal review
Bring the Specific Aims page, a draft, or a live reviewer pressure point. We define the smallest defensible engagement and return the objections before the study section does.
[ grants audit ]