01 // the pipeline

From research idea to reviewed draft

01

corpus + evidence

Your research topic is matched against a domain-specific corpus. Relevant papers are ranked, and an evidence context is built with verified citations.
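The exact ranking method is internal to the pipeline, but the step can be illustrated with a plain TF-IDF cosine ranking — a minimal sketch, assuming a small in-memory corpus; `tfidf_rank` and the corpus shape are hypothetical, not the production implementation:

```python
import math
from collections import Counter

def tfidf_rank(topic: str, corpus: dict[str, str]) -> list[tuple[str, float]]:
    """Rank corpus papers against a research topic by TF-IDF cosine similarity."""
    docs = {pid: Counter(text.lower().split()) for pid, text in corpus.items()}
    n = len(docs)
    # document frequency of each term across the corpus
    df = Counter()
    for counts in docs.values():
        df.update(counts.keys())

    def vec(counts: Counter) -> dict[str, float]:
        # smoothed IDF so unseen query terms don't divide by zero
        return {t: c * math.log((1 + n) / (1 + df[t])) for t, c in counts.items()}

    def cos(a: dict[str, float], b: dict[str, float]) -> float:
        num = sum(a[t] * b[t] for t in set(a) & set(b))
        den = math.hypot(*a.values()) * math.hypot(*b.values())
        return num / den if den else 0.0

    q = vec(Counter(topic.lower().split()))
    return sorted(((pid, cos(q, vec(c))) for pid, c in docs.items()),
                  key=lambda kv: -kv[1])
```

Top-ranked papers would then feed the evidence context; the real system presumably uses richer retrieval than bag-of-words.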

02

draft + verification

Each proposal section is drafted against the NIH structure, then checked for citation support, evidence gaps, and structural weaknesses before it is treated as usable output.
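A minimal sketch of what such a check might look like — the actual checks are internal to the pipeline; `check_draft`, the required section list, and the `[Author2021]`-style citation-key format are all illustrative assumptions:

```python
import re

# NIH Research Strategy conventionally covers these sections
REQUIRED_SECTIONS = ("Significance", "Innovation", "Approach")

def check_draft(draft: str, known_citations: set[str]) -> list[str]:
    """Flag missing NIH sections and citation keys with no backing evidence."""
    issues = []
    for section in REQUIRED_SECTIONS:
        if section.lower() not in draft.lower():
            issues.append(f"missing section: {section}")
    # citation keys are assumed to look like [Smith2021]
    for key in re.findall(r"\[([A-Za-z]+\d{4})\]", draft):
        if key not in known_citations:
            issues.append(f"unsupported citation: {key}")
    return issues
```

Only a draft that comes back with an empty issue list would move on as usable output.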

03

simulated study section

A simulated reviewer scores each criterion (1-9), identifies weaknesses, and provides revision instructions.
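The score movements in the case studies below follow the NIH convention that lower is better, so a positive delta means improvement. A tiny sketch of that bookkeeping (the function name and formatting are hypothetical):

```python
NIH_MIN, NIH_MAX = 1, 9  # 1 = exceptional (best), 9 = poor (worst)

def score_delta(original: int, revised: int) -> str:
    """Format an original → revised movement; '+' means the score improved (dropped)."""
    for s in (original, revised):
        if not NIH_MIN <= s <= NIH_MAX:
            raise ValueError(f"NIH scores run {NIH_MIN}-{NIH_MAX}, got {s}")
    diff = original - revised  # positive when the revised score is better
    label = "held" if diff == 0 else f"{diff:+d}"
    return f"{original} → {revised} ({label})"
```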

02 // case study: full r01

From "not funded" to "borderline fundable" in one revision

A full Research Strategy was generated for a novel ketamine dosing protocol. The simulated reviewer identified 10 critical and 8 major issues. The revision loop addressed all of them — and the re-review confirmed measurable improvement.

NIH SCALE: 1 = EXCEPTIONAL (BEST) → 9 = POOR (WORST)   |   ORIGINAL → REVISED

Significance     3 → 3 (held)
Innovation       3 → 4 (-1)
Investigator     6 → 3 (+3)
Approach         6 → 5 (+1)
Environment      5 → 2 (+3)
Overall Impact   6 → 4 (+2)

What the revision loop fixed:

before: critical gaps

FATAL: Zero preliminary data from PI's lab
Male-only design violates NIH NOT-OD-15-102
Sample sizes underpowered for molecular endpoints
Timeline impossible (3 years proposed, 5 needed)
No power analysis for Aims 2-3
Missing K252a within-cohort design

after: revision addressed

Added: Preliminary data from 3 pilot studies
Fixed: Both sexes included in Aim 3 (CVS model)
Fixed: n=10-12 for Western blots, n=8 for spines
Fixed: 5-year timeline with realistic milestones
Added: Full power calculations for all endpoints
Fixed: Within-cohort K252a + batch random effects

The reviewer doesn't give generic advice. It identified the exact NIH policy being violated (NOT-OD-15-102), proposed specific sample sizes with power calculations, caught a sample-size arithmetic error in the original proposal, and estimated that the timeline was 17 months short. This is the same rigor a real study section would apply.

03 // case study: oncology r01

Cross-domain proof: same pipeline, different field

To prove Axion-Grants works beyond one domain, we ran the same pipeline on an immunotherapy topic — CAR-T cell exhaustion reversal via NR4A pathway inhibition. Different corpus, different field, same verification rigor. 51 citations, all verified. Score improved after one revision cycle.

NIH SCALE: 1 = EXCEPTIONAL (BEST) → 9 = POOR (WORST)   |   ORIGINAL → REVISED

Significance     4 → 3 (+1)
Innovation       5 → 5 (held)
Investigator     5 → 5 (held)
Approach         6 → 4 (+2)
Environment      4 → 4 (held)
Overall Impact   6 → 5 (+1)

What the revision loop fixed:

before: critical gaps

FATAL: NR4A antagonist compounds unavailable or unvalidated
No ChIP-seq pilot data in primary human CAR-T cells
66 months of work crammed into 60-month grant
No single-cell Multiome feasibility data shown
Dose range spanning 1000-fold (0.1–100 µM)
Budget justification missing for key experiments

after: revision addressed

Added: Csn-B dose-response: 48% PD-1+TIM-3+ reduction (n=6)
Added: ChIP-qPCR: 8.4× enrichment at PDCD1 promoter
Fixed: Aim 3 repositioned as exploratory with contingency
Added: In vivo exhaustion model (75% relapse, 68% markers)
Added: Detailed statistical plans and pipelines
Added: Quantitative success criteria for all aims

32 min full pipeline time
100% doi verification (51/51)
+1 nih score improvement

Different corpus (8,269 oncology papers), different domain (immunotherapy), same verification pipeline. The reviewer caught compound availability as a fatal flaw — exactly what a real study section would flag. The revision addressed it with validated preliminary data.

04 // economics

The cost comparison

27 min full r01 + revision cycle
99% doi verification (66/67)
+2 nih score improvement

Context: Professional grant writers charge $5,000-$15,000 per R01. NIH success rates hover around 20%. A single revision cycle takes 2-4 weeks. Axion-Grants doesn't replace the PI's expertise — it eliminates the mechanical work and catches the gaps that sink otherwise strong proposals.

05 // what you provide, what you get

Input and output

you provide

Research topic — 1-2 sentence description
Preliminary data — optional, strengthens output
Domain corpus — we build it or use yours
Specific aims draft — optional, if you have one

you get

Specific Aims page — 1 page, NIH format
Research Strategy — Significance + Innovation + Approach
NIH reviewer simulation — 1-9 scores + revision checklist
Verified bibliography — every DOI confirmed via CrossRef
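CrossRef exposes a public REST endpoint at `api.crossref.org/works/{doi}`, which returns HTTP 200 for a DOI it knows. A hedged sketch of the confirmation step — the pipeline's real implementation is not shown here, and the fetcher is injectable so the check can run without network access:

```python
from urllib.parse import quote
from urllib.request import urlopen
from urllib.error import HTTPError

def doi_exists(doi: str, fetch=None) -> bool:
    """Return True if CrossRef resolves this DOI (HTTP 200 on its works endpoint)."""
    url = f"https://api.crossref.org/works/{quote(doi)}"
    if fetch is None:  # default path: real network call
        def fetch(u):
            try:
                with urlopen(u) as resp:
                    return resp.status
            except HTTPError as err:
                return err.code
    return fetch(url) == 200

def verify_bibliography(dois: list[str], fetch=None) -> tuple[int, int]:
    """Count (verified, total) DOIs — the shape behind a stat like '51/51 verified'."""
    verified = sum(doi_exists(d, fetch) for d in dois)
    return verified, len(dois)
```

Any DOI that fails this lookup would be flagged rather than silently cited.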

what you gain

Simulated reviewer objections surface before submission; the revision rate drops when objections are pre-addressed.

what it costs

Requires your Specific Aims in structured format. The system challenges your proposal — uncomfortable by design.

06 // get started

Start with an audit or one bounded proposal review

Bring the Specific Aims page, draft, or live reviewer pressure point. We define the smallest defensible engagement and return the objections before study section does.

[ grants audit ]