01 // what it does

You direct. The engine verifies.

Define the domain. Provide the corpus. The engine checks citation support, maps contradictions, and pressures the draft before it reaches a reviewer.

Any scientific domain
1,000s of papers per corpus
~57% rejected by critic
1 viable hypothesis from 21
100% DOIs verified

Sources: PubMed, Semantic Scholar, or your own corpus. Evidence verified before delivery. Weak claims rejected early.

02 // verification pipeline

How it works

corpus ingestion: papers indexed by a domain-specific ontology
support + contradiction check: citations and evidence support inspected before output moves
independent critique: the domain critic attacks claims on mechanism, evidence, and feasibility
review-ready package: only what survives moves forward for human review

The critic adapts. Pharmacology gets mechanism-of-action attacks. Clinical research gets evidence-hierarchy pressure. Same axes. Domain expertise shifts.
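The four stages above behave like a filter: support check first, adversarial critique second, survivors only. A minimal sketch in Python, assuming hypothetical names (`Claim`, `verify_pipeline`, a `critic` callable); the engine's real interfaces are not described in this document:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    citations: list       # DOIs the claim leans on
    supported: bool = False

def verify_pipeline(claims, corpus, critic):
    """Hypothetical sketch: support + contradiction check, then
    independent critique. Only claims that survive both move forward."""
    survivors = []
    for claim in claims:
        # support check: every cited DOI must resolve inside the indexed corpus
        claim.supported = all(doi in corpus for doi in claim.citations)
        if not claim.supported:
            continue  # weak claims rejected early
        # independent critique: domain critic attacks on mechanism,
        # evidence, and feasibility, returning a verdict string
        verdict = critic(claim)  # e.g. "VIABLE", "WEAK", "REJECTED"
        if verdict == "VIABLE":
            survivors.append((claim, verdict))
    return survivors
```

The shape, not the implementation, is the point: rejection happens before anything reaches a human reviewer.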

human-ai discovery loop

You direct. You judge. The engine verifies in between.

Most AI research tools surface papers fast. This tool asks a different question: does your draft survive scrutiny?

you define corpus + question
engine verifies citations + attacks claims
you review verdicts + redirect
engine re-runs verification on revised scope
you accept review-ready output

The loop repeats until output is either verified or rejected. No hallucination passes. No unsupported claim survives. The researcher controls direction; the engine enforces evidence standards.
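That alternation reads naturally as a small control loop. A hypothetical sketch only, with `engine` and `researcher` as stand-in callables for the two sides of the loop; none of these names come from the product:

```python
def discovery_loop(question, corpus, engine, researcher, max_rounds=5):
    """Hypothetical sketch of the human-AI loop: the engine verifies,
    the researcher reviews verdicts and redirects, until the output
    is accepted as review-ready or rejected."""
    scope = question
    for _ in range(max_rounds):
        # engine verifies citations + attacks claims for the current scope
        verdicts = engine(scope, corpus)
        # researcher reviews verdicts; either decides, or narrows the scope
        decision, scope = researcher(verdicts)
        if decision in ("accept", "reject"):
            return decision, verdicts
    # no decision within the round budget: nothing unverified ships
    return "reject", verdicts
```

The default in the sketch mirrors the document's claim: when verification never converges, the output is rejected rather than passed through.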

03 // case study: psychedelic medicine

1,538 papers. 21 candidate hypotheses. 1 survived The Gate.

A research team ran the complete psychedelic and ketamine antidepressant literature through the engine. The verification layer rejected most candidates. One survived with verified support.

the discovery: temporal stacking of mGlu2/3 antagonism after ketamine

Verdict: VIABLE
Mechanism: Sequential mGlu2/3 receptor antagonist administration 24-72h post-ketamine intercepts the declining neuroplasticity window, extending BDNF-TrkB-mTOR signaling
Evidence: Supported by 12 verified papers across behavioral pharmacology, receptor binding, and BDNF pathway studies
Potential: Could reduce cumulative ketamine exposure by 75% while sustaining antidepressant effects for 4+ weeks
Status: NIH R01 Specific Aims generated. Reviewer simulation scored 2/9 Innovation (Outstanding on the NIH scale where 1 is best)

The engine cross-referenced papers rarely read together. The real value: unsupported ideas do not masquerade as conclusions.

04 // case study results

All 7 directions, 21 hypotheses

Case study results comparison

Direction | Hypotheses | Best Verdict | Key Finding
mGlu2/3 + Ketamine Stacking | 3 | VIABLE | Temporal stacking extends neuroplasticity window
5-HT2A Receptor Modulation | 3 | WEAK | Partial agonist approach needs more specificity data
Muscimol + SSRI Interactions | 3 | WEAK | Safety finding: serotonin syndrome risk flagged
Neuroplasticity Cascades | 3 | WEAK | BDNF timing windows need dose-response data
Default Mode Network | 3 | REJECTED | Mechanism too diffuse for actionable protocol
Gut-Brain Axis | 3 | REJECTED | Insufficient mechanistic evidence for psychedelic link
Epigenetic Markers | 3 | REJECTED | Too speculative for current evidence base

05 // unexpected value

It finds risks too.

case study bonus: drug interaction safety signal

The engine flagged a triple interaction: muscimol + trazodone + sertraline, a combination creating serotonergic excess. It emerged from combinatorial analysis across 1,538 papers, a signal too diffuse for manual review to catch.

Any domain. The engine catches contradictions and safety signals that fast synthesis flattens away.

what you gain

Adversarial critique catches failure modes single-model review misses.

what it costs

3x inference latency. 24-hour turnaround. Requires a well-formed question.

06 // your corpus, your review pressure

Bring output that needs to survive review.

Any domain with real literature and a real reviewer on the other side.

[ research audit ]