
Multi-Agent Evidence Retrieval with Citation Provenance for Resource-Limited Clinical Settings

Faculty-advised research — team forming

→ Core contributors included as co-authors

Evidence Retrieval AI

Evidence retrieval for resource-limited clinical settings where standard access infrastructure is unavailable. The gap between published literature and frontline care decisions is the core constraint. Research conducted with faculty advisors at Boston University.

Multi-Agent · Evidence Retrieval · Clinical Systems · Citation Integrity · Resource-Limited Settings

The crisis

  • Rural clinics operate without access to specialists or dedicated research librarians
  • Doctors in underserved regions make clinical decisions without access to current medical evidence
  • AI could save US healthcare an estimated $150 billion annually by 2026 (PMC)
  • 109 studies (2019-2024) show AI and telemedicine can transform rural healthcare, but adoption is blocked by cost and infrastructure (arXiv, Aug 2025)
  • Refugee camp health workers are already using AI health assistants in the absence of specialists (Zaatari camp, Jordan; Frontiers, 2025)
  • The evidence gap is not a content problem; the literature exists. It is an access and retrieval architecture problem.

About this research

Clinical evidence retrieval fails in underserved settings because the default stack assumes resources that are not present: paid databases, librarians, and time for manual search. The access gap between published literature and frontline clinical decisions is structural, not incidental. This thread studies that constraint in low-resource clinical environments.
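The core design constraint described above, that every synthesized claim must remain traceable back to its source, can be sketched as a minimal data structure. This is an illustrative sketch only; all names and thresholds here are hypothetical and are not the project's implementation:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class EvidenceChunk:
    """A retrieved passage together with its citation provenance."""
    source_id: str   # e.g. a PubMed ID or DOI (hypothetical field names)
    passage: str     # the text span used as evidence
    score: float     # retrieval similarity score from the vector store

@dataclass
class SynthesizedAnswer:
    """An answer that carries the identifiers of every source it used."""
    text: str
    citations: list[str] = field(default_factory=list)

def synthesize(chunks: list[EvidenceChunk], min_score: float = 0.5) -> SynthesizedAnswer:
    """Filter chunks below a relevance threshold and propagate the
    surviving source identifiers into the final answer, so the chain
    from source document to synthesized claim stays auditable."""
    kept = [c for c in chunks if c.score >= min_score]
    return SynthesizedAnswer(
        text=" ".join(c.passage for c in kept),
        citations=[c.source_id for c in kept],
    )
```

In a full pipeline the `passage` fields would be merged by a language model rather than concatenated, but the key property is the same: citations travel with the evidence through every stage instead of being reattached afterward.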

Key findings

  • (In Progress)

Related project

View project →

Open roles

AI Research Engineer

Open

Runs experiments, implements baselines, and benchmarks retrieval across specialist agents. Builds citation provenance from source document through synthesized answer.

Skills: Python, Pinecone, OpenAI Embeddings, AWS, RAG Systems, Evaluation

Apply →

Data & Statistical Analyst

Open

Handles evaluation datasets, statistical validation of retrieval quality, and quantitative comparison against PubMed and RAG baselines.

Skills: Statistics, Medical corpora, Evaluation metrics, Data pipelines

Apply →

Research Writer

Open

Conducts literature reviews, drafts paper sections, and formats citations for clinical AI and medical informatics venues.

Skills: Academic Writing, Medical literature, Citations, AMIA / npj-style formatting

Apply →

Research Coordinator

Open

Tracks sprint progress and meeting notes, and keeps the collaboration thread organized across engineers and advisors.

Skills: Project coordination, Documentation, Async communication

Apply →