Faculty-advised research — team forming
→ Core contributors included as co-authors
Agent Framework Research
First systematic performance benchmark of multi-language agent development frameworks, covering latency, throughput, memory, and framework overhead under concurrent and sustained load in the Go and Python ecosystems.
As agent development frameworks proliferate across languages and providers, practitioners lack empirical data for comparing them under realistic conditions. This work establishes the first cross-language benchmark covering Google ADK Go, ADK Python, LangGraph, and the OpenAI Agents SDK across five scenarios, ranging from a single-agent baseline to high-concurrency sustained load. The benchmark isolates framework overhead from LLM API time, enabling a fair comparison of the coordination layer itself.
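To make the overhead-isolation idea concrete, here is a minimal Python sketch; all names in it (stub_llm_call, measure_overhead, the framework_invoke parameter) are hypothetical illustrations, not the benchmark's actual harness. The real LLM call is replaced by a stub with a fixed synthetic latency, so any per-call time beyond that latency is attributable to the framework's coordination layer.

```python
import asyncio
import time

# Fixed stand-in latency for the real LLM API call (hypothetical value).
SYNTHETIC_LLM_LATENCY = 0.05  # seconds


async def stub_llm_call(prompt: str) -> str:
    """Stand-in for a real LLM API call with a known, fixed latency."""
    await asyncio.sleep(SYNTHETIC_LLM_LATENCY)
    return "stub response"


async def run_agent_once(framework_invoke) -> float:
    """Time one end-to-end agent invocation through a framework.

    `framework_invoke` is whatever async entry point the framework under
    test exposes (hypothetical here); it must route its model calls to
    stub_llm_call so the LLM time is known exactly.
    """
    start = time.perf_counter()
    await framework_invoke("benchmark prompt")
    return time.perf_counter() - start


async def measure_overhead(framework_invoke, concurrency: int, iterations: int):
    """Run `iterations` batches of `concurrency` parallel invocations and
    report per-call framework overhead: wall time minus synthetic LLM time."""
    overheads = []
    for _ in range(iterations):
        batch = [run_agent_once(framework_invoke) for _ in range(concurrency)]
        for elapsed in await asyncio.gather(*batch):
            overheads.append(elapsed - SYNTHETIC_LLM_LATENCY)
    overheads.sort()
    return {
        "p50": overheads[len(overheads) // 2],
        "p95": overheads[int(len(overheads) * 0.95)],
    }


if __name__ == "__main__":
    # Degenerate "framework": calls the stub directly, so the measured
    # overhead should be close to zero. A real run wires in ADK, LangGraph,
    # or the OpenAI Agents SDK instead.
    results = asyncio.run(measure_overhead(stub_llm_call, concurrency=8, iterations=10))
    print(results)
```

A real harness would swap the stub into each framework's model-client interface and repeat the measurement at each scenario's concurrency level, keeping the subtraction step identical across Go and Python implementations.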
Team
Lead — Go Implementation
Dishant Pandauria
Implements the ADK Go benchmark and its Go concurrency patterns; handles AWS execution.
Skills: Go, Google ADK, Benchmarking, AWS
Lead — Python + Analysis
Rashanjot Kaur
Implements the LangGraph and OpenAI SDK benchmarks; coordinates paper writing.
Skills: Python, LangGraph, OpenAI SDK, Research Design
Statistical Analysis
Siddhant Shah
Statistical analysis, chart generation, results validation.
Skills: Statistics, Python, Data Analysis
Faculty Advisor
Prof. Eugene Pinsky
Methodology review and publication venue selection.
Skills: Computer Science, Research Methodology