Adaptive LLM Routing Under Budget Constraints

Fujitsu Research of India

Abstract

Large Language Models (LLMs) have revolutionized natural language processing, but their varying capabilities and costs pose challenges in practical applications. LLM routing addresses this by dynamically selecting the most suitable LLM for each query/task. Previous approaches treat this as a supervised learning problem, assuming complete knowledge of optimal query-LLM pairings. However, real-world scenarios lack such comprehensive mappings and face evolving user queries. We thus propose to study LLM routing as a contextual bandit problem, enabling adaptive decision-making using bandit feedback without requiring exhaustive inference across all LLMs for all queries (in contrast to supervised routing). To address this problem, we develop a shared embedding space for queries and LLMs, where query and LLM embeddings are aligned to reflect their affinity. This space is initially learned from offline human preference data and refined through online bandit feedback. We instantiate this idea through Preference-prior Informed Linucb fOr adaptive rouTing (PILOT), a novel extension of LinUCB. To handle diverse user budgets for model routing, we introduce an online cost policy modeled as a multi-choice knapsack problem, ensuring resource-efficient routing.
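The core routing idea above can be illustrated with a small sketch: a LinUCB-style contextual bandit where each LLM is an arm, the context is the query embedding, and the ridge-regression estimate for each arm is initialized from a preference-learned prior. This is a hedged illustration of the general technique, not the paper's actual PILOT implementation; `theta_prior` and all other names are assumptions.

```python
import numpy as np

class PreferencePriorLinUCB:
    """Illustrative sketch of LinUCB routing with a preference-informed prior.

    Each LLM (arm) keeps a ridge-regression estimate of expected reward as a
    linear function of the query embedding. `theta_prior` stands in for
    parameters learned offline from human preference data (hypothetical name,
    not the paper's implementation).
    """

    def __init__(self, n_arms, dim, alpha=1.0, theta_prior=None):
        self.alpha = alpha  # exploration strength
        self.A = [np.eye(dim) for _ in range(n_arms)]  # per-arm Gram matrices
        prior = np.zeros((n_arms, dim)) if theta_prior is None else np.asarray(theta_prior)
        # Shift the ridge solution so the initial estimate equals the prior.
        self.b = [self.A[k] @ prior[k] for k in range(n_arms)]

    def select(self, x):
        """Pick the arm (LLM) maximizing the upper confidence bound for query embedding x."""
        scores = []
        for A_k, b_k in zip(self.A, self.b):
            A_inv = np.linalg.inv(A_k)
            theta = A_inv @ b_k
            ucb = theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)
            scores.append(ucb)
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        """Incorporate bandit feedback (e.g., user rating of the routed answer)."""
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x
```

Only the chosen arm is updated per query, which is exactly why bandit feedback avoids the exhaustive all-LLM inference that supervised routing requires.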

Key Contributions

  • Formulation of LLM routing as a budget-constrained contextual bandit problem
  • A Preference-prior Informed LinUCB algorithm (PILOT) that combines offline human preference data with online bandit feedback to route queries. We also show that our preference prior yields a lower regret bound than standard LinUCB.
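The budget side of the formulation can be sketched as a per-query decision in the spirit of a multi-choice knapsack: for each incoming query, exactly one LLM is chosen so that estimated quality is traded off against cost while total spending stays within the user's budget. The pacing heuristic and all names below are illustrative assumptions, not the paper's actual online cost policy.

```python
def route_under_budget(ucb_scores, costs, remaining_budget, queries_left):
    """Illustrative online cost policy (multi-choice-knapsack flavor).

    Picks one LLM for the current query, preferring the highest estimated
    quality (UCB score) among models affordable at the current spending pace.
    Hypothetical heuristic for exposition only.
    """
    # Budget pace: how much we can afford to spend on this query on average.
    per_query_budget = remaining_budget / max(queries_left, 1)
    # Models affordable at the current pace; fall back to the cheapest one.
    affordable = [i for i, c in enumerate(costs) if c <= per_query_budget]
    if not affordable:
        return min(range(len(costs)), key=lambda i: costs[i])
    # Among affordable models, take the best estimated quality.
    return max(affordable, key=lambda i: ucb_scores[i])
```

With a generous remaining budget this policy routes to the strongest model; as the budget tightens it degrades gracefully toward cheaper LLMs instead of exhausting funds early.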

BibTeX

@inproceedings{panda2025adaptive,
  title     = {Adaptive LLM Routing Under Budget Constraints},
  author    = {Panda, Pranoy and Magazine, Raghav and Devaguptapu, Chaitanya and Takemori, Sho and Sharma, Vishal},
  booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2025},
  pages     = {23934-23949},
  year      = {2025},
  publisher = {Association for Computational Linguistics}
}