
OpenMesh

The World's First Task-Level Model Routing for AI Agents

Powering the next billion AI agents with TaskRouter — delivering up to 95% lower cost, higher speed, and improved accuracy

What is TaskRouter?

TaskRouter is a task-level model routing system for modern AI agents and multi-step workflows. Rather than forcing one model to handle an entire workflow, TaskRouter decomposes complex processes into individual tasks and dynamically selects the best model for each one. This reflects how real agent systems work, where planning, retrieval, reasoning, coding, verification, and summarization happen across distinct stages with different performance requirements. TaskRouter continuously evaluates models across quality, latency, and cost, then routes each task to the most appropriate endpoint in real time. The result is higher efficiency, better task-model fit, and up to 90 percent lower inference cost than running every operation on a single frontier model.
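The page does not document TaskRouter's actual interface, so the core idea can be sketched in a few lines of hypothetical Python: decompose a workflow into typed tasks, then send each task to the model suited for that category. Every name below (models, task types, functions) is invented for illustration.

```python
# Hypothetical sketch of task-level routing; not TaskRouter's real API.

# A workflow decomposed into typed tasks, as described above.
workflow = [
    ("planning", "Outline the steps to answer the user question"),
    ("retrieval", "Fetch the three most relevant documents"),
    ("reasoning", "Synthesize an answer from the retrieved context"),
    ("summarization", "Compress the answer into two sentences"),
]

# A routing table mapping task categories to model endpoints.
# In TaskRouter this mapping is chosen dynamically; here it is static.
routing_table = {
    "planning": "small-fast-model",
    "retrieval": "embedding-model",
    "reasoning": "frontier-model",
    "summarization": "small-fast-model",
}

def route(workflow, table):
    """Return (model, prompt) pairs: one model call per task."""
    return [(table[kind], prompt) for kind, prompt in workflow]

calls = route(workflow, routing_table)
```

The point of the sketch is the shape of the decision: only the reasoning step pays for a frontier model, while the cheaper steps run on smaller endpoints.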

Why TaskRouter

Task-level model routing

Agent workflows are broken into distinct tasks such as reasoning, coding, retrieval, and structured generation. TaskRouter sends each task to the model best suited for that step, instead of forcing one model to do everything.

Continuous model evaluation

TaskRouter continuously benchmarks available models across task categories and updates routing decisions as model quality, latency, and pricing change.
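One plausible way to keep routing decisions current, sketched here with invented names and numbers, is to fold each new benchmark observation into a rolling per-model estimate (an exponential moving average), so quality, latency, and pricing signals drift with the real service rather than staying frozen at integration time:

```python
# Illustrative only: rolling benchmark stats via exponential moving average.

def ema_update(stats, model, metric, observed, alpha=0.2):
    """Blend a new benchmark observation into the rolling estimate."""
    old = stats.setdefault(model, {}).get(metric, observed)
    stats[model][metric] = (1 - alpha) * old + alpha * observed
    return stats[model][metric]

stats = {}
# The first observation seeds the estimate; later ones shift it gradually.
ema_update(stats, "model-a", "latency_ms", 400.0)
ema_update(stats, "model-a", "latency_ms", 300.0)  # the model got faster
```

With `alpha=0.2`, the second observation moves the latency estimate from 400 ms to 380 ms: routing reacts to change without overreacting to a single noisy benchmark run.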

Optimized execution at scale

TaskRouter executes model calls across distributed inference infrastructure, improving efficiency, reducing latency, and supporting reliable production-scale workloads.

Architecture Flow

TaskRouter sits above inference infrastructure and determines which model should execute each task. Rather than treating inference as a single model call, it represents AI workflows as task graphs and dynamically routes each node through the most appropriate execution path.

1. Task Graph Input

AI agents, applications, and automated workflows generate task graphs composed of multiple model calls.
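A task graph of this kind is just a DAG whose nodes are model calls and whose edges are data dependencies. As a minimal sketch (task names invented), Python's standard-library `graphlib` can derive a valid dispatch order from such a graph:

```python
# Hypothetical task graph: each node maps to the set of tasks it depends on.
from graphlib import TopologicalSorter

task_graph = {
    "plan": set(),
    "retrieve": {"plan"},
    "code": {"plan"},
    "verify": {"code"},
    "summarize": {"retrieve", "verify"},
}

# One valid order in which the tasks can be dispatched to models.
order = list(TopologicalSorter(task_graph).static_order())
```

Representing the workflow as a graph rather than a single call is what makes per-node routing possible: "retrieve" and "code" have no dependency on each other, so they can also run in parallel on different models.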

2. Model Intelligence Layer

TaskRouter evaluates available models and selects the optimal one for each task based on quality, latency, and cost.
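A simple version of that quality/latency/cost trade-off can be written as a weighted score, with the highest-scoring candidate winning the task. The models, benchmark figures, and weights below are all invented for illustration:

```python
# Illustrative model selection: score = quality minus latency and cost penalties.

candidates = {
    # model: (quality 0-1, latency in seconds, cost in $ per 1K tokens)
    "frontier-model":   (0.95, 4.0, 0.0300),
    "mid-tier-model":   (0.88, 1.5, 0.0030),
    "small-fast-model": (0.75, 0.4, 0.0004),
}

def select(candidates, w_latency=0.02, w_cost=10.0):
    """Route to the model with the best quality-latency-cost trade-off."""
    def score(model):
        quality, latency, cost = candidates[model]
        return quality - w_latency * latency - w_cost * cost
    return max(candidates, key=score)

best = select(candidates)
```

With these weights the mid-tier model wins; set the latency and cost weights to zero and the frontier model wins on raw quality. Per-task weights are what let a router spend frontier-model budget only where quality dominates.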

3. Optimization Engine

Routing policies continuously improve as new models enter the ecosystem and benchmark signals update.

4. Efficient Execution Layer

Selected model calls are executed across distributed inference infrastructure for reliable, low-latency performance.
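Part of what "efficient execution" means for a task graph is fanning out independent calls concurrently instead of sequentially. A minimal sketch with `asyncio` (all names invented; the sleep stands in for a network request):

```python
# Sketch of fan-out execution for one stage of a task graph.
import asyncio

async def call_model(model, prompt):
    """Stand-in for a real inference request."""
    await asyncio.sleep(0)  # would be a network round-trip in production
    return f"{model}: {prompt}"

async def run_stage(calls):
    # Dispatch every call in the stage at once and await them all.
    return await asyncio.gather(*(call_model(m, p) for m, p in calls))

results = asyncio.run(run_stage([
    ("small-fast-model", "plan"),
    ("embedding-model", "retrieve"),
]))
```

Because `asyncio.gather` preserves argument order, downstream tasks can consume the stage's results positionally even though the calls completed concurrently.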

Performance Benefits

99%+

task completion accuracy through model specialization

90–95%

cost reduction compared with single-model agent pipelines

New Models

new models are evaluated and added to the routing pool continuously

One API

developers integrate once with the unified API while routing adapts automatically
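The unified API itself is not documented on this page, so the client below is purely hypothetical; it only illustrates the integrate-once shape described above, where the caller names a task and the router chooses the model behind the scenes:

```python
# Invented stand-in for a unified routing client; not a real SDK.

class TaskRouterClient:
    """Hypothetical client: one integration point, routing handled inside."""

    def __init__(self, routing_table):
        self._table = routing_table

    def run(self, task_type, prompt):
        model = self._table.get(task_type, "default-model")
        # A real client would call the selected endpoint over the network here.
        return {"model": model, "prompt": prompt}

client = TaskRouterClient({"coding": "code-model"})
result = client.run("coding", "write a sort function")
```

The design point is that model choice lives behind the API boundary: when routing policies change, callers keep the same `run(task_type, prompt)` call and simply start hitting different models.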
