Build AI workflows without provider glue.

ProviderPlane is for Node.js applications that have moved beyond isolated model calls. Build multi-step workflows across OpenAI, Anthropic, Gemini, and Mistral without hard-coding provider behavior into every step.

Why this exists

Raw model SDKs are fine when your app only needs isolated model calls. They get harder to manage when you need branching, provider-chain fallback, persistence, approval steps, or multimodal execution across several stages.

Workflows stop being simple

Branching, fan-out, approval gates, persistence, and multimodal steps quickly turn "just call the model" into a workflow problem.

Provider behavior spreads through the app

Model choice, provider-chain fallback, and provider-specific differences get harder to manage once they show up across handlers, services, and helpers.

The execution model gets hard to see

If sequencing, fallback behavior, and transformations only exist across callbacks, wrappers, and provider-specific code paths, the workflow gets harder to inspect, test, and change.

What teams actually get

Keep workflow logic in one place

Describe the graph once and keep orchestration concerns out of route handlers, jobs, and random utility files.

Reduce provider lock-in without rewriting the app

Support multiple providers through one workflow layer without turning your application architecture into a provider migration project.

Make workflows inspectable

Export, inspect, persist, and resume workflows instead of treating orchestration as opaque glue that nobody wants to touch.
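As a sketch of what that could look like in practice — note that the toJSON, save, and resume names below are illustrative assumptions, not a documented ProviderPlane API:

// Hypothetical inspect/persist/resume flow; method names are assumptions.
const graph = workflow.toJSON();      // export the workflow shape for inspection
await store.save(runId, graph);       // persist it with your own storage layer

// Later, pick the run back up instead of replaying completed steps.
const resumed = await pipeline.resume(runId);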

What ProviderPlane is not

Not a prompt library

ProviderPlane is not a convenience wrapper around one provider. It is for applications where workflow shape and execution policy need to be modeled explicitly.

Not trying to hide your application logic

ProviderPlane gives workflow structure a clearer place to live without trying to swallow your business logic, failure policy, or custom steps.

Not an agent framework

ProviderPlane is workflow-first. You can build agent-style systems on top of it, but the library itself does not impose an agent runtime or hide execution behind autonomous planning behavior.

A workflow layer above providers

ProviderPlane gives you one place to define the workflow instead of spreading provider calls, sequencing, and transformation logic across the application.

// generateText, tts, and transcribe are step definitions created elsewhere;
// each exposes an id used to wire its output into later steps.
const workflow = pipeline
  .chat(generateText.id, "Generate one short inspirational quote in French.", {
    normalize: "text"
  })
  // Speak the generated quote, feeding the TTS step the chat output.
  .tts(tts.id, { voice: "alloy", format: "mp3" }, { source: generateText })
  // Transcribe the audio back to text.
  .transcribe(transcribe.id, { responseFormat: "text" }, { source: tts, normalize: "text" })
  .output((values) => ({
    generatedText: String(values.generateText ?? ""),
    transcriptText: String(values.transcribe ?? "")
  }))
  .build();

Three planes of control

ProviderPlane exposes three planes to operate at, depending on how much control you need.

Build at the workflow level

Use this layer when you want to define the flow directly instead of stitching provider calls together step by step.

When you need finer execution control

Use jobs when you need more control over how execution is scheduled, retried, or integrated into the rest of your system.
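A sketch of what the job layer might look like — the createJob, retries, and run names here are hypothetical, chosen only to illustrate the idea of wrapping a workflow in an execution policy:

// Hypothetical job-layer usage; names are assumptions, not documented API.
const job = jobs.createJob(workflow, {
  retries: 3,                 // assumed retry-policy knob
  backoff: "exponential",
  queue: "ai-workflows"       // hand scheduling to an existing queue
});

const result = await job.run({ input: { topic: "inspiration" } });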

When you want direct provider access

Use the core client directly when you want low-level provider access without going through workflow or job abstractions.
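For comparison, a direct core-client call might look something like this — again, the client constructor and chat method shown are illustrative assumptions, not a confirmed API:

// Hypothetical low-level call; class and option names are assumptions.
const client = new ProviderPlaneClient({ providers: ["openai", "anthropic"] });

const reply = await client.chat({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Say hello." }]
});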

Where to start