
Migrate from OpenAI to LLMWise in ~15 Minutes

Keep your prompts and message format. Swap your client to the official LLMWise SDK and get multi-model routing, failover, and orchestration on top of one API key.

You only pay credits per request. No monthly subscription. Paid credits never expire.

Replace multiple AI subscriptions with one wallet that includes routing, failover, and optimization.

Why teams start here first

- No monthly subscription: pay-as-you-go credits. Start with trial credits, then buy only what you consume.
- Failover safety: production-ready routing. Auto fallback across providers when latency, quality, or reliability changes.
- Data control: your policy, your choice. BYOK and zero-retention mode keep training and storage scope explicit.
- Single API experience: one key, multi-provider access. Use Chat/Compare/Blend/Judge/Failover from one dashboard.
Quick start
pip install llmwise  # or npm i llmwise

Full example

# Python (LLMWise SDK)
# pip install llmwise
import os
from llmwise import LLMWise

client = LLMWise(os.environ["LLMWISE_API_KEY"])

# Messages keep the same shape: role + content
resp = client.chat(
    model="auto",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Compare Python and Rust for backend development."},
    ],
    max_tokens=512,
)
print(resp["content"])

// TypeScript (LLMWise SDK)
// npm i llmwise
import { LLMWise } from "llmwise";

const tsClient = new LLMWise(process.env.LLMWISE_API_KEY!);

for await (const ev of tsClient.chatStream({
  model: "claude-sonnet-4.5",
  messages: [{ role: "user", content: "Write a TypeScript utility type for deep partial." }],
})) {
  if (ev.delta) process.stdout.write(ev.delta);
  if (ev.event === "done") break;
}
Evidence snapshot

Migration integration overview

Everything you need to integrate LLMWise's multi-model API into your migration project.

Setup steps: 5 to first API call
Features: 8 capabilities included
Models available: 9 via a single endpoint
Starter credits: 40 (7-day trial; paid credits never expire)

What you get

- OpenAI-style messages format (role + content) so prompts migrate cleanly
- Official Python + TypeScript SDKs (plus raw REST if you prefer)
- Chat, Compare, Blend, Judge, and Mesh failover through one API key
- Auto routing (balanced/cost/latency/reliability goals; see the sketch after this list)
- Failover routing with fallback chains for 429/5xx/timeouts
- Streaming via SSE with token deltas
- Credit-based pay-per-use (paid credits do not expire)
- BYOK optional for direct provider routing
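
To make the routing goals concrete, here is a minimal Python sketch of goal-based Auto routing. The routing={"goal": ...} parameter shape is an assumption for illustration (the Mesh step below uses routing={"strategy": ...}); confirm the exact field names in the SDK reference.

# Goal-based Auto routing (sketch).
# ASSUMPTION: routing={"goal": ...} is illustrative; verify the real field name.
import os
from llmwise import LLMWise

client = LLMWise(os.environ["LLMWISE_API_KEY"])

for goal in ["balanced", "cost", "latency", "reliability"]:
    resp = client.chat(
        model="auto",
        routing={"goal": goal},  # assumed parameter shape
        messages=[{"role": "user", "content": "One-line summary of HTTP/3."}],
    )
    print(f"{goal}: {resp['content'][:60]}")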

Step-by-step integration

1. Get your LLMWise API key

Sign up at llmwise.ai and copy your API key from the dashboard. You get 40 free trial credits to start, then continue with non-expiring paid credits. No credit card required.

export LLMWISE_API_KEY="your_api_key_here"
2. Install the official SDK (recommended)

Use the official SDKs for Python and TypeScript. Your prompt/message structure stays the same; you only swap the client call.

pip install llmwise
# or
npm i llmwise
3. Send your first request (messages stay familiar)

LLMWise uses OpenAI-style role/content messages. Choose a model or use model="auto" to route by goal.

resp = client.chat(
    model="auto",
    messages=[{"role": "user", "content": "Hello, world!"}],
)
print(resp["content"])
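
In production, guard this call before wiring it into request paths. A minimal sketch that catches broadly, since the SDK's specific exception classes aren't shown here; narrow the except clause once you know them:

try:
    resp = client.chat(
        model="auto",
        messages=[{"role": "user", "content": "Hello, world!"}],
    )
    print(resp["content"])
except Exception as exc:  # replace with the SDK's specific exception types
    print(f"LLMWise request failed: {exc}")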
4. Add reliability with Mesh failover routing

Specify a fallback chain so requests can retry on another model when the primary is rate-limited or failing.

resp = client.chat(
    model="gpt-5.2",
    routing={"strategy": "rate-limit", "fallback": ["claude-sonnet-4.5", "gemini-3-flash"]},
    messages=[{"role": "user", "content": "Summarize this incident report."}],
)
print(resp["content"])
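
When a fallback fires, it helps to log which model actually answered. A minimal sketch, assuming the response dict carries a model field (an assumed field name; verify against the response schema):

# ASSUMPTION: "model" on the response dict is an illustrative field name.
served_by = resp.get("model", "unknown")
print(f"Answered by: {served_by}")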
5. Upgrade from chat to orchestration (Compare/Blend/Judge)

Use Compare to benchmark models, Blend to synthesize, and Judge to score responses. These are native modes designed for production eval and routing decisions.

resp = client.compare(
    models=["gpt-5.2", "claude-sonnet-4.5", "gemini-3-flash"],
    messages=[{"role": "user", "content": "Explain eventual consistency."}],
)
print([r["model"] for r in resp["responses"]])
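
Blend and Judge follow the same call pattern. The blend() and judge() method names and return shapes below mirror compare() but are assumptions; check the SDK reference before relying on them.

# ASSUMPTION: blend()/judge() signatures mirror compare(); verify in the docs.
blended = client.blend(
    models=["gpt-5.2", "claude-sonnet-4.5"],
    messages=[{"role": "user", "content": "Explain eventual consistency."}],
)
print(blended["content"])  # assumed: one synthesized answer

scored = client.judge(
    models=["gpt-5.2", "claude-sonnet-4.5"],
    messages=[{"role": "user", "content": "Explain eventual consistency."}],
)
print(scored["scores"])  # assumed: per-model quality scores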

Common questions

Is LLMWise a drop-in replacement for the OpenAI SDK?
LLMWise uses the same familiar role/content message format, but it’s a native API with its own endpoints and streaming event shape. For the simplest integration, use the official LLMWise SDKs (Python/TypeScript) or call the REST API directly.
Do I need to rewrite my prompts?
Usually no. Your prompts and messages migrate cleanly because LLMWise uses the same role/content shape you already have. You typically just swap the client call to LLMWise and can then start experimenting with Auto routing and multi-model workflows.
How do I stream responses?
Use chat_stream (Python) or chatStream (TypeScript). You’ll receive SSE JSON events with delta text, and a final done event that includes credits_charged and credits_remaining.
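A minimal Python sketch of that loop, assuming events arrive as dicts (the field names follow this answer's description; verify them against the SDK):

for ev in client.chat_stream(
    model="claude-sonnet-4.5",
    messages=[{"role": "user", "content": "Stream a haiku about failover."}],
):
    if ev.get("delta"):  # token delta text
        print(ev["delta"], end="", flush=True)
    if ev.get("event") == "done":  # final event carries credit info
        print(f"\ncredits_remaining: {ev.get('credits_remaining')}")
        break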
How do I keep production traffic reliable during outages?
Use Mesh routing (fallback chains) to retry transient failures on backup models. Auto mode can also add an implicit fallback chain based on your optimization policy settings.

One wallet, enterprise AI controls built in

- Chat, Compare, Blend, Judge, Mesh
- Policy routing + replay lab
- Failover without extra subscriptions