
Integrate Multiple LLM APIs in TypeScript with Full Type Safety

Use the official LLMWise TypeScript SDK to access multiple models with one API key. Typed requests, streaming iterators, and failover routing.

You only pay credits per request. No monthly subscription. Paid credits never expire.

Replace multiple AI subscriptions with one wallet that includes routing, failover, and optimization.

Why teams start here first
+ No monthly subscription (pay-as-you-go credits): start with trial credits, then buy only what you consume.
+ Failover safety (production-ready routing): auto fallback across providers when latency, quality, or reliability changes.
+ Data control (your policy, your choice): BYOK and zero-retention mode keep training and storage scope explicit.
+ Single API experience (one key, multi-provider access): use Chat/Compare/Blend/Judge/Failover from one dashboard.
Quick start
npm install llmwise

Full example

TypeScript
// npm i llmwise
// Repository: https://github.com/LLMWise-AI/llmwise-ts-sdk
import { LLMWise } from "llmwise";

const client = new LLMWise(process.env.LLMWISE_API_KEY!);

// Basic chat (non-stream)
const resp = await client.chat({
  model: "auto",
  messages: [{ role: "user", content: "Explain TypeScript generics with examples." }],
  max_tokens: 512,
});
console.log(resp.content);

// Streaming chat (SSE JSON events)
for await (const ev of client.chatStream({
  model: "claude-sonnet-4.5",
  messages: [{ role: "user", content: "Write a Node.js Express middleware for rate limiting." }],
})) {
  if (ev.delta) process.stdout.write(ev.delta);
  if (ev.event === "done") break;
}
Evidence snapshot

TypeScript integration overview

Everything you need to integrate LLMWise's multi-model API into your TypeScript project.

+ Setup steps: 5 to first API call
+ Features: 8 capabilities included
+ Models available: 9 via single endpoint
+ Starter credits: 40 (7-day trial; paid credits never expire)

What you get

+ Official LLMWise TypeScript SDK (fetch-based, lightweight)
+ Full TypeScript types for requests and responses
+ Async iterators for streaming with for-await-of loops
+ Works in Node.js 18+ and modern runtimes with fetch
+ Mesh failover routing via fallback chains
+ Compare / Blend / Judge orchestration modes
+ AbortSignal support for cancellation (see the cancellation sketch after this list)
+ Safe-by-default pattern: keep API key server-side
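The feature list mentions AbortSignal support, but the snippets on this page don't show where the signal goes. Below is a minimal cancellation sketch that assumes the request object accepts a signal field; check the SDK's type definitions for the actual option name.

// Cancellation sketch: abort a streaming request after a timeout.
// Assumption: chatStream accepts a `signal` field on the request object.
import { LLMWise } from "llmwise";

const client = new LLMWise(process.env.LLMWISE_API_KEY!);
const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), 15_000);

try {
  for await (const ev of client.chatStream({
    model: "auto",
    messages: [{ role: "user", content: "Summarize this changelog." }],
    signal: controller.signal, // assumed option name
  })) {
    if (ev.delta) process.stdout.write(ev.delta);
    if (ev.event === "done") break;
  }
} catch (err) {
  if (controller.signal.aborted) console.error("Request cancelled");
  else throw err;
} finally {
  clearTimeout(timeout);
}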

Step-by-step integration

1. Install the LLMWise TypeScript SDK

Install the official llmwise package for Node.js/TypeScript. Repo: https://github.com/LLMWise-AI/llmwise-ts-sdk

npm install llmwise
2. Configure the client

Create a client instance using your API key. The base URL defaults to https://llmwise.ai/api/v1.

import { LLMWise } from "llmwise";

const client = new LLMWise(process.env.LLMWISE_API_KEY!);
3. Make a typed chat request

Call client.chat() with a model ID and OpenAI-style messages.

const response = await client.chat({
  model: "gemini-3-flash",
  messages: [
    { role: "system", content: "You are a senior TypeScript developer." },
    { role: "user", content: "How do I implement the builder pattern in TypeScript?" },
  ],
});

console.log(response.content);
4. Stream tokens with async iterators

Use client.chatStream() to receive SSE JSON events. Render ev.delta and stop on the done event.

for await (const ev of client.chatStream({
  model: "deepseek-v3",
  messages: [{ role: "user", content: "Write a Redis caching utility in TypeScript." }],
})) {
  if (ev.delta) process.stdout.write(ev.delta);
  if (ev.event === "done") break;
}
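When you need the complete completion as a string as well as live output, accumulate the deltas while iterating. This sketch uses only the event shape shown above:

// Collect streamed deltas into one string while echoing them live.
async function streamToString(prompt: string): Promise<string> {
  let full = "";
  for await (const ev of client.chatStream({
    model: "deepseek-v3",
    messages: [{ role: "user", content: prompt }],
  })) {
    if (ev.delta) {
      full += ev.delta;
      process.stdout.write(ev.delta);
    }
    if (ev.event === "done") break;
  }
  return full;
}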
5. Switch models dynamically

Pass a different model ID (or model="auto") to route requests. You can also enable failover routing with a fallback chain.

type ModelId = "gpt-5.2" | "claude-sonnet-4.5" | "gemini-3-flash" | "deepseek-v3";

async function ask(prompt: string, model: ModelId = "gpt-5.2") {
  const res = await client.chat({
    model,
    messages: [{ role: "user", content: prompt }],
  });
  return res.content;
}

// Same code, different model
const gptAnswer = await ask("Explain closures in JavaScript.", "gpt-5.2");
const claudeAnswer = await ask("Explain closures in JavaScript.", "claude-sonnet-4.5");
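The built-in mesh failover option isn't shown in these snippets, so as an illustration of the fallback-chain idea, here is a purely client-side chain built only on the documented client.chat() call. This is not LLMWise's native failover routing; see the SDK repository for that option.

// Client-side fallback chain: try each model in order until one succeeds.
// Illustrative only; prefer the built-in mesh failover for production.
const FALLBACK_CHAIN: ModelId[] = ["gpt-5.2", "claude-sonnet-4.5", "gemini-3-flash"];

async function askWithFallback(prompt: string): Promise<string> {
  let lastError: unknown;
  for (const model of FALLBACK_CHAIN) {
    try {
      const res = await client.chat({
        model,
        messages: [{ role: "user", content: prompt }],
      });
      return res.content;
    } catch (err) {
      lastError = err; // move on to the next model in the chain
    }
  }
  throw lastError;
}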

Common questions

Does the LLMWise TypeScript SDK work in edge runtimes like Cloudflare Workers?
The SDK is fetch-based and works in environments that provide the Fetch API. In production, keep your API key server-side (workers, route handlers, or backend services) and proxy requests from the browser.
Should I call LLMWise directly from the browser?
Generally no. Treat your LLMWise API key like a secret. Use a backend route (Next.js Route Handler, serverless function, or API server) to call LLMWise and stream results to the browser.
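A minimal sketch of that pattern with Express (the route path, request shape, and model choice are illustrative, not from the LLMWise docs): the key stays on the server, and the browser simply POSTs a prompt and reads the streamed text.

// Express proxy: the browser POSTs a prompt, the server calls LLMWise with the
// secret key and streams deltas back as chunked plain text.
import express from "express";
import { LLMWise } from "llmwise";

const app = express();
app.use(express.json());

const client = new LLMWise(process.env.LLMWISE_API_KEY!);

app.post("/api/chat", async (req, res) => {
  res.setHeader("Content-Type", "text/plain; charset=utf-8");
  try {
    for await (const ev of client.chatStream({
      model: "auto",
      messages: [{ role: "user", content: String(req.body.prompt ?? "") }],
    })) {
      if (ev.delta) res.write(ev.delta);
      if (ev.event === "done") break;
    }
  } finally {
    res.end();
  }
});

app.listen(3000);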
How do I add type safety for model IDs in TypeScript?
Define a union type like type ModelId = 'gpt-5.2' | 'claude-sonnet-4.5' | 'gemini-3-flash' and use it as the model parameter type. This gives you autocomplete and compile-time checks when switching between models.
Is there a performance difference between LLMWise and calling providers directly?
LLMWise adds small gateway overhead, but you gain routing, failover, multi-model orchestration (Compare/Blend/Judge), and unified usage/billing observability. For most apps, the reliability and workflow wins outweigh the extra hop.

One wallet, enterprise AI controls built in


Chat, Compare, Blend, Judge, Mesh · Policy routing + replay lab · Failover without extra subscriptions