Use the official LLMWise TypeScript SDK to access multiple models with one API key. Typed requests, streaming iterators, and failover routing.
You only pay credits per request. No monthly subscription. Paid credits never expire.
Replace multiple AI subscriptions with one wallet that includes routing, failover, and optimization.
npm install llmwise
// npm i llmwise
// Repository: https://github.com/LLMWise-AI/llmwise-ts-sdk
import { LLMWise } from "llmwise";
const client = new LLMWise(process.env.LLMWISE_API_KEY!);
// Basic chat (non-stream)
const resp = await client.chat({
  model: "auto",
  messages: [{ role: "user", content: "Explain TypeScript generics with examples." }],
  max_tokens: 512,
});
console.log(resp.content);
// Streaming chat (SSE JSON events)
for await (const ev of client.chatStream({
  model: "claude-sonnet-4.5",
  messages: [{ role: "user", content: "Write a Node.js Express middleware for rate limiting." }],
})) {
  if (ev.delta) process.stdout.write(ev.delta);
  if (ev.event === "done") break;
}

Everything you need to integrate LLMWise's multi-model API into your TypeScript project.
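The streaming loop above prints deltas as they arrive. When you instead want the whole completion in one string, the events can be folded into an accumulator. A minimal sketch, assuming only the { delta?, event? } event shape shown above; collectStream and fakeStream are illustrative helpers, not SDK exports, and fakeStream stands in for client.chatStream(...):

```typescript
// Minimal event shape matching the streaming examples in this document.
type StreamEvent = { delta?: string; event?: string };

// Fold incremental deltas into a single string, stopping on the "done" event.
async function collectStream(stream: AsyncIterable<StreamEvent>): Promise<string> {
  let text = "";
  for await (const ev of stream) {
    if (ev.delta) text += ev.delta;   // append each incremental chunk
    if (ev.event === "done") break;   // server signals end of stream
  }
  return text;
}

// Stand-in for client.chatStream({...}) so the sketch is self-contained.
async function* fakeStream(): AsyncGenerator<StreamEvent> {
  yield { delta: "Hello, " };
  yield { delta: "world." };
  yield { event: "done" };
}

collectStream(fakeStream()).then((text) => console.log(text)); // prints "Hello, world."
```

With the real SDK you would pass the iterator returned by client.chatStream({...}) instead of fakeStream().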
Install the official llmwise package for Node.js/TypeScript. Repo: https://github.com/LLMWise-AI/llmwise-ts-sdk
npm install llmwise
Create a client instance using your API key. The base URL defaults to https://llmwise.ai/api/v1.
import { LLMWise } from "llmwise";
const client = new LLMWise(process.env.LLMWISE_API_KEY!);

Call client.chat() with a model ID and OpenAI-style messages.
const response = await client.chat({
  model: "gemini-3-flash",
  messages: [
    { role: "system", content: "You are a senior TypeScript developer." },
    { role: "user", content: "How do I implement the builder pattern in TypeScript?" },
  ],
});

console.log(response.content);

Use client.chatStream() to receive SSE JSON events. Render ev.delta and stop on the done event.
for await (const ev of client.chatStream({
  model: "deepseek-v3",
  messages: [{ role: "user", content: "Write a Redis caching utility in TypeScript." }],
})) {
  if (ev.delta) process.stdout.write(ev.delta);
  if (ev.event === "done") break;
}

Pass a different model ID (or model="auto") to route requests. You can also enable failover routing with a fallback chain.
type ModelId = "gpt-5.2" | "claude-sonnet-4.5" | "gemini-3-flash" | "deepseek-v3";
async function ask(prompt: string, model: ModelId = "gpt-5.2") {
  const res = await client.chat({
    model,
    messages: [{ role: "user", content: prompt }],
  });
  return res.content;
}
// Same code, different model
const gptAnswer = await ask("Explain closures in JavaScript.", "gpt-5.2");
const claudeAnswer = await ask("Explain closures in JavaScript.", "claude-sonnet-4.5");
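The SDK's own failover API is not shown in the snippets above, but a fallback chain is straightforward to sketch on top of client.chat(): try each model in order and return the first success. Everything below is illustrative — chatWithFallback and ChatFn are hypothetical helpers, not SDK exports; only the { model, messages } request shape and { content } response shape come from the examples above:

```typescript
// Request/response shapes mirroring client.chat() as used in this document.
type Message = { role: string; content: string };
type ChatFn = (req: { model: string; messages: Message[] }) => Promise<{ content: string }>;

// Try each model in order; return the first successful response.
async function chatWithFallback(
  chat: ChatFn,
  models: string[],
  messages: Message[],
): Promise<{ model: string; content: string }> {
  let lastError: unknown;
  for (const model of models) {
    try {
      const res = await chat({ model, messages });
      return { model, content: res.content }; // first model that answers wins
    } catch (err) {
      lastError = err; // remember the failure and fall through to the next model
    }
  }
  throw lastError ?? new Error("all models in the fallback chain failed");
}

// Usage with the SDK would look like:
// const res = await chatWithFallback(
//   (req) => client.chat(req),
//   ["gpt-5.2", "claude-sonnet-4.5", "deepseek-v3"],
//   [{ role: "user", content: "Explain closures in JavaScript." }],
// );
```

The chain preserves order, so put your preferred model first and cheaper or more available models later.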