
Use LangChain with 9 LLM Providers Through One Gateway

Use LLMWise inside LangChain via a tiny Runnable wrapper. Switch models, enable Auto routing, and add failover without coupling your app to a single provider.

You only pay credits per request. No monthly subscription. Paid credits never expire.

Replace multiple AI subscriptions with one wallet that includes routing, failover, and optimization.

Why teams start here first

No monthly subscription: pay-as-you-go credits. Start with trial credits, then buy only what you consume.
Failover safety: production-ready routing. Auto fallback across providers when latency, quality, or reliability changes.
Data control: your policy, your choice. BYOK and zero-retention mode keep training and storage scope explicit.
Single API experience: one key, multi-provider access. Use Chat/Compare/Blend/Judge/Failover from one dashboard.
Quick start
pip install llmwise langchain-core

Full example

import os
from llmwise import LLMWise
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableLambda

client = LLMWise(os.environ["LLMWISE_API_KEY"])

prompt = PromptTemplate.from_template(
    "You are a concise technical writer.\n\nQuestion: {question}"
)

def call_llmwise(text) -> str:
    # PromptTemplate emits a PromptValue, so convert it to plain text before sending.
    if hasattr(text, "to_string"):
        text = text.to_string()
    resp = client.chat(
        model="auto",
        messages=[{"role": "user", "content": text}],
        max_tokens=512,
    )
    return resp["content"]

llm = RunnableLambda(call_llmwise)
chain = prompt | llm

print(chain.invoke({"question": "What are the SOLID principles in software engineering?"}))

# Swap models for A/B tests by changing one string:
# resp = client.chat(model="claude-sonnet-4.5", messages=[...])

Evidence snapshot

LangChain integration overview

Everything you need to integrate LLMWise's multi-model API into your LangChain project.

Setup steps: 5 to first API call
Features: 8 capabilities included
Models available: 9 via a single endpoint
Starter credits: 40 (7-day trial) · paid credits never expire

What you get

+ Use LLMWise in LangChain via a small Runnable wrapper
+ OpenAI-style messages format (role + content)
+ Auto routing (balanced/cost/latency/reliability goals) without provider lock-in; see the routing sketch after this list
+ Mesh failover routing with fallback chains for 429/5xx/timeouts
+ Compare / Blend / Judge modes available for eval and orchestration workflows
+ Streaming supported (SSE) when you need token-level output
+ Works cleanly with RAG pipelines and prompt templates
+ Plays nicely with LangSmith or your own tracing (you own the wrapper)
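
Auto routing goals can be set per request. The following is a minimal sketch: the routing={"strategy": "cost"} shape mirrors the failover example in step 5 below, but using that dict for a routing goal is an assumption here, so confirm the exact parameter name against the LLMWise SDK reference.

import os
from llmwise import LLMWise

client = LLMWise(os.environ["LLMWISE_API_KEY"])

# Assumed parameter shape for goal-based auto routing; it mirrors the routing
# dict used for failover in step 5 rather than a documented option on this page.
resp = client.chat(
    model="auto",
    routing={"strategy": "cost"},
    messages=[{"role": "user", "content": "Summarize the SOLID principles in one sentence."}],
)
print(resp["content"])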

Step-by-step integration

1. Install dependencies

Install the official LLMWise SDK plus LangChain core. You’ll wrap LLMWise as a Runnable so the rest of your chain stays the same.

pip install llmwise langchain-core
2. Set your API key

Store your API key in an environment variable so it stays out of source code.

export LLMWISE_API_KEY="your_api_key_here"
3. Wrap LLMWise as a Runnable

Use a small RunnableLambda that accepts text and returns the LLMWise response content.

import os
from llmwise import LLMWise
from langchain_core.runnables import RunnableLambda

client = LLMWise(os.environ["LLMWISE_API_KEY"])

def call_llmwise(text) -> str:
    # Convert a PromptValue to plain text when a prompt template feeds this wrapper.
    if hasattr(text, "to_string"):
        text = text.to_string()
    resp = client.chat(model="auto", messages=[{"role": "user", "content": text}])
    return resp["content"]

llm = RunnableLambda(call_llmwise)
4. Use it in chains and agents

Compose your prompt template and pipe it into the runnable. Swap models or routing in one place (the wrapper).

from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Answer concisely: {question}")
chain = prompt | llm
print(chain.invoke({"question": "Explain dependency injection."}))
5. Add reliability with Mesh failover routing

For production traffic, add a fallback chain so requests can retry on another model when a provider is saturated or failing.

resp = client.chat(
    model="gpt-5.2",
    routing={"strategy": "rate-limit", "fallback": ["claude-sonnet-4.5", "gemini-3-flash"]},
    messages=[{"role": "user", "content": "Summarize this incident report."}],
)
print(resp["content"])
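
To give an existing chain the same protection, the routing dict can live inside the Runnable wrapper from step 3, so downstream code does not change. A minimal sketch, reusing the model names and routing shape from the example above (treat them as placeholders for your own setup):

def call_llmwise_with_failover(text) -> str:
    # Convert a PromptValue to plain text when a prompt template feeds this wrapper.
    if hasattr(text, "to_string"):
        text = text.to_string()
    resp = client.chat(
        model="gpt-5.2",
        routing={"strategy": "rate-limit", "fallback": ["claude-sonnet-4.5", "gemini-3-flash"]},
        messages=[{"role": "user", "content": text}],
    )
    return resp["content"]

# The rest of the chain is unchanged; requests now fail over automatically.
llm = RunnableLambda(call_llmwise_with_failover)
chain = prompt | llm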

Common questions

Can I use LLMWise with LangChain agents and tools?
Yes. Keep tool execution in your app, and use LLMWise as the reasoning engine. Your wrapper can return plain strings (simple) or structured JSON if you want stricter outputs.
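
A minimal sketch of a stricter-output wrapper; prompting for JSON and parsing it is generic LangChain practice, not an LLMWise-specific feature:

import json
import os

from llmwise import LLMWise
from langchain_core.runnables import RunnableLambda

client = LLMWise(os.environ["LLMWISE_API_KEY"])

def call_llmwise_json(text: str) -> dict:
    resp = client.chat(
        model="auto",
        messages=[{
            "role": "user",
            "content": f"{text}\n\nRespond with a single JSON object only.",
        }],
    )
    # Fall back to wrapping the raw text if the model does not return valid JSON.
    try:
        return json.loads(resp["content"])
    except json.JSONDecodeError:
        return {"text": resp["content"]}

structured_llm = RunnableLambda(call_llmwise_json)
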
Does LLMWise support LangChain's async methods?
Yes. Use AsyncLLMWise (Python SDK) and wrap it in an async Runnable for end-to-end async chains.
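
A minimal async sketch. AsyncLLMWise is named above; the assumption is that it exposes an awaitable chat() with the same arguments as the sync client, so verify the signature against the SDK:

import os

from llmwise import AsyncLLMWise
from langchain_core.runnables import RunnableLambda

aclient = AsyncLLMWise(os.environ["LLMWISE_API_KEY"])

async def acall_llmwise(text) -> str:
    # Convert a PromptValue to plain text when a prompt template feeds this wrapper.
    if hasattr(text, "to_string"):
        text = text.to_string()
    # Assumed: AsyncLLMWise.chat mirrors the sync client but is awaitable.
    resp = await aclient.chat(
        model="auto",
        messages=[{"role": "user", "content": text}],
    )
    return resp["content"]

# A coroutine function makes the Runnable async-only; call the chain with ainvoke.
allm = RunnableLambda(acall_llmwise)
# result = await (prompt | allm).ainvoke({"question": "Explain dependency injection."})
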
How do I use LLMWise with LangChain retrieval chains (RAG)?
Build your RAG chain normally (retriever -> prompt). Then have your final Runnable call LLMWise with the combined prompt. You can A/B test models by changing one string, without touching your retriever.
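
A minimal sketch of that shape, with a stand-in retriever so it runs on its own; a real vector-store retriever returns Documents, so format_docs would join doc.page_content instead:

from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableLambda, RunnablePassthrough

# Stand-in "retriever" that returns canned context strings for any query.
retriever = RunnableLambda(lambda q: ["Rollbacks are triggered automatically on failed health checks."])

def format_docs(docs):
    return "\n\n".join(docs)

rag_prompt = PromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
)

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | rag_prompt
    | llm  # the LLMWise Runnable wrapper from step 3
)

print(rag_chain.invoke("How are rollbacks handled?"))
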
Can I use LangSmith tracing with LLMWise?
Yes. LangSmith traces LangChain operations at the chain level. Since the LLM call lives in your wrapper, you can also attach extra metadata (resolved model, credits charged) for richer traces.
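
A minimal sketch of enriching a trace from inside the wrapper. Chain-level tracing is enabled with the usual LangSmith environment variables; the "model" and "credits" fields read below are assumptions about the LLMWise response shape, so verify the keys against the SDK:

import os

from langsmith import traceable
from llmwise import LLMWise
from langchain_core.runnables import RunnableLambda

client = LLMWise(os.environ["LLMWISE_API_KEY"])

@traceable(run_type="llm", name="llmwise_chat")
def call_llmwise_traced(text: str) -> dict:
    resp = client.chat(
        model="auto",
        messages=[{"role": "user", "content": text}],
    )
    # Returning these fields records them in the LangSmith trace for this run;
    # the "model" and "credits" keys are assumed, not documented on this page.
    return {
        "content": resp["content"],
        "resolved_model": resp.get("model"),
        "credits_charged": resp.get("credits"),
    }

# Chains still receive a plain string; the metadata stays in the trace output.
llm = RunnableLambda(lambda text: call_llmwise_traced(text)["content"])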

One wallet, enterprise AI controls built in


Chat, Compare, Blend, Judge, Mesh · Policy routing + replay lab · Failover without extra subscriptions