
Integrate Multiple LLM APIs in Go with One SDK

Use the official LLMWise Go SDK to call multiple AI models with one API key. Idiomatic Go patterns, streaming SSE, context cancellation, and built-in failover.

Credit-based pay-per-use with token-settled billing. No monthly subscription. Paid credits never expire.

Replace multiple AI subscriptions with one wallet that includes routing, failover, and optimization.

Why teams start here
No monthly subscription
Pay-as-you-go credits
Start with trial credits, then buy only what you consume.
Failover safety
Production-ready routing
Auto fallback across providers when latency, quality, or reliability changes.
Data control
Your policy, your choice
BYOK and zero-retention mode keep training and storage scope explicit.
Single API experience
One key, multi-provider access
Use Chat/Compare/Blend/Judge/Failover from one dashboard.
Quick start
go get github.com/llmwise-ai/llmwise-go

Full example

Go
// go get github.com/llmwise-ai/llmwise-go
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"time"

	"github.com/llmwise-ai/llmwise-go"
)

func main() {
	client := llmwise.NewClient(os.Getenv("LLMWISE_API_KEY"))

	// Basic chat request
	resp, err := client.Chat(context.Background(), &llmwise.ChatRequest{
		Model: "auto",
		Messages: []llmwise.Message{
			{Role: "user", Content: "Explain goroutines vs OS threads."},
		},
		MaxTokens: 512,
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Content)

	// Streaming chat with context cancellation
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	stream, err := client.ChatStream(ctx, &llmwise.ChatRequest{
		Model:    "claude-sonnet-4.5",
		Messages: []llmwise.Message{{Role: "user", Content: "Write a concurrent web crawler in Go."}},
		Stream:   true,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer stream.Close()

	for stream.Next() {
		ev := stream.Event()
		if ev.Delta != "" {
			fmt.Print(ev.Delta)
		}
		if ev.Done {
			fmt.Printf("\n\nCredits charged: %d\n", ev.CreditsCharged)
			break
		}
	}
	if err := stream.Err(); err != nil {
		log.Fatal(err)
	}
}
Evidence snapshot

Go integration overview

Everything you need to integrate LLMWise's multi-model API into your Go project.

Setup steps: 6 to first API call
Features: 8 capabilities included
Models available: 9 via single endpoint
Starter credits: 20 free credits that never expire

What you get

+Official LLMWise Go SDK with idiomatic patterns (interfaces, error wrapping)
+Streaming SSE via iterator-style API (Next/Event/Err)
+Context cancellation and timeout support via context.Context
+Mesh failover routing with automatic retry on 429/5xx/timeouts
+Compare / Blend / Judge multi-model orchestration modes
+Typed request and response structs with JSON tags
+Connection pooling via http.Client with configurable transport
+Structured error wrapping with sentinel errors for easy handling

Step-by-step integration

1. Install the LLMWise Go SDK

Add the official LLMWise Go module to your project. Requires Go 1.21 or later.

go get github.com/llmwise-ai/llmwise-go
2. Set your LLMWise API key

Store your API key as an environment variable. The SDK reads it at client creation time.

export LLMWISE_API_KEY="your_api_key_here"
3. Create a client

Instantiate the LLMWise client. It uses a shared http.Client with connection pooling by default.

import "github.com/llmwise-ai/llmwise-go"

client := llmwise.NewClient(os.Getenv("LLMWISE_API_KEY"))
4. Send a basic chat request

Call client.Chat with a context and a typed ChatRequest struct. The response includes content, token counts, and cost metadata.

resp, err := client.Chat(context.Background(), &llmwise.ChatRequest{
	Model: "gemini-3-flash",
	Messages: []llmwise.Message{
		{Role: "user", Content: "What is the difference between channels and mutexes in Go?"},
	},
})
if err != nil {
	log.Fatal(err)
}
fmt.Println(resp.Content)
5. Stream tokens in real time

Use client.ChatStream to receive SSE events. The iterator pattern (Next/Event/Err) is idiomatic Go. Pass a context with a timeout for cancellation.

ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()

stream, err := client.ChatStream(ctx, &llmwise.ChatRequest{
	Model:    "deepseek-v3",
	Messages: []llmwise.Message{{Role: "user", Content: "Implement a rate limiter in Go."}},
	Stream:   true,
})
if err != nil {
	log.Fatal(err)
}
defer stream.Close()

for stream.Next() {
	if ev := stream.Event(); ev.Delta != "" {
		fmt.Print(ev.Delta)
	}
}
6. Compare models side by side with Compare mode

Use the Compare endpoint to send the same prompt to multiple models and compare their outputs side by side.

resp, err := client.Compare(context.Background(), &llmwise.CompareRequest{
	Models: []string{"gpt-5.2", "claude-sonnet-4.5", "gemini-3-flash"},
	Messages: []llmwise.Message{
		{Role: "user", Content: "Explain the CAP theorem."},
	},
})
if err != nil {
	log.Fatal(err)
}
for _, r := range resp.Responses {
	fmt.Printf("[%s]: %s\n\n", r.Model, r.Content)
}

Common questions

Does the Go SDK support connection pooling?
Yes. The SDK uses Go's standard http.Client which provides connection pooling via http.Transport by default. You can pass a custom http.Client with tuned transport settings for high-throughput workloads.
How do I handle errors idiomatically in Go?
The SDK returns wrapped errors using Go's errors package. Use errors.Is() to check for sentinel errors like llmwise.ErrInsufficientCredits (402) or llmwise.ErrUnauthorized (401). All errors include the HTTP status code and response body for debugging.
Can I use LLMWise in a Go web server?
Yes. Create a single client instance at startup and share it across HTTP handlers. The client is safe for concurrent use. Pass the request context to enable per-request cancellation and timeouts.
What Go versions are supported?
The SDK requires Go 1.21 or later. It uses standard library packages (net/http, encoding/json, context) with no CGO dependencies.

One wallet, enterprise AI controls built in

Chat, Compare, Blend, Judge, Mesh
Policy routing + replay lab
Failover without extra subscriptions