Use the official LLMWise Rust crate to call multiple AI models with one API key. Async/await on tokio, zero-copy SSE streaming, serde-typed models, and built-in failover.
Credit-based pay-per-use with token-settled billing. No monthly subscription. Paid credits never expire.
Replace multiple AI subscriptions with one wallet that includes routing, failover, and optimization.
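Failover can be pictured as walking an ordered list of models until one call succeeds. The sketch below illustrates that idea in plain Rust; `call_model`, its simulated outage, and the model order are illustrative assumptions, not the crate's internals.

```rust
// Illustrative sketch of ordered failover; `call_model` simulates a provider
// call and is NOT part of the llmwise API.
fn call_model(model: &str, prompt: &str) -> Result<String, String> {
    // Simulate an outage on the first-choice model.
    if model == "gpt-5.2" {
        Err(format!("{model}: upstream timeout"))
    } else {
        Ok(format!("[{model}] answer to: {prompt}"))
    }
}

fn chat_with_failover(models: &[&str], prompt: &str) -> Result<String, String> {
    let mut last_err = String::from("no models configured");
    for model in models {
        match call_model(model, prompt) {
            Ok(answer) => return Ok(answer), // first healthy model wins
            Err(e) => last_err = e,          // remember the failure, try the next
        }
    }
    Err(last_err)
}

fn main() {
    let answer = chat_with_failover(&["gpt-5.2", "claude-sonnet-4.5"], "hi")
        .expect("all models failed");
    println!("{answer}"); // the second model answers after the first fails
}
```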
# Cargo.toml
[dependencies]
llmwise = "0.1"
tokio = { version = "1", features = ["full"] }
serde_json = "1"

# Or: cargo add llmwise tokio serde_json
use llmwise::{Client, ChatRequest, Message};
use tokio_stream::StreamExt;
#[tokio::main]
async fn main() -> Result<(), llmwise::Error> {
    let client = Client::new(
        std::env::var("LLMWISE_API_KEY")
            .expect("LLMWISE_API_KEY must be set"),
    );

    // Basic chat request
    let response = client
        .chat(&ChatRequest {
            model: "auto".into(),
            messages: vec![
                Message::user("Explain ownership and borrowing in Rust."),
            ],
            max_tokens: Some(512),
            ..Default::default()
        })
        .await?;
    println!("{}", response.content);

    // Streaming chat with an async Stream
    let mut stream = client
        .chat_stream(&ChatRequest {
            model: "claude-sonnet-4.5".into(),
            messages: vec![
                Message::user("Write an async TCP echo server in Rust."),
            ],
            stream: Some(true),
            ..Default::default()
        })
        .await?;

    while let Some(event) = stream.next().await {
        let event = event?;
        if let Some(delta) = &event.delta {
            print!("{delta}");
        }
        if event.done {
            println!("\nCredits charged: {}", event.credits_charged.unwrap_or(0));
            break;
        }
    }

    Ok(())
}

Everything you need to integrate LLMWise's multi-model API into your Rust project.
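One note on the quickstart's streaming loop: it prints deltas as they arrive but discards them afterwards. If you also need the assembled response, fold the deltas into a String. A standalone sketch of that accumulation (simulated deltas, no llmwise types; with the real stream you would push each `event.delta` the same way):

```rust
// Stand-in for the while-let streaming loop: print each delta live and
// keep the concatenated text for later use.
fn collect_deltas<'a, I: IntoIterator<Item = &'a str>>(deltas: I) -> String {
    let mut full = String::new();
    for delta in deltas {
        print!("{delta}");    // live output, as in the streaming loop
        full.push_str(delta); // keep the assembled response as well
    }
    println!();
    full
}

fn main() {
    let full = collect_deltas(["Hello, ", "world", "!"]);
    assert_eq!(full, "Hello, world!");
}
```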
Add the official llmwise crate along with tokio for the async runtime. Requires Rust 1.75+ (async fn in traits stabilized).
# Cargo.toml
[dependencies]
llmwise = "0.1"
tokio = { version = "1", features = ["full"] }

Store your API key as an environment variable. The client reads it at construction time.
export LLMWISE_API_KEY="your_api_key_here"
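In a binary you may prefer a clear error message over the panic that `expect` produces when the variable is missing. A small std-only sketch (the `describe_key` helper is illustrative, not part of the crate):

```rust
use std::env;

// Turn the Result from env::var into a friendlier, user-facing error.
// `describe_key` is a hypothetical helper, not a llmwise API.
fn describe_key(lookup: Result<String, env::VarError>) -> Result<String, String> {
    lookup.map_err(|_| "LLMWISE_API_KEY is not set; export it first".to_string())
}

fn main() {
    match describe_key(env::var("LLMWISE_API_KEY")) {
        Ok(key) => println!("key loaded ({} chars)", key.len()),
        Err(msg) => eprintln!("{msg}"),
    }
}
```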
Build the client with your API key. It uses reqwest internally with connection pooling enabled by default.
use llmwise::Client;
let client = Client::new(
    std::env::var("LLMWISE_API_KEY").expect("LLMWISE_API_KEY required"),
);

Call client.chat() with a ChatRequest struct. Optional parameters are Option fields, so set only the ones you need and fill the rest with ..Default::default().
use llmwise::{ChatRequest, Message};
let response = client
    .chat(&ChatRequest {
        model: "gemini-3-flash".into(),
        messages: vec![
            Message::user("What are Rust's zero-cost abstractions?"),
        ],
        max_tokens: Some(512),
        ..Default::default()
    })
    .await?;
println!("{}", response.content);

Use client.chat_stream() to get an async Stream of SSE events, and consume it with StreamExt::next() in a while-let loop.
use tokio_stream::StreamExt;
let mut stream = client
    .chat_stream(&ChatRequest {
        model: "deepseek-v3".into(),
        messages: vec![Message::user("Implement a lock-free queue in Rust.")],
        stream: Some(true),
        ..Default::default()
    })
    .await?;

while let Some(event) = stream.next().await {
    let event = event?;
    if let Some(delta) = &event.delta {
        print!("{delta}");
    }
    if event.done { break; }
}

Send the same prompt to multiple models and compare their outputs. Responses are returned as a Vec of typed CompareResponse structs.
use llmwise::CompareRequest;
let response = client
    .compare(&CompareRequest {
        models: vec![
            "gpt-5.2".into(),
            "claude-sonnet-4.5".into(),
            "gemini-3-flash".into(),
        ],
        messages: vec![
            Message::user("Explain the borrow checker to a C++ developer."),
        ],
        ..Default::default()
    })
    .await?;

for r in &response.responses {
    println!("[{}]: {}\n", r.model, r.content);
}
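Once compare() returns per-model responses, you can also rank them client-side. A standalone sketch (the `CompareResponse` struct here is a stand-in with the two fields shown above, and "longest answer" is just an example rule):

```rust
// Stand-in for the crate's CompareResponse, limited to the fields the
// docs show; the real type may have more.
struct CompareResponse {
    model: String,
    content: String,
}

// Example client-side ranking rule: pick the longest answer.
fn longest(responses: &[CompareResponse]) -> Option<&CompareResponse> {
    responses.iter().max_by_key(|r| r.content.len())
}

fn main() {
    let responses = vec![
        CompareResponse { model: "gpt-5.2".into(), content: "short".into() },
        CompareResponse { model: "claude-sonnet-4.5".into(), content: "a longer answer".into() },
    ];
    let best = longest(&responses).expect("no responses");
    println!("[{}]: {}", best.model, best.content);
}
```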