
Integrate Multiple LLM APIs in PHP with One SDK

Use the official LLMWise PHP SDK to call multiple AI models with one API key. Includes a Laravel service provider, streaming generators, PSR-18 compatibility, and built-in failover.

Credit-based pay-per-use with token-settled billing. No monthly subscription. Paid credits never expire.

Replace multiple AI subscriptions with one wallet that includes routing, failover, and optimization.

Why teams start here first

No monthly subscription: pay-as-you-go credits. Start with trial credits, then buy only what you consume.

Failover safety: production-ready routing. Automatic fallback across providers when latency, quality, or reliability changes.

Data control: your policy, your choice. BYOK and zero-retention mode keep training and storage scope explicit.

Single API experience: one key, multi-provider access. Use Chat, Compare, Blend, Judge, and Failover from one dashboard.
Quick start
composer require llmwise/llmwise-php

Full example

PHP
<?php
// composer require llmwise/llmwise-php
require_once 'vendor/autoload.php';

use LLMWise\Client;
use LLMWise\Message;

$client = new Client(getenv('LLMWISE_API_KEY'));

// Basic chat request
$response = $client->chat([
    'model' => 'auto',
    'messages' => [
        Message::user('Explain dependency injection in PHP.'),
    ],
    'max_tokens' => 512,
]);

echo $response['content'] . PHP_EOL;

// Streaming chat via generator (yields SSE events)
$stream = $client->chatStream([
    'model' => 'claude-sonnet-4.5',
    'messages' => [
        Message::user('Write a PSR-15 middleware for rate limiting.'),
    ],
    'stream' => true,
]);

foreach ($stream as $event) {
    if (!empty($event['delta'])) {
        echo $event['delta'];
    }
    if (!empty($event['done'])) {
        echo PHP_EOL . "Credits charged: {$event['credits_charged']}" . PHP_EOL;
        break;
    }
}
Evidence snapshot

PHP integration overview

Everything you need to integrate LLMWise's multi-model API into your PHP project.

Setup steps: 6 to first API call
Features: 8 capabilities included
Models available: 9 via single endpoint
Starter credits: 20 free credits, never expire

What you get

+ Official LLMWise PHP 8.1+ SDK with clean API
+ Laravel service provider with config publishing and facade support
+ Streaming SSE via PHP generators (yield) for memory-efficient token delivery
+ Mesh failover routing with automatic retry on 429/5xx/timeouts
+ PSR-18 compatible HTTP client (works with Guzzle, Buzz, Symfony HttpClient)
+ Typed response arrays with consistent structure across all endpoints
+ Built-in retry logic with configurable backoff for transient errors
+ Webhook signature verification helper for Stripe and Clerk events
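The retry and failover behavior above can be sketched at client construction time. This is a hypothetical example: the option names (`max_retries`, `backoff_ms`) are assumptions for illustration, not confirmed SDK parameters; check the SDK's published configuration reference for the actual keys.

```php
<?php
require_once 'vendor/autoload.php';

use LLMWise\Client;

// Hypothetical retry configuration: option names are assumptions.
// The idea: retry transient failures (429/5xx/timeouts) with
// exponential backoff before the mesh fails over to another provider.
$client = new Client(getenv('LLMWISE_API_KEY'), [
    'max_retries' => 3,     // retry up to 3 times on transient errors
    'backoff_ms'  => 250,   // 250ms, then 500ms, then 1000ms
]);
```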

Step-by-step integration

1. Install the LLMWise PHP SDK

Install the official package via Composer. Requires PHP 8.1 or later and the JSON extension.

composer require llmwise/llmwise-php
2. Set your LLMWise API key

Store your API key as an environment variable or in your .env file for Laravel.

export LLMWISE_API_KEY="your_api_key_here"

# Or in Laravel .env:
# LLMWISE_API_KEY=your_api_key_here
3. Create a client instance

Instantiate the LLMWise client with your API key. In Laravel, use the published config and resolve from the container instead.

<?php
use LLMWise\Client;

$client = new Client(getenv('LLMWISE_API_KEY'));

// Laravel: resolve from container
// $client = app(Client::class);
4. Send a basic chat request

Call the chat method with a model ID and messages array. The response includes content, token counts, and cost metadata.

use LLMWise\Message;

$response = $client->chat([
    'model' => 'gpt-5.2',
    'messages' => [
        Message::system('You are a senior PHP developer.'),
        Message::user('What are the new features in PHP 8.3?'),
    ],
    'max_tokens' => 512,
]);

echo $response['content'];
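The step above notes that the response also carries token counts and cost metadata. A minimal sketch of reading them, assuming field names (`usage`, `input_tokens`, `output_tokens`, `credits_charged`) that are illustrative guesses at the response shape rather than confirmed keys:

```php
<?php
// Hypothetical response metadata access: the 'usage' sub-array and
// its key names are assumptions, not confirmed SDK fields.
printf(
    "Tokens in/out: %d/%d, credits charged: %s\n",
    $response['usage']['input_tokens'] ?? 0,
    $response['usage']['output_tokens'] ?? 0,
    $response['credits_charged'] ?? 'n/a'
);
```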
5. Stream tokens with a generator

Use chatStream(), which returns a generator. Each yielded event is an associative array containing delta text. This keeps memory usage low even for long responses.

$stream = $client->chatStream([
    'model' => 'gemini-3-flash',
    'messages' => [
        Message::user('Write a Laravel Eloquent query scope for soft-deleted records.'),
    ],
    'stream' => true,
]);

foreach ($stream as $event) {
    if (!empty($event['delta'])) {
        echo $event['delta'];
        ob_flush();
        flush();
    }
}
6. Use Compare mode for multi-model evaluation

Send the same prompt to multiple models and receive their responses side by side. Useful for quality evaluation and model selection.

$response = $client->compare([
    'models' => ['gpt-5.2', 'claude-sonnet-4.5', 'gemini-3-flash'],
    'messages' => [
        Message::user('Explain the repository pattern in PHP.'),
    ],
]);

foreach ($response['responses'] as $r) {
    echo "[{$r['model']}]: {$r['content']}\n\n";
}

Common questions

Does the PHP SDK work with Laravel?
Yes. The SDK includes a Laravel service provider that registers the LLMWise client in the container. Publish the config with artisan vendor:publish, set your API key in .env, and inject the client via dependency injection or the facade.
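The Laravel wiring described above can be sketched as follows. This is a hedged example: the service-provider class path in the artisan command and the controller shape are assumptions for illustration, not confirmed names from the SDK.

```php
<?php
// Publish the config first (shell command; provider class name assumed):
//   php artisan vendor:publish --provider="LLMWise\Laravel\ServiceProvider"
// Then set LLMWISE_API_KEY in your .env file.

use LLMWise\Client;
use LLMWise\Message;

class ChatController
{
    // Constructor injection: the service provider registers the
    // client in the container, so Laravel resolves it automatically.
    public function __construct(private Client $llm) {}

    public function ask(string $question): string
    {
        $response = $this->llm->chat([
            'model' => 'auto',
            'messages' => [Message::user($question)],
        ]);

        return $response['content'];
    }
}
```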
How does streaming work in PHP?
The chatStream() method returns a PHP generator that yields associative arrays as SSE events arrive. Use foreach to iterate and echo or flush output. This keeps memory usage low even for very long responses. Call ob_flush() and flush() in web contexts to send chunks to the browser.
Is the SDK PSR-18 compatible?
Yes. The SDK ships with a Guzzle adapter by default but accepts any PSR-18 compatible HTTP client. Pass your own client instance to the constructor if you need custom middleware, proxies, or logging.
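Passing a custom PSR-18 client might look like the sketch below, assuming the constructor accepts the HTTP client as a second argument (the exact parameter position is an assumption). The Guzzle middleware calls themselves (`HandlerStack::create`, `Middleware::mapRequest`) are standard Guzzle 7 APIs.

```php
<?php
require_once 'vendor/autoload.php';

use LLMWise\Client;
use GuzzleHttp\Client as GuzzleClient;
use GuzzleHttp\HandlerStack;
use GuzzleHttp\Middleware;

// Build a Guzzle client with custom middleware that logs
// every outgoing request URI.
$stack = HandlerStack::create();
$stack->push(Middleware::mapRequest(function ($request) {
    error_log('LLMWise request: ' . $request->getUri());
    return $request;
}));

$http = new GuzzleClient(['handler' => $stack, 'timeout' => 30]);

// Assumption: the PSR-18 client is the constructor's second argument.
$client = new Client(getenv('LLMWISE_API_KEY'), $http);
```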
What PHP versions are supported?
PHP 8.1 or later is required. The SDK uses enums, readonly properties, and fibers, all introduced in PHP 8.1 (plus named arguments, available since PHP 8.0). The JSON extension (enabled by default) is the only required extension.
