Use the official LLMWise PHP SDK to call multiple AI models with one API key. Laravel service provider, streaming generators, PSR-18 compatible, and built-in failover.
Credit-based pay-per-use with token-settled billing. No monthly subscription. Paid credits never expire.
Replace multiple AI subscriptions with one wallet that includes routing, failover, and optimization.
composer require llmwise/llmwise-php
<?php
// composer require llmwise/llmwise-php
require_once 'vendor/autoload.php';

use LLMWise\Client;
use LLMWise\Message;

$client = new Client(getenv('LLMWISE_API_KEY'));

// Basic chat request
$response = $client->chat([
    'model' => 'auto',
    'messages' => [
        Message::user('Explain dependency injection in PHP.'),
    ],
    'max_tokens' => 512,
]);

echo $response['content'] . PHP_EOL;

// Streaming chat via generator (yields SSE events)
$stream = $client->chatStream([
    'model' => 'claude-sonnet-4.5',
    'messages' => [
        Message::user('Write a PSR-15 middleware for rate limiting.'),
    ],
    'stream' => true,
]);

foreach ($stream as $event) {
    if (!empty($event['delta'])) {
        echo $event['delta'];
    }
    if (!empty($event['done'])) {
        echo PHP_EOL . "Credits charged: {$event['credits_charged']}" . PHP_EOL;
        break;
    }
}

Everything you need to integrate LLMWise's multi-model API into your PHP project.
Install the official package via Composer. Requires PHP 8.1 or later and the JSON extension.
composer require llmwise/llmwise-php
Store your API key as an environment variable or in your .env file for Laravel.
export LLMWISE_API_KEY="your_api_key_here"

# Or in Laravel .env:
# LLMWISE_API_KEY=your_api_key_here
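If the variable is not set, `getenv()` returns `false`, and the client constructor would silently receive a bad key. A minimal guard — plain PHP, no SDK calls — fails fast instead of surfacing as a confusing authentication error later:

```php
<?php
// Fail fast if the key is missing rather than sending an unauthenticated request.
$apiKey = getenv('LLMWISE_API_KEY');

if ($apiKey === false || $apiKey === '') {
    fwrite(STDERR, "LLMWISE_API_KEY is not set.\n");
    exit(1);
}

// Safe to construct the client now, e.g. new \LLMWise\Client($apiKey).
```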
Instantiate the LLMWise client with your API key. In Laravel, use the published config and resolve from the container instead.
<?php

use LLMWise\Client;

$client = new Client(getenv('LLMWISE_API_KEY'));

// Laravel: resolve from the container
// $client = app(Client::class);

Call the chat method with a model ID and a messages array. The response includes content, token counts, and cost metadata.
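The Laravel container approach mentioned above is only hinted at. A hedged sketch of what the binding might look like in a service provider — the `config/llmwise.php` file and the `llmwise.api_key` config key are assumptions, not documented SDK conventions:

```php
<?php

namespace App\Providers;

use Illuminate\Support\ServiceProvider;
use LLMWise\Client;

class LLMWiseServiceProvider extends ServiceProvider
{
    public function register(): void
    {
        // Bind a shared Client instance. The 'llmwise.api_key' config key
        // (and a config/llmwise.php file backing it) are assumed here.
        $this->app->singleton(Client::class, function ($app) {
            return new Client(config('llmwise.api_key'));
        });
    }
}
```

With a binding like this registered, `app(Client::class)` resolves the same configured instance everywhere in the application.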
$response = $client->chat([
    'model' => 'gpt-5.2',
    'messages' => [
        Message::system('You are a senior PHP developer.'),
        Message::user('What are the new features in PHP 8.3?'),
    ],
    'max_tokens' => 512,
]);

echo $response['content'];

Use chatStream(), which returns a generator. Each yielded event is an associative array with delta text. This is memory-efficient for long responses.
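The chat section above says the response carries token counts and cost metadata, but does not show the field names. A hedged sketch of inspecting them, assuming a `usage` array and a top-level `credits_charged` key (the latter mirrors the streaming `done` event; neither is confirmed for the non-streaming response):

```php
// Hypothetical field names -- adjust to the actual response shape.
if (isset($response['usage'])) {
    printf(
        "Prompt tokens: %d, completion tokens: %d\n",
        $response['usage']['prompt_tokens'] ?? 0,
        $response['usage']['completion_tokens'] ?? 0
    );
}

if (isset($response['credits_charged'])) {
    echo "Credits charged: {$response['credits_charged']}" . PHP_EOL;
}
```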
$stream = $client->chatStream([
    'model' => 'gemini-3-flash',
    'messages' => [
        Message::user('Write a Laravel Eloquent query scope for soft-deleted records.'),
    ],
    'stream' => true,
]);

foreach ($stream as $event) {
    if (!empty($event['delta'])) {
        echo $event['delta'];
        ob_flush();
        flush();
    }
}

Send the same prompt to multiple models and receive their responses side by side. Useful for quality evaluation and model selection.
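Long-lived SSE connections can drop mid-stream. The SDK's exception types are not documented here, so this sketch catches `\Throwable`; a real integration would catch the SDK's specific exception class once known:

```php
$buffer = '';

try {
    foreach ($stream as $event) {
        if (!empty($event['delta'])) {
            $buffer .= $event['delta'];
            echo $event['delta'];
        }
    }
} catch (\Throwable $e) {
    // $buffer keeps whatever text arrived before the connection dropped.
    error_log('Stream interrupted: ' . $e->getMessage());
    echo PHP_EOL . '[stream interrupted, partial response kept]' . PHP_EOL;
}
```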
$response = $client->compare([
    'models' => ['gpt-5.2', 'claude-sonnet-4.5', 'gemini-3-flash'],
    'messages' => [
        Message::user('Explain the repository pattern in PHP.'),
    ],
]);

foreach ($response['responses'] as $r) {
    echo "[{$r['model']}]: {$r['content']}\n\n";
}
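The page advertises built-in failover, but a caller can also layer a manual fallback on top using nothing beyond chat(). A sketch of a client-side fallback loop (the exception type is assumed, since the SDK's error classes are not shown here):

```php
// Try models in order of preference; fall through on failure.
$models = ['gpt-5.2', 'claude-sonnet-4.5', 'gemini-3-flash'];
$response = null;

foreach ($models as $model) {
    try {
        $response = $client->chat([
            'model' => $model,
            'messages' => [Message::user('Summarize PSR-4 autoloading.')],
        ]);
        break; // First successful model wins.
    } catch (\Throwable $e) {
        error_log("Model {$model} failed: " . $e->getMessage());
    }
}

if ($response === null) {
    throw new \RuntimeException('All models failed.');
}

echo $response['content'];
```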