Image generation is usually the first thing a client gets excited about and the first thing that blows up a budget. Drop a bare `Http::post()` call into a controller, demo it, ship it — and three weeks later you’re staring at an OpenAI invoice with no idea which users triggered it or why the same asset was generated forty times. This guide focuses on image generation specifically — for the full OpenAI integration foundation including text completions, error handling, and token accounting, see the complete Laravel OpenAI integration guide.
This guide does not stop at a working API call. We’re building a proper Laravel OpenAI image generation pipeline: a bound Service class, a queued Job, Redis-backed deduplication caching, S3 storage via the Storage facade, and full token cost tracking through an Eloquent model.
## Why gpt-image-1 Changes Your Storage Architecture
Before a single line of code, you need to understand one architectural constraint that gpt-image-1 imposes: it never returns a URL. Every response is base64-encoded JSON. Compare that to DALL-E 3, which gave you a hosted URL you could drop straight into an `<img>` tag.
With gpt-image-1, you are always responsible for storage. There is no “temporary hosted URL” to lean on. Design your storage layer upfront — trying to bolt it on later is painful. For this guide, we use Laravel’s Storage facade pointed at an S3-compatible bucket, which works out of the box with AWS S3, DigitalOcean Spaces, or Cloudflare R2.
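If you have not set up an S3-compatible disk yet, the standard `s3` entry in `config/filesystems.php` is all this guide assumes. The `endpoint` line is what lets the same disk definition point at Spaces or R2 instead of AWS; the env variable names are Laravel's defaults.

```php
// config/filesystems.php — the 's3' disk used throughout this guide.
's3' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
    'url' => env('AWS_URL'),
    // For DigitalOcean Spaces or Cloudflare R2, point this at their endpoint.
    'endpoint' => env('AWS_ENDPOINT'),
    'use_path_style_endpoint' => env('AWS_USE_PATH_STYLE_ENDPOINT', false),
],
```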
## 1. Installation and Configuration
Install the official openai-php/laravel package:
```bash
composer require openai-php/laravel
php artisan vendor:publish --provider="OpenAI\Laravel\ServiceProvider"
```
This publishes `config/openai.php`. Your `.env` file needs:
```env
OPENAI_API_KEY=your_key_here
OPENAI_REQUEST_TIMEOUT=120
```
The default timeout of 30 seconds is not enough for image generation. High-quality 1536×1024 requests can take 20–40 seconds under load. Set at least 120.
In Laravel 11+, `bootstrap/app.php` handles application configuration — middleware, routing, exception handling — but container bindings still belong in a Service Provider’s `register()` method, `app/Providers/AppServiceProvider.php` by default. For most projects, the published config is sufficient and no custom binding of the OpenAI client is needed.
We also need an Eloquent model to track token usage. Generate it with its migration:
```bash
php artisan make:model AiImageUsage -m
```
```php
// database/migrations/xxxx_create_ai_image_usages_table.php
public function up(): void
{
    Schema::create('ai_image_usages', function (Blueprint $table) {
        $table->id();
        $table->foreignId('user_id')->nullable()->constrained()->nullOnDelete();
        $table->string('prompt_hash', 64)->index();
        $table->text('prompt');
        $table->string('model')->default('gpt-image-1');
        $table->string('size')->default('1024x1024');
        $table->string('quality')->default('medium');
        $table->string('output_format')->default('webp');
        $table->unsignedInteger('input_tokens')->default(0);
        $table->unsignedInteger('output_tokens')->default(0);
        $table->unsignedInteger('total_tokens')->default(0);
        $table->string('storage_path')->nullable();
        $table->boolean('from_cache')->default(false);
        $table->timestamps();
    });
}
```
This table is your cost ledger. Every generation, cached or live, should produce a row. If your bill spikes, you query this table first.
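The generated model itself needs very little: make the columns mass-assignable, since the Service below writes rows with `AiImageUsage::create()`. Using `$guarded = []` here is a judgment call — an explicit `$fillable` list is equally valid.

```php
<?php

// app/Models/AiImageUsage.php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;

class AiImageUsage extends Model
{
    // Allow mass assignment on every column. This table is only ever
    // written by our own Service, never from raw request input.
    protected $guarded = [];

    protected $casts = [
        'from_cache' => 'boolean',
    ];
}
```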
## 2. The Laravel OpenAI Image Generation Service
We do not call the OpenAI client from a controller. We bind a Service class and inject it wherever it is needed — controllers, Jobs, Artisan commands. This is the Service Container doing what it is designed for.
```bash
php artisan make:class Services/ImageGenerationService
```

(The `make:class` path is relative to `app/`, so this creates `app/Services/ImageGenerationService.php`.)
```php
<?php

namespace App\Services;

use App\Models\AiImageUsage;
use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\Log;
use Illuminate\Support\Facades\Storage;
use Illuminate\Support\Str;
use OpenAI\Laravel\Facades\OpenAI;
use OpenAI\Exceptions\ErrorException;
use OpenAI\Exceptions\TransporterException;

class ImageGenerationService
{
    /**
     * Generate an image, with Redis-backed deduplication caching.
     * Returns the storage path, public URL, and token usage.
     */
    public function generate(
        string $prompt,
        string $size = '1024x1024',
        string $quality = 'medium',
        string $outputFormat = 'webp',
        bool $transparentBackground = false,
        ?int $userId = null
    ): array {
        $promptHash = $this->hashParameters(
            $prompt, $size, $quality, $outputFormat, $transparentBackground ? '1' : '0'
        );

        // Check Redis cache before hitting the API.
        $cachedPath = Cache::get("image_gen:{$promptHash}");

        if ($cachedPath && Storage::disk('s3')->exists($cachedPath)) {
            $this->recordUsage(
                userId: $userId,
                promptHash: $promptHash,
                prompt: $prompt,
                size: $size,
                quality: $quality,
                outputFormat: $outputFormat,
                storagePath: $cachedPath,
                fromCache: true
            );

            return [
                'path' => $cachedPath,
                'from_cache' => true,
                'url' => Storage::disk('s3')->url($cachedPath),
            ];
        }

        // Build the request payload.
        $payload = [
            'model' => 'gpt-image-1',
            'prompt' => $prompt,
            'size' => $size,
            'quality' => $quality,
            'output_format' => $outputFormat,
        ];

        if ($transparentBackground) {
            $payload['background'] = 'transparent';

            // Transparent backgrounds require png or webp — force png if jpeg was passed.
            if ($outputFormat === 'jpeg') {
                $payload['output_format'] = 'png';
                $outputFormat = 'png';
            }
        }

        try {
            $response = OpenAI::images()->create($payload);
        } catch (ErrorException $e) {
            Log::error('OpenAI image generation API error', [
                'status' => $e->getCode(),
                'message' => $e->getMessage(),
                'prompt' => Str::limit($prompt, 200),
            ]);

            throw $e;
        } catch (TransporterException $e) {
            Log::error('OpenAI transport failure (timeout or network)', [
                'message' => $e->getMessage(),
            ]);

            throw $e;
        }

        // gpt-image-1 always returns base64 — decode and persist it ourselves.
        $imageData = base64_decode($response->data[0]->b64_json);
        $storagePath = "ai-images/{$promptHash}.{$outputFormat}";

        Storage::disk('s3')->put($storagePath, $imageData, 'public');

        // Cache the path in Redis for 30 days.
        Cache::put("image_gen:{$promptHash}", $storagePath, now()->addDays(30));

        $usage = $response->usage;

        $this->recordUsage(
            userId: $userId,
            promptHash: $promptHash,
            prompt: $prompt,
            size: $size,
            quality: $quality,
            outputFormat: $outputFormat,
            inputTokens: $usage->inputTokens,
            outputTokens: $usage->outputTokens,
            totalTokens: $usage->totalTokens,
            storagePath: $storagePath,
            fromCache: false
        );

        return [
            'path' => $storagePath,
            'from_cache' => false,
            'url' => Storage::disk('s3')->url($storagePath),
            'usage' => [
                'input_tokens' => $usage->inputTokens,
                'output_tokens' => $usage->outputTokens,
                'total_tokens' => $usage->totalTokens,
            ],
        ];
    }

    private function hashParameters(string ...$parts): string
    {
        return hash('sha256', implode('|', $parts));
    }

    private function recordUsage(
        ?int $userId,
        string $promptHash,
        string $prompt,
        string $size,
        string $quality,
        string $outputFormat,
        int $inputTokens = 0,
        int $outputTokens = 0,
        int $totalTokens = 0,
        ?string $storagePath = null,
        bool $fromCache = false
    ): void {
        AiImageUsage::create([
            'user_id' => $userId,
            'prompt_hash' => $promptHash,
            'prompt' => $prompt,
            'size' => $size,
            'quality' => $quality,
            'output_format' => $outputFormat,
            'input_tokens' => $inputTokens,
            'output_tokens' => $outputTokens,
            'total_tokens' => $totalTokens,
            'storage_path' => $storagePath,
            'from_cache' => $fromCache,
        ]);
    }
}
```
Bind this as a singleton in a Service Provider:
```php
// app/Providers/AppServiceProvider.php
use App\Services\ImageGenerationService;

public function register(): void
{
    $this->app->singleton(ImageGenerationService::class);
}
```
The Service Container now manages a single instance across the request lifecycle. Inject it via constructor injection, not `app()` helper calls.
---
## 3. Sizes, Quality, and Output Format
These three parameters are your primary cost and performance levers. Know them before you hand this off to a product team.
| Size | Tokens (approx.) | Use Case |
|---|---|---|
| `1024x1024` | ~1,056 (medium) | Default, balanced |
| `1024x1536` | ~1,500 (medium) | Portrait — product shots, posters |
| `1536x1024` | ~1,500 (medium) | Landscape — banners, hero images |
| Quality | Cost | Use Case |
|---|---|---|
| `low` | Lowest | Previews, rapid iteration, internal drafts |
| `medium` | Mid | Content pipelines, blog assets |
| `high` | Highest | Marketing assets, final customer-facing output |
**Output formats:** Use `webp` for web delivery — smaller files, near-identical visual quality. Use `png` when you need transparency (`background: transparent` requires `png` or `webp`). Use `jpeg` only when maximum compression matters and alpha transparency is irrelevant.
> **[Efficiency Gain]** — Never use `quality: 'high'` for internal previews or iteration loops. A medium-quality 1024×1024 costs roughly 1,056 output tokens. High-quality 1536×1024 can cost 3–4× more. If your pipeline generates previews before a user approves a final asset, use `low` quality for the preview and `high` only on the confirmed generation. This single discipline can cut image generation costs by 60–70% in content-heavy workflows.
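To make that discipline concrete, here is a small cost helper. The per-million-token rates are assumptions based on published gpt-image-1 pricing at the time of writing (text input around $5/1M, image output around $40/1M) — verify them against the current price list before trusting the numbers.

```php
<?php

/**
 * Estimate the USD cost of a single gpt-image-1 generation.
 * Rates are per one million tokens and are ASSUMED values —
 * check OpenAI's current pricing page before relying on them.
 */
function estimateImageCostUsd(
    int $inputTokens,
    int $outputTokens,
    float $inputRatePerMillion = 5.00,   // assumed text-input rate
    float $outputRatePerMillion = 40.00, // assumed image-output rate
): float {
    return ($inputTokens / 1_000_000) * $inputRatePerMillion
         + ($outputTokens / 1_000_000) * $outputRatePerMillion;
}

// A medium 1024x1024 (~1,056 output tokens) vs a high 1536x1024 (~4,000).
$preview = estimateImageCostUsd(50, 1056);
$final   = estimateImageCostUsd(50, 4000);

printf("preview: $%.4f, final: $%.4f\n", $preview, $final);
```

Multiply that gap by a few thousand generations a month and the preview/final split pays for itself immediately.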
---
## 4. Prompt Design: Constraints, Not Descriptions
A prompt is not a description of what you want — it is a set of constraints that narrows the model's sample space. The less you specify, the more the model improvises, and improvisation is where inconsistency lives.
**Weak:**
```
"A futuristic city"
```
**Strong:**
```
"A futuristic city at dusk, clean architectural lines, muted neon accents,
realistic lighting, wide-angle perspective, no text, no people, no watermarks"
```
The most impactful additions are explicit exclusions (`no text`, `no watermark`, `no people`), composition control (`wide-angle perspective`, `centered`, `flat lay`), and style anchors (`photorealistic`, `flat design`, `editorial illustration`).
### Version-Control Your Prompt Templates
When your prompt strings are scattered across controllers and config files, debugging output drift becomes a detective exercise with no evidence trail. If you already follow the Prompt Migrations pattern for Laravel, apply the same discipline here — image prompts are just another class of versioned system input.
At minimum, centralise your templates in a dedicated class:
```php
<?php

namespace App\Services\Prompts;

class ImagePromptTemplates
{
    public static function productHero(string $productName, string $style = 'clean, white background, studio lighting'): string
    {
        return "Product photograph of {$productName}, {$style}, no text, no watermarks, centered composition, no background clutter";
    }

    public static function blogHeader(string $topic, string $mood = 'professional'): string
    {
        return "Editorial illustration representing {$topic}, {$mood} tone, flat design, no text, wide format, no photorealistic faces";
    }

    public static function uiScreenshot(string $description, string $theme = 'dark'): string
    {
        return "Clean UI dashboard screenshot, {$description}, {$theme} theme, minimal design, no lorem ipsum, no real user data, no watermarks";
    }
}
```
Use these as your source of truth. When output quality drifts and a client raises a ticket, the diff is in version control — not in someone’s memory.
## 5. Queuing Generation with Laravel Jobs
Calling the OpenAI API synchronously from a controller is an architectural mistake. Image generation can take 20–40 seconds under load. That is a dead request, an open DB connection, and a timeout waiting to happen. The correct pattern is to dispatch a Job, return immediately, and notify the user when the asset is ready.
For validating AI-generated image outputs before storing — enforcing response structure and catching malformed base64 — see the guide on validating AI-generated outputs in Laravel.
```bash
php artisan make:job GenerateImageJob
```
```php
<?php

namespace App\Jobs;

use App\Services\ImageGenerationService;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Queue\Middleware\ThrottlesExceptions;
use Illuminate\Support\Facades\Log;

class GenerateImageJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public int $tries = 3;
    public int $backoff = 60; // seconds between retries

    public function __construct(
        public readonly string $prompt,
        public readonly string $size = '1024x1024',
        public readonly string $quality = 'medium',
        public readonly string $outputFormat = 'webp',
        public readonly ?int $userId = null,
        public readonly ?string $callbackEvent = null,
    ) {}

    public function middleware(): array
    {
        // Throttle: allow 5 exceptions, then back off for 10 minutes.
        // (In Laravel 11+ the second argument is in seconds.)
        return [new ThrottlesExceptions(5, 10 * 60)];
    }

    public function handle(ImageGenerationService $service): void
    {
        $result = $service->generate(
            prompt: $this->prompt,
            size: $this->size,
            quality: $this->quality,
            outputFormat: $this->outputFormat,
            userId: $this->userId,
        );

        Log::info('Image generation complete', [
            'user_id' => $this->userId,
            'path' => $result['path'],
            'from_cache' => $result['from_cache'],
            'tokens' => $result['usage']['total_tokens'] ?? 0,
        ]);

        // Broadcast a Livewire or Echo event here if needed.
        // event(new ImageGenerationCompleted($this->userId, $result['url']));
    }

    public function failed(\Throwable $exception): void
    {
        Log::error('Image generation job failed permanently', [
            'user_id' => $this->userId,
            'prompt' => substr($this->prompt, 0, 200),
            'error' => $exception->getMessage(),
        ]);
    }
}
```
Dispatching from a controller is now a one-liner:
```php
use App\Jobs\GenerateImageJob;
use App\Services\Prompts\ImagePromptTemplates;

// Inside your controller action:
GenerateImageJob::dispatch(
    prompt: ImagePromptTemplates::productHero($request->validated('product_name')),
    size: '1536x1024',
    quality: 'high',
    userId: auth()->id(),
)->onQueue('image-generation');
```
Dedicate a named queue (`image-generation`) so you can scale its workers independently and monitor it separately in Laravel Horizon. Do not run image generation on your default queue — one slow generation blocks everything behind it.
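If you run Horizon, a dedicated supervisor for that queue might look like this in `config/horizon.php`. The supervisor names and worker counts here are illustrative, not a recommendation:

```php
// config/horizon.php — sketch of a dedicated supervisor for image work.
'environments' => [
    'production' => [
        'supervisor-default' => [
            'connection' => 'redis',
            'queue' => ['default'],
            'maxProcesses' => 10,
        ],
        // Long-running image jobs get their own workers and a longer timeout.
        'supervisor-images' => [
            'connection' => 'redis',
            'queue' => ['image-generation'],
            'maxProcesses' => 3,
            'timeout' => 150, // must exceed OPENAI_REQUEST_TIMEOUT
        ],
    ],
],
```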
> **[Production Pitfall]** — If you are running multiple queue workers and the same prompt arrives from two users simultaneously before the cache key is written, you will generate the same image twice and charge twice. This is a classic race condition. Guard against it with a `Cache::lock()` around the generation call in your Service: acquire the lock using the prompt hash as the key, check the cache inside the lock, generate if still a miss, release the lock. Under high concurrency, this is not optional.
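A sketch of that guard inside the Service's `generate()` method. The lock name, wait time, and TTL are illustrative choices, not requirements:

```php
use Illuminate\Support\Facades\Cache;

// Hold the lock long enough to cover a slow generation (TTL 120s),
// and let a second worker wait up to 15s for the first one to finish.
$lock = Cache::lock("image_gen_lock:{$promptHash}", 120);

return $lock->block(15, function () use ($promptHash) {
    // Re-check the cache INSIDE the lock: the other worker may have
    // finished and written the key while we were waiting.
    $cachedPath = Cache::get("image_gen:{$promptHash}");

    if ($cachedPath) {
        return ['path' => $cachedPath, 'from_cache' => true];
    }

    // Still a miss — we are the only worker generating this prompt.
    // ... perform the OpenAI call, store to S3, Cache::put(), recordUsage() ...
});
```

`block()` throws a `LockTimeoutException` if the lock never frees up, which your Job's retry logic will absorb.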
## 6. Error Handling: What the API Actually Throws
The openai-php client throws typed exceptions. Handle them explicitly — swallowing everything into a generic 500 is not error handling, it is error hiding.
```php
use OpenAI\Exceptions\ErrorException;
use OpenAI\Exceptions\TransporterException;
use OpenAI\Exceptions\UnserializableResponse;

try {
    $response = OpenAI::images()->create($payload);
} catch (ErrorException $e) {
    // HTTP 400: content policy violation or malformed request
    // HTTP 429: rate limit exceeded
    // HTTP 500+: OpenAI server error
    match (true) {
        $e->getCode() === 429 => $this->handleRateLimit($e),
        $e->getCode() === 400 => $this->handleContentViolation($e, $prompt),
        default => throw $e,
    };
} catch (TransporterException $e) {
    // Network-level failure: timeout, DNS, connection refused.
    // This is retriable — your Job's backoff handles it.
    throw $e;
} catch (UnserializableResponse $e) {
    // The API returned something the client could not parse.
    // Log it and alert — this indicates an API contract change.
    Log::critical('OpenAI response unserializable', ['message' => $e->getMessage()]);
    throw $e;
}
```
A note on content policy violations (HTTP 400): you cannot always predict them from the prompt text alone. The model evaluates semantic intent, not keyword matching. A prompt referencing a real-world brand, a public figure, or even certain compositional descriptions can trigger a violation. Filter user-submitted prompts before they reach the API — but accept that some will still fail at the API layer, and budget for that latency without billing the user a token cost they never got value from.
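One way to catch obvious violations before paying for the request is OpenAI's own moderation endpoint (free at the time of writing), which the same client exposes. Treat this as a pre-filter under those assumptions, not a guarantee:

```php
use OpenAI\Laravel\Facades\OpenAI;

// Pre-flight check: flag the prompt before spending image tokens on it.
$moderation = OpenAI::moderations()->create([
    'model' => 'omni-moderation-latest',
    'input' => $prompt,
]);

if ($moderation->results[0]->flagged) {
    // Reject in your FormRequest or service layer before the image call.
    abort(422, 'This prompt cannot be processed.');
}
```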
If you are running per-user rate limiting, the Laravel AI Middleware guide for token tracking and rate limiting covers the full middleware architecture — including tiered limits, Redis counters, and how to tie cost telemetry back to individual users at the HTTP layer.
## 7. Image Editing with gpt-image-1
Beyond generation from scratch, gpt-image-1 accepts input images and edits them based on a new prompt. Up to 10 reference images can be provided. This is useful for background replacement, product colour variations, or iterating on an existing generated asset.
The openai-php client expects the image as a resource or PSR-7 stream. The cleanest approach in Laravel is to retrieve the file from Storage and pass it directly:
```php
use Illuminate\Support\Facades\Storage;
use OpenAI\Laravel\Facades\OpenAI;

public function editImage(string $storagePath, string $editPrompt, ?int $userId = null): array
{
    $imageContents = Storage::disk('s3')->get($storagePath);

    // tempnam() creates a placeholder file; write the real payload to a
    // .png-suffixed sibling so the upload carries the right extension.
    $tmpBase = tempnam(sys_get_temp_dir(), 'oai_edit_');
    $tmpPath = $tmpBase . '.png';
    file_put_contents($tmpPath, $imageContents);

    try {
        $response = OpenAI::images()->edit([
            'model' => 'gpt-image-1',
            'image' => fopen($tmpPath, 'r'),
            'prompt' => $editPrompt,
            'size' => '1024x1024',
            'quality' => 'medium',
        ]);
    } finally {
        @unlink($tmpPath); // Always clean up both temp files.
        @unlink($tmpBase);
    }

    $imageData = base64_decode($response->data[0]->b64_json);
    $outputPath = 'ai-images/edits/' . uniqid('edit_', true) . '.png';

    Storage::disk('s3')->put($outputPath, $imageData, 'public');

    $usage = $response->usage;

    $this->recordUsage(
        userId: $userId,
        promptHash: hash('sha256', $storagePath . '|' . $editPrompt),
        prompt: "[EDIT] {$editPrompt}",
        size: '1024x1024',
        quality: 'medium',
        outputFormat: 'png',
        inputTokens: $usage->inputTokens,
        outputTokens: $usage->outputTokens,
        totalTokens: $usage->totalTokens,
        storagePath: $outputPath,
        fromCache: false
    );

    return [
        'path' => $outputPath,
        'url' => Storage::disk('s3')->url($outputPath),
    ];
}
```
> **[Edge Case Alert]** — When editing, the input image must be a PNG. Passing a `webp` or `jpeg` file to the edit endpoint returns an HTTP 400. If your generated assets are stored as `webp` (which they should be for generation), you will need to convert to PNG before editing. Add `ext-gd` or `intervention/image` to your stack and convert in-memory before writing the temp file. Do not assume the file extension on your S3 object matches what the API requires.
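A minimal in-memory conversion using `ext-gd` — this assumes GD is compiled with webp support on your server; `intervention/image` offers the same operation behind a friendlier API:

```php
<?php

/**
 * Convert image bytes (webp, jpeg, png, ...) to PNG bytes using ext-gd.
 * Requires GD compiled with support for the source format.
 */
function convertToPng(string $imageBytes): string
{
    $gd = imagecreatefromstring($imageBytes);

    if ($gd === false) {
        throw new RuntimeException('GD could not decode the source image.');
    }

    // Preserve any alpha channel the source carried.
    imagesavealpha($gd, true);

    ob_start();
    imagepng($gd);
    $png = ob_get_clean();

    imagedestroy($gd);

    return $png;
}
```

Call this on the bytes you pull from S3 before writing the temp file for the edit endpoint.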
## 8. Transparent Backgrounds
For product images, icons, or anything destined for compositing, set `background: transparent`. This requires `png` or `webp` — `jpeg` does not support an alpha channel. Use the `medium` or `high` quality settings; the docs note that transparency at `low` quality can produce artefacts at the edges of the subject.
```php
$payload = [
    'model' => 'gpt-image-1',
    'prompt' => 'A pair of wireless headphones, product photography style, isolated subject, no background, no shadow',
    'size' => '1024x1024',
    'quality' => 'medium',
    'output_format' => 'png',
    'background' => 'transparent',
];

$response = OpenAI::images()->create($payload);

$imageData = base64_decode($response->data[0]->b64_json);
Storage::disk('s3')->put('ai-images/products/headphones_transparent.png', $imageData, 'public');
```
This pattern is particularly powerful in e-commerce pipelines where the same product needs to appear against multiple backgrounds — generate once with a transparent background and composite at render time.
## 9. Token-Based Cost Tracking
Unlike DALL-E 2 and DALL-E 3, which charge a flat per-image rate, gpt-image-1 charges based on token consumption. The `usage` object in every response tells you the exact cost:
```json
{
  "input_tokens": 50,
  "input_tokens_details": { "image_tokens": 0, "text_tokens": 50 },
  "output_tokens": 1056,
  "total_tokens": 1106
}
```
Output tokens are image tokens and are the primary cost driver. A 1024×1024 at medium quality uses approximately 1,056 output tokens. High quality at 1536×1024 can be 3,000–4,000+ output tokens per image.
With the AiImageUsage Eloquent model we created earlier, cost analysis becomes a standard query:
```php
use App\Models\AiImageUsage;

// Total tokens consumed by a user in the current month:
$monthlyUsage = AiImageUsage::where('user_id', $userId)
    ->where('from_cache', false)
    ->whereYear('created_at', now()->year)
    ->whereMonth('created_at', now()->month)
    ->sum('total_tokens');

// Most expensive uncached generations this week:
$expensive = AiImageUsage::where('from_cache', false)
    ->where('created_at', '>=', now()->subWeek())
    ->orderByDesc('total_tokens')
    ->limit(10)
    ->get(['prompt', 'total_tokens', 'size', 'quality', 'created_at']);

// Cache hit rate (lower is worse — you are regenerating unnecessarily):
$stats = AiImageUsage::selectRaw('
    COUNT(*) as total,
    SUM(CASE WHEN from_cache = 1 THEN 1 ELSE 0 END) as cached,
    ROUND(SUM(CASE WHEN from_cache = 1 THEN 1 ELSE 0 END) / COUNT(*) * 100, 2) as cache_hit_rate
')->first();
```
Log this from day one. The first time a client asks “why is our OpenAI bill $800 this month,” you want answers in under 30 seconds — not a manual audit of API logs.
## 10. Production Mistakes to Avoid
These are not theoretical. They happen in real Laravel OpenAI image generation pipelines, usually within the first week of production traffic.
- **Calling generation synchronously from a controller action.** The request timeout kills the user experience and wastes the API call. Use a dispatched Job.
- **Not caching identical prompts.** If your content pipeline regenerates the same blog header every deploy, you are burning tokens on a deterministic outcome. The Redis-backed cache in the Service class above eliminates this.
- **Ignoring prompt versioning.** Output quality drifts when prompts change informally. Centralise templates and version them. When a client says “the images looked better two weeks ago,” you want a diff, not a shrug.
- **Using `high` quality for everything.** Use `low` for previews, `medium` for content pipelines, `high` only for final confirmed assets. The cost difference is significant.
- **Not logging token usage per request.** The `usage` object is in every response. Persisting it to `AiImageUsage` takes four lines of code and saves you hours of forensic accounting when the invoice arrives.
- **Storing base64 in your database.** Never. Decode it, write it to S3 via `Storage::disk('s3')->put()`, and store the path. A 1024×1024 webp image is 200–400KB. Accumulate a few hundred and your database becomes a CDN without the performance.
- **Skipping user-submitted prompt sanitisation.** Rate limiting and abuse prevention are your responsibility. A user who discovers they can trigger 100 image generations from a single form has just made your problem very expensive. Queue requests, enforce per-user limits, and consider requiring prompt review for high-quality tiers. See our Laravel AI Middleware guide for the full implementation pattern.
## 11. Safety and Moderation
The API enforces content policy automatically, but a 400 from a policy violation is a failed request you already incurred latency on. Filter user-submitted prompts before they reach the API. At minimum, maintain a blocklist of terms in your config and validate against it in a FormRequest:
```php
// config/image_generation.php
return [
    'blocked_terms' => [
        // Populate this with your moderation list.
        // Consider using a third-party moderation API for dynamic lists.
    ],

    'max_prompt_length' => 1000,

    'allowed_quality_tiers' => ['low', 'medium'], // Restrict 'high' to premium users.
];
```
For multi-user products, never expose `quality: 'high'` to all users by default. Gate it behind a plan or permission check. The cost difference between `medium` and `high` at scale is the difference between a predictable budget and an unpleasant conversation.
## Further Reading
- **OpenAI Image Generation API Reference** — The canonical source on request parameters, model capabilities, and billing for `gpt-image-1`.
- **openai-php/laravel on GitHub** — The official PHP client. The README covers authentication, timeout configuration, and fake client setup for testing.
## What to Build Next
Once your image generation pipeline is stable, the natural next evolution is adding governance around it — per-user cost caps, tiered quality access, and real-time spend telemetry. The Production-Grade AI Architecture in Laravel guide covers exactly this: Contracts, provider abstraction, and telemetry wired to your observability stack.
Senior Laravel Developer and AI Architect with 10+ years in the trenches. Dewald writes about building resilient, cost-aware AI integrations and modernizing the Laravel developer workflow for the 2026 ecosystem.

