Laravel 13 launched on March 17, 2026, and Taylor Otwell kept his Laracon EU promise: zero breaking changes, a clean upgrade from Laravel 12, and one headline shift that resets the default approach to AI in PHP. The Laravel 13 AI SDK is now production-stable, first-party, and catalogued inside the official Laravel documentation as a core concern — not a side-effect of a community package you bolt on after you’ve already made architectural decisions you’ll later regret. For direct Claude API integration without the SDK abstraction layer — the raw HTTP approach, streaming, and token accounting — the Claude API integration in Laravel guide covers that pattern and remains valid alongside the SDK.
This is not a release you skim for changelog bullet points and move on. For anyone building AI-powered features on Laravel, this release fundamentally changes how you should be structuring that work. Let’s go through it properly.
Why Laravel 13 Is an AI-First Release
Laravel 13 continues Laravel’s annual release cadence with a focus on AI-native workflows, stronger defaults, and more expressive developer APIs. The official framing is “minimal breaking changes,” and that’s accurate — but undersells it. The infrastructure laid here directly shapes whether your AI features remain maintainable at scale or turn into the kind of controller-level spaghetti you spend the next two years paying down.
The Laravel AI SDK moves from beta to production-stable on the same day as Laravel 13. It is included as a first-party package and gives you a single, provider-agnostic interface for text generation, tool-calling agents, image creation, audio synthesis, and embedding generation. The SDK handles retry logic, error normalisation, and queue integration behind the scenes. You get all of that without writing a custom abstraction layer, and without coupling your application to a specific provider’s SDK contract.
That last point deserves more weight than the release notes give it.
The Laravel AI SDK: What It Actually Gives You
Text Generation and Agents
The simplest entry point looks like this:
use App\Ai\Agents\SalesCoach;
$response = SalesCoach::make()->prompt('Analyse this sales transcript...');
return (string) $response;
With the AI SDK, you can build provider-agnostic AI features while keeping a consistent, Laravel-native developer experience. That means your SalesCoach agent does not care whether it’s backed by OpenAI, Anthropic, or Google Gemini. You wire the provider in config/ai.php and the agent contract stays unchanged.
The default models used for text, images, audio, transcription, and embeddings are now configurable in your application’s config/ai.php file. This gives you granular control over the exact models you’d like to use if you don’t want to rely on the package defaults. A minimal configuration targeting Anthropic looks like this:
// config/ai.php
return [
'models' => [
'text' => [
'default' => env('AI_TEXT_MODEL', 'claude-sonnet-4-6'),
'cheapest' => 'claude-haiku-4-5-20251001',
'smartest' => 'claude-opus-4-6',
],
],
];
You are not hardcoding a model string into a service class. You are not parsing a .env file in three different controllers. One config file governs the whole application. If you need to roll back a model mid-incident, it’s one value and a php artisan config:cache.
Images and Audio
For visual generation use cases, the SDK offers a clean API for creating images from plain-language prompts. For voice experiences, you can synthesize natural-sounding audio from text for assistants, narrations, and accessibility features.
use Laravel\Ai\Image;
use Laravel\Ai\Audio;
// Image generation
$image = Image::of('A product shot of a minimalist desk lamp')->generate();
$rawContent = (string) $image;
// Audio synthesis
$audio = Audio::of('Your order has been confirmed.')->generate();
$rawContent = (string) $audio;
The fluent API is consistent across modalities. That consistency is not accidental. It means a developer who’s only worked with text generation can pick up image or audio synthesis in minutes — no mental context switch, no separate SDK documentation to parse.
Embeddings and the Str Helper
Here’s where it gets genuinely interesting. Embedding generation is wired directly into Laravel’s Str helper:
use Illuminate\Support\Str;
$vector = Str::of('Napa Valley has exceptional Cabernet Sauvignon.')->toEmbeddings();
That’s not a convenience method. That’s the framework signalling that embeddings are now a first-class data type. You’re not reaching for a one-off utility class — you’re calling a helper that sits alongside Str::slug() and Str::limit() as a standard part of the toolkit.
[Architect’s Note] The fact that toEmbeddings() lives on the Str helper rather than a dedicated Embedding facade is a deliberate design choice. It keeps embedding generation composable with the rest of your string manipulation pipeline. Chain it after a limit() call to trim tokens before you generate — or after markdown() to clean formatting before vectorising. Think of it as the framework nudging you toward preprocessing being part of the same expression.
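As a sketch of that composability (assuming `toEmbeddings()` chains off the same `Stringable` pipeline as the standard helpers, and using a hypothetical `$document->body` field):

```php
use Illuminate\Support\Str;

// Hypothetical preprocessing chain: render the markdown, strip the resulting
// tags, and trim to a rough character budget before the embedding call,
// all in one expression.
$vector = Str::of($document->body)
    ->markdown()     // markdown to HTML
    ->stripTags()    // HTML to plain text
    ->limit(2000)    // crude token budget before the API round-trip
    ->toEmbeddings();
```

The limit value is illustrative; pick it based on your embedding model's context window and your provider's per-token pricing.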
Native Vector Search: Semantic Queries Without a Search Engine Subscription
Laravel 13 deepens its semantic search story with native vector query support, embedding workflows, and related APIs. These features make it straightforward to build AI-powered search experiences using PostgreSQL and pgvector, including similarity search against embeddings generated directly from strings.
The query builder extension looks like this:
$documents = DB::table('documents')
->whereVectorSimilarTo('embedding', 'Best restaurants in Cape Town')
->limit(10)
->get();
Under the hood, this generates a pgvector-compatible cosine similarity query against your Postgres database. No Elasticsearch cluster. No Typesense subscription. No Algolia bill. For a significant number of use cases — internal knowledge bases, product catalogue search, customer support retrieval — Postgres with pgvector is more than sufficient, and the operational overhead is zero if you’re already on Postgres.
The migration for the vector column uses a new column type:
Schema::create('documents', function (Blueprint $table) {
$table->id();
$table->text('content');
$table->vector('embedding', 1536); // dimension matches your model's output
$table->timestamps();
});
A realistic ingestion pipeline that generates and stores embeddings looks like this:
use App\Models\Document;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Str;
class IngestDocumentJob implements ShouldQueue
{
use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;
public function __construct(private readonly string $content) {}
public function handle(): void
{
$embedding = Str::of($this->content)->toEmbeddings();
Document::create([
'content' => $this->content,
'embedding' => $embedding,
]);
}
}
Dispatch it from a controller, and your embedding pipeline is queued, retried on failure, and observable via Horizon — exactly the same way you’d treat any other background job.
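A minimal dispatch site might look like this; the controller shape and validation rules are illustrative application code, not part of the SDK:

```php
use App\Jobs\IngestDocumentJob;
use Illuminate\Http\Request;

class DocumentController extends Controller
{
    public function store(Request $request)
    {
        // Validate in the request lifecycle, then hand the slow
        // embedding work to a dedicated queue.
        $content = $request->validate(['content' => 'required|string'])['content'];

        IngestDocumentJob::dispatch($content)->onQueue('embeddings');

        // 202 Accepted: the work is queued, not done.
        return response()->json(['status' => 'queued'], 202);
    }
}
```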
[Production Pitfall] Embedding API calls are slow — typically 300–800ms depending on payload size and provider latency. Do not generate embeddings synchronously in a request lifecycle. Every embedding generation call belongs in a queued job, period. Under load, synchronous embedding generation will saturate your PHP-FPM pool faster than almost anything else. We’ve seen this exact pattern cause cascading timeouts in staging under traffic replay. Don’t let it reach production.
Provider Agnosticism: What It Means Practically
Switch providers by changing one config value. Teams that offer professional Laravel development services can now build AI-powered features inside a standard Laravel project — no custom abstraction layers, no third-party SDK gymnastics required.
This matters more than it reads. Prior to the AI SDK going stable, the common pattern was to inject an OpenAI client through the Service Container, write a thin wrapper around it, and pray that the wrapper held when you inevitably needed to test responses or swap a model mid-sprint. Most teams ended up with one of three things: a god-service that knew too much, a leaky abstraction that only worked for their current provider, or no abstraction at all and raw Http::post() calls in controllers.
The AI SDK replaces all three anti-patterns with a single, testable, swappable interface that the framework itself maintains. If the underlying provider deprecates an API method, the fix lands in a framework update — not in your codebase.
Here’s how you’d swap to a different provider for a specific agent without touching the rest of the application:
use Laravel\Ai\Facades\Ai;
use Laravel\Ai\Enums\Provider;
$response = Ai::using(Provider::Anthropic)
->withModel('claude-opus-4-6')
->prompt('Summarise this legal document...');
That using() call overrides the config/ai.php default for this invocation only. You can do per-request provider overrides without touching global configuration. That’s useful in multi-tenant applications where different plans map to different model tiers.
If you’re already running a custom abstraction over the OpenAI PHP SDK, we wrote about how to migrate that pattern properly — including how to handle cost visibility and telemetry — in Production-Grade AI Architecture in Laravel: Contracts, Governance & Telemetry. The AI SDK doesn’t replace the need for those architectural decisions; it gives you a better foundation to implement them on.
Tool-Calling Agents: Building Agentic Workflows Natively
The AI SDK’s agent support deserves its own section. Tool-calling — the mechanism by which an LLM decides to invoke a function in your application — is the primitive that makes agentic workflows possible.
A basic agent with tools looks like this:
namespace App\Ai\Agents;
use Laravel\Ai\Agent;
use Laravel\Ai\Attributes\Tool;
class OrderAssistant extends Agent
{
protected string $model = 'claude-sonnet-4-6';
protected string $instructions = 'You are a helpful order management assistant.';
#[Tool('Look up the status of an order by ID')]
public function getOrderStatus(int $orderId): string
{
$order = Order::findOrFail($orderId);
return "Order #{$orderId} is currently {$order->status}.";
}
#[Tool('List all open orders for a given customer')]
public function listOpenOrders(int $customerId): array
{
return Order::where('customer_id', $customerId)
->where('status', 'open')
->pluck('id')
->toArray();
}
}
The #[Tool] attribute wires the method directly into the model’s tool-calling interface. The SDK handles the round-trip: the model sees the tool definition, decides to call it, the SDK invokes your method, and the result is injected back into the conversation automatically.
Notice that getOrderStatus and listOpenOrders are standard Eloquent queries. There is nothing AI-specific about the business logic inside the tools. This is the correct separation. Your tools are Laravel code. The AI SDK manages the protocol layer between your code and the model.
[Edge Case Alert] Tool-calling agents can enter infinite loops if the model repeatedly decides to call the same tool with the same arguments — this happens when the tool’s return value doesn’t advance the conversation’s goal. Always set a maxSteps limit on your agents and handle MaxStepsExceededException explicitly. Defaulting to unlimited steps in production is asking for a runaway API bill.
$response = OrderAssistant::make()
->maxSteps(10)
->prompt("What's the status of order 99?");
If you’re building more sophisticated multi-step agentic workflows and need schema validation on tool outputs — which you almost certainly will once you’re past demos — the Hardening Laravel Agentic Workflows: Schema Validation Against LLM Hallucinations guide covers exactly that.
Queue Integration and config/ai.php as Operational Infrastructure
One under-discussed aspect of the AI SDK is how it integrates with Laravel’s Queue system. Transcription now supports timeouts, giving you better control in production workloads and preventing long-running requests from tying up workers.
use Laravel\Ai\Transcription;
use Laravel\Ai\Enums\Lab;
$transcript = Transcription::fromPath('./podcast.mp3')
->timeout(240)
->generate(Lab::ElevenLabs);
That timeout() call maps directly to the queue worker’s job timeout. If you’re already familiar with $timeout on job classes, this is the same mechanism — now surfaced at the SDK level so you don’t have to know to set it yourself.
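For comparison, the equivalent control on a hand-rolled job class is the standard `$timeout` property:

```php
use Illuminate\Contracts\Queue\ShouldQueue;

class TranscribeAudioJob implements ShouldQueue
{
    // Kill the job if the transcription call hasn't returned in 240 seconds.
    // This is the same mechanism timeout() surfaces at the SDK level.
    public int $timeout = 240;

    public function handle(): void
    {
        // ... transcription call ...
    }
}
```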
Pair this with Laravel 13’s new Queue::route() method for clean queue topology:
// app/Providers/AppServiceProvider.php
use Illuminate\Support\Facades\Queue;
Queue::route([
IngestDocumentJob::class => 'embeddings',
GenerateImageJob::class => 'ai-images',
TranscribeAudioJob::class => 'transcription',
]);
AI jobs should never share a queue with your standard application jobs. Embedding generation and image synthesis calls have entirely different latency profiles and retry semantics than sending a welcome email. The new Queue::route() method lets you define which queue and connection each job class uses from a single location in a service provider. Previously, teams either set queue properties on each job class or repeated the configuration at every dispatch site.
Centralise that in a Service Provider and you’ve got one place to retune queue topology when your AI workload changes.
Error Handling: What the SDK Does, and What It Doesn’t
The AI SDK handles retry logic and error normalisation internally, but that does not mean you write no error handling. It means you handle fewer low-level concerns. The contracts you still own:
use App\Jobs\InvokeSalesCoachJob;
use Illuminate\Support\Facades\Log;
use Laravel\Ai\Exceptions\AiProviderException;
use Laravel\Ai\Exceptions\RateLimitException;
use Laravel\Ai\Exceptions\ContentFilterException;
try {
$response = SalesCoach::make()->prompt($userInput);
} catch (RateLimitException $e) {
// The SDK exhausted its internal retry budget — back off and re-queue
InvokeSalesCoachJob::dispatch($userInput)->delay(now()->addSeconds(60));
} catch (ContentFilterException $e) {
// The provider flagged the input — log it, don't retry
Log::warning('Content filter triggered', ['input_hash' => hash('sha256', $userInput)]);
return response()->json(['error' => 'Input could not be processed.'], 422);
} catch (AiProviderException $e) {
// Unknown provider error — log full context, fail gracefully
Log::error('AI provider failure', ['message' => $e->getMessage()]);
return response()->json(['error' => 'Service temporarily unavailable.'], 503);
}
The SDK’s internal retry logic handles transient 5xx errors and network timeouts. RateLimitException is thrown when the SDK’s retry budget is exhausted — at that point, you need application-level backpressure, not another retry. Handle these as distinct failure modes. They are.
[Word to the Wise] Logging $e->getMessage() on a provider exception often includes the raw prompt in the message string depending on the provider. Sanitise before logging in any context where the prompt may contain user PII. This is not a theoretical concern — it’s the kind of thing that shows up in a GDPR audit.
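A minimal sanitisation pass before logging; the redaction approach is a sketch to adapt to your own PII policy:

```php
use Illuminate\Support\Facades\Log;

// Log the exception class and a hash of the message, never the raw text,
// so prompt content (and any PII inside it) stays out of your logs while
// the hash still lets you correlate repeated failures.
Log::error('AI provider failure', [
    'exception'    => get_class($e),
    'message_hash' => hash('sha256', $e->getMessage()),
]);
```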
The Bigger Picture: Laravel’s AI Roadmap Signal
The Laravel AI SDK going stable on the same day as Laravel 13 is a deliberate signal: the framework’s roadmap is now AI-first. The documentation now includes a dedicated AI section with entries for the AI SDK, MCP integration, and Laravel Boost — Taylor’s AI-assisted development tooling — and the installation docs add a “Laravel and AI” section that points existing apps to Laravel Boost for AI-assisted development workflows.
Read that as a directional commitment. The primitives shipped in Laravel 13 — provider-agnostic text generation, first-class embeddings, native vector queries, tool-calling agents — are the foundation. What gets built on top of them in 13.x point releases and Laravel 14 will assume they exist. If you’re planning AI features over the next 12 months and you’re not on Laravel 13, you’re doing that planning on a weaker foundation than the framework now offers.
The upgrade path is genuinely low-friction for most apps. The official release notes emphasise minimal breaking changes, and the official upgrade guide estimates about 10 minutes for many applications. The real risk is in custom cache behaviour, request forgery edge cases, and hand-rolled framework integrations.
If you’re running a custom AI integration today — your own OpenAI service class, your own retry decorator, your own token-counting middleware — the migration question is not “should I switch to the AI SDK?” It’s “how quickly can I make the switch before the ecosystem diverges enough that porting gets expensive?” For token tracking and rate limiting specifically, see Laravel AI Middleware: Token Tracking & Rate Limiting — that pattern ports cleanly onto the SDK’s event hooks.
Upgrade Checklist: AI-Specific Steps for Laravel 13
Before you run composer require laravel/framework:^13.0, address the following:
1. PHP version. Laravel 13 requires PHP 8.3 as a minimum. PHP 8.5, released November 2025, is also supported by Laravel 13, bringing further JIT improvements and native URI handling. Check your server runtime before anything else.
2. Remove redundant provider SDKs. If you’re injecting openai-php/client or anthropic/anthropic-sdk-php directly via the Service Container, evaluate whether the AI SDK replaces that dependency entirely. In most cases, it does. Keeping both creates competing abstraction layers.
3. Audit your queue configuration. With Queue::route() now available, centralise your AI job routing in AppServiceProvider. If you’re currently setting public string $queue = 'ai' on individual job classes, that works — but Queue::route() is cleaner and easier to update without touching job classes.
4. Generate and store your config/ai.php. The SDK expects this file. Publish it with:
php artisan vendor:publish --tag=ai-config
Then pin your model names explicitly rather than relying on package defaults. Model defaults change between SDK minor releases. You want to control when your application picks up a new default model — not discover that it happened because a patch updated the SDK.
5. Add pgvector to your Postgres instance if you plan to use whereVectorSimilarTo. The extension is available in RDS, Supabase, and Neon without additional configuration. On self-managed Postgres, install it with:
CREATE EXTENSION IF NOT EXISTS vector;
6. Test your request forgery protection. Laravel 13 formalises PreventRequestForgery with origin-aware verification on top of token-based CSRF. If you have custom CSRF handling or API routes with unusual origin configurations, test them explicitly before deploying.
What’s Not Yet There
Let’s be direct about the gaps.
Streaming responses — where the model outputs tokens progressively rather than in a single payload — are not yet a first-class concern in the AI SDK’s stable release. For chat interfaces that need token streaming, you’ll still be reaching for provider-specific solutions or custom SSE handling. Watch the 13.x changelog; this is an obvious next primitive.
Multi-modal input — sending images or audio to the model rather than from it — is also not yet documented in the stable SDK surface area. It’s likely coming in a 13.x release, but don’t plan around it until it ships.
The vector search integration is Postgres-only for now. If you’re on MySQL or MariaDB, whereVectorSimilarTo is not available. For those stacks, external vector stores (Pinecone, Qdrant, Weaviate) remain the path, and you’ll need your own integration layer.
Final Thoughts
Laravel 13 is not a rewrite. It does not break your application. What it does is establish a new baseline for what “standard Laravel AI work” looks like — and that baseline is considerably higher than it was under Laravel 12. The teams that adopt this now and build their AI features against SDK contracts rather than raw provider clients will have significantly less migration debt when Laravel 14 arrives and extends these primitives further.
Get on PHP 8.3, publish config/ai.php, and start moving your AI layer onto the SDK. The upgrade cost is low. The cost of not doing it compounds.
For the official release notes and full SDK documentation, see the Laravel 13 Release Notes and the Laravel AI SDK documentation directly.
Frequently Asked Questions
What PHP version does Laravel 13 require?
Laravel 13 requires a minimum of PHP 8.3. PHP 8.4 and 8.5 are also fully supported. If you’re still on PHP 8.2 — which Laravel 12 accepted — you cannot upgrade to Laravel 13 without first upgrading your runtime. Check your server, your Docker base image, and any CI pipeline PHP version pins before you touch composer.json.
Does the Laravel 13 AI SDK support OpenAI and Anthropic?
Yes. The Laravel AI SDK is provider-agnostic by design. OpenAI, Anthropic, and other major providers are supported, and you switch between them by changing a single value in config/ai.php — no application code changes required. You can also override the provider at the per-request level using Ai::using(Provider::Anthropic), which is useful in multi-tenant applications where different user tiers map to different models.
Do I need a separate vector database to use semantic search in Laravel 13?
Not if you’re already on PostgreSQL. Laravel 13’s whereVectorSimilarTo() query builder method uses the pgvector extension, which runs inside your existing Postgres instance. For most internal knowledge bases, product search, and RAG retrieval use cases, this is more than sufficient — and eliminates the operational overhead of a separate vector store. The pgvector extension is available by default on RDS, Supabase, and Neon. If you’re on MySQL or MariaDB, this feature is not available and you will need an external vector store.
How difficult is the upgrade from Laravel 12 to Laravel 13?
For most applications, it is a low-effort upgrade. The Laravel team’s stated goal for this release was minimal breaking changes, and the official upgrade guide estimates the process takes around ten minutes for standard applications. The areas that require careful review are custom CSRF handling, hand-rolled framework integrations, and any direct dependencies on provider-specific AI SDKs that the Laravel AI SDK now replaces. Run your test suite, audit your bootstrap/app.php configuration, and check that your PHP runtime meets the 8.3 minimum.
Should I still use the OpenAI PHP SDK directly if I’m on Laravel 13?
In most cases, no. The Laravel AI SDK replaces the need to inject provider-specific clients — OpenAI, Anthropic, or otherwise — directly through the Service Container. Keeping both creates competing abstraction layers and increases the surface area you need to maintain when models or API contracts change. The exception is if you need capabilities the AI SDK does not yet expose, such as streaming responses or multi-modal input to the model. In those cases, a direct provider SDK call is still valid — but treat it as a deliberate, documented exception rather than the default pattern.
Senior Laravel Developer and AI Architect with 10+ years in the trenches. Dewald writes about building resilient, cost-aware AI integrations and modernizing the Laravel developer workflow for the 2026 ecosystem.