Laravel SSE (Server-Sent Events over a persistent HTTP connection) is deceptively easy to get running in development, and deceptively fragile once real users hit it. Getting a basic stream working takes twenty minutes. Getting Laravel SSE to hold up reliably in production, under concurrent load, across multiple tenants, with a client that reconnects cleanly after a dropped connection, takes considerably longer.
We are going to cover the specific failure modes that only surface once SSE is live: what happens when a client reconnects mid-stream, how PHP-FPM and Nginx conspire to silently kill long-lived connections, and how to isolate event streams per tenant without introducing a data-leak vector. There is also a testing deep-dive, because SSE is one of those features that looks fine until a specific sequence of events exposes the gap.
What SSE Actually Commits You To
Before we get into failure modes, it is worth being honest about the trade-off you are making with SSE versus WebSockets. SSE is unidirectional: the server pushes and the client receives. The browser handles reconnection natively via the EventSource API, and the protocol is just HTTP, which means it plays nicely with your existing Nginx config, authentication middleware, and load balancers—until it does not.
The constraint that catches teams is connection persistence. Each connected client holds an open HTTP connection for the duration of the session. Under PHP-FPM, that means a worker process is tied up for the lifetime of that connection. With the default pm.max_children sitting somewhere between 5 and 50 depending on your server class, a few dozen concurrent SSE clients can saturate your entire PHP-FPM pool. That is not a theoretical concern—it is the first thing that breaks.
The alternative is to push SSE responses through a non-blocking driver. We will cover both approaches.
The Route and Stream Controller
Laravel 11 and 12 handle SSE through a StreamedResponse (recent releases also ship an eventStream() response helper, but the hand-rolled stream below gives us the control this article needs). The route itself is straightforward:
```php
// routes/api.php
Route::get('/stream/events', StreamEventsController::class)
    ->middleware(['auth:sanctum', 'throttle:sse']);
```
Note the dedicated throttle:sse rate limiter—this is not optional in production. Named limiters are defined with RateLimiter::for, typically in the boot method of App\Providers\AppServiceProvider (if you also want throttle counters stored in Redis rather than your default cache store, call $middleware->throttleWithRedis() in bootstrap/app.php):

```php
// app/Providers/AppServiceProvider.php
use Illuminate\Cache\RateLimiting\Limit;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\RateLimiter;

public function boot(): void
{
    RateLimiter::for('sse', function (Request $request) {
        return Limit::perMinute(10)->by($request->user()?->id ?: $request->ip());
    });
}
```
The controller:
```php
<?php

namespace App\Http\Controllers;

use App\Services\TenantEventStream;
use Illuminate\Http\Request;
use Symfony\Component\HttpFoundation\StreamedResponse;

class StreamEventsController
{
    public function __invoke(Request $request, TenantEventStream $stream): StreamedResponse
    {
        $tenant = $request->user()->tenant;
        $lastId = $request->header('Last-Event-ID');

        return response()->stream(function () use ($tenant, $lastId, $stream) {
            $stream->pipe($tenant, $lastId, function (string $frame) {
                // $frame is already a fully formatted SSE frame
                // (id:, event:, data:, retry: lines), so emit it as-is —
                // wrapping it in another "data:" line would double-encode it.
                echo $frame;

                if (ob_get_level() > 0) {
                    ob_flush();
                }
                flush();
            });
        }, 200, [
            'Content-Type' => 'text/event-stream',
            'Cache-Control' => 'no-cache',
            'X-Accel-Buffering' => 'no',
            'Connection' => 'keep-alive',
        ]);
    }
}
```
Two things here that earn their place. X-Accel-Buffering: no disables Nginx’s proxy buffering for this response—without it, Nginx will silently hold your events and batch-flush them, which breaks the real-time contract entirely. The Last-Event-ID header extraction is the foundation of reconnect logic, covered next.
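One wire-format detail worth internalising before going further: if an event payload ever contains a raw newline, it must be split across multiple data: fields, because the browser rejoins them with a newline on dispatch and a naive single echo produces a malformed frame. A framework-free sketch (the helper name sseDataLines is ours, not Laravel's):

```php
<?php

// Emit a payload as one SSE data block: each line of the payload
// becomes its own "data:" field, and a blank line ends the event.
// json_encode() output never contains raw newlines, so single-line
// emission is safe for JSON payloads; this matters for plain text.
function sseDataLines(string $payload): string
{
    $fields = array_map(
        fn (string $line): string => "data: {$line}\n",
        explode("\n", $payload)
    );

    return implode('', $fields) . "\n";
}
```

For example, sseDataLines("line one\nline two") produces two data: fields that the browser dispatches as a single two-line message.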
Client Reconnect Logic
The browser’s EventSource API reconnects automatically after a dropped connection. By default it waits a few seconds (the exact interval is browser-specific) and reconnects to the same URL. What it does not do automatically is tell the server where it left off—unless you use the id and retry fields in your event stream correctly.
Every event your server emits should carry an id:
```php
private function formatEvent(string $data, string $id, string $event = 'message'): string
{
    return "id: {$id}\nevent: {$event}\ndata: {$data}\nretry: 5000\n\n";
}
```
The retry field tells the browser how long to wait before reconnecting—5000ms is a reasonable production value. The id field is what the browser sends back as Last-Event-ID on reconnect.
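To make the browser's side of this contract concrete, here is a dependency-free parser that interprets a frame roughly the way EventSource does (parseSseFrame is a hypothetical helper for illustration only; the browser does this for you):

```php
<?php

// Parse a single SSE frame into its fields, EventSource-style:
// one "field: value" per line, multiple data: lines joined with "\n",
// comment lines (starting with ":") and unknown fields ignored.
function parseSseFrame(string $frame): array
{
    $fields = ['id' => null, 'event' => 'message', 'data' => [], 'retry' => null];

    foreach (explode("\n", rtrim($frame, "\n")) as $line) {
        if ($line === '' || str_starts_with($line, ':')) {
            continue; // blank lines and comments carry no data
        }

        [$field, $value] = array_pad(explode(':', $line, 2), 2, '');
        $value = ltrim($value, ' ');

        match ($field) {
            'data'  => $fields['data'][] = $value,
            'id'    => $fields['id'] = $value,
            'event' => $fields['event'] = $value,
            'retry' => $fields['retry'] = (int) $value,
            default => null, // unknown fields are ignored
        };
    }

    $fields['data'] = implode("\n", $fields['data']);

    return $fields;
}
```

Note how a comment-only frame (the heartbeat pattern used later) parses to empty data: it keeps the connection warm without dispatching an event.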
On the server, when a client reconnects with a Last-Event-ID, you need to replay missed events. This is where your event store matters:
```php
<?php

namespace App\Services;

use App\Models\Tenant;
use Illuminate\Support\Facades\Redis;

class TenantEventStream
{
    private const MAX_REPLAY = 100;
    private const POLL_INTERVAL = 1;       // seconds
    private const HEARTBEAT_INTERVAL = 25; // seconds between keepalive comments
    private const MAX_DURATION = 60;       // seconds before we close and let client reconnect

    public function pipe(Tenant $tenant, ?string $lastId, callable $emit): void
    {
        $channel = $this->channelKey($tenant);
        $elapsed = 0;

        // Replay missed events first
        if ($lastId !== null) {
            $this->replay($channel, $lastId, $emit);
        }

        // Emit a comment heartbeat to keep the connection alive
        echo ": heartbeat\n\n";
        if (ob_get_level() > 0) {
            ob_flush();
        }
        flush();

        while ($elapsed < self::MAX_DURATION) {
            if (connection_aborted()) {
                break;
            }

            $events = Redis::lrange("{$channel}:pending", 0, -1);

            foreach ($events as $raw) {
                $event = json_decode($raw, true);
                $emit($this->formatEvent($event['data'], $event['id'], $event['type']));
                Redis::lrem("{$channel}:pending", 0, $raw); // count 0 removes all occurrences
            }

            sleep(self::POLL_INTERVAL);
            $elapsed += self::POLL_INTERVAL;

            // Re-send the heartbeat periodically so proxies never see
            // a long stretch of silence
            if ($elapsed % self::HEARTBEAT_INTERVAL === 0) {
                echo ": heartbeat\n\n";
                if (ob_get_level() > 0) {
                    ob_flush();
                }
                flush();
            }
        }

        // Graceful close — client will reconnect
        echo "event: reconnect\ndata: {}\n\n";
        if (ob_get_level() > 0) {
            ob_flush();
        }
        flush();
    }

    private function replay(string $channel, string $lastId, callable $emit): void
    {
        $history = Redis::lrange("{$channel}:history", 0, self::MAX_REPLAY - 1);
        $found = false;

        foreach ($history as $raw) {
            $event = json_decode($raw, true);

            if ($found) {
                $emit($this->formatEvent($event['data'], $event['id'], $event['type']));
            }

            if ($event['id'] === $lastId) {
                $found = true;
            }
        }
    }

    private function channelKey(Tenant $tenant): string
    {
        return "sse:tenant:{$tenant->id}";
    }

    private function formatEvent(string $data, string $id, string $event = 'message'): string
    {
        return "id: {$id}\nevent: {$event}\ndata: {$data}\nretry: 5000\n\n";
    }
}
```
The MAX_DURATION ceiling is deliberate. We intentionally close the connection after 60 seconds and emit a reconnect event. The client reconnects immediately. This approach keeps PHP-FPM workers from being monopolised indefinitely and gives you a natural opportunity to rotate authentication context on each reconnect. Yes, it adds latency to the reconnect cycle. In practice, with a 5-second retry, the gap is barely perceptible to the user.
[Production Pitfall] Do not use set_time_limit(0) in a streaming controller. It sounds like the right call—just keep the connection open forever—but it means a single misbehaving client or network partition can hold a PHP-FPM worker hostage indefinitely. Under load, this degrades your entire application, not just the SSE endpoint. Set a ceiling and reconnect gracefully.
Connection Drops Under Load
The PHP-FPM bottleneck is real. The canonical solution is to handle SSE at the process level, not the worker level. Several approaches are worth comparing:
| Approach | Concurrency Limit | Memory Profile | Complexity |
|---|---|---|---|
| PHP-FPM polling (as above) | ~pm.max_children | Low per worker | Low |
| Laravel Octane (Swoole) | Thousands | Higher baseline | Medium |
| Dedicated Node/Go microservice | Very high | Depends | High |
| Reverb + custom SSE adapter | High | Medium | Medium |
For most Laravel applications, the PHP-FPM polling approach works up to a few hundred concurrent SSE clients—provided you tune aggressively. Beyond that, Octane with Swoole is the path of least resistance.
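The arithmetic behind that ceiling is worth doing explicitly for your own hardware. A back-of-envelope sketch, with assumed numbers (64 MB per FPM worker is a placeholder; measure yours with ps):

```php
<?php

// Rough PHP-FPM sizing for SSE. pm.max_children is bounded by RAM,
// and every concurrent SSE client pins one worker for the life of
// the connection, so subtract them from the pool serving normal traffic.
function fpmMaxChildren(int $availableRamMb, int $perWorkerMb): int
{
    return intdiv($availableRamMb, $perWorkerMb);
}

function workersLeftForRegularTraffic(int $maxChildren, int $concurrentSseClients): int
{
    return max(0, $maxChildren - $concurrentSseClients);
}
```

On a box with 4 GB to spare at 64 MB per worker, fpmMaxChildren(4096, 64) yields 64 workers; 50 concurrent SSE clients leave only 14 for every other request.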
If you are already running Laravel on a production Nginx stack, you need to add SSE-specific directives to prevent Nginx from treating your stream like a standard buffered HTTP response:
```nginx
location /stream/events {
    proxy_pass http://php-fpm;
    proxy_http_version 1.1;
    proxy_set_header Connection '';
    proxy_buffering off;
    proxy_cache off;
    proxy_read_timeout 120s;
    chunked_transfer_encoding on;
}
```
The proxy_read_timeout setting is critical. Nginx’s default is 60 seconds—it will terminate your long-lived connection before your application even gets a chance to send a heartbeat. Set it higher than your MAX_DURATION ceiling. One caveat: the proxy_* directives only apply when Nginx reaches PHP over HTTP (an upstream proxy, as with Octane). If Nginx talks to PHP-FPM over FastCGI, use the fastcgi_buffering off and fastcgi_read_timeout equivalents instead.
For the heartbeat, the comment-style keepalive (echo ": heartbeat\n\n") is not cosmetic. AWS ALB and many proxies will terminate idle connections after 60 seconds. A heartbeat every 20–30 seconds keeps them open:
```php
private function emitHeartbeat(): void
{
    echo ': heartbeat ' . now()->timestamp . "\n\n";

    if (ob_get_level() > 0) {
        ob_flush();
    }
    flush();
}
```
Multi-Tenant Event Stream Isolation
This is where most SSE implementations carry a silent security risk. If your channel keys or subscription logic is naive, a reconnect under a different tenant’s credentials—or a race condition during authentication—can result in events from one tenant being visible to another.
The isolation model needs to be enforced at three layers:
1. Channel key construction must be deterministic and opaque.
Never use guessable identifiers like tenant_1 or the tenant’s domain slug as the Redis key. Derive the channel key from a combination of the tenant ID and a secret:
```php
private function channelKey(Tenant $tenant): string
{
    return 'sse:' . hash_hmac('sha256', (string) $tenant->id, config('app.key'));
}
```
This means even if someone discovers the Redis key pattern, they cannot construct a valid key for another tenant without the application key.
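The properties that make this derivation safe are easy to verify in isolation: stable for the same tenant, distinct across tenants, and dependent on the secret. A standalone sketch of the same logic outside Laravel (deriveChannelKey is an illustrative name):

```php
<?php

// Standalone version of the HMAC channel-key derivation: the key is
// deterministic per tenant, but cannot be forged without the secret.
function deriveChannelKey(int|string $tenantId, string $appSecret): string
{
    return 'sse:' . hash_hmac('sha256', (string) $tenantId, $appSecret);
}
```

Rotating the application key invalidates every derived channel key at once, which is worth knowing before you rotate APP_KEY in production.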
2. Authentication must be re-validated on every reconnect, not cached.
Because Laravel Sanctum token authentication runs through standard middleware, the token is validated on each request—which includes each SSE reconnect. Do not bypass this with a long-lived cookie or a shared session. Each reconnect is a fresh HTTP request; treat it as one.
```php
// The middleware stack handles this, but be explicit about ability scoping
Route::get('/stream/events', StreamEventsController::class)
    ->middleware(['auth:sanctum', 'ability:stream:read', 'throttle:sse']);
```
3. The event publisher must scope to the tenant at write time, not read time.
When a job or service publishes an event to Redis, it must use the same HMAC-derived key. Any event published without scoping is a data-leak waiting to happen:
```php
<?php

namespace App\Services;

use App\Models\Tenant;
use Illuminate\Support\Facades\Redis;
use Illuminate\Support\Str;

class EventPublisher
{
    private const HISTORY_TTL = 300; // 5 minutes
    private const MAX_HISTORY = 500;

    public function publish(Tenant $tenant, string $type, array $data): void
    {
        $id = (string) Str::uuid();
        $channel = $this->channelKey($tenant);

        $payload = json_encode([
            'id' => $id,
            'type' => $type,
            'data' => json_encode($data),
        ]);

        Redis::pipeline(function ($pipe) use ($channel, $payload) {
            // Pending queue (consumed by active connections)
            $pipe->rpush("{$channel}:pending", $payload);

            // History (for reconnect replay)
            $pipe->rpush("{$channel}:history", $payload);
            $pipe->ltrim("{$channel}:history", -self::MAX_HISTORY, -1);
            $pipe->expire("{$channel}:history", self::HISTORY_TTL);
        });
    }

    private function channelKey(Tenant $tenant): string
    {
        return 'sse:' . hash_hmac('sha256', (string) $tenant->id, config('app.key'));
    }
}
```
The Redis pipeline here buys you a single round-trip for all four writes, but pipelining alone is not atomic: commands from other clients can interleave between the queued writes. If the pending and history writes must be all-or-nothing, wrap them in a MULTI/EXEC transaction (Redis::transaction in Laravel) instead. Either way, guard against the failure mode that matters: a partial write that adds to pending but misses history will corrupt your replay logic.
[Architect’s Note] If you are building on a multi-tenant architecture where tenants can have multiple concurrent users—not just a single tenant-wide stream—you need an additional scoping layer. Derive the channel key from tenant_id + user_id, or introduce a subscription model where each user subscribes to a named channel. A single tenant-wide stream is fine for administrative dashboards or background job status feeds, but it breaks down the moment different users within the same tenant need different event visibility.
Publishing from Queued Jobs
In most real applications, events do not originate from a controller—they come from queued jobs, webhooks, or scheduled commands. Here is how a queued job publishes to the tenant event stream cleanly, assuming the production-grade queuing architecture you already have:
```php
<?php

namespace App\Jobs;

use App\Models\Tenant;
use App\Services\EventPublisher;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Log;

class BroadcastTenantEvent implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public int $tries = 3;
    public int $backoff = 5;

    public function __construct(
        private readonly Tenant $tenant,
        private readonly string $type,
        private readonly array $data,
    ) {}

    public function handle(EventPublisher $publisher): void
    {
        $publisher->publish($this->tenant, $this->type, $this->data);
    }

    public function failed(\Throwable $e): void
    {
        Log::error('SSE event publish failed', [
            'tenant' => $this->tenant->id,
            'type' => $this->type,
            'error' => $e->getMessage(),
        ]);
    }
}
```
The $tries and $backoff values are there because Redis publish failures under memory pressure are a real production scenario. The job will retry with a 5-second backoff rather than silently dropping events.
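If you want the gap between attempts to widen, Laravel also accepts an array of delays (or a backoff() method on the job) instead of a flat $backoff value. A small sketch of a doubling schedule:

```php
<?php

// Build a doubling retry schedule: with 3 tries starting at 5 seconds
// this yields delays of 5, 10, and 20 seconds. Returning such an array
// from a job's backoff() method gives per-attempt delays.
function backoffSchedule(int $tries, int $firstSeconds): array
{
    $schedule = [];

    for ($attempt = 0; $attempt < $tries; $attempt++) {
        $schedule[] = $firstSeconds * (2 ** $attempt);
    }

    return $schedule;
}
```

For transient Redis memory pressure, the widening gaps give the server more room to recover than three rapid-fire retries would.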
Testing Deep-Dive
SSE is notoriously awkward to test. The streaming response does not behave like a standard JSON response, and the reconnect and replay logic introduces temporal dependencies that PHPUnit’s synchronous execution model does not naturally accommodate. Here is how to cover it meaningfully.
Unit Testing the Stream Service
Test the TenantEventStream service in isolation by mocking Redis and verifying that the correct event payloads are emitted in the correct order:
```php
<?php

namespace Tests\Unit;

use App\Models\Tenant;
use App\Services\TenantEventStream;
use Illuminate\Support\Facades\Redis;
use Tests\TestCase;

class TenantEventStreamTest extends TestCase
{
    public function test_replays_events_after_last_id(): void
    {
        $tenant = Tenant::factory()->make(['id' => 1]);
        $channel = 'sse:' . hash_hmac('sha256', '1', config('app.key'));

        $events = [
            json_encode(['id' => 'evt-1', 'type' => 'update', 'data' => '{"foo":"bar"}']),
            json_encode(['id' => 'evt-2', 'type' => 'update', 'data' => '{"foo":"baz"}']),
            json_encode(['id' => 'evt-3', 'type' => 'update', 'data' => '{"foo":"qux"}']),
        ];

        Redis::shouldReceive('lrange')
            ->with("{$channel}:history", 0, 99)
            ->once()
            ->andReturn($events);

        Redis::shouldReceive('lrange')
            ->with("{$channel}:pending", 0, -1)
            ->andReturn([]);

        $emitted = [];
        $stream = $this->app->make(TenantEventStream::class);

        // We override MAX_DURATION to 0 so the loop exits immediately —
        // via reflection or a test subclass (see the note after this listing)
        $stream->pipe($tenant, 'evt-1', function (string $payload) use (&$emitted) {
            $emitted[] = $payload;
        });

        $this->assertCount(2, $emitted); // evt-2 and evt-3 only
        $this->assertStringContainsString('evt-2', $emitted[0]);
        $this->assertStringContainsString('evt-3', $emitted[1]);
    }
}
```
For the loop-duration override, extract MAX_DURATION as a constructor-injectable value—or expose it as a public constant that a test subclass can override. Hardcoded private constants are a testing smell.
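One way to act on that advice is a small value object with production defaults, injected into TenantEventStream (StreamTimings is a name we are inventing here; the point is the shape, not the name):

```php
<?php

// Constructor-injectable timings: production code uses the defaults,
// a test passes maxDurationSeconds: 0 so pipe() exits immediately
// without reflection or a subclass.
final class StreamTimings
{
    public function __construct(
        public readonly int $maxDurationSeconds = 60,
        public readonly int $pollIntervalSeconds = 1,
    ) {}
}
```

Bind the production instance in a service provider, and construct new StreamTimings(maxDurationSeconds: 0) directly in tests.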
Integration Testing the HTTP Response
Laravel’s TestResponse does not natively support streaming assertions, but you can test the response headers and the initial output structure:
```php
public function test_sse_response_has_correct_headers(): void
{
    $user = User::factory()->withTenant()->create();

    $response = $this->actingAs($user, 'sanctum')
        ->get('/stream/events');

    $response->assertStatus(200);
    // Symfony appends the charset for text/* types and normalises
    // Cache-Control, so assert against the prepared header values
    $response->assertHeader('Content-Type', 'text/event-stream; charset=UTF-8');
    $response->assertHeader('X-Accel-Buffering', 'no');
    $response->assertHeader('Cache-Control', 'no-cache, private');
}

public function test_unauthenticated_request_is_rejected(): void
{
    $this->get('/stream/events')->assertUnauthorized();
}

public function test_tenant_isolation_different_users_cannot_share_stream(): void
{
    $tenantA = Tenant::factory()->create();
    $tenantB = Tenant::factory()->create();

    $keyA = 'sse:' . hash_hmac('sha256', (string) $tenantA->id, config('app.key'));
    $keyB = 'sse:' . hash_hmac('sha256', (string) $tenantB->id, config('app.key'));

    // Channel keys must differ
    $this->assertNotEquals($keyA, $keyB);

    // Spy on Redis so we can assert what was (and was not) written
    Redis::spy();

    (new EventPublisher())->publish($tenantA, 'update', ['value' => 42]);

    Redis::shouldHaveReceived('pipeline')->once();

    // Assert tenant B's key was never written to
    Redis::shouldNotHaveReceived('rpush', ["{$keyB}:pending", \Mockery::any()]);
}
```
Testing Reconnect Replay with Pest
If your test suite uses Pest, the dataset feature makes reconnect scenario coverage readable:
```php
dataset('reconnect_scenarios', [
    'reconnect from first event' => ['evt-1', 2],  // expects 2 replayed events
    'reconnect from second event' => ['evt-2', 1], // expects 1 replayed event
    'reconnect from last event' => ['evt-3', 0],   // expects nothing
    'unknown last id' => ['evt-99', 0],            // unknown id, no replay
]);

it('replays correct events on reconnect', function (string $lastId, int $expected) {
    // ... setup and assertion
})->with('reconnect_scenarios');
```
This covers the edge cases that are trivially easy to miss: what happens when the Last-Event-ID is stale (no longer in the history buffer), and what happens when it points to the last event in history (nothing to replay, connection continues normally).
[Edge Case Alert] If your Redis history buffer has been trimmed—because MAX_HISTORY was hit—a reconnecting client might send a Last-Event-ID that no longer exists in history. Your replay function needs to handle this gracefully: if the ID is not found, either replay the entire available history or emit a full-refresh event that tells the client to re-fetch state from your API rather than relying on incremental updates.
```php
private function replay(string $channel, string $lastId, callable $emit): void
{
    $history = Redis::lrange("{$channel}:history", 0, self::MAX_REPLAY - 1);
    $found = false;
    $buffer = [];

    foreach ($history as $raw) {
        $event = json_decode($raw, true);

        if ($found) {
            $buffer[] = $raw;
        }

        if ($event['id'] === $lastId) {
            $found = true;
        }
    }

    if (! $found) {
        // Last ID has been evicted — instruct client to do a full refresh
        echo "event: full-refresh\ndata: {}\n\n";

        if (ob_get_level() > 0) {
            ob_flush();
        }
        flush();

        return;
    }

    foreach ($buffer as $raw) {
        $event = json_decode($raw, true);
        $emit($this->formatEvent($event['data'], $event['id'], $event['type']));
    }
}
```
Monitoring and Observability
Streaming connections are invisible to standard Laravel Telescope request logging—because from Telescope’s perspective, the request is still in progress. You need to instrument the stream explicitly.
The same pattern you would use for a Filament admin dashboard applies here: track active SSE connections in Redis with a TTL, and surface that count in your admin panel:
```php
// On connection open
Redis::setex("sse:active:{$tenant->id}:{$connectionId}", 90, 1);

// On heartbeat — renew TTL
Redis::expire("sse:active:{$tenant->id}:{$connectionId}", 90);

// On connection close (connection_aborted or MAX_DURATION reached)
Redis::del("sse:active:{$tenant->id}:{$connectionId}");

// Query active connection count. KEYS is a blocking O(N) scan of the
// whole keyspace — fine at small scale, but prefer SCAN or a maintained
// counter once the keyspace grows.
$count = count(Redis::keys("sse:active:{$tenant->id}:*"));
```
Use the 90-second TTL as a dead-man’s switch. If a connection dies without a clean close—network partition, crashed PHP-FPM worker—the key expires automatically and your count stays accurate.
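The dead-man's-switch behaviour is easy to convince yourself of without a Redis instance. A toy in-memory equivalent (ConnectionRegistry is illustrative only) with the same open/heartbeat/expire semantics:

```php
<?php

// Toy in-memory stand-in for the Redis dead-man's switch: each
// connection records a deadline, heartbeats push it forward, and
// counting skips anything whose deadline has passed — exactly what
// the TTL gives you for free in Redis.
final class ConnectionRegistry
{
    /** @var array<string, int> connection id => expiry timestamp */
    private array $deadlines = [];

    public function __construct(private readonly int $ttlSeconds = 90) {}

    public function open(string $id, int $now): void
    {
        $this->deadlines[$id] = $now + $this->ttlSeconds;
    }

    public function heartbeat(string $id, int $now): void
    {
        if (isset($this->deadlines[$id])) {
            $this->deadlines[$id] = $now + $this->ttlSeconds;
        }
    }

    public function close(string $id): void
    {
        unset($this->deadlines[$id]);
    }

    public function activeCount(int $now): int
    {
        return count(array_filter(
            $this->deadlines,
            fn (int $deadline): bool => $deadline > $now
        ));
    }
}
```

A connection that stops heartbeating simply falls out of the count once its deadline passes, with no cleanup code on the hot path.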
The Production Checklist
Before SSE goes live, run through this:
| Concern | Check |
|---|---|
| PHP-FPM pm.max_children | Set relative to expected concurrent SSE connections |
| Nginx proxy_read_timeout | Higher than MAX_DURATION |
| X-Accel-Buffering: no | Present in response headers |
| Heartbeat interval | ≤ 30s to survive ALB/proxy idle timeouts |
| Redis history TTL | Covers your expected reconnect window |
| Channel key derivation | HMAC, not guessable slug |
| Auth re-validated on reconnect | Confirmed via Sanctum middleware |
| full-refresh fallback | Handled in client and server |
| Rate limiter on SSE route | throttle:sse active |
| connection_aborted() check | Inside polling loop |
Senior Laravel Developer and AI Architect with 10+ years in the trenches. Dewald writes about building resilient, cost-aware AI integrations and modernizing the Laravel developer workflow for the 2026 ecosystem.

