
I’m trying to integrate the OpenAI PHP SDK into my Laravel 11 project to generate long-form SEO blog posts.

The integration works fine for short snippets, but when I send a prompt for a 2,000-word article, the request just hangs and eventually throws a 408 Request Timeout from my Nginx proxy. I know I shouldn’t run this directly in the controller, so I moved it to a Laravel Queue Job, but now I’m hitting a new wall.

My Problem:
Even inside the queue, the job is being marked as “failed” after 60 seconds because of the default retry_after setting. OpenAI sometimes takes 90+ seconds to stream a long response, and the worker thinks the job died and tries to restart it, causing a loop of half-finished API calls.

What I’ve tried:
Increased max_execution_time in php.ini (didn’t help, since queue workers run via CLI and ignore that limit for this purpose).
Tried using openai-php/client directly instead of the Laravel wrapper.
Set public $timeout = 120 in my GenerateArticle job class, but the worker still kills it.
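For context, I believe the relevant worker-side setting is retry_after on the queue connection in config/queue.php, which (per the Laravel queue docs) must be larger than any job’s $timeout, otherwise the worker assumes the job died and releases it back onto the queue mid-run. A sketch of what I think the fix looks like (the 180 is my guess, not a documented default):

```php
// config/queue.php -- sketch; retry_after must exceed the job's $timeout,
// otherwise the worker re-dispatches the job while it is still running.
'connections' => [
    'redis' => [
        'driver' => 'redis',
        'connection' => 'default',
        'queue' => env('REDIS_QUEUE', 'default'),
        'retry_after' => 180, // was 60; headroom past the job's $timeout = 120
        'block_for' => null,
    ],
],
```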

public int $timeout = 120; // the per-job timeout I set, but the worker still kills it

public function handle(): void
{
    // This blocking call can take 90+ seconds for a 2,000-word article...
    $response = OpenAI::chat()->create([
        'model' => 'gpt-4-turbo',
        'messages' => [['role' => 'user', 'content' => $this->prompt]],
    ]);

    $this->post->update(['content' => $response->choices[0]->message->content]);
}
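One more thing I’m unsure about: whether the underlying HTTP client is also timing out independently of the queue. When I tried openai-php/client directly, I experimented with passing in my own Guzzle client (sketch; I’m assuming the factory API here, and the 120-second value just mirrors my job timeout):

```php
use GuzzleHttp\Client;

// Sketch: build the OpenAI client with a longer HTTP timeout
// than the default, so the request itself isn't cut off.
$client = \OpenAI::factory()
    ->withApiKey(env('OPENAI_API_KEY'))
    ->withHttpClient(new Client(['timeout' => 120])) // seconds; my guess at a safe value
    ->make();
```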

How do I properly handle these long-running AI tasks without the worker timing out or the frontend hitting a 504? Should I be streaming the response (e.g. via the OpenAI PHP client’s streaming support) and using Laravel Reverb to push the content back to the UI piece-by-piece?
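In case it helps, this is the streamed variant I’m considering (createStreamed from openai-php; sketch, untested — the idea would be to broadcast each chunk over Reverb from inside the loop):

```php
// Sketch: consume the completion chunk-by-chunk instead of one blocking call.
$stream = OpenAI::chat()->createStreamed([
    'model' => 'gpt-4-turbo',
    'messages' => [['role' => 'user', 'content' => $this->prompt]],
]);

foreach ($stream as $response) {
    $delta = $response->choices[0]->delta->content ?? '';
    // e.g. broadcast $delta to the UI via Reverb, and append it to the post
}
```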
