
I’m building a chat application in Laravel that uses the openai-php/laravel package to talk to the ChatGPT API. The AI responses can be quite long, and I want to stream them to the frontend in real time for a better user experience.
I know Laravel supports streaming responses via return response()->stream(...), and that there are frontend packages such as stream-react or stream-view.
The problem: I’m struggling to implement the streaming logic in my Laravel controller and to consume it effectively on the client side (Vue 3 with Inertia.js).

  • How do I correctly set up the response()->stream() function to receive chunks from the OpenAI API and pass them to the client?
  • What is the best way to handle the connection and ensure all data is sent without the request timing out in Laravel?
  • Are there any specific headers I need to set for this to work with an Inertia/Vue frontend?
// Current (problematic) controller code

public function generateResponse(Request $request)
{
    $prompt = $request->input('prompt');

    // This waits for the complete response, which is not what I want for streaming
    $result = OpenAI::chat()->create([
        'model' => 'gpt-3.5-turbo',
        'messages' => [
            ['role' => 'user', 'content' => $prompt],
        ],
        'stream' => true, // I want streaming but don't know how to implement it
    ]);

    // How do I stream the $result chunks to the client?
    return response()->json(['response' => $result['choices'][0]['message']['content']]);
}
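For context, here is roughly what I have pieced together from the openai-php docs so far. It assumes `createStreamed()` is the right method for chunked completions and that the route name and headers below are appropriate; I'm not sure this is correct, which is partly what I'm asking:

```php
use Illuminate\Http\Request;
use OpenAI\Laravel\Facades\OpenAI;
use Symfony\Component\HttpFoundation\StreamedResponse;

public function generateResponse(Request $request): StreamedResponse
{
    $prompt = $request->input('prompt');

    return response()->stream(function () use ($prompt) {
        // createStreamed() yields response chunks as the API produces them
        $stream = OpenAI::chat()->createStreamed([
            'model' => 'gpt-3.5-turbo',
            'messages' => [
                ['role' => 'user', 'content' => $prompt],
            ],
        ]);

        foreach ($stream as $response) {
            $delta = $response->choices[0]->delta->content ?? '';
            if ($delta !== '') {
                echo $delta;
                // Push the chunk to the client immediately instead of buffering
                if (ob_get_level() > 0) {
                    ob_flush();
                }
                flush();
            }
        }
    }, 200, [
        'Content-Type'      => 'text/plain; charset=utf-8',
        'Cache-Control'     => 'no-cache',
        'X-Accel-Buffering' => 'no', // I believe this disables nginx proxy buffering
    ]);
}
```

Is the `ob_flush()`/`flush()` combination the right way to force chunks out, and are those headers sufficient?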

Any guidance or code examples for the controller and frontend logic would be greatly appreciated!
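On the client, this is roughly how I was imagining consuming the stream. Since (as far as I understand) Inertia's router doesn't expose the raw response body, I'm calling the streaming endpoint directly with fetch; the `/chat/generate` URL and the `onChunk` callback are just placeholders from my setup:

```javascript
// Sketch: read a streamed response body chunk by chunk.
// onChunk receives the accumulated reply so far, e.g. to assign
// to a Vue 3 ref so the message re-renders as text arrives.
async function streamResponse(prompt, onChunk) {
  const response = await fetch('/chat/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt }),
  });

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let reply = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // stream: true keeps multi-byte characters split across chunks intact
    reply += decoder.decode(value, { stream: true });
    onChunk(reply);
  }

  return reply;
}
```

Does this approach play well with Inertia, or is there a more idiomatic way to wire it into a Vue 3 component?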
