Learning how to deploy Laravel to production is one of those milestones that separates developers who build things from developers who ship things. The framework makes local development frictionless — php artisan serve, a .env file, and you’re writing features within minutes. Production is a different environment with different rules, different failure modes, and real consequences when something goes wrong. This guide covers the full pipeline: server provisioning, Nginx configuration, environment hardening, Composer optimisation, queue workers, scheduled tasks, zero-downtime releases, and monitoring — so you ship with confidence the first time.
Understanding What “Production” Actually Means
Production isn’t just a server with your code on it. It’s a contract. Every request corresponds to a real user expecting the application to behave correctly, return quickly, and not expose their data. If you’re still weighing whether Laravel is the right framework for that in 2026, settle that first. This guide assumes you already have.
Three things define a production environment that a local development setup does not:
Persistence. Your database, uploaded files, and cached state must survive deployments, reboots, and horizontal scaling. Nothing ephemeral belongs in a production container or server filesystem unless you explicitly manage it.
Observability. You cannot dd() your way through a production bug. You need structured logs, error tracking, and performance metrics before your first real user arrives — not after.
Repeatability. Every deploy must be scripted. If you’re running git pull and composer install manually, you’ve already made a mistake. Not because it won’t work once, but because it will fail at the worst possible moment.
If your local stack doesn’t mirror production PHP version for version, you’re doing exploratory work, not engineering — our guide on setting up a Laravel development stack that mirrors production covers exactly how to close that gap before you ever run a deploy.
Server Requirements for Laravel 12
Laravel 12 requires PHP 8.2 or higher. That’s a hard requirement, not a recommendation. Beyond the PHP version, your server needs the following extensions enabled:
BCMath, Ctype, cURL, DOM, Fileinfo, Filter, Hash, Mbstring, OpenSSL, PCRE, PDO, Session, Tokenizer, XML
These are almost always present on a standard LEMP stack (Linux, Nginx, MySQL, PHP), but Fileinfo and cURL occasionally get missed on minimal OS installs. Verify them:
php -m | grep -E "curl|fileinfo|mbstring|pdo|xml|tokenizer"
If any are absent, install the missing modules. On Ubuntu:
sudo apt install php8.2-curl php8.2-mbstring php8.2-xml php8.2-fileinfo
Recommended stack for 2026:
| Component | Recommendation | Notes |
|---|---|---|
| OS | Ubuntu 24.04 LTS | Extended support until 2029 |
| PHP | 8.3 | Security + JIT improvements |
| Web Server | Nginx | Or FrankenPHP for Octane |
| Database | MySQL 8.0 / PostgreSQL 16 | |
| Cache / Queue | Redis 7+ | Single driver for both |
| Process Supervisor | Supervisor 4 | For queue workers |
| Deployment | Laravel Forge / Deployer PHP | |
A note on FrankenPHP: Laravel 12’s official docs list it alongside Nginx as a first-class server option. If you’re building a high-throughput API or considering Laravel Octane, FrankenPHP is worth evaluating — it supports persistent worker mode which eliminates PHP bootstrap overhead per request. For a standard application, Nginx + PHP-FPM remains the simpler and more operator-familiar choice.
For a full breakdown of where each of these components sits in a production-grade Laravel workflow — deployment platforms, caching layers, and developer tooling — our top 10 Laravel development tools for 2026 covers the full stack.
Provisioning Your Server: Managed vs. DIY
You have two meaningful paths here. Choose honestly based on your situation.
Option A: Laravel Forge (Recommended for Most Teams)
Laravel Forge provisions and manages your server on any major cloud provider (AWS, DigitalOcean, Hetzner, Linode, Vultr). It handles Nginx configuration, PHP-FPM, SSL certificates via Let’s Encrypt, Redis, Supervisor, deployment scripts, and database management through a UI and CLI. The cost is $15–19/month.
The honest recommendation: if you’re a solo developer or a small team and your core competency is application code, not infrastructure, Forge is not laziness. It’s the correct economic decision. The Nginx configurations it generates are production-hardened. The deployment scripts it creates are correct by default. You avoid a class of operational mistakes that cost engineers days.
Manually provisioning servers doesn’t scale, and more to the point, it doesn’t reproduce — if you’re managing more than one environment, our piece on Infrastructure as Code for reliable systems explains how to codify your stack before it becomes technical debt.
Option B: Self-Managed VPS (DIY)
If you’re going DIY — valid, especially if you’re learning or have specific compliance requirements — here is the minimal provisioning sequence for Ubuntu 24.04:
# Update system
sudo apt update && sudo apt upgrade -y

# Install Nginx
sudo apt install nginx -y

# Install PHP 8.3 and required extensions
sudo add-apt-repository ppa:ondrej/php
sudo apt update
sudo apt install php8.3-fpm php8.3-cli php8.3-mysql php8.3-redis \
    php8.3-curl php8.3-mbstring php8.3-xml php8.3-zip \
    php8.3-fileinfo php8.3-bcmath php8.3-tokenizer -y

# Install Composer
curl -sS https://getcomposer.org/installer | php
sudo mv composer.phar /usr/local/bin/composer

# Install Redis
sudo apt install redis-server -y
sudo systemctl enable redis-server
sudo systemctl start redis-server

# Install Supervisor
sudo apt install supervisor -y
sudo systemctl enable supervisor
sudo systemctl start supervisor
The provisioning commands above are predictable enough that modern AI-assisted development tools can generate, validate, and audit them — if you haven’t evaluated what’s available for Laravel developers in 2026, infrastructure configuration is precisely where they earn their keep.
Create a dedicated application user. Never run your application as root or www-data with write access to your entire filesystem:
sudo adduser deployer
sudo usermod -aG www-data deployer
Nginx Configuration for Laravel
This is where more deployments silently break than anywhere else. The most critical rule: Nginx must point to your public/ directory, not the project root. The public/index.php file is the single entry point for your entire application. Exposing the project root would make your .env file publicly accessible. That is not a theoretical risk.
server {
listen 80;
listen [::]:80;
server_name yourdomain.com www.yourdomain.com;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name yourdomain.com www.yourdomain.com;
root /var/www/your-app/public;
index index.php;
# SSL Configuration (managed by Certbot/Let's Encrypt)
ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
ssl_prefer_server_ciphers off;
# Security Headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
charset utf-8;
# Gzip
gzip on;
gzip_vary on;
gzip_types text/plain text/css application/json application/javascript text/xml;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt { access_log off; log_not_found off; }
error_page 404 /index.php;
location ~ \.php$ {
fastcgi_pass unix:/var/run/php/php8.3-fpm.sock;
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
include fastcgi_params;
fastcgi_hide_header X-Powered-By;
}
# Deny access to hidden files
location ~ /\.(?!well-known).* {
deny all;
}
}
Install SSL with Certbot:
sudo apt install certbot python3-certbot-nginx -y
sudo certbot --nginx -d yourdomain.com -d www.yourdomain.com
Certbot will auto-renew. Test the renewal process:
sudo certbot renew --dry-run
Environment Configuration and Security Hardening
Your .env file is not a deployment artefact — it is a secret store. It belongs on the server directly. It never belongs in your Git repository.
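Since the file lives on the server, lock its permissions down so only the deploy user and the web-server group can read it. A minimal sketch, assuming the `deployer:www-data` ownership set up earlier (the `touch` exists only so the snippet is self-contained; run against your real deploy root):

```shell
# Restrict .env: owner read/write, web-server group read-only, no world access.
touch .env                                        # illustration only — use your real .env
chown deployer:www-data .env 2>/dev/null || true  # needs sudo on a real server
chmod 640 .env
stat -c '%a' .env
```

With `640`, the application (running as a `www-data` group member) can read the secrets, but no other user on the box can.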
Key Generation
php artisan key:generate
Run this once on initial setup. Never regenerate a production key unless you are intentionally invalidating all existing sessions and encrypted data. Your APP_KEY is used by Laravel’s Encrypter to protect all encrypted model attributes, cookies, and session data.
Critical Production .env Settings
APP_NAME="Your Application"
APP_ENV=production
APP_KEY=base64:GENERATED_KEY_HERE
APP_DEBUG=false
APP_URL=https://yourdomain.com

LOG_CHANNEL=stack
LOG_LEVEL=error

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=your_database
DB_USERNAME=your_db_user
DB_PASSWORD=strong_password_here

CACHE_STORE=redis
SESSION_DRIVER=redis
QUEUE_CONNECTION=redis

REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379

MAIL_MAILER=smtp
APP_DEBUG=false is non-negotiable. With debug mode enabled, Laravel renders full stack traces — including environment variables — directly in the browser on unhandled exceptions. On a production server that’s a critical security vulnerability, not just an aesthetic concern.
Using Redis as your cache, session, and queue driver is the correct production default. It’s fast, it’s atomic for queue operations, and it survives application restarts. The Service Container registers all three bindings automatically when you set those .env values.
File Permissions
# Application should be readable by web server, writable only where needed
sudo chown -R deployer:www-data /var/www/your-app
sudo find /var/www/your-app -type f -exec chmod 644 {} \;
sudo find /var/www/your-app -type d -exec chmod 755 {} \;
# Storage and cache must be writable by the web server
sudo chmod -R 775 /var/www/your-app/storage
sudo chmod -R 775 /var/www/your-app/bootstrap/cache
Composer and Application Optimisation
This is the step most deployment tutorials gloss over. The difference between a development Composer install and a production one is significant.
composer install --no-dev --optimize-autoloader --no-interaction --prefer-dist
- `--no-dev` strips development dependencies (PHPUnit, Collision, Faker, etc.). These have no place in production.
- `--optimize-autoloader` generates a class map instead of relying on PSR-4 filesystem traversal. Measurably faster on cold boots.
- `--prefer-dist` downloads zip archives instead of cloning git repositories. Faster and more reliable on CI/CD pipelines.
Laravel Optimisation Commands
Run these as part of every deployment. They cache the framework’s configuration, routes, views, and events into single serialised PHP files — eliminating repeated filesystem lookups on every request:
php artisan config:cache
php artisan route:cache
php artisan view:cache
php artisan event:cache
Clear them during a new deploy before regenerating:
php artisan optimize:clear
php artisan optimize
As of Laravel 11+, php artisan optimize is a convenience command that runs all four cache commands in a single call. Use it.
[Production Pitfall] Never run `php artisan config:cache` in an environment where you have called `env()` directly in your application code outside of a config file. Once the config is cached, `env()` calls return `null` — this is by design, because the config cache is the only authorised source of environment values. All `env()` calls belong in `config/*.php` files. If you have legacy code with `env()` scattered through service classes or controllers, your cached deployment will silently misbehave on the values that depend on it. Audit before caching.
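The correct pattern, as a sketch with a hypothetical `PAYMENT_API_KEY` variable: `env()` lives in a config file, and application code reads the config key instead:

```php
// config/services.php — the only place env() belongs:
'payment' => [
    'key' => env('PAYMENT_API_KEY'), // hypothetical variable name
],

// Elsewhere in application code, read the cached value instead:
$key = config('services.payment.key');
```

This behaves identically in development and production, whether or not the config cache is built.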
Storage Link
If your application handles file uploads, you need the storage symlink. Create it once on initial deploy:
php artisan storage:link
This creates public/storage → storage/app/public. Do not run it on every deploy — it will fail if the link already exists. Your deploy script should check for its existence:
[ ! -L public/storage ] && php artisan storage:link
Database Migrations in Production
Running php artisan migrate in production requires care. There are two distinct phases of your application’s lifetime here: initial deploy and subsequent deploys.
Initial Deploy
php artisan migrate --force
The --force flag is required in production — without it, Artisan will prompt for confirmation, which blocks non-interactive deploy scripts.
Subsequent Deploys
Before running migrations on a live database, you should:
- Take a database snapshot. Most managed databases (RDS, PlanetScale, Supabase) offer point-in-time recovery. On a self-managed instance, run `mysqldump` before every migration.
- Review the migration for destructive operations (`dropColumn`, `dropTable`, `change` on an indexed column). These can lock tables and cause downtime under load.
- Consider squashing migrations periodically to keep your migration history manageable.
[Edge Case Alert] On high-traffic MySQL tables, adding a column with a default value causes a full table rebuild in MySQL versions below 8.0. In MySQL 8.0+ with InnoDB, most `ALTER TABLE` operations are instant for `ADD COLUMN` with a default — but not all. Specifically, adding a column with a non-null default that requires a backfill still rewrites the table. For tables with millions of rows, use `pt-online-schema-change` or a shadow-table migration pattern to avoid a full table lock mid-deployment.
Queue Workers and Supervisor
Laravel’s queue system is the backbone of any application that does non-trivial background work: sending emails, processing uploads, dispatching webhooks, handling AI API calls. Your queue workers are long-running PHP processes. They need a process manager to restart when they crash, and to restart gracefully when you deploy new code.
Supervisor is the standard solution.
Supervisor Configuration
Create a configuration file at /etc/supervisor/conf.d/laravel-worker.conf:
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/your-app/artisan queue:work redis --sleep=3 --tries=3 --max-time=3600 --queue=high,default,low
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
user=deployer
numprocs=4
redirect_stderr=true
stdout_logfile=/var/www/your-app/storage/logs/worker.log
stopwaitsecs=3600
Key options explained:
- `numprocs=4` — run four concurrent worker processes. Tune this based on your server’s CPU count and queue volume.
- `--max-time=3600` — each worker process restarts after 1 hour. This prevents memory leaks from accumulating indefinitely in long-running jobs.
- `--tries=3` — a job will be retried up to 3 times before being moved to the failed jobs table.
- `--queue=high,default,low` — workers drain the `high` queue first, then `default`, then `low` (queues are processed in the order listed). This gives you priority-based job routing.
- `stopwaitsecs=3600` — Supervisor will wait up to 1 hour for a worker to finish its current job before force-killing it on a server stop. Set this to your longest acceptable job duration.
Reload Supervisor after creating or editing the config:
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start laravel-worker:*
Deploying with Queue Workers Running
When you deploy new code, your queue workers are still running old code. They need to be restarted gracefully — not killed and restarted abruptly, which would abort jobs mid-execution.
php artisan queue:restart
This signals all workers to finish their current job and then exit. Supervisor immediately spawns fresh workers that load the new code. Include this in every deploy script.
Failed Jobs
Always configure a failed job table. It’s a five-second setup that will save you hours of debugging:
php artisan queue:failed-table
php artisan migrate
Monitor failed jobs in production:
php artisan queue:failed       # List failed jobs
php artisan queue:retry all    # Retry all failed jobs
php artisan queue:flush        # Clear the failed jobs table
[Architect’s Note] Laravel’s `ShouldBeUnique` and `ShouldBeUniqueUntilProcessing` contracts on your job classes use an atomic Redis lock to prevent duplicate job dispatches. If you’re queuing jobs from event listeners or webhooks that can fire multiple times for the same payload — e.g., a Stripe webhook for a payment confirmation — implement `ShouldBeUnique` with a custom `uniqueId()` method based on the entity identifier, not the job creation time. This eliminates the entire class of duplicate-processing bugs before they reach the database.
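As a sketch, a hypothetical webhook job deduplicated on its provider event ID (the class name, `$eventId` property, and one-hour lock window are illustrative, not prescriptive):

```php
<?php

namespace App\Jobs;

use Illuminate\Contracts\Queue\ShouldBeUnique;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Queue\Queueable;

class ProcessPaymentWebhook implements ShouldQueue, ShouldBeUnique
{
    use Queueable;

    // Release the uniqueness lock after an hour even if the job never runs
    public int $uniqueFor = 3600;

    public function __construct(public string $eventId) {}

    // Deduplicate on the entity identifier, not the dispatch time
    public function uniqueId(): string
    {
        return $this->eventId;
    }

    public function handle(): void
    {
        // ... process the webhook payload
    }
}
```

Dispatching this job twice with the same `$eventId` while the lock is held results in a single queued job.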
Task Scheduling in Production
Laravel’s Scheduler is elegantly simple — you define your schedule in routes/console.php (Laravel 11+) and a single cron entry runs everything:
// routes/console.php
use Illuminate\Support\Facades\Schedule;
Schedule::command('reports:daily')->dailyAt('06:00')->timezone('Africa/Johannesburg');
Schedule::command('cache:prune-stale-tags')->hourly();
Schedule::job(new SendWeeklyDigest)->weekly()->onOneServer();
Set up the single cron entry on your server as the deployer user:
crontab -e -u deployer

Add:

* * * * * cd /var/www/your-app && php artisan schedule:run >> /dev/null 2>&1
The * * * * * triggers every minute, but Laravel’s Scheduler only executes tasks whose time has come. This is the correct and documented approach.
For clustered/multi-server deployments, the ->onOneServer() method acquires an atomic lock via the cache driver (Redis) to ensure a scheduled task runs on exactly one server even when multiple Scheduler instances are ticking simultaneously.
Zero-Downtime Deployment Strategies
This is where production deployment gets architectural. “Zero-downtime” means users experience no service interruption during a deploy. There are three dominant approaches for Laravel:
Option A: Laravel Forge Deployment Scripts
Forge generates a deployment script automatically. You can customise it in the Forge dashboard under your site’s Deployment Script tab. A production-hardened script looks like this:
cd /home/forge/your-app.com

# Pull latest code
git pull origin main

# Install production dependencies
composer install --no-dev --optimize-autoloader --no-interaction

# Clear and rebuild caches
php artisan optimize:clear
php artisan optimize

# Run database migrations
php artisan migrate --force

# Create storage link if not present
[ ! -L public/storage ] && php artisan storage:link

# Restart queue workers gracefully
php artisan queue:restart

# Restart PHP-FPM for OPcache invalidation
( flock -w 10 9 || exit 1
  echo 'Restarting FPM...'
  sudo -S service php8.3-fpm reload ) 9>/tmp/fpmlock
Trigger it via Git webhook — Forge watches your repository branch and deploys automatically on push.
Option B: Deployer PHP (Self-Managed, Zero-Downtime)
Deployer is the professional tool for zero-downtime deployments on self-managed servers. It uses a releases directory structure: each deploy creates a numbered releases/N directory, and on successful deploy, the current symlink is atomically switched to point to the new release. The switch is instantaneous — Nginx serves the new code the moment the symlink flips.
Install Deployer locally (not on the server):
composer require deployer/deployer --dev
Create a deploy.php in your project root:
<?php
namespace Deployer;
require 'recipe/laravel.php';
// Configuration
set('application', 'your-app');
set('repository', 'git@github.com:your-org/your-app.git');
set('git_tty', false);
set('keep_releases', 5);
set('shared_files', ['.env']);
set('shared_dirs', ['storage']);
set('writable_dirs', ['bootstrap/cache', 'storage', 'storage/app', 'storage/logs', 'storage/framework']);
// Server
host('production')
->setHostname('your-server-ip')
->setRemoteUser('deployer')
->setDeployPath('/var/www/your-app');
// Custom tasks
after('deploy:update_code', 'artisan:optimize:clear');
after('artisan:migrate', 'artisan:optimize');
after('artisan:optimize', 'artisan:queue:restart');
// Deploy
after('deploy:failed', 'deploy:unlock');
Deploy:
./vendor/bin/dep deploy production
Rollback to the previous release if something goes wrong:
./vendor/bin/dep rollback production
That rollback takes seconds. It atomically switches the current symlink back to the previous release directory. The database state is the one complexity — schema migrations don’t roll back automatically, which is why additive migrations (never destructive in the same deploy as a code change) are the correct approach.
Option C: Laravel Cloud
Laravel Cloud is Laravel’s own managed platform, announced in 2024 and now generally available. It handles containerised deployments, auto-scaling, managed databases, Redis, and worker processes with no server configuration required. It’s positioned between Forge (you manage the server) and full PaaS offerings. For greenfield applications without specific infrastructure requirements, it warrants serious evaluation.
[Word to the Wise] Zero-downtime deployment solves the web process problem, but it does not solve the database migration problem. The atomic symlink switch means new code goes live instantly — but if your new code expects a database column that doesn’t exist yet because the migration ran after the symlink switch, you will have errors in that gap. The correct sequence is always: run migration first, then flip the symlink. Better still, keep your migrations backwards compatible for at least one release cycle: add columns before the code references them, drop columns after the code no longer references them. This pattern — expand/contract migrations — is how high-traffic teams do schema changes without downtime.
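The expand step of that pattern looks like an ordinary additive migration. A sketch, with a hypothetical `orders` table and `tracking_number` column:

```php
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration {
    public function up(): void
    {
        Schema::table('orders', function (Blueprint $table) {
            // Nullable: code deployed before this migration, which never
            // writes the column, keeps working unchanged
            $table->string('tracking_number')->nullable();
        });
    }

    public function down(): void
    {
        Schema::table('orders', function (Blueprint $table) {
            $table->dropColumn('tracking_number');
        });
    }
};
```

The contract step — making the column non-nullable, or dropping a column no longer referenced — ships in a later release, once every running process is on code that agrees with the new schema.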
Monitoring, Logging, and Observability
Deploying without observability is flying blind. Here is the minimum viable monitoring stack:
Laravel Pulse
Laravel Pulse is the first-party application performance monitor for the framework, compatible with Laravel 11 and later. It tracks:
- Slow queries and their origin
- Slow routes
- Queued job throughput and failures
- Redis command frequency
- Exception rates and spikes
Enable it with a single package install if not already present:
composer require laravel/pulse
php artisan vendor:publish --provider="Laravel\Pulse\PulseServiceProvider"
php artisan migrate
Pulse registers its own /pulse dashboard route. In non-local environments, access is denied until you authorise it by defining a `viewPulse` gate, typically in a service provider’s `boot()` method:

use App\Models\User;
use Illuminate\Support\Facades\Gate;

Gate::define('viewPulse', function (User $user) {
    return $user->isAdmin(); // substitute your own authorisation check
});
Structured Logging with Channels
Configure your config/logging.php stack for production. Writing to a single laravel.log file on a multi-process server is a concurrency hazard — log lines from concurrent requests interleave. Use a daily rotating log with proper permissions, or push logs to an external service:
// config/logging.php
'channels' => [
'stack' => [
'driver' => 'stack',
'channels' => ['daily', 'slack'],
'ignore_exceptions' => false,
],
'daily' => [
'driver' => 'daily',
'path' => storage_path('logs/laravel.log'),
'level' => env('LOG_LEVEL', 'error'),
'days' => 14,
'permission' => 0664,
],
'slack' => [
'driver' => 'slack',
'url' => env('LOG_SLACK_WEBHOOK_URL'),
'username' => 'Laravel Log',
'emoji' => ':boom:',
'level' => 'critical',
],
],
This setup writes error and above to your daily rotating log file, and separately pipes critical logs to a Slack webhook for immediate team notification. Tune LOG_LEVEL in your .env — in production, error is almost always the right threshold. debug in production fills your disk.
Error Tracking
Integrate Sentry for Laravel for exception tracking with full stack traces, user context, and performance monitoring. It’s free for modest volumes and worth every penny for the breadcrumb trails it provides when debugging a production exception you cannot reproduce locally.
composer require sentry/sentry-laravel
php artisan sentry:publish --dsn=https://your-dsn@sentry.io/your-project-id
OPcache Configuration
This is frequently missed. OPcache caches PHP bytecode in memory so that PHP does not re-parse source files on every request. It is enabled by default on most PHP-FPM installs, but the defaults are often under-configured. Add to your php.ini or /etc/php/8.3/fpm/conf.d/10-opcache.ini:
opcache.enable=1
opcache.memory_consumption=256
opcache.interned_strings_buffer=16
opcache.max_accelerated_files=20000
opcache.revalidate_freq=0
opcache.validate_timestamps=0
opcache.save_comments=1
Setting validate_timestamps=0 tells OPcache to never check whether source files have changed on disk. This is the highest-performance setting and is correct for production — your deployment process (the symlink switch or PHP-FPM reload) is responsible for invalidating the cache. After every deploy, reload PHP-FPM:
sudo systemctl reload php8.3-fpm
Final Pre-Launch Checklist
Run through this before you point your DNS at the new server:
Security
- `APP_DEBUG=false` in `.env`
- `APP_ENV=production` in `.env`
- `.env` file is not in your git repository (check `.gitignore`)
- No `public/.htaccess` exposing sensitive directories
- Security headers present in Nginx config (X-Frame-Options, HSTS, CSP)
- SSL certificate installed and auto-renewal tested
- Database user has only the permissions it needs (no `SUPER` privilege)
Performance
- `php artisan optimize` has been run
- OPcache enabled with production settings
- Redis running and configured as cache, session, and queue driver
- Gzip enabled in Nginx
- Static assets compiled and versioned with `npm run build`
Reliability
- Supervisor running with correct worker count
- Cron entry configured for the Scheduler
- Failed jobs table created and migrated
- Database migrations run with `--force`
- Storage symlink created
- Backup strategy in place (database + uploaded files)
Observability
- Laravel Pulse installed and dashboard protected
- Error tracking (Sentry or equivalent) configured
- Log channel configured for production log level
- Health check endpoint responding (`/up` — built in since Laravel 11)
Deployment Process
- Deployment is scripted and not manual
- Rollback procedure documented and tested
- `queue:restart` runs as part of every deploy
- PHP-FPM reloaded after every deploy (OPcache invalidation)
Closing Thoughts
The gap between “it works on my machine” and “it runs reliably in production” is real, but it’s not mysterious. It’s a known set of problems with known solutions. What we’ve covered here is the complete picture: correct Nginx configuration, hardened environment settings, Composer and framework optimisation, database migration discipline, Supervisor-managed queue workers, zero-downtime atomic deployments, and the observability layer that makes all of it debuggable.
The first production deploy is always the hardest. The second is mostly copy-paste. By your third, you’ll have a deployment script you trust completely.
Refer to the official Laravel 12 deployment documentation for the authoritative reference on server requirements and optimisation commands — it’s one of the most concise and well-maintained pages in the Laravel docs.
This guide was featured on Laravel News in March 2026.
Frequently Asked Questions
Do I need Laravel Forge to deploy a Laravel application to production?
No. Laravel Forge is a server management tool, not a deployment requirement. You can deploy Laravel to any VPS — DigitalOcean, Hetzner, AWS, Linode — by provisioning Nginx, PHP-FPM, and Supervisor manually. Forge is a time investment decision, not a technical one. If you’re managing more than one server or you don’t want to own Nginx configuration and SSL renewal as ongoing responsibilities, Forge pays for itself quickly. If you’re running a single server and you’re comfortable on the command line, a self-managed stack is entirely viable.
What PHP version does Laravel 12 require?
Laravel 12 requires PHP 8.2 as a minimum. PHP 8.3 is the recommended version for new production deployments — it includes JIT improvements and additional performance gains over 8.2. You should never deploy on a PHP version below the framework minimum, and you should avoid running a PHP version that has reached end-of-life, as it will no longer receive security patches.
Why does env() return null after I run php artisan config:cache?
Once the configuration cache is built, Laravel stops reading the .env file on each request and serves all values from the cached file instead. Any env() call made directly in application code — outside of a config/*.php file — will return null because the environment file is no longer loaded at runtime. The fix is to move all env() calls into their appropriate config files and reference them via config('your-key') throughout your application. This is the correct pattern regardless of caching — it’s just invisible in development because the cache isn’t active.
Should I run php artisan migrate automatically as part of my deployment script?
Yes, with --force to suppress the production confirmation prompt, and with one important precaution: run the migration before the new code goes live, not after. If your new code references a column that doesn’t exist yet because the migration ran after the symlink switch, you will have a window of errors in production. The correct sequence is migrate first, then deploy code. Additionally, always take a database snapshot before running migrations against a production database — automated or not.
How many Supervisor queue worker processes should I run?
Start with one worker per CPU core as a baseline. A 2-core server running a mixed workload typically handles 2–4 workers comfortably. Monitor your queue depth using php artisan queue:monitor or Laravel Pulse and scale up if jobs are backing up. Be aware that each worker is a persistent PHP process consuming memory — on a memory-constrained server, worker count is bounded by RAM as much as CPU. The --max-time=3600 flag is also important: it restarts workers hourly to prevent memory leaks from compounding in long-running processes.
What is the safest way to handle database schema changes without downtime?
The pattern is called expand/contract migrations. When adding a column, add it as nullable first (expand), deploy the code that writes to it, then in a subsequent deploy make it non-nullable if required (contract). When removing a column, stop writing to it in code first, deploy that code change, then drop the column in the next deploy. This ensures that at no point does your live code expect a schema state that the database hasn’t reached yet. It’s a discipline issue, not a technical limitation — most downtime during deploys traces back to migrations and code changes being coupled in the same release.
How do I make sure my scheduled tasks only run on one server in a multi-server setup?
Add ->onOneServer() to any scheduled task that must not run concurrently across multiple instances. Laravel acquires an atomic lock via your configured cache driver — Redis is the correct driver for this — to guarantee only one server executes the task for a given scheduled window. Every server still needs the cron entry (* * * * * php artisan schedule:run) configured; onOneServer() is the lock mechanism, not a cron replacement. Without it, every server in your cluster will run every scheduled command independently.
What is the /up health check endpoint in Laravel and should I use it?
The /up route was introduced in Laravel 11 as a built-in application health endpoint. It returns an HTTP 200 response when the application is running correctly and throws an exception — triggering a non-200 response — if something is critically wrong. You should use it. Point your load balancer, uptime monitor (Better Uptime, UptimeRobot), or container orchestration health check at /up rather than your homepage or a custom route. It is intentionally lightweight and does not require authentication. It also fires a DiagnosingHealth event, which you can hook into to run custom health checks such as verifying your database connection or Redis availability before reporting healthy.
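A sketch of hooking that event, assuming Laravel 11+ and registered in a service provider’s `boot()` method — here using a database connection check as the illustrative custom probe:

```php
use Illuminate\Foundation\Events\DiagnosingHealth;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Event;

// In a service provider's boot() method:
Event::listen(function (DiagnosingHealth $event) {
    // Any exception thrown here turns /up into a non-200 response,
    // so an unreachable database fails the health check
    DB::connection()->getPdo();
});
```

Keep these probes cheap — the endpoint is polled every few seconds by load balancers and uptime monitors.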
Senior Laravel Developer and AI Architect with 10+ years in the trenches. Dewald writes about building resilient, cost-aware AI integrations and modernizing the Laravel developer workflow for the 2026 ecosystem.

