
Deploying Inertia Vue SSR to Cloudflare Workers: From Traditional Node.js to Global Edge
Tim Geisendörfer • July 9, 2025
Traditional Laravel Inertia SSR setups work great, but they come with a familiar problem: your Vue components get rendered on a single server somewhere, creating bottlenecks and issues with long-running Node processes. What if we could run that same SSR logic at 200+ edge locations globally, with zero server management?
That's exactly what we can achieve by moving our Inertia Vue SSR from Node.js to Cloudflare Workers. And the best part? Your Vue components and Laravel backend stay completely unchanged.
Important Note: This approach is a prototype and case study. We haven't tested it extensively for production use, so proceed with caution if you're considering this for mission-critical applications. Think of this as an exploration of what's possible rather than a battle-tested solution.
For a working example, check out our demo repository that demonstrates this entire setup.
The Problem with Traditional SSR
Let's be honest - running a Node.js SSR server works, but it comes with real operational challenges:
- Infrastructure management: You need to take care of an additional process running on your infrastructure
- Scaling complexity: Traffic spikes require careful capacity planning
- Memory leaks: In bigger applications, long-running SSR processes can develop memory leaks that are frustrating to debug
- Limited deployment options: Some serverless platforms like AWS Lambda (used by Laravel Vapor) don't support traditional InertiaJS SSR at all
Cloudflare Workers addresses these pain points differently. While Workers don't create a fresh environment for each request, they do have built-in memory limits and automatic cleanup mechanisms that prevent the kind of memory leaks that plague long-running Node.js processes.
Setting Up the Dual Build System
The key insight is creating shared SSR logic that works for both traditional Node.js and Cloudflare Workers. Here's how to structure it:
First, create resources/js/ssr-shared.ts with your core SSR logic:
// resources/js/ssr-shared.ts
import { createInertiaApp } from '@inertiajs/vue3';
import { renderToString } from '@vue/server-renderer';
import { resolvePageComponent } from 'laravel-vite-plugin/inertia-helpers';
import { createSSRApp, DefineComponent, h } from 'vue';
import { route as ziggyRoute } from 'ziggy-js';

const appName = import.meta.env.VITE_APP_NAME || 'Laravel';

export async function renderPage(page: any) {
    try {
        return await createInertiaApp({
            page,
            render: renderToString,
            title: (title) => (title ? `${title} - ${appName}` : appName),
            resolve: (name) =>
                resolvePageComponent(`./pages/${name}.vue`, import.meta.glob<DefineComponent>('./pages/**/*.vue')),
            setup({ App, props, plugin }) {
                const app = createSSRApp({ render: () => h(App, props) });

                // Handle Ziggy routes for SSR
                if (page.props.ziggy) {
                    const ziggyConfig = {
                        ...page.props.ziggy,
                        location: new URL(page.props.ziggy.location),
                    };

                    const route = (name: string, params?: any, absolute?: boolean) =>
                        ziggyRoute(name, params, absolute, ziggyConfig);

                    app.config.globalProperties.route = route;

                    // Make the route function available globally for SSR
                    if (typeof window === 'undefined') {
                        if (typeof global !== 'undefined') {
                            global.route = route;
                        }
                        if (typeof globalThis !== 'undefined') {
                            globalThis.route = route;
                        }
                    }
                }

                app.use(plugin);

                return app;
            },
        });
    } catch (error) {
        console.error('[SSR] Render error:', error);
        throw error;
    }
}
Now your traditional SSR entry point becomes incredibly simple:
// resources/js/ssr.ts
import createServer from '@inertiajs/vue3/server';
import { renderPage } from './ssr-shared';
createServer(renderPage, { cluster: true });
Creating the Cloudflare Worker
The Worker entry point handles HTTP requests and calls your shared SSR logic:
// resources/js/worker.ts
import { renderPage } from '../../dist/ssr.js';

export default {
    async fetch(request: Request): Promise<Response> {
        if (request.method !== 'POST' || new URL(request.url).pathname !== '/render') {
            return new Response('Not found', { status: 404 });
        }

        try {
            const body = await request.text();
            const data = JSON.parse(body);
            const page = data.page || data;

            if (!page || !page.component) {
                return new Response('Missing page data or component', { status: 400 });
            }

            const ssrResult = await renderPage(page);

            const jsonResponse = {
                head: ssrResult.head || [],
                body: ssrResult.body || '',
            };

            return new Response(JSON.stringify(jsonResponse), {
                headers: {
                    'content-type': 'application/json',
                    'cache-control': 'no-cache, no-store, must-revalidate',
                },
            });
        } catch (error) {
            console.error('[Worker] Error:', error);
            const message = error instanceof Error ? error.message : 'Unknown error';
            return new Response(`Internal server error: ${message}`, { status: 500 });
        }
    },
};
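Once the Worker is running locally (wrangler dev serves it on http://localhost:8787 by default), you can sanity-check the /render endpoint directly. The payload below is a minimal, hypothetical example - the component name and props just need to match a page that actually exists in your app:

# Test the render endpoint with a minimal Inertia page object
curl -X POST http://localhost:8787/render \
  -H 'Content-Type: application/json' \
  -d '{"component": "Welcome", "props": {}, "url": "/", "version": ""}'

If everything is wired up, the response is the JSON object Laravel expects: a head array and a body string containing the rendered HTML.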
Configuring the Build System
The magic happens in your Vite config. We detect when we're building for Workers and adjust accordingly:
// vite.config.ts
import { defineConfig } from 'vite';
import vue from '@vitejs/plugin-vue';
import laravel from 'laravel-vite-plugin';
import path from 'path';

export default defineConfig(() => {
    const isWorkerBuild = process.env.WORKER_BUILD === 'true';

    const baseConfig = {
        plugins: [vue()],
        resolve: {
            alias: {
                '@': path.resolve(__dirname, './resources/js'),
                'ziggy-js': path.resolve(__dirname, 'vendor/tightenco/ziggy'),
            },
        },
    };

    if (isWorkerBuild) {
        return {
            ...baseConfig,
            build: {
                ssr: 'resources/js/ssr-shared.ts',
                outDir: 'dist',
                rollupOptions: {
                    output: {
                        entryFileNames: 'ssr.js',
                    },
                },
            },
        };
    }

    return {
        ...baseConfig,
        plugins: [
            ...baseConfig.plugins,
            laravel({
                input: ['resources/js/app.ts'],
                ssr: 'resources/js/ssr.ts',
                refresh: true,
            }),
        ],
    };
});
Deployment Process
Create a wrangler.toml file:
name = "your-app-ssr"
main = "resources/js/worker.ts"
compatibility_date = "2025-06-25"
compatibility_flags = ["nodejs_compat"]
[build]
command = "WORKER_BUILD=true npm run build"
Add the necessary npm scripts:
{
    "scripts": {
        "build:worker": "WORKER_BUILD=true vite build",
        "dev:worker": "wrangler dev --persist-to .wrangler/state",
        "deploy:worker": "wrangler deploy"
    }
}
Now deploy:
# Login to Cloudflare
npx wrangler login
# Build and deploy
npm run build:worker
npm run deploy:worker
Switching Between SSR Modes
The beauty of this setup is that you can switch between traditional and Workers SSR by simply changing your environment variable:
# Traditional Node.js SSR
INERTIA_SSR_URL=http://127.0.0.1:13714
# Local Workers development
INERTIA_SSR_URL=http://localhost:8787
# Production Workers SSR
INERTIA_SSR_URL=https://your-worker-name.your-subdomain.workers.dev
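Under the hood, the inertia-laravel package reads this value from its SSR config. If you've published the config file, the relevant section looks roughly like this (defaults may differ slightly between package versions):

// config/inertia.php (excerpt)
'ssr' => [
    'enabled' => true,
    // Laravel posts the page object to this URL and injects the rendered HTML
    'url' => env('INERTIA_SSR_URL', 'http://127.0.0.1:13714'),
],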
Real-World Benefits
The advantages of this approach become clear in specific scenarios:
- Serverless-first deployments: Perfect for Laravel Vapor users who can't run traditional SSR servers on AWS Lambda
- Zero scaling concerns: Traffic spikes are handled automatically without capacity planning
- Memory leak mitigation: Workers have built-in memory limits and cleanup mechanisms that prevent the accumulating memory leaks common in long-running SSR processes
- Simplified operations: No SSR servers to monitor, restart, or maintain
- Cost efficiency: Pay per request instead of keeping servers running during low traffic periods
Development Experience
For local development, run both environments:
# Traditional SSR
npm run build:ssr
php artisan inertia:start-ssr
# Workers SSR
npm run dev:worker
Your Vue components, Laravel controllers, and InertiaJS setup remain exactly the same. The only difference is where the SSR happens.
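To make that concrete, a hypothetical controller looks identical in both modes - it returns an Inertia response and never knows which SSR backend rendered it:

// app/Http/Controllers/DashboardController.php (hypothetical example)
use Inertia\Inertia;

class DashboardController extends Controller
{
    public function index()
    {
        // Rendered by the Node.js SSR server or the Cloudflare Worker - the controller doesn't care
        return Inertia::render('Dashboard', [
            'stats' => ['visits' => 42],
        ]);
    }
}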
Conclusion
Moving from traditional Node.js SSR to Cloudflare Workers isn't just about performance - it's about simplifying your infrastructure while improving user experience globally. The dual build system ensures you're not locked into either approach, giving you flexibility as your application grows.
The setup might seem complex initially, but once configured, it provides a robust foundation for scalable SSR applications. Your users get faster page loads, and you get to focus on building features instead of managing servers.
How We Built It
Here's what we implemented to make this work:
1. Shared SSR Logic (resources/js/ssr-shared.ts)
Instead of duplicating SSR code, we extracted the Vue rendering logic into a shared file that works in both Node.js and Workers environments. This handles Vue component rendering, Ziggy route helpers, and error handling.
2. Worker Entry Point (resources/js/worker.ts)
A simple HTTP handler that receives requests from Laravel, calls the shared SSR logic, and returns rendered HTML. It validates requests and formats responses properly.
3. Dual Build System (vite.config.ts)
We modified Vite to detect when we're building for Workers (WORKER_BUILD=true) and adjust the output accordingly. Workers need a different bundle format than traditional Node.js SSR.
4. New npm Scripts
- npm run build:worker - Creates the Worker-compatible bundle
- npm run dev:worker - Runs a local development server
- npm run deploy:worker - Deploys to Cloudflare (when ready)
5. Environment Switching
Simply change INERTIA_SSR_URL in your .env file to switch between traditional SSR and Workers SSR. Laravel doesn't care where the SSR happens - it just makes HTTP requests.
Try It Yourself
Want to test this approach? You have two options:
Quick Start (Recommended)
Clone our working example and run it locally:
git clone https://github.com/geisi/inertia-vue-cloudflare-ssr.git
cd inertia-vue-cloudflare-ssr
cp .env.example .env
composer install
npm install
# Build traditional SSR first (InertiaJS requirement)
npm run build:ssr
# Start the local Worker server
npm run dev:worker
# Update your .env with the Worker URL (usually http://localhost:8787)
Build It From Scratch
If you want to understand every step, check out our complete implementation guide in the repository. It walks through each file modification and explains why each change is necessary.
Key Takeaways
This approach transforms your SSR from a long-running process into a stateless, auto-scaling system. While it's still experimental, the operational benefits are significant for specific use cases.
The main benefits we've observed:
- Laravel Vapor compatibility: Finally get SSR working on AWS Lambda-based deployments
- Memory leak mitigation: Built-in limits and cleanup prevent accumulating memory issues
- Automatic scaling: No capacity planning or server management
- Cost reduction: Pay-per-request vs always-on servers
Remember, this is prototype territory. Test thoroughly before considering it for production applications. But for exploring the future of web application deployment, it's an exciting direction.
If you try this approach, I'd love to hear about your experience. The Laravel and Vue communities could benefit from more real-world testing of edge-based SSR solutions.