Serverless SvelteKit in AWS Lambda (and all of my frustrations about the process)

I saw an opportunity to keep our OpEx low by running our containerized SvelteKit application in AWS Lambda instead of ECS or Kubernetes, and a great vibe-coding opportunity to boot. It turned out to be an exercise in frustration. In this post I provide both the solution and optional color commentary on how I got there.

Posted by Tejus Parikh on June 28, 2025

We do not do hard things to grow ourselves or gain knowledge, we do them because we thought it would be easy. – JFK’s speechwriter (early draft, maybe)

I thought I had a simple plan. I had a SvelteKit SPA that I wanted to deploy as a containerized Node app. It’s not a 24/7 service, and I don’t have the personpower or cash to run a Kubernetes cluster, so AWS Lambda sounded like an ideal choice. I figured I should be able to hire a contractor to help with containerization, change adapter-static to adapter-node, maybe use some AI to bridge any gaps, and be on my merry way. I didn’t count on this being some of the most frustrating work of my professional career.

So if you’re another crazy person trying to do this instead of just launching the app on Vercel or AWS App Runner, put your AI away and take a look at the next section. Very important things have changed since the training-data cutoff date. If you want to know how I felt about it all, then read beyond.

SvelteKit Node, AWS Lambda, and AWS ALB: the key changes

✅ The 100% correct way to host a SvelteKit application in AWS Lambda in 2025.

This is the structure of the final request flow:

browser => CloudFront => ALB => Lambda

The browser request gets routed to CloudFront, which forwards it to an Application Load Balancer, which ultimately invokes a Lambda function with an ALBEvent. The critical thing is that this is not standard HTTP server communication: the ALBEvent needs to be bridged to SvelteKit.
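For reference, here is roughly what an ALBEvent payload looks like when multi-value headers are enabled on the target group. All values below are illustrative; the field names follow the documented ALB-to-Lambda event format.

```javascript
// Sketch of the ALBEvent a Lambda receives from an ALB with multi-value
// headers enabled. There is no socket or HTTP stream here, just this JSON
// payload, which is why SvelteKit needs a bridge layer.
const exampleAlbEvent = {
  requestContext: {
    elb: {
      // ARN of the target group that invoked the function (placeholder)
      targetGroupArn: 'arn:aws:elasticloadbalancing:...',
    },
  },
  httpMethod: 'GET',
  path: '/healthcheck',
  multiValueQueryStringParameters: {},
  multiValueHeaders: {
    host: ['app.example.com'],
    'user-agent': ['curl/8.4.0'],
  },
  body: '',
  isBase64Encoded: false,
};
```

With multi-value headers disabled, ALB instead sends singular `headers` and `queryStringParameters` maps, which is part of why the target group setting matters later.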

I’m going to assume you already have a SvelteKit app and a preferred mechanism for deploying into AWS.

The first thing you’ll want to do is install some dependencies:

# Install the node adapter as a dev dependency since this is not needed post build
$ npm install --save-dev '@sveltejs/adapter-node'
# We'll need some runtime dependencies too for the lambda handler
$ npm install '@codegenie/serverless-express' express patch-package

The next step is to change the Svelte config to use the node adapter:

// Import the Node adapter instead of auto or whatever you had there before
import adapter from '@sveltejs/adapter-node';
import { vitePreprocess } from '@sveltejs/vite-plugin-svelte';

/** @type {import('@sveltejs/kit').Config} */
const config = {
    // Consult https://kit.svelte.dev/docs/integrations#preprocessors
    // for more information about preprocessors.
    // `preprocess` accepts an array, so your other preprocessors can sit
    // alongside vitePreprocess
    preprocess: [vitePreprocess(), ...your_other_preprocessors],

    kit: {
        adapter: adapter({
            out: 'build',
            // Setting this to true had me convinced it was working, even when it wasn't
            precompress: false,
        }),
    },
};

export default config;

If you run npm run build and inspect the build directory, you’ll see an index.js and a handler.js. We need to create a proxy that acts like a custom server.

There’s potentially a smarter way to do this by augmenting the Vite build, but that’s beyond what I could figure out right now. What I did instead was build a little JavaScript adapter that gets included in my Lambda container by the Dockerfile build. Much of the hard work has already been done for us, but there are a few quirks.

import express from 'express';
import { handler as svelteHandler } from './build/handler.js';
/**
 * serverless-express was handed off from AWS to Vendia and has since been
 * rebranded under @codegenie. AI (in 2025) will try to make you use
 * @vendia/serverless-express, but that package no longer exists
 */
import serverlessExpress from '@codegenie/serverless-express';

const app = express();

// Stringify all header values. Svelte sometimes gets lost when a header
// value isn't a plain string
app.use((_req, res, next) => {
    const originalSetHeader = res.setHeader;
    res.setHeader = function (name, value) {
        return originalSetHeader.call(this, name, `${value}`);
    };
    next();
});

app.use((req, _res, next) => {
    console.info(`Received request: ${req.method} ${req.url}`);
    next();
});

/**
 * cheap healthcheck route
 */
app.get('/healthcheck', (_req, res) => res.send('Ok!'));

/**
 * Test route to help preserve sanity
 */
app.get('/test', (_req, res) => {
    res.send('✅ ALB + Lambda + SvelteKit working!');
});

// Use SvelteKit handler (from adapter-node)
app.use(svelteHandler);

// Wrap in serverless express WITHOUT eventSource
const baseHandler = serverlessExpress({ app, logSettings: { level: 'warn' } });

export const rawHandler = async (event, context, callback) => {
    /* Uncomment the following to get the full event string in the logs */
    // console.debug(JSON.stringify({ event }));
    try {
        const response = await baseHandler(event, context, callback);
        /**
         * As part of its response translation, SvelteKit correctly folds multiple calls to
         * `event.cookies.set` into a single comma-delimited cookie string. ALB does not like
         * that, so the cookies need to be re-split into the ALB-specific (but non-standard)
         * structure within multiValueHeaders. You also need to enable multi-value headers
         * on the ALB target group
         */
        if (response?.multiValueHeaders?.['set-cookie']) {
            response.multiValueHeaders['set-cookie'] = response.multiValueHeaders['set-cookie']
                .map((c) => c.split(','))
                .flat();
        }
        console.info(
            `Response for request: ${event.path}. Status: ${response.statusCode}, size: ${Buffer.byteLength(JSON.stringify(response), 'utf8')}.`,
        );
        return response;
    } catch (e) {
        console.error(e);
    }
};

export const handler = rawHandler;

We’re not quite done yet. As it stands, this code will never return any data. SvelteKit copies its internal response state by writing into the body stream. However, Serverless Express isn’t actually providing a stream, only something that sorta looks like one, so SvelteKit waits for an event that never comes. This is an open issue on the @codegenie/serverless-express issue tracker.

I patched this by returning true at the end of the write callback (hence the dependency on patch-package). There may be a smarter fix, but this one has been working. Save the following code snippet as patches/@codegenie+serverless-express+4.16.0.patch and add "postinstall": "patch-package" to your package.json scripts so it gets applied on every install.

diff --git a/node_modules/@codegenie/serverless-express/src/response.js b/node_modules/@codegenie/serverless-express/src/response.js
index a65e020..6225cf0 100644
--- a/node_modules/@codegenie/serverless-express/src/response.js
+++ b/node_modules/@codegenie/serverless-express/src/response.js
@@ -125,6 +125,8 @@ module.exports = class ServerlessResponse extends http.ServerResponse {
         if (typeof cb === 'function') {
           cb()
         }
+
+        return true;
       }
     })
   }
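To see why that one-line patch matters, here is a minimal sketch of my own (toy code, not SvelteKit’s actual writer) of how a backpressure-respecting producer behaves: it keeps writing while `write()` returns `true`, and parks on `'drain'` otherwise. A sink that returns a falsy value from `write()` but never emits `'drain'` hangs the producer forever, which is exactly what the un-patched response object did.

```javascript
import { Writable } from 'node:stream';

// Toy producer that respects backpressure, like SvelteKit's response writer:
// keep writing while write() returns true, otherwise wait for 'drain'.
function writeAll(stream, chunks, done) {
  let i = 0;
  const writeNext = () => {
    while (i < chunks.length) {
      if (!stream.write(chunks[i++])) {
        // A sink that returns falsy from write() but never emits 'drain'
        // leaves the producer parked here forever -- the 502 in a nutshell.
        stream.once('drain', writeNext);
        return;
      }
    }
    done();
  };
  writeNext();
}

// An in-memory sink that completes each write synchronously, so write()
// returns true and the producer never has to wait -- mirroring the patch.
const received = [];
const sink = new Writable({
  write(chunk, _enc, cb) {
    received.push(chunk.toString());
    cb();
  },
});

writeAll(sink, ['hello, ', 'world'], () => {
  console.log(received.join('')); // hello, world
});
```

Since the serverless bridge buffers everything in memory anyway, there is no real backpressure to signal, so returning `true` is semantically honest, not just a hack.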

The last thing is to ensure that the ALB Target Group has Multi Value Headers turned on. Without this, you will only ever be able to set a single cookie.
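If you are flipping that switch by hand rather than through your infrastructure tooling, I believe the AWS CLI equivalent is the `lambda.multi_value_headers.enabled` target group attribute (the ARN below is a placeholder):

```shell
# Enable multi-value headers on the Lambda target group so ALB passes
# multiValueHeaders (and with it, multiple set-cookie entries) through
aws elbv2 modify-target-group-attributes \
  --target-group-arn <your-target-group-arn> \
  --attributes Key=lambda.multi_value_headers.enabled,Value=true
```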

If you packaged everything up correctly, you should have a working SvelteKit Lambda!

The rant

Sometimes you know you’re going to take on something challenging. Designing a new type of service from the ground up, changing the core communication protocol, or designing a distributed caching scheme are all daunting from the get-go. You spend time planning, writing architecture docs, getting peer feedback, plan some more, then start work.

Servicing an HTTP request in a slightly different way, using architecture patterns already used elsewhere in the system, didn’t seem like it would hit that bar. In the olden days, there would have been some Googling, reading a couple of random blogs, then implementing a solution. I did do some of that and came across a post from Sean W. Lawrence that formed the basis of the final solution. In the current zeitgeist, this seemed like a tailor-made problem for a coding agent.

So why am I even doing this in the first place?

The background

I’ve covered a little about the architecture of the application I’m working on in my previous post. There’s a lot going on behind the scenes, but for the user we have a SvelteKit application deployed as an SPA that uses DuckDB WASM to build the interface off data files stored as Parquet and served through CloudFront, secured with a custom Lambda@Edge. This setup worked great for prototyping, showing off our capabilities, and generally moving fast. Since it was just JavaScript served over a CDN, our system was extremely reliable.

There comes a time when having the browser download a lot of Parquet files to do a join just to show a number becomes noticeably slow, and you have to do something unconventional, like run queries on a server. One of the reasons for choosing SvelteKit over React was that SvelteKit is a full-stack UI framework with built-in server-side support, theoretically allowing a quick pivot when we needed more functionality. The rest of our app is heavy on moving data around, and AWS works really well for that side of our operation. We could also really benefit from having a shared filesystem between our services, so if at all possible, we wanted the UI to live in AWS rather than explore alternative all-in-one hosting providers. Worst case, we could toss the code on some virtual pizza boxes, but with our small team size a managed service was vastly preferred.

Too many moving parts

I deal with front-end infrastructure once every few years and then promptly forget about it, so all this stuff is generally outside my comfort zone. The first attempt was to use App Runner, but for some reason it lacks support for EFS, which we highly desired.

After trial and error, we settled on:

  • CloudFront to accept the request, handle SSL, and forward to something.
  • Application Load Balancer on our internal subnets to accept the forwarded request and forward it to the appropriate lambda
    • We had some issues using CloudFront Lambda function origins directly, but I don’t remember what.
  • AWS Lambda to host the SvelteKit app, bundled into a container with access to EFS
  • Move the authentication from a Lambda@Edge function on CloudFront into the SvelteKit application. There had always been some quirkiness in the existing approach that we hadn’t been able to solve.
  • Convert the SvelteKit app from adapter-static to adapter-node

The heavy lifting for containerization and Terraform would be handled by another team member, while I would focus on the JavaScript application parts. Everything went smoothly for a while. The GitHub Actions were created, the containers got built, and we transformed our app into a true client-server experience.

502 Bad AI

We had a basic handler, but since we are serving the whole application from the Lambda, we needed handling of different content types, binary encoding, etc. It felt like this should be enough prior art and an area that could be tackled in an AI first way.

Unfortunately, this is exactly where the ever-evolving (aka flaky) JavaScript ecosystem reared its ugly head. The general premise remained the same throughout and was highly influenced by the aforementioned blog post: let Svelte think it’s talking to Express, and bridge between that and the ALB request/response format.

Everyone around me has been hyping up Claude, so I went there first.

Here is one interaction (summarized):

(Claude.ai): No, you don't need to use aws-serverless-express with modern SvelteKit applications. SvelteKit has built-in AWS Lambda support through its official adapter.

(Tejus): does this work with sveltekit4.0?

(Claude.ai): Based on my search results, I can see that there's some important context here. It appears that there is no official @sveltejs/adapter-aws from the SvelteKit team. I need to correct my previous response.

This is the point where a non-ai-first me or a real junior developer would have gone, “Hey, let’s take a look at that code and see what we can learn.” If I did that, the blog post would be done and you could stop reading with zero guilt! But we’re now vibing, I’ve got my real CTO hat on, and we’re going with the AI expert.

When I asked it for some handler code, it did respond with something entirely reasonable looking:

const serverlessExpress = require('@vendia/serverless-express');
const app = require('./build/index.js'); // Your built SvelteKit app

exports.handler = serverlessExpress({ app });

And let the descent into madness start.

Spoiler alert, the code didn’t work. For one thing, in 2025, there’s no such thing as @vendia/serverless-express. All links forward to @codegenie/serverless-express. The package that was mentioned in the earlier blog post is deprecated because the maintainer no longer works at AWS so Amazon no longer wants to support it.

I got a typically useful error message of a 502 Bad Gateway and commenced with vibe-debugging. Many debugging statements, container deployments, and 502 Bad Gateways later, Claude was attempting to rewrite what the serverless-express packages did before it ran out of context.

Armed with more knowledge, I tried the same prompt in ChatGPT 4o. After a few arguments about @codegenie vs @vendia, it essentially spit out the same thing as Claude, including trying to reinvent the wheel.

My favorite segment from the text was:

✅ Verified Working Alternative: Use express + Native http.createServer + ALB-Formatted Handler

At this point, it's best to avoid @codegenie/serverless-express entirely and implement a small, clean ALB adapter for SvelteKit yourself.

Here's a 100% Working Solution Without Codegenie

To no surprise, the solution was 0% working when actually verified. Sigh.

Vibe. Vibe. Vibe. Vibe.

Assuming that somehow my prompting was the problem, I tried a few different approaches, using insights gained from previous chat sessions. AIs can do this; this is the future. Yet, despite the differences in how I started, the conversations fell into the same path. Which, when you think about how LLMs work, is really not that surprising either.

Vibe. Vibe. Angry.

Clearly, I have failed at being a visionary AI-first CTO and needed to go back to being a grumpy software engineer. All the debugging so far suggested a classic JS error where some async code was not being properly awaited or calling the correct callback. I needed to isolate where that was and figure out why.

I took some time looking at the Svelte handler and noticed that pre-rendered routes were handled differently from the others, which explained why some of the tests in the AI cycle of debugging seemed to work. All the requests I was having issues with went through the SSR handler. Although SSR is disabled in my app, this handler is what’s called anytime the server has to do anything. Eventually I was able to isolate it down to the drain event never being emitted by the write stream, with the root cause being that there was no actual write stream. All the data was getting written immediately, so write() should have been returning true the whole time. After this realization, I noticed the open bug from last year on the @codegenie/serverless-express repository.

I think there was a confluence of factors that made this hard with the approach I took:

  • When you spend all day in Typescript, it’s easy to forget that there’s nothing actually enforcing any of this when the code runs. The thing that looks like a stream does not have to follow stream semantics, especially around events fired and return types of methods.
  • Missing events are much trickier to debug than missing methods.
  • The constantly shifting nature of the Javascript landscape does not work well with training data cut-offs.
  • LLMs are generators; they don’t have a deep understanding of what they are generating. Both Claude and ChatGPT produced code that would work if the underlying issue were fixed, but no amount of prompting got them to peel back the hood. Instead we ended up further from the answer before I decided to give up.
  • You get virtually no information on why an AWS Service is unhappy with your response, an oft recurring theme.

500: Internal Server Error (Not done yet)

Finally, the application came up and I could hit the login page. And the application blew up again.

Thankfully I had logs and could trace the network requests, so it didn’t take too long to figure out that only one cookie was being set, when the auth verifier needed two. This time, Google failed, as it was sure that I wanted a lot of low-value posts about how CloudFront uses a cookie to construct a cache key, but AI came to the rescue.

<<Lots of text that was only somewhat relevant>>

✅ Recommendations

* Use multiValueHeaders for Set-Cookie.

<<More stuff that was not valuable>>

Looking more into it, even if you have a properly concatenated set-cookie response header, ALB will strip everything but the first cookie. I’m sure this makes sense to someone somewhere. @codegenie/serverless-express automatically formats the response correctly with multiValueHeaders set, but does not re-expand the concatenated cookie string. Which is what these lines are for:

if (response?.multiValueHeaders?.['set-cookie']) {
  response.multiValueHeaders['set-cookie'] = response.multiValueHeaders['set-cookie']
    .map((c) => c.split(','))
    .flat();
}
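A quick illustration of what that re-split does, using toy cookie values of my own. One caveat worth knowing: a naive comma split would mangle a cookie carrying an `Expires` attribute, since those dates contain commas, so watch out if you set one.

```javascript
// The bridge hands back one comma-joined set-cookie string; ALB wants one
// array entry per cookie inside multiValueHeaders.
const response = {
  multiValueHeaders: {
    'set-cookie': ['session=abc; Path=/,refresh=xyz; Path=/; HttpOnly'],
  },
};

if (response?.multiValueHeaders?.['set-cookie']) {
  response.multiValueHeaders['set-cookie'] = response.multiValueHeaders['set-cookie']
    .map((c) => c.split(','))
    .flat();
}

console.log(response.multiValueHeaders['set-cookie']);
// [ 'session=abc; Path=/', 'refresh=xyz; Path=/; HttpOnly' ]
```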

Final thoughts

This was supposed to be simple, and in a way it was. The final amount of JavaScript was only 20 or so lines. However, that’s often the case when doing something new. The simplicity of the solution belies the complexity of discovering the answer.

I forgot a core maxim: infrastructure changes are always risky and full of unknowns. Sometimes there isn’t even an incremental way to build in order to isolate changes. The problem was very much the interplay between the different services, and there was no shortcut beyond learning more about them.

While I used AI extensively throughout the project, I feel like it hurt more than it helped. The code it produced was similar to the code in the examples directory, and I can’t help but feel that if I had spent more time looking at that instead of reading AI responses, something would have triggered that led me to the answer faster. However, that’s an untestable hypothesis, as you can’t unlearn what you already know.

In the end, I got the best feedback you can get for a project like this: “It looks like everything is working and is also a lot snappier.”

Original image is CC-licensed [original source]

Tejus Parikh

I'm a software engineer that writes occasionally about building software, software culture, and tech adjacent hobbies. If you want to get in touch, send me an email at [my_first_name]@tejusparikh.com.