Packaging Quickwit for AWS Lambda

In this blog post, we explain why and how Quickwit can run on AWS Lambda efficiently.

We introduced Quickwit on Lambda in a previous blog post (Scaling search to 0 with AWS Lambda) as the new Quickwit Serverless deployment. To recap, Quickwit on Lambda shows sub-second search performance even on cold start. Since it doesn't consume any resources when it's not used, it enables the construction of observability pipelines that cost only a couple of bucks for small workloads of a few GB per day.

In this post, we take a closer look at how we repackaged the Quickwit codebase, originally meant to work as a long-running server, to run as a short-lived cloud function.

AWS Lambda packaging options

Let's start with a quick presentation of AWS Lambda deployment options.

ZIP archives

This is the original way of deploying functions, offered by AWS Lambda ever since the service was launched. The code and its dependencies are packaged into a ZIP file and uploaded to AWS. The content of the archive depends on the runtime. In the case of Python, it will typically be the .py file containing your handler function plus any additional packages and modules it uses (see docs). In Rust, it might just contain a single binary.
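
For illustration, here is a minimal sketch of the kind of Python handler such an archive would contain; the file and function names are arbitrary examples, not something shipped by Quickwit:

```python
# handler.py - placed at the root of the ZIP archive
import json


def handler(event, context):
    """Minimal Lambda handler: the archive only needs this file
    plus any third-party packages it imports."""
    return {
        "statusCode": 200,
        "body": json.dumps({"received": event}),
    }
```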

This packaging reduces the application code that the function needs to download on initialization to a bare minimum, because basic dependencies that are usually part of the Linux distribution, such as the C standard library (libc), are already provided and cached on the machine running the function. The downside is that applications that depend on system libraries can be hard to port to the Amazon Linux distribution used by AWS Lambda.

Container images

AWS Lambda added support for container images in 2020. This made it possible to package the function code and its dependencies into a Docker-compatible container image, which can then be deployed to AWS. This method is more flexible than ZIP archives, as it allows you to use Docker's tooling and ecosystem to manage your function's dependencies and deployment process. If a container image already exists for the application you plan to run on AWS Lambda, chances are you can adapt it with little work into a working AWS Lambda container deployment. The downside is that your package now also needs to wrap system-level dependencies. Even though there are many highly optimized base images (e.g. Alpine, Debian slim…) that make it possible to build relatively small container images, those extra bytes are likely to slow down cold starts.

Packaging Quickwit for Lambda

In this section, we focus in particular on measuring the Lambda initialization (init) time. It is the duration billed during a cold start, before the handler is started. This phase includes the download of the binary and its static execution, up until the point where the runtime can start the handler.
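
On a cold start, this init time shows up in the function's REPORT log line as an "Init Duration" field. As a small illustrative helper (nothing here is specific to Quickwit), it can be extracted from the CloudWatch log lines like this:

```python
# Extract "Init Duration" values from Lambda REPORT log lines, e.g.
# "REPORT RequestId: ... Duration: 3.4 ms ... Init Duration: 112.23 ms"
import re
from typing import Iterable, List

INIT_DURATION_RE = re.compile(r"Init Duration:\s+([\d.]+)\s+ms")


def init_durations_ms(log_lines: Iterable[str]) -> List[float]:
    """Return the init duration (in ms) of every cold start found in the logs."""
    durations = []
    for line in log_lines:
        match = INIT_DURATION_RE.search(line)
        if match:
            durations.append(float(match.group(1)))
    return durations
```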

Quickwit in a Lambda container image

As Quickwit is already distributed as a Docker image, we first tried to run it as a Lambda container image. The results were acceptable, with an average init time slightly above 100ms. But we were disappointed by the standard deviation of our measurements. In particular, we noticed that each time the container image was updated, the very first execution of the function had a much longer cold start. We saw occurrences where it took up to 1 second. After some investigation, we found the likely explanation for this phenomenon in this paper recently released by AWS. Lambda uses multiple layers of caching for container images. Until these caches are warmed up, cold starts can be significantly slower.

Running container-based Lambda functions has another major drawback. When a function is not used for multiple days, it might enter an inactive state. When that happens, new invocations usually fail for dozens of seconds. One of the big benefits of running Quickwit on Lambda is precisely to have a system that scales down to 0 and can be left idle for a long period of time at no cost, so this felt like a rather large inconvenience.

Quickwit with the provided runtime

Given the drawbacks of container-based functions, we decided to explore the possibility of packaging Quickwit as a ZIP archive and using Lambda's provided runtime. Here our choice of Rust really paid off. Thanks to the powerful rustup toolchain, building a binary for any target platform is very straightforward. As the cherry on the cake, the cargo lambda subcommand does the rest of the work by packaging the binary in the exact archive format Lambda expects. This toolchain builds a "hello world" Lambda of just 17KB (uncompressed), which is a good indicator that it's well optimized. End result for the Quickwit Lambda: a self-contained package of 15MB that starts in around 100ms. Not bad!
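
To give a concrete, hypothetical picture of what deploying such an archive involves (the function name, role ARN and archive path below are placeholders, and tools like cargo lambda or CDK can handle this step for you), creating the function with boto3 and the provided runtime looks roughly like this:

```python
# Illustrative sketch: create a Lambda function from a cargo-lambda ZIP archive.
# The function name, role ARN and archive path are placeholders.
import boto3

lambda_client = boto3.client("lambda")

with open("target/lambda/quickwit-searcher/bootstrap.zip", "rb") as f:
    archive = f.read()

lambda_client.create_function(
    FunctionName="quickwit-searcher",
    Runtime="provided.al2",  # custom runtime: the ZIP ships its own bootstrap binary
    Handler="bootstrap",     # required by the API, ignored by the Rust runtime
    Role="arn:aws:iam::123456789012:role/quickwit-lambda-role",  # placeholder
    Code={"ZipFile": archive},
    MemorySize=4096,
    Timeout=30,
    Architectures=["x86_64"],
)
```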

Summary

|                           | Docker image            | Provided runtime   |
| ------------------------- | ----------------------- | ------------------ |
| First cold start          | Between 500ms and 1s    | Consistently 100ms |
| Further cold starts       | Between 100ms and 200ms | Consistently 100ms |
| Transition to inactive    | After a few days/weeks  | Never              |
| Package size (compressed) | 120MB                   | 15MB               |

Deploying the Quickwit Lambda to AWS

There are many ways to deploy resources to the cloud. The most obvious way to interact with a provider like AWS is through its web interface, the AWS Console. But it's often tedious, error-prone and lacks reproducibility. Infrastructure as Code (IaC) has become the de facto standard for managing more complex setups. There are multiple options available:

  • On one hand, you have tools compatible across multiple cloud providers, such as Terraform or Pulumi.
  • On the other hand, you have provider-specific tools such as AWS CloudFormation or AWS CDK.

Even though using a tool that spans multiple providers is appealing, it also comes with some drawbacks when it comes to state storage, which often introduces a dependency on a third-party service such as Terraform Cloud.

Given our current focus on AWS Lambda, we decided to use the AWS native IaC tooling, specifically AWS CDK. The appeal of CDK over CloudFormation is that it enables defining the infrastructure using procedural code instead of a cumbersome markup language. But CDK is nothing more than a wrapper layer on top of CloudFormation (CDK stacks synthesize CloudFormation templates to perform deployments).
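
As a rough sketch of what this looks like (the construct names, archive path and resource sizing below are illustrative assumptions, not the actual stack from our tutorial), a CDK stack deploying a Rust binary on the provided runtime could be written in Python like this:

```python
# Illustrative CDK stack (Python): deploy a Rust binary ZIP on the provided runtime.
# Names, paths and sizing are placeholders, not the actual Quickwit stack.
from aws_cdk import Duration, Stack
from aws_cdk import aws_lambda as lambda_
from constructs import Construct


class QuickwitSearcherStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        lambda_.Function(
            self,
            "QuickwitSearcher",
            runtime=lambda_.Runtime.PROVIDED_AL2,
            architecture=lambda_.Architecture.X86_64,
            handler="bootstrap",  # required, but ignored by the custom runtime
            code=lambda_.Code.from_asset("target/lambda/quickwit-searcher/bootstrap.zip"),
            memory_size=4096,
            timeout=Duration.seconds(30),
        )
```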

Note that stacks generated by AWS CDK can be integrated into a larger IaC configuration defined using another technology. For instance, one might use the CloudFormation template generated by AWS CDK as a Terraform resource.

To discover how to deploy the Quickwit Lambda using CDK, refer to our tutorial.

Possible improvements

One optimization we could aim for is to use ARM-based Lambda functions, which are advertised as delivering up to 34% better price performance. Quickwit works on ARM, so adding this packaging capability should be straightforward. Note nevertheless that we already had some mixed results when running Quickwit on Graviton, so further benchmarking should be performed to confirm the cost gain.
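
Following on from the hypothetical deployment sketches above, switching to Graviton would mostly be a matter of rebuilding the binary for aarch64 and redeploying it with the arm64 architecture (names and paths remain placeholders):

```python
# Illustrative sketch: redeploy an existing function as ARM64 (Graviton).
import boto3

with open("target/lambda/quickwit-searcher/bootstrap.zip", "rb") as f:
    arm_archive = f.read()  # binary rebuilt with cargo lambda for aarch64

boto3.client("lambda").update_function_code(
    FunctionName="quickwit-searcher",  # placeholder name
    ZipFile=arm_archive,
    Architectures=["arm64"],
)
```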

Another follow-up would be to publish the Quickwit Lambda package on the AWS Serverless Application Repository. This should make it easier to deploy Quickwit building blocks in serverless stacks on AWS.

We might also provide deployment scripts for other IaC tools such as Terraform (#4431).

Finally, the techniques introduced here should be transposable to the cloud functions offered by other cloud providers. GCP released its 2nd generation of Cloud Functions in 2022 with a memory limit of 16GB, which also makes it a very good candidate for running Quickwit.

If you are interested in any of these features or other ones, join us on Discord and share your use cases with us!