Customizing the Behavior of AWS Lambda



The Amazon AWS Lambda service allows you to execute a function without worrying about the server your code runs on. However, there's still "a server" running your code. Amazon doesn't share or promise many details of what this server is or how it works (VMs, containers, some proprietary software, etc.). For most basic use cases, you can use Lambda without thinking about the server your code is running on. However, if you want to use and understand advanced features like

  - Lambda layers
  - custom runtimes
  - extensions (internal and external)
  - container-based functions

you'll need to understand a little bit about the environment your code executes in.

We’ll be using the Node.js programming language in our examples below, but the concepts should apply to all languages supported by AWS Lambda.

Execution Environment

The first AWS term we'll discuss is execution environment. This is the server-like environment where your Lambda code runs. Lambda may seem like magic, but like most magic it's just a trick. When Lambda starts your Node.js function, it's still running a command that looks like this

$ node /path/to/file/that/calls/your/function.js

A Node.js process is still running in an operating system. AWS may give this a fancy name like execution environment, but it’s still just code running on a computer.

Lambda and Files

The first time you'll need to think about your Lambda function's execution environment is when you start thinking about files. Most programming languages are built around the idea that you'll organize your code into individual files on a computer. Chances are your Lambda function will grow beyond the confines of a single file: either you'll want to organize your code, or you'll want to use a third-party library distributed via a package management system like npm.

When you outgrow creating files manually via the AWS console, there are two ways to get files into your Lambda function's execution environment. The first is via a ZIP file archive. The second is via Lambda layers.

Zip Archive

When your function grows beyond a single file, Lambda allows you to create a ZIP file archive that contains all the files your Lambda function needs, and then upload that archive to your Lambda function via the GUI, the AWS CLI tool, or Amazon S3. The system will automatically unzip these files and they'll be available to your function.

Lambda Layers

The second way to get files to your Lambda’s execution environment is via the Lambda layers feature. Layers allow you to upload a ZIP archive of files to AWS and assign this archive a globally unique Amazon Resource Name (ARN). Then, individual Lambda functions can “add a layer” by configuring this ARN via their Lambda GUI or AWS CLI.

When you configure your Lambda function with a layer ARN, the service will unzip the layer's files into your execution environment's /opt folder. Lambda configures the execution environments such that your programming language will consider the /opt folder a valid place for libraries or modules.

For example, let's say you have the following file in your layer

    node_modules/foo/index.js

When a Lambda user applies this layer to their function, the system will unzip it to

    /opt/node_modules/foo/index.js

and that user will be able to write code in their Lambda that looks like this

const foo = require('foo')

The built-in Node.js runtimes will know to look for the module foo in both the local ./node_modules folder and the /opt/node_modules folder.


Runtimes

Next up are Lambda runtimes. "Runtime" refers to two distinct features of the AWS Lambda service: built-in runtimes and custom runtimes.

When you set up a Lambda function, you pick the language you want to write your function in. The languages in this list are Lambda's built-in runtimes.

In addition to these built-in runtimes, Lambda allows end users to create custom runtimes. A custom runtime allows you to write a function in a language (such as PHP) that's not supported by a built-in runtime.

To create a custom runtime, users must

  1. Create a function using the "provided" (Amazon Linux 1) or "provided.al2" (Amazon Linux 2) built-in runtime.
  2. Create an executable file named bootstrap that contains either a script or compiled code.
  3. Have this bootstrap program interact with the HTTP-based Runtime API. This API allows end users to register for Lambda events and then listen for those events with long-polling HTTP requests.

The underlying theory of operation is that this bootstrap program will receive an event when your Lambda receives an invocation request, and then take additional steps to "run" the function.
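The loop at the heart of a bootstrap program can be sketched in Node.js. The endpoint paths below are the documented Runtime API routes; everything else (the handler, error handling) is simplified for illustration:

```javascript
// bootstrap sketch -- long poll the Runtime API for events, run the
// function, and post the result back. A real bootstrap would also
// report errors and read the handler name from the environment.
const API = process.env.AWS_LAMBDA_RUNTIME_API; // host:port, set by Lambda

// Build the documented Runtime API URLs
const nextUrl = (api) => `http://${api}/2018-06-01/runtime/invocation/next`;
const responseUrl = (api, id) =>
  `http://${api}/2018-06-01/runtime/invocation/${id}/response`;

async function mainLoop(handler) {
  while (true) {
    // Long poll: this request blocks until Lambda has an event for us
    const res = await fetch(nextUrl(API));
    const requestId = res.headers.get('lambda-runtime-aws-request-id');
    const event = await res.json();

    // "Run" the function, then post the result back to the service
    const result = await handler(event);
    await fetch(responseUrl(API, requestId), {
      method: 'POST',
      body: JSON.stringify(result),
    });
  }
}

// Only start polling when actually running inside a Lambda execution environment
if (API) {
  mainLoop(async (event) => ({ echoed: event }));
}
```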


Extensions

The next Lambda API we're going to look at is extensions. There are two categories of extensions in Lambda: external extensions and internal extensions.

External Extensions

External extensions allow you to start a secondary process in your Lambda's execution environment and listen for events from the HTTP-based Extensions API. This Extensions API allows you to register for and receive events from your Lambda function. Events are listened for with long-polling HTTP requests.

The design of this API is very similar to the Runtime API, with two major differences: extensions aren't responsible for processing the Lambda function invocation, and extensions have additional events (e.g. SHUTDOWN) that they can listen for.

To create an external extension, users add an executable file to their Lambda at

    /opt/extensions/name-of-extension

When creating your execution environment the system scans the /opt/extensions folder for programs to run as extensions. External Extensions are typically distributed as Lambda layers.
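An external extension's lifecycle can be sketched in Node.js as well. The routes below are the documented Extensions API endpoints; the extension name and the bare-bones event loop are illustrative (a real extension would also act on the events it receives):

```javascript
// extension sketch -- register with the Extensions API, then long poll
// for events until Lambda tells us to shut down.
const API = process.env.AWS_LAMBDA_RUNTIME_API; // host:port, set by Lambda

const registerUrl = (api) => `http://${api}/2020-01-01/extension/register`;
const nextEventUrl = (api) => `http://${api}/2020-01-01/extension/event/next`;

async function run() {
  // Step 1: register, telling Lambda which events we want to receive
  const res = await fetch(registerUrl(API), {
    method: 'POST',
    headers: { 'Lambda-Extension-Name': 'name-of-extension' },
    body: JSON.stringify({ events: ['INVOKE', 'SHUTDOWN'] }),
  });
  const id = res.headers.get('lambda-extension-identifier');

  // Step 2: long poll for events; each request blocks until one arrives
  while (true) {
    const event = await fetch(nextEventUrl(API), {
      headers: { 'Lambda-Extension-Identifier': id },
    }).then((r) => r.json());
    if (event.eventType === 'SHUTDOWN') break;
  }
}

// Only start polling when actually running inside a Lambda execution environment
if (API) {
  run();
}
```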

Internal Extensions

Internal extensions are a different (and less complex) feature from external extensions. An internal extension allows you to customize how a built-in runtime starts up your language's environment.

There are two ways to influence how Lambda starts up your language's environment. The first is support for environment variables (like NODE_OPTIONS for the Node.js runtimes) that allow you to add additional command-line startup flags.

These variables and flags aren't anything specific to Lambda; they're built into the languages themselves (e.g. the Node.js documentation covers NODE_OPTIONS). The Lambda service simply ensures that, when these variables are set via the GUI console, its built-in runtimes honor them.
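As a quick sanity check, flags injected via NODE_OPTIONS show up in process.execArgv, so a function can confirm the startup options its runtime was actually launched with. The flag mentioned in the comment is just an example value:

```javascript
// Startup flags (including any injected via NODE_OPTIONS) are visible in
// process.execArgv, which is handy for confirming that configuration you
// set in the Lambda console actually took effect.
console.log('startup flags:', process.execArgv); // e.g. ['--enable-source-maps']
console.log('NODE_OPTIONS:', process.env.NODE_OPTIONS || '(not set)');
```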

When these variables aren't enough, Lambda also allows you to create a wrapper script that wraps the execution of your language's runtime. The docs include an example (a bash script, targeting the Python runtime)


#!/bin/bash

# the path to the interpreter and all of the originally intended arguments
args=("$@")

# the extra options to pass to the interpreter
extra_args=("-X" "importtime")

# insert the extra options
args=("${args[@]:0:$#-1}" "${extra_args[@]}" "${args[@]: -1}")

# start the runtime with the extra options
exec "${args[@]}"

The theory of operation for these wrapper scripts is that, instead of directly invoking your language's runtime via the CLI, Lambda invokes this wrapper script, which is then responsible for starting the runtime.

There’s no fixed name or location for this wrapper script. Instead you ensure your wrapper script is uploaded to your Lambda via whatever means you choose (zip archive, layers, manually created), and then you set the AWS_LAMBDA_EXEC_WRAPPER variable to point at this script.


Containers

There's one more thing to talk about with regard to features for customizing the behavior of your Lambdas, and that's containers. Lambda didn't launch with container support, but in December of 2020 Amazon announced that it was now possible to use a container image to create a Lambda function. Because containers were added late in the game, many of the features we've talked about work a little differently in containers.

First: zip archives and layers aren't supported. Instead of using these features to get files into your Lambda, you'll need to build your container (i.e. in your Dockerfile) such that any additional files your function needs are added to the container image itself.

For custom runtimes, instead of uploading a bootstrap file, Amazon offers a set of base container images that can be used to implement a custom runtime.

Building extensions (both internal and external) works the same with containers, except that, again, you'll need to manually add your executable extension file to your container rather than distributing it via a layer, since layers aren't supported in containers.

Further Reading

The tools available to customize the behavior of the Lambda service are powerful, but as you've seen, there's not a fully coherent story behind how best to customize things. I hope this article has provided you with a good starting point for learning more. If you're the sort of person who learns best from completed examples, there are two additional resources you might find useful

Copyright © Alan Storm 1975 – 2021 All Rights Reserved

Originally Posted: 18th July 2021