Deploying AWS Lambda functions with Terraform

One thing I love about Terraform is how declarative it is. Managing Lambda functions with Terraform is a blast. You can also use Terraform to deploy Lambda function code; however, there are two issues with this:

  • If you have a development team churning out code, they would need to learn some amount of Terraform
  • Your developers would also need to learn the CI/CD pipeline associated with your infrastructure

Now what to do? You don't want to slow down development, but you also want to manage your cloud infrastructure with IaC (Infrastructure as Code).

This post will go over a middle ground. First, let us look at what a typical Terraform deployment looks like:

resource "aws_lambda_function" "test_lambda" {
  filename      = "lambda_function_payload.zip"
  function_name = "lambda_function_name"
  role          = aws_iam_role.iam_for_lambda.arn
  handler       = "exports.test"

  source_code_hash = filebase64sha256("lambda_function_payload.zip")

  runtime = "nodejs12.x"
}

The main issue with this approach is that it couples the payload location (a local zip file) to the Terraform code.

So let us try to decouple this. The better approach is to use an S3 bucket:

resource "aws_lambda_function" "test_lambda" {
  function_name = "lambda_function_name"
  role          = aws_iam_role.iam_for_lambda.arn
  handler       = "exports.test"

  source_code_hash = "<source-code-hash>"
  s3_bucket        = "bucket-for-lambda-zips"
  s3_key           = "path/lambda_function_payload.zip"

  runtime = "nodejs12.x"
}

So now we have decoupled the Lambda artifact from the Terraform code. However, there is still the issue that source_code_hash (and possibly other attributes) must be updated each time the developers change the code.

A good way to solve this is to simply place the source code hash in the bucket alongside the zip.

The deployment code will look like so:

openssl dgst -sha256 -binary lambda_function_payload.zip | openssl enc -base64 | tr -d "\n" > lambda_function_payload.zip.base64sha256
aws s3 cp --content-type text/plain lambda_function_payload.zip.base64sha256 s3://bucket-for-lambda-zips/path/lambda_function_payload.zip.base64sha256

aws lambda update-function-code --function-name lambda_function_name --s3-bucket bucket-for-lambda-zips --s3-key path/lambda_function_payload.zip

The first two lines need to be added to the developers' CI/CD pipeline: they generate the base64-encoded SHA-256 of the payload and push it as a text/plain object to the S3 bucket that Terraform will reference. This is needed if you want to keep source_code_hash in state. (The zip itself must of course also be uploaded to the same bucket and key that Terraform points at.)

The third line performs the actual deployment.
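As an aside, if your CI image doesn't have openssl available, the same value can be computed with a few lines of Python. This is just a sketch (the filename below reuses the example payload from above); the function mirrors what Terraform's filebase64sha256() produces:

```python
import base64
import hashlib
import sys


def filebase64sha256(path: str) -> str:
    # Same value as Terraform's filebase64sha256():
    # the raw SHA-256 digest of the file, base64-encoded.
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    return base64.b64encode(digest).decode("ascii")


if __name__ == "__main__" and len(sys.argv) > 1:
    # e.g. python hash.py lambda_function_payload.zip > lambda_function_payload.zip.base64sha256
    # No trailing newline, matching the openssl | tr -d "\n" pipeline above.
    sys.stdout.write(filebase64sha256(sys.argv[1]))
```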

So now, how do we reference a plain object in an S3 bucket from the resource definition?

Well, there is an easy solution for that: simply reference it as a data source.

data "aws_s3_bucket_object" "test_lambda_function_hash" {
  bucket = "bucket-for-lambda-zips"
  key    = "path/lambda_function_payload.zip.base64sha256"
}

resource "aws_lambda_function" "test_lambda" {
  function_name    = "lambda_function_name"
  role             = aws_iam_role.iam_for_lambda.arn
  handler          = "exports.test"
  publish          = true
  source_code_hash = data.aws_s3_bucket_object.test_lambda_function_hash.body
  s3_bucket        = "bucket-for-lambda-zips"
  s3_key           = "path/lambda_function_payload.zip"

  runtime = "nodejs12.x"

  lifecycle {
    ignore_changes = [
      source_code_hash,
      last_modified,
      qualified_arn,
      version,
    ]
  }
}

The lifecycle{} block is needed so that when source_code_hash (and other deployment-related attributes) change outside of Terraform, they don't show up as a Terraform change to be applied.

This pattern has been extremely helpful. It allows one to manage Lambda resources with Terraform without constraining developer behavior too much.

At this point you might want to point out that source_code_hash is actually optional. It is! But I have found it helpful to keep the hash in state. This approach is also helpful when you are transitioning from an existing S3-based deployment that already uses hashes and don't want to rock the boat.

The key is the set of lifecycle{} exceptions. Make sure you add them!

Anyway hope this helps you!

Have fun!