Deploy Big Python Packages To AWS with Serverless

Aamir
3 min read · Mar 16, 2021

Recently I was working on some ML code with large dependencies that had to be shipped alongside it, and the resulting package, including all dependencies, was too big to deploy to AWS. So we looked for a way to separate the dependencies from the actual code.

After some initial research, we found two approaches:

Amazon EFS

Amazon Elastic File System (EFS) is a cloud storage service from Amazon Web Services that provides scalable, elastic, concurrent (with some restrictions), and encrypted file storage for use with both AWS cloud services and on-premises resources. The idea behind using EFS is to separate code from dependencies: instead of building the dependencies into the deployment package, we keep them on EFS and let the Python code load them from the EFS mount. Below are the steps we used for the EFS setup:
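At runtime, the Lambda handler then needs to be able to import packages from the EFS mount. A minimal sketch of how that can be done, assuming the file system is mounted at /mnt/efs (an illustrative path, not necessarily the one from our setup):

```python
import sys

# Path where the EFS file system is mounted inside the Lambda
# container (illustrative; must match the function's EFS config).
EFS_PACKAGES_PATH = "/mnt/efs"

# Prepend the EFS directory so imports resolve packages stored there.
if EFS_PACKAGES_PATH not in sys.path:
    sys.path.insert(0, EFS_PACKAGES_PATH)
```

With this in place, `import numpy` (or any other package installed into the EFS directory) resolves from the mount rather than the deployment package.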

Setup an EFS storage

Template for setting up the EFS file system via CloudFormation in serverless.yaml. This should go inside the resources section:

Type: AWS::EFS::FileSystem
Properties:
  BackupPolicy:
    BackupPolicy
  Encrypted: Boolean
  FileSystemPolicy: Json
  FileSystemTags:
    - ElasticFileSystemTag
  KmsKeyId: String
  LifecyclePolicies:
    - LifecyclePolicy
  PerformanceMode: String
  ProvisionedThroughputInMibps: Double
  ThroughputMode: String
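For reference, a minimal concrete version of this resource might look like the following (the logical name EfsStorage and the property values are illustrative, not taken from our actual setup):

```yaml
resources:
  Resources:
    EfsStorage:
      Type: AWS::EFS::FileSystem
      Properties:
        Encrypted: true
        PerformanceMode: generalPurpose
        ThroughputMode: bursting
        FileSystemTags:
          - Key: Name
            Value: lambda-dependencies
```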

Mounting EFS storage to local directory

$ sudo mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport mount-target-IP:/   ~/efs

Walkthrough: Create and Mount a File System On-Premises with AWS Direct Connect and VPN — Amazon Elastic File System

In order to mount the directory, we needed to install a small utility, amazon-efs-utils, on our local Mac machines. This utility requires macOS Big Sur, so we had to update the OS first.

Below are the commands for installing amazon-efs-utils:

brew tap aws/homebrew-aws
brew install amazon-efs-utils

Also, the default security group doesn't work, because communication needs to be opened on some ports (EFS uses NFS, which listens on TCP port 2049), so we created a new security group with the required inbound rules.
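As a sketch, a security group allowing NFS traffic to the EFS mount targets could be declared in the resources section like this (the logical name, VPC ID, and CIDR range are illustrative assumptions):

```yaml
resources:
  Resources:
    EfsSecurityGroup:
      Type: AWS::EC2::SecurityGroup
      Properties:
        GroupDescription: Allow NFS traffic to EFS mount targets
        VpcId: vpc-0123456789abcdef0    # illustrative; use your VPC ID
        SecurityGroupIngress:
          - IpProtocol: tcp
            FromPort: 2049              # NFS port used by EFS
            ToPort: 2049
            CidrIp: 10.0.0.0/16         # illustrative client CIDR range
```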

After completing all this configuration, running the mount command worked and we were able to mount the EFS storage to our local directory.

Separately building our dependencies

The first step was to remove the dependencies from obscura, which was easily achieved by removing the serverless-python-requirements plugin. The next step was to build the dependencies separately.

For that we used a plugin that acts as a serverless hook and allows code to run before or after different phases of the serverless build. We used serverless-scriptable-plugin, which lets us declare hooks in serverless.yaml and run code or shell commands at those hooks, for example:

custom:
  scriptHooks:
    before:package:createDeploymentArtifacts: npm run build

https://www.npmjs.com/package/serverless-scriptable-plugin

We would then need to run something to export our dependencies to the EFS folder, for example (if using Poetry):

poetry export -f requirements.txt --without-hashes > requirements.txt
pip install -r requirements.txt -t ~/.efs-storage
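For the Lambda function to actually see these packages at runtime, the EFS access point must be mounted into the function. With the Serverless Framework this is configured per function via fileSystemConfig (the access point ARN and mount path below are illustrative):

```yaml
functions:
  app:
    handler: handler.main
    fileSystemConfig:
      # Illustrative access point ARN; replace with your own
      arn: arn:aws:elasticfilesystem:us-east-1:123456789012:access-point/fsap-0123456789abcdef0
      localMountPath: /mnt/efs
```

Note that the function must also be attached to a VPC with access to the EFS mount targets for this to work.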

Docker with AWS lambda

The second approach we tried was to dockerize obscura and upload it as a Docker image. In this approach, obscura is deployed as an image to Amazon ECR (Elastic Container Registry) and is still accessible through normal endpoints.

https://www.serverless.com/blog/container-support-for-lambda

A thorough Medium blog on AWS Lambda and Docker: https://towardsdatascience.com/serverless-bert-with-huggingface-aws-lambda-and-docker-4a0214c77a6f

The advantages of this approach are:

  • Serverless has good support for Docker containers, so we don't need to add any extra plugins.
  • We don't need to add more CloudFormation resources, because Serverless natively creates the Amazon ECR repository; during serverless deploy it builds the image and uploads it to the repository it created, which makes the whole flow really straightforward.
  • The maximum size of a deployed container image is 10 GB, which gives us plenty of room.

Steps to dockerize the serverless project and upload it as a container

Create a Dockerfile in the project

In the Dockerfile, we need to install Poetry (pip install poetry) in the container and then install our dependencies in the appropriate place using

poetry export -f requirements.txt --without-hashes > requirements.txt && pip install -r requirements.txt -t ${LAMBDA_TASK_ROOT}/
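Putting it together, a minimal Dockerfile might look like this (the base image tag, handler module name, and file layout are illustrative assumptions, not our exact setup):

```dockerfile
# AWS-provided Lambda base image for Python (tag is illustrative)
FROM public.ecr.aws/lambda/python:3.9

# Install Poetry, export the locked dependencies, and install them
# into the Lambda task root so the runtime can import them
RUN pip install poetry
COPY pyproject.toml poetry.lock ./
RUN poetry export -f requirements.txt --without-hashes > requirements.txt && \
    pip install -r requirements.txt -t ${LAMBDA_TASK_ROOT}/

# Copy the application code (handler.py is an illustrative name)
COPY handler.py ${LAMBDA_TASK_ROOT}/

# Entry point in module.function form for the Lambda handler
CMD ["handler.main"]
```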

Configure Serverless.yaml

In the provider section of serverless.yaml, we need to add the ecr tag:

provider:
  name: aws
  ecr:
    images:
      obscura-image:
        path: ./

The image name here, obscura-image, should then be used as the image tag in the functions section:

functions:
  app:
    image: obscura-image

Here we are referencing the image defined under the ecr tag. When the ecr tag is specified, Serverless automatically builds an image named obscura-image and uploads it to the newly created ECR repository. The repository is always created with the name format serverless-&lt;service-name&gt;-v1.

That's it. Once we run sls deploy, Serverless automatically deploys the project as an image to ECR.

We are more inclined towards the Docker approach, as it is straightforward and easy, and Serverless has good support for it.


Aamir is a senior software engineer at Harman Connected Services, always enthusiastic to learn new technologies and experiment.