Custom container runtimes
Nitric builds applications by identifying their entrypoints, which are typically defined in the nitric.yaml file as services. Each entrypoint in a Nitric app is built into its own container using Docker, then deployed to a cloud container runtime such as AWS Lambda, Google Cloud Run or Azure Container Apps.
The Nitric CLI decides how to build those containers based on the programming language used by the entrypoint. For example, if the entrypoint is a Python file, it will be built using Nitric's Python dockerfile template. These dockerfile templates are designed with compatibility and ease of use in mind, which makes building applications convenient, but they may not include additional dependencies your code relies on or provide the ideal optimizations for your application.
If you need to customize the docker container build process, whether to add dependencies, optimize container size, support a new language or for any other reason, you can create a custom dockerfile template to be used by some or all of the entrypoints (services) in your application.
Add a new custom runtime
Add a new custom runtime in the runtimes configuration. To use the runtime, specify the runtime key per service as shown below.
```yaml
name: custom-example
services:
  - match: services/*.ts
    runtime: 'custom-node' # specify custom runtime
    start: npm run dev:services $SERVICE_PATH
runtimes:
  custom-node:
    # All services that specify the 'custom-node' runtime will be built using this dockerfile
    dockerfile: ./docker/node.dockerfile
    args: {}
```
In this example we're specifying that any handlers that match the path services/*.ts will use the custom node.dockerfile as their dockerfile template.
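With this configuration, the project layout might look something like the following (file and directory names are illustrative):

```
.
├── docker/
│   └── node.dockerfile
├── services/
│   └── example.ts
└── nitric.yaml
```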
Create a dockerfile template
It's important to note that the custom dockerfile you create needs to act as a template. This can look a bit different from how you might have written dockerfiles in the past: since the same template file is used for every service that matches the configuration, the entrypoint uses a variable containing the service's filename.
Here are some example dockerfiles:
Note the $HANDLER variable, which specifies the handler to execute in the final container.
```dockerfile
FROM python:3.11-slim

ARG HANDLER
ENV HANDLER=${HANDLER}
ENV PYTHONUNBUFFERED=TRUE

RUN apt-get update -y && \
    apt-get install -y ca-certificates && \
    update-ca-certificates

RUN pip install --upgrade pip pipenv

# Copy either requirements.txt or Pipfile
COPY requirements.tx[t] Pipfil[e] Pipfile.loc[k] ./

# Guarantee lock file if we have a Pipfile and no Pipfile.lock
RUN (stat Pipfile && pipenv lock) || echo "No Pipfile found"

# Output a requirements.txt file for final module install if there is a Pipfile.lock found
RUN (stat Pipfile.lock && pipenv requirements > requirements.txt) || echo "No Pipfile.lock found"

RUN pip install --no-cache-dir -r requirements.txt

COPY . .

ENTRYPOINT python $HANDLER
```
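The earlier configuration referenced a ./docker/node.dockerfile template. A minimal sketch of one, assuming esbuild is used to bundle the TypeScript handler (this is illustrative, not Nitric's official template), might look like:

```dockerfile
FROM node:22-alpine

ARG HANDLER
ENV HANDLER=${HANDLER}

WORKDIR /app

# Install dependencies first to take advantage of Docker layer caching
COPY package*.json ./
RUN npm ci

COPY . .

# Bundle the handler passed in via the HANDLER build arg into a single file
# (esbuild here is an assumption; any bundler or transpilation step could work)
RUN npx esbuild ${HANDLER} --bundle --platform=node --outfile=lib/index.js

ENTRYPOINT ["node", "lib/index.js"]
```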
Create an ignore file
Custom dockerfile templates also support co-located dockerignore files. If your custom docker template is at path ./docker/node.dockerfile, you can create an ignore file at ./docker/node.dockerfile.dockerignore.
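For example, an ignore file for the Node.js template above might exclude local dependency and build output directories (the entries are illustrative):

```
node_modules
lib
.git
```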
Create a monorepo with custom runtimes
Nitric supports monorepos via the custom runtime feature, which allows you to change the build context of your docker build. To use a custom runtime in a monorepo, specify the runtime key per service definition as shown below.
Example for Turborepo
Turborepo is a monorepo tool for JavaScript and TypeScript that allows you to manage multiple packages in a single repository. In this example, we'll use a custom runtime and dockerfile to build a service within a Turborepo monorepo.
```yaml
name: guestbook-app
services:
  - match: services/*.ts
    runtime: turbo
    type: ''
    start: npm run dev:services $SERVICE_PATH
runtimes:
  turbo:
    dockerfile: ./turbo.dockerfile # the custom dockerfile
    context: ../../ # the context of the docker build
    args:
      TURBO_SCOPE: 'guestbook-api'
```
```dockerfile
FROM node:alpine AS builder
ARG TURBO_SCOPE
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
RUN apk update
# Set working directory
WORKDIR /app
RUN yarn global add turbo
# copy from root of the mono-repo
COPY . .
RUN turbo prune --scope=${TURBO_SCOPE} --docker

# Add lockfile and package.json's of isolated subworkspace
FROM node:alpine AS installer
ARG TURBO_SCOPE
ARG HANDLER
RUN apk add --no-cache libc6-compat
RUN apk update
WORKDIR /app
RUN yarn global add typescript @vercel/ncc turbo

# First install dependencies (as they change less often)
COPY .gitignore .gitignore
COPY --from=builder /app/out/json/ .
COPY --from=builder /app/out/yarn.lock ./yarn.lock
RUN yarn install --frozen-lockfile --production

# Build the project and its dependencies
COPY --from=builder /app/out/full/ .
COPY turbo.json turbo.json
RUN turbo run build --filter=${TURBO_SCOPE} -- ./${HANDLER} -m --v8-cache -o lib/

FROM node:alpine AS runner
ARG TURBO_SCOPE
WORKDIR /app
COPY --from=installer /app/backends/${TURBO_SCOPE}/lib .
ENTRYPOINT ["node", "index.js"]
```
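Conceptually, this configuration produces a build roughly equivalent to running docker build with the monorepo root as the context (the exact invocation the CLI performs may differ, and the handler path here is hypothetical):

```bash
# Run from the directory containing nitric.yaml; paths are illustrative
docker build \
  -f ./turbo.dockerfile \
  --build-arg TURBO_SCOPE=guestbook-api \
  --build-arg HANDLER=services/api.ts \
  ../../
```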