How to Write a Dockerfile for a Node.js Application: Multi-Environment Builds with ARGs, ENVs, and Health Checks

Beginner

Why Your Dockerfile Matters More Than You Think

If you’ve ever heard someone say “it works on my machine,” you already understand the core problem Docker solves. Docker packages your application and everything it needs to run — the runtime, dependencies, environment variables, configuration — into a single portable image. That image runs the same way everywhere: your laptop, a teammate’s machine, a staging server, or a production Kubernetes cluster.

But here’s the thing most beginner tutorials skip: a working Dockerfile and a production-ready Dockerfile are very different things. A working Dockerfile gets your app running. A production-ready Dockerfile handles different environments (development vs. production), uses build arguments to stay flexible, manages environment variables safely, and includes health checks so your orchestrator knows when your app is actually ready to serve traffic.

In this guide, we’re going to build both — starting simple, then layering in real-world practices step by step. By the end, you’ll have a Dockerfile you can genuinely be proud of.

Prerequisites

  • Docker installed locally (docs.docker.com/get-docker)
  • Node.js 20 LTS installed (we’ll use it to create a sample app)
  • Basic comfort with the terminal
  • No prior Docker expertise required — we’ll explain everything

Step 1: Create a Simple Node.js Application

Let’s start with a minimal but realistic Express application. Create a new directory and initialize it:

mkdir node-docker-demo
cd node-docker-demo
npm init -y
npm install express

Now create your main application file. The following command writes app.js in one step:

cat > app.js << 'EOF'
const express = require('express');
const app = express();

const PORT = process.env.PORT || 3000;
const NODE_ENV = process.env.NODE_ENV || 'development';

app.get('/', (req, res) => {
  res.json({
    message: 'Hello from Node.js!',
    environment: NODE_ENV,
    uptime: process.uptime(),
  });
});

app.get('/health', (req, res) => {
  res.status(200).json({ status: 'ok' });
});

app.listen(PORT, () => {
  console.log(`Server running in ${NODE_ENV} mode on port ${PORT}`);
});
EOF

Notice two things here: we read PORT and NODE_ENV from environment variables, with sensible defaults. This is a healthy habit — your app should be configurable from the outside, not have values baked in.

Also notice the /health endpoint. This is a simple route that returns HTTP 200 when the app is alive. We’ll wire this into our Docker health check later.
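
If you want to sanity-check the app before we containerize it, run it directly and override the defaults from your shell (port 8080 here is an arbitrary choice):

PORT=8080 NODE_ENV=staging node app.js
# in a second terminal:
curl http://localhost:8080/health

You should get back {"status":"ok"}, and the startup log should report staging mode on port 8080.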

Step 2: Your First (Naive) Dockerfile

Let’s write the simplest possible Dockerfile first, then improve it together:

cat > Dockerfile << 'EOF'
FROM node:20

WORKDIR /app

COPY package*.json ./
RUN npm install

COPY . .

EXPOSE 3000

CMD ["node", "app.js"]
EOF

Let's break down every line:

  • FROM node:20 — Use the official Node.js 20 image as our base. This is a Debian-based image with Node pre-installed.
  • WORKDIR /app — Set the working directory inside the container. All subsequent commands run from here.
  • COPY package*.json ./ — Copy only the package files first. This is intentional — we'll explain why in a moment.
  • RUN npm install — Install dependencies.
  • COPY . . — Now copy the rest of your application code.
  • EXPOSE 3000 — Document that this container listens on port 3000. Note: this does not publish the port — it's informational (see the example after this list).
  • CMD ["node", "app.js"] — The command to run when the container starts.

Why Copy package.json Before Your Code?

This is one of the most important Docker optimization patterns. Docker builds images in layers. Each instruction creates a layer, and layers are cached. If you copy your code first and then run npm install, every single code change invalidates the npm install cache and Docker reinstalls all packages — even if package.json didn't change.

By copying package*.json first and running npm install before copying your code, you ensure that npm only reinstalls packages when package.json actually changes. This makes your builds dramatically faster.
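
You can see the caching for yourself once you've built the image in Step 4. Change only application code and rebuild; the npm install step should be reported as CACHED in the build output, while the final COPY layer runs again:

# edit app.js (e.g. change the message string), then:
docker build -t node-docker-demo:latest .

If you change package.json instead, the cache for the npm install layer is invalidated, exactly as intended.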

Step 3: Add a .dockerignore File

Before we build, let's create a .dockerignore file. This is the Docker equivalent of .gitignore — it tells Docker which files to exclude from the build context.

cat > .dockerignore << 'EOF'
node_modules
npm-debug.log
.git
.gitignore
.env
*.md
EOF

The most critical exclusion here is node_modules. Without this, Docker would copy your local node_modules folder into the image — which is slow and can cause issues if your local OS differs from the container OS (for example, native binaries compiled for macOS won't work inside a Linux container).

Excluding .env is a security habit. You never want secret files baked into an image.
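
One way to convince yourself the exclusions work is to list the image's working directory after the build in Step 4. Files matched by .dockerignore never enter the build context, so COPY can't pick them up; the node_modules you will see in the listing was created by npm install inside the image, not copied from your machine:

docker run --rm node-docker-demo:latest ls -a /app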

Step 4: Build and Test Your First Image

docker build -t node-docker-demo:latest .

You should see output like this:

[+] Building 18.3s (9/9) FINISHED
 => [internal] load build definition from Dockerfile
 => [internal] load .dockerignore
 => [1/5] FROM docker.io/library/node:20
 => [2/5] WORKDIR /app
 => [3/5] COPY package*.json ./
 => [4/5] RUN npm install
 => [5/5] COPY . .
 => exporting to image

Now run it:

docker run -p 3000:3000 node-docker-demo:latest

Expected output:

Server running in development mode on port 3000

Open your browser or use curl:

curl http://localhost:3000
{"message":"Hello from Node.js!","environment":"development","uptime":2.345}

It works. Now let's make it production-ready.

Step 5: Understanding ARG vs ENV — A Critical Distinction

This is where many beginners get confused, so let's be very clear.

Feature                      | ARG                                                        | ENV
Available during             | Build time only                                            | Build time AND runtime
Visible in running container | No                                                         | Yes
Set by caller with           | --build-arg                                                | -e or --env at runtime
Use case                     | Build-time configuration (e.g., Node version, app version) | Runtime configuration (e.g., NODE_ENV, PORT)
Shows in docker inspect      | No (after build)                                           | Yes

Common mistake: Using ARG for secrets because you think they're "hidden." They're not truly safe — ARG values can be inspected in the build history with docker history --no-trunc. Never pass secrets via ARG or ENV. Use Docker secrets or a secrets manager for sensitive values.
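
You can verify both halves of that table once you've built and started the image from Steps 6 and 7. Build arguments are recorded in the image's build history (which is exactly why they're unsafe for secrets), while ENV values are visible on the image and on every container created from it:

docker history --no-trunc node-docker-demo:prod
docker inspect --format '{{json .Config.Env}}' node-docker-demo:prod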

Step 6: Add ARGs and ENVs to Your Dockerfile

Let's rewrite our Dockerfile with proper ARG and ENV usage:

cat > Dockerfile << 'EOF'
# ---- Build Arguments (available at build time only) ----
ARG NODE_VERSION=20
ARG APP_VERSION=1.0.0

# ---- Base Image ----
FROM node:${NODE_VERSION}-alpine

# ---- Labels (good practice for image metadata) ----
# Re-declare APP_VERSION: ARGs defined before FROM are only in scope
# for the FROM line itself, so we bring it back into this build stage.
ARG APP_VERSION
LABEL maintainer="yourname@example.com"
LABEL version="${APP_VERSION}"
LABEL description="Node.js demo application"

# ---- Working Directory ----
WORKDIR /app

# ---- Runtime Environment Variables (with safe defaults) ----
ENV NODE_ENV=production
ENV PORT=3000

# ---- Dependencies ----
COPY package*.json ./

# Install only production dependencies (--omit=dev skips devDependencies)
RUN npm ci --omit=dev

# ---- Application Code ----
COPY . .

# ---- Non-root User (security best practice) ----
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser

# ---- Port ----
EXPOSE ${PORT}

# ---- Health Check ----
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
  CMD wget -qO- http://localhost:${PORT}/health || exit 1

# ---- Start Command ----
CMD ["node", "app.js"]
EOF

Let's go through the new and improved parts carefully.

Alpine Base Image

We switched from node:20 to node:20-alpine. Alpine Linux is a minimal Linux distribution. The full node:20 image is roughly 1.1GB. The Alpine variant is typically around 180MB. Smaller images mean faster pulls, less attack surface, and lower storage costs.
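
You can compare the sizes on your own machine; the exact numbers vary by release, but the gap is consistently large:

docker pull node:20
docker pull node:20-alpine
docker images node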

npm ci Instead of npm install

npm ci is designed for automated environments. It installs exactly what's in your package-lock.json, fails if the lockfile doesn't match package.json, and is generally faster and more deterministic than npm install. Always use npm ci in your Dockerfiles.

The --omit=dev flag tells npm to skip packages listed under devDependencies — you don't need test runners, linters, or TypeScript compilers in your production image.

Non-root User

By default, Docker containers run as root. This is a security risk — if an attacker breaks out of your application, they'd have root access inside the container. Creating a dedicated non-root user and switching to it with USER appuser is a simple, high-value security practice.
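
Once you've built the image in Step 7, you can confirm the process no longer runs as root:

docker run --rm node-docker-demo:prod whoami

This should print appuser rather than root.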

The HEALTHCHECK Instruction

This is one of the most underused Docker features. The HEALTHCHECK instruction tells Docker how to test whether your container is healthy. Let's break down the flags:

  • --interval=30s — Run the health check every 30 seconds
  • --timeout=5s — If the check takes longer than 5 seconds, consider it failed
  • --start-period=10s — Give the app 10 seconds to start up before counting failures
  • --retries=3 — After 3 consecutive failures, mark the container as unhealthy

We use wget here because Alpine includes it by default, whereas curl is not pre-installed in the Alpine image. The -qO- flags tell wget to run quietly and output to stdout. If the command fails (non-zero exit code), the health check fails.

When you run this in Docker Compose or Kubernetes, the orchestrator uses this health status to decide whether to route traffic to the container or restart it.

Step 7: Build for Different Environments

Now let's see ARG in action. Build a production image using the defaults:

docker build -t node-docker-demo:prod .

Build with a custom Node version and app version:

docker build \
  --build-arg NODE_VERSION=20 \
  --build-arg APP_VERSION=2.1.0 \
  -t node-docker-demo:2.1.0 .

Run the production container:

docker run -d \
  --name my-node-app \
  -p 3000:3000 \
  -e NODE_ENV=production \
  -e PORT=3000 \
  node-docker-demo:prod

Run a development-style container (overriding the ENV default):

docker run -d \
  --name my-node-app-dev \
  -p 3001:3000 \
  -e NODE_ENV=development \
  node-docker-demo:prod
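
Hit both containers to confirm the override took effect; the environment field in the JSON response should read production on port 3000 and development on port 3001:

curl http://localhost:3000
curl http://localhost:3001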

Check that the health check is working:

docker ps

Look at the STATUS column. After the start period, you should see:

CONTAINER ID   IMAGE                  STATUS                    PORTS
a1b2c3d4e5f6   node-docker-demo:prod  Up 45s (healthy)          0.0.0.0:3000->3000/tcp

The (healthy) status confirms your health check is passing. If you saw (health: starting), just wait — the container is still in its start period.
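
For more detail than docker ps provides, ask Docker for the container's full health state, which includes the output of recent probes:

docker inspect --format '{{json .State.Health}}' my-node-app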

Step 8: Inspect and Debug

View the environment variables currently set in a running container:
