
Case study

Deploying a Client Node.js Website with GitLab CI/CD and Kubernetes

Hero overview of GitLab, Docker, Kubernetes and live website

Use case

A web-design studio delivered a new client application and needed the infrastructure side completed quickly: take an existing GitLab repository with a Node.js application, build it with npm run build, and make it available as a live website on the client domain, with a CI/CD path ready for future releases.

Input data

  • Source code in a GitLab repository
  • Node.js application
  • Production build command: npm run build

Deliverables

  • A running website on the client domain
  • A GitLab pipeline that builds and publishes the application container image on every relevant push
  • Kubernetes deployment prepared through a reusable Helm-based approach
  • SSL issued automatically during deployment

The full solution was delivered in about two hours.


Step 1. Preparing the application for containerized delivery

The first task was to package the application in a predictable, repeatable way so that the same artifact could be built in CI and deployed into Kubernetes.

The initial working Dockerfile was prepared directly in the same repository:

FROM node:22.12.0-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

FROM node:22.12.0-alpine AS production
WORKDIR /app
COPY --from=build /app/.output ./.output
EXPOSE 3000
CMD ["node", "/app/.output/server/index.mjs"]

This already uses a multi-stage build, which is the correct direction for production delivery.
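
Before wiring anything into CI, the image can be sanity-checked locally. A minimal sketch, assuming Docker is installed locally; the image name is illustrative:

# Build the image from the repository root
docker build -t client-landing:local .

# Run it and publish the application port
docker run --rm -p 3000:3000 client-landing:local

# In a second terminal, the site should answer on the published port
curl -I http://localhost:3000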

Why multi-stage builds matter

Multi-stage Docker build diagram

For Node.js projects, a naive single-stage image often contains:

  • full source code
  • build-time dependencies
  • development dependencies
  • package manager cache
  • temporary build files

That works, but it creates unnecessary weight in production.

With a multi-stage build, the heavy work happens in the builder stage, while the runtime stage receives only the required application output. In this case, only the built .output directory is needed at runtime. That means:

  • smaller images
  • faster push to registry
  • faster pull in Kubernetes
  • less space consumed in the container registry
  • smaller attack surface in production
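
The difference is easy to measure locally. A quick sketch, assuming the Dockerfile above; building only the build stage approximates the "everything included" image, while the default build produces the final runtime image (tag names are illustrative):

# Build only the build stage (sources, node_modules, npm cache included)
docker build --target build -t client-landing:build-stage .

# Build the full multi-stage image (runtime stage only)
docker build -t client-landing:runtime .

# Compare the resulting sizes side by side
docker images client-landing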

Step 2. Tightening the Dockerfile for CI/CD use

After the first working version, the Dockerfile was optimized for more predictable CI builds and better cache behavior.

Final optimized Dockerfile

FROM node:22.12.0-alpine AS build
WORKDIR /app

COPY package*.json ./
RUN npm ci

COPY . .
RUN npm run build

FROM node:22.12.0-alpine AS runtime
WORKDIR /app
ENV NODE_ENV=production

COPY --from=build /app/.output ./.output

EXPOSE 3000
CMD ["node", ".output/server/index.mjs"]

What was improved

  1. npm ci instead of npm install
    This makes CI builds more deterministic because dependencies are installed strictly from the lock file.

  2. Separated dependency layer before copying the whole source tree
    That improves Docker layer reuse. When developers change application code but not dependencies, the dependency install layer can stay cached.

  3. Runtime stage stays minimal
    The final container includes only the production runtime and built output, not the full source tree and not the build environment.

  4. Cleaner runtime command
    A small improvement, but helpful for readability and maintenance.
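
The caching effect from point 2 is easy to observe directly. A sketch, assuming BuildKit and a code-only change between two consecutive builds:

# First build: every layer executes
docker build -t client-landing:dev .

# Change application code only (not package.json or package-lock.json),
# then rebuild: the dependency layers are reused from cache
docker build -t client-landing:dev .
#  => CACHED [build 3/6] COPY package*.json ./
#  => CACHED [build 4/6] RUN npm ci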

Practical size impact

Before and after container optimization comparison

The main gain came not from a small syntax cleanup, but from using the multi-stage runtime image instead of a typical single-stage “everything included” image.

For a project like this, a realistic comparison is:

  • naive single-stage image: ~420 MB
  • final multi-stage runtime image: ~130 MB

That is a reduction of about 290 MB per image, or roughly 69% smaller.

Why this matters over a week

This team typically pushes new versions a few times per hour. Using a conservative example:

  • 3 pushes per hour
  • 8 working hours per day
  • 5 working days per week

That gives:

  • 120 image pushes per week

At 290 MB saved per image, the weekly difference is:

  • 120 × 290 MB = 34,800 MB
  • approximately 34.8 GB less image data

That is only the registry-side accumulation before cleanup. If the registry garbage collection runs weekly, that difference can remain stored for the whole period. In other words, a seemingly small per-build optimization quickly turns into tens of gigabytes saved per week.

The same reduction also helps deployment speed, because every push to the registry and every pull by Kubernetes nodes moves significantly less data. Even on moderate network throughput the difference is immediately noticeable during frequent releases: at roughly 100 Mbit/s (about 12.5 MB/s), the 290 MB saved per image is on the order of 23 seconds less transfer time on every push and every node pull.


Step 3. Preparing the GitLab pipeline

Once the application was containerized, the next step was to automate build validation and image publishing in GitLab CI/CD.

The requested stage layout was:

stages:
  - lint
  - test
  - build
  - deploy
  - rollback

workflow:
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
    - if: $CI_PIPELINE_SOURCE == "web"
    - if: $CI_MERGE_REQUEST_ID

At this stage of the project, the pipeline was implemented up to the image build and push. The deploy and rollback stages were intentionally left for the next phase, because deployment itself was handled through Kubernetes + Helmwave.

Sample .gitlab-ci.yml

stages:
  - lint
  - test
  - build
  - deploy
  - rollback

workflow:
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
    - if: '$CI_PIPELINE_SOURCE == "web"'
    - if: '$CI_MERGE_REQUEST_ID'

variables:
  IMAGE_TAG: $CI_COMMIT_SHORT_SHA
  IMAGE_NAME: $CI_REGISTRY_IMAGE:$IMAGE_TAG

default:
  image: node:22.12.0-alpine
  before_script:
    - npm ci

lint:
  stage: lint
  script:
    - npm run lint
  rules:
    - if: '$CI_COMMIT_BRANCH'
    - if: '$CI_MERGE_REQUEST_ID'

test:
  stage: test
  script:
    - npm run test --if-present
  rules:
    - if: '$CI_COMMIT_BRANCH'
    - if: '$CI_MERGE_REQUEST_ID'

build_image:
  stage: build
  image: docker:27
  services:
    - docker:27-dind
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: "/certs"
  before_script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
  script:
    - docker build -t "$IMAGE_NAME" .
    - docker push "$IMAGE_NAME"
    - docker tag "$IMAGE_NAME" "$CI_REGISTRY_IMAGE:latest"
    - docker push "$CI_REGISTRY_IMAGE:latest"
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
    - if: '$CI_PIPELINE_SOURCE == "web"'

deploy:
  stage: deploy
  script:
    - echo "Deployment is handled later via Helmwave into Kubernetes"
  when: manual
  allow_failure: true

rollback:
  stage: rollback
  script:
    - echo "Rollback stage will be implemented with Helmwave release rollback"
  when: manual
  allow_failure: true

What this pipeline does

  • runs lint checks
  • runs tests if present
  • builds the Docker image
  • tags it with the commit SHA
  • pushes it to the GitLab container registry
  • also updates the latest tag on the main delivery path

That gives a clean build artifact ready for Kubernetes deployment.
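
The published artifact can also be verified independently of the pipeline. A minimal check, assuming registry access and the registry hostname from the sample run below:

# Pull the image tagged with the commit SHA
docker login registry.example.com
docker pull registry.example.com/studio/client-landing:7f2c8a1d

# Run it locally exactly as a Kubernetes node would
docker run --rm -p 3000:3000 registry.example.com/studio/client-landing:7f2c8a1d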

GitLab pipeline flow with active and future stages

Step 4. Sample pipeline run output

Below is a representative example of the pipeline output for a successful run through the build-and-push phase:

Pipeline #2841 for commit 7f2c8a1d
Project: studio/client-landing
Branch: main

[lint] Running with node:22.12.0-alpine
$ npm ci
added 742 packages in 18s
$ npm run lint
✔ No lint errors found
Job succeeded

[test] Running with node:22.12.0-alpine
$ npm ci
added 742 packages in 17s
$ npm run test --if-present
Test Suites: 12 passed, 12 total
Tests:       84 passed, 84 total
Job succeeded

[build_image] Running with docker:27 + docker:27-dind
$ echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
Login Succeeded

$ docker build -t registry.example.com/studio/client-landing:7f2c8a1d .
[+] Building 54.2s (14/14) FINISHED
 => [build 1/6] FROM docker.io/library/node:22.12.0-alpine
 => [build 2/6] WORKDIR /app
 => [build 3/6] COPY package*.json ./
 => [build 4/6] RUN npm ci
 => [build 5/6] COPY . .
 => [build 6/6] RUN npm run build
 => [runtime 1/3] FROM docker.io/library/node:22.12.0-alpine
 => [runtime 2/3] WORKDIR /app
 => [runtime 3/3] COPY --from=build /app/.output ./.output
 => exporting to image
 => naming to registry.example.com/studio/client-landing:7f2c8a1d

$ docker push registry.example.com/studio/client-landing:7f2c8a1d
The push refers to repository [registry.example.com/studio/client-landing]
a1b2c3d4e5f6: Pushed
c3d4e5f6a1b2: Pushed
7f2c8a1d: digest: sha256:9ab3d0d... size: 1301

$ docker tag registry.example.com/studio/client-landing:7f2c8a1d registry.example.com/studio/client-landing:latest
$ docker push registry.example.com/studio/client-landing:latest
latest: digest: sha256:9ab3d0d... size: 1301

Job succeeded

Pipeline result: passed
Duration: 2m 11s
Artifacts delivered: container image pushed to registry

Step 5. Deploying into Kubernetes with a reusable Helm chart

With the image already published to the registry, deployment into Kubernetes was straightforward.

Instead of writing a dedicated chart only for this one project, the deployment used a universal Helm chart that can be reused across many web applications. This is important in real client work because it reduces routine effort and makes future projects faster to launch.

Why this approach works well

For a typical Node.js web application, only a few values usually need to be customized:

  • image repository
  • image tag
  • service port
  • ingress hostname
  • replica count
  • environment variables if required

Everything else is already standardized in the chart.

Example values file

application:
  image:
    repository: registry.example.com/studio/client-landing
    tag: "7f2c8a1d"
    pullPolicy: IfNotPresent

  containerPort: 3000

service:
  enabled: true
  port: 3000

ingress:
  enabled: true
  className: nginx
  hosts:
    - host: client-domain.example
      paths:
        - path: /
          pathType: Prefix
  tls:
    enabled: true
    secretName: client-domain-tls

certManager:
  enabled: true
  clusterIssuer: letsencrypt-prod

resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 512Mi
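
In the project the release was applied through Helmwave; for illustration, an equivalent plain Helm invocation would look like this (the chart path, release name, namespace, and values file name are assumptions):

# Install or upgrade the release from the universal chart with project values
helm upgrade --install client-landing ./universal-web-chart \
  --namespace client-landing --create-namespace \
  -f values-client-landing.yaml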

What Kubernetes handled automatically

Because the cluster already had the routine components in place, most of the heavy lifting was not manual:

  • Kubernetes pulled the image from the registry
  • Ingress exposed the application on the target hostname
  • CertManager requested and issued the SSL certificate
  • the service routed traffic to the application pod

An important operational detail: DNS for the domain had already been configured before deployment started. That meant there was no waiting on DNS propagation during the implementation window, so the live site came online within minutes after the Kubernetes release was applied.
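
Once the release is applied, the rollout can be confirmed end to end with a few standard checks. A sketch, assuming the namespace from the Helm example above and cert-manager CRDs installed in the cluster:

# Pod, service and ingress status
kubectl get pods,svc,ingress -n client-landing

# Certificate issued by cert-manager for the TLS secret
kubectl get certificate -n client-landing

# The live site should respond over HTTPS with a valid certificate
curl -I https://client-domain.example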

Kubernetes deployment architecture with registry, ingress and cert-manager

Result

The studio requested a simple outcome: take a GitLab repository with a Node.js application and make it production-ready with a live domain and CI/CD foundations.

That was delivered through a short and practical sequence:

  1. prepared a production-ready Dockerfile in the same repository
  2. optimized the image strategy with a multi-stage build
  3. set up a GitLab pipeline for lint, test, build, and registry push
  4. published the application image into the container registry
  5. deployed it into Kubernetes using a reusable Helm chart and Helmwave
  6. exposed it via Ingress with automatic SSL from CertManager

Final deliverables

  • live website on the target domain
  • containerized application build
  • automated image build and push on code changes
  • reusable Kubernetes deployment configuration
  • SSL enabled automatically
  • scalable base for future deploy/rollback automation

Delivery time

Total implementation time: about two hours.

That is the real value of a standardized delivery stack: once Docker, GitLab CI/CD, Kubernetes, Helm, Ingress, and CertManager are used in a repeatable way, even a fresh client project can go from repository to live domain very quickly.

Final delivery result with live website and automated infrastructure