Build Platforms Daily

Deploying Angular Apps with Score: Platform-Agnostic Workflows for Docker, Podman & Kubernetes

By Victor Ikeme on Aug 10, 2025

Score provides a developer-friendly way to define application workloads without worrying about how they are deployed. In this hands-on tutorial, we use Score to containerize and deploy an Angular application to both Docker (or Podman) and Kubernetes, without hand-writing a single Compose or Kubernetes manifest.

We use Score to define the application’s container, runtime arguments, ports, volumes, and DNS routing, all from a single score.yaml file. Then, we deploy it with score-compose for local testing and score-k8s for a full Kubernetes deployment. Along the way, we configure DNS and routing resources to expose the app publicly via dynamic hostnames.

This example belongs to the growing Awesome Score Spec Examples collection, a practical guide showcasing over 50 real-world Score-based workloads.

⭐️ Give the repo a star to follow along and support the platform engineering series.

What You Will Learn

By following this guide, we:

  • Containerize and run an Angular app locally using Docker or Podman.
  • Define a workload using score.yaml with containers, ports, and volumes.
  • Deploy the app using score-compose and score-k8s without writing Compose or Kubernetes manifests.
  • Configure DNS and route resources with Score to assign stable local or remote URLs.
  • Understand how Score separates concerns between developers and platform engineers using provisioners.


1. Prerequisites

Before starting, we ensure the environment has the tools and services needed to follow this tutorial from start to finish.

  • Install Docker or Podman.
  • Install Score Compose.
  • Install Score K8s.
  • Install kubectl.
  • Set up access to a Kubernetes cluster. To create a local cluster, we use the provided Kind script in the GitHub repo.
  • Optionally, launch this environment instantly using GitHub Codespaces or a DevContainer in VS Code.

To follow this tutorial:

$ git clone https://github.com/victor-ikeme/awesome-score-spec-examples

$ cd awesome-score-spec-examples/examples/angular

This places us in the angular project folder with all necessary files, including score.yaml, Dockerfile, and automation scripts.

2. Quick Start (Optional)

To skip the detailed steps and see Score deploy the app immediately, you can use the Makefile included in the repo to run Score and provision everything automatically.

To deploy the app using Docker:

$ make compose-test

This launches the app in a containerized environment. To stop and clean up:

$ make compose-down

To deploy the app to a local Kubernetes cluster, we initialize a Kind cluster and load the image:

$ make kind-create-cluster
$ make kind-load-images
$ make k8s-test

This completes the deployment to Kubernetes using score-k8s. The Angular application is now running and exposed with a stable route.
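
The Make targets above are thin wrappers around the score-compose and score-k8s commands covered in the rest of this guide. As a rough sketch of how such targets can be wired up (the actual Makefile in the repo may use different flags or helper scripts, and real Makefile recipes must be indented with tabs):

# Sketch only: illustrates how compose-test and compose-down can wrap
# the score-compose workflow shown later in this tutorial.
.PHONY: compose-test compose-down

compose-test:
    score-compose init --no-sample
    score-compose generate score.yaml \
        --build 'web={"context":"app","tags":["angular:local"]}' \
        --publish 4200:angular:4200
    docker compose up --build -d

compose-down:
    docker compose down -v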

3. Deploy Angular Locally Using Docker or Podman.

We begin by running the Angular app as a container. This allows us to confirm that the app works as expected before moving on to Score-based deployment.

First, we build the container image from the Dockerfile in the angular folder.

$ docker build -t angular .

This command outputs the Docker build steps, ending with a successfully tagged image named angular.
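
The repository ships its own Dockerfile, so there is nothing to write here. For orientation, a minimal development-mode Dockerfile for an Angular app can look like the sketch below (the Node version, the global Angular CLI install, and the /angular/project path are assumptions, not necessarily the repo's exact file):

FROM node:20-alpine

# Make the Angular CLI available for `ng serve`
RUN npm install -g @angular/cli

WORKDIR /angular/project

# Install dependencies first so Docker can cache this layer
COPY package*.json ./
RUN npm install

# Copy the rest of the project source
COPY . .

EXPOSE 4200

# Run the dev server, listening on all interfaces
CMD ["ng", "serve", "--host", "0.0.0.0", "--disable-host-check", "--port", "4200"]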

Next, we run the container and map port 4200 to the host. This ensures we can access the application through a browser.

$ docker run -it -p 4200:4200 angular

The terminal logs the container startup process. Once complete, we access the Angular app at http://localhost:4200 and verify that it loads correctly.
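
To check from the terminal as well, we can send a quick request with curl (assuming curl is installed locally):

$ curl -I http://localhost:4200

A successful response confirms that the dev server is answering on the mapped port.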

This confirms that the app runs in a containerized environment with no issues, setting the stage for platform-agnostic deployment using Score.

4. Define the Angular Workload with Score.

After verifying that the Angular application runs successfully in a container, we proceed to define the workload using Score. This allows us to abstract the deployment configuration from the underlying platform, whether it is Docker Compose, Podman, or Kubernetes.

To describe the app’s behavior, we create a score.yaml file in the root of the angular folder. This file defines the workload’s container, arguments, volumes, exposed ports, and any resource dependencies.

apiVersion: score.dev/v1b1
metadata:
  name: angular
containers:
  web:
    image: .
    args: ["ng", "serve", "--host", "0.0.0.0", "--disable-host-check", "--port", "4200"]
    volumes:
      - source: ${resources.project-source}
        target: /angular/project
      - source: ${resources.node-modules-cache}
        target: /angular/project/node_modules
service:
  ports:
    tcp:
      port: 4200
      targetPort: 4200
resources:
  project-source:
    type: volume
  node-modules-cache:
    type: volume

This configuration instructs Score to launch a container using the current context (denoted by .), override the default command with Angular’s development server options, and mount two volumes: one for the project source and one for the node_modules directory.

The service section exposes TCP port 4200 of the container on the same port of the host or cluster, making the app reachable in both local and remote environments. The resources section declares two volumes, which Score provisions automatically for the target platform.

This Score definition now becomes the single source of truth for how the Angular app is built and deployed. In the next section, we generate a Docker Compose configuration from it using score-compose.

5. Deploy the Angular App Using score-compose.

Once the score.yaml file is in place, we can deploy the Angular application locally using Docker or Podman with the help of score-compose. This step translates the Score specification into a Docker Compose configuration automatically.

To begin, we initialize the project with score-compose. This sets up the environment and prepares it for generating a Compose file.

$ score-compose init

After initialization completes, we generate the Docker Compose configuration from score.yaml. In this case, we build the image locally from the app context and assign it the tag angular:local. We also expose the app on port 4200.

$ score-compose generate score.yaml \
    --build 'web={"context":"app","tags":["angular:local"]}' \
    --publish 4200:angular:4200

This command produces a docker-compose.yaml file and logs a confirmation message. The file includes the service definition, volume mounts, ports, and build instructions derived directly from the Score spec.
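
For orientation, the generated service resembles the sketch below. This is illustrative only; the exact service, volume, and network names that score-compose emits depend on its version and the active provisioners.

# Illustrative sketch of docker-compose.yaml, not the literal
# score-compose output.
services:
  angular-web:
    build:
      context: app
      tags:
        - angular:local
    command: ["ng", "serve", "--host", "0.0.0.0", "--disable-host-check", "--port", "4200"]
    ports:
      - "4200:4200"
    volumes:
      - project-source:/angular/project
      - node-modules-cache:/angular/project/node_modules
volumes:
  project-source:
  node-modules-cache: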

Next, we launch the app using Docker Compose. This builds and starts the container in detached mode.

$ docker compose up --build -d

Docker logs the build and container startup process. Once complete, we verify the application is running by navigating to http://localhost:4200.

To confirm the container is active, we inspect the running Docker containers:

$ docker ps

This lists the container name, port mappings, and status. For example:

CONTAINER ID   IMAGE             PORTS                    STATUS
abc123def456   angular:local     0.0.0.0:4200->4200/tcp   Up 10 seconds

To observe runtime logs and confirm the app booted successfully, we tail the container output:

$ docker compose logs -f

This displays the logs from the Angular service in real time, including startup messages and runtime info.

This confirms that the app is now deployed through Score, using Docker Compose as the runtime target.

If you prefer using a prebuilt image instead of building locally, you can repeat the steps above using the --image flag instead of --build. First, we regenerate the Compose config using a hosted image.

$ score-compose generate score.yaml \
    --image ghcr.io/victor-ikeme/angular:latest \
    --publish 4200:angular:4200

Then, we start the container without rebuilding it locally.

$ docker compose up -d

Once the container is running, we open the browser to http://localhost:4200 and confirm the app is accessible.

This approach demonstrates how Score abstracts away Compose-specific configuration while supporting both local builds and prebuilt images.

6. Deploy the Angular App Using score-k8s.

After confirming that the app works locally with Docker, we now deploy it to a Kubernetes cluster with the same score.yaml file using score-k8s. This allows us to test the same Score-defined workload in a production-grade environment.

To begin, we initialize the project for Kubernetes deployment using score-k8s. This creates the necessary scaffold to generate manifests from the Score spec.

$ score-k8s init

Once initialization completes, we generate Kubernetes manifests using the same score.yaml file. This time, we reference a prebuilt image hosted on GitHub Container Registry.

$ score-k8s generate score.yaml \
    --image ghcr.io/victor-ikeme/angular:latest

This command outputs a manifests.yaml file containing a deployment, service, and other Kubernetes resources based on the workload definition.
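
For a sense of what lands in manifests.yaml, a trimmed Deployment and Service for this workload would look roughly like the excerpt below (labels and selectors are simplified assumptions; the real output carries more metadata and the generated resource definitions):

# Simplified excerpt; the real manifests.yaml contains additional
# metadata, annotations, and resource definitions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: angular
spec:
  replicas: 1
  selector:
    matchLabels:
      app: angular
  template:
    metadata:
      labels:
        app: angular
    spec:
      containers:
        - name: web
          image: ghcr.io/victor-ikeme/angular:latest
          args: ["ng", "serve", "--host", "0.0.0.0", "--disable-host-check", "--port", "4200"]
          ports:
            - containerPort: 4200
---
apiVersion: v1
kind: Service
metadata:
  name: angular
spec:
  selector:
    app: angular
  ports:
    - name: tcp
      port: 4200
      targetPort: 4200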

If a Kubernetes cluster is not available, we can quickly create a local one using Kind. The repository includes a setup script in the scripts folder that automates this step.

$ ./scripts/setup-kind-cluster.sh

The script provisions a Kind cluster and prepares it for local development. It outputs progress logs and confirms when the cluster is ready.
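
If you prefer not to use the script, a plain Kind cluster can also be created by hand (the cluster name below is just an example; the repo's script may additionally configure port mappings and other options):

$ kind create cluster --name score-demo
$ kubectl cluster-info --context kind-score-demo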

Next, we apply the generated Kubernetes manifests using kubectl. This deploys the Angular app into the cluster.

$ kubectl apply -f manifests.yaml

Kubernetes schedules the pod and creates a service to expose it. We verify that the pod is running and that the service is available by listing the resources.

$ kubectl get pods,svc

To access the app in the browser, we use kubectl port-forward to map port 4200 from the service to the host machine.

$ kubectl port-forward svc/angular 4200:4200

The terminal confirms the port-forwarding is active. Once established, we navigate to http://localhost:4200 and confirm that the Angular app is running inside the Kubernetes cluster.

This step proves that the same score.yaml spec can deploy consistently across both local containers and Kubernetes environments.

7. Expose the Angular App Using DNS and Route Resources.

After deploying the application, we now focus on exposing it via a user-friendly URL. Score supports dns and route resource types, which allow platform-agnostic configuration of public-facing access.

By default, we access the app using local addresses and ports like localhost:4200. Instead, we can request a dynamic hostname and URL, managed by Score provisioners tailored for Docker or Kubernetes. This removes the need to define service hosts manually.

To request DNS and route resources, we update the score.yaml file to include both:

resources:
  dns:
    type: dns

  route:
    type: route
    params:
      host: ${resources.dns.host}
      path: /
      port: 4200

This configuration tells Score to allocate a DNS host and bind a route to port 4200 at deployment time. Both values are resolved by platform-specific provisioners.

7.1. Expose the App with DNS in score-compose.

When exposing a workload publicly, Score encourages collaboration between application developers and platform engineers through a shared contract. This contract is expressed in score.yaml, allowing each persona to focus only on what they care about.

From a developer’s perspective, the only concern is declaring what the app needs. In this case, the app simply requires a stable URL for access in any environment — local or remote.

To declare that, we add two resources to score.yaml: one for DNS and one for the route that binds it to the app.

resources:
  dns:
    type: dns

  route:
    type: route
    params:
      host: ${resources.dns.host}
      path: /
      port: 4200

This definition avoids hardcoding hostnames like localhost or ports like 4200, delegating that responsibility to the runtime platform.

With these additions, the final Score specification file looks like this:

apiVersion: score.dev/v1b1
metadata:
  name: angular
containers:
  web:
    image: .
    args: ["ng", "serve", "--host", "0.0.0.0", "--disable-host-check", "--port", "4200"]
    volumes:
      - source: ${resources.project-source}
        target: /angular/project
      - source: ${resources.node-modules-cache}
        target: /angular/project/node_modules
service:
  ports:
    tcp:
      port: 4200
      targetPort: 4200
resources:
  project-source:
    type: volume
  node-modules-cache:
    type: volume
  dns:
    type: dns
  route:
    type: route
    params:
      host: ${resources.dns.host}
      path: /
      port: 4200

On the other side, platform engineers retain full control over how these requests get fulfilled. This includes choosing whether to expose the app via a local DNS dev router, an ingress controller, or any other implementation. That logic resides in Score Provisioners, which they author and maintain.

Score Compose supports DNS-aware provisioners out of the box. These provisioners dynamically allocate hostnames and routing logic at deployment time, depending on platform capabilities.

To activate DNS support, we begin by initializing the project with a DNS provisioner.

$ score-compose init \
  --no-sample \
  --provisioners https://raw.githubusercontent.com/score-spec/community-provisioners/refs/heads/main/dns/score-compose/10-dns-with-url.provisioners.yaml

This command registers the community provisioner that knows how to resolve DNS and route resources in Docker-based environments.
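
Internally, such a provisioner is a small template that tells score-compose how to produce the host and url outputs for any dns resource. The sketch below shows the general shape of a score-compose template provisioner; it is a simplified stand-in, not the actual community file.

# Simplified sketch of a score-compose template provisioner for dns
# resources; the real community provisioner is more involved.
- uri: template://sketch/dns-with-url
  type: dns
  init: |
    randomHostname: dns{{ randAlphaNum 6 | lower }}.localhost
  state: |
    hostname: {{ dig "hostname" .Init.randomHostname .State }}
  outputs: |
    host: {{ .State.hostname }}
    url: http://{{ .State.hostname }}:8080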

Next, we generate the Compose configuration. Since DNS is now handled by the provisioner, we no longer need to explicitly publish ports using --publish.

$ score-compose generate score.yaml \
  --image ghcr.io/victor-ikeme/angular:latest

After generation completes, we inspect the resolved DNS resource output. This reveals the unique hostname and complete public URL for the app.

$ score-compose resources list

output:


+-----------------------------+-----------+
|             UID             |  OUTPUTS  |
+-----------------------------+-----------+
| dns.default#angular.dns     | host, url |
+-----------------------------+-----------+
| route.default#angular.route |           |
+-----------------------------+-----------+

Thereafter, we retrieve the provisioned DNS name and URL:

$ score-compose resources get-outputs dns.default#angular.dns

output:

{
  "host": "dnsvtt3ix.localhost",
  "url": "http://dnsvtt3ix.localhost:8080/"
}

This confirms that the Score engine and provisioner successfully mapped our abstract DNS request to a real, routable URL in the local environment.

To complete the deployment, we launch the app using Docker Compose.

$ docker compose up -d

Once the containers are running, we check the status of the running services.

$ docker ps

The output lists the angular container, confirming it is active. We then open a browser and visit the DNS-assigned URL at http://dnsvtt3ix.localhost:8080. The Angular app loads successfully, now exposed through a fully abstracted and dynamically provisioned DNS route.


7.2. Expose the App with DNS in score-k8s.

The same Score abstraction used for Docker environments also works in Kubernetes. In this case, the platform engineer may choose to implement DNS and routing using Kubernetes-native primitives like HTTPRoute, backed by Gateway API.

The score.yaml file remains unchanged. Both the application developer and platform engineer continue to operate through the same interface. The difference lies in how those DNS and route requests are provisioned — now inside the cluster.

To start, we initialize the project with a score-k8s DNS provisioner.

$ score-k8s init \
  --provisioners https://raw.githubusercontent.com/score-spec/community-provisioners/refs/heads/main/dns/score-k8s/10-dns-with-url.provisioners.yaml

The CLI registers the provisioner and prepares the project for manifest generation.

Next, we generate the manifests based on the Score definition and the prebuilt Angular image.

$ score-k8s generate score.yaml \
  --image ghcr.io/victor-ikeme/angular:latest

This outputs a manifests.yaml file that includes deployments, services, and an HTTP route configured to accept traffic through a Gateway.
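
The routing piece is expressed as a Gateway API HTTPRoute. A trimmed illustration of what that part of manifests.yaml can look like is shown below; the generated route name carries a hash suffix and the referenced Gateway depends on how the cluster was set up, so treat the names here as placeholders.

# Illustrative HTTPRoute; the real manifest uses a generated name
# and whatever Gateway the cluster's routing setup targets.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: route-angular
spec:
  parentRefs:
    - name: default
  hostnames:
    - dnsvdodg4.localhost
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: angular
          port: 4200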

We then apply the manifests to the Kubernetes cluster.

$ kubectl apply -f manifests.yaml

Once applied, we confirm that the pods and routing resources have been created successfully.

$ kubectl get all,httproute
pod/angular-7cfcc444bf-xt5jv     1/1     Running
service/angular                  ClusterIP   10.96.113.66
httproute.gateway.networking.k8s.io/route-angular-43ebba9d   ["localhost"]

Here, the HTTP route connects the DNS host to the internal service on port 4200, using the platform’s routing layer.

Next, we inspect the DNS resource output to retrieve the assigned public URL.

$ score-k8s resources list

output:


+-----------------------------+-----------+
|             UID             |  OUTPUTS  |
+-----------------------------+-----------+
| dns.default#angular.dns     | host, url |
+-----------------------------+-----------+
| route.default#angular.route |           |
+-----------------------------+-----------+

Then, we retrieve the provisioned DNS name and URL:

$ score-k8s resources get-outputs dns.default#angular.dns

output:

{
  "host": "dnsvdodg4.localhost",
  "url": "http://dnsvdodg4.localhost:80"
}

This confirms that DNS and routing have been successfully abstracted and provisioned inside the Kubernetes cluster.

At this point, we open a browser and navigate to http://dnsvdodg4.localhost:80. The Angular application loads in the browser without any port-forwarding or service exposure steps.

By delegating infrastructure specifics to provisioners, Score enables developers to describe intent while platform engineers enforce implementation. This shared model promotes clean separation of concerns and reproducible environments across any runtime.

Summary

In this tutorial, we used Score to deploy an Angular application across both Docker/Podman and Kubernetes environments. We started by running the app locally in a container, then abstracted its configuration using a single score.yaml file.

We built the container image, generated a Compose configuration directly from the Score spec with score-compose, and ran it with Docker Compose. Afterward, we deployed the same app to a Kubernetes cluster using score-k8s, without writing a single Kubernetes manifest manually.

We also introduced DNS and route resources, showing how Score enables developers to declare what the application needs, while platform engineers define how those needs get fulfilled using provisioners. This model promotes environment-agnostic deployments and clean separation of responsibilities.

By relying on Score, we replaced repetitive infrastructure code with a lightweight, declarative contract. Whether deploying to local machines or full Kubernetes clusters, the workload stayed the same.

To explore more Score-based deployment scenarios, visit the full Awesome Score Spec Examples repository. Star the repo to support this work and follow along as we add new examples every week.

