

Display the Default Page with Nginx on Kubernetes

Updated: Sep 17, 2022

This article is for Kubernetes beginners. In this article, you will see how to prepare nginx with Kubernetes and display the welcome page.

What is Kubernetes?

Kubernetes is a portable, extensible, open source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available. Source: the official Kubernetes documentation ("What is Kubernetes?").

Note that Kubernetes is a container orchestration tool, so a basic knowledge of containers is assumed throughout this article.

  • Declarative Configuration Management and Automation

In Kubernetes, you declare the "desired state" in a file called a manifest file, and Kubernetes creates Pods (collections of containers and volumes) to match it.

For example, if you declare "four containers" and one of them is accidentally deleted, Kubernetes will automatically create a replacement.
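As a quick sketch of this self-healing behaviour (assuming a deployment like the nginx-deployment example later in this article is already running with two replicas; the Pod name below is illustrative):

# List the Pods managed by the deployment
kubectl get pods -l app=nginx

# Delete one Pod by name (substitute a real Pod name from the list above)
kubectl delete pod nginx-deployment-6595874d85-7mt97

# List again: Kubernetes has already started a replacement Pod
# to restore the declared count
kubectl get pods -l app=nginx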

The difference from Docker Compose is that Compose does nothing further once the containers are created, whereas Kubernetes continuously performs the management functions described above.

  • Manage Containerised Workloads and Services

Docker runs containers on a single machine. In contrast, Kubernetes manages containers across multiple machines.

In large-scale applications, multiple machines are linked to spread load and functionality. In that case, instead of running "docker run" on each machine one by one, you can prepare the aforementioned manifest file and Kubernetes will deploy to each machine as appropriate, which is a helpful feature when deploying large-scale services.
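To illustrate the contrast (the file name example.yml matches the manifest created later in this article):

# Without orchestration: run the container manually on every machine
docker run -d -p 8080:80 nginx:1.14.2

# With Kubernetes: declare the desired state once and apply it;
# the control plane schedules Pods across the worker nodes for you
kubectl apply -f example.yml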

  • Advantages

In addition to being the de facto standard for deploying large-scale containerized services, it has several advantages: like Docker, an application can be delivered together with its configuration files, and deployments are versioned via replica sets, so containers can be updated without having to stop them.

Moreover, its open source community is active on a global scale.

Glossary of Kubernetes

  • Master Nodes

This is where the user issues commands. The master node's role is to give instructions to the worker nodes, which are explained next.

  • Worker Nodes

These nodes operate under the direction of the master nodes and are where the Pods actually run.

  • Pods

A Pod is a collection of containers and volumes (storage areas). Even if you don't use volumes, Kubernetes manages each Pod as a single unit.

  • Service

A service is responsible for organizing Pods. One IP is allocated to each service, and when that IP is accessed, the service load-balances across the Pods under it.

Note that this load balancing is done on a per-worker-node basis; balancing traffic between multiple worker nodes themselves is outside the scope of the service.

  • Replica Set

Kubernetes needs to track the number of Pods so that it can automatically restore the desired count when a Pod stops or is deleted. The replica set is responsible for this role.

  • Deployment

The 'Pod', 'service' and 'replica set' mentioned above can each be described in a manifest file (YAML format), but 'Pod' and 'replica set' are usually included inside a 'deployment'.

In other words, when creating a Pod, a manifest for the 'deployment' and a manifest for the 'service' are all that Kubernetes needs to work.


Before taking a look at a manifest file that combines the terms explained above, you need a Kubernetes runtime environment to apply the manifest file to.

You might want to use the Kubernetes cluster that comes with "Docker Desktop", since the managed Kubernetes services provided by cloud providers are expensive for beginners to try out.

The setup procedure is simple. In Docker Desktop, go to 'settings' => 'Kubernetes' => and check 'Enable Kubernetes'.
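Once Kubernetes has started, a quick sanity check from the terminal confirms the cluster is up (the node name may differ in your environment):

# Confirm the control plane is reachable
kubectl cluster-info

# Docker Desktop provides a single node that acts as both
# master and worker
kubectl get nodes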

And now you are all set to start learning Kubernetes!

Manifest File Example

Now, let's look at a yaml file describing a deployment to create nginx from the official documentation as an example.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    nodePort: 30080
    protocol: TCP
  selector:
    app: nginx

The upper part separated by "---" is the description of the deployment and the lower part is the description of the service.

It is possible to write resources in separate manifest files and manage them separately, but if you want to write them in the same file, the resources can be separated by three hyphens (---) like this.

The description of each item above looks like this.

  • apiVersion

Kubernetes resources have an API version, so describe the appropriate one.

Usually, for deployments and services, the commands 'kubectl api-resources | grep Deployment' and 'kubectl api-resources | grep Service' will show 'apps/v1' and 'v1' in the APIVERSION column respectively.
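For example (output abridged; the exact columns vary with the kubectl version):

kubectl api-resources | grep Deployment
# deployments   deploy   apps/v1   true   Deployment

kubectl api-resources | grep Service
# services      svc      v1        true   Service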

  • Kind

Describes the type of resource. In this case, "Deployment" and "Service" are used to create a deployment and a service respectively.

  • metadata

As the name suggests, this is metadata. You can label resources. For now, let's just remember 'name' and 'labels'.

  • spec

Describes the content of the resource. The sub-items vary depending on the resource.

  • In a deployment section

- selector

Used by the deployment to identify which Pods it manages. In the example, Pods are matched by the label 'app: nginx'.

- replicas

Specifies the desired number of Pod replicas. In the example, two Pods are requested.

- template

The Pod template. Its 'spec' specifies the image and port to be used by the container.

  • In a service section

- type

This section allows you to specify how the service communicates with the outside by choosing from a number of different types.

To allow access from outside, 'LoadBalancer', which is reachable via a load balancer IP, is the usual choice. Since there is no public access this time, however, 'NodePort' is specified instead, which allows connecting directly to a worker node via its IP.

In addition, 'NodePort' is useful not only for testing but also when you want to perform some kind of operation on a per-node basis.

- ports

In addition to setting the protocol to TCP, three ports are defined here.

'port' is the port of the service, 'targetPort' is the port of the container, and "nodePort" is the port of the worker node.

As NodePort is specified in type this time, nginx is accessed via this 'nodePort'.

- selector

Specifies the label set in the Pod.

Creating a Resource

Let's actually create and run the manifest file and create the resource.

# Open the file using vi, paste the example manifest and save it, then use the kubectl command to create the resources
vi example.yml
kubectl apply -f example.yml

Checking the Pods.

$ kubectl get po -l app=nginx
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-6595874d85-7mt97   1/1     Running   0          52s
nginx-deployment-6595874d85-mds2z   1/1     Running   0          52s

You can see that two Pods have been created and are 'Running'.

For example, changing the number of 'replicas' in the yml file will increase or decrease the number of Pods.
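The same change can also be sketched from the command line with the standard 'kubectl scale' command (editing the manifest and re-applying it keeps the file as the single source of truth, so this is mainly useful for experimenting):

# Scale the deployment from 2 to 3 Pods
kubectl scale deployment nginx-deployment --replicas=3

# Verify that a third Pod is starting
kubectl get pods -l app=nginx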

Next, the service can be checked with the following command.

$ kubectl get services
NAME            TYPE        PORT(S)          AGE
kubernetes      ClusterIP   443/TCP          26h
nginx-service   NodePort    8080:30080/TCP   21m

Since the service port and nodePort are linked correctly, try accessing the Pod by entering "http://localhost:30080/" in your browser.

You will see the nginx welcome page, as expected!
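The same check can be done from the command line; the default page served by this nginx image contains the text "Welcome to nginx!":

# Fetch the default page and confirm it is the nginx welcome page
curl -s http://localhost:30080/ | grep "Welcome to nginx"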

This blog post is translated from a blog post written on our Japanese website.

