Hello World! 🌏

· 2 min read
Sang, Nguyen Nhat
Infrastructure Engineer at VNG

👋 Hello there, and welcome to my corner of the digital universe! I'm Sang, a System Engineer at VNG, one of the leading cloud companies in Vietnam. ☁️ My journey in the world of cloud computing began with a fascination for building scalable and robust infrastructure. As an OpenStack Cloud Infrastructure Engineer, I have the opportunity to work with cutting-edge technology to architect, deploy, and manage cloud environments that power businesses and drive innovation.

🎓 Certifications I hold:

  • VMware Certified Technical Associate - Data Center Virtualization: Received in 2022.
  • Google Professional Cloud Architect: Obtained in 2022.
  • Certified Kubernetes Administrator (CKA): Achieved in 2021.

📚 Lifelong learning is at the core of who I am. I'm constantly seeking out new knowledge, whether it's through books, courses, or meaningful conversations. After all, growth doesn't happen in stagnation but in the pursuit of understanding and enlightenment.

🌟 Beyond my professional endeavors, I am deeply invested in fostering collaboration and knowledge-sharing within the tech community, whether through mentoring colleagues, contributing to open-source initiatives, or actively engaging in industry events.

When you get as old as I am, you start to realize that you've told most of the good stuff you know to other people anyway.

- Richard Feynman

💬 But enough about me — I want to hear about you! Feel free to reach out:

  • Zalo: +84812525615
  • WhatsApp: +84812525615
  • Telegram: @sangnn
  • Skype: live:.cid.95d091cfcead997d

so we can embark on this journey together.

Thanks for stopping by, and I can't wait to connect with you!

Sang

Introduction to Service Pipeline - Tekton basic

· 8 min read
Sang, Nguyen Nhat
Infrastructure Engineer at VNG

As modern software development shifts towards cloud-native architectures, service pipelines have become key to deploying, managing, and scaling microservices independently. In this post, we’ll explore the fundamentals of Tekton, a Kubernetes-native framework built to automate CI/CD processes for services and to compose complex workflows.

We'll also discuss why you might want to use Tekton over traditional CI/CD tools like GitLab CI or GitHub Actions.

What is Tekton?

Tekton is a Kubernetes-based framework for defining, running, and managing CI/CD pipelines. Unlike traditional CI/CD systems that typically handle the deployment of entire applications, Tekton excels at managing service pipelines, which allow individual microservices to be built, tested, and deployed independently. This gives you the ability to release services faster and with more flexibility.

Why Use Tekton for Service Pipelines?

1. Kubernetes-Native and Cloud-Native Integration

Tekton is designed specifically for Kubernetes environments, making it a perfect fit for organizations running microservices or cloud-native applications. Unlike GitLab CI or GitHub Actions, which require some integration overhead, Tekton runs directly within Kubernetes clusters, taking full advantage of Kubernetes’ features.

2. Fine-Grained Control and Flexibility

Tekton provides more control over task execution, pipeline structure, and parameterization compared to traditional CI/CD tools. This flexibility allows you to build complex, customized workflows tailored to each microservice in your codebase.

3. Vendor-Neutral, Open-Source Solution

Tekton is completely open-source and vendor-neutral, meaning you’re not locked into a specific CI/CD ecosystem like GitLab or GitHub. This gives you the freedom to use Tekton in any Kubernetes environment.

Key Concepts

  • Tasks: Individual units of work, such as running a build or test.
  • Pipelines: Collections of tasks that run in sequence or parallel.
  • TaskRuns and PipelineRuns: Runtime instances of tasks and pipelines, allowing you to pass parameters dynamically.
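
Before trying the examples below, you need Tekton Pipelines installed in your cluster. A quick way to install it and confirm that the objects above are available as CRDs looks like this (double-check the release URL against the official docs, since it may change):

# install Tekton Pipelines (verify the manifest URL/version against the official docs)
kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml

# the controller and webhook pods should come up in the tekton-pipelines namespace
kubectl get pods -n tekton-pipelines

# the CRDs behind Tasks, TaskRuns, Pipelines, and PipelineRuns should now exist
kubectl get crd | grep tekton.dev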

Tekton Service Pipeline Example

Let’s walk through building a service pipeline that retrieves content from a URL, displays it, and then uses that content to generate a personalized message. I’ll add inline comments for clarification.

Tekton Task

A Task is the basic building block of a Tekton service pipeline. Below is a task that greets a user based on an input parameter.

cat <<EOF | kubectl apply --validate=false -f -
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: greet-task
spec:
  # params used to define input of the task
  params:
    - name: person
      type: string
      description: Who should we greet?
      default: world
  # we will define "what to do" in the steps [] here
  steps:
    - name: greet
      image: alpine # using whatever build image
      command:
        - echo
      args:
        - "Hello \$(params.person)!"
EOF

This task uses the Alpine image to echo a personalized greeting, which can be customized by passing a value for the person parameter.

tip

A Task is like a Docker image, while a TaskRun is like a running container.

TaskRun Definition

The TaskRun below executes the task, allowing us to pass a specific value to the person parameter.

cat <<EOF | kubectl apply --validate=false -f -
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: greet-task-run-1
spec:
  # this TaskRun references the greet-task defined above
  taskRef:
    name: greet-task
  # we can pass argument(s) here
  params:
    - name: person
      value: "future engineers"
EOF

After applying this TaskRun, we can see the output in the pod log:

$ kubectl get pod | grep greet-task-run-1
greet-task-run-1-pod 0/1 Completed 0 34s

$ kubectl logs greet-task-run-1-pod
Defaulted container "step-greet" out of: step-greet, prepare (init)
Hello future engineers!
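
If you have the optional Tekton CLI (tkn) installed, you can also follow the same logs without looking up the pod name:

tkn taskrun logs greet-task-run-1 -f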

info

There is a Tekton Hub that contains Tasks written by the community.

We will examine the community wget Task in the next section.

Tekton Pipeline and PipelineRun

Now, let’s define a service pipeline that uses multiple tasks. We’ll use the wget task to fetch content from the internet and then pass that content into our greeting task.

Install the Wget Task

First, install the wget community task from Tekton Hub:

kubectl apply -f https://api.hub.tekton.dev/v1/resource/tekton/task/wget/0.1/raw

Verify with kubectl get task; you should see the new wget Task.
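
Optionally, you can also inspect the Task to see which params and workspace names it expects (this is where the wget-workspace name used below comes from):

kubectl get task wget -o yaml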

Pipeline Definition

The following pipeline defines a sequence of tasks: fetching content from a URL (wget), displaying that content (cat), and greeting the user with the content as the message (greet-task). The flow is fetch-content → display-content → greet-with-content.

In this pipeline:

  • fetch-content: Uses wget to download content from a URL and store it in a workspace.
  • display-content: Reads the content and outputs it as a pipeline result.
  • greet-with-content: Passes the fetched content as the greeting message in the final task.
cat <<EOF | kubectl apply --validate=false -f -
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: greet-service-pipeline
  annotations:
    description: |
      Fetch content using wget, display it, and greet the user.
spec:
  # this Pipeline requires a single URL param
  params:
    - name: sourceUrl
      description: The URL to fetch the content from
      type: string
      default: ""
  # a workspace is used to share data between tasks
  # think of it like a persistent volume
  workspaces:
    - name: storage
  tasks:
    # start of the "wget" task
    - name: fetch-content
      taskRef:
        name: wget
      params:
        - name: url
          value: "\$(params.sourceUrl)"
        - name: diroptions
          value:
            - "-P"
      workspaces:
        # you may wonder about the name `wget-workspace`
        # please check https://hub.tekton.dev/tekton/task/wget
        - name: wget-workspace
          workspace: storage
    # start of the "cat" task
    - name: display-content
      runAfter: [fetch-content]
      workspaces:
        - name: output
          workspace: storage
      # we can define the Task directly inside the Pipeline
      taskSpec:
        workspaces:
          - name: output
        steps:
          - name: cat-content
            image: busybox
            script: |
              #!/bin/sh
              cat \$(workspaces.output.path)/name | tee /tekton/results/fetchedContent
        # the result stores the content so later tasks can use it without the workspace
        results:
          - name: fetchedContent
            description: The content fetched from the URL
    # start of the "greet-task" task
    - name: greet-with-content
      runAfter: [display-content]
      taskRef:
        name: greet-task
      params:
        - name: person
          value: "\$(tasks.display-content.results.fetchedContent)" # take the value from another task
EOF

PipelineRun Definition

Here’s how you run the pipeline using a PipelineRun, specifying a URL to fetch content from:

cat <<EOF | kubectl apply --validate=false -f -
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: greet-service-pipeline-run-1
spec:
  workspaces:
    - name: storage
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 2Mi
  params:
    - name: sourceUrl
      value: "https://raw.githubusercontent.com/nhatsangvn/nhatsangvn/refs/heads/main/name"
  pipelineRef:
    name: greet-service-pipeline
EOF

Applying this PipelineRun fetches the content from the specified URL, displays it, and greets using the content of that file.
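
While it runs, you can check status at the PipelineRun level, or stream all task logs if you have the optional tkn CLI:

kubectl get pipelinerun greet-service-pipeline-run-1
tkn pipelinerun logs greet-service-pipeline-run-1 -f

Each task in the pipeline also runs as its own pod: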

$ kubectl get pod | grep greet-service-pipeline
greet-service-pipeline-run-1-display-content-pod 0/1 Completed 0 74s
greet-service-pipeline-run-1-fetch-content-pod 0/1 Completed 0 95s
greet-service-pipeline-run-1-greet-with-content-pod 0/1 Completed 0 67s

$ kubectl logs greet-service-pipeline-run-1-greet-with-content-pod
Defaulted container "step-greet" out of: step-greet, prepare (init)
Hello SangNN
!

Here we go: "Hello SangNN"
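
When you are done experimenting, you can clean up everything created in this post (adjust the names if you changed them):

kubectl delete pipelinerun greet-service-pipeline-run-1
kubectl delete pipeline greet-service-pipeline
kubectl delete taskrun greet-task-run-1
kubectl delete task greet-task wget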

Conclusion

In this blog post, we’ve covered the basics of Tekton and how it can be used to build service pipelines tailored for microservices and cloud-native architectures. By leveraging Tekton, you gain the flexibility, control, and scalability needed to manage independent service deployments, allowing for faster and more frequent updates. Whether you're building for Kubernetes, looking to decouple CI/CD processes, or need a vendor-agnostic solution, Tekton's service pipeline approach is a strong candidate for modern DevOps workflows.

Stay tuned for more advanced topics on Tekton in future posts!

Introduction to OpenTelemetry Collector

· 7 min read
Sang, Nguyen Nhat
Infrastructure Engineer at VNG

In recent years, the term OpenTelemetry (OTel) has been hotter than ever. One of the most popular components in that ecosystem is the OpenTelemetry Collector (otel-collector), designed to facilitate the collection, processing, and exporting of telemetry data such as metrics, traces, and logs.

In this article, we focus mainly on logging 🧾.
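
To give a first taste before we dig in, here is a minimal sketch of a Collector configuration with a logs pipeline. It assumes the otelcol-contrib distribution (which bundles the filelog receiver), and the file path is purely illustrative:

receivers:
  filelog:
    include: [ /var/log/myapp/*.log ] # illustrative path: tail application log files

processors:
  batch: {} # batch telemetry before exporting

exporters:
  debug:
    verbosity: detailed # print received log records to stdout

service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [batch]
      exporters: [debug]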