Hello World! 🌏

· 3 min read
Sang, Nguyen Nhat
Infrastructure Engineer at VNG

👋 Hello everyone, welcome to my little corner. I'm Sang, currently an OpenStack Systems Engineer at VNG, one of the strongest cloud companies in Vietnam.

☁️ My journey into the world of cloud computing started with a passion for building robust infrastructure. As an OpenStack Engineer, I have the opportunity to work with cutting-edge technology, from architecting and deploying to managing these systems to support businesses and drive innovation.

🎓 I hold the following certifications:

  • VMware Certified Technical Associate - Data Center Virtualization: earned in 2022
  • Google Professional Cloud Architect: earned in 2022
  • Certified Kubernetes Administrator (CKA): earned in 2021

📚 Never-ending learning is one of my core values. I always try to keep up with new knowledge, whether through books, courses, or meaningful conversations. After all, growth doesn't happen in stagnation; it comes from pursuing understanding and staying curious.

🌟 Beyond my professional work, I always try to share knowledge with the tech community, whether by mentoring colleagues, contributing to open-source projects, or actively taking part in industry events.

When you get as old as I am, you start to realize that you've told most of the good stuff you know to other people anyway.

- Richard Feynman (I'm keeping the author's quote in its original wording)

💬 And that's a little about me. If you'd like to connect, you can reach me through the following channels:

  • Zalo: +8481 2525 615
  • Telegram: @sangnn
  • Skype: live:.cid.95d091cfcead997d

Thanks for spending a few minutes reading this; I hope to see you all soon.

Sang

Introduction to Service Pipeline - Tekton basic

· 8 min read
Sang, Nguyen Nhat
Infrastructure Engineer at VNG

As modern software development shifts towards cloud-native architectures, service pipelines have become key to deploying, managing, and scaling microservices independently. In this post, we'll explore the fundamentals of Tekton, a Kubernetes-native framework built to automate CI/CD for services, and the power it gives you to build complex workflows.

We'll also discuss why you might want to use Tekton over traditional CI/CD tools like GitLab CI or GitHub Actions.

What is Tekton?

Tekton is a Kubernetes-based framework for defining, running, and managing CI/CD pipelines. Unlike traditional CI/CD systems that typically handle the deployment of entire applications, Tekton excels at managing service pipelines, which allow individual microservices to be built, tested, and deployed independently. This gives you the ability to release services faster and with more flexibility.

Why Use Tekton for Service Pipelines?

1. Kubernetes-Native and Cloud-Native Integration

Tekton is designed specifically for Kubernetes environments, making it a perfect fit for organizations running microservices or cloud-native applications. Unlike GitLab CI or GitHub Actions, which require some integration overhead, Tekton runs directly within Kubernetes clusters, taking full advantage of Kubernetes’ features.
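For reference, getting Tekton into a cluster is itself just a kubectl apply of the official release manifest. A minimal sketch, assuming cluster-admin access and that the documented latest-release URL is unchanged:

# Install the Tekton Pipelines CRDs and controllers
kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml

# Wait for the controller and webhook pods to become Ready
kubectl get pods -n tekton-pipelines -w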

2. Fine-Grained Control and Flexibility

Tekton provides more control over task execution, pipeline structure, and parameterization than traditional CI/CD tools. This flexibility lets you build complex, customized workflows for each of your microservices.
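As a small illustration of that control (a sketch, not part of the example we build later; the deploy-task name and environment param are hypothetical), Tekton lets you gate an individual task inside a Pipeline with a when expression:

tasks:
- name: deploy-service
  taskRef:
    name: deploy-task              # hypothetical Task, for illustration only
  when:
  - input: "$(params.environment)" # hypothetical Pipeline param
    operator: in
    values: ["prod"]               # run this task only for prod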

3. Vendor-Neutral, Open-Source Solution

Tekton is completely open-source and vendor-neutral, meaning you’re not locked into a specific CI/CD ecosystem like GitLab or GitHub. This gives you the freedom to use Tekton in any Kubernetes environment.

Key Concepts

  • Tasks: Individual units of work, such as running a build or test.
  • Pipelines: Collections of tasks that run in sequence or parallel.
  • TaskRuns and PipelineRuns: Runtime instances of tasks and pipelines, allowing you to pass parameters dynamically.
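If you install the optional tkn CLI (Tekton's command-line client), you can list and inspect these objects directly. A quick sketch, assuming tkn is on your PATH:

tkn task list                    # Tasks in the current namespace
tkn taskrun list                 # runtime instances of Tasks
tkn pipeline list                # Pipelines
tkn pipelinerun logs <name> -f   # follow the logs of a PipelineRun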

Tekton Service Pipeline Example

Let's walk through building a service pipeline that retrieves content from a URL, displays it, and then uses that content to generate a personalized message. I'll add inline comments for clarification.

Tekton Task

A Task is the basic building block of a Tekton service pipeline. Below is a task that greets a user based on an input parameter.

cat <<EOF | kubectl apply --validate=false -f -
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: greet-task
spec:
  # params used to define input of the task
  params:
  - name: person
    type: string
    description: Who should we greet?
    default: world
  # we will define "what to do" in the steps [] here
  steps:
  - name: greet
    image: alpine # using whatever build image
    command:
    - echo
    args:
    - "Hello \$(params.person)!"
EOF

This task uses the Alpine image to echo a personalized greeting, which can be customized by passing a value for the person parameter.

tip

A Task is like a Docker image, while a TaskRun is like a container.
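Following that analogy, if you use the tkn CLI you rarely write small TaskRuns by hand: tkn task start creates the TaskRun (the "container") for you. A sketch, assuming tkn is installed:

# Creates a TaskRun for greet-task and streams its log
tkn task start greet-task --param person="future engineers" --showlog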

TaskRun Definition

The TaskRun below executes the task, allowing us to pass a specific value to the person parameter.

cat <<EOF | kubectl apply --validate=false -f -
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: greet-task-run-1
spec:
  # this TaskRun references the greet-task defined above
  taskRef:
    name: greet-task
  # we can pass argument(s) here
  params:
  - name: person
    value: "future engineers"
EOF

Applying this TaskRun, we can see the output in the pod logs:

$ kubectl get pod | grep greet-task-run-1
greet-task-run-1-pod 0/1 Completed 0 34s

$ kubectl logs greet-task-run-1-pod
Defaulted container "step-greet" out of: step-greet, prepare (init)
Hello future engineers!
info

There is a Tekton Hub that contains Tasks written by the community.

We will examine the community wget Task in the next section.

Tekton Pipeline and PipelineRun

Now, let’s define a service pipeline that uses multiple tasks. We’ll use the wget task to fetch content from the internet and then pass that content into our greeting task.

Install the Wget Task

First, install the wget community task from Tekton Hub:

kubectl apply -f https://api.hub.tekton.dev/v1/resource/tekton/task/wget/0.1/raw

Verify with kubectl get task; you should see the new wget Task.
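It also helps to check which parameters and workspaces the community task expects before wiring it into a pipeline; per its Tekton Hub page it uses a workspace named wget-workspace, which we'll reference below. One way to inspect it:

# Inspect the task's params (url, diroptions, ...) and its wget-workspace workspace
kubectl get task wget -o yaml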

Pipeline Definition

The following pipeline defines a sequence of tasks: fetching content from a URL (wget), displaying that content (cat), and greeting the user with that content as the message (greet-task). The flow looks like this: fetch-content → display-content → greet-with-content.

In this pipeline:

  • fetch-content: Uses wget to download content from a URL and store it in a workspace.
  • display-content: Reads the content and outputs it as a pipeline result.
  • greet-with-content: Passes the fetched content as the greeting message in the final task.
cat <<EOF | kubectl apply --validate=false -f -
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: greet-service-pipeline
  annotations:
    description: |
      Fetch content using wget, display it, and greet the user.
spec:
  # this Pipeline requires a single URL param
  params:
  - name: sourceUrl
    description: The URL to fetch the content from
    type: string
    default: ""
  # the workspace is used to share data between tasks
  # think of it like a persistent volume
  workspaces:
  - name: storage
  tasks:
  # start of "wget" task
  - name: fetch-content
    taskRef:
      name: wget
    params:
    - name: url
      value: "\$(params.sourceUrl)"
    - name: diroptions
      value:
      - "-P"
    workspaces:
    # you may wonder about the name `wget-workspace`
    # please check https://hub.tekton.dev/tekton/task/wget
    - name: wget-workspace
      workspace: storage
  # start of "cat" task
  - name: display-content
    runAfter: [fetch-content]
    workspaces:
    - name: output
      workspace: storage
    # we can define the Task directly inside the Pipeline
    taskSpec:
      workspaces:
      - name: output
      steps:
      - name: cat-content
        image: busybox
        script: |
          #!/bin/sh
          cat \$(workspaces.output.path)/name | tee /tekton/results/fetchedContent
      # the content is stored as a result and can be used in later tasks without a workspace
      results:
      - name: fetchedContent
        description: The content fetched from the URL
  # start of "greet-task"
  - name: greet-with-content
    runAfter: [display-content]
    taskRef:
      name: greet-task
    params:
    - name: person
      value: "\$(tasks.display-content.results.fetchedContent)" # get the value from another task
EOF

PipelineRun Definition

Here’s how you run the pipeline using a PipelineRun, specifying a URL to fetch content from:

cat <<EOF | kubectl apply --validate=false -f -
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: greet-service-pipeline-run-1
spec:
  workspaces:
  - name: storage
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 2Mi
  params:
  - name: sourceUrl
    value: "https://raw.githubusercontent.com/nhatsangvn/nhatsangvn/refs/heads/main/name"
  pipelineRef:
    name: greet-service-pipeline
EOF

Applying this PipelineRun fetches the content from the specified URL, displays it, and greets with the information from the file:

$ kubectl get pod | grep greet-service-pipeline
greet-service-pipeline-run-1-display-content-pod 0/1 Completed 0 74s
greet-service-pipeline-run-1-fetch-content-pod 0/1 Completed 0 95s
greet-service-pipeline-run-1-greet-with-content-pod 0/1 Completed 0 67s

$ kubectl logs greet-service-pipeline-run-1-greet-with-content-pod
Defaulted container "step-greet" out of: step-greet, prepare (init)
Hello SangNN
!

Here we go: "Hello SangNN"
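When you're done experimenting, you can clean up everything created in this post (a sketch; the resource names are the ones used above):

kubectl delete pipelinerun greet-service-pipeline-run-1
kubectl delete taskrun greet-task-run-1
kubectl delete pipeline greet-service-pipeline
kubectl delete task greet-task wget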

Conclusion

In this blog post, we’ve covered the basics of Tekton and how it can be used to build service pipelines tailored for microservices and cloud-native architectures. By leveraging Tekton, you gain the flexibility, control, and scalability needed to manage independent service deployments, allowing for faster and more frequent updates. Whether you're building for Kubernetes, looking to decouple CI/CD processes, or need a vendor-agnostic solution, Tekton's service pipeline approach is a strong candidate for modern DevOps workflows.

Stay tuned for more advanced topics on Tekton in future posts!

Introduction to OpenTelemetry Collector

· 7 min read
Sang, Nguyen Nhat
Infrastructure Engineer at VNG

In recent years, the term OpenTelemetry (OTel) has been hotter than ever. One of the most popular components in that ecosystem is the OpenTelemetry Collector (otel-collector), designed to facilitate the collection, processing, and exporting of telemetry data such as metrics, traces, and logs.

In this article, we focus mainly on logging 🧾.
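To make those moving parts concrete before we dive deeper, here is a minimal, hedged sketch of a collector configuration for logs (assuming the contrib distribution, which ships the filelog receiver; the log path is illustrative):

receivers:
  filelog:
    include: [ /var/log/myapp/*.log ]   # illustrative path, adjust to your app
processors:
  batch: {}                             # batch telemetry before exporting
exporters:
  debug:                                # print received logs to the collector's own stdout
    verbosity: detailed
service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [batch]
      exporters: [debug]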