An Example of Creating a Project with kubebuilder
Original link: https://blog.csdn.net/u012986012/article/details/119710511
kubebuilder is an official toolkit for building Operators quickly. It generates the Kubernetes CRD, Controller, and Webhook scaffolding, so users only need to implement the business logic.
A similar tool is operator-sdk, which is currently being merged with kubebuilder.
kubebuilder wraps controller-runtime and controller-tools, and generates code via controller-gen, which greatly simplifies the steps needed to create an Operator.
The typical workflow for creating an Operator is: initialize the project, define the API (CRD), implement the Controller logic, optionally add Webhooks, and finally deploy and verify.
Example
We will build a 2048 game that can be exposed as a service and scaled up and down easily.
Environment Setup
First you need working Kubernetes, Docker, and Golang environments.
Install kubebuilder on Linux.
```shell
Create Resource [y/n]
y  # generate the CR
Create Controller [y/n]
y  # generate the Controller
```
The directory layout is as follows:

```
├── api
│   └── v1                  # CRD definitions
├── bin
│   └── controller-gen
├── config
│   ├── crd                 # CRD configuration
│   ├── default
│   ├── manager             # operator deployment manifests
│   ├── prometheus
│   ├── rbac
│   └── samples             # CR samples
├── controllers
│   ├── game_controller.go  # controller logic
│   └── suite_test.go
├── Dockerfile
├── go.mod
├── go.sum
├── hack
│   └── boilerplate.go.txt  # license header template
├── main.go                 # project entry point
├── Makefile
└── PROJECT                 # project metadata
```

Writing the API
Define the fields we need in mygame/api/v1/game_types.go.
The Spec is defined as follows:
```go
type GameSpec struct {
	// Number of desired pods. This is a pointer to distinguish between explicit
	// zero and not specified. Defaults to 1.
	// +optional
	//+kubebuilder:default:=1
	//+kubebuilder:validation:Minimum:=1
	Replicas *int32 `json:"replicas,omitempty" protobuf:"varint,1,opt,name=replicas"`

	// Docker image name
	// +optional
	Image string `json:"image,omitempty"`

	// Ingress host name
	Host string `json:"host,omitempty"`
}
```

The kubebuilder:default marker sets a default value, and kubebuilder:validation:Minimum enforces a lower bound.
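Because Replicas is a pointer, code that consumes the spec must distinguish nil (field omitted) from an explicit value. A minimal, self-contained sketch of that pattern, using a stand-in struct rather than the generated types:

```go
package main

import "fmt"

// GameSpec stand-in: Replicas is a pointer so that an unset field
// can be told apart from an explicit zero.
type GameSpec struct {
	Replicas *int32
}

// effectiveReplicas applies the same default that the
// +kubebuilder:default:=1 marker would apply on admission.
func effectiveReplicas(s GameSpec) int32 {
	if s.Replicas == nil {
		return 1 // default when the field is omitted
	}
	return *s.Replicas
}

func main() {
	two := int32(2)
	fmt.Println(effectiveReplicas(GameSpec{}))               // unset -> 1
	fmt.Println(effectiveReplicas(GameSpec{Replicas: &two})) // explicit -> 2
}
```

In a real cluster the API server applies the default at admission time, so the controller rarely sees nil, but defensive handling keeps the code safe in unit tests and against older stored objects.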
The Status is defined as follows:

```go
const (
	Running  = "Running"
	Pending  = "Pending"
	NotReady = "NotReady"
	Failed   = "Failed"
)

type GameStatus struct {
	// Phase is the phase of the game
	Phase string `json:"phase,omitempty"`

	// Replicas is the number of Pods created by the controller.
	Replicas int32 `json:"replicas"`

	// ReadyReplicas is the number of Pods created by the controller that have a Ready condition.
	ReadyReplicas int32 `json:"readyReplicas"`

	// LabelSelector is the label selector for pods, used by the HPA to match the replica count.
	LabelSelector string `json:"labelSelector,omitempty"`
}
```

In addition, we need to add the scale subresource:
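The phase constants above are used later in syncGame to derive the status phase from replica counts. That logic can be exercised in isolation; this is a sketch, not the generated code:

```go
package main

import "fmt"

const (
	Running  = "Running"
	NotReady = "NotReady"
)

// phaseFor mirrors the status computation in syncGame: the Game is
// Running only when every desired replica is ready.
func phaseFor(desired, ready int32) string {
	if desired == ready {
		return Running
	}
	return NotReady
}

func main() {
	fmt.Println(phaseFor(2, 2)) // all replicas ready -> Running
	fmt.Println(phaseFor(2, 1)) // still rolling out -> NotReady
}
```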
```go
//+kubebuilder:subresource:scale:specpath=.spec.replicas,statuspath=.status.replicas,selectorpath=.status.labelSelector
```

Writing the Controller Logic
The core logic of the Controller lives in Reconcile; we only need to fill in our business logic.
```go
func (r *GameReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	logger := log.FromContext(ctx)
	logger.Info("receive reconcile event", "name", req.String())

	// Fetch the Game object
	game := &myappv1.Game{}
	if err := r.Get(ctx, req.NamespacedName, game); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Skip objects that are being deleted
	if game.DeletionTimestamp != nil {
		logger.Info("game in deleting", "name", req.String())
		return ctrl.Result{}, nil
	}

	// Sync resources: create the deployment, ingress, and service if they do
	// not exist, and update the status. Return the error so the request is
	// requeued and retried.
	if err := r.syncGame(ctx, game); err != nil {
		logger.Error(err, "failed to sync game", "name", req.String())
		return ctrl.Result{}, err
	}
	return ctrl.Result{}, nil
}
```

Add the RBAC configuration:
```go
//+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=apps,resources=deployments/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=core,resources=services,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=networking.k8s.io,resources=ingresses,verbs=get;list;watch;create;update;patch;delete
```

The syncGame logic is as follows:
```go
func (r *GameReconciler) syncGame(ctx context.Context, obj *myappv1.Game) error {
	logger := log.FromContext(ctx)
	game := obj.DeepCopy()
	name := types.NamespacedName{
		Namespace: game.Namespace,
		Name:      game.Name,
	}

	// Build the owner reference
	owner := []metav1.OwnerReference{
		{
			APIVersion:         game.APIVersion,
			Kind:               game.Kind,
			Name:               game.Name,
			Controller:         pointer.BoolPtr(true),
			BlockOwnerDeletion: pointer.BoolPtr(true),
			UID:                game.UID,
		},
	}

	labels := game.Labels
	labels[gameLabelName] = game.Name
	meta := metav1.ObjectMeta{
		Name:            game.Name,
		Namespace:       game.Namespace,
		Labels:          labels,
		OwnerReferences: owner,
	}

	// Fetch the corresponding deployment; create it if it does not exist
	deploy := &appsv1.Deployment{}
	if err := r.Get(ctx, name, deploy); err != nil {
		if !errors.IsNotFound(err) {
			return err
		}
		deploy = &appsv1.Deployment{
			ObjectMeta: meta,
			Spec:       getDeploymentSpec(game, labels),
		}
		if err := r.Create(ctx, deploy); err != nil {
			return err
		}
		logger.Info("create deployment success", "name", name.String())
	} else {
		// If it exists, compare it with the deployment generated from the
		// game and update it when they differ
		want := getDeploymentSpec(game, labels)
		got := getSpecFromDeployment(deploy)
		if !reflect.DeepEqual(want, got) {
			deploy = &appsv1.Deployment{
				ObjectMeta: meta,
				Spec:       want,
			}
			if err := r.Update(ctx, deploy); err != nil {
				return err
			}
			logger.Info("update deployment success", "name", name.String())
		}
	}

	// Create the service
	svc := &corev1.Service{}
	if err := r.Get(ctx, name, svc); err != nil {
		...
	}

	// Create the ingress
	ing := &networkingv1.Ingress{}
	if err := r.Get(ctx, name, ing); err != nil {
		...
	}

	// Compute and update the status
	newStatus := myappv1.GameStatus{
		Replicas:      *game.Spec.Replicas,
		ReadyReplicas: deploy.Status.ReadyReplicas,
	}
	if newStatus.Replicas == newStatus.ReadyReplicas {
		newStatus.Phase = myappv1.Running
	} else {
		newStatus.Phase = myappv1.NotReady
	}
	if !reflect.DeepEqual(game.Status, newStatus) {
		game.Status = newStatus
		logger.Info("update game status", "name", name.String())
		return r.Client.Status().Update(ctx, game)
	}
	return nil
}
```

By default the generated controller only watches the custom resource. In this example we also need to watch the game's child resources, e.g. to check whether the deployment still matches the desired state.
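The create-if-missing / update-if-drifted pattern at the heart of syncGame can be distilled into a self-contained sketch, using hypothetical stand-in types and an in-memory map instead of the Kubernetes client:

```go
package main

import (
	"fmt"
	"reflect"
)

// DeploymentSpec is a stand-in for the fields the controller cares about.
type DeploymentSpec struct {
	Replicas int32
	Image    string
}

// store is a hypothetical stand-in for the API server.
type store map[string]DeploymentSpec

// createOrUpdate mirrors the syncGame flow: create the object when it is
// missing, update it only when the observed spec drifts from the desired one.
func createOrUpdate(s store, name string, want DeploymentSpec) string {
	got, ok := s[name]
	if !ok {
		s[name] = want
		return "created"
	}
	if !reflect.DeepEqual(got, want) {
		s[name] = want
		return "updated"
	}
	return "unchanged"
}

func main() {
	s := store{}
	want := DeploymentSpec{Replicas: 1, Image: "game:2048"} // hypothetical image
	fmt.Println(createOrUpdate(s, "game-sample", want))     // first sync
	fmt.Println(createOrUpdate(s, "game-sample", want))     // already in sync
	want.Replicas = 2
	fmt.Println(createOrUpdate(s, "game-sample", want)) // spec drifted
}
```

Updating only on drift is what makes the controller level-triggered and idempotent: repeated reconciles of an in-sync object are no-ops.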
```go
// SetupWithManager sets up the controller with the Manager.
func (r *GameReconciler) SetupWithManager(mgr ctrl.Manager) error {
	// Create the controller
	c, err := controller.New("game-controller", mgr, controller.Options{
		Reconciler:              r,
		MaxConcurrentReconciles: 3, // number of workers the controller runs
	})
	if err != nil {
		return err
	}

	// Watch the custom resource
	if err := c.Watch(&source.Kind{Type: &myappv1.Game{}}, &handler.EnqueueRequestForObject{}); err != nil {
		return err
	}

	// Watch deployments and enqueue the owner, i.e. the game namespace/name
	if err := c.Watch(&source.Kind{Type: &appsv1.Deployment{}}, &handler.EnqueueRequestForOwner{
		OwnerType:    &myappv1.Game{},
		IsController: true,
	}); err != nil {
		return err
	}
	return nil
}
```

Deploying and Verifying
Install the CRD:

```shell
make install

# Check the deployment
kubectl get deploy game-sample
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
game-sample   1/1     1            1           6m
```
```shell
# Check the ingress
kubectl get ing game-sample
NAME          CLASS    HOSTS       ADDRESS        PORTS   AGE
game-sample   <none>   mygame.io   192.168.49.2   80      7m
```
To verify the application, add `<Ingress ADDRESS IP> mygame.io` to /etc/hosts, then open mygame.io in a browser.
Verify scaling:

```shell
kubectl scale games.myapp.qingwave.github.io game-sample --replicas 2
game.myapp.qingwave.github.io/game-sample scaled

# After scaling up
kubectl get games.myapp.qingwave.github.io
NAME          PHASE     HOST        DESIRED   CURRENT   READY   AGE
game-sample   Running   mygame.io   2         2         2       7m
```
Likewise, a validating webhook can be implemented through ValidateCreate and ValidateUpdate:
```go
func (r *Game) ValidateCreate() error {
	gamelog.Info("validate create", "name", r.Name)

	// The host must not contain wildcards
	if strings.Contains(r.Spec.Host, "*") {
		return errors.New("host should not contain *")
	}
	return nil
}
```

Verifying the webhook locally requires certificates; testing in a cluster is easier — see the official documentation.
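The host check itself is plain string logic and can be exercised without a cluster. A self-contained sketch, where validateHost is a hypothetical helper rather than part of the generated code:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// validateHost applies the same rule as ValidateCreate above:
// the ingress host must not contain a wildcard.
func validateHost(host string) error {
	if strings.Contains(host, "*") {
		return errors.New("host should not contain *")
	}
	return nil
}

func main() {
	fmt.Println(validateHost("mygame.io"))   // valid host -> <nil>
	fmt.Println(validateHost("*.mygame.io")) // wildcard rejected
}
```

Factoring validation into pure functions like this keeps webhook logic unit-testable, with certificate setup needed only for end-to-end tests.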
Summary
At this point we have implemented a fully functional game-operator that manages the lifecycle of game resources: creating or updating a game automatically creates the deployment, service, and ingress; when the deployment is accidentally deleted or modified, it is automatically restored to the desired state; and the scale subresource is implemented as well.
kubebuilder greatly reduces the cost of developing an operator — we only need to care about the business logic, with no need to hand-write clients and controllers. At the same time, the generated code hides many details, such as the controller's maximum worker count, resync period, and queue type; only by understanding how operators work under the hood can we apply them well in production.
References
[1] https://book.kubebuilder.io/