k8s Basics (3) – k8s Operations Examples

1. Adjusting the Number of Pods

  • 1. Modify the replicas value in the yml file (see the snippet after this list)
  • 2. Change the deployment's pod count in the dashboard; this change is only temporary — once the yml file is re-applied, the pod count reverts to the value in the yml
  • 3. Use the kubectl scale command; this change is also temporary and reverts once the yml file is re-applied
  • 4. Edit the deployment with kubectl edit
  • 5. Use the HPA controller
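A minimal sketch of method 1, using the deployment manifest style from later in this section: the replicas field under spec is the value the controller reconciles back to whenever the yml is re-applied.

# fragment of a deployment yml; only replicas matters here
spec:
  replicas: 3   # desired pod count; kubectl apply -f restores this value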

1.1 Manually adjusting the pod count

kubectl scale scales the number of pods running in the k8s environment up (increase) or down (decrease).

# Current pod count
<root@ubuntu181 ~># kubectl get deployment -n test
NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
test-nginx-deployment         1/1     1            1           59m
test-tomcat-app1-deployment   1/1     1            1           41h

# View the command help
<root@ubuntu181 ~>#kubectl --help | grep scale
  scale         Set a new size for a Deployment, ReplicaSet or Replication Controller
  autoscale     Auto-scale a Deployment, ReplicaSet, or ReplicationController
<root@ubuntu181 ~>#kubectl scale --help

# Scale up / down
<root@ubuntu181 ~>#kubectl scale deployment/test-tomcat-app1-deployment --replicas=2 -n test
deployment.apps/test-tomcat-app1-deployment scaled

# Verify the result of manual scaling
<root@ubuntu181 ~>#kubectl get deployment -n test
NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
test-nginx-deployment         1/1     1            1           63m
test-tomcat-app1-deployment   2/2     2            2           41h

1.2 HPA: automatically scaling the pod count

  • kubectl autoscale automatically controls the number of pods running in the k8s cluster (horizontal autoscaling); the pod count range and the trigger condition must be set in advance.
  • Since version 1.1, k8s has shipped a controller named HPA (Horizontal Pod Autoscaler) that automatically scales pods up and down based on pod resource (CPU/Memory) utilization. Early versions could only use the Heapster component with CPU utilization as the trigger condition; starting with k8s 1.11, Metrics Server performs the data collection and exposes the collected data through the Aggregated API (e.g. metrics.k8s.io, custom.metrics.k8s.io, external.metrics.k8s.io), from which the HPA controller queries it, so pods can be scaled on the utilization of a given resource.
The controller manager queries metrics resource usage every 15s by default (configurable via --horizontal-pod-autoscaler-sync-period).
Three metrics types are supported:
  -> predefined metrics (e.g. pod CPU), computed as utilization
  -> custom pod metrics, computed as raw values
  -> custom object metrics
Two metrics query methods are supported:
  -> Heapster
  -> a custom REST API
Multiple metrics are supported.

An operator creates an HPA controller through the master API; the HPA controller then scales the deployment automatically, managing the pod count through kube-controller-manager.
When a deployment with an HPA configured finds that its pods' resource utilization exceeds the configured value, pods are created through the kube-controller-manager service until utilization falls below that value; the HPA obtains the resource utilization of the pods inside the deployment it manages through the metrics API.

How it works:
The HPA obtains the metric data of the deployment's pods through the master API (Metrics Server collects the data and exposes it through the API server; the HPA reads the pod data through the master API), compares it with the value defined on the HPA, and if resource utilization exceeds that value, creates new pods through kube-controller-manager until utilization falls below the HPA-defined value.
kube-controller-manager queries metrics resource usage every 15s by default.
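For reference, the scaling decision the HPA makes each period follows the documented formula:

desiredReplicas = ceil( currentReplicas * currentMetricValue / desiredMetricValue )

# example: 2 replicas at 90% CPU against a 60% target
# ceil(2 * 90 / 60) = 3 replicas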

1.2.1 Preparing metrics-server

Use metrics-server as the HPA data source.

Prepare the image

docker pull k8s.gcr.io/metrics-server/metrics-server:v0.4.0
docker tag k8s.gcr.io/metrics-server/metrics-server:v0.4.0 harbor.linux.net/baseimages/metrics-server-amd64:v0.4.0
docker push harbor.linux.net/baseimages/metrics-server-amd64:v0.4.0

Modify the yaml file

<root@ubuntu181 ~>#wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.0/components.yaml

<root@ubuntu181 ~>#vim components.yaml
        image: harbor.linux.net/baseimages/metrics-server-amd64:v0.4.0
        imagePullPolicy: IfNotPresent

<root@ubuntu181 ~>#kubectl apply -f components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

Verify the metrics-server pod

<root@ubuntu181 ~>#kubectl -n kube-system -o wide get pod | grep metrics-server
metrics-server-5b99f85777-67bf8              1/1     Running   0          83s    172.20.3.46     192.168.7.105   <none>           <none>
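Once the pod is Running, the metrics pipeline itself can be checked; both commands below only succeed after metrics-server has registered its aggregated API:

kubectl get apiservices v1beta1.metrics.k8s.io
kubectl top node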

Modify the controller-manager startup parameters: add --horizontal-pod-autoscaler-use-rest-clients=false (do not use other client data) and --horizontal-pod-autoscaler-sync-period=10s (the metrics collection interval). Inline comments after a line-continuation backslash would break the systemd unit, so they are omitted from the file below.

<root@ubuntu181 base>#kube-controller-manager --help | grep horizontal-pod-autoscaler-sync-period
      --horizontal-pod-autoscaler-sync-period duration                 The period for syncing the number of pods in horizontal pod autoscaler. (default 15s)

vim /etc/systemd/system/kube-controller-manager.service
[Service]
ExecStart=/opt/kube/bin/kube-controller-manager \
  --bind-address=192.168.7.101 \
  --allocate-node-cidrs=true \
  --cluster-cidr=172.20.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --leader-elect=true \
  --node-cidr-mask-size=24 \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-cluster-ip-range=10.20.0.0/16 \
  --use-service-account-credentials=true \
  --horizontal-pod-autoscaler-use-rest-clients=false \
  --horizontal-pod-autoscaler-sync-period=10s \
  --v=2
Restart=always
RestartSec=5

Restart controller-manager

<root@ubuntu181 ~>#systemctl daemon-reload
<root@ubuntu181 ~>#systemctl restart kube-controller-manager.service
<root@ubuntu181 ~>#ps -ef | grep kube-controller-manager
root      43099      1  4 22:15 ?        00:00:01 /opt/kube/bin/kube-controller-manager --bind-address=192.168.7.101 --allocate-node-cidrs=true --cluster-cidr=172.20.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig --leader-elect=true --node-cidr-mask-size=24 --root-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem --service-cluster-ip-range=10.20.0.0/16 --use-service-account-credentials=true --horizontal-pod-autoscaler-use-rest-clients=false --horizontal-pod-autoscaler-sync-period=10s --v=2
root      43362   1740  0 22:16 pts/0    00:00:00 grep --color=auto kube-controller-manager

1.2.2 Configuring scaling from the command line

<root@ubuntu181 nginx>#kubectl get deployment -n test
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
test-nginx-deployment   1/1     1            1           16s

<root@ubuntu181 nginx>#kubectl autoscale deployment test-nginx-deployment --min=2 --max=5 --cpu-percent=80 -n test
horizontalpodautoscaler.autoscaling/test-nginx-deployment autoscaled
<root@ubuntu181 nginx>#kubectl get hpa -n test
NAME                    REFERENCE                          TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
test-nginx-deployment   Deployment/test-nginx-deployment   <unknown>/80%   2         5         0          8

<root@ubuntu181 ~>#kubectl describe deployment/test-nginx-deployment -n test
Name:                   test-nginx-deployment
Namespace:              test
CreationTimestamp:      Sat, 10 Apr 2021 22:40:40 +0800
Labels:                 app=test-nginx-deployment-label
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=test-nginx-selector
Replicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable

  DESIRED # the desired number of replicas in the READY state
  CURRENT # the current total number of replicas
  UP-TO-DATE # the number of replicas that have completed the update
  AVAILABLE # the number of currently available replicas

1.2.3 Defining the scaling configuration in a yaml file

View the help: kubectl explain HorizontalPodAutoscaler

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: test-nginx-deployment-label
  name: test-nginx-deployment
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-nginx-selector
  template:
    metadata:
      labels:
        app: test-nginx-selector
    spec:
      containers:
      - name: test-nginx-container
        image: harbor.linux.net/test/nginx-web1:v1
        #command: ["/apps/tomcat/bin/run_tomcat.sh"]
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
          name: http
        - containerPort: 443
          protocol: TCP
          name: https
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "20"
        resources:
          limits:
            cpu: 2
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 1Gi
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: test-nginx-service-label
  name: test-nginx-service
  namespace: test
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 32080
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
    nodePort: 32443
  selector:
    app: test-nginx-selector
---
apiVersion: autoscaling/v1 # API version
kind: HorizontalPodAutoscaler # object type
metadata:  # object metadata
  namespace: test # namespace the object belongs to once created
  name: test-nginx-app1-podautoscaler # object name
  labels: # define labels
    app: test-nginx-app1 # custom label name
    version: v1  # custom api version label
spec: # object spec
  scaleTargetRef: # the target object to scale horizontally: Deployment, ReplicationController/ReplicaSet
    apiVersion: apps/v1  # API version, HorizontalPodAutoscaler.spec.scaleTargetRef.apiVersion
    kind: Deployment # the target object type is deployment
    name: test-nginx-deployment # the deployment's name
  minReplicas: 2 # minimum pod count
  maxReplicas: 5 # maximum pod count
  targetCPUUtilizationPercentage: 60 # target CPU utilization
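For comparison, a hedged sketch of the same HPA expressed on the autoscaling/v2beta2 API (available in this cluster version), where the CPU target becomes an entry in a metrics list:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: test-nginx-app1-podautoscaler
  namespace: test
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test-nginx-deployment
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60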

Verify the HPA

<root@ubuntu181 nginx>#kubectl apply -f  nginxhpa.yml
deployment.apps/test-nginx-deployment created
service/test-nginx-service created
horizontalpodautoscaler.autoscaling/test-nginx-app1-podautoscaler created

<root@ubuntu181 ~>#kubectl get hpa -n test
NAME                            REFERENCE                          TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
test-nginx-app1-podautoscaler   Deployment/test-nginx-deployment   <unknown>/60%   2         5         2          58s

<root@ubuntu181 ~>#kubectl top pod -n test
NAME                                     CPU(cores)   MEMORY(bytes)
test-nginx-deployment-58c4569cb5-gqj4q   0m           2Mi
test-nginx-deployment-58c4569cb5-r274q   0m           2Mi

1.3 Configuring automatic scaling

First scale manually to 5 pods, then verify that the deployment automatically scales back down when idle.

1.3.1 Scale the pods to 5

Modify the Deployment in the dashboard.

1.3.2 Verify the HPA logs

<root@ubuntu182 ~>#kubectl get hpa -n test
NAME                            REFERENCE                          TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
test-nginx-app1-podautoscaler   Deployment/test-nginx-deployment   <unknown>/60%   2         5         5          10m

<root@ubuntu181 ~>#kubectl describe deployment/test-nginx-deployment -n test
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  11m   deployment-controller  Scaled up replica set test-nginx-deployment-58c4569cb5 to 1
  Normal  ScalingReplicaSet  11m   deployment-controller  Scaled up replica set test-nginx-deployment-58c4569cb5 to 2
  Normal  ScalingReplicaSet  115s  deployment-controller  Scaled up replica set test-nginx-deployment-58c4569cb5 to 5
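To watch the HPA scale up rather than down, one option is to generate load against the service; a sketch, assuming the busybox image is pullable (the pod runs inside the test namespace, so the short service name resolves):

kubectl run load-generator --rm -it --image=busybox -n test -- /bin/sh -c "while true; do wget -q -O- http://test-nginx-service; done"
# in another terminal, watch the HPA react
kubectl get hpa -n test -w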

2. Modifying Resources on the Fly with kubectl edit

Used when a temporarily modified configuration needs to take effect immediately.

<root@ubuntu181 ~>#kubectl get deployment -n test
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
test-nginx-deployment   2/2     2            2           36m

<root@ubuntu181 ~>#kubectl edit deployment test-nginx-deployment -n test
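For scripted one-off changes, kubectl patch is a non-interactive alternative to kubectl edit; for example, changing the replica count in place:

kubectl patch deployment test-nginx-deployment -n test -p '{"spec":{"replicas":3}}'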

3. Defining Node Resource Labels

A label is a key-value pair. When a pod is created, the scheduler checks which nodes carry the specified label and places the pod only on nodes whose label values match.

3.1 View the current node labels

<root@ubuntu181 ~>#kubectl describe node 192.168.7.104
Name:               192.168.7.104
Roles:              node
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=192.168.7.104
                    kubernetes.io/os=linux
                    kubernetes.io/role=node

3.2 Define custom node labels and verify

<root@ubuntu181 ~>#kubectl label node 192.168.7.104 project=linux1
node/192.168.7.104 labeled
<root@ubuntu181 ~>#kubectl label node 192.168.7.104 test_label=test
node/192.168.7.104 labeled

<root@ubuntu181 ~>#kubectl describe node 192.168.7.104
Name:               192.168.7.104
Roles:              node
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=192.168.7.104
                    kubernetes.io/os=linux
                    kubernetes.io/role=node
                    project=linux1
                    test_label=test

3.3 Referencing a node label in yaml

<root@ubuntu181 nginx>#vim nginxhpa.yml
            cpu: 500m
            memory: 1Gi
      nodeSelector:
        project: linux1
<root@ubuntu181 nginx>#kubectl apply -f  nginxhpa.yml
deployment.apps/test-nginx-deployment created
service/test-nginx-service created
horizontalpodautoscaler.autoscaling/test-nginx-app1-podautoscaler created

<root@ubuntu181 nginx>#kubectl get pods -o wide -n test
NAME                                     READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE   READINESS GATES
test-nginx-deployment-855d86d78d-5chzp   1/1     Running   0          40s   172.20.2.16   192.168.7.104   <none>           <none>
test-nginx-deployment-855d86d78d-rbft7   1/1     Running   0          50s   172.20.2.15   192.168.7.104   <none>           <none>

3.4 Delete a custom node label

<root@ubuntu181 nginx>#kubectl label nodes 192.168.7.104 test_label-
node/192.168.7.104 labeled

4. Upgrading and Rolling Back Business Image Versions

In a given deployment, kubectl set image points the container at a new image:tag to roll out new code.
Build several versions of the nginx image, deploy v1 first, then upgrade version by version, and test both image upgrade and rollback.

4.1 Upgrading the image to a specified version

# v1; --record=true records the executed kubectl command (shown under CHANGE-CAUSE)
<root@ubuntu181 nginx>#kubectl apply -f nginx.yaml --record=true
<root@ubuntu181 nginx>#kubectl get deployments  -o wide -n test
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS             IMAGES                                SELECTOR
test-nginx-deployment   1/1     1            1           3m50s   test-nginx-container   harbor.linux.net/test/nginx-web1:v1   app=test-nginx-selector

# The image update command format is
kubectl set image deployment/deployment-name containers-name=image -n namespace

# Upgrade to v3
<root@ubuntu181 nginx>#kubectl set image deployment test-nginx-deployment test-nginx-container=harbor.linux.net/test/nginx-web1:v3 -n test
deployment.apps/test-nginx-deployment image updated

# Upgrade to v4
<root@ubuntu181 nginx>#kubectl set image deployment test-nginx-deployment test-nginx-container=harbor.linux.net/test/nginx-web1:v4 -n test
deployment.apps/test-nginx-deployment image updated
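The progress of a rolling update can be followed with kubectl rollout status, which blocks until the rollout completes or fails:

kubectl rollout status deployment/test-nginx-deployment -n test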

4.2 View the version history

<root@ubuntu181 tomcat-app1>#kubectl rollout history deployment/test-nginx-deployment -n test
deployment.apps/test-nginx-deployment
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=nginx.yaml --record=true
2         kubectl apply --filename=nginx.yaml --record=true
3         kubectl apply --filename=nginx.yaml --record=true

4.3 Roll back to the previous version

<root@ubuntu181 nginx>#kubectl rollout undo deployment/test-nginx-deployment -n test
deployment.apps/test-nginx-deployment rolled back

4.4 Roll back to a specified revision

# View the current revision numbers:
<root@ubuntu181 tomcat-app1>#kubectl rollout history deployment/test-nginx-deployment -n test
deployment.apps/test-nginx-deployment
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=nginx.yaml --record=true
2         kubectl apply --filename=nginx.yaml --record=true
3         kubectl apply --filename=nginx.yaml --record=true

# Roll back to a specific revision
<root@ubuntu181 tomcat-app1>#kubectl rollout undo deployment/test-nginx-deployment --to-revision=1 -n test
deployment.apps/test-nginx-deployment rolled back

# Revision numbers after the rollback:
<root@ubuntu181 tomcat-app1>#kubectl rollout history deployment/test-nginx-deployment -n test
deployment.apps/test-nginx-deployment 
REVISION  CHANGE-CAUSE
3         kubectl apply --filename=nginx.yaml --record=true
4         kubectl apply --filename=nginx.yaml --record=true
5         kubectl apply --filename=nginx.yaml --record=true

5. Cordoning a Node So It No Longer Participates in Scheduling

<root@ubuntu181 ~>#kubectl --help | grep cordon # cordon: a police line
  cordon        Mark node as unschedulable # mark the node so it no longer takes part in pod scheduling
  uncordon      Mark node as schedulable # remove the mark so the node takes part in pod scheduling again

# Exclude 192.168.7.102 from scheduling
kubectl cordon 192.168.7.102

<root@ubuntu181 ~>#kubectl cordon 192.168.7.102
node/192.168.7.102 already cordoned
<root@ubuntu181 ~>#kubectl get node
NAME            STATUS                     ROLES    AGE     VERSION
192.168.7.101   Ready,SchedulingDisabled   master   2d16h   v1.19.6
192.168.7.102   Ready,SchedulingDisabled   master   2d16h   v1.19.6
192.168.7.104   Ready                      node     2d16h   v1.19.6
192.168.7.105   Ready                      node     2d16h   v1.19.6
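cordon only stops new pods from being scheduled; pods already running on the node stay put. To evict them as well (e.g. before maintenance), kubectl drain performs a cordon plus eviction, and uncordon returns the node to scheduling:

kubectl drain 192.168.7.102 --ignore-daemonsets
kubectl uncordon 192.168.7.102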

6. Deleting Pods from etcd

Suitable for automation scenarios.

6.1 View the data related to a namespace

<root@ubuntu182 ~>#ETCDCTL_API=3 etcdctl get /registry/ --prefix --keys-only | grep test
/registry/deployments/test/test-nginx-deployment
/registry/endpointslices/test/test-nginx-service-gr97d
/registry/events/default/test-nginx-deployment.1674a9f21ca6997b
/registry/horizontalpodautoscalers/default/test-nginx-deployment
/registry/namespaces/test
/registry/pods/test/test-nginx-deployment-58c4569cb5-kktr2
/registry/replicasets/test/test-nginx-deployment-58c4569cb5
/registry/replicasets/test/test-nginx-deployment-744dbcb47f
/registry/replicasets/test/test-nginx-deployment-c75846d
/registry/secrets/test/default-token-s85c8
/registry/serviceaccounts/test/default
/registry/services/endpoints/test/test-nginx-service
/registry/services/specs/test/test-nginx-service

6.2 View the data of a specific object in etcd

ETCDCTL_API=3 etcdctl get /registry/pods/test/test-nginx-deployment-58c4569cb5-kktr2

6.3 Delete a specified resource from etcd

ETCDCTL_API=3 etcdctl del /registry/pods/test/test-nginx-deployment-58c4569cb5-kktr2
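Note that this removes the pod record behind the API server's back; if the pod is managed by a ReplicaSet, the controller will notice the missing replica and create a replacement, which can be confirmed with:

kubectl get pods -n test -o wide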

7. RBAC with Multiple Accounts

7.1 Introduction to ServiceAccount

  • Every pod is associated with a ServiceAccount, which represents the identity of the application running in the pod. The token file holds the ServiceAccount's authentication token. When an application uses this token to connect to the API server, the authentication plugin authenticates the ServiceAccount and passes the ServiceAccount's username back into the API server.
  • A service account is designed so that processes inside a pod can conveniently call the Kubernetes API or other external services. It differs from a user account:
    • A user account is for humans, while a service account is for processes in pods calling the Kubernetes API;
    • A user account spans namespaces, while a service account is limited to the namespace it lives in;
    • Every namespace automatically gets a default service account;
    • The token controller watches for service account creation and creates a secret for each one;
    • With the ServiceAccount admission controller enabled:
      • 1. Every pod, once created, automatically gets spec.serviceAccount set to default (unless another ServiceAccount is specified)
      • 2. The service account referenced by the pod is verified to exist; otherwise pod creation is rejected
      • 3. If the pod does not specify ImagePullSecrets, the service account's ImagePullSecrets are added to the pod
      • 4. After each container starts, the service account's token and ca.crt are mounted at /var/run/secrets/kubernetes.io/serviceaccount/
  • Reference: https://kubernetes.io/zh/docs/reference/access-authn-authz/service-accounts-admin/
# List ServiceAccounts
<root@ubuntu181 tomcat-app1>#kubectl get sa
NAME      SECRETS   AGE
default   1         6d21h
<root@ubuntu181 tomcat-app1>#kubectl describe sa
Name:                default
Namespace:           default
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   default-token-h7dnz
Tokens:              default-token-h7dnz
Events:              <none>

# Create an account in a specified namespace
<root@ubuntu181 tomcat-app1>#kubectl create serviceaccount xiaoming -n test
serviceaccount/xiaoming created

<root@ubuntu181 tomcat-app1>#kubectl describe sa -n test
Name:                xiaoming
Namespace:           test
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   xiaoming-token-gd547
Tokens:              xiaoming-token-gd547
Events:              <none>

# View the data in the secret, which contains the CA certificate, the namespace, and the token
<root@ubuntu181 tomcat-app1>#kubectl describe secret xiaoming-token-gd547 -n test
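To make a pod run under this account instead of default, reference it through spec.serviceAccountName; a minimal hypothetical pod for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: sa-demo          # hypothetical name, for illustration only
  namespace: test
spec:
  serviceAccountName: xiaoming   # mounts xiaoming's token instead of default's
  containers:
  - name: app
    image: harbor.linux.net/test/nginx-web1:v1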

7.2 RBAC

Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of users within an organization.

7.2.1 Using Role and RoleBinding

A Role resource defines which operations can be performed on which resources.

# Example
# cat test-role.yml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: test
  name: test-role
rules:
- apiGroups: ["*"]
  resources: ["pods"]
  #verbs: ["*"]
  ##RO-Role
  verbs: ["get", "watch", "list"]
- apiGroups: ["extensions", "apps"]
  resources: ["deployments"]
  #verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  ##RO-Role
  verbs: ["get", "watch", "list"]

<root@ubuntu181 data>#kubectl apply -f test-role.yml
role.rbac.authorization.k8s.io/test-role created

Create a RoleBinding resource to bind the role to a subject

# cat test-bind.yml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: role-bind-test
  namespace: test
subjects:
- kind: ServiceAccount
  name: xiaoming
  namespace: test
roleRef:
  kind: Role
  name: test-role
  apiGroup: rbac.authorization.k8s.io
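After applying the binding, the effective permissions can be checked by impersonating the ServiceAccount; with the read-only verbs above, get should return yes and delete should return no:

kubectl apply -f test-bind.yml
kubectl auth can-i get pods -n test --as=system:serviceaccount:test:xiaoming
kubectl auth can-i delete pods -n test --as=system:serviceaccount:test:xiaoming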

7.3 User Login

7.3.1 Token-based login

# Get the token
<root@ubuntu181 data>#kubectl describe secret xiaoming-token-gd547 -n test
Name:         xiaoming-token-gd547
Namespace:    test
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: xiaoming
              kubernetes.io/service-account.uid: 8da8eadd-689c-4ecc-86c7-c805ba3d7787

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IlhLZjgzNDBmRksyR1ZLcGxqZ3hIMElKeW1rU0w0NjlLc3NoSmdrdVlQdTAifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJ0ZXN0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InhpYW9taW5nLXRva2VuLWdkNTQ3Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6InhpYW9taW5nIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiOGRhOGVhZGQtNjg5Yy00ZWNjLTg2YzctYzgwNWJhM2Q3Nzg3Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OnRlc3Q6eGlhb21pbmcifQ.M215tsTBVjuMbm5WEKlDD7HKDxfPPKqbys-flOLaU5JZOGx3cOdruGAOmsHuTD-Vf0rqSyjjdCvQ80YusO-0XPqRvH5pSSdRcgLGWkSW_i-mY5wSIH_fnysKRGnYWy9E7MnP7bMUd345L84twHPkjbzx2_8BY5rIDEU_ePvm86_bqNzmtZ7FmEvfa5ByeJDEDeG96Q-aF1k5u8M8J6rtSAdOcg0E9DqMNWW3Cw_CyQOCKtvkEDJMv7d7a_LkxmHHjASJ_rrZrXIPfcEGVjoJzN1pvqRWruL_yTJHlEHPop0KT5v_jv31_XdhJOAtoA-Sxf0q81KlnXQNQi5u-PUtWg

# Extract and base64-decode the token directly

<root@ubuntu181 data>#kubectl get secret xiaoming-token-gd547 -o jsonpath={.data.token} -n test |base64 -d
eyJhbGciOiJSUzI1NiIsImtpZCI6IlhLZjgzNDBmRksyR1ZLcGxqZ3hIMElKeW1rU0w0NjlLc3NoSmdrdVlQdTAifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJ0ZXN0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InhpYW9taW5nLXRva2VuLWdkNTQ3Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6InhpYW9taW5nIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiOGRhOGVhZGQtNjg5Yy00ZWNjLTg2YzctYzgwNWJhM2Q3Nzg3Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OnRlc3Q6eGlhb21pbmcifQ.M215tsTBVjuMbm5WEKlDD7HKDxfPPKqbys-flOLaU5JZOGx3cOdruGAOmsHuTD-Vf0rqSyjjdCvQ80YusO-0XPqRvH5pSSdRcgLGWkSW_i-mY5wSIH_fnysKRGnYWy9E7MnP7bMUd345L84twHPkjbzx2_8BY5rIDEU_ePvm86_bqNzmtZ7FmEvfa5ByeJDEDeG96Q-aF1k5u8M8J6rtSAdOcg0E9DqMNWW3Cw_CyQOCKtvkEDJMv7d7a_LkxmHHjASJ_rrZrXIPfcEGVjoJzN1pvqRWruL_yTJHlEHPop0KT5v_jv31_XdhJOAtoA-Sxf0q81KlnXQNQi5u-PUtWg
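The decoded token can also be used directly as a Bearer token against the API server; a sketch, assuming the API server listens at https://192.168.7.101:6443:

TOKEN=$(kubectl get secret xiaoming-token-gd547 -o jsonpath={.data.token} -n test | base64 -d)
curl -k -H "Authorization: Bearer $TOKEN" https://192.168.7.101:6443/api/v1/namespaces/test/pods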

7.3.2 Logging in with a kubeconfig file

Create the csr file

# cat xiaoming-csr.json
{
  "CN": "xiaoming",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

Sign the certificate

<root@ubuntu181 xiaoming>#cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem  -ca-key=/etc/kubernetes/ssl/ca-key.pem -config=/etc/kubeasz/clusters/k8s-01/ssl/ca-config.json -profile=kubernetes xiaoming-csr.json | cfssljson -bare xiaoming
2021/04/15 16:58:36 [INFO] generate received request
2021/04/15 16:58:36 [INFO] received CSR
2021/04/15 16:58:36 [INFO] generating key: rsa-2048
2021/04/15 16:58:36 [INFO] encoded CSR
2021/04/15 16:58:36 [INFO] signed certificate with serial number 89595049681671765668118199791100464000669527954
2021/04/15 16:58:36 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

<root@ubuntu181 xiaoming>#ls
xiaoming.csr  xiaoming-csr.json  xiaoming-key.pem  xiaoming.pem

Set the client authentication parameters

kubectl config set-credentials xiaoming \
--client-certificate=/etc/kubernetes/ssl/xiaoming.pem \
--client-key=/etc/kubernetes/ssl/xiaoming-key.pem \
--embed-certs=true \
--kubeconfig=xiaoming.kubeconfig
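The context below references a cluster named cluster1, so the kubeconfig also needs a cluster entry; a sketch, assuming the API server address is https://192.168.7.101:6443:

kubectl config set-cluster cluster1 \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://192.168.7.101:6443 \
--kubeconfig=xiaoming.kubeconfig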

Set the context parameters

kubectl config set-context cluster1 \
--cluster=cluster1 \
--user=xiaoming \
--namespace=test \
--kubeconfig=xiaoming.kubeconfig

Set the default context

kubectl config use-context cluster1 --kubeconfig=xiaoming.kubeconfig

Get the token and write it into the user's kubeconfig file

kubectl get secret -n test
kubectl describe secret xiaoming-token-gd547 -n test
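One way to finish this step is to reuse set-credentials, which merges the token into the existing xiaoming user entry (a sketch; secret name as created above):

TOKEN=$(kubectl get secret xiaoming-token-gd547 -o jsonpath={.data.token} -n test | base64 -d)
kubectl config set-credentials xiaoming --token=$TOKEN --kubeconfig=xiaoming.kubeconfig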