Kubernetes Basics (5) – Kubernetes in Practice, Part 2

1. Case Study: MySQL Master-Slave Architecture

  • References
    https://kubernetes.io/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/
    https://kubernetes.io/zh/docs/tasks/run-application/run-replicated-stateful-application/
    https://www.kubernetes.org.cn/statefulset
  • Implemented with StatefulSet:
    If an application needs no stable identifiers and no ordered deployment, deletion, or scaling, it should be run by a controller for stateless replicas — a Deployment or ReplicaSet fits stateless workloads better. StatefulSet is designed for managing stateful services, such as MySQL or MongoDB clusters.

A StatefulSet is essentially a variant of Deployment that reached GA in v1.9. It exists to solve the problem of stateful services: the Pods it manages have fixed names and a defined start/stop order, the Pod name serves as its network identity (hostname), and persistent shared storage is required.
A Deployment is paired with a regular Service, whereas a StatefulSet is paired with a headless service. A headless service differs from a regular Service in that it has no Cluster IP; resolving its name returns the Endpoint list of all Pods backing that headless service.
StatefulSet characteristics:
  -> each pod gets a fixed, unique network identifier
  -> each pod gets fixed, persistent external storage
  -> pods are deployed and scaled up in order
  -> pods are deleted and terminated in order
  -> pods are rolling-updated automatically, in order

1.1 Components of a StatefulSet

StatefulSet was designed to solve the problem of stateful services (whereas Deployments and ReplicaSets are designed for stateless ones). Its use cases include:

  • Stable persistent storage: after a Pod is rescheduled it still reaches the same persisted data, implemented with PVCs
  • Stable network identity: after a Pod is rescheduled its PodName and HostName are unchanged, implemented with a Headless Service (a Service without a Cluster IP)
  • Ordered deployment and scaling: Pods are ordered, and deployment or scale-up proceeds in the defined order (from 0 to N-1; before the next Pod runs, all preceding Pods must be Running and Ready), implemented with init containers
  • Ordered scale-down and deletion (from N-1 to 0)

From these use cases, a StatefulSet is made up of the following parts:

  • A Headless Service that defines the Pods' network identity (DNS domain)
  • volumeClaimTemplates for obtaining PersistentVolumes: a storage claim template that automatically creates a PVC with the given name and size; the PVC must be satisfiable by a storage class or pre-created PVs
  • The StatefulSet itself, which defines the application: how many Pod replicas run, with a DNS name defined for each Pod

1.2 Preparing Images

https://github.com/docker-library/ # GitHub source

# Prepare the xtrabackup image
docker pull registry.cn-hangzhou.aliyuncs.com/hxpdocker/xtrabackup:1.0
docker tag registry.cn-hangzhou.aliyuncs.com/hxpdocker/xtrabackup:1.0 harbor.linux.net/test/xtrabackup:1.0
docker push harbor.linux.net/test/xtrabackup:1.0

# Prepare the mysql image
docker pull mysql:5.7
<root@ubuntu181 redis>#docker run -it --rm mysql:5.7 bash
root@4e8ce7462d3c:/# mysql --version
mysql  Ver 14.14 Distrib 5.7.33, for Linux (x86_64) using  EditLine wrapper
docker tag mysql:5.7 harbor.linux.net/baseimages/mysql:5.7.33
docker push harbor.linux.net/baseimages/mysql:5.7.33

1.3 Creating PVs

PVCs bind automatically to matching PVs, so we only need enough available PVs. The PV count determines how many mysql pods can start; here we create six PVs, so at most six mysql pods can run.

# Files
<root@ubuntu181 mysql>#pwd
/data/k8s-data/yaml/testapp/mysql
<root@ubuntu181 mysql>#tree
.
├── mysql-configmap.yaml
├── mysql-services.yaml
├── mysql-statefulset.yaml
└── pv
    └── mysql-persistentvolume.yaml

<root@k8s-harbor ~>#mkdir -m 755 /data/k8sdata/test/mysql-datadir-{1..6}
<root@ubuntu181 mysql>#cat pv/mysql-persistentvolume.yaml 
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-datadir-1
  namespace: test
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /data/k8sdata/test/mysql-datadir-1 
    server: 192.168.7.107
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-datadir-2
  namespace: test
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /data/k8sdata/test/mysql-datadir-2
    server: 192.168.7.107
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-datadir-3
  namespace: test
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /data/k8sdata/test/mysql-datadir-3
    server: 192.168.7.107
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-datadir-4
  namespace: test
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /data/k8sdata/test/mysql-datadir-4
    server: 192.168.7.107
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-datadir-5
  namespace: test
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /data/k8sdata/test/mysql-datadir-5
    server: 192.168.7.107

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-datadir-6
  namespace: test
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /data/k8sdata/test/mysql-datadir-6
    server: 192.168.7.107

<root@ubuntu181 mysql>#kubectl apply -f  pv/mysql-persistentvolume.yaml 
persistentvolume/mysql-datadir-1 created
persistentvolume/mysql-datadir-2 created
persistentvolume/mysql-datadir-3 created
persistentvolume/mysql-datadir-4 created
persistentvolume/mysql-datadir-5 created
persistentvolume/mysql-datadir-6 created
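Since the six PV definitions differ only in the ordinal, the manifest can also be generated with a small loop (a sketch; NFS server and paths as above). Note that PersistentVolumes are cluster-scoped, so the namespace field in the manifests above is ignored by the API server.

```shell
#!/bin/bash
# Generate the six near-identical NFS PV manifests instead of copy-pasting.
gen_pv() {   # gen_pv <ordinal>
cat <<EOF
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-datadir-$1
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /data/k8sdata/test/mysql-datadir-$1
    server: 192.168.7.107
EOF
}

for i in 1 2 3 4 5 6; do gen_pv "$i"; done > mysql-persistentvolume.yaml
```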

1.4 Running the MySQL Service

The deployment consists of one ConfigMap, two Services, and one StatefulSet.

1.4.1 YAML Files

mysql-configmap.yaml

# cat mysql-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  master.cnf: |
    # Apply this config only on the master.
    [mysqld]
    log-bin
    log_bin_trust_function_creators=1
    lower_case_table_names=1
  slave.cnf: |
    # Apply this config only on slaves.
    [mysqld]
    super-read-only
    log_bin_trust_function_creators=1

mysql-services.yaml

The headless service provides a home for the DNS entries that the StatefulSet controller creates for each Pod in the set. Because the service is named mysql, a Pod is reachable by resolving <pod-name>.mysql from any other Pod in the same Kubernetes cluster and namespace.

The client service, mysql-read, is a regular Service with its own cluster IP. That cluster IP distributes connections across all MySQL Pods that report being ready. The set of possible endpoints includes the MySQL master and all replicas.

Note that only read queries can use the load-balanced client service. Because there is a single MySQL master, clients must connect directly to the master Pod (through its DNS entry in the headless service) to perform writes.
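With the headless service named mysql and two replicas, each pod's stable FQDN follows a fixed pattern. A minimal sketch (the cluster domain is an assumption — the upstream default is cluster.local, while this cluster apparently uses linux.local — and the namespace is assumed to be default):

```shell
#!/bin/bash
# Stable DNS names the StatefulSet controller creates behind a headless service.
pod_fqdn() {   # pod_fqdn <statefulset-name> <service-name> <ordinal>
  echo "$1-$3.$2.default.svc.cluster.local"
}

# serviceName "mysql" with replicas=2 yields:
pod_fqdn mysql mysql 0
pod_fqdn mysql mysql 1
```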

# cat mysql-services.yaml

# Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  clusterIP: None
  selector:
    app: mysql
---
# Client service for connecting to any MySQL instance for reads.
# For writes, you must instead connect to the master: mysql-0.mysql.
apiVersion: v1
kind: Service
metadata:
  name: mysql-read
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql

mysql-statefulset.yaml

# cat mysql-statefulset.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  replicas: 2
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:
      - name: init-mysql
        image: harbor.linux.net/baseimages/mysql:5.7.33
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Generate mysql server-id from pod ordinal index.
          [[ $(hostname) =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          echo [mysqld] > /mnt/conf.d/server-id.cnf
          # Add an offset to avoid reserved server-id=0 value.
          echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
          # Copy appropriate conf.d files from config-map to emptyDir.
          if [[ $ordinal -eq 0 ]]; then
            cp /mnt/config-map/master.cnf /mnt/conf.d/
          else
            cp /mnt/config-map/slave.cnf /mnt/conf.d/
          fi
        volumeMounts:
        - name: conf
          mountPath: /mnt/conf.d
        - name: config-map
          mountPath: /mnt/config-map
      - name: clone-mysql
        image: harbor.linux.net/test/xtrabackup:1.0
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Skip the clone if data already exists.
          [[ -d /var/lib/mysql/mysql ]] && exit 0
          # Skip the clone on master (ordinal index 0).
          [[ $(hostname) =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          [[ $ordinal -eq 0 ]] && exit 0
          # Clone data from previous peer.
          ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
          # Prepare the backup.
          xtrabackup --prepare --target-dir=/var/lib/mysql
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      containers:
      - name: mysql
        image: harbor.linux.net/baseimages/mysql:5.7.33
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "1"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
        livenessProbe:
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          exec:
            # Check we can execute queries over TCP (skip-networking is off).
            command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
      - name: xtrabackup
        image: harbor.linux.net/test/xtrabackup:1.0 
        ports:
        - name: xtrabackup
          containerPort: 3307
        command:
        - bash
        - "-c"
        - |
          set -ex
          cd /var/lib/mysql
          # Determine binlog position of cloned data, if any.
          if [[ -f xtrabackup_slave_info ]]; then
            # XtraBackup already generated a partial "CHANGE MASTER TO" query
            # because we're cloning from an existing slave.
            mv xtrabackup_slave_info change_master_to.sql.in
            # Ignore xtrabackup_binlog_info in this case (it's useless).
            rm -f xtrabackup_binlog_info
          elif [[ -f xtrabackup_binlog_info ]]; then
            # We're cloning directly from master. Parse binlog position.
            [[ $(cat xtrabackup_binlog_info) =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
            rm xtrabackup_binlog_info
            echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                  MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
          fi
          # Check if we need to complete a clone by starting replication.
          if [[ -f change_master_to.sql.in ]]; then
            echo "Waiting for mysqld to be ready (accepting connections)"
            until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
            echo "Initializing replication from clone position"
            # In case of container restart, attempt this at-most-once.
            mv change_master_to.sql.in change_master_to.sql.orig
            mysql -h 127.0.0.1 <<EOF
          $(<change_master_to.sql.orig),
            MASTER_HOST='mysql-0.mysql',
            MASTER_USER='root',
            MASTER_PASSWORD='',
            MASTER_CONNECT_RETRY=10;
          START SLAVE;
          EOF
          fi
          # Start a server to send backups when requested by peers.
          exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
            "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
      volumes:
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
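The ordinal/server-id logic used by the init-mysql container above can be sanity-checked in isolation (the hostnames below are just the StatefulSet pod names; this is a standalone sketch, not part of the manifest):

```shell
#!/bin/bash
# Derive a unique mysql server-id from the ordinal at the end of the pod
# hostname, with a +100 offset to avoid the reserved server-id=0 value.
ordinal_server_id() {   # ordinal_server_id <pod-hostname>
  ordinal=${1##*-}                 # strip everything up to the last "-"
  echo "server-id=$((100 + ordinal))"
}

ordinal_server_id mysql-0
ordinal_server_id mysql-1
```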

1.4.2 Deploying the MySQL Service

<root@ubuntu181 mysql>#kubectl apply -f .
configmap/mysql created
service/mysql created
service/mysql-read created
statefulset.apps/mysql created

<root@ubuntu181 ~>#kubectl get pod -o wide
NAME      READY   STATUS    RESTARTS   AGE     IP             NODE            NOMINATED NODE   READINESS GATES
mysql-0   2/2     Running   0          4m37s   172.20.2.84    192.168.7.104   <none>           <none>
mysql-1   2/2     Running   1          2m42s   172.20.3.107   192.168.7.105   <none>           <none>

1.4.3 Verifying MySQL Replication
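With both pods Running, replication can be verified by writing through the master's stable DNS name and reading through the load-balanced mysql-read service (a sketch; the temporary client pod name and the test database are arbitrary):

```
# Write on the master (mysql-0) via its headless-service DNS entry:
kubectl run mysql-client --image=mysql:5.7 -i --rm --restart=Never -- \
  mysql -h mysql-0.mysql -e "CREATE DATABASE IF NOT EXISTS testdb; \
    CREATE TABLE IF NOT EXISTS testdb.t (c INT); INSERT INTO testdb.t VALUES (1);"

# Read through mysql-read (may be served by the master or a slave):
kubectl run mysql-client --image=mysql:5.7 -i --rm --restart=Never -- \
  mysql -h mysql-read -e "SELECT * FROM testdb.t;"

# Inspect replication state on the slave:
kubectl exec mysql-1 -c mysql -- mysql -e "SHOW SLAVE STATUS\G"
```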

2. Case Study: WordPress

  • LNMP case: a WordPress blog site built on Nginx+PHP. Nginx and PHP must run as separate containers in the same Pod; MySQL runs in the default namespace, and the database must be readable and writable through its service name.
  • The PHP code is mounted into each container's code directory via an NFS share
  • Reference:

    https://cn.wordpress.org/download/releases/

2.1 Preparing the Nginx Image

2.1.1 File List

<root@ubuntu181 nginx>#pwd
/data/k8s-data/dockerfile/web/test/wordpress/nginx
<root@ubuntu181 nginx>#tree
.
├── build-command.sh
├── Dockerfile
├── index.html
├── nginx.conf
└── run_nginx.sh

<root@ubuntu181 nginx>#cat run_nginx.sh
#!/bin/bash
/usr/local/nginx/sbin/nginx
tail -f /etc/hosts

<root@ubuntu181 nginx>#cat Dockerfile
FROM harbor.linux.net/pub-images/nginx-base:v1.14.2

ADD nginx.conf /usr/local/nginx/conf/nginx.conf
ADD run_nginx.sh /usr/local/nginx/sbin/run_nginx.sh
RUN mkdir -pv /usr/local/nginx/html/wordpress
RUN chown nginx.nginx /usr/local/nginx/html/wordpress/ -R

EXPOSE 80 443

CMD ["/usr/local/nginx/sbin/run_nginx.sh"]

<root@ubuntu181 nginx>#cat build-command.sh
#!/bin/bash
TAG=$1
docker build -t harbor.linux.net/test/wordpress-nginx:${TAG} .
echo "Image build finished, uploading to the Harbor server"
sleep 1
docker push  harbor.linux.net/test/wordpress-nginx:${TAG}
echo "Image upload finished"

2.1.2 Running the Build

<root@ubuntu181 nginx>#bash build-command.sh v1

2.2 Preparing the PHP Image

Official PHP image

docker pull php:5.6.40-fpm
docker tag php:5.6.40-fpm harbor.linux.net/baseimages/php:5.6.40-fpm
docker push harbor.linux.net/baseimages/php:5.6.40-fpm

Custom PHP image

<root@ubuntu181 php>#tree
.
├── build-command.sh
├── Dockerfile
├── run_php.sh
└── www.conf

<root@ubuntu181 php>#cat Dockerfile
#PHP Base Image
FROM harbor.linux.net/baseimages/centos-base:7.7.1908

MAINTAINER kong

RUN yum install -y  https://mirrors.tuna.tsinghua.edu.cn/remi/enterprise/remi-release-7.rpm && yum install  php56-php-fpm php56-php-mysql -y
ADD www.conf /opt/remi/php56/root/etc/php-fpm.d/www.conf
RUN chown nginx.nginx /opt/remi/php56/root/etc/php-fpm.d/ -R
ADD run_php.sh /usr/local/bin/run_php.sh
EXPOSE 9000

CMD ["/usr/local/bin/run_php.sh"]

<root@ubuntu181 php>#cat build-command.sh
#!/bin/bash
TAG=$1
docker build -t harbor.linux.net/test/wordpress-php-5.6:${TAG} .
echo "Image build finished, uploading to the Harbor server"
sleep 1
docker push harbor.linux.net/test/wordpress-php-5.6:${TAG}
echo "Image upload finished"

# Build
bash build-command.sh v1

2.3 Running the WordPress Site

Run the PHP environment with the custom image; the WordPress page files are stored on the back-end NFS server.

2.3.1 YAML File

<root@ubuntu181 wordpress>#cat wordpress.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: wordpress-app
  name: wordpress-app-deployment
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress-app
  template:
    metadata:
      labels:
        app: wordpress-app
    spec:
      containers:
      - name: wordpress-app-nginx
        image: harbor.linux.net/test/wordpress-nginx:v1 
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
          name: http
        - containerPort: 443
          protocol: TCP
          name: https
        volumeMounts:
        - name: wordpress
          mountPath: /usr/local/nginx/html/wordpress
          readOnly: false

      - name: wordpress-app-php
        image: harbor.linux.net/test/wordpress-php-5.6:v1
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 9000
          protocol: TCP
          name: php-fpm
        volumeMounts:
        - name: wordpress
          mountPath: /usr/local/nginx/html/wordpress
          readOnly: false

      volumes:
      - name: wordpress
        nfs:
          server: 192.168.7.107
          path: /data/k8sdata/test/wordpress

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: wordpress-app
  name: wordpress-app-spec
  namespace: test
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30031
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
    nodePort: 30033
  selector:
    app: wordpress-app

2.3.2 Deploying WordPress

Run WordPress

<root@k8s-harbor wordpress>#tar xvf wordpress-5.2.9-zh_CN.tar.gz -C /data/k8sdata/test/
<root@k8s-harbor wordpress>#chown 2019.2019 /data/k8sdata/test/ -R

# Apply wordpress.yaml
<root@ubuntu181 wordpress>#pwd
/data/k8s-data/yaml/testapp/wordpress
<root@ubuntu181 wordpress>#kubectl apply -f wordpress.yaml
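Once the pod is Running, a quick check that nginx serves the WordPress code through the NodePort (any node IP works; 192.168.7.104 is used here as an example):

```
curl -I http://192.168.7.104:30031/wordpress/
```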

Create the MySQL database and privileges

<root@ubuntu181 ~>#kubectl exec mysql-0 -i -t sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Defaulting container name to mysql.
Use 'kubectl describe pod/mysql-0 -n default' to see all of the containers in this pod.
# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4712
Server version: 5.7.33-log MySQL Community Server (GPL)

mysql> CREATE DATABASE wordpress; # create the database
Query OK, 1 row affected (0.01 sec)

mysql> GRANT ALL PRIVILEGES ON wordpress.* TO "wordpress"@"%" IDENTIFIED BY "wordpress"; # create the account and grant privileges
Query OK, 0 rows affected, 1 warning (0.01 sec)

Initialize WordPress

MySQL service address: mysql-0.mysql.default.svc.linux.local

Verify the MySQL data in Kubernetes

3. Case Study: CI/CD

Container code upgrades and rollbacks based on Jenkins and GitLab.

3.1 GitLab Code

3.2 Jenkins Configuration

3.3 Shell Script

#!/bin/bash

# Record script start time
starttime=`date +'%Y-%m-%d %H:%M:%S'`

# Variables
SHELL_DIR="/data/scripts"
SHELL_NAME="$0"
K8S_CONTROLLER1="192.168.7.101"
K8S_CONTROLLER2="192.168.7.102"
DATE=`date +%Y-%m-%d_%H_%M_%S`
METHOD=$1
Branch=$2

if test -z "$Branch";then
  Branch=develop
fi


function Code_Clone(){
  Git_URL="git@192.168.7.106:test/app1.git"
  DIR_NAME=`echo ${Git_URL} | awk -F "/" '{print $2}' | awk -F "." '{print $1}'`
  DATA_DIR="/data/gitdata/test"
  Git_Dir="${DATA_DIR}/${DIR_NAME}"
  cd ${DATA_DIR} && echo "Removing the previous version of the code and fetching the latest code of the current branch" && sleep 1 && rm -rf ${DIR_NAME}
  echo "Fetching code from branch ${Branch}" && sleep 1
  git clone -b ${Branch} ${Git_URL}
  echo "Branch ${Branch} cloned, starting the code build!" && sleep 1
  #cd ${Git_Dir} && mvn clean package
  #echo "Build finished, replacing IP addresses etc. with the test environment values"
  #####################################################
  sleep 1
  cd ${Git_Dir}
  tar czf ${DIR_NAME}.tar.gz ./*
}

# Copy the packaged archive to the k8s control node
function Copy_File(){
  echo "Archive created, copying to k8s control node ${K8S_CONTROLLER1}" && sleep 1
  scp ${Git_Dir}/${DIR_NAME}.tar.gz root@${K8S_CONTROLLER1}:/data/k8s-data/dockerfile/web/test/nginx/
  echo "Archive copied, server ${K8S_CONTROLLER1} will now build the Docker image!" && sleep 1
}

# Build and push the image from the control node
function Make_Image(){
  echo "Building the Docker image and pushing it to the Harbor server" && sleep 1
  ssh root@${K8S_CONTROLLER1} "cd /data/k8s-data/dockerfile/web/test/nginx/ && bash build-command.sh ${DATE}"
  echo "Docker image built and pushed to the Harbor server" && sleep 1
}

# Update the image tag in the k8s yaml file on the control node, so the yaml file stays in sync with the version running in k8s
function Update_k8s_yaml(){
  echo "Updating the image tag in the k8s yaml file" && sleep 1
  ssh root@${K8S_CONTROLLER1} "cd /data/k8s-data/yaml/testapp/nginx && sed -i 's#image: harbor.linux.net.*#image: harbor.linux.net\/test\/nginx-web1:${DATE}#g' nginx.yaml"
  echo "Image tag updated in the yaml file, now updating the image in the running containers" && sleep 1
}

# Update the container image in k8s from the control node. Two options: set the image tag directly, or apply the modified yaml file
function Update_k8s_container(){
  # Method 1
   ssh root@${K8S_CONTROLLER1} "kubectl set image deployment/test-nginx-deployment  test-nginx-container=harbor.linux.net/test/nginx-web1:${DATE} -n test"
  # Method 2 (method 1 is recommended)
  #ssh root@${K8S_CONTROLLER1} "cd /data/k8s-data/yaml/testapp/nginx && kubectl  apply -f nginx.yaml --record"
  echo "k8s image update finished" && sleep 1
  echo "Current image version: harbor.linux.net/test/nginx-web1:${DATE}"
  # Compute total script run time; remove the following four lines if not needed
  endtime=`date +'%Y-%m-%d %H:%M:%S'`
  start_seconds=$(date --date="$starttime" +%s)
  end_seconds=$(date --date="$endtime" +%s)
  echo "Total time for this image update: $((end_seconds-start_seconds))s"
}

# Roll back to the previous version using the built-in k8s revision history
function rollback_last_version(){
  echo "Rolling back to the previous version"
  ssh root@${K8S_CONTROLLER1}  "kubectl rollout undo deployment/test-nginx-deployment  -n test"
  sleep 1
  echo "Rollback to the previous version executed"
}

# Usage
usage(){
  echo "To deploy: ${SHELL_DIR}/${SHELL_NAME} deploy"
  echo "To roll back to the previous version: ${SHELL_DIR}/${SHELL_NAME} rollback_last_version"
}

# Main
main(){
  case ${METHOD} in
  deploy)
    Code_Clone;
    Copy_File;
    Make_Image; 
    Update_k8s_yaml;
    Update_k8s_container;
  ;;
  rollback_last_version)
    rollback_last_version;
  ;;
  *)
    usage;
  esac;
}

main $1 $2 $3
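The DIR_NAME extraction in Code_Clone can be sanity-checked in isolation:

```shell
#!/bin/bash
# The repo directory name is the part of the git URL after "/" and
# before ".git" -- for this URL the result is "app1".
Git_URL="git@192.168.7.106:test/app1.git"
DIR_NAME=$(echo "${Git_URL}" | awk -F "/" '{print $2}' | awk -F "." '{print $1}')
echo "${DIR_NAME}"
```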

3.4 Build v1

3.4.1 Building test-app1

3.4.2 Verification

3.5 Build v2

3.5.1 GitLab Code

git clone -b develop git@192.168.7.106:test/app1.git
cd app1/
vim index.html
  <h1> 111 V1 </h1>
  <h1> 222 V2 </h1>

git config --global user.name "test"
git config --global user.email "11111111@163.com"
git add .
git commit -m "v2"
git push

3.5.2 Build and Verify

3.6 Version Rollback

3.6.1 Running the Build

3.6.2 Verification

4. Case Study: Log Collection

Collect logs from pods into ELK, with custom fields, data format conversion and sorting, plus log-based pod self-healing and auto-scaling.

4.1 Starting Redis

<root@ubuntu181 nginx>#kubectl -n test -o wide get pod
NAME                                   READY   STATUS    RESTARTS   AGE     IP             NODE            NOMINATED NODE   READINESS GATES
deploy-devops-redis-585bc47444-69sp7   1/1     Running   0          5m48s   172.20.2.100   192.168.7.104   <none>           <none>
<root@ubuntu181 nginx>#kubectl -n test -o wide get service
NAME               TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE     SELECTOR
srv-devops-redis   NodePort   10.20.67.240   <none>        6379:36379/TCP   5m53s   app=devops-redis

4.2 Starting Elasticsearch and Kibana

ES:192.168.7.103:9200
kibana:192.168.7.107:5601

4.3 Tomcat Log Collection

Build the image

<root@ubuntu181 tomcat-app1>#tree
.
├── app1.tar.gz
├── build-command.sh
├── catalina.sh
├── Dockerfile
├── filebeat.yml
├── myapp
│   └── index.html
├── run_tomcat.sh
└── server.xml

<root@ubuntu181 tomcat-app1>#cat filebeat.yml 
filebeat.inputs:
- input_type: log
  paths:
    - /apps/tomcat/logs/catalina.out
  fields:
    type: tomcat-catalina

output.redis:
  hosts: ["192.168.7.104:36379"]
  key: "test-nginx-app1"
  db: 1
  timeout: 5
  password: "123456"

<root@ubuntu181 tomcat-app1>#cat run_tomcat.sh 
#!/bin/bash
#echo "nameserver 223.6.6.6" > /etc/resolv.conf
#echo "192.168.7.248 k8s-vip.example.com" >> /etc/hosts

/usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat &
su - tomcat -c "/apps/tomcat/bin/catalina.sh start"
tail -f /etc/hosts

<root@ubuntu181 tomcat-app1>#bash build-command.sh v666

Run Tomcat

<root@ubuntu181 tomcat-app1>#cat tomcat-app666.yaml
        image: harbor.linux.net/test/tomcat-app1:v666
<root@ubuntu181 tomcat-app1>#kubectl apply -f tomcat-app666.yaml

Verify Redis
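The filebeat output can be checked directly against the Redis NodePort (host, password, db, and key as configured in filebeat.yml above):

```
# Number of queued log entries in the list:
redis-cli -h 192.168.7.104 -p 36379 -a 123456 -n 1 LLEN test-nginx-app1
# Peek at the first entry:
redis-cli -h 192.168.7.104 -p 36379 -a 123456 -n 1 LRANGE test-nginx-app1 0 0
```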

Run Logstash

<root@ubuntu186 conf.d>#cat /etc/logstash/conf.d/redis-to-els.conf 
input {
  redis {
    data_type => "list"
    key => "test-nginx-app1"
    host => "192.168.7.104"
    port => "36379"
    db => "1"
    password => "123456"
  }
}

output {
  elasticsearch {
    hosts => ["192.168.7.103:9200"]
    index => "tomcat-catalina-%{+YYYY.MM.dd}"
  }
}

Verify in Kibana
