k8s Basics (4) – k8s Practical Cases, Part 1

1. K8S High Availability

Build a highly available k8s cluster environment based on HAProxy + Keepalived, perform K8S version upgrades, and cover calico and flannel network communication, kube-dns and CoreDNS, and the Dashboard.

1.1 Highly Available K8S Base Environment

2. Dynamic/Static Separation Web Site

All of the following services are required to run inside the K8S environment. This part covers running common mainstream services and architectures in k8s: a dynamic/static separation architecture based on Nginx + Tomcat, a Zookeeper cluster and a Redis service backed by PVC, a MySQL master/slave architecture based on PVC + StatefulSet, running Java applications, a WordPress web site built on Nginx + PHP + MySQL in K8S, and running microservices on Zookeeper in k8s, as well as CI/CD for K8S, log collection, analysis and display, and pod monitoring and alerting with Prometheus + Grafana.

2.1 Nginx + Tomcat Dynamic/Static Separation Web Site

See k8s Basics (2) – web running example.

3. PV and PVC

By default, files on a container's disk are not persistent, which poses two problems for applications running in containers: first, when a container crashes and kubelet restarts it, the files are lost; second, containers running in the same Pod often need to share files. Kubernetes Volumes solve both problems.

A PersistentVolume (PV) is a piece of network storage in the cluster that has been provisioned by an administrator; just as a node is a cluster resource, a PV is a cluster resource too. PVs are volume plugins, like Volumes, but have a lifecycle independent of any individual pod that uses them. The API object captures the details of the storage implementation, such as NFS, iSCSI, or a cloud-provider-specific storage system. A PV is a description of storage added by an administrator and is a global resource, i.e. it does not belong to any namespace; it defines the storage type, size, access modes, and so on. Its lifecycle is independent of Pods: destroying a Pod that uses it has no effect on the PV.

A PersistentVolumeClaim (PVC) is a user's request for storage. It is analogous to a pod: pods consume node resources, and PVCs consume storage resources. Just as a pod can request specific levels of resources (CPU and memory), a PVC can request a specific size and access mode. A PVC is a namespaced resource.

Kubernetes has supported PersistentVolume and PersistentVolumeClaim since version 1.0.

A PV is an abstraction of the underlying network storage: it defines the network storage as a storage resource, so that one pool of storage can be split into multiple pieces and handed to different applications.

A PVC is a request for, and consumer of, PV resources, just as a Pod consumes node resources: the pod writes data through the PVC to the PV, and the PV in turn writes it to the backend storage.
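
To see the Pod -> PVC -> PV chain on a running cluster, a few read-only kubectl commands are enough (a minimal sketch; the claim name and namespace are placeholders):

kubectl get pvc -n test                  # shows which PV each claim is bound to
kubectl get pv                           # capacity, access modes, reclaim policy and the bound claim
kubectl describe pvc <pvc-name> -n test  # events, plus the pods currently mounting the claim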

3.1 PersistentVolume Parameters

<root@ubuntu182 ~>#kubectl explain PersistentVolume
KIND:     PersistentVolume
VERSION:  v1

DESCRIPTION:
     PersistentVolume (PV) is a storage resource provisioned by an
     administrator. It is analogous to a node. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes


capacity # Size of the PV, kubectl explain PersistentVolume.spec.capacity

accessModes # Access modes, kubectl explain PersistentVolume.spec.accessModes
  ReadWriteOnce – the PV can be mounted read-write by a single node, RWO
  ReadOnlyMany – the PV can be mounted by multiple nodes, but read-only, ROX
  ReadWriteMany – the PV can be mounted read-write by multiple nodes, RWX

persistentVolumeReclaimPolicy # Reclaim policy, i.e. what happens to an already-provisioned volume when it is released, kubectl explain PersistentVolume.spec.persistentVolumeReclaimPolicy
  Retain – after deletion the data on the PV is kept as-is and must eventually be removed manually by an administrator
  Recycle – reclaim the space, i.e. delete all data on the volume (including directories and hidden files); currently only NFS and hostPath support this
  Delete – delete the storage volume automatically

volumeMode # Volume mode, kubectl explain PersistentVolume.spec.volumeMode; defines whether the volume is consumed as a raw block device or through a filesystem, the default being filesystem

mountOptions # Additional list of mount options for finer-grained control, e.g.:
  ro,soft
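
Putting the parameters above together, a sketch of an NFS-backed PV (the server address and export path are placeholders, and the read-only mount options are only there to illustrate the field) could look like this:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
  mountOptions:
    - ro
    - soft
  nfs:
    server: 192.168.7.107
    path: /data/k8sdata/example
EOF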

The official documentation provides a table of the access modes supported by PVs created on each storage backend.

3.2 PersistentVolumeClaim Creation Parameters

<root@ubuntu182 ~>#kubectl explain PersistentVolumeClaim
KIND:     PersistentVolumeClaim
VERSION:  v1

DESCRIPTION:
     PersistentVolumeClaim is a user's request for and claim to a persistent
     volume

accessModes # PVC access modes, kubectl explain PersistentVolumeClaim.spec.accessModes
  ReadWriteOnce – the PV can be mounted read-write by a single node, RWO
  ReadOnlyMany – the PV can be mounted by multiple nodes, but read-only, ROX
  ReadWriteMany – the PV can be mounted read-write by multiple nodes, RWX

resources: # Size of the storage requested by the PVC

selector: # Label selector used to choose the PV to bind
  matchLabels # match PVs by label key/value
  matchExpressions # match PVs by label expressions (operators such as In, NotIn, Exists)

volumeName # Name of the PV to bind to

volumeMode # Volume mode
  Defines whether the PVC consumes a raw block device or a filesystem; the default is filesystem.
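
As a sketch of these PVC fields (everything here is illustrative, and it assumes a matching PV exists that carries the release: stable label), binding by selector rather than by volumeName looks like this:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
  namespace: test
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  selector:
    matchLabels:
      release: stable      # only PVs carrying this label are candidates
  resources:
    requests:
      storage: 5Gi
EOF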

4. Practical Case: Zookeeper Cluster

Implement a zookeeper cluster with PV and PVC as the backend storage.

4.1 Download the JDK Image

<root@ubuntu181 ~>#docker pull elevy/slim_java:8 # pull the jdk image
<root@ubuntu181 ~>#docker run -it --rm elevy/slim_java:8 sh # check the jdk version
/ # java -version
java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)

<root@ubuntu181 ~>#docker tag elevy/slim_java:8 harbor.linux.net/baseimages/slim_java1.8.0_144
<root@ubuntu181 ~>#docker push harbor.linux.net/baseimages/slim_java1.8.0_144

4.2 Prepare the Zookeeper Image

Official package download URL: https://archive.apache.org/dist/zookeeper/

4.2.1 Build the Image

This build uses an open-source project from GitHub.

cd /data/k8s-data/dockerfile/web/test/zookeeper
chmod a+x *.sh
chmod a+x bin/*.sh

<root@ubuntu181 zookeeper>#tree
.
├── bin
│   └── zkReady.sh
├── build-command.sh
├── conf
│   ├── log4j.properties
│   └── zoo.cfg
├── Dockerfile
├── entrypoint.sh

# View the Dockerfile
<root@ubuntu181 zookeeper>#cat Dockerfile
FROM harbor.linux.net/baseimages/slim_java1.8.0_144

ENV ZK_VERSION 3.4.14

RUN apk add --no-cache --virtual .build-deps \
      ca-certificates   \
      gnupg             \
      tar               \
      wget &&           \
    #
    # Install dependencies
    apk add --no-cache  \
      bash &&           \
    #
    # Download Zookeeper
    wget -nv -O /tmp/zk.tgz "https://archive.apache.org/dist/zookeeper/zookeeper-${ZK_VERSION}/zookeeper-${ZK_VERSION}.tar.gz" && \
    wget -nv -O /tmp/zk.tgz.asc "https://archive.apache.org/dist/zookeeper/zookeeper-${ZK_VERSION}/zookeeper-${ZK_VERSION}.tar.gz.asc" && \
    wget -nv -O /tmp/KEYS https://dist.apache.org/repos/dist/release/zookeeper/KEYS && \
    #
    # Verify the signature
    export GNUPGHOME="$(mktemp -d)" && \
    gpg -q --batch --import /tmp/KEYS && \
    gpg -q --batch --no-auto-key-retrieve --verify /tmp/zk.tgz.asc /tmp/zk.tgz && \
    #
    # Set up directories
    #
    mkdir -p /zookeeper/data /zookeeper/wal /zookeeper/log && \
    #
    # Install
    tar -x -C /zookeeper --strip-components=1 --no-same-owner -f /tmp/zk.tgz && \
    #
    # Slim down
    cd /zookeeper && \
    cp dist-maven/zookeeper-${ZK_VERSION}.jar . && \
    rm -rf \
      *.txt \
      *.xml \
      bin/README.txt \
      bin/*.cmd \
      conf/* \
      contrib \
      dist-maven \
      docs \
      lib/*.txt \
      lib/cobertura \
      lib/jdiff \
      recipes \
      src \
      zookeeper-*.asc \
      zookeeper-*.md5 \
      zookeeper-*.sha1 && \
    #
    # Clean up
    apk del .build-deps && \
    rm -rf /tmp/* "$GNUPGHOME"

COPY conf /zookeeper/conf/
COPY bin/zkReady.sh /zookeeper/bin/
COPY entrypoint.sh /

ENV PATH=/zookeeper/bin:${PATH} \
    ZOO_LOG_DIR=/zookeeper/log \
    ZOO_LOG4J_PROP="INFO, CONSOLE, ROLLINGFILE" \
    JMXPORT=9010

ENTRYPOINT [ "/entrypoint.sh" ]

CMD [ "zkServer.sh", "start-foreground" ]

EXPOSE 2181 2888 3888 9010


# View the build script
<root@ubuntu181 zookeeper>#cat build-command.sh 
#!/bin/bash
TAG=$1
docker build -t harbor.linux.net/test/zookeeper:${TAG} .
sleep 1
docker push  harbor.linux.net/test/zookeeper:${TAG}

# build and push the image
bash build-command.sh v3.4.14

4.2.2 Test the Zookeeper Image

<root@ubuntu181 ~>#docker run -it --rm -p 2181:2181 harbor.linux.net/test/zookeeper:v3.4.14
2021-04-11 07:32:37,597 [myid:] - INFO  [main:ZooKeeperServer@836] - tickTime set to 2000
2021-04-11 07:32:37,598 [myid:] - INFO  [main:ZooKeeperServer@845] - minSessionTimeout set to -1
2021-04-11 07:32:37,598 [myid:] - INFO  [main:ZooKeeperServer@854] - maxSessionTimeout set to -1
2021-04-11 07:32:37,614 [myid:] - INFO  [main:ServerCnxnFactory@117] - Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory
2021-04-11 07:32:37,651 [myid:] - INFO  [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2181

Test a client connection to zookeeper.
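
For example, from another terminal on the docker host (a sketch; the container ID is whatever docker assigned, zkCli.sh is on the image's PATH, and the nc check assumes four-letter-word commands are enabled and nc is installed):

docker ps | grep zookeeper                                         # find the running container
docker exec -it <container-id> zkCli.sh -server 127.0.0.1:2181     # interactive zookeeper client
echo stat | nc 127.0.0.1 2181                                      # status check via the published port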

4.3 Running the Zookeeper Service in k8s

Run the zookeeper cluster service in the k8s environment via yaml files.

4.3.1 Create the PVs

# yaml directory structure
<root@ubuntu181 zookeeper>#tree
.
├── pv
│   ├── zookeeper-persistentvolumeclaim.yaml
│   └── zookeeper-persistentvolume.yaml
└── zookeeper.yaml

# create the PVs
<root@ubuntu181 zookeeper>#cat pv/zookeeper-persistentvolume.yaml 
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-1
  namespace: test
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce 
  nfs:
    server: 192.168.7.107
    path: /data/k8sdata/test/zookeeper-datadir-1 

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-2
  namespace: test
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.7.107
    path: /data/k8sdata/test/zookeeper-datadir-2 

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-3
  namespace: test
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.7.107  
    path: /data/k8sdata/test/zookeeper-datadir-3 

<root@ubuntu181 zookeeper>#kubectl apply -f  pv/zookeeper-persistentvolume.yaml 
persistentvolume/zookeeper-datadir-pv-1 created
persistentvolume/zookeeper-datadir-pv-2 created
persistentvolume/zookeeper-datadir-pv-3 created

<root@ubuntu181 zookeeper>#kubectl get pv
NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
zookeeper-datadir-pv-1   20Gi       RWO            Retain           Available                                   17s
zookeeper-datadir-pv-2   20Gi       RWO            Retain           Available                                   17s
zookeeper-datadir-pv-3   20Gi       RWO            Retain           Available                                   17s

Verify the volumes in the dashboard.

4.3.2 Create the PVCs

<root@ubuntu181 zookeeper>#cat pv/zookeeper-persistentvolumeclaim.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-1
  namespace: test
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-1
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-2
  namespace: test
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-2
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-3
  namespace: test
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-3
  resources:
    requests:
      storage: 10Gi

<root@ubuntu181 zookeeper>#kubectl apply -f  pv/zookeeper-persistentvolumeclaim.yaml
persistentvolumeclaim/zookeeper-datadir-pvc-1 created
persistentvolumeclaim/zookeeper-datadir-pvc-2 created
persistentvolumeclaim/zookeeper-datadir-pvc-3 created

<root@ubuntu181 zookeeper>#kubectl get pvc -n test
NAME                      STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
zookeeper-datadir-pvc-1   Bound    zookeeper-datadir-pv-1   20Gi       RWO                           21s
zookeeper-datadir-pvc-2   Bound    zookeeper-datadir-pv-2   20Gi       RWO                           20s
zookeeper-datadir-pvc-3   Bound    zookeeper-datadir-pv-3   20Gi       RWO                           20s

Verify the volumes in the dashboard.

4.3.3 Run the Zookeeper Cluster

<root@ubuntu181 zookeeper>#cat zookeeper.yaml 
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  namespace: test
spec:
  ports:
    - name: client
      port: 2181
  selector:
    app: zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper1
  namespace: test
spec:
  type: NodePort
  ports:
    - name: client
      port: 2181
      nodePort: 42181
    - name: followers
      port: 2888
    - name: election
      port: 3888
  selector:
    app: zookeeper
    server-id: "1"
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper2
  namespace: test
spec:
  type: NodePort
  ports:
    - name: client
      port: 2181
      nodePort: 42182
    - name: followers
      port: 2888
    - name: election
      port: 3888
  selector:
    app: zookeeper
    server-id: "2"
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper3
  namespace: test
spec:
  type: NodePort
  ports:
    - name: client
      port: 2181
      nodePort: 42183
    - name: followers
      port: 2888
    - name: election
      port: 3888
  selector:
    app: zookeeper
    server-id: "3"
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: zookeeper1
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "1"
    spec:
      volumes:
        - name: data
          emptyDir: {}
        - name: wal
          emptyDir:
            medium: Memory
      containers:
        - name: server
          image: harbor.linux.net/test/zookeeper:v3.4.14
          imagePullPolicy: IfNotPresent
          env:
            - name: MYID
              value: "1"
            - name: SERVERS
              value: "zookeeper1,zookeeper2,zookeeper3"
            - name: JVMFLAGS
              value: "-Xmx1G"
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
          volumeMounts:
          - mountPath: "/zookeeper/data"
            name: zookeeper-datadir-pvc-1 
      volumes:
        - name: zookeeper-datadir-pvc-1 
          persistentVolumeClaim:
            claimName: zookeeper-datadir-pvc-1
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: zookeeper2
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "2"
    spec:
      volumes:
        - name: data
          emptyDir: {}
        - name: wal
          emptyDir:
            medium: Memory
      containers:
        - name: server
          image: harbor.linux.net/test/zookeeper:v3.4.14 
          imagePullPolicy: IfNotPresent
          env:
            - name: MYID
              value: "2"
            - name: SERVERS
              value: "zookeeper1,zookeeper2,zookeeper3"
            - name: JVMFLAGS
              value: "-Xmx1G"
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
          volumeMounts:
          - mountPath: "/zookeeper/data"
            name: zookeeper-datadir-pvc-2 
      volumes:
        - name: zookeeper-datadir-pvc-2
          persistentVolumeClaim:
            claimName: zookeeper-datadir-pvc-2
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: zookeeper3
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "3"
    spec:
      volumes:
        - name: data
          emptyDir: {}
        - name: wal
          emptyDir:
            medium: Memory
      containers:
        - name: server
          image: harbor.linux.net/test/zookeeper:v3.4.14 
          imagePullPolicy: IfNotPresent
          env:
            - name: MYID
              value: "3"
            - name: SERVERS
              value: "zookeeper1,zookeeper2,zookeeper3"
            - name: JVMFLAGS
              value: "-Xmx1G"
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
          volumeMounts:
          - mountPath: "/zookeeper/data"
            name: zookeeper-datadir-pvc-3
      volumes:
        - name: zookeeper-datadir-pvc-3
          persistentVolumeClaim:
           claimName: zookeeper-datadir-pvc-3

# before starting, make sure the coredns memory limit is sufficient
<root@ubuntu181 zookeeper>#kubectl apply  -f zookeeper.yaml 
service/zookeeper created
service/zookeeper1 created
service/zookeeper2 created
service/zookeeper3 created
deployment.apps/zookeeper1 created
deployment.apps/zookeeper2 created
deployment.apps/zookeeper3 created
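
Before testing, it is worth confirming that all three pods reach the Running state and that the NodePort services exist (pod names will differ per deployment):

kubectl get pods -n test -o wide | grep zookeeper
kubectl get svc -n test | grep zookeeper
kubectl get deployment -n test | grep zookeeper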

4.4 Verification

<root@ubuntu181 zookeeper>#kubectl exec -it zookeeper1-68596f5745-rs4j2 -n test sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # /zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
Mode: follower

<root@ubuntu181 zookeeper>#kubectl exec -it zookeeper2-59f5fd5bb8-4lthf -n test sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # /zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
Mode: leader

Client connection test.
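
From outside the cluster the client port is reachable through the NodePort services defined above, e.g. 42181 for zookeeper1 (a sketch; 192.168.7.104 stands in for any node IP, zkCli.sh requires a zookeeper installation on the client host, and the nc check assumes four-letter-word commands are enabled):

zkCli.sh -server 192.168.7.104:42181          # connect with a locally installed zookeeper client
echo stat | nc 192.168.7.104 42181            # or query the server status directly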

5. Practical Case: Dubbo

Run the dubbo provider and consumer examples.
Official website: http://dubbo.apache.org/

5.1 Run the Provider

5.1.1 Prepare the Image

Image build file list:

cd /data/k8s-data/dockerfile/web/test/dubbo/provider

<root@ubuntu181 provider>#tree -L 2
.
├── build-command.sh
├── Dockerfile
├── dubbo-demo-provider-2.1.5
│   ├── bin
│   ├── conf
│   └── lib
├── dubbo-demo-provider-2.1.5-assembly.tar.gz
└── run_java.sh

Dockerfile contents:

<root@ubuntu181 provider>#cat Dockerfile
#Dubbo provider
FROM harbor.linux.net/pub-images/jdk-base:v8.212

MAINTAINER kong

RUN yum install file nc -y
RUN mkdir -p /apps/dubbo/provider && useradd tomcat
ADD dubbo-demo-provider-2.1.5/  /apps/dubbo/provider
ADD run_java.sh /apps/dubbo/provider/bin 
RUN chown tomcat.tomcat /apps -R
RUN chmod a+x /apps/dubbo/provider/bin/*.sh

CMD ["/apps/dubbo/provider/bin/run_java.sh"]

Modify the provider configuration.

Reference: https://dubbo.apache.org/zh/docs/v2.7/user/references/registry/zookeeper/#zookeeper-%E5%AE%89%E8%A3%85

<root@ubuntu181 provider>#vim dubbo-demo-provider-2.1.5/conf/dubbo.properties 
dubbo.registry.address=zookeeper://zookeeper1.test.svc.linux.local:2181?backup=zookeeper2.test.svc.linux.local:2181,zookeeper3.test.svc.linux.local:2181
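
This registry address depends on CoreDNS resolving the zookeeper service names under the cluster domain linux.local, which can be sanity-checked with a throwaway busybox pod (a sketch; it assumes the busybox image can be pulled):

kubectl run dns-test -it --rm --restart=Never --image=busybox:1.28 -n test -- nslookup zookeeper1.test.svc.linux.local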

build-command script:

<root@ubuntu181 provider>#cat build-command.sh 
#!/bin/bash
docker build -t harbor.linux.net/test/dubbo-demo-provider:v1  .
sleep 3
docker push harbor.linux.net/test/dubbo-demo-provider:v1

Run the build:

<root@ubuntu181 provider>#chmod 755 ./*.sh
<root@ubuntu181 provider>#chmod  a+x dubbo-demo-provider-2.1.5/bin/*
<root@ubuntu181 provider>#bash build-command.sh # build and push

5.1.2 Run the Provider Service

<root@ubuntu181 provider>#cd /data/k8s-data/yaml/testapp/dubbo/provider
<root@ubuntu181 provider>#cat provider.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: test-provider
  name: test-provider-deployment
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-provider
  template:
    metadata:
      labels:
        app: test-provider
    spec:
      containers:
      - name: test-provider-container
        image: harbor.linux.net/test/dubbo-demo-provider:v1 
        #command: ["/apps/tomcat/bin/run_tomcat.sh"]
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 20880
          protocol: TCP
          name: http

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: test-provider
  name: test-provider-spec
  namespace: test
spec:
  type: NodePort
  ports:
  - name: http
    port: 20880
    protocol: TCP
    targetPort: 20880
    #nodePort: 30001
  selector:
    app: test-provider

# run the provider service
<root@ubuntu181 provider>#kubectl apply -f provider.yaml
deployment.apps/test-provider-deployment created
service/test-provider-spec created

5.1.3 Verify Provider Registration in Zookeeper
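
The registration can also be checked from the command line by exec'ing into one of the zookeeper pods (a sketch; the pod name comes from 4.4, and the service path assumes the stock dubbo demo, which registers com.alibaba.dubbo.demo.DemoService):

kubectl exec -it zookeeper1-68596f5745-rs4j2 -n test -- zkCli.sh -server 127.0.0.1:2181
# inside the zookeeper shell:
ls /dubbo
ls /dubbo/com.alibaba.dubbo.demo.DemoService/providers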

5.2 Run the Consumer

5.2.1 Prepare the Image

Image build file list:

<root@ubuntu181 consumer>#pwd
/data/k8s-data/dockerfile/web/test/dubbo/consumer
<root@ubuntu181 consumer>#tree -L 2
.
├── build-command.sh
├── Dockerfile
├── dubbo-demo-consumer-2.1.5
│   ├── bin
│   ├── conf
│   └── lib
├── dubbo-demo-consumer-2.1.5-assembly.tar.gz
└── run_java.sh

Dockerfile contents:

<root@ubuntu181 consumer>#cat Dockerfile 
#Dubbo consumer
FROM harbor.linux.net/pub-images/jdk-base:v8.212 

MAINTAINER kong

RUN yum install file -y
RUN mkdir -p /apps/dubbo/consumer && useradd tomcat
ADD dubbo-demo-consumer-2.1.5  /apps/dubbo/consumer
ADD run_java.sh /apps/dubbo/consumer/bin 
RUN chown tomcat.tomcat /apps -R
RUN chmod a+x /apps/dubbo/consumer/bin/*.sh

CMD ["/apps/dubbo/consumer/bin/run_java.sh"]

Modify the consumer configuration.

<root@ubuntu181 consumer>#vim dubbo-demo-consumer-2.1.5/conf/dubbo.properties
dubbo.registry.address=zookeeper://zookeeper1.test.svc.linux.local:2181

build-command script:

<root@ubuntu181 consumer>#cat build-command.sh
#!/bin/bash
docker build -t harbor.linux.net/test/dubbo-demo-consumer:v1  .
sleep 3
docker push harbor.linux.net/test/dubbo-demo-consumer:v1

Run the build:

<root@ubuntu181 consumer>#chmod -R a+x  *.sh dubbo-demo-consumer-2.1.5/bin/*
<root@ubuntu181 consumer>#bash build-command.sh

5.2.2 Run the Consumer Service

<root@ubuntu181 consumer>#pwd
/data/k8s-data/yaml/testapp/dubbo/consumer
<root@ubuntu181 consumer>#cat consumer.yaml 
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: test-consumer
  name: test-consumer-deployment
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-consumer
  template:
    metadata:
      labels:
        app: test-consumer
    spec:
      containers:
      - name: test-consumer-container
        image: harbor.linux.net/test/dubbo-demo-consumer:v1
        #command: ["/apps/tomcat/bin/run_tomcat.sh"]
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
          name: http

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: test-consumer
  name: test-consumer-server
  namespace: test
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    #nodePort: 30001
  selector:
    app: test-consumer

<root@ubuntu181 consumer>#kubectl apply -f consumer.yaml
deployment.apps/test-consumer-deployment created
service/test-consumer-server created

5.2.3 Verify the Consumer in Zookeeper
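
The consumer side can be verified the same way, by listing the consumers znode under the service path from 5.1.3 and by watching the consumer pod logs, which in the stock demo print the responses received from the provider (a sketch; names are taken from the manifests above):

kubectl logs -f deployment/test-consumer-deployment -n test
kubectl exec -it zookeeper1-68596f5745-rs4j2 -n test -- zkCli.sh -server 127.0.0.1:2181
# inside the zookeeper shell: ls /dubbo, then list the .../consumers child of the service path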

5.3 Run Dubbo Admin

5.3.1 Prepare the Image

Image build file list:

<root@ubuntu181 dubboadmin>#tree -L 2
.
├── build-command.sh
├── catalina.sh
├── Dockerfile
├── dubboadmin
│   ├── crossdomain.xml
│   ├── css
│   ├── favicon.ico
│   ├── images
│   ├── js
│   ├── META-INF
│   ├── SpryAssets
│   └── WEB-INF
├── dubboadmin.war
├── dubboadmin.war.bak
├── logging.properties
├── run_tomcat.sh
└── server.xml

Dockerfile contents:

<root@ubuntu181 dubboadmin>#cat Dockerfile 
#Dubbo dubboadmin
FROM harbor.linux.net/pub-images/tomcat-base:v8.5.43 

MAINTAINER kong

RUN yum install unzip -y  
ADD server.xml /apps/tomcat/conf/server.xml
ADD logging.properties /apps/tomcat/conf/logging.properties
ADD catalina.sh /apps/tomcat/bin/catalina.sh
ADD run_tomcat.sh /apps/tomcat/bin/run_tomcat.sh
ADD dubboadmin.war  /data/tomcat/webapps/dubboadmin.war
RUN cd /data/tomcat/webapps && unzip dubboadmin.war && rm -rf dubboadmin.war && chown -R tomcat.tomcat /data /apps

EXPOSE 8080 8443

CMD ["/apps/tomcat/bin/run_tomcat.sh"]

Modify the dubboadmin configuration.

<root@ubuntu181 dubboadmin>#cat dubboadmin/WEB-INF/dubbo.properties
dubbo.registry.address=zookeeper://zookeeper1.test.svc.linux.local:2181
dubbo.admin.root.password=root
dubbo.admin.guest.password=guest

build-command script:

<root@ubuntu181 dubboadmin>#cat build-command.sh 
#!/bin/bash
TAG=$1
docker build -t harbor.linux.net/test/dubboadmin:${TAG}  .
sleep 3
docker push  harbor.linux.net/test/dubboadmin:${TAG}

Run the build:

<root@ubuntu181 dubboadmin>#chmod -R a+x  *.sh
<root@ubuntu181 dubboadmin>#zip -r dubboadmin.war dubboadmin
<root@ubuntu181 dubboadmin>#bash build-command.sh v1

5.3.2 Run the Dubbo Admin Service

<root@ubuntu181 dubboadmin>#pwd
/data/k8s-data/yaml/testapp/dubbo/dubboadmin

<root@ubuntu181 dubboadmin>#cat dubboadmin.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: test-dubboadmin
  name: test-dubboadmin-deployment
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-dubboadmin
  template:
    metadata:
      labels:
        app: test-dubboadmin
    spec:
      containers:
      - name: test-dubboadmin-container
        image: harbor.linux.net/test/dubboadmin:v1 
        #command: ["/apps/tomcat/bin/run_tomcat.sh"]
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: test-dubboadmin
  name: test-dubboadmin-service
  namespace: test
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30080
  selector:
    app: test-dubboadmin

<root@ubuntu181 dubboadmin>#kubectl apply -f dubboadmin.yaml 
deployment.apps/test-dubboadmin-deployment created
service/test-dubboadmin-service created

5.3.3 Verify the Dubbo Admin Service
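
The console should then be reachable on any node at NodePort 30080 under the /dubboadmin context, using the root/root account from dubbo.properties (a sketch; 192.168.7.104 stands in for any node IP, and the curl form assumes dubboadmin's HTTP basic auth):

curl -u root:root http://192.168.7.104:30080/dubboadmin/
# or open http://<node-ip>:30080/dubboadmin/ in a browser and log in as root / root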

6. Practical Case: Redis Service

Run the redis service in the k8s environment.

6.1 Build the Redis Image

6.1.1 Image Build File List

<root@ubuntu181 redis>#pwd
/data/k8s-data/dockerfile/web/test/redis
<root@ubuntu181 redis>#tree 
.
├── build-command.sh
├── Dockerfile
├── redis-4.0.14.tar.gz
├── redis.conf
└── run_redis.sh

6.1.2 Dockerfile Contents

<root@ubuntu181 redis>#cat Dockerfile 
#Redis Image
FROM harbor.linux.net/baseimages/centos-base:7.7.1908

MAINTAINER kong

ADD redis-4.0.14.tar.gz /usr/local/src
RUN ln -sv /usr/local/src/redis-4.0.14 /usr/local/redis && cd /usr/local/redis && make && cp src/redis-cli /usr/sbin/ && cp src/redis-server  /usr/sbin/ && mkdir -pv /data/redis-data
ADD redis.conf /usr/local/redis/redis.conf 
ADD run_redis.sh /usr/local/redis/run_redis.sh

EXPOSE 6379

CMD ["/usr/local/redis/run_redis.sh"]

6.1.3 build-command Script

<root@ubuntu181 redis>#cat build-command.sh 
#!/bin/bash
TAG=$1
docker build -t harbor.linux.net/test/redis:${TAG} .
sleep 3
docker push  harbor.linux.net/test/redis:${TAG}

6.1.4 Run the Build

<root@ubuntu181 redis>#chmod 755 *.sh
<root@ubuntu181 redis>#bash build-command.sh v4.0.14

6.1.5 Test the Redis Image

<root@ubuntu181 dashboard>#docker run -it --rm -p6379:6379 harbor.linux.net/test/redis:v4.0.14
8:C 12 Apr 01:11:28.489 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
8:C 12 Apr 01:11:28.490 # Redis version=4.0.14, bits=64, commit=00000000, modified=0, pid=8, just started
8:C 12 Apr 01:11:28.490 # Configuration loaded
127.0.0.1    localhost
::1    localhost ip6-localhost ip6-loopback
fe00::0    ip6-localnet
ff00::0    ip6-mcastprefix
ff02::1    ip6-allnodes
ff02::2    ip6-allrouters
172.17.0.2    210c2367e634
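
From another terminal on the docker host, a quick connectivity check against the published port (a sketch; it assumes redis-cli is installed on the host and that redis.conf sets requirepass 123456, the password used later in 6.2.3):

redis-cli -h 127.0.0.1 -p 6379 -a 123456 ping
# expected reply: PONG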

6.2 Run the Redis Service

Run the Redis service in k8s with data persisted on a PV/PVC.

6.2.1 Create the PV and PVC

<root@ubuntu181 pv>#pwd
/data/k8s-data/yaml/testapp/redis/pv

<root@ubuntu181 pv>#cat *
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-datadir-pvc-1 
  namespace: test
spec:
  volumeName: redis-datadir-pv-1 
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-datadir-pv-1
  namespace: test
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /data/k8sdata/test/redis-datadir-1 
    server: 192.168.7.107

<root@ubuntu181 pv>#kubectl apply -f .
persistentvolume/redis-datadir-pv-1 created
persistentvolumeclaim/redis-datadir-pvc-1 created

<root@ubuntu181 pv>#kubectl get pvc -n test # verify
NAME                  STATUS   VOLUME               CAPACITY   ACCESS MODES   STORAGECLASS   AGE
redis-datadir-pvc-1   Bound    redis-datadir-pv-1   10Gi       RWO                           36s

6.2.2 Run the Redis Service

<root@ubuntu181 redis>#pwd
/data/k8s-data/yaml/testapp/redis

<root@ubuntu181 redis>#cat redis.yaml 
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: devops-redis 
  name: deploy-devops-redis
  namespace: test
spec:
  replicas: 1 
  selector:
    matchLabels:
      app: devops-redis
  template:
    metadata:
      labels:
        app: devops-redis
    spec:
      containers:
        - name: redis-container
          image: harbor.linux.net/test/redis:v4.0.14 
          imagePullPolicy: Always
          volumeMounts:
          - mountPath: "/data/redis-data/"
            name: redis-datadir
      volumes:
        - name: redis-datadir
          persistentVolumeClaim:
            claimName: redis-datadir-pvc-1 

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: devops-redis
  name: srv-devops-redis
  namespace: test
spec:
  type: NodePort
  ports:
  - name: http
    port: 6379 
    targetPort: 6379
    nodePort: 36379 
  selector:
    app: devops-redis
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800

<root@ubuntu181 redis>#kubectl apply -f redis.yaml 
deployment.apps/deploy-devops-redis created
service/srv-devops-redis created

6.2.3 Access Redis from an External Client

<root@k8s-harbor test>#redis-cli -h 192.168.7.104 -p 36379 -a 123456
192.168.7.104:36379> set key1 value1
OK
192.168.7.104:36379> set key2 value2
OK

6.2.4 Verify the Data on the PVC Volume

<root@k8s-harbor redis-datadir-1>#ll /data/k8sdata/test/redis-datadir-1
total 12
drwxr-xr-x 2 root root 4096 Apr 12 01:36 ./
drwxr-xr-x 6 root root 4096 Apr 12 01:35 ../
-rw-r--r-- 1 root root  124 Apr 12 01:36 dump.rdb
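
Since the data directory sits on the NFS-backed PV, the keys written in 6.2.3 should survive a pod rebuild; a quick check (a sketch using the label, NodePort and password from the manifests above):

kubectl delete pod -n test -l app=devops-redis           # delete the pod; the Deployment recreates it
kubectl get pods -n test -l app=devops-redis             # wait for the new pod to reach Running
redis-cli -h 192.168.7.104 -p 36379 -a 123456 get key1   # the value written earlier should still be returned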