k8s Basics (2) – Web Deployment Examples

1. Running Nginx

Run Nginx in k8s and expose the Nginx web page so it can be reached from outside the cluster.

1.1 Building the Nginx Images

Based on an upstream centos/ubuntu/alpine image, build the company's internal base image, then the Nginx base image and the Nginx business image on top of it.

1.1.1 Building the CentOS Base Image

Pull the base image and push it to Harbor

echo "192.168.7.107 harbor.linux.net" >> /etc/hosts
docker pull centos:7.7.1908
docker images
docker tag 9dd718864ce6 harbor.linux.net/baseimages/centos:7.7.1908
docker login harbor.linux.net
docker push harbor.linux.net/baseimages/centos:7.7.1908
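
If the push succeeds, the tag should be pullable from any docker node that can log in to Harbor; a quick, optional check:

docker pull harbor.linux.net/baseimages/centos:7.7.1908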

Image build file list

cd /opt/k8s-data/ # dockerfile and yaml directory
cd dockerfile/system/centos/ # system image directory
<root@ubuntu181 centos>#tree
.
├── build-command.sh
├── Dockerfile
└── filebeat-7.6.1-x86_64.rpm

0 directories, 3 files

Dockerfile contents

<root@ubuntu181 centos>#cat Dockerfile
#Custom CentOS base image
FROM harbor.linux.net/baseimages/centos:7.7.1908
MAINTAINER kong kongxuc@163.com

ADD filebeat-7.6.1-x86_64.rpm /tmp
RUN yum install -y /tmp/filebeat-7.6.1-x86_64.rpm vim wget tree  lrzsz gcc gcc-c++ automake pcre pcre-devel zlib zlib-devel openssl openssl-devel iproute net-tools iotop &&  rm -rf /etc/localtime /tmp/filebeat-7.6.1-x86_64.rpm && ln -snf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && useradd  nginx -u 2019 && useradd www -u 2020

build-command script

The script automates building the image and pushing it to Harbor.

<root@ubuntu181 centos># cat build-command.sh
#!/bin/bash
docker build -t  harbor.linux.net/baseimages/centos-base:7.7.1908 .

docker push harbor.linux.net/baseimages/centos-base:7.7.1908

Build the CentOS base image

After the build completes, the image is automatically pushed to the local Harbor server.

<root@ubuntu181 centos>#bash build-command.sh 
Sending build context to Docker daemon   24.7MB
Step 1/4 : from harbor.linux.net/baseimages/centos:7.7.1908
 ---> 08d05d1d5859
Step 2/4 : MAINTAINER kong kongxuc@163.com
 ---> Using cache
 ---> b896ea95b71f
Step 3/4 : ADD filebeat-7.6.1-x86_64.rpm /tmp
 ---> Using cache
 ---> e3056e3fc4ac
Step 4/4 : RUN yum install -y /tmp/filebeat-7.6.1-x86_64.rpm vim wget tree  lrzsz gcc gcc-c++ automake pcre pcre-devel zlib zlib-devel openssl openssl-devel iproute net-tools iotop &&  rm -rf /etc/localtime /tmp/filebeat-7.6.1-x86_64.rpm && ln -snf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && useradd  nginx -u 2019 && useradd www -u 2020
 ---> Using cache
 ---> 284371c7020c
Successfully built 284371c7020c
Successfully tagged harbor.linux.net/baseimages/centos-base:7.7.1908
The push refers to repository [harbor.linux.net/baseimages/centos-base]
ef8de1cf2474: Pushed
7d5d8a57fd88: Pushed
034f282942cd: Mounted from baseimages/centos 
7.7.1908: digest: sha256:e53ce3eb4da9ea5e7efd4aa8c958d4ea63a96995a701079b9d86d7d245b598f0 size: 954

# List the images
<root@ubuntu181 centos>#docker images
REPOSITORY                                    TAG             IMAGE ID       CREATED          SIZE
harbor.linux.net/baseimages/centos-base       7.7.1908        284371c7020c   5 minutes ago    607MB
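
The users and timezone baked in by the Dockerfile can be spot-checked from the freshly built image; an illustrative command (output not shown):

docker run --rm harbor.linux.net/baseimages/centos-base:7.7.1908 bash -c "id nginx; id www; date"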

1.1.2 Building the Nginx Base Image

Build a generic Nginx base image.

Image build file list

cd /data/k8s-data/dockerfile/web/pub-images/nginx-base
tree
.
├── build-command.sh
├── Dockerfile
└── nginx-1.14.2.tar.gz

Dockerfile contents

<root@ubuntu181 nginx-base>#cat Dockerfile 
#Nginx Base Image
FROM harbor.linux.net/baseimages/centos-base:7.7.1908 

MAINTAINER  kong

RUN yum install -y vim wget tree  lrzsz gcc gcc-c++ automake pcre pcre-devel zlib zlib-devel openssl openssl-devel iproute net-tools iotop
ADD nginx-1.14.2.tar.gz /usr/local/src/
RUN cd /usr/local/src/nginx-1.14.2 && ./configure  && make && make install && ln -sv  /usr/local/nginx/sbin/nginx /usr/sbin/nginx  &&rm -rf /usr/local/src/nginx-1.14.2.tar.gz

build-command script

<root@ubuntu181 nginx-base>#cat build-command.sh
#!/bin/bash
docker build -t harbor.linux.net/pub-images/nginx-base:v1.14.2  .
sleep 1
docker push  harbor.linux.net/pub-images/nginx-base:v1.14.2

Build the Nginx base image

<root@ubuntu181 nginx-base>#bash build-command.sh
Successfully built 9eaeb9781133
Successfully tagged harbor.linux.net/pub-images/nginx-base:v1.14.2
The push refers to repository [harbor.linux.net/pub-images/nginx-base]
c784a28424ef: Pushed
27a9f5b9fb5d: Pushed
b21be66a2629: Pushed
ef8de1cf2474: Mounted from baseimages/centos-base
7d5d8a57fd88: Mounted from baseimages/centos-base
034f282942cd: Mounted from baseimages/centos-base
v1.14.2: digest: sha256:15f1e4e93e53af8d2e74a2fa8fdfeb65c31893fe31a641e028abf1f421875630 size: 1588
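
A quick sanity check of the compiled binary in the new base image; illustrative:

docker run --rm harbor.linux.net/pub-images/nginx-base:v1.14.2 nginx -v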

1.1.3 Building the Nginx Business Image

Based on the Nginx base image, build N different Nginx business images, one per service:

Image build file list

cd /data/k8s-data/dockerfile/web/test/nginx
<root@ubuntu181 nginx>#tree
.
├── app1.tar.gz
├── build-command.sh
├── Dockerfile
├── index.html
├── nginx.conf
└── webapp
    └── index.html

Dockerfile contents

<root@ubuntu181 nginx>#cat Dockerfile
#Nginx 1.14.2
FROM harbor.linux.net/pub-images/nginx-base:v1.14.2

ADD nginx.conf /usr/local/nginx/conf/nginx.conf
ADD app1.tar.gz  /usr/local/nginx/html/webapp/
ADD index.html  /usr/local/nginx/html/index.html

#Static resource mount paths
RUN mkdir -p /usr/local/nginx/html/webapp/images /usr/local/nginx/html/webapp/static

EXPOSE 80 443

CMD ["nginx"]

build-command script

<root@ubuntu181 nginx>#cat build-command.sh 
#!/bin/bash
TAG=$1
docker build -t harbor.linux.net/test/nginx-web1:${TAG} .
echo "Image build finished, pushing to Harbor"
sleep 1
docker push harbor.linux.net/test/nginx-web1:${TAG}
echo "Image pushed to Harbor"

Other files

# Test page contents
<root@ubuntu181 nginx># cat webapp/index.html
Nginx webapp test page

# Nginx configuration file (key excerpts)
<root@ubuntu181 nginx>#cat nginx.conf
daemon off; # run Nginx in the foreground
    location / {
        root   html;
        index  index.html index.htm;
    }

    location /webapp {
        root   html;
        index  index.html index.htm;
    }

Build and test

# Build the Nginx business image
bash build-command.sh v1

# Verify the Nginx business image can start as a container
docker run -it --rm -p 8801:80 harbor.linux.net/test/nginx-web1:v1

<root@ubuntu181 ~>#docker exec -it 86fd9fda2d45 bash
[root@86fd9fda2d45 /]# ps -ef
UID         PID   PPID  C STIME TTY          TIME CMD
root          1      0  0 12:09 pts/0    00:00:00 nginx: master process nginx
nginx         6      1  0 12:09 pts/0    00:00:00 nginx: worker process
nginx         7      1  0 12:09 pts/0    00:00:00 nginx: worker process
root          8      0  1 12:09 pts/1    00:00:00 bash
root         24      8  0 12:09 pts/1    00:00:00 ps -ef

Verify the web page
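
With the container from the previous step still running, the pages can be fetched from the Docker host; illustrative commands (the second request should return the page shipped in app1.tar.gz):

curl http://127.0.0.1:8801/
curl http://127.0.0.1:8801/webapp/index.html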

1.2 YAML File Walkthrough

Prepare the YAML files in advance, together with the resources the pod needs at runtime, such as the namespace.

1.2.1 Create the namespace

Namespace YAML file

<root@ubuntu181 namespaces>#cd /data/k8s-data/yaml/namespaces
<root@ubuntu181 namespaces>#cat test-ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test

Create and verify the namespace

<root@ubuntu181 namespaces>#kubectl apply -f test-ns.yaml
namespace/test created
<root@ubuntu181 namespaces>#kubectl get namespaces
NAME              STATUS   AGE
default           Active   14h
kube-node-lease   Active   14h
kube-public       Active   14h
kube-system       Active   14h
test              Active   8s

1.2.2 Nginx Business YAML File Explained

<root@ubuntu181 nginx># cd /data/k8s-data/yaml/testapp/nginx
<root@ubuntu181 nginx># cat nginx.yaml
kind: Deployment # resource type: a Deployment controller, see kubectl explain Deployment
apiVersion: apps/v1 # API version, see kubectl explain Deployment.apiVersion
metadata: # metadata of this Deployment, see kubectl explain Deployment.metadata
  labels: # custom labels, see kubectl explain Deployment.metadata.labels
    app: test-nginx-deployment-label # label key app with value test-nginx-deployment-label; used again later
  name: test-nginx-deployment # name of the Deployment
  namespace: test # namespace to create it in; defaults to default
spec: # detailed definition of the Deployment, see kubectl explain Deployment.spec
  replicas: 1 # number of pod replicas to create; defaults to 1
  selector: # label selector
    matchLabels: # labels to match; required
      app: test-nginx-selector # target label to match
  template: # pod template; required, describes the pods to be created
    metadata: # template metadata
      labels: # template labels, see kubectl explain Deployment.spec.template.metadata.labels
        app: test-nginx-selector # label; must equal Deployment.spec.selector.matchLabels
    spec: # pod spec
      containers: # list of containers in the pod; at least one, containers cannot be added or removed dynamically
      - name: test-nginx-container # container name
        image: harbor.linux.net/test/nginx-web1:v1 # image address
        #command: ["/apps/tomcat/bin/run_tomcat.sh"] # command or script executed when the container starts
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always  # image pull policy: Always, Never or IfNotPresent
        ports: # container port list
        - containerPort: 80 # define a port
          protocol: TCP  # port protocol
          name: http # port name
        - containerPort: 443
          protocol: TCP
          name: https
        env: # environment variables
        - name: "password" # variable name; must be quoted
          value: "123456" # value of this variable
        - name: "age" # another variable name
          value: "20" # value of the other variable
        resources:  # resource requests and limits
          limits: # resource limits (upper bound)
            cpu: 2 # CPU limit in cores; fractional values such as 0.5 or 500m are allowed (1 core = 1000 millicores)
            memory: 2Gi # memory limit in Mi/Gi; maps to docker run --memory
          requests: # resource requests
            cpu: 500m # CPU request, the initial amount available when the container starts; 0.5 or 500m are allowed
            memory: 1Gi # memory request, the initial amount available when the container starts; used when scheduling the pod

        volumeMounts:
        - name: magedu-images
          mountPath: /usr/local/nginx/html/webapp/images
          readOnly: false
        - name: magedu-static
          mountPath: /usr/local/nginx/html/webapp/static
          readOnly: false
      volumes:
      - name: magedu-images
        nfs:
          server: 172.31.1.103
          path: /data/magedu/images
      - name: magedu-static
        nfs:
          server: 172.31.1.103
          path: /data/magedu/static
      #nodeSelector:
      #  group: magedu



---
kind: Service # resource type: Service
apiVersion: v1 # Service API version, see kubectl explain Service.apiVersion
metadata: # Service metadata, see kubectl explain Service.metadata
  labels: # custom labels, see kubectl explain Service.metadata.labels
    app: test-nginx-service-label # label value for this Service
  name: test-nginx-service # name of the Service; this name is resolvable via DNS
  namespace: test # namespace the Service belongs to, i.e. where the Service is created
spec: # detailed definition of the Service, see kubectl explain Service.spec
  type: NodePort # Service type, defines how the service is accessed; defaults to ClusterIP, see kubectl explain Service.spec.type
  ports: # ports, see kubectl explain Service.spec.ports
  - name: http # port name
    port: 80 # Service port 80
    protocol: TCP # protocol
    targetPort: 80 # target pod port
    nodePort: 32002 # port exposed on each node
  - name: https # SSL port
    port: 443 # Service port 443
    protocol: TCP
    targetPort: 443  # target pod port
    nodePort: 32443 # SSL port exposed on each node
  selector:  # label selector of the Service, defines the target pods
    app: test-nginx-selector # routes traffic to the selected pods; must equal Deployment.spec.selector.matchLabels
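
Before applying, the manifest can be validated without creating anything; illustrative (kubectl 1.18+ syntax, older releases use the plain --dry-run flag):

kubectl apply -f nginx.yaml --dry-run=client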

1.3 Creating the Nginx Pod in k8s

Nginx YAML file

<root@ubuntu181 nginx>#cat /data/k8s-data/yaml/testapp/nginx/nginxv1.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: test-nginx-deployment-label
  name: test-nginx-deployment
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-nginx-selector
  template:
    metadata:
      labels:
        app: test-nginx-selector
    spec:
      containers:
      - name: test-nginx-container
        image: harbor.linux.net/test/nginx-web1:v1
        #command: ["/apps/tomcat/bin/run_tomcat.sh"]
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
          name: http
        - containerPort: 443
          protocol: TCP
          name: https
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "20"
        resources:
          limits:
            cpu: 2
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 1Gi

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: test-nginx-service-label
  name: test-nginx-service
  namespace: test
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 32080
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
    nodePort: 32443
  selector:
    app: test-nginx-selector

Create the Nginx pod and verify

<root@ubuntu181 nginx>#kubectl apply -f nginxv1.yaml
deployment.apps/test-nginx-deployment created
service/test-nginx-service created

<root@ubuntu181 nginx>#kubectl get pod -o wide -n test
NAME                                     READY   STATUS    RESTARTS   AGE   IP           NODE            NOMINATED NODE   READINESS GATES
test-nginx-deployment-58c4569cb5-lbn78   1/1     Running   0          49s   172.20.3.9   192.168.7.105   <none>           <none>
<root@ubuntu181 nginx>#kubectl get service -o wide -n test
NAME                 TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
test-nginx-service   NodePort   10.20.153.166   <none>        80:32080/TCP,443:32443/TCP   59s   app=test-nginx-selector

Test access to the Nginx web page
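
For example, from any machine that can reach the node:

curl http://192.168.7.105:32080/
curl http://192.168.7.105:32080/webapp/index.html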

2. Running Tomcat

Based on the CentOS base image, build the company's internal images: the JDK image, the Tomcat base image, and the Tomcat business image:

2.1 Building the JDK Base Image

2.1.1 JDK base image file list

<root@ubuntu181 jdk-1.8.212>#cd /data/k8s-data/dockerfile/web/pub-images/jdk-1.8.212
<root@ubuntu181 jdk-1.8.212>#tree
.
├── build-command.sh
├── Dockerfile
├── jdk-8u212-linux-x64.tar.gz
└── profile

2.1.2 Dockerfile contents

<root@ubuntu181 jdk-1.8.212>#cat Dockerfile
#JDK Base Image
FROM harbor.linux.net/baseimages/centos-base:7.7.1908

MAINTAINER kong

ADD jdk-8u212-linux-x64.tar.gz /usr/local/src/
RUN ln -sv /usr/local/src/jdk1.8.0_212 /usr/local/jdk
ADD profile /etc/profile

ENV JAVA_HOME /usr/local/jdk
ENV JRE_HOME $JAVA_HOME/jre
ENV CLASSPATH $JAVA_HOME/lib/:$JRE_HOME/lib/
ENV PATH $PATH:$JAVA_HOME/bin

2.1.3 build-command script

<root@ubuntu181 jdk-1.8.212>#cat build-command.sh
#!/bin/bash
docker build -t harbor.linux.net/pub-images/jdk-base:v8.212  .
sleep 1
docker push  harbor.linux.net/pub-images/jdk-base:v8.212

2.1.4 Build the JDK base image

<root@ubuntu181 jdk-1.8.212>#bash build-command.sh
Successfully built 9be0800e100c
Successfully tagged harbor.linux.net/pub-images/jdk-base:v8.212
The push refers to repository [harbor.linux.net/pub-images/jdk-base]

# Verify the Java environment in a container started from the JDK image
docker run -it --rm harbor.linux.net/pub-images/jdk-base:v8.212 bash
[root@e1c56401db58 /]# java -version
java version "1.8.0_212"
Java(TM) SE Runtime Environment (build 1.8.0_212-b10)
Java HotSpot(TM) 64-Bit Server VM (build 25.212-b10, mixed mode)

2.2 Building the Tomcat Base Image

2.2.1 Base image file list

<root@ubuntu181 tomcat-base-8.5.43>#cd /data/k8s-data/dockerfile/web/pub-images/tomcat-base-8.5.43
<root@ubuntu181 tomcat-base-8.5.43>#tree
.
├── apache-tomcat-8.5.43.tar.gz
├── build-command.sh
└── Dockerfile

2.2.2 Dockerfile contents

<root@ubuntu181 tomcat-base-8.5.43>#cat Dockerfile 
#Tomcat 8.5.43 base image
FROM harbor.linux.net/pub-images/jdk-base:v8.212

MAINTAINER kong

RUN mkdir /apps /data/tomcat/webapps /data/tomcat/logs -pv 
ADD apache-tomcat-8.5.43.tar.gz  /apps
RUN useradd tomcat -u 2021 && ln -sv /apps/apache-tomcat-8.5.43 /apps/tomcat && chown -R tomcat.tomcat /apps /data -R

2.2.3 build-command script

<root@ubuntu181 tomcat-base-8.5.43>#cat build-command.sh
#!/bin/bash
docker build -t harbor.linux.net/pub-images/tomcat-base:v8.5.43  .
sleep 3
docker push  harbor.linux.net/pub-images/tomcat-base:v8.5.43

2.2.4 Build the Tomcat base image and verify

# Build the Tomcat base image
bash build-command.sh

# Start a container from the Tomcat base image and test access
<root@ubuntu181 tomcat-base-8.5.43>#docker run -it --rm -p 8801:8080 harbor.linux.net/pub-images/tomcat-base:v8.5.43 bash
[root@0689621f239d /]# /apps/tomcat/bin/catalina.sh start
Using CATALINA_BASE:   /apps/tomcat
Using CATALINA_HOME:   /apps/tomcat
Using CATALINA_TMPDIR: /apps/tomcat/temp
Using JRE_HOME:        /usr/local/jdk/jre
Using CLASSPATH:       /apps/tomcat/bin/bootstrap.jar:/apps/tomcat/bin/tomcat-juli.jar
Tomcat started.
[root@0689621f239d /]# curl -I http://192.168.7.101:8801/
HTTP/1.1 200 
Content-Type: text/html;charset=UTF-8
Transfer-Encoding: chunked

2.3 Building the Tomcat Business Image app1

Follow the same steps later to build the app2 ... appN images.

2.3.1 Business image file list

<root@ubuntu181 tomcat-app1>#cd /data/k8s-data/dockerfile/web/test/tomcat-app1
<root@ubuntu181 tomcat-app1>#tree
.
├── app1.tar.gz
├── build-command.sh
├── catalina.sh
├── Dockerfile
├── filebeat.yml
├── myapp
│   └── index.html
├── run_tomcat.sh
└── server.xml

2.3.2 Dockerfile contents

<root@ubuntu181 tomcat-app1>#cat Dockerfile
#tomcat web1
FROM harbor.linux.net/pub-images/tomcat-base:v8.5.43

ADD catalina.sh /apps/tomcat/bin/catalina.sh
ADD server.xml /apps/tomcat/conf/server.xml
ADD app1.tar.gz /data/tomcat/webapps/myapp/
ADD run_tomcat.sh /apps/tomcat/bin/run_tomcat.sh
ADD filebeat.yml /etc/filebeat/filebeat.yml
RUN chown  -R tomcat.tomcat /data/ /apps/

EXPOSE 8080 8443

CMD ["/apps/tomcat/bin/run_tomcat.sh"

2.3.3 build-command script

<root@ubuntu181 tomcat-app1>#cat build-command.sh
#!/bin/bash
TAG=$1
docker build -t  harbor.linux.net/test/tomcat-app1:${TAG} .
sleep 3
docker push  harbor.linux.net/test/tomcat-app1:${TAG}

2.3.4 Build and verify

# Build
bash build-command.sh 2020-05-20

# Verify
<root@ubuntu181 tomcat-app1>#docker run -it --rm -p 8801:8080 harbor.linux.net/test/tomcat-app1:2020-05-20
Using CATALINA_BASE:   /apps/tomcat
Using CATALINA_HOME:   /apps/tomcat
Using CATALINA_TMPDIR: /apps/tomcat/temp
Using JRE_HOME:        /usr/local/jdk
Using CLASSPATH:       /apps/tomcat/bin/bootstrap.jar:/apps/tomcat/bin/tomcat-juli.jar
Tomcat started.
127.0.0.1    localhost
::1    localhost ip6-localhost ip6-loopback
fe00::0    ip6-localnet
ff00::0    ip6-mcastprefix
ff02::1    ip6-allnodes
ff02::2    ip6-allrouters
172.17.0.2    83c290cf49de
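
The test page can then be requested from the Docker host; illustrative:

curl http://192.168.7.101:8801/myapp/index.html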

2.4 Running Tomcat in k8s

2.4.1 YAML file

<root@ubuntu181 tomcat-app1>#cat tomcat-app1.yaml 
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: test-tomcat-app1-deployment-label
  name: test-tomcat-app1-deployment
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-tomcat-app1-selector
  template:
    metadata:
      labels:
        app: test-tomcat-app1-selector
    spec:
      containers:
      - name: test-tomcat-app1-container
        image: harbor.linux.net/test/tomcat-app1:2020-05-20
        #command: ["/apps/tomcat/bin/run_tomcat.sh"]
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1
            memory: "512Mi"
          requests:
            cpu: 500m
            memory: "512Mi"
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: test-tomcat-app1-service-label
  name: test-tomcat-app1-service
  namespace: test
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 40003
  selector:
    app: test-tomcat-app1-selector

2.4.2 Create the Tomcat business pod

<root@ubuntu181 tomcat-app1>#kubectl apply -f tomcat-app1.yaml
deployment.apps/test-tomcat-app1-deployment created
service/test-tomcat-app1-service created

<root@ubuntu181 tomcat-app1>#kubectl get pod -o wide -n test
NAME                                          READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE   READINESS GATES
test-tomcat-app1-deployment-8dd5bdf8b-7znlz   1/1     Running   0          33s   172.20.3.10   192.168.7.105   <none>           <none>
<root@ubuntu181 tomcat-app1>#kubectl get service -o wide -n test
NAME                       TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE   SELECTOR
test-tomcat-app1-service   NodePort   10.20.181.79   <none>        80:40003/TCP   37s   app=test-tomcat-app1-selector

Test access to the Tomcat business pod via its NodePort
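
For example:

curl http://192.168.7.105:40003/myapp/index.html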

2.5 Nginx + Tomcat Static/Dynamic Separation in k8s

Build a generic nginx+tomcat web architecture with static/dynamic separation: static pages and images are served directly by Nginx, while dynamic requests are forwarded to Tomcat based on the matching location.
Key point: Nginx forwards user requests to the Tomcat business app by the Tomcat Service name.

2.5.1 Nginx business image configuration

Nginx configuration file (additions)

upstream tomcat_webserver {
  server test-tomcat-app1-service.test.svc.linux.local:80;
}
server {
  location /myapp {
    proxy_pass http://tomcat_webserver;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Real-IP $remote_addr;
  }
}
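
The upstream relies on cluster DNS resolving the Service name. This can be checked from a temporary pod in the same namespace before rebuilding; an illustrative check (busybox:1.28 is commonly used because its nslookup cooperates with kube-dns):

kubectl run -it --rm dns-test --image=busybox:1.28 -n test -- nslookup test-tomcat-app1-service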

Rebuild the Nginx business image

# Build
<root@ubuntu181 nginx>#bash build-command.sh v2

# Start a container from the image and verify the configuration file
<root@ubuntu181 nginx>#docker run -it --rm harbor.linux.net/test/nginx-web1:v2 bash
[root@3824cdf70c81 /]# grep -v "#" /usr/local/nginx/conf/nginx.conf | grep -v "^$"
user  nginx nginx;
worker_processes  auto;
daemon off;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
upstream tomcat_webserver {
  server test-tomcat-app1-service.test.svc.linux.local:80;
}
    server {
        listen       80;
        server_name  localhost;
        location / {
            root   html;
            index  index.html index.htm;
        }
        location /webapp {
            root   html;
            index  index.html index.htm;
        }
        location /myapp {
        proxy_pass http://tomcat_webserver;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
  }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

2.5.2 Recreate the business Nginx pod

# Delete the old pod
kubectl  delete  -f nginxv1.yaml

# Update the image in the YAML file
<root@ubuntu181 nginx>#vim nginxv2.yaml
    spec:
      containers:
      - name: test-nginx-container
        image: harbor.linux.net/test/nginx-web1:v2

# Start
<root@ubuntu181 nginx>#kubectl  apply  -f nginxv2.yaml
deployment.apps/test-nginx-deployment created
service/test-nginx-service created
<root@ubuntu181 nginx>#kubectl get pod -n test -o wide
NAME                                          READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE   READINESS GATES
test-nginx-deployment-5f5746cbcb-xsc7v        1/1     Running   0          15s   172.20.3.18   192.168.7.105   <none>           <none>
test-tomcat-app1-deployment-8dd5bdf8b-swfkg   1/1     Running   1          69m   172.20.3.13   192.168.7.105   <none>           <none>

2.5.3 DNS errors

Reference: https://www.cnblogs.com/dudu/p/12180982.html

# When the cluster is installed with ansible and ENABLE_LOCAL_DNS_CACHE: true, node-local-dns is installed, which causes resolution that depends on rewrite and hosts configuration to always fail
# In nodelocaldns.yaml the forward directive points at the variable __PILLAR__UPSTREAM__SERVERS__; changing this variable to the IP address of the kube-dns-upstream service fixes the problem

<root@ubuntu181 yml>#kubectl get svc -n kube-system | grep kube-dns-upstream
kube-dns-upstream           ClusterIP   10.20.170.151   <none>        53/UDP,53/TCP            29h

<root@ubuntu181 yml>#vim nodelocaldns.yaml
        forward . 10.20.170.151

<root@ubuntu181 yml>#kubectl apply -f nodelocaldns.yaml 
serviceaccount/node-local-dns unchanged
service/kube-dns-upstream unchanged
configmap/node-local-dns configured
daemonset.apps/node-local-dns configured
service/node-local-dns unchanged
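
The node-local-dns pods may need to be restarted to pick up the new ConfigMap, after which the Service name should resolve again; illustrative (the label below matches the upstream nodelocaldns manifest and may differ in other setups):

kubectl -n kube-system delete pod -l k8s-app=node-local-dns
kubectl run -it --rm dns-test --image=busybox:1.28 -n test -- nslookup test-tomcat-app1-service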

2.5.4 Access verification
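
A request to the Nginx NodePort under /myapp should now be proxied to the Tomcat pod; illustrative:

curl http://192.168.7.105:32080/myapp/index.html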

3. Static/Dynamic Separation Based on NFS

Image uploads are handled by the backend Tomcat servers, while image reads are served by the frontend Nginx, so the data seen by Nginx and Tomcat must stay consistent. The data therefore has to live on a storage server outside the k8s environment and be mounted into each Nginx and Tomcat container.
Volume types and usage: http://docs.kubernetes.org.cn/429.html

3.1 Preparing the NFS Server

# Install the NFS packages on node 107

mkdir /data/test -p # top-level data directory
mkdir /data/test/images # image directory
mkdir /data/test/static # static file directory

vim /etc/exports
/data/test *(rw,no_root_squash)

systemctl restart nfs-server
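
The exports can also be re-read without a restart and checked from any client; illustrative:

exportfs -r
showmount -e 192.168.7.107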

3.2 Mount on an NFS Client and Test Writing a File

mount -t nfs 192.168.7.107:/data/test /mnt
cp /etc/passwd /mnt/ # must be able to write data
<root@ubuntu184 ~>#ls /mnt
images  passwd  static

3.3 Nginx Business Container YAML File

# cat nginxv3.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: test-nginx-deployment-label
  name: test-nginx-deployment
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-nginx-selector
  template:
    metadata:
      labels:
        app: test-nginx-selector
    spec:
      containers:
      - name: test-nginx-container
        image: harbor.linux.net/test/nginx-web1:v2
        #command: ["/apps/tomcat/bin/run_tomcat.sh"]
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          protocol: TCP
          name: http
        - containerPort: 443
          protocol: TCP
          name: https
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "20"
        resources:
          limits:
            cpu: 2
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 1Gi

        volumeMounts:
        - name: test-images
          mountPath: /usr/local/nginx/html/webapp/images
          readOnly: false
        - name: test-static
          mountPath: /usr/local/nginx/html/webapp/static
          readOnly: false
      volumes:
      - name: test-images
        nfs:
          server: 192.168.7.107
          path: /data/test/images 
      - name: test-static
        nfs:
          server: 192.168.7.107
          path: /data/test/static
      #nodeSelector:
      #  group: magedu


---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: test-nginx-service-label
  name: test-nginx-service
  namespace: test
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 32080
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
    nodePort: 32443
  selector:
    app: test-nginx-selector

3.4 Verify the NFS Mounts Inside the Pod

# Apply the updated YAML file
kubectl apply -f nginxv3.yaml

# Verify the NFS mounts
[root@test-nginx-deployment-7b6f564ffb-j824d /]# df -TH
Filesystem                      Type     Size  Used Avail Use% Mounted on
overlay                         overlay  106G   11G   89G  11% /
tmpfs                           tmpfs     68M     0   68M   0% /dev
tmpfs                           tmpfs    2.1G     0  2.1G   0% /sys/fs/cgroup
/dev/sda1                       ext4     106G   11G   89G  11% /etc/hosts
shm                             tmpfs     68M     0   68M   0% /dev/shm
tmpfs                           tmpfs    2.1G   13k  2.1G   1% /run/secrets/kubernetes.io/serviceaccount
192.168.7.107:/data/test/images nfs4     106G  7.3G   93G   8% /usr/local/nginx/html/webapp/images
192.168.7.107:/data/test/static nfs4     106G  7.3G   93G   8% /usr/local/nginx/html/webapp/static
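
A quick read/write test confirms the pod and the NFS server see the same data; illustrative:

# inside the pod
echo "nfs write test" > /usr/local/nginx/html/webapp/images/test.txt
# on the NFS server (192.168.7.107)
cat /data/test/images/test.txt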

3.5 Web Access Test

Upload a file to NFS

<root@k8s-harbor images>#ll
total 1392
drwxr-xr-x 2 root root    4096 Apr 10 16:00 ./
drwxr-xr-x 4 root root    4096 Apr 10 15:26 ../
-rw-r--r-- 1 root root 1416101 Apr  4 12:25 1.jpg

Access test

Visit: http://192.168.7.105:32080/webapp/images/1.jpg
