# E10 K8s Deployment

#### Fetch the deployment code

```shell
git clone http://192.168.7.32:9091/luxsun/e10-k8s.git
```

#### Prepare storage (NFS by default)

1. Install the NFS service on every k8s node: `yum install nfs-utils -y`
2. Configure the directory export (run on the NFS server):

   ```shell
   # vi /etc/exports
   /data *(rw,no_root_squash,sync)
   ```

3. Start and enable the NFS service: `systemctl start nfs && systemctl enable nfs`
4. Create the oa namespace: `kubectl create ns oa`
5. Configure NFS as the Kubernetes backend storage volume:
```yaml
# nfs-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner-config
  labels:
    app: nfs-client-provisioner-config
  # replace with namespace where provisioner is deployed
  namespace: oa
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner-config
  template:
    metadata:
      labels:
        app: nfs-client-provisioner-config
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root-config
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs-config
            - name: NFS_SERVER
              value: 10.12.253.54
            - name: NFS_PATH
              value: /data/
      volumes:
        - name: nfs-client-root-config
          nfs:
            server: 10.12.253.54
            path: /data/
```

```yaml
# storageclass.yml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage-config
provisioner: fuseim.pri/ifs-config # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
```

```yaml
# rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: oa
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: oa
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: oa
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: oa
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: oa
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
```


6. Deploy the resources: `kubectl apply -f ./nfs`
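To verify the provisioner before continuing, one option is a throwaway PVC bound to the new StorageClass; `test-pvc` below is a hypothetical name used only for this check:

```yaml
# test-pvc.yaml -- hypothetical verification object; delete it once it reaches Bound
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
  namespace: oa
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: managed-nfs-storage-config
  resources:
    requests:
      storage: 1Mi
```

After `kubectl apply -f test-pvc.yaml`, `kubectl get pvc -n oa` should show the claim as `Bound` within a few seconds; if it stays `Pending`, check the provisioner pod logs.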


#### Deploy the supporting services

[Helm | Helm version support policy](https://helm.sh/zh/docs/topics/version_skew/)

##### Configure the image registry secret

1. On the master node, log in to the image registry `reg.e-cology.cn`: `docker login reg.e-cology.cn -u readonly -p Readonly@2023`
2. Create the secret:

```shell
kubectl create secret generic regcred \
    --from-file=.dockerconfigjson=/root/.docker/config.json \
    --type=kubernetes.io/dockerconfigjson
```
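The secret only takes effect when a workload references it (the e10 chart exposes this through its pullsecrets value, configured later). Below is a minimal sketch of how a raw Pod spec would reference it; note that secrets are namespaced, so the secret may need to be created with `-n oa` for workloads in the oa namespace:

```yaml
# Sketch: pulling from reg.e-cology.cn with the regcred secret
apiVersion: v1
kind: Pod
metadata:
  name: pull-test          # hypothetical name, for illustration only
  namespace: oa
spec:
  imagePullSecrets:
    - name: regcred        # the secret created above
  containers:
    - name: app
      image: reg.e-cology.cn/e10/e10-front:20230417
```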
##### Deploy Helm
1. Install Helm. The local Kubernetes version defaults to v1.22.2, which corresponds to Helm 3.10.
2. Download Helm: `wget https://get.helm.sh/helm-v3.10.1-linux-amd64.tar.gz`
3. Extract the archive: `tar -zxvf helm-v3.10.1-linux-amd64.tar.gz`
4. Move the `helm` binary from the extracted directory to a directory on the PATH: `mv linux-amd64/helm /usr/local/bin/helm`
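A quick sanity check after the move; the reported version should match the downloaded release:

```shell
helm version --short   # expect v3.10.1
```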
##### Deploy the operations platform

A MySQL database must be prepared in advance (it can share the instance with the E10 database); the operations platform initializes its tables automatically during startup.

1. Edit monitor-config.yaml to set the database address (a hypothetical sketch follows this list)
2. Deploy the operations platform: `kubectl apply -f e-monitor`
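The key names inside monitor-config.yaml are not reproduced in this document; the sketch below only illustrates the kind of database settings step 1 refers to, with entirely hypothetical field names. Check the actual file shipped in the repository:

```yaml
# Hypothetical sketch of the database section in monitor-config.yaml
# (field names are assumptions; the point is the address and credentials)
database:
  host: 10.12.253.54   # MySQL address; may be the instance shared with E10
  port: 3306
  username: emonitor
  password: changeme
```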
##### Deploy ELK log collection

1. Deploy ZooKeeper: `kubectl apply -f elk/zookeeper.yaml`
2. Deploy kafka-service: `kubectl apply -f elk/kafka.yaml`
3. Deploy kafka-deployment; the Kafka service IP must first be obtained with `kubectl get svc | grep kafka-service` (see the helper sketch after this list)
4. Deploy Logstash: `kubectl apply -f elk/logstash.yaml`; the Elasticsearch address and Kafka address in it must be updated first
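A small helper for step 3, assuming the Service is named kafka-service as above; the printed ClusterIP is what gets written into the kafka-deployment manifest (and later into the Logstash config):

```shell
# Capture the kafka-service ClusterIP referenced by kafka-deployment and logstash
KAFKA_IP=$(kubectl get svc | grep kafka-service | awk '{print $3}')
echo "$KAFKA_IP"
```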
##### Deploy Nacos

###### Deploy MySQL

Because Nacos depends on MySQL, MySQL must be deployed first.

1. Create the Nacos initialization script as a ConfigMap: `kubectl apply -f ./mysql/nacosinit.yaml`
2. Edit ./mysql/values.yaml: set the root password and the Nacos account/password, mount the ConfigMap created above, and set the storageclass (a hedged sketch of these fields follows step 3)


3. After the edits, install the database: `helm install nacos-mysql -f values.yaml .`
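The original screenshots are not reproduced here, so this is a hedged sketch of the fields step 2 refers to, assuming a Bitnami-style MySQL chart; the key paths in the chart shipped with the repository may differ:

```yaml
# ./mysql/values.yaml -- fields to adjust (key names assume a Bitnami-style chart)
global:
  storageClass: managed-nfs-storage-config  # the NFS StorageClass created earlier
auth:
  rootPassword: "changeme"                  # root password
  database: nacos
  username: nacos                           # nacos account
  password: "changeme"                      # nacos password
initdbScriptsConfigMap: nacos-init          # ConfigMap from nacosinit.yaml (name is an assumption)
```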
###### Deploy Nacos
1. Edit ./nacos/values.yaml: mainly the MySQL connection, plus the storageclass (a hedged sketch follows step 2)


2. Install the Nacos cluster: `helm install weaver-nacos -f values.yaml .`
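A hedged sketch of the edits in ./nacos/values.yaml; the field names depend on the Nacos chart in the repository and are assumptions here:

```yaml
# ./nacos/values.yaml -- fields to adjust (field names are assumptions)
persistence:
  enabled: true
  storageClass: managed-nfs-storage-config
mysql:
  host: nacos-mysql          # Service name of the database installed above (assumption)
  port: 3306
  database: nacos
  username: nacos
  password: "changeme"
```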
##### Install the Elasticsearch cluster

1. Edit ./elasticsearch/values.yaml: set the JVM memory and persistent storage (see the sketch after this list)


2. Install the ES cluster: `helm install weaver-elasticsearch -f values.yaml .`
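A sketch of step 1's edits, assuming the layout of the official Elastic chart (esJavaOpts and volumeClaimTemplate); sizes are example values:

```yaml
# ./elasticsearch/values.yaml -- fields to adjust (layout assumes the Elastic chart)
esJavaOpts: "-Xms2g -Xmx2g"                  # JVM heap, sized to the node
volumeClaimTemplate:
  storageClassName: managed-nfs-storage-config
  resources:
    requests:
      storage: 30Gi                          # example size
```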
##### Install the Kafka cluster (ZooKeeper bundled)

1. Edit ./kafka/values.yaml: set the storageclass, heapOpts, and the ZooKeeper connection address (see the sketch after this list)


2. Install the Kafka cluster: `helm install weaver-kafka -f values.yaml .`
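A sketch of step 1's edits, assuming a Bitnami-style Kafka chart; key names may differ in the bundled chart:

```yaml
# ./kafka/values.yaml -- fields to adjust (names assume a Bitnami-style chart)
global:
  storageClass: managed-nfs-storage-config
heapOpts: "-Xmx2048m -Xms2048m"   # Kafka JVM heap
zookeeper:
  enabled: true                   # the chart-bundled zookeeper
# To point at an external zookeeper instead:
# zookeeper:
#   enabled: false
# externalZookeeper:
#   servers: zookeeper-service:2181
```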
##### Install MongoDB

1. Edit ./mongodb/values.yml: set the storageclass and the account/password (see the sketch after this list)


2. Install the Mongo cluster: `helm install weaver-mongodb -f values.yml .`
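A sketch of step 1's edits, again assuming Bitnami-style key names:

```yaml
# ./mongodb/values.yml -- fields to adjust (names assume a Bitnami-style chart)
global:
  storageClass: managed-nfs-storage-config
auth:
  rootUser: root
  rootPassword: "changeme"
```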
##### Install the RabbitMQ cluster

1. Edit ./rabbitmq/values.yaml: set the account/password, the storageclass, and the PV size (see the sketch after this list)


2. Install the RabbitMQ cluster: `helm install weaver-rabbitmq -f values.yaml .`
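A sketch of step 1's edits, assuming Bitnami-style key names:

```yaml
# ./rabbitmq/values.yaml -- fields to adjust (names assume a Bitnami-style chart)
auth:
  username: weaver           # example account
  password: "changeme"
persistence:
  storageClass: managed-nfs-storage-config
  size: 8Gi                  # PV size
```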
##### Install the Redis cluster

1. Edit ./redis/values.yml: set the storageclass, the password, etc. (see the sketch after this list)


2. Install the Redis cluster: `helm install weaver-redis -f values.yml .`
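A sketch of step 1's edits, assuming Bitnami-style key names:

```yaml
# ./redis/values.yml -- fields to adjust (names assume a Bitnami-style chart)
global:
  storageClass: managed-nfs-storage-config
auth:
  password: "changeme"
```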

##### Connection configuration table

#### Prepare the Nacos configuration files

1. Log in to the Nacos of the source environment and export its configuration files


2. Unzip the export and modify the configuration files (the general address pattern is sketched after this list)


3. Modify the system access address; with NodePort the default address is masterip:32600, e.g. 10.12.253.90:32600


4. Modify the Redis configuration in weaver-cache.properties


5. Modify the RabbitMQ configuration in weaver.properties


6. Modify the Kafka configuration


7. Modify the ZooKeeper configuration in weaver-crm-service.properties


8. Modify the Elasticsearch configuration in the weaver-front-monitor-service.properties and weaver-architecture-service.properties configuration files

9. Modify the Mongo configuration files


10. Compress the modified configuration files and import them into the Nacos cluster

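The common thread in steps 3-9 is replacing source-environment addresses with the in-cluster Service addresses of the charts installed earlier. A hedged illustration of the pattern; both the property keys and the Service names below are assumptions, and the actual keys live in the files named above:

```properties
# Illustrative only: property keys and service names are assumptions
redis.host=weaver-redis-master.oa.svc.cluster.local
redis.port=6379
kafka.bootstrap.servers=weaver-kafka.oa.svc.cluster.local:9092
mongodb.uri=mongodb://weaver-mongodb.oa.svc.cluster.local:27017
```

The actual Service names can be listed with `kubectl get svc -n oa`.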

#### Prepare the backend images

1. Prepare the war packages and place them in the ecology directory


2. Run build.sh to build the images and wait for the build to complete


3. Prepare weaver-open-gateway.jar and weaver-gateway.jar and build their images


#### Prepare the frontend image

1. Enter the frontend directory: `cd e10-front`
2. Build the frontend image: `docker build -t reg.e-cology.cn/e10/e10-front:20230417 .`

#### Deploy the E10 services

1. Edit zhechart/values.yaml: set pullsecrets, emonitorIP, storageclass, kafka_host, and the frontend/backend image addresses
2. Deploy the E10 services: `helm install weaver-e10 -f values.yaml .`

When the deployment finishes and all services reach the ready state, access `masterip:32600`.

For domain-name access, Istio or an NGINX ingress is required to expose e10-front. Istio is used as the example below.

1. Modify the hosts to specify the external access address (an IP also works), and point the destination at the e10-front service


2. Configure external access: `kubectl apply -f e10.yaml` (a sketch of an equivalent manifest follows)
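The contents of e10.yaml are not reproduced in this document; below is a minimal sketch of an equivalent Istio Gateway plus VirtualService, with the hostname and ports as placeholders:

```yaml
# Sketch: expose e10-front through the Istio ingress gateway (host/ports are placeholders)
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: e10-gateway
  namespace: oa
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "e10.example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: e10-front
  namespace: oa
spec:
  hosts:
    - "e10.example.com"
  gateways:
    - e10-gateway
  http:
    - route:
        - destination:
            host: e10-front   # the e10-front Service
            port:
              number: 80      # service port is an assumption
```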