[TOC]


0x00 前言简述

Q: 什么是Jenkins的分布式节点构建?

描述: 简单地说,就是将构建过程分配到从属的 Slave 节点上执行,从而减轻 Master 节点的压力,并且可以同时构建多个任务,有点类似负载均衡的概念。
随着容器技术的盛行,我们可以将 server 节点和 agent 节点部署在容器或 Kubernetes 集群中,从而获得动态资源分配等好处。

WeiyiGeek.Jenkins分布式节点



1.节点说明

描述: 我们在使用Jenkins的时候一般都会分为server节点与agent节点(也可以叫 slave 节点)。

  • 1) server :主要用于处理调度构建作业,把构建分发到slave节点进行实际执行,监视slave节点的状态(必要时让它们进行上线或者离线),记录和发布构建产物。
  • 2) agent : 主要用于处理Job任务等例如编译和发布, agent节点可以分为静态节点和动态节点;

节点类型:

  • 1) 静态节点是固定的一台vm虚机或者容器。
  • 2) 动态节点是随着任务的构建自动创建的 agent 节点,任务结束后自动销毁。

2.节点连接

agent节点加入的两种方式:

  • ssh : 在Linux系统中最方便的就是通过SSH启动Jenkins节点,关键是需要在 Slave 机器上开启 sshd 服务并保证网络连通;
  • jnlp : 兼容各种操作系统,只要网络可以正常通信就可以采用此种方法(需要安装 Java 环境);


SSH 方式

环境依赖: SSH Slaves plugin 插件 、SSH Credentials Plugin 插件(管理认证票据)
添加流程:

  • 1) 输入Slave节点IP
  • 2) 添加Credentials认证票据(账号密码或密钥登录)
  • 3) 在这里设置的 credentials 在 Jenkins 其他需要 credentials 的地方都可以通过下拉菜单选择使用,比如添加 slave 时可以直接在 Credentials 下拉菜单里选择对应的 credential 即可


用户密码方式添加:
添加流程:

  • 1) 新建节点->输入节点IP
  • 2)基础信息配置
  • 3) 选择Jenkins凭据
  • 4)启动代理

私钥方式添加:
添加流程:(相比于上面的流程唯一不同的是Jenkins凭据选择为ssh private key)

  • 记住是私钥不是公钥,将 cat ~/.ssh/id_rsa 的内容输入到 Private Key 之中
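下面是一个准备私钥登录的参考操作(仅为示意,其中 Slave 节点地址 192.168.12.108、登录用户 jenkins 均为假设值,请按实际环境替换):

# 在用于登录 Slave 的机器上生成密钥对(若已有可跳过)
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""
# 将公钥分发到 Slave 节点,实现免密登录
ssh-copy-id -i ~/.ssh/id_rsa.pub jenkins@192.168.12.108
# 验证免密登录与 Java 环境,随后将 ~/.ssh/id_rsa 私钥内容粘贴到凭据的 Private Key 中
ssh jenkins@192.168.12.108 "hostname && java -version"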


JNLP 方式

描述: 与 SSH 方式由 master 主动连接 slave 不同,JNLP 方式是由 slave 主动连接 master。

Tips : 复制节点,在某个节点的环境配置好之后,再添加新节点时可以直接复制该节点,修改 IP 和其他自定义配置即可。

Tips : 在需要Jenkins全局安全配置上开启 Inbound agents 端口 50000/tcp 代理端口, 此端口的作用是便于Agent的jnlp与jenkins的master节点间进行通信;
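下面是在 Agent 机器上验证该端口连通性的参考命令(示意,Jenkins 地址 jenkins.example.com 为假设值,请替换为实际地址):

# 检查 50000/tcp 代理端口是否可达
nc -vz jenkins.example.com 50000
# 查看 Jenkins 对外声明的 JNLP 端口信息(tcpSlaveAgentListener 端点的响应头)
curl -sI http://jenkins.example.com:8080/tcpSlaveAgentListener/ | head -n 5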


3.点明主题

Q: 传统Jenkins的server、agent分布式方案有什么缺陷?

  • 1.高可用管理: Master 节点发生单点故障时整个流程都不可用了;
  • 2.配置管理维护: 每个 Slave节点的配置环境不一样,来完成不同语言的编译打包等操作,但是这些差异化的配置导致管理起来非常不方便,维护起来也是比较费劲
  • 3.资源分配不均衡:有的 Slave节点要运行的job出现排队等待,而有的Slave节点处于空闲状态
  • 4.资源浪费: 每台 Slave节点可能是实体机或者VM,当Slave节点处于空闲状态时,也不会完全释放掉资源

Tips : 正因为上面的种种痛点,我们渴望一种更高效、更可靠的方式来完成 CI/CD 流程,而 Docker 容器虚拟化技术能很好地解决这些痛点,特别是在 Kubernetes 集群环境下能更好地解决上面的问题,所以我们可以引入 Kubernetes 来解决!

Tips : 基于 Kubernetes 的 CI/CD 可以使用的工具有很多,比如 Jenkins、GitLab CI 以及新兴的 Drone 之类都可以,大家可以根据需求进行学习、搭建并落地;


Q: 什么是 Kubernetes ? 它有何作用?
答: Kubernetes (简称K8S)是Google开源的容器集群管理系统,在Docker技术的基础上,为容器化的应用提供部署运行、资源调度、服务发现和动态伸缩等一系列完整功能,提高了大规模容器集群管理的便捷性。
其主要功能如下:

  • 1.使用Docker对应用程序包装(package)、实例化(instantiate)、运行(run)。
  • 2.以集群的方式运行、管理跨机器的容器。
  • 3.解决 Docker 跨机器容器之间的通讯问题。
  • 4.Kubernetes 的自我修复机制使得容器集群总是运行在用户期望的状态。
WeiyiGeek.Kubernetes 搭建 Jenkins 集群示意图


PS : 从图上可以看到 Jenkins Master 和 Jenkins Slave 以 Pod 形式运行在 Kubernetes 集群的 Node 上,Master 运行在其中一个节点,并且将其配置数据存储到一个 Volume 上去,Slave 运行在各个节点上,并且它不是一直处于运行状态,它会按照需求动态的创建并自动删除。
PS : 这种方式的工作流程大致为当 Jenkins Master 接受到 Build 请求时,会根据配置的 Label 动态创建一个运行在 Pod 中的 Jenkins Slave 并注册到 Master 上,当运行完 Job 后,这个 Slave 会被注销并且这个 Pod 也会自动删除,恢复到最初状态


Q: Kubernetes 方式部署给我们带来什么好处?

  • 1.服务高可用,当 Jenkins Master 出现故障时,Kubernetes 会自动创建一个新的 Jenkins Master 容器,并且将 Volume 分配给新创建的容器,保证数据不丢失,从而达到集群服务高可用。
  • 2.动态伸缩,合理使用资源,每次运行 Job 时,会自动创建一个 Jenkins Slave,Job 完成后,Slave 自动注销并删除容器,资源自动释放,而且 Kubernetes 会根据每个资源的使用情况,动态分配 Slave 到空闲的节点上创建,降低出现因某节点资源利用率高,还排队等待在该节点的情况。
  • 3.扩展性好,当 Kubernetes 集群的资源严重不足而导致 Job 排队等待时,可以很容易的添加一个 Kubernetes Node 到集群中,从而实现扩展。

常用 Kubernetes + Harbor + Jenkins + GitLab 持续集成方案:
Tips : 大致工作流程 手动 /自动构建 -> Jenkins 调度 K8S API -> 动态生成 Jenkins Slave pod -> Slave pod 拉取 Git 代码/编译/打包镜像 ->推送到镜像仓库 Harbor -> Slave 工作完成,Pod 自动销毁 -> 部署 到测试或生产 Kubernetes平台。(完全自动化,无需人工干预)

weiyigeek.CI/CD集成



4.知识扩展

(1) 官方的Jenkins镜像网站
Hub Docker Images : https://hub.docker.com/r/jenkins/jenkins/

docker pull jenkins/jenkins

(2) Kubernetes 插件官方帮助
kubernetes Plugs : https://plugins.jenkins.io/kubernetes/


0x01 安装部署

(0) 分布式架构过程说明

将 Jenkins 的 Master/Agent 分布式架构直接部署在宿主机上不是一个很好的选择;但它作为向容器化过渡的中间阶段,是有必要学习掌握的。

Tips : 整个 Jenkins 服务分布式部署很简单,其步骤为:

  • 部署一个 Jenkins Master 节点。
  • 通过 Jenkins 的 WEB 页面,在 Master 节点上添加 Agent node 节点。
  • 添加完 Agent Node 节点后,Jenkins Master 会提供部署 Agent Node 服务的软件包和启动方式;直接找台服务器,根据提示运行就行。

在 Master 节点中添加 Agent 方式

  • Step 1.新建节点页面的访问路径
    描述: 在 Jenkins 服务的页面上找到"新建节点"的页面;它的访问路径如下: Manage Jenkins -> Manage Nodes and Clouds(管理节点)-> New Node(新建节点)

  • Step 2.节点名称和节点类型
    描述:通过上面的访问路径,进入添加节点的第一个页面。在这里需要填写一下【节点名称】和选择节点类型,一般选择永久节点 [Permanent Agent] 即可

  • Step 3.在节点的详细设置页面中填写更多信息

    | 配置项 | 配置项说明/配置信息 |
    | --- | --- |
    | Name / 名称 | Jenkins Agent 的名称 |
    | # of executors / 并发执行数 | Agent 节点可以同时执行几个 Job |
    | Remote root directory / 远程工作目录 | 从节点上 jenkins agent 的工作目录,推荐使用绝对路径,如 /home//jenkins-agent。注意 jenkins 要有该目录的读写权限 |
    | Labels / 标签 | 给 Agent 节点设置标签;Job 任务可以根据标签选择特定的 Agent 节点执行。 |
    | Usage / Agent 节点的使用方法 | 有两种方式:1、尽量使用此 Agent 执行任务;2、只执行标签匹配的任务。 |
    | Launch method / Agent 启动方法 | 有多种选择,这里使用 "Launch agent by connecting it to the master"(通过将 Agent 连接到 Master 来启动它) |

    其他的配置项可以选择默认状态,也可以根据自己的理解设置。
  • Step 4.获取部署 Agent 的方法
    描述: 上面的步骤操作完成后,就会有个展示配置 Agent 节点的页面。其中提供了两种部署 Agent 的方式,我们选择第二种。

  • Step 5.在 Agent 服务器的命令行执行启动命令

    # 方式1.将密码通过命令行直接传入(不安全)
    java -jar agent.jar -jnlpUrl http://jenkins.example.com/computer/agent-test/slave-agent.jnlp -secret d4abba3f13324b85ab2997e22c3442045bb86fcd213f79fa01416a5fd0399a18 -workDir ""

    # 方式2.将密码信息写入到文件中 Agent 启动方式
    echo d4abba3f13324b85ab2997e22c3442045bb86fcd213f79fa01416a5fd0399a18 > secret-file
    java -jar agent.jar -jnlpUrl http://jenkins.example.com/computer/agent-test/slave-agent.jnlp -secret @secret-file -workDir ""


(1) 单主机部署配置固定 agent

描述: 添加一个普通、固定(永久)的节点到 Jenkins,即给 Jenkins 增加一个普通的永久代理;之所以叫做"固定",是因为 Jenkins 没有给这种节点提供更高级的集成方式(例如动态配置);

  • 没有其他代理类型可选时,可以选择该代理类型;
  • 例如,你在添加不受 Jenkins 管理的物理机、在 Jenkins 外部管理的虚拟机等。


环境准备:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.1 LTS
Release: 20.04
Codename: focal

kubernetes 安装的 Jenkins 2.275 : `http://jenkins.weiyigeek.top:8080/`


Java Web 启动 Agent 方式

Q: 如何实现Jenkins分布式构建?

  • Step 1.开启代理程序的TCP端口:Manage Jenkins -> Configure Global Security(全局安全配置) -> 代理 -> 设置为固定的50000端口

  • Step 2.在 Manage Jenkins -> Manage Nodes 节点列表 -> 新增节点 -> 节点名称(agent-机器名称)-> 添加一个普通、固定的节点到Jenkins;

  • Step 3.此时有两种 Slave 节点连接Master节点的方法 -> (Launch agent by connecting it to the master 或者 通过Java Web启动代理) ;
    描述: 使用 Java Web Start 就必须在 Agent 机器上打开 JNLP 文件,然后创建到 Jenkins 服务器的 TCP 连接,意味着不需要 Jenkins 服务器能够访问 Agent,而是 Agent 能够连接到 Jenkins Server 即可

    # 在命令行中启动节点
    java -jar agent.jar -jnlpUrl http://192.168.12.107:30001/computer/node-1/jenkins-agent.jnlp -secret 52e2e9b37cd4a36310d39984ff461b48ef95b40101a4d34875b7248a7622bd92 -workDir "/home/jenkins"

    # Run from agent command line, with the secret stored in a file:
    echo 52e2e9b37cd4a36310d39984ff461b48ef95b40101a4d34875b7248a7622bd92 > secret-file
    java -jar agent.jar -jnlpUrl http://192.168.12.107:30001/computer/node-1/jenkins-agent.jnlp -secret @secret-file -workDir "/home/jenkins"
WeiyiGeek.Create-Node


  • Step 4.采用VM方式运行agent.jar连接到Jenkins的Server节点, 首先我们需要知道上面的 agent.jar 下载地址和 secret 信息等;

    # (1) 我们在一台Linux服务器中下载agent.jar和启动连接。
    wget http://192.168.12.107:30001/jnlpJars/agent.jar
    java -jar agent.jar -jnlpUrl http://192.168.12.107:30001/computer/node-1/jenkins-agent.jnlp -secret 52e2e9b37cd4a36310d39984ff461b48ef95b40101a4d34875b7248a7622bd92 -workDir "/home/jenkins"

    # (2) 运行后输出以下日志
    # INFO: Agent discovery successful
    # Agent address: 192.168.1.200
    # Agent port: 30081
    # Identity: 0b:a1:da:6c:8e:e2:ca:f8:17:f6:b7:ee:cb:ff:84:0d
    # Jul 24, 2020 5:57:29 AM hudson.remoting.jnlp.Main$CuiListener status
    # INFO: Handshaking
    # Jul 24, 2020 5:57:29 AM hudson.remoting.jnlp.Main$CuiListener status
    # INFO: Connecting to 192.168.1.200:30081
    # Jul 24, 2020 5:57:29 AM hudson.remoting.jnlp.Main$CuiListener status
    # INFO: Trying protocol: JNLP4-connect
    # Jul 24, 2020 5:57:29 AM hudson.remoting.jnlp.Main$CuiListener status
    # INFO: Remote identity confirmed: 0b:a1:da:6c:8e:e2:ca:f8:17:f6:b7:ee:cb:ff:84:0d
    # Jul 24, 2020 5:57:30 AM hudson.remoting.jnlp.Main$CuiListener status
    # INFO: Connected # 表示 成功

    # (3) 使用nohup后台运行agent
    nohup java -jar agent.jar -jnlpUrl http://192.168.12.107:30001/computer/node-1/jenkins-agent.jnlp -secret 52e2e9b37cd4a36310d39984ff461b48ef95b40101a4d34875b7248a7622bd92 -workDir "/home/jenkins"
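# (4) (补充示意) 若希望 Agent 开机自启并在异常退出后自动拉起,可参考如下 systemd 服务单元
#     注意: 安装路径 /opt/jenkins-agent 等均为假设示例,请按实际环境调整
sudo tee /etc/systemd/system/jenkins-agent.service <<'EOF'
[Unit]
Description=Jenkins JNLP Agent
After=network-online.target

[Service]
User=jenkins
WorkingDirectory=/opt/jenkins-agent
ExecStart=/usr/bin/java -jar /opt/jenkins-agent/agent.jar -jnlpUrl http://192.168.12.107:30001/computer/node-1/jenkins-agent.jnlp -secret @/opt/jenkins-agent/secret-file -workDir "/home/jenkins"
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload && sudo systemctl enable --now jenkins-agent.service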
  • Step 5.采用Docker方式运行agent.jar连接到Jenkins的Server节点, 此种方式非常简单,只需拉取镜像并启动容器即可;
    参考链接: https://hub.docker.com/r/jenkins/inbound-agent

    # (1) 拉取镜像
    docker pull jenkins/inbound-agent:alpine

    # (2) 获取 Jenkins 的 SVC 地址(由于我的Master是采用K8s搭建的)
    ~$ kubectl get svc -n devops
    # NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    # jenkins NodePort 10.99.55.239 <none> 8080:30001/TCP,50000:32001/TCP 19d


    # (3) 启动 agent 镜像(注意参数)
    # docker run --init jenkins/inbound-agent -url http://jenkins-server:port -workDir=/home/jenkins/agent <secret> <agent name>
    ~$ docker run -d --name jenkins-agent --init jenkins/inbound-agent:alpine -url http://10.99.55.239:8080 -workDir=/home/jenkins/agent 52e2e9b37cd4a36310d39984ff461b48ef95b40101a4d34875b7248a7622bd92 node-1
    c7a00c89f92f16c07d35e572d2ba9da3c17c47a2746e46e3d3225bff418a78f3

    # (4) 查看是否启动成功
    ~$ docker logs c7a00c89f92f16c07d35e572d2ba9da3c17c47a2746e46e3d3225bff418a78f3
    # Feb 03, 2021 11:57:39 AM hudson.remoting.jnlp.Main createEngine
    # INFO: Setting up agent: node-1
    # Feb 03, 2021 11:57:39 AM hudson.remoting.jnlp.Main$CuiListener <init>
    # INFO: Jenkins agent is running in headless mode.
    # Feb 03, 2021 11:57:40 AM hudson.remoting.Engine startEngine
    # INFO: Using Remoting version: 4.6
    # Feb 03, 2021 11:57:40 AM org.jenkinsci.remoting.engine.WorkDirManager initializeWorkDir
    # INFO: Using /home/jenkins/agent/remoting as a remoting work directory
    # Feb 03, 2021 11:57:40 AM org.jenkinsci.remoting.engine.WorkDirManager setupLogging
    # INFO: Both error and output logs will be printed to /home/jenkins/agent/remoting
    # Feb 03, 2021 11:57:40 AM hudson.remoting.jnlp.Main$CuiListener status
    # INFO: Locating server among [http://10.99.55.239:8080]
    # Feb 03, 2021 11:57:40 AM org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver resolve
    # INFO: Remoting server accepts the following protocols: [JNLP4-connect, Ping]
    # Feb 03, 2021 11:57:40 AM hudson.remoting.jnlp.Main$CuiListener status
    # INFO: Agent discovery successful
    # Agent address: 10.99.55.239
    # Agent port: 50000
    # Identity: 8e:c7:1e:e1:39:ee:f4:2a:43:f6:aa:d9:0e:b7:b6:62
    # Feb 03, 2021 11:57:40 AM hudson.remoting.jnlp.Main$CuiListener status
    # INFO: Handshaking
    # Feb 03, 2021 11:57:40 AM hudson.remoting.jnlp.Main$CuiListener status
    # INFO: Connecting to 10.99.55.239:50000
    # Feb 03, 2021 11:57:40 AM hudson.remoting.jnlp.Main$CuiListener status
    # INFO: Trying protocol: JNLP4-connect
    # Feb 03, 2021 11:58:00 AM hudson.remoting.jnlp.Main$CuiListener status
    # INFO: Remote identity confirmed: 8e:c7:1e:e1:39:ee:f4:2a:43:f6:aa:d9:0e:b7:b6:62
    # Feb 03, 2021 11:58:02 AM hudson.remoting.jnlp.Main$CuiListener status
    # INFO: Connected # 表示连接成功
WeiyiGeek.jenkins-agent-docker


  • Step 6.采用kubernetes集群以静态的方式部署agent,我们首先编写一个部署文件,并且定义好名称空间、镜像、agent配置信息。同样此处我们创建一个node-2的agent节点;
# (0) 在命令行中启动节点参数实例 (注意K8s下面的环境变量)
java -jar agent.jar -jnlpUrl http://192.168.12.107:30001/computer/node-2/jenkins-agent.jnlp -secret a06d5adaef69cbaf34e9296d8340d464163ab9f5bbf6b465a290237dc38d45db -workDir "/home/jenkins/agent"

# (1) Jenkins-agent-Deployment 资源清单
cat > Jenkins-agent-Deployment.yaml <<'END'
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: jenkins-agent
  name: jenkins-agent-node2
  namespace: devops
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: jenkins-agent-node
  template:
    metadata:
      labels:
        k8s-app: jenkins-agent-node
      namespace: devops
      name: jenkins-agent-node2
    spec:
      containers:
      - name: jenkins-agent
        image: jenkins/inbound-agent:alpine
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: 1000m
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 512Mi
        env:
        - name: JENKINS_URL
          value: http://10.99.55.239:8080
        - name: JENKINS_SECRET
          value: a06d5adaef69cbaf34e9296d8340d464163ab9f5bbf6b465a290237dc38d45db
        - name: JENKINS_AGENT_NAME
          value: node-2
        - name: JENKINS_AGENT_WORKDIR
          value: /home/jenkins/workspace
END

# JENKINS_URL: url for the Jenkins server, can be used as a replacement to -url option, or to set alternate jenkins URL
# JENKINS_TUNNEL: (HOST:PORT) connect to this agent host and port instead of Jenkins server, assuming this one do route TCP traffic to Jenkins master
# JENKINS_SECRET: agent secret, if not set as an argument
# JENKINS_AGENT_NAME: agent name, if not set as an argument
# JENKINS_AGENT_WORKDIR: agent work directory, if not set by optional parameter -workDir
# JENKINS_WEB_SOCKET: true if the connection should be made via WebSocket rather than TCP


# (2)通过kubectl工具在k8s集群中部署`jenkins agent deployment`并查验状态
kubectl create -f Jenkins-agent-Deployment.yaml
# deployment.apps/jenkins-agent-node2 created

~/k8s/jenkins$ kubectl get pod -n devops -o wide | grep "jenkins-agent"
# jenkins-agent-node2-574bb65cb8-5kh7g 1/1 Running 0 29s 172.16.182.237

~/k8s/jenkins$ kubectl logs -f jenkins-agent-node2-574bb65cb8-5kh7g -n devops

# (3) 在传入JENKINS_URL时建议尽量采用域名的方式;
~/k8s/jenkins$ kubectl exec -it jenkins-agent-node2-574bb65cb8-5kh7g -n devops bash
# bash-5.0$ ping jenkins.devops
# PING jenkins.devops (10.99.55.239): 56 data bytes
# bash-5.0$ ping jenkins.devops.svc.cluster.local
# PING jenkins.devops.svc.cluster.local (10.99.55.239): 56 data bytes
WeiyiGeek.kubernetes-jenkins-agent



Launch agents via SSH 启动 Agent 方式

描述: 当前 Jenkins 2.277 版本默认不支持 SSH 启动 Agent 方式,需要安装 SSH 相关插件(即前文提到的 SSH Slaves / SSH Build Agents 插件;若还需要在构建中通过 ssh-agent 使用 SSH 凭据,可再安装下面的 SSH Agent 插件);

SSH Agent - This plugin allows you to provide SSH credentials to builds via a ssh-agent in Jenkins


  • Step 1.首先创建一个凭据存储服务器认证信息。

  • Step 2.之后创建一个新的节点添加以下配置。配置ssh的主机和认证信息最后保存(agent配置完成)。

WeiyiGeek.ssh-agent


  • Step 3.保存后创建一个测试的Pipeline项目
pipeline {
    // 此处agent指令是必须的
    agent none
    stages {
        // docker 构建的
        stage ('node-1'){
            agent {
                node { label "docker-jenkins-slave"}
            }
            steps {
                echo "----- node-1 ------"
                sh "hostname && ifconfig"
                sh "env"
                echo "----- node-1 end------"
            }
        }
        // k8s 集群中构建的
        stage ('node-2'){
            agent {
                node { label "k8s-node-2"}
            }
            steps {
                echo "----- node-2 ------"
                sh "hostname && ifconfig"
                sh "env"
                echo "----- node-2 end------"
            }
        }
    }
}

输出结果:

Started by user Jenkins 管理员
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] stage

[Pipeline] { (node-1)
Running on node-1 in /home/jenkins/workspace/pineline-use-node
[Pipeline] echo
----- node-1 ------
[Pipeline] sh
+ hostname
c7a00c89f92f
+ ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:02
inet addr:172.17.0.2 Bcast:172.17.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:10783 errors:0 dropped:0 overruns:0 frame:0
TX packets:14915 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:38664627 (36.8 MiB) TX bytes:5669411 (5.4 MiB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

[Pipeline] sh
# + env
# JENKINS_HOME=/var/jenkins_home
# LANGUAGE=en_US:en
# RUN_CHANGES_DISPLAY_URL=http://192.168.12.107:30001/job/pineline-use-node/3/display/redirect?page=changes
# HOSTNAME=c7a00c89f92f
# NODE_LABELS=docker-jenkins-slave node-1
# HUDSON_URL=http://192.168.12.107:30001/
# SHLVL=3
# HOME=/home/jenkins
# BUILD_URL=http://192.168.12.107:30001/job/pineline-use-node/3/
# HUDSON_COOKIE=de8bbafb-13fd-42d5-99a4-7ffaebf9a3e2
# JENKINS_SERVER_COOKIE=durable-8dd62800688ecbc8d9c9e78a15c49a2d
# WORKSPACE=/home/jenkins/workspace/pineline-use-node
# JAVA_VERSION=jdk8u272-b10
# NODE_NAME=node-1
# RUN_ARTIFACTS_DISPLAY_URL=http://192.168.12.107:30001/job/pineline-use-node/3/display/redirect?page=artifacts
# STAGE_NAME=node-1
# EXECUTOR_NUMBER=0
# RUN_TESTS_DISPLAY_URL=http://192.168.12.107:30001/job/pineline-use-node/3/display/redirect?page=tests
# BUILD_DISPLAY_NAME=#3
# HUDSON_HOME=/var/jenkins_home
# AGENT_WORKDIR=/home/jenkins/agent
# JOB_BASE_NAME=pineline-use-node
# PATH=/opt/java/openjdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# BUILD_ID=3
# BUILD_TAG=jenkins-pineline-use-node-3
# LANG=en_US.UTF-8
# JENKINS_URL=http://192.168.12.107:30001/
# JOB_URL=http://192.168.12.107:30001/job/pineline-use-node/
# BUILD_NUMBER=3
# JENKINS_NODE_COOKIE=a822e652-5870-44ef-b671-d6c1f2afaffb
# RUN_DISPLAY_URL=http://192.168.12.107:30001/job/pineline-use-node/3/display/redirect
# HUDSON_SERVER_COOKIE=04b411a999365c6a
# JOB_DISPLAY_URL=http://192.168.12.107:30001/job/pineline-use-node/display/redirect
# JOB_NAME=pineline-use-node
# LC_ALL=en_US.UTF-8
# JAVA_HOME=/opt/java/openjdk
# PWD=/home/jenkins/workspace/pineline-use-node
# WORKSPACE_TMP=/home/jenkins/workspace/pineline-use-node@tmp
# GITLAB_OBJECT_KIND=none
[Pipeline] echo
----- node-1 end------


[Pipeline] stage
[Pipeline] { (node-2)
Running on node-2 in /home/jenkins/agent/workspace/pineline-use-node
[Pipeline] {
[Pipeline] echo
----- node-2 ------
[Pipeline] sh
+ hostname
jenkins-agent-node2-574bb65cb8-5kh7g
+ ifconfig
eth0 Link encap:Ethernet HWaddr 5E:A7:08:EA:68:0F
inet addr:172.16.182.237 Bcast:172.16.182.237 Mask:255.255.255.255
UP BROADCAST RUNNING MULTICAST MTU:1480 Metric:1
RX packets:8149 errors:0 dropped:0 overruns:0 frame:0
TX packets:11311 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:30953238 (29.5 MiB) TX bytes:4382625 (4.1 MiB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

[Pipeline] sh
# + env
# JENKINS_HOME=/var/jenkins_home
# JENKINS_SECRET=a06d5adaef69cbaf34e9296d8340d464163ab9f5bbf6b465a290237dc38d45db
# agentType=k8s
# KUBERNETES_PORT=tcp://10.96.0.1:443
# KUBERNETES_SERVICE_PORT=443
# LANGUAGE=en_US:en
# RUN_CHANGES_DISPLAY_URL=http://192.168.12.107:30001/job/pineline-use-node/3/display/redirect?page=changes
# SONARQUBE_PORT_9000_TCP_ADDR=10.100.233.217
# HOSTNAME=jenkins-agent-node2-574bb65cb8-5kh7g
# SHLVL=3
# NODE_LABELS=k8s-node-2 node-2
# HUDSON_URL=http://192.168.12.107:30001/
# HOME=/home/jenkins
# SONARQUBE_PORT_9000_TCP_PORT=9000
# BUILD_URL=http://192.168.12.107:30001/job/pineline-use-node/3/
# SONARQUBE_PORT_9000_TCP_PROTO=tcp
# HUDSON_COOKIE=c088e600-327b-4e78-a80e-4ba7bd7254f3
# JENKINS_SERVER_COOKIE=durable-4d6313311ef99675e2bfd51284774a87
# JENKINS_AGENT_WORKDIR=/home/jenkins/workspace
# SONARQUBE_SERVICE_HOST=10.100.233.217
# WORKSPACE=/home/jenkins/agent/workspace/pineline-use-node
# JAVA_VERSION=jdk8u272-b10
# NODE_NAME=node-2
# SONARQUBE_PORT_9000_TCP=tcp://10.100.233.217:9000
# RUN_ARTIFACTS_DISPLAY_URL=http://192.168.12.107:30001/job/pineline-use-node/3/display/redirect?page=artifacts
# STAGE_NAME=node-2
# EXECUTOR_NUMBER=1
# SONARQUBE_SERVICE_PORT=9000
# SONARQUBE_PORT=tcp://10.100.233.217:9000
# RUN_TESTS_DISPLAY_URL=http://192.168.12.107:30001/job/pineline-use-node/3/display/redirect?page=tests
# BUILD_DISPLAY_NAME=#3
# JENKINS_PORT_50000_TCP_ADDR=10.99.55.239
# KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
# HUDSON_HOME=/var/jenkins_home
# AGENT_WORKDIR=/home/jenkins/agent
# JOB_BASE_NAME=pineline-use-node
# PATH=/opt/java/openjdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# JENKINS_SERVICE_HOST=10.99.55.239
# SONARQUBE_SERVICE_PORT_SONARQUBE=9000
# JENKINS_PORT_8080_TCP_ADDR=10.99.55.239
# BUILD_ID=3
# JENKINS_PORT_50000_TCP_PORT=50000
# KUBERNETES_PORT_443_TCP_PORT=443
# JENKINS_SERVICE_PORT_AGENT=50000
# JENKINS_PORT_50000_TCP_PROTO=tcp
# BUILD_TAG=jenkins-pineline-use-node-3
# KUBERNETES_PORT_443_TCP_PROTO=tcp
# JENKINS_URL=http://192.168.12.107:30001/
# JENKINS_PORT_8080_TCP_PORT=8080
# LANG=en_US.UTF-8
# JOB_URL=http://192.168.12.107:30001/job/pineline-use-node/
# JENKINS_AGENT_NAME=node-2
# JENKINS_PORT_8080_TCP_PROTO=tcp
# BUILD_NUMBER=3
# JENKINS_NODE_COOKIE=243aa3e8-0de2-4297-b617-8764795f5cab
# RUN_DISPLAY_URL=http://192.168.12.107:30001/job/pineline-use-node/3/display/redirect
# JENKINS_PORT=tcp://10.99.55.239:8080
# JENKINS_SERVICE_PORT=8080
# HUDSON_SERVER_COOKIE=04b411a999365c6a
# JOB_DISPLAY_URL=http://192.168.12.107:30001/job/pineline-use-node/display/redirect
# KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
# JENKINS_PORT_50000_TCP=tcp://10.99.55.239:50000
# KUBERNETES_SERVICE_PORT_HTTPS=443
# JOB_NAME=pineline-use-node
# LC_ALL=en_US.UTF-8
# PWD=/home/jenkins/agent/workspace/pineline-use-node
# JENKINS_PORT_8080_TCP=tcp://10.99.55.239:8080
# JAVA_HOME=/opt/java/openjdk
# KUBERNETES_SERVICE_HOST=10.96.0.1
# JENKINS_SERVICE_PORT_WEB=8080
# WORKSPACE_TMP=/home/jenkins/agent/workspace/pineline-use-node@tmp
# GITLAB_OBJECT_KIND=none
[Pipeline] echo
----- node-2 end------
[Pipeline] End of Pipeline
Finished: SUCCESS

WeiyiGeek.Pipeline流水线选择



(2) 集群搭建Jenkins Master 节点

环境准备:

- Kubernetes 集群 :"v1.19.6"
- NFS : 数据持久化最常用的共享存储
- Jenkins 镜像 : jenkins/jenkins:2.277-alpine


2.1) 基础环境

NFS (Network File System) 环境

描述:它最大的功能就是可以通过网络,让不同的机器、不同的操作系统可以共享彼此的文件。我们可以利用NFS共享Jenkins运行的配置文件、Maven的仓库依赖文件等。
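如果 NFS 服务端使用 Linux 搭建,也可以参考如下示意配置(假设导出目录为 /nask8sapp、允许访问的网段为 192.168.12.0/24,请按实际环境替换):

# 以 Ubuntu 为例安装 NFS 服务端并导出共享目录
sudo apt-get install -y nfs-kernel-server
sudo mkdir -p /nask8sapp
echo "/nask8sapp 192.168.12.0/24(rw,sync,no_subtree_check,no_root_squash)" | sudo tee -a /etc/exports
sudo exportfs -ra && sudo systemctl restart nfs-kernel-server
showmount -e localhost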

# NFS 客户端安装: Master&node节点都执行
~$ ansible dtk8s -m shell -a "sudo apt-get install nfs-common rpcbind -y"

# 查看Windows 2008 R2搭建的NFS服务端
showmount -e 192.168.1.31
Export list for 192.168.1.31:
/nask8sapp (everyone)

# 安全配置 (设置指定的作用域)
基于 Unix 软件的 Portmap 的入站规则,允许 Portmap 服务的流量。[TCP 111] - 设置其作用域
NFS 服务器(NFS-UDP-In) NFS 服务器允许 NFS 通信的入站规则。[UDP 2049] - 设置其作用域
NFS 服务器(NFS-TCP-In) NFS 服务器允许 NFS 通信的入站规则。[TCP 2049] - 设置其作用域

# 临时与永久挂载
ansible dtk8s -m shell -a "sudo mkdir /nfsdisk-31" # 目录创建
sudo mount.nfs 192.168.12.31:/nask8sapp /nfsdisk-31/
ls /nfsdisk-31/
# /nfsdisk-31/1.txt

# 通过 /etc/fstab 挂载
ansible dtk8s -m shell -a 'echo "192.168.12.31:/nask8sapp /nfsdisk-31 nfs defaults 0 0"|sudo tee -a /etc/fstab'
# weiyigeek-* | CHANGED | rc=0 >>
# 192.168.12.31:/nask8sapp /nfsdisk-31 nfs defaults 0 0
# ...
ansible dtk8s -m shell -a 'sudo mount -a'
ansible dtk8s -m shell -a 'mount | grep "nfs"'
# weiyigeek-* | CHANGED | rc=0 >>
# 192.168.12.31:/nask8sapp on /nfsdisk-31 type nfs (rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.12.31,mountvers=3,mountport=1048,mountproto=udp,local_lock=none,addr=192.168.12.31)

# 至此下一步我们需要在k8s集群中进行配置 NFS client provisioner


NFS Client Provisioner 环境

注意: nfs-client-provisioner 是一个 Kubernetes 的简易 NFS 外部 provisioner,本身不提供 NFS 服务,需要现有的 NFS 服务器提供存储。
nfs-client-provisioner 构建的yaml文件:

cat > nfs-client-provisioner.yaml <<'EOF'
# NFS 驱动编排的资源清单
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy: # 策略
    type: Recreate # 再生(循环使用)
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner # 服务帐户名称
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
        volumeMounts:
        - name: timezone
          mountPath: /etc/localtime # 时区设置
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME # 与 StorageClass 对象中定义的 provisioner 键需要保持一致
          value: fuseim.pri/ifs
        - name: NFS_SERVER
          value: 192.168.12.31
        - name: NFS_PATH
          value: /nask8sapp
      volumes:
      - name: timezone # 时区定义
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      - name: nfs-client-root # 存储卷
        nfs:
          server: 192.168.12.31
          path: /nask8sapp
---
# Storageclass 部署文件
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage # 重要 StorageClass Name 绑定
  annotations:
    storageclass.kubernetes.io/is-default-class: "true" # 设置其为默认存储后端
provisioner: fuseim.pri/ifs # 或选择另一个名称,必须与 NFS 驱动编排部署的 env PROVISIONER_NAME 匹配
parameters:
  archiveOnDelete: "false" # 删除pvc后,后端存储上的pv也自动删除
EOF


# nfs-client-provisioner - rbac授权资源清单
cat > nfs-client-rbac.yaml <<'EOF'
kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
EOF

执行 & 运行pod

~/k8s/nfs-client-provisioner$ ls
# nfs-client-provisioner.yaml nfs-client-rbac.yaml

~/k8s/nfs-client-provisioner$ kubectl create -f .
# deployment.apps/nfs-client-provisioner created
# storageclass.storage.k8s.io/managed-nfs-storage created
# serviceaccount/nfs-client-provisioner created
# clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
# clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
# role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
# rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created

$ kubectl get storageclasses.storage.k8s.io
# NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
# managed-nfs-storage (default) fuseim.pri/ifs Delete Immediate false 53s

$ kubectl get pod
# NAME READY STATUS RESTARTS AGE
# nfs-client-provisioner-57946d456-b4s7l 1/1 Running 0 22s
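下面是一个验证动态供给是否生效的参考做法(示意,其中 test-claim 名称为假设,可自行命名):

cat > test-claim.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF
kubectl apply -f test-claim.yaml
kubectl get pvc test-claim    # STATUS 为 Bound 即表示 NFS 动态供给正常
kubectl delete -f test-claim.yaml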


2.2) 搭建流程

资源清单
  • Step 1.创建PV、PVC,为Jenkins提供数据持久化
cat > jenkins-PVC.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: devops
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pvc
  namespace: devops
  annotations: # 空间标注
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
EOF


  • Step 2.创建角色授权
cat > jenkins-role.yaml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-sa
  namespace: devops
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jenkins-cr
rules:
  - apiGroups: ["extensions", "apps"]
    resources: ["deployments"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get","list","watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-crd
roleRef:
  kind: ClusterRole
  name: jenkins-cr
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: jenkins-sa
    namespace: devops
EOF


  • Step 3.在Kubernetes中Deployment部署Jenkins以及Service资源创建
cat > jenkins-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: devops
spec:
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccount: jenkins-sa
      containers:
      - name: jenkins
        image: jenkins/jenkins:2.275-alpine
        imagePullPolicy: IfNotPresent
        env:
        - name: JAVA_OPTS
          value: -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85 -Duser.timezone=Asia/Shanghai
        ports:
        - containerPort: 8080
          name: web
          protocol: TCP
        - containerPort: 50000
          name: agent
          protocol: TCP
        resources:
          limits:
            cpu: 1000m
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 512Mi
        livenessProbe:
          httpGet:
            path: /login
            port: 8080
          initialDelaySeconds: 60
          timeoutSeconds: 5
          failureThreshold: 12
        readinessProbe:
          httpGet:
            path: /login
            port: 8080
          initialDelaySeconds: 60
          timeoutSeconds: 5
          failureThreshold: 12
        volumeMounts:
        - name: jenkinshome
          mountPath: /var/jenkins_home
      securityContext:
        fsGroup: 1000
      volumes:
      - name: jenkinshome
        persistentVolumeClaim:
          claimName: jenkins-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: devops
  labels:
    app: jenkins
spec:
  selector:
    app: jenkins
  type: NodePort
  ports:
    - name: web
      port: 8080
      targetPort: web
      nodePort: 30001
    - name: agent
      port: 50000
      targetPort: agent
EOF


创建查看
  • Step 4.采用kubectl命令创建上面的资源清单
# (1) PVC 持久卷创建
$ kubectl create -f jenkins-PVC.yaml
# namespace/devops created
# persistentvolumeclaim/jenkins-pvc created

# (2) Jenkins 在集群中角色创建绑定
kubectl create -f jenkins-role.yaml
# serviceaccount/jenkins-sa created
# clusterrole.rbac.authorization.k8s.io/jenkins-cr created
# clusterrolebinding.rbac.authorization.k8s.io/jenkins-crd created

# (3) 部署 Jenkins deployment 和 SVC
kubectl create -f jenkins-deployment.yaml
# deployment.apps/jenkins created
# service/jenkins created


  • Step 5.查看创建的PVC、POD以及SVC
    ~$ kubectl get pvc,pod,svc -n devops
    # NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
    # persistentvolumeclaim/jenkins-pvc Bound pvc-3cd916df-91cb-470d-b9ef-e9b4f115223d 5Gi RWX managed-nfs-storage 23m

    # NAME READY STATUS RESTARTS AGE
    # pod/jenkins-689775956-nph9z 1/1 Running 0 15m

    # NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    # service/jenkins NodePort 10.104.214.1 <none> 8080:30001/TCP,50000:30465/TCP 15m
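    # (补充示意) 若 NodePort 不便访问,也可以临时使用 kubectl port-forward 将服务转发到本地进行调试
    kubectl -n devops port-forward svc/jenkins 8080:8080 50000:50000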


  • Step 6.我们进入Pod容器内部进行查看
    kubectl exec -n devops -it jenkins-7db7878f8f-78tmk bash
    $ ps aux # 可看到有两个进程正在运行
    # PID USER TIME COMMAND
    # 1 jenkins 0:39 /sbin/tini -- /usr/local/bin/jenkins.sh # 主运行文件
    # 6 jenkins 57:00 java -Duser.home=/var/jenkins_home -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85 -Duser.timezone=Asia/Shanghai -Djenkins.model.Jenkins.slaveAgentPort=50000 -jar /usr/share/jenkins/jenkins.war # 运行 jenkins.war 的命令

查看容器入口执行文件: cat /usr/local/bin/jenkins.sh

#! /bin/bash -e
: "${JENKINS_WAR:="/usr/share/jenkins/jenkins.war"}"
: "${JENKINS_HOME:="/var/jenkins_home"}"
: "${COPY_REFERENCE_FILE_LOG:="${JENKINS_HOME}/copy_reference_file.log"}"
: "${REF:="/usr/share/jenkins/ref"}"
touch "${COPY_REFERENCE_FILE_LOG}" || { echo "Can not write to ${COPY_REFERENCE_FILE_LOG}. Wrong volume permissions?"; exit 1; }
echo "--- Copying files at $(date)" >> "$COPY_REFERENCE_FILE_LOG"
find "${REF}" \( -type f -o -type l \) -exec bash -c '. /usr/local/bin/jenkins-support; for arg; do copy_reference_file "$arg"; done' _ {} +

# if `docker run` first argument start with `--` the user is passing jenkins launcher arguments
if [[ $# -lt 1 ]] || [[ "$1" == "--"* ]]; then

  # read JAVA_OPTS and JENKINS_OPTS into arrays to avoid need for eval (and associated vulnerabilities)
  java_opts_array=()
  while IFS= read -r -d '' item; do
    java_opts_array+=( "$item" )
  done < <([[ $JAVA_OPTS ]] && xargs printf '%s\0' <<<"$JAVA_OPTS")

  readonly agent_port_property='jenkins.model.Jenkins.slaveAgentPort'
  if [ -n "${JENKINS_SLAVE_AGENT_PORT:-}" ] && [[ "${JAVA_OPTS:-}" != *"${agent_port_property}"* ]]; then
    java_opts_array+=( "-D${agent_port_property}=${JENKINS_SLAVE_AGENT_PORT}" )
  fi

  if [[ "$DEBUG" ]] ; then
    java_opts_array+=( \
      '-Xdebug' \
      '-Xrunjdwp:server=y,transport=dt_socket,address=5005,suspend=y' \
    )
  fi

  jenkins_opts_array=( )
  while IFS= read -r -d '' item; do
    jenkins_opts_array+=( "$item" )
  done < <([[ $JENKINS_OPTS ]] && xargs printf '%s\0' <<<"$JENKINS_OPTS")

  exec java -Duser.home="$JENKINS_HOME" "${java_opts_array[@]}" -jar ${JENKINS_WAR} "${jenkins_opts_array[@]}" "$@"
fi

# As argument is not jenkins, assume user want to run his own process, for example a `bash` shell to explore this image
exec "$@"

Tips : /usr/local/bin/jenkins.sh 以及 /usr/share/jenkins/ 目录可以帮助我们进行 Jenkins 手动升级;


服务访问
  • Step 7.因为我们的 Service 采用 NodePort 类型,其端口为 30001,我们直接在浏览器用这个端口访问 Jenkins UI。下面是精简操作,如果不理解的童鞋,需要按照第一篇文章的操作进行初始化;
    # Jenkins 初始化密码获取的两种方式;
    # 方式1.认证
    $ kubectl logs -n devops pod/jenkins-689775956-nph9z | grep -A 2 "following password"
    Please use the following password to proceed to installation:

    c45f558fa237472f9f8f954ceb3a323e

    # 方式2.动态卷Jenkins持久化目录
    # 容器中的目录 : /var/jenkins_home/secrets/initialAdminPassword - pvc-3cd916df-91cb-470d-b9ef-e9b4f115223d
    $ cat /nfsdisk-31/devops-jenkins-pvc-pvc-3cd916df-91cb-470d-b9ef-e9b4f115223d/secrets/initialAdminPassword
    c45f558fa237472f9f8f954ceb3a323e
WeiyiGeek.Jenkins Init



  • Step 8.按照《Jenkins入门学习之持续化集成与部署》文章的操作进行初始化, 当然您也可以选择自定义插件安装 -> Languages (两项插件) 先进行汉化
WeiyiGeek.Languages-Chinese



  • Step 9.Create First Admin User -> Instance Configuration (Jenkins URL) -> Save -> Restart 安装成功
    Jenkins URL: http://jenkins.weiyigeek.com.cn:30001/  # 注意需要添加解析
    Jenkins URL: http://192.168.10.10:30001
    WeiyiGeek.



  • Step 10.插件镜像仓库设置加快拉取进度 Dashboard -> 插件管理 -> 高级 -> 升级站点;

    # 升级站点
    https://mirrors.tuna.tsinghua.edu.cn/jenkins/updates/update-center.json

    # 用户定义的时区 -> Time Zone -> Asia/Beijing
  • Step 11.最后进行常用插件的安装,可以参考最后一章


(3) 集群动态创建 Agent 节点 - Slave 节点

描述: 前面我们说过Jenkins的分布式架构(Master-Slave),其中 master 主要负责任务的调度,而 slave 执行构建任务,并且当 Job 构建完成后 Pod 就会自动销毁,所以其可以很好地解决性能与资源占用问题。

步骤说明:

  • Step 1.所以在 Jenkins 服务安装好 Kubernetes 插件 并配置好连接 Kubernetes 的信息,就可以在 Kubernetes 集群中动态创建 Agent 节点了。其中 Jenkins Master节点可以直接安装在宿主机中,也可以部署在 Kubernetes 集群中。
    该插件为每个要启动的 Jenkins Agent 节点创建一个 Kubernetes Pod 对象,并在构建完成后销毁 Pod 。
    • Agent 节点是使用JNLP启动的,是通过 Agent 节点镜像自动连接 Jenkins Master 节点。使用这种连接方式,需要对 Agent 节点设置一些环境变量:
      • 1.JENKINS_URL:Jenkins Web界面URL
      • 2.JENKINS_SECRET:用于认证的密钥
      • 3.JENKINS_AGENT_NAME:Jenkins代理的名称
      • 4.JENKINS_NAME:Jenkins代理的名称(不建议使用。仅在此处是为了向后兼容)

Tips : 这些环境变量会在 Pod 创建配置中设定好,用于 Agent 节点启动时连接 Master 节点。


  • Step 2.使用 Kubernetes 插件时,最先要配置的是连接 Kubernetes 集群的连接信息以及 Jenkins 服务 Master 节点的连接地址(其他连接信息自动生成,不需要配置)。
    • 1.Jenkins 服务使用 Kubernetes 插件连接 Kubernetes 集群,并动态创建 Agent 节点。连接 Kubernetes 集群需要配置的 Kubernetes 连接信息包括:
    • 2.Kubernetes 集群名称
    • 3.Kubernetes 集群Api-server 的连接地址
    • 4.Kubernetes 集群服务证书:Kubernetes 集群节点间通信都是使用证书双向认证加密的,一般所有的证书都使用同一个 CA 证书做证书申请签发;这里的服务器证书就是这个CA 证书。因为此时 Jenkins Master 节点也就是一个 Kubernetes Agent 节点,也需要信任 CA 证书,信任CA 证书签发的其他节点上的证书。
    • 5.Kubernetes 命名空间:应该是创建的 Agent 节点在哪个命名空间中运行。
    • 6.凭证(Kubernetes 认证):支持的凭证包括:用户名密码、秘密文件(kubeconfig文件)、秘密文本(基于令牌的身份验证)(OpenShift)、来自私钥的Google服务帐户(GKE身份验证)、X.509客户端证书。

具体的使用配置将在下面进行详述讲解。
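下面给出几条获取上述连接信息的参考命令(示意,假设使用 kubeadm 默认生成的 kubeconfig):

# 查看 Api-server 连接地址
kubectl cluster-info | head -n 1
# 导出集群 CA 证书内容(即 "Kubernetes 服务证书 key" 处需要粘贴的内容)
kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d
# 确认 Agent 所在的命名空间已存在
kubectl get ns devops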


内置 Jenkins Master 接入内部 K8s 集群

配置流程:

  • Step 1.安装kubernetes插件: 点击 Manage Jenkins -> Manage Plugins -> Available -> Kubernetes 勾选安装即可。
    # 安装 kubernetes plugins
    Kubernetes Client API Plugin - Kubernetes客户端API插件,供其他Jenkins插件使用 - 4.11.1
    Kubernetes Credentials Plugin - Kubernetes证书的公共类 - 0.8.0
    Kubernetes plugin - 这个插件集成了Jenkins与Kubernetes - 1.28.7


  • Step 2.在Jenkins中配置k8s信息:安装完毕后点击 Manage Jenkins —> Configure System —> (拖到最下方) Add a new cloud —> 选择 Kubernetes,然后填写 Kubernetes 和 Jenkins 配置信息。
    # kubernetes 集群名称 Name:
    kubernetes
    # Kubernetes 地址
    https://kubernetes.default.svc.cluster.local
    # Kubernetes 服务证书 key 内容
    ca.crt
    # Kubernetes 命名空间
    devops
    # 添加凭据可选
    # Jenkins 地址 : The URL of the Jenkins Master server.
    # 格式 : 服务名.namespace.svc.cluster.local:8080
    http://jenkins.devops.svc.cluster.local:8080
    # 注意: Jenkins通道不需要添加slave它会自动识别(网上博客说加上之后可能导致Pod反复重启-应该在新版本中不会存在)
    # Jenkins 通道
    jenkins.devops.svc.cluster.local:50000
    # Pod Labels
    Key: app
    Value: k8s
    # 其它值默认即可(如需配置相应即可)

Tips: 设置JNLP访问协议,打开Jenkins/Configure Global Security找到 Agents, 设置 Port 为 指定端口50000(对于集群搭建Jenkins-Master想接入其它VM物理机上的Agent节点), Agent protocols 选Inbound TCP Agent Protocol/4 (TLS encryption)保存;


  • Step 3.点击Test Connection,如果出现 Connected to Kubernetes v1.19.6 的提示信息证明 Jenkins 已经可以和 Kubernetes 系统正常通信了(CSDN - 滞后性太严重了,还是得看官网)
    Tips : 注意如果这里 Connection 失败的话,很有可能是权限问题,这里就需要把我们创建的 jenkins 的 serviceAccount 对应的 secret 添加到这里的 Credentials 里面。
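    下面是获取该 ServiceAccount 令牌的参考命令(示意,假设 ServiceAccount 为 jenkins-sa、命名空间为 devops,取出的 token 可以作为 Secret text 凭据添加到 Jenkins):

    SECRET_NAME=$(kubectl -n devops get sa jenkins-sa -o jsonpath='{.secrets[0].name}')
    kubectl -n devops get secret ${SECRET_NAME} -o jsonpath='{.data.token}' | base64 -d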
WeiyiGeek.JenkinsConnectionK8s



独立Jenkins Master节点接入外部K8s集群

  • Step 1.创建 Kubernetes Namespace 与 Service Account(可参考前面搭建)
    # 在 Kubernetes 上创建 devops 命名空间,用于 Jenkins 使用
    kubectl create namespace devops

    # 在Kubernetes上为Jenkins构建创建有Cluster Admin权限的Service Account jenkins:
    kubectl create clusterrolebinding jenkins --clusterrole cluster-admin --serviceaccount=devops:jenkins

Step 2.生成调度凭证即 Kubernetes 的 server certificate key 和 Client P12 Certificate File,其作用是将 P12 Certificate File 提供给 Jenkins Master 去调用 Kubernetes。

# ~/.kube/config 运行以下命令分别生成生成 ca.crt, client.crt, client.key
$ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ********ORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://weiyigeek-lb-vip.k8s:16443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR........VRFMLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNS........BQUklWQVRFIEtFWS0tLS0tCg==


# (1) 复制 certificate-authority-data 的内容运行以下命令生成 `ca.crt` - ca证书
echo "<certificate-authority-data>" | base64 -d > ca.crt
echo "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1ERXhNVEEwTlRRMU0xb1hEVE14TURFd09UQTBOVFExTTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDdTCmNReTNnZDRoK0dHNitKcC8wcmloVGhkd096ZUsxZEFkK1oyK016ZkhvdFVMd2xva2dtd0h0S2JyTm9RdVNFOTMKdXFHbnpBeDArWmduRkx4cVR1bHk4U1A0eXhJTW9Fb0oxKytXNDl0ZklQa0NHOE50aVlmaG5sZ1VIU2haalR2TwpVSzF0Z2ZmSzR1UGlyazk1QWFYWnArMDloQmZDVnR4aFBjbzBxSDZlUVJUZk5xdVU1cHg3TnQzVlo4Wm5MQVBICkxKb0JCS081ZjJXd1l0aXNtZ0FxSkZCWWltQk84R1Z4azdEcjQwVGRRVWZ0NWNmZmFERW1tMlByb0l5WWpobXQKbDIrYVNJc3pGRFBRbDZpMVFnNVJKbzJ2aTgzTDltQ1F5Z3RWVEQ3QUwvSFkralNHbDgvbTVISlBvSUJWY0ZoVApUTi9DQmYwMWIzd1lqNGhVeFVzQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZIZHpBb2lGNytHeVpGN1dDZUFQbndNRjZQNk5NQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFBK2pjL1VGTkZNUUp5a1pEYUV0NDQ2MWg4cXQyOFVjYVYzelU0NUlyY2psaXU1T0FJZAoya2RYdXFtSCtncUJmMktDY1d0VGM4VU83ak9Kbm8yTWRpZVpJNG5od0crWG5mOWpjcFhabE5Ma1RjazdqdS9CCmk5UUUvWVdJS2JWOW9nQmlwTXR1VmdLbW4rYmM5b0RlTVk4dWRnN2czVUE1TVlla1ZqOG1FYXB3UmEyZk9udmcKVVI4Znd4Q2xyRXdtS2tqVlhKL0hSaWU1NFcvWkkzT1l6dE1zaGFRT0VNMTY2K2lQWUxSU3lscG96RTZ6WFZMZQpvVVQ0RCttdlJHRVJTcytpaVYzNzJLSENQUDByTWJHSHBncG1BNFh4NVpOdnlKL29TVTNIUjRGWVBzMEUvaG5KCktsalRFQThuM0FsTUxkWlhHTHd6ckoydEV5bENmYUFVaUFxVAotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==" | base64 -d > ca.crt

# (2) 复制 client-certificate-data的内容,运行以下命令生成 `client.crt` - 客户端证书
echo "<client-certificate-data>" | base64 -d > client.crt
echo "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURFekNDQWZ1Z0F3SUJBZ0lJV0JlN0JvSnhTWEF3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRBeE1URXdORFUwTlROYUZ3MHlNakF4TVRFd05EVTBOVFphTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTVxL1lOYTh0K2JQZk1rUGwKZnU1NGVCVGtCSHVlcURKWTFvZ0pDVFMyWFNydkVrNTB4ZDVPVVZCWXloNlo4ZFdPZXhRVmVNR3RYZzlObWMzbwprVGRlMzh2eVRVNEE5VXNDakpibDlVenNBR3NCNGRUYVN2azJPUEJWN0pZYUNYMnZQQUtvdEI2aVl5WnJmQW5FClY2UlYzYTlKS1RaSEw3bFNUc3VtZ3A2WGtTVHBaSnYrSEpHTHF0ZFYxNk8vdjhoZ0llL2s1a0lZSjR1d01VOFIKUEpTbGdhdDJhck9GSU9HYkVqTEhjd3lXSWxFcGJSbklxQW41MzhvR01WZHF3WlhoaHRvbjZDWFJya1poRzRTbQpZUnBUdy9WZDRhTnB5ZElqbU9NYmlqeFE5aVRMckp1QjhNc3h6UnZjc2dkeG9iNjNCTzNCV0lWTE9aUExaVlU4Cks5M3BXUUlEQVFBQm8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0h3WURWUjBqQkJnd0ZvQVVkM01DaUlYdjRiSmtYdFlKNEErZkF3WG8vbzB3RFFZSktvWklodmNOQVFFTApCUUFEZ2dFQkFMdWw4c3Fhc0VCRGVLclBPNUNZTk5UQjVoNXlBZ25SWi9TMDQ4S3NEOUd3UWhjeFFhdDFSdHRPClRuQ0VFbzZlU0t6ZXpMMW9ER3JiU2w3NDJkRFhiSDJzd0VHeFJjTEJ4ZTVhRVUxK2N6QklGRlBPN0xaWFlvYXUKNVRJeDVSODVVMXU3WXFUVERkUXR3LzFoUFJGajI0UnFSWkVIekl4NWpaZFY1ZTk0b0NYVENnVHhCeTNsdUZnMApSeXRML3I3eEdHdzF5cFdTZ0RUYVNQU2c3eGxNVGxOWVRFMVJ1RGh2THNpelpFb09PMjJnWDM4L0lCU1dwQ1U5CjJFUlFBbUhoYTZqOWQ4eDBub2xieTVIYXZqS3VadVVhYWMrVVNWNFNaMGw0QjJWeC9pNzZVSGRhSmgxL2lZenAKdHJEQ3NZeThTY2hFZFBxemdvS3krU2QxS3pNakZ0Zz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=" | base64 -d > client.crt

# (3) 复制 client-key-data的内容,运行以下命令生成 `client.key` - 密钥
echo "<client-key-data>" | base64 -d > client.key
echo "LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBNXEvWU5hOHQrYlBmTWtQbGZ1NTRlQlRrQkh1ZXFESlkxb2dKQ1RTMlhTcnZFazUwCnhkNU9VVkJZeWg2WjhkV09leFFWZU1HdFhnOU5tYzNva1RkZTM4dnlUVTRBOVVzQ2pKYmw5VXpzQUdzQjRkVGEKU3ZrMk9QQlY3SllhQ1gydlBBS290QjZpWXlacmZBbkVWNlJWM2E5SktUWkhMN2xTVHN1bWdwNlhrU1RwWkp2KwpISkdMcXRkVjE2Ty92OGhnSWUvazVrSVlKNHV3TVU4UlBKU2xnYXQyYXJPRklPR2JFakxIY3d5V0lsRXBiUm5JCnFBbjUzOG9HTVZkcXdaWGhodG9uNkNYUnJrWmhHNFNtWVJwVHcvVmQ0YU5weWRJam1PTWJpanhROWlUTHJKdUIKOE1zeHpSdmNzZ2R4b2I2M0JPM0JXSVZMT1pQTFpWVThLOTNwV1FJREFRQUJBb0lCQUR2UWdvNUE4dm5aQXRtRQpzMS83TTI5bmMwd2FSYVExRWNYbWxmazJHc2NEbCtPMlJoNzhLbkI1RmR5cW5KNFJFcFdsT29BS01BckFpdzJEClQzYy8xVERRTCs2TmVFQWlCL0l1T2tnbGZ0Z0k1djhJY3VXWHd0QjJ1TURVbHNHNVBoT2dXT0FEUlhYU0EzS3gKRWFEcjhudTl0SW1rRWtjMGxUdnJJQ3lrTklha2ZudkJMbVVsWm9RdFRBWVBQZVk0bmYzK3YrLzJodHJkWittWgpxRzdhQ0tBOHUwcUYxMFpoYlNXUWNVbWpOYnJRODMwa0JON04vYlFwSVNUcEh3NVNzMjdZM2VMY0lCTjRzTHZUCndISGRPbXVLOXI4YXpsWTYza1d3MlJNL1NGQlhxM0FpOUo2T1hnUzNaUndTVzk1aEFheDhtQUFINnFYZ1liaWIKS0MyTzZkRUNnWUVBOEVEZ1Z1ZFJVYis5R0ZJZTNNYnl5VWFyWU01cEZmdU80N2lNeDJuWnBXUWlCN0RVbGdrbQpPTERyOHpzL2NFbUdaZzdFT2RYcUxBQkcvUzFwbi80aHRZcXFlaXNUWGJBcDY0Ky9QMHQ5SjRSWUc0TWk3c1ROCjdtbzdZRnkwbU5xU3dEZTFiWndGeTZTczJteWtpU2ZtNFIwY041QmtMZHpHQk9BdXI3eldzeTBDZ1lFQTljNTAKZC9sbWpvZXBQQW94V2dZaEJFTzdnQ2FEbDYreWRNY3hwZ2RNNWVYdEs0d1o3b0IzOGY2Nk9yUkMyR21Hek51cQo1cTdFeWVpZzh5UHNTVHBLVndTRFA1WlRIUGZSU0JaRmlxeCt3SlVLMHFWWUtPNENVU29tSVZqeE1pOXRLenRRCmRRNVR4UnlPWUhzT1ZYMk5EVTZwMWFQTE5DVGFFeGlJbUZNRldsMENnWUJpbnQ3NERXQXVKSHprdk9ENlU1aFoKMHU2S2dIQldtN3FkODZXbVBlY2ZveWpzNjBONGl5enJYSVNlaFpXVzdEZUZNVTZQUnlZbkJiNGVNMFFHYnZVNwpaajV3ZzdvaFhTejRDenZBS2Fhb1VBVXkxZlBDKzNwbEFhcDU5ZFFVWXJTV3ZzZDB4UFVFRVFiN2FsbG9DNzhVCmJUU21BbGw5RWdFZkF6OW0yQ2R4eVFLQmdIL2p4UEZQRDY4RW9tYWNud1RKdjQvcWRibTlVQ1l4d2RYRWRlNSsKU2VJcmVQUjVWbHlpOXNVdjFWRUp6T1d3TWZTUUxpRUx1Vk9iOTNISnRQeDhtWVVnMGZEWms3QzB0MnljT2Q1bQoxU1A1NThHbFNYTXlNbjVzUVo2RUdpb1VSdWFCVytFcmJTWlhMelMva2J1bE1TaEZUMVBhZnJWSW56WGtROTJOCkJISDVBb0dBSWdrVFg1UnA2bk10UHF1aytOMy9JQVNXS2lURkxwQ0d6VXNsaGNoejBQM25oMDQxR3dsMlNveGYKNkRlYmRRK2ovbTJPcG9sai9sQm04NVB0WGx5WTdUTUh2MVFKSCs4TEFUb0NJQTRuaHdOMjl6V1hrRXhySTVTZApRU1NobU93R25nbWhZVEhEWWlTeDdPWFFhODE1dGpSc1hQRWtNVS9DOWs2UXdWK1RvRU09Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==" | base64 -d > client.key

# (4) 之后再根据前面步骤生成的 ca.crt, client.crt 和 client.key 来生成 PKCS12 格式的 cert.pfx
openssl pkcs12 -export -out cert.pfx -inkey client.key -in client.crt -certfile ca.crt
# Enter Export Password: weiyigeek
# Verifying - Enter Export Password: weiyigeek
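# (5) (补充示意) 可以使用 openssl 校验生成的 cert.pfx 是否正确(需输入上面设置的导出密码)
openssl pkcs12 -in cert.pfx -info -noout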


Step 3.在Jenkins上集成 Kubernetes 首先需要将 cert.pfx 导入到Jenkins Global Credential 凭据中(注意输入密码)

WeiyiGeek.cert.pfx



Step 4.然后按照上面的方式在Jenkins上配置Kubernetes Cloud, 不同的是需要填入以下信息,最后点击“Test Connection”按钮测试Jenkins是否可以成功连接Kubernetes。

# Kubernetes 名称 : kubernetes-external
# Kubernetes 地址 : https://192.168.12.110:16443 # 此处由于做了高可用所以不是master节点地址:6443
# Kubernetes 服务证书 key :ca.crt 的内容
# Kubernetes 命名空间 : devops
# Jenkins 地址 :http://jenkins.devops.svc.cluster.local:8080
# Jenkins 通道 :jenkins.devops.svc.cluster.local:50000
# Tips : 注意勾选从 master 传递给 agent 的环境变量

WeiyiGeek.


Tips : 在进行第五步时,建议先在各个 Work 节点拉取 jenkins/inbound-agent:4.3-4 镜像


  • Step 5.下面设置 Pod Templates,此处需添加 Pod 模板名称等相关信息
    # 添加 Pod 模板 
    Pod 模板名称: jenkins-agent-jnlp
    命名空间:devops # 一定不要写错误了否则不能创建Pod
    标签列表: jenkins-agent-jnlp # 后续Job可以指定该标签进行运行

    # 添加容器列表 (jenkins/inbound-agent:4.3-4) # 差点把我坑哭了 (测试时候可以先不加容器,默认是jnlp容器)
    # 名称 : maven
    # Docker 镜像 : maven:3.6-jdk-8-alpine
    # 工作目录 : /home/jenkins/
    # 运行的命令 : sleep 9000

    # 挂载到 Pod 代理中的卷列表
    # 选择 Host Path Volume 将maven进行持久化存储(此处路径与您setting配置有关默认是运行用户家目录中)
    # Maven 持久化目录 : /home/jenkins/.m2 # 此处应该是您各个K8s的Work节点上NFS目录;
    /nfsdisk-31/appstorage/mavenRepo # 主机目录([email protected] )NFS目录
    /home/jenkins/.m2 # Pod挂载目录
    # docker socket 目录:
    /var/run/docker.sock # 主机目录(由于各个节点都安装docker)
    /var/run/docker.sock # Pod挂载目录

    # Service Account 账户权限
    Service Account :jenkins-sa # 前面创建的ServiceAccount
WeiyiGeek.Slave Pod 模板


Tips : 警告,如果要为 JNLP 代理提供自己的 Docker 镜像,则必须将容器命名为 jnlp,以便它覆盖默认容器;否则将导致两个代理尝试同时连接到主服务器

Tips : Kubernetes 插件默认的 jnlp 容器是 "jenkins/inbound-agent:4.3-4"(name: "jnlp"),我们可以自定义容器进行覆盖,只需将容器名称更改为 jnlp 即可(一般情况下不建议更改);

Tips : 镜像的选择就是一个坑,开始使用的是 jenkins/inbound-agent:alpine,且容器名称并未设置为 jnlp 去覆盖默认的 "jenkins/inbound-agent:4.3-4" 容器,结果根本没有执行节点加入命令。此处凸显出官网的重要性,发现官网采用的是 jenkins/jnlp-slave:latest(https://hub.docker.com/r/jenkins/jnlp-slave)或者不加镜像容器信息,最后圆满解决;


  • Step 6.创建一个Job Pipeline 验证测试环境;
// # scripted Pipeline
def podTemplates = 'jenkins-slave'
podTemplate(label: podTemplates, cloud: 'kubernetes') {
    node (podTemplates) {
        stage('init') {
            echo "Hello world, Kubernetes Jenkins Slave"
            sh "hostname && ls /home/jenkins/.m2 && sleep 300"
        }
    }
}

使用 containerTemplate 指定 Maven 容器的 scripted Pipeline 示例:

// # scripted Pipeline
podTemplate(name: 'jenkins-slave', label: 'jenkins-slave', cloud: 'kubernetes', containers: [
    containerTemplate(name: 'maven', image: 'maven:3.6-jdk-8-alpine', ttyEnabled: true, command: 'sleep', args: '30')
]){
    node ('jenkins-slave') {
        stage('init') {
            container ('maven') {
                echo "Hello world, Kubernetes Jenkins Slave"
                sh 'mvn -version'
                sh "hostname && ls /home/jenkins/.m2 && pwd"
                sh "env"
                sh "pwd && ls"
            }
        }
    }
}

WeiyiGeek.result



0x03 补充说明

(1) K8s 集群中对搭建的Jenkins进行版本升级

描述: 在 K8s 中对 Jenkins 升级非常简单,只需要把 image 键中的版本值替换成新版本的镜像,使其拉取新的镜像运行即可。

Tips : 注意此处做了 PVC 持久化,如果未做持久化的童鞋需要注意数据的保存;其次是拉取的 Jenkins 镜像版本必须存在

$ grep "jenkins:2.277-alpine" jenkins-deployment.yaml
# image: jenkins/jenkins:2.277-alpine
# https://updates.jenkins.io/download/war/2.277/jenkins.war
$ kubectl apply -f jenkins-deployment.yaml
$ kubectl get pod -n devops
# NAME READY STATUS RESTARTS AGE
# jenkins-5df679b7ff-5r8d4 0/1 ContainerCreating 0 24s
# jenkins-7db7878f8f-78tmk 1/1 Running 2 18d # 原版本等待新版本的Pod启动完毕后自动销毁;
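除了修改资源清单后重新 apply,也可以直接使用 kubectl set image 完成升级并观察滚动更新状态(参考命令,镜像版本请按需替换):

kubectl -n devops set image deployment/jenkins jenkins=jenkins/jenkins:2.277-alpine
kubectl -n devops rollout status deployment/jenkins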

Tips: 对于 Jenkins 版本,在生产环境中不建议冒冒失失地进行升级,其中原因你我心知;但针对出现严重漏洞不得不修复的情况,建议构建新的环境再依次迁移(同时注意备份),确定无任何问题后再将旧环境关闭下线并进行切换

Tips : 此处为测试学习环境,看到这个提示我的强迫症就上来了;

WeiyiGeek.K8s集群中对Jenkins进行升级



(2) 移植其他Jenkins机器上的插件到Kubernetes安装的Jenkins中,然后重启 Jenkins(需要非常注意版本问题)

简单操作流程:

$ tar -zxvf jenkins_2.272.x_plugins.tar.gz -C ./jenkins/

~/jenkins$ cd var/lib/jenkins/plugins/

~/jenkins/var/lib/jenkins/plugins$ cp -a . /nfsdisk-31/devops-jenkins-pvc-pvc-3cd916df-91cb-470d-b9ef-e9b4f115223d/plugins/

$ chown -R jenkins:jenkins /nfsdisk-31/devops-jenkins-pvc-pvc-3cd916df-91cb-470d-b9ef-e9b4f115223d/plugins

Tips : 注意此处是我的PVC持久化的目录路径,与你实践的环境是不一致的。
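插件文件拷贝完成后需要重启 Jenkins 才能生效,下面是重启 Pod 的参考命令(示意,标签 app=jenkins 与前文 Deployment 的标签一致):

# 删除旧 Pod,由 Deployment 自动重建并加载拷贝进来的插件
kubectl -n devops delete pod -l app=jenkins
kubectl -n devops get pod -w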



0x04 入坑出坑

问题1.在K8s中安装Jenkins时报错从logs日志显示Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?错误

  • 错误信息: 没有权限在 jenkins 的 home 目录下面创建文件;
    kubectl -n kube-ops logs jenkins2-59764f8f65-rcvh5
    Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?
    touch: cannot touch '/var/jenkins_home/copy_reference_file.log': Permission denied
  • 错误原因: 因为默认的镜像使用的是 jenkins 这个用户,而我们通过 PVC 挂载到 nfs 服务器的共享数据目录下面却是 root 用户的,所以没有权限访问该目录
  • 问题解决: 只需要在 nfs 共享数据目录下面把我们的目录权限重新分配下即可:
    chown -R 1000 /data/k8s/jenkins2


问题2.Jenkins调用节点执行任务时java.lang.IllegalStateException: Agent is not connected after 100 seconds, status: Running报错导致不断重启

问题描述: Agent 不能 通过 jnlp 与 Jenkins 的 Master 相连接

2021-02-04 05:37:46.886+0000 [id=1985]  WARNING o.c.j.p.k.KubernetesLauncher#launch: Error in provisioning; agent=KubernetesSlave name: jenkins-agent-jnlp-b63kv, template=PodTemplate{id='83f07044-1633-468a-91c8-e05ffb924303', name='jenkins-agent-jnlp', namespace='devops', label='jenkins-agent-jnlp', serviceAccount='jenkins-sa', volumes=[HostPathVolume [mountPath=/home/jenkins/.m2, hostPath=/tmp/.m2]], containers=[ContainerTemplate{name='jnlp', image='jenkins/inbound-agent:alpine', workingDir='/home/jenkins', command='sleep', args='9999999', resourceRequestCpu='', resourceRequestMemory='', resourceRequestEphemeralStorage='', resourceLimitCpu='', resourceLimitMemory='', resourceLimitEphemeralStorage='', livenessProbe=ContainerLivenessProbe{execArgs='', timeoutSeconds=0, initialDelaySeconds=0, failureThreshold=0, periodSeconds=0, successThreshold=0}}]}
java.lang.IllegalStateException: Agent is not connected after 100 seconds, status: Running
at org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher.launch(KubernetesLauncher.java:244)
at hudson.slaves.SlaveComputer.lambda$_connect$0(SlaveComputer.java:294)
at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
at jenkins.security.ImpersonatingExecutorService$2.call(ImpersonatingExecutorService.java:80)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2021-02-04 05:37:46.886+0000 [id=1985] INFO o.c.j.p.k.KubernetesSlave#_terminate: Terminating Kubernetes instance for agent jenkins-agent-jnlp-b63kv
2021-02-04 05:37:46.900+0000 [id=1985] INFO o.c.j.p.k.KubernetesSlave#deleteSlavePod: Terminated Kubernetes instance for agent devops/jenkins-agent-jnlp-b63kv
2021-02-04 05:37:46.901+0000 [id=1985] INFO o.c.j.p.k.KubernetesSlave#_terminate: Disconnected computer jenkins-agent-jnlp-b63kv
Terminated Kubernetes instance for agent devops/jenkins-agent-jnlp-b63kv
Disconnected computer jenkins-agent-jnlp-b63kv

问题原因:

答: 这个问题困扰了我好久,总结可能出现该问题的情况,
1.指定的 Jenkins-jnlp 容器镜像的Agent不能正常连接到Master
2.指定的 Jenkins-jnlp 镜像启动参数问题。

解决办法:

答: 换镜像,在后面的章节中我会将自定义Jenkins Slave Jnlp 容器镜像的DockerFile文件进行分享。


问题3.基于 Kubernetes 部署 Jenkins 动态 slave 后,运行 Jenkins Job 会抛java.nio.channels.ClosedChannelException

异常问题:

FATAL: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
Also: hudson.remoting.Channel$CallSiteStackTrace: Remote call to JNLP4-connect connection from 10.244.8.1/10.244.8.1:55340
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1741)
at hudson.remoting.Request.call(Request.java:202)
at hudson.remoting.Channel.call(Channel.java:954)
at hudson.FilePath.act(FilePath.java:1071)
at hudson.FilePath.act(FilePath.java:1060)
at hudson.FilePath.mkdirs(FilePath.java:1245)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1202)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1819)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused: hudson.remoting.RequestAbortedException
at hudson.remoting.Request.abort(Request.java:340)
at hudson.remoting.Channel.terminate(Channel.java:1038)
at org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer.onReadClosed(ChannelApplicationLayer.java:209)
at org.jenkinsci.remoting.protocol.ApplicationLayer.onRecvClosed(ApplicationLayer.java:222)
at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecvClosed(ProtocolStack.java:832)
at org.jenkinsci.remoting.protocol.FilterLayer.onRecvClosed(FilterLayer.java:287)
at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.onRecvClosed(SSLEngineFilterLayer.java:172)
at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecvClosed(ProtocolStack.java:832)
at org.jenkinsci.remoting.protocol.NetworkLayer.onRecvClosed(NetworkLayer.java:154)
at org.jenkinsci.remoting.protocol.impl.NIONetworkLayer.ready(NIONetworkLayer.java:142)
at org.jenkinsci.remoting.protocol.IOHub$OnReady.run(IOHub.java:795)
at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Finished: FAILURE

问题原因: 抛 java.nio.channels.ClosedChannelException 异常的原因是 Jenkins Slave Pod 在 Jenkins Job 运行时突然挂掉,然后 Master Pod 无法和 Slave Pod 进行通信。那么解决方法就是找到 Slave Pod 经常挂掉的原因,经排查是 Slave Pod 的资源限制不合理,配置的 CPU 和内存太小,导致 Pod 在运行时很容易超出资源限制,然后被 k8s Kill 掉。

解决办法:打开 Jenkins 设置 Slave Pod 模版的资源限制:Jenkins->系统管理->系统设置->云->镜像->Kubernetes Pod Template->Container Template->高级,然后根据实际情况调整 CPU 和内存需求。