Installing a Kubernetes (k8s) Cluster on Ubuntu 20.04

In this article, we use three cloud hosts to build a Kubernetes cluster on Ubuntu 20.04.

Preparing the Hosts

Provision three cloud hosts from your cloud platform, selecting Ubuntu 20.04 as the operating system. Once created, the private IP addresses of the three hosts are:

Master - 192.168.11.16
Node-01 - 192.168.11.2
Node-02 - 192.168.11.6

Setting the Ubuntu 20.04 Package Source to a Domestic Mirror

See the article: Setting Up Domestic Mirrors for Ubuntu 20.04

Disabling Swap

Disable the swap partition on all three machines.
Perform the following steps on each machine.

To disable it temporarily:

sudo swapoff -a

To disable it permanently, edit the fstab file:

sudo vi /etc/fstab  # comment out the swap line
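
If you prefer not to edit the file by hand, a sed one-liner can comment out the swap entry. This is a convenience sketch; inspect /etc/fstab afterwards to confirm that only the swap line changed:

sudo sed -ri.bak 's/^([^#].*[[:space:]]swap[[:space:]])/#\1/' /etc/fstab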

To verify, run:

free -h

You should see Swap reported as 0:

              total        used        free      shared  buff/cache   available
Mem:          3.8Gi       215Mi       1.9Gi       2.0Mi       1.7Gi       3.3Gi
Swap:            0B          0B          0B

Updating Kernel Parameters

Run the following commands in order:

sudo modprobe overlay
sudo modprobe br_netfilter
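
Note that modprobe only loads the modules for the current boot. To have them loaded again automatically after a reboot, you can additionally create a modules-load.d entry (a small extra step, not part of the original sequence):

sudo tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF
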
sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system

To verify, run:

lsmod | grep br_netfilter

You should see output like the following:

br_netfilter           28672  0
bridge                176128  1 br_netfilter
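
You can also confirm that the sysctl values have taken effect; all three should report 1:

sudo sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward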

Installing Docker

  1. Add the Docker repository GPG key
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
  2. Point the package source at the Aliyun mirror
sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
  3. Update and install. (Note: docker.io is the Docker package from Ubuntu's own repositories; the Aliyun docker-ce repository added above is only needed if you prefer to install docker-ce, docker-ce-cli, and containerd.io instead.)
sudo apt update
sudo apt install -y docker.io
  4. Modify the Docker configuration file
sudo tee /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
  5. Restart Docker
sudo systemctl restart docker
  6. Enable Docker at boot
sudo systemctl enable docker
  7. Add the current user to the docker group, so that docker commands no longer require sudo. Assuming the current user is stu, run:
sudo usermod -aG docker stu

Note: for the command above to take effect, you need to log out and log back in.

  8. Verify the installation by running a few common docker commands, e.g.:
docker images
docker --version
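
You can also confirm that Docker picked up the systemd cgroup driver set in daemon.json (kubelet expects the container runtime to use the same cgroup driver as itself):

docker info | grep -i "cgroup driver"

The output should read: Cgroup Driver: systemd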

Changing the Hostnames

Use the following command to change each host's hostname.

Set each hostname according to the IP list from the beginning of this article:
Master - 192.168.11.16
Node-01 - 192.168.11.2
Node-02 - 192.168.11.6

For example, on 192.168.11.16 run:

sudo hostnamectl set-hostname "Master"
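
Likewise, run sudo hostnamectl set-hostname "Node-01" on 192.168.11.2 and sudo hostnamectl set-hostname "Node-02" on 192.168.11.6.

Optionally (an extra convenience, not part of the original steps), you can append all three hosts to /etc/hosts on every machine so they can resolve one another by name:

sudo tee -a /etc/hosts <<EOF
192.168.11.16 master
192.168.11.2  node-01
192.168.11.6  node-02
EOF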

Setting Up the Kubernetes Repository

sudo apt install -y apt-transport-https curl gnupg2 software-properties-common ca-certificates
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -  # use the Aliyun mirror key
sudo tee /etc/apt/sources.list.d/kubernetes.list <<EOF
# Aliyun mirror
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
sudo apt update

Installing kubelet, kubeadm, and kubectl

sudo apt-get install -y kubelet kubeadm kubectl
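
The command above installs the latest version available from the repository. To reproduce the exact version used in this article, you can pin the packages instead (a sketch; the -00 revision suffix may differ in the mirror):

sudo apt-get install -y kubelet=1.23.0-00 kubeadm=1.23.0-00 kubectl=1.23.0-00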

Hold the packages to prevent automatic upgrades

sudo apt-mark hold kubelet kubeadm kubectl

Check the versions

kubectl version --client && kubeadm version

Set kubelet to start at boot

sudo systemctl enable kubelet
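
At this stage it is normal for kubelet to sit in a restart loop, since it has no configuration until kubeadm init (or kubeadm join) generates one. You can inspect its state with:

systemctl status kubelet --no-pager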

Pulling the Images

The worker nodes only need to pull kube-proxy, coredns, and pause.

First, list the required image versions with:

kubeadm config images list

The output is:

k8s.gcr.io/kube-apiserver:v1.23.0
k8s.gcr.io/kube-controller-manager:v1.23.0
k8s.gcr.io/kube-scheduler:v1.23.0
k8s.gcr.io/kube-proxy:v1.23.0
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6

Next, create two files, each named pull-images.sh (one on the master node and one on each worker node, with the different contents shown below), to download the images and retag them.

The master node's script:

#!/bin/bash

echo "1. Pulling images"

# These version numbers must match the output of 'kubeadm config images list'
KUBE_VERSION=v1.23.0
PAUSE_VERSION=3.6
CORE_DNS_VERSION=1.8.6
ETCD_VERSION=3.5.1-0

# pull the kubernetes core images from the Aliyun mirror registry
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:$KUBE_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager-amd64:$KUBE_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver-amd64:$KUBE_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler-amd64:$KUBE_VERSION
# pull aliyuncs mirror docker images
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION

# retag to k8s.gcr.io prefix
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:$KUBE_VERSION k8s.gcr.io/kube-proxy:$KUBE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager-amd64:$KUBE_VERSION k8s.gcr.io/kube-controller-manager:$KUBE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver-amd64:$KUBE_VERSION k8s.gcr.io/kube-apiserver:$KUBE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler-amd64:$KUBE_VERSION k8s.gcr.io/kube-scheduler:$KUBE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION k8s.gcr.io/pause:$PAUSE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION k8s.gcr.io/coredns/coredns:v$CORE_DNS_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION k8s.gcr.io/etcd:$ETCD_VERSION

# remove the original tags; the images themselves are not deleted
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:$KUBE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager-amd64:$KUBE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver-amd64:$KUBE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler-amd64:$KUBE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION

echo "==================== Done ======================"

echo "Required images:"
kubeadm config images list

echo "Downloaded images:"
sudo docker images

echo "==== If the counts do not match, run pull-images.sh again ====="

The worker nodes' script:

#!/bin/bash

# These version numbers must match the output of 'kubeadm config images list'
KUBE_VERSION=v1.23.0
PAUSE_VERSION=3.6
CORE_DNS_VERSION=1.8.6

# pull the kubernetes core images from the Aliyun mirror registry
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:$KUBE_VERSION

# pull aliyuncs mirror docker images
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION

# retag to k8s.gcr.io prefix
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:$KUBE_VERSION k8s.gcr.io/kube-proxy:$KUBE_VERSION

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION k8s.gcr.io/pause:$PAUSE_VERSION

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION k8s.gcr.io/coredns/coredns:v$CORE_DNS_VERSION

# remove the original tags; the images themselves are not deleted
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:$KUBE_VERSION

docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION

docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION

echo "==================== Done ======================"

echo "Downloaded images:"
sudo docker images

echo "==== If the counts do not match, run pull-images.sh again ====="

Once the file is created, make the script executable:

chmod a+x pull-images.sh

Then run it:

./pull-images.sh
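
As an alternative to the retagging scripts above, kubeadm itself can pull directly from a mirror registry. This sketch is not the approach this article follows: it keeps the Aliyun repository names, so you would also have to pass the same --image-repository flag to kubeadm init in the next section:

kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.23.0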

Initializing the Master Node

On the master node, run:

sudo kubeadm init --apiserver-advertise-address=192.168.11.16 --pod-network-cidr=10.244.0.0/16

Note: replace the value of apiserver-advertise-address with your own master node's IP.

  • --apiserver-advertise-address: the address on which the API server, the core k8s service, is advertised; use your master node's IP.
  • --pod-network-cidr: the pod network range for the cluster. Since we will use flannel as the network plugin, flannel's default of 10.244.0.0/16 is fine for a learning environment.

When it completes, you will see output similar to:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.11.16:6443 --token pekml2.0s4kxso66inl2tww \
    --discovery-token-ca-cert-hash sha256:6f1b5b42a7a09351a8805a7f2b0bebabb07dcb4e782cc3e74461de2a16962502

Follow the instructions and run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Next, install the flannel network add-on in the cluster:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

If that URL is unreachable, you can instead create a file named kube-flannel.yml in the current directory with the following content:

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: quay.io/coreos/flannel:v0.15.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.15.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

Then run:

kubectl apply -f kube-flannel.yml

You should see output similar to:

Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Verification

On the master node, run:

kubectl get nodes

You should see:

NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   21m   v1.23.0
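
You can also check that the flannel and coredns pods in the kube-system namespace are Running:

kubectl get pods -n kube-system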

Joining the Worker Nodes

On each worker node, run:

sudo kubeadm join 192.168.11.16:6443 --token pekml2.0s4kxso66inl2tww \
    --discovery-token-ca-cert-hash sha256:6f1b5b42a7a09351a8805a7f2b0bebabb07dcb4e782cc3e74461de2a16962502

You should see output similar to:

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1213 11:09:04.079403 1772383 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Once the command above has been run on all worker nodes, return to the master node and run:

kubectl get nodes

If you see output like the following, the k8s cluster has been installed successfully:

NAME      STATUS   ROLES                  AGE     VERSION
master    Ready    control-plane,master   26m     v1.23.0
node-02   Ready    <none>                 62s     v1.23.0
node-03   Ready    <none>                 2m58s   v1.23.0
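
Note that the bootstrap token printed by kubeadm init is valid for 24 hours by default. If it has expired by the time you add a node, generate a fresh join command on the master node:

kubeadm token create --print-join-command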

Title: Installing a Kubernetes (k8s) Cluster on Ubuntu 20.04

Author: Morning Star

Published: 2021-09-01 09:09

Last updated: 2022-01-07 14:01

Original link: https://www.mls-tech.info/microservice/k8s/k8s-installation-on-ubuntu20.04/

License: CC BY-NC-ND 4.0 International (Attribution-NonCommercial-NoDerivatives). Please retain the original link and author when reposting.