Deploying k8s with a One-Click Script
Background
When I first installed a k8s cluster I followed the official k8s tutorial, which pulls everything from mirrors hosted abroad. Besides being slow, some downloads simply never finished, and the installation kept failing. On the advice of a member of the Mogu Blog chat group (蘑菇博客交流群 / @你钉钉响了), I switched to an open-source one-click k8s installation script and everything worked.
Git repository: https://github.com/TimeBye/kubeadm-ha
Environment Preparation
The official installation guide is simple enough, but it skips a few details, so I followed it and filled them in here.
Hardware and System Requirements
- Master node: 2 CPU cores / 2 GB RAM or more
- Worker node: 2 CPU cores / 4 GB RAM or more
This guide uses CentOS 7.7. Going by the specs above, the minimal layout is 3 CentOS hosts: 1 Master node and 2 Worker nodes (a 1-master / 2-worker setup). My actual deployment below uses 4 nodes with 3 masters.
Here are the specs and roles of my nodes:
| Hostname | IP | Specs | Roles |
| --- | --- | --- | --- |
| node1 | 10.168.1.11 | 4c16g | etcd, master, worker |
| node2 | 10.168.1.12 | 4c16g | etcd, lb, master, worker |
| node3 | 10.168.1.13 | 4c16g | etcd, lb, master, worker |
| node4 | 10.168.1.14 | 4c16g | worker |
Role descriptions:
- master: k8s management (control-plane) node
- worker: k8s worker node
- etcd: node hosting etcd, the k8s data store
- lb: load-balancer node, implemented with a load balancer plus keepalived; if the primary lb node goes down, the IP automatically floats to another node
其余准备工作
除以上四台主机使用的ip外,另外准备一个ip,供vip漂移使用,本人使用的ip是:10.168.1.15
centos准备
在安装之前需要准备一些基础的软件环境用于下载一键安装k8s的脚本和编辑配置
centos网络准备
安装时需要连接互联网下载各种软件 所以需要保证每个节点都可以访问外网
建议关闭centos的防火墙
```bash
systemctl stop firewalld && systemctl disable firewalld && systemctl status firewalld
```
Also make sure that all nodes can ping each other.
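For example, a quick connectivity check from node1, using the addresses in the table above:

```bash
# each node should answer; repeat from the other nodes as needed
ping -c 3 10.168.1.12
ping -c 3 10.168.1.13
ping -c 3 10.168.1.14
```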
CentOS Software Preparation
SSH into the Master node and install git.
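A minimal example, assuming the stock CentOS 7 yum repositories:

```bash
# git is needed to clone the deployment script below
yum install -y git
```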
Pre-deployment Configuration
Download the Deployment Script
Clone the installation script on the Master node:
```bash
git clone --depth 1 https://github.com/TimeBye/kubeadm-ha
```
Change into the directory of the downloaded deployment script.
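Assuming the default directory name created by the clone above:

```bash
cd kubeadm-ha
```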
Install the Ansible Runtime
Install the Ansible environment on the master node:
```bash
sudo ./install-ansible.sh
```
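To confirm the installation worked, a quick sanity check:

```bash
# should print the installed Ansible version without errors
ansible --version
```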
Modify the Installation Configuration Files
Since the cluster I am deploying has three master nodes, we need to edit the example/hosts.m-master.hostname.ini file:
```bash
vi example/hosts.m-master.hostname.ini
```
The changes are as follows:
```ini
[all]
node1 ansible_host=10.168.1.11 ansible_port=22 ansible_user="root" ansible_ssh_pass="111111"
node2 ansible_host=10.168.1.12 ansible_port=22 ansible_user="root" ansible_ssh_pass="111111"
node3 ansible_host=10.168.1.13 ansible_port=22 ansible_user="root" ansible_ssh_pass="111111"
node4 ansible_host=10.168.1.14 ansible_port=22 ansible_user="root" ansible_ssh_pass="111111"

[lb]
node2
node3

[etcd]
node1
node2
node3

[kube-master]
node1
node2
node3

[kube-worker]
node1
node2
node3
node4

lb_kube_apiserver_ip="10.168.1.15"
lb_kube_apiserver_port="8443"
```
The complete content of hosts.m-master.hostname.ini is shown below (note that this listing was captured from a different environment, so its addresses differ from the node table above):
```ini
[all]
192.168.28.80 ansible_port=22 ansible_user="root" ansible_ssh_pass="cheng"
192.168.28.128 ansible_port=22 ansible_user="root" ansible_ssh_pass="cheng"
192.168.28.89 ansible_port=22 ansible_user="root" ansible_ssh_pass="cheng"

[lb]

[etcd]
192.168.28.80
192.168.28.128
192.168.28.89

[kube-master]
192.168.28.80

[kube-worker]
192.168.28.80
192.168.28.128
192.168.28.89

[new-master]

[new-worker]

[new-etcd]

[del-worker]

[del-master]

[del-etcd]

[del-node]

[all:vars]
skip_verify_node=false
kube_version="1.19.4"
lb_mode="nginx"
lb_kube_apiserver_port="8443"
kube_pod_subnet="10.244.0.0/18"
kube_service_subnet="10.244.64.0/18"
kube_network_node_prefix="24"
kube_max_pods="110"
network_plugin="calico"
kubelet_root_dir="/var/lib/kubelet"
docker_storage_dir="/var/lib/docker"
etcd_data_dir="/var/lib/etcd"
```
Modify the variables.yaml File
Notes on this file:
Advanced configuration. Note: if you use the advanced configuration when installing the cluster, every later operation must also pass the -e @example/variables.yaml parameter to the ansible-playbook command.
- Every configurable item of this project appears in the example/variables.yaml file; to customize an option, just remove the comment marker in front of it.
- If a variable in the example/hosts.m-master.ip.ini file conflicts with one in example/variables.yaml, the value in example/variables.yaml takes precedence.
What I changed:
```yaml
docker_mirror:
  - "https://lj88h5zm.mirror.aliyuncs.com"
kubernetesui_dashboard_enabled: false
cert_manager_enabled: true
```
The complete content is as follows:
```yaml
skip_verify_node: false
install_mode: online
http_proxy:
https_proxy:
no_proxy: 192.168.0.0/16,10.0.0.0/8,172.16.0.0/12,127.0.0.1,localhost
timezone: Asia/Shanghai
base_yum_repo: http://mirrors.aliyun.com/centos/$releasever/os/$basearch/
epel_yum_repo: http://mirrors.aliyun.com/epel/$releasever/$basearch
docker_yum_repo: https://mirrors.aliyun.com/docker-ce/linux/centos/{{ ansible_distribution_major_version }}/$basearch/stable
kubernetes_yum_repo: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-{{ ansible_architecture }}/
base_apt_repo: deb http://mirrors.aliyun.com/{{ host_distribution | lower }}/ {{ host_distribution_release }} main restricted universe multiverse
docker_apt_repo: "deb [arch={{ host_architecture }}] https://mirrors.aliyun.com/docker-ce/linux/{{ host_distribution | lower }} {{ host_distribution_release }} stable"
kubernetes_apt_repo: deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
docker_version: 19.03.13
containerd_version: 1.3.7-1
docker_mirror:
  - "https://lj88h5zm.mirror.aliyuncs.com"
  - "https://reg-mirror.qiniu.com"
  - "https://hub-mirror.c.163.com"
  - "https://docker.mirrors.ustc.edu.cn"
docker_insecure_registries:
  - "{{ kube_pod_subnet }}"
  - "{{ kube_service_subnet }}"
docker_log_driver: "json-file"
docker_log_level: "warn"
docker_log_max_size: "10m"
docker_log_max_file: 3
docker_storage_dir: "/var/lib/docker"
docker_max_concurrent_downloads: 10
lb_mode: nginx
lb_kube_apiserver_ip: "10.168.1.15"
lb_kube_apiserver_port: 8443
lb_kube_apiserver_healthcheck_port: 8081
enabel_ingress_nodeport_lb: true
enabel_ingress_tls_nodeport_lb: true
lb_openresty_image: registry.aliyuncs.com/kubeadm-ha/openresty_openresty:1.17.8.2-alpine
lb_nginx_image: registry.aliyuncs.com/kubeadm-ha/nginx:1.18-alpine
lb_haproxy_image: registry.aliyuncs.com/kubeadm-ha/haproxy:2.1-alpine
lb_haproxy_stats_bind_address: 9099
lb_haproxy_stats_uri: "/stats"
lb_haproxy_stats_refresh: 10
lb_haproxy_stats_user: "admin"
lb_haproxy_stats_password: "admin"
lb_haproxy_balance_alg: "leastconn"
lb_envoy_image: registry.aliyuncs.com/kubeadm-ha/envoyproxy_envoy:v1.16.0
lb_envoy_admin_address_port: 9099
lb_keepalived_image: registry.aliyuncs.com/kubeadm-ha/osixia_keepalived:2.0.20
lb_keepalived_password: "d0cker"
lb_keepalived_router_id: 51
etcd_certs_expired: 3650
etcd_ca_certs_expired: 36500
etcd_image: registry.aliyuncs.com/kubeadm-ha/etcd:3.4.13-0
etcd_data_dir: "/var/lib/etcd"
etcd_backup_hour: "3"
etcd_backup_expiry: "7"
kube_certs_expired: 3650
kube_ca_certs_expired: 36500
kubeadm_token: "abcdef.0123456789abcdef"
kube_master_external_ip:
  - "8.8.8.8"
kube_master_external_domain:
  - "kubernetes.io"
pod_infra_container_image: registry.aliyuncs.com/kubeadm-ha/pause:3.2
kube_image_repository: registry.aliyuncs.com/kubeadm-ha
kube_version: 1.19.4
kube_dns_domain: cluster.local
kube_pod_subnet: 10.244.0.0/18
kube_service_subnet: 10.244.64.0/18
kube_network_node_prefix: 24
kube_max_pods: 110
kube_service_node_port_range: 30000-32767
eviction_hard_imagefs_available: 15%
eviction_hard_memory_available: 100Mi
eviction_hard_nodefs_available: 10%
eviction_hard_nodefs_inodes_free: 5%
kube_cpu_reserved: 100m
kube_memory_reserved: 256M
kube_ephemeral_storage_reserved: 1G
system_reserved_enabled: true
system_cpu_reserved: 500m
system_memory_reserved: 512M
system_ephemeral_storage_reserved: 10G
kube_proxy_mode: iptables
kubelet_root_dir: "/var/lib/kubelet"
kube_encryption_algorithm: "aescbc"
kube_encrypt_token: "GPG4RC0Vyk7+Mz/niQPttxLIeL4HF96oRCcBRyKNpfM="
kubernetes_audit: false
audit_log_maxage: 30
audit_log_maxbackups: 10
audit_log_maxsize: 100
audit_log_hostpath: /var/log/kubernetes/audit
audit_policy_file: /etc/kubernetes/config/apiserver-audit-policy.yaml
audit_policy_custom_rules: |
  - level: None
    users: []
    verbs: []
    resources: []
kube_apiserver_enable_admission_plugins:
  - NodeRestriction
kube_apiserver_disable_admission_plugins: [ ]
kube_controller_node_monitor_grace_period: 40s
kube_controller_node_monitor_period: 5s
kube_controller_pod_eviction_timeout: 2m0s
kube_controller_terminated_pod_gc_threshold: 10
kube_kubeadm_apiserver_extra_args: { }
kube_kubeadm_controller_extra_args: { }
kube_kubeadm_scheduler_extra_args: { }
apiserver_extra_volumes: { }
controller_manager_extra_volumes: { }
scheduler_extra_volumes: { }
wait_plugins_ready: true
network_plugins_enabled: true
network_plugin: "calico"
calico_veth_mtu: 1440
calico_typha_image: registry.aliyuncs.com/kubeadm-ha/calico_typha:v3.16.5
calico_cni_image: registry.aliyuncs.com/kubeadm-ha/calico_cni:v3.16.5
calico_node_image: registry.aliyuncs.com/kubeadm-ha/calico_node:v3.16.5
calico_kube_controllers_image: registry.aliyuncs.com/kubeadm-ha/calico_kube-controllers:v3.16.5
calico_pod2daemon_flexvol_image: registry.aliyuncs.com/kubeadm-ha/calico_pod2daemon-flexvol:v3.16.5
calico_felix_log_level: "warning"
calicoctl_image: registry.aliyuncs.com/kubeadm-ha/calico_ctl:v3.16.5
flannel_backend: "vxlan"
flannel_image: registry.aliyuncs.com/kubeadm-ha/coreos_flannel:v0.13.0
ingress_controller_enabled: true
ingress_controller_tpye: nginx
ingress_controller_external_traffic_policy: Cluster
ingress_controller_http_nodeport: 30080
ingress_controller_https_nodeport: 30443
nginx_ingress_image: registry.aliyuncs.com/kubeadm-ha/ingress-nginx_controller:v0.41.0
nginx_ingress_webhook_certgen_image: registry.aliyuncs.com/kubeadm-ha/jettech_kube-webhook-certgen:v1.5.0
traefik_certs_expired: 3650
traefik_ingress_image: registry.aliyuncs.com/kubeadm-ha/traefik:2.3.1
kubernetesui_dashboard_enabled: true
kubernetesui_dashboard_certs_expired: 3650
kubernetesui_dashboard_image: registry.aliyuncs.com/kubeadm-ha/kubernetesui_dashboard:v2.0.4
kubernetesui_metrics_scraper_image: registry.aliyuncs.com/kubeadm-ha/kubernetesui_metrics-scraper:v1.0.4
metrics_server_enabled: true
metrics_server_image: registry.aliyuncs.com/kubeadm-ha/metrics-server_metrics-server:v0.4.0
cert_manager_enabled: true
acme_email: liu15077731547@gmail.com
acme_server: https://acme-v02.api.letsencrypt.org/directory
cert_manager_cainjector_image: registry.aliyuncs.com/kubeadm-ha/jetstack_cert-manager-cainjector:v1.0.4
cert_manager_webhook_image: registry.aliyuncs.com/kubeadm-ha/jetstack_cert-manager-webhook:v1.0.4
cert_manager_controller_image: registry.aliyuncs.com/kubeadm-ha/jetstack_cert-manager-controller:v1.0.4
```
Upgrade the Kernel
After the configuration files have been modified, it is recommended to upgrade the kernel:
```bash
ansible-playbook -i example/hosts.m-master.hostname.ini -e @example/variables.yaml 00-kernel.yml
```
After the kernel upgrade completes, reboot all nodes; one way to do this from the master node is shown below.
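A sketch using an Ansible ad-hoc command against the same inventory (not part of the original script; the SSH connection dropping as each node reboots is expected):

```bash
# reboot every host listed in the inventory file
ansible all -i example/hosts.m-master.hostname.ini -m shell -a "reboot"
```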
Start Deploying k8s
After all nodes have finished rebooting, go back into the script directory.
Run the one-click deployment command:
```bash
ansible-playbook -i example/hosts.m-master.hostname.ini -e @example/variables.yaml 90-init-cluster.yml
```
Check Node Status
Once every node reports Ready, the cluster has been created successfully; check with kubectl as shown below.
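Assuming kubectl has been configured on the master node by the playbook:

```bash
# list all cluster nodes and their status
kubectl get nodes
```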
```
NAME    STATUS   ROLES                   AGE     VERSION
node1   Ready    etcd,master,worker      3m10s   v1.19.4
node2   Ready    etcd,lb,master,worker   3m9s    v1.19.4
node3   Ready    etcd,lb,master,worker   3m6s    v1.19.4
node4   Ready    worker                  3m1s    v1.19.4
```
Deploy Kuboard (Optional)
Install Kuboard
Run on the master node:
```bash
kubectl apply -f https://kuboard.cn/install-script/kuboard.yaml
kubectl apply -f https://addons.kuboard.cn/metrics-server/0.3.7/metrics-server.yaml
```
Check Kuboard's running status:
```bash
kubectl get pods -l k8s.kuboard.cn/name=kuboard -n kube-system
```
The output looks like this:
```
NAME                       READY   STATUS    RESTARTS   AGE
kuboard-74c645f5df-cmrbc   1/1     Running   0          80s
```
Access Kuboard
The Kuboard Service is exposed via NodePort, and the NodePort is 32567; you can access Kuboard as follows.
For example:
```
http://10.168.1.15:32567/
```
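To double-check the NodePort, you can inspect the Service (assuming the manifest above created a Service named kuboard in the kube-system namespace):

```bash
# shows the Service type and its node port
kubectl get svc kuboard -n kube-system
```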
The first visit asks for a token, so let's fetch one.
Get the Token
Run on the master node:
```bash
echo $(kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep kuboard-user | awk '{print $1}') -o go-template='{{.data.token}}' | base64 -d)
```
The token I obtained (make sure to copy it in full):
```
eyJhbGciOiJSUzI1NiIsImtpZCI6ImY1eUZlc0RwUlZha0E3LWZhWXUzUGljNDM3SE0zU0Q4dzd5R3JTdXM2WEUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJvYXJkLXVzZXItdG9rZW4tMmJsamsiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoia3Vib2FyZC11c2VyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYzhlZDRmNDktNzM0Zi00MjU1LTljODUtMWI5MGI4MzU4ZWMzIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmt1Ym9hcmQtdXNlciJ9.MujbwGnkL_qa3H14oKDT1zZ5Fzt16pWoaY52nT7fV5B2nNIRsB3Esd18S8ztHUJZLRGxAhBwu-utToi2YBb8pH9RfIeSXMezFZ6QhBbp0n5xYWeYETQYKJmes2FRcW-6jrbpvXlfUuPXqsbRX8qrnmSVEbcAms22CSSVhUbTz1kz8C7b1C4lpSGGuvdpNxgslNFZTFrcImpelpGSaIGEMUk1qdjKMROw8bV83pga4Y41Y6rJYE3hdnCkUA8w2SZOYuF2kT1DuZuKq3A53iLsvJ6Ps-gpli2HcoiB0NkeI_fJORXmYfcj5N2Csw6uGUDiBOr1T4Dto-i8SaApqmdcXg
```
Enter the token into Kuboard.
You will then land on the Kuboard dashboard.
Deploy Rancher (Optional)
It is recommended to deploy only one of Kuboard and Rancher.
Install Helm
Deploying Rancher with Helm is much more convenient, so install Helm first:
```bash
curl -O http://rancher-mirror.cnrancher.com/helm/v3.2.4/helm-v3.2.4-linux-amd64.tar.gz
tar -zxvf helm-v3.2.4-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin
```
Verify
Run the standard version check; output like the following means Helm was installed successfully.
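The check, using Helm's built-in version command (no assumptions beyond the install above):

```bash
# prints the installed Helm client version
helm version
```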
```
version.BuildInfo{Version:"v3.2.4", GitCommit:"0ad800ef43d3b826f31a5ad8dfbb4fe05d143688", GitTreeState:"clean", GoVersion:"go1.13.12"}
```
Add the Rancher chart repository:
```bash
helm repo add rancher-stable http://rancher-mirror.oss-cn-beijing.aliyuncs.com/server-charts/stable
helm repo update
```
Install Rancher
```bash
helm install rancher rancher-stable/rancher \
  --create-namespace \
  --namespace cattle-system \
  --set hostname=rancher.local.com
```
Wait for Rancher to come up:
```bash
kubectl -n cattle-system rollout status deploy/rancher
```
Output:
```
Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available...
deployment "rancher" successfully rolled out
```