Kubernetes, usually abbreviated k8s, is the full-fledged counterpart of mini distributions such as minikube; k3s, in turn, is a lightweight Kubernetes distribution designed for resource-constrained environments such as edge computing, IoT devices, and embedded systems. Kubernetes is a cloud platform that, by orchestrating the containers in a cluster, provides advanced capabilities such as high availability (HA) and horizontal pod autoscaling (HPA).
1. Prepare the environment
- A tool for getting around network restrictions
  - Not necessarily used, but you must have one available.
  - This deployment was done entirely over a domestic (China) network.
- Three virtual machines for testing
  - 4 CPU / 2 GB RAM / 8 GB disk each, minimal install of Debian 12; configure a domestic apt mirror and set the hostnames to kubemaster and kubenode.
Check the environment
According to the Kubernetes documentation¹, since v1.24 Kubernetes no longer ships dockershim, the component that integrated Docker CE. That does not mean Docker CE is unsupported; it simply means containerd is now the default container runtime.
So with current versions there is no need to install Docker CE on the machines. Disable Linux swap, configure the iptables-related sysctl settings, and load the kernel modules overlay and br_netfilter. Finally, run the commands below to verify the environment:
#Disable Linux swap
swapoff -a
#Comment out the swap entry in /etc/fstab so it is not mounted after reboot
nano /etc/fstab
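#(Optional) a non-interactive alternative to editing fstab by hand; a sketch that assumes a standard swap line such as "UUID=... none swap sw 0 0":
sed -i '/^[^#].*\sswap\s/ s/^/#/' /etc/fstab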
#Load the required kernel modules and make them persistent
echo -e "overlay\nbr_netfilter" > /etc/modules-load.d/k8s.conf
modprobe overlay
modprobe br_netfilter
#Confirm that the br_netfilter and overlay modules are loaded:
lsmod | grep br_netfilter
lsmod | grep overlay
#Add the following parameters to /etc/sysctl.d/k8s.conf
nano /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
#Apply the sysctl parameters without rebooting
sysctl --system
#Confirm that they are all set to 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
On other Linux distributions you may also need to adjust SELinux, the firewall, and similar settings.
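For reference, on a RHEL-family distribution the usual adjustments (taken from the kubeadm documentation; they are not needed on the Debian 12 hosts used here) look roughly like this:
#Switch SELinux to permissive mode
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
#Either open the required ports (6443, 10250, 2379-2380, ...) or, for a throwaway lab, stop firewalld
systemctl disable --now firewalld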
2. Install kubeadm
If your environment can reach k8s.io, I recommend using the official package repository.
#Install dependencies
apt-get update
apt-get install -y apt-transport-https ca-certificates curl gpg
#Download the public signing key for the Kubernetes package repository
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
#Add the Kubernetes apt repository (this overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list)
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list
#Install the packages
apt-get update
apt-get install -y kubelet kubeadm kubectl
#Pin the versions so apt upgrades do not move them
apt-mark hold kubelet kubeadm kubectl
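A quick sanity check that the pinned tools are in place (the exact versions simply reflect what the repository ships at the time):
kubeadm version -o short
kubectl version --client
kubelet --version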
Alternatively, install from the Aliyun mirror.
apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
#containerd has to be installed separately (this applies whichever apt repository you used)
apt-get install -y containerd
Configure containerd
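If you would rather start from containerd's full default configuration instead of the minimal hand-written file below (an alternative approach, not what this article did), it can be generated and then adjusted:
containerd config default > /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd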
#Configure the systemd cgroup driver
root@kubemaster:~# cat /etc/containerd/config.toml
version = 2
[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/usr/lib/cni"
      conf_dir = "/etc/cni/net.d"
    [plugins."io.containerd.grpc.v1.cri".containerd]
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
  [plugins."io.containerd.internal.v1.opt"]
    path = "/var/lib/containerd/opt"
[grpc]
  address = "/run/containerd/containerd.sock"
#Point crictl at the containerd socket
root@kubemaster:~# crictl config runtime-endpoint unix:///run/containerd/containerd.sock
#Check the resulting configuration
root@kubemaster:~# cat /etc/crictl.yaml
runtime-endpoint: "unix:///run/containerd/containerd.sock"
image-endpoint: ""
timeout: 0
debug: false
pull-image-on-create: false
disable-pull-on-run: false
#Restart containerd
root@kubemaster:~# systemctl restart containerd
#Check that it is working
root@kubemaster:~# crictl ps
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
If crictl ps runs and prints its (for now empty) table without errors, the containerd service is basically fine. The containerd configuration file supports many more tweaks, for example changing the default registry used when pulling images (a custom source)².
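As a quick illustration of such a customization (a sketch only: mirror.example.com is a placeholder, and newer containerd releases prefer the config_path / hosts.toml mechanism over the inline mirrors shown here), a docker.io mirror could be appended to the same config.toml:
cat >>/etc/containerd/config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
  endpoint = ["https://mirror.example.com"]
EOF
systemctl restart containerd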
3. Initialize the control plane (control-plane node)
Note that the Aliyun mirror's Kubernetes packages may lag one version behind k8s.io, and when initializing the kubemaster node against the Aliyun registry you need the extra flag "--image-repository registry.aliyuncs.com/google_containers". Whichever repository you use, initialization tends to hit some problem; k8s.io caused me fewer issues, and on Debian 12 the only one I ran into with it was the missing /usr/lib/cni directory, fixed with a single ln. Below is my installation using the Aliyun registry, initializing the cluster as described in the official Kubernetes documentation³.
kubeadm init --control-plane-endpoint kubemaster.home --pod-network-cidr 10.244.0.0/16 --kubernetes-version v1.28.2 --image-repository registry.aliyuncs.com/google_containers
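Optionally, the control-plane images can be pre-pulled before running init so that slow downloads do not eat into the kubelet startup timeout (note this does not change containerd's own sandbox image, which turns out to be the real problem below):
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.28.2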
During initialization the message [kubelet-check] Initial timeout of 40s passed appeared and the init timed out. Check the kubelet logs:
root@kubemaster:~# journalctl -u kubelet -n 50
Nov 06 11:34:35 kubemaster.home kubelet[99024]: E1106 11:34:35.111358 99024 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp :443: i/o timeout" pod="kube-system/kube-controller-manager-kubemaster.home"
Nov 06 11:34:35 kubemaster.home kubelet[99024]: E1106 11:34:35.111394 99024 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp :443: i/o timeout" pod="kube-system/kube-controller-manager-kubemaster.home"
Nov 06 11:34:35 kubemaster.home kubelet[99024]: E1106 11:34:35.111474 99024 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-kubemaster.home_kube-system(673d0ad57ff4a5fd75243a07164e8e13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-kubemaster.home_kube-system(673d0ad57ff4a5fd75243a07164e8e13)\\\": rpc error: code = DeadlineExceeded desc = failed to get sandbox image \\\"registry.k8s.io/pause:3.6\\\": failed to pull image \\\"registry.k8s.io/pause:3.6\\\": failed to pull and unpack image \\\"registry.k8s.io/pause:3.6\\\": failed to resolve reference \\\"registry.k8s.io/pause:3.6\\\": failed to do request: Head \\\"https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\\\": dial tcp :443: i/o timeout\"" pod="kube-system/kube-controller-manager-kubemaster.home" podUID="673d0ad57ff4a5fd75243a07164e8e13"
Nov 06 11:34:35 kubemaster.home kubelet[99024]: E1106 11:34:35.118296 99024 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp :443: i/o timeout"
Nov 06 11:34:35 kubemaster.home kubelet[99024]: E1106 11:34:35.118354 99024 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp :443: i/o timeout" pod="kube-system/etcd-kubemaster.home"
Nov 06 11:34:35 kubemaster.home kubelet[99024]: E1106 11:34:35.118382 99024 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp ]:443: i/o timeout" pod="kube-system/etcd-kubemaster.home"
The logs show that the kubelet cannot create the pod sandbox because the sandbox image cannot be pulled. Use crictl images to see which images are already present locally.
root@kubemaster:~# crictl images
IMAGE TAG IMAGE ID SIZE
registry.aliyuncs.com/google_containers/coredns v1.10.1 ead0a4a53df89 16.2MB
registry.aliyuncs.com/google_containers/etcd 3.5.9-0 73deb9a3f7025 103MB
registry.aliyuncs.com/google_containers/kube-apiserver v1.28.2 cdcab12b2dd16 34.7MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.28.2 55f13c92defb1 33.4MB
registry.aliyuncs.com/google_containers/kube-proxy v1.28.2 c120fed2beb84 24.6MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.28.2 7a5d9d67a13f6 18.8MB
registry.aliyuncs.com/google_containers/pause 3.9 e6f1816883972 322kB
The pause image is already present locally, at version 3.9. Following the guidance in the container runtime prerequisites document, explicitly specify the sandbox image by editing /etc/containerd/config.toml.
[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
Restart containerd, reset the half-initialized cluster, and run the initialization again.
root@kubemaster:~# systemctl restart containerd && kubeadm reset -f
root@kubemaster:~# kubeadm init --control-plane-endpoint kubemaster.home --pod-network-cidr 10.244.0.0/16 --kubernetes-version v1.28.2 --image-repository registry.aliyuncs.com/google_containers
[init] Using Kubernetes version: v1.28.2
......
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join kubemaster.home:6443 --token f2a5zy.y1nc1cypm8vlkofs \
--discovery-token-ca-cert-hash sha256:96465cc6fc45f7266719f6841c4ae99c3a4d584378ee9f67907a101f1837d0f5 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join kubemaster.home:6443 --token f2a5zy.y1nc1cypm8vlkofs \
--discovery-token-ca-cert-hash sha256:96465cc6fc45f7266719f6841c4ae99c3a4d584378ee9f67907a101f1837d0f5
Record the token and commands printed above. As the output suggests, copy the admin kubeconfig to the indicated directory, or export the KUBECONFIG environment variable in .bashrc. Checking the nodes with kubectl, kubemaster is in NotReady state; looking at the kubelet logs again there are no other errors, only a message that the CNI network has not been initialized.
Nov 06 12:07:22 kubemaster.home kubelet[100251]: E1106 12:07:22.148165 100251 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
–
#k8s.io repository (deployment already completed)
root@kubernetes:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubernetes Ready control-plane 5d19h v1.28.3
#Aliyun mirror (deployment in progress)
root@kubemaster:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubemaster.home NotReady control-plane 12m v1.28.2
According to the addon list⁴, Kubernetes network plugins can emulate networking at different layers; some, for example, emulate layer-2 forwarding, which is worth digging into later. Here I chose Flannel⁵. Download the manifest locally, apply it, and then watch the pod status once the installation finishes.
Why download the manifest locally before applying it? Because if you later need to remove the deployment you can reuse the very same file, and sometimes your local setup differs from a plugin's or container's defaults, so the manifest needs small adjustments. Saving manifests is a good habit.
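For reference (not something to run now), tearing Flannel down later reuses exactly the same file:
kubectl delete -f kube-flannel.yml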
root@kubemaster:~# wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml && kubectl apply -f kube-flannel.yml
root@kubemaster:~# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-ln6w4 1/1 Running 0 10m
kube-system coredns-66f779496c-fxjjm 0/1 ContainerCreating 0 3h
kube-system coredns-66f779496c-vpzmr 0/1 ContainerCreating 0 3h
kube-system etcd-kubemaster.home 1/1 Running 3 3h
kube-system kube-apiserver-kubemaster.home 1/1 Running 3 3h
kube-system kube-controller-manager-kubemaster.home 1/1 Running 5 3h
kube-system kube-proxy-6xpz2 1/1 Running 0 3h
kube-system kube-scheduler-kubemaster.home 1/1 Running 6 3h
Normally, once the network plugin is deployed, coredns starts up on its own. Check the logs of kube-flannel-ds-ln6w4 to confirm the container started and is working properly.
root@kubemaster:~# kubectl logs kube-flannel-ds-ln6w4 -n kube-flannel
Defaulted container "kube-flannel" out of: kube-flannel, install-cni-plugin (init), install-cni (init)
......
I1106 06:40:49.758364 1 iptables.go:283] bootstrap done
I1106 06:40:49.770459 1 iptables.go:283] bootstrap done #bootstrap finished, working normally
Flannel has started normally, yet coredns still does not come up and stays stuck in ContainerCreating. Looking at the kubelet logs again, the container keeps failing because the /usr/lib/cni path cannot be found. Per the installation docs, make sure the Flannel executable is reachable from that directory; create a symlink with ln and then check the pod status again.
root@kubemaster:~# journalctl -u kubelet -n 50
#Log contents
......
Nov 06 15:19:51 kubemaster.home kubelet[100251]: E1106 15:19:51.849618 100251 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0fd04ffa-1604-4086-84e6-f1f0020b5665\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7e602915e2b13e71bc51d157fc17baf45b02f5e4710f7fa7bcd27acf114bf73e\\\": plugin type=\\\"loopback\\\" failed (delete): failed to find plugin \\\"loopback\\\" in path [/usr/lib/cni]\""
Nov 06 15:19:51 kubemaster.home kubelet[100251]: E1106 15:19:51.849644 100251 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0fd04ffa-1604-4086-84e6-f1f0020b5665\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7e602915e2b13e71bc51d157fc17baf45b02f5e4710f7fa7bcd27acf114bf73e\\\": plugin type=\\\"loopback\\\" failed (delete): failed to find plugin \\\"loopback\\\" in path [/usr/lib/cni]\"" pod="kube-system/coredns-66f779496c-fxjjm" podUID="0fd04ffa-1604-4086-84e6-f1f0020b5665"
Nov 06 15:19:54 kubemaster.home kubelet[100251]: E1106 15:19:54.849405 100251 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b5dc9e4f5f6c265c05315e6f43696718e054b9420502ea6976996f972fbe62e8\": plugin type=\"loopback\" failed (delete): failed to find plugin \"loopback\" in path [/usr/lib/cni]" podSandboxID="b5dc9e4f5f6c265c05315e6f43696718e054b9420502ea6976996f972fbe62e8"
#Locate the actual path of the flannel binary
root@kubemaster:~# find / -name flannel
/opt/cni/bin/flannel
root@kubemaster:~# ln -s /opt/cni/bin/ /usr/lib/cni
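The root cause is that the containerd config above sets bin_dir = "/usr/lib/cni", while the CNI plugins (installed under /opt/cni/bin by the kubernetes-cni dependency of kubelet and by Flannel's init containers) live elsewhere. An equivalent alternative to the symlink (not the fix used here) is to point containerd at the standard path:
sed -i 's#bin_dir = "/usr/lib/cni"#bin_dir = "/opt/cni/bin"#' /etc/containerd/config.toml
systemctl restart containerd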
#Check the pods again
root@kubemaster:~# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-ln6w4 1/1 Running 0 45m
kube-system coredns-66f779496c-fxjjm 1/1 Running 0 3h36m
kube-system coredns-66f779496c-vpzmr 1/1 Running 0 3h36m
kube-system etcd-kubemaster.home 1/1 Running 3 3h36m
kube-system kube-apiserver-kubemaster.home 1/1 Running 3 3h36m
kube-system kube-controller-manager-kubemaster.home 1/1 Running 5 3h36m
kube-system kube-proxy-6xpz2 1/1 Running 0 3h36m
kube-system kube-scheduler-kubemaster.home 1/1 Running 6 3h36m
#Inspect node details
root@kubemaster:~# kubectl describe nodes
......
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeReady 50m kubelet Node kubemaster.home status is now: NodeReady
At this point the kubemaster node is essentially deployed. Next, deploy the Dashboard directly from the master node.
4. Deploy the Dashboard
Download the Dashboard manifest locally, kubectl apply it, and then watch the pods: the Dashboard containers sit in a state called Pending. The reason is that the master, i.e. any node with the control-plane role, carries a taint by default⁶, and the toleration in the Dashboard manifest (shown below, together with commands for inspecting the taint) targets the old node-role.kubernetes.io/master key, so the pods cannot be scheduled onto a v1.28 control plane, which is tainted node-role.kubernetes.io/control-plane instead. As soon as a suitable node exists, Kubernetes schedules the pods there automatically, so we need to deploy a child node, also called a worker node.
# Comment the following tolerations if Dashboard must not be deployed on master
# i.e. with these tolerations present the Dashboard is allowed onto a node tainted node-role.kubernetes.io/master; comment them out to keep it off such nodes
tolerations:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
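To inspect the taint that keeps the pods Pending, or, for a single-node lab only, to remove it instead of adding a worker (not done here), something like the following works:
kubectl describe node kubemaster.home | grep -i taints
kubectl taint nodes kubemaster.home node-role.kubernetes.io/control-plane:NoSchedule-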
To deploy the child (worker) node, repeat sections 1-2 of this article and copy the containerd config file containing the sandbox_image setting over to the worker. Restart the service, then run the kubeadm join command printed at the end of the kubemaster initialization on the worker. Once it finishes, kubectl get nodes on kubemaster shows kubenode in NotReady state. (You could also copy the admin kubeconfig to kubenode so the cluster can be managed from the worker, but that is not recommended.)
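If more than 24 hours have passed since init, the original bootstrap token will have expired; a fresh join command can be printed on the master at any time:
kubeadm token create --print-join-command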
The NotReady state here has the same cause as on the master: Flannel cannot find its executables under /usr/lib/cni. After fixing it with the same ln symlink, run get pods/nodes again and watch the node and pod status. Once all pods and nodes are OK, test access to the Dashboard.
root@kubemaster:~# kubectl get pods -o wide -A
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-flannel kube-flannel-ds-ln6w4 1/1 Running 0 8h 192.168.0.214 kubemaster.home <none> <none>
kube-flannel kube-flannel-ds-rwmpr 1/1 Running 0 5h47m 192.168.0.138 kubenode <none> <none>
kube-system coredns-66f779496c-fxjjm 1/1 Running 0 11h 10.244.0.2 kubemaster.home <none> <none>
kube-system coredns-66f779496c-vpzmr 1/1 Running 0 11h 10.244.0.3 kubemaster.home <none> <none>
kube-system etcd-kubemaster.home 1/1 Running 3 11h 192.168.0.214 kubemaster.home <none> <none>
kube-system kube-apiserver-kubemaster.home 1/1 Running 3 11h 192.168.0.214 kubemaster.home <none> <none>
kube-system kube-controller-manager-kubemaster.home 1/1 Running 5 11h 192.168.0.214 kubemaster.home <none> <none>
kube-system kube-proxy-6xpz2 1/1 Running 0 11h 192.168.0.214 kubemaster.home <none> <none>
kube-system kube-proxy-tlrqd 1/1 Running 0 5h47m 192.168.0.138 kubenode <none> <none>
kube-system kube-scheduler-kubemaster.home 1/1 Running 6 11h 192.168.0.214 kubemaster.home <none> <none>
kubernetes-dashboard dashboard-metrics-scraper-5657497c4c-pp9m8 1/1 Running 0 7h12m 10.244.1.2 kubenode <none> <none>
kubernetes-dashboard kubernetes-dashboard-78f87ddfc-855mn 1/1 Running 0 7h12m 10.244.1.3 kubenode <none> <none>
root@kubemaster:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubemaster.home Ready control-plane 11h v1.28.2
kubenode Ready <none> 5h53m v1.28.2
First check which address the Dashboard service listens on inside the cluster. A few concepts matter here: ClusterIP, NodePort, LoadBalancer, and ExternalName are the common Kubernetes Service types⁷. At the moment the dashboard is reachable only from inside the cluster.
root@kubemaster:~# kubectl get service -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11h
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 11h
kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.106.131.10 <none> 8000/TCP 7h46m
kubernetes-dashboard kubernetes-dashboard ClusterIP 10.111.16.23 <none> 443/TCP 7h46m
Referring to the Kubernetes API reference⁸, expose it through an external address: export the Dashboard Service definition, add an externalIPs field, and apply it again; its value is kubemaster's node IP address.
root@kubemaster:~# kubectl get svc kubernetes-dashboard -n kubernetes-dashboard -o yaml > dash.yaml && cp dash.yaml dash.yaml.bak
root@kubemaster:~# cat dash.yaml
......
type: ClusterIP
externalIPs:
- 192.168.0.214
......
root@kubemaster:~# kubectl apply -f dash.yaml
service/kubernetes-dashboard configured
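An alternative to adding externalIPs, if you would rather expose the Dashboard on a high port of every node, is to patch the Service type to NodePort (not the approach used in this article):
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard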
The Dashboard can now be reached in a browser at https://kubemaster.home/. Following the documentation, add and configure a user⁹, generate a token, and log in. Save the following manifests as adduser.yaml and addrule.yaml.
#Create the service account: adduser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
#Bind the user to a cluster role: addrule.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
Apply the manifests, generate a token, and use it to log in.
root@kubemaster:~# kubectl apply -f adduser.yaml && kubectl apply -f addrule.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
root@kubemaster:~# kubectl -n kubernetes-dashboard create token admin-user
#The generated token (its lifetime can be configured; see the documentation and the example after the token)
eyJhbGciOiJSUzI1NiIsImtpZCI6Ik9qOFRBSUI4bHh2LU1tTUF5TFJaUjZBaUhvbkpHb3B4eGVJR0xsQ3FWaTAifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjk5MjkwMTk0LCJpYXQiOjE2OTkyODY1OTQsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiOTFmMmVhZWQtMjI0ZS00NGY1LThmZjEtNjU5N2EzNWE0MGZiIn19LCJuYmYiOjE2OTkyODY1OTQsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.uMdCHJx9kGbUO3SPsrS-Cfyh_spat5yuVWjpZcT1jCfzyxBNKbkdLHVC8eM0C1gkX6Z5K20C-HojI1kT49PuXkVVmgm3UfWhnXOIKzTXN9bwEgNYffLuns14RwnGBGjBRSwMRqt8RP161tefgyqnAErhtN07K7wqOKh-hJ1Dle16Fp520jki3DghbG6oLe0s1H7wAamYXBeee9LG50aFmeabyW9WrgRPYyNaoOEEHHPw0EUYxrLsYw95XAJ2VVsFqq9kjTTxLeSeXLzHNLhS6MVmwP4lVjYDBSwwPDo0ndtwPOXyGZOqL0Bu7EGuUsP_AooyZKBy5zSNQEGJHb0cvg
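By default a token created this way is fairly short-lived (typically about an hour); a longer lifetime can be requested when creating it, for example:
kubectl -n kubernetes-dashboard create token admin-user --duration=24h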
5. Summary
A Kubernetes cluster is very powerful and offers fine-grained control over container orchestration. Despite its many advantages, Kubernetes has a real learning curve for beginners and requires a solid understanding of its concepts and components; operating a cluster also demands a certain amount of experience and resources. Overall, Kubernetes is a powerful tool, particularly well suited to modern application deployment and management that needs high scalability, reliability, and automation.
–
- Container runtime prerequisites: https://kubernetes.io/zh-cn/docs/setup/production-environment/container-runtimes/ ↩︎
- containerd registry configuration: https://github.com/containerd/containerd/blob/main/docs/cri/registry.md ↩︎
- Creating a cluster with kubeadm (init and join): https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/ ↩︎
- Kubernetes addons list (networking and network policy): https://kubernetes.io/zh-cn/docs/concepts/cluster-administration/addons/#networking-and-network-policy ↩︎
- Flannel on GitHub (deploying manually): https://github.com/flannel-io/flannel#deploying-flannel-manually ↩︎
- Taints and tolerations: https://kubernetes.io/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-based-evictions ↩︎
- Kubernetes Service types: https://kubernetes.io/zh-cn/docs/concepts/services-networking/service/ ↩︎
- Kubernetes API reference (Service v1): https://kubernetes.io/zh-cn/docs/reference/kubernetes-api/service-resources/service-v1/ ↩︎
- Dashboard: creating a sample user: https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md ↩︎