Prerequisites
- OS: CentOS 8
- Memory: 3 GB or more is recommended (this matters; with less memory, cluster initialization may fail). A quick check is shown below.
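Before starting, you may want to confirm the host meets the memory recommendation. A minimal check (not part of the original steps):
# show total and available memory in human-readable units;
# "total" should be at least 3G for kubeadm init to complete reliably
free -h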
1. Check the OS version
[root@localhost ~]# cat /etc/centos-release
CentOS Linux release 8.2.2004 (Core)
2. Configure a static IP address
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="bce8c979-9f30-4b67-819e-cae1ef0b70c0"
DEVICE="ens33"
ONBOOT="yes"
IPADDR="192.168.0.127"
NETMASK="255.255.255.0"
GATEWAY="192.168.0.1"
DNS1="8.8.8.8"
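Editing the interface file does not apply the new settings on its own. One hedged way to apply them with NetworkManager is shown below (ens33 is the interface from the file above; rebooting the host also works):
# re-read the ifcfg files and re-activate the connection so the static IP takes effect
nmcli connection reload
nmcli connection up ens33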
3. Add the Aliyun yum repository
[root@localhost ~]# rm -rfv /etc/yum.repos.d/*
[root@localhost ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-8.repo
Note: the Aliyun mirror must be added here; otherwise the packages and images used later in this guide may fail to download.
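After replacing the repo files, it can help to rebuild the yum metadata cache to confirm the new repository is reachable; a minimal sketch, not part of the original steps:
# drop the old metadata and rebuild the cache against the Aliyun mirror
yum clean all
yum makecache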
4. Edit /etc/hosts
[root@localhost ~]# vim /etc/hosts
127.0.0.1     localhost localhost.localdomain localhost4 localhost4.localdomain4
::1           localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.127 master01.paas.com master01
5. Disable swap and comment out the swap entry in /etc/fstab
[root@localhost ~]# swapoff -a
[root@localhost ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Thu May 13 00:46:53 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
UUID=d5aed907-ab30-47d1-af47-f76abea61f07 /     xfs  defaults 0 0
UUID=0e24ccd7-4b02-4873-bfa8-83ba1f1e676b /boot ext4 defaults 1 2
#UUID=dedd03a8-0500-4383-b868-cec55f4dd8bd swap swap defaults 0 0
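swapoff -a only disables swap for the current boot; the fstab entry must stay commented out (as shown above) or swap returns after a reboot. A hedged one-liner for commenting it automatically, plus a verification:
# comment out any uncommented swap entry so swap stays disabled across reboots
sed -ri 's/^([^#].*\bswap\b.*)$/#\1/' /etc/fstab
# verify that no swap is active (total should be 0)
free -m | grep -i swap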
6. Configure kernel parameters so that bridged IPv4 traffic is passed to the iptables chains
[root@localhost ~]# cat > /etc/sysctl.d/k8s.conf << EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
[root@localhost ~]# sysctl --system
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-coredump.conf ...
kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.promote_secondaries = 1
net.core.default_qdisc = fq_codel
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/50-libkcapi-optmem_max.conf ...
net.core.optmem_max = 81920
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
kernel.pid_max = 4194304
* Applying /usr/lib/sysctl.d/60-libvirtd.conf ...
fs.aio-max-nr = 1048576
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
* Applying /etc/sysctl.conf ...
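Notice that the two net.bridge keys do not appear in the output above; they only take effect once the br_netfilter kernel module is loaded. A minimal check, assuming the module is available on this kernel:
# load the bridge netfilter module and confirm the sysctl values are applied
modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables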
7. Install common packages
[root@localhost ~]# yum install vim bash-completion net-tools gcc -y
8. Install docker-ce from the Aliyun repository
[root@localhost ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@localhost ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@localhost ~]# yum -y install docker-ce
If the following error appears while installing docker-ce:
[root@localhost ~]# yum -y install docker-ce
CentOS-8 - Base - mirrors.aliyun.com        14 kB/s  | 3.8 kB  00:00
CentOS-8 - Extras - mirrors.aliyun.com      6.4 kB/s | 1.5 kB  00:00
CentOS-8 - AppStream - mirrors.aliyun.com   16 kB/s  | 4.3 kB  00:00
Docker CE Stable - x86_64                   40 kB/s  |  22 kB  00:00
Error:
 Problem: package docker-ce-3:19.03.8-3.el7.x86_64 requires containerd.io >= 1.2.2-3, but none of the providers can be installed
  - cannot install the best candidate for the job
  - package containerd.io-1.2.10-3.2.el7.x86_64 is excluded
  - package containerd.io-1.2.13-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.2-3.3.el7.x86_64 is excluded
  - package containerd.io-1.2.2-3.el7.x86_64 is excluded
  - package containerd.io-1.2.4-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.5-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.6-3.3.el7.x86_64 is excluded
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
Workaround: install containerd.io manually, then retry the docker-ce install.
[root@localhost ~]# wget https://download.docker.com/linux/centos/7/x86_64/edge/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
[root@localhost ~]# yum install containerd.io-1.2.6-3.3.el7.x86_64.rpm
Once docker-ce has installed successfully, start and enable the service:
[root@localhost ~]# systemctl start docker
[root@localhost ~]# systemctl enable docker
Add the Aliyun Docker registry mirror (accelerator)
Log in to your Aliyun account to obtain your personal mirror address at: https://cr.console.aliyun.com/cn-hangzhou/instances/mirrors
For Docker clients newer than 1.10.0, you can enable the accelerator by editing the daemon configuration file /etc/docker/daemon.json:
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://lso20XXX.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
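To confirm the mirror is actually in use, a hedged check (the mirror URL shown will be the placeholder from the configuration above):
# "Registry Mirrors" in the docker info output should list the configured accelerator
docker info | grep -A 1 "Registry Mirrors"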
9. Install kubectl, kubelet, and kubeadm
Add the Kubernetes yum repository first:
[root@localhost ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
10. Install the packages
[root@localhost ~]# yum -y install kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
systemctl enable kubelet && systemctl start kubelet
11. Initialize the cluster (this step takes a while; be patient)
[root@localhost ~]# kubeadm init --kubernetes-version=1.18.0 \
> --apiserver-advertise-address=192.168.0.127 \
> --image-repository registry.aliyuncs.com/google_containers \
> --service-cidr=10.10.0.0/16 --pod-network-cidr=10.122.0.0/16
The Pod network CIDR is 10.122.0.0/16, and the API server advertise address is the master node's own IP.
This step is critical: by default, kubeadm pulls the required images from k8s.gcr.io, which is not reachable from mainland China, so --image-repository is used to point it at the Aliyun image registry instead.
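If you want to verify that the images can be fetched before (or instead of re-running) the full init, a hedged sketch using the same repository and version:
# pre-pull the control-plane images from the Aliyun mirror;
# any failure here would also make kubeadm init fail
kubeadm config images pull \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.18.0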
[apiclient] All control plane components are healthy after 20.502515 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master01.paas.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master01.paas.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: fvdmel.61fjcb4ej591sujj
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.127:6443 --token fvdmel.61fjcb4ej591sujj \
    --discovery-token-ca-cert-hash sha256:f36a0ec6acd67259e8f86a6a882bdf445685341a4c2b52cebc7e9651d3de7ec6
If you see the output above, the initialization succeeded.
Record the final part of the output (the kubeadm join command); it must be run on the other nodes when they join the Kubernetes cluster.
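The bootstrap token in that command expires (by default after 24 hours). If it has expired or the command was lost, a fresh one can be generated on the master; a minimal sketch:
# print a new, complete join command with a fresh bootstrap token
kubeadm token create --print-join-command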
12. Configure kubectl as instructed by the init output
[root@localhost ~]# mkdir -p $HOME/.kube
[root@localhost ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@localhost ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run the following command to enable kubectl auto-completion:
[root@localhost ~]# source <(kubectl completion bash)
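This only enables completion for the current shell session. One common way to make it persistent (an assumption, not part of the original steps) is to append it to the shell profile:
# load kubectl bash completion in every new shell
echo 'source <(kubectl completion bash)' >> ~/.bashrc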
13. Check the nodes and pods
[root@localhost ~]# kubectl get node
NAME                STATUS     ROLES    AGE    VERSION
master01.paas.com   NotReady   master   5m4s   v1.18.0
[root@localhost ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
kube-system   coredns-7ff77c879f-g9cqf                    0/1     Pending   0          7m30s
kube-system   coredns-7ff77c879f-st5h7                    0/1     Pending   0          7m30s
kube-system   etcd-master01.paas.com                      1/1     Running   0          7m41s
kube-system   kube-apiserver-master01.paas.com            1/1     Running   0          7m41s
kube-system   kube-controller-manager-master01.paas.com   1/1     Running   0          7m41s
kube-system   kube-proxy-bb58h                            1/1     Running   0          7m30s
kube-system   kube-scheduler-master01.paas.com            1/1     Running   0          7m41s
14. The node shows NotReady because the CoreDNS pods have not started; the cluster is still missing a network (CNI) plugin. Install Calico:
[root@localhost ~]# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created
Check the pods and nodes again
[root@localhost ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-6566c5b7d8-hcm8j    1/1     Running   0          2m32s
kube-system   calico-node-hv6wl                           1/1     Running   0          2m32s
kube-system   coredns-7ff77c879f-g9cqf                    1/1     Running   0          10m
kube-system   coredns-7ff77c879f-st5h7                    1/1     Running   0          10m
kube-system   etcd-master01.paas.com                      1/1     Running   0          11m
kube-system   kube-apiserver-master01.paas.com            1/1     Running   0          11m
kube-system   kube-controller-manager-master01.paas.com   1/1     Running   0          11m
kube-system   kube-proxy-bb58h                            1/1     Running   0          10m
kube-system   kube-scheduler-master01.paas.com            1/1     Running   0          11m
If some pods have not started successfully yet, you can restart the kubelet with the command below:
[root@localhost ~]# systemctl restart kubelet
Then query again:
[root@localhost ~]# kubectl get pod --all-namespaces
Retry until all pods have started successfully.
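Instead of re-running the query by hand, you may prefer to keep an eye on the pods until they all reach Running; a minimal sketch:
# refresh the pod list every few seconds until every pod is Running
watch -n 5 kubectl get pod --all-namespaces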
15. Install kubernetes-dashboard
The official dashboard manifest does not expose the service via NodePort. Download the YAML file locally and change the Service definition to use NodePort.
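One way to fetch the manifest is sketched below; the tag and path are assumptions based on the image version (v2.0.0-rc7) used in the manifest that follows:
# download the upstream dashboard manifest for local editing
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml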
Around line 40, add:
type: NodePort
Around line 44, add:
nodePort: 30000
The full manifest is as follows:
# Copyright 2018 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0-rc7
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "beta.kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.4
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "beta.kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
[root@localhost soft]# kubectl create -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
Check the pods and services
[root@localhost soft]# kubectl get pod --all-namespaces
NAMESPACE              NAME                                        READY   STATUS    RESTARTS   AGE
kube-system            calico-kube-controllers-6566c5b7d8-hcm8j    1/1     Running   0          17m
kube-system            calico-node-hv6wl                           1/1     Running   0          17m
kube-system            coredns-7ff77c879f-g9cqf                    1/1     Running   0          25m
kube-system            coredns-7ff77c879f-st5h7                    1/1     Running   0          25m
kube-system            etcd-master01.paas.com                      1/1     Running   0          25m
kube-system            kube-apiserver-master01.paas.com            1/1     Running   0          25m
kube-system            kube-controller-manager-master01.paas.com   1/1     Running   0          25m
kube-system            kube-proxy-bb58h                            1/1     Running   0          25m
kube-system            kube-scheduler-master01.paas.com            1/1     Running   0          25m
kubernetes-dashboard   dashboard-metrics-scraper-dc6947fbf-qhh7s   1/1     Running   0          2m27s
kubernetes-dashboard   kubernetes-dashboard-5d4dc8b976-nklpk       1/1     Running   0          2m27s
As before, if some services have not started, restart the kubelet:
systemctl restart kubelet
Access the dashboard from a browser
https://192.168.0.127:30000/
Note that it must be an HTTPS request.
The browser will warn that the connection is risky; ignore the warning, click Advanced, and proceed to the site. Then retrieve the login token:
[root@localhost ~]# find / -name kubernetes-dashboard-token*
/var/lib/kubelet/pods/5c6451f1-94c3-4061-be65-267467a24b8c/volumes/kubernetes.io~secret/kubernetes-dashboard-token-njb8k
/var/lib/kubelet/pods/521c3913-a71b-498e-b7e3-8fa9c5ffe282/volumes/kubernetes.io~secret/kubernetes-dashboard-token-njb8k
[root@localhost ~]# kubectl describe secrets -n kubernetes-dashboard kubernetes-dashboard-token-njb8k | grep token | awk 'NR==3{print $2}'
eyJhbGciOiJSUzI1NiIsImtpZCI6IlQyU3l3Z09PWnZ6ajJwdzNJTUlISTJrSHZmUkE0ckhuSnMxMnBpNDVDV1UifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1uamI4ayIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjE3MTk0ZmNlLTM1YWYtNGY3MC1iYWI5LWUzZTBkMzRiOTMwZCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.HtGK9YDQlS4dBalBbBhydQzmInyGiYFhGPi8AJxGpVeU5kap_NLU4PKDA3vvd2xaQd8g6KtFl75fL9AgMcDetzzTLOJwWWNDxMkq9qeSQojLN9380XP4XQhkIFu5GxSLYEnGNdjUAS_Y9D7WVNJjJBjL-vEQKsxX6Gj7ybNVIJk82T4E0cc-YBydyfWzSRVYDu6YoFSx_GtdjBYknHM2VsZeimS7_2ojdrWptS4QoBhF1QgtvYRP1ggwm3i8l_7lT3-P6Efh-YVDLW3TXtnlKpZtRYz2XbrUkGrGIev-ihxSKEvsYREKL28SR0geDq3vxWMq3RNLRPYak4Q_XtxKsQ
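An alternative way to retrieve the same token without searching the filesystem, sketched under the assumption that the secret name still starts with kubernetes-dashboard-token:
# look up the dashboard ServiceAccount token secret and print its token field
kubectl -n kubernetes-dashboard describe secret \
    $(kubectl -n kubernetes-dashboard get secret | grep kubernetes-dashboard-token | awk '{print $1}') | grep '^token'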
Find and copy the token above.
Paste it into the browser login page.
Click Confirm to log in to the dashboard.
