Contents
1. Disable SELinux and firewalld
2. Install docker
3. Install etcd
4. Install Kubernetes
5. Install flanneld
6. Configure and enable etcd
7. Configure and enable flanneld
8. Configure and enable the Kubernetes Master node
9. Configure and enable the Kubernetes Node
10. Deploy the KubeDNS add-on
11. Deploy Heapster
12. Deploy the Kubernetes Dashboard
13. View the Kubernetes Dashboard
1. Disable SELinux and firewalld
# sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
# systemctl stop firewalld
# systemctl disable firewalld
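The SELinux change in /etc/selinux/config only takes effect after a reboot; `setenforce 0` switches to permissive mode immediately for the current boot. The substitution itself can be sanity-checked against a throwaway copy of the file (a sketch, safe to run anywhere):

```shell
# Build a stand-in config file, apply the same sed, and confirm the result.
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux-config
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /tmp/selinux-config
grep '^SELINUX=' /tmp/selinux-config    # SELINUX=disabled
```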
2. Install docker
# yum install -y yum-utils device-mapper-persistent-data lvm2
# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# yum list docker-ce --showduplicates | sort -r
# yum -y install docker-ce
Check the version:
# docker --version
Docker version 17.12.1-ce, build 7390fc6
Start docker and enable it at boot:
# systemctl start docker
# systemctl status docker
# systemctl enable docker
Check detailed version information:
# docker version
Client:
Version: 17.12.1-ce
API version: 1.35
Go version: go1.9.4
Git commit: 7390fc6
Built: Tue Feb 27 22:15:20 2018
OS/Arch: linux/amd64
Server:
Engine:
Version: 17.12.1-ce
API version: 1.35 (minimum version 1.12)
Go version: go1.9.4
Git commit: 7390fc6
Built: Tue Feb 27 22:17:54 2018
OS/Arch: linux/amd64
Experimental: false
Use a domestic (Tencent) registry mirror:
# sed -i 's#ExecStart=/usr/bin/dockerd#ExecStart=/usr/bin/dockerd --registry-mirror=https://mirror.ccs.tencentyun.com#' /usr/lib/systemd/system/docker.service
# systemctl daemon-reload
# systemctl restart docker
3. Install etcd
Download etcd:
# curl -L https://storage.googleapis.com/etcd/v3.2.9/etcd-v3.2.9-linux-amd64.tar.gz -o etcd-v3.2.9-linux-amd64.tar.gz
# tar -zxvf etcd-v3.2.9-linux-amd64.tar.gz
# cp etcd-v3.2.9-linux-amd64/etcd* /usr/bin/
Check the version:
# etcd --version
etcd Version: 3.2.9
Git SHA: f1d7dd8
Go Version: go1.8.4
Go OS/Arch: linux/amd64
# etcdctl --version
etcdctl version: 3.2.9
API version: 2
4. Install Kubernetes
Download kubernetes 1.8.1:
# wget https://storage.googleapis.com/kubernetes-release/release/v1.8.1/kubernetes-server-linux-amd64.tar.gz
# tar -zxvf kubernetes-server-linux-amd64.tar.gz
# cd kubernetes/server/bin/
# cp kubectl kube-apiserver kube-scheduler kube-controller-manager kubelet kube-proxy /usr/bin/
Check the version:
# kube-apiserver --version
Kubernetes v1.8.1
5. Install flanneld
Download flanneld:
# curl -L https://github.com/coreos/flannel/releases/download/v0.9.0/flannel-v0.9.0-linux-amd64.tar.gz -o flannel-v0.9.0-linux-amd64.tar.gz
# tar -zxvf flannel-v0.9.0-linux-amd64.tar.gz
# mv flanneld /usr/bin/
# mkdir /usr/libexec/flannel/
# mv mk-docker-opts.sh /usr/libexec/flannel/
Check the version:
# flanneld --version
v0.9.0
6. Configure and enable etcd
1. Configure etcd
A. Create the systemd unit:
# cat > /etc/systemd/system/etcd.service <<EOF
[Unit]
Description=etcd
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos/etcd
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd --config-file /etc/etcd/etcd.conf
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
B. Create the etcd.conf configuration file on each node:
# mkdir -p /var/lib/etcd/
# mkdir -p /etc/etcd/
# export ETCD_NAME=etcd
# export INTERNAL_IP=10.104.246.79 # this node's IP
# cat > /etc/etcd/etcd.conf <<EOF
name: '${ETCD_NAME}'
data-dir: "/var/lib/etcd/"
listen-peer-urls: http://${INTERNAL_IP}:2380
listen-client-urls: http://${INTERNAL_IP}:2379,http://127.0.0.1:2379
initial-advertise-peer-urls: http://${INTERNAL_IP}:2380
advertise-client-urls: http://${INTERNAL_IP}:2379
initial-cluster: "etcd=http://${INTERNAL_IP}:2380"
initial-cluster-token: 'etcd-cluster'
initial-cluster-state: 'new'
EOF
Note:
new: used when bootstrapping a new cluster;
existing: used when joining an existing cluster.
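As an illustration of the existing case, a hypothetical second node (name etcd2, IP 10.104.246.80, both assumed here) would first be registered from the running node with `etcdctl member add etcd2 http://10.104.246.80:2380`, then started with a config like the following (written to a temp path in this sketch):

```shell
# Hypothetical config for a second member joining the cluster above.
cat > /tmp/etcd2.conf <<EOF
name: 'etcd2'
data-dir: "/var/lib/etcd/"
listen-peer-urls: http://10.104.246.80:2380
listen-client-urls: http://10.104.246.80:2379,http://127.0.0.1:2379
initial-advertise-peer-urls: http://10.104.246.80:2380
advertise-client-urls: http://10.104.246.80:2379
initial-cluster: "etcd=http://10.104.246.79:2380,etcd2=http://10.104.246.80:2380"
initial-cluster-token: 'etcd-cluster'
initial-cluster-state: 'existing'
EOF
grep initial-cluster-state /tmp/etcd2.conf    # initial-cluster-state: 'existing'
```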
C. Start etcd
# systemctl start etcd
# systemctl status etcd
# systemctl enable etcd
2. Verify the installation
List the cluster members:
# etcdctl member list
729e9b6a58e46a03: name=etcd peerURLs=http://10.104.246.79:2380 clientURLs=http://10.104.246.79:2379 isLeader=true
Check cluster health:
# etcdctl cluster-health
member 729e9b6a58e46a03 is healthy: got healthy result from http://10.104.246.79:2379
cluster is healthy
7. Configure and enable flanneld
A. Create the systemd unit:
# cat > /etc/systemd/system/flanneld.service <<EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start \$FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF
# cat > /usr/bin/flanneld-start <<EOF
#!/bin/sh
exec /usr/bin/flanneld \\
  -etcd-endpoints=\${FLANNEL_ETCD_ENDPOINTS:-\${FLANNEL_ETCD}} \\
  -etcd-prefix=\${FLANNEL_ETCD_PREFIX:-\${FLANNEL_ETCD_KEY}} \\
  "\$@"
EOF
# chmod 755 /usr/bin/flanneld-start
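The wrapper leans on shell default expansion: `${VAR:-fallback}` uses the newer FLANNEL_ETCD_ENDPOINTS variable when it is set and falls back to the legacy FLANNEL_ETCD name otherwise. A minimal illustration:

```shell
# The fallback applies while the preferred variable is unset...
FLANNEL_ETCD="http://legacy:2379"
unset FLANNEL_ETCD_ENDPOINTS
echo "${FLANNEL_ETCD_ENDPOINTS:-${FLANNEL_ETCD}}"    # http://legacy:2379
# ...and is ignored once it is set.
FLANNEL_ETCD_ENDPOINTS="http://new:2379"
echo "${FLANNEL_ETCD_ENDPOINTS:-${FLANNEL_ETCD}}"    # http://new:2379
```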
B. Create the flannel configuration file:
# etcdctl mkdir /kube/network
# etcdctl set /kube/network/config '{ "Network": "10.254.0.0/16" }'
{ "Network": "10.254.0.0/16" }
# cat > /etc/sysconfig/flanneld <<EOF
FLANNEL_ETCD_ENDPOINTS="http://10.104.246.79:2379"
FLANNEL_ETCD_PREFIX="/kube/network"
EOF
C. Start flanneld
# systemctl start flanneld
# systemctl status flanneld
# systemctl enable flanneld
D. Check the subnet assigned to each node:
# cat /var/run/flannel/subnet.env
FLANNEL_NETWORK=10.254.0.0/16
FLANNEL_SUBNET=10.254.57.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
E. Change the docker bridge to the subnet assigned by flannel:
# export FLANNEL_SUBNET=10.254.57.1/24
# cat > /etc/docker/daemon.json <<EOF
{
"bip" : "$FLANNEL_SUBNET"
}
EOF
Restart docker:
# systemctl daemon-reload
# systemctl restart docker
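Instead of exporting FLANNEL_SUBNET by hand, the value can be sourced from the subnet.env file flannel wrote. A sketch using temp paths so it is safe to try; on a real node the paths would be /var/run/flannel/subnet.env and /etc/docker/daemon.json:

```shell
# Stand-in for the /var/run/flannel/subnet.env file written by flannel.
printf 'FLANNEL_SUBNET=10.254.57.1/24\n' > /tmp/subnet.env
. /tmp/subnet.env                      # pulls FLANNEL_SUBNET into the shell
cat > /tmp/daemon.json <<EOF
{
  "bip" : "${FLANNEL_SUBNET}"
}
EOF
grep bip /tmp/daemon.json              # "bip" : "10.254.57.1/24"
```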
F. Verify that the subnet has been applied:
# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.104.192.1 0.0.0.0 UG 0 0 0 eth0
10.104.192.0 0.0.0.0 255.255.192.0 U 0 0 0 eth0
10.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 flannel0
10.254.57.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 eth0
G. Inspect flannel's entries with etcdctl:
# etcdctl ls /kube/network/subnets
/kube/network/subnets/10.254.57.0-24
# etcdctl -o extended get /kube/network/subnets/10.254.57.0-24
Key: /kube/network/subnets/10.254.57.0-24
Created-Index: 7
Modified-Index: 7
TTL: 86145
Index: 7
{"PublicIP":"10.104.246.79"}
H. Test that the network works:
# ping -c 4 10.254.57.1
PING 10.254.57.1 (10.254.57.1) 56(84) bytes of data.
64 bytes from 10.254.57.1: icmp_seq=1 ttl=64 time=0.042 ms
64 bytes from 10.254.57.1: icmp_seq=2 ttl=64 time=0.037 ms
64 bytes from 10.254.57.1: icmp_seq=3 ttl=64 time=0.039 ms
64 bytes from 10.254.57.1: icmp_seq=4 ttl=64 time=0.038 ms
--- 10.254.57.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.037/0.039/0.042/0.002 ms
8. Configure and enable the Kubernetes Master node
The Kubernetes Master node runs the following components:
kube-apiserver
kube-scheduler
kube-controller-manager
A. Create the shared config file:
# mkdir -p /etc/kubernetes/
# cat > /etc/kubernetes/config <<EOF
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://10.104.246.79:8080"
KUBE_ADMISSION_CONTROL=ServiceAccount
EOF
B. Create the kube-apiserver systemd unit:
# cat > /etc/systemd/system/kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
After=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver \\
        \$KUBE_LOGTOSTDERR \\
        \$KUBE_LOG_LEVEL \\
        \$KUBE_ETCD_SERVERS \\
        \$KUBE_API_ADDRESS \\
        \$KUBE_API_PORT \\
        \$KUBELET_PORT \\
        \$KUBE_ALLOW_PRIV \\
        \$KUBE_SERVICE_ADDRESSES \\
        \$KUBE_ADMISSION_CONTROL \\
        \$KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
C. Create the apiserver configuration file:
# cat > /etc/kubernetes/apiserver <<EOF
KUBE_API_ADDRESS="--advertise-address=10.104.246.79 --bind-address=10.104.246.79 --insecure-bind-address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd-servers=http://10.104.246.79:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS="--enable-swagger-ui=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/log/apiserver.log"
EOF
Note: the biggest difference between the HTTP setup and the HTTPS setup is the --admission-control=ServiceAccount option.
D. Start kube-apiserver
# systemctl start kube-apiserver
# systemctl status kube-apiserver
# systemctl enable kube-apiserver
E. Create the kube-controller-manager systemd unit:
# cat > /etc/systemd/system/kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \\
        \$KUBE_LOGTOSTDERR \\
        \$KUBE_LOG_LEVEL \\
        \$KUBE_MASTER \\
        \$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
F. Create the kube-controller-manager configuration file:
# cat > /etc/kubernetes/controller-manager <<EOF
KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 --service-cluster-ip-range=10.254.0.0/16 --cluster-name=kubernetes"
EOF
G. Start kube-controller-manager
# systemctl start kube-controller-manager
# systemctl status kube-controller-manager
# systemctl enable kube-controller-manager
H. Create the kube-scheduler systemd unit:
# cat > /usr/lib/systemd/system/kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \\
        \$KUBE_LOGTOSTDERR \\
        \$KUBE_LOG_LEVEL \\
        \$KUBE_MASTER \\
        \$KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
I. Create the kube-scheduler configuration file:
# cat > /etc/kubernetes/scheduler <<EOF
KUBE_SCHEDULER_ARGS="--address=127.0.0.1"
EOF
J. Start kube-scheduler
# systemctl start kube-scheduler
# systemctl status kube-scheduler
# systemctl enable kube-scheduler
K. Verify the Master node:
# kubectl get componentstatuses
# kubectl get cs
NAME STATUS MESSAGE ERROR
etcd-0 Healthy {"health": "true"}
scheduler Healthy ok
controller-manager Healthy ok
9. Configure and enable the Kubernetes Node
The Kubernetes Node runs the following components:
kubelet
kube-proxy
A. Create the kubelet systemd unit:
# cat > /usr/lib/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \\
        \$KUBE_LOGTOSTDERR \\
        \$KUBE_LOG_LEVEL \\
        \$KUBELET_ADDRESS \\
        \$KUBELET_PORT \\
        \$KUBELET_HOSTNAME \\
        \$KUBE_ALLOW_PRIV \\
        \$KUBELET_POD_INFRA_CONTAINER \\
        \$KUBELET_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
B. Create the kubelet configuration files:
# mkdir -p /var/lib/kubelet
# export MASTER_ADDRESS=10.104.246.79
# export KUBECONFIG_DIR=/etc/kubernetes
# cat > "${KUBECONFIG_DIR}/kubelet.kubeconfig" <<EOF
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://${MASTER_ADDRESS}:8080/
  name: local
contexts:
- context:
    cluster: local
  name: local
current-context: local
EOF
# cat > /etc/kubernetes/kubelet <<EOF
KUBELET_ADDRESS="--address=${MASTER_ADDRESS}"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=master"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=hub.c.163.com/k8s163/pause-amd64:3.0"
KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubelet.kubeconfig --fail-swap-on=false --cluster-dns=10.254.0.2 --cluster-domain=cluster.local. --serialize-image-pulls=false"
EOF
Note:
--fail-swap-on ## the kubelet refuses to start if swap is enabled on the node (default true; introduced in 1.8)
--cluster-dns=10.254.0.2
--cluster-domain=cluster.local. ## must match the parameters configured for the KubeDNS Pod
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig ## newer versions no longer support the --api-servers mode
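Since --fail-swap-on=false is only a workaround, it is worth checking whether swap is active at all; /proc/swaps lists active swap areas. A quick check, with the privileged step left commented:

```shell
# List active swap areas; just the "Filename ..." header line means swap is off.
cat /proc/swaps
# swapoff -a    # disable swap for the current boot (needs root)
```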
C. Start kubelet
# systemctl start kubelet
# systemctl status kubelet
# systemctl enable kubelet
D. Create the kube-proxy systemd unit:
# cat > /etc/systemd/system/kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \\
        \$KUBE_LOGTOSTDERR \\
        \$KUBE_LOG_LEVEL \\
        \$KUBE_MASTER \\
        \$KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
E. Create the kube-proxy configuration file:
# cat > /etc/kubernetes/proxy <<EOF
KUBE_PROXY_ARGS="--bind-address=10.104.246.79 --hostname-override=10.104.246.79 --cluster-cidr=10.254.0.0/16"
EOF
F. Start kube-proxy
# systemctl start kube-proxy
# systemctl status kube-proxy
# systemctl enable kube-proxy
G. Check node information:
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready <none> 3m v1.8.1
# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master Ready <none> 3m v1.8.1 <none> CentOS Linux 7 (Core) 3.10.0-514.26.2.el7.x86_64 docker://17.12.1-ce
# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
master Ready <none> 4m v1.8.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master
# kubectl version --short
Client Version: v1.8.1
Server Version: v1.8.1
H. Check cluster information:
# kubectl cluster-info
Kubernetes master is running at http://localhost:8080
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
10. Deploy the KubeDNS add-on
The official yaml files live under kubernetes/cluster/addons/dns:
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns
A. Download the Kube-DNS yaml files:
# mkdir dns && cd dns
# curl -O https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/kube-dns.yaml.base
# cp kube-dns.yaml.base kube-dns.yaml
Replace all the images; the default Google registry is blocked in mainland China, so switch to the Aliyun mirror:
# sed -i 's#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g' kube-dns.yaml
Make the following substitutions:
# sed -i "s/__PILLAR__DNS__SERVER__/10.254.0.2/g" kube-dns.yaml
# sed -i "s/__PILLAR__DNS__DOMAIN__/cluster.local/g" kube-dns.yaml
# sed -i '/--domain=cluster.local./a\ - --kube-master-url=http://10.104.246.79:8080' kube-dns.yaml
Note: --kube-master-url must be used to point the pod at the apiserver explicitly; without it the pod also ends up in CrashLoopBackOff.
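The __PILLAR__ markers are plain-text placeholders in the .base manifest, so the sed calls are straightforward substitutions. Shown here on a throwaway sample line (temp file, hypothetical content):

```shell
# Apply the same DNS-server substitution to a one-line sample manifest.
printf 'clusterIP: __PILLAR__DNS__SERVER__\n' > /tmp/kube-dns-sample.yaml
sed -i 's/__PILLAR__DNS__SERVER__/10.254.0.2/g' /tmp/kube-dns-sample.yaml
cat /tmp/kube-dns-sample.yaml    # clusterIP: 10.254.0.2
```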
B. Diff against the original:
# diff kube-dns.yaml kube-dns.yaml.base
33c33
< clusterIP: 10.254.0.2
---
> clusterIP: __PILLAR__DNS__SERVER__
98c98
< image: registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-kube-dns-amd64:1.14.8
---
> image: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
128,129c128
< - --domain=cluster.local.
< - --kube-master-url=http://10.104.246.79:8080
150c149
< image: registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.8
---
> image: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
170c169
< - --server=/cluster.local/127.0.0.1#10053
---
> - --server=/__PILLAR__DNS__DOMAIN__/127.0.0.1#10053
189c188
< image: registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-sidecar-amd64:1.14.8
---
> image: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
202,203c201,202
< - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
< - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
---
> - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.__PILLAR__DNS__DOMAIN__,5,SRV
> - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.__PILLAR__DNS__DOMAIN__,5,SRV
C. Apply the files:
# kubectl create -f .
service "kube-dns" created
serviceaccount "kube-dns" created
configmap "kube-dns" created
deployment "kube-dns" created
D. Check the KubeDNS service:
# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
kube-dns-64bbf87bf7-x6mvq 3/3 Running 0 4m
E. Check cluster information:
# kubectl get service -n kube-system | grep dns
kube-dns ClusterIP 10.254.0.2 <none> 53/UDP,53/TCP 10m
# kubectl cluster-info
Kubernetes master is running at http://localhost:8080
KubeDNS is running at http://localhost:8080/api/v1/namespaces/kube-system/services/kube-dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
F. Inspect the KubeDNS container logs:
# kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c kubedns
# kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c dnsmasq
# kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c sidecar
11. Deploy Heapster
A. Download heapster:
# wget https://codeload.github.com/kubernetes/heapster/tar.gz/v1.5.0-beta.0 -O heapster-1.5.0-beta.tar.gz
# tar -zxvf heapster-1.5.0-beta.tar.gz
# cd heapster-1.5.0-beta.0/deploy/kube-config
# cp rbac/heapster-rbac.yaml influxdb/
# cd influxdb/
# ls
grafana.yaml heapster-rbac.yaml heapster.yaml influxdb.yaml
B. Modify the yaml files:
# sed -i 's#gcr.io#registry.cn-hangzhou.aliyuncs.com#' *.yaml
# sed -i 's#google_containers#google-containers#' heapster.yaml
# sed -i 's#https://kubernetes.default#http://10.104.246.79:8080?inClusterConfig=false#' heapster.yaml
Note: heapster connects to the apiserver over https by default; this changes it to plain http.
C. Apply everything in the influxdb directory:
# kubectl create -f .
deployment "monitoring-grafana" created
service "monitoring-grafana" created
clusterrolebinding "heapster" created
serviceaccount "heapster" created
deployment "heapster" created
service "heapster" created
deployment "monitoring-influxdb" created
service "monitoring-influxdb" created
D. Check the result:
# kubectl get deployments -n kube-system | grep -E 'heapster|monitoring'
heapster 1 1 1 1 5m
monitoring-grafana 1 1 1 1 26m
monitoring-influxdb 1 1 1 1 26m
E. Check the Pods:
# kubectl get pods -n kube-system | grep -E 'heapster|monitoring'
heapster-d55bf744b-r4tbq 1/1 Running 0 16m
monitoring-grafana-65445db678-6jgfn 1/1 Running 0 37m
monitoring-influxdb-59944dd94b-7gbmb 1/1 Running 0 37m
# kubectl get svc -n kube-system | grep -E 'heapster|monitoring'
heapster ClusterIP 10.254.106.108 <none> 80/TCP 16m
monitoring-grafana ClusterIP 10.254.119.93 <none> 80/TCP 37m
monitoring-influxdb ClusterIP 10.254.205.189 <none> 8086/TCP 37m
F. Check cluster information:
# kubectl cluster-info
Kubernetes master is running at http://localhost:8080
Heapster is running at http://localhost:8080/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at http://localhost:8080/api/v1/namespaces/kube-system/services/kube-dns/proxy
monitoring-grafana is running at http://localhost:8080/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
monitoring-influxdb is running at http://localhost:8080/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
12. Deploy the Kubernetes Dashboard
A. Download the yaml file
We use the alternative version, which does not require certificates:
# curl -O https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml
B. Replace the images:
# sed -i 's#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#' kubernetes-dashboard.yaml
C. Add the apiserver address:
# sed -i '/--apiserver-host/a\ - --apiserver-host=http:\/\/10.104.246.79:8080' kubernetes-dashboard.yaml
D. Apply the file:
# kubectl create -f kubernetes-dashboard.yaml
serviceaccount "kubernetes-dashboard" created
role "kubernetes-dashboard-minimal" created
rolebinding "kubernetes-dashboard-minimal" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
E. Check the kubernetes-dashboard service:
# kubectl get pods -n kube-system | grep dashboard
kubernetes-dashboard-7fd9954b4d-shdg7 1/1 Running 0 20s
Note: with 1.7 the kubernetes-dashboard address no longer shows up in kubectl cluster-info (1.6.3 still shows it).
Note: 1.7.0 must be accessed via http://localhost:8080/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/. 1.7.1 accepts that URL as well as http://localhost:8080/ui, which redirects automatically.
13. View the Kubernetes Dashboard
Open http://localhost:8080/ui in a browser.
Original article: http://blog.51cto.com/hzde0128/2087266