
Installing Kubernetes

By 水哥 (请叫我水哥), 2019-04-25

Contents

1. Understanding Kubernetes
   1.1 Kubernetes Architecture
2. Deploying a Kubernetes Cluster
   1. Server Planning
   2. Server Preparation (per the official installation prerequisites)
      2.1 Verify that every node's MAC address and product UUID are unique
      2.2 Check that the required ports are free
      2.3 Disable SELinux, the firewall, and swap; configure /etc/hosts and yum
      2.4 Other Settings
      2.5 Install Docker and Configure a Registry Mirror
3. Installing the Kubernetes Packages
   3.1 Installing the Packages
   3.2 Bootstrapping the Master with kubeadm init (run on the master)
   3.3 Installing a Pod Network Add-on
   3.4 Other Settings (Optional)
   3.5 Viewing Cluster Information and Status

1. Understanding Kubernetes

Kubernetes is Google's open-source container cluster management project. Implemented in Go, it aims to provide a powerful yet easy-to-use platform for deploying and operating container-based applications in production, and it is the leading distributed-architecture solution built on containers. See the official site for details: https://kubernetes.io/zh/

1.1 Kubernetes Architecture

Kubernetes uses a master/worker distributed architecture: a cluster consists of Master Nodes and Worker Nodes, together with the kubectl command-line client and optional add-ons.


2. Deploying a Kubernetes Cluster

1. Server Planning

| Hostname | IP Address     | OS Version | Software Versions                                          |
| :------: | -------------- | ---------- | ---------------------------------------------------------- |
| master   | 192.168.48.150 | CentOS 7.4 | kubelet-1.13.3 kubeadm-1.13.3 kubectl-1.13.3 docker-1.13.1 |
| node1    | 192.168.48.151 | CentOS 7.4 | kubelet-1.13.3 kubeadm-1.13.3 kubectl-1.13.3 docker-1.13.1 |
| node2    | 192.168.48.152 | CentOS 7.4 | kubelet-1.13.3 kubeadm-1.13.3 kubectl-1.13.3 docker-1.13.1 |

2. Server Preparation (per the official installation prerequisites)
2.1 Verify that every node's MAC address and product UUID are unique

# Check that the product UUID is unique on each node
[root@master ~]# cat /sys/class/dmi/id/product_uuid
C8C84D56-0854-9260-6BC7-D4198BBA8DE2
[root@node1 ~]# cat /sys/class/dmi/id/product_uuid
E8DF4D56-5E2C-E4CF-B0E4-197C0CE086E4
[root@node2 ~]# cat /sys/class/dmi/id/product_uuid
65F74D56-6152-225E-F42F-8FCEFB0A2B0F
# Check that the MAC addresses are unique (mainly a concern when VMs were copied or cloned, which duplicates MACs)
Use ip link or ifconfig -a on each node and compare; the full output is omitted here. A short extraction sketch follows.
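
For example, to print just the MAC addresses (a minimal sketch; the field positions assume the usual ip -o link output, where the address is followed by "brd <addr>"):

# One line per interface: interface name followed by its MAC address
ip -o link show | awk '{print $2, $(NF-2)}'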

2.2 Check that the required ports are free

Master Node

| Protocol | Direction | Port Range | Purpose                 | Used By              |
| -------- | --------- | ---------- | ----------------------- | -------------------- |
| TCP      | Inbound   | 6443*      | Kubernetes API server   | All                  |
| TCP      | Inbound   | 2379-2380  | etcd server client API  | kube-apiserver, etcd |
| TCP      | Inbound   | 10250      | Kubelet API             | Self, Control plane  |
| TCP      | Inbound   | 10251      | kube-scheduler          | Self                 |
| TCP      | Inbound   | 10252      | kube-controller-manager | Self                 |

Worker Node

| Protocol | Direction | Port Range  | Purpose             | Used By             |
| -------- | --------- | ----------- | ------------------- | ------------------- |
| TCP      | Inbound   | 10250       | Kubelet API         | Self, Control plane |
| TCP      | Inbound   | 30000-32767 | NodePort Services** | All                 |
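
To confirm that none of these ports are already taken, you can check the listening TCP sockets on each node (ss ships with CentOS 7; no output means the ports are free):

# Look for existing listeners on the control-plane ports (adjust the list for worker nodes)
ss -lntp | egrep ':(6443|2379|2380|10250|10251|10252)\s'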

2.3 Disable SELinux, the firewall, and swap; configure /etc/hosts and yum

# Configure on both the master and the worker nodes

# Put SELinux into permissive mode (setenforce takes effect immediately; the sed makes it persist across reboots)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# Disable the firewall: either trust everything by default...
firewall-cmd --set-default-zone=trusted
# ...or simply stop and disable firewalld
systemctl stop firewalld; systemctl disable firewalld
# /etc/hosts configuration
[root@master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.48.150 master.cs.com master
192.168.48.151 node1.cs.com node1
192.168.48.152 node2.cs.com node2
# Disable swap
swapon -s                  # list the active swap devices
swapoff /dev/dm-1          # turn the swap device off (swapoff -a disables them all)
vim /etc/fstab
#/dev/mapper/centos-swap swap swap defaults 0 0    # comment out the swap entry so it is not mounted at boot

# Configure yum with the Aliyun base and EPEL repos (fetch them from Aliyun yourself; a sketch follows this section) plus a Kubernetes repo:
[root@master yum.repos.d]# cat k8s.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
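
The Aliyun base and EPEL repo files mentioned above can be fetched like this (URLs as published on the Aliyun mirror site; verify them before relying on this):

# Replace the default repos with Aliyun mirrors, then rebuild the cache
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo https://mirrors.aliyun.com/repo/epel-7.repo
yum clean all && yum makecache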

2.4 Other Settings

# Configure on both the master and the worker nodes
# br_netfilter must be loaded, otherwise the bridge sysctls below do not exist
modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system    # note: sysctl -p does not read /etc/sysctl.d/, so use --system
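
A quick sanity check that the settings are active:

# Both commands should produce output; the sysctl should print "= 1"
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables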

2.5 Install Docker and Configure a Registry Mirror

# Install docker from the CentOS repos
yum install docker -y
# Configure the Aliyun registry mirror
[root@master ~]# cat /etc/docker/daemon.json  
{
"registry-mirrors": ["https://ipm7fo92.mirror.aliyuncs.com"]
}
# Start docker and enable it at boot
systemctl start docker ; systemctl enable docker
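
To confirm the mirror is actually in use, docker info lists the configured registry mirrors (output format varies slightly across Docker versions):

docker info 2>/dev/null | grep -A1 -i 'registry mirrors'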

# Several Kubernetes components run as containers, so their images must be available locally.
# Since k8s.gcr.io is unreachable from this network, the images were downloaded in advance and
# packed into images.tar; after loading them, they must be retagged (see below).
# Load the Kubernetes images
docker load -i images.tar

# Without the images, kubeadm init fails roughly like this:
[preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-apiserver-amd64:v1.13.3]: exit status 1
[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-controller-manager-amd64:v1.13.3]: exit status 1
[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-scheduler-amd64:v1.13.3]: exit status 1
[ERROR ImagePull]: failed to pull image [k8s.gcr.io/etcd-amd64:3.2.18]: exit status 1
[ERROR ImagePull]: failed to pull image [k8s.gcr.io/coredns:1.1.3]: exit status 1
[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-proxy-amd64:v1.13.3]: exit status 1
# Retag the images so their names match exactly what kubeadm init asks for (see the errors above). You can also defer this step until init fails and take the names from its error messages.
[root@master ~]# cat xx.sh
#!/bin/bash
docker tag k8s.gcr.io/kube-apiserver:v1.13.3 k8s.gcr.io/kube-apiserver-amd64:v1.13.3
docker tag k8s.gcr.io/kube-controller-manager:v1.13.3 k8s.gcr.io/kube-controller-manager-amd64:v1.13.3
docker tag k8s.gcr.io/kube-scheduler:v1.13.3 k8s.gcr.io/kube-scheduler-amd64:v1.13.3
docker tag k8s.gcr.io/etcd:3.2.24 k8s.gcr.io/etcd-amd64:3.2.18
docker tag k8s.gcr.io/coredns:1.2.6 k8s.gcr.io/coredns:1.1.3
docker tag k8s.gcr.io/kube-proxy:v1.13.3 k8s.gcr.io/kube-proxy-amd64:v1.13.3
# The loaded Kubernetes images
[root@master ~]# docker images
REPOSITORY                                         TAG                 IMAGE ID           CREATED             SIZE
k8s.gcr.io/kube-proxy-amd64                       v1.13.3             98db19758ad4        2 months ago        80.3 MB
k8s.gcr.io/kube-proxy                             v1.13.3             98db19758ad4        2 months ago        80.3 MB
k8s.gcr.io/kube-apiserver-amd64                   v1.13.3             fe242e556a99        2 months ago        181 MB
k8s.gcr.io/kube-apiserver                         v1.13.3             fe242e556a99        2 months ago        181 MB
k8s.gcr.io/kube-controller-manager-amd64           v1.13.3             0482f6400933        2 months ago        146 MB
k8s.gcr.io/kube-controller-manager                 v1.13.3             0482f6400933        2 months ago        146 MB
k8s.gcr.io/kube-scheduler-amd64                   v1.13.3             3a6f709e97a0        2 months ago        79.6 MB
k8s.gcr.io/kube-scheduler                         v1.13.3             3a6f709e97a0        2 months ago        79.6 MB
docker.io/grafana/grafana                          6.0.0-beta1         21b3e0a4f0b9        2 months ago        242 MB
quay.io/coreos/prometheus-config-reloader         v0.28.0             39cdf06d5528        2 months ago        21.3 MB
quay.io/coreos/prometheus-operator                 v0.28.0             56f018290908        2 months ago        44.9 MB
quay.io/coreos/flannel                             v0.11.0-amd64       ff281650a721        2 months ago        52.6 MB
quay.io/coreos/kube-rbac-proxy                     v0.4.1             70eeaa7791f2        2 months ago        41.3 MB
quay.io/prometheus/alertmanager                   v0.16.0             a91ca27a028f        2 months ago        42.5 MB
quay.io/coreos/kube-state-metrics                 v1.5.0             91599517197a        3 months ago        31.8 MB
quay.io/coreos/k8s-prometheus-adapter-amd64       v0.4.1             9dcf4b7170ef        4 months ago        60.7 MB
quay.io/prometheus/node-exporter                   v0.17.0             b3e7f67a1480        4 months ago        21 MB
quay.io/prometheus/prometheus                     v2.5.0             42e450d926a8        5 months ago        99.8 MB
k8s.gcr.io/coredns                                 1.1.3               f59dcacceff4        5 months ago        40 MB
k8s.gcr.io/coredns                                 1.2.6               f59dcacceff4        5 months ago        40 MB
quay.io/calico/node                               v3.2.2             c703fef69d2d        6 months ago        75.2 MB
quay.io/calico/kube-controllers                   v3.2.2             403d39a0fd89        6 months ago        60.3 MB
quay.io/calico/cni                                 v3.2.2             e8d97609112a        6 months ago        75.1 MB
k8s.gcr.io/etcd                                    3.2.24             3cab8e1b9802        6 months ago        220 MB
k8s.gcr.io/etcd-amd64                              3.2.18             3cab8e1b9802        6 months ago        220 MB
k8s.gcr.io/kubernetes-dashboard-amd64             v1.10.0             0dab2435c100        7 months ago        122 MB
quay.io/coreos/etcd                               v3.3.9             58c02f00d03b        8 months ago        39.2 MB
gcr.io/google_containers/heapster-amd64           v1.5.3             f57c75cd7b0a        11 months ago       75.3 MB
k8s.gcr.io/pause                                   3.1                 da86e6ba6ca1        16 months ago       742 kB
gcr.io/google_containers/heapster-influxdb-amd64   v1.3.3             577260d221db        19 months ago       12.5 MB
gcr.io/google_containers/heapster-grafana-amd64   v4.4.3             8cb3de219af7        19 months ago       152 MB
quay.io/coreos/configmap-reload                   v0.0.1             3129a2ca29d7        2 years ago         4.79 MB
quay.io/coreos/addon-resizer                       1.0                 9ca330d8f890        3 years ago         36.8 MB
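
If you do not have an images.tar at hand, a common workaround is to pull the images from a mirror registry and retag them to the k8s.gcr.io names kubeadm expects. A sketch, assuming the Aliyun google_containers mirror carries these tags (verify before use):

#!/bin/bash
# Hypothetical mirror prefix; substitute any registry you can reach
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver-amd64:v1.13.3 kube-controller-manager-amd64:v1.13.3 \
           kube-scheduler-amd64:v1.13.3 kube-proxy-amd64:v1.13.3 \
           etcd-amd64:3.2.18 coredns:1.1.3 pause:3.1; do
    docker pull $MIRROR/$img                     # fetch from the reachable mirror
    docker tag  $MIRROR/$img k8s.gcr.io/$img     # rename to the expected k8s.gcr.io name
    docker rmi  $MIRROR/$img                     # drop the mirror alias
done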

3. Installing the Kubernetes Packages
3.1 Installing the Packages

# Install on both the master and the worker nodes

yum install -y kubelet-1.13.3 kubeadm-1.13.3 kubectl-1.13.3 --disableexcludes=kubernetes   # pin the versions, otherwise yum installs the latest
systemctl restart kubelet ; systemctl enable kubelet   # start the kubelet service
# Note: kubelet cannot fully start yet; it only becomes active once the configuration below is in place
[root@master ~]# systemctl is-active kubelet
activating          # all three nodes report activating
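
To see why kubelet keeps cycling at this stage, its journal shows it waiting for the configuration that kubeadm init will generate:

journalctl -u kubelet --no-pager | tail -n 20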

3.2 Bootstrapping the Master with kubeadm init (run on the master)

kubeadm init --kubernetes-version=v1.13.3 --pod-network-cidr=10.244.0.0/16   # pin the version and set the pod network CIDR; with this flag the control plane automatically allocates a CIDR to each node

# kubeadm init prints output like the following (this sample is taken from the official documentation):
[init] Using Kubernetes version: vX.Y.Z
[preflight] Running pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubeadm-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.138.0.4]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 39.511972 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node master as master by adding a label and a taint
[markmaster] Master master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: <token>
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a Pod network to the cluster.
Run "kubectl apply -f [Podnetwork].yaml" with one of the addon options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>

# To let a regular user run kubectl, run:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# As root, you can instead run:
export KUBECONFIG=/etc/kubernetes/admin.conf
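
To make that setting survive new logins (a convenience step, not part of the original procedure):

echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bash_profile
source ~/.bash_profile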

# Save the kubeadm join command printed by kubeadm init; you need it to add nodes to the cluster.
# The token mutually authenticates the master and joining nodes; anyone who knows it can add nodes, so keep it secret.
# For example, my cluster's join command looks like this:
kubeadm join 192.168.48.150:6443 --token 5v7y5a.atgep3l20qevj8ww --discovery-token-ca-cert-hash sha256:bcfa877d1b4072f722229226a704140cf115d3cca6cfdd2ef6e35063b8bc9b60
# If you lose it, you can print a fresh join command with kubeadm token:
[root@master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.48.150:6443 --token hgysdf.uguxkg4hh1uohdjb --discovery-token-ca-cert-hash sha256:bcfa877d1b4072f722229226a704140cf115d3cca6cfdd2ef6e35063b8bc9b60
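
The --discovery-token-ca-cert-hash value can also be recomputed from the cluster CA certificate, using the command given in the kubeadm documentation:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'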

# Check the cluster state; at this point only the master is present
[root@master ~]# kubectl get nodes
NAME           STATUS   ROLES   AGE     VERSION
master.cs.com   NotReady   master   6h42m   v1.13.3

# Join each node to the cluster by running the following on it:
kubeadm join 192.168.48.150:6443 --token 5v7y5a.atgep3l20qevj8ww --discovery-token-ca-cert-hash sha256:bcfa877d1b4072f722229226a704140cf115d3cca6cfdd2ef6e35063b8bc9b60

# Check the cluster state again; the nodes have joined
[root@master ~]# kubectl get nodes
NAME           STATUS   ROLES   AGE     VERSION
master.cs.com   NotReady   master   6h42m   v1.13.3
node1.cs.com    NotReady   <none>   6h35m   v1.13.3
node2.cs.com    NotReady   <none>   6h35m   v1.13.3   # pods must now talk across hosts, so a network add-on is required before the nodes turn Ready

# kubelet has now reached the active state
[root@master ~]# systemctl is-active kubelet
active          

3.3 Installing a Pod Network Add-on

A pod network add-on must be installed before your pods can communicate with each other.

The network must be deployed before any applications; CoreDNS will not start until a network is installed. kubeadm only supports Container Network Interface (CNI) based networks and does not support kubenet.

Several network add-ons are supported; this deployment uses Flannel:

# Deploy the Flannel network
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
# Check that the coredns pods are Running; the cluster nodes only become Ready after that
[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                   READY   STATUS   RESTARTS   AGE
kube-system   coredns-594954f7fb-5fpml                1/1     Running   1         6h54m
kube-system   coredns-594954f7fb-hmgs9                1/1     Running   1         6h54m
kube-system   etcd-master.cs.com                      1/1     Running   1         6h53m
kube-system   kube-apiserver-master.cs.com            1/1     Running   1         6h53m
kube-system   kube-controller-manager-master.cs.com   1/1     Running   1         6h53m
kube-system   kube-flannel-ds-amd64-278v5             1/1     Running   3         6h42m
kube-system   kube-flannel-ds-amd64-hg2dc             1/1     Running   1         6h42m
kube-system   kube-flannel-ds-amd64-hrszl             1/1     Running   1         6h42m
kube-system   kube-proxy-4nn85                        1/1     Running   1         6h54m
kube-system   kube-proxy-bmzcq                        1/1     Running   1         6h47m
kube-system   kube-proxy-dftgb                        1/1     Running   1         6h47m
kube-system   kube-scheduler-master.cs.com            1/1     Running   1         6h53m
# Check the cluster state: everything is Ready now
[root@master ~]# kubectl get nodes
NAME           STATUS   ROLES   AGE     VERSION
master.cs.com   Ready   master   6h56m   v1.13.3
node1.cs.com   Ready   <none>   6h49m   v1.13.3
node2.cs.com   Ready   <none>   6h49m   v1.13.3

3.4 Other Settings (Optional)

# Configure on every node
[root@master ~]# cat /etc/profile | grep source
source <(kubectl completion bash)    # enables kubectl tab completion; the bash-completion package must be installed first (see below)
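
If bash-completion is missing, install it and reload the profile, for example:

yum install -y bash-completion
echo 'source <(kubectl completion bash)' >> /etc/profile
source /etc/profile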

3.5 Viewing Cluster Information and Status

# Show cluster info
[root@master ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.48.150:6443
KubeDNS is running at https://192.168.48.150:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

# Show the client and server versions
[root@master ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:08:12Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:00:57Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

# Show the kubectl configuration
[root@master ~]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.48.150:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED