
Managing 4 RTX 4090 GPUs with Kubernetes on Ubuntu 22.04

sinye56 2024-11-14 17:43


1. Install the NVIDIA driver


Driver package: NVIDIA-Linux-x86_64-535.113.01.run (CUDA 12.2)


China mirror (faster download; substitute the driver version you need):


# wget https://cn.download.nvidia.com/XFree86/Linux-x86_64/535.113.01/NVIDIA-Linux-x86_64-535.113.01.run
# sh NVIDIA-Linux-x86_64-535.113.01.run


Reboot after the installation completes.


Enable persistence mode. Note that this requires root; the run below fails without it, so rerun the command as `sudo nvidia-smi -pm 1`:


(base) ubuntu@ubuntu:~$ nvidia-smi -pm 1
Unable to set persistence mode for GPU 00000000:17:00.0: Insufficient Permissions
Terminating early due to previous errors.


List the GPUs with nvidia-smi:


(base) ubuntu@ubuntu:~$ nvidia-smi -L
GPU 0: NVIDIA GeForce RTX 4090 (UUID: GPU-88717b49-0372-9d05-e6ca-238870f93bf3)
GPU 1: NVIDIA GeForce RTX 4090 (UUID: GPU-74b01939-bc8b-833b-10ac-daa5c60fc594)
GPU 2: NVIDIA GeForce RTX 4090 (UUID: GPU-0715eb37-44d8-d7ca-cd20-79452c93fe86)
GPU 3: NVIDIA GeForce RTX 4090 (UUID: GPU-b9f5ac04-9684-71fe-88b6-6363e7c2936d)
(base) ubuntu@ubuntu:~$
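The `nvidia-smi -L` listing above is easy to parse in scripts. A small sketch: here the output from the listing is stored in a variable as sample data; on the machine itself you would pipe the real command, e.g. `nvidia-smi -L | grep -c '^GPU '`.

```shell
# Sample of the `nvidia-smi -L` output shown above
sample='GPU 0: NVIDIA GeForce RTX 4090 (UUID: GPU-88717b49-0372-9d05-e6ca-238870f93bf3)
GPU 1: NVIDIA GeForce RTX 4090 (UUID: GPU-74b01939-bc8b-833b-10ac-daa5c60fc594)
GPU 2: NVIDIA GeForce RTX 4090 (UUID: GPU-0715eb37-44d8-d7ca-cd20-79452c93fe86)
GPU 3: NVIDIA GeForce RTX 4090 (UUID: GPU-b9f5ac04-9684-71fe-88b6-6363e7c2936d)'
# Count the GPUs by counting lines that start with "GPU "
gpu_count=$(printf '%s\n' "$sample" | grep -c '^GPU ')
echo "$gpu_count"   # prints 4
```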


2. Install Docker


Configure the apt repository:


# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update


Install Docker:


sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin


Restart Docker:


systemctl restart docker


3. Install the NVIDIA Container Toolkit


Configure the repository:


# curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list \
  && sudo apt-get update


Install the NVIDIA Container Toolkit package:


# sudo apt-get install -y nvidia-container-toolkit


Test the installation:


# docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi
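As an alternative to editing /etc/docker/daemon.json by hand (which this article does later), recent versions of the toolkit ship an `nvidia-ctk` helper that writes the runtime entry for you. A sketch, assuming nvidia-container-toolkit 1.14 or newer; verify the flags against your installed version:

```shell
# Register the NVIDIA runtime in /etc/docker/daemon.json, then restart Docker
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```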


4. Install Kubernetes (kubeadm)


Since Docker is already installed on this server, we keep it as the container runtime instead of switching to containerd.


Base environment setup


1. Set a hostname that clearly identifies the machine:


hostnamectl set-hostname ubuntu


2. Disable SELinux (this only applies to RHEL/CentOS-style systems; Ubuntu uses AppArmor instead, so this step can be skipped on Ubuntu 22.04):


sudo setenforce 0

sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config


3. Turn off the swap partition:


swapoff -a  # disable swap for the current boot

sed -ri 's/.*swap.*/#&/' /etc/fstab  # disable permanently (comments out the swap entries)

sed -ri 's/#(.*swap.*)/\1/' /etc/fstab  # re-enable swap (undoes the line above)
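The two sed commands can be dry-run against a throwaway copy before touching the real /etc/fstab. A minimal sketch using a temp file with an illustrative swap entry:

```shell
# Demonstrate the swap toggle on a temp copy instead of /etc/fstab
fstab=$(mktemp)
printf '%s\n' 'UUID=abcd-1234 / ext4 defaults 0 1' '/swapfile none swap sw 0 0' > "$fstab"

sed -ri 's/.*swap.*/#&/' "$fstab"        # comment out the swap entry
commented=$(grep -c '^#/swapfile' "$fstab")

sed -ri 's/#(.*swap.*)/\1/' "$fstab"     # restore it
restored=$(grep -c '^/swapfile' "$fstab")

echo "$commented $restored"              # prints "1 1"
rm -f "$fstab"
```

Note the root filesystem line is untouched because it does not contain the word "swap".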


4. Make bridged traffic visible to iptables/ip6tables, so Kubernetes service routing and traffic accounting work correctly:


cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF


5. Apply the configuration:


sysctl --system


6. Install the Kubernetes components


0. Docker is already installed, so we skip it; check its version and configuration:


(base) ubuntu@ubuntu:~/k8s$ sudo docker info
Client: Docker Engine - Community
 Version:    24.0.6
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.11.2
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.21.0
    Path:     /usr/libexec/docker/cli-plugins/docker-compose
(base) ubuntu@ubuntu:~/k8s$


(base) ubuntu@ubuntu:~/k8s$ cat /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "data-root": "/data2/dockerdata",
  "runtimes": {
    "nvidia": {
      "args": [],
      "path": "nvidia-container-runtime"
    }
  }
}
(base) ubuntu@ubuntu:~/k8s$


1. Add the Kubernetes apt repository:


curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -

sudo apt-add-repository "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main"


2. Install kubelet, kubeadm, and kubectl:


sudo apt update
sudo apt install -y kubelet=1.23.8-00 kubeadm=1.23.8-00 kubectl=1.23.8-00
sudo apt-mark hold kubelet kubeadm kubectl


The version is pinned here so that the three components stay in sync, which also makes installing the dashboard later easier. Because this cluster runs Kubernetes on Docker, and Kubernetes v1.24+ no longer supports Docker as a runtime, we install v1.23.8. To install the latest version (or a different one), drop or change the version suffix.


The apt-mark hold at the end prevents automatic upgrades from introducing a version mismatch.


# To release the hold later:

apt-mark unhold package_name


3. Enable kubelet at boot:


sudo systemctl enable --now kubelet


4. Add a hosts entry for the control-plane endpoint:


echo "172.16.1.220 cluster-endpoint" >> /etc/hosts  # replace the IP with your server's/VM's internal IP
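Appending with >> adds a duplicate line every time the setup is re-run. A hedged, idempotent variant, demonstrated on a temp file standing in for /etc/hosts:

```shell
hosts=$(mktemp)                            # stand-in for /etc/hosts in this demo
entry='172.16.1.220 cluster-endpoint'      # use your own internal IP here
# Append only when the entry is not already present
grep -qF "$entry" "$hosts" || echo "$entry" | tee -a "$hosts" > /dev/null
grep -qF "$entry" "$hosts" || echo "$entry" | tee -a "$hosts" > /dev/null   # re-run is a no-op
count=$(grep -cF "$entry" "$hosts")
echo "$count"   # prints 1
rm -f "$hosts"
```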


5. Initialize the control plane with kubeadm init:


sudo kubeadm init \
  --apiserver-advertise-address=172.16.1.220 \
  --control-plane-endpoint=cluster-endpoint \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.23.8 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=10.244.0.0/16


One important detail when bootstrapping Kubernetes on Docker: Docker's default cgroup driver is cgroupfs, which makes kubeadm init fail, so Docker must be switched to the systemd cgroup driver.


Solution:


vim /etc/docker/daemon.json


Add the following:


{
  "exec-opts": ["native.cgroupdriver=systemd"]
}


# Apply the configuration and restart Docker


systemctl daemon-reload

systemctl restart docker


With that change, rerunning kubeadm init succeeds.
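A sanity check worth doing after any hand-edit of daemon.json: validate the JSON before restarting Docker, since a syntax error leaves the daemon unable to start. A sketch, validating a temp copy with python3:

```shell
conf=$(mktemp)
cat > "$conf" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# json.tool exits non-zero on malformed JSON
if python3 -m json.tool "$conf" > /dev/null 2>&1; then valid=yes; else valid=no; fi
echo "$valid"
rm -f "$conf"
```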


Before re-initializing, reset the previous attempt:


kubeadm reset
rm -rf /etc/kubernetes/manifests/kube-apiserver.yaml
rm -rf /etc/kubernetes/manifests/kube-controller-manager.yaml
rm -rf /etc/kubernetes/manifests/kube-scheduler.yaml
rm -rf /etc/kubernetes/manifests/etcd.yaml
rm -rf /var/lib/etcd/*


(base) ubuntu@ubuntu:~$ sudo systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: active (running) since Wed 2024-04-03 07:40:35 UTC; 2min 54s ago
       Docs: https://kubernetes.io/docs/
   Main PID: 90566 (kubelet)
      Tasks: 47 (limit: 618620)
     Memory: 98.3M
     CGroup: /system.slice/kubelet.service
             └─90566 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=>

(base) ubuntu@ubuntu:~$ sudo docker images | grep google
registry.aliyuncs.com/google_containers/kube-apiserver            v1.23.8   09d62ad3189b   21 months ago   135MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.23.8   db4da8720bcb   21 months ago   112MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.23.8   afd180ec7435   21 months ago   53.5MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.23.8   2b7c5a039984   21 months ago   125MB
registry.aliyuncs.com/google_containers/etcd                      3.5.1-0   25f8c7f3da61   2 years ago     293MB
registry.aliyuncs.com/google_containers/coredns                   v1.8.6    a4ca41631cc7   2 years ago     46.8MB
registry.aliyuncs.com/google_containers/pause                     3.6       6270bb605e12   2 years ago     683kB
(base) ubuntu@ubuntu:~$


A token is what lets worker nodes join the cluster; it is valid for 24 hours and can be generated with:


kubeadm token create --print-join-command
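The printed join command has the following shape; the token and CA-cert hash below are placeholders, not real values, and come from the `--print-join-command` output:

```shell
# Run on each worker node (substitute the values kubeadm printed)
kubeadm join cluster-endpoint:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```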


6. Continue as the init output instructs (run as the ubuntu user):


mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


7. Install the network add-on (Calico):


curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -O


Next, change lines 3888 and 3889 of calico.yaml as shown below, so the IP pool matches the pod CIDR configured during kubeadm init:


vi calico.yaml

3888 # - name: CALICO_IPV4POOL_CIDR
3889 #   value: "192.168.0.0/16"

changed to:

3888 - name: CALICO_IPV4POOL_CIDR
3889   value: "10.244.0.0/16"
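The same edit can be scripted with sed instead of opening vi. A sketch, demonstrated on a two-line excerpt of the manifest; run the identical sed against the real calico.yaml:

```shell
f=$(mktemp)
cat > "$f" <<'EOF'
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
EOF
# Uncomment the CIDR block and switch it to the kubeadm pod network CIDR
sed -i -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' \
       -e 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' "$f"
cat "$f"
result=$(cat "$f")
rm -f "$f"
```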


Apply the manifest:


(base) ubuntu@ubuntu:~/k8s$ kubectl apply -f calico.yaml

(base) ubuntu@ubuntu:~/k8s$


8. Check the master node status


kubectl get nodes — wait a moment for the status; once the network add-on is up, the node reports Ready.


(base) ubuntu@ubuntu:~/k8s$ kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
ubuntu   Ready    control-plane,master   14m   v1.23.8


At this point the k8s cluster is up; additional worker nodes can join the master using the token shown earlier.


(base) ubuntu@ubuntu:~/k8s$ kubectl get no -o yaml | grep taint -A 5
    taints:
    - effect: NoSchedule
      key: node-role.kubernetes.io/master
  status:
    addresses:
    - address: 172.16.1.220
(base) ubuntu@ubuntu:~/k8s$


# Remove all taints (so workloads can schedule on this single-node cluster)


(base) ubuntu@ubuntu:~/k8s$ kubectl taint nodes --all node-role.kubernetes.io/master-

node/ubuntu untainted


# Check again; no output means the taints were removed


(base) ubuntu@ubuntu:~/k8s$ kubectl get no -o yaml | grep taint -A 5

(base) ubuntu@ubuntu:~/k8s$


# Check that the pods started; all of them should be Running


(base) ubuntu@ubuntu:~/k8s$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-5b9cd88b65-gtjvn   1/1     Running   0          10m
kube-system   calico-node-nc746                          1/1     Running   0          10m
kube-system   coredns-6d8c4cb4d-85t62                    1/1     Running   0          24m
kube-system   coredns-6d8c4cb4d-cwd92                    1/1     Running   0          24m
kube-system   etcd-ubuntu                                1/1     Running   1          24m
kube-system   kube-apiserver-ubuntu                      1/1     Running   1          24m
kube-system   kube-controller-manager-ubuntu             1/1     Running   1          24m
kube-system   kube-proxy-st4qb                           1/1     Running   0          24m
kube-system   kube-scheduler-ubuntu                      1/1     Running   1          24m
(base) ubuntu@ubuntu:~/k8s$


A default kubeadm installation issues certificates that are valid for one year:


(base) ubuntu@ubuntu:~$ sudo kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0403 14:24:19.986291 791926 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
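When those certificates approach expiry, kubeadm can renew them in place. A sketch — these are standard kubeadm subcommands, but verify them against your kubeadm version:

```shell
sudo kubeadm certs renew all            # renew every kubeadm-managed certificate
sudo kubeadm certs check-expiration     # confirm the new expiry dates
# Afterwards, restart the control-plane components so they reload the renewed certs.
```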


5. Install the device plugin


The preferred way to deploy the device plugin is as a DaemonSet using Helm. Helm release binaries can be downloaded from:


https://github.com/helm/helm/tags


Download and install the Helm binary:


# tar -zxvf helm-v3.10.2-linux-amd64.tar.gz
# mv linux-amd64/helm /usr/local/bin/helm


First, add the plugin's Helm repository and update it:


$ helm repo add nvdp https://nvidia.github.io/k8s-device-plugin

$ helm repo update


Then check that the latest version of the plugin is available (the upstream docs reference v0.14.3; newer pre-releases may also show up, as below):


(base) ubuntu@ubuntu:~/k8s$ helm search repo nvdp --devel
NAME                         CHART VERSION   APP VERSION   DESCRIPTION
nvdp/gpu-feature-discovery   0.15.0-rc.2     0.15.0-rc.2   A Helm chart for gpu-feature-discovery on Kuber...
nvdp/nvidia-device-plugin    0.15.0-rc.2     0.15.0-rc.2   A Helm chart for the nvidia-device-plugin on Ku...
(base) ubuntu@ubuntu:~/k8s$


Deploy the device plugin:


# helm install --generate-name nvdp/nvidia-device-plugin \
    --namespace nvidia-device-plugin --create-namespace


Download the Helm chart package:


helm pull nvdp/nvidia-device-plugin


The nvidia-device-plugin pod would not start; its logs showed an error:


(base) ubuntu@ubuntu:~$ kubectl logs nvidia-device-plugin-1712138777-wxdc8 -n nvidia-device-plugin


Startup error: Detected non-NVML platform: could not load NVML library: libnvidia-ml.so.1: cannot open shared object file: No such file or directory


The fix is to edit daemon.json so the NVIDIA runtime is the default, then restart Docker:


(base) ubuntu@ubuntu:~$ more /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "data-root": "/data2/dockerdata",
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "runtimeArgs": [],
      "path": "/usr/bin/nvidia-container-runtime"
    }
  }
}

(base) ubuntu@ubuntu:~$ sudo systemctl restart docker


6. Install gpu-feature-discovery


$ helm repo add nvgfd https://nvidia.github.io/gpu-feature-discovery
$ helm repo update
$ helm search repo nvgfd --devel
# helm install --generate-name nvgfd/gpu-feature-discovery \
    --namespace gpu-feature-discovery --create-namespace

(base) ubuntu@ubuntu:~$ helm list -A
NAME                               NAMESPACE               REVISION   UPDATED                                   STATUS     CHART                         APP VERSION
gpu-feature-discovery-1712148385   gpu-feature-discovery   1          2024-04-03 12:47:56.759025068 +0000 UTC   deployed   gpu-feature-discovery-0.8.2   0.8.2
nvidia-device-plugin-1712138777    nvidia-device-plugin    1          2024-04-03 10:08:01.06389181 +0000 UTC    deployed   nvidia-device-plugin-0.14.5   0.14.5
(base) ubuntu@ubuntu:~$
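Once the device plugin is running, the node should advertise nvidia.com/gpu in its capacity; on the cluster you would check with `kubectl describe node ubuntu | grep nvidia.com/gpu`. The parsing is shown here against a sample of that output (the line values are illustrative):

```shell
# Sample excerpt of `kubectl describe node` output
sample='Capacity:
  cpu:                64
  nvidia.com/gpu:     4
Allocatable:
  nvidia.com/gpu:     4'
# The first nvidia.com/gpu line is the node capacity
gpus=$(printf '%s\n' "$sample" | awk '/nvidia.com\/gpu/ {print $2; exit}')
echo "$gpus"   # prints 4
```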


The image could not be pulled, so uninstall the release, download the Helm chart, and reinstall:


(base) ubuntu@ubuntu:~/k8s/gpu-feature-discovery$ helm uninstall gpu-feature-discovery-1712148385 -n gpu-feature-discovery

release "gpu-feature-discovery-1712148385" uninstalled

(base) ubuntu@ubuntu:~/k8s/gpu-feature-discovery$


# helm pull nvgfd/gpu-feature-discovery
# docker pull yansenchangyu/node-feature-discovery:v0.13.1
# docker pull nvcr.io/nvidia/gpu-feature-discovery:v0.8.2


Change the node-feature-discovery image reference in the chart, since the original image could not be pulled: registry.k8s.io/nfd/node-feature-discovery:v0.13.2 → yansenchangyu/node-feature-discovery:v0.13.1


# helm install gpu-feature-discovery . --create-namespace --namespace gpu-feature-discovery


Installing gpu-feature-discovery with Helm pulls in NFD (node-feature-discovery) automatically.


7. Test the cluster/GPU integration


cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  restartPolicy: Never
  containers:
  - name: cuda-container
    image: nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda10.2
    resources:
      limits:
        nvidia.com/gpu: 1 # requesting 1 GPU
  nodeSelector:
    accelerator: nvidia-rtx4090
EOF


Watch the pod logs; Test PASSED means the container completed its computation on the GPU.


(base) ubuntu@ubuntu:~/k8s$ kubectl logs gpu-pod
[Vector addition of 50000 elements]
Copy input data from the host memory to the CUDA device
CUDA kernel launch with 196 blocks of 256 threads
Copy output data from the CUDA device to the host memory
Test PASSED
Done
(base) ubuntu@ubuntu:~/k8s$
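Since this node has four GPUs, the same manifest pattern can request more than one. A hypothetical variant (the pod name gpu-pod-4 is illustrative) asking the device plugin for all four cards:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod-4
spec:
  restartPolicy: Never
  containers:
  - name: cuda-container
    image: nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda10.2
    resources:
      limits:
        nvidia.com/gpu: 4   # request all four RTX 4090s
```

Note that nvidia.com/gpu limits are exclusive assignments: a pod requesting 4 leaves no GPUs for other pods until it finishes.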


If you found this walkthrough useful, feel free to share it with your friends. Thanks.
