
[kubernetes] Deploying a single-node k8s v1.30.5 from binaries

邻居的猫 · 1 month ago (12-09) · Other · 1171 views

Preface

The kind-based single-node k8s I had been using for testing recently broke: the VM would crash a few minutes after starting, and I never found the root cause. kind also makes it hard to apply custom configuration, so I decided to rebuild a single-node k8s from binaries instead.

Since this cluster is only for development and testing, the control plane is not highly available: etcd, apiserver, controller-manager, and scheduler each run as a single instance.

Environment:

  • Host: Debian 12.7, 4 CPU cores, 4 GB RAM, 30 GB storage (2C2G is enough if all you deploy is k8s itself)
  • Container runtime: containerd v1.7.22
  • etcd: v3.4.34
  • kubernetes: v1.30.5
  • cni: calico v3.25.0

Most of the configuration files used in this article have been uploaded to gitee - k8s-note under the directory "装置k8s/二进制单机布置k8s-v1.30.5"; clone the repo if you need them.

Preparation

Most commands in this section need root privileges. If a command fails with a permission error, switch to the root user or use sudo.

Adjust host parameters

  1. Set the hostname. kubernetes requires each node's hostname to be unique.
hostnamectl set-hostname k8s-node1
  2. Edit the /etc/hosts file. Skip this if your intranet has its own DNS.
192.168.0.31 k8s-node1
  3. Install a time-sync service. With multiple hosts, their clocks must stay in sync; if the intranet has a time server, edit chrony's configuration to point at it.
sudo apt install -y chrony
sudo systemctl start chrony
  4. Disable swap. By default, k8s will not run on a host with swap enabled. The command below only disables swap temporarily; to persist the change, delete or comment out the swap entries in /etc/fstab.
sudo swapoff -a
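To persist this across reboots, the swap entries in /etc/fstab need to be commented out. A minimal sed sketch, demonstrated on a scratch copy so nothing on the host is touched; on the real host, run the same sed (as root) against /etc/fstab itself, keeping a backup:

```shell
# Demo on a scratch copy of fstab; on the real host, point sed at /etc/fstab.
cat > /tmp/fstab.demo <<'EOF'
UUID=abcd-1234 /    ext4 errors=remount-ro 0 1
/swapfile      none swap sw                0 0
EOF
# Prefix '#' to any uncommented line mentioning swap
sed -i '/swap/ s/^[^#]/#&/' /tmp/fstab.demo
cat /tmp/fstab.demo
```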
  5. Load the kernel modules. If you skip this, configuring the kernel parameters in the next step fails.
# 1. Add the config
cat <<EOF > /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

# 2. Load immediately
modprobe overlay
modprobe br_netfilter

# 3. Verify. No output means the module did not load.
lsmod | grep br_netfilter
  6. Configure kernel parameters. The ones that matter are net.bridge.bridge-nf-call-ip6tables, net.bridge.bridge-nf-call-iptables, and net.ipv4.ip_forward; adjust the others as you see fit.
# 1. Write the config file
cat << EOF > /etc/sysctl.d/k8s-sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
user.max_user_namespaces=28633
vm.swappiness = 0
EOF

# 2. Apply it
sysctl -p /etc/sysctl.d/k8s-sysctl.conf
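A quick follow-up check that the three critical keys took effect; each should print 1 on a correctly prepared node (the fallback text appears if br_netfilter has not been loaded yet):

```shell
# Print the three keys k8s cares about; each should read 1 on a ready node.
for k in net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward; do
  printf '%s = %s\n' "$k" "$(sysctl -n "$k" 2>/dev/null || echo 'unset (is br_netfilter loaded?)')"
done > /tmp/k8s-sysctl-check.txt
cat /tmp/k8s-sysctl-check.txt
```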
  7. Enable ipvs. Write a systemd config file so the modules load automatically at boot. If you can install ipvs, do: it improves the cluster's load-balancing performance. For details see: https://kubernetes.io/zh-cn/blog/2018/07/09/ipvs-based-in-cluster-load-balancing-deep-dive/
# 1. Install dependencies
apt install -y ipset ipvsadm
# 2. Load immediately
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack

# 3. Persist in a config file
cat << EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF

# 4. Check that the modules are loaded
lsmod | grep ip_vs

Install containerd

k8s stopped supporting docker directly as a container runtime after v1.24, so this article uses containerd. Binary packages can be downloaded from GitHub - containerd; be sure to pick the cri-containerd-cni variant.

  1. Extract to the root directory. The files in the archive are laid out relative to /, so extract them straight into the root directory.
tar xf cri-containerd-cni-1.7.22-linux-amd64.tar.gz -C /
  2. Create the config directory and generate the default config file
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml
  3. Edit the config file /etc/containerd/config.toml, changing the following
# For distros that use systemd as the init system, the official recommendation
# is to use systemd as the container cgroup driver: change false to true
SystemdCgroup = true
# Point the pause image at my mirror on Aliyun; in an intranet environment,
# use your internal registry's address instead
sandbox_image = "registry.cn-hangzhou.aliyuncs.com/rainux/pause:3.9"
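For orientation, these two settings sit in the following sections of the generated config.toml (containerd 1.7.x layout; the section names can differ slightly between containerd versions, so treat this as a rough map rather than an exact excerpt):

```toml
# Excerpt of /etc/containerd/config.toml (containerd 1.7.x)
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.cn-hangzhou.aliyuncs.com/rainux/pause:3.9"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
```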
  4. Start containerd
systemctl start containerd
systemctl enable containerd
  5. Run a command to check that containerd works; no errors usually means it is fine
crictl images

Generate a CA certificate

Both the k8s components and the etcd cluster below need a CA certificate. If your organization runs a central CA, simply use the certificates it issues; if not, a self-signed CA certificate works for the security setup, which is what we do here.

# Generate the private key file ca.key
openssl genrsa -out ca.key 2048
# Generate the root certificate ca.crt from the private key
# /CN is the master's hostname or IP address
# days is the certificate's validity period
openssl req -x509 -new -nodes -key ca.key -subj "/CN=k8s-node1" -days 36500 -out ca.crt

# Copy the CA cert and key to /etc/kubernetes/pki
mkdir -p /etc/kubernetes/pki
cp ca.crt ca.key /etc/kubernetes/pki/
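A quick sanity check that the key and certificate actually belong together is to compare the public-key modulus of each. Sketched here against a throwaway pair generated in a temp dir so it is self-contained; on the real host, run only the two modulus commands against /etc/kubernetes/pki/ca.key and ca.crt:

```shell
# Generate a throwaway CA pair, then confirm key and cert share a modulus.
cd "$(mktemp -d)"
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=k8s-node1" -days 36500 -out ca.crt
key_mod=$(openssl rsa  -in ca.key -noout -modulus | openssl md5)
crt_mod=$(openssl x509 -in ca.crt -noout -modulus | openssl md5)
[ "$key_mod" = "$crt_mod" ] && echo "CA key and certificate match"
```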

Install etcd

The etcd release can be downloaded from the official site; extract it after downloading. Consider putting etcd and etcdctl from the archive into a directory on your PATH.

  1. Create the file etcd_ssl.cnf. The IP address is the etcd node's.
[ req ]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[ req_distinguished_name ]

[ v3_req ]

basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[ alt_names ]
IP.1 = 192.168.0.31
  2. Create the etcd server certificate
openssl genrsa -out etcd_server.key 2048
openssl req -new -key etcd_server.key -config etcd_ssl.cnf -subj "/CN=etcd-server" -out etcd_server.csr
openssl x509 -req -in etcd_server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 3650 -extensions v3_req -extfile etcd_ssl.cnf -out etcd_server.crt
  3. Create the etcd client certificate
openssl genrsa -out etcd_client.key 2048
openssl req -new -key etcd_client.key -config etcd_ssl.cnf -subj "/CN=etcd-client" -out etcd_client.csr
openssl x509 -req -in etcd_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 3650 -extensions v3_req -extfile etcd_ssl.cnf -out etcd_client.crt
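etcd clients will reject a server certificate whose SAN does not cover the endpoint IP, so it is worth confirming the IP SAN actually landed in the issued cert. A self-contained sketch that rebuilds a throwaway CA and server cert in a temp directory (on the real host, just run the final openssl command against your etcd_server.crt):

```shell
# Build a throwaway CA and server cert in a temp dir, then inspect the SAN.
cd "$(mktemp -d)"
cat > etcd_ssl.cnf <<'EOF'
[ req ]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[ req_distinguished_name ]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[ alt_names ]
IP.1 = 192.168.0.31
EOF
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=demo-ca" -days 365 -out ca.crt
openssl genrsa -out etcd_server.key 2048
openssl req -new -key etcd_server.key -config etcd_ssl.cnf -subj "/CN=etcd-server" -out etcd_server.csr
openssl x509 -req -in etcd_server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 \
  -extensions v3_req -extfile etcd_ssl.cnf -out etcd_server.crt
# The issued cert must list the endpoint IP in its SAN:
openssl x509 -in etcd_server.crt -noout -ext subjectAltName
```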
  4. Write the etcd config file. Adjust directories, file paths, IPs, and ports to your environment
ETCD_NAME=etcd1
ETCD_DATA_DIR=/home/rainux/apps/etcd/data

ETCD_CERT_FILE=/home/rainux/apps/etcd/certs/etcd_server.crt
ETCD_KEY_FILE=/home/rainux/apps/etcd/certs/etcd_server.key
ETCD_TRUSTED_CA_FILE=/home/rainux/apps/certs/ca.crt
ETCD_CLIENT_CERT_AUTH=true
ETCD_LISTEN_CLIENT_URLS=https://192.168.0.31:2379
ETCD_ADVERTISE_CLIENT_URLS=https://192.168.0.31:2379

ETCD_PEER_CERT_FILE=/home/rainux/apps/etcd/certs/etcd_server.crt
ETCD_PEER_KEY_FILE=/home/rainux/apps/etcd/certs/etcd_server.key
ETCD_PEER_TRUSTED_CA_FILE=/home/rainux/apps/certs/ca.crt
ETCD_LISTEN_PEER_URLS=https://192.168.0.31:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://192.168.0.31:2380

ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.0.31:2380"
ETCD_INITIAL_CLUSTER_STATE=new
  5. Create /etc/systemd/system/etcd.service; adjust the paths of the config file and the etcd binary to your environment
[Unit]
Description=etcd key-value store
Documentation=https://github.com/etcd-io/etcd
After=network.target

[Service]
User=rainux
EnvironmentFile=/home/rainux/apps/etcd/conf/etcd.conf
ExecStart=/home/rainux/apps/etcd/etcd
Restart=on-failure

[Install]
WantedBy=multi-user.target
  6. Start etcd
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd

# check the service status
systemctl status etcd
  7. Verify etcd's health with the client
etcdctl --cacert=/etc/kubernetes/pki/ca.crt --cert=$HOME/apps/certs/etcd_client.crt --key=$HOME/apps/certs/etcd_client.key --endpoints=https://192.168.0.31:2379 endpoint health

# a healthy endpoint prints something like
https://192.168.0.31:2379 is healthy: successfully committed proposal: took = 13.705325ms

Install the control plane

The k8s binary packages can be downloaded from github: https://github.com/kubernetes/kubernetes/releases

Find the binary download links in the changelog and download the server binary; it contains the binaries for both master and node.

After extracting, move the binaries into /usr/local/bin

Install apiserver

The apiserver's core job is to expose the HTTP REST APIs for creating, updating, deleting, querying, and watching every kind of k8s resource. It is the hub through which all other components exchange data and communicate, the system's data bus and data center. Beyond that, it is the API entry point for cluster administration, the entry point for resource-quota control, and the provider of the cluster's security machinery.

  1. Create master_ssl.cnf. DNS.5 is this node's hostname (add it to /etc/hosts if needed); IP.1 is the Cluster IP of the kubernetes Service; IP.2 is the apiserver host's IP
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = k8s-node1
IP.1 = 169.169.0.1
IP.2 = 192.168.0.31
  2. Generate the TLS certificate files
openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -config master_ssl.cnf -subj "/CN=k8s-node1" -out apiserver.csr
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile master_ssl.cnf -out apiserver.crt
  3. Use cfssl to create sa.pub and sa-key.pem. cfssl and cfssljson can be downloaded from GitHub - cfssl
cat<<EOF > sa-csr.json 
{
    "CN":"sa",
    "key":{
        "algo":"rsa",
        "size":2048
    },
    "names":[
        {
            "C":"CN",
            "L":"BeiJing",
            "ST":"BeiJing",
            "O":"k8s",
            "OU":"System"
        }
    ]
}
EOF

cfssl gencert -initca sa-csr.json | cfssljson -bare sa -

openssl x509 -in sa.pem -pubkey -noout > sa.pub
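As an aside: the service-account signing key is just an RSA key pair, with no certificate involved, so if you would rather not install cfssl, plain openssl can produce an equivalent sa-key.pem and sa.pub. An alternative sketch, not the route the steps above take:

```shell
# Alternative: generate the service-account signing key pair with openssl alone.
cd "$(mktemp -d)"
openssl genrsa -out sa-key.pem 2048             # private signing key
openssl rsa -in sa-key.pem -pubout -out sa.pub  # matching public key
openssl rsa -pubin -in sa.pub -noout -text | head -n 1
```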
  4. Write the kube-apiserver config file; adjust file paths and the etcd address to your environment
KUBE_API_ARGS="--secure-port=6443 \
--tls-cert-file=/home/rainux/apps/certs/apiserver.crt \
--tls-private-key-file=/home/rainux/apps/certs/apiserver.key \
--client-ca-file=/home/rainux/apps/certs/ca.crt \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--service-account-key-file=/home/rainux/apps/certs/sa.pub \
--service-account-signing-key-file=/home/rainux/apps/certs/sa-key.pem \
--apiserver-count=1 \
--endpoint-reconciler-type=master-count \
--etcd-servers=https://192.168.0.31:2379 \
--etcd-cafile=/home/rainux/apps/certs/ca.crt \
--etcd-certfile=/home/rainux/apps/certs/etcd_client.crt \
--etcd-keyfile=/home/rainux/apps/certs/etcd_client.key \
--service-cluster-ip-range=169.169.0.0/16 \
--service-node-port-range=30000-32767 \
--allow-privileged=true \
--audit-log-maxsize=100 \
--audit-log-maxage=15 \
--audit-log-path=/home/rainux/apps/kubernetes/logs/apiserver.log --v=2"
  5. Create the service file /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service

[Service]
EnvironmentFile=/home/rainux/apps/kubernetes/conf/apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
  6. Start apiserver
systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver

# check the service status
systemctl status kube-apiserver
  7. Generate a client certificate
openssl genrsa -out client.key 2048
# /CN identifies the user name the client presents when connecting to the apiserver
openssl req -new -key client.key -subj "/CN=admin" -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 36500
  8. Create the kubeconfig file clients need to connect to the apiserver. Here server is the apiserver's address (with an HA control plane it would be the load balancer's address); adjust to your environment. kubectl can use this kubeconfig too, so in a development environment you can save it directly as $HOME/.kube/config
apiVersion: v1
kind: Config
clusters:
- name: default
  cluster:
    server: https://192.168.0.31:6443
    certificate-authority: /home/rainux/apps/certs/ca.crt
users:
- name: admin
  user:
    client-certificate: /home/rainux/apps/certs/client.crt
    client-key: /home/rainux/apps/certs/client.key
contexts:
- context:
    cluster: default
    user: admin
  name: default
current-context: default

Install kube-controller-manager

Through the apiserver's APIs, controller-manager watches the state of specific resources in the cluster in real time; whenever a resource object drifts from its desired state, controller-manager tries to reconcile it back.

  1. Create the config file /home/rainux/apps/kubernetes/conf/kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_ARGS="--kubeconfig=/home/rainux/.kube/config \
--leader-elect=true \
--service-cluster-ip-range=169.169.0.0/16 \
--service-account-private-key-file=/home/rainux/apps/certs/apiserver.key \
--root-ca-file=/home/rainux/apps/certs/ca.crt \
--v=0"
  2. Create the service file /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service

[Service]
EnvironmentFile=/home/rainux/apps/kubernetes/conf/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
  3. Start kube-controller-manager
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager

Install kube-scheduler

  1. Create the config file /home/rainux/apps/kubernetes/conf/kube-scheduler.conf
KUBE_SCHEDULER_ARGS="--kubeconfig=/home/rainux/.kube/config \
--leader-elect=true \
--v=0"
  2. Create the service file /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service

[Service]
EnvironmentFile=/home/rainux/apps/kubernetes/conf/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
  3. Start it
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler

Install the worker node components

Install kubelet

  1. Create the file /home/rainux/apps/kubernetes/conf/kubelet.conf. Adjust hostname-override and kubeconfig to your environment
KUBELET_ARGS="--kubeconfig=/home/rainux/.kube/config \
--config=/home/rainux/apps/kubernetes/conf/kubelet.config \
--hostname-override=k8s-node1 \
--v=0 \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock"
  2. Create the file /home/rainux/apps/kubernetes/conf/kubelet.config.
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0  # listen address
port: 10250  # listen port
cgroupDriver: systemd  # cgroup driver; the default is cgroupfs, systemd is recommended
clusterDNS: ["169.169.0.100"]  # cluster DNS address
clusterDomain: cluster.local  # DNS domain suffix for services
authentication:  # whether to allow anonymous access and whether to use webhook authentication
  anonymous:
    enabled: true
  3. Create the service file /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service

[Service]
EnvironmentFile=/home/rainux/apps/kubernetes/conf/kubelet.conf
ExecStart=/usr/local/bin/kubelet $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
  4. Start kubelet
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet

Install kube-proxy

  1. Create the config file /home/rainux/apps/kubernetes/conf/kube-proxy.conf. The proxy-mode parameter defaults to iptables; if ipvs is installed, ipvs is recommended
KUBE_PROXY_ARGS="--kubeconfig=/home/rainux/.kube/config \
--hostname-override=k8s-node1 \
--proxy-mode=ipvs \
--v=0"
  2. Create the service file /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=kubelet.service

[Service]
EnvironmentFile=/home/rainux/apps/kubernetes/conf/kube-proxy.conf
ExecStart=/usr/local/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
  3. Start kube-proxy
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy

Install calico

  1. Download the calico manifest
wget https://docs.projectcalico.org/manifests/calico.yaml
  2. If you can reach docker hub, apply the manifest as-is to create the calico resources; otherwise, replace the image references in it. If you are also on calico v3.25.0, you can use the mirrors I pushed to Aliyun.
image: registry.cn-hangzhou.aliyuncs.com/rainux/calico:cni-v3.25.0
image: registry.cn-hangzhou.aliyuncs.com/rainux/calico:node-v3.25.0
image: registry.cn-hangzhou.aliyuncs.com/rainux/calico:kube-controllers-v3.25.0
  3. Apply it
kubectl create -f calico.yaml
  4. Check that the calico pods are running. Normally they should all be Running; otherwise, describe the pods to see what went wrong
kubectl get pods -A

Install CoreDNS

  1. Create the manifest coredns.yaml. Note that the Service pins a clusterIP and that the image has been switched to my Aliyun mirror.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    cluster.local {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local 169.169.0.0/16 {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
    . {
        cache 30
        loadbalance
        forward . /etc/resolv.conf
    }  

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
         podAntiAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
           - weight: 100
             podAffinityTerm:
               labelSelector:
                 matchExpressions:
                   - key: k8s-app
                     operator: In
                     values: ["kube-dns"]
               topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: registry.cn-hangzhou.aliyuncs.com/rainux/coredns:1.11.3
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false


Copyright notice: this article was published by 51Blog; please credit the source when reposting.

Permalink: https://www.51blog.vip/?id=664

