Problems Encountered During k8s Installation

While running the kubeadm init command I ran into a number of problems. Below is a write-up of the problems and how they were resolved, for reference.

Problem 1: kubeadm config images pull fails with "pulling image: rpc error: ... dial unix /var/run/containerd/containerd.sock: connect: permission denied"

Problem description

The image repository was already configured to point at the Aliyun mirror, but pulling the images still fails with permission denied:

[shirley@master k8s_install]$ kubeadm config images pull --config kubeadm.yaml
failed to pull image "registry.aliyuncs.com/google_containers/kube-apiserver:from image service failed" err="rpc error: code = Unavailable desc = connecticontainerd.sock: connect: permission denied\"" image="registry.aliyuncs.com/g
time="2023-10-10T14:56:54+08:00" level=fatal msg="pulling image: rpc error: cng dial unix /var/run/containerd/containerd.sock: connect: permission denied\
, error: exit status 1
To see the stack trace of this error execute with --v=5 or higher

Solution:

Troubleshoot by following the steps described in https://techglimpse.com/failed-pull-image-registry-kube-apiserver/.

1. Verify that the network is configured correctly and check whether an HTTP_PROXY is set.
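A quick way to check both (a sketch; registry.aliyuncs.com is simply the mirror used in this article):

# Check for proxy variables in the current shell and for containerd itself
env | grep -i proxy
systemctl show containerd --property=Environment

# Verify that the mirror registry is reachable
curl -I https://registry.aliyuncs.com/v2/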

2. Kubernetes manages the CRI through the crictl command, whose configuration file is /etc/crictl.yaml. The file does not exist initially; it is recommended to create it now, otherwise kubeadm init will fail later with other errors.

cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 0
debug: false
pull-image-on-create: false
EOF

3. Check the configuration file /etc/containerd/config.toml and comment out the line disabled_plugins = ["cri"]. As its name suggests, this setting disables the CRI plugin, which is exactly what the error above is complaining about.

# disabled_plugins = ["cri"]
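If you prefer to make this change non-interactively, a sed one-liner along the following lines works (a sketch; it simply prefixes the disabled_plugins line with #):

sudo sed -i '/^disabled_plugins/ s/^/# /' /etc/containerd/config.toml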

If /etc/containerd/config.toml does not exist, generate it with the following command:

containerd config default > /etc/containerd/config.toml

Restart containerd:

sudo systemctl restart containerd
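To confirm that containerd is back up and the CRI plugin is enabled, a quick sanity check (a sketch):

sudo systemctl status containerd
# crictl can only talk to containerd when the CRI plugin is enabled
sudo crictl version
# Re-run the command that originally failed
sudo kubeadm config images pull --config kubeadm.yaml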

Problem solved.

Problem 2: kubelet fails with "failed to run Kubelet: running with swap on is not supported"

Problem description

Running sudo kubeadm init --config kubeadm.yaml fails because kubelet is unhealthy. The kubelet log shows the error "failed to run Kubelet: running with swap on is not supported, please disable swap!":

[root@master ~]# journalctl -f -ukubelet
... ...
Oct 10 16:18:35 master.k8s kubelet[2079]: I1010 16:18:35.432021    2079 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Oct 10 16:18:35 master.k8s kubelet[2079]: E1010 16:18:35.432363    2079 run.go:74] "command failed" err="failed to run Kubelet: running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false. /proc/swaps contained: [Filename\t\t\t\tType\t\tSize\tUsed\tPriority /dev/dm-1                               partition\t2097148\t0\t-2]"
Oct 10 16:18:35 master.k8s systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Oct 10 16:18:35 master.k8s systemd[1]: Unit kubelet.service entered failed state.
Oct 10 16:18:35 master.k8s systemd[1]: kubelet.service failed.
... ...

Solution

# Temporarily turn off swap
sudo swapoff -a
# Permanently prevent swap from being mounted at boot
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
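To verify that swap is really off, for example:

# Both should show no active swap
swapon --show
free -h | grep -i swap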

Problem solved. Checking the kubelet status shows the service is now running:

[root@master ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Tue 2023-10-10 16:26:16 CST; 1min 4s ago
     Docs: https://kubernetes.io/docs/
 Main PID: 2532 (kubelet)
    Tasks: 11
   Memory: 32.1M
   CGroup: /system.slice/kubelet.service
           └─2532 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/l...

Oct 10 16:27:14 master.k8s kubelet[2532]: E1010 16:27:14.927100    2532 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc e....k8s.io/p

Problem 3: kubeadm init complains that some configuration files already exist

Problem description

[shirley@master k8s_install]$ sudo kubeadm init --config kubeadm.yaml
[sudo] password for shirley:
[init] Using Kubernetes version: v1.28.0
[preflight] Running pre-flight checks
        [WARNING Hostname]: hostname "node" could not be reached
        [WARNING Hostname]: hostname "node": lookup node on 192.168.246.2:53: server misbehaving
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
        [ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1

Solution: kubeadm reset

The log shows that some configuration files already exist. Because the earlier kubeadm run failed, kubeadm init exited partway through after those files had already been generated. Undo the previous run with the kubeadm reset command, as follows:

[shirley@master k8s_install]$ sudo kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1010 16:34:16.187161    2705 reset.go:120] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://192.168.246.133:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": dial tcp 192.168.246.133:6443: connect: connection refused
W1010 16:34:16.187828    2705 preflight.go:56] [reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1010 16:34:41.266029    2705 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
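The output above also lists cleanup that kubeadm reset does not perform. If you want to start completely clean, the corresponding manual steps look roughly like this (a sketch; adapt to your CNI plugin and proxy mode):

# Remove the CNI configuration
sudo rm -rf /etc/cni/net.d
# Flush iptables rules, and clear IPVS tables if IPVS was used
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
sudo ipvsadm --clear
# Remove the stale kubeconfig
rm -f $HOME/.kube/config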

Problem 4: kubeadm init reports IPv4-related errors

[shirley@master k8s_install]$ sudo kubeadm init --config kubeadm.yaml
[init] Using Kubernetes version: v1.28.0
[preflight] Running pre-flight checks
        [WARNING Hostname]: hostname "node" could not be reached
        [WARNING Hostname]: hostname "node": lookup node on 192.168.246.2:53: server misbehaving
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
        [ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Solution

Load the br_netfilter and ipvs kernel modules and apply the sysctl settings below to fix the errors above. Root privileges are required.

# Load the br_netfilter and ipvs modules
modprobe br_netfilter
modprobe -- ip_vs
modprobe -- ip_vs_sh
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- nf_conntrack_ipv4


# Verify that the ip_vs modules are loaded
lsmod |grep ip_vs
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs_sh               12688  0 
ip_vs                 145458  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          139264  2 ip_vs,nf_conntrack_ipv4
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack

# Kernel parameters
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
vm.max_map_count=262144
EOF

# Apply and verify the kernel settings
sysctl -p /etc/sysctl.d/k8s.conf
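Note that modprobe only loads the modules for the current boot. On a systemd-based distribution they can also be made persistent (a sketch; the file name is arbitrary):

cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
ip_vs
ip_vs_sh
ip_vs_rr
ip_vs_wrr
nf_conntrack_ipv4
EOF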

Problem 5: during kubeadm init, crictl's --runtime-endpoint is misconfigured

kubeadm init reports the following error:

... ...
This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

From the log it looks as if running crictl against unix:///var/run/containerd/containerd.sock is failing. Running the crictl command directly reproduces the same kind of error:

[root@master k8s_install]# crictl images list
WARN[0000] image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
E1010 17:19:18.816289    3832 remote_image.go:119] "ListImages with filter from image service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\"" filter="&ImageFilter{Image:&ImageSpec{Image:list,Annotations:map[string]string{},},}"
FATA[0000] listing images: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory"

The cause of the error is that crictl falls back to the deprecated default endpoints [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock] when pulling images. Since these defaults are deprecated, the containerd.sock endpoint has to be specified explicitly; the fatal error that follows is simply the first default, dockershim.sock, not being found.
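As the kubeadm hint above shows, the endpoint can also be passed explicitly on every invocation; writing it into /etc/crictl.yaml (the fix below) simply makes it the default:

crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock --image-endpoint unix:///var/run/containerd/containerd.sock images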

Solution: modify the crictl configuration file

cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 0
debug: false
pull-image-on-create: false
EOF

Running the crictl images list command again no longer reports an error:

[root@master ~]# crictl images list
IMAGE                                                             TAG                 IMAGE ID            SIZE
registry.aliyuncs.com/google_containers/coredns                   v1.10.1             ead0a4a

Problem 6: failed to pull the pause image

Problem description and troubleshooting

During kubeadm init, the error "The kubelet is not running" is reported:

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

Running the command suggested in the log, crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a, shows that no containers are running at all. The containerd log contains the following errors:

[root@master ~]# journalctl -fu containerd
...
Oct 11 08:35:16 master.k8s containerd[1903]: time="2023-10-11T08:35:16.760026536+08:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-node,Uid:a5a7c15a42701ab6c9dca630e6523936,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 108.177.125.82:443: connect: connection refused"
Oct 11 08:35:18 master.k8s containerd[1903]: time="2023-10-11T08:35:18.606581001+08:00" level=info msg="trying next host" error="failed to do request: Head \"https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 108.177.125.82:443: connect: connection refused" host=registry.k8s.io
...

The error shows that containerd failed to pull the sandbox image: error="failed to get sandbox image \"registry.k8s.io/pause:3.6\"".
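A quick way to confirm the root cause is to try pulling both images by hand (a sketch, using the two image references from the logs above):

# Fails: registry.k8s.io is unreachable from this host
crictl pull registry.k8s.io/pause:3.6
# Succeeds: the Aliyun mirror is reachable
crictl pull registry.aliyuncs.com/google_containers/pause:3.9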

Solution: modify the containerd configuration

Check the current containerd configuration with the containerd config dump command:

[root@master k8s_install]# containerd config dump
... ...
[plugins]
...
  [plugins."io.containerd.grpc.v1.cri"]
...
    sandbox_image = "registry.k8s.io/pause:3.6"
    selinux_category_range = 1024

... ...

The containerd configuration uses registry.k8s.io/pause:3.6 as the pause (sandbox) image, while the image available locally is registry.aliyuncs.com/google_containers/pause:3.9:

[root@master k8s_install]# crictl images list | grep pause
registry.aliyuncs.com/google_containers/pause                     3.9                 e6f1816883972       322kB

Run containerd config dump > /etc/containerd/config.toml to export the current configuration to the config file, then change the sandbox_image setting.

## Export the current configuration to the config file
containerd config dump > /etc/containerd/config.toml


## Edit the configuration file /etc/containerd/config.toml and change the sandbox_image setting
[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
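Equivalently, the edit can be scripted (a sketch; it assumes the exact image strings shown above):

sudo sed -i 's|sandbox_image = "registry.k8s.io/pause:3.6"|sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"|' /etc/containerd/config.toml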

Restart containerd:

sudo systemctl restart containerd

Check the current containerd configuration to verify that the pause image change has taken effect:

[root@master k8s_install]# containerd config dump | grep pause
    pause_threshold = 0.02
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"

Problem 7: kubelet reports that /var/lib/kubelet/config.yaml does not exist

Oct 11 11:38:00 slave.k8s kubelet[77030]: E1011 11:38:00.869724   77030 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"

Before kubeadm init or kubeadm join has been executed, the kubelet log will show that it cannot read the configuration file /var/lib/kubelet/config.yaml. There is no need to worry: the file is generated automatically once kubeadm init/join has run.
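To double-check afterwards, for example:

# After kubeadm init/join completes, the file exists and kubelet should be running
ls -l /var/lib/kubelet/config.yaml
systemctl status kubelet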
