Why installing HA etcd with Helm failed: Bitnami doesn't support ARM — a walkthrough
After following the chart's documented steps, the pod and its PVC both sat in Pending state. Here is the whole process.
[root@arm download]# wget https://get.helm.sh/helm-v3.10.3-linux-arm64.tar.gz
--2022-12-30 19:07:28-- https://get.helm.sh/helm-v3.10.3-linux-arm64.tar.gz
Resolving get.helm.sh (get.helm.sh)... 152.199.39.108, 2606:2800:247:1cb7:261b:1f9c:2074:3c
Connecting to get.helm.sh (get.helm.sh)|152.199.39.108|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 13085851 (12M) [application/x-tar]
Saving to: ‘helm-v3.10.3-linux-arm64.tar.gz’

helm-v3.10.3-linux-arm64.tar.gz 100%[======================================================================================>] 12.48M --.-KB/s in 0.04s

2022-12-30 19:07:29 (332 MB/s) - ‘helm-v3.10.3-linux-arm64.tar.gz’ saved [13085851/13085851]

[root@arm download]# ll
total 125260
drwxr-xr-x 2 root root 4096 Dec 30 19:07 ./
drwx------ 12 root root 4096 Dec 29 18:54 ../
-rw-r--r-- 1 root root 115165850 Dec 7 03:31 go1.19.4.linux-arm64.tar.gz
-rw-r--r-- 1 root root 13085851 Dec 14 23:44 helm-v3.10.3-linux-arm64.tar.gz
[root@arm download]# tar -zxvf helm-v3.10.3-linux-arm64.tar.gz
linux-arm64/
linux-arm64/helm
linux-arm64/LICENSE
linux-arm64/README.md
[root@arm download]# mv linux-arm64/helm /usr/local/bin/helm
[root@arm download]# helm help
The Kubernetes package manager
...
[root@arm download]# helm repo add bitnami https://charts.bitnami.com/bitnami
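The download above works because the `linux-arm64` tarball matches the host CPU. If you are unsure which suffix applies to a machine, `uname -m` can be mapped to the naming Helm uses for its release tarballs (a small sketch; the function name is mine, not part of Helm):

```shell
# Map `uname -m` output to the architecture suffix used in Helm
# release tarballs (helm-vX.Y.Z-linux-<arch>.tar.gz).
helm_arch() {
  case "$1" in
    x86_64)        echo amd64 ;;
    aarch64|arm64) echo arm64 ;;
    armv7l)        echo arm ;;
    *) echo "unsupported arch: $1" >&2; return 1 ;;
  esac
}

# On this ARM box `uname -m` reports aarch64, so:
helm_arch aarch64   # prints: arm64
```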
[root@arm download]# helm pull bitnami/etcd
[root@arm download]# tar -xvf etcd-8.5.11.tgz
[root@arm download]# vi etcd/values.yaml   # newer chart versions already default this to false, so no change was needed
# (It later turned out I had been looking at the wrong setting anyway — the one that matters is around line 588.)
## persistentVolumeClaimRetentionPolicy
473 ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#persistentvolumeclaim-retention
474 ## @param persistentVolumeClaimRetentionPolicy.enabled Controls if and how PVCs are deleted during the lifecycle of a StatefulSet
475 ## @param persistentVolumeClaimRetentionPolicy.whenScaled Volume retention behavior when the replica count of the StatefulSet is reduced
476 ## @param persistentVolumeClaimRetentionPolicy.whenDeleted Volume retention behavior that applies when the StatefulSet is deleted
477 persistentVolumeClaimRetentionPolicy:
478   enabled: false
[root@arm download]# helm install my-release ./etcd
...
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: etcd
CHART VERSION: 8.5.11
APP VERSION: 3.5.6

** Please be patient while the chart is being deployed **

etcd can be accessed via port 2379 on the following DNS name from within your cluster:

    my-release-etcd.default.svc.cluster.local

To create a pod that you can use as a etcd client run the following command:

    kubectl run my-release-etcd-client --restart='Never' --image docker.io/bitnami/etcd:3.5.6-debian-11-r10 --env ROOT_PASSWORD=$(kubectl get secret --namespace default my-release-etcd -o jsonpath="{.data.etcd-root-password}" | base64 -d) --env ETCDCTL_ENDPOINTS="my-release-etcd.default.svc.cluster.local:2379" --namespace default --command -- sleep infinity

Then, you can set/get a key using the commands below:

    kubectl exec --namespace default -it my-release-etcd-client -- bash
    etcdctl --user root:$ROOT_PASSWORD put /message Hello
    etcdctl --user root:$ROOT_PASSWORD get /message

To connect to your etcd server from outside the cluster execute the following commands:

    kubectl port-forward --namespace default svc/my-release-etcd 2379:2379 &
    echo "etcd URL: http://127.0.0.1:2379"

 * As rbac is enabled you should add the flag `--user root:$ETCD_ROOT_PASSWORD` to the etcdctl commands. Use the command below to export the password:

    export ETCD_ROOT_PASSWORD=$(kubectl get secret --namespace default my-release-etcd -o jsonpath="{.data.etcd-root-password}" | base64 -d)

[root@arm download]# kubectl run my-release-etcd-client --restart='Never' --image docker.io/bitnami/etcd:3.5.6-debian-11-r10 --env ROOT_PASSWORD=$(kubectl get secret --namespace default my-release-etcd -o jsonpath="{.data.etcd-root-password}" | base64 -d) --env ETCDCTL_ENDPOINTS="my-release-etcd.default.svc.cluster.local:2379" --namespace default --command -- sleep infinity
pod/my-release-etcd-client created
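The `--env ROOT_PASSWORD=$(kubectl get secret ... | base64 -d)` idiom in the NOTES works because Secret `.data` values are always base64-encoded. The encode/decode round trip can be seen without a cluster (a sketch; `hunter2` is a made-up password, not the chart's):

```shell
# Kubernetes stores Secret data base64-encoded; `base64 -d` recovers it.
encoded=$(printf 'hunter2' | base64)
echo "$encoded"                      # prints: aHVudGVyMg==
printf '%s' "$encoded" | base64 -d   # prints: hunter2
```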
[root@arm download]# kubectl exec --namespace default -it my-release-etcd-client -- bash
error: cannot exec into a container in a completed pod; current phase is Failed
[root@arm download]# k get po
NAME READY STATUS RESTARTS AGE
envoy-fb5d77cc9-rjw9w 1/1 Running 0 24h
my-release-etcd-0 0/1 Pending 0 4m7s
my-release-etcd-client 0/1 Error 0 18s
[root@arm download]# k get po
NAME READY STATUS RESTARTS AGE
envoy-fb5d77cc9-rjw9w 1/1 Running 0 24h
my-release-etcd-0 0/1 Pending 0 4m23s
my-release-etcd-client 0/1 Error 0 34s
[root@arm download]# k get po
NAME READY STATUS RESTARTS AGE
envoy-fb5d77cc9-rjw9w 1/1 Running 0 24h
my-release-etcd-0 0/1 Pending 0 5m47s
my-release-etcd-client 0/1 Error 0 118s
## My first guess was a port conflict, so I dug through `helm help` and `helm install -h`.
# Then I realized those ports are inside the pod, so they cannot conflict, and turned to the PV/PVC side instead.
Reading the official PV/PVC documentation, I found:

Static provisioning
A cluster administrator creates a number of PVs. They carry the details of the real storage and are available to cluster users for consumption. The PV objects exist in the Kubernetes API.

Dynamic provisioning
When none of the administrator's static PVs match a user's PersistentVolumeClaim, the cluster may try to dynamically provision a volume for that PVC. This provisioning is based on StorageClasses: the PVC must request a storage class, and the administrator must have created and configured that class, for dynamic provisioning to happen. A claim that specifies the storage class "" effectively disables dynamic provisioning for itself.

The PVC's details confirmed this: its StorageClass really was empty, so for now the only option was to provision a PV by hand.
[root@arm ~]# k describe pvc data-my-release-etcd-0
Name: data-my-release-etcd-0
Namespace: default
StorageClass:
Status: Pending
Volume:
Labels:        app.kubernetes.io/instance=my-release
               app.kubernetes.io/name=etcd
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: my-release-etcd-0
Events:
  Type    Reason         Age                     From                         Message
  ----    ------         ----                    ----                         -------
  Normal  FailedBinding  2m32s (x50807 over 8d)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
Creating a PV got one PVC bound, but things still didn't work:
[root@arm pv]# cat po-volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
[root@arm pv]# k apply -f po-volume.yaml
persistentvolume/task-pv-volume created
[root@arm pv]# k get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
task-pv-volume 8Gi RWO Retain Bound default/data-my-release-etcd-2 3m26s
[root@arm pv]# k get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-my-release-etcd-0 Pending 9d
data-my-release-etcd-1 Pending 8d
data-my-release-etcd-2 Bound task-pv-volume 8Gi RWO 8d
[root@arm pv]# cat po-volume1.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume-1
  labels:
    type: local
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
[root@arm pv]# cat po-volume2.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume-2
  labels:
    type: local
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
[root@arm pv]# k apply -f po-volume1.yaml
persistentvolume/task-pv-volume-1 created
[root@arm pv]# k apply -f po-volume2.yaml
persistentvolume/task-pv-volume-2 created
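A PV satisfies exactly one claim, which is why the three-replica StatefulSet needed three separate PVs. The two extra manifests above could equally be stamped out in a loop (a sketch using the same names; sharing one hostPath across replicas is only tolerable in a throwaway single-node experiment):

```shell
# Generate one PersistentVolume manifest per extra etcd replica.
for i in 1 2; do
  cat > "po-volume${i}.yaml" <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume-${i}
  labels:
    type: local
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
EOF
done

# Show the generated PV names.
grep -h '^  name:' po-volume1.yaml po-volume2.yaml
```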
[root@arm pv]# k get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-my-release-etcd-0 Pending 9d
data-my-release-etcd-1 Pending 9d
data-my-release-etcd-2 Bound task-pv-volume 8Gi RWO 9d
[root@arm pv]# k get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-my-release-etcd-0 Bound task-pv-volume-1 8Gi RWO 9d
data-my-release-etcd-1 Bound task-pv-volume-2 8Gi RWO 9d
data-my-release-etcd-2 Bound task-pv-volume 8Gi RWO 9d
[root@arm pv]# k get po
NAME READY STATUS RESTARTS AGE
envoy-fb5d77cc9-rjw9w 1/1 Running 0 10d
my-release-etcd-0 0/1 Pending 0 9d
my-release-etcd-1 0/1 Pending 0 9d
my-release-etcd-2 0/1 CrashLoopBackOff 7 (79s ago) 9d
my-release-etcd-client 0/1 Error 0 9d
[root@arm pv]# k get po
NAME READY STATUS RESTARTS AGE
envoy-fb5d77cc9-rjw9w 1/1 Running 0 10d
my-release-etcd-0 0/1 Error 0 9d
my-release-etcd-1 0/1 Error 0 9d
my-release-etcd-2 0/1 CrashLoopBackOff 7 (81s ago) 9d
my-release-etcd-client 0/1 Error 0 9d
[root@arm download]# k describe pod my-release-etcd-0
Name: my-release-etcd-0
Namespace: default
Priority: 0
Node: arm/10.0.0.29
Start Time: Sun, 08 Jan 2023 19:44:00 +0800
Labels:       app.kubernetes.io/instance=my-release
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=etcd
              controller-revision-hash=my-release-etcd-5d49546c66
              helm.sh/chart=etcd-8.5.11
              statefulset.kubernetes.io/pod-name=my-release-etcd-0
Annotations:  checksum/token-secret: b9cdb65acc8d3eff297975d64902520093c035e029074d0dc7b172f405f46e00
              cni.projectcalico.org/containerID: 27fecd909420c116cc20862804e4c98aec2c37620b88a710dfead06d252eb863
              cni.projectcalico.org/podIP: 192.168.64.206/32
              cni.projectcalico.org/podIPs: 192.168.64.206/32
Status: Running
IP: 192.168.64.206
IPs:
  IP:  192.168.64.206
Controlled By: StatefulSet/my-release-etcd
Containers:
  etcd:
    Container ID:   docker://c65f7b13411d6117954fd50fe35f826189667c6d1bf53ce8ef571a82cec165ff
    Image:          docker.io/bitnami/etcd:3.5.6-debian-11-r10
    Image ID:       docker-pullable://bitnami/etcd@sha256:2d7b831769734bb97a5c1cfd2fe46e29f422b70b5ba9f9aedfd91300839ac3ee
    Ports:          2379/TCP, 2380/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 08 Jan 2023 19:45:40 +0800
      Finished:     Sun, 08 Jan 2023 19:45:40 +0800
    Ready:          False
    Restart Count:  4
    Liveness:       exec [/opt/bitnami/scripts/etcd/healthcheck.sh] delay=60s timeout=5s period=30s #success=1 #failure=5
    Readiness:      exec [/opt/bitnami/scripts/etcd/healthcheck.sh] delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:
      BITNAMI_DEBUG:                     false
      MY_POD_IP:                         (v1:status.podIP)
      MY_POD_NAME:                       my-release-etcd-0 (v1:metadata.name)
      MY_STS_NAME:                       my-release-etcd
      ETCDCTL_API:                       3
      ETCD_ON_K8S:                       yes
      ETCD_START_FROM_SNAPSHOT:          no
      ETCD_DISASTER_RECOVERY:            no
      ETCD_NAME:                         $(MY_POD_NAME)
      ETCD_DATA_DIR:                     /bitnami/etcd/data
      ETCD_LOG_LEVEL:                    info
      ALLOW_NONE_AUTHENTICATION:         no
      ETCD_ROOT_PASSWORD:                <set to the key 'etcd-root-password' in secret 'my-release-etcd'>  Optional: false
      ETCD_AUTH_TOKEN:                   jwt,priv-key=/opt/bitnami/etcd/certs/token/jwt-token.pem,sign-method=RS256,ttl=10m
      ETCD_ADVERTISE_CLIENT_URLS:        http://$(MY_POD_NAME).my-release-etcd-headless.default.svc.cluster.local:2379,http://my-release-etcd.default.svc.cluster.local:2379
      ETCD_LISTEN_CLIENT_URLS:           http://0.0.0.0:2379
      ETCD_INITIAL_ADVERTISE_PEER_URLS:  http://$(MY_POD_NAME).my-release-etcd-headless.default.svc.cluster.local:2380
      ETCD_LISTEN_PEER_URLS:             http://0.0.0.0:2380
      ETCD_CLUSTER_DOMAIN:               my-release-etcd-headless.default.svc.cluster.local
    Mounts:
      /bitnami/etcd from data (rw)
      /opt/bitnami/etcd/certs/token/ from etcd-jwt-token (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bz6gj (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  etcd-jwt-token:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-release-etcd-jwt-token
    Optional:    false
  data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kube-api-access-bz6gj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  2m36s                 default-scheduler  Successfully assigned default/my-release-etcd-0 to arm
  Normal   Created    101s (x4 over 2m35s)  kubelet            Created container etcd
  Normal   Started    101s (x4 over 2m35s)  kubelet            Started container etcd
  Warning  BackOff    70s (x12 over 2m33s)  kubelet            Back-off restarting failed container
  Normal   Pulled     57s (x5 over 2m35s)   kubelet            Container image "docker.io/bitnami/etcd:3.5.6-debian-11-r10" already present on machine
[root@arm download]# k logs my-release-etcd-0
exec /opt/bitnami/scripts/etcd/entrypoint.sh: exec format error
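`exec format error` is the kernel refusing to run a binary built for a different CPU architecture — the first concrete hint that the image itself is the problem. Multi-arch images publish a manifest list whose supported platforms can be inspected before pulling; a sketch of extracting them (the JSON below is a hand-made illustration, not the real bitnami/etcd manifest):

```shell
# With a real image the manifest list comes from, e.g.:
#   docker manifest inspect docker.io/bitnami/etcd:3.5.6-debian-11-r10
# Here we parse an illustrative sample instead.
cat > manifest.json <<'EOF'
{"manifests": [
  {"platform": {"architecture": "amd64", "os": "linux"}},
  {"platform": {"architecture": "386", "os": "linux"}}
]}
EOF

# List the architectures the image ships for; no arm64 in this sample.
grep -o '"architecture": *"[^"]*"' manifest.json | sed 's/.*: *"//; s/"$//'
```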
Later, reading articles online, I noticed I had forgotten to set persistence.enabled to false in the chart's values.yaml, so I tried emptyDir mode:

cd /root/download
vim etcd/values.yaml

And set persistence to false:

558 persistence:
559   ## @param persistence.enabled If true, use a Persistent Volume Claim. If false, use emptyDir.
560   ##
561   enabled: false
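Whether the edit actually landed can be checked with a script instead of eyeballing the file (a sketch; the tiny parser below only understands the flat two-level layout shown above, nothing like full YAML):

```shell
# Read a simple "section:\n  key: value" flag from a values file.
yaml_flag() {  # usage: yaml_flag <file> <section> <key>
  awk -v s="$2:" -v k="$3:" '
    $1 == s { insec = 1; next }
    insec && /^[^[:space:]]/ { insec = 0 }   # left the section
    insec && $1 == k { print $2; exit }
  ' "$1"
}

# Demo against a minimal stand-in for etcd/values.yaml:
cat > values-sample.yaml <<'EOF'
persistence:
  enabled: false
EOF
yaml_flag values-sample.yaml persistence enabled   # prints: false
```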
Then reinstall from scratch:
[root@arm download]# helm uninstall my-release
release "my-release" uninstalled
[root@arm download]# k get pod
NAME READY STATUS RESTARTS AGE
envoy-fb5d77cc9-rjw9w 1/1 Running 0 10d
my-release-etcd-client 0/1 Error 0 9d
[root@arm download]# k get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-my-release-etcd-0 Bound task-pv-volume-1 8Gi RWO 9d
data-my-release-etcd-1 Bound task-pv-volume-2 8Gi RWO 9d
data-my-release-etcd-2 Bound task-pv-volume 8Gi RWO 9d
[root@arm download]# k delete pvc data-my-release-etcd-0 data-my-release-etcd-1 data-my-release-etcd-2
persistentvolumeclaim "data-my-release-etcd-0" deleted
persistentvolumeclaim "data-my-release-etcd-1" deleted
persistentvolumeclaim "data-my-release-etcd-2" deleted
[root@arm download]# k delete pv task-pv-volume task-pv-volume-1 task-pv-volume-2
persistentvolume "task-pv-volume" deleted
persistentvolume "task-pv-volume-1" deleted
persistentvolume "task-pv-volume-2" deleted
[root@arm download]# helm install my-release ./etcd
NAME: my-release
LAST DEPLOYED: Sun Jan 8 19:43:49 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: etcd
CHART VERSION: 8.5.11
APP VERSION: 3.5.6

** Please be patient while the chart is being deployed **

etcd can be accessed via port 2379 on the following DNS name from within your cluster:

    my-release-etcd.default.svc.cluster.local

To create a pod that you can use as a etcd client run the following command:

    kubectl run my-release-etcd-client --restart='Never' --image docker.io/bitnami/etcd:3.5.6-debian-11-r10 --env ROOT_PASSWORD=$(kubectl get secret --namespace default my-release-etcd -o jsonpath="{.data.etcd-root-password}" | base64 -d) --env ETCDCTL_ENDPOINTS="my-release-etcd.default.svc.cluster.local:2379" --namespace default --command -- sleep infinity

Then, you can set/get a key using the commands below:

    kubectl exec --namespace default -it my-release-etcd-client -- bash
    etcdctl --user root:$ROOT_PASSWORD put /message Hello
    etcdctl --user root:$ROOT_PASSWORD get /message

To connect to your etcd server from outside the cluster execute the following commands:

    kubectl port-forward --namespace default svc/my-release-etcd 2379:2379 &
    echo "etcd URL: http://127.0.0.1:2379"

 * As rbac is enabled you should add the flag `--user root:$ETCD_ROOT_PASSWORD` to the etcdctl commands. Use the command below to export the password:

    export ETCD_ROOT_PASSWORD=$(kubectl get secret --namespace default my-release-etcd -o jsonpath="{.data.etcd-root-password}" | base64 -d)
[root@arm download]# k get pod
NAME READY STATUS RESTARTS AGE
envoy-fb5d77cc9-rjw9w 1/1 Running 0 10d
my-release-etcd-0 0/1 CrashLoopBackOff 2 (18s ago) 44s
my-release-etcd-client 0/1 Error 0 9d
[root@arm download]# k logs my-release-etcd-0
exec /opt/bitnami/scripts/etcd/entrypoint.sh: exec format error
[root@arm download]#
The exact same error. Searching online eventually turned up a Bitnami GitHub issue — "Running the container fails with 'exec /opt/bitnami/scripts/zookeeper/entrypoint.sh: exec format error'" — with this reply from the maintainers:
I’m afraid we currently don’t have support for ARM architecture in our containers. It’s something that we have in our backlog, but there are no immediate plans to work on it.
As soon as there are news, we will let you know.
So Bitnami's container images simply didn't support ARM servers at the time... I was gutted.
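If ARM support is a hard requirement, one workaround (my own note, not from the issue) is to sidestep the Bitnami images entirely: the upstream etcd project does publish linux-arm64 release tarballs, which can back a hand-written StatefulSet or a plain systemd service. A sketch of fetching one:

```shell
# Upstream etcd publishes per-architecture release tarballs on GitHub.
ETCD_VER=v3.5.6
ARCH=arm64   # the suffix matching `uname -m` aarch64
URL="https://github.com/etcd-io/etcd/releases/download/${ETCD_VER}/etcd-${ETCD_VER}-linux-${ARCH}.tar.gz"
echo "$URL"
# Then: wget "$URL" && tar -xzf "etcd-${ETCD_VER}-linux-${ARCH}.tar.gz"
```

Note that the Bitnami chart cannot simply be pointed at the upstream image, since its entrypoint and healthcheck scripts (`/opt/bitnami/scripts/etcd/...`) exist only inside Bitnami's own images.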