How do I retry an image pull in a Kubernetes pod?

I'm new to Kubernetes and I'm having a problem with one of my pods. When I run the command

 kubectl get pods

the result is:

NAME                   READY     STATUS             RESTARTS   AGE
mysql-apim-db-1viwg    1/1       Running            1          20h
mysql-govdb-qioee      1/1       Running            1          20h
mysql-userdb-l8q8c     1/1       Running            0          20h
wso2am-default-813fy   0/1       ImagePullBackOff   0          20h

Because of the problem with the "wso2am-default-813fy" pod, I need to restart it. Any suggestions?


Usually, in the case of "ImagePullBackOff", the pull is retried automatically after a few seconds or minutes. If you want to retry manually, you can delete the old pod and recreate it. The one-line command to delete and recreate the pod is:

kubectl replace --force -f <yml_file_describing_pod>

If all goes well, you should see something like:

<resource-type> <resource-name> deleted
<resource-type> <resource-name> replaced

Details of this can be found in the Kubernetes documentation; at the time of writing, see the "Managing Resources" (manage-deployment) and kubectl cheat sheet pages.

In case you don't have the YAML file:

kubectl get pod PODNAME -n NAMESPACE -o yaml | kubectl replace --force -f -

If the Pod is part of a Deployment or Service, deleting it will restart the Pod and, potentially, place it onto another node:

$ kubectl delete po $POD_NAME
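
On kubectl 1.15 and later you can also restart all Pods of a Deployment without deleting them by hand; the deployment name here is a placeholder:

$ kubectl rollout restart deployment <deployment_name>

Each replacement Pod then goes through an image pull again, subject to its imagePullPolicy.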

Replace it if it's an individual Pod (i.e. not managed by a controller):

$ kubectl get po -n $namespace $POD_NAME -o yaml | kubectl replace -f -

First, try to see what's wrong with the pod:

kubectl logs -p <your_pod>

In my case it was a problem with the YAML file.

So, I needed to correct the configuration file and replace it:

kubectl replace --force -f <yml_file_describing_pod>
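
If the container never started, the logs can be empty; in that case the Pod's events usually show the exact pull error:

kubectl describe pod <your_pod>

Check the Events section at the bottom of the output for messages such as "Failed to pull image" or "Back-off pulling image".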

Try deleting the pod; when it is recreated by its controller, the image will be pulled again.

kubectl delete pod <pod_name> -n <namespace_name>
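
For the pod from the question, assuming it lives in the default namespace, that would be:

kubectl delete pod wso2am-default-813fy -n default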

Most probably, the ImagePullBackOff is due to either the image not being present in the registry or a problem with the pod's YAML file.
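
To rule out the first cause, you can try pulling the image by hand on one of the nodes (a sketch assuming Docker is the container runtime; substitute the exact image and tag from your pod spec):

docker pull <image>:<tag>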

What I would do is export the current pod definition, then delete and recreate the pod from it:

kubectl get pod -n $namespace $POD_NAME -o yaml > pod.yaml
kubectl replace --force -f pod.yaml

I would also inspect pod.yaml to see why the earlier pod didn't work.
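
A quick sanity check on the exported file is to look at the image reference for typos in the repository name or tag:

grep -n 'image:' pod.yaml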

There is also the possibility that the pull policy is not defined, or that Kubernetes is configured to pull from the public hub but fails due to network issues. Try setting up a local secure registry and pulling from it instead; that should work.
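
For reference, the pull policy is set per container in the Pod spec. A minimal sketch with illustrative names (the local registry address is an assumption):

apiVersion: v1
kind: Pod
metadata:
  name: wso2am-default
spec:
  containers:
  - name: wso2am
    # hypothetical local registry; use your own image reference
    image: registry.local:5000/wso2am:latest
    # "Always" forces a pull on every Pod start; "IfNotPresent" reuses a cached image
    imagePullPolicy: Always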