
Why does an application deployed on Kubernetes end up in CrashLoopBackOff?

I'm running a Kubernetes cluster with 3 hosts: 1 master and 2 nodes.
The Kubernetes version is 1.7.
The application is deployed with something like the following:

deployment.yaml

apiVersion: v1
kind: Service
metadata:
  name: server
  labels:
    app: server
spec:
  ports:
    - port: 80
  selector:
    app: server
    tier: frontend
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: server
  labels:
    app: server
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: server
        tier: frontend
    spec:
      containers:
      - image: 192.168.33.13/myapp/server
        name: server
        ports:
        - containerPort: 3000
          name: server
        imagePullPolicy: Always
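
(A side note on the spec above: the image reference has no tag, so it resolves to :latest, and with imagePullPolicy: Always the kubelet re-pulls the image on every container restart, which is why pull errors can surface long after the first successful run. A pinned tag would look roughly like this; v1 is only a placeholder, not a tag that necessarily exists in this registry:)

      containers:
      - image: 192.168.33.13/myapp/server:v1   # pinned tag instead of implicit :latest
        name: server
        ports:
        - containerPort: 3000
          name: server
        # IfNotPresent avoids re-pulling on every restart once the tag is pinned
        imagePullPolicy: IfNotPresent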

192.168.33.13 is the image registry, set up with Harbor.
The Harbor registry is reachable from the Kubernetes cluster.
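
(For what it's worth, one way to sanity-check that from each node is to hit the registry's v2 endpoint directly; this assumes Harbor is serving plain HTTP on port 80, which is also what the pull errors further down point at:)

$ curl -i http://192.168.33.13/v2/
# Any HTTP response at all (200, 401, ...) means the registry is reachable;
# "connection refused" here would match the Failed-to-pull events below.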

After running the deployment (kubectl create -f deployment.yaml), the image appears to be pulled into the k8s cluster and the app runs fine the first time, but once the containers restart it stops working:

$ kubectl get pods
NAME                                                         READY     STATUS             RESTARTS   AGE
server-962161505-kw3jf                                       0/1       CrashLoopBackOff   6          9m
server-962161505-lxcfb                                       0/1       CrashLoopBackOff   6          9m
server-962161505-mbnkn                                       0/1       CrashLoopBackOff   6          9m
$ kubectl describe pod server-962161505-kw3jf
Name:           server-962161505-kw3jf
Namespace:      default
Node:           node1/192.168.33.11
Start Time:     Mon, 13 Nov 2017 17:45:47 +0900
Labels:         app=server
                pod-template-hash=962161505
                tier=backend
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"server-962161505","uid":"0acadda6-c84f-11e7-84b8-02178ad2db9a","...
Status:         Running
IP:             10.42.254.104
Created By:     ReplicaSet/server-962161505
Controlled By:  ReplicaSet/server-962161505
Containers:
  server:
    Container ID:   docker://29eca3d9a20c60c83314101b036d742c5868c3bf25a39f28c5e4208bcdbfcede
    Image:          192.168.33.13/myapp/server
    Image ID:       docker-pullable://192.168.33.13/myapp/server@sha256:0e056e3ff5b1f1084e0946bc4211d33c6f48bc06dba7e07340c1609bbd5513d6
    Port:           3000/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 14 Nov 2017 10:13:12 +0900
      Finished:     Tue, 14 Nov 2017 10:13:13 +0900
    Ready:          False
    Restart Count:  26
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-csjqn (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  default-token-csjqn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-csjqn
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age                 From            Message
  ----     ------                 ----                ----            -------
  Normal   SuccessfulMountVolume  22m                 kubelet, node1  MountVolume.SetUp succeeded for volume "default-token-csjqn"
  Normal   SandboxChanged         22m                 kubelet, node1  Pod sandbox changed, it will be killed and re-created.
  Warning  Failed                 20m (x3 over 21m)   kubelet, node1  Failed to pull image "192.168.33.13/myapp/server": rpc error: code = 2 desc = Error response from daemon: {"message":"Get http://192.168.33.13/v2/: dial tcp 192.168.33.13:80: getsockopt: connection refused"}
  Normal   BackOff                20m (x5 over 21m)   kubelet, node1  Back-off pulling image "192.168.33.13/myapp/server"
  Normal   Pulling                4m (x7 over 21m)    kubelet, node1  pulling image "192.168.33.13/myapp/server"
  Normal   Pulled                 4m (x4 over 20m)    kubelet, node1  Successfully pulled image "192.168.33.13/myapp/server"
  Normal   Created                4m (x4 over 20m)    kubelet, node1  Created container
  Normal   Started                4m (x4 over 20m)    kubelet, node1  Started container
  Warning  FailedSync             10s (x99 over 21m)  kubelet, node1  Error syncing pod
  Warning  BackOff                10s (x91 over 20m)  kubelet, node1  Back-off restarting failed container
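
What stands out in the describe output is the Last State block: the container terminates with Reason: Completed and Exit Code: 0 about one second after starting. In other words its main process exits on its own rather than crashing, and the kubelet's restart back-off is what shows up as CrashLoopBackOff. The previous container's output should show why it exits so quickly:

$ kubectl logs server-962161505-kw3jf --previous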

Pushing the image to Docker Hub instead gives exactly the same result.

Answer

青黛色

Did you ever get this resolved?

May 25, 2017, 13:33