This is what I keep getting:

[root@centos-master ~]# kubectl get pods
NAME               READY     STATUS             RESTARTS   AGE
nfs-server-h6nw8   1/1       Running            0          1h
nfs-web-07rxz      0/1       CrashLoopBackOff   8          16m
nfs-web-fdr9h      0/1       CrashLoopBackOff   8          16m

Below is the output of kubectl describe pods:

Events:
  FirstSeen LastSeen    Count   From                SubobjectPath       Type        Reason      Message
  --------- --------    -----   ----                -------------       --------    ------      -------
  16m       16m     1   {default-scheduler }                    Normal      Scheduled   Successfully assigned nfs-web-fdr9h to centos-minion-2
  16m       16m     1   {kubelet centos-minion-2}   spec.containers{web}    Normal      Created     Created container with docker id 495fcbb06836
  16m       16m     1   {kubelet centos-minion-2}   spec.containers{web}    Normal      Started     Started container with docker id 495fcbb06836
  16m       16m     1   {kubelet centos-minion-2}   spec.containers{web}    Normal      Started     Started container with docker id d56f34ae4e8f
  16m       16m     1   {kubelet centos-minion-2}   spec.containers{web}    Normal      Created     Created container with docker id d56f34ae4e8f
  16m       16m     2   {kubelet centos-minion-2}               Warning     FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "web" with CrashLoopBackOff: "Back-off 10s restarting failed container=web pod=nfs-web-fdr9h_default(461c937d-d870-11e6-98de-005056040cc2)"

I have two pods, nfs-web-07rxz and nfs-web-fdr9h, but if I run kubectl logs nfs-web-07rxz (with or without the -p option) I don't see any logs for either pod.

[root@centos-master ~]# kubectl logs nfs-web-07rxz -p
[root@centos-master ~]# kubectl logs nfs-web-07rxz

Here is my replicationController YAML file:

apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-web
spec:
  replicas: 2
  selector:
    role: web-frontend
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
      - name: web
        image: eso-cmbu-docker.artifactory.eng.vmware.com/demo-container:demo-version3.0
        ports:
          - name: web
            containerPort: 80
        securityContext:
          privileged: true

My Docker image was built from this simple Dockerfile:

FROM ubuntu
RUN apt-get update
RUN apt-get install -y nginx
RUN apt-get install -y nfs-common

I run my Kubernetes cluster on CentOS 1611, kube version:

[root@centos-master ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.0", GitCommit:"86dc49aa137175378ac7fba7751c3d3e7f18e5fc", GitTreeState:"clean", BuildDate:"2016-12-15T16:57:18Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.0", GitCommit:"86dc49aa137175378ac7fba7751c3d3e7f18e5fc", GitTreeState:"clean", BuildDate:"2016-12-15T16:57:18Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}

If I run the Docker image via docker run, I can run it without any problem; the crash only happens through Kubernetes.

Can someone help me out? How can I debug this without seeing any logs?


Current answer

In your YAML file, add command and args lines:

...
    containers:
      - name: api
        image: localhost:5000/image-name
        command: [ "sleep" ]
        args: [ "infinity" ]
...

That worked for me.
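A hedged follow-up sketch: once the container is kept alive by sleep infinity, you can open a shell inside it and start the real process by hand to see why it crashes (the pod name is just an example taken from the question):

# Open an interactive shell in the now long-running container
kubectl exec -it nfs-web-07rxz -- /bin/bash

# Inside the container, start the workload manually and watch its output,
# e.g. for the nginx image from the question:
nginx -g 'daemon off;'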

Other answers

# Show details of a specific pod
kubectl describe pod <pod name> -n <namespace-name>

# View logs of a specific pod
kubectl logs <pod name> -n <namespace-name>
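Two more commands that may help here (assuming a reasonably recent kubectl): the exit code of the last crashed container instance often points straight at the problem, and time-sorted events show the back-off history.

# Exit code of the previous (crashed) container instance
kubectl get pod <pod name> -n <namespace-name> -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'

# Events sorted by time, including the CrashLoopBackOff back-off messages
kubectl get events -n <namespace-name> --sort-by=.metadata.creationTimestamp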

In my case the problem was an incorrect list of command-line arguments. In my deployment file I was doing this:

...
args:
  - "--foo 10"
  - "--bar 100"

instead of the correct way:

...
args:
  - "--foo"
  - "10"
  - "--bar"
  - "100"

I had a similar problem, and it was solved once I corrected my zookeeper.yaml file: the Service name did not match the container name in the Deployment. Making them the same resolved it.

apiVersion: v1
kind: Service
metadata:
  name: zk1
  namespace: nbd-mlbpoc-lab
  labels:
    app: zk-1
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zk-1
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: zk-deployment
  namespace: nbd-mlbpoc-lab
spec:
  template:
    metadata:
      labels:
        app: zk-1
    spec:
      containers:
      - name: zk1
        image: digitalwonderland/zookeeper
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_ID
          value: "1"
        - name: ZOOKEEPER_SERVER_1
          value: zk1
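A hedged way to double-check a mismatch like this, using the names from the manifests above, is to verify that the Service actually selects the pod and to look at the pod itself:

# Should list the zookeeper pod IP under ENDPOINTS; an empty list
# usually means the Service selector and the pod labels don't match
kubectl get endpoints zk1 -n nbd-mlbpoc-lab

# Pods carrying the label the Service selects
kubectl get pods -n nbd-mlbpoc-lab -l app=zk-1 -o wide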

It seems there can be many reasons why a Pod ends up in the CrashLoopBackOff state.

In my case, one of the containers was terminating continuously because of a missing environment value.

So the best way to debug is (a compact shell version of these checks, using a pod name from the question, is sketched after the list):

1. Check the Pod description output, i.e. kubectl describe pod abcxxx
2. Check the events generated for the Pod, i.e. kubectl get events | grep abcxxx
3. Check whether endpoints have been created for the Pod, i.e. kubectl get ep
4. Check whether dependent resources are in place, e.g. CRDs, ConfigMaps, or any other resource that may be required.
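A hedged, compact version of the checks above, applied to one of the pods from the question:

# 1. Pod description: container state, last exit code, events at the bottom
kubectl describe pod nfs-web-07rxz

# 2. Events related to the pod
kubectl get events | grep nfs-web-07rxz

# 3. Endpoints (useful once a Service is supposed to select the pod)
kubectl get ep

# 4. Dependent resources the pod may reference
kubectl get configmaps
kubectl get secrets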

I needed to keep a pod running for subsequent kubectl exec calls, and as the comments above pointed out, my pod was being killed by the k8s cluster because it had completed all of its tasks. I managed to keep my pod running by simply starting it with a command that never stops on its own:

kubectl run YOUR_POD_NAME -n YOUR_NAMESPACE --image SOME_PUBLIC_IMAGE:latest --command -- tail -f /dev/null