1 Overview
- kubectl get events
- kubectl get ev
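kubectl get events lists the Event objects in the current namespace; ev is the short name for the events resource. A few commonly used variations (standard kubectl flags, shown as a minimal sketch without output):
$ kubectl get events -A                                     # all namespaces
$ kubectl get events -w                                     # watch for new events
$ kubectl get events --sort-by=.metadata.creationTimestamp  # order by time
$ kubectl get events --field-selector type=Warning          # warnings only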
2 Example (v1.25)
$ kubectl get ev
LAST SEEN TYPE REASON OBJECT MESSAGE
5m5s Normal NodeHasSufficientMemory node/cluster1-node1 Node cluster1-node1 status is now: NodeHasSufficientMemory
57m Warning FreeDiskSpaceFailed node/cluster1-node1 failed to garbage collect required amount of images. Wanted to free 1268858880 bytes, but freed 0 bytes
57m Warning ImageGCFailed node/cluster1-node1 failed to garbage collect required amount of images. Wanted to free 1268858880 bytes, but freed 0 bytes
52m Warning FreeDiskSpaceFailed node/cluster1-node1 failed to garbage collect required amount of images. Wanted to free 1270890496 bytes, but freed 0 bytes
52m Warning ImageGCFailed node/cluster1-node1 failed to garbage collect required amount of images. Wanted to free 1270890496 bytes, but freed 0 bytes
47m Warning FreeDiskSpaceFailed node/cluster1-node1 failed to garbage collect required amount of images. Wanted to free 1272733696 bytes, but freed 0 bytes
47m Warning ImageGCFailed node/cluster1-node1 failed to garbage collect required amount of images. Wanted to free 1272733696 bytes, but freed 0 bytes
32m Warning FreeDiskSpaceFailed node/cluster1-node1 failed to garbage collect required amount of images. Wanted to free 930242560 bytes, but freed 0 bytes
27m Warning FreeDiskSpaceFailed node/cluster1-node1 failed to garbage collect required amount of images. Wanted to free 1151131648 bytes, but freed 0 bytes
27m Warning ImageGCFailed node/cluster1-node1 failed to garbage collect required amount of images. Wanted to free 1151131648 bytes, but freed 0 bytes
22m Warning FreeDiskSpaceFailed node/cluster1-node1 failed to garbage collect required amount of images. Wanted to free 1152888832 bytes, but freed 0 bytes
22m Warning ImageGCFailed node/cluster1-node1 failed to garbage collect required amount of images. Wanted to free 1152888832 bytes, but freed 0 bytes
17m Warning FreeDiskSpaceFailed node/cluster1-node1 failed to garbage collect required amount of images. Wanted to free 1154555904 bytes, but freed 0 bytes
17m Warning ImageGCFailed node/cluster1-node1 failed to garbage collect required amount of images. Wanted to free 1154555904 bytes, but freed 0 bytes
12m Warning FreeDiskSpaceFailed node/cluster1-node1 failed to garbage collect required amount of images. Wanted to free 1146241024 bytes, but freed 0 bytes
12m Warning ImageGCFailed node/cluster1-node1 failed to garbage collect required amount of images. Wanted to free 1146241024 bytes, but freed 0 bytes
10m Warning EvictionThresholdMet node/cluster1-node1 Attempting to reclaim memory
10m Normal NodeHasInsufficientMemory node/cluster1-node1 Node cluster1-node1 status is now: NodeHasInsufficientMemory
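The Warning events in the output above can be isolated with a field selector (type and involvedObject.name are standard event selector fields; the node name is taken from this example):
$ kubectl get ev --field-selector type=Warning,involvedObject.name=cluster1-node1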
3 Other examples
$ kubectl get events
LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
0s 0s 1 dapi-test-pod Pod Warning InvalidEnvironmentVariableNames kubelet, 127.0.0.1 Keys [1badkey, 2alsobad] from the EnvFrom secret default/mysecret were skipped since they are considered invalid environment variable names.
$ kubectl get events
LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
3m 3m 1 guestbook-75786d799f-r6mxl Pod Normal Scheduled default-scheduler Successfully assigned guestbook-75786d799f-r6mxl to 10.77.155.84
3m 3m 1 guestbook-75786d799f-r6mxl Pod Normal SuccessfulMountVolume kubelet, 10.77.155.84 MountVolume.SetUp succeeded for volume "default-token-5rlxc"
3m 3m 1 guestbook-75786d799f-r6mxl Pod spec.containers{guestbook} Normal Pulled kubelet, 10.77.155.84 Container image "ibmcom/guestbook:v1" already present on machine
3m 3m 1 guestbook-75786d799f-r6mxl Pod spec.containers{guestbook} Normal Created kubelet, 10.77.155.84 Created container
3m 3m 1 guestbook-75786d799f-r6mxl Pod spec.containers{guestbook} Normal Started kubelet, 10.77.155.84 Started container
3m 3m 1 guestbook-75786d799f-xvpvv Pod spec.containers{guestbook} Normal Killing kubelet, 10.77.155.84 Killing container with id docker://guestbook:Need to kill Pod
3m 3m 1 guestbook-75786d799f ReplicaSet Normal SuccessfulDelete replicaset-controller Deleted pod: guestbook-75786d799f-xvpvv
3m 3m 1 guestbook-75786d799f ReplicaSet Normal SuccessfulCreate replicaset-controller Created pod: guestbook-75786d799f-r6mxl
3m 3m 1 guestbook Deployment Normal ScalingReplicaSet deployment-controller Scaled down replica set guestbook-75786d799f to 0
3m 3m 1 guestbook Deployment Normal ScalingReplicaSet deployment-controller Scaled up replica set guestbook-75786d799f to 1
$ kubectl get events --all-namespaces
NAMESPACE LAST SEEN TYPE REASON OBJECT MESSAGE
default 36m Normal NodeHasSufficientMemory node/master Node master status is now: NodeHasSufficientMemory
default 36m Normal NodeHasNoDiskPressure node/master Node master status is now: NodeHasNoDiskPressure
default 36m Normal NodeHasSufficientPID node/master Node master status is now: NodeHasSufficientPID
default 35m Normal RegisteredNode node/master Node master event: Registered Node master in Controller
default 35m Normal Starting node/master Starting kube-proxy.
default 35m Normal RegisteredNode node/node01 Node node01 event: Registered Node node01 in Controller
default 35m Normal Starting node/node01 Starting kubelet.
default 35m Normal NodeHasSufficientMemory node/node01 Node node01 status is now: NodeHasSufficientMemory
default 35m Normal NodeHasNoDiskPressure node/node01 Node node01 status is now: NodeHasNoDiskPressure
default 35m Normal NodeHasSufficientPID node/node01 Node node01 status is now: NodeHasSufficientPID
default 35m Normal NodeAllocatableEnforced node/node01 Updated Node Allocatable limit across pods
default 35m Normal Starting node/node01 Starting kube-proxy.
default 35m Normal NodeReady node/node01 Node node01 status is now: NodeReady
kube-system 35m Normal LeaderElection endpoints/cloud-controller-manager katacoda-cloud-provider-758cf7cf75-ktdmf-external-cloud-controller became leader
kube-system 33m Normal LeaderElection endpoints/cloud-controller-manager katacoda-cloud-provider-758cf7cf75-ktdmf-external-cloud-controller became leader
kube-system 32m Normal LeaderElection endpoints/cloud-controller-manager katacoda-cloud-provider-758cf7cf75-ktdmf-external-cloud-controller became leader
kube-system 30m Normal LeaderElection endpoints/cloud-controller-manager katacoda-cloud-provider-758cf7cf75-ktdmf-external-cloud-controller became leader
kube-system 29m Normal LeaderElection endpoints/cloud-controller-manager katacoda-cloud-provider-758cf7cf75-ktdmf-external-cloud-controller became leader
kube-system 27m Normal LeaderElection endpoints/cloud-controller-manager katacoda-cloud-provider-758cf7cf75-ktdmf-external-cloud-controller became leader
kube-system 26m Normal LeaderElection endpoints/cloud-controller-manager katacoda-cloud-provider-758cf7cf75-ktdmf-external-cloud-controller became leader
kube-system 21m Normal LeaderElection endpoints/cloud-controller-manager katacoda-cloud-provider-758cf7cf75-ktdmf-external-cloud-controller became leader
kube-system 15m Normal LeaderElection endpoints/cloud-controller-manager katacoda-cloud-provider-758cf7cf75-ktdmf-external-cloud-controller became leader
kube-system 13m Normal LeaderElection endpoints/cloud-controller-manager katacoda-cloud-provider-758cf7cf75-ktdmf-external-cloud-controller became leader
kube-system 7m8s Normal LeaderElection endpoints/cloud-controller-manager katacoda-cloud-provider-758cf7cf75-ktdmf-external-cloud-controller became leader
kube-system 5m40s Normal LeaderElection endpoints/cloud-controller-manager katacoda-cloud-provider-758cf7cf75-ktdmf-external-cloud-controller became leader
kube-system 35m Warning FailedScheduling pod/coredns-fb8b8dccf-wcmr6 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
kube-system 35m Warning FailedScheduling pod/coredns-fb8b8dccf-wcmr6 0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.
kube-system 35m Normal Scheduled pod/coredns-fb8b8dccf-wcmr6 Successfully assigned kube-system/coredns-fb8b8dccf-wcmr6 to node01
kube-system 35m Normal Pulled pod/coredns-fb8b8dccf-wcmr6 Container image "k8s.gcr.io/coredns:1.3.1" already present on machine
kube-system 35m Normal Created pod/coredns-fb8b8dccf-wcmr6 Created container coredns
kube-system 35m Normal Started pod/coredns-fb8b8dccf-wcmr6 Started container coredns
kube-system 35m Warning FailedScheduling pod/coredns-fb8b8dccf-xs6lx 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
kube-system 35m Warning FailedScheduling pod/coredns-fb8b8dccf-xs6lx 0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.
kube-system 35m Normal Scheduled pod/coredns-fb8b8dccf-xs6lx Successfully assigned kube-system/coredns-fb8b8dccf-xs6lx to node01
kube-system 35m Normal Pulled pod/coredns-fb8b8dccf-xs6lx Container image "k8s.gcr.io/coredns:1.3.1" already present on machine
kube-system 35m Normal Created pod/coredns-fb8b8dccf-xs6lx Created container coredns
kube-system 35m Normal Started pod/coredns-fb8b8dccf-xs6lx Started container coredns
kube-system 35m Normal SuccessfulCreate replicaset/coredns-fb8b8dccf Created pod: coredns-fb8b8dccf-xs6lx
kube-system 35m Normal SuccessfulCreate replicaset/coredns-fb8b8dccf Created pod: coredns-fb8b8dccf-wcmr6
kube-system 35m Normal ScalingReplicaSet deployment/coredns Scaled up replica set coredns-fb8b8dccf to 2
kube-system 21m Normal Scheduled pod/dash-kubernetes-dashboard-dff4ccb96-h5pbw Successfully assigned kube-system/dash-kubernetes-dashboard-dff4ccb96-h5pbw to node01
kube-system 21m Normal Pulling pod/dash-kubernetes-dashboard-dff4ccb96-h5pbw Pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
kube-system 21m Normal Pulled pod/dash-kubernetes-dashboard-dff4ccb96-h5pbw Successfully pulled image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
kube-system 21m Normal Created pod/dash-kubernetes-dashboard-dff4ccb96-h5pbw Created container kubernetes-dashboard
kube-system 21m Normal Started pod/dash-kubernetes-dashboard-dff4ccb96-h5pbw Started container kubernetes-dashboard
kube-system 21m Normal SuccessfulCreate replicaset/dash-kubernetes-dashboard-dff4ccb96 Created pod: dash-kubernetes-dashboard-dff4ccb96-h5pbw
kube-system 21m Normal ScalingReplicaSet deployment/dash-kubernetes-dashboard Scaled up replica set dash-kubernetes-dashboard-dff4ccb96 to 1
kube-system 36m Normal Pulled pod/etcd-master Container image "k8s.gcr.io/etcd:3.3.10" already present on machine
kube-system 36m Normal Created pod/etcd-master Created container etcd
kube-system 36m Normal Started pod/etcd-master Started container etcd
kube-system 18m Warning Unhealthy pod/etcd-master Liveness probe failed: Error: context deadline exceeded
kube-system 35m Warning FailedScheduling pod/katacoda-cloud-provider-758cf7cf75-ktdmf 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
kube-system 35m Warning FailedScheduling pod/katacoda-cloud-provider-758cf7cf75-ktdmf 0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.
kube-system 35m Normal Scheduled pod/katacoda-cloud-provider-758cf7cf75-ktdmf Successfully assigned kube-system/katacoda-cloud-provider-758cf7cf75-ktdmf to node01
kube-system 35m Normal Pulling pod/katacoda-cloud-provider-758cf7cf75-ktdmf Pulling image "katacoda/katacoda-cloud-provider:0.0.1"
kube-system 35m Normal Pulled pod/katacoda-cloud-provider-758cf7cf75-ktdmf Successfully pulled image "katacoda/katacoda-cloud-provider:0.0.1"
kube-system 33m Normal Created pod/katacoda-cloud-provider-758cf7cf75-ktdmf Created container katacoda-cloud-provider
kube-system 33m Normal Started pod/katacoda-cloud-provider-758cf7cf75-ktdmf Started container katacoda-cloud-provider
kube-system 5m20s Warning Unhealthy pod/katacoda-cloud-provider-758cf7cf75-ktdmf Liveness probe failed: Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
kube-system 32m Normal Killing pod/katacoda-cloud-provider-758cf7cf75-ktdmf Container katacoda-cloud-provider failed liveness probe, will be restarted
kube-system 33m Normal Pulled pod/katacoda-cloud-provider-758cf7cf75-ktdmf Container image "katacoda/katacoda-cloud-provider:0.0.1" already present on machine
kube-system 23s Warning BackOff pod/katacoda-cloud-provider-758cf7cf75-ktdmf Back-off restarting failed container
kube-system 35m Normal SuccessfulCreate replicaset/katacoda-cloud-provider-758cf7cf75 Created pod: katacoda-cloud-provider-758cf7cf75-ktdmf
kube-system 35m Normal ScalingReplicaSet deployment/katacoda-cloud-provider Scaled up replica set katacoda-cloud-provider-758cf7cf75 to 1
kube-system 36m Normal Pulled pod/kube-apiserver-master Container image "k8s.gcr.io/kube-apiserver:v1.14.0" already present on machine
kube-system 36m Normal Created pod/kube-apiserver-master Created container kube-apiserver
kube-system 36m Normal Started pod/kube-apiserver-master Started container kube-apiserver
kube-system 35m Warning Unhealthy pod/kube-apiserver-master Liveness probe failed: HTTP probe failed with statuscode: 403
kube-system 18m Warning Unhealthy pod/kube-apiserver-master Liveness probe failed: HTTP probe failed with statuscode: 500
kube-system 36m Normal Pulled pod/kube-controller-manager-master Container image "k8s.gcr.io/kube-controller-manager:v1.14.0" already present on machine
kube-system 36m Normal Created pod/kube-controller-manager-master Created container kube-controller-manager
kube-system 36m Normal Started pod/kube-controller-manager-master Started container kube-controller-manager
kube-system 35m Normal LeaderElection endpoints/kube-controller-manager master_fba4bedb-5207-11ea-bbaa-0242ac110043 became leader
kube-system 35m Normal Scheduled pod/kube-keepalived-vip-xs784 Successfully assigned kube-system/kube-keepalived-vip-xs784 to node01
kube-system 35m Normal Pulling pod/kube-keepalived-vip-xs784 Pulling image "gcr.io/google_containers/kube-keepalived-vip:0.9"
kube-system 35m Normal Pulled pod/kube-keepalived-vip-xs784 Successfully pulled image "gcr.io/google_containers/kube-keepalived-vip:0.9"
kube-system 35m Normal Created pod/kube-keepalived-vip-xs784 Created container kube-keepalived-vip
kube-system 35m Normal Started pod/kube-keepalived-vip-xs784 Started container kube-keepalived-vip
kube-system 35m Normal SuccessfulCreate daemonset/kube-keepalived-vip Created pod: kube-keepalived-vip-xs784
kube-system 35m Normal Scheduled pod/kube-proxy-xwmrv Successfully assigned kube-system/kube-proxy-xwmrv to master
kube-system 35m Normal Pulled pod/kube-proxy-xwmrv Container image "k8s.gcr.io/kube-proxy:v1.14.0" already present on machine
kube-system 35m Normal Created pod/kube-proxy-xwmrv Created container kube-proxy
kube-system 35m Normal Started pod/kube-proxy-xwmrv Started container kube-proxy
kube-system 35m Normal Scheduled pod/kube-proxy-zm2lb Successfully assigned kube-system/kube-proxy-zm2lb to node01
kube-system 35m Normal Pulling pod/kube-proxy-zm2lb Pulling image "k8s.gcr.io/kube-proxy:v1.14.0"
kube-system 35m Normal Pulled pod/kube-proxy-zm2lb Successfully pulled image "k8s.gcr.io/kube-proxy:v1.14.0"
kube-system 35m Normal Created pod/kube-proxy-zm2lb Created container kube-proxy
kube-system 35m Normal Started pod/kube-proxy-zm2lb Started container kube-proxy
kube-system 35m Normal SuccessfulCreate daemonset/kube-proxy Created pod: kube-proxy-xwmrv
kube-system 35m Normal SuccessfulCreate daemonset/kube-proxy Created pod: kube-proxy-zm2lb
kube-system 36m Normal Pulled pod/kube-scheduler-master Container image "k8s.gcr.io/kube-scheduler:v1.14.0" already present on machine
kube-system 36m Normal Created pod/kube-scheduler-master Created container kube-scheduler
kube-system 36m Normal Started pod/kube-scheduler-master Started container kube-scheduler
kube-system 35m Normal LeaderElection endpoints/kube-scheduler master_f3ff19a8-5207-11ea-88c0-0242ac110043 became leader
kube-system 35m Normal Scheduled pod/weave-net-2l2g8 Successfully assigned kube-system/weave-net-2l2g8 to master
kube-system 35m Normal Pulled pod/weave-net-2l2g8 Container image "weaveworks/weave-kube:2.5.1" already present on machine
kube-system 35m Normal Created pod/weave-net-2l2g8 Created container weave
kube-system 35m Normal Started pod/weave-net-2l2g8 Started container weave
kube-system 35m Normal Pulled pod/weave-net-2l2g8 Container image "weaveworks/weave-npc:2.5.1" already present on machine
kube-system 35m Normal Created pod/weave-net-2l2g8 Created container weave-npc
kube-system 35m Normal Started pod/weave-net-2l2g8 Started container weave-npc
kube-system 35m Warning Unhealthy pod/weave-net-2l2g8 Readiness probe failed: Get http://127.0.0.1:6784/status: dial tcp 127.0.0.1:6784: connect: connection refused
kube-system 35m Normal Scheduled pod/weave-net-xc54f Successfully assigned kube-system/weave-net-xc54f to node01
kube-system 35m Normal Pulled pod/weave-net-xc54f Container image "weaveworks/weave-kube:2.5.1" already present on machine
kube-system 35m Normal Created pod/weave-net-xc54f Created container weave
kube-system 35m Normal Started pod/weave-net-xc54f Started container weave
kube-system 35m Normal Pulled pod/weave-net-xc54f Container image "weaveworks/weave-npc:2.5.1" already present on machine
kube-system 35m Normal Created pod/weave-net-xc54f Created container weave-npc
kube-system 35m Normal Started pod/weave-net-xc54f Started container weave-npc
kube-system 35m Warning Unhealthy pod/weave-net-xc54f Readiness probe failed: Get http://127.0.0.1:6784/status: dial tcp 127.0.0.1:6784: connect: connection refused
kube-system 35m Normal SuccessfulCreate daemonset/weave-net Created pod: weave-net-2l2g8
kube-system 35m Normal SuccessfulCreate daemonset/weave-net Created pod: weave-net-xc54f
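With --all-namespaces the events are not ordered by time. Sorting by creation timestamp (a standard --sort-by path) makes the sequence easier to follow; a minimal sketch:
$ kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp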
$ kubectl get event
LAST SEEN TYPE REASON OBJECT MESSAGE
13s Warning Unhealthy pod/app-prod-6bc4658c66-8hmlp Liveness probe failed: Get http://10.20.1.8:80/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
13s Warning Unhealthy pod/app-prod-6bc4658c66-8npgc Liveness probe failed: Get http://10.20.3.5:80/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
6m5s Warning NodeSysctlChange node/node01 {"unmanaged": {"fs.aio-nr": "64"}}
90s Warning NodeSysctlChange node/node02 {"unmanaged": {"fs.aio-nr": "5450"}}
$ kubectl describe pod/app-prod-6bc4658c66-8hmlp | grep Events: -A99
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 117s (x3706 over 16d) kubelet, node01 Liveness probe failed: Get http://10.20.1.8:80/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
$ kubectl get event --field-selector=involvedObject.name=app-prod-6bc4658c66-8hmlp -owide
LAST SEEN TYPE REASON OBJECT SUBOBJECT SOURCE MESSAGE FIRST SEEN COUNT NAME
22s Warning Unhealthy pod/app-prod-6bc4658c66-8hmlp spec.containers{app} kubelet, node01 Liveness probe failed: Get http://10.20.1.8:80/: net/http: request canceled (Client.Timeout exceeded while awaiting headers) 16d 3712 app-prod-6bc4658c66-8hmlp.16125180cbb5a1e9
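The -o wide output includes the full event name (NAME column), which can be passed back to kubectl get to inspect the complete object, for example as YAML (the event name below is taken from the output above):
$ kubectl get event app-prod-6bc4658c66-8hmlp.16125180cbb5a1e9 -o yaml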
$ kubectl describe node/node01 | grep Events: -A99
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning NodeSysctlChange 19m (x66 over 16d) sysctl-monitor, node01 {"unmanaged": {"fs.aio-nr": "64"}}
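The same node events can also be queried directly with a field selector instead of kubectl describe (involvedObject.kind and involvedObject.name are standard event selector fields; node01 is from this example):
$ kubectl get event --field-selector involvedObject.kind=Node,involvedObject.name=node01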
4 See also
- kubectl
- kubectl describe
- kubectl describe pod
- kubectl describe node
- kubectl get events -ojsonpath
- k8s events