"Killer Shell CKA - NetworkPolicy Misconfigured"의 두 판 사이의 차이

태그: 되돌려진 기여
 
(같은 사용자의 중간 판 9개는 보이지 않습니다)
==Overview==
Killer Shell CKA - NetworkPolicy Misconfigured
* https://killercoda.com/killer-shell-cka/scenario/networkpolicy-misconfigured
* Requirements
** All Pods with the label <code>level=100x</code> in Namespace <code>default</code> must be able to communicate (Egress) with the Pods labeled <code>level=100x</code> in the Namespaces <code>level-1000</code>, <code>level-1001</code> and <code>level-1002</code>
** DNS (53/TCP and 53/UDP) must also be allowed

==Preliminary check==
Check the labels and the test resources; the namespace labels are also worth confirming (see the check after the listing below).
<syntaxhighlight lang='console'>
controlplane:~$ kubectl get pod -A --show-labels
NAMESPACE            NAME                                      READY   STATUS    RESTARTS      AGE     LABELS
default              tester-0                                  1/1     Running   0             12m     app=tester,apps.kubernetes.io/pod-index=0,controller-revision-hash=tester-84c8c4f7b8,level=100x,statefulset.kubernetes.io/pod-name=tester-0
kube-system          calico-kube-controllers-fdf5f5495-vxpw5   1/1     Running   1 (50m ago)   4d19h   k8s-app=calico-kube-controllers,pod-template-hash=fdf5f5495
kube-system          canal-9nbnr                               2/2     Running   2 (50m ago)   4d19h   controller-revision-hash=7d8c9cfdb6,k8s-app=canal,pod-template-generation=1
kube-system          coredns-6ff97d97f9-c6lgm                  1/1     Running   1 (50m ago)   4d19h   k8s-app=kube-dns,pod-template-hash=6ff97d97f9
kube-system          coredns-6ff97d97f9-q8rh6                  1/1     Running   1 (50m ago)   4d19h   k8s-app=kube-dns,pod-template-hash=6ff97d97f9
kube-system          etcd-controlplane                         1/1     Running   1 (50m ago)   4d19h   component=etcd,tier=control-plane
kube-system          kube-apiserver-controlplane               1/1     Running   1 (50m ago)   4d19h   component=kube-apiserver,tier=control-plane
kube-system          kube-controller-manager-controlplane      1/1     Running   1 (50m ago)   4d19h   component=kube-controller-manager,tier=control-plane
kube-system          kube-proxy-txl4k                          1/1     Running   1 (50m ago)   4d19h   controller-revision-hash=7f964d48ff,k8s-app=kube-proxy,pod-template-generation=1
kube-system          kube-scheduler-controlplane               1/1     Running   1 (50m ago)   4d19h   component=kube-scheduler,tier=control-plane
level-1000           tester-0                                  1/1     Running   0             12m     app=tester,apps.kubernetes.io/pod-index=0,controller-revision-hash=tester-84c8c4f7b8,level=100x,statefulset.kubernetes.io/pod-name=tester-0
level-1001           tester-0                                  1/1     Running   0             12m     app=tester,apps.kubernetes.io/pod-index=0,controller-revision-hash=tester-84c8c4f7b8,level=100x,statefulset.kubernetes.io/pod-name=tester-0
level-1002           tester-0                                  1/1     Running   0             12m     app=tester,apps.kubernetes.io/pod-index=0,controller-revision-hash=tester-84c8c4f7b8,level=100x,statefulset.kubernetes.io/pod-name=tester-0
local-path-storage   local-path-provisioner-5c94487ccb-7djz6   1/1     Running   1 (50m ago)   4d18h   app=local-path-provisioner,pod-template-hash=5c94487ccb
other                tester-0                                  1/1     Running   0             12m     app=tester,apps.kubernetes.io/pod-index=0,controller-revision-hash=tester-84c8c4f7b8,level=100x,statefulset.kubernetes.io/pod-name=tester-0
controlplane:~$ kubectl get svc,pod -A --show-labels | grep tester
level-1000    service/tester       ClusterIP   10.99.166.152    <none>        80/TCP                   2m14s   app=tester
level-1001    service/tester       ClusterIP   10.109.125.206   <none>        80/TCP                   2m14s   app=tester
level-1002    service/tester       ClusterIP   10.106.122.56    <none>        80/TCP                   2m13s   app=tester
other         service/tester       ClusterIP   10.100.158.115   <none>        80/TCP                   2m13s   app=tester
default              pod/tester-0                                  1/1     Running   0             2m14s   app=tester,apps.kubernetes.io/pod-index=0,controller-revision-hash=tester-84c8c4f7b8,level=100x,statefulset.kubernetes.io/pod-name=tester-0
level-1000           pod/tester-0                                  1/1     Running   0             2m14s   app=tester,apps.kubernetes.io/pod-index=0,controller-revision-hash=tester-84c8c4f7b8,level=100x,statefulset.kubernetes.io/pod-name=tester-0
level-1001           pod/tester-0                                  1/1     Running   0             2m14s   app=tester,apps.kubernetes.io/pod-index=0,controller-revision-hash=tester-84c8c4f7b8,level=100x,statefulset.kubernetes.io/pod-name=tester-0
level-1002           pod/tester-0                                  1/1     Running   0             2m13s   app=tester,apps.kubernetes.io/pod-index=0,controller-revision-hash=tester-84c8c4f7b8,level=100x,statefulset.kubernetes.io/pod-name=tester-0
other                pod/tester-0                                  1/1     Running   0             2m13s   app=tester,apps.kubernetes.io/pod-index=0,controller-revision-hash=tester-84c8c4f7b8,level=100x,statefulset.kubernetes.io/pod-name=tester-0
controlplane:~$ kubectl get pod,svc -A --show-labels | grep level
default              pod/tester-0                                  1/1     Running   0             10m     app=tester,apps.kubernetes.io/pod-index=0,controller-revision-hash=tester-84c8c4f7b8,level=100x,statefulset.kubernetes.io/pod-name=tester-0
level-1000           pod/tester-0                                  1/1     Running   0             10m     app=tester,apps.kubernetes.io/pod-index=0,controller-revision-hash=tester-84c8c4f7b8,level=100x,statefulset.kubernetes.io/pod-name=tester-0
level-1001           pod/tester-0                                  1/1     Running   0             10m     app=tester,apps.kubernetes.io/pod-index=0,controller-revision-hash=tester-84c8c4f7b8,level=100x,statefulset.kubernetes.io/pod-name=tester-0
level-1002           pod/tester-0                                  1/1     Running   0             10m     app=tester,apps.kubernetes.io/pod-index=0,controller-revision-hash=tester-84c8c4f7b8,level=100x,statefulset.kubernetes.io/pod-name=tester-0
other                pod/tester-0                                  1/1     Running   0             10m     app=tester,apps.kubernetes.io/pod-index=0,controller-revision-hash=tester-84c8c4f7b8,level=100x,statefulset.kubernetes.io/pod-name=tester-0
level-1000    service/tester       ClusterIP   10.99.166.152    <none>        80/TCP                   10m     app=tester
level-1001    service/tester       ClusterIP   10.109.125.206   <none>        80/TCP                   10m     app=tester
level-1002    service/tester       ClusterIP   10.106.122.56    <none>        80/TCP                   10m     app=tester
</syntaxhighlight>
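Because the NetworkPolicy below selects its target namespaces with <code>namespaceSelector</code>, it also helps to confirm the labels on the Namespace objects themselves; each namespace should carry <code>kubernetes.io/metadata.name={namespace}</code>. A minimal check (output omitted):
<syntaxhighlight lang='console'>
# List all namespaces with their labels; namespaceSelector matches these labels,
# not the namespace name itself.
controlplane:~$ kubectl get ns --show-labels

# Or check a single namespace, e.g. the one that should be reachable but is not.
controlplane:~$ kubectl get ns level-1001 --show-labels
</syntaxhighlight>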
Check the existing NetworkPolicy:
<syntaxhighlight lang='console'>
controlplane:~$ kubectl -n default get networkpolicy np-100x -o yaml
...
spec:
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: level-1000
      podSelector:
        matchLabels:
          level: 100x
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: level-1000
      podSelector:
        matchLabels:
          level: 100x
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: level-1002
      podSelector:
        matchLabels:
          level: 100x
  - ports:
    - port: 53
      protocol: TCP
    - port: 53
      protocol: UDP
  podSelector:
    matchLabels:
      level: 100x
  policyTypes:
  - Egress
</syntaxhighlight>

==Root cause==
* One of the <code>namespaceSelector</code> entries in the existing NetworkPolicy points at the wrong namespace label (<code>level-1000</code> appears twice and <code>level-1001</code> is missing), so traffic to <code>level-1001</code> is blocked.
* In a NetworkPolicy, <code>namespaceSelector</code> matches labels on the Namespace object; kubeadm-based clusters normally label every namespace with <code>kubernetes.io/metadata.name={namespace}</code>.
<syntaxhighlight lang='console'>
controlplane:~$ kubectl exec tester-0 -- curl -m 1 tester.level-1000.svc.cluster.local
...</html>
controlplane:~$ kubectl exec tester-0 -- curl -m 1 tester.level-1001.svc.cluster.local
...curl: (28) Connection timed out after 1001 milliseconds
controlplane:~$ kubectl exec tester-0 -- curl -m 1 tester.level-1002.svc.cluster.local
...</html>
</syntaxhighlight>
==Solution==
Edit the NetworkPolicy so that the second egress rule targets <code>level-1001</code> instead of repeating <code>level-1000</code>.
<syntaxhighlight lang='console'>
controlplane:~$ kubectl -n default edit networkpolicy np-100x
</syntaxhighlight>

Example fix (YAML):
<syntaxhighlight lang='yaml'>
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-100x
  namespace: default
spec:
  podSelector:
    matchLabels:
      level: 100x
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: level-1000
      podSelector:
        matchLabels:
          level: 100x
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: level-1001   # CHANGE: fixed to the correct namespace
      podSelector:
        matchLabels:
          level: 100x
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: level-1002
      podSelector:
        matchLabels:
          level: 100x
  - ports:
    - port: 53
      protocol: TCP
    - port: 53
      protocol: UDP
</syntaxhighlight>
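Instead of editing interactively, the corrected manifest can be saved to a file and applied, then re-checked. A small sketch (the file name is illustrative):
<syntaxhighlight lang='console'>
# Apply the corrected policy from a file (np-100x-fixed.yaml is a hypothetical name
# holding the YAML above); kubectl apply replaces the misconfigured spec in place.
controlplane:~$ kubectl apply -f np-100x-fixed.yaml

# Confirm that the egress rules now target level-1000, level-1001 and level-1002.
controlplane:~$ kubectl -n default describe networkpolicy np-100x
</syntaxhighlight>
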
==Verification==
All of the following calls should now succeed.
<syntaxhighlight lang='console'>
controlplane:~$ kubectl exec tester-0 -- curl -m 1 tester.level-1000.svc.cluster.local
...</html>
controlplane:~$ kubectl exec tester-0 -- curl -m 1 tester.level-1001.svc.cluster.local
...</html>
controlplane:~$ kubectl exec tester-0 -- curl -m 1 tester.level-1002.svc.cluster.local
...</html>
</syntaxhighlight>
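The three checks can also be run in one loop; a small sketch that prints only the HTTP status code per namespace (expecting 200 for each):
<syntaxhighlight lang='console'>
# Loop over the three target namespaces and print only the HTTP status code.
controlplane:~$ for ns in level-1000 level-1001 level-1002; do
    kubectl exec tester-0 -- curl -s -m 1 -o /dev/null -w "$ns: %{http_code}\n" tester.$ns.svc.cluster.local
  done
</syntaxhighlight>
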
==Notes==
* This policy applies only to Pods with the <code>level=100x</code> label in the <code>default</code> namespace, and it restricts/allows Egress only.
* The DNS rule opens 53/TCP and 53/UDP without restricting the destination; for a stricter policy, it could be narrowed to the CoreDNS Pods in the <code>kube-system</code> namespace using namespace and pod selectors.
* NetworkPolicy objects are only enforced when the cluster's CNI plugin (Calico, Cilium, etc.) supports them.

==See also==