Katacoda - Handling Failures With Circuit Breakers

1 Overview

Katacoda - Handling Failures With Circuit Breakers
Katacoda - Increasing Microservices Reliability with Istio
  Course:
  1. Katacoda - Simulating Failures Between Microservices
  2. Katacoda - Handling Timeouts Between Microservices
  3. Katacoda - Handling Failures With Circuit Breakers

2 Deploy HTTPBin Client

master $ kubectl apply -f <(istioctl kube-inject -f samples/httpbin/httpbin.yaml)
service/httpbin created
deployment.extensions/httpbin created
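
Before moving on, it can help to confirm that the httpbin pod came up with its sidecar injected (2/2 containers ready) and that the service was created. A minimal check, assuming the default namespace and the sample's app=httpbin label:

kubectl get pods -l app=httpbin
kubectl get svc httpbin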

3 Configure Circuit Breaker

master $ kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml)
service/sleep created
deployment.extensions/sleep created
master $ kubectl exec -it $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) bash
Defaulting container name to sleep.
Use 'kubectl describe pod/sleep-8689d847d7-wsz24 -n default' to see all of the containers in this pod.
root@sleep-8689d847d7-wsz24:/#
root@sleep-8689d847d7-wsz24:/# curl http://httpbin:8000/get;
{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Content-Length": "0",
    "Host": "httpbin:8000",
    "User-Agent": "curl/7.35.0",
    "X-B3-Sampled": "1",
    "X-B3-Spanid": "c30cd27a2b2a546c",
    "X-B3-Traceid": "c30cd27a2b2a546c",
    "X-Request-Id": "132d3300-dd41-9a70-b3df-ce22dae17699"
  },
  "origin": "127.0.0.1",
  "url": "http://httpbin:8000/get"
}
root@sleep-8689d847d7-wsz24:/# exit
exit
master $
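The interactive shell is convenient for poking around, but the same request can be issued as a one-shot command without keeping a shell open. A sketch reusing the pod lookup from above:

kubectl exec "$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})" -c sleep -- curl -s http://httpbin:8000/get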

4 View Request

httpbinRule.yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutiveErrors: 1
      interval: 1s
      baseEjectionTime: 3m
      maxEjectionPercent: 100
    tls:
      mode: ISTIO_MUTUAL
master $ kubectl apply -f httpbinRule.yaml
destinationrule.networking.istio.io/httpbin created
master $ kubectl exec -it $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) bash
Defaulting container name to sleep.
Use 'kubectl describe pod/sleep-8689d847d7-wsz24 -n default' to see all of the containers in this pod.
root@sleep-8689d847d7-wsz24:/# curl http://httpbin:8000/get;
{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Content-Length": "0",
    "Host": "httpbin:8000",
    "User-Agent": "curl/7.35.0",
    "X-B3-Sampled": "1",
    "X-B3-Spanid": "4cb00669c7d12ebc",
    "X-B3-Traceid": "4cb00669c7d12ebc",
    "X-Request-Id": "de7168d0-376a-98f7-9143-64df01f5ab1f"
  },
  "origin": "127.0.0.1",
  "url": "http://httpbin:8000/get"
}
root@sleep-8689d847d7-wsz24:/# exit
exit
master $
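The DestinationRule above caps the httpbin service at one TCP connection, one pending HTTP/1.1 request, and one request per connection, and ejects an upstream host for 3 minutes after a single consecutive error, so even modest concurrency should trip the breaker. To double-check that the rule was stored as written, it can be read back from the API server:

kubectl get destinationrule httpbin -o yaml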

5 Tripping Circuit Breaker

master $ kubectl apply -f <(istioctl kube-inject -f samples/httpbin/sample-client/fortio-deploy.yaml)
deployment.apps/fortio-deploy created
master $ FORTIO_POD=$(kubectl get pod | grep fortio | awk '{ print $1 }')
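The grep-based lookup works while fortio is the only matching pod; a label selector is a tighter alternative. A sketch, assuming the sample deployment keeps its default app=fortio label:

FORTIO_POD=$(kubectl get pod -l app=fortio -o jsonpath='{.items[0].metadata.name}')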
master $ kubectl exec -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -c 2 -qps 0 -n 20 -loglevel Warning http://httpbin:8000/get
08:19:11 I logger.go:97> Log level is now 3 Warning (was 2 Info)
Fortio 1.0.1 running at 0 queries per second, 4->4 procs, for 20 calls: http://httpbin:8000/get
Starting at max qps with 2 thread(s) [gomax 4] for exactly 20 calls (10 per thread + 0)
08:19:11 W http_client.go:604> Parsed non ok code 503 (HTTP/1.1 503)
08:19:11 W http_client.go:604> Parsed non ok code 503 (HTTP/1.1 503)
08:19:11 W http_client.go:604> Parsed non ok code 503 (HTTP/1.1 503)
08:19:11 W http_client.go:604> Parsed non ok code 503 (HTTP/1.1 503)
08:19:11 W http_client.go:604> Parsed non ok code 503 (HTTP/1.1 503)
08:19:11 W http_client.go:604> Parsed non ok code 503 (HTTP/1.1 503)
08:19:11 W http_client.go:604> Parsed non ok code 503 (HTTP/1.1 503)
08:19:11 W http_client.go:604> Parsed non ok code 503 (HTTP/1.1 503)
08:19:11 W http_client.go:604> Parsed non ok code 503 (HTTP/1.1 503)
08:19:11 W http_client.go:604> Parsed non ok code 503 (HTTP/1.1 503)
Ended after 167.112178ms : 20 calls. qps=119.68
Aggregated Function Time : count 20 avg 0.0090880385 +/- 0.01315 min 0.000432493 max 0.061560415 sum 0.18176077
# range, mid point, percentile, count
>= 0.000432493 <= 0.001 , 0.000716246 , 30.00, 6
> 0.002 <= 0.003 , 0.0025 , 40.00, 2
> 0.003 <= 0.004 , 0.0035 , 50.00, 2
> 0.008 <= 0.009 , 0.0085 , 55.00, 1
> 0.009 <= 0.01 , 0.0095 , 65.00, 2
> 0.01 <= 0.011 , 0.0105 , 70.00, 1
> 0.011 <= 0.012 , 0.0115 , 75.00, 1
> 0.012 <= 0.014 , 0.013 , 85.00, 2
> 0.014 <= 0.016 , 0.015 , 95.00, 2
> 0.06 <= 0.0615604 , 0.0607802 , 100.00, 1
# target 50% 0.004
# target 75% 0.012
# target 90% 0.015
# target 99% 0.0612483
# target 99.9% 0.0615292
Sockets used: 11 (for perfect keepalive, would be 2)
Code 200 : 10 (50.0 %)
Code 503 : 10 (50.0 %)
Response Header Sizes : count 20 avg 115.2 +/- 115.2 min 0 max 231 sum 2304
Response Body/Total Sizes : count 20 avg 406.2 +/- 189.2 min 217 max 596 sum 8124
All done 20 calls (plus 0 warmup) 9.088 ms avg, 119.7 qps
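With two concurrent connections against a pool capped at one connection and one pending request, about half of the calls are rejected with 503 by the sidecar. Re-running with a single connection should stay within the limits and return only 200s, which is a quick sanity check that the failures come from the circuit breaker rather than from httpbin itself. A sketch with the same fortio options:

kubectl exec -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -c 1 -qps 0 -n 20 -loglevel Warning http://httpbin:8000/get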
master $ kubectl exec -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -c 3 -qps 0 -n 20 -loglevel Warning http://httpbin:8000/get
08:19:33 I logger.go:97> Log level is now 3 Warning (was 2 Info)
Fortio 1.0.1 running at 0 queries per second, 4->4 procs, for 20 calls: http://httpbin:8000/get
Starting at max qps with 3 thread(s) [gomax 4] for exactly 20 calls (6 per thread + 2)
08:19:33 W http_client.go:604> Parsed non ok code 503 (HTTP/1.1 503)
08:19:33 W http_client.go:604> Parsed non ok code 503 (HTTP/1.1 503)
08:19:33 W http_client.go:604> Parsed non ok code 503 (HTTP/1.1 503)
08:19:33 W http_client.go:604> Parsed non ok code 503 (HTTP/1.1 503)
08:19:33 W http_client.go:604> Parsed non ok code 503 (HTTP/1.1 503)
08:19:33 W http_client.go:604> Parsed non ok code 503 (HTTP/1.1 503)
08:19:33 W http_client.go:604> Parsed non ok code 503 (HTTP/1.1 503)
08:19:33 W http_client.go:604> Parsed non ok code 503 (HTTP/1.1 503)
08:19:33 W http_client.go:604> Parsed non ok code 503 (HTTP/1.1 503)
08:19:33 W http_client.go:604> Parsed non ok code 503 (HTTP/1.1 503)
Ended after 86.568517ms : 20 calls. qps=231.03
Aggregated Function Time : count 20 avg 0.0057601875 +/- 0.005609 min 0.000251307 max 0.015513942 sum 0.115203749
# range, mid point, percentile, count
>= 0.000251307 <= 0.001 , 0.000625654 , 45.00, 9
> 0.001 <= 0.002 , 0.0015 , 50.00, 1
> 0.008 <= 0.009 , 0.0085 , 60.00, 2
> 0.009 <= 0.01 , 0.0095 , 70.00, 2
> 0.01 <= 0.011 , 0.0105 , 80.00, 2
> 0.011 <= 0.012 , 0.0115 , 85.00, 1
> 0.012 <= 0.014 , 0.013 , 90.00, 1
> 0.014 <= 0.0155139 , 0.014757 , 100.00, 2
# target 50% 0.002
# target 75% 0.0105
# target 90% 0.014
# target 99% 0.0153625
# target 99.9% 0.0154988
Sockets used: 11 (for perfect keepalive, would be 3)
Code 200 : 10 (50.0 %)
Code 503 : 10 (50.0 %)
Response Header Sizes : count 20 avg 115.2 +/- 115.2 min 0 max 231 sum 2304
Response Body/Total Sizes : count 20 avg 406.2 +/- 189.2 min 217 max 596 sum 8124
All done 20 calls (plus 0 warmup) 5.760 ms avg, 231.0 qps
master $ kubectl exec -it $FORTIO_POD  -c istio-proxy  -- sh -c 'curl localhost:15000/stats' | grep httpbin | grep pending
cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_active: 0
cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_failure_eject: 0
cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_overflow: 20
cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_total: 20
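
upstream_rq_pending_overflow counts the requests the sidecar rejected for circuit breaking, and the value of 20 matches the twenty 503s seen across the two fortio runs. Once done, the exercise resources can be removed; a minimal cleanup, assuming nothing else in the cluster uses these samples:

kubectl delete destinationrule httpbin
kubectl delete -f samples/httpbin/sample-client/fortio-deploy.yaml
kubectl delete -f samples/httpbin/httpbin.yaml
kubectl delete -f samples/sleep/sleep.yaml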

6 References
