While working through the official documentation, the environment occasionally ran into problems, so this chapter on debugging is included to make it easier to locate simple issues. It covers the Diagnostic Tools section of the official documentation (http://dwz.date/b5EZ).
Debugging Envoy and istiod
Get an overview of your mesh
Retrieve diffs between Envoy and istiod
Deep dive into Envoy configuration
Inspect bootstrap configuration
Verify connectivity to istiod
Understand your mesh through istioctl output
Diagnose configuration with istioctl analyze
Component introspection
Component logs
You can first check each component via its logs or introspection. If that is not enough to locate the problem, the approaches below can help. istioctl is a tool for debugging and diagnosing an Istio service mesh. The Istio project provides auto-completion for istioctl under Bash and ZSH; it is recommended to install the istioctl that matches your Istio version. To enable Bash completion, add the following to your ~/.bash_profile:
[[ -r "/usr/local/etc/profile.d/bash_completion.sh" ]] && . "/usr/local/etc/profile.d/bash_completion.sh"
When using bash, copy the istioctl.bash file from the tools directory into your $HOME directory, then source it. You can use istioctl proxy-status (or istioctl ps) to view the state of the mesh. If a proxy is missing from the output, that proxy is not currently connected to a Pilot instance and therefore cannot receive any configuration. A status of stale indicates a network failure, or that Pilot needs to be scaled up. You can use istioctl proxy-config (or istioctl pc) to retrieve proxy configuration. For example, retrieve the cluster configuration of the Envoy instance in a specific pod with:
$ istioctl proxy-config cluster <pod-name> [flags]
Retrieve the bootstrap configuration of the Envoy instance in a specific pod with:
$ istioctl proxy-config bootstrap <pod-name> [flags]
Likewise for listener, route, and endpoint configuration:
$ istioctl proxy-config listener <pod-name> [flags]
$ istioctl proxy-config route <pod-name> [flags]
$ istioctl proxy-config endpoints <pod-name> [flags]
Install Bookinfo,
or deploy a similar application in your Kubernetes cluster.
The proxy-status command gives an overview of the mesh, showing whether any sidecar is failing to receive configuration or is out of sync. If a proxy is missing from the output list, it is not connected to an istiod instance and therefore cannot receive any configuration. The status values are:
SYNCED: Envoy has acknowledged the configuration sent by istiod.
NOT SENT: istiod has not yet sent any configuration to Envoy, usually because istiod currently has nothing to send.
STALE: istiod sent an update to Envoy but received no acknowledgement, which usually indicates a network problem between Envoy and istiod, or a bug in Istio itself.
$ istioctl ps
NAME CDS LDS EDS RDS PILOT VERSION
details-v1-78d78fbddf-psnmk.default SYNCED SYNCED SYNCED SYNCED istiod-788cf6c878-4pq5g 1.6.0
istio-ingressgateway-569669bb67-dsd5h.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-788cf6c878-4pq5g 1.6.0
productpage-v1-85b9bf9cd7-d8hm8.default SYNCED SYNCED SYNCED SYNCED istiod-788cf6c878-4pq5g 1.6.0
prometheus-79878ff5fd-tjdxx.istio-system SYNCED SYNCED SYNCED SYNCED istiod-788cf6c878-4pq5g 1.6.0
ratings-v1-6c9dbf6b45-xlf2q.default SYNCED SYNCED SYNCED SYNCED istiod-788cf6c878-4pq5g 1.6.0
reviews-v1-564b97f875-q5l9r.default SYNCED SYNCED SYNCED SYNCED istiod-788cf6c878-4pq5g 1.6.0
reviews-v2-568c7c9d8f-vcd94.default SYNCED SYNCED SYNCED SYNCED istiod-788cf6c878-4pq5g 1.6.0
reviews-v3-67b4988599-psllq.default SYNCED SYNCED SYNCED SYNCED istiod-788cf6c878-4pq5g 1.6.0
sleep-78484c89dd-fmxbc.default SYNCED SYNCED SYNCED SYNCED istiod-788cf6c878-4pq5g 1.6.0
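As a quick illustration of reading this output, the sketch below flags any proxy whose xDS state is STALE. The sample lines in the here-document are hypothetical; against a real cluster you would pipe the actual command (istioctl ps) into the same awk filter instead.

```shell
#!/bin/sh
# Illustrative sketch: flag proxies that are out of sync (STALE).
# The sample input below is hypothetical; in practice run:
#   istioctl ps | awk 'NR > 1 && /STALE/ { print $1 }'
awk 'NR > 1 && /STALE/ { print $1 }' <<'EOF'
NAME                                 CDS     LDS     EDS     RDS     PILOT                    VERSION
details-v1-78d78fbddf-psnmk.default  SYNCED  SYNCED  SYNCED  SYNCED  istiod-788cf6c878-4pq5g  1.6.0
reviews-v1-564b97f875-q5l9r.default  SYNCED  SYNCED  STALE   SYNCED  istiod-788cf6c878-4pq5g  1.6.0
EOF
```

With this hypothetical input the filter prints only `reviews-v1-564b97f875-q5l9r.default`, the proxy worth investigating first.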
$ istioctl proxy-status details-v1-6dcc6fbb9d-wsjz4.default
--- Istiod Clusters
+++ Envoy Clusters
@@ -374,36 +374,14 @@
"edsClusterConfig": {
"edsConfig": {
"ads": {
}
},
"serviceName": "outbound|443||public-cr0bdc785ce3f14722918080a97e1f26be-alb1.kube-system.svc.cluster.local"
- },
- "connectTimeout": "1.000s",
- "circuitBreakers": {
- "thresholds": [
- {
-
- }
- ]
- }
- }
- },
- {
- "cluster": {
- "name": "outbound|53||kube-dns.kube-system.svc.cluster.local",
- "type": "EDS",
- "edsClusterConfig": {
- "edsConfig": {
- "ads": {
-
- }
- },
- "serviceName": "outbound|53||kube-dns.kube-system.svc.cluster.local"
},
"connectTimeout": "1.000s",
"circuitBreakers": {
"thresholds": [
{
}
Listeners Match
Routes Match
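Conceptually, `istioctl proxy-status <pod>` compares istiod's last-pushed configuration with what Envoy actually holds and prints either "Match" or a diff per config type. A minimal sketch of that comparison, using two hypothetical one-line "configs", looks like:

```shell
#!/bin/sh
# Minimal sketch of what proxy-status <pod> does per config type:
# compare istiod's view with Envoy's and report Match or a diff.
# Both fragments below are hypothetical.
istiod_clusters='"connectTimeout": "10s"'
envoy_clusters='"connectTimeout": "1.000s"'
if [ "$istiod_clusters" = "$envoy_clusters" ]; then
    echo "Clusters Match"
else
    echo "Clusters DIFF"
fi
```

Here the two views differ, so the sketch prints "Clusters DIFF", mirroring the unified diff shown above.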
$ istioctl proxy-config cluster -n istio-system istio-ingressgateway-7d6874b48f-qxhn5
SERVICE FQDN PORT SUBSET DIRECTION TYPE
BlackHoleCluster - - - STATIC
agent - - - STATIC
details.default.svc.cluster.local 9080 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 80 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 443 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 15021 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 15443 - outbound EDS
istiod.istio-system.svc.cluster.local 443 - outbound EDS
istiod.istio-system.svc.cluster.local 853 - outbound EDS
istiod.istio-system.svc.cluster.local 15010 - outbound EDS
istiod.istio-system.svc.cluster.local 15012 - outbound EDS
istiod.istio-system.svc.cluster.local 15014 - outbound EDS
kube-dns.kube-system.svc.cluster.local 53 - outbound EDS
kube-dns.kube-system.svc.cluster.local 9153 - outbound EDS
kubernetes.default.svc.cluster.local 443 - outbound EDS
...
productpage.default.svc.cluster.local 9080 - outbound EDS
prometheus.istio-system.svc.cluster.local 9090 - outbound EDS
prometheus_stats - - - STATIC
ratings.default.svc.cluster.local 9080 - outbound EDS
reviews.default.svc.cluster.local 9080 - outbound EDS
sds-grpc - - - STATIC
xds-grpc - - - STRICT_DNS
zipkin - - - STRICT_DNS
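The cluster listing can be long. Besides istioctl's own filters such as --fqdn (used later in this chapter), plain text filtering works too. The sketch below pulls the reviews cluster out of a trimmed, hypothetical copy of the listing above:

```shell
#!/bin/sh
# Illustrative: grep a specific service out of proxy-config cluster output.
# Sample lines are a trimmed, hypothetical copy; in practice:
#   istioctl proxy-config cluster <pod> | grep reviews
grep reviews <<'EOF'
details.default.svc.cluster.local    9080  -  outbound  EDS
reviews.default.svc.cluster.local    9080  -  outbound  EDS
zipkin                               -     -  -         STRICT_DNS
EOF
```

Only the reviews line survives the filter, which is usually enough to confirm that the cluster exists and see its TYPE.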
There is a listener on 0.0.0.0:15006 that receives inbound traffic to the pod, and a listener on 0.0.0.0:15001 that receives all outbound traffic from the pod and then hands each request over to a virtual listener.
Each Kubernetes service IP has a corresponding virtual listener; the non-HTTP listeners handle outbound TCP/HTTPS traffic.
The virtual listeners on the pod IP expose the ports that receive inbound traffic.
The HTTP virtual listeners on 0.0.0.0 handle outbound HTTP traffic.
Note that the TYPE column never shows HTTPS: HTTPS traffic is treated as TCP. Below are the listeners of productpage, with some information trimmed. Addresses starting with 10.84 are the CLUSTER-IPs of Kubernetes services; addresses starting with 172.20 are Kubernetes node IPs, exposing services via nodePort.
$ istioctl proxy-config listeners productpage-v1-85b9bf9cd7-d8hm8.default
ADDRESS PORT TYPE
0.0.0.0 443 TCP <--+
10.84.71.37 443 TCP |
10.84.223.189 443 TCP |
10.84.100.226 15443 TCP |
10.84.121.154 443 TCP |
10.84.142.44 443 TCP | # receives outbound non-HTTP traffic from 0.0.0.0_15001 for the matching IP:PORT
10.84.155.219 443 TCP |
172.20.127.212 9100 TCP |
10.84.205.103 443 TCP |
10.84.167.116 443 TCP |
172.20.127.211 9100 TCP <--+
10.84.113.197 9979 HTTP+TCP<--+
0.0.0.0 9091 HTTP+TCP |
10.84.30.227 9092 HTTP+TCP |
10.84.108.37 8080 HTTP+TCP |
10.84.158.64 8443 HTTP+TCP |
10.84.202.185 8080 HTTP+TCP |
10.84.21.252 8443 HTTP+TCP |
10.84.215.56 8443 HTTP+TCP |
0.0.0.0 60000 HTTP+TCP | # receives outbound HTTP+TCP traffic from 0.0.0.0_15001 for the matching port
10.84.126.74 8778 HTTP+TCP |
10.84.126.74 8080 HTTP+TCP |
10.84.123.207 8080 HTTP+TCP |
10.84.30.227 9091 HTTP+TCP |
10.84.229.5 8080 HTTP+TCP<--+
0.0.0.0 9080 HTTP+TCP # receives all inbound traffic to port 9080 from 0.0.0.0_15006
0.0.0.0 15001 TCP # receives all outbound pod traffic from iptables and hands it to a virtual listener
0.0.0.0 15006 HTTP+TCP # Envoy inbound
0.0.0.0 15090 HTTP # Envoy Prometheus telemetry
0.0.0.0 15021 HTTP # health checks
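To pick out just Envoy's own infrastructure ports (15001, 15006, 15090, 15021) from a listing like the one above, a simple awk filter on the PORT column works. The sample input is a trimmed, hypothetical copy of the output:

```shell
#!/bin/sh
# Illustrative: extract Envoy's special listener ports from
# proxy-config listeners output. Sample is hypothetical; in practice:
#   istioctl proxy-config listeners <pod> | awk '$2 ~ /^150(01|06|21|90)$/'
awk '$2 ~ /^150(01|06|21|90)$/ { print $2 }' <<'EOF'
0.0.0.0     9080   HTTP+TCP
0.0.0.0     15001  TCP
0.0.0.0     15006  HTTP+TCP
0.0.0.0     15090  HTTP
0.0.0.0     15021  HTTP
EOF
```

The application port 9080 is dropped and only the four Envoy-managed ports remain.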
$ ss -ntpl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 0.0.0.0:15090 0.0.0.0:*
LISTEN 0 128 127.0.0.1:15000 0.0.0.0:*
LISTEN 0 128 0.0.0.0:9080 0.0.0.0:*
LISTEN 0 128 0.0.0.0:15001 0.0.0.0:*
LISTEN 0 128 0.0.0.0:15006 0.0.0.0:*
LISTEN 0 128 0.0.0.0:15021 0.0.0.0:*
LISTEN 0 128 *:15020 *:*
$ istioctl pc listener productpage-v1-85b9bf9cd7-d8hm8.default --port 15001 -o json
[
{
"name": "virtualOutbound",
"address": {
"socketAddress": {
"address": "0.0.0.0",
"portValue": 15001
}
},
"filterChains": [
{
"filters": [
{
"name": "istio.stats",
"typedConfig": {
"@type": "type.googleapis.com/udpa.type.v1.TypedStruct",
"typeUrl": "type.googleapis.com/envoy.extensions.filters.network.wasm.v3.Wasm",
"value": {
"config": {
"configuration": "{\n \"debug\": \"false\",\n \"stat_prefix\": \"istio\"\n}\n",
"root_id": "stats_outbound",
"vm_config": {
"code": {
"local": {
"inline_string": "envoy.wasm.stats"
}
},
"runtime": "envoy.wasm.runtime.null",
"vm_id": "tcp_stats_outbound"
}
}
}
}
},
{
"name": "envoy.tcp_proxy",
"typedConfig": {
"@type": "type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy",
"statPrefix": "PassthroughCluster",
"cluster": "PassthroughCluster",
"accessLog": [
{
"name": "envoy.file_access_log",
"typedConfig": {
"@type": "type.googleapis.com/envoy.config.accesslog.v2.FileAccessLog",
"path": "/dev/stdout",
"format": "[%START_TIME%] \"%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%\" %RESPONSE_CODE% %RESPONSE_FLAGS% \"%DYNAMIC_METADATA(istio.mixer:status)%\" \"%UPSTREAM_TRANSPORT_FAILURE_REASON%\" %BYTES_RECEIVED% %BYTES_SENT% %DURATION% %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% \"%REQ(X-FORWARDED-FOR)%\" \"%REQ(USER-AGENT)%\" \"%REQ(X-REQUEST-ID)%\" \"%REQ(:AUTHORITY)%\" \"%UPSTREAM_HOST%\" %UPSTREAM_CLUSTER% %UPSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_REMOTE_ADDRESS% %REQUESTED_SERVER_NAME% %ROUTE_NAME%\n"
}
}
]
}
}
],
"name": "virtualOutbound-catchall-tcp"
}
],
"useOriginalDst": true,
"trafficDirection": "OUTBOUND"
}
]
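When reading JSON output like the above, extracting a few key fields is often all you need. The sketch below pulls the listener name and portValue out of a trimmed, hypothetical copy of the JSON using grep (a JSON-aware tool such as jq would be cleaner if available):

```shell
#!/bin/sh
# Illustrative: pull the listener name and port out of proxy-config JSON.
# The fragment is a trimmed, hypothetical copy of the output above.
grep -o -e '"name": "virtualOutbound"' -e '"portValue": [0-9]*' <<'EOF'
[
  {
    "name": "virtualOutbound",
    "address": {
      "socketAddress": {
        "address": "0.0.0.0",
        "portValue": 15001
      }
    }
  }
]
EOF
```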
$ istioctl pc listener productpage-v1-85b9bf9cd7-d8hm8.default -o json --address 0.0.0.0 --port 9080
[
{
"name": "0.0.0.0_9080",
"address": {
"socketAddress": {
"address": "0.0.0.0",
"portValue": 9080
}
},
"filterChains": [
{
"filterChainMatch": {
"applicationProtocols": [
"http/1.0",
"http/1.1",
"h2c"
]
},
"filters": [
{
"name": "envoy.http_connection_manager",
"typedConfig": {
"@type": "type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager",
"statPrefix": "outbound_0.0.0.0_9080",
"rds": {
"configSource": {
"ads": {}
},
"routeConfigName": "9080"
},
...
]
$ istioctl proxy-config routes productpage-v1-85b9bf9cd7-d8hm8.default --name 9080 -o json
[
{
"name": "9080",
"virtualHosts": [
...
{
"name": "reviews.default.svc.cluster.local:9080",
"domains": [
"reviews.default.svc.cluster.local",
"reviews.default.svc.cluster.local:9080",
"reviews",
"reviews:9080",
"reviews.default.svc.cluster",
"reviews.default.svc.cluster:9080",
"reviews.default.svc",
"reviews.default.svc:9080",
"reviews.default",
"reviews.default:9080",
"10.84.110.152",
"10.84.110.152:9080"
],
"routes": [
{
"name": "default",
"match": {
"prefix": "/"
},
"route": {
"cluster": "outbound|9080||reviews.default.svc.cluster.local",
"timeout": "0s",
"retryPolicy": {
"retryOn": "connect-failure,refused-stream,unavailable,cancelled,retriable-status-codes",
"numRetries": 2,
"retryHostPredicate": [
{
"name": "envoy.retry_host_predicates.previous_hosts"
}
],
"hostSelectionRetryMaxAttempts": "5",
"retriableStatusCodes": [
503
]
},
"maxGrpcTimeout": "0s"
},
"decorator": {
"operation": "reviews.default.svc.cluster.local:9080/*"
}
}
],
"includeRequestAttemptCount": true
}
],
"validateClusters": false
}
]
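The key fact in the route above is which cluster the `/` prefix sends traffic to. The sketch below extracts the target cluster from a trimmed, hypothetical fragment of that JSON:

```shell
#!/bin/sh
# Illustrative: find which cluster a route sends traffic to.
# Hypothetical trimmed fragment of the routes JSON above; in practice:
#   istioctl proxy-config routes <pod> --name 9080 -o json | grep '"cluster"'
grep -o '"cluster": "[^"]*"' <<'EOF'
{
  "route": {
    "cluster": "outbound|9080||reviews.default.svc.cluster.local",
    "timeout": "0s"
  }
}
EOF
```

The extracted cluster name is exactly what you pass to `istioctl pc cluster --fqdn` (or quote into `pc endpoint --cluster`) in the next step of the walkthrough.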
$ istioctl pc cluster productpage-v1-85b9bf9cd7-d8hm8.default --fqdn reviews.default.svc.cluster.local -o json
[
{
...
"name": "outbound|9080||reviews.default.svc.cluster.local",
"type": "EDS",
"edsClusterConfig": {
"edsConfig": {
"ads": {}
},
"serviceName": "outbound|9080||reviews.default.svc.cluster.local"
},
"connectTimeout": "10s",
"circuitBreakers": {
"thresholds": [
{
"maxConnections": 4294967295,
"maxPendingRequests": 4294967295,
"maxRequests": 4294967295,
"maxRetries": 4294967295
}
]
},
"filters": [
{
"name": "istio.metadata_exchange",
"typedConfig": {
"@type": "type.googleapis.com/udpa.type.v1.TypedStruct",
"typeUrl": "type.googleapis.com/envoy.tcp.metadataexchange.config.MetadataExchange",
"value": {
"protocol": "istio-peer-exchange"
}
}
}
]
}
]
$ istioctl pc endpoint productpage-v1-85b9bf9cd7-d8hm8.default --cluster "outbound|9080||reviews.default.svc.cluster.local"
ENDPOINT STATUS OUTLIER CHECK CLUSTER
10.80.3.55:9080 HEALTHY OK outbound|9080||reviews.default.svc.cluster.local
10.80.3.56:9080 HEALTHY OK outbound|9080||reviews.default.svc.cluster.local
10.80.3.58:9080 HEALTHY OK outbound|9080||reviews.default.svc.cluster.local
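A quick sanity check on output like the above is counting the HEALTHY endpoints. The sketch uses a hypothetical trimmed copy of the listing:

```shell
#!/bin/sh
# Illustrative: count HEALTHY endpoints from proxy-config endpoint output.
# Sample is a hypothetical trimmed copy; in practice:
#   istioctl pc endpoint <pod> --cluster "<cluster>" | grep -c HEALTHY
grep -c HEALTHY <<'EOF'
10.80.3.55:9080  HEALTHY  OK  outbound|9080||reviews.default.svc.cluster.local
10.80.3.56:9080  HEALTHY  OK  outbound|9080||reviews.default.svc.cluster.local
10.80.3.58:9080  HEALTHY  OK  outbound|9080||reviews.default.svc.cluster.local
EOF
```

Here the count is 3, matching the three reviews pods; a count lower than the expected replica number points at unhealthy or missing endpoints.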
$ istioctl proxy-config bootstrap -n istio-system istio-ingressgateway-569669bb67-dsd5h.istio-system
{
"bootstrap": {
"node": {
"id": "router~10.83.0.14~istio-ingressgateway-569669bb67-dsd5h.istio-system~istio-system.svc.cluster.local",
"cluster": "istio-ingressgateway",
"metadata": {
"CLUSTER_ID": "Kubernetes",
"CONFIG_NAMESPACE": "istio-system",
"EXCHANGE_KEYS": "NAME,NAMESPACE,INSTANCE_IPS,LABELS,OWNER,PLATFORM_METADATA,WORKLOAD_NAME,MESH_ID,SERVICE_ACCOUNT,CLUSTER_ID",
"INSTANCE_IPS": "10.83.0.14,fe80::6871:95ff:fe5b:9e3e",
"ISTIO_PROXY_SHA": "istio-proxy:12cfbda324320f99e0e39d7c393109fcd824591f",
"ISTIO_VERSION": "1.6.0",
"LABELS": {
"app": "istio-ingressgateway",
"chart": "gateways",
"heritage": "Tiller",
"istio": "ingressgateway",
"pod-template-hash": "569669bb67",
"release": "istio",
"service.istio.io/canonical-name": "istio-ingressgateway",
"service.istio.io/canonical-revision": "latest"
},
"MESH_ID": "cluster.local",
"NAME": "istio-ingressgateway-569669bb67-dsd5h",
"NAMESPACE": "istio-system",
"OWNER": "kubernetes://apis/apps/v1/namespaces/istio-system/deployments/istio-ingressgateway",
...
"ROUTER_MODE": "sni-dnat",
"SDS": "true",
"SERVICE_ACCOUNT": "istio-ingressgateway-service-account",
"TRUSTJWT": "true",
"WORKLOAD_NAME": "istio-ingressgateway"
},
...
}
$ kubectl create namespace foo
$ kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml) -n foo
$ kubectl exec $(kubectl get pod -l app=sleep -n foo -o jsonpath={.items..metadata.name}) -c sleep -n foo -- curl -sS istiod.istio-system:15014/debug/endpointz
The following describes an experimental feature, intended for evaluation only. Istio 1.3 introduced the istioctl experimental describe command. This CLI command provides the information needed to understand the configuration affecting a pod. This section shows how to use this experimental subcommand to check whether a pod is in the mesh and to inspect its configuration. The basic usage is:
$ istioctl experimental describe pod <pod-name>[.<namespace>]  # or
$ istioctl experimental describe pod <pod-name> -n <namespace>
$ istioctl experimental describe pod mutatepodimages-7575797d95-qn7p5
Pod: mutatepodimages-7575797d95-qn7p5
Pod does not expose ports
WARNING: mutatepodimages-7575797d95-qn7p5 is not part of mesh; no Istio sidecar
Error: failed to execute command on sidecar: error 'execing into mutatepodimages-7575797d95-qn7p5/default istio-proxy container: container istio-proxy is not valid for pod mutatepodimages-7575797d95-qn7p5
$ istioctl x describe pod ratings-v1-6c9dbf6b45-xlf2q
Pod: ratings-v1-6c9dbf6b45-xlf2q
Pod Ports: 9080 (details), 15090 (istio-proxy)
Service: details
Port: http 9080/HTTP targets pod port 9080
Pilot reports that pod enforces HTTP/mTLS and clients speak HTTP
The pod's service container ports; above, port 9080 of ratings.
The istio-proxy port in the pod, 15090.
The protocol used by the pod's service: HTTP on port 9080.
The mutual TLS settings applied to the pod.
istioctl describe can also be used to check the destination rules applied to a pod. For example, deploy destination rules with:
$ kubectl apply -f samples/bookinfo/networking/destination-rule-all-mtls.yaml
$ export RATINGS_POD=$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')
$ istioctl x describe pod $RATINGS_POD
Pod: ratings-v1-6c9dbf6b45-xlf2q
Pod Ports: 9080 (ratings), 15090 (istio-proxy)
Service: ratings
Port: http 9080/HTTP targets pod port 9080
DestinationRule: ratings for "ratings"
Matching subsets: v1
(Non-matching subsets v2,v2-mysql,v2-mysql-vm)
Traffic Policy TLS Mode: ISTIO_MUTUAL
Pilot reports that pod enforces HTTP/mTLS and clients speak mTLS
The ratings destination rule applied to the ratings service.
The destination rule subset matching the pod; above, v1.
The other subsets defined by the destination rule.
The pod accepts HTTP or mutual TLS, but clients speak mutual TLS.
$ kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
$ export REVIEWS_V1_POD=$(kubectl get pod -l app=reviews,version=v1 -o jsonpath='{.items[0].metadata.name}')
$ istioctl x describe pod $REVIEWS_V1_POD
Pod: reviews-v1-564b97f875-q5l9r
Pod Ports: 9080 (reviews), 15090 (istio-proxy)
Service: reviews
Port: http 9080/HTTP targets pod port 9080
DestinationRule: reviews for "reviews"
Matching subsets: v1
(Non-matching subsets v2,v3)
Traffic Policy TLS Mode: ISTIO_MUTUAL
VirtualService: reviews
1 HTTP route(s)
$ export REVIEWS_V2_POD=$(kubectl get pod -l app=reviews,version=v2 -o jsonpath='{.items[0].metadata.name}')
$ istioctl x describe pod $REVIEWS_V2_POD
Pod: reviews-v2-568c7c9d8f-vcd94
...
VirtualService: reviews
WARNING: No destinations match pod subsets (checked 1 HTTP routes)
Route to non-matching subset v1 for (everything)
$ kubectl delete -f samples/bookinfo/networking/destination-rule-all-mtls.yaml
$ istioctl x describe pod $REVIEWS_V1_POD
Pod: reviews-v1-564b97f875-q5l9r
Pod Ports: 9080 (reviews), 15090 (istio-proxy)
Service: reviews
Port: http 9080/HTTP targets pod port 9080
VirtualService: reviews
WARNING: No destinations match pod subsets (checked 1 HTTP routes)
Warning: Route to subset v1 but NO DESTINATION RULE defining subsets!
$ kubectl apply -f samples/bookinfo/networking/destination-rule-all-mtls.yaml
istioctl describe can also show traffic weights. For example, the following command routes 90% of the traffic to the v1 subset of the reviews service and 10% to the v2 subset:
$ kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-90-10.yaml
$ istioctl x describe pod $REVIEWS_V1_POD
...
VirtualService: reviews
Weight 90%
$ kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-jason-v2-v3.yaml
$ istioctl x describe pod $REVIEWS_V1_POD
...
VirtualService: reviews
WARNING: No destinations match pod subsets (checked 2 HTTP routes)
Route to non-matching subset v2 for (when headers are end-user=jason)
Route to non-matching subset v3 for (everything)
$ kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
name: ratings-strict
spec:
selector:
matchLabels:
app: ratings
mtls:
mode: STRICT
EOF
$ istioctl x describe pod $RATINGS_POD
Pilot reports that pod enforces mTLS and clients speak mTLS
$ kubectl apply -f samples/bookinfo/networking/destination-rule-all.yaml
If you now open Bookinfo in a browser, it shows "Ratings service is currently unavailable". Use the following command to find out why:
$ istioctl x describe pod $RATINGS_POD
...
WARNING Pilot predicts TLS Conflict on ratings-v1-f745cf57b-qrxl2 port 9080 (pod enforces mTLS, clients speak HTTP)
Check DestinationRule ratings/default and AuthenticationPolicy ratings-strict/default
$ kubectl apply -f samples/bookinfo/networking/destination-rule-all-mtls.yaml
$ kubectl delete -f samples/bookinfo/platform/kube/bookinfo.yaml
$ kubectl delete -f samples/bookinfo/networking/bookinfo-gateway.yaml
$ kubectl delete -f samples/bookinfo/networking/destination-rule-all-mtls.yaml
$ kubectl delete -f samples/bookinfo/networking/virtual-service-all-v1.yaml
$ istioctl analyze --all-namespaces
For example, if Istio injection is not enabled for some namespaces, a warning like the following is printed:
Warn [IST0102] (Namespace openshift) The namespace is not enabled for Istio injection. Run 'kubectl label namespace openshift istio-injection=enabled' to enable it, or 'kubectl label namespace openshift istio-injection=disabled' to explicitly mark it as not needing injection
The example above analyzes a live cluster, but the tool also supports analyzing a set of local Kubernetes YAML configuration files, or a combination of local files and a live cluster. When analyzing a set of local files, the set should be fully self-contained; this is typically used to analyze the complete set of configuration files about to be deployed to a cluster. Analyze specific local Kubernetes YAML files with:
$ istioctl analyze --use-kube=false a.yaml b.yaml
$ istioctl analyze --use-kube=false *.yaml
Simulate deploying the files in the current directory to the current cluster with:
$ istioctl analyze *.yaml
Run istioctl analyze --help for the complete set of options; see the Q&A for more on analyze. Istio components are built with a flexible introspection framework that makes it simple to inspect and manipulate the internal state of a running component. Components open a port that exposes the component's state through an interactive web view in a browser, or via REST for external tools. Mixer, Pilot, and Galley all implement the ControlZ functionality (in 1.6, see istiod). When these components start, a log message indicates the IP address and port to connect to in order to interact with ControlZ.
2018-07-26T23:28:48.889370Z info ControlZ available at 100.76.122.230:9876
You can port-forward to it for remote access, similar to kubectl port-forward:
$ istioctl dashboard controlz <podname> -n <namespaces>
Component log messages are categorized by level:
none
error
warning
info
debug
Here none disables output for the scope entirely, while debug maximizes it. The default level is info, which provides an appropriate amount of logging for normal operation of Istio. The output level is controlled with --log_output_level. Log messages are normally sent to the component's standard output stream. The --log_target option can redirect output to any number of destinations, given as a comma-separated list of file system paths; stdout and stderr denote the standard output and standard error streams respectively.