Deploying Prometheus Monitoring on K8s


Introduction

There are several ways to deploy Prometheus monitoring on a k8s cluster:

Write all the deployment YAML by hand. This is tedious and error-prone, with far too many details to get right, so it is not recommended.

Deploy with the open-source prometheus-operator project.

Deploy with the open-source kube-prometheus project.

prometheus-operator contains only the operator itself, which manages and operates Prometheus and Alertmanager clusters. Project page: https://github.com/prometheus-operator/prometheus-operator

kube-prometheus builds on the Prometheus Operator plus a set of manifests to help you quickly stand up a Prometheus monitoring stack on a Kubernetes cluster. Project page: https://github.com/prometheus-operator/kube-prometheus

Here I chose kube-prometheus to deploy the monitoring stack.

Download the kube-prometheus project

# I am using release-0.11
https://github.com/prometheus-operator/kube-prometheus/tree/release-0.11
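If you prefer the command line, the branch archive can be fetched directly. A minimal sketch, assuming GitHub's standard branch-archive URL layout (a git clone -b release-0.11 of the repository works just as well):

# download and unpack the release-0.11 branch of kube-prometheus
wget https://github.com/prometheus-operator/kube-prometheus/archive/refs/heads/release-0.11.tar.gz
tar -xzf release-0.11.tar.gz
cd kube-prometheus-release-0.11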

Install kube-prometheus

# GitHub already documents the installation steps
[root@master kube-prometheus-release-0.11]# kubectl apply --server-side -f manifests/setup
customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com serverside-applied
namespace/monitoring serverside-applied
[root@master kube-prometheus-release-0.11]# until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done
No resources found
kubectl apply -f manifests/
...
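The until loop above simply polls until the ServiceMonitor CRD is registered before the main manifests are applied. As a sketch of an equivalent one-shot wait, using the CRD name from the output above:

# block until the ServiceMonitor CRD reaches the Established condition
kubectl wait --for=condition=Established crd/servicemonitors.monitoring.coreos.com --timeout=120s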

Installation complete; check the resource status:

[root@master kube-prometheus-release-0.11]# kubectl get all -n monitoring
NAME                                       READY   STATUS            RESTARTS   AGE
pod/alertmanager-main-0                    2/2     Running           0          65s
pod/alertmanager-main-1                    2/2     Running           0          65s
pod/alertmanager-main-2                    2/2     Running           0          63s
pod/blackbox-exporter-559db48fd-4c6rf      3/3     Running           0          2m40s
pod/grafana-546559f668-ft5zs               1/1     Running           0          2m15s
pod/kube-state-metrics-576b75c6f7-dx8vs    3/3     Running           0          2m9s
pod/node-exporter-fzwzs                    2/2     Running           0          2m
pod/node-exporter-qstbq                    2/2     Running           0          2m
pod/node-exporter-r9w26                    2/2     Running           0          2m1s
pod/prometheus-adapter-5f68766c85-hvvhn    1/1     Running           0          86s
pod/prometheus-adapter-5f68766c85-vkh7l    1/1     Running           0          86s
pod/prometheus-k8s-0                       2/2     Running           0          49s
pod/prometheus-k8s-1                       0/2     PodInitializing   0          49s
pod/prometheus-operator-68845dfbbf-ldvvz   2/2     Running           0          81s

NAME                            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
service/alertmanager-main       ClusterIP   10.0.0.120   <none>        9093/TCP,8080/TCP            2m46s
service/alertmanager-operated   ClusterIP   None         <none>        9093/TCP,9094/TCP,9094/UDP   66s
service/blackbox-exporter       ClusterIP   10.0.0.164   <none>        9115/TCP,19115/TCP           2m41s
service/grafana                 ClusterIP   10.0.0.80    <none>        3000/TCP                     2m18s
service/kube-state-metrics      ClusterIP   None         <none>        8443/TCP,9443/TCP            2m10s
service/node-exporter           ClusterIP   None         <none>        9100/TCP                     2m2s
service/prometheus-adapter      ClusterIP   10.0.0.213   <none>        443/TCP                      91s
service/prometheus-k8s          ClusterIP   10.0.0.28    <none>        9090/TCP,8080/TCP            100s
service/prometheus-operated     ClusterIP   None         <none>        9090/TCP                     51s
service/prometheus-operator     ClusterIP   None         <none>        8443/TCP                     84s

NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/node-exporter   3         3         3       3            3           kubernetes.io/os=linux   2m4s

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/blackbox-exporter     1/1     1            1           2m49s
deployment.apps/grafana               1/1     1            1           2m25s
deployment.apps/kube-state-metrics    1/1     1            1           2m19s
deployment.apps/prometheus-adapter    2/2     2            2           100s
deployment.apps/prometheus-operator   1/1     1            1           93s

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/blackbox-exporter-559db48fd      1         1         1       2m51s
replicaset.apps/grafana-546559f668               1         1         1       2m27s
replicaset.apps/kube-state-metrics-576b75c6f7    1         1         1       2m20s
replicaset.apps/prometheus-adapter-5f68766c85    2         2         2       102s
replicaset.apps/prometheus-operator-68845dfbbf   1         1         1       95s

NAME                                 READY   AGE
statefulset.apps/alertmanager-main   2/3     75s
statefulset.apps/prometheus-k8s      1/2     59s

As you can see above, a monitoring namespace was created automatically and the Pods have all been created.
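If you would rather block until everything is up instead of eyeballing the output, a sketch using kubectl wait:

# wait until every Pod in the monitoring namespace reports Ready
kubectl wait --for=condition=Ready pod --all -n monitoring --timeout=300s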

How to access Grafana

[root@master kube-prometheus-release-0.11]# kubectl get svc -n monitoring
NAME                    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
alertmanager-main       ClusterIP   10.0.0.120   <none>        9093/TCP,8080/TCP            4m59s
alertmanager-operated   ClusterIP   None         <none>        9093/TCP,9094/TCP,9094/UDP   3m19s
blackbox-exporter       ClusterIP   10.0.0.164   <none>        9115/TCP,19115/TCP           4m54s
grafana                 ClusterIP   10.0.0.80    <none>        3000/TCP                     4m31s
kube-state-metrics      ClusterIP   None         <none>        8443/TCP,9443/TCP            4m23s
node-exporter           ClusterIP   None         <none>        9100/TCP                     4m15s
prometheus-adapter      ClusterIP   10.0.0.213   <none>        443/TCP                      3m44s
prometheus-k8s          ClusterIP   10.0.0.28    <none>        9090/TCP,8080/TCP            3m53s
prometheus-operated     ClusterIP   None         <none>        9090/TCP                     3m4s
prometheus-operator     ClusterIP   None         <none>        8443/TCP                     3m37s

By default these Services are all of type ClusterIP and cannot be reached from outside the cluster. The best approach is to expose them externally through an Ingress, but since my cluster has no ingress controller installed, I will switch them to NodePort instead.
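For reference, if an ingress controller were installed, exposing Grafana could look roughly like the sketch below (the ingressClassName and hostname are hypothetical placeholders); otherwise, continue with the NodePort change that follows:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  namespace: monitoring
spec:
  ingressClassName: nginx          # hypothetical; depends on your controller
  rules:
  - host: grafana.example.com      # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grafana
            port:
              number: 3000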

vim manifests/grafana-service.yaml
spec:
  ports:
  - name: http
    port: 3000
    targetPort: http
  type: NodePort

# Edit manifests/alertmanager-service.yaml and manifests/prometheus-service.yaml the same way:
# set type: NodePort for grafana, alertmanager, and prometheus
kubectl apply -f manifests/grafana-service.yaml
kubectl apply -f manifests/alertmanager-service.yaml
kubectl apply -f manifests/prometheus-service.yaml
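As an alternative to editing the YAML files by hand, the same change can be made with kubectl patch; a sketch (note that re-applying the original manifests would revert it):

# switch the three services to NodePort in place
kubectl patch svc grafana -n monitoring -p '{"spec": {"type": "NodePort"}}'
kubectl patch svc prometheus-k8s -n monitoring -p '{"spec": {"type": "NodePort"}}'
kubectl patch svc alertmanager-main -n monitoring -p '{"spec": {"type": "NodePort"}}'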

Check the Service information again:

NAME                    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                         AGE
alertmanager-main       NodePort    10.0.0.120   <none>        9093:47927/TCP,8080:31539/TCP   10m
alertmanager-operated   ClusterIP   None         <none>        9093/TCP,9094/TCP,9094/UDP      9m12s
blackbox-exporter       ClusterIP   10.0.0.164   <none>        9115/TCP,19115/TCP              10m
grafana                 NodePort    10.0.0.80    <none>        3000:37010/TCP                  10m
kube-state-metrics      ClusterIP   None         <none>        8443/TCP,9443/TCP               10m
node-exporter           ClusterIP   None         <none>        9100/TCP                        10m
prometheus-adapter      ClusterIP   10.0.0.213   <none>        443/TCP                         9m37s
prometheus-k8s          NodePort    10.0.0.28    <none>        9090:40124/TCP,8080:42004/TCP   9m46s
prometheus-operated     ClusterIP   None         <none>        9090/TCP                        8m57s
prometheus-operator     ClusterIP   None         <none>        8443/TCP                        9m30s

Here you can see that Grafana, Prometheus, and Alertmanager have all been changed to NodePort.

Pick a node IP address and try accessing one of the services.
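For a quick smoke test from the shell, using the NodePorts from the output above (192.168.1.10 is a placeholder; substitute the IP of any node in your cluster):

curl -sI http://192.168.1.10:37010   # Grafana: expect a 302 redirect to /login
curl -sI http://192.168.1.10:40124   # Prometheus
curl -sI http://192.168.1.10:47927   # Alertmanager

Grafana ships with the default credentials admin/admin and prompts you to change the password on first login.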

Grafana

[Screenshot: K8s Grafana]

Prometheus

[Screenshot: K8s Prometheus]

Alertmanager

[Screenshot: K8s Alertmanager]

How to uninstall Prometheus

kubectl delete --ignore-not-found=true -f manifests/ -f manifests/setup

With that, the k8s cluster's resources are under monitoring. One remaining issue: when I installed kube-prometheus, many images failed to download. In the next post I will explain how to pull images from k8s.gcr.io.