Halyard + Kubernetes + Redis + MySQL 5.7 + S3
Data persistence
Halyard can also be installed from the binary distribution. It is best to run Halyard on a node that already has the kubectl client configured, because the Kubernetes cluster credentials will be needed later.
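As a quick sanity check before installing Halyard, confirm that kubectl on this node can already talk to the cluster (the kubeconfig paths are whatever your existing setup provides):

# Sketch: confirm the kubeconfig Halyard will mount actually works
kubectl config current-context   # the context Halyard will reuse later
kubectl get nodes                # if this fails, the Kubernetes provider setup later will fail as well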
docker pull registry.cn-beijing.aliyuncs.com/spinnaker-cd/halyard:1.32.0

mkdir /root/.hal

docker run -itd --name halyard \
  -v /root/.hal:/home/spinnaker/.hal \
  -v /root/.kube:/home/spinnaker/.kube \
  registry.cn-beijing.aliyuncs.com/spinnaker-cd/halyard:1.32.0

## Enter the container as root to edit the configuration file
docker exec -it -u root halyard bash

## Set spinnaker.config.input.gcs.enabled = false
vi /opt/halyard/config/halyard.yml
spinnaker:
  artifacts:
    debian: https://dl.bintray.com/spinnaker-releases/debians
    docker: gcr.io/spinnaker-marketplace
  config:
    input:
      gcs:
        enabled: false
      writerEnabled: false
      bucket: halconfig

## The container needs to be restarted (if this command does not restart it, exit the container and run: docker restart halyard)
hal shutdown
## Start it again
docker start halyard
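Once the container is back up, a simple way to confirm the halyard daemon is responding might look like the sketch below (run from the host):

# Sketch: verify the halyard CLI and daemon inside the container respond
docker exec -it halyard hal --version   # prints the Halyard version
docker exec -it halyard hal config      # dumps the current (still mostly empty) halconfig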
All of the images have already been synced automatically to an Alibaba Cloud image registry via GitHub Actions, so they can be pulled directly. For convenience, simply run the script to download every image for the current version.
The BOM files and the image-download script are both included in this archive.
# Upload to the server (the node running the halyard container)
scp 1.22.1-Image-Script.zip root@master.zy.com:/root
unzip 1.22.1-Image-Script.zip
cd 1.22.1

[root@master 1.22.1]# ls -a
.  ..  .boms  GetImages.sh  tagfile.txt
## .boms needs to be placed under the .hal directory
## GetImages.sh  image download script
## tagfile.txt   image tags

sh -x GetImages.sh
chmod 777 -R .hal/
## Wait for the image downloads to finish (the script relies on passwordless SSH to the nodes)
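If passwordless SSH to the worker nodes is not set up yet, a minimal sketch (node hostnames taken from GetImages.sh below) is:

# Sketch: one-time passwordless SSH setup for the nodes used by GetImages.sh
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa      # skip if a key already exists
for node in node01.zy.com node02.zy.com; do
  ssh-copy-id root@${node}
done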
tagfile.txt
## tagfile
[root@master 1.22.1]# cat tagfile.txt
echo:2.14.0-20200817170018
clouddriver:6.11.0-20200818115831
deck:3.3.0-20200818132306
fiat:1.13.0-20200817170018
front50:0.25.1-20200831095512
gate:1.18.1-20200825122721
igor:1.12.0-20200817200018
kayenta:0.17.0-20200817170018
orca:2.16.0-20200817170018
rosco:0.21.1-20200827112228
GetImages.sh
## script
#!/bin/bash
S_REGISTRY="gcr.io/spinnaker-marketplace"
T_REGISTRY="registry.cn-beijing.aliyuncs.com/spinnaker-cd"
NODES="node01.zy.com node02.zy.com"

## Pull and re-tag the images on every node
function GetImages(){
    echo -e "\033[43;34m =====GetImg===== \033[0m"
    IMAGES=$(cat tagfile.txt)
    for image in ${IMAGES}
    do
        for node in ${NODES}
        do
            echo -e "\033[32m ${node} ---> pull ---> ${image} \033[0m"
            ssh ${node} "docker pull ${T_REGISTRY}/${image}"
            echo -e "\033[32m ${node} ---> tag ---> ${image} \033[0m"
            ssh ${node} "docker tag ${T_REGISTRY}/${image} ${S_REGISTRY}/${image}"
        done
    done

    for node in ${NODES}
    do
        echo -e "\033[43;34m =====${node}=== image list ===== \033[0m"
        ssh ${node} "docker images | grep 'spinnaker-marketplace' "
    done
}

GetImages
[root@master 1.22.1]# mv .boms/ ~/.hal/
[root@master 1.22.1]# cd ~/.hal/
[root@master .hal]# cd .boms/
[root@master .boms]# ls
bom  clouddriver  deck  echo  fiat  front50  gate  igor  kayenta  orca  rosco
[root@master .boms]# tree
.
├── bom
│   ├── 1.19.4.yml
│   └── 1.22.1.yml
├── clouddriver
│   ├── 6.11.0-20200818115831
│   │   └── clouddriver.yml
│   ├── 6.7.3-20200401190525
│   │   └── clouddriver.yml
│   └── clouddriver.yml
├── deck
│   ├── 3.0.2-20200324040016
│   │   └── settings.js
│   ├── 3.3.0-20200818132306
│   │   └── settings.js
│   └── settings.js
├── echo
│   ├── 2.11.2-20200401121252
│   │   └── echo.yml
│   ├── 2.14.0-20200817170018
│   │   └── echo.yml
│   └── echo.yml
├── fiat
│   ├── 1.10.1-20200401121252
│   │   └── fiat.yml
│   ├── 1.13.0-20200817170018
│   │   └── fiat.yml
│   └── fiat.yml
├── front50
│   ├── 0.22.1-20200401121252
│   │   └── front50.yml
│   ├── 0.25.1-20200831095512
│   │   └── front50.yml
│   └── front50.yml
├── gate
│   ├── 1.15.1-20200403040016
│   │   └── gate.yml
│   ├── 1.18.1-20200825122721
│   │   └── gate.yml
│   └── gate.yml
├── igor
│   ├── 1.12.0-20200817200018
│   │   └── igor.yml
│   ├── 1.9.2-20200401121252
│   │   └── igor.yml
│   └── igor.yml
├── kayenta
│   ├── 0.14.0-20200304112817
│   │   └── kayenta.yml
│   ├── 0.17.0-20200817170018
│   │   └── kayenta.yml
│   └── kayenta.yml
├── orca
│   ├── 2.13.2-20200401144746
│   │   └── orca.yml
│   ├── 2.16.0-20200817170018
│   │   └── orca.yml
│   └── orca.yml
└── rosco
    ├── 0.18.1-20200401121252
    │   ├── images.yml
    │   ├── packer
    │   │   ├── alicloud.json
    │   │   ├── alicloud-multi.json
    │   │   ├── aws-chroot.json
    │   │   ├── aws-ebs.json
    │   │   ├── aws-multi-chroot.json
    │   │   ├── aws-multi-ebs.json
    │   │   ├── aws-windows-2012-r2.json
    │   │   ├── azure-linux.json
    │   │   ├── azure-windows-2012-r2.json
    │   │   ├── docker.json
    │   │   ├── gce.json
    │   │   ├── huaweicloud.json
    │   │   ├── install_packages.sh
    │   │   ├── oci.json
    │   │   └── scripts
    │   │       ├── aws-windows-2012-configure-ec2service.ps1
    │   │       ├── aws-windows.userdata
    │   │       ├── windows-configure-chocolatey.ps1
    │   │       └── windows-install-packages.ps1
    │   └── rosco.yml
    ├── 0.21.1-20200827112228
    │   ├── images.yml
    │   ├── packer
    │   │   ├── alicloud.json
    │   │   ├── alicloud-multi.json
    │   │   ├── aws-chroot.json
    │   │   ├── aws-ebs.json
    │   │   ├── aws-multi-chroot.json
    │   │   ├── aws-multi-ebs.json
    │   │   ├── aws-windows-2012-r2.json
    │   │   ├── azure-linux.json
    │   │   ├── azure-windows-2012-r2.json
    │   │   ├── docker.json
    │   │   ├── gce.json
    │   │   ├── huaweicloud.json
    │   │   ├── install_packages.sh
    │   │   ├── oci.json
    │   │   └── scripts
    │   │       ├── aws-windows-2012-configure-ec2service.ps1
    │   │       ├── aws-windows.userdata
    │   │       ├── windows-configure-chocolatey.ps1
    │   │       └── windows-install-packages.ps1
    │   ├── README.md
    │   └── rosco.yml
    ├── images.yml
    ├── packer
    │   ├── alicloud.json
    │   ├── alicloud-multi.json
    │   ├── aws-chroot.json
    │   ├── aws-ebs.json
    │   ├── aws-multi-chroot.json
    │   ├── aws-multi-ebs.json
    │   ├── aws-windows-2012-r2.json
    │   ├── azure-linux.json
    │   ├── azure-windows-2012-r2.json
    │   ├── docker.json
    │   ├── gce.json
    │   ├── huaweicloud.json
    │   ├── install_packages.sh
    │   ├── oci.json
    │   └── scripts
    │       ├── aws-windows-2012-configure-ec2service.ps1
    │       ├── aws-windows.userdata
    │       ├── windows-configure-chocolatey.ps1
    │       └── windows-install-packages.ps1
    ├── README.md
    └── rosco.yml

37 directories, 91 files
docker exec -it halyard bash
# Set the Spinnaker version; --version specifies the version
hal config version edit --version local:1.22.1

# Set the timezone
hal config edit --timezone Asia/Shanghai

# Set the storage type to s3 (not actually used later, but it must be configured to work around a bug)
hal config storage edit --type s3 --no-validate

# Access: set the domain names for deck and gate
hal config security ui edit --override-base-url http://spinnaker.idevops.site
hal config security api edit --override-base-url http://spin-gate.idevops.site
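These hal commands only edit the halconfig file. As a quick spot check, you can grep the file on the host (path assumed from the volume mount set up earlier) to confirm the values were written:

# Sketch: spot-check the values written to the halconfig
grep -E 'version|timezone|overrideBaseUrl' /root/.hal/config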
hal config provider docker-registry enable --no-validate
hal config provider docker-registry account add my-harbor-registry \
    --address http://192.168.1.200:8088 \
    --username admin \
    --password Harbor12345

hal config provider kubernetes enable
hal config provider kubernetes account add default \
    --docker-registries my-harbor-registry \
    --context $(kubectl config current-context) \
    --service-account true \
    --omit-namespaces=kube-system,kube-public \
    --provider-version v2 \
    --no-validate

# Deployment type: distributed, deployed into the given namespace
hal config deploy edit \
    --account-name default \
    --type distributed \
    --location spinnaker
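The distributed deployment targets the spinnaker namespace configured above. Halyard normally creates it during deploy, but pre-creating it is a harmless safeguard (a sketch, not a required step):

# Sketch: pre-create the target namespace and confirm the context hal will use
kubectl create namespace spinnaker
kubectl config current-context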
## Enable some key features (more can be added later)
hal config features edit --pipeline-templates true
hal config features edit --artifacts true
hal config features edit --managed-pipeline-templates-v2-ui true
# Configure Jenkins
hal config ci jenkins enable

### The Jenkins server requires a username and password
hal config ci jenkins master add my-jenkins-master-01 \
    --address http://jenkins.idevops.site \
    --username admin \
    --password admin

### Enable CSRF
hal config ci jenkins master edit my-jenkins-master-01 --csrf true
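Before handing the credentials to Igor, it can be worth confirming that Jenkins accepts them. An optional check, with the URL and account taken from the commands above:

# Sketch: expect a 200 if the Jenkins URL and credentials are valid
curl -s -o /dev/null -w '%{http_code}\n' -u admin:admin http://jenkins.idevops.site/api/json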
# GitHub
## Reference: https://spinnaker.io/setup/artifacts/github/
## Create a token at https://github.com/settings/tokens
hal config artifact github enable
hal config artifact github account add my-github-account \
    --token 02eb8aa1c2cd67af305d1f606 \
    --username zey

# GitLab
## https://spinnaker.io/setup/artifacts/gitlab/
## Create a personal access token (admin)
hal config artifact gitlab enable
hal config artifact gitlab account add my-gitlab-account \
    --token qqHX8T4VTpozbnX
## service-settings
mkdir .hal/default/service-settings/
vi .hal/default/service-settings/redis.yml
overrideBaseUrl: redis://192.168.1.200:6379
skipLifeCycleManagement: true

## profiles
## /root/.hal/default/profiles
[root@master profiles]# ls
[root@master profiles]# vi gate-local.yml
redis:
  configuration:
    secure: true
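The service settings above point Spinnaker at an external Redis instead of a Halyard-managed one. A quick reachability check, with the address assumed from the override:

# Sketch: confirm the external Redis answers before Spinnaker depends on it
redis-cli -h 192.168.1.200 -p 6379 ping    # expected reply: PONG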
Create the clouddriver database
CREATE DATABASE `clouddriver` DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, EXECUTE, SHOW VIEW
  ON `clouddriver`.*
  TO 'clouddriver_service'@'%' IDENTIFIED BY 'clouddriver@spinnaker.com';

GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, LOCK TABLES, EXECUTE, SHOW VIEW
  ON `clouddriver`.*
  TO 'clouddriver_migrate'@'%' IDENTIFIED BY 'clouddriver@spinnaker.com';
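One way to apply these statements to the MySQL 5.7 instance is through the mysql client; the file name clouddriver.sql here is only an assumption for illustration. The front50 and orca databases further down are created the same way.

# Sketch: run the DDL/GRANT statements, then confirm the service account can log in
mysql -h 192.168.1.200 -uroot -p < clouddriver.sql
mysql -h 192.168.1.200 -uclouddriver_service -p'clouddriver@spinnaker.com' -e 'SHOW DATABASES;'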
Edit the configuration file
## /root/.hal/default/profiles
bash-5.0$ cat clouddriver-local.yml
sql:
  enabled: true
  # read-only boolean toggles `SELECT` or `DELETE` health checks for all pools.
  # Especially relevant for clouddriver-ro and clouddriver-ro-deck which can
  # target a SQL read replica in their default pools.
  read-only: false
  taskRepository:
    enabled: true
  cache:
    enabled: true
    # These parameters were determined to be optimal via benchmark comparisons
    # in the Netflix production environment with Aurora. Setting these too low
    # or high may negatively impact performance. These values may be sub-optimal
    # in some environments.
    readBatchSize: 500
    writeBatchSize: 300
  scheduler:
    enabled: true
  # Enable clouddriver-caching's clean up agent to periodically purge old
  # clusters and accounts. Set to true when using the Kubernetes provider.
  unknown-agent-cleanup-agent:
    enabled: false
  connectionPools:
    default:
      # additional connection pool parameters are available here,
      # for more detail and to view defaults, see:
      # https://github.com/spinnaker/kork/blob/master/kork-sql/src/main/kotlin/com/netflix/spinnaker/kork/sql/config/ConnectionPoolProperties.kt
      default: true
      jdbcUrl: jdbc:mysql://192.168.1.200:3306/clouddriver
      user: clouddriver_service
      password: clouddriver@spinnaker.com
    # The following tasks connection pool is optional. At Netflix, clouddriver
    # instances pointed to Aurora read replicas have a tasks pool pointed at the
    # master. Instances where the default pool is pointed to the master omit a
    # separate tasks pool.
    tasks:
      user: clouddriver_service
      jdbcUrl: jdbc:mysql://192.168.1.200:3306/clouddriver
      password: clouddriver@spinnaker.com
  migration:
    user: clouddriver_migrate
    jdbcUrl: jdbc:mysql://192.168.1.200:3306/clouddriver
    password: clouddriver@spinnaker.com

redis:
  enabled: false
  cache:
    enabled: false
  scheduler:
    enabled: false
  taskRepository:
    enabled: false
Create the front50 database
CREATE DATABASE `front50` DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, EXECUTE, SHOW VIEW
  ON `front50`.*
  TO 'front50_service'@'%' IDENTIFIED BY "front50@spinnaker.com";

GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, LOCK TABLES, EXECUTE, SHOW VIEW
  ON `front50`.*
  TO 'front50_migrate'@'%' IDENTIFIED BY "front50@spinnaker.com";
Edit the configuration file
## /root/.hal/default/profiles
bash-5.0$ cat front50-local.yml
spinnaker:
  s3:
    enabled: false

sql:
  enabled: true
  connectionPools:
    default:
      # additional connection pool parameters are available here,
      # for more detail and to view defaults, see:
      # https://github.com/spinnaker/kork/blob/master/kork-sql/src/main/kotlin/com/netflix/spinnaker/kork/sql/config/ConnectionPoolProperties.kt
      default: true
      jdbcUrl: jdbc:mysql://192.168.1.200:3306/front50
      user: front50_service
      password: front50@spinnaker.com
  migration:
    user: front50_migrate
    jdbcUrl: jdbc:mysql://192.168.1.200:3306/front50
    password: front50@spinnaker.com
Create the orca database
set tx_isolation = 'REPEATABLE-READ';

CREATE SCHEMA `orca` DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, EXECUTE, SHOW VIEW
  ON `orca`.*
  TO 'orca_service'@'%' IDENTIFIED BY "orca@spinnaker.com";

GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, LOCK TABLES, EXECUTE, SHOW VIEW
  ON `orca`.*
  TO 'orca_migrate'@'%' IDENTIFIED BY "orca@spinnaker.com";
Edit the configuration file
## /root/.hal/default/profiles
bash-5.0$ cat orca-local.yml
sql:
  enabled: true
  connectionPool:
    jdbcUrl: jdbc:mysql://192.168.1.200:3306/orca
    user: orca_service
    password: orca@spinnaker.com
    connectionTimeout: 5000
    maxLifetime: 30000
    # MariaDB-specific:
    maxPoolSize: 50
  migration:
    jdbcUrl: jdbc:mysql://192.168.1.200:3306/orca
    user: orca_migrate
    password: orca@spinnaker.com

# Ensure we're only using SQL for accessing execution state
executionRepository:
  sql:
    enabled: true
  redis:
    enabled: false

# Reporting on active execution metrics will be handled by SQL
monitor:
  activeExecutions:
    redis: false

# Use SQL for Orca's work queue
# Settings from Netflix and may require adjustment for your environment
# Only validated with AWS Aurora MySQL 5.7
# Please PR if you have success with other databases
keiko:
  queue:
    sql:
      enabled: true
    redis:
      enabled: false

queue:
  zombieCheck:
    enabled: true
  pendingExecutionService:
    sql:
      enabled: true
    redis:
      enabled: false
hal deploy apply --no-validate
Create Ingress access
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: spinnaker-service
  namespace: spinnaker   # the spin-* services are deployed into this namespace
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: spinnaker.idevops.site
    http:
      paths:
      - path: /
        backend:
          serviceName: spin-deck
          servicePort: 9000
  - host: spin-gate.idevops.site
    http:
      paths:
      - path: /
        backend:
          serviceName: spin-gate
          servicePort: 8084
  - host: spin-front50.idevops.site
    http:
      paths:
      - path: /
        backend:
          serviceName: spin-front50
          servicePort: 8080
  - host: spin-fiat.idevops.site
    http:
      paths:
      - path: /
        backend:
          serviceName: spin-fiat
          servicePort: 7003
kubectl create -f ingress.yml
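A quick check that the Ingress landed in the spinnaker namespace and that the deck host answers (the hostnames must resolve to your ingress controller):

# Sketch: verify the Ingress object and the UI endpoint
kubectl get ingress -n spinnaker
curl -s -o /dev/null -w '%{http_code}\n' http://spinnaker.idevops.site/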
Enable authentication with LDAP or OAuth2 (pick one of the two; LDAP is recommended)
# Enable LDAP authentication
hal config security authn ldap edit \
    --user-search-base 'ou=devops,dc=zy,dc=com' \
    --url 'ldap://192.168.1.200:389' \
    --user-search-filter 'cn={0}' \
    --manager-dn 'cn=admin,dc=zy,dc=com' \
    --manager-password '12345678'

hal config security authn ldap enable

## --user-search-base    subtree in which to search for users
## --url                 LDAP server
## --user-search-filter  filter used when searching for a user DN
## --manager-dn          LDAP manager user
## --manager-password    password of the LDAP manager user

# GitHub
## First log in to GitHub and create an OAuth App.
## Official reference: https://spinnaker.io/setup/security/authentication/oauth/github/
hal config security authn oauth2 edit --provider github \
    --client-id 66826xxxxxxxxe0ecdbd7 \
    --client-secret d834851134e80a9xxxxxxe371613f05bc26

hal config security authn oauth2 enable
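If LDAP login misbehaves, an ldapsearch using the same manager DN and filter usually shows whether the directory side is correct (ldapsearch comes with openldap-clients; the user devops is just an example entry):

# Sketch: reproduce the user lookup that Gate performs during LDAP login
ldapsearch -x -H ldap://192.168.1.200:389 \
  -D 'cn=admin,dc=zy,dc=com' -w '12345678' \
  -b 'ou=devops,dc=zy,dc=com' '(cn=devops)' dn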
Authorization management
Roles can be defined either through LDAP or in a custom file. Choose one of the two.
Defining roles via LDAP groups: for example, if LDAP contains a group named yunweizu of type groupOfUniqueNames, every user that belongs to that group gets the role yunweizu, and permissions can then be granted based on the yunweizu role.
Defining roles via a file: write a static YAML file that maps each user to its roles.
# Using a YAML file ($HOME/.hal/userrole.yaml)
## The configuration below assigns the role yunweizu to the user devops and the role demo to user2.
users:
  - username: devops
    roles:
      - yunweizu
  - username: user2
    roles:
      - demo

hal config security authz enable
hal config security authz file edit --file-path=$HOME/.hal/userrole.yaml
hal config security authz edit --type file

## Authorization based on LDAP groups
hal config security authz ldap edit \
    --url 'ldap://192.168.1.200:389/dc=zy,dc=com' \
    --manager-dn 'cn=admin,dc=zy,dc=com' \
    --manager-password '12345678' \
    --user-dn-pattern 'cn={0}' \
    --group-search-base 'ou=devops' \
    --group-search-filter 'uniqueMember={0}' \
    --group-role-attributes 'cn' \
    --user-search-filter 'cn={0}'

hal config security authz edit --type ldap
hal config security authz enable
Once authorization is enabled, you can control which users may access cluster accounts, image registries, and applications.
## Allow users with the yunweizu and group02 roles to read the default cluster account,
## and users with the yunweizu role to write to it
hal config provider kubernetes account edit default \
    --add-read-permission yunweizu,group02 \
    --add-write-permission yunweizu

## Allow users with the yunweizu role to use the my-harbor-registry account
hal config provider docker-registry account edit my-harbor-registry \
    --read-permissions yunweizu \
    --write-permissions yunweizu
Enable pipeline permissions
~/.hal/default/profiles/orca-local.yml
tasks:
  useManagedServiceAccounts: true

~/.hal/default/profiles/settings-local.js
window.spinnakerSettings.feature.managedServiceAccounts = true;
Define a super administrator
vi ~/.hal/default/profiles/fiat-local.yml
bash-5.0$ cat fiat-local.yml
fiat:
  admin:
    roles:
      - devops-admin   ## the designated admin group
.hal/default/profiles/echo-local.yml
[root@master profiles]# cat echo-local.yml
mail:
  enabled: true
  from: 250642@qq.com
spring:
  mail:
    host: smtp.qq.com
    username: 25642@qq.com
    password: ubxijwaah
    protocol: smtp
    default-encoding: utf-8
    properties:
      mail:
        display:
          sendname: SpinnakerAdmin
        smtp:
          port: 465
          auth: true
          starttls:
            enable: true
            required: true
          ssl:
            enable: true
        transport:
          protocol: smtp
        debug: true
.hal/default/profiles/settings-local.js
window.spinnakerSettings.notifications.email.enabled = true;
Apply the updated configuration
hal deploy apply --no-validate
Configure canary storage
hal config canary enable

## AWS S3 / MinIO: create a bucket named spinnaker-canary and grant read/write permissions.
hal config canary aws enable
hal config canary aws account add my-canary \
    --bucket spinnaker-canary \
    --endpoint http://minio.idevops.site \
    --access-key-id AKIAIOSFODNN7EXAMPLE \
    --secret-access-key wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

hal config canary edit --default-storage-account my-canary
hal config canary aws edit --s3-enabled true
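The bucket itself has to exist on MinIO before Kayenta can write to it. With the MinIO client this might look like the sketch below (newer mc releases use mc alias set; older ones use mc config host add; endpoint and keys taken from the account above):

# Sketch: create the canary bucket on MinIO
mc alias set myminio http://minio.idevops.site AKIAIOSFODNN7EXAMPLE wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
mc mb myminio/spinnaker-canary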
Prometheus integration
## prometheus
hal config canary prometheus enable

## Basic auth is configured here; if there is no authentication, omit the username and password options.
hal config canary prometheus account add my-prometheus \
    --base-url http://prometheus.idevops.site \
    --username admin \
    --password admin

hal config canary edit --default-metrics-account my-prometheus
hal config canary edit --default-metrics-store prometheus
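A quick way to confirm the basic-auth credentials work against Prometheus (endpoint and credentials assumed from the account above):

# Sketch: expect an HTTP 200 and a short health message if the URL and basic auth are correct
curl -s -u admin:admin http://prometheus.idevops.site/-/healthy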
hal deploy apply --no-validate
Result
hal config metric-stores prometheus enable
hal deploy apply --no-validate

[root@master monitor]# kubectl get pod -n spinnaker
NAME                               READY   STATUS    RESTARTS   AGE
spin-clouddriver-7cd94f5b9-cn22r   2/2     Running   2          4h4m
spin-deck-684854fbd7-cb7wh         1/1     Running   1          4h4m
spin-echo-746b45ff98-kcz5m         2/2     Running   2          4h4m
spin-front50-66b4f9966-l6r4h       2/2     Running   2          4h4m
spin-gate-6788588dfc-q8cpt         2/2     Running   2          4h4m
spin-igor-6f6fbbbb75-4b4jd         2/2     Running   2          4h4m
spin-kayenta-64fddf7db9-j4pqg      2/2     Running   2          4h4m
spin-orca-d5c488b48-5q8sp          2/2     Running   2          4h4m
spin-rosco-5f4bcb754c-9kgl9        2/2     Running   2          4h4m

# describe shows a monitoring-daemon sidecar container inside each pod
kubectl describe pod spin-gate-6788588dfc-q8cpt -n spinnaker
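You can spot-check the sidecar's metrics endpoint directly with a port-forward before touching the Prometheus configuration (pod name taken from the listing above):

# Sketch: forward the monitoring-daemon port and fetch a few metrics
kubectl -n spinnaker port-forward pod/spin-gate-6788588dfc-q8cpt 8008:8008 &
curl -s http://127.0.0.1:8008/prometheus_metrics | head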
Once the services are running, each pod exposes its metrics at podIP:8008/prometheus_metrics, so the following service-discovery configuration needs to be added to Prometheus.
# Add this scrape configuration to Prometheus
- job_name: 'spinnaker-services'
  kubernetes_sd_configs:
  - role: pod
  metrics_path: "/prometheus_metrics"
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_label_app]
    action: keep
    regex: 'spin'
  - source_labels: [__meta_kubernetes_pod_container_name]
    action: keep
    regex: 'monitoring-daemon'

## For prometheus-operator, use the following configuration instead; otherwise ignore it.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: spinnaker-all-metrics
  labels:
    app: spin
    # this label is here to match the prometheus operator serviceMonitorSelector attribute
    # prometheus.prometheusSpec.serviceMonitorSelector
    # https://github.com/helm/charts/tree/master/stable/prometheus-operator
    release: prometheus-operator
spec:
  selector:
    matchLabels:
      app: spin
  namespaceSelector:
    any: true
  endpoints:
  # "port" is string only. "targetPort" is integer or string.
  - targetPort: 8008
    interval: 10s
    path: "/prometheus_metrics"
Open the Prometheus web UI and you should be able to see the new spinnaker-services targets.
To visualize the data, connect Grafana; the Spinnaker project provides official dashboard templates.
Open the Grafana console and import the JSON templates. Since there are quite a few of them, create a folder to keep them organized.