Building a PostgreSQL Cluster with Pgpool-II
OS: CentOS 7
PostgreSQL version: 10.10
Pgpool-II version: 4.0.6
Ports
- 9999: client-facing service
- 9000: watchdog
- 9898: pcp
- 9694: watchdog heartbeat checks
Note: in the installation steps below, anything wrapped in 【】 must be replaced with values from your own environment.
Background
The project requires database high availability. PostgreSQL itself supports data replication and manual failover: in a two-node hot-standby PostgreSQL cluster, when the primary goes down the cluster can only serve reads, not writes, and normal service resumes only after a manual promote command promotes the standby to primary. As the PostgreSQL documentation puts it:
PostgreSQL does not provide the system software required to identify a
failure on the primary and notify the standby database server.
Pgpool-II automates this failover through its watchdog's virtual IP and health-check features. This article builds the cluster in master-slave mode rather than replication_mode, because online recovery is restricted in replication_mode. The Pgpool-II documentation explains:
There is a restriction in the online recovery in replication_mode. If
Pgpool-II itself is installed on multiple hosts, online recovery does
not work correctly, because Pgpool-II has to stop all the clients
during the 2nd stage of online recovery. If there are several
Pgpool-II hosts, only one of them will have received the online
recovery command and will block the connections from clients.
For partition tolerance, the cluster uses 3 nodes, each running both Pgpool-II and PostgreSQL, with the database in a 1-primary/2-standby layout.
Note: Pgpool-II and PostgreSQL use different terms for the lead node: Pgpool-II calls its lead node the master, while PostgreSQL calls its lead node the primary. Both use the same term, standby, for the other nodes.
Steps
1 Install and configure PostgreSQL
Install PostgreSQL on all nodes, but configure and start it only on node 1; PostgreSQL on nodes 2 and 3 is initialized and started later through pcp_recovery_node. Do not enable PostgreSQL to start on boot. See the separate PostgreSQL installation guide.
2 Configure /etc/hosts
Add DNS entries for recovery_1st_stage to use, as in the sketch below.
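recovery_1st_stage resolves $(hostname), so every node's hostname must be resolvable from every other node. An illustrative layout (names and IPs are placeholders):

# /etc/hosts — one entry per cluster node
【IP of node 1】    node1-hostname
【IP of node 2】    node2-hostname
【IP of node 3】    node3-hostname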
3 Configure passwordless SSH
The failover.sh and follow_master.sh scripts used by Pgpool's failover configuration, and the recovery_1st_stage and pgpool_remote_start scripts used by online recovery, all perform remote operations over SSH.
Passwordless SSH is needed one-way from root to the postgres users (including the local postgres user), and both ways between the postgres users of different nodes.
3.1 Set a password for the postgres user
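For example, as root on each node:

passwd postgres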
3.2 Run the following on the current node to generate a key pair
Do not set a passphrase.
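A minimal sketch — run it once as root and once as postgres on every node, pressing Enter at the passphrase prompt so the key has no passphrase:

ssh-keygen -t rsa    # accept the default key path; leave the passphrase empty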
3.3 Copy the public key to the other nodes
ssh-copy-id -i ~/.ssh/id_rsa.pub 【username】@【IP】
3.4 Test SSH
ssh 【username】@【IP】
4 Adjust the SSH configuration (optional; the default of 10 sessions is usually enough)
vi /etc/ssh/sshd_config
Change:
MaxSessions 100
Restart sshd:
systemctl restart sshd
5 Configure system logging
Create the log file under /var/log; otherwise syslog cannot write to it (unless SELinux is disabled or reconfigured).
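For example, matching the path used in the rsyslog rule below:

mkdir -p /var/log/pgpool
touch /var/log/pgpool/pgpool_run.log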
Edit the syslog configuration:
vi /etc/rsyslog.conf
Change:

*.info;mail.none;authpriv.none;cron.none;LOCAL1.none    /var/log/messages
LOCAL1.*                                                /var/log/pgpool/pgpool_run.log

Restart rsyslog:
systemctl restart rsyslog
6 Configure .pgpass
To let the database's postgres user perform streaming replication and online recovery without a password prompt, create a .pgpass file in the operating-system postgres user's home directory on every node and set its permissions to 600. (As the comment in the file notes, libpq matches streaming-replication connections against a database field of replication, so entries with replication in the database field may also be needed.)
Edit:

# hostname:port:database:username:password
# In a standby server, a database field of replication matches streaming replication connections made to the master server.
【IP of node 1】:5432:postgres:postgres:123456
【IP of node 2】:5432:postgres:postgres:123456
【IP of node 3】:5432:postgres:postgres:123456

Set the permissions:
chmod 600 ~/.pgpass
7 Install Pgpool-II
As root, install and configure Pgpool-II on all nodes. Online recovery requires the pgpool_recovery, pgpool_remote_start, and pgpool_switch_xlog functions, so the pgpool_recovery component must be installed as well.
Note: installing pgpool_recovery requires postgresql-devel to be installed first.
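A sketch of a source build, assuming the 4.0.6 tarball and that pg_config from postgresql-devel is on the PATH; paths will vary:

tar xzf pgpool-II-4.0.6.tar.gz
cd pgpool-II-4.0.6
./configure
make && make install
# the pgpool_recovery functions live under src/sql; build and install them too
cd src/sql/pgpool-recovery
make && make install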
8 Install the pgpool_recovery extension on node 1's PostgreSQL
When PostgreSQL creates a database, it actually works by copying an existing one; by default it copies the standard system database template1, so an extension installed in template1 is inherited by databases created afterwards.
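For example, as the postgres user on node 1:

psql template1 -c "CREATE EXTENSION pgpool_recovery;"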
9 Write the failover script
Create the script file on all nodes and grant it execute permission.
Edit the script (note how its positional arguments correspond to the failover_command parameter in pgpool.conf): if the primary went down, promote a standby; if a standby went down, only log the event.
#!/bin/bash
# This script is run by failover_command.

set -o xtrace
logger -i -p local1.info failover_command begin

# Special values:
# %d = node id
# %h = host name
# %p = port number
# %D = database cluster path
# %m = new master node id
# %H = hostname of the new master node
# %M = old master node id
# %P = old primary node id
# %r = new master port number
# %R = new master database cluster path
# %% = '%' character

FAILED_NODE_ID="$1"
FAILED_NODE_HOST="$2"
FAILED_NODE_PORT="$3"
FAILED_NODE_PGDATA="$4"
NEW_MASTER_NODE_ID="$5"
NEW_MASTER_NODE_HOST="$6"
OLD_MASTER_NODE_ID="$7"
OLD_PRIMARY_NODE_ID="$8"
NEW_MASTER_NODE_PORT="$9"
NEW_MASTER_NODE_PGDATA="${10}"

PGHOME=/usr/local/pgsql

logger -i -p local1.info failover.sh: start: failed_node_id=${FAILED_NODE_ID} old_primary_node_id=${OLD_PRIMARY_NODE_ID} \
    failed_host=${FAILED_NODE_HOST} new_master_host=${NEW_MASTER_NODE_HOST}

## Test passwordless SSH
ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@${NEW_MASTER_NODE_HOST} -i ~/.ssh/id_rsa ls /tmp > /dev/null

if [ $? -ne 0 ]; then
    logger -i -p local1.error failover.sh: passwordless SSH to postgres@${NEW_MASTER_NODE_HOST} failed. Please setup passwordless SSH.
    exit 1
fi

# If a standby node is down, skip failover.
if [ ${FAILED_NODE_ID} -ne ${OLD_PRIMARY_NODE_ID} ]; then
    logger -i -p local1.info failover.sh: Standby node is down. Skipping failover.
    exit 0
fi

# Promote the standby node.
logger -i -p local1.info failover.sh: Primary node is down, promote standby node PostgreSQL@${NEW_MASTER_NODE_HOST}.

ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
    postgres@${NEW_MASTER_NODE_HOST} -i ~/.ssh/id_rsa ${PGHOME}/bin/pg_ctl -D ${NEW_MASTER_NODE_PGDATA} -w promote

if [ $? -ne 0 ]; then
    logger -i -p local1.error failover.sh: new_master_host=${NEW_MASTER_NODE_HOST} promote failed
    exit 1
fi

logger -i -p local1.info failover.sh: end: new_master_node_id=$NEW_MASTER_NODE_ID started as the primary node
exit 0

Make it executable:
chmod +x /usr/local/pgpool/failover_stream.sh
10 Write the follow_master script
Makes the standbys that are still running resynchronize from the new primary. This script is required when the cluster has three or more nodes.
Edit the script:
#!/bin/bash
# This script is run after failover_command to synchronize the Standby with the new Primary.

set -o xtrace
logger -i -p local1.info follow_master begin

# Special values:
# %d = node id
# %h = host name
# %p = port number
# %D = database cluster path
# %m = new master node id
# %M = old master node id
# %H = new master node host name
# %P = old primary node id
# %R = new master database cluster path
# %r = new master port number
# %% = '%' character

FAILED_NODE_ID="$1"
FAILED_NODE_HOST="$2"
FAILED_NODE_PORT="$3"
FAILED_NODE_PGDATA="$4"
NEW_MASTER_NODE_ID="$5"
NEW_MASTER_NODE_HOST="$6"
OLD_MASTER_NODE_ID="$7"
OLD_PRIMARY_NODE_ID="$8"
NEW_MASTER_NODE_PORT="$9"
NEW_MASTER_NODE_PGDATA="${10}"

PGHOME=/usr/local/pgsql
ARCHIVEDIR=/home/pg/data/archivedir
REPL_USER=postgres
PCP_USER=postgres
PGPOOL_PATH=/usr/local/bin
PCP_PORT=9898

# Recover the standby from the new primary
logger -i -p local1.info follow_master.sh: start: synchronize the Standby node PostgreSQL@${FAILED_NODE_HOST} with the new Primary node PostgreSQL@${NEW_MASTER_NODE_HOST}

## Test passwordless SSH
ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@${NEW_MASTER_NODE_HOST} -i ~/.ssh/id_rsa ls /tmp > /dev/null

if [ $? -ne 0 ]; then
    logger -i -p local1.error follow_master.sh: passwordless SSH to postgres@${NEW_MASTER_NODE_HOST} failed. Please setup passwordless SSH.
    exit 1
fi

## Get PostgreSQL major version
PGVERSION=`${PGHOME}/bin/initdb -V | awk '{print $3}' | sed 's/\..*//' | sed 's/\([0-9]*\)[a-zA-Z].*/\1/'`

if [ ${PGVERSION} -ge 12 ]; then
    RECOVERYCONF=${FAILED_NODE_PGDATA}/myrecovery.conf
else
    RECOVERYCONF=${FAILED_NODE_PGDATA}/recovery.conf
fi

# Check the status of the standby
ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
    postgres@${FAILED_NODE_HOST} -i ~/.ssh/id_rsa ${PGHOME}/bin/pg_ctl -w -D ${FAILED_NODE_PGDATA} status

## If the Standby is running, run pg_basebackup.
if [ $? -eq 0 ]; then

    # Execute pg_basebackup
    ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@${FAILED_NODE_HOST} -i ~/.ssh/id_rsa "

        set -o errexit

        ${PGHOME}/bin/pg_ctl -w -m f -D ${FAILED_NODE_PGDATA} stop

        rm -rf ${FAILED_NODE_PGDATA}
        rm -rf ${ARCHIVEDIR}/*

        ${PGHOME}/bin/pg_basebackup -h ${NEW_MASTER_NODE_HOST} -U ${REPL_USER} -p ${NEW_MASTER_NODE_PORT} -D ${FAILED_NODE_PGDATA} -X stream

        if [ ${PGVERSION} -ge 12 ]; then
            sed -i -e \"\\\$ainclude_if_exists = '$(echo ${RECOVERYCONF} | sed -e 's/\//\\\//g')'\" \
                   -e \"/^include_if_exists = '$(echo ${RECOVERYCONF} | sed -e 's/\//\\\//g')'/d\" ${FAILED_NODE_PGDATA}/postgresql.conf
        fi

        cat > ${RECOVERYCONF} << EOT
primary_conninfo = 'host=${NEW_MASTER_NODE_HOST} port=${NEW_MASTER_NODE_PORT} user=${REPL_USER} passfile=''/home/postgres/.pgpass'''
recovery_target_timeline = 'latest'
restore_command = 'scp ${NEW_MASTER_NODE_HOST}:${ARCHIVEDIR}/%f %p'
EOT

        if [ ${PGVERSION} -ge 12 ]; then
            touch ${FAILED_NODE_PGDATA}/standby.signal
        else
            echo \"standby_mode = 'on'\" >> ${RECOVERYCONF}
        fi
    "

    if [ $? -ne 0 ]; then
        logger -i -p local1.error follow_master.sh: end: pg_basebackup failed
        exit 1
    fi

    # Start the Standby node on ${FAILED_NODE_HOST}
    ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        postgres@${FAILED_NODE_HOST} -i ~/.ssh/id_rsa $PGHOME/bin/pg_ctl -l /dev/null -w -D ${FAILED_NODE_PGDATA} start

    # If the Standby started successfully, attach this node
    if [ $? -eq 0 ]; then

        # Run pcp_attach_node to attach the Standby node to Pgpool-II.
        ${PGPOOL_PATH}/pcp_attach_node -w -h localhost -U $PCP_USER -p ${PCP_PORT} -n ${FAILED_NODE_ID}

        if [ $? -ne 0 ]; then
            logger -i -p local1.error follow_master.sh: end: pcp_attach_node failed
            exit 1
        fi

    # If the Standby failed to start, log the failure and exit
    else
        logger -i -p local1.error follow_master.sh: end: follow master command failed
        exit 1
    fi

else
    logger -i -p local1.info follow_master.sh: failed_node_id=${FAILED_NODE_ID} is not running. skipping follow master command
    exit 0
fi

logger -i -p local1.info follow_master.sh: end: follow master command complete
exit 0

Make it executable:
chmod +x /usr/local/pgpool/follow_master.sh
11 Write the recovery_1st_stage script
pcp_recovery_node relies on recovery_1st_stage and pgpool_remote_start.
recovery_1st_stage uses pg_basebackup to copy data from the current primary, initializes the standby's data directory, and writes the recovery configuration file.
Edit:
#!/bin/bash
# This script is executed by "recovery_1st_stage" to recover a Standby node.

set -o xtrace
logger -i -p local1.info recovery_1st_stage begin

PRIMARY_NODE_PGDATA="$1"
DEST_NODE_HOST="$2"
DEST_NODE_PGDATA="$3"
PRIMARY_NODE_PORT="$4"
DEST_NODE_PORT=5432

# $(hostname) must be resolvable, hence the /etc/hosts entries from step 2
PRIMARY_NODE_HOST=$(hostname)
PGHOME=/usr/local/pgsql
ARCHIVEDIR=/home/pg/data/archivedir
REPL_USER=postgres

logger -i -p local1.info recovery_1st_stage: start: pg_basebackup for Standby node PostgreSQL@${DEST_NODE_HOST}

## Test passwordless SSH
ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@${DEST_NODE_HOST} -i ~/.ssh/id_rsa ls /tmp > /dev/null

if [ $? -ne 0 ]; then
    logger -i -p local1.error recovery_1st_stage: passwordless SSH to postgres@${DEST_NODE_HOST} failed. Please setup passwordless SSH.
    exit 1
fi

## Get PostgreSQL major version
PGVERSION=`${PGHOME}/bin/initdb -V | awk '{print $3}' | sed 's/\..*//' | sed 's/\([0-9]*\)[a-zA-Z].*/\1/'`

if [ $PGVERSION -ge 12 ]; then
    RECOVERYCONF=${DEST_NODE_PGDATA}/myrecovery.conf
else
    RECOVERYCONF=${DEST_NODE_PGDATA}/recovery.conf
fi

## Execute pg_basebackup to recover the Standby node
ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@$DEST_NODE_HOST -i ~/.ssh/id_rsa "

    set -o errexit

    rm -rf $DEST_NODE_PGDATA
    rm -rf $ARCHIVEDIR/*

    ${PGHOME}/bin/pg_basebackup -h ${PRIMARY_NODE_HOST} -U ${REPL_USER} -p ${PRIMARY_NODE_PORT} -D ${DEST_NODE_PGDATA} -X stream

    if [ ${PGVERSION} -ge 12 ]; then
        sed -i -e \"\\\$ainclude_if_exists = '$(echo ${RECOVERYCONF} | sed -e 's/\//\\\//g')'\" \
               -e \"/^include_if_exists = '$(echo ${RECOVERYCONF} | sed -e 's/\//\\\//g')'/d\" ${DEST_NODE_PGDATA}/postgresql.conf
    fi

    cat > ${RECOVERYCONF} << EOT
primary_conninfo = 'host=${PRIMARY_NODE_HOST} port=${PRIMARY_NODE_PORT} user=${REPL_USER} passfile=''/home/postgres/.pgpass'''
recovery_target_timeline = 'latest'
restore_command = 'scp ${PRIMARY_NODE_HOST}:${ARCHIVEDIR}/%f %p'
EOT

    if [ ${PGVERSION} -ge 12 ]; then
        touch ${DEST_NODE_PGDATA}/standby.signal
    else
        echo \"standby_mode = 'on'\" >> ${RECOVERYCONF}
    fi

    sed -i \"s/#*port = .*/port = ${DEST_NODE_PORT}/\" ${DEST_NODE_PGDATA}/postgresql.conf
"

if [ $? -ne 0 ]; then
    logger -i -p local1.error recovery_1st_stage: end: pg_basebackup failed. online recovery failed
    exit 1
fi

logger -i -p local1.info recovery_1st_stage: end: recovery_1st_stage complete
exit 0

Make it executable:
chmod +x /home/pg/data/recovery_1st_stage.sh
12 Write the pgpool_remote_start script (note: no .sh extension)
pgpool_remote_start is responsible for starting the database on the target node.
Edit:
#!/bin/bash
# This script is run after recovery_1st_stage to start the Standby node.

set -o xtrace
logger -i -p local1.info pgpool_remote_start begin

PGHOME=/usr/local/pgsql
DEST_NODE_HOST="$1"
DEST_NODE_PGDATA="$2"

logger -i -p local1.info pgpool_remote_start: start: remote start Standby node PostgreSQL@$DEST_NODE_HOST

## Test passwordless SSH
ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@${DEST_NODE_HOST} -i ~/.ssh/id_rsa ls /tmp > /dev/null

if [ $? -ne 0 ]; then
    logger -i -p local1.error pgpool_remote_start: passwordless SSH to postgres@${DEST_NODE_HOST} failed. Please setup passwordless SSH.
    exit 1
fi

## Start the Standby node
ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@$DEST_NODE_HOST -i ~/.ssh/id_rsa "
    $PGHOME/bin/pg_ctl -l /dev/null -w -D $DEST_NODE_PGDATA start
"

if [ $? -ne 0 ]; then
    logger -i -p local1.error pgpool_remote_start: PostgreSQL@$DEST_NODE_HOST start failed.
    exit 1
fi

logger -i -p local1.info pgpool_remote_start: end: PostgreSQL@$DEST_NODE_HOST started successfully.
exit 0

Make it executable:
chmod +x /home/pg/data/pgpool_remote_start
13 Configure pgpool.conf
Copy the Pgpool-II sample configuration file.
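Assuming a source install under /usr/local, the streaming-replication sample is the likely starting point (the sample file name can vary between versions):

cp /usr/local/etc/pgpool.conf.sample-stream /usr/local/etc/pgpool.conf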
Edit:
# CONNECTIONS
listen_addresses = '*'
port = 9999
pcp_listen_addresses = '*'
pcp_port = 9898

# - Backend Connection Settings -
backend_hostname0 = '【IP of cluster node 1; keep this identical on every node】'
backend_port0 = 5432
backend_weight0 = 1
backend_data_directory0 = '/home/pg/data'
backend_flag0 = 'ALLOW_TO_FAILOVER'

backend_hostname1 = '【IP of cluster node 2; keep this identical on every node】'
backend_port1 = 5432
backend_weight1 = 1
backend_data_directory1 = '/home/pg/data'
backend_flag1 = 'ALLOW_TO_FAILOVER'

backend_hostname2 = '【IP of cluster node 3; keep this identical on every node】'
backend_port2 = 5432
backend_weight2 = 1
backend_data_directory2 = '/home/pg/data'
backend_flag2 = 'ALLOW_TO_FAILOVER'

# - Authentication -
enable_pool_hba = on
pool_passwd = 'pool_passwd'

#------------------------------------------------------------------------------
# POOLS
#------------------------------------------------------------------------------
# - Concurrent session and pool size -
num_init_children = 400
max_pool = 4

#------------------------------------------------------------------------------
# LOGS
#------------------------------------------------------------------------------
log_destination = 'syslog'
syslog_facility = 'LOCAL1'

#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------
pid_file_name = '/usr/local/pgpool/pgpool.pid'

#------------------------------------------------------------------------------
# REPLICATION MODE
#------------------------------------------------------------------------------
replication_mode = off

#------------------------------------------------------------------------------
# LOAD BALANCING MODE
#------------------------------------------------------------------------------
load_balance_mode = on

#------------------------------------------------------------------------------
# MASTER/SLAVE MODE
#------------------------------------------------------------------------------
master_slave_mode = on
master_slave_sub_mode = 'stream'

# - Streaming -
sr_check_period = 5
sr_check_user = 'postgres'
sr_check_password = '123456'
sr_check_database = 'postgres'
follow_master_command = '/usr/local/pgpool/follow_master.sh %d %h %p %D %m %H %M %P %r %R'

#------------------------------------------------------------------------------
# HEALTH CHECK
#------------------------------------------------------------------------------
health_check_period = 10
health_check_timeout = 20
health_check_user = 'postgres'
health_check_password = '123456'
health_check_database = 'postgres'

#------------------------------------------------------------------------------
# FAILOVER AND FAILBACK
#------------------------------------------------------------------------------
# The arguments after failover_stream.sh must match how the script reads them
failover_command = '/usr/local/pgpool/failover_stream.sh %d %h %p %D %m %H %M %P %r %R'

#------------------------------------------------------------------------------
# WATCHDOG
#------------------------------------------------------------------------------
use_watchdog = on
wd_hostname = '【this node's IP】'
wd_port = 9000
delegate_IP = '【virtual IP】'
if_cmd_path = '/sbin'
if_up_cmd = 'ip addr add $_IP_$/24 dev 【NIC name (see ip addr)】 label 【NIC name (see ip addr)】:0'
if_down_cmd = 'ip addr del $_IP_$/24 dev 【NIC name (see ip addr)】'
arping_path = '/usr/sbin'
arping_cmd = 'arping -U $_IP_$ -w 1'
wd_heartbeat_port = 9694
wd_heartbeat_keepalive = 2
wd_heartbeat_deadtime = 30
heartbeat_destination0 = '【IP of other node 1】'
heartbeat_destination_port0 = 9694
heartbeat_device0 = '【NIC name (see ip addr)】'
heartbeat_destination1 = '【IP of other node 2】'
heartbeat_destination_port1 = 9694
heartbeat_device1 = '【NIC name (see ip addr)】'

#------------------------------------------------------------------------------
# ONLINE RECOVERY
#------------------------------------------------------------------------------
recovery_user = 'postgres'
recovery_password = '123456'
recovery_1st_stage_command = 'recovery_1st_stage.sh'

# - Other pgpool Connection Settings -
other_pgpool_hostname0 = '【IP of other node 1】'
other_pgpool_port0 = 9999
other_wd_port0 = 9000
other_pgpool_hostname1 = '【IP of other node 2】'
other_pgpool_port1 = 9999
other_wd_port1 = 9000

14 Generate pool_passwd
/usr/local/bin/pg_md5 -p --md5auth --username=postgres pool_passwd
chown postgres:postgres /usr/local/etc/pool_passwd
This creates a pool_passwd file under /usr/local/etc; it is the file referenced by the pool_passwd option in pgpool.conf.
15 Configure pool_hba.conf
This is the client-authentication configuration file; it is used to authenticate client connections when pgpool.conf sets enable_pool_hba = on.
Run as root.
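The command itself is not in the original; for a source install, the sample ships under /usr/local/etc:

cp /usr/local/etc/pool_hba.conf.sample /usr/local/etc/pool_hba.conf
vi /usr/local/etc/pool_hba.conf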
Edit:

local   all   all                md5
host    all   all   0.0.0.0/0    md5
host    all   all   0/0          md5

16 Configure pcp.conf
Pgpool-II offers administrators an interface for management operations, such as fetching Pgpool-II's status or terminating Pgpool-II processes remotely. pcp.conf is the username/password file this interface uses for authentication, and it must be set up in every operating mode. The user that PCP uses for health checks and replication-lag checks must have its username and password declared in pcp.conf.
16.1 Generate the password digest
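For example, to hash the password 123456 used throughout this article:

/usr/local/bin/pg_md5 123456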
16.2 Create and edit the file
cp /usr/local/etc/pcp.conf.sample /usr/local/etc/pcp.conf
vi /usr/local/etc/pcp.conf
Edit:

# Example:
postgres:【digest generated in the previous step】

17 Set up the PCP password file
Because the follow_master_command script must run PCP commands without entering a password, create a .pcppass file in the home directory of the user that starts Pgpool-II.
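A sketch; .pcppass uses the format hostname:port:username:password and must be mode 600:

echo 'localhost:9898:postgres:123456' > ~/.pcppass
chmod 600 ~/.pcppass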
18 Allow the postgres user to run the relevant commands as root
Run as root.
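The original does not list the commands. When Pgpool-II is started by a non-root user, the Pgpool-II documentation suggests setting the setuid bit on the network tools invoked by if_up_cmd/if_down_cmd and arping_cmd, for example:

chmod u+s /sbin/ip
chmod u+s /usr/sbin/arping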
19 Enable start on boot
vi /etc/systemd/system/pgpool.service
Edit:

[Unit]
Description=Pgpool-II
After=syslog.target network.target

[Service]
Type=forking
User=root
ExecStart=/usr/local/bin/pgpool -C -D
ExecStop=/usr/local/bin/pgpool -m fast stop
ExecReload=/usr/local/bin/pgpool reload
LimitNOFILE=65536
KillMode=process
KillSignal=SIGINT
Restart=on-abnormal
RestartSec=30s
TimeoutSec=0

[Install]
WantedBy=multi-user.target

Enable it at boot:
systemctl enable pgpool
20 Start pgpool on all nodes
systemctl start pgpool
21 Initialize the standby databases with pcp_recovery_node
pcp_recovery_node first executes SELECT pgpool_recovery('recovery_1st_stage', '【IP of the standby to start】', '【data directory】', '5432', 【node id】) on the primary node (check the PostgreSQL log for error details), and then runs recovery_1st_stage.sh and pgpool_remote_start.
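For example, to initialize node 1 (the node id follows the backend_* numbering in pgpool.conf; -W prompts for the PCP password):

pcp_recovery_node -h 【virtual IP】 -p 9898 -U postgres -n 1 -W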
22 Check node status
22.1 Pgpool node status
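For example, the watchdog status can be queried over PCP:

pcp_watchdog_info -h 【virtual IP】 -p 9898 -U postgres -W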
22.2 Database node status
Open a PostgreSQL command line through port 9999 on the virtual IP and run the query shown below.
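A sketch of such a session:

psql -h 【virtual IP】 -p 9999 -U postgres -c "show pool_nodes" postgres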
Note: if you use Navicat's console, you may need to close and reopen the console (establishing a new session) before the correct node status is shown.
23 High-availability tests
23.1 Pgpool high availability
- Stop the pgpool service on the master: one of the standbys takes over as master
- Restart pgpool on the stopped former master: its status becomes standby
- Stop the pgpool service on a standby: its status becomes shutdown
- Restart that standby's pgpool: its status returns to standby
23.2 Database high availability
- Stop the database service on the primary: one standby is promoted to primary, and the remaining standbys run follow_master to resynchronize from the new primary
- Run pcp_recovery_node to bring the stopped former primary back up
- Stop the database service on a standby: the other nodes are unaffected
- Run pcp_recovery_node to bring the stopped standby back up
24 Load-balancing test
Use pgbench to test load balancing.
24.1 Initialize test data
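A sketch — initialize the pgbench tables through the virtual IP so the writes go to the primary:

pgbench -i -h 【virtual IP】 -p 9999 -U postgres postgres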
24.2 Use show pool_nodes to see how queries are distributed across the nodes
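For example, drive a read-only workload through Pgpool and then compare the select_cnt column of show pool_nodes across nodes (the flags here are illustrative):

pgbench -S -c 10 -t 100 -h 【virtual IP】 -p 9999 -U postgres postgres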
25 Recovering after an unclean pgpool shutdown
- After an unclean pgpool shutdown, clean up the stale state files before restarting (see the sketch after this list)
- If the first node comes up with status standby, restart pgpool and attach the node with pcp_attach_node
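The original does not name the files to clean; assuming the default logdir of '/tmp', the stale watchdog/status file would be (check your logdir setting):

rm -f /tmp/pgpool_status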