SUSE HA for SAP Scale-Up: Installation and Configuration for the Performance Optimized Scenario

1. Installing the SUSE Operating System

Download the SUSE Linux Enterprise Server for SAP Applications installation media from the official site. During the operating system installation, select the SUSE Linux Enterprise Server for SAP Applications product.

On the software selection screen, select the SAP HANA Server Base, SAP Application Server Base, and High Availability components as needed.
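
If these patterns were not selected during installation, they can also be added afterwards from the command line. A minimal sketch, assuming the pattern and package names that appear in the package list below (verify the exact names with "zypper search -t pattern"):
# zypper install -t pattern sap_server sap-hana ha_sles
# zypper install SAPHanaSR saptune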

After the operating system is installed, the relevant SAP and HA packages can be seen:
# rpm -qa |grep pattern |grep sap
patterns-sles-sap_server-32bit-12-10.1.x86_64
patterns-sap-hana-12.3-6.11.1.x86_64
patterns-sles-sap_server-12-10.1.x86_64

# rpm -qa |grep -i sap
sap-locale-32bit-1.0-92.4.x86_64
yast2-sap-scp-1.0.3-11.2.noarch
patterns-sles-sap_server-32bit-12-10.1.x86_64
SLES_SAP-release-DVD-12.5-1.130.x86_64
patterns-sap-hana-12.3-6.11.1.x86_64
sap-locale-1.0-92.4.x86_64
yast2-saptune-1.3-3.4.2.noarch
sles4sap-white-papers-1.0-1.1.noarch
yast2-sap-ha-1.0.5-2.10.noarch
SLES_SAP-release-12.5-1.130.x86_64
saptune-2.0.1-3.3.1.x86_64
cyrus-sasl-gssapi-2.1.26-8.7.1.x86_64
patterns-sles-sap_server-12-10.1.x86_64
clamsap-0.99.25-1.8.x86_64
sap-netscape-link-0.1-1.2.noarch
saprouter-systemd-0.2-1.1.noarch
SAPHanaSR-0.153.2-3.8.2.noarch
sap-installation-wizard-3.1.81.20-3.15.1.x86_64
cyrus-sasl-gssapi-32bit-2.1.26-8.7.1.x86_64
yast2-sap-scp-prodlist-1.0.4-5.6.1.noarch
sapconf-4.1.14-40.56.3.noarch

# rpm -qa |grep -i cluster
yast2-cluster-3.4.1-9.8.noarch
cluster-md-kmp-default-4.12.14-120.1.x86_64
ha-cluster-bootstrap-0.5-3.6.2.noarch
cluster-glue-1.0.12+v1.git.1485976882.03d61cd-3.8.1.x86_64

# rpm -qa |grep -i ha

sle-ha-install-quick_en-12.4-1.3.noarch
nautilus-share-0.7.3-11.81.x86_64
hardlink-1.0-6.45.x86_64
yast2-hana-firewall-1.1.5-1.5.x86_64
libHalf11-2.1.0-2.14.x86_64
libenchant1-1.6.0-21.107.x86_64
perl-Tie-IxHash-1.23-3.19.noarch
patterns-sap-hana-12.3-6.11.1.x86_64
haveged-1.9.1-16.1.x86_64
libharfbuzz0-32bit-1.4.5-7.5.x86_64
libxcb-shape0-1.10-4.3.1.x86_64
HANA-Firewall-1.1.6-1.17.noarch
shared-mime-info-1.6-11.3.x86_64
libharfbuzz0-1.4.5-7.5.x86_64
patterns-ha-ha_sles-12-15.7.x86_64
yast2-sap-ha-1.0.5-2.10.noarch
gucharmap-3.18.2-3.4.x86_64
gucharmap-lang-3.18.2-3.4.noarch
perl-Crypt-SmbHash-0.12-156.12.x86_64
libthai-data-0.1.25-4.2.x86_64
sharutils-lang-4.11.1-14.64.x86_64
sharutils-4.11.1-14.64.x86_64
libthai0-32bit-0.1.25-4.2.x86_64
release-notes-ha-12.5.20191017-1.2.noarch
python-chardet-3.0.4-5.3.2.noarch
nautilus-share-lang-0.7.3-11.81.noarch
libthai0-0.1.25-4.2.x86_64
perl-Digest-SHA1-2.13-17.216.x86_64
ha-cluster-bootstrap-0.5-3.6.2.noarch
sle-ha-manuals_en-12.3-1.3.noarch
libgucharmap_2_90-7-3.18.2-3.4.x86_64
hawk2-2.1.0+git.1539075484.48179981-3.3.1.x86_64
yast2-metapackage-handler-3.1.4-3.3.noarch
libhavege1-1.9.1-16.1.x86_64
yast2-hardware-detection-3.1.8-1.39.x86_64
SAPHanaSR-0.153.2-3.8.2.noarch
libharfbuzz-icu0-1.4.5-7.5.x86_64
shadow-4.2.1-34.20.x86_64

2. Installing the HANA Database

Install the HANA database on both the primary and the secondary node.

# ./hdbsetup

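hdbsetup starts the graphical installer. Alternatively, the installation can be driven from the command line with hdblcm from the same installation media. A minimal sketch, assuming SID HDB and instance number 00 as used throughout this article (hdblcm prompts interactively for any values not passed as parameters):
# ./hdblcm --action=install --components=server --sid=HDB --number=00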


3. Configuring HANA System Replication Between the Primary and the Secondary

1) Back up the primary database.
hdbadm@hanadb01:/usr/sap/HDB/HDB00> hdbsql -u SYSTEM -d SYSTEMDB -i 00 "BACKUP DATA FOR FULL SYSTEM USING FILE ('backup')"
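
To confirm that the backup completed, the backup catalog can be queried. A hedged example using the standard M_BACKUP_CATALOG monitoring view (the column names shown should be verified against your HANA revision):
hdbadm@hanadb01:/usr/sap/HDB/HDB00> hdbsql -u SYSTEM -d SYSTEMDB -i 00 "SELECT TOP 3 ENTRY_TYPE_NAME, STATE_NAME, SYS_START_TIME FROM M_BACKUP_CATALOG ORDER BY SYS_START_TIME DESC"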

2) Enable system replication on the primary node.
hdbnsutil -sr_enable --name=site1

nameserver is active, proceeding ...
successfully enabled system as system replication source site
done.

Check the replication configuration on the primary node.
hdbnsutil -sr_stateConfiguration --sapcontrol=1

SAPCONTROL-OK: <begin>
mode=primary
site id=1
site name=site1
SAPCONTROL-OK: <end>
done.

3) Register the secondary node.

Stop the secondary database.
hdbadm@hanadb02:/usr/sap/HDB/HDB00> HDB stop

In HANA 2.0, system replication runs encrypted, so the primary node's key files must be copied to the secondary node (the SID in this setup is HDB):
cd /usr/sap/HDB/SYS/global/security/rsecssfs
rsync -va hanadb01:/usr/sap/HDB/SYS/global/security/rsecssfs/data/SSFS_HDB.DAT data/SSFS_HDB.DAT
rsync -va hanadb01:/usr/sap/HDB/SYS/global/security/rsecssfs/key/SSFS_HDB.KEY key/SSFS_HDB.KEY

Edit the global.ini file on both the primary and the secondary (/hana/shared/<SID>/global/hdb/custom/config/global.ini) so that HANA uses the dedicated replication network segment for data replication.
[system_replication_hostname_resolution]
192.168.1.207 = hanadb01
192.168.1.205 = hanadb02
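
To keep the replication traffic on that dedicated network, the communication section is usually set as well. A hedged sketch of the corresponding global.ini entry (the value follows the usual separate-network setup described in the SAP documentation; verify it for your revision):
[system_replication_communication]
listeninterface = .internal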

Register the secondary node.
hdbadm@hanadb02:/usr/sap/HDB/HDB00> hdbnsutil -sr_register --name=site2 --remoteHost=hanadb01 --remoteInstance=00 --replicationMode=sync --operationMode=delta_datashipping

adding site ...
collecting information ...
updating local ini files ...
done.

Start the secondary database.
hdbadm@hanadb02:/usr/sap/HDB/HDB00> HDB start

Check the system replication status.
hdbadm@hanadb02:/usr/sap/HDB/home> HDBSettings.sh systemReplicationStatus.py --sapcontrol=1
SAPCONTROL-OK: <begin>
site/2/REPLICATION_MODE=SYNC
site/2/SITE_NAME=site2
site/2/SOURCE_SITE_ID=1
site/2/PRIMARY_MASTERS=hanadb01
local_site_id=2
SAPCONTROL-OK: <end>
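
The exit code of systemReplicationStatus.py can also be used for scripting and monitoring. A hedged example, run on the primary (the return-code meanings are taken from the usual SAP/SUSE documentation and should be verified for your revision):
hdbadm@hanadb01:/usr/sap/HDB/home> HDBSettings.sh systemReplicationStatus.py; echo RC=$?
An exit code of 15 means replication is active and in sync; 14 syncing, 13 initializing, 12 unknown, 11 error, 10 no system replication.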

Check the replication state on the primary node.
hdbadm@hanadb01:/usr/sap/HDB/HDB00> hdbnsutil -sr_state

System Replication State
~~~~~~~~~~~~~~~~~~~~~~~~

online: true

mode: primary
operation mode: primary
site id: 1
site name: site1

is source system: true
is secondary/consumer system: false
has secondaries/consumers attached: true
is a takeover active: false
is primary suspended: false

Host Mappings:
~~~~~~~~~~~~~~

hanadb01 -> [site2] hanadb02
hanadb01 -> [site1] hanadb01


Site Mappings:
~~~~~~~~~~~~~~
site1 (primary/primary)
    |---site2 (sync/delta_datashipping)

Tier of site1: 1
Tier of site2: 2

Replication mode of site1: primary
Replication mode of site2: sync

Operation mode of site1: primary
Operation mode of site2: delta_datashipping

Mapping: site1 -> site2

Hint based routing site: 
done.

Check the replication state on the secondary node.
hdbadm@hanadb02:/usr/sap/HDB/home> hdbnsutil -sr_state

System Replication State
~~~~~~~~~~~~~~~~~~~~~~~~

online: true

mode: sync
operation mode: delta_datashipping
site id: 2
site name: site2

is source system: false
is secondary/consumer system: true
has secondaries/consumers attached: false
is a takeover active: false
is primary suspended: false
is timetravel enabled: false
replay mode: auto
active primary site: 1

primary masters: hanadb01

Host Mappings:
~~~~~~~~~~~~~~

hanadb02 -> [site2] hanadb02
hanadb02 -> [site1] hanadb01


Site Mappings:
~~~~~~~~~~~~~~
site1 (primary/primary)
    |---site2 (sync/delta_datashipping)

Tier of site1: 1
Tier of site2: 2

Replication mode of site1: primary
Replication mode of site2: sync

Operation mode of site1: primary
Operation mode of site2: delta_datashipping

Mapping: site1 -> site2

Hint based routing site: 
done.

Takeover test
Stop the primary database:
hdbadm@hanadb01:/usr/sap/HDB/HDB00> HDB stop

Perform a takeover on the secondary node to make it the new primary:
hdbadm@hanadb02:/usr/sap/HDB/home> hdbnsutil -sr_takeover
hdbadm@hanadb02:/usr/sap/HDB/home> hdbnsutil -sr_state

System Replication State
~~~~~~~~~~~~~~~~~~~~~~~~

online: true

mode: primary
operation mode: primary
site id: 2
site name: site2

is source system: true
is secondary/consumer system: false
has secondaries/consumers attached: false
is a takeover active: false
is primary suspended: false

Host Mappings:
~~~~~~~~~~~~~~

hanadb02 -> [site2] hanadb02


Site Mappings:
~~~~~~~~~~~~~~
site2 (primary/primary)

Tier of site2: 1

Replication mode of site2: primary

Operation mode of site2: primary


Hint based routing site: 
done.

Register the original primary node as the new secondary database.
hdbadm@hanadb01:/usr/sap/HDB/HDB00> hdbnsutil -sr_register --name=site1 --remoteHost=hanadb02 --remoteInstance=00 --replicationMode=sync --operationMode=delta_datashipping

Start the database on the original primary node.
hdbadm@hanadb01:/usr/sap/HDB/HDB00> HDB start

Check the node roles (in the roles attribute, P marks the primary and S the secondary):
hdbnsutil -sr_state
# SAPHanaSR-showAttr --format=script | SAPHanaSR-filter --search='roles'

Fri Jun 30 16:03:02 2023; Hosts/hanadb01/roles=4:S:master1:master:worker:master
Fri Jun 30 16:03:02 2023; Hosts/hanadb02/roles=4:P:master1:master:worker:master

Repeat the same steps to make the database on the original primary node the primary again and re-establish the original replication relationship.


4. Installing the SAP Host Agent

# SAPCAR -xvf SAPHOSTAGENT60_60-80004822.SAR
# ./saphostexec -install
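
The installation can be verified afterwards; a hedged check, assuming the agent was installed into the default /usr/sap/hostctrl location:
# /usr/sap/hostctrl/exe/saphostexec -version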


Reference: Installing SAP Host Agent Manually


5. Configuring the HANA HA/DR Provider

This step is mandatory: it lets the cluster be informed immediately when the secondary falls out of sync with the primary. SAP HANA calls this hook through the HA/DR provider interface at the point where the secondary gets out of sync, typically when the first pending commit has to be released. SAP HANA calls the hook again when system replication returns to sync.

1) Edit the global.ini file (/hana/shared/HDB/global/hdb/custom/config/global.ini) and add the following lines:

[ha_dr_provider_SAPHanaSR]
provider = SAPHanaSR
path = /usr/share/SAPHanaSR
execution_order = 1

[trace]
ha_dr_saphanasr = info

2) Edit the /etc/sudoers file to allow the <sid>adm user to write the cluster attribute (<sid> is lowercase).
# SAPHanaSR-ScaleUp entries for writing srHook cluster attribute
<sid>adm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_<sid>_site_srHook_*
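
After the primary HANA instance has been restarted with the new global.ini, whether the hook is loaded and able to write the attribute can be checked. A hedged sketch following the usual SUSE verification steps (the trace path assumes instance HDB00 on hanadb01):
hdbadm@hanadb01:/usr/sap/HDB/HDB00> grep -i ha_dr_SAPHanaSR /usr/sap/HDB/HDB00/hanadb01/trace/nameserver_*.trc
# SAPHanaSR-showAttr
Once the cluster is running, the srHook column in the SAPHanaSR-showAttr output should show SOK while replication is in sync.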


6. Configuring the Cluster

6.1. Configuring the Cluster with the Graphical Interface

Note: when configuring through the graphical interface, a shared disk must be configured as the SBD fencing mechanism; otherwise, configure the cluster from the command line.
# yast2

6.2. Configuring the Cluster from the Command Line

1) Initialize the cluster on the primary node.

hanadb01:~ # ha-cluster-init

  Generating SSH key
  Configuring csync2
  Generating csync2 shared key (this may take a while)...done
  csync2 checking files...done
  
Configure Corosync:
  This will configure the cluster messaging layer.  You will need
  to specify a network address over which to communicate (default
  is em4's network, but you can use the network address of any
  active interface).

  Network address to bind to (e.g.: 192.168.1.0) [192.168.100.0]
  Multicast address (e.g.: 239.x.x.x) [239.205.185.119]
  Multicast port [5405]
  
Configure SBD:
  If you have shared storage, for example a SAN or iSCSI target,
  you can use it avoid split-brain scenarios by configuring SBD.
  This requires a 1 MB partition, accessible to all nodes in the
  cluster.  The device path must be persistent and consistent
  across all nodes in the cluster, so /dev/disk/by-id/* devices
  are a good choice.  Note that all data on the partition you
  specify here will be destroyed.

Do you wish to use SBD (y/n)? n
WARNING: Not configuring SBD - STONITH will be disabled.
  Hawk cluster interface is now running. To see cluster status, open:
    https://192.168.100.207:7630/
  Log in with username 'hacluster', password 'linux'
WARNING: You should change the hacluster password to something more secure!
  Waiting for cluster........done
  Loading initial cluster configuration
  
Configure Administration IP Address:
  Optionally configure an administration virtual IP
  address. The purpose of this IP address is to
  provide a single IP that can be used to interact
  with the cluster, rather than using the IP address
  of any specific cluster node.

Do you wish to configure a virtual IP address (y/n)? n
  Done (log saved to /var/log/ha-cluster-bootstrap.log)

2) Join the secondary node to the cluster.
hanadb02:~ # ha-cluster-join -c hanadb01 -i eth3

  Retrieving SSH keys - This may prompt for root@hanadb01:
Password: 
  One new SSH key installed
  Configuring csync2...done
  Merging known_hosts
  Probing for new partitions...done
  Hawk cluster interface is now running. To see cluster status, open:
    https://192.168.100.205:7630/
  Log in with username 'hacluster', password 'linux'
WARNING: You should change the hacluster password to something more secure!
  Waiting for cluster....done
  Reloading cluster configuration...done
  Done (log saved to /var/log/ha-cluster-bootstrap.log)

3) Check the status of the HA services and add a redundant communication ring to the cluster.
systemctl status pacemaker
yast2 cluster

Note: on SUSE 12 SP5, if one of the ring links is down when pacemaker starts, pacemaker fails to start and the following errors appear in the messages log:

2023-07-04T10:52:46.084460+08:00 hanadb02 corosync[42440]:   [TOTEM ] One of your ip addresses are now bound to localhost. Corosync would not work correctly.
2023-07-04T10:34:22.167023+08:00 hanadb02 corosync[47138]: Starting Corosync Cluster Engine (corosync): [FAILED]
2023-07-04T10:34:22.167434+08:00 hanadb02 systemd[1]: corosync.service: Control process exited, code=exited status=1
2023-07-04T10:34:22.168118+08:00 hanadb02 systemd[1]: Failed to start Corosync Cluster Engine.
2023-07-04T10:34:22.168403+08:00 hanadb02 systemd[1]: Dependency failed for Pacemaker High Availability Cluster Manager.
2023-07-04T10:34:22.168691+08:00 hanadb02 systemd[1]: pacemaker.service: Job pacemaker.service/start failed with result 'dependency'.
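
For reference, a corosync configuration with two rings looks roughly like the following. This is a hedged sketch of the totem section of /etc/corosync/corosync.conf for corosync 2.x; the second ring's network and multicast addresses are assumptions and must match your environment:
totem {
        version: 2
        rrp_mode: passive
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.100.0
                mcastaddr: 239.205.185.119
                mcastport: 5405
        }
        interface {
                ringnumber: 1
                bindnetaddr: 192.168.1.0
                mcastaddr: 239.205.185.120
                mcastport: 5407
        }
}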

4) Define the cluster bootstrap options and the defaults for resources and operations.
# vi crm-bs.txt
property $id="cib-bootstrap-options" \
stonith-enabled="true" \
stonith-action="reboot" \
stonith-timeout="150s"
rsc_defaults $id="rsc-options" \
resource-stickiness="1000" \
migration-threshold="5000"
op_defaults $id="op-options" \
timeout="600"
# crm configure load update crm-bs.txt

5) Define IPMI as the fencing mechanism.
# vi ipmi.txt
primitive rsc_hanadb01_stonith_ipmi stonith:external/ipmi \
params hostname=hanadb01 ipaddr=192.168.100.206 userid=root passwd=calvin interface=lanplus \
op monitor interval=1800 timeout=30

primitive rsc_hanadb02_stonith_ipmi stonith:external/ipmi \
params hostname=hanadb02 ipaddr=192.168.100.204 userid=root passwd=calvin interface=open \
op monitor interval=1800 timeout=30

# crm configure load update ipmi.txt
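
Before relying on it, the fencing configuration should be tested from the node that is expected to survive. A hedged example (this really reboots hanadb02, so run it only during a test window):
hanadb01:~ # crm node fence hanadb02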


6) Define the HANA topology resource.

# vi crm-saphanatop.txt

primitive rsc_SAPHanaTopology_HDB_HDB00 ocf:suse:SAPHanaTopology \
op monitor interval="10" timeout="600" \
op start interval="0" timeout="600" \
op stop interval="0" timeout="300" \
params SID="HDB" InstanceNumber="00"
clone cln_SAPHanaTopology_HDB_HDB00 rsc_SAPHanaTopology_HDB_HDB00 \
meta clone-node-max="1" interleave="true"

# crm configure load update crm-saphanatop.txt

7) Define the HANA database resource.
# vi crm-saphana.txt

primitive rsc_SAPHana_HDB_HDB00 ocf:suse:SAPHana \
op start interval="0" timeout="3600" \
op stop interval="0" timeout="3600" \
op promote interval="0" timeout="3600" \
op monitor interval="60" role="Master" timeout="700" \
op monitor interval="61" role="Slave" timeout="700" \
params SID="HDB" InstanceNumber="00" PREFER_SITE_TAKEOVER="true" \
DUPLICATE_PRIMARY_TIMEOUT="7200" AUTOMATED_REGISTER="false"
ms msl_SAPHana_HDB_HDB00 rsc_SAPHana_HDB_HDB00 \
meta clone-max="2" clone-node-max="1" interleave="true"

# crm configure load update crm-saphana.txt

8) Define the virtual IP resource.
# vi crm-vip.txt

primitive rsc_ip_HDB_HDB00 ocf:heartbeat:IPaddr2 \
op monitor interval="10s" timeout="20s" \
params ip="192.168.100.203"

# crm configure load update crm-vip.txt

9) Define the colocation of the virtual IP with the primary database and the start order between the HANA topology and database resources.
# vi crm-cs.txt

colocation col_saphana_ip_HDB_HDB00 2000: rsc_ip_HDB_HDB00:Started \
msl_SAPHana_HDB_HDB00:Master
order ord_SAPHana_HDB_HDB00 Optional: cln_SAPHanaTopology_HDB_HDB00 \
msl_SAPHana_HDB_HDB00

# crm configure load update crm-cs.txt
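
With all pieces loaded, the complete configuration and the resulting resource state can be reviewed:
# crm configure show
# crm status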


7. Database Takeover Tests

7.1 Taking Over the HANA Database Through the Cluster

Perform the move operation on the primary node:
hanadb01:/hana/prop # crm resource move rsc_SAPHana_HDB_HDB00 force

INFO: Move constraint created for rsc_SAPHana_HDB_HDB00

hanadb01:/hana/prop # crm status

Stack: corosync
Current DC: hanadb01 (version 1.1.21+20190809.bf34b44fa-1.17-1.1.21+20190809.bf34b44fa) - partition with quorum
Last updated: Wed Jun 21 16:48:22 2023
Last change: Wed Jun 21 16:48:13 2023 by root via crm_resource on hanadb01

2 nodes configured
7 resources configured

Online: [ hanadb01 hanadb02 ]

Full list of resources:

 rsc_hanadb01_stonith_ipmi      (stonith:external/ipmi):        Started hanadb01
 rsc_hanadb02_stonith_ipmi      (stonith:external/ipmi):        Started hanadb01
 Clone Set: cln_SAPHanaTopology_HDB_HDB00 [rsc_SAPHanaTopology_HDB_HDB00]
     Started: [ hanadb01 hanadb02 ]
 Master/Slave Set: msl_SAPHana_HDB_HDB00 [rsc_SAPHana_HDB_HDB00]
     rsc_SAPHana_HDB_HDB00      (ocf::suse:SAPHana):    Stopping hanadb01
     Slaves: [ hanadb02 ]
 rsc_ip_HDB_HDB00       (ocf::heartbeat:IPaddr2):       Started hanadb02

hanadb01:/hana/prop # crm status

Stack: corosync
Current DC: hanadb01 (version 1.1.21+20190809.bf34b44fa-1.17-1.1.21+20190809.bf34b44fa) - partition with quorum
Last updated: Wed Jun 21 16:48:43 2023
Last change: Wed Jun 21 16:48:31 2023 by root via crm_attribute on hanadb02

2 nodes configured
7 resources configured

Online: [ hanadb01 hanadb02 ]

Full list of resources:

 rsc_hanadb01_stonith_ipmi      (stonith:external/ipmi):        Started hanadb01
 rsc_hanadb02_stonith_ipmi      (stonith:external/ipmi):        Started hanadb01
 Clone Set: cln_SAPHanaTopology_HDB_HDB00 [rsc_SAPHanaTopology_HDB_HDB00]
     Started: [ hanadb01 hanadb02 ]
 Master/Slave Set: msl_SAPHana_HDB_HDB00 [rsc_SAPHana_HDB_HDB00]
     rsc_SAPHana_HDB_HDB00      (ocf::suse:SAPHana):    Promoting hanadb02
     Stopped: [ hanadb01 ]
 rsc_ip_HDB_HDB00       (ocf::heartbeat:IPaddr2):       Started hanadb02

hanadb01:/hana/prop # crm status

Stack: corosync
Current DC: hanadb01 (version 1.1.21+20190809.bf34b44fa-1.17-1.1.21+20190809.bf34b44fa) - partition with quorum
Last updated: Wed Jun 21 16:50:19 2023
Last change: Wed Jun 21 16:49:20 2023 by root via crm_attribute on hanadb02

2 nodes configured
7 resources configured

Online: [ hanadb01 hanadb02 ]

Full list of resources:

 rsc_hanadb01_stonith_ipmi      (stonith:external/ipmi):        Started hanadb01
 rsc_hanadb02_stonith_ipmi      (stonith:external/ipmi):        Started hanadb01
 Clone Set: cln_SAPHanaTopology_HDB_HDB00 [rsc_SAPHanaTopology_HDB_HDB00]
     Started: [ hanadb01 hanadb02 ]
 Master/Slave Set: msl_SAPHana_HDB_HDB00 [rsc_SAPHana_HDB_HDB00]
     Masters: [ hanadb02 ]
     Stopped: [ hanadb01 ]
 rsc_ip_HDB_HDB00       (ocf::heartbeat:IPaddr2):       Started hanadb02

On the new secondary node, re-establish replication with the new primary database:
hdbnsutil -sr_register --name=site1 --remoteHost=hanadb02 --remoteInstance=00 --replicationMode=sync --operationMode=delta_datashipping

Clear the resource's move constraint; the cluster will then automatically start the database on the new secondary node:
crm resource clear msl_SAPHana_HDB_HDB00

INFO: Removed migration constraints for msl_SAPHana_HDB_HDB00

7.2. Taking Over the HANA Database with SAP Commands

Put the HANA database resource into maintenance mode:
crm resource maintenance msl_SAPHana_HDB_HDB00

Stop the HANA database on the primary node:
HDB stop

Take over the database on the secondary node:
hdbnsutil -sr_takeover

Re-establish the replication relationship on the former primary node:
hdbnsutil -sr_register --name=site1 --remoteHost=hanadb02 --remoteInstance=00 --replicationMode=sync --operationMode=delta_datashipping

Start the database on the former primary node:
HDB start

Let the cluster refresh the resource state:
crm resource refresh msl_SAPHana_HDB_HDB00

Take the HANA database resource out of maintenance mode:
crm resource maintenance msl_SAPHana_HDB_HDB00 off


8. Putting a Node Into and Out of Maintenance Mode

After a node enters maintenance mode, the cluster no longer starts or stops resources on that node automatically.

hanadb01:~ # crm node show

hanadb01(1084777679): member
        hana_ha1_vhost=hanadb01 hana_ha1_site=site1 hana_ha1_srmode=sync hana_ha1_remoteHost=hanadb02 lpa_ha1_lpt=10 hana_ha1_op_mode=delta_datashipping maintenance=off standby=off
hanadb02(1084777677): member
        hana_ha1_vhost=hanadb02 hana_ha1_site=site2 hana_ha1_srmode=sync hana_ha1_remoteHost=hanadb01 lpa_ha1_lpt=1688350881 hana_ha1_op_mode=delta_datashipping maintenance=off standby=off


hanadb01:~ # crm node maintenance hanadb01
hanadb01:~ # crm node show

hanadb01(1084777679): member
        hana_ha1_vhost=hanadb01 hana_ha1_site=site1 hana_ha1_srmode=sync hana_ha1_remoteHost=hanadb02 lpa_ha1_lpt=10 hana_ha1_op_mode=delta_datashipping maintenance=on standby=off
hanadb02(1084777677): member
        hana_ha1_vhost=hanadb02 hana_ha1_site=site2 hana_ha1_srmode=sync hana_ha1_remoteHost=hanadb01 lpa_ha1_lpt=1688350881 hana_ha1_op_mode=delta_datashipping maintenance=off standby=off


hanadb01:~ # crm node ready hanadb01
hanadb01:~ # crm node show

hanadb01(1084777679): member
        hana_ha1_vhost=hanadb01 hana_ha1_site=site1 hana_ha1_srmode=sync hana_ha1_remoteHost=hanadb02 lpa_ha1_lpt=10 hana_ha1_op_mode=delta_datashipping maintenance=off standby=off
hanadb02(1084777677): member
        hana_ha1_vhost=hanadb02 hana_ha1_site=site2 hana_ha1_srmode=sync hana_ha1_remoteHost=hanadb01 lpa_ha1_lpt=1688350881 hana_ha1_op_mode=delta_datashipping maintenance=off standby=off

9. Clearing a Resource's Failed State on the Secondary Node

# crm resource refresh rsc_SAPHana_HDB_HDB00 hanadb02

# crm resource cleanup rsc_SAPHana_HDB_HDB00 hanadb02

Reference: "SAP HANA System Replication Scale-Up Performance Optimized Scenario"
