Oracle 19c RAC + ADG Setup (Primary: RAC; Standby: Filesystem)


Environment

                  Primary (RAC)        Standby (FS)
Database version  Oracle 19.3.0.0      Oracle 19.3.0.0
IP addresses      192.168.40.30-34     192.168.40.40
Hostnames         hfdb30, hfdb31       hfdb40
DB_UNIQUE_NAME    hfdb                 dghfdb
DB_NAME           hfdb                 hfdb
DB instances      hfdb1, hfdb2         dghfdb
DB storage        RAC (ASM)            Filesystem
OS                RHEL7                RHEL7

Oracle RAC Data Guard: Primary Database Configuration

Put the primary in force logging mode

# Run on one RAC node only
SYS@hfdb1> alter database force logging;
	Database altered.
SYS@hfdb1> select force_logging from v$database;
	YES

Enable archive logging

Shut down the database

[oracle@hfdb30:/home/oracle]$srvctl stop database -d hfdb

#Alternatively, run shutdown immediate in each instance;
#both RAC nodes must be shut down.
##If the second instance is left running, startup mount on the first fails with ORA-01102: cannot mount database in EXCLUSIVE mode.
SYS@hfdb1> startup mount
alter system set db_recovery_file_dest_size=2G scope=both; 
alter system set db_recovery_file_dest='+DGRECOVERY1' scope=both;  

alter database archivelog;   -- run on one node only

#Shut the instance down, then restart the database with srvctl
SYS@hfdb1> shutdown immediate
[oracle@hfdb30:/home/oracle]$srvctl start database -d hfdb

Error:

[oracle@hfdb30:/home/oracle]$srvctl start database -d hfdb
PRCC-1014 : hfdb was already running
PRCR-1004 : Resource ora.hfdb.db is already running
PRCR-1079 : Failed to start resource ora.hfdb.db
CRS-5017: The resource action "ora.hfdb.db start" encountered the following error:
ORA-27125: unable to create shared memory segment
Linux-x86_64 Error: 28: No space left on device
Additional information: 4549
Additional information: 2499805184
. For details refer to "(:CLSN00107:)" in "/oracle/app/grid/diag/crs/hfdb31/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.hfdb.db' on 'hfdb31' failed
CRS-2528: Unable to place an instance of 'ora.hfdb.db' as all possible servers are occupied by the resource

Analysis:

The error shows the startup failed because shared memory was insufficient, so we increase the shared memory kernel parameters.

Solution:

[root@hfdb31 ~]# vi /etc/sysctl.conf

kernel.shmmax = 6871947673  # increase: physical memory x 0.8 (here 8 GB x 0.8 x 1024^3)
kernel.shmall = 1677721     # increase: shmmax / page size (6871947673 / 4096)
kernel.shmmni = 4096

sysctl -p   # reload the settings

Apply the same change on node 2 as well.
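As a sanity check, the two sysctl values above can be recomputed from the sizing rule (shmmax is roughly 80% of RAM in bytes, shmall is the same budget in pages). The 8 GiB of physical memory and the 4096-byte page size are assumptions inferred from the numbers, not stated in the original:

```shell
# Recompute the kernel.shmmax / kernel.shmall values used above.
# Assumptions: 8 GiB of physical memory, 4096-byte pages (getconf PAGE_SIZE).
PHYS_BYTES=$((8 * 1024 * 1024 * 1024))   # 8 GiB in bytes
PAGE_SIZE=4096

SHMMAX=$((PHYS_BYTES * 8 / 10))          # 80% of physical memory, in bytes
SHMALL=$((SHMMAX / PAGE_SIZE))           # same budget expressed in pages

echo "kernel.shmmax = $SHMMAX"           # 6871947673
echo "kernel.shmall = $SHMALL"           # 1677721
```

If the numbers a host needs differ, recompute from that host's real memory size rather than copying these values.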

Check that archiving is enabled:

SYS@hfdb1> select status from v$instance;

STATUS
------------------------------------
OPEN

SYS@hfdb1> set linesize 200
SYS@hfdb1> col NAME format a50
SYS@hfdb1>  show parameter recovery

NAME                                 TYPE                              VALUE
------------------------------------ --------------------------------- ------------------------------
db_recovery_file_dest                string                            +DGRECOVERY1
db_recovery_file_dest_size           big integer                       2G
recovery_parallelism                 integer                           0
remote_recovery_file_dest            string

SYS@hfdb1> show parameter cluster

NAME                                 TYPE                              VALUE
------------------------------------ --------------------------------- ------------------------------
cdb_cluster                          boolean                           FALSE
cdb_cluster_name                     string
cluster_database                     boolean                           TRUE
cluster_database_instances           integer                           2
cluster_interconnects                string

SYS@hfdb1> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     19
Next log sequence to archive   20
Current log sequence           20

Create standby redo log files

SYS@hfdb1> select a.thread#,a.group#,a.bytes/1024/1024,b.member from v$log a,v$logfile b where a.group#=b.group#;

   THREAD#     GROUP# A.BYTES/1024/1024 MEMBER
---------- ---------- ----------------- ---------------------------------------------------------------------------
         1          2               200 +DGSYSTEM1/HFDB/ONLINELOG/group_2.264.1184265675
         1          1               200 +DGSYSTEM1/HFDB/ONLINELOG/group_1.263.1184265675
         2          3               200 +DGSYSTEM1/HFDB/ONLINELOG/group_3.267.1184266753
         2          4               200 +DGSYSTEM1/HFDB/ONLINELOG/group_4.268.1184266763


#Recommended: one more standby log group per thread than there are online redo log groups.
	
alter system set standby_file_management=manual scope=both sid='*'; 
alter database add standby logfile thread 1 group 10 '+DGSYSTEM1' size 200m; 
alter database add standby logfile thread 1 group 11 '+DGSYSTEM1' size 200m;
alter database add standby logfile thread 1 group 12 '+DGSYSTEM1' size 200m;
alter database add standby logfile thread 2 group 13 '+DGSYSTEM1' size 200m;
alter database add standby logfile thread 2 group 14 '+DGSYSTEM1' size 200m;
alter database add standby logfile thread 2 group 15 '+DGSYSTEM1' size 200m;
	
alter system set standby_file_management=auto scope=both sid='*';
#检查重做日志文件:
SYS@hfdb1> select  GROUP# ,MEMBER from v$logfile;

    GROUP# MEMBER
---------- --------------------------------------------------------------------------------
         2 +DGSYSTEM1/HFDB/ONLINELOG/group_2.264.1184265675
         1 +DGSYSTEM1/HFDB/ONLINELOG/group_1.263.1184265675
         3 +DGSYSTEM1/HFDB/ONLINELOG/group_3.267.1184266753
         4 +DGSYSTEM1/HFDB/ONLINELOG/group_4.268.1184266763
        10 +DGSYSTEM1/HFDB/ONLINELOG/group_10.270.1185369259
        11 +DGSYSTEM1/HFDB/ONLINELOG/group_11.271.1185369267
        12 +DGSYSTEM1/HFDB/ONLINELOG/group_12.272.1185369275
        13 +DGSYSTEM1/HFDB/ONLINELOG/group_13.273.1185369283
        14 +DGSYSTEM1/HFDB/ONLINELOG/group_14.274.1185369293
        15 +DGSYSTEM1/HFDB/ONLINELOG/group_15.275.1185369301

10 rows selected.

SYS@hfdb1> select group#,THREAD#,status from v$log;

    GROUP#    THREAD# STATUS
---------- ---------- ------------------------------------------------
         1          1 INACTIVE
         2          1 CURRENT
         3          2 CURRENT
         4          2 INACTIVE
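The group counts above follow from the sizing rule directly; a sketch of the arithmetic, using this environment's 2 threads with 2 online groups each:

```shell
# Standby redo log sizing: per thread, one more group than the number
# of online redo log groups for that thread.
THREADS=2
ONLINE_GROUPS_PER_THREAD=2

STANDBY_PER_THREAD=$((ONLINE_GROUPS_PER_THREAD + 1))
TOTAL_STANDBY=$((STANDBY_PER_THREAD * THREADS))

echo "standby groups per thread: $STANDBY_PER_THREAD"   # 3
echo "total standby groups:      $TOTAL_STANDBY"        # 6 (groups 10-15 above)
```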

Primary database parameters

-- With sid='*', these alter system set commands take effect on both RAC instances at once.

alter system set db_unique_name='hfdb' scope=spfile sid='*'; 
alter system set LOG_ARCHIVE_CONFIG='DG_CONFIG=(hfdb,dghfdb)' scope=both sid='*'; 
alter system set LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=hfdb' scope=both sid='*'; 
alter system set LOG_ARCHIVE_DEST_2='SERVICE=dghfdb LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dghfdb' scope=both sid='*'; 
alter system set FAL_SERVER='dghfdb' scope=both sid='*'; 
alter system set fal_client='hfdb' scope=both sid='*'; 
alter system set DB_FILE_NAME_CONVERT='/oradata/dghfdb','+DGSYSTEM1/hfdb','/oradata/dghfdb','+DGDATA1/hfdb' scope=spfile sid='*'; 
alter system set LOG_FILE_NAME_CONVERT='/oradata/dghfdb','+DGSYSTEM1/hfdb' scope=spfile sid='*'; 
alter system set standby_file_management=AUTO scope=both sid='*'; 

#Restart the database for the spfile changes to take effect -- both RAC instances must be restarted

Verify the parameters:

SYS@hfdb1> show parameter conver

NAME                                 TYPE                              VALUE
------------------------------------ --------------------------------- ------------------------------
db_file_name_convert                 string                            /oradata/dghfdb, +DGSYSTEM1/hfd
                                                                       b, /oradata/dghfdb, +DGDATA1/
                                                                       hfdb
log_file_name_convert                string                            /oradata/dghfdb, +DGSYSTEM1/hfd
                                                                       b
pdb_file_name_convert                string

SYS@hfdb1> show parameter archive;

NAME                                 TYPE                              VALUE
------------------------------------ --------------------------------- ------------------------------
archive_lag_target                   integer                           0
log_archive_config                   string                            DG_CONFIG=(hfdb,dghfdb)
log_archive_dest                     string
log_archive_dest_1                   string                            LOCATION=USE_DB_RECOVERY_FILE_
                                                                       DEST VALID_FOR=(ALL_LOGFILES,A
                                                                       LL_ROLES) DB_UNIQUE_NAME=hfdb
log_archive_dest_10                  string
log_archive_dest_11                  string
log_archive_dest_12                  string
log_archive_dest_13                  string
log_archive_dest_14                  string

NAME                                 TYPE                              VALUE
------------------------------------ --------------------------------- ------------------------------
log_archive_dest_15                  string
log_archive_dest_16                  string
log_archive_dest_17                  string
log_archive_dest_18                  string
log_archive_dest_19                  string
log_archive_dest_2                   string                            SERVICE=dghfdb LGWR ASYNC VALI
                                                                       D_FOR=(ONLINE_LOGFILES,PRIMARY
                                                                       _ROLE) DB_UNIQUE_NAME=dghfdb
log_archive_dest_20                  string
log_archive_dest_21                  string
log_archive_dest_22                  string

NAME                                 TYPE                              VALUE
------------------------------------ --------------------------------- ------------------------------
log_archive_dest_23                  string
log_archive_dest_24                  string
log_archive_dest_25                  string
log_archive_dest_26                  string
log_archive_dest_27                  string
log_archive_dest_28                  string
log_archive_dest_29                  string
log_archive_dest_3                   string
log_archive_dest_30                  string
log_archive_dest_31                  string
log_archive_dest_4                   string

NAME                                 TYPE                              VALUE
------------------------------------ --------------------------------- ------------------------------
log_archive_dest_5                   string
log_archive_dest_6                   string
log_archive_dest_7                   string
log_archive_dest_8                   string
log_archive_dest_9                   string
log_archive_dest_state_1             string                            enable
log_archive_dest_state_10            string                            enable
log_archive_dest_state_11            string                            enable
log_archive_dest_state_12            string                            enable
log_archive_dest_state_13            string                            enable
log_archive_dest_state_14            string                            enable

NAME                                 TYPE                              VALUE
------------------------------------ --------------------------------- ------------------------------
log_archive_dest_state_15            string                            enable
log_archive_dest_state_16            string                            enable
log_archive_dest_state_17            string                            enable
log_archive_dest_state_18            string                            enable
log_archive_dest_state_19            string                            enable
log_archive_dest_state_2             string                            enable
log_archive_dest_state_20            string                            enable
log_archive_dest_state_21            string                            enable
log_archive_dest_state_22            string                            enable
log_archive_dest_state_23            string                            enable
log_archive_dest_state_24            string                            enable

NAME                                 TYPE                              VALUE
------------------------------------ --------------------------------- ------------------------------
log_archive_dest_state_25            string                            enable
log_archive_dest_state_26            string                            enable
log_archive_dest_state_27            string                            enable
log_archive_dest_state_28            string                            enable
log_archive_dest_state_29            string                            enable
log_archive_dest_state_3             string                            enable
log_archive_dest_state_30            string                            enable
log_archive_dest_state_31            string                            enable
log_archive_dest_state_4             string                            enable
log_archive_dest_state_5             string                            enable
log_archive_dest_state_6             string                            enable

NAME                                 TYPE                              VALUE
------------------------------------ --------------------------------- ------------------------------
log_archive_dest_state_7             string                            enable
log_archive_dest_state_8             string                            enable
log_archive_dest_state_9             string                            enable
log_archive_duplex_dest              string
log_archive_format                   string                            %t_%s_%r.dbf
log_archive_max_processes            integer                           4
log_archive_min_succeed_dest         integer                           1
log_archive_start                    boolean                           FALSE
log_archive_trace                    integer                           0
SYS@hfdb1>

Create a pfile for the standby

SYS@hfdb1> create pfile ='/home/oracle/standby.pfile' from spfile;
File created.

Configure the listener and TNS on the primary

#The TNS file needs alias entries -- configure on both RAC hosts
##/oracle/app/oracle/product/19c/db_1/network/admin
hfdb =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = hfdbscan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = hfdb)
    )
  )

dghfdb =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.40.40)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dghfdb)
    )
  )

Data Guard standby configuration (RAC primary)

Copy the pfile to the standby:

[oracle@hfdb30:/home/oracle]$scp standby.pfile 192.168.40.40:/home/oracle

1) Create the required directories

[oracle@hfdb40:/home/oracle]$mkdir -p /oracle/app/oracle/admin/dghfdb/adump
[oracle@hfdb40:/home/oracle]$mkdir -p /oradata/dghfdb/
[oracle@hfdb40:/home/oracle]$df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2        72G   13G   60G  17% /
devtmpfs        3.9G     0  3.9G   0% /dev
tmpfs           5.0G     0  5.0G   0% /dev/shm
tmpfs           3.9G  9.4M  3.9G   1% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sdb1        50G  7.2G   43G  15% /oracle
tmpfs           799M   12K  799M   1% /run/user/42
tmpfs           799M     0  799M   0% /run/user/0
[oracle@hfdb40:/home/oracle]$mkdir -p /oradata/dghfdb/datafile
[oracle@hfdb40:/home/oracle]$mkdir -p  /oradata/dghfdb/controlfile
[oracle@hfdb40:/home/oracle]$mkdir -p  /oradata/dghfdb/onlinelog   
[oracle@hfdb40:/home/oracle]$mkdir -p   /oradata/dghfdb/tempfile     

Prepare the standby parameter file

[oracle@hfdb40:/home/oracle]$vi standby.pfile
*.audit_trail='NONE'
*.compatible='19.0.0'
*.db_block_size=8192
*.db_domain=''
*.db_name='hfdb'
*.diagnostic_dest='/oracle/app/oracle'
*.open_cursors=2000
*.pga_aggregate_target=1073741824
*.processes=2000
*.remote_login_passwordfile='exclusive'
*.sessions=2205
*.sga_target=3355443200
*.audit_file_dest='/oracle/app/oracle/admin/dghfdb/adump'
*.control_files='/oradata/dghfdb/control01.ctl','/oradata/dghfdb/control02.ctl'
*.db_unique_name='dghfdb'
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(dghfdb,hfdb)'
*.LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=dghfdb'
*.LOG_ARCHIVE_DEST_2='SERVICE=hfdb LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=hfdb'
*.log_archive_format='%t_%s_%r.arc'
*.fal_client='dghfdb'
*.FAL_SERVER='hfdb'
*.DB_FILE_NAME_CONVERT='+DGDATA1/hfdb','/oradata/dghfdb','+DGSYSTEM1/hfdb','/oradata/dghfdb'
*.LOG_FILE_NAME_CONVERT='+DGSYSTEM1/hfdb','/oradata/dghfdb'
*.standby_file_management=AUTO
*.db_recovery_file_dest_size=2g
*.db_recovery_file_dest='/archive'
*.undo_tablespace='UNDOTBS1'
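One easy mistake with the *_NAME_CONVERT parameters is an odd number of entries: the values are interpreted as from/to pairs. A quick sketch of that check (plain shell, with the value hard-coded from the pfile above; this is an illustration, not an Oracle tool):

```shell
# *_NAME_CONVERT parameters are lists of from/to pairs, so the number
# of comma-separated entries must be even. Value copied from the
# standby pfile above.
DB_FILE_NAME_CONVERT='+DGDATA1/hfdb,/oradata/dghfdb,+DGSYSTEM1/hfdb,/oradata/dghfdb'

# Count the comma-separated entries.
ENTRIES=$(printf '%s\n' "$DB_FILE_NAME_CONVERT" | tr ',' '\n' | wc -l)

if [ $((ENTRIES % 2)) -eq 0 ]; then
  echo "OK: $ENTRIES entries = $((ENTRIES / 2)) from/to pairs"
else
  echo "ERROR: odd number of entries ($ENTRIES)"
fi
```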


[oracle@hfdb40:/oracle/app/oracle/product/19c/db_1/dbs]$cp /home/oracle/standby.pfile .
[oracle@hfdb40:/oracle/app/oracle/product/19c/db_1/dbs]$mv standby.pfile initdghfdb.ora
[oracle@hfdb40:/oracle/app/oracle/product/19c/db_1/dbs]$ll
total 12
-rw-rw----. 1 oracle oinstall 1544 Nov 18 14:02 hc_hfdb.dat
-rw-r--r--. 1 oracle oinstall 1078 Nov 18 14:02 initdghfdb.ora
-rw-r--r--. 1 oracle oinstall 3079 May 14  2015 init.ora

Prepare the database password file

#The password file can be copied from the primary or created fresh; 19c enforces stricter password complexity on newly created files, so here we copy the primary's file.
ASMCMD> ls
pwdhfdb.256.1184254885
pwdhfdb.257.1184265483
ASMCMD> pwcopy pwdhfdb.256.1184254885 /home/oracle/orapwdhfdb
ASMCMD-9463: operation failed due to lack of write permissions
ASMCMD> pwcopy pwdhfdb.256.1184254885 /home/grid/orapwdhfdb
copying +dgsystem1/hfdb/password/pwdhfdb.256.1184254885 -> /home/grid/orapwdhfdb

[root@hfdb30 grid]# scp orapwdhfdb 192.168.40.40:/oracle/app/oracle/product/19c/db_1/dbs
The authenticity of host '192.168.40.40 (192.168.40.40)' can't be established.
ECDSA key fingerprint is SHA256:4aQmRUrp6cpVXTfHLhO+D7WfPLevTnVZo7UidRmm6TE.
ECDSA key fingerprint is MD5:7e:ec:25:e6:20:92:85:89:22:2a:f3:59:dd:ef:8f:7e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.40.40' (ECDSA) to the list of known hosts.
root@192.168.40.40's password:
orapwdhfdb                                                                                                                 100% 2048     1.1MB/s   00:00
[root@hfdb30 grid]#

[root@hfdb40 dbs]# ll
total 16
-rw-rw----. 1 oracle oinstall 1544 Nov 18 14:02 hc_hfdb.dat
-rw-r--r--. 1 oracle oinstall 1078 Nov 18 14:02 initdghfdb.ora
-rw-r--r--. 1 oracle oinstall 3079 May 14  2015 init.ora
-rw-r-----. 1 root   root     2048 Nov 18 14:15 orapwdhfdb
[root@hfdb40 dbs]# chown oracle:oinstall orapwdhfdb
[root@hfdb40 dbs]# ll
total 16
-rw-rw----. 1 oracle oinstall 1544 Nov 18 14:02 hc_hfdb.dat
-rw-r--r--. 1 oracle oinstall 1078 Nov 18 14:02 initdghfdb.ora
-rw-r--r--. 1 oracle oinstall 3079 May 14  2015 init.ora
-rw-r-----. 1 oracle oinstall 2048 Nov 18 14:15 orapwdhfdb
[root@hfdb40 dbs]# mv orapwdhfdb orapwdghfdb
[root@hfdb40 dbs]# ll
total 16
-rw-rw----. 1 oracle oinstall 1544 Nov 18 14:02 hc_hfdb.dat
-rw-r--r--. 1 oracle oinstall 1078 Nov 18 14:02 initdghfdb.ora
-rw-r--r--. 1 oracle oinstall 3079 May 14  2015 init.ora
-rw-r-----. 1 oracle oinstall 2048 Nov 18 14:15 orapwdghfdb

Configure the listener and TNS on the standby

/oracle/app/oracle/product/19c/db_1/network/admin
Configure a static listener:
listener.ora

# listener.ora Network Configuration File: /oracle/app/oracle/product/19c/db_1/network/admin/listener.ora
# Generated by Oracle configuration tools.
SID_LIST_LISTENER=
  (SID_LIST=
       (SID_DESC=
          (GLOBAL_DBNAME=dghfdb)
          (SID_NAME=dghfdb)
          (ORACLE_HOME=/oracle/app/oracle/product/19c/db_1)
       )
  )
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = hfdb40)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

Configure the standby TNS entries:

[oracle@hfdb40:/oracle/app/oracle/product/19c/db_1/network/admin]$cat tnsnames.ora
# tnsnames.ora Network Configuration File: /oracle/app/oracle/product/19c/db_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

LISTENER_HFDB =
  (ADDRESS = (PROTOCOL = TCP)(HOST = hfdb40)(PORT = 1521))


dghfdb =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.40.40)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dghfdb)
    )
  )
hfdb =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.40.32)(PORT = 1521))
    (CONNECT_DATA =
     (SERVER = DEDICATED)
      (SERVICE_NAME = hfdb)
    )
  )

Before continuing, it is best to run tnsping on both the primary and the standby to confirm that the dghfdb and hfdb aliases both resolve and connect.

Set the standby environment variables (ORACLE_SID, ORACLE_UNQNAME):

[oracle@hfdb40:/home/oracle]$cat ~/.bash_profile

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/.local/bin:$HOME/bin

export PATH
PS1="[`whoami`@`hostname`:"'$PWD]$'
export PS1
export TMP=/tmp
export LANG=en_US.UTF8
export TMPDIR=$TMP
export ORACLE_UNQNAME=dghfdb
ORACLE_SID=dghfdb; export ORACLE_SID
ORACLE_BASE=/oracle/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/19c/db_1; export ORACLE_HOME
ORACLE_TERM=xterm; export ORACLE_TERM
NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"; export NLS_DATE_FORMAT
NLS_LANG=AMERICAN_AMERICA.UTF8;export NLS_LANG
PATH=.:$PATH:$HOME/.local/bin:$HOME/bin:$ORACLE_HOME/bin; export PATH
THREADS_FLAG=native; export THREADS_FLAG
umask 022
alias sqlplus="rlwrap sqlplus"
alias rman="rlwrap rman"

[oracle@hfdb40:/home/oracle]$
Verify (log out and back in as oracle so the environment variables take effect):

[oracle@hfdb40:/home/oracle]$env |grep ORACLE
ORACLE_UNQNAME=dghfdb
ORACLE_SID=dghfdb
ORACLE_BASE=/oracle/app/oracle
ORACLE_TERM=xterm
ORACLE_HOME=/oracle/app/oracle/product/19c/db_1
[oracle@hfdb40:/home/oracle]$

Back up the primary database

#Take a full backup of the primary -- not strictly needed for active duplication; kept as a fallback for recovery if the duplicate fails
SYS@hfdb1> alter system archive log current; -- force a log switch and archive
[oracle@hfdb30:/home/oracle]$mkdir hfdb   -- create a directory for the backup files
[oracle@hfdb30:/home/oracle]$rman target /
crosscheck archivelog all;    -- mark archived logs that no longer exist as expired
delete noprompt expired archivelog all;   -- delete the expired archived log records


[oracle@hfdb30:/home/oracle]$mkdir /backup/hfdb
[oracle@hfdb30:/home/oracle]$rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Mon Nov 18 14:28:35 2024
Version 19.7.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: HFDB (DBID=778580805)

RMAN> backup database format '/backup/hfdb/full%u_%s_%p';

Starting backup at 2024-11-18 14:36:47
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=18 instance=hfdb1 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DGSYSTEM1/HFDB/DATAFILE/system.258.1184265513
input datafile file number=00004 name=+DGSYSTEM1/HFDB/DATAFILE/undotbs1.260.1184265603
input datafile file number=00003 name=+DGSYSTEM1/HFDB/DATAFILE/sysaux.259.1184265569
input datafile file number=00005 name=+DGSYSTEM1/HFDB/DATAFILE/undotbs2.266.1184266663
input datafile file number=00007 name=+DGSYSTEM1/HFDB/DATAFILE/users.261.1184265607
channel ORA_DISK_1: starting piece 1 at 2024-11-18 14:36:49
channel ORA_DISK_1: finished piece 1 at 2024-11-18 14:39:04
piece handle=/backup/hfdb/full063aeo01_6_1 tag=TAG20241118T143648 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:02:15
Finished backup at 2024-11-18 14:39:04

Starting Control File and SPFILE Autobackup at 2024-11-18 14:39:04
piece handle=+DGRECOVERY1/HFDB/AUTOBACKUP/2024_11_18/s_1185374344.256.1185374347 comment=NONE
Finished Control File and SPFILE Autobackup at 2024-11-18 14:39:11


Copy the backup set to the same path on the standby
[oracle@hfdb40 backup]$ mkdir hfdb
[oracle@hfdb40 backup]$ scp -r 192.168.40.30:/backup/hfdb/ .  

Create the Data Guard physical standby

Start the standby in NOMOUNT:

[oracle@hfdb40:/backup]$orapwd file=/oracle/app/oracle/product/19c/db_1/dbs/orapwdghfdb password=oracle -- if the copied password file fails to authenticate, recreate it locally with orapwd

#Verify that the password file and TNS entries work; fix any problems now, otherwise the RMAN connection from the primary will hit the same errors later
[oracle@hfdb40:/oracle/app/oracle/product/19c/db_1/network/admin]$sqlplus  sys/oracle@dghfdb as sysdba

Error: TNS-01189: The listener could not authenticate the user  -- caused by a wrong IP address in /etc/hosts; fix it and restart the listener.

Error: ORA-12154: TNS:could not resolve the connect identifier specified  -- dghfdb was missing from tnsnames.ora; it should contain:
LISTENER_HFDB =
  (ADDRESS = (PROTOCOL = TCP)(HOST = hfdb40)(PORT = 1521))

dghfdb =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.40.40)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dghfdb)
    )
  )

#Start the standby instance in NOMOUNT
[oracle@hfdb40:/oracle/app/oracle/product/19c/db_1/network/admin]$sqlplus  sys/oracle@dghfdb as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Mon Nov 18 15:20:01 2024
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SYS@dghfdb>  startup nomount

#From the primary, run the RMAN duplicate to the standby
[oracle@hfdb30:/backup/hfdb]$rman target / auxiliary sys/oracle@dghfdb

Recovery Manager: Release 19.0.0.0.0 - Production on Mon Nov 18 15:20:34 2024
Version 19.7.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: HFDB (DBID=778580805)
PL/SQL package SYS.DBMS_BACKUP_RESTORE version 19.03.00.00 in AUXILIARY database is not current
PL/SQL package SYS.DBMS_RCVMAN version 19.03.00.00 in AUXILIARY database is not current
connected to auxiliary database: HFDB (not mounted)

RMAN>

RMAN> duplicate target database for standby from active database nofilenamecheck;   -- duplicate the primary to the standby
#This first attempt, using OS authentication for the target (rman target /), failed with ORA-17627 (see the error notes further down); reconnect with an explicit password:
[oracle@hfdb30:/home/oracle]$rman target sys/oracle auxiliary sys/oracle@dghfdb

Recovery Manager: Release 19.0.0.0.0 - Production on Mon Nov 18 17:12:27 2024
Version 19.7.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: HFDB (DBID=778580805)
PL/SQL package SYS.DBMS_BACKUP_RESTORE version 19.03.00.00 in AUXILIARY database is not current
PL/SQL package SYS.DBMS_RCVMAN version 19.03.00.00 in AUXILIARY database is not current
connected to auxiliary database: HFDB (not mounted)

RMAN> duplicate target database for standby from active database nofilenamecheck;

Starting Duplicate Db at 2024-11-18 17:13:48
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=2279 device type=DISK

contents of Memory Script:
{
   backup as copy reuse
   passwordfile auxiliary format  '/oracle/app/oracle/product/19c/db_1/dbs/orapwdghfdb'   ;
}
executing Memory Script

Starting backup at 2024-11-18 17:13:49
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=403 instance=hfdb1 device type=DISK
Finished backup at 2024-11-18 17:13:53

contents of Memory Script:
{
   backup as copy current controlfile for standby auxiliary format  '/oradata/dghfdb/control01.ctl';
   restore clone primary controlfile to  '/oradata/dghfdb/control02.ctl' from
 '/oradata/dghfdb/control01.ctl';
}
executing Memory Script

Starting backup at 2024-11-18 17:13:53
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
copying standby control file
output file name=/oradata/dghfdb/control01.ctl tag=TAG20241118T171353
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
Finished backup at 2024-11-18 17:14:00

Starting restore at 2024-11-18 17:14:00
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: copied control file copy
Finished restore at 2024-11-18 17:14:01

contents of Memory Script:
{
   sql clone 'alter database mount standby database';
}
executing Memory Script

sql statement: alter database mount standby database

contents of Memory Script:
{
   set newname for tempfile  1 to
 "/oradata/dghfdb/tempfile/temp.265.1184265697";
   switch clone tempfile all;
   set newname for datafile  1 to
 "/oradata/dghfdb/datafile/system.258.1184265513";
   set newname for datafile  3 to
 "/oradata/dghfdb/datafile/sysaux.259.1184265569";
   set newname for datafile  4 to
 "/oradata/dghfdb/datafile/undotbs1.260.1184265603";
   set newname for datafile  5 to
 "/oradata/dghfdb/datafile/undotbs2.266.1184266663";
   set newname for datafile  7 to
 "/oradata/dghfdb/datafile/users.261.1184265607";
   backup as copy reuse
   datafile  1 auxiliary format
 "/oradata/dghfdb/datafile/system.258.1184265513"   datafile
 3 auxiliary format
 "/oradata/dghfdb/datafile/sysaux.259.1184265569"   datafile
 4 auxiliary format
 "/oradata/dghfdb/datafile/undotbs1.260.1184265603"   datafile
 5 auxiliary format
 "/oradata/dghfdb/datafile/undotbs2.266.1184266663"   datafile
 7 auxiliary format
 "/oradata/dghfdb/datafile/users.261.1184265607"   ;
   sql 'alter system archive log current';
}
executing Memory Script

executing command: SET NEWNAME

renamed tempfile 1 to /oradata/dghfdb/tempfile/temp.265.1184265697 in control file

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting backup at 2024-11-18 17:14:10
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=+DGSYSTEM1/HFDB/DATAFILE/system.258.1184265513
output file name=/oradata/dghfdb/datafile/system.258.1184265513 tag=TAG20241118T171410
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:01:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=+DGSYSTEM1/HFDB/DATAFILE/undotbs1.260.1184265603
output file name=/oradata/dghfdb/datafile/undotbs1.260.1184265603 tag=TAG20241118T171410
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:01:05
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=+DGSYSTEM1/HFDB/DATAFILE/sysaux.259.1184265569
output file name=/oradata/dghfdb/datafile/sysaux.259.1184265569 tag=TAG20241118T171410
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:01:05
channel ORA_DISK_1: starting datafile copy
input datafile file number=00005 name=+DGSYSTEM1/HFDB/DATAFILE/undotbs2.266.1184266663
output file name=/oradata/dghfdb/datafile/undotbs2.266.1184266663 tag=TAG20241118T171410
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting datafile copy
input datafile file number=00007 name=+DGSYSTEM1/HFDB/DATAFILE/users.261.1184265607
output file name=/oradata/dghfdb/datafile/users.261.1184265607 tag=TAG20241118T171410
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
Finished backup at 2024-11-18 17:17:41

sql statement: alter system archive log current

contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script

datafile 1 switched to datafile copy
input datafile copy RECID=1 STAMP=1185383865 file name=/oradata/dghfdb/datafile/system.258.1184265513
datafile 3 switched to datafile copy
input datafile copy RECID=2 STAMP=1185383865 file name=/oradata/dghfdb/datafile/sysaux.259.1184265569
datafile 4 switched to datafile copy
input datafile copy RECID=3 STAMP=1185383865 file name=/oradata/dghfdb/datafile/undotbs1.260.1184265603
datafile 5 switched to datafile copy
input datafile copy RECID=4 STAMP=1185383865 file name=/oradata/dghfdb/datafile/undotbs2.266.1184266663
datafile 7 switched to datafile copy
input datafile copy RECID=5 STAMP=1185383865 file name=/oradata/dghfdb/datafile/users.261.1184265607
Finished Duplicate Db at 2024-11-18 17:20:11

RMAN>

____________________________ Errors encountered during duplication _________________________________

channel ORA_AUX_DISK_1: ORA-19870: error while restoring backup piece +DGRECOVERY1/HFDB/AUTOBACKUP/2024_11_19/s_1185446469.259.1185446471
ORA-19505: failed to identify file "+DGRECOVERY1/HFDB/AUTOBACKUP/2024_11_19/s_1185446469.259.1185446471"
ORA-17503: ksfdopn:2 Failed to open file +DGRECOVERY1/HFDB/AUTOBACKUP/2024_11_19/s_1185446469.259.1185446471
ORA-15001: diskgroup "DGRECOVERY1" does not exist or is not mounted
failover to previous backup

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 11/19/2024 10:51:55
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
RMAN-06026: some targets not found - aborting restore
RMAN-06024: no backup or copy of the control file found to restore

Fix: use: duplicate target database for standby from active database nofilenamecheck;  -- active duplication copies files directly over the network instead of restoring backup pieces from +DGRECOVERY1, which the filesystem standby cannot read.

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 11/19/2024 10:52:24
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 11/19/2024 10:52:23
ORA-17627: ORA-01017: invalid username/password; logon denied
ORA-17629: Cannot connect to the remote database server

Resolution: connect with explicit credentials on both sides: rman target sys/oracle auxiliary sys/oracle@dghfdb


Check the standby (dghfdb):
SYS@dghfdb> select status from v$instance;

STATUS
------------------------------------
MOUNTED

SYS@dghfdb> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
/oradata/dghfdb/onlinelog/group_2.264.1184265675
/oradata/dghfdb/onlinelog/group_1.263.1184265675
/oradata/dghfdb/onlinelog/group_3.267.1184266753
/oradata/dghfdb/onlinelog/group_4.268.1184266763
/oradata/dghfdb/onlinelog/group_10.270.1185369259
/oradata/dghfdb/onlinelog/group_11.271.1185369267
/oradata/dghfdb/onlinelog/group_12.272.1185369275
/oradata/dghfdb/onlinelog/group_13.273.1185369283
/oradata/dghfdb/onlinelog/group_14.274.1185369293
/oradata/dghfdb/onlinelog/group_15.275.1185369301

10 rows selected.

SYS@dghfdb> select name from v$tempfile;

NAME
--------------------------------------------------------------------------------
/oradata/dghfdb/tempfile/temp.265.1184265697

SYS@dghfdb>  show parameter control

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_file_record_keep_time        integer     7
control_files                        string      /oradata/dghfdb/control01.ctl, /oradata/dghfdb/control02.ctl
control_management_pack_access       string      DIAGNOSTIC+TUNING
SYS@dghfdb> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/oradata/dghfdb/datafile/system.258.1184265513
/oradata/dghfdb/datafile/sysaux.259.1184265569
/oradata/dghfdb/datafile/undotbs1.260.1184265603
/oradata/dghfdb/datafile/undotbs2.266.1184266663
/oradata/dghfdb/datafile/users.261.1184265607

SYS@dghfdb>


Synchronization:
SQL> alter database recover managed standby database disconnect from session;  -- start the MRP process on the standby to apply logs not yet synchronized
SQL> alter database recover managed standby database cancel;
SQL> archive log list  
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     0
Next log sequence to archive   0
Current log sequence           0

Check with this query instead:
select thread#, max(sequence#) "Last Primary Seq Generated" from v$archived_log val, v$database vdb where val.resetlogs_change# = vdb.resetlogs_change# group by thread# order by 1;

Open the database:
SQL> alter database open;

Restart synchronization:
SQL> alter database recover managed standby database disconnect from session; 
SQL> select name,open_mode,protection_mode,database_role,log_mode from v$database;

NAME      OPEN_MODE            PROTECTION_MODE      DATABASE_ROLE    LOG_MODE
--------- -------------------- -------------------- ---------------- ------------
HFERPDB   READ ONLY WITH APPLY MAXIMUM PERFORMANCE  PHYSICAL STANDBY ARCHIVELOG

Test:
SQL> create table hftable1 as select name from v$datafile;
SQL> alter system switch logfile;  

# ADG replication works fine, yet `archive log list` on the standby shows all zeros. A MOS search explains why:

# MOS note: Archive Log List Showing 0 At Standby, But Standby Is Completely In Sync With Primary! (Doc ID 2041137.1)

# Starting with 12c, redo log handling on physical standbys changed; check the applied sequence with the following SQL instead:


select thread#, max(sequence#) "Last Primary Seq Generated" from v$archived_log val, v$database vdb where val.resetlogs_change# = vdb.resetlogs_change# group by thread# order by 1;

Data Guard Routine Checks and Synchronization Testing

alter database recover managed standby database disconnect from session;    -- start managed recovery (launches the MRP process)
alter database recover managed standby database cancel;     -- cancel managed recovery
# Primary instance 1:
SYS@hfdb1>  SELECT UNIQUE THREAD#, MAX(SEQUENCE#) OVER(PARTITION BY THREAD#) LAST FROM V$ARCHIVED_LOG;

   THREAD#       LAST
---------- ----------
         2         14
         1         27

# Primary instance 2:
SYS@hfdb2>  SELECT UNIQUE THREAD#, MAX(SEQUENCE#) OVER(PARTITION BY THREAD#) LAST FROM V$ARCHIVED_LOG;

   THREAD#       LAST
---------- ----------
         2         14
         1         27

# Standby:
SYS@dghfdb>  SELECT UNIQUE THREAD#, MAX(SEQUENCE#) OVER(PARTITION BY THREAD#) LAST FROM V$ARCHIVED_LOG;

   THREAD#       LAST
---------- ----------
         2         14
         1         27
         
# Check the standby for missing logs (archive gaps):
SYS@dghfdb> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;

# Query archived logs that have not yet been applied:
SYS@hfdb1> select thread#,sequence#,first_time,next_time,applied from v$archived_log where applied='NO';

# Test:
create tablespace hfdb1 datafile '+dgdata1' size 10m autoextend off; 
create user hfdb1 identified by hfdb1 default tablespace hfdb1 temporary tablespace temp; 
grant dba to hfdb1; 
conn hfdb1/hfdb1; 
create table hfdb1.test (c1 varchar2(10),c2 number); 
insert into hfdb1.test values('abc','1'); 
commit; 
	
conn / as sysdba
SQL> select name from v$datafile;

SQL> select dest_name,status,error from v$archive_dest;
select process,client_process,sequence#,status from v$managed_standby;
select archived_thread#,archived_seq#,applied_thread#,applied_seq#,dest_name from v$archive_dest_status;

Starting and Stopping the Databases in a RAC + DG Environment

Shutdown sequence

Shut down the primary first (both instances), then the standby.

Primary:

shutdown immediate  -- (or: srvctl stop database -d hfdb)

Stop the listener: lsnrctl stop

Standby:
SQL> alter database recover managed standby database cancel;  -- stop log apply (MRP)

shutdown immediate

Stop the listener: lsnrctl stop

Startup sequence

Start the standby first, then the primary.

Standby:

Start the listener: lsnrctl start

startup

SQL> alter database recover managed standby database disconnect from session;

Primary:

Start the listener: lsnrctl start

startup  -- (or: srvctl start database -d hfdb)

Check:

select thread#, max(sequence#) "Last Primary Seq Generated" from v$archived_log val, v$database vdb where val.resetlogs_change# = vdb.resetlogs_change# group by thread# order by 1;

Primary/Standby Switchover (hfdb30/31 → hfdb40)

switchover -- a planned, graceful role change; it does not break the DG relationship.

failover -- an unplanned, destructive role change; afterwards the DG setup must be rebuilt. (Since 11g, database flashback offers an escape hatch: create a restore point before the failover, and as long as the flash recovery area is large enough, the old primary can be flashed back instead of rebuilt.)
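The flashback safety net mentioned above can be sketched roughly as follows. This is a hedged outline, not a tested procedure: the restore point name is illustrative, and it assumes the flash recovery area (db_recovery_file_dest) is configured with enough space.

```sql
-- On the primary, ahead of time (requires a configured, sufficiently
-- large flash recovery area):
CREATE RESTORE POINT before_failover GUARANTEE FLASHBACK DATABASE;

-- After a failover, to reinstate the failed old primary as a physical
-- standby instead of rebuilding DG from scratch:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO RESTORE POINT before_failover;
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
DROP RESTORE POINT before_failover;
```

A guaranteed restore point pins the required flashback logs, so remember to drop it once the reinstated standby is confirmed in sync.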

switchover

-- switchover (primary: RAC, standby: FS)
	
col name for a20
col open_mode for a20
col PROTECTION_MODE for a20
col DATABASE_ROLE for a20
col SWITCHOVER_STATUS for a20
set linesize 1000
SYS@dghfdb> select NAME,OPEN_MODE,PROTECTION_MODE,DATABASE_ROLE,SWITCHOVER_STATUS from v$database;

NAME                 OPEN_MODE            PROTECTION_MODE      DATABASE_ROLE        SWITCHOVER_STATUS
-------------------- -------------------- -------------------- -------------------- --------------------
HFDB                 READ ONLY WITH APPLY MAXIMUM PERFORMANCE  PHYSICAL STANDBY     NOT ALLOWED

HFDB1@hfdb1> select NAME,OPEN_MODE,PROTECTION_MODE,DATABASE_ROLE,SWITCHOVER_STATUS from v$database;

NAME                 OPEN_MODE            PROTECTION_MODE      DATABASE_ROLE        SWITCHOVER_STATUS
-------------------- -------------------- -------------------- -------------------- --------------------
HFDB                 READ WRITE           MAXIMUM PERFORMANCE  PRIMARY              TO STANDBY

SYS@hfdb2> select NAME,OPEN_MODE,PROTECTION_MODE,DATABASE_ROLE,SWITCHOVER_STATUS from v$database;

NAME                 OPEN_MODE            PROTECTION_MODE      DATABASE_ROLE        SWITCHOVER_STATUS
-------------------- -------------------- -------------------- -------------------- --------------------
HFDB                 READ WRITE           MAXIMUM PERFORMANCE  PRIMARY              TO STANDBY



HFDB1@hfdb1> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;
	no rows selected
SYS@dghfdb> select thread#,sequence#,first_time,next_time,applied from v$archived_log where applied='NO';
	no rows selected
	
# Switch the primary over; shut down the second RAC node first. (The primary can switch to standby when SWITCHOVER_STATUS is TO STANDBY or SESSIONS ACTIVE.)
-- Primary (hfdb30), shut down RAC node 2:
SYS@hfdb2> shutdown immediate
alter database commit to switchover to physical standby;
or:
alter database commit to switchover to physical standby with session shutdown;
startup;  -- both instances are started again, now in the standby role
SYS@hfdb1> startup
SYS@hfdb2> startup
—————————————————————————————— Error resolution ——————————————————————————————
ORA-10458: standby database requires recovery
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '+DGSYSTEM1/HFDB/DATAFILE/system.258.1184265513'
SYS@hfdb1> recover managed standby database using current logfile disconnect from session;
Media recovery complete.

SYS@hfdb1> alter database open read only;
ERROR at line 1:
ORA-10458: standby database requires recovery
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '+DGSYSTEM1/HFDB/DATAFILE/system.261.1185439411'

Checking the new primary (hfdb40) at this point shows errors:
ARC2 (PID:27173): FAL archive failed with error 12154 (see trace for details)
ARC2 (PID:27173): FAL archive failed, archiver continuing
and
krsu_dump_oci_emsg: Detailed OCI error val is 12154 and errmsg is ORA-12154: TNS:could not resolve the connect identifier specified 
# Root cause: the TNS configuration on hfdb40 could not reach the primary (SCAN IP .32). After reconfiguring tnsnames.ora, shut down the hfdb30/31 instances and start them up again to recover.
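For reference, a minimal tnsnames.ora entry on the standby host pointing at the primary SCAN might look like the sketch below. The SCAN IP matches the .32 address mentioned above, but the alias and service name are assumptions for this environment:

```
# tnsnames.ora on hfdb41 -- alias used by FAL/log transport to reach the RAC primary
hfdb =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.40.32)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = hfdb)
    )
  )
```

Verify the entry with `tnsping hfdb` and a `sqlplus sys@hfdb as sysdba` login from the standby host before restarting redo transport.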

	
-- Standby (hfdb40):
--alter database recover managed standby database cancel;  -- stop recovery
alter database commit to switchover to primary;
or:
SYS@dghfdb> alter database commit to switchover to primary with session shutdown;
SYS@dghfdb> alter database open;


	
-- hfdb30 (now the standby):
alter database recover managed standby database disconnect from session;
--alter database recover managed standby database cancel;

-- Test replication from the new primary:
insert into hfdb1.itpux values('itpux','4');
commit;
alter system archive log current;
select * from hfdb1.itpux;

select * from hfdb1.test ;
insert into hfdb1.test values('ab3','2');
	
SYS@dghfdb> select * from hfdb1.test ;

C1                                     C2
------------------------------ ----------
abc                                     1

SYS@dghfdb> insert into hfdb1.test values('ab3','2');

1 row created.

SYS@dghfdb> commit;


SYS@hfdb1> select * from hfdb1.test ;

C1                                     C2
------------------------------ ----------
abc                                     1
ab3                                     2

SYS@hfdb1>

Switchback FS → RAC (hfdb40 → hfdb30)
select NAME,OPEN_MODE,PROTECTION_MODE,DATABASE_ROLE,SWITCHOVER_STATUS from v$database;
SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP; 
select thread#,sequence#,first_time,next_time,applied from v$archived_log where applied='NO'; 


-- Primary (hfdb40):

alter database commit to switchover to physical standby; 
or: 
alter database commit to switchover to physical standby with session shutdown; 

startup 

 

-- Standby (hfdb30):
alter database recover managed standby database cancel;
-- shut down the second instance (hfdb2)

alter database commit to switchover to primary; 
or: 
alter database commit to switchover to primary with session shutdown; 
alter database open; 

 

-- hfdb40 (now the standby):

alter database recover managed standby database disconnect from session; 

insert into hfdb1.itpux values('itpux','5'); 
commit; 
alter system archive log current; 

failover

Suppose the physical primary has crashed and cannot be started, so the standby must be activated in an emergency. Operate directly on the standby to convert it to the primary role; the four commands below are all that is required.

SQL > alter database recover managed standby database finish;

SQL > alter database commit to switchover to primary;

SQL > shutdown immediate;

SQL > startup;

Steps

Finish recovery directly on the standby:

SQL> alter database recover managed standby database finish;

Database altered.

Then convert the standby to primary:

SQL> alter database commit to switchover to primary;

Shut down the (former) standby:

SQL> shutdown immediate

ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.

Start it up again:

SQL> startup 

Verify the role change:

SQL> select open_mode,database_role,switchover_status from v$database;

OPEN_MODE  DATABASE_ROLE    SWITCHOVER_STATUS
---------- ---------------- --------------------
READ WRITE PRIMARY          NOT ALLOWED

The standby now holds the primary role; the failover is complete.

Maintenance

Check log apply status on primary and standby:

select sequence#,applied from v$archived_log order by sequence#;

Check standby processes:

select process,client_process,sequence#,status,BLOCK#,BLOCKS from v$managed_standby;

Check standby lag:

select name,value,unit,time_computed from v$dataguard_stats where name in ('transport lag','apply lag');
