Oracle 11g Installation With ASM and Data Guard Setup
Configure Oracle 11g with ASM and Data Guard for high availability.
In this article we will walk through an Oracle 11g installation with ASM storage and then set up a physical standby database on ASM. We will follow the steps below for our configuration.
We will first set up the primary server and then the standby server.
Data Guard With ASM - Setup Primary
Let us create a virtual machine, install Oracle Grid Infrastructure (which provides ASM) and then install the Oracle 11g database software. The first step is to install Oracle Linux 7.7 on VirtualBox.
Oracle Installation Prerequisites
Install the below package to satisfy all Oracle installation prerequisites
yum -y install oracle-rdbms*
Change password for oracle user
passwd oracle
Add grid user and update password
useradd -u 54323 -g oinstall -G dba grid
passwd grid
Create necessary directories
mkdir -p /u01/app/oracle
mkdir -p /u01/app/grid
mkdir -p /u01/app/11.2.0/grid
chown -R grid:oinstall /u01
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01
mkdir -p /u02
chown -R oracle:oinstall /u02
chmod -R 775 /u02
Download the Oracle 11.2.0.4 grid and database binaries and copy them to the /u02 location
Switch to Oracle user and edit the bash profile
su - oracle
vi .bash_profile
Put below under the bash profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
export ORACLE_SID=proddb
export ORACLE_UNQNAME=proddb
export JAVA_HOME=/usr/bin/java
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_TERM=xterm
export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"
export TNS_ADMIN=$ORACLE_HOME/network/admin
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/local/bin
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
THREADS_FLAG=native; export THREADS_FLAG
export TEMP=/tmp
export TMPDIR=/tmp
umask 022
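To sanity-check the profile, you can source a minimal copy of its exports and confirm that the derived paths expand as expected. A sketch (the /tmp file name is arbitrary; paths are the ones used in this article):

```shell
# Write a minimal copy of the profile exports and source it to verify
# that the derived variables expand correctly (sanity check only).
cat > /tmp/ora_profile_check.sh <<'EOF'
export ORACLE_SID=proddb
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
export TNS_ADMIN=$ORACLE_HOME/network/admin
EOF
. /tmp/ora_profile_check.sh
echo "ORACLE_SID=$ORACLE_SID"
echo "TNS_ADMIN=$TNS_ADMIN"
```

TNS_ADMIN should resolve to /u01/app/oracle/product/11.2.0/dbhome_1/network/admin; if it does not, recheck the ORACLE_BASE and ORACLE_HOME lines in your real .bash_profile.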
Switch to grid user and edit the bash profile
su - grid
vi .bash_profile
Put below under the bash profile
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
export ORACLE_SID=+ASM
export JAVA_HOME=/usr/bin/java
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_TERM=xterm
export TNS_ADMIN=$ORACLE_HOME/network/admin
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/local/bin
export PATH
export TEMP=/tmp
export TMPDIR=/tmp
umask 022
Configure ASM Disks
Our goal is to set up the primary database on ASM disks. Let us proceed and configure the ASM disks. Install the ASM packages
yum -y install oracleasm*
yum -y install kmod-oracleasm
Configure ASM
[root@prod ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Load kernel module
[root@prod ~]# oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Configuring "oracleasm" to use device physical block size
Mounting ASMlib driver filesystem: /dev/oracleasm
Shut down the virtual machine and add a 110 GB ASM disk to it. Go to virtual machine Settings > Storage > Controller: SATA > Add new disk
Select create new disk
Select VDI and click next
Select Fixed size and click next
Give 110 GB size and click on create
It will take some time to create the disk. Once created, start the virtual machine. Check that the newly added disk is listed via the fdisk -l command
Create three partitions on /dev/sdb disk to be used as ASM disks
fdisk /dev/sdb
n --> create new partition
p
1
<enter>
+50G
n
p
2
<enter>
+50G
n
p
3
<enter>
<enter>
w --> save partitions
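The interactive keystrokes above can also be fed to fdisk non-interactively. A sketch that only builds and prints the answer file (the /tmp file name is arbitrary); on the VM, as root, you would apply it with `fdisk /dev/sdb < /tmp/fdisk_answers.txt`:

```shell
# Build the fdisk answer file: two 50G primary partitions plus a third
# partition taking the remainder. Blank lines stand for <enter>.
cat > /tmp/fdisk_answers.txt <<'EOF'
n
p
1

+50G
n
p
2

+50G
n
p
3


w
EOF
cat /tmp/fdisk_answers.txt
```

Review the file before piping it into fdisk; repartitioning the wrong device is destructive.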
Create asm disks. Each partition will be used as one disk inside a disk group
oracleasm createdisk DATA1 /dev/sdb1
oracleasm createdisk FRA1 /dev/sdb2
oracleasm createdisk CRS1 /dev/sdb3
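The three createdisk calls can be driven from a small loop. A dry-run sketch that only generates the commands into a reviewable script (the /tmp file name is arbitrary); on the VM you would then run the script as root:

```shell
# Map ASM disk labels to partitions and generate the oracleasm commands.
# Dry run: nothing is created; review then run: sh /tmp/create_asm_disks.sh
for pair in DATA1:/dev/sdb1 FRA1:/dev/sdb2 CRS1:/dev/sdb3; do
    label=${pair%%:*}   # text before the colon
    part=${pair##*:}    # text after the colon
    echo "oracleasm createdisk $label $part"
done > /tmp/create_asm_disks.sh
cat /tmp/create_asm_disks.sh
```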
Install Grid Software
Switch to grid user, unzip the grid software under /u02
su - grid
cd /u02
unzip p13390677_112040_Linux-x86-64_3of7.zip
Start the runInstaller
cd grid
./runInstaller
Follow the screens to install oracle grid software
Check the resources status
ps -ef|grep pmon
crsctl status resource -t
Create ASM Diskgroups
As grid user, start asmca utility
Click on Create. Give a name to your DATA disk group. This disk group will hold database files. Then click on OK
Similarly create FRA diskgroup
Click Close to exit asmca.
Install Oracle Database Software
Switch to oracle user, unzip oracle software under /u02
su - oracle
cd /u02
unzip p13390677_112040_Linux-x86-64_1of7_DB1.zip
unzip p13390677_112040_Linux-x86-64_2of7_DB2.zip
cd database
./runInstaller
Follow the screens to install oracle database software
DBCA Create Primary Database
Launch dbca and follow the screens below
Under Sizing Tab, click on All Initialization Parameters
Change the control files parameter to
("+DATA/{DB_UNIQUE_NAME}/control1.ctl","+FRA/{DB_UNIQUE_NAME}/control2.ctl")
Check the database pmon status at OS level
Our primary database is ready!
Setup Standby Server
Now that we have successfully installed Oracle 11g on ASM, we will proceed with setting up the standby server. We will create another virtual machine and install OEL 7.
Oracle Installation Prerequisites
Update /etc/hosts file
Install below package to perform all oracle installation pre-requisites
yum -y install oracle-rdbms*
Change password for oracle user
passwd oracle
Add grid user and update password
useradd -u 54323 -g oinstall -G dba grid
passwd grid
Create necessary directories
mkdir -p /u01/app/oracle
mkdir -p /u01/app/grid
mkdir -p /u01/app/11.2.0/grid
chown -R grid:oinstall /u01
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01
mkdir -p /u02
chown -R oracle:oinstall /u02
chmod -R 775 /u02
Download the Oracle 11.2.0.4 grid and database binaries and copy them to the /u02 location
Switch to Oracle user and edit the bash profile
su - oracle
mv .bash_profile .bash_profile_orig
vi .bash_profile
Put below under the bash profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
export ORACLE_SID=proddb_st
export ORACLE_UNQNAME=proddb_st
export JAVA_HOME=/usr/bin/java
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_TERM=xterm
export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"
export TNS_ADMIN=$ORACLE_HOME/network/admin
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/local/bin
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
THREADS_FLAG=native; export THREADS_FLAG
export TEMP=/tmp
export TMPDIR=/tmp
umask 022
Switch to grid user and edit the bash profile
su - grid
mv .bash_profile .bash_profile_orig
vi .bash_profile
Put below under the bash profile
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
export ORACLE_SID=+ASM
export JAVA_HOME=/usr/bin/java
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_TERM=xterm
export TNS_ADMIN=$ORACLE_HOME/network/admin
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/local/bin
export PATH
export TEMP=/tmp
export TMPDIR=/tmp
umask 022
Configure ASM Disks
In this lab exercise, we will add ASM disks to the standby server. Our goal is to have both the primary and standby databases on ASM disks. Let us proceed and configure the ASM disks. Install the ASM packages
yum -y install oracleasm* kmod-oracleasm
Configure ASM
[root@prod ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Load kernel module
[root@prod ~]# oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Configuring "oracleasm" to use device physical block size
Mounting ASMlib driver filesystem: /dev/oracleasm
Shutdown the virtual machine and Add asm disk to virtual machine of 110 GB size. Go to virtual machine settings > storage > Controller SATA > Add new disk
Select create new disk
Select VDI and click next
Select Fixed size and click next
Give 110 GB size and click on create
It will take some time to create the disk. Once created, start the virtual machine.
Check that the newly added disk is listed via the fdisk -l command (these commands are the same as on the prod server, but are run on the DR server)
Create three partitions on /dev/sdb disk to be used as ASM disks
fdisk /dev/sdb
n --> create new partition
p
1
<enter>
+50G
n
p
2
<enter>
+50G
n
p
3
<enter>
<enter>
w --> save partitions
Check the partitions under /dev/sdb
Create asm disks. Each partition will be used as one disk inside a disk group
oracleasm createdisk DATA1 /dev/sdb1
oracleasm createdisk FRA1 /dev/sdb2
oracleasm createdisk CRS1 /dev/sdb3
Install Oracle Grid Software
Switch to grid user, unzip the grid software under /u02
su - grid
cd /u02
unzip p13390677_112040_Linux-x86-64_3of7.zip
Change to the grid directory and run the installer
cd grid
./runInstaller
Follow the screens to install oracle grid software
Check the resources status
ps -ef|grep pmon
crsctl status resource -t
Create ASM Diskgroups
As grid user, start asmca utility
Click on Create. Give a name to your DATA disk group. This disk group will hold database files. Then click on OK
Similarly create FRA diskgroup
Install Oracle Database Software
Switch to oracle user, unzip oracle software under /u02
su - oracle
cd /u02
unzip p13390677_112040_Linux-x86-64_1of7_DB1.zip
unzip p13390677_112040_Linux-x86-64_2of7_DB2.zip
cd database
./runInstaller
Follow the screens to install oracle database software
Configure Data Guard
Now that both our primary and standby servers are ready, we can proceed further and configure physical standby.
Primary Database Changes
Configuration overview
The primary database must run in archivelog mode; this is the basic requirement for Data Guard. Check the current mode:
SQL> archive log list;
If it’s not in archive log mode, use below
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list;
Enable force logging: In Oracle, users can suppress redo generation for some operations with the NOLOGGING clause. NOLOGGING operations are a problem for a physical standby, so we enable force logging: even if a user specifies NOLOGGING, every change is still written to redo.
SQL> alter database force logging;
SQL> select name, force_logging from v$database;
Standby file management: We need to make sure that whenever we add or drop a datafile in the primary database, the corresponding file is also added or dropped on the standby.
SQL> alter system set standby_file_management = 'AUTO';
Create standby log files: You must create standby log files on the primary. These files are used by a standby database to store the redo it receives from the primary. Our primary may become a standby after a switchover and would then need them, so it is better to create them now. First check the current log groups, then add one more standby log group than the number of online log groups.
SQL> select group#, member from v$logfile;
Add the standby logfiles, making sure the group numbers come from a different series; in this case we start at 11. This makes them easy to distinguish from the online log groups.
ALTER DATABASE ADD STANDBY LOGFILE GROUP 11 '+FRA' SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 12 '+FRA' SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 13 '+FRA' SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 14 '+FRA' SIZE 50M;
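The four statements above follow a pattern, so they can be generated with a loop. A sketch that only writes the SQL to a file (the /tmp path is arbitrary); on the primary you could then run it with sqlplus as SYSDBA:

```shell
# Generate the ADD STANDBY LOGFILE statements for groups 11-14.
# Nothing touches the database here; review then run the script in sqlplus.
for g in 11 12 13 14; do
    echo "ALTER DATABASE ADD STANDBY LOGFILE GROUP $g '+FRA' SIZE 50M;"
done > /tmp/add_srl.sql
cat /tmp/add_srl.sql
```

Adjust the group range and size to match your own online redo log configuration.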
Check the standby log files via below query
SQL> SELECT GROUP#,THREAD#,SEQUENCE#,ARCHIVED,STATUS FROM V$STANDBY_LOG;
Create password file for standby: This is needed for cloning. Even if a password file already exists under $ORACLE_HOME/dbs, create a new one named after the standby SID
SQL> alter system set remote_login_passwordfile=exclusive scope=spfile;
SQL> exit;
cd $ORACLE_HOME/dbs
orapwd file=orapwproddb_st password=<sys_password>
scp orapwproddb_st oracle@<standby_ip>:$ORACLE_HOME/dbs
Check DB Unique name parameter on primary: Make sure your primary database has DB_UNIQUE_NAME parameter set for consistency. If it’s not set properly, use ALTER SYSTEM SET command
SQL> show parameter db_unique_name;
Enable flashback on primary: Flashback database is highly recommended because in case of failover, you need not re-create primary database from scratch.
SQL> alter system set db_recovery_file_dest_size=45g;
SQL> select flashback_on from v$database;
SQL> alter database flashback on;
If flashback parameters are not set properly, use below commands
SQL> show parameter recovery;
SQL> alter system set db_recovery_file_dest='+FRA';
SQL> alter system set db_recovery_file_dest_size=45g;
SQL> alter database flashback on;
Configure tns entries on primary & standby: Use the below tns entries and put them under $ORACLE_HOME/network/admin/tnsnames.ora as the oracle user. Change the hosts as per your environment and do this on both primary and standby.
vi $ORACLE_HOME/network/admin/tnsnames.ora
proddb =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.201)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = proddb)
)
)
proddb_st =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.204)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = proddb_st)
)
)
Configure Listener on primary: We need to configure the listener via the grid user.
su - grid
cd $ORACLE_HOME/network/admin
vi listener.ora
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.201)(PORT = 1521))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
)
)
SID_LIST_LISTENER=
(SID_LIST=
(SID_DESC=
(GLOBAL_DBNAME=proddb)
(SID_NAME=proddb)
(ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1)
)
(SID_DESC=
(GLOBAL_DBNAME=proddb_DGMGRL)
(SID_NAME=proddb)
(ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1)
)
)
lsnrctl reload listener
Configure listener on standby: We need to configure the listener via the grid user.
su - grid
cd $ORACLE_HOME/network/admin
vi listener.ora
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.204)(PORT = 1521))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
)
)
SID_LIST_LISTENER=
(SID_LIST=
(SID_DESC=
(GLOBAL_DBNAME=proddb_st)
(SID_NAME=proddb_st)
(ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1)
)
(SID_DESC=
(GLOBAL_DBNAME=proddb_st_DGMGRL)
(SID_NAME=proddb_st)
(ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1)
)
)
lsnrctl reload listener
Perform network test: Perform tnsping on both servers to check that the network configuration is fine
tnsping proddb >> must be OK
tnsping proddb_st >> must be OK
Configure redo transport from primary to standby: The below statement says that if the current database is in the primary role, it transports redo to the standby. On the standby server, the service and db_unique_name in the same parameter are changed to point back to the primary.
SQL> alter system set log_archive_dest_2 = 'service=proddb_st async
valid_for=(online_logfiles,primary_role) db_unique_name=proddb_st';
Setup FAL (Fetch Archive Log) server: This parameter tells the primary where to fetch missing archives from (FAL server = proddb_st).
SQL> alter system set fal_server = 'proddb_st';
Setup Data Guard configuration on primary: This parameter will let primary database know which databases are in data guard configuration.
SQL> alter system set log_archive_config= 'dg_config=(proddb,proddb_st)';
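The primary-side parameter changes from this section can be collected into a single reviewable script. A sketch (the /tmp path is arbitrary; database names are the ones used in this article); on the primary, after enabling archivelog mode and creating the standby redo logs, you could run it with `sqlplus / as sysdba @/tmp/dg_primary_setup.sql`:

```shell
# Collect the primary-side Data Guard changes into one script for review.
# This only writes a file; it does not touch the database.
cat > /tmp/dg_primary_setup.sql <<'EOF'
alter database force logging;
alter system set standby_file_management='AUTO';
alter system set log_archive_dest_2='service=proddb_st async
  valid_for=(online_logfiles,primary_role) db_unique_name=proddb_st';
alter system set fal_server='proddb_st';
alter system set log_archive_config='dg_config=(proddb,proddb_st)';
EOF
cat /tmp/dg_primary_setup.sql
```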
Build Physical Standby
Configuration overview
Create pfile for standby (on primary): We will create pfile from spfile on primary database and copy the file to standby server
On primary server
=================
SQL> create pfile='/tmp/initproddb_st.ora' from spfile;
scp /tmp/initproddb_st.ora oracle@192.168.0.204:/tmp
On standby server
=================
vi /tmp/initproddb_st.ora
Change FAL_SERVER to the primary SID (proddb)
In LOG_ARCHIVE_DEST_2 change the service and db_unique_name to the primary SID (in this case proddb)
Add parameter db_unique_name='proddb_st'
Save pfile and close
Create the audit file destination directory at the OS level using mkdir (the path set by audit_file_dest in the pfile)
Switch to grid user and create respective directories under +DATA and +FRA
su - grid
asmcmd
ASMCMD> cd DATA
ASMCMD> mkdir PRODDB
ASMCMD> mkdir PRODDB_ST
ASMCMD> cd ..
ASMCMD> cd FRA
ASMCMD> mkdir PRODDB
ASMCMD> mkdir PRODDB_ST
ASMCMD> exit
Create spfile on standby: In the previous step we created the pfile. Let us now create the spfile from that pfile.
SQL> startup nomount pfile=/tmp/initproddb_st.ora;
SQL> create spfile='+DATA/proddb_st/spfileproddb_st.ora' from pfile='/tmp/initproddb_st.ora';
SQL> shutdown
Note: in case of ORA-00845, edit /etc/fstab as root user
vi /etc/fstab
Edit below line and add size=5G
===============================
tmpfs /dev/shm tmpfs defaults,size=5G 0 0
mount -o remount /dev/shm
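If you prefer to script that edit, here is a sketch demonstrated against a scratch copy of the tmpfs line (the /tmp file name is arbitrary); point the sed at /etc/fstab as root, after taking a backup, to apply it for real:

```shell
# Demonstrate the fstab edit against a scratch copy; never edit /etc/fstab
# without a backup. The real file may use tabs rather than single spaces.
printf 'tmpfs /dev/shm tmpfs defaults 0 0\n' > /tmp/fstab.demo
# Append size=5G to the tmpfs mount options.
sed -i 's|defaults 0 0|defaults,size=5G 0 0|' /tmp/fstab.demo
cat /tmp/fstab.demo
```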
Under $ORACLE_HOME/dbs, create an initproddb_st.ora file pointing at the below spfile location
cd $ORACLE_HOME/dbs
vi initproddb_st.ora
SPFILE='+DATA/proddb_st/spfileproddb_st.ora'
Start the standby DB instance in nomount stage and check spfile parameter
SQL> startup nomount
SQL> show parameter spfile
SQL> exit; >> must exit or duplicate will fail
Duplicate primary database via RMAN: In this step, we will use RMAN to duplicate primary database for our standby database. You can connect to RMAN on primary or standby server. We will connect to RMAN on primary server and perform the duplicate.
On primary server
=================
rman target sys/sys@proddb
RMAN> connect catalog rman_rc/rman_rc@rcat --> connect catalog
RMAN> connect auxiliary sys/sys@proddb_st --> connect standby
RMAN> duplicate target database for standby from active database nofilenamecheck;
Once cloning is done, you should see below at RMAN prompt
Finished Duplicate Db at 07-DEC-2015
Drop multiplexed SRL on standby: The standby redo logs are multiplexed on both +DATA and +FRA diskgroups. We need to delete the +DATA multiplexed copies of SRL
On standby server
=================
select name, database_role, open_mode from v$database;
column member format a50;
select group#, member from v$logfile where type='STANDBY' order by 1,2;
Drop the multiplexed copies of SRL from +DATA only
=================================================
SQL> alter database drop logfile member '+DATA/proddb_stb/onlinelog/group_11.265.940696341';
SQL> alter database drop logfile member '+DATA/proddb_stb/onlinelog/group_12.267.940696345';
SQL> alter database drop logfile member '+DATA/proddb_stb/onlinelog/group_13.266.940696347';
SQL> alter database drop logfile member '+DATA/proddb_stb/onlinelog/group_14.269.940696351';
One of the logfile members will not drop as it is in use. You need to switch the logfile on the primary and then drop that SRL on the standby
On Primary:
===========
SQL> alter system switch logfile;
System altered.
Start/stop MRP on standby: Our standby database is ready and we can start the MRP process. Run the below command to start the recovery process in the background.
To start MRP:
SQL> alter database recover managed standby database disconnect;
To stop MRP:
SQL> alter database recover managed standby database cancel;
Verifying Redo Apply: Once MRP is started, we must verify if our setup is working fine
On standby:
===========
SQL> alter database recover managed standby database disconnect;
On both primary & standby:
==========================
SQL> set lines 999;
SQL> select * from v$dataguard_status order by timestamp;
SQL> select dest_id, status, destination, error from v$archive_dest where dest_id<=2;
If you see any ORA error like ORA-16058, do this on primary:
============================================================
SQL> alter system set log_archive_dest_state_2='DEFER';
SQL> alter system set log_archive_dest_state_2='ENABLE';
SQL> select dest_id, status, destination, error from v$archive_dest where dest_id<=2;
On primary:
===========
SQL> select sequence#, first_time, next_time, applied, archived from v$archived_log where name = 'proddb_st' order by first_time;
SQL> select STATUS, GAP_STATUS from V$ARCHIVE_DEST_STATUS where DEST_ID = 2;
SQL> archive log list;
On standby:
===========
SQL> select process, status, sequence# from v$managed_standby;
SQL> select sequence#, applied, first_time, next_time, name filename from v$archived_log order by sequence#;
Enable flashback on standby: Given the importance of flashback in Oracle Data Guard, we must enable it on the standby as well
On standby:
==========
SQL> alter database recover managed standby database cancel;
SQL> alter database flashback on;
SQL> alter database recover managed standby database disconnect;
Configure archive deletion policy: We must set this policy to prevent deletion of archive logs on the primary before they have been applied on all standby databases.
rman target /
configure archivelog deletion policy to applied on all standby;
Register standby database with srvctl (grid utility): We need to register the standby database with the srvctl utility, as it helps us manage the database easily.
On standby:
===========
sqlplus / as sysdba
SQL> alter database recover managed standby database cancel;
SQL>shutdown immediate
SQL> exit;
Switch to the grid home location as the oracle user:
=============================================
cd /u01/app/11.2.0/grid/bin
./srvctl add database -d proddb -i proddb_st -o /u01/app/oracle/product/11.2.0/dbhome_1 -r PHYSICAL_STANDBY -s MOUNT
./srvctl start database -d proddb
./srvctl status database -d proddb
Note: even though we can start and stop the standby database via the srvctl utility, we must always start MRP manually.
Further Read
Convert Physical Standby
Oracle Data Guard Broker
Change Protection Modes