Oracle Data Guard Standby in RAC 19c: Advanced Interview Questions & Answers
15 Hard-Level Questions | Oracle 19c | RAC & Data Guard
Question 1 of 15
Topic: Architecture

Answer
In a RAC primary, each instance has its own set of online redo logs (ORLs)
and generates its own thread of redo independently. In Data Guard with RAC,
each primary instance ships its redo stream directly to the standby, so the
standby receives one redo stream per thread. A physical standby must be
configured with Standby Redo Logs (SRLs) for every primary thread, with one
more SRL group per thread than the primary has ORL groups (the n+1 rule).
Key differences:
- Primary RAC: n instances, each with its own thread of redo and independent redo shipping (LGWR/ARCn per instance)
- Standby RAC: MRP (Managed Recovery Process) coordinates apply across all received threads, merging redo by SCN
- FAL (Fetch Archive Log) is used for gap resolution per thread
- In 19c, Real-Time Apply on a RAC standby is fully supported, and multi-instance redo apply (MIRA) can run recovery workers on multiple standby instances
SQL / Command Reference:
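A minimal sketch for a two-node primary; the thread counts, group numbers, and sizes are assumptions to adapt:

-- On the primary: inspect redo threads and ORL groups per thread
SELECT thread#, group#, bytes/1024/1024 AS size_mb FROM v$log ORDER BY thread#, group#;

-- On the standby: add SRLs per thread (one more group per thread than the primary has ORLs)
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 11 SIZE 1G;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 21 SIZE 1G;

-- Check apply processes across standby instances (MIRA shows workers on several instances)
SELECT inst_id, process, status, thread#, sequence#
FROM gv$managed_standby
WHERE process LIKE 'MRP%';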
Question 2 of 15
Answer
The n+1 rule states: for each redo thread (instance) on the primary, the
standby must have at least as many SRL groups as the primary has ORL groups
for that thread, plus one additional group. If the primary RAC has 3
instances, each with 3 ORL groups, the standby needs at least 4 SRL groups
per thread (12 total).
Violation consequences:
- Redo shipping stalls when all SRL groups are full and the standby cannot write incoming redo
- The standby alert log reports: No standby redo logfile available for thread N
- In SYNC mode this causes the primary to hang waiting for acknowledgment
- In ASYNC mode it causes archive log accumulation and growing transport lag
- Gap detection triggers, but FAL may not resolve gaps while the SRLs remain saturated
SQL / Command Reference:
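A sketch for verifying the n+1 rule; the group number and size are assumptions:

-- Primary: ORL groups per thread
SELECT thread#, COUNT(*) AS orl_groups FROM v$log GROUP BY thread# ORDER BY thread#;

-- Standby: SRL groups per thread (should be orl_groups + 1 for each thread)
SELECT thread#, COUNT(*) AS srl_groups FROM v$standby_log GROUP BY thread# ORDER BY thread#;

-- Add a missing SRL group; the size must match the ORL size
ALTER DATABASE ADD STANDBY LOGFILE THREAD 3 GROUP 34 SIZE 1G;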

Question 3 of 15

Answer
In Maximum Protection mode, redo for every transaction must be written to at
least one standby SRL before the commit completes on the primary, using the
SYNC and AFFIRM transport attributes. The primary remains open only as long
as at least one synchronized standby is reachable and confirming redo receipt.
Network partition behavior:
- If contact with all synchronized standbys is lost, the primary shuts itself down rather than risk data loss; this is the defining behavior of Maximum Protection
- In RAC this applies per instance: any instance that can no longer write redo to a synchronized standby shuts down, so a full partition takes down all instances
- The shutdown is initiated by LGWR, and the alert log records the loss of the last synchronized standby before the instance terminates
- To recover, you must restore standby connectivity and mount/open the primary manually
SQL / Command Reference:
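A sketch, assuming a standby DB_UNIQUE_NAME of stby and archive destination 2:

-- Configure SYNC/AFFIRM transport on the primary
ALTER SYSTEM SET log_archive_dest_2=
  'SERVICE=stby SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby'
  SCOPE=BOTH SID='*';

-- Raise the protection mode (SQL, or via the broker)
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PROTECTION;
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MaxProtection;

-- Verify
SELECT protection_mode, protection_level FROM v$database;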

Question 4 of 15
Topic: Fast-Start Failover
Answer
FSFO automates failover without DBA intervention when the Observer detects
primary failure. In RAC 19c, the Observer monitors the primary cluster's
VIP/SCAN and the standby. Key conditions for FSFO to trigger:
- The Observer loses connectivity to the primary AND the standby has already received sufficient redo (within the FastStartFailoverThreshold lag window)
- The standby must be in a synchronized state (or within the lag limit for async configurations)
False failover causes in RAC:
- A network partition where the Observer cannot reach the primary but the primary is healthy (split-brain risk)
- Temporary cluster interconnect issues causing all RAC instances to appear offline simultaneously
- Observer VM migration or maintenance causing an observation gap
- Clock skew between the Observer host and cluster nodes affecting timeout calculations
Mitigations: use multiple Observers (19c supports up to 3 per configuration),
place the Observer on a separate, stable host, tune FastStartFailoverThreshold
appropriately (default 30 seconds), and set FastStartFailoverPmyShutdown=TRUE
to avoid split-brain.
SQL / Command Reference:
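A DGMGRL sketch; the threshold value and observer name are assumptions:

DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverThreshold = 45;
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverPmyShutdown = 'TRUE';
DGMGRL> ENABLE FAST_START FAILOVER;

-- On each observer host (up to 3 in 19c):
DGMGRL> START OBSERVER obs1;

-- Verify FSFO state and observer registration
DGMGRL> SHOW FAST_START FAILOVER;
DGMGRL> SHOW OBSERVER;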

Question 5 of 15
Topic: Apply Services
Answer
Real-Time Apply (RTA) allows MRP to apply redo directly from SRL groups while
they are still being written, with no need to wait for the log to be archived
first. This keeps apply lag near zero.
Traditional vs Real-Time:
- Traditional: MRP waits for an archived log to be fully received and registered before applying it
- Real-Time: MRP reads from the active SRL as redo streams in, applying changes continuously
RAC standby caveats:
- In a RAC standby, traditionally only one instance runs MRP (single-instance apply) while the other instances are mounted but idle from an apply perspective; in 19c, multi-instance redo apply is available
- If the instance running MRP fails, MRP must restart on another instance, causing a brief apply interruption
- SRL groups must be on shared storage accessible from all standby instances
- Parallel apply with multiple recovery workers can introduce brief ordering delays if thread coordinators lag
SQL / Command Reference:
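A sketch run on the standby; in 19c real-time apply is the default when SRLs exist, and the INSTANCES clause (12.2+) enables multi-instance apply:

-- Restart managed recovery with real-time apply
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

-- Multi-instance redo apply across all standby instances
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION INSTANCES ALL;

-- Confirm real-time apply is active (expect MANAGED REAL TIME APPLY)
SELECT recovery_mode FROM v$archive_dest_status WHERE recovery_mode LIKE 'MANAGED%';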

Question 6 of 15
Topic: Gap Resolution

Answer
FAL is the gap resolution mechanism. When the standby detects a missing
archive log sequence for any thread, it requests the missing log from the
FAL server (typically the primary). In RAC, each thread is a separate redo
stream, so gaps are tracked per thread.
Multi-instance gap handling:
- Each standby RFS process tracks sequences per thread independently
- FAL_SERVER on the standby should point to all primary instances (or their SCAN) to maximize availability
- FAL_CLIENT tells the primary where to ship the gap log
- If the instance that archived a specific sequence is down, the FAL request fails unless the archive is accessible from a surviving instance or a shared archive destination
Best practice in RAC: use a shared archive destination (an ASM diskgroup or
NFS) accessible from all primary instances, and set FAL_SERVER to the
primary's SCAN listener so any surviving instance can serve gaps.
SQL / Command Reference:
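A sketch; the TNS alias prim_scan and the archive path are placeholders:

-- On the standby: point FAL at the primary SCAN so any surviving instance can serve gaps
ALTER SYSTEM SET fal_server='prim_scan' SCOPE=BOTH SID='*';

-- Detect gaps per thread
SELECT * FROM v$archive_gap;
SELECT thread#, MAX(sequence#) AS last_applied
FROM v$archived_log WHERE applied='YES' GROUP BY thread#;

-- Register a manually copied archive so MRP can apply it
ALTER DATABASE REGISTER LOGFILE '+FRA/STBY/ARCHIVELOG/thread_3_seq_120.arc';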

Question 7 of 15
Topic: Switchover
Answer
Mandatory pre-checks:
- Verify there is no apply lag: SHOW DATABASE VERBOSE standby_name, and check ApplyLag
- Confirm the protection mode is acceptable
- Ensure all primary instances are open and healthy
- Validate the Data Guard configuration: VALIDATE DATABASE primary_name
- Check for ORA- errors in the alert logs on both sides
In 19c, DGMGRL handles the full RAC switchover: it shuts down all but one
primary instance, performs the role transition, and restarts instances and
services. No manual SQL on individual instances is required.
SQL / Command Reference:
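A DGMGRL sketch, assuming configuration members named prim and stby:

DGMGRL> SHOW DATABASE VERBOSE 'stby';   -- check ApplyLag and TransportLag
DGMGRL> VALIDATE DATABASE 'prim';
DGMGRL> VALIDATE DATABASE 'stby';       -- look for "Ready for Switchover: Yes"
DGMGRL> SWITCHOVER TO 'stby';
DGMGRL> SHOW CONFIGURATION;             -- confirm the new roles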

Question 8 of 15
Topic: Active Data Guard
Answer
ADG requires the Oracle Active Data Guard license (separate from Data Guard,
which is included in EE). It allows the physical standby to be open READ ONLY
while MRP continues applying redo, enabling offload of queries, reporting,
and backups.
In a RAC standby:
- All standby instances can be open read-only simultaneously
- Only one instance runs MRP (unless multi-instance redo apply is enabled); apply and read access coexist on shared storage
- Read-only queries get consistent reads using undo data, which requires sufficient undo retention on the standby
- 19c feature: ADG DML Redirect, where DML issued on the standby is transparently forwarded to the primary, executed there, and the result returned to the standby session
Apply lag implications:
- Query workload on the standby competes with MRP for I/O and CPU, potentially increasing apply lag
- Size the SGA appropriately; the standby buffer cache serves both apply and read queries
SQL / Command Reference:
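A sketch run on the standby; ADG_REDIRECT_DML is the 19c setting for DML redirect:

-- Open read-only with apply running
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

SELECT open_mode FROM v$database;   -- expect READ ONLY WITH APPLY

-- 19c DML Redirect, per session or system-wide
ALTER SESSION ENABLE ADG_REDIRECT_DML;
ALTER SYSTEM SET adg_redirect_dml=TRUE SCOPE=BOTH SID='*';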

Question 9 of 15

Answer
SYNC/AFFIRM (Maximum Protection / Maximum Availability): LGWR waits for redo
to be written to the standby SRL and flushed to disk before the commit
completes. Highest data protection, highest latency overhead. In RAC, every
instance's LGWR waits independently, so network RTT adds directly to commit
latency. Suitable for a low-latency WAN (<5 ms) or a local standby.
SYNC/NOAFFIRM (Fast Sync, 12c and later): LGWR waits for redo to reach the
standby's memory, but not for the SRL disk write, which RFS completes
asynchronously. Slightly lower commit latency than SYNC/AFFIRM at a small
durability cost; most configurations still pair SYNC with AFFIRM.
ASYNC/NOAFFIRM (Maximum Performance): redo ships asynchronously and the
primary never waits for standby acknowledgment. Zero commit latency overhead;
the standby can lag by seconds to minutes. In RAC, each instance ships
independently, so lag can vary per thread. This is the default and the most
common choice for geographically distant standbys.
SQL / Command Reference:
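A sketch contrasting the three attribute combinations; the service name is an assumption:

-- Maximum Availability / Protection: SYNC with disk-write confirmation
ALTER SYSTEM SET log_archive_dest_2=
  'SERVICE=stby SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby'
  SCOPE=BOTH SID='*';

-- Fast Sync variant: replace the attributes with SYNC NOAFFIRM
-- Maximum Performance: replace the attributes with ASYNC NOAFFIRM

-- Observe the resulting lag on the standby
SELECT name, value, time_computed FROM v$dataguard_stats
WHERE name IN ('transport lag', 'apply lag');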

Question 10 of 15
Answer
A Far Sync instance is a special Oracle instance with no datafiles: it only
receives redo from the primary (SYNC) and forwards it to terminal standbys
(ASYNC). This lets you achieve zero-data-loss protection over long distances
without the latency of full SYNC shipping to a distant standby.
Architecture in RAC: Primary RAC -> Far Sync (near, SYNC) -> Terminal Standby
(distant, ASYNC). The Far Sync runs as a single instance (not RAC); it has a
controlfile, SRLs, and a parameter file, but NO datafiles. Each primary RAC
instance ships its thread to the Far Sync instance, which then forwards the
redo to the terminal standby.
Limitations:
- A Far Sync instance cannot be opened; it is always mounted
- You cannot switch over or fail over to a Far Sync instance; it is not a valid role-change target
- It does not support Active Data Guard (no datafiles, so nothing to read)
- A Far Sync must be a single instance; RAC Far Sync is not supported
- Recovery catalog and RMAN operations must skip the Far Sync (there are no datafiles to back up)
SQL / Command Reference:
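A sketch; fs1, prim, stby, the TNS alias, and the controlfile path are assumptions:

-- On the primary: create the far sync controlfile
ALTER DATABASE CREATE FAR SYNC INSTANCE CONTROLFILE AS '/u01/oradata/fs1/control01.ctl';

-- In the broker: add the far sync and route redo through it
DGMGRL> ADD FAR_SYNC fs1 AS CONNECT IDENTIFIER IS fs1_tns;
DGMGRL> ENABLE FAR_SYNC fs1;
DGMGRL> EDIT DATABASE 'prim' SET PROPERTY RedoRoutes = '(LOCAL : fs1 SYNC)';
DGMGRL> EDIT FAR_SYNC fs1 SET PROPERTY RedoRoutes = '(prim : stby ASYNC)';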

Question 11 of 15
Topic: Troubleshooting
Answer
Step 1: Identify MRP status. Query v$managed_standby (or v$dataguard_process
in 12.2+) for the MRP process status and current sequence.
Step 2: Check the standby alert log for the exact ORA- error that stopped
MRP. Common causes: a missing archived log, a corrupt block, or an internal
error (ORA-00600).
Step 3: Check for gaps using v$archive_gap and v$archived_log where
applied='NO'.
Step 4: Check the RFS processes. Are logs still arriving? Query
v$managed_standby where process='RFS'.
Step 5: Restart MRP. Cancel and re-issue RECOVER MANAGED STANDBY DATABASE.
Step 6: If corruption is detected, use RMAN block media recovery on the
standby. If a gap cannot be resolved, consider reinstating the standby from a
primary RMAN backup.
SQL / Command Reference:
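A sketch following the steps above; the file and block numbers in the RMAN command are placeholders:

-- Steps 1 and 4: apply and transport processes
SELECT inst_id, process, status, thread#, sequence#
FROM gv$managed_standby ORDER BY process;

-- Step 3: gaps and unapplied archives
SELECT * FROM v$archive_gap;
SELECT thread#, sequence# FROM v$archived_log WHERE applied='NO' ORDER BY 1, 2;

-- Step 5: restart MRP
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

-- Step 6: block media recovery on the standby
RMAN> RECOVER DATAFILE 7 BLOCK 42;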

Question 12 of 15
Topic: Broker Configuration
Answer
The Data Guard Broker configuration is stored in two binary files (dr1*.dat
and dr2*.dat) on each database, defined by the DG_BROKER_CONFIG_FILE1/2
parameters. These are NOT the same as the controlfile.
In RAC: all primary instances should point to the same broker config files,
typically on a shared ASM diskgroup. If instances point to different files
(e.g., a local file per node), the broker state diverges and DGMGRL shows
inconsistent configurations.
Out-of-sync symptoms:
- SHOW CONFIGURATION returns ORA-16596 or ORA-16714
- Broker operations fail with configuration file mismatch errors
- Switchover/failover cannot proceed
Recovery steps: disable the broker on all instances, delete the stale config
files from all nodes, re-enable the broker, then recreate the full
configuration in DGMGRL.
SQL / Command Reference:
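A sketch, assuming a shared +DATA diskgroup and members named prim and stby:

-- Same shared config files on every instance
ALTER SYSTEM SET dg_broker_config_file1='+DATA/PRIM/dr1prim.dat' SCOPE=BOTH SID='*';
ALTER SYSTEM SET dg_broker_config_file2='+DATA/PRIM/dr2prim.dat' SCOPE=BOTH SID='*';

-- Rebuild after divergence: stop the broker, remove stale dr*.dat files, restart
ALTER SYSTEM SET dg_broker_start=FALSE SID='*';
ALTER SYSTEM SET dg_broker_start=TRUE SID='*';

DGMGRL> CREATE CONFIGURATION dgconf AS PRIMARY DATABASE IS 'prim' CONNECT IDENTIFIER IS prim_tns;
DGMGRL> ADD DATABASE 'stby' AS CONNECT IDENTIFIER IS stby_tns;
DGMGRL> ENABLE CONFIGURATION;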

Question 13 of 15
Topic: RMAN Duplication
Answer
RMAN's DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE works with
RAC primaries but requires careful parameter handling, since the standby may
have different instance counts, node names, and storage paths.
Parameters requiring special attention:
- DB_FILE_NAME_CONVERT: maps primary datafile paths to standby paths
- LOG_FILE_NAME_CONVERT: maps online redo log paths
- DB_UNIQUE_NAME: must be unique and different from the primary's
- CLUSTER_DATABASE: set to FALSE during duplication; re-enable afterwards
- THREAD and UNDO_TABLESPACE: remove RAC-specific settings that do not apply to a single-instance standby
- INSTANCE_NUMBER, INSTANCE_NAME: clear primary RAC-specific values
After duplication, add SRLs, re-enable the Broker, and set
CLUSTER_DATABASE=TRUE if the standby is also RAC.
SQL / Command Reference:
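An RMAN sketch; the connect strings and +DATA path conversions are assumptions:

RMAN> CONNECT TARGET sys@prim
RMAN> CONNECT AUXILIARY sys@stby
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE
        SPFILE
          SET db_unique_name='stby'
          SET cluster_database='FALSE'
          SET db_file_name_convert='+DATA/PRIM','+DATA/STBY'
          SET log_file_name_convert='+DATA/PRIM','+DATA/STBY'
        NOFILENAMECHECK;

-- Afterwards, if the standby is also RAC:
ALTER SYSTEM SET cluster_database=TRUE SCOPE=SPFILE SID='*';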


Question 14 of 15
Topic: 19c New Features

Answer
1. Automatic reinstatement after failover (improved): 19c improved the
automatic flashback and reinstatement flow, so the old primary can be
automatically converted to a standby after a failover without manual
intervention, even in RAC configurations.
2. Multiple Observer support (up to 3): 19c allows up to 3 Observers per
Data Guard configuration. This eliminates the Observer as a single point of
failure in FSFO: a master observer and backup observers coordinate, improving
failover reliability in RAC environments.
3. ADG DML Redirect: transparent DML redirection from standby to primary.
Applications connecting to the standby for reads can also issue DML without
application changes, improving workload distribution in RAC ADG deployments.
4. Automatic Block Media Recovery from the standby: in 19c, the primary can
automatically repair corrupt blocks using clean copies from a synchronized
ADG standby, with no DBA intervention required.
5. Enhanced VALIDATE DATABASE in DGMGRL: 19c's validate command checks
RAC-specific configuration more comprehensively, including SRL counts, thread
configuration, and service failover definitions.
SQL / Command Reference:
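A sketch touching the features above; the observer name is an assumption:

-- DML redirect (feature 3)
ALTER SYSTEM SET adg_redirect_dml=TRUE SCOPE=BOTH SID='*';

-- Enhanced validation (feature 5)
DGMGRL> VALIDATE DATABASE 'stby';
DGMGRL> VALIDATE NETWORK CONFIGURATION FOR ALL;
DGMGRL> VALIDATE STATIC CONNECT IDENTIFIER FOR ALL;

-- Multiple observers (feature 2); run on up to three hosts
DGMGRL> START OBSERVER obs1;
DGMGRL> SHOW OBSERVER;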

Question 15 of 15
Topic: Advanced Scenario
Answer
On the primary side:
- Instance 3 crashes; the failure is detected (PMON/LMON on the surviving instances) and the cluster reconfigures
- Oracle Clusterware evicts instance 3 from the cluster
- Instance recovery for thread 3 is performed by a surviving instance (say, instance 1): it reads thread 3's online redo logs, rolls the thread forward, and rolls back uncommitted transactions
- Instance 1 archives thread 3's current online redo log (the end-of-thread archive), which carries a thread-close marker
On the standby side:
- The RFS process for thread 3 loses its connection (the redo transport from instance 3 is gone)
- The standby receives the final archive for thread 3 from instance 1 (via FAL or shipped by instance 1's ARCn)
- MRP detects the end-of-thread marker in thread 3's archive sequence, applies the remaining thread 3 redo (including the rollback redo generated by the primary's instance recovery), and stops waiting for further redo from that thread
- Once thread 3 is fully applied, MRP marks the thread as closed and continues applying threads 1, 2, and 4 normally
Key point: the standby does NOT wait for instance 3 to come back. It handles
the thread closure autonomously using the shipped redo, maintaining apply
consistency.
SQL / Command Reference:
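A monitoring sketch run on the standby during such an event; the end-of-redo flag column is an assumption to verify against your version's v$archived_log:

-- Per-thread apply progress; thread 3 stops advancing after its final archive
SELECT thread#, MAX(sequence#) AS last_applied
FROM v$archived_log WHERE applied='YES'
GROUP BY thread# ORDER BY thread#;

-- Inspect thread 3's final archives for the end-of-redo marker
SELECT thread#, sequence#, end_of_redo_type
FROM v$archived_log WHERE thread# = 3 ORDER BY sequence#;

-- Overall apply health
SELECT name, value FROM v$dataguard_stats WHERE name = 'apply lag';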



