
Tuesday, 12 May 2026

How to Access Oracle and SQL Server Tables from PostgreSQL

 


A Popular DBA Interview Question Explained

PostgreSQL is not just a database. It can also serve as:

· A federation database
· A central reporting layer
· A cross-platform query engine

 

One of the most common DBA interview questions today is:

“How do you access Oracle or SQL Server tables directly from PostgreSQL?”

 

The answer is: Foreign Data Wrappers (FDW)

 

Interview Question

 

Q1. How can PostgreSQL access Oracle database tables?

Answer: PostgreSQL uses the oracle_fdw extension.

 

This extension allows PostgreSQL to:

· Connect to Oracle remotely
· Query Oracle tables
· Perform joins
· Read and write data without migrating it

Oracle to PostgreSQL Architecture

Oracle Database  -->  oracle_fdw  -->  PostgreSQL


Step-by-Step Oracle FDW Example

Step 1) Install Oracle Instant Client. Download the Oracle client libraries from the Oracle Instant Client Downloads page.

Install:

dnf install oracle-instantclient-basic*.rpm
dnf install oracle-instantclient-devel*.rpm

 

Step 2) Install oracle_fdw

 

git clone https://github.com/laurenz/oracle_fdw.git
cd oracle_fdw

make

make install

Step 3) Create Extension

Log in to PostgreSQL and run:

CREATE EXTENSION oracle_fdw;

Step 4) Create Foreign Server

 

CREATE SERVER oracle_server
  FOREIGN DATA WRAPPER oracle_fdw
  OPTIONS (dbserver '//192.168.233.134:1521/XEPDB1');

 

Step 5) Create User Mapping

CREATE USER MAPPING FOR postgres
  SERVER oracle_server
  OPTIONS (user 'system', password 'oraclepass');

 

Step 6) Create Foreign Table. Suppose the Oracle table is:

HR.EMPLOYEES

 

Map it into PostgreSQL:

 

CREATE FOREIGN TABLE oracle_employees (
  emp_id   integer,
  emp_name text,
  salary   numeric
) SERVER oracle_server
  OPTIONS (schema 'HR', table 'EMPLOYEES');

Step 7) Query Oracle Table

 

SELECT * FROM oracle_employees;

 

PostgreSQL reads the Oracle table directly.
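Mapping tables one by one gets tedious for large schemas. oracle_fdw also supports IMPORT FOREIGN SCHEMA, so the step above could be done in bulk; a sketch against the server created earlier (the DEPARTMENTS table is assumed for illustration):

```sql
-- Pull in selected tables from Oracle's HR schema at once,
-- instead of hand-writing CREATE FOREIGN TABLE for each one.
CREATE SCHEMA oracle_hr;
IMPORT FOREIGN SCHEMA "HR"
  LIMIT TO ("EMPLOYEES", "DEPARTMENTS")
  FROM SERVER oracle_server
  INTO oracle_hr;
```

Column names and types are derived from the Oracle data dictionary, which avoids mapping mistakes.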


Interview Question

 

Q2. How can PostgreSQL access SQL Server tables?

Answer: PostgreSQL uses:

· tds_fdw
· ODBC FDW

Most commonly: tds_fdw

It connects PostgreSQL with:

· Microsoft SQL Server
· Sybase

both using the TDS protocol.

SQL Server to PostgreSQL Architecture

SQL Server  -->  tds_fdw  -->  PostgreSQL

 

Step-by-Step SQL Server FDW Example

Step 1) Install FreeTDS

dnf install freetds freetds-devel
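Before involving PostgreSQL at all, it can help to register the SQL Server host in FreeTDS's own config and verify connectivity with the bundled tsql client. The entry name and TDS version below are illustrative:

```
# /etc/freetds.conf -- illustrative entry
[mssql_company]
    host = 192.168.223.132
    port = 1433
    tds version = 7.4
```

Then test from the shell: `tsql -S mssql_company -U sa -P sqlpass`. If tsql cannot connect, tds_fdw will not connect either.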

 

Step 2) Install tds_fdw

git clone https://github.com/tds-fdw/tds_fdw.git
cd tds_fdw

make USE_PGXS=1
make USE_PGXS=1 install

Step 3) Create Extension

CREATE EXTENSION tds_fdw;

Step 4) Create Foreign Server

CREATE SERVER sqlserver_svr
  FOREIGN DATA WRAPPER tds_fdw
  OPTIONS (servername '192.168.223.132', port '1433', database 'company');


Step 5) Create User Mapping

 

CREATE USER MAPPING FOR postgres
  SERVER sqlserver_svr
  OPTIONS (username 'sa', password 'sqlpass');

 

Step 6) Create Foreign Table

Suppose SQL Server table: dbo.customers

Create mapping:

CREATE FOREIGN TABLE customers_sqlserver (
  customer_id   integer,
  customer_name text,
  city          text
) SERVER sqlserver_svr
  OPTIONS (schema_name 'dbo', table_name 'customers');

 

Step 7) Query SQL Server Table

 

SELECT * FROM customers_sqlserver;

PostgreSQL reads SQL Server data directly.

Advanced Interview Question

Q3. Can PostgreSQL join Oracle and SQL Server tables together?

Answer Yes. Example:

SELECT o.emp_name, s.city
FROM oracle_employees o
JOIN customers_sqlserver s ON o.emp_id = s.customer_id;

PostgreSQL can join tables from different database technologies.
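When joining across two different wrappers, it is worth checking what each remote server actually executes; a sketch using the foreign tables defined above:

```sql
EXPLAIN (VERBOSE, COSTS OFF)
SELECT o.emp_name, s.city
FROM oracle_employees o
JOIN customers_sqlserver s ON o.emp_id = s.customer_id;
-- VERBOSE shows the remote query sent to each server.
-- Each FDW can push filters down to its own server, but the
-- cross-server join itself always runs locally in PostgreSQL.
```

This matters for performance: large unfiltered foreign tables are pulled over the network before joining, so push predicates into the query wherever possible.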

Oracle Data Guard Standby in RAC 19c: Advanced Interview Questions & Answers

 


15 Hard-Level Questions | Oracle 19c | RAC & Data Guard

 

 

Question 1 of 15

 

Topic: Architecture

Q1. What are the key architectural differences between a RAC primary and a RAC standby database in Data Guard 19c, particularly regarding redo log shipping?

Answer

In a RAC primary, each instance has its own set of online redo logs (ORLs) and generates redo independently. In Data Guard with RAC, each RAC instance ships its own redo thread directly to the standby. The physical standby must be configured with Standby Redo Logs (SRLs) for every primary thread, plus one extra per thread (the n+1 rule). Key differences:

  Primary RAC: n instances, each with its own thread of redo, independent ARCn shipping

  Standby RAC: MRP (Managed Recovery Process) or multiple apply processes coordinate across all received threads

  FAL (Fetch Archive Log) is used for gap resolution per-instance thread

  In 19c, Real-Time Apply on a RAC standby is fully supported with parallel MRP0

SQL / Command Reference:

-- Check SRL count per thread (should be ORL groups per thread + 1)
SELECT thread#, group#, status FROM v$standby_log ORDER BY thread#;


Question 2 of 15


 

Topic: Standby Redo Logs

Q2. Explain the n+1 standby redo log (SRL) rule for a RAC standby. What happens if it is violated?

Answer

The n+1 rule states: for each redo thread (instance) on the primary, the standby must have at least as many SRL groups as the primary has ORL groups for that thread, plus one additional group. If the primary RAC has 3 instances each with 3 ORL groups, the standby needs at least 4 SRL groups per thread (12 total).

Violation consequences:

  Shipping stalls when all SRL groups are full and the standby cannot write incoming redo

  Alert log shows: No standby redo logfile available for thread N

  In SYNC mode this causes the primary to hang (waiting for acknowledgment)

  In ASYNC mode it causes archive log accumulation and eventual lag

  Gap detection triggers but FAL may not resolve if SRLs remain saturated

SQL / Command Reference:

-- Add an SRL on the standby
ALTER DATABASE ADD STANDBY LOGFILE THREAD 2
GROUP 11 ('/u01/oradata/srl11.log') SIZE 200M;
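The required SRL count can be derived from the primary's own ORL configuration; a sketch, run on the primary, applying the n+1 rule per thread:

```sql
-- ORL groups per thread, plus one, equals the SRLs the standby
-- needs for that thread
SELECT thread#,
       COUNT(*)     AS orl_groups,
       COUNT(*) + 1 AS srls_needed
FROM v$log
GROUP BY thread#
ORDER BY thread#;
```

Compare the result against the v$standby_log query on the standby to spot undersized threads before they stall shipping.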

Question 3 of 15


 

Topic: Protection Modes

Q3. How does Maximum Protection mode behave in a RAC primary, particularly during a network partition?

Answer


In Maximum Protection mode, every redo transaction must be written to at least one standby SRL before the primary commits — using SYNC + AFFIRM attributes. The primary remains open only as long as at least one synchronized standby is reachable and confirming redo receipt.

Network partition behavior:

  If contact with all synchronized standbys is lost, the primary shuts itself down rather than risk data loss; this is the defining behavior of Maximum Protection

  In RAC, this applies per-instance: if all instances lose standby connectivity, all instances shut down

  The shutdown is controlled by LGWR and observed in the alert log as: LGWR: standby lost, shutting down

  To recover, you must restore standby connectivity and mount/open the primary manually

SQL / Command Reference:

-- Check current mode
SELECT protection_mode, protection_level FROM v$database;
-- PROTECTION_LEVEL shows degraded mode if standby is unreachable

Question 4 of 15

 

Topic: Fast-Start Failover

Q4. Describe Fast-Start Failover (FSFO) in a RAC 19c environment. What is the role of the Observer, and what conditions can cause a false failover?

Answer

FSFO automates failover without DBA intervention when the Observer detects primary failure. In RAC 19c, the Observer monitors the primary cluster's VIP/SCAN and the standby. Key conditions for FSFO to trigger:

  Observer loses connectivity to primary AND standby has already received sufficient redo (within FastStartFailoverThreshold seconds lag)

  The standby must be in a synchronized state (or within the lag threshold for async)

False failover causes in RAC:

  Network partition where Observer cannot reach primary but primary is healthy (split-brain)

  Temporary cluster interconnect issues causing all RAC instances to appear offline simultaneously

  Observer VM migration or maintenance causing observation gap

  Clock skew between Observer host and cluster nodes affecting timeout calculations


Mitigations: Use multiple Observers (19c supports up to 3 per configuration), place the Observer on a separate, stable host, tune FastStartFailoverThreshold appropriately (default 30s), and use FastStartFailoverPmyShutdown=TRUE to avoid split-brain.

SQL / Command Reference:

-- Check FSFO status
DGMGRL> SHOW FAST_START FAILOVER;
DGMGRL> SHOW OBSERVER;
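The mitigations above are set through Broker properties; a sketch of enabling FSFO from DGMGRL (the threshold value of 45 seconds is illustrative):

```
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverThreshold = 45;
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverPmyShutdown = 'TRUE';
DGMGRL> ENABLE FAST_START FAILOVER;
```

ENABLE FAST_START FAILOVER will fail unless the configuration meets the prerequisites (standby in SYNC or within the lag limit, flashback database enabled on both sides).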

Question 5 of 15

 

Topic: Apply Services

Q5. What is Real-Time Apply on a physical standby, how does it differ from traditional archivelog-based apply, and what are the caveats on a multi-instance RAC standby?

Answer

Real-Time Apply (RTA) allows MRP to apply redo directly from SRL groups while they are being written — no need to wait for the log to be archived first. This minimizes apply lag to near-zero.

Traditional vs Real-Time:

  Traditional: MRP waits for an archived log to be fully received and registered before applying

  Real-Time: MRP reads from the active SRL as redo streams in, applying changes continuously

RAC standby caveats:

  In a RAC standby, traditionally only one instance runs MRP (single-instance apply); the other instances are mounted but idle from an apply perspective. In 19c, multi-instance redo apply is available

  If the instance running MRP fails, MRP must restart on another instance — this causes a brief apply interruption

  SRL groups must be on shared storage accessible from all standby instances

  Parallel apply (multiple RFS/MRP workers) can introduce brief ordering delays if thread coordinators lag

SQL / Command Reference:

-- Start Real-Time Apply
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;


Question 6 of 15

 

Topic: Gap Resolution

Q6. Explain the Fetch Archive Log (FAL) mechanism in a RAC Data Guard setup. How does FAL handle gaps when multiple primary instances are involved?

Answer

FAL is the gap resolution mechanism. When the standby detects a missing archive log sequence for any thread, it requests the missing log from the FAL server (typically the primary). In RAC, each thread is a separate redo stream, so gaps are tracked per-thread.

Multi-instance gap handling:

  Each standby RFS process tracks sequences per thread independently

  FAL_SERVER on the standby should point to all primary instances (or their SCAN) to maximize availability

  FAL_CLIENT tells the primary where to ship the gap log

  If the instance that archived a specific sequence is down, the FAL request fails unless the archive is accessible from a surviving instance or shared archive destination

Best practice in RAC: Use a shared archive destination (ASM diskgroup or NFS) accessible from all primary instances, and set FAL_SERVER to the primary's SCAN listener so any surviving instance can serve gaps.

SQL / Command Reference:

-- Standby parameters
FAL_SERVER = 'PRIMARY_SCAN'
FAL_CLIENT = 'STANDBY_DB'


-- Check for gaps
SELECT thread#, low_sequence#, high_sequence# FROM v$archive_gap;


Question 7 of 15

 

Topic: Switchover

Q7. Walk through the exact steps to perform a switchover from a RAC primary to a RAC physical standby in Data Guard 19c using DGMGRL. What pre-checks are mandatory?

Answer

Mandatory pre-checks:

  Verify no apply lag: SHOW DATABASE VERBOSE standby_name and check ApplyLag

  Confirm protection mode is acceptable

  Ensure all primary instances are open and healthy

  Validate Data Guard configuration: VALIDATE DATABASE primary_name

  Check for ORA- errors in alert logs on both sides

In 19c, DGMGRL handles the full RAC switchover: it quiesces all primary instances, flips roles, and restarts services. No manual SQL on individual instances is required.

SQL / Command Reference:

-- 1. Connect with DGMGRL
dgmgrl sys/password@primary
-- 2. Validate
DGMGRL> VALIDATE DATABASE 'primary_db';
DGMGRL> VALIDATE DATABASE 'standby_db';
DGMGRL> SHOW CONFIGURATION;
-- 3. Switchover
DGMGRL> SWITCHOVER TO 'standby_db';
-- 4. Verify
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SHOW DATABASE VERBOSE 'new_primary';

Question 8 of 15

 

Topic: Active Data Guard

Q8. How does Active Data Guard (ADG) work on a RAC standby, and what are the licensing, resource, and apply-lag implications of opening the standby read-only while apply is running?

Answer

ADG requires the Oracle Active Data Guard license (separate from Data Guard, which is included in EE). It allows the physical standby to be open READ ONLY while MRP continues applying redo, enabling offload of queries, reporting, and backups.

In RAC standby:

  All standby instances can be open read-only simultaneously

  Only one instance runs MRP; apply and read access coexist on shared storage

  Read-only queries get consistent reads using undo data; this requires sufficient undo retention on the standby

  19c feature: ADG DML Redirect. DML issued on the standby is transparently forwarded to the primary, executed there, and the result returned to the standby session

Apply lag implications:

  Query workload on standby competes with MRP for I/O and CPU, potentially increasing apply lag

  Size the SGA appropriately: the buffer cache on the standby serves both apply and read queries

SQL / Command Reference:

-- Open ADG standby
ALTER DATABASE OPEN READ ONLY;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;


-- Monitor lag
SELECT name, value FROM v$dataguard_stats WHERE name LIKE '%lag%';
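The DML Redirect feature mentioned above is switched on per session (or database-wide via the ADG_REDIRECT_DML parameter); a sketch on an open ADG standby, where the app_log table is hypothetical:

```sql
-- Enable redirection for this standby session only
ALTER SESSION ENABLE ADG_REDIRECT_DML;

-- This INSERT is transparently shipped to the primary, executed
-- there, and its result is visible to this standby session
INSERT INTO app_log (msg) VALUES ('written via standby');
COMMIT;
```

Redirected DML pays a round trip to the primary per statement, so it suits occasional writes from read-mostly workloads, not write-heavy ones.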


Question 9 of 15


 

Topic: Redo Transport

Q9. Compare the SYNC/AFFIRM, SYNC/NOAFFIRM, and ASYNC/NOAFFIRM redo transport modes in a RAC Data Guard configuration.

Answer

SYNC/AFFIRM (Maximum Protection / Maximum Availability): LGWR waits for redo to be written to standby SRL and flushed to disk before committing. Highest data protection, highest latency overhead. In RAC: every instance's LGWR waits independently — network RTT directly adds to commit latency. Suitable for low-latency WAN (<5ms) or local standby.

SYNC/NOAFFIRM: LGWR waits for redo to be written to standby SRL buffer only (in-memory), not necessarily flushed to disk. Slight improvement over SYNC/AFFIRM; standby SRL not durable until DBWR flushes. Rarely used — most use AFFIRM with SYNC for consistency.

ASYNC/NOAFFIRM (Maximum Performance): LGWR or ARCn ships redo asynchronously; the primary never waits for standby acknowledgment. Zero commit latency overhead; the standby can lag by seconds to minutes. In RAC: each instance ships independently, so lag can vary per thread. Default and most common for geographically distant standbys.

SQL / Command Reference:

-- Check current transport settings
SELECT dest_name, target, archiver, schedule,
       net_timeout, delay_mins, db_unique_name, affirm, transmit_mode
FROM v$archive_dest
WHERE status = 'VALID';
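Changing a destination's transport mode is done through LOG_ARCHIVE_DEST_n; a sketch of switching destination 2 to synchronous shipping (service name and timeout are illustrative):

```sql
-- Move destination 2 to SYNC/AFFIRM with a 30-second network timeout
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=standby_db SYNC AFFIRM NET_TIMEOUT=30
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby_db'
  SCOPE=BOTH SID='*';
```

In a RAC primary, SID='*' applies the setting to every instance, so all threads ship synchronously; upgrading the protection mode to Maximum Availability is a separate ALTER DATABASE step.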

Question 10 of 15


 


Topic: Far Sync

Q10. What is a Far Sync instance, how does it fit into a RAC Data Guard architecture, and what are its limitations?

Answer

A Far Sync instance is a special Oracle instance with no datafiles: it only receives redo from the primary (SYNC) and forwards it to terminal standbys (ASYNC). This lets you achieve zero-data-loss protection over long distances without the latency of full SYNC shipping to a distant standby.

Architecture in RAC: Primary RAC -> Far Sync (near, SYNC) -> Terminal Standby (distant, ASYNC). Far Sync runs as a single instance (not RAC); it has a controlfile, SRLs, and a parameter file, but NO datafiles. Each primary RAC instance ships to the Far Sync instance; Far Sync then forwards to the terminal standby.

Limitations:

  Far Sync cannot be opened; it is always mounted

  Switchover to Far Sync is not possible; it is not a valid failover target

  Does not support Active Data Guard (no datafiles, so nothing to read)

  Far Sync must be a single instance; a RAC Far Sync is not supported

  Recovery catalog and RMAN operations must skip Far Sync

SQL / Command Reference:

-- Verify far sync
DGMGRL> SHOW DATABASE VERBOSE 'far_sync_name';
-- Type should be: FAR SYNC
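Wiring a Far Sync into the topology is a Broker operation; a sketch with illustrative member and TNS names, routing primary redo through the Far Sync to the terminal standby:

```
DGMGRL> ADD FAR_SYNC 'fs1' AS CONNECT IDENTIFIER IS 'fs1_tns';
DGMGRL> ENABLE FAR_SYNC 'fs1';
-- Primary ships SYNC to the nearby Far Sync
DGMGRL> EDIT DATABASE 'primary_db' SET PROPERTY RedoRoutes = '(LOCAL : fs1 SYNC)';
-- Far Sync forwards ASYNC to the distant terminal standby
DGMGRL> EDIT FAR_SYNC 'fs1' SET PROPERTY RedoRoutes = '(primary_db : standby_db ASYNC)';
```

The RedoRoutes properties are what distinguish this from a plain cascade: the primary commits against the nearby SYNC hop while the long-haul leg stays asynchronous.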

 

 

Question 11 of 15

 

Topic: Troubleshooting

Q11. A physical standby in a RAC 19c setup reports 'ORA-16766: Redo Apply is stopped' and shows an apply lag of 4 hours. Describe your systematic troubleshooting approach.

Answer

Step 1: Identify MRP status. Query v$managed_standby for MRP process status and sequence.

Step 2: Check the alert log on the standby for the exact ORA- error that stopped MRP. Common causes: ORA-01547 (failed to establish dependency), ORA-00600 (internal), corrupt block, missing archived log.

Step 3: Check for gaps using v$archive_gap and v$archived_log where applied = 'NO'.

Step 4: Check RFS processes: are logs still arriving? Query v$managed_standby where process = 'RFS'.

Step 5: Restart MRP: cancel and re-issue RECOVER MANAGED STANDBY DATABASE.

Step 6: If corruption is detected, use RMAN block media recovery on the standby. If a gap cannot be resolved, consider reinstating the standby from a primary RMAN backup.

SQL / Command Reference:

-- Step 1: MRP status
SELECT process, status, sequence#, delay_mins FROM v$managed_standby WHERE process LIKE 'MRP%';


-- Step 3: Gaps
SELECT * FROM v$archive_gap;
SELECT thread#, sequence#, applied FROM v$archived_log WHERE applied='NO' ORDER BY thread#, sequence#;


-- Step 5: Restart MRP
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
USING CURRENT LOGFILE DISCONNECT;
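Step 4 has no query in the reference above; a sketch of checking whether redo is still being received:

```sql
-- Step 4: RFS processes receiving redo, per thread
SELECT process, status, thread#, sequence#
FROM v$managed_standby
WHERE process = 'RFS';
```

If no RFS rows appear (or sequences are not advancing), the problem is on the transport side, not the apply side, and the primary's LOG_ARCHIVE_DEST_n status should be checked next.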

Question 12 of 15

 

Topic: Broker Configuration

Q12. What is the Oracle Data Guard Broker configuration file, and what happens if it becomes out of sync between RAC nodes or between primary and standby? How do you recover?

Answer

The Data Guard Broker configuration is stored in two binary files (dr1*.dat and dr2*.dat) on each database — defined by the DG_BROKER_CONFIG_FILE1/2 parameters. These are NOT the same as the controlfile.


In RAC: All primary instances should point to the same broker config files, typically on a shared ASM diskgroup. If instances point to different files (e.g., a local file per node), the broker state diverges and DGMGRL shows inconsistent configurations.

Out-of-sync symptoms:

  SHOW CONFIGURATION returns ORA-16596 or ORA-16714

  Broker operations fail with configuration file mismatch

  Switchover/failover cannot proceed

Recovery steps: Disable broker on all instances, delete stale config files from all nodes, re-enable broker, then recreate the full configuration in DGMGRL.

SQL / Command Reference:

-- 1. Disable broker on all instances
ALTER SYSTEM SET dg_broker_start=FALSE;


-- 2. Delete stale config files (dr1*.dat, dr2*.dat from all nodes)



-- 3. Re-enable broker
ALTER SYSTEM SET dg_broker_start=TRUE;



-- 4. Recreate configuration
DGMGRL> CREATE CONFIGURATION 'dg_config'
AS PRIMARY DATABASE IS 'primary_db' CONNECT IDENTIFIER IS 'primary_scan';
DGMGRL> ADD DATABASE 'standby_db'
AS CONNECT IDENTIFIER IS 'standby_scan' MAINTAINED AS PHYSICAL;
DGMGRL> ENABLE CONFIGURATION;

 

Question 13 of 15

 

Topic: RMAN Duplication

Q13. How do you create a physical standby database from an active RMAN duplicate in 19c when the primary is a RAC database? What parameters require special attention?

Answer

RMAN's DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE works with RAC primaries but requires careful parameter handling, since the standby may have different instance counts, node names, and storage paths.

Parameters requiring special attention:

  DB_FILE_NAME_CONVERT: Maps primary datafile paths to standby paths

  LOG_FILE_NAME_CONVERT: Maps online redo log paths

  DB_UNIQUE_NAME: Must be unique and different from the primary's

  CLUSTER_DATABASE: Set to FALSE during duplication; re-enable after

  THREAD and UNDO_TABLESPACE: Remove RAC-specific parameters not applicable to single-instance standby

  INSTANCE_NUMBER, INSTANCE_NAME: Clear primary RAC-specific values

After duplication, add SRLs, re-enable Broker, and set CLUSTER_DATABASE=TRUE if standby is also RAC.

SQL / Command Reference:

-- Connect RMAN to the target (primary) and auxiliary (standby)
rman TARGET sys/pwd@primary AUXILIARY sys/pwd@standby


RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE
DORECOVER SPFILE
SET db_unique_name='standby_db'
SET cluster_database='FALSE'
SET fal_server='primary_scan'
SET log_archive_dest_2='SERVICE=primary_db ASYNC VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)'
NOFILENAMECHECK;


Question 14 of 15

 

Topic: 19c New Features

Q14. What are the key Data Guard enhancements introduced in Oracle 19c relevant to RAC standby databases, and how do they improve upon 18c or 12.2 behavior?

Answer

1.  Automatic reinstatement after failover (improved): 19c improved the automatic flashback and reinstatement flow: the old primary can be automatically converted to a standby post-failover without manual intervention, even in RAC configurations.

2.  Multiple Observer support (up to 3): 19c allows up to 3 Observers per Data Guard configuration. This eliminates the Observer SPOF in FSFO: a master observer and backup observers coordinate, improving failover reliability in RAC environments.

3.  ADG DML Redirect: Transparent DML redirection from standby to primary — applications connecting to the standby for reads can also issue DML without application changes, improving workload distribution in RAC ADG deployments.

4.  Automatic Block Media Recovery from standby: In 19c, the primary can automatically recover corrupt blocks using clean copies from a synchronized ADG standby — zero DBA intervention required.

5.  VALIDATE DATABASE in DGMGRL enhanced: 19c DGMGRL's validate command checks RAC-specific configurations, including SRL counts, thread configurations, and service failover definitions, more comprehensively.

SQL / Command Reference:

-- Use multiple observers in 19c
DGMGRL> START OBSERVER observer1
  FILE IS '/u01/obs/obs1.dat'
  LOGFILE IS '/u01/obs/obs1.log'
  CONNECT IDENTIFIER IS 'primary_scan';


-- Validate RAC-specific configuration
DGMGRL> VALIDATE DATABASE 'standby_db';


Question 15 of 15

 

Topic: Advanced Scenario

Q15. In a RAC primary with 4 instances, instance 3 crashes mid-transaction. Explain the exact sequence of events on the physical standby regarding redo application, thread recovery, and LGWR/ARCn behavior.

Answer

On the primary side:

  Instance 3 crashes; PMON detects the failure and notifies other instances

  Cluster Ready Services (CRS) evicts instance 3 from the cluster

  Instance recovery for thread 3 is performed by a surviving instance (e.g., instance 1): it reads thread 3's online redo logs and rolls back uncommitted transactions

  Instance 1 archives thread 3's current online redo log (the end-of-thread archive), writing a special thread close marker

On the standby side:

  The RFS process for thread 3 loses its connection (the LGWR/ARCn from instance 3 is gone)

  The standby receives the final archive for thread 3 from instance 1 (via FAL or shipped by instance 1's ARCn)

  MRP detects the end-of-thread marker in thread 3's archive sequence and performs thread recovery locally on the standby

  MRP rolls back uncommitted thread 3 transactions using undo data in the redo stream

  Once thread recovery is complete, MRP marks thread 3 as recovered and continues applying threads 1, 2, and 4 normally

Key point: The standby does NOT wait for instance 3 to come back; it performs full thread recovery autonomously using the shipped redo, maintaining apply consistency.

SQL / Command Reference:

-- Monitor thread-level apply on standby
SELECT thread#, sequence#, applied, first_time, next_time FROM v$archived_log
WHERE thread# = 3 ORDER BY sequence# DESC FETCH FIRST 10 ROWS ONLY;


-- Check thread status
SELECT thread#, status, enabled FROM v$thread;