LogMiner vs. binary file reads - a CDC failsafe?
The following relates to Oracle-to-SQL Server replication:
I have a scenario where we experience downtime in the target SQL Server database because its log file is full. This causes a fatal error in Attunity, and the tasks stop. When the target database is up and running again (only hours later), all Replicate tasks fail to resume because they cannot find the required archived redo log sequence. This is because the archived logs are removed overnight by an Oracle cleanup job.
I believe there is other third-party software that removes and compresses the logs, so Attunity would not be able to read them from that software's location either (not 100% sure about this).
We might consider copying the logs to a separate location and reading them from there as an alternative, or as a failsafe in case downtime means we cannot catch up with the current log sequence.
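The copy-aside idea above could be sketched as a small sync job that mirrors archived redo logs to a second location before the nightly cleanup removes them. This is only a minimal sketch: the directory paths and the `.arc` suffix are assumptions about your environment, not Attunity settings.

```python
# Minimal sketch of a failsafe copy job for archived redo logs.
# Paths and the ".arc" suffix are placeholders -- adjust to your setup.
import os
import shutil

def copy_new_logs(src_dir, dest_dir, suffix=".arc"):
    """Copy archive logs from src_dir to dest_dir, skipping files
    already present in dest_dir. Returns the names of files copied."""
    os.makedirs(dest_dir, exist_ok=True)
    copied = []
    for name in sorted(os.listdir(src_dir)):
        if not name.endswith(suffix):
            continue  # ignore non-log files
        dest = os.path.join(dest_dir, name)
        if not os.path.exists(dest):
            shutil.copy2(os.path.join(src_dir, name), dest)
            copied.append(name)
    return copied
```

Run on a schedule (cron, Task Scheduler) shorter than the cleanup interval, so a copy of every archived log survives the overnight purge.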
Has anybody tried the binary file read option instead of LogMiner for accessing the redo logs? Are there any performance differences?
Would this be a viable option, both in terms of performance and as a failsafe?
Originally Posted by RaynardtSM
The issue you have to resolve first is your target DB log being full.
If you copy the source logs to a separate location, you would also need to set up the database management for your source so that the logs are read from that separate location.
How to restore a CDC task that failed due to server downtime
Originally Posted by stevenguyen
I am facing the same issue, where a running task stopped because the target SQL DB server went down.
Now I am unable to resume the task because it cannot find the 'last redo log'.
Any input on how we can fix this issue?
A small correction: it was actually the Attunity server that went down, not the target SQL DB server.
Originally Posted by akanksha.1787
I'm sorry, but it is the Replicate user's responsibility to make sure all required redo and archived transaction logs are available when a Replicate task is resumed (or started by timestamp).
Any missing archive log will have to be restored from backup for replication to resume without loss of changes.
Alternatively, one can start by timestamp (with tables already loaded) from the earliest time for which a redo log is still available (query V$ARCHIVED_LOG for FIRST_TIME and SEQUENCE#) and accept some data loss.
Or, start fresh with a reload.
This is the same whether you use LogMiner or direct file reads.
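The V$ARCHIVED_LOG lookup suggested above could be sketched as follows. The SQL is a standard Oracle dictionary query, but the Python helper that picks the resume point from its rows is hypothetical; how you run the query (sqlplus, python-oracledb, etc.) is up to you.

```python
# Query to list archived redo logs still on disk, oldest first.
# Run it against the source Oracle database with whatever client you use.
QUERY = """
SELECT sequence#, first_time
  FROM v$archived_log
 WHERE deleted = 'NO'
 ORDER BY first_time
"""

def earliest_resume_point(rows):
    """rows: iterable of (sequence#, first_time) tuples from the query above.
    Returns the (sequence#, first_time) of the oldest log still available,
    i.e. the earliest safe timestamp for 'start by timestamp', or None if
    no archived logs remain."""
    ordered = sorted(rows, key=lambda r: r[1])
    return ordered[0] if ordered else None
```

The FIRST_TIME of the oldest surviving log is the earliest timestamp you can restart from without a missing-log error; anything committed before it is lost unless restored from backup.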