Get Data Moving

Thread: Logminer vs binary file reads - CDC failsafe?

  1. #1
    RaynardtSM, Junior Member (joined Sep 2015, 1 post)

    Logminer vs binary file reads - CDC failsafe?

    The below relates to Oracle-to-SQL Server replication:

    I have a scenario where we experience downtime in the target SQL Server database because its log file fills up. This causes a fatal error in Attunity, and the tasks stop. When the target database is up and running again (only hours later), all Replicate tasks fail to resume because they cannot find the required archived redo log sequence: the archived logs are removed overnight as part of an Oracle cleanup job.

    I believe there is other third-party software that removes the logs and compresses them, so Attunity won't be able to read them from that software's location either (not 100% sure about this).
    We might consider copying the logs to a separate location and reading them from there as an alternative or failsafe, for the event that we have downtime and thus cannot catch up with the current log sequence.

    Has anybody tried using the binary file reads option instead of LogMiner to access the redo logs? Are there any performance differences?

    Would this be a viable option in terms of performance and as a failsafe?
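    As a sketch of the copy-to-a-separate-location idea: a scheduled script could stash archived logs in a safekeeping area before the nightly cleanup job deletes them. The paths and the *.arc naming pattern below are hypothetical; match them to your actual archive destination.

    ```shell
    # Copy newly archived redo logs to a safekeeping directory.
    # Safe to rerun: files already copied are skipped.
    copy_archived_logs() {
        src="$1"
        dest="$2"
        mkdir -p "$dest"
        for f in "$src"/*.arc; do
            [ -e "$f" ] || continue                      # glob matched nothing
            [ -e "$dest/$(basename "$f")" ] && continue  # already copied
            cp "$f" "$dest/"
        done
    }
    ```

    Run it from cron on a shorter interval than the cleanup job, e.g. `copy_archived_logs /u01/app/oracle/archivelog /backup/archivelog_copy`.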

  2. #2
    stevenguyen, Senior Member (joined May 2014, 221 posts)
    Quote Originally Posted by RaynardtSM View Post

    Hello RaynardtSM,

    The issue you have to resolve first is your target DB log filling up.

    If you copy the source logs to a separate location, then you would need to set up the database management for your source to look for the logs in that separate location.
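    One way to set up that "separate location" on the database side, rather than copying files by hand, is to let Oracle itself write archived logs to a second destination. A sketch, assuming you control the Oracle instance; the path is hypothetical:

    ```sql
    -- Duplex archived redo logs to a second location that the nightly
    -- cleanup job does not touch.
    ALTER SYSTEM SET log_archive_dest_2 = 'LOCATION=/backup/archivelog_copy' SCOPE=BOTH;
    ALTER SYSTEM SET log_archive_dest_state_2 = 'ENABLE' SCOPE=BOTH;
    ```

    The copy in the second destination then remains available for Replicate to catch up from after an outage.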

    Thanks,
    Steve

  3. #3
    akanksha.1787, Junior Member (joined Nov 2016, 22 posts)

    How to restore a CDC task that failed due to server downtime

    Quote Originally Posted by stevenguyen View Post
    Hi Steve,

    I am facing the same issue: a running task stopped when the target SQL DB server went down.
    Now I am unable to resume the task because it cannot find the 'last redo log'.
    Any input on how we can fix this issue?

    Regards,
    Akanksha Bassin

  4. #4
    akanksha.1787, Junior Member (joined Nov 2016, 22 posts)
    Quote Originally Posted by akanksha.1787 View Post
    A small correction: it is actually the Attunity server that went down, not the target SQL DB server.

    Regards,
    Akanksha Bassin

  5. #5
    Hein, Senior Member (Nashua, NH, USA; joined Dec 2007, 108 posts)
    I'm sorry, but it is the Replicate user's responsibility to make sure all required redo and archived transaction logs are available when a Replicate task is resumed (or started by timestamp).

    Any missing archived log will have to be restored from backup for replication to resume without loss of changes.

    Alternatively, one can start by "timestamp, tables already loaded" with the earliest time for which a redo log is still available (query V$ARCHIVED_LOG for FIRST_TIME and SEQUENCE#) and accept some data loss.
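    That earliest viable restart point can be found with a query along these lines (a sketch; run as a user with access to the V$ views):

    ```sql
    -- Archived logs still on disk; the earliest FIRST_TIME is the oldest
    -- timestamp a "tables already loaded" restart can safely use.
    SELECT sequence#, first_time
    FROM   v$archived_log
    WHERE  deleted = 'NO'
    ORDER  BY first_time;
    ```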

    Or, start fresh with a reload.

    This is the same whether you use LogMiner or direct file reads.

    good luck,
    Hein
