
Thread: Oracle to Oracle one tasks with 66 tables full load: Error "Position does not exist"

  1. #1
    mkurup is offline Junior Member
    Join Date
    Aug 2019
    Posts
    3
    Rep Power
    0

    Oracle to Oracle one tasks with 66 tables full load: Error "Position does not exist"

    Hello,

    I am a newbie, working on a task to perform full load on a bunch of Oracle tables. Both source and target databases are Oracle.
    After the full load of the last 10 tables, I started resume processing, but the task stops immediately after starting and ends with the error below.

    Position does not exist [1002510] (at_audit_reader.c:784)
    00005936: 2019-10-08T21:59:51 [TASK_MANAGER ]E: Stream component failed at subtask 0, component st_0_S_DCAT_LOGSTREAM [1002510] (subtask.c:1368)

    Can you please provide some insight into what this error is about?


    The first round of tables was added and loaded successfully without any problem.

    Thanks!
    MK

  2. #2
    Hein is offline Senior Member
    Join Date
    Dec 2007
    Location
    Nashua, NH - USA.
    Posts
    161
    Rep Power
    12
    You may have to show us more about the task, notably a JSON export.
    How did you configure the source endpoint?
    The error message suggests a LOGSTREAM is involved.
    Is that intentional? Log Stream is a more complex, two-task (or more) configuration:
    one main task reads the real (Oracle) source, with an Oracle source endpoint and a Log Stream target,
    and one or more other tasks use the Log Stream (or a copy of it) as source and write to the (Oracle) target.
    As a self-proclaimed noob, I expect you want a single task with just an Oracle source and an Oracle target.
    If not, please explain and provide details.

    hth,
    Hein.

  3. #3
    mkurup is offline Junior Member
    Join Date
    Aug 2019
    Posts
    3
    Rep Power
    0
    Hi Hein,

    We were trained by an Attunity trainer, based on our current use case, to use a two-task configuration: first a log stream task that reads the data from the source, and then a second task that reads from the log stream files and inserts/updates the Oracle target tables, so any CDC is served from those files. As I understand it, the reason behind that suggestion is to reduce the stress on the critical source database.

    The design is many-to-one: the same table exists in multiple schemas in the source system, and they all update one target table in the target Oracle database. Besides the log stream task, I have only one task.

    "
    As a self-proclaimed noob, I expect you want a single task with just an Oracle source and an Oracle target."

    Do you advise going with one task and not using the log stream? Is that the reason the error (below) occurs intermittently whenever the task is stopped to add new tables?
    Position does not exist [1002510] (at_audit_reader.c:784)

    I was able to rerun the task successfully after truncating all of the tables, but I do not think that is the best approach to correct this situation. I suspect it is losing the last timestamp of the redo logs that was read.
    I will attach the JSON file for your reference.

    Best,
    MK

  4. #4
    Hein is offline Senior Member
    Join Date
    Dec 2007
    Location
    Nashua, NH - USA.
    Posts
    161
    Rep Power
    12
    Ok, so you are a bit further along than your average newbie :-).

    Log Stream is wonderful technology when multiple (Replicate) tasks need to read the same Oracle redo logs as source.
    For those cases it significantly reduces the load on the source server.
    It also gives the DBA more freedom for archive log management, because the log stream source task only needs to read the logs, nothing else, and in principle it can be kept running continuously, always keeping up.
    Alternatively, if a task has to be stopped, all the archive logs will have to remain available for the duration of that stoppage.

    That being said, if there is only one target task and there is a reasonable archive log retention period (> 24 hours?), then why bother?
    In such a situation the log stream component only creates overhead (on the Replicate Server) and adds complexity.
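    As a quick sanity check on the Oracle side (just a sketch, assuming you can query the data dictionary on the source; the columns are the standard ones in V$ARCHIVED_LOG), something like this shows the time window still covered by archived redo that has not been deleted from disk:

    -- Window of archived redo still available on disk.
    -- FIRST_TIME/NEXT_TIME bracket the changes each archived log covers.
    SELECT MIN(first_time) AS oldest_available,
           MAX(next_time)  AS newest_available
      FROM v$archived_log
     WHERE deleted = 'NO';

    If the task was stopped before OLDEST_AVAILABLE, the source side no longer has the changes a resume would need.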

    Back to the problem at hand: it appears that, in principle, everything is being done correctly.
    Through other input I was already aware of the mechanism for sending the 'same' table from multiple source schemas to a single target table, but that is fine and I don't see how it could cause the 'position' problem at hand.
    That is a source redo/archive log to log stream issue, and it is not likely to be related to any specific table, because at that point the task does not know or care which target table the data will go to. (That is done with transformations in the target task, right? I don't see the JSON to verify; you probably would have needed to attach it as a '.txt' file.)

    We'll need to study several logs to see where the 'disconnect' in time=position may have started.

    As such I'm afraid this is beyond casual forum helping and is better served through the Attunity/Qlik support services.

    Fair enough?

    Good luck!
    Hein

  5. #5
    mkurup is offline Junior Member
    Join Date
    Aug 2019
    Posts
    3
    Rep Power
    0
    Hi Hein,


    When you mentioned 'retention', it made me realize that it is currently set to 4, as was suggested during the training. That means 4 hours. So the logs are getting deleted by the time the task tries to go back to the previous timestamp after a reload/full load? I am going to rerun the task a few times after extending the retention to 24 hours. I appreciate your responses! (I will consider myself a newbie until my one task is running perfectly in production :).
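    To double-check the Oracle side as well (again just a rough sketch; it measures the archived redo still on disk, not the Log Stream staging retention itself), I plan to run something like this before and after the change:

    -- Rough estimate of how many hours of archived redo are still on disk.
    -- FIRST_TIME/NEXT_TIME are DATEs, so the difference is in days; *24 gives hours.
    SELECT ROUND((MAX(next_time) - MIN(first_time)) * 24, 1) AS hours_on_disk
      FROM v$archived_log
     WHERE deleted = 'NO';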

    Thanks!
