How to increase the throughput of records being read from Oracle Source DB
Could you please help us understand how we can increase the throughput while reading records from the source Oracle database?
The throughput we see varies highly, with peaks and drops such as 20k, 4k, 12k, 0, 9k, 0, 21k, 2k records.
The source view has 23 million records, and the target is Azure SQL DW.
We have only been able to replicate the data successfully when we chose to create a .csv file of size 2 GB on the blob storage.
The task has failed for all file sizes between 200 and 1000 MB with the error 'Error in request handler'.
Your response would be really helpful.
In another thread you posted that CSV file sizes between 200 and 500 MB are working, but sizes over 500 MB are not; in this thread you post that 2000 MB is working and smaller sizes are not.
1. It looks like you have a connectivity issue between your Replicate server, the Attunity CloudBeam server, and Azure; please double-check all your connections.
2. What version of Replicate are you running?
3. What version of the Replicate client is running on the Attunity CloudBeam Azure server?
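As a quick first step for point 1, you can verify basic TCP reachability from the Replicate server to each endpoint with a short script. This is only a generic connectivity sketch; the hostnames and ports below are placeholders, not your actual Replicate or CloudBeam endpoints (check your own configuration for the real values and ports).

```python
import socket

def can_connect(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Hypothetical endpoints -- replace with your CloudBeam server,
    # Oracle source, and blob storage endpoint as configured in your task.
    targets = [
        ("cloudbeam.example.com", 443),
        ("oracle-src.example.com", 1521),
    ]
    for host, port in targets:
        status = "reachable" if can_connect(host, port) else "UNREACHABLE"
        print(f"{host}:{port} -> {status}")
```

If any endpoint shows as unreachable, or reachability flaps between runs, that intermittency would line up with the throughput repeatedly dropping to 0 that you describe.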