COPY command from S3 bucket to Redshift taking much longer than 30 seconds

We have seen that when the Redshift cluster has only one or two nodes, it does not have enough processing power to handle all the Replicate tasks.

For example:
For a full load task, Replicate reads the source data and creates a CSV file on the Replicate server; the default maximum size for this CSV is 1 GB.
Replicate then transfers the 1 GB CSV file to the S3 bucket and issues a COPY command to Redshift. It waits for that COPY command to complete before issuing the next one, until all the data has been transferred to Redshift.
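As a rough sketch of the last step, the statement Replicate issues per file looks like a standard Redshift COPY from S3. The table name, S3 URI, and IAM role below are placeholders; the exact statement Replicate generates may differ and the real values come from the endpoint configuration.

```python
def build_copy_command(table, s3_uri, iam_role):
    """Build a Redshift COPY statement for one staged CSV file.

    All three arguments are hypothetical placeholders standing in for
    values taken from the Replicate Redshift endpoint settings.
    """
    return (f"COPY {table} "
            f"FROM '{s3_uri}' "
            f"IAM_ROLE '{iam_role}' "
            f"FORMAT AS CSV;")

# Example (hypothetical names):
# build_copy_command("public.orders",
#                    "s3://my-bucket/load/orders_1.csv",
#                    "arn:aws:iam::123456789012:role/redshift-copy")
```

Each such COPY must finish before Replicate stages and loads the next file for that task.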

So if you have 10 tasks running a full load at the same time, you would have 10 x 1 GB files copied to S3 and then loaded into Redshift. With only two nodes, each node would have to handle 5 x 1 GB of COPY work from S3 to Redshift.

As a result, each COPY command slows down.
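The arithmetic above can be sketched as a simple per-node load estimate (a simplification of how Redshift actually distributes work across slices, used here only to illustrate the scaling argument):

```python
import math

def copy_load_per_node_gb(num_tasks, num_nodes, file_size_gb=1):
    """Rough GB of COPY work each node absorbs when every task
    stages one file of file_size_gb (1 GB by default, matching
    Replicate's default CSV size)."""
    return math.ceil(num_tasks / num_nodes) * file_size_gb

# 10 concurrent full-load tasks on a 2-node cluster vs. a 10-node cluster:
two_nodes = copy_load_per_node_gb(10, 2)    # 5 GB of COPY work per node
ten_nodes = copy_load_per_node_gb(10, 10)   # 1 GB of COPY work per node
```

With more nodes, each node handles proportionally less of the load, which is why the larger cluster completes the same set of COPY commands faster.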

However, if you have a Redshift cluster with 10 nodes, each node can handle one 1 GB load in parallel, speeding up the COPY commands.

Below is a link from Amazon that discusses dividing the load workload among the nodes in your cluster.

Split Your Load Data into Multiple Files - Amazon Redshift
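The AWS guidance linked above recommends splitting load data into multiple files, ideally a multiple of the number of slices in the cluster, so every slice participates in the load. A minimal sketch of that planning step (the row counts and slice count are illustrative, not taken from Replicate):

```python
import math

def plan_split(total_rows, num_slices):
    """Plan one load file per slice, per the AWS best practice.

    Returns a list with the number of rows to place in each output
    file, keeping the files roughly equal in size so no slice sits
    idle while others finish their share of the COPY.
    """
    rows_per_file = math.ceil(total_rows / num_slices)
    return [min(rows_per_file, total_rows - i * rows_per_file)
            for i in range(num_slices)
            if total_rows - i * rows_per_file > 0]

# e.g. 1,000,000 rows on a cluster with 4 slices:
# plan_split(1_000_000, 4) yields four files of 250,000 rows each.
```

In Replicate terms, lowering the maximum CSV file size in the Redshift target endpoint settings has a similar effect: more, smaller files give the cluster more units of work to spread across its slices.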