General CPU Consumption Explanation

The resource consumption of AIS in a CDC scenario is affected by the total amount of changes in the AS/400 system journal that AIS monitors and by the total amount of changes in the captured tables.
Let's assume that during an hour there are N total updates written to the AS/400 system journal that AIS monitors. Of these, Nc are captured and Ni are ignored, so N = Ni + Nc.
Since AIS reads a journal that contains changes not only from the captured tables but from many other tables, there is some inherent overhead that depends on N; it is small but it exists (let's denote by Tj the time it takes to read and skip over a journal record). For journal records that represent changes we want to capture, there is a known processing time that involves converting the AS/400 journal record into a binary XML record that can be sent to the AIS staging area (let's denote by Tp the time it takes to process a journal record). In addition, there is one more activity that takes some time: the act of querying the AS/400 journal for new records (let's denote it Tq), which is multiplied by the number of such queries within an hour, Nq.

The CPU consumption T is given by T = Ni * Tj + Nc * (Tj + Tp) + Nq * Tq
The actual values in the formula above depend on the specific system and the type of I/O device; furthermore, Tp also depends on the number of columns in the captured tables and on the type of data (e.g., for strings there is a conversion from EBCDIC to UTF-8). From our experience, the time it takes to query the journal for changes is small and can usually be ignored.
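To make the formula concrete, here is a minimal back-of-envelope sketch in Python. All of the per-record timings and volumes in it are hypothetical placeholders, since, as noted above, the real values depend on the specific system, the I/O device, and the layout of the captured tables.

    # Back-of-envelope estimate of hourly CDC agent CPU time on the AS/400.
    # All numbers below are hypothetical placeholders, not measured values;
    # Tj, Tp and Tq vary per system, I/O device and table layout.

    def cdc_cpu_seconds(n_ignored, n_captured, n_queries,
                        t_skip=2e-6, t_process=50e-6, t_query=1e-3):
        """T = Ni*Tj + Nc*(Tj + Tp) + Nq*Tq, all times in seconds."""
        return (n_ignored * t_skip
                + n_captured * (t_skip + t_process)
                + n_queries * t_query)

    # Example: 1,000,000 journal records/hour, 10% of them captured,
    # journal queried once per second (3,600 queries/hour).
    t = cdc_cpu_seconds(n_ignored=900_000, n_captured=100_000, n_queries=3_600)
    print(f"Estimated CPU time: {t:.1f} s/hour ({t / 36:.2f}% of one CPU)")

With these placeholder numbers the agent would use roughly 10.6 CPU seconds per hour, i.e. well under 1% of a single CPU; the point of the sketch is the shape of the calculation, not the specific figures.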
In order to provide a change stream out of the AS/400 machine, the AIS CDC agent needs CPU time to work. This time is, for the most part, a function of the number and kind of changes made to the monitored AS/400 journal. The AIS CDC agent for AS/400 is a passive agent in that it reads and scans the AS/400 journal only upon a request for more change records from the CDC Change Router (on the UNIX machine). When captured data exists, the CDC agent on the AS/400 formats it and returns it to the change router on the UNIX machine; the faster the change router asks for data, the faster it will get it.
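As a rough illustration of this pull model, here is a minimal sketch; the agent interface (read_changes) and the staging handler (route_to_staging) are hypothetical names for illustration only, not the actual AIS API.

    import time

    def route_to_staging(record):
        """Hypothetical downstream handler standing in for the AIS staging area."""
        print("staged:", record)

    def change_router_loop(agent, poll_interval_sec=1.0):
        # The router drives the pace: the agent scans the journal only when
        # asked, so a slower router directly lowers the agent's CPU use.
        while True:
            records = agent.read_changes()     # agent scans the journal, formats captured rows
            for rec in records:
                route_to_staging(rec)
            if not records:
                time.sleep(poll_interval_sec)  # idle journal: wait before the next request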
The AIS CDC components do not contain a mechanism to slow down the movement of change events. If the load of the CDC agent on the AS/400 is too high and interferes with the work of other applications, the CDC agent should be allocated less CPU on the AS/400 machine. This will result in a slower rate of change events as seen from the UNIX machine (although this may average out over a longer period of time). The AS/400 has a load manager component that can be used to limit the resource utilization of the CDC agent so that other applications can get more CPU time. This is the recommended practice in general, and in this case as well.
Controlling the change stream from the client side is not an effective approach, as it cannot be correlated with actual consumption numbers on the AS/400. For example, if we were to offer a setting on the change router such as "Don't process more than 200 change events per minute", we would run the risk of accumulating delays even at a time when the system is idle and could easily pick up the pace to catch up with change events.
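A toy calculation makes the risk concrete: with a hypothetical cap of 200 events per minute, a single one-minute burst of 5,000 events takes 25 minutes to drain, even though the source is completely idle for the rest of that period. The traffic pattern below is invented purely for illustration.

    # Toy illustration of why a fixed client-side cap can build a backlog.
    CAP_PER_MIN = 200
    arrivals = [5_000] + [0] * 59          # hypothetical traffic: one burst, then idle

    backlog = 0
    for minute, arrived in enumerate(arrivals):
        backlog += arrived
        backlog -= min(backlog, CAP_PER_MIN)   # cap applies even when the system is idle
        if backlog:
            print(f"minute {minute:2d}: {backlog:5d} events still waiting")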
By controlling the resource allocation to the CDC agent on the AS/400 with the system's load manager, one gets better flexibility and control. One can provision each application with the resources it needs to do the work it was built for, while still letting the CDC agent use more resources when possible. For example, one can say that when the overall load is less than 70%, the CDC agent can use up to 15%, but when the overall load is 70% or more, the CDC agent should get no more than 7%.
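As a sketch of what such a policy expresses (using only the example thresholds above; the actual enforcement is configured in the AS/400 load manager, not written in application code):

    # Illustrative policy function only; on a real AS/400 this throttling is
    # configured in the system's load manager, not in application code.

    def cdc_cpu_cap(overall_load_pct: float) -> float:
        """Return the CPU share (in %) the CDC agent may use at a given system load."""
        return 15.0 if overall_load_pct < 70.0 else 7.0

    assert cdc_cpu_cap(40.0) == 15.0   # quiet system: let the agent catch up quickly
    assert cdc_cpu_cap(85.0) == 7.0    # busy system: yield CPU to other applications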
