The Staging Area has a mechanism to delete expired events. The default expiration is 48 hours. In many cases, especially on busy systems, you may see that events are not deleted on time.

The Router deletes events that have been in the Staging Area for at least eventExpirationHours:

1. If the Router is idle, it deletes old events continuously.
2. If the Router is NOT idle:
- If there are many events to delete (that is, the previous deletion reached the maximum batch size), the Router keeps trying to delete old events at intervals of at least 15 seconds.
- If there are few events to delete, the Router deletes old events every 120 seconds.
- On each deletion, the Router deletes up to maxDeletedEventsInBatch events (default 500). This is configurable in the Router's adapter.
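The scheduling rules above can be sketched as a small decision function. The constants and the decide_next_delay() helper below are assumptions made for this illustration; they are not part of the actual Router API.

```python
# Illustrative sketch of the Router's deletion schedule described above.
# All names here are hypothetical; only the numeric values come from the text.
BUSY_INTERVAL_SECONDS = 15         # minimum interval when the last batch was full
NORMAL_INTERVAL_SECONDS = 120      # interval when few events are pending
MAX_DELETED_EVENTS_IN_BATCH = 500  # default of maxDeletedEventsInBatch

def decide_next_delay(router_idle: bool, last_batch_size: int) -> int:
    """Return how many seconds to wait before the next deletion pass."""
    if router_idle:
        return 0  # an idle Router keeps deleting continuously
    if last_batch_size >= MAX_DELETED_EVENTS_IN_BATCH:
        return BUSY_INTERVAL_SECONDS   # backlog remains: retry soon
    return NORMAL_INTERVAL_SECONDS     # little to delete: routine pass
```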


If it is critical for the user to force-delete these events, the following query can be used, with special care:

Suppose we have a change table called CT and we want to delete the oldest 10000 events.

To do so, we can run the following query:
delete CT where context < (select max(C) from (select context C from CT limit to 10000 rows) T);
- The innermost select statement “select context C from CT limit to 10000 rows” reads the first 10000 rows in the file
- The outer select statement “select max(C) from (…) T” returns the context value of the last of those 10000 rows
- The main statement “delete CT where context < …” deletes all rows whose context is smaller than the value found above.

You can add a date/time condition to the innermost select statement, for example if you do not want to delete records that are too recent.

For example:
delete CT where context < (select max(C) from (select context C from CT where context < '20090711T125000' limit to 10000 rows) T)
This will delete changes up until 2009-07-11 12:50:00 (note that it deletes at most 10000 rows per execution). So you have to execute it every 'n' time units, depending on how many rows you need to delete.
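The two delete statements above differ only in the optional timestamp filter, so either form can be composed by a small helper. The build_delete_statement function below is a hypothetical illustration for generating the statement text, not part of any product API.

```python
from typing import Optional

def build_delete_statement(table: str, batch_size: int,
                           cutoff: Optional[str] = None) -> str:
    """Compose the batched delete statement shown above.

    cutoff is an optional context timestamp (e.g. '20090711T125000');
    when given, rows at or after it are never deleted.
    """
    condition = f" where context < '{cutoff}'" if cutoff else ""
    inner = f"select context C from {table}{condition} limit to {batch_size} rows"
    return f"delete {table} where context < (select max(C) from ({inner}) T)"
```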

For example, every hour or half an hour (depending on how busy your system is).
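The periodic execution can be scripted. The sketch below assumes a run_delete() callable that submits one batched delete and returns the number of rows removed; both that helper and the scheduling parameters are hypothetical.

```python
import time

def delete_periodically(run_delete, interval_seconds=3600, max_passes=None):
    """Run the batched delete every interval until nothing is left to delete.

    run_delete: callable returning the number of rows deleted in one pass.
    interval_seconds: e.g. 3600 (hourly) or 1800 (half-hourly), per the text.
    max_passes: optional safety cap on the number of passes.
    """
    passes = 0
    total = 0
    while max_passes is None or passes < max_passes:
        deleted = run_delete()
        total += deleted
        passes += 1
        if deleted == 0:  # nothing left matching the expiration condition
            break
        time.sleep(interval_seconds)
    return total
```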

Note that DISAM is subject to the system's limit on the maximum number of file locks. DISAM locks the records before deleting them using fcntl (F_SETLK); when the system runs out of lock handles, the operation fails with ENOLCK.
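DISAM's record locking uses the standard POSIX primitive, which the Python sketch below also exercises. This only illustrates where an F_SETLK-style request would surface ENOLCK; it is not DISAM's actual code.

```python
import errno
import fcntl
import os
import tempfile

# Illustration only: take a non-blocking POSIX lock (F_SETLK under the hood)
# and show where ENOLCK would appear if the system ran out of lock handles.
fd, path = tempfile.mkstemp()
try:
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)  # non-blocking write lock
        locked = True
    except OSError as e:
        locked = False
        if e.errno == errno.ENOLCK:
            # The kernel's lock table is exhausted: back off and retry
            # with smaller delete batches.
            pass
finally:
    os.close(fd)
    os.remove(path)
```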

In the last example we used 10000 for two reasons:
- We don't know the lock limit on the user's system.
- We didn't want to delete huge blocks at once. (Choose the number of records to delete per pass, and the interval between passes, according to how busy your system is.)