Our log generation is very high. Will replication be impacted?
Local Data Processing
If the source generates a large volume of log in a short period of time, what happens on the target depends on a number of factors. An important factor is whether the log contains many small transactions or a few large transactions. Here are a number of considerations to help you answer the question:
- HVR captures all transactions that are performed on the system but only propagates a transaction once its commit is seen. Database changes are written to the log all the time, and HVR continuously tries to stay current, processing the log as it is written. So even though a large volume of log data may be written in a relatively short period of time, HVR may still be current with the capture, and start propagating changes as soon as it sees the commit for the transaction(s) it is tracking.
- HVR only captures changes for tables that are part of the channel. Many database changes may be irrelevant to HVR, so even though HVR tracks every transaction, it may retain very little data per transaction.
- Transactions in the database log are written in commit order. Every database has a commit sequence number (in Oracle this is the SCN, or System Change Number). By default, when not using the /Burst option on Integrate, HVR applies changes to the target database in the source's commit order. If a long-running transaction on the source made a lot of changes to tables that are captured by HVR, it may take some time for HVR to process that transaction's changes on the target. If that happens, you will see latency in HVR increase simply because HVR is working on a large transaction that takes time to process. If, for example, it takes 2 minutes to perform a single transaction on the target database, then at the end of that transaction HVR will show 2 minutes of latency (that is hopefully caught up quickly afterward). Any short-running transactions that committed after the long-running transaction will only be applied after the large transaction has been applied, to keep the destination database consistent. The /TxSplitLimit option on Integrate can be used to instruct HVR to break up a large transaction into multiple smaller transactions. Doing so breaks transaction boundaries but may limit the resource consumption of large transactions on the target system. Note that HVR also has ways to split a channel into multiple integrate processes, but doing that typically breaks transaction boundaries and the consistency that is maintained within a channel.
- If there is a backlog of transaction files to be processed, then by default HVR will process up to 10 MB of compressed transaction files and apply them as a single transaction to the target database. Given that performing a commit is a relatively expensive database operation, this is often the best way to speed up database integration, but it may lead to relatively large transactions on the target database. The /CycleByteLimit option on the Integrate action can be used to decrease or increase this cycle limit, causing more or less frequent commits when there is a backlog of transactions on the Integrate side. See https://www.hvr-software.com/docs/actions/integrate for more information on each of these parameters.
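The capture behavior described in the first two points can be sketched conceptually: changes are buffered per transaction as the log is read, but nothing is propagated until a commit appears, and tables outside the channel are skipped. This is only an illustrative model, not HVR's actual internals; the record shape, table names, and `capture` function are assumptions for the sketch.

```python
from collections import defaultdict

# Tables that are part of the channel (illustrative names).
CHANNEL_TABLES = {"orders", "order_lines"}

def capture(log_records):
    """Yield (tx_id, changes) for each committed transaction, in commit order.

    Changes are buffered while the log is read; a transaction is only
    propagated when its commit record is seen, and rolled-back
    transactions are discarded without ever being sent.
    """
    pending = defaultdict(list)  # tx_id -> buffered changes
    for rec in log_records:
        if rec["op"] == "change":
            if rec["table"] in CHANNEL_TABLES:  # ignore irrelevant tables
                pending[rec["tx"]].append(rec)
        elif rec["op"] == "commit":
            yield rec["tx"], pending.pop(rec["tx"], [])
        elif rec["op"] == "rollback":
            pending.pop(rec["tx"], None)  # never propagated

# A small synthetic log: tx 1 commits, tx 2 (on a non-channel table) rolls back.
log = [
    {"op": "change", "tx": 1, "table": "orders", "row": 10},
    {"op": "change", "tx": 2, "table": "audit", "row": 99},
    {"op": "change", "tx": 1, "table": "order_lines", "row": 11},
    {"op": "commit", "tx": 1},
    {"op": "rollback", "tx": 2},
]
committed = list(capture(log))
```

Only transaction 1 is emitted, with just its two channel-table changes, which is why a busy log does not necessarily mean a busy channel.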
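The effect of /TxSplitLimit described above can be illustrated with a minimal sketch: one large source transaction is applied to the target as several smaller ones, trading transaction boundaries for bounded resource use. The function name and row-count-based limit are assumptions for illustration, not HVR's implementation.

```python
def split_transaction(changes, tx_split_limit):
    """Split one large transaction's change list into chunks of at most
    tx_split_limit changes, each applied as its own target transaction.

    This mirrors the idea behind Integrate /TxSplitLimit: the split
    commits break the original transaction boundary, but no single
    target transaction grows without bound.
    """
    return [changes[i:i + tx_split_limit]
            for i in range(0, len(changes), tx_split_limit)]

# A 10-change transaction split with a limit of 4 becomes three
# target transactions of sizes 4, 4, and 2.
chunks = split_transaction(list(range(10)), 4)
```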
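The cycle behavior in the last point can be sketched as a simple grouping problem: backlogged transaction files are collected into cycles, each applied as a single target transaction and capped near the byte limit. The function and the greedy grouping are a conceptual sketch of the /CycleByteLimit idea, not HVR's actual algorithm.

```python
def plan_cycles(file_sizes, cycle_byte_limit=10 * 1024 * 1024):
    """Group backlog transaction-file sizes (compressed bytes) into
    integrate cycles, each committed as one target transaction.

    A new cycle starts once adding the next file would exceed the
    limit; a single oversized file still gets its own cycle. A lower
    limit means more frequent commits, a higher limit fewer.
    """
    cycles, current, current_bytes = [], [], 0
    for size in file_sizes:
        if current and current_bytes + size > cycle_byte_limit:
            cycles.append(current)
            current, current_bytes = [], 0
        current.append(size)
        current_bytes += size
    if current:
        cycles.append(current)
    return cycles

# With a 10-byte limit, a backlog of files sized 6, 6, 3, 2, 8
# is applied in three cycles: [6], [6, 3], [2, 8].
cycles = plan_cycles([6, 6, 3, 2, 8], cycle_byte_limit=10)
```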