Occasionally you will run into a network issue similar to these examples:
F_JG2111: Connect to host '<hostname>' and port 4344 for coprocess 'mychn-cap-srosc' failed. Error: Connection refused.
F_JT04A5: Coprocess has exited abruptly. This could be a network problem or a fatal error in the child coprocess.
In these cases, ask the following questions:
- Is the HVR remote listener on the source running on port 4343?
  - On the remote location machine where you have the connection issue, run: hvrtestlistener localhost <port number> (see the command sketch after this list). If it reports connection refused:
    - As the root user, run netstat -tulpn | egrep "hvrremote|inetd". This will either return the port of an hvrremotelistener or the inet daemon, or nothing. If it returns nothing, the hvrremotelistener or inet daemon needs to be started.
  - If hvrtestlistener on the remote machine works fine, go to the hub and run: hvrtestlistener <remote location host> <port number>. If it reports connection refused or timed out, check whether firewall rules exist between the HVR hub and the remote location.
  - If hvrtestlistener works fine from the hub, open the HVR GUI and add a new test location pointing to the same remote machine, but choose a file location instead of a database. Then use Test Connection: if the file location connects but the original database location does not, you have a database connection issue.
- Does the server have a firewall that may be blocking traffic?
- Did you configure an hvrproxy for this server?
  - Note that traffic goes through the proxy, so the origination address from the source's perspective is the proxy machine.
  - Does the proxy allow you access to the server?
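The listener checks above can be sketched as the following commands, run in the order described. Port 4343 is assumed here as the default; substitute the actual listener port and the actual remote hostname.

  # On the remote location machine with the connection issue:
  hvrtestlistener localhost 4343

  # If the connection is refused, check as root whether any listener owns the port:
  netstat -tulpn | egrep "hvrremote|inetd"

  # If the local test succeeds, repeat the test from the hub machine:
  hvrtestlistener <remote location host> 4343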
-
Errors such as F_JT04A5 are usually the result of a network failure or instability. If the error occurs frequently, it can help to relax HVR's TCP keep-alive settings (the default settings are rather tight). There are two keep-alive settings:
HVR_TCP_KEEPALIVE
Set the environment variable HVR_TCP_KEEPALIVE to 300. This needs to be done in two places:
- An action Environment /Name=HVR_TCP_KEEPALIVE /Value=300 needs to be added to the channel.
- A Scheduler "set" attribute needs to be added:
  Attribute Name: set
  Variable: HVR_TCP_KEEPALIVE
  Value: 300
After this, run HVR Initialize with the Scripts and Jobs option.
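Consolidated, the configuration from both places looks roughly like this (taken from the steps above; exact field labels may differ slightly between HVR GUI versions):

  Channel action:
    Environment /Name=HVR_TCP_KEEPALIVE /Value=300
  Scheduler attribute (on the hub scheduler):
    Attribute Name: set
    Variable: HVR_TCP_KEEPALIVE
    Value: 300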
HVR_REMOTE_KEEPALIVE
If OS-level TCP keep-alive packets are dropped by a network component (router, load balancer, etc.), the alternative HVR_REMOTE_KEEPALIVE can be set, which sends keep-alive signals from the application rather than the OS. It forces application-level (HVR protocol) keep-alive packets to be sent between the hub and the remote location; the reason is that ELBs and other firewalls/routers may filter OS-level keep-alives.
A typical value for this variable is 5. This environment variable enables keep-alive messages sent from the remote location back to the hub. The value is the interval between keep-alive messages in seconds. A value of 0 disables the keep-alives; the default is 0.
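A minimal sketch of setting it, assuming HVR_REMOTE_KEEPALIVE is added as a channel Environment action in the same way as HVR_TCP_KEEPALIVE above, with the typical value of 5 seconds mentioned here:

  Environment /Name=HVR_REMOTE_KEEPALIVE /Value=5

As with HVR_TCP_KEEPALIVE, an HVR Initialize with Scripts and Jobs is likely needed afterwards so that running jobs pick up the new variable.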