I think this may have been covered in another post, but I wanted to report it and ask what the approximate timeline would be for a fix making it into OpsCenter Community.
I have a Cassandra cluster with two datacenters: one on AWS and one private. OpsCenter runs on a separate instance on AWS.
OpsCenter connects to the cluster fine and finds all the nodes, and it pushed out the agents with no problem. The agents launch, all nodes show as active in the OpsCenter dashboard, I see both datacenters and their stats, and all nodes appear connected in the storage pie chart as well.
All good. But if I go to the cluster, click on a node, and choose "view replication", for example, I get a "connection refused" error. When I check the agent logs on that machine, I see a Jetty BindException: cannot assign requested address.
It seems that Jetty tries to bind to the public IP of the node as listed in the address.yaml pushed out by OpsCenter.
If I change the local address in address.yaml to the node's private address, or to 0.0.0.0, the Jetty errors stop and the actions in OpsCenter work ("view replication" now succeeds). But the node then appears disconnected in the storage pie chart on the front page, and OpsCenter starts prompting me to install the agent on that node again.
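For reference, the edit I'm making looks roughly like this (the key name is as it appears in the address.yaml that OpsCenter pushed out; the path and exact contents may differ by version, and the IPs here are placeholders):

```yaml
# address.yaml on the agent node (path varies by install)

# As pushed out by OpsCenter -- Jetty fails to bind to this public IP:
# local_address: <public-ip-of-node>

# Changed so Jetty can bind -- but then OpsCenter shows the node
# as disconnected and re-prompts for agent installation:
local_address: 0.0.0.0
```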
So it seems like there needs to be a way to tell Jetty to bind to one address while still advertising the proper "local" address, so that OpsCenter knows which address to contact the node on.
Is there any way I can override this for the time being, by editing the startup scripts or similar, to force Jetty to bind to 0.0.0.0 while still keeping OpsCenter happy by leaving the local address it needs in address.yaml?
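In other words, what I'm hoping for is two separate settings: one for the interface Jetty binds to, and one for the address OpsCenter uses to reach the agent. The key names below are hypothetical, purely to illustrate the separation I'm asking about:

```yaml
# Hypothetical address.yaml -- these key names are made up to
# illustrate the desired behavior, not real config options:
bind_address: 0.0.0.0              # interface Jetty actually binds to
local_address: <public-ip-of-node> # address OpsCenter uses to contact the node
```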