Thursday, October 15, 2015

Wildfly9 reverse-proxy / load-balancer configuration

Background

Prior to Wildfly9, most production JBoss / Wildfly installations required a front-end to load balance traffic across application server nodes. Apache httpd configured with the mod_cluster module was a popular choice. Wildfly9 now includes its own (Java) implementation of mod_cluster, eliminating the need for Apache httpd and its time-consuming configuration.
I'm currently in the process of converting an iptables / Apache httpd / JBoss 7 implementation to iptables / Wildfly9 (on RHELv7).

I found the WFLY9 documentation inadequate (I suspect it was written for Wildfly8, since it uses a handler rather than a filter). Instead, a reply by kwart at wildfly-9-load-balancing pointed me to the video tutorial / interview of Stuart Douglas with Markus Eisele: https://youtu.be/xa_gtRDpwyQ

The following assumes knowledge of the above tutorial as a starting point. In my case the nodes have different IP addresses from the mod_cluster load-balancer.
Initially I could get the nodes introduced to mod_cluster, but I found requests were not delegated correctly. I then realised the ha profile undertow server alias attribute holds the key.
This is the glue that connects a request to an appropriate node. In my mind the following question is posed when a request arrives at mod_cluster: does node X have a matching context and alias? If so, the request is passed through to the matching node. If not, mod_cluster doesn't know what to do with the request and responds with a 404.
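As a quick illustration (not from the tutorial - the context name myapp and the host values are just placeholders), the behaviour can be checked with curl by varying the Host header:

# Host header matches a node alias -> the request is delegated to that node
curl -H "Host: host1.domain.co" http://<ip address A>:8080/myapp/

# Host header matches no alias on any node -> mod_cluster responds with a 404
curl -H "Host: unknown.example" http://<ip address A>:8080/myapp/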

Wildfly Server Summary

load balancer (mod_cluster) (8080)
    profile: default
    sockets: standard-sockets
    ip address: <ip address A>
    port offset: 0
    wildfly server: master

node 1 (8080)
    profile: ha
    sockets: ha-sockets
    ip address: <ip address B>
    port offset: 0
    wildfly server: server1

node 2 (8280)
    profile: ha
    sockets: ha-sockets
    ip address: <ip address B>
    port offset: 200
    wildfly server: server2

node 3 (8480)
    profile: ha
    sockets: ha-sockets
    ip address: <ip address B>
    port offset: 400
    wildfly server: server3

Implement using Wildfly CLI

The following assumes a more or less vanilla domain mode configuration with "default" and "ha" profiles. The load-balancer will be configured using the "default" profile and the nodes are assumed to have been configured using the "ha" profile.
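As a point of reference, the node servers summarised above might have been declared along these lines. This is only a sketch - the node host name (slave1) and server group name (your-ha-node-group1) are placeholders that must match your own domain layout:

# create the ha node server group and the three node servers with increasing port offsets (names are placeholders)
/server-group=your-ha-node-group1:add(profile=ha, socket-binding-group=ha-sockets)
/host=slave1/server-config=server1:add(group=your-ha-node-group1, socket-binding-port-offset=0)
/host=slave1/server-config=server2:add(group=your-ha-node-group1, socket-binding-port-offset=200)
/host=slave1/server-config=server3:add(group=your-ha-node-group1, socket-binding-port-offset=400)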

# enable mod_cluster load balancer in the default profile
/profile=default/subsystem=undertow/configuration=filter/mod-cluster=modcluster:add(management-socket-binding=http, advertise-socket-binding=modcluster)
/profile=default/subsystem=undertow/server=default-server/host=default-host/filter-ref=modcluster:add
/socket-binding-group=standard-sockets/socket-binding=modcluster:add(port=23364, multicast-address=224.0.1.105)

# add new server group: load-balancer
/server-group=load-balancer:add(profile=default, socket-binding-group=standard-sockets)
/host=master/server-config=load-balancer:add(group=load-balancer)
/server-group=load-balancer:start-servers
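As a sanity check (not part of the original steps), the filter can be read back before moving on:

# verify the mod_cluster filter is registered in the default profile
/profile=default/subsystem=undertow/configuration=filter/mod-cluster=modcluster:read-resource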

# configure undertow aliases (for backend nodes)
/profile=ha/subsystem=undertow/server=default-server/host=default-host:write-attribute(name=alias, value=[ip and/or host names - comma separated])

eg: ... value=[111.222.333.444, host1.domain.co])
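The alias list can be read back to confirm the write took effect:

# confirm the alias list on the ha profile default-host
/profile=ha/subsystem=undertow/server=default-server/host=default-host:read-attribute(name=alias)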

# reload nodes in order for above alias change to take effect
/server-group=your-ha-node-group1:reload-servers
/server-group=your-ha-node-group2:reload-servers
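A quick way to confirm a node came back up after the reload (the host and server names here are placeholders):

# check a node is running again (slave1 / server1 are placeholders)
/host=slave1/server-config=server1:read-resource(include-runtime=true)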

Adding an iptables redirect
In order to allow a redirected request to be allocated to an appropriate node by the load-balancer, the IP or hostname contained in the original browser URL must be added to the list of aliases in the node profile.
Fortunately the alias attribute accepts a comma-separated list of values (IP addresses and hostnames).
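So, continuing the earlier example, the alias write ends up looking something like this (the public values below are placeholders):

# include the public-facing ip / hostname in the node alias list (placeholders)
/profile=ha/subsystem=undertow/server=default-server/host=default-host:write-attribute(name=alias, value=[<public ip address>, public.host.name, 111.222.333.444, host1.domain.co])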

iptables config (on RHELv7)
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A PREROUTING -d <public ip address> -p tcp -m tcp --dport 80 -j DNAT --to-destination <ip address A>:8080
COMMIT
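The same rule can also be added on the fly for a quick test (non-persistent, and assumes firewalld isn't managing the nat table):

# one-off equivalent of the PREROUTING rule above
iptables -t nat -A PREROUTING -d <public ip address> -p tcp -m tcp --dport 80 -j DNAT --to-destination <ip address A>:8080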

If the public IP address and hostname are added to the ha profile undertow default-host aliases, then the nodes are reachable by either IP address or hostname URLs.
Now to add SSL support.

PS
I subsequently found Stuart's tutorial documented here - see clustering-domain-mode.txt.

2 comments:

  1. I'm trying to deploy this configuration on AWS, however AWS does not accept multicasting... how could I achieve a similar configuration? Thanks

  2. Hi,
    How does it work with a public IP? Is there any configuration?
