To build a platform providing firewalling, load balancing, and a high number of concurrent connections, we ended up not buying any appliance, but instead getting good hardware with powerful processors and nice network cards.
It's actually cheaper to get 8 servers with dual 10Gbps interfaces plus 4 10Gbps switches than to get something like 2 appliance load-balancers that could maybe handle 4Gbps of total traffic.
More redundancy, probably higher limits, and more flexibility… and all that for less money? Yeah, nice. But what limits can you actually reach with such a setup?
Let's bench it!
In the end, we want to check what limits we can reach, and how the platform behaves when we hit them.
Some information about the hardware used for our benchmarks:
| Component | Details |
|---|---|
| CPU | 2 Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz (two 6-core CPUs, 24 threads total) |
| Memory | 64GB (memory is cheap) |
| Network | Intel X520-DA2 – Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection (rev 01) |
| Switches | TurboIron x24 |
| Cables | Twinax cables (no reason to get fiber when your servers are so close to the switches) |
Linux - Debian/wheezy (some of the hardware is not supported by the squeeze installer)
We are using Munin, with 1-second statistics plugins. The tools include:
inject (found here)
As we try to handle a lot of connections from a single server, we quickly hit the ephemeral source port limit. inject works around that limit by binding to a specific source IP and port for each outgoing connection.
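A minimal sketch of that trick (my own illustration, not inject's actual code): bind the client socket to an explicit source IP and port before connecting, so the caller chooses the (source IP, source port) pair instead of exhausting a single ephemeral port range.

```python
import socket

def connect_from(src_ip, src_port, dst_ip, dst_port):
    """Open a TCP connection with an explicit source IP and port."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((src_ip, src_port))     # pick the source side ourselves
    s.connect((dst_ip, dst_port))
    return s
```

With several source IPs configured on the box, iterating over (IP, port) pairs this way pushes the client well past the ~64k connections a single source address allows.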
Nginx, as our production reverse proxies use nginx.
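For reference, the kind of minimal reverse-proxy block we're talking about (the addresses are placeholders, not our production config):

```nginx
# minimal reverse proxy; 127.0.0.1:8000 is a placeholder backend
server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}
```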
Some issues with nginx forced me to look for another server to bench against. httpterm was pointed out: like inject, it is designed to do just one thing, serve HTTP responses under stress.
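To give an idea of what such a test server does, here is a rough stand-in in Python (my own sketch, not httpterm itself): serve a fixed-size in-memory object with no disk I/O and no per-request logging. The class names and the 1 KB body size are my choices.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from socketserver import ThreadingMixIn

BODY = b"x" * 1024  # fixed 1 KB object, served straight from memory

class BenchHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(BODY)))
        self.end_headers()
        self.wfile.write(BODY)

    def log_message(self, fmt, *args):
        pass  # skip per-request logging overhead

class BenchServer(ThreadingMixIn, HTTPServer):
    daemon_threads = True

# usage: BenchServer(("0.0.0.0", 8000), BenchHandler).serve_forever()
```

A real tool like httpterm is written in C around an event loop and is far faster; the point is only that the server does nothing but answer with a constant payload.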
As we are running Linux, the firewall is obviously iptables.
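Not our actual ruleset, but the stateful pattern involved, plus the conntrack table sizing that becomes relevant once you push hundreds of thousands of concurrent connections (the values are illustrative):

```shell
# accept established/related traffic, then the service port, drop the rest
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -j DROP

# the conntrack table fills up fast under bench load; size it generously
sysctl -w net.netfilter.nf_conntrack_max=1000000
```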
As mentioned earlier, the load balancer is IPVS with direct routing (already used in our production).
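A sketch of what an IPVS direct-routing setup looks like with ipvsadm (the 192.0.2.x addresses are placeholders): the director forwards packets unmodified (`-g`, gatewaying), and each real server holds the VIP on loopback so it accepts the traffic while never answering ARP for it. Return traffic bypasses the director entirely, which is why this mode scales so well.

```shell
# on the director: define the virtual service and two real servers
ipvsadm -A -t 192.0.2.10:80 -s wlc               # 192.0.2.10 = placeholder VIP
ipvsadm -a -t 192.0.2.10:80 -r 192.0.2.21:80 -g  # -g = direct routing
ipvsadm -a -t 192.0.2.10:80 -r 192.0.2.22:80 -g

# on each real server: carry the VIP without advertising it via ARP
ip addr add 192.0.2.10/32 dev lo
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```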