Load balancing

As for the firewall benches, these start from an inject/httpterm setup that reaches 800k connections/s (2 clients, 3 servers) without load balancing.

Monitoring graphs for the different benches can be found here.

Baseline

configuration

Keepalived configuration:

global_defs {
   router_id rempart.nbs-system.com
}

virtual_server_group pool_test_0 {
        10.128.0.0 80
        10.128.0.1 80
        10.128.0.2 80
        10.128.0.3 80
        10.128.0.4 80
        10.128.0.5 80
        10.128.0.6 80
        10.128.0.7 80
        10.128.0.8 80
        10.128.0.9 80
        10.128.0.10 80
        10.128.0.11 80
        10.128.0.12 80
        10.128.0.13 80
        10.128.0.14 80
        10.128.0.15 80
}

virtual_server group pool_test_0 {
  delay_loop 5
  lb_algo rr
  lb_kind DR
  protocol TCP
  real_server 10.127.0.1 80 {
    weight 1
#    HTTP_GET {
#      url {
#        path /
#        status_code 200
#      }
#      connect_port 80
#      connect_timeout 20
#      nb_get_retry 1
#      delay_before_retry 0
#    }
  }
  real_server 10.127.0.2 80 {
    weight 1
#    HTTP_GET {
#      url {
#        path /
#        status_code 200
#      }
#      connect_port 80
#      connect_timeout 20
#      nb_get_retry 1
#      delay_before_retry 0
#    }
  }
  real_server 10.127.0.6 80 {
    weight 1
#    HTTP_GET {
#      url {
#        path /
#        status_code 200
#      }
#      connect_port 80
#      connect_timeout 20
#      nb_get_retry 1
#      delay_before_retry 0
#    }
  }
}

[...]

vrrp_instance INTERNAL {
  interface eth1
  virtual_router_id 1

  state MASTER
  priority 150

  advert_int 1

  garp_master_delay 5

  virtual_ipaddress_excluded {
        10.128.0.0/32 dev eth1
        10.128.0.1/32 dev eth1
        10.128.0.2/32 dev eth1
        10.128.0.3/32 dev eth1
        10.128.0.4/32 dev eth1
        10.128.0.5/32 dev eth1
        10.128.0.6/32 dev eth1
        10.128.0.7/32 dev eth1
        10.128.0.8/32 dev eth1
        10.128.0.9/32 dev eth1
        10.128.0.10/32 dev eth1
        10.128.0.11/32 dev eth1
        10.128.0.12/32 dev eth1
        10.128.0.13/32 dev eth1
        10.128.0.14/32 dev eth1
        10.128.0.15/32 dev eth1
  }
}
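Note that with lb_kind DR (direct routing) the load balancer only rewrites the destination MAC, so the real servers must carry the VIPs themselves and must not answer ARP for them. The real-server side is not shown in the original setup; a minimal sketch of what is assumed on each real server:

# On each real server (assumption, not part of the original config):
# hold the VIPs on the loopback and suppress ARP replies for them.
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
for i in $(seq 0 15); do
    ip addr add 10.128.0.$i/32 dev lo
done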

Health checks are disabled for now, as we mainly want to measure the IPVS impact first.

16 VIPs pointing to our 3 servers.
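A quick way to sanity-check the resulting table is ipvsadm: with 16 VIPs and 3 real servers it should list 48 destinations, all in DR ("Route") forwarding mode with weight 1 (output below is abridged and illustrative):

ipvsadm -Ln
# TCP  10.128.0.0:80 rr
#   -> 10.127.0.1:80    Route   1      0          0
#   -> 10.127.0.2:80    Route   1      0          0
#   -> 10.127.0.6:80    Route   1      0          0
# [...]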

baseline

The first attempt gives very poor results: a high start, then a drop to about 40-45k conn/s.

[graphs: bps, packets and connections per second]

ipvs_tab_bits

As with conntrack, IPVS needs to keep track of connections. It also uses a hash table, whose size can likewise be changed.

The default value (used in the previous test) is 12. The documentation states the maximum value is 20 (bits, which means 1M entries).

# cat /etc/modprobe.d/ipvs.conf
options ip_vs conn_tab_bits=20
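The option is only read when the module is loaded. A minimal sketch to apply and verify it (assuming keepalived is stopped so ip_vs can be unloaded); ipvsadm reports the table size in its header line:

rmmod ip_vs
modprobe ip_vs
ipvsadm -Ln | head -1
# expected something like: IP Virtual Server version 1.2.1 (size=1048576)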

That gives us much better overall performance, around 350k conn/s, but there are some odd drops early in the test.

[graphs: bps, packets and connections per second, and IPVS connection tracking]

smp affinity

The statistics show that only the first 16 CPUs are used. The basic affinity in place was a simple mapping of interrupt n to thread n.

With interrupts mapped to CPU #0, cores #0-3, then to their hyperthread siblings and back again (0-3, 12-15, 0-3, 12-15), all connections land on the same 8 threads, but performance is more stable, settling around 280k conn/s.

[graphs: bps, packets and connections per second, IPVS connection tracking]

With interrupts mapped the same way but then moving to the other CPU (0-3, 12-15, 6-9, 18-21), we use more threads and get better performance, stable around 370k conn/s.

[graphs: bps, packets and connections per second, IPVS connection tracking]

If we change CPU before changing thread (0-3, 6-9, 12-15, 18-21), performance is slightly better, around 380k conn/s.

[graphs: bps, packets and connections per second, IPVS connection tracking]
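For reference, this kind of mapping can be set by writing a CPU bitmask to /proc/irq/<n>/smp_affinity. A minimal sketch of the last layout; the assumption here is that the eth1 queue IRQs can be picked out of /proc/interrupts by name:

# pin each eth1 queue interrupt to one CPU, walking 0-3, 6-9, 12-15, 18-21
cpus="0 1 2 3 6 7 8 9 12 13 14 15 18 19 20 21"
i=0
for irq in $(awk -F: '/eth1/ {gsub(/ /, "", $1); print $1}' /proc/interrupts); do
    cpu=$(echo $cpus | cut -d' ' -f$((i + 1)))
    # smp_affinity expects a hexadecimal CPU bitmask
    printf '%x' $((1 << cpu)) > /proc/irq/$irq/smp_affinity
    i=$(( (i + 1) % 16 ))
done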

higher hash size

Increasing the hash size further (24 or 28 bits) tends to increase the overall conn/s, but with worse drops at times, especially with 28.

with 24 bits:

[graphs: bps, packets and connections per second, IPVS connection tracking]

with 28 bits (same test, run twice):

[graphs: bps, packets and connections per second, IPVS connection tracking (two runs)]
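As a rough order of magnitude (assuming one pointer-sized bucket head of 8 bytes per slot, which is an assumption about the kernel structure), the table of bucket heads stays small at 20 bits but grows quickly beyond that:

echo $(( (1 << 20) * 8 / 1024 / 1024 )) MB    # 20 bits ->    8 MB of bucket heads
echo $(( (1 << 28) * 8 / 1024 / 1024 )) MB    # 28 bits -> 2048 MB of bucket heads

The per-connection entries are allocated separately and come on top of that, regardless of the bit count.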

Conclusion

  • As always, affinity helps. In this case, not only with performance, but also with stability.
  • If you have a lot of connections to balance, increase conn_tab_bits to at least 20. It doesn't take much memory, and it helps a lot.