  * on a high-load configuration, reducing the number of processes to just one per used core is better (see the sketch after this list)
  * 240k connections per second is doable with a single host
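
As a hedged illustration of the one-process-per-core point, here is a minimal nginx.conf fragment. The directives are standard nginx, but the worker count and affinity bitmasks are illustrative (shown for 4 cores for brevity; a 16-core host would use 16 workers and 16-bit masks):

  # one worker per used core, each pinned to its core via a CPU bitmask
  worker_processes     4;
  worker_cpu_affinity  0001 0010 0100 1000;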
+ | |||
+ | For some unknown reason (at the time of writing that documentation), the | ||
+ | connections highly drops for 1-2s, as can be seen on | ||
+ | [[http://www.hagtheil.net/files/system/benches10gbps/direct/bench-bad/nginx-bad/elastiques-nginx/|bench-bad/nginx-bad]] | ||
+ | graphs. I tried to avoid using results triggering such behaviour. Any ideas/hints on what could produce such are welcome. | ||
+ | |||
====== post-bench ======

After publishing the first benches, someone advised using httpterm instead of nginx. Unlike nginx, httpterm is aimed solely at stress benchmarking, not at serving real pages.

Benching with multi-process httpterm directly exposes a bug: it still sends headers, but fails to send data. Dropping down to 1 process keeps it running, but obviously does not use all cores.

Since the web server has 16 cores, 16 single-process httpterm instances were launched, each with its own IP and each pinned to its own CPU with taskset.

file-0.cfg:
  # taskset 000010 ./httpterm -D -f file-0.cfg
  global
      maxconn 30000
      ulimit-n 500000
      nbproc 1
      quiet

  listen proxy1 10.128.0.0:80
      object weight 1 name test1 code 200 size 200
      clitimeout 10000
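
A minimal launcher sketch for that setup (an assumption of how the 16 instances could be spawned, not the exact commands used: it treats file-0.cfg as a template, assumes the configs differ only in their listen IP, and assumes 10.128.0.0 through 10.128.0.15 are already configured on the interface):

  # core 0 runs the template config as-is
  taskset -c 0 ./httpterm -D -f file-0.cfg
  # generate configs for cores 1-15 and pin one httpterm instance per core
  for i in $(seq 1 15); do
      sed "s/10\.128\.0\.0/10.128.0.$i/" file-0.cfg > file-$i.cfg
      taskset -c $i ./httpterm -D -f file-$i.cfg
  done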
+ | |||
+ | That gives up more connections per seconds: 278765 | ||
+ | |||
+ | |||
+ | That helps get even more requests per seconds, but we still get some stall at times. | ||