Benchmarking Nginx with Go: comparing the ways to serve a Go application
Conclusion: Go HTTP standalone > (faster than) Nginx proxy to Go HTTP > (faster than) Nginx FastCGI to Go TCP FastCGI
Chinese translation: http://www.oschina.net/translate/benchmarking-nginx-with-go?from=20131222
Original (English): https://gist.github.com/hgfischer/7965620
There are many ways to serve a Go HTTP application, and the best choice depends on each application's actual situation. Nginx currently looks like the standard web server for every new project, even though there are many other good web servers. Still, how much overhead does serving a Go application behind Nginx add? Do we need some of Nginx's features (vhosts, load balancing, caching, and so on), or should we serve directly from Go? And if you do need Nginx, what is the fastest connection mechanism? These are the questions I try to answer here. The purpose of this benchmark is not to prove that Go is faster or slower than Nginx; that would be silly.
These are the different setups we will compare:
- Go HTTP standalone (as the control group)
- Nginx proxy to Go HTTP
- Nginx fastcgi to Go TCP FastCGI
- Nginx fastcgi to Go Unix Socket FastCGI
Hardware
Since we will compare all setups on the same hardware, a cheap machine was chosen. This should not be a big deal.
- Samsung laptop NP550P5C-AD1BR
- Intel Core i7 3630QM @2.4GHz (quad core, 8 threads)
- CPU caches: (L1: 256KiB, L2: 1MiB, L3: 6MiB)
- RAM 8GiB DDR3 1600MHz
Software
- Ubuntu 13.10 amd64 Saucy Salamander (updated)
- Nginx 1.4.4 (1.4.4-1~saucy0 amd64)
- Go 1.2 (linux/amd64)
- wrk 3.0.4
Setup
Kernel
Just a tiny bit of tuning, raising the kernel limits. If you have better ideas for these variables, please leave a comment below:
fs.file-max 9999999
fs.nr_open 9999999
net.core.netdev_max_backlog 4096
net.core.rmem_max 16777216
net.core.somaxconn 65535
net.core.wmem_max 16777216
net.ipv4.ip_forward 0
net.ipv4.ip_local_port_range 1025 65535
net.ipv4.tcp_fin_timeout 30
net.ipv4.tcp_keepalive_time 30
net.ipv4.tcp_max_syn_backlog 20480
net.ipv4.tcp_max_tw_buckets 400000
net.ipv4.tcp_no_metrics_save 1
net.ipv4.tcp_syn_retries 2
net.ipv4.tcp_synack_retries 2
net.ipv4.tcp_tw_recycle 1
net.ipv4.tcp_tw_reuse 1
vm.min_free_kbytes 65536
vm.overcommit_memory 1
Limits
The maximum open file limits for root and www-data were configured to be 200000.
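The article does not show the actual limits file. On Ubuntu this is typically configured in /etc/security/limits.conf; a sketch of what the described settings would look like (my reconstruction, not the author's exact file):

```text
root      soft  nofile  200000
root      hard  nofile  200000
www-data  soft  nofile  200000
www-data  hard  nofile  200000
```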
Nginx
A few Nginx tweaks were needed. As someone pointed out to me, I disabled gzip to keep the comparison fair. Here is its configuration file, /etc/nginx/nginx.conf:
user www-data;
worker_processes auto;
worker_rlimit_nofile 200000;
pid /var/run/nginx.pid;

events {
    worker_connections 10000;
    use epoll;
    multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 300;
    keepalive_requests 10000;
    types_hash_max_size 2048;

    open_file_cache max=200000 inactive=300s;
    open_file_cache_valid 300s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    server_tokens off;
    dav_methods off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log combined;
    error_log /var/log/nginx/error.log warn;

    gzip off;
    gzip_vary off;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*.conf;
}
Nginx vhosts
upstream go_http {
    server 127.0.0.1:8080;
    keepalive 300;
}

server {
    listen 80;
    server_name go.http;
    access_log off;
    error_log /dev/null crit;

    location / {
        proxy_pass http://go_http;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}

upstream go_fcgi_tcp {
    server 127.0.0.1:9001;
    keepalive 300;
}

server {
    listen 80;
    server_name go.fcgi.tcp;
    access_log off;
    error_log /dev/null crit;

    location / {
        include fastcgi_params;
        fastcgi_keep_conn on;
        fastcgi_pass go_fcgi_tcp;
    }
}

upstream go_fcgi_unix {
    server unix:/tmp/go.sock;
    keepalive 300;
}

server {
    listen 80;
    server_name go.fcgi.unix;
    access_log off;
    error_log /dev/null crit;

    location / {
        include fastcgi_params;
        fastcgi_keep_conn on;
        fastcgi_pass go_fcgi_unix;
    }
}
The Go source code
package main

import (
    "fmt"
    "log"
    "net"
    "net/http"
    "net/http/fcgi"
    "os"
    "os/signal"
    "syscall"
)

const (
    SOCK = "/tmp/go.sock"
)

type Server struct {
}

func (s Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    body := "Hello World\n"
    // Try to keep the same amount of headers
    w.Header().Set("Server", "gophr")
    w.Header().Set("Connection", "keep-alive")
    w.Header().Set("Content-Type", "text/plain")
    w.Header().Set("Content-Length", fmt.Sprint(len(body)))
    fmt.Fprint(w, body)
}

func main() {
    sigchan := make(chan os.Signal, 1)
    signal.Notify(sigchan, os.Interrupt)
    signal.Notify(sigchan, syscall.SIGTERM)

    server := Server{}

    go func() {
        http.Handle("/", server)
        if err := http.ListenAndServe(":8080", nil); err != nil {
            log.Fatal(err)
        }
    }()

    go func() {
        tcp, err := net.Listen("tcp", ":9001")
        if err != nil {
            log.Fatal(err)
        }
        fcgi.Serve(tcp, server)
    }()

    go func() {
        unix, err := net.Listen("unix", SOCK)
        if err != nil {
            log.Fatal(err)
        }
        fcgi.Serve(unix, server)
    }()

    <-sigchan

    if err := os.Remove(SOCK); err != nil {
        log.Fatal(err)
    }
}
Checking the HTTP headers
$ curl -sI http://127.0.0.1:8080/
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 12
Content-Type: text/plain
Server: gophr
Date: Sun, 15 Dec 2013 14:59:14 GMT

$ curl -sI http://127.0.0.1:8080/ | wc -c
141

$ curl -sI http://go.http/
HTTP/1.1 200 OK
Server: nginx
Date: Sun, 15 Dec 2013 14:59:31 GMT
Content-Type: text/plain
Content-Length: 12
Connection: keep-alive

$ curl -sI http://go.http/ | wc -c
141

$ curl -sI http://go.fcgi.tcp/
HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 12
Connection: keep-alive
Date: Sun, 15 Dec 2013 14:59:40 GMT
Server: gophr

$ curl -sI http://go.fcgi.tcp/ | wc -c
141

$ curl -sI http://go.fcgi.unix/
HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 12
Connection: keep-alive
Date: Sun, 15 Dec 2013 15:00:15 GMT
Server: gophr

$ curl -sI http://go.fcgi.unix/ | wc -c
141
Start your engines
Benchmarks
GOMAXPROCS = 1
Go standalone
# wrk -t100 -c5000 -d30s http://127.0.0.1:8080/
Running 30s test @ http://127.0.0.1:8080/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   116.96ms   17.76ms 173.96ms   85.31%
    Req/Sec    429.16     49.20   589.00    69.44%
  1281567 requests in 29.98s, 215.11MB read
Requests/sec:  42745.15
Transfer/sec:      7.17MB
Nginx + Go through HTTP
# wrk -t100 -c5000 -d30s http://go.http/
Running 30s test @ http://go.http/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   124.57ms   18.26ms 209.70ms   80.17%
    Req/Sec    406.29     56.94     0.87k   89.41%
  1198450 requests in 29.97s, 201.16MB read
Requests/sec:  39991.57
Transfer/sec:      6.71MB
Nginx + Go through FastCGI TCP
# wrk -t100 -c5000 -d30s http://go.fcgi.tcp/
Running 30s test @ http://go.fcgi.tcp/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   514.57ms  119.80ms   1.21s    71.85%
    Req/Sec     97.18     22.56   263.00    79.59%
  287416 requests in 30.00s, 48.24MB read
  Socket errors: connect 0, read 0, write 0, timeout 661
Requests/sec:   9580.75
Transfer/sec:      1.61MB
Nginx + Go through FastCGI Unix Socket
# wrk -t100 -c5000 -d30s http://go.fcgi.unix/
Running 30s test @ http://go.fcgi.unix/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   425.64ms   80.53ms 925.03ms   76.88%
    Req/Sec    117.03     22.13   255.00    81.30%
  350162 requests in 30.00s, 58.77MB read
  Socket errors: connect 0, read 0, write 0, timeout 210
Requests/sec:  11670.72
Transfer/sec:      1.96MB
GOMAXPROCS = 8
Go standalone
# wrk -t100 -c5000 -d30s http://127.0.0.1:8080/
Running 30s test @ http://127.0.0.1:8080/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    39.25ms    8.49ms  86.45ms   81.39%
    Req/Sec      1.29k   129.27     1.79k   69.23%
  3837995 requests in 29.89s, 644.19MB read
Requests/sec: 128402.88
Transfer/sec:     21.55MB
Nginx + Go through HTTP
# wrk -t100 -c5000 -d30s http://go.http/
Running 30s test @ http://go.http/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   336.77ms  297.88ms 632.52ms   60.16%
    Req/Sec      2.36k     2.99k   19.11k   84.83%
  2232068 requests in 29.98s, 374.64MB read
Requests/sec:  74442.91
Transfer/sec:     12.49MB
Nginx + Go through FastCGI TCP
# wrk -t100 -c5000 -d30s http://go.fcgi.tcp/
Running 30s test @ http://go.fcgi.tcp/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   217.69ms  121.22ms   1.80s    75.14%
    Req/Sec    263.09    102.78   629.00    62.54%
  721027 requests in 30.01s, 121.02MB read
  Socket errors: connect 0, read 0, write 176, timeout 1343
Requests/sec:  24026.50
Transfer/sec:      4.03MB
Nginx + Go through FastCGI Unix Socket
# wrk -t100 -c5000 -d30s http://go.fcgi.unix/
Running 30s test @ http://go.fcgi.unix/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   694.32ms  332.27ms   1.79s    62.13%
    Req/Sec    646.86    669.65     6.11k   87.80%
  909836 requests in 30.00s, 152.71MB read
Requests/sec:  30324.77
Transfer/sec:      5.09MB
Conclusions
In the first round of benchmarks, some of the Nginx settings were not yet well optimized (gzip was enabled, and the Go backend was not using keep-alive connections). After switching to wrk and tuning Nginx as recommended, the results were quite different.
Source: http://blog.csdn.net/typ2004/article/details/39482245