Load Testing HAProxy (Part 1)

by Sachin Malhotra

This is the first post in a 3-part series on load testing HAProxy, a reliable, high-performance TCP/HTTP load balancer.

Load testing? HAProxy? If all this seems Greek to you, don't worry. I will provide inline links to read up on everything I'm talking about in this blog post.

For reference, our current stack is:

  • Instances hosted on Amazon EC2 (not that this one should matter)

  • Ubuntu 14.04 (Trusty) for the OS
  • Supervisor for process management

On production, we have 30-odd HAProxy load balancers that help us route traffic to the backend servers, which are in an autoscaling mode and hence don't have a fixed count. The number of backend servers ranges from 12 to 32 throughout the day.

This article should help you get up-to-speed on the basics of load balancing and how it works with HAProxy. It will also explain what routing algorithms are available.

Coming back to our topic at hand, which is load testing HAProxy.

We had never before put any dedicated effort into finding the limits of our HAProxy setup in handling HTTP and HTTPS requests. Currently, on production, we have 4-core, 30 GB machines running HAProxy.

Introducing Amazon EC2 R4 Instances, the next generation of memory-optimized instances (aws.amazon.com)

As I am writing this post, we're in the process of moving our entire traffic from HTTP to HTTPS (that is, encrypted traffic). But before moving further, we needed some definitive answers to the following questions:

  1. What is the impact as we shift our traffic from non-SSL to SSL? CPU should definitely take a hit, because the SSL handshake is not a normal 3-way handshake but rather a 5-way handshake, and after the handshake is complete, further communication is encrypted using the secret key generated during the handshake, which is bound to take up CPU.

  2. What other hardware/software limits might be reached on production as a result of SSL termination at the HAProxy level? We could also go for the SSL passthrough option provided by HAProxy, which terminates/decrypts the SSL connection at the backend servers instead. However, SSL termination at the HAProxy level is more performant, so this is what we intend to test.

  3. What is the best hardware required on production to support the kind of load that we see today? Will the existing hardware scale, or do we need bigger machines? This was also one of the prime questions we wanted this test to answer.

For this purpose, we put in a dedicated effort to load test HAProxy version 1.6 and find answers to the above questions. I won't be outlining the approach we took, nor the results of this exercise, in this blog post.

Rather, I will be discussing an important aspect of any load testing exercise that most of us tend to ignore.

The Ulimiter

If you have ever done any kind of load testing or hosted any server serving a lot of concurrent requests, you definitely would have run into the dreaded “Too many open files” issue.

An important part of any stress testing exercise is the ability of your load testing client to establish a lot of concurrent connections to your backend server, or to a proxy like HAProxy sitting in between.

A lot of times we end up bottlenecked on the client, which is not able to generate the amount of load we expect it to. The reason is not that the client is performing suboptimally, but something else entirely at the operating system level.

ulimit is used to restrict the number of user-level resources. For all practical purposes pertaining to load testing environments, ulimit gives us the number of file descriptors that can be opened by a single process on the system. On most machines, if you check the limit on file descriptors, it comes out to be 1024.

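As a quick sanity check, here is a minimal shell sketch (assuming a Linux box with bash) for inspecting these limits yourself; the exact numbers will vary between machines:

    # Per-process file descriptor limits for the current shell
    ulimit -Sn    # soft limit (commonly 1024 by default)
    ulimit -Hn    # hard limit, the ceiling the soft limit can be raised to

    # System-wide cap on open file handles across all processes
    cat /proc/sys/fs/file-max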

As you can see, the number of open files is limited to 1024 on our staging setup. Opening a new TCP connection/socket also counts as an open file, or file descriptor, hence the limitation.

What this generally means is that a single client process can open at most 1024 connections to the backend servers, and no more. It means you need to increase this limit to a much higher number on your load testing environment before proceeding further. Check out the ulimit setting we have on our production machines.

This information is what you would generally find after 10 seconds of Googling, but keep in mind that ulimit is not guaranteed to give you the limits your processes actually have! There are a million things that can modify the limits of a process after (or before) you initialized your shell. So what you should do instead is fire up top, htop, ps, or whatever you want to use to get the ID of the problematic process, and do a cat /proc/{process_id}/limits:

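For instance, a rough sketch of checking this for a running process, assuming here that the process of interest is haproxy (substitute whatever process you are actually debugging):

    # Grab the PID of the oldest matching process (haproxy is just an example)
    pgrep -o haproxy

    # Inspect the limits that this particular process is really running with
    cat /proc/$(pgrep -o haproxy)/limits

Look for the "Max open files" row in the output; it can easily differ from what ulimit -n reports in your shell.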

The max open files limit for this specific process is different from the system-wide limit we have on this server.

Let’s move on to the interesting part. Raising the limits :D

The Stuff You Came Here to Read: Raising the Limit

There are two ways of changing the ulimit setting on a machine.

  1. ulimit -n <some_value>. This will change the ulimit setting only for the current shell session. As soon as you open another shell session, you are back to square one, i.e. 1024 file descriptors. So this is probably not what you want.

  2. fs.file-max = 500000. Add this line to the end of the file /etc/sysctl.conf. And add the following lines to the file /etc/security/limits.conf:

         * soft nofile 500000
         * hard nofile 500000
         root soft nofile 500000
         root hard nofile 500000

The * basically means that we are setting these values for all users except root. "soft" and "hard" represent the soft and hard limits. The next entry specifies the item whose limit we want to change, i.e. nofile in this case, which means the number of open files. And finally we have the value we want to set, which in this case is 500000. The * does not apply to the root user, hence the last two lines specifically for root.

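If you prefer to script these edits rather than open an editor, a sketch along these lines should work (run with root privileges; the paths and values are exactly the ones discussed above):

    # Append the kernel-wide cap to /etc/sysctl.conf
    echo 'fs.file-max = 500000' | sudo tee -a /etc/sysctl.conf

    # Append the per-user and root nofile limits to /etc/security/limits.conf
    printf '%s\n' \
        '* soft nofile 500000' \
        '* hard nofile 500000' \
        'root soft nofile 500000' \
        'root hard nofile 500000' | sudo tee -a /etc/security/limits.conf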

After doing this, you need to reboot the system. Sadly, yes :( And the changes should then be reflected in the ulimit -n command.

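Once the machine is back up, a quick way to confirm the new values actually took hold (a sketch, assuming the 500000 figure used above):

    # Per-shell limit, should now report 500000 for a fresh login session
    ulimit -n

    # Kernel-wide limit picked up from /etc/sysctl.conf
    sysctl fs.file-max

For the sysctl part alone, sudo sysctl -p reloads /etc/sysctl.conf without a reboot, but the limits.conf change is only applied to new login sessions, which is why a reboot is the simplest way to be sure.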

Hurray! Pat yourself on the back. You successfully changed the ulimit settings for the system. However, changing this will not necessarily affect all the user processes running on the system. It is quite possible that even after changing the system-wide ulimit, you will find that /proc/<pid>/limits gives you a smaller number than you expect.

In this case, you almost certainly have a process manager, or something similar, that is messing up your limits. You need to keep in mind that processes inherit the limits of their parent processes. So if you have something like Supervisor managing your processes, they will inherit the settings of the Supervisor daemon, and this overrides any changes you make to the system-level limits.

Supervisor has a config variable that sets the file descriptor limit of its main process. This setting is in turn inherited by any and all processes it launches. To override the default setting, you can add the following line to /etc/supervisor/supervisord.conf, in the [supervisord] section:

minfds=500000
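
For orientation, the relevant part of the config might then look roughly like this; minfds is the only setting being discussed here, and the other keys are just the typical Debian/Ubuntu package defaults shown for context:

    [supervisord]
    ; typical package defaults, shown only for context
    logfile=/var/log/supervisor/supervisord.log
    pidfile=/var/run/supervisord.pid
    ; raise the file descriptor limit for supervisord and everything it launches
    minfds=500000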

Updating this will lead to all the child processes controlled by Supervisor inheriting this updated limit. You just need to restart the Supervisor daemon to bring this change into effect.

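On an Ubuntu 14.04 box like ours, restarting and verifying might look like the sketch below; the service name is the one installed by the distribution package, and my_app is a hypothetical stand-in for whichever Supervisor-managed process you want to check:

    # Restart the supervisor daemon so the new minfds value is applied
    sudo service supervisor restart

    # Confirm that a supervisor-managed child process inherited the new limit
    cat /proc/$(pgrep -o -f my_app)/limits | grep 'Max open files'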

Remember to do this on any machine that intends to have a lot of concurrent connections open. Be it the client in a load testing scenario or a server trying to serve a lot of concurrent requests.

In Part 2, we’ll learn how to deal with the Sysctl port range monster.

Do let me know how this blog post helped you. Also, please recommend (❤) this post if you think this may be useful for someone.

Translated from: https://www.freecodecamp.org/news/load-testing-haproxy-part-1-f7d64500b75d/
