WebSocket Concurrency

One thing about WebSockets is that you need a lot of resources on the client side to generate a load high enough for the server to actually eat up all of its CPU resources.


There are several challenges you have to overcome, because the WebSockets protocol is more CPU-demanding on the client side than on the server side. At the same time, you need a lot of RAM to store information about open connections if you have millions of them.


I've been lucky enough to get a couple of new servers at my disposal for a limited period of time for hardware "burn-in" tests. So I decided to use my Lua Application Server, LAppS, to do both jobs: test the hardware and perform the LAppS high-load tests.


Let's see the details


Challenges

First of all, WebSockets on the client side require a pretty fast RNG for traffic masking, unless you want to fake it. I wanted the tests to be realistic, so I discarded the idea of faking the RNG by replacing it with a constant. This leaves the only option: a lot of CPU power. As I said, I have two servers: one with dual Intel Gold 6148 CPUs (40 cores in total) and 256GB of RAM (DDR4-2666), and one with quad Intel Platinum 8180 CPUs (112 cores in total) and 384GB of RAM (DDR4-2666). I'll be referring to them as Server A and Server B hereafter.


The second challenge: there are no libraries or test suites for WebSockets that are fast enough, which is why I was forced to develop a WebSockets client module for LAppS (cws).


The test setup

Both servers have dual-port 10GbE cards. Port 1 of each card is aggregated into a bonding interface, and these ports on the two cards are connected to each other in RR mode. Port 2 of each card is aggregated into a bonding interface in Active-Backup mode, connected over an Ethernet switch. Each hardware server was running RHEL 7.6. I had to install gcc-8.3 from the sources to have a modern compiler instead of the default gcc-4.8.5. Not the best setup for performance tests, yet beggars can't be choosers.


CPUs on Server A (2x Gold 6148) run at a higher frequency than those on Server B (4x Platinum 8180). At the same time, Server B has more cores, so I decided to run the echo server on Server A and the echo clients on Server B.


I wanted to see how LAppS behaves under high load with millions of connections, and what maximum number of echo requests per second it can serve. These are actually two different sets of tests, because as the number of connections grows, the server and clients are doing an increasingly complex job.


Before this I had run all the tests on my home PC, which I use for development. The results of these tests will be used for comparison. My home PC has an Intel Core i7-7700 CPU at 3.6GHz (4.0GHz Turbo) with 4 cores and 32GB of DDR4-2400 RAM. This PC runs Gentoo with the 4.14.72 kernel.


Spectre and Meltdown patch levels

  • Home PC


    /sys/devices/system/cpu/vulnerabilities/l1tf:Mitigation: PTE Inversion; VMX: conditional cache flushes, SMT vulnerable
    /sys/devices/system/cpu/vulnerabilities/meltdown:Mitigation: PTI
    /sys/devices/system/cpu/vulnerabilities/spec_store_bypass:Mitigation: Speculative Store Bypass disabled via prctl and seccomp
    /sys/devices/system/cpu/vulnerabilities/spectre_v1:Mitigation: __user pointer sanitization
    /sys/devices/system/cpu/vulnerabilities/spectre_v2:Vulnerable: Minimal generic ASM retpoline, IBPB, IBRS_FW
  • Servers A and B


    /sys/kernel/debug/x86/ibpb_enabled : 1
    /sys/kernel/debug/x86/pti_enabled : 1
    /sys/kernel/debug/x86/ssbd_enabled : 1
    /sys/kernel/debug/x86/ibrs_enabled : 1
    /sys/kernel/debug/x86/retp_enabled : 3

I'll note, alongside the test results for Servers A and B, whether these patches were enabled or not.


On my home PC the patch level never changed.


Installing LAppS 0.8.1

Installing prerequisites and LAppS

LAppS depends on luajit-2.0.5 or higher, libcrypto++ 8.2 and the wolfSSL-3.15.7 libraries, which have to be installed from sources on RHEL 7.6, and likely on any other Linux distribution.


The installation prefix is /usr/local. Here is the pretty much self-explanatory Dockerfile part for the wolfSSL installation:


ADD https://github.com/wolfSSL/wolfssl/archive/v3.15.7-stable.tar.gz ${WORKSPACE}
RUN tar xzvf v3.15.7-stable.tar.gz
WORKDIR ${WORKSPACE}/wolfssl-3.15.7-stable
RUN ./autogen.sh
RUN ./configure CFLAGS="-pipe -O2 -march=native -mtune=native -fomit-frame-pointer -fstack-check -fstack-protector-strong -mfpmath=sse -msse2avx -mavx2 -ftree-vectorize -funroll-loops -DTFM_TIMING_RESISTANT -DECC_TIMING_RESISTANT -DWC_RSA_BLINDING" --prefix=/usr/local --enable-tls13 --enable-openssh --enable-aesni --enable-intelasm --enable-keygen --enable-certgen --enable-certreq --enable-curve25519 --enable-ed25519 --enable-intelasm --enable-harden
RUN make -j40 all
RUN make install

And here is the part for the libcrypto++ installation:


ADD https://github.com/weidai11/cryptopp/archive/CRYPTOPP_8_2_0.tar.gz ${WORKSPACE}
RUN rm -rf ${WORKSPACE}/cryptopp-CRYPTOPP_8_2_0
RUN tar xzvf ${WORKSPACE}/CRYPTOPP_8_2_0.tar.gz
WORKDIR ${WORKSPACE}/cryptopp-CRYPTOPP_8_2_0
RUN make CFLAGS="-pipe -O2 -march=native -mtune=native -fPIC -fomit-frame-pointer -fstack-check -fstack-protector-strong -mfpmath=sse -msse2avx -mavx2 -ftree-vectorize -funroll-loops" CXXFLAGS="-pipe -O2 -march=native -mtune=native -fPIC -fomit-frame-pointer -fstack-check -fstack-protector-strong -mfpmath=sse -msse2avx -mavx2 -ftree-vectorize -funroll-loops" -j40 libcryptopp.a libcryptopp.so
RUN make install

And the luajit


ADD http://luajit.org/download/LuaJIT-2.0.5.tar.gz ${WORKSPACE}
WORKDIR ${WORKSPACE}
RUN tar xzvf LuaJIT-2.0.5.tar.gz
WORKDIR ${WORKSPACE}/LuaJIT-2.0.5
RUN env CFLAGS="-pipe -Wall -pthread -O2 -fPIC -march=native -mtune=native -mfpmath=sse -msse2avx -mavx2 -ftree-vectorize -funroll-loops -fstack-check -fstack-protector-strong -fno-omit-frame-pointer" make -j40 all
RUN make install

One optional dependency of LAppS, which may be ignored, is Microsoft's new mimalloc library. This library gives a noticeable performance improvement (about 1%) but requires cmake-3.4 or higher. Given the limited amount of time for the tests, I decided to sacrifice this improvement.


On my home PC, I'll not disable mimalloc during the tests.


Let's check out LAppS from the repository:


WORKDIR ${WORKSPACE}
RUN rm -rf ITCLib ITCFramework lar utils LAppS
RUN git clone https://github.com/ITpC/ITCLib.git
RUN git clone https://github.com/ITpC/utils.git
RUN git clone https://github.com/ITpC/ITCFramework.git
RUN git clone https://github.com/ITpC/LAppS.git
RUN git clone https://github.com/ITpC/lar.git
WORKDIR ${WORKSPACE}/LAppS

Now we need to remove "-lmimalloc" from all the Makefiles in the nbproject subdirectory, so we can build LAppS (assuming our current directory is ${WORKSPACE}/LAppS):


# find ./nbproject -type f -name "*.mk" -exec sed -i.bak -e 's/-lmimalloc//g' {} \;
# find ./nbproject -type f -name "*.bak" -exec rm {} \;

And now we can build LAppS. LAppS provides several build configurations, which may or may not exclude some features on the server side:


  • with SSL support and with statistics gathering support
  • with SSL and without statistics gathering (though minimal statistics gathering will continue, as it is used for dynamic LAppS tuning at runtime)
  • without SSL and without statistics gathering.

Before the next step, make sure you are the owner of the /opt/lapps directory (or run make install with sudo).


Let's make two kinds of binaries, with SSL support and statistics gathering and without (assuming we are within the ${WORKSPACE}/LAppS directory):


# make clean
# make CONF=Release.AVX2 install
# make CONF=Release.AVX2.NO_STATS.NO_TLS install

The resulting binaries are:


  • dist/Release.AVX2/GNU-Linux/lapps.avx2
  • dist/Release.AVX2.NO_STATS.NO_TLS/GNU-Linux/lapps.avx2.nostats.notls

They will be installed into /opt/lapps/bin.


Please note that the WebSockets client module for Lua is always built with SSL support. Whether it is used or not depends on the URI you use for the connection (ws:// or wss://) at runtime.


Test 1. Top performance on home PC. Configuration for baseline.

I had already established that I get the best performance when I configure four benchmark service instances with 100 connections each. At the same time, I need only three IOWorkers and four echo service instances to achieve the best performance. Remember? I have only 4 cores here.


The purpose of this test is just to establish a baseline for further comparison. Nothing exciting here really.


Below are the steps required to configure LAppS for the tests.


Self-signed certificates

The certgen.sh script

#!/bin/bash
openssl genrsa -out example.org.key 2048
openssl req -new -key example.org.key -out example.org.csr
openssl genrsa -out ca.key 2048
openssl req -new -x509 -key ca.key -out ca.crt
openssl req -new -x509 -key ca.key -out ca.crt
openssl x509 -req -in example.org.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out example.org.crt
cat example.org.crt ca.crt > example.org.bundle.crt

Running this script from within the /opt/lapps/conf/ssl directory will create all the required files. Here is the script's output and what I typed in:


certgen.sh output

# certgen.sh
Generating RSA private key, 2048 bit long modulus
.................................................................................................................................................................+++++
.....................................+++++
e is 65537 (0x010001)
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:KZ
State or Province Name (full name) [Some-State]:none
Locality Name (eg, city) []:Almaty
Organization Name (eg, company) [Internet Widgits Pty Ltd]:NOORG.DO.NOT.FORGET.TO.REMOVE.FROM.BROWSER
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:
Email Address []:
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
Generating RSA private key, 2048 bit long modulus
...+++++
............................................................................+++++
e is 65537 (0x010001)
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:KZ
State or Province Name (full name) [Some-State]:none
Locality Name (eg, city) []:Almaty
Organization Name (eg, company) [Internet Widgits Pty Ltd]:none
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:*.example.org
Email Address []:
Signature ok
subject=C = KZ, ST = none, L = Almaty, O = NOORG.DO.NOT.FORGET.TO.REMOVE.FROM.BROWSER
Getting CA Private Key

LAppS configuration

Here is the WebSockets configuration file, set for TLS 1.3, which includes the certificates generated above and is configured with 3 IOWorkers (see the detailed description of the variables in the LAppS wiki).


/opt/lapps/etc/conf/ws.json

{
  "listeners" : 2,
  "connection_weight": 1.0,
  "ip" : "0.0.0.0",
  "port" : 5083,
  "lapps_config_auto_save" : true,
  "workers" : {
    "workers": 3,
    "max_connections" : 40000,
    "auto_fragment" : false,
    "max_poll_events" : 256,
    "max_poll_wait_ms" : 10,
    "max_inbounds_skip" : 50,
    "input_buffer_size" : 2048
  },
  "acl" : {
    "policy" : "allow",
    "exclude" : []
  },
  "tls": true,
  "tls_server_version" : 4,
  "tls_client_version" : 4,
  "tls_certificates": {
    "ca": "/opt/lapps/conf/ssl/ca.crt",
    "cert": "/opt/lapps/conf/ssl/example.org.bundle.crt",
    "key": "/opt/lapps/conf/ssl/example.org.key"
  }
}

Configuring the services: the echo server and the echo client (benchmark), each with four instances.


/opt/lapps/etc/conf/lapps.json

{
  "directories": {
    "app_conf_dir": "etc",
    "applications": "apps",
    "deploy": "deploy",
    "tmp": "tmp",
    "workdir": "workdir"
  },
  "services": {
    "echo": {
      "auto_start": true,
      "instances": 4,
      "internal": false,
      "max_inbound_message_size": 16777216,
      "protocol": "raw",
      "request_target": "/echo",
      "acl" : {
        "policy" : "allow",
        "exclude" : []
      }
    },
    "benchmark": {
      "auto_start": true,
      "instances" : 4,
      "internal": true,
      "preload": [ "nap", "cws", "time" ],
      "depends" : [ "echo" ]
    }
  }
}

The echo service is pretty trivial:


echo service source code

echo = {}
echo.__index = echo;

echo.onStart=function()
  print("echo::onStart");
end

echo.onDisconnect=function()
end

echo.onShutdown=function()
  print("echo::onShutdown()");
end

echo.onMessage=function(handler,opcode, message)
  local result, errmsg=ws:send(handler,opcode,message);
  if(not result)
  then
    print("echo::OnMessage(): "..errmsg);
  end
  return result;
end

return echo

The benchmark service creates as many as benchmark.max_connections connections to benchmark.target and then just runs until you stop LAppS. There is no pause in the connections' establishment or in the bombardment of echo requests. The cws module API resembles the Web API for WebSockets. Once all benchmark.max_connections are established, the benchmark prints the number of sockets connected. Once a connection is established, the benchmark sends a benchmark.message to the server. After the server replies, the anonymous onmessage method of the cws object is invoked, which just sends the same message back to the server.


benchmark service source code

benchmark={}
benchmark.__index=benchmark

benchmark.init=function()
end

benchmark.messages_counter=0;
benchmark.start_time=time.now();
benchmark.max_connections=100;
benchmark.target="wss://127.0.0.1:5083/echo";
benchmark.message="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX";

benchmark.meter=function()
  benchmark.messages_counter=benchmark.messages_counter+1;
  local slice=time.now() - benchmark.start_time;
  if( slice >= 1000)
  then
    print(benchmark.messages_counter.." messages received per "..slice.."ms")
    benchmark.messages_counter=0;
    benchmark.start_time=time.now();
  end
end

benchmark.run=function()
  local n=nap:new();
  local counter=1;
  n:sleep(1);
  local array={};
  local start=time.now();
  while(#array < benchmark.max_connections) and (not must_stop())
  do
    local sock, err_msg=cws:new(benchmark.target,
    {
      onopen=function(handler)
        local result, errstr=cws:send(handler,benchmark.message,2);
        if(not result)
        then
          print("Error on websocket send at handler "..handler..": "..errstr);
        end
      end,
      onmessage=function(handler,message,opcode)
        benchmark.meter();
        cws:send(handler,message,opcode);
      end,
      onerror=function(handler, message)
        print("Client WebSocket connection is failed for socketfd "..handler..". Error: "..message);
      end,
      onclose=function(handler)
        print("Connection is closed for socketfd "..handler);
      end
    });
    if(sock ~= nil)
    then
      table.insert(array,sock);
    else
      print(err_msg);
      err_msg=nil;
      collectgarbage("collect");
    end
    -- poll events once per 10 outgoing connections
    -- this will improve the connection establishment speed
    if counter == 10
    then
      cws:eventLoop();
      counter=1
    else
      counter = counter + 1;
    end
  end
  print("Sockets connected: "..#array);
  benchmark.start_time=time.now();
  while not must_stop()
  do
    cws:eventLoop();
  end
  for i=1,#array
  do
    array[i]:close();
    cws:eventLoop();
  end
end

return benchmark;

Services installation

Now we need to place the service scripts into /opt/lapps/apps/<service_name>/<service_name>.lua:


  • /opt/lapps/apps/benchmark/benchmark.lua
  • /opt/lapps/apps/echo/echo.lua

We are ready to run our benchmark now. Just run rm -f lapps.log; /opt/lapps/bin/lapps.avx2 > log from within the LAppS directory and wait for 5 minutes, then press Ctrl-C once to stop LAppS (it will not stop immediately; it will shut down the connections first), or twice (this interrupts the shutdown sequence).


Ok, we've got a text file with something like this inside:


benchmark output

echo::onStart
echo::onStart
echo::onStart
echo::onStart
running
1 messages received per 3196ms
1 messages received per 3299ms
1 messages received per 3299ms
1 messages received per 3305ms
Sockets connected: 100
Sockets connected: 100
Sockets connected: 100
Sockets connected: 100
134597 messages received per 1000ms
139774 messages received per 1000ms
138521 messages received per 1000ms
139404 messages received per 1000ms
140162 messages received per 1000ms
139337 messages received per 1000ms
140088 messages received per 1000ms
139946 messages received per 1000ms
141204 messages received per 1000ms
137988 messages received per 1000ms
141805 messages received per 1000ms
134733 messages received per 1000ms
...


Let's clean this log file like this:


echo -e ':%s/ms/^V^M/g\n:%g/^$/d\nGdd1G4/Sockets\n4\ndggZZ' | vi $i

'^V^M' is the visible representation of the following key hits: Ctrl-V Ctrl-V Ctrl-V Ctrl-M, so it is pretty useless to just copy-paste this bash line. A short explanation:

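If you prefer something directly copy-pasteable, here is a rough equivalent of the same clean-up (my own sketch, assuming that only the "messages received per …ms" lines matter for the calculation; the clean_log function name and sample.log file are hypothetical, used just for demonstration):

```shell
# Keep only the throughput lines and strip the trailing "ms",
# leaving numbers awk can divide.
clean_log() {
  grep 'messages received per' "$1" | sed 's/ms$//'
}

# Demonstration on a fabricated sample:
printf '%s\n' \
  'echo::onStart' \
  '134597 messages received per 1000ms' \
  'Sockets connected: 100' \
  '139774 messages received per 1000ms' > sample.log
clean_log sample.log
```

Running clean_log log > log.clean should give the awk one-liner below the same shape of data that the vi session produces.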

  • we have to replace the 'ms' symbols with an end of line, because we do not need them; they would mess up the calculations later on, and the 4 benchmarks working in parallel may print their results on one line.
  • we need to remove all empty lines afterwards
  • we remove the last line as well, because we stopped the server
  • in the log file there will be only four lines consisting of "Sockets connected: 100" (because we run only four benchmark services), so we skip 4 lines past the last of them, and then remove everything up to the top of the file.
  • save the file.

The file is saved, you are back in the shell, and the log file is ready for the next step:


# awk -v avg=0 '{rps=($1/$5);avg+=rps;}END{print ((avg*1000)/NR)*4}' log

This awk one-liner calculates the number of echo responses per millisecond for each line and accumulates the result in the avg variable. After all the lines in the log file are processed, it multiplies avg by 1000 to get responses per second, divides by the number of lines, and multiplies by the number of benchmark services. This gives us the average number of echo responses per second for this test run.

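To make the arithmetic concrete, here is the same one-liner run on two fabricated cleaned-up lines ($1 is the message count, $5 the slice in milliseconds):

```shell
printf '%s\n' \
  '134597 messages received per 1000' \
  '139774 messages received per 1000' > sample.clean
# ((134.597 + 139.774) * 1000 / 2 lines) * 4 services = 548742
awk -v avg=0 '{rps=($1/$5);avg+=rps;}END{print ((avg*1000)/NR)*4}' sample.clean
```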

On my PC this number (ERps) is: 563854


Let's do the same without SSL support and see the difference:


  • change the value of the tls variable in ws.json to false

  • change benchmark.target in benchmark.lua from wss:// to ws://

  • run rm -f lapps.log; /opt/lapps/bin/lapps.avx2.nostats.notls > log from within the LAppS directory, and repeat the above steps.

I've got: 721236 responses per second


The difference in performance with SSL and without SSL is about 22%. Let's keep these numbers in mind for future reference.

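The ~22% figure is just the relative difference between the two runs; a quick check with awk, using the numbers above:

```shell
# (721236 - 563854) / 721236, in percent: roughly 22%
awk 'BEGIN{printf "%.1f\n", (721236 - 563854) * 100 / 721236}'
```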

With the same setup on Server A, I've got 421905 ERpS with SSL and 443145 ERpS without SSL; the patches for Spectre and Meltdown were disabled. On Server B, I've got 270996 ERpS with SSL and 318522 ERpS without SSL with the patches enabled, and 385726 ERpS without SSL and 372126 ERpS with SSL with the patches disabled.


The results are worse with the same setup because the CPUs in these servers run at a lower frequency.


Please be aware that the clients are very dependent on data availability in /dev/urandom. It may take a while until the clients actually start running if you have already run them once. So just wait for them to start working; they are pretty fast at what they do. Just monitor with top whether the LAppS instances are actually doing any work at all. If /dev/urandom is exhausted, LAppS will not eat up your CPUs until there is some data available.

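One hedged way to see whether the entropy pool is the limiting factor before starting the clients (the /proc path is standard on Linux; what counts as "low" is my own rule of thumb, not something LAppS checks):

```shell
# On older kernels, a value well under ~1000 here suggests the clients
# may stall waiting for random data for their masking keys.
cat /proc/sys/kernel/random/entropy_avail
```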

Preparing for the big tests

First of all we need to make some changes to the kernel parameters, and we must not forget the ulimit for the number of open files. I used almost the same setup as in this article.


Create a file with the following content:

sysctl -w fs.file-max=14000000
sysctl -w fs.nr_open=14000000
ulimit -n 14000000
sysctl -w net.ipv4.tcp_mem="100000000 100000000 100000000"
sysctl -w net.core.somaxconn=20000
sysctl -w net.ipv4.tcp_max_syn_backlog=20000
sysctl -w net.ipv4.ip_local_port_range="1025 65535"

Then use source ./filename on both servers to change the kernel parameters and the ulimit.

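A quick sanity check that the new limits are in effect in the current shell (reading back via /proc, which avoids depending on sysctl being in PATH; this is my own verification step, not part of the original procedure):

```shell
# Both values should read 14000000 after sourcing the file as root.
cat /proc/sys/fs/file-max
cat /proc/sys/fs/nr_open
# The per-process open-files limit of this shell:
ulimit -n
```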

This time my purpose is to create several million clients on one server and connect them to the second.


Server A will serve as the WebSockets echo service server side. Server B will serve as the WebSockets echo service client side.


There is a limitation in LAppS imposed by LuaJIT. You can use only 2GB of RAM per LuaJIT VM. That is the limit of RAM you can use within a single LAppS instance for all the services, as the services are threads linked against one libluajit instance. If any of your services running under a single LAppS instance exceeds this limit, then all the services will be out of memory.


I found out that per single LAppS instance you can't establish more than 2 464 000 client WebSockets with a message size of 64 bytes. The message size may slightly change this limit, because LAppS passes the message to the cws service by allocating space for it within LuaJIT.

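A back-of-the-envelope check of what that ceiling implies (shell integer arithmetic; the per-connection byte figure is my own estimate derived from these two numbers, not something LAppS reports):

```shell
# 2 GB of LuaJIT address space divided by the observed
# ceiling of 2,464,000 client sockets:
echo $(( (2 * 1024 * 1024 * 1024) / 2464000 ))  # ~871 bytes per connection
```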

This implies that I have to start several LAppS instances with the same configuration on Server B to establish more than 2.4 million WebSockets. Server A (the echo server) does not use as much memory on the LuaJIT side, so one instance of LAppS will take care of 12.3 million WebSockets without any problem.


Let's prepare two different configs for the servers A and B.


Server A ws.json

{
  "listeners" : 224,
  "connection_weight": 1.0,
  "ip" : "0.0.0.0",
  "port" : 5084,
  "lapps_config_auto_save" : true,
  "workers" : {
    "workers": 40,
    "max_connections" : 2000000,
    "auto_fragment" : false,
    "max_poll_events" : 256,
    "max_poll_wait_ms" : 10,
    "max_inbounds_skip" : 50,
    "input_buffer_size" : 2048
  },
  "acl" : {
    "policy" : "allow",
    "exclude" : []
  },
  "tls": true,
  "tls_server_version" : 4,
  "tls_client_version" : 4,
  "tls_certificates": {
    "ca": "/opt/lapps/conf/ssl/ca.crt",
    "cert": "/opt/lapps/conf/ssl/example.org.bundle.crt",
    "key": "/opt/lapps/conf/ssl/example.org.key"
  }
}

Server A lapps.json

{
  "directories": {
    "app_conf_dir": "etc",
    "applications": "apps",
    "deploy": "deploy",
    "tmp": "tmp",
    "workdir": "workdir"
  },
  "services": {
    "echo": {
      "auto_start": true,
      "instances": 40,
      "internal": false,
      "max_inbound_message_size": 16777216,
      "protocol": "raw",
      "request_target": "/echo",
      "acl" : {
        "policy" : "allow",
        "exclude" : []
      }
    }
  }
}

Server B ws.json

{
  "listeners" : 0,
  "connection_weight": 1.0,
  "ip" : "0.0.0.0",
  "port" : 5083,
  "lapps_config_auto_save" : true,
  "workers" : {
    "workers": 0,
    "max_connections" : 0,
    "auto_fragment" : false,
    "max_poll_events" : 2048,
    "max_poll_wait_ms" : 10,
    "max_inbounds_skip" : 50,
    "input_buffer_size" : 2048
  },
  "acl" : {
    "policy" : "deny",
    "exclude" : []
  },
  "tls": true,
  "tls_server_version" : 4,
  "tls_client_version" : 4,
  "tls_certificates": {
    "ca": "/opt/lapps/conf/ssl/ca.crt",
    "cert": "/opt/lapps/conf/ssl/example.org.bundle.crt",
    "key": "/opt/lapps/conf/ssl/example.org.key"
  }
}

Server A has two interfaces:


  • bond0 — x.x.203.37
  • bond1 — x.x.23.10

One is faster and the other is slower, but it does not really matter. The server will be under heavy load anyway.


Let's prepare a template from our /opt/lapps/apps/benchmark/benchmark.lua


benchmark.lua

benchmark={}
benchmark.__index=benchmark

benchmark.init=function()
end

benchmark.messages_counter=0;
benchmark.start_time=time.now();
benchmark.max_connections=10000;
benchmark.target_port=0;
benchmark.target_prefix="wss://IPADDR:";
benchmark.target_postfix="/echo";
benchmark.message="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX";

benchmark.meter=function()
  benchmark.messages_counter=benchmark.messages_counter+1;
  local slice=time.now() - benchmark.start_time;
  if( slice >= 1000)
  then
    print(benchmark.messages_counter.." messages received per "..slice.."ms")
    benchmark.messages_counter=0;
    benchmark.start_time=time.now();
  end
end

benchmark.run=function()
  local n=nap:new();
  local counter=1;
  n:sleep(1);
  local array={};
  local start=time.now();
  while(#array < benchmark.max_connections) and (not must_stop())
  do
    benchmark.target_port=math.random(5084,5307);
    local sock, err_msg=cws:new(benchmark.target_prefix..benchmark.target_port..benchmark.target_postfix,
    {
      onopen=function(handler)
        local result, errstr=cws:send(handler,benchmark.message,2);
        if(not result)
        then
          print("Error on websocket send at handler "..handler..": "..errstr);
        end
      end,
      onmessage=function(handler,message,opcode)
        benchmark.meter();
        cws:send(handler,message,opcode);
      end,
      onerror=function(handler, message)
        print("Client WebSocket connection is failed for socketfd "..handler..". Error: "..message);
      end,
      onclose=function(handler)
        print("Connection is closed for socketfd "..handler);
      end
    });
    if(sock ~= nil)
    then
      table.insert(array,sock);
    else
      print(err_msg);
      err_msg=nil;
      collectgarbage("collect"); -- force garbage collection on connection failure
    end
    -- poll events once per 10 outgoing connections
    -- this will improve the connection establishment speed
    if counter == 10
    then
      cws:eventLoop();
      counter=1
    else
      counter = counter + 1;
    end
  end
  print("Sockets connected: "..#array);
  benchmark.start_time=time.now();
  while not must_stop()
  do
    cws:eventLoop();
  end
  for i=1,#array
  do
    array[i]:close();
    cws:eventLoop();
  end
end

return benchmark;

Let's store Server A IP addresses into files IP1 and IP2 respectively and then:


for i in 1 2
do
  mkdir -p /opt/lapps/apps/benchmark${i}
  sed -e "s/IPADDR/$(cat IP${i})/g" /opt/lapps/apps/benchmark/benchmark.lua > /opt/lapps/apps/benchmark${i}/benchmark${i}.lua
done

Now we modify the /opt/lapps/etc/conf/lapps.json on Server B to use these two benchmark services:


Server B lapps.json

{
  "directories": {
    "app_conf_dir": "etc",
    "applications": "apps",
    "deploy": "deploy",
    "tmp": "tmp",
    "workdir": "workdir"
  },
  "services": {
    "benchmark1": {
      "auto_start": true,
      "instances" : 112,
      "internal": true,
      "preload": [ "nap", "cws", "time" ]
    },
    "benchmark2": {
      "auto_start": true,
      "instances" : 112,
      "internal": true,
      "preload": [ "nap", "cws", "time" ]
    }
  }
}

Are we ready? No, we are not, because we intend to generate 2 240 000 outgoing sockets towards only two addresses, and we need more ports on the server side. It is impossible to create more than 64k connections to the same ip:port pair (actually a little less than 64k).

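The listener-port math behind this (shell arithmetic; 64 000 is the approximate usable connection count per destination ip:port pair, as noted above):

```shell
# Minimum distinct destination ports needed for 2,240,000 sockets:
echo $(( 2240000 / 64000 ))
```

The 224 listener ports that Server A ends up opening (5084 through 5307, matching the math.random(5084,5307) range in the benchmark template) therefore leave comfortable headroom above this minimum of 35.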

On Server A, in the file LAppS/include/wsServer.h, there is a function void startListeners(). In this function we will replace line 126

在服务器L上的文件LAppS / include / wsServer.h中,有一个函数void startListeners()。 在此功能中,我们将替换第126行

LAppSConfig::getInstance()->getWSConfig()["port"],

with this:

static_cast<int>(LAppSConfig::getInstance()->getWSConfig()["port"])+i,

Rebuild LAppS:

make CONF=Release.AVX2 install

Running the tests for 2,240,000 client WebSockets

Start the LAppS on Server A, then start the LAppS on Server B and redirect the output to a file like this:

/opt/lapps/bin/lapps.avx2 > log

Edit, NB:

If you want to run several LAppS instances, create a separate directory for each instance (run1, run2, etc.) and run each instance from within its own directory. This is required so that the lapps.log file, and of course the resulting standard output, are not overlapped or overwritten by concurrent LAppS instances.

It may take a while until all the connections are established. Let's monitor the progress.

Do not use netstat to watch the established connections: past roughly 150k connections (which are established within seconds) it runs practically indefinitely, so it is pointless. Instead, look at lapps.log on Server A, in the directory you were in when you started LAppS. You may use the following one-liner to see how the connections are established and how they are distributed among the IOWorkers:

date;awk '/ will be added to connection pool of worker/{a[$22]++}END{for(i in a){ print i, a[i]; sum+=a[i]} print sum}' lapps.log | sort -n;date

Here is an idea of how rapidly these connections are established:

375874
Sun Jul 21 20:33:07 +06 2019
650874
Sun Jul 21 20:34:42 +06 2019   2894 connections per second
1001874
Sun Jul 21 20:36:45 +06 2019   2974 connections per second
1182874
Sun Jul 21 20:37:50 +06 2019   2784 connections per second
1843874
Sun Jul 21 20:41:44 +06 2019   2824 connections per second
2207874
Sun Jul 21 20:45:43 +06 2019   3058 connections per second
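The per-second figures above are simply the delta between two socket counts divided by the elapsed seconds between the samples. For the first pair of samples (using GNU date, as on the RHEL test hosts):

```shell
t1=$(date -d '2019-07-21 20:33:07' +%s)   # first sample timestamp -> epoch seconds
t2=$(date -d '2019-07-21 20:34:42' +%s)   # second sample timestamp
echo "$(( (650874 - 375874) / (t2 - t1) )) connections per second"
# prints: 2894 connections per second
```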

On Server B we can check how many benchmark services have finished establishing their connections:

# grep Sockets log | wc -l
224

After you see that the number is 224, let the servers work for a while and monitor the CPU and memory usage with top. You might see something like this (Server B on the left and Server A on the right):

Or like this (Server B on the left and Server A on the right):

It is clearly Server B, where the benchmark client services are running, that is under heavy load. Server A is under heavy load too, but not all the time: sometimes it idles while Server B struggles with the amount of work. Traffic ciphering with TLS is very CPU intensive.

Let's stop the servers (press Ctrl-C several times) and process the log as before, but with respect to the changed number of benchmark services (224):

echo -e ':%s/ms/^V^M/g\n:%g/^$/d\nGdd1G224/Sockets\n224\ndggZZ' | vi $i
awk -v avg=0 '{rps=($1/$5);avg+=rps;}END{print ((avg*1000)/NR)*224}' log
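To see what the awk step computes, here it is run on two made-up log lines, assuming (as the script itself implies) that field 1 holds an echo counter and field 5 the elapsed time in milliseconds; each line yields a per-service rate in echoes/ms, which is averaged over the lines, converted to echoes per second, and scaled to 224 services:

```shell
# Made-up sample lines: 500000 and 600000 echoes, each over 10000 ms.
printf '%s\n' \
  '500000 echoes after - 10000' \
  '600000 echoes after - 10000' |
awk -v avg=0 '{rps=($1/$5);avg+=rps;}END{print ((avg*1000)/NR)*224}'
# prints: 12320000
```

With these invented numbers the estimate would be 12.32 million echoes per second across all 224 services.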

You might want to delete the last several lines as well, to account for the printouts produced when the benchmark services stop. You already have an idea of how to do this, so I'll just post the results for the different test scenarios and proceed to the problems I've faced.

Results for all the other tests (4.9 million and beyond)

test #4 load

test #5 load

test #5 balanced for equal CPU sharing

test #6

test #8

test #12

test #13

Problems

They are marked in red in the table above.

Running more than 224 benchmark instances on Server B proved to be a bad idea, as the server on the client side was incapable of evenly distributing CPU time among the processes/threads. The first 224 benchmark instances, which had established all their connections, took most of the CPU resources, and the rest of the benchmark instances lagged behind. Even using renice -20 does not help a lot (448 benchmark instances, 2 LAppS instances):

448 benchmark instances

Server B (left side) is under very heavy load, while Server A still has free CPU resources.

So I doubled benchmark.max_connections instead of starting more separate LAppS instances.

Still, to run 12.3 million WebSockets, I started a 5th LAppS instance (tests 5 and 6) without stopping the four already running ones, and played the role of the CFQ scheduler by manually suspending and resuming the prioritized processes with kill -STOP/-CONT and/or renice-ing their priorities. You can use the following template script for this:

while [ 1 ];
do
  kill -STOP <4 fast-processes pids>
  sleep 10
  kill -CONT <4 fast-processes pids>
  sleep 5
done

Welcome to 2019, RHEL 7.6! Honestly, this was the first time I had used the renice command since 2009. What is worse, this time I used it almost without success.

I had a problem with the scatter-gather engine of the NICs, so I disabled it for some tests without actually marking this in the table.

I also had partial link disruptions under heavy load, plus a NIC driver bug, so I had to discard the related test results.

End of story

Honestly, the tests went much more smoothly than I had anticipated.

I'm convinced that I did not manage to load LAppS on Server A to its full potential (without SSL), because I did not have enough CPU resources on the client side. With TLS 1.3 enabled, though, LAppS on Server A was utilizing almost all available CPU resources.

I'm still convinced that LAppS is the most scalable and fastest open-source WebSockets server out there, and that the cws WebSockets client module is the only one of its kind, providing a framework for high-load testing.

Please verify the results on your own hardware.

A note of advice: never use nginx or apache as a load balancer or as a proxy-pass for WebSockets, or you'll end up cutting performance by an order of magnitude. They are not built for WebSockets.

Translated from: https://habr.com/en/post/460847/
