by Domenico Angilletta

How to boost your performance with serverless architecture

In this article, I am going to describe how I moved a heavy task like image pre-processing from my application server to a completely serverless architecture on AWS in charge of storing, processing and serving images.

The Problem

Image pre-processing is a task required by many web applications. Each time an application allows a user to upload an image, it is very likely that this image needs to be pre-processed before it is served to a front-end application.

In this article I am going to describe a serverless architecture based on AWS, that is extremely scalable and cost-efficient.

But let’s start from the beginning. In one of my last projects, a marketplace web application where users have to upload an image of a product they want to sell, the original image is first cropped to the correct image ratio (4:3). It is then transformed in three different formats used in different places of the front-end application: 800x600px, 400x300px, and 200x150px.

Being a Ruby on Rails developer, my first approach was to use a RubyGem — in particular Paperclip or Dragonfly, which both make use of ImageMagick for image processing.

Although this implementation is quite straightforward (since it is mostly just configuration), there are several drawbacks that could arise:

  1. The images are processed on the application server. This could increase the overall response time because of the greater workload on the CPU.
  2. The application server has limited computing power, which is set upfront, and is not well-suited for burst request handling. If many images need to be processed at the same time, the server capacity could be exhausted for a long period of time. Increasing the computing power, on the other hand, would result in higher costs.
  3. Images are processed in sequence. Again, if many images need to be processed at the same time, speed could be very bad.
  4. If not correctly configured, these gems save processed images on disk, which could quickly make your server run out of space.

In general, based on how much image processing your application does, this solution is not scalable.

The Solution

Taking a closer look at the image pre-processing task, you’ll notice that there is probably no need to run it directly on your application server. This is particularly the case if your image transformations are always the same and do not rely on any information other than the image itself. That was true for me, since I always generated the same set of image sizes together with an image quality/weight optimization.

Once you realize that this task can be easily isolated from the rest of the application logic, thinking about a serverless solution that just takes an original image as input and generates all needed transformations is straightforward.

AWS Lambda turns out to be a perfect fit for this kind of problem. On the one side, it can handle thousands of requests per second, and on the other side, you pay only for the compute time you consume. There is no charge when your code is not running.

AWS S3 provides unlimited storage at a very low price, while AWS SNS provides an easy way of Pub/Sub messaging for microservices, distributed systems, and serverless applications. Finally, AWS Cloudfront is used as the Content Delivery Network for the images stored on S3.

The combination of these four AWS services results in a very powerful image processing solution at a very low cost.

High-Level Architecture

The process of generating different image versions from an original image starts with an upload of the original image on AWS S3. This triggers, through AWS SNS, the execution of an AWS Lambda function in charge of generating the new image versions and uploading them again on AWS S3. Here is the sequence in more detail:

  1. Images are uploaded to a specific folder inside an AWS S3 bucket
  2. Each time a new image is uploaded to this folder, S3 publishes a message containing the S3 key of the created object on an AWS SNS topic
  3. AWS Lambda, which is configured as a consumer on the same SNS topic, reads the new message and uses the S3 object key to fetch the new image
  4. AWS Lambda processes the new image, applying the necessary transformations, and uploads the processed image(s) to S3
  5. The processed images are now served to the final users through the AWS Cloudfront CDN, in order to optimize the download speed

This architecture is very scalable, since each uploaded image will trigger a new Lambda code execution to handle just that request, so that there can be thousands of images being processed in parallel by as many code executions.

No disk space or computation power is used on the application server, because everything is stored on S3 and processed by Lambda.

Finally, configuring a CDN in front of S3 is very easy and allows you to have high download speeds from everywhere in the world.

Step-by-Step Tutorial

The implementation of this solution is relatively easy, since it is mostly configuration, except for the Lambda code that performs the image pre-processing. The rest of this article describes in detail how to set up the AWS architecture, and provides the code executed by AWS Lambda to resize the uploaded image, so that you have a complete working example.

To try it out yourself, you will need an AWS account. If you don’t have one, you can create one for free and take advantage of the AWS Free Tier here.

Step 1: Create a Topic on AWS SNS

First of all, we need to configure a new SNS (Simple Notification Service) topic on which AWS will publish a message each time a new image is uploaded to S3. This message contains the S3 object key used later by the Lambda function to fetch the uploaded image and process it.
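If you prefer the command line, the same topic can be created with the AWS CLI. This is a sketch of the configuration step, assuming your AWS credentials and default region are already configured; the topic name is the one used throughout this article:

```shell
# Create the SNS topic; on success the command prints the new topic ARN,
# which you will need again in Step 3
aws sns create-topic --name image-preprocessing
```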

From your AWS console visit the SNS page, click on “Create topic,” and enter a topic name, for example “image-preprocessing.”

Next, we need to change the topic policy to allow our S3 bucket to publish messages on it.

From the topic page, click on Actions -> Edit Topic Policy, choose Advanced view, add the following JSON block (with your own ARNs for Resource and SourceArn) to the statement array, and update the policy:

{
    "Sid": "ALLOW_S3_BUCKET_AS_PUBLISHER",
    "Effect": "Allow",
    "Principal": {
        "AWS": "*"
    },
    "Action": [
        "SNS:Publish"
    ],
    "Resource": "arn:aws:sns:us-east-1:AWS-OWNER-ID:image-preprocessing",
    "Condition": {
        "StringLike": {
            "aws:SourceArn": "arn:aws:s3:*:*:YOUR-BUCKET-NAME"
        }
    }
}

You can find an example of a complete policy JSON here.

Step 2: Create AWS S3 folder structure

Now we need to prepare the folder structure on S3 that will contain the original and the processed images. In this example, we will generate two resized image versions, 800x600 and 400x300.

From your AWS console, open the S3 page and create a new bucket. I will call mine “image-preprocessing-example.” Then, inside the bucket, we need to create a folder named “originals,” a folder named “800x600,” and another named “400x300.”
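As a sketch of the equivalent AWS CLI commands (bucket names are globally unique, so you will need your own name in place of the example one):

```shell
# Create the bucket (replace with your own, globally unique, name)
aws s3 mb s3://image-preprocessing-example

# S3 has no real folders: creating zero-byte objects whose keys end
# in a slash makes the "folders" show up in the console
aws s3api put-object --bucket image-preprocessing-example --key originals/
aws s3api put-object --bucket image-preprocessing-example --key 800x600/
aws s3api put-object --bucket image-preprocessing-example --key 400x300/
```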

Step 3: Configure AWS S3 Events

Every time a new image is uploaded to the originals folder, we want S3 to publish a message on our “image-preprocessing” SNS topic so that the image can be processed.

To do that, open your S3 bucket from the AWS console, click on Properties -> Events -> + Add notification and fill in the following fields:

Here we are telling S3 to generate an event each time a new object is created (ObjectCreated) inside the originals folder (prefix), and to publish this event on our SNS Topic “image-preprocessing.”
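The same notification can also be configured from the AWS CLI. This is a sketch of the configuration call; the topic ARN is the one from Step 1, with your own account id in place of AWS-OWNER-ID:

```shell
# Wire the bucket to the SNS topic: publish on every object-created
# event whose key starts with "originals/"
aws s3api put-bucket-notification-configuration \
  --bucket image-preprocessing-example \
  --notification-configuration '{
    "TopicConfigurations": [{
      "TopicArn": "arn:aws:sns:us-east-1:AWS-OWNER-ID:image-preprocessing",
      "Events": ["s3:ObjectCreated:*"],
      "Filter": {
        "Key": {
          "FilterRules": [{ "Name": "prefix", "Value": "originals/" }]
        }
      }
    }]
  }'
```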

Step 4: Configure IAM role to allow Lambda to access the S3 folder

We want to create a Lambda function that fetches image objects from S3, processes them, and uploads the processed versions again to S3. To do that, we first need to set up an IAM role that will allow our Lambda function to access the needed S3 folder.

From the AWS Console IAM page:

1. Click on Create Policy
2. Click on JSON and type in (replace YOUR-BUCKET-NAME):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1495470082000",
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::YOUR-BUCKET-NAME/*"
            ]
        }
    ]
}

where the resource is our bucket on S3. Click on review, enter the policy name, for example AllowAccessOnYourBucketName, and create the policy.

3. Click on Roles -> Create role
4. Choose AWS Service -> Lambda (who will use the policy)
5. Select the previously created policy (AllowAccessOnYourBucketName)
6. Finally, click on review, type in a name (LambdaS3YourBucketName), and click create role

Step 5: Create the AWS Lambda function

Now we have to setup our Lambda function to consume messages from the “image-preprocessing” SNS Topic and generate our resized image versions.

Let’s start with creating a new Lambda function.

From your AWS console, visit the Lambda page, click on “Create function,” and type in your function name, for example ImageResize, choose your runtime, in this case Node.js 6.10, and the previously created IAM role.

Next we need to add SNS to the function triggers, so that the Lambda function will be called each time a new message is published to the “image-preprocessing” topic.

To do that, click on “SNS” in the list of triggers, select “image-preprocessing” from the SNS topic list, and click “add.”

Finally we have to upload our code that will handle the S3 ObjectCreated event. That means fetching the uploaded image from the S3 originals folder, processing it, and uploading it again in the resized image folders.

You can download the code here. The only file you need to upload to your Lambda function is version1.1.zip, which contains index.js and the node_modules folder.

In order to give the Lambda function enough time and memory to process the image, we can increase the memory to 256 MB and the timeout to 10 sec. The needed resources depend on the image size and the transformation complexity.
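If you prefer not to click through the console, the same limits can be set with the AWS CLI (the function name is the one created above):

```shell
# Raise memory and timeout for the image-processing function
aws lambda update-function-configuration \
  --function-name ImageResize \
  --memory-size 256 \
  --timeout 10
```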

The code itself is quite simple, and just has the purpose of demonstrating the AWS integration.

First, a handler function is defined (exports.handler). This function is called by the external trigger, in this case the message published on SNS which contains the S3 object key of the uploaded image.

It first parses the event message JSON to extract the S3 bucket name, the S3 object key of the uploaded image, and the filename that is just the final part of the key.
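To make this parsing step concrete, here is a minimal runnable sketch (plain Node.js, no AWS SDK needed) using a hypothetical sample payload in the shape SNS delivers to Lambda — the S3 event arrives as a JSON string inside Records[0].Sns.Message:

```javascript
// Hypothetical sample of the event object the handler receives from SNS
var sampleEvent = {
  Records: [{
    Sns: {
      Message: JSON.stringify({
        Records: [{
          s3: {
            bucket: { name: "image-preprocessing-example" },
            object: { key: "originals/my+photo.jpg" }
          }
        }]
      })
    }
  }]
};

// Same extraction logic as in the handler above
var message   = JSON.parse(sampleEvent.Records[0].Sns.Message).Records[0];
var srcBucket = message.s3.bucket.name;
// S3 event notifications encode spaces in object keys as "+"
var srcKey    = message.s3.object.key.replace(/\+/g, " ");
var filename  = srcKey.split("/")[1];

console.log(srcBucket); // "image-preprocessing-example"
console.log(srcKey);    // "originals/my photo.jpg"
console.log(filename);  // "my photo.jpg"
```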

Once it has the bucket and object key, the uploaded image is fetched using s3.getObject and then passed to the resize function. The SIZES variable holds the image sizes we want to generate, which also correspond to the S3 folder names where the transformed images will be uploaded.

var async = require('async');
var AWS = require('aws-sdk');
var gm = require('gm').subClass({ imageMagick: true });
var s3 = new AWS.S3();

var SIZES = ["800x600", "400x300"];

exports.handler = function(event, context) {
    var message, srcKey, dstKey, srcBucket, dstBucket, filename;
    message = JSON.parse(event.Records[0].Sns.Message).Records[0];

    srcBucket = message.s3.bucket.name;
    dstBucket = srcBucket;
    srcKey    = message.s3.object.key.replace(/\+/g, " ");
    filename  = srcKey.split("/")[1];
    dstKey    = "";
    ...
    ...
    // Download the image from S3
    s3.getObject({
        Bucket: srcBucket,
        Key: srcKey
    }, function(err, response) {
        if (err) {
            var err_message = 'Cannot download image: ' + srcKey;
            return console.error(err_message);
        }
        var contentType = response.ContentType;

        // Pass in our image to ImageMagick
        var original = gm(response.Body);

        // Obtain the size of the image
        original.size(function(err, size) {
            if (err) {
                return console.error(err);
            }

            // For each of the SIZES, call the resize function
            async.each(SIZES, function(width_height, callback) {
                var filename = srcKey.split("/")[1];
                var thumbDstKey = width_height + "/" + filename;
                resize(size, width_height, imageType, original,
                       srcKey, dstBucket, thumbDstKey, contentType,
                       callback);
            },
            function(err) {
                if (err) {
                    var err_message = 'Cannot resize ' + srcKey;
                    console.error(err_message);
                }
                context.done();
            });
        });
    });
};

The resize function applies some transformations to the original image using the “gm” library: in particular, it resizes the image, crops it if needed, and reduces the quality to 80%. It then uploads the modified image to S3 using “s3.putObject”, specifying “ACL: public-read” to make the new image public.

var resize = function(size, width_height, imageType,
                      original, srcKey, dstBucket, dstKey,
                      contentType, done) {

    async.waterfall([
        function transform(next) {
            var width_height_values = width_height.split("x");
            var width  = width_height_values[0];
            var height = width_height_values[1];

            // Transform the image buffer in memory
            original.interlace("Plane")
                .quality(80)
                .resize(width, height, '^')
                .gravity('Center')
                .crop(width, height)
                .toBuffer(imageType, function(err, buffer) {
                    if (err) {
                        next(err);
                    } else {
                        next(null, buffer);
                    }
                });
        },
        function upload(data, next) {
            console.log("Uploading data to " + dstKey);
            s3.putObject({
                Bucket: dstBucket,
                Key: dstKey,
                Body: data,
                ContentType: contentType,
                ACL: 'public-read'
            },
            next);
        }
    ], function(err) {
        if (err) {
            console.error(err);
        }
        done(err);
    });
};
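To see what the resize step computes for each entry of SIZES, here is a small runnable sketch (plain Node.js, no AWS or ImageMagick required) that derives the target dimensions and destination key for a hypothetical uploaded file — the same values that get passed to gm and s3.putObject:

```javascript
var SIZES  = ["800x600", "400x300"];
var srcKey = "originals/product.jpg"; // hypothetical uploaded object key

var filename = srcKey.split("/")[1];

var targets = SIZES.map(function(width_height) {
    var parts = width_height.split("x");
    return {
        width:  parseInt(parts[0], 10),         // passed to .resize()
        height: parseInt(parts[1], 10),
        dstKey: width_height + "/" + filename   // e.g. "800x600/product.jpg"
    };
});

console.log(targets);
```

Each transformed image thus lands in the S3 folder named after its size, next to the “originals” folder created in Step 2.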

Step 6: Test

Now we can test that everything is working as expected by uploading an image to the originals folder. If everything was implemented correctly, then we should find a resized version of the uploaded image in the 800x600 folder and one in the 400x300 folder.

In the video below, you can see three windows: on the left the originals folder, in the middle the 800x600 folder, and on the right the 400x300 folder. After uploading a file to the original folder, the other two windows are refreshed to check if the images were created.

And voilà, here they are ;)

(Optional) Step 7: Add Cloudfront CDN

Now that the images are generated and uploaded to S3, we can add Cloudfront CDN to deliver the images to our end users, so that download speed is improved.

  1. Open the Cloudfront Page
  2. Click on “Create Distribution”
  3. When asked for the delivery method, choose “Web Distribution”
  4. Choose your S3 bucket as “Origin Domain Name” and click on “Create Distribution”

The process of creating the distribution network is not immediate, so you will have to wait until the status of your CDN changes from “In Progress” to “Deployed.”

Once it is deployed, you can use the Cloudfront domain name instead of your S3 bucket URL. For example, if your Cloudfront domain name is “1234-cloudfront-id.cloudfront.net”, then you can access your resized image folders at “https://1234-cloudfront-id.cloudfront.net/400x300/FILENAME” and “https://1234-cloudfront-id.cloudfront.net/800x600/FILENAME”.

Cloudfront has many other options that should be set, but those are out of the scope of this article. For a more detailed guide to setting up your CDN, take a look at Amazon’s getting started guide.

And that’s it! I hope you enjoyed this article. Please leave a comment below, and let me know what you think!

Translated from: https://www.freecodecamp.org/news/serverless-image-preprocessing-using-aws-lambda-42d58e1183f5/
