Let’s simplify algorithm complexities!

by Shruti Tanwar

It’s been a while since I started thinking about going back to the basics and brushing up on core computer science concepts. And I figured, before jumping into the pool of heavyweight topics like data structures, operating systems, OOP, databases, and system design (seriously, the list is endless), I should probably pick up the topic we all kinda don’t wanna touch: algorithm complexity analysis.

Yep! The concept which is overlooked most of the time, because the majority of us developers are thinking, “Hmm, I probably won’t need to know that while I actually code!”

Well, I’m not sure if you’ve ever felt the need to understand how algorithm analysis actually works. But if you did, here’s my try at explaining it in the most lucid manner possible. I hope it helps someone like me.

What is algorithm analysis anyway, and why do we need it?

Before diving into algorithm complexity analysis, let’s first get a brief idea of what algorithm analysis is. Algorithm analysis deals with comparing algorithms based upon the number of computing resources that each algorithm uses.

What we want to achieve by this practice is being able to make an informed decision about which algorithm is a winner in terms of making efficient use of resources (time or memory, depending upon the use case). Does this make sense?

Let’s take an example. Suppose we have a function product() which multiplies all the elements of an array, except the element at the current index, and returns the new array. If I am passing [1,2,3,4,5] as an input, I should get [120, 60, 40, 30, 24] as the result.

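Here is a minimal sketch of what such a product() function might look like using two nested for loops (illustrative code, not necessarily the author's exact listing):

function product(arr){
  var result = [];
  for (var i = 0; i < arr.length; i++){
    var prod = 1;
    for (var j = 0; j < arr.length; j++){
      if (i == j){
        // same index: multiply by 1 so the product stays unmodified
        prod *= 1;
      } else {
        prod *= arr[j];
      }
    }
    result.push(prod);
  }
  return result;
}

// product([1, 2, 3, 4, 5]) => [120, 60, 40, 30, 24]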

The above function makes use of two nested for loops to calculate the desired result. In the first pass, it takes the element at the current position. In the second pass, it multiplies that element with each element in the array — except when the element of the first loop matches the current element of the second loop. In that case, it simply multiplies it by 1 to keep the product unmodified.

Are you able to follow? Great!

It’s a simple approach which works well, but can we make it slightly better? Can we modify it in such a way that we don’t have to rely on nested loops? Maybe by storing the result at each pass and making use of that?

Let’s consider the following method. In this modified version, the principle applied is: for each element, calculate the product of the values to its right, calculate the product of the values to its left, and simply multiply those two values. Pretty sweet, isn’t it?

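A sketch of that idea (again illustrative, not the author's exact listing): one loop accumulates the running product of everything to the left of each index, a second loop sweeps from the right and multiplies in the product of everything to the right.

function product(arr){
  var result = new Array(arr.length);

  // First pass: result[i] holds the product of all elements to the left of i
  var left = 1;
  for (var i = 0; i < arr.length; i++){
    result[i] = left;
    left *= arr[i];
  }

  // Second pass: multiply in the product of all elements to the right of i
  var right = 1;
  for (var j = arr.length - 1; j >= 0; j--){
    result[j] *= right;
    right *= arr[j];
  }

  return result;
}

// product([1, 2, 3, 4, 5]) => [120, 60, 40, 30, 24]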

Here, rather than making use of nested loops to calculate values at each run, we use two non-nested loops, which reduces the overall complexity by a factor of O(n) (we shall come to that later).

We can safely infer that the latter algorithm performs better than the former. So far, so good? Perfect!

At this point, we can also take a quick look at the different types of algorithm analysis which exist out there. We do not need to go into minute detail, but just need to have a basic understanding of the technical jargon.

Depending upon when an algorithm is analyzed, that is, before implementation or after implementation, algorithm analysis can be divided into two stages:

  • A Priori Analysis − As the name suggests, in a priori (prior) analysis, we analyze an algorithm (for space and time) before running it on a specific system. So fundamentally, this is a theoretical analysis of an algorithm. The efficiency of an algorithm is measured under the assumption that all other factors, for example processor speed, are constant and have no effect on the implementation.

  • A Posteriori Analysis − A posteriori analysis of an algorithm is performed only after running it on a physical system. The selected algorithm is implemented using a programming language and executed on a target computer. It directly depends on the system configuration and changes from system to system.

In the industry, we rarely perform a posteriori analysis, as software is generally made for anonymous users who might run it on different systems. Since time and space complexity can vary from system to system, a priori analysis is the most practical method for finding algorithm complexities. This is because we only look at the asymptotic behavior (we will come to that later) of the algorithm, which gives the complexity based on the input size rather than system configuration.

Now that we have a basic understanding of what algorithm analysis is, we can move forward to our main topic: algorithm complexity. We will be focusing on a priori analysis, keeping in mind the scope of this post, so let’s get started.

Deep dive into complexity with asymptotic analysis

Algorithm complexity analysis is a tool that allows us to explain how an algorithm behaves as the input grows larger.

So, if you want to run an algorithm with a data set of size n, for example, we can define complexity as a numerical function f(n) — time versus the input size n.

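For instance, the nested-loop version of product() above performs roughly n × n multiplications for an input of size n, while the two-loop version performs work proportional to n.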

Now you must be wondering, isn’t it possible for an algorithm to take different amounts of time on the same inputs, depending on factors like processor speed, instruction set, disk speed, and brand of compiler? If yes, then pat yourself on the back, because you are absolutely right!

This is where Asymptotic Analysis comes into the picture. Here, the concept is to evaluate the performance of an algorithm in terms of input size (without measuring the actual time it takes to run). So basically, we calculate how the time (or space) taken by an algorithm increases as we make the input size infinitely large.

Complexity analysis is performed on two parameters:

  1. Time: Time complexity gives an indication as to how long an algorithm takes to complete with respect to the input size. The resource which we are concerned about in this case is CPU (and wall-clock time).

  2. Space: Space complexity is similar, but is an indication as to how much memory is “required” to execute the algorithm with respect to the input size. Here, we are dealing with system RAM as a resource.

Are you still with me? Good! Now, there are different notations which we use for analyzing complexity through asymptotic analysis. We will go through all of them one by one and understand the fundamentals behind each.

The Big Oh (Big O)

The very first and most popular notation used for complexity analysis is BigO notation. The reason for this is that it gives the worst case analysis of an algorithm. The nerd universe is mostly concerned about how badly an algorithm can behave, and how it can be made to perform better. BigO provides us exactly that.

Let’s get into the mathematical side of it to understand things at their core.

Let’s consider an algorithm which can be described by a function f(n). So, to define the BigO of f(n), we need to find a function, let’s say, g(n), which bounds it from above. Meaning, after a certain value, n0, the value of C g(n) (for some constant C) would always be at least f(n).

We can write it as: f(n) ≤ C g(n), where n ≥ n0; C > 0; n0 ≥ 1.

If the above conditions are fulfilled, we can say that g(n) is the BigO of f(n), or f(n) = O(g(n)).

Can we apply the same to analyze an algorithm? This basically means that in the worst case scenario of running an algorithm, the value should not pass beyond a certain point, which is g(n) in this case. Hence, g(n) is the BigO of f(n).

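For example, take f(n) = 3n + 4. Choosing g(n) = n, C = 4 and n0 = 4 gives 3n + 4 ≤ 4n for every n ≥ 4, so f(n) = O(n).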

Let’s go through some commonly used BigO notations and their complexity and understand them a little better.

  • O(1): Describes an algorithm that will always execute in the same time (or space) regardless of the size of the input data set.

function firstItem(arr){
  return arr[0];
}

The above function firstItem() will always take the same time to execute, as it returns the first item from an array, irrespective of its size. The running time of this function is independent of the input size, and so it has a constant complexity of O(1).

Relating it to the above explanation, even in the worst case scenario of this algorithm (assuming input to be extremely large), the running time would remain constant and not go beyond a certain value. So, its BigO complexity is constant, that is O(1).

  • O(N): Describes an algorithm whose performance will grow linearly and in direct proportion to the size of the input data set. Take a look at the example below. We have a function called matchValue() which returns true whenever a matching case is found in the array. Here, since we have to iterate over the whole of the array, the running time is directly proportional to the size of the array.

function matchValue(arr, k){
  for(var i = 0; i < arr.length; i++){
    if(arr[i] == k){
      return true;
    }
  }
  // only report "not found" after the whole array has been scanned
  return false;
}

This also demonstrates how Big O favors the worst-case performance scenario. A matching case could be found during any iteration of the for loop and the function would return early. But Big O notation will always assume the upper limit (worst-case) where the algorithm will perform the maximum number of iterations.

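To make that concrete, here are two illustrative calls to matchValue() (the values are made up for the example):

matchValue([7, 3, 9, 4], 7); // best case: match at index 0, one comparison
matchValue([7, 3, 9, 4], 5); // worst case: every element is checked before false is returned

Big O describes the second situation, where all n elements have to be inspected.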

  • O(N²): This represents an algorithm whose performance is directly proportional to the square of the size of the input data set. This is common with algorithms that involve nested iterations over the data set. Deeper nested iterations will result in O(N³), O(N⁴), etc.

function containsDuplicates(arr){
  for (var outer = 0; outer < arr.length; outer++){
    for (var inner = 0; inner < arr.length; inner++){
      if (outer == inner)
        continue;
      if (arr[outer] == arr[inner])
        return true;
    }
  }
  return false;
}

  • O(2^N): Denotes an algorithm whose growth doubles with each addition to the input data set. The growth curve of an O(2^N) function is exponential — starting off very shallow, then rising meteorically. An example of an O(2^N) function is the recursive calculation of Fibonacci numbers:

function recursiveFibonacci(number){
  if (number <= 1) return number;
  return recursiveFibonacci(number - 2) + recursiveFibonacci(number - 1);
}

Are you getting the hang of this? Perfect. If not, feel free to fire up your queries in the comments below. :)

Moving on, now that we have a better understanding of the BigO notation, let us get to the next type of asymptotic analysis, which is the Big Omega (Ω).

The Big Omega (Ω)

The Big Omega (Ω) provides us with the best case scenario of running an algorithm. Meaning, it would give us the minimum amount of resources (time or space) an algorithm would take to run.

Let’s dive into the mathematics of it to analyze it graphically.

We have an algorithm which can be described by a function f(n). So, to define the BigΩ of f(n), we need to find a function, let’s say, g(n), which is the tightest lower bound of f(n). Meaning, after a certain value, n0, the value of f(n) would always be at least C g(n) (for some constant C).

We can write it as: f(n) ≥ C g(n), where n ≥ n0; C > 0; n0 ≥ 1.

If the above conditions are fulfilled, we can say that g(n) is the BigΩ of f(n), or f(n) = Ω(g(n)).

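Continuing the earlier example, for f(n) = 3n + 4 we can pick g(n) = n, C = 3 and n0 = 1, since 3n + 4 ≥ 3n for every n ≥ 1. Hence f(n) = Ω(n).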

Can we infer that Ω(…) is complementary to O(…)? Moving on to the last section of this post…

The Big Theta (θ)

The Big Theta (θ) is a sort of combination of both BigO and BigΩ. It gives us a tight bound on the running time of an algorithm, sandwiching it from below and above by the same function (up to constant factors). Let’s analyse it mathematically.

Consider an algorithm which can be described by a function f(n). The Bigθ of f(n) would be a function, let’s say, g(n), which bounds it the tightest from both below and above, such that C₁ g(n) ≤ f(n) ≤ C₂ g(n), where C₁, C₂ > 0, n ≥ n0, n0 ≥ 1.

Meaning, after a certain value, n0, the value of C₁ g(n) would always be at most f(n), and the value of C₂ g(n) would always be at least f(n).

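Putting the two together for f(n) = 3n + 4: with C₁ = 3, C₂ = 4 and n0 = 4 we have 3n ≤ 3n + 4 ≤ 4n for every n ≥ 4, so f(n) = θ(n).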

Now that we have a better understanding of the different types of asymptotic complexities, let’s have an example to get a clearer idea of how all this works practically.

Consider an array of size, say, n, and suppose we want to do a linear search to find an element x in it. Suppose the array looks something like this in memory.

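For concreteness, assume the array is laid out like this, with 9 at the first index and 14 at the last (the values are made up for the example), and that we search it with a plain linear scan:

var arr = [9, 2, 7, 5, 3, 8, 14];

function linearSearch(arr, x){
  for (var i = 0; i < arr.length; i++){
    if (arr[i] == x){
      return i; // found: we can stop immediately
    }
  }
  return -1; // not found: every element was inspected
}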

Going by the concept of linear search, if x = 9, then that would be the best case scenario (as we don’t have to iterate over the whole array; the very first comparison is a match). And from what we have just learned, the complexity for this can be written as Ω(1). Makes sense?

Similarly, if x were equal to 14, that would be the worst case scenario, and the complexity would have been O(n).

What would be the average case complexity for this? On average, a successful search inspects about half of the n elements before finding x, so the cost is θ(n/2) = θ(n) (as we ignore constant factors while calculating asymptotic complexities).

So, there you go folks. A fundamental insight into algorithmic complexities. Did it go well with you? Leave your advice, questions, suggestions in the comments below. Thanks for reading!❤️

References:

  • A nice write-up by Dionysis “dionyziz” Zindros: https://discrete.gr/complexity/

  • A good series on algorithm & data structures: http://interactivepython.org/runestone/static/pythonds/AlgorithmAnalysis/WhatIsAlgorithmAnalysis.html

Original article: https://www.freecodecamp.org/news/lets-simplify-algorithm-complexities-25e75f37d03f/
