Building high-concurrency applications with Akka

If you read my previous story about Scalachain, you probably noticed that it is far from being a distributed system. It lacks all the features needed to work properly with other nodes. On top of that, a blockchain composed of a single node is useless. For this reason I decided it is time to work on the issue.


Since Scalachain is powered by Akka, why not take the chance to play with Akka Cluster? I created a simple project to tinker a bit with Akka Cluster, and in this story I'm going to share my learnings. We are going to create a cluster of three nodes, using Cluster Aware Routers to balance the load among them. Everything will run in Docker containers, and we will use docker-compose for easy deployment.


Ok, let's roll!


Quick introduction to Akka Cluster

Akka Cluster provides great support for the creation of distributed applications. The best use case is when you have a node that you want to replicate N times in a distributed environment. This means that all the N nodes are peers running the same code. Akka Cluster gives you out-of-the-box discovery of members in the same cluster. Using Cluster Aware Routers it is possible to balance messages between actors on different nodes. It is also possible to choose the balancing policy, making load-balancing a piece of cake!


Actually, you can choose between two types of routers:


Group Router — The actors to send the messages to — called routees — are specified using their actor path. The routers share the routees created in the cluster. We will use a Group Router in this example.


Pool Router — The routees are created and deployed by the router, so they are its children in the actor hierarchy. Routees are not shared between routers. This is ideal for a primary-replica scenario, where each router is the primary and its routees are the replicas.

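To make the distinction concrete, here is a minimal configuration sketch (not taken from this article's project) of how a cluster-aware pool router could be declared for the same /node/processorRouter path that we will configure later as a group:

akka.actor.deployment {
  /node/processorRouter {
    # pool variant: the router creates and supervises its own routee instances
    router = round-robin-pool
    cluster {
      enabled = on
      # limit the number of routees the router may deploy on each member node
      max-nr-of-instances-per-node = 3
      allow-local-routees = on
    }
  }
}

With a pool, the Node actor would also have to pass the routee Props to FromConfig.props instead of Props.empty, since the router itself is responsible for creating the routees. We will stick with the group variant in the rest of this article.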

This is just the tip of the iceberg, so I invite you to read the official documentation for more insights.


A Cluster for mathematical computations

Let's picture a use-case scenario. Suppose we need to design a system that executes mathematical computations on request. The system is deployed online, so it needs a REST API to receive the computation requests. An internal processor handles these requests, executing the computation and returning the result.


Right now the processor can only compute Fibonacci numbers. We decide to use a cluster of nodes to distribute the load and improve performance. Akka Cluster will handle the cluster dynamics and the load-balancing between nodes. Ok, sounds good!


Actor hierarchy

First things first: we need to define our actor hierarchy. The system can be divided into three functional parts: the business logic, the cluster management, and the node itself. There is also the server, but it is not an actor; we will work on that later.


Business logic


The application should do mathematical computations. We can define a simple Processor actor to manage all the computational tasks. Every computation that we support can be implemented in a specific actor, which will be a child of the Processor. In this way the application is modular and easier to extend and maintain. Right now the only child of Processor will be the ProcessorFibonacci actor. I suppose you can guess what its task is. This should be enough to start.


Cluster management


To manage the cluster we need a ClusterManager. Sounds simple, right? This actor handles everything related to the cluster, like returning its members when asked. It would be useful to log what happens inside the cluster, so we define a ClusterListener actor. This is a child of the ClusterManager, and it subscribes to cluster events and logs them.


Node


The Node actor is the root of our hierarchy. It is the entry point of our system that communicates with the API. The Processor and the ClusterManager are its children, along with the ProcessorRouter actor. This is the load balancer of the system, distributing the load among Processors. We will configure it as a Cluster Aware Router, so every ProcessorRouter can send messages to Processors on every node.


Actor Implementation

Time to implement our actors! First we implement the actors related to the business logic of the system. Then we move on to the actors for cluster management and, finally, the root actor (Node).


ProcessorFibonacci


This actor executes the computation of the Fibonacci number. It receives a Compute message containing the number to compute and the reference of the actor to reply to. The reference is important, since there can be different requesting actors. Remember that we are working in a distributed environment!


Once the Compute message is received, the fibonacci function computes the result. We wrap it in a ProcessorResponse object to provide information on the node that executed the computation. This will be useful later to see the round-robin policy in action.


The result is then sent to the actor we should reply to. Easy-peasy.


object ProcessorFibonacci {

  sealed trait ProcessorFibonacciMessage
  case class Compute(n: Int, replyTo: ActorRef) extends ProcessorFibonacciMessage

  def props(nodeId: String) = Props(new ProcessorFibonacci(nodeId))

  def fibonacci(x: Int): BigInt = {
    @tailrec def fibHelper(x: Int, prev: BigInt = 0, next: BigInt = 1): BigInt = x match {
      case 0 => prev
      case 1 => next
      case _ => fibHelper(x - 1, next, next + prev)
    }
    fibHelper(x)
  }
}

class ProcessorFibonacci(nodeId: String) extends Actor {

  import ProcessorFibonacci._

  override def receive: Receive = {
    case Compute(value, replyTo) => {
      replyTo ! ProcessorResponse(nodeId, fibonacci(value))
    }
  }
}

Processor


The Processor actor manages the specific sub-processors, like the Fibonacci one. It should instantiate the sub-processors and forward requests to them. Right now we only have one sub-processor, so the Processor receives one kind of message: ComputeFibonacci. This message contains the Fibonacci number to compute. Once received, the number to compute is sent to the ProcessorFibonacci actor, along with the reference of the sender().


object Processor {

  sealed trait ProcessorMessage
  case class ComputeFibonacci(n: Int) extends ProcessorMessage

  def props(nodeId: String) = Props(new Processor(nodeId))
}

class Processor(nodeId: String) extends Actor {

  import Processor._

  val fibonacciProcessor: ActorRef = context.actorOf(ProcessorFibonacci.props(nodeId), "fibonacci")

  override def receive: Receive = {
    case ComputeFibonacci(value) => {
      val replyTo = sender()
      fibonacciProcessor ! Compute(value, replyTo)
    }
  }
}

ClusterListener


We would like to log useful information about what happens in the cluster. This could help us to debug the system if we need to. This is the purpose of the ClusterListener actor. Before starting, it subscribes itself to the event messages of the cluster. The actor reacts to messages like MemberUp, UnreachableMember, or MemberRemoved, logging the corresponding event. When ClusterListener is stopped, it unsubscribes itself from the cluster events.


object ClusterListener {
  def props(nodeId: String, cluster: Cluster) = Props(new ClusterListener(nodeId, cluster))
}

class ClusterListener(nodeId: String, cluster: Cluster) extends Actor with ActorLogging {

  override def preStart(): Unit = {
    cluster.subscribe(self, initialStateMode = InitialStateAsEvents,
      classOf[MemberEvent], classOf[UnreachableMember])
  }

  override def postStop(): Unit = cluster.unsubscribe(self)

  def receive = {
    case MemberUp(member) =>
      log.info("Node {} - Member is Up: {}", nodeId, member.address)
    case UnreachableMember(member) =>
      log.info(s"Node {} - Member detected as unreachable: {}", nodeId, member)
    case MemberRemoved(member, previousStatus) =>
      log.info(s"Node {} - Member is Removed: {} after {}",
        nodeId, member.address, previousStatus)
    case _: MemberEvent => // ignore
  }
}

ClusterManager


The actor responsible for the management of the cluster is the ClusterManager. It creates the ClusterListener actor, and provides the list of cluster members upon request. It could be extended to add more functionalities, but right now this is enough.


object ClusterManager {

  sealed trait ClusterMessage
  case object GetMembers extends ClusterMessage

  def props(nodeId: String) = Props(new ClusterManager(nodeId))
}

class ClusterManager(nodeId: String) extends Actor with ActorLogging {

  val cluster: Cluster = Cluster(context.system)
  val listener: ActorRef = context.actorOf(ClusterListener.props(nodeId, cluster), "clusterListener")

  override def receive: Receive = {
    case GetMembers => {
      sender() ! cluster.state.members
        .filter(_.status == MemberStatus.up)
        .map(_.address.toString)
        .toList
    }
  }
}

ProcessorRouter


The load-balancing among processors is handled by the ProcessorRouter. It is created by the Node actor, but this time all the required information is provided in the configuration of the system.


class Node(nodeId: String) extends Actor {
  //...
  val processorRouter: ActorRef = context.actorOf(FromConfig.props(Props.empty), "processorRouter")
  //...
}

Let’s analyse the relevant part in the application.conf file.


akka {
  actor {
    ...
    deployment {
      /node/processorRouter {
        router = round-robin-group
        routees.paths = ["/user/node/processor"]
        cluster {
          enabled = on
          allow-local-routees = on
        }
      }
    }
  }
  ...
}

The first thing to do is specify the path of the router actor, which is /node/processorRouter. Inside that property we can configure the behaviour of the router:


  • router: this is the policy for the load balancing of messages. I chose the round-robin-group, but there are many others.


  • routees.paths: these are the paths to the actors that will receive the messages handled by the router. We are saying: “When you receive a message, look for the actors corresponding to these paths. Choose one according to the policy and forward the message to it.” Since we are using Cluster Aware Routers, the routees can be on any node of the cluster.


  • cluster.enabled: are we operating in a cluster? The answer is on, of course!


  • cluster.allow-local-routees: here we are allowing the router to choose a routee in its node.


Using this configuration we can create a router to load balance the work among our processors.

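As a side note on the router property: round-robin-group is only one of the group policies Akka ships with. Swapping in a different one, for example random-group, is a one-line change in the same deployment section (this is just an illustrative sketch, not part of the project configuration):

/node/processorRouter {
  # same group router, different load-balancing policy
  router = random-group
  routees.paths = ["/user/node/processor"]
  cluster {
    enabled = on
    allow-local-routees = on
  }
}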

Node


The root of our actor hierarchy is the Node. It creates the child actors — ClusterManager, Processor, and ProcessorRouter — and forwards the messages to the right one. Nothing complex here.


object Node {

  sealed trait NodeMessage
  case class GetFibonacci(n: Int)
  case object GetClusterMembers

  def props(nodeId: String) = Props(new Node(nodeId))
}

class Node(nodeId: String) extends Actor {

  val processor: ActorRef = context.actorOf(Processor.props(nodeId), "processor")
  val processorRouter: ActorRef = context.actorOf(FromConfig.props(Props.empty), "processorRouter")
  val clusterManager: ActorRef = context.actorOf(ClusterManager.props(nodeId), "clusterManager")

  override def receive: Receive = {
    case GetClusterMembers => clusterManager forward GetMembers
    case GetFibonacci(value) => processorRouter forward ComputeFibonacci(value)
  }
}

Server and API

Every node of our cluster runs a server able to receive requests. The Server creates our actor system and is configured through the application.conf file.


object Server extends App with NodeRoutes {

  implicit val system: ActorSystem = ActorSystem("cluster-playground")
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  val config: Config = ConfigFactory.load()
  val address = config.getString("http.ip")
  val port = config.getInt("http.port")
  val nodeId = config.getString("clustering.ip")

  val node: ActorRef = system.actorOf(Node.props(nodeId), "node")

  lazy val routes: Route = healthRoute ~ statusRoutes ~ processRoutes

  Http().bindAndHandle(routes, address, port)

  println(s"Node $nodeId is listening at http://$address:$port")

  Await.result(system.whenTerminated, Duration.Inf)
}

Akka HTTP powers the server itself and the REST API, exposing three simple endpoints. These endpoints are defined in the NodeRoutes trait.


The first one is /health, used to check the health of a node. It responds with a 200 OK if the node is up and running.


lazy val healthRoute: Route = pathPrefix("health") {
  concat(
    pathEnd {
      concat(
        get {
          complete(StatusCodes.OK)
        }
      )
    }
  )
}

The /status/members endpoint responds with the current active members of the cluster.


lazy val statusRoutes: Route = pathPrefix("status") {
  concat(
    pathPrefix("members") {
      concat(
        pathEnd {
          concat(
            get {
              val membersFuture: Future[List[String]] =
                (node ? GetClusterMembers).mapTo[List[String]]
              onSuccess(membersFuture) { members =>
                complete(StatusCodes.OK, members)
              }
            }
          )
        }
      )
    }
  )
}

The last (but not least) is the /process/fibonacci/n endpoint, used to request the n-th Fibonacci number.


lazy val processRoutes: Route = pathPrefix("process") {
  concat(
    pathPrefix("fibonacci") {
      concat(
        path(IntNumber) { n =>
          pathEnd {
            concat(
              get {
                val processFuture: Future[ProcessorResponse] =
                  (node ? GetFibonacci(n)).mapTo[ProcessorResponse]
                onSuccess(processFuture) { response =>
                  complete(StatusCodes.OK, response)
                }
              }
            )
          }
        }
      )
    }
  )
}

It responds with a ProcessorResponse containing the result, along with the id of the node where the computation took place.

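As a quick sanity check, once a node is up and listening on port 8000 (the port we will expose later in the Docker setup), a request like the one below should return the 20th Fibonacci number. The JSON field names in the comment are only a guess based on the ProcessorResponse case class; the exact shape depends on how the project marshals the response:

curl http://localhost:8000/process/fibonacci/20
# hypothetical response shape:
# {"nodeId": "seed", "result": 6765}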

Cluster Configuration

Once we have all our actors, we need to configure the system to run as a cluster! The application.conf file is where the magic takes place. I'm going to split it into pieces to present it better, but you can find the complete file here.


Let’s start defining some useful variables.


clustering {
  ip = "127.0.0.1"
  ip = ${?CLUSTER_IP}
  port = 2552
  port = ${?CLUSTER_PORT}
  seed-ip = "127.0.0.1"
  seed-ip = ${?CLUSTER_SEED_IP}
  seed-port = 2552
  seed-port = ${?CLUSTER_SEED_PORT}
  cluster.name = "cluster-playground"
}

Here we are simply defining the IP and port of the node and of the seed, as well as the cluster name. We set a default value, then override it if a new one is specified (for instance through an environment variable such as CLUSTER_IP). The configuration of the cluster is the following.


akka {
  actor {
    provider = "cluster"
    ...
    /* router configuration */
    ...
  }
  remote {
    log-remote-lifecycle-events = on
    netty.tcp {
      hostname = ${clustering.ip}
      port = ${clustering.port}
    }
  }
  cluster {
    seed-nodes = [
      "akka.tcp://"${clustering.cluster.name}"@"${clustering.seed-ip}":"${clustering.seed-port}
    ]
    auto-down-unreachable-after = 10s
  }
}
...
/* server vars */
...
/* cluster vars */

Akka Cluster is built on top of Akka Remoting, so we need to configure it properly. First of all, we specify that we are going to use Akka Cluster by setting provider = "cluster". Then we set the hostname and port of the underlying Netty transport to our clustering.ip and clustering.port values.


The cluster requires some seed nodes as its entry points. We set them in the seed-nodes array, built by concatenating the configuration values as "akka.tcp://"${clustering.cluster.name}"@"${clustering.seed-ip}":"${clustering.seed-port}, which with the defaults resolves to akka.tcp://cluster-playground@127.0.0.1:2552. Right now we have one seed node, but we may add more later.


The auto-down-unreachable-after property sets a member as down after it is unreachable for a period of time. This should be used only during development, as explained in the official documentation.


Ok, the cluster is configured, we can move to the next step: Dockerization and deployment!


Dockerization and deployment

To create the Docker container of our node we can use sbt-native-packager. Its installation is easy: add addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "1.3.15") to the plugin.sbt file in the project/ folder. This amazing tool has a plugin for the creation of Docker containers. It allows us to configure the properties of our Dockerfile in the build.sbt file.


// other build.sbt properties

enablePlugins(JavaAppPackaging)
enablePlugins(DockerPlugin)
enablePlugins(AshScriptPlugin)

mainClass in Compile := Some("com.elleflorio.cluster.playground.Server")
dockerBaseImage := "java:8-jre-alpine"
version in Docker := "latest"
dockerExposedPorts := Seq(8000)
dockerRepository := Some("elleflorio")

Once we have set up the plugin, we can create the Docker image by running the command sbt docker:publishLocal. Run the command and taste the magic…


We have the Docker image of our node, now we need to deploy it and check that everything works fine. The easiest way is to create a docker-compose file that will spawn a seed and a couple of other nodes.


version: '3.5'

networks:
  cluster-network:

services:
  seed:
    networks:
      - cluster-network
    image: elleflorio/akka-cluster-playground
    ports:
      - '2552:2552'
      - '8000:8000'
    environment:
      SERVER_IP: 0.0.0.0
      CLUSTER_IP: seed
      CLUSTER_SEED_IP: seed
  node1:
    networks:
      - cluster-network
    image: elleflorio/akka-cluster-playground
    ports:
      - '8001:8000'
    environment:
      SERVER_IP: 0.0.0.0
      CLUSTER_IP: node1
      CLUSTER_PORT: 1600
      CLUSTER_SEED_IP: seed
      CLUSTER_SEED_PORT: 2552
  node2:
    networks:
      - cluster-network
    image: elleflorio/akka-cluster-playground
    ports:
      - '8002:8000'
    environment:
      SERVER_IP: 0.0.0.0
      CLUSTER_IP: node2
      CLUSTER_PORT: 1600
      CLUSTER_SEED_IP: seed
      CLUSTER_SEED_PORT: 2552

I won’t spend time going through it, since it is quite simple.


Let's run it!

Time to test our work! Once we run the docker-compose up command, we will have a cluster of three nodes up and running. The seed will respond to requests on port :8000, while node1 and node2 respond on ports :8001 and :8002. Play a bit with the various endpoints. You will see that the requests for a Fibonacci number are computed by a different node each time, following a round-robin policy. That's good: we are proud of our work and can get out for a beer to celebrate!

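For example, you can hit the seed node repeatedly and watch the node id in the response change, then ask it which members it sees (these commands assume the docker-compose setup above, with the default port mappings):

# each request should be served by a different node, in round-robin order
curl http://localhost:8000/process/fibonacci/10
curl http://localhost:8000/process/fibonacci/10
curl http://localhost:8000/process/fibonacci/10

# list the cluster members as seen by the seed node
curl http://localhost:8000/status/members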

Conclusion

We are done here! We learned a lot of things in these ten minutes:


  • What Akka Cluster is and what it can do for us.
  • How to create a distributed application with it.
  • How to configure a Group Router for load-balancing in the cluster.
  • How to Dockerize everything and deploy it using docker-compose.

You can find the complete application in my GitHub repo. Feel free to contribute or play with it as you like!


See you!


Source: https://www.freecodecamp.org/news/how-to-make-a-simple-application-with-akka-cluster-506e20a725cf/
