The Reactor: An Object-Oriented Wrapper for Event-Driven Port Monitoring and Service Demultiplexing

Douglas C. Schmidt

An earlier version of this paper appeared in the February 1993 issue of the C++ Report.

1. Introduction

This is part one of the third article in a series that describes techniques for encapsulating existing operating system (OS) interprocess communication (IPC) services within object-oriented (OO) C++ wrappers.

The first article explains the main principles and motivations for OO wrappers, which simplify the development of correct, concise, portable, and efficient applications.

The second article describes an OO wrapper called IPC SAP that encapsulates the BSD socket and System V TLI system call Application Programming Interfaces (APIs).

IPC SAP enables application programs to access local and remote IPC protocol families such as TCP/IP via a type-secure, object-oriented interface.

This third article presents an OO wrapper for the I/O port monitoring and timer-based event notification facilities provided by the select and poll system calls.

Both select and poll enable applications to specify a time-out interval to wait for the occurrence of different types of input and output events on one or more I/O descriptors.

Select and poll detect when certain I/O or timer events occur and demultiplex these events to the appropriate application(s).
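
For readers unfamiliar with these system calls, the fragment below is a minimal sketch (written for this discussion, not taken from the article's figures) showing how a program might wait up to five seconds for input on two descriptors using poll; the descriptor names and the time-out value are purely illustrative.

    #include <poll.h>
    #include <stdio.h>

    // Minimal sketch: wait up to 5 seconds for input events on two
    // illustrative descriptors (e.g., a passive-mode socket and a
    // connected client socket).
    int wait_for_input (int listen_fd, int client_fd)
    {
      struct pollfd fds[2];
      fds[0].fd = listen_fd;  fds[0].events = POLLIN;
      fds[1].fd = client_fd;  fds[1].events = POLLIN;

      int n = poll (fds, 2, 5000);   // time-out interval in milliseconds
      if (n <= 0)
        return n;                    // 0 == time-out expired, -1 == error

      for (int i = 0; i < 2; i++)
        if (fds[i].revents & POLLIN)
          printf ("descriptor %d is ready for reading\n", fds[i].fd);
      return n;
    }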

As with many other OS APIs, the event demultiplexing interfaces are complicated, error-prone, non-portable, and not easily extensible.

An extensible OO framework called the Reactor was developed to overcome these limitations.

The Reactor provides a set of higher-level programming abstractions that simplify the design and implementation of event-driven distributed applications.

The Reactor also shields developers from many error-prone details in the existing event demultiplexing APIs and improves application portability between different OS variants.

The Reactor is somewhat different from the IPC SAP class wrapper described in [2]. IPC SAP added a relatively “thin” OO veneer to the BSD socket and System V TLI APIs.

On the other hand, the Reactor provides a significantly richer set of abstractions than those offered directly by select or poll.

In particular, the Reactor integrates I/O-based port monitoring together with timer-based event notification to provide a general framework for demultiplexing application communication services.

Port monitoring is used by event-driven network servers that perform I/O on many connections simultaneously.

Since these servers must handle multiple connections, it is not feasible to perform blocking I/O on a single connection indefinitely.

Likewise, the timer-based APIs enable applications to register certain operations that are periodically or aperiodically activated via a centralized timer facility controlled by the Reactor.
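
To make this registration model concrete, the declarations below sketch the general shape of such an interface. The class and method names (Event_Handler, register_handler, schedule_timer, handle_events) are assumptions chosen for illustration; the actual Reactor interface is presented in part two of this article.

    // Illustrative sketch only -- these declarations approximate the kind
    // of interface described above and are not the actual Reactor classes.
    class Event_Handler
    {
    public:
      // Called back when the associated descriptor becomes readable.
      virtual int handle_input (int fd) = 0;

      // Called back when a registered time-out expires.
      virtual int handle_timeout (long timer_id) = 0;

      virtual ~Event_Handler (void) {}
    };

    class Reactor
    {
    public:
      // Associate a handler with I/O events on a descriptor.
      int register_handler (int fd, Event_Handler *eh);

      // Schedule a one-shot or periodic time-out for a handler.
      long schedule_timer (Event_Handler *eh, long delay_msecs,
                           long interval_msecs = 0);

      // Block on select or poll and dispatch the handlers whose
      // descriptors or timers become ready.
      int handle_events (void);
    };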

This topic is divided into two parts.

Part one (presented in this article) describes a distributed logging facility that motivates the need for efficient event demultiplexing, examines several alternative solution approaches, evaluates the advantages and disadvantages of these alternatives, and compares them with the Reactor.

Part two (appearing in a subsequent issue of the C++ Report) focuses on the OO design aspects of the Reactor.

In addition, it discusses the design and implementation of the distributed logging facility.

This example illustrates precisely how the Reactor simplifies the development of event-driven distributed applications.

2. Example: A Distributed Logging Facility

To motivate the utility of event demultiplexing mechanisms, this section describes the requirements and behavior of a distributed logging facility that handles event-driven I/O from multiple sources “simultaneously.”

As shown in Figure 1, the distributed logging facility offers several services to applications that operate concurrently throughout a network environment.

First, it provides a centralized location for recording certain status information used to simplify the management and tracking of distributed application behavior.

To facilitate this, the client daemon time-stamps outgoing logging records to allow chronological tracing and reconstruction of the execution order of multiple concurrent processes executing on separate host machines.

Second, the facility also enables the prioritized delivery of logging records. These records are received and forwarded by the client daemon in the order of their importance, rather than in the order they were originally generated.

Centralizing the logging activities of many distributed applications within a single server is also useful since it serializes access to shared output devices such as consoles, printers, files, or network management databases.

In contrast, without such a centralized facility, it becomes difficult to monitor and debug applications consisting of multiple concurrent processes.

For example, the output from ordinary C stdio library subroutines (such as fputs and printf) that are called simultaneously by multiple processes or threads is often scrambled together when it is displayed in a single window or console.

The distributed logging facility is designed using a client/server architecture.

The server logging daemon collects, formats, and outputs logging records forwarded from client logging daemons running on multiple hosts throughout a local and/or wide-area network.

Output from the logging server may be redirected to various devices such as printers, persistent storage repositories, or logging management consoles.

As shown in Figure 1, the InterProcess Communication (IPC) structure of the logging facility involves several levels of demultiplexing.

For instance, each client host in the network contains multiple application processes (such as P1, P2, and P3) that may participate with the distributed logging facility.

Each participating process uses the application logging API depicted in the rectangular boxes in Figure 1 to format debugging traces or error diagnostics into logging records.

A logging record is an object containing several header fields and a payload with a maximum size of approximately 1K bytes.

When invoked by an application process, the Log_Msg::log API prepends the current process identifier and program name to the record.

It then uses the “record-oriented” named pipe IPC mechanism to demultiplex these composite logging records onto a single client logging daemon running on each host machine.

The client daemon prepends a time-stamp to the record and then employs a remote IPC service (such as TCP or RPC) to demultiplex the record into a server logging daemon running on a designated host in the network.
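
This part of the article does not give the exact record format (the full design appears in [3]); the struct below is a hypothetical layout, consistent with the description above, of the header fields a daemon might expect. The field names, sizes, and the 1K payload bound are assumptions made for illustration.

    #include <sys/types.h>
    #include <time.h>

    // Hypothetical logging-record layout -- field names and sizes are
    // illustrative only; the actual format is described in [3].
    const int LOG_MAX_PAYLOAD = 1024;    // payload bounded at roughly 1K bytes

    struct Log_Record
    {
      int    priority;                   // used for prioritized delivery
      pid_t  pid;                        // prepended by Log_Msg::log
      char   prog_name[32];              // prepended by Log_Msg::log
      time_t timestamp;                  // prepended by the client daemon
      int    payload_len;                // number of valid bytes in payload
      char   payload[LOG_MAX_PAYLOAD];   // the application's logging text
    };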

The server operates in an event-driven manner, processing logging records as they arrive from multiple client daemons.

Depending on the logging behavior of the participating applications, the logging records may be sent by arbitrary clients and arrive at the server daemon at arbitrary time intervals.

A separate TCP stream connection is established between each client logging daemon and the designated server logging daemon.

Each client connection is represented by a unique I/O descriptor in the server.

In addition, the server also maintains a dedicated I/O descriptor to accept new connection requests from client daemons that want to participate with the distributed logging facility.

During connection establishment the server caches the client’s host name (illustrated by the ovals in the logging server daemon), and uses this information to identify the client in the formatted records it prints to the output device(s).
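
A skeletal, single-threaded version of this dispatching structure is sketched below using select directly; the IPC SAP wrappers, host-name caching, and error handling are omitted, and handle_logging_record (discussed in Section 3) is only declared here. The structure, rather than the exact code, is the point.

    #include <sys/select.h>
    #include <sys/socket.h>
    #include <unistd.h>

    // Declared here, sketched in Section 3: returns <= 0 when the client
    // connection is closed or an error occurs.
    int handle_logging_record (int fd);

    // Simplified sketch of the server's demultiplexing structure: one
    // dedicated descriptor accepts new client logging daemons, the others
    // carry logging records from established connections.
    void server_loop (int acceptor_fd)
    {
      fd_set master;
      FD_ZERO (&master);
      FD_SET (acceptor_fd, &master);
      int max_fd = acceptor_fd;

      for (;;) {
        fd_set ready = master;            // select() overwrites its argument
        if (select (max_fd + 1, &ready, 0, 0, 0) <= 0)
          continue;

        if (FD_ISSET (acceptor_fd, &ready)) {   // new client logging daemon
          int client_fd = accept (acceptor_fd, 0, 0);
          if (client_fd >= 0) {
            FD_SET (client_fd, &master);        // the host name would be cached here
            if (client_fd > max_fd)
              max_fd = client_fd;
          }
        }

        for (int fd = 0; fd <= max_fd; fd++)    // incoming logging records
          if (fd != acceptor_fd && FD_ISSET (fd, &ready))
            if (handle_logging_record (fd) <= 0) {
              close (fd);                       // client shut down the connection
              FD_CLR (fd, &master);
            }
      }
    }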

The complete design and implementation of the distributed logging facility is described in [3].

The remainder of the current article presents the necessary background material by exploring several alternative mechanisms for handling I/O from multiple sources.

3. Operating System Event Demultiplexing

Modern operating systems such as UNIX, Windows NT, and OS/2 offer several techniques that allow applications to perform I/O on multiple descriptors “simultaneously.”

This section describes four alternatives and compares and contrasts their advantages and disadvantages.

To focus the discussion, each alternative is characterized in terms of the distributed logging facility described in Section 2 above.

In particular, each section presents a skeletal server logging daemon implemented with the alternative being discussed.

To save space and increase clarity, the examples utilize the OO IPC SAP socket-wrapper library described in a previous C++ Report article [2].

The handle_logging_record function shown in Figure 2 is also invoked by all the example server daemons.

This function is responsible for receiving and processing the logging records and writing them to the appropriate output device.

Any synchronization mechanisms required to serialize access to the output device(s) are also performed in this function.

In general, the concurrent multi-process and multi-thread approaches are somewhat more complicated to develop since output must be serialized to avoid scrambling the logging records generated from all the separate processes.

To accomplish this, the concurrent server daemons cooperate by using some form of synchronization mechanism (such as semaphores, locks, or other IPC mechanisms like FIFOs or message queues) in the handle_logging_record subroutine.
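
Figure 2 is not reproduced in this excerpt. The sketch below, written under the assumption of the hypothetical fixed-size Log_Record from Section 2, illustrates the role the text describes for handle_logging_record: receive a record, format it, and write it to the output device, serializing access where concurrent daemons share that device (an advisory flock is used here purely as one example of such a lock).

    #include <sys/file.h>    // flock()
    #include <unistd.h>      // read(), write(), STDOUT_FILENO

    // Sketch of handle_logging_record (the real version appears in Figure 2
    // and in [3]).  For brevity a whole record is assumed to arrive in one
    // read(); a production version would frame records explicitly.
    int handle_logging_record (int fd)
    {
      char record[1024 + 128];   // assumed bound: header fields + ~1K payload
      int n = read (fd, record, sizeof record);
      if (n <= 0)
        return n;                // 0 == connection closed, -1 == error

      // ... format the record: prepend the cached host name, decode the
      // time-stamp and priority fields, etc. ...

      flock (STDOUT_FILENO, LOCK_EX);     // serialize the shared output device
      write (STDOUT_FILENO, record, n);   // (needed only by the concurrent variants)
      flock (STDOUT_FILENO, LOCK_UN);
      return n;
    }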

4. Summary

This article presents the background material necessary to understand the behavior, advantages, and disadvantages of existing UNIX mechanisms for handling multiple sources of I/O in a network application.

An OO wrapper called the Reactor has been developed to encapsulate and overcome the limitations of the select and poll event demultiplexing system calls.

The object-oriented design and implementation of the Reactor is explored in greater detail in part two of this article (appearing in the next C++ Report).

In addition to describing the class relationships and inheritance hierarchies, the follow-up article presents an extended example involving the distributed logging facility.

This example illustrates how the Reactor simplifies the development of event-driven network servers that manage multiple client connections simultaneously.

Reposted from: https://www.cnblogs.com/pugang/p/4621373.html
