
Lecture 1 Process

What is an Operating System (OS)?

An operating system is the core software of computers, mobile phones, and other computing devices: it operates the hardware and provides services to application software.

Layered view (bottom up): 0) hardware; 1) CPU scheduling; 2) memory management; 3) process management; 4) buffering for input and output.

The OS carries out the most commonly required operations:

1) Process Management

2) Memory Management

3) File System Management

4) I/O System Management

5) Protection and Security

Process Concept.

Process – a program in execution; process execution must progress in a sequential fashion.

An operating system executes a variety of programs: a batch system runs jobs, while a time-shared system runs user programs or tasks.

Process and Program

A process is considered as an active entity.

A program is considered to be a passive entity. (stored in disk or executable file).

Program becomes process when executable file is loaded into memory.

In other words, a program by itself is not a process: a program is a passive entity, such as a file on disk containing a list of instructions (often called an executable file), whereas a process is an active entity; the program becomes a process when the executable file is loaded into memory.

Process in Memory.

The memory layout of a process (text, data, heap, and stack sections).

Process State

As a process executes, it changes state.

The state of a process is defined in part by the current activity of that process.

1) New: the process is being created.

2) Running: instructions are being executed.

3) Waiting: the process is waiting for some event to occur.

4) Ready: the process is waiting to be assigned to a processor.

5) Terminated: the process has finished execution.

Diagram of Process State:

Process Control Block

The Process Control Block (PCB) is a data structure used for storing the information about a process.

Each process is represented by its own PCB.

*The PCB contains information related to a specific process: 1) process state 2) program counter 3) CPU registers 4) CPU-scheduling information 5) memory-management information 6) accounting information 7) I/O status information

It is also called the context of the process; the PCB of each process resides in main memory.

The PCBs of all processes are kept in a linked list.

The PCB is important in a multiprogramming environment, as it captures the per-process information needed when many processes run concurrently.
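Concretely, a PCB can be pictured as a C struct. A minimal illustrative sketch follows; the field names and sizes are hypothetical, not taken from any real kernel:

```c
/* Illustrative sketch of a PCB; fields mirror the list above. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;             /* process identifier            */
    enum proc_state state;           /* current process state         */
    unsigned long   program_counter; /* address of next instruction   */
    unsigned long   registers[16];   /* saved CPU register contents   */
    int             priority;        /* CPU-scheduling information    */
    void           *page_table;      /* memory-management information */
    long            cpu_time_used;   /* accounting information        */
    int             open_files[16];  /* I/O status information        */
    struct pcb     *next;            /* link to the next PCB in a queue */
};
```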

Process Scheduling

Process Scheduler

It selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them.

*The process scheduler selects one available process for execution on the CPU. A single-processor system never has more than one running process; if there are more processes, the rest must wait until the CPU is free and can be rescheduled.

Maintains scheduling queues of process:

-Job queue: set of all processes in the system

-Ready queue: set of all processes residing in main memory that are ready and waiting to execute

-Device queue: set of processes waiting for an I/O device

*When a process enters the system, it is added to the job queue, which contains all processes in the system. Processes that reside in main memory, ready and waiting to execute, are kept in the ready queue, which is usually implemented as a linked list: its header has two pointers to the first and last PCBs in the list, and each PCB contains a pointer to the next PCB in the ready queue.

Initially, a new process is put in the ready queue, where it waits until it is selected for execution, or dispatched. Once the process is allocated the CPU and is executing, one of the following events may occur:

1) The process may issue an I/O request and be placed in an I/O queue.

2) The process may create a new child process and wait for its termination.

3) The process may be forcibly removed from the CPU as a result of an interrupt and be put back in the ready queue.

In the first two cases, the process eventually switches from the waiting state to the ready state and is put back in the ready queue. A process repeats this cycle until it terminates, at which point it is removed from all queues and its PCB and resources are deallocated.

Schedulers

Processes are chosen for execution by the appropriate scheduler.

*In a batch system, more processes are typically submitted than can be executed immediately; these processes are spooled to a buffer pool on a mass-storage device for later execution.

The Long-Term Scheduler, also called the Job Scheduler, controls the degree of multiprogramming, i.e., the total number of processes present in the ready state; it selects processes from the buffer pool and loads them into memory for execution.

The Short-Term Scheduler, also known as the CPU scheduler, selects one process from among those ready to execute and allocates the CPU to it.

The Medium-Term Scheduler is responsible for swapping a process from main memory to secondary memory and vice versa (a mid-term effect on the performance of the system).

*The key idea is that it can be advantageous to remove a process from memory (and from active contention for the CPU), thereby reducing the degree of multiprogramming; later the process can be swapped back into memory and its execution continued where it left off (swapping).

The medium-term scheduler can be added if the degree of multiprogramming needs to decrease.

Swapping: remove a process from memory, store it on disk, and later bring it back into memory to continue execution (swap out and swap in).

Context Switch

When CPU switches to another process, the system must save the state of the old process and load the saved state for the new process via a context switch.

The context of a process is represented in its PCB; *it includes the values of the CPU registers, the process state, and memory-management information.

Process Creation

A parent process creates children processes, which in turn may create other processes, forming a tree of processes.

*Most operating systems identify processes by a unique process identifier (pid), which is typically an integer value.

In general, when a process creates a child process, the child needs certain resources (CPU time, memory, I/O devices, etc.) to accomplish its task.

Resource sharing options: 1). Parent and children share all resources 2) Children share subset of parent’s resources 3) Parent and child share no resources

Execution options: 1) Parent and children execute concurrently 2) Parent waits until children terminate

*There are also two address-space possibilities for the new process: 1) the child is a duplicate of the parent, or 2) the child loads a new program.

(Figure: a typical tree of processes on a Linux system.)
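On UNIX-like systems, these options map onto the fork(), exec(), and wait() system calls. A minimal sketch, with error handling abbreviated:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();              /* duplicate the calling process */
    if (pid < 0) {                   /* fork failed                   */
        perror("fork");
        exit(1);
    } else if (pid == 0) {           /* child: load a new program     */
        execlp("ls", "ls", (char *)NULL);
        perror("execlp");            /* reached only if exec fails    */
        exit(1);
    } else {                         /* parent: wait for the child    */
        int status;
        waitpid(pid, &status, 0);
        printf("child %d finished\n", (int)pid);
    }
    return 0;
}
```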

Process Termination

A process executes its last statement and then asks the operating system to delete it using the exit() system call.

At that point, status data (often an integer) is returned from the child to the parent, and the process's resources are deallocated by the operating system.

A parent may terminate the execution of its children processes (TerminateProcess() in Windows).

*Usually, only the parent of a process can invoke such a system call on it; to terminate its children, the parent must know their identifiers, which is why the identifier of a newly created process is passed to its parent.

*Reasons a parent may terminate a child: 1) the child has exceeded its allocated resources; 2) the task assigned to the child is no longer required; 3) the parent is exiting, and the operating system does not allow a child to continue without its parent.

*Some systems do not allow a child to exist after its parent has terminated; in such systems, if a process terminates (normally or abnormally), all its children must also be terminated. This is called cascading termination and is normally initiated by the operating system. (P83)

Inter-Process Communication (IPC)

Independent Processes – cannot affect and are not affected by other processes.

Cooperating Processes – can affect or be affected by other processes. There are several reasons why cooperating processes are allowed.

Reasons for providing an environment that allows process cooperation:

1) Information Sharing 2) Computation Speedup 3) Modularity 4) Convenience

There are two fundamental models of IPC: 1) Shared Memory 2) Message Passing

A region of memory is shared by cooperating processes.

Shared-memory IPC requires the communicating processes to establish a region of shared memory.

Processes can exchange information by reading and writing all the data to the shared region.

*To allow producer and consumer processes to run concurrently, a buffer must be available; the producer and consumer must be synchronized so that the consumer does not try to consume an item that has not yet been produced.

Two types of buffers:

1) Unbounded-Buffer : places no practical limit on the size of the buffer.

2) Bounded-Buffer: Assumes that there is a fixed buffer size.
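As a sketch of the shared-memory model, the classic bounded buffer can be expressed as shared data plus in/out indices. This is a minimal single-producer/single-consumer version: the busy-wait loops stand in for real synchronization, and this layout holds at most BUFFER_SIZE - 1 items:

```c
#define BUFFER_SIZE 8            /* fixed size: a bounded buffer  */

struct item { int value; };      /* illustrative payload type     */

struct item buffer[BUFFER_SIZE]; /* resides in the shared region  */
int in  = 0;                     /* next free slot (producer)     */
int out = 0;                     /* next full slot (consumer)     */

/* Producer: busy-waits while the buffer is full. */
void produce(struct item it) {
    while ((in + 1) % BUFFER_SIZE == out)
        ;                        /* buffer full: do nothing       */
    buffer[in] = it;
    in = (in + 1) % BUFFER_SIZE;
}

/* Consumer: busy-waits while the buffer is empty. */
struct item consume(void) {
    while (in == out)
        ;                        /* buffer empty: do nothing      */
    struct item it = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return it;
}
```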

Message-Passing Systems

Communication takes place by way of messages exchanged among the cooperating processes.

A message-passing facility provides at least 2 operations: 1) send(message) 2) receive(message)

The message size is either fixed or variable.

If processes P and Q want to communicate, they must send messages to and receive messages from each other; a communication link must exist between them.

There are several methods for logically implementing a link and the send()/receive() operations:

1) Direct or indirect communication

2) Synchronous or asynchronous communication

3) Automatic or explicit buffering

Messages exchanged by communicating processes reside in a temporary queue.

Buffering:

1) Zero capacity: the queue has a maximum length of zero, so the link cannot have any messages waiting in it.

2) Bounded capacity: the queue has finite length n, so at most n messages can reside in it.

3) Unbounded capacity: the queue's length is potentially infinite.
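For comparison with the shared-memory sketch above, here is message passing using POSIX message queues; the queue attributes give it bounded capacity (on Linux, typically compile with -lrt):

```c
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

int main(void) {
    /* bounded capacity: at most 8 messages of up to 64 bytes each */
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };
    mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0600, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    const char *msg = "hello";
    mq_send(mq, msg, strlen(msg) + 1, 0);              /* send(message)    */

    char buf[64];
    ssize_t n = mq_receive(mq, buf, sizeof buf, NULL); /* receive(message) */
    if (n >= 0) printf("got: %s\n", buf);

    mq_close(mq);
    mq_unlink("/demo_mq");
    return 0;
}
```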

Naming

Processes must name each other explicitly:

Direct Communication:

send(P, message) – send a message to process P

receive(Q, message) – receive a message from process Q

Direct communication is implemented when the processes use specific process identifiers for the communication; a drawback is that the sender can be hard to identify ahead of time.

*A communication link in this scheme has the following properties: 1) a link is established automatically between every pair of processes that want to communicate, and the processes need to know only each other's identity; 2) a link is associated with exactly two processes; 3) between each pair of processes, there exists exactly one link.

*This scheme exhibits symmetry in addressing: both the sender and the receiver must name the other.

*A variant is asymmetric addressing: only the sender names the recipient; the receiver need not name the sender:

send(P, message) and receive(id, message), where id is set to the name of the process with which communication has taken place. (P87)

Indirect Communication:

Create a new mailbox (port). A mailbox can be viewed abstractly as an object into which processes can place messages and from which messages can be removed; each mailbox has a unique identifier.

Messages are sent and received through the mailbox:

send(A, message)    receive(A, message)

*For this scheme, a communication link has the following properties: 1) a link is established between a pair of processes only if both have a shared mailbox; 2) a link may be associated with more than two processes; 3) between each pair of communicating processes, a number of different links may exist, with each link corresponding to one mailbox.

*Suppose P1, P2, and P3 all share mailbox A, and P2 and P3 both execute receive(A, message). Which process receives P1's message depends on the scheme chosen: 1) allow a link to be associated with at most two processes; 2) allow at most one process at a time to execute receive(); 3) allow the system to select arbitrarily which process will receive the message (that is, either P2 or P3, but not both). The system may likewise define an algorithm for selecting the receiver, or may let the sender designate the receiver.

Synchronous and Asynchronous Message Passing

*Inter-process communication takes place through calls to the send() and receive() primitives; there are different design options for implementing each.

Message passing may be either blocking (synchronous) or non-blocking (asynchronous).

Blocking send: the sender is blocked until the message is received.

Nonblocking send: the sender sends the message and continues.

Blocking receive: the receiver is blocked until a message is available.

Nonblocking receive: the receiver retrieves either a valid message or a null message.

*Different combinations of send() and receive() are possible. When both are blocking, there is a rendezvous between the sender and the receiver: the producer merely calls send() and waits until the message is delivered to the receiver or to the mailbox; likewise, when the consumer calls receive(), it blocks until a message is available.

Lecture 2 Threads

  1. What is a thread?

OS view: A thread is an independent stream of instructions that can be scheduled to run by the OS.

Software developer view: A thread can be considered as a “procedure” that runs independently from the main program.

*A thread is a basic unit of CPU utilization: it comprises a thread ID, a program counter, a register set, and a stack. A traditional (heavyweight) process has a single thread of control; if a process has multiple threads of control, it can perform more than one task at a time.

  2. Benefits of Threads

*Multithreaded programming has four major benefits: 1) Responsiveness: a program may continue running even if part of it is blocked or is performing a lengthy operation. 2) Resource sharing: threads share the memory and resources of their process, so an application can have several different threads of activity within the same address space. 3) Economy: because threads share the resources of the process to which they belong, creating and context-switching threads is cheaper. 4) Scalability: the benefits are even greater on multiprocessor architectures, where threads may run in parallel on multiple processing cores.

  3. Example of Threads

A user types text in a word processor: one thread handles the typed input, another automatically formats the text, another flags spelling mistakes, and yet another automatically saves the file to disk.

  4. Threads are scheduled on a processor, and each thread can execute a set of instructions independently of other processes and threads.

The Thread Control Block (TCB) stores the information about a thread.

A thread shares with the other threads belonging to the same process its code section, data section, and other operating-system resources, such as open files and signals.

  5. Multicore Programming

Concurrent execution on a single-core system (concurrency):

Concurrency means multiple tasks which start, run, and complete in overlapping time periods, in no specific order.

Parallelism on a multi-core system:

A system is parallel if it can perform more than one task simultaneously.

*Parallelism vs. concurrency: a parallel system can perform more than one task simultaneously, whereas a concurrent system supports more than one task by allowing all of them to make progress. Thus, concurrency is possible without parallelism.

  6. Multithreading Support

Multithreading can be supported by:

  1. User level libraries (without Kernel being aware of it) : Library creates and manages threads(user level implementation)
  2. Kernel level -Kernel itself: Kernel creates and manages threads (kernel space implementation)

A user process may want to create one or more threads, and the kernel can create one or more kernel threads for the process. Even if a kernel does not support threading, it can still provide one thread per process (i.e., each process is a single thread of execution).

  7. User-Level Threads (ULT)

User Thread – the unit of execution that is implemented by users; the kernel is not aware of the existence of these threads.

User-Level threads are much faster than kernel level threads. All thread management is done by the application by using a thread library.

Thread programming in languages such as Java, C#, and Python.

Advantages and Disadvantages:

Advantages: Thread switching does not involve the kernel (no mode switch) and is therefore fast. Scheduling can be application-specific: choose the best algorithm for the situation. ULTs can run on any OS; all that is needed is a thread library.

Disadvantages: Most system calls block the whole process, so all threads within the process are implicitly blocked. The kernel can only assign processors to processes, so two threads within the same process cannot run simultaneously on two processors.

  8. Kernel-Level Threads (KLT)

Kernel Thread – the unit of execution that is scheduled by the kernel to execute on the CPU. Kernel threads are handled directly by the operating system, and thread management is done by the kernel, e.g., Windows XP/2000, Solaris.

Advantages and Disadvantages:

Advantage: The kernel can schedule multiple threads of the same process on multiple processors. Blocking at thread level, not process level. If a thread blocks, the CPU can be assigned to another thread in the same process. Even the kernel routines can be multithreaded.

Disadvantage: Thread switching always involves the kernel. This means 2 mode switches per thread switch. It is slower compared to User Level Threads, but faster than a full process switch.

  9. Multithreading Models

Finally, a relationship must exist between user threads and kernel threads.

*There are two ways to provide thread support: user threads at the user level, or kernel threads at the kernel level. User threads are supported above the kernel and are managed without kernel support, whereas kernel threads are supported and managed directly by the operating system.

There are three multithreading models: 1) Many-to-One 2) One-to-One 3) Many-to-Many

  10. Example: Solaris

A process includes the user's address space, stack, and process control block.

User-Level Threads (Threads Library): Invisible to OS, are the interface for application parallelism.

Kernel Threads: the unit that can be dispatched on a processor.

Lightweight processes (LWP) – layer between kernel threads and user threads: each LWP supports one or more ULTs and maps to exactly one KLT.

  11. Many-To-One Model

Many user-level threads are mapped to a single kernel thread.

The process can run only one user-level thread at a time, because there is only one kernel-level thread associated with the process. Thread management is done in user space by a thread library.

Examples: Solaris Green Threads, GNU Portable Threads

  12. One-To-One Model

Each user thread is mapped to a kernel thread.

Kernel may implement threading and can manage threads, schedule threads.

Kernel is aware of threads.

Provides more concurrency; when a thread blocks, another can run.

Examples: Windows NT/XP/2000, Linux, Solaris 9 and later.

  13. Many-To-Many Model

Allows many user-level threads to be mapped to many kernel threads.

Allows the operating system to create a sufficient number of kernel threads.

The number of kernel threads may be specific to either a particular application or a particular machine.

The user can create any number of threads, and the corresponding kernel-level threads can run in parallel on a multiprocessor.

Examples: Solaris prior to version 9; Windows NT/2000 with the ThreadFiber package. Solaris older than version 9 used the two-level variant of this model.

*Solaris supported this two-level model before version 9; from version 9 onward it uses the one-to-one model.

*One variation of the many-to-many model still multiplexes many user-level threads onto a smaller or equal number of kernel threads, but also allows a user thread to be bound to a kernel thread. This variation is sometimes called the two-level model.

  14. Thread Libraries

No matter how threads are implemented, they are created, used, and terminated via a set of functions that are part of a thread API (a thread library).

A thread library provides the programmer with an API for creating and managing threads; the programmer only has to know the library interface. Threads may be implemented in user space or kernel space, and the library may be entirely in user space or may rely on kernel support for threading.

Three primary thread libraries: POSIX threads, Java threads, Win32 threads.

Two approaches for implementing thread library:

  1. Provide a library entirely in user space with no kernel support: all code and data structures for the library exist in user space, so invoking a function in the library results in a local function call in user space, not a system call.

  2. Implement a kernel-level library supported directly by the operating system: code and data structures for the library exist in kernel space, so invoking a function in the library's API typically results in a system call to the kernel.
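As a concrete taste of a thread API, a minimal Pthreads program that creates one worker thread and waits for it (compile with -pthread):

```c
#include <pthread.h>
#include <stdio.h>

/* Worker: sums the integers 1..n passed via arg. */
static void *runner(void *arg) {
    int n = *(int *)arg;
    long sum = 0;
    for (int i = 1; i <= n; i++)
        sum += i;
    printf("sum = %ld\n", sum);
    return NULL;
}

int main(void) {
    pthread_t tid;
    int n = 10;
    pthread_create(&tid, NULL, runner, &n); /* create the thread       */
    pthread_join(tid, NULL);                /* wait for it to finish   */
    return 0;
}
```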

  15. Implicit Threading

There are 2 categories: Explicit and Implicit threading.

Explicit threading – the programmer creates and manages threads.

Implicit threading – the compilers and run-time libraries create and manage threads.

*One way to design well-behaved programs that may need hundreds or even thousands of threads is to transfer the creation and management of threading from application developers to compilers and run-time libraries. This strategy, termed implicit threading, is a popular trend. (JVM; P22)

  16. Thread Pools

A thread pool creates a number of threads at process startup and places them into a pool, where they sit and wait for work.

*The idea: when a server receives a request, it wakes a thread from the pool (if one is available) and passes it the request to service. Once the thread completes its service, it returns to the pool and awaits more work; if the pool contains no available thread, the server waits until one becomes free.

  17. OpenMP

OpenMP is a set of compiler directives, available for C, C++, and Fortran programs, that instructs the compiler to automatically generate parallel code where appropriate.
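A minimal sketch of an OpenMP directive in C (compile with -fopenmp): the pragma asks the compiler to split the loop across a team of threads and combine the per-thread partial sums:

```c
#include <omp.h>
#include <stdio.h>

int main(void) {
    long sum = 0;
    /* Distribute loop iterations across threads; the reduction
       clause merges each thread's private partial sum at the end. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 1; i <= 1000; i++)
        sum += i;
    printf("sum = %ld (threads available: %d)\n",
           sum, omp_get_max_threads());
    return 0;
}
```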

  18. Grand Central Dispatch (GCD)

GCD is an extension to C and C++, available on Apple's Mac OS X and iOS operating systems, that supports parallelism.

*GCD adds a block extension to the C and C++ languages; a block is simply a self-contained unit of work. (P125)

  19. Threading issues / Designing multithreaded programs

There are a variety of issues to consider with multithreaded programming: 1) Semantics of fork() and exec() 2) Signal handling 3) Thread cancellation

  20. Semantics of fork() and exec(): a new process is created with the fork() system call; the newly created process is called the child, and the process that invoked fork() is the parent.

The exec() family of system calls replaces the currently running process image with a new program.

The process identifier remains the same, but the internal details, such as the stack, data, and instructions, are replaced by the new executable.

If exec() is called immediately after fork(), there is no need to duplicate all the parent's threads: they will be replaced anyway.

If exec() will not be called, it is logical to duplicate all the threads as well, so that the child has as many threads as the parent.

  21. Signal Handling

A signal is a software interrupt, or an event generated by a Unix/Linux system in response to a condition or an action.

There are several signals available in Unix systems. A signal is handled by a signal handler (every signal is handled exactly once).

-Asynchronous Signal is generated from outside the process that receives it

-Synchronous Signal is delivered to the same process that caused the signal to occur.
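A minimal sketch of installing a handler for an asynchronous signal (SIGINT) with the POSIX sigaction() call:

```c
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t got_sigint = 0;

/* Handler: only async-signal-safe work is allowed, so just set a flag. */
static void on_sigint(int signo) {
    (void)signo;
    got_sigint = 1;
}

int main(void) {
    struct sigaction sa = { 0 };
    sa.sa_handler = on_sigint;        /* function to run on SIGINT    */
    sigemptyset(&sa.sa_mask);         /* block nothing extra inside   */
    sigaction(SIGINT, &sa, NULL);     /* install the handler          */

    while (!got_sigint)
        pause();                      /* sleep until a signal arrives */
    printf("caught SIGINT, exiting cleanly\n");
    return 0;
}
```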

  22. Thread Cancellation

Thread cancellation is the task of terminating a thread before it has finished; this is needed in various cases.

Two general approaches: Asynchronous cancellation terminates the target thread immediately.

Deferred cancellation allows the target thread to periodically check whether it should be cancelled.

The thread to be cancelled is called the target thread; the cancellation request is sent to that thread.
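A sketch of deferred cancellation with Pthreads: the target thread polls for a pending request at an explicit cancellation point (pthread_testcancel()):

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Target thread: loops forever, checking at each iteration whether
 * a cancellation request is pending (deferred cancellation). */
static void *worker(void *arg) {
    (void)arg;
    for (;;) {
        /* ... do a unit of work ... */
        pthread_testcancel();   /* explicit cancellation point    */
    }
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);
    sleep(1);                   /* let the worker run briefly     */
    pthread_cancel(tid);        /* send the cancellation request  */
    pthread_join(tid, NULL);    /* reap the cancelled thread      */
    puts("worker cancelled");
    return 0;
}
```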

Lecture 4 CPU Scheduling I

  1. Basic Concepts

*On a single-processor system, only one process can run at a time; the others must wait until the CPU is free and can be rescheduled. The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization.

Several processes are kept in memory at once. When one process has to wait, the operating system takes the CPU away from that process and gives it to another, and this pattern repeats: every time one process must wait, another takes over the use of the CPU.

  2. CPU-I/O Burst Cycle

Process execution consists of a cycle of CPU execution and I/O wait.

Process execution begins with a CPU burst, followed by an I/O burst, then another CPU burst, and so on.

The durations of these CPU bursts have been measured extensively.

An I/O-bound program typically has many short CPU bursts, while a CPU-bound program might have a few very long CPU bursts. This distribution helps in selecting an appropriate CPU-scheduling algorithm.

  3. Types of Processes

I/O bound: Has small bursts of CPU activity and then waits for I/O (e.g. Word processor)

Affects user interaction (we want these processes to have highest priority)

CPU bound: hardly any I/O, mostly CPU activity (e.g., gcc, scientific modeling, 3D rendering, etc.)

Such processes have long CPU bursts and can make do with lower priorities.

  4. CPU Schedulers

The scheduler is triggered to run when a timer interrupt occurs or when the running process blocks on I/O.

Scheduler picks another process from the ready queue.

Performs a context switch.

*Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. The selection is carried out by the short-term scheduler (CPU scheduler), which picks a process from those in memory that are ready to execute and allocates the CPU to it.

*The ready queue is not necessarily a first-in, first-out (FIFO) queue; its entries are generally the process control blocks (PCBs) of the processes.

  5. Preemptive Scheduling

Preemptive scheduling: the system may stop the execution of the running process, after which a context switch may give the processor to another process. The interrupted process is put back into the ready queue and will be scheduled again in the future, according to the scheduling policy.

Non-preemptive scheduling: once a process is assigned to the processor, it is allowed to execute to completion; the system cannot take the processor away from the process until it exits.

Any other process which enters the queue has to wait until the current process finishes its CPU cycle.

Decides which process should run next.

CPU scheduling takes place under four circumstances: 1) a process switches from the running to the ready state (e.g., when an interrupt occurs); 2) a process switches from the running to the waiting state (e.g., as the result of an I/O request or a wait() call); 3) a process switches from the waiting to the ready state (e.g., at completion of I/O); 4) a process terminates.

Scheduling under circumstances 2 and 4 is nonpreemptive: there is no choice, and a new process must be selected. All other scheduling is preemptive: either the current process continues running or a different one is selected.

  6. Dispatcher

Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:

-switching context -switching to user mode -jumping to the proper location in the user program to restart that program.

Dispatch latency – time it takes for the dispatcher to stop one process and start another running.

Dispatcher is invoked during every process switch; hence it should be as fast as possible.


  7. Scheduling Criteria

Max CPU utilization – keep the CPU as busy as possible

Max Throughput – complete as many processes as possible per unit time

Fairness -give each process a fair share of CPU

Min Waiting time – process should not wait long in the ready queue

Min Response time – CPU should respond immediately

  8. Scheduling Algorithms

Order of scheduling matters.

Terms the algorithms deal with:

1) Arrival Time (AT): the time at which the process arrives in the ready queue.

2) Completion Time: the time at which the process completes its execution.

3) Burst Time: the CPU time required by a process for its execution.

4) Turnaround Time (TT): the total time spent by the process from first entering the ready state to its completion. Turnaround time = exit time - arrival time.

5) Waiting Time (WT): the total time spent by the process/thread in the ready state waiting for the CPU. Waiting time = turnaround time - burst time.

6) Response Time: the time at which the process gets the CPU for the first time.

Scheduling Algorithms

  1. First-Come, First-Served (FCFS) – No Preemption

Processes are executed on a first-come, first-served basis. Performance is poor, as the average waiting time can be high.

The Gantt chart (processes arriving in order P1, P2, P3 with burst times 24, 3, 3 ms):

Waiting times: P1 = 0, P2 = 24, P3 = 27; average waiting time = (0 + 24 + 27)/3 = 17 ms.

*If the processes arrive in the order P2, P3, P1 instead, the average waiting time is (6 + 0 + 3)/3 = 3 ms.
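The arithmetic above is easy to mechanize. A small sketch that computes FCFS waiting times for processes that all arrive at time 0, using the burst times from the example:

```c
#include <stdio.h>

int main(void) {
    int burst[] = { 24, 3, 3 };          /* P1, P2, P3 in arrival order */
    int n = 3, wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d ms\n", i + 1, wait);
        total_wait += wait;
        wait += burst[i];                /* next process waits for this one */
    }
    printf("average waiting time = %.1f ms\n", (double)total_wait / n);
    return 0;
}
```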

  2. Shortest Job First (SJF) – No Preemption

Schedule the process with the shortest burst time first.

Advantages: minimizes average waiting time and average response time.

Disadvantages: not practical, since burst times are difficult to predict (it amounts to learning to predict the future); long jobs may starve.

Example average waiting time: (3 + 16 + 9 + 0)/4 = 7 ms.

*Although the SJF algorithm is optimal, it cannot be implemented at the level of short-term CPU scheduling, because there is no way to know the length of the next CPU burst. The next burst can, however, be predicted: we expect it to be similar in length to the previous ones, so by computing an approximation of the next burst length we can pick the process with the shortest predicted burst.

The real difficulty with the Shortest-Job-First SJF algorithm is knowing the length of the next CPU request.

-with short-term scheduling, there is no way to know the length of the next CPU burst.

  3. Determining the Length of the Next CPU Burst

Exponential averaging: the next estimate is a weighted average of the most recent actual burst and the past estimate.

Let tn = the actual length of the nth CPU burst,

τn+1 = the predicted value for the next CPU burst,

α = the weighting factor, 0 ≤ α ≤ 1.

The estimate of the next CPU burst length is then:

τn+1 = α·tn + (1 − α)·τn

Commonly, α is set to 1/2; it determines the relative weight of recent and past history (τn).

If α = 0, recent history has no effect (τn+1 = τn); if α = 1, only the most recent CPU burst matters (τn+1 = tn).
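The recurrence is a one-liner in code. A sketch using illustrative values (τ0 = 10 and the burst sequence from the textbook's figure, with α = 1/2):

```c
#include <stdio.h>

/* tau_next = alpha * t_actual + (1 - alpha) * tau_prev */
static double predict(double alpha, double t_actual, double tau_prev) {
    return alpha * t_actual + (1.0 - alpha) * tau_prev;
}

int main(void) {
    double tau = 10.0;                          /* initial guess tau_0 */
    double bursts[] = { 6, 4, 6, 4, 13, 13, 13 };
    for (int i = 0; i < 7; i++) {
        tau = predict(0.5, bursts[i], tau);
        printf("after burst %d: tau = %.2f\n", i + 1, tau);
    }
    return 0;
}
```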

  4. Shortest-Remaining-Time-First (SRTF) – SJF with Preemption

If a new process arrives with a burst time shorter than the remaining time of the current process, the new process is scheduled.

Further reduces average waiting time and average response time.

Context Switch – the context of the process is saved in the PCB when the process is removed from the execution and the next process is scheduled.

This PCB is accessed on the next execution of this process.

*The SJF algorithm can be either preemptive or nonpreemptive. The choice arises when a new process arrives at the ready queue while a previous process is still executing: the next CPU burst of the new process may be shorter than what is left of the currently executing process. A preemptive SJF algorithm will preempt the currently executing process, whereas a nonpreemptive SJF algorithm will allow it to finish its CPU burst first.

  5. Priority Scheduling

Each process is assigned a priority (just a number), and the CPU is allocated to the process with the highest priority (smallest integer = highest priority).

Priorities may be: 1) internal, based on criteria within the OS (e.g., memory needs); 2) external, based on criteria outside the OS (e.g., assigned by administrators).

Problem : Starvation – low priority processes may never execute.

Solution: Aging – as time progresses increase the priority of the process. Ex: do priority = priority – 1 every 15 min.

*The SJF algorithm is a special case of the general priority-scheduling algorithm. Each process is associated with a priority, and the CPU is allocated to the process with the highest priority; processes with equal priority are scheduled in FCFS order. SJF is simply a priority algorithm whose priority p is the inverse of the (predicted) next CPU burst: the longer the burst, the lower the priority, and vice versa.

Each process is assigned a priority; the process with the highest priority executes first, and so on.

Example average waiting time: (0 + 1 + 6 + 16 + 18)/5 = 8.2 ms.

  6. Round Robin (RR) Scheduling

Each process gets a small unit of CPU time (a time quantum or time slice), usually 10-100 milliseconds.

After this time has elapsed, the process is preempted and added to the end of the ready queue; the ready queue is treated as a circular queue.

If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.

*To implement RR scheduling, we again treat the ready queue as a FIFO queue of processes; new processes are added to the tail. The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after one time quantum, and dispatches the process. One of two things then happens: the process may have a CPU burst of less than one quantum, in which case it releases the CPU voluntarily and the scheduler proceeds to the next process in the queue; otherwise, the timer goes off and interrupts the operating system, a context switch is performed, the process is put at the tail of the ready queue, and the scheduler selects the next process.

*We want the time quantum to be large relative to the context-switch time: if the context switch takes about 10% of the quantum, then about 10% of CPU time is wasted on switching. In practice, most modern operating systems use quanta of 10-100 ms, while a context switch typically takes less than 10 microseconds, a small fraction of the quantum.

*Turnaround time also depends on the size of the time quantum: the average turnaround time of a set of processes does not necessarily improve as the quantum grows. In general, average turnaround time improves if most processes finish their next CPU burst within a single quantum. Although the quantum should be large compared with the context-switch time, it must not be too large; if it is, RR degenerates into FCFS. A rule of thumb is that 80% of CPU bursts should be shorter than the time quantum.

  7. Multilevel Queue Scheduling

The ready queue is partitioned into separate queues, e.g., two queues containing:

  1. Foreground (interactive) processes. May have externally defined priority over background processes
  2. Background (batch) processes

Each process is permanently assigned to one queue; it does not move to a different queue.

There are two types of scheduling in multi-level queue scheduling:

  1. Scheduling among the queues.
  2. Scheduling between the processes of the selected queue.

Must schedule among the queues too (not just processes):

  1. Fixed-priority scheduling, i.e., serve all from foreground, then from background; possibility of starvation.
  2. Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes.

80% to foreground in RR, and 20% to background in FCFS.

Another class of scheduling algorithms applies when processes are easily classified into different groups. For example, a common division is between foreground (interactive) processes and background (batch) processes. These two types have different response-time requirements and so may have different scheduling needs; in addition, foreground processes may have priority over background processes.

A multilevel queue scheduling algorithm partitions the ready queue into several separate queues. Based on some property of the process, such as memory size, priority, or process type, a process is permanently assigned to one queue. Each queue has its own scheduling algorithm: the foreground queue might be scheduled by RR, the background queue by FCFS.

In addition, there must be scheduling among the queues, commonly implemented as fixed-priority preemptive scheduling; for example, the foreground queue may have absolute priority over the background queue.

Another possibility is to time-slice among the queues: each queue gets a certain portion of the CPU time, which it can schedule among its processes. In the foreground-background example, the foreground queue can be given 80% of the CPU time for RR scheduling among its processes, while the background queue receives 20% for FCFS scheduling.

  8. Multilevel Feedback Queue Scheduling

Multilevel feedback queues – automatically place processes into priority levels based on their CPU burst behavior.

I/O-intensive processes will end up in higher-priority queues, and CPU-intensive processes will end up in lower-priority queues.

A process can move between the various queues.

A multilevel feedback queue uses 2 basic rules:

  1. A new process gets placed in the highest priority queue.
  2. If a process gives up the CPU before its quantum expires, it stays at the same priority level; if it uses its entire quantum, it moves down to the next lower priority level.

*The multilevel feedback queue algorithm allows a process to move between queues. The idea is to separate processes according to the characteristics of their CPU bursts: if a process uses too much CPU time, it is moved to a lower-priority queue. This scheme leaves I/O-bound and interactive processes in the higher-priority queues. In addition, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue; this form of aging prevents starvation.

A multilevel-feedback-queue scheduler is defined by the following parameters:

  1. Number of queues
  2. Scheduling algorithms for each queue
  3. Method used to determine when to upgrade a process
  4. Method used to determine when to demote a process
  5. Method used to determine which queue a process will enter when that process needs service.

Lecture 5 CPU Scheduling II

  1. Thread Scheduling

User threads are mapped to kernel threads by the thread library.

The thread models: 1) Many to one model. 2) One to one model. 3) Many to many model.

The contention scope refers to the scope in which threads compete for the use of physical CPUs.

There are two possible contention scopes: 1) Process Contention Scope (PCS), a.k.a. local contention scope;

2) System Contention Scope (SCS), a.k.a. global contention scope.

*One distinction between user-level and kernel-level threads lies in how they are scheduled. On systems implementing the many-to-one and many-to-many models, the thread library schedules user-level threads to run on an available LWP. This scheme is known as process contention scope (PCS), since competition for the CPU takes place among threads belonging to the same process. To decide which kernel-level thread to schedule onto a processor, the kernel uses system contention scope (SCS): competition for the CPU with SCS scheduling takes place among all threads in the system. Systems using the one-to-one model, such as Windows, Linux, and Solaris, schedule threads using only SCS.

  2. The basic levels at which threads are scheduled:

-Process Contention Scope (unbound threads) – competition for the CPU takes place among threads belonging to the same process. Available on the many-to-one model.

-System Contention Scope (bound threads) – competition for the CPU takes place among all threads in the system. Available on the one-to-one model.

In a many-to-many thread model, user threads can have either system or process contention scope.

  3. Multiple-Processor Scheduling

Different inter-process communication and synchronization techniques are required.

In multiprocessing systems, all processors share a memory.

There are 3 structures for multi-processor OS: 1) Separate Kernel Configuration 2) Master-Slave Configuration (Asymmetric Configuration) 3) Symmetric Configuration

*If multiple CPUs are available, load sharing becomes possible, but the scheduling problem becomes correspondingly more complex. Many possibilities have been tried, and, as with single-processor scheduling, there is no single best solution.

  4. Separate Kernel Configuration

Each processor has its own I/O devices and file system, and there is very little interdependence among the processors. A process started on a processor runs to completion on that processor only. The disadvantage of this organization is that parallel execution is not possible: a single task cannot be divided into sub-tasks and distributed among several processors, losing the advantage of computational speed-up.

  5. Master-Slave Configuration (Asymmetric)

One processor acts as master, and the other processors in the system are slaves. The master processor runs the OS and user processes, while the slave processors run user processes only.

Process scheduling is performed by the master processor. Parallel processing is possible, as a task can be broken down into sub-tasks and assigned to the various processors.

  6. Symmetric Configuration (SMP)

Any processor can access any device and can handle any interrupts generated on it.

Mutual exclusion must be enforced such that only one processor is allowed to execute the OS at one time.

To limit contention among processors, many parts of the OS are made independent of one another, such as the scheduler, file-system code, etc.

  7. Processor Affinity

Processor affinity is the ability to direct a specific task, or process, to use a specified core.

The idea behind it: if a process is directed to always use the same core, it may run more efficiently because of cache reuse. If a process migrates from one CPU to another, the old instruction and address caches become invalid, and it takes time for the caches on the new CPU to become populated.

*If a process migrates to another processor, the contents of the first processor's cache become invalid for it, and the second processor's cache must be repopulated. Because invalidating and repopulating caches is costly, most SMP systems try to avoid migrating a process between processors and instead attempt to keep it running on the same processor. This is known as processor affinity: a process has an affinity for the processor on which it is currently running.

Soft affinity – OSs try to keep a process running on the same processor but not guaranteeing it will do so.

Hard affinity – allows a process to specify a subset of processors on which it may run.

*Processor affinity takes several forms. When an operating system tries to keep a process on the same processor but does not guarantee it, this is soft affinity: the OS attempts to keep the process on one processor, but the process may still migrate. In contrast, some systems provide system calls that support hard affinity, allowing a process to specify a subset of processors on which it may run. Many systems provide both: Linux, for example, implements soft affinity, but also provides the sched_setaffinity() system call to support hard affinity.
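A minimal Linux-specific sketch of hard affinity, pinning the calling process to CPU 0 with sched_setaffinity():

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);                  /* start with an empty CPU mask */
    CPU_SET(0, &set);                /* allow only CPU 0             */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(set), &set) == -1) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to CPU 0\n");
    return 0;
}
```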

  8. Load Balancing

When each processor has a separate ready queue, there can be an imbalance in the number of jobs in the queues; load balancing addresses this.

*On SMP systems, it is important to keep the workload balanced among all processors to fully utilize the benefits of having more than one; otherwise, one or more processors may sit idle while others are overloaded, with lists of processes awaiting the CPU. Load balancing attempts to distribute the workload evenly across all processors in an SMP system.

Push migration = A system process periodically (e.g. 200ms) checks ready queues and moves processes to different queues, if need be.

Pull migration = if the scheduler finds no process in its ready queue, it raids another processor's run queue and transfers a process onto its own queue so it has something to run (it pulls a waiting task from a busy processor).

*There are two general approaches to load balancing: push migration and pull migration. With push migration, a specific task periodically checks the load on each processor and, if it finds an imbalance, pushes processes from overloaded processors to idle or less busy ones. Pull migration occurs when an idle processor pulls a waiting task from a busy processor.

  9. Multicore Processors

A core executes one thread at a time.

When a processor spends time waiting for data to become available (slowing or stalling execution), this is a memory stall.

*Traditionally, SMP systems had multiple physical processors to let multiple threads run in parallel. A more recent practice is to place multiple processor cores on the same physical chip, yielding a multicore processor; each core maintains its own architectural state and therefore appears to the operating system to be a separate physical processor.

*When a processor accesses memory, it spends a significant amount of time waiting for the required data: a memory stall. In this situation the processor can spend up to 50% of its time waiting for data to become available. To remedy this, many hardware designs implement multithreaded processor cores, in which two or more hardware threads are assigned to each core; if one thread stalls while waiting for memory, the core can switch to another thread.

Solution: multithreaded cores, which run multiple hardware threads per core concurrently.

  10. Hyperthreading

An Intel technology: each physical core is presented as two logical (virtual) processors, which the operating system treats as if they were actual physical cores (simultaneous multithreading, SMT).

Hyperthreading allows multiple threads to run on each core of the CPU.

  11. Techniques for Multithreading

Coarse-grained multithreading – the core switches to another thread only when the running thread blocks on a long-latency event, such as a memory stall.

Fine-grained multithreading – the core switches between threads at a much finer granularity, typically at instruction-cycle boundaries, often in a round-robin fashion.

  12. Real-Time CPU Scheduling

A real-time system is one in which time plays an essential role.

Hard real-time task – one that must meet its deadline; otherwise, it will cause unacceptable damage or a fatal error to the system.

Soft real-time task – one with an associated deadline that is desirable but not mandatory; it still makes sense to schedule and complete the task even after its deadline has passed.

*Hard real-time systems have strict requirements: a task must be serviced by its deadline, and service after the deadline has expired is the same as no service at all.

*Soft real-time systems provide no guarantee as to when a critical real-time process will be scheduled; they guarantee only that such processes are given preference over noncritical ones.

Issues in Real-Time Scheduling

The time elapsed from when an event occurs to when it is serviced is called the event latency.

The major challenge for an RTOS is to schedule the real-time tasks.

2 types of latencies may delay the processing:

  1. Interrupt latency – a.k.a. interrupt response time: the time from when the CPU receives an interrupt to when the interrupt handler starts.

  2. Dispatch latency – the time it takes for the dispatcher to stop one process and start another running.

The RTOS schedules all tasks according to the deadline information and ensures that all deadlines are met.

Static scheduling – A schedule is prepared before execution of the application begins.

Priority-based scheduling – The priority assigned to the tasks depends on how quickly a task has to respond to the event.

Dynamic scheduling – there is complete knowledge of the task set, but new arrivals are not known; therefore, the schedule changes over time.

Priority scheduling of real-time tasks:

An aperiodic task has a deadline by which it must finish or start, or it may have constraints on both start and finish times.

For periodic tasks, the requirement may be stated as "once per period T" or "exactly T units apart".

The processes are periodic: that is, they require the CPU at constant intervals.

A periodic process has a fixed processing time t, a deadline d by which the CPU must service it, and a period p; the three are related by 0 ≤ t ≤ d ≤ p, and the rate of a periodic task is 1/p.

  13. Rate-Monotonic Scheduling

(Figure: missed deadlines with rate-monotonic scheduling.)

It is a static priority-based preemptive scheduling algorithm.

The shortest period = the highest priority.

*The rate-monotonic scheduling algorithm schedules periodic tasks using a static priority policy with preemption: whenever a higher-priority process becomes runnable while a lower-priority one is running, the higher-priority process preempts it. The shorter the period, the higher the priority.

The CPU utilization of a process Pi is ti/pi, where

ti = the execution time

pi = the period of the process

To meet all the deadlines in the system, the following must be satisfied: Σi ti/pi ≤ 1

With rate-monotonic scheduling, the worst-case processor utilization bound for scheduling n processes is: Σi ti/pi ≤ n(2^(1/n) − 1)

Example: consider P1 and P2 with periods p1 = 50 and p2 = 100,

and execution times t1 = 20 and t2 = 35.

The CPU utilization of P1 is 20/50 = 0.4,

and the CPU utilization of P2 is 35/100 = 0.35, for a total of 75%.

Now consider p1 = 50, t1 = 25 and p2 = 80, t2 = 35:

total CPU utilization is (25/50) + (35/80) = 0.94.

*Despite being optimal among static-priority algorithms, rate-monotonic scheduling has a limitation: CPU utilization is bounded, and it is not always possible to maximize CPU resources fully. The worst-case bound is 100% for a system with one process, about 83% for two processes, and it approaches roughly 69% as the number of processes approaches infinity. For the two processes above, whose combined utilization exceeds the bound, rate-monotonic scheduling cannot guarantee that they can be scheduled so as to meet their deadlines.
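A small sketch that checks the worst-case bound for the second example above (pow() comes from the math library; link with -lm):

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    double t[] = { 25, 35 };     /* execution times   */
    double p[] = { 50, 80 };     /* periods           */
    int n = 2;

    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += t[i] / p[i];        /* total utilization */

    double bound = n * (pow(2.0, 1.0 / n) - 1.0); /* n(2^(1/n) - 1) */
    printf("U = %.2f, RM bound = %.2f -> %s\n", u, bound,
           u <= bound ? "guaranteed schedulable" : "no RM guarantee");
    return 0;
}
```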

  14. Earliest-Deadline-First (EDF) Scheduling

The scheduling criterion is based on the deadline of the processes. When a process becomes runnable, it must announce its deadline requirements to the system. Dynamically assigns priorities according to deadline.

The earlier the deadline = the higher the priority.


Consider p1 = 50, t1 = 25 and p2 = 80, t2 = 35.

*Unlike rate-monotonic scheduling, EDF does not require processes to be periodic, nor does it require a fixed CPU burst length per process. In theory, it can allow every process to meet its deadline while CPU utilization reaches 100%; in practice, this level of utilization is impossible because of the cost of context switching and interrupt handling.

  15. Proportional Share Scheduling

T shares are allocated among all processes in the system; an application receives N shares, where N < T.

This ensures that each application receives N/T of the total processor time.

  16. Algorithm Evaluation

How do we select a CPU-scheduling algorithm for a particular system?

  1. Deterministic evaluation

  2. Queueing models

  3. Simulations

One major class of evaluation methods is analytic evaluation.

  17. Deterministic Evaluation

This method takes a particular predetermined workload and computes the performance of each algorithm for that workload.

FCFS – (0 + 10 + 39 + 42 + 49)/5 = 28 ms

Non-preemptive SJF – (10 + 32 + 0 + 3 + 20)/5 = 13 ms

RR – (0 + 32 + 20 + 23 + 40)/5 = 23 ms

  18. Queueing Models

Use distributions of CPU and I/O bursts.

Knowing arrival and service rates – can compute utilization, average queue length, average wait time, etc..

n = average queue length

W = average waiting time in queue

λ = average arrival rate into the queue

Little's law: in a steady state, the rate at which processes leave the queue must equal the rate at which they arrive, thus

n = λ × W

For example, if on average λ = 7 processes arrive per second and each waits W = 2 seconds, then on average n = 7 × 2 = 14 processes are in the queue.

  19. Simulations

Represent major components and activities of the system with software functions and data structures.

Trace tapes (created by monitoring the real system) have the advantage of making it possible to compare different algorithms on exactly the same inputs.

Lecture 6 Deadlock

  1. System Model

*Under the normal mode of operation, a process may use a resource only in the following sequence: 1) request, 2) use, 3) release.

  2. Deadlock Characterization

Deadlock can be defined as the permanent blocking of a set of processes that compete for system resources.

In a deadlock, processes never finish executing, and system resources are tied up, preventing other jobs from ever starting.

  3. Necessary Conditions

Deadlock can arise if four conditions hold simultaneously.

1) Mutual Exclusion: only one process at a time can use a resource.

2) Hold and Wait: A process holding at least one resource is waiting to acquire additional resources held by other processes.

3) No Preemption: A resource can be released only voluntarily by the process holding it, after that process has completed its task.

The first 3 conditions are necessary but not sufficient for a deadlock to exist. For deadlock to actually take place, a fourth condition is required.

4) Circular Wait: A closed chain of processes exists, such that each process holds at least one resource needed by the next process in the chain.

1) Mutual exclusion: at least one resource must be held in a non-sharable mode, i.e., only one process at a time can use it; if another process requests that resource, it must wait until the resource is released.

2) Hold and wait: a process must be holding at least one resource while waiting to acquire additional resources held by other processes.

3) No preemption: resources cannot be preempted; a resource can be released only voluntarily by the process holding it, after that process has completed its task.

4) Circular wait.

  4. Resource-Allocation Graph

A set of vertices V and a set of edges E.

V is partitioned into two types: P = {P1, P2, …, Pn}, the set of all processes in the system, and R = {R1, R2, …, Rm}, the set of all resource types in the system.

Request edge – directed edge Pi → Rj

Assignment edge – directed edge Rj → Pi

Deadlocks can be described more precisely in terms of a directed graph called the system resource-allocation graph.

  5. Basic Facts

If the graph contains no cycles – no deadlock.

If the graph contains a cycle – if there is only one instance per resource type, then deadlock;

if there are several instances per resource type, then possibly deadlock.

  6. Handling Deadlocks

Ensure that the system will never enter a deadlock state.

To deal with the deadlock, following 3 approaches can be used:

1) Deadlock prevention

2) Deadlock avoidance

3) Deadlock detection and recovery

A fourth option is to ignore the problem and pretend that deadlocks never occur in the system (the approach used by most operating systems, including UNIX).

1) Deadlock Prevention

I. Mutual Exclusion – In general, the first of the 4 conditions cannot be disallowed. If access to a resource requires mutual exclusion, then mutual exclusion must be supported by the OS.

The mutual-exclusion condition must hold for at least one non-sharable resource. Sharable resources (e.g., read-only files), by contrast, do not require mutually exclusive access and thus cannot be involved in a deadlock.

II. Hold and Wait – we must guarantee that whenever a process requests a resource, it does not hold any other resources. Either require a process to request and be allocated all its resources before it begins execution, or allow a process to request resources only when it has none allocated to it. Drawbacks: low resource utilization; starvation is possible.

*To ensure that hold-and-wait never occurs, we must guarantee that whenever a process requests a resource, it holds no others. Two protocols can be used: 1) each process requests and is allocated all its resources before it begins execution; 2) a process may request resources only when it has none, so it may request some resources and use them, but must release all its allocated resources before requesting any more.

III. No Preemption – can be prevented in several ways

If a process holding certain resources is denied a further request, it must release its original resources and, if necessary, request them again together with the additional resources. Alternatively, if a process requests a resource currently held by another process, the OS may preempt the second process and require it to release its resources.

*The third necessary condition is that already-allocated resources cannot be preempted. To ensure this condition does not hold: if a process is holding some resources and requests another resource that cannot be immediately allocated (so the process must wait), then all the resources it currently holds are preempted (implicitly released). The preempted resources are added to the list of resources the process is waiting for, and the process is restarted only when it can regain its old resources as well as the new ones it is requesting.

*In other words, if a process requests some resources, we first check whether they are available; if so, we allocate them. If not, we check whether they are allocated to some other process that is waiting for additional resources; if so, we preempt the desired resources from that waiting process and allocate them to the requesting process. If the resources are neither available nor held by a waiting process, the requesting process must wait; while it waits, some of its resources may be preempted if another process requests them. A process is restarted only when it is allocated the new resources it requested and recovers any resources preempted while it was waiting.

IV. Circular Wait – can be prevented by defining a linear ordering of resource types: if a process has been allocated resources of type R, it may subsequently request only resources of types that follow R in the ordering.

*One way to ensure that circular wait never holds is to impose a total ordering of all resource types and to require that each process requests resources in increasing order of enumeration.

2) Deadlock Avoidance

A safe state is one in which there exists at least one sequence of resource allocations to processes that does not result in deadlock. An unsafe state is a state that is not safe.

Safe state → no deadlock. Unsafe state → possible deadlock.

Resource-Allocation Graph Algorithm

Used where every resource type has a single instance.

Claim edge: PiRj indicated that process Pi may request resource Rj. represented by a dashed line.

After the cycle check, if it is confirmed that there will be no circular wait, the claim edge is converted to a request edge. Otherwise, it will be rejected. Request edge converted to an assignment edge when the resource is allocated to the process. When a resource is released by a process, assignment edge reconverts to a claim edge.


  7. Banker's Algorithm

The banker's algorithm has two parts: 1) a safety test algorithm that checks whether the current state of the system is safe; 2) a resource-request algorithm that verifies whether the requested resources, when allocated to the process, would leave the system in a safe state; if not, the request is denied.

*When a new process enters the system, it must declare the maximum number of instances of each resource type that it may need; this number may not exceed the total resources in the system. When a user requests a set of resources, the system must determine whether the allocation would leave the system in a safe state; if it would, the resources are allocated, and otherwise the process must wait until some other process releases enough resources.

Data Structure for the Banker’s Algorithm:

Let n = the number of processes and m = the number of resource types.

Available: Vector of length m. If Available[j] = k, there are k instances of resource type Rj available.

Max: n x m matrix. If Max[i,j] = k, then process Pi may request at most k instances of resource type Rj

Allocation: n x m matrix. If Allocation[i,j] = k then Pi is currently allocated k instances of Rj

Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task.

Need[i,j] = Max[i,j] – Allocation[i,j]

1) Safety Test Algorithm

2) Resource Request Algorithm

Example: P223
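A sketch of the safety test in C, following the data structures above (N and M are illustrative sizes; the resource-request algorithm would tentatively allocate and then call this test):

```c
#include <stdbool.h>

#define N 5  /* number of processes (illustrative)      */
#define M 3  /* number of resource types (illustrative) */

/* Returns true if the system is in a safe state. */
bool is_safe(int available[M], int allocation[N][M], int need[N][M]) {
    int  work[M];
    bool finish[N] = { false };

    for (int j = 0; j < M; j++)
        work[j] = available[j];          /* Work = Available           */

    for (int done = 0; done < N; ) {
        bool found = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool fits = true;            /* Need_i <= Work ?           */
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (fits) {                  /* pretend P_i runs, releases */
                for (int j = 0; j < M; j++)
                    work[j] += allocation[i][j];
                finish[i] = true;
                found = true;
                done++;
            }
        }
        if (!found) return false;        /* no runnable process: unsafe */
    }
    return true;                         /* all processes can finish    */
}
```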

  8. Deadlock Detection

Two cases: 1) a single instance of each resource type; 2) multiple instances of a resource type.

1) Resource-allocation graph (a) and corresponding wait-for graph (b)

Nodes are processes.

Pi Pj if Pi is waiting for Pj

An edge exists between the processes, only if one process waits for another.

Periodically invoke an algorithm that searches for a cycle in the graph. If there is a cycle, there exists a deadlock.

Detecting a cycle in the graph requires on the order of n^2 operations, where n is the number of vertices.

2) For multiple instances of each resource type, a detection algorithm similar to the banker's safety algorithm is used; it requires O(m × n^2) operations.
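For the single-instance case, detection reduces to finding a cycle in the wait-for graph. A DFS sketch over an adjacency matrix (wait_for[i][j] = 1 means Pi waits for Pj; sizes are illustrative):

```c
#include <stdbool.h>

#define N 4  /* number of processes (illustrative) */

/* wait_for[i][j] = 1 means process i waits for process j */
int wait_for[N][N];

/* DFS colouring: 0 = unvisited, 1 = on current path, 2 = done */
static int colour[N];

static bool dfs(int i) {
    colour[i] = 1;                       /* enter the current path  */
    for (int j = 0; j < N; j++) {
        if (!wait_for[i][j]) continue;
        if (colour[j] == 1) return true; /* back edge: cycle found  */
        if (colour[j] == 0 && dfs(j)) return true;
    }
    colour[i] = 2;                       /* fully explored          */
    return false;
}

/* Returns true if the wait-for graph contains a cycle (deadlock). */
bool deadlocked(void) {
    for (int i = 0; i < N; i++) colour[i] = 0;
    for (int i = 0; i < N; i++)
        if (colour[i] == 0 && dfs(i)) return true;
    return false;
}
```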

  9. Detection-Algorithm Usage

When should we invoke the detection algorithm? That depends on: 1) how often a deadlock is likely to occur; 2) how many processes will be affected by the deadlock when it happens.

If detection algorithm is invoked arbitrarily, there may be many cycles in the resource graph and so we would not be able to tell which of the many deadlocked processes “caused” the deadlock.

*If deadlocks occur frequently, the detection algorithm should be invoked frequently. A deadlock can come into being only when some process makes a request that cannot be granted immediately; that request may be the final one completing a chain of waiting processes. In the extreme, we could invoke the algorithm every time an allocation request cannot be granted immediately. Note that with many different resource types, a single request may create multiple cycles in the resource graph.

*Invoking the algorithm for every request incurs considerable computational overhead. A less expensive alternative is to invoke it at defined intervals, for example once per hour, or whenever CPU utilization drops below 40%. If it is invoked at arbitrary points in time, however, we generally cannot tell which of the deadlocked processes "caused" the deadlock.

  10. Recovery from Deadlock

There are two options for breaking a deadlock: 1) Process Termination 2) Resource Preemption

1) Process Termination

Two methods: 1) abort all deadlocked processes; 2) abort one process at a time until the deadlock cycle is eliminated.

Factors deciding which process to abort first: the priority of the process; how long it has computed and how much longer it needs until completion; the resources it has used; the resources it needs to complete; how many processes would need to be terminated; whether the process is interactive or batch.

2) Resource Preemption

I. Select a victim – a process whose execution has just started and that requires many resources to complete is a suitable victim for preemption.

II. Rollback – return the process to some safe state and restart it from that state.

III. Starvation – the same process might always be chosen for resource preemption, resulting in starvation. It is therefore important to ensure the process will not starve; this can be done by bounding the number of times a process can be chosen as a victim.

*How do we ensure that resources are not always preempted from the same process? We must guarantee that a process can be picked as a victim only a finite number of times; the most common approach is to include the number of rollbacks in the cost factor.
