Understanding Interrupts and Exceptions in the Linux Kernel (7): Interrupt Bottom Halves: Softirq, Tasklets and Workqueues

rtoax, March 2021

  • 0x00-0x1f: architecture-defined exceptions and interrupts;

  • 0x30-0x3f: used for ISA (Industry Standard Architecture) interrupts;

  • softirqs are allocated statically, which does not work for loadable kernel modules; this limitation leads to tasklets;

  • softirqs are rarely used directly;

  • workqueues run in kernel process context;

  • tasklets run in software interrupt context;

  • ksoftirqd threads run softirqs;

  • kworker threads run workqueue items.

/*
 *  start_kernel()->setup_arch()->idt_setup_early_traps()
 *  start_kernel()->setup_arch()->idt_setup_early_pf()
 *  start_kernel()->trap_init()->idt_setup_traps()
 *  start_kernel()->trap_init()->idt_setup_ist_traps()
 *  start_kernel()->early_irq_init()
 *  start_kernel()->init_IRQ()
 *  start_kernel()->softirq_init()
 */

1. Introduction to deferred interrupts (Softirq, Tasklets and Workqueues)

This is the ninth part of the Interrupts and Interrupt Handling in the Linux kernel chapter. In the previous part we saw the implementation of the init_IRQ function defined in the arch/x86/kernel/irqinit.c source code file. In this part, we will continue to dive into the initialization stuff that is related to the external hardware interrupts.

Interrupt handling has two important and competing requirements:

  • The handler of an interrupt must execute quickly;
  • Sometimes an interrupt handler must do a large amount of work.

As you can understand, it is almost impossible to satisfy both requirements at once. Because of this, the handling of interrupts is split into two parts:

  • Top half;
  • Bottom half;

In the past there was one way to defer interrupt handling in the Linux kernel, called the bottom half of the processor, but that mechanism is no longer in use. Now the term has remained as a common noun referring to all the different ways of organizing deferred processing of an interrupt. Deferred processing of an interrupt suggests that some of the actions for an interrupt may be postponed to a later time, when the system is less loaded. As you can guess, an interrupt handler cannot do a large amount of work, because it executes in a context where interrupts are disabled. That's why the processing of an interrupt is split into two different parts:

  • In the first part, the main handler of an interrupt does only the minimal and most important job.
  • After this it schedules the second part and finishes its work.

When the system is less busy and the context of the processor allows handling interrupts, the second part starts its work and finishes processing the remaining part of the deferred interrupt.

There are three types of deferred interrupts in the Linux kernel:

  • softirqs;
  • tasklets;
  • workqueues;

We will see a description of all of these types in this part. As I said, we have seen only a little about this theme so far, so now it is time to dive into the details.

2. Softirqs

With the advent of parallelism in the Linux kernel, all new bottom-half handling schemes are built on per-processor kernel threads called ksoftirqd (discussed below). Each processor has its own thread called ksoftirqd/n, where n is the number of the processor. We can see them in the output of the systemd-cgls utility:

[rongtao@localhost cgroup]$ systemd-cgls -k | grep ksoftirqd
├─     6 [ksoftirqd/0]
├─    12 [ksoftirqd/1]
├─    16 [ksoftirqd/2]
├─    20 [ksoftirqd/3]

Or:

[rongtao@localhost cgroup]$ ps -ef | grep ksoftirq
root          6      2  0 3月02 ?       00:01:13 [ksoftirqd/0]
root         12      2  0 3月02 ?       00:00:06 [ksoftirqd/1]
root         16      2  0 3月02 ?       00:00:00 [ksoftirqd/2]
root         20      2  0 3月02 ?       00:00:00 [ksoftirqd/3]

The spawn_ksoftirqd function starts these threads. As we can see, this function is registered as an early initcall:

static __init int spawn_ksoftirqd(void)
{
	cpuhp_setup_state_nocalls(CPUHP_SOFTIRQ_DEAD, "softirq:dead", NULL,
				  takeover_tasklets);
	BUG_ON(smpboot_register_percpu_thread(&softirq_threads));

	return 0;
}
early_initcall(spawn_ksoftirqd);

Softirqs are determined statically at compile time of the Linux kernel, and the open_softirq function takes care of softirq initialization. The open_softirq function is defined in kernel/softirq.c:

void open_softirq(int nr, void (*action)(struct softirq_action *))
{
	softirq_vec[nr].action = action;
}

and as we can see, this function takes two parameters:

  • the index of the softirq_vec array;
  • a pointer to the softirq function to be executed;
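For example, this is how the networking subsystem registers its transmit and receive softirq handlers in net_dev_init() in net/core/dev.c:

open_softirq(NET_TX_SOFTIRQ, net_tx_action);
open_softirq(NET_RX_SOFTIRQ, net_rx_action);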

First of all, let's look at the softirq_vec array:

static struct softirq_action softirq_vec[NR_SOFTIRQS] __cacheline_aligned_in_smp;

It is defined in the same source code file. As we can see, the softirq_vec array contains NR_SOFTIRQS (that is, ten) entries of type softirq_action. In the current version of the Linux kernel there are ten softirq vectors defined: two for tasklet processing, two for networking, two for the block layer, two for timers, and one each for the scheduler and read-copy-update processing. All of them are represented by the following enum:

enum
{
	HI_SOFTIRQ=0,
	TIMER_SOFTIRQ,
	NET_TX_SOFTIRQ,
	NET_RX_SOFTIRQ,
	BLOCK_SOFTIRQ,
	BLOCK_IOPOLL_SOFTIRQ,
	TASKLET_SOFTIRQ,
	SCHED_SOFTIRQ,
	HRTIMER_SOFTIRQ,
	RCU_SOFTIRQ,
	NR_SOFTIRQS
};

In 5.10.13 it looks like this:

enum	/* softirqs */
{
	HI_SOFTIRQ=0,
	TIMER_SOFTIRQ,
	NET_TX_SOFTIRQ,
	NET_RX_SOFTIRQ,		/* network receive softirq */
	BLOCK_SOFTIRQ,
	IRQ_POLL_SOFTIRQ,
	TASKLET_SOFTIRQ,	/* tasklets */
	SCHED_SOFTIRQ,		/* scheduler softirq */
	HRTIMER_SOFTIRQ,	/* high-resolution timers */
	RCU_SOFTIRQ,		/* Preferable RCU should always be the last softirq */
	NR_SOFTIRQS
};

All names of these kinds of softirqs are represented by the following array:

const char * const softirq_to_name[NR_SOFTIRQS] = {
	"HI", "TIMER", "NET_TX", "NET_RX", "BLOCK", "BLOCK_IOPOLL",
	"TASKLET", "SCHED", "HRTIMER", "RCU"
};

Likewise, in 5.10.13:

const char * const softirq_to_name[NR_SOFTIRQS] = {
	"HI", "TIMER", "NET_TX", "NET_RX", "BLOCK", "IRQ_POLL",
	"TASKLET", "SCHED", "HRTIMER", "RCU"
};

Or we can see them in the output of /proc/softirqs:

~$ cat /proc/softirqs
                    CPU0       CPU1       CPU2       CPU3       CPU4       CPU5       CPU6       CPU7
          HI:          5          0          0          0          0          0          0          0
       TIMER:     332519     310498     289555     272913     282535     279467     282895     270979
      NET_TX:       2320          0          0          2          1          1          0          0
      NET_RX:     270221        225        338        281        311        262        430        265
       BLOCK:     134282         32         40         10         12          7          8          8
BLOCK_IOPOLL:          0          0          0          0          0          0          0          0
     TASKLET:     196835          2          3          0          0          0          0          0
       SCHED:     161852     146745     129539     126064     127998     128014     120243     117391
     HRTIMER:          0          0          0          0          0          0          0          0
         RCU:     337707     289397     251874     239796     254377     254898     267497     256624

As we can see, the elements of the softirq_vec array have the softirq_action type. This is the main data structure related to the softirq mechanism, so all softirqs are represented by the softirq_action structure. It consists of a single field only: an action pointer to the softirq function:

struct softirq_action
{
	void	(*action)(struct softirq_action *);
};

So, after this we can understand that the open_softirq function fills the softirq_vec array with the given softirq_action. For a deferred interrupt registered with open_softirq to be queued for execution, it must be activated by a call to the raise_softirq function. This function takes only one parameter, a softirq index nr. Let's look at its implementation:

void raise_softirq(unsigned int nr)
{
	unsigned long flags;

	local_irq_save(flags);
	raise_softirq_irqoff(nr);
	local_irq_restore(flags);
}

Here we can see the call of the raise_softirq_irqoff function between the local_irq_save and local_irq_restore macros. The local_irq_save macro is defined in the include/linux/irqflags.h header file; it saves the state of the IF flag of the eflags register and disables interrupts on the local processor. The local_irq_restore macro is defined in the same header file and does the opposite: it restores the interrupt flag and enables interrupts. We disable interrupts here because the per-processor pending mask that raise_softirq_irqoff modifies can also be touched from interrupt context on the same processor.
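To see this used in practice, here is a slightly simplified excerpt of ____napi_schedule() from net/core/dev.c: the NAPI receive path already runs with interrupts disabled in the hard IRQ path, so it can use the __raise_softirq_irqoff variant directly:

/* Simplified from net/core/dev.c: queue the device for polling and
 * raise NET_RX_SOFTIRQ; the softirq will do the actual packet work. */
static inline void ____napi_schedule(struct softnet_data *sd,
				     struct napi_struct *napi)
{
	list_add_tail(&napi->poll_list, &sd->poll_list);
	__raise_softirq_irqoff(NET_RX_SOFTIRQ);
}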

The raise_softirq_irqoff function marks the softirq as deferred by setting the bit corresponding to the given index nr in the softirq bit mask (__softirq_pending) of the local processor. It does this with the help of the:

__raise_softirq_irqoff(nr);

macro. After this, it checks the result of in_interrupt, which returns the irq_count value. We already saw irq_count in the first part of this chapter; it is used to check whether a CPU is already on an interrupt stack. If we are in interrupt context, we just exit from raise_softirq_irqoff, restore the IF flag, and re-enable interrupts on the local processor; otherwise we call wakeup_softirqd:

if (!in_interrupt())
	wakeup_softirqd();

Where the wakeup_softirqd function activates the ksoftirqd kernel thread of the local processor:

static void wakeup_softirqd(void)
{
	struct task_struct *tsk = __this_cpu_read(ksoftirqd);

	if (tsk && tsk->state != TASK_RUNNING)
		wake_up_process(tsk);
}

Each ksoftirqd kernel thread runs the run_ksoftirqd function, which checks for pending deferred interrupts and calls the __do_softirq function depending on the result of the check:

static void run_ksoftirqd(unsigned int cpu)	/* main work of ksoftirqd */
{
	local_irq_disable();	/* disable interrupts */
	if (local_softirq_pending()) {
		/*
		 * We can safely run softirq on inline stack, as we are not deep
		 * in the task stack here.
		 */
		__do_softirq();
		local_irq_enable();
		cond_resched();
		return;
	}
	local_irq_enable();
}

This function reads the __softirq_pending softirq bit mask of the local processor and executes the deferrable functions corresponding to every set bit. While a deferred function is executing, new pending softirqs might occur. The main problem here is that execution of userspace code can be delayed for a long time while __do_softirq handles the deferred interrupts. For this purpose, there is a limit on the time by which it must finish:

unsigned long end = jiffies + MAX_SOFTIRQ_TIME;
...
restart:
	while ((softirq_bit = ffs(pending))) {
		...
		h->action(h);
		...
	}
...
	pending = local_softirq_pending();
	if (pending) {
		if (time_before(jiffies, end) && !need_resched() &&
		    --max_restart)
			goto restart;
	}
...
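Both limits are defined at the top of kernel/softirq.c; in the kernel versions discussed here they look like this:

/* __do_softirq() gives up after about 2 ms or 10 restarts and wakes
 * ksoftirqd to finish the remaining pending softirqs. */
#define MAX_SOFTIRQ_TIME	msecs_to_jiffies(2)
#define MAX_SOFTIRQ_RESTART	10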

Calls that check for pending deferred interrupts happen periodically:

  • from do_IRQ(), which in 5.10.13 is common_interrupt.

Checks for the existence of pending deferred interrupts are performed periodically, and there are several points where these checks occur. The main one is the call of the do_IRQ function defined in arch/x86/kernel/irq.c, which provides the main means of actual interrupt processing in the Linux kernel. When do_IRQ finishes handling an interrupt, it calls the exiting_irq function from arch/x86/include/asm/apic.h, which expands to a call of the irq_exit function. irq_exit checks for deferred interrupts and the current context, and calls the invoke_softirq function:

if (!in_interrupt() && local_softirq_pending())
	invoke_softirq();

which also executes __do_softirq. To summarize, each softirq goes through the following stages:

  • Registration of a softirq with the open_softirq function.
  • Activation of a softirq by marking it as deferred with the raise_softirq function.
  • After this, all marked softirqs will be triggered the next time the Linux kernel schedules a round of execution of deferrable functions.
  • Execution of the deferred functions of the same type (a consolidated sketch of these stages follows below).
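Putting these stages together, here is a minimal, hypothetical sketch. MY_SOFTIRQ and my_softirq_action are made-up names, and the enum entry would have to be patched into the kernel at build time, since softirqs are static:

/* Hypothetical: assumes a MY_SOFTIRQ entry was added to the softirq enum
 * above at build time; softirqs cannot be registered from modules. */
static void my_softirq_action(struct softirq_action *h)
{
	/* stage 4: runs later in softirq context, so it must not sleep */
	pr_info("MY_SOFTIRQ handled on CPU%d\n", smp_processor_id());
}

static int __init my_softirq_setup(void)
{
	open_softirq(MY_SOFTIRQ, my_softirq_action);	/* stage 1: register */
	raise_softirq(MY_SOFTIRQ);			/* stage 2: activate */
	return 0;	/* stages 3 and 4 happen later, in __do_softirq() */
}
early_initcall(my_softirq_setup);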

As I already wrote, softirqs are statically allocated, and that is a problem for a kernel module that can be loaded. The second concept, built on top of softirq, solves this problem: tasklets.

3. Tasklets

If you read the source code of the Linux kernel that is related to softirqs, you will notice that they are used very rarely. The preferable way to implement deferrable functions is with tasklets. As I already wrote above, tasklets are built on top of the softirq concept, specifically on top of two softirqs:

  • TASKLET_SOFTIRQ;
  • HI_SOFTIRQ.

In short, tasklets are softirqs that can be allocated and initialized at runtime and, unlike softirqs, tasklets of the same type cannot run on multiple processors at the same time. Ok, now we know a little about softirqs; of course the previous text does not cover all of their aspects, but now we can look directly at the code and learn about softirqs step by step in practice, and learn about tasklets along the way. Let's return to the implementation of the softirq_init function that we talked about at the beginning of this part. It is defined in the kernel/softirq.c source code file; let's look at its implementation:

void __init softirq_init(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		per_cpu(tasklet_vec, cpu).tail =
			&per_cpu(tasklet_vec, cpu).head;
		per_cpu(tasklet_hi_vec, cpu).tail =
			&per_cpu(tasklet_hi_vec, cpu).head;
	}

	open_softirq(TASKLET_SOFTIRQ, tasklet_action);
	open_softirq(HI_SOFTIRQ, tasklet_hi_action);
}

We can see the definition of the integer cpu variable at the beginning of the softirq_init function. Next we use it as a parameter for the for_each_possible_cpu macro, which goes through all possible processors in the system. If possible processor is new terminology for you, you can read more about it in the CPU masks chapter. In short, possible cpus is the set of processors that may be plugged in at any time during the life of this system boot. All possible processors are stored in the cpu_possible_bits bitmap; you can find its definition in kernel/cpu.c:

static DECLARE_BITMAP(cpu_possible_bits, CONFIG_NR_CPUS) __read_mostly;
...
const struct cpumask *const cpu_possible_mask = to_cpumask(cpu_possible_bits);

Ok, we defined the integer cpu variable and went through all possible processors with the for_each_possible_cpu macro, initializing the two following per-cpu variables:

  • tasklet_vec;
  • tasklet_hi_vec;

These two per-cpu variables are defined in the same source code file as the softirq_init function and represent two tasklet_head structures:

static DEFINE_PER_CPU(struct tasklet_head, tasklet_vec);
static DEFINE_PER_CPU(struct tasklet_head, tasklet_hi_vec);

where the tasklet_head structure represents a list of tasklets and contains two fields, head and tail:

struct tasklet_head {
	struct tasklet_struct *head;
	struct tasklet_struct **tail;
};
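The tail field is a pointer to the next pointer of the last element, the classic trick that makes appending to a singly linked list O(1). A small user-space sketch of the idiom (plain C, my own made-up names):

struct node { struct node *next; };

static struct node *head;		/* initially NULL                 */
static struct node **tail = &head;	/* points at the final NULL link  */

static void append(struct node *n)
{
	n->next = NULL;
	*tail = n;		/* the old last link now points to n       */
	tail = &n->next;	/* n->next becomes the new final NULL slot */
}

softirq_init above performs exactly this initialization (tail = &head) for both per-cpu lists, and __tasklet_schedule below performs the same append.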

The tasklet_struct structure is defined in include/linux/interrupt.h and represents a tasklet. We have not seen this word in this book before, so let's try to understand what a tasklet is. Actually, a tasklet is one of the mechanisms for handling deferred interrupts. Let's look at the definition of the tasklet_struct structure:

struct tasklet_struct
{
	struct tasklet_struct *next;
	unsigned long state;
	atomic_t count;
	void (*func)(unsigned long);
	unsigned long data;
};

In 5.10.13 it is:

struct tasklet_struct	/* tasklets (TASKLET_SOFTIRQ, HI_SOFTIRQ) are built on softirqs */
{
	struct tasklet_struct *next;
	unsigned long state;
	atomic_t count;
	bool use_callback;
	union {
		void (*func)(unsigned long data);
		void (*callback)(struct tasklet_struct *t);
	};
	unsigned long data;
};

As we can see, this structure contains five fields:

  • the next tasklet in the scheduling queue;
  • the state of the tasklet;
  • the count, which shows whether the tasklet is enabled (zero) or disabled (non-zero);
  • the main callback of the tasklet;
  • the parameter of the callback.

In our case, the softirq_init function initializes only the two arrays of tasklets: tasklet_vec and tasklet_hi_vec. Tasklets and high-priority tasklets are stored in the tasklet_vec and tasklet_hi_vec arrays, respectively. So, we have initialized these arrays and now we can see the two calls of the open_softirq function that is defined in the kernel/softirq.c source code file:

open_softirq(TASKLET_SOFTIRQ, tasklet_action);
open_softirq(HI_SOFTIRQ, tasklet_hi_action);

at the end of the softirq_init function. The main purpose of these open_softirq calls is the registration of the two tasklet-related softirqs. In our case the registered handlers are tasklet_action and tasklet_hi_action: the softirq function associated with the HI_SOFTIRQ softirq is called tasklet_hi_action, and the softirq function associated with TASKLET_SOFTIRQ is called tasklet_action.

The Linux kernel provides an API for manipulating tasklets.

  • First of all there is the tasklet_init function, which takes a tasklet_struct, a function, and a parameter for it, and initializes the given tasklet_struct with the given data:
void tasklet_init(struct tasklet_struct *t,
		  void (*func)(unsigned long), unsigned long data)
{
	t->next = NULL;
	t->state = 0;
	atomic_set(&t->count, 0);
	t->func = func;
	t->data = data;
}
EXPORT_SYMBOL(tasklet_init);

In 5.10.13:

void tasklet_init(struct tasklet_struct *t,
		  void (*func)(unsigned long), unsigned long data)
{
	t->next = NULL;
	t->state = 0;
	atomic_set(&t->count, 0);
	t->func = func;
	t->use_callback = false;
	t->data = data;
}
EXPORT_SYMBOL(tasklet_init);

There are also macros to initialize a tasklet statically; in 5.10.13 they look like this (note that the callback takes the tasklet itself):

#define DECLARE_TASKLET(name, _callback)		\
struct tasklet_struct name = {				\
	.count = ATOMIC_INIT(0),			\
	.callback = _callback,				\
	.use_callback = true,				\
}

#define DECLARE_TASKLET_DISABLED(name, _callback)	\
struct tasklet_struct name = {				\
	.count = ATOMIC_INIT(1),			\
	.callback = _callback,				\
	.use_callback = true,				\
}
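A minimal usage sketch of the static form (my_tasklet and my_tasklet_fn are hypothetical names; with the 5.10-style API the callback receives the tasklet itself):

/* Hypothetical example of the 5.10-style static declaration. */
static void my_tasklet_fn(struct tasklet_struct *t)
{
	/* runs in softirq context: keep it short and never sleep */
	pr_info("my_tasklet ran\n");
}
static DECLARE_TASKLET(my_tasklet, my_tasklet_fn);

The tasklet can then be marked for execution with the scheduling functions described below.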

The Linux kernel provides the following three functions to mark a tasklet as ready to run:

void tasklet_schedule(struct tasklet_struct *t);
void tasklet_hi_schedule(struct tasklet_struct *t);
void tasklet_hi_schedule_first(struct tasklet_struct *t);

In 5.10.13, tasklet_hi_schedule_first no longer exists.
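Before we look at the implementation, here is a hypothetical usage sketch: a hard interrupt handler acknowledges the hardware and defers everything else to the tasklet declared above:

/* Hypothetical: my_irq_handler would be registered with request_irq();
 * the slow part of the handling is deferred to softirq context. */
static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
	/* ... acknowledge the device here: the fast, atomic part ... */
	tasklet_schedule(&my_tasklet);
	return IRQ_HANDLED;
}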

The first function schedules a tasklet with normal priority, the second with high priority, and the third out of turn. The implementations of all three functions are similar, so we will consider only the first one, tasklet_schedule. Let's look at its implementation:

static inline void tasklet_schedule(struct tasklet_struct *t)
{
	if (!test_and_set_bit(TASKLET_STATE_SCHED, &t->state))
		__tasklet_schedule(t);
}

void __tasklet_schedule(struct tasklet_struct *t)
{
	unsigned long flags;

	local_irq_save(flags);
	t->next = NULL;
	*__this_cpu_read(tasklet_vec.tail) = t;
	__this_cpu_write(tasklet_vec.tail, &(t->next));
	raise_softirq_irqoff(TASKLET_SOFTIRQ);
	local_irq_restore(flags);
}

In 5.10.13:

static void __tasklet_schedule_common(struct tasklet_struct *t,
				      struct tasklet_head __percpu *headp,
				      unsigned int softirq_nr)
{
	struct tasklet_head *head;
	unsigned long flags;

	local_irq_save(flags);
	head = this_cpu_ptr(headp);
	t->next = NULL;
	*head->tail = t;
	head->tail = &(t->next);
	raise_softirq_irqoff(softirq_nr);
	local_irq_restore(flags);
}

void __tasklet_schedule(struct tasklet_struct *t)
{
	__tasklet_schedule_common(t, &tasklet_vec,
				  TASKLET_SOFTIRQ);
}
EXPORT_SYMBOL(__tasklet_schedule);

void __tasklet_hi_schedule(struct tasklet_struct *t)
{
	__tasklet_schedule_common(t, &tasklet_hi_vec,
				  HI_SOFTIRQ);
}
EXPORT_SYMBOL(__tasklet_hi_schedule);

As we can see, it checks and sets the state of the given tasklet to TASKLET_STATE_SCHED and executes __tasklet_schedule with the given tasklet. __tasklet_schedule looks very similar to the raise_softirq function that we saw above: it saves the interrupt flag and disables interrupts at the beginning, then appends the new tasklet to the tail of tasklet_vec and calls the raise_softirq_irqoff function that we saw above.

When the Linux kernel scheduler decides to run deferred functions, the tasklet_action function is called for deferred functions associated with TASKLET_SOFTIRQ, and tasklet_hi_action for those associated with HI_SOFTIRQ. These functions are very similar, with only one difference between them: tasklet_action uses tasklet_vec and tasklet_hi_action uses tasklet_hi_vec.

Let's look at the implementation of the tasklet_action function:

static void tasklet_action(struct softirq_action *a)
{
	struct tasklet_struct *list;

	local_irq_disable();
	list = __this_cpu_read(tasklet_vec.head);
	__this_cpu_write(tasklet_vec.head, NULL);
	__this_cpu_write(tasklet_vec.tail, this_cpu_ptr(&tasklet_vec.head));
	local_irq_enable();

	while (list) {
		struct tasklet_struct *t = list;

		list = list->next;

		if (tasklet_trylock(t)) {
			t->func(t->data);
			tasklet_unlock(t);
		}
		...
	}
}

In the beginning of the tasklet_action function, we disable interrupts for the local processor with the help of the local_irq_disable macro (you can read about this macro in the second part of this chapter). In the next step, we take the head of the list that contains tasklets with normal priority and set this per-cpu list to NULL, because all of its tasklets are about to be executed. After this we enable interrupts for the local processor and go through the list of tasklets in a loop. In every iteration of the loop we call the tasklet_trylock function for the given tasklet, which updates the state of the given tasklet to TASKLET_STATE_RUN:

static inline int tasklet_trylock(struct tasklet_struct *t)
{
	return !test_and_set_bit(TASKLET_STATE_RUN, &(t)->state);
}
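For reference, the two tasklet state bits are defined in include/linux/interrupt.h:

enum
{
	TASKLET_STATE_SCHED,	/* Tasklet is scheduled for execution */
	TASKLET_STATE_RUN	/* Tasklet is running (SMP only) */
};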

If this operation was successful, we execute the tasklet's action (which was set in tasklet_init) and call the tasklet_unlock function, which clears the tasklet's TASKLET_STATE_RUN state.

In general, that's all about the tasklet concept. Of course this does not cover tasklets fully, but I think that it is a good starting point from which to continue learning about them.

Tasklets are a widely used concept in the Linux kernel, but as I wrote in the beginning of this part, there is a third mechanism for deferred functions: the workqueue. In the next paragraph we will see what it is.

4. Workqueues

  • workqueues run in kernel process context;
  • tasklets run in software interrupt context;

The workqueue is another concept for handling deferred functions. It is similar to tasklets, with some differences. Workqueue functions run in the context of a kernel process, while tasklet functions run in software interrupt context. This means that workqueue functions do not have to be atomic like tasklet functions: they are allowed to sleep. Tasklets always run on the processor from which they were originally submitted. Workqueues work in the same way, but only by default. The workqueue concept is represented by the:

struct worker_pool {
	spinlock_t		lock;
	int			cpu;
	int			node;
	int			id;
	unsigned int		flags;

	struct list_head	worklist;
	int			nr_workers;
...

structure that is defined in the kernel/workqueue.c source code file in the Linux kernel. I will not reproduce the whole structure here, because it has quite a lot of fields, but we will consider some of them.

In its most basic form, the workqueue subsystem is an interface for creating kernel threads to handle work that is queued from elsewhere. These kernel threads are called worker threads. A unit of work on a queue is described by the work_struct structure defined in include/linux/workqueue.h. Let's look at this structure:

struct work_struct {
	atomic_long_t data;
	struct list_head entry;
	work_func_t func;
#ifdef CONFIG_LOCKDEP
	struct lockdep_map lockdep_map;
#endif
};

Here two things interest us: func, the function that will be executed by a worker thread (it receives a pointer to the work_struct itself), and data, which stores the flags and bookkeeping of the work item. The Linux kernel provides special per-cpu threads called kworker:

[rongtao@localhost cgroup]$ systemd-cgls -k | grep kworker
├─     4 [kworker/0:0H]
├─    14 [kworker/1:0H]
├─    17 [kworker/2:0]
├─    18 [kworker/2:0H]
├─    21 [kworker/3:0]
├─    22 [kworker/3:0H]
├─   309 [kworker/0:1H]
├─   678 [kworker/1:1H]
├─  2259 [kworker/0:2]
├─  7987 [kworker/1:1]
├─  8370 [kworker/1:0]
├─  8641 [kworker/0:1]
├─  9001 [kworker/1:2]
├─  9059 [kworker/1:3]
├─112749 [kworker/u510:0]
├─171528 [kworker/3:2]
├─171529 [kworker/2:2]
├─184716 [kworker/u510:2]

  • ksoftirqd threads run softirqs;
  • kworker threads run workqueue items.

These threads can be used to run the deferred functions of workqueues (just as ksoftirqd runs softirqs). Besides this, we can create a new separate worker thread for a workqueue. The Linux kernel provides the following macros for the creation of work items:

#define DECLARE_WORK(n, f) \
	struct work_struct n = __WORK_INITIALIZER(n, f)

for static creation. This macro takes two parameters: the name of the work_struct and the function to be scheduled. For creating a work item at runtime, we can use the:

#define INIT_WORK(_work, _func)		\
	__INIT_WORK((_work), (_func), 0)

#define __INIT_WORK(_work, _func, _onstack)				\
	do {								\
		__init_work((_work), _onstack);				\
		(_work)->data = (atomic_long_t) WORK_DATA_INIT();	\
		INIT_LIST_HEAD(&(_work)->entry);			\
		(_work)->func = (_func);				\
	} while (0)

macro, which takes the work_struct structure that has to be initialized and the function to be scheduled for this work item. After a work item has been created with one of these macros, we need to put it on a workqueue. We can do this with the help of the queue_work or queue_delayed_work functions:

static inline bool queue_work(struct workqueue_struct *wq,
			      struct work_struct *work)
{
	return queue_work_on(WORK_CPU_UNBOUND, wq, work);
}

The queue_work function just calls the queue_work_on function, which queues the work on a specific processor. Note that in our case we pass WORK_CPU_UNBOUND to the queue_work_on function. It is part of an enum defined in include/linux/workqueue.h and represents work that is not bound to any specific processor. The queue_work_on function tests and sets the WORK_STRUCT_PENDING_BIT bit of the given work and executes the __queue_work function with the workqueue for the given processor and the given work:

bool queue_work_on(int cpu, struct workqueue_struct *wq,
		   struct work_struct *work)
{
	bool ret = false;
	...
	if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))) {
		__queue_work(cpu, wq, work);
		ret = true;
	}
	...
	return ret;
}
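Putting the pieces together, a minimal hypothetical module might use this API as follows (my_work_fn, my_work, and my_wq are made-up names; schedule_work would queue onto the kernel's default system workqueue instead of a dedicated one):

#include <linux/workqueue.h>
#include <linux/delay.h>

/* Hypothetical example: a work function runs in process context,
 * so unlike a tasklet it is allowed to sleep. */
static void my_work_fn(struct work_struct *work)
{
	msleep(10);				/* sleeping is fine here */
	pr_info("deferred work done\n");
}
static DECLARE_WORK(my_work, my_work_fn);

static struct workqueue_struct *my_wq;

static int __init my_wq_init(void)
{
	/* either queue the work on a dedicated workqueue ... */
	my_wq = alloc_workqueue("my_wq", WQ_UNBOUND, 0);
	if (!my_wq)
		return -ENOMEM;
	queue_work(my_wq, &my_work);

	/* ... or simply use the default system workqueue:
	 *	schedule_work(&my_work);
	 */
	return 0;
}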

The __queue_work function gets the work pool. Yes, the work pool, not the workqueue. Actually, work items are not placed on the workqueue itself, but into a work pool, which is represented by the worker_pool structure in the Linux kernel. As you can see above, the workqueue_struct structure has the pwqs field, a list of the pool_workqueues that link it to its worker_pools.

When we create a workqueue, a pool_workqueue is set up for each processor. Each pool_workqueue is associated with a worker_pool that is allocated on the same processor and corresponds to the priority of the queue; through them the workqueue interacts with its worker_pools. So in the __queue_work function we set the cpu to the current processor with raw_smp_processor_id (you can find information about this macro in the fourth part of the Linux kernel initialization process chapter), get the pool_workqueue for the given workqueue_struct, and insert the given work into the given workqueue:

static void __queue_work(int cpu, struct workqueue_struct *wq,
			 struct work_struct *work)
{
...
	if (req_cpu == WORK_CPU_UNBOUND)
		cpu = raw_smp_processor_id();

	if (!(wq->flags & WQ_UNBOUND))
		pwq = per_cpu_ptr(wq->cpu_pwqs, cpu);
	else
		pwq = unbound_pwq_by_node(wq, cpu_to_node(cpu));
...
	insert_work(pwq, work, worklist, work_flags);

Now that we can create work items and workqueues, we need to know when they are executed. As I already wrote, all work items are executed by a kernel thread. When this kernel thread is scheduled, it starts to execute work items from the given workqueue. Each worker thread executes a loop inside the worker_thread function. This thread does many different things, some of which are similar to what we saw earlier in this part. As it starts executing, it removes all the work_struct items from its workqueue and runs them.

That’s all.

5. Conclusion

This is the end of the ninth part of the Interrupts and Interrupt Handling chapter, in which we continued to dive into external hardware interrupts. In the previous part we saw the initialization of the IRQs and the main irq_desc structure. In this part we saw three concepts that are used for deferred functions: the softirq, the tasklet, and the workqueue.

The next part will be the last part of the Interrupts and Interrupt Handling chapter, where we will look at a real hardware driver and try to learn how it works with the interrupts subsystem.

If you have any questions or suggestions, write me a comment or ping me at twitter.

Please note that English is not my first language, and I am really sorry for any inconvenience. If you find any mistakes, please send me a PR to linux-insides.

6. Links

  • initcall
  • IF
  • eflags
  • CPU masks
  • per-cpu
  • Workqueue
  • Previous part
