Threads and Processes

Definition of a Thread

  • A thread is the basic execution unit within a process; all of a process's tasks are carried out on threads
  • For a process to do any work it must have a thread; every process has at least one
  • When a program starts, one thread is created by default: the main thread

Definition of a Process

  • A process is an application currently running on the system
  • Processes are independent of one another; each runs in its own dedicated, protected memory space

Differences Between Processes and Threads

  • Address space: threads of the same process share that process's address space, while each process has its own independent address space.
  • Resource ownership: threads within a process share its resources (memory, I/O, CPU, etc.), whereas resources are independent between processes.
  • In protected mode, one process crashing does not affect other processes, but one thread crashing takes down its whole process. Multi-process designs are therefore more robust than multi-threaded ones.
  • Switching between processes consumes more resources and is less efficient, so when frequent switching is involved, threads are preferable to processes. Likewise, concurrent operations that must run simultaneously while sharing variables can only be done with threads, not processes.
  • Execution: each independent process has a program entry point, a sequential execution path, and an exit point. A thread cannot execute on its own; it must live inside an application, which provides and controls its threads.
  • The thread, not the process, is the basic unit of processor scheduling.

Why Multithreading Matters

Advantages

  • It can improve a program's execution efficiency to a degree
  • It can improve resource utilization (CPU, memory) to a degree
  • A thread is destroyed automatically once its tasks finish

Disadvantages

  • Starting a thread costs memory (by default, each secondary thread on iOS takes 512 KB)
  • Opening too many threads consumes a lot of memory and degrades the program's performance
  • The more threads there are, the more overhead the CPU spends scheduling them

GCD

We use GCD all the time, but we have never really dug into its internals: how is a queue created, when does GCD actually invoke our functions, and so on. Let's go into the source and find out.

How a Queue Is Created

dispatch_queue_create is the entry point; it lives in the libdispatch source.

  • A dqai is built from the attr passed in; serial queues pass NULL, so the serial case ends up with an empty dqai
  • The queue's priority is normalized
  • A series of other bookkeeping steps follows
  • _dispatch_get_root_queue picks the target queue tq from a template array, using different indexes for serial vs. concurrent
  • _dispatch_object_alloc allocates the memory
  • _dispatch_queue_init initializes the queue

    #define DISPATCH_QUEUE_WIDTH_FULL 0x1000ull

    #define DISPATCH_QUEUE_WIDTH_POOL (DISPATCH_QUEUE_WIDTH_FULL - 1)

    #define DISPATCH_QUEUE_WIDTH_MAX  (DISPATCH_QUEUE_WIDTH_FULL - 2)

  • Serial queues get width 1; custom concurrent queues get 0xffe (DISPATCH_QUEUE_WIDTH_MAX)

dispatch_queue_t
dispatch_queue_create(const char *label, dispatch_queue_attr_t attr)
{
    return _dispatch_lane_create_with_target(label, attr,
            DISPATCH_TARGET_QUEUE_DEFAULT, true);
}

DISPATCH_NOINLINE
static dispatch_queue_t
_dispatch_lane_create_with_target(const char *label, dispatch_queue_attr_t dqa,
        dispatch_queue_t tq, bool legacy)
{
    dispatch_queue_attr_info_t dqai = _dispatch_queue_attr_to_info(dqa);

    //
    // Step 1: Normalize arguments (qos, overcommit, tq)
    //
    dispatch_qos_t qos = dqai.dqai_qos;
#if !HAVE_PTHREAD_WORKQUEUE_QOS
    if (qos == DISPATCH_QOS_USER_INTERACTIVE) {
        dqai.dqai_qos = qos = DISPATCH_QOS_USER_INITIATED;
    }
    if (qos == DISPATCH_QOS_MAINTENANCE) {
        dqai.dqai_qos = qos = DISPATCH_QOS_BACKGROUND;
    }
#endif // !HAVE_PTHREAD_WORKQUEUE_QOS

    _dispatch_queue_attr_overcommit_t overcommit = dqai.dqai_overcommit;
    if (overcommit != _dispatch_queue_attr_overcommit_unspecified && tq) {
        if (tq->do_targetq) {
            DISPATCH_CLIENT_CRASH(tq, "Cannot specify both overcommit and "
                    "a non-global target queue");
        }
    }

    if (tq && dx_type(tq) == DISPATCH_QUEUE_GLOBAL_ROOT_TYPE) {
        // Handle discrepancies between attr and target queue, attributes win
        if (overcommit == _dispatch_queue_attr_overcommit_unspecified) {
            if (tq->dq_priority & DISPATCH_PRIORITY_FLAG_OVERCOMMIT) {
                overcommit = _dispatch_queue_attr_overcommit_enabled;
            } else {
                overcommit = _dispatch_queue_attr_overcommit_disabled;
            }
        }
        if (qos == DISPATCH_QOS_UNSPECIFIED) {
            qos = _dispatch_priority_qos(tq->dq_priority);
        }
        tq = NULL;
    } else if (tq && !tq->do_targetq) {
        // target is a pthread or runloop root queue, setting QoS or overcommit
        // is disallowed
        if (overcommit != _dispatch_queue_attr_overcommit_unspecified) {
            DISPATCH_CLIENT_CRASH(tq, "Cannot specify an overcommit attribute "
                    "and use this kind of target queue");
        }
    } else {
        if (overcommit == _dispatch_queue_attr_overcommit_unspecified) {
            // Serial queues default to overcommit!
            overcommit = dqai.dqai_concurrent ?
                    _dispatch_queue_attr_overcommit_disabled :
                    _dispatch_queue_attr_overcommit_enabled;
        }
    }
    if (!tq) {
        tq = _dispatch_get_root_queue(
                qos == DISPATCH_QOS_UNSPECIFIED ? DISPATCH_QOS_DEFAULT : qos,
                overcommit == _dispatch_queue_attr_overcommit_enabled)->_as_dq;
        if (unlikely(!tq)) {
            DISPATCH_CLIENT_CRASH(qos, "Invalid queue attribute");
        }
    }

    //
    // Step 2: Initialize the queue
    //
    if (legacy) {
        // if any of these attributes is specified, use non legacy classes
        if (dqai.dqai_inactive || dqai.dqai_autorelease_frequency) {
            legacy = false;
        }
    }

    const void *vtable;
    dispatch_queue_flags_t dqf = legacy ? DQF_MUTABLE : 0;
    if (dqai.dqai_concurrent) {
        vtable = DISPATCH_VTABLE(queue_concurrent);
    } else {
        vtable = DISPATCH_VTABLE(queue_serial);
    }
    switch (dqai.dqai_autorelease_frequency) {
    case DISPATCH_AUTORELEASE_FREQUENCY_NEVER:
        dqf |= DQF_AUTORELEASE_NEVER;
        break;
    case DISPATCH_AUTORELEASE_FREQUENCY_WORK_ITEM:
        dqf |= DQF_AUTORELEASE_ALWAYS;
        break;
    }
    if (label) {
        const char *tmp = _dispatch_strdup_if_mutable(label);
        if (tmp != label) {
            dqf |= DQF_LABEL_NEEDS_FREE;
            label = tmp;
        }
    }

    dispatch_lane_t dq = _dispatch_object_alloc(vtable,
            sizeof(struct dispatch_lane_s));
    _dispatch_queue_init(dq, dqf, dqai.dqai_concurrent ?
            DISPATCH_QUEUE_WIDTH_MAX : 1, DISPATCH_QUEUE_ROLE_INNER |
            (dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));

    dq->dq_label = label;
    dq->dq_priority = _dispatch_priority_make((dispatch_qos_t)dqai.dqai_qos,
            dqai.dqai_relpri);
    if (overcommit == _dispatch_queue_attr_overcommit_enabled) {
        dq->dq_priority |= DISPATCH_PRIORITY_FLAG_OVERCOMMIT;
    }
    if (!dqai.dqai_inactive) {
        _dispatch_queue_priority_inherit_from_target(dq, tq);
        _dispatch_lane_inherit_wlh_from_target(dq, tq);
    }
    _dispatch_retain(tq);
    dq->do_targetq = tq;
    _dispatch_object_debug(dq, "%s", __func__);
    return _dispatch_trace_queue_create(dq)._dq;
}

width is the maximum concurrency of the queue, which we can also confirm by printing a queue in the debugger.

Other fields, such as the target queue, are likewise filled in from the template array.

Deadlock

A deadlock is a standoff in which multiple threads wait on the same resource ("you wait for me, I wait for you"); the figure below shows one. What does it look like underneath? Let's go straight to the source, entering at dispatch_sync.

Internally, dispatch_sync calls _dispatch_sync_f:

void
dispatch_sync(dispatch_queue_t dq, dispatch_block_t work)
{
    _dispatch_sync_f(dq, work, _dispatch_Block_invoke(work), dc_flags);
}

_dispatch_sync_f in turn calls _dispatch_sync_f_inline, and this is the important part: when dq_width equals 1 (which, as we saw during queue creation, means a serial queue) we take the path on which a deadlock can form, entering the barrier function _dispatch_barrier_sync_f. This also tells us that a barrier is itself a form of synchronization. _dispatch_barrier_sync_f is our next entry point for analysis.

static void
_dispatch_sync_f(dispatch_queue_t dq, void *ctxt, dispatch_function_t func,
        uintptr_t dc_flags)
{
    _dispatch_sync_f_inline(dq, ctxt, func, dc_flags);
}

static inline void
_dispatch_sync_f_inline(dispatch_queue_t dq, void *ctxt,
        dispatch_function_t func, uintptr_t dc_flags)
{
    // serial queue
    if (likely(dq->dq_width == 1)) {
        return _dispatch_barrier_sync_f(dq, ctxt, func, dc_flags);
    }
    if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
        DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
    }
    dispatch_lane_t dl = upcast(dq)._dl;
    // Global concurrent queues and queues bound to non-dispatch threads
    // always fall into the slow case, see DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE
    if (unlikely(!_dispatch_queue_try_reserve_sync_width(dl))) {
        return _dispatch_sync_f_slow(dl, ctxt, func, 0, dl, dc_flags);
    }
    if (unlikely(dq->do_targetq->do_targetq)) {
        return _dispatch_sync_recurse(dl, ctxt, func, dc_flags);
    }
    _dispatch_introspection_sync_begin(dl);
    _dispatch_sync_invoke_and_complete(dl, ctxt, func DISPATCH_TRACE_ARG(
            _dispatch_trace_item_sync_push_pop(dq, ctxt, func, dc_flags)));
}

In _dispatch_barrier_sync_f, two if checks jump out as the core (a habit you pick up reading source code). The second requires a nested target queue, which most queues have, so we can guess that the first check is the one taken, leading into _dispatch_sync_f_slow. A note on tid, which the flow below uses for comparison:

  #define _dispatch_tid_self() ((dispatch_tid)_dispatch_thread_port())

  #define _dispatch_thread_port() ((mach_port_t)(uintptr_t)\
          _dispatch_thread_getspecific(_PTHREAD_TSD_SLOT_MACH_THREAD_SELF))

  • tid is an identifier for the thread; _dispatch_thread_getspecific reads it from thread-specific data

static void
_dispatch_barrier_sync_f(dispatch_queue_t dq, void *ctxt,
        dispatch_function_t func, uintptr_t dc_flags)
{
    _dispatch_barrier_sync_f_inline(dq, ctxt, func, dc_flags);
}

static inline void
_dispatch_barrier_sync_f_inline(dispatch_queue_t dq, void *ctxt,
        dispatch_function_t func, uintptr_t dc_flags)
{
    dispatch_tid tid = _dispatch_tid_self();

    if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
        DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
    }

    dispatch_lane_t dl = upcast(dq)._dl;
    if (unlikely(!_dispatch_queue_try_acquire_barrier_sync(dl, tid))) {
        return _dispatch_sync_f_slow(dl, ctxt, func, DC_FLAG_BARRIER, dl,
                DC_FLAG_BARRIER | dc_flags);
    }

    if (unlikely(dl->do_targetq->do_targetq)) {
        return _dispatch_sync_recurse(dl, ctxt, func,
                DC_FLAG_BARRIER | dc_flags);
    }
    _dispatch_introspection_sync_begin(dl);
    _dispatch_lane_barrier_sync_invoke_and_complete(dl, ctxt, func
            DISPATCH_TRACE_ARG(_dispatch_trace_item_sync_push_pop(
                    dq, ctxt, func, dc_flags | DC_FLAG_BARRIER)));
}

_dispatch_sync_f_slow is the key:

  • dsc is populated
  • _dispatch_trace_item_push pushes it onto the queue (queues are FIFO)
  • __DISPATCH_WAIT_FOR_QUEUE__ performs the check; it also appears in the stack trace of the deadlock crash shown earlier

static void
_dispatch_sync_f_slow(dispatch_queue_class_t top_dqu, void *ctxt,
        dispatch_function_t func, uintptr_t top_dc_flags,
        dispatch_queue_class_t dqu, uintptr_t dc_flags)
{
    dispatch_queue_t top_dq = top_dqu._dq;
    dispatch_queue_t dq = dqu._dq;
    if (unlikely(!dq->do_targetq)) {
        return _dispatch_sync_function_invoke(dq, ctxt, func);
    }

    pthread_priority_t pp = _dispatch_get_priority();
    struct dispatch_sync_context_s dsc = {
        .dc_flags    = DC_FLAG_SYNC_WAITER | dc_flags,
        .dc_func     = _dispatch_async_and_wait_invoke,
        .dc_ctxt     = &dsc,
        .dc_other    = top_dq,
        .dc_priority = pp | _PTHREAD_PRIORITY_ENFORCE_FLAG,
        .dc_voucher  = _voucher_get(),
        .dsc_func    = func,
        .dsc_ctxt    = ctxt,
        .dsc_waiter  = _dispatch_tid_self(),
    };

    _dispatch_trace_item_push(top_dq, &dsc);
    __DISPATCH_WAIT_FOR_QUEUE__(&dsc, dq);

    if (dsc.dsc_func == NULL) {
        // dsc_func being cleared means that the block ran on another thread ie.
        // case (2) as listed in _dispatch_async_and_wait_f_slow.
        dispatch_queue_t stop_dq = dsc.dc_other;
        return _dispatch_sync_complete_recurse(top_dq, stop_dq, top_dc_flags);
    }

    _dispatch_introspection_sync_begin(top_dq);
    _dispatch_trace_item_pop(top_dq, &dsc);
    _dispatch_sync_invoke_and_complete_recurse(top_dq, ctxt, func, top_dc_flags
            DISPATCH_TRACE_ARG(&dsc));
}

__DISPATCH_WAIT_FOR_QUEUE__ — since we only care about the deadlock path, most of the code is omitted below. The core is the _dq_state_drain_locked_by check.

static void
__DISPATCH_WAIT_FOR_QUEUE__(dispatch_sync_context_t dsc, dispatch_queue_t dq)
{
    uint64_t dq_state = _dispatch_wait_prepare(dq);
    if (unlikely(_dq_state_drain_locked_by(dq_state, dsc->dsc_waiter))) {
        DISPATCH_CLIENT_CRASH((uintptr_t)dq_state,
                "dispatch_sync called on queue "
                "already owned by current thread");
    }
    // ... (much omitted)
}

_dq_state_drain_locked_by compares the tid with the lock value using XOR: identical bits cancel to 0, differing bits give 1. If the masked result is 0, the queue is already being drained by the current thread, the function returns YES, and DISPATCH_CLIENT_CRASH reports the deadlock.

static inline bool
_dq_state_drain_locked_by(uint64_t dq_state, dispatch_tid tid)
{
    return _dispatch_lock_is_locked_by((dispatch_lock)dq_state, tid);
}

static inline bool
_dispatch_lock_is_locked_by(dispatch_lock lock_value, dispatch_tid tid)
{
    // equivalent to _dispatch_lock_owner(lock_value) == tid
    // XOR: identical bits give 0, differing bits give 1
    return ((lock_value ^ tid) & DLOCK_OWNER_MASK) == 0;
}

When Is a GCD Block Invoked?

This differs between sync and async; everything so far was sync, so we start there. In _dispatch_sync_f_inline, _dispatch_sync_invoke_and_complete receives the func that has been passed along the whole way. Internally it reaches _dispatch_sync_function_invoke_inline, which calls _dispatch_client_callout.

static void
_dispatch_sync_invoke_and_complete(dispatch_lane_t dq, void *ctxt,
        dispatch_function_t func DISPATCH_TRACE_ARG(void *dc))
{
    _dispatch_sync_function_invoke_inline(dq, ctxt, func);
    _dispatch_trace_item_complete(dc);
    _dispatch_lane_non_barrier_complete(dq, 0);
}

static inline void
_dispatch_sync_function_invoke_inline(dispatch_queue_class_t dq, void *ctxt,
        dispatch_function_t func)
{
    dispatch_thread_frame_s dtf;
    _dispatch_thread_frame_push(&dtf, dq);
    _dispatch_client_callout(ctxt, func);
    _dispatch_perfmon_workitem_inc();
    _dispatch_thread_frame_pop(&dtf);
}

_dispatch_client_callout invokes f directly: that is the block call itself. Since this is the synchronous path, the block is executed immediately; nothing is saved for later.

void
_dispatch_client_callout(void *ctxt, dispatch_function_t f)
{
    _dispatch_get_tsd_base();
    void *u = _dispatch_get_unwind_tsd();
    if (likely(!u)) return f(ctxt);
    _dispatch_set_unwind_tsd(NULL);
    f(ctxt);
    _dispatch_free_unwind_tsd();
    _dispatch_set_unwind_tsd(u);
}

On the asynchronous path, dispatch_async copies the block, stores its invoke pointer in a continuation along with a computed qos, and finally pushes it onto the queue.

void
dispatch_async(dispatch_queue_t dq, dispatch_block_t work)
{
    dispatch_continuation_t dc = _dispatch_continuation_alloc();
    uintptr_t dc_flags = DC_FLAG_CONSUME;
    dispatch_qos_t qos;

    qos = _dispatch_continuation_init(dc, dq, work, 0, dc_flags);
    _dispatch_continuation_async(dq, dc, qos, dc->dc_flags);
}

_dispatch_continuation_init copies the block and captures its invoke-function pointer, stores them via _dispatch_continuation_init_f, and finally _dispatch_continuation_priority_set computes the qos that is returned.

DISPATCH_ALWAYS_INLINE
static inline dispatch_qos_t
_dispatch_continuation_init(dispatch_continuation_t dc,
        dispatch_queue_class_t dqu, dispatch_block_t work,
        dispatch_block_flags_t flags, uintptr_t dc_flags)
{
    void *ctxt = _dispatch_Block_copy(work);

    dc_flags |= DC_FLAG_BLOCK | DC_FLAG_ALLOCATED;
    if (unlikely(_dispatch_block_has_private_data(work))) {
        dc->dc_flags = dc_flags;
        dc->dc_ctxt = ctxt;
        // will initialize all fields but requires dc_flags & dc_ctxt to be set
        return _dispatch_continuation_init_slow(dc, dqu, flags);
    }

    dispatch_function_t func = _dispatch_Block_invoke(work);
    if (dc_flags & DC_FLAG_CONSUME) {
        func = _dispatch_call_block_and_release;
    }
    return _dispatch_continuation_init_f(dc, dqu, ctxt, func, flags, dc_flags);
}

_dispatch_continuation_priority_set produces the qos that is returned:

static inline dispatch_qos_t
_dispatch_continuation_priority_set(dispatch_continuation_t dc,
        dispatch_queue_class_t dqu,
        pthread_priority_t pp, dispatch_block_flags_t flags)
{
    dispatch_qos_t qos = DISPATCH_QOS_UNSPECIFIED;
#if HAVE_PTHREAD_WORKQUEUE_QOS
    dispatch_queue_t dq = dqu._dq;

    if (likely(pp)) {
        bool enforce = (flags & DISPATCH_BLOCK_ENFORCE_QOS_CLASS);
        bool is_floor = (dq->dq_priority & DISPATCH_PRIORITY_FLAG_FLOOR);
        bool dq_has_qos = (dq->dq_priority & DISPATCH_PRIORITY_REQUESTED_MASK);
        if (enforce) {
            pp |= _PTHREAD_PRIORITY_ENFORCE_FLAG;
            qos = _dispatch_qos_from_pp_unsafe(pp);
        } else if (!is_floor && dq_has_qos) {
            pp = 0;
        } else {
            qos = _dispatch_qos_from_pp_unsafe(pp);
        }
    }
    dc->dc_priority = pp;
#else
    (void)dc; (void)dqu; (void)pp; (void)flags;
#endif
    return qos;
}

Finally, _dispatch_continuation_async performs the push onto the queue:

static inline void
_dispatch_continuation_async(dispatch_queue_class_t dqu,
        dispatch_continuation_t dc, dispatch_qos_t qos, uintptr_t dc_flags)
{
#if DISPATCH_INTROSPECTION
    if (!(dc_flags & DC_FLAG_NO_INTROSPECTION)) {
        _dispatch_trace_item_push(dqu, dc);
    }
#else
    (void)dc_flags;
#endif
    return dx_push(dqu._dq, dc, qos);
}

dx_push is a macro: #define dx_push(x, y, z) dx_vtable(x)->dq_push(x, y, z), so in the end it calls dq_push. Searching the source globally (we are dealing with a concurrent queue) turns up the vtable entry below; either way, the call eventually lands in _dispatch_root_queue_push.

DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_concurrent, lane,
    .do_type        = DISPATCH_QUEUE_CONCURRENT_TYPE,
    .do_dispose     = _dispatch_lane_dispose,
    .do_debug       = _dispatch_queue_debug,
    .do_invoke      = _dispatch_lane_invoke,
    .dq_activate    = _dispatch_lane_activate,
    .dq_wakeup      = _dispatch_lane_wakeup,
    .dq_push        = _dispatch_lane_concurrent_push,
);

// For the global root queues, the corresponding vtable entry is:
//    .dq_push        = _dispatch_root_queue_push,

_dispatch_root_queue_push (much of the body omitted, keeping only the essential calls) leads to _dispatch_root_queue_poke_slow:

void
_dispatch_root_queue_push(dispatch_queue_global_t rq, dispatch_object_t dou,
        dispatch_qos_t qos)
{
    _dispatch_root_queue_push_inline(rq, dou, dou, 1);
}

static inline void
_dispatch_root_queue_push_inline(dispatch_queue_global_t dq,
        dispatch_object_t _head, dispatch_object_t _tail, int n)
{
    struct dispatch_object_s *hd = _head._do, *tl = _tail._do;
    if (unlikely(os_mpsc_push_list(os_mpsc(dq, dq_items), hd, tl, do_next))) {
        return _dispatch_root_queue_poke(dq, n, 0);
    }
}

void
_dispatch_root_queue_poke(dispatch_queue_global_t dq, int n, int floor)
{
    return _dispatch_root_queue_poke_slow(dq, n, floor);
}

_dispatch_root_queue_poke_slow initializes the root queues; the abridged body below keeps only the interesting calls:

static void
_dispatch_root_queue_poke_slow(dispatch_queue_global_t dq, int n, int floor)
{
    _dispatch_root_queues_init();
    _dispatch_debug_root_queue(dq, __func__);
    _dispatch_trace_runtime_event(worker_request, dq, (uint64_t)n);
    // ... (abridged: on the manager path this ends up creating the thread)
    pthr = _dispatch_mgr_root_queue_init();
}

In _dispatch_mgr_root_queue_init, dispatch_once_f is the core:

static pthread_t *
_dispatch_mgr_root_queue_init(void)
{
    dispatch_once_f(&_dispatch_mgr_sched_pred, NULL, _dispatch_mgr_sched_init);
    dispatch_pthread_root_queue_context_t pqc = _dispatch_mgr_root_queue.do_ctxt;
    pthread_attr_t *attr = &pqc->dpq_thread_attr;
    struct sched_param param;
    (void)dispatch_assume_zero(pthread_attr_setdetachstate(attr,
            PTHREAD_CREATE_DETACHED));
#if !DISPATCH_DEBUG
    (void)dispatch_assume_zero(pthread_attr_setstacksize(attr, 64 * 1024));
#endif
#if HAVE_PTHREAD_WORKQUEUE_QOS
    qos_class_t qos = _dispatch_mgr_sched.qos;
    if (qos) {
        if (_dispatch_set_qos_class_enabled) {
            (void)dispatch_assume_zero(pthread_attr_set_qos_class_np(attr,
                    qos, 0));
        }
    }
#endif
    param.sched_priority = _dispatch_mgr_sched.prio;
    if (param.sched_priority > _dispatch_mgr_sched.default_prio) {
        (void)dispatch_assume_zero(pthread_attr_setschedparam(attr, &param));
    }
    return &_dispatch_mgr_sched.tid;
}

Inside dispatch_once_f, _dispatch_once_callout invokes func and then _dispatch_once_gate_broadcast announces completion; the dispatch_once_gate_t is what limits the call to exactly once.

void
dispatch_once_f(dispatch_once_t *val, void *ctxt, dispatch_function_t func)
{
    dispatch_once_gate_t l = (dispatch_once_gate_t)val;

#if !DISPATCH_ONCE_INLINE_FASTPATH || DISPATCH_ONCE_USE_QUIESCENT_COUNTER
    uintptr_t v = os_atomic_load(&l->dgo_once, acquire);
    if (likely(v == DLOCK_ONCE_DONE)) {
        return;
    }
#if DISPATCH_ONCE_USE_QUIESCENT_COUNTER
    if (likely(DISPATCH_ONCE_IS_GEN(v))) {
        return _dispatch_once_mark_done_if_quiesced(l, v);
    }
#endif
#endif
    if (_dispatch_once_gate_tryenter(l)) {
        return _dispatch_once_callout(l, ctxt, func);
    }
    return _dispatch_once_wait(l);
}

static void
_dispatch_once_callout(dispatch_once_gate_t l, void *ctxt,
        dispatch_function_t func)
{
    _dispatch_client_callout(ctxt, func);
    _dispatch_once_gate_broadcast(l);
}

An Interview Question from a Big Company

Question: what does the final print of a show? The answer is greater than 5 (a data race makes the exact value nondeterministic). The while loop keeps dispatching blocks onto concurrent threads, so the loop body can be entered many times before the increments land, and extra increments are still in flight when the loop exits, pushing a past 5.

__block int a = 0;
while (a < 5) {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        NSLog(@"inside --- %d", a);
        a++;
    });
}
NSLog(@"outside --- %d", a);

That leads straight to the second question: how do we fix it? The answer is a lock, and where to put the lock is itself part of the question. Here we use a semaphore. Since the operation that needs protecting is a++, the code is locked as follows:

__block int a = 0;
dispatch_semaphore_t sem = dispatch_semaphore_create(1);
while (a < 5) {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        NSLog(@"inside --- %d ----", a);
        a++;
        dispatch_semaphore_signal(sem);
    });
    dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
}
NSLog(@"outside --- %d ----", a);

Barrier Functions

A barrier controls the order of task execution; it is one form of synchronization. One caveat: it only works on a queue you created yourself; it has no effect on the global queues.

// concurrent queue
//    dispatch_queue_t queue = dispatch_queue_create("ff.com", DISPATCH_QUEUE_CONCURRENT);
// serial queue: the DISPATCH_QUEUE_SERIAL macro is just NULL
dispatch_queue_t queue = dispatch_queue_create("ff.com", NULL);
// the global queue won't work
//    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
dispatch_async(queue, ^{
    sleep(1);
    NSLog(@"task 1-----");
});
dispatch_async(queue, ^{
    sleep(3);
    NSLog(@"task 2-----");
});
dispatch_barrier_async(queue, ^{
//        sleep(1);
    NSLog(@"task 3-----");
});

dispatch_barrier_async and dispatch_barrier_sync order tasks the same way; the difference is that the synchronous barrier also blocks the current thread, delaying whatever work comes after it.

Dispatch Groups

There are two common usage patterns. The first pairs dispatch_group_enter with dispatch_group_leave, finishing with dispatch_group_notify: the tasks run in arbitrary order, and once every enter has been balanced by a leave, the notify block fires.

dispatch_group_t group = dispatch_group_create();
dispatch_group_enter(group);
dispatch_async(dispatch_get_global_queue(0, 0), ^{
    sleep(1);
    NSLog(@"task 1-----");
    dispatch_group_leave(group);
});
dispatch_group_enter(group);
dispatch_async(dispatch_get_global_queue(0, 0), ^{
    sleep(1);
    NSLog(@"task 2-----");
    dispatch_group_leave(group);
});
dispatch_group_enter(group);
dispatch_async(dispatch_get_global_queue(0, 0), ^{
    sleep(1);
    NSLog(@"task 3-----");
    dispatch_group_leave(group);
});

The other pattern is dispatch_group_async. Note that dispatch_group_notify must not be registered before any work has been added to the group; with an empty group it fires immediately.

dispatch_group_async(group, dispatch_get_global_queue(0, 0), ^{
    sleep(1);
    NSLog(@"task 1-----");
});
dispatch_group_async(group, dispatch_get_global_queue(0, 0), ^{
    sleep(2);
    NSLog(@"task 2-----");
});
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    NSLog(@"all tasks finished");
});
dispatch_group_async(group, dispatch_get_global_queue(0, 0), ^{
    sleep(4);
    NSLog(@"task 3-----");
});

dispatch_source

Create a queue and an additive (DISPATCH_SOURCE_TYPE_DATA_ADD) source, set the callback with dispatch_source_set_event_handler, and read the merged data inside the callback with dispatch_source_get_data:

- (void)sourceDemo {
    self.times = 0;
    self.queue = dispatch_queue_create("ff.com", NULL);
    self.source = dispatch_source_create(DISPATCH_SOURCE_TYPE_DATA_ADD, 0, 0,
                                         dispatch_get_main_queue());
    dispatch_source_set_event_handler(self.source, ^{
        NSInteger data = dispatch_source_get_data(self.source);
        FFLog(@"----- %ld", data);
    });
    dispatch_resume(self.source);
    self.isRuning = YES;
}

Kick off the work; dispatch_source_merge_data feeds data into the source:

- (IBAction)startClick:(id)sender {
    for (int i = 0; i < 100; i++) {
        dispatch_async(self.queue, ^{
            sleep(1);
            self.times++;
            dispatch_source_merge_data(self.source, self.times);
        });
    }
}

Suspend or resume the work:

- (IBAction)cancelClick:(id)sender {
    if (self.isRuning) {
        self.isRuning = NO;
        dispatch_suspend(self.source);
        dispatch_suspend(self.queue);
    } else {
        self.isRuning = YES;
        dispatch_resume(self.source);
        dispatch_resume(self.queue);
    }
}
