.\" Copyright (c) 2008-2012 Apple Inc. All rights reserved.
.Dd May 1, 2009
.Dt dispatch_async 3
.Os Darwin
.Sh NAME
.Nm dispatch_async ,
.Nm dispatch_sync
.Nd schedule blocks for execution
.Sh SYNOPSIS
.Fd #include <dispatch/dispatch.h>
.Ft void
.Fo dispatch_async
.Fa "dispatch_queue_t queue" "void (^block)(void)"
.Fc
.Ft void
.Fo dispatch_sync
.Fa "dispatch_queue_t queue" "void (^block)(void)"
.Fc
.Ft void
.Fo dispatch_async_f
.Fa "dispatch_queue_t queue" "void *context" "void (*function)(void *)"
.Fc
.Ft void
.Fo dispatch_sync_f
.Fa "dispatch_queue_t queue" "void *context" "void (*function)(void *)"
.Fc
.Sh DESCRIPTION
The
.Fn dispatch_async
and
.Fn dispatch_sync
functions schedule blocks for concurrent execution within the
.Xr dispatch 3
framework. Blocks are submitted to a queue which dictates the policy for their
execution. See
.Xr dispatch_queue_create 3
for more information about creating dispatch queues.
.Pp
These functions support efficient temporal synchronization, background
concurrency and data-level concurrency. These same functions can also be used
for efficient notification of the completion of asynchronous blocks (a.k.a.
callbacks).
.Sh TEMPORAL SYNCHRONIZATION
Synchronization is often required when multiple threads of execution access
shared data concurrently. The simplest form of synchronization is
mutual-exclusion (a lock), whereby different subsystems execute concurrently
until a shared critical section is entered. In the
.Xr pthread 3
family of procedures, temporal synchronization is accomplished like so:
.Bd -literal -offset indent
int r = pthread_mutex_lock(&my_lock);
assert(r == 0);
// critical section
r = pthread_mutex_unlock(&my_lock);
assert(r == 0);
.Ed
.Pp
The
.Fn dispatch_sync
function may be used with a serial queue to accomplish the same style of
synchronization. For example:
.Bd -literal -offset indent
dispatch_sync(my_queue, ^{
    // critical section
});
.Ed
.Pp
In addition to providing a more concise expression of synchronization, this
approach is less error prone as the critical section cannot be accidentally
left without restoring the queue to a reentrant state.
.Pp
The
.Fn dispatch_async
function may be used to implement deferred critical sections when the result
of the block is not needed locally. Deferred critical sections have the same
synchronization properties as the above code, but are non-blocking and
therefore more efficient to perform. For example:
.Bd -literal
dispatch_async(my_queue, ^{
    // critical section
});
.Ed
.Sh BACKGROUND CONCURRENCY
The
.Fn dispatch_async
function may be used to execute trivial background tasks on a global concurrent
queue. For example:
.Bd -literal
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // background operation
});
.Ed
.Pp
This approach is an efficient replacement for
.Xr pthread_create 3 .
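.Pp
For comparison, a minimal sketch of the same pattern using
.Xr pthread_create 3
directly requires a separately defined start routine and explicit thread
management (the
.Fn background_operation
routine is illustrative):
.Bd -literal
static void *
background_operation(void *context)
{
    // background operation
    return NULL;
}

pthread_t tid;
int r = pthread_create(&tid, NULL, background_operation, NULL);
assert(r == 0);
r = pthread_detach(tid);
assert(r == 0);
.Ed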
.Sh COMPLETION CALLBACKS
Completion callbacks can be accomplished via nested calls to the
.Fn dispatch_async
function. It is important to remember to retain the destination queue before the
first call to
.Fn dispatch_async ,
and to release that queue at the end of the completion callback to ensure the
destination queue is not deallocated while the completion callback is pending.
For example:
.Bd -literal
void
async_read(object_t obj,
    void *where, size_t bytes,
    dispatch_queue_t destination_queue,
    void (^reply_block)(ssize_t r, int err))
{
    // There are better ways of doing async I/O.
    // This is just an example of nested blocks.
    dispatch_retain(destination_queue);
    dispatch_async(obj->queue, ^{
        ssize_t r = read(obj->fd, where, bytes);
        int err = errno;
        dispatch_async(destination_queue, ^{
            reply_block(r, err);
        });
        dispatch_release(destination_queue);
    });
}
.Ed
.Sh RECURSIVE LOCKS
While
.Fn dispatch_sync
can replace a lock, it cannot replace a recursive lock. Unlike locks, queues
support both asynchronous and synchronous operations, and those operations are
ordered by definition. A recursive call to
.Fn dispatch_sync
causes a simple deadlock as the currently executing block waits for the next
block to complete, but the next block will not start until the currently
running block completes.
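For example, assuming my_queue is a serial queue, the following sketch
deadlocks:
.Bd -literal
dispatch_sync(my_queue, ^{
    // The outer block occupies my_queue until it returns...
    dispatch_sync(my_queue, ^{
        // ...so this inner block never starts, and the
        // outer block never returns: deadlock.
    });
});
.Ed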
.Pp
As the dispatch framework was designed, we studied recursive locks. We found
that the vast majority of recursive locks are deployed retroactively when
ill-defined lock hierarchies are discovered. As a consequence, the adoption of
recursive locks often mutates obvious bugs into obscure ones. This study also
revealed an insight: if reentrancy is unavoidable, then reader/writer locks are
preferable to recursive locks. Disciplined use of reader/writer locks enables
reentrancy only when reentrancy is safe (the "read" side of the lock).
.Pp
Nevertheless, if it is absolutely necessary, what follows is an imperfect way of
implementing recursive locks using the dispatch framework:
.Bd -literal
void
sloppy_lock(object_t object, void (^block)(void))
{
    if (object->owner == pthread_self()) {
        return block();
    }
    dispatch_sync(object->queue, ^{
        object->owner = pthread_self();
        block();
        object->owner = NULL;
    });
}
.Ed
.Pp
The above example does not solve the case where queue A runs on thread X which
calls
.Fn dispatch_sync
against queue B which runs on thread Y which recursively calls
.Fn dispatch_sync
against queue A, which deadlocks both examples. This is bug-for-bug compatible
with nontrivial pthread usage. In fact, nontrivial reentrancy is impossible to
support in recursive locks once the ultimate level of reentrancy is deployed
(IPC or RPC).
.Sh IMPLIED REFERENCES
Synchronous functions within the dispatch framework hold an implied reference
on the target queue. In other words, the synchronous function borrows the
reference of the calling function (this is valid because the calling function
is blocked waiting for the result of the synchronous function, and therefore
cannot modify the reference count of the target queue until after the
synchronous function has returned).
For example:
.Bd -literal
queue = dispatch_queue_create("com.example.queue", NULL);
assert(queue);
dispatch_sync(queue, ^{
    do_something();
    //dispatch_release(queue); // NOT SAFE -- dispatch_sync() is still using 'queue'
});
dispatch_release(queue); // SAFELY balanced outside of the block provided to dispatch_sync()
.Ed
.Pp
This is in contrast to asynchronous functions which must retain both the block
and target queue for the duration of the asynchronous operation (as the calling
function may immediately release its interest in these objects).
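For example, the following variant of the example above is safe because
.Fn dispatch_async
retains 'queue' for the duration of the asynchronous operation:
.Bd -literal
queue = dispatch_queue_create("com.example.queue", NULL);
assert(queue);
dispatch_async(queue, ^{
    do_something();
});
dispatch_release(queue); // SAFE -- dispatch_async() holds its own reference to 'queue'
.Ed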
.Sh FUNDAMENTALS
Conceptually,
.Fn dispatch_sync
is a convenient wrapper around
.Fn dispatch_async
with the addition of a semaphore to wait for completion of the block, and a
wrapper around the block to signal its completion. See
.Xr dispatch_semaphore_create 3
for more information about dispatch semaphores. The actual implementation of the
.Fn dispatch_sync
function may be optimized and differ from the above description.
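For illustration only, such a wrapper might be sketched as follows (the name
.Fn sketch_dispatch_sync
is hypothetical; this is not the actual implementation):
.Bd -literal
void
sketch_dispatch_sync(dispatch_queue_t queue, void (^block)(void))
{
    dispatch_semaphore_t sema = dispatch_semaphore_create(0);
    dispatch_async(queue, ^{
        block();                            // run the caller's block on 'queue'
        dispatch_semaphore_signal(sema);    // then signal completion
    });
    dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER); // block the caller
    dispatch_release(sema);
}
.Ed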
.Pp
The
.Fn dispatch_async
function is a wrapper around
.Fn dispatch_async_f .
The application-defined
.Fa context
parameter is passed to the
.Fa function
when it is invoked on the target
.Fa queue .
.Pp
The
.Fn dispatch_sync
function is a wrapper around
.Fn dispatch_sync_f .
The application-defined
.Fa context
parameter is passed to the
.Fa function
when it is invoked on the target
.Fa queue .
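For example, the following sketch uses
.Fn dispatch_async_f
to pass an application-defined context (the work_context structure and the
.Fn process_work
function are illustrative):
.Bd -literal
struct work_context {
    int value;
};

static void
process_work(void *context)
{
    struct work_context *ctx = context;
    // use ctx->value on the target queue
    free(ctx);
}

struct work_context *ctx = malloc(sizeof(*ctx));
assert(ctx);
ctx->value = 42;
dispatch_async_f(queue, ctx, process_work);
.Ed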
.Sh SEE ALSO
.Xr dispatch 3 ,
.Xr dispatch_apply 3 ,
.Xr dispatch_once 3 ,
.Xr dispatch_queue_create 3 ,
.Xr dispatch_semaphore_create 3