One of the main new features in Apple's Snow Leopard operating system has been released as open source: Apple has published the code for the userland portion of its Grand Central Dispatch technology under the Apache License, version 2. Mac OS X also has kernel support for Grand Central Dispatch, released as open source as part of the XNU project. While we're at it, let's take this opportunity to look into exactly what Grand Central Dispatch is.
What is Grand Central Dispatch?
Grand Central Dispatch is Apple’s answer to the problem of parallel programming. With modern computers containing ever more processing cores, you’d think that operating systems and applications would run ever faster. However, this is not the case because in order to benefit from these multiple processing cores, code has to be executed in different threads.
Sounds simple enough. Have your program use multiple threads so that each core can handle one of them, and bingo, each additional core equals benefit. Of course, it’s not that simple. Parallel programming is extremely difficult, and even the most talented of programmers will run into race conditions, deadlocks, and other difficulties, because different threads require access to the same data and memory space.
The BeOS was probably one of the first (the first?) desktop operating systems designed from the ground up to run on multiple processors. By making extensive use of multithreading, the BeOS could squeeze every last ounce of power from the two processors inside the BeBox, resulting in a very impressive operating system which could do all sorts of fancy tricks at a time when the competition was just discovering the merits of protected memory. However, all this multithreading was quite harsh on programmers.
Without proper multithreading, applications today can seriously underutilise the power available to them. Single-threaded applications will only utilise a single core, leading to all sorts of performance issues. This screenshot from John Siracusa's excellent Snow Leopard review illustrates this point:
[Screenshot from John Siracusa's Snow Leopard review, captioned: "And your wallet dies a little inside."]
Grand Central Dispatch is Apple's attempt to make programmers' lives easier. Note that from this point onwards, I'm pretty much relying on whatever Siracusa wrote down. It's not a new Cocoa framework, but a plain C library which is available to any of the C-based languages in Mac OS X: Objective-C, C++, and Objective-C++. You only need to add #include <dispatch/dispatch.h> to your code.
So, what, exactly, does GCD do? Siracusa put it like this:
The bottom line is that the optimal number of threads to put in flight at any given time is best determined by a single, globally aware entity. In Snow Leopard, that entity is GCD. It will keep zero threads in its pool if there are no queues that have tasks to run. As tasks are dequeued, GCD will create and dole out threads in a way that optimizes the use of the available hardware. GCD knows how many cores the system has, and it knows how many threads are currently executing tasks. When a queue no longer needs a thread, it’s returned to the pool where GCD can hand it out to another queue that has a task ready to be dequeued.
Basically, thread management is no longer something programmers have to do manually; programmers simply cannot know what the optimal thread count of their software is, as your system may be doing any number of things at any given time.
GCD is not about pervasive multithreading like the BeOS was. BeOS was about having each and every component run on its own concurrent thread, leading to difficulties in data sharing and locking. GCD, on the other hand, promotes a more hierarchical design, where you have one main application thread for user events and the interface, and any number of “worker threads” doing specific jobs as needed.
Siracusa presents the example of a word processor which has a button which analyses a document, and presents a number of statistics. On an average document, this takes less than a second, so it can easily be run on the main program thread without the user ever noticing it. However, what if you were to load up a very long and complicated document?
Suddenly, the analysis would take 15-30 seconds, and because it runs on the program's main thread, the entire application becomes unresponsive, the beach ball or hourglass appears, and the user is bummed out. This is where Grand Central Dispatch comes into play: using just two additional lines of code, a programmer can relegate the analysis to its own thread, without it interfering with the execution of the program's main thread. The application remains responsive, there are no beach balls or hourglasses, and the user is not bummed out. "No application-global objects, no thread management, no callbacks, no argument marshalling, no context objects, not even any additional variables," Siracusa explains, "Behold, Grand Central Dispatch."
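For the curious, the pattern Siracusa describes looks roughly like the sketch below. This is only an illustration under my own assumptions: analyze_document() and update_stats_ui() are hypothetical stand-ins for the word processor's real code, and on non-Apple platforms you would need a blocks-capable compiler (clang -fblocks) plus libdispatch.

#include <dispatch/dispatch.h>
#include <stdio.h>

/* Hypothetical stand-ins for the word processor's real code. */
static int  analyze_document(void)  { return 12345; }           /* pretend this takes 30 seconds */
static void update_stats_ui(int n)  { printf("%d words\n", n); }

void analyze_button_clicked(void)
{
    /* The "two extra lines": wrap the existing work in a block and hand it to
       a global concurrent queue, so the main thread never stops responding. */
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        int word_count = analyze_document();
        /* Hop back to the main queue for anything that touches the UI. */
        dispatch_async(dispatch_get_main_queue(), ^{
            update_stats_ui(word_count);
        });
    });
}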
It is important to note, however, that GCD doesn’t actually address the problem of parallelism in computer programming at all: programmers will still have to figure out for themselves which tasks can be run in parallel and when. What GCD does, however, is make the actual process of spinning off a task to its own thread easy.
Open source
The news now is that Apple has released the userspace implementation of Grand Central Dispatch as open source under version 2 of the Apache License (the kernel component is part of the open source XNU project). For portability purposes, kernel support is not necessary – compiler support for blocks, however, is. The blocks runtime is available as a component of the LLVM project.
I had assumed this Grand Central was something magical that addressed a lot of the issues of parallel programming like concurrency and data sharing. After reading this, it sounds like it is .Net’s (and I’m sure other languages) ThreadPool with a lot of Apple marketing.
How is this different/better/revolutionary from ThreadPool? Just the fact that it's at the OS level instead of the runtime level?
So you ignore a 23-page article from Ars Technica, which spends plenty of space explaining GCD, and yet you still see fit to whine on this forum that it is little more than 'marketing fluff'.
Or, he’s asking a legitimate question about what sets GCD apart.
Siracusa explains:
Exactly Thom. As in John Siracusa’s article – the devil is in the details.
No, what ticked me off was this:
And ignores what makes it unique compared to other implementations of the idea. He might as well have trolled by saying 'nothing special, Apple is just copying Microsoft', which is what his post boils down to.
Of course he ignored what makes it unique, as that was his very question!
Next time, try not to be so aggressive when someone is trying to ask an honest question.
Which is why I pointed him to the Arstechnica website.
If the answer to the question were incredibly difficult to find, then I'd cut him some slack, but that isn't the case. The answer was sitting in a 23-page review.
I love working with closures/blocks, but I’m not sure why being able to execute them on a separate thread is so special. C# has been able to do this with anonymous methods since 2.0 and it hasn’t brought any threading nirvana to the .NET world.
Developers avoid writing multithreaded code not because it is difficult to execute or schedule, but because it is difficult to synchronize shared resources. That is the hard problem. Solve that and then we reach threading nirvana.
Not to mention that the block implementation in Objective-C has some potentially serious memory leak issues when used without garbage collection enabled or on platforms where it is unavailable (iPhone/iTouch).
It has been explained to me that it is possible to avoid locking using blocks+GCD, and that this could be done by spinning off work using the block syntax with libdispatch calls, a kind of 'in-line' threading.
This provides multithreading but also ensures that code can be structured in such a way as that shared resources will be accessed serially. So with careful planning (so the story goes) locking can be avoided entirely, or at least reduced.
I'm not likely to be developing on OS X anytime soon, so I can safely say that this is truly threading nirvana, without having to worry about being bitten by my own words.
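The "serial access instead of locking" idea described above looks roughly like this. It is only a sketch under assumptions of mine (clang -fblocks plus libdispatch, and a made-up counter as the shared resource): one serial queue owns the counter, so every update is funnelled through that queue instead of through a mutex.

#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void)
{
    dispatch_queue_t counter_q = dispatch_queue_create("com.example.counter", NULL); /* serial */
    dispatch_queue_t work_q    = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group     = dispatch_group_create();

    __block int shared_counter = 0;   /* __block: blocks may write to it */

    for (int i = 0; i < 100; i++) {
        dispatch_group_async(group, work_q, ^{
            /* ...do the genuinely parallel part of the work here... */
            /* Then funnel the shared-state update through the serial queue;
               the queue serialises access, so no lock is taken anywhere. */
            dispatch_sync(counter_q, ^{ shared_counter++; });
        });
    }

    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    printf("counter = %d\n", shared_counter);   /* always 100 */

    dispatch_release(group);
    dispatch_release(counter_q);
    return 0;
}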
.NET certainly didn't invent the threadpool ;-).
Threadpool as such is trivial. GCD manages the threadpool by looking at how busy the CPUs are, and is per-OS instead of per-application.
It's more than a threadpool, but I don't know if it's as big a deal as it has been made out to be… It needs a few months at least so that the tires can get kicked, so to speak.
Anyway, here is how it is different from a .NET threadpool:
1. It isn’t a class instantiated in your code. It is an always running background daemon.
2. It knows the topology of the machine it is running on, i.e. how many sockets and cores are available. The primary benefit of this is that you do not have to configure the size of the pool – you just let it handle that itself.
3. In the traditional threadpool approach, each application manages its own (probably differing) implementation of a threadpool, and since they do not know of each other's existence they often don't have enough information to make the best decisions for the system as a whole. GCD is a global threadpool, and as such it has a much broader view of the system and can therefore make better decisions.
4. It is designed to execute closures (Apple calls them blocks, same thing). While this is a restriction, some people consider it a good one because it eliminates a lot of potential blocking issues.
5. It's completely dynamic and requires no pre-planning. By that I mean your code does not need to (and by design should not) be concerned with how many threads will be needed, or even whether your code will actually end up running in parallel or synchronously.
6. It supports FIFO queuing (not unique, but not a common attribute of most threadpool implementations).
7. As I understand it (I may be wrong), it appears to be an M:N threading model, i.e. userspace threads. As such, the GCD "threads" themselves are very lightweight.
The bad (or at least potentially bad):
1. For other platforms, if it gets adopted, it is unclear at this point how things would work, since it requires support for closures, and there are no existing C compilers (besides Apple's) that support closures.
2. It may or may not be possible to use it from VM-based languages (i.e. .NET, Java), but frankly for those types of environments it doesn't make all that much sense either. You would probably want something like it implemented within the VM; it makes more sense there.
In my opinion, it seems like a pretty solid system (at least on paper). I think the main attraction is going to be in using it to hide blocking behavior in UI applications, i.e. pushing code that might block the main UI thread off to the background. I don't see it as a replacement for a threadpool exactly; there are probably times you want application control of thread count and such. But I think its simplicity will attract a lot of users, at least on OS X. Whether it will catch on for other platforms is hard to say.
This is an excellent summary of the differences between GCD and .NET's thread pool. I'm a .NET dev and I love the thread pool, but I fully realize its limitations when it comes to balancing tasks I submit to it with OS-wide task execution.
I do some OS X development for fun in my spare time and I’m looking forward to seeing what it’s like. Just have to upgrade to Snow Leopard first.
LLVM. End of story.
I don't think that compiler support is a problem for GCD adoption outside OS X. In UNIX, gcc is king, and I expect Apple to push the blocks/closures extension support into gcc soon. Also, any compiler vendor can just copy the implementation from LLVM since it's licensed under the BSD license.
As for Windows, Microsoft is going to do whatever it wants no matter what Apple does.
I don't think the license is an issue – and IIUC Apple already has patches that implement blocks for gcc. It's a different question whether the C community really *wants* this feature, as it doesn't fit very well in C (C is still not intended to be a high-level language).
C++ lambdas are probably a better closure alternative:
http://groups.google.com/group/comp.lang.c++.moderated/browse_threa…
I don't think it will end up being a problem in the long run. But although the user space code is available, right now you'll have to use LLVM to hack on it. I don't think you will see widespread use until GCC proper implements Apple's changes, and afaik that has not happened yet. It might happen quickly, but I doubt it. I've already seen some bickering on the mailing lists about the ugly syntax…
Some developers are a bit upset because the existing inner function support (-fnested-functions) is already very close to blocks in functionality, and rather than implementing the rather strange syntax Apple uses for blocks they would prefer to expand inner functions to support full closures. I guess we’ll see…
Not unless it's over Richard Stallman's dead body. The whole reason for the existence of LLVM/Clang is that GCC wasn't going where Apple wanted, and RMS was unwilling to add even trivial features any other modern-day compiler has.
As cynical as it is, I think it'll take some real convincing for GCC to accept blocks. Maybe if it gets ratified into the standard, there should be no good reason for it not to be included.
While RMS (co)wrote a lot of GNU software, how much control does he actually have these days? I doubt he still actively develops any of them. Any decisions about what features are and are not added to GCC are probably up to the GCC maintainers, not RMS.
This reminds me of EGCS*, which came to be exactly because the GCC project was stale and unwilling to evolve in any meaningful way. This is not the case with LLVM and Clang.
GCC and LLVM/Clang have fundamentally different objectives.
GCC is supposed to be the "all architectures/all languages" compiler (which is why it changed from "GNU C Compiler" to "GNU Compiler Collection"), which places an extra burden on anyone who wishes to make significant changes to the way it operates and the features it has: making significant changes to GCC implies that those changes have to apply to all architectures (from multi-core multi-GHz CPUs to tiny microcontrollers) or, at least, not cause them any harm.
This is fantastically hard. Even so, GCC seems to be making steady progress over the last few years, but within its limitations and without a (probably disastrous for the project’s goals) major rewrite.
LLVM/Clang is the way to work beyond those limitations without caring if it only works for x86. There is no war between LLVM and GCC (and if there is any significant rivalry between the two projects, there shouldn't be).
The two projects actually help each other.
The knowledge within the GCC project about the different architectures it supports can keep LLVM from falling into the x86-only pit, and GCC's more production-oriented rate of change can help LLVM integrate the more relevant features sooner and more consistently into its design.
On the other hand, LLVM can eventually reach the level of quality and architecture/language support that GCC currently has, allowing it to adopt GCC’s goals and effectively become a replacement for GCC.
I don't think RMS has anything to do with this (or with today's GCC, actually), nor do I think he would come forward criticizing LLVM if this should happen.
I think you are wrong on both counts.
On one hand you imply that GCC is slow in adopting changes when, in fact, GCC is often criticized for implementing too many extensions to standards. Many changes to language standards have actually been implemented first in GCC, which is the complete opposite of what you’re stating.
On the other hand, GCC doesn’t have to implement blocks just because Apple provides them with the necessary code to do it and they are required by something that currently has no expression whatsoever outside of OS X.
The real problem, like others have mentioned, is that blocks/closures may not be a good fit for C, and C developers may not like them.
This may be confusing to the people that use the term "C/C++", but C is still a separate, and living, language from C++ because of its simplicity (a real understatement when comparing it to a language as feature-bloated and baroque as C++). C programmers may not look favorably on C becoming a little more like C++.
* http://en.wikipedia.org/wiki/GNU_Compiler_Collection#EGCS_fork
llvm isn’t x86-only:
http://llvm.org/Features.html
I didn’t say that it was. But supporting multiple architectures isn’t their primary objective, unlike GCC, and on the Clang site there is mention of it being production quality for x86 and x86_64, although it supports other architectures.
Clang isn't LLVM. Clang is the frontend, LLVM the backend. Clang doesn't support architectures, it supports languages; LLVM supports the processor architectures. You can use GCC as a frontend for LLVM, but that doesn't change the supported architectures, only the supported languages. And you can't use Clang with GCC as a backend. So I don't get what you're trying to say with your post.
FWIW – Apple’s compiler isn’t Apple’s at all. It’s GCC, and the blocks implementation is open-source; whether the GCC maintainers pick it up is a different story.
Apple uses a custom branch of GCC for OS X, and that branch supports blocks, but GCC mainline isn't necessarily going to accept the changes required to support it. Which is more or less what you just said; just pointing out that my use of the term "Apple's compiler" was meant to be read as "Apple's GCC branch".
Re: C closures, I thought Apple used GCC 4? Do they have to make any changes available?
Blocks and closures in C, C++, ObjC if I had to guess.
Java has thread pools as well (in java.util.concurrent), since 1.5.
This isn't a thread pool; the .NET equivalent would be the TPL (and PLINQ) that is coming with .NET 4: http://msdn.microsoft.com/en-us/magazine/cc163340.aspx
I haven't looked into GCD or TPL enough to comment, but they both (supposedly) address the same issue. IMO this isn't something you can just shoehorn in; you need it from the ground up (like Scala).
I had not seen this before. You are right though, it looks to be very much the same basic concept as GCD. Thanks for pointing this out.
One thing that I notice immediately: I like the TPL syntax a lot better. For example, Parallel.For is very similar to dispatch_apply, but it doesn't require you to declare a queue, so it looks much less alien. A Parallel.For call just reads a whole lot nicer than the equivalent dispatch_apply with its explicit queue argument. Of course, that is just syntactic sugar – they both do the same thing…
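For reference, a dispatch_apply call looks roughly like the sketch below (assuming clang -fblocks and libdispatch); the explicit queue argument is the part Parallel.For spares you.

#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void)
{
    double results[8];
    double *out = results;   /* blocks capture this pointer by value */

    /* dispatch_apply is synchronous: it returns only after all 8 iterations
       have run, spread by libdispatch across the concurrent queue's threads. */
    dispatch_apply(8, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
                   ^(size_t i) {
        out[i] = (double)i * 2.0;   /* each iteration writes only its own slot */
    });

    for (size_t i = 0; i < 8; i++)
        printf("%zu -> %.1f\n", i, results[i]);
    return 0;
}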
Anything like this in linux or BSD systems? I guess now that they have released it as Open Source and under the Apache License, they will find a way into the other systems. Any problems adopting them into the kernel for GPL or BSD licensed systems?
No problem for BSD systems, a BIG problem for Linux since it uses GPL which is compatible only with itself.
Not entirely: the Apache 2.0 license is generally considered to be unidirectionally compatible with GPLv3. This means you may take Apache2.0-licensed code, combine it with GPLv3-licensed code, and publish the result under the GPLv3. You may not publish the result under the Apache2 license, though.
http://www.apache.org/licenses/GPL-compatibility.html
http://www.fsf.org/licensing/licenses/#apache2
However, only the libdispatch part is Apache2-licensed: the kernel part (xnu) is under ‘APPLE PUBLIC SOURCE LICENSE’, and I haven’t checked whether that’s GPL-compatible. Moreover, the Linux kernel is GPLv2, not GPLv3, and whether GPLv2 and Apache2 licenses are compatible afaik is under dispute.
So libdispatch can be included in GPLv3 userland applications, and xnu can be included in the kernel if the 'APPLE PUBLIC SOURCE LICENSE' is GPLv2-compatible, which it might be.
The kernel component would be pretty specific to OS X anyway – it would have to be completely reimplemented to run on Linux.
The compiler parts would be under the same license as the compilers – GPL3 for GCC, and the LLVM license for clang (looks like a standard BSD-style license). No problems here.
The rest of it is userspace. There would be no licensing problems at all with the daemon, since it’s independent anyway. As for any library components that need to be linked to applications, the Apache license is compatible with GPL / LGPL (in one direction), so there wouldn’t be any licensing problems with integrating it with glibc, or wherever it needs to go.
I think Apple pretty much did the right thing here. It just remains to be seen how useful it is, and whether anyone puts it to good use.
That is the point I think he/she was trying to get at: when he/she talks about compatibility, as you've explained, it is a one-way transaction. Quite honestly it makes me laugh when GPL advocates talk about the evils of proprietary software, and yet their own licence has overtones of being proprietary, given that the transaction between BSD/Apache/X11/etc. and the GPL is one way.
You have a strange definition of proprietary.
The GPL has this one major goal: To make sure the user of the software can always view the source. It accomplishes this very well. Its goal is not to be the friendliest license for you to copy and paste from–that would be the BSD license. If all you want is to steal someone else’s code, BSD is for you! If you want to be sure no one can steal your code, GPL is a much better choice.
For all its faults I don’t know that I’ve heard many people talk about the evils of the BSD license.
Can we dispense with the anti-GPL hostility, please? I can be just as hostile toward the BSD license, but I generally don’t turn my flamethrower on for that. I can respect the position of someone choosing to give away his code in that fashion. Can’t you respect the position of someone who wishes to use the GPL?
You have a strange definition of stealing.
IIUC they didn’t release the kernel bits, just the userspace lib. In any case, GCD isn’t really that special and it should be a piece of cake to implement from scratch, even.
rtfa:
@ raboof: Apple's Public Source License isn't compatible with the GPL, but the xnu code won't be of any use in kernels which aren't related to xnu, or at least Mach, anyway. The concepts used will probably be more interesting than the code.
It looks like the language/compiler features are the hardest part of the project.
Get cracking then! 🙂
Ummm…Why would they want to?
For releasing it so quickly.
It seems that Apple cannot win with some people: if they release the GCD technology as open source, it's no big deal; if they don't, then all Apple supposedly does is piggyback off open source.
I would think that any contributions to the community that could possibly yield some benefit would be a welcomed thing.
I mean, it's not like .NET is open source, and even Mono is met with plenty of criticism, so what is the happy medium? I just wish that sometimes people could appreciate the contributions; no matter how small, they all form part of a larger open source ecosystem.
I agree. This is the internet though and opinions are cheap. It is next to impossible to even get agreement that the sky is blue.
Where I am it’s grey
The truth is though that as long as a company makes money off open source they will be derided by the community (not the whole community just the nutty fringe). Just ask Red Hat. They get it despite them contributing tons of code.
Not a terribly bad idea, but I wonder exactly how multiple 'threads' of execution will be handled (read: dependency branches). If it is done at compile time, then all is well – so long as you don't plan on writing a portable piece of software.
I guess it would not be too hard to tell a decent compiler to ignore all of the ^{} blocks…
–The loon
IIRC you can attach dependency information between the various GCD threads and the system figures all that out itself, making it a very nice little thing, but I've not read all the documentation myself yet.
[quote]Parallel programming is extremely difficult, and even the most talented of programmers will run into race conditions, deadlocks, and other difficulties, because different threads require access to the same data and memory space.[/quote]
Er, the most talented of programmers run into those in other people's code and fix those problems when tractable, reformulating the code when not. There are only a few exceptions where deadlocks are allowed to occur: the one that comes to mind most readily is databases, a very common piece of software that is prone to deadlocks by its very nature, in which case the database is (hopefully, correctly) coded in a way as to detect and break the deadlocks.
There are actually rather simple rules as to how you must design and code to avoid race conditions and deadlocks: maintain a consistent locking/unlocking order, and protect with critical sections all resources that are shared between threads. If you do a bit of proper diagramming ahead of time, and stop and think about what's required, you'll either fairly readily realize what needs to be done, or that it simply can't be done reliably. Timeout periods (not available in all threading primitives) are an admission of two things:
1. People will eventually make mistakes in their locking order, so this provides a slightly more graceful exit out of their errors than remaining locked, but only if they handle such error conditions properly: not doing so can introduce race conditions into the system.
2. Often the timeouts are for access to resources that can take an indeterminate period of time, sometimes never completing their intended task, so not having a timeout value on those would be a nightmare that would require killing threads and cleaning up the bodies.
#2 can’t be worked around nicely, at least not without asynchronous I/O for I/O-related items, but #1 can almost always be prevented.
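As a tiny illustration of the "consistent order" rule (plain pthreads, nothing GCD-specific): as long as every code path that needs both locks takes them in the same order, the two locks cannot deadlock against each other.

#include <pthread.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* Rule: whenever both locks are needed, take lock_a first, then lock_b.
   If some other code path took them in the reverse order, two threads
   could each hold one lock while waiting for the other: a deadlock. */
void update_both_resources(void)
{
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);

    /* ...touch the data guarded by both locks... */

    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
}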
Threading looks extremely simple on paper/design phase and when you are writing the code. The simplicity is over when you are trying to debug the code. Things like Python’s Queue.Queue and GCD help, but once you start needing mutexes you are screwed (or, well, will be eventually).
It is somewhat (but not much) harder to write the same code in asynchronous fashion, but it pays off in the end as you’ll get more deterministic code.
To quote Alan Cox, “A Computer is a state machine. Threads are for people who can’t program state machines.”.
I do speak from first-hand experience: it's true that debugging threaded code can be a nightmare, especially in the sorts of contexts I have experience with, such as industrial machinery that cannot tolerate holding a thread at some point of execution while the hardware itself is running. For that, you need facilities built into the source code, in the form of telemetry, to help figure out the sequence of events. Designing and implementing that isn't overly complicated, unless you go bonkers. Debugging it (and whatever mistakes you've made in general logic otherwise: you can have correct threading logic and other things will still cause debugging to be required) is where the real fun comes in, as those conditions arise, if you can't lock down a system indefinitely while debugging lest vital things time out.
State machines are definitely an easier way at times to do things, no question there, and that’s why industrial machinery tends to use programmable ladder logic, a formalized and well-understood state machine that works the same everywhere. If more software was written in the same way, it’d be easier to debug.
Well I couldn’t agree more, but only if we are talking about one core. If you want to access more cores you need threads.
GCD is more than just a queue. It exists so you don't have to use mutexes. You just decide what parts of your code can be executed in parallel with your main thread, and GCD deals with race conditions itself.
It's too bad there isn't an "LVM" for CPU cores though, as there is for hard drives. Just pool the CPUs together and let the daemon do its thing.
Well, I don't know what exactly LVM does, but from what I've read about it on Wikipedia this is exactly what GCD is trying to do. You just say "I want this stuff to run separately" and the daemon does its thing.
And I guess there is/will be some way to use GCD with OpenCL, allowing the daemon to pool not only the CPUs but the GPUs too.
No. GCD is nothing more than a thread pool that is managed by the OS rather than the language runtime. It’s interesting and should lead to better allocation of CPU cores for multithreaded programs, but it does not solve any of the hard problems which revolve around shared resources, and it certainly doesn’t remove the need for synchronization primitives like mutexes.
That is from Apple’s Concurrency Programming Guide. I am sure that GCD is using some locking mechanisms inside, but it abstracts them from the developer.
Now you can all shove it about what Apple contributes back. Between GCD and OpenCL, the world of computing is far better off.
So I can do whatever I want with applications developed by others and I’m automatically absolved of all wrongdoing, just as long as I release the code to completely unrelated applications that I’ve developed?
Awesome!
I think that blocks are a bad idea, as they overlap too much with C++ lambdas, which are much farther along in standardization. Also, the syntax for a pointer to a block uses the ^ symbol; in C++/CLI, which I think is already standardized, ^ is used as a managed pointer. So there are two conflicts with existing practice here. Another concern is that block is already a defined term in C.
I’m not opposed to closures in general, I just think they should have gone more along the route of C++ lambdas.
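For anyone who hasn't seen the syntax being complained about, here is a minimal sketch (compile with clang -fblocks): the caret serves both as the "pointer to block" declarator and as the marker that introduces a block literal.

#include <stdio.h>

int main(void)
{
    /* "square" is a pointer to a block taking an int and returning an int;
       the same ^ also introduces the block literal on the right-hand side. */
    int (^square)(int) = ^(int x) { return x * x; };

    printf("%d\n", square(7));   /* prints 49 */
    return 0;
}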
You’re a C++ developer I take it?
Well, C and C++ have been separate languages for a while. Since C99, C++ is no longer a superset of C, although people still think it is.
This is one of the headline features of their new OS, released only weeks ago. To open source this so quickly is commendable. Well done to Apple on this.
You can read about GCD here:
http://www.mikeash.com/?page=pyblog/friday-qa-2009-08-28-intro-to-g…
http://www.mikeash.com/?page=pyblog/friday-qa-2009-09-04-intro-to-g…
http://www.mikeash.com/?page=pyblog/friday-qa-2009-09-11-intro-to-g….
I’m an extreme beginner in multithreaded programming. In Python, no less.
So this GCD thing means I can just say “Run this in a different thread” and that’s it? I don’t have to create locks to control access to resources that are shared between threads? Can my separate thread work safely with any data that just a normal function or method would be able to use? What would I have to do to tell the parent thread that the worker thread has finished?
It’s confusing to follow whether GCD takes care of all the details for you, or if it’s just a daemon that keeps track of what cores are available and says “no more, please” when they’re all working to maximum capacity. If it’s the latter, then so freaking what, even I could write something like that.
Pretty much, with the twist that the OS optimizes the worker thread operation according to the number of cores and their workload.
Use callbacks (where the blocks step in).
This is trivial in Python (and other langs that support closures). Apple provided closures (blocks) for C and ObjC, and implemented OS level support for that.
Yes, it’s not rocket science, but that doesn’t mean providing such standard support is a bad idea 😉
No, it isn’t quite that simple. If you actually have shared state, you have to deal with it. The difference is that the methodology for dealing with shared state is always going to be uniform – you create a queue for it to gate access. Queues in essence replace locking constructs.
No. I would read this if you want to understand the implications of blocks as Apple has implemented them:
http://thirdcog.eu/pwcblocks/
You would use dispatch_async and give it a callback to call on completion of the queue.
It's definitely not just for tracking cores. The concept of queues is fundamental to it.
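One way to get that "tell me when the workers are done" callback is with a dispatch group; a minimal sketch, assuming clang -fblocks and libdispatch (cleanup is omitted because the process simply exits):

#include <dispatch/dispatch.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    dispatch_queue_t bg    = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group = dispatch_group_create();

    dispatch_group_async(group, bg, ^{ printf("worker 1 done\n"); });
    dispatch_group_async(group, bg, ^{ printf("worker 2 done\n"); });

    /* The completion callback: this block runs on the main queue only after
       every block submitted to the group has finished. */
    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        printf("all workers finished\n");
        exit(EXIT_SUCCESS);   /* group release omitted in this sketch */
    });

    dispatch_main();   /* parks the main thread so main-queue blocks can run */
}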
C++ programmers have Intel Threading Building Blocks and the Microsoft Parallel Patterns Library, with no need for language extensions.
Java programmers have java.util.concurrent.
.NET programmers have the ThreadPool, the Task Parallel Library, and Parallel LINQ.
Grand Central Dispatch is only Apple catching up, while adding proprietary extensions to the C language, which might take several years before any other vendor cares to implement, even with the source available.
So for me, this is a good thing for Mac only developers, for all the others a nice curiosity, nothing more.
I like how it was released open source – this shows that Apple can give as much as(?) it takes. I've done little multithreading, but I too can't see what makes this better than other threadpools. Nevertheless, GCD relies on blocks. This is the distinguishing difference from most implementations, but it's also what bugs me.
Blocks would've been fine if there were no __block qualifier. What I understood is that it relies on reference counting, which would mean an invisible implementation in a language at a much lower level. I don't want the C standard to accept this upon Apple's insistence. This is different from '&' in C++ because those are "sugary pointers" at a lower level. It is for that reason that C++0x lambdas have such an ugly (albeit expressive) syntax. If standard C includes blocks, they would not only clash with the upcoming C++ lambdas but also bring in more complexity.
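To make that concrete, here is roughly what the qualifier changes (a sketch; compile with clang -fblocks). An ordinary local is const-copied into the block when the block is created, while a __block variable is shared, writable storage that may be moved to the heap behind the scenes, which is where the hidden reference counting comes in.

#include <stdio.h>

int main(void)
{
    int         captured = 1;   /* const-copied into the block at creation time */
    __block int shared   = 1;   /* shared storage the block can read and write  */

    void (^show)(void) = ^{
        printf("captured=%d shared=%d\n", captured, shared);
    };

    captured = 99;   /* the block still sees its old copy (1) */
    shared   = 99;   /* the block sees the new value (99)     */
    show();          /* prints "captured=1 shared=99"         */
    return 0;
}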