“Parallel computing is the use of multiple processors in order to solve a specific task. For quite a few decades now parallelism has been used in the domain of High Performance Computing (HPC) where large, difficult problems are split up into pieces which are solved and then recombined to form the answer. With the emergence of multiple cores per processor this has become more and more important for the everyday user and programmer. In this article I will explain some of the elementary concepts of parallel computing and point the reader to further points of information.”
This article doesn’t mention functional programming, which makes writing parallel computations much easier. The problem with imperative languages is that it is extremely easy to modify the program’s state, and most of the difficulty in parallel computation lies in synchronizing those state changes. Functional programming reduces the problem by making it much harder, or impossible, to alter state.
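To make that concrete, here is a minimal sketch in GHC Haskell (my choice of language, not the article’s), assuming the parallel package is installed. Because the two computations are pure and share no mutable state, they can be evaluated in parallel without any locks or synchronization:

    import Control.Parallel (par, pseq)

    -- Two pure, independent computations; nothing here mutates
    -- shared state, so nothing needs to be locked.
    sumSquares :: Int -> Int
    sumSquares n = sum [i * i | i <- [1 .. n]]

    sumCubes :: Int -> Int
    sumCubes n = sum [i * i * i | i <- [1 .. n]]

    main :: IO ()
    main =
      let a = sumSquares 1000000
          b = sumCubes 1000000
      -- 'par' sparks 'a' for parallel evaluation; 'pseq' makes the
      -- main thread work on 'b' first instead of demanding 'a'.
      in a `par` (b `pseq` print (a + b))

(Compile with ghc -O2 -threaded and run with +RTS -N to actually get parallel execution.)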
NESL, which was mentioned, is a functional parallel language. Although functional languages are very useful in certain domains, I am honestly not convinced that they are the answer for parallel computing.
Because a functional language abstracts away from the actual machine, much of the responsibility for the important physical aspects of parallelism (granularity, scheduling, data placement) is pushed onto the compiler. The compiler’s inference, its “best guess”, is unlikely to be ideal, and it is not transparent to the programmer. That will be sufficient for some programmers, but not for all.
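For programmers who want that control back, some functional languages offer explicit annotations instead of pure inference. Again a sketch in GHC Haskell (my example, not something the article covers): with Control.Parallel.Strategies the programmer states exactly what to evaluate in parallel, so nothing is left to compiler guesswork:

    import Control.Parallel.Strategies (parMap, rdeepseq)

    -- Some arbitrary expensive pure function.
    expensive :: Int -> Int
    expensive n = sum [1 .. n]

    main :: IO ()
    main =
      -- 'parMap rdeepseq' is an explicit annotation: fully evaluate
      -- each element of the list in parallel. Where the parallelism
      -- goes is the programmer's decision, not the compiler's.
      print (sum (parMap rdeepseq expensive [1000000, 2000000, 3000000]))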