What is the difference between concurrent programming and parallel programming? I asked Google, but didn't find anything that helped me understand the difference. Could you give me an example?

I have now found this explanation: http://www.linux-mag.com/id/7411 - but "concurrency is a property of the program" vs. "parallel execution is a property of the machine" isn't enough for me - I still can't tell which is which.


Current answer

Parallel programming happens when code is being executed at the same time and each execution is independent of the others. There is therefore usually no concern about shared variables and the like, because that is unlikely to be an issue.

Concurrent programming, however, consists of code being executed by different processes/threads that share variables and the like. In concurrent programming we therefore have to establish some kind of rule about which process/thread executes first, so that we can be sure the result will be consistent and we can know with certainty what will happen. If there is no control, and all threads compute at the same time and store things in the same variables, how would we know what to expect in the end? Maybe one thread is faster than another; maybe one of the threads even stops in the middle of its execution and another continues a different computation with a corrupted (not yet fully computed) variable. The possibilities are endless. It is in these situations that we usually use concurrent programming instead of parallel.
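
To make the shared-variable concern concrete, here is a minimal sketch in Go (not part of the original answer): two threads of execution increment a single counter, and the lock is the "rule" that makes the final result predictable. Removing the lock turns it into a data race.

```go
// Two goroutines increment the same counter. With the mutex the result is
// always 2_000_000; without it the outcome is non-deterministic.
package main

import (
	"fmt"
	"sync"
)

func main() {
	var counter int
	var mu sync.Mutex
	var wg sync.WaitGroup

	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1_000_000; j++ {
				mu.Lock() // remove the lock and the result becomes unpredictable
				counter++
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	fmt.Println("counter =", counter)
}
```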

Other answers

I think concurrent programming refers to multithreaded programming; it is about letting your program run multiple threads, abstracted away from the hardware details.

Parallel programming refers to specifically designing your program's algorithms to take advantage of available parallel execution. For example, you could execute two branches of some algorithm in parallel, in the expectation that (on average) it will reach the result sooner than checking the first branch and then the second.

Concurrent programming regards operations that appear to overlap and is primarily concerned with the complexity that arises due to non-deterministic control flow. The quantitative costs associated with concurrent programs are typically both throughput and latency. Concurrent programs are often IO bound but not always, e.g. concurrent garbage collectors are entirely on-CPU.

The pedagogical example of a concurrent program is a web crawler. This program initiates requests for web pages and accepts the responses concurrently as the results of the downloads become available, accumulating a set of pages that have already been visited. Control flow is non-deterministic because the responses are not necessarily received in the same order each time the program is run. This characteristic can make it very hard to debug concurrent programs.

Some applications are fundamentally concurrent, e.g. web servers must handle client connections concurrently. Erlang, F# asynchronous workflows and Scala's Akka library are perhaps the most promising approaches to highly concurrent programming.
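
As a rough illustration of the crawler example, here is a toy Go sketch (the URLs are placeholders, and this is only an illustrative sketch rather than a real crawler): requests are issued concurrently and the responses are accepted in whatever order they happen to arrive, which is exactly the non-deterministic control flow described above.

```go
// Fetch several pages concurrently and record them in a "visited" set as the
// responses come in; the arrival order differs from run to run.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	urls := []string{
		"https://example.com/",
		"https://example.org/",
		"https://example.net/",
	}

	type result struct {
		url    string
		status string
		err    error
	}
	results := make(chan result)

	for _, u := range urls {
		go func(u string) {
			resp, err := http.Get(u)
			if err != nil {
				results <- result{url: u, err: err}
				return
			}
			resp.Body.Close()
			results <- result{url: u, status: resp.Status}
		}(u)
	}

	visited := make(map[string]bool) // the set of pages already visited
	for range urls {
		r := <-results // responses are accepted as they become available
		visited[r.url] = true
		fmt.Println(r.url, r.status, r.err)
	}
}
```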

Multicore programming is a special case of parallel programming. Parallel programming concerns operations that are overlapped for the specific goal of improving throughput. The difficulties of concurrent programming are evaded by making control flow deterministic. Typically, programs spawn sets of child tasks that run in parallel and the parent task only continues once every subtask has finished. This makes parallel programs much easier to debug than concurrent programs.

The hard part of parallel programming is performance optimization with respect to issues such as granularity and communication. The latter is still an issue in the context of multicores because there is a considerable cost associated with transferring data from one cache to another. Dense matrix-matrix multiply is a pedagogical example of parallel programming, and it can be solved efficiently by using Strassen's divide-and-conquer algorithm and attacking the sub-problems in parallel. Cilk is perhaps the most promising approach for high-performance parallel programming on multicores, and it has been adopted in both Intel's Threading Building Blocks and Microsoft's Task Parallel Library (in .NET 4).
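
A short Go sketch of the fork/join pattern described above (a parallel sum rather than a matrix multiply, to keep it small): the parent splits the work, the subtasks run in parallel, and the parent only continues once both subtasks have finished, so the result is deterministic on every run.

```go
// Divide-and-conquer sum: the two halves are computed in parallel and the
// parent waits for both before combining their results.
package main

import (
	"fmt"
	"sync"
)

func sum(xs []int) int {
	if len(xs) < 1000 { // small problems are done sequentially
		total := 0
		for _, x := range xs {
			total += x
		}
		return total
	}
	mid := len(xs) / 2
	var left, right int
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); left = sum(xs[:mid]) }()
	go func() { defer wg.Done(); right = sum(xs[mid:]) }()
	wg.Wait() // the parent task continues only once every subtask has finished
	return left + right
}

func main() {
	xs := make([]int, 100000)
	for i := range xs {
		xs[i] = i
	}
	fmt.Println(sum(xs)) // always 4999950000
}
```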

Different people talk about many different kinds of concurrency and parallelism in many different specific contexts, so some abstractions are needed to cover their common nature.

The basic abstraction is done in computer science, where both concurrency and parallelism are attributed to properties of programs. Here, programs are formalized descriptions of computation. Such programs need not be in any particular language or encoding, which is implementation-specific. The existence of an API/ABI/ISA/OS is irrelevant at this level of abstraction. One will surely need more detailed implementation-specific knowledge (like the threading model) to do concrete programming work, but the spirit behind the basic abstraction is unchanged.

The second important fact is that, as general properties, concurrency and parallelism can coexist in many different abstractions.

For the general distinction, see the relevant answer on the basic view of concurrency vs. parallelism. (It also contains links to some other sources.)

Concurrent programming and parallel programming are techniques for implementing these general properties with systems that expose programmability. The systems are usually programming languages and their implementations.

A programming language may expose the intended properties by built-in semantic rules. In most cases, such rules specify the evaluation of specific language constructs (e.g. expressions), making the computation involved effectively concurrent or parallel. (More specifically, the computational effects implied by the evaluations can perfectly reflect these properties.) However, concurrent/parallel language semantics are essentially complex, and they are not necessary for practical work (implementing efficient concurrent/parallel algorithms as solutions to realistic problems). So most traditional languages take a more conservative and simpler approach: they assume the semantics of evaluation are totally sequential and serial, and then provide optional primitives that allow some of the computations to be concurrent or parallel. These primitives can be keywords or procedural constructs ("functions") supported by the language. They are implemented based on interaction with the hosting environment (the OS, or a "bare metal" hardware interface), which is usually opaque to the language (i.e. it cannot be derived portably using the language itself). Thus, in this particular kind of high-level abstraction seen by programmers, nothing is concurrent/parallel besides these "magic" primitives and the programs relying on them; programmers can then enjoy a less error-prone programming experience when concurrency/parallelism properties are not of interest.
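
As a small illustration of the "optional primitive" idea, here is a hedged Go sketch (any mainstream language with a spawn/async primitive would illustrate the same point): evaluation is sequential by default, and a single built-in keyword opts one computation into concurrency while everything else stays serial.

```go
// Sequential-by-default evaluation with one opt-in concurrency primitive.
package main

import (
	"fmt"
	"time"
)

func main() {
	fmt.Println("step 1 (sequential)")
	go fmt.Println("step 2 (made concurrent by the primitive)")
	fmt.Println("step 3 (sequential)")
	time.Sleep(100 * time.Millisecond) // crude wait so the concurrent print can run
}
```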

Although the primitives abstract the complexity away in the most high-level abstractions, the implementations still have extra complexity not exposed by the language features. So some mid-level abstractions are needed. One typical example is threading. Threading allows one or more threads of execution (or simply threads; sometimes also called processes, which is not necessarily the same concept as a task scheduled by an OS) supported by the language implementation (the runtime). Threads are usually preemptively scheduled by the runtime, so a thread needs to know nothing about other threads. Thus, threads are a natural way to implement parallelism as long as they share nothing (no critical resources): just decompose the computation into different threads, and once the underlying implementation allows the computation resources to overlap during execution, it works. Threads are also subject to concurrent access of shared resources: accessing the resources in any order that meets the minimal constraints required by the algorithm works, and the implementation will eventually determine when the accesses happen. In such cases, some synchronization operations may be necessary. Some languages treat threading and synchronization operations as part of the high-level abstraction and expose them as primitives, while other languages encourage only relatively higher-level primitives (like futures/promises) instead.
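
To illustrate the "higher-level primitive" point, here is a small Go sketch that fakes a future/promise with a one-shot channel (the `future` helper is made up for this example): the computing thread hands its result back through the channel instead of sharing a variable that needs explicit locking.

```go
// A future/promise built from a buffered channel: the caller blocks only
// when it actually needs the result.
package main

import "fmt"

// future starts the computation in its own thread of execution and returns
// a channel from which the single result can later be read.
func future(compute func() int) <-chan int {
	ch := make(chan int, 1)
	go func() { ch <- compute() }()
	return ch
}

func main() {
	f := future(func() int {
		total := 0
		for i := 1; i <= 100; i++ {
			total += i
		}
		return total
	})

	// ... the caller can do other work here ...

	fmt.Println(<-f) // prints 5050
}
```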

Below the level of language-specific threads, there is the multitasking of the underlying hosting environment (typically, an OS). OS-level preemptive multitasking is used to implement (preemptive) multithreading. In some environments like Windows NT, the basic scheduling units (the tasks) are also "threads". To differentiate them from the userspace implementation of threads mentioned above, they are called kernel threads, where "kernel" means the kernel of the OS (however, strictly speaking, this is not quite true for Windows NT; the "real" kernel is the NT executive). Kernel threads are not always mapped 1:1 to userspace threads, although 1:1 mapping often removes most of the mapping overhead. Since kernel threads are heavyweight to create/destroy/communicate with (this involves system calls), non-1:1 "green" threads exist in userspace to overcome this, at the cost of the mapping overhead. The choice of mapping depends on the programming paradigm expected in the high-level abstraction. For example, when a huge number of userspace threads are expected to execute concurrently (as in Erlang), 1:1 mapping is never feasible.
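
A tiny Go sketch of the non-1:1 (green thread) mapping, under the assumption that goroutines count as userspace threads multiplexed by the runtime: a large number of lightweight threads run on a small, fixed number of OS-level workers.

```go
// Many lightweight userspace threads multiplexed onto few OS-level workers.
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	runtime.GOMAXPROCS(2) // at most 2 OS threads execute these goroutines at once

	var wg sync.WaitGroup
	for i := 0; i < 100000; i++ { // far more userspace threads than OS threads
		wg.Add(1)
		go func() {
			defer wg.Done()
		}()
	}
	wg.Wait()
	fmt.Println("ran 100000 goroutines on", runtime.GOMAXPROCS(0), "OS-level workers")
}
```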

Underlying the OS multitasking is ISA-level multitasking provided by the logical cores of the processor. This is usually the lowest-level public interface for programmers. Beneath this level there may exist SMT. This is a form of lower-level multithreading implemented by the hardware, but arguably still somewhat programmable - though it is usually only accessible to the processor manufacturer. Note that the hardware design apparently reflects parallelism, but there is also a concurrent scheduling mechanism to make efficient use of the internal hardware resources.

At every level of "threading" mentioned above, both concurrency and parallelism are involved. Although the programming interfaces vary widely, all of them obey the properties revealed by the basic abstraction at the very beginning.


Just sharing an example that helps highlight the distinction:

Parallel programming: Say you want to implement the merge-sort algorithm. Each time you divide the problem into two sub-problems, you can have two threads solve them. However, in order to do the merge step you have to wait for those two threads to finish, because merging requires both sub-solutions. This "mandatory wait" makes it a parallel program; a sketch follows below.
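
A possible Go sketch of this (simplified; real code would fall back to a sequential sort for small inputs): the two halves are sorted in parallel and the merge step performs the "mandatory wait".

```go
// Parallel merge sort: the halves are sorted in parallel and the merge must
// wait for both sub-solutions.
package main

import (
	"fmt"
	"sync"
)

func mergeSort(xs []int) []int {
	if len(xs) <= 1 {
		return xs
	}
	mid := len(xs) / 2
	var left, right []int
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); left = mergeSort(xs[:mid]) }()
	go func() { defer wg.Done(); right = mergeSort(xs[mid:]) }()
	wg.Wait() // the "mandatory wait": merging needs both sub-solutions
	return merge(left, right)
}

func merge(a, b []int) []int {
	out := make([]int, 0, len(a)+len(b))
	for len(a) > 0 && len(b) > 0 {
		if a[0] <= b[0] {
			out = append(out, a[0])
			a = a[1:]
		} else {
			out = append(out, b[0])
			b = b[1:]
		}
	}
	return append(append(out, a...), b...)
}

func main() {
	fmt.Println(mergeSort([]int{5, 2, 9, 1, 7, 3})) // [1 2 3 5 7 9]
}
```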

Concurrent program: Say you want to compress n text files and generate a compressed file for each of them. You can have 2 (up to n) threads, each handling the compression of a subset of the files. When each thread is done, it's simply done; it doesn't have to wait for anything or do anything else. So, because different tasks are performed in an interleaved manner in "any arbitrary order", the program is concurrent but not parallel; see the sketch after this example.
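
And a possible Go sketch of the compression example (the file names are placeholders): each worker pulls files from a shared queue and gzips them independently, never waiting on another worker's result.

```go
// Concurrent compression: workers take files from a queue and compress them
// independently of one another.
package main

import (
	"compress/gzip"
	"io"
	"log"
	"os"
	"sync"
)

func compress(path string) error {
	in, err := os.Open(path)
	if err != nil {
		return err
	}
	defer in.Close()

	out, err := os.Create(path + ".gz")
	if err != nil {
		return err
	}
	defer out.Close()

	zw := gzip.NewWriter(out)
	defer zw.Close()
	_, err = io.Copy(zw, in)
	return err
}

func main() {
	files := []string{"a.txt", "b.txt", "c.txt", "d.txt"} // placeholder names
	jobs := make(chan string)

	var wg sync.WaitGroup
	for w := 0; w < 2; w++ { // 2 workers; anything up to len(files) would work
		wg.Add(1)
		go func() {
			defer wg.Done()
			for f := range jobs {
				if err := compress(f); err != nil {
					log.Println(f, err)
				}
			}
		}()
	}

	for _, f := range files {
		jobs <- f
	}
	close(jobs)
	wg.Wait()
}
```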

As others have mentioned, every parallel program is concurrent (in fact it has to be), but not the other way around.