What is the difference between concurrent programming and parallel programming? I asked Google, but found nothing that helped me understand the distinction. Could you give me an example?

For now I have found this explanation: http://www.linux-mag.com/id/7411 - but "concurrency is a property of the program" vs. "parallel execution is a property of the machine" isn't enough for me - I still can't tell which is which.


Current answer

Different people talk about different kinds of concurrency and parallelism in many different specific contexts, so some abstraction is needed to cover their common nature.

The basic abstraction is done in computer science, where both concurrency and parallelism are attributed to properties of programs. Here, programs are formalized descriptions of computing. Such programs need not be in any particular language or encoding, which is implementation-specific. The existence of an API/ABI/ISA/OS is irrelevant at this level of abstraction. Certainly one needs more detailed implementation-specific knowledge (like the threading model) to do concrete programming work, but the spirit behind the basic abstraction does not change.

The second important fact is that, as general properties, concurrency and parallelism can coexist in many different abstractions.

For the general distinction, see the related answer on the basic view of concurrency vs. parallelism. (There are also some links there pointing to additional sources.)

Concurrent programming and parallel programming are techniques to implement these general properties with systems that expose programmability. Such systems are usually programming languages and their implementations.

A programming language may expose the intended properties through built-in semantic rules. In most cases, such rules specify the evaluation of specific language structures (e.g. expressions), making the computation involved effectively concurrent or parallel. (More specifically, the computational effects implied by the evaluations can perfectly reflect these properties.) However, concurrent/parallel language semantics are essentially complex, and they are not necessary for practical work (implementing efficient concurrent/parallel algorithms as solutions to realistic problems). So most traditional languages take a more conservative and simpler approach: assume the semantics of evaluation are totally sequential and serial, then provide optional primitives to allow some of the computations to be concurrent or parallel. These primitives can be keywords or procedural constructs ("functions") supported by the language. They are implemented based on interaction with the hosted environment (the OS, or a "bare metal" hardware interface) and are usually opaque to the language (not derivable portably within the language). Thus, in this particular kind of high-level abstraction seen by the programmer, nothing is concurrent/parallel besides these "magic" primitives and the programs relying on them; the programmer can then enjoy a less error-prone programming experience when the concurrency/parallelism properties are not of interest.
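To make this concrete - a minimal sketch, not part of the original answer - Go is one language that takes exactly this approach: evaluation is sequential by default, and the go keyword is the optional "magic" primitive that makes a computation concurrent:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Ordinary evaluation is totally sequential and serial by default.
        fmt.Println("sequential part")

        // The "go" keyword is the concurrency primitive: it asks the
        // runtime to evaluate this call concurrently with the rest of main.
        go func() {
            fmt.Println("concurrent part")
        }()

        // Crude synchronization, just for the sketch; real code would use
        // channels or sync.WaitGroup instead of sleeping.
        time.Sleep(100 * time.Millisecond)
    }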

Although primitives abstract the complexity away in the most high-level abstractions, the implementations still have extra complexity not exposed by the language features. So some mid-level abstractions are needed. One typical example is threading. Threading allows one or more threads of execution (or simply threads; sometimes also called processes, which is not necessarily the same concept as a task scheduled by an OS) supported by the language implementation (the runtime). Threads are usually preemptively scheduled by the runtime, so a thread needs to know nothing about other threads. Thus, threads are a natural way to implement parallelism as long as they share nothing (the critical resources): just decompose the computations into different threads, and once the underlying implementation allows the computation resources to overlap during execution, it works. Threads are also subject to concurrent access to shared resources: just access the resources in any order that meets the minimal constraints required by the algorithm, and the implementation will eventually determine when to access them. In such cases, some synchronization operations may be necessary. Some languages treat threading and synchronization operations as part of the high-level abstraction and expose them as primitives, while other languages encourage only relatively higher-level primitives (like futures/promises) instead.
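A minimal sketch of this mid-level abstraction, again using Go as an assumed example: goroutines stand in for threads, a sync.Mutex is the synchronization operation guarding a shared resource, and a channel plays the role of a future/promise-style primitive:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var mu sync.Mutex // guards the shared resource
        counter := 0      // the shared (critical) resource

        var wg sync.WaitGroup
        for i := 0; i < 4; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                mu.Lock() // synchronization: serialize access to counter
                counter++
                mu.Unlock()
            }()
        }
        wg.Wait()
        fmt.Println("counter:", counter) // always 4, thanks to the mutex

        // A channel used as a future/promise: the result becomes available
        // to the consumer once the producer has computed it.
        future := make(chan int, 1)
        go func() { future <- 6 * 7 }()
        fmt.Println("future resolved to:", <-future)
    }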

Below the level of language-specific threads comes the multitasking of the underlying hosting environment (typically, an OS). OS-level preemptive multitasking is used to implement (preemptive) multithreading. In some environments like Windows NT, the basic scheduling units (the tasks) are also called "threads". To differentiate them from the userspace implementation of threads mentioned above, they are called kernel threads, where "kernel" means the kernel of the OS (strictly speaking, this is not quite true for Windows NT; the "real" kernel is the NT executive). Kernel threads are not always mapped 1:1 to userspace threads, although 1:1 mapping often eliminates most of the mapping overhead. Since kernel threads are heavyweight to create/destroy/communicate with (involving system calls), there are non-1:1 green threads in userspace to overcome this, at the cost of the mapping overhead. The choice of mapping depends on the programming paradigm expected in the high-level abstraction. For example, when a huge number of userspace threads are expected to be executed concurrently (as in Erlang), 1:1 mapping is never feasible.
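Go's goroutines are one well-known instance of such non-1:1 "green" threads: the runtime multiplexes a large number of them onto a small pool of kernel threads. The sketch below (illustrative only) spawns a number of goroutines that would be prohibitively heavyweight as 1:1 kernel threads:

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func main() {
        // The number of kernel-thread-backed execution contexts the
        // runtime uses; goroutines are multiplexed (M:N) on top of these.
        fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))

        var wg sync.WaitGroup
        // 100,000 kernel threads would be prohibitively expensive;
        // 100,000 goroutines are routine, because creating one does not
        // require a system call.
        for i := 0; i < 100000; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
            }()
        }
        wg.Wait()
        fmt.Println("all goroutines finished")
    }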

Underlying OS multitasking is ISA-level multitasking provided by the logical cores of the processor. This is usually the lowest-level public interface for programmers. Beneath this level there may exist SMT. This is a form of lower-level multithreading implemented by the hardware, but arguably still somewhat programmable - though it is usually accessible only to the processor manufacturer. Note that the hardware design apparently reflects parallelism, but there is also a concurrent scheduling mechanism to make the internal hardware resources be used efficiently.
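About all a portable program can observe of this level is the count of logical cores the OS exposes; on SMT (e.g. hyper-threaded) hardware that count typically exceeds the number of physical cores. A trivial sketch in Go:

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        // NumCPU reports the number of logical cores the OS exposes.
        // On an SMT processor this is typically a multiple (often 2x)
        // of the number of physical cores.
        fmt.Println("logical cores:", runtime.NumCPU())
    }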

At every level of "threading" mentioned above, both concurrency and parallelism are involved. Although the programming interfaces vary widely, all of them obey the properties revealed by the basic abstraction at the very beginning.

Other answers

Interpreting the original question as parallel/concurrent computation instead of programming:

In concurrent computing, two computations both advance independently of each other. The second computation doesn't have to wait until the first has finished in order to make progress. It doesn't say, however, by which mechanism this is achieved. In a single-core setup, suspending and alternating between threads is required (also called preemptive multithreading).

In parallel computing, two computations both advance simultaneously - that is, literally at the same time. This is not possible with a single CPU; it requires a multi-core setup instead.
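A rough sketch of the distinction in Go (the exact interleaving is up to the scheduler, so treat the output as illustrative): with GOMAXPROCS set to 1, the two computations are concurrent but take turns on one core; set it to runtime.NumCPU() and they can run literally at the same time:

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func busyWork(name string, wg *sync.WaitGroup) {
        defer wg.Done()
        sum := 0
        for i := 0; i < 100000000; i++ {
            sum += i
        }
        fmt.Println(name, "done, sum =", sum)
    }

    func main() {
        // With 1, the two computations are concurrent but NOT parallel:
        // they alternate on a single core (preemptive multitasking).
        // With runtime.NumCPU(), they can execute simultaneously.
        runtime.GOMAXPROCS(1)

        var wg sync.WaitGroup
        wg.Add(2)
        go busyWork("first", &wg)
        go busyWork("second", &wg)
        wg.Wait()
    }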

(Image from the article "Parallel vs Concurrent in Node.js".)

Although there is no complete agreement on the distinction between the terms parallel and concurrent, many authors make the following distinction:

In concurrent computing, a program is one in which multiple tasks can be in progress at any instant. In parallel computing, a program is one in which multiple tasks cooperate closely to solve a problem.

So parallel programs are concurrent, but a program such as a multitasking operating system is also concurrent, even when it runs on a machine with only one core, since multiple tasks can be in progress at any instant.

Source: An Introduction to Parallel Programming, by Peter Pacheco

From the processor's point of view, it can be described by this picture.

If your program is using threads (concurrent programming), it is not necessarily going to be executed as such (parallel execution), since it depends on whether the machine can handle several threads.

Here's a visual example. Threads on a non-threaded machine:

        --  --  --
     /              \
>---- --  --  --  -- ---->>

Threads on a threaded machine:

     ------
    /      \
>-------------->>

The dashes represent executed code. As you can see, they both split up and execute separately, but the threaded machine can execute several separate pieces at once.

Classic scheduling of tasks can be serial, parallel, or concurrent.

Serial: tasks must be executed one after the other in a known, strict order, or it will not work. Easy enough.

Parallel: tasks must be executed at the same time, or it will not work. Any failure of any of the tasks - functionally or in time - will result in total system failure. All tasks must have a common reliable sense of time. Try to avoid this, or we will have tears by tea time.

Concurrent: we do not care. We are not careless, though: we have analysed it and it doesn't matter; we can therefore execute any task using any available facility at any time. Happy days.
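A sketch of the serial and concurrent cases in Go (the parallel case, where tasks must run at the same instant, needs timing guarantees that no portable sketch can give):

    package main

    import (
        "fmt"
        "sync"
    )

    func task(id int) { fmt.Println("task", id, "ran") }

    func main() {
        // Serial: a known, strict order, one task after another.
        for i := 1; i <= 3; i++ {
            task(i)
        }

        // Concurrent: we do not care about the order; any available
        // facility may run any task at any time.
        var wg sync.WaitGroup
        for i := 1; i <= 3; i++ {
            wg.Add(1)
            go func(id int) {
                defer wg.Done()
                task(id)
            }(i)
        }
        wg.Wait()
    }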

Typically, the available scheduling changes at known events, which we call a state change.

People often think this is about software, but it is in fact a systems design concept that predates computers; software systems were a little slow on the uptake, and very few software languages even attempt to address the problem. You might try looking up the transputer language occam if you are interested.

Succinctly, systems design addresses the following:

The verb - what you are doing (operation or algorithm)
The noun - what it is done to (data or interface)
When - initiation, schedule, state changes
How - serial, parallel, or concurrent
Where - once you know when things happen, you can say where they can happen, and not before
Why - is this the right way to do it? Are there other ways, and more importantly, a better way? What happens if you don't do it?

Good luck.