What is the difference between concurrent programming and parallel programming? I asked Google, but didn't find anything that helped me understand the difference. Can you give me an example?

I have now found this explanation: http://www.linux-mag.com/id/7411 - but "concurrency is a property of the program" vs. "parallel execution is a property of the machine" isn't enough for me - I still can't tell which is which.


Current answer

I'll try to explain it in my own way; it may not be in computer-science terms, but it should give you the general idea.

Let's take household chores as an example: washing the dishes, taking out the trash, mowing the lawn, etc., and we have three people (threads) A, B, and C to do them.

Concurrent: the three people start on different tasks independently, e.g.,

A --> cleaning dishes
B --> taking out trash 
C --> mowing the lawn 

Here the order of the tasks is not deterministic, and when each one finishes depends on its workload.

Parallel: here, if we want to increase throughput, we can assign multiple people to a single task. For example, for cleaning the dishes we assign two people: A soaps the dishes and B washes them, which may increase throughput.

Cleaning the dishes:

A --> soaping the dishes
B --> washing the dishes

And so on.
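As a rough sketch of this analogy in code (a minimal Go example; the task names and the use of goroutines as the "people" are just for illustration):

package main

import (
    "fmt"
    "sync"
)

// chore simulates one person (goroutine) doing one household task.
func chore(person, task string, wg *sync.WaitGroup) {
    defer wg.Done()
    fmt.Printf("%s --> %s\n", person, task)
}

func main() {
    var wg sync.WaitGroup

    // Concurrency: A, B and C each start a different task independently;
    // the order in which they finish is not determined.
    wg.Add(3)
    go chore("A", "cleaning dishes", &wg)
    go chore("B", "taking out trash", &wg)
    go chore("C", "mowing the lawn", &wg)
    wg.Wait()

    // Parallelism: two people work on the single "dishes" task at the
    // same time, which may improve throughput.
    wg.Add(2)
    go chore("A", "soaping the dishes", &wg)
    go chore("B", "washing the dishes", &wg)
    wg.Wait()
}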

Hope this gives you the idea! Now on to the technical terms explained in the other answers ;)

Other answers

They are two phrases that describe the same thing from (very slightly) different viewpoints. Parallel programming describes the situation from the viewpoint of the hardware: there are at least two processors (possibly within a single physical package) working on a problem in parallel. Concurrent programming describes things more from the viewpoint of the software: two or more actions can happen at the same time (concurrently).

The problem here is that people try to use the two phrases to draw a clean, clear distinction when neither really exists. The reality is that the dividing line they are trying to draw has been blurry for decades, and has only been getting blurrier over time.

What they're trying to discuss is the fact that once upon a time, most computers had only a single CPU. When you executed multiple processes (or threads) on that single CPU, the CPU was only really executing one instruction from one of those threads at a time. The appearance of concurrency was an illusion--the CPU switching between executing instructions from different threads quickly enough that to human perception (to which anything less than 100 ms or so looks instantaneous) it looked like it was doing many things at once.

Contrast that with a machine that has multiple CPUs, or a CPU with multiple cores, so the machine is executing instructions from multiple threads and/or processes at exactly the same time; code executing one of them can't/doesn't have any effect on code executing the other.

Now the problem: such a clean distinction has almost never existed. Computer designers are actually fairly intelligent, so they noticed a long time ago that (for example) when you needed to read some data from an I/O device such as a disk, it took a long time (in terms of CPU cycles) to finish. Instead of leaving the CPU idle while that happened, they figured out various ways of letting one process/thread make an I/O request, and let code from some other process/thread execute on the CPU while the I/O request completed.

So long before multi-core CPUs became the norm, we had operations from multiple threads happening in parallel.
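A hedged Go sketch of that idea (time.Sleep stands in for a blocking I/O request, and GOMAXPROCS(1) mimics the single-CPU machines described above; both are assumptions for the demo):

package main

import (
    "fmt"
    "runtime"
    "time"
)

func main() {
    // Pretend we only have one CPU, like the older machines above.
    runtime.GOMAXPROCS(1)

    done := make(chan struct{})

    // "I/O" goroutine: the Sleep stands in for a slow disk or network
    // request during which the CPU would otherwise sit idle.
    go func() {
        time.Sleep(100 * time.Millisecond)
        close(done)
    }()

    // Meanwhile the CPU keeps executing other work.
    count := 0
    for {
        select {
        case <-done:
            fmt.Println("I/O finished; useful work done while waiting:", count)
            return
        default:
            count++ // stand-in for useful computation
        }
    }
}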

That's only the tip of the iceberg though. Decades ago, computers started providing another level of parallelism as well. Again, being fairly intelligent people, computer designers noticed that in a lot of cases, they had instructions that didn't affect each other, so it was possible to execute more than one instruction from the same stream at the same time. One early example that became pretty well known was the Control Data 6600. This was (by a fairly wide margin) the fastest computer on earth when it was introduced in 1964--and much of the same basic architecture remains in use today. It tracked the resources used by each instruction, and had a set of execution units that executed instructions as soon as the resources on which they depended became available, very similar to the design of most recent Intel/AMD processors.

But (as the commercials used to say) wait--that's not all. There's yet another design element to add still further confusion. It's been given quite a few different names (e.g., "Hyperthreading", "SMT", "CMP"), but they all refer to the same basic idea: a CPU that can execute multiple threads simultaneously, using a combination of some resources that are independent for each thread, and some resources that are shared between the threads. In a typical case this is combined with the instruction-level parallelism outlined above. To do that, we have two (or more) sets of architectural registers. Then we have a set of execution units that can execute instructions as soon as the necessary resources become available. These often combine well because the instructions from the separate streams virtually never depend on the same resources.

Then, of course, we get to modern systems with multiple cores. Here things ought to be obvious, right? We have N (somewhere between 2 and 256 or so, at the moment) separate cores that can all execute instructions at the same time, so we have a clear-cut case of real parallelism: executing instructions in one process/thread doesn't affect executing instructions in another.

Well, sort of. Even here we have some independent resources (registers, execution units, at least one level of cache) and some shared resources (typically at least the lowest level of cache, and certainly the memory controllers and bandwidth to memory).

To summarize: the simple scenarios people like to contrast between shared resources and independent resources virtually never happen in real life. With all resources shared, we end up with something like MS-DOS, where we can only run one program at a time, and we have to stop running one before we can run the other at all. With completely independent resources, we have N computers running MS-DOS (without even a network to connect them) with no ability to share anything between them at all (because if we can even share a file, well, that's a shared resource, a violation of the basic premise of nothing being shared).

Every interesting case involves some combination of independent and shared resources. Every reasonably modern computer (and a lot that aren't modern at all) has at least some ability to carry out at least a few independent operations simultaneously, and just about anything more sophisticated than MS-DOS takes advantage of that to at least some degree.

The nice, clean division people like to draw between "concurrent" and "parallel" simply doesn't exist, and almost never has. What people like to classify as "concurrent" usually still involves at least one, and often more, different kinds of parallel execution. What they like to classify as "parallel" often involves shared resources and (for example) one process blocking another's execution while it uses a resource shared between the two.

People trying to draw a clean distinction between "parallel" and "concurrent" are living in a fantasy of computers that never actually existed.

I found this content on a blog and thought it was useful and relevant.

Concurrency and parallelism are not the same thing. Two tasks T1 and T2 are concurrent if the order in which the two tasks are executed is not predetermined:

T1 may be executed and finished before T2,
T2 may be executed and finished before T1,
T1 and T2 may be executed simultaneously at the same instant of time (parallelism),
T1 and T2 may be executed alternately,
...

If the OS schedules two such concurrent threads to run on one single-core non-SMT non-CMP processor, you may get concurrency but not parallelism. Parallelism is possible on multi-core, multi-processor or distributed systems.
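A minimal Go illustration of "the order is not predetermined" (T1 and T2 are just hypothetical task names here): run it a few times and the completion order may differ.

package main

import (
    "fmt"
    "sync"
)

func task(name string, wg *sync.WaitGroup) {
    defer wg.Done()
    fmt.Println(name, "finished")
}

func main() {
    var wg sync.WaitGroup
    wg.Add(2)
    // T1 and T2 are concurrent: the scheduler may run one before the
    // other, interleave them, or (with more than one core) run them
    // at the same instant of time.
    go task("T1", &wg)
    go task("T2", &wg)
    wg.Wait()
}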

Concurrency is generally considered to be a property of a program, and is a more general concept than parallelism.

Source: https://blogs.oracle.com/yuanlin/entry/concurrency_vs_parallelism_concurrent_programming

In programming, concurrency is the composition of independently executing processes, while parallelism is the simultaneous execution of (possibly related) computations. - Andrew Gerrand

And

Concurrency is the composition of independently executing computations. Concurrency is a way to structure software, particularly as a way to write clean code that interacts well with the real world. It is not parallelism. Concurrency is not parallelism, although it enables parallelism. If you have only one processor, your program can still be concurrent but it cannot be parallel. On the other hand, a well-written concurrent program might run efficiently in parallel on a multiprocessor. That property could be important... - Rob Pike -

To understand the difference, I strongly recommend watching this video by Rob Pike (one of the creators of Golang): Concurrency Is Not Parallelism.
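To make Pike's point concrete, here is a small, purely illustrative Go experiment (the workload size and goroutine count are assumptions): the same concurrent program runs once restricted to a single processor (concurrent but not parallel) and once on all available cores, where the concurrency enables parallelism and, on a multicore machine, usually finishes sooner.

package main

import (
    "fmt"
    "runtime"
    "sync"
    "time"
)

// burn is a small CPU-bound task.
func burn() {
    sum := 0
    for i := 0; i < 50000000; i++ {
        sum += i
    }
    _ = sum
}

// runConcurrent starts n goroutines doing CPU work and reports the elapsed time.
func runConcurrent(n int) time.Duration {
    start := time.Now()
    var wg sync.WaitGroup
    wg.Add(n)
    for i := 0; i < n; i++ {
        go func() {
            defer wg.Done()
            burn()
        }()
    }
    wg.Wait()
    return time.Since(start)
}

func main() {
    const n = 4

    // One processor: the program is concurrent but cannot be parallel.
    runtime.GOMAXPROCS(1)
    fmt.Println("1 processor:   ", runConcurrent(n))

    // All processors: the same concurrent program now runs in parallel.
    runtime.GOMAXPROCS(runtime.NumCPU())
    fmt.Println("all processors:", runConcurrent(n))
}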

I believe concurrent programming refers to multithreaded programming: it's about letting your program run multiple threads, abstracted away from the hardware details.

Parallel programming refers to specifically designing your program's algorithms to take advantage of available parallel execution. For example, you could execute two branches of some algorithm in parallel, expecting that it will reach the result sooner (on average) than if it checked the first branch and then the second.
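A hedged Go sketch of that last idea (the data, the search helper and the split point are made up for illustration): the two branches of the search space are explored in parallel instead of one after the other.

package main

import (
    "fmt"
    "sync"
)

// search scans one branch of the data and signals on found if the
// target is present.
func search(data []int, target int, found chan<- bool, wg *sync.WaitGroup) {
    defer wg.Done()
    for _, v := range data {
        if v == target {
            found <- true
            return
        }
    }
}

func main() {
    data := []int{7, 3, 9, 1, 8, 2, 6, 4}
    target := 2

    // Execute the two branches of the search in parallel; on average a
    // result is reached sooner than scanning the halves sequentially.
    found := make(chan bool, 2) // buffered so neither branch blocks
    var wg sync.WaitGroup
    wg.Add(2)
    go search(data[:len(data)/2], target, found, &wg)
    go search(data[len(data)/2:], target, found, &wg)
    wg.Wait()
    close(found)

    fmt.Println("found:", len(found) > 0)
}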