What is the difference between concurrent programming and parallel programming? I asked Google, but didn't find anything that helped me understand the difference. Could you give me an example?

For now I have found this explanation: http://www.linux-mag.com/id/7411 - but "concurrency is a property of the program" vs "parallel execution is a property of the machine" isn't enough for me - I still can't tell which is which.


Current answer

I will try to explain it in my own way; it may not be in computer-science terms, but it should give you the general idea.

Let's take household chores as an example: cleaning dishes, taking out the trash, mowing the lawn, etc. We have three people (threads) A, B and C to do them.

Concurrent: the three people start different tasks independently, e.g.,

A --> cleaning dishes
B --> taking out trash 
C --> mowing the lawn 

Here the order in which the tasks complete is non-deterministic; when each one finishes depends on its workload.
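A minimal sketch of the concurrent case in Go (the chore names are just illustrative):

package main

import (
	"fmt"
	"sync"
)

// Each chore runs in its own goroutine. The Go scheduler interleaves
// them, so the order in which they finish differs from run to run.
func main() {
	chores := []string{"cleaning dishes", "taking out trash", "mowing the lawn"}
	var wg sync.WaitGroup
	for _, chore := range chores {
		wg.Add(1)
		go func(c string) {
			defer wg.Done()
			fmt.Println("finished:", c)
		}(chore)
	}
	wg.Wait()
}

Run it a few times and the output order changes: that non-determinism is exactly the point above.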

Parallel: here, if we want to increase throughput, we can assign multiple people to a single task. For example, for cleaning the dishes we assign two people: A soaps the dishes and B washes them. This may increase throughput.

Cleaning dishes:

A --> soaping the dishes
B --> washing the dishes

And so on.
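A rough sketch of this two-person pipeline in Go (the dish count and stage names are made up for illustration):

package main

import "fmt"

// Worker A soaps each dish and hands it to worker B over a channel;
// B washes whatever A has soaped. Both stages work at the same time.
func main() {
	soaped := make(chan int)

	go func() { // worker A
		for dish := 1; dish <= 5; dish++ {
			soaped <- dish
		}
		close(soaped)
	}()

	for dish := range soaped { // worker B
		fmt.Println("washed dish", dish)
	}
}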

Hope this gives you an idea! Now move on to the technical terms explained in the other answers ;)

Other answers

I believe concurrent programming refers to multithreaded programming: it is about letting your program run multiple threads, abstracted away from the hardware details.

Parallel programming refers to specifically designing your program's algorithms to take advantage of available parallel execution. For example, you could execute two branches of some algorithm in parallel, expecting to reach the result sooner (on average) than if you checked the first branch and then the second.
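As a hedged sketch of that idea in Go, assuming a search whose two branches can be explored independently (explore and the branch names are hypothetical stand-ins, not from the answer):

package main

import "fmt"

// Explore two branches of a hypothetical search in parallel and take
// whichever result arrives first; on average this is faster than
// trying the branches one after the other.
func explore(branch string, result chan<- string) {
	// real work would happen here
	result <- "result from " + branch
}

func main() {
	result := make(chan string, 2) // buffered so the slower branch doesn't block
	go explore("branch A", result)
	go explore("branch B", result)
	fmt.Println(<-result) // the first answer wins
}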

https://joearms.github.io/published/2013-04-05-concurrent-and-parallel-programming.html

Concurrent = two queues and one coffee machine.

Parallel = two queues and two coffee machines.
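One way to model the analogy, sketched in Go: a buffered channel stands in for the coffee machines, and its capacity is the number of machines (the queue names and structure are illustrative assumptions):

package main

import (
	"fmt"
	"sync"
)

// The buffered channel acts as a semaphore whose capacity is the
// number of coffee machines. With capacity 1 the two queues take
// turns on one machine (concurrency); with capacity 2 both queues
// pour at the same time (parallelism).
func main() {
	machines := make(chan struct{}, 1) // change 1 to 2 for the parallel case
	var wg sync.WaitGroup
	for _, queue := range []string{"queue 1", "queue 2"} {
		wg.Add(1)
		go func(q string) {
			defer wg.Done()
			machines <- struct{}{} // wait for a free machine
			fmt.Println(q, "is getting coffee")
			<-machines // free the machine
		}(queue)
	}
	wg.Wait()
}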

Traditional task scheduling can be serial, parallel or concurrent.

Serial: tasks must be executed one after the other in a known, strict order or it will not work. Easy enough.

Parallel: tasks must be executed at the same time or it will not work. Any failure of any of the tasks - functionally or in time - will result in total system failure. All tasks must have a common reliable sense of time. Try to avoid this or we will have tears by tea time.

Concurrent: we do not care. We are not careless, though: we have analysed it and it doesn't matter; we can therefore execute any task using any available facility at any time. Happy days.

Often the available scheduling changes at known events, which we call state changes.

People often think this is about software, but it is in fact a system-design concept that predates computers; software systems have been a bit slow on the uptake, and very few software languages even attempt to address the problem. If you are interested, try looking up the transputer language occam.

In short, system design addresses the following:

The verb - what you are doing (the operation or algorithm)
The noun - what it is done to (the data or interface)
When - initiation, progress, state changes
How - whether it is serial, parallel or concurrent
Where - once you know when things happen, you can say where they can happen, and not before
Why - is this the right way to do it? Are there other ways and, more to the point, a better way? What happens if you don't do it?

Good luck.

From the processor's point of view, it can be described by this picture.

Parallel programming happens when code is being executed at the same time and each execution is independent of the others. Therefore, there is usually no concern about shared variables and the like, because that is not likely to happen.
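A small sketch of that in Go, assuming the work splits cleanly into disjoint halves (the data and the squaring operation are just illustrative):

package main

import (
	"fmt"
	"sync"
)

// Each goroutine squares its own half of the slice. The halves do not
// overlap, so there is no shared state and no locking is needed.
func main() {
	data := []int{1, 2, 3, 4, 5, 6}
	half := len(data) / 2
	var wg sync.WaitGroup
	for _, part := range [][]int{data[:half], data[half:]} {
		wg.Add(1)
		go func(p []int) {
			defer wg.Done()
			for i := range p {
				p[i] *= p[i]
			}
		}(part)
	}
	wg.Wait()
	fmt.Println(data) // [1 4 9 16 25 36]
}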

Concurrent programming, however, consists of code being executed by different processes/threads that share variables and the like; therefore, in concurrent programming we must establish some sort of rule to decide which process/thread executes first. We want this so that we can be sure there will be consistency and so that we can know with certainty what will happen. If there is no control and all threads compute at the same time and store things in the same variables, how would we know what to expect in the end? Maybe one thread is faster than another; maybe one thread even stopped in the middle of its execution and another continued a different computation with a corrupted (not yet fully computed) variable. The possibilities are endless. It is in these situations that we usually use concurrent programming instead of parallel.
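A minimal sketch in Go of the kind of rule this answer describes, using a mutex to serialize access to a shared variable (the counter and loop bounds are just illustrative):

package main

import (
	"fmt"
	"sync"
)

// Two goroutines increment the same counter. The mutex is the "rule"
// that decides who goes first: without it the updates would race and
// the final value would be unpredictable; with it, the result is
// always 2000.
func main() {
	var (
		counter int
		mu      sync.Mutex
		wg      sync.WaitGroup
	)
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				mu.Lock()
				counter++
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	fmt.Println("counter =", counter)
}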