I was asked this in an interview. Both are O(n log n), yet most people use quicksort rather than merge sort. Why is that?
Current answer
Quicksort is O(n²) in the worst case, but its average case consistently outperforms merge sort. Both algorithms are O(n log n), but remember that Big O notation drops lower-order terms and constant factors, and when it comes to constant factors quicksort is a significant improvement over merge sort.
Merge sort also requires an extra O(n) buffer (roughly 2n memory in total), while quicksort can be done in place (it needs only the n elements themselves). This is another reason quicksort is generally preferred over merge sort.
Additional information:
The worst case of quicksort occurs when the pivot is chosen poorly. Consider the following example:
[5, 4, 3, 2, 1]
If the pivot is chosen as the smallest or largest number in the group, quicksort runs in O(n²). The probability of picking an element that lies in the largest or smallest 25% of the list is 0.5, so each pivot has a 0.5 chance of being a good one. With a typical pivot-choosing strategy (say, picking a random element), every pivot choice therefore has a 0.5 chance of being good, and for a large collection the probability of always choosing a poor pivot is 0.5^n, which is vanishingly small. Based on this probability, quicksort is efficient in the average (and typical) case.
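As an illustration of that randomized pivot strategy, here is a minimal quicksort sketch in Java (the class and method names are my own, not from the answer): a random pivot is swapped to the end and a plain Lomuto partition is applied, so the already-sorted worst case above is no more likely than any other input.

```java
import java.util.concurrent.ThreadLocalRandom;

// Minimal sketch of quicksort with a randomly chosen pivot.
// With a random pivot, the input [5, 4, 3, 2, 1] is no worse than any other:
// each recursive call still has a 0.5 chance of drawing a pivot from the
// middle 50% of the range.
public class RandomizedQuickSort {

    public static void sort(int[] a) {
        quicksort(a, 0, a.length - 1);
    }

    private static void quicksort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        // Choose a random pivot index in [lo, hi] and move it to the end.
        int p = ThreadLocalRandom.current().nextInt(lo, hi + 1);
        swap(a, p, hi);
        int mid = partition(a, lo, hi);
        quicksort(a, lo, mid - 1);
        quicksort(a, mid + 1, hi);
    }

    // Lomuto partition: elements <= pivot end up left of the returned index.
    private static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi];
        int i = lo;
        for (int j = lo; j < hi; j++) {
            if (a[j] <= pivot) {
                swap(a, i, j);
                i++;
            }
        }
        swap(a, i, hi);
        return i;
    }

    private static void swap(int[] a, int i, int j) {
        int t = a[i];
        a[i] = a[j];
        a[j] = t;
    }

    public static void main(String[] args) {
        int[] a = {5, 4, 3, 2, 1};
        sort(a);
        System.out.println(java.util.Arrays.toString(a)); // [1, 2, 3, 4, 5]
    }
}
```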
Other answers
One of the reasons is more philosophical. Quicksort embodies a top-down philosophy. With n elements to sort there are n! possible orderings. With two mutually exclusive partitions of sizes m and n-m, the number of possibilities drops by several orders of magnitude: m! * (n-m)! is far smaller than n! alone. Imagine 5! versus 3! * 2! — 5! has 10 times more possibilities than the two partitions of 2 and 3, and extrapolate that to a million factorial versus 900K! * 100K!. So instead of worrying about establishing any order within a range or partition, quicksort first establishes order at a broader level between partitions and thereby reduces the possibilities within each partition. Any order established earlier within a range would be disturbed later if the partitions themselves were not mutually exclusive.
Any bottom-up sorting method like merge sort or heap sort is like a worker's or employee's approach, where one starts comparing at a micro level very early. But that order is bound to be lost as soon as an element later turns up between them. These approaches are very stable and predictable, but they do a certain amount of extra work.
Quicksort is like a managerial approach: at first no one cares about exact order, only about meeting some broad criterion. The partitions are then narrowed until you end up with a sorted set. The real challenge in quicksort is finding a partition criterion in the dark, when you know nothing about the elements to sort. That is why we either spend some effort to find a median value, pick one at random, or use some arbitrary "managerial" approach. Finding a perfect median can take significant effort and leads back to a clumsy bottom-up approach again. So quicksort just picks a random pivot and hopes it lands somewhere in the middle, or does a little work to take the median of 3 or 5 candidates for a better pivot, but it never plans to be perfect and wastes no time on initial ordering. That works out well when you are lucky, and occasionally degrades to n² when you only take a chance and do not get a good median. Either way the data is random, right? So I lean toward quicksort's top-down logical approach: the gamble it takes on pivot selection and the comparisons it saves early on seem to pay off more often than the meticulous, thorough, stable bottom-up approach of merge sort.
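As a hypothetical illustration (not part of the original answer) of the "median of 3" pivot choice mentioned above, the sketch below samples the first, middle, and last elements and uses the median of the three as the pivot, which already rules out the n² behaviour on sorted or reverse-sorted input:

```java
// Hypothetical helper showing the median-of-three pivot choice: look at the
// first, middle and last elements and pick the one whose value is the median.
public class MedianOfThree {

    // Returns the index whose value is the median of a[lo], a[mid], a[hi].
    static int medianOfThreeIndex(int[] a, int lo, int hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] < a[lo]) { int t = lo; lo = mid; mid = t; }
        if (a[hi] < a[mid]) { int t = mid; mid = hi; hi = t; }
        if (a[mid] < a[lo]) { int t = lo; lo = mid; mid = t; }
        return mid;
    }

    public static void main(String[] args) {
        int[] sorted = {1, 2, 3, 4, 5};
        // On already-sorted input the sampled median is the middle element,
        // so the split stays balanced instead of degrading to n^2.
        System.out.println(medianOfThreeIndex(sorted, 0, sorted.length - 1)); // 2
    }
}
```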
That's hard to say. The worst case of merge sort is n*log2(n) - n + 1, which is exact when n equals 2^k (I have already proved this), and for any n it lies between (n lg n - n + 1) and (n lg n + n + O(lg n)). For quicksort, the best case is n*log2(n) (again when n equals 2^k). If you divide the merge sort count by the quicksort count, the ratio tends to 1 as n goes to infinity. So it is as if the worst case of merge sort were better than the best case of quicksort — why do we use quicksort, then? But remember, merge sort is not in place: it requires 2n memory space, and it also has to do many array copies, which we do not include in the analysis of the algorithm. In a word, merge sort really is faster than quicksort in theory, but in reality you need to consider memory space and the cost of array copies, and merging turns out to be slower than quicksort. I once ran an experiment in Java where I generated 1,000,000 numbers with the Random class: merge sort took 2610 ms and quicksort 1370 ms.
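The following is a rough sketch, under my own assumptions, of the kind of experiment described: it times a simple top-down merge sort against Arrays.sort(int[]), which the JDK implements with a dual-pivot quicksort for primitive arrays. Absolute numbers will differ from the 2610 ms / 1370 ms quoted above depending on the machine and JVM warm-up.

```java
import java.util.Arrays;
import java.util.Random;

// Rough benchmark sketch: sort 1,000,000 random ints once with a simple
// top-down merge sort and once with Arrays.sort(int[]) (a dual-pivot
// quicksort for primitives in the JDK).
public class MergeVsQuickBenchmark {

    static void mergeSort(int[] a) {
        if (a.length < 2) return;
        int mid = a.length / 2;
        // The extra array copies below are exactly the hidden cost the
        // answer mentions: they do not show up in the O(n log n) count.
        int[] left = Arrays.copyOfRange(a, 0, mid);
        int[] right = Arrays.copyOfRange(a, mid, a.length);
        mergeSort(left);
        mergeSort(right);
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length)
            a[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        while (i < left.length)  a[k++] = left[i++];
        while (j < right.length) a[k++] = right[j++];
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        Random rnd = new Random(42);
        int[] data = new int[n];
        for (int i = 0; i < n; i++) data[i] = rnd.nextInt();

        int[] forMerge = data.clone();
        int[] forQuick = data.clone();

        long t0 = System.nanoTime();
        mergeSort(forMerge);
        long t1 = System.nanoTime();
        Arrays.sort(forQuick); // dual-pivot quicksort for int[]
        long t2 = System.nanoTime();

        System.out.printf("merge sort: %d ms, quicksort: %d ms%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
    }
}
```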
Quicksort is an in-place sorting algorithm, so it is better suited to arrays. Merge sort, on the other hand, requires extra O(N) storage and is better suited to linked lists.
Unlike with arrays, in a linked list we can insert an item in the middle with O(1) extra space and O(1) time, so the merge operation of merge sort can be implemented without any extra space (a sketch of such an in-place list merge follows this answer). However, allocating and deallocating the extra array space hurts the running time of merge sort on arrays. Merge sort also favors linked lists because the data is accessed sequentially, without many random memory accesses.
Quicksort, on the other hand, requires a lot of random memory access, and with an array we can access memory directly, without the traversal a linked list requires. Quicksort used on arrays also has good locality of reference, because arrays are stored contiguously in memory.
Even though both sorting algorithms have average complexity O(N log N), people ordinarily use an array for storage when doing everyday tasks, and for that reason quicksort should be the algorithm of choice.
Edit: I just found out that merge sort's worst/best/average case is always n log n, while quicksort can vary from n² (worst case, when the elements are already sorted) to n log n (average/best case, when the pivot always divides the array into two halves).
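To make the linked-list point above concrete, here is a minimal sketch (my own illustration; Node is a hypothetical type, not a JDK class) of merging two already sorted singly linked lists purely by relinking nodes, which is why the merge step needs no auxiliary array on a list:

```java
// Merging two sorted singly linked lists by pointer rewiring only,
// so the merge step costs O(1) extra space.
class Node {
    int value;
    Node next;
    Node(int value) { this.value = value; }
}

public class ListMerge {

    static Node merge(Node a, Node b) {
        Node dummy = new Node(0);      // sentinel head, discarded at the end
        Node tail = dummy;
        while (a != null && b != null) {
            if (a.value <= b.value) { tail.next = a; a = a.next; }
            else                    { tail.next = b; b = b.next; }
            tail = tail.next;
        }
        tail.next = (a != null) ? a : b;  // append whichever list is left over
        return dummy.next;
    }

    // Build a list from values, in order (test helper).
    static Node of(int... values) {
        Node dummy = new Node(0), tail = dummy;
        for (int v : values) { tail.next = new Node(v); tail = tail.next; }
        return dummy.next;
    }

    public static void main(String[] args) {
        Node merged = merge(of(1, 3, 5), of(2, 4, 6));
        for (Node n = merged; n != null; n = n.next) {
            System.out.print(n.value + " ");   // 1 2 3 4 5 6
        }
    }
}
```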
In the C/C++ world, when not using STL containers, I tend to use quicksort, because it is built into the runtime, while merge sort is not.
So I believe that in many cases it is simply the path of least resistance.
In addition, quicksort's performance can be much higher in cases where the entire dataset does not fit into the working set.
A small addition on quicksort vs. merge sort.
It can also depend on the type of the items being sorted. If accessing items, swapping them, and comparing them are not simple operations, like comparing plain integers in memory, then merge sort may be the preferable algorithm (see the comparison-counting sketch after this answer).
For example, we might be sorting items on a remote server over a network protocol.
Also, for custom containers like linked lists, quicksort offers no real advantage:
1. Merge sort on a linked list needs no additional memory.
2. Access to elements in quicksort is not sequential (in memory).
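As a rough, hypothetical illustration of the expensive-comparison point above (not part of the original answer), the sketch below counts comparator invocations: Arrays.sort on an Object array (the JDK's TimSort, a merge-sort variant) is compared against a plain hand-rolled quicksort, which on random data averages noticeably more comparisons.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.Random;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: count comparisons instead of wall-clock time.
// When a single comparison is expensive (e.g. a network round trip),
// the algorithm that makes fewer comparisons wins.
public class ComparisonCount {

    // Plain quicksort (last element as pivot) that counts its comparisons.
    static void quicksort(Integer[] a, int lo, int hi, AtomicLong count) {
        if (lo >= hi) return;
        Integer pivot = a[hi];
        int i = lo;
        for (int j = lo; j < hi; j++) {
            count.incrementAndGet();
            if (a[j].compareTo(pivot) <= 0) {
                Integer t = a[i]; a[i] = a[j]; a[j] = t;
                i++;
            }
        }
        Integer t = a[i]; a[i] = a[hi]; a[hi] = t;
        quicksort(a, lo, i - 1, count);
        quicksort(a, i + 1, hi, count);
    }

    public static void main(String[] args) {
        int n = 100_000;
        Random rnd = new Random(1);
        Integer[] data = new Integer[n];
        for (int i = 0; i < n; i++) data[i] = rnd.nextInt();

        // Merge-sort side: Arrays.sort on Object[] uses TimSort, a merge-sort
        // variant; the counting comparator records every comparison it makes.
        AtomicLong mergeCount = new AtomicLong();
        Comparator<Integer> counting = (x, y) -> {
            mergeCount.incrementAndGet();
            return Integer.compare(x, y);
        };
        Arrays.sort(data.clone(), counting);

        // Quicksort side: same data, counted inside the partition loop.
        AtomicLong quickCount = new AtomicLong();
        quicksort(data.clone(), 0, n - 1, quickCount);

        System.out.println("merge-sort comparisons: " + mergeCount.get());
        System.out.println("quicksort comparisons:  " + quickCount.get());
    }
}
```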