I recently went to an interview where I was asked to "write a program to find the 100 largest numbers from an array of 1 billion numbers".
I was only able to give a brute-force solution, which was to sort the array in O(n log n) time and take the last 100 numbers:
Arrays.sort(array);
The interviewer was looking for a better time complexity. I tried a couple of other solutions, but none of them satisfied him. Is there a solution with a better time complexity?
Current answer
This code finds the N largest numbers in an unsorted array.
#include <iostream>
#include <climits>
using namespace std;

#define Array_Size 5          // No Of Largest Numbers To Find
#define BILLION 1000000000    // 10^9 elements

void findLargest(int max[], const int array[]);
int checkDup(int temp, const int max[]);

int main() {
    // A 10^9-element int array is far too large for the stack, so allocate it on
    // the heap; in a real run it would then be filled with the input data.
    int *array = new int[BILLION];   // contains data
    int max[Array_Size];
    for (int i = 0; i < Array_Size; i++)
        max[i] = INT_MIN;            // start below any possible value

    findLargest(max, array);

    cout << "The " << Array_Size << " largest numbers in the array are: \n";
    for (int i = 0; i < Array_Size; i++)
        cout << max[i] << endl;

    delete[] array;
    return 0;
}

// Makes Array_Size passes over the data; in each pass a value that is not yet
// stored in max[] replaces the first smaller entry it meets.
void findLargest(int max[], const int array[])
{
    for (int k = 0; k < Array_Size; k++)
    {
        for (int i = 0; i < BILLION; i++)
        {
            int temp = array[i];
            if (checkDup(temp, max) == 0)
            {
                for (int j = 0; j < Array_Size; j++)
                {
                    if (max[j] < temp)
                    {
                        max[j] = temp;
                        break;
                    }
                }
            }
        }
    }
}

// Returns -1 if temp is already stored in max[], 0 otherwise; this keeps
// duplicates out of max[], so only distinct values are collected.
int checkDup(int temp, const int max[])
{
    for (int i = 0; i < Array_Size; i++)
    {
        if (max[i] == temp)
            return -1;
    }
    return 0;
}
This may not be an efficient approach, but it gets the job done.
Hope this helps.
Other answers
While the other quickselect solutions have been downvoted, the fact remains that quickselect will find the answer faster than using a queue of size 100. In terms of comparisons, quickselect has an expected running time of 2n + o(n). A very simple implementation is:
array = input array of length n
r = Quickselect(array, n-100)
result = array of length 100
for(i = 1 to n)
    if(array[i] > r)
        add array[i] to result
This takes 3n + o(n) comparisons on average. Moreover, quickselect leaves the 100 largest items in the 100 rightmost positions of the array, which can be exploited to skip the second pass, so the running time can in fact be improved to 2n + o(n).
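A minimal Java sketch of this approach; the class and helper names are made up, the partition uses a single random pivot rather than the sampling strategies discussed below, and it relies on the fact just mentioned that quickselect leaves the 100 largest values in the rightmost 100 positions:

import java.util.concurrent.ThreadLocalRandom;

public class QuickSelectTop100 {
    // Rearranges a[lo..hi] so that a[k] holds the k-th smallest element of the array
    // and everything to its right is >= a[k]. Average cost is O(n) comparisons.
    static void quickSelect(int[] a, int lo, int hi, int k) {
        while (lo < hi) {
            int p = partition(a, lo, hi);
            if (p == k) return;
            if (p < k) lo = p + 1; else hi = p - 1;
        }
    }

    // Lomuto partition around a randomly chosen pivot; returns the pivot's final index.
    static int partition(int[] a, int lo, int hi) {
        int r = ThreadLocalRandom.current().nextInt(lo, hi + 1);
        swap(a, r, hi);
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++) {
            if (a[j] < pivot) swap(a, i++, j);
        }
        swap(a, i, hi);
        return i;
    }

    static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

    // After quickSelect(a, 0, n-1, n-100), the 100 largest values occupy a[n-100..n-1].
    static int[] top100(int[] a) {
        int n = a.length;
        quickSelect(a, 0, n - 1, n - 100);
        int[] result = new int[100];
        System.arraycopy(a, n - 100, result, 0, 100);
        return result;
    }
}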
There is the issue that this is the expected running time, not the worst case, but with a decent pivot selection strategy (for example, pick 21 elements at random and use the median of those 21 as the pivot) the number of comparisons can be guaranteed, with high probability, to be at most (2+c)n for an arbitrarily small constant c.
In fact, with an optimized sampling strategy (for example, sample sqrt(n) elements at random and choose the 99th percentile), the running time can be brought down to (1+c)n + o(n) for arbitrarily small c (assuming that K, the number of elements to be selected, is o(n)).
On the other hand, using a queue of size 100 requires O(n log 100) comparisons, and log base 2 of 100 is approximately 6.6.
If we think of the problem in the more abstract sense of choosing the largest K elements from an array of size N, where K = o(N) but both K and N go to infinity, then the running time of the quickselect version is O(N) and the queue version is O(N log K), so in this sense quickselect is also asymptotically superior.
It was mentioned in the comments that the queue solution runs in expected time N + K log N on random input. Of course, the random-input assumption is never valid unless the question states it explicitly. The queue solution could traverse the array in a random order, but that incurs the additional cost of N calls to a random number generator, plus either permuting the entire input array or allocating a new array of length N to hold the random indices.
If the problem does not allow you to move the elements of the original array, and allocating memory is expensive so copying the array is not an option, that is a different matter. But strictly in terms of running time, this is the best solution.
Recently I have been adopting the theory that every problem in the world can be solved in O(1), even this one. The question does not say what the range of the numbers is. If the numbers range from 1 to 10, then the 100 largest numbers will probably all be a group of 10s. When the range of values is tiny compared to 1 billion, the chance of picking the highest value out of the 1 billion numbers is very high. So that is the answer I would give in that interview.
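If the range really is that small, a counting pass makes the idea concrete. A rough sketch under that assumption (the method name is made up; strictly, the counting pass is still O(n), only the selection step after it is constant):

// Sketch: when the values are known to lie in a tiny range (here the 1..10 from the
// paragraph above), one counting pass over the data is enough, and picking the 100
// largest afterwards costs only a constant amount of extra work.
static int[] top100SmallRange(int[] a) {
    int[] count = new int[11];                     // histogram over the assumed range 1..10
    for (int v : a) count[v]++;
    int[] result = new int[100];
    int filled = 0;
    for (int v = 10; v >= 1 && filled < 100; v--)  // walk from the largest value down
        for (int c = 0; c < count[v] && filled < 100; c++)
            result[filled++] = v;
    return result;
}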
You can iterate over the numbers, which takes O(n).
Whenever you find a value larger than the current minimum, add the new value to a circular queue of size 100.
The minimum of that circular queue is the new comparison value. Keep adding to the queue; if it is full, extract the minimum from it first.
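A sketch of this idea in Java, using a min-heap (java.util.PriorityQueue) of size 100 instead of a literal circular queue, so the current minimum is always available at the head; one pass over the data, about O(n log 100) comparisons:

static int[] top100WithHeap(int[] a) {
    java.util.PriorityQueue<Integer> heap = new java.util.PriorityQueue<>(100);
    for (int v : a) {
        if (heap.size() < 100) {
            heap.offer(v);                 // still filling the first 100 slots
        } else if (v > heap.peek()) {      // beats the smallest value kept so far?
            heap.poll();                   // drop that smallest value
            heap.offer(v);
        }
    }
    int[] result = new int[heap.size()];
    for (int i = 0; i < result.length; i++)
        result[i] = heap.poll();           // comes out in ascending order
    return result;
}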
The simplest solution is to scan the billion-number array and keep the 100 largest values found so far in a small array buffer, without any sorting, while remembering the smallest value in that buffer. At first I thought this method had been proposed by fordprefect, but in a comment he said he assumed the 100-number data structure to be implemented as a heap. Whenever a new number is found that is larger than the minimum of the buffer, that minimum is overwritten by the new value and the buffer is searched for its current minimum again. If the numbers in the billion-number array are randomly distributed, most of the time a value from the large array is compared to the minimum of the small array and discarded. Only for a very, very small fraction of the numbers must the value be inserted into the small array, so the cost of manipulating the data structure holding the small set of numbers can be neglected. For a small number of elements it is hard to tell whether using a priority queue is actually faster than this naive approach.
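A minimal Java sketch of this plain-array buffer (the method name is made up, and it assumes the input holds at least 100 values above Integer.MIN_VALUE):

// The buffer is only touched when a scanned value beats its current minimum.
static int[] top100WithArrayBuffer(int[] a) {
    int[] buf = new int[100];
    java.util.Arrays.fill(buf, Integer.MIN_VALUE);
    int minIdx = 0;                              // index of the smallest value in buf
    for (int v : a) {
        if (v > buf[minIdx]) {
            buf[minIdx] = v;                     // overwrite the current minimum
            for (int i = 0; i < buf.length; i++) // rescan: ~100 comparisons per insert
                if (buf[i] < buf[minIdx]) minIdx = i;
        }
    }
    return buf;
}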
I want to estimate the number of insertions into the small 100-element buffer while the 10^9-element array is scanned. The program scans the first 1000 elements of the large array and has to insert at most 1000 of them into the buffer. The buffer then contains 100 of the 1000 elements scanned, i.e. 0.1 of the elements scanned, so we assume that the probability that a value from the large array is larger than the current minimum of the buffer is about 0.1; such an element has to be inserted into the buffer.

Now the program scans the next 10^4 elements of the large array. The minimum of the buffer increases every time a new element is inserted, but with the estimated ratio of 0.1 there are at most 0.1 * 10^4 = 1000 elements to insert (the expected number of elements actually inserted will be smaller). After scanning these 10^4 elements, the buffer holds about 0.01 of the elements scanned so far, so while scanning the next 10^5 numbers we assume that no more than 0.01 * 10^5 = 1000 will be inserted into the buffer. Continuing this argument, we have inserted about 7000 values after scanning 1000 + 10^4 + 10^5 + ... + 10^9 ~ 10^9 elements of the large array. So when scanning an array of 10^9 randomly distributed elements we expect no more than 10^4 (7000 rounded up) insertions into the buffer.

After each insertion into the buffer the new minimum must be found. If the buffer is a plain array we need 100 comparisons to find the new minimum; if the buffer is another data structure (like a heap) we need at least 1 comparison to find it. To compare against the elements of the large array we need 10^9 comparisons. So all in all we need about 10^9 + 100 * 10^4 = 1.001 * 10^9 comparisons when using an array as the buffer, and at least 1.000 * 10^9 comparisons when using another data structure (like a heap). So using a heap brings only a gain of 0.1% if performance is determined by the number of comparisons. But what is the difference in execution time between inserting an element into a 100-element heap and replacing an element in a 100-element array and finding its new minimum?
On the theoretical level: how many comparisons does a heap insertion need? I know it is O(log(n)), but how large is the constant factor?
On the machine level: what impact do caching and branch prediction have on the execution time of a heap insertion versus a linear search in an array?
On the implementation level: what additional costs are hidden in the heap data structure supplied by a library or compiler?
I think these questions have to be answered before anyone can estimate the real difference between the performance of a 100-element heap and a 100-element array. So it makes sense to run an experiment and measure the real performance.
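A rough way to run that experiment (a throwaway harness, not JMH, so treat the numbers as indicative only; the data size is reduced to 10^8 ints here to keep memory modest):

import java.util.PriorityQueue;
import java.util.Random;

public class Top100Benchmark {
    static final int K = 100;

    // Plain 100-element array buffer: overwrite the minimum, rescan for the new one.
    static long scanWithArray(int[] a) {
        int[] buf = new int[K];
        java.util.Arrays.fill(buf, Integer.MIN_VALUE);
        int minIdx = 0;
        long t0 = System.nanoTime();
        for (int v : a) {
            if (v > buf[minIdx]) {
                buf[minIdx] = v;
                for (int i = 0; i < K; i++) if (buf[i] < buf[minIdx]) minIdx = i;
            }
        }
        return System.nanoTime() - t0;
    }

    // Min-heap of size 100 via java.util.PriorityQueue.
    static long scanWithHeap(int[] a) {
        PriorityQueue<Integer> heap = new PriorityQueue<>(K);
        long t0 = System.nanoTime();
        for (int v : a) {
            if (heap.size() < K) heap.offer(v);
            else if (v > heap.peek()) { heap.poll(); heap.offer(v); }
        }
        return System.nanoTime() - t0;
    }

    public static void main(String[] args) {
        int[] a = new Random(42).ints(100_000_000).toArray();
        System.out.printf("array buffer: %d ms%n", scanWithArray(a) / 1_000_000);
        System.out.printf("priority queue: %d ms%n", scanWithHeap(a) / 1_000_000);
    }
}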
Another O(n) algorithm:
The algorithm finds the largest 100 by elimination.
Consider the binary representation of all the numbers and start from the most significant bit. Whether the MSB is 1 can be determined with a Boolean AND against an appropriate mask. If more than 100 of the numbers have a 1 in that bit, eliminate the numbers that have a 0 there. Among the remaining numbers, proceed with the next most significant bit; keep a count of how many numbers remain after each elimination and continue as long as that count is greater than 100.
The main Boolean operation can be done in parallel on a GPU.
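A hedged Java sketch of this elimination idea (sequential, not on a GPU; the method name is made up, and it assumes at least 100 non-negative values so the sign bit can be ignored). It also fills in the step the description leaves implicit: when fewer candidates than still needed have a 1 in the current bit, those candidates are all winners and the search continues among the rest:

// Each round keeps only the candidates that can still belong to the top 100,
// so there are at most 31 passes over a shrinking candidate set, i.e. O(n) overall.
static int[] top100ByBits(int[] a) {
    int[] candidates = a.clone();        // values still in the running
    int candCount = candidates.length;
    int[] result = new int[100];
    int found = 0;                       // values already known to be in the top 100
    for (int bit = 30; bit >= 0 && found < 100; bit--) {
        int ones = 0;
        for (int i = 0; i < candCount; i++)
            if (((candidates[i] >> bit) & 1) == 1) ones++;
        if (ones >= 100 - found) {
            // The remaining winners all have this bit set: drop every candidate with a 0.
            int k = 0;
            for (int i = 0; i < candCount; i++)
                if (((candidates[i] >> bit) & 1) == 1) candidates[k++] = candidates[i];
            candCount = k;
        } else {
            // Every candidate with a 1 here beats every candidate with a 0: keep them all
            // and keep searching among the candidates with a 0.
            int k = 0;
            for (int i = 0; i < candCount; i++) {
                if (((candidates[i] >> bit) & 1) == 1) result[found++] = candidates[i];
                else candidates[k++] = candidates[i];
            }
            candCount = k;
        }
    }
    // Any slots still open are filled with ties: the surviving candidates agree on
    // every bit that has been examined, so they are all equal.
    for (int i = 0; found < 100; i++) result[found++] = candidates[i];
    return result;
}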