I have a computer with 1 MB of RAM and no other local storage. I must use it to accept one million 8-digit decimal numbers over a TCP connection, sort them, and then send the sorted list out over another TCP connection.
The list of numbers may contain duplicates, which I must not discard. The code will be placed in ROM, so I do not need to subtract the size of my code from the 1 MB. I already have code to drive the Ethernet port and handle TCP/IP connections, and it requires 2 KB for its state data, including a 1 KB buffer through which the code will read and write data. Is there a solution to this problem?
Q&A source:
slashdot.org
cleaton.net
Current answer
I think the solution is to combine techniques from video encoding, namely the discrete cosine transformation. In digital video, rather than recording the brightness or color of the video as regular values such as 110 112 115 116, each value is subtracted from the previous one (similar to run-length encoding). 110 112 115 116 becomes 110 2 3 1. The values 2, 3, 1 require fewer bits than the originals.
So let's say we create a list of the input values as they arrive on the socket. We store in each element not the value itself, but the offset from the one before it. We sort as we go, so the offsets are only ever positive. An offset could be up to 8 decimal digits wide, which fits in 3 bytes, but we can't afford 3 bytes per element, so we need to pack these. We can use the top bit of each byte as a "continue bit", indicating that the next byte is part of the number and that the lower 7 bits of each byte need to be combined. Zero is valid for duplicates.
As the list fills up, the numbers should be getting closer together, meaning that on average only 1 byte is needed to record the distance to the next value: 7 bits of value and 1 continue bit, if that is convenient, though there may be a sweet spot where a "continue" marker needs fewer than 8 bits.
Anyway, I did some experimentation. Using a random number generator, I can fit one million sorted 8-digit decimal numbers into about 1279000 bytes. The average gap between numbers is consistently 99…
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.Arrays;
import java.util.Random;

public class Test {
    public static void main(String[] args) throws IOException {
        // 1 million values
        int[] values = new int[1000000];
        // create random values up to 8 digits long
        Random random = new Random();
        for (int x = 0; x < values.length; x++) {
            values[x] = random.nextInt(100000000);
        }
        Arrays.sort(values);
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        int av = 0;
        writeCompact(baos, values[0]); // first value
        for (int x = 1; x < values.length; x++) {
            int v = values[x] - values[x - 1]; // difference from the previous value
            av += v;
            System.out.println(values[x] + " diff " + v);
            writeCompact(baos, v);
        }
        System.out.println("Average offset " + (av / values.length));
        System.out.println("Fits in " + baos.toByteArray().length);
    }

    public static void writeCompact(OutputStream os, long value) throws IOException {
        do {
            int b = (int) value & 0x7f;            // low 7 bits of the value
            value >>>= 7;                          // unsigned shift to the next 7 bits
            os.write(value == 0 ? b : (b | 0x80)); // set the continue bit if more bytes follow
        } while (value != 0);
    }
}
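For completeness, here is a minimal sketch of a matching decoder (the name readCompact is mine, not part of the original answer): it accumulates the low 7 bits of each byte, little-endian, until a byte with the continue bit clear ends the number.

import java.io.IOException;
import java.io.InputStream;

public class TestDecode {
    public static long readCompact(InputStream is) throws IOException {
        long value = 0;
        int shift = 0;
        int b;
        do {
            b = is.read();                        // next byte of the packed stream
            value |= (long) (b & 0x7f) << shift;  // low 7 bits are payload
            shift += 7;
        } while ((b & 0x80) != 0);                // continue bit set: more bytes follow
        return value;
    }
}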
Other answers
Gilmanov's answer is very wrong in its assumptions. It starts speculating based on a pointless case of a million consecutive integers, which means no gaps. Those random gaps, however small they may be, really make it a poor idea.
Try it yourself. Get 1 million random 27-bit integers, sort them, and compress them with 7-Zip, xz, whatever LZMA you like. The result is over 1.5 MB. The premise above is compression of sequential numbers; even delta encoding of that is over 1.1 MB. And never mind that this uses over 100 MB of RAM for the compression. So even the compressed integers don't fit the problem, to say nothing of runtime RAM usage.
It saddens me how people upvote pretty graphics and rationalization.
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int32_t ints[1000000];  // Random 27-bit integers

int cmpi32(const void *a, const void *b) {
    // Subtraction is safe here: all values are non-negative 27-bit integers
    return (*(int32_t *)a - *(int32_t *)b);
}

int main() {
    int32_t *pi = ints;  // Pointer to input ints (REPLACE W/ read from net)
    // Fill pseudo-random integers of 27 bits
    srand(time(NULL));
    for (int i = 0; i < 1000000; i++)
        ints[i] = rand() & ((1 << 27) - 1);  // Random 32 bits masked to 27 bits
    qsort(ints, 1000000, sizeof(ints[0]), cmpi32);  // Sort 1000000 int32s
    // Now delta encode, optional, store differences to previous int
    for (int i = 1, prev = ints[0]; i < 1000000; i++) {
        ints[i] -= prev;
        prev += ints[i];
    }
    FILE *f = fopen("ints.bin", "wb");
    fwrite(ints, 4, 1000000, f);
    fclose(f);
    exit(0);
}
Now compress ints.bin with LZMA…
$ xz -f --keep ints.bin # 100 MB RAM
$ 7z a ints.bin.7z ints.bin # 130 MB RAM
$ ls -lh ints.bin*
3.8M ints.bin
1.1M ints.bin.7z
1.2M ints.bin.xz
The trick is to represent the algorithm's state, which is an integer multiset, as a compressed stream of "increment counter" = "+" and "output counter" = "!" characters. For example, the set {0,3,3,4} would be represented as "!+++!!+!", followed by any number of "+" characters. To modify the multiset, you stream the characters out, keeping only a constant amount decompressed at any one time, and make the appropriate changes before streaming them back in compressed form.
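As a concrete illustration of the representation only (the names encode/decode are mine, and a real implementation would keep the state compressed rather than in a String), a sketch of converting between a sorted multiset and the "+"/"!" stream:

import java.util.ArrayList;
import java.util.List;

public class MultisetStream {
    // Encode a sorted multiset of non-negative ints as '+'/'!' characters.
    static String encode(int[] sorted, int rangeSize) {
        StringBuilder sb = new StringBuilder();
        int counter = 0;
        for (int v : sorted) {
            while (counter < v) { sb.append('+'); counter++; }  // advance the counter to v
            sb.append('!');                                     // output the counter value
        }
        while (counter < rangeSize) { sb.append('+'); counter++; } // trailing '+'s
        return sb.toString();
    }

    // Decode the character stream back to the sorted multiset.
    static List<Integer> decode(String s) {
        List<Integer> out = new ArrayList<>();
        int counter = 0;
        for (char c : s.toCharArray()) {
            if (c == '+') counter++;     // '+' increments the counter
            else out.add(counter);       // '!' emits the current counter value
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(encode(new int[]{0, 3, 3, 4}, 5)); // prints "!+++!!+!+"
        System.out.println(decode("!+++!!+!"));               // prints [0, 3, 3, 4]
    }
}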
Details
We know there are exactly 10^6 numbers in the final set, so there are at most 10^6 "!" characters. We also know that our range has size 10^8, meaning there are at most 10^8 "+" characters. The number of arrangements of 10^6 "!"s among 10^8 "+"s is (10^8 + 10^6) choose 10^6, so specifying some particular arrangement takes about 0.965 MiB of data. That's a tight fit.
We can treat each character as independent without exceeding our quota. There are exactly 100 times more "+" characters than "!" characters, which simplifies to 100:1 odds of each character being a "+" if we forget that they are dependent. Odds of 100:101 correspond to ~0.08 bits per character, for an almost identical total of ~0.965 MiB (ignoring the dependency costs only ~12 bits in this case!)
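As a quick check on that 0.08-bits figure: with 1:100 odds of "!", the binary entropy per character is

-(1/101)·log2(1/101) - (100/101)·log2(100/101) ≈ 0.0659 + 0.0142 ≈ 0.0801 bits,

and (10^8 + 10^6) characters × 0.0801 bits ≈ 8.09×10^6 bits ≈ 1.01×10^6 bytes ≈ 0.965 MiB, matching the binomial count above.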
The simplest technique for storing independent characters with known prior probability is Huffman coding. Note that we need an impractically large tree (a Huffman tree for blocks of 10 characters has an average cost per block of about 2.4 bits, for a total of ~2.9 MiB; a Huffman tree for blocks of 20 characters has an average cost per block of about 3 bits, for a total of ~1.8 MiB; we would probably need blocks on the order of a hundred characters, implying more nodes in our tree than all the computer equipment that has ever existed can store). However, ROM is technically "free" according to the problem, and practical solutions that take advantage of the regularity in the tree will look essentially the same.
Pseudocode
Have a sufficiently large Huffman tree (or similar block-by-block compression data) stored in ROM.
Start with a compressed string of 10^8 "+" characters.
To insert the number N, stream out the compressed string until N "+" characters have gone past, then insert a "!". Stream the recompressed string back over the previous one as you go, keeping a constant number of buffered blocks to avoid over/under-runs.
Repeat one million times: [input, stream decompress > insert > compress], then decompress to output.
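A sketch of just the insertion step, operating on an uncompressed string for illustration (the real method streams compressed blocks; the name insert is mine):

public class StreamInsert {
    // Insert the value n into the "+"/"!" representation s.
    static String insert(String s, int n) {
        StringBuilder sb = new StringBuilder(s.length() + 1);
        int plusSeen = 0, i = 0;
        while (plusSeen < n) {            // stream out until n '+' characters have gone past
            char c = s.charAt(i++);
            if (c == '+') plusSeen++;
            sb.append(c);
        }
        sb.append('!');                   // record the new value n
        sb.append(s, i, s.length());      // stream the remainder unchanged
        return sb.toString();
    }

    public static void main(String[] args) {
        // {0,3,3,4} over a range of 5 is "!+++!!+!+"; inserting 3 gives "!+++!!!+!+"
        System.out.println(insert("!+++!!+!+", 3));
    }
}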
If it is possible to read the input more than once (your problem statement doesn't say it can't), the following should work. It is described in Bentley's book "Programming Pearls." If we store each number in 4 bytes, we can store 250,000 numbers in one megabyte. Use a program that makes 400 passes over the input. On the first pass it reads into memory any integer between 0 and 249,999, sorts those (at most) 250,000 integers, and writes them to the output. The second pass sorts the integers from 250,000 to 499,999, and so on up to the 400th pass, which sorts 99,750,000 to 99,999,999.
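A minimal sketch of that pass structure, assuming the input really can be re-read on every pass (readAllInputs and emit are hypothetical stand-ins for the TCP streams):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class MultiPass {
    static final int RANGE = 100_000_000;      // 8-digit decimal values: 0..99,999,999
    static final int BUCKET = 250_000;         // values per pass, bounded by 1 MB of RAM
    static final int PASSES = RANGE / BUCKET;  // 400 passes

    public static void main(String[] args) {
        for (int pass = 0; pass < PASSES; pass++) {
            int lo = pass * BUCKET, hi = lo + BUCKET;  // this pass keeps [lo, hi)
            List<Integer> batch = new ArrayList<>();
            for (int v : readAllInputs())              // re-read the entire input
                if (v >= lo && v < hi) batch.add(v);
            Collections.sort(batch);
            for (int v : batch) emit(v);               // append to the sorted output
        }
    }

    static int[] readAllInputs() { return new int[0]; } // hypothetical: re-read the TCP input
    static void emit(int v) { System.out.println(v); }  // stand-in for the output connection
}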
Across all possible inputs, there is only one solution to this problem: cheat.
Read m values over TCP, where m is close to the maximum that can be sorted in memory, perhaps n/4.
Sort those 250,000 (or so) numbers and output them.
Repeat for the other three quarters.
Let the receiver merge the 4 lists of numbers it receives as it processes them. (This is not much slower than using a single list would be.)
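A sketch of the receiver-side merge under that scheme (the iterators stand in for the four sorted TCP lists; merge is my name for it):

import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.PriorityQueue;

public class ReceiverMerge {
    // Merge k sorted runs; each Iterator stands in for one received list.
    static void merge(List<Iterator<Integer>> runs) {
        // Heap entries: {value, run index}, ordered by value
        PriorityQueue<int[]> pq = new PriorityQueue<>(Comparator.comparingInt((int[] e) -> e[0]));
        for (int i = 0; i < runs.size(); i++)
            if (runs.get(i).hasNext()) pq.add(new int[]{runs.get(i).next(), i});
        while (!pq.isEmpty()) {
            int[] e = pq.poll();
            System.out.println(e[0]);                           // emit the smallest head
            Iterator<Integer> it = runs.get(e[1]);
            if (it.hasNext()) pq.add(new int[]{it.next(), e[1]}); // refill from that run
        }
    }

    public static void main(String[] args) {
        merge(List.of(
            List.of(1, 4, 9).iterator(),
            List.of(2, 3, 10).iterator(),
            List.of(0, 7).iterator()));  // prints 0 1 2 3 4 7 9 10
    }
}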
If the input stream could be received several times, this would be much easier (there is no information about that; it is a question of ideas and time-performance).
Then we could count the decimal values. With counted values it would be easy to produce the output stream: compress by counting the values. It depends on what is in the input stream.