Today I was looking through some C++ code (written by somebody else) and found this part:
double someValue = ...
if (someValue < std::numeric_limits<double>::epsilon() &&
    someValue > -std::numeric_limits<double>::epsilon()) {
  someValue = 0.0;
}
I'm trying to figure out whether this even makes sense.
The documentation for epsilon() says:
The function returns the difference between 1 and the smallest value greater than 1 that is representable [by a double].
Does this also apply to 0, i.e. is epsilon() also the smallest value greater than 0? Or are there numbers between 0 and 0 + epsilon that can be represented by a double?
If not, isn't the comparison then equivalent to someValue == 0.0?
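A quick self-check (just a sketch, assuming an IEEE 754 double and using nothing beyond std::numeric_limits) seems to say no, since both the smallest normal and the smallest denormal double lie strictly between 0 and epsilon():

#include <cstdio>
#include <limits>

int main () {
  double eps = std::numeric_limits<double>::epsilon();           /* ~2.220446e-16 */
  double minNormal = std::numeric_limits<double>::min();         /* ~2.225074e-308 */
  double minDenorm = std::numeric_limits<double>::denorm_min();  /* ~4.940656e-324 */
  printf ("%d\n", 0.0 < minNormal && minNormal < eps);  /* prints 1 */
  printf ("%d\n", 0.0 < minDenorm && minDenorm < eps);  /* prints 1 */
  return 0;
}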
You can use the following program to print the epsilon (the smallest possible difference) of a number (1.0, 0.0, ...). The output is:
epsilon for 0.0 is 4.940656e-324
epsilon for 1.0 is 2.220446e-16
A little thought makes it clear that the epsilon gets smaller the smaller the number we use for computing it, because the exponent can adjust to the magnitude of that number.
#include <stdio.h>
#include <assert.h>

/* Returns the smallest power of two that, added to m, still changes m. */
double getEps (double m) {
  double approx = 1.0;
  double lastApprox = 0.0;
  /* Keep halving until the addition no longer has any effect. */
  while (m + approx != m) {
    lastApprox = approx;
    approx /= 2.0;
  }
  /* The loop must have run at least once. */
  assert (lastApprox != 0);
  return lastApprox;
}

int main () {
  printf ("epsilon for 0.0 is %e\n", getEps (0.0));
  printf ("epsilon for 1.0 is %e\n", getEps (1.0));
  return 0;
}
Also, a good reason for having such a function is to remove "denormals" (those very small numbers that can no longer use the implied leading "1" and have a special FP representation). Why would you want to do this? Because some machines (in particular, some older Pentium 4s) get really, really slow when processing denormals. Others just get somewhat slower. If your application doesn't really need these very small numbers, flushing them to zero is a good solution. Good places to consider this are the last steps of any IIR filters or decay functions.
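Here is a minimal sketch of that flush in a one-pole decay (the coefficient and loop length are made up for illustration; std::fpclassify and FP_SUBNORMAL are from <cmath>):

#include <cstdio>
#include <cmath>

int main () {
  double y = 1.0;            /* filter state decaying toward zero */
  const double decay = 0.5;  /* made-up one-pole coefficient */
  for (int i = 0; i < 1060; ++i) {
    y *= decay;
    /* Flush denormals to zero so later iterations never touch them. */
    if (std::fpclassify (y) == FP_SUBNORMAL)
      y = 0.0;
  }
  printf ("final state: %e\n", y);  /* 0.0; without the flush it would be a lingering denormal */
  return 0;
}

An alternative on x86 is enabling the SSE flush-to-zero mode, but the explicit check above is portable.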
See also: Why does changing 0.1f to 0 slow down performance by 10x?
and http://en.wikipedia.org/wiki/Denormal_number