When comparing a float to an integer, some pairs of values take much longer to evaluate than other values of a similar magnitude.

For example:

>>> import timeit
>>> timeit.timeit("562949953420000.7 < 562949953421000") # run 1 million times
0.5387085462592742

But if the float or integer is made smaller or larger by certain amounts, the comparison runs much more quickly:

>>> timeit.timeit("562949953420000.7 < 562949953422000") # integer increased by 1000
0.1481498428446173
>>> timeit.timeit("562949953423001.8 < 562949953421000") # float increased by 3001.1
0.1459577925548956

Changing the comparison operator (e.g. using == or > instead) does not affect the timings in any noticeable way.

This is not solely related to magnitude, because picking larger or smaller values can result in faster comparisons, so I suspect it is down to some unfortunate way the bits line up.

Clearly, comparing these values is more than fast enough for most use cases. I am simply curious why Python seems to struggle more with some pairs of values than with others.


Current answer

Using gmpy2 with arbitrary-precision floats and integers, it is possible to get more uniform comparison performance:

~ $ ptipython
Python 3.5.1 |Anaconda 4.0.0 (64-bit)| (default, Dec  7 2015, 11:16:01) 
Type "copyright", "credits" or "license" for more information.

IPython 4.1.2 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]: import gmpy2

In [2]: from gmpy2 import mpfr

In [3]: from gmpy2 import mpz

In [4]: gmpy2.get_context().precision=200

In [5]: i1=562949953421000

In [6]: i2=562949953422000

In [7]: f=562949953420000.7

In [8]: i11=mpz('562949953421000')

In [9]: i12=mpz('562949953422000')

In [10]: f1=mpfr('562949953420000.7')

In [11]: f<i1
Out[11]: True

In [12]: f<i2
Out[12]: True

In [13]: f1<i11
Out[13]: True

In [14]: f1<i12
Out[14]: True

In [15]: %timeit f<i1
The slowest run took 10.15 times longer than the fastest. This could mean that an intermediate result is being cached.
1000000 loops, best of 3: 441 ns per loop

In [16]: %timeit f<i2
The slowest run took 12.55 times longer than the fastest. This could mean that an intermediate result is being cached.
10000000 loops, best of 3: 152 ns per loop

In [17]: %timeit f1<i11
The slowest run took 32.04 times longer than the fastest. This could mean that an intermediate result is being cached.
1000000 loops, best of 3: 269 ns per loop

In [18]: %timeit f1<i12
The slowest run took 36.81 times longer than the fastest. This could mean that an intermediate result is being cached.
1000000 loops, best of 3: 231 ns per loop

In [19]: %timeit f<i11
The slowest run took 78.26 times longer than the fastest. This could mean that an intermediate result is being cached.
10000000 loops, best of 3: 156 ns per loop

In [20]: %timeit f<i12
The slowest run took 21.24 times longer than the fastest. This could mean that an intermediate result is being cached.
10000000 loops, best of 3: 194 ns per loop

In [21]: %timeit f1<i1
The slowest run took 37.61 times longer than the fastest. This could mean that an intermediate result is being cached.
1000000 loops, best of 3: 275 ns per loop

In [22]: %timeit f1<i2
The slowest run took 39.03 times longer than the fastest. This could mean that an intermediate result is being cached.
1000000 loops, best of 3: 259 ns per loop

Other answers

The comment in the Python source code for float objects acknowledges that:

"Comparison is pretty much a nightmare"

This is especially true when comparing a float to an integer, because, unlike floats, integers in Python can be arbitrarily large and are always exact. Trying to cast the integer to a float might lose precision and make the comparison inaccurate. Trying to cast the float to an integer is not going to work either, because any fractional part will be lost.
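
A quick illustration of both pitfalls in plain Python:

>>> n = 10 ** 23
>>> float(n) == float(n + 1)    # two different integers collapse onto the same double
True
>>> int(4.65)                   # casting the float instead throws away the fraction
4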

To get around this problem, Python performs a series of checks, returning the result if one of the checks succeeds. It compares the signs of the two values, then whether the integer is "too big" to be a float, then compares the exponent of the float to the length of the integer. If all of these checks fail, two new Python integer objects have to be constructed and compared in order to obtain the result.

When comparing a float v to an integer/long w, the worst case is when:

- v and w have the same sign (both positive or both negative),
- the integer w has few enough bits that it can be held in the size_t type (typically 32 or 64 bits),
- the integer w has at least 49 bits,
- the exponent of the float v is the same as the number of bits in w.

This is exactly what we have for the values in the question:

>>> import math
>>> math.frexp(562949953420000.7) # gives the float's (significand, exponent) pair
(0.9999999999976706, 49)
>>> (562949953421000).bit_length()
49

We see that 49 is both the exponent of the float and the number of bits in the integer. Both numbers are positive, so all four criteria above are met.

Choosing a larger (or smaller) value changes the number of bits of the integer or the value of the exponent, and so Python is able to determine the result of the comparison without performing the expensive final check.
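
Continuing the session above, the "fast" pairs from the question can be checked the same way; in each case the exponent and the bit count no longer coincide (results worked out by hand from the powers of two involved, so worth re-running):

>>> import math
>>> math.frexp(562949953420000.7)[1], (562949953422000).bit_length()  # integer increased by 1000
(49, 50)
>>> math.frexp(562949953423001.8)[1], (562949953421000).bit_length()  # float increased by 3001.1
(50, 49)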

This is specific to the CPython implementation of the language.


The comparison in more detail

The float_richcompare function handles the comparison between the two values v and w.

Below is a step-by-step description of the checks that the function performs. The comments in the Python source code are actually very helpful for understanding what the function does, so I have left them in where relevant. I have also summarised these checks in a list at the end of the answer.

The main idea is to map the Python objects v and w onto two appropriate C doubles, i and j, which can then be easily compared to give the correct result. Both Python 2 and Python 3 use the same idea to do this (the former just handles the int and long types separately).

The first thing to do is check that v really is a Python float and map it onto the C double i. Next the function looks at whether w is also a float and maps it onto the C double j. This is the best case scenario for the function, since all the other checks can be skipped. The function also checks whether v is inf or nan:

static PyObject*
float_richcompare(PyObject *v, PyObject *w, int op)
{
    double i, j;
    int r = 0;
    assert(PyFloat_Check(v));       
    i = PyFloat_AS_DOUBLE(v);       

    if (PyFloat_Check(w))           
        j = PyFloat_AS_DOUBLE(w);   

    else if (!Py_IS_FINITE(i)) {
        if (PyLong_Check(w))
            j = 0.0;
        else
            goto Unimplemented;
    }
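
The effect of that special-case branch can be seen from ordinary Python:

>>> float('inf') > 10 ** 400    # an infinite float beats any finite integer
True
>>> float('nan') < 10 ** 400    # nan compares false, just as it does against 0.0
False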

Now we know that if w failed these checks, it is not a Python float. The function then checks whether it is a Python integer. If so, the easiest test is to extract the sign of v and the sign of w (return 0 if zero, -1 if negative, 1 if positive). If the signs are different, this is all the information needed to return the result of the comparison:

    else if (PyLong_Check(w)) {
        int vsign = i == 0.0 ? 0 : i < 0.0 ? -1 : 1;
        int wsign = _PyLong_Sign(w);
        size_t nbits;
        int exponent;

        if (vsign != wsign) {
            /* Magnitudes are irrelevant -- the signs alone
             * determine the outcome.
             */
            i = (double)vsign;
            j = (double)wsign;
            goto Compare;
        }
    }   
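
Rendered in plain Python, the shortcut amounts to something like this (an illustrative re-phrasing, not the actual C code):

v, w = -562949953420000.7, 562949953421000
vsign = 0 if v == 0.0 else (-1 if v < 0.0 else 1)
wsign = 0 if w == 0 else (-1 if w < 0 else 1)
if vsign != wsign:
    print(vsign < wsign)    # True: the signs alone decide that v < w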

If this check fails, then v and w have the same sign.

The next check counts the number of bits in the integer w. If it has too many bits then it cannot possibly be held as a float, so it must be greater in magnitude than the float v:

    nbits = _PyLong_NumBits(w);
    if (nbits == (size_t)-1 && PyErr_Occurred()) {
        /* This long is so large that size_t isn't big enough
         * to hold the # of bits.  Replace with little doubles
         * that give the same outcome -- w is so large that
         * its magnitude must exceed the magnitude of any
         * finite float.
         */
        PyErr_Clear();
        i = (double)vsign;
        assert(wsign != 0);
        j = wsign * 2.0;
        goto Compare;
    }

On the other hand, if the integer w has 48 bits or fewer, it can safely be turned into a C double j and compared:

    if (nbits <= 48) {
        j = PyLong_AsDouble(w);
        /* It's impossible that <= 48 bits overflowed. */
        assert(j != -1.0 || ! PyErr_Occurred());
        goto Compare;
    }
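
The 48-bit cut-off sits comfortably inside the 53 bits of precision a C double offers, so that conversion is exact; wider integers may round:

>>> w = (1 << 48) - 1      # largest 48-bit integer
>>> float(w) == w          # converts to a double without rounding
True
>>> big = (1 << 60) + 1
>>> float(big) == big      # 61 significant bits: the conversion rounds
False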

From this point on, we know that w has 49 or more bits. It is convenient to treat w as a positive integer, so the sign and the comparison operator are changed as necessary:

    if (vsign < 0) {
        /* "Multiply both sides" by -1; this also swaps the
         * comparator.
         */
        i = -i;
        op = _Py_SwappedOp[op];
    }

Now the function looks at the exponent of the float. Recall that a float can be written (ignoring sign) as significand * 2**exponent, where the significand represents a number between 0.5 and 1:

    (void) frexp(i, &exponent);
    if (exponent < 0 || (size_t)exponent < nbits) {
        i = 1.0;
        j = 2.0;
        goto Compare;
    }

This checks two things. If the exponent is less than 0 then the float is smaller than 1 (and so smaller in magnitude than any integer). Or, if the exponent is less than the number of bits in w, then we have v < |w|, since significand * 2**exponent is less than 2**nbits.
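
Both cases are easy to check with math.frexp:

>>> import math
>>> math.frexp(0.25)                             # exponent below zero, so |0.25| < 1
(0.5, -1)
>>> math.frexp(1000.5)[1], (2048).bit_length()   # exponent 10 < 12 bits, so 1000.5 < 2048
(10, 12)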

Failing these two checks, the function then looks at whether the exponent is greater than the number of bits in w. This shows that significand * 2**exponent is greater than 2**nbits and so v > |w|:

    if ((size_t)exponent > nbits) {
        i = 2.0;
        j = 1.0;
        goto Compare;
    }
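
And the opposite direction:

>>> import math
>>> math.frexp(5000.0)[1], (2047).bit_length()   # exponent 13 > 11 bits, so 5000.0 > 2047
(13, 11)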

If this check does not succeed, we know that the exponent of the float v is the same as the number of bits in the integer w.

The only way the two values can be compared now is to construct two new Python integers from v and w. The idea is to discard the fractional part of v, double the integer part, and then add one. w is also doubled, and these two new Python objects can be compared to give the correct return value. Using an example with small values, 4.65 < 4 would be determined by the comparison (2*4)+1 == 9 < 8 == (2*4) (returning False).

    {
        double fracpart;
        double intpart;
        PyObject *result = NULL;
        PyObject *one = NULL;
        PyObject *vv = NULL;
        PyObject *ww = w;

        // snip

        fracpart = modf(i, &intpart); // split i (the double that v mapped to)
        vv = PyLong_FromDouble(intpart);

        // snip

        if (fracpart != 0.0) {
            /* Shift left, and or a 1 bit into vv
             * to represent the lost fraction.
             */
            PyObject *temp;

            one = PyLong_FromLong(1);

            temp = PyNumber_Lshift(ww, one); // left-shift doubles an integer
            ww = temp;

            temp = PyNumber_Lshift(vv, one);
            vv = temp;

            temp = PyNumber_Or(vv, one); // a doubled integer is even, so this adds 1
            vv = temp;
        }
        // snip
    }
}

For brevity I have left out the additional error checking and garbage tracking Python has to do when it creates these new objects. Needless to say, this adds extra overhead and explains why the values highlighted in the question are significantly slower to compare than others.
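
The doubling trick itself is easy to reproduce in plain Python (a hand-rolled illustration, not CPython's own code):

>>> import math
>>> v, w = 4.65, 4
>>> frac, intpart = math.modf(v)
>>> vv = (2 * int(intpart)) | 1    # double the integer part, set the low bit for the lost fraction
>>> ww = 2 * w
>>> vv, ww
(9, 8)
>>> vv < ww                        # hence 4.65 < 4 is False
False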


Here is a summary of the checks performed by the comparison function.

Let v be a float and cast it as a C double. Now, if w is also a float:

- Check whether w is nan or inf. If so, handle this special case separately depending on the type of w.
- If not, compare v and w directly by their representations as C doubles.

If w is an integer:

- Extract the signs of v and w. If they are different then we know v and w are different and which is the greater value.
- (The signs are the same.) Check whether w has too many bits to be a float (more than size_t). If so, w has greater magnitude than v.
- Check if w has 48 or fewer bits. If so, it can be safely cast to a C double without losing its precision and compared with v.
- (w has more than 48 bits. We will now treat w as a positive integer, having changed the compare op as appropriate.)
- Consider the exponent of the float v. If the exponent is negative, then v is less than 1 and therefore less than any positive integer. Else, if the exponent is less than the number of bits in w then it must be less than w.
- If the exponent of v is greater than the number of bits in w then v is greater than w.
- (The exponent is the same as the number of bits in w.) The final check. Split v into its integer and fractional parts. Double the integer part and add 1 to compensate for the fractional part. Now double the integer w. Compare these two new integers instead to get the result.
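
Putting the summary together, the whole decision procedure can be modelled in a few lines of Python. This is only a sketch under simplifying assumptions (both values positive, only the < operator, and the name float_lt_int is mine, not CPython's):

import math

def float_lt_int(v, w):
    """Rough model of how CPython decides v < w for a positive float v and a
    positive int w. Illustrative only: the real float_richcompare also handles
    signs, inf/nan, integers too large for size_t, and all six operators."""
    nbits = w.bit_length()

    # Integers of 48 bits or fewer convert to a C double exactly.
    if nbits <= 48:
        return v < float(w)

    # Compare the float's exponent against the integer's bit count.
    _, exponent = math.frexp(v)
    if exponent < 0 or exponent < nbits:
        return True               # v < 2**exponent <= 2**(nbits-1) <= w
    if exponent > nbits:
        return False              # v >= 2**(exponent-1) >= 2**nbits > w

    # Exponent equals the bit count: the expensive final check.
    fracpart, intpart = math.modf(v)
    vv, ww = int(intpart), w
    if fracpart != 0.0:
        vv = (vv << 1) | 1        # double and record the lost fraction as a +1
        ww = ww << 1              # double w as well
    return vv < ww

Fed the slow pair from the question, it takes the final-check path and agrees with the built-in comparison:

>>> float_lt_int(562949953420000.7, 562949953421000)
True
>>> 562949953420000.7 < 562949953421000
True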