I took Problem #12 from Project Euler as a programming exercise and compared my (certainly not optimal) implementations in C, Python, Erlang and Haskell. To get longer execution times, I search for the first triangle number with more than 1000 divisors instead of the 500 stated in the original problem.

The results are as follows:

C:

lorenzo@enzo:~/erlang$ gcc -lm -o euler12.bin euler12.c
lorenzo@enzo:~/erlang$ time ./euler12.bin
842161320

real    0m11.074s
user    0m11.070s
sys 0m0.000s

Python:

lorenzo@enzo:~/erlang$ time ./euler12.py 
842161320

real    1m16.632s
user    1m16.370s
sys 0m0.250s

Python with PyPy:

lorenzo@enzo:~/Downloads/pypy-c-jit-43780-b590cf6de419-linux64/bin$ time ./pypy /home/lorenzo/erlang/euler12.py 
842161320

real    0m13.082s
user    0m13.050s
sys 0m0.020s

Erlang:

lorenzo@enzo:~/erlang$ erlc euler12.erl 
lorenzo@enzo:~/erlang$ time erl -s euler12 solve
Erlang R13B03 (erts-5.7.4) [source] [64-bit] [smp:4:4] [rq:4] [async-threads:0] [hipe] [kernel-poll:false]

Eshell V5.7.4  (abort with ^G)
1> 842161320

real    0m48.259s
user    0m48.070s
sys 0m0.020s

Haskell:

lorenzo@enzo:~/erlang$ ghc euler12.hs -o euler12.hsx
[1 of 1] Compiling Main             ( euler12.hs, euler12.o )
Linking euler12.hsx ...
lorenzo@enzo:~/erlang$ time ./euler12.hsx 
842161320

real    2m37.326s
user    2m37.240s
sys 0m0.080s

Summary:

C: 100%
Python: 692% (118% with PyPy)
Erlang: 436% (135% thanks to RichardC)
Haskell: 1421%

I assume C has a big advantage because it uses long for the calculations rather than arbitrary-length integers as the other three do. It also doesn't need to load a runtime first (do the others?).

Question 1: Do Erlang, Python and Haskell lose speed because they use arbitrary-length integers, or don't they as long as the values stay below MAXINT?

Question 2: Why is Haskell so slow? Is there a compiler flag that turns off the brakes, or is it my implementation? (The latter is quite probable, as Haskell is a book with seven seals to me.)

Question 3: Can you offer me some hints on how to optimize these implementations without changing the way I determine the factors? Optimization in any way: nicer, faster, more "native" to the language.

Edit:

Question 4: Do my functional implementations permit LCO (last call optimization, a.k.a. tail recursion elimination) and hence avoid adding unnecessary frames to the call stack?

I really tried to implement the same algorithm as similarly as possible in the four languages, although I have to admit that my Haskell and Erlang knowledge is very limited.


Source code used:

#include <stdio.h>
#include <math.h>

int factorCount (long n)
{
    double square = sqrt (n);
    int isquare = (int) square;
    int count = isquare == square ? -1 : 0;
    long candidate;
    for (candidate = 1; candidate <= isquare; candidate ++)
        if (0 == n % candidate) count += 2;
    return count;
}

int main ()
{
    long triangle = 1;
    int index = 1;
    while (factorCount (triangle) < 1001)
    {
        index ++;
        triangle += index;
    }
    printf ("%ld\n", triangle);
}

#! /usr/bin/env python3.2

import math

def factorCount (n):
    square = math.sqrt (n)
    isquare = int (square)
    count = -1 if isquare == square else 0
    for candidate in range (1, isquare + 1):
        if not n % candidate: count += 2
    return count

triangle = 1
index = 1
while factorCount (triangle) < 1001:
    index += 1
    triangle += index

print (triangle)

-module (euler12).
-compile (export_all).

factorCount (Number) -> factorCount (Number, math:sqrt (Number), 1, 0).

factorCount (_, Sqrt, Candidate, Count) when Candidate > Sqrt -> Count;

factorCount (_, Sqrt, Candidate, Count) when Candidate == Sqrt -> Count + 1;

factorCount (Number, Sqrt, Candidate, Count) ->
    case Number rem Candidate of
        0 -> factorCount (Number, Sqrt, Candidate + 1, Count + 2);
        _ -> factorCount (Number, Sqrt, Candidate + 1, Count)
    end.

nextTriangle (Index, Triangle) ->
    Count = factorCount (Triangle),
    if
        Count > 1000 -> Triangle;
        true -> nextTriangle (Index + 1, Triangle + Index + 1)  
    end.

solve () ->
    io:format ("~p~n", [nextTriangle (1, 1) ] ),
    halt (0).

factorCount number = factorCount' number isquare 1 0 - (fromEnum $ square == fromIntegral isquare)
    where square = sqrt $ fromIntegral number
          isquare = floor square

factorCount' number sqrt candidate count
    | fromIntegral candidate > sqrt = count
    | number `mod` candidate == 0 = factorCount' number sqrt (candidate + 1) (count + 2)
    | otherwise = factorCount' number sqrt (candidate + 1) count

nextTriangle index triangle
    | factorCount triangle > 1000 = triangle
    | otherwise = nextTriangle (index + 1) (triangle + index + 1)

main = print $ nextTriangle 1 1

Current answer

There are some problems with the Erlang implementation. As a baseline for the following, the execution time I measured for the unmodified Erlang program was 47.6 seconds, compared to 12.7 seconds for the C code.

(Edit: As of Erlang/OTP version 24, in 2021, Erlang has an automatic JIT compiler, and the old +native compiler option is no longer supported or needed. I keep the text below as a historical document. The remarks about export_all remain valid for the JIT's ability to generate good code.)

The first thing you should do if you want to run computationally intensive Erlang code is to use native code. Compiling with erlc +native euler12 got the time down to 41.3 seconds. This is however a much lower speedup (just 15%) than expected from native compilation on this kind of code, and the problem is your use of -compile(export_all). This is useful for experimentation, but the fact that all functions are potentially reachable from the outside causes the native compiler to be very conservative. (The normal BEAM emulator is not that much affected.) Replacing this declaration with -export([solve/0]). gives a much better speedup: 31.5 seconds (almost 35% from the baseline).

But the code itself has a problem: for every iteration of the factorCount loop, you perform this test:

factorCount (_, Sqrt, Candidate, Count) when Candidate == Sqrt -> Count + 1;

The C code doesn't do this. In general, it can be tricky to make a fair comparison between different implementations of the same code, particularly if the algorithm is numerical, because you need to be sure that they are really doing the same thing. A slight rounding error in one implementation, caused by a type cast somewhere, may make it perform many more iterations than another even though both eventually arrive at the same result.
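To illustrate that concern with a concrete example in C: an integer-only perfect-square test avoids trusting the float comparison at all. The helper name is_perfect_square is my own and not part of any of the benchmarked programs; this is just a sketch of the idea.

#include <math.h>

/* Hypothetical helper, not part of the benchmarked programs: decides
 * whether n is a perfect square without relying on an exact floating-
 * point comparison. */
static int is_perfect_square(long n)
{
    long r = (long) sqrt((double) n);     /* may be off by one either way */
    while (r * r > n) r--;                /* undo an upward rounding error */
    while ((r + 1) * (r + 1) <= n) r++;   /* undo a downward rounding error */
    return r * r == n;
}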

To eliminate this possible source of error (and get rid of the extra test in each iteration), I rewrote the factorCount function as follows, closely modelled on the C code:

factorCount (N) ->
    Sqrt = math:sqrt (N),
    ISqrt = trunc(Sqrt),
    if ISqrt == Sqrt -> factorCount (N, ISqrt, 1, -1);
       true          -> factorCount (N, ISqrt, 1, 0)
    end.

factorCount (_N, ISqrt, Candidate, Count) when Candidate > ISqrt -> Count;
factorCount ( N, ISqrt, Candidate, Count) ->
    case N rem Candidate of
        0 -> factorCount (N, ISqrt, Candidate + 1, Count + 2);
        _ -> factorCount (N, ISqrt, Candidate + 1, Count)
    end.

This rewrite, dropping export_all, plus native compilation, gave me the following run time:

$ erlc +native euler12.erl
$ time erl -noshell -s euler12 solve
842161320

real    0m19.468s
user    0m19.450s
sys 0m0.010s

which is not too bad compared to the C code:

$ time ./a.out 
842161320

real    0m12.755s
user    0m12.730s
sys 0m0.020s

Considering that Erlang is not at all geared towards writing numerical code, being only 50% slower than C on a program like this is pretty good.

Finally, regarding your questions:

Question 1: Do Erlang, Python and Haskell lose speed because they use arbitrary-length integers, or don't they as long as the values stay below MAXINT?

Yes, somewhat. In Erlang, there is no way of saying "use 32/64-bit arithmetic with wrap-around", so unless the compiler can prove some bounds on your integers (and it usually can't), it must check all computations to see if they can fit in a single tagged word or if it has to turn them into heap-allocated bignums. Even if no bignums are ever used in practice at runtime, these checks will have to be performed. On the other hand, that means you know that the algorithm will never fail because of an unexpected integer wraparound if you suddenly give it larger inputs than before.

Question 4: Do my functional implementations permit LCO and hence avoid adding unnecessary frames to the call stack?

Yes, your Erlang code is correct with respect to last call optimization.

Other answers

Take a look at this blog. Over the past year or so the author has done a few of the Project Euler problems in Haskell and Python, and he generally finds Haskell to be much faster. I think that between those languages it has more to do with your fluency and coding style.

When it comes to Python speed, you are using the wrong implementation! Try PyPy; for this kind of thing you will find it to be much, much faster.

With Haskell, you really don't need to think about recursion explicitly.

factorCount number = foldr factorCount' 0 [1..isquare] -
                     (fromEnum $ square == fromIntegral isquare)
    where
      square = sqrt $ fromIntegral number
      isquare = floor square
      factorCount' candidate
        | number `rem` candidate == 0 = (2 +)
        | otherwise = id

triangles :: [Int]
triangles = scanl1 (+) [1,2..]

main = print . head $ dropWhile ((< 1001) . factorCount) triangles

In the code above, I have replaced the explicit recursion from @Thomas' answer with ordinary list operations. The code still does exactly the same thing, without us having to worry about tail recursion. It runs slightly slower (~7.49 s) than the version in @Thomas' answer (~7.04 s) on my machine with GHC 7.6.2, while the C version from @Raedwulf runs in ~3.15 s. It seems GHC has improved over the past year.

PS: I know this is an old question; I stumbled upon it via a Google search (I've forgotten what I was searching for by now...). I just wanted to comment on the question about LCO and to express my general feelings about Haskell. I would have commented on the answer above, but comments do not allow code blocks.

Question 1: Do Erlang, Python and Haskell lose speed because they use arbitrary-length integers, or don't they as long as the values stay below MAXINT?

For Erlang, the answer to the first question is no. The last question is answered by using Erlang appropriately, as in:

http://bredsaal.dk/learning-erlang-using-projecteuler-net

Since it is faster than your initial C example, I would guess there are numerous problems, as others have already covered in detail.

This Erlang module executes in about 5 seconds on a cheap netbook... It uses the network threads model in Erlang and, as such, demonstrates how to take advantage of the event model. It could be distributed over many nodes. And it is fast. Not my code.

-module(p12dist).  
-author("Jannich Brendle, jannich@bredsaal.dk, http://blog.bredsaal.dk").  
-compile(export_all).

server() ->  
  server(1).

server(Number) ->  
  receive {getwork, Worker_PID} -> Worker_PID ! {work,Number,Number+100},  
  server(Number+101);  
  {result,T} -> io:format("The result is: ~w.~n", [T]);
  _ -> server(Number)  
  end.

worker(Server_PID) ->  
  Server_PID ! {getwork, self()},  
  receive {work,Start,End} -> solve(Start,End,Server_PID)  
  end,  
  worker(Server_PID).

start() ->  
  Server_PID = spawn(p12dist, server, []),  
  spawn(p12dist, worker, [Server_PID]),  
  spawn(p12dist, worker, [Server_PID]),  
  spawn(p12dist, worker, [Server_PID]),  
  spawn(p12dist, worker, [Server_PID]).

solve(N,End,_) when N =:= End -> no_solution;

solve(N,End,Server_PID) ->  
  T=round(N*(N+1)/2),
  case (divisor(T,round(math:sqrt(T))) > 500) of  
    true ->  
      Server_PID ! {result,T};  
    false ->  
      solve(N+1,End,Server_PID)  
  end.

divisors(N) ->  
  divisor(N,round(math:sqrt(N))).

divisor(_,0) -> 1;  
divisor(N,I) ->  
  case (N rem I) =:= 0 of  
  true ->  
    2+divisor(N,I-1);  
  false ->  
    divisor(N,I-1)  
  end.

The test below was run on an Intel(R) Atom(TM) CPU N270 @ 1.60GHz.

~$ time erl -noshell -s p12dist start

The result is: 76576500.

^C

BREAK: (a)bort (c)ontinue (p)roc info (i)nfo (l)oaded
       (v)ersion (k)ill (D)b-tables (d)istribution
a

real    0m5.510s
user    0m5.836s
sys 0m0.152s

Some more numbers and explanations for the C version. Apparently no one has done this in all these years. Remember to upvote this answer so it can get to the top for everyone to see and learn.

Step One: Benchmark of the author's programs

Laptop specs:

CPU i3 M380 (931 MHz, maximum power-saving mode)
4 GB RAM
Win7 64 bit
Microsoft Visual Studio 2012 Ultimate
Cygwin with gcc 4.9.3
Python 2.7.10

Commands:

compiling on VS x64 command prompt > `for /f %f in ('dir /b *.c') do cl /O2 /Ot /Ox %f -o %f_x64_vs2012.exe`
compiling on cygwin with gcc x64   > `for f in ./*.c; do gcc -m64 -O3 $f -o ${f}_x64_gcc.exe ; done`
time (unix tools) using cygwin > `for f in ./*.exe; do  echo "----------"; echo $f ; time $f ; done`


----------
$ time python ./original.py

real    2m17.748s
user    2m15.783s
sys     0m0.093s
----------
$ time ./original_x86_vs2012.exe

real    0m8.377s
user    0m0.015s
sys     0m0.000s
----------
$ time ./original_x64_vs2012.exe

real    0m8.408s
user    0m0.000s
sys     0m0.015s
----------
$ time ./original_x64_gcc.exe

real    0m20.951s
user    0m20.732s
sys     0m0.030s

File names follow the pattern integertype_architecture_compiler.exe

integertype is the same as in the original program for now (more on that later)
architecture is x86 or x64, depending on the compiler settings
compiler is gcc or vs2012

Step Two: Investigate, improve and benchmark again

VS is 250% faster than gcc. The two compilers should give similar speeds. Obviously something is wrong with the code or the compiler options. Let's investigate!

The first point of interest is the integer types. Conversions can be expensive, and consistency is important for better code generation and optimization. All integers should be of the same type.

Right now the code is a mixed mess of int and long. We are going to fix that. Which type should be used? The fastest one. We have to benchmark them all!

----------
$ time ./int_x86_vs2012.exe

real    0m8.440s
user    0m0.016s
sys     0m0.015s
----------
$ time ./int_x64_vs2012.exe

real    0m8.408s
user    0m0.016s
sys     0m0.015s
----------
$ time ./int32_x86_vs2012.exe

real    0m8.408s
user    0m0.000s
sys     0m0.015s
----------
$ time ./int32_x64_vs2012.exe

real    0m8.362s
user    0m0.000s
sys     0m0.015s
----------
$ time ./int64_x86_vs2012.exe

real    0m18.112s
user    0m0.000s
sys     0m0.015s
----------
$ time ./int64_x64_vs2012.exe

real    0m18.611s
user    0m0.000s
sys     0m0.015s
----------
$ time ./long_x86_vs2012.exe

real    0m8.393s
user    0m0.015s
sys     0m0.000s
----------
$ time ./long_x64_vs2012.exe

real    0m8.440s
user    0m0.000s
sys     0m0.015s
----------
$ time ./uint32_x86_vs2012.exe

real    0m8.362s
user    0m0.000s
sys     0m0.015s
----------
$ time ./uint32_x64_vs2012.exe

real    0m8.393s
user    0m0.015s
sys     0m0.015s
----------
$ time ./uint64_x86_vs2012.exe

real    0m15.428s
user    0m0.000s
sys     0m0.015s
----------
$ time ./uint64_x64_vs2012.exe

real    0m15.725s
user    0m0.015s
sys     0m0.015s
----------
$ time ./int_x64_gcc.exe

real    0m8.531s
user    0m8.329s
sys     0m0.015s
----------
$ time ./int32_x64_gcc.exe

real    0m8.471s
user    0m8.345s
sys     0m0.000s
----------
$ time ./int64_x64_gcc.exe

real    0m20.264s
user    0m20.186s
sys     0m0.015s
----------
$ time ./long_x64_gcc.exe

real    0m20.935s
user    0m20.809s
sys     0m0.015s
----------
$ time ./uint32_x64_gcc.exe

real    0m8.393s
user    0m8.346s
sys     0m0.015s
----------
$ time ./uint64_x64_gcc.exe

real    0m16.973s
user    0m16.879s
sys     0m0.030s

The integer types tested are int, long, int32_t, uint32_t, int64_t and uint64_t from #include <stdint.h>.

There are lots of integer types in C, plus signed/unsigned variants to play with, plus the choice of compiling as x86 or x64 (not to be confused with the actual integer size). That is a lot of versions to compile and run ^^
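As a sketch of how all those integertype variants might be produced from a single source file (an assumption on my part, not the author's actual build setup), every integer can be routed through one typedef and the type switched per build via a hypothetical EULER_INT macro:

#include <stdio.h>
#include <stdint.h>   /* int32_t, uint32_t, int64_t, uint64_t */
#include <math.h>

/* Hypothetical build knob: compile with e.g. -DEULER_INT=uint32_t for the
 * uint32 variant, -DEULER_INT=int64_t for the int64 variant, and so on. */
#ifndef EULER_INT
#define EULER_INT int
#endif
typedef EULER_INT num_t;

static int factorCount(num_t n)
{
    double square = sqrt((double) n);
    num_t isquare = (num_t) square;
    int count = ((double) isquare == square) ? -1 : 0;
    num_t candidate;
    for (candidate = 1; candidate <= isquare; candidate++)
        if (n % candidate == 0) count += 2;
    return count;
}

int main(void)
{
    num_t triangle = 1, index = 1;
    while (factorCount(triangle) < 1001) {
        index++;
        triangle += index;
    }
    printf("%lld\n", (long long) triangle);
    return 0;
}

Each row in the benchmark above would then correspond to one integer type combined with an x86 or x64 target.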

Step Three: Understanding the numbers

Ultimate conclusions:

32-bit integers are ~200% faster than their 64-bit equivalents.
Unsigned 64-bit integers are 25% faster than signed 64-bit integers (unfortunately, I have no explanation for that).

Trick question: "What are the sizes of int and long in C?"
The right answer is: the sizes of int and long in C are not well-defined!

From the C spec:

int is at least 16 bits (32 bits on all platforms tested here)
long is at least 32 bits and at least as wide as an int

From the gcc man page (-m32 and -m64 flags):

The 32-bit environment sets int, long and pointer to 32 bits and generates code that runs on any i386 system.
The 64-bit environment sets int to 32 bits and long and pointer to 64 bits and generates code for AMD's x86-64 architecture.

From the MSDN documentation (Data Type Ranges), https://msdn.microsoft.com/en-us/library/s3f49ktz%28v=vs.110%29.aspx:

int: 4 bytes, also known as signed int
long: 4 bytes, also known as long int and signed long int
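A quick way to see what your own compiler and target actually choose is to print the sizes; this tiny standalone check (my addition, not part of the benchmark) makes the gcc and MSDN statements above easy to verify under -m32, -m64, x86 or x64:

#include <stdio.h>

int main(void)
{
    /* The sizes are implementation-defined; this only reports what the
     * current compiler/architecture combination picked. */
    printf("int       : %u bytes\n", (unsigned) sizeof(int));
    printf("long      : %u bytes\n", (unsigned) sizeof(long));
    printf("long long : %u bytes\n", (unsigned) sizeof(long long));
    printf("pointer   : %u bytes\n", (unsigned) sizeof(void *));
    return 0;
}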

To sum it up: lessons learned

32-bit integers are faster than 64-bit integers.
Standard integer types are not well-defined in C or C++; they vary depending on compiler and architecture. When you need consistency and predictability, use the uint32_t integer family from #include <stdint.h>.
Speed issues solved. All the other languages are hundreds of percent behind; C and C++ win again! They always do. The next improvement will be multithreading with OpenMP :D (see the sketch below).
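On that last point, a minimal sketch of how OpenMP might be applied here (my assumption, not the author's code): parallelize the divisor-counting loop with a reduction and build with something like gcc -O3 -fopenmp. Note that opening a parallel region for every factorCount call carries overhead, so for loops this small a coarser-grained split (e.g. testing several triangle numbers concurrently) would likely pay off more; this only shows the mechanics.

#include <stdio.h>
#include <stdint.h>
#include <math.h>

/* Count the divisors of n, splitting the 1..isquare loop across threads.
 * The reduction(+:count) clause gives each thread a private counter and
 * sums them at the end. Without -fopenmp the pragma is ignored and the
 * code runs serially. */
static int factorCount(uint32_t n)
{
    uint32_t isquare = (uint32_t) sqrt((double) n);
    int count = (isquare * isquare == n) ? -1 : 0;
    int64_t candidate;  /* signed loop variable for older OpenMP versions */
    #pragma omp parallel for reduction(+:count)
    for (candidate = 1; candidate <= (int64_t) isquare; candidate++)
        if (n % (uint32_t) candidate == 0) count += 2;
    return count;
}

int main(void)
{
    uint32_t triangle = 1, index = 1;
    while (factorCount(triangle) < 1001) {
        index++;
        triangle += index;
    }
    printf("%u\n", (unsigned) triangle);
    return 0;
}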