I want to find out how much time a certain function takes to execute in my C++ program on Linux. Afterwards, I want to make a speed comparison. I saw several time functions, but ended up with this one from boost::chrono:

process_user_cpu_clock, captures user-CPU time spent by the current process

Now, I am not clear: if I use the above function, will I get only the CPU time spent on that function?

Secondly, I could not find any example of using the above function. Could anyone please tell me how to use it?

P.S.: Right now, I am using std::chrono::system_clock::now() to get the time in seconds, but that gives me different results each time because of varying CPU load.


Current answer

A C++11 clean-up of Jahid's answer:

#include <chrono>
#include <iostream>
#include <thread>

void long_operation(int ms)
{
    /* Simulating a long, heavy operation. */
    std::this_thread::sleep_for(std::chrono::milliseconds(ms));
}

template<typename F, typename... Args>
double funcTime(F func, Args&&... args){
    std::chrono::high_resolution_clock::time_point t1 = 
        std::chrono::high_resolution_clock::now();
    func(std::forward<Args>(args)...);
    return std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::high_resolution_clock::now()-t1).count();
}

int main()
{
    std::cout<<"expect 150: "<<funcTime(long_operation,150)<<"\n";

    return 0;
}

Other answers

In C++11, this is very easy to do. You have to use std::chrono::high_resolution_clock from the <chrono> header.

Use it like this:

#include <chrono>

/* Only needed for the sake of this example. */
#include <iostream>
#include <thread>
    
void long_operation()
{
    /* Simulating a long, heavy operation. */

    using namespace std::chrono_literals;
    std::this_thread::sleep_for(150ms);
}

int main()
{
    using std::chrono::high_resolution_clock;
    using std::chrono::duration_cast;
    using std::chrono::duration;
    using std::chrono::milliseconds;

    auto t1 = high_resolution_clock::now();
    long_operation();
    auto t2 = high_resolution_clock::now();

    /* Getting number of milliseconds as an integer. */
    auto ms_int = duration_cast<milliseconds>(t2 - t1);

    /* Getting number of milliseconds as a double. */
    duration<double, std::milli> ms_double = t2 - t1;

    std::cout << ms_int.count() << "ms\n";
    std::cout << ms_double.count() << "ms\n";
    return 0;
}

This measures the duration of the function long_operation.

Possible output:

150ms
150.068ms

Working example: https://godbolt.org/z/oe5cMd
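
Note that high_resolution_clock measures elapsed wall-clock time. If you want the CPU time the process actually consumed (which is what the boost::chrono::process_user_cpu_clock from the question captures, and which is much less sensitive to overall system load), the standard <ctime> facility std::clock can be used instead. Below is a minimal sketch, not the Boost API, comparing the two measurements; work() is just a placeholder workload:

#include <chrono>
#include <cmath>
#include <ctime>
#include <iostream>

/* Placeholder workload, only for illustration. */
void work()
{
    volatile double x = 0;
    for (int i = 0; i < 10000000; ++i)
        x += std::sqrt(static_cast<double>(i));
}

int main()
{
    /* CPU time consumed by this process so far. */
    const std::clock_t c0 = std::clock();
    /* Wall-clock time; steady_clock is monotonic. */
    const auto w0 = std::chrono::steady_clock::now();

    work();

    const std::clock_t c1 = std::clock();
    const auto w1 = std::chrono::steady_clock::now();

    std::cout << "CPU time:  "
              << 1000.0 * (c1 - c0) / CLOCKS_PER_SEC << " ms\n";
    std::cout << "Wall time: "
              << std::chrono::duration<double, std::milli>(w1 - w0).count()
              << " ms\n";
    return 0;
}

On a loaded machine the wall time can be noticeably larger than the CPU time, while the CPU time stays comparatively stable.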

In C++11, this is very easy to do. We can use std::chrono::high_resolution_clock from the <chrono> header and write a helper that prints a method's execution time in an easy-to-read form.

For example, finding all prime numbers between 1 and 100 million takes about 1 minute and 40 seconds, so the execution time is printed as:

Execution Time: 1 Minutes, 40 Seconds, 715 MicroSeconds, 715000 NanoSeconds

The code is here:

#include <iostream>
#include <chrono>

using namespace std;
using namespace std::chrono;

typedef high_resolution_clock Clock;
typedef Clock::time_point ClockTime;

void findPrime(long n, string file);
void printExecutionTime(ClockTime start_time, ClockTime end_time);

int main()
{
    long n = long(1E+8);  // N = 100 million

    ClockTime start_time = Clock::now();

    // Write all the prime numbers from 1 to N to the file "prime.txt"
    findPrime(n, "C:\\prime.txt"); 

    ClockTime end_time = Clock::now();

    printExecutionTime(start_time, end_time);
}

void printExecutionTime(ClockTime start_time, ClockTime end_time)
{
    auto execution_time_ns = duration_cast<nanoseconds>(end_time - start_time).count();
    // Note: despite the "_ms" suffix, this value is in microseconds.
    auto execution_time_ms = duration_cast<microseconds>(end_time - start_time).count();
    auto execution_time_sec = duration_cast<seconds>(end_time - start_time).count();
    auto execution_time_min = duration_cast<minutes>(end_time - start_time).count();
    auto execution_time_hour = duration_cast<hours>(end_time - start_time).count();

    cout << "\nExecution Time: ";
    if(execution_time_hour > 0)
        cout << execution_time_hour << " Hours, ";
    if(execution_time_min > 0)
        cout << execution_time_min % 60 << " Minutes, ";
    if(execution_time_sec > 0)
        cout << execution_time_sec % 60 << " Seconds, ";
    if(execution_time_ms > 0)
        cout << execution_time_ms % long(1E+3) << " MicroSeconds, ";
    if(execution_time_ns > 0)
        cout << execution_time_ns % long(1E+6) << " NanoSeconds, ";
}
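
The answer above leaves findPrime undefined, so it does not link as shown. A naive trial-division version, given here only so the example can be compiled and timed (the original author's presumably faster implementation is not shown), could be appended to the same file; it relies on the using namespace std already present there:

#include <fstream>
#include <string>

// Naive illustrative implementation of the declared findPrime;
// the body is not part of the original answer.
void findPrime(long n, string file)
{
    ofstream out(file);
    for (long i = 2; i <= n; ++i)
    {
        bool is_prime = true;
        for (long d = 2; d * d <= i; ++d)
        {
            if (i % d == 0) { is_prime = false; break; }
        }
        if (is_prime)
            out << i << '\n';
    }
}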

Here is a function that will measure the execution time of any function passed as an argument:

#include <chrono>
#include <utility>

typedef std::chrono::high_resolution_clock::time_point TimeVar;

#define duration(a) std::chrono::duration_cast<std::chrono::nanoseconds>(a).count()
#define timeNow() std::chrono::high_resolution_clock::now()

template<typename F, typename... Args>
double funcTime(F func, Args&&... args){
    TimeVar t1=timeNow();
    func(std::forward<Args>(args)...);
    return duration(timeNow()-t1);
}

Usage example:

#include <iostream>
#include <algorithm>
#include <string>

typedef std::string String;

//first test function doing something
int countCharInString(String s, char delim){
    int count=0;
    String::size_type pos = s.find_first_of(delim);
    while ((pos = s.find_first_of(delim, pos)) != String::npos){
        count++;pos++;
    }
    return count;
}

//second test function doing the same thing in different way
int countWithAlgorithm(String s, char delim){
    return std::count(s.begin(),s.end(),delim);
}


int main(){
    std::cout<<"norm: "<<funcTime(countCharInString,"precision=10",'=')<<"\n";
    std::cout<<"algo: "<<funcTime(countWithAlgorithm,"precision=10",'=');
    return 0;
}

Output:

norm: 15555
algo: 2976

In Scott Meyers' book I found an example of a universal generic lambda expression that can be used to measure a function's execution time (C++14):

auto timeFuncInvocation = 
    [](auto&& func, auto&&... params) {
        // get time before function invocation
        const auto& start = std::chrono::high_resolution_clock::now();
        // function invocation using perfect forwarding
        std::forward<decltype(func)>(func)(std::forward<decltype(params)>(params)...);
        // get time after function invocation
        const auto& stop = std::chrono::high_resolution_clock::now();
        return stop - start;
     };

The problem is that you measure only a single execution, so the results can vary a lot. To get reliable results, you should measure a large number of executions. According to Andrei Alexandrescu's lecture at the code::dive 2015 conference, Writing Fast Code I:

Measured time: tm = t + tq + tn + to

where:

tm - measured (observed) time

t - the actual time of interest

tq - time added by quantization noise

tn - time added by various sources of noise

to - overhead time (measuring, looping, calling the function)

According to what he says later in the lecture, you should take the minimum over the large number of executions as your result. I encourage you to watch the lecture, where he explains why.
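
A minimal sketch of that idea, assuming it is acceptable to repeat the call with the same arguments (the helper name minTimeOverRuns is made up for this example; it needs C++14, like the lambda above):

#include <algorithm>
#include <chrono>
#include <cstddef>

// Run func(args...) `runs` times and return the smallest measured duration,
// following the "take the minimum" advice above.
template <typename F, typename... Args>
auto minTimeOverRuns(std::size_t runs, F&& func, Args&&... args)
{
    using clock = std::chrono::high_resolution_clock;
    auto best = clock::duration::max();
    for (std::size_t i = 0; i < runs; ++i) {
        const auto start = clock::now();
        func(args...);              // arguments are reused on every run
        const auto stop = clock::now();
        best = std::min(best, stop - start);
    }
    return best;
}

For example, minTimeOverRuns(1000, countWithAlgorithm, String("precision=10"), '=') would return the best observed duration as a high_resolution_clock::duration, which can then be converted with duration_cast as in the answers above.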

There is also a very good library from Google: https://github.com/google/benchmark. It is simple to use and powerful. You can find several talks by Chandler Carruth on YouTube where he uses this library in practice, for example CppCon 2017: Chandler Carruth, "Going Nowhere Faster".

Usage example (of the lambda above, averaged over many runs):

#include <iostream>
#include <chrono>
#include <vector>

using std::chrono::high_resolution_clock;

auto timeFuncInvocation = 
    [](auto&& func, auto&&... params) {
        // get time before function invocation
        const auto& start = high_resolution_clock::now();
        // function invocation using perfect forwarding
        for(auto i = 0; i < 100000/*largeNumber*/; ++i) {
            std::forward<decltype(func)>(func)(std::forward<decltype(params)>(params)...);
        }
        // get time after function invocation
        const auto& stop = high_resolution_clock::now();
        return (stop - start)/100000/*largeNumber*/;
     };

void f(std::vector<int>& vec) {
    vec.push_back(1);
}

void f2(std::vector<int>& vec) {
    vec.emplace_back(1);
}
int main()
{
    std::vector<int> vec;
    std::vector<int> vec2;
    std::cout << timeFuncInvocation(f, vec).count() << std::endl;
    std::cout << timeFuncInvocation(f2, vec2).count() << std::endl;
    std::vector<int> vec3;
    vec3.reserve(100000);
    std::vector<int> vec4;
    vec4.reserve(100000);
    std::cout << timeFuncInvocation(f, vec3).count() << std::endl;
    std::cout << timeFuncInvocation(f2, vec4).count() << std::endl;
    return 0;
}

Edit: Of course, you always need to remember that your compiler may or may not optimize something away. Tools like perf can be useful in such cases.
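
For completeness, here is what a minimal micro-benchmark with the Google Benchmark library mentioned above might look like; benchmark::DoNotOptimize is the library's way of keeping the compiler from optimizing the measured work away (build with something like -lbenchmark -lpthread):

#include <benchmark/benchmark.h>
#include <vector>

/* Measures push_back into a fresh vector on every iteration. */
static void BM_PushBack(benchmark::State& state) {
    for (auto _ : state) {
        std::vector<int> v;
        v.push_back(1);
        /* Keep the result observable so it is not optimized away. */
        benchmark::DoNotOptimize(v.data());
        benchmark::ClobberMemory();
    }
}
BENCHMARK(BM_PushBack);

BENCHMARK_MAIN();

The library takes care of repetition, statistical aggregation, and reporting, which addresses the single-measurement problem discussed earlier.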