Q: Is exception handling in Java really slow?

Conventional wisdom, along with plenty of Google results, says that exception logic should not be used for normal program flow in Java. Two reasons are usually given:

it is really slow - even an order of magnitude slower than regular code (the reasons given vary),

and

it is messy, because people expect only errors to be handled in exception code.

This question is about the first point.

As an example, this page describes Java exception handling as "very slow" and relates the slowness to the creation of the exception message string - "this string is then used in creating the exception object that is thrown. This is not fast." The article Effective Exception Handling in Java says that "the reason for this is due to the object creation aspect of exception handling, which thereby makes throwing exceptions inherently slow". Another reason out there is that the stack trace generation is what slows it down.

My testing (using Java 1.6.0_07, Java HotSpot 10.0, on 32-bit Linux) indicates that exception handling is no slower than regular code. I tried running a method in a loop that executes some code. At the end of the method, I use a boolean to indicate whether to return or throw. This way the actual processing is the same. I tried running the methods in different orders and averaging my test times, thinking it may have been the JVM warming up. In all my tests, the throw was at least as fast as the return, if not faster (up to 3.1% faster). I am completely open to the possibility that my tests were wrong, but I haven't seen anything in the way of code samples, test comparisons, or results from the last year or two that show exception handling in Java to actually be slow.
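For illustration only, here is a minimal sketch of the kind of harness described above; the class and method names are mine, not the original test code. The method does the same work on both paths, and a boolean decides whether the result leaves via a return or via a throw.

public class ThrowVsReturnBenchmark {

    // Carries the result out of the method when the "throw" path is used.
    static class Result extends RuntimeException {
        final int value;
        Result(int value) { this.value = value; }
    }

    // Identical processing on both paths; only the exit mechanism differs.
    static int exercise(int i, boolean useThrow) {
        int v = (i * 31) ^ (i >>> 3);  // some arbitrary work
        if (useThrow) {
            throw new Result(v);
        }
        return v;
    }

    static void run(boolean useThrow, int iterations) {
        long start = System.nanoTime();
        long sum = 0;
        for (int i = 1; i <= iterations; i++) {
            if (useThrow) {
                try {
                    exercise(i, true);
                } catch (Result r) {
                    sum += r.value;
                }
            } else {
                sum += exercise(i, false);
            }
        }
        long ms = (System.nanoTime() - start) / 1_000_000;
        System.out.println((useThrow ? "throw" : "return") + ": " + ms + " ms (sum " + sum + ")");
    }

    public static void main(String[] args) {
        // Alternate the order across several rounds to average out JIT warm-up.
        for (int round = 0; round < 4; round++) {
            run(round % 2 == 0, 10_000_000);
            run(round % 2 != 0, 10_000_000);
        }
    }
}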

What put me down this path was an API I need to use that throws exceptions as part of its normal control logic. I wanted to correct them in their usage, but now I may not be able to. Will I instead have to praise them for their forward thinking?

In the paper Efficient Java Exception Handling in Just-in-Time Compilation, the authors suggest that the mere presence of exception handlers, even when no exception is thrown, is enough to prevent the JIT compiler from optimizing the code properly, thereby slowing it down. I haven't tested this theory yet.


Current answer

I believe the first article refers to walking the call stack and creating a stack trace as the expensive part, and while the second article doesn't say it, I think that is the most expensive part of object creation. John Rose has an article describing different techniques for speeding up exceptions (pre-allocating and reusing exceptions, exceptions without stack traces, and so on).
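To illustrate the techniques mentioned (this is not code from John Rose's article, just the commonly seen shape of the pattern), a pre-allocated, stack-trace-free exception can look something like this:

public class FastControlFlowException extends RuntimeException {

    // One shared instance, created once and rethrown whenever needed.
    public static final FastControlFlowException INSTANCE = new FastControlFlowException();

    private FastControlFlowException() {
        // message, cause, enableSuppression, writableStackTrace
        super("control-flow signal", null, false, false);
    }

    // Even without the constructor flags above, overriding fillInStackTrace()
    // skips the expensive stack walk when an instance is created.
    @Override
    public synchronized Throwable fillInStackTrace() {
        return this;
    }
}

Reusing a single instance like this means the exception carries no per-throw information, which is part of why it should stay a last resort.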

But still, I think this should be considered a necessary evil, a last resort. John's reason for doing it is to emulate features of other languages that aren't (yet) available in the JVM. You should NOT get into the habit of using exceptions for control flow, and especially not for performance reasons! As you mention yourself in point 2, it risks masking serious bugs in your code, and it will be harder for new programmers to maintain.

Microbenchmarks in Java are surprisingly hard to get right (so I've been told), especially once you get into JIT territory, so I really doubt that using exceptions is faster than a "return" in real life. For example, I suspect you have somewhere between 2 and 5 stack frames in your test. Now imagine your code will be invoked by a JSF component deployed by JBoss. Now you may have a stack trace that is several pages long.

Perhaps you could post your test code?

Other answers

Aleksey Shipilëv did a very thorough analysis in which he benchmarked Java exceptions under various combinations of conditions:

Newly created exceptions vs pre-created exceptions
Stack trace enabled vs disabled
Stack trace requested vs never requested
Caught at the top level vs rethrown at every level vs chained/wrapped at every level
Various levels of Java call stack depth
No inlining optimizations vs extreme inlining vs default settings
User-defined fields read vs not read

He also compares them against the performance of checking an error code, at various error frequency levels.

The conclusions (quoted verbatim from his post) are:

Truly exceptional exceptions are beautifully performant. If you use them as designed, and only communicate the truly exceptional cases among the overwhelmingly large number of non-exceptional cases handled by regular code, then using exceptions is the performance win.

The performance costs of exceptions have two major components: stack trace construction when Exception is instantiated and stack unwinding during Exception throw.

Stack trace construction costs are proportional to stack depth at the moment of exception instantiation. That is already bad because who on Earth knows the stack depth at which this throwing method would be called? Even if you turn off the stack trace generation and/or cache the exceptions, you can only get rid of this part of the performance cost.

Stack unwinding costs depend on how lucky we are with bringing the exception handler closer in the compiled code. Carefully structuring the code to avoid deep exception handlers lookup is probably helping us get luckier.

Should we eliminate both effects, the performance cost of exceptions is that of the local branch. No matter how beautiful it sounds, that does not mean you should use Exceptions as the usual control flow, because in that case you are at the mercy of optimizing compiler! You should only use them in truly exceptional cases, where the exception frequency amortizes the possible unlucky cost of raising the actual exception.

The optimistic rule-of-thumb seems to be 10^-4 frequency for exceptions is exceptional enough. That, of course, depends on the heavy-weights of the exceptions themselves, the exact actions taken in exception handlers, etc.

The upshot is that you pay nothing when no exception is thrown, so when the exceptional condition is sufficiently rare, exception handling is faster than checking with an if every time. The full post is very much worth reading.
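As a concrete, purely illustrative sketch of the two styles being weighed here (the names are made up, and the "not found" case is assumed to be rare):

import java.util.Arrays;

class RareFailureStyles {

    // Stack-trace-free exception for the rare "not found" case.
    static final class MissingKeyException extends RuntimeException {
        MissingKeyException() {
            super("missing key", null, false, false);
        }
    }

    // Style 1: sentinel value; every caller branches on the result.
    static int indexOfOrMinusOne(int[] sorted, int key) {
        int idx = Arrays.binarySearch(sorted, key);
        return idx >= 0 ? idx : -1;
    }

    // Style 2: exception for the rare miss; callers handle only the hit path.
    static int indexOf(int[] sorted, int key) {
        int idx = Arrays.binarySearch(sorted, key);
        if (idx < 0) {
            throw new MissingKeyException();
        }
        return idx;
    }

    static long sumOfIndices(int[] sorted, int[] keys) {
        long sum = 0;
        for (int key : keys) {
            try {
                sum += indexOf(sorted, key);
            } catch (MissingKeyException e) {
                // rare path: skip keys that are not present
            }
        }
        return sum;
    }
}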

For your reference, I extended the experiment Mecki did:

method1 took 1733 ms, result was 2
method2 took 1248 ms, result was 2
method3 took 83997 ms, result was 2
method4 took 1692 ms, result was 2
method5 took 60946 ms, result was 2
method6 took 25746 ms, result was 2

The first 3 are the same as Mecki's (my laptop is significantly slower).

method4 is the same as method3, except that it creates a new Integer(1) instead of throwing a new Exception().

method5 is like method3, except that it creates the new Exception() but does not throw it.

method6 is like method3, except that it throws a pre-created exception (held in an instance variable) rather than creating a new one.
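The code for these variants isn't included above, so here is a hedged reconstruction of what method4 through method6 probably looked like, built on Mecki's method3 (which throws on roughly half of the iterations); the timing harness is the same as in Mecki's test.

// A reconstruction for illustration only, not the poster's original code.
public class ExtendedSnippet {
    int value;

    // Pre-created exception reused by method6.
    private static final Exception PRECREATED = new Exception();

    // method3 (Mecki's baseline): throws a new Exception on every odd i.
    public void method3(int i) throws Exception {
        value = ((value + i) / i) << 1;
        if ((i & 0x1) == 1) {
            throw new Exception();
        }
    }

    // method4: same work, but creates a new Integer(1) instead of throwing.
    // (new Integer(1) matches the original description; it is deprecated in modern Java.)
    public void method4(int i) {
        value = ((value + i) / i) << 1;
        if ((i & 0x1) == 1) {
            Integer dummy = new Integer(1);  // plain object creation, no throw
        }
    }

    // method5: creates the new Exception() but never throws it,
    // so it pays only for construction (including the stack trace).
    public void method5(int i) {
        value = ((value + i) / i) << 1;
        if ((i & 0x1) == 1) {
            Exception e = new Exception();
        }
    }

    // method6: throws a pre-created exception instead of creating one,
    // so it pays only for the throw and the unwind.
    public void method6(int i) throws Exception {
        value = ((value + i) / i) << 1;
        if ((i & 0x1) == 1) {
            throw PRECREATED;
        }
    }
}

Read against the numbers above, method5 (creation without a throw) being far slower than method6 (throw without creation) is what supports the claim that constructing the exception, not throwing it, is the expensive part.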

In Java, most of the cost of throwing an exception is the time spent collecting the stack trace, which happens when the exception object is created. The actual cost of throwing the exception, while significant, is much smaller than the cost of creating it.
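A quick way to see that the trace is captured at construction time rather than at the throw (my own illustration, not from the answer above):

public class TraceCaptureDemo {

    static RuntimeException buildError() {
        return new RuntimeException("captured here");  // the stack walk happens here
    }

    public static void main(String[] args) {
        RuntimeException e = buildError();
        try {
            throw e;  // no new trace is collected at the throw site
        } catch (RuntimeException caught) {
            // The top frame names buildError, showing where the trace was captured.
            System.out.println(caught.getStackTrace()[0]);
        }
    }
}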

With the code attached below, I get completely different results for @Mecki's test case on JDK 15. It basically runs the code in 5 loops, the first of which is a bit shorter to give the VM some time to warm up.

The results:

Loop 1 10000 cycles
method1 took 1 ms, result was 2
method2 took 0 ms, result was 2
method3 took 22 ms, result was 2
method4 took 22 ms, result was 2
method5 took 24 ms, result was 2
Loop 2 10000000 cycles
method1 took 39 ms, result was 2
method2 took 39 ms, result was 2
method3 took 1558 ms, result was 2
method4 took 1640 ms, result was 2
method5 took 1717 ms, result was 2
Loop 3 10000000 cycles
method1 took 49 ms, result was 2
method2 took 48 ms, result was 2
method3 took 126 ms, result was 2
method4 took 88 ms, result was 2
method5 took 87 ms, result was 2
Loop 4 10000000 cycles
method1 took 34 ms, result was 2
method2 took 34 ms, result was 2
method3 took 33 ms, result was 2
method4 took 98 ms, result was 2
method5 took 58 ms, result was 2
Loop 5 10000000 cycles
method1 took 34 ms, result was 2
method2 took 33 ms, result was 2
method3 took 33 ms, result was 2
method4 took 48 ms, result was 2
method5 took 49 ms, result was 2
package hs.jfx.eventstream.api;

public class Snippet {
  int value;


  public int getValue() {
      return value;
  }

  public void reset() {
      value = 0;
  }

  // Calculates without exception
  public void method1(int i) {
      value = ((value + i) / i) << 1;
      // Will never be true
      if ((i & 0xFFFFFFF) == 1000000000) {
          System.out.println("You'll never see this!");
      }
  }

  // Could in theory throw one, but never will
  public void method2(int i) throws Exception {
      value = ((value + i) / i) << 1;
      // Will never be true
      if ((i & 0xFFFFFFF) == 1000000000) {
          throw new Exception();
      }
  }

  private static final NoStackTraceRuntimeException E = new NoStackTraceRuntimeException();

  // This one will regularly throw one
  public void method3(int i) throws NoStackTraceRuntimeException {
      value = ((value + i) / i) << 1;
      // i & 1 is as fast to calculate as i & 0xFFFFFFF; both are an AND
      // operation between two integers. The size of the constant plays no
      // role: a 32-bit AND always processes all 32 bits.
      if ((i & 0x1) == 1) {
          throw E;
      }
  }

  // This one will regularly throw one
  public void method4(int i) throws NoStackTraceThrowable {
      value = ((value + i) / i) << 1;
      // i & 1 is as fast to calculate as i & 0xFFFFFFF; both are an AND
      // operation between two integers. The size of the constant plays no
      // role: a 32-bit AND always processes all 32 bits.
      if ((i & 0x1) == 1) {
          throw new NoStackTraceThrowable();
      }
  }

  // This one will regularly throw one
  public void method5(int i) throws NoStackTraceRuntimeException {
      value = ((value + i) / i) << 1;
      // i & 1 is as fast to calculate as i & 0xFFFFFFF; both are an AND
      // operation between two integers. The size of the constant plays no
      // role: a 32-bit AND always processes all 32 bits.
      if ((i & 0x1) == 1) {
          throw new NoStackTraceRuntimeException();
      }
  }

  public static void main(String[] args) {
    for(int k = 0; k < 5; k++) {
      int cycles = 10000000;
      if(k == 0) {
        cycles = 10000;
        try {
          Thread.sleep(500);
        }
        catch(InterruptedException e) {
          // interruption is not expected during the warm-up pause
          e.printStackTrace();
        }
      }
      System.out.println("Loop " + (k + 1) + " " + cycles + " cycles");
      int i;
      long l;
      Snippet t = new Snippet();

      l = System.currentTimeMillis();
      t.reset();
      for (i = 1; i < cycles; i++) {
          t.method1(i);
      }
      l = System.currentTimeMillis() - l;
      System.out.println(
          "method1 took " + l + " ms, result was " + t.getValue()
      );

      l = System.currentTimeMillis();
      t.reset();
      for (i = 1; i < cycles; i++) {
          try {
              t.method2(i);
          } catch (Exception e) {
              System.out.println("You'll never see this!");
          }
      }
      l = System.currentTimeMillis() - l;
      System.out.println(
          "method2 took " + l + " ms, result was " + t.getValue()
      );

      l = System.currentTimeMillis();
      t.reset();
      for (i = 1; i < cycles; i++) {
          try {
              t.method3(i);
          } catch (NoStackTraceRuntimeException e) {
            // always comes here
          }
      }
      l = System.currentTimeMillis() - l;
      System.out.println(
          "method3 took " + l + " ms, result was " + t.getValue()
      );


      l = System.currentTimeMillis();
      t.reset();
      for (i = 1; i < cycles; i++) {
          try {
              t.method4(i);
          } catch (NoStackTraceThrowable e) {
            // always comes here
          }
      }
      l = System.currentTimeMillis() - l;
      System.out.println( "method4 took " + l + " ms, result was " + t.getValue() );


      l = System.currentTimeMillis();
      t.reset();
      for (i = 1; i < cycles; i++) {
          try {
              t.method5(i);
          } catch (RuntimeException e) {
            // always comes here
          }
      }
      l = System.currentTimeMillis() - l;
      System.out.println( "method5 took " + l + " ms, result was " + t.getValue() );
    }
  }

  public static class NoStackTraceRuntimeException extends RuntimeException {
    public NoStackTraceRuntimeException() {
        super("my special throwable", null, false, false);
    }
  }

  public static class NoStackTraceThrowable extends Throwable {
    public NoStackTraceThrowable() {
        super("my special throwable", null, false, false);
    }
  }
}

I did some performance testing with JVM 1.5, and using exceptions was at least twice as slow. On average, the execution time of a trivially small method more than tripled (3x) with exceptions. A trivially small loop that had to catch the exception saw a 2x increase in self-time.

I've seen similar numbers in production code as well as in micro benchmarks.

Exceptions should definitely NOT be used for anything that's called frequently. Throwing thousands of exceptions a second would cause a huge bottleneck.

For example, using Integer.parseInt(...) to find all the bad values in a very large text file - very bad idea. (I have seen this utility method kill performance in production code.)
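As an illustration of that anti-pattern (my own sketch, not the poster's code): catching NumberFormatException for every malformed line, versus pre-validating lines so that the exception path stays rare.

import java.util.List;

class BadValueScan {

    // Anti-pattern: relies on NumberFormatException for every malformed line.
    static int countBadWithExceptions(List<String> lines) {
        int bad = 0;
        for (String line : lines) {
            try {
                Integer.parseInt(line.trim());
            } catch (NumberFormatException e) {
                bad++;  // thousands of throws on a dirty file
            }
        }
        return bad;
    }

    // Cheaper on dirty input: reject obviously malformed lines up front,
    // and only fall back to parseInt for plausible candidates.
    static int countBadWithPrecheck(List<String> lines) {
        int bad = 0;
        for (String line : lines) {
            String s = line.trim();
            if (!looksLikeInt(s)) {
                bad++;
                continue;
            }
            try {
                Integer.parseInt(s);
            } catch (NumberFormatException e) {
                bad++;  // rare: only out-of-range values remain
            }
        }
        return bad;
    }

    private static boolean looksLikeInt(String s) {
        if (s.isEmpty()) return false;
        int start = (s.charAt(0) == '-' || s.charAt(0) == '+') ? 1 : 0;
        if (start == s.length()) return false;
        for (int i = start; i < s.length(); i++) {
            if (!Character.isDigit(s.charAt(i))) return false;
        }
        return true;
    }
}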

Using an exception to report a bad value on a user GUI form is probably not that bad from a performance standpoint.

Whether or not it's a good design practice, I'd follow the rule: if the error is normal/expected, use a return value. If it's abnormal, use an exception. For example: when reading user input, bad values are normal - use an error code. When passing a value to an internal utility function, bad values should already have been filtered out by the calling code - use an exception.
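A small sketch of that rule (the names parseUserInput and scaleValidated are hypothetical): a parser for user input that reports bad values through its return type, and an internal helper that treats a bad value as a programming error.

import java.util.OptionalInt;

class ErrorStyle {

    // Normal/expected failure: bad user input is part of everyday operation,
    // so report it through the return value instead of an exception.
    static OptionalInt parseUserInput(String text) {
        try {
            return OptionalInt.of(Integer.parseInt(text.trim()));
        } catch (NumberFormatException e) {
            return OptionalInt.empty();
        }
    }

    // Abnormal failure: by the time a value reaches this internal helper it
    // should already have been validated, so a bad value is a bug - throw.
    static int scaleValidated(int percent) {
        if (percent < 0 || percent > 100) {
            throw new IllegalArgumentException("percent out of range: " + percent);
        }
        return percent * 255 / 100;
    }
}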