Q: Is exception handling in Java really slow?

Conventional wisdom, along with plenty of Google results, says that exceptional logic shouldn't be used for normal program flow in Java. Two reasons are usually given:

it is really slow, even an order of magnitude slower than regular code (the reasons given vary),

and

it is messy, because people expect only errors to be handled in exceptional code.

This question is about the first point.

As an example, this page describes Java exception handling as "very slow" and relates the slowness to the creation of the exception message string - "this string is then used in creating the exception object that is thrown. This is not fast." The article Effective Exception Handling in Java says that "the reason for this is due to the object creation aspect of exception handling, which thereby makes throwing exceptions inherently slow". Another reason out there is that the stack trace generation is what slows it down.

My testing (using Java 1.6.0_07, Java HotSpot 10.0, on 32-bit Linux) indicates that exception handling is no slower than regular code. I tried running a method in a loop that executes some code. At the end of the method, I use a boolean to indicate whether to return or throw. This way the actual processing is the same. I tried running the methods in different orders and averaging my test times, thinking it may have been the JVM warming up. In all my tests, the throw was at least as fast as the return, if not faster (up to 3.1% faster). I am completely open to the possibility that my tests were wrong, but I haven't seen anything in the way of code samples, test comparisons, or results in the last year or two that show exception handling in Java to actually be slow.
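
For reference, a minimal sketch of the kind of harness described above (all class and method names here are invented for illustration; this is not the original test code):

public class ReturnVsThrowSketch {
  // Exit signal for the "throw" variant; carries the result instead of returning it.
  static class Done extends RuntimeException {
    final int result;
    Done(int result) { this.result = result; }
  }

  // Identical processing in both cases; the flag only decides how the result leaves the method.
  static int compute(int i, boolean useThrow) {
    int r = ((i * 31) ^ (i >>> 3)) + 7;   // the "actual processing"
    if (useThrow) {
      throw new Done(r);                  // exit via exception...
    }
    return r;                             // ...or via a normal return
  }

  public static void main(String[] args) {
    for (boolean useThrow : new boolean[] { false, true }) {
      long sum = 0;
      long start = System.currentTimeMillis();
      for (int i = 1; i <= 10000000; i++) {
        try {
          sum += compute(i, useThrow);
        } catch (Done d) {
          sum += d.result;
        }
      }
      long elapsed = System.currentTimeMillis() - start;
      System.out.println((useThrow ? "throw" : "return") + ": " + elapsed + " ms, sum = " + sum);
    }
  }
}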

What led me down this path was an API I need to use that throws exceptions as part of its normal control logic. I wanted to get them to correct their usage, but now I may not be able to. Should I instead praise them for their forward thinking?

In the paper Efficient Java Exception Handling in Just-in-Time Compilation, the authors suggest that the mere presence of exception handlers, even when no exceptions are thrown, is enough to prevent the JIT compiler from optimizing the code properly, thereby slowing it down. I haven't tested this theory yet.
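
One simple (hypothetical) way to probe that claim, not taken from the paper, would be to time the same loop body with and without an enclosing try/catch that can never fire; this is essentially what the method1/method2 comparison further down this thread does:

public class HandlerPresenceProbe {
  static int work(int i) {
    return ((i * 31) ^ (i >>> 3)) % 1000;
  }

  // Loop with no exception handler anywhere in the method.
  static long runPlain(int iterations) {
    long acc = 0;
    long start = System.currentTimeMillis();
    for (int i = 1; i <= iterations; i++) {
      acc += work(i);
    }
    System.out.println("acc = " + acc);       // keep the loop observable
    return System.currentTimeMillis() - start;
  }

  // Same loop, but wrapped in a try/catch that can never fire.
  static long runWithDeadHandler(int iterations) {
    long acc = 0;
    long start = System.currentTimeMillis();
    for (int i = 1; i <= iterations; i++) {
      try {
        acc += work(i);
      } catch (RuntimeException e) {          // never reached
        acc = -1;
      }
    }
    System.out.println("acc = " + acc);
    return System.currentTimeMillis() - start;
  }

  public static void main(String[] args) {
    int iterations = 50000000;
    System.out.println("no handler:   " + runPlain(iterations) + " ms");
    System.out.println("dead handler: " + runWithDeadHandler(iterations) + " ms");
  }
}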


Current answer

Even if throwing an exception isn't slow, it's still a bad idea to throw exceptions for normal program flow. Used this way, it's analogous to a GOTO...

I guess that doesn't really answer the question, though. I'd imagine that the "conventional" wisdom about throwing exceptions was true in earlier Java versions (< 1.4). Creating an exception required the VM to build the entire stack trace. A lot has changed in the VM since then to speed things up, and this is likely one area that has improved.

Other answers

HotSpot is quite capable of removing exception code for system-generated exceptions, as long as it is all inlined. However, explicitly created exceptions, and those otherwise not removed, spend a lot of time creating the stack trace. Override fillInStackTrace to see how this can affect performance.
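
For example, a minimal sketch of that fillInStackTrace trick (the class name here is invented; fillInStackTrace() itself is a standard, overridable method of Throwable):

// Sketch: an exception type that skips stack trace capture entirely.
public class FastException extends RuntimeException {
  public FastException(String message) {
    super(message);
  }

  @Override
  public synchronized Throwable fillInStackTrace() {
    return this;   // do not walk and record the call stack
  }
}

Throwing this in a tight loop and comparing it with a regular RuntimeException should show most of the creation cost disappearing; the four-argument Throwable constructor used by NoStackTraceRuntimeException further down achieves the same effect.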

Aleksey Shipilëv did a very thorough analysis in which he benchmarked Java exceptions under various combinations of conditions:

Newly created vs pre-created exceptions
Stack trace enabled vs disabled
Stack trace requested vs never requested
Caught at the top level vs rethrown at every level vs chained/wrapped at every level
Various levels of Java call stack depth
No inlining optimizations vs extreme inlining vs default settings
User-defined fields read vs not read

He also compared them with checking an error code at various levels of error frequency.

His conclusions (quoted verbatim from his post) were:

Truly exceptional exceptions are beautifully performant. If you use them as designed, and only communicate the truly exceptional cases among the overwhelmingly large number of non-exceptional cases handled by regular code, then using exceptions is the performance win.
The performance costs of exceptions have two major components: stack trace construction when Exception is instantiated and stack unwinding during Exception throw.
Stack trace construction costs are proportional to stack depth at the moment of exception instantiation. That is already bad because who on Earth knows the stack depth at which this throwing method would be called? Even if you turn off the stack trace generation and/or cache the exceptions, you can only get rid of this part of the performance cost.
Stack unwinding costs depend on how lucky we are with bringing the exception handler closer in the compiled code. Carefully structuring the code to avoid deep exception handlers lookup is probably helping us get luckier.
Should we eliminate both effects, the performance cost of exceptions is that of the local branch.
No matter how beautiful it sounds, that does not mean you should use Exceptions as the usual control flow, because in that case you are at the mercy of optimizing compiler! You should only use them in truly exceptional cases, where the exception frequency amortizes the possible unlucky cost of raising the actual exception.
The optimistic rule-of-thumb seems to be 10^-4 frequency for exceptions is exceptional enough. That, of course, depends on the heavy-weights of the exceptions themselves, the exact actions taken in exception handlers, etc.

The upshot is that you don't pay the cost when no exception is thrown, so when the exceptional condition is sufficiently rare, exception handling is faster than using an if every time. The full post is very much worth a read.
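
As a small illustration of that takeaway (all names here are invented for the sketch, and it only shows the two call-site shapes, not a benchmark): the error-code style pays a check on every call, while the exception style only pays on the rare failing call.

public class RareFailureStyles {
  // Error-code style: a sentinel value must be checked after every call.
  static int parseOrSentinel(String s) {
    try {
      return Integer.parseInt(s);
    } catch (NumberFormatException e) {
      return Integer.MIN_VALUE;              // sentinel for "bad input"
    }
  }

  public static void main(String[] args) {
    String[] inputs = { "1", "2", "3", "oops", "5" };   // failures are rare
    long sum = 0;

    // Error-code style: the check runs for every element, failing or not.
    for (String s : inputs) {
      int v = parseOrSentinel(s);
      if (v == Integer.MIN_VALUE) {
        System.out.println("bad input: " + s);
      } else {
        sum += v;
      }
    }

    // Exception style: no per-element check on the happy path;
    // the handler only runs for the rare bad element.
    for (String s : inputs) {
      try {
        sum += Integer.parseInt(s);
      } catch (NumberFormatException e) {
        System.out.println("bad input: " + s);
      }
    }

    System.out.println("sum = " + sum);
  }
}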

Using the code below, I got completely different results for @Mecki's test case on JDK 15. It basically runs the code in 5 loops, with the first loop being slightly shorter to give the VM some time to warm up.

The results:

Loop 1 10000 cycles
method1 took 1 ms, result was 2
method2 took 0 ms, result was 2
method3 took 22 ms, result was 2
method4 took 22 ms, result was 2
method5 took 24 ms, result was 2
Loop 2 10000000 cycles
method1 took 39 ms, result was 2
method2 took 39 ms, result was 2
method3 took 1558 ms, result was 2
method4 took 1640 ms, result was 2
method5 took 1717 ms, result was 2
Loop 3 10000000 cycles
method1 took 49 ms, result was 2
method2 took 48 ms, result was 2
method3 took 126 ms, result was 2
method4 took 88 ms, result was 2
method5 took 87 ms, result was 2
Loop 4 10000000 cycles
method1 took 34 ms, result was 2
method2 took 34 ms, result was 2
method3 took 33 ms, result was 2
method4 took 98 ms, result was 2
method5 took 58 ms, result was 2
Loop 5 10000000 cycles
method1 took 34 ms, result was 2
method2 took 33 ms, result was 2
method3 took 33 ms, result was 2
method4 took 48 ms, result was 2
method5 took 49 ms, result was 2
package hs.jfx.eventstream.api;

public class Snippet {
  int value;


  public int getValue() {
      return value;
  }

  public void reset() {
      value = 0;
  }

  // Calculates without exception
  public void method1(int i) {
      value = ((value + i) / i) << 1;
      // Will never be true
      if ((i & 0xFFFFFFF) == 1000000000) {
          System.out.println("You'll never see this!");
      }
  }

  // Could in theory throw one, but never will
  public void method2(int i) throws Exception {
      value = ((value + i) / i) << 1;
      // Will never be true
      if ((i & 0xFFFFFFF) == 1000000000) {
          throw new Exception();
      }
  }

  private static final NoStackTraceRuntimeException E = new NoStackTraceRuntimeException();

  // This one will regularly throw one
  public void method3(int i) throws NoStackTraceRuntimeException {
      value = ((value + i) / i) << 1;
      // i & 1 is equally fast to calculate as i & 0xFFFFFFF; it is both
      // an AND operation between two integers. The size of the number plays
      // no role. AND on 32 BIT always ANDs all 32 bits
      if ((i & 0x1) == 1) {
          throw E;
      }
  }

  // This one will regularly throw one
  public void method4(int i) throws NoStackTraceThrowable {
      value = ((value + i) / i) << 1;
      // i & 1 is equally fast to calculate as i & 0xFFFFFFF; it is both
      // an AND operation between two integers. The size of the number plays
      // no role. AND on 32 BIT always ANDs all 32 bits
      if ((i & 0x1) == 1) {
          throw new NoStackTraceThrowable();
      }
  }

  // This one will regularly throw one
  public void method5(int i) throws NoStackTraceRuntimeException {
      value = ((value + i) / i) << 1;
      // i & 1 is equally fast to calculate as i & 0xFFFFFFF; it is both
      // an AND operation between two integers. The size of the number plays
      // no role. AND on 32 BIT always ANDs all 32 bits
      if ((i & 0x1) == 1) {
          throw new NoStackTraceRuntimeException();
      }
  }

  public static void main(String[] args) {
    for(int k = 0; k < 5; k++) {
      int cycles = 10000000;
      if(k == 0) {
        cycles = 10000;
        try {
          Thread.sleep(500);
        }
        catch(InterruptedException e) {
          // TODO Auto-generated catch block
          e.printStackTrace();
        }
      }
      System.out.println("Loop " + (k + 1) + " " + cycles + " cycles");
      int i;
      long l;
      Snippet t = new Snippet();

      l = System.currentTimeMillis();
      t.reset();
      for (i = 1; i < cycles; i++) {
          t.method1(i);
      }
      l = System.currentTimeMillis() - l;
      System.out.println(
          "method1 took " + l + " ms, result was " + t.getValue()
      );

      l = System.currentTimeMillis();
      t.reset();
      for (i = 1; i < cycles; i++) {
          try {
              t.method2(i);
          } catch (Exception e) {
              System.out.println("You'll never see this!");
          }
      }
      l = System.currentTimeMillis() - l;
      System.out.println(
          "method2 took " + l + " ms, result was " + t.getValue()
      );

      l = System.currentTimeMillis();
      t.reset();
      for (i = 1; i < cycles; i++) {
          try {
              t.method3(i);
          } catch (NoStackTraceRuntimeException e) {
            // always comes here
          }
      }
      l = System.currentTimeMillis() - l;
      System.out.println(
          "method3 took " + l + " ms, result was " + t.getValue()
      );


      l = System.currentTimeMillis();
      t.reset();
      for (i = 1; i < cycles; i++) {
          try {
              t.method4(i);
          } catch (NoStackTraceThrowable e) {
            // always comes here
          }
      }
      l = System.currentTimeMillis() - l;
      System.out.println( "method4 took " + l + " ms, result was " + t.getValue() );


      l = System.currentTimeMillis();
      t.reset();
      for (i = 1; i < cycles; i++) {
          try {
              t.method5(i);
          } catch (RuntimeException e) {
            // always comes here
          }
      }
      l = System.currentTimeMillis() - l;
      System.out.println( "method5 took " + l + " ms, result was " + t.getValue() );
    }
  }

  public static class NoStackTraceRuntimeException extends RuntimeException {
    public NoStackTraceRuntimeException() {
        super("my special throwable", null, false, false);
    }
  }

  public static class NoStackTraceThrowable extends Throwable {
    public NoStackTraceThrowable() {
        super("my special throwable", null, false, false);
    }
  }
}

Compare this with, say, Integer.parseInt replaced by the following method, which simply returns a default value for unparseable data instead of throwing an exception:

  public static int parseUnsignedInt(String s, int defaultValue) {
    final int strLength = s.length();
    if (strLength == 0)
      return defaultValue;
    int value = 0;
    for (int i=strLength-1; i>=0; i--) {
      int c = s.charAt(i);
      if (c > 47 && c < 58) {
        c -= 48;
        for (int j=strLength-i; j!=1; j--)
          c *= 10;
        value += c;
      } else {
        return defaultValue;
      }
    }
    return value < 0 ? /* passed value > Integer.MAX_VALUE? */ defaultValue : value;
  }

As long as you apply both methods to "valid" data, they will work at roughly the same rate (even though Integer.parseInt handles more complex input). But as soon as you try to parse invalid data (e.g. parsing "abc" 1,000,000 times), the difference in performance should be significant.
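
A rough sketch of how one might measure that (my own example; it assumes the parseUnsignedInt method above has been copied into the same class, and the absolute numbers will of course vary by JVM):

public class InvalidInputComparison {
  public static void main(String[] args) {
    final int runs = 1000000;
    long junk = 0;

    // Default-value approach: no exception is ever thrown for bad input.
    long start = System.currentTimeMillis();
    for (int i = 0; i < runs; i++) {
      junk += parseUnsignedInt("abc", -1);
    }
    System.out.println("parseUnsignedInt: " + (System.currentTimeMillis() - start) + " ms");

    // Exception approach: Integer.parseInt throws NumberFormatException every time.
    start = System.currentTimeMillis();
    for (int i = 0; i < runs; i++) {
      try {
        junk += Integer.parseInt("abc");
      } catch (NumberFormatException e) {
        junk -= 1;
      }
    }
    System.out.println("Integer.parseInt:  " + (System.currentTimeMillis() - start) + " ms");

    System.out.println(junk);   // keep the loops from being optimized away
  }

  // parseUnsignedInt(String, int) from the listing above goes here.
}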

I modified @Mecki's answer above so that method1 returns a boolean, with a check in the calling method, since you can't replace an exception with nothing. After two runs, method1 was still either the fastest or as fast as method2.

Here is a snapshot of the code:

// Calculates without exception
public boolean method1(int i) {
    value = ((value + i) / i) << 1;
    // Will never be true
    return ((i & 0xFFFFFFF) == 1000000000);

}
....
   for (i = 1; i < 100000000; i++) {
            if (t.method1(i)) {
                System.out.println("Will never be true!");
            }
    }

And the results:

Run 1

method1 took 841 ms, result was 2
method2 took 841 ms, result was 2
method3 took 85058 ms, result was 2

Run 2

method1 took 821 ms, result was 2
method2 took 838 ms, result was 2
method3 took 85929 ms, result was 2