Q: Is exception handling in Java really slow?

Conventional wisdom, as well as a lot of Google results, says that exceptional logic shouldn't be used for normal program flow in Java. Two reasons are usually given:

it is really slow - even an order of magnitude slower than regular code (the reasons given vary),

and

it is messy because people expect only errors to be handled in exceptional code.

This question is about the first one.

As an example, this page describes Java exception handling as "very slow" and relates the slowness to the creation of the exception message string - "this string is then used in creating the exception object that is thrown. This is not fast." The article Effective Exception Handling in Java says that "the reason for this is due to the object creation aspect of exception handling, which thereby makes throwing exceptions inherently slow". Another reason out there is that the stack trace generation is what slows it down.

My testing (using Java 1.6.0_07, Java HotSpot 10.0, on 32 bit Linux), indicates that exception handling is no slower than regular code. I tried running a method in a loop that executes some code. At the end of the method, I use a boolean to indicate whether to return or throw. This way the actual processing is the same. I tried running the methods in different orders and averaging my test times, thinking it may have been the JVM warming up. In all my tests, the throw was at least as fast as the return, if not faster (up to 3.1% faster). I am completely open to the possibility that my tests were wrong, but I haven't seen anything out there in the way of the code sample, test comparisons, or results in the last year or two that show exception handling in Java to actually be slow.
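
For what it's worth, a minimal sketch of the kind of harness described above might look like the following. The class and method names, the iteration count, and the use of an exception that carries the result value are my assumptions, not the original test code:

public class ThrowVsReturn {

    // Exception that carries the computed value instead of a message.
    static class ValueException extends Exception {
        final int value;
        ValueException(int value) { this.value = value; }
    }

    // Identical work in both paths; only the way the result leaves
    // the method differs.
    static int compute(int i, boolean useThrow) throws ValueException {
        int v = ((i * 31) ^ (i >>> 3)) & 0xFFFF;   // stand-in for "some code"
        if (useThrow) {
            throw new ValueException(v);
        }
        return v;
    }

    static void run(boolean useThrow, int iterations) {
        long sum = 0;
        long start = System.currentTimeMillis();
        for (int i = 1; i <= iterations; i++) {
            try {
                sum += compute(i, useThrow);
            } catch (ValueException e) {
                sum += e.value;
            }
        }
        long elapsed = System.currentTimeMillis() - start;
        System.out.println((useThrow ? "throw" : "return")
                + " took " + elapsed + " ms (sum=" + sum + ")");
    }

    public static void main(String[] args) {
        // Alternate the order and repeat to give the JIT a chance to warm up.
        for (int round = 0; round < 3; round++) {
            run(false, 10000000);
            run(true, 10000000);
        }
    }
}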

What drove me down this path was an API I need to use that throws exceptions as part of its normal control logic. I wanted to push back on the way they use them, but now I may not be able to. Should I instead praise them for their forward thinking?

In the paper Efficient Java Exception Handling in Just-in-Time Compilation, the authors suggest that the mere presence of exception handlers, even if no exception is ever thrown, is enough to prevent the JIT compiler from optimizing the code properly, and thus slows it down. I have not tested this theory yet.


Current answer

A good article about exception performance is:

https://shipilev.net/blog/2014/exceptional-performance/

Instantiating vs. reusing an existing exception, with and without a stack trace, and so on (a rough sketch of these variants follows the table):

Benchmark                            Mode   Samples         Mean   Mean error  Units

dynamicException                     avgt        25     1901.196       14.572  ns/op
dynamicException_NoStack             avgt        25       67.029        0.212  ns/op
dynamicException_NoStack_UsedData    avgt        25       68.952        0.441  ns/op
dynamicException_NoStack_UsedStack   avgt        25      137.329        1.039  ns/op
dynamicException_UsedData            avgt        25     1900.770        9.359  ns/op
dynamicException_UsedStack           avgt        25    20033.658      118.600  ns/op

plain                                avgt        25        1.259        0.002  ns/op
staticException                      avgt        25        1.510        0.001  ns/op
staticException_NoStack              avgt        25        1.514        0.003  ns/op
staticException_NoStack_UsedData     avgt        25        4.185        0.015  ns/op
staticException_NoStack_UsedStack    avgt        25       19.110        0.051  ns/op
staticException_UsedData             avgt        25        4.159        0.007  ns/op
staticException_UsedStack            avgt        25       25.144        0.186  ns/op
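
For orientation, the "dynamic"/"static" and "NoStack" variants above correspond roughly to patterns like the following sketch (my own code, not the blog's actual JMH benchmarks; the class and field names are mine). It requires Java 7+ for the four-argument Throwable constructor:

// Rough sketch of the exception variants compared above.
class CheapException extends Exception {

    // writableStackTrace = false skips fillInStackTrace() entirely,
    // which is what the "NoStack" variants measure.
    CheapException(boolean withStackTrace) {
        super("boom", null, false, withStackTrace);
    }
}

class ExceptionVariants {

    // "static" variants: one preallocated instance, reused on every throw.
    static final CheapException STATIC_NO_STACK = new CheapException(false);

    // Roughly what "dynamicException" measures: a new object and a fresh
    // stack trace on every single throw.
    int dynamicException() {
        try {
            throw new CheapException(true);
        } catch (CheapException e) {
            return 1;
        }
    }

    // Roughly what "staticException_NoStack" measures: no allocation and
    // no stack walk, just the control transfer.
    int staticExceptionNoStack() {
        try {
            throw STATIC_NO_STACK;
        } catch (CheapException e) {
            return 1;
        }
    }
}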

Depending on the depth of the stack trace (a sketch of how such depths are typically produced follows the table):

Benchmark        Mode   Samples         Mean   Mean error  Units

exception_0000   avgt        25     1959.068       30.783  ns/op
exception_0001   avgt        25     1945.958       12.104  ns/op
exception_0002   avgt        25     2063.575       47.708  ns/op
exception_0004   avgt        25     2211.882       29.417  ns/op
exception_0008   avgt        25     2472.729       57.336  ns/op
exception_0016   avgt        25     2950.847       29.863  ns/op
exception_0032   avgt        25     4416.548       50.340  ns/op
exception_0064   avgt        25     6845.140       40.114  ns/op
exception_0128   avgt        25    11774.758       54.299  ns/op
exception_0256   avgt        25    21617.526      101.379  ns/op
exception_0512   avgt        25    42780.434      144.594  ns/op
exception_1024   avgt        25    82839.358      291.434  ns/op
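
Such depths are usually produced by recursing to a given level before throwing, roughly like this naive sketch (my own code, not the blog's benchmark; there is no JMH here, so treat any numbers it prints as indicative only):

// Build up 'depth' extra stack frames before throwing, so that
// fillInStackTrace() has more frames to walk.
public class StackDepthSketch {

    static int throwAtDepth(int depth) {
        if (depth > 0) {
            return throwAtDepth(depth - 1) + 1;   // add one frame per level
        }
        throw new IllegalStateException("thrown at the bottom of the recursion");
    }

    static long averageNanos(int depth, int iterations) {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            try {
                throwAtDepth(depth);
            } catch (IllegalStateException expected) {
                // the stack trace was captured while 'depth' extra frames were live
            }
        }
        return (System.nanoTime() - start) / iterations;
    }

    public static void main(String[] args) {
        for (int depth = 0; depth <= 1024; depth = (depth == 0) ? 1 : depth * 2) {
            System.out.println("depth " + depth + ": ~"
                    + averageNanos(depth, 20000) + " ns per throw");
        }
    }
}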

For further details (including x64 assembly from the JIT), read the original blog post.

Which means that Hibernate/Spring/etc-EE-shit is slow because of exceptions (xD).

By rewriting application control flow to avoid exceptions (returning an error as a return value instead), you can improve the performance of your application by 10x-100x, depending on how often you throw them.

Other answers

I think the first article refers to the act of traversing the call stack and creating a stack trace as the expensive part, and while the second article doesn't say so, I think that is the most expensive part of object creation. John Rose has an article where he describes different techniques for speeding up exceptions (preallocating and reusing exceptions, exceptions without stack traces, and so on).
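
One of those techniques, suppressing stack-trace capture on a preallocated exception, looks roughly like this (a sketch of the commonly cited pattern; the class name is mine, not John Rose's exact code):

// Reusable, stack-trace-free exception for control flow: preallocate once,
// override fillInStackTrace so construction and throwing stay cheap.
class FastControlException extends RuntimeException {

    // One shared instance; throwing it never allocates.
    static final FastControlException INSTANCE = new FastControlException();

    private FastControlException() {
        super("control-flow exception (no stack trace)");
    }

    // Skip the expensive stack walk; the trace stays empty.
    @Override
    public synchronized Throwable fillInStackTrace() {
        return this;
    }
}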

But I still think this should be considered a necessary evil, a last resort. John's reason for doing it is to emulate features of other languages that aren't (yet) available in the JVM. You should not get into the habit of using exceptions for control flow. Especially not for performance reasons! As you yourself mention in point 2, you risk masking serious bugs in your code this way, and it will be harder for new programmers to maintain.

Microbenchmarks in Java are surprisingly hard to get right (so I've been told), especially once you get into JIT territory, so I really doubt that using exceptions is faster than "return" in real life. For example, I suspect you have somewhere between 2 and 5 stack frames in your test? Now imagine your code is invoked by a JSF component deployed by JBoss. Now you may have a stack trace that is several pages long.

Perhaps you could post your test code?

Compare, say, Integer.parseInt to the following method, which simply returns a default value in the case of unparseable data instead of throwing an exception:

  public static int parseUnsignedInt(String s, int defaultValue) {
    final int strLength = s.length();
    if (strLength == 0)
      return defaultValue;
    int value = 0;
    for (int i = strLength - 1; i >= 0; i--) {
      int c = s.charAt(i);
      if (c > 47 && c < 58) {          // ASCII '0'..'9'
        c -= 48;                       // digit value
        for (int j = strLength - i; j != 1; j--)
          c *= 10;                     // scale by its decimal position
        value += c;
      } else {
        return defaultValue;
      }
    }
    return value < 0 ? /* passed value > Integer.MAX_VALUE? */ defaultValue : value;
  }

As long as you apply both methods to "valid" data, they will both work at roughly the same rate (even though Integer.parseInt manages to handle more complex input). But as soon as you try to parse invalid data (e.g. parsing "abc" 1,000,000 times), the difference in performance should be significant.
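
A quick-and-dirty way to see that difference (my own timing loop, not a proper benchmark) would be to drop a main like this into the same class as parseUnsignedInt above:

// Crude comparison on invalid input: Integer.parseInt + catch vs. the
// default-value method above. Naive timing, so treat the numbers as
// indicative only.
public static void main(String[] args) {
    final int runs = 1000000;
    final String bad = "abc";            // unparseable on purpose

    long sum = 0;
    long t1 = System.nanoTime();
    for (int i = 0; i < runs; i++) {
        try {
            sum += Integer.parseInt(bad);
        } catch (NumberFormatException e) {
            sum += -1;                   // fall back to a "default"
        }
    }
    t1 = System.nanoTime() - t1;

    long t2 = System.nanoTime();
    for (int i = 0; i < runs; i++) {
        sum += parseUnsignedInt(bad, -1);  // the method defined above
    }
    t2 = System.nanoTime() - t2;

    System.out.println("Integer.parseInt + catch: " + t1 / 1000000 + " ms");
    System.out.println("parseUnsignedInt default: " + t2 / 1000000 + " ms");
    System.out.println("(checksum " + sum + ")");
}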

Unfortunately, my answer is too long to post here. So let me summarize here and refer you to http://www.fuwjax.com/how-slow-are-java-exceptions/ for the gory details.

The real question here is not "How slow are 'failures reported as exceptions' compared to 'code that never fails'?", as the accepted answer might lead you to believe. Rather, the question should be "How slow are 'failures reported as exceptions' compared to failures reported in other ways?" Generally, the two other ways of reporting failure are sentinel values and result wrappers.

Sentinel values are an approach where you return one class in the case of success and another in the case of failure. You can think of it almost as returning an exception instead of throwing one. This requires a parent class shared with the success object, and then doing an "instanceof" check and a couple of casts to get at the success or failure information.

It turns out that, at the risk of type safety, sentinel values are faster than exceptions, but only by a factor of roughly 2x. That may seem like a lot, but that 2x only covers the cost of the implementation difference. In practice the factor is much lower, since the methods that might actually fail are far more interesting than the few arithmetic operators in the sample code elsewhere on this page.

Result wrappers, on the other hand, sacrifice no type safety at all. They wrap the success and failure information in a single class. So instead of "instanceof" they provide an "isSuccess()", plus getters for the success and failure objects. However, result objects are roughly 2x slower than using exceptions. It turns out that creating a new wrapper object every time is much more expensive than throwing an exception occasionally.
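
As a rough illustration of the shape of such a result wrapper (my own minimal sketch; the classes in the linked article differ):

// Minimal result-wrapper sketch: success and failure share one type,
// at the cost of allocating a wrapper object on every call.
final class Result<T> {
    private final T value;          // set on success
    private final String error;     // set on failure

    private Result(T value, String error) {
        this.value = value;
        this.error = error;
    }

    static <T> Result<T> success(T value) { return new Result<>(value, null); }
    static <T> Result<T> failure(String error) { return new Result<>(null, error); }

    boolean isSuccess() { return error == null; }
    T getValue()        { return value; }
    String getError()   { return error; }
}

class WrapperExample {
    // Every call allocates a Result, whether it fails or not.
    static Result<Integer> parse(String s) {
        try {
            return Result.success(Integer.parseInt(s));
        } catch (NumberFormatException e) {
            return Result.failure("not a number: " + s);
        }
    }

    public static void main(String[] args) {
        Result<Integer> r = parse("123");
        System.out.println(r.isSuccess() ? r.getValue() : r.getError());
    }
}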

On top of that, exceptions are the language-provided way of indicating that a method may fail. There is no other way to tell from the API alone which methods are expected to always (mostly) work and which are expected to report failure.

Exceptions are safer than sentinels, faster than result objects, and less surprising than either. I'm not suggesting that try/catch replace if/else, but exceptions are the right way to report failure, even in business logic.

That said, I would point out that the two most common ways of substantially hurting performance I have come across are creating unnecessary objects and nested loops. If you have a choice between creating an exception and not creating an exception, don't create the exception. If you have a choice between sometimes creating an exception and always creating another object, then create the exception.

It depends how exceptions are implemented. The simplest way is using setjmp and longjmp. That means all registers of the CPU are written to the stack (which already takes some time) and possibly some other data needs to be created... all this already happens in the try statement. The throw statement needs to unwind the stack and restore the values of all registers (and possibly other values in the VM). So try and throw are equally slow, and that is pretty slow; however, if no exception is thrown, exiting the try block takes no time whatsoever in most cases (as everything is put on the stack, which cleans up automatically when the method exits).

Sun and others recognized that this is possibly suboptimal, and of course VMs get faster and faster over time. There is another way to implement exceptions, which makes try itself lightning fast (actually nothing happens for try at all in general - everything that needs to happen is already done when the class is loaded by the VM) and which makes throw not quite as slow. I don't know which JVMs use this newer, better technique...

...but are you writing in Java so that your code only ever runs on one JVM on one specific system? Because if it may ever run on any other platform or any other JVM version (possibly from another vendor), who says they also use the fast implementation? The fast one is much more complicated than the slow one and not easily possible on all systems. You want to stay portable? Then don't rely on exceptions being fast.

It also makes a big difference what you do within a try block. If you open a try block and never call any method from within it, the try block will be ultra fast, as the JIT can then actually treat a throw like a simple goto. It neither needs to save stack state nor does it need to unwind the stack if an exception is thrown (it only needs to jump to the catch handlers). However, this is not what you usually do. Usually you open a try block and then call a method that might throw an exception, right? And even if you just use the try block within your method, what kind of method will this be that does not call any other method? Will it just calculate a number? Then why do you need exceptions at all? There are much more elegant ways to regulate program flow. For pretty much anything other than simple math, you will have to call an external method, and this already destroys the advantage of a local try block.

Consider the following test code:

public class Test {
    int value;


    public int getValue() {
        return value;
    }

    public void reset() {
        value = 0;
    }

    // Calculates without exception
    public void method1(int i) {
        value = ((value + i) / i) << 1;
        // Will never be true
        if ((i & 0xFFFFFFF) == 1000000000) {
            System.out.println("You'll never see this!");
        }
    }

    // Could in theory throw one, but never will
    public void method2(int i) throws Exception {
        value = ((value + i) / i) << 1;
        // Will never be true
        if ((i & 0xFFFFFFF) == 1000000000) {
            throw new Exception();
        }
    }

    // This one will regularly throw one
    public void method3(int i) throws Exception {
        value = ((value + i) / i) << 1;
        // i & 1 is equally fast to calculate as i & 0xFFFFFFF; it is both
        // an AND operation between two integers. The size of the number plays
        // no role. AND on 32 BIT always ANDs all 32 bits
        if ((i & 0x1) == 1) {
            throw new Exception();
        }
    }

    public static void main(String[] args) {
        int i;
        long l;
        Test t = new Test();

        l = System.currentTimeMillis();
        t.reset();
        for (i = 1; i < 100000000; i++) {
            t.method1(i);
        }
        l = System.currentTimeMillis() - l;
        System.out.println(
            "method1 took " + l + " ms, result was " + t.getValue()
        );

        l = System.currentTimeMillis();
        t.reset();
        for (i = 1; i < 100000000; i++) {
            try {
                t.method2(i);
            } catch (Exception e) {
                System.out.println("You'll never see this!");
            }
        }
        l = System.currentTimeMillis() - l;
        System.out.println(
            "method2 took " + l + " ms, result was " + t.getValue()
        );

        l = System.currentTimeMillis();
        t.reset();
        for (i = 1; i < 100000000; i++) {
            try {
                t.method3(i);
            } catch (Exception e) {
                // Do nothing here, as we will get here
            }
        }
        l = System.currentTimeMillis() - l;
        System.out.println(
            "method3 took " + l + " ms, result was " + t.getValue()
        );
    }
}

Result:

method1 took 972 ms, result was 2
method2 took 1003 ms, result was 2
method3 took 66716 ms, result was 2

The slowdown caused by the try block is too small to rule out confounding factors such as background processes. But the catch block killed everything and made it 66 times slower!

As I said, the result is not that bad if you put try/catch and throw all within the same method (method3), but that is a special JIT optimization I would not rely upon. And even with this optimization, the throw is still pretty slow. So I don't know what you are trying to do here, but there is definitely a better way of doing it than try/catch/throw.