Q: Is exception handling in Java really slow?
Conventional wisdom, along with plenty of Google results, says that exceptional logic should not be used for normal program flow in Java. Two reasons are usually given:
it is really slow, even an order of magnitude slower than regular code (the reasons given vary),
and
it is messy, because people expect only errors to be handled in exceptional code.
This question is about the first point.
As an example, this page describes Java exception handling as "very slow" and relates the slowness to the creation of the exception message string - "this string is then used in creating the exception object that is thrown. This is not fast." The article Effective Exception Handling in Java says that "the reason for this is due to the object creation aspect of exception handling, which thereby makes throwing exceptions inherently slow". Another reason out there is that the stack trace generation is what slows it down.
My testing (using Java 1.6.0_07, Java HotSpot 10.0, on 32 bit Linux) indicates that exception handling is no slower than regular code. I tried running a method in a loop that executes some code. At the end of the method, I use a boolean to indicate whether to return or throw; this way the actual processing is the same. I tried running the methods in different orders and averaging my test times, thinking it may have been the JVM warming up. In all my tests, the throw was at least as fast as the return, if not faster (up to 3.1% faster). I am completely open to the possibility that my tests were wrong, but I haven't seen anything out there in the way of code samples, test comparisons, or results from the last year or two that show exception handling in Java to actually be slow.
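To make the setup concrete, a harness along the following lines captures the shape of the test described above; the class and method names and the loop count are illustrative, not the actual test code. The same processing runs either way, and a boolean decides whether the result comes back via return or via throw.

public class ReturnVsThrow {

    static class ResultException extends Exception {
        final int value;
        ResultException(int value) { this.value = value; }
    }

    // The same processing happens in both modes; the flag only selects
    // whether the result is returned or thrown.
    static int work(int i, boolean viaThrow) throws ResultException {
        int v = ((i * 31) ^ (i >>> 3)) & 0xff;
        if (viaThrow) {
            throw new ResultException(v);
        }
        return v;
    }

    static long time(boolean viaThrow) {
        int sink = 0;
        long start = System.currentTimeMillis();
        for (int i = 1; i <= 1_000_000; i++) {
            try {
                sink += work(i, viaThrow);
            } catch (ResultException e) {
                sink += e.value;
            }
        }
        System.out.println("sink=" + sink); // keep the JIT from discarding the loop
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) {
        // Run both modes several times so the JVM can warm up.
        for (int round = 0; round < 3; round++) {
            System.out.println("return: " + time(false) + " ms");
            System.out.println("throw:  " + time(true) + " ms");
        }
    }
}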
What led me down this path was an API I need to use that throws exceptions as part of its normal control logic. I wanted to correct their usage, but now I may not be able to. Should I instead praise them for their forward thinking?
In the paper Efficient Java Exception Handling in Just-in-Time Compilation, the authors suggest that the mere presence of exception handlers, even when no exception is thrown, is enough to prevent the JIT compiler from optimizing the code properly, and thus slows it down. I haven't tested this theory yet.
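A rough way to probe that claim would be something like the following sketch (my own illustration, not code from the paper): run the same arithmetic with and without an enclosing try/catch whose handler never fires, and compare the timings after a few warm-up rounds.

public class HandlerPresence {

    static long plain(int n) {
        long acc = 0;
        for (int i = 1; i <= n; i++) {
            acc += (acc + i) / i;
        }
        return acc;
    }

    static long withHandler(int n) {
        long acc = 0;
        for (int i = 1; i <= n; i++) {
            try {
                acc += (acc + i) / i;   // same work as plain()
            } catch (ArithmeticException e) {
                acc = 0;                // never reached, since i >= 1
            }
        }
        return acc;
    }

    public static void main(String[] args) {
        for (int round = 0; round < 5; round++) {
            long t = System.nanoTime();
            long a = plain(50_000_000);
            System.out.println("plain:        " + (System.nanoTime() - t) / 1_000_000 + " ms (" + a + ")");

            t = System.nanoTime();
            long b = withHandler(50_000_000);
            System.out.println("with handler: " + (System.nanoTime() - t) / 1_000_000 + " ms (" + b + ")");
        }
    }
}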
I don't know whether these topics are related, but I once wanted to implement a trick that relied on the current thread's stack trace: I wanted to discover, inside the class being instantiated, the name of the method that triggered the instantiation (yes, the idea was crazy, and I gave it up completely). In doing so I discovered that calling Thread.currentThread().getStackTrace() is extremely slow (because of the native dumpThreads method it uses internally).
Correspondingly, Java's Throwable has a native method, fillInStackTrace. I think the try-catch block described earlier somehow triggers the execution of that method.
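A quick way to see that cost (an illustrative sketch, not the original experiment) is to time Thread.currentThread().getStackTrace() against the trivial work it was meant to decorate:

public class StackTraceCost {
    public static void main(String[] args) {
        int n = 100_000;
        long sink = 0;

        long t = System.nanoTime();
        for (int i = 1; i <= n; i++) {
            sink += i;   // trivial baseline work
        }
        System.out.println("plain loop:      " + (System.nanoTime() - t) / 1_000_000 + " ms");

        t = System.nanoTime();
        for (int i = 1; i <= n; i++) {
            // Captures the whole current stack on every iteration.
            StackTraceElement[] st = Thread.currentThread().getStackTrace();
            sink += st.length;
        }
        System.out.println("getStackTrace(): " + (System.nanoTime() - t) / 1_000_000 + " ms");

        System.out.println(sink);   // keep the JIT from eliminating the loops
    }
}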
But let me tell you another story...
In Scala, some functional features are compiled on the JVM using ControlThrowable, which extends Throwable and overrides its fillInStackTrace in the following way:
override def fillInStackTrace(): Throwable = this
So I adjusted the test above (the number of loop cycles is decreased by a factor of ten; my machine is a bit slower :) ):
import scala.util.control.ControlThrowable

class ControlException extends ControlThrowable

class T {
  var value = 0

  def reset = {
    value = 0
  }

  def method1(i: Int) = {
    value = ((value + i) / i) << 1
    if ((i & 0xfffffff) == 1000000000) {
      println("You'll never see this!")
    }
  }

  def method2(i: Int) = {
    value = ((value + i) / i) << 1
    if ((i & 0xfffffff) == 1000000000) {
      throw new Exception()
    }
  }

  def method3(i: Int) = {
    value = ((value + i) / i) << 1
    if ((i & 0x1) == 1) {
      throw new Exception()
    }
  }

  def method4(i: Int) = {
    value = ((value + i) / i) << 1
    if ((i & 0x1) == 1) {
      throw new ControlException()
    }
  }
}
class Main {
  var l = System.currentTimeMillis
  val t = new T

  for (i <- 1 to 10000000)
    t.method1(i)
  l = System.currentTimeMillis - l
  println("method1 took " + l + " ms, result was " + t.value)

  t.reset
  l = System.currentTimeMillis
  for (i <- 1 to 10000000) try {
    t.method2(i)
  } catch {
    case _ => println("You'll never see this")
  }
  l = System.currentTimeMillis - l
  println("method2 took " + l + " ms, result was " + t.value)

  t.reset
  l = System.currentTimeMillis
  for (i <- 1 to 10000000) try {
    t.method4(i)
  } catch {
    case _ => // do nothing
  }
  l = System.currentTimeMillis - l
  println("method4 took " + l + " ms, result was " + t.value)

  t.reset
  l = System.currentTimeMillis
  for (i <- 1 to 10000000) try {
    t.method3(i)
  } catch {
    case _ => // do nothing
  }
  l = System.currentTimeMillis - l
  println("method3 took " + l + " ms, result was " + t.value)
}
So, the results are:
method1 took 146 ms, result was 2
method2 took 159 ms, result was 2
method4 took 1551 ms, result was 2
method3 took 42492 ms, result was 2
As you can see, the only difference between method3 and method4 is that they throw different kinds of exceptions. Yes, method4 is still slower than method1 and method2, but the difference is acceptable.
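For what it's worth, the same trick the Scala compiler relies on can be reproduced in plain Java (the class below is my own sketch, not part of the benchmark above): a control-flow exception that overrides fillInStackTrace so the expensive native stack capture never happens.

class ControlFlowException extends RuntimeException {
    // Skip the native stack walk that normally runs when a Throwable is constructed.
    @Override
    public synchronized Throwable fillInStackTrace() {
        return this;
    }
}

Throwing this from method3-style code should behave much like ControlException does above: the throw/catch machinery still runs, but the dominant per-throw cost, filling in the stack trace, is gone.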
Aleksey Shipilëv did a very thorough analysis in which he benchmarked Java exceptions under various combinations of conditions:
Newly created exceptions vs pre-created exceptions
Stack trace enabled vs disabled
Stack trace requested vs never requested
Caught at the top level vs rethrown at every level vs chained/wrapped at every level
Various levels of Java call stack depth
No inlining optimizations vs extreme inlining vs default settings
User-defined fields read vs not read
He also compared them against the performance of checking an error code, at various levels of error frequency.
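For orientation, the two styles being compared look roughly like this (my own sketch, not Shipilëv's benchmark code): signalling a rare lookup failure through a return code that must be checked on every call, versus through an exception that is only raised in the rare case.

public class LookupStyles {

    static final int NOT_FOUND = -1;

    // Error-code style: every caller has to check the result.
    static int lookupCode(int key) {
        return (key % 10_000 == 0) ? NOT_FOUND : key * 2;
    }

    // Exception style: the failure path only runs in the rare case.
    static int lookupThrow(int key) {
        if (key % 10_000 == 0) {
            throw new IllegalStateException("not found: " + key);
        }
        return key * 2;
    }

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 1; i <= 1_000_000; i++) {
            int r = lookupCode(i);
            if (r != NOT_FOUND) {        // branch taken on every call
                sum += r;
            }
        }
        for (int i = 1; i <= 1_000_000; i++) {
            try {
                sum += lookupThrow(i);   // no per-call check on the happy path
            } catch (IllegalStateException e) {
                // the rare case: about 10^-4 of the calls end up here
            }
        }
        System.out.println(sum);
    }
}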
The conclusions (quoted verbatim from his post) are as follows:
Truly exceptional exceptions are beautifully performant. If you use them as designed, and only communicate the truly exceptional cases among the overwhelmingly large number of non-exceptional cases handled by regular code, then using exceptions is the performance win.
The performance costs of exceptions have two major components: stack trace construction when Exception is instantiated and stack unwinding during Exception throw.
Stack trace construction costs are proportional to stack depth at the moment of exception instantiation. That is already bad because who on Earth knows the stack depth at which this throwing method would be called? Even if you turn off the stack trace generation and/or cache the exceptions, you can only get rid of this part of the performance cost.
Stack unwinding costs depend on how lucky we are with bringing the exception handler closer in the compiled code. Carefully structuring the code to avoid deep exception handlers lookup is probably helping us get luckier.
Should we eliminate both effects, the performance cost of exceptions is that of the local branch. No matter how beautiful it sounds, that does not mean you should use Exceptions as the usual control flow, because in that case you are at the mercy of optimizing compiler! You should only use them in truly exceptional cases, where the exception frequency amortizes the possible unlucky cost of raising the actual exception.
The optimistic rule-of-thumb seems to be 10^-4 frequency for exceptions is exceptional enough. That, of course, depends on the heavy-weights of the exceptions themselves, the exact actions taken in exception handlers, etc.
The upshot is that you don't pay the cost when no exception is thrown, so when the exceptional condition is sufficiently rare, exception handling is faster than using an if every time. The full post is very much worth a read.
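Tying this back to the "pre-created exceptions" and "stack trace disabled" variants in the list above: since Java 7, Throwable has a protected constructor that can switch off the writable stack trace, so a cached instance can be thrown with essentially no per-throw allocation. The class below is my own illustration, not code from the post.

class CachedEscape extends RuntimeException {

    static final CachedEscape INSTANCE = new CachedEscape();

    private CachedEscape() {
        // message, cause, enableSuppression, writableStackTrace
        super(null, null, false, false);
    }
}

The obvious trade-off is that a reused, trace-free exception tells you nothing about where it was thrown from, so it only makes sense where the handler does not need that information.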