I'm trying to get a better understanding of the difference. I've found a lot of explanations online, but they tend towards the abstract differences rather than the practical implications.
Most of my programming experience has been with CPython (dynamic, interpreted) and Java (static, compiled). However, I understand that there are other kinds of interpreted and compiled languages. Aside from the fact that executable files can be distributed from programs written in compiled languages, are there any advantages/disadvantages to each type? Oftentimes, I hear people arguing that interpreted languages can be used interactively, but I believe that compiled languages can have interactive implementations as well, correct?
Start by thinking in terms of a blast from the past.
Once upon a time, long long ago, there lived in the land of computing interpreters and compilers. All kinds of fuss ensued over the merits of one over the other. The general opinion at that time was something along the lines of:
Interpreter: fast to get going (edit and run). Slow to execute, because each statement had to be interpreted into machine code every time it was executed (think of what that meant for a loop executed thousands of times).
Compiler: slow to get going (edit, compile, link and run; the compile/link steps could take serious time). Fast to execute, because the whole program was already native machine code.
A one or two orders of magnitude difference in run-time performance existed between an interpreted program and a compiled program. Other distinguishing points, such as run-time mutability of the code, were also of some interest, but the major distinction revolved around the run-time performance issues.
Today the landscape has evolved to such an extent that the compiled/interpreted distinction is pretty much irrelevant. Many compiled languages call upon run-time services that are not completely machine-code based. Also, most interpreted languages are "compiled" into bytecode before execution. Bytecode interpreters can be very efficient and rival some compiler-generated code from an execution-speed point of view.
The classic difference is that compilers generate native machine code, while interpreters read source code and generate machine code on the fly using some sort of run-time system.
Today there are very few classic interpreters left - almost everything is compiled into bytecode (or some other semi-compiled state), which then runs on a virtual "machine".
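You can see this in CPython itself. The snippet below is a minimal sketch using the standard-library dis module; it prints the bytecode that CPython's virtual machine actually executes (the exact opcodes vary between CPython versions):

```python
import dis

def add(a, b):
    return a + b

# CPython has already compiled the function body to bytecode; dis shows
# the instructions its virtual machine will run (e.g. BINARY_ADD or
# BINARY_OP, depending on the CPython version).
dis.dis(add)
```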
A compiled language is one where the program, once compiled, is expressed in the instructions of the target machine. For example, an addition "+" operation in your source code could be translated directly to the "ADD" instruction in machine code.
An interpreted language is one where the instructions are not directly executed by the target machine, but instead read and executed by some other program (which normally is written in the language of the native machine). For example, the same "+" operation would be recognised by the interpreter at run time, which would then call its own "add(a, b)" function with the appropriate arguments, and that function would then execute the machine-code "ADD" instruction.
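To make the contrast concrete, here is a toy sketch of that run-time dispatch. It is not how any real interpreter is implemented; the Num and Add node classes and the evaluate function are invented purely for illustration:

```python
class Num:
    def __init__(self, value):
        self.value = value

class Add:
    def __init__(self, left, right):
        self.left = left
        self.right = right

def evaluate(node):
    # The meaning of "+" is decided here, at run time, every time the
    # node is evaluated - the interpreter calls its own addition routine.
    if isinstance(node, Num):
        return node.value
    if isinstance(node, Add):
        return evaluate(node.left) + evaluate(node.right)
    raise TypeError("unknown node type")

print(evaluate(Add(Num(2), Num(3))))  # prints 5
```

A compiler, by contrast, makes that decision once, ahead of time, and emits the machine instruction directly.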
You can do anything in a compiled language that you can do in an interpreted language, and vice versa - both are Turing complete. Each approach, however, has advantages and disadvantages for implementation and use.
I'm going to generalise completely (purists forgive me!), but roughly speaking, here are the advantages of compiled languages:
Faster performance, by directly using the native code of the target machine
The opportunity to apply quite powerful optimisations during the compile stage
And here are the advantages of interpreted languages:
Easier to implement (writing a good compiler is very hard!)
No need to run a compilation stage: code can be executed directly, "on the fly"
Can be more convenient for dynamic languages
Note that modern techniques such as bytecode compilation add some extra complexity - what happens here is that the compiler targets a "virtual machine" which is not the same as the underlying hardware. These virtual machine instructions can then be compiled again at a later stage to get native code (e.g. as done by the Java JVM's JIT compiler).
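As a rough illustration of what "targeting a virtual machine" means, here is a minimal sketch of a made-up stack machine. The instruction set (PUSH/ADD/PRINT) is invented for this example and is far simpler than real JVM or CPython bytecode:

```python
def run(bytecode):
    # Execute a list of (opcode, argument) pairs on a tiny stack machine.
    stack = []
    for op, arg in bytecode:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "PRINT":
            print(stack.pop())

# What a compiler front-end might emit for: print(2 + 3)
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PRINT", None)]
run(program)  # prints 5
```

A JIT compiler's job is to take hot instruction sequences like program above and translate them into native machine code, instead of walking them in a loop like this.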
The extreme and simple cases:
A compiler will produce a binary executable in the target machine's native executable format. This binary file contains all required resources except for system libraries; it's ready to run with no further preparation or processing, and it runs like lightning because the code is native code for the CPU of the target machine.
An interpreter will present the user with a prompt in a loop where he can enter statements or code, and upon hitting RUN or the equivalent the interpreter will examine, scan, parse and interpretatively execute each line until the program runs to a stopping point or an error. Because each line is treated on its own and the interpreter doesn't "learn" anything from having seen the line before, the effort of converting human-readable language to machine instructions is incurred every time for every line, so it's dog slow. On the bright side, the user can inspect and otherwise interact with his program in all kinds of ways: Changing variables, changing code, running in trace or debug modes... whatever.
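A bare-bones sketch of such a loop, in Python just for illustration (eval/exec stand in here for the scan/parse/execute work a classic interpreter would do itself):

```python
env = {}
while True:
    try:
        line = input(">>> ")
    except EOFError:
        break
    try:
        try:
            result = eval(line, env)   # expressions: evaluate and show the value
            if result is not None:
                print(result)
        except SyntaxError:
            exec(line, env)            # statements, e.g. assignments
    except Exception as err:
        print("error:", err)           # report the error, keep the session alive
```

Every line goes through the full translate-and-execute cycle on its own, but the user keeps access to the live environment (env) between lines.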
Having said all that, let me explain that life isn't quite so simple any more. For instance,
Many interpreters will pre-compile the code they're given so the translation step doesn't have to be repeated again and again.
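For example, CPython's built-in compile() lets you pay the translation cost once and reuse the result; a minimal sketch (the expression string is arbitrary):

```python
import timeit

src = "sum(i * i for i in range(100))"
code = compile(src, "<expr>", "eval")   # translation step done once

# Re-compiles the string on every call:
print(timeit.timeit(lambda: eval(src), number=100_000))
# Reuses the already-compiled code object:
print(timeit.timeit(lambda: eval(code), number=100_000))
```

The second timing should come out noticeably lower, because only execution - not translation - is repeated.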
Some compilers compile not to CPU-specific machine instructions but to bytecode, a kind of artificial machine code for a fictitious machine. This makes the compiled program a bit more portable, but requires a bytecode interpreter on every target system.
The bytecode interpreters (I'm looking at Java here) these days tend to re-compile the bytecode they get for the CPU of the target machine just before execution (this is called JIT). To save time, this is often only done for code that runs often (hotspots).
Some systems that look and act like interpreters (Clojure, for instance) compile any code they get, immediately, but allow interactive access to the program's environment. That's basically the convenience of interpreters with the speed of binary compilation.
Some compilers don't really compile, they just pre-digest and compress code. I heard a while back that's how Perl works. So sometimes the compiler is just doing a bit of the work and most of it is still interpretation.
So, at the end of the day, interpreting vs. compiling is a trade-off: time spent (once) compiling is usually rewarded with better run-time performance, while an interpreting environment gives more opportunities for interaction. Compiling vs. interpreting is mostly a question of how the work of "understanding" the program is divided up between different processes, and the line is a bit blurry nowadays, as languages and products try to offer the best of both worlds.