Is it better to use static const variables than #define preprocessor directives? Or does it depend on the context?
What are the advantages/disadvantages of each approach?
Current answer
See here: static const vs define
In general, use a const declaration (note that it does not need to be static).
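A minimal sketch of what that looks like (the identifier is made up for illustration):

// At namespace scope a const already has internal linkage in C++,
// so the static keyword is not required here.
const int answer_count = 42;

// preprocessor equivalent, untyped and unscoped:
// #define ANSWER_COUNT 42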
Other answers
A #define can lead to unexpected results:
#include <iostream>
#define x 500
#define y x + 5
int z = y * 2;
int main()
{
std::cout << "y is " << y;
std::cout << "\nz is " << z;
}
It outputs the wrong result:
y is 505
z is 510
However, if you replace it with constants:
#include <iostream>
const int x = 500;
const int y = x + 5;
int z = y * 2;
int main()
{
std::cout << "y is " << y;
std::cout << "\nz is " << z;
}
It outputs the correct result:
y is 505
z is 1010
This is because #define simply substitutes text: y * 2 expands to x + 5 * 2, which is 510 rather than (x + 5) * 2 = 1010. Because that substitution can badly upset the order of operations, I recommend using constant variables instead.
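A common mitigation, not mentioned in the original answer, is to parenthesise the macro's replacement text; a minimal sketch against the example above:

#define x 500
#define y (x + 5)     // parenthesised: y * 2 now expands to (x + 5) * 2 == 1010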
Personally, I hate the preprocessor, so I always go with const.
The main advantage of a #define is that it requires no memory to be stored in your program, since it really is just substituting some text for a literal value. It also has the advantage of having no type, so it can be used for any integer value without generating warnings.
The advantages of 'const' are that it can be scoped, and that it can be used in situations where a pointer to an object needs to be passed.
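A small sketch of that pointer point (the report function is invented for illustration):

#include <iostream>

#define BUFFER_SIZE 256              // untyped text substitution
const int buffer_size = 256;         // typed object with an address

// hypothetical helper that needs a pointer to an object
void report(const int *value) { std::cout << *value << '\n'; }

int main()
{
    report(&buffer_size);            // fine: the const has an address to pass
    // report(&BUFFER_SIZE);         // won't compile: after substitution this is &256
}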
I don't know exactly what you mean by "static", though. If declaring it globally, I would put it in an anonymous namespace rather than making it static. For example:
namespace {
    unsigned const seconds_per_minute = 60;
}

int main(int argc, char *argv[]) {
...
}
Using a static const is like using any other const variable in your code. This means you can trace wherever the information comes from, as opposed to a #define that is simply replaced in the code during preprocessing.
You might want to take a look at the C++ FAQ Lite for this question: http://www.parashift.com/c++-faq-lite/newbie.html#faq-29.7
As a fairly old and rusty C programmer who never quite mastered C++ because other things came along, and who is now getting to grips with Arduino, my view is simple.
#define is a compiler preprocessor directive and should be used as such, for conditional compilation and the like, e.g. where low-level code needs to define some possible alternative data structures for portability to specific hardware. It can produce inconsistent results depending on the order in which your modules are compiled and linked. If you need something to be global in scope, define it properly as such.
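A rough sketch of that kind of conditional-compilation use; the BOARD_UNO flag and the buffer sizes are made up:

// Hypothetical: pick a hardware-specific buffer size at compile time,
// e.g. with -DBOARD_UNO on the compiler command line.
#ifdef BOARD_UNO
    #define RX_BUFFER_SIZE 64
#else
    #define RX_BUFFER_SIZE 256
#endif

static char rx_buffer[RX_BUFFER_SIZE];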
const and (static const) should always be used to name static values or strings. They are typed and safe, and the debugger can work with them fully.
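For instance (the names and values here are invented):

static const unsigned long baud_rate = 9600;   // typed; the debugger can show it by name
static const char greeting[] = "hello";        // typed string constant rather than a bare literal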
Enums have always confused me, so I have managed to avoid using them.
Pros and cons between #defines, consts and (what you have forgotten) enums, depending on usage:
enums:
- only possible for integer values
- properly scoped / identifier clash issues handled nicely, particularly in C++11 enum classes where the enumerations for enum class X are disambiguated by the scope X:: (see the sketch after this list)
- strongly typed, but to a big-enough signed-or-unsigned int size over which you have no control in C++03 (though you can specify a bit field into which they should be packed if the enum is a member of a struct/class/union), while C++11 defaults to int but can be explicitly set by the programmer
- can't take the address - there isn't one, as the enumeration values are effectively substituted inline at the points of usage
- stronger usage restraints (e.g. incrementing - template <typename T> void f(T t) { cout << ++t; } won't compile, though you can wrap an enum into a class with implicit constructor, casting operator and user-defined operators)
- each constant's type is taken from the enclosing enum, so template <typename T> void f(T) gets a distinct instantiation when passed the same numeric value from different enums, all of which are distinct from any actual f(int) instantiation. Each function's object code could be identical (ignoring address offsets), but I wouldn't expect a compiler/linker to eliminate the unnecessary copies, though you could check your compiler/linker if you care.
- even with typeof/decltype, can't expect numeric_limits to provide useful insight into the set of meaningful values and combinations (indeed, "legal" combinations aren't even notated in the source code; consider enum { A = 1, B = 2 } - is A|B "legal" from a program logic perspective?)
- the enum's typename may appear in various places in RTTI, compiler messages etc. - possibly useful, possibly obfuscation
- you can't use an enumeration without the translation unit actually seeing the value, which means enums in library APIs need the values exposed in the header, and make and other timestamp-based recompilation tools will trigger client recompilation when they're changed (bad!)
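A small sketch of the C++11 scoping and no-address points (the type names are invented):

// C++11 scoped enums: enumerators are disambiguated by the enum's scope,
// so the two Left values below do not clash.
enum class Direction { Left, Right };
enum class Alignment { Left, Right, Centre };

Direction d = Direction::Left;
// const Direction *p = &Direction::Left;   // won't compile: an enumerator has no address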
consts:
- properly scoped / identifier clash issues handled nicely
- strong, single, user-specified type
  - you might try to "type" a #define ala #define S std::string("abc"), but the constant avoids repeated construction of distinct temporaries at each point of use
- One Definition Rule complications
- can take the address, create const references to them etc.
- most similar to a non-const value, which minimises work and impact if switching between the two
- value can be placed inside the implementation file, allowing a localised recompile and just client links to pick up the change (see the sketch after this list)
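A minimal sketch of that last point, with invented file and constant names:

// config.h (hypothetical header) - clients see only this declaration
extern const int max_retries;

// config.cpp (hypothetical implementation file) - change the value here,
// recompile this one file, and client code only needs to relink
extern const int max_retries = 5;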
#defines: "global" scope / more prone to conflicting usages, which can produce hard-to-resolve compilation issues and unexpected run-time results rather than sane error messages; mitigating this requires: long, obscure and/or centrally coordinated identifiers, and access to them can't benefit from implicitly matching used/current/Koenig-looked-up namespace, namespace aliases etc. while the trumping best-practice allows template parameter identifiers to be single-character uppercase letters (possibly followed by a number), other use of identifiers without lowercase letters is conventionally reserved for and expected of preprocessor defines (outside the OS and C/C++ library headers). This is important for enterprise scale preprocessor usage to remain manageable. 3rd party libraries can be expected to comply. Observing this implies migration of existing consts or enums to/from defines involves a change in capitalisation, and hence requires edits to client source code rather than a "simple" recompile. (Personally, I capitalise the first letter of enumerations but not consts, so I'd be hit migrating between those two too - maybe time to rethink that.) more compile-time operations possible: string literal concatenation, stringification (taking size thereof), concatenation into identifiers downside is that given #define X "x" and some client usage ala "pre" X "post", if you want or need to make X a runtime-changeable variable rather than a constant you force edits to client code (rather than just recompilation), whereas that transition is easier from a const char* or const std::string given they already force the user to incorporate concatenation operations (e.g. "pre" + X + "post" for string) can't use sizeof directly on a defined numeric literal untyped (GCC doesn't warn if compared to unsigned) some compiler/linker/debugger chains may not present the identifier, so you'll be reduced to looking at "magic numbers" (strings, whatever...) can't take the address the substituted value need not be legal (or discrete) in the context where the #define is created, as it's evaluated at each point of use, so you can reference not-yet-declared objects, depend on "implementation" that needn't be pre-included, create "constants" such as { 1, 2 } that can be used to initialise arrays, or #define MICROSECONDS *1E-6 etc. (definitely not recommending this!) some special things like __FILE__ and __LINE__ can be incorporated into the macro substitution you can test for existence and value in #if statements for conditionally including code (more powerful than a post-preprocessing "if" as the code need not be compilable if not selected by the preprocessor), use #undef-ine, redefine etc. substituted text has to be exposed: in the translation unit it's used by, which means macros in libraries for client use must be in the header, so make and other timestamp-based recompilation tools will trigger client recompilation when they're changed (bad!) or on the command line, where even more care is needed to make sure client code is recompiled (e.g. the Makefile or script supplying the definition should be listed as a dependency)
My personal opinion:
As a general rule, I use consts and consider them the most professional option for general usage (though the others have a simplicity that appeals to this old lazy programmer).