PEP 8 states:
Imports are always put at the top of the file, just after any module comments and docstrings, and before module globals and constants.
However, if the class/method/function I am importing is only used in rare cases, surely it is more efficient to do the import when it is needed?
Isn't this:
class SomeClass(object):
    def not_often_called(self):
        from datetime import datetime
        self.datetime = datetime.now()
more efficient than this?
from datetime import datetime

class SomeClass(object):
    def not_often_called(self):
        self.datetime = datetime.now()
I do not aspire to provide a complete answer, because others have already done that very well. I just want to mention one use case in which I find it especially useful to import modules inside functions.

My application uses Python packages and modules stored in a certain location as plugins. During startup, the application walks through all the modules in that location and imports them; it then looks inside each module and, if it finds a mounting point for a plugin (in my case, a subclass of a certain base class with a unique ID), it registers it. The number of plugins is large (dozens now, maybe hundreds in the future) and each of them is used quite rarely.

Having imports of third-party libraries at the top of my plugin modules carried a noticeable penalty during application startup. Some third-party libraries are especially heavy to import (importing plotly, for example, even tries to connect to the internet and download something, which was adding about one second to startup). By optimizing the imports in the plugins (calling them only in the functions where they are used) I managed to shrink the startup time from about 10 seconds to roughly 2 seconds. That is a big difference for my users.
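As a rough illustration of the kind of setup described above, here is a minimal sketch of a plugin registry with deferred imports. The names (the plugins package, PluginBase, plugin_id, PlotlyReport, render) are hypothetical, not the author's actual code; the point is only that the heavy import happens inside the method that needs it.

# Hypothetical sketch: discover plugin classes eagerly, defer heavy imports.
import importlib
import pkgutil

import plugins  # hypothetical package whose modules contain the plugins


class PluginBase:
    """Plugins subclass this and set a unique plugin_id."""
    plugin_id = None


def discover_plugins():
    registry = {}
    # Walk every module in the plugins package and import it.
    for info in pkgutil.iter_modules(plugins.__path__, plugins.__name__ + "."):
        module = importlib.import_module(info.name)
        # Register every PluginBase subclass the module defines.
        for obj in vars(module).values():
            if isinstance(obj, type) and issubclass(obj, PluginBase) and obj is not PluginBase:
                registry[obj.plugin_id] = obj
    return registry


# Inside an individual plugin module, the heavy dependency is imported lazily,
# so startup only pays for the import if and when the plugin is actually used.
class PlotlyReport(PluginBase):
    plugin_id = "plotly-report"

    def render(self, data):
        import plotly.graph_objects as go  # deferred until first call
        return go.Figure(data=go.Scatter(y=data))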
So my answer is: no, do not always put imports at the top of the module.
Importing into a function's local scope can improve performance. It depends on how the imported object is used inside the function: if you loop many times and access a module-global object on every iteration, binding it to a local name helps.
test.py
X = 10
Y = 11
Z = 12

def add(i):
    i = i + 10
runlocal.py
from test import add, X, Y, Z

def callme():
    x = X
    y = Y
    z = Z
    ladd = add
    for i in range(100000000):
        ladd(i)
        x + y + z

callme()
run.py
from test import add, X, Y, Z

def callme():
    for i in range(100000000):
        add(i)
        X + Y + Z

callme()
Timing on Linux shows a small gain:
/tmp/test$ /usr/bin/time -f "\t%E real,\t%U user,\t%S sys" python run.py
0:17.80 real, 17.77 user, 0.01 sys
/tmp/test$ /usr/bin/time -f "\t%E real,\t%U user,\t%S sys" python runlocal.py
0:14.23 real, 14.22 user, 0.01 sys
real is wall-clock time, user is the time spent in the program, and sys is the time spent in system calls.
https://docs.python.org/3.5/reference/executionmodel.html#resolution-of-names
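A quick way to see where the gain comes from is to compare the bytecode: a module-level name is looked up with LOAD_GLOBAL (a dictionary lookup on every access), while a name bound inside the function is looked up with LOAD_FAST (an indexed slot). This is a minimal sketch; the exact opcode names vary slightly between CPython versions.

import dis

X = 10

def use_global():
    return X + X          # two LOAD_GLOBAL lookups

def use_local():
    x = X                 # bind once ...
    return x + x          # ... then two cheap LOAD_FAST lookups

dis.dis(use_global)
dis.dis(use_local)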
Here is a summary of up-to-date answers to this and related questions.
- PEP 8 recommends putting imports at the top.
- It is often more convenient to get ImportErrors when you first run your program rather than when your program first calls your function.
- Putting imports in the function scope can help avoid issues with circular imports (see the sketch after this list).
- Putting imports in the function scope helps keep the module namespace clean, so that the imported names do not appear among tab-completion suggestions.
- Start-up time: imports in a function won't run until (if) that function is called. This can become significant with heavy-weight libraries.
- Even though import statements are super fast on subsequent runs, they still incur a speed penalty, which can be significant if the function is trivial but frequently used.
- Imports under the __name__ == "__main__" guard seem very reasonable.
- Refactoring might be easier if the imports are located in the function where they are used (it facilitates moving the function to another module). It can also be argued that this is good for readability. However, most would argue the contrary: imports at the top enhance readability, since you can see all your dependencies at a glance.
- It seems unclear whether dynamic or conditional imports favour one style over the other.
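To illustrate the circular-import point, here is a minimal sketch with two hypothetical modules, a.py and b.py. With both imports at module level the two files would fail to import each other; deferring one of them into the function breaks the cycle.

# a.py
from b import helper        # a depends on b at import time

def do_a():
    return helper() + 1

# b.py
def helper():
    return 41

def do_b():
    from a import do_a      # deferred import breaks the cycle: a is only
    return do_a()           # loaded when do_b() is first called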
I have adopted the practice of putting all imports in the functions that use them, rather than at the top of the module.

The benefit I get is the ability to refactor more reliably. When I move a function from one module to another, I know the function will keep working with all of its legacy of testing intact. If I have my imports at the top of the module, when I move a function I find I end up spending a lot of time getting the new module's imports complete and minimal. A refactoring IDE might make this irrelevant.

There is a speed penalty, as mentioned elsewhere. I have measured this in my application and found it to be insignificant for my purposes.

It is also nice to be able to see all module dependencies up front without resorting to a search (e.g. grep). However, the reason I care about module dependencies is usually that I am installing, refactoring, or moving an entire system comprising multiple files, not just a single module. In that case I am going to perform a global search anyway to make sure I have the system-level dependencies. So I have not found global imports to aid my understanding of a system in practice.

I usually put the import of sys inside the if __name__ == '__main__' check and then pass arguments (such as sys.argv[1:]) to a main() function. This allows me to use main in a context where sys has not been imported.
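For example, the guard pattern described above looks roughly like this (a minimal sketch; main and the script layout are illustrative):

def main(argv):
    # main() never touches sys itself, so tests or other callers can pass
    # their own argument list without sys ever being imported.
    print("arguments:", argv)

if __name__ == '__main__':
    import sys              # only imported when run as a script
    main(sys.argv[1:])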