How do I get the size an object occupies in memory in Python?
Current answer
The asizeof module of the Pympler package can do this.
Use it as follows:
from pympler import asizeof
asizeof.asizeof(my_object)
Unlike sys.getsizeof, it works for objects you create yourself. It even works with numpy.
>>> asizeof.asizeof(tuple('bcd'))
200
>>> asizeof.asizeof({'foo': 'bar', 'baz': 'bar'})
400
>>> asizeof.asizeof({})
280
>>> asizeof.asizeof({'foo':'bar'})
360
>>> asizeof.asizeof('foo')
40
>>> asizeof.asizeof(Bar())
352
>>> asizeof.asizeof(Bar().__dict__)
280
>>> A = rand(10)
>>> B = rand(10000)
>>> asizeof.asizeof(A)
176
>>> asizeof.asizeof(B)
80096
As mentioned before, the (byte)code size of objects such as classes, functions, methods, and modules can be included by setting the option code=True, as in the sketch below.
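A minimal sketch of that option (exact byte counts vary by Python version):

from pympler import asizeof

def double(x):
    return x * 2

# Without code=True only the function object itself is measured;
# with code=True its compiled bytecode is included as well.
print(asizeof.asizeof(double))
print(asizeof.asizeof(double, code=True))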
If you need other views on live data, Pympler's muppy module provides on-line monitoring of a Python application, and its ClassTracker module provides off-line analysis of the lifetime of selected Python objects.
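For instance, a minimal muppy sketch for a running process (output depends on interpreter state):

from pympler import muppy, summary

all_objects = muppy.get_objects()       # all objects currently alive in the process
sum1 = summary.summarize(all_objects)   # aggregate them by type
summary.print_(sum1)                    # print a type / count / total-size table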
Other answers
Use the following function to get the actual size of a Python object:
import sys
import gc

def actualsize(input_obj):
    """Recursively sum the size of an object and everything it references."""
    memory_size = 0
    ids = set()            # ids already counted, so shared objects aren't double-counted
    objects = [input_obj]
    while objects:
        new = []
        for obj in objects:
            if id(obj) not in ids:
                ids.add(id(obj))
                memory_size += sys.getsizeof(obj)
                new.append(obj)
        objects = gc.get_referents(*new)   # descend into everything the new objects reference
    return memory_size

actualsize([1, 2, [3, 4, 5, 1]])
Reference: https://towardsdatascience.com/the-strange-size-of-python-objects-in-memory-ce87bdfbb97f
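To see why the recursive walk matters, here is a quick comparison against the shallow sys.getsizeof (exact numbers vary by Python build):

import sys

nested = [1, 2, [3, 4, 5, 1]]
print(sys.getsizeof(nested))   # shallow: the outer list object only
print(actualsize(nested))      # deep: the list plus every object it references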
For numpy arrays, getsizeof doesn't work - for me it always returns 40 for some reason:
from pylab import *
from sys import getsizeof
A = rand(10)
B = rand(10000)
Then (in ipython):
In [64]: getsizeof(A)
Out[64]: 40
In [65]: getsizeof(B)
Out[65]: 40
Happily, though:
In [66]: A.nbytes
Out[66]: 80
In [67]: B.nbytes
Out[67]: 80000
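Building on .nbytes, a small sketch for totaling the data buffers of several arrays (note that .nbytes counts only the data buffer, not the small array header):

import numpy as np

arrays = [np.random.rand(10), np.random.rand(10000)]
total = sum(a.nbytes for a in arrays)   # 80 + 80000 bytes of float64 data
print(total)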
Here is a quick script I wrote, based on the previous answers, to list the sizes of all variables:
for i in dir():
    print(i, sys.getsizeof(eval(i)))
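A variant of the same idea that avoids eval by iterating over globals() directly (an illustrative sketch, sorted largest first):

import sys

for name, value in sorted(globals().items(),
                          key=lambda kv: sys.getsizeof(kv[1]),
                          reverse=True):
    print(name, sys.getsizeof(value))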
If performance is not an issue, the easiest solution is to pickle and measure:
import pickle
data = ...
len(pickle.dumps(data))
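For instance, with some hypothetical sample data:

import pickle

data = {"words": ["spam"] * 1000, "counts": list(range(1000))}
print(len(pickle.dumps(data)))   # serialized size in bytes, a rough proxy for memory use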
If you don't need the exact size of the object but roughly want to know how big it is, one quick (and dirty) way is to let the program run, sleep for an extended period of time, and check the memory usage (e.g. in Mac's Activity Monitor) of this particular Python process. This is effective when you are trying to find the size of one single large object in a Python process.

For example, I recently wanted to check the memory usage of a new data structure and compare it with that of Python's set data structure. First I wrote the elements (words from a large public-domain book) to a set, then checked the size of the process, and then did the same thing with the other data structure. I found out that the Python process with a set takes twice as much memory as the new data structure.

Again, you wouldn't be able to say exactly that the memory used by the process is equal to the size of the object. As the size of the object gets large, this becomes close, since the memory consumed by the rest of the process becomes negligible compared to the size of the object you are trying to monitor.
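A minimal sketch of that workflow, assuming the third-party psutil package is installed and using a hypothetical book.txt as input:

import os
import time
import psutil

words = set(open("book.txt").read().split())   # the large object under test (hypothetical input file)
time.sleep(60)                                 # pause so the process can be inspected in Activity Monitor
rss = psutil.Process(os.getpid()).memory_info().rss
print(f"resident set size: {rss / 1024 ** 2:.1f} MiB")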