I have a pandas DataFrame, df_test. It contains a column 'size' which represents size in bytes. I have calculated KB, MB, and GB using the following code:
import locale
import pandas as pd

locale.setlocale(locale.LC_ALL, '')  # required for grouping=True to insert thousands separators

df_test = pd.DataFrame([
    {'dir': '/Users/uname1', 'size': 994933},
    {'dir': '/Users/uname2', 'size': 109338711},
])
df_test['size_kb'] = df_test['size'].astype(int).apply(lambda x: locale.format("%.1f", x / 1024.0, grouping=True) + ' KB')
df_test['size_mb'] = df_test['size'].astype(int).apply(lambda x: locale.format("%.1f", x / 1024.0 ** 2, grouping=True) + ' MB')
df_test['size_gb'] = df_test['size'].astype(int).apply(lambda x: locale.format("%.1f", x / 1024.0 ** 3, grouping=True) + ' GB')
df_test
             dir       size       size_kb   size_mb size_gb
0  /Users/uname1     994933      971.6 KB    0.9 MB  0.0 GB
1  /Users/uname2  109338711  106,776.1 KB  104.3 MB  0.1 GB

[2 rows x 5 columns]
I have run this over 120,000 rows, and according to %timeit it takes about 2.97 seconds per column * 3 = ~9 seconds.
Is there any way to make this faster? For example, instead of returning one column at a time from apply and running it three times, can I return all three columns in one pass to insert back into the original DataFrame?
The other questions I've found all want to take multiple values and return a single value. I want to take one value and return multiple columns.
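For reference, here is a minimal way to reproduce the three-pass baseline timing outside of IPython's %timeit (a sketch; the 120,000-row frame is synthetic stand-in data, not the real directory listing):

import locale
import time

import numpy as np
import pandas as pd

locale.setlocale(locale.LC_ALL, '')

# Synthetic stand-in for the real 120k-row frame
df = pd.DataFrame({'size': (np.random.random(120_000) * 1e9).astype(int)})

start = time.perf_counter()
for unit, power in [('KB', 1), ('MB', 2), ('GB', 3)]:
    df['size_' + unit.lower()] = df['size'].apply(
        lambda x, p=power, u=unit: locale.format_string(
            "%.1f", x / 1024.0 ** p, grouping=True) + ' ' + u)
print('three passes:', time.perf_counter() - start, 'seconds')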
Just another readable way: this code adds all three new columns and their values, returning a Series (without any extra parameters) from the apply function.
def sizes(s):
    val_kb = locale.format("%.1f", s['size'] / 1024.0, grouping=True) + ' KB'
    val_mb = locale.format("%.1f", s['size'] / 1024.0 ** 2, grouping=True) + ' MB'
    val_gb = locale.format("%.1f", s['size'] / 1024.0 ** 3, grouping=True) + ' GB'
    return pd.Series([val_kb, val_mb, val_gb], index=['size_kb', 'size_mb', 'size_gb'])

df_test[['size_kb', 'size_mb', 'size_gb']] = df_test.apply(sizes, axis=1)
A generic example from https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html:
df.apply(lambda x: pd.Series([1, 2], index=['foo', 'bar']), axis=1)
#    foo  bar
# 0    1    2
# 1    1    2
# 2    1    2
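A follow-up note: the result of this apply is itself a DataFrame with the same index as df, so it can be attached in one step with join (a minimal sketch, assuming df has no existing foo/bar columns):

expanded = df.apply(lambda x: pd.Series([1, 2], index=['foo', 'bar']), axis=1)
df = df.join(expanded)  # adds foo and bar alongside the original columns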
Some of the existing replies are fine, but I want to offer another, perhaps more "generalized", option. This works for me on current pandas 0.23 (I'm not sure whether it works on earlier versions):
import locale
import pandas as pd

df_test = pd.DataFrame([
    {'dir': '/Users/uname1', 'size': 994933},
    {'dir': '/Users/uname2', 'size': 109338711},
])

def sizes(s):
    a = locale.format_string("%.1f", s['size'] / 1024.0, grouping=True) + ' KB'
    b = locale.format_string("%.1f", s['size'] / 1024.0 ** 2, grouping=True) + ' MB'
    c = locale.format_string("%.1f", s['size'] / 1024.0 ** 3, grouping=True) + ' GB'
    return a, b, c

df_test[['size_kb', 'size_mb', 'size_gb']] = df_test.apply(sizes, axis=1, result_type="expand")
Note that the trick is the result_type parameter of apply, which expands the function's result into a DataFrame that can be assigned directly to new/existing columns.
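To see what result_type does in isolation, here is a minimal sketch (the integer kb/mb columns are illustrative only, not the locale-formatted strings above): returning a tuple normally yields a single column of tuples, while result_type="expand" spreads the elements into separate columns.

import pandas as pd

df = pd.DataFrame({'size': [994933, 109338711]})

# Default result_type: the tuples stay packed in one object column
packed = df.apply(lambda row: (row['size'] // 1024, row['size'] // 1024 ** 2), axis=1)

# result_type="expand": each tuple element becomes its own column
df[['kb', 'mb']] = df.apply(
    lambda row: (row['size'] // 1024, row['size'] // 1024 ** 2),
    axis=1, result_type="expand")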
You can go 40+ times faster than the top answers here if you do your math in numpy instead. This adapts the top two approaches from @Rocky K, with the main difference being that it runs on an actual df of 120k rows. Numpy is far faster at math when you apply your functions array-wise instead of value-wise. The best approach is by far the third one, because it uses numpy for the math. Notice also that it calculates 1024**2 and 1024**3 only once each, instead of once per row, saving 240k calculations. Here are the timings on my machine:
Tuples (pass value, return tuple then zip, new columns don't exist):
Runtime: 10.935037851333618
Tuples (pass value, return tuple then zip, new columns exist):
Runtime: 11.120025157928467
Use numpy for math portions:
Runtime: 0.24799370765686035
Here is the script I used to calculate these times (adapted from Rocky K):
import numpy as np
import pandas as pd
import locale
import time
size = np.random.random(120000) * 1000000000
data = pd.DataFrame({'Size': size})
def sizes_pass_value_return_tuple(value):
    a = locale.format_string("%.1f", value / 1024.0, grouping=True) + ' KB'
    b = locale.format_string("%.1f", value / 1024.0 ** 2, grouping=True) + ' MB'
    c = locale.format_string("%.1f", value / 1024.0 ** 3, grouping=True) + ' GB'
    return a, b, c
print("\nTuples (pass value, return tuple then zip, new columns don't exist):")
df1 = data.copy()
start = time.time()
df1['size_kb'], df1['size_mb'], df1['size_gb'] = zip(*df1['Size'].apply(sizes_pass_value_return_tuple))
end = time.time()
print('Runtime:', end - start, '\n')
print('Tuples (pass value, return tuple then zip, new columns exist):')
df2 = data.copy()
start = time.time()
df2 = pd.concat([df2, pd.DataFrame(columns=['size_kb', 'size_mb', 'size_gb'])])
df2['size_kb'], df2['size_mb'], df2['size_gb'] = zip(*df2['Size'].apply(sizes_pass_value_return_tuple))
end = time.time()
print('Runtime:', end - start, '\n')
print('Use numpy for math portions:')
df3 = data.copy()
start = time.time()
df3['size_kb'] = (df3.Size.values / 1024).round(1)
df3['size_kb'] = df3.size_kb.astype(str) + ' KB'
df3['size_mb'] = (df3.Size.values / 1024 ** 2).round(1)
df3['size_mb'] = df3.size_mb.astype(str) + ' MB'
df3['size_gb'] = (df3.Size.values / 1024 ** 3).round(1)
df3['size_gb'] = df3.size_gb.astype(str) + ' GB'
end = time.time()
print('Runtime:', end - start, '\n')
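One behavioral difference worth flagging: the numpy version drops the locale grouping, so large values print without thousands separators. Here is a sketch of one way to keep the math vectorized and add plain-comma grouping only in the final formatting step (the f-string ',' is not locale-aware, and the formatting still runs per element, so treat this as an illustrative compromise rather than part of the benchmark above):

# Vectorized division; only the final string formatting runs per element.
df3['size_kb'] = (df3['Size'] / 1024).map(lambda v: f'{v:,.1f} KB')
df3['size_mb'] = (df3['Size'] / 1024 ** 2).map(lambda v: f'{v:,.1f} MB')
df3['size_gb'] = (df3['Size'] / 1024 ** 3).map(lambda v: f'{v:,.1f} GB')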
You can return a Series from the applied function that contains the new data, avoiding the need to iterate three times. Passing axis=1 to apply applies the sizes function to each row of the DataFrame, returning a Series to add to a new DataFrame. This Series, s, contains the new values, as well as the original data.
def sizes(s):
    s['size_kb'] = locale.format("%.1f", s['size'] / 1024.0, grouping=True) + ' KB'
    s['size_mb'] = locale.format("%.1f", s['size'] / 1024.0 ** 2, grouping=True) + ' MB'
    s['size_gb'] = locale.format("%.1f", s['size'] / 1024.0 ** 3, grouping=True) + ' GB'
    return s

# rows_list is the list of row dicts from the question; note that DataFrame.append
# was removed in pandas 2.0, where pd.concat is the replacement.
df_test = df_test.append(rows_list)
df_test = df_test.apply(sizes, axis=1)
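For context, a minimal run of the above on the question's frame (a sketch; the column list in the comment is the expected result, not captured output):

import locale

import pandas as pd

locale.setlocale(locale.LC_ALL, '')  # so grouping=True inserts separators

df_test = pd.DataFrame([
    {'dir': '/Users/uname1', 'size': 994933},
    {'dir': '/Users/uname2', 'size': 109338711},
])
df_test = df_test.apply(sizes, axis=1)  # sizes as defined above
print(df_test.columns.tolist())
# Expected: ['dir', 'size', 'size_kb', 'size_mb', 'size_gb']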
Very cool answers! Thanks Jesse and jaumebonet! Here are some observations comparing:

zip(* ...
... result_type="expand")

Although expand is more elegant (more "pandifyed"), zip is at least 2x faster. In the simple example below it was 4x faster for me.
import pandas as pd

dat = [[i, 10 * i] for i in range(1000)]
df = pd.DataFrame(dat, columns=["a", "b"])

def add_and_sub(row):
    add = row["a"] + row["b"]
    sub = row["a"] - row["b"]
    return add, sub

df[["add", "sub"]] = df.apply(add_and_sub, axis=1, result_type="expand")
# versus
df["add"], df["sub"] = zip(*df.apply(add_and_sub, axis=1))