I have created a pandas DataFrame:
from pandas import DataFrame
df = DataFrame(index=['A','B','C'], columns=['x','y'])
and got this:
     x    y
A  NaN  NaN
B  NaN  NaN
C  NaN  NaN
Now I want to assign a value to a particular cell, for example to row C and column x.
I expected to get this result:
     x    y
A  NaN  NaN
B  NaN  NaN
C   10  NaN
using the following code:
df.xs('C')['x'] = 10
But the contents of df have not changed. The DataFrame still contains only NaNs.
Any suggestions?
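For reference, the usual explanation (a minimal sketch, not part of the original question or answers): df.xs('C') is not guaranteed to return a view into df, so ['x'] = 10 may write to a temporary object that is then thrown away. The label-based setters used in the answers below write into df directly.
import pandas as pd

df = pd.DataFrame(index=['A', 'B', 'C'], columns=['x', 'y'])

# Chained assignment: df.xs('C') typically returns a copy, so the write can be
# silently lost (recent pandas versions emit a warning here).
df.xs('C')['x'] = 10

# Label-based setters write into df itself.
df.loc['C', 'x'] = 10   # general label-based indexer
df.at['C', 'x'] = 10    # optimized for a single scalar cell

print(df)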
Below is a summary of the valid solutions provided by all users, for DataFrames indexed by integers and by strings.
df.iloc, df.loc and df.at work for both types of DataFrames. df.iloc only works with row/column integer positions, whereas df.loc and df.at support setting values using column names and/or integer indexes.
When the specified index does not exist, both df.loc and df.at append the newly inserted rows/columns to the existing DataFrame, but df.iloc raises "IndexError: positional indexers are out-of-bounds". A working example tested in Python 2.7 and 3.7 is as follows:
import numpy as np, pandas as pd
df1 = pd.DataFrame(index=np.arange(3), columns=['x','y','z'])
df1['x'] = ['A','B','C']
df1.at[2,'y'] = 400
# if the specified rows/columns do not exist, new rows/columns are appended to the existing data frame
df1.at['D','w'] = 9000
df1.loc['E','q'] = 499
# using df[<some_column_name>] == <condition> to retrieve target rows
df1.at[df1['x']=='B', 'y'] = 10000
df1.loc[df1['x']=='B', ['z','w']] = 10000
# using a list of indexes to set values
df1.iloc[[1,2,4], 2] = 9999
df1.loc[[0,'D','E'],'w'] = 7500
df1.at[[0,2,"D"],'x'] = 10
df1.at[:, ['y', 'w']] = 8000
df1
>>> df1
     x     y     z     w      q
0   10  8000   NaN  8000    NaN
1    B  8000  9999  8000    NaN
2   10  8000  9999  8000    NaN
D   10  8000   NaN  8000    NaN
E  NaN  8000  9999  8000  499.0
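To make the out-of-bounds behaviour mentioned above concrete, here is a small standalone sketch (my addition, not from the original answer): loc and at enlarge the DataFrame when a label is missing, while iloc refuses and raises IndexError.
import pandas as pd

df = pd.DataFrame(index=range(3), columns=['x', 'y'])

df.loc[5, 'x'] = 1      # label 5 does not exist: the row is appended
df.at[6, 'z'] = 2       # label 6 and column 'z' do not exist: both are appended

try:
    df.iloc[10, 0] = 3  # position 10 is out of bounds: raises IndexError
except IndexError as exc:
    print(exc)

print(df)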
If you want to change the cell at position (0, 0) of df to a string such as '"236"76"', the following options will do the job:
df[0][0] = '"236"76"'
# %timeit df[0][0] = '"236"76"'
# 938 µs ± 83.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Or use pandas.DataFrame.at:
df.at[0, 0] = '"236"76"'
# %timeit df.at[0, 0] = '"236"76"'
# 15 µs ± 2.09 µs per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Or use pandas.DataFrame.iat:
df.iat[0, 0] = '"236"76"'
# %timeit df.iat[0, 0] = '"236"76"'
# 41.1 µs ± 3.09 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Or use pandas.DataFrame.loc:
df.loc[0, 0] = '"236"76"'
# %timeit df.loc[0, 0] = '"236"76"'
# 5.21 ms ± 401 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Or use pandas.DataFrame.iloc:
df.iloc[0, 0] = '"236"76"'
# %timeit df.iloc[0, 0] = '"236"76"'
# 5.12 ms ± 300 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
If timing is a concern, using pandas.DataFrame.at is the fastest approach.
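One caveat on the first option (my note, not part of the original answer): df[0][0] = ... is chained assignment, and with copy-on-write enabled (opt-in on pandas 2.x, the default going forward) the write is discarded with a warning, while the .at/.iat/.loc/.iloc setters keep working. A minimal sketch:
import pandas as pd

df = pd.DataFrame([[1, 2], [3, 4]], dtype=object)

# Chained assignment writes through an intermediate Series; under copy-on-write
# it no longer reaches df.
df[0][0] = '"236"76"'

# The dedicated setters always write into df itself.
df.at[0, 0] = '"236"76"'

print(df)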
In addition to the answers above, here is a benchmark comparing different ways of adding rows of data to an existing DataFrame. It shows that using at or set_value is the most efficient approach for large DataFrames (at least under these test conditions).
Create a new DataFrame for each row, then...
... append it (13.0 s)
... concatenate it (13.1 s)
Store all new rows in another container first, convert to a new DataFrame once, then append...
... container = list of lists (2.0 s)
... container = dictionary of lists (1.9 s)
Preallocate the whole DataFrame, iterate over the new rows and all columns, and fill using...
... at (0.6 s)
... set_value (0.4 s)
The tests used an existing DataFrame with 100,000 rows and 1,000 columns filled with random numpy values, to which 100 new rows were added.
The code is below:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Wed Nov 21 16:38:46 2018
@author: gebbissimo
"""
import pandas as pd
import numpy as np
import time
NUM_ROWS = 100000
NUM_COLS = 1000
data = np.random.rand(NUM_ROWS,NUM_COLS)
df = pd.DataFrame(data)
NUM_ROWS_NEW = 100
data_tot = np.random.rand(NUM_ROWS + NUM_ROWS_NEW,NUM_COLS)
df_tot = pd.DataFrame(data_tot)
DATA_NEW = np.random.rand(1,NUM_COLS)
#%% FUNCTIONS
# create and append
def create_and_append(df):
for i in range(NUM_ROWS_NEW):
df_new = pd.DataFrame(DATA_NEW)
df = df.append(df_new)
return df
# create and concatenate
def create_and_concat(df):
for i in range(NUM_ROWS_NEW):
df_new = pd.DataFrame(DATA_NEW)
df = pd.concat((df, df_new))
return df
# store as list of lists, convert once, then append
def store_as_list(df):
lst = [[] for i in range(NUM_ROWS_NEW)]
for i in range(NUM_ROWS_NEW):
for j in range(NUM_COLS):
lst[i].append(DATA_NEW[0,j])
df_new = pd.DataFrame(lst)
df_tot = df.append(df_new)
return df_tot
# store as dict of lists, convert once, then append
def store_as_dict(df):
dct = {}
for j in range(NUM_COLS):
dct[j] = []
for i in range(NUM_ROWS_NEW):
dct[j].append(DATA_NEW[0,j])
df_new = pd.DataFrame(dct)
df_tot = df.append(df_new)
return df_tot
# preallocate and fill using .at
def fill_using_at(df):
for i in range(NUM_ROWS_NEW):
for j in range(NUM_COLS):
#print("i,j={},{}".format(i,j))
df.at[NUM_ROWS+i,j] = DATA_NEW[0,j]
return df
# preallocate and fill using .set_value
def fill_using_set(df):
for i in range(NUM_ROWS_NEW):
for j in range(NUM_COLS):
#print("i,j={},{}".format(i,j))
df.set_value(NUM_ROWS+i,j,DATA_NEW[0,j])
return df
#%% TESTS
t0 = time.time()
create_and_append(df)
t1 = time.time()
print('Needed {} seconds'.format(t1-t0))
t0 = time.time()
create_and_concat(df)
t1 = time.time()
print('Needed {} seconds'.format(t1-t0))
t0 = time.time()
store_as_list(df)
t1 = time.time()
print('Needed {} seconds'.format(t1-t0))
t0 = time.time()
store_as_dict(df)
t1 = time.time()
print('Needed {} seconds'.format(t1-t0))
t0 = time.time()
fill_using_at(df_tot)
t1 = time.time()
print('Needed {} seconds'.format(t1-t0))
t0 = time.time()
fill_using_set(df_tot)
t1 = time.time()
print('Needed {} seconds'.format(t1-t0))
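A side note on running this benchmark today (my addition, not part of the original answer): DataFrame.set_value was removed in pandas 1.0 and DataFrame.append in pandas 2.0, so on a recent pandas the two calls need replacements along these lines:
import numpy as np
import pandas as pd

NUM_COLS = 1000
DATA_NEW = np.random.rand(1, NUM_COLS)

df = pd.DataFrame(np.random.rand(10, NUM_COLS))
df_new = pd.DataFrame(DATA_NEW)

# df = df.append(df_new)          -> use pd.concat instead
df = pd.concat([df, df_new], ignore_index=True)

# df.set_value(row, col, value)   -> use df.at instead
df.at[len(df) - 1, 0] = DATA_NEW[0, 0]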
I tested this, and the output shows that df.set_value is slightly faster, but the official df.at looks like the fastest non-deprecated method.
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.rand(100, 100))
%timeit df.iat[50,50]=50 # ✓
%timeit df.at[50,50]=50 # ✔
%timeit df.set_value(50,50,50) # will deprecate
%timeit df.iloc[50,50]=50
%timeit df.loc[50,50]=50
7.06 µs ± 118 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
5.52 µs ± 64.2 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
3.68 µs ± 80.8 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
98.7 µs ± 1.07 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
109 µs ± 1.42 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Note that this is for setting the value of a single cell. For vectors, loc and iloc should be better options, since they are vectorized.
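For instance, a minimal sketch of vectorized assignment (my addition, not from the original answer):
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(100, 100))

# One vectorized .loc/.iloc call replaces a Python-level loop of .at calls.
df.loc[10:19, 5] = 0.0              # label-based slice (inclusive of 19)
df.iloc[0:10, 0] = np.arange(10)    # position-based slice with an array of values
df.loc[df[3] > 0.5, 7] = -1.0       # boolean mask selecting rows of one column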