Given two data frames:

df1 = data.frame(CustomerId = c(1:6), Product = c(rep("Toaster", 3), rep("Radio", 3)))
df2 = data.frame(CustomerId = c(2, 4, 6), State = c(rep("Alabama", 2), rep("Ohio", 1)))

df1
#  CustomerId Product
#           1 Toaster
#           2 Toaster
#           3 Toaster
#           4   Radio
#           5   Radio
#           6   Radio

df2
#  CustomerId   State
#           2 Alabama
#           4 Alabama
#           6    Ohio

How can I do database-style (i.e. SQL-style) joins? That is, how do I get:

- An inner join of df1 and df2: return only the rows in which the left table has matching keys in the right table.
- An outer join of df1 and df2: return all rows from both tables, joining records from the left that have matching keys in the right table.
- A left outer join (or simply left join) of df1 and df2: return all rows from the left table, and any rows with matching keys from the right table.
- A right outer join of df1 and df2: return all rows from the right table, and any rows with matching keys from the left table.


Extra credit:

How can I do an SQL-style select statement?


There are some good examples of doing this over at the R Wiki. I'll steal a couple here:

Merge method

Since your keys have the same name, the short way to do an inner join is merge():

merge(df1, df2)

A full outer join (all records from both tables) can be created with the "all" keyword:

merge(df1, df2, all=TRUE)

A left outer join of df1 and df2:

merge(df1, df2, all.x=TRUE)

A right outer join of df1 and df2:

merge(df1, df2, all.y=TRUE)

You can flip them, slap them, and rub them down to get the other two outer joins you asked about :)

Subscript method

A left outer join with df1 on the left, using the subscript method, would be:

df1[,"State"]<-df2[df1[ ,"Product"], "State"]

The other combinations of outer joins can be created by mangling the left outer join subscript example. (Yes, I know that's the equivalent of saying "I'll leave it as an exercise for the reader...") A quick sketch of the right-hand case is shown below.
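For completeness, a minimal sketch of the right outer join with the same match()-based subscripting, assuming (as above) that CustomerId is the key:

df2[, "Product"] <- df1[match(df2[, "CustomerId"], df1[, "CustomerId"]), "Product"]
df2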


By using the merge function and its optional parameters:

Inner join: merge(df1, df2) will work for these examples because R automatically joins the frames by common variable names, but you would most likely want to specify merge(df1, df2, by = "CustomerId") to make sure that you are matching only on the fields you intended. You can also use the by.x and by.y parameters if the matching variables have different names in the different data frames.

Outer join: merge(x = df1, y = df2, by = "CustomerId", all = TRUE)

Left outer: merge(x = df1, y = df2, by = "CustomerId", all.x = TRUE)

Right outer: merge(x = df1, y = df2, by = "CustomerId", all.y = TRUE)

Cross join: merge(x = df1, y = df2, by = NULL)

Just as with the inner join, you would probably want to explicitly pass "CustomerId" to R as the matching variable. I think it's almost always best to state explicitly the identifiers on which you want to merge; it's safer if the input data.frames change unexpectedly, and it is easier to read later.

You can merge on multiple columns by giving by a vector, e.g. by = c("CustomerId", "OrderId").

If the column names to merge on are not the same, you can specify, e.g., by.x = "CustomerId_in_df1", by.y = "CustomerId_in_df2", where CustomerId_in_df1 is the name of the column in the first data frame and CustomerId_in_df2 is the name of the column in the second data frame. (These can also be vectors if you need to merge on multiple columns.)
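A quick hedged illustration of by.x/by.y with the OP's data; the renamed key column CustId here is hypothetical:

df2b <- setNames(df2, c("CustId", "State"))   # pretend df2's key had a different name
merge(df1, df2b, by.x = "CustomerId", by.y = "CustId")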


I would recommend checking out Gabor Grothendieck's sqldf package, which allows you to express these operations in SQL.

library(sqldf)

## inner join
df3 <- sqldf("SELECT CustomerId, Product, State 
              FROM df1
              JOIN df2 USING(CustomerID)")

## left join (substitute 'right' for right join)
df4 <- sqldf("SELECT CustomerId, Product, State 
              FROM df1
              LEFT JOIN df2 USING(CustomerID)")

I find the SQL syntax to be simpler and more natural than its R equivalent (but this may just reflect my RDBMS bias).

See Gabor's sqldf GitHub for more information on joins.
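On the extra-credit part of the question, sqldf also handles plain SQL-style select statements; a minimal hedged example against df1:

sqldf("SELECT CustomerId, Product FROM df1 WHERE Product = 'Radio'")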


There is the data.table approach for an inner join, which is very time- and memory-efficient (and necessary for some larger data.frames):

library(data.table)
  
dt1 <- data.table(df1, key = "CustomerId") 
dt2 <- data.table(df2, key = "CustomerId")

joined.dt1.dt.2 <- dt1[dt2]

merge also works on data.tables (since it is generic and calls merge.data.table):

merge(dt1, dt2)

data.table documented on Stack Overflow:
- How to do a data.table merge operation
- Translating SQL joins on foreign keys to R data.table syntax
- Efficient alternatives to merge for larger data.frames R
- How to do a basic left outer join with data.table in R?

Another option is the join function from the plyr package. [Note from 2022: plyr is now retired and has been superseded by dplyr. Join operations in dplyr are described in this answer.]

library(plyr)

join(df1, df2,
     type = "inner")

#   CustomerId Product   State
# 1          2 Toaster Alabama
# 2          4   Radio Alabama
# 3          6   Radio    Ohio

Options for type: inner, left, right, full.

From ?join: Unlike merge, [join] preserves the order of x no matter what join type is used.
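A small hedged illustration of that difference using the OP's frames: merge() returns rows ordered by the key, while join() keeps the left table's row order.

df1_shuffled <- df1[c(3, 1, 2, 6, 4, 5), ]
merge(df1_shuffled, df2)                  # result comes back sorted by CustomerId
join(df1_shuffled, df2, type = "inner")   # result keeps df1_shuffled's row order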


New in 2014:

Especially if you are also interested in data manipulation in general (including sorting, filtering, subsetting, summarizing, etc.), you should take a look at dplyr, which comes with a variety of functions all designed to facilitate your work with data frames and certain other database types. It even offers quite an elaborate SQL interface, and even a function to translate (most) SQL code directly into R.

The four join-related functions in the dplyr package are (to quote):

- inner_join(x, y, by = NULL, copy = FALSE, ...): return all rows from x where there are matching values in y, and all columns from x and y
- left_join(x, y, by = NULL, copy = FALSE, ...): return all rows from x, and all columns from x and y
- semi_join(x, y, by = NULL, copy = FALSE, ...): return all rows from x where there are matching values in y, keeping just columns from x
- anti_join(x, y, by = NULL, copy = FALSE, ...): return all rows from x where there are not matching values in y, keeping just columns from x

It is all documented there in great detail.

Selecting columns can be done with select(df, "column"). If that is not SQL-ish enough for you, there is the sql() function, into which you can enter SQL code as-is, and it will do the operation you specified just as if you had been writing in R all along (for more information, please refer to the dplyr/databases vignette). For example, if applied correctly, sql("SELECT * FROM hflights") will select all the columns from the "hflights" dplyr table (a "tbl").
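A short hedged sketch of the dplyr route on the OP's frames (depending on your dplyr version, you may first need to give CustomerId a common type, as the next answer notes):

library(dplyr)
inner_join(df1, df2, by = "CustomerId") %>%
  select(CustomerId, State)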


You can also do joins using Hadley Wickham's excellent dplyr package.

library(dplyr)

#make sure that CustomerId cols are both the same type
#they aren’t in the provided data (one is integer and one is double)
df1$CustomerId <- as.double(df1$CustomerId)

Mutating joins: add columns to df1 using matches in df2

#inner
inner_join(df1, df2)

#left outer
left_join(df1, df2)

#right outer
right_join(df1, df2)

#alternate right outer
left_join(df2, df1)

#full join
full_join(df1, df2)

Filtering joins: filter out rows in df1, don't modify the columns

#keep only observations in df1 that match in df2.
semi_join(df1, df2)

#drop all observations in df1 that match in df2.
anti_join(df1, df2)

dplyr has implemented all of these joins, including outer_join, since 0.4, but it is worth noting that for the first few releases prior to 0.4 it did not offer outer_join, and as a result a lot of really bad hacky workaround user code was floating around for quite a while afterwards (you can still find such code in SO and Kaggle answers and on GitHub from that period; hence this answer still serves a purpose).

Join-related release highlights:

Version 0.5 (June 2016)

Handling for POSIXct type, timezones, duplicates, and different factor levels. Better errors and warnings. New suffix argument to control which suffix duplicated variable names receive (#1296).

Version 0.4.0 (January 2015)

Implement right join and outer join (#96). Mutating joins, which add new variables to one table from matching rows in another. Filtering joins, which filter observations from one table based on whether or not they match an observation in the other table.

Version 0.3 (October 2014)

Can now left_join by different variables in each table: df1 %>% left_join(df2, c("var1" = "var2"))

Version 0.2 (May 2014)

*_join() no longer reorders column names (#324)

Version 0.1.3 (April 2014)

- has inner_join, left_join, semi_join, anti_join
- outer_join not implemented yet; the fallback is to use base::merge() (or plyr::join())
- right_join and outer_join are not implemented yet
- Hadley mentions other advantages here
- one minor feature dplyr currently doesn't have is the ability to have separate .x and .y columns, as e.g. Python pandas does

Workarounds per Hadley's comments in that issue:

- right_join(x, y) is the same as left_join(y, x) in terms of the rows, just with the columns in a different order. Easily worked around with select(new_column_order).
- outer_join is basically union(left_join(x, y), right_join(x, y)), i.e. keep all rows from both data frames.
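A minimal hedged sketch of that workaround using the OP's frames, with right_join emulated as a column-reordered left_join and then unioned with the left join:

library(dplyr)
df1$CustomerId <- as.numeric(df1$CustomerId)  # give the keys a common type first
left_part  <- left_join(df1, df2, by = "CustomerId")
right_part <- left_join(df2, df1, by = "CustomerId") %>% select(CustomerId, Product, State)
full_ish   <- union(left_part, right_part)    # all rows from both frames, as a full join would give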


When joining two data frames with ~1 million rows each, one with 2 columns and the other with ~20, I was surprised to find merge(..., all.x = TRUE, all.y = TRUE) faster than dplyr::full_join(). This was with dplyr v0.4.

merge takes ~17 seconds, full_join takes ~65 seconds.

Some food for thought, though, since I generally default to dplyr for manipulation tasks.
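For what it's worth, a hedged sketch of how such a comparison might be reproduced; the sizes and column names below are illustrative, not the poster's actual data:

library(dplyr)
n <- 1e6
a <- data.frame(key = sample(n), v = rnorm(n))
b <- cbind(data.frame(key = sample(n)), as.data.frame(replicate(19, rnorm(n))))  # ~20 columns
system.time(merge(a, b, by = "key", all.x = TRUE, all.y = TRUE))
system.time(full_join(a, b, by = "key"))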


Using the merge function, we can select the variables of the left table or the right table, the same way we are all familiar with the select statement in SQL (e.g.: select a.* ... or select b.* from ...). We just have to add extra code that subsets from the newly joined table.

SQL: select a.* from df1 a inner join df2 b on a.CustomerId = b.CustomerId
R:   merge(df1, df2, by.x = "CustomerId", by.y = "CustomerId")[, names(df1)]

The same way:

SQL: select b.* from df1 a inner join df2 b on a.CustomerId = b.CustomerId
R:   merge(df1, df2, by.x = "CustomerId", by.y = "CustomerId")[, names(df2)]


An update on data.table methods for joining datasets. See below for examples of each type of join. There are two methods: one is to use [.data.table, passing the second data.table as the first argument to the subset; another way is to use the merge function, which dispatches to the fast data.table method.

df1 = data.frame(CustomerId = c(1:6), Product = c(rep("Toaster", 3), rep("Radio", 3)))
df2 = data.frame(CustomerId = c(2L, 4L, 7L), State = c(rep("Alabama", 2), rep("Ohio", 1))) # one value changed to show full outer join

library(data.table)

dt1 = as.data.table(df1)
dt2 = as.data.table(df2)
setkey(dt1, CustomerId)
setkey(dt2, CustomerId)
# right outer join keyed data.tables
dt1[dt2]

setkey(dt1, NULL)
setkey(dt2, NULL)
# right outer join unkeyed data.tables - use `on` argument
dt1[dt2, on = "CustomerId"]

# left outer join - swap dt1 with dt2
dt2[dt1, on = "CustomerId"]

# inner join - use `nomatch` argument
dt1[dt2, nomatch=NULL, on = "CustomerId"]

# anti join - use `!` operator
dt1[!dt2, on = "CustomerId"]

# inner join - using merge method
merge(dt1, dt2, by = "CustomerId")

# full outer join
merge(dt1, dt2, by = "CustomerId", all = TRUE)

# see ?merge.data.table arguments for other cases

Below is a benchmark of base R, sqldf, dplyr and data.table. The benchmark tests unkeyed/unindexed datasets. It was performed on datasets of 50M-1 rows, with 50M-2 common values on the join column, so each scenario (inner, left, right, full) can be tested and the join is still not trivial to perform. This is a type of join that stresses join algorithms well. Timings are as of sqldf 0.4.11, dplyr 0.7.8, data.table 1.12.0.

# inner
Unit: seconds
   expr       min        lq      mean    median        uq       max neval
   base 111.66266 111.66266 111.66266 111.66266 111.66266 111.66266     1
  sqldf 624.88388 624.88388 624.88388 624.88388 624.88388 624.88388     1
  dplyr  51.91233  51.91233  51.91233  51.91233  51.91233  51.91233     1
     DT  10.40552  10.40552  10.40552  10.40552  10.40552  10.40552     1
# left
Unit: seconds
   expr        min         lq       mean     median         uq        max 
   base 142.782030 142.782030 142.782030 142.782030 142.782030 142.782030     
  sqldf 613.917109 613.917109 613.917109 613.917109 613.917109 613.917109     
  dplyr  49.711912  49.711912  49.711912  49.711912  49.711912  49.711912     
     DT   9.674348   9.674348   9.674348   9.674348   9.674348   9.674348       
# right
Unit: seconds
   expr        min         lq       mean     median         uq        max
   base 122.366301 122.366301 122.366301 122.366301 122.366301 122.366301     
  sqldf 611.119157 611.119157 611.119157 611.119157 611.119157 611.119157     
  dplyr  50.384841  50.384841  50.384841  50.384841  50.384841  50.384841     
     DT   9.899145   9.899145   9.899145   9.899145   9.899145   9.899145     
# full
Unit: seconds
  expr       min        lq      mean    median        uq       max neval
  base 141.79464 141.79464 141.79464 141.79464 141.79464 141.79464     1
 dplyr  94.66436  94.66436  94.66436  94.66436  94.66436  94.66436     1
    DT  21.62573  21.62573  21.62573  21.62573  21.62573  21.62573     1

Note that there are other types of joins you can perform using data.table:
- update on join: if you want to look up values from another table into your main table
- aggregate on join: if you want to aggregate on the key you are joining, so you don't have to materialize all the join results
- overlapping join: if you want to merge by ranges
- rolling join: if you want your merge to be able to match values from preceding/following rows by rolling them forward or backward
- non-equi join: if your join condition is non-equal
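As a small hedged illustration of the first of these, an update on join can be written as a one-liner that looks State up from dt2 into dt1 in place (reusing dt1 and dt2 defined above):

dt1[dt2, on = "CustomerId", State := i.State]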

Code to reproduce:

library(microbenchmark)
library(sqldf)
library(dplyr)
library(data.table)
sapply(c("sqldf","dplyr","data.table"), packageVersion, simplify=FALSE)

n = 5e7
set.seed(108)
df1 = data.frame(x=sample(n,n-1L), y1=rnorm(n-1L))
df2 = data.frame(x=sample(n,n-1L), y2=rnorm(n-1L))
dt1 = as.data.table(df1)
dt2 = as.data.table(df2)

mb = list()
# inner join
microbenchmark(times = 1L,
               base = merge(df1, df2, by = "x"),
               sqldf = sqldf("SELECT * FROM df1 INNER JOIN df2 ON df1.x = df2.x"),
               dplyr = inner_join(df1, df2, by = "x"),
               DT = dt1[dt2, nomatch=NULL, on = "x"]) -> mb$inner

# left outer join
microbenchmark(times = 1L,
               base = merge(df1, df2, by = "x", all.x = TRUE),
               sqldf = sqldf("SELECT * FROM df1 LEFT OUTER JOIN df2 ON df1.x = df2.x"),
               dplyr = left_join(df1, df2, by = c("x"="x")),
               DT = dt2[dt1, on = "x"]) -> mb$left

# right outer join
microbenchmark(times = 1L,
               base = merge(df1, df2, by = "x", all.y = TRUE),
               sqldf = sqldf("SELECT * FROM df2 LEFT OUTER JOIN df1 ON df2.x = df1.x"),
               dplyr = right_join(df1, df2, by = "x"),
               DT = dt1[dt2, on = "x"]) -> mb$right

# full outer join
microbenchmark(times = 1L,
               base = merge(df1, df2, by = "x", all = TRUE),
               dplyr = full_join(df1, df2, by = "x"),
               DT = merge(dt1, dt2, by = "x", all = TRUE)) -> mb$full

lapply(mb, print) -> nul

In the case of a left join with a 0..*:0..1 cardinality or a right join with a 0..1:0..* cardinality, it is possible to assign in place the unilateral columns from the joiner (the 0..1 table) directly onto the joinee (the 0..* table), thereby avoiding the creation of an entirely new table. This requires matching the key columns from the joinee into the joiner and indexing and ordering the joiner's rows accordingly for the assignment.

If the key is a single column, then we can use a single call to match() to do the matching. This is the case I will cover in this answer.

Here's an example based on the OP, except that I have added an extra row to df2 with an id of 7 to test the case of a non-matching key in the joiner. This is effectively df1 left join df2:

df1 <- data.frame(CustomerId=1:6,Product=c(rep('Toaster',3L),rep('Radio',3L)));
df2 <- data.frame(CustomerId=c(2L,4L,6L,7L),State=c(rep('Alabama',2L),'Ohio','Texas'));
df1[names(df2)[-1L]] <- df2[match(df1[,1L],df2[,1L]),-1L];
df1;
##   CustomerId Product   State
## 1          1 Toaster    <NA>
## 2          2 Toaster Alabama
## 3          3 Toaster    <NA>
## 4          4   Radio Alabama
## 5          5   Radio    <NA>
## 6          6   Radio    Ohio

In the above, I hard-coded an assumption that the key column is the first column of both input tables. I would argue that, in general, this is not an unreasonable assumption, since, if you have a data.frame with a key column, it would be strange if it had not been set up as the first column from the outset, and you can always reorder the columns to make it so. An advantageous consequence of this assumption is that the name of the key column does not have to be hard-coded, although I suppose it's just replacing one assumption with another. Concision is another advantage of integer indexing, as well as speed. In the benchmarks below I change the implementation to use string name indexing to match the competing implementations.

I think this is a particularly appropriate solution if you have several tables that you want to left join against a single large table. Repeatedly rebuilding the entire table for each merge would be unnecessary and inefficient.

On the other hand, if you need the joinee to remain unaltered through this operation for whatever reason, then this solution cannot be used, since it modifies the joinee directly. Although in that case you could simply make a copy and perform the in-place assignment(s) on the copy.


As a side note, I briefly looked into possible matching solutions for multicolumn keys. Unfortunately, the only matching solutions I found were:

- inefficient concatenations, e.g. match(interaction(df1$a, df1$b), interaction(df2$a, df2$b)), or the same idea with paste()
- inefficient cartesian conjunctions, e.g. outer(df1$a, df2$a, `==`) & outer(df1$b, df2$b, `==`)
- base R merge() and equivalent package-based merge functions, which always allocate a new table to return the merged result, and thus are not suitable for an in-place assignment-based solution

For example, see Matching multiple columns on different data frames and getting other column as result, Match two columns with two other columns, Matching on multiple columns, and the duplicate of this question where I originally came up with the in-place solution, Combine two data frames with different number of rows in R.
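For concreteness, a hedged sketch of the first (interaction-based) workaround on a hypothetical two-column key a/b; it works, but allocates throwaway factor objects, which is why it is listed as inefficient above:

## x and y are illustrative tables, not the OP's data
x <- data.frame(a = c(1, 1, 2), b = c("p", "q", "p"), v = 1:3)
y <- data.frame(a = c(1, 2), b = c("q", "p"), w = c(10, 20))
x$w <- y$w[match(interaction(x$a, x$b), interaction(y$a, y$b))]  # left join y's w onto x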


Benchmarking

I decided to do my own benchmarking to see how the in-place assignment approach compares to the other solutions that have been offered in this question.

Testing code:

library(microbenchmark);
library(data.table);
library(sqldf);
library(plyr);
library(dplyr);

solSpecs <- list(
    merge=list(testFuncs=list(
        inner=function(df1,df2,key) merge(df1,df2,key),
        left =function(df1,df2,key) merge(df1,df2,key,all.x=T),
        right=function(df1,df2,key) merge(df1,df2,key,all.y=T),
        full =function(df1,df2,key) merge(df1,df2,key,all=T)
    )),
    data.table.unkeyed=list(argSpec='data.table.unkeyed',testFuncs=list(
        inner=function(dt1,dt2,key) dt1[dt2,on=key,nomatch=0L,allow.cartesian=T],
        left =function(dt1,dt2,key) dt2[dt1,on=key,allow.cartesian=T],
        right=function(dt1,dt2,key) dt1[dt2,on=key,allow.cartesian=T],
        full =function(dt1,dt2,key) merge(dt1,dt2,key,all=T,allow.cartesian=T) ## calls merge.data.table()
    )),
    data.table.keyed=list(argSpec='data.table.keyed',testFuncs=list(
        inner=function(dt1,dt2) dt1[dt2,nomatch=0L,allow.cartesian=T],
        left =function(dt1,dt2) dt2[dt1,allow.cartesian=T],
        right=function(dt1,dt2) dt1[dt2,allow.cartesian=T],
        full =function(dt1,dt2) merge(dt1,dt2,all=T,allow.cartesian=T) ## calls merge.data.table()
    )),
    sqldf.unindexed=list(testFuncs=list( ## note: must pass connection=NULL to avoid running against the live DB connection, which would result in collisions with the residual tables from the last query upload
        inner=function(df1,df2,key) sqldf(paste0('select * from df1 inner join df2 using(',paste(collapse=',',key),')'),connection=NULL),
        left =function(df1,df2,key) sqldf(paste0('select * from df1 left join df2 using(',paste(collapse=',',key),')'),connection=NULL),
        right=function(df1,df2,key) sqldf(paste0('select * from df2 left join df1 using(',paste(collapse=',',key),')'),connection=NULL) ## can't do right join proper, not yet supported; inverted left join is equivalent
        ##full =function(df1,df2,key) sqldf(paste0('select * from df1 full join df2 using(',paste(collapse=',',key),')'),connection=NULL) ## can't do full join proper, not yet supported; possible to hack it with a union of left joins, but too unreasonable to include in testing
    )),
    sqldf.indexed=list(testFuncs=list( ## important: requires an active DB connection with preindexed main.df1 and main.df2 ready to go; arguments are actually ignored
        inner=function(df1,df2,key) sqldf(paste0('select * from main.df1 inner join main.df2 using(',paste(collapse=',',key),')')),
        left =function(df1,df2,key) sqldf(paste0('select * from main.df1 left join main.df2 using(',paste(collapse=',',key),')')),
        right=function(df1,df2,key) sqldf(paste0('select * from main.df2 left join main.df1 using(',paste(collapse=',',key),')')) ## can't do right join proper, not yet supported; inverted left join is equivalent
        ##full =function(df1,df2,key) sqldf(paste0('select * from main.df1 full join main.df2 using(',paste(collapse=',',key),')')) ## can't do full join proper, not yet supported; possible to hack it with a union of left joins, but too unreasonable to include in testing
    )),
    plyr=list(testFuncs=list(
        inner=function(df1,df2,key) join(df1,df2,key,'inner'),
        left =function(df1,df2,key) join(df1,df2,key,'left'),
        right=function(df1,df2,key) join(df1,df2,key,'right'),
        full =function(df1,df2,key) join(df1,df2,key,'full')
    )),
    dplyr=list(testFuncs=list(
        inner=function(df1,df2,key) inner_join(df1,df2,key),
        left =function(df1,df2,key) left_join(df1,df2,key),
        right=function(df1,df2,key) right_join(df1,df2,key),
        full =function(df1,df2,key) full_join(df1,df2,key)
    )),
    in.place=list(testFuncs=list(
        left =function(df1,df2,key) { cns <- setdiff(names(df2),key); df1[cns] <- df2[match(df1[,key],df2[,key]),cns]; df1; },
        right=function(df1,df2,key) { cns <- setdiff(names(df1),key); df2[cns] <- df1[match(df2[,key],df1[,key]),cns]; df2; }
    ))
);

getSolTypes <- function() names(solSpecs);
getJoinTypes <- function() unique(unlist(lapply(solSpecs,function(x) names(x$testFuncs))));
getArgSpec <- function(argSpecs,key=NULL) if (is.null(key)) argSpecs$default else argSpecs[[key]];

initSqldf <- function() {
    sqldf(); ## creates sqlite connection on first run, cleans up and closes existing connection otherwise
    if (exists('sqldfInitFlag',envir=globalenv(),inherits=F) && sqldfInitFlag) { ## false only on first run
        sqldf(); ## creates a new connection
    } else {
        assign('sqldfInitFlag',T,envir=globalenv()); ## set to true for the one and only time
    }; ## end if
    invisible();
}; ## end initSqldf()

setUpBenchmarkCall <- function(argSpecs,joinType,solTypes=getSolTypes(),env=parent.frame()) {
    ## builds and returns a list of expressions suitable for passing to the list argument of microbenchmark(), and assigns variables to resolve symbol references in those expressions
    callExpressions <- list();
    nms <- character();
    for (solType in solTypes) {
        testFunc <- solSpecs[[solType]]$testFuncs[[joinType]];
        if (is.null(testFunc)) next; ## this join type is not defined for this solution type
        testFuncName <- paste0('tf.',solType);
        assign(testFuncName,testFunc,envir=env);
        argSpecKey <- solSpecs[[solType]]$argSpec;
        argSpec <- getArgSpec(argSpecs,argSpecKey);
        argList <- setNames(nm=names(argSpec$args),vector('list',length(argSpec$args)));
        for (i in seq_along(argSpec$args)) {
            argName <- paste0('tfa.',argSpecKey,i);
            assign(argName,argSpec$args[[i]],envir=env);
            argList[[i]] <- if (i%in%argSpec$copySpec) call('copy',as.symbol(argName)) else as.symbol(argName);
        }; ## end for
        callExpressions[[length(callExpressions)+1L]] <- do.call(call,c(list(testFuncName),argList),quote=T);
        nms[length(nms)+1L] <- solType;
    }; ## end for
    names(callExpressions) <- nms;
    callExpressions;
}; ## end setUpBenchmarkCall()

harmonize <- function(res) {
    res <- as.data.frame(res); ## coerce to data.frame
    for (ci in which(sapply(res,is.factor))) res[[ci]] <- as.character(res[[ci]]); ## coerce factor columns to character
    for (ci in which(sapply(res,is.logical))) res[[ci]] <- as.integer(res[[ci]]); ## coerce logical columns to integer (works around sqldf quirk of munging logicals to integers)
    ##for (ci in which(sapply(res,inherits,'POSIXct'))) res[[ci]] <- as.double(res[[ci]]); ## coerce POSIXct columns to double (works around sqldf quirk of losing POSIXct class) ----- POSIXct doesn't work at all in sqldf.indexed
    res <- res[order(names(res))]; ## order columns
    res <- res[do.call(order,res),]; ## order rows
    res;
}; ## end harmonize()

checkIdentical <- function(argSpecs,solTypes=getSolTypes()) {
    for (joinType in getJoinTypes()) {
        callExpressions <- setUpBenchmarkCall(argSpecs,joinType,solTypes);
        if (length(callExpressions)<2L) next;
        ex <- harmonize(eval(callExpressions[[1L]]));
        for (i in seq(2L,len=length(callExpressions)-1L)) {
            y <- harmonize(eval(callExpressions[[i]]));
            if (!isTRUE(all.equal(ex,y,check.attributes=F))) {
                ex <<- ex;
                y <<- y;
                solType <- names(callExpressions)[i];
                stop(paste0('non-identical: ',solType,' ',joinType,'.'));
            }; ## end if
        }; ## end for
    }; ## end for
    invisible();
}; ## end checkIdentical()

testJoinType <- function(argSpecs,joinType,solTypes=getSolTypes(),metric=NULL,times=100L) {
    callExpressions <- setUpBenchmarkCall(argSpecs,joinType,solTypes);
    bm <- microbenchmark(list=callExpressions,times=times);
    if (is.null(metric)) return(bm);
    bm <- summary(bm);
    res <- setNames(nm=names(callExpressions),bm[[metric]]);
    attr(res,'unit') <- attr(bm,'unit');
    res;
}; ## end testJoinType()

testAllJoinTypes <- function(argSpecs,solTypes=getSolTypes(),metric=NULL,times=100L) {
    joinTypes <- getJoinTypes();
    resList <- setNames(nm=joinTypes,lapply(joinTypes,function(joinType) testJoinType(argSpecs,joinType,solTypes,metric,times)));
    if (is.null(metric)) return(resList);
    units <- unname(unlist(lapply(resList,attr,'unit')));
    res <- do.call(data.frame,c(list(join=joinTypes),setNames(nm=solTypes,rep(list(rep(NA_real_,length(joinTypes))),length(solTypes))),list(unit=units,stringsAsFactors=F)));
    for (i in seq_along(resList)) res[i,match(names(resList[[i]]),names(res))] <- resList[[i]];
    res;
}; ## end testAllJoinTypes()

testGrid <- function(makeArgSpecsFunc,sizes,overlaps,solTypes=getSolTypes(),joinTypes=getJoinTypes(),metric='median',times=100L) {

    res <- expand.grid(size=sizes,overlap=overlaps,joinType=joinTypes,stringsAsFactors=F);
    res[solTypes] <- NA_real_;
    res$unit <- NA_character_;
    for (ri in seq_len(nrow(res))) {

        size <- res$size[ri];
        overlap <- res$overlap[ri];
        joinType <- res$joinType[ri];

        argSpecs <- makeArgSpecsFunc(size,overlap);

        checkIdentical(argSpecs,solTypes);

        cur <- testJoinType(argSpecs,joinType,solTypes,metric,times);
        res[ri,match(names(cur),names(res))] <- cur;
        res$unit[ri] <- attr(cur,'unit');

    }; ## end for

    res;

}; ## end testGrid()

Here's a benchmark of the example based on the OP that I demonstrated earlier:

## OP's example, supplemented with a non-matching row in df2
argSpecs <- list(
    default=list(copySpec=1:2,args=list(
        df1 <- data.frame(CustomerId=1:6,Product=c(rep('Toaster',3L),rep('Radio',3L))),
        df2 <- data.frame(CustomerId=c(2L,4L,6L,7L),State=c(rep('Alabama',2L),'Ohio','Texas')),
        'CustomerId'
    )),
    data.table.unkeyed=list(copySpec=1:2,args=list(
        as.data.table(df1),
        as.data.table(df2),
        'CustomerId'
    )),
    data.table.keyed=list(copySpec=1:2,args=list(
        setkey(as.data.table(df1),CustomerId),
        setkey(as.data.table(df2),CustomerId)
    ))
);
## prepare sqldf
initSqldf();
sqldf('create index df1_key on df1(CustomerId);'); ## upload and create an sqlite index on df1
sqldf('create index df2_key on df2(CustomerId);'); ## upload and create an sqlite index on df2

checkIdentical(argSpecs);

testAllJoinTypes(argSpecs,metric='median');
##    join    merge data.table.unkeyed data.table.keyed sqldf.unindexed sqldf.indexed      plyr    dplyr in.place         unit
## 1 inner  644.259           861.9345          923.516        9157.752      1580.390  959.2250 270.9190       NA microseconds
## 2  left  713.539           888.0205          910.045        8820.334      1529.714  968.4195 270.9185 224.3045 microseconds
## 3 right 1221.804           909.1900          923.944        8930.668      1533.135 1063.7860 269.8495 218.1035 microseconds
## 4  full 1302.203          3107.5380         3184.729              NA            NA 1593.6475 270.7055       NA microseconds

Here I benchmark on random input data, trying different scales and different patterns of key overlap between the two input tables. This benchmark is still restricted to the case of a single-column integer key. As well, to ensure that the in-place solution would work for both left and right joins of the same tables, all random test data uses 0..1:0..1 cardinality. This is implemented by sampling without replacement the key column of the first data.frame when generating the key column of the second data.frame.

makeArgSpecs.singleIntegerKey.optionalOneToOne <- function(size,overlap) {

    com <- as.integer(size*overlap);

    argSpecs <- list(
        default=list(copySpec=1:2,args=list(
            df1 <- data.frame(id=sample(size),y1=rnorm(size),y2=rnorm(size)),
            df2 <- data.frame(id=sample(c(if (com>0L) sample(df1$id,com) else integer(),seq(size+1L,len=size-com))),y3=rnorm(size),y4=rnorm(size)),
            'id'
        )),
        data.table.unkeyed=list(copySpec=1:2,args=list(
            as.data.table(df1),
            as.data.table(df2),
            'id'
        )),
        data.table.keyed=list(copySpec=1:2,args=list(
            setkey(as.data.table(df1),id),
            setkey(as.data.table(df2),id)
        ))
    );
    ## prepare sqldf
    initSqldf();
    sqldf('create index df1_key on df1(id);'); ## upload and create an sqlite index on df1
    sqldf('create index df2_key on df2(id);'); ## upload and create an sqlite index on df2

    argSpecs;

}; ## end makeArgSpecs.singleIntegerKey.optionalOneToOne()

## cross of various input sizes and key overlaps
sizes <- c(1e1L,1e3L,1e6L);
overlaps <- c(0.99,0.5,0.01);
system.time({ res <- testGrid(makeArgSpecs.singleIntegerKey.optionalOneToOne,sizes,overlaps); });
##     user   system  elapsed
## 22024.65 12308.63 34493.19

I wrote some code to create log-log plots of the above results. I generated a separate plot for each overlap percentage. It's a little bit cluttered, but I like having all the solution types and join types represented in the same plot.

I used spline interpolation to show a smooth curve for each solution/join type combination, drawn with individual pch symbols. The join type is captured by the pch symbol, using a dot for inner, left and right angle brackets for left and right, and a diamond for full. The solution type is captured by the colour, as shown in the legend.

plotRes <- function(res,titleFunc,useFloor=F) {
    solTypes <- setdiff(names(res),c('size','overlap','joinType','unit')); ## derive from res
    normMult <- c(microseconds=1e-3,milliseconds=1); ## normalize to milliseconds
    joinTypes <- getJoinTypes();
    cols <- c(merge='purple',data.table.unkeyed='blue',data.table.keyed='#00DDDD',sqldf.unindexed='brown',sqldf.indexed='orange',plyr='red',dplyr='#00BB00',in.place='magenta');
    pchs <- list(inner=20L,left='<',right='>',full=23L);
    cexs <- c(inner=0.7,left=1,right=1,full=0.7);
    NP <- 60L;
    ord <- order(decreasing=T,colMeans(res[res$size==max(res$size),solTypes],na.rm=T));
    ymajors <- data.frame(y=c(1,1e3),label=c('1ms','1s'),stringsAsFactors=F);
    for (overlap in unique(res$overlap)) {
        x1 <- res[res$overlap==overlap,];
        x1[solTypes] <- x1[solTypes]*normMult[x1$unit]; x1$unit <- NULL;
        xlim <- c(1e1,max(x1$size));
        xticks <- 10^seq(log10(xlim[1L]),log10(xlim[2L]));
        ylim <- c(1e-1,10^((if (useFloor) floor else ceiling)(log10(max(x1[solTypes],na.rm=T))))); ## use floor() to zoom in a little more, only sqldf.unindexed will break above, but xpd=NA will keep it visible
        yticks <- 10^seq(log10(ylim[1L]),log10(ylim[2L]));
        yticks.minor <- rep(yticks[-length(yticks)],each=9L)*1:9;
        plot(NA,xlim=xlim,ylim=ylim,xaxs='i',yaxs='i',axes=F,xlab='size (rows)',ylab='time (ms)',log='xy');
        abline(v=xticks,col='lightgrey');
        abline(h=yticks.minor,col='lightgrey',lty=3L);
        abline(h=yticks,col='lightgrey');
        axis(1L,xticks,parse(text=sprintf('10^%d',as.integer(log10(xticks)))));
        axis(2L,yticks,parse(text=sprintf('10^%d',as.integer(log10(yticks)))),las=1L);
        axis(4L,ymajors$y,ymajors$label,las=1L,tick=F,cex.axis=0.7,hadj=0.5);
        for (joinType in rev(joinTypes)) { ## reverse to draw full first, since it's larger and would be more obtrusive if drawn last
            x2 <- x1[x1$joinType==joinType,];
            for (solType in solTypes) {
                if (any(!is.na(x2[[solType]]))) {
                    xy <- spline(x2$size,x2[[solType]],xout=10^(seq(log10(x2$size[1L]),log10(x2$size[nrow(x2)]),len=NP)));
                    points(xy$x,xy$y,pch=pchs[[joinType]],col=cols[solType],cex=cexs[joinType],xpd=NA);
                }; ## end if
            }; ## end for
        }; ## end for
        ## custom legend
        ## due to logarithmic skew, must do all distance calcs in inches, and convert to user coords afterward
        ## the bottom-left corner of the legend will be defined in normalized figure coords, although we can convert to inches immediately
        leg.cex <- 0.7;
        leg.x.in <- grconvertX(0.275,'nfc','in');
        leg.y.in <- grconvertY(0.6,'nfc','in');
        leg.x.user <- grconvertX(leg.x.in,'in');
        leg.y.user <- grconvertY(leg.y.in,'in');
        leg.outpad.w.in <- 0.1;
        leg.outpad.h.in <- 0.1;
        leg.midpad.w.in <- 0.1;
        leg.midpad.h.in <- 0.1;
        leg.sol.w.in <- max(strwidth(solTypes,'in',leg.cex));
        leg.sol.h.in <- max(strheight(solTypes,'in',leg.cex))*1.5; ## multiplication factor for greater line height
        leg.join.w.in <- max(strheight(joinTypes,'in',leg.cex))*1.5; ## ditto
        leg.join.h.in <- max(strwidth(joinTypes,'in',leg.cex));
        leg.main.w.in <- leg.join.w.in*length(joinTypes);
        leg.main.h.in <- leg.sol.h.in*length(solTypes);
        leg.x2.user <- grconvertX(leg.x.in+leg.outpad.w.in*2+leg.main.w.in+leg.midpad.w.in+leg.sol.w.in,'in');
        leg.y2.user <- grconvertY(leg.y.in+leg.outpad.h.in*2+leg.main.h.in+leg.midpad.h.in+leg.join.h.in,'in');
        leg.cols.x.user <- grconvertX(leg.x.in+leg.outpad.w.in+leg.join.w.in*(0.5+seq(0L,length(joinTypes)-1L)),'in');
        leg.lines.y.user <- grconvertY(leg.y.in+leg.outpad.h.in+leg.main.h.in-leg.sol.h.in*(0.5+seq(0L,length(solTypes)-1L)),'in');
        leg.sol.x.user <- grconvertX(leg.x.in+leg.outpad.w.in+leg.main.w.in+leg.midpad.w.in,'in');
        leg.join.y.user <- grconvertY(leg.y.in+leg.outpad.h.in+leg.main.h.in+leg.midpad.h.in,'in');
        rect(leg.x.user,leg.y.user,leg.x2.user,leg.y2.user,col='white');
        text(leg.sol.x.user,leg.lines.y.user,solTypes[ord],cex=leg.cex,pos=4L,offset=0);
        text(leg.cols.x.user,leg.join.y.user,joinTypes,cex=leg.cex,pos=4L,offset=0,srt=90); ## srt rotation applies *after* pos/offset positioning
        for (i in seq_along(joinTypes)) {
            joinType <- joinTypes[i];
            points(rep(leg.cols.x.user[i],length(solTypes)),ifelse(colSums(!is.na(x1[x1$joinType==joinType,solTypes[ord]]))==0L,NA,leg.lines.y.user),pch=pchs[[joinType]],col=cols[solTypes[ord]]);
        }; ## end for
        title(titleFunc(overlap));
        readline(sprintf('overlap %.02f',overlap));
    }; ## end for
}; ## end plotRes()

titleFunc <- function(overlap) sprintf('R merge solutions: single-column integer key, 0..1:0..1 cardinality, %d%% overlap',as.integer(overlap*100));
plotRes(res,titleFunc,T);


Here's a second large-scale benchmark that's more heavy-duty with respect to the number and types of key columns, as well as cardinality. For this benchmark I use three key columns: one character, one integer, and one logical, with no restrictions on cardinality (that is, 0..*:0..*). (In general, it's not advisable to define key columns with double or complex values due to floating-point comparison complications, and basically no one ever uses the raw type, much less for key columns, so I haven't included those types in the keys. Also, for information's sake, I initially tried to use four key columns by including a POSIXct key column, but the POSIXct type didn't play well with the sqldf.indexed solution for some reason, possibly due to floating-point comparison anomalies, so I removed it.)

makeArgSpecs.assortedKey.optionalManyToMany <- function(size,overlap,uniquePct=75) {

    ## number of unique keys in df1
    u1Size <- as.integer(size*uniquePct/100);

    ## (roughly) divide u1Size into bases, so we can use expand.grid() to produce the required number of unique key values with repetitions within individual key columns
    ## use ceiling() to ensure we cover u1Size; will truncate afterward
    u1SizePerKeyColumn <- as.integer(ceiling(u1Size^(1/3)));

    ## generate the unique key values for df1
    keys1 <- expand.grid(stringsAsFactors=F,
        idCharacter=replicate(u1SizePerKeyColumn,paste(collapse='',sample(letters,sample(4:12,1L),T))),
        idInteger=sample(u1SizePerKeyColumn),
        idLogical=sample(c(F,T),u1SizePerKeyColumn,T)
        ##idPOSIXct=as.POSIXct('2016-01-01 00:00:00','UTC')+sample(u1SizePerKeyColumn)
    )[seq_len(u1Size),];

    ## rbind some repetitions of the unique keys; this will prepare one side of the many-to-many relationship
    ## also scramble the order afterward
    keys1 <- rbind(keys1,keys1[sample(nrow(keys1),size-u1Size,T),])[sample(size),];

    ## common and unilateral key counts
    com <- as.integer(size*overlap);
    uni <- size-com;

    ## generate some unilateral keys for df2 by synthesizing outside of the idInteger range of df1
    keys2 <- data.frame(stringsAsFactors=F,
        idCharacter=replicate(uni,paste(collapse='',sample(letters,sample(4:12,1L),T))),
        idInteger=u1SizePerKeyColumn+sample(uni),
        idLogical=sample(c(F,T),uni,T)
        ##idPOSIXct=as.POSIXct('2016-01-01 00:00:00','UTC')+u1SizePerKeyColumn+sample(uni)
    );

    ## rbind random keys from df1; this will complete the many-to-many relationship
    ## also scramble the order afterward
    keys2 <- rbind(keys2,keys1[sample(nrow(keys1),com,T),])[sample(size),];

    ##keyNames <- c('idCharacter','idInteger','idLogical','idPOSIXct');
    keyNames <- c('idCharacter','idInteger','idLogical');
    ## note: was going to use raw and complex type for two of the non-key columns, but data.table doesn't seem to fully support them
    argSpecs <- list(
        default=list(copySpec=1:2,args=list(
            df1 <- cbind(stringsAsFactors=F,keys1,y1=sample(c(F,T),size,T),y2=sample(size),y3=rnorm(size),y4=replicate(size,paste(collapse='',sample(letters,sample(4:12,1L),T)))),
            df2 <- cbind(stringsAsFactors=F,keys2,y5=sample(c(F,T),size,T),y6=sample(size),y7=rnorm(size),y8=replicate(size,paste(collapse='',sample(letters,sample(4:12,1L),T)))),
            keyNames
        )),
        data.table.unkeyed=list(copySpec=1:2,args=list(
            as.data.table(df1),
            as.data.table(df2),
            keyNames
        )),
        data.table.keyed=list(copySpec=1:2,args=list(
            setkeyv(as.data.table(df1),keyNames),
            setkeyv(as.data.table(df2),keyNames)
        ))
    );
    ## prepare sqldf
    initSqldf();
    sqldf(paste0('create index df1_key on df1(',paste(collapse=',',keyNames),');')); ## upload and create an sqlite index on df1
    sqldf(paste0('create index df2_key on df2(',paste(collapse=',',keyNames),');')); ## upload and create an sqlite index on df2

    argSpecs;

}; ## end makeArgSpecs.assortedKey.optionalManyToMany()

sizes <- c(1e1L,1e3L,1e5L); ## 1e5L instead of 1e6L to respect more heavy-duty inputs
overlaps <- c(0.99,0.5,0.01);
solTypes <- setdiff(getSolTypes(),'in.place');
system.time({ res <- testGrid(makeArgSpecs.assortedKey.optionalManyToMany,sizes,overlaps,solTypes); });
##     user   system  elapsed
## 38895.50   784.19 39745.53

Plots produced with the same plotting code given above:

titleFunc <- function(overlap) sprintf('R merge solutions: character/integer/logical key, 0..*:0..* cardinality, %d%% overlap',as.integer(overlap*100));
plotRes(res,titleFunc,F);


For an inner join on all columns, you could also use fintersect from the data.table package or intersect from the dplyr package as an alternative to merge, without specifying by columns. This gives the rows that are equal between the two data frames:

merge(df1, df2)
#   V1 V2
# 1  B  2
# 2  C  3

dplyr::intersect(df1, df2)
#   V1 V2
# 1  B  2
# 2  C  3

data.table::fintersect(setDT(df1), setDT(df2))
#    V1 V2
# 1:  B  2
# 2:  C  3

Example data:

df1 <- data.frame(V1 = LETTERS[1:4], V2 = 1:4)
df2 <- data.frame(V1 = LETTERS[2:3], V2 = 2:3)

Update joins. One more important SQL-style join is an "update join", where columns in one table are updated (or created) using another table.

Modifying the OP's example tables...

sales = data.frame(
  CustomerId = c(1, 1, 1, 3, 4, 6), 
  Year = 2000:2005,
  Product = c(rep("Toaster", 3), rep("Radio", 3))
)
cust = data.frame(
  CustomerId = c(1, 1, 4, 6), 
  Year = c(2001L, 2002L, 2002L, 2002L),
  State = state.name[1:4]
)

sales
# CustomerId Year Product
#          1 2000 Toaster
#          1 2001 Toaster
#          1 2002 Toaster
#          3 2003   Radio
#          4 2004   Radio
#          6 2005   Radio

cust
# CustomerId Year    State
#          1 2001  Alabama
#          1 2002   Alaska
#          4 2002  Arizona
#          6 2002 Arkansas

Suppose we want to add the customer's state from cust to the purchases table, sales, ignoring the year column. With base R, we can identify matching rows and then copy values over:

sales$State <- cust$State[ match(sales$CustomerId, cust$CustomerId) ]

# CustomerId Year Product    State
#          1 2000 Toaster  Alabama
#          1 2001 Toaster  Alabama
#          1 2002 Toaster  Alabama
#          3 2003   Radio     <NA>
#          4 2004   Radio  Arizona
#          6 2005   Radio Arkansas

# cleanup for the next example
sales$State <- NULL

As can be seen here, match selects the first matching row from the customer table.
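A quick check makes that first-match behaviour explicit: customer 1 appears twice in cust, but only the position of its first row (2001, Alabama) is returned.

match(1, cust$CustomerId)
# [1] 1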


Update join with multiple columns. The approach above works well when we are joining on only a single column and are satisfied with the first match. Suppose we want the year of measurement in the customer table to match the year of sale.

As @bgoldst's answer mentions, match with interaction might be an option in this case. More straightforwardly, one could use data.table:

library(data.table)
setDT(sales); setDT(cust)

sales[, State := cust[sales, on=.(CustomerId, Year), x.State]]

#    CustomerId Year Product   State
# 1:          1 2000 Toaster    <NA>
# 2:          1 2001 Toaster Alabama
# 3:          1 2002 Toaster  Alaska
# 4:          3 2003   Radio    <NA>
# 5:          4 2004   Radio    <NA>
# 6:          6 2005   Radio    <NA>

# cleanup for next example
sales[, State := NULL]

Rolling update join. Alternatively, we may want to take the last state the customer was found in:

sales[, State := cust[sales, on=.(CustomerId, Year), roll=TRUE, x.State]]

#    CustomerId Year Product    State
# 1:          1 2000 Toaster     <NA>
# 2:          1 2001 Toaster  Alabama
# 3:          1 2002 Toaster   Alaska
# 4:          3 2003   Radio     <NA>
# 5:          4 2004   Radio  Arizona
# 6:          6 2005   Radio Arkansas

The three examples above all focus on creating/adding a new column. See the related R FAQ for an example of updating/modifying an existing column.