How to join (merge) data frames (inner, outer, left, right)

Given two data frames:

df1 = data.frame(CustomerId = c(1:6), Product = c(rep("Toaster", 3), rep("Radio", 3)))
df2 = data.frame(CustomerId = c(2, 4, 6), State = c(rep("Alabama", 2), rep("Ohio", 1)))

df1
#  CustomerId Product
#           1 Toaster
#           2 Toaster
#           3 Toaster
#           4   Radio
#           5   Radio
#           6   Radio

df2
#  CustomerId   State
#           2 Alabama
#           4 Alabama
#           6    Ohio

How can I do database style, i.e. SQL style, joins? That is, how do I get:

  • An inner join of df1 and df2
    Return only the rows in which the left table has matching keys in the right table.
  • An outer join of df1 and df2
    Return all rows from both tables, joining records from the left which have matching keys in the right table.
  • A left outer join (or simply left join) of df1 and df2
    Return all rows from the left table, and any rows with matching keys from the right table.
  • A right outer join of df1 and df2
    Return all rows from the right table, and any rows with matching keys from the left table.

Extra credit:

How can I do an SQL style select statement?


There are some good examples of doing this over at the R wiki. I'll steal a couple here:

Merge method

Since your keys are named the same, the short way to do an inner join is merge():

merge(df1, df2)

A full outer join (all records from both tables) can be created with the "all" keyword:

merge(df1, df2, all=TRUE)

A left outer join of df1 and df2:

merge(df1, df2, all.x=TRUE)

A right outer join of df1 and df2:

merge(df1, df2, all.y=TRUE)

You can flip them, slap them and rub them down to get the other two outer joins you asked about :)

Subscript method

A left outer join with df1 on the left, using the subscript method, would be:

df1[,"State"]<-df2[df1[ ,"Product"], "State"]

Other combinations of outer joins can be created by modifying the left outer join subscript example; one such flip is sketched below. (Yes, I know that's the equivalent of saying "I'll leave it as an exercise for the reader...")
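For instance, a right outer join (keeping every row of df2) can be written the same way with the roles of the two tables swapped. This is only an illustrative sketch using the question's data:

# right outer join by subscripting: keep every row of df2,
# pulling Product across from df1 where CustomerId matches
df2[, "Product"] <- df1[match(df2[, "CustomerId"], df1[, "CustomerId"]), "Product"]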

By using the merge function and its optional parameters:

Inner join: merge(df1, df2) will work for these examples because R automatically joins the frames by common variable names, but you would most likely want to specify merge(df1, df2, by = "CustomerId") to make sure that you are matching on only the fields you want. You can also use the by.x and by.y parameters if the matching variables have different names in the different data frames.

Outer join: merge(x = df1, y = df2, by = "CustomerId", all = TRUE)

Left outer: merge(x = df1, y = df2, by = "CustomerId", all.x = TRUE)

Right outer: merge(x = df1, y = df2, by = "CustomerId", all.y = TRUE)

Cross join: merge(x = df1, y = df2, by = NULL)

Just as with the inner join, you would probably want to explicitly pass "CustomerId" to R as the matching variable. I think it's almost always best to explicitly state the identifiers on which you want to merge; it's safer if the input data frames change unexpectedly, and it's easier to read later on.

You can merge on multiple columns by giving by a vector, e.g., by = c("CustomerId", "OrderId").

If the names of the columns to merge on are not the same, you can specify, e.g., by.x = "CustomerId_in_df1", by.y = "CustomerId_in_df2", where CustomerId_in_df1 is the name of the column in the first data frame and CustomerId_in_df2 is the name of the column in the second data frame. (These can also be vectors if you need to merge on multiple columns, as in the sketch below.)
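For example, a hedged sketch with hypothetical column names, merging on two key columns that are named differently in each data frame:

# by.x and by.y are parallel vectors; entries are matched position by position
merge(df1, df2,
      by.x = c("CustomerId_in_df1", "OrderId_in_df1"),
      by.y = c("CustomerId_in_df2", "OrderId_in_df2"))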

I would recommend checking out Gabor Grothendieck's sqldf package, which allows you to express these operations in SQL.

library(sqldf)
## inner join
df3 <- sqldf("SELECT CustomerId, Product, State
              FROM df1
              JOIN df2 USING(CustomerID)")

## left join (substitute 'right' for right join)
df4 <- sqldf("SELECT CustomerId, Product, State
              FROM df1
              LEFT JOIN df2 USING(CustomerID)")

I find the SQL syntax to be simpler and more natural than its R equivalent (but this may just reflect my RDBMS bias).

See Gabor's sqldf GitHub for more information on joins.
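One caveat worth knowing: the SQLite backend that sqldf uses by default did not support FULL OUTER JOIN (or RIGHT JOIN) until fairly recent SQLite versions, so a full join is often emulated as a UNION of two left joins. A hedged sketch using the question's tables:

## full outer join emulated as a union of two left joins
dfFull <- sqldf("SELECT CustomerId, Product, State
                 FROM df1 LEFT JOIN df2 USING(CustomerId)
                 UNION
                 SELECT CustomerId, Product, State
                 FROM df2 LEFT JOIN df1 USING(CustomerId)")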

There is the data.table approach for an inner join, which is very time- and memory-efficient (and necessary for some larger data.frames):

library(data.table)  
dt1 <- data.table(df1, key = "CustomerId")
dt2 <- data.table(df2, key = "CustomerId")
joined.dt1.dt.2 <- dt1[dt2]

merge also works on data.tables (since it is generic and calls merge.data.table):

merge(dt1, dt2)

data.table is documented on Stack Overflow:
How to do a data.table merge operation
Translating SQL joins on foreign keys to R data.table syntax
Efficient alternatives to merge for larger data.frames in R
How to do a basic left outer join with data.table in R?

Another option is the join function from the plyr package. [Note from 2022: plyr is now retired and has been superseded by dplyr; join operations in dplyr are covered in the answers below.]

library(plyr)
join(df1, df2, type = "inner")

#   CustomerId Product   State
# 1          2 Toaster Alabama
# 2          4   Radio Alabama
# 3          6   Radio    Ohio

Options for type: inner, left, right, full.

From ?join: Unlike merge, [join] preserves the order of x no matter what join type is used.
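A small sketch of that ordering difference, assuming the df1/df2 from the question (df1 is shuffled first so the effect is visible; merge sorts by the key by default, join does not):

library(plyr)
df1_shuffled <- df1[sample(nrow(df1)), ]                     # hypothetical shuffled copy
join(df1_shuffled, df2, by = "CustomerId", type = "left")    # keeps the shuffled row order
merge(df1_shuffled, df2, by = "CustomerId", all.x = TRUE)    # returns rows sorted by CustomerId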

New in 2014:

Especially if you are also interested in data manipulation in general (including sorting, filtering, subsetting, summarizing, etc.), you should definitely take a look at dplyr, which comes with a variety of functions all designed to facilitate your work specifically with data frames and certain other database types. It even offers quite an elaborate SQL interface, and even a function to convert (most) SQL code directly into R.

The four join-related functions in the dplyr package are (to quote):

  • inner_join(x, y, by = NULL, copy = FALSE, ...): return all rows from x where there are matching values in y, and all columns from x and y
  • left_join(x, y, by = NULL, copy = FALSE, ...): return all rows from x, and all columns from x and y
  • semi_join(x, y, by = NULL, copy = FALSE, ...): return all rows from x where there are matching values in y, keeping just columns from x
  • anti_join(x, y, by = NULL, copy = FALSE, ...): return all rows from x where there are not matching values in y, keeping just columns from x

It's all described here in great detail.

Selecting columns can be done with select(df, "column"). If that's not SQL-ish enough for you, then there is the sql() function, into which you can enter SQL code as-is, and it will do the operation you specified just as if you had been writing in R all along (for more information, please refer to the dplyr/databases vignette). For example, if applied correctly, sql("SELECT * FROM hflights") will select all the columns from the "hflights" dplyr table (a "tbl").

You can also use Hadley Wickham's dplyr package to do joins.

library(dplyr)
# make sure that CustomerId cols are both the same type
# they aren't in the provided data (one is integer and one is double)
df1$CustomerId <- as.double(df1$CustomerId)

Mutating joins: add columns to df1 using matches in df2

# inner
inner_join(df1, df2)

# left outer
left_join(df1, df2)

# right outer
right_join(df1, df2)

# alternate right outer
left_join(df2, df1)

# full join
full_join(df1, df2)

Filtering joins: filter out rows in df1, don't modify columns

# keep only observations in df1 that match in df2
semi_join(df1, df2)

# drop all observations in df1 that match in df2
anti_join(df1, df2)

dplyr has implemented all of these joins since 0.4, including outer_join, but it is worth noting that for the first few releases prior to 0.4 it did not offer outer_join, and as a result there was a lot of really bad hacky workaround user code floating around for quite a while afterward (you can still find such code in SO answers, Kaggle answers, and GitHub code from that period; hence this answer still serves a useful purpose).

Join-related release highlights

v0.5 (6/2016)

  • Handling for POSIXct type, timezones, duplicates, different factor levels. Better errors and warnings.
  • New suffix argument to control what suffix duplicated variable names receive (#1296)

v0.4.0 (1/2015)

  • Implement right join and outer join (#96)
  • Mutating joins, which add new variables to one table from matching rows in another. Filtering joins, which filter observations from one table based on whether or not they match an observation in the other table.

v0.3 (10/2014)

  • Can now left_join by different variables in each table: df1 %>% left_join(df2, c("var1" = "var2"))

v0.2 (5/2014)

  • *_join() no longer reorders column names (#324)

v0.1.3 (4/2014)

Workarounds per hadley's comments in that issue:

  • right_join(x, y) is the same as left_join(y, x) in terms of the rows, just the columns will be in a different order. Easily worked around with select(new_column_order).
  • outer_join is basically union(left_join(x, y), right_join(x, y)), i.e. keep all rows in both data frames (a sketch of both equivalences follows below).
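A hedged sketch of those two equivalences, written with current dplyr and the question's df1/df2 rather than the original workaround code:

library(dplyr)

# right_join(x, y) returns the same rows as left_join(y, x); only the column order differs
right_join(df1, df2, by = "CustomerId")
left_join(df2, df1, by = "CustomerId")

# a full (outer) join is, row-wise, the union of the left and right joins
union(left_join(df1, df2, by = "CustomerId"),
      right_join(df1, df2, by = "CustomerId"))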

When joining two data frames with ~1 million rows each, one with 2 columns and the other with ~20, I was surprised to find merge(..., all.x = TRUE, all.y = TRUE) to be faster than dplyr::full_join(). This was with dplyr v0.4.

The merge took 17 seconds, the full_join took 65 seconds.

Some food for thought, though, since I generally default to dplyr for manipulation tasks.

  1. Using the merge function, we can select the variables of the left table or the right table, the same way we are all familiar with the select statement in SQL (e.g.: select a.* ... or select b.* from ...).
  2. We have to add extra code which will subset from the newly joined table.

    • SQL: select a.* from df1 a inner join df2 b on a.CustomerId=b.CustomerId

    • R: merge(df1, df2, by.x = "CustomerId", by.y = "CustomerId")[, names(df1)]

In the same way,

  • SQL: select b.* from df1 a inner join df2 b on a.CustomerId=b.CustomerId

  • R: merge(df1, df2, by.x = "CustomerId", by.y = "CustomerId")[, names(df2)]

An update on data.table methods for joining datasets. See below for examples of each type of join. There are two approaches: one is [.data.table, passing the second data.table as the first argument to subsetting; the other is to use the merge function, which dispatches to the fast data.table method.

df1 = data.frame(CustomerId = c(1:6), Product = c(rep("Toaster", 3), rep("Radio", 3)))
df2 = data.frame(CustomerId = c(2L, 4L, 7L), State = c(rep("Alabama", 2), rep("Ohio", 1))) # one value changed to show full outer join
library(data.table)
dt1 = as.data.table(df1)
dt2 = as.data.table(df2)
setkey(dt1, CustomerId)
setkey(dt2, CustomerId)
# right outer join keyed data.tables
dt1[dt2]

setkey(dt1, NULL)
setkey(dt2, NULL)
# right outer join unkeyed data.tables - use `on` argument
dt1[dt2, on = "CustomerId"]

# left outer join - swap dt1 with dt2
dt2[dt1, on = "CustomerId"]

# inner join - use `nomatch` argument
dt1[dt2, nomatch=NULL, on = "CustomerId"]

# anti join - use `!` operator
dt1[!dt2, on = "CustomerId"]

# inner join - using merge method
merge(dt1, dt2, by = "CustomerId")

# full outer join
merge(dt1, dt2, by = "CustomerId", all = TRUE)
# see ?merge.data.table arguments for other cases

The benchmark below tests base R, sqldf, dplyr and data.table.
It benchmarks unkeyed/unindexed datasets. It was performed on datasets of 50M-1 rows, with 50M-2 common values on the join column, so each scenario (inner, left, right, full) can be tested and the join is still not trivial to perform. It is the kind of join that stresses the join algorithms well. Timings are as of sqldf:0.4.11, dplyr:0.7.8, data.table:1.12.0.

# inner
Unit: seconds
   expr       min        lq      mean    median        uq       max neval
   base 111.66266 111.66266 111.66266 111.66266 111.66266 111.66266     1
  sqldf 624.88388 624.88388 624.88388 624.88388 624.88388 624.88388     1
  dplyr  51.91233  51.91233  51.91233  51.91233  51.91233  51.91233     1
     DT  10.40552  10.40552  10.40552  10.40552  10.40552  10.40552     1

# left
Unit: seconds
   expr        min         lq       mean     median         uq        max
   base 142.782030 142.782030 142.782030 142.782030 142.782030 142.782030
  sqldf 613.917109 613.917109 613.917109 613.917109 613.917109 613.917109
  dplyr  49.711912  49.711912  49.711912  49.711912  49.711912  49.711912
     DT   9.674348   9.674348   9.674348   9.674348   9.674348   9.674348

# right
Unit: seconds
   expr        min         lq       mean     median         uq        max
   base 122.366301 122.366301 122.366301 122.366301 122.366301 122.366301
  sqldf 611.119157 611.119157 611.119157 611.119157 611.119157 611.119157
  dplyr  50.384841  50.384841  50.384841  50.384841  50.384841  50.384841
     DT   9.899145   9.899145   9.899145   9.899145   9.899145   9.899145

# full
Unit: seconds
  expr       min        lq      mean    median        uq       max neval
  base 141.79464 141.79464 141.79464 141.79464 141.79464 141.79464     1
 dplyr  94.66436  94.66436  94.66436  94.66436  94.66436  94.66436     1
    DT  21.62573  21.62573  21.62573  21.62573  21.62573  21.62573     1

Note that there are other types of joins you can perform using data.table:
- update on join - if you want to look up values from another table into your main table
- aggregate on join - if you want to aggregate on the key you are joining on, so you do not have to materialise all the join results
- overlapping join - if you want to merge by ranges
- rolling join - if you want your merge to be able to match values from preceding/following rows, by rolling them forward or backward
- non-equi join - if your join condition is non-equal
A short sketch of two of these follows below.
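For concreteness, a minimal hedged sketch of two of these (an update join and a non-equi join), using made-up Spend/band data rather than the question's tables:

library(data.table)
dtA <- data.table(CustomerId = 1:6, Spend = c(10, 20, 30, 40, 50, 60))
dtB <- data.table(CustomerId = c(2L, 4L, 6L), State = c("Alabama", "Alabama", "Ohio"))

# update on join: add State to dtA in place from the matching rows of dtB
dtA[dtB, State := i.State, on = "CustomerId"]

# non-equi join: assign a spending band by matching Spend against ranges
bands <- data.table(lo = c(0, 25, 45), hi = c(25, 45, Inf), band = c("low", "mid", "high"))
dtA[bands, band := i.band, on = .(Spend >= lo, Spend < hi)]
dtA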

Code to reproduce:

library(microbenchmark)
library(sqldf)
library(dplyr)
library(data.table)
sapply(c("sqldf","dplyr","data.table"), packageVersion, simplify=FALSE)

n = 5e7
set.seed(108)
df1 = data.frame(x=sample(n,n-1L), y1=rnorm(n-1L))
df2 = data.frame(x=sample(n,n-1L), y2=rnorm(n-1L))
dt1 = as.data.table(df1)
dt2 = as.data.table(df2)

mb = list()
# inner join
microbenchmark(times = 1L,
               base = merge(df1, df2, by = "x"),
               sqldf = sqldf("SELECT * FROM df1 INNER JOIN df2 ON df1.x = df2.x"),
               dplyr = inner_join(df1, df2, by = "x"),
               DT = dt1[dt2, nomatch=NULL, on = "x"]) -> mb$inner

# left outer join
microbenchmark(times = 1L,
               base = merge(df1, df2, by = "x", all.x = TRUE),
               sqldf = sqldf("SELECT * FROM df1 LEFT OUTER JOIN df2 ON df1.x = df2.x"),
               dplyr = left_join(df1, df2, by = c("x"="x")),
               DT = dt2[dt1, on = "x"]) -> mb$left

# right outer join
microbenchmark(times = 1L,
               base = merge(df1, df2, by = "x", all.y = TRUE),
               sqldf = sqldf("SELECT * FROM df2 LEFT OUTER JOIN df1 ON df2.x = df1.x"),
               dplyr = right_join(df1, df2, by = "x"),
               DT = dt1[dt2, on = "x"]) -> mb$right

# full outer join
microbenchmark(times = 1L,
               base = merge(df1, df2, by = "x", all = TRUE),
               dplyr = full_join(df1, df2, by = "x"),
               DT = merge(dt1, dt2, by = "x", all = TRUE)) -> mb$full

lapply(mb, print) -> nul

For the case of a left join with a 0..*:0..1 cardinality, or a right join with a 0..1:0..* cardinality, it is possible to assign in-place the unilateral columns from the joiner (the 0..1 table) directly onto the joinee (the 0..* table), thereby avoiding the creation of an entirely new table of data. This requires matching the key columns from the joinee into the joiner and indexing+ordering the joiner's rows accordingly for the assignment.

If the key is a single column, then we can use a single call to match() to do the matching. This is the case I'll cover in this answer.

Here's an example based on the OP, except I've added an extra row to df2 with an id of 7 to test the case of a non-matching key in the joiner. This is effectively df1 left join df2:

df1 <- data.frame(CustomerId=1:6,Product=c(rep('Toaster',3L),rep('Radio',3L)));
df2 <- data.frame(CustomerId=c(2L,4L,6L,7L),State=c(rep('Alabama',2L),'Ohio','Texas'));
df1[names(df2)[-1L]] <- df2[match(df1[,1L],df2[,1L]),-1L];
df1;
##   CustomerId Product   State
## 1          1 Toaster    <NA>
## 2          2 Toaster Alabama
## 3          3 Toaster    <NA>
## 4          4   Radio Alabama
## 5          5   Radio    <NA>
## 6          6   Radio    Ohio

In the above I hard-coded an assumption that the key column is the first column of both input tables. I would argue that, in general, this is not an unreasonable assumption, since, if you have a data.frame with a key column, it would be strange if it had not been set up as the first column of the data.frame from the outset, and you can always reorder the columns to make it so. An advantageous consequence of this assumption is that the name of the key column does not have to be hard-coded, although I suppose it's just replacing one assumption with another. Concision is another advantage of integer indexing, as well as speed. In the benchmarks below I change the implementation to use string-name indexing to match the competing implementations.

I think this is a particularly appropriate solution if you have several tables that you want to left join against a single large table. Repeatedly rebuilding the entire table for each merge would be unnecessary and inefficient.

On the other hand, if you need the joinee to remain unaltered through this operation for whatever reason, then this solution cannot be used, since it modifies the joinee directly. Although in that case you could simply make a copy and perform the in-place assignment(s) on the copy.
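A minimal sketch of that copy-first variant (the same assignment as above, just applied to a copy so the original df1 is preserved):

df1.copy <- df1; ## data.frames are copied on modification, so df1 itself stays intact
df1.copy[names(df2)[-1L]] <- df2[match(df1.copy[,1L],df2[,1L]),-1L];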


As a side note, I briefly looked into possible matching solutions for multicolumn keys. Unfortunately, the only matching solutions I found were:

  • inefficient concatenations, e.g. match(interaction(df1$a,df1$b),interaction(df2$a,df2$b)), or the same idea with paste().
  • inefficient cartesian conjunctions, e.g. outer(df1$a,df2$a,`==`) & outer(df1$b,df2$b,`==`).
  • base R merge() and equivalent package-based merge functions, which always allocate a new table to return the merged result, and thus are not suitable for an in-place-assignment-based solution.

For example, see Matching multiple columns on different data frames and getting other column as result, match two columns with two other columns, Matching on multiple columns, and the dupe of this question where I originally came up with the in-place solution, Combine two data frames with different number of rows in R.
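For what it's worth, here is a tiny hedged sketch of the match(interaction(...)) idea above, using hypothetical tables dfa and dfb with a two-column key (a, b):

dfa <- data.frame(a=c(1,1,2),b=c('x','y','x'),v=1:3);
dfb <- data.frame(a=c(1,2),b=c('y','x'),w=c(10,20));
## left join dfa <- dfb on (a, b), in place, via a combined key
dfa$w <- dfb$w[match(interaction(dfa$a,dfa$b),interaction(dfb$a,dfb$b))];
dfa;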


Benchmarking

I decided to do my own benchmarking to see how the in-place assignment approach compares to the other solutions that have been offered in this question.

Testing code:

library(microbenchmark);
library(data.table);
library(sqldf);
library(plyr);
library(dplyr);
solSpecs <- list(
    merge=list(testFuncs=list(
        inner=function(df1,df2,key) merge(df1,df2,key),
        left =function(df1,df2,key) merge(df1,df2,key,all.x=T),
        right=function(df1,df2,key) merge(df1,df2,key,all.y=T),
        full =function(df1,df2,key) merge(df1,df2,key,all=T)
    )),
    data.table.unkeyed=list(argSpec='data.table.unkeyed',testFuncs=list(
        inner=function(dt1,dt2,key) dt1[dt2,on=key,nomatch=0L,allow.cartesian=T],
        left =function(dt1,dt2,key) dt2[dt1,on=key,allow.cartesian=T],
        right=function(dt1,dt2,key) dt1[dt2,on=key,allow.cartesian=T],
        full =function(dt1,dt2,key) merge(dt1,dt2,key,all=T,allow.cartesian=T) ## calls merge.data.table()
    )),
    data.table.keyed=list(argSpec='data.table.keyed',testFuncs=list(
        inner=function(dt1,dt2) dt1[dt2,nomatch=0L,allow.cartesian=T],
        left =function(dt1,dt2) dt2[dt1,allow.cartesian=T],
        right=function(dt1,dt2) dt1[dt2,allow.cartesian=T],
        full =function(dt1,dt2) merge(dt1,dt2,all=T,allow.cartesian=T) ## calls merge.data.table()
    )),
    sqldf.unindexed=list(testFuncs=list( ## note: must pass connection=NULL to avoid running against the live DB connection, which would result in collisions with the residual tables from the last query upload
        inner=function(df1,df2,key) sqldf(paste0('select * from df1 inner join df2 using(',paste(collapse=',',key),')'),connection=NULL),
        left =function(df1,df2,key) sqldf(paste0('select * from df1 left join df2 using(',paste(collapse=',',key),')'),connection=NULL),
        right=function(df1,df2,key) sqldf(paste0('select * from df2 left join df1 using(',paste(collapse=',',key),')'),connection=NULL) ## can't do right join proper, not yet supported; inverted left join is equivalent
        ##full =function(df1,df2,key) sqldf(paste0('select * from df1 full join df2 using(',paste(collapse=',',key),')'),connection=NULL) ## can't do full join proper, not yet supported; possible to hack it with a union of left joins, but too unreasonable to include in testing
    )),
    sqldf.indexed=list(testFuncs=list( ## important: requires an active DB connection with preindexed main.df1 and main.df2 ready to go; arguments are actually ignored
        inner=function(df1,df2,key) sqldf(paste0('select * from main.df1 inner join main.df2 using(',paste(collapse=',',key),')')),
        left =function(df1,df2,key) sqldf(paste0('select * from main.df1 left join main.df2 using(',paste(collapse=',',key),')')),
        right=function(df1,df2,key) sqldf(paste0('select * from main.df2 left join main.df1 using(',paste(collapse=',',key),')')) ## can't do right join proper, not yet supported; inverted left join is equivalent
        ##full =function(df1,df2,key) sqldf(paste0('select * from main.df1 full join main.df2 using(',paste(collapse=',',key),')')) ## can't do full join proper, not yet supported; possible to hack it with a union of left joins, but too unreasonable to include in testing
    )),
    plyr=list(testFuncs=list(
        inner=function(df1,df2,key) join(df1,df2,key,'inner'),
        left =function(df1,df2,key) join(df1,df2,key,'left'),
        right=function(df1,df2,key) join(df1,df2,key,'right'),
        full =function(df1,df2,key) join(df1,df2,key,'full')
    )),
    dplyr=list(testFuncs=list(
        inner=function(df1,df2,key) inner_join(df1,df2,key),
        left =function(df1,df2,key) left_join(df1,df2,key),
        right=function(df1,df2,key) right_join(df1,df2,key),
        full =function(df1,df2,key) full_join(df1,df2,key)
    )),
    in.place=list(testFuncs=list(
        left =function(df1,df2,key) { cns <- setdiff(names(df2),key); df1[cns] <- df2[match(df1[,key],df2[,key]),cns]; df1; },
        right=function(df1,df2,key) { cns <- setdiff(names(df1),key); df2[cns] <- df1[match(df2[,key],df1[,key]),cns]; df2; }
    ))
);
getSolTypes <- function() names(solSpecs);
getJoinTypes <- function() unique(unlist(lapply(solSpecs,function(x) names(x$testFuncs))));
getArgSpec <- function(argSpecs,key=NULL) if (is.null(key)) argSpecs$default else argSpecs[[key]];
initSqldf <- function() {
    sqldf(); ## creates sqlite connection on first run, cleans up and closes existing connection otherwise
    if (exists('sqldfInitFlag',envir=globalenv(),inherits=F) && sqldfInitFlag) { ## false only on first run
        sqldf(); ## creates a new connection
    } else {
        assign('sqldfInitFlag',T,envir=globalenv()); ## set to true for the one and only time
    }; ## end if
    invisible();
}; ## end initSqldf()
setUpBenchmarkCall <- function(argSpecs,joinType,solTypes=getSolTypes(),env=parent.frame()) {
    ## builds and returns a list of expressions suitable for passing to the list argument of microbenchmark(), and assigns variables to resolve symbol references in those expressions
    callExpressions <- list();
    nms <- character();
    for (solType in solTypes) {
        testFunc <- solSpecs[[solType]]$testFuncs[[joinType]];
        if (is.null(testFunc)) next; ## this join type is not defined for this solution type
        testFuncName <- paste0('tf.',solType);
        assign(testFuncName,testFunc,envir=env);
        argSpecKey <- solSpecs[[solType]]$argSpec;
        argSpec <- getArgSpec(argSpecs,argSpecKey);
        argList <- setNames(nm=names(argSpec$args),vector('list',length(argSpec$args)));
        for (i in seq_along(argSpec$args)) {
            argName <- paste0('tfa.',argSpecKey,i);
            assign(argName,argSpec$args[[i]],envir=env);
            argList[[i]] <- if (i%in%argSpec$copySpec) call('copy',as.symbol(argName)) else as.symbol(argName);
        }; ## end for
        callExpressions[[length(callExpressions)+1L]] <- do.call(call,c(list(testFuncName),argList),quote=T);
        nms[length(nms)+1L] <- solType;
    }; ## end for
    names(callExpressions) <- nms;
    callExpressions;
}; ## end setUpBenchmarkCall()
harmonize <- function(res) {
    res <- as.data.frame(res); ## coerce to data.frame
    for (ci in which(sapply(res,is.factor))) res[[ci]] <- as.character(res[[ci]]); ## coerce factor columns to character
    for (ci in which(sapply(res,is.logical))) res[[ci]] <- as.integer(res[[ci]]); ## coerce logical columns to integer (works around sqldf quirk of munging logicals to integers)
    ##for (ci in which(sapply(res,inherits,'POSIXct'))) res[[ci]] <- as.double(res[[ci]]); ## coerce POSIXct columns to double (works around sqldf quirk of losing POSIXct class) ----- POSIXct doesn't work at all in sqldf.indexed
    res <- res[order(names(res))]; ## order columns
    res <- res[do.call(order,res),]; ## order rows
    res;
}; ## end harmonize()
checkIdentical <- function(argSpecs,solTypes=getSolTypes()) {
    for (joinType in getJoinTypes()) {
        callExpressions <- setUpBenchmarkCall(argSpecs,joinType,solTypes);
        if (length(callExpressions)<2L) next;
        ex <- harmonize(eval(callExpressions[[1L]]));
        for (i in seq(2L,len=length(callExpressions)-1L)) {
            y <- harmonize(eval(callExpressions[[i]]));
            if (!isTRUE(all.equal(ex,y,check.attributes=F))) {
                ex <<- ex;
                y <<- y;
                solType <- names(callExpressions)[i];
                stop(paste0('non-identical: ',solType,' ',joinType,'.'));
            }; ## end if
        }; ## end for
    }; ## end for
    invisible();
}; ## end checkIdentical()
testJoinType <- function(argSpecs,joinType,solTypes=getSolTypes(),metric=NULL,times=100L) {
    callExpressions <- setUpBenchmarkCall(argSpecs,joinType,solTypes);
    bm <- microbenchmark(list=callExpressions,times=times);
    if (is.null(metric)) return(bm);
    bm <- summary(bm);
    res <- setNames(nm=names(callExpressions),bm[[metric]]);
    attr(res,'unit') <- attr(bm,'unit');
    res;
}; ## end testJoinType()
testAllJoinTypes <- function(argSpecs,solTypes=getSolTypes(),metric=NULL,times=100L) {
    joinTypes <- getJoinTypes();
    resList <- setNames(nm=joinTypes,lapply(joinTypes,function(joinType) testJoinType(argSpecs,joinType,solTypes,metric,times)));
    if (is.null(metric)) return(resList);
    units <- unname(unlist(lapply(resList,attr,'unit')));
    res <- do.call(data.frame,c(list(join=joinTypes),setNames(nm=solTypes,rep(list(rep(NA_real_,length(joinTypes))),length(solTypes))),list(unit=units,stringsAsFactors=F)));
    for (i in seq_along(resList)) res[i,match(names(resList[[i]]),names(res))] <- resList[[i]];
    res;
}; ## end testAllJoinTypes()
testGrid <- function(makeArgSpecsFunc,sizes,overlaps,solTypes=getSolTypes(),joinTypes=getJoinTypes(),metric='median',times=100L) {
res <- expand.grid(size=sizes,overlap=overlaps,joinType=joinTypes,stringsAsFactors=F);
res[solTypes] <- NA_real_;
res$unit <- NA_character_;
for (ri in seq_len(nrow(res))) {
size <- res$size[ri];
overlap <- res$overlap[ri];
joinType <- res$joinType[ri];
argSpecs <- makeArgSpecsFunc(size,overlap);
checkIdentical(argSpecs,solTypes);
cur <- testJoinType(argSpecs,joinType,solTypes,metric,times);
res[ri,match(names(cur),names(res))] <- cur;
res$unit[ri] <- attr(cur,'unit');
}; ## end for
res;
}; ## end testGrid()

Here's a benchmark of the example based on the OP that I demonstrated earlier:

## OP's example, supplemented with a non-matching row in df2
argSpecs <- list(
    default=list(copySpec=1:2,args=list(
        df1 <- data.frame(CustomerId=1:6,Product=c(rep('Toaster',3L),rep('Radio',3L))),
        df2 <- data.frame(CustomerId=c(2L,4L,6L,7L),State=c(rep('Alabama',2L),'Ohio','Texas')),
        'CustomerId'
    )),
    data.table.unkeyed=list(copySpec=1:2,args=list(
        as.data.table(df1),
        as.data.table(df2),
        'CustomerId'
    )),
    data.table.keyed=list(copySpec=1:2,args=list(
        setkey(as.data.table(df1),CustomerId),
        setkey(as.data.table(df2),CustomerId)
    ))
);
## prepare sqldf
initSqldf();
sqldf('create index df1_key on df1(CustomerId);'); ## upload and create an sqlite index on df1
sqldf('create index df2_key on df2(CustomerId);'); ## upload and create an sqlite index on df2
checkIdentical(argSpecs);
testAllJoinTypes(argSpecs,metric='median');
##    join    merge data.table.unkeyed data.table.keyed sqldf.unindexed sqldf.indexed      plyr    dplyr in.place         unit
## 1 inner  644.259           861.9345          923.516        9157.752      1580.390  959.2250 270.9190       NA microseconds
## 2  left  713.539           888.0205          910.045        8820.334      1529.714  968.4195 270.9185 224.3045 microseconds
## 3 right 1221.804           909.1900          923.944        8930.668      1533.135 1063.7860 269.8495 218.1035 microseconds
## 4  full 1302.203          3107.5380         3184.729              NA            NA 1593.6475 270.7055       NA microseconds

Here I benchmark on random input data, trying different scales and different patterns of key overlap between the two input tables. This benchmark is still restricted to the case of a single-column integer key. Also, to ensure that the in-place solution would work for both left and right joins of the same tables, all random test data uses 0..1:0..1 cardinality. This is implemented by sampling without replacement the key column of the first data.frame when generating the key column of the second data.frame.

makeArgSpecs.singleIntegerKey.optionalOneToOne <- function(size,overlap) {
com <- as.integer(size*overlap);
argSpecs <- list(
    default=list(copySpec=1:2,args=list(
        df1 <- data.frame(id=sample(size),y1=rnorm(size),y2=rnorm(size)),
        df2 <- data.frame(id=sample(c(if (com>0L) sample(df1$id,com) else integer(),seq(size+1L,len=size-com))),y3=rnorm(size),y4=rnorm(size)),
        'id'
    )),
    data.table.unkeyed=list(copySpec=1:2,args=list(
        as.data.table(df1),
        as.data.table(df2),
        'id'
    )),
    data.table.keyed=list(copySpec=1:2,args=list(
        setkey(as.data.table(df1),id),
        setkey(as.data.table(df2),id)
    ))
);
## prepare sqldf
initSqldf();
sqldf('create index df1_key on df1(id);'); ## upload and create an sqlite index on df1
sqldf('create index df2_key on df2(id);'); ## upload and create an sqlite index on df2
argSpecs;
}; ## end makeArgSpecs.singleIntegerKey.optionalOneToOne()
## cross of various input sizes and key overlaps
sizes <- c(1e1L,1e3L,1e6L);
overlaps <- c(0.99,0.5,0.01);
system.time({ res <- testGrid(makeArgSpecs.singleIntegerKey.optionalOneToOne,sizes,overlaps); });
##     user   system  elapsed
## 22024.65 12308.63 34493.19

I wrote some code to create log-log plots of the above results. I generated a separate plot for each overlap percentage. It's a little bit cluttered, but I like having all the solution types and join types represented in the same plot.

I used spline interpolation to show a smooth curve for each solution/join type combination, drawn with individual pch symbols. The join type is captured by the pch symbol, using a dot for inner, left and right angle brackets for left and right, and a diamond for full. The solution type is captured by the color as shown in the legend.

plotRes <- function(res,titleFunc,useFloor=F) {solTypes <- setdiff(names(res),c('size','overlap','joinType','unit')); ## derive from resnormMult <- c(microseconds=1e-3,milliseconds=1); ## normalize to millisecondsjoinTypes <- getJoinTypes();cols <- c(merge='purple',data.table.unkeyed='blue',data.table.keyed='#00DDDD',sqldf.unindexed='brown',sqldf.indexed='orange',plyr='red',dplyr='#00BB00',in.place='magenta');pchs <- list(inner=20L,left='<',right='>',full=23L);cexs <- c(inner=0.7,left=1,right=1,full=0.7);NP <- 60L;ord <- order(decreasing=T,colMeans(res[res$size==max(res$size),solTypes],na.rm=T));ymajors <- data.frame(y=c(1,1e3),label=c('1ms','1s'),stringsAsFactors=F);for (overlap in unique(res$overlap)) {x1 <- res[res$overlap==overlap,];x1[solTypes] <- x1[solTypes]*normMult[x1$unit]; x1$unit <- NULL;xlim <- c(1e1,max(x1$size));xticks <- 10^seq(log10(xlim[1L]),log10(xlim[2L]));ylim <- c(1e-1,10^((if (useFloor) floor else ceiling)(log10(max(x1[solTypes],na.rm=T))))); ## use floor() to zoom in a little more, only sqldf.unindexed will break above, but xpd=NA will keep it visibleyticks <- 10^seq(log10(ylim[1L]),log10(ylim[2L]));yticks.minor <- rep(yticks[-length(yticks)],each=9L)*1:9;plot(NA,xlim=xlim,ylim=ylim,xaxs='i',yaxs='i',axes=F,xlab='size (rows)',ylab='time (ms)',log='xy');abline(v=xticks,col='lightgrey');abline(h=yticks.minor,col='lightgrey',lty=3L);abline(h=yticks,col='lightgrey');axis(1L,xticks,parse(text=sprintf('10^%d',as.integer(log10(xticks)))));axis(2L,yticks,parse(text=sprintf('10^%d',as.integer(log10(yticks)))),las=1L);axis(4L,ymajors$y,ymajors$label,las=1L,tick=F,cex.axis=0.7,hadj=0.5);for (joinType in rev(joinTypes)) { ## reverse to draw full first, since it's larger and would be more obtrusive if drawn lastx2 <- x1[x1$joinType==joinType,];for (solType in solTypes) {if (any(!is.na(x2[[solType]]))) {xy <- spline(x2$size,x2[[solType]],xout=10^(seq(log10(x2$size[1L]),log10(x2$size[nrow(x2)]),len=NP)));points(xy$x,xy$y,pch=pchs[[joinType]],col=cols[solType],cex=cexs[joinType],xpd=NA);}; ## end if}; ## end for}; ## end for## custom legend## due to logarithmic skew, must do all distance calcs in inches, and convert to user coords afterward## the bottom-left corner of the legend will be defined in normalized figure coords, although we can convert to inches immediatelyleg.cex <- 0.7;leg.x.in <- grconvertX(0.275,'nfc','in');leg.y.in <- grconvertY(0.6,'nfc','in');leg.x.user <- grconvertX(leg.x.in,'in');leg.y.user <- grconvertY(leg.y.in,'in');leg.outpad.w.in <- 0.1;leg.outpad.h.in <- 0.1;leg.midpad.w.in <- 0.1;leg.midpad.h.in <- 0.1;leg.sol.w.in <- max(strwidth(solTypes,'in',leg.cex));leg.sol.h.in <- max(strheight(solTypes,'in',leg.cex))*1.5; ## multiplication factor for greater line heightleg.join.w.in <- max(strheight(joinTypes,'in',leg.cex))*1.5; ## dittoleg.join.h.in <- max(strwidth(joinTypes,'in',leg.cex));leg.main.w.in <- leg.join.w.in*length(joinTypes);leg.main.h.in <- leg.sol.h.in*length(solTypes);leg.x2.user <- grconvertX(leg.x.in+leg.outpad.w.in*2+leg.main.w.in+leg.midpad.w.in+leg.sol.w.in,'in');leg.y2.user <- grconvertY(leg.y.in+leg.outpad.h.in*2+leg.main.h.in+leg.midpad.h.in+leg.join.h.in,'in');leg.cols.x.user <- grconvertX(leg.x.in+leg.outpad.w.in+leg.join.w.in*(0.5+seq(0L,length(joinTypes)-1L)),'in');leg.lines.y.user <- grconvertY(leg.y.in+leg.outpad.h.in+leg.main.h.in-leg.sol.h.in*(0.5+seq(0L,length(solTypes)-1L)),'in');leg.sol.x.user <- grconvertX(leg.x.in+leg.outpad.w.in+leg.main.w.in+leg.midpad.w.in,'in');leg.join.y.user <- 
grconvertY(leg.y.in+leg.outpad.h.in+leg.main.h.in+leg.midpad.h.in,'in');rect(leg.x.user,leg.y.user,leg.x2.user,leg.y2.user,col='white');text(leg.sol.x.user,leg.lines.y.user,solTypes[ord],cex=leg.cex,pos=4L,offset=0);text(leg.cols.x.user,leg.join.y.user,joinTypes,cex=leg.cex,pos=4L,offset=0,srt=90); ## srt rotation applies *after* pos/offset positioningfor (i in seq_along(joinTypes)) {joinType <- joinTypes[i];points(rep(leg.cols.x.user[i],length(solTypes)),ifelse(colSums(!is.na(x1[x1$joinType==joinType,solTypes[ord]]))==0L,NA,leg.lines.y.user),pch=pchs[[joinType]],col=cols[solTypes[ord]]);}; ## end fortitle(titleFunc(overlap));readline(sprintf('overlap %.02f',overlap));}; ## end for}; ## end plotRes()
titleFunc <- function(overlap) sprintf('R merge solutions: single-column integer key, 0..1:0..1 cardinality, %d%% overlap',as.integer(overlap*100));
plotRes(res,titleFunc,T);

[Plot: merge benchmark, single-column integer key, 0..1:0..1 cardinality, 99% overlap]

[Plot: merge benchmark, single-column integer key, 0..1:0..1 cardinality, 50% overlap]

[Plot: merge benchmark, single-column integer key, 0..1:0..1 cardinality, 1% overlap]


Here's a second large-scale benchmark that's more heavy-duty with respect to the number and types of key columns, as well as cardinality. For this benchmark I use three key columns: one character, one integer, and one logical, with no restrictions on cardinality (that is, 0..*:0..*). (In general it's not advisable to define key columns with double or complex values due to floating-point comparison complications, and basically no one ever uses the raw type, least of all for key columns, so I haven't included those types among the keys. Also, for information's sake, I initially tried to use four key columns by including a POSIXct key column, but the POSIXct type didn't play well with the sqldf.indexed solution for some reason, possibly due to floating-point comparison anomalies, so I removed it.)

makeArgSpecs.assortedKey.optionalManyToMany <- function(size,overlap,uniquePct=75) {
## number of unique keys in df1
u1Size <- as.integer(size*uniquePct/100);
## (roughly) divide u1Size into bases, so we can use expand.grid() to produce the required number of unique key values with repetitions within individual key columns
## use ceiling() to ensure we cover u1Size; will truncate afterward
u1SizePerKeyColumn <- as.integer(ceiling(u1Size^(1/3)));
## generate the unique key values for df1
keys1 <- expand.grid(stringsAsFactors=F,
    idCharacter=replicate(u1SizePerKeyColumn,paste(collapse='',sample(letters,sample(4:12,1L),T))),
    idInteger=sample(u1SizePerKeyColumn),
    idLogical=sample(c(F,T),u1SizePerKeyColumn,T)
    ##idPOSIXct=as.POSIXct('2016-01-01 00:00:00','UTC')+sample(u1SizePerKeyColumn)
)[seq_len(u1Size),];
## rbind some repetitions of the unique keys; this will prepare one side of the many-to-many relationship
## also scramble the order afterward
keys1 <- rbind(keys1,keys1[sample(nrow(keys1),size-u1Size,T),])[sample(size),];
## common and unilateral key counts
com <- as.integer(size*overlap);
uni <- size-com;
## generate some unilateral keys for df2 by synthesizing outside of the idInteger range of df1
keys2 <- data.frame(stringsAsFactors=F,
    idCharacter=replicate(uni,paste(collapse='',sample(letters,sample(4:12,1L),T))),
    idInteger=u1SizePerKeyColumn+sample(uni),
    idLogical=sample(c(F,T),uni,T)
    ##idPOSIXct=as.POSIXct('2016-01-01 00:00:00','UTC')+u1SizePerKeyColumn+sample(uni)
);
## rbind random keys from df1; this will complete the many-to-many relationship
## also scramble the order afterward
keys2 <- rbind(keys2,keys1[sample(nrow(keys1),com,T),])[sample(size),];
##keyNames <- c('idCharacter','idInteger','idLogical','idPOSIXct');
keyNames <- c('idCharacter','idInteger','idLogical');
## note: was going to use raw and complex type for two of the non-key columns, but data.table doesn't seem to fully support them
argSpecs <- list(
    default=list(copySpec=1:2,args=list(
        df1 <- cbind(stringsAsFactors=F,keys1,y1=sample(c(F,T),size,T),y2=sample(size),y3=rnorm(size),y4=replicate(size,paste(collapse='',sample(letters,sample(4:12,1L),T)))),
        df2 <- cbind(stringsAsFactors=F,keys2,y5=sample(c(F,T),size,T),y6=sample(size),y7=rnorm(size),y8=replicate(size,paste(collapse='',sample(letters,sample(4:12,1L),T)))),
        keyNames
    )),
    data.table.unkeyed=list(copySpec=1:2,args=list(
        as.data.table(df1),
        as.data.table(df2),
        keyNames
    )),
    data.table.keyed=list(copySpec=1:2,args=list(
        setkeyv(as.data.table(df1),keyNames),
        setkeyv(as.data.table(df2),keyNames)
    ))
);
## prepare sqldf
initSqldf();
sqldf(paste0('create index df1_key on df1(',paste(collapse=',',keyNames),');')); ## upload and create an sqlite index on df1
sqldf(paste0('create index df2_key on df2(',paste(collapse=',',keyNames),');')); ## upload and create an sqlite index on df2
argSpecs;
}; ## end makeArgSpecs.assortedKey.optionalManyToMany()
sizes <- c(1e1L,1e3L,1e5L); ## 1e5L instead of 1e6L to respect more heavy-duty inputs
overlaps <- c(0.99,0.5,0.01);
solTypes <- setdiff(getSolTypes(),'in.place');
system.time({ res <- testGrid(makeArgSpecs.assortedKey.optionalManyToMany,sizes,overlaps,solTypes); });
##     user   system  elapsed
## 38895.50   784.19 39745.53

Plots produced using the same plotting code given above:

titleFunc <- function(overlap) sprintf('R merge solutions: character/integer/logical key, 0..*:0..* cardinality, %d%% overlap',as.integer(overlap*100));
plotRes(res,titleFunc,F);

[Plot: merge benchmark, assorted key, 0..*:0..* cardinality, 99% overlap]

[Plot: merge benchmark, assorted key, 0..*:0..* cardinality, 50% overlap]

[Plot: merge benchmark, assorted key, 0..*:0..* cardinality, 1% overlap]

For an inner join on all columns, you could also use fintersect from the data.table package or intersect from the dplyr package as an alternative to merge, without specifying the by columns. This gives the rows that are equal between the two data frames:

merge(df1, df2)
#   V1 V2
# 1  B  2
# 2  C  3

dplyr::intersect(df1, df2)
#   V1 V2
# 1  B  2
# 2  C  3

data.table::fintersect(setDT(df1), setDT(df2))
#    V1 V2
# 1:  B  2
# 2:  C  3

Example data:

df1 <- data.frame(V1 = LETTERS[1:4], V2 = 1:4)
df2 <- data.frame(V1 = LETTERS[2:3], V2 = 2:3)

Update join. One other important SQL-style join is an "update join", where columns in one table are updated (or created) using another table.

Modifying the OP's example tables...

sales = data.frame(
  CustomerId = c(1, 1, 1, 3, 4, 6),
  Year = 2000:2005,
  Product = c(rep("Toaster", 3), rep("Radio", 3)))
cust = data.frame(
  CustomerId = c(1, 1, 4, 6),
  Year = c(2001L, 2002L, 2002L, 2002L),
  State = state.name[1:4])

sales
# CustomerId Year Product
#          1 2000 Toaster
#          1 2001 Toaster
#          1 2002 Toaster
#          3 2003   Radio
#          4 2004   Radio
#          6 2005   Radio

cust
# CustomerId Year    State
#          1 2001  Alabama
#          1 2002   Alaska
#          4 2002  Arizona
#          6 2002 Arkansas

Suppose we want to add the customer's state from cust to the purchases table sales, ignoring the year column. With base R, we can identify matching rows and then copy the values over:

sales$State <- cust$State[ match(sales$CustomerId, cust$CustomerId) ]

# CustomerId Year Product    State
#          1 2000 Toaster  Alabama
#          1 2001 Toaster  Alabama
#          1 2002 Toaster  Alabama
#          3 2003   Radio     <NA>
#          4 2004   Radio  Arizona
#          6 2005   Radio Arkansas

# cleanup for the next example
sales$State <- NULL

As can be seen here, match selects the first matching row from the customer table.


Update join with multiple columns. The approach above works well when we are joining on a single column and are satisfied with the first match. Suppose we want the year of measurement in the customer table to match the year of sale.

As @bgoldst's answer mentions, match with interaction might be an option for this case. More straightforwardly, one could use data.table:

library(data.table)
setDT(sales); setDT(cust)

sales[, State := cust[sales, on=.(CustomerId, Year), x.State]]

#    CustomerId Year Product   State
# 1:          1 2000 Toaster    <NA>
# 2:          1 2001 Toaster Alabama
# 3:          1 2002 Toaster  Alaska
# 4:          3 2003   Radio    <NA>
# 5:          4 2004   Radio    <NA>
# 6:          6 2005   Radio    <NA>

# cleanup for next example
sales[, State := NULL]

Rolling update join. Alternatively, we may want to take the last state the customer was found in:

sales[, State := cust[sales, on=.(CustomerId, Year), roll=TRUE, x.State]]

#    CustomerId Year Product    State
# 1:          1 2000 Toaster     <NA>
# 2:          1 2001 Toaster  Alabama
# 3:          1 2002 Toaster   Alaska
# 4:          3 2003   Radio     <NA>
# 5:          4 2004   Radio  Arizona
# 6:          6 2005   Radio Arkansas

The three examples above all focus on creating/adding a new column. See the related R FAQ for an example of updating/modifying an existing column.