Whenever I want to do something "map"py in R, I usually try to use a function in the apply family.

However, I've never quite figured out the differences between them: how {sapply, lapply, etc.} apply the function to the input/grouped input, what the output will look like, or even what the input can be. So I often just go through them all until I get what I want.

Can someone explain how to use which one when?

My current (probably incorrect/incomplete) understanding is...

sapply(vec, f): input is a vector. output is a vector/matrix, where element i is f(vec[i]), giving you a matrix if f has a multi-element output

lapply(vec, f): same as sapply, but output is a list?

apply(matrix, 1/2, f): input is a matrix. output is a vector, where element i is f(row/col i of the matrix)

tapply(vector, grouping, f): output is a matrix/array, where an element in the matrix/array is the value of f at a grouping g of the vector, and g gets pushed to the row/col names

by(dataframe, grouping, f): let g be a grouping. apply f to each column of the group/dataframe. pretty print the grouping and the value of f at each column

aggregate(matrix, grouping, f): similar to by, but instead of pretty printing the output, aggregate sticks everything into a dataframe
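To make those differences concrete, here is a minimal illustrative sketch on toy data (my own example, not from the original post), showing what each call returns:

# Toy data for illustration
vec <- 1:4
m   <- matrix(1:6, nrow = 2)
df  <- data.frame(x = 1:6, y = 7:12, g = rep(c("a", "b"), each = 3))

sapply(vec, function(i) i^2)                      # simplified to a numeric vector
lapply(vec, function(i) i^2)                      # always a list
apply(m, 1, sum)                                  # one value per row -> vector
apply(m, 2, range)                                # multi-element results -> matrix
tapply(df$x, df$g, mean)                          # named vector, one value per group
by(df[c("x", "y")], df$g, colMeans)               # pretty-printed, list-like 'by' object
aggregate(df[c("x", "y")], list(g = df$g), mean)  # results collected in a data.frame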

Side question: I have still not learned plyr or reshape. Would plyr or reshape replace all of these entirely?


Current answer

In the collapse package recently released on CRAN, I have attempted to compress most of the common apply functionality into just 2 functions:

dapply (Data-Apply) applies functions to rows or (default) columns of matrices and data.frames and (default) returns an object of the same type and with the same attributes (unless the result of each computation is atomic and drop = TRUE). Performance on data.frame columns is about on par with lapply, and performance on matrix rows or columns is about 2x faster than apply. Parallelism is available via mclapply (not available on Windows).

Syntax:

dapply(X, FUN, ..., MARGIN = 2, parallel = FALSE, mc.cores = 1L, 
       return = c("same", "matrix", "data.frame"), drop = TRUE)

Examples:

# Apply to columns:
dapply(mtcars, log)
dapply(mtcars, sum)
dapply(mtcars, quantile)
# Apply to rows:
dapply(mtcars, sum, MARGIN = 1)
dapply(mtcars, quantile, MARGIN = 1)
# Return as matrix:
dapply(mtcars, quantile, return = "matrix")
dapply(mtcars, quantile, MARGIN = 1, return = "matrix")
# Same for matrices ...

BY is an S3 generic for split-apply-combine computing with vector, matrix and data.frame methods. It is considerably faster than tapply, by and aggregate (and also faster than plyr, although dplyr is faster on large data).

Syntax:

BY(X, g, FUN, ..., use.g.names = TRUE, sort = TRUE,
   expand.wide = FALSE, parallel = FALSE, mc.cores = 1L,
   return = c("same", "matrix", "data.frame", "list"))

Examples:

# Vectors:
BY(iris$Sepal.Length, iris$Species, sum)
BY(iris$Sepal.Length, iris$Species, quantile)
BY(iris$Sepal.Length, iris$Species, quantile, expand.wide = TRUE) # This returns a matrix 
# Data.frames
BY(iris[-5], iris$Species, sum)
BY(iris[-5], iris$Species, quantile)
BY(iris[-5], iris$Species, quantile, expand.wide = TRUE) # This returns a wider data.frame
BY(iris[-5], iris$Species, quantile, return = "matrix") # This returns a matrix
# Same for matrices ...

A list of grouping variables can also be supplied to g.
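A minimal sketch of that (the mtcars columns here are my own choice for illustration):

library(collapse)
# Group by two variables at once by passing a list of vectors to g
BY(mtcars$mpg, list(cyl = mtcars$cyl, am = mtcars$am), mean)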

Talking about performance: A main goal of collapse is to foster high-performance programming in R and to move beyond split-apply-combine altogether. For this purpose the package has a full set of C++-based fast generic functions: fmean, fmedian, fmode, fsum, fprod, fsd, fvar, fmin, fmax, ffirst, flast, fNobs, fNdistinct, fscale, fbetween, fwithin, fHDbetween, fHDwithin, flag, fdiff and fgrowth. They perform grouped computations in a single pass through the data (i.e. no splitting and recombining).

Syntax:

fFUN(x, g = NULL, [w = NULL,] TRA = NULL, [na.rm = TRUE,] use.g.names = TRUE, drop = TRUE)

Examples:

v <- iris$Sepal.Length
f <- iris$Species

# Vectors
fmean(v)             # mean
fmean(v, f)          # grouped mean
fsd(v, f)            # grouped standard deviation
fsd(v, f, TRA = "/") # grouped scaling
fscale(v, f)         # grouped standardizing (scaling and centering)
fwithin(v, f)        # grouped demeaning

w <- abs(rnorm(nrow(iris)))
fmean(v, w = w)      # Weighted mean
fmean(v, f, w)       # Weighted grouped mean
fsd(v, f, w)         # Weighted grouped standard-deviation
fsd(v, f, w, "/")    # Weighted grouped scaling
fscale(v, f, w)      # Weighted grouped standardizing
fwithin(v, f, w)     # Weighted grouped demeaning

# Same using data.frames...
fmean(iris[-5], f)                # grouped mean
fscale(iris[-5], f)               # grouped standardizing
fwithin(iris[-5], f)              # grouped demeaning

# Same with matrices ...

I provide benchmarks in the package vignettes. Programming with the fast functions is substantially faster than programming with dplyr or data.table, especially on smaller data, but also on large data.
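As a rough way to see this for yourself, here is a minimal benchmark sketch of my own (it assumes the microbenchmark and dplyr packages are installed; absolute timings will vary by machine and data size):

library(collapse)
library(dplyr)
library(microbenchmark)

# Grouped mean of one column, three ways
microbenchmark(
  collapse = fmean(iris$Sepal.Length, iris$Species),
  dplyr    = iris %>% group_by(Species) %>% summarise(m = mean(Sepal.Length)),
  tapply   = tapply(iris$Sepal.Length, iris$Species, mean)
)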

Other answers

First, start with Joran's excellent answer; it is doubtful anything can better that.

Then the following mnemonics may help you remember the distinctions between them. Whilst some are obvious, others may be less so; for those you'll find justification in Joran's discussion.

Mnemonics

lapply is a list apply which acts on a list or vector and returns a list.
sapply is a simple lapply (the function defaults to returning a vector or matrix when possible).
vapply is a verified apply (allows the return object type to be prespecified).
rapply is a recursive apply for nested lists, i.e. lists within lists.
tapply is a tagged apply where the tags identify the subsets.
apply is generic: it applies a function to a matrix's rows or columns (or, more generally, to the dimensions of an array).
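A minimal sketch of the two less familiar members of that list, on illustrative data of my own:

# vapply: like sapply, but you declare the expected type/shape of each result
vapply(1:4, function(i) i^2, FUN.VALUE = numeric(1))

# rapply: recursively applies a function to the leaves of a nested list
nested <- list(a = 1:3, b = list(c = 4:6, d = 7:9))
rapply(nested, sum, how = "unlist")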

Building the right background

If using the apply family still feels a bit alien to you, then it might be that you are missing a key point of view.

These two articles can help. They provide the necessary background to motivate the functional programming techniques that the apply family of functions provides.

Users of Lisp will recognise the paradigm immediately. If you are not familiar with Lisp, once you get your head around FP, you will have gained a powerful point of view for use in R, and apply will make a lot more sense.

Advanced R: Functional Programming, by Hadley Wickham
Simple Functional Programming in R, by Michael Barton

See slide 21 of http://www.slideshare.net/hadley/plyr-one-data-analytic-strategy:

(I hope it's clear that apply here corresponds to @Hadley's aaply and aggregate corresponds to @Hadley's ddply, etc. Slide 20 of the same slideshare will clarify things if you don't get it from this image.)

(The input is along the left, the output along the top.)

I recently discovered the rather useful sweep function and add it here for the sake of completeness:

sweep

The basic idea is to sweep through an array row- or column-wise and return a modified array. An example will make this clear (source: datacamp):

Say you have a matrix and want to standardize it column-wise:

dataPoints <- matrix(4:15, nrow = 4)

# Find means per column with `apply()`
dataPoints_means <- apply(dataPoints, 2, mean)

# Find standard deviation with `apply()`
dataPoints_sdev <- apply(dataPoints, 2, sd)

# Center the points 
dataPoints_Trans1 <- sweep(dataPoints, 2, dataPoints_means,"-")

# Return the result
dataPoints_Trans1
##      [,1] [,2] [,3]
## [1,] -1.5 -1.5 -1.5
## [2,] -0.5 -0.5 -0.5
## [3,]  0.5  0.5  0.5
## [4,]  1.5  1.5  1.5

# Normalize
dataPoints_Trans2 <- sweep(dataPoints_Trans1, 2, dataPoints_sdev, "/")

# Return the result
dataPoints_Trans2
##            [,1]       [,2]       [,3]
## [1,] -1.1618950 -1.1618950 -1.1618950
## [2,] -0.3872983 -0.3872983 -0.3872983
## [3,]  0.3872983  0.3872983  0.3872983
## [4,]  1.1618950  1.1618950  1.1618950

Note: for this simple example the same result can of course be achieved more easily by apply(dataPoints, 2, scale).
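A quick sanity check (my own addition) that the one-liner and the two-step sweep agree numerically:

# scale() centers each column and divides by its standard deviation, so applied
# column-wise it matches the sweep result above (up to attributes such as dimnames)
all.equal(dataPoints_Trans2, apply(dataPoints, 2, scale), check.attributes = FALSE)
## [1] TRUE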

Maybe it's worth mentioning ave. ave is tapply's friendly cousin. It returns results in a form that you can plug straight back into your data frame.

dfr <- data.frame(a=1:20, f=rep(LETTERS[1:5], each=4))
means <- tapply(dfr$a, dfr$f, mean)
##  A    B    C    D    E 
## 2.5  6.5 10.5 14.5 18.5 

## great, but putting it back in the data frame is another line:

dfr$m <- means[dfr$f]

dfr$m2 <- ave(dfr$a, dfr$f, FUN=mean) # NB argument name FUN is needed!
dfr
##   a f    m   m2
##   1 A  2.5  2.5
##   2 A  2.5  2.5
##   3 A  2.5  2.5
##   4 A  2.5  2.5
##   5 B  6.5  6.5
##   6 B  6.5  6.5
##   7 B  6.5  6.5
##   ...

There is nothing in the base package that works like ave for whole data frames (as by is to tapply for whole data frames). But you can fudge it:

dfr$foo <- ave(1:nrow(dfr), dfr$f, FUN=function(x) {
    x <- dfr[x,]
    sum(x$m*x$m2)
})
dfr
##     a f    m   m2    foo
## 1   1 A  2.5  2.5    25
## 2   2 A  2.5  2.5    25
## 3   3 A  2.5  2.5    25
## ...
