What tricks do people use to manage the available memory of an interactive R session? I use the functions below [based on postings by Petr Pikal and David Hinds to the r-help list in 2004] to list (and/or sort) the largest objects and to occasionally rm() some of them. But by far the most effective solution was ... to run under 64-bit Linux with ample memory.

Any other tricks people want to share? One tip per post, please.

# improved list of objects
.ls.objects <- function (pos = 1, pattern, order.by,
                        decreasing=FALSE, head=FALSE, n=5) {
    napply <- function(names, fn) sapply(names, function(x)
                                         fn(get(x, pos = pos)))
    names <- ls(pos = pos, pattern = pattern)
    obj.class <- napply(names, function(x) as.character(class(x))[1])
    obj.mode <- napply(names, mode)
    obj.type <- ifelse(is.na(obj.class), obj.mode, obj.class)
    obj.size <- napply(names, object.size)
    obj.dim <- t(napply(names, function(x)
                        as.numeric(dim(x))[1:2]))
    vec <- is.na(obj.dim)[, 1] & (obj.type != "function")
    obj.dim[vec, 1] <- napply(names, length)[vec]
    out <- data.frame(obj.type, obj.size, obj.dim)
    names(out) <- c("Type", "Size", "Rows", "Columns")
    if (!missing(order.by))
        out <- out[order(out[[order.by]], decreasing=decreasing), ]
    if (head)
        out <- head(out, n)
    out
}
# shorthand
lsos <- function(..., n=10) {
    .ls.objects(..., order.by="Size", decreasing=TRUE, head=TRUE, n=n)
}
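
A quick usage sketch (the objects m and df below are hypothetical, just to have something sizable in the workspace):

# create a couple of largish objects
m  <- matrix(rnorm(1e6), ncol = 100)
df <- data.frame(a = runif(1e5), b = sample(letters, 1e5, replace = TRUE))

lsos()                                             # ten largest objects, by size
.ls.objects(order.by = "Size", decreasing = TRUE)  # full table, largest first

rm(m); gc()                                        # drop a big object and reclaim memory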

Current answer

For both speed and memory purposes, when building a large data frame via some complex series of steps, I'll periodically flush it (the in-progress data set being built) to disk, appending to anything that came before, and then restart it. This way the intermediate steps are only working on smallish data frames (which is good as, e.g., rbind slows down considerably with larger objects). The entire data set can be read back in at the end of the process, when all the intermediate objects have been removed.

dfinal <- NULL
first <- TRUE
tempfile <- "dfinal_temp.csv"
for( i in bigloop ) {
    if( !i %% 10000 ) { 
        cat( i, "; flushing to disk...\n" )
        write.table( dfinal, file=tempfile, append=!first,
                     col.names=first, row.names=FALSE )
        first <- FALSE
        dfinal <- NULL   # nuke it
    }

    # ... complex operations here that add data to 'dfinal' data frame  
}
print( "Loop done; flushing to disk and re-reading entire data set..." )
write.table( dfinal, file=tempfile, append=TRUE, col.names=FALSE, row.names=FALSE )
dfinal <- read.table( tempfile, header=TRUE )

Other answers

I'm fortunate in that my large data sets are saved by the instrument in "chunks" (subsets) of roughly 100 MB (32-bit binary), so I can do pre-processing steps (deleting uninformative parts, downsampling) sequentially before fusing the data set. Calling gc() "by hand" can help if the size of the data gets close to the available memory. Sometimes a different algorithm needs much less memory, and sometimes there's a trade-off between vectorization and memory use: compare, e.g., split() & lapply() vs. a for loop. For the sake of fast & easy data analysis, I often work first with a small random subset (sample()) of the data. Once the data-analysis script/.Rnw is finished, the code and the complete data go to the calculation server for an overnight / over-the-weekend / ... calculation.
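
A minimal sketch of the random-subset idea (full.data and the 1% sampling fraction below are made up for illustration):

set.seed(42)
full.data <- data.frame(x = rnorm(2e6), y = rnorm(2e6))         # stand-in for the real data
idx <- sample(nrow(full.data), size = round(0.01 * nrow(full.data)))
dev.data <- full.data[idx, ]                                     # small subset for developing the script

fit <- lm(y ~ x, data = dev.data)                                # iterate here, rerun on full.data later

rm(full.data); gc()                                              # release the big object, collect garbage "by hand"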

I make aggressive use of the subset parameter, selecting only the required variables, when passing data frames to the data= argument of regression functions. It does result in some errors if I forget to add variables to both the formula and the select= vector, but it still saves a lot of time due to decreased copying of objects and reduces the memory footprint significantly. Say I have 4 million records with 110 variables (and I do). Example:

# library(rms); library(Hmisc)   # for the cph and rcs functions
Mayo.PrCr.rbc.mdl <- 
cph(formula = Surv(surv.yr, death) ~ age + Sex + nsmkr + rcs(Mayo, 4) + 
                                     rcs(PrCr.rat, 3) +  rbc.cat * Sex, 
     data = subset(set1HLI,  gdlab2 & HIVfinal == "Negative", 
                           select = c("surv.yr", "death", "PrCr.rat", "Mayo", 
                                      "age", "Sex", "nsmkr", "rbc.cat")
   )            )

By way of setting the context and strategy: the gdlab2 variable is a logical vector constructed for subjects in the dataset that had all normal or nearly normal values on a battery of laboratory tests, and HIVfinal is a character vector summarizing preliminary and confirmatory HIV testing.

Make sure you record your work in a reproducible script. From time to time, reopen R and source() your script. You'll clear out anything you're no longer using and, as an added benefit, you'll have tested your code.
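
A minimal sketch of that restart-and-source workflow (the script name analysis.R is hypothetical):

rm(list = ls())                    # start from a clean workspace
gc()
source("analysis.R", echo = TRUE)  # rebuilds only the objects the script still creates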

Use environments instead of lists to handle collections of objects that occupy a significant amount of working memory.

The reason: each time an element of a list is modified, the whole list is temporarily copied. This becomes a problem if the list's storage requirement is around half the available working memory, because then data must be swapped out to the slow hard disk. Environments, on the other hand, are not subject to this behaviour, and they can be treated much like lists.

Here's an example:

get.data <- function(x)
{
  # get some data based on x
  return(paste("data from",x))
}

collect.data <- function(i,x,env)
{
  # get some data
  data <- get.data(x[[i]])
  # store data into environment
  element.name <- paste("V",i,sep="")
  env[[element.name]] <- data
  return(NULL)  
}

better.list <- new.env()
filenames <- c("file1","file2","file3")
lapply(seq_along(filenames),collect.data,x=filenames,env=better.list)

# read/write access
print(better.list[["V1"]])
better.list[["V2"]] <- "testdata"
# number of list elements
length(ls(better.list))

In combination with structures such as big.matrix or data.table, which allow their contents to be modified in place, very memory-efficient workflows are possible.
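
As a small illustration of the in-place idea with data.table (assuming the package is installed; the columns below are made up), the := operator modifies the table by reference rather than copying it:

library(data.table)

dt <- data.table(id = 1:1e6, value = rnorm(1e6))

# add / overwrite a column by reference -- no copy of the whole table is made
dt[, value.scaled := (value - mean(value)) / sd(value)]

# drop a column by reference once it is no longer needed
dt[, value := NULL]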