How to import multiple .csv files at once?

Suppose we have a folder containing multiple data.csv files, each containing the same number of variables but each from different times. Is there a way in R to import all of them simultaneously rather than having to import them individually?

My problem is that I have about 2000 data files to import, and importing them individually by using the code:

read.delim(file="filename", header=TRUE, sep="\t")

is not efficient.


Something like the following should result in each data frame as a separate element in a single list:

temp = list.files(pattern="*.csv")
myfiles = lapply(temp, read.delim)

This assumes that you have those CSVs in a single directory (your current working directory) and that all of them have the lowercase extension .csv.

If you then want to combine those data frames into a single data frame, see the solutions in other answers using things like do.call(rbind, ...), dplyr::bind_rows(), or data.table::rbindlist().
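For a quick illustration of the base R variant (the package calls are drop-in replacements when those packages are installed), here is a minimal, self-contained sketch; the demo files are written to a temporary directory purely so the snippet can run anywhere:

```r
# Minimal sketch: read every csv in a directory into a list, then
# row-bind the list into one data frame (demo files are created in
# a temporary directory so the example is self-contained).
dir <- tempfile("csvdemo")
dir.create(dir)
write.csv(data.frame(x = 1:2), file.path(dir, "a.csv"), row.names = FALSE)
write.csv(data.frame(x = 3:4), file.path(dir, "b.csv"), row.names = FALSE)

temp <- list.files(dir, pattern = "\\.csv$", full.names = TRUE)
myfiles <- lapply(temp, read.csv)

combined <- do.call(rbind, myfiles)  # base R
# dplyr::bind_rows(myfiles) and data.table::rbindlist(myfiles)
# are drop-in alternatives when those packages are available.
nrow(combined)
#> [1] 4
```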

If you really want each data frame in a separate object, even though that's often inadvisable, you could do the following with assign:

temp = list.files(pattern="*.csv")
for (i in 1:length(temp)) assign(temp[i], read.csv(temp[i]))

Or, without assign, and to demonstrate (1) how the file name can be cleaned up and (2) how to use list2env, you can try the following:

temp = list.files(pattern="*.csv")
list2env(
lapply(setNames(temp, make.names(gsub("*.csv$", "", temp))),
read.csv), envir = .GlobalEnv)

But again, it's often better to leave them in a single list.

As well as using lapply or some other looping construct in R, you could merge your CSV files into one file.

On Unix, if the files have no headers, then it's as easy as:

cat *.csv > all.csv

or if there are headers, and you can find a string that matches headers and only headers (i.e. suppose the header lines all start with "Age"), you'd do:

cat *.csv | grep -v ^Age > all.csv

I think in Windows you could do this from a DOS command box with COPY and SEARCH (or FIND or something), but why not install cygwin and get the power of the Unix command shell?

This is the code I developed to read all the csv files into R. It will create a dataframe for each csv file individually and title that dataframe with the file's original name (removing the spaces and the .csv). I hope you find it useful!

path <- "C:/Users/cfees/My Box Files/Fitness/"
files <- list.files(path=path, pattern="*.csv")
for(file in files)
{
perpos <- which(strsplit(file, "")[[1]]==".")
assign(
gsub(" ","",substr(file, 1, perpos-1)),
read.csv(paste(path,file,sep="")))
}

Here are some options to convert the .csv files into one data.frame using base R and some of the packages available for reading files in R.

This is likely slower than the options below.

# Get the files names
files = list.files(pattern="*.csv")
# First apply read.csv, then rbind
myfiles = do.call(rbind, lapply(files, function(x) read.csv(x, stringsAsFactors = FALSE)))

Edit: some extra choices using data.table and readr

A fread() version, a function of the data.table package. This is by far the fastest option in R.

library(data.table)
DT = do.call(rbind, lapply(files, fread))
# The same using `rbindlist`
DT = rbindlist(lapply(files, fread))

Using readr, another package for reading csv files. It's slower than fread, faster than base R, but has different functionalities.

library(readr)
library(dplyr)
tbl = lapply(files, read_csv) %>% bind_rows()

Building on dnlbrk's comment, assign can be considerably faster than list2env for big files.

library(readr)
library(stringr)


List_of_filepaths <- list.files(path ="C:/Users/Anon/Documents/Folder_with_csv_files/", pattern = ".csv", all.files = TRUE, full.names = TRUE)

By setting the full.names argument to TRUE, you will get the full path to each file as a separate character string in your list of files; e.g., List_of_filepaths[1] will be something like "C:/Users/Anon/Documents/Folder_with_csv_files/fil1.csv".

for(f in 1:length(List_of_filepaths)) {
file_name <- str_sub(string = List_of_filepaths[f], start = 46, end = -5)
file_df <- read_csv(List_of_filepaths[f])
assign( x = file_name, value = file_df, envir = .GlobalEnv)
}

You can use data.table's fread, or base R's read.csv, instead of read_csv. The file_name step allows you to tidy up the name so that each data frame is not left with the full path to the file as its name. You could extend your loop to do further things to the data table before transferring it to the global environment, for example:

for(f in 1:length(List_of_filepaths)) {
file_name <- str_sub(string = List_of_filepaths[f], start = 46, end = -5)
file_df <- read_csv(List_of_filepaths[f])
file_df <- file_df[,1:3] #if you only need the first three columns
assign( x = file_name, value = file_df, envir = .GlobalEnv)
}
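As a side note, the hard-coded positions in str_sub (start = 46) break as soon as the folder path changes length; a more robust sketch, using only base R, strips the directory and extension instead:

```r
# Sketch: derive a clean object name from a full file path without
# hard-coding character positions.
path <- "C:/Users/Anon/Documents/Folder_with_csv_files/fil1.csv"
file_name <- tools::file_path_sans_ext(basename(path))
file_name
#> [1] "fil1"
```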
A speedy and succinct tidyverse solution: (more than twice as fast as base R's read.csv)

library(tidyverse)

tbl <-
list.files(pattern = "*.csv") %>%
map_df(~read_csv(.))

data.table's fread() can even cut those load times in half again. (For 1/4 of the base R times.)

library(data.table)


tbl_fread <-
list.files(pattern = "*.csv") %>%
map_df(~fread(.))

The stringsAsFactors = FALSE argument keeps the dataframe factors free, and (as marbel points out) it is the default setting for fread.

If the typecasting is being cheeky, you can force all the columns to be read as characters with the col_types argument.

tbl <-
list.files(pattern = "*.csv") %>%
map_df(~read_csv(., col_types = cols(.default = "c")))

If you are wanting to dip into subdirectories to construct your list of files to eventually bind, then be sure to include the path name, as well as register the files with their full names in your list. This will allow the binding work to go on outside of the current directory. (Think of the full pathnames as operating like passports that allow movement back across directory "borders".)

tbl <-
list.files(path = "./subdirectory/",
pattern = "*.csv",
full.names = T) %>%
map_df(~read_csv(., col_types = cols(.default = "c")))

As Hadley describes here (about halfway down the page):

map_df(x, f) is effectively the same as do.call("rbind", lapply(x, f))…

Bonus feature: adding the filenames to the records, per Niks's feature request in the comments below:
  • Add the original filename to each record.

Code explained: make a function to append the filename to each record during the initial reading of the tables. Then use that function instead of the plain read_csv() function.

read_plus <- function(flnm) {
read_csv(flnm) %>%
mutate(filename = flnm)
}


tbl_with_sources <-
list.files(pattern = "*.csv",
full.names = T) %>%
map_df(~read_plus(.))

(The typecasting and subdirectory handling approaches can also be handled inside the read_plus() function in the same manner as illustrated in the second and third variants suggested above.)

### Benchmark Code & Results
library(tidyverse)
library(data.table)
library(microbenchmark)


### Base R Approaches
#### Instead of a dataframe, this approach creates a list of lists
#### removed from analysis as this alone doubled analysis time reqd
# lapply_read.delim <- function(path, pattern = "*.csv") {
#     temp = list.files(path, pattern, full.names = TRUE)
#     myfiles = lapply(temp, read.delim)
# }


#### `read.csv()`
do.call_rbind_read.csv <- function(path, pattern = "*.csv") {
files = list.files(path, pattern, full.names = TRUE)
do.call(rbind, lapply(files, function(x) read.csv(x, stringsAsFactors = FALSE)))
}


map_df_read.csv <- function(path, pattern = "*.csv") {
list.files(path, pattern, full.names = TRUE) %>%
map_df(~read.csv(., stringsAsFactors = FALSE))
}




### *dplyr()*
#### `read_csv()`
lapply_read_csv_bind_rows <- function(path, pattern = "*.csv") {
files = list.files(path, pattern, full.names = TRUE)
lapply(files, read_csv) %>% bind_rows()
}


map_df_read_csv <- function(path, pattern = "*.csv") {
list.files(path, pattern, full.names = TRUE) %>%
map_df(~read_csv(., col_types = cols(.default = "c")))
}


### *data.table* / *purrr* hybrid
map_df_fread <- function(path, pattern = "*.csv") {
list.files(path, pattern, full.names = TRUE) %>%
map_df(~fread(.))
}


### *data.table*
rbindlist_fread <- function(path, pattern = "*.csv") {
files = list.files(path, pattern, full.names = TRUE)
rbindlist(lapply(files, function(x) fread(x)))
}


do.call_rbind_fread <- function(path, pattern = "*.csv") {
files = list.files(path, pattern, full.names = TRUE)
do.call(rbind, lapply(files, function(x) fread(x, stringsAsFactors = FALSE)))
}




read_results <- function(dir_size){
microbenchmark(
# lapply_read.delim = lapply_read.delim(dir_size), # too slow to include in benchmarks
do.call_rbind_read.csv = do.call_rbind_read.csv(dir_size),
map_df_read.csv = map_df_read.csv(dir_size),
lapply_read_csv_bind_rows = lapply_read_csv_bind_rows(dir_size),
map_df_read_csv = map_df_read_csv(dir_size),
rbindlist_fread = rbindlist_fread(dir_size),
do.call_rbind_fread = do.call_rbind_fread(dir_size),
map_df_fread = map_df_fread(dir_size),
times = 10L)
}


read_results_lrg_mid_mid <- read_results('./testFolder/500MB_12.5MB_40files')
print(read_results_lrg_mid_mid, digits = 3)


read_results_sml_mic_mny <- read_results('./testFolder/5MB_5KB_1000files/')
read_results_sml_tny_mod <- read_results('./testFolder/5MB_50KB_100files/')
read_results_sml_sml_few <- read_results('./testFolder/5MB_500KB_10files/')


read_results_med_sml_mny <- read_results('./testFolder/50MB_5OKB_1000files')
read_results_med_sml_mod <- read_results('./testFolder/50MB_5OOKB_100files')
read_results_med_med_few <- read_results('./testFolder/50MB_5MB_10files')


read_results_lrg_sml_mny <- read_results('./testFolder/500MB_500KB_1000files')
read_results_lrg_med_mod <- read_results('./testFolder/500MB_5MB_100files')
read_results_lrg_lrg_few <- read_results('./testFolder/500MB_50MB_10files')


read_results_xlg_lrg_mod <- read_results('./testFolder/5000MB_50MB_100files')




print(read_results_sml_mic_mny, digits = 3)
print(read_results_sml_tny_mod, digits = 3)
print(read_results_sml_sml_few, digits = 3)


print(read_results_med_sml_mny, digits = 3)
print(read_results_med_sml_mod, digits = 3)
print(read_results_med_med_few, digits = 3)


print(read_results_lrg_sml_mny, digits = 3)
print(read_results_lrg_med_mod, digits = 3)
print(read_results_lrg_lrg_few, digits = 3)


print(read_results_xlg_lrg_mod, digits = 3)


# display boxplot of my typical use case results & basic machine max load
par(oma = c(0,0,0,0)) # remove overall margins if present
par(mfcol = c(1,1)) # remove grid if present
par(mar = c(12,5,1,1) + 0.1) # to display just a single boxplot with its complete labels
boxplot(read_results_lrg_mid_mid, las = 2, xlab = "", ylab = "Duration (seconds)", main = "40 files @ 12.5MB (500MB)")
boxplot(read_results_xlg_lrg_mod, las = 2, xlab = "", ylab = "Duration (seconds)", main = "100 files @ 50MB (5GB)")


# generate 3x3 grid boxplots
par(oma = c(12,1,1,1)) # margins for the whole 3 x 3 grid plot
par(mfcol = c(3,3)) # create grid (filling down each column)
par(mar = c(1,4,2,1)) # margins for the individual plots in 3 x 3 grid
boxplot(read_results_sml_mic_mny, las = 2, xlab = "", ylab = "Duration (seconds)", main = "1000 files @ 5KB (5MB)", xaxt = 'n')
boxplot(read_results_sml_tny_mod, las = 2, xlab = "", ylab = "Duration (milliseconds)", main = "100 files @ 50KB (5MB)", xaxt = 'n')
boxplot(read_results_sml_sml_few, las = 2, xlab = "", ylab = "Duration (milliseconds)", main = "10 files @ 500KB (5MB)",)


boxplot(read_results_med_sml_mny, las = 2, xlab = "", ylab = "Duration (microseconds)        ", main = "1000 files @ 50KB (50MB)", xaxt = 'n')
boxplot(read_results_med_sml_mod, las = 2, xlab = "", ylab = "Duration (microseconds)", main = "100 files @ 500KB (50MB)", xaxt = 'n')
boxplot(read_results_med_med_few, las = 2, xlab = "", ylab = "Duration (seconds)", main = "10 files @ 5MB (50MB)")


boxplot(read_results_lrg_sml_mny, las = 2, xlab = "", ylab = "Duration (seconds)", main = "1000 files @ 500KB (500MB)", xaxt = 'n')
boxplot(read_results_lrg_med_mod, las = 2, xlab = "", ylab = "Duration (seconds)", main = "100 files @ 5MB (500MB)", xaxt = 'n')
boxplot(read_results_lrg_lrg_few, las = 2, xlab = "", ylab = "Duration (seconds)", main = "10 files @ 50MB (500MB)")

Middling Use Case

Boxplot comparison of elapsed time for my typical use case.

Larger Use Case

Boxplot Comparison of Elapsed Time for Extra Large Load

Variety of Use Cases

Rows: file counts (1000, 100, 10)
Columns: final dataframe size (5MB, 50MB, 500MB)
Boxplot comparison of directory size variations

At the smallest use cases, the overhead of bringing in the C libraries behind purrr and dplyr outweighed the performance gains observed with the larger-scale processing tasks, so the base R results were better there.

If you would like to run your own tests, you may find this bash script helpful.

for ((i=1; i<=$2; i++)); do
cp "$1" "${1:0:8}_${i}.csv";
done

bash what_you_name_this_script.sh "fileName_you_want_copied" 100 will create 100 sequentially numbered copies of your file (after the initial 8 characters of the filename and an underscore).

Attributions and Appreciations

With special thanks to:

  • Tyler Rinker and Akrun for demonstrating microbenchmark.
  • Jake Kaupp for introducing me to map_df() here.
  • David McLaughlin for helpful feedback on improving the visualizations and discussing/confirming the performance inversions observed in the small file, small dataframe analysis results.
  • marbel for pointing out the default behavior for fread(). (I need to study up on data.table.)

Using plyr::ldply there is roughly a 50% speed increase by enabling the .parallel option while reading 400 csv files of roughly 30-40 MB each. The example includes a text progress bar.

library(plyr)
library(data.table)
library(doSNOW)


csv.list <- list.files(path="t:/data", pattern=".csv$", full.names=TRUE)


cl <- makeCluster(4)
registerDoSNOW(cl)


pb <- txtProgressBar(max=length(csv.list), style=3)
pbu <- function(i) setTxtProgressBar(pb, i)
dt <- setDT(ldply(csv.list, fread, .parallel=TRUE, .paropts=list(.options.snow=list(progress=pbu))))


stopCluster(cl)

In my view, most of the other answers are obsoleted by rio::import_list, which is a concise one-liner:

library(rio)
my_data <- import_list(dir("path_to_directory", pattern = ".csv"), rbind = TRUE)

Any extra arguments are passed to rio::import. rio can deal with almost any file format R can read, and it uses data.table's fread where possible, so it should be fast too.

I like the approach using list.files(), lapply(), and list2env() (or fs::dir_ls(), purrr::map(), and list2env()). That seems simple and flexible.

Alternatively, you may try the small package {tor} (to-R): by default it imports files from the working directory into a list (list_*() variants) or into the global environment (load_*() variants).

For example, here I read all the .csv files from my working directory into a list using tor::list_csv():

library(tor)


dir()
#>  [1] "_pkgdown.yml"     "cran-comments.md" "csv1.csv"
#>  [4] "csv2.csv"         "datasets"         "DESCRIPTION"
#>  [7] "docs"             "inst"             "LICENSE.md"
#> [10] "man"              "NAMESPACE"        "NEWS.md"
#> [13] "R"                "README.md"        "README.Rmd"
#> [16] "tests"            "tmp.R"            "tor.Rproj"


list_csv()
#> $csv1
#>   x
#> 1 1
#> 2 2
#>
#> $csv2
#>   y
#> 1 a
#> 2 b

And now I load those files into the global environment with tor::load_csv():

# The working directory contains .csv files
dir()
#>  [1] "_pkgdown.yml"     "cran-comments.md" "CRAN-RELEASE"
#>  [4] "csv1.csv"         "csv2.csv"         "datasets"
#>  [7] "DESCRIPTION"      "docs"             "inst"
#> [10] "LICENSE.md"       "man"              "NAMESPACE"
#> [13] "NEWS.md"          "R"                "README.md"
#> [16] "README.Rmd"       "tests"            "tmp.R"
#> [19] "tor.Rproj"


load_csv()


# Each file is now available as a dataframe in the global environment
csv1
#>   x
#> 1 1
#> 2 2
csv2
#>   y
#> 1 a
#> 2 b

Should you need to read specific files, you can match their file path with regexp, ignore.case, and invert.


For even more flexibility, use list_any(). It allows you to supply the reader function via the argument .f.

(path_csv <- tor_example("csv"))
#> [1] "C:/Users/LeporeM/Documents/R/R-3.5.2/library/tor/extdata/csv"
dir(path_csv)
#> [1] "file1.csv" "file2.csv"


list_any(path_csv, read.csv)
#> $file1
#>   x
#> 1 1
#> 2 2
#>
#> $file2
#>   y
#> 1 a
#> 2 b

Pass additional arguments via ... or inside the lambda function.

path_csv %>%
list_any(readr::read_csv, skip = 1)
#> Parsed with column specification:
#> cols(
#>   `1` = col_double()
#> )
#> Parsed with column specification:
#> cols(
#>   a = col_character()
#> )
#> $file1
#> # A tibble: 1 x 1
#>     `1`
#>   <dbl>
#> 1     2
#>
#> $file2
#> # A tibble: 1 x 1
#>   a
#>   <chr>
#> 1 b


path_csv %>%
list_any(~read.csv(., stringsAsFactors = FALSE)) %>%
map(as_tibble)
#> $file1
#> # A tibble: 2 x 1
#>       x
#>   <int>
#> 1     1
#> 2     2
#>
#> $file2
#> # A tibble: 2 x 1
#>   y
#>   <chr>
#> 1 a
#> 2 b

With many files and many cores, fread xargs cat (described below) is about 50x faster than the fastest solution in the top 3 answers.

rbindlist lapply read.delim  500s <- 1st place & accepted answer
rbindlist lapply fread       250s <- 2nd & 3rd place answers
rbindlist mclapply fread      10s
fread xargs cat                5s

Time to read 121401 csvs into a single data.table. Each time is an average of three runs, then rounded. Each csv has 3 columns, one header row, and, on average, 4,510 rows. The machine is a GCP VM with 96 cores.

The top 3 answers, by @A5C1D2H2I1M1N2O1R2T1, @leerssej, and @marbel, are all essentially the same: apply fread (or read.delim) to each file, then rbind/rbindlist the resulting data.tables. For small datasets, I usually use the rbindlist(lapply(list.files("*.csv"),fread)) form. For medium-sized datasets, I use parallel's mclapply instead of lapply, which is much faster if you have many cores.

This is better than other R-internal alternatives, but not the best for a large number of small csvs when speed matters. In that case, it can be much faster to first use cat to concatenate all the csvs into one csv, as in @Spacedman's answer. I'll add some detail on how to do this from within R:

x = fread(cmd='cat *.csv', header=F)

But what if each csv has a header?

x = fread(cmd="awk 'NR==1||FNR!=1' *.csv", header=T)

And what if you have so many files that the *.csv shell glob fails?

x = fread(cmd='find . -name "*.csv" | xargs cat', header=F)

And what if all the files have a header AND there are too many files?

header = fread(cmd='find . -name "*.csv" | head -n1 | xargs head -n1', header=T)
x = fread(cmd='find . -name "*.csv" | xargs tail -q -n+2', header=F)
setnames(x,header)

What if the resulting concatenated csv is too big for system memory (e.g., a /dev/shm out of space error)?

system('find . -name "*.csv" | xargs cat > combined.csv')
x = fread('combined.csv', header=F)

With headers?

system('find . -name "*.csv" | head -n1 | xargs head -n1 > combined.csv')
system('find . -name "*.csv" | xargs tail -q -n+2 >> combined.csv')
x = fread('combined.csv', header=T)

Finally, what if you don't want all the .csv files in a directory, but rather a specific set of files? (Also, they all have headers.) (This is my use case.)

fread(text=paste0(system("xargs cat|awk 'NR==1||$1!=\"<column one name>\"'",input=paths,intern=T),collapse="\n"),header=T,sep="\t")

This is about the same speed as plain fread xargs cat :)

Note: for data.table pre-v1.11.6 (19 Sep 2018), omit the cmd= from fread(cmd=).

To summarize: if you're interested in speed, and have many files and many cores, fread xargs cat is about 50x faster than the fastest solution in the top 3 answers.

Update: here's a function I wrote to easily apply the fastest solution. I use it in production, but you should test it thoroughly with your own data before trusting it.

fread_many = function(files,header=T,...){
if(length(files)==0) return()
if(typeof(files)!='character') return()
files = files[file.exists(files)]
if(length(files)==0) return()
tmp = tempfile(fileext = ".csv")
# note 1: requires awk, not cat or tail because some files have no final newline
# note 2: parallel --xargs is 40% slower
# note 3: reading to var is 15% slower and crashes R if the string is too long
# note 4: shorter paths -> more paths per awk -> fewer awks -> measurably faster
#         so best cd to the csv dir and use relative paths
if(header==T){
system(paste0('head -n1 ',files[1],' > ',tmp))
system(paste0("xargs awk 'FNR>1' >> ",tmp),input=files)
} else {
system(paste0("xargs awk '1' > ",tmp),input=files)
}
DT = fread(file=tmp,header=header,...)
file.remove(tmp)
DT
}

Update 2: here's a more complicated version of the fread_many function for cases where you want the resulting data.table to include a column for the inpath of each csv. In this case, one must also explicitly specify the csv separator with the sep argument.

fread_many = function(files,header=T,keep_inpath=F,sep="auto",...){
if(length(files)==0) return()
if(typeof(files)!='character') return()
files = files[file.exists(files)]
if(length(files)==0) return()
tmp = tempfile(fileext = ".csv")
if(keep_inpath==T){
stopifnot(sep!="auto")
if(header==T){
system(paste0('/usr/bin/echo -ne inpath"',sep,'" > ',tmp))
system(paste0('head -n1 ',files[1],' >> ',tmp))
system(paste0("xargs awk -vsep='",sep,"' 'BEGIN{OFS=sep}{if(FNR>1)print FILENAME,$0}' >> ",tmp),input=files)
} else {
system(paste0("xargs awk -vsep='",sep,"' 'BEGIN{OFS=sep}{print FILENAME,$0}' > ",tmp),input=files)
}
} else {
if(header==T){
system(paste0('head -n1 ',files[1],' > ',tmp))
system(paste0("xargs awk 'FNR>1' >> ",tmp),input=files)
} else {
system(paste0("xargs awk '1' > ",tmp),input=files)
}
}
DT = fread(file=tmp,header=header,sep=sep,...)
file.remove(tmp)
DT
}

Caveat: all of my solutions that concatenate the csvs before reading them assume they all have the same separator. If not all of your csvs use the same separator, instead use rbindlist lapply fread, rbindlist mclapply fread, or fread xargs cat in batches, where all the csvs in a batch use the same separator.
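The batching idea can be sketched as follows; this is a self-contained base R illustration (the file names and separators here are invented purely for the demo):

```r
# Sketch: group files by their known separator, read each batch with the
# matching sep, then bind everything. Demo files live in a temp directory.
dir <- tempfile("sepdemo")
dir.create(dir)
writeLines("x,y\n1,2", file.path(dir, "comma.csv"))
writeLines("x\ty\n3\t4", file.path(dir, "tab.tsv"))

batches <- list(","  = file.path(dir, "comma.csv"),
                "\t" = file.path(dir, "tab.tsv"))
dfs <- unlist(lapply(names(batches), function(s)
  lapply(batches[[s]], read.csv, sep = s)), recursive = FALSE)
combined <- do.call(rbind, dfs)
combined
#>   x y
#> 1 1 2
#> 2 3 4
```

With fread, the same grouping applies: read each batch with its own sep, then rbindlist the batches.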

As long as your computer has many cores, the following code should give you the fastest option for big data:

if (!require("pacman")) install.packages("pacman")
pacman::p_load(doParallel, data.table, stringr)


# get the file name
dir() %>% str_subset("\\.csv$") -> fn


# use parallel setting
(cl <- detectCores() %>%
makeCluster()) %>%
registerDoParallel()


# read and bind all files together
system.time({
big_df <- foreach(
i = fn,
.packages = "data.table"
) %dopar%
{
fread(i, colClasses = "character")
} %>%
rbindlist(fill = TRUE)
})


# end of parallel work
stopCluster(cl)

Update on 2020/04/16: as I found a new package available for parallel computation, an alternative solution is provided using the following code:

if (!require("pacman")) install.packages("pacman")
pacman::p_load(future.apply, data.table, stringr)


# get the file name
dir() %>% str_subset("\\.csv$") -> fn


plan(multiprocess)


future_lapply(fn,fread,colClasses = "character") %>%
rbindlist(fill = TRUE) -> res


# res is the merged data.table

This is my specific example to read multiple files and combine them into one data frame:

path<- file.path("C:/folder/subfolder")
files <- list.files(path=path, pattern="/*.csv",full.names = T)
library(data.table)
data = do.call(rbind, lapply(files, function(x) read.csv(x, stringsAsFactors = FALSE)))

Someone asked me to add this functionality to the stackoverflow R package. Given that it is a tinyverse package (it can't depend on third-party packages), here is what I came up with:

#' Bulk import data files
#'
#' Read in each file at a path and then unnest them. Defaults to csv format.
#'
#' @param path        a character vector of full path names
#' @param pattern     an optional \link[=regex]{regular expression}. Only file names which match the regular expression will be returned.
#' @param reader      a function that can read data from a file name.
#' @param ...         optional arguments to pass to the reader function (eg \code{stringsAsFactors}).
#' @param reducer     a function to unnest the individual data files. Use I to retain the nested structure.
#' @param recursive     logical. Should the listing recurse into directories?
#'
#' @author Neal Fultz
#' @references \url{https://stackoverflow.com/questions/11433432/how-to-import-multiple-csv-files-at-once}
#'
#' @importFrom utils read.csv
#' @export
read.directory <- function(path='.', pattern=NULL, reader=read.csv, ...,
reducer=function(dfs) do.call(rbind.data.frame, dfs), recursive=FALSE) {
files <- list.files(path, pattern, full.names = TRUE, recursive = recursive)


reducer(lapply(files, reader, ...))
}

By parameterizing the reader and reducer functions, people can use data.table or dplyr if they so choose, or just use the base R functions that are fine for smaller data sets.
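To make the parameterization concrete, here is a hypothetical usage sketch; the function is repeated so the snippet is self-contained, and the demo files are written to a temporary directory:

```r
# read.directory() as defined above, repeated here for a runnable demo.
read.directory <- function(path = '.', pattern = NULL, reader = read.csv, ...,
                           reducer = function(dfs) do.call(rbind.data.frame, dfs),
                           recursive = FALSE) {
  files <- list.files(path, pattern, full.names = TRUE, recursive = recursive)
  reducer(lapply(files, reader, ...))
}

dir <- tempfile("bulkdemo")
dir.create(dir)
write.csv(data.frame(x = 1:2), file.path(dir, "a.csv"), row.names = FALSE)
write.csv(data.frame(x = 3:4), file.path(dir, "b.csv"), row.names = FALSE)

# default reader/reducer: read.csv each file, then rbind into one data frame
all_rows <- read.directory(dir, pattern = "\\.csv$", stringsAsFactors = FALSE)
nrow(all_rows)
#> [1] 4

# pass reducer = I to retain the nested (one element per file) structure
nested <- read.directory(dir, pattern = "\\.csv$", reducer = I)
length(nested)
#> [1] 2
```

Swapping in reader = data.table::fread and reducer = data.table::rbindlist would give the data.table flavor of the same pattern.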

Using purrr and including the file IDs as a column:

library(tidyverse)




p <- "my/directory"
files <- list.files(p, pattern="csv", full.names=TRUE) %>%
set_names()
merged <- files %>% map_dfr(read_csv, .id="filename")

Without set_names(), .id= will use integer indicators instead of actual file names.

If you then want just the short file name instead of the full path:

merged <- merged %>% mutate(filename=basename(filename))

With readr 2.0.0 onwards, you can read multiple files at once by supplying a list of paths to the file argument. Here is an example using readr::read_csv().

packageVersion("readr")
#> [1] '2.0.1'
library(readr)
library(fs)


# create files to read in
write_csv(read_csv("1, 2 \n 3, 4", col_names = c("x", "y")), file = "file1.csv")
write_csv(read_csv("5, 6 \n 7, 8", col_names = c("x", "y")), file = "file2.csv")


# get a list of files
files <- dir_ls(".", glob = "file*csv")
files
#> file1.csv file2.csv


# read them in at once
# record paths in a column called filename
read_csv(files, id = "filename")
#> # A tibble: 4 × 3
#>   filename      x     y
#>   <chr>     <dbl> <dbl>
#> 1 file1.csv     1     2
#> 2 file1.csv     3     4
#> 3 file2.csv     5     6
#> 4 file2.csv     7     8

Created on 2021-09-16 by the reprex package (v2.0.1)