Any ideas on an async directory search using fs.readdir? I realize that we could introduce recursion and call the read-directory function with the next directory to read, but I'm a little worried about it not being async...
Any ideas? I've looked at node-walk, which is great, but it doesn't give me just the files in an array the way readdir does.
Looking for output like...
['file1.txt', 'file2.txt', 'dir/file3.txt']
There are basically two ways of accomplishing this. In an async environment you'll notice that there are two kinds of loops: serial and parallel. A serial loop waits for one iteration to complete before it moves onto the next iteration - this guarantees that every iteration of the loop completes in order. In a parallel loop, all the iterations are started at the same time, and one may complete before another, however, it is much faster than a serial loop. So in this case, it's probably better to use a parallel loop because it doesn't matter what order the walk completes in, just as long as it completes and returns the results (unless you want them in order).
A parallel loop looks like this:
var fs = require('fs');
var path = require('path');
var walk = function(dir, done) {
var results = [];
fs.readdir(dir, function(err, list) {
if (err) return done(err);
var pending = list.length;
if (!pending) return done(null, results);
list.forEach(function(file) {
file = path.resolve(dir, file);
fs.stat(file, function(err, stat) {
if (stat && stat.isDirectory()) {
walk(file, function(err, res) {
results = results.concat(res);
if (!--pending) done(null, results);
});
} else {
results.push(file);
if (!--pending) done(null, results);
}
});
});
});
};
A serial loop looks like this:
var fs = require('fs');
var path = require('path');
var walk = function(dir, done) {
var results = [];
fs.readdir(dir, function(err, list) {
if (err) return done(err);
var i = 0;
(function next() {
var file = list[i++];
if (!file) return done(null, results);
file = path.resolve(dir, file);
fs.stat(file, function(err, stat) {
if (stat && stat.isDirectory()) {
walk(file, function(err, res) {
results = results.concat(res);
next();
});
} else {
results.push(file);
next();
}
});
})();
});
};
And to test it out on your home directory (WARNING: the results list will be huge if you have a lot of stuff in your home directory):
walk(process.env.HOME, function(err, results) {
if (err) throw err;
console.log(results);
});
EDIT: Improved the examples.
A. Have a look at the file module. It has a function called walk:
file.walk(start, callback) — navigates a file tree, calling callback for each directory, passing in (null, dirPath, dirs, files).
This may be for you! And yes, it is async. However, I think you would have to aggregate the full paths yourself, if you needed them.
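A rough sketch of how that might look, based only on the callback signature described above (whether `files` contains full paths or per-directory names is an assumption you would need to check against the file module's docs):

var file = require('file');

var results = [];
file.walk(__dirname, function (err, dirPath, dirs, files) {
  if (err) throw err;
  // Aggregate the entries yourself; join them with dirPath if they turn out
  // to be relative names rather than full paths.
  results = results.concat(files || []);
});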
B. Another alternative, and even one of my favourites: use the Unix find for that. Why do something again that has already been programmed? Maybe not exactly what you need, but still worth trying:
var execFile = require('child_process').execFile;
execFile('find', [ 'somepath/' ], function(err, stdout, stderr) {
var file_list = stdout.split('\n');
/* now you've got a list with full path file names */
});
find has a nice built-in caching mechanism that makes subsequent searches very fast, as long as only a few folders have changed.
If you want to use an npm package, then wrench is pretty good.
var wrench = require("wrench");
var files = wrench.readdirSyncRecursive("directory");
wrench.readdirRecursive("directory", function (error, files) {
// live your dreams
});
EDIT (2018): The author deprecated this package in 2015:
wrench.js has been deprecated, and hasn't been updated in quite some time. I heavily recommend using fs-extra to do any extra filesystem operations.
Another great npm package is glob.
npm install glob
It is very powerful and should cover all your recursive needs.
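For example, a minimal recursive-listing sketch (the pattern and the nodir option are assumptions; check the glob docs for the version you install):

var glob = require('glob');

// Match everything under somedir, excluding the directories themselves.
glob('somedir/**/*', { nodir: true }, function (err, files) {
  if (err) throw err;
  console.log(files); // flat array of file paths
});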
EDIT:
Actually, I wasn't perfectly happy with glob, so I created readdirp.
I'm very confident that its API makes finding files and directories recursively and applying specific filters very easy.
Read through its documentation to get a better idea of what it does, and install it via:
npm install readdirp
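A rough usage sketch (the stream-based API and the fileFilter option shown here are assumptions; the readdirp docs are the authoritative reference):

const readdirp = require('readdirp');

const files = [];
readdirp('some/dir', { fileFilter: '*.js' })
  .on('data', (entry) => files.push(entry.path))
  .on('error', (err) => console.error(err))
  .on('end', () => console.log(files));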
I love the answer from chjj above, and would not have been able to create my version of the parallel loop without that start.
var fs = require("fs");
var tree = function(dir, done) {
var results = {
"path": dir
,"children": []
};
fs.readdir(dir, function(err, list) {
if (err) { return done(err); }
var pending = list.length;
if (!pending) { return done(null, results); }
list.forEach(function(file) {
fs.stat(dir + '/' + file, function(err, stat) {
if (stat && stat.isDirectory()) {
tree(dir + '/' + file, function(err, res) {
results.children.push(res);
if (!--pending){ done(null, results); }
});
} else {
results.children.push({"path": dir + "/" + file});
if (!--pending) { done(null, results); }
}
});
});
});
};
module.exports = tree;
I also created a Gist. Comments are welcome. I am still starting out in the NodeJS realm, so that is one way I hope to learn more.
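A quick usage sketch for the module above (the require path is hypothetical):

var tree = require('./tree'); // hypothetical path to the file above

tree(process.cwd(), function (err, results) {
  if (err) throw err;
  console.log(JSON.stringify(results, null, 2));
});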
Since everyone should write their own, here is mine.
walk(dir, cb, endCb) — cb(file) is called for every file found; endCb(err | null) is called when the walk finishes.
It's dirty:
module.exports = walk;
function walk(dir, cb, endCb) {
var fs = require('fs');
var path = require('path');
fs.readdir(dir, function(err, files) {
if (err) {
return endCb(err);
}
var pending = files.length;
if (pending === 0) {
endCb(null);
}
files.forEach(function(file) {
fs.stat(path.join(dir, file), function(err, stats) {
if (err) {
return endCb(err)
}
if (stats.isDirectory()) {
walk(path.join(dir, file), cb, function() {
pending--;
if (pending === 0) {
endCb(null);
}
});
} else {
cb(path.join(dir, file));
pending--;
if (pending === 0) {
endCb(null);
}
}
})
});
});
}
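A usage sketch for the signature above:

walk(__dirname, function (file) {
  console.log(file); // called once for every file found
}, function (err) {
  if (err) return console.error(err);
  console.log('walk finished');
});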
I coded this up recently and thought it would make sense to share it here. The code makes use of the async library.
var fs = require('fs');
var async = require('async');
var scan = function(dir, suffix, callback) {
fs.readdir(dir, function(err, files) {
var returnFiles = [];
async.each(files, function(file, next) {
var filePath = dir + '/' + file;
fs.stat(filePath, function(err, stat) {
if (err) {
return next(err);
}
if (stat.isDirectory()) {
scan(filePath, suffix, function(err, results) {
if (err) {
return next(err);
}
returnFiles = returnFiles.concat(results);
next();
})
}
else if (stat.isFile()) {
if (file.indexOf(suffix, file.length - suffix.length) !== -1) {
returnFiles.push(filePath);
}
next();
}
});
}, function(err) {
callback(err, returnFiles);
});
});
};
You can use it like this:
scan('/some/dir', '.ext', function(err, files) {
// Do something with files that ends in '.ext'.
console.log(files);
});
In case anyone finds it useful, I also put together a synchronous version.
var fs = require('fs');

var walk = function(dir) {
var results = [];
var list = fs.readdirSync(dir);
list.forEach(function(file) {
file = dir + '/' + file;
var stat = fs.statSync(file);
if (stat && stat.isDirectory()) {
/* Recurse into a subdirectory */
results = results.concat(walk(file));
} else {
/* Is a file */
results.push(file);
}
});
return results;
}
Tip: use fewer resources when filtering by filtering inside this function itself. For example, replace results.push(file); with the code below. Adjust as needed:
file_type = file.split(".").pop();
file_name = file.split(/(\\|\/)/g).pop();
if (file_type == "json") results.push(file);
Check out the final-fs library. It provides a readdirRecursive function:
ffs.readdirRecursive(dirPath, true, 'my/initial/path')
.then(function (files) {
// in the `files` variable you've got all the files
})
.otherwise(function (err) {
// something went wrong
});
Have a look at loaddir: https://npmjs.org/package/loaddir
npm install loaddir
loaddir = require('loaddir')
allJavascripts = []
loaddir({
path: __dirname + '/public/javascripts',
callback: function(){ allJavascripts.push(this.relativePath + this.baseName); }
})
You can use fileName instead of baseName if you need the extension as well.
An added bonus is that it will also watch the files and call the callback again. There are tons of configuration options to make it extremely flexible.
I just recreated Ruby's guard gem using loaddir in no time.
Standalone promise implementation
I am using the when.js promise library in this example.
var fs = require('fs')
, path = require('path')
, when = require('when')
, nodefn = require('when/node/function');
function walk (directory, includeDir) {
var results = [];
return when.map(nodefn.call(fs.readdir, directory), function(file) {
file = path.join(directory, file);
return nodefn.call(fs.stat, file).then(function(stat) {
if (stat.isFile()) { return results.push(file); }
if (includeDir) { results.push(file + path.sep); }
return walk(file, includeDir).then(function(filesInDir) {
results = results.concat(filesInDir);
});
});
}).then(function() {
return results;
});
};
walk(__dirname).then(function(files) {
console.log(files);
}).otherwise(function(error) {
console.error(error.stack || error);
});
I've included an optional parameter includeDir, which will include directories in the file listing if set to true.
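For instance, to also list the directories themselves (with a trailing separator, as implemented above), a usage sketch:

walk(__dirname, true).then(function (files) {
  console.log(files); // files plus directories ending in path.sep
}).otherwise(function (error) {
  console.error(error.stack || error);
});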
Use node-dir to produce exactly the output you like:
var dir = require('node-dir');
dir.files(__dirname, function(err, files) {
if (err) throw err;
console.log(files);
//we have an array of files now, so now we can iterate that array
files.forEach(function(path) {
action(null, path);
})
});
Here is yet another implementation. None of the above solutions have any limiters, so if your directory structure is large, they're all going to thrash and eventually run out of resources.
var async = require('async');
var fs = require('fs');
var resolve = require('path').resolve;
var scan = function(path, concurrency, callback) {
var list = [];
var walker = async.queue(function(path, callback) {
fs.stat(path, function(err, stats) {
if (err) {
return callback(err);
} else {
if (stats.isDirectory()) {
fs.readdir(path, function(err, files) {
if (err) {
callback(err);
} else {
for (var i = 0; i < files.length; i++) {
walker.push(resolve(path, files[i]));
}
callback();
}
});
} else {
list.push(path);
callback();
}
}
});
}, concurrency);
walker.push(path);
walker.drain = function() {
callback(list);
}
};
Using a concurrency of 50 works pretty well, and it is almost as fast as simpler implementations for small directory structures.
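A usage sketch for the scan function above, with that concurrency of 50 (note the callback receives only the list):

scan(__dirname, 50, function (files) {
  console.log(files); // flat array of file paths
});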
Here is my take. Hope it helps somebody.
My focus was to make the searching routine able to stop wherever you want, and, for each file found, to report its relative depth from the original path.
var _fs = require('fs');
var _path = require('path');
var _defer = process.nextTick;
// next() will pop the first element from an array and return it, together with
// the recursive depth and the container array of the element. i.e. If the first
// element is an array, it'll be dug into recursively. But if the first element is
// an empty array, it'll be simply popped and ignored.
// e.g. If the original array is [1,[2],3], next() will return [1,0,[[2],3]], and
// the array becomes [[2],3]. If the array is [[[],[1,2],3],4], next() will return
// [1,2,[2]], and the array becomes [[[2],3],4].
// There is an infinity loop `while(true) {...}`, because I optimized the code to
// make it a non-recursive version.
var next = function(c) {
var a = c;
var n = 0;
while (true) {
if (a.length == 0) return null;
var x = a[0];
if (x.constructor == Array) {
if (x.length > 0) {
a = x;
++n;
} else {
a.shift();
a = c;
n = 0;
}
} else {
a.shift();
return [x, n, a];
}
}
}
// cb is the callback function, it have four arguments:
// 1) an error object if any exception happens;
// 2) a path name, may be a directory or a file;
// 3) a flag, `true` means directory, and `false` means file;
// 4) a zero-based number indicates the depth relative to the original path.
// cb should return a state value to tell whether the searching routine should
// continue: `true` means it should continue; `false` means it should stop here;
// but for a directory, there is a third state `null`, means it should do not
// dig into the directory and continue searching the next file.
var ls = function(path, cb) {
// use `_path.resolve()` to correctly handle '.' and '..'.
var c = [ _path.resolve(path) ];
var f = function() {
var p = next(c);
p && s(p);
};
var s = function(p) {
_fs.stat(p[0], function(err, ss) {
if (err) {
// use `_defer()` to turn a recursive call into a non-recursive call.
cb(err, p[0], null, p[1]) && _defer(f);
} else if (ss.isDirectory()) {
var y = cb(null, p[0], true, p[1]);
if (y) r(p);
else if (y == null) _defer(f);
} else {
cb(null, p[0], false, p[1]) && _defer(f);
}
});
};
var r = function(p) {
_fs.readdir(p[0], function(err, files) {
if (err) {
cb(err, p[0], true, p[1]) && _defer(f);
} else {
// not use `Array.prototype.map()` because we can make each change on site.
for (var i = 0; i < files.length; i++) {
files[i] = _path.join(p[0], files[i]);
}
p[2].unshift(files);
_defer(f);
}
});
}
_defer(f);
};
var printfile = function(err, file, isdir, n) {
if (err) {
console.log('--> ' + ('[' + n + '] ') + file + ': ' + err);
return true;
} else {
console.log('... ' + ('[' + n + '] ') + (isdir ? 'D' : 'F') + ' ' + file);
return true;
}
};
var path = process.argv[2];
ls(path, printfile);
I modified Trevor's Promise-based answer above to work with Bluebird:
var fs = require('fs'),
path = require('path'),
Promise = require('bluebird');
var readdirAsync = Promise.promisify(fs.readdir);
var statAsync = Promise.promisify(fs.stat);
function walkFiles (directory) {
var results = [];
return readdirAsync(directory).map(function(file) {
file = path.join(directory, file);
return statAsync(file).then(function(stat) {
if (stat.isFile()) {
return results.push(file);
}
return walkFiles(file).then(function(filesInDir) {
results = results.concat(filesInDir);
});
});
}).then(function() {
return results;
});
}
//use
walkFiles(__dirname).then(function(files) {
  console.log(files);
}).catch(function(e) {
  console.error(e);
});
I recommend using node-glob to accomplish that task.
var glob = require( 'glob' );
glob( 'dirname/**/*.js', function( err, files ) {
console.log( files );
});
Just for fun, here is a stream-based version that works with the highland.js streams library. It was co-authored by Victor Wu.
###
directory >---m------> dirFilesStream >---------o----> out
| |
| |
+--------< returnPipe <-----------+
legend: (m)erge (o)bserve
+ directory has the initial file
+ dirListStream does a directory listing
+ out prints out the full path of the file
+ returnPipe runs stat and filters on directories
###
_ = require('highland')
fs = require('fs')
fsPath = require('path')
directory = _(['someDirectory'])
mergePoint = _()
dirFilesStream = mergePoint.merge().flatMap((parentPath) ->
_.wrapCallback(fs.readdir)(parentPath).sequence().map (path) ->
fsPath.join parentPath, path
)
out = dirFilesStream
# Create the return pipe
returnPipe = dirFilesStream.observe().flatFilter((path) ->
_.wrapCallback(fs.stat)(path).map (v) ->
v.isDirectory()
)
# Connect up the merge point now that we have all of our streams.
mergePoint.write directory
mergePoint.write returnPipe
mergePoint.end()
# Release backpressure. This will print files as they are discovered
out.each _.log
# Another way would be to queue them all up and then print them all out at once.
# out.toArray((files)-> console.log(files))
Using Promises (Q) to solve this in a functional style:
var fs = require('fs'),
fsPath = require('path'),
Q = require('q');
var walk = function (dir) {
return Q.ninvoke(fs, 'readdir', dir).then(function (files) {
return Q.all(files.map(function (file) {
file = fsPath.join(dir, file);
return Q.ninvoke(fs, 'lstat', file).then(function (stat) {
if (stat.isDirectory()) {
return walk(file);
} else {
return [file];
}
});
}));
}).then(function (files) {
return files.reduce(function (pre, cur) {
return pre.concat(cur);
});
});
};
It returns a promise of an array, so you can use it as:
walk('/home/mypath').then(function (files) { console.log(files); });
I have to add the Promise-based sander library to the list.
var sander = require('sander');
sander.lsr(directory).then( filenames => { console.log(filenames) } );
Using recursion
var fs = require('fs')
var path = process.cwd()
var files = []
var getFiles = function(path, files){
fs.readdirSync(path).forEach(function(file){
var subpath = path + '/' + file;
if(fs.lstatSync(subpath).isDirectory()){
getFiles(subpath, files);
} else {
files.push(path + '/' + file);
}
});
}
And call it:
getFiles(path, files)
console.log(files) // will log all files in directory
Here is the complete working code, as per your requirement. You can get all files and folders recursively.
var fs = require('fs');
var path = require('path');

var recur = function(dir) {
fs.readdir(dir,function(err,list){
list.forEach(function(file){
var file2 = path.resolve(dir, file);
fs.stat(file2,function(err,stats){
if(stats.isDirectory()) {
recur(file2);
}
else {
console.log(file2);
}
})
})
});
};
recur(dirPath);
Here dirPath is the directory path you want to search, e.g. "c:\test".
Using bluebird's promise.coroutine:
let promise = require('bluebird'),
PC = promise.coroutine,
path = require('path'),
fs = promise.promisifyAll(require('fs'));
let getFiles = PC(function*(dir){
let files = [];
let contents = yield fs.readdirAsync(dir);
for (let i = 0, l = contents.length; i < l; i ++) {
//to remove dot(hidden) files on MAC
if (/^\..*/.test(contents[i])) contents.splice(i, 1);
}
for (let i = 0, l = contents.length; i < l; i ++) {
let content = path.resolve(dir, contents[i]);
let contentStat = yield fs.statAsync(content);
if (contentStat && contentStat.isDirectory()) {
let subFiles = yield getFiles(content);
files = files.concat(subFiles);
} else {
files.push(content);
}
}
return files;
});
//how to use
//easy error handling in one place
getFiles(your_dir).then(console.log).catch(err => console.log(err));
The Filehound library is another option. It will recursively search a given directory (the working directory by default). It supports various filters, callbacks, promises and sync searches.
For example, to search the current working directory for all files (using callbacks):
const Filehound = require('filehound');
Filehound.create()
.find((err, files) => {
if (err) {
return console.error(`error: ${err}`);
}
console.log(files); // array of files
});
Or promises, and specifying a particular directory:
const Filehound = require('filehound');
Filehound.create()
.paths("/tmp")
.find()
.each(console.log);
Consult the docs for further use cases and examples of usage: https://github.com/nspragg/filehound
Disclaimer: I am the author.
Here is a recursive method of getting all files, including subdirectories.
const FileSystem = require("fs");
const Path = require("path");
//...
function getFiles(directory) {
directory = Path.normalize(directory);
let files = FileSystem.readdirSync(directory).map((file) => directory + Path.sep + file);
files.forEach((file, index) => {
if (FileSystem.statSync(file).isDirectory()) {
Array.prototype.splice.apply(files, [index, 1].concat(getFiles(file)));
}
});
return files;
}
Using async/await, this should work:
const FS = require('fs');
const Path = require('path');
const { promisify } = require('util');
const readDir = promisify(FS.readdir);
const fileStat = promisify(FS.stat);
async function getFiles(dir) {
let files = await readDir(dir);
let result = files.map(file => {
let path = Path.join(dir,file);
return fileStat(path).then(stat => stat.isDirectory() ? getFiles(path) : path);
});
return flatten(await Promise.all(result));
}
function flatten(arr) {
return Array.prototype.concat(...arr);
}
You can use bluebird.Promisify or this:
/**
* Returns a function that will wrap the given `nodeFunction`. Instead of taking a callback, the returned function will return a promise whose fate is decided by the callback behavior of the given node function. The node function should conform to node.js convention of accepting a callback as last argument and calling that callback with error as the first argument and success value on the second argument.
*
* @param {Function} nodeFunction
* @returns {Function}
*/
module.exports = function promisify(nodeFunction) {
return function(...args) {
return new Promise((resolve, reject) => {
nodeFunction.call(this, ...args, (err, data) => {
if(err) {
reject(err);
} else {
resolve(data);
}
})
});
};
};
Node 8+ has promisify built in.
See my other answer for a generator approach that can give results even faster.
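With Node 8+, the built-in util.promisify mentioned above looks like this (a minimal sketch):

const { promisify } = require('util');
const fs = require('fs');

const readdir = promisify(fs.readdir);
const stat = promisify(fs.stat);

readdir(__dirname).then(files => console.log(files));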
Another simple and helpful one:
const fs = require('fs');

function walkDir(root) {
const stat = fs.statSync(root);
if (stat.isDirectory()) {
const dirs = fs.readdirSync(root).filter(item => !item.startsWith('.'));
let results = dirs.map(sub => walkDir(`${root}/${sub}`));
return [].concat(...results);
} else {
return root;
}
}
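A usage sketch (note that it returns a single path, not an array, when called on a plain file):

console.log(walkDir(process.cwd()));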
This is how I use the nodejs fs.readdir function to recursively search a directory.
const fs = require('fs');
const mime = require('mime-types');
const readdirRecursivePromise = path => {
return new Promise((resolve, reject) => {
fs.readdir(path, (err, directoriesPaths) => {
if (err) {
reject(err);
} else {
if (directoriesPaths.indexOf('.DS_Store') != -1) {
directoriesPaths.splice(directoriesPaths.indexOf('.DS_Store'), 1);
}
directoriesPaths.forEach((e, i) => {
directoriesPaths[i] = statPromise(`${path}/${e}`);
});
Promise.all(directoriesPaths).then(out => {
resolve(out);
}).catch(err => {
reject(err);
});
}
});
});
};
const statPromise = path => {
return new Promise((resolve, reject) => {
fs.stat(path, (err, stats) => {
if (err) {
reject(err);
} else {
if (stats.isDirectory()) {
readdirRecursivePromise(path).then(out => {
resolve(out);
}).catch(err => {
reject(err);
});
} else if (stats.isFile()) {
resolve({
'path': path,
'type': mime.lookup(path)
});
} else {
reject(`Error parsing path: ${path}`);
}
}
});
});
};
const flatten = (arr, result = []) => {
for (let i = 0, length = arr.length; i < length; i++) {
const value = arr[i];
if (Array.isArray(value)) {
flatten(value, result);
} else {
result.push(value);
}
}
return result;
};
Let's say you have a folder called '/database' in your node project root. Once this promise resolves, it should spit out an array of every file under '/database':
readdirRecursivePromise('database').then(out => {
console.log(flatten(out));
}).catch(err => {
console.log(err);
});
This one uses the largest amount of new features available in Node 8, including Promises, util/promisify, destructuring, async-await, map+reduce and more, making your co-workers scratch their heads as they try to figure out what is going on.
Node 8+
No external dependencies.
const { promisify } = require('util');
const { resolve } = require('path');
const fs = require('fs');
const readdir = promisify(fs.readdir);
const stat = promisify(fs.stat);
async function getFiles(dir) {
const subdirs = await readdir(dir);
const files = await Promise.all(subdirs.map(async (subdir) => {
const res = resolve(dir, subdir);
return (await stat(res)).isDirectory() ? getFiles(res) : res;
}));
return files.reduce((a, f) => a.concat(f), []);
}
Usage
getFiles(__dirname)
.then(files => console.log(files))
.catch(e => console.error(e));
Node 10.10+
Updated for Node 10+ with even more whizbang:
const { resolve } = require('path');
const { readdir } = require('fs').promises;
async function getFiles(dir) {
const dirents = await readdir(dir, { withFileTypes: true });
const files = await Promise.all(dirents.map((dirent) => {
const res = resolve(dir, dirent.name);
return dirent.isDirectory() ? getFiles(res) : res;
}));
return Array.prototype.concat(...files);
}
Note that, as of Node 11.15.0, you can use files.flat() instead of Array.prototype.concat(...files) to flatten the files array.
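That is, the final line of the Node 10.10+ version above simply becomes:

return files.flat();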
Node 11+
If you want to blow everybody's head off, you can use the following version based on async iterators. In addition to being really cool, it also allows the consumer to pull out results one at a time, which makes it better suited for really large directories.
const { resolve } = require('path');
const { readdir } = require('fs').promises;
async function* getFiles(dir) {
const dirents = await readdir(dir, { withFileTypes: true });
for (const dirent of dirents) {
const res = resolve(dir, dirent.name);
if (dirent.isDirectory()) {
yield* getFiles(res);
} else {
yield res;
}
}
}
Usage has changed because the return type is now an async iterator instead of a promise:
;(async () => {
for await (const f of getFiles('.')) {
console.log(f);
}
})()
In case anybody is interested, I've written more about async iterators here: https://qwtel.com/posts/software/async-generators-in-the-wild/
Yet another answer, but this time using TypeScript:
/**
 * Recursively walk a directory asynchronously and obtain all file names (with full path).
 *
 * @param dir Folder name you want to recursively process
 * @param done Callback function, returns all files with full path.
 * @param filter Optional filter to specify which files to include,
 *   e.g. for json files: (f: string) => /.json$/.test(f)
 */
const walk = (
  dir: string,
  done: (err: Error | null, results?: string[]) => void,
  filter?: (f: string) => boolean
) => {
  let results: string[] = [];
  fs.readdir(dir, (err: Error, list: string[]) => {
    if (err) {
      return done(err);
    }
    let pending = list.length;
    if (!pending) {
      return done(null, results);
    }
    list.forEach((file: string) => {
      file = path.resolve(dir, file);
      fs.stat(file, (err2, stat) => {
        if (stat && stat.isDirectory()) {
          walk(file, (err3, res) => {
            if (res) {
              results = results.concat(res);
            }
            if (!--pending) {
              done(null, results);
            }
          }, filter);
        } else {
          if (typeof filter === 'undefined' || (filter && filter(file))) {
            results.push(file);
          }
          if (!--pending) {
            done(null, results);
          }
        }
      });
    });
  });
};
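A usage sketch, passing the optional .json filter mentioned in the doc comment:

walk(process.cwd(), (err, results) => {
  if (err) throw err;
  console.log(results);
}, (f: string) => /\.json$/.test(f));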
For Node 10.3+, here is a for-await solution:
#!/usr/bin/env node
const FS = require('fs');
const Util = require('util');
const readDir = Util.promisify(FS.readdir);
const Path = require('path');
async function* readDirR(path) {
const entries = await readDir(path,{withFileTypes:true});
for(let entry of entries) {
const fullPath = Path.join(path,entry.name);
if(entry.isDirectory()) {
yield* readDirR(fullPath);
} else {
yield fullPath;
}
}
}
async function main() {
const start = process.hrtime.bigint();
for await(const file of readDirR('/mnt/home/media/Unsorted')) {
console.log(file);
}
console.log((process.hrtime.bigint()-start)/1000000n);
}
main().catch(err => {
console.error(err);
});
The nice thing about this solution is that you can start processing the results right away; for example, reading all the files in my media directory takes 12 seconds, but done this way I can get the first result within a few milliseconds.
Async
const fs = require('fs')
const path = require('path')
const readdir = (p, done, a = [], i = 0) => fs.readdir(p, (e, d = []) =>
d.map(f => readdir(a[a.push(path.join(p, f)) - 1], () =>
++i == d.length && done(a), a)).length || done(a))
readdir(__dirname, console.log)
Sync
const fs = require('fs')
const path = require('path')
const readdirSync = (p, a = []) => {
if (fs.statSync(p).isDirectory())
fs.readdirSync(p).map(f => readdirSync(a[a.push(path.join(p, f)) - 1], a))
return a
}
console.log(readdirSync(__dirname))
Async readable
function readdir (currentPath, done, allFiles = [], i = 0) {
fs.readdir(currentPath, function (e, directoryFiles = []) {
if (!directoryFiles.length)
return done(allFiles)
directoryFiles.map(function (file) {
var joinedPath = path.join(currentPath, file)
allFiles.push(joinedPath)
readdir(joinedPath, function () {
i = i + 1
if (i == directoryFiles.length)
done(allFiles)}
, allFiles)
})
})
}
readdir(__dirname, console.log)
Note: both versions will follow symlinks (same as the original fs.readdir).
A Promise-based recursive solution in TypeScript, using Array.flat() to handle the nested returns.
import { resolve } from 'path'
import { Dirent } from 'fs'
import * as fs from 'fs'
function getFiles(root: string): Promise<string[]> {
return fs.promises
.readdir(root, { withFileTypes: true })
.then(dirents => {
const mapToPath = (r: string) => (dirent: Dirent): string => resolve(r, dirent.name)
const directoryPaths = dirents.filter(a => a.isDirectory()).map(mapToPath(root))
const filePaths = dirents.filter(a => a.isFile()).map(mapToPath(root))
return Promise.all<string>([
...directoryPaths.map(a => getFiles(a)).flat(),
...filePaths.map(a => Promise.resolve(a))
]).then(a => a.flat())
})
}
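A usage sketch:

getFiles(process.cwd()).then(files => console.log(files));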
For whoever wants a synchronous alternative to the accepted answer (I know I did):
var fs = require('fs');
var path = require('path');
var walk = function(dir) {
let results = [], err = null, list;
try {
list = fs.readdirSync(dir)
} catch(e) {
err = e.toString();
}
if (err) return err;
var i = 0;
return (function next() {
var file = list[i++];
if(!file) return results;
file = path.resolve(dir, file);
let stat = fs.statSync(file);
if (stat && stat.isDirectory()) {
let res = walk(file);
results = results.concat(res);
return next();
} else {
results.push(file);
return next();
}
})();
};
console.log(
walk("./")
)
Just a simple walk
let pending = [baseFolderPath]

function walk () {
  const current = pending.shift()
  // do stuff with `current` and push any subdirectories onto `pending`
  if (pending.length) walk()
}

walk()
A variation of qwtel's answer, in TypeScript:
import { resolve } from 'path';
import { readdir } from 'fs/promises';
async function* getFiles(dir: string): AsyncGenerator<string> {
const entries = await readdir(dir, { withFileTypes: true });
for (const entry of entries) {
const res = resolve(dir, entry.name);
if (entry.isDirectory()) {
yield* getFiles(res);
} else {
yield res;
}
}
}
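It is consumed the same way as the JavaScript version above:

(async () => {
  for await (const file of getFiles('.')) {
    console.log(file);
  }
})();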
There is a newer module called cup-readdir for quickly searching directories recursively. It uses async promises and outperforms many popular modules when handling deep directory structures.
It can return all files in an array and sort them by their attributes, but it lacks features like file filtering and following symlinked directories. It is useful for big projects that just want to get every file from a directory. Here is a link to their project homepage.
Here is a simple synchronous recursive solution:
const fs = require('fs')
const getFiles = path => {
const files = []
for (const file of fs.readdirSync(path)) {
const fullPath = path + '/' + file
if(fs.lstatSync(fullPath).isDirectory())
getFiles(fullPath).forEach(x => files.push(file + '/' + x))
else files.push(file)
}
return files
}
Usage:
const files = getFiles(process.cwd())
console.log(files)
You could write it asynchronously, but there is no need. Just make sure the input directory exists and is accessible.
Simple, async-promise based
const fs = require('fs/promises');
const getDirRecursive = async (dir) => {
try {
const items = await fs.readdir(dir);
let files = [];
for (const item of items) {
if ((await fs.lstat(`${dir}/${item}`)).isDirectory()) files = [...files, ...(await getDirRecursive(`${dir}/${item}`))];
else files.push({file: item, path: `${dir}/${item}`, parents: dir.split("/")});
}
return files;
} catch (e) {
return e
}
};
Usage: await getDirRecursive("./public");
A modern, promise-based recursive version of readdir:
const fs = require('fs');
const path = require('path');
const readDirRecursive = async (filePath) => {
const dir = await fs.promises.readdir(filePath);
const files = await Promise.all(dir.map(async relativePath => {
const absolutePath = path.join(filePath, relativePath);
const stat = await fs.promises.lstat(absolutePath);
return stat.isDirectory() ? readDirRecursive(absolutePath) : absolutePath;
}));
return files.flat();
}
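A usage sketch:

readDirRecursive(process.cwd())
  .then(files => console.log(files))
  .catch(err => console.error(err));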
Short, modern and efficient:
import {readdir} from 'node:fs/promises'
import {join} from 'node:path'
const deepReadDir = async (dirPath) => await Promise.all(
(await readdir(dirPath, {withFileTypes: true})).map(async (dirent) => {
const path = join(dirPath, dirent.name)
return dirent.isDirectory() ? await deepReadDir(path) : path
}),
)
Special thanks to Function for the hint to use {withFileTypes: true}.
This automatically folds each nested path into a new nested array. For example, if:
await deepReadDir('src')
it returns something like this:
[
[
'src/client/api.js',
'src/client/http-constants.js',
'src/client/index.html',
'src/client/index.js',
[ 'src/client/res/favicon.ico' ],
'src/client/storage.js'
],
[ 'src/crypto/keygen.js' ],
'src/discover.js',
[
'src/mutations/createNewMutation.js',
'src/mutations/newAccount.js',
'src/mutations/transferCredit.js',
'src/mutations/updateApp.js'
],
[
'src/server/authentication.js',
'src/server/handlers.js',
'src/server/quick-response.js',
'src/server/server.js',
'src/server/static-resources.js'
],
[ 'src/util/prompt.js', 'src/util/safeWriteFile.js' ],
'src/util.js'
]
But you can easily flatten it, if you want:
(await deepReadDir('src')).flat(Number.POSITIVE_INFINITY)
[
'src/client/api.js',
'src/client/http-constants.js',
'src/client/index.html',
'src/client/index.js',
'src/client/res/favicon.ico',
'src/client/storage.js',
'src/crypto/keygen.js',
'src/discover.js',
'src/mutations/createNewMutation.js',
'src/mutations/newAccount.js',
'src/mutations/transferCredit.js',
'src/mutations/updateApp.js',
'src/server/authentication.js',
'src/server/handlers.js',
'src/server/quick-response.js',
'src/server/server.js',
'src/server/static-resources.js',
'src/util/prompt.js',
'src/util/safeWriteFile.js',
'src/util.js'
]
One more approach. I'm leaving it here; maybe it will be useful for someone in the future.
const fs = require("fs");
const { promisify } = require("util");
const p = require("path");
const readdir = promisify(fs.readdir);
async function getFiles(path) {
try {
const entries = await readdir(path, { withFileTypes: true });
const files = entries
.filter((file) => !file.isDirectory())
.map((file) => ({
path: `${path}/${file.name}`,
ext: p.extname(`${path}/${file.name}`),
pathDir: path,
}));
const folders = entries.filter((folder) => folder.isDirectory());
for (const folder of folders) {
files.push(...(await getFiles(`${path}/${folder.name}`)));
}
return files;
} catch (error) {
return error;
}
}
Usage:
getFiles(rootFolderPath)
.then()
.catch()
Vanilla ES6 + async/await + small & readable
I didn't find the answer I was looking for in this post; there were a few similar elements spread across different answers, but I just wanted something simple and readable.
In case it helps anyone in the future (i.e. myself in a couple of months), this is what I ended up using:
const { readdir } = require('fs/promises');
const { join } = require('path');
const readdirRecursive = async dir => {
const files = await readdir( dir, { withFileTypes: true } );
const paths = files.map( async file => {
const path = join( dir, file.name );
if ( file.isDirectory() ) return await readdirRecursive( path );
return path;
} );
return ( await Promise.all( paths ) ).flat( Infinity );
}
module.exports = {
readdirRecursive,
}
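And a usage sketch (the require path is hypothetical):

const { readdirRecursive } = require('./readdirRecursive'); // hypothetical path

readdirRecursive(process.cwd())
  .then(files => console.log(files))
  .catch(err => console.error(err));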