Does it matter how many files I keep in a single directory? If so, how many files in a directory is too many, and what are the impacts of having too many files? (This is on a Linux server.)

Background: I have a photo album website, and every image uploaded is renamed to an 8-hex-digit id (say, a58f375c.jpg). This is to avoid filename conflicts (if lots of "IMG0001.JPG" files are uploaded, for example). The original filename and any useful metadata are stored in a database. Right now, I have somewhere around 1500 files in the images directory. This makes listing the files in the directory (through an FTP or SSH client) take a few seconds. But I can't see that it has any effect other than that. In particular, there doesn't seem to be any impact on how quickly an image file is served to the user.
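For illustration, generating such an 8-hex-digit id at upload time could look like the sketch below. The site's actual id scheme isn't described, so drawing the id from /dev/urandom here is an assumption:

```shell
# Sketch (assumption): derive a random 8-hex-digit id for an upload,
# e.g. a58f375c.jpg; the original filename would go into the database.
id=$(head -c4 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "$id.jpg"
```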

I've thought about reducing the number of images per directory by creating 16 subdirectories: 0-9 and a-f. Then I'd move the images into the subdirectories based on the first hex digit of the filename. But I'm not sure there's any reason to do so except for the occasional directory listing through FTP/SSH.
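As a sketch of that proposed scheme (the function name is illustrative, not from the original post):

```shell
# Sketch of the proposed layout: 16 buckets named 0-9, a-f,
# chosen by the first hex digit of the image id.
shard_path () {
    f=$1                        # e.g. a58f375c.jpg
    d=$(printf '%.1s' "$f")     # first hex digit -> bucket name
    echo "$d/$f"
}

shard_path a58f375c.jpg         # -> a/a58f375c.jpg
```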


Current answer

A small shell function that turns the part of a filename before a given delimiter into one-character-deep subdirectory levels:

(g.m. - rip)

# ff DELIMITER FILENAME
# Prints FILENAME prefixed with a path built from the characters
# preceding DELIMITER, one directory level per character.
function ff () {
    d=$1; f=$2
    p=$( echo "$f" | sed "s/$d.*//; s,\(.\),&/,g; s,/$,," )
    echo "$p/$f"
}


ff _D_   09748abcGHJ_D_my_tagged_doc.json

0/9/7/4/8/a/b/c/G/H/J/09748abcGHJ_D_my_tagged_doc.json


ff -   gadsf12-my_car.json 

g/a/d/s/f/1/2/gadsf12-my_car.json

And this:

ff _D_   0123456_D_my_tagged_doc.json

0/1/2/3/4/5/6/0123456_D_my_tagged_doc.json



ff .._D_   0123456_D_my_tagged_doc.json

0/1/2/3/4/0123456_D_my_tagged_doc.json

Enjoy!

Other answers

I appreciate this doesn't entirely answer your question of how many is too many, but an idea for solving the long-term problem is that, in addition to storing the original file metadata, you also store which folder on disk it lives in - normalize that piece of metadata out. Once a folder grows beyond whatever limit you're comfortable with, for performance, aesthetic, or whatever reason, you just create a second folder and start dropping files there...

For what it's worth, I just created a directory on an ext4 file system with 1,000,000 files in it, then randomly accessed those files through a web server. I didn't notice any premium on accessing those files over (say) a directory with only 10 files in it.

This is radically different from my experience doing the same on ntfs a few years back.

Most of the answers above fail to point out that there is no "one size fits all" answer to the original question.

In today's environment we have a large conglomerate of different hardware and software -- some is 32 bit, some is 64 bit, some is cutting edge and some is tried and true - reliable and never changing. Added to that is a variety of older and newer hardware, older and newer OSes, different vendors (Windows, Unixes, Apple, etc.) and a myriad of utilities and servers that go along. As hardware has improved and software is converted to 64 bit compatibility, there has necessarily been considerable delay in getting all the pieces of this very large and complex world to play nicely with the rapid pace of changes.

IMHO there is no single way to solve the problem. The solution is to research the possibilities and then, by trial and error, find what works best for your particular needs. Each user must determine what works for their system rather than use a cookie-cutter approach.

I for example have a media server with a few very large files. The result is only about 400 files filling a 3 TB drive. Only 1% of the inodes are used but 95% of the total space is used. Someone else, with a lot of smaller files, may run out of inodes before they come near to filling the space. (As a rule of thumb, on ext4 filesystems one inode is used for each file or directory.) While the total number of files that may be contained within a directory is theoretically nearly infinite, in practice the overall usage pattern determines the realistic limits, not just the filesystem's capabilities.
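The inode-versus-space trade-off described above can be checked directly on Linux with `df`:

```shell
# Compare inode usage ("IUse%") with block usage ("Use%") for the
# filesystem holding the current directory; a tree of many tiny
# files can exhaust inodes long before the disk space runs out.
df -i .
df -h .
```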

I hope all the different answers above promote thought and problem solving rather than present an insurmountable barrier to progress.

That really depends on the filesystem in use, and also on some of its flags.

For example, ext3 can hold many thousands of files; but after a couple of thousand it used to get very slow - mostly when listing a directory, but also when opening a single file. A few years ago it gained the "htree" option, which dramatically shortened the time needed to get an inode given a filename.

Personally, I use subdirectories to keep most levels under a thousand or so items. In your case, I'd create 256 directories, using the two last hex digits of the ID. Use the last digits, not the first, so the load is balanced.
