Max files per directory in S3

If I have a million images, is it better to store them in some folder/subfolder hierarchy, or to put them straight into a bucket (without any folders)?

Will dumping all the images into a single hierarchy-less bucket slow down LIST operations?

Is there significant overhead in creating folders and subfolders on the fly and setting their ACLs (programmatically speaking)?

51865 views

I use a directory structure with a root and then at least one subdirectory. I often use "document import date" as the directory under the root, which can make managing backups a little easier. Whatever file system you are using, you're bound to hit a file count limit (a practical if not a physical limit) eventually. You might think about supporting multiple roots as well.

S3 doesn't have hierarchical namespaces. Each bucket simply contains a number of mappings from key to object (along with associated metadata, ACLs and so on).

Even though your object's key might contain a '/', S3 treats the path as a plain string and puts all objects in a flat namespace.
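This flat-namespace behaviour can be sketched in plain Python: a bucket is just a key-to-object map, and "folders" only exist because LIST requests can group keys by a delimiter. The `list_common_prefixes` helper below is an illustrative stand-in for what S3's `ListObjectsV2` does with its `prefix` and `delimiter` parameters, not an actual S3 API:

```python
# A bucket is just a flat key -> object map; '/' has no special meaning.
bucket = {
    "photos/2023/cat.jpg": b"...",
    "photos/2023/dog.jpg": b"...",
    "photos/2024/bird.jpg": b"...",
    "readme.txt": b"...",
}

def list_common_prefixes(keys, prefix="", delimiter="/"):
    """Mimic how a delimiter-based LIST derives 'folders': collect the
    distinct substrings up to the first delimiter after the prefix."""
    prefixes = set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            prefixes.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
    return sorted(prefixes)

print(list_common_prefixes(bucket))            # ['photos/']
print(list_common_prefixes(bucket, "photos/")) # ['photos/2023/', 'photos/2024/']
```

Nothing in the bucket changes when you "create a folder" — a folder is purely a view computed at list time, which is why there is no per-directory file limit to hit.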

In my experience, LIST operations do take (linearly) longer as object count increases, but this is probably a symptom of the increased I/O required on the Amazon servers, and down the wire to your client.
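Part of that linear cost is structural: S3's list operation returns at most 1,000 keys per page, so enumerating a bucket takes a number of round trips proportional to the object count. A back-of-the-envelope sketch (page size is the real S3 limit; the function itself is just arithmetic for illustration):

```python
import math

def list_requests_needed(object_count, page_size=1000):
    # ListObjectsV2 returns at most 1000 keys per response, so a full
    # enumeration needs ceil(n / 1000) paginated requests.
    return math.ceil(object_count / page_size)

print(list_requests_needed(1_000_000))  # 1000 round trips for a million objects
```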

However, lookup times do not seem to increase with object count - it's most probably some sort of O(1) hashtable implementation on their end - so having many objects in the same bucket should be just as performant as small buckets for normal usage (i.e. not LISTs).

As for ACLs, grants can be set on the bucket and on each individual object. As there is no hierarchy, those are your only two options. Obviously, setting as many grants as possible at the bucket level will massively reduce your admin headaches if you have millions of files, but remember you can only grant permissions, not revoke them, so the bucket-wide grants should be the maximal common subset of the ACLs of all its contents.
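That "maximal common subset" is just the intersection of the permission sets every object needs. A small sketch, using real S3 ACL permission names (`READ`, `WRITE`, `READ_ACP`) but hypothetical keys and requirements:

```python
# Hypothetical per-object permission requirements (illustrative keys).
object_acls = {
    "img/1.jpg": {"READ", "READ_ACP"},
    "img/2.jpg": {"READ"},
    "img/3.jpg": {"READ", "WRITE"},
}

# Bucket-wide grants can only add permissions, never revoke them, so the
# safe bucket-level grant is the intersection of every object's set.
bucket_grant = set.intersection(*object_acls.values())

# Each object then only needs an object-level ACL for its extras.
per_object_extra = {key: acl - bucket_grant for key, acl in object_acls.items()}

print(bucket_grant)                    # {'READ'}
print(per_object_extra["img/2.jpg"])   # set() -- no object ACL needed at all
```

Objects whose extras come out empty need no object-level ACL at all, which is exactly why pushing grants up to the bucket saves administration work.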

I'd recommend splitting into separate buckets for:

  • totally different content - having separate buckets for images, sound and other data makes for a more sane architecture
  • significantly different ACLs - if the choice is between one bucket where every object carries its own specific ACL, and two buckets with different bucket-level ACLs and no object-specific ACLs, take the two buckets.

The answer to the original question "Max files per directory in S3" is: unlimited. See also S3 limit to objects in a bucket.