Message ID: 20170609035943.26447-1-tytso@mit.edu
State: Accepted, archived
On Fri, Jun 09, 2017 at 11:55:56AM +0300, Artem Blagodarenko wrote:
> Hello Theodore,
>
> Is it good time to add large_dir feature to this configuration?

Andreas didn't include large_dir on the list of features that he
enabled; what's the status of use of that feature by the Lustre system
in production?

- Ted
On Jun 9, 2017, at 11:59 AM, Theodore Ts'o <tytso@mit.edu> wrote:
>
> On Fri, Jun 09, 2017 at 11:55:56AM +0300, Artem Blagodarenko wrote:
>> Hello Theodore,
>>
>> Is it good time to add large_dir feature to this configuration?
>
> Andreas didn't include large_dir on the list of features that he
> enabled; what's the status of use of that feature by Lustre system in
> production?

I didn't include large_dir because it isn't included in the upstream
kernels yet, and we haven't been using this in production due to lack
of e2fsck support (which Seagate has finally implemented, thank you).

By all means, I'm in favour of adding this feature to the testing matrix.

Cheers, Andreas
On Fri, Jun 09, 2017 at 03:00:56PM -0600, Andreas Dilger wrote:
>
> I didn't include large_dir because it isn't included in the upstream
> kernels yet, and we haven't been using this in production due to lack
> of e2fsck support (which Seagate has finally implemented, thank you).

Also, does Lustre use large_dir on the MDS server, or on the Lustre
data server?  Because I noticed that on the MDS server you're
apparently not using extents:

export EXT_MKFS_OPTIONS="-I 2048 -O ^64bit,mmp,uninit_bg,^extents,dir_nlink,...
                                                         ^^^^^^^^
Are you really using large_dir on a file system that is using indirect
block mapped files?

- Ted
On Jun 23, 2017, at 3:41 PM, Theodore Ts'o <tytso@mit.edu> wrote:
>
> On Fri, Jun 09, 2017 at 03:00:56PM -0600, Andreas Dilger wrote:
>>
>> I didn't include large_dir because it isn't included in the upstream
>> kernels yet, and we haven't been using this in production due to lack
>> of e2fsck support (which Seagate has finally implemented, thank you).
>
> Also, does Lustre use large_dir on the MDS server, or on the Lustre
> data server?  Because I noticed that on the MDS server you're
> apparently not using extents:
>
> export EXT_MKFS_OPTIONS="-I 2048 -O ^64bit,mmp,uninit_bg,^extents,dir_nlink,...
>                                                          ^^^^^^^^
> Are you really using large_dir on a file system that is using indirect
> block mapped files?

Like I wrote above, we haven't been using large_dir in production yet.
My plan was only to use it on the MDT, which is where the filesystem
namespace is located.  Indeed, we don't use extents on the MDT because:

a) ext4 MDTs are strictly less than 16TB in size (usually < 8TB, even
   with 4B inodes) because of using "-I 2048" to limit the space per
   inode, so they never need more than 2^32 blocks

b) the only files of any size on the MDT are directories or other log
   files, which are rarely created with contiguous block allocations,
   so extents usually take more space than indirect blocks (12 bytes vs 4).

I think Seagate was also considering it on their huge OSTs (which
always have extents enabled) since they may have large numbers of
objects that are currently referenced by a limited number of
directories (32 currently).  I think the OST case would be better
handled by creating multiple object subdirectories[*], since the
internal object directory layout is not externally visible.  If the
objects are grouped temporally into directories, then they avoid
consuming RAM when they are no longer actively in use, and directories
can be shrunk or deleted over time as they become empty.

This in turn would make it desirable to implement online directory
shrinking, as has been discussed many times in the past, but even
"e2fsck -fD" would be able to shrink the old object directories
offline as they become empty, rather than leaving one huge directory
with completely random leaf block access.

Cheers, Andreas

[*] based on the FID sequence number, like with DNE
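The 16TB figure in point (a) is just the 32-bit block-number limit times the 4KB block size; a quick shell sketch of that arithmetic (an illustration added here, not part of the original thread):

```shell
# With 4KB blocks, a 32-bit block number can address at most
# 2^32 * 4096 bytes = 16 TiB, so such an MDT never needs the 64bit feature.
blocks=$(( 1 << 32 ))      # largest block count with 32-bit block numbers
blocksize=4096             # bytes per block
bytes=$(( blocks * blocksize ))
echo "max addressable: $(( bytes >> 40 )) TiB"   # 2^40 bytes per TiB
```

An MDT built with "-I 2048" burns half of every 4KB block on inodes, which is why real deployments stay well under even this 16TB ceiling.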
On Fri, Jun 23, 2017 at 04:00:05PM -0600, Andreas Dilger wrote:
> > Are you really using large_dir on a file system that is using indirect
> > block mapped files?
>
> Like I wrote above, we haven't been using large_dir in production yet.
> My plan was only to use it on the MDT, which is where the filesystem
> namespace is located.  Indeed, we don't use extents on the MDT because:
> a) ext4 MDTs are strictly less than 16TB in size (usually < 8TB, even with
>    4B inodes) because of using "-I 2048" to limit the space per inode,
>    so they never need more than 2^32 blocks
> b) the only files of any size on the MDT are directories or other log
>    files, which are rarely created with contiguous block allocations,
>    so extents usually take more space than indirect blocks (12 bytes vs 4).

OK, I'll try adding large_dir to the lustre_mds configuration.  I'm
not sure we're actually going to end up creating a directory with 3
levels of htree nodes, though.  In order to force that we'll probably
need to create a new ext4-specific htree stress tester which creates a
large number of very long filenames, preferably while using a 1k block
file system.

- Ted
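The stress tester Ted describes could start along these lines; the name length, entry count, and target directory below are illustrative assumptions (a real test would create millions of entries in a directory on a 1k-block scratch filesystem):

```shell
#!/bin/sh
# Sketch: pack a directory with very long names so each 1k directory
# block holds only a few entries, pushing the htree to more levels sooner.
set -e
DIR=$(mktemp -d)        # stand-in for a directory on a 1k-block test fs
NAMELEN=200             # only ~4 entries this long fit per 1k dir block
COUNT=64                # a real stress test would use millions of entries
i=0
while [ "$i" -lt "$COUNT" ]; do
    name=$(printf "f%0${NAMELEN}d" "$i")   # zero-padded 201-char name
    : > "$DIR/$name"                       # create an empty file
    i=$(( i + 1 ))
done
n=$(ls "$DIR" | wc -l)
echo "created $(( n )) entries of length ${#name}"
rm -rf "$DIR"
```

Long names waste most of each directory block, so the htree fans out to extra index levels with far fewer entries than short names would need.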
diff --git a/kvm-xfstests/test-appliance/files/root/fs/ext4/cfg/lustre_mds b/kvm-xfstests/test-appliance/files/root/fs/ext4/cfg/lustre_mds
new file mode 100644
index 0000000..39bb382
--- /dev/null
+++ b/kvm-xfstests/test-appliance/files/root/fs/ext4/cfg/lustre_mds
@@ -0,0 +1,4 @@
+SIZE=small
+export EXT_MKFS_OPTIONS="-I 2048 -O ^64bit,mmp,uninit_bg,^extents,dir_nlink,quota,huge_file,flex_bg -E lazy_journal_init"
+export EXT_MOUNT_OPTIONS=""
+TESTNAME="Lustre MDS"
This tests the file system features used by the Lustre MDS that are
upstream.  (The dirdata feature is not currently in the upstream
kernel.)

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
---
 kvm-xfstests/test-appliance/files/root/fs/ext4/cfg/lustre_mds | 4 ++++
 1 file changed, 4 insertions(+)
 create mode 100644 kvm-xfstests/test-appliance/files/root/fs/ext4/cfg/lustre_mds
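Once merged, a config like this is selected by name; the usual xfstests-bld invocation would look something like the following (a sketch based on standard kvm-xfstests usage; the test group is an arbitrary example):

```shell
# Run the "auto" test group against the new lustre_mds ext4 config.
kvm-xfstests -c lustre_mds -g auto
```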