
[RFC,v3,00/61] Introduce parallel fsck to e2fsck pass1

Message ID 20201118153947.3394530-1-saranyamohan@google.com
Series: Introduce parallel fsck to e2fsck pass1

Message

Saranya Muruganandam Nov. 18, 2020, 3:38 p.m. UTC
It has become common for a single disk to exceed the TiB range (e.g. 16 TiB
on one disk). With this trend, a single filesystem keeps getting larger and
can easily reach the PiB range on a LUN-based system.

Journaling filesystems like ext4 still need to be taken offline from time to
time for regular checking and repair. The problem is that e2fsck still does
this with a single thread, which is challenging at scale for two reasons:

1) even with readahead, I/O speed is limited to a few tens of MiB per second.
2) only one CPU core is utilized.

Multi-threading every phase of e2fsck would be challenging, but as a first
step we try it for the most time-consuming phase, pass1, which according to
our benchmarking accounts for about 80% of total e2fsck run time.

Pass1 scans all valid inodes of the filesystem and checks them one by one.
The idea of this patchset is to split those inodes across different threads
so they can be checked concurrently, then merge the per-thread inode and
extent information after the threads finish.

To reduce complexity and make the code less error-prone, fixes are still
serialized: most of the time a filesystem has only minor errors, so what
matters most is that reading and checking happen in parallel.
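
Here is a minimal, self-contained sketch of that split/scan/merge pattern,
with the block groups divided into contiguous per-thread ranges (hypothetical
names and a stand-in scan loop, not the actual e2fsck code):

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4
#define NUM_GROUPS  16

struct thread_ctx {
	int start_group;	/* first group this thread scans */
	int end_group;		/* one past the last group */
	long inodes_scanned;	/* private counter, merged later */
};

static void *scan_groups(void *arg)
{
	struct thread_ctx *ctx = arg;
	int g;

	/* Parallel phase: read and check inodes, but do not fix. */
	for (g = ctx->start_group; g < ctx->end_group; g++)
		ctx->inodes_scanned += 8192;	/* stand-in for real scanning */
	return NULL;
}

int main(void)
{
	pthread_t threads[NUM_THREADS];
	struct thread_ctx ctx[NUM_THREADS];
	long total = 0;
	int i, per_thread = NUM_GROUPS / NUM_THREADS;

	/* Split the block groups into contiguous per-thread ranges. */
	for (i = 0; i < NUM_THREADS; i++) {
		ctx[i].start_group = i * per_thread;
		ctx[i].end_group = (i == NUM_THREADS - 1) ?
			NUM_GROUPS : (i + 1) * per_thread;
		ctx[i].inodes_scanned = 0;
		pthread_create(&threads[i], NULL, scan_groups, &ctx[i]);
	}

	/* Merge phase: wait for all threads, then combine private state. */
	for (i = 0; i < NUM_THREADS; i++) {
		pthread_join(threads[i], NULL);
		total += ctx[i].inodes_scanned;
	}

	/* Fix phase: any repairs would run here, single-threaded. */
	printf("scanned %ld inodes across %d threads\n", total, NUM_THREADS);
	return 0;
}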

Here are benchmark results from our Lustre filesystem, a 1.2 PiB ext4-based
OSD filesystem:

Storage server: DDN SFA18KE
  DCR (DeClustered RAID) with 162 x HGST 10TB NL-SAS drives
Test server: a virtual machine running on the SFA18KE
  8 x CPU cores (Xeon(R) Gold 6140)
  150 GB memory
  CentOS 7.7 (Lustre-patched kernel)

Created 600 million 32KB files.

		Without patch	With patch (thr=64)
pass1:		13079.66 s	  488.57 s
Total:		15673.33 s	 3188.42 s

That is nearly a 5x reduction in total run time, which is very encouraging.

I've tested the whole patch series using e2fsck's own 'make test'. With the
default number of threads manually set to 4, almost all of the test suite
still passes; the failing cases are:

f_h_badroot f_multithread f_multithread_logfile f_multithread_no f_multithread_ok

f_h_badroot fails because its check output comes out of order, and the others
fail because of the extra log output from multiple threads.

Other tests I've done to verify the series:
1) filled a filesystem with different features enabled (encryption, shared
   xattrs);
2) copied the resulting image out using e2image;
3) used e2fuzz to randomly corrupt the image, keeping a second copy of the
   corrupted image;
4) ran e2fsck on the corrupted images, single-threaded (without the patches)
   and multi-threaded, respectively;
5) used valgrind to check for memory leaks;
6) compared the single-thread and multi-thread repair results by mounting the
   two images and comparing the directories and files (sizes and names match).

Any review or comments are welcome!

Thank you very much!

Changelog v2->v3:
1) Parallelize rw_bitmaps
2) Add a configuration option to turn off parallel fsck (default=on)
3) Add the f_multithread_ok test
4) Annotate fields in e2fsck_struct instead of reshuffling them
5) Fix memory leaks
6) Various other fixes

Andreas Dilger (2):
  e2fsck: fix f_multithread_ok test
  e2fsck: misc cleanups for pfsck

Li Xi (18):
  e2fsck: add -m option for multithread
  e2fsck: copy context when using multi-thread fsck
  e2fsck: copy fs when using multi-thread fsck
  e2fsck: add assert when copying context
  e2fsck: copy bitmaps when copying context
  e2fsck: open io-channel when copying fs
  e2fsck: create logs for mult-threads
  e2fsck: optionally configure one pfsck thread
  e2fsck: add start/end group for thread
  e2fsck: split groups to different threads
  e2fsck: print thread log properly
  e2fsck: do not change global variables
  e2fsck: optimize the inserting of dir_info_db
  e2fsck: merge dir_info after thread finishes
  e2fsck: merge icounts after thread finishes
  e2fsck: merge dblist after thread finishes
  e2fsck: add debug codes for multiple threads
  e2fsck: merge fs flags when threads finish

Saranya Muruganandam (2):
  e2fsck: propagate number of threads
  e2fsck: Annotating fields in e2fsck_struct

Wang Shilong (39):
  e2fsck: clear icache when using multi-thread fsck
  e2fsck: copy badblocks when copying fs
  e2fsck: merge bitmaps after thread completes
  e2fsck: rbtree bitmap for dir
  e2fsck: merge badblocks after thread finishes
  e2fsck: merge counts after threads finish
  e2fsck: merge dx_dir_info after threads finish
  e2fsck: merge dirs_to_hash when threads finish
  e2fsck: merge context flags properly
  e2fsck: merge quota context after threads finish
  e2fsck: serialize fix operations
  e2fsck: move some fixes out of parallel pthreads
  e2fsck: split and merge invalid bitmaps
  e2fsck: merge EA blocks properly
  e2fsck: kickoff mutex lock for block found map
  e2fsck: allow admin specify number of threads
  e2fsck: adjust number of threads
  e2fsck: fix readahead for pfsck of pass1
  e2fsck: merge options after threads finish
  e2fsck: reset lost_and_found after threads finish
  e2fsck: merge extent depth count after threads finish
  e2fsck: simplify e2fsck context merging codes
  e2fsck: set E2F_FLAG_ALLOC_OK after threads
  e2fsck: wait fix thread finish before checking
  e2fsck: cleanup e2fsck_pass1_thread_join()
  e2fsck: avoid too much memory allocation for pfsck
  e2fsck: make default smallest RA size to 1M
  ext2fs: parallel bitmap loading
  e2fsck: update mmp block in one thread
  e2fsck: reset @inodes_to_rebuild if restart
  e2fsck: fix build for make rpm
  e2fsck: move ext2fs_get_avg_group to rw_bitmaps.c
  configure: enable pfsck by default
  test: add pfsck test
  e2fsck: fix race in ext2fs_read_bitmaps()
  e2fsck: fix readahead for pass1 without pfsck
  e2fsck: fix memory leaks with pfsck enabled
  ext2fs: fix to set tail flags with pfsck enabled
  e2fsck: update mmp block race

 MCONFIG.in                              |    1 +
 configure                               |   90 +-
 configure.ac                            |   26 +
 e2fsck/Makefile.in                      |    9 +-
 e2fsck/dirinfo.c                        |  238 ++-
 e2fsck/dx_dirinfo.c                     |   64 +
 e2fsck/e2fsck.8.in                      |    8 +-
 e2fsck/e2fsck.c                         |   11 +
 e2fsck/e2fsck.h                         |  102 +-
 e2fsck/logfile.c                        |   13 +-
 e2fsck/pass1.c                          | 1766 ++++++++++++++++++++---
 e2fsck/problem.c                        |   11 +
 e2fsck/problem.h                        |    3 +
 e2fsck/readahead.c                      |    4 +
 e2fsck/unix.c                           |   59 +-
 e2fsck/util.c                           |  193 ++-
 lib/config.h.in                         |    3 +
 lib/ext2fs/badblocks.c                  |   85 +-
 lib/ext2fs/bitmaps.c                    |   10 +
 lib/ext2fs/bitops.h                     |    2 +
 lib/ext2fs/blkmap64_rb.c                |   65 +
 lib/ext2fs/bmap64.h                     |    4 +
 lib/ext2fs/dblist.c                     |   38 +
 lib/ext2fs/ext2_err.et.in               |    3 +
 lib/ext2fs/ext2_io.h                    |    2 +
 lib/ext2fs/ext2fs.h                     |   19 +-
 lib/ext2fs/ext2fsP.h                    |    1 -
 lib/ext2fs/gen_bitmap64.c               |   62 +
 lib/ext2fs/icount.c                     |  107 ++
 lib/ext2fs/openfs.c                     |   48 +-
 lib/ext2fs/rw_bitmaps.c                 |  301 +++-
 lib/ext2fs/undo_io.c                    |   19 +
 lib/ext2fs/unix_io.c                    |   24 +-
 lib/support/mkquota.c                   |   39 +
 lib/support/quotaio.h                   |    3 +
 tests/f_itable_collision/expect.1       |    3 -
 tests/f_multithread/expect.1            |   25 +
 tests/f_multithread/expect.2            |    7 +
 tests/f_multithread/image.gz            |    1 +
 tests/f_multithread/name                |    1 +
 tests/f_multithread/script              |    4 +
 tests/f_multithread_completion/expect.1 |    2 +
 tests/f_multithread_completion/expect.2 |   23 +
 tests/f_multithread_completion/image.gz |    1 +
 tests/f_multithread_completion/name     |    1 +
 tests/f_multithread_completion/script   |    4 +
 tests/f_multithread_logfile/expect.1    |   25 +
 tests/f_multithread_logfile/image.gz    |    1 +
 tests/f_multithread_logfile/name        |    1 +
 tests/f_multithread_logfile/script      |   32 +
 tests/f_multithread_no/expect.1         |   26 +
 tests/f_multithread_no/expect.2         |   23 +
 tests/f_multithread_no/image.gz         |    1 +
 tests/f_multithread_no/name             |    1 +
 tests/f_multithread_no/script           |    4 +
 tests/f_multithread_ok/expect.1         |    8 +
 tests/f_multithread_ok/image.gz         |  Bin 0 -> 796311 bytes
 tests/f_multithread_ok/name             |    1 +
 tests/f_multithread_ok/script           |   21 +
 tests/f_multithread_preen/expect.1      |   11 +
 tests/f_multithread_preen/expect.2      |   23 +
 tests/f_multithread_preen/image.gz      |    1 +
 tests/f_multithread_preen/name          |    1 +
 tests/f_multithread_preen/script        |    4 +
 tests/f_multithread_yes/expect.1        |    2 +
 tests/f_multithread_yes/expect.2        |   23 +
 tests/f_multithread_yes/image.gz        |    1 +
 tests/f_multithread_yes/name            |    1 +
 tests/f_multithread_yes/script          |    4 +
 tests/test_one.in                       |    8 +
 70 files changed, 3335 insertions(+), 393 deletions(-)
 create mode 100644 tests/f_multithread/expect.1
 create mode 100644 tests/f_multithread/expect.2
 create mode 120000 tests/f_multithread/image.gz
 create mode 100644 tests/f_multithread/name
 create mode 100644 tests/f_multithread/script
 create mode 100644 tests/f_multithread_completion/expect.1
 create mode 100644 tests/f_multithread_completion/expect.2
 create mode 120000 tests/f_multithread_completion/image.gz
 create mode 100644 tests/f_multithread_completion/name
 create mode 100644 tests/f_multithread_completion/script
 create mode 100644 tests/f_multithread_logfile/expect.1
 create mode 120000 tests/f_multithread_logfile/image.gz
 create mode 100644 tests/f_multithread_logfile/name
 create mode 100644 tests/f_multithread_logfile/script
 create mode 100644 tests/f_multithread_no/expect.1
 create mode 100644 tests/f_multithread_no/expect.2
 create mode 120000 tests/f_multithread_no/image.gz
 create mode 100644 tests/f_multithread_no/name
 create mode 100644 tests/f_multithread_no/script
 create mode 100644 tests/f_multithread_ok/expect.1
 create mode 100644 tests/f_multithread_ok/image.gz
 create mode 100644 tests/f_multithread_ok/name
 create mode 100644 tests/f_multithread_ok/script
 create mode 100644 tests/f_multithread_preen/expect.1
 create mode 100644 tests/f_multithread_preen/expect.2
 create mode 120000 tests/f_multithread_preen/image.gz
 create mode 100644 tests/f_multithread_preen/name
 create mode 100644 tests/f_multithread_preen/script
 create mode 100644 tests/f_multithread_yes/expect.1
 create mode 100644 tests/f_multithread_yes/expect.2
 create mode 120000 tests/f_multithread_yes/image.gz
 create mode 100644 tests/f_multithread_yes/name
 create mode 100644 tests/f_multithread_yes/script

Comments

Theodore Ts'o Nov. 19, 2020, 3:58 p.m. UTC | #1
On Wed, Nov 18, 2020 at 07:38:46AM -0800, Saranya Muruganandam wrote:
> It has become common for a single disk to exceed the TiB range (e.g. 16 TiB
> on one disk). With this trend, a single filesystem keeps getting larger and
> can easily reach the PiB range on a LUN-based system.
> 
> Journaling filesystems like ext4 still need to be taken offline from time to
> time for regular checking and repair. The problem is that e2fsck still does
> this with a single thread, which is challenging at scale for two reasons:
> 
> 1) even with readahead, I/O speed is limited to a few tens of MiB per second.
> 2) only one CPU core is utilized.
> 
> Multi-threading every phase of e2fsck would be challenging, but as a first
> step we try it for the most time-consuming phase, pass1, which according to
> our benchmarking accounts for about 80% of total e2fsck run time.
> 
> Pass1 scans all valid inodes of the filesystem and checks them one by one.
> The idea of this patchset is to split those inodes across different threads
> so they can be checked concurrently, then merge the per-thread inode and
> extent information after the threads finish.
> 
> To reduce complexity and make the code less error-prone, fixes are still
> serialized: most of the time a filesystem has only minor errors, so what
> matters most is that reading and checking happen in parallel.
> 
> Here are benchmark results from our Lustre filesystem, a 1.2 PiB ext4-based
> OSD filesystem:
> 
> Storage server: DDN SFA18KE
>   DCR (DeClustered RAID) with 162 x HGST 10TB NL-SAS drives
> Test server: a virtual machine running on the SFA18KE
>   8 x CPU cores (Xeon(R) Gold 6140)
>   150 GB memory
>   CentOS 7.7 (Lustre-patched kernel)

This introductory text presumably came from the original patch series, hence
"our Lustre filesystem". It's probably better to state explicitly who ran
which benchmarks. And Saranya, you might want to include your own benchmark
results as well, since that will make it easier for people to replicate them.

> I've tested the whole patch series using e2fsck's own 'make test'. With the
> default number of threads manually set to 4, almost all of the test suite
> still passes; the failing cases are:
> 
> f_h_badroot f_multithread f_multithread_logfile f_multithread_no f_multithread_ok
> 
> f_h_badroot fails because its check output comes out of order, and the others
> fail because of the extra log output from multiple threads.

And this "I" is Saranya, yes?

> Andreas Dilger (2):
>   e2fsck: fix f_multithread_ok test
>   e2fsck: misc cleanups for pfsck
> 
> Li Xi (18):
>   e2fsck: add -m option for multithread
>   e2fsck: copy context when using multi-thread fsck
>   e2fsck: copy fs when using multi-thread fsck
>   e2fsck: add assert when copying context
>   e2fsck: copy bitmaps when copying context
>   e2fsck: open io-channel when copying fs
>   e2fsck: create logs for mult-threads
>   e2fsck: optionally configure one pfsck thread
>   e2fsck: add start/end group for thread
>   e2fsck: split groups to different threads
>   e2fsck: print thread log properly
>   e2fsck: do not change global variables
>   e2fsck: optimize the inserting of dir_info_db
>   e2fsck: merge dir_info after thread finishes
>   e2fsck: merge icounts after thread finishes
>   e2fsck: merge dblist after thread finishes
>   e2fsck: add debug codes for multiple threads
>   e2fsck: merge fs flags when threads finish

Prefixing all of these patches with "e2fsck:" hides the fact that some of
them also change libext2fs. It's probably better to separate out the
libext2fs changes so we can pay special attention to preserving the ABI.
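
As an example of what ABI preservation looks like, here is an illustrative
sketch (hypothetical names, not the real libext2fs interfaces): the existing
exported function keeps its signature, and the new capability is exposed
through a new entry point instead:

#include <stdio.h>

/* New entry point added alongside the old one. */
int lib_read_bitmaps_threads(int num_threads)
{
	printf("reading bitmaps with %d thread(s)\n", num_threads);
	return 0;
}

/* Existing exported API: signature and behavior preserved, now a
 * thin wrapper around the new function. */
int lib_read_bitmaps(void)
{
	return lib_read_bitmaps_threads(1);
}

int main(void)
{
	return lib_read_bitmaps();
}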

I'll talk more about this in the individual patches.

						- Ted
Theodore Ts'o Nov. 23, 2020, 9:25 p.m. UTC | #2
On Wed, Nov 18, 2020 at 07:38:46AM -0800, Saranya Muruganandam wrote:
> I've tested the whole patch series using e2fsck's own 'make test'. With the
> default number of threads manually set to 4, almost all of the test suite
> still passes; the failing cases are:
> 
> f_h_badroot f_multithread f_multithread_logfile f_multithread_no f_multithread_ok
> 
> f_h_badroot fails because its check output comes out of order, and the others
> fail because of the extra log output from multiple threads.

I just tried the full series, and I'm only seeing one test failure.
Unfortunately, it's f_multithread, and it's a double-free crash:

...
Pass 5: Checking group summary information
Multiple threads triggered to read bitmaps
double free or corruption (!prev)
Signal (6) SIGABRT si_code=SI_TKILL
../e2fsck/e2fsck(+0x45fab)[0x556589911fab]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x14140)[0x7fe52ec34140]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0x141)[0x7fe52ea96c41]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x123)[0x7fe52ea80537]
/lib/x86_64-linux-gnu/libc.so.6(+0x7e6c8)[0x7fe52ead96c8]
/lib/x86_64-linux-gnu/libc.so.6(+0x859ba)[0x7fe52eae09ba]
/lib/x86_64-linux-gnu/libc.so.6(+0x86ffc)[0x7fe52eae1ffc]
../lib/libext2fs.so.2(ext2fs_free_mem+0x23)[0x7fe52ed1091a]
../lib/libext2fs.so.2(+0x47329)[0x7fe52ed1f329]
../lib/libext2fs.so.2(+0x475b4)[0x7fe52ed1f5b4]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x8ea7)[0x7fe52ec28ea7]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7fe52eb58d4f]
Exit status is 8

Here's the contents of f_multithread_ok.1.log after running
"./test_script --valgrind f_multithread_ok" in the tests directory:

Pass 1: Checking inodes, blocks, and sizes
[Thread 0] Scan group range [0, 1)
[Thread 1] Scan group range [1, 2)
[Thread 2] Scan group range [2, 3)
[Thread 3] Scan group range [3, 4)
[Thread 2] Scanned group range [2, 3), inodes 8192
[Thread 1] Scanned group range [1, 2), inodes 8192
[Thread 3] Scanned group range [3, 4), inodes 8192
[Thread 0] Scanned group range [0, 1), inodes 8192
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Multiple threads triggered to read bitmaps
==182288== Thread 2:
==182288== Conditional jump or move depends on uninitialised value(s)
==182288==    at 0x488E31B: read_bitmaps_range_start (rw_bitmaps.c:437)
==182288==    by 0x488E5B3: read_bitmaps_thread (rw_bitmaps.c:532)
==182288==    by 0x4965EA6: start_thread (pthread_create.c:477)
==182288==    by 0x4A7CD4E: clone (clone.S:95)
==182288== 
==182288== Conditional jump or move depends on uninitialised value(s)
==182288==    at 0x4839961: free (vg_replace_malloc.c:538)
==182288==    by 0x487F919: ext2fs_free_mem (ext2fs.h:1891)
==182288==    by 0x488E328: read_bitmaps_range_start (rw_bitmaps.c:438)
==182288==    by 0x488E5B3: read_bitmaps_thread (rw_bitmaps.c:532)
==182288==    by 0x4965EA6: start_thread (pthread_create.c:477)
==182288==    by 0x4A7CD4E: clone (clone.S:95)
==182288== 
==182288== Invalid free() / delete / delete[] / realloc()
==182288==    at 0x48399AB: free (vg_replace_malloc.c:538)
==182288==    by 0x487F919: ext2fs_free_mem (ext2fs.h:1891)
==182288==    by 0x488E328: read_bitmaps_range_start (rw_bitmaps.c:438)
==182288==    by 0x488E5B3: read_bitmaps_thread (rw_bitmaps.c:532)
==182288==    by 0x4965EA6: start_thread (pthread_create.c:477)
==182288==    by 0x4A7CD4E: clone (clone.S:95)
==182288==  Address 0x4b46100 is 0 bytes inside a block of size 3,144 free'd
==182288==    at 0x48399AB: free (vg_replace_malloc.c:538)
==182288==    by 0x487F919: ext2fs_free_mem (ext2fs.h:1891)
==182288==    by 0x488E328: read_bitmaps_range_start (rw_bitmaps.c:438)
==182288==    by 0x488E5B3: read_bitmaps_thread (rw_bitmaps.c:532)
==182288==    by 0x4965EA6: start_thread (pthread_create.c:477)
==182288==    by 0x4A7CD4E: clone (clone.S:95)
==182288==  Block was alloc'd at
==182288==    at 0x483877F: malloc (vg_replace_malloc.c:307)
==182288==    by 0x487F7C9: ext2fs_get_mem (ext2fs.h:1847)
==182288==    by 0x11EE47: e2fsck_allocate_context (e2fsck.c:27)
==182288==    by 0x11B228: PRS (unix.c:829)
==182288==    by 0x11CD55: main (unix.c:1465)
...
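
For reference, here is a minimal sketch of the failure class this trace
points at (hypothetical names, not the actual rw_bitmaps.c code): a free
guarded by an uninitialized flag in a shallow-copied per-thread context, so
the branch reads an uninitialized value and two threads can end up freeing
the same block:

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

struct ctx {
	char *buf;	/* shared pointer, shallow-copied per thread */
	int  owns_buf;	/* never initialized: valgrind's complaint */
};

static void *worker(void *arg)
{
	struct ctx *c = arg;

	/* "Conditional jump or move depends on uninitialised value(s)"... */
	if (c->owns_buf)
		free(c->buf);	/* ...and the second thread to get here
				 * triggers the double free. */
	return NULL;
}

int main(void)
{
	struct ctx shared, copies[2];
	pthread_t t[2];
	int i;

	shared.buf = malloc(3144);
	/* shared.owns_buf is deliberately left uninitialized */

	for (i = 0; i < 2; i++) {
		memcpy(&copies[i], &shared, sizeof(shared)); /* shallow copy */
		pthread_create(&t[i], NULL, worker, &copies[i]);
	}
	for (i = 0; i < 2; i++)
		pthread_join(t[i], NULL);
	return 0;
}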

						- Ted