diff mbox series

Add goals of patch review and tips

Message ID 20230314114037.25581-1-rpalethorpe@suse.com
State Superseded
Headers show
Series Add goals of patch review and tips | expand

Commit Message

Richard Palethorpe March 14, 2023, 11:40 a.m. UTC
I see two options for patch review. Either we have a single senior
maintainer who does most of or it is distributed.

For now I think it needs to be distributed which is beyond the scope
of this commit.

In order to distribute it we need new contributors to review each
others' work at least for the first few revisions.

I think that anyone can review a patch if they put the work in to test
it and try to break it. Then understand why it is broken.

This commit states some ideas about how to do that, plus some tips for
more advanced patch review.

Signed-off-by: Richard Palethorpe <rpalethorpe@suse.com>
Cc: Cyril Hrubis <chrubis@suse.cz>
Cc: Andrea Cervesato <andrea.cervesato@suse.de>
Cc: Avinesh Kumar <akumar@suse.de>
Cc: Wei Gao <wegao@suse.com>
Cc: Petr Vorel <pvorel@suse.cz>
---
 doc/maintainer-patch-review-checklist.txt | 78 ++++++++++++++++++++++-
 1 file changed, 77 insertions(+), 1 deletion(-)

Comments

Cyril Hrubis March 14, 2023, 1:18 p.m. UTC | #1
Hi!
> I see two options for patch review. Either we have a single senior
> maintainer who does most of or it is distributed.
> 
> For now I think it needs to be distributed which is beyond the scope
> of this commit.
> 
> In order to distribute it we need new contributors to review each
> others' work at least for the first few revisions.
> 
> I think that anyone can review a patch if they put the work in to test
> it and try to break it. Then understand why it is broken.
> 
> This commit states some ideas about how to do that, plus some tips for
> more advanced patch review.
> 
> Signed-off-by: Richard Palethorpe <rpalethorpe@suse.com>
> Cc: Cyril Hrubis <chrubis@suse.cz>
> Cc: Andrea Cervesato <andrea.cervesato@suse.de>
> Cc: Avinesh Kumar <akumar@suse.de>
> Cc: Wei Gao <wegao@suse.com>
> Cc: Petr Vorel <pvorel@suse.cz>
> ---
>  doc/maintainer-patch-review-checklist.txt | 78 ++++++++++++++++++++++-
>  1 file changed, 77 insertions(+), 1 deletion(-)
> 
> diff --git a/doc/maintainer-patch-review-checklist.txt b/doc/maintainer-patch-review-checklist.txt
> index 706b0a516..be0cd0961 100644
> --- a/doc/maintainer-patch-review-checklist.txt
> +++ b/doc/maintainer-patch-review-checklist.txt
> @@ -1,4 +1,80 @@
> -# Maintainer Patch Review Checklist
> +# Patch Review
> +
> +Anyone can and should review patches. It's the only way to get good at
> +patch review and for the project to scale.
> +
> +## Goals of patch review
> +

Maybe start with:

1. Catch typos and obvious mistakes

Everyone does these and usually they are easy to spot for anyone but the
author.

> +1. Prevent false positive test results
> +2. Prevent false negative test results
> +3. Make future changes as easy as possible

I would say that number 3 maybe be a bit controversial, I've seen cases
where attempts to futureproof the code caused steep increase in the test
complexity. So maybe:

3. Keep the code as simple as possible as well as futureproof

Or something along the lines.

> +## How to find clear errors
> +
> +A clear error is one where there is unlikely to be any argument if you
> +provide evidence of it. Evidence being an error trace or logical proof
> +that an error will occur in a common situation.
> +
> +The following are examples and may not be appropriate for all tests.
> +
> +* Merge the patch. It should apply cleanly to master.
> +* Compile the patch with default and non-default configurations.
> +  - Use sanitizers e.g. undefined behaviour, address.
> +  - Compile on non-x86
> +  - Compile on x86 with -m32

Maybe note here that some tests trigger undefined behavior
intentionally, we do have a few tests that dereference NULL to trigger
crash, etc.

> +* Use `make check`
> +* Run effected tests in a VM
> +  - Use single vCPU
> +  - Use many vCPUs and enable NUMA
> +  - Restrict RAM to < 1GB.
> +* Run effected tests on an embedded device
> +* Run effected tests on non-x86 machine in general
> +* Run reproducers on a kernel where the bug is present
> +* Run tests with "-i0"
> +* Compare usage of system calls with man page descriptions
> +* Compare usage of system calls with kernel code
> +* Search the LTP library for existing helper functions
> +
> +## How to find subtle errors
> +
> +A subtle error is one where you can expect some argument because you
> +do not have clear evidence of an error. It is best to state these as
> +questions and not make assertions if possible.
> +
> +Although if it is a matter of style or "taste" then senior maintainers
> +can assert what is correct to avoid bike shedding.
> +
> +* Ask what happens if there is an error, could it be debugged just
> +  with the test output?
> +* Are we testing undefined behaviour?
> +  - Could future kernel behaviour change without "breaking userland"?
> +  - Does the kernel behave differently depending on hardware?
> +  - Does it behave differently depending kernel on configuration?
> +  - Does it behave differently depending on the compiler?
> +* Will it scale to tiny and huge systems?
> +  - What happens if there are 100+ CPUs?
> +  - What happens if each CPU core is very slow?
> +  - What happens if there are 2TB or RAM?
> +* Are we repeating a pattern that can be turned into a library function?
> +* Is a single test trying to do too much?
> +* Could multiple similar tests be merged?
> +* Race conditions
> +  - What happens if a process gets preempted?
> +  - Could checkpoints or fuzzsync by used instead?
> +  - Note, usually you can insert a sleep to prove a race condition
> +    exists however finding them is hard
> +* Is there a simpler way to achieve the same kernel coverage?
> +
> +## How to get patches merged
> +
> +Once you think a patch is good enough you should add your Reviewed-by
> +tags. This means you will get some credit for getting the patch
> +merged. Also some blame if there are problems.

Maybe we should mention the Tested-by: tag explicitly here as well.

> +In addition you can expect others to review your patches and add their
> +tags. This will speed up the process of getting your patches merged.
> +
> +## Maintainers Checklist
>  
>  Patchset should be tested locally and ideally also in maintainer's fork in
>  GitHub Actions on GitHub.
> -- 
> 2.39.2
>
Petr Vorel March 14, 2023, 5:54 p.m. UTC | #2
Hi Richie,

> I see two options for patch review. Either we have a single senior
> maintainer who does most of or it is distributed.

> For now I think it needs to be distributed which is beyond the scope
> of this commit.

> In order to distribute it we need new contributors to review each
> others' work at least for the first few revisions.

> I think that anyone can review a patch if they put the work in to test
> it and try to break it. Then understand why it is broken.

> This commit states some ideas about how to do that, plus some tips for
> more advanced patch review.

Very nice improvements, thanks!
I agree with points Cyril already raised.

Reviewed-by: Petr Vorel <pvorel@suse.cz>

> Signed-off-by: Richard Palethorpe <rpalethorpe@suse.com>
> Cc: Cyril Hrubis <chrubis@suse.cz>
> Cc: Andrea Cervesato <andrea.cervesato@suse.de>
> Cc: Avinesh Kumar <akumar@suse.de>
> Cc: Wei Gao <wegao@suse.com>
> Cc: Petr Vorel <pvorel@suse.cz>
> ---
>  doc/maintainer-patch-review-checklist.txt | 78 ++++++++++++++++++++++-
>  1 file changed, 77 insertions(+), 1 deletion(-)

> diff --git a/doc/maintainer-patch-review-checklist.txt b/doc/maintainer-patch-review-checklist.txt
> index 706b0a516..be0cd0961 100644
> --- a/doc/maintainer-patch-review-checklist.txt
> +++ b/doc/maintainer-patch-review-checklist.txt
> @@ -1,4 +1,80 @@
> -# Maintainer Patch Review Checklist
> +# Patch Review

I'd rename the page to patch-review.txt (can be done later).

> +
> +Anyone can and should review patches. It's the only way to get good at
> +patch review and for the project to scale.
> +
> +## Goals of patch review
> +
> +1. Prevent false positive test results
> +2. Prevent false negative test results
> +3. Make future changes as easy as possible
> +
> +## How to find clear errors
> +
> +A clear error is one where there is unlikely to be any argument if you
> +provide evidence of it. Evidence being an error trace or logical proof
> +that an error will occur in a common situation.
> +
> +The following are examples and may not be appropriate for all tests.
> +
> +* Merge the patch. It should apply cleanly to master.
> +* Compile the patch with default and non-default configurations.
very nit: you sometimes put dot at the end of list item, sometimes not.

> +  - Use sanitizers e.g. undefined behaviour, address.
> +  - Compile on non-x86
> +  - Compile on x86 with -m32
BTW: I suppose nobody bothers about 32bit arm or even other archs.
It's definitely out of scope in SUSE.

> +* Use `make check`
> +* Run effected tests in a VM
> +  - Use single vCPU
> +  - Use many vCPUs and enable NUMA
> +  - Restrict RAM to < 1GB.
> +* Run effected tests on an embedded device
> +* Run effected tests on non-x86 machine in general
Very nice list, which show how hard would be to do a proper testing
(not being run for most of the patches - it's found afterwards, but it's very
good you list it there).

> +* Run reproducers on a kernel where the bug is present
> +* Run tests with "-i0"
`-i0` (for better syntax).

I'd also mention -i100 (or even higher, e.g. -i1100 to catch errors like get
file descriptors exhausted due missing SAFE_CLOSE(fd)).

Also, both of these are already somehow mentioned at "New tests" section, I'd
remove it from there (enough to mention them just once).

> +* Compare usage of system calls with man page descriptions
> +* Compare usage of system calls with kernel code
> +* Search the LTP library for existing helper functions
> +
> +## How to find subtle errors
> +
> +A subtle error is one where you can expect some argument because you
> +do not have clear evidence of an error. It is best to state these as
> +questions and not make assertions if possible.
> +
> +Although if it is a matter of style or "taste" then senior maintainers
> +can assert what is correct to avoid bike shedding.
> +
> +* Ask what happens if there is an error, could it be debugged just
> +  with the test output?
> +* Are we testing undefined behaviour?
> +  - Could future kernel behaviour change without "breaking userland"?
> +  - Does the kernel behave differently depending on hardware?
> +  - Does it behave differently depending kernel on configuration?
> +  - Does it behave differently depending on the compiler?
> +* Will it scale to tiny and huge systems?
> +  - What happens if there are 100+ CPUs?
> +  - What happens if each CPU core is very slow?
> +  - What happens if there are 2TB or RAM?

Again, very good points, even it's hard to test all of these before.

> +* Are we repeating a pattern that can be turned into a library function?
> +* Is a single test trying to do too much?
> +* Could multiple similar tests be merged?
> +* Race conditions
> +  - What happens if a process gets preempted?
> +  - Could checkpoints or fuzzsync by used instead?
> +  - Note, usually you can insert a sleep to prove a race condition
> +    exists however finding them is hard
> +* Is there a simpler way to achieve the same kernel coverage?
> +
> +## How to get patches merged
> +
> +Once you think a patch is good enough you should add your Reviewed-by
> +tags. This means you will get some credit for getting the patch
> +merged. Also some blame if there are problems.
> +
> +In addition you can expect others to review your patches and add their
> +tags. This will speed up the process of getting your patches merged.
> +
> +## Maintainers Checklist

>  Patchset should be tested locally and ideally also in maintainer's fork in
>  GitHub Actions on GitHub.
I'd encourage people to enable GitHub Actions in their forks (I'm not sure how
many maintainers do this; best would be automation [1] [2], but nobody bothers
about CI and I'm sort of burn out driving it myself).

Kind regards,
Petr

[1] https://github.com/linux-test-project/ltp/issues/599
[2] https://github.com/linux-test-project/ltp/issues/600
Cyril Hrubis March 14, 2023, 6:16 p.m. UTC | #3
Hi!
> > +  - Use sanitizers e.g. undefined behaviour, address.
> > +  - Compile on non-x86
> > +  - Compile on x86 with -m32
> BTW: I suppose nobody bothers about 32bit arm or even other archs.
> It's definitely out of scope in SUSE.

As long as we have CONFIG_COMPAT enabled it's actually not out-of-scope.
Richard Palethorpe March 16, 2023, 10:18 a.m. UTC | #4
Hello,

Cyril Hrubis <chrubis@suse.cz> writes:

> Hi!
>> I see two options for patch review. Either we have a single senior
>> maintainer who does most of or it is distributed.
>> 
>> For now I think it needs to be distributed which is beyond the scope
>> of this commit.
>> 
>> In order to distribute it we need new contributors to review each
>> others' work at least for the first few revisions.
>> 
>> I think that anyone can review a patch if they put the work in to test
>> it and try to break it. Then understand why it is broken.
>> 
>> This commit states some ideas about how to do that, plus some tips for
>> more advanced patch review.
>> 
>> Signed-off-by: Richard Palethorpe <rpalethorpe@suse.com>
>> Cc: Cyril Hrubis <chrubis@suse.cz>
>> Cc: Andrea Cervesato <andrea.cervesato@suse.de>
>> Cc: Avinesh Kumar <akumar@suse.de>
>> Cc: Wei Gao <wegao@suse.com>
>> Cc: Petr Vorel <pvorel@suse.cz>
>> ---
>>  doc/maintainer-patch-review-checklist.txt | 78 ++++++++++++++++++++++-
>>  1 file changed, 77 insertions(+), 1 deletion(-)
>> 
>> diff --git a/doc/maintainer-patch-review-checklist.txt b/doc/maintainer-patch-review-checklist.txt
>> index 706b0a516..be0cd0961 100644
>> --- a/doc/maintainer-patch-review-checklist.txt
>> +++ b/doc/maintainer-patch-review-checklist.txt
>> @@ -1,4 +1,80 @@
>> -# Maintainer Patch Review Checklist
>> +# Patch Review
>> +
>> +Anyone can and should review patches. It's the only way to get good at
>> +patch review and for the project to scale.
>> +
>> +## Goals of patch review
>> +
>
> Maybe start with:
>
> 1. Catch typos and obvious mistakes
>
> Everyone does these and usually they are easy to spot for anyone but the
> author.
>
>> +1. Prevent false positive test results
>> +2. Prevent false negative test results
>> +3. Make future changes as easy as possible
>
> I would say that number 3 maybe be a bit controversial, I've seen cases
> where attempts to futureproof the code caused steep increase in the
> test
> complexity. So maybe:
>
> 3. Keep the code as simple as possible as well as futureproof

Perhaps just

3. Keep the code as simple as possibe, but no simpler

This is possibly paraphrasing Einstein:
https://quoteinvestigator.com/2011/05/13/einstein-simple/


NOTE: I think future proofing is actually very dangerous. What I
      probably meant was

      3. Keep the code as simple as possible, while maintaining optionality,
         but if there appears to be a disproportionate increase in complexity
         for an increase in optionality then simplicity takes priority because
         identifying relevant optionality is hard.

      but "optionality" does not have a nice dictionary definition. I guess
      you could substitute it with "freedom". In any case it's not something I
      would want to write in documentation. There is no easy way to
      express it.

>
> Or something along the lines.

>
>> +## How to find clear errors
>> +
>> +A clear error is one where there is unlikely to be any argument if you
>> +provide evidence of it. Evidence being an error trace or logical proof
>> +that an error will occur in a common situation.
>> +
>> +The following are examples and may not be appropriate for all tests.
>> +
>> +* Merge the patch. It should apply cleanly to master.
>> +* Compile the patch with default and non-default configurations.
>> +  - Use sanitizers e.g. undefined behaviour, address.
>> +  - Compile on non-x86
>> +  - Compile on x86 with -m32
>
> Maybe note here that some tests trigger undefined behavior
> intentionally, we do have a few tests that dereference NULL to trigger
> crash, etc.

Indeed the address sanitizer completely breaks a lot of tests and
produces a lot of noise.

>
>> +* Use `make check`
>> +* Run effected tests in a VM
>> +  - Use single vCPU
>> +  - Use many vCPUs and enable NUMA
>> +  - Restrict RAM to < 1GB.
>> +* Run effected tests on an embedded device
>> +* Run effected tests on non-x86 machine in general
>> +* Run reproducers on a kernel where the bug is present
>> +* Run tests with "-i0"
>> +* Compare usage of system calls with man page descriptions
>> +* Compare usage of system calls with kernel code
>> +* Search the LTP library for existing helper functions
>> +
>> +## How to find subtle errors
>> +
>> +A subtle error is one where you can expect some argument because you
>> +do not have clear evidence of an error. It is best to state these as
>> +questions and not make assertions if possible.
>> +
>> +Although if it is a matter of style or "taste" then senior maintainers
>> +can assert what is correct to avoid bike shedding.
>> +
>> +* Ask what happens if there is an error, could it be debugged just
>> +  with the test output?
>> +* Are we testing undefined behaviour?
>> +  - Could future kernel behaviour change without "breaking userland"?
>> +  - Does the kernel behave differently depending on hardware?
>> +  - Does it behave differently depending kernel on configuration?
>> +  - Does it behave differently depending on the compiler?
>> +* Will it scale to tiny and huge systems?
>> +  - What happens if there are 100+ CPUs?
>> +  - What happens if each CPU core is very slow?
>> +  - What happens if there are 2TB or RAM?
>> +* Are we repeating a pattern that can be turned into a library function?
>> +* Is a single test trying to do too much?
>> +* Could multiple similar tests be merged?
>> +* Race conditions
>> +  - What happens if a process gets preempted?
>> +  - Could checkpoints or fuzzsync by used instead?
>> +  - Note, usually you can insert a sleep to prove a race condition
>> +    exists however finding them is hard
>> +* Is there a simpler way to achieve the same kernel coverage?
>> +
>> +## How to get patches merged
>> +
>> +Once you think a patch is good enough you should add your Reviewed-by
>> +tags. This means you will get some credit for getting the patch
>> +merged. Also some blame if there are problems.
>
> Maybe we should mention the Tested-by: tag explicitly here as well.

I'm not sure how we interpret Tested-by when deciding to merge; does it
mean someone is happy for the test to be merged or not?

Should someone add both tags if they have reviewed and tested it?

>
>> +In addition you can expect others to review your patches and add their
>> +tags. This will speed up the process of getting your patches merged.
>> +
>> +## Maintainers Checklist
>>  
>>  Patchset should be tested locally and ideally also in maintainer's fork in
>>  GitHub Actions on GitHub.
>> -- 
>> 2.39.2
>>
Richard Palethorpe March 16, 2023, 10:51 a.m. UTC | #5
Hello,

Petr Vorel <pvorel@suse.cz> writes:

> Hi Richie,
>
>> I see two options for patch review. Either we have a single senior
>> maintainer who does most of or it is distributed.
>
>> For now I think it needs to be distributed which is beyond the scope
>> of this commit.
>
>> In order to distribute it we need new contributors to review each
>> others' work at least for the first few revisions.
>
>> I think that anyone can review a patch if they put the work in to test
>> it and try to break it. Then understand why it is broken.
>
>> This commit states some ideas about how to do that, plus some tips for
>> more advanced patch review.
>
> Very nice improvements, thanks!
> I agree with points Cyril already raised.
>
> Reviewed-by: Petr Vorel <pvorel@suse.cz>
>
>> Signed-off-by: Richard Palethorpe <rpalethorpe@suse.com>
>> Cc: Cyril Hrubis <chrubis@suse.cz>
>> Cc: Andrea Cervesato <andrea.cervesato@suse.de>
>> Cc: Avinesh Kumar <akumar@suse.de>
>> Cc: Wei Gao <wegao@suse.com>
>> Cc: Petr Vorel <pvorel@suse.cz>
>> ---
>>  doc/maintainer-patch-review-checklist.txt | 78 ++++++++++++++++++++++-
>>  1 file changed, 77 insertions(+), 1 deletion(-)
>
>> diff --git a/doc/maintainer-patch-review-checklist.txt b/doc/maintainer-patch-review-checklist.txt
>> index 706b0a516..be0cd0961 100644
>> --- a/doc/maintainer-patch-review-checklist.txt
>> +++ b/doc/maintainer-patch-review-checklist.txt
>> @@ -1,4 +1,80 @@
>> -# Maintainer Patch Review Checklist
>> +# Patch Review
>
> I'd rename the page to patch-review.txt (can be done later).
>
>> +
>> +Anyone can and should review patches. It's the only way to get good at
>> +patch review and for the project to scale.
>> +
>> +## Goals of patch review
>> +
>> +1. Prevent false positive test results
>> +2. Prevent false negative test results
>> +3. Make future changes as easy as possible
>> +
>> +## How to find clear errors
>> +
>> +A clear error is one where there is unlikely to be any argument if you
>> +provide evidence of it. Evidence being an error trace or logical proof
>> +that an error will occur in a common situation.
>> +
>> +The following are examples and may not be appropriate for all tests.
>> +
>> +* Merge the patch. It should apply cleanly to master.
>> +* Compile the patch with default and non-default configurations.
> very nit: you sometimes put dot at the end of list item, sometimes not.
>
>> +  - Use sanitizers e.g. undefined behaviour, address.
>> +  - Compile on non-x86
>> +  - Compile on x86 with -m32
> BTW: I suppose nobody bothers about 32bit arm or even other archs.
> It's definitely out of scope in SUSE.

Possibly it is in scope; 32bit makes a lot of sense on small cloud
VMs. It's also possible for ppc64 IIRC.

The only reason I mentioned x86 is because IIRC the flag is different on
some arches. These are just examples, but I could change the wording to
make it less specific?

>
>> +* Use `make check`
>> +* Run effected tests in a VM
>> +  - Use single vCPU
>> +  - Use many vCPUs and enable NUMA
>> +  - Restrict RAM to < 1GB.
>> +* Run effected tests on an embedded device
>> +* Run effected tests on non-x86 machine in general
> Very nice list, which show how hard would be to do a proper testing
> (not being run for most of the patches - it's found afterwards, but it's very
> good you list it there).
>
>> +* Run reproducers on a kernel where the bug is present
>> +* Run tests with "-i0"
> `-i0` (for better syntax).
>
> I'd also mention -i100 (or even higher, e.g. -i1100 to catch errors like get
> file descriptors exhausted due missing SAFE_CLOSE(fd)).

+1

>
> Also, both of these are already somehow mentioned at "New tests" section, I'd
> remove it from there (enough to mention them just once).

I didn't remove it because I am not sure if the "new tests" section is a
list of demands. Whereas these are just suggestions.

>
>> +* Compare usage of system calls with man page descriptions
>> +* Compare usage of system calls with kernel code
>> +* Search the LTP library for existing helper functions
>> +
>> +## How to find subtle errors
>> +
>> +A subtle error is one where you can expect some argument because you
>> +do not have clear evidence of an error. It is best to state these as
>> +questions and not make assertions if possible.
>> +
>> +Although if it is a matter of style or "taste" then senior maintainers
>> +can assert what is correct to avoid bike shedding.
>> +
>> +* Ask what happens if there is an error, could it be debugged just
>> +  with the test output?
>> +* Are we testing undefined behaviour?
>> +  - Could future kernel behaviour change without "breaking userland"?
>> +  - Does the kernel behave differently depending on hardware?
>> +  - Does it behave differently depending kernel on configuration?
>> +  - Does it behave differently depending on the compiler?
>> +* Will it scale to tiny and huge systems?
>> +  - What happens if there are 100+ CPUs?
>> +  - What happens if each CPU core is very slow?
>> +  - What happens if there are 2TB or RAM?
>
> Again, very good points, even it's hard to test all of these before.
>
>> +* Are we repeating a pattern that can be turned into a library function?
>> +* Is a single test trying to do too much?
>> +* Could multiple similar tests be merged?
>> +* Race conditions
>> +  - What happens if a process gets preempted?
>> +  - Could checkpoints or fuzzsync by used instead?
>> +  - Note, usually you can insert a sleep to prove a race condition
>> +    exists however finding them is hard
>> +* Is there a simpler way to achieve the same kernel coverage?
>> +
>> +## How to get patches merged
>> +
>> +Once you think a patch is good enough you should add your Reviewed-by
>> +tags. This means you will get some credit for getting the patch
>> +merged. Also some blame if there are problems.
>> +
>> +In addition you can expect others to review your patches and add their
>> +tags. This will speed up the process of getting your patches merged.
>> +
>> +## Maintainers Checklist
>
>>  Patchset should be tested locally and ideally also in maintainer's fork in
>>  GitHub Actions on GitHub.
> I'd encourage people to enable GitHub Actions in their forks (I'm not sure how
> many maintainers do this; best would be automation [1] [2], but nobody bothers
> about CI and I'm sort of burn out driving it myself).

I use it a lot. I don't remember having to enable anything, it seemed to
just work.

It provides a huge benefit I would say.

>
> Kind regards,
> Petr
>
> [1] https://github.com/linux-test-project/ltp/issues/599
> [2] https://github.com/linux-test-project/ltp/issues/600
Petr Vorel March 20, 2023, 8:04 a.m. UTC | #6
Hi all,

...
> > +## Maintainers Checklist

> >  Patchset should be tested locally and ideally also in maintainer's fork in
> >  GitHub Actions on GitHub.
> I'd encourage people to enable GitHub Actions in their forks (I'm not sure how

> many maintainers do this; best would be automation [1] [2], but nobody bothers
> about CI and I'm sort of burn out driving it myself).

+ also add Tested: link-to-github-actions-run below --- in patch would help
(it's than obvious that maintainer does not have to bother with doing it or
not hope that it fails on CentOS 7 old compiler or very new Fedora compiler).

Maybe also encourage people to create account in the patchwork and maintain
status their patches would help (set "Superseded" if they sent new patch version,
other statuses like "Accepted" or "Changes requested" are also sometimes
forgotten by the maintainer who post comments or merge the patch).

Both of these are small helps, but they still help LTP maintainers to have more
time for the review or for writing own patches.

But I can post a follow-up patch with these after your patch is merged if you
don't want to formulate them.

Kind regards,
Petr

> Kind regards,
> Petr

> [1] https://github.com/linux-test-project/ltp/issues/599
> [2] https://github.com/linux-test-project/ltp/issues/600
Petr Vorel March 20, 2023, 8:23 a.m. UTC | #7
Hi all,

> + also add Tested: link-to-github-actions-run below --- in patch would help
> (it's than obvious that maintainer does not have to bother with doing it or
> not hope that it fails on CentOS 7 old compiler or very new Fedora compiler).

> Maybe also encourage people to create account in the patchwork and maintain
> status their patches would help (set "Superseded" if they sent new patch version,
> other statuses like "Accepted" or "Changes requested" are also sometimes
> forgotten by the maintainer who post comments or merge the patch).
Example why helping to maintain the patches by submitter would help:
mknod01: Rewrite the test using new LTP API [1] followed by [v2,1/1] mknod01:
Rewrite the test using new LTP API [2].

Li reviewed v2, but later Cyril pushed v1 (manually updating patch) without
update patchwork. (Li review was ignored, I tried to apply v2 to merge it
because status was not updated.)

Kind regards,
Petr

[1] https://patchwork.ozlabs.org/project/ltp/patch/20230222034501.11800-1-akumar@suse.de/
[2] https://patchwork.ozlabs.org/project/ltp/patch/20230228154203.2783-1-akumar@suse.de/

> Both of these are small helps, but they still help LTP maintainers to have more
> time for the review or for writing own patches.

> But I can post a follow-up patch with these after your patch is merged if you
> don't want to formulate them.

> Kind regards,
> Petr
Petr Vorel March 20, 2023, 8:33 a.m. UTC | #8
Hi all,

> > + also add Tested: link-to-github-actions-run below --- in patch would help
> > (it's than obvious that maintainer does not have to bother with doing it or
> > not hope that it fails on CentOS 7 old compiler or very new Fedora compiler).

> > Maybe also encourage people to create account in the patchwork and maintain
> > status their patches would help (set "Superseded" if they sent new patch version,
> > other statuses like "Accepted" or "Changes requested" are also sometimes
> > forgotten by the maintainer who post comments or merge the patch).
> Example why helping to maintain the patches by submitter would help:
> mknod01: Rewrite the test using new LTP API [1] followed by [v2,1/1] mknod01:
> Rewrite the test using new LTP API [2].

> Li reviewed v2, but later Cyril pushed v1 (manually updating patch) without
> update patchwork. (Li review was ignored, I tried to apply v2 to merge it
> because status was not updated.)

And should go to the section "How to get patches merged".

But we also have "Developers corner" [3], maybe it should go there. Because I'm
not sure if contributors will read wiki page for maintainers.

BTW content of [3] is also in wiki homepage [4], but that's kind of duplicate
document, IMHO it'd be good to move this to single wiki page.

Kind regards,
Petr

> Kind regards,
> Petr

> [1] https://patchwork.ozlabs.org/project/ltp/patch/20230222034501.11800-1-akumar@suse.de/
> [2] https://patchwork.ozlabs.org/project/ltp/patch/20230228154203.2783-1-akumar@suse.de/
[3] https://github.com/linux-test-project/ltp#developers-corner
[4] https://github.com/linux-test-project/ltp/wiki#developers-corner

> > Both of these are small helps, but they still help LTP maintainers to have more
> > time for the review or for writing own patches.

> > But I can post a follow-up patch with these after your patch is merged if you
> > don't want to formulate them.

> > Kind regards,
> > Petr
Richard Palethorpe March 20, 2023, 9:25 a.m. UTC | #9
Hello,

Petr Vorel <pvorel@suse.cz> writes:

> Hi all,
>
>> + also add Tested: link-to-github-actions-run below --- in patch would help
>> (it's than obvious that maintainer does not have to bother with doing it or
>> not hope that it fails on CentOS 7 old compiler or very new Fedora compiler).
>
>> Maybe also encourage people to create account in the patchwork and maintain
>> status their patches would help (set "Superseded" if they sent new patch version,
>> other statuses like "Accepted" or "Changes requested" are also sometimes
>> forgotten by the maintainer who post comments or merge the patch).
> Example why helping to maintain the patches by submitter would help:
> mknod01: Rewrite the test using new LTP API [1] followed by [v2,1/1] mknod01:
> Rewrite the test using new LTP API [2].
>
> Li reviewed v2, but later Cyril pushed v1 (manually updating patch) without
> update patchwork. (Li review was ignored, I tried to apply v2 to merge it
> because status was not updated.)

Yes, I think patchwork is important.

Probably where we disagree is how agressive to be when removing stuff
from the default view. IMO a shared list can not be allowed to grow; you
can't leave TODO items on there unless they are next on your priority
list.

So I would say remove items (so they don't show up with the default
filter) aggressively and put them on a private list if you intend to do
them ever.

>
> Kind regards,
> Petr
>
> [1] https://patchwork.ozlabs.org/project/ltp/patch/20230222034501.11800-1-akumar@suse.de/
> [2] https://patchwork.ozlabs.org/project/ltp/patch/20230228154203.2783-1-akumar@suse.de/
>
>> Both of these are small helps, but they still help LTP maintainers to have more
>> time for the review or for writing own patches.
>
>> But I can post a follow-up patch with these after your patch is merged if you
>> don't want to formulate them.
>
>> Kind regards,
>> Petr
Li Wang March 20, 2023, 11:16 a.m. UTC | #10
On Mon, Mar 20, 2023 at 4:23 PM Petr Vorel <pvorel@suse.cz> wrote:

> Hi all,
>
> > + also add Tested: link-to-github-actions-run below --- in patch would
> help
> > (it's than obvious that maintainer does not have to bother with doing it
> or
> > not hope that it fails on CentOS 7 old compiler or very new Fedora
> compiler).
>
> > Maybe also encourage people to create account in the patchwork and
> maintain
> > status their patches would help (set "Superseded" if they sent new patch
> version,
>

I'm not sure if this brings advantages more than disadvantages
My concern is that probably caused the wrong operation if more
green hands can update the patch status in the patchwork. That
easily let us confused about where the patch has gone:).

Unless we have a way to grant limited permissions to account.

> other statuses like "Accepted" or "Changes requested" are also sometimes
> > forgotten by the maintainer who post comments or merge the patch).
> Example why helping to maintain the patches by submitter would help:
> mknod01: Rewrite the test using new LTP API [1] followed by [v2,1/1]
> mknod01:
> Rewrite the test using new LTP API [2].
>
> Li reviewed v2, but later Cyril pushed v1 (manually updating patch) without
> update patchwork. (Li review was ignored, I tried to apply v2 to merge it
> because status was not updated.)
>


I'd make a clarification for that mknod01 patch review,
the reason why Cyril merge V1 manually is that V2
involves new change (I neglected) in mknod02, which
should be separated in another patch.

Cyril did the right thing there. But he didn't explain that.



>
> Kind regards,
> Petr
>
> [1]
> https://patchwork.ozlabs.org/project/ltp/patch/20230222034501.11800-1-akumar@suse.de/
> [2]
> https://patchwork.ozlabs.org/project/ltp/patch/20230228154203.2783-1-akumar@suse.de/
>
> > Both of these are small helps, but they still help LTP maintainers to
> have more
> > time for the review or for writing own patches.
>
> > But I can post a follow-up patch with these after your patch is merged
> if you
> > don't want to formulate them.
>
> > Kind regards,
> > Petr
>
>
Petr Vorel March 20, 2023, 2:37 p.m. UTC | #11
> On Mon, Mar 20, 2023 at 4:23 PM Petr Vorel <pvorel@suse.cz> wrote:

> > Hi all,

> > > + also add Tested: link-to-github-actions-run below --- in patch would
> > help
> > > (it's than obvious that maintainer does not have to bother with doing it
> > or
> > > not hope that it fails on CentOS 7 old compiler or very new Fedora
> > compiler).

> > > Maybe also encourage people to create account in the patchwork and
> > maintain
> > > status their patches would help (set "Superseded" if they sent new patch
> > version,


> I'm not sure if this brings advantages more than disadvantages
> My concern is that probably caused the wrong operation if more
> green hands can update the patch status in the patchwork. That
> easily let us confused about where the patch has gone:).

> Unless we have a way to grant limited permissions to account.

Ordinary users (non-admins) have permissions to their patches (patches which
they sent). But any status can be set.  OK, let's not ask for it.


> > other statuses like "Accepted" or "Changes requested" are also sometimes
> > > forgotten by the maintainer who post comments or merge the patch).
> > Example why helping to maintain the patches by submitter would help:
> > mknod01: Rewrite the test using new LTP API [1] followed by [v2,1/1]
> > mknod01:
> > Rewrite the test using new LTP API [2].

> > Li reviewed v2, but later Cyril pushed v1 (manually updating patch) without
> > update patchwork. (Li review was ignored, I tried to apply v2 to merge it
> > because status was not updated.)



> I'd make a clarification for that mknod01 patch review,
> the reason why Cyril merge V1 manually is that V2
> involves new change (I neglected) in mknod02, which
> should be separated in another patch.

> Cyril did the right thing there. But he didn't explain that.

Thanks for detailed info. Yes, I didn't think Cyril anything wrong, I wanted to
document that more patch versions + not updating them can lead to confusion.

Kind regards,
Petr

> > Petr

> > [1]
> > https://patchwork.ozlabs.org/project/ltp/patch/20230222034501.11800-1-akumar@suse.de/
> > [2]
> > https://patchwork.ozlabs.org/project/ltp/patch/20230228154203.2783-1-akumar@suse.de/

> > > Both of these are small helps, but they still help LTP maintainers to
> > have more
> > > time for the review or for writing own patches.

> > > But I can post a follow-up patch with these after your patch is merged
> > if you
> > > don't want to formulate them.

> > > Kind regards,
> > > Petr
Petr Vorel March 20, 2023, 2:48 p.m. UTC | #12
> Hello,

> Petr Vorel <pvorel@suse.cz> writes:

> > Hi all,

> >> + also add Tested: link-to-github-actions-run below --- in patch would help
> >> (it's than obvious that maintainer does not have to bother with doing it or
> >> not hope that it fails on CentOS 7 old compiler or very new Fedora compiler).

> >> Maybe also encourage people to create account in the patchwork and maintain
> >> status their patches would help (set "Superseded" if they sent new patch version,
> >> other statuses like "Accepted" or "Changes requested" are also sometimes
> >> forgotten by the maintainer who post comments or merge the patch).
> > Example why helping to maintain the patches by submitter would help:
> > mknod01: Rewrite the test using new LTP API [1] followed by [v2,1/1] mknod01:
> > Rewrite the test using new LTP API [2].

> > Li reviewed v2, but later Cyril pushed v1 (manually updating patch) without
> > update patchwork. (Li review was ignored, I tried to apply v2 to merge it
> > because status was not updated.)

> Yes, I think patchwork is important.

> Probably where we disagree is how agressive to be when removing stuff
> from the default view. IMO a shared list can not be allowed to grow; you
> can't leave TODO items on there unless they are next on your priority
> list.

This is slightly different topic (I wrote about forgetting to update status),
but it's even more important.

> So I would say remove items (so they don't show up with the default
> filter) aggressively and put them on a private list if you intend to do
> them ever.

If it's obvious, that patch needs serious change I'm ok to set the status.
But because I mainly watch patchses in patchwork (and I'm not sure if I'm alone),
if the patch was posted recently, I wait few hours / one day to update the status
of the patch so that others can see it. But sure I'll stop doing it, if we all are
ok. (I should force to look into ML queue more often, there are also bug reports
and other discussions which aren't in patchwork.)

Kind regards,
Petr

> > Kind regards,
> > Petr

> > [1] https://patchwork.ozlabs.org/project/ltp/patch/20230222034501.11800-1-akumar@suse.de/
> > [2] https://patchwork.ozlabs.org/project/ltp/patch/20230228154203.2783-1-akumar@suse.de/

> >> Both of these are small helps, but they still help LTP maintainers to have more
> >> time for the review or for writing own patches.

> >> But I can post a follow-up patch with these after your patch is merged if you
> >> don't want to formulate them.

> >> Kind regards,
> >> Petr
Cyril Hrubis March 22, 2023, 4:43 p.m. UTC | #13
Hi!
> > + also add Tested: link-to-github-actions-run below --- in patch would help
> > (it's than obvious that maintainer does not have to bother with doing it or
> > not hope that it fails on CentOS 7 old compiler or very new Fedora compiler).
> 
> > Maybe also encourage people to create account in the patchwork and maintain
> > status their patches would help (set "Superseded" if they sent new patch version,
> > other statuses like "Accepted" or "Changes requested" are also sometimes
> > forgotten by the maintainer who post comments or merge the patch).
> Example why helping to maintain the patches by submitter would help:
> mknod01: Rewrite the test using new LTP API [1] followed by [v2,1/1] mknod01:
> Rewrite the test using new LTP API [2].
> 
> Li reviewed v2, but later Cyril pushed v1 (manually updating patch) without
> update patchwork. (Li review was ignored, I tried to apply v2 to merge it
> because status was not updated.)

Sorry for that, that happened because v2 was marked as changes requsted
but v1 kept hanging in patchwork and I only realized that later...
Cyril Hrubis March 22, 2023, 4:48 p.m. UTC | #14
Hi!
> >> +1. Prevent false positive test results
> >> +2. Prevent false negative test results
> >> +3. Make future changes as easy as possible
> >
> > I would say that number 3 maybe be a bit controversial, I've seen cases
> > where attempts to futureproof the code caused steep increase in the
> > test
> > complexity. So maybe:
> >
> > 3. Keep the code as simple as possible as well as futureproof
> 
> Perhaps just
> 
> 3. Keep the code as simple as possibe, but no simpler
> 
> This is possibly paraphrasing Einstein:
> https://quoteinvestigator.com/2011/05/13/einstein-simple/
> 
> 
> NOTE: I think future proofing is actually very dangerous. What I
>       probably meant was
> 
>       3. Keep the code as simple as possible, while maintaining optionality,
>          but if there appears to be a disproportionate increase in complexity
>          for an increase in optionality then simplicity takes priority because
>          identifying relevant optionality is hard.
> 
>       but "optionality" does not have a nice dictionary definition. I guess
>       you could substitute it with "freedom". In any case it's not something I
>       would want to write in documentation. There is no easy way to
>       express it.

That sounds way to complicated, unfortunately reality is often
complicated and cannot be overly simplified.

So I would go with the simple paraphrase to Einstein, that is short and
to the point.

> >> +## How to get patches merged
> >> +
> >> +Once you think a patch is good enough you should add your Reviewed-by
> >> +tags. This means you will get some credit for getting the patch
> >> +merged. Also some blame if there are problems.
> >
> > Maybe we should mention the Tested-by: tag explicitly here as well.
> 
> I'm not sure how we interpret Tested-by when deciding to merge; does it
> mean someone is happy for the test to be merged or not?
> 
> Should someone add both tags if they have reviewed and tested it?

Tested-by: means that someone actually tried the test and that it did
what it was supposed to do. This has obvious meaning for reproducers,
and yes for a reproducer you can add both tags and both are meaningful.

For regular tests Tested-by does not have that much value I guess.
Cyril Hrubis March 22, 2023, 4:49 p.m. UTC | #15
Hi!
> I'd make a clarification for that mknod01 patch review,
> the reason why Cyril merge V1 manually is that V2
> involves new change (I neglected) in mknod02, which
> should be separated in another patch.
> 
> Cyril did the right thing there. But he didn't explain that.

I did the right thing by accident though. I only realized there is v2
later on...
Petr Vorel March 23, 2023, 5:42 a.m. UTC | #16
> Hi!
> > I'd make a clarification for that mknod01 patch review,
> > the reason why Cyril merge V1 manually is that V2
> > involves new change (I neglected) in mknod02, which
> > should be separated in another patch.

> > Cyril did the right thing there. But he didn't explain that.

> I did the right thing by accident though. I only realized there is v2
> later on...

I really didn't want to blame anybody. I just wanted to demonstrate occasional
problems due wrong patch setup in patchwork. Therefore I proposed to all active
developers including these who aren't LTP maintainers (we have quite a few from
SUSE) to maintain their *latest* version of the patchset in patchwork (just to
set the older version either "Superseded" or "Changes requested". That would
reduce these confusions.

I'm ok not to make it as a written rule (it might lead to confusion to encourage
all contributors to make a patchwork account, OTOH it's useful even for rare
kernel contributions, thus I have patchwork accounts in both
https://patchwork.ozlabs.org/ and https://patchwork.kernel.org/).

Kind regards,
Petr
Petr Vorel March 23, 2023, 5:46 a.m. UTC | #17
> Hi!
> > >> +1. Prevent false positive test results
> > >> +2. Prevent false negative test results
> > >> +3. Make future changes as easy as possible

> > > I would say that number 3 maybe be a bit controversial, I've seen cases
> > > where attempts to futureproof the code caused steep increase in the
> > > test
> > > complexity. So maybe:

> > > 3. Keep the code as simple as possible as well as futureproof

> > Perhaps just

> > 3. Keep the code as simple as possibe, but no simpler

> > This is possibly paraphrasing Einstein:
> > https://quoteinvestigator.com/2011/05/13/einstein-simple/


> > NOTE: I think future proofing is actually very dangerous. What I
> >       probably meant was

> >       3. Keep the code as simple as possible, while maintaining optionality,
> >          but if there appears to be a disproportionate increase in complexity
> >          for an increase in optionality then simplicity takes priority because
> >          identifying relevant optionality is hard.

> >       but "optionality" does not have a nice dictionary definition. I guess
> >       you could substitute it with "freedom". In any case it's not something I
> >       would want to write in documentation. There is no easy way to
> >       express it.

> That sounds way to complicated, unfortunately reality is often
> complicated and cannot be overly simplified.

> So I would go with the simple paraphrase to Einstein, that is short and
> to the point.

+1

> > >> +## How to get patches merged
> > >> +
> > >> +Once you think a patch is good enough you should add your Reviewed-by
> > >> +tags. This means you will get some credit for getting the patch
> > >> +merged. Also some blame if there are problems.

> > > Maybe we should mention the Tested-by: tag explicitly here as well.

> > I'm not sure how we interpret Tested-by when deciding to merge; does it
> > mean someone is happy for the test to be merged or not?

> > Should someone add both tags if they have reviewed and tested it?

> Tested-by: means that someone actually tried the test and that it did
> what it was supposed to do. This has obvious meaning for reproducers,
> and yes for a reproducer you can add both tags and both are meaningful.

> For regular tests Tested-by does not have that much value I guess.

I rarely add Tested-by, usually for non-intel architecture or something
which was non-trivial for me to test.

Kind regards,
Petr
Petr Vorel May 16, 2023, 12:08 p.m. UTC | #18
Hi all,

I'd like to get this effort to be merged.
Here are suggestions what to fix (typos) / improve:

https://lore.kernel.org/ltp/ZBsxUp08kTPeF27T@yuki/
https://lore.kernel.org/ltp/87sfe52cms.fsf@suse.de/

Richie, could you please post v2?

Thanks!

Kind regards,
Petr
Richard Palethorpe May 18, 2023, 10:56 a.m. UTC | #19
Hello,

Petr Vorel <pvorel@suse.cz> writes:

> Hi all,
>
> I'd like to get this effort to be merged.
> Here are suggestions what to fix (typos) / improve:
>
> https://lore.kernel.org/ltp/ZBsxUp08kTPeF27T@yuki/
> https://lore.kernel.org/ltp/87sfe52cms.fsf@suse.de/
>
> Richie, could you please post v2?

I see there is some further discussion, next week perhaps.

>
> Thanks!
>
> Kind regards,
> Petr
diff mbox series

Patch

diff --git a/doc/maintainer-patch-review-checklist.txt b/doc/maintainer-patch-review-checklist.txt
index 706b0a516..be0cd0961 100644
--- a/doc/maintainer-patch-review-checklist.txt
+++ b/doc/maintainer-patch-review-checklist.txt
@@ -1,4 +1,80 @@ 
-# Maintainer Patch Review Checklist
+# Patch Review
+
+Anyone can and should review patches. It's the only way to get good at
+patch review and for the project to scale.
+
+## Goals of patch review
+
+1. Prevent false positive test results
+2. Prevent false negative test results
+3. Make future changes as easy as possible
+
+## How to find clear errors
+
+A clear error is one where there is unlikely to be any argument if you
+provide evidence of it. Evidence being an error trace or logical proof
+that an error will occur in a common situation.
+
+The following are examples and may not be appropriate for all tests.
+
+* Merge the patch. It should apply cleanly to master.
+* Compile the patch with default and non-default configurations.
+  - Use sanitizers e.g. undefined behaviour, address.
+  - Compile on non-x86
+  - Compile on x86 with -m32
+* Use `make check`
+* Run effected tests in a VM
+  - Use single vCPU
+  - Use many vCPUs and enable NUMA
+  - Restrict RAM to < 1GB.
+* Run effected tests on an embedded device
+* Run effected tests on non-x86 machine in general
+* Run reproducers on a kernel where the bug is present
+* Run tests with "-i0"
+* Compare usage of system calls with man page descriptions
+* Compare usage of system calls with kernel code
+* Search the LTP library for existing helper functions
+
+## How to find subtle errors
+
+A subtle error is one where you can expect some argument because you
+do not have clear evidence of an error. It is best to state these as
+questions and not make assertions if possible.
+
+Although if it is a matter of style or "taste" then senior maintainers
+can assert what is correct to avoid bike shedding.
+
+* Ask what happens if there is an error, could it be debugged just
+  with the test output?
+* Are we testing undefined behaviour?
+  - Could future kernel behaviour change without "breaking userland"?
+  - Does the kernel behave differently depending on hardware?
+  - Does it behave differently depending kernel on configuration?
+  - Does it behave differently depending on the compiler?
+* Will it scale to tiny and huge systems?
+  - What happens if there are 100+ CPUs?
+  - What happens if each CPU core is very slow?
+  - What happens if there are 2TB or RAM?
+* Are we repeating a pattern that can be turned into a library function?
+* Is a single test trying to do too much?
+* Could multiple similar tests be merged?
+* Race conditions
+  - What happens if a process gets preempted?
+  - Could checkpoints or fuzzsync by used instead?
+  - Note, usually you can insert a sleep to prove a race condition
+    exists however finding them is hard
+* Is there a simpler way to achieve the same kernel coverage?
+
+## How to get patches merged
+
+Once you think a patch is good enough you should add your Reviewed-by
+tags. This means you will get some credit for getting the patch
+merged. Also some blame if there are problems.
+
+In addition you can expect others to review your patches and add their
+tags. This will speed up the process of getting your patches merged.
+
+## Maintainers Checklist
 
 Patchset should be tested locally and ideally also in maintainer's fork in
 GitHub Actions on GitHub.