
mm/vma: Return the exact errno for __split_vma() and mas_store_gfp()

Message ID 20240909050226.2053-1-ice_yangxiao@163.com
State Not Applicable

Commit Message

Xiao Yang Sept. 9, 2024, 5:02 a.m. UTC
__split_vma() and mas_store_gfp() return several kinds of errno on
failure, so don't ignore them in vms_gather_munmap_vmas(). For example,
__split_vma() returns -EINVAL when an unaligned huge page is unmapped.
The issue is reproduced by the LTP memfd_create03 test.

Fixes: 6898c9039bc8 ("mm/vma: extract the gathering of vmas from do_vmi_align_munmap()")
Signed-off-by: Xiao Yang <ice_yangxiao@163.com>
Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202409081536.d283a0fb-oliver.sang@intel.com
---
 mm/vma.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

Comments

Lorenzo Stoakes Sept. 9, 2024, 9:09 a.m. UTC | #1
On Mon, Sep 09, 2024 at 02:02:26PM GMT, Xiao Yang wrote:
> __split_vma() and mas_store_gfp() returns several types of errno on
> failure so don't ignore them in vms_gather_munmap_vmas(). For example,
> __split_vma() returns -EINVAL when an unaligned huge page is unmapped.
> This issue is reproduced by ltp memfd_create03 test.

Thanks for this! :)

Though pedantic note - please ensure to check scripts/get_maintainer.pl and cc
the reviewers and maintainer, the maintainer being Andrew and the
reviewers being me, Liam and Vlastimil.

The maintainer is especially important as it's Andrew who'll take the patch
;)

I've cc'd them here :)

>
> Fixes: 6898c9039bc8 ("mm/vma: extract the gathering of vmas from do_vmi_align_munmap()")
> Signed-off-by: Xiao Yang <ice_yangxiao@163.com>
> Reported-by: kernel test robot <oliver.sang@intel.com>
> Closes: https://lore.kernel.org/oe-lkp/202409081536.d283a0fb-oliver.sang@intel.com
> ---
>  mm/vma.c | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/mm/vma.c b/mm/vma.c
> index 8d1686fc8d5a..3feeea9a8c3d 100644
> --- a/mm/vma.c
> +++ b/mm/vma.c
> @@ -1200,7 +1200,8 @@ int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
>  			goto start_split_failed;
>  		}
>
> -		if (__split_vma(vms->vmi, vms->vma, vms->start, 1))
> +		error = __split_vma(vms->vmi, vms->vma, vms->start, 1);
> +		if (error)
>  			goto start_split_failed;

We'd probably want to stop assigning error = -ENOMEM and just leave it
uninitialised if we're always going to assign it rather than filter.

You'd want to make sure that you caught any case that relies on it being
pre-assigned though.

>  	}
>  	vms->prev = vma_prev(vms->vmi);
> @@ -1220,12 +1221,14 @@ int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
>  		}
>  		/* Does it split the end? */
>  		if (next->vm_end > vms->end) {
> -			if (__split_vma(vms->vmi, next, vms->end, 0))
> +			error = __split_vma(vms->vmi, next, vms->end, 0);
> +			if (error)
>  				goto end_split_failed;

Related to the point above: in this hunk and the one above, you are now
resetting error to 0 when the call succeeds, while some later code might
rely on that not being the case.

Basically I'd prefer us, if Liam is cool with it, to just not initialise
error and assign when an error actually occurs.

But we filtered for a reason, need to figure out if that is still
needed...
>  		}
>  		vma_start_write(next);
>  		mas_set(mas_detach, vms->vma_count++);
> -		if (mas_store_gfp(mas_detach, next, GFP_KERNEL))
> +		error = mas_store_gfp(mas_detach, next, GFP_KERNEL);
> +		if (error)
>  			goto munmap_gather_failed;
>
>  		vma_mark_detached(next, true);
> --
> 2.46.0
>

I'm in general in favour of what this patch does (modulo the points about
not initialising error and checking that we don't rely on it being
initialised above), but it very much needs Liam's input.

If Liam is cool with it, I'll add tags, but let's hold off on this until we
have confirmation from him.

Thanks!
Xiao Yang Sept. 9, 2024, 12:55 p.m. UTC | #2
At 2024-09-09 17:09:43, "Lorenzo Stoakes" <lorenzo.stoakes@oracle.com> wrote:
>On Mon, Sep 09, 2024 at 02:02:26PM GMT, Xiao Yang wrote:
>> __split_vma() and mas_store_gfp() returns several types of errno on
>> failure so don't ignore them in vms_gather_munmap_vmas(). For example,
>> __split_vma() returns -EINVAL when an unaligned huge page is unmapped.
>> This issue is reproduced by ltp memfd_create03 test.
>
>Thanks for this! :)
>
>Though pedantic note - please ensure to check scripts/get_maintainer.pl and cc-
>the reviewers and maintainer, the maintainer being Andrew and the
>reviewers being me, Liam and Vlastimil.

Hi Lorenzo,

Thanks for your kind reminder.

>
>The maintainer is especially important as it's Andrew who'll take the patch
>;)
>
>I've cc'd them here :)
>
>>
>> Fixes: 6898c9039bc8 ("mm/vma: extract the gathering of vmas from do_vmi_align_munmap()")
>> Signed-off-by: Xiao Yang <ice_yangxiao@163.com>
>> Reported-by: kernel test robot <oliver.sang@intel.com>
>> Closes: https://lore.kernel.org/oe-lkp/202409081536.d283a0fb-oliver.sang@intel.com
>> ---
>>  mm/vma.c | 9 ++++++---
>>  1 file changed, 6 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/vma.c b/mm/vma.c
>> index 8d1686fc8d5a..3feeea9a8c3d 100644
>> --- a/mm/vma.c
>> +++ b/mm/vma.c
>> @@ -1200,7 +1200,8 @@ int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
>>  			goto start_split_failed;
>>  		}
>>
>> -		if (__split_vma(vms->vmi, vms->vma, vms->start, 1))
>> +		error = __split_vma(vms->vmi, vms->vma, vms->start, 1);
>> +		if (error)
>>  			goto start_split_failed;
>
>We'd probably want to stop assigning error = ENOMEM and just leave it
>uninitialised if we're always going to assign it rather than filter.
>
>You'd want to make sure that you caught any case that relies on it being
>pre-assigned though.
>
>>  	}
>>  	vms->prev = vma_prev(vms->vmi);
>> @@ -1220,12 +1221,14 @@ int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
>>  		}
>>  		/* Does it split the end? */
>>  		if (next->vm_end > vms->end) {
>> -			if (__split_vma(vms->vmi, next, vms->end, 0))
>> +			error = __split_vma(vms->vmi, next, vms->end, 0);
>> +			if (error)
>>  				goto end_split_failed;
>
>Related to point above, In this and above, you are now resetting error to 0
>should this succeed while some later code might rely on this not being the
>case.
>
>Basically I'd prefer us, if Liam is cool with it, to just not initialise
>error and assign when an error actually occurs.

Agreed. I will resend the v2 patch as you suggested.

Best Regards,
Xiao Yang

>
>But we filtered for a reason, need to figure out if that is still
>needed...
>m
>>  		}
>>  		vma_start_write(next);
>>  		mas_set(mas_detach, vms->vma_count++);
>> -		if (mas_store_gfp(mas_detach, next, GFP_KERNEL))
>> +		error = mas_store_gfp(mas_detach, next, GFP_KERNEL);
>> +		if (error)
>>  			goto munmap_gather_failed;
>>
>>  		vma_mark_detached(next, true);
>> --
>> 2.46.0
>>
>
>I'm in general in favour of what this patch does (modulo the points about
>not initialising error and checking that we don't rely on it being
>initialised above), but it very much need's Liam's input.
>
>If Liam is cool with it, I'll add tags, but let's hold off on this until we
>have confirmation from him.
>
>Thanks!
Liam R. Howlett Sept. 9, 2024, 2 p.m. UTC | #3
* Lorenzo Stoakes <lorenzo.stoakes@oracle.com> [240909 05:09]:
> On Mon, Sep 09, 2024 at 02:02:26PM GMT, Xiao Yang wrote:
> > __split_vma() and mas_store_gfp() returns several types of errno on
> > failure so don't ignore them in vms_gather_munmap_vmas(). For example,
> > __split_vma() returns -EINVAL when an unaligned huge page is unmapped.
> > This issue is reproduced by ltp memfd_create03 test.
> 
> Thanks for this! :)
> 
> Though pedantic note - please ensure to check scripts/get_maintainer.pl and cc-
> the reviewers and maintainer, the maintainer being Andrew and the
> reviewers being me, Liam and Vlastimil.
> 
> The maintainer is especially important as it's Andrew who'll take the patch
> ;)
> 
> I've cc'd them here :)
> 
> >
> > Fixes: 6898c9039bc8 ("mm/vma: extract the gathering of vmas from do_vmi_align_munmap()")

This Fixes line will mean nothing in the long run, but Andrew can use it
to identify the target commit to squash this into.

If this patch is merged and not squashed, you will create more work for
stable and get emails asking what commit it fixes.

> > Signed-off-by: Xiao Yang <ice_yangxiao@163.com>
> > Reported-by: kernel test robot <oliver.sang@intel.com>
> > Closes: https://lore.kernel.org/oe-lkp/202409081536.d283a0fb-oliver.sang@intel.com
> > ---
> >  mm/vma.c | 9 ++++++---
> >  1 file changed, 6 insertions(+), 3 deletions(-)
> >
> > diff --git a/mm/vma.c b/mm/vma.c
> > index 8d1686fc8d5a..3feeea9a8c3d 100644
> > --- a/mm/vma.c
> > +++ b/mm/vma.c
> > @@ -1200,7 +1200,8 @@ int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
> >  			goto start_split_failed;
> >  		}
> >
> > -		if (__split_vma(vms->vmi, vms->vma, vms->start, 1))
> > +		error = __split_vma(vms->vmi, vms->vma, vms->start, 1);
> > +		if (error)
> >  			goto start_split_failed;
> 
> We'd probably want to stop assigning error = ENOMEM and just leave it
> uninitialised if we're always going to assign it rather than filter.
> 
> You'd want to make sure that you caught any case that relies on it being
> pre-assigned though.
> 
> >  	}
> >  	vms->prev = vma_prev(vms->vmi);
> > @@ -1220,12 +1221,14 @@ int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
> >  		}
> >  		/* Does it split the end? */
> >  		if (next->vm_end > vms->end) {
> > -			if (__split_vma(vms->vmi, next, vms->end, 0))
> > +			error = __split_vma(vms->vmi, next, vms->end, 0);
> > +			if (error)
> >  				goto end_split_failed;
> 
> Related to point above, In this and above, you are now resetting error to 0
> should this succeed while some later code might rely on this not being the
> case.
> 
> Basically I'd prefer us, if Liam is cool with it, to just not initialise
> error and assign when an error actually occurs.
> 
> But we filtered for a reason, need to figure out if that is still
> needed...
> m
> >  		}
> >  		vma_start_write(next);
> >  		mas_set(mas_detach, vms->vma_count++);
> > -		if (mas_store_gfp(mas_detach, next, GFP_KERNEL))
> > +		error = mas_store_gfp(mas_detach, next, GFP_KERNEL);
> > +		if (error)
> >  			goto munmap_gather_failed;
> >
> >  		vma_mark_detached(next, true);
> > --
> > 2.46.0
> >
> 
> I'm in general in favour of what this patch does (modulo the points about
> not initialising error and checking that we don't rely on it being
> initialised above), but it very much need's Liam's input.
> 
> If Liam is cool with it, I'll add tags, but let's hold off on this until we
> have confirmation from him.

We should probably drop the assignment altogether.

Thanks,
Liam
Dan Carpenter Sept. 10, 2024, 1:37 p.m. UTC | #4
Hi Xiao,

kernel test robot noticed the following build warnings:

url:    https://github.com/intel-lab-lkp/linux/commits/Xiao-Yang/mm-vma-Return-the-exact-errno-for-__split_vma-and-mas_store_gfp/20240909-130325
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20240909050226.2053-1-ice_yangxiao%40163.com
patch subject: [PATCH] mm/vma: Return the exact errno for __split_vma() and mas_store_gfp()
config: x86_64-randconfig-161-20240910 (https://download.01.org/0day-ci/archive/20240910/202409102026.LOh8J1Lh-lkp@intel.com/config)
compiler: clang version 18.1.5 (https://github.com/llvm/llvm-project 617a15a9eac96088ae5e9134248d8236e34b91b1)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
| Closes: https://lore.kernel.org/r/202409102026.LOh8J1Lh-lkp@intel.com/

smatch warnings:
mm/vma.c:1263 vms_gather_munmap_vmas() warn: missing error code 'error'

vim +/error +1263 mm/vma.c

49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1166  /*
dba14840905f9e Liam R. Howlett 2024-08-30  1167   * vms_gather_munmap_vmas() - Put all VMAs within a range into a maple tree
6898c9039bc8e3 Liam R. Howlett 2024-08-30  1168   * for removal at a later date.  Handles splitting first and last if necessary
6898c9039bc8e3 Liam R. Howlett 2024-08-30  1169   * and marking the vmas as isolated.
6898c9039bc8e3 Liam R. Howlett 2024-08-30  1170   *
dba14840905f9e Liam R. Howlett 2024-08-30  1171   * @vms: The vma munmap struct
6898c9039bc8e3 Liam R. Howlett 2024-08-30  1172   * @mas_detach: The maple state tracking the detached tree
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1173   *
6898c9039bc8e3 Liam R. Howlett 2024-08-30  1174   * Return: 0 on success, -EPERM on mseal vmas, -ENOMEM otherwise

This comment needs to be updated.

49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1175   */
9014b230d88d7f Liam R. Howlett 2024-08-30  1176  int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
dba14840905f9e Liam R. Howlett 2024-08-30  1177  		struct ma_state *mas_detach)
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1178  {
01cf21e9e11957 Liam R. Howlett 2024-08-30  1179  	struct vm_area_struct *next = NULL;
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1180  	int error = -ENOMEM;
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1181  
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1182  	/*
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1183  	 * If we need to split any vma, do it now to save pain later.
20831cd6f814ea Liam R. Howlett 2024-08-30  1184  	 * Does it split the first one?
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1185  	 */
dba14840905f9e Liam R. Howlett 2024-08-30  1186  	if (vms->start > vms->vma->vm_start) {
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1187  
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1188  		/*
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1189  		 * Make sure that map_count on return from munmap() will
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1190  		 * not exceed its limit; but let map_count go just above
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1191  		 * its limit temporarily, to help free resources as expected.
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1192  		 */
dba14840905f9e Liam R. Howlett 2024-08-30  1193  		if (vms->end < vms->vma->vm_end &&
63fc66f5b6b18f Liam R. Howlett 2024-08-30  1194  		    vms->vma->vm_mm->map_count >= sysctl_max_map_count)
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1195  			goto map_count_exceeded;
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1196  
df2a7df9a9aa32 Pedro Falcato   2024-08-17  1197  		/* Don't bother splitting the VMA if we can't unmap it anyway */
dba14840905f9e Liam R. Howlett 2024-08-30  1198  		if (!can_modify_vma(vms->vma)) {
df2a7df9a9aa32 Pedro Falcato   2024-08-17  1199  			error = -EPERM;
df2a7df9a9aa32 Pedro Falcato   2024-08-17  1200  			goto start_split_failed;
df2a7df9a9aa32 Pedro Falcato   2024-08-17  1201  		}
df2a7df9a9aa32 Pedro Falcato   2024-08-17  1202  
013545e1b9bca0 Xiao Yang       2024-09-09  1203  		error = __split_vma(vms->vmi, vms->vma, vms->start, 1);
013545e1b9bca0 Xiao Yang       2024-09-09  1204  		if (error)
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1205  			goto start_split_failed;
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1206  	}
17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1207  	vms->prev = vma_prev(vms->vmi);
9c3ebeda8fb5a8 Liam R. Howlett 2024-08-30  1208  	if (vms->prev)
9c3ebeda8fb5a8 Liam R. Howlett 2024-08-30  1209  		vms->unmap_start = vms->prev->vm_end;
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1210  
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1211  	/*
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1212  	 * Detach a range of VMAs from the mm. Using next as a temp variable as
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1213  	 * it is always overwritten.
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1214  	 */
17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1215  	for_each_vma_range(*(vms->vmi), next, vms->end) {
17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1216  		long nrpages;
17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1217  
df2a7df9a9aa32 Pedro Falcato   2024-08-17  1218  		if (!can_modify_vma(next)) {
df2a7df9a9aa32 Pedro Falcato   2024-08-17  1219  			error = -EPERM;
df2a7df9a9aa32 Pedro Falcato   2024-08-17  1220  			goto modify_vma_failed;
df2a7df9a9aa32 Pedro Falcato   2024-08-17  1221  		}
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1222  		/* Does it split the end? */
dba14840905f9e Liam R. Howlett 2024-08-30  1223  		if (next->vm_end > vms->end) {
013545e1b9bca0 Xiao Yang       2024-09-09  1224  			error = __split_vma(vms->vmi, next, vms->end, 0);
013545e1b9bca0 Xiao Yang       2024-09-09  1225  			if (error)
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1226  				goto end_split_failed;
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1227  		}
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1228  		vma_start_write(next);
dba14840905f9e Liam R. Howlett 2024-08-30  1229  		mas_set(mas_detach, vms->vma_count++);
013545e1b9bca0 Xiao Yang       2024-09-09  1230  		error = mas_store_gfp(mas_detach, next, GFP_KERNEL);
013545e1b9bca0 Xiao Yang       2024-09-09  1231  		if (error)
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1232  			goto munmap_gather_failed;
6898c9039bc8e3 Liam R. Howlett 2024-08-30  1233  
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1234  		vma_mark_detached(next, true);
17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1235  		nrpages = vma_pages(next);
17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1236  
17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1237  		vms->nr_pages += nrpages;
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1238  		if (next->vm_flags & VM_LOCKED)
17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1239  			vms->locked_vm += nrpages;
17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1240  
17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1241  		if (next->vm_flags & VM_ACCOUNT)
17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1242  			vms->nr_accounted += nrpages;
17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1243  
17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1244  		if (is_exec_mapping(next->vm_flags))
17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1245  			vms->exec_vm += nrpages;
17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1246  		else if (is_stack_mapping(next->vm_flags))
17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1247  			vms->stack_vm += nrpages;
17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1248  		else if (is_data_mapping(next->vm_flags))
17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1249  			vms->data_vm += nrpages;
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1250  
dba14840905f9e Liam R. Howlett 2024-08-30  1251  		if (unlikely(vms->uf)) {
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1252  			/*
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1253  			 * If userfaultfd_unmap_prep returns an error the vmas
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1254  			 * will remain split, but userland will get a
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1255  			 * highly unexpected error anyway. This is no
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1256  			 * different than the case where the first of the two
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1257  			 * __split_vma fails, but we don't undo the first
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1258  			 * split, despite we could. This is unlikely enough
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1259  			 * failure that it's not worth optimizing it for.
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1260  			 */
dba14840905f9e Liam R. Howlett 2024-08-30  1261  			if (userfaultfd_unmap_prep(next, vms->start, vms->end,
dba14840905f9e Liam R. Howlett 2024-08-30  1262  						   vms->uf))
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29 @1263  				goto userfaultfd_error;

Needs an "error = -ENOMEM;" here.  I haven't reviewed this function outside of
what the zero day bot puts into this email.

49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1264  		}
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1265  #ifdef CONFIG_DEBUG_VM_MAPLE_TREE
dba14840905f9e Liam R. Howlett 2024-08-30  1266  		BUG_ON(next->vm_start < vms->start);
dba14840905f9e Liam R. Howlett 2024-08-30  1267  		BUG_ON(next->vm_start > vms->end);
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1268  #endif
17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1269  	}
17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1270  
17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1271  	vms->next = vma_next(vms->vmi);
9c3ebeda8fb5a8 Liam R. Howlett 2024-08-30  1272  	if (vms->next)
9c3ebeda8fb5a8 Liam R. Howlett 2024-08-30  1273  		vms->unmap_end = vms->next->vm_start;
49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1274
Liam R. Howlett Sept. 10, 2024, 1:55 p.m. UTC | #5
* Dan Carpenter <dan.carpenter@linaro.org> [240910 09:37]:
> Hi Xiao,
> 
> kernel test robot noticed the following build warnings:
> 
> url:    https://github.com/intel-lab-lkp/linux/commits/Xiao-Yang/mm-vma-Return-the-exact-errno-for-__split_vma-and-mas_store_gfp/20240909-130325
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
> patch link:    https://lore.kernel.org/r/20240909050226.2053-1-ice_yangxiao%40163.com
> patch subject: [PATCH] mm/vma: Return the exact errno for __split_vma() and mas_store_gfp()
> config: x86_64-randconfig-161-20240910 (https://download.01.org/0day-ci/archive/20240910/202409102026.LOh8J1Lh-lkp@intel.com/config)
> compiler: clang version 18.1.5 (https://github.com/llvm/llvm-project 617a15a9eac96088ae5e9134248d8236e34b91b1)
> 
> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add following tags
> | Reported-by: kernel test robot <lkp@intel.com>
> | Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
> | Closes: https://lore.kernel.org/r/202409102026.LOh8J1Lh-lkp@intel.com/
> 
> smatch warnings:
> mm/vma.c:1263 vms_gather_munmap_vmas() warn: missing error code 'error'
> 
> vim +/error +1263 mm/vma.c
> 
> [ annotated mm/vma.c listing snipped; see Dan's mail above for the full
>   context ]
> 
> This comment needs to be updated.
> 
> [ listing snipped up to the userfaultfd_error goto at mm/vma.c:1263 ]
> Needs an "error = -ENOMEM;" here.  I haven't reviewed this function outside of
> what the zero day bot puts into this email.

Thanks Dan.  This is already addressed in v2 [1].

[1]. https://lore.kernel.org/all/20240909125621.1994-1-ice_yangxiao@163.com/

Regards,
Liam

...
Xiao Yang Sept. 10, 2024, 1:55 p.m. UTC | #6
At 2024-09-10 21:37:44, "Dan Carpenter" <dan.carpenter@linaro.org> wrote:
>Hi Xiao,
>
>kernel test robot noticed the following build warnings:
>
>url:    https://github.com/intel-lab-lkp/linux/commits/Xiao-Yang/mm-vma-Return-the-exact-errno-for-__split_vma-and-mas_store_gfp/20240909-130325
>base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
>patch link:    https://lore.kernel.org/r/20240909050226.2053-1-ice_yangxiao%40163.com
>patch subject: [PATCH] mm/vma: Return the exact errno for __split_vma() and mas_store_gfp()
>config: x86_64-randconfig-161-20240910 (https://download.01.org/0day-ci/archive/20240910/202409102026.LOh8J1Lh-lkp@intel.com/config)
>compiler: clang version 18.1.5 (https://github.com/llvm/llvm-project 617a15a9eac96088ae5e9134248d8236e34b91b1)
>
>If you fix the issue in a separate patch/commit (i.e. not just a new version of
>the same patch/commit), kindly add following tags
>| Reported-by: kernel test robot <lkp@intel.com>
>| Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
>| Closes: https://lore.kernel.org/r/202409102026.LOh8J1Lh-lkp@intel.com/
>
>smatch warnings:
>mm/vma.c:1263 vms_gather_munmap_vmas() warn: missing error code 'error'
>
>vim +/error +1263 mm/vma.c
>
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1166  /*
>dba14840905f9e Liam R. Howlett 2024-08-30  1167   * vms_gather_munmap_vmas() - Put all VMAs within a range into a maple tree
>6898c9039bc8e3 Liam R. Howlett 2024-08-30  1168   * for removal at a later date.  Handles splitting first and last if necessary
>6898c9039bc8e3 Liam R. Howlett 2024-08-30  1169   * and marking the vmas as isolated.
>6898c9039bc8e3 Liam R. Howlett 2024-08-30  1170   *
>dba14840905f9e Liam R. Howlett 2024-08-30  1171   * @vms: The vma munmap struct
>6898c9039bc8e3 Liam R. Howlett 2024-08-30  1172   * @mas_detach: The maple state tracking the detached tree
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1173   *
>6898c9039bc8e3 Liam R. Howlett 2024-08-30  1174   * Return: 0 on success, -EPERM on mseal vmas, -ENOMEM otherwise
>

>This comment needs to be updated.


Hi Dan,


Thanks for your reply.
The comment has been updated in my v2 patch [1].

[1]: https://lore.kernel.org/all/tixinevflrciek4bnjwzxv6dwqyokfhrhtmu6qndc7hs2qoizd@iqg2tpjjtwyt/T/ 


>
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1175   */
>9014b230d88d7f Liam R. Howlett 2024-08-30  1176  int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
>dba14840905f9e Liam R. Howlett 2024-08-30  1177  		struct ma_state *mas_detach)
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1178  {
>01cf21e9e11957 Liam R. Howlett 2024-08-30  1179  	struct vm_area_struct *next = NULL;
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1180  	int error = -ENOMEM;
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1181  
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1182  	/*
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1183  	 * If we need to split any vma, do it now to save pain later.
>20831cd6f814ea Liam R. Howlett 2024-08-30  1184  	 * Does it split the first one?
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1185  	 */
>dba14840905f9e Liam R. Howlett 2024-08-30  1186  	if (vms->start > vms->vma->vm_start) {
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1187  
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1188  		/*
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1189  		 * Make sure that map_count on return from munmap() will
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1190  		 * not exceed its limit; but let map_count go just above
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1191  		 * its limit temporarily, to help free resources as expected.
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1192  		 */
>dba14840905f9e Liam R. Howlett 2024-08-30  1193  		if (vms->end < vms->vma->vm_end &&
>63fc66f5b6b18f Liam R. Howlett 2024-08-30  1194  		    vms->vma->vm_mm->map_count >= sysctl_max_map_count)
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1195  			goto map_count_exceeded;
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1196  
>df2a7df9a9aa32 Pedro Falcato   2024-08-17  1197  		/* Don't bother splitting the VMA if we can't unmap it anyway */
>dba14840905f9e Liam R. Howlett 2024-08-30  1198  		if (!can_modify_vma(vms->vma)) {
>df2a7df9a9aa32 Pedro Falcato   2024-08-17  1199  			error = -EPERM;
>df2a7df9a9aa32 Pedro Falcato   2024-08-17  1200  			goto start_split_failed;
>df2a7df9a9aa32 Pedro Falcato   2024-08-17  1201  		}
>df2a7df9a9aa32 Pedro Falcato   2024-08-17  1202  
>013545e1b9bca0 Xiao Yang       2024-09-09  1203  		error = __split_vma(vms->vmi, vms->vma, vms->start, 1);
>013545e1b9bca0 Xiao Yang       2024-09-09  1204  		if (error)
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1205  			goto start_split_failed;
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1206  	}
>17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1207  	vms->prev = vma_prev(vms->vmi);
>9c3ebeda8fb5a8 Liam R. Howlett 2024-08-30  1208  	if (vms->prev)
>9c3ebeda8fb5a8 Liam R. Howlett 2024-08-30  1209  		vms->unmap_start = vms->prev->vm_end;
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1210  
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1211  	/*
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1212  	 * Detach a range of VMAs from the mm. Using next as a temp variable as
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1213  	 * it is always overwritten.
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1214  	 */
>17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1215  	for_each_vma_range(*(vms->vmi), next, vms->end) {
>17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1216  		long nrpages;
>17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1217  
>df2a7df9a9aa32 Pedro Falcato   2024-08-17  1218  		if (!can_modify_vma(next)) {
>df2a7df9a9aa32 Pedro Falcato   2024-08-17  1219  			error = -EPERM;
>df2a7df9a9aa32 Pedro Falcato   2024-08-17  1220  			goto modify_vma_failed;
>df2a7df9a9aa32 Pedro Falcato   2024-08-17  1221  		}
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1222  		/* Does it split the end? */
>dba14840905f9e Liam R. Howlett 2024-08-30  1223  		if (next->vm_end > vms->end) {
>013545e1b9bca0 Xiao Yang       2024-09-09  1224  			error = __split_vma(vms->vmi, next, vms->end, 0);
>013545e1b9bca0 Xiao Yang       2024-09-09  1225  			if (error)
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1226  				goto end_split_failed;
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1227  		}
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1228  		vma_start_write(next);
>dba14840905f9e Liam R. Howlett 2024-08-30  1229  		mas_set(mas_detach, vms->vma_count++);
>013545e1b9bca0 Xiao Yang       2024-09-09  1230  		error = mas_store_gfp(mas_detach, next, GFP_KERNEL);
>013545e1b9bca0 Xiao Yang       2024-09-09  1231  		if (error)
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1232  			goto munmap_gather_failed;
>6898c9039bc8e3 Liam R. Howlett 2024-08-30  1233  
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1234  		vma_mark_detached(next, true);
>17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1235  		nrpages = vma_pages(next);
>17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1236  
>17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1237  		vms->nr_pages += nrpages;
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1238  		if (next->vm_flags & VM_LOCKED)
>17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1239  			vms->locked_vm += nrpages;
>17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1240  
>17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1241  		if (next->vm_flags & VM_ACCOUNT)
>17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1242  			vms->nr_accounted += nrpages;
>17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1243  
>17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1244  		if (is_exec_mapping(next->vm_flags))
>17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1245  			vms->exec_vm += nrpages;
>17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1246  		else if (is_stack_mapping(next->vm_flags))
>17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1247  			vms->stack_vm += nrpages;
>17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1248  		else if (is_data_mapping(next->vm_flags))
>17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1249  			vms->data_vm += nrpages;
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1250  
>dba14840905f9e Liam R. Howlett 2024-08-30  1251  		if (unlikely(vms->uf)) {
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1252  			/*
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1253  			 * If userfaultfd_unmap_prep returns an error the vmas
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1254  			 * will remain split, but userland will get a
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1255  			 * highly unexpected error anyway. This is no
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1256  			 * different than the case where the first of the two
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1257  			 * __split_vma fails, but we don't undo the first
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1258  			 * split, despite we could. This is unlikely enough
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1259  			 * failure that it's not worth optimizing it for.
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1260  			 */
>dba14840905f9e Liam R. Howlett 2024-08-30  1261  			if (userfaultfd_unmap_prep(next, vms->start, vms->end,
>dba14840905f9e Liam R. Howlett 2024-08-30  1262  						   vms->uf))
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29 @1263  				goto userfaultfd_error;
>
>Needs an "error = -ENOMEM;" here.  I haven't reviewed this function outside of
>what the zero day bot puts into this email.


Assigning error here has also been done in my v2 patch. Could you try the v2 patch?


Best Regards,
Xiao Yang

>
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1264  		}
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1265  #ifdef CONFIG_DEBUG_VM_MAPLE_TREE
>dba14840905f9e Liam R. Howlett 2024-08-30  1266  		BUG_ON(next->vm_start < vms->start);
>dba14840905f9e Liam R. Howlett 2024-08-30  1267  		BUG_ON(next->vm_start > vms->end);
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1268  #endif
>17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1269  	}
>17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1270  
>17f1ae9b40c6b0 Liam R. Howlett 2024-08-30  1271  	vms->next = vma_next(vms->vmi);
>9c3ebeda8fb5a8 Liam R. Howlett 2024-08-30  1272  	if (vms->next)
>9c3ebeda8fb5a8 Liam R. Howlett 2024-08-30  1273  		vms->unmap_end = vms->next->vm_start;
>49b1b8d6f68310 Lorenzo Stoakes 2024-07-29  1274  
>
>-- 
>0-DAY CI Kernel Test Service
>https://github.com/intel/lkp-tests/wiki
diff mbox series

Patch

diff --git a/mm/vma.c b/mm/vma.c
index 8d1686fc8d5a..3feeea9a8c3d 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -1200,7 +1200,8 @@  int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
 			goto start_split_failed;
 		}
 
-		if (__split_vma(vms->vmi, vms->vma, vms->start, 1))
+		error = __split_vma(vms->vmi, vms->vma, vms->start, 1);
+		if (error)
 			goto start_split_failed;
 	}
 	vms->prev = vma_prev(vms->vmi);
@@ -1220,12 +1221,14 @@  int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
 		}
 		/* Does it split the end? */
 		if (next->vm_end > vms->end) {
-			if (__split_vma(vms->vmi, next, vms->end, 0))
+			error = __split_vma(vms->vmi, next, vms->end, 0);
+			if (error)
 				goto end_split_failed;
 		}
 		vma_start_write(next);
 		mas_set(mas_detach, vms->vma_count++);
-		if (mas_store_gfp(mas_detach, next, GFP_KERNEL))
+		error = mas_store_gfp(mas_detach, next, GFP_KERNEL);
+		if (error)
 			goto munmap_gather_failed;
 
 		vma_mark_detached(next, true);
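
To make the fix concrete outside the kernel tree: the patch replaces the `if (call()) goto label;` shape with `error = call(); if (error) goto label;` so the callee's exact errno reaches the caller instead of a fixed -ENOMEM. The userspace sketch below (all function names and failure modes are hypothetical stand-ins, not the kernel code) contrasts the two shapes; it also shows why Dan's smatch warning matters: once `error` has been overwritten by a successful call, a later bare `goto` must set it again or the function "fails" with 0.

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical stand-ins for __split_vma(), mas_store_gfp() and
 * userfaultfd_unmap_prep(); names and failure modes are illustrative
 * only, not the kernel implementation. */
static int fake_split_vma(int unaligned_hugepage)
{
	return unaligned_hugepage ? -EINVAL : 0;
}

static int fake_store(int oom)
{
	return oom ? -ENOMEM : 0;
}

static int fake_uffd_prep(int fail)
{
	return fail ? -ENOMEM : 0;
}

/* Pre-patch shape: the callees' return values are discarded, so the
 * caller reports a fixed -ENOMEM even when the real failure was
 * -EINVAL (e.g. unmapping an unaligned huge page). */
static int gather_before(int unaligned_hugepage, int oom)
{
	int error = -ENOMEM;

	if (fake_split_vma(unaligned_hugepage))
		goto failed;
	if (fake_store(oom))
		goto failed;
	return 0;
failed:
	return error;
}

/* Post-patch shape: capture each return value so the exact errno
 * propagates.  Note the reassignment before the last goto: after a
 * successful fake_store(), error is 0, so jumping without setting it
 * again would return 0 from a failure path -- the bug smatch flagged
 * at the userfaultfd_error label. */
static int gather_after(int unaligned_hugepage, int oom, int uffd_fail)
{
	int error;

	error = fake_split_vma(unaligned_hugepage);
	if (error)
		goto failed;
	error = fake_store(oom);
	if (error)
		goto failed;
	if (fake_uffd_prep(uffd_fail)) {
		error = -ENOMEM;	/* must be set again here */
		goto failed;
	}
	return 0;
failed:
	return error;
}
```

With this shape, the -EINVAL from the split reaches the caller unchanged, and every failure path leaves `error` holding a negative errno rather than a stale 0.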