[1/3] ACPICA: Dispatch active GPEs at init time

Message ID 1AE640813FDE7649BE1B193DEA596E886CF066E6@SHSMSX101.ccr.corp.intel.com
State Not Applicable

Commit Message

Lv Zheng Aug. 10, 2017, 1:48 a.m.
Hi, Rafael

> From: Rafael J. Wysocki [mailto:rjw@rjwysocki.net]
> Subject: [PATCH 1/3] ACPICA: Dispatch active GPEs at init time
> 
> From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> 
> In some cases GPEs are already active when they are enabled by
> acpi_ev_initialize_gpe_block() and whatever happens next may depend
> on the result of handling the events signaled by them, so the
> events should not be discarded (which is what happens currently) and
> they should be handled as soon as reasonably possible.
> 
> For this reason, modify acpi_ev_initialize_gpe_block() to
> dispatch GPEs with the status flag set in-band right after
> enabling them.

In fact, what we need seems to be invoking acpi_ev_gpe_dispatch()
right after enabling a GPE. So there are two related conditions:
1. GPE is enabled for the first time.
2. GPE is initialized.

And we need to make sure that before acpi_update_all_gpes() is invoked,
all GPE EN bits are actually disabled.
What if we do it in the way shown in the patch pasted at the bottom?


Thanks and best regards
Lv

> 
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> ---
>  drivers/acpi/acpica/evgpeblk.c |   28 +++++++++++++++++++---------
>  1 file changed, 19 insertions(+), 9 deletions(-)
> 
> Index: linux-pm/drivers/acpi/acpica/evgpeblk.c
> ===================================================================
> --- linux-pm.orig/drivers/acpi/acpica/evgpeblk.c
> +++ linux-pm/drivers/acpi/acpica/evgpeblk.c
> @@ -440,9 +440,11 @@ acpi_ev_initialize_gpe_block(struct acpi
>  			     void *ignored)
>  {
>  	acpi_status status;
> +	acpi_event_status event_status;
>  	struct acpi_gpe_event_info *gpe_event_info;
>  	u32 gpe_enabled_count;
>  	u32 gpe_index;
> +	u32 gpe_number;
>  	u32 i;
>  	u32 j;
> 
> @@ -470,30 +472,38 @@ acpi_ev_initialize_gpe_block(struct acpi
> 
>  			gpe_index = (i * ACPI_GPE_REGISTER_WIDTH) + j;
>  			gpe_event_info = &gpe_block->event_info[gpe_index];
> +			gpe_number = gpe_block->block_base_number + gpe_index;
> 
>  			/*
>  			 * Ignore GPEs that have no corresponding _Lxx/_Exx method
> -			 * and GPEs that are used to wake the system
> +			 * and GPEs that are used for wakeup
>  			 */
> -			if ((ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) ==
> -			     ACPI_GPE_DISPATCH_NONE)
> -			    || (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) ==
> -				ACPI_GPE_DISPATCH_HANDLER)
> -			    || (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) ==
> -				ACPI_GPE_DISPATCH_RAW_HANDLER)
> +			if ((ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) !=
> +			     ACPI_GPE_DISPATCH_METHOD)
>  			    || (gpe_event_info->flags & ACPI_GPE_CAN_WAKE)) {
>  				continue;
>  			}
> 
> +			event_status = 0;
> +			(void)acpi_hw_get_gpe_status(gpe_event_info,
> +						     &event_status);
> +
>  			status = acpi_ev_add_gpe_reference(gpe_event_info);
>  			if (ACPI_FAILURE(status)) {
>  				ACPI_EXCEPTION((AE_INFO, status,
>  					"Could not enable GPE 0x%02X",
> -					gpe_index +
> -					gpe_block->block_base_number));
> +					gpe_number));
>  				continue;
>  			}
> 
> +			if (event_status & ACPI_EVENT_FLAG_STATUS_SET) {
> +				ACPI_INFO(("GPE 0x%02X active on init",
> +					   gpe_number));
> +				(void)acpi_ev_gpe_dispatch(gpe_block->node,
> +							   gpe_event_info,
> +							   gpe_number);
> +			}
> +
>  			gpe_enabled_count++;
>  		}
>  	}

Comments

Rafael J. Wysocki Aug. 10, 2017, 4:06 p.m. | #1
On Thursday, August 10, 2017 3:48:58 AM CEST Zheng, Lv wrote:
> Hi, Rafael
> 
> > From: Rafael J. Wysocki [mailto:rjw@rjwysocki.net]
> > Subject: [PATCH 1/3] ACPICA: Dispatch active GPEs at init time
> > 
> > From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> > 
> > In some cases GPEs are already active when they are enabled by
> > acpi_ev_initialize_gpe_block() and whatever happens next may depend
> > on the result of handling the events signaled by them, so the
> > events should not be discarded (which is what happens currently) and
> > they should be handled as soon as reasonably possible.
> > 
> > For this reason, modify acpi_ev_initialize_gpe_block() to
> > dispatch GPEs with the status flag set in-band right after
> > enabling them.
> 
> In fact, what we need seems to be invoking acpi_ev_gpe_dispatch()
> right after enabling a GPE. So there are two related conditions:
> 1. GPE is enabled for the first time.
> 2. GPE is initialized.
> 
> And we need to make sure that before acpi_update_all_gpes() is invoked,
> all GPE EN bits are actually disabled.

But we don't do it today, do we?

And still, calling _dispatch() should not be incorrect even if the GPE
has already been enabled at this point.  Worst case, it will just
queue up the execution of _Lxx/_Exx, which may or may not do anything
useful.

And BTW, this is all done under acpi_gbl_gpe_lock, so acpi_ev_gpe_detect()
will block on it if run concurrently, and we've checked the status, so
we know that the GPE *should* be dispatched; I sort of fail to see
the problem.

Thanks,
Rafael
Lv Zheng Aug. 11, 2017, 5:40 a.m. | #2
Hi, Rafael

> From: Rafael J. Wysocki [mailto:rjw@rjwysocki.net]
> Subject: Re: [PATCH 1/3] ACPICA: Dispatch active GPEs at init time
> 
> On Thursday, August 10, 2017 3:48:58 AM CEST Zheng, Lv wrote:
> > [...]
> >
> > In fact, what we need seems to be invoking acpi_ev_gpe_dispatch()
> > right after enabling a GPE. So there are two related conditions:
> > 1. GPE is enabled for the first time.
> > 2. GPE is initialized.
> >
> > And we need to make sure that before acpi_update_all_gpes() is invoked,
> > all GPE EN bits are actually disabled.
> 
> But we don't do it today, do we?

We don't do that.

> 
> And still, calling _dispatch() should not be incorrect even if the GPE
> has already been enabled at this point.  Worst case, it will just
> queue up the execution of _Lxx/_Exx, which may or may not do anything
> useful.
> 
> And BTW, this is all done under acpi_gbl_gpe_lock, so acpi_ev_gpe_detect()
> will block on it if run concurrently, and we've checked the status, so
> we know that the GPE *should* be dispatched; I sort of fail to see
> the problem.

There is another related problem:
ACPICA clears a GPE's status before enabling it.
This is proven to be wrong, and we have to fix it:
https://bugzilla.kernel.org/show_bug.cgi?id=196249

Without fixing that issue, in this solution we surely need to save the
GPE STS bit before incrementing the GPE reference count, and poll it
according to the saved STS bit, because if we poll it after enabling,
the STS bit will have been wrongly cleared.
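The ordering constraint described here can be sketched with a tiny self-contained model (the `fake_*` names are purely illustrative, not the real ACPICA API): with the pre-fix clear-before-enable behavior, the STS bit has to be sampled before the first reference is added, or an already-pending event is lost.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model; fake_* names are illustrative, not ACPICA API. */
struct fake_gpe {
	bool enabled;
	bool status;	/* STS bit: event already pending */
	int ref_count;
};

/* Models the pre-fix acpi_ev_add_gpe_reference() behavior: the first
 * reference enables the GPE and clears STS as a side effect. */
static void fake_add_gpe_reference(struct fake_gpe *gpe)
{
	if (++gpe->ref_count == 1) {
		gpe->status = false;	/* clear-before-enable (the bug) */
		gpe->enabled = true;
	}
}

/* Correct ordering: save STS first, then enable; the dispatch decision
 * is made from the saved bit, not from the now-cleared register. */
static bool fake_init_gpe(struct fake_gpe *gpe)
{
	bool was_pending = gpe->status;

	fake_add_gpe_reference(gpe);
	return was_pending;	/* true: caller should dispatch */
}
```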

So if we can do this on top of the "GPE clear" fix, things can be done
in a simpler way: invoke acpi_ev_gpe_detect() after fully initializing
the GPEs (as in what I pasted).

However, I should say that merging the "GPE clear" fix might be risky,
so this patch can go upstream before the simpler solution, to leave us
a stable baseline.

I'll send out the simpler solution along with the "GPE clear" fix.
Maybe you can consider shipping it after merging this patch.

Thanks and best regards
Lv
Rafael J. Wysocki Aug. 11, 2017, 12:23 p.m. | #3
On Friday, August 11, 2017 7:40:56 AM CEST Zheng, Lv wrote:
> Hi, Rafael
> 
> > From: Rafael J. Wysocki [mailto:rjw@rjwysocki.net]
> > Subject: Re: [PATCH 1/3] ACPICA: Dispatch active GPEs at init time
> > 
> > On Thursday, August 10, 2017 3:48:58 AM CEST Zheng, Lv wrote:
> > > [...]
> > >
> > > In fact, what we need seems to be invoking acpi_ev_gpe_dispatch()
> > > right after enabling a GPE. So there are two related conditions:
> > > 1. GPE is enabled for the first time.
> > > 2. GPE is initialized.
> > >
> > > And we need to make sure that before acpi_update_all_gpes() is invoked,
> > > all GPE EN bits are actually disabled.
> > 
> > But we don't do it today, do we?
> 
> We don't do that.
> 
> > 
> > And still, calling _dispatch() should not be incorrect even if the GPE
> > has already been enabled at this point.  Worst case, it will just
> > queue up the execution of _Lxx/_Exx, which may or may not do anything
> > useful.
> > 
> > And BTW, this is all done under acpi_gbl_gpe_lock, so acpi_ev_gpe_detect()
> > will block on it if run concurrently, and we've checked the status, so
> > we know that the GPE *should* be dispatched; I sort of fail to see
> > the problem.
> 
> There is another related problem:
> ACPICA clears a GPE's status before enabling it.
> This is proven to be wrong, and we have to fix it:
> https://bugzilla.kernel.org/show_bug.cgi?id=196249
> 
> Without fixing that issue, in this solution we surely need to save the
> GPE STS bit before incrementing the GPE reference count, and poll it
> according to the saved STS bit, because if we poll it after enabling,
> the STS bit will have been wrongly cleared.

I'm not sure if I understand you correctly, but why would we poll it?

In the $subject patch the status is checked and then
acpi_ev_add_gpe_reference() is called to add a reference to the GPE.

If this is the first reference (which will be the case in the majority
of cases), acpi_ev_enable_gpe() will be called and that will clear the
status.

Then, acpi_ev_gpe_dispatch() is called if the status was set and that
itself doesn't check the status.  It disables the GPE upfront (so the
status doesn't matter from now on until the GPE is enabled again) and
clears the status unconditionally if the GPE is edge-triggered.  This
means that for edge-triggered GPEs the clearing of the status by
acpi_ev_enable_gpe() doesn't matter here.

For level-triggered GPEs the status is not cleared by acpi_ev_gpe_dispatch()
itself, but _Lxx is executed (again, without checking the status) and
finally the status is cleared by acpi_ev_finish_gpe() anyway, so we
end up with cleared status no matter what (and that before re-enabling
the GPE).  And even if _Lxx itself checked the status (but honestly why
would it?), then this is a level-triggered GPE, so it will re-trigger
anyway if not handled this time.

So it looks like the fact that acpi_ev_enable_gpe() clears the status before
enabling the GPE doesn't matter for this patch, although it may matter in
general, but that is a different (and somewhat related) issue.
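The argument above can be modeled with a small self-contained sketch (the `fake_*` names are purely illustrative, not the real ACPICA code): on both the edge-triggered and the level-triggered path, the status bit ends up cleared before the GPE is re-enabled, so the earlier clear in acpi_ev_enable_gpe() makes no difference to the dispatch itself.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model; fake_* names are illustrative, not ACPICA API. */
enum fake_trigger { FAKE_EDGE, FAKE_LEVEL };

struct fake_gpe {
	enum fake_trigger trigger;
	bool enabled;
	bool status;
	int method_runs;	/* times _Lxx/_Exx "ran" */
};

/* Models the finish step: for level-triggered GPEs the status is
 * cleared after the method runs; the GPE is re-enabled only here. */
static void fake_finish_gpe(struct fake_gpe *gpe)
{
	if (gpe->trigger == FAKE_LEVEL)
		gpe->status = false;
	gpe->enabled = true;
}

/* Models the dispatch flow described above: disable up front, clear
 * status immediately for edge-triggered GPEs, run the method, finish. */
static void fake_gpe_dispatch(struct fake_gpe *gpe)
{
	gpe->enabled = false;		/* disabled up front */
	if (gpe->trigger == FAKE_EDGE)
		gpe->status = false;	/* edge: cleared unconditionally */
	gpe->method_runs++;		/* _Lxx/_Exx queued and executed */
	fake_finish_gpe(gpe);
}
```

Either way the dispatch runs once and the status is clear before the re-enable, which is the point being made.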

Thanks,
Rafael
Lv Zheng Aug. 15, 2017, 9:59 a.m. | #4
Hi,

> From: Rafael J. Wysocki [mailto:rjw@rjwysocki.net]
> Subject: Re: [PATCH 1/3] ACPICA: Dispatch active GPEs at init time
> 
> On Friday, August 11, 2017 7:40:56 AM CEST Zheng, Lv wrote:
> > [...]
> 
> I'm not sure if I understand you correctly, but why would we poll it?
> 
> In the $subject patch the status is checked and then
> acpi_ev_add_gpe_reference() is called to add a reference to the GPE.
> 
> If this is the first reference (which will be the case in the majority
> of cases), acpi_ev_enable_gpe() will be called and that will clear the
> status.
> 
> Then, acpi_ev_gpe_dispatch() is called if the status was set and that
> itself doesn't check the status.  It disables the GPE upfront (so the
> status doesn't matter from now on until the GPE is enabled again) and
> clears the status unconditionally if the GPE is edge-triggered.  This
> means that for edge-triggered GPEs the clearing of the status by
> acpi_ev_enable_gpe() doesn't matter here.

No problem, I understood.
And I was thinking that this patch [edge GPE dispatch fix] should be
correct and good for current Linux upstream.

What I meant is:
PATCH [edge GPE clear fix] https://patchwork.kernel.org/patch/9894983/
is a fix we need upstream, as it is the only possible fix for the
issue it addresses.
On top of that, when acpi_ev_enable_gpe() is called, the GPE status won't
be cleared, and then things can be done in a simpler way:
PATCH [edge GPE enable fix] https://patchwork.kernel.org/patch/9894989/

As [edge GPE clear fix] is risky, I think [edge GPE dispatch fix] is OK
for Linux upstream.

So we can take one of two paths:
1. Merge [edge GPE dispatch fix] and let [edge GPE clear fix] and
   [edge GPE enable fix] be released from ACPICA upstream so that:
   1. We can enhance them in ACPICA upstream.
   2. It will be safer, regression-wise, for us to merge [edge GPE clear fix].
2. Merge [edge GPE clear fix] and [edge GPE enable fix] without
   merging [edge GPE dispatch fix].

In other words:
It's up to you to decide which path we should take. :)

> 
> For level-triggered GPEs the status is not cleared by acpi_ev_gpe_dispatch()
> itself, but _Lxx is executed (again, without checking the status) and
> finally the status is cleared by acpi_ev_finish_gpe() anyway, so we
> end up with cleared status no matter what (and that before re-enabling
> the GPE).  And even if _Lxx itself checked the status (but honestly why
> would it?), then this is a level-triggered GPE, so it will re-trigger
> anyway if not handled this time.

Let's skip the level-triggered GPE case here.

> 
> So it looks like the fact that acpi_ev_enable_gpe() clears the status before
> enabling the GPE doesn't matter for this patch, although it may matter in
> general, but that is a different (and somewhat related) issue.

I agree.

Thanks and best regards
Lv
Rafael J. Wysocki Aug. 15, 2017, 12:21 p.m. | #5
On Tuesday, August 15, 2017 11:59:00 AM CEST Zheng, Lv wrote:
> Hi,
> 
> > From: Rafael J. Wysocki [mailto:rjw@rjwysocki.net]
> > Subject: Re: [PATCH 1/3] ACPICA: Dispatch active GPEs at init time
> > 
> > [...]
> 
> No problem, I understood.
> And I was thinking that this patch [edge GPE dispatch fix] should be
> correct and good for current Linux upstream.
> 
> What I meant is:
> PATCH [edge GPE clear fix] https://patchwork.kernel.org/patch/9894983/
> is a fix we need upstream, as it is the only possible fix for the
> issue it addresses.
> On top of that, when acpi_ev_enable_gpe() is called, the GPE status won't
> be cleared, and then things can be done in a simpler way:
> PATCH [edge GPE enable fix] https://patchwork.kernel.org/patch/9894989/
> 
> As [edge GPE clear fix] is risky, I think [edge GPE dispatch fix] is OK
> for Linux upstream.
> 
> So we can take one of two paths:
> 1. Merge [edge GPE dispatch fix] and let [edge GPE clear fix] and
>    [edge GPE enable fix] be released from ACPICA upstream so that:
>    1. We can enhance them in ACPICA upstream.
>    2. It will be safer, regression-wise, for us to merge [edge GPE clear fix].
> 2. Merge [edge GPE clear fix] and [edge GPE enable fix] without
>    merging [edge GPE dispatch fix].
> 
> In other words:
> It's up to you to decide which path we should take. :)

OK

So I would go with what is already there in my tree.  Please work on top of
that.

Thanks,
Rafael
Lv Zheng Aug. 17, 2017, 2:24 a.m. | #6
Hi,

> From: linux-acpi-owner@vger.kernel.org [mailto:linux-acpi-owner@vger.kernel.org] On Behalf Of Rafael J.
> Wysocki
> Subject: Re: [PATCH 1/3] ACPICA: Dispatch active GPEs at init time
> 
> [...]
> 
> OK
> 
> So I would go with what is already there in my tree.  Please work on top of
> that.

OK.
Then I'll do that via ACPICA upstream, to get more eyes on the enhancements
and more time to create the stable baseline for the [edge GPE clear fix].

Best regards,
Lv
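The poll-after-enable pattern in the patch below can be sketched as a minimal self-contained model (the `fake_*` names are purely illustrative, not the real ACPICA API), assuming the "GPE clear" fix is in place so that enabling no longer clears STS:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical model; fake_* names are illustrative, not ACPICA API. */
struct fake_gpe {
	bool enabled;
	bool status;	/* STS bit */
	int dispatched;
};

/* Models one detection pass (cf. acpi_ev_gpe_detect): dispatch every
 * GPE that is both enabled and has its status bit set. */
static void fake_detect(struct fake_gpe *gpes, size_t n)
{
	for (size_t i = 0; i < n; i++) {
		if (gpes[i].enabled && gpes[i].status) {
			gpes[i].status = false;
			gpes[i].dispatched++;
		}
	}
}

/* Enable one GPE, then poll once, mirroring the acpi_enable_gpe()
 * change below; assumes enabling no longer clears STS. */
static void fake_enable_and_poll(struct fake_gpe *gpes, size_t n, size_t i)
{
	gpes[i].enabled = true;
	fake_detect(gpes, n);
}
```

The single detect pass after enabling is what picks up an event whose STS bit was already set before the GPE was ever enabled.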

Patch

Index: linux-acpica/drivers/acpi/acpica/evxfgpe.c
===================================================================
--- linux-acpica.orig/drivers/acpi/acpica/evxfgpe.c
+++ linux-acpica/drivers/acpi/acpica/evxfgpe.c
@@ -97,6 +97,14 @@  acpi_status acpi_update_all_gpes(void)
 unlock_and_exit:
 	(void)acpi_ut_release_mutex(ACPI_MTX_EVENTS);
 
+	/*
+	 * Poll GPEs to handle already-triggered events. On some hardware,
+	 * an edge-triggered GPE that is already active will not trigger an
+	 * interrupt after being enabled, so software must poll it once.
+	 */
+	if (acpi_gbl_all_gpes_initialized) {
+		acpi_ev_gpe_detect(acpi_gbl_gpe_xrupt_list_head);
+	}
 	return_ACPI_STATUS(status);
 }
 
@@ -120,6 +128,7 @@  acpi_status acpi_enable_gpe(acpi_handle
 	acpi_status status = AE_BAD_PARAMETER;
 	struct acpi_gpe_event_info *gpe_event_info;
 	acpi_cpu_flags flags;
+	u8 poll_gpes = FALSE;
 
 	ACPI_FUNCTION_TRACE(acpi_enable_gpe);
 
@@ -135,12 +144,25 @@  acpi_status acpi_enable_gpe(acpi_handle
 		if (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) !=
 		    ACPI_GPE_DISPATCH_NONE) {
 			status = acpi_ev_add_gpe_reference(gpe_event_info);
+			if (ACPI_SUCCESS(status) &&
+			    gpe_event_info->runtime_count == 1) {
+				poll_gpes = TRUE;
+			}
 		} else {
 			status = AE_NO_HANDLER;
 		}
 	}
 
 	acpi_os_release_lock(acpi_gbl_gpe_lock, flags);
+
+	/*
+	 * Poll GPEs to handle already-triggered events. On some hardware,
+	 * an edge-triggered GPE that is already active will not trigger an
+	 * interrupt after being enabled, so software must poll it once.
+	 */
+	if (poll_gpes && acpi_gbl_all_gpes_initialized) {
+		acpi_ev_gpe_detect(acpi_gbl_gpe_xrupt_list_head);
+	}
 	return_ACPI_STATUS(status);
 }
 ACPI_EXPORT_SYMBOL(acpi_enable_gpe)
Index: linux-acpica/drivers/acpi/acpica/utxfinit.c
===================================================================
--- linux-acpica.orig/drivers/acpi/acpica/utxfinit.c
+++ linux-acpica/drivers/acpi/acpica/utxfinit.c
@@ -284,6 +284,13 @@  acpi_status ACPI_INIT_FUNCTION acpi_init
 	}
 
 	/*
+	 * Clean up the GPE enable state to make sure that the GPE settings
+	 * are what the OSPM expects. This should be done before enumerating
+	 * ACPI devices and operation region drivers.
+	 */
+	(void)acpi_hw_disable_all_gpes();
+
+	/*
 	 * Initialize all device/region objects in the namespace. This runs
 	 * the device _STA and _INI methods and region _REG methods.
 	 */