From patchwork Tue May 26 15:41:08 2015
X-Patchwork-Submitter: Kamal Mostafa
X-Patchwork-Id: 476571
From: Kamal Mostafa
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org,
 kernel-team@lists.ubuntu.com
Subject: [PATCH 3.19.y-ckt 64/92] xen/events: Clear cpu_evtchn_mask before resuming
Date: Tue, 26 May 2015 08:41:08 -0700
Message-Id: <1432654896-28478-65-git-send-email-kamal@canonical.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1432654896-28478-1-git-send-email-kamal@canonical.com>
References: <1432654896-28478-1-git-send-email-kamal@canonical.com>
X-Extended-Stable: 3.19
Cc: Kamal Mostafa, Boris Ostrovsky, David Vrabel

3.19.8-ckt1 -stable review patch.  If anyone has any objections, please let me know.

------------------

From: Boris Ostrovsky

commit 5cec98834989a014a9560b1841649eaca95cf00e upstream.

When a guest is resumed, the hypervisor may change event channel
assignments. If this happens and the guest uses 2-level events, it is
possible for an interrupt to be claimed by the wrong VCPU since
cpu_evtchn_mask bits may be stale. This can happen even though
evtchn_2l_bind_to_cpu() attempts to clear old bits: the irq_info that
is passed in is not necessarily the original one (from pre-migration
times) but is instead freshly allocated during resume, so any
information about which CPU the channel was bound to is lost. Thus we
should clear the mask during resume.

We also need to make sure that the bits for the xenstore and console
channels are set when those two subsystems are resumed. While
rebind_evtchn_irq() (which is invoked for both of them on a resume)
calls irq_set_affinity(), the latter will in fact postpone setting the
affinity until the interrupt is handled. But because cpu_evtchn_mask
will have the bits for these two channels cleared, we won't be able to
take the interrupt. With that in mind, we need to bind those two
channels explicitly in rebind_evtchn_irq(). We keep irq_set_affinity()
so that we still pass through the generic irq affinity code later, in
case something needs to be updated there as well.
(Also replace cpumask_of(0) with cpumask_of(info->cpu) in
rebind_evtchn_irq(): info->cpu should already have been set to zero by
the preceding xen_irq_info_evtchn_setup().)

Signed-off-by: Boris Ostrovsky
Reported-by: Annie Li
Signed-off-by: David Vrabel
Signed-off-by: Kamal Mostafa
---
 drivers/xen/events/events_2l.c   | 10 ++++++++++
 drivers/xen/events/events_base.c |  5 +++--
 2 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/events/events_2l.c b/drivers/xen/events/events_2l.c
index 5db43fc..7dd4631 100644
--- a/drivers/xen/events/events_2l.c
+++ b/drivers/xen/events/events_2l.c
@@ -345,6 +345,15 @@ irqreturn_t xen_debug_interrupt(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+static void evtchn_2l_resume(void)
+{
+	int i;
+
+	for_each_online_cpu(i)
+		memset(per_cpu(cpu_evtchn_mask, i), 0, sizeof(xen_ulong_t) *
+				EVTCHN_2L_NR_CHANNELS/BITS_PER_EVTCHN_WORD);
+}
+
 static const struct evtchn_ops evtchn_ops_2l = {
 	.max_channels = evtchn_2l_max_channels,
 	.nr_channels = evtchn_2l_max_channels,
@@ -356,6 +365,7 @@ static const struct evtchn_ops evtchn_ops_2l = {
 	.mask = evtchn_2l_mask,
 	.unmask = evtchn_2l_unmask,
 	.handle_events = evtchn_2l_handle_events,
+	.resume = evtchn_2l_resume,
 };
 
 void __init xen_evtchn_2l_init(void)
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 70fba97..a1ec564 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -1279,8 +1279,9 @@ void rebind_evtchn_irq(int evtchn, int irq)
 
 	mutex_unlock(&irq_mapping_update_lock);
 
-	/* new event channels are always bound to cpu 0 */
-	irq_set_affinity(irq, cpumask_of(0));
+	bind_evtchn_to_cpu(evtchn, info->cpu);
+	/* This will be deferred until interrupt is processed */
+	irq_set_affinity(irq, cpumask_of(info->cpu));
 
 	/* Unmask the event channel. */
 	enable_irq(irq);
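
For readers unfamiliar with the 2-level event channel ABI, the sketch
below illustrates why a stale or wiped cpu_evtchn_mask bit prevents a
VCPU from taking an event, which is the situation the explicit
bind_evtchn_to_cpu() call above avoids for the xenstore and console
channels. It is a simplified, standalone illustration and not the
kernel code: the names shared_info_2l, vcpu_info_2l and
event_deliverable() are invented for this example, and 64-bit event
words are assumed.

/*
 * Simplified model: an event on 'port' is delivered to a VCPU only if
 * it is pending, not globally masked, AND set in that VCPU's
 * per-CPU binding mask. Clearing the per-CPU mask (as resume does)
 * therefore hides the event from every VCPU until it is rebound.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define WORD_BITS 64u

typedef uint64_t evtchn_word_t;

struct shared_info_2l {
	evtchn_word_t pending[4];	/* global pending bits */
	evtchn_word_t mask[4];		/* global mask bits */
};

struct vcpu_info_2l {
	evtchn_word_t cpu_evtchn_mask[4];	/* per-VCPU binding bits */
};

/* Would this VCPU pick up event channel 'port'? */
static bool event_deliverable(const struct shared_info_2l *s,
			      const struct vcpu_info_2l *v,
			      unsigned int port)
{
	unsigned int idx = port / WORD_BITS;
	evtchn_word_t bit = (evtchn_word_t)1 << (port % WORD_BITS);

	return (s->pending[idx] & ~s->mask[idx] &
		v->cpu_evtchn_mask[idx] & bit) != 0;
}

int main(void)
{
	/* Port 5 is pending and unmasked globally. */
	struct shared_info_2l s = { .pending = { 1u << 5 }, .mask = { 0 } };
	/* VCPU 0's binding bits were wiped on resume; VCPU 1 was rebound. */
	struct vcpu_info_2l vcpu0 = { .cpu_evtchn_mask = { 0 } };
	struct vcpu_info_2l vcpu1 = { .cpu_evtchn_mask = { 1u << 5 } };

	printf("vcpu0 sees port 5: %d\n", event_deliverable(&s, &vcpu0, 5)); /* 0 */
	printf("vcpu1 sees port 5: %d\n", event_deliverable(&s, &vcpu1, 5)); /* 1 */
	return 0;
}

In this model, once evtchn_2l_resume() zeroes every per-VCPU mask, no
VCPU sees the pending bit, which is why the xenstore and console
channels must be rebound in rebind_evtchn_irq() rather than waiting for
the deferred irq_set_affinity() path.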