From patchwork Mon Mar 25 07:50:44 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alexey Kardashevskiy
X-Patchwork-Id: 1063887
From: Alexey Kardashevskiy
To: kvm-ppc@vger.kernel.org
Cc: Alexey Kardashevskiy, Paul Mackerras
Subject: [PATCH kernel] KVM: PPC: Fix compile without KVM_BOOK3S_HV_POSSIBLE
Date: Mon, 25 Mar 2019 18:50:44 +1100
Message-Id: <20190325075044.60240-1-aik@ozlabs.ru>
X-Mailer: git-send-email 2.17.1
Sender: kvm-ppc-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: kvm-ppc@vger.kernel.org

This moves kvmppc_rm_ioba_validate() under CONFIG_KVM_BOOK3S_HV_POSSIBLE
where it belongs, as the function cannot be called otherwise.

Fixes: 0230ca87ba64 ("KVM: PPC: Allocate guest TCEs on demand too", 2019-02-26)
Signed-off-by: Alexey Kardashevskiy
---
 arch/powerpc/kvm/book3s_64_vio_hv.c | 64 ++++++++++++++++++++++++++++++++--------------------------------
 1 file changed, 32 insertions(+), 32 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_vio_hv.c b/arch/powerpc/kvm/book3s_64_vio_hv.c
index 1cd9373f8bdc..3a2164b79887 100644
--- a/arch/powerpc/kvm/book3s_64_vio_hv.c
+++ b/arch/powerpc/kvm/book3s_64_vio_hv.c
@@ -158,38 +158,6 @@ static u64 *kvmppc_page_address(struct page *page)
 	return (u64 *) page_address(page);
 }
 
-/*
- * TCEs pages are allocated in kvmppc_tce_put() which won't be able to do so
- * in real mode.
- * Check if kvmppc_tce_put() can succeed in real mode, i.e. a TCEs page is
- * allocated or not required (when clearing a tce entry).
- */
-static long kvmppc_rm_ioba_validate(struct kvmppc_spapr_tce_table *stt,
-		unsigned long ioba, unsigned long npages, bool clearing)
-{
-	unsigned long i, idx, sttpage, sttpages;
-	unsigned long ret = kvmppc_ioba_validate(stt, ioba, npages);
-
-	if (ret)
-		return ret;
-	/*
-	 * clearing==true says kvmppc_tce_put won't be allocating pages
-	 * for empty tces.
-	 */
-	if (clearing)
-		return H_SUCCESS;
-
-	idx = (ioba >> stt->page_shift) - stt->offset;
-	sttpage = idx / TCES_PER_PAGE;
-	sttpages = _ALIGN_UP(idx % TCES_PER_PAGE + npages, TCES_PER_PAGE) /
-			TCES_PER_PAGE;
-	for (i = sttpage; i < sttpage + sttpages; ++i)
-		if (!stt->pages[i])
-			return H_TOO_HARD;
-
-	return H_SUCCESS;
-}
-
 /*
  * Handles TCE requests for emulated devices.
  * Puts guest TCE values to the table and expects user space to convert them.
@@ -259,6 +227,38 @@ long kvmppc_tce_to_ua(struct kvm *kvm, unsigned long tce,
 EXPORT_SYMBOL_GPL(kvmppc_tce_to_ua);
 
 #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
+/*
+ * TCEs pages are allocated in kvmppc_tce_put() which won't be able to do so
+ * in real mode.
+ * Check if kvmppc_tce_put() can succeed in real mode, i.e. a TCEs page is
+ * allocated or not required (when clearing a tce entry).
+ */
+static long kvmppc_rm_ioba_validate(struct kvmppc_spapr_tce_table *stt,
+		unsigned long ioba, unsigned long npages, bool clearing)
+{
+	unsigned long i, idx, sttpage, sttpages;
+	unsigned long ret = kvmppc_ioba_validate(stt, ioba, npages);
+
+	if (ret)
+		return ret;
+	/*
+	 * clearing==true says kvmppc_tce_put won't be allocating pages
+	 * for empty tces.
+	 */
+	if (clearing)
+		return H_SUCCESS;
+
+	idx = (ioba >> stt->page_shift) - stt->offset;
+	sttpage = idx / TCES_PER_PAGE;
+	sttpages = _ALIGN_UP(idx % TCES_PER_PAGE + npages, TCES_PER_PAGE) /
+			TCES_PER_PAGE;
+	for (i = sttpage; i < sttpage + sttpages; ++i)
+		if (!stt->pages[i])
+			return H_TOO_HARD;
+
+	return H_SUCCESS;
+}
+
 static long iommu_tce_xchg_rm(struct mm_struct *mm, struct iommu_table *tbl,
 		unsigned long entry, unsigned long *hpa,
 		enum dma_data_direction *direction)
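
Editor's note (not part of the patch): the change above boils down to keeping a static
helper inside the same #ifdef as its only callers. Below is a minimal standalone C sketch
of that pattern, assuming the usual failure mode where an unguarded but uncalled static
function trips gcc's -Wunused-function under -Werror; the file name and the functions
rm_only_helper()/rm_only_entry() are hypothetical, and only CONFIG_KVM_BOOK3S_HV_POSSIBLE
comes from the patch itself.

/* guard_sketch.c - hypothetical illustration, not kernel code. */

#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
/*
 * The helper lives under the same guard as its only caller, so a
 * !CONFIG_KVM_BOOK3S_HV_POSSIBLE build never sees an unused static
 * function. Had it been defined outside this #ifdef, gcc (with -Wall)
 * would warn "'rm_only_helper' defined but not used", and -Werror
 * would turn that warning into a build failure.
 */
static long rm_only_helper(long ioba)
{
	return ioba ? 0 : -1;
}

long rm_only_entry(long ioba)
{
	return rm_only_helper(ioba);
}
#endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */

Compiling this sketch with "gcc -Wall -Werror -c guard_sketch.c" succeeds whether or not
-DCONFIG_KVM_BOOK3S_HV_POSSIBLE is passed; moving rm_only_helper() outside the guard
reproduces the kind of !KVM_BOOK3S_HV_POSSIBLE breakage the patch title refers to, under
the stated assumption about the warning.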