From patchwork Mon May 14 13:47:09 2012
X-Patchwork-Submitter: Jiang Liu
X-Patchwork-Id: 159006
From: Jiang Liu
To: Dan Williams, Maciej Sosnowski, Vinod Koul
Cc: Jiang Liu, Keping Chen, linux-kernel@vger.kernel.org, linux-pci@vger.kernel.org
Subject: [RFC PATCH v2 7/7] dmaengine: assign DMA channel to CPU according to NUMA affinity
Date: Mon, 14 May 2012 21:47:09 +0800
Message-Id: <1337003229-9158-8-git-send-email-jiang.liu@huawei.com>
In-Reply-To: <1337003229-9158-1-git-send-email-jiang.liu@huawei.com>
References: <1337003229-9158-1-git-send-email-jiang.liu@huawei.com>
X-Mailing-List: linux-pci@vger.kernel.org

From: Jiang Liu

On systems with multiple CPUs and DMA devices, try to optimize DMA
performance by assigning DMA channels to CPUs according to their NUMA
affinity. This may help architectures with memory controllers and DMA
devices built into the same physical processor avoid unnecessary
cross-socket traffic.
Signed-off-by: Jiang Liu
---
 drivers/dma/dmaengine.c | 45 +++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 43 insertions(+), 2 deletions(-)

diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index eca45c0..8a41bdf 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -266,6 +266,7 @@ static dma_cap_mask_t dma_cap_mask_all;
 struct dma_chan_tbl_ent {
         struct dma_chan *chan;
         struct dma_chan *prev_chan;
+        int node;
 };
 
 /**
@@ -467,6 +468,46 @@ static void dma_channel_quiesce(void)
 #endif
 }
 
+/* Assign DMA channels to CPUs according to NUMA affinity relationship */
+static void dma_channel_set(int cap, int cpu, struct dma_chan *chan)
+{
+        int node;
+        int src_cpu;
+        struct dma_chan *src_chan;
+        struct dma_chan_tbl_ent *entry;
+        struct dma_chan_tbl_ent *src_entry;
+
+        entry = per_cpu_ptr(channel_table[cap], cpu);
+        node = dev_to_node(chan->device->dev);
+
+        /* Try to optimize if the CPU and DMA channel belong to different nodes. */
+        if (node != -1 && node != cpu_to_node(cpu)) {
+                for_each_online_cpu(src_cpu) {
+                        src_entry = per_cpu_ptr(channel_table[cap], src_cpu);
+                        src_chan = src_entry->chan;
+
+                        /*
+                         * CPU online map may change beneath us due to
+                         * CPU hotplug operations.
+                         */
+                        if (src_chan == NULL)
+                                continue;
+
+                        if (src_entry->node == node ||
+                            cpu_to_node(src_cpu) == node) {
+                                entry->node = src_entry->node;
+                                src_entry->node = node;
+                                entry->chan = src_chan;
+                                src_entry->chan = chan;
+                                return;
+                        }
+                }
+        }
+
+        entry->node = node;
+        entry->chan = chan;
+}
+
 /**
  * dma_channel_rebalance - redistribute the available channels
  *
@@ -501,8 +542,8 @@ static void dma_channel_rebalance(bool quiesce)
                         chan = nth_chan(cap, n++);
                 else
                         chan = nth_chan(cap, -1);
-                entry = per_cpu_ptr(channel_table[cap], cpu);
-                entry->chan = chan;
+                if (chan)
+                        dma_channel_set(cap, cpu, chan);
         }
 
         if (quiesce)
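
For anyone who wants to experiment with the assignment policy outside the
kernel, below is a minimal user-space sketch of the swap logic in
dma_channel_set(). Everything in it is a hypothetical stand-in: plain arrays
model one capability's per-cpu channel_table, cpu_node[] models cpu_to_node(),
and the node field of struct chan models dev_to_node(chan->device->dev). It
only illustrates how a misplaced channel gets traded with another CPU's entry
so both end up node-local; it is not kernel code.

#include <stdio.h>

#define NR_CPUS 4

struct chan {
        int id;
        int node;       /* stands in for dev_to_node(chan->device->dev) */
};

struct tbl_ent {
        struct chan *chan;
        int node;
};

/* Assumed two-node topology; stands in for cpu_to_node(). */
static int cpu_node[NR_CPUS] = { 0, 0, 1, 1 };

/* Stands in for one capability's per-cpu channel_table. */
static struct tbl_ent table[NR_CPUS];

static void channel_set(int cpu, struct chan *chan)
{
        int node = chan->node;
        int src;

        /*
         * Same idea as the patch: if the channel is remote to this CPU,
         * look for another CPU's assignment to trade with so that both
         * entries end up node-local.
         */
        if (node != -1 && node != cpu_node[cpu]) {
                for (src = 0; src < NR_CPUS; src++) {
                        struct tbl_ent *se = &table[src];

                        if (se->chan == NULL)
                                continue;
                        if (se->node == node || cpu_node[src] == node) {
                                table[cpu].node = se->node;
                                se->node = node;
                                table[cpu].chan = se->chan;
                                se->chan = chan;
                                return;
                        }
                }
        }
        table[cpu].node = node;
        table[cpu].chan = chan;
}

int main(void)
{
        /* Deliberately misplaced channels: chan0 and chan3 start remote. */
        struct chan chans[NR_CPUS] = {
                { 0, 1 }, { 1, 0 }, { 2, 1 }, { 3, 0 },
        };
        int cpu;

        for (cpu = 0; cpu < NR_CPUS; cpu++)
                channel_set(cpu, &chans[cpu]);

        for (cpu = 0; cpu < NR_CPUS; cpu++)
                printf("cpu%d (node %d) -> chan%d (node %d)\n",
                       cpu, cpu_node[cpu],
                       table[cpu].chan->id, table[cpu].chan->node);
        return 0;
}

Running it prints each CPU paired with a channel on its own node, even though
chan0 and chan3 are initially offered to CPUs on the wrong node: the swap in
channel_set() repairs both mismatches.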