From patchwork Wed Aug 29 01:51:30 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Manlunas, Felix"
X-Patchwork-Id: 963215
X-Patchwork-Delegate: davem@davemloft.net
Date: Tue, 28 Aug 2018 18:51:30 -0700
From: Felix Manlunas
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, raghu.vatsavayi@cavium.com,
    derek.chickles@cavium.com, satananda.burla@cavium.com,
    felix.manlunas@cavium.com, weilin.chang@cavium.com
Subject: [PATCH net-next 1/4] liquidio: improve soft command handling
Message-ID: <20180829015130.GA7915@felix-thinkpad.cavium.com>
References: <20180829015058.GA7898@felix-thinkpad.cavium.com>
In-Reply-To: <20180829015058.GA7898@felix-thinkpad.cavium.com>
Content-Disposition: inline
User-Agent: Mutt/1.6.1 (2016-04-27)
Sender: netdev-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: netdev@vger.kernel.org

1. Set LIO_SC_MAX_TMO_MS as the maximum timeout value for a soft command
   (sc).  All sc's use this value as a hard timeout value.  Add expiry_time
   to struct octeon_soft_command to hold the hard timeout value.  The fields
   wait_time and timeout in struct octeon_soft_command will be obsoleted in
   the last patch of this series.

2. Add support for processing a synchronous sc in the sc response thread,
   lio_process_ordered_list().  The memory allocated for a synchronous sc
   will be freed back to the sc pool by lio_process_ordered_list().

3. Add two response lists that lio_process_ordered_list() uses to manage
   the storage allocated for sc's:

   OCTEON_DONE_SC_LIST keeps all sc's that will be freed back to the pool
   after their requestors have finished processing the responses.

   OCTEON_ZOMBIE_SC_LIST keeps all sc's that have hit the LIO_SC_MAX_TMO_MS
   timeout.  When an sc hits its hard timeout, lio_process_ordered_list()
   rechecks its status 1 ms later.  If the firmware still has not updated
   the status by then, the sc is moved from the OCTEON_DONE_SC_LIST
   response list to the OCTEON_ZOMBIE_SC_LIST response list.  The sc's in
   OCTEON_ZOMBIE_SC_LIST are freed when the driver is unloaded.
Signed-off-by: Weilin Chang
Signed-off-by: Felix Manlunas
---
 drivers/net/ethernet/cavium/liquidio/lio_main.c    |  31 +++++-
 drivers/net/ethernet/cavium/liquidio/lio_vf_main.c |  34 +++++-
 .../net/ethernet/cavium/liquidio/octeon_config.h   |   2 +-
 drivers/net/ethernet/cavium/liquidio/octeon_iq.h   |  11 ++
 drivers/net/ethernet/cavium/liquidio/octeon_nic.c  |   3 +-
 .../net/ethernet/cavium/liquidio/request_manager.c | 114 +++++++++++++------
 .../ethernet/cavium/liquidio/response_manager.c    |  82 +++++++++++++--
 .../ethernet/cavium/liquidio/response_manager.h    |   4 +-
 8 files changed, 232 insertions(+), 49 deletions(-)

diff --git a/drivers/net/ethernet/cavium/liquidio/lio_main.c b/drivers/net/ethernet/cavium/liquidio/lio_main.c
index 6fb13fa..6663749 100644
--- a/drivers/net/ethernet/cavium/liquidio/lio_main.c
+++ b/drivers/net/ethernet/cavium/liquidio/lio_main.c
@@ -1037,12 +1037,12 @@ static void octeon_destroy_resources(struct octeon_device *oct)
 		/* fallthrough */
 	case OCT_DEV_IO_QUEUES_DONE:
-		if (wait_for_pending_requests(oct))
-			dev_err(&oct->pci_dev->dev, "There were pending requests\n");
-
 		if (lio_wait_for_instr_fetch(oct))
 			dev_err(&oct->pci_dev->dev, "IQ had pending instructions\n");
 
+		if (wait_for_pending_requests(oct))
+			dev_err(&oct->pci_dev->dev, "There were pending requests\n");
+
 		/* Disable the input and output queues now. No more packets will
 		 * arrive from Octeon, but we should wait for all packet
 		 * processing to finish.
@@ -1052,6 +1052,31 @@ static void octeon_destroy_resources(struct octeon_device *oct)
 		if (lio_wait_for_oq_pkts(oct))
 			dev_err(&oct->pci_dev->dev, "OQ had pending packets\n");
 
+		/* Force all requests waiting to be fetched by OCTEON to
+		 * complete.
+		 */
+		for (i = 0; i < MAX_OCTEON_INSTR_QUEUES(oct); i++) {
+			struct octeon_instr_queue *iq;
+
+			if (!(oct->io_qmask.iq & BIT_ULL(i)))
+				continue;
+			iq = oct->instr_queue[i];
+
+			if (atomic_read(&iq->instr_pending)) {
+				spin_lock_bh(&iq->lock);
+				iq->fill_cnt = 0;
+				iq->octeon_read_index = iq->host_write_index;
+				iq->stats.instr_processed +=
+					atomic_read(&iq->instr_pending);
+				lio_process_iq_request_list(oct, iq, 0);
+				spin_unlock_bh(&iq->lock);
+			}
+		}
+
+		lio_process_ordered_list(oct, 1);
+		octeon_free_sc_done_list(oct);
+		octeon_free_sc_zombie_list(oct);
+
 	/* fallthrough */
 	case OCT_DEV_INTR_SET_DONE:
 		/* Disable interrupts */
diff --git a/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c b/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c
index b778357..59c2dd9 100644
--- a/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c
+++ b/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c
@@ -471,12 +471,12 @@ static void octeon_destroy_resources(struct octeon_device *oct)
 	case OCT_DEV_HOST_OK:
 		/* fallthrough */
 	case OCT_DEV_IO_QUEUES_DONE:
-		if (wait_for_pending_requests(oct))
-			dev_err(&oct->pci_dev->dev, "There were pending requests\n");
-
 		if (lio_wait_for_instr_fetch(oct))
 			dev_err(&oct->pci_dev->dev, "IQ had pending instructions\n");
 
+		if (wait_for_pending_requests(oct))
+			dev_err(&oct->pci_dev->dev, "There were pending requests\n");
+
 		/* Disable the input and output queues now. No more packets will
 		 * arrive from Octeon, but we should wait for all packet
 		 * processing to finish.
@@ -485,7 +485,33 @@ static void octeon_destroy_resources(struct octeon_device *oct)
 		if (lio_wait_for_oq_pkts(oct))
 			dev_err(&oct->pci_dev->dev, "OQ had pending packets\n");
-		/* fall through */
+
+		/* Force all requests waiting to be fetched by OCTEON to
+		 * complete.
+		 */
+		for (i = 0; i < MAX_OCTEON_INSTR_QUEUES(oct); i++) {
+			struct octeon_instr_queue *iq;
+
+			if (!(oct->io_qmask.iq & BIT_ULL(i)))
+				continue;
+			iq = oct->instr_queue[i];
+
+			if (atomic_read(&iq->instr_pending)) {
+				spin_lock_bh(&iq->lock);
+				iq->fill_cnt = 0;
+				iq->octeon_read_index = iq->host_write_index;
+				iq->stats.instr_processed +=
+					atomic_read(&iq->instr_pending);
+				lio_process_iq_request_list(oct, iq, 0);
+				spin_unlock_bh(&iq->lock);
+			}
+		}
+
+		lio_process_ordered_list(oct, 1);
+		octeon_free_sc_done_list(oct);
+		octeon_free_sc_zombie_list(oct);
+
+		/* fall through */
 	case OCT_DEV_INTR_SET_DONE:
 		/* Disable interrupts */
 		oct->fn_list.disable_interrupt(oct, OCTEON_ALL_INTR);
diff --git a/drivers/net/ethernet/cavium/liquidio/octeon_config.h b/drivers/net/ethernet/cavium/liquidio/octeon_config.h
index ceac743..056dceb 100644
--- a/drivers/net/ethernet/cavium/liquidio/octeon_config.h
+++ b/drivers/net/ethernet/cavium/liquidio/octeon_config.h
@@ -440,7 +440,7 @@ struct octeon_config {
 /* Response lists - 1 ordered, 1 unordered-blocking, 1 unordered-nonblocking
  * NoResponse Lists are now maintained with each IQ. (Dec' 2007).
  */
-#define MAX_RESPONSE_LISTS 4
+#define MAX_RESPONSE_LISTS 6
 
 /* Opcode hash bits. The opcode is hashed on the lower 6-bits to lookup the
  * dispatch table.
diff --git a/drivers/net/ethernet/cavium/liquidio/octeon_iq.h b/drivers/net/ethernet/cavium/liquidio/octeon_iq.h
index aecd0d3..3437d7f 100644
--- a/drivers/net/ethernet/cavium/liquidio/octeon_iq.h
+++ b/drivers/net/ethernet/cavium/liquidio/octeon_iq.h
@@ -294,11 +294,20 @@ struct octeon_soft_command {
 	/** Time out and callback */
 	size_t wait_time;
 	size_t timeout;
+	size_t expiry_time;
+
 	u32 iq_no;
 	void (*callback)(struct octeon_device *, u32, void *);
 	void *callback_arg;
+
+	int caller_is_done;
+	u32 sc_status;
+	struct completion complete;
 };
 
+/* max timeout (in milli sec) for soft request */
+#define LIO_SC_MAX_TMO_MS 60000
+
 /** Maximum number of buffers to allocate into soft command buffer pool */
 #define MAX_SOFT_COMMAND_BUFFERS 256
 
@@ -319,6 +328,8 @@ struct octeon_sc_buffer_pool {
 	(((octeon_dev_ptr)->instr_queue[iq_no]->stats.field) += count)
 
 int octeon_setup_sc_buffer_pool(struct octeon_device *oct);
+int octeon_free_sc_done_list(struct octeon_device *oct);
+int octeon_free_sc_zombie_list(struct octeon_device *oct);
 int octeon_free_sc_buffer_pool(struct octeon_device *oct);
 struct octeon_soft_command *
 octeon_alloc_soft_command(struct octeon_device *oct,
diff --git a/drivers/net/ethernet/cavium/liquidio/octeon_nic.c b/drivers/net/ethernet/cavium/liquidio/octeon_nic.c
index 150609b..b7364bb 100644
--- a/drivers/net/ethernet/cavium/liquidio/octeon_nic.c
+++ b/drivers/net/ethernet/cavium/liquidio/octeon_nic.c
@@ -75,8 +75,7 @@ octeon_alloc_soft_command_resp(struct octeon_device *oct,
 	else
 		sc->cmd.cmd2.rptr = sc->dmarptr;
 
-	sc->wait_time = 1000;
-	sc->timeout = jiffies + sc->wait_time;
+	sc->expiry_time = jiffies + msecs_to_jiffies(LIO_SC_MAX_TMO_MS);
 
 	return sc;
 }
diff --git a/drivers/net/ethernet/cavium/liquidio/request_manager.c b/drivers/net/ethernet/cavium/liquidio/request_manager.c
index 5de5ce9..bd0153e 100644
--- a/drivers/net/ethernet/cavium/liquidio/request_manager.c
+++ b/drivers/net/ethernet/cavium/liquidio/request_manager.c
@@ -409,33 +409,22 @@ lio_process_iq_request_list(struct octeon_device *oct,
 			else
 				irh = (struct octeon_instr_irh *)
 					&sc->cmd.cmd2.irh;
-			if (irh->rflag) {
-				/* We're expecting a response from Octeon.
-				 * It's up to lio_process_ordered_list() to
-				 * process sc. Add sc to the ordered soft
-				 * command response list because we expect
-				 * a response from Octeon.
-				 */
-				spin_lock_irqsave
-					(&oct->response_list
-					 [OCTEON_ORDERED_SC_LIST].lock,
-					 flags);
-				atomic_inc(&oct->response_list
-					   [OCTEON_ORDERED_SC_LIST].
-					   pending_req_count);
-				list_add_tail(&sc->node, &oct->response_list
-					[OCTEON_ORDERED_SC_LIST].head);
-				spin_unlock_irqrestore
-					(&oct->response_list
-					 [OCTEON_ORDERED_SC_LIST].lock,
-					 flags);
-			} else {
-				if (sc->callback) {
-					/* This callback must not sleep */
-					sc->callback(oct, OCTEON_REQUEST_DONE,
-						     sc->callback_arg);
-				}
-			}
+
+			/* We're expecting a response from Octeon.
+			 * It's up to lio_process_ordered_list() to
+			 * process sc. Add sc to the ordered soft
+			 * command response list because we expect
+			 * a response from Octeon.
+			 */
+			spin_lock_irqsave(&oct->response_list
+					  [OCTEON_ORDERED_SC_LIST].lock, flags);
+			atomic_inc(&oct->response_list
+				   [OCTEON_ORDERED_SC_LIST].pending_req_count);
+			list_add_tail(&sc->node, &oct->response_list
+				      [OCTEON_ORDERED_SC_LIST].head);
+			spin_unlock_irqrestore(&oct->response_list
+					       [OCTEON_ORDERED_SC_LIST].lock,
+					       flags);
 			break;
 		default:
 			dev_err(&oct->pci_dev->dev,
@@ -755,8 +744,7 @@ int octeon_send_soft_command(struct octeon_device *oct,
 		len = (u32)ih2->dlengsz;
 	}
 
-	if (sc->wait_time)
-		sc->timeout = jiffies + sc->wait_time;
+	sc->expiry_time = jiffies + msecs_to_jiffies(LIO_SC_MAX_TMO_MS);
 
 	return (octeon_send_command(oct, sc->iq_no, 1, &sc->cmd, sc,
 				    len, REQTYPE_SOFT_COMMAND));
@@ -791,11 +779,76 @@ int octeon_setup_sc_buffer_pool(struct octeon_device *oct)
 	return 0;
 }
 
+int octeon_free_sc_done_list(struct octeon_device *oct)
+{
+	struct octeon_response_list *done_sc_list, *zombie_sc_list;
+	struct octeon_soft_command *sc;
+	struct list_head *tmp, *tmp2;
+	spinlock_t *sc_lists_lock; /* lock for response_list */
+
+	done_sc_list = &oct->response_list[OCTEON_DONE_SC_LIST];
+	zombie_sc_list = &oct->response_list[OCTEON_ZOMBIE_SC_LIST];
+
+	if (!atomic_read(&done_sc_list->pending_req_count))
+		return 0;
+
+	sc_lists_lock = &oct->response_list[OCTEON_ORDERED_SC_LIST].lock;
+
+	spin_lock_bh(sc_lists_lock);
+
+	list_for_each_safe(tmp, tmp2, &done_sc_list->head) {
+		sc = list_entry(tmp, struct octeon_soft_command, node);
+
+		if (READ_ONCE(sc->caller_is_done)) {
+			list_del(&sc->node);
+			atomic_dec(&done_sc_list->pending_req_count);
+
+			if (*sc->status_word == COMPLETION_WORD_INIT) {
+				/* timeout; move sc to zombie list */
+				list_add_tail(&sc->node,
+					      &zombie_sc_list->head);
+				atomic_inc(&zombie_sc_list->pending_req_count);
+			} else {
+				octeon_free_soft_command(oct, sc);
+			}
+		}
+	}
+
+	spin_unlock_bh(sc_lists_lock);
+
+	return 0;
+}
+
+int octeon_free_sc_zombie_list(struct octeon_device *oct)
+{
+	struct octeon_response_list *zombie_sc_list;
+	struct octeon_soft_command *sc;
+	struct list_head *tmp, *tmp2;
+	spinlock_t *sc_lists_lock; /* lock for response_list */
+
+	zombie_sc_list = &oct->response_list[OCTEON_ZOMBIE_SC_LIST];
+	sc_lists_lock = &oct->response_list[OCTEON_ORDERED_SC_LIST].lock;
+
+	spin_lock_bh(sc_lists_lock);
+
+	list_for_each_safe(tmp, tmp2, &zombie_sc_list->head) {
+		list_del(tmp);
+		atomic_dec(&zombie_sc_list->pending_req_count);
+		sc = list_entry(tmp, struct octeon_soft_command, node);
+		octeon_free_soft_command(oct, sc);
+	}
+
+	spin_unlock_bh(sc_lists_lock);
+
+	return 0;
+}
+
 int octeon_free_sc_buffer_pool(struct octeon_device *oct)
 {
 	struct list_head *tmp, *tmp2;
 	struct octeon_soft_command *sc;
 
+	octeon_free_sc_zombie_list(oct);
+
 	spin_lock_bh(&oct->sc_buf_pool.lock);
 
 	list_for_each_safe(tmp, tmp2, &oct->sc_buf_pool.head) {
@@ -824,6 +877,9 @@ struct octeon_soft_command *octeon_alloc_soft_command(struct octeon_device *oct,
 	struct octeon_soft_command *sc = NULL;
 	struct list_head *tmp;
 
+	if (!rdatasize)
+		rdatasize = 16;
+
 	WARN_ON((offset + datasize + rdatasize + ctxsize) >
 		SOFT_COMMAND_BUFFER_SIZE);
diff --git a/drivers/net/ethernet/cavium/liquidio/response_manager.c b/drivers/net/ethernet/cavium/liquidio/response_manager.c
index fe5b537..ac7747c 100644
--- a/drivers/net/ethernet/cavium/liquidio/response_manager.c
+++ b/drivers/net/ethernet/cavium/liquidio/response_manager.c
@@ -69,6 +69,8 @@ int lio_process_ordered_list(struct octeon_device *octeon_dev,
 	u32 status;
 	u64 status64;
 
+	octeon_free_sc_done_list(octeon_dev);
+
 	ordered_sc_list = &octeon_dev->response_list[OCTEON_ORDERED_SC_LIST];
 
 	do {
@@ -111,26 +113,88 @@ int lio_process_ordered_list(struct octeon_device *octeon_dev,
 				}
 			}
 		}
-	} else if (force_quit || (sc->timeout &&
-		time_after(jiffies, (unsigned long)sc->timeout))) {
-		dev_err(&octeon_dev->pci_dev->dev, "%s: cmd failed, timeout (%ld, %ld)\n",
-			__func__, (long)jiffies, (long)sc->timeout);
+	} else if (unlikely(force_quit) || (sc->expiry_time &&
+		time_after(jiffies, (unsigned long)sc->expiry_time))) {
+		struct octeon_instr_irh *irh =
+			(struct octeon_instr_irh *)&sc->cmd.cmd3.irh;
+
+		dev_err(&octeon_dev->pci_dev->dev, "%s: ", __func__);
+		dev_err(&octeon_dev->pci_dev->dev,
+			"cmd %x/%x/%llx/%llx failed, ",
+			irh->opcode, irh->subcode,
+			sc->cmd.cmd3.ossp[0], sc->cmd.cmd3.ossp[1]);
+		dev_err(&octeon_dev->pci_dev->dev,
+			"timeout (%ld, %ld)\n",
+			(long)jiffies, (long)sc->expiry_time);
 		status = OCTEON_REQUEST_TIMEOUT;
 	}
 
 	if (status != OCTEON_REQUEST_PENDING) {
+		sc->sc_status = status;
+
 		/* we have received a response or we have timed out */
 		/* remove node from linked list */
 		list_del(&sc->node);
 		atomic_dec(&octeon_dev->response_list
-			[OCTEON_ORDERED_SC_LIST].
-			pending_req_count);
-		spin_unlock_bh
-			(&ordered_sc_list->lock);
+			   [OCTEON_ORDERED_SC_LIST].
+			   pending_req_count);
+
+		if (!sc->callback) {
+			atomic_inc(&octeon_dev->response_list
+				   [OCTEON_DONE_SC_LIST].
+				   pending_req_count);
+			list_add_tail(&sc->node,
+				      &octeon_dev->response_list
+				      [OCTEON_DONE_SC_LIST].head);
+
+			if (unlikely(READ_ONCE(sc->caller_is_done))) {
+				/* caller does not wait for response
+				 * from firmware
+				 */
+				if (status != OCTEON_REQUEST_DONE) {
+					struct octeon_instr_irh *irh;
+
+					irh =
+					    (struct octeon_instr_irh *)
+					    &sc->cmd.cmd3.irh;
+					dev_dbg
+					    (&octeon_dev->pci_dev->dev,
+					     "%s: sc failed: opcode=%x, ",
+					     __func__, irh->opcode);
+					dev_dbg
+					    (&octeon_dev->pci_dev->dev,
+					     "subcode=%x, ossp[0]=%llx, ",
+					     irh->subcode,
+					     sc->cmd.cmd3.ossp[0]);
+					dev_dbg
+					    (&octeon_dev->pci_dev->dev,
+					     "ossp[1]=%llx, status=%d\n",
+					     sc->cmd.cmd3.ossp[1],
+					     status);
+				}
+			} else {
+				complete(&sc->complete);
+			}
+
+			spin_unlock_bh(&ordered_sc_list->lock);
+		} else {
+			/* sc with callback function */
+			if (status == OCTEON_REQUEST_TIMEOUT) {
+				atomic_inc(&octeon_dev->response_list
+					   [OCTEON_ZOMBIE_SC_LIST].
+					   pending_req_count);
+				list_add_tail(&sc->node,
+					      &octeon_dev->response_list
+					      [OCTEON_ZOMBIE_SC_LIST].
+					      head);
+			}
+
+			spin_unlock_bh(&ordered_sc_list->lock);
 
-		if (sc->callback)
 			sc->callback(octeon_dev, status,
 				     sc->callback_arg);
+			/* sc is freed by caller */
+		}
 
 		request_complete++;
diff --git a/drivers/net/ethernet/cavium/liquidio/response_manager.h b/drivers/net/ethernet/cavium/liquidio/response_manager.h
index 9169c28..ed4020d 100644
--- a/drivers/net/ethernet/cavium/liquidio/response_manager.h
+++ b/drivers/net/ethernet/cavium/liquidio/response_manager.h
@@ -53,7 +53,9 @@ enum {
 	OCTEON_ORDERED_LIST = 0,
 	OCTEON_UNORDERED_NONBLOCKING_LIST = 1,
 	OCTEON_UNORDERED_BLOCKING_LIST = 2,
-	OCTEON_ORDERED_SC_LIST = 3
+	OCTEON_ORDERED_SC_LIST = 3,
+	OCTEON_DONE_SC_LIST = 4,
+	OCTEON_ZOMBIE_SC_LIST = 5
 };
 
 /** Response Order values for a Octeon Request. */