From patchwork Mon Apr 14 09:37:56 2025
X-Patchwork-Submitter: Alexander Usyskin
X-Patchwork-Id: 2072194
From: Alexander Usyskin
To: Miquel Raynal, Richard Weinberger, Vignesh Raghavendra,
	Lucas De Marchi, Thomas Hellström, Rodrigo Vivi,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
	David Airlie, Simona Vetter, Jani Nikula, Joonas Lahtinen,
	Tvrtko Ursulin, Karthik Poosa
Cc: Reuven Abliyev, Oren Weil, linux-mtd@lists.infradead.org,
	intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	Alexander Usyskin, Tomas Winkler, Vitaly Lubart
Subject: [PATCH v8 05/12] mtd: intel-dg: register with mtd
Date: Mon, 14 Apr 2025 12:37:56 +0300
Message-ID: <20250414093803.2133463-6-alexander.usyskin@intel.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20250414093803.2133463-1-alexander.usyskin@intel.com>
References: <20250414093803.2133463-1-alexander.usyskin@intel.com>

Register the on-die nvm device with the mtd subsystem.

Refcount the nvm object in the _get and _put mtd callbacks.

For the erase operation, address and size must be 4K aligned.
For the write operation, address and size must be 4-byte aligned.

CC: Rodrigo Vivi
CC: Lucas De Marchi
Acked-by: Miquel Raynal
Co-developed-by: Tomas Winkler
Signed-off-by: Tomas Winkler
Co-developed-by: Vitaly Lubart
Signed-off-by: Vitaly Lubart
Signed-off-by: Alexander Usyskin
---
 drivers/mtd/devices/mtd_intel_dg.c | 230 ++++++++++++++++++++++++++++-
 1 file changed, 226 insertions(+), 4 deletions(-)

diff --git a/drivers/mtd/devices/mtd_intel_dg.c b/drivers/mtd/devices/mtd_intel_dg.c
index 6f67cf966d05..4023f2ebc344 100644
--- a/drivers/mtd/devices/mtd_intel_dg.c
+++ b/drivers/mtd/devices/mtd_intel_dg.c
@@ -5,6 +5,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -12,6 +13,8 @@
 #include
 #include
 #include
+#include
+#include
 #include
 #include
 #include
@@ -19,6 +22,8 @@
 struct intel_dg_nvm {
 	struct kref refcnt;
+	struct mtd_info mtd;
+	struct mutex lock; /* region access lock */
 	void __iomem *base;
 	size_t size;
 	unsigned int nregions;
@@ -177,7 +182,6 @@ static int idg_nvm_is_valid(struct intel_dg_nvm *nvm)
 	return 0;
 }
 
-__maybe_unused
 static unsigned int idg_nvm_get_region(const struct intel_dg_nvm *nvm, loff_t from)
 {
 	unsigned int i;
@@ -209,7 +213,6 @@ static ssize_t idg_nvm_rewrite_partial(struct intel_dg_nvm *nvm, loff_t to,
 	return len;
 }
 
-__maybe_unused
 static ssize_t idg_write(struct intel_dg_nvm *nvm, u8 region,
 			 loff_t to, size_t len, const unsigned char *buf)
 {
@@ -266,7 +269,6 @@ static ssize_t idg_write(struct intel_dg_nvm *nvm, u8 region,
 	return len;
 }
 
-__maybe_unused
 static ssize_t idg_read(struct intel_dg_nvm *nvm, u8 region,
 			loff_t from, size_t len, unsigned char *buf)
 {
@@ -325,7 +327,6 @@ static ssize_t idg_read(struct intel_dg_nvm *nvm, u8 region,
 	return len;
 }
 
-__maybe_unused
 static ssize_t idg_erase(struct intel_dg_nvm *nvm, u8 region, loff_t from,
 			 u64 len, u64 *fail_addr)
 {
@@ -414,6 +415,147 @@ static int intel_dg_nvm_init(struct intel_dg_nvm *nvm, struct device *device)
 	return n;
 }
 
+static int intel_dg_mtd_erase(struct mtd_info *mtd, struct erase_info *info)
+{
+	struct intel_dg_nvm *nvm = mtd->priv;
+	unsigned int idx;
+	u8 region;
+	u64 addr;
+	ssize_t bytes;
+	loff_t from;
+	size_t len;
+	size_t total_len;
+
+	if (WARN_ON(!nvm))
+		return -EINVAL;
+
+	if (!IS_ALIGNED(info->addr, SZ_4K) || !IS_ALIGNED(info->len, SZ_4K)) {
+		dev_err(&mtd->dev, "unaligned erase %llx %llx\n",
+			info->addr, info->len);
+		info->fail_addr = MTD_FAIL_ADDR_UNKNOWN;
+		return -EINVAL;
+	}
+
+	total_len = info->len;
+	addr = info->addr;
+
+	guard(mutex)(&nvm->lock);
+
+	while (total_len > 0) {
+		if (!IS_ALIGNED(addr, SZ_4K) || !IS_ALIGNED(total_len, SZ_4K)) {
+			dev_err(&mtd->dev, "unaligned erase %llx %zx\n", addr, total_len);
+			info->fail_addr = addr;
+			return -ERANGE;
+		}
+
+		idx = idg_nvm_get_region(nvm, addr);
+		if (idx >= nvm->nregions) {
+			dev_err(&mtd->dev, "out of range");
+			info->fail_addr = MTD_FAIL_ADDR_UNKNOWN;
+			return -ERANGE;
+		}
+
+		from = addr - nvm->regions[idx].offset;
+		region = nvm->regions[idx].id;
+		len = total_len;
+		if (len > nvm->regions[idx].size - from)
+			len = nvm->regions[idx].size - from;
+
+		dev_dbg(&mtd->dev, "erasing region[%d] %s from %llx len %zx\n",
+			region, nvm->regions[idx].name, from, len);
+
+		bytes = idg_erase(nvm, region, from, len, &info->fail_addr);
+		if (bytes < 0) {
+			dev_dbg(&mtd->dev, "erase failed with %zd\n", bytes);
+			info->fail_addr += nvm->regions[idx].offset;
+			return bytes;
+		}
+
+		addr += len;
+		total_len -= len;
+	}
+
+	return 0;
+}
+
+static int intel_dg_mtd_read(struct mtd_info *mtd, loff_t from, size_t len,
+			     size_t *retlen, u_char *buf)
+{
+	struct intel_dg_nvm *nvm = mtd->priv;
+	ssize_t ret;
+	unsigned int idx;
+	u8 region;
+
+	if (WARN_ON(!nvm))
+		return -EINVAL;
+
+	idx = idg_nvm_get_region(nvm, from);
+
+	dev_dbg(&mtd->dev, "reading region[%d] %s from %lld len %zd\n",
+		nvm->regions[idx].id, nvm->regions[idx].name, from, len);
+
+	if (idx >= nvm->nregions) {
+		dev_err(&mtd->dev, "out of range");
+		return -ERANGE;
+	}
+
+	from -= nvm->regions[idx].offset;
+	region = nvm->regions[idx].id;
+	if (len > nvm->regions[idx].size - from)
+		len = nvm->regions[idx].size - from;
+
+	guard(mutex)(&nvm->lock);
+
+	ret = idg_read(nvm, region, from, len, buf);
+	if (ret < 0) {
+		dev_dbg(&mtd->dev, "read failed with %zd\n", ret);
+		return ret;
+	}
+
+	*retlen = ret;
+
+	return 0;
+}
+
+static int intel_dg_mtd_write(struct mtd_info *mtd, loff_t to, size_t len,
+			      size_t *retlen, const u_char *buf)
+{
+	struct intel_dg_nvm *nvm = mtd->priv;
+	ssize_t ret;
+	unsigned int idx;
+	u8 region;
+
+	if (WARN_ON(!nvm))
+		return -EINVAL;
+
+	idx = idg_nvm_get_region(nvm, to);
+
+	dev_dbg(&mtd->dev, "writing region[%d] %s to %lld len %zd\n",
+		nvm->regions[idx].id, nvm->regions[idx].name, to, len);
+
+	if (idx >= nvm->nregions) {
+		dev_err(&mtd->dev, "out of range");
+		return -ERANGE;
+	}
+
+	to -= nvm->regions[idx].offset;
+	region = nvm->regions[idx].id;
+	if (len > nvm->regions[idx].size - to)
+		len = nvm->regions[idx].size - to;
+
+	guard(mutex)(&nvm->lock);
+
+	ret = idg_write(nvm, region, to, len, buf);
+	if (ret < 0) {
+		dev_dbg(&mtd->dev, "write failed with %zd\n", ret);
+		return ret;
+	}
+
+	*retlen = ret;
+
+	return 0;
+}
+
 static void intel_dg_nvm_release(struct kref *kref)
 {
 	struct intel_dg_nvm *nvm = container_of(kref, struct intel_dg_nvm, refcnt);
@@ -422,9 +564,80 @@ static void intel_dg_nvm_release(struct kref *kref)
 	pr_debug("freeing intel_dg nvm\n");
 	for (i = 0; i < nvm->nregions; i++)
 		kfree(nvm->regions[i].name);
+	mutex_destroy(&nvm->lock);
 	kfree(nvm);
 }
 
+static int intel_dg_mtd_get_device(struct mtd_info *mtd)
+{
+	struct mtd_info *master = mtd_get_master(mtd);
+	struct intel_dg_nvm *nvm = master->priv;
+
+	if (WARN_ON(!nvm))
+		return -EINVAL;
+	pr_debug("get mtd %s %d\n", mtd->name, kref_read(&nvm->refcnt));
+	kref_get(&nvm->refcnt);
+
+	return 0;
+}
+
+static void intel_dg_mtd_put_device(struct mtd_info *mtd)
+{
+	struct mtd_info *master = mtd_get_master(mtd);
+	struct intel_dg_nvm *nvm = master->priv;
+
+	if (WARN_ON(!nvm))
+		return;
+	pr_debug("put mtd %s %d\n", mtd->name, kref_read(&nvm->refcnt));
+	kref_put(&nvm->refcnt, intel_dg_nvm_release);
+}
+
+static int intel_dg_nvm_init_mtd(struct intel_dg_nvm *nvm, struct device *device,
+				 unsigned int nparts, bool writable_override)
+{
+	unsigned int i;
+	unsigned int n;
+	struct mtd_partition *parts = NULL;
+	int ret;
+
+	dev_dbg(device, "registering with mtd\n");
+
+	nvm->mtd.owner = THIS_MODULE;
+	nvm->mtd.dev.parent = device;
+	nvm->mtd.flags = MTD_CAP_NORFLASH | MTD_WRITEABLE;
+	nvm->mtd.type = MTD_DATAFLASH;
+	nvm->mtd.priv = nvm;
+	nvm->mtd._write = intel_dg_mtd_write;
+	nvm->mtd._read = intel_dg_mtd_read;
+	nvm->mtd._erase = intel_dg_mtd_erase;
+	nvm->mtd._get_device = intel_dg_mtd_get_device;
+	nvm->mtd._put_device = intel_dg_mtd_put_device;
+	nvm->mtd.writesize = SZ_1; /* 1 byte granularity */
+	nvm->mtd.erasesize = SZ_4K; /* 4K bytes granularity */
+	nvm->mtd.size = nvm->size;
+
+	parts = kcalloc(nvm->nregions, sizeof(*parts), GFP_KERNEL);
+	if (!parts)
+		return -ENOMEM;
+
+	for (i = 0, n = 0; i < nvm->nregions && n < nparts; i++) {
+		if (!nvm->regions[i].is_readable)
+			continue;
+		parts[n].name = nvm->regions[i].name;
+		parts[n].offset = nvm->regions[i].offset;
+		parts[n].size = nvm->regions[i].size;
+		if (!nvm->regions[i].is_writable && !writable_override)
+			parts[n].mask_flags = MTD_WRITEABLE;
+		n++;
+	}
+
+	ret = mtd_device_register(&nvm->mtd, parts, n);
+
+	kfree(parts);
+
+	return ret;
+}
+
 static int intel_dg_mtd_probe(struct auxiliary_device *aux_dev,
 			      const struct auxiliary_device_id *aux_dev_id)
 {
@@ -454,6 +667,7 @@ static int intel_dg_mtd_probe(struct auxiliary_device *aux_dev,
 		return -ENOMEM;
 
 	kref_init(&nvm->refcnt);
+	mutex_init(&nvm->lock);
 
 	nvm->nregions = nregions;
 	for (n = 0, i = 0; i < INTEL_DG_NVM_REGIONS; i++) {
@@ -483,6 +697,12 @@ static int intel_dg_mtd_probe(struct auxiliary_device *aux_dev,
 		goto err;
 	}
 
+	ret = intel_dg_nvm_init_mtd(nvm, device, ret, invm->writable_override);
+	if (ret) {
+		dev_err(device, "failed init mtd %d\n", ret);
+		goto err;
+	}
+
	dev_set_drvdata(&aux_dev->dev, nvm);
 
 	return 0;
@@ -499,6 +719,8 @@ static void intel_dg_mtd_remove(struct auxiliary_device *aux_dev)
 	if (!nvm)
 		return;
 
+	mtd_device_unregister(&nvm->mtd);
+
 	dev_set_drvdata(&aux_dev->dev, NULL);
 
 	kref_put(&nvm->refcnt, intel_dg_nvm_release);
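
For context, once this registers, the alignment rules from the commit message are what a
userspace consumer sees through the standard MTD character device: erase requests go through
intel_dg_mtd_erase() (erasesize SZ_4K), writes through intel_dg_mtd_write() (writesize 1 byte).
The sketch below is illustrative only and not part of the patch; the /dev/mtd0 path is an
assumption (the actual index depends on the system), and it uses the stock MEMERASE ioctl from
<mtd/mtd-user.h>:

/* Illustrative only: erase one 4K block and do a small 4-byte-aligned write
 * through the MTD character device. The device path is an assumption. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <mtd/mtd-user.h>

int main(void)
{
	struct erase_info_user ei = {
		.start  = 0x0,		/* 4K aligned, as intel_dg_mtd_erase() requires */
		.length = 0x1000,	/* multiple of 4K */
	};
	unsigned char buf[16] = "intel-dg sample";
	int fd = open("/dev/mtd0", O_RDWR);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (ioctl(fd, MEMERASE, &ei) < 0)	/* unaligned requests are rejected with EINVAL */
		perror("MEMERASE");
	if (pwrite(fd, buf, sizeof(buf), 0) < 0)	/* offset and size kept 4-byte aligned */
		perror("pwrite");
	close(fd);
	return 0;
}

The erase and write granularity used above map directly to the mtd->erasesize and
mtd->writesize values set in intel_dg_nvm_init_mtd().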