From patchwork Wed Apr 29 07:21:10 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 1279030 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-ext4-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=google.com header.i=@google.com header.a=rsa-sha256 header.s=20161025 header.b=sAqJzUcK; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 49Bqh05DBpz9sT1 for ; Wed, 29 Apr 2020 17:21:36 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726662AbgD2HVe (ORCPT ); Wed, 29 Apr 2020 03:21:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50816 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726617AbgD2HVa (ORCPT ); Wed, 29 Apr 2020 03:21:30 -0400 Received: from mail-qk1-x74a.google.com (mail-qk1-x74a.google.com [IPv6:2607:f8b0:4864:20::74a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EAE22C08ED7D for ; Wed, 29 Apr 2020 00:21:29 -0700 (PDT) Received: by mail-qk1-x74a.google.com with SMTP id h186so1613132qkc.22 for ; Wed, 29 Apr 2020 00:21:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=B5+eMuXSNPgzz6lDCnDEgxrLUrZaLtLqUz74Q0MALlk=; b=sAqJzUcKRR2PWCgfoNL8YLcvvURrO/v1iW6vUz7lJffdr6y43Uzzz0AxEPCTCDJ23y 7LrlyEKJbh0DYTJU1okrDwybkxKYrmkIlEoKYzl4uiUJAJ5iZpTMO1Ye2MtiZSGTOwKv f6VMtRrkd5eLHtQM7CCdLozYgNV9RIF3EUe6Gi44gKissWxndQH0mcAfJ50xgyX7Unoq 4VGL1PLMNsJ7rPdqlnUbbwQXMPjkb6EVJ+wBfYA2wgmiD29UPFcD+86eF+CgjF6mL/4/ mk2plw+YTq/16aDPBYSfEXGqOcyjNw5hJLZx5TFC2EEEzfKvRB13MkoEWYIqkxgHEO+j +teQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=B5+eMuXSNPgzz6lDCnDEgxrLUrZaLtLqUz74Q0MALlk=; b=EDJwsxpb+TrWUnuhiwe5BH4aBwwEP9AOK3fbUNNcTHeld8UXeZns/iKQQxwpzZd8AB mzs+vZpOZEKo0wdMaWrp+bK1RAEWHqh+2lv5TkpoiGwVUbvaRaDV4NrMXY2Bs38fDwJC xfYd6BfhxT7+uywdewjkVaeT+BE9hqY7Y+8rZoCaWluvD7FjJvPS6WLfO8h1S5cLyTxA oY/6Hm3VMGT+/CHjKJAcQuc/Yx5swnJT1hby9cKh5NIcvNmtdqkj0TM2t+YkB5FtCNPo ZWX2vLcAPMY1CUApEI4e8/br1YHMEamUpdvSE5zmYgCtPiPIf0kZkR+l5yEuFRQngMpH JTrw== X-Gm-Message-State: AGi0PuZ4iJ3rHTHzGto1awfIQ+3FK6vxygK9JVLFhkHXzcQ3VNREb/vu CwOTClolTe3SXjoIZ7+kqWJ6qcrhDnY= X-Google-Smtp-Source: APiQypKEypcJIU7KUN7k5t+VIu3Cuza9yDDpu2lxe7BkycUVTw7yusdBevL+9YdhRGGMD9tDgpUBgB+0ObI= X-Received: by 2002:a0c:aadc:: with SMTP id g28mr32584810qvb.0.1588144889029; Wed, 29 Apr 2020 00:21:29 -0700 (PDT) Date: Wed, 29 Apr 2020 07:21:10 +0000 In-Reply-To: <20200429072121.50094-1-satyat@google.com> Message-Id: <20200429072121.50094-2-satyat@google.com> Mime-Version: 1.0 References: <20200429072121.50094-1-satyat@google.com> X-Mailer: git-send-email 2.26.2.303.gf8c07b1a785-goog Subject: [PATCH v11 01/12] Documentation: Document the blk-crypto framework From: Satya Tangirala To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, linux-fscrypt@vger.kernel.org, 
linux-fsdevel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-ext4@vger.kernel.org Cc: Barani Muthukumaran , Kuohong Wang , Kim Boojin , Satya Tangirala Sender: linux-ext4-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-ext4@vger.kernel.org The blk-crypto framework adds support for inline encryption. There are numerous changes throughout the storage stack. This patch documents the main design choices in the block layer, the API presented to users of the block layer (like fscrypt or layered devices) and the API presented to drivers for adding support for inline encryption. Signed-off-by: Satya Tangirala --- Documentation/block/index.rst | 1 + Documentation/block/inline-encryption.rst | 260 ++++++++++++++++++++++ 2 files changed, 261 insertions(+) create mode 100644 Documentation/block/inline-encryption.rst diff --git a/Documentation/block/index.rst b/Documentation/block/index.rst index 3fa7a52fafa46..026addfc69bc9 100644 --- a/Documentation/block/index.rst +++ b/Documentation/block/index.rst @@ -14,6 +14,7 @@ Block cmdline-partition data-integrity deadline-iosched + inline-encryption ioprio kyber-iosched null_blk diff --git a/Documentation/block/inline-encryption.rst b/Documentation/block/inline-encryption.rst new file mode 100644 index 0000000000000..81bd081a1a722 --- /dev/null +++ b/Documentation/block/inline-encryption.rst @@ -0,0 +1,260 @@ +.. SPDX-License-Identifier: GPL-2.0 + +================= +Inline Encryption +================= + +Background +========== + +Inline encryption hardware sits logically between memory and the disk, and can +en/decrypt data as it goes in/out of the disk. Inline encryption hardware has a +fixed number of "keyslots" - slots into which encryption contexts (i.e. the +encryption key, encryption algorithm, data unit size) can be programmed by the +kernel at any time. Each request sent to the disk can be tagged with the index +of a keyslot (and also a data unit number to act as an encryption tweak), and +the inline encryption hardware will en/decrypt the data in the request with the +encryption context programmed into that keyslot. This is very different from +full disk encryption solutions like self encrypting drives/TCG OPAL/ATA +Security standards, since with inline encryption, any block on disk could be +encrypted with any encryption context the kernel chooses. + + +Objective +========= + +We want to support inline encryption (IE) in the kernel. +To allow for testing, we also want a crypto API fallback when actual +IE hardware is absent. We also want IE to work with layered devices +like dm and loopback (i.e. we want to be able to use the IE hardware +of the underlying devices if present, or else fall back to crypto API +en/decryption). + + +Constraints and notes +===================== + +- IE hardware has a limited number of "keyslots" that can be programmed + with an encryption context (key, algorithm, data unit size, etc.) at any time. + One can specify a keyslot in a data request made to the device, and the + device will en/decrypt the data using the encryption context programmed into + that specified keyslot. When possible, we want to make multiple requests with + the same encryption context share the same keyslot. + +- We need a way for upper layers like filesystems to specify an encryption + context to use for en/decrypting a struct bio, and a device driver (like UFS) + needs to be able to use that encryption context when it processes the bio. 
+ +- We need a way for device drivers to expose their inline encryption + capabilities in a unified way to the upper layers. + + +Design +====== + +We add a :c:type:`struct bio_crypt_ctx` to :c:type:`struct bio` that can +represent an encryption context, because we need to be able to pass this +encryption context from the upper layers (like the fs layer) to the +device driver to act upon. + +While IE hardware works on the notion of keyslots, the FS layer has no +knowledge of keyslots - it simply wants to specify an encryption context to +use while en/decrypting a bio. + +We introduce a keyslot manager (KSM) that handles the translation from +encryption contexts specified by the FS to keyslots on the IE hardware. +This KSM also serves as the way IE hardware can expose its capabilities to +upper layers. The generic mode of operation is: each device driver that wants +to support IE will construct a KSM and set it up in its struct request_queue. +Upper layers that want to use IE on this device can then use this KSM in +the device's struct request_queue to translate an encryption context into +a keyslot. The presence of the KSM in the request queue shall be used to mean +that the device supports IE. + +The KSM uses refcounts to track which keyslots are idle (either they have no +encryption context programmed, or there are no in-flight struct bios +referencing that keyslot). When a new encryption context needs a keyslot, it +tries to find a keyslot that has already been programmed with the same +encryption context, and if there is no such keyslot, it evicts the least +recently used idle keyslot and programs the new encryption context into that +one. If no idle keyslots are available, then the caller will sleep until there +is at least one. + + +blk-mq changes, other block layer changes and blk-crypto-fallback +================================================================= + +We add a pointer to a ``bi_crypt_context`` and ``keyslot`` to +:c:type:`struct request`. These will be referred to as the ``crypto fields`` +for the request. This ``keyslot`` is the keyslot into which the +``bi_crypt_context`` has been programmed in the KSM of the ``request_queue`` +that this request is being sent to. + +We introduce ``block/blk-crypto-fallback.c``, which allows upper layers to remain +blissfully unaware of whether or not real inline encryption hardware is present +underneath. When a bio is submitted with a target ``request_queue`` that doesn't +support the encryption context specified with the bio, the block layer will +en/decrypt the bio with the blk-crypto-fallback. + +If the bio is a ``WRITE`` bio, a bounce bio is allocated, and the data in the bio +is encrypted stored in the bounce bio - blk-mq will then proceed to process the +bounce bio as if it were not encrypted at all (except when blk-integrity is +concerned). ``blk-crypto-fallback`` sets the bounce bio's ``bi_end_io`` to an +internal function that cleans up the bounce bio and ends the original bio. + +If the bio is a ``READ`` bio, the bio's ``bi_end_io`` (and also ``bi_private``) +is saved and overwritten by ``blk-crypto-fallback`` to +``bio_crypto_fallback_decrypt_bio``. The bio's ``bi_crypt_context`` is also +overwritten with ``NULL``, so that to the rest of the stack, the bio looks +as if it was a regular bio that never had an encryption context specified. +``bio_crypto_fallback_decrypt_bio`` will decrypt the bio, restore the original +``bi_end_io`` (and also ``bi_private``) and end the bio again. 
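As a rough illustration of the ``bi_end_io``/``bi_private`` save-and-restore pattern just described (this is not the actual blk-crypto-fallback code; the context structure and helper names are invented for this example):

/*
 * Illustrative sketch only - not the actual blk-crypto-fallback code.
 * The context struct and helper names are invented for this example; only
 * the bi_end_io/bi_private save-and-restore pattern follows the text above.
 */
#include <linux/bio.h>
#include <linux/slab.h>

struct fallback_read_ctx {
	bio_end_io_t	*orig_bi_end_io;
	void		*orig_bi_private;
};

/* Runs when the device completes the READ: decrypt, then hand the bio back. */
static void fallback_read_endio(struct bio *bio)
{
	struct fallback_read_ctx *ctx = bio->bi_private;

	/* ... decrypt the bio's data in place using the saved key and DUN ... */

	bio->bi_end_io = ctx->orig_bi_end_io;	/* restore submitter's callback */
	bio->bi_private = ctx->orig_bi_private;
	kfree(ctx);
	bio_endio(bio);				/* "end the bio again" */
}

/* Called at submission time for a READ bio that must use the fallback. */
static int fallback_prepare_read_bio(struct bio *bio)
{
	struct fallback_read_ctx *ctx = kmalloc(sizeof(*ctx), GFP_NOIO);

	if (!ctx)
		return -ENOMEM;
	ctx->orig_bi_end_io = bio->bi_end_io;
	ctx->orig_bi_private = bio->bi_private;
	bio->bi_end_io = fallback_read_endio;
	bio->bi_private = ctx;
	/* The real code also saves and clears bio->bi_crypt_context here. */
	return 0;
}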
+
+Regardless of whether real inline encryption hardware is used or the
+blk-crypto-fallback is used, the ciphertext written to disk (and hence the
+on-disk format of data) will be the same (assuming the hardware's implementation
+of the algorithm being used adheres to spec and functions correctly).
+
+If a ``request_queue``'s inline encryption hardware claimed to support the
+encryption context specified with a bio, then it will not be handled by the
+``blk-crypto-fallback``. We will eventually reach a point in blk-mq when a
+:c:type:`struct request` needs to be allocated for that bio. At that point,
+blk-mq tries to program the encryption context into the ``request_queue``'s
+keyslot_manager, and obtains a keyslot, which it stores in its newly added
+``keyslot`` field. This keyslot is released when the request is completed.
+
+When a bio is added to a request, the request takes over ownership of the
+``bi_crypt_context`` of the bio - in particular, the request keeps the
+``bi_crypt_context`` of the first bio in its bio-list, and frees the rest
+(blk-mq needs to be careful to maintain this invariant during bio and request
+merges).
+
+To make it possible for inline encryption to work with request queue based
+layered devices, when a request is cloned, its ``crypto fields`` are cloned as
+well. When the cloned request is submitted, blk-mq programs the
+``bi_crypt_context`` of the request into the clone's request_queue's keyslot
+manager, and stores the returned keyslot in the clone's ``keyslot``.
+
+
+API presented to users of the block layer
+=========================================
+
+``struct blk_crypto_key`` represents a crypto key (the raw key, the size of the
+key, the crypto algorithm to use, the data unit size to use, and the number of
+bytes required to represent data unit numbers that will be specified with the
+``bi_crypt_context``).
+
+``blk_crypto_init_key`` allows upper layers to initialize such a
+``blk_crypto_key``.
+
+``bio_crypt_set_ctx`` should be called on any bio that a user of
+the block layer wants en/decrypted via inline encryption (or the
+blk-crypto-fallback, if hardware support isn't available for the desired
+crypto configuration). This function takes the ``blk_crypto_key`` and the
+data unit number (DUN) to use when en/decrypting the bio.
+
+``blk_crypto_config_supported`` allows upper layers to query whether or not an
+encryption context passed to a request queue can be handled by blk-crypto
+(either by real inline encryption hardware, or by the blk-crypto-fallback).
+This is useful e.g. when the blk-crypto-fallback is disabled, and the upper
+layer wants to use an algorithm that may not be supported by hardware - this
+function lets the upper layer know ahead of time that the algorithm isn't
+supported, and the upper layer can fall back to something else if appropriate.
+
+``blk_crypto_start_using_key`` - Upper layers must call this function on a
+``blk_crypto_key`` and a ``request_queue`` before using the key with any bio
+headed for that ``request_queue``. This function ensures that either the
+hardware supports the key's crypto settings, or the crypto API fallback has
+transforms for the needed mode allocated and ready to go. Note that this
+function may allocate an ``skcipher``, and must not be called from the data
+path, since allocating ``skciphers`` from the data path can deadlock.
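As a sketch of how an upper layer might string these calls together (illustrative only: the argument order of ``blk_crypto_init_key`` follows its kernel-doc later in this series, the exact signature of ``blk_crypto_start_using_key`` is an assumption, and the key and data unit sizes are just example values):

/*
 * Sketch of the upper-layer call sequence; blk_crypto_start_using_key()'s
 * exact signature is an assumption.
 */
#include <linux/bio.h>
#include <linux/blk-crypto.h>

/* Setup phase (not on the data path): initialize and register the key. */
static int example_setup_key(struct blk_crypto_key *key,
			     const u8 raw_key[64],
			     struct request_queue *q)
{
	int err;

	/* 64-byte AES-256-XTS key, 8-byte DUNs, 4096-byte data units. */
	err = blk_crypto_init_key(key, raw_key, BLK_ENCRYPTION_MODE_AES_256_XTS,
				  sizeof(u64), 4096);
	if (err)
		return err;

	/*
	 * Ensure either the hardware or the fallback can handle this
	 * configuration (assumed signature; see the description above).
	 */
	return blk_crypto_start_using_key(key, q);
}

/* I/O path: tag the bio with the key and its starting data unit number. */
static void example_submit(struct bio *bio, const struct blk_crypto_key *key,
			   u64 first_dun)
{
	u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE] = { first_dun };

	bio_crypt_set_ctx(bio, key, dun, GFP_NOIO);
	submit_bio(bio);
}

Since many bios may reference the key at once, the ``blk_crypto_key`` used here must live in a long-lived structure owned by the upper layer and must not be freed until all bios using it have completed (see the ``struct blk_crypto_key`` kernel-doc later in this series).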
+
+``blk_crypto_evict_key`` should be called by upper layers when they want
+to ensure that a key is removed from memory and from any keyslots in inline
+encryption hardware that the key might have been programmed into (or the
+blk-crypto-fallback).
+
+API presented to device drivers
+===============================
+
+A :c:type:`struct keyslot_manager` should be set up by device drivers in the
+``request_queue`` of the device. The device driver needs to call
+``blk_ksm_init`` on the ``keyslot_manager``, specifying the number of
+keyslots supported by the hardware.
+
+The device driver also needs to tell the KSM how to actually manipulate the
+IE hardware in the device to do things like programming the crypto key into
+the IE hardware into a particular keyslot. All this is achieved through the
+:c:type:`struct blk_ksm_ll_ops` field in the KSM that the device driver
+must fill in after initializing the ``keyslot_manager``.
+
+The KSM also handles runtime power management for the device when applicable
+(e.g. when it wants to program a crypto key into the IE hardware, the device
+must be runtime powered on) - so the device driver must also set the ``dev``
+field in the KSM to point to the ``struct device`` for the KSM to use for
+runtime power management.
+
+``blk_ksm_reprogram_all_keys`` can be called by device drivers if the device
+needs every one of its keyslots to be reprogrammed with the key it
+"should have" at the point in time when the function is called. This is useful
+e.g. if a device loses all its keys on runtime power down/up.
+
+``blk_ksm_destroy`` should be called to free up all resources allocated by
+``blk_ksm_init``, once the ``keyslot_manager`` is no longer needed.
+
+
+Layered Devices
+===============
+
+Request queue based layered devices like dm-rq that wish to support IE need to
+create their own keyslot manager for their request queue, and expose whatever
+functionality they choose. When a layered device wants to pass a clone of that
+request to another ``request_queue``, blk-crypto will initialize and prepare the
+clone as necessary - see ``blk_crypto_insert_cloned_request`` in
+``blk-crypto.c``.
+
+
+Future Optimizations for layered devices
+========================================
+
+Creating a keyslot manager for a layered device uses up memory for each
+keyslot, and in general, a layered device merely passes the request on to a
+"child" device, so the keyslots in the layered device itself are completely
+unused, and don't need any refcounting or keyslot programming. We can instead
+define a new type of KSM - the "passthrough KSM" - that layered devices can use
+to advertise an unlimited number of keyslots, and support for any encryption
+algorithms they choose, while not actually using any memory for each keyslot.
+Another use case for the "passthrough KSM" is for IE devices that do not have a
+limited number of keyslots.
+
+
+Interaction between inline encryption and blk integrity
+=======================================================
+
+At the time of this patch, there is no real hardware that supports both these
+features. However, these features do interact with each other, and it's not
+completely trivial to make them both work together properly.
In particular, +when a WRITE bio wants to use inline encryption on a device that supports both +features, the bio will have an encryption context specified, after which +its integrity information is calculated (using the plaintext data, since +the encryption will happen while data is being written), and the data and +integrity info is sent to the device. Obviously, the integrity info must be +verified before the data is encrypted. After the data is encrypted, the device +must not store the integrity info that it received with the plaintext data +since that might reveal information about the plaintext data. As such, it must +re-generate the integrity info from the ciphertext data and store that on disk +instead. Another issue with storing the integrity info of the plaintext data is +that it changes the on disk format depending on whether hardware inline +encryption support is present or the kernel crypto API fallback is used (since +if the fallback is used, the device will receive the integrity info of the +ciphertext, not that of the plaintext). + +Because there isn't any real hardware yet, it seems prudent to assume that +hardware implementations might not implement both features together correctly, +and disallow the combination for now. Whenever a device supports integrity, the +kernel will pretend that the device does not support hardware inline encryption +(by essentially setting the keyslot manager in the request_queue of the device +to NULL). When the crypto API fallback is enabled, this means that all bios with +and encryption context will use the fallback, and IO will complete as usual. +When the fallback is disabled, a bio with an encryption context will be failed. From patchwork Wed Apr 29 07:21:11 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 1279032 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-ext4-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=google.com header.i=@google.com header.a=rsa-sha256 header.s=20161025 header.b=a94fv12i; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 49Bqh45XFqz9sSb for ; Wed, 29 Apr 2020 17:21:40 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726654AbgD2HVk (ORCPT ); Wed, 29 Apr 2020 03:21:40 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50824 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726635AbgD2HVc (ORCPT ); Wed, 29 Apr 2020 03:21:32 -0400 Received: from mail-qv1-xf4a.google.com (mail-qv1-xf4a.google.com [IPv6:2607:f8b0:4864:20::f4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6213EC035493 for ; Wed, 29 Apr 2020 00:21:31 -0700 (PDT) Received: by mail-qv1-xf4a.google.com with SMTP id c5so1625225qvi.10 for ; Wed, 29 Apr 2020 00:21:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=sVCusx5bPORp6g4J8uz+hh0HFfrndys7+lb0RkLvDeI=; 
b=a94fv12ibrFufu3y3FRJJNxvh0fuv4suzv3aTbcCSN2MYoZmgJacE4UE3Svly8bwvF 2WPHS+Sugx3Z9CzG3uPseCDGUhzKWIMxccqHeU/Y6/p82XHrQTkIWr+oiINkfOMbablP WzSgQFqLBhI++24QXrimtiA16lV13tvYH81AAJ4k7RSHWSJhnD/13FwVR73oyNeR7lie psOjkRs4ZCrHTl66qh0HRo74VXFMVMXbJpIQ21qkgm8IFxgDyn4fEhGX1vzkPG5V+MXh lxL3poDbnr5ps47AAKeaP0ReyxEp4GoRJmzPObO2lkmy+yUxNxK2b4YrgUCYt4kQNQRu VPJA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=sVCusx5bPORp6g4J8uz+hh0HFfrndys7+lb0RkLvDeI=; b=cj0Rs+oIub9JqMQ7EUEdSygL23FT2TONpG1o87Bw52nKx12Z+ZrL3Spno0nPLIUuFN WpBewNSVxYlRll2TbKqIV0FGP298wNTuwbh/gbXfb3cLUeoeKnJjb9sEGU00RzkrilKM /hbncLbP0cst4MlVIaozchcSOV2nPjMdEQf8OHyFaHT3kEtwxnx5uetf94cjGlee4MIx gDGU3zOdbUDWNOwWETl3EyZhNY2qnny6IAQL0lOCzRU/ovE8rOIqVYtp+9o4yUo4N8T4 mOFwP/xBH0UiX/NDgTsdNNs11zXmic0G2eJ0te2VhTL/lo7n6eKSNPkmzBDe7l/wPuy3 IZ9A== X-Gm-Message-State: AGi0PuZKY6SPjVgOvMhi5XzYPou5GvZVRqqVSdxIPWdSUirf2j5STWgs 3kuK/9+uBxpDHaHpSYXbdK2vannbefs= X-Google-Smtp-Source: APiQypJLSDvvBxYz9hq/6H1Vv8S4CBsisP6QtSEEof0OFhYOpXKm7cMdvr0DUxh6QeYGpSZ5ragmYmyAv7M= X-Received: by 2002:ad4:548b:: with SMTP id q11mr31242006qvy.129.1588144890512; Wed, 29 Apr 2020 00:21:30 -0700 (PDT) Date: Wed, 29 Apr 2020 07:21:11 +0000 In-Reply-To: <20200429072121.50094-1-satyat@google.com> Message-Id: <20200429072121.50094-3-satyat@google.com> Mime-Version: 1.0 References: <20200429072121.50094-1-satyat@google.com> X-Mailer: git-send-email 2.26.2.303.gf8c07b1a785-goog Subject: [PATCH v11 02/12] block: Keyslot Manager for Inline Encryption From: Satya Tangirala To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-ext4@vger.kernel.org Cc: Barani Muthukumaran , Kuohong Wang , Kim Boojin , Satya Tangirala , Eric Biggers Sender: linux-ext4-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-ext4@vger.kernel.org Inline Encryption hardware allows software to specify an encryption context (an encryption key, crypto algorithm, data unit num, data unit size) along with a data transfer request to a storage device, and the inline encryption hardware will use that context to en/decrypt the data. The inline encryption hardware is part of the storage device, and it conceptually sits on the data path between system memory and the storage device. Inline Encryption hardware implementations often function around the concept of "keyslots". These implementations often have a limited number of "keyslots", each of which can hold a key (we say that a key can be "programmed" into a keyslot). Requests made to the storage device may have a keyslot and a data unit number associated with them, and the inline encryption hardware will en/decrypt the data in the requests using the key programmed into that associated keyslot and the data unit number specified with the request. As keyslots are limited, and programming keys may be expensive in many implementations, and multiple requests may use exactly the same encryption contexts, we introduce a Keyslot Manager to efficiently manage keyslots. We also introduce a blk_crypto_key, which will represent the key that's programmed into keyslots managed by keyslot managers. The keyslot manager also functions as the interface that upper layers will use to program keys into inline encryption hardware. 
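As a rough sketch (not part of this patch) of how a lower-layer driver might wire this up: the my_* device type and hardware helpers below are hypothetical, and exposing the keyslot manager by assigning it to the request_queue's ->ksm pointer is an assumption based on the field added later in this series.

/*
 * Hypothetical driver-side setup, pieced together from the interfaces
 * added in this patch.  my_dev, my_hw_program_key(), my_hw_clear_keyslot()
 * and ksm_to_my_dev() do not exist; they stand in for driver specifics.
 */
#include <linux/keyslot-manager.h>

static int my_keyslot_program(struct blk_keyslot_manager *ksm,
			      const struct blk_crypto_key *key,
			      unsigned int slot)
{
	/* Write key->raw (key->size bytes) into hardware keyslot 'slot'. */
	return my_hw_program_key(ksm_to_my_dev(ksm), key, slot);
}

static int my_keyslot_evict(struct blk_keyslot_manager *ksm,
			    const struct blk_crypto_key *key,
			    unsigned int slot)
{
	return my_hw_clear_keyslot(ksm_to_my_dev(ksm), slot);
}

static const struct blk_ksm_ll_ops my_ksm_ops = {
	.keyslot_program	= my_keyslot_program,
	.keyslot_evict		= my_keyslot_evict,
};

static int my_driver_init_crypto(struct my_dev *mydev)
{
	struct blk_keyslot_manager *ksm = &mydev->ksm;
	int err;

	err = blk_ksm_init(ksm, 32);	/* hardware has 32 keyslots */
	if (err)
		return err;

	ksm->ksm_ll_ops = my_ksm_ops;
	ksm->max_dun_bytes_supported = 8;
	/* Support 4096-byte data units only: bit i set <=> (1 << i) supported. */
	ksm->crypto_modes_supported[BLK_ENCRYPTION_MODE_AES_256_XTS] = 4096;
	ksm->dev = mydev->dev;		/* for runtime PM around key programming */

	mydev->queue->ksm = ksm;	/* advertise inline encryption support */
	return 0;
}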
For more information on the Keyslot Manager, refer to documentation found in block/keyslot-manager.c and linux/keyslot-manager.h. Co-developed-by: Eric Biggers Signed-off-by: Eric Biggers Signed-off-by: Satya Tangirala --- block/Kconfig | 7 + block/Makefile | 1 + block/keyslot-manager.c | 380 ++++++++++++++++++++++++++++++++ include/linux/blk-crypto.h | 51 +++++ include/linux/blkdev.h | 6 + include/linux/keyslot-manager.h | 106 +++++++++ 6 files changed, 551 insertions(+) create mode 100644 block/keyslot-manager.c create mode 100644 include/linux/blk-crypto.h create mode 100644 include/linux/keyslot-manager.h diff --git a/block/Kconfig b/block/Kconfig index 3bc76bb113a08..c04a1d5008421 100644 --- a/block/Kconfig +++ b/block/Kconfig @@ -185,6 +185,13 @@ config BLK_SED_OPAL Enabling this option enables users to setup/unlock/lock Locking ranges for SED devices using the Opal protocol. +config BLK_INLINE_ENCRYPTION + bool "Enable inline encryption support in block layer" + help + Build the blk-crypto subsystem. Enabling this lets the + block layer handle encryption, so users can take + advantage of inline encryption hardware if present. + menu "Partition Types" source "block/partitions/Kconfig" diff --git a/block/Makefile b/block/Makefile index 206b96e9387f4..fc963e4676b0a 100644 --- a/block/Makefile +++ b/block/Makefile @@ -36,3 +36,4 @@ obj-$(CONFIG_BLK_DEBUG_FS) += blk-mq-debugfs.o obj-$(CONFIG_BLK_DEBUG_FS_ZONED)+= blk-mq-debugfs-zoned.o obj-$(CONFIG_BLK_SED_OPAL) += sed-opal.o obj-$(CONFIG_BLK_PM) += blk-pm.o +obj-$(CONFIG_BLK_INLINE_ENCRYPTION) += keyslot-manager.o diff --git a/block/keyslot-manager.c b/block/keyslot-manager.c new file mode 100644 index 0000000000000..b584723b392ad --- /dev/null +++ b/block/keyslot-manager.c @@ -0,0 +1,380 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright 2019 Google LLC + */ + +/** + * DOC: The Keyslot Manager + * + * Many devices with inline encryption support have a limited number of "slots" + * into which encryption contexts may be programmed, and requests can be tagged + * with a slot number to specify the key to use for en/decryption. + * + * As the number of slots are limited, and programming keys is expensive on + * many inline encryption hardware, we don't want to program the same key into + * multiple slots - if multiple requests are using the same key, we want to + * program just one slot with that key and use that slot for all requests. + * + * The keyslot manager manages these keyslots appropriately, and also acts as + * an abstraction between the inline encryption hardware and the upper layers. + * + * Lower layer devices will set up a keyslot manager in their request queue + * and tell it how to perform device specific operations like programming/ + * evicting keys from keyslots. + * + * Upper layers will call blk_ksm_get_slot_for_key() to program a + * key into some slot in the inline encryption hardware. + */ +#include +#include +#include +#include +#include +#include +#include + +struct blk_ksm_keyslot { + atomic_t slot_refs; + struct list_head idle_slot_node; + struct hlist_node hash_node; + const struct blk_crypto_key *key; + struct blk_keyslot_manager *ksm; +}; + +static inline void blk_ksm_hw_enter(struct blk_keyslot_manager *ksm) +{ + /* + * Calling into the driver requires ksm->lock held and the device + * resumed. But we must resume the device first, since that can acquire + * and release ksm->lock via blk_ksm_reprogram_all_keys(). 
+ */ + if (ksm->dev) + pm_runtime_get_sync(ksm->dev); + down_write(&ksm->lock); +} + +static inline void blk_ksm_hw_exit(struct blk_keyslot_manager *ksm) +{ + up_write(&ksm->lock); + if (ksm->dev) + pm_runtime_put_sync(ksm->dev); +} + +/** + * blk_ksm_init() - Initialize a keyslot manager + * @ksm: The keyslot_manager to initialize. + * @num_slots: The number of key slots to manage. + * + * Allocate memory for keyslots and initialize a keyslot manager. Called by + * e.g. storage drivers to set up a keyslot manager in their request_queue. + * + * Return: 0 on success, or else a negative error code. + */ +int blk_ksm_init(struct blk_keyslot_manager *ksm, unsigned int num_slots) +{ + unsigned int slot; + unsigned int i; + unsigned int slot_hashtable_size; + + memset(ksm, 0, sizeof(*ksm)); + + if (num_slots == 0) + return -EINVAL; + + ksm->slots = kvcalloc(num_slots, sizeof(ksm->slots[0]), GFP_KERNEL); + if (!ksm->slots) + return -ENOMEM; + + ksm->num_slots = num_slots; + + init_rwsem(&ksm->lock); + + init_waitqueue_head(&ksm->idle_slots_wait_queue); + INIT_LIST_HEAD(&ksm->idle_slots); + + for (slot = 0; slot < num_slots; slot++) { + ksm->slots[slot].ksm = ksm; + list_add_tail(&ksm->slots[slot].idle_slot_node, + &ksm->idle_slots); + } + + spin_lock_init(&ksm->idle_slots_lock); + + slot_hashtable_size = roundup_pow_of_two(num_slots); + ksm->log_slot_ht_size = ilog2(slot_hashtable_size); + ksm->slot_hashtable = kvmalloc_array(slot_hashtable_size, + sizeof(ksm->slot_hashtable[0]), + GFP_KERNEL); + if (!ksm->slot_hashtable) + goto err_destroy_ksm; + for (i = 0; i < slot_hashtable_size; i++) + INIT_HLIST_HEAD(&ksm->slot_hashtable[i]); + + return 0; + +err_destroy_ksm: + blk_ksm_destroy(ksm); + return -ENOMEM; +} +EXPORT_SYMBOL_GPL(blk_ksm_init); + +static inline struct hlist_head * +blk_ksm_hash_bucket_for_key(struct blk_keyslot_manager *ksm, + const struct blk_crypto_key *key) +{ + return &ksm->slot_hashtable[hash_ptr(key, ksm->log_slot_ht_size)]; +} + +static void blk_ksm_remove_slot_from_lru_list(struct blk_ksm_keyslot *slot) +{ + struct blk_keyslot_manager *ksm = slot->ksm; + unsigned long flags; + + spin_lock_irqsave(&ksm->idle_slots_lock, flags); + list_del(&slot->idle_slot_node); + spin_unlock_irqrestore(&ksm->idle_slots_lock, flags); +} + +static struct blk_ksm_keyslot *blk_ksm_find_keyslot( + struct blk_keyslot_manager *ksm, + const struct blk_crypto_key *key) +{ + const struct hlist_head *head = blk_ksm_hash_bucket_for_key(ksm, key); + struct blk_ksm_keyslot *slotp; + + hlist_for_each_entry(slotp, head, hash_node) { + if (slotp->key == key) + return slotp; + } + return NULL; +} + +static struct blk_ksm_keyslot *blk_ksm_find_and_grab_keyslot( + struct blk_keyslot_manager *ksm, + const struct blk_crypto_key *key) +{ + struct blk_ksm_keyslot *slot; + + slot = blk_ksm_find_keyslot(ksm, key); + if (!slot) + return NULL; + if (atomic_inc_return(&slot->slot_refs) == 1) { + /* Took first reference to this slot; remove it from LRU list */ + blk_ksm_remove_slot_from_lru_list(slot); + } + return slot; +} + +unsigned int blk_ksm_get_slot_idx(struct blk_ksm_keyslot *slot) +{ + return slot - slot->ksm->slots; +} +EXPORT_SYMBOL_GPL(blk_ksm_get_slot_idx); + +/** + * blk_ksm_get_slot_for_key() - Program a key into a keyslot. + * @ksm: The keyslot manager to program the key into. + * @key: Pointer to the key object to program, including the raw key, crypto + * mode, and data unit size. + * @keyslot: A pointer to return the pointer of the allocated keyslot. 
+ * + * Get a keyslot that's been programmed with the specified key. If one already + * exists, return it with incremented refcount. Otherwise, wait for a keyslot + * to become idle and program it. + * + * Context: Process context. Takes and releases ksm->lock. + * Return: BLK_STS_OK on success (and keyslot is set to the pointer of the + * allocated keyslot), or some other blk_status_t otherwise (and + * keyslot is set to NULL). + */ +blk_status_t blk_ksm_get_slot_for_key(struct blk_keyslot_manager *ksm, + const struct blk_crypto_key *key, + struct blk_ksm_keyslot **slot_ptr) +{ + struct blk_ksm_keyslot *slot; + int slot_idx; + int err; + + *slot_ptr = NULL; + down_read(&ksm->lock); + slot = blk_ksm_find_and_grab_keyslot(ksm, key); + up_read(&ksm->lock); + if (slot) + goto success; + + for (;;) { + blk_ksm_hw_enter(ksm); + slot = blk_ksm_find_and_grab_keyslot(ksm, key); + if (slot) { + blk_ksm_hw_exit(ksm); + goto success; + } + + /* + * If we're here, that means there wasn't a slot that was + * already programmed with the key. So try to program it. + */ + if (!list_empty(&ksm->idle_slots)) + break; + + blk_ksm_hw_exit(ksm); + wait_event(ksm->idle_slots_wait_queue, + !list_empty(&ksm->idle_slots)); + } + + slot = list_first_entry(&ksm->idle_slots, struct blk_ksm_keyslot, + idle_slot_node); + slot_idx = blk_ksm_get_slot_idx(slot); + + err = ksm->ksm_ll_ops.keyslot_program(ksm, key, slot_idx); + if (err) { + wake_up(&ksm->idle_slots_wait_queue); + blk_ksm_hw_exit(ksm); + return errno_to_blk_status(err); + } + + /* Move this slot to the hash list for the new key. */ + if (slot->key) + hlist_del(&slot->hash_node); + slot->key = key; + hlist_add_head(&slot->hash_node, blk_ksm_hash_bucket_for_key(ksm, key)); + + atomic_set(&slot->slot_refs, 1); + + blk_ksm_remove_slot_from_lru_list(slot); + + blk_ksm_hw_exit(ksm); +success: + *slot_ptr = slot; + return BLK_STS_OK; +} + +/** + * blk_ksm_put_slot() - Release a reference to a slot + * @slot: The keyslot to release the reference of. + * + * Context: Any context. + */ +void blk_ksm_put_slot(struct blk_ksm_keyslot *slot) +{ + struct blk_keyslot_manager *ksm; + unsigned long flags; + + if (!slot) + return; + + ksm = slot->ksm; + + if (atomic_dec_and_lock_irqsave(&slot->slot_refs, + &ksm->idle_slots_lock, flags)) { + list_add_tail(&slot->idle_slot_node, &ksm->idle_slots); + spin_unlock_irqrestore(&ksm->idle_slots_lock, flags); + wake_up(&ksm->idle_slots_wait_queue); + } +} + +/** + * blk_ksm_crypto_cfg_supported() - Find out if the crypto_mode, dusize, dun + * bytes of a crypto_config are supported by a + * ksm. + * @ksm: The keyslot manager to check + * @cfg: The crypto configuration to check for. + * + * Checks for crypto_mode/data unit size/dun bytes support. + * + * Return: Whether or not this ksm supports the specified crypto key. + */ +bool blk_ksm_crypto_cfg_supported(struct blk_keyslot_manager *ksm, + const struct blk_crypto_config *cfg) +{ + if (!ksm) + return false; + if (!(ksm->crypto_modes_supported[cfg->crypto_mode] & + cfg->data_unit_size)) + return false; + if (ksm->max_dun_bytes_supported < cfg->dun_bytes) + return false; + return true; +} + +/** + * blk_ksm_evict_key() - Evict a key from the lower layer device. + * @ksm: The keyslot manager to evict from + * @key: The key to evict + * + * Find the keyslot that the specified key was programmed into, and evict that + * slot from the lower layer device. The slot must not be in use by any + * in-flight IO when this function is called. + * + * Context: Process context. 
Takes and releases ksm->lock. + * Return: 0 on success or if there's no keyslot with the specified key, -EBUSY + * if the keyslot is still in use, or another -errno value on other + * error. + */ +int blk_ksm_evict_key(struct blk_keyslot_manager *ksm, + const struct blk_crypto_key *key) +{ + struct blk_ksm_keyslot *slot; + int err = 0; + + blk_ksm_hw_enter(ksm); + slot = blk_ksm_find_keyslot(ksm, key); + if (!slot) + goto out_unlock; + + if (WARN_ON_ONCE(atomic_read(&slot->slot_refs) != 0)) { + err = -EBUSY; + goto out_unlock; + } + err = ksm->ksm_ll_ops.keyslot_evict(ksm, key, + blk_ksm_get_slot_idx(slot)); + if (err) + goto out_unlock; + + hlist_del(&slot->hash_node); + slot->key = NULL; + err = 0; +out_unlock: + blk_ksm_hw_exit(ksm); + return err; +} + +/** + * blk_ksm_reprogram_all_keys() - Re-program all keyslots. + * @ksm: The keyslot manager + * + * Re-program all keyslots that are supposed to have a key programmed. This is + * intended only for use by drivers for hardware that loses its keys on reset. + * + * Context: Process context. Takes and releases ksm->lock. + */ +void blk_ksm_reprogram_all_keys(struct blk_keyslot_manager *ksm) +{ + unsigned int slot; + + /* This is for device initialization, so don't resume the device */ + down_write(&ksm->lock); + for (slot = 0; slot < ksm->num_slots; slot++) { + const struct blk_crypto_key *key = ksm->slots[slot].key; + int err; + + if (!key) + continue; + + err = ksm->ksm_ll_ops.keyslot_program(ksm, key, slot); + WARN_ON(err); + } + up_write(&ksm->lock); +} +EXPORT_SYMBOL_GPL(blk_ksm_reprogram_all_keys); + +void blk_ksm_destroy(struct blk_keyslot_manager *ksm) +{ + if (!ksm) + return; + kvfree(ksm->slot_hashtable); + memzero_explicit(ksm->slots, sizeof(ksm->slots[0]) * ksm->num_slots); + kvfree(ksm->slots); + memzero_explicit(ksm, sizeof(*ksm)); +} +EXPORT_SYMBOL_GPL(blk_ksm_destroy); diff --git a/include/linux/blk-crypto.h b/include/linux/blk-crypto.h new file mode 100644 index 0000000000000..570fdd251e032 --- /dev/null +++ b/include/linux/blk-crypto.h @@ -0,0 +1,51 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright 2019 Google LLC + */ + +#ifndef __LINUX_BLK_CRYPTO_H +#define __LINUX_BLK_CRYPTO_H + +enum blk_crypto_mode_num { + BLK_ENCRYPTION_MODE_INVALID, + BLK_ENCRYPTION_MODE_AES_256_XTS, + BLK_ENCRYPTION_MODE_AES_128_CBC_ESSIV, + BLK_ENCRYPTION_MODE_ADIANTUM, + BLK_ENCRYPTION_MODE_MAX, +}; + +#define BLK_CRYPTO_MAX_KEY_SIZE 64 +/** + * struct blk_crypto_config - an inline encryption key's crypto configuration + * @crypto_mode: encryption algorithm this key is for + * @data_unit_size: the data unit size for all encryption/decryptions with this + * key. This is the size in bytes of each individual plaintext and + * ciphertext. This is always a power of 2. It might be e.g. the + * filesystem block size or the disk sector size. + * @dun_bytes: the maximum number of bytes of DUN used when using this key + */ +struct blk_crypto_config { + enum blk_crypto_mode_num crypto_mode; + unsigned int data_unit_size; + unsigned int dun_bytes; +}; + +/** + * struct blk_crypto_key - an inline encryption key + * @crypto_cfg: the crypto configuration (like crypto_mode, key size) for this + * key + * @data_unit_size_bits: log2 of data_unit_size + * @size: size of this key in bytes (determined by @crypto_cfg.crypto_mode) + * @raw: the raw bytes of this key. Only the first @size bytes are used. + * + * A blk_crypto_key is immutable once created, and many bios can reference it at + * the same time. 
It must not be freed until all bios using it have completed. + */ +struct blk_crypto_key { + struct blk_crypto_config crypto_cfg; + unsigned int data_unit_size_bits; + unsigned int size; + u8 raw[BLK_CRYPTO_MAX_KEY_SIZE]; +}; + +#endif /* __LINUX_BLK_CRYPTO_H */ diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index 32868fbedc9e9..7b36695f71931 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -43,6 +43,7 @@ struct pr_ops; struct rq_qos; struct blk_queue_stats; struct blk_stat_callback; +struct blk_keyslot_manager; #define BLKDEV_MIN_RQ 4 #define BLKDEV_MAX_RQ 128 /* Default maximum */ @@ -474,6 +475,11 @@ struct request_queue { unsigned int dma_pad_mask; unsigned int dma_alignment; +#ifdef CONFIG_BLK_INLINE_ENCRYPTION + /* Inline crypto capabilities */ + struct blk_keyslot_manager *ksm; +#endif + unsigned int rq_timeout; int poll_nsec; diff --git a/include/linux/keyslot-manager.h b/include/linux/keyslot-manager.h new file mode 100644 index 0000000000000..18f3f5346843f --- /dev/null +++ b/include/linux/keyslot-manager.h @@ -0,0 +1,106 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright 2019 Google LLC + */ + +#ifndef __LINUX_KEYSLOT_MANAGER_H +#define __LINUX_KEYSLOT_MANAGER_H + +#include +#include + +struct blk_keyslot_manager; + +/** + * struct blk_ksm_ll_ops - functions to manage keyslots in hardware + * @keyslot_program: Program the specified key into the specified slot in the + * inline encryption hardware. + * @keyslot_evict: Evict key from the specified keyslot in the hardware. + * The key is provided so that e.g. dm layers can evict + * keys from the devices that they map over. + * Returns 0 on success, -errno otherwise. + * + * This structure should be provided by storage device drivers when they set up + * a keyslot manager - this structure holds the function ptrs that the keyslot + * manager will use to manipulate keyslots in the hardware. + */ +struct blk_ksm_ll_ops { + int (*keyslot_program)(struct blk_keyslot_manager *ksm, + const struct blk_crypto_key *key, + unsigned int slot); + int (*keyslot_evict)(struct blk_keyslot_manager *ksm, + const struct blk_crypto_key *key, + unsigned int slot); +}; + +struct blk_keyslot_manager { + /* + * The struct blk_ksm_ll_ops that this keyslot manager will use + * to perform operations like programming and evicting keys on the + * device + */ + struct blk_ksm_ll_ops ksm_ll_ops; + + /* + * The maximum number of bytes supported for specifying the data unit + * number. + */ + unsigned int max_dun_bytes_supported; + + /* + * Array of size BLK_ENCRYPTION_MODE_MAX of bitmasks that represents + * whether a crypto mode and data unit size are supported. The i'th + * bit of crypto_mode_supported[crypto_mode] is set iff a data unit + * size of (1 << i) is supported. We only support data unit sizes + * that are powers of 2. + */ + unsigned int crypto_modes_supported[BLK_ENCRYPTION_MODE_MAX]; + + /* Device for runtime power management (NULL if none) */ + struct device *dev; + + /* Here onwards are *private* fields for internal keyslot manager use */ + + unsigned int num_slots; + + /* Protects programming and evicting keys from the device */ + struct rw_semaphore lock; + + /* List of idle slots, with least recently used slot at front */ + wait_queue_head_t idle_slots_wait_queue; + struct list_head idle_slots; + spinlock_t idle_slots_lock; + + /* + * Hash table which maps struct *blk_crypto_key to keyslots, so that we + * can find a key's keyslot in O(1) time rather than O(num_slots). + * Protected by 'lock'. 
+ */ + struct hlist_head *slot_hashtable; + unsigned int log_slot_ht_size; + + /* Per-keyslot data */ + struct blk_ksm_keyslot *slots; +}; + +int blk_ksm_init(struct blk_keyslot_manager *ksm, unsigned int num_slots); + +blk_status_t blk_ksm_get_slot_for_key(struct blk_keyslot_manager *ksm, + const struct blk_crypto_key *key, + struct blk_ksm_keyslot **slot_ptr); + +unsigned int blk_ksm_get_slot_idx(struct blk_ksm_keyslot *slot); + +void blk_ksm_put_slot(struct blk_ksm_keyslot *slot); + +bool blk_ksm_crypto_cfg_supported(struct blk_keyslot_manager *ksm, + const struct blk_crypto_config *cfg); + +int blk_ksm_evict_key(struct blk_keyslot_manager *ksm, + const struct blk_crypto_key *key); + +void blk_ksm_reprogram_all_keys(struct blk_keyslot_manager *ksm); + +void blk_ksm_destroy(struct blk_keyslot_manager *ksm); + +#endif /* __LINUX_KEYSLOT_MANAGER_H */ From patchwork Wed Apr 29 07:21:12 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 1279039 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-ext4-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=google.com header.i=@google.com header.a=rsa-sha256 header.s=20161025 header.b=PYh1jm4c; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 49BqhZ6YlSz9sTJ for ; Wed, 29 Apr 2020 17:22:06 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726740AbgD2HVj (ORCPT ); Wed, 29 Apr 2020 03:21:39 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50838 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726386AbgD2HVe (ORCPT ); Wed, 29 Apr 2020 03:21:34 -0400 Received: from mail-qt1-x84a.google.com (mail-qt1-x84a.google.com [IPv6:2607:f8b0:4864:20::84a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 15045C09B041 for ; Wed, 29 Apr 2020 00:21:33 -0700 (PDT) Received: by mail-qt1-x84a.google.com with SMTP id n22so1620620qtp.15 for ; Wed, 29 Apr 2020 00:21:33 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=MpYUjB/lvrTwXxpw62cHmX9eVVnKaaJwXdBTC/0vsgI=; b=PYh1jm4cIKQ8JcTzA3gFwI79HC+nvtttdprfPz/V4VEWJEOYRWzjGODYxtQb7bTZ6p l86A6P13wysPos58ioWBYPR0QUQBNgs/C/n7WaeIdgxiBkuq2fCJB+BOVWWwbRTgXxAw 9B32dQo2e57ld1bMsMShkmx6hFsHysY714eFRssaj9vahmewAqZVx70XmFPwCR5HGAvK 08nay2e7JAg43rkd9MbZG4yv4CDly2Cpwi3aukgnta60M8J3bYlELgrEf8x337avbuL4 KnZglZBxskhhPd73BQNG89JzOsUnlj8HGesnmTJJdKi5YaWQ5NvkBInkUrHRzz2ZL7pc JrnQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=MpYUjB/lvrTwXxpw62cHmX9eVVnKaaJwXdBTC/0vsgI=; b=GUAMnco3yCmGaq/2MpG3aeW8fVVOVDgXWlVdx5XMf66AcXC8IANVox3zcctebBDL7N XiJVFnjgvNTR+dTjb06QzVYBt3FkvPKij08PXa7c3xyQeCE+8XDJPFR8IpT+S2++x4YE giXcvQVr4mMIErQ/r9G9bpLNfOoYzrQgJKs/o+BM408w8OuAgcTIgeYx3S4JktCfRoi9 9QDa79ms65/X+2D+peYvheepl6p84LwoakHF7ZemdOGsqGJxO/QgeJakO1ytOrSBKp/Q 
7CEpfm3tns8+p+uxaSzePaT+HzxpJgA9UPUwqJZt/B3BWYSRSiq1vueeWfhptGp1dHOu blDw== X-Gm-Message-State: AGi0PuZGXv6WhNdZc58bvl4K+aqLrA+JUh9dPVX1+X96zD4HNF6CUkGs AVM6x+taXeHc1CS/GD2Q66r3WyiY6aI= X-Google-Smtp-Source: APiQypJ7yqG+B7GcQuuIH8bexmbTBPNue8nccOyCNxDzWKtIV3hPjxy9PXpTESH7jhiHfIPWplUypOknxPo= X-Received: by 2002:ad4:4a8b:: with SMTP id h11mr29813751qvx.210.1588144892193; Wed, 29 Apr 2020 00:21:32 -0700 (PDT) Date: Wed, 29 Apr 2020 07:21:12 +0000 In-Reply-To: <20200429072121.50094-1-satyat@google.com> Message-Id: <20200429072121.50094-4-satyat@google.com> Mime-Version: 1.0 References: <20200429072121.50094-1-satyat@google.com> X-Mailer: git-send-email 2.26.2.303.gf8c07b1a785-goog Subject: [PATCH v11 03/12] block: Inline encryption support for blk-mq From: Satya Tangirala To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-ext4@vger.kernel.org Cc: Barani Muthukumaran , Kuohong Wang , Kim Boojin , Satya Tangirala Sender: linux-ext4-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-ext4@vger.kernel.org We must have some way of letting a storage device driver know what encryption context it should use for en/decrypting a request. However, it's the upper layers (like the filesystem/fscrypt) that know about and manages encryption contexts. As such, when the upper layer submits a bio to the block layer, and this bio eventually reaches a device driver with support for inline encryption, the device driver will need to have been told the encryption context for that bio. We want to communicate the encryption context from the upper layer to the storage device along with the bio, when the bio is submitted to the block layer. To do this, we add a struct bio_crypt_ctx to struct bio, which can represent an encryption context (note that we can't use the bi_private field in struct bio to do this because that field does not function to pass information across layers in the storage stack). We also introduce various functions to manipulate the bio_crypt_ctx and make the bio/request merging logic aware of the bio_crypt_ctx. We also make changes to blk-mq to make it handle bios with encryption contexts. blk-mq can merge many bios into the same request. These bios need to have contiguous data unit numbers (the necessary changes to blk-merge are also made to ensure this) - as such, it suffices to keep the data unit number of just the first bio, since that's all a storage driver needs to infer the data unit number to use for each data block in each bio in a request. blk-mq keeps track of the encryption context to be used for all the bios in a request with the request's rq_crypt_ctx. When the first bio is added to an empty request, blk-mq will program the encryption context of that bio into the request_queue's keyslot manager, and store the returned keyslot in the request's rq_crypt_ctx. All the functions to operate on encryption contexts are in blk-crypto.c. Upper layers only need to call bio_crypt_set_ctx with the encryption key, algorithm and data_unit_num; they don't have to worry about getting a keyslot for each encryption context, as blk-mq/blk-crypto handles that. Blk-crypto also makes it possible for request-based layered devices like dm-rq to make use of inline encryption hardware by cloning the rq_crypt_ctx and programming a keyslot in the new request_queue when necessary. 
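To make the merging rule concrete, the check amounts to something like the following simplified, single-limb sketch (the real helpers added in blk-crypto.c below handle the full multi-limb DUN array, overflow, and bios without a crypt context):

/*
 * Simplified illustration of the bio merge rule described above
 * (single-limb DUN only; not the actual helper).
 */
#include <linux/blk-crypto.h>

static bool example_bios_mergeable(const struct bio_crypt_ctx *front,
				   unsigned int front_bytes,
				   const struct bio_crypt_ctx *back)
{
	/* Number of data units covered by the front bio. */
	u64 units = front_bytes >> front->bc_key->data_unit_size_bits;

	/* Same key, and the back bio continues exactly where the front ends. */
	return front->bc_key == back->bc_key &&
	       back->bc_dun[0] == front->bc_dun[0] + units;
}

For example, with 4096-byte data units, a 32 KiB bio whose DUN starts at 100 can be merged in front of a bio whose DUN starts at 108, but not one whose DUN starts at 109.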
Note that any user of the block layer can submit bios with an encryption context, such as filesystems, device-mapper targets, etc. Signed-off-by: Satya Tangirala --- block/Makefile | 2 +- block/bio.c | 6 + block/blk-core.c | 21 +- block/blk-crypto-internal.h | 166 ++++++++++++++++ block/blk-crypto.c | 373 ++++++++++++++++++++++++++++++++++++ block/blk-map.c | 1 + block/blk-merge.c | 11 ++ block/blk-mq.c | 14 ++ block/blk.h | 2 + block/bounce.c | 2 + drivers/md/dm.c | 3 + include/linux/blk-crypto.h | 71 +++++++ include/linux/blk_types.h | 6 + include/linux/blkdev.h | 5 + 14 files changed, 678 insertions(+), 5 deletions(-) create mode 100644 block/blk-crypto-internal.h create mode 100644 block/blk-crypto.c diff --git a/block/Makefile b/block/Makefile index fc963e4676b0a..3ee88d3e807d2 100644 --- a/block/Makefile +++ b/block/Makefile @@ -36,4 +36,4 @@ obj-$(CONFIG_BLK_DEBUG_FS) += blk-mq-debugfs.o obj-$(CONFIG_BLK_DEBUG_FS_ZONED)+= blk-mq-debugfs-zoned.o obj-$(CONFIG_BLK_SED_OPAL) += sed-opal.o obj-$(CONFIG_BLK_PM) += blk-pm.o -obj-$(CONFIG_BLK_INLINE_ENCRYPTION) += keyslot-manager.o +obj-$(CONFIG_BLK_INLINE_ENCRYPTION) += keyslot-manager.o blk-crypto.o diff --git a/block/bio.c b/block/bio.c index 21cbaa6a1c20e..960303de1bb4a 100644 --- a/block/bio.c +++ b/block/bio.c @@ -18,6 +18,7 @@ #include #include #include +#include #include #include "blk.h" @@ -237,6 +238,8 @@ void bio_uninit(struct bio *bio) if (bio_integrity(bio)) bio_integrity_free(bio); + + bio_crypt_free_ctx(bio); } EXPORT_SYMBOL(bio_uninit); @@ -708,6 +711,8 @@ struct bio *bio_clone_fast(struct bio *bio, gfp_t gfp_mask, struct bio_set *bs) __bio_clone_fast(b, bio); + bio_crypt_clone(b, bio, gfp_mask); + if (bio_integrity(bio)) { int ret; @@ -1105,6 +1110,7 @@ void bio_advance(struct bio *bio, unsigned bytes) if (bio_integrity(bio)) bio_integrity_advance(bio, bytes); + bio_crypt_advance(bio, bytes); bio_advance_iter(bio, &bio->bi_iter, bytes); } EXPORT_SYMBOL(bio_advance); diff --git a/block/blk-core.c b/block/blk-core.c index 7e4a1da0715ea..77455973cbe03 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -38,6 +38,7 @@ #include #include #include +#include #define CREATE_TRACE_POINTS #include @@ -120,6 +121,7 @@ void blk_rq_init(struct request_queue *q, struct request *rq) rq->start_time_ns = ktime_get_ns(); rq->part = NULL; refcount_set(&rq->ref, 1); + blk_crypto_rq_set_defaults(rq); } EXPORT_SYMBOL(blk_rq_init); @@ -623,6 +625,8 @@ bool bio_attempt_back_merge(struct request *req, struct bio *bio, req->biotail = bio; req->__data_len += bio->bi_iter.bi_size; + bio_crypt_free_ctx(bio); + blk_account_io_start(req, false); return true; } @@ -647,6 +651,8 @@ bool bio_attempt_front_merge(struct request *req, struct bio *bio, req->__sector = bio->bi_iter.bi_sector; req->__data_len += bio->bi_iter.bi_size; + bio_crypt_attempt_front_merge(req, bio); + blk_account_io_start(req, false); return true; } @@ -1072,7 +1078,8 @@ blk_qc_t generic_make_request(struct bio *bio) /* Create a fresh bio_list for all subordinate requests */ bio_list_on_stack[1] = bio_list_on_stack[0]; bio_list_init(&bio_list_on_stack[0]); - ret = q->make_request_fn(q, bio); + if (blk_crypto_bio_prep(&bio)) + ret = q->make_request_fn(q, bio); blk_queue_exit(q); @@ -1120,7 +1127,7 @@ blk_qc_t direct_make_request(struct bio *bio) { struct request_queue *q = bio->bi_disk->queue; bool nowait = bio->bi_opf & REQ_NOWAIT; - blk_qc_t ret; + blk_qc_t ret = BLK_QC_T_NONE; if (!generic_make_request_checks(bio)) return BLK_QC_T_NONE; @@ -1132,8 +1139,8 @@ blk_qc_t 
direct_make_request(struct bio *bio) bio_io_error(bio); return BLK_QC_T_NONE; } - - ret = q->make_request_fn(q, bio); + if (blk_crypto_bio_prep(&bio)) + ret = q->make_request_fn(q, bio); blk_queue_exit(q); return ret; } @@ -1263,6 +1270,9 @@ blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request * should_fail_request(&rq->rq_disk->part0, blk_rq_bytes(rq))) return BLK_STS_IOERR; + if (blk_crypto_insert_cloned_request(rq)) + return BLK_STS_IOERR; + if (blk_queue_io_stat(q)) blk_account_io_start(rq, true); @@ -1640,6 +1650,9 @@ int blk_rq_prep_clone(struct request *rq, struct request *rq_src, rq->ioprio = rq_src->ioprio; rq->extra_len = rq_src->extra_len; + if (rq->bio) + blk_crypto_rq_bio_prep(rq, rq->bio, gfp_mask); + return 0; free_and_out: diff --git a/block/blk-crypto-internal.h b/block/blk-crypto-internal.h new file mode 100644 index 0000000000000..3ef0f7495be69 --- /dev/null +++ b/block/blk-crypto-internal.h @@ -0,0 +1,166 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright 2019 Google LLC + */ + +#ifndef __LINUX_BLK_CRYPTO_INTERNAL_H +#define __LINUX_BLK_CRYPTO_INTERNAL_H + +#include +#include + +/* Represents a crypto mode supported by blk-crypto */ +struct blk_crypto_mode { + unsigned int keysize; /* key size in bytes */ + unsigned int ivsize; /* iv size in bytes */ +}; + +#ifdef CONFIG_BLK_INLINE_ENCRYPTION + +void bio_crypt_dun_increment(u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE], + unsigned int inc); + +bool bio_crypt_rq_ctx_compatible(struct request *rq, struct bio *bio); + +bool bio_crypt_ctx_mergeable(struct bio_crypt_ctx *bc1, unsigned int bc1_bytes, + struct bio_crypt_ctx *bc2); + +static inline bool bio_crypt_ctx_back_mergeable(struct request *req, + struct bio *bio) +{ + return bio_crypt_ctx_mergeable(req->crypt_ctx, blk_rq_bytes(req), + bio->bi_crypt_context); +} + +static inline bool bio_crypt_ctx_front_mergeable(struct request *req, + struct bio *bio) +{ + return bio_crypt_ctx_mergeable(bio->bi_crypt_context, + bio->bi_iter.bi_size, req->crypt_ctx); +} + +static inline bool bio_crypt_ctx_merge_rq(struct request *req, + struct request *next) +{ + return bio_crypt_ctx_mergeable(req->crypt_ctx, blk_rq_bytes(req), + next->crypt_ctx); +} + +static inline void blk_crypto_rq_set_defaults(struct request *rq) +{ + rq->crypt_ctx = NULL; + rq->crypt_keyslot = NULL; +} + +static inline bool blk_crypto_rq_is_encrypted(struct request *rq) +{ + return rq->crypt_ctx; +} + +#else /* CONFIG_BLK_INLINE_ENCRYPTION */ + +static inline bool bio_crypt_rq_ctx_compatible(struct request *rq, + struct bio *bio) +{ + return true; +} + +static inline bool bio_crypt_ctx_front_mergeable(struct request *req, + struct bio *bio) +{ + return true; +} + +static inline bool bio_crypt_ctx_back_mergeable(struct request *req, + struct bio *bio) +{ + return true; +} + +static inline bool bio_crypt_ctx_merge_rq(struct request *req, + struct request *next) +{ + return true; +} + +static inline void blk_crypto_rq_set_defaults(struct request *rq) { } + +static inline bool blk_crypto_rq_is_encrypted(struct request *rq) +{ + return false; +} + +#endif /* CONFIG_BLK_INLINE_ENCRYPTION */ + +void __bio_crypt_advance(struct bio *bio, unsigned int bytes); +static inline void bio_crypt_advance(struct bio *bio, unsigned int bytes) +{ + if (bio_has_crypt_ctx(bio)) + __bio_crypt_advance(bio, bytes); +} + +void __bio_crypt_free_ctx(struct bio *bio); +static inline void bio_crypt_free_ctx(struct bio *bio) +{ + if (bio_has_crypt_ctx(bio)) + __bio_crypt_free_ctx(bio); +} + +static inline void 
bio_crypt_attempt_front_merge(struct request *rq, + struct bio *bio) +{ +#ifdef CONFIG_BLK_INLINE_ENCRYPTION + if (bio_has_crypt_ctx(bio)) + memcpy(rq->crypt_ctx->bc_dun, bio->bi_crypt_context->bc_dun, + sizeof(rq->crypt_ctx->bc_dun)); +#endif +} + +bool __blk_crypto_bio_prep(struct bio **bio_ptr); +static inline bool blk_crypto_bio_prep(struct bio **bio_ptr) +{ + if (bio_has_crypt_ctx(*bio_ptr)) + return __blk_crypto_bio_prep(bio_ptr); + return true; +} + +blk_status_t __blk_crypto_init_request(struct request *rq); +static inline blk_status_t blk_crypto_init_request(struct request *rq) +{ + if (blk_crypto_rq_is_encrypted(rq)) + return __blk_crypto_init_request(rq); + return BLK_STS_OK; +} + +void __blk_crypto_free_request(struct request *rq); +static inline void blk_crypto_free_request(struct request *rq) +{ + if (blk_crypto_rq_is_encrypted(rq)) + __blk_crypto_free_request(rq); +} + +void __blk_crypto_rq_bio_prep(struct request *rq, struct bio *bio, + gfp_t gfp_mask); +static inline void blk_crypto_rq_bio_prep(struct request *rq, struct bio *bio, + gfp_t gfp_mask) +{ + if (bio_has_crypt_ctx(bio)) + __blk_crypto_rq_bio_prep(rq, bio, gfp_mask); +} + +/** + * blk_crypto_insert_cloned_request - Prepare a cloned request to be inserted + * into a request queue. + * @rq: the request being queued + * + * Return: BLK_STS_OK on success, nonzero on error. + */ +static inline blk_status_t blk_crypto_insert_cloned_request(struct request *rq) +{ + + if (blk_crypto_rq_is_encrypted(rq)) + return blk_crypto_init_request(rq); + return BLK_STS_OK; +} + +#endif /* __LINUX_BLK_CRYPTO_INTERNAL_H */ diff --git a/block/blk-crypto.c b/block/blk-crypto.c new file mode 100644 index 0000000000000..ce52bd4223238 --- /dev/null +++ b/block/blk-crypto.c @@ -0,0 +1,373 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright 2019 Google LLC + */ + +/* + * Refer to Documentation/block/inline-encryption.rst for detailed explanation. + */ + +#define pr_fmt(fmt) "blk-crypto: " fmt + +#include +#include +#include +#include +#include + +#include "blk-crypto-internal.h" + +const struct blk_crypto_mode blk_crypto_modes[] = { + [BLK_ENCRYPTION_MODE_AES_256_XTS] = { + .keysize = 64, + .ivsize = 16, + }, + [BLK_ENCRYPTION_MODE_AES_128_CBC_ESSIV] = { + .keysize = 16, + .ivsize = 16, + }, + [BLK_ENCRYPTION_MODE_ADIANTUM] = { + .keysize = 32, + .ivsize = 32, + }, +}; + +/* + * This number needs to be at least (the number of threads doing IO + * concurrently) * (maximum recursive depth of a bio), so that we don't + * deadlock on crypt_ctx allocations. The default is chosen to be the same + * as the default number of post read contexts in both EXT4 and F2FS. + */ +static int num_prealloc_crypt_ctxs = 128; + +module_param(num_prealloc_crypt_ctxs, int, 0444); +MODULE_PARM_DESC(num_prealloc_crypt_ctxs, + "Number of bio crypto contexts to preallocate"); + +static struct kmem_cache *bio_crypt_ctx_cache; +static mempool_t *bio_crypt_ctx_pool; + +static int __init bio_crypt_ctx_init(void) +{ + size_t i; + + bio_crypt_ctx_cache = KMEM_CACHE(bio_crypt_ctx, 0); + if (!bio_crypt_ctx_cache) + goto out_no_mem; + + bio_crypt_ctx_pool = mempool_create_slab_pool(num_prealloc_crypt_ctxs, + bio_crypt_ctx_cache); + if (!bio_crypt_ctx_pool) + goto out_no_mem; + + /* This is assumed in various places. */ + BUILD_BUG_ON(BLK_ENCRYPTION_MODE_INVALID != 0); + + /* Sanity check that no algorithm exceeds the defined limits. 
*/ + for (i = 0; i < BLK_ENCRYPTION_MODE_MAX; i++) { + BUG_ON(blk_crypto_modes[i].keysize > BLK_CRYPTO_MAX_KEY_SIZE); + BUG_ON(blk_crypto_modes[i].ivsize > BLK_CRYPTO_MAX_IV_SIZE); + } + + return 0; +out_no_mem: + panic("Failed to allocate mem for bio crypt ctxs\n"); +} +subsys_initcall(bio_crypt_ctx_init); + +void bio_crypt_set_ctx(struct bio *bio, const struct blk_crypto_key *key, + const u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE], gfp_t gfp_mask) +{ + struct bio_crypt_ctx *bc = mempool_alloc(bio_crypt_ctx_pool, gfp_mask); + + bc->bc_key = key; + memcpy(bc->bc_dun, dun, sizeof(bc->bc_dun)); + + bio->bi_crypt_context = bc; +} + +void __bio_crypt_free_ctx(struct bio *bio) +{ + mempool_free(bio->bi_crypt_context, bio_crypt_ctx_pool); + bio->bi_crypt_context = NULL; +} + +void __bio_crypt_clone(struct bio *dst, struct bio *src, gfp_t gfp_mask) +{ + dst->bi_crypt_context = mempool_alloc(bio_crypt_ctx_pool, gfp_mask); + *dst->bi_crypt_context = *src->bi_crypt_context; +} +EXPORT_SYMBOL_GPL(__bio_crypt_clone); + +void bio_crypt_dun_increment(u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE], + unsigned int inc) +{ + int i = 0; + + while (inc && i < BLK_CRYPTO_DUN_ARRAY_SIZE) { + dun[i] += inc; + inc = (dun[i] < inc); + i++; + } +} + +void __bio_crypt_advance(struct bio *bio, unsigned int bytes) +{ + struct bio_crypt_ctx *bc = bio->bi_crypt_context; + + bio_crypt_dun_increment(bc->bc_dun, + bytes >> bc->bc_key->data_unit_size_bits); +} + +/* + * Returns true if @bc_dun plus @bytes converted to data units is equal to + * @next_dun, treating the DUNs as multi-limb integers. + */ +bool bio_crypt_dun_is_contiguous(const struct bio_crypt_ctx *bc, + unsigned int bytes, + const u64 next_dun[BLK_CRYPTO_DUN_ARRAY_SIZE]) +{ + int i = 0; + unsigned int carry = bytes >> bc->bc_key->data_unit_size_bits; + + while (i < BLK_CRYPTO_DUN_ARRAY_SIZE) { + if (bc->bc_dun[i] + carry != next_dun[i]) + return false; + /* + * If the addition in this limb overflowed, then we need to + * carry 1 into the next limb. Else the carry is 0. + */ + if ((bc->bc_dun[i] + carry) < carry) + carry = 1; + else + carry = 0; + i++; + } + + /* If the DUN wrapped through 0, don't treat it as contiguous. */ + return carry == 0; +} + +/* + * Checks that two bio crypt contexts are compatible - i.e. that + * they are mergeable except for data_unit_num continuity. + */ +static bool bio_crypt_ctx_compatible(struct bio_crypt_ctx *bc1, + struct bio_crypt_ctx *bc2) +{ + if (!bc1) + return !bc2; + + return bc2 && bc1->bc_key == bc2->bc_key; +} + +bool bio_crypt_rq_ctx_compatible(struct request *rq, struct bio *bio) +{ + return bio_crypt_ctx_compatible(rq->crypt_ctx, bio->bi_crypt_context); +} + +/* + * Checks that two bio crypt contexts are compatible, and also + * that their data_unit_nums are continuous (and can hence be merged) + * in the order b_1 followed by b_2. + */ +bool bio_crypt_ctx_mergeable(struct bio_crypt_ctx *bc1, unsigned int bc1_bytes, + struct bio_crypt_ctx *bc2) +{ + if (!bio_crypt_ctx_compatible(bc1, bc2)) + return false; + + return !bc1 || bio_crypt_dun_is_contiguous(bc1, bc1_bytes, bc2->bc_dun); +} + +/* + * Check that all I/O segments are data unit aligned, and set bio->bi_status + * on error. 
+ */ +static bool bio_crypt_check_alignment(struct bio *bio) +{ + const unsigned int data_unit_size = + bio->bi_crypt_context->bc_key->crypto_cfg.data_unit_size; + struct bvec_iter iter; + struct bio_vec bv; + + bio_for_each_segment(bv, bio, iter) { + if (!IS_ALIGNED(bv.bv_len | bv.bv_offset, data_unit_size)) + return false; + } + + return true; +} + +blk_status_t __blk_crypto_init_request(struct request *rq) +{ + return blk_ksm_get_slot_for_key(rq->q->ksm, rq->crypt_ctx->bc_key, + &rq->crypt_keyslot); +} + +/** + * __blk_crypto_free_request - Uninitialize the crypto fields of a request. + * + * @rq: The request whose crypto fields to uninitialize. + * + * Completely uninitializes the crypto fields of a request. If a keyslot has + * been programmed into some inline encryption hardware, that keyslot is + * released. The rq->crypt_ctx is also freed. + */ +void __blk_crypto_free_request(struct request *rq) +{ + blk_ksm_put_slot(rq->crypt_keyslot); + mempool_free(rq->crypt_ctx, bio_crypt_ctx_pool); + blk_crypto_rq_set_defaults(rq); +} + +/** + * __blk_crypto_bio_prep - Prepare bio for inline encryption + * + * @bio_ptr: pointer to original bio pointer + * + * Succeeds if the bio doesn't have inline encryption enabled or if the bio + * crypt context provided for the bio is supported by the underlying device's + * inline encryption hardware. Ends the bio with error otherwise. + * + * Caller must ensure bio has bio_crypt_ctx. + * + * Return: true on success; false on error (and bio->bi_status will be set + * appropriately, and bio_endio() will have been called so bio + * submission should abort). + */ +bool __blk_crypto_bio_prep(struct bio **bio_ptr) +{ + struct bio *bio = *bio_ptr; + const struct blk_crypto_key *bc_key = bio->bi_crypt_context->bc_key; + blk_status_t blk_st = BLK_STS_IOERR; + + /* Error if bio has no data. */ + if (WARN_ON_ONCE(!bio_has_data(bio))) + goto fail; + + if (!bio_crypt_check_alignment(bio)) + goto fail; + + /* + * Success if device supports the encryption context, and blk-integrity + * isn't supported by device/is turned off. + */ + if (!blk_ksm_crypto_cfg_supported(bio->bi_disk->queue->ksm, + &bc_key->crypto_cfg)) { + blk_st = BLK_STS_NOTSUPP; + goto fail; + } + + return true; +fail: + *bio_ptr->bi_status = blk_st; + bio_endio(*bio_ptr); + return false; +} + +/** + * __blk_crypto_rq_bio_prep - Prepare a request's crypt_ctx when its first bio + * is inserted + * + * @rq: The request to prepare + * @bio: The first bio being inserted into the request + * @gfp_mask: gfp mask + */ +void __blk_crypto_rq_bio_prep(struct request *rq, struct bio *bio, + gfp_t gfp_mask) +{ + if (!rq->crypt_ctx) + rq->crypt_ctx = mempool_alloc(bio_crypt_ctx_pool, gfp_mask); + *rq->crypt_ctx = *bio->bi_crypt_context; +} + +/** + * blk_crypto_init_key() - Prepare a key for use with blk-crypto + * @blk_key: Pointer to the blk_crypto_key to initialize. + * @raw_key: Pointer to the raw key. Must be the correct length for the chosen + * @crypto_mode; see blk_crypto_modes[]. + * @crypto_mode: identifier for the encryption algorithm to use + * @dun_bytes: number of bytes that will be used to specify the DUN when this + * key is used + * @data_unit_size: the data unit size to use for en/decryption + * + * Return: 0 on success, -errno on failure. The caller is responsible for + * zeroizing both blk_key and raw_key when done with them. 
+ */ +int blk_crypto_init_key(struct blk_crypto_key *blk_key, const u8 *raw_key, + enum blk_crypto_mode_num crypto_mode, + unsigned int dun_bytes, + unsigned int data_unit_size) +{ + const struct blk_crypto_mode *mode; + + memset(blk_key, 0, sizeof(*blk_key)); + + if (crypto_mode >= ARRAY_SIZE(blk_crypto_modes)) + return -EINVAL; + + mode = &blk_crypto_modes[crypto_mode]; + if (mode->keysize == 0) + return -EINVAL; + + if (!is_power_of_2(data_unit_size)) + return -EINVAL; + + blk_key->crypto_cfg.crypto_mode = crypto_mode; + blk_key->crypto_cfg.dun_bytes = dun_bytes; + blk_key->crypto_cfg.data_unit_size = data_unit_size; + blk_key->data_unit_size_bits = ilog2(data_unit_size); + blk_key->size = mode->keysize; + memcpy(blk_key->raw, raw_key, mode->keysize); + + return 0; +} + +bool blk_crypto_config_supported(struct request_queue *q, + const struct blk_crypto_config *cfg) +{ + return blk_ksm_crypto_cfg_supported(q->ksm, cfg); +} + +/** + * blk_crypto_start_using_key() - Start using a blk_crypto_key on a device + * @key: A key to use on the device + * @q: the request queue for the device + * + * Upper layers must call this function to ensure that the hardware supports + * the key's crypto settings. + * + * Return: 0 on success; -ENOPKG if the hardware doesn't support the key + */ +int blk_crypto_start_using_key(const struct blk_crypto_key *key, + struct request_queue *q) +{ + if (blk_ksm_crypto_cfg_supported(q->ksm, &key->crypto_cfg)) + return 0; + return -ENOPKG; +} +EXPORT_SYMBOL_GPL(blk_crypto_start_using_key); + +/** + * blk_crypto_evict_key() - Evict a key from any inline encryption hardware + * it may have been programmed into + * @q: The request queue who's associated inline encryption hardware this key + * might have been programmed into + * @key: The key to evict + * + * Upper layers (filesystems) should call this function to ensure that a key + * is evicted from hardware that it might have been programmed into. This + * will call blk_ksm_evict_key on the queue's keyslot manager, if one + * exists, and supports the crypto algorithm with the specified data unit size. + * + * Return: 0 on success or if key is not present in the q's ksm, -err on error. 
+ */ +int blk_crypto_evict_key(struct request_queue *q, + const struct blk_crypto_key *key) +{ + if (q->ksm && blk_ksm_crypto_cfg_supported(q->ksm, &key->crypto_cfg)) + return blk_ksm_evict_key(q->ksm, key); + + return 0; +} diff --git a/block/blk-map.c b/block/blk-map.c index b72c361911a48..92e23e5ff2458 100644 --- a/block/blk-map.c +++ b/block/blk-map.c @@ -549,6 +549,7 @@ int blk_rq_append_bio(struct request *rq, struct bio **bio) rq->biotail->bi_next = *bio; rq->biotail = *bio; rq->__data_len += (*bio)->bi_iter.bi_size; + bio_crypt_free_ctx(*bio); } return 0; diff --git a/block/blk-merge.c b/block/blk-merge.c index 1534ed736363f..a0c24b6e0eb3e 100644 --- a/block/blk-merge.c +++ b/block/blk-merge.c @@ -596,6 +596,8 @@ int ll_back_merge_fn(struct request *req, struct bio *bio, unsigned int nr_segs) if (blk_integrity_rq(req) && integrity_req_gap_back_merge(req, bio)) return 0; + if (!bio_crypt_ctx_back_mergeable(req, bio)) + return 0; if (blk_rq_sectors(req) + bio_sectors(bio) > blk_rq_get_max_sectors(req, blk_rq_pos(req))) { req_set_nomerge(req->q, req); @@ -612,6 +614,8 @@ int ll_front_merge_fn(struct request *req, struct bio *bio, unsigned int nr_segs if (blk_integrity_rq(req) && integrity_req_gap_front_merge(req, bio)) return 0; + if (!bio_crypt_ctx_front_mergeable(req, bio)) + return 0; if (blk_rq_sectors(req) + bio_sectors(bio) > blk_rq_get_max_sectors(req, bio->bi_iter.bi_sector)) { req_set_nomerge(req->q, req); @@ -661,6 +665,9 @@ static int ll_merge_requests_fn(struct request_queue *q, struct request *req, if (blk_integrity_merge_rq(q, req, next) == false) return 0; + if (!bio_crypt_ctx_merge_rq(req, next)) + return 0; + /* Merge is OK... */ req->nr_phys_segments = total_phys_segments; return 1; @@ -885,6 +892,10 @@ bool blk_rq_merge_ok(struct request *rq, struct bio *bio) if (blk_integrity_merge_bio(rq->q, rq, bio) == false) return false; + /* Only merge if the crypt contexts are compatible */ + if (!bio_crypt_rq_ctx_compatible(rq, bio)) + return false; + /* must be using the same buffer */ if (req_op(rq) == REQ_OP_WRITE_SAME && !blk_write_same_mergeable(rq->bio, bio)) diff --git a/block/blk-mq.c b/block/blk-mq.c index a7785df2c9446..d7b5f338b2a95 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -26,6 +26,7 @@ #include #include #include +#include #include @@ -317,6 +318,7 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data, #if defined(CONFIG_BLK_DEV_INTEGRITY) rq->nr_integrity_segments = 0; #endif + blk_crypto_rq_set_defaults(rq); /* tag was already set */ rq->extra_len = 0; WRITE_ONCE(rq->deadline, 0); @@ -474,6 +476,7 @@ static void __blk_mq_free_request(struct request *rq) struct blk_mq_hw_ctx *hctx = rq->mq_hctx; const int sched_tag = rq->internal_tag; + blk_crypto_free_request(rq); blk_pm_mark_last_busy(rq); rq->mq_hctx = NULL; if (rq->tag != -1) @@ -1782,6 +1785,7 @@ static void blk_mq_bio_to_request(struct request *rq, struct bio *bio, rq->__sector = bio->bi_iter.bi_sector; rq->write_hint = bio->bi_write_hint; blk_rq_bio_prep(rq, bio, nr_segs); + blk_crypto_rq_bio_prep(rq, bio, GFP_NOIO); blk_account_io_start(rq, true); } @@ -1983,6 +1987,7 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio) struct request *same_queue_rq = NULL; unsigned int nr_segs; blk_qc_t cookie; + blk_status_t ret; blk_queue_bounce(q, &bio); __blk_queue_split(q, &bio, &nr_segs); @@ -2016,6 +2021,15 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio) blk_mq_bio_to_request(rq, bio, nr_segs); + ret = 
blk_crypto_init_request(rq); + if (ret != BLK_STS_OK) { + bio->bi_status = ret; + bio_endio(bio); + blk_mq_free_request(rq); + return BLK_QC_T_NONE; + } + + plug = blk_mq_plug(q, bio); if (unlikely(is_flush_fua)) { /* Bypass scheduler for flush requests */ diff --git a/block/blk.h b/block/blk.h index 0a94ec68af32b..1f524ae216017 100644 --- a/block/blk.h +++ b/block/blk.h @@ -5,7 +5,9 @@ #include #include #include +#include #include +#include "blk-crypto-internal.h" #include "blk-mq.h" #include "blk-mq-sched.h" diff --git a/block/bounce.c b/block/bounce.c index f8ed677a1bf7e..c3aaed0701246 100644 --- a/block/bounce.c +++ b/block/bounce.c @@ -267,6 +267,8 @@ static struct bio *bounce_clone_bio(struct bio *bio_src, gfp_t gfp_mask, break; } + bio_crypt_clone(bio, bio_src, gfp_mask); + if (bio_integrity(bio_src)) { int ret; diff --git a/drivers/md/dm.c b/drivers/md/dm.c index db9e461146531..761c9e4fe5621 100644 --- a/drivers/md/dm.c +++ b/drivers/md/dm.c @@ -26,6 +26,7 @@ #include #include #include +#include #define DM_MSG_PREFIX "core" @@ -1334,6 +1335,8 @@ static int clone_bio(struct dm_target_io *tio, struct bio *bio, __bio_clone_fast(clone, bio); + bio_crypt_clone(clone, bio, GFP_NOIO); + if (bio_integrity(bio)) { int r; diff --git a/include/linux/blk-crypto.h b/include/linux/blk-crypto.h index 570fdd251e032..af4807a493ca8 100644 --- a/include/linux/blk-crypto.h +++ b/include/linux/blk-crypto.h @@ -6,6 +6,8 @@ #ifndef __LINUX_BLK_CRYPTO_H #define __LINUX_BLK_CRYPTO_H +#include + enum blk_crypto_mode_num { BLK_ENCRYPTION_MODE_INVALID, BLK_ENCRYPTION_MODE_AES_256_XTS, @@ -48,4 +50,73 @@ struct blk_crypto_key { u8 raw[BLK_CRYPTO_MAX_KEY_SIZE]; }; +#define BLK_CRYPTO_MAX_IV_SIZE 32 +#define BLK_CRYPTO_DUN_ARRAY_SIZE (BLK_CRYPTO_MAX_IV_SIZE / sizeof(u64)) + +/** + * struct bio_crypt_ctx - an inline encryption context + * @bc_key: the key, algorithm, and data unit size to use + * @bc_dun: the data unit number (starting IV) to use + * + * A bio_crypt_ctx specifies that the contents of the bio will be encrypted (for + * write requests) or decrypted (for read requests) inline by the storage device + * or controller. 
+ */ +struct bio_crypt_ctx { + const struct blk_crypto_key *bc_key; + u64 bc_dun[BLK_CRYPTO_DUN_ARRAY_SIZE]; +}; + +#include +#include + +struct request; +struct request_queue; + +#ifdef CONFIG_BLK_INLINE_ENCRYPTION + +static inline bool bio_has_crypt_ctx(struct bio *bio) +{ + return bio->bi_crypt_context; +} + +void bio_crypt_set_ctx(struct bio *bio, const struct blk_crypto_key *key, + const u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE], + gfp_t gfp_mask); + +bool bio_crypt_dun_is_contiguous(const struct bio_crypt_ctx *bc, + unsigned int bytes, + const u64 next_dun[BLK_CRYPTO_DUN_ARRAY_SIZE]); + +int blk_crypto_init_key(struct blk_crypto_key *blk_key, const u8 *raw_key, + enum blk_crypto_mode_num crypto_mode, + unsigned int dun_bytes, + unsigned int data_unit_size); + +int blk_crypto_start_using_key(const struct blk_crypto_key *key, + struct request_queue *q); + +int blk_crypto_evict_key(struct request_queue *q, + const struct blk_crypto_key *key); + +bool blk_crypto_config_supported(struct request_queue *q, + const struct blk_crypto_config *cfg); + +#else /* CONFIG_BLK_INLINE_ENCRYPTION */ + +static inline bool bio_has_crypt_ctx(struct bio *bio) +{ + return false; +} + +#endif /* CONFIG_BLK_INLINE_ENCRYPTION */ + +void __bio_crypt_clone(struct bio *dst, struct bio *src, gfp_t gfp_mask); +static inline void bio_crypt_clone(struct bio *dst, struct bio *src, + gfp_t gfp_mask) +{ + if (bio_has_crypt_ctx(src)) + __bio_crypt_clone(dst, src, gfp_mask); +} + #endif /* __LINUX_BLK_CRYPTO_H */ diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h index 31eb92876be7c..d0a84b8b4c6d6 100644 --- a/include/linux/blk_types.h +++ b/include/linux/blk_types.h @@ -18,6 +18,7 @@ struct block_device; struct io_context; struct cgroup_subsys_state; typedef void (bio_end_io_t) (struct bio *); +struct bio_crypt_ctx; /* * Block error status values. See block/blk-core:blk_errors for the details. 
@@ -173,6 +174,11 @@ struct bio { u64 bi_iocost_cost; #endif #endif + +#ifdef CONFIG_BLK_INLINE_ENCRYPTION + struct bio_crypt_ctx *bi_crypt_context; +#endif + union { #if defined(CONFIG_BLK_DEV_INTEGRITY) struct bio_integrity_payload *bi_integrity; /* data integrity */ diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index 7b36695f71931..98aae4638fda9 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -224,6 +224,11 @@ struct request { unsigned short nr_integrity_segments; #endif +#ifdef CONFIG_BLK_INLINE_ENCRYPTION + struct bio_crypt_ctx *crypt_ctx; + struct blk_ksm_keyslot *crypt_keyslot; +#endif + unsigned short write_hint; unsigned short ioprio; From patchwork Wed Apr 29 07:21:13 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 1279042 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-ext4-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=google.com header.i=@google.com header.a=rsa-sha256 header.s=20161025 header.b=I7kClOzm; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 49Bqhj2yz6z9sTJ for ; Wed, 29 Apr 2020 17:22:13 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726681AbgD2HWL (ORCPT ); Wed, 29 Apr 2020 03:22:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50818 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726618AbgD2HVe (ORCPT ); Wed, 29 Apr 2020 03:21:34 -0400 Received: from mail-qv1-xf4a.google.com (mail-qv1-xf4a.google.com [IPv6:2607:f8b0:4864:20::f4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9705DC09B043 for ; Wed, 29 Apr 2020 00:21:34 -0700 (PDT) Received: by mail-qv1-xf4a.google.com with SMTP id bm3so1663996qvb.0 for ; Wed, 29 Apr 2020 00:21:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=5Mo+7MFFBs4sdz9XooTQaPI4Fqr1H1d4GlYPjtWQb8M=; b=I7kClOzmTA/wrxEymX9+bQY/uNDPHNXeuBH6wJwB7E9qYl97j1h9k4xUVUJbODo3tO JPZ7apzzY/QxnPmjsHNPBCOtBURyQkqwZymtt1uCUNp3xX9ELk1KWfTRAQgfnwlcBNts P6pCs01aabeU4M9ciU88WGASxRihBCxwBbNmA3d9P0AcnnCQI4cymkoXFJeYV5rUmFLr PRKeMz7xwWnkqZ5Fbull+TOmR8d9R38oSISpq1T+6E8mKArYYo7JBwDA4RDjYqIiJVRb S8URvchFKtXCETzCgDab7TavKIsCKuTWBQkdU3yOayOaS3ywEPq72UtGM3DMcZDeaytv hQtA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=5Mo+7MFFBs4sdz9XooTQaPI4Fqr1H1d4GlYPjtWQb8M=; b=S4vYKwznRj0iTzul27EFA5ySbnn8oVEwAy1XIhrtO86QE7umqwGtqZDYM7Lpk11MWs 8LQxYFfINHwO48cGKfA3L+JayVzbVsaL87a4hzD55Zdhon6g3s8gC27HnDtM3mMHujiI cPUOOiYORdfyj11OPiSBSs3vQBLXn3vUmYgT9Hrtn6U0YDAQJW5v/qGsb7vFqeTDc7GO UjsVHEtjQYcgnfwIeVEH17bdxsHNyYvbcBIaCwdyYVf4JMDkT21G9PEc6R1sSeydZwkc XASZ66G55gatGrkLPjqaOKyjZY0eexJ7lHyKsSut62R6yA7mS38YFhBrt5hBPSTZRTIs +n0g== X-Gm-Message-State: AGi0PuaZ5BRjLMfXEk2dYEHJyfTGMDYpn+n+uvSSYGibimvBimp1ZMn0 CimXsMwHhnvmBJZ+lclVBryL6AkhZZU= 
X-Google-Smtp-Source: APiQypKMUvPJ1WDTi+RZxM7ebkhMHyaIPvn+6oN+h7e3EzQfL31PHWMOYerxu68ZIum149zO8oAgKyVAGmM= X-Received: by 2002:ad4:4e65:: with SMTP id ec5mr32206941qvb.32.1588144893748; Wed, 29 Apr 2020 00:21:33 -0700 (PDT) Date: Wed, 29 Apr 2020 07:21:13 +0000 In-Reply-To: <20200429072121.50094-1-satyat@google.com> Message-Id: <20200429072121.50094-5-satyat@google.com> Mime-Version: 1.0 References: <20200429072121.50094-1-satyat@google.com> X-Mailer: git-send-email 2.26.2.303.gf8c07b1a785-goog Subject: [PATCH v11 04/12] block: Make blk-integrity preclude hardware inline encryption From: Satya Tangirala To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-ext4@vger.kernel.org Cc: Barani Muthukumaran , Kuohong Wang , Kim Boojin , Satya Tangirala Sender: linux-ext4-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-ext4@vger.kernel.org Whenever a device supports blk-integrity, the kernel will now always pretend that the device doesn't support inline encryption (essentially by setting the keyslot manager in the request queue to NULL). There's no hardware currently that supports both integrity and inline encryption. However, it seems possible that there will be such hardware in the near future (like the NVMe key per I/O support that might support both inline encryption and PI). But properly integrating both features is not trivial, and without real hardware that implements both, it is difficult to tell if it will be done correctly by the majority of hardware that support both. So it seems best not to support both features together right now, and to decide what to do at probe time. Signed-off-by: Satya Tangirala --- block/bio-integrity.c | 3 +++ block/blk-integrity.c | 7 +++++++ block/keyslot-manager.c | 19 +++++++++++++++++++ include/linux/blkdev.h | 30 ++++++++++++++++++++++++++++++ 4 files changed, 59 insertions(+) diff --git a/block/bio-integrity.c b/block/bio-integrity.c index bf62c25cde8f4..3579ac0f6ec1f 100644 --- a/block/bio-integrity.c +++ b/block/bio-integrity.c @@ -42,6 +42,9 @@ struct bio_integrity_payload *bio_integrity_alloc(struct bio *bio, struct bio_set *bs = bio->bi_pool; unsigned inline_vecs; + if (WARN_ON_ONCE(bio_has_crypt_ctx(bio))) + return ERR_PTR(-EOPNOTSUPP); + if (!bs || !mempool_initialized(&bs->bio_integrity_pool)) { bip = kmalloc(struct_size(bip, bip_inline_vecs, nr_vecs), gfp_mask); inline_vecs = nr_vecs; diff --git a/block/blk-integrity.c b/block/blk-integrity.c index ff1070edbb400..b45711fc37df4 100644 --- a/block/blk-integrity.c +++ b/block/blk-integrity.c @@ -409,6 +409,13 @@ void blk_integrity_register(struct gendisk *disk, struct blk_integrity *template bi->tag_size = template->tag_size; disk->queue->backing_dev_info->capabilities |= BDI_CAP_STABLE_WRITES; + +#ifdef CONFIG_BLK_INLINE_ENCRYPTION + if (disk->queue->ksm) { + pr_warn("blk-integrity: Integrity and hardware inline encryption are not supported together. Unregistering keyslot manager from request queue, to disable hardware inline encryption.\n"); + blk_ksm_unregister(disk->queue); + } +#endif } EXPORT_SYMBOL(blk_integrity_register); diff --git a/block/keyslot-manager.c b/block/keyslot-manager.c index b584723b392ad..834f45fdd33e2 100644 --- a/block/keyslot-manager.c +++ b/block/keyslot-manager.c @@ -25,6 +25,9 @@ * Upper layers will call blk_ksm_get_slot_for_key() to program a * key into some slot in the inline encryption hardware. 
*/ + +#define pr_fmt(fmt) "blk_crypto: " fmt + #include #include #include @@ -378,3 +381,19 @@ void blk_ksm_destroy(struct blk_keyslot_manager *ksm) memzero_explicit(ksm, sizeof(*ksm)); } EXPORT_SYMBOL_GPL(blk_ksm_destroy); + +bool blk_ksm_register(struct blk_keyslot_manager *ksm, struct request_queue *q) +{ + if (blk_integrity_queue_supports_integrity(q)) { + pr_warn("Integrity and hardware inline encryption are not supported together. Hardware inline encryption is being disabled.\n"); + return false; + } + q->ksm = ksm; + return true; +} +EXPORT_SYMBOL_GPL(blk_ksm_register); + +void blk_ksm_unregister(struct request_queue *q) +{ + q->ksm = NULL; +} diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index 98aae4638fda9..17738dab8ae04 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -1562,6 +1562,12 @@ struct blk_integrity *bdev_get_integrity(struct block_device *bdev) return blk_get_integrity(bdev->bd_disk); } +static inline bool +blk_integrity_queue_supports_integrity(struct request_queue *q) +{ + return q->integrity.profile; +} + static inline bool blk_integrity_rq(struct request *rq) { return rq->cmd_flags & REQ_INTEGRITY; @@ -1642,6 +1648,11 @@ static inline struct blk_integrity *blk_get_integrity(struct gendisk *disk) { return NULL; } +static inline bool +blk_integrity_queue_supports_integrity(struct request_queue *q) +{ + return false; +} static inline int blk_integrity_compare(struct gendisk *a, struct gendisk *b) { return 0; @@ -1693,6 +1704,25 @@ static inline struct bio_vec *rq_integrity_vec(struct request *rq) #endif /* CONFIG_BLK_DEV_INTEGRITY */ +#ifdef CONFIG_BLK_INLINE_ENCRYPTION + +bool blk_ksm_register(struct blk_keyslot_manager *ksm, struct request_queue *q); + +void blk_ksm_unregister(struct request_queue *q); + +#else /* CONFIG_BLK_INLINE_ENCRYPTION */ + +static inline bool blk_ksm_register(struct blk_keyslot_manager *ksm, + struct request_queue *q) +{ + return true; +} + +static inline void blk_ksm_unregister(struct request_queue *q) { } + +#endif /* CONFIG_BLK_INLINE_ENCRYPTION */ + + struct block_device_operations { int (*open) (struct block_device *, fmode_t); void (*release) (struct gendisk *, fmode_t); From patchwork Wed Apr 29 07:21:14 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 1279041 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-ext4-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=google.com header.i=@google.com header.a=rsa-sha256 header.s=20161025 header.b=R8mB6x0H; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 49Bqhd2bzkz9sTM for ; Wed, 29 Apr 2020 17:22:09 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726816AbgD2HWI (ORCPT ); Wed, 29 Apr 2020 03:22:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50858 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726634AbgD2HVh (ORCPT ); Wed, 29 Apr 2020 03:21:37 -0400 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com 
[IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6EC7AC09B051 for ; Wed, 29 Apr 2020 00:21:36 -0700 (PDT) Received: by mail-yb1-xb4a.google.com with SMTP id k197so2311589ybk.21 for ; Wed, 29 Apr 2020 00:21:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=sdsiJhReOnCaKoimpeMfi+CagR5Rt+P84RcH+Wp0rew=; b=R8mB6x0HYHpRRg9MX2StwyFDz8lyS4evFvsfme9neqPFYPQYE927DUVtHdZxPD/V4s pgSTAfnZAdP8LXJ2qEH4KTOZ0Qxs8waf2bRbiDPrAafvKcnPTJoaEHzwa+58a9lH1gya InD0RqAYUeKxHmfSJ7nD+URlvQr2/FPjwUhH59srLvj4ZaA63bPwJcZENik5j/J8/hd8 DTL4OW0gkPGDiR8wRHbUumITd6Jjp3qvHJBVHECVWwRUA4Sq7zWC8vWSGnvPFCILjTIH hcaqT8WVwXhfqDiUUoaTWLZWA1YHTNwDvoUtX2ZPqZgfNfBneXh89ZA5ZtDHml2ICEnj Aukw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=sdsiJhReOnCaKoimpeMfi+CagR5Rt+P84RcH+Wp0rew=; b=DxV8aJjzgUKPhYL3OtU19MwgH0rq9N7twii53rV3Cn8icwjyhzOnrta1LNSR6hIg/U V+Yz4q/mAXBowx684aPoDs713cYrAx7VJPt92129NPDftqq9n/lh+Qx36eQoXDuflUy9 N+bnonWCXNNJ63S+4vFhAbKPWfr2G2OFg/+Xmomca4aS+vtcx8NILXwDx3Egf7PSiBJY JIyscsX651tbG/77OJUIUJTB3S/pDlWEUNpdrsE/tgWa92iyTmSS6iIBek0xGeXpZmSe 0qurQv4q1qWG6tXsPAJcC+O1DuakEFMmZT1rX4szG5zeat6XdPfP5G+/UERcUMxVBFn9 kyQQ== X-Gm-Message-State: AGi0PuYvwDTPOitCD5/dOJMS09cznEjoxgldss76rFk+/jaqy8E8KNGg a3VOs4Ee1jnosrI2builAICU/JtJyeE= X-Google-Smtp-Source: APiQypK+uqTGpkb+oFKP5vTiJ04fyN9wQrZF4jchonoGak/ODNF2PFnQHrNk7MU7XQuKXucoZBKKS9Y5ZfM= X-Received: by 2002:a25:9248:: with SMTP id e8mr27727030ybo.346.1588144895461; Wed, 29 Apr 2020 00:21:35 -0700 (PDT) Date: Wed, 29 Apr 2020 07:21:14 +0000 In-Reply-To: <20200429072121.50094-1-satyat@google.com> Message-Id: <20200429072121.50094-6-satyat@google.com> Mime-Version: 1.0 References: <20200429072121.50094-1-satyat@google.com> X-Mailer: git-send-email 2.26.2.303.gf8c07b1a785-goog Subject: [PATCH v11 05/12] block: blk-crypto-fallback for Inline Encryption From: Satya Tangirala To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-ext4@vger.kernel.org Cc: Barani Muthukumaran , Kuohong Wang , Kim Boojin , Satya Tangirala Sender: linux-ext4-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-ext4@vger.kernel.org Blk-crypto delegates crypto operations to inline encryption hardware when available. The separately configurable blk-crypto-fallback contains a software fallback to the kernel crypto API - when enabled, blk-crypto will use this fallback for en/decryption when inline encryption hardware is not available. This lets upper layers not have to worry about whether or not the underlying device has support for inline encryption before deciding to specify an encryption context for a bio. It also allows for testing without actual inline encryption hardware - in particular, it makes it possible to test the inline encryption code in ext4 and f2fs simply by running xfstests with the inlinecrypt mount option, which in turn allows for things like the regular upstream regression testing of ext4 to cover the inline encryption code paths. For more details, refer to Documentation/block/inline-encryption.rst. 
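As a purely illustrative sketch (not part of this patch — the example_* names
are hypothetical, and error handling and key zeroization are omitted), an upper
layer could drive the API roughly as follows and let blk-crypto decide whether
the en/decryption is done by inline encryption hardware or by the fallback:

  #include <linux/bio.h>
  #include <linux/blk-crypto.h>

  /* One-time setup: describe the key and make sure it can be used on @q. */
  static int example_start_crypto(struct request_queue *q,
                                  struct blk_crypto_key *key,
                                  const u8 *raw_key)
  {
          int err;

          /* AES-256-XTS, 8-byte DUNs, 4096-byte data units. */
          err = blk_crypto_init_key(key, raw_key,
                                    BLK_ENCRYPTION_MODE_AES_256_XTS,
                                    sizeof(u64), 4096);
          if (err)
                  return err;

          /*
           * Succeeds if the hardware supports this configuration, or if
           * CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK is enabled and the needed
           * skciphers could be allocated.  Must not be called from the data
           * path, since it may allocate crypto transforms.
           */
          return blk_crypto_start_using_key(key, q);
  }

  /* Per-I/O: attach an encryption context and submit as usual. */
  static void example_submit_encrypted(struct bio *bio,
                                       const struct blk_crypto_key *key,
                                       u64 first_dun)
  {
          u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE] = { first_dun };

          /* blk-crypto (or the fallback) will en/decrypt the bio's data. */
          bio_crypt_set_ctx(bio, key, dun, GFP_NOIO);
          submit_bio(bio);
  }
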
Signed-off-by: Satya Tangirala --- block/Kconfig | 10 + block/Makefile | 1 + block/blk-crypto-fallback.c | 655 ++++++++++++++++++++++++++++++++++++ block/blk-crypto-internal.h | 35 ++ block/blk-crypto.c | 73 ++-- include/linux/blk-crypto.h | 2 +- 6 files changed, 751 insertions(+), 25 deletions(-) create mode 100644 block/blk-crypto-fallback.c diff --git a/block/Kconfig b/block/Kconfig index c04a1d5008421..0af3876237748 100644 --- a/block/Kconfig +++ b/block/Kconfig @@ -192,6 +192,16 @@ config BLK_INLINE_ENCRYPTION block layer handle encryption, so users can take advantage of inline encryption hardware if present. +config BLK_INLINE_ENCRYPTION_FALLBACK + bool "Enable crypto API fallback for blk-crypto" + depends on BLK_INLINE_ENCRYPTION + select CRYPTO + select CRYPTO_SKCIPHER + help + Enabling this lets the block layer handle inline encryption + by falling back to the kernel crypto API when inline + encryption hardware is not present. + menu "Partition Types" source "block/partitions/Kconfig" diff --git a/block/Makefile b/block/Makefile index 3ee88d3e807d2..78719169fb2af 100644 --- a/block/Makefile +++ b/block/Makefile @@ -37,3 +37,4 @@ obj-$(CONFIG_BLK_DEBUG_FS_ZONED)+= blk-mq-debugfs-zoned.o obj-$(CONFIG_BLK_SED_OPAL) += sed-opal.o obj-$(CONFIG_BLK_PM) += blk-pm.o obj-$(CONFIG_BLK_INLINE_ENCRYPTION) += keyslot-manager.o blk-crypto.o +obj-$(CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK) += blk-crypto-fallback.o diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c new file mode 100644 index 0000000000000..43ffa40d4489b --- /dev/null +++ b/block/blk-crypto-fallback.c @@ -0,0 +1,655 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright 2019 Google LLC + */ + +/* + * Refer to Documentation/block/inline-encryption.rst for detailed explanation. + */ + +#define pr_fmt(fmt) "blk-crypto-fallback: " fmt + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "blk-crypto-internal.h" + +static unsigned int num_prealloc_bounce_pg = 32; +module_param(num_prealloc_bounce_pg, uint, 0); +MODULE_PARM_DESC(num_prealloc_bounce_pg, + "Number of preallocated bounce pages for the blk-crypto crypto API fallback"); + +static unsigned int blk_crypto_num_keyslots = 100; +module_param_named(num_keyslots, blk_crypto_num_keyslots, uint, 0); +MODULE_PARM_DESC(num_keyslots, + "Number of keyslots for the blk-crypto crypto API fallback"); + +static unsigned int num_prealloc_fallback_crypt_ctxs = 128; +module_param(num_prealloc_fallback_crypt_ctxs, uint, 0); +MODULE_PARM_DESC(num_prealloc_crypt_fallback_ctxs, + "Number of preallocated bio fallback crypto contexts for blk-crypto to use during crypto API fallback"); + +struct bio_fallback_crypt_ctx { + struct bio_crypt_ctx crypt_ctx; + /* + * Copy of the bvec_iter when this bio was submitted. + * We only want to en/decrypt the part of the bio as described by the + * bvec_iter upon submission because bio might be split before being + * resubmitted + */ + struct bvec_iter crypt_iter; + union { + struct { + struct work_struct work; + struct bio *bio; + }; + struct { + void *bi_private_orig; + bio_end_io_t *bi_end_io_orig; + }; + }; +}; + +static struct kmem_cache *bio_fallback_crypt_ctx_cache; +static mempool_t *bio_fallback_crypt_ctx_pool; + +/* + * Allocating a crypto tfm during I/O can deadlock, so we have to preallocate + * all of a mode's tfms when that mode starts being used. 
Since each mode may + * need all the keyslots at some point, each mode needs its own tfm for each + * keyslot; thus, a keyslot may contain tfms for multiple modes. However, to + * match the behavior of real inline encryption hardware (which only supports a + * single encryption context per keyslot), we only allow one tfm per keyslot to + * be used at a time - the rest of the unused tfms have their keys cleared. + */ +static DEFINE_MUTEX(tfms_init_lock); +static bool tfms_inited[BLK_ENCRYPTION_MODE_MAX]; + +static struct blk_crypto_keyslot { + enum blk_crypto_mode_num crypto_mode; + struct crypto_skcipher *tfms[BLK_ENCRYPTION_MODE_MAX]; +} *blk_crypto_keyslots; + +static struct blk_keyslot_manager blk_crypto_ksm; +static struct workqueue_struct *blk_crypto_wq; +static mempool_t *blk_crypto_bounce_page_pool; + +/* + * This is the key we set when evicting a keyslot. This *should* be the all 0's + * key, but AES-XTS rejects that key, so we use some random bytes instead. + */ +static u8 blank_key[BLK_CRYPTO_MAX_KEY_SIZE]; + +static void blk_crypto_evict_keyslot(unsigned int slot) +{ + struct blk_crypto_keyslot *slotp = &blk_crypto_keyslots[slot]; + enum blk_crypto_mode_num crypto_mode = slotp->crypto_mode; + int err; + + WARN_ON(slotp->crypto_mode == BLK_ENCRYPTION_MODE_INVALID); + + /* Clear the key in the skcipher */ + err = crypto_skcipher_setkey(slotp->tfms[crypto_mode], blank_key, + blk_crypto_modes[crypto_mode].keysize); + WARN_ON(err); + slotp->crypto_mode = BLK_ENCRYPTION_MODE_INVALID; +} + +static int blk_crypto_keyslot_program(struct blk_keyslot_manager *ksm, + const struct blk_crypto_key *key, + unsigned int slot) +{ + struct blk_crypto_keyslot *slotp = &blk_crypto_keyslots[slot]; + const enum blk_crypto_mode_num crypto_mode = + key->crypto_cfg.crypto_mode; + int err; + + if (crypto_mode != slotp->crypto_mode && + slotp->crypto_mode != BLK_ENCRYPTION_MODE_INVALID) + blk_crypto_evict_keyslot(slot); + + slotp->crypto_mode = crypto_mode; + err = crypto_skcipher_setkey(slotp->tfms[crypto_mode], key->raw, + key->size); + if (err) { + blk_crypto_evict_keyslot(slot); + return -EIO; + } + return 0; +} + +static int blk_crypto_keyslot_evict(struct blk_keyslot_manager *ksm, + const struct blk_crypto_key *key, + unsigned int slot) +{ + blk_crypto_evict_keyslot(slot); + return 0; +} + +/* + * The crypto API fallback KSM ops - only used for a bio when it specifies a + * blk_crypto_key that was not supported by the device's inline encryption + * hardware. 
+ */ +static const struct blk_ksm_ll_ops blk_crypto_ksm_ll_ops = { + .keyslot_program = blk_crypto_keyslot_program, + .keyslot_evict = blk_crypto_keyslot_evict, +}; + +static void blk_crypto_fallback_encrypt_endio(struct bio *enc_bio) +{ + struct bio *src_bio = enc_bio->bi_private; + int i; + + for (i = 0; i < enc_bio->bi_vcnt; i++) + mempool_free(enc_bio->bi_io_vec[i].bv_page, + blk_crypto_bounce_page_pool); + + src_bio->bi_status = enc_bio->bi_status; + + bio_put(enc_bio); + bio_endio(src_bio); +} + +static struct bio *blk_crypto_clone_bio(struct bio *bio_src) +{ + struct bvec_iter iter; + struct bio_vec bv; + struct bio *bio; + + bio = bio_alloc_bioset(GFP_NOIO, bio_segments(bio_src), NULL); + if (!bio) + return NULL; + bio->bi_disk = bio_src->bi_disk; + bio->bi_opf = bio_src->bi_opf; + bio->bi_ioprio = bio_src->bi_ioprio; + bio->bi_write_hint = bio_src->bi_write_hint; + bio->bi_iter.bi_sector = bio_src->bi_iter.bi_sector; + bio->bi_iter.bi_size = bio_src->bi_iter.bi_size; + + bio_for_each_segment(bv, bio_src, iter) + bio->bi_io_vec[bio->bi_vcnt++] = bv; + + bio_clone_blkg_association(bio, bio_src); + blkcg_bio_issue_init(bio); + + return bio; +} + +static bool blk_crypto_alloc_cipher_req(struct bio *src_bio, + struct blk_ksm_keyslot *slot, + struct skcipher_request **ciph_req_ret, + struct crypto_wait *wait) +{ + struct skcipher_request *ciph_req; + const struct blk_crypto_keyslot *slotp; + int keyslot_idx = blk_ksm_get_slot_idx(slot); + + slotp = &blk_crypto_keyslots[keyslot_idx]; + ciph_req = skcipher_request_alloc(slotp->tfms[slotp->crypto_mode], + GFP_NOIO); + if (!ciph_req) { + src_bio->bi_status = BLK_STS_RESOURCE; + return false; + } + + skcipher_request_set_callback(ciph_req, + CRYPTO_TFM_REQ_MAY_BACKLOG | + CRYPTO_TFM_REQ_MAY_SLEEP, + crypto_req_done, wait); + *ciph_req_ret = ciph_req; + + return true; +} + +static bool blk_crypto_split_bio_if_needed(struct bio **bio_ptr) +{ + struct bio *bio = *bio_ptr; + unsigned int i = 0; + unsigned int num_sectors = 0; + struct bio_vec bv; + struct bvec_iter iter; + + bio_for_each_segment(bv, bio, iter) { + num_sectors += bv.bv_len >> SECTOR_SHIFT; + if (++i == BIO_MAX_PAGES) + break; + } + if (num_sectors < bio_sectors(bio)) { + struct bio *split_bio; + + split_bio = bio_split(bio, num_sectors, GFP_NOIO, NULL); + if (!split_bio) { + bio->bi_status = BLK_STS_RESOURCE; + return false; + } + bio_chain(split_bio, bio); + generic_make_request(bio); + *bio_ptr = split_bio; + } + + return true; +} + +union blk_crypto_iv { + __le64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE]; + u8 bytes[BLK_CRYPTO_MAX_IV_SIZE]; +}; + +static void blk_crypto_dun_to_iv(const u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE], + union blk_crypto_iv *iv) +{ + int i; + + for (i = 0; i < BLK_CRYPTO_DUN_ARRAY_SIZE; i++) + iv->dun[i] = cpu_to_le64(dun[i]); +} + +/* + * The crypto API fallback's encryption routine. + * Allocate a bounce bio for encryption, encrypt the input bio using crypto API, + * and replace *bio_ptr with the bounce bio. May split input bio if it's too + * large. Returns true on success. Returns false and sets bio->bi_status on + * error. 
+ */ +static bool blk_crypto_fallback_encrypt_bio(struct bio **bio_ptr) +{ + struct bio *src_bio, *enc_bio; + struct bio_crypt_ctx *bc; + struct blk_ksm_keyslot *slot; + int data_unit_size; + struct skcipher_request *ciph_req = NULL; + DECLARE_CRYPTO_WAIT(wait); + u64 curr_dun[BLK_CRYPTO_DUN_ARRAY_SIZE]; + struct scatterlist src, dst; + union blk_crypto_iv iv; + unsigned int i, j; + bool ret = false; + blk_status_t blk_st; + + /* Split the bio if it's too big for single page bvec */ + if (!blk_crypto_split_bio_if_needed(bio_ptr)) + return false; + + src_bio = *bio_ptr; + bc = src_bio->bi_crypt_context; + data_unit_size = bc->bc_key->crypto_cfg.data_unit_size; + + /* Allocate bounce bio for encryption */ + enc_bio = blk_crypto_clone_bio(src_bio); + if (!enc_bio) { + src_bio->bi_status = BLK_STS_RESOURCE; + return false; + } + + /* + * Use the crypto API fallback keyslot manager to get a crypto_skcipher + * for the algorithm and key specified for this bio. + */ + blk_st = blk_ksm_get_slot_for_key(&blk_crypto_ksm, bc->bc_key, &slot); + if (blk_st != BLK_STS_OK) { + src_bio->bi_status = blk_st; + goto out_put_enc_bio; + } + + /* and then allocate an skcipher_request for it */ + if (!blk_crypto_alloc_cipher_req(src_bio, slot, &ciph_req, &wait)) + goto out_release_keyslot; + + memcpy(curr_dun, bc->bc_dun, sizeof(curr_dun)); + sg_init_table(&src, 1); + sg_init_table(&dst, 1); + + skcipher_request_set_crypt(ciph_req, &src, &dst, data_unit_size, + iv.bytes); + + /* Encrypt each page in the bounce bio */ + for (i = 0; i < enc_bio->bi_vcnt; i++) { + struct bio_vec *enc_bvec = &enc_bio->bi_io_vec[i]; + struct page *plaintext_page = enc_bvec->bv_page; + struct page *ciphertext_page = + mempool_alloc(blk_crypto_bounce_page_pool, GFP_NOIO); + + enc_bvec->bv_page = ciphertext_page; + + if (!ciphertext_page) { + src_bio->bi_status = BLK_STS_RESOURCE; + goto out_free_bounce_pages; + } + + sg_set_page(&src, plaintext_page, data_unit_size, + enc_bvec->bv_offset); + sg_set_page(&dst, ciphertext_page, data_unit_size, + enc_bvec->bv_offset); + + /* Encrypt each data unit in this page */ + for (j = 0; j < enc_bvec->bv_len; j += data_unit_size) { + blk_crypto_dun_to_iv(curr_dun, &iv); + if (crypto_wait_req(crypto_skcipher_encrypt(ciph_req), + &wait)) { + i++; + src_bio->bi_status = BLK_STS_IOERR; + goto out_free_bounce_pages; + } + bio_crypt_dun_increment(curr_dun, 1); + src.offset += data_unit_size; + dst.offset += data_unit_size; + } + } + + enc_bio->bi_private = src_bio; + enc_bio->bi_end_io = blk_crypto_fallback_encrypt_endio; + *bio_ptr = enc_bio; + ret = true; + + enc_bio = NULL; + goto out_free_ciph_req; + +out_free_bounce_pages: + while (i > 0) + mempool_free(enc_bio->bi_io_vec[--i].bv_page, + blk_crypto_bounce_page_pool); +out_free_ciph_req: + skcipher_request_free(ciph_req); +out_release_keyslot: + blk_ksm_put_slot(slot); +out_put_enc_bio: + if (enc_bio) + bio_put(enc_bio); + + return ret; +} + +/* + * The crypto API fallback's main decryption routine. + * Decrypts input bio in place, and calls bio_endio on the bio. 
+ */ +static void blk_crypto_fallback_decrypt_bio(struct work_struct *work) +{ + struct bio_fallback_crypt_ctx *f_ctx = + container_of(work, struct bio_fallback_crypt_ctx, work); + struct bio *bio = f_ctx->bio; + struct bio_crypt_ctx *bc = &f_ctx->crypt_ctx; + struct blk_ksm_keyslot *slot; + struct skcipher_request *ciph_req = NULL; + DECLARE_CRYPTO_WAIT(wait); + u64 curr_dun[BLK_CRYPTO_DUN_ARRAY_SIZE]; + union blk_crypto_iv iv; + struct scatterlist sg; + struct bio_vec bv; + struct bvec_iter iter; + const int data_unit_size = bc->bc_key->crypto_cfg.data_unit_size; + unsigned int i; + blk_status_t blk_st; + + /* + * Use the crypto API fallback keyslot manager to get a crypto_skcipher + * for the algorithm and key specified for this bio. + */ + blk_st = blk_ksm_get_slot_for_key(&blk_crypto_ksm, bc->bc_key, &slot); + if (blk_st != BLK_STS_OK) { + bio->bi_status = blk_st; + goto out_no_keyslot; + } + + /* and then allocate an skcipher_request for it */ + if (!blk_crypto_alloc_cipher_req(bio, slot, &ciph_req, &wait)) + goto out; + + memcpy(curr_dun, bc->bc_dun, sizeof(curr_dun)); + sg_init_table(&sg, 1); + skcipher_request_set_crypt(ciph_req, &sg, &sg, data_unit_size, + iv.bytes); + + /* Decrypt each segment in the bio */ + __bio_for_each_segment(bv, bio, iter, f_ctx->crypt_iter) { + struct page *page = bv.bv_page; + + sg_set_page(&sg, page, data_unit_size, bv.bv_offset); + + /* Decrypt each data unit in the segment */ + for (i = 0; i < bv.bv_len; i += data_unit_size) { + blk_crypto_dun_to_iv(curr_dun, &iv); + if (crypto_wait_req(crypto_skcipher_decrypt(ciph_req), + &wait)) { + bio->bi_status = BLK_STS_IOERR; + goto out; + } + bio_crypt_dun_increment(curr_dun, 1); + sg.offset += data_unit_size; + } + } + +out: + skcipher_request_free(ciph_req); + blk_ksm_put_slot(slot); +out_no_keyslot: + mempool_free(f_ctx, bio_fallback_crypt_ctx_pool); + bio_endio(bio); +} + +/** + * blk_crypto_fallback_decrypt_endio - clean up bio w.r.t fallback decryption + * + * @bio: the bio to clean up. + * + * Restore bi_private and bi_end_io, and queue the bio for decryption into a + * workqueue, since this function will be called from an atomic context. + */ +static void blk_crypto_fallback_decrypt_endio(struct bio *bio) +{ + struct bio_fallback_crypt_ctx *f_ctx = bio->bi_private; + + bio->bi_private = f_ctx->bi_private_orig; + bio->bi_end_io = f_ctx->bi_end_io_orig; + + /* If there was an IO error, don't queue for decrypt. */ + if (bio->bi_status) { + mempool_free(f_ctx, bio_fallback_crypt_ctx_pool); + bio_endio(bio); + return; + } + + INIT_WORK(&f_ctx->work, blk_crypto_fallback_decrypt_bio); + f_ctx->bio = bio; + queue_work(blk_crypto_wq, &f_ctx->work); +} + +/** + * blk_crypto_fallback_bio_prep - Prepare a bio to use fallback en/decryption + * + * @bio_ptr: pointer to the bio to prepare + * + * If bio is doing a WRITE operation, this splits the bio into two parts if it's + * too big (see blk_crypto_split_bio_if_needed). It then allocates a bounce bio + * for the first part, encrypts it, and update bio_ptr to point to the bounce + * bio. + * + * For a READ operation, we mark the bio for decryption by using bi_private and + * bi_end_io. + * + * In either case, this function will make the bio look like a regular bio (i.e. + * as if no encryption context was ever specified) for the purposes of the rest + * of the stack except for blk-integrity (blk-integrity and blk-crypto are not + * currently supported together). + * + * Returns true on success. Sets bio->bi_status and returns false on error. 
+ */ +bool blk_crypto_fallback_bio_prep(struct bio **bio_ptr) +{ + struct bio *bio = *bio_ptr; + struct bio_crypt_ctx *bc = bio->bi_crypt_context; + struct bio_fallback_crypt_ctx *f_ctx; + + if (!tfms_inited[bc->bc_key->crypto_cfg.crypto_mode]) { + bio->bi_status = BLK_STS_IOERR; + return false; + } + + if (!blk_ksm_crypto_cfg_supported(&blk_crypto_ksm, + &bc->bc_key->crypto_cfg)) { + bio->bi_status = BLK_STS_NOTSUPP; + return false; + } + + if (bio_data_dir(bio) == WRITE) + return blk_crypto_fallback_encrypt_bio(bio_ptr); + + /* + * bio READ case: Set up a f_ctx in the bio's bi_private and set the + * bi_end_io appropriately to trigger decryption when the bio is ended. + */ + f_ctx = mempool_alloc(bio_fallback_crypt_ctx_pool, GFP_NOIO); + f_ctx->crypt_ctx = *bc; + f_ctx->crypt_iter = bio->bi_iter; + f_ctx->bi_private_orig = bio->bi_private; + f_ctx->bi_end_io_orig = bio->bi_end_io; + bio->bi_private = (void *)f_ctx; + bio->bi_end_io = blk_crypto_fallback_decrypt_endio; + bio_crypt_free_ctx(bio); + + return true; +} + +int blk_crypto_fallback_evict_key(const struct blk_crypto_key *key) +{ + return blk_ksm_evict_key(&blk_crypto_ksm, key); +} + +static bool blk_crypto_fallback_inited; +static int blk_crypto_fallback_init(void) +{ + int i; + int err = -ENOMEM; + + if (blk_crypto_fallback_inited) + return 0; + + prandom_bytes(blank_key, BLK_CRYPTO_MAX_KEY_SIZE); + + err = blk_ksm_init(&blk_crypto_ksm, blk_crypto_num_keyslots); + if (err) + goto out; + err = -ENOMEM; + + blk_crypto_ksm.ksm_ll_ops = blk_crypto_ksm_ll_ops; + blk_crypto_ksm.max_dun_bytes_supported = BLK_CRYPTO_MAX_IV_SIZE; + + /* All blk-crypto modes have a crypto API fallback. */ + for (i = 0; i < BLK_ENCRYPTION_MODE_MAX; i++) + blk_crypto_ksm.crypto_modes_supported[i] = 0xFFFFFFFF; + blk_crypto_ksm.crypto_modes_supported[BLK_ENCRYPTION_MODE_INVALID] = 0; + + blk_crypto_wq = alloc_workqueue("blk_crypto_wq", + WQ_UNBOUND | WQ_HIGHPRI | + WQ_MEM_RECLAIM, num_online_cpus()); + if (!blk_crypto_wq) + goto fail_free_ksm; + + blk_crypto_keyslots = kcalloc(blk_crypto_num_keyslots, + sizeof(blk_crypto_keyslots[0]), + GFP_KERNEL); + if (!blk_crypto_keyslots) + goto fail_free_wq; + + blk_crypto_bounce_page_pool = + mempool_create_page_pool(num_prealloc_bounce_pg, 0); + if (!blk_crypto_bounce_page_pool) + goto fail_free_keyslots; + + bio_fallback_crypt_ctx_cache = KMEM_CACHE(bio_fallback_crypt_ctx, 0); + if (!bio_fallback_crypt_ctx_cache) + goto fail_free_bounce_page_pool; + + bio_fallback_crypt_ctx_pool = + mempool_create_slab_pool(num_prealloc_fallback_crypt_ctxs, + bio_fallback_crypt_ctx_cache); + if (!bio_fallback_crypt_ctx_pool) + goto fail_free_crypt_ctx_cache; + + blk_crypto_fallback_inited = true; + + return 0; +fail_free_crypt_ctx_cache: + kmem_cache_destroy(bio_fallback_crypt_ctx_cache); +fail_free_bounce_page_pool: + mempool_destroy(blk_crypto_bounce_page_pool); +fail_free_keyslots: + kfree(blk_crypto_keyslots); +fail_free_wq: + destroy_workqueue(blk_crypto_wq); +fail_free_ksm: + blk_ksm_destroy(&blk_crypto_ksm); +out: + return err; +} + +/* + * Prepare blk-crypto-fallback for the specified crypto mode. + * Returns -ENOPKG if the needed crypto API support is missing. + */ +int blk_crypto_fallback_start_using_mode(enum blk_crypto_mode_num mode_num) +{ + const char *cipher_str = blk_crypto_modes[mode_num].cipher_str; + struct blk_crypto_keyslot *slotp; + unsigned int i; + int err = 0; + + /* + * Fast path + * Ensure that updates to blk_crypto_keyslots[i].tfms[mode_num] + * for each i are visible before we try to access them. 
+ */ + if (likely(smp_load_acquire(&tfms_inited[mode_num]))) + return 0; + + mutex_lock(&tfms_init_lock); + err = blk_crypto_fallback_init(); + if (err) + goto out; + + if (tfms_inited[mode_num]) + goto out; + + for (i = 0; i < blk_crypto_num_keyslots; i++) { + slotp = &blk_crypto_keyslots[i]; + slotp->tfms[mode_num] = crypto_alloc_skcipher(cipher_str, 0, 0); + if (IS_ERR(slotp->tfms[mode_num])) { + err = PTR_ERR(slotp->tfms[mode_num]); + if (err == -ENOENT) { + pr_warn_once("Missing crypto API support for \"%s\"\n", + cipher_str); + err = -ENOPKG; + } + slotp->tfms[mode_num] = NULL; + goto out_free_tfms; + } + + crypto_skcipher_set_flags(slotp->tfms[mode_num], + CRYPTO_TFM_REQ_FORBID_WEAK_KEYS); + } + + /* + * Ensure that updates to blk_crypto_keyslots[i].tfms[mode_num] + * for each i are visible before we set tfms_inited[mode_num]. + */ + smp_store_release(&tfms_inited[mode_num], true); + goto out; + +out_free_tfms: + for (i = 0; i < blk_crypto_num_keyslots; i++) { + slotp = &blk_crypto_keyslots[i]; + crypto_free_skcipher(slotp->tfms[mode_num]); + slotp->tfms[mode_num] = NULL; + } +out: + mutex_unlock(&tfms_init_lock); + return err; +} diff --git a/block/blk-crypto-internal.h b/block/blk-crypto-internal.h index 3ef0f7495be69..55112752fd6f1 100644 --- a/block/blk-crypto-internal.h +++ b/block/blk-crypto-internal.h @@ -11,10 +11,13 @@ /* Represents a crypto mode supported by blk-crypto */ struct blk_crypto_mode { + const char *cipher_str; /* crypto API name (for fallback case) */ unsigned int keysize; /* key size in bytes */ unsigned int ivsize; /* iv size in bytes */ }; +extern const struct blk_crypto_mode blk_crypto_modes[]; + #ifdef CONFIG_BLK_INLINE_ENCRYPTION void bio_crypt_dun_increment(u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE], @@ -163,4 +166,36 @@ static inline blk_status_t blk_crypto_insert_cloned_request(struct request *rq) return BLK_STS_OK; } +#ifdef CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK + +int blk_crypto_fallback_start_using_mode(enum blk_crypto_mode_num mode_num); + +bool blk_crypto_fallback_bio_prep(struct bio **bio_ptr); + +int blk_crypto_fallback_evict_key(const struct blk_crypto_key *key); + +#else /* CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK */ + +static inline int +blk_crypto_fallback_start_using_mode(enum blk_crypto_mode_num mode_num) +{ + pr_warn_once("crypto API fallback is disabled\n"); + return -ENOPKG; +} + +static inline bool blk_crypto_fallback_bio_prep(struct bio **bio_ptr) +{ + pr_warn_once("crypto API fallback disabled; failing request.\n"); + (*bio_ptr)->bi_status = BLK_STS_NOTSUPP; + return false; +} + +static inline int +blk_crypto_fallback_evict_key(const struct blk_crypto_key *key) +{ + return 0; +} + +#endif /* CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK */ + #endif /* __LINUX_BLK_CRYPTO_INTERNAL_H */ diff --git a/block/blk-crypto.c b/block/blk-crypto.c index ce52bd4223238..c3636f6a63aef 100644 --- a/block/blk-crypto.c +++ b/block/blk-crypto.c @@ -19,14 +19,17 @@ const struct blk_crypto_mode blk_crypto_modes[] = { [BLK_ENCRYPTION_MODE_AES_256_XTS] = { + .cipher_str = "xts(aes)", .keysize = 64, .ivsize = 16, }, [BLK_ENCRYPTION_MODE_AES_128_CBC_ESSIV] = { + .cipher_str = "essiv(cbc(aes),sha256)", .keysize = 16, .ivsize = 16, }, [BLK_ENCRYPTION_MODE_ADIANTUM] = { + .cipher_str = "adiantum(xchacha12,aes)", .keysize = 32, .ivsize = 32, }, @@ -226,9 +229,16 @@ void __blk_crypto_free_request(struct request *rq) * * @bio_ptr: pointer to original bio pointer * - * Succeeds if the bio doesn't have inline encryption enabled or if the bio - * crypt context provided for the bio is 
supported by the underlying device's - * inline encryption hardware. Ends the bio with error otherwise. + * If the bio crypt context provided for the bio is supported by the underlying + * device's inline encryption hardware, do nothing. + * + * Otherwise, try to perform en/decryption for this bio by falling back to the + * kernel crypto API. When the crypto API fallback is used for encryption, + * blk-crypto may choose to split the bio into 2 - the first one that will + * continue to be processed and the second one that will be resubmitted via + * generic_make_request. A bounce bio will be allocated to encrypt the contents + * of the aforementioned "first one", and *bio_ptr will be updated to this + * bounce bio. * * Caller must ensure bio has bio_crypt_ctx. * @@ -240,28 +250,29 @@ bool __blk_crypto_bio_prep(struct bio **bio_ptr) { struct bio *bio = *bio_ptr; const struct blk_crypto_key *bc_key = bio->bi_crypt_context->bc_key; - blk_status_t blk_st = BLK_STS_IOERR; /* Error if bio has no data. */ - if (WARN_ON_ONCE(!bio_has_data(bio))) + if (WARN_ON_ONCE(!bio_has_data(bio))) { + bio->bi_status = BLK_STS_IOERR; goto fail; + } - if (!bio_crypt_check_alignment(bio)) + if (!bio_crypt_check_alignment(bio)) { + bio->bi_status = BLK_STS_IOERR; goto fail; + } /* - * Success if device supports the encryption context, and blk-integrity - * isn't supported by device/is turned off. + * Success if device supports the encryption context, or we succeeded + * in falling back to the crypto API. */ - if (!blk_ksm_crypto_cfg_supported(bio->bi_disk->queue->ksm, - &bc_key->crypto_cfg)) { - blk_st = BLK_STS_NOTSUPP; - goto fail; - } + if (blk_ksm_crypto_cfg_supported(bio->bi_disk->queue->ksm, + &bc_key->crypto_cfg)) + return true; - return true; + if (blk_crypto_fallback_bio_prep(bio_ptr)) + return true; fail: - *bio_ptr->bi_status = blk_st; bio_endio(*bio_ptr); return false; } @@ -324,10 +335,16 @@ int blk_crypto_init_key(struct blk_crypto_key *blk_key, const u8 *raw_key, return 0; } +/* + * Check if bios with @cfg can be en/decrypted by blk-crypto (i.e. either the + * request queue it's submitted to supports inline crypto, or the + * blk-crypto-fallback is enabled and supports the cfg). + */ bool blk_crypto_config_supported(struct request_queue *q, const struct blk_crypto_config *cfg) { - return blk_ksm_crypto_cfg_supported(q->ksm, cfg); + return IS_ENABLED(CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK) || + blk_ksm_crypto_cfg_supported(q->ksm, cfg); } /** @@ -335,17 +352,22 @@ bool blk_crypto_config_supported(struct request_queue *q, * @key: A key to use on the device * @q: the request queue for the device * - * Upper layers must call this function to ensure that the hardware supports - * the key's crypto settings. + * Upper layers must call this function to ensure that either the hardware + * supports the key's crypto settings, or the crypto API fallback has transforms + * for the needed mode allocated and ready to go. This function may allocate + * an skcipher, and *should not* be called from the data path, since that might + * cause a deadlock * - * Return: 0 on success; -ENOPKG if the hardware doesn't support the key + * Return: 0 on success; -ENOPKG if the hardware doesn't support the key and + * blk-crypto-fallback is either disabled or the needed algorithm + * is disabled in the crypto API; or another -errno code. 
*/ int blk_crypto_start_using_key(const struct blk_crypto_key *key, struct request_queue *q) { if (blk_ksm_crypto_cfg_supported(q->ksm, &key->crypto_cfg)) return 0; - return -ENOPKG; + return blk_crypto_fallback_start_using_mode(key->crypto_cfg.crypto_mode); } EXPORT_SYMBOL_GPL(blk_crypto_start_using_key); @@ -357,9 +379,7 @@ EXPORT_SYMBOL_GPL(blk_crypto_start_using_key); * @key: The key to evict * * Upper layers (filesystems) should call this function to ensure that a key - * is evicted from hardware that it might have been programmed into. This - * will call blk_ksm_evict_key on the queue's keyslot manager, if one - * exists, and supports the crypto algorithm with the specified data unit size. + * is evicted from any hardware that it might have been programmed into. * * Return: 0 on success or if key is not present in the q's ksm, -err on error. */ @@ -369,5 +389,10 @@ int blk_crypto_evict_key(struct request_queue *q, if (q->ksm && blk_ksm_crypto_cfg_supported(q->ksm, &key->crypto_cfg)) return blk_ksm_evict_key(q->ksm, key); - return 0; + /* + * If the request queue's associated inline encryption hardware didn't + * have support for the key, then the key might have been programmed + * into the fallback keyslot manager, so try to evict from there. + */ + return blk_crypto_fallback_evict_key(key); } diff --git a/include/linux/blk-crypto.h b/include/linux/blk-crypto.h index af4807a493ca8..d0454fd555450 100644 --- a/include/linux/blk-crypto.h +++ b/include/linux/blk-crypto.h @@ -60,7 +60,7 @@ struct blk_crypto_key { * * A bio_crypt_ctx specifies that the contents of the bio will be encrypted (for * write requests) or decrypted (for read requests) inline by the storage device - * or controller. + * or controller, or by the crypto API fallback. */ struct bio_crypt_ctx { const struct blk_crypto_key *bc_key; From patchwork Wed Apr 29 07:21:15 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 1279040 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-ext4-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=google.com header.i=@google.com header.a=rsa-sha256 header.s=20161025 header.b=h4idqIq3; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 49Bqhc1Vtsz9sSl for ; Wed, 29 Apr 2020 17:22:08 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726705AbgD2HWF (ORCPT ); Wed, 29 Apr 2020 03:22:05 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50872 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726702AbgD2HVj (ORCPT ); Wed, 29 Apr 2020 03:21:39 -0400 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B62A8C0A3BF2 for ; Wed, 29 Apr 2020 00:21:37 -0700 (PDT) Received: by mail-yb1-xb4a.google.com with SMTP id h1so2375685ybk.2 for ; Wed, 29 Apr 2020 00:21:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; 
h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=6QyXfeGvk0WJl1C/wpxTI5v1ePziEcWn5TI+lV37Q2o=; b=h4idqIq3vg3qI3az49fLjU8pyo05Z3rz6LVHeUBJYJ/pjy/GizuukmHRVx3uaa/2Vk x7yxfAP/vf5yT1i34vJUHSgoaWztU2WjO8ky65TvcKJgZSRdqR3v3KUaldz5jhacfzaw p3kKekyoujKMZA2cpcMmlFuEFgqP+zCX1kB8WH0v6h7k5Wb7xWOoyyqmnWxVeRhOQe5q U1H4XiUEc64SVAEu9msI2M9iu3UWT8Kz7qJDJOKDCBHDuYIo/DMxKVyMSgvaCIYHH0Gy d3f4IbWrvDyLxgdKdxcx3jNfC2EtuFLN6JScFKNyd630FOxXODbUXADEC2cTzlFItu69 1J1Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=6QyXfeGvk0WJl1C/wpxTI5v1ePziEcWn5TI+lV37Q2o=; b=Zj5xtDrbxLNhFIKGtpmYk6NFio/uySVo30hnzfUkBHwLMF7zoeLyDLZaivrgcly2ZP hVdFPbEHAuVwcqssSMY7a6YBIvgZaU1AaYTXuP/FSAoikJ/yup1ROhxrqAt/ZZs2kkzQ Vc20vpMOfRbi7HHb1JdC2ZT2VkSUQwjnozBYoC+mL7VRUtw3vsmysqu7+F+9TLv5GuR3 M+q4RC/3O6Rdfq8aNoEtOsZIhsnqxX09e2x+IwWFwzP+RjyPr+ngOK8snBKbU7Q6vOZa xPuabiGCLFPq5RPw3NIE5tm8Q+nX+1fq6Eh+E2pMgGXoA6dKBxd7+1jI6NrixxffYGBC 6zyg== X-Gm-Message-State: AGi0PuYkFHgchIAzA3GEoiudgiFZE6fWgvVOmNsV9pJMSpSj83o0rbus FukYRt8AQzOUqRwWqFO/Ikk8hGD+VjQ= X-Google-Smtp-Source: APiQypKe2bTFtDKbDRK08NQKF+eK3/ALTHM6m1Iw0vYl0YFCsr7p8VUxQwQ8KcqDdfUUtB2kXBOo8fbq+gY= X-Received: by 2002:a25:6b0b:: with SMTP id g11mr49133618ybc.425.1588144896924; Wed, 29 Apr 2020 00:21:36 -0700 (PDT) Date: Wed, 29 Apr 2020 07:21:15 +0000 In-Reply-To: <20200429072121.50094-1-satyat@google.com> Message-Id: <20200429072121.50094-7-satyat@google.com> Mime-Version: 1.0 References: <20200429072121.50094-1-satyat@google.com> X-Mailer: git-send-email 2.26.2.303.gf8c07b1a785-goog Subject: [PATCH v11 06/12] scsi: ufs: UFS driver v2.1 spec crypto additions From: Satya Tangirala To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-ext4@vger.kernel.org Cc: Barani Muthukumaran , Kuohong Wang , Kim Boojin , Satya Tangirala Sender: linux-ext4-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-ext4@vger.kernel.org Add the crypto registers and structs defined in v2.1 of the JEDEC UFSHCI specification in preparation to add support for inline encryption to UFS. Signed-off-by: Satya Tangirala --- drivers/scsi/ufs/ufshcd.c | 2 ++ drivers/scsi/ufs/ufshcd.h | 6 ++++ drivers/scsi/ufs/ufshci.h | 67 +++++++++++++++++++++++++++++++++++++-- 3 files changed, 73 insertions(+), 2 deletions(-) diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c index 698e8d20b4bac..2435c600cb2d9 100644 --- a/drivers/scsi/ufs/ufshcd.c +++ b/drivers/scsi/ufs/ufshcd.c @@ -4767,6 +4767,8 @@ ufshcd_transfer_rsp_status(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) case OCS_MISMATCH_RESP_UPIU_SIZE: case OCS_PEER_COMM_FAILURE: case OCS_FATAL_ERROR: + case OCS_INVALID_CRYPTO_CONFIG: + case OCS_GENERAL_CRYPTO_ERROR: default: result |= DID_ERROR << 16; dev_err(hba->dev, diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h index 6ffc08ad85f63..1eebb589159d6 100644 --- a/drivers/scsi/ufs/ufshcd.h +++ b/drivers/scsi/ufs/ufshcd.h @@ -555,6 +555,12 @@ enum ufshcd_caps { * for userspace to control the power management. 
*/ UFSHCD_CAP_RPM_AUTOSUSPEND = 1 << 6, + + /* + * This capability allows the host controller driver to use the + * inline crypto engine, if it is present + */ + UFSHCD_CAP_CRYPTO = (1 << 7), }; /** diff --git a/drivers/scsi/ufs/ufshci.h b/drivers/scsi/ufs/ufshci.h index c2961d37cc1cf..c0651fe6dbbc6 100644 --- a/drivers/scsi/ufs/ufshci.h +++ b/drivers/scsi/ufs/ufshci.h @@ -90,6 +90,7 @@ enum { MASK_64_ADDRESSING_SUPPORT = 0x01000000, MASK_OUT_OF_ORDER_DATA_DELIVERY_SUPPORT = 0x02000000, MASK_UIC_DME_TEST_MODE_SUPPORT = 0x04000000, + MASK_CRYPTO_SUPPORT = 0x10000000, }; #define UFS_MASK(mask, offset) ((mask) << (offset)) @@ -143,6 +144,7 @@ enum { #define DEVICE_FATAL_ERROR 0x800 #define CONTROLLER_FATAL_ERROR 0x10000 #define SYSTEM_BUS_FATAL_ERROR 0x20000 +#define CRYPTO_ENGINE_FATAL_ERROR 0x40000 #define UFSHCD_UIC_HIBERN8_MASK (UIC_HIBERNATE_ENTER |\ UIC_HIBERNATE_EXIT) @@ -155,11 +157,13 @@ enum { #define UFSHCD_ERROR_MASK (UIC_ERROR |\ DEVICE_FATAL_ERROR |\ CONTROLLER_FATAL_ERROR |\ - SYSTEM_BUS_FATAL_ERROR) + SYSTEM_BUS_FATAL_ERROR |\ + CRYPTO_ENGINE_FATAL_ERROR) #define INT_FATAL_ERRORS (DEVICE_FATAL_ERROR |\ CONTROLLER_FATAL_ERROR |\ - SYSTEM_BUS_FATAL_ERROR) + SYSTEM_BUS_FATAL_ERROR |\ + CRYPTO_ENGINE_FATAL_ERROR) /* HCS - Host Controller Status 30h */ #define DEVICE_PRESENT 0x1 @@ -318,6 +322,61 @@ enum { INTERRUPT_MASK_ALL_VER_21 = 0x71FFF, }; +/* CCAP - Crypto Capability 100h */ +union ufs_crypto_capabilities { + __le32 reg_val; + struct { + u8 num_crypto_cap; + u8 config_count; + u8 reserved; + u8 config_array_ptr; + }; +}; + +enum ufs_crypto_key_size { + UFS_CRYPTO_KEY_SIZE_INVALID = 0x0, + UFS_CRYPTO_KEY_SIZE_128 = 0x1, + UFS_CRYPTO_KEY_SIZE_192 = 0x2, + UFS_CRYPTO_KEY_SIZE_256 = 0x3, + UFS_CRYPTO_KEY_SIZE_512 = 0x4, +}; + +enum ufs_crypto_alg { + UFS_CRYPTO_ALG_AES_XTS = 0x0, + UFS_CRYPTO_ALG_BITLOCKER_AES_CBC = 0x1, + UFS_CRYPTO_ALG_AES_ECB = 0x2, + UFS_CRYPTO_ALG_ESSIV_AES_CBC = 0x3, +}; + +/* x-CRYPTOCAP - Crypto Capability X */ +union ufs_crypto_cap_entry { + __le32 reg_val; + struct { + u8 algorithm_id; + u8 sdus_mask; /* Supported data unit size mask */ + u8 key_size; + u8 reserved; + }; +}; + +#define UFS_CRYPTO_CONFIGURATION_ENABLE (1 << 7) +#define UFS_CRYPTO_KEY_MAX_SIZE 64 +/* x-CRYPTOCFG - Crypto Configuration X */ +union ufs_crypto_cfg_entry { + __le32 reg_val[32]; + struct { + u8 crypto_key[UFS_CRYPTO_KEY_MAX_SIZE]; + u8 data_unit_size; + u8 crypto_cap_idx; + u8 reserved_1; + u8 config_enable; + u8 reserved_multi_host; + u8 reserved_2; + u8 vsb[2]; + u8 reserved_3[56]; + }; +}; + /* * Request Descriptor Definitions */ @@ -339,6 +398,7 @@ enum { UTP_NATIVE_UFS_COMMAND = 0x10000000, UTP_DEVICE_MANAGEMENT_FUNCTION = 0x20000000, UTP_REQ_DESC_INT_CMD = 0x01000000, + UTP_REQ_DESC_CRYPTO_ENABLE_CMD = 0x00800000, }; /* UTP Transfer Request Data Direction (DD) */ @@ -358,6 +418,9 @@ enum { OCS_PEER_COMM_FAILURE = 0x5, OCS_ABORTED = 0x6, OCS_FATAL_ERROR = 0x7, + OCS_DEVICE_FATAL_ERROR = 0x8, + OCS_INVALID_CRYPTO_CONFIG = 0x9, + OCS_GENERAL_CRYPTO_ERROR = 0xA, OCS_INVALID_COMMAND_STATUS = 0x0F, MASK_OCS = 0x0F, }; From patchwork Wed Apr 29 07:21:16 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 1279033 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; 
envelope-from=linux-ext4-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=google.com header.i=@google.com header.a=rsa-sha256 header.s=20161025 header.b=F9g7etyW; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 49Bqh95835z9sSy for ; Wed, 29 Apr 2020 17:21:45 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726764AbgD2HVo (ORCPT ); Wed, 29 Apr 2020 03:21:44 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50840 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726635AbgD2HVk (ORCPT ); Wed, 29 Apr 2020 03:21:40 -0400 Received: from mail-yb1-xb49.google.com (mail-yb1-xb49.google.com [IPv6:2607:f8b0:4864:20::b49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 34E7DC035493 for ; Wed, 29 Apr 2020 00:21:39 -0700 (PDT) Received: by mail-yb1-xb49.google.com with SMTP id i13so2300678ybl.13 for ; Wed, 29 Apr 2020 00:21:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=0aiZfOCHE6WFReYT1oUcAoa6v5nxJGKFBvTRimyUlpA=; b=F9g7etyWKcnYSfa5+MssIFrLBLfvTmWZfyp23bJ3zKt9wP5+tUBEPXwlLgrlvbDh9g aZpwzjxYxmgWxQZfFJ6snMUmz7KAFprTuMgg6Mqyf0zox69OzLo4sqT86Sq7yvRMpdqc p9p7W2/BQ8YQG4N6rBTjbExB2LWSlqKC8mX+YJvEioUryv7fgz9btDbWkO3elCbk5vEV Jey6zuEYJUpwCTW3WYN+fa9xni6xpfX2WsoUVAJMH/lPMFT9Eih86RknpWxJgnujD8Hf Abrkq8VAlGBM/3TqGB6jCQuwgj6c1BIZcTnX0TPVVFqPdOXuQKLMvgzh2wso5gK3cM5U 93vA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=0aiZfOCHE6WFReYT1oUcAoa6v5nxJGKFBvTRimyUlpA=; b=jdU2fo4W4D/Ga34wiH1B7qYwzwXXr6ekkx98Grtlns8wIbYnv4/ZWp4B4ndr6ZYJsI b0fm3xNbSLq7c1UJ6CPggqz3n+37mKDtv9ZnUNFsB2XAGuhZNrt1KpYblEyXdKL5RNHH /LDhKE0tHOHw4m8SbztC3NxrNmus/JCJ30OCHsrc40YnxGUA/0obzK8jeMFDbiusAARF 2dNwDDWFMnQub2HW0bCPEEqH+FXG6SSydwHIJsW4lZY8hE5Hy6xqZWakUBPYwSpX3D33 Z1CfOw6AUPrVU1PXUlAnP0u4GX3/sQBLbTR8eWT0bFiVwAePLLZWBmKBUIE/l4dJbj8b +IrQ== X-Gm-Message-State: AGi0PubAYhHlsrCSqWmQ1YT4WW9jZY0C5JPiG10aJr+It7HRmFAvTzAN an5gNdpVVONustUVOCnYzvTmjlm8+fQ= X-Google-Smtp-Source: APiQypLMYFyPMjv1BXn2gt0L7HHuOrrHS3iaDweQ9es8nvWpOq/Kl0ruDuLqR2E/Jl7Gst1LkLEqRoqQjuM= X-Received: by 2002:a5b:b83:: with SMTP id l3mr15860644ybq.154.1588144898446; Wed, 29 Apr 2020 00:21:38 -0700 (PDT) Date: Wed, 29 Apr 2020 07:21:16 +0000 In-Reply-To: <20200429072121.50094-1-satyat@google.com> Message-Id: <20200429072121.50094-8-satyat@google.com> Mime-Version: 1.0 References: <20200429072121.50094-1-satyat@google.com> X-Mailer: git-send-email 2.26.2.303.gf8c07b1a785-goog Subject: [PATCH v11 07/12] scsi: ufs: UFS crypto API From: Satya Tangirala To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-ext4@vger.kernel.org Cc: Barani Muthukumaran , Kuohong Wang , Kim Boojin , Satya Tangirala Sender: linux-ext4-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-ext4@vger.kernel.org Introduce functions to manipulate UFS inline encryption hardware in line with the JEDEC UFSHCI v2.1 specification and to work with the block keyslot manager. 
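For orientation, the keyslot-manager side of this boils down to providing a struct blk_ksm_ll_ops with keyslot_program/keyslot_evict callbacks and initializing the hba's keyslot manager with the controller's keyslot count. A condensed, illustrative sketch of that wiring follows (not part of this patch: my_program_key(), my_evict_key() and my_init_crypto() are made-up stand-ins for the real ufshcd functions, and error handling plus the actual register programming are elided):

    #include "ufshcd.h"	/* as in ufshcd-crypto.c; provides struct ufs_hba and its ksm */

    /* Stand-ins for the real ufshcd callbacks; register programming elided. */
    static int my_program_key(struct blk_keyslot_manager *ksm,
                              const struct blk_crypto_key *key,
                              unsigned int slot)
    {
            /* Write key->raw into the crypto configuration entry for @slot. */
            return 0;
    }

    static int my_evict_key(struct blk_keyslot_manager *ksm,
                            const struct blk_crypto_key *key,
                            unsigned int slot)
    {
            /* Zero out the crypto configuration entry for @slot. */
            return 0;
    }

    static const struct blk_ksm_ll_ops my_ksm_ops = {
            .keyslot_program = my_program_key,
            .keyslot_evict   = my_evict_key,
    };

    static int my_init_crypto(struct ufs_hba *hba, unsigned int num_keyslots)
    {
            int err = blk_ksm_init(&hba->ksm, num_keyslots);

            if (err)
                    return err;
            hba->ksm.ksm_ll_ops = my_ksm_ops;
            hba->ksm.max_dun_bytes_supported = 8;	/* UFS DUNs are 64-bit */
            hba->ksm.dev = hba->dev;
            return 0;
    }

A keyslot manager initialized this way is later registered on each SCSI device's request_queue via blk_ksm_register() (wired up from ->slave_configure() by a later patch in this series), so the block layer can map bios carrying a bio_crypt_ctx onto a programmed keyslot.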
The UFS crypto API will assume by default that a vendor driver doesn't support UFS crypto, even if the hardware advertises the capability, because a lot of hardware requires some special handling that's not specified in the aforementioned JEDEC spec. Each vendor driver must explicitly set hba->caps |= UFSHCD_CAP_CRYPTO before ufshcd_hba_init_crypto is called to opt in to UFS crypto support. Signed-off-by: Satya Tangirala --- drivers/scsi/ufs/Kconfig | 9 ++ drivers/scsi/ufs/Makefile | 1 + drivers/scsi/ufs/ufshcd-crypto.c | 226 +++++++++++++++++++++++++++++++ drivers/scsi/ufs/ufshcd-crypto.h | 42 ++++++ drivers/scsi/ufs/ufshcd.h | 12 ++ 5 files changed, 290 insertions(+) create mode 100644 drivers/scsi/ufs/ufshcd-crypto.c create mode 100644 drivers/scsi/ufs/ufshcd-crypto.h diff --git a/drivers/scsi/ufs/Kconfig b/drivers/scsi/ufs/Kconfig index e2005aeddc2db..5ed3f209f8810 100644 --- a/drivers/scsi/ufs/Kconfig +++ b/drivers/scsi/ufs/Kconfig @@ -160,3 +160,12 @@ config SCSI_UFS_BSG Select this if you need a bsg device node for your UFS controller. If unsure, say N. + +config SCSI_UFS_CRYPTO + bool "UFS Crypto Engine Support" + depends on SCSI_UFSHCD && BLK_INLINE_ENCRYPTION + help + Enable Crypto Engine Support in UFS. + Enabling this makes it possible for the kernel to use the crypto + capabilities of the UFS device (if present) to perform crypto + operations on data being transferred to/from the device. diff --git a/drivers/scsi/ufs/Makefile b/drivers/scsi/ufs/Makefile index 94c6c5d7334b6..197e178f44bce 100644 --- a/drivers/scsi/ufs/Makefile +++ b/drivers/scsi/ufs/Makefile @@ -7,6 +7,7 @@ obj-$(CONFIG_SCSI_UFS_QCOM) += ufs-qcom.o obj-$(CONFIG_SCSI_UFSHCD) += ufshcd-core.o ufshcd-core-y += ufshcd.o ufs-sysfs.o ufshcd-core-$(CONFIG_SCSI_UFS_BSG) += ufs_bsg.o +ufshcd-core-$(CONFIG_SCSI_UFS_CRYPTO) += ufshcd-crypto.o obj-$(CONFIG_SCSI_UFSHCD_PCI) += ufshcd-pci.o obj-$(CONFIG_SCSI_UFSHCD_PLATFORM) += ufshcd-pltfrm.o obj-$(CONFIG_SCSI_UFS_HISI) += ufs-hisi.o diff --git a/drivers/scsi/ufs/ufshcd-crypto.c b/drivers/scsi/ufs/ufshcd-crypto.c new file mode 100644 index 0000000000000..65a3115d2a2d4 --- /dev/null +++ b/drivers/scsi/ufs/ufshcd-crypto.c @@ -0,0 +1,226 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright 2019 Google LLC + */ + +#include "ufshcd.h" +#include "ufshcd-crypto.h" + +/* Blk-crypto modes supported by UFS crypto */ +static const struct ufs_crypto_alg_entry { + enum ufs_crypto_alg ufs_alg; + enum ufs_crypto_key_size ufs_key_size; +} ufs_crypto_algs[BLK_ENCRYPTION_MODE_MAX] = { + [BLK_ENCRYPTION_MODE_AES_256_XTS] = { + .ufs_alg = UFS_CRYPTO_ALG_AES_XTS, + .ufs_key_size = UFS_CRYPTO_KEY_SIZE_256, + }, +}; + +static void ufshcd_program_key(struct ufs_hba *hba, + const union ufs_crypto_cfg_entry *cfg, + int slot) +{ + int i; + u32 slot_offset = hba->crypto_cfg_register + slot * sizeof(*cfg); + + ufshcd_hold(hba, false); + /* Ensure that CFGE is cleared before programming the key */ + ufshcd_writel(hba, 0, slot_offset + 16 * sizeof(cfg->reg_val[0])); + for (i = 0; i < 16; i++) { + ufshcd_writel(hba, le32_to_cpu(cfg->reg_val[i]), + slot_offset + i * sizeof(cfg->reg_val[0])); + } + /* Write dword 17 */ + ufshcd_writel(hba, le32_to_cpu(cfg->reg_val[17]), + slot_offset + 17 * sizeof(cfg->reg_val[0])); + /* Dword 16 must be written last */ + ufshcd_writel(hba, le32_to_cpu(cfg->reg_val[16]), + slot_offset + 16 * sizeof(cfg->reg_val[0])); + ufshcd_release(hba); +} + +static int ufshcd_crypto_keyslot_program(struct blk_keyslot_manager *ksm, + const struct blk_crypto_key *key, + unsigned int
slot) +{ + struct ufs_hba *hba = container_of(ksm, struct ufs_hba, ksm); + const union ufs_crypto_cap_entry *ccap_array = hba->crypto_cap_array; + const struct ufs_crypto_alg_entry *alg = + &ufs_crypto_algs[key->crypto_cfg.crypto_mode]; + u8 data_unit_mask = key->crypto_cfg.data_unit_size / 512; + int i; + int cap_idx = -1; + union ufs_crypto_cfg_entry cfg = { 0 }; + + BUILD_BUG_ON(UFS_CRYPTO_KEY_SIZE_INVALID != 0); + for (i = 0; i < hba->crypto_capabilities.num_crypto_cap; i++) { + if (ccap_array[i].algorithm_id == alg->ufs_alg && + ccap_array[i].key_size == alg->ufs_key_size && + (ccap_array[i].sdus_mask & data_unit_mask)) { + cap_idx = i; + break; + } + } + + if (WARN_ON(cap_idx < 0)) + return -EOPNOTSUPP; + + cfg.data_unit_size = data_unit_mask; + cfg.crypto_cap_idx = cap_idx; + cfg.config_enable = UFS_CRYPTO_CONFIGURATION_ENABLE; + + if (ccap_array[cap_idx].algorithm_id == UFS_CRYPTO_ALG_AES_XTS) { + /* In XTS mode, the blk_crypto_key's size is already doubled */ + memcpy(cfg.crypto_key, key->raw, key->size/2); + memcpy(cfg.crypto_key + UFS_CRYPTO_KEY_MAX_SIZE/2, + key->raw + key->size/2, key->size/2); + } else { + memcpy(cfg.crypto_key, key->raw, key->size); + } + + ufshcd_program_key(hba, &cfg, slot); + + memzero_explicit(&cfg, sizeof(cfg)); + return 0; +} + +static void ufshcd_clear_keyslot(struct ufs_hba *hba, int slot) +{ + /* + * Clear the crypto cfg on the device. Clearing CFGE + * might not be sufficient, so just clear the entire cfg. + */ + union ufs_crypto_cfg_entry cfg = { 0 }; + + ufshcd_program_key(hba, &cfg, slot); +} + +static int ufshcd_crypto_keyslot_evict(struct blk_keyslot_manager *ksm, + const struct blk_crypto_key *key, + unsigned int slot) +{ + struct ufs_hba *hba = container_of(ksm, struct ufs_hba, ksm); + + ufshcd_clear_keyslot(hba, slot); + + return 0; +} + +bool ufshcd_crypto_enable(struct ufs_hba *hba) +{ + if (!(hba->caps & UFSHCD_CAP_CRYPTO)) + return false; + + /* Reset might clear all keys, so reprogram all the keys. */ + blk_ksm_reprogram_all_keys(&hba->ksm); + return true; +} + +static const struct blk_ksm_ll_ops ufshcd_ksm_ops = { + .keyslot_program = ufshcd_crypto_keyslot_program, + .keyslot_evict = ufshcd_crypto_keyslot_evict, +}; + +static enum blk_crypto_mode_num +ufshcd_find_blk_crypto_mode(union ufs_crypto_cap_entry cap) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(ufs_crypto_algs); i++) { + BUILD_BUG_ON(UFS_CRYPTO_KEY_SIZE_INVALID != 0); + if (ufs_crypto_algs[i].ufs_alg == cap.algorithm_id && + ufs_crypto_algs[i].ufs_key_size == cap.key_size) { + return i; + } + } + return BLK_ENCRYPTION_MODE_INVALID; +} + +/** + * ufshcd_hba_init_crypto - Read crypto capabilities, init crypto fields in hba + * @hba: Per adapter instance + * + * Return: 0 if crypto was initialized or is not supported, else a -errno value. + */ +int ufshcd_hba_init_crypto(struct ufs_hba *hba) +{ + int cap_idx = 0; + int err = 0; + enum blk_crypto_mode_num blk_mode_num; + int slot = 0; + int num_keyslots; + + /* + * Don't use crypto if either the hardware doesn't advertise the + * standard crypto capability bit *or* if the vendor specific driver + * hasn't advertised that crypto is supported. 
+ */ + if (!(hba->capabilities & MASK_CRYPTO_SUPPORT) || + !(hba->caps & UFSHCD_CAP_CRYPTO)) + goto out; + + hba->crypto_capabilities.reg_val = + cpu_to_le32(ufshcd_readl(hba, REG_UFS_CCAP)); + hba->crypto_cfg_register = + (u32)hba->crypto_capabilities.config_array_ptr * 0x100; + hba->crypto_cap_array = + devm_kcalloc(hba->dev, hba->crypto_capabilities.num_crypto_cap, + sizeof(hba->crypto_cap_array[0]), GFP_KERNEL); + if (!hba->crypto_cap_array) { + err = -ENOMEM; + goto out; + } + + /* The actual number of configurations supported is (CFGC+1) */ + num_keyslots = hba->crypto_capabilities.config_count + 1; + err = blk_ksm_init(&hba->ksm, num_keyslots); + if (err) + goto out_free_caps; + + hba->ksm.ksm_ll_ops = ufshcd_ksm_ops; + /* UFS only supports 8 bytes for any DUN */ + hba->ksm.max_dun_bytes_supported = 8; + hba->ksm.dev = hba->dev; + + /* + * Cache all the UFS crypto capabilities and advertise the supported + * crypto modes and data unit sizes to the block layer. + */ + for (cap_idx = 0; cap_idx < hba->crypto_capabilities.num_crypto_cap; + cap_idx++) { + hba->crypto_cap_array[cap_idx].reg_val = + cpu_to_le32(ufshcd_readl(hba, + REG_UFS_CRYPTOCAP + + cap_idx * sizeof(__le32))); + blk_mode_num = ufshcd_find_blk_crypto_mode( + hba->crypto_cap_array[cap_idx]); + if (blk_mode_num != BLK_ENCRYPTION_MODE_INVALID) + hba->ksm.crypto_modes_supported[blk_mode_num] |= + hba->crypto_cap_array[cap_idx].sdus_mask * 512; + } + + for (slot = 0; slot < num_keyslots; slot++) + ufshcd_clear_keyslot(hba, slot); + + return 0; + +out_free_caps: + devm_kfree(hba->dev, hba->crypto_cap_array); +out: + /* Indicate that init failed by clearing UFSHCD_CAP_CRYPTO */ + hba->caps &= ~UFSHCD_CAP_CRYPTO; + return err; +} + +void ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba, + struct request_queue *q) +{ + if (hba->caps & UFSHCD_CAP_CRYPTO) + blk_ksm_register(&hba->ksm, q); +} + +void ufshcd_crypto_destroy_keyslot_manager(struct ufs_hba *hba) +{ + blk_ksm_destroy(&hba->ksm); +} diff --git a/drivers/scsi/ufs/ufshcd-crypto.h b/drivers/scsi/ufs/ufshcd-crypto.h new file mode 100644 index 0000000000000..22677619de595 --- /dev/null +++ b/drivers/scsi/ufs/ufshcd-crypto.h @@ -0,0 +1,42 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright 2019 Google LLC + */ + +#ifndef _UFSHCD_CRYPTO_H +#define _UFSHCD_CRYPTO_H + +#ifdef CONFIG_SCSI_UFS_CRYPTO +#include "ufshcd.h" +#include "ufshci.h" + +bool ufshcd_crypto_enable(struct ufs_hba *hba); + +int ufshcd_hba_init_crypto(struct ufs_hba *hba); + +void ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba, + struct request_queue *q); + +void ufshcd_crypto_destroy_keyslot_manager(struct ufs_hba *hba); + +#else /* CONFIG_SCSI_UFS_CRYPTO */ + +static inline bool ufshcd_crypto_enable(struct ufs_hba *hba) +{ + return false; +} + +static inline int ufshcd_hba_init_crypto(struct ufs_hba *hba) +{ + return 0; +} + +static inline void ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba, + struct request_queue *q) { } + +static inline void ufshcd_crypto_destroy_keyslot_manager(struct ufs_hba *hba) +{ } + +#endif /* CONFIG_SCSI_UFS_CRYPTO */ + +#endif /* _UFSHCD_CRYPTO_H */ diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h index 1eebb589159d6..e8f3127276abc 100644 --- a/drivers/scsi/ufs/ufshcd.h +++ b/drivers/scsi/ufs/ufshcd.h @@ -57,6 +57,7 @@ #include #include #include +#include #include "unipro.h" #include @@ -614,6 +615,10 @@ enum ufshcd_caps { * @is_urgent_bkops_lvl_checked: keeps track if the urgent bkops level for * device is known or not. 
* @scsi_block_reqs_cnt: reference counting for scsi block requests + * @crypto_capabilities: Content of crypto capabilities register (0x100) + * @crypto_cap_array: Array of crypto capabilities + * @crypto_cfg_register: Start of the crypto cfg array + * @ksm: the keyslot manager tied to this hba */ struct ufs_hba { void __iomem *mmio_base; @@ -733,6 +738,13 @@ struct ufs_hba { struct device bsg_dev; struct request_queue *bsg_queue; + +#ifdef CONFIG_SCSI_UFS_CRYPTO + union ufs_crypto_capabilities crypto_capabilities; + union ufs_crypto_cap_entry *crypto_cap_array; + u32 crypto_cfg_register; + struct blk_keyslot_manager ksm; +#endif }; /* Returns true if clocks can be gated. Otherwise false */ From patchwork Wed Apr 29 07:21:17 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 1279034 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-ext4-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=google.com header.i=@google.com header.a=rsa-sha256 header.s=20161025 header.b=sa/uVtP6; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 49BqhC6S2Nz9sSy for ; Wed, 29 Apr 2020 17:21:47 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726750AbgD2HVq (ORCPT ); Wed, 29 Apr 2020 03:21:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50848 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726746AbgD2HVk (ORCPT ); Wed, 29 Apr 2020 03:21:40 -0400 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B87A2C08E859 for ; Wed, 29 Apr 2020 00:21:40 -0700 (PDT) Received: by mail-yb1-xb4a.google.com with SMTP id n205so2305812ybf.14 for ; Wed, 29 Apr 2020 00:21:40 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=gR1IO3YbwQNLcwOSEXpn5CarIY21aQbiMsndVPuVBuI=; b=sa/uVtP6N+f3jtazo0nv4GY6GzcT/XEOc/MtAiR8WdeFXQkV9vNFYWKCkeKp/VtD+r yNpusBHv+q5NED+7F8JrXQ6YEEFlUm55sdFuMmdR4ooasDMnkN422eGErwZI8DPTPdeF 2iLx0XKxaCdUlAkkFla5Ea95QZh0+toi5M4UfyTyIX7O++P8Wsr2HU4Fnw6deO/lyyJg ByHtgbmmLJcSf60AokTVgZtND02DAj/SHkRWnp3aE8fPqGKojQJPf7VV8vMyVVnRqQTE e4ffJ7UglGDigLd+2eeQMpUU+9tVTXV0Z6qMmiijylZcvWx4ivpD/x0vTJgWCX6+Qd1e rf7w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=gR1IO3YbwQNLcwOSEXpn5CarIY21aQbiMsndVPuVBuI=; b=pmrhlXXi306zhMEiZOw5Ouhg4M+Z8+ykzZAMGywQWJzjxbFlTHI/z6dKWOxaU4rIJQ bUvJdJs8K2jl6ZcXG5YfTkU/5lAS3o3sd508XGjNm798cNA5dcDBIR0pMmWtg8x+CmDe kSCGVmMLsbYiswSDwgFKrP7GtkQ46Cs9X7B+tUXkx8KSYd9be3dkSVQIYrn6WGhLJsxR hUHrtg18Hsp/fYL0dLC+VmGUrh5g77wp8E3jOZiJDX8pVbycivKNXEIyjHVt9Dtrtwrz pYobxEkuHA7r0WVPYxNxiC5fC4D9TavTX8AzbKBwJIrknx41UNb8es554GXnAOlzxqh+ 5+yw== X-Gm-Message-State: AGi0PuZ7UScIhQ4+wUDbX5q+OaQizA7uEtUu8iY2CqFAv1u0XbJ7O38u 
EjjWK/FVdRYxoKbb/eBUFj5tgQO4bDA= X-Google-Smtp-Source: APiQypLyIkVB8gxIb/Imtuq+2nRldlZjx7cBD5aCdAQoHD1CmhZyFlpk7+Lx2ABJWcNCaifuy+Gj20+un8A= X-Received: by 2002:a25:7a81:: with SMTP id v123mr5347207ybc.138.1588144899914; Wed, 29 Apr 2020 00:21:39 -0700 (PDT) Date: Wed, 29 Apr 2020 07:21:17 +0000 In-Reply-To: <20200429072121.50094-1-satyat@google.com> Message-Id: <20200429072121.50094-9-satyat@google.com> Mime-Version: 1.0 References: <20200429072121.50094-1-satyat@google.com> X-Mailer: git-send-email 2.26.2.303.gf8c07b1a785-goog Subject: [PATCH v11 08/12] scsi: ufs: Add inline encryption support to UFS From: Satya Tangirala To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-ext4@vger.kernel.org Cc: Barani Muthukumaran , Kuohong Wang , Kim Boojin , Satya Tangirala Sender: linux-ext4-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-ext4@vger.kernel.org Wire up ufshcd.c with the UFS Crypto API, the block layer inline encryption additions and the keyslot manager. Signed-off-by: Satya Tangirala --- drivers/scsi/ufs/ufshcd-crypto.h | 18 +++++++++++++ drivers/scsi/ufs/ufshcd.c | 44 ++++++++++++++++++++++++++++---- drivers/scsi/ufs/ufshcd.h | 6 +++++ 3 files changed, 63 insertions(+), 5 deletions(-) diff --git a/drivers/scsi/ufs/ufshcd-crypto.h b/drivers/scsi/ufs/ufshcd-crypto.h index 22677619de595..9578edb63e7b4 100644 --- a/drivers/scsi/ufs/ufshcd-crypto.h +++ b/drivers/scsi/ufs/ufshcd-crypto.h @@ -10,6 +10,20 @@ #include "ufshcd.h" #include "ufshci.h" +static inline void ufshcd_prepare_lrbp_crypto(struct ufs_hba *hba, + struct scsi_cmnd *cmd, + struct ufshcd_lrb *lrbp) +{ + struct request *rq = cmd->request; + + if (rq->crypt_keyslot) { + lrbp->crypto_key_slot = blk_ksm_get_slot_idx(rq->crypt_keyslot); + lrbp->data_unit_num = rq->crypt_ctx->bc_dun[0]; + } else { + lrbp->crypto_key_slot = -1; + } +} + bool ufshcd_crypto_enable(struct ufs_hba *hba); int ufshcd_hba_init_crypto(struct ufs_hba *hba); @@ -21,6 +35,10 @@ void ufshcd_crypto_destroy_keyslot_manager(struct ufs_hba *hba); #else /* CONFIG_SCSI_UFS_CRYPTO */ +static inline void ufshcd_prepare_lrbp_crypto(struct ufs_hba *hba, + struct scsi_cmnd *cmd, + struct ufshcd_lrb *lrbp) { } + static inline bool ufshcd_crypto_enable(struct ufs_hba *hba) { return false; diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c index 2435c600cb2d9..041c0dd09ba5d 100644 --- a/drivers/scsi/ufs/ufshcd.c +++ b/drivers/scsi/ufs/ufshcd.c @@ -48,6 +48,7 @@ #include "unipro.h" #include "ufs-sysfs.h" #include "ufs_bsg.h" +#include "ufshcd-crypto.h" #define CREATE_TRACE_POINTS #include @@ -812,7 +813,12 @@ static void ufshcd_enable_run_stop_reg(struct ufs_hba *hba) */ static inline void ufshcd_hba_start(struct ufs_hba *hba) { - ufshcd_writel(hba, CONTROLLER_ENABLE, REG_CONTROLLER_ENABLE); + u32 val = CONTROLLER_ENABLE; + + if (ufshcd_crypto_enable(hba)) + val |= CRYPTO_GENERAL_ENABLE; + + ufshcd_writel(hba, val, REG_CONTROLLER_ENABLE); } /** @@ -2220,6 +2226,8 @@ static void ufshcd_prepare_req_desc_hdr(struct ufshcd_lrb *lrbp, struct utp_transfer_req_desc *req_desc = lrbp->utr_descriptor_ptr; u32 data_direction; u32 dword_0; + u32 dword_1 = 0; + u32 dword_3 = 0; if (cmd_dir == DMA_FROM_DEVICE) { data_direction = UTP_DEVICE_TO_HOST; @@ -2238,9 +2246,17 @@ static void ufshcd_prepare_req_desc_hdr(struct ufshcd_lrb *lrbp, dword_0 |= UTP_REQ_DESC_INT_CMD; /* Transfer request descriptor header fields */ +#ifdef 
CONFIG_SCSI_UFS_CRYPTO + if (lrbp->crypto_key_slot >= 0) { + dword_0 |= UTP_REQ_DESC_CRYPTO_ENABLE_CMD; + dword_0 |= lrbp->crypto_key_slot; + dword_1 = lower_32_bits(lrbp->data_unit_num); + dword_3 = upper_32_bits(lrbp->data_unit_num); + } +#endif /* CONFIG_SCSI_UFS_CRYPTO */ + req_desc->header.dword_0 = cpu_to_le32(dword_0); - /* dword_1 is reserved, hence it is set to 0 */ - req_desc->header.dword_1 = 0; + req_desc->header.dword_1 = cpu_to_le32(dword_1); /* * assigning invalid value for command status. Controller * updates OCS on command completion, with the command @@ -2248,8 +2264,7 @@ static void ufshcd_prepare_req_desc_hdr(struct ufshcd_lrb *lrbp, */ req_desc->header.dword_2 = cpu_to_le32(OCS_INVALID_COMMAND_STATUS); - /* dword_3 is reserved, hence it is set to 0 */ - req_desc->header.dword_3 = 0; + req_desc->header.dword_3 = cpu_to_le32(dword_3); req_desc->prd_table_length = 0; } @@ -2504,6 +2519,9 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd) lrbp->task_tag = tag; lrbp->lun = ufshcd_scsi_to_upiu_lun(cmd->device->lun); lrbp->intr_cmd = !ufshcd_is_intr_aggr_allowed(hba) ? true : false; + + ufshcd_prepare_lrbp_crypto(hba, cmd, lrbp); + lrbp->req_abort_skip = false; ufshcd_comp_scsi_upiu(hba, lrbp); @@ -2537,6 +2555,9 @@ static int ufshcd_compose_dev_cmd(struct ufs_hba *hba, lrbp->task_tag = tag; lrbp->lun = 0; /* device management cmd is not specific to any LUN */ lrbp->intr_cmd = true; /* No interrupt aggregation */ +#ifdef CONFIG_SCSI_UFS_CRYPTO + lrbp->crypto_key_slot = -1; /* No crypto operations */ +#endif hba->dev_cmd.type = cmd_type; return ufshcd_comp_devman_upiu(hba, lrbp); @@ -4625,6 +4646,8 @@ static int ufshcd_slave_configure(struct scsi_device *sdev) if (ufshcd_is_rpm_autosuspend_allowed(hba)) sdev->rpm_autosuspend = 1; + ufshcd_crypto_setup_rq_keyslot_manager(hba, q); + return 0; } @@ -5905,6 +5928,9 @@ static int ufshcd_issue_devman_upiu_cmd(struct ufs_hba *hba, lrbp->task_tag = tag; lrbp->lun = 0; lrbp->intr_cmd = true; +#ifdef CONFIG_SCSI_UFS_CRYPTO + lrbp->crypto_key_slot = -1; /* No crypto operations */ +#endif hba->dev_cmd.type = cmd_type; switch (hba->ufs_version) { @@ -8331,6 +8357,7 @@ EXPORT_SYMBOL_GPL(ufshcd_remove); */ void ufshcd_dealloc_host(struct ufs_hba *hba) { + ufshcd_crypto_destroy_keyslot_manager(hba); scsi_host_put(hba->host); } EXPORT_SYMBOL_GPL(ufshcd_dealloc_host); @@ -8541,6 +8568,13 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq) /* Reset the attached device */ ufshcd_vops_device_reset(hba); + /* Init crypto */ + err = ufshcd_hba_init_crypto(hba); + if (err) { + dev_err(hba->dev, "crypto setup failed\n"); + goto out_remove_scsi_host; + } + /* Host controller enable */ err = ufshcd_hba_enable(hba); if (err) { diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h index e8f3127276abc..8de208b74f95f 100644 --- a/drivers/scsi/ufs/ufshcd.h +++ b/drivers/scsi/ufs/ufshcd.h @@ -183,6 +183,8 @@ struct ufs_pm_lvl_states { * @intr_cmd: Interrupt command (doesn't participate in interrupt aggregation) * @issue_time_stamp: time stamp for debug purposes * @compl_time_stamp: time stamp for statistics + * @crypto_key_slot: the key slot to use for inline crypto (-1 if none) + * @data_unit_num: the data unit number for the first block for inline crypto * @req_abort_skip: skip request abort task flag */ struct ufshcd_lrb { @@ -207,6 +209,10 @@ struct ufshcd_lrb { bool intr_cmd; ktime_t issue_time_stamp; ktime_t compl_time_stamp; +#ifdef CONFIG_SCSI_UFS_CRYPTO + int crypto_key_slot; + 
u64 data_unit_num; +#endif bool req_abort_skip; }; From patchwork Wed Apr 29 07:21:18 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 1279037 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-ext4-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=google.com header.i=@google.com header.a=rsa-sha256 header.s=20161025 header.b=dfrGTfJj; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 49BqhR6CZlz9sTH for ; Wed, 29 Apr 2020 17:21:59 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726766AbgD2HV6 (ORCPT ); Wed, 29 Apr 2020 03:21:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50820 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726742AbgD2HVp (ORCPT ); Wed, 29 Apr 2020 03:21:45 -0400 Received: from mail-qk1-x749.google.com (mail-qk1-x749.google.com [IPv6:2607:f8b0:4864:20::749]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 57523C09B050 for ; Wed, 29 Apr 2020 00:21:42 -0700 (PDT) Received: by mail-qk1-x749.google.com with SMTP id f132so1652066qke.11 for ; Wed, 29 Apr 2020 00:21:42 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=dcBMNhJyVF3JG7u2k+hpIfX5J5/+YNOMhft7o2B+Ju4=; b=dfrGTfJjx+xsOcKHeejLK4oMt9g5AcqcRK5/UgBb14pn/JbWFb1vMwUyV+V1slwjtj 2nFObH6Yei1hbvG9BW2Ll9vbpOcTaaQWiATCvTV1B/VYCTXeUPCW+hLpf049cVmCdboI rHS6z3TM/IJRit3QH4E1ap9PyEFxHqulYCq7YE0up+baJBD6tgxW4NrGKOf9TJ+F3+Km QcXzGA2M5GB6W6B2ghC6wC+zS04Bwtty77EltXC43D73oIukh3a67IIcPcLbMepNDXfr 2Xrzb2Hp+Ov7eBjxW/m2vgPs2VZpLO9whdG848N9ksL78T4b8vthHOeR4v9S1c4cj0LF qOJg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=dcBMNhJyVF3JG7u2k+hpIfX5J5/+YNOMhft7o2B+Ju4=; b=D/uNH8Rl+uUolgY+wqvzFH2yZWynEUz7oc8c1sDmHTRFb9ViGtPsUXc9Je63+upvNb 21S9u/0ilnzRco0ha7ht15M0d1HC9VByb+C9zPzNgpL3yDMJC2qSBD7zlVjzl29R6qAh v8JtB08Ld88c8TD0Nn+hbHsTGLIoOBdn4gd4WjJPkc3Imo0DgXmu4/dDBWMmnfvh/0cm RoQ1lL6m9T5xgJpZkTz/JraiFHHFXT1XEN/vk8tsDkpclRXLW9ZQI+zIcLh89/OE2pt7 ez/4Ve94qc9UamtUKw8SWtxEsdMYkuPFcFmu/rh4h+KSLVw8SJrc+MFAZdaW5byFdDD5 SR3w== X-Gm-Message-State: AGi0PuYfGJMb5th3WMTlfZagyAO2Dm3V2IEbYlUSjL+gJ37ojbm+mwYK iI9Ox9l8BHbJWzvIrDxK/o1usRy21Ss= X-Google-Smtp-Source: APiQypJz89cxZWdYzpcW5zsQCLnnJh9hiKeWJOQg2VT8hOJvTQfkY3cXLSwBZXj2P4l7aalg3AeJ9YcyX18= X-Received: by 2002:ad4:55e7:: with SMTP id bu7mr32187009qvb.88.1588144901506; Wed, 29 Apr 2020 00:21:41 -0700 (PDT) Date: Wed, 29 Apr 2020 07:21:18 +0000 In-Reply-To: <20200429072121.50094-1-satyat@google.com> Message-Id: <20200429072121.50094-10-satyat@google.com> Mime-Version: 1.0 References: <20200429072121.50094-1-satyat@google.com> X-Mailer: git-send-email 2.26.2.303.gf8c07b1a785-goog Subject: [PATCH v11 09/12] fs: introduce SB_INLINECRYPT From: Satya Tangirala To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, 
linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-ext4@vger.kernel.org Cc: Barani Muthukumaran , Kuohong Wang , Kim Boojin , Satya Tangirala Sender: linux-ext4-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-ext4@vger.kernel.org Introduce SB_INLINECRYPT, which is set by filesystems that wish to use blk-crypto for file content en/decryption. This flag maps to the '-o inlinecrypt' mount option which multiple filesystems will implement, and code in fs/crypto/ needs to be able to check for this mount option in a filesystem-independent way. Signed-off-by: Satya Tangirala --- fs/proc_namespace.c | 1 + include/linux/fs.h | 1 + 2 files changed, 2 insertions(+) diff --git a/fs/proc_namespace.c b/fs/proc_namespace.c index 273ee82d8aa97..8bf195d3bda69 100644 --- a/fs/proc_namespace.c +++ b/fs/proc_namespace.c @@ -49,6 +49,7 @@ static int show_sb_opts(struct seq_file *m, struct super_block *sb) { SB_DIRSYNC, ",dirsync" }, { SB_MANDLOCK, ",mand" }, { SB_LAZYTIME, ",lazytime" }, + { SB_INLINECRYPT, ",inlinecrypt" }, { 0, NULL } }; const struct proc_fs_info *fs_infop; diff --git a/include/linux/fs.h b/include/linux/fs.h index 4f6f59b4f22a8..38fc6c8d4f45b 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -1376,6 +1376,7 @@ extern int send_sigurg(struct fown_struct *fown); #define SB_NODIRATIME 2048 /* Do not update directory access times */ #define SB_SILENT 32768 #define SB_POSIXACL (1<<16) /* VFS does not apply the umask */ +#define SB_INLINECRYPT (1<<17) /* Use blk-crypto for encrypted files */ #define SB_KERNMOUNT (1<<22) /* this is a kern_mount call */ #define SB_I_VERSION (1<<23) /* Update inode I_version field */ #define SB_LAZYTIME (1<<25) /* Update the on-disk [acm]times lazily */ From patchwork Wed Apr 29 07:21:19 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 1279038 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-ext4-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=google.com header.i=@google.com header.a=rsa-sha256 header.s=20161025 header.b=dK68pMFi; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 49BqhS2mDVz9sTJ for ; Wed, 29 Apr 2020 17:22:00 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726797AbgD2HV5 (ORCPT ); Wed, 29 Apr 2020 03:21:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50854 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726766AbgD2HVp (ORCPT ); Wed, 29 Apr 2020 03:21:45 -0400 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 145C3C08ED7D for ; Wed, 29 Apr 2020 00:21:44 -0700 (PDT) Received: by mail-yb1-xb4a.google.com with SMTP id y7so2267356ybj.15 for ; Wed, 29 Apr 2020 00:21:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; 
h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=C+M0QLRFee1UXwUNWpu6az9HhabGSxedoK+2usQQ6HA=; b=dK68pMFii1isQ7euM+KrZWwsCKNgbxFS1x4SI1Wr7PZXdlUh0pSgBIFMjV1AqFb863 z4YKAcnFoVoTIfxTxta8BoyfRxHwZ6IkA7BE8+/H/w0qsKTrUBW5xl0UF1e/Y3aN2tmY NqLt3aEM1i2JNUF6tp4MgE/bBTKi7eVzTHsbrjTDA18P1RSPni+U8aVOGTKfgZi7efQO ZerXmR1TkpdVs6vocW3CMqp7UratCBdpCx5ozJAmEF4xNTk+uiUyId5cxbucd25tKEaS TqazKck72KiMWhuWcFZg74l2HnpcW3A3vVnjtCy0B3mh/+k/gv0+Usrf/xbz+9Tbu9hM 6KdQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=C+M0QLRFee1UXwUNWpu6az9HhabGSxedoK+2usQQ6HA=; b=Pv6H5655u0bkvxDstLCUVVO+b82nSiYBNOZo4wxLkO5DYka7xKmJ0wtxPldhhoDfpq gzBYOLQUlmzgdrlsm8z1mC1yvSE1CCA+sdQ8NRnNjMcBRS+voebtS/TLV0Wsa1BEjsQb uYBrFWo2Fm11aKfj+oZjoqIlK/6rpxZnnwuh5TxnY/+96ciqc7yPVmcJ6T+Jm7PNcL7g L7pt3vg4L9Ev48/BCx14Waf9R1ZrHSDnhBXtRgLIxCfOnqFMM1zUzxpNuJEJvBG2LzkP qqb3/svEmN/bXgET/30bOU8KN0tu2uvw59J769A+JvE8w6DFRxpZkKgfxamvEOsd0a88 f27A== X-Gm-Message-State: AGi0PuY4oAp/kvYafRORwQJv7QRxpRWudPi7xTiPlP5aQN0b3IGNGee0 U47NMuA5a7n7tubbldt3B3sAKSgjCN8= X-Google-Smtp-Source: APiQypIafFztjh0c1iR/Y173YjGitYzKpYhWwldzP+l5VACQsCKZm1iy8qBFeDJjtls6PNy/z+EBS1Z/46E= X-Received: by 2002:a25:dd81:: with SMTP id u123mr45680360ybg.109.1588144903211; Wed, 29 Apr 2020 00:21:43 -0700 (PDT) Date: Wed, 29 Apr 2020 07:21:19 +0000 In-Reply-To: <20200429072121.50094-1-satyat@google.com> Message-Id: <20200429072121.50094-11-satyat@google.com> Mime-Version: 1.0 References: <20200429072121.50094-1-satyat@google.com> X-Mailer: git-send-email 2.26.2.303.gf8c07b1a785-goog Subject: [PATCH v11 10/12] fscrypt: add inline encryption support From: Satya Tangirala To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-ext4@vger.kernel.org Cc: Barani Muthukumaran , Kuohong Wang , Kim Boojin , Satya Tangirala , Eric Biggers Sender: linux-ext4-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-ext4@vger.kernel.org Add support for inline encryption to fs/crypto/. With "inline encryption", the block layer handles the decryption/encryption as part of the bio, instead of the filesystem doing the crypto itself via Linux's crypto API. This model is needed in order to take advantage of the inline encryption hardware present on most modern mobile SoCs. To use inline encryption, the filesystem needs to be mounted with '-o inlinecrypt'. The contents of any encrypted files will then be encrypted using blk-crypto, instead of using the traditional filesystem-layer crypto. Fscrypt still provides the key and IV to use, and the actual ciphertext on-disk is still the same; therefore it's testable using the existing fscrypt ciphertext verification tests. Note that since blk-crypto has a fallack to Linux's crypto API, and also supports all the encryption modes currently supported by fscrypt, this feature is usable and testable even without actual inline encryption hardware. Per-filesystem changes will be needed to set encryption contexts when submitting bios and to implement the 'inlinecrypt' mount option. This patch just adds the common code. 
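For orientation, here is an illustrative sketch of what those per-filesystem hooks end up looking like with the helpers added by this patch (not part of this patch: fs_add_block_to_bio() is a made-up helper, and the usual bio setup and error handling are elided):

    #include <linux/bio.h>
    #include <linux/blkdev.h>
    #include <linux/fscrypt.h>

    static void fs_add_block_to_bio(struct bio **bio, struct inode *inode,
                                    u64 lblk, struct page *page)
    {
            /*
             * Don't merge blocks whose encryption key or DUN would be
             * discontiguous with the data already in the bio.
             */
            if (*bio && !fscrypt_mergeable_bio(*bio, inode, lblk)) {
                    submit_bio(*bio);
                    *bio = NULL;
            }
            if (!*bio) {
                    *bio = bio_alloc(GFP_NOIO, BIO_MAX_PAGES);
                    /* ... set the bdev, starting sector and op here ... */

                    /* Attach the key and DUN before any pages are added. */
                    fscrypt_set_bio_crypt_ctx(*bio, inode, lblk, GFP_NOIO);
            }
            bio_add_page(*bio, page, i_blocksize(inode), 0);
    }

Blocks of files that don't use inline encryption (see fscrypt_inode_uses_fs_layer_crypto() below) must still go through the existing filesystem-layer encryption path; only the bio annotation shown above is new for inline crypto.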
Co-developed-by: Eric Biggers Signed-off-by: Eric Biggers Signed-off-by: Satya Tangirala --- fs/crypto/Kconfig | 6 + fs/crypto/Makefile | 1 + fs/crypto/bio.c | 50 +++++ fs/crypto/crypto.c | 2 +- fs/crypto/fname.c | 4 +- fs/crypto/fscrypt_private.h | 120 ++++++++++-- fs/crypto/inline_crypt.c | 365 ++++++++++++++++++++++++++++++++++++ fs/crypto/keyring.c | 4 +- fs/crypto/keysetup.c | 92 ++++++--- fs/crypto/keysetup_v1.c | 16 +- include/linux/fscrypt.h | 57 ++++++ 11 files changed, 660 insertions(+), 57 deletions(-) create mode 100644 fs/crypto/inline_crypt.c diff --git a/fs/crypto/Kconfig b/fs/crypto/Kconfig index 8046d7c7a3e9c..f1f11a6228ebf 100644 --- a/fs/crypto/Kconfig +++ b/fs/crypto/Kconfig @@ -24,3 +24,9 @@ config FS_ENCRYPTION_ALGS select CRYPTO_SHA256 select CRYPTO_SHA512 select CRYPTO_XTS + +config FS_ENCRYPTION_INLINE_CRYPT + bool "Enable fscrypt to use inline crypto" + depends on FS_ENCRYPTION && BLK_INLINE_ENCRYPTION + help + Enable fscrypt to use inline encryption hardware if available. diff --git a/fs/crypto/Makefile b/fs/crypto/Makefile index 232e2bb5a337b..652c7180ec6de 100644 --- a/fs/crypto/Makefile +++ b/fs/crypto/Makefile @@ -11,3 +11,4 @@ fscrypto-y := crypto.o \ policy.o fscrypto-$(CONFIG_BLOCK) += bio.o +fscrypto-$(CONFIG_FS_ENCRYPTION_INLINE_CRYPT) += inline_crypt.o diff --git a/fs/crypto/bio.c b/fs/crypto/bio.c index 4fa18fff9c4ef..1ea9369a76880 100644 --- a/fs/crypto/bio.c +++ b/fs/crypto/bio.c @@ -41,6 +41,52 @@ void fscrypt_decrypt_bio(struct bio *bio) } EXPORT_SYMBOL(fscrypt_decrypt_bio); +static int fscrypt_zeroout_range_inline_crypt(const struct inode *inode, + pgoff_t lblk, sector_t pblk, + unsigned int len) +{ + const unsigned int blockbits = inode->i_blkbits; + const unsigned int blocks_per_page = 1 << (PAGE_SHIFT - blockbits); + struct bio *bio; + int ret, err = 0; + int num_pages = 0; + + /* This always succeeds since __GFP_DIRECT_RECLAIM is set. 
*/ + bio = bio_alloc(GFP_NOFS, BIO_MAX_PAGES); + + while (len) { + unsigned int blocks_this_page = min(len, blocks_per_page); + unsigned int bytes_this_page = blocks_this_page << blockbits; + + if (num_pages == 0) { + fscrypt_set_bio_crypt_ctx(bio, inode, lblk, GFP_NOFS); + bio_set_dev(bio, inode->i_sb->s_bdev); + bio->bi_iter.bi_sector = + pblk << (blockbits - SECTOR_SHIFT); + bio_set_op_attrs(bio, REQ_OP_WRITE, 0); + } + ret = bio_add_page(bio, ZERO_PAGE(0), bytes_this_page, 0); + if (WARN_ON(ret != bytes_this_page)) { + err = -EIO; + goto out; + } + num_pages++; + len -= blocks_this_page; + lblk += blocks_this_page; + pblk += blocks_this_page; + if (num_pages == BIO_MAX_PAGES || !len) { + err = submit_bio_wait(bio); + if (err) + goto out; + bio_reset(bio); + num_pages = 0; + } + } +out: + bio_put(bio); + return err; +} + /** * fscrypt_zeroout_range() - zero out a range of blocks in an encrypted file * @inode: the file's inode @@ -75,6 +121,10 @@ int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk, if (len == 0) return 0; + if (fscrypt_inode_uses_inline_crypto(inode)) + return fscrypt_zeroout_range_inline_crypt(inode, lblk, pblk, + len); + BUILD_BUG_ON(ARRAY_SIZE(pages) > BIO_MAX_PAGES); nr_pages = min_t(unsigned int, ARRAY_SIZE(pages), (len + blocks_per_page - 1) >> blocks_per_page_bits); diff --git a/fs/crypto/crypto.c b/fs/crypto/crypto.c index 1ecaac7ee3cb8..263bc676c73dd 100644 --- a/fs/crypto/crypto.c +++ b/fs/crypto/crypto.c @@ -95,7 +95,7 @@ int fscrypt_crypt_block(const struct inode *inode, fscrypt_direction_t rw, DECLARE_CRYPTO_WAIT(wait); struct scatterlist dst, src; struct fscrypt_info *ci = inode->i_crypt_info; - struct crypto_skcipher *tfm = ci->ci_ctfm; + struct crypto_skcipher *tfm = ci->ci_key.tfm; int res = 0; if (WARN_ON_ONCE(len <= 0)) diff --git a/fs/crypto/fname.c b/fs/crypto/fname.c index 4c212442a8f7f..0fca2d7a56453 100644 --- a/fs/crypto/fname.c +++ b/fs/crypto/fname.c @@ -117,7 +117,7 @@ int fscrypt_fname_encrypt(const struct inode *inode, const struct qstr *iname, struct skcipher_request *req = NULL; DECLARE_CRYPTO_WAIT(wait); const struct fscrypt_info *ci = inode->i_crypt_info; - struct crypto_skcipher *tfm = ci->ci_ctfm; + struct crypto_skcipher *tfm = ci->ci_key.tfm; union fscrypt_iv iv; struct scatterlist sg; int res; @@ -170,7 +170,7 @@ static int fname_decrypt(const struct inode *inode, DECLARE_CRYPTO_WAIT(wait); struct scatterlist src_sg, dst_sg; const struct fscrypt_info *ci = inode->i_crypt_info; - struct crypto_skcipher *tfm = ci->ci_ctfm; + struct crypto_skcipher *tfm = ci->ci_key.tfm; union fscrypt_iv iv; int res; diff --git a/fs/crypto/fscrypt_private.h b/fs/crypto/fscrypt_private.h index dbced2937ec89..3ec6fecd331af 100644 --- a/fs/crypto/fscrypt_private.h +++ b/fs/crypto/fscrypt_private.h @@ -14,6 +14,7 @@ #include #include #include +#include #define CONST_STRLEN(str) (sizeof(str) - 1) @@ -166,6 +167,20 @@ struct fscrypt_symlink_data { char encrypted_path[1]; } __packed; +/** + * struct fscrypt_prepared_key - a key prepared for actual encryption/decryption + * @tfm: crypto API transform object + * @blk_key: key for blk-crypto + * + * Normally only one of the fields will be non-NULL. 
+ */ +struct fscrypt_prepared_key { + struct crypto_skcipher *tfm; +#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT + struct fscrypt_blk_crypto_key *blk_key; +#endif +}; + /* * fscrypt_info - the "encryption key" for an inode * @@ -175,12 +190,20 @@ struct fscrypt_symlink_data { */ struct fscrypt_info { - /* The actual crypto transform used for encryption and decryption */ - struct crypto_skcipher *ci_ctfm; + /* The key in a form prepared for actual encryption/decryption */ + struct fscrypt_prepared_key ci_key; /* True if the key should be freed when this fscrypt_info is freed */ bool ci_owns_key; +#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT + /* + * True if this inode will use inline encryption (blk-crypto) instead of + * the traditional filesystem-layer encryption. + */ + bool ci_inlinecrypt; +#endif + /* * Encryption mode used for this inode. It corresponds to either the * contents or filenames encryption mode, depending on the inode type. @@ -205,7 +228,7 @@ struct fscrypt_info { /* * If non-NULL, then encryption is done using the master key directly - * and ci_ctfm will equal ci_direct_key->dk_ctfm. + * and ci_key will equal ci_direct_key->dk_key. */ struct fscrypt_direct_key *ci_direct_key; @@ -258,6 +281,7 @@ union fscrypt_iv { u8 nonce[FS_KEY_DERIVATION_NONCE_SIZE]; }; u8 raw[FSCRYPT_MAX_IV_SIZE]; + __le64 dun[FSCRYPT_MAX_IV_SIZE / sizeof(__le64)]; }; void fscrypt_generate_iv(union fscrypt_iv *iv, u64 lblk_num, @@ -300,6 +324,76 @@ extern int fscrypt_hkdf_expand(const struct fscrypt_hkdf *hkdf, u8 context, extern void fscrypt_destroy_hkdf(struct fscrypt_hkdf *hkdf); +/* inline_crypt.c */ +#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT +extern void fscrypt_select_encryption_impl(struct fscrypt_info *ci); + +static inline bool +fscrypt_using_inline_encryption(const struct fscrypt_info *ci) +{ + return ci->ci_inlinecrypt; +} + +extern int fscrypt_prepare_inline_crypt_key( + struct fscrypt_prepared_key *prep_key, + const u8 *raw_key, + const struct fscrypt_info *ci); + +extern void fscrypt_destroy_inline_crypt_key( + struct fscrypt_prepared_key *prep_key); + +/* + * Check whether the crypto transform or blk-crypto key has been allocated in + * @prep_key, depending on which encryption implementation the file will use. + */ +static inline bool +fscrypt_is_key_prepared(struct fscrypt_prepared_key *prep_key, + const struct fscrypt_info *ci) +{ + /* + * The READ_ONCE() here pairs with the smp_store_release() in + * fscrypt_prepare_key(). (This only matters for the per-mode keys, + * which are shared by multiple inodes.) 
+ */ + if (fscrypt_using_inline_encryption(ci)) + return READ_ONCE(prep_key->blk_key) != NULL; + return READ_ONCE(prep_key->tfm) != NULL; +} + +#else /* CONFIG_FS_ENCRYPTION_INLINE_CRYPT */ + +static inline void fscrypt_select_encryption_impl(struct fscrypt_info *ci) +{ +} + +static inline bool fscrypt_using_inline_encryption( + const struct fscrypt_info *ci) +{ + return false; +} + +static inline int +fscrypt_prepare_inline_crypt_key(struct fscrypt_prepared_key *prep_key, + const u8 *raw_key, + const struct fscrypt_info *ci) +{ + WARN_ON(1); + return -EOPNOTSUPP; +} + +static inline void +fscrypt_destroy_inline_crypt_key(struct fscrypt_prepared_key *prep_key) +{ +} + +static inline bool +fscrypt_is_key_prepared(struct fscrypt_prepared_key *prep_key, + const struct fscrypt_info *ci) +{ + return READ_ONCE(prep_key->tfm) != NULL; +} +#endif /* !CONFIG_FS_ENCRYPTION_INLINE_CRYPT */ + /* keyring.c */ /* @@ -389,14 +483,11 @@ struct fscrypt_master_key { struct list_head mk_decrypted_inodes; spinlock_t mk_decrypted_inodes_lock; - /* Crypto API transforms for DIRECT_KEY policies, allocated on-demand */ - struct crypto_skcipher *mk_direct_tfms[__FSCRYPT_MODE_MAX + 1]; + /* Per-mode keys for DIRECT_KEY policies, allocated on-demand */ + struct fscrypt_prepared_key mk_direct_keys[__FSCRYPT_MODE_MAX + 1]; - /* - * Crypto API transforms for filesystem-layer implementation of - * IV_INO_LBLK_64 policies, allocated on-demand. - */ - struct crypto_skcipher *mk_iv_ino_lblk_64_tfms[__FSCRYPT_MODE_MAX + 1]; + /* Per-mode keys for IV_INO_LBLK_64 policies, allocated on-demand */ + struct fscrypt_prepared_key mk_iv_ino_lblk_64_keys[__FSCRYPT_MODE_MAX + 1]; } __randomize_layout; @@ -453,13 +544,16 @@ struct fscrypt_mode { int keysize; int ivsize; int logged_impl_name; + enum blk_crypto_mode_num blk_crypto_mode; }; extern struct fscrypt_mode fscrypt_modes[]; -extern struct crypto_skcipher * -fscrypt_allocate_skcipher(struct fscrypt_mode *mode, const u8 *raw_key, - const struct inode *inode); +extern int fscrypt_prepare_key(struct fscrypt_prepared_key *prep_key, + const u8 *raw_key, + const struct fscrypt_info *ci); + +extern void fscrypt_destroy_prepared_key(struct fscrypt_prepared_key *prep_key); extern int fscrypt_set_per_file_enc_key(struct fscrypt_info *ci, const u8 *raw_key); diff --git a/fs/crypto/inline_crypt.c b/fs/crypto/inline_crypt.c new file mode 100644 index 0000000000000..0676832817a74 --- /dev/null +++ b/fs/crypto/inline_crypt.c @@ -0,0 +1,365 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Inline encryption support for fscrypt + * + * Copyright 2019 Google LLC + */ + +/* + * With "inline encryption", the block layer handles the decryption/encryption + * as part of the bio, instead of the filesystem doing the crypto itself via + * crypto API. See Documentation/block/inline-encryption.rst. fscrypt still + * provides the key and IV to use. 
+ */ + +#include +#include +#include +#include + +#include "fscrypt_private.h" + +struct fscrypt_blk_crypto_key { + struct blk_crypto_key base; + int num_devs; + struct request_queue *devs[]; +}; + +static int fscrypt_get_num_devices(struct super_block *sb) +{ + if (sb->s_cop->get_num_devices) + return sb->s_cop->get_num_devices(sb); + return 1; +} + +static void fscrypt_get_devices(struct super_block *sb, int num_devs, + struct request_queue **devs) +{ + if (num_devs == 1) + devs[0] = bdev_get_queue(sb->s_bdev); + else + sb->s_cop->get_devices(sb, devs); +} + +static unsigned int fscrypt_get_dun_bytes(const struct fscrypt_info *ci) +{ + unsigned int dun_bytes = 8; + + if (fscrypt_policy_flags(&ci->ci_policy) & + FSCRYPT_POLICY_FLAG_DIRECT_KEY) + dun_bytes += FS_KEY_DERIVATION_NONCE_SIZE; + + return dun_bytes; +} + +/* Enable inline encryption for this file if supported. */ +void fscrypt_select_encryption_impl(struct fscrypt_info *ci) +{ + const struct inode *inode = ci->ci_inode; + struct super_block *sb = inode->i_sb; + struct blk_crypto_config crypto_cfg; + int num_devs; + struct request_queue **devs; + int i; + + /* The file must need contents encryption, not filenames encryption */ + if (!fscrypt_needs_contents_encryption(inode)) + return; + + /* The crypto mode must be valid */ + if (ci->ci_mode->blk_crypto_mode == BLK_ENCRYPTION_MODE_INVALID) + return; + + /* The filesystem must be mounted with -o inlinecrypt */ + if (!(sb->s_flags & SB_INLINECRYPT)) + return; + + /* + * blk-crypto must support the crypto configuration we'll use for the + * inode on all devices in the sb + */ + crypto_cfg.crypto_mode = ci->ci_mode->blk_crypto_mode; + crypto_cfg.data_unit_size = sb->s_blocksize; + crypto_cfg.dun_bytes = fscrypt_get_dun_bytes(ci); + num_devs = fscrypt_get_num_devices(sb); + devs = kmalloc_array(num_devs, sizeof(*devs), GFP_NOFS); + if (!devs) + return; + fscrypt_get_devices(sb, num_devs, devs); + + for (i = 0; i < num_devs; i++) { + if (!blk_crypto_config_supported(devs[i], &crypto_cfg)) + goto out_free_devs; + } + + ci->ci_inlinecrypt = true; +out_free_devs: + kfree(devs); +} + +int fscrypt_prepare_inline_crypt_key(struct fscrypt_prepared_key *prep_key, + const u8 *raw_key, + const struct fscrypt_info *ci) +{ + const struct inode *inode = ci->ci_inode; + struct super_block *sb = inode->i_sb; + enum blk_crypto_mode_num crypto_mode = ci->ci_mode->blk_crypto_mode; + int num_devs = fscrypt_get_num_devices(sb); + int queue_refs = 0; + struct fscrypt_blk_crypto_key *blk_key; + int err; + int i; + unsigned int flags; + + blk_key = kzalloc(struct_size(blk_key, devs, num_devs), GFP_NOFS); + if (!blk_key) + return -ENOMEM; + + blk_key->num_devs = num_devs; + fscrypt_get_devices(sb, num_devs, blk_key->devs); + + err = blk_crypto_init_key(&blk_key->base, raw_key, crypto_mode, + fscrypt_get_dun_bytes(ci), sb->s_blocksize); + if (err) { + fscrypt_err(inode, "error %d initializing blk-crypto key", err); + goto fail; + } + + /* + * We have to start using blk-crypto on all the filesystem's devices. + * We also have to save all the request_queue's for later so that the + * key can be evicted from them. This is needed because some keys + * aren't destroyed until after the filesystem was already unmounted + * (namely, the per-mode keys in struct fscrypt_master_key). 
+ */ + for (i = 0; i < num_devs; i++) { + if (!blk_get_queue(blk_key->devs[i])) { + fscrypt_err(inode, "couldn't get request_queue"); + err = -EAGAIN; + goto fail; + } + queue_refs++; + + flags = memalloc_nofs_save(); + err = blk_crypto_start_using_key(&blk_key->base, + blk_key->devs[i]); + memalloc_nofs_restore(flags); + if (err) { + fscrypt_err(inode, + "error %d starting to use blk-crypto", err); + goto fail; + } + } + /* + * Pairs with READ_ONCE() in fscrypt_is_key_prepared(). (Only matters + * for the per-mode keys, which are shared by multiple inodes.) + */ + smp_store_release(&prep_key->blk_key, blk_key); + return 0; + +fail: + for (i = 0; i < queue_refs; i++) + blk_put_queue(blk_key->devs[i]); + kzfree(blk_key); + return err; +} + +void fscrypt_destroy_inline_crypt_key(struct fscrypt_prepared_key *prep_key) +{ + struct fscrypt_blk_crypto_key *blk_key = prep_key->blk_key; + int i; + + if (blk_key) { + for (i = 0; i < blk_key->num_devs; i++) { + blk_crypto_evict_key(blk_key->devs[i], &blk_key->base); + blk_put_queue(blk_key->devs[i]); + } + kzfree(blk_key); + } +} + +/** + * fscrypt_inode_uses_inline_crypto - test whether an inode uses inline + * encryption + * @inode: an inode + * + * Return: true if the inode requires file contents encryption and if the + * encryption should be done in the block layer via blk-crypto rather + * than in the filesystem layer. + */ +bool fscrypt_inode_uses_inline_crypto(const struct inode *inode) +{ + return fscrypt_needs_contents_encryption(inode) && + inode->i_crypt_info->ci_inlinecrypt; +} +EXPORT_SYMBOL_GPL(fscrypt_inode_uses_inline_crypto); + +/** + * fscrypt_inode_uses_fs_layer_crypto - test whether an inode uses fs-layer + * encryption + * @inode: an inode + * + * Return: true if the inode requires file contents encryption and if the + * encryption should be done in the filesystem layer rather than in the + * block layer via blk-crypto. + */ +bool fscrypt_inode_uses_fs_layer_crypto(const struct inode *inode) +{ + return fscrypt_needs_contents_encryption(inode) && + !inode->i_crypt_info->ci_inlinecrypt; +} +EXPORT_SYMBOL_GPL(fscrypt_inode_uses_fs_layer_crypto); + +static void fscrypt_generate_dun(const struct fscrypt_info *ci, u64 lblk_num, + u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE]) +{ + union fscrypt_iv iv; + int i; + + fscrypt_generate_iv(&iv, lblk_num, ci); + + BUILD_BUG_ON(FSCRYPT_MAX_IV_SIZE > BLK_CRYPTO_MAX_IV_SIZE); + memset(dun, 0, BLK_CRYPTO_MAX_IV_SIZE); + for (i = 0; i < ci->ci_mode->ivsize/sizeof(dun[0]); i++) + dun[i] = le64_to_cpu(iv.dun[i]); +} + +/** + * fscrypt_set_bio_crypt_ctx - prepare a file contents bio for inline encryption + * @bio: a bio which will eventually be submitted to the file + * @inode: the file's inode + * @first_lblk: the first file logical block number in the I/O + * @gfp_mask: memory allocation flags - these must be a waiting mask so that + * bio_crypt_set_ctx can't fail. + * + * If the contents of the file should be encrypted (or decrypted) with inline + * encryption, then assign the appropriate encryption context to the bio. + * + * Normally the bio should be newly allocated (i.e. no pages added yet), as + * otherwise fscrypt_mergeable_bio() won't work as intended. + * + * The encryption context will be freed automatically when the bio is freed. 
+ */ +void fscrypt_set_bio_crypt_ctx(struct bio *bio, const struct inode *inode, + u64 first_lblk, gfp_t gfp_mask) +{ + const struct fscrypt_info *ci = inode->i_crypt_info; + u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE]; + + if (!fscrypt_inode_uses_inline_crypto(inode)) + return; + + fscrypt_generate_dun(ci, first_lblk, dun); + bio_crypt_set_ctx(bio, &ci->ci_key.blk_key->base, dun, gfp_mask); +} +EXPORT_SYMBOL_GPL(fscrypt_set_bio_crypt_ctx); + +/* Extract the inode and logical block number from a buffer_head. */ +static bool bh_get_inode_and_lblk_num(const struct buffer_head *bh, + const struct inode **inode_ret, + u64 *lblk_num_ret) +{ + struct page *page = bh->b_page; + const struct address_space *mapping; + const struct inode *inode; + + /* + * The ext4 journal (jbd2) can submit a buffer_head it directly created + * for a non-pagecache page. fscrypt doesn't care about these. + */ + mapping = page_mapping(page); + if (!mapping) + return false; + inode = mapping->host; + + *inode_ret = inode; + *lblk_num_ret = ((u64)page->index << (PAGE_SHIFT - inode->i_blkbits)) + + (bh_offset(bh) >> inode->i_blkbits); + return true; +} + +/** + * fscrypt_set_bio_crypt_ctx_bh - prepare a file contents bio for inline + * encryption + * @bio: a bio which will eventually be submitted to the file + * @first_bh: the first buffer_head for which I/O will be submitted + * @gfp_mask: memory allocation flags + * + * Same as fscrypt_set_bio_crypt_ctx(), except this takes a buffer_head instead + * of an inode and block number directly. + */ +void fscrypt_set_bio_crypt_ctx_bh(struct bio *bio, + const struct buffer_head *first_bh, + gfp_t gfp_mask) +{ + const struct inode *inode; + u64 first_lblk; + + if (bh_get_inode_and_lblk_num(first_bh, &inode, &first_lblk)) + fscrypt_set_bio_crypt_ctx(bio, inode, first_lblk, gfp_mask); +} +EXPORT_SYMBOL_GPL(fscrypt_set_bio_crypt_ctx_bh); + +/** + * fscrypt_mergeable_bio - test whether data can be added to a bio + * @bio: the bio being built up + * @inode: the inode for the next part of the I/O + * @next_lblk: the next file logical block number in the I/O + * + * When building a bio which may contain data which should undergo inline + * encryption (or decryption) via fscrypt, filesystems should call this function + * to ensure that the resulting bio contains only logically contiguous data. + * This will return false if the next part of the I/O cannot be merged with the + * bio because either the encryption key would be different or the encryption + * data unit numbers would be discontiguous. + * + * fscrypt_set_bio_crypt_ctx() must have already been called on the bio. + * + * Return: true iff the I/O is mergeable + */ +bool fscrypt_mergeable_bio(struct bio *bio, const struct inode *inode, + u64 next_lblk) +{ + const struct bio_crypt_ctx *bc = bio->bi_crypt_context; + u64 next_dun[BLK_CRYPTO_DUN_ARRAY_SIZE]; + + if (!!bc != fscrypt_inode_uses_inline_crypto(inode)) + return false; + if (!bc) + return true; + + /* + * Comparing the key pointers is good enough, as all I/O for each key + * uses the same pointer. I.e., there's currently no need to support + * merging requests where the keys are the same but the pointers differ. 
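To make the intended calling pattern concrete (it mirrors the ext4 readpage and f2fs data-path hunks later in this series), a hypothetical per-block helper might look like the following; the physical-contiguity bookkeeping is assumed to come from the caller:

        static struct bio *example_extend_or_restart(struct bio *bio,
                                                     const struct inode *inode,
                                                     u64 next_lblk, sector_t next_pblk,
                                                     sector_t last_pblk_in_bio)
        {
                /*
                 * Submit the bio if the next block is physically discontiguous
                 * or if blk-crypto could not merge it (different key or
                 * non-contiguous DUNs).
                 */
                if (bio && (next_pblk != last_pblk_in_bio + 1 ||
                            !fscrypt_mergeable_bio(bio, inode, next_lblk))) {
                        submit_bio(bio);
                        bio = NULL;
                }
                if (!bio) {
                        bio = bio_alloc(GFP_NOIO, BIO_MAX_PAGES);
                        /* Must be set before any pages are added. */
                        fscrypt_set_bio_crypt_ctx(bio, inode, next_lblk, GFP_NOIO);
                }
                return bio;
        }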
+ */ + if (bc->bc_key != &inode->i_crypt_info->ci_key.blk_key->base) + return false; + + fscrypt_generate_dun(inode->i_crypt_info, next_lblk, next_dun); + return bio_crypt_dun_is_contiguous(bc, bio->bi_iter.bi_size, next_dun); +} +EXPORT_SYMBOL_GPL(fscrypt_mergeable_bio); + +/** + * fscrypt_mergeable_bio_bh - test whether data can be added to a bio + * @bio: the bio being built up + * @next_bh: the next buffer_head for which I/O will be submitted + * + * Same as fscrypt_mergeable_bio(), except this takes a buffer_head instead of + * an inode and block number directly. + * + * Return: true iff the I/O is mergeable + */ +bool fscrypt_mergeable_bio_bh(struct bio *bio, + const struct buffer_head *next_bh) +{ + const struct inode *inode; + u64 next_lblk; + + if (!bh_get_inode_and_lblk_num(next_bh, &inode, &next_lblk)) + return !bio->bi_crypt_context; + + return fscrypt_mergeable_bio(bio, inode, next_lblk); +} +EXPORT_SYMBOL_GPL(fscrypt_mergeable_bio_bh); diff --git a/fs/crypto/keyring.c b/fs/crypto/keyring.c index ab41b25d4fa1b..d8ab33f631ba2 100644 --- a/fs/crypto/keyring.c +++ b/fs/crypto/keyring.c @@ -44,8 +44,8 @@ static void free_master_key(struct fscrypt_master_key *mk) wipe_master_key_secret(&mk->mk_secret); for (i = 0; i <= __FSCRYPT_MODE_MAX; i++) { - crypto_free_skcipher(mk->mk_direct_tfms[i]); - crypto_free_skcipher(mk->mk_iv_ino_lblk_64_tfms[i]); + fscrypt_destroy_prepared_key(&mk->mk_direct_keys[i]); + fscrypt_destroy_prepared_key(&mk->mk_iv_ino_lblk_64_keys[i]); } key_put(mk->mk_users); diff --git a/fs/crypto/keysetup.c b/fs/crypto/keysetup.c index 302375e9f719e..72481bd202def 100644 --- a/fs/crypto/keysetup.c +++ b/fs/crypto/keysetup.c @@ -19,6 +19,7 @@ struct fscrypt_mode fscrypt_modes[] = { .cipher_str = "xts(aes)", .keysize = 64, .ivsize = 16, + .blk_crypto_mode = BLK_ENCRYPTION_MODE_AES_256_XTS, }, [FSCRYPT_MODE_AES_256_CTS] = { .friendly_name = "AES-256-CTS-CBC", @@ -31,6 +32,7 @@ struct fscrypt_mode fscrypt_modes[] = { .cipher_str = "essiv(cbc(aes),sha256)", .keysize = 16, .ivsize = 16, + .blk_crypto_mode = BLK_ENCRYPTION_MODE_AES_128_CBC_ESSIV, }, [FSCRYPT_MODE_AES_128_CTS] = { .friendly_name = "AES-128-CTS-CBC", @@ -43,6 +45,7 @@ struct fscrypt_mode fscrypt_modes[] = { .cipher_str = "adiantum(xchacha12,aes)", .keysize = 32, .ivsize = 32, + .blk_crypto_mode = BLK_ENCRYPTION_MODE_ADIANTUM, }, }; @@ -62,9 +65,9 @@ select_encryption_mode(const union fscrypt_policy *policy, } /* Create a symmetric cipher object for the given encryption mode and key */ -struct crypto_skcipher *fscrypt_allocate_skcipher(struct fscrypt_mode *mode, - const u8 *raw_key, - const struct inode *inode) +static struct crypto_skcipher * +fscrypt_allocate_skcipher(struct fscrypt_mode *mode, const u8 *raw_key, + const struct inode *inode) { struct crypto_skcipher *tfm; int err; @@ -107,30 +110,55 @@ struct crypto_skcipher *fscrypt_allocate_skcipher(struct fscrypt_mode *mode, return ERR_PTR(err); } -/* Given a per-file encryption key, set up the file's crypto transform object */ -int fscrypt_set_per_file_enc_key(struct fscrypt_info *ci, const u8 *raw_key) +/* + * Prepare the crypto transform object or blk-crypto key in @prep_key, given the + * raw key, encryption mode, and flag indicating which encryption implementation + * (fs-layer or blk-crypto) will be used. 
+ */ +int fscrypt_prepare_key(struct fscrypt_prepared_key *prep_key, + const u8 *raw_key, const struct fscrypt_info *ci) { struct crypto_skcipher *tfm; + if (fscrypt_using_inline_encryption(ci)) + return fscrypt_prepare_inline_crypt_key(prep_key, raw_key, ci); + tfm = fscrypt_allocate_skcipher(ci->ci_mode, raw_key, ci->ci_inode); if (IS_ERR(tfm)) return PTR_ERR(tfm); + /* + * Pairs with READ_ONCE() in fscrypt_is_key_prepared(). (Only matters + * for the per-mode keys, which are shared by multiple inodes.) + */ + smp_store_release(&prep_key->tfm, tfm); + return 0; +} + +/* Destroy a crypto transform object and/or blk-crypto key. */ +void fscrypt_destroy_prepared_key(struct fscrypt_prepared_key *prep_key) +{ + crypto_free_skcipher(prep_key->tfm); + fscrypt_destroy_inline_crypt_key(prep_key); +} - ci->ci_ctfm = tfm; +/* Given a per-file encryption key, set up the file's crypto transform object */ +int fscrypt_set_per_file_enc_key(struct fscrypt_info *ci, const u8 *raw_key) +{ ci->ci_owns_key = true; - return 0; + return fscrypt_prepare_key(&ci->ci_key, raw_key, ci); } static int setup_per_mode_enc_key(struct fscrypt_info *ci, struct fscrypt_master_key *mk, - struct crypto_skcipher **tfms, + struct fscrypt_prepared_key *keys, u8 hkdf_context, bool include_fs_uuid) { + static DEFINE_MUTEX(mode_key_setup_mutex); const struct inode *inode = ci->ci_inode; const struct super_block *sb = inode->i_sb; struct fscrypt_mode *mode = ci->ci_mode; const u8 mode_num = mode - fscrypt_modes; - struct crypto_skcipher *tfm, *prev_tfm; + struct fscrypt_prepared_key *prep_key; u8 mode_key[FSCRYPT_MAX_KEY_SIZE]; u8 hkdf_info[sizeof(mode_num) + sizeof(sb->s_uuid)]; unsigned int hkdf_infolen = 0; @@ -139,10 +167,16 @@ static int setup_per_mode_enc_key(struct fscrypt_info *ci, if (WARN_ON(mode_num > __FSCRYPT_MODE_MAX)) return -EINVAL; - /* pairs with cmpxchg() below */ - tfm = READ_ONCE(tfms[mode_num]); - if (likely(tfm != NULL)) - goto done; + prep_key = &keys[mode_num]; + if (fscrypt_is_key_prepared(prep_key, ci)) { + ci->ci_key = *prep_key; + return 0; + } + + mutex_lock(&mode_key_setup_mutex); + + if (fscrypt_is_key_prepared(prep_key, ci)) + goto done_unlock; BUILD_BUG_ON(sizeof(mode_num) != 1); BUILD_BUG_ON(sizeof(sb->s_uuid) != 16); @@ -157,21 +191,17 @@ static int setup_per_mode_enc_key(struct fscrypt_info *ci, hkdf_context, hkdf_info, hkdf_infolen, mode_key, mode->keysize); if (err) - return err; - tfm = fscrypt_allocate_skcipher(mode, mode_key, inode); + goto out_unlock; + err = fscrypt_prepare_key(prep_key, mode_key, ci); memzero_explicit(mode_key, mode->keysize); - if (IS_ERR(tfm)) - return PTR_ERR(tfm); - - /* pairs with READ_ONCE() above */ - prev_tfm = cmpxchg(&tfms[mode_num], NULL, tfm); - if (prev_tfm != NULL) { - crypto_free_skcipher(tfm); - tfm = prev_tfm; - } -done: - ci->ci_ctfm = tfm; - return 0; + if (err) + goto out_unlock; +done_unlock: + ci->ci_key = *prep_key; + err = 0; +out_unlock: + mutex_unlock(&mode_key_setup_mutex); + return err; } int fscrypt_derive_dirhash_key(struct fscrypt_info *ci, @@ -203,7 +233,7 @@ static int fscrypt_setup_v2_file_key(struct fscrypt_info *ci, * encryption key. This ensures that the master key is * consistently used only for HKDF, avoiding key reuse issues. 
*/ - err = setup_per_mode_enc_key(ci, mk, mk->mk_direct_tfms, + err = setup_per_mode_enc_key(ci, mk, mk->mk_direct_keys, HKDF_CONTEXT_DIRECT_KEY, false); } else if (ci->ci_policy.v2.flags & FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64) { @@ -213,7 +243,7 @@ static int fscrypt_setup_v2_file_key(struct fscrypt_info *ci, * the IVs. This format is optimized for use with inline * encryption hardware compliant with the UFS or eMMC standards. */ - err = setup_per_mode_enc_key(ci, mk, mk->mk_iv_ino_lblk_64_tfms, + err = setup_per_mode_enc_key(ci, mk, mk->mk_iv_ino_lblk_64_keys, HKDF_CONTEXT_IV_INO_LBLK_64_KEY, true); } else { @@ -261,6 +291,8 @@ static int setup_file_encryption_key(struct fscrypt_info *ci, struct fscrypt_key_specifier mk_spec; int err; + fscrypt_select_encryption_impl(ci); + switch (ci->ci_policy.version) { case FSCRYPT_POLICY_V1: mk_spec.type = FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR; @@ -353,7 +385,7 @@ static void put_crypt_info(struct fscrypt_info *ci) if (ci->ci_direct_key) fscrypt_put_direct_key(ci->ci_direct_key); else if (ci->ci_owns_key) - crypto_free_skcipher(ci->ci_ctfm); + fscrypt_destroy_prepared_key(&ci->ci_key); key = ci->ci_master_key; if (key) { diff --git a/fs/crypto/keysetup_v1.c b/fs/crypto/keysetup_v1.c index 801b48c0cd7f3..59c520b200cb0 100644 --- a/fs/crypto/keysetup_v1.c +++ b/fs/crypto/keysetup_v1.c @@ -146,7 +146,7 @@ struct fscrypt_direct_key { struct hlist_node dk_node; refcount_t dk_refcount; const struct fscrypt_mode *dk_mode; - struct crypto_skcipher *dk_ctfm; + struct fscrypt_prepared_key dk_key; u8 dk_descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE]; u8 dk_raw[FSCRYPT_MAX_KEY_SIZE]; }; @@ -154,7 +154,7 @@ struct fscrypt_direct_key { static void free_direct_key(struct fscrypt_direct_key *dk) { if (dk) { - crypto_free_skcipher(dk->dk_ctfm); + fscrypt_destroy_prepared_key(&dk->dk_key); kzfree(dk); } } @@ -199,6 +199,8 @@ find_or_insert_direct_key(struct fscrypt_direct_key *to_insert, continue; if (ci->ci_mode != dk->dk_mode) continue; + if (!fscrypt_is_key_prepared(&dk->dk_key, ci)) + continue; if (crypto_memneq(raw_key, dk->dk_raw, ci->ci_mode->keysize)) continue; /* using existing tfm with same (descriptor, mode, raw_key) */ @@ -231,13 +233,9 @@ fscrypt_get_direct_key(const struct fscrypt_info *ci, const u8 *raw_key) return ERR_PTR(-ENOMEM); refcount_set(&dk->dk_refcount, 1); dk->dk_mode = ci->ci_mode; - dk->dk_ctfm = fscrypt_allocate_skcipher(ci->ci_mode, raw_key, - ci->ci_inode); - if (IS_ERR(dk->dk_ctfm)) { - err = PTR_ERR(dk->dk_ctfm); - dk->dk_ctfm = NULL; + err = fscrypt_prepare_key(&dk->dk_key, raw_key, ci); + if (err) goto err_free_dk; - } memcpy(dk->dk_descriptor, ci->ci_policy.v1.master_key_descriptor, FSCRYPT_KEY_DESCRIPTOR_SIZE); memcpy(dk->dk_raw, raw_key, ci->ci_mode->keysize); @@ -259,7 +257,7 @@ static int setup_v1_file_key_direct(struct fscrypt_info *ci, if (IS_ERR(dk)) return PTR_ERR(dk); ci->ci_direct_key = dk; - ci->ci_ctfm = dk->dk_ctfm; + ci->ci_key = dk->dk_key; return 0; } diff --git a/include/linux/fscrypt.h b/include/linux/fscrypt.h index e3c2d2a155250..e02820b8e981e 100644 --- a/include/linux/fscrypt.h +++ b/include/linux/fscrypt.h @@ -64,6 +64,9 @@ struct fscrypt_operations { bool (*has_stable_inodes)(struct super_block *sb); void (*get_ino_and_lblk_bits)(struct super_block *sb, int *ino_bits_ret, int *lblk_bits_ret); + int (*get_num_devices)(struct super_block *sb); + void (*get_devices)(struct super_block *sb, + struct request_queue **devs); }; static inline bool fscrypt_has_encryption_key(const struct inode *inode) @@ -503,6 +506,60 @@ static 
inline void fscrypt_set_ops(struct super_block *sb, #endif /* !CONFIG_FS_ENCRYPTION */ +/* inline_crypt.c */ +#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT +extern bool fscrypt_inode_uses_inline_crypto(const struct inode *inode); + +extern bool fscrypt_inode_uses_fs_layer_crypto(const struct inode *inode); + +extern void fscrypt_set_bio_crypt_ctx(struct bio *bio, + const struct inode *inode, + u64 first_lblk, gfp_t gfp_mask); + +extern void fscrypt_set_bio_crypt_ctx_bh(struct bio *bio, + const struct buffer_head *first_bh, + gfp_t gfp_mask); + +extern bool fscrypt_mergeable_bio(struct bio *bio, const struct inode *inode, + u64 next_lblk); + +extern bool fscrypt_mergeable_bio_bh(struct bio *bio, + const struct buffer_head *next_bh); + +#else /* CONFIG_FS_ENCRYPTION_INLINE_CRYPT */ +static inline bool fscrypt_inode_uses_inline_crypto(const struct inode *inode) +{ + return false; +} + +static inline bool fscrypt_inode_uses_fs_layer_crypto(const struct inode *inode) +{ + return fscrypt_needs_contents_encryption(inode); +} + +static inline void fscrypt_set_bio_crypt_ctx(struct bio *bio, + const struct inode *inode, + u64 first_lblk, gfp_t gfp_mask) { } + +static inline void fscrypt_set_bio_crypt_ctx_bh( + struct bio *bio, + const struct buffer_head *first_bh, + gfp_t gfp_mask) { } + +static inline bool fscrypt_mergeable_bio(struct bio *bio, + const struct inode *inode, + u64 next_lblk) +{ + return true; +} + +static inline bool fscrypt_mergeable_bio_bh(struct bio *bio, + const struct buffer_head *next_bh) +{ + return true; +} +#endif /* !CONFIG_FS_ENCRYPTION_INLINE_CRYPT */ + /** * fscrypt_require_key - require an inode's encryption key * @inode: the inode we need the key for From patchwork Wed Apr 29 07:21:20 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 1279036 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-ext4-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=google.com header.i=@google.com header.a=rsa-sha256 header.s=20161025 header.b=aACNFaM4; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 49BqhJ712jz9sT9 for ; Wed, 29 Apr 2020 17:21:52 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726630AbgD2HVw (ORCPT ); Wed, 29 Apr 2020 03:21:52 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50838 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726775AbgD2HVp (ORCPT ); Wed, 29 Apr 2020 03:21:45 -0400 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 921C2C035495 for ; Wed, 29 Apr 2020 00:21:45 -0700 (PDT) Received: by mail-yb1-xb4a.google.com with SMTP id d206so2312511ybh.7 for ; Wed, 29 Apr 2020 00:21:45 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=8xi+eRNHWgXdVhsYfo5zSJ9HHfQ0KOMyDBcY/UWa0FM=; 
b=aACNFaM4lJkjm2/TPukWtO0EQPGkWc5eqqxNTyb9Wzh6Ih2BU+nopeyQnNEixxCL5h 0ESfx+gugDcbQmAOUtsI7LJikNiEBg08pNWv/9KcMqftrhkWhUuHe6ADWDCCXSWx/rDg 0LhAsiO8qIiYP+BouaVUCJHCU2NmDkz09lXYN7WwtRKsTQAF0jC3s2hyUnRrYPg2jllo k96Ph+yUD+JRbJmXCYaYhv5+7U1IS+1DfCX9utApfeU/GN8idHuJN28fyajK8RKD4J8E 0L20+ZvOIaC4uSTBRFlfWinE2uaYcyWVzGaYSMSEzTtzT07ruRufn06oujoMXZFwpnx3 pCOA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=8xi+eRNHWgXdVhsYfo5zSJ9HHfQ0KOMyDBcY/UWa0FM=; b=sXrvu5EXeDko5HadKuyZCygNq0/yejJpWVMb0lVyG5CbJznCbPg5x5tMJRknEzRWlP fgJIAJ9ZSYl0qWmLiQzLpT+pG8iyKafvt+hC0FTG1GoFEHBM9DWrr74htamdG/U6o2Gn aNb0Yyl8c2ezyoFg+KmGHhgQTbJULaTkWO6NFyI9z/mPOJOWcKJD+IMXRp8IqAi3T+na X45gfgw3SUKnFjrwyPMN31uQB7kF2KGG/1v3JrupdcHj7Xj+B2AW2UN/YCAsNO47u1Y3 t4fdHpjVHgWwiHgbDEjhZVBpLMthVw/wYlekIaCZwGarExU6nyeCZcLxVPXUdcbzgaqK 9lkw== X-Gm-Message-State: AGi0PuaPZt9SXkUUsvBA0sSvDACAEoZ4zIHUHekrII16+X8nkwmolhiR ITx/qrpp4R2dYQr78VfkPQL2zuCbbEs= X-Google-Smtp-Source: APiQypIH8TT4Mo/JtXG1WW1BaePDVhDF3SmNndTfdrJ2kvxBIrAIq1DPQFUWi44O9RM1bynR8pDrQMPxfKA= X-Received: by 2002:a25:cad6:: with SMTP id a205mr31258360ybg.377.1588144904760; Wed, 29 Apr 2020 00:21:44 -0700 (PDT) Date: Wed, 29 Apr 2020 07:21:20 +0000 In-Reply-To: <20200429072121.50094-1-satyat@google.com> Message-Id: <20200429072121.50094-12-satyat@google.com> Mime-Version: 1.0 References: <20200429072121.50094-1-satyat@google.com> X-Mailer: git-send-email 2.26.2.303.gf8c07b1a785-goog Subject: [PATCH v11 11/12] f2fs: add inline encryption support From: Satya Tangirala To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-ext4@vger.kernel.org Cc: Barani Muthukumaran , Kuohong Wang , Kim Boojin , Satya Tangirala , Eric Biggers Sender: linux-ext4-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-ext4@vger.kernel.org Wire up f2fs to support inline encryption via the helper functions which fs/crypto/ now provides. This includes: - Adding a mount option 'inlinecrypt' which enables inline encryption on encrypted files where it can be used. - Setting the bio_crypt_ctx on bios that will be submitted to an inline-encrypted file. - Not adding logically discontiguous data to bios that will be submitted to an inline-encrypted file. - Not doing filesystem-layer crypto on inline-encrypted files. Co-developed-by: Eric Biggers Signed-off-by: Eric Biggers Signed-off-by: Satya Tangirala --- Documentation/filesystems/f2fs.rst | 7 ++- fs/f2fs/compress.c | 2 +- fs/f2fs/data.c | 68 +++++++++++++++++++++++++----- fs/f2fs/super.c | 32 ++++++++++++++ 4 files changed, 97 insertions(+), 12 deletions(-) diff --git a/Documentation/filesystems/f2fs.rst b/Documentation/filesystems/f2fs.rst index 87d794bc75a47..e0e0353f8a498 100644 --- a/Documentation/filesystems/f2fs.rst +++ b/Documentation/filesystems/f2fs.rst @@ -254,7 +254,12 @@ compress_extension=%s Support adding specified extension, so that f2fs can enab on compression extension list and enable compression on these file by default rather than to enable it via ioctl. For other files, we can still enable compression via ioctl. -====================== ============================================================ +inlinecrypt + Encrypt/decrypt the contents of encrypted files using the + blk-crypto framework rather than filesystem-layer encryption. + This allows the use of inline encryption hardware. 
The on-disk + format is unaffected. For more details, see + Documentation/block/inline-encryption.rst. Debugfs Entries =============== diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c index df7b2d15eacde..a19c093711a68 100644 --- a/fs/f2fs/compress.c +++ b/fs/f2fs/compress.c @@ -975,7 +975,7 @@ static int f2fs_write_compressed_pages(struct compress_ctx *cc, .submitted = false, .io_type = io_type, .io_wbc = wbc, - .encrypted = f2fs_encrypted_file(cc->inode), + .encrypted = fscrypt_inode_uses_fs_layer_crypto(cc->inode), }; struct dnode_of_data dn; struct node_info ni; diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c index cdf2f626bea7a..0dfa8d3361428 100644 --- a/fs/f2fs/data.c +++ b/fs/f2fs/data.c @@ -14,6 +14,7 @@ #include #include #include +#include #include #include #include @@ -457,6 +458,33 @@ static struct bio *__bio_alloc(struct f2fs_io_info *fio, int npages) return bio; } +static void f2fs_set_bio_crypt_ctx(struct bio *bio, const struct inode *inode, + pgoff_t first_idx, + const struct f2fs_io_info *fio, + gfp_t gfp_mask) +{ + /* + * The f2fs garbage collector sets ->encrypted_page when it wants to + * read/write raw data without encryption. + */ + if (!fio || !fio->encrypted_page) + fscrypt_set_bio_crypt_ctx(bio, inode, first_idx, gfp_mask); +} + +static bool f2fs_crypt_mergeable_bio(struct bio *bio, const struct inode *inode, + pgoff_t next_idx, + const struct f2fs_io_info *fio) +{ + /* + * The f2fs garbage collector sets ->encrypted_page when it wants to + * read/write raw data without encryption. + */ + if (fio && fio->encrypted_page) + return !bio_has_crypt_ctx(bio); + + return fscrypt_mergeable_bio(bio, inode, next_idx); +} + static inline void __submit_bio(struct f2fs_sb_info *sbi, struct bio *bio, enum page_type type) { @@ -653,6 +681,9 @@ int f2fs_submit_page_bio(struct f2fs_io_info *fio) /* Allocate a new bio */ bio = __bio_alloc(fio, 1); + f2fs_set_bio_crypt_ctx(bio, fio->page->mapping->host, + fio->page->index, fio, GFP_NOIO); + if (bio_add_page(bio, page, PAGE_SIZE, 0) < PAGE_SIZE) { bio_put(bio); return -EFAULT; @@ -841,12 +872,16 @@ int f2fs_merge_page_bio(struct f2fs_io_info *fio) trace_f2fs_submit_page_bio(page, fio); f2fs_trace_ios(fio, 0); - if (bio && !page_is_mergeable(fio->sbi, bio, *fio->last_block, - fio->new_blkaddr)) + if (bio && (!page_is_mergeable(fio->sbi, bio, *fio->last_block, + fio->new_blkaddr) || + !f2fs_crypt_mergeable_bio(bio, fio->page->mapping->host, + fio->page->index, fio))) f2fs_submit_merged_ipu_write(fio->sbi, &bio, NULL); alloc_new: if (!bio) { bio = __bio_alloc(fio, BIO_MAX_PAGES); + f2fs_set_bio_crypt_ctx(bio, fio->page->mapping->host, + fio->page->index, fio, GFP_NOIO); bio_set_op_attrs(bio, fio->op, fio->op_flags); add_bio_entry(fio->sbi, bio, page, fio->temp); @@ -903,8 +938,11 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio) inc_page_count(sbi, WB_DATA_TYPE(bio_page)); - if (io->bio && !io_is_mergeable(sbi, io->bio, io, fio, - io->last_block_in_bio, fio->new_blkaddr)) + if (io->bio && + (!io_is_mergeable(sbi, io->bio, io, fio, io->last_block_in_bio, + fio->new_blkaddr) || + !f2fs_crypt_mergeable_bio(io->bio, fio->page->mapping->host, + fio->page->index, fio))) __submit_merged_bio(io); alloc_new: if (io->bio == NULL) { @@ -916,6 +954,8 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio) goto skip; } io->bio = __bio_alloc(fio, BIO_MAX_PAGES); + f2fs_set_bio_crypt_ctx(io->bio, fio->page->mapping->host, + fio->page->index, fio, GFP_NOIO); io->fio = *fio; } @@ -960,11 +1000,14 @@ static struct bio *f2fs_grab_read_bio(struct 
inode *inode, block_t blkaddr, for_write); if (!bio) return ERR_PTR(-ENOMEM); + + f2fs_set_bio_crypt_ctx(bio, inode, first_idx, NULL, GFP_NOFS); + f2fs_target_device(sbi, blkaddr, bio); bio->bi_end_io = f2fs_read_end_io; bio_set_op_attrs(bio, REQ_OP_READ, op_flag); - if (f2fs_encrypted_file(inode)) + if (fscrypt_inode_uses_fs_layer_crypto(inode)) post_read_steps |= 1 << STEP_DECRYPT; if (f2fs_compressed_file(inode)) post_read_steps |= 1 << STEP_DECOMPRESS; @@ -1988,8 +2031,9 @@ static int f2fs_read_single_page(struct inode *inode, struct page *page, * This page will go to BIO. Do we need to send this * BIO off first? */ - if (bio && !page_is_mergeable(F2FS_I_SB(inode), bio, - *last_block_in_bio, block_nr)) { + if (bio && (!page_is_mergeable(F2FS_I_SB(inode), bio, + *last_block_in_bio, block_nr) || + !f2fs_crypt_mergeable_bio(bio, inode, page->index, NULL))) { submit_and_realloc: __submit_bio(F2FS_I_SB(inode), bio, DATA); bio = NULL; @@ -2117,8 +2161,9 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret, blkaddr = data_blkaddr(dn.inode, dn.node_page, dn.ofs_in_node + i + 1); - if (bio && !page_is_mergeable(sbi, bio, - *last_block_in_bio, blkaddr)) { + if (bio && (!page_is_mergeable(sbi, bio, + *last_block_in_bio, blkaddr) || + !f2fs_crypt_mergeable_bio(bio, inode, page->index, NULL))) { submit_and_realloc: __submit_bio(sbi, bio, DATA); bio = NULL; @@ -2337,6 +2382,9 @@ int f2fs_encrypt_one_page(struct f2fs_io_info *fio) /* wait for GCed page writeback via META_MAPPING */ f2fs_wait_on_block_writeback(inode, fio->old_blkaddr); + if (fscrypt_inode_uses_inline_crypto(inode)) + return 0; + retry_encrypt: fio->encrypted_page = fscrypt_encrypt_pagecache_blocks(page, PAGE_SIZE, 0, gfp_flags); @@ -2510,7 +2558,7 @@ int f2fs_do_write_data_page(struct f2fs_io_info *fio) f2fs_unlock_op(fio->sbi); err = f2fs_inplace_write_data(fio); if (err) { - if (f2fs_encrypted_file(inode)) + if (fscrypt_inode_uses_fs_layer_crypto(inode)) fscrypt_finalize_bounce_page(&fio->encrypted_page); if (PageWriteback(page)) end_page_writeback(page); diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c index f2dfc21c6abb0..8ccda50f28888 100644 --- a/fs/f2fs/super.c +++ b/fs/f2fs/super.c @@ -138,6 +138,7 @@ enum { Opt_alloc, Opt_fsync, Opt_test_dummy_encryption, + Opt_inlinecrypt, Opt_checkpoint_disable, Opt_checkpoint_disable_cap, Opt_checkpoint_disable_cap_perc, @@ -203,6 +204,7 @@ static match_table_t f2fs_tokens = { {Opt_alloc, "alloc_mode=%s"}, {Opt_fsync, "fsync_mode=%s"}, {Opt_test_dummy_encryption, "test_dummy_encryption"}, + {Opt_inlinecrypt, "inlinecrypt"}, {Opt_checkpoint_disable, "checkpoint=disable"}, {Opt_checkpoint_disable_cap, "checkpoint=disable:%u"}, {Opt_checkpoint_disable_cap_perc, "checkpoint=disable:%u%%"}, @@ -788,6 +790,13 @@ static int parse_options(struct super_block *sb, char *options) f2fs_info(sbi, "Test dummy encryption mode enabled"); #else f2fs_info(sbi, "Test dummy encryption mount option ignored"); +#endif + break; + case Opt_inlinecrypt: +#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT + sb->s_flags |= SB_INLINECRYPT; +#else + f2fs_info(sbi, "inline encryption not supported"); #endif break; case Opt_checkpoint_disable_cap_perc: @@ -1583,6 +1592,8 @@ static void default_options(struct f2fs_sb_info *sbi) F2FS_OPTION(sbi).compress_ext_cnt = 0; F2FS_OPTION(sbi).bggc_mode = BGGC_MODE_ON; + sbi->sb->s_flags &= ~SB_INLINECRYPT; + set_opt(sbi, INLINE_XATTR); set_opt(sbi, INLINE_DATA); set_opt(sbi, INLINE_DENTRY); @@ -2427,6 +2438,25 @@ static void f2fs_get_ino_and_lblk_bits(struct super_block 
*sb, *lblk_bits_ret = 8 * sizeof(block_t); } +static int f2fs_get_num_devices(struct super_block *sb) +{ + struct f2fs_sb_info *sbi = F2FS_SB(sb); + + if (f2fs_is_multi_device(sbi)) + return sbi->s_ndevs; + return 1; +} + +static void f2fs_get_devices(struct super_block *sb, + struct request_queue **devs) +{ + struct f2fs_sb_info *sbi = F2FS_SB(sb); + int i; + + for (i = 0; i < sbi->s_ndevs; i++) + devs[i] = bdev_get_queue(FDEV(i).bdev); +} + static const struct fscrypt_operations f2fs_cryptops = { .key_prefix = "f2fs:", .get_context = f2fs_get_context, @@ -2436,6 +2466,8 @@ static const struct fscrypt_operations f2fs_cryptops = { .max_namelen = F2FS_NAME_LEN, .has_stable_inodes = f2fs_has_stable_inodes, .get_ino_and_lblk_bits = f2fs_get_ino_and_lblk_bits, + .get_num_devices = f2fs_get_num_devices, + .get_devices = f2fs_get_devices, }; #endif From patchwork Wed Apr 29 07:21:21 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 1279035 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-ext4-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=google.com header.i=@google.com header.a=rsa-sha256 header.s=20161025 header.b=IRg1v3bk; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 49BqhG5Yqqz9sT9 for ; Wed, 29 Apr 2020 17:21:50 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726662AbgD2HVt (ORCPT ); Wed, 29 Apr 2020 03:21:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50856 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726787AbgD2HVs (ORCPT ); Wed, 29 Apr 2020 03:21:48 -0400 Received: from mail-qv1-xf49.google.com (mail-qv1-xf49.google.com [IPv6:2607:f8b0:4864:20::f49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 19820C09B041 for ; Wed, 29 Apr 2020 00:21:47 -0700 (PDT) Received: by mail-qv1-xf49.google.com with SMTP id ev8so1639686qvb.7 for ; Wed, 29 Apr 2020 00:21:47 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=nrXg6HoaEfJW47Krpfjkr1jvIwQFw7yNAKlYTOw4AZ4=; b=IRg1v3bkQPaMm1bZ6+s65eTCidvU/jYjEuEWcWbWafzp0poG/wEiD4Ft4RoCUdZMPu 79C32TUuBdtm+P6p0c6RwIpm03qjox62MerfZoSUrA1g4D8CyOK5sGbQtSRjOc3+JDnw DDjx7rAeZkULVd23KO/4MMDalNfA+JWnrdVHReT89uSL88OwAEBN9cBSNnkRggeX5dPI Q/cDBs+97wLfjmqs5Z/nRF5QO9FEcaONFR+PPYdYLXhL/glHZhnNqmu3MfhMGi6BmTMC AF5ZJ9FKJoIKGB4hRy5lrRohRS0HSS9QpWgmdzKCxRU16QYLYueI6F6d1/t+E/1bjY4z tR2g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=nrXg6HoaEfJW47Krpfjkr1jvIwQFw7yNAKlYTOw4AZ4=; b=VbscqYAbpQim50HijP5PzsJjkzuFwttsVpmwwjZfkUUOWVcWA8DvZZldUAI5jZ12Pw 0l57BVKHQawc/2GUDoQbJFfs99SDM7puzBqvMp7pioIvHSZq4+XWqev/fl2Pz+ZrNSl9 iWcpzFiQxgxmbIou7JesfBvngs16E06G2eXvZg6oW2IykNUIE+sihYH+Lc/H3XAwJXWZ da4xooP6GS0Qf/9rH9beAOdo4FQ9q/1hs4MTx8nCYXL/OAIDea9dk6FvoKyFTPeN3ffv 
iFG/4pZjfje+u2MiHvLn2oGBSb6XzSaXIo0jNvJn08qFjXI8wzirxc+Ivvdotl+zn2ZZ Blig== X-Gm-Message-State: AGi0PuYSS75wEOrzrdrCvuGI2O02ynBaHv67iaxUfzN9jOyE8qvO1wLY 11tsTcx6wfRvQjtAqMc45mMWV5Y5Y/g= X-Google-Smtp-Source: APiQypJfveT2TrkkHHYQXIOi8wheNbifPDA1jNMuT2FuGwuwC69VYSVmjmCPS5XXvMU7xehqJOY5ZReVKmc= X-Received: by 2002:a0c:f012:: with SMTP id z18mr31914576qvk.42.1588144906176; Wed, 29 Apr 2020 00:21:46 -0700 (PDT) Date: Wed, 29 Apr 2020 07:21:21 +0000 In-Reply-To: <20200429072121.50094-1-satyat@google.com> Message-Id: <20200429072121.50094-13-satyat@google.com> Mime-Version: 1.0 References: <20200429072121.50094-1-satyat@google.com> X-Mailer: git-send-email 2.26.2.303.gf8c07b1a785-goog Subject: [PATCH v11 12/12] ext4: add inline encryption support From: Satya Tangirala To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-ext4@vger.kernel.org Cc: Barani Muthukumaran , Kuohong Wang , Kim Boojin , Eric Biggers , Satya Tangirala Sender: linux-ext4-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-ext4@vger.kernel.org From: Eric Biggers Wire up ext4 to support inline encryption via the helper functions which fs/crypto/ now provides. This includes: - Adding a mount option 'inlinecrypt' which enables inline encryption on encrypted files where it can be used. - Setting the bio_crypt_ctx on bios that will be submitted to an inline-encrypted file. Note: submit_bh_wbc() in fs/buffer.c also needed to be patched for this part, since ext4 sometimes uses ll_rw_block() on file data. - Not adding logically discontiguous data to bios that will be submitted to an inline-encrypted file. - Not doing filesystem-layer crypto on inline-encrypted files. Signed-off-by: Eric Biggers Co-developed-by: Satya Tangirala Signed-off-by: Satya Tangirala --- Documentation/admin-guide/ext4.rst | 6 ++++++ fs/buffer.c | 7 ++++--- fs/ext4/inode.c | 4 ++-- fs/ext4/page-io.c | 6 ++++-- fs/ext4/readpage.c | 11 ++++++++--- fs/ext4/super.c | 9 +++++++++ 6 files changed, 33 insertions(+), 10 deletions(-) diff --git a/Documentation/admin-guide/ext4.rst b/Documentation/admin-guide/ext4.rst index 9443fcef18760..ed997e3766781 100644 --- a/Documentation/admin-guide/ext4.rst +++ b/Documentation/admin-guide/ext4.rst @@ -395,6 +395,12 @@ When mounting an ext4 filesystem, the following option are accepted: Documentation/filesystems/dax.txt. Note that this option is incompatible with data=journal. + inlinecrypt + Encrypt/decrypt the contents of encrypted files using the blk-crypto + framework rather than filesystem-layer encryption. This allows the use + of inline encryption hardware. The on-disk format is unaffected. For + more details, see Documentation/block/inline-encryption.rst. 
+ Data Mode ========= There are 3 different data modes: diff --git a/fs/buffer.c b/fs/buffer.c index a60f60396cfa0..33827a55b5952 100644 --- a/fs/buffer.c +++ b/fs/buffer.c @@ -328,9 +328,8 @@ static void decrypt_bh(struct work_struct *work) static void end_buffer_async_read_io(struct buffer_head *bh, int uptodate) { /* Decrypt if needed */ - if (uptodate && IS_ENABLED(CONFIG_FS_ENCRYPTION) && - IS_ENCRYPTED(bh->b_page->mapping->host) && - S_ISREG(bh->b_page->mapping->host->i_mode)) { + if (uptodate && + fscrypt_inode_uses_fs_layer_crypto(bh->b_page->mapping->host)) { struct decrypt_bh_ctx *ctx = kmalloc(sizeof(*ctx), GFP_ATOMIC); if (ctx) { @@ -3047,6 +3046,8 @@ static int submit_bh_wbc(int op, int op_flags, struct buffer_head *bh, */ bio = bio_alloc(GFP_NOIO, 1); + fscrypt_set_bio_crypt_ctx_bh(bio, bh, GFP_NOIO); + bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9); bio_set_dev(bio, bh->b_bdev); bio->bi_write_hint = write_hint; diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index 2a4aae6acdcb9..ac20b65766ece 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -1088,7 +1088,7 @@ static int ext4_block_write_begin(struct page *page, loff_t pos, unsigned len, } if (unlikely(err)) { page_zero_new_buffers(page, from, to); - } else if (IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode)) { + } else if (fscrypt_inode_uses_fs_layer_crypto(inode)) { for (i = 0; i < nr_wait; i++) { int err2; @@ -3738,7 +3738,7 @@ static int __ext4_block_zero_page_range(handle_t *handle, /* Uhhuh. Read error. Complain and punt. */ if (!buffer_uptodate(bh)) goto unlock; - if (S_ISREG(inode->i_mode) && IS_ENCRYPTED(inode)) { + if (fscrypt_inode_uses_fs_layer_crypto(inode)) { /* We expect the key to be set. */ BUG_ON(!fscrypt_has_encryption_key(inode)); err = fscrypt_decrypt_pagecache_blocks(page, blocksize, diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c index de6fe969f7737..defd2e10dfd10 100644 --- a/fs/ext4/page-io.c +++ b/fs/ext4/page-io.c @@ -402,6 +402,7 @@ static void io_submit_init_bio(struct ext4_io_submit *io, * __GFP_DIRECT_RECLAIM is set, see comments for bio_alloc_bioset(). */ bio = bio_alloc(GFP_NOIO, BIO_MAX_PAGES); + fscrypt_set_bio_crypt_ctx_bh(bio, bh, GFP_NOIO); bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9); bio_set_dev(bio, bh->b_bdev); bio->bi_end_io = ext4_end_bio; @@ -418,7 +419,8 @@ static void io_submit_add_bh(struct ext4_io_submit *io, { int ret; - if (io->io_bio && bh->b_blocknr != io->io_next_block) { + if (io->io_bio && (bh->b_blocknr != io->io_next_block || + !fscrypt_mergeable_bio_bh(io->io_bio, bh))) { submit_and_retry: ext4_io_submit(io); } @@ -506,7 +508,7 @@ int ext4_bio_write_page(struct ext4_io_submit *io, * (e.g. holes) to be unnecessarily encrypted, but this is rare and * can't happen in the common case of blocksize == PAGE_SIZE. 
*/ - if (IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode) && nr_to_submit) { + if (fscrypt_inode_uses_fs_layer_crypto(inode) && nr_to_submit) { gfp_t gfp_flags = GFP_NOFS; unsigned int enc_bytes = round_up(len, i_blocksize(inode)); diff --git a/fs/ext4/readpage.c b/fs/ext4/readpage.c index c1769afbf7995..68eac0aeffad3 100644 --- a/fs/ext4/readpage.c +++ b/fs/ext4/readpage.c @@ -195,7 +195,7 @@ static void ext4_set_bio_post_read_ctx(struct bio *bio, { unsigned int post_read_steps = 0; - if (IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode)) + if (fscrypt_inode_uses_fs_layer_crypto(inode)) post_read_steps |= 1 << STEP_DECRYPT; if (ext4_need_verity(inode, first_idx)) @@ -232,6 +232,7 @@ int ext4_mpage_readpages(struct address_space *mapping, const unsigned blkbits = inode->i_blkbits; const unsigned blocks_per_page = PAGE_SIZE >> blkbits; const unsigned blocksize = 1 << blkbits; + sector_t next_block; sector_t block_in_file; sector_t last_block; sector_t last_block_in_file; @@ -264,7 +265,8 @@ int ext4_mpage_readpages(struct address_space *mapping, if (page_has_buffers(page)) goto confused; - block_in_file = (sector_t)page->index << (PAGE_SHIFT - blkbits); + block_in_file = next_block = + (sector_t)page->index << (PAGE_SHIFT - blkbits); last_block = block_in_file + nr_pages * blocks_per_page; last_block_in_file = (ext4_readpage_limit(inode) + blocksize - 1) >> blkbits; @@ -364,7 +366,8 @@ int ext4_mpage_readpages(struct address_space *mapping, * This page will go to BIO. Do we need to send this * BIO off first? */ - if (bio && (last_block_in_bio != blocks[0] - 1)) { + if (bio && (last_block_in_bio != blocks[0] - 1 || + !fscrypt_mergeable_bio(bio, inode, next_block))) { submit_and_realloc: submit_bio(bio); bio = NULL; @@ -376,6 +379,8 @@ int ext4_mpage_readpages(struct address_space *mapping, */ bio = bio_alloc(GFP_KERNEL, min_t(int, nr_pages, BIO_MAX_PAGES)); + fscrypt_set_bio_crypt_ctx(bio, inode, next_block, + GFP_KERNEL); ext4_set_bio_post_read_ctx(bio, inode, page->index); bio_set_dev(bio, bdev); bio->bi_iter.bi_sector = blocks[0] << (blkbits - 9); diff --git a/fs/ext4/super.c b/fs/ext4/super.c index bf5fcb477f667..fb4a293cac0c3 100644 --- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -1509,6 +1509,7 @@ enum { Opt_journal_path, Opt_journal_checksum, Opt_journal_async_commit, Opt_abort, Opt_data_journal, Opt_data_ordered, Opt_data_writeback, Opt_data_err_abort, Opt_data_err_ignore, Opt_test_dummy_encryption, + Opt_inlinecrypt, Opt_usrjquota, Opt_grpjquota, Opt_offusrjquota, Opt_offgrpjquota, Opt_jqfmt_vfsold, Opt_jqfmt_vfsv0, Opt_jqfmt_vfsv1, Opt_quota, Opt_noquota, Opt_barrier, Opt_nobarrier, Opt_err, @@ -1606,6 +1607,7 @@ static const match_table_t tokens = { {Opt_noinit_itable, "noinit_itable"}, {Opt_max_dir_size_kb, "max_dir_size_kb=%u"}, {Opt_test_dummy_encryption, "test_dummy_encryption"}, + {Opt_inlinecrypt, "inlinecrypt"}, {Opt_nombcache, "nombcache"}, {Opt_nombcache, "no_mbcache"}, /* for backward compatibility */ {Opt_removed, "check=none"}, /* mount option from ext2/3 */ @@ -1893,6 +1895,13 @@ static int handle_mount_opt(struct super_block *sb, char *opt, int token, case Opt_nolazytime: sb->s_flags &= ~SB_LAZYTIME; return 1; + case Opt_inlinecrypt: +#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT + sb->s_flags |= SB_INLINECRYPT; +#else + ext4_msg(sb, KERN_ERR, "inline encryption not supported"); +#endif + return 1; } for (m = ext4_mount_opts; m->token != Opt_err; m++)