sha2: new header <sha2.h>

Message ID 1427529610-3508728-1-git-send-email-shawn@churchofgit.com
State New

Commit Message

Shawn Landden March 28, 2015, 8 a.m. UTC
Export the SHA2 family of functions in -lcrypt.

We already have these functions for crypt(), and many projects spend much
effort reimplementing them. The most popular library, OpenSSL, also has infamous
licensing issues and is overkill when AES is not needed. OpenSSL is also the only
library I know of that supports SHA2 CPU extensions.

Passes all existing tests. (We need new ones.)

v2: remove align.h, replace with memcpy()
    respond to style issues
    do not mess with NSS
    Add some documentation (I also have a full-featured man page written for Linux
       man-pages)
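
For context, a sketch of how the exported interface (declared in crypt/sha2.h
below) would be called; this is illustrative only, since the API itself is
what this patch proposes:

```c
#include <sha2.h>

unsigned char md[32];
sha256_ctx ctx;

/* One-shot convenience function; writes 32 bytes to md.  */
sha256 ("abc", 3, md);

/* Equivalent streaming interface.  */
sha256_init (&ctx);
sha256_update (&ctx, "abc", 3);
sha256_finish (&ctx, md);
```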
---
 ChangeLog                                          |  19 ++++
 crypt/Makefile                                     |   6 +-
 crypt/Versions                                     |   4 +
 crypt/sha2.h                                       |  65 +++++++++++++
 crypt/sha256-block.c                               |   4 +-
 crypt/sha256-crypt.c                               |   6 +-
 crypt/sha256.c                                     | 102 +++++++++++----------
 crypt/sha256.h                                     |  23 +++--
 crypt/sha256test.c                                 |  11 +--
 crypt/sha512-block.c                               |   4 +-
 crypt/sha512-crypt.c                               |   6 +-
 crypt/sha512.c                                     |  95 +++++++++++--------
 crypt/sha512.h                                     |  22 +++--
 crypt/sha512test.c                                 |   9 +-
 manual/crypt.texi                                  |  26 ++++++
 sysdeps/unix/sysv/linux/aarch64/libcrypt.abilist   |  12 +++
 sysdeps/unix/sysv/linux/alpha/libcrypt.abilist     |  12 +++
 sysdeps/unix/sysv/linux/arm/libcrypt.abilist       |  12 +++
 sysdeps/unix/sysv/linux/hppa/libcrypt.abilist      |  12 +++
 sysdeps/unix/sysv/linux/i386/libcrypt.abilist      |  12 +++
 sysdeps/unix/sysv/linux/ia64/libcrypt.abilist      |  12 +++
 .../unix/sysv/linux/m68k/coldfire/libcrypt.abilist |  12 +++
 .../unix/sysv/linux/m68k/m680x0/libcrypt.abilist   |  12 +++
 .../unix/sysv/linux/microblaze/libcrypt.abilist    |  12 +++
 .../unix/sysv/linux/mips/mips32/libcrypt.abilist   |  13 +++
 .../unix/sysv/linux/mips/mips64/libcrypt.abilist   |  12 +++
 sysdeps/unix/sysv/linux/nios2/libcrypt.abilist     |  12 +++
 .../sysv/linux/powerpc/powerpc32/libcrypt.abilist  |  12 +++
 .../sysv/linux/powerpc/powerpc64/libcrypt.abilist  |  12 +++
 .../unix/sysv/linux/s390/s390-32/libcrypt.abilist  |  12 +++
 .../unix/sysv/linux/s390/s390-64/libcrypt.abilist  |  12 +++
 sysdeps/unix/sysv/linux/sh/libcrypt.abilist        |  12 +++
 .../unix/sysv/linux/sparc/sparc32/libcrypt.abilist |  12 +++
 .../unix/sysv/linux/sparc/sparc64/libcrypt.abilist |  12 +++
 .../linux/tile/tilegx/tilegx32/libcrypt.abilist    |  12 +++
 .../linux/tile/tilegx/tilegx64/libcrypt.abilist    |  12 +++
 .../unix/sysv/linux/tile/tilepro/libcrypt.abilist  |  12 +++
 sysdeps/unix/sysv/linux/x86_64/64/libcrypt.abilist |  12 +++
 .../unix/sysv/linux/x86_64/x32/libcrypt.abilist    |  12 +++
 39 files changed, 566 insertions(+), 125 deletions(-)
 create mode 100644 crypt/sha2.h

Comments

Florian Weimer April 1, 2015, 10:22 p.m. UTC | #1
On 03/28/2015 09:00 AM, Shawn Landden wrote:
> Export the SHA2 family of functions in -lcrypt.
> 
> We already have these functions for crypt(), and many projects spend much
> effort reimplementing them. The most popular library, OpenSSL, also has infamous
> licensing issues and is overkill when AES is not needed. OpenSSL is also the only
> library I know of that supports SHA2 CPU extensions.

I don't think this should go into libcrypt.  We might need it inside
libc for a future PRNG.

The proposed API is likely incompatible with some forms of hardware
acceleration.  I already told you that you need to add
allocation/deallocation functions.  There is a reason why OpenSSL only
provides hardware acceleration with the EVP API.

In any case, this needs much more discussion and research before we can
commit to a stable API.
Rich Felker April 2, 2015, 12:43 a.m. UTC | #2
On Thu, Apr 02, 2015 at 12:22:28AM +0200, Florian Weimer wrote:
> On 03/28/2015 09:00 AM, Shawn Landden wrote:
> > Export the SHA2 family of functions in -lcrypt.
> > 
> > We already have these functions for crypt(), and many projects spend much
> > effort reimplementing them. The most popular library, OpenSSL, also has infamous
> > licensing issues and is overkill when AES is not needed. OpenSSL is also the only
> > library I know of that supports SHA2 CPU extensions.
> 
> I don't think this should go into libcrypt.  We might need it inside
> libc for a future PRNG.
> 
> The proposed API is likely incompatible with some forms of hardware
> acceleration.  I already told you that you need to add
> allocation/deallocation functions.  There is a reason why OpenSSL only
> provides hardware acceleration with the EVP API.

I'm opposed to allocation/deallocation functions. They have no benefit
and make the API largely useless since you can't rely on it --
allocation could always fail, and then you're better off just writing
your own version of SHA2 that can't fail. Allocation also means you
can't use it in an async signal context.

> In any case, this needs much more discussion and research before we can
> commit to a stable API.

I agree. This is not something that should be rushed. If it's going to
be considered, there should be input from multiple libc implementors
(on multiple systems, e.g. also BSDs) as to how the API should work.
Otherwise it's just going to be something non-portable that nobody can
use (or worse, have different APIs/behaviors on different systems).

Rich
Florian Weimer April 2, 2015, 8 a.m. UTC | #3
On 04/02/2015 02:43 AM, Rich Felker wrote:

> I'm opposed to allocation/deallocation functions. They have no benefit

They have the benefit that the library can ensure the required
alignment, instead of the caller having to provide it.

There might be hardware implementations which require additional
alignment, to the degree that it cannot be satisfied with the current
spare space in the buffer.

The non-opaque struct also gives the impression that you can copy the struct
to implement HMACs, or that the struct contents are portable across
process invocations, which is something we should not promise.

> Allocation also means you
> can't use it in an async signal context.

As far as I can tell, the functions have not been proposed as
async-signal-safe, so this does not matter.

> I agree. This is not something that should be rushed. If it's going to
> be considered, there should be input from multiple libc implementors
> (on multiple systems, e.g. also BSDs) as to how the API should work.

I would rather look at crypto libraries for inspiration.
Rich Felker April 2, 2015, 3:17 p.m. UTC | #4
On Thu, Apr 02, 2015 at 10:00:25AM +0200, Florian Weimer wrote:
> On 04/02/2015 02:43 AM, Rich Felker wrote:
> 
> > I'm opposed to allocation/deallocation functions. They have no benefit
> 
> They have the benefit that the library can ensure the required
> alignment, instead of the caller having to provide it.

Then just make the caller-provided buffer twice the size it "should"
be. Then the callee can use the actual embedded buffer with any
alignment it wants.

> There might be hardware implementations which require additional
> alignment, to the degree that it cannot be satisfied with the current
> spare space in the buffer.
> 
> The non-opaque struct also gives the impression that you can copy the struct
> to implement HMACs, or that the struct contents are portable across
> process invocations, which is something we should not promise.

There's no reason to give that impression. The contents can be clearly
opaque, e.g.:

	long __opaque[NNN];

> > Allocation also means you
> > can't use it in an async signal context.
> 
> As far as I can tell, the functions have not been proposed as
> async-signal-safe, so this does not matter.

I agree this is a less important matter than breaking the no-fail
contract users will want/need, but I thought it was noteworthy still.

Rich
Shawn Landden April 8, 2015, 10:33 p.m. UTC | #5
On Thu, Apr 02, 2015 at 12:22:28AM +0200, Florian Weimer wrote:
> On 03/28/2015 09:00 AM, Shawn Landden wrote:
> > Export the SHA2 family of functions in -lcrypt.
> > 
> > We already have these functions for crypt(), and many projects and spending much
> > effort reimplementing them. The most popular library, openSSL, also has infamous
> > licensing issues and is way overkill when AES is not needed. OpenSSL is the only
> > library I know of that support SHA2 cpu extensions as well.
> 
> I don't think this should go into libcrypt.  We might need it inside
> libc for a future PRNG.
> 
> The proposed API is likely incompatible with some forms of hardware
> acceleration.  I already told you that you need to add
> allocation/deallocation functions.  There is a reason why OpenSSL only
> provides hardware acceleration with the EVP API.
An allocation/deallocation scheme would certainly be required to use AF_ALG,
for example (to get access to hardware acceleration that is only accessible
from kernel space), but unless the processor is really slow, or the work is
done asynchronously, that will almost assuredly be slower because of the
context switches (this wouldn't be the case with AES, but we are not
implementing that), except in the special case of using it with sendfile
(which has always been broken). There is currently no way to know whether
AF_ALG hardware acceleration is available, and I don't think AF_ALG should be
used for hashes. (It is currently only supported in OpenSSL by an out-of-tree
module.)

SHA256 only requires 32-bit alignment, and SHA512 64-bit alignment,
for the "real" part of the ctx (H[8]). The buffers used in libcrypt are
already twice as large as they need to be, and hardware implementations are
free not to update the ctx during large *update() calls, only at the end.
Fully aligned update() calls will not use or access the buffer at all.
I really don't see allocate/deallocate functions as beneficial.
> 
> In any case, this needs much more discussion and research before we can
> commit to a stable API.
> 
> -- 
> Florian Weimer / Red Hat Product Security

Patch

diff --git a/ChangeLog b/ChangeLog
index 4a5cd16..f44c792 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,22 @@ 
+2015-03-27  Shawn Landden  <shawn@churchofgit.com>
+
+	* crypt/sha2.h, crypt/Versions: new header for -lcrypt, with new exports.
+	* crypt/sha256.c, crypt/sha512.c, crypt/sha256.h, crypt/sha512.h:
+	New functions: sha384_finish(), sha224_finish(), sha256(), sha512()
+	*_finish() functions no longer require an aligned digest (use memcpy).
+	*_finish() functions now return pointer to digest.
+	sha256_update(), sha512_update() now return pointer to data parameter.
+	Rename struct sha256_ctx => sha256_ctx, and struct sha512_ctx => sha512_ctx.
+	Move sha256_ctx.buflen after sha256_ctx.buffer so that sha256_ctx.buffer
+	  is 128-bit aligned.
+	Add padding to sha512_ctx so that sha512_ctx.buffer is 128-bit aligned.
+	(sha256_update, sha512_update, sha256_finish, sha224_finish, sha512_finish,
+	  sha384_finish): make pointer arguments __restrict.
+	(__sha256_process_bytes, __sha512_process_bytes): change parameter order to match
+	  sha256_update(), sha512_update().
+	* sysdeps: Add to ABI.
+	* manual/crypt.texi: Add documentation.
+
 2015-03-08  Paul Pluzhnikov  <ppluzhnikov@google.com>
 
 	[BZ #16734]
diff --git a/crypt/Makefile b/crypt/Makefile
index 34c4dd7..3f839fc 100644
--- a/crypt/Makefile
+++ b/crypt/Makefile
@@ -22,13 +22,13 @@  subdir	:= crypt
 
 include ../Makeconfig
 
-headers := crypt.h
+headers := crypt.h sha2.h
 
 extra-libs := libcrypt
 extra-libs-others := $(extra-libs)
 
 libcrypt-routines := crypt-entry md5-crypt sha256-crypt sha512-crypt crypt \
-		     crypt_util
+		     crypt_util sha256 sha512
 
 tests := cert md5c-test sha256c-test sha512c-test badsalttest
 
@@ -42,7 +42,7 @@  CPPFLAGS-sha512-crypt.c = -DUSE_NSS -I$(shell nss-config --includedir)
 CPPFLAGS-md5-crypt.c = -DUSE_NSS -I$(shell nss-config --includedir)
 LDLIBS-crypt.so = -lfreebl3
 else
-libcrypt-routines += md5 sha256 sha512
+libcrypt-routines += md5
 
 tests += md5test sha256test sha512test
 
diff --git a/crypt/Versions b/crypt/Versions
index 389e7d5..69e06d9 100644
--- a/crypt/Versions
+++ b/crypt/Versions
@@ -2,4 +2,8 @@  libcrypt {
   GLIBC_2.0 {
     crypt; crypt_r; encrypt; encrypt_r; fcrypt; setkey; setkey_r;
   }
+  GLIBC_2.22 {
+    sha256; sha256_init; sha256_update; sha256_finish; sha224_finish;
+    sha512; sha512_init; sha512_update; sha512_finish; sha384_finish;
+  }
 }
diff --git a/crypt/sha2.h b/crypt/sha2.h
new file mode 100644
index 0000000..e768f04
--- /dev/null
+++ b/crypt/sha2.h
@@ -0,0 +1,65 @@ 
+/*
+ * SHA2: The SHA2 family of cryptographic functions
+ *
+ * Copyright (C) 2015 Free Software Foundation, Inc.
+ *
+ * The GNU C Library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * The GNU C Library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with the GNU C Library; if not, see
+ * <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _SHA2_H
+#define _SHA2_H
+
+#include <unistd.h>
+
+typedef struct {
+  char __internal_state[176];
+} sha256_ctx __attribute__((__aligned__(16)));
+
+/* Convenience function. Returns the 3rd argument.  */
+void *sha256(const void *__restrict, size_t, void *__restrict);
+
+void sha256_init(sha256_ctx *);
+
+/* Returns the second argument.  */
+const void *sha256_update(sha256_ctx *__restrict, const void *__restrict, size_t);
+
+/* Returns the second argument.  */
+void *sha256_finish(sha256_ctx *__restrict, void *__restrict);
+
+/* Returns the second argument. Note sha224 is just a truncated sha256.  */
+void *sha224_finish(sha256_ctx *__restrict, void *__restrict);
+
+
+typedef struct {
+  char __internal_state[352];
+} sha512_ctx __attribute__((__aligned__(16)));
+
+/* Convenience function. Returns the 3rd argument.  */
+void *sha512(const void *__restrict, size_t, void *__restrict);
+
+void sha512_init(sha512_ctx *);
+
+/* Returns the second argument.  */
+const void *sha512_update(sha512_ctx *__restrict, const void *__restrict, size_t);
+
+/* Returns the second argument.  */
+void *sha512_finish(sha512_ctx *__restrict, void *__restrict);
+
+/* Returns the second argument. Note sha384 is just a truncated sha512.
+ * sha224/512 and sha256/512 can also be generated by truncating sha512,
+ * but no convenience function is provided.  */
+void *sha384_finish(sha512_ctx *__restrict, void *__restrict);
+
+#endif
diff --git a/crypt/sha256-block.c b/crypt/sha256-block.c
index 8a77096..4fbf04b 100644
--- a/crypt/sha256-block.c
+++ b/crypt/sha256-block.c
@@ -3,7 +3,7 @@ 
 /* Process LEN bytes of BUFFER, accumulating context into CTX.
    It is assumed that LEN % 64 == 0.  */
 void
-sha256_process_block (const void *buffer, size_t len, struct sha256_ctx *ctx)
+sha256_process_block (const void *buffer, size_t len, sha256_ctx *ctx)
 {
   const uint32_t *words = buffer;
   size_t nwords = len / sizeof (uint32_t);
@@ -50,7 +50,7 @@  sha256_process_block (const void *buffer, size_t len, struct sha256_ctx *ctx)
       /* Compute the message schedule according to FIPS 180-2:6.2.2 step 2.  */
       for (unsigned int t = 0; t < 16; ++t)
 	{
-	  W[t] = SWAP (*words);
+	  W[t] = be32toh (*words);
 	  ++words;
 	}
       for (unsigned int t = 16; t < 64; ++t)
diff --git a/crypt/sha256-crypt.c b/crypt/sha256-crypt.c
index d90e291..cdefa60 100644
--- a/crypt/sha256-crypt.c
+++ b/crypt/sha256-crypt.c
@@ -68,7 +68,7 @@  typedef int PRBool;
   __sha256_init_ctx (ctxp)
 
 # define sha256_process_bytes(buf, len, ctxp, nss_ctxp) \
-  __sha256_process_bytes(buf, len, ctxp)
+  __sha256_process_bytes(ctxp, buf, len)
 
 # define sha256_finish_ctx(ctxp, nss_ctxp, result) \
   __sha256_finish_ctx (ctxp, result)
@@ -189,8 +189,8 @@  __sha256_crypt_r (key, salt, buffer, buflen)
   NSSLOWHASHContext *nss_ctx = NULL;
   NSSLOWHASHContext *nss_alt_ctx = NULL;
 #else
-  struct sha256_ctx ctx;
-  struct sha256_ctx alt_ctx;
+  sha256_ctx ctx;
+  sha256_ctx alt_ctx;
 #endif
 
   /* Prepare for the real work.  */
diff --git a/crypt/sha256.c b/crypt/sha256.c
index b6db8b2..a2f5bf1 100644
--- a/crypt/sha256.c
+++ b/crypt/sha256.c
@@ -31,30 +31,6 @@ 
 
 #include "sha256.h"
 
-#if __BYTE_ORDER == __LITTLE_ENDIAN
-# ifdef _LIBC
-#  include <byteswap.h>
-#  define SWAP(n) bswap_32 (n)
-#  define SWAP64(n) bswap_64 (n)
-# else
-#  define SWAP(n) \
-    (((n) << 24) | (((n) & 0xff00) << 8) | (((n) >> 8) & 0xff00) | ((n) >> 24))
-#  define SWAP64(n) \
-  (((n) << 56)					\
-   | (((n) & 0xff00) << 40)			\
-   | (((n) & 0xff0000) << 24)			\
-   | (((n) & 0xff000000) << 8)			\
-   | (((n) >> 8) & 0xff000000)			\
-   | (((n) >> 24) & 0xff0000)			\
-   | (((n) >> 40) & 0xff00)			\
-   | ((n) >> 56))
-# endif
-#else
-# define SWAP(n) (n)
-# define SWAP64(n) (n)
-#endif
-
-
 /* This array contains the bytes used to pad the buffer to the next
    64-byte boundary.  (FIPS 180-2:5.1.1)  */
 static const unsigned char fillbuf[64] = { 0x80, 0 /* , 0, 0, ...  */ };
@@ -82,13 +58,13 @@  static const uint32_t K[64] =
   };
 
 void
-sha256_process_block (const void *, size_t, struct sha256_ctx *);
+sha256_process_block (const void *, size_t, sha256_ctx *);
 
 /* Initialize structure containing state of computation.
    (FIPS 180-2:5.3.2)  */
 void
 __sha256_init_ctx (ctx)
-     struct sha256_ctx *ctx;
+     sha256_ctx *ctx;
 {
   ctx->H[0] = 0x6a09e667;
   ctx->H[1] = 0xbb67ae85;
@@ -102,17 +78,10 @@  __sha256_init_ctx (ctx)
   ctx->total64 = 0;
   ctx->buflen = 0;
 }
+weak_alias(__sha256_init_ctx, sha256_init)
 
-
-/* Process the remaining bytes in the internal buffer and the usual
-   prolog according to the standard and write the result to RESBUF.
-
-   IMPORTANT: On some systems it is required that RESBUF is correctly
-   aligned for a 32 bits value.  */
-void *
-__sha256_finish_ctx (ctx, resbuf)
-     struct sha256_ctx *ctx;
-     void *resbuf;
+static void
+__sha256_finish_ctx_generic (sha256_ctx *ctx)
 {
   /* Take yet unprocessed bytes into account.  */
   uint32_t bytes = ctx->buflen;
@@ -125,30 +94,53 @@  __sha256_finish_ctx (ctx, resbuf)
   memcpy (&ctx->buffer[bytes], fillbuf, pad);
 
   /* Put the 64-bit file length in *bits* at the end of the buffer.  */
-#if _STRING_ARCH_unaligned
-  ctx->buffer64[(bytes + pad) / 8] = SWAP64 (ctx->total64 << 3);
-#else
-  ctx->buffer32[(bytes + pad + 4) / 4] = SWAP (ctx->total[TOTAL64_low] << 3);
-  ctx->buffer32[(bytes + pad) / 4] = SWAP ((ctx->total[TOTAL64_high] << 3) |
-					   (ctx->total[TOTAL64_low] >> 29));
-#endif
+  ctx->buffer64[(bytes + pad) / 8] = be64toh (ctx->total64 << 3);
 
   /* Process last bytes.  */
   sha256_process_block (ctx->buffer, bytes + pad + 8, ctx);
+}
+
+/* Process the remaining bytes in the internal buffer and the usual
+   prolog according to the standard and write the result to RESBUF.  */
+void *
+__sha256_finish_ctx (sha256_ctx *ctx, void *resbuf)
+{
+  __sha256_finish_ctx_generic (ctx);
 
-  /* Put result from CTX in first 32 bytes following RESBUF.  */
+  /* Put result from CTX in first 32 bytes following RESBUF.
+     This is the finish so we can trash ctx.  */
   for (unsigned int i = 0; i < 8; ++i)
-    ((uint32_t *) resbuf)[i] = SWAP (ctx->H[i]);
+    ctx->H[i] = be32toh (ctx->H[i]);
+
+  memcpy(resbuf, ctx->H, 32);
 
   return resbuf;
 }
+weak_alias(__sha256_finish_ctx, sha256_finish)
+
+/* Process the remaining bytes in the internal buffer and the usual
+   prolog according to the standard and write the result to RESBUF.  */
+void *
+__sha224_finish_ctx (sha256_ctx *ctx, void *resbuf)
+{
+  __sha256_finish_ctx_generic (ctx);
 
+  /* Put result from CTX in first 28 bytes following RESBUF.
+     This is the finish so we can trash ctx.  */
+  for (unsigned int i = 0; i < 7; ++i)
+    ctx->H[i] = be32toh (ctx->H[i]);
 
-void
-__sha256_process_bytes (buffer, len, ctx)
+  memcpy(resbuf, ctx->H, 28);
+
+  return resbuf;
+}
+weak_alias(__sha224_finish_ctx, sha224_finish)
+
+const void *
+__sha256_process_bytes (ctx, buffer, len)
+     sha256_ctx *ctx;
      const void *buffer;
      size_t len;
-     struct sha256_ctx *ctx;
 {
   /* When we already have some bits in our internal buffer concatenate
      both inputs first.  */
@@ -216,6 +208,20 @@  __sha256_process_bytes (buffer, len, ctx)
 	}
       ctx->buflen = left_over;
     }
+
+  return buffer;
 }
+weak_alias(__sha256_process_bytes, sha256_update)
 
+void *
+__sha256(const void *__restrict d, size_t n, void *__restrict md)
+{
+	sha256_ctx ctx;
+
+	sha256_init(&ctx);
+	sha256_update(&ctx, d, n);
+	sha256_finish(&ctx, md);
+	return md;
+}
+weak_alias(__sha256, sha256)
 #include <sha256-block.c>
diff --git a/crypt/sha256.h b/crypt/sha256.h
index 27e0fe6..6b1bf60 100644
--- a/crypt/sha256.h
+++ b/crypt/sha256.h
@@ -27,7 +27,7 @@ 
 
 
 /* Structure to save state of computation between the single steps.  */
-struct sha256_ctx
+typedef struct
 {
   uint32_t H[8];
 
@@ -38,32 +38,39 @@  struct sha256_ctx
 #define TOTAL64_high (BYTE_ORDER == LITTLE_ENDIAN)
     uint32_t total[2];
   };
-  uint32_t buflen;
   union
   {
     char buffer[128];
     uint32_t buffer32[32];
     uint64_t buffer64[16];
   };
-};
+  uint32_t buflen;
+  uint8_t __padding[4];
+} sha256_ctx __attribute__((aligned(16)));
 
 /* Initialize structure containing state of computation.
    (FIPS 180-2: 5.3.2)  */
-extern void __sha256_init_ctx (struct sha256_ctx *ctx) __THROW;
+extern void __sha256_init_ctx (sha256_ctx *__restrict ctx) __THROW;
 
 /* Starting with the result of former calls of this function (or the
    initialization function update the context for the next LEN bytes
    starting at BUFFER.
    It is NOT required that LEN is a multiple of 64.  */
-extern void __sha256_process_bytes (const void *buffer, size_t len,
-				    struct sha256_ctx *ctx) __THROW;
+extern const void *__sha256_process_bytes (sha256_ctx *__restrict ctx,
+			const void *__restrict buffer, size_t len) __THROW;
+
+/* Process the remaining bytes in the buffer and put result from CTX
+   in first 32 bytes following RESBUF. */
+extern void *__sha256_finish_ctx (sha256_ctx *__restrict ctx, void *__restrict resbuf)
+  __THROW;
 
 /* Process the remaining bytes in the buffer and put result from CTX
-   in first 32 bytes following RESBUF.
+   in first 28 bytes following RESBUF.
 
    IMPORTANT: On some systems it is required that RESBUF is correctly
    aligned for a 32 bits value.  */
-extern void *__sha256_finish_ctx (struct sha256_ctx *ctx, void *resbuf)
+extern void *__sha224_finish_ctx (sha256_ctx *__restrict ctx, void *__restrict resbuf)
   __THROW;
 
+extern void *__sha256(const void *__restrict d, size_t n, void *__restrict md) __THROW;
 #endif /* sha256.h */
diff --git a/crypt/sha256test.c b/crypt/sha256test.c
index 39e8030..be46f69 100644
--- a/crypt/sha256test.c
+++ b/crypt/sha256test.c
@@ -44,7 +44,7 @@  static const struct
 int
 main (void)
 {
-  struct sha256_ctx ctx;
+  sha256_ctx ctx;
   char sum[32];
   int result = 0;
   int cnt;
@@ -52,8 +52,7 @@  main (void)
   for (cnt = 0; cnt < (int) (sizeof (tests) / sizeof (tests[0])); ++cnt)
     {
       __sha256_init_ctx (&ctx);
-      __sha256_process_bytes (tests[cnt].input, strlen (tests[cnt].input),
-			      &ctx);
+      __sha256_process_bytes (&ctx, tests[cnt].input, strlen (tests[cnt].input));
       __sha256_finish_ctx (&ctx, sum);
       if (memcmp (tests[cnt].result, sum, 32) != 0)
 	{
@@ -63,7 +62,7 @@  main (void)
 
       __sha256_init_ctx (&ctx);
       for (int i = 0; tests[cnt].input[i] != '\0'; ++i)
-	__sha256_process_bytes (&tests[cnt].input[i], 1, &ctx);
+	__sha256_process_bytes (&ctx, &tests[cnt].input[i], 1);
       __sha256_finish_ctx (&ctx, sum);
       if (memcmp (tests[cnt].result, sum, 32) != 0)
 	{
@@ -77,7 +76,7 @@  main (void)
   memset (buf, 'a', sizeof (buf));
   __sha256_init_ctx (&ctx);
   for (int i = 0; i < 1000; ++i)
-    __sha256_process_bytes (buf, sizeof (buf), &ctx);
+    __sha256_process_bytes (&ctx, buf, sizeof (buf));
   __sha256_finish_ctx (&ctx, sum);
   static const char expected[32] =
     "\xcd\xc7\x6e\x5c\x99\x14\xfb\x92\x81\xa1\xc7\xe2\x84\xd7\x3e\x67"
@@ -90,7 +89,7 @@  main (void)
 
   __sha256_init_ctx (&ctx);
   for (int i = 0; i < 100000; ++i)
-    __sha256_process_bytes (buf, 10, &ctx);
+    __sha256_process_bytes (&ctx, buf, 10);
   __sha256_finish_ctx (&ctx, sum);
   if (memcmp (expected, sum, 32) != 0)
     {
diff --git a/crypt/sha512-block.c b/crypt/sha512-block.c
index c542db1..45c2049 100644
--- a/crypt/sha512-block.c
+++ b/crypt/sha512-block.c
@@ -3,7 +3,7 @@ 
 /* Process LEN bytes of BUFFER, accumulating context into CTX.
    It is assumed that LEN % 128 == 0.  */
 void
-sha512_process_block (const void *buffer, size_t len, struct sha512_ctx *ctx)
+sha512_process_block (const void *buffer, size_t len, sha512_ctx *ctx)
 {
   const uint64_t *words = buffer;
   size_t nwords = len / sizeof (uint64_t);
@@ -57,7 +57,7 @@  sha512_process_block (const void *buffer, size_t len, struct sha512_ctx *ctx)
       /* Compute the message schedule according to FIPS 180-2:6.3.2 step 2.  */
       for (unsigned int t = 0; t < 16; ++t)
 	{
-	  W[t] = SWAP (*words);
+	  W[t] = be64toh (*words);
 	  ++words;
 	}
       for (unsigned int t = 16; t < 80; ++t)
diff --git a/crypt/sha512-crypt.c b/crypt/sha512-crypt.c
index 9c581ab..4cfecea 100644
--- a/crypt/sha512-crypt.c
+++ b/crypt/sha512-crypt.c
@@ -68,7 +68,7 @@  typedef int PRBool;
   __sha512_init_ctx (ctxp)
 
 # define sha512_process_bytes(buf, len, ctxp, nss_ctxp) \
-  __sha512_process_bytes(buf, len, ctxp)
+  __sha512_process_bytes(ctxp, buf, len)
 
 # define sha512_finish_ctx(ctxp, nss_ctxp, result) \
   __sha512_finish_ctx (ctxp, result)
@@ -188,8 +188,8 @@  __sha512_crypt_r (key, salt, buffer, buflen)
   NSSLOWHASHContext *nss_ctx = NULL;
   NSSLOWHASHContext *nss_alt_ctx = NULL;
 #else
-  struct sha512_ctx ctx;
-  struct sha512_ctx alt_ctx;
+  sha512_ctx ctx;
+  sha512_ctx alt_ctx;
 #endif
 
   /* Prepare for the real work.  */
diff --git a/crypt/sha512.c b/crypt/sha512.c
index 608de82..980f878 100644
--- a/crypt/sha512.c
+++ b/crypt/sha512.c
@@ -31,26 +31,6 @@ 
 
 #include "sha512.h"
 
-#if __BYTE_ORDER == __LITTLE_ENDIAN
-# ifdef _LIBC
-#  include <byteswap.h>
-#  define SWAP(n) bswap_64 (n)
-# else
-#  define SWAP(n) \
-  (((n) << 56)					\
-   | (((n) & 0xff00) << 40)			\
-   | (((n) & 0xff0000) << 24)			\
-   | (((n) & 0xff000000) << 8)			\
-   | (((n) >> 8) & 0xff000000)			\
-   | (((n) >> 24) & 0xff0000)			\
-   | (((n) >> 40) & 0xff00)			\
-   | ((n) >> 56))
-# endif
-#else
-# define SWAP(n) (n)
-#endif
-
-
 /* This array contains the bytes used to pad the buffer to the next
    64-byte boundary.  (FIPS 180-2:5.1.2)  */
 static const unsigned char fillbuf[128] = { 0x80, 0 /* , 0, 0, ...  */ };
@@ -102,13 +82,13 @@  static const uint64_t K[80] =
   };
 
 void
-sha512_process_block (const void *buffer, size_t len, struct sha512_ctx *ctx);
+sha512_process_block (const void *buffer, size_t len, sha512_ctx *ctx);
 
 /* Initialize structure containing state of computation.
    (FIPS 180-2:5.3.3)  */
 void
 __sha512_init_ctx (ctx)
-     struct sha512_ctx *ctx;
+     sha512_ctx *ctx;
 {
   ctx->H[0] = UINT64_C (0x6a09e667f3bcc908);
   ctx->H[1] = UINT64_C (0xbb67ae8584caa73b);
@@ -122,17 +102,10 @@  __sha512_init_ctx (ctx)
   ctx->total[0] = ctx->total[1] = 0;
   ctx->buflen = 0;
 }
+weak_alias(__sha512_init_ctx, sha512_init)
 
-
-/* Process the remaining bytes in the internal buffer and the usual
-   prolog according to the standard and write the result to RESBUF.
-
-   IMPORTANT: On some systems it is required that RESBUF is correctly
-   aligned for a 32 bits value.  */
-void *
-__sha512_finish_ctx (ctx, resbuf)
-     struct sha512_ctx *ctx;
-     void *resbuf;
+static void
+__sha512_finish_ctx_generic (sha512_ctx *ctx)
 {
   /* Take yet unprocessed bytes into account.  */
   uint64_t bytes = ctx->buflen;
@@ -151,26 +124,55 @@  __sha512_finish_ctx (ctx, resbuf)
   memcpy (&ctx->buffer[bytes], fillbuf, pad);
 
   /* Put the 128-bit file length in *bits* at the end of the buffer.  */
-  ctx->buffer64[(bytes + pad + 8) / 8] = SWAP (ctx->total[TOTAL128_low] << 3);
-  ctx->buffer64[(bytes + pad) / 8] = SWAP ((ctx->total[TOTAL128_high] << 3) |
+  ctx->buffer64[(bytes + pad + 8) / 8] = be64toh (ctx->total[TOTAL128_low] << 3);
+  ctx->buffer64[(bytes + pad) / 8] = be64toh ((ctx->total[TOTAL128_high] << 3) |
 					   (ctx->total[TOTAL128_low] >> 61));
 
   /* Process last bytes.  */
   sha512_process_block (ctx->buffer, bytes + pad + 16, ctx);
+}
+
+/* Process the remaining bytes in the internal buffer and the usual
+   prolog according to the standard and write the result to RESBUF. */
+void *
+__sha512_finish_ctx (sha512_ctx *ctx, void *resbuf)
+{
+  __sha512_finish_ctx_generic (ctx);
 
-  /* Put result from CTX in first 64 bytes following RESBUF.  */
+  /* Put result from CTX in first 64 bytes following RESBUF.
+     This is the finish so we can trash ctx.  */
   for (unsigned int i = 0; i < 8; ++i)
-    ((uint64_t *) resbuf)[i] = SWAP (ctx->H[i]);
+    ctx->H[i] = be64toh (ctx->H[i]);
+
+  memcpy(resbuf, ctx->H, 64);
 
   return resbuf;
 }
+weak_alias(__sha512_finish_ctx, sha512_finish)
 
+/* Process the remaining bytes in the internal buffer and the usual
+   prolog according to the standard and write the result to RESBUF. */
+void *
+__sha384_finish_ctx (sha512_ctx *ctx, void *resbuf)
+{
+  __sha512_finish_ctx_generic (ctx);
 
-void
-__sha512_process_bytes (buffer, len, ctx)
+  /* Put result from CTX in first 48 bytes following RESBUF.
+     This is the finish so we can trash ctx.  */
+  for (unsigned int i = 0; i < 6; ++i)
+    ctx->H[i] = be64toh (ctx->H[i]);
+
+  memcpy(resbuf, ctx->H, 48);
+
+  return resbuf;
+}
+weak_alias(__sha384_finish_ctx, sha384_finish)
+
+const void *
+__sha512_process_bytes (ctx, buffer, len)
+     sha512_ctx *ctx;
      const void *buffer;
      size_t len;
-     struct sha512_ctx *ctx;
 {
   /* When we already have some bits in our internal buffer concatenate
      both inputs first.  */
@@ -239,6 +241,21 @@  __sha512_process_bytes (buffer, len, ctx)
 	}
       ctx->buflen = left_over;
     }
+
+  return buffer;
+}
+weak_alias(__sha512_process_bytes, sha512_update)
+
+void *
+__sha512 (const void *__restrict d, size_t n, void *__restrict md)
+{
+	sha512_ctx ctx;
+
+	sha512_init(&ctx);
+	sha512_update(&ctx, d, n);
+	sha512_finish(&ctx, md);
+	return md;
 }
+weak_alias(__sha512, sha512)
 
 #include <sha512-block.c>
diff --git a/crypt/sha512.h b/crypt/sha512.h
index 159f000..359ea37 100644
--- a/crypt/sha512.h
+++ b/crypt/sha512.h
@@ -28,7 +28,7 @@ 
 
 
 /* Structure to save state of computation between the single steps.  */
-struct sha512_ctx
+typedef struct
 {
   uint64_t H[8];
 
@@ -43,30 +43,40 @@  struct sha512_ctx
     uint64_t total[2];
   };
   uint64_t buflen;
+  uint8_t __padding[8];
   union
   {
     char buffer[256];
     uint64_t buffer64[32];
   };
-};
+} sha512_ctx __attribute__((aligned(16)));
 
 /* Initialize structure containing state of computation.
    (FIPS 180-2: 5.3.3)  */
-extern void __sha512_init_ctx (struct sha512_ctx *ctx) __THROW;
+extern void __sha512_init_ctx (sha512_ctx *__restrict ctx) __THROW;
 
 /* Starting with the result of former calls of this function (or the
    initialization function update the context for the next LEN bytes
    starting at BUFFER.
    It is NOT required that LEN is a multiple of 128.  */
-extern void __sha512_process_bytes (const void *buffer, size_t len,
-				    struct sha512_ctx *ctx) __THROW;
+extern const void *__sha512_process_bytes (sha512_ctx *__restrict ctx,
+					   const void *__restrict buffer, size_t len) __THROW;
 
 /* Process the remaining bytes in the buffer and put result from CTX
    in first 64 bytes following RESBUF.
 
    IMPORTANT: On some systems it is required that RESBUF is correctly
    aligned for a 64 bits value.  */
-extern void *__sha512_finish_ctx (struct sha512_ctx *ctx, void *resbuf)
+extern void *__sha512_finish_ctx (sha512_ctx *ctx, void *__restrict resbuf)
   __THROW;
 
+/* Process the remaining bytes in the buffer and put result from CTX
+   in first 48 bytes following RESBUF.
+
+   IMPORTANT: On some systems it is required that RESBUF is correctly
+   aligned for a 64 bits value.  */
+extern void *__sha384_finish_ctx (sha512_ctx *ctx, void *__restrict resbuf)
+  __THROW;
+
+extern void *__sha512 (const void *__restrict d, size_t n, void *__restrict md) __THROW;
 #endif /* sha512.h */
diff --git a/crypt/sha512test.c b/crypt/sha512test.c
index 792e9a7..296ce89 100644
--- a/crypt/sha512test.c
+++ b/crypt/sha512test.c
@@ -63,7 +63,7 @@  static const struct
 int
 main (void)
 {
-  struct sha512_ctx ctx;
+  sha512_ctx ctx;
   char sum[64];
   int result = 0;
   int cnt;
@@ -71,8 +71,7 @@  main (void)
   for (cnt = 0; cnt < (int) (sizeof (tests) / sizeof (tests[0])); ++cnt)
     {
       __sha512_init_ctx (&ctx);
-      __sha512_process_bytes (tests[cnt].input, strlen (tests[cnt].input),
-			      &ctx);
+      __sha512_process_bytes (&ctx, tests[cnt].input, strlen (tests[cnt].input));
       __sha512_finish_ctx (&ctx, sum);
       if (memcmp (tests[cnt].result, sum, 64) != 0)
 	{
@@ -82,7 +81,7 @@  main (void)
 
       __sha512_init_ctx (&ctx);
       for (int i = 0; tests[cnt].input[i] != '\0'; ++i)
-	__sha512_process_bytes (&tests[cnt].input[i], 1, &ctx);
+	__sha512_process_bytes (&ctx, &tests[cnt].input[i], 1);
       __sha512_finish_ctx (&ctx, sum);
       if (memcmp (tests[cnt].result, sum, 64) != 0)
 	{
@@ -96,7 +95,7 @@  main (void)
   memset (buf, 'a', sizeof (buf));
   __sha512_init_ctx (&ctx);
   for (int i = 0; i < 1000; ++i)
-    __sha512_process_bytes (buf, sizeof (buf), &ctx);
+    __sha512_process_bytes (&ctx, buf, sizeof (buf));
   __sha512_finish_ctx (&ctx, sum);
   static const char expected[64] =
     "\xe7\x18\x48\x3d\x0c\xe7\x69\x64\x4e\x2e\x42\xc7\xbc\x15\xb4\x63"
diff --git a/manual/crypt.texi b/manual/crypt.texi
index fd007cf..23d654f 100644
--- a/manual/crypt.texi
+++ b/manual/crypt.texi
@@ -428,3 +428,29 @@  each byte.
 The @code{ecb_crypt}, @code{cbc_crypt}, and @code{des_setparity}
 functions and their accompanying macros are all defined in the header
 @file{rpc/des_crypt.h}.
+
+@node SHA-2 Hash Functions
+@section SHA-2 Hash Functions
+
+@cindex FIPS 180-4
+The Secure Hash Algorithm 2 (SHA-2) is described in US Government Federal
+Information Processing Standards (FIPS) 180-4, published by the National
+Institute of Standards and Technology.  Secure Hash Algorithm 1 (SHA-1) is
+not supported, nor is SHA-3.
+
+SHA-2 comes in two primary flavors, SHA-256 and SHA-512; SHA-224 and
+SHA-384 are truncated variants of these two.
+
+@comment sha2.h
+@deftypefun void *sha256 (const void *@var{data}, size_t @var{size}, void *@var{digest})
+
+The @code{sha256} function hashes the first @var{size} bytes of @var{data}
+and stores the SHA-256 digest in the first 32 bytes of @var{digest}.
+@end deftypefun
+
+@comment sha2.h
+@deftypefun void *sha512 (const void *@var{data}, size_t @var{size}, void *@var{digest})
+
+The @code{sha512} function hashes the first @var{size} bytes of @var{data}
+and stores the SHA-512 digest in the first 64 bytes of @var{digest}.
+@end deftypefun
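
A minimal usage sketch of the one-shot interface documented above. This is
illustrative only: it assumes a glibc built with this patch, the new
<sha2.h> header installed, and linking with -lcrypt; it will not build
against an unpatched glibc.

```c
/* Compute and print the SHA-256 digest of a short message using the
   one-shot interface proposed in this patch.  Build (with the patched
   glibc) as: gcc demo.c -lcrypt  */
#include <stdio.h>
#include <sha2.h>

int
main (void)
{
  unsigned char digest[32];

  /* FIPS 180-2 test vector: SHA-256 of "abc" is
     ba7816bf 8f01cfea 414140de 5dae2223
     b00361a3 96177a9c b410ff61 f20015ad.  */
  sha256 ("abc", 3, digest);

  for (int i = 0; i < 32; ++i)
    printf ("%02x", digest[i]);
  putchar ('\n');
  return 0;
}
```

The streaming sha256_init/sha256_update/sha256_finish functions can be
substituted when the input arrives in pieces; the final digest is the same.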
diff --git a/sysdeps/unix/sysv/linux/aarch64/libcrypt.abilist b/sysdeps/unix/sysv/linux/aarch64/libcrypt.abilist
index 177c536..0b1a14b 100644
--- a/sysdeps/unix/sysv/linux/aarch64/libcrypt.abilist
+++ b/sysdeps/unix/sysv/linux/aarch64/libcrypt.abilist
@@ -7,3 +7,15 @@  GLIBC_2.17
  fcrypt F
  setkey F
  setkey_r F
+GLIBC_2.22
+ GLIBC_2.22 A
+ sha224_finish F
+ sha256 F
+ sha256_finish F
+ sha256_init F
+ sha256_update F
+ sha384_finish F
+ sha512 F
+ sha512_finish F
+ sha512_init F
+ sha512_update F
diff --git a/sysdeps/unix/sysv/linux/alpha/libcrypt.abilist b/sysdeps/unix/sysv/linux/alpha/libcrypt.abilist
index 1df145f..964f6bd 100644
--- a/sysdeps/unix/sysv/linux/alpha/libcrypt.abilist
+++ b/sysdeps/unix/sysv/linux/alpha/libcrypt.abilist
@@ -7,3 +7,15 @@  GLIBC_2.0
  fcrypt F
  setkey F
  setkey_r F
+GLIBC_2.22
+ GLIBC_2.22 A
+ sha224_finish F
+ sha256 F
+ sha256_finish F
+ sha256_init F
+ sha256_update F
+ sha384_finish F
+ sha512 F
+ sha512_finish F
+ sha512_init F
+ sha512_update F
diff --git a/sysdeps/unix/sysv/linux/arm/libcrypt.abilist b/sysdeps/unix/sysv/linux/arm/libcrypt.abilist
index 8c874ed..806737d 100644
--- a/sysdeps/unix/sysv/linux/arm/libcrypt.abilist
+++ b/sysdeps/unix/sysv/linux/arm/libcrypt.abilist
@@ -7,3 +7,15 @@  GLIBC_2.4
  fcrypt F
  setkey F
  setkey_r F
+GLIBC_2.22
+ GLIBC_2.22 A
+ sha224_finish F
+ sha256 F
+ sha256_finish F
+ sha256_init F
+ sha256_update F
+ sha384_finish F
+ sha512 F
+ sha512_finish F
+ sha512_init F
+ sha512_update F
diff --git a/sysdeps/unix/sysv/linux/hppa/libcrypt.abilist b/sysdeps/unix/sysv/linux/hppa/libcrypt.abilist
index 1df145f..964f6bd 100644
--- a/sysdeps/unix/sysv/linux/hppa/libcrypt.abilist
+++ b/sysdeps/unix/sysv/linux/hppa/libcrypt.abilist
@@ -7,3 +7,15 @@  GLIBC_2.0
  fcrypt F
  setkey F
  setkey_r F
+GLIBC_2.22
+ GLIBC_2.22 A
+ sha224_finish F
+ sha256 F
+ sha256_finish F
+ sha256_init F
+ sha256_update F
+ sha384_finish F
+ sha512 F
+ sha512_finish F
+ sha512_init F
+ sha512_update F
diff --git a/sysdeps/unix/sysv/linux/i386/libcrypt.abilist b/sysdeps/unix/sysv/linux/i386/libcrypt.abilist
index 1df145f..964f6bd 100644
--- a/sysdeps/unix/sysv/linux/i386/libcrypt.abilist
+++ b/sysdeps/unix/sysv/linux/i386/libcrypt.abilist
@@ -7,3 +7,15 @@  GLIBC_2.0
  fcrypt F
  setkey F
  setkey_r F
+GLIBC_2.22
+ GLIBC_2.22 A
+ sha224_finish F
+ sha256 F
+ sha256_finish F
+ sha256_init F
+ sha256_update F
+ sha384_finish F
+ sha512 F
+ sha512_finish F
+ sha512_init F
+ sha512_update F
diff --git a/sysdeps/unix/sysv/linux/ia64/libcrypt.abilist b/sysdeps/unix/sysv/linux/ia64/libcrypt.abilist
index 1df145f..964f6bd 100644
--- a/sysdeps/unix/sysv/linux/ia64/libcrypt.abilist
+++ b/sysdeps/unix/sysv/linux/ia64/libcrypt.abilist
@@ -7,3 +7,15 @@  GLIBC_2.0
  fcrypt F
  setkey F
  setkey_r F
+GLIBC_2.22
+ GLIBC_2.22 A
+ sha224_finish F
+ sha256 F
+ sha256_finish F
+ sha256_init F
+ sha256_update F
+ sha384_finish F
+ sha512 F
+ sha512_finish F
+ sha512_init F
+ sha512_update F
diff --git a/sysdeps/unix/sysv/linux/m68k/coldfire/libcrypt.abilist b/sysdeps/unix/sysv/linux/m68k/coldfire/libcrypt.abilist
index 8c874ed..806737d 100644
--- a/sysdeps/unix/sysv/linux/m68k/coldfire/libcrypt.abilist
+++ b/sysdeps/unix/sysv/linux/m68k/coldfire/libcrypt.abilist
@@ -7,3 +7,15 @@  GLIBC_2.4
  fcrypt F
  setkey F
  setkey_r F
+GLIBC_2.22
+ GLIBC_2.22 A
+ sha224_finish F
+ sha256 F
+ sha256_finish F
+ sha256_init F
+ sha256_update F
+ sha384_finish F
+ sha512 F
+ sha512_finish F
+ sha512_init F
+ sha512_update F
diff --git a/sysdeps/unix/sysv/linux/m68k/m680x0/libcrypt.abilist b/sysdeps/unix/sysv/linux/m68k/m680x0/libcrypt.abilist
index 1df145f..964f6bd 100644
--- a/sysdeps/unix/sysv/linux/m68k/m680x0/libcrypt.abilist
+++ b/sysdeps/unix/sysv/linux/m68k/m680x0/libcrypt.abilist
@@ -7,3 +7,15 @@  GLIBC_2.0
  fcrypt F
  setkey F
  setkey_r F
+GLIBC_2.22
+ GLIBC_2.22 A
+ sha224_finish F
+ sha256 F
+ sha256_finish F
+ sha256_init F
+ sha256_update F
+ sha384_finish F
+ sha512 F
+ sha512_finish F
+ sha512_init F
+ sha512_update F
diff --git a/sysdeps/unix/sysv/linux/microblaze/libcrypt.abilist b/sysdeps/unix/sysv/linux/microblaze/libcrypt.abilist
index 0ac28c5..e43d483 100644
--- a/sysdeps/unix/sysv/linux/microblaze/libcrypt.abilist
+++ b/sysdeps/unix/sysv/linux/microblaze/libcrypt.abilist
@@ -7,3 +7,15 @@  GLIBC_2.18
  fcrypt F
  setkey F
  setkey_r F
+GLIBC_2.22
+ GLIBC_2.22 A
+ sha224_finish F
+ sha256 F
+ sha256_finish F
+ sha256_init F
+ sha256_update F
+ sha384_finish F
+ sha512 F
+ sha512_finish F
+ sha512_init F
+ sha512_update F
diff --git a/sysdeps/unix/sysv/linux/mips/mips32/libcrypt.abilist b/sysdeps/unix/sysv/linux/mips/mips32/libcrypt.abilist
index c548eee..23cccb8 100644
--- a/sysdeps/unix/sysv/linux/mips/mips32/libcrypt.abilist
+++ b/sysdeps/unix/sysv/linux/mips/mips32/libcrypt.abilist
@@ -9,3 +9,16 @@  GLIBC_2.0
  setkey_r F
 _gp_disp
  _gp_disp A
+GLIBC_2.22
+ GLIBC_2.22 A
+ sha224_finish F
+ sha256 F
+ sha256_finish F
+ sha256_init F
+ sha256_update F
+ sha384_finish F
+ sha512 F
+ sha512_finish F
+ sha512_init F
+ sha512_update F
+
diff --git a/sysdeps/unix/sysv/linux/mips/mips64/libcrypt.abilist b/sysdeps/unix/sysv/linux/mips/mips64/libcrypt.abilist
index 1df145f..964f6bd 100644
--- a/sysdeps/unix/sysv/linux/mips/mips64/libcrypt.abilist
+++ b/sysdeps/unix/sysv/linux/mips/mips64/libcrypt.abilist
@@ -7,3 +7,15 @@  GLIBC_2.0
  fcrypt F
  setkey F
  setkey_r F
+GLIBC_2.22
+ GLIBC_2.22 A
+ sha224_finish F
+ sha256 F
+ sha256_finish F
+ sha256_init F
+ sha256_update F
+ sha384_finish F
+ sha512 F
+ sha512_finish F
+ sha512_init F
+ sha512_update F
diff --git a/sysdeps/unix/sysv/linux/nios2/libcrypt.abilist b/sysdeps/unix/sysv/linux/nios2/libcrypt.abilist
index dd5a89c..5e656b6 100644
--- a/sysdeps/unix/sysv/linux/nios2/libcrypt.abilist
+++ b/sysdeps/unix/sysv/linux/nios2/libcrypt.abilist
@@ -7,3 +7,15 @@  GLIBC_2.21
  fcrypt F
  setkey F
  setkey_r F
+GLIBC_2.22
+ GLIBC_2.22 A
+ sha224_finish F
+ sha256 F
+ sha256_finish F
+ sha256_init F
+ sha256_update F
+ sha384_finish F
+ sha512 F
+ sha512_finish F
+ sha512_init F
+ sha512_update F
diff --git a/sysdeps/unix/sysv/linux/powerpc/powerpc32/libcrypt.abilist b/sysdeps/unix/sysv/linux/powerpc/powerpc32/libcrypt.abilist
index 1df145f..964f6bd 100644
--- a/sysdeps/unix/sysv/linux/powerpc/powerpc32/libcrypt.abilist
+++ b/sysdeps/unix/sysv/linux/powerpc/powerpc32/libcrypt.abilist
@@ -7,3 +7,15 @@  GLIBC_2.0
  fcrypt F
  setkey F
  setkey_r F
+GLIBC_2.22
+ GLIBC_2.22 A
+ sha224_finish F
+ sha256 F
+ sha256_finish F
+ sha256_init F
+ sha256_update F
+ sha384_finish F
+ sha512 F
+ sha512_finish F
+ sha512_init F
+ sha512_update F
diff --git a/sysdeps/unix/sysv/linux/powerpc/powerpc64/libcrypt.abilist b/sysdeps/unix/sysv/linux/powerpc/powerpc64/libcrypt.abilist
index a11230a..029eeeb 100644
--- a/sysdeps/unix/sysv/linux/powerpc/powerpc64/libcrypt.abilist
+++ b/sysdeps/unix/sysv/linux/powerpc/powerpc64/libcrypt.abilist
@@ -7,3 +7,15 @@  GLIBC_2.3
  fcrypt F
  setkey F
  setkey_r F
+GLIBC_2.22
+ GLIBC_2.22 A
+ sha224_finish F
+ sha256 F
+ sha256_finish F
+ sha256_init F
+ sha256_update F
+ sha384_finish F
+ sha512 F
+ sha512_finish F
+ sha512_init F
+ sha512_update F
diff --git a/sysdeps/unix/sysv/linux/s390/s390-32/libcrypt.abilist b/sysdeps/unix/sysv/linux/s390/s390-32/libcrypt.abilist
index 1df145f..964f6bd 100644
--- a/sysdeps/unix/sysv/linux/s390/s390-32/libcrypt.abilist
+++ b/sysdeps/unix/sysv/linux/s390/s390-32/libcrypt.abilist
@@ -7,3 +7,15 @@  GLIBC_2.0
  fcrypt F
  setkey F
  setkey_r F
+GLIBC_2.22
+ GLIBC_2.22 A
+ sha224_finish F
+ sha256 F
+ sha256_finish F
+ sha256_init F
+ sha256_update F
+ sha384_finish F
+ sha512 F
+ sha512_finish F
+ sha512_init F
+ sha512_update F
diff --git a/sysdeps/unix/sysv/linux/s390/s390-64/libcrypt.abilist b/sysdeps/unix/sysv/linux/s390/s390-64/libcrypt.abilist
index e3bd54f..7670fc2 100644
--- a/sysdeps/unix/sysv/linux/s390/s390-64/libcrypt.abilist
+++ b/sysdeps/unix/sysv/linux/s390/s390-64/libcrypt.abilist
@@ -7,3 +7,15 @@  GLIBC_2.2
  fcrypt F
  setkey F
  setkey_r F
+GLIBC_2.22
+ GLIBC_2.22 A
+ sha224_finish F
+ sha256 F
+ sha256_finish F
+ sha256_init F
+ sha256_update F
+ sha384_finish F
+ sha512 F
+ sha512_finish F
+ sha512_init F
+ sha512_update F
diff --git a/sysdeps/unix/sysv/linux/sh/libcrypt.abilist b/sysdeps/unix/sysv/linux/sh/libcrypt.abilist
index 1df145f..964f6bd 100644
--- a/sysdeps/unix/sysv/linux/sh/libcrypt.abilist
+++ b/sysdeps/unix/sysv/linux/sh/libcrypt.abilist
@@ -7,3 +7,15 @@  GLIBC_2.0
  fcrypt F
  setkey F
  setkey_r F
+GLIBC_2.22
+ GLIBC_2.22 A
+ sha224_finish F
+ sha256 F
+ sha256_finish F
+ sha256_init F
+ sha256_update F
+ sha384_finish F
+ sha512 F
+ sha512_finish F
+ sha512_init F
+ sha512_update F
diff --git a/sysdeps/unix/sysv/linux/sparc/sparc32/libcrypt.abilist b/sysdeps/unix/sysv/linux/sparc/sparc32/libcrypt.abilist
index 1df145f..964f6bd 100644
--- a/sysdeps/unix/sysv/linux/sparc/sparc32/libcrypt.abilist
+++ b/sysdeps/unix/sysv/linux/sparc/sparc32/libcrypt.abilist
@@ -7,3 +7,15 @@  GLIBC_2.0
  fcrypt F
  setkey F
  setkey_r F
+GLIBC_2.22
+ GLIBC_2.22 A
+ sha224_finish F
+ sha256 F
+ sha256_finish F
+ sha256_init F
+ sha256_update F
+ sha384_finish F
+ sha512 F
+ sha512_finish F
+ sha512_init F
+ sha512_update F
diff --git a/sysdeps/unix/sysv/linux/sparc/sparc64/libcrypt.abilist b/sysdeps/unix/sysv/linux/sparc/sparc64/libcrypt.abilist
index 1df145f..964f6bd 100644
--- a/sysdeps/unix/sysv/linux/sparc/sparc64/libcrypt.abilist
+++ b/sysdeps/unix/sysv/linux/sparc/sparc64/libcrypt.abilist
@@ -7,3 +7,15 @@  GLIBC_2.0
  fcrypt F
  setkey F
  setkey_r F
+GLIBC_2.22
+ GLIBC_2.22 A
+ sha224_finish F
+ sha256 F
+ sha256_finish F
+ sha256_init F
+ sha256_update F
+ sha384_finish F
+ sha512 F
+ sha512_finish F
+ sha512_init F
+ sha512_update F
diff --git a/sysdeps/unix/sysv/linux/tile/tilegx/tilegx32/libcrypt.abilist b/sysdeps/unix/sysv/linux/tile/tilegx/tilegx32/libcrypt.abilist
index 608e5df..5d13529 100644
--- a/sysdeps/unix/sysv/linux/tile/tilegx/tilegx32/libcrypt.abilist
+++ b/sysdeps/unix/sysv/linux/tile/tilegx/tilegx32/libcrypt.abilist
@@ -7,3 +7,15 @@  GLIBC_2.12
  fcrypt F
  setkey F
  setkey_r F
+GLIBC_2.22
+ GLIBC_2.22 A
+ sha224_finish F
+ sha256 F
+ sha256_finish F
+ sha256_init F
+ sha256_update F
+ sha384_finish F
+ sha512 F
+ sha512_finish F
+ sha512_init F
+ sha512_update F
diff --git a/sysdeps/unix/sysv/linux/tile/tilegx/tilegx64/libcrypt.abilist b/sysdeps/unix/sysv/linux/tile/tilegx/tilegx64/libcrypt.abilist
index 608e5df..5d13529 100644
--- a/sysdeps/unix/sysv/linux/tile/tilegx/tilegx64/libcrypt.abilist
+++ b/sysdeps/unix/sysv/linux/tile/tilegx/tilegx64/libcrypt.abilist
@@ -7,3 +7,15 @@  GLIBC_2.12
  fcrypt F
  setkey F
  setkey_r F
+GLIBC_2.22
+ GLIBC_2.22 A
+ sha224_finish F
+ sha256 F
+ sha256_finish F
+ sha256_init F
+ sha256_update F
+ sha384_finish F
+ sha512 F
+ sha512_finish F
+ sha512_init F
+ sha512_update F
diff --git a/sysdeps/unix/sysv/linux/tile/tilepro/libcrypt.abilist b/sysdeps/unix/sysv/linux/tile/tilepro/libcrypt.abilist
index 608e5df..5d13529 100644
--- a/sysdeps/unix/sysv/linux/tile/tilepro/libcrypt.abilist
+++ b/sysdeps/unix/sysv/linux/tile/tilepro/libcrypt.abilist
@@ -7,3 +7,15 @@  GLIBC_2.12
  fcrypt F
  setkey F
  setkey_r F
+GLIBC_2.22
+ GLIBC_2.22 A
+ sha224_finish F
+ sha256 F
+ sha256_finish F
+ sha256_init F
+ sha256_update F
+ sha384_finish F
+ sha512 F
+ sha512_finish F
+ sha512_init F
+ sha512_update F
diff --git a/sysdeps/unix/sysv/linux/x86_64/64/libcrypt.abilist b/sysdeps/unix/sysv/linux/x86_64/64/libcrypt.abilist
index 23d4ce0..3cd0764 100644
--- a/sysdeps/unix/sysv/linux/x86_64/64/libcrypt.abilist
+++ b/sysdeps/unix/sysv/linux/x86_64/64/libcrypt.abilist
@@ -7,3 +7,15 @@  GLIBC_2.2.5
  fcrypt F
  setkey F
  setkey_r F
+GLIBC_2.22
+ GLIBC_2.22 A
+ sha224_finish F
+ sha256 F
+ sha256_finish F
+ sha256_init F
+ sha256_update F
+ sha384_finish F
+ sha512 F
+ sha512_finish F
+ sha512_init F
+ sha512_update F
diff --git a/sysdeps/unix/sysv/linux/x86_64/x32/libcrypt.abilist b/sysdeps/unix/sysv/linux/x86_64/x32/libcrypt.abilist
index 1a52738..ea70f11 100644
--- a/sysdeps/unix/sysv/linux/x86_64/x32/libcrypt.abilist
+++ b/sysdeps/unix/sysv/linux/x86_64/x32/libcrypt.abilist
@@ -7,3 +7,15 @@  GLIBC_2.16
  fcrypt F
  setkey F
  setkey_r F
+GLIBC_2.22
+ GLIBC_2.22 A
+ sha224_finish F
+ sha256 F
+ sha256_finish F
+ sha256_init F
+ sha256_update F
+ sha384_finish F
+ sha512 F
+ sha512_finish F
+ sha512_init F
+ sha512_update F