
Fix ASAN failures on SPARC64/Linux

Message ID 4930863.as9yYVlIhr@polaris
State New
Series Fix ASAN failures on SPARC64/Linux

Commit Message

Eric Botcazou March 11, 2019, 10:29 a.m. UTC
Hi,

ASAN was enabled for the SPARC architecture during GCC 9 development but it 
doesn't really work on SPARC64/Linux because of the specific layout of the 
virtual memory address space.  Fortunately this is (easily) fixable and the 
fix has been accepted upstream, along with other fixes for SPARC (I have 
attached the asan/asan_mapping_sparc64.h file accepted upstream).

But, since GCC also hardcodes the scaling done by ASAN, this also requires a 
small adjustment to the compiler proper by means of a hook, tentatively called 
TARGET_ASAN_SHADOW_LEFT_SHIFT, which is defined to NULL except for SPARC.  It
yields a 100% clean ASAN testsuite on SPARC64/Linux (32-bit and 64-bit).
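
For illustration, here is a minimal sketch of the shadow address computation
this enables, assuming the 12-bit left shift returned by the new SPARC hook
and the 1UL << 43 shadow offset that comes up later in the thread
(illustrative values, not a general ABI statement):

/* Illustrative sketch only: the shadow mapping with the extra left shift,
   assuming a 12-bit left shift and a 1UL << 43 shadow offset (the values
   discussed in this thread for the 52-bit VA layout on SPARC-T4 and later).  */
#include <stdint.h>
#include <stdio.h>

static uint64_t
mem_to_shadow (uint64_t addr)
{
  /* Drop the unused high bits of the 52-bit VA first, then apply the usual
     ASAN scaling (>> 3) and add the shadow offset.  */
  return (UINT64_C (1) << 43) + ((addr << 12) >> (12 + 3));
}

int
main (void)
{
  /* One address below the VA hole and the first address above it
     (hole assumed to be [0x0008000000000000, 0xfff8000000000000)).  */
  uint64_t samples[] = { 0x0000000000001000ULL, 0xfff8000000000000ULL };
  for (int i = 0; i < 2; i++)
    printf ("%#018llx -> %#018llx\n", (unsigned long long) samples[i],
            (unsigned long long) mem_to_shadow (samples[i]));
  return 0;
}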

Tested on SPARC64/Linux, SPARC/Solaris and x86-64/Linux, OK for the mainline?


2019-03-11  Eric Botcazou  <ebotcazou@adacore.com>

	PR sanitizer/80953
	* target.def (asan_shadow_left_shift): New hook.
	(asan_shadow_offset): Minor tweak.
	* doc/tm.texi.in: Add TARGET_ASAN_SHADOW_LEFT_SHIFT.
	* doc/tm.texi: Regenerate.
	* asan.c (asan_emit_stack_protection): Do a preliminary left shift if
	TARGET_ASAN_SHADOW_LEFT_SHIFT is positive.
	(build_shadow_mem_access): Likewise.
	* config/sparc/sparc.c (TARGET_ASAN_SHADOW_LEFT_SHIFT): Define to...
	(sparc_asan_shadow_left_shift): ...this.  New function.

Comments

Jakub Jelinek March 13, 2019, 9:30 a.m. UTC | #1
On Mon, Mar 11, 2019 at 11:29:39AM +0100, Eric Botcazou wrote:
> ASAN was enabled for the SPARC architecture during GCC 9 development but it 
> doesn't really work on SPARC64/Linux because of the specific layout of the 
> virtual memory address space.  Fortunately this is (easily) fixable and the 
> fix has been accepted upstream, along with other fixes for SPARC (I have 
> attached the asan/asan_mapping_sparc64.h file accepted upstream).

Is the size of the virtual address space hole constant though (and will it
remain constant)?
E.g. on powerpc64 or aarch64 there are in each case like 4-5 different VA
size configurations over the last 10+ years of kernel history and
configuration options and fortunately all that is hidden inside of libasan,
if you have older gcc and run into an unsupported VA configuration, all it
takes is update libasan to one that supports it and binaries continue to
work.
While in this case, the VA size is hardcoded into all the generated
code.  I guess running it on a VA layout that has the hole larger than
the one picked up (i.e. the high part of memory above the hole smaller;
supposedly for older kernel versions or older hw) should not be an issue,
but a big issue will be if the hole shrinks further, thus the high part of
memory above the hole grows.
Could libasan initialization, if it detects this, just do a PROT_NONE mmap
from the end of the hole to the start of the region it really supports (and
fail if that fails), so that backward compatibility is ensured?
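
For concreteness, a rough sketch of the kind of reservation meant here, done
at libasan start-up (the function name, parameters and flags below are
placeholders, not actual libasan interfaces):

/* Rough sketch of the suggested backward-compatibility check: reserve
   (PROT_NONE) the range between the end of the hole the binary was compiled
   for and the start of the region the runtime really supports, and bail out
   if the reservation cannot be established.  */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

static void
reserve_unsupported_range (uintptr_t compiled_hole_end, uintptr_t supported_start)
{
  size_t len = supported_start - compiled_hole_end;
  void *p = mmap ((void *) compiled_hole_end, len, PROT_NONE,
                  MAP_FIXED | MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (p == MAP_FAILED)
    {
      fprintf (stderr, "libasan: VA layout incompatible with this binary\n");
      abort ();
    }
}

If the reservation fails, the runtime would refuse to start rather than let
the program receive mappings the instrumentation cannot shadow.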

> But, since GCC also hardcodes the scaling done by ASAN, this also requires a 
> small adjustment to the compiler proper by means of a hook, tentatively called 
> TARGET_ASAN_SHADOW_LEFT_SHIFT, which is defined to NULL except for SPARC.  It
> yields a 100% clean ASAN testsuite on SPARC64/Linux (32-bit and 64-bit).
> 
> Tested on SPARC64/Linux, SPARC/Solaris and x86-64/Linux, OK for the mainline?
> 
> 
> 2019-03-11  Eric Botcazou  <ebotcazou@adacore.com>
> 
> 	PR sanitizer/80953
> 	* target.def (asan_shadow_left_shift): New hook.
> 	(asan_shadow_offset): Minor tweak.
> 	* doc/tm.texi.in: Add TARGET_ASAN_SHADOW_LEFT_SHIFT.
> 	* doc/tm.texi: Regenerate.
> 	* asan.c (asan_emit_stack_protection): Do a preliminary left shift if
> 	TARGET_ASAN_SHADOW_LEFT_SHIFT is positive.
> 	(build_shadow_mem_access): Likewise.
> 	* config/sparc/sparc.c (TARGET_ASAN_SHADOW_LEFT_SHIFT): Define to...
> 	(sparc_asan_shadow_left_shift): ...this.  New function.

Also, don't you need some corresponding libsanitizer changes?

	Jakub
Eric Botcazou March 13, 2019, 9:58 a.m. UTC | #2
> Is the size of the virtual address space hole constant though (and will it
> remain constant)?

The kernel sources say that it's constant and with this position for SPARC-T4 
and later.  It's different (larger hole) for SPARC-T3 and earlier but I cannot 
really test.  I don't think that it will change for a given processor.

> E.g. on powerpc64 or aarch64 there are in each case like 4-5 different VA
> size configurations over the last 10+ years of kernel history and
> configuration options and fortunately all that is hidden inside of libasan,
> if you have older gcc and run into an unsupported VA configuration, all it
> takes is update libasan to one that supports it and binaries continue to
> work.

But a few targets have hardcoded VA size in TARGET_ASAN_SHADOW_OFFSET too.

> Could libasan initialization, if it detects this, just do a PROT_NONE mmap
> from the end of the hole to the start of the region it really supports (and
> fail if that fails), so that backward compatibility is ensured?

I'll investigate how targets supporting multiple VA sizes behave, but I don't 
have access to a large range of SPARC machines...

> Also, don't you need some corresponding libsanitizer changes?

Of course, just merged.
Jakub Jelinek March 13, 2019, 10:17 a.m. UTC | #3
On Wed, Mar 13, 2019 at 10:58:41AM +0100, Eric Botcazou wrote:
> > Is the size of the virtual address space hole constant though (and will it
> > remain constant)?
> 
> The kernel sources say that it's constant and with this position for SPARC-T4 
> and later.  It's different (larger hole) for SPARC-T3 and earlier but I cannot 
> really test.  I don't think that it will change for a given processor.
> 
> > E.g. on powerpc64 or aarch64 there are in each case like 4-5 different VA
> > size configurations over the last 10+ years of kernel history and
> > configuration options and fortunately all that is hidden inside of libasan,
> > if you have older gcc and run into an unsupported VA configuration, all it
> > takes is update libasan to one that supports it and binaries continue to
> > work.
> 
> But a few targets have hardcoded VA size in TARGET_ASAN_SHADOW_OFFSET too.

It actually is something that works with all the VA sizes that are
supported.

	Jakub
Jakub Jelinek March 13, 2019, 10:48 a.m. UTC | #4
On Wed, Mar 13, 2019 at 11:17:49AM +0100, Jakub Jelinek wrote:
> On Wed, Mar 13, 2019 at 10:58:41AM +0100, Eric Botcazou wrote:
> > > Is the size of the virtual address space hole constant though (and will it
> > > remain constant)?
> > 
> > The kernel sources say that it's constant and with this position for SPARC-T4 
> > and later.  It's different (larger hole) for SPARC-T3 and earlier but I cannot 
> > really test.  I don't think that it will change for a given processor.
> > 
> > > E.g. on powerpc64 or aarch64 there are in each case like 4-5 different VA
> > > size configurations over the last 10+ years of kernel history and
> > > configuration options and fortunately all that is hidden inside of libasan,
> > > if you have older gcc and run into an unsupported VA configuration, all it
> > > takes is update libasan to one that supports it and binaries continue to
> > > work.
> > 
> > But a few targets have hardcoded VA size in TARGET_ASAN_SHADOW_OFFSET too.
> 
> It actually is something that works with all the VA sizes that are
> supported.

The kernel says ATM that there are the following possibilities for the hole:
[0x0000080000000000UL,0xfffff80000000000UL)
[0x0000800000000000UL,0xffff800000000000UL)
[0x0008000000000000UL,0xfff8000000000000UL)
[0x0010000000000000UL,0xfff0000000000000UL)

So, when using the MemToShadow(addr) (1UL << 43) + ((addr << 12) >> (12 + 3)) mapping,
the first valid address above the hole will have shadow at:
 0x0002070000000000UL (will not work, as it is inside of the VA hole)
 0x0001f80000000000UL (will not work, as it is inside of the VA hole)
 0x0001080000000000UL (this is the only case that will work)
 0x0000080000000000UL (will not work; that would mean that both the low and
		       high memory share the same shadow memory;
		       it could be made to work by doing mmap
		       (0xfff0000000000000UL, 0x8000000000000UL, MAP_FIXED, PROT_NONE)
		       at libasan initialization and failing if that doesn't
		       succeed)
Note for the first VA layout even the shadow offset 1UL << 43 will not work
at all even for the low part of the memory, as all of shadow memory is then in the
hole.
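
For reference, a tiny standalone program reproducing those four values from
the listed hole bounds (purely illustrative):

/* Reproduce the four shadow addresses above: the shadow of the first valid
   address past each possible hole, under the fixed mapping
   (1UL << 43) + ((addr << 12) >> (12 + 3)).  */
#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  const uint64_t first_above_hole[] = { 0xfffff80000000000ULL,
                                        0xffff800000000000ULL,
                                        0xfff8000000000000ULL,
                                        0xfff0000000000000ULL };
  for (int i = 0; i < 4; i++)
    {
      uint64_t shadow = (UINT64_C (1) << 43)
                        + ((first_above_hole[i] << 12) >> (12 + 3));
      printf ("%#018llx -> %#018llx\n",
              (unsigned long long) first_above_hole[i],
              (unsigned long long) shadow);
    }
  return 0;
}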

I think hardcoding one of the 4 choices in the ABI is undesirable.

Another possibility is instead of using constant offset + 2 shifts use a
variable offset + the normal >> 3, where the offset would be chosen by the
runtime library depending on the VA hole size and where the shadow for the
high memory would precede the shadow for the low memory.

	Jakub
Eric Botcazou March 13, 2019, 11 a.m. UTC | #5
> It actually is something that works with all the VA sizes that are
> supported.

Well, there were changes in the past that seem to indicate that this has not 
always been true but, of course, the very specific VM layout on SPARC 64-bit 
(apparently inherited from Solaris) makes things much more convoluted...

Moreover, I'm not sure this is a very important issue: people presumably don't 
run binaries compiled with -fsanitize=address in production, so having to 
recompile with a matching GCC version doesn't seem that much of a hurdle.
Eric Botcazou March 13, 2019, 11:21 a.m. UTC | #6
> So, when using the MemToShadow(addr) (1UL << 43) + ((addr << 12) >> (12 +
> 3)) mapping, the first valid address above the hole will have shadow at:
>  0x0002070000000000UL (will not work, as it is inside of the VA hole)
>  0x0001f80000000000UL (will not work, as it is inside of the VA hole)
>  0x0001080000000000UL (this is the only case that will work)
>  0x0000080000000000UL (will not work; that would mean that both the low and
> 		       high memory share the same shadow memory;
> 		       it could be made to work by doing mmap
> 		       (0xfff0000000000000UL, 0x8000000000000UL, MAP_FIXED, PROT_NONE)
> 		       at libasan initialization and failing if that doesn't
> 		       succeed)

OK, I can certainly do the last thing.

> Note for the first VA layout even the shadow offset 1UL << 43 will not work
> at all even for the low part of the memory, as all of shadow memory is then
> in the hole.

Yes, you need a kernel configured for SPARC-T4 or later.

> I think hardcoding one of the 4 choices in the ABI is undesirable.

Frankly I'm not sure why you care about the ABI of the AddressSanitizer...

> Another possibility is instead of using constant offset + 2 shifts use a
> variable offset + the normal >> 3, where the offset would be chosen by the
> runtime library depending on the VA hole size and where the shadow for the
> high memory would precede the shadow for the low memory.

But you still need to chop the high bits, otherwise you end up in the hole.
Jakub Jelinek March 13, 2019, 11:33 a.m. UTC | #7
On Wed, Mar 13, 2019 at 12:21:15PM +0100, Eric Botcazou wrote:
> > So, when using the MemToShadow(addr) (1UL << 43) + ((addr << 12) >> (12 +
> > 3)) mapping, the first valid address above the hole will have shadow at:
> >  0x0002070000000000UL (will not work, as it is inside of the VA hole)
> >  0x0001f80000000000UL (will not work, as it is inside of the VA hole)
> >  0x0001080000000000UL (this is the only case that will work)
> >  0x0000080000000000UL (will not work; that would mean that both the low and
> > 		       high memory share the same shadow memory;
> > 		       it could be made to work by doing mmap
> > 		       (0xfff0000000000000UL, 0x8000000000000UL, MAP_FIXED, PROT_NONE)
> > 		       at libasan initialization and failing if that doesn't
> > 		       succeed)
> 
> OK, I can certainly do the last thing.
> 
> > Note for the first VA layout even the shadow offset 1UL << 43 will not work
> > at all even for the low part of the memory, as all of shadow memory is then
> > in the hole.
> 
> Yes, you need a kernel configured for SPARC-T4 or later.
> 
> > I think hardcoding one of the 4 choices in the ABI is undesirable.
> 
> Frankly I'm not sure why you care about the ABI of the AddressSanitizer...

Because in the real world, people do build asan-instrumented shared libraries etc.,
sometimes as a second set of libs, and not everybody builds stuff for just his
own machine.  Plus, by hardcoding it in the compiler you don't even give a
choice to change it for other systems.  There are no runtime diagnostics if
you mix objects, shared libraries or executables with different settings.

> > Another possibility is instead of using constant offset + 2 shifts use a
> > variable offset + the normal >> 3, where the offset would be chosen by the
> > runtime library depending on the VA hole size and where the shadow for the
> > high memory would precede the shadow for the low memory.
> 
> But you still need to chop the high bits, otherwise you end up in the hole.

Not if the >> 3 shift is arithmetic shift.

For the
[0x0000080000000000UL,0xfffff80000000000UL)
[0x0000800000000000UL,0xffff800000000000UL)
[0x0008000000000000UL,0xfff8000000000000UL)
[0x0010000000000000UL,0xfff0000000000000UL)
if shadow is shadow_offset + ((long) addr >> 3), then shadow_offset could be
e.g.
0x0000030000000000UL
0x0000300000000000UL
0x0003000000000000UL
0x0004000000000000UL
or something similar, the shadow memory would map into something below the
hole (of course one needs to also take into account where exactly the
dynamic linker, stack and shared libraries are usually mapped).
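
To make that concrete, a small sketch checking that, with an arithmetic >> 3
and the candidate offsets above, both the last address below each hole and
the first address above it get shadow addresses below the hole (the offsets
are just the illustrative values listed above):

/* Check that shadow_offset + ((long) addr >> 3), with an arithmetic shift,
   keeps the shadow below the VA hole for both halves of memory.  The
   offsets are the example candidates listed above.  */
#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  static const struct { uint64_t hole_beg, hole_end, offset; } cfg[] = {
    { 0x0000080000000000ULL, 0xfffff80000000000ULL, 0x0000030000000000ULL },
    { 0x0000800000000000ULL, 0xffff800000000000ULL, 0x0000300000000000ULL },
    { 0x0008000000000000ULL, 0xfff8000000000000ULL, 0x0003000000000000ULL },
    { 0x0010000000000000ULL, 0xfff0000000000000ULL, 0x0004000000000000ULL },
  };
  for (int i = 0; i < 4; i++)
    {
      uint64_t lo = cfg[i].offset + (uint64_t) ((int64_t) (cfg[i].hole_beg - 1) >> 3);
      uint64_t hi = cfg[i].offset + (uint64_t) ((int64_t) cfg[i].hole_end >> 3);
      printf ("hole %d: shadow(low top) = %#llx, shadow(high base) = %#llx, "
              "both %s the hole\n", i + 1,
              (unsigned long long) lo, (unsigned long long) hi,
              (lo < cfg[i].hole_beg && hi < cfg[i].hole_beg) ? "below" : "NOT below");
    }
  return 0;
}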

	Jakub
Eric Botcazou March 14, 2019, 10:01 p.m. UTC | #8
> Not if the >> 3 shift is arithmetic shift.

Sorry, I don't understand how this can work.
Jakub Jelinek March 14, 2019, 10:39 p.m. UTC | #9
On Thu, Mar 14, 2019 at 11:01:37PM +0100, Eric Botcazou wrote:
> > Not if the >> 3 shift is arithmetic shift.
> 
> Sorry, I don't understand how this can work.

For some configurations, libasan defines SHADOW_OFFSET to
__asan_shadow_memory_dynamic_address (an exported uptr symbol from libasan),
so here SHADOW_OFFSET would also be __asan_shadow_memory_dynamic_address and
#define MEM_TO_SHADOW(mem) ((sptr(mem) >> SHADOW_SCALE) + (SHADOW_OFFSET))

For the different sizes of the address space hole:
[0x0000080000000000UL,0xfffff80000000000UL)
[0x0000800000000000UL,0xffff800000000000UL)
[0x0008000000000000UL,0xfff8000000000000UL)
[0x0010000000000000UL,0xfff0000000000000UL)
it would then be up to the asan initialization to figure out what value
of the __asan_shadow_memory_dynamic_address it wants to use.

Say for the largest hole, sptr(0)>>3 is 0,
sptr(0x0000080000000000UL)>>3 is 0x0000010000000000UL,
sptr(0xfffff80000000000UL)>>3 is 0xffffff0000000000UL,
sptr(0xffffffffffffffffUL)>>3 is 0xffffffffffffffffUL.
The VA has 8TiB before hole and 8TiB after hole, and needs 2TiB of shadow
memory.  You pick some 2TiB region, either in the area below hole, or above
hole, where nothing is mapped, let's say you pick
[0x0000020000000000UL,0x0000040000000000UL) as the shadow memory
and then __asan_shadow_memory_dynamic_address will be
0x0000020000000000UL + sptr(0x0000080000000000UL)>>3, i.e.
0x0000030000000000UL.
kLowMemBeg is 0, kLowMemEnd is 0x0000020000000000UL-1 (first region
where you can have normal data), then there would be a shadow memory
corresponding to all of memory above the hole (i.e.
0x0000020000000000UL..0x0000030000000000UL) followed immediately
by shadow memory for kLowMemBeg..kLowMemEnd (0x0000030000000000UL..
0x0000034000000000UL-1), followed by
kShadowGapBeg 0x0000034000000000UL through kShadowGapEnd
0x0000038000000000UL-1 and finally again shadow memory for the normal
memory between 0x0000040000000000UL..0x0000080000000000UL
(i.e. 0x0000038000000000UL..0x0000040000000000UL-1).
Note, the 0x0000020000000000UL choice was just an example; I believe
it would work if you just tried to mmap, without MAP_FIXED, a 2TiB region
for the shadow and used whatever you get, together with the start and end
of the VA hole, to compute everything else.
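
To check the arithmetic, a small sketch that plugs the example dynamic
address 0x0000030000000000 into MEM_TO_SHADOW and prints the boundary
addresses used above (all values are the example's, nothing authoritative):

/* Recompute the boundaries of the example above with
   MEM_TO_SHADOW(mem) = (sptr(mem) >> 3) + __asan_shadow_memory_dynamic_address
   and the example dynamic address 0x0000030000000000.  */
#include <stdint.h>
#include <stdio.h>

static uint64_t
mem_to_shadow (uint64_t mem, uint64_t offset)
{
  return (uint64_t) ((int64_t) mem >> 3) + offset;  /* arithmetic shift */
}

int
main (void)
{
  const uint64_t offset = 0x0000030000000000ULL;  /* example value only */
  const struct { const char *what; uint64_t addr; } probes[] = {
    { "first byte above the hole ", 0xfffff80000000000ULL },
    { "last byte of high memory  ", 0xffffffffffffffffULL },
    { "start of low memory       ", 0x0000000000000000ULL },
    { "kLowMemEnd (example)      ", 0x000001ffffffffffULL },
    { "first byte past the shadow", 0x0000040000000000ULL },
  };
  for (unsigned i = 0; i < sizeof probes / sizeof probes[0]; i++)
    printf ("%s: shadow = %#018llx\n", probes[i].what,
            (unsigned long long) mem_to_shadow (probes[i].addr, offset));
  return 0;
}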

Similarly for all the other VA hole sizes, just instead of 2TiB you need
32TiB, 512TiB or 1PiB of shadow memory (always the size of the memory before
the VA hole divided by 4, i.e. the size of both regions outside of the hole
divided by 8).

gcc would then emit whatever sequence the memory model requires to access
the external __asan_shadow_memory_dynamic_address symbol, shift the address
arithmetically by >> 3 and add that to the value of
__asan_shadow_memory_dynamic_address.

	Jakub

Patch

Index: asan.c
===================================================================
--- asan.c	(revision 269546)
+++ asan.c	(working copy)
@@ -1380,6 +1380,7 @@  asan_emit_stack_protection (rtx base, rt
   unsigned char cur_shadow_byte = ASAN_STACK_MAGIC_LEFT;
   tree str_cst, decl, id;
   int use_after_return_class = -1;
+  unsigned int shift;
 
   if (shadow_ptr_types[0] == NULL_TREE)
     asan_init_shadow_ptr_types ();
@@ -1524,8 +1525,19 @@  asan_emit_stack_protection (rtx base, rt
   TREE_ASM_WRITTEN (decl) = 1;
   TREE_ASM_WRITTEN (id) = 1;
   emit_move_insn (mem, expand_normal (build_fold_addr_expr (decl)));
-  shadow_base = expand_binop (Pmode, lshr_optab, base,
-			      gen_int_shift_amount (Pmode, ASAN_SHADOW_SHIFT),
+  shadow_base = base;
+  if (targetm.asan_shadow_left_shift
+      && (shift = targetm.asan_shadow_left_shift ()) > 0)
+    {
+      shadow_base = expand_binop (Pmode, ashl_optab, shadow_base,
+				  gen_int_shift_amount (Pmode, shift),
+				  NULL_RTX, 1, OPTAB_DIRECT);
+      shift += ASAN_SHADOW_SHIFT;
+    }
+  else
+    shift = ASAN_SHADOW_SHIFT;
+  shadow_base = expand_binop (Pmode, lshr_optab, shadow_base,
+			      gen_int_shift_amount (Pmode, shift),
 			      NULL_RTX, 1, OPTAB_DIRECT);
   shadow_base
     = plus_constant (Pmode, shadow_base,
@@ -2023,9 +2035,24 @@  build_shadow_mem_access (gimple_stmt_ite
 {
   tree t, uintptr_type = TREE_TYPE (base_addr);
   tree shadow_type = TREE_TYPE (shadow_ptr_type);
+  unsigned int shift;
   gimple *g;
 
-  t = build_int_cst (uintptr_type, ASAN_SHADOW_SHIFT);
+  if (targetm.asan_shadow_left_shift
+      && (shift = targetm.asan_shadow_left_shift ()) > 0)
+    {
+      t = build_int_cst (uintptr_type, shift);
+      g = gimple_build_assign (make_ssa_name (uintptr_type), LSHIFT_EXPR,
+			       base_addr, t);
+      gimple_set_location (g, location);
+      gsi_insert_after (gsi, g, GSI_NEW_STMT);
+      base_addr = gimple_assign_lhs (g);
+      shift += ASAN_SHADOW_SHIFT;
+    }
+  else
+    shift = ASAN_SHADOW_SHIFT;
+
+  t = build_int_cst (uintptr_type, shift);
   g = gimple_build_assign (make_ssa_name (uintptr_type), RSHIFT_EXPR,
 			   base_addr, t);
   gimple_set_location (g, location);
Index: config/sparc/sparc.c
===================================================================
--- config/sparc/sparc.c	(revision 269546)
+++ config/sparc/sparc.c	(working copy)
@@ -674,6 +674,7 @@  static rtx sparc_struct_value_rtx (tree,
 static rtx sparc_function_value (const_tree, const_tree, bool);
 static rtx sparc_libcall_value (machine_mode, const_rtx);
 static bool sparc_function_value_regno_p (const unsigned int);
+static unsigned int sparc_asan_shadow_left_shift (void);
 static unsigned HOST_WIDE_INT sparc_asan_shadow_offset (void);
 static void sparc_output_dwarf_dtprel (FILE *, int, rtx) ATTRIBUTE_UNUSED;
 static void sparc_file_end (void);
@@ -835,6 +836,9 @@  char sparc_hard_reg_printed[8];
 #undef TARGET_EXPAND_BUILTIN_SAVEREGS
 #define TARGET_EXPAND_BUILTIN_SAVEREGS sparc_builtin_saveregs
 
+#undef TARGET_ASAN_SHADOW_LEFT_SHIFT
+#define TARGET_ASAN_SHADOW_LEFT_SHIFT sparc_asan_shadow_left_shift
+
 #undef TARGET_ASAN_SHADOW_OFFSET
 #define TARGET_ASAN_SHADOW_OFFSET sparc_asan_shadow_offset
 
@@ -12493,7 +12497,16 @@  sparc_init_machine_status (void)
 {
   return ggc_cleared_alloc<machine_function> ();
 }
-
+
+/* Implement the TARGET_ASAN_SHADOW_LEFT_SHIFT hook.  */
+
+static unsigned int
+sparc_asan_shadow_left_shift (void)
+{
+  /* This is tailored to the 52-bit VM layout on SPARC-T4 and later.  */
+  return TARGET_ARCH64 ? 12 : 0;
+}
+
 /* Implement the TARGET_ASAN_SHADOW_OFFSET hook.  */
 
 static unsigned HOST_WIDE_INT
Index: doc/tm.texi
===================================================================
--- doc/tm.texi	(revision 269546)
+++ doc/tm.texi	(working copy)
@@ -11975,10 +11975,17 @@  MIPS, where add-immediate takes a 16-bit
 is zero, which disables this optimization.
 @end deftypevr
 
+@deftypefn {Target Hook} {unsigned int} TARGET_ASAN_SHADOW_LEFT_SHIFT (void)
+Return the amount by which an address must first be shifted to the left
+and then back to the right, before being normally shifted to the right,
+to get the corresponding Address Sanitizer shadow address.  NULL means that
+such a left shift is not needed.
+@end deftypefn
+
 @deftypefn {Target Hook} {unsigned HOST_WIDE_INT} TARGET_ASAN_SHADOW_OFFSET (void)
-Return the offset bitwise ored into shifted address to get corresponding
-Address Sanitizer shadow memory address.  NULL if Address Sanitizer is not
-supported by the target.
+Return the offset added to a shifted address to get the corresponding
+Address Sanitizer shadow memory address.  NULL means that the Address
+Sanitizer is not supported by the target.
 @end deftypefn
 
 @deftypefn {Target Hook} {unsigned HOST_WIDE_INT} TARGET_MEMMODEL_CHECK (unsigned HOST_WIDE_INT @var{val})
Index: doc/tm.texi.in
===================================================================
--- doc/tm.texi.in	(revision 269546)
+++ doc/tm.texi.in	(working copy)
@@ -8110,6 +8110,8 @@  and the associated definitions of those
 
 @hook TARGET_CONST_ANCHOR
 
+@hook TARGET_ASAN_SHADOW_LEFT_SHIFT
+
 @hook TARGET_ASAN_SHADOW_OFFSET
 
 @hook TARGET_MEMMODEL_CHECK
Index: target.def
===================================================================
--- target.def	(revision 269546)
+++ target.def	(working copy)
@@ -4307,14 +4307,24 @@  DEFHOOK
 memory model bits are allowed.",
  unsigned HOST_WIDE_INT, (unsigned HOST_WIDE_INT val), NULL)
 
-/* Defines an offset bitwise ored into shifted address to get corresponding
-   Address Sanitizer shadow address, or -1 if Address Sanitizer is not
-   supported by the target.  */
+/* Defines the amount by which an address must first be shifted to the left
+   to get the corresponding Address Sanitizer shadow address.  */
+DEFHOOK
+(asan_shadow_left_shift,
+ "Return the amount by which an address must first be shifted to the left\n\
+and then back to the right, before being normally shifted to the right,\n\
+to get the corresponding Address Sanitizer shadow address.  NULL means that\n\
+such a left shift is not needed.",
+ unsigned int, (void),
+ NULL)
+ 
+/* Defines the offset added to a shifted address to get the corresponding
+   Address Sanitizer shadow address.  */
 DEFHOOK
 (asan_shadow_offset,
- "Return the offset bitwise ored into shifted address to get corresponding\n\
-Address Sanitizer shadow memory address.  NULL if Address Sanitizer is not\n\
-supported by the target.",
+ "Return the offset added to a shifted address to get the corresponding\n\
+Address Sanitizer shadow memory address.  NULL means that the Address\n\
+Sanitizer is not supported by the target.",
  unsigned HOST_WIDE_INT, (void),
  NULL)