
[testsuite] don't use lto plugin if it doesn't work

Message ID orfw9hbx8p.fsf@livre.localdomain
State New

Commit Message

Alexandre Oliva June 26, 2012, 9:04 p.m. UTC
I test i686-linux-gnu in a presumably unusual setting: it's an
x86_64-linux-gnu system, and I've configured the GCC build to use as
builddev tools wrapper scripts for as, ld, gnatmake and gcc that add
flags that make them default to 32-bit.

This worked fine for regression testing, but I've recently realized
(with the PR49888/53671 mishap) that I'm getting tons of LTO testsuite
failures (before and after, so no regression), because the 32-bit LTO
plugin built in this setting can't possibly be used by the 64-bit linker
installed on the system.  Obviously, passing -melf_i386 to the linker
through the wrapper is not enough for it to be able to dlopen a 32-bit
plugin ;-)

I'm considering installing alternate 32-bit tools on my system for
better testing, but I figured I'd better fix the test harness before
tackling this: if we find that the LTO plugin doesn't work, we'd better
not use -flto without -fno-use-linker-plugin, to avoid the spurious
failures that arise when GCC attempts, on its own initiative, to use the
linker plugin.

At first I wished there was a simpler, off-band way to tell it not to
use the LTO plugin, that didn't pollute the command line or the test
results, but before I even looked around for some such way, I figured it
would be useful to have information that the linker plugin was not used
in a particular test run, so I went for this patch instead.
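A minimal sketch of the fallback being described (the variable names are illustrative assumptions; the real logic lives in the DejaGnu harness, not a shell script):

```shell
#!/bin/sh
# Assumed result of the harness's runtime probe of the linker plugin;
# in the real harness this comes from an actual link test, not a literal.
linker_plugin_works=no

lto_flags="-flto"
if [ "$linker_plugin_works" = no ]; then
  # Append the opt-out visibly, so every logged test line records that
  # the plugin was not used, instead of failing or guessing silently.
  lto_flags="$lto_flags -fno-use-linker-plugin"
fi
echo "compile flags: $lto_flags"
```

The point is the visibility: the extra flag shows up in each logged test line, so a FAIL or XFAIL unambiguously records whether the plugin was in play.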

Ok to install?

Comments

Jakub Jelinek June 26, 2012, 9:09 p.m. UTC | #1
On Tue, Jun 26, 2012 at 06:04:54PM -0300, Alexandre Oliva wrote:
> I test i686-linux-gnu in a presumably unusual setting: it's an
> x86_64-linux-gnu system, and I've configured the GCC build to use as
> builddev tools wrapper scripts for as, ld, gnatmake and gcc that add
> flags that make them default to 32-bit.
> 
> This worked fine for regression testing, but I've recently realized
> (with the PR49888/53671 mishap) that I'm getting tons of LTO testsuite
> failures (before and after, so no regression), because the 32-bit LTO
> plugin built in this setting can't possibly be used by the 64-bit linker
> installed on the system.  Obviously, passing -melf_i386 to the linker
> through the wrapper is not enough for it to be able to dlopen a 32-bit
> plugin ;-)
> 
> I'm considering installing alternate 32-bit tools on my system for
> better testing, but I figured I'd better fix the test harness before
> tackling this: if we find that the LTO plugin doesn't work, we'd better
> not use -flto without -fno-use-linker-plugin, to avoid the spurious
> failures that arise when GCC attempts, on its own initiative, to use the
> linker plugin.
> 
> At first I wished there was a simpler, off-band way to tell it not to
> use the LTO plugin, that didn't pollute the command line or the test
> results, but before I even looked around for some such way, I figured it
> would be useful to have information that the linker plugin was not used
> in a particular test run, so I went for this patch instead.

I don't think testsuite harness is the right spot to deal with it, IMNSHO
gcc just shouldn't default to using the linker plugin solely based on
ld --version, it should take into account whether that linker can load
the plugin.
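One way to check loadability rather than version strings, sketched under the assumption of a GNU ld that accepts --plugin (the plugin path here is an illustrative assumption):

```shell
#!/bin/sh
# Sketch: ask the linker to actually load the plugin instead of trusting
# `ld --version`.  A 64-bit ld handed a 32-bit plugin fails here even
# though its version string looks plugin-capable.
plugin=/usr/libexec/gcc/i686-linux-gnu/liblto_plugin.so
if ld --plugin "$plugin" --version >/dev/null 2>&1; then
  result=plugin-loads
else
  result=plugin-does-not-load
fi
echo "$result"
```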

Right now I'm using the following hack in my ld wrapper script for
i686-linux builds on x86_64:
#!/bin/sh
case "$*" in
  --version) cat <<\EOF
GNU ld version 2.20.52.0.1-10.fc17 20100131
Copyright 2012 Free Software Foundation, Inc.
This program is free software; you may redistribute it under the terms of
the GNU General Public License version 3 or (at your option) a later
version.
This program has absolutely no warranty.
EOF
  exit 0;;
esac
exec /usr/bin/ld -m elf_i386 -L /usr/lib/ "$@"

(it lies about the ld version and thus gcc doesn't assume ld plugin will
work).

	Jakub
H.J. Lu June 26, 2012, 9:28 p.m. UTC | #2
On Tue, Jun 26, 2012 at 2:04 PM, Alexandre Oliva <aoliva@redhat.com> wrote:
> I test i686-linux-gnu in a presumably unusual setting: it's an
> x86_64-linux-gnu system, and I've configured the GCC build to use as
> builddev tools wrapper scripts for as, ld, gnatmake and gcc that add
> flags that make them default to 32-bit.
>
> This worked fine for regression testing, but I've recently realized
> (with the PR49888/53671 mishap) that I'm getting tons of LTO testsuite
> failures (before and after, so no regression), because the 32-bit LTO
> plugin built in this setting can't possibly be used by the 64-bit linker
> installed on the system.  Obviously, passing -melf_i386 to the linker
> through the wrapper is not enough for it to be able to dlopen a 32-bit
> plugin ;-)

I am using this Makefile fragment to bootstrap and test
-m32 and -mx32 GCC on Linux/x86-64:

ifneq ($(BUILD-ARCH),$(CPU))
ifeq (i386,$(ARCH))
TARGET-FLAGS=$(TARGET)
CC=gcc -m32
CXX=g++ -m32
FLAGS-TO-PASS+=CC="$(CC)"
FLAGS-TO-PASS+=CXX="$(CXX)"
# Need 32-bit linker for LTO.
PATH:=/usr/local32/bin:$(PATH)
endif

ifeq (x32,$(ARCH))
CC=gcc -mx32
CXX=g++ -mx32
FLAGS-TO-PASS+=CC="$(CC)"
FLAGS-TO-PASS+=CXX="$(CXX)"
# Need x32 linker for LTO.
PATH:=/usr/localx32/bin:$(PATH)
endif
endif

[hjl@gnu-32 gcc-32bit]$ file /usr/localx32/bin/ld
/usr/localx32/bin/ld: ELF 32-bit LSB executable, x86-64, version 1
(SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.38,
BuildID[sha1]=0x85a2821594e122d4fc60741e2664c2b57888682e, not stripped
[hjl@gnu-32 gcc-32bit]$ file /usr/local32/bin/ld
/usr/local32/bin/ld: ELF 32-bit LSB executable, Intel 80386, version 1
(SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.9,
not stripped
[hjl@gnu-32 gcc-32bit]$
Mike Stump June 26, 2012, 10:39 p.m. UTC | #3
On Jun 26, 2012, at 2:04 PM, Alexandre Oliva wrote:
> I test i686-linux-gnu in a presumably unusual setting

I like the setup and testing...

> This worked fine for regression testing, but I've recently realized
> (with the PR49888/53671 mishap) that I'm getting tons of LTO testsuite
> failures (before and after, so no regression), because the 32-bit LTO
> plugin built in this setting can't possibly be used by the 64-bit linker
> installed on the system.  Obviously, passing -melf_i386 to the linker
> through the wrapper is not enough for it to be able to dlopen a 32-bit
> plugin ;-)

So, let's kick this back to the gcc list for all the interested parties to chime in on...  I'd rather have 5 interested people architect the right, nice solution that is engineered to work and then use it.  I don't like any of the solutions so far.  On darwin, we'd just have the plugin and even ld be fat and then one could load that on any architecture, but that's cheating.  H.J.'s solution seems like the most reasonable short-term solution, though I kinda hate all the magic bits one needs for the environment.  Hum, thinking about this for a second...  If we build the plugin after sensing the 64-bitness of ld, using the flags appropriate for the linker...  That'd be nice...  if people don't want to do that, then failing the build early when mismatched and simply documenting that one must have a 32-bit linker given to gcc for the build to work, at least would be more honest.
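The configure-time sensing suggested here could look roughly like the following (the `file` output is hard-coded for illustration; a real check would run `file` on the detected ld, as the comment notes):

```shell
#!/bin/sh
# Sketch of sensing the linker's bitness before deciding whether the
# LTO plugin we are about to build could ever be loaded by it.
# Mocked output for determinism; real code would use something like:
#   ld_file_output=$(file -L "$(command -v ld)")
ld_file_output="/usr/bin/ld: ELF 64-bit LSB executable, x86-64"
plugin_bits=32   # bitness the plugin would be built with (-m32 host)

case "$ld_file_output" in
  *"ELF ${plugin_bits}-bit"*) decision="match: build and use the plugin" ;;
  *) decision="mismatch: fail early or disable the plugin" ;;
esac
echo "$decision"
```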
H.J. Lu June 26, 2012, 11:38 p.m. UTC | #4
On Tue, Jun 26, 2012 at 3:39 PM, Mike Stump <mikestump@comcast.net> wrote:
> On Jun 26, 2012, at 2:04 PM, Alexandre Oliva wrote:
>> I test i686-linux-gnu in a presumably unusual setting
>
> I like the setup and testing...
>
>> This worked fine for regression testing, but I've recently realized
>> (with the PR49888/53671 mishap) that I'm getting tons of LTO testsuite
>> failures (before and after, so no regression), because the 32-bit LTO
>> plugin built in this setting can't possibly be used by the 64-bit linker
>> installed on the system.  Obviously, passing -melf_i386 to the linker
>> through the wrapper is not enough for it to be able to dlopen a 32-bit
>> plugin ;-)
>
> So, let's kick this back to the gcc list for all the interested parties to chime in on...  I'd rather have 5 interested people architect the right, nice solution that is engineered to work and then use it.  I don't like any of the solutions so far.  On darwin, we'd just have the plugin and even ld be fat and then one could load that on any architecture, but that's cheating.  H.J.'s solution seems like the most reasonable short-term solution, though I kinda hate all the magic bits one needs for the environment.  Hum, thinking about this for a second...  If we build the plugin after sensing the 64-bitness of ld, using the flags appropriate for the linker...  That'd be nice...  if people don't want to do that, then failing the build early when mismatched and simply documenting that one must have a 32-bit linker given to gcc for the build to work, at least would be more honest.

Bootstrap/test -m32/-mx32 with LTO on -m64 is similar to a cross-compile,
in the sense that the native linker may not work with the plugin.  We just
need to make the right linker available to GCC.  My kludge uses PATH.
One can also include the binutils source in the GCC source tree.
Richard Biener June 27, 2012, 8:41 a.m. UTC | #5
On Tue, Jun 26, 2012 at 11:28 PM, H.J. Lu <hjl.tools@gmail.com> wrote:
> On Tue, Jun 26, 2012 at 2:04 PM, Alexandre Oliva <aoliva@redhat.com> wrote:
>> I test i686-linux-gnu in a presumably unusual setting: it's an
>> x86_64-linux-gnu system, and I've configured the GCC build to use as
>> builddev tools wrapper scripts for as, ld, gnatmake and gcc that add
>> flags that make them default to 32-bit.
>>
>> This worked fine for regression testing, but I've recently realized
>> (with the PR49888/53671 mishap) that I'm getting tons of LTO testsuite
>> failures (before and after, so no regression), because the 32-bit LTO
>> plugin built in this setting can't possibly be used by the 64-bit linker
>> installed on the system.  Obviously, passing -melf_i386 to the linker
>> through the wrapper is not enough for it to be able to dlopen a 32-bit
>> plugin ;-)
>
> I am using this Makefile fragment to bootstrap and test
> -m32 and -mx32 GCC on Linux/x86-64:
>
> ifneq ($(BUILD-ARCH),$(CPU))
> ifeq (i386,$(ARCH))
> TARGET-FLAGS=$(TARGET)
> CC=gcc -m32
> CXX=g++ -m32
> FLAGS-TO-PASS+=CC="$(CC)"
> FLAGS-TO-PASS+=CXX="$(CXX)"
> # Need 32-bit linker for LTO.
> PATH:=/usr/local32/bin:$(PATH)
> endif
>
> ifeq (x32,$(ARCH))
> CC=gcc -mx32
> CXX=g++ -mx32
> FLAGS-TO-PASS+=CC="$(CC)"
> FLAGS-TO-PASS+=CXX="$(CXX)"
> # Need x32 linker for LTO.
> PATH:=/usr/localx32/bin:$(PATH)
> endif
> endif
>
> [hjl@gnu-32 gcc-32bit]$ file /usr/localx32/bin/ld
> /usr/localx32/bin/ld: ELF 32-bit LSB executable, x86-64, version 1
> (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.38,
> BuildID[sha1]=0x85a2821594e122d4fc60741e2664c2b57888682e, not stripped
> [hjl@gnu-32 gcc-32bit]$ file /usr/local32/bin/ld
> /usr/local32/bin/ld: ELF 32-bit LSB executable, Intel 80386, version 1
> (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.9,
> not stripped
> [hjl@gnu-32 gcc-32bit]$

So I suppose the above would be good to have in collect-ld?

Alexandre Oliva June 27, 2012, 9:07 a.m. UTC | #6
[Adding gcc@]

On Jun 26, 2012, "H.J. Lu" <hjl.tools@gmail.com> wrote:

> On Tue, Jun 26, 2012 at 3:39 PM, Mike Stump <mikestump@comcast.net> wrote:
>> On Jun 26, 2012, at 2:04 PM, Alexandre Oliva wrote:
>>> I test i686-linux-gnu in a presumably unusual setting
>> 
>> I like the setup and testing...
>> 
>>> This worked fine for regression testing, but I've recently realized
>>> (with the PR49888/53671 mishap) that I'm getting tons of LTO testsuite
>>> failures (before and after, so no regression), because the 32-bit LTO
>>> plugin built in this setting can't possibly be used by the 64-bit linker
>>> installed on the system.  Obviously, passing -melf_i386 to the linker
>>> through the wrapper is not enough for it to be able to dlopen a 32-bit
>>> plugin ;-)

>> So, let's kick this back to the gcc list for all the interested
>> parties to chime in on...  I'd rather have 5 interested people
>> architect the right, nice solution that is engineered to work and
>> then use it.

>> H.J.'s solution seems like the most reasonable short term solution

It's not a “solution”, it's just the same local arrangement I mentioned
I was leaning towards, after fixing the problem in the test harness,
that lets GCC use the plugin and fail even after explicitly testing for
that.  I don't see how that can possibly be perceived as not a bug.
Which is not to say that there aren't *other* bugs in place, and that
some of them might alleviate this one.

>> If we build the plugin after sensing the 64-bitness of ld, using the
>> flags appropriate for the linker...

Then we'd be disobeying the host setting specified or detected by
configure.

>> failing the build early when mismatched

Why?  We don't demand a working plugin.  Indeed, we disable the use of
the plugin if we find a linker that doesn't support it.  We just don't
account for the possibility of finding a linker that supports plugins,
but that doesn't support the one we'll build later.

> Bootstrap/test -m32/-mx32 with LTO on -m64 is similar to a cross-compile,
> in the sense that the native linker may not work with the plugin.  We just
> need to make the right linker available to GCC.  My kludge uses PATH.
> One can also include the binutils source in the GCC source tree.

These are all reasonable suggestions to make the plugin work.  But
that's not what the patch addresses, namely what to do during testing
when the plugin is found to NOT work.

As I wrote in the original post, even if we were to detect that the
plugin is not supported and refrain from building it and testing it,
it's more valuable that the test summaries explicitly indicate, in each
FAIL or XFAIL, that the plugin was not used, rather than make room for
uncertainty as to whether the plugin was implicitly used or not.
Mike Stump June 27, 2012, 7:15 p.m. UTC | #7
On Jun 27, 2012, at 2:07 AM, Alexandre Oliva wrote:
> Why?  We don't demand a working plugin.  Indeed, we disable the use of
> the plugin if we find a linker that doesn't support it.  We just don't
> account for the possibility of finding a linker that supports plugins,
> but that doesn't support the one we'll build later.

If this is the preferred solution, then having configure check the 64-bitness of ld and turning off the plugin altogether on mismatches sounds like a reasonable course of action to me.
Alexandre Oliva June 28, 2012, 7:16 a.m. UTC | #8
On Jun 27, 2012, Mike Stump <mikestump@comcast.net> wrote:

> On Jun 27, 2012, at 2:07 AM, Alexandre Oliva wrote:
>> Why?  We don't demand a working plugin.  Indeed, we disable the use of
>> the plugin if we find a linker that doesn't support it.  We just don't
>> account for the possibility of finding a linker that supports plugins,
>> but that doesn't support the one we'll build later.

> If this is the preferred solution, then having configure check the
> 64-bitness of ld and turning off the plugin altogether on mismatches
> sounds like a reasonable course of action to me.

I'd be very surprised if I asked for an i686 native build to package and
install elsewhere, and didn't get a plugin just because the build-time
linker wouldn't have been able to run the plugin.
Jakub Jelinek June 28, 2012, 7:21 a.m. UTC | #9
On Thu, Jun 28, 2012 at 04:16:55AM -0300, Alexandre Oliva wrote:
> On Jun 27, 2012, Mike Stump <mikestump@comcast.net> wrote:
> 
> > On Jun 27, 2012, at 2:07 AM, Alexandre Oliva wrote:
> >> Why?  We don't demand a working plugin.  Indeed, we disable the use of
> >> the plugin if we find a linker that doesn't support it.  We just don't
> >> account for the possibility of finding a linker that supports plugins,
> >> but that doesn't support the one we'll build later.
> 
> > If this is the preferred solution, then having configure check the
> > 64-bitness of ld and turning off the plugin altogether on mismatches
> > sounds like a reasonable course of action to me.
> 
> I'd be very surprised if I asked for an i686 native build to package and
> install elsewhere, and didn't get a plugin just because the build-time
> linker wouldn't have been able to run the plugin.

Not disable plugin support altogether, but disable assuming the linker
supports the plugin.  If user uses explicit -f{,no-}use-linker-plugin,
it is his problem to care that the linker has support.  But the problem
is that when build-time ld is new enough gcc assumes it has to support
the plugin.  And that is not the case.

	Jakub
Alexandre Oliva June 28, 2012, 11:39 a.m. UTC | #10
On Jun 28, 2012, Jakub Jelinek <jakub@redhat.com> wrote:

> On Thu, Jun 28, 2012 at 04:16:55AM -0300, Alexandre Oliva wrote:
>> I'd be very surprised if I asked for an i686 native build to package and
>> install elsewhere, and didn't get a plugin just because the build-time
>> linker wouldn't have been able to run the plugin.

> Not disable plugin support altogether, but disable assuming the linker
> supports the plugin.

That still doesn't sound right to me: why should the compiler refrain
from using a perfectly functional linker plugin on the machine where
it's installed (not where it's built)?

Also, this scenario of silently deciding whether or not to use the
linker plugin could bring us to different test results for the same
command lines.  I don't like that.
Richard Biener June 28, 2012, 11:44 a.m. UTC | #11
On Thu, Jun 28, 2012 at 1:39 PM, Alexandre Oliva <aoliva@redhat.com> wrote:
> On Jun 28, 2012, Jakub Jelinek <jakub@redhat.com> wrote:
>
>> On Thu, Jun 28, 2012 at 04:16:55AM -0300, Alexandre Oliva wrote:
>>> I'd be very surprised if I asked for an i686 native build to package and
>>> install elsewhere, and didn't get a plugin just because the build-time
>>> linker wouldn't have been able to run the plugin.
>
>> Not disable plugin support altogether, but disable assuming the linker
>> supports the plugin.
>
> That still doesn't sound right to me: why should the compiler refrain
> from using a perfectly functional linker plugin on the machine where
> it's installed (not where it's built)?
>
> Also, this scenario of silently deciding whether or not to use the
> linker plugin could bring us to different test results for the same
> command lines.  I don't like that.

I don't like that we derive the default setting this way either.  In the end
I would like us to arrive at the point that LTO does not work at all without
a linker plugin.

Richard.

Mike Stump June 28, 2012, 1:50 p.m. UTC | #12
On Jun 28, 2012, at 12:16 AM, Alexandre Oliva <aoliva@redhat.com> wrote:
> On Jun 27, 2012, Mike Stump <mikestump@comcast.net> wrote:
>> On Jun 27, 2012, at 2:07 AM, Alexandre Oliva wrote:
>>> Why?  We don't demand a working plugin.  Indeed, we disable the use of
>>> the plugin if we find a linker that doesn't support it.  We just don't
>>> account for the possibility of finding a linker that supports plugins,
>>> but that doesn't support the one we'll build later.
> 
>> If this is the preferred solution, then having configure check the
>> 64-bitness of ld and turning off the plugin altogether on mismatches
>> sounds like a reasonable course of action to me.
> 
> I'd be very surprised if I asked for an i686 native build to package and
> install elsewhere, and didn't get a plugin just because the build-time
> linker wouldn't have been able to run the plugin.

The architecture of the compiler, last I knew it, was to smell out the feature set of the system, including libraries, headers, assemblers and linkers.  It uses this as static configuration parameters for the build.  One is not free to take the built compiler to a differently configured system at run time.

Now, with that as a backdrop, how exactly do you ever plan on using the plugin?  If there is no possible use for it, why then build it?

So, even if there is a way to toggle the feature on, which would mean the plug-in should be built, it should still be off initially, which it isn't.
Mike Stump June 28, 2012, 2:03 p.m. UTC | #13
On Jun 28, 2012, at 4:39 AM, Alexandre Oliva <aoliva@redhat.com> wrote:
> On Jun 28, 2012, Jakub Jelinek <jakub@redhat.com> wrote:
> 
>> On Thu, Jun 28, 2012 at 04:16:55AM -0300, Alexandre Oliva wrote:
>>> I'd be very surprised if I asked for an i686 native build to package and
>>> install elsewhere, and didn't get a plugin just because the build-time
>>> linker wouldn't have been able to run the plugin.
> 
>> Not disable plugin support altogether, but disable assuming the linker
>> supports the plugin.
> 
> That still doesn't sound right to me: why should the compiler refrain
> from using a perfectly functional linker plugin on the machine where
> it's installed (not where it's built)?

See your point below for one reason.  The next would be because it would be a speed hit to re-check at runtime the qualities of the linker and do something different.  If the system had an architecture to avoid the speed hit and people wanted to do the work to support the runtime reconfigure, that'd be fine with me.  I don't think your system supports this, and I don't think you want to do that work, do you?

> Also, this scenario of silently deciding whether or not to use the
> linker plugin could bring us to different test results for the same
> command lines.  I don't like that.

Right, which is why the static configuration of the host system at build time is forever after an invariant.  The linker is smelled, it doesn't support plugins, therefore we can't ever use it, therefore we never build it...
Jakub Jelinek June 28, 2012, 2:08 p.m. UTC | #14
On Thu, Jun 28, 2012 at 07:03:37AM -0700, Mike Stump wrote:
> > Also, this scenario of silently deciding whether or not to use the
> > linker plugin could bring us to different test results for the same
> > command lines.  I don't like that.
> 
> Right, which is why the static configuration of the host system at build
> time is forever after an invariant.  The linker is smelled, it doesn't
> support plugins, therefore we can't ever use it, therefore we never build
> it...

This test is not about whether to build the plugin, but whether to force
using it by default.  And to be able to use it by default, you need a
guarantee that all the linkers you'll use it with do support the plugin.
Therefore, if the build-time linker doesn't support it, I think it is just
fine to assume that not all of your linkers support the plugin and not to
enable it by default.

	Jakub
Richard Biener June 28, 2012, 2:49 p.m. UTC | #15
On Thu, Jun 28, 2012 at 4:08 PM, Jakub Jelinek <jakub@redhat.com> wrote:
> On Thu, Jun 28, 2012 at 07:03:37AM -0700, Mike Stump wrote:
>> > Also, this scenario of silently deciding whether or not to use the
>> > linker plugin could bring us to different test results for the same
>> > command lines.  I don't like that.
>>
>> Right, which is why the static configuration of the host system at build
>> time is forever after an invariant.  The linker is smelled, it doesn't
>> support plugins, therefore we can't ever use it, therefore we never build
>> it...
>
> This test is not about whether to build the plugin, but whether to force
> using it by default.  And to be able to use it by default, you need a
> guarantee that all the linkers you'll use it with do support the plugin.
> Therefore, if the build-time linker doesn't support it, I think it is just
> fine to assume that not all of your linkers support the plugin and not to
> enable it by default.

I'd like to have a more reliable way to enable/disable the default use
of the linker-plugin then.  Something in config.gcc maybe, or at least
a flag I can specify at configure time.  If the default in config.gcc is
detected to not work then explicitly changing that (or confirming it)
would be required - otherwise we'd error out.
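Concretely, such a knob might be used like this at configure time (--with-plugin-ld is a real GCC configure option; --enable-linker-plugin-by-default is a hypothetical name standing in for the explicit override being asked for):

```shell
# Hypothetical configure invocation for the i686-on-x86_64 setup in this
# thread; paths and the --enable-linker-plugin-by-default flag are
# illustrative assumptions, not existing GCC options.
./configure --host=i686-linux-gnu --target=i686-linux-gnu \
    --with-plugin-ld=/usr/local32/bin/ld \
    --enable-linker-plugin-by-default
```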

Richard.
Alexandre Oliva June 28, 2012, 10:19 p.m. UTC | #16
On Jun 28, 2012, Mike Stump <mikestump@comcast.net> wrote:

> On Jun 28, 2012, at 4:39 AM, Alexandre Oliva <aoliva@redhat.com> wrote:
>> That still doesn't sound right to me: why should the compiler refrain
>> from using a perfectly functional linker plugin on the machine where
>> it's installed (not where it's built?

> See your point below for one reason.

My point below suggests a reason for us to *verbosely* indicate the
change, e.g., in the test command line, like my patch does.

> The next would be because it would be a speed hit to re-check at
> runtime the qualities of the linker and do something different.

But then, our testsuite *does* re-check at runtime, but without my
patch, we're not using completely the result of the test.

> If the system had an architecture to avoid the speed hit and people
> wanted to do the work to support the runtime reconfigure, that'd be
> fine with me.

Me too, but I'm not arguing for or against that.  I'm just arguing for a
change to the test harness that will use the result of the dynamic test,
and verbosely so.

>> Also, this scenario of silently deciding whether or not to use the
>> linker plugin could bring us to different test results for the same
>> command lines.  I don't like that.

> Right, which is why the static configuration of the host system at
> build time is forever after an invariant.

That doesn't even match *current* reality.

We can run the testsuite on a machine that's neither the build system
nor the run-time target.  That's presumably why the test harness tests
whether the plugin works.  And that's one reason why we should use that
result instead of letting the compiler override it.

> The linker is smelled, it doesn't support plugins, therefore we can't
> ever use it, therefore we never build it...

'cept even in the build system it *does* support plugins, so it's just
reasonable for us to build the plugin, and for the compiler to expect to
be able to use it.

Now, this will work just fine if the compiler is installed on a system
that matches the host=target (i.e., native compiler) triplet specified
when building the compiler.  It might not work on the build machine, but
that's irrelevant, for we're not supposed to be able to use the compiler
on the build machine.  It might not work on the test machine, and that's
why the test harness tests for plugin support.  But the test harness
doesn't communicate back to the compiler its findings without my patch,
so if the test system doesn't happen to support plugins, we'd get tons
of pointless failures.

If we change the compiler configuration so that it disables the plugin
just because it guesses some potential incompatibility between the
linker and the plugin we're about to build, we'll lose features and
testing.

If we change the compiler to detect it dynamically, we'll get ambiguous
test results.  “did this -flto test use the plugin or not?”

Why would you want any of the scenarios in the two paragraphs above?

If you wouldn't, what do you have against the patch that complements the
plugin detection on the test machine in the test harness?
Mike Stump June 29, 2012, 8:14 p.m. UTC | #17
On Jun 28, 2012, at 3:19 PM, Alexandre Oliva wrote:
> On Jun 28, 2012, Mike Stump <mikestump@comcast.net> wrote:
>> The next would be because it would be a speed hit to re-check at
>> runtime the qualities of the linker and do something different.
> 
> But then, our testsuite *does* re-check at runtime, but without my
> patch, we're not using completely the result of the test.

Hum, we're not making progress in the conversation.  Let me reboot it.  First, let's get to the heart of the matter.  That is the behavior of the compiler.  Do you think it works perfectly and needs no changing in this area, or if not, what changes do you want?

My take was, the compiler should not try and use the plugin that won't work, and that this should be a static config bit built up from the usual config time methods for the host system.  Do you agree, if not why, and what is your take?
Alexandre Oliva July 2, 2012, 11:06 a.m. UTC | #18
On Jun 29, 2012, Mike Stump <mikestump@comcast.net> wrote:

> First, let's get to the heart of the matter.  That is the behavior of
> the compiler.

That's a distraction in the context of a patch to improve a feature
that's already present in the testsuite machinery, isn't it?  I have no
objection to discussing this other topic you want to bring forth, but
for reasons already stated and restated I don't think it precludes the
proposed patch and the improvements to testsuite result reporting it
brings about.

> Do you think it works perfectly and needs no changing in this area

I think the compiler is working just fine.  It is told at build time
whether or not to expect a linker with plugin support at run time, and
behaves accordingly.

Configure detects that based on linker version, which is in line with
various other binutils feature tests we have in place, so I don't see
that the test *needs* changing.  If it were to change in such a way
that, in a “native cross” build scenario, it failed to detect plugin
support that is actually available on the equivalent linker one would
find on the configured host=target run time platform, I'd be inclined to
regard that as a regression and a bug.

> My take was, the compiler should not try and use the plugin that won't work, and that this should be a static config bit built up from the usual config time methods for the host system.  Do you agree, if not why, and what is your take?

I agree with that.  Indeed, it seems like a restatement of what I just
wrote above, unless one thinks configure should assume the user lied in
the given triplets.  Because, really, we *are* lying to configure when
we say we're building an i686-linux-gnu native compiler natively when the
build system is actually an x86_64-linux-gnu with some wrapper scripts that
approximate i686-linux-gnu.  If we tell configure we're building a
compiler for an i686-linux-gnu system, configure should listen to us,
rather than second-guess us.  And if we fail to provide it with an
environment that is sufficiently close to what we asked for, it's
entirely our own fault, rather than configure's fault for not realizing
we were cheating and compensating for our lies.

Now, of course, none of this goes against an ability to explicitly
specify whether or not to build a plugin, or to expect it to work with
the linker-for-target on host.  But I don't think we should change the
current default detection for the sake of the i686-native-on-x86_64
scenario, for it really is the best we can do in such a
native-but-not-quite scenario, for we can't possibly test properties of
the actual native linker if what's available at build time is some other
linker.

What we *can* and *should* do, IMHO, is to improve the test machinery,
so that if we happen to test a toolchain built for i686 on a non-i686
system whose linker fails to load the i686 plugin, we don't waste time
testing stuff the testsuite itself has already detected as
nonfunctional, and we avoid the thousands of failures that would ensue.

Another thing we may want to do is document how to test GCC in such
fake-native settings, so that people can refer to it, save duplicate
effort, and avoid inconsistent results.
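Such documentation could start from the wrapper-script setup already described in this thread, e.g. (directory and flag choices are examples, not a prescribed layout):

```shell
#!/bin/sh
# Example fake-native setup: wrappers that make x86_64 binutils default
# to 32-bit output, to be placed early in PATH for the GCC build.
bindir="${TMPDIR:-/tmp}/bin32"
mkdir -p "$bindir"

cat > "$bindir/as" <<'EOF'
#!/bin/sh
exec /usr/bin/as --32 "$@"
EOF

cat > "$bindir/ld" <<'EOF'
#!/bin/sh
exec /usr/bin/ld -m elf_i386 "$@"
EOF

chmod +x "$bindir/as" "$bindir/ld"
echo "wrappers installed in $bindir"
```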
Richard Biener July 2, 2012, 11:24 a.m. UTC | #19
On Mon, Jul 2, 2012 at 1:06 PM, Alexandre Oliva <aoliva@redhat.com> wrote:
> On Jun 29, 2012, Mike Stump <mikestump@comcast.net> wrote:
>
>> First, let's get to the heart of the matter.  That is the behavior of
>> the compiler.
>
> That's a distraction in the context of a patch to improve a feature
> that's already present in the testsuite machinery, isn't it?  I have no
> objection to discussing this other topic you want to bring forth, but
> for reasons already stated and restated I don't think it precludes the
> proposed patch and the improvements to testsuite result reporting it
> brings about.
>
>> Do you think it works perfectly and needs no changing in this area
>
> I think the compiler is working just fine.  It is told at build time
> whether or not to expect a linker with plugin support at run time, and
> behaves accordingly.
>
> Configure detects that based on linker version, which is in line with
> various other binutils feature tests we have in place, so I don't see
> that the test *needs* changing.  If it were to change in such a way
> that, in a “native cross” build scenario, it failed to detect plugin
> support that is actually available on the equivalent linker one would
> find on the configured host=target run time platform, I'd be inclined to
> regard that as a regression and a bug.
>
>> My take was, the compiler should not try and use the plugin that won't work, and that this should be a static config bit built up from the usual config time methods for the host system.  Do you agree, if not why, and what is your take?
>
> I agree with that.  Indeed, it seems like a restatement of what I just
> wrote above, unless one thinks configure should assume the user lied in
> the given triplets.  Because, really, we *are* lying to configure when
> we say we're building an i686-linux-gnu native compiler natively when
> the build system is actually an x86_64-linux-gnu with some wrapper scripts that
> approximate i686-linux-gnu.  If we tell configure we're building a
> compiler for an i686-linux-gnu system, configure should listen to us,
> rather than second-guess us.  And if we fail to provide it with an
> environment that is sufficiently close to what we asked for, it's
> entirely our own fault, rather than configure's fault for not realizing
> we were cheating and compensating for our lies.
>
> Now, of course, none of this goes against an ability to explicitly
> specify whether or not to build a plugin, or to expect it to work with
> the linker-for-target on host.  But I don't think we should change the
> current default detection for the sake of the i686-native-on-x86_64
> scenario, for it really is the best we can do in such a
> native-but-not-quite scenario, for we can't possibly test properties of
> the actual native linker if what's available at build time is some other
> linker.
>
> What we *can* and *should* do, IMHO, is to improve the test machinery,
> so that if we happen to test a toolchain built for i686 on a non-i686
> system whose linker fails to load the i686 plugin, we don't waste time
> testing stuff the testsuite itself has already detected as
> nonfunctional, and we avoid the thousands of failures that would ensue.

If you consider what happens if we break the lto-plugin so that it fails
loading or crashes the linker, then it's clear that we _do_ want to see
this effect in the testsuite as FAIL.  Merely making tests UNSUPPORTED
in this case is not what we want ...

> Another thing we may want to do is document how to test GCC in such
> fake-native settings, so that people can refer to it, save duplicate
> effort, and avoid inconsistent results.

... which means that maybe we should not encourage such settings at all.

Richard.

> --
> Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
> You must be the change you wish to see in the world. -- Gandhi
> Be Free! -- http://FSFLA.org/   FSF Latin America board member
> Free Software Evangelist      Red Hat Brazil Compiler Engineer
Mike Stump July 2, 2012, 6:25 p.m. UTC | #20
On Jul 2, 2012, at 4:06 AM, Alexandre Oliva wrote:
> On Jun 29, 2012, Mike Stump <mikestump@comcast.net> wrote:
>> First, let us get to the heart of the matter.  That is the behavior of
>> the compiler.
> 
> That's a distraction in the context of a patch to improve a feature
> that's already present in the testsuite machinery, isn't it?

Without having a compiler to test, there is little reason to have a testsuite.  The behavior of the compiler is what drives the testsuite?!  The testsuite exists only to test the behavior of the compiler we would like to see.  By defining the behavior of the compiler we'd like to see, we define exactly what we want to test.  If we can't agree on the compiler, then by definition we can't agree on the testsuite.  So, first things first, we have to resolve the features of the compiler, so that we know in what direction we are traveling.

If you disagree, you'd have to enlighten me what purpose the testsuite serves.  I'm happy to listen.

> I have no
> objection to discussing this other topic you want to bring forth, but
> for reasons already stated and restated I don't think it precludes the
> proposed patch and the improvements to testsuite result reporting it
> brings about.

If the testsuite can paper over configuration bits that are wrong and re-adjust the compiler, but the compiler can't, then you wind up with testsuite results that don't reflect the expected runtime behavior of the compiler.  The testsuite tries to reproduce the environment that the compiler will see when the user uses it, so as to test the compiler as faithfully as it can, as the user will see it.  It is the user's obligation to provide, during build and test, the environment the compiler is to be used in.

>> Do you think it works perfectly and needs no changing in this area
> 
> I think the compiler is working just fine.

Ah, then I'd leave you and Jakub to sort out whether the linker should use the plugin by default when the plugin just isn't going to work...  I think he preferred to fix the compiler not to enable the plugin by default in this case.  This is why agreement on the behavior of the compiler is critical.

>  It is told at build time
> whether or not to expect a linker with plugin support at run time, and
> behaves accordingly.
> 
> Configure detects that based on linker version, which is in line with
> various other binutils feature tests we have in place, so I don't see
> that the test *needs* changing.

Validating that the linker's 64-bitness matches the plugin's 64-bitness, in addition to validating the version number.
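A check along those lines could be sketched as follows. This is a hypothetical helper, not existing configure logic: real detection would run `file` on the actual ld and the freshly built liblto_plugin.so, and the `file`-style strings below are assumptions about typical GNU/Linux output:

```shell
# Hypothetical helper: do a linker and a plugin agree on ELF bitness,
# judging from the output `file` would print for each?
same_bitness () {
  a=$(printf '%s' "$1" | grep -o '[0-9][0-9]*-bit')
  b=$(printf '%s' "$2" | grep -o '[0-9][0-9]*-bit')
  test "$a" = "$b"
}

# The scenario from this thread: a 64-bit system ld asked to dlopen
# the 32-bit liblto_plugin.so built by the fake-native i686 toolchain.
if same_bitness "ELF 64-bit LSB executable" "ELF 32-bit LSB shared object"
then echo match
else echo mismatch    # the plugin cannot load, regardless of -melf_i386
fi
```

A configure-time check of this shape would let the build disable linker-plugin support up front instead of leaving the mismatch to surface as thousands of testsuite failures.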

> If it were to change in such a way
> that, in a “native cross” build scenario, it failed to detect plugin
> support that is actually available on the equivalent linker one would
> find on the configured host=target run time platform, I'd be inclined to
> regard that as a regression and a bug.

If I understand what you just said, I disagree.  The environment provided to configure is the final environment; it is what is actually available, and it is the one from which to make all decisions.  If the user doesn't like that, then the user is free to provide the environment more faithfully.

>> My take was, the compiler should not try and use the plugin that won't work, and that this should be a static config bit built up from the usual config time methods for the host system.  Do you agree, if not why, and what is your take?
> 
> I agree with that.

Odd.  So, does the compiler enable the plug-in by default when it can know that the plug-in isn't going to work?  I thought the entire point was that the compiler was using the plug-in and that the plug-in wasn't working?

> Indeed, it seems like a restatement of what I just
> wrote above, unless one thinks configure should assume the user lied in
> the given triplets.  Because, really, we *are* lying to configure when
> we say we're building an i686-linux-gnu native compiler natively when
> the build system is actually an x86_64-linux-gnu with some wrapper scripts that
> approximate i686-linux-gnu.  If we tell configure we're building a
> compiler for an i686-linux-gnu system, configure should listen to us,
> rather than second-guess us.  And if we fail to provide it with an
> environment that is sufficiently close to what we asked for, it's
> entirely our own fault, rather than configure's fault for not realizing
> we were cheating and compensating for our lies.

Jakub said disable by default, and I'm just going along with that as a given.  Since I agree with his position as well, I'd find it hard to argue against it.  If you want other than that, you'd have to find support for that position.

gcc builds for the environment provided.  Sorry if you don't agree with that.  The reason we do this is that gcc takes as gospel what you say.  If you say you have a 64-bit linker, then you have a 64-bit linker.  If you say you have a 32-bit linker, then you have a 32-bit linker.  When you say you have a 64-bit linker, it isn't going to assume that you have a 32-bit linker.

> Now, of course, none of this goes against an ability to explicitly
> specify whether or not to build a plugin, or to expect it to work with
> the linker-for-target on host.  But I don't think we should change the
> current default detection for the sake of the i686-native-on-x86_64
> scenario, for it really is the best we can do in such a
> native-but-not-quite scenario, for we can't possibly test properties of
> the actual native linker if what's available at build time is some other
> linker.

We can test, and we do test.  What is available at build time is what is available at run time.  There isn't some other linker.

> What we *can* and *should* do, IMHO, is to improve the test machinery,
> so that if we happen to test a toolchain built for i686 on a non-i686
> system whose linker fails to load the i686 plugin, we don't waste time
> testing stuff the testsuite itself has already detected as
> nonfunctional, and we avoid the thousands of failures that would ensue.

If the failures that are observed are the same failures the user would see in using the compiler, then I disagree with you: it is appropriate to show the failures.  If the user doesn't see the failures, then I would agree with you.

Patch

for  gcc/testsuite/ChangeLog
from  Alexandre Oliva  <aoliva@redhat.com>

	* lib/c-torture.exp (LTO_TORTURE_OPTIONS): Add -fno-use-linker-plugin
	if plugin detection does not succeed.
	* lib/gcc-dg.exp (LTO_TORTURE_OPTIONS): Likewise.
	* lib/lto.exp (LTO_OPTIONS): Likewise.

Index: gcc/testsuite/lib/c-torture.exp
===================================================================
--- gcc/testsuite/lib/c-torture.exp.orig	2012-06-25 19:15:07.490911571 -0300
+++ gcc/testsuite/lib/c-torture.exp	2012-06-25 19:15:32.927836277 -0300
@@ -61,8 +61,8 @@  if [check_effective_target_lto] {
       ]
     } else {
       set LTO_TORTURE_OPTIONS [list \
-	  { -O2 -flto -flto-partition=none } \
-	  { -O2 -flto }
+	  { -O2 -flto -fno-use-linker-plugin -flto-partition=none } \
+	  { -O2 -flto -fno-use-linker-plugin }
       ]
     }
 }
Index: gcc/testsuite/lib/gcc-dg.exp
===================================================================
--- gcc/testsuite/lib/gcc-dg.exp.orig	2012-06-25 19:15:09.617738045 -0300
+++ gcc/testsuite/lib/gcc-dg.exp	2012-06-25 19:15:33.161817189 -0300
@@ -92,8 +92,8 @@  if [check_effective_target_lto] {
       set gcc_force_conventional_output "-ffat-lto-objects"
     } else {
       set LTO_TORTURE_OPTIONS [list \
-	  { -O2 -flto -flto-partition=none } \
-	  { -O2 -flto }
+	  { -O2 -flto -fno-use-linker-plugin -flto-partition=none } \
+	  { -O2 -flto -fno-use-linker-plugin }
       ]
     }
 }
Index: gcc/testsuite/lib/lto.exp
===================================================================
--- gcc/testsuite/lib/lto.exp.orig	2012-06-25 19:15:14.617330140 -0300
+++ gcc/testsuite/lib/lto.exp	2012-06-25 19:15:34.966669945 -0300
@@ -77,12 +77,12 @@  proc lto_init { args } {
 	  ]
  	} else {
 	  set LTO_OPTIONS [list	\
-	      {-O0 -flto -flto-partition=none } \
-	      {-O2 -flto -flto-partition=none } \
-	      {-O0 -flto -flto-partition=1to1 } \
-	      {-O2 -flto -flto-partition=1to1 } \
-	      {-O0 -flto }	\
-	      {-O2 -flto}		\
+	      {-O0 -flto -fno-use-linker-plugin -flto-partition=none } \
+	      {-O2 -flto -fno-use-linker-plugin -flto-partition=none } \
+	      {-O0 -flto -fno-use-linker-plugin -flto-partition=1to1 } \
+	      {-O2 -flto -fno-use-linker-plugin -flto-partition=1to1 } \
+	      {-O0 -flto -fno-use-linker-plugin }			\
+	      {-O2 -flto -fno-use-linker-plugin }			\
 	  ]
 	}
     }