diff mbox

[fix PR71767 2/4 : Darwin configury] Arrange for ld64 to be detected as Darwin's linker

Message ID DF1AC536-D12C-40FB-90A4-2DDBC95BC93A@mentor.com
State New
Headers show

Commit Message

Iain Sandoe Nov. 6, 2016, 7:39 p.m. UTC
Hi Folks,

This is an initial patch in a series that converts Darwin's configury to detect ld64 features, rather than the current process of hard-coding them on target system version.

This adds an option --with-ld64[=version] that allows the configurer to specify that the Darwin ld64 linker is in use.  If the version is given then that will be used to determine the capabilities of the linker in native and canadian crosses.  For Darwin targets this flag will default to "on", since such targets require an ld64-compatible linker.

If a DEFAULT_LINKER is set via --with-ld= then this will also be tested to see if it is ld64.

The ld64 version is determined (unless overridden by --with-ld64=version) and this is exported for use in setting a default value for -mtarget-linker (needed for run-time code-gen changes to section choices).

In this initial patch, support for -rdynamic is converted to be detected at config time, or by the ld64 version if that is explicitly given (as an example of usage).
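As a hedged sketch of the detection step (variable names and the sample string here are illustrative, not necessarily what the patch itself uses), the version capture when the linker can be executed might look like:

```shell
# Sketch of extracting the ld64 version from `ld -v` output, which
# identifies the linker as "PROJECT:ld64-NNN.N".  The sample string
# below stands in for a real `ld -v 2>&1` capture.
ld_v_output='@(#)PROGRAM:ld  PROJECT:ld64-274.2'
gcc_ld64_version=$(echo "$ld_v_output" | sed -e 's/.*ld64-\([0-9.]*\).*/\1/')
echo "$gcc_ld64_version"
```

An explicit --with-ld64=version would then simply override the probed value with the user-supplied one.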

OK for trunk?
OK for open branches?
Iain

gcc/

2016-11-06  Iain Sandoe  <iain@codesourcery.com>

       PR target/71767
	* configure.ac (with-ld64): New configure option.
	(gcc_ld64_version): New variable and test.
	(gcc_cv_ld64_export_dynamic): New variable and test.
	* configure: Regenerate.
	* config.in: Likewise.
	* config/darwin.h: Use LD64_HAS_DYNAMIC export.
	(DEF_LD64): New, define.
	* config/darwin10.h (DEF_LD64): Update for this target version.
	* config/darwin12.h (LINK_GCC_C_SEQUENCE_SPEC): Remove rdynamic test.
	(DEF_LD64): Update for this target version.
---
 gcc/config/darwin.h   | 16 ++++++++++-
 gcc/config/darwin10.h |  5 ++++
 gcc/config/darwin12.h |  7 ++++-
 gcc/configure.ac      | 74 +++++++++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 100 insertions(+), 2 deletions(-)

Comments

Mike Stump Nov. 7, 2016, 5:48 p.m. UTC | #1
On Nov 6, 2016, at 11:39 AM, Iain Sandoe <Iain_Sandoe@mentor.com> wrote:
> This is an initial patch in a series that converts Darwin's configury to detect ld64 features, rather than the current process of hard-coding them on target system version.

So, I really do hate to ask, but does this have to be a config option?  Normally, we'd just have configure examine things by itself.  For canadian crosses, there should be enough state present to key off of directly, especially if they are wired up to work.

I'd rather have the thing that doesn't just work without that config flag, just work.  I'd like to think I can figure out how to make it just work, if given an idea of what doesn't actually work.

Essentially, you do the operation that doesn't work, detect it failed to work, and then you know it didn't work.
Iain Sandoe Nov. 7, 2016, 5:59 p.m. UTC | #2
> On 7 Nov 2016, at 09:51, Mike Stump <mikestump@comcast.net> wrote:
> 
> [ possible dup ]
> 
>> Begin forwarded message:
>> 
>> From: Mike Stump <mrs@mrs.kithrup.com>
>> Subject: Re: [PATCH fix PR71767 2/4 : Darwin configury] Arrange for ld64 to be detected as Darwin's linker
>> Date: November 7, 2016 at 9:48:53 AM PST
>> To: Iain Sandoe <Iain_Sandoe@mentor.com>
>> Cc: GCC Patches <gcc-patches@gcc.gnu.org>, Jeff Law <law@redhat.com>
>> 
>> On Nov 6, 2016, at 11:39 AM, Iain Sandoe <Iain_Sandoe@mentor.com> wrote:
>>> This is an initial patch in a series that converts Darwin's configury to detect ld64 features, rather than the current process of hard-coding them on target system version.
>> 
>> So, I really do hate to ask, but does this have to be a config option?  Normally, we'd just have configure examine things by itself.  For canadian crosses, there should be enough state present to key off of directly, especially if they are wired up to work.
>> 
>> I'd rather have the thing that doesn't just work without that config flag, just work.  I'd like to think I can figure out how to make it just work, if given an idea of what doesn't actually work.
>> 
>> Essentially, you do the operation that doesn't work, detect it failed to work, and then you know it didn't work.

Well, if you can run the tool, that’s fine - I wanted to cover the base where we have a native or canadian that’s using a newer ld64 than is installed by the ‘last available xcode’ on a given platform - which is the common case (since the older versions of ld64 in particular don’t really support the features we want, they def. won’t support building LLVM for ex.).

I am *really really* trying to get away from the assumption that darwinNN implies some ld64 capability - because that’s just wrong, really - makes way too many assumptions.  I also want to get to the “end game” that we just configure *-*-darwin and use the cross-capability of the toolchain (we’re a ways away from that upstream, but my local patch set achieves it at least for 5.4 and 6.2).

It’s true that adding configure options is not #1 choice in life - but I think darwin is getting to the stage where there are too many choices to cover without.

Open to alternate suggestions, of course
Iain
Joseph Myers Nov. 7, 2016, 6:34 p.m. UTC | #3
On Sun, 6 Nov 2016, Iain Sandoe wrote:

> This adds an option --with-ld64[=version] that allows the configurer to 

New configure options should be documented in install.texi.
Jeff Law Nov. 7, 2016, 6:40 p.m. UTC | #4
On 11/07/2016 10:48 AM, Mike Stump wrote:
> On Nov 6, 2016, at 11:39 AM, Iain Sandoe <Iain_Sandoe@mentor.com> wrote:
>> This is an initial patch in a series that converts Darwin's configury to detect ld64 features, rather than the current process of hard-coding them on target system version.
>
> So, I really do hate to ask, but does this have to be a config option?  Normally, we'd just have configure examine things by itself.  For canadian crosses, there should be enough state present to key off of directly, especially if they are wired up to work.
>
> I'd rather have the thing that doesn't just work without that config flag, just work.  I'd like to think I can figure out how to make it just work, if given an idea of what doesn't actually work.
>
> Essentially, you do the operation that doesn't work, detect it failed to work, and then you know it didn't work.
>
But how is that supposed to work in a cross environment when he can't 
directly query the linker's behavior?

In an ideal world we could trivially query the linker's behavior prior 
to invocation.  But we don't have that kind of infrastructure in place.

ISTM the way to go is to have a configure test to try and DTRT 
automatically for native builds and a flag to set for crosses (or 
potentially override the configure test).


Jeff
Mike Stump Nov. 7, 2016, 9:53 p.m. UTC | #5
On Nov 7, 2016, at 10:40 AM, Jeff Law <law@redhat.com> wrote:
> 
> On 11/07/2016 10:48 AM, Mike Stump wrote:
>> On Nov 6, 2016, at 11:39 AM, Iain Sandoe <Iain_Sandoe@mentor.com> wrote:
>>> This is an initial patch in a series that converts Darwin's configury to detect ld64 features, rather than the current process of hard-coding them on target system version.
>> 
>> So, I really do hate to ask, but does this have to be a config option?  Normally, we'd just have configure examine things by itself.  For canadian crosses, there should be enough state present to key off of directly, especially if they are wired up to work.
>> 
>> I'd rather have the thing that doesn't just work without that config flag, just work.  I'd like to think I can figure out how to make it just work, if given an idea of what doesn't actually work.
>> 
>> Essentially, you do the operation that doesn't work, detect it failed to work, and then you know it didn't work.
>> 
> But how is that supposed to work in a cross environment when he can't directly query the linker's behavior?

:-)  So, the two most obvious solutions would be that the programs that need to exist for a build are portable and ported to run on a system that the build can use, or that one can have a forwarding stub from a system the build uses to a machine that can host the software that is less portable.  I've done both before, both work fine.  Portable software can also include things like simulators to run software under simulation on the local machine (or on a machine the forwarding stub links to).  I've done that as well.  For example, I've done native bootstraps of gcc on my non-bootstrappable cross compiler by running everything under GNU sim for bootstrap and enhancing the GNU sim stubs to include a few more system calls that bootstrap uses.  :-)  read/write already work, one just needs readdir and a few others.

Also, for darwin, in some cases, we can actually run the target or host programs on the build machine directly.

> In an ideal world we could trivially query the linker's behavior prior to invocation.  But we don't have that kind of infrastructure in place.

There are cases that just work.  If you have a forwarding stub for the cross, then you can just run it as usual.  If you have a BINFMT style simulator on the local machine, again, you can just run it.  And on darwin, there are cases where you can run target and/or host programs on the build machine directly.

For darwin, I can't tell if he wants the runtime property of the target system for programs that will be linked on it, or a behavior of the local linker that will do the deed.  For the local linker, that can be queried directly.  For the target system, we can know its behavior by knowing what the target is.  We already know what the target is from the macosx version flag, which embodies the dynamic linker.  Also, for any specific version of macosx, there can be a table of what version of ld64 it has on it, by fiat.  We can say, if you want to target such a system, you should use the latest Xcode that supported that system.  This can reduce complexities and simplify our lives.
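The per-version table described here could be sketched as below; the ld64 numbers are illustrative guesses at the "last Xcode for that system", not verified values:

```shell
# Hypothetical table mapping a Darwin target version to the ld64
# shipped by the last Xcode that supported it.  The version numbers
# are illustrative placeholders, not verified values.
target_os=darwin9
case "$target_os" in
  darwin9)  gcc_ld64_version=85.2  ;;  # assumed: Xcode 3.1-era ld64
  darwin10) gcc_ld64_version=97.17 ;;  # assumed: Xcode 3.2-era ld64
  *)        gcc_ld64_version=      ;;  # unknown: fall back to probing
esac
echo "$gcc_ld64_version"
```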

> ISTM the way to go is to have a configure test to try and DTRT automatically for native builds and a flag to set for crosses (or potentially override the configure test).


Sure, if it can't be known.

For example, if you have the target include directory, you don't need to have flags for questions that can be answered by the target headers.  Ditto the libraries.  My question is what is the specific question we are asking?  Additionally, answering things on the basis of version numbers isn't quite in the GNU spirit.  I'm not opposed to it, but, it is slightly better to form the actual question if possible.

In complex canadian cross scenarios, we might well want to grab the source to ld64 and compile it up, just as we would any other software for canadian environments.
Mike Stump Nov. 7, 2016, 9:56 p.m. UTC | #6
On Nov 7, 2016, at 9:59 AM, Iain Sandoe <Iain_Sandoe@mentor.com> wrote:
> 
>> On 7 Nov 2016, at 09:51, Mike Stump <mikestump@comcast.net> wrote:
>> 
>> [ possible dup ]
>> 
>>> Begin forwarded message:
>>> 
>>> From: Mike Stump <mrs@mrs.kithrup.com>
>>> Subject: Re: [PATCH fix PR71767 2/4 : Darwin configury] Arrange for ld64 to be detected as Darwin's linker
>>> Date: November 7, 2016 at 9:48:53 AM PST
>>> To: Iain Sandoe <Iain_Sandoe@mentor.com>
>>> Cc: GCC Patches <gcc-patches@gcc.gnu.org>, Jeff Law <law@redhat.com>
>>> 
>>> On Nov 6, 2016, at 11:39 AM, Iain Sandoe <Iain_Sandoe@mentor.com> wrote:
>>>> This is an initial patch in a series that converts Darwin's configury to detect ld64 features, rather than the current process of hard-coding them on target system version.
>>> 
>>> So, I really do hate to ask, but does this have to be a config option?  Normally, we'd just have configure examine things by itself.  For canadian crosses, there should be enough state present to key off of directly, especially if they are wired up to work.
>>> 
>>> I'd rather have the thing that doesn't just work without that config flag, just work.  I'd like to think I can figure out how to make it just work, if given an idea of what doesn't actually work.
>>> 
>>> Essentially, you do the operation that doesn't work, detect it failed to work, and then you know it didn't work.
> 
> Well, if you can run the tool, that’s fine - I wanted to cover the base where we have a native or canadian that’s using a newer ld64 than is installed by the ‘last available xcode’ on a given platform - which is the common case (since the older versions of ld64 in particular don’t really support the features we want, they def. won’t support building LLVM for ex.).
> 
> I am *really really* trying to get away from the assumption that darwinNN implies some ld64 capability - because that’s just wrong, really - makes way too many assumptions.  I also want to get to the “end game” that we just configure *-*-darwin and use the cross-capability of the toolchain (we’re a ways away from that upstream, but my local patch set achieves it at least for 5.4 and 6.2).
> 
> It’s true that adding configure options is not #1 choice in life - but I think darwin is getting to the stage where there are too many choices to cover without.
> 
> Open to alternate suggestions, of course

But, you didn't actually tell me the question that you're interested in.  It is that question that I'm curious about.
Iain Sandoe Nov. 8, 2016, 2:33 a.m. UTC | #7
> On 7 Nov 2016, at 13:56, Mike Stump <mikestump@comcast.net> wrote:
> 
> On Nov 7, 2016, at 9:59 AM, Iain Sandoe <Iain_Sandoe@mentor.com> wrote:
>> 
>>> On 7 Nov 2016, at 09:51, Mike Stump <mikestump@comcast.net> wrote:
>>> 
>>> [ possible dup ]
>>> 
>>>> Begin forwarded message:
>>>> 
>>>> From: Mike Stump <mrs@mrs.kithrup.com>
>>>> Subject: Re: [PATCH fix PR71767 2/4 : Darwin configury] Arrange for ld64 to be detected as Darwin's linker
>>>> Date: November 7, 2016 at 9:48:53 AM PST
>>>> To: Iain Sandoe <Iain_Sandoe@mentor.com>
>>>> Cc: GCC Patches <gcc-patches@gcc.gnu.org>, Jeff Law <law@redhat.com>
>>>> 
>>>> On Nov 6, 2016, at 11:39 AM, Iain Sandoe <Iain_Sandoe@mentor.com> wrote:
>>>>> This is an initial patch in a series that converts Darwin's configury to detect ld64 features, rather than the current process of hard-coding them on target system version.
>>>> 
>>>> So, I really do hate to ask, but does this have to be a config option?  Normally, we'd just have configure examine things by itself.  For canadian crosses, there should be enough state present to key off of directly, especially if they are wired up to work.
>>>> 
>>>> I'd rather have the thing that doesn't just work without that config flag, just work.  I'd like to think I can figure out how to make it just work, if given an idea of what doesn't actually work.
>>>> 
>>>> Essentially, you do the operation that doesn't work, detect it failed to work, and then you know it didn't work.
>> 
>> Well, if you can run the tool, that’s fine - I wanted to cover the base where we have a native or canadian that’s using a newer ld64 than is installed by the ‘last available xcode’ on a given platform - which is the common case (since the older versions of ld64 in particular don’t really support the features we want, they def. won’t support building LLVM for ex.).
>> 
>> I am *really really* trying to get away from the assumption that darwinNN implies some ld64 capability - because that’s just wrong, really - makes way too many assumptions.  I also want to get to the “end game” that we just configure *-*-darwin and use the cross-capability of the toolchain (we’re a ways away from that upstream, but my local patch set achieves it at least for 5.4 and 6.2).
>> 
>> It’s true that adding configure options is not #1 choice in life - but I think darwin is getting to the stage where there are too many choices to cover without.
>> 
>> Open to alternate suggestions, of course
> 
> But, you didn't actually tell me the question that you're interested in.  It is that question that I'm curious about.

a) right now, we need to know the target linker version - while it’s not impossible to try and conjure up some test to see if a linker we can run supports coalesced sections or not, the configury code and complexity needed to support that would exceed what I’m proposing at present (and still would not cover the native and canadian cases).

- IMO it’s reasonable to decide on coalesced section availability based on the linker version and is at least correct (where deciding on the basis of system revision is wishful thinking at best).
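A version-based capability decision of that kind might be sketched like this (the threshold is a made-up number for illustration; the real cut-off would need checking against ld64 history):

```shell
# Gate a linker capability on the ld64 version rather than the system
# revision.  The threshold (85) is an illustrative assumption, not a
# verified ld64 cut-off for coalesced-section support.
gcc_ld64_version=274.2
ld64_major=$(echo "$gcc_ld64_version" | cut -d. -f1)
if test "$ld64_major" -ge 85; then
  ld64_has_coalesced=yes
else
  ld64_has_coalesced=no
fi
echo "$ld64_has_coalesced"
```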

I’m not debating the various solutions in your reply to Jeff - but honestly I wonder how many of them are realistically in reach of the typical end-user (I have done most of them at one stage or another, but I wonder how many would be stopped dead by “first find and build ld64, which itself needs a c++11 compiler and BTW needs you to build libLTO.dylib .. which needs you to build at least LLVM itself").

b) Given the speed of various older hardware, it’s an objective to get to the stage where we can build reliable native crosses (it seems that there are also people trying to do canadian - despite the trickiness).

c) it’s a high priority on my list to make it possible for Linux folks to be able to build a Darwin cross toolchain; I think that will help a lot with triage of issues.

In short, I expect more use of cross and native crosses in the future…

.. so:

case 1. self-hosted build=host=target, ld64 = xcode whatever - no problem; we query live.

case 2. build != host but build arch == host arch - well, sometimes we can run the host tools (I’ve put that in my patch).

case 3. build != host, build arch != host arch .. I don’t believe there’s any more concise way of expressing the necessary data than passing the linker version to configure (there’s really no place one can go and look for its capability).
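The three cases above might reduce to configure logic along these lines (all names here are made up for illustration; can_run_host_ld stands for whatever probe shows that the host linker can execute on the build machine):

```shell
# Illustrative triage of where configure could get the ld64 version.
build=x86_64-darwin14
host=x86_64-darwin14
target=x86_64-darwin14
can_run_host_ld=yes
if test "$build" = "$host" && test "$host" = "$target"; then
  ld64_version_source="query the installed ld64 live"             # case 1
elif test "$can_run_host_ld" = yes; then
  ld64_version_source="run the host linker on the build machine"  # case 2
else
  ld64_version_source="use the version passed to --with-ld64"     # case 3
fi
echo "$ld64_version_source"
```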

So - I agree that there are lots of possible solutions, the question is are there any that are less configury / lower maintenance (and accessible to our users)?

am I missing a point here?
Iain
Iain Sandoe Nov. 8, 2016, 5:14 a.m. UTC | #8
> On 7 Nov 2016, at 13:53, Mike Stump <mrs@mrs.kithrup.com> wrote:
> 
> On Nov 7, 2016, at 10:40 AM, Jeff Law <law@redhat.com> wrote:
>> 
>> On 11/07/2016 10:48 AM, Mike Stump wrote:
>>> On Nov 6, 2016, at 11:39 AM, Iain Sandoe <Iain_Sandoe@mentor.com> wrote:
>>>> This is an initial patch in a series that converts Darwin's configury to detect ld64 features, rather than the current process of hard-coding them on target system version.
>>> 
>>> So, I really do hate to ask, but does this have to be a config option?  Normally, we'd just have configure examine things by itself.  For canadian crosses, there should be enough state present to key off of directly, especially if they are wired up to work.
>>> 
>>> I'd rather have the thing that doesn't just work without that config flag, just work.  I'd like to think I can figure out how to make it just work, if given an idea of what doesn't actually work.
>>> 
>>> Essentially, you do the operation that doesn't work, detect it failed to work, and then you know it didn't work.
>>> 
>> But how is that supposed to work in a cross environment when he can't directly query the linker's behavior?
> 
> :-)  So, the two most obvious solutions would be that the programs that need to exist for a build are portable and ported to run on a system that the build can use, or that one can have a forwarding stub from a system the build uses to a machine that can host the software that is less portable.  I've done both before, both work fine.  Portable software can also include things like simulators to run software under simulation on the local machine (or on a machine the forwarding stub links to).  I've done that as well.  For example, I've done native bootstraps of gcc on my non-bootstrappable cross compiler by running everything under GNU sim for bootstrap and enhancing the GNU sim stubs to include a few more system calls that bootstrap uses.  :-) read/write already work, one just needs readdir and a few others.

this is pretty “black belt” stuff - I don’t see most of our users wanting to dive this deeply … 
> 
> Also, for darwin, in some cases, we can actually run the target or host programs on the build machine directly.

I have that (at least weakly) in the patch posted - reluctant to add more smarts to make it cover more cases unless it’s proven useful.

> 
>> In an ideal world we could trivially query the linker's behavior prior to invocation.  But we don't have that kind of infrastructure in place.
> 
> There are cases that just work.  If you have a forwarding stub for the cross, then you can just run it as usual.  If you have a BINFMT style simulator on the local machine, again, you can just run it.  And on darwin, there are cases where you can run target and/or host programs on the build machine directly.
> 
> For darwin, I can't tell if he wants the runtime property of the target system for programs that will be linked on it, or a behavior of the local linker that will do the deed.  For the local linker, that can be queried directly. For the target system, we can know its behavior by knowing what the target is.  We already know what the target is from the macosx version flag, which embodies the dynamic linker. Also, for any specific version of macosx, there can be a table of what version of ld64 it has on it, by fiat.  We can say, if you want to target such a system, you should use the latest Xcode that supported that system.  This can reduce complexities and simplify our lives.

.. and produce the situation where we can never have a c++11 compiler on powerpc-darwin9, because the “required ld64” doesn’t support it (OK. maybe we don’t care about that) but supposing we can now have symbol aliases with modern ld64 (I think we can) - would we want to prevent a 10.6 from using that?

So, I am strongly against the situation where we fix the capability of the toolchain on some assumption of externally-available tools predicated on the system revision.

The intent of my patch is to move away from this to a situation where we use configuration tests to determine the capability from the tools [when we can run them] and on the basis of their version(s) when we are configuring in a cross scenario.

>> ISTM the way to go is to have a configure test to try and DTRT automatically for native builds and a flag to set for crosses (or potentially override the configure test).
> 
> 
> Sure, if it can't be known.
> 
> For example, if you have the target include directory, you don't need to have flags for questions that can be answered by the target headers.  Ditto the libraries.  My question is what is the specific question we are asking?  Additionally, answering things on the basis of version numbers isn't quite in the GNU spirit.  I'm not opposed to it, but, it is slightly better to form the actual question if possible.

Actually, there’s a bunch of configury in GCC that picks up the version of binutils components when it can (an in-tree build) and makes decisions at least as a fall-back on that (so precedent) - we can’t do that for ld64 (there’s no equivalent in-tree build) but we _can_ tell configure when we know.

> In complex canadian cross scenarios, we might well want to grab the source to ld64 and compile it up, just as we would any other software for canadian environments.

This is OK for professional toolchain folks, but it’s a complex set of operations c.f. “obtaining the relevant installation for the desired host”.

As a head’s up, the situation is even worse for the assembler FWIW - where we now have a bunch of different behaviours dependent on whether the underlying thing is “cctools” or “clang” and the version ranges of cctools and clang overlap.

cheers,
Iain
Mike Stump Nov. 8, 2016, 4:18 p.m. UTC | #10
On Nov 7, 2016, at 6:33 PM, Iain Sandoe <iain_sandoe@mentor.com> wrote:
> 
> a) right now, we need to know the target linker version - while it’s not impossible to try and conjure up some test to see if a linker we can run supports coalesced sections or not, the configury code and complexity needed to support that would exceed what I’m proposing at present (and still would not cover the native and canadian cases).

A traditional canadian can run the host linker for the target on the build machine with --version (or whatever flag) and capture the version number.  I don't know what setup you have engineered for, since you didn't say.  First question, can you run the host linker for the target on the build machine?  If so, you can directly capture the output.  The next question is, is it the same version as the version that would be used on the host?

> I’m not debating the various solutions in your reply to Jeff - but honestly I wonder how many of them are realistically in reach of the typical end-user (I have done most of them at one stage or another, but I wonder how many would be stopped dead by “first find and build ld64, which itself needs a c++11 compiler and BTW needs you to build libLTO.dylib .. which needs you to build at least LLVM itself").

Package managers exist to solve that problem nicely, if someone wants a trivial solution.  They have the ability to scoop up binaries and just copy them onto a machine, solving hard chicken/egg problems.  Other possibilities are scripts that setup everything and release the scripts.

> am I missing a point here?

The answer to the two questions above.  The answer to the question, what specific question do you want answered, and what is available to the build machine, specifically to answer that question?

Also, you deflect from the OS version to the linker version number, but you never said what didn't actually work.  What specifically doesn't work?  This method is trivial and the mapping is direct and expressible in a few lines per version supported.  I still maintain that the only limitation is you must choose exactly 1 version per config triplet; I don't see that as a problem.  If it were, I didn't see you explain the problem.  Even if it is, that problem is solved after the general problem that nothing works today.  By having at least _a_ mapping, you generally solve the problem for most people, most of the time.

For example, if you target 10.0, there are no new features from an Xcode of the 10.11.6 timeframe.  The only way to get those features would be to run a newer ld64, and, if you are doing that, then you likely have enough sources to run ld directly.  And if you are running it directly, then you can just ask it what version it is.
Iain Sandoe Nov. 8, 2016, 4:31 p.m. UTC | #11
> On 8 Nov 2016, at 08:18, Mike Stump <mikestump@comcast.net> wrote:
> 
> On Nov 7, 2016, at 6:33 PM, Iain Sandoe <iain_sandoe@mentor.com> wrote:
>> 
>> a) right now, we need to know the target linker version - while it’s not impossible to try and conjure up some test to see if a linker we can run supports coalesced sections or not, the configury code and complexity needed to support that would exceed what I’m proposing at present (and still would not cover the native and canadian cases).
> 
> A traditional canadian can run the host linker for the target on the build machine with --version (or whatever flag) and capture the version number.  I don't know what setup you have engineered for, since you didn't say.  First question, can you run the host linker for the target on the build machine?  If so, you can directly capture the output.  The next question is, is it the same version as the version that would be used on the host?

I suppose that one could demand that - and require a build thus.

So I build an x86_64-darwin14 X powerpc-darwin9 cross,
and then a native: build = x86_64-darwin14, host = target = powerpc-darwin9.
If we demand that the same version linker is used for all, then perhaps that could work.

It seems likely that we’ll end up with mis-configures and stuff hard to support with non-expert build folks.

>> I’m not debating the various solutions in your reply to Jeff - but honestly I wonder how many of them are realistically in reach of the typical end-user (I have done most of them at one stage or another, but I wonder how many would be stopped dead by “first find and build ld64, which itself needs a c++11 compiler and BTW needs you to build libLTO.dylib .. which needs you to build at least LLVM itself").
> 
> Package managers exist to solve that problem nicely, if someone wants a trivial solution.  They have the ability to scoop up binaries and just copy them onto a machine, solving hard chicken/egg problems.  Other possibilities are scripts that setup everything and release the scripts.

yes, I’m working on at least the latter (don’t have time to become a package manager).
> 
>> am I missing a point here?
> 
> The answer to the two questions above.  The answer to the question, what specific question do you want answered, and what is available to the build machine, specifically to answer that question?
> 
> Also, you deflect on the os version to linker version number, but you never said what didn't actually work.  What specifically doesn't work?  This method is trivial and the mapping is direct and expressible in a few lines per version supported.  I still maintain that the only limitation is you must choose exactly 1 version per config triplet; I don't see that as a problem.  If it were, I didn't see you explain the problem.  Even if it is, that problem is solved after the general problem that nothing works today.  By having at least _a_ mapping, you generally solve the problem for most people, most of the time.

It *requires* that one configures arch-darwinNN .. and doesn’t allow for arch-darwin (to mean anything other than build=host=target) - but really we should be able to build arch-darwin with config flags including -mmacosx-version-min= so that the result is deployable on any Darwin newer than the specified minimum.  I actually do this for day-job toolchains; it’s not purely hypothetical, since Darwin toolchains are all supposed to be “just works” cross ones.

> For example, if you target 10.0, there is no new features from an Xcode that was from 10.11.6 timeframe.  The only way to get those features, would be to run a newer ld64, and, if you are doing that, then you likely have enough sources to run ld directly.  And if you are running it directly, then you can just ask it what version it is.

If we can engineer that a suitable ld64 can be run at configuration time so that the version can be discovered automatically, I’m with you 100% - but the scenarios put forward seem very complex for typical folks.

What would you prefer to replace this patch with?
Iain
Mike Stump Nov. 8, 2016, 5:14 p.m. UTC | #12
On Nov 7, 2016, at 9:15 PM, Iain Sandoe <iain@codesourcery.com> wrote:
> 
> this is pretty “black belt” stuff - I don’t see most of our users wanting to dive this deeply … 

No, not really.  It is true that newlib and the GNU sim need a few more lines of code, but those lines are written for Linux and most other POSIX-style systems already and can be contributed.  Once they are in, I used a vanishingly small change to configure to put $RUN on the front of some command lines: RUN="" for natives, and RUN=triplet-run for selecting the GNU simulator.  In later builds, I didn't even do that; I do things like:

  CC=triplet-run\ $install/bin/triplet-gcc CXX=triplet-run\ $install/bin/triplet-g++ ../newlib/configure --target=triplet --host=triplet2 --prefix=$install
  make -j$N CC_FOR_TARGET="triplet-run $install/bin/triplet-gcc"
  make -j$N CC_FOR_TARGET="triplet-run $install/bin/triplet-gcc" install

as an example of how to configure and build up newlib.  Works just fine as is.  gcc is the same, and just as easy.  For cross natives, I don't expect it can be much easier, unless I added two lines of code into configure that tacked in the $RUN by itself.  If that were done, it's literally just a couple of lines, and then, presto, it is the standard documented build methodology as it exists today.
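[The $RUN convention described above can be sketched as a runnable fragment; `triplet-run` is a placeholder name for a simulator wrapper, not a real tool.]

```shell
# Sketch of the $RUN prefix convention: empty for native builds,
# a simulator wrapper (placeholder name) for crosses.
RUN=${RUN:-}            # native default: run tools directly
# RUN=triplet-run       # cross: route execution through the GNU sim

# With RUN empty this expands to a plain invocation of the tool.
out=$($RUN echo "probe ok")
echo "$out"
```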

>> Also, for darwin, in some cases, we can actually run the target or host programs on the build machine directly.
> 
> I have that (at least weakly) in the patch posted

If you can do that, then you can run:
$ /usr/bin/ld -v
@(#)PROGRAM:ld  PROJECT:ld64-274.1
configured to support archs: armv6 armv7 armv7s arm64 i386 x86_64 x86_64h armv6m armv7k armv7m armv7em (tvOS)
LTO support using: LLVM version 8.0.0, (clang-800.0.42.1)
TAPI support using: Apple TAPI version 1.30

$ ld -v
@(#)PROGRAM:ld  PROJECT:ld64-264.3.102
configured to support archs: i386 x86_64 x86_64h armv6 armv7 armv7s armv7m armv7k arm64 (tvOS)
LTO support using: LLVM version 3.8.1

and get a flavor of ld directly, no?  Isn't the version number you want something like 274.1 and 264.3.102?  I can write the sed to get that if you want.

  $ld -v

is the program I used to get the above, and I'm assuming that you can figure out how to set $ld.  This also works for natives, just as well, as can be seen above, as those two are the usual native commands.

> .. and produce the situation where we can never have a c++11 compiler on powerpc-darwin9,

No, I never said that and never would accept that limitation.  If someone wanted to contribute patches to do that, I'd approve them.  If they wanted help and advice on how to make it happen, I'm there.  You are imagining limitations that don't exist.  I have envisioned a world where everything just works and getting there is trivial.  It just takes a few well-placed patches.

> because the “required ld64” doesn’t support it (OK. maybe we don’t care about that) but supposing we can now have symbol aliasses with modern ld64 (I think we can) - would we want to prevent a 10.6 from using that?

Nope.

> So, I am strongly against the situation where we fix the capability of the toolchain on some assumption of externally-available tools predicated on the system revision.

We aren't.  It is only a notion of what the meaning of a particular configure line is.  If you want it to mean "blow chunks", then you are wrong for that choice.  If you want it to mean "it just works", then we are on the same page.  So, my question to you is, what happens when you don't give the linker version on the configure line?  Does it blow chunks or just work?  If it just works, why would you want to provide the flag?  If it blows chunks, then you now understand completely why I oppose that.

> The intent of my patch is to move away from this to a situation where we use configuration tests to determine the capability from the tools [when we can run them] and on the basis of their version(s) when we are configuring in a cross scenario.

No, it is backward progress.  You would require infinite argument lists for infinite complexity, and infinite ways for breakage given how hard it is to formulate those arguments.  This is against the autoconf/configure spirit.  That spirit is: no infinite argument list for infinite complexity, and everything just works.

When I say my cross native build is:

   ../newlib/configure --target=triplet --prefix=$install
  make && make install

Do you see why it must be right?  There just is no possibility that it is wrong.  Further, it is this today, tomorrow, yesterday and next century, by design.  I resist the configure flag because you said you can run ld, and as I've shown, -v will get us the information I think you seek, directly, no configure flag.  You have not said that you can't run it, and you have not said why the information you seek isn't in the output shown.

> Actually, there’s a bunch of configury in GCC that picks up the version of binutils components when it can (an in-tree build) and makes decisions at least as a fall-back on that  (so precedent) - we can’t do that for ld64 (there’s no equivalent in-tree build) but we _can_ tell configure when we know,

We can, that's not the question, the question is why would we want to when we already know the answer?

>> In complex canadian cross scenarios, we might well want to grab the source to ld64 and compile it up, just as we would any other software for canadian environments.
> 
> This is OK for professional toolchain folks, but it’s a complex set of operations c.f. “obtaining the relevant installation for the desired host”.

  $ apt install darwin-gcc

if you want it.  Or, if you don't like that:

  $ wget .../install-darwin-gcc
  $ install-darwin-gcc

Pretty simple, no?
Mike Stump Nov. 8, 2016, 6:27 p.m. UTC | #13
On Nov 8, 2016, at 8:31 AM, Iain Sandoe <iain_sandoe@mentor.com> wrote:
> 
>> On 8 Nov 2016, at 08:18, Mike Stump <mikestump@comcast.net> wrote:
>> 
>> On Nov 7, 2016, at 6:33 PM, Iain Sandoe <iain_sandoe@mentor.com> wrote:
>>> 
>>> a) right now, we need to know the target linker version - while it’s not impossible to try and conjure up some test to see if a linker we can run supports coalesced sections or not, the configury code and complexity needed to support that would exceed what I’m proposing at present (and still would not cover the native and canadian cases).
>> 
>> A traditional canadian can run the host linker for the target on the build machine with --version (or whatever flag) and capture the version number.  I don't know what setup you have engineered for, since you didn't say.  First question, can you run the host linker for the target on the build machine?  If so, you can directly capture the output.  The next question is, is it the same version as the version that would be used on the host?
> 
> I suppose that one could demand that - and require a build thus.

It is a statement of what we have today, already a requirement.  Sorry you missed the memo.  The problem is that software usually changes through time, and the old software can't acquire the features of the new software, so to get those you need the new software, not the old.  If there are no new features, in some rare cases one might be able to use a range of versions, but the range that would work would be the range that an expert tested and documented as working.  Absent that, it generally isn't safe to assume it will work.  And when it isn't safe and doesn't work, well, it just doesn't work.  Thinking it will, or hoping it will, won't make it so.  If those features were able to be pushed back into the old software, you'd merely reinvent a source distribution of the new software in different clothes, poorly.

> If we demand that the same version linker is used for all, then perhaps that could work.

More than that, it will just work, by design; and in the cases where that isn't the case, those are bugs that can be worked around or fixed.

> It seems likely that we’ll end up with mis-configures and stuff hard to support with non-expert build folks.

Nope.  Indeed, remember that the only reason we do this is to make the phrase "it just works" true.  If you are imagining anything else, the flaw is mine in communicating why we do what it is that we do.

>>> I’m not debating the various solutions in your reply to Jeff - but honestly I wonder how many of them are realistically in reach of the typical end-user (I have done most of them at one stage or another, but I wonder how many would be stopped dead by “first find and build ld64, which itself needs a c++11 compiler and BTW needs you to build libLTO.dylib .. which needs you to build at least LLVM itself").
>> 
>> Package managers exist to solve that problem nicely, if someone wants a trivial solution.  They have the ability to scoop up binaries and just copy them onto a machine, solving hard chicken/egg problems.  Other possibilities are scripts that setup everything and release the scripts.
> 
> yes, I’m working on at least the latter (don’t have time to become a package manager).

The thing is, the build scripts have already been written, we're just 'making them work'.

>>> am I missing a point here?
>> 
>> The answer to the two questions above.  The answer to the question, what specific question do you want answered, and what is available to the build machine, specifically to answer that question?
>> 
>> Also, you deflect on the os version to linker version number, but you never said what didn't actually work.  What specifically doesn't work?  This method is trivial and the mapping is direct and expressible in a few lines per version supported.  I still maintain that the only limitation is you must choose exactly 1 version per config triplet; I don't see that as a problem.  If it were, I didn't see you explain the problem.  Even if it is, that problem is solved after the general problem that nothing works today.  By having at least _a_ mapping, you generally solve the problem for most people, most of the time.
> 
> It *requires* that one configures arch-darwinNN .. and doesn’t allow for arch-darwin (to mean anything other than  build=host=target)

No, that's a misunderstanding of my position.  Use the filter, does this mean 'it just works', or not.  If not, then that's not what I mean.

> If we can engineer that a suitable ld64 can be run at configuration time so that the version can be discovered automatically, I’m with you 100% - but the scenarios put forward seem very complex for typical folks,

A linker is a requirement of a build; this isn't supposed to be opaque.  A linker means you can run it.  Again, not exactly rocket science.  What would be rocket science is doing a build with no linker, no compiler, and no internet connection.  Further, this isn't a new requirement; it's been around for a while.

> What would you prefer to replace this patch with?

  ld_vers=$($ld -v | sed -n 's/.*ld64-//p')

sorry if that isn't obvious.
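[For reference, the one-liner behaves as expected on a captured sample of the `ld -v` banner quoted earlier in the thread; the variable below stands in for running the real linker.]

```shell
# Mike's sed one-liner applied to a sample first line of `ld -v` output.
# `sample` replaces a live linker invocation for illustration.
sample='@(#)PROGRAM:ld  PROJECT:ld64-274.1'
ld_vers=$(printf '%s\n' "$sample" | sed -n 's/.*ld64-//p')
echo "$ld_vers"   # 274.1
```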
Iain Sandoe Nov. 8, 2016, 9:05 p.m. UTC | #14
> On 8 Nov 2016, at 10:27, Mike Stump <mikestump@comcast.net> wrote:
> 
> On Nov 8, 2016, at 8:31 AM, Iain Sandoe <iain_sandoe@mentor.com> wrote:
>> 
>>> On 8 Nov 2016, at 08:18, Mike Stump <mikestump@comcast.net> wrote:
>>> 
>>> On Nov 7, 2016, at 6:33 PM, Iain Sandoe <iain_sandoe@mentor.com> wrote:
>>>> 
>>>> a) right now, we need to know the target linker version - while it’s not impossible to try and conjure up some test to see if a linker we can run supports coalesced sections or not, the configury code and complexity needed to support that would exceed what I’m proposing at present (and still would not cover the native and canadian cases).
>>> 
>>> A traditional canadian can run the host linker for the target on the build machine with --version (or whatever flag) and capture the version number.  I don't know what setup you have engineered for, since you didn't say.  First question, can you run the host linker for the target on the build machine?  If so, you can directly capture the output.  The next question is, is it the same version as the version that would be used on the host?
>> 
>> I suppose that one could demand that - and require a build thus.
> 
> It is a statement of what we have today, already a requirement.  Sorry you missed the memo.  The problem is software usually changes through time, and the old software can't acquire the features of the new software, so to get those you need the new software, not the old.  If there are no new features, in some rare cases, one might be able to use a range of versions, but the range that would work, would be the range that an expert tested and documented as working.  Absent that, it generally speaking isn't that safe to assume it is safe.  And, when it isn't safe and doesn't work, well, it just doesn't work.  Thinking it will, or hoping it will, won't make it so.  If those features were able to be pushed back into the old software, you're merely reinvent a source distribution of the the new software in different clothes, poorly.
> 
>> If we demand that the same version linker is used for all, then perhaps that could work.
> 
> More that that, it will just work; by design; and in the cases where that isn't the case, those are bugs that can be worked around or fixed.
> 
>> It seems likely that we’ll end up with mis-configures and stuff hard to support with non-expert build folks.
> 
> Nope.  Indeed, remember the only reason why we do this, is to make the phrase, it just works, true.  If you are imagining anything else, the flaw is mine in communicating why we do what it is that we do.
> 
>>>> I’m not debating the various solutions in your reply to Jeff - but honestly I wonder how many of them are realistically in reach of the typical end-user (I have done most of them at one stage or another, but I wonder how many would be stopped dead by “first find and build ld64, which itself needs a c++11 compiler and BTW needs you to build libLTO.dylib .. which needs you to build at least LLVM itself").
>>> 
>>> Package managers exist to solve that problem nicely, if someone wants a trivial solution.  They have the ability to scoop up binaries and just copy them onto a machine, solving hard chicken/egg problems. Other possibilities are scripts that setup everything and release the scripts.
>> 
>> yes, I’m working on at least the latter (don’t have time to become a package manager).
> 
> The thing is, the build scripts have already been written, we're just 'making them work'.
> 
>>>> am I missing a point here?
>>> 
>>> The answer to the two questions above.  The answer to the question, what specific question do you want answered, and what is available to the build machine, specifically to answer that question?
>>> 
>>> Also, you deflect on the os version to linker version number, but you never said what didn't actually work.  What specifically doesn't work?  This method is trivial and the mapping is direct and expressible in a few lines per version supported.  I still maintain that the only limitation is you must choose exactly 1 version per config triplet; I don't see that as a problem.  If it were, I didn't see you explain the problem. Even if it is, that problem is solved after the general problem that nothing works today.  By having at least _a_ mapping, you generally solve the problem for most people, most of the time.
>> 
>> It *requires* that one configures arch-darwinNN .. and doesn’t allow for arch-darwin (to mean anything other than  build=host=target)
> 
> No, that's a misunderstanding of my position.  Use the filter, does this mean 'it just works', or not.  If not, then that's not what I mean.
> 
>> If we can engineer that a suitable ld64 can be run at configuration time so that the version can be discovered automatically, I’m with you 100% - but the scenarios put forward seem very complex for typical folks,
> 
> A linker is a requirement of a build, this isn't supposed to be opaque.  A linker means you can run it. Again, not exactly rocket science.  What would be rocket science is doing a build with no linker and no compiler, and no internet connection.  Further, this isn't a new requirement, it's been around for a while.
> 
>> What would you prefer to replace this patch with?
> 
>  ld_vers=$($ld -v | sed -n 's/.*ld64-//p')
> 
> sorry if that isn't obvious.

it’s both obvious and in my submitted patch - and totally fine if you require that the build system can run the linker for the host, or if you demand that we can’t use a newer linker on older systems, so that the configuration can guess the capabilities on the basis of the target.

As I’ve explained - I do not want to demand that for my toolchains going forward - so clearly I need to maintain some out-of-tree increment; such is life, I’ll figure out an incremental patch at some future time.

Simple for the simple case is already part of my patch, but capability for the expert and non-simple case is also present; I can remove that for upstream and keep it locally.

Iain
Mike Stump Nov. 8, 2016, 9:39 p.m. UTC | #15
On Nov 8, 2016, at 1:05 PM, Iain Sandoe <Iain_Sandoe@mentor.com> wrote:
> 
> Simple for the simple case is already part of my patch, but capability for the expert and non-simple case is also present,

I'm trying to ask a specific question, what does the patch allow that can't be done otherwise?  Kinda a trivial question, and I don't see any answer.

I'm not looking for "it allows it to work" or "it makes the expert case work".  I'm looking for the specific question, and the specific information you want, and why ld -v doesn't get it.

For example, there is a host you are thinking of, there is a build system you are thinking of, there is a target you are thinking of.  There is a set of software you have, likely a particular Xcode release.  There are binaries that you might have built up for it; there might be headers you grab from some place, libraries or SDKs from another place.  This might be done for quick debugging of a darwin problem on a linux host without the full ability to generate a tool chain, or it might be to support a full tool chain.

"I want it to work", without specifying the "it", doesn't let me see what problem you are solving.

I read through the PR listed, and it seems to just describe a typical x86_64-apple-darwin12.6.0 native port.  That really isn't rocket science, is it?  I mean, we can see what is wanted from the build tools directly.  So, the change isn't to support that, is it?
Iain Sandoe Nov. 9, 2016, 12:49 a.m. UTC | #16
> On 8 Nov 2016, at 13:39, Mike Stump <mikestump@comcast.net> wrote:
> 
> On Nov 8, 2016, at 1:05 PM, Iain Sandoe <Iain_Sandoe@mentor.com> wrote:
>> 
>> Simple for the simple case is already part of my patch, but capability for the expert and non-simple case is also present,
> 
> I'm trying to ask a specific question, what does the patch allow that can't be done otherwise?  Kinda a trivial question, and I don't see any answer.
> 
> I'm not looking for, it allows it to work, or it makes the expert case work.  I'm look for the specific question, and the specific information you want, and why ld -v doesn't get it.

ld -v gets it when you can execute ld.
It doesn’t get it when the $host ld is not executable on $build.
Providing the option to give the version allows that without requiring the complexity of other (possibly valid) solutions.  If you know that you’re building (my patched) ld64-253.9 for powerpc-darwin9 (crossed from x86_64-darwin14), it’s easy: just put --with-ld64=253.9 ..

I think we’ve debated this enough - I’m OK with keeping my extra facility locally and will resubmit the patch with it removed in due course,
Iain

> 
> For example, there is a host you are thinking of, there is a  build system you are think of, there is a target you are thinking of.  There is a set of software you have, likely a particular Xcode release.  There are binaries that you might have built up for it, there might be headers you grab from some place, libraries or SDKs from another place.  This might be done for quick debugging a darwin problem on a linux host, but without the full ability to generate a tool chain, it might be to support a full tool chain.
> 
> I want it to work, without specifying the it, doesn't let me see what problem you are solving.
> 
> I read through the PR listed, and it seems to just list a typical x86_64-apple-darwin12.6.0 native port.  That really isn't rocket science is it?  I mean, we see the wanting from the build tools directly.  So, the change isn't to support that, is it?
>
Jeff Law Nov. 17, 2016, 9:19 p.m. UTC | #17
On 11/08/2016 05:49 PM, Iain Sandoe wrote:
>
>> On 8 Nov 2016, at 13:39, Mike Stump <mikestump@comcast.net> wrote:
>>
>> On Nov 8, 2016, at 1:05 PM, Iain Sandoe <Iain_Sandoe@mentor.com>
>> wrote:
>>>
>>> Simple for the simple case is already part of my patch, but
>>> capability for the expert and non-simple case is also present,
>>
>> I'm trying to ask a specific question, what does the patch allow
>> that can't be done otherwise?  Kinda a trivial question, and I
>> don't see any answer.
>>
>> I'm not looking for, it allows it to work, or it makes the expert
>> case work.  I'm look for the specific question, and the specific
>> information you want, and why ld -v doesn't get it.
>
> ld -v gets it when you can execute ld. It doesn’t get it when the
> $host ld is not executable on $build.
Exactly!

> Providing the option to give the version allows that without
> requiring the complexity of other (possibly valid) solutions.  If you
> know that you’re building (my patched) ld64-253.9 for powerpc-darwin9
> (crossed from x86-64-darwin14) it’s easy, just put —with-ld64=253.9
> ..
>
> I think we’ve debated this enough - I’m OK with keeping my extra
> facility locally and will resubmit the patch with it removed in due
> course, Iain
Your call. But ISTM the ability to specify the linker version or even 
better, its behaviour is a notable improvement for these crosses.

jeff
Iain Sandoe Nov. 18, 2016, 11:13 a.m. UTC | #18
> On 17 Nov 2016, at 21:19, Jeff Law <law@redhat.com> wrote:
> 
> On 11/08/2016 05:49 PM, Iain Sandoe wrote:
>> 
>>> On 8 Nov 2016, at 13:39, Mike Stump <mikestump@comcast.net> wrote:
>>> 
>>> On Nov 8, 2016, at 1:05 PM, Iain Sandoe <Iain_Sandoe@mentor.com>
>>> wrote:
>>>> 
>>>> Simple for the simple case is already part of my patch, but
>>>> capability for the expert and non-simple case is also present,
>>> 
>>> I'm trying to ask a specific question, what does the patch allow
>>> that can't be done otherwise?  Kinda a trivial question, and I
>>> don't see any answer.
>>> 
>>> I'm not looking for, it allows it to work, or it makes the expert
>>> case work.  I'm look for the specific question, and the specific
>>> information you want, and why ld -v doesn't get it.
>> 
>> ld -v gets it when you can execute ld. It doesn’t get it when the
>> $host ld is not executable on $build.
> Exactly!
> 
>> Providing the option to give the version allows that without
>> requiring the complexity of other (possibly valid) solutions.  If you
>> know that you’re building (my patched) ld64-253.9 for powerpc-darwin9
>> (crossed from x86-64-darwin14) it’s easy, just put —with-ld64=253.9
>> ..
>> 
>> I think we’ve debated this enough - I’m OK with keeping my extra
>> facility locally and will resubmit the patch with it removed in due
>> course, Iain
> Your call. But ISTM the ability to specify the linker version or even better, its behaviour is a notable improvement for these crosses.

Thanks, at least I’m not going completely crazy ;-)

However, I’d already revised the patch to take the approach Mike wanted, and thus didn’t add the documentation that Joseph noted was missing.  It takes quite a while to re-test this across the Darwin targets, so I’m going to put forward the amended patch (as Mike wanted it) and we can discuss ways to deal with native and Canadian crosses later.

FWIW, it’s possible (but rather kludgy) with the attached patch to override gcc_cv_ld64_version on the make line to build a native/Canadian cross with a “non-standard” ld64 version (I did at least build a native X powerpc-darwin9 on x86_64-darwin14 which was not a brick).

OK now for trunk?
open branches?
Iain

    gcc/
        * configure.ac (with-ld64): New var, set for Darwin, set on
        detection of ld64.  gcc_cv_ld64_export_dynamic: New, new test.
        * darwin.h: Use LD64_HAS_DYNAMIC export. DEF_LD64: New, define.
        * darwin10.h(DEF_LD64): Update for this target version.
        * darwin12.h(LINK_GCC_C_SEQUENCE_SPEC): Remove rdynamic test.
        (DEF_LD64): Update for this target version.
Mike Stump Nov. 18, 2016, 4:55 p.m. UTC | #19
On Nov 18, 2016, at 3:13 AM, Iain Sandoe <iain@codesourcery.com> wrote:
> 
> Thanks, at least I’m not going completely crazy ;-)

I'll just note for completeness that Jeff also couldn't explain a failure of your latest patch.  If you run into one, let me know.

> OK now for trunk?

Ok.

> open branches?

Ok.
diff mbox

Patch

diff --git a/gcc/config/darwin.h b/gcc/config/darwin.h
index 045f70b..541bcb3 100644
--- a/gcc/config/darwin.h
+++ b/gcc/config/darwin.h
@@ -165,6 +165,12 @@  extern GTY(()) int darwin_ms_struct;
    specifying the handling of options understood by generic Unix
    linkers, and for positional arguments like libraries.  */
 
+#if LD64_HAS_EXPORT_DYNAMIC
+#define DARWIN_EXPORT_DYNAMIC " %{rdynamic:-export_dynamic}"
+#else
+#define DARWIN_EXPORT_DYNAMIC " %{rdynamic: %nrdynamic is not supported}"
+#endif
+
 #define LINK_COMMAND_SPEC_A \
    "%{!fdump=*:%{!fsyntax-only:%{!c:%{!M:%{!MM:%{!E:%{!S:\
     %(linker)" \
@@ -185,7 +191,9 @@  extern GTY(()) int darwin_ms_struct;
     %{!nostdlib:%{!nodefaultlibs:\
       %{%:sanitize(address): -lasan } \
       %{%:sanitize(undefined): -lubsan } \
-      %(link_ssp) %(link_gcc_c_sequence)\
+      %(link_ssp) \
+      " DARWIN_EXPORT_DYNAMIC " %<rdynamic \
+      %(link_gcc_c_sequence) \
     }}\
     %{!nostdlib:%{!nostartfiles:%E}} %{T*} %{F*} }}}}}}}"
 
@@ -932,4 +940,10 @@  extern void darwin_driver_init (unsigned int *,struct cl_decoded_option **);
    fall-back default.  */
 #define DEF_MIN_OSX_VERSION "10.5"
 
+#ifndef LD64_VERSION
+#define LD64_VERSION "85.2"
+#else
+#define DEF_LD64 LD64_VERSION
+#endif
+
 #endif /* CONFIG_DARWIN_H */
diff --git a/gcc/config/darwin10.h b/gcc/config/darwin10.h
index 5829d78..a81fbdc 100644
--- a/gcc/config/darwin10.h
+++ b/gcc/config/darwin10.h
@@ -32,3 +32,8 @@  along with GCC; see the file COPYING3.  If not see
 
 #undef DEF_MIN_OSX_VERSION
 #define DEF_MIN_OSX_VERSION "10.6"
+
+#ifndef LD64_VERSION
+#undef DEF_LD64
+#define DEF_LD64 "97.7"
+#endif
diff --git a/gcc/config/darwin12.h b/gcc/config/darwin12.h
index e366982..f88e2a4 100644
--- a/gcc/config/darwin12.h
+++ b/gcc/config/darwin12.h
@@ -21,10 +21,15 @@  along with GCC; see the file COPYING3.  If not see
 #undef  LINK_GCC_C_SEQUENCE_SPEC
 #define LINK_GCC_C_SEQUENCE_SPEC \
 "%:version-compare(>= 10.6 mmacosx-version-min= -no_compact_unwind) \
-   %{rdynamic:-export_dynamic} %{!static:%{!static-libgcc: \
+   %{!static:%{!static-libgcc: \
       %:version-compare(>= 10.6 mmacosx-version-min= -lSystem) } } \
    %{fno-pic|fno-PIC|fno-pie|fno-PIE|fapple-kext|mkernel|static|mdynamic-no-pic: \
       %:version-compare(>= 10.7 mmacosx-version-min= -no_pie) } %G %L"
 
 #undef DEF_MIN_OSX_VERSION
 #define DEF_MIN_OSX_VERSION "10.8"
+
+#ifndef LD64_VERSION
+#undef DEF_LD64
+#define DEF_LD64 "236.4"
+#endif
diff --git a/gcc/configure.ac b/gcc/configure.ac
index 338956f..1783a39 100644
--- a/gcc/configure.ac
+++ b/gcc/configure.ac
@@ -274,6 +274,26 @@  AC_ARG_WITH(gnu-ld,
 gnu_ld_flag="$with_gnu_ld",
 gnu_ld_flag=no)
 
+# With ld64; try to support native and canadian crosses by allowing the
+# configurer to specify the minium ld64 version expected.
+AC_ARG_WITH(ld64,
+[AS_HELP_STRING([[--with-ld64[=VERS]]],
+[arrange to work with Darwin's ld64; assume that the version is >= VERS if given])],
+[case "${withval}" in
+    no | yes)
+        ld64_flag="${withval}"
+        gcc_cv_ld64_version=
+        ;;
+    *)
+        ld64_flag=yes
+        gcc_cv_ld64_version="${withval}";;
+esac],
+[gcc_cv_ld64_version=
+case $target in
+    *darwin*) ld64_flag=yes;; # Darwin can only use an ld64-compatible linker.
+    *) ld64_flag=no;;
+esac])
+
 # With pre-defined ld
 AC_ARG_WITH(ld,
 [AS_HELP_STRING([--with-ld], [arrange to use the specified ld (full pathname)])],
@@ -283,6 +303,8 @@  if test x"${DEFAULT_LINKER+set}" = x"set"; then
     AC_MSG_ERROR([cannot execute: $DEFAULT_LINKER: check --with-ld or env. var. DEFAULT_LINKER])
   elif $DEFAULT_LINKER -v < /dev/null 2>&1 | grep GNU > /dev/null; then
     gnu_ld_flag=yes
+  elif $DEFAULT_LINKER -v < /dev/null 2>&1 | grep ld64- > /dev/null; then
+    ld64_flag=yes
   fi
   AC_DEFINE_UNQUOTED(DEFAULT_LINKER,"$DEFAULT_LINKER",
 	[Define to enable the use of a default linker.])
@@ -5254,6 +5276,58 @@  AC_DEFINE_UNQUOTED(LD_COMPRESS_DEBUG_OPTION, "$gcc_cv_ld_compress_debug_option",
 [Define to the linker option to enable compressed debug sections.])
 AC_MSG_RESULT($gcc_cv_ld_compress_debug)
 
+if test x"$ld64_flag" = x"yes"; then
+
+  # Set defaults for possibly untestable items.
+  gcc_cv_ld64_export_dynamic=0
+
+  if test "$build" = "$host"; then
+    darwin_try_test=1
+  else
+    darwin_try_test=0
+  fi
+  # On Darwin, because of FAT library support, it is usually possible to execute
+  # exes from compatible archs even when the host differs from the build system.
+  case "$build","$host" in
+    x86_64-*-darwin*,i?86-*-darwin* | powerpc64*-*-darwin*,powerpc*-*-darwin*)
+	darwin_try_test=1;;
+    *) ;;
+  esac
+
+  # If the configurer specified a minimum ld64 version to be supported, then use
+  # that to determine feature support.
+  if test x"${gcc_cv_ld64_version}" != x; then
+    AC_MSG_CHECKING(ld64 major version)
+    gcc_cv_ld64_major=`echo "${gcc_cv_ld64_version}" | sed -e 's/\..*//'`
+    AC_MSG_RESULT($gcc_cv_ld64_major)
+    if test "$gcc_cv_ld64_major" -ge 236; then
+      gcc_cv_ld64_export_dynamic=1
+    fi
+  elif test -x "$gcc_cv_ld" -a "$darwin_try_test" -eq 1; then
+    # If the version was not specified, try to find it.
+    AC_MSG_CHECKING(linker version)
+    if test x"${gcc_cv_ld64_version}" = x; then
+      gcc_cv_ld64_version=`$gcc_cv_ld -v 2>&1 | grep ld64 | sed s/.*ld64-// | awk '{print $1}'`
+    fi
+    AC_MSG_RESULT($gcc_cv_ld64_version)
+
+    AC_MSG_CHECKING(linker for -export_dynamic support)
+    gcc_cv_ld64_export_dynamic=1
+    if $gcc_cv_ld -export_dynamic < /dev/null 2>&1 | grep 'unknown option' > /dev/null; then
+      gcc_cv_ld64_export_dynamic=0
+    fi
+    AC_MSG_RESULT($gcc_cv_ld64_export_dynamic)
+  fi
+
+  if test x"${gcc_cv_ld64_version}" != x; then
+    AC_DEFINE_UNQUOTED(LD64_VERSION, "${gcc_cv_ld64_version}",
+      [Define to ld64 version.])
+  fi
+
+  AC_DEFINE_UNQUOTED(LD64_HAS_EXPORT_DYNAMIC, $gcc_cv_ld64_export_dynamic,
+  [Define to 1 if ld64 supports '-export_dynamic'.])
+fi
+
 # --------
 # UNSORTED
 # --------
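
As a standalone illustration (not part of the patch), the version-extraction pipeline added to configure.ac can be exercised against a sample ld64 banner. The banner string below is illustrative only; real `ld -v` output varies by release:

```shell
#!/bin/sh
# Illustrative ld64 -v banner (real output differs between ld64 releases).
banner='@(#)PROGRAM:ld  PROJECT:ld64-236.4'

# Same pipeline as the configure check: drop everything up to "ld64-",
# then keep the first whitespace-separated field.
version=`echo "$banner" | grep ld64 | sed s/.*ld64-// | awk '{print $1}'`
echo "version: $version"

# Major component, as used by the ">= 236" -export_dynamic gate above.
major=`echo "$version" | sed -e 's/\..*//'`
echo "major: $major"

if test "$major" -ge 236; then
  echo "-export_dynamic assumed supported"
fi
```

Running this prints the full version, the major component, and the feature assumption, mirroring what the configure fragment records in `gcc_cv_ld64_version` and `gcc_cv_ld64_export_dynamic`.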