
[testsuite] dg-final object-size: fail if file does not exist

Message ID 4DFA2FD7.2070706@codesourcery.com
State New

Commit Message

Janis Johnson June 16, 2011, 4:31 p.m. UTC
Currently the dg-final check "object-size" results in ERROR if the
assembly failed and the object file does not exist.  This patch fails
the test instead.  OK for trunk?

Janis
2011-06-16  Janis Johnson  <janisjo@codesourcery.com>

	* lib/scanasm.exp (object-size): Fail if object file does not exist.

Comments

Joseph Myers June 16, 2011, 5:08 p.m. UTC | #1
On Thu, 16 Jun 2011, Janis Johnson wrote:

> Currently the dg-final check "object-size" results in ERROR if the
> assembly failed and the object file does not exist.  This patch fails
> the test instead.  OK for trunk?

The set of testcase names - the things after "PASS: " or "FAIL: " or other 
statuses - should not depend on the results if comparison is to work well, 
so

+       fail "$testcase $output_file does not exist"

is a bad idea unless there is a corresponding

	pass "$testcase $output_file does not exist"

(obvious nonsense) as the alternative.  Instead you should:

* Make sure the compilation of the test produced its own PASS or FAIL 
line.

* If that failed, report the subsequent test as UNRESOLVED.

	unresolved "$testcase object-size $what $cmp $with"
Mike Stump June 16, 2011, 8:04 p.m. UTC | #2
On Jun 16, 2011, at 9:31 AM, Janis Johnson wrote:
> Currently the dg-final check "object-size" results in ERROR if the
> assembly failed and the object file does not exist.  This patch fails
> the test instead.

If you can arrange to call fail only on those things that would later have had pass or fail called on them, that would be better.  The idea is that we want to preserve all testcases, but just call pass or fail on each one, as appropriate.
Janis Johnson June 16, 2011, 10:13 p.m. UTC | #3
On 06/16/2011 10:08 AM, Joseph S. Myers wrote:
> On Thu, 16 Jun 2011, Janis Johnson wrote:
> 
>> Currently the dg-final check "object-size" results in ERROR if the
>> assembly failed and the object file does not exist.  This patch fails
>> the test instead.  OK for trunk?
> 
> The set of testcase names - the things after "PASS: " or "FAIL: " or other 
> statuses - should not depend on the results if comparison is to work well, 
> so
> 
> +       fail "$testcase $output_file does not exist"
> 
> is a bad idea unless there is a corresponding
> 
> 	pass "$testcase $output_file does not exist"
> 
> (obvious nonsense) as the alternative.  Instead you should:
> 
> * Make sure the compilation of the test produced its own PASS or FAIL 
> line.
> 
> * If that failed, report the subsequent test as UNRESOLVED.
> 
> 	unresolved "$testcase object-size $what $cmp $with"
> 

This issue also affects other procedures used from dg-final, including
the scan-assembler and scan-dump variants.  The scan-dump routines
append "dump file does not exist" to the usual FAIL messages and the
scan-assembler routines report ERROR.  I'll fix those after I
understand the correct fix for object-size.  These routines don't have
access to the pass/fail status of the compilation, and the compilation
step doesn't know about dg-final checks.

Pass/fail messages for "object-size text <= 32" are:

PASS: gcc.target/arm/ivopts-6.c object-size text <= 32
FAIL: gcc.target/arm/ivopts-6.c object-size text <= 32

If the file doesn't exist the message could be:

UNRESOLVED: gcc.target/arm/ivopts-6.c object-size text <= 32

Currently there are several possible causes for failure in object-size.
Some are errors in the test itself, like the wrong number of arguments,
but some others could be UNRESOLVED instead of ERROR, such as the "size"
command failing or producing unexpected output.  Can the UNRESOLVED line
include additional information about the reason for the failure, or
should the reason just be in a message in the log file?

Janis
Joseph Myers June 16, 2011, 11:18 p.m. UTC | #4
On Thu, 16 Jun 2011, Janis Johnson wrote:

> Currently there are several possible causes for failure in object-size.
> Some are errors in the test itself, like the wrong number of arguments,
> but some others could be UNRESOLVED instead of ERROR, such as the "size"
> command failing or producing unexpected output.  Can the UNRESOLVED line
> include additional information about the reason for the failure, or
> should the reason just be in a message in the log file?

My view is that reasons should be separately in the log file; there should 
be a fixed set of test names, each of which may be PASS, FAIL, UNRESOLVED 
etc. in a particular test run.  (I know the code reporting ICEs in test 
names doesn't conform to this; properly that should have separate "test 
for internal compiler error" test names rather than modifying the name of 
a failing test if it has an ICE.)
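
For instance, applied to the "size" failure case Janis mentioned, that pattern
would look something like the following sketch (reusing the variables already
in scope in object-size; the wording of the log message is only illustrative):

    set output [remote_exec host "$size" "$output_file"]
    set status [lindex $output 0]
    if { $status != 0 } {
        # The reason goes only to the log; the test name stays fixed.
        verbose -log "$testcase object-size: $size failed on $output_file"
        unresolved "$testcase object-size $what $cmp $with"
        return
    }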

Patch

Index: lib/scanasm.exp
===================================================================
--- lib/scanasm.exp	(revision 175083)
+++ lib/scanasm.exp	(working copy)
@@ -351,6 +351,10 @@ 
     upvar 2 name testcase
     set testcase [lindex $testcase 0]
     set output_file "[file rootname [file tail $testcase]].o"
+    if ![file_on_host exists $output_file] {
+	fail "$testcase $output_file does not exist"
+	return
+    }
     set output [remote_exec host "$size" "$output_file"]
     set status [lindex $output 0]
     if { $status != 0 } {
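
Following the review comments above, the new hunk would presumably end up along
these lines (shown only as an illustrative sketch, not necessarily the version
that was eventually committed):

    if ![file_on_host exists $output_file] {
        # Log the reason, but report the usual object-size test name as
        # UNRESOLVED so that the set of test names stays stable.
        verbose -log "$testcase $output_file does not exist"
        unresolved "$testcase object-size $what $cmp $with"
        return
    }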