From patchwork Fri Nov 2 19:47:10 2012
X-Patchwork-Submitter: Diego Novillo
X-Patchwork-Id: 196753
Date: Fri, 2 Nov 2012 15:47:10 -0400
From: Diego Novillo
To: gcc-patches@gcc.gnu.org, Lawrence Crowl
Subject: [contrib] Compare against a clean build in validate_failures.py
Message-ID: <20121102194710.GA28934@google.com>

Lawrence, this is the change I was referring to yesterday.  See if it
helps in comparing your results against clean builds.

Add a new option --clean_build to validate_failures.py.

This is useful when you have two builds of the same compiler: one with
your changes, the other a clean build at the same revision.  Instead of
using a manifest file, --clean_build will compare the results it
gathers from the patched build against those it gathers from the clean
build.

Usage:

$ cd /top/of/patched/gcc/bld
$ validate_failures.py --clean_build=clean/bld-gcc
Source directory: /usr/local/google/home/dnovillo/gcc/trunk
Build target: x86_64-unknown-linux-gnu
Getting actual results from build directory .
./x86_64-unknown-linux-gnu/libstdc++-v3/testsuite/libstdc++.sum
./x86_64-unknown-linux-gnu/libffi/testsuite/libffi.sum
./x86_64-unknown-linux-gnu/libgomp/testsuite/libgomp.sum
./x86_64-unknown-linux-gnu/libgo/libgo.sum
./x86_64-unknown-linux-gnu/boehm-gc/testsuite/boehm-gc.sum
./x86_64-unknown-linux-gnu/libatomic/testsuite/libatomic.sum
./x86_64-unknown-linux-gnu/libmudflap/testsuite/libmudflap.sum
./x86_64-unknown-linux-gnu/libitm/testsuite/libitm.sum
./x86_64-unknown-linux-gnu/libjava/testsuite/libjava.sum
./gcc/testsuite/g++/g++.sum
./gcc/testsuite/gnat/gnat.sum
./gcc/testsuite/ada/acats/acats.sum
./gcc/testsuite/gcc/gcc.sum
./gcc/testsuite/gfortran/gfortran.sum
./gcc/testsuite/obj-c++/obj-c++.sum
./gcc/testsuite/go/go.sum
./gcc/testsuite/objc/objc.sum
Getting actual results from build directory clean/bld-gcc
clean/bld-gcc/x86_64-unknown-linux-gnu/libstdc++-v3/testsuite/libstdc++.sum
clean/bld-gcc/x86_64-unknown-linux-gnu/libffi/testsuite/libffi.sum
clean/bld-gcc/x86_64-unknown-linux-gnu/libgomp/testsuite/libgomp.sum
clean/bld-gcc/x86_64-unknown-linux-gnu/libgo/libgo.sum
clean/bld-gcc/x86_64-unknown-linux-gnu/boehm-gc/testsuite/boehm-gc.sum
clean/bld-gcc/x86_64-unknown-linux-gnu/libatomic/testsuite/libatomic.sum
clean/bld-gcc/x86_64-unknown-linux-gnu/libmudflap/testsuite/libmudflap.sum
clean/bld-gcc/x86_64-unknown-linux-gnu/libitm/testsuite/libitm.sum
clean/bld-gcc/x86_64-unknown-linux-gnu/libjava/testsuite/libjava.sum
clean/bld-gcc/gcc/testsuite/g++/g++.sum
clean/bld-gcc/gcc/testsuite/gnat/gnat.sum
clean/bld-gcc/gcc/testsuite/ada/acats/acats.sum
clean/bld-gcc/gcc/testsuite/gcc/gcc.sum
clean/bld-gcc/gcc/testsuite/gfortran/gfortran.sum
clean/bld-gcc/gcc/testsuite/obj-c++/obj-c++.sum
clean/bld-gcc/gcc/testsuite/go/go.sum
clean/bld-gcc/gcc/testsuite/objc/objc.sum
SUCCESS: No unexpected failures.
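For context, a rough sketch of the two-step manifest workflow this
option supplements (--produce_manifest and --manifest are the script's
existing options; the manifest path here is illustrative, not a default
the script defines):

$ cd /top/of/clean/gcc/bld
$ validate_failures.py --produce_manifest    # record the clean build's failures
$ cd /top/of/patched/gcc/bld
$ validate_failures.py --manifest=failures.manifest   # check the patched build

With --clean_build, both steps collapse into the single invocation
shown above, and no manifest file needs to be kept around.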
2012-11-02  Diego Novillo

	* testsuite-management/validate_failures.py: Add option
	--clean_build to compare test results against another build.

diff --git a/contrib/testsuite-management/validate_failures.py b/contrib/testsuite-management/validate_failures.py
index be13cfd..7391937 100755
--- a/contrib/testsuite-management/validate_failures.py
+++ b/contrib/testsuite-management/validate_failures.py
@@ -292,7 +292,7 @@ def PrintSummary(msg, summary):
 
 def GetSumFiles(results, build_dir):
   if not results:
-    print 'Getting actual results from build'
+    print 'Getting actual results from build directory %s' % build_dir
     sum_files = CollectSumFiles(build_dir)
   else:
     print 'Getting actual results from user-provided results'
@@ -300,6 +300,27 @@ def GetSumFiles(results, build_dir):
   return sum_files
 
 
+def PerformComparison(expected, actual, ignore_missing_failures):
+  actual_vs_expected, expected_vs_actual = CompareResults(expected, actual)
+
+  tests_ok = True
+  if len(actual_vs_expected) > 0:
+    PrintSummary('Unexpected results in this build (new failures)',
+                 actual_vs_expected)
+    tests_ok = False
+
+  if not ignore_missing_failures and len(expected_vs_actual) > 0:
+    PrintSummary('Expected results not present in this build (fixed tests)'
+                 '\n\nNOTE: This is not a failure. It just means that these '
+                 'tests were expected\nto fail, but they worked in this '
+                 'configuration.\n', expected_vs_actual)
+
+  if tests_ok:
+    print '\nSUCCESS: No unexpected failures.'
+
+  return tests_ok
+
+
 def CheckExpectedResults(options):
   if not options.manifest:
     (srcdir, target, valid_build) = GetBuildData(options)
@@ -320,24 +341,7 @@ def CheckExpectedResults(options):
     PrintSummary('Tests expected to fail', manifest)
     PrintSummary('\nActual test results', actual)
 
-  actual_vs_manifest, manifest_vs_actual = CompareResults(manifest, actual)
-
-  tests_ok = True
-  if len(actual_vs_manifest) > 0:
-    PrintSummary('Build results not in the manifest', actual_vs_manifest)
-    tests_ok = False
-
-  if not options.ignore_missing_failures and len(manifest_vs_actual) > 0:
-    PrintSummary('Manifest results not present in the build'
-                 '\n\nNOTE: This is not a failure. It just means that the '
-                 'manifest expected\nthese tests to fail, '
-                 'but they worked in this configuration.\n',
-                 manifest_vs_actual)
-
-  if tests_ok:
-    print '\nSUCCESS: No unexpected failures.'
-
-  return tests_ok
+  return PerformComparison(manifest, actual, options.ignore_missing_failures)
 
 
 def ProduceManifest(options):
@@ -361,6 +365,20 @@ def ProduceManifest(options):
   return True
 
 
+def CompareBuilds(options):
+  (srcdir, target, valid_build) = GetBuildData(options)
+  if not valid_build:
+    return False
+
+  sum_files = GetSumFiles(options.results, options.build_dir)
+  actual = GetResults(sum_files)
+
+  clean_sum_files = GetSumFiles(None, options.clean_build)
+  clean = GetResults(clean_sum_files)
+
+  return PerformComparison(clean, actual, options.ignore_missing_failures)
+
+
 def Main(argv):
   parser = optparse.OptionParser(usage=__doc__)
 
@@ -368,6 +386,14 @@
   parser.add_option('--build_dir', action='store', type='string',
                     dest='build_dir', default='.',
                     help='Build directory to check (default = .)')
+  parser.add_option('--clean_build', action='store', type='string',
+                    dest='clean_build', default=None,
+                    help='Compare test results from this build against '
+                    'those of another (clean) build. Use this option '
+                    'when comparing the test results of your patch versus '
+                    'the test results of a clean build without your patch. '
+                    'You must provide the path to the top directory of your '
+                    'clean build.')
   parser.add_option('--force', action='store_true', dest='force',
                     default=False, help='When used with --produce_manifest, '
                     'it will overwrite an existing manifest file '
@@ -400,6 +426,8 @@
 
   if options.produce_manifest:
     retval = ProduceManifest(options)
+  elif options.clean_build:
+    retval = CompareBuilds(options)
   else:
     retval = CheckExpectedResults(options)
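For readers skimming the patch: PerformComparison only depends on
CompareResults() returning two collections (new failures and fixed
tests).  A minimal sketch of that kind of comparison, assuming each
side is a set of failure strings -- the names and data below are
illustrative, not the script's actual implementation:

# Sketch only; NOT the script's real CompareResults().
def CompareResultsSketch(expected, actual):
  # Failures in the patched build that the clean build did not have
  # (potential regressions introduced by the patch).
  new_failures = actual - expected
  # Failures in the clean build that the patched build no longer has
  # (tests the patch appears to fix).
  fixed_tests = expected - actual
  return new_failures, fixed_tests

clean = set(['FAIL: gcc.dg/old-bug.c'])
patched = set(['FAIL: gcc.dg/old-bug.c', 'FAIL: gcc.dg/new-bug.c'])
new_failures, fixed_tests = CompareResultsSketch(clean, patched)
print 'New failures:', sorted(new_failures)  # ['FAIL: gcc.dg/new-bug.c']
print 'Fixed tests:', sorted(fixed_tests)    # []

Passing the clean build's results as "expected" is what lets the same
PerformComparison routine serve both the manifest flow and the new
--clean_build flow.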