From patchwork Tue Oct 16 07:30:13 2012
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 191750
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: blauwirbel@gmail.com
Date: Tue, 16 Oct 2012 17:30:13 +1000
Message-Id: <1350372614-30041-5-git-send-email-rth@twiddle.net>
In-Reply-To: <1350372614-30041-1-git-send-email-rth@twiddle.net>
References: <1350372614-30041-1-git-send-email-rth@twiddle.net>
Subject: [Qemu-devel] [PATCH 4/5] exec: Allocate code_gen_prologue from code_gen_buffer

We had a hack for arm and sparc, allocating code_gen_prologue in a
special section, which honestly does no good under certain cases.
We already have limits on code_gen_buffer_size to ensure that all
TBs can use direct branches between themselves; reuse this limit to
ensure the prologue is also reachable.  As a bonus, we avoid marking
a page of the main executable's data segment as executable.
Signed-off-by: Richard Henderson
---
 exec.c    | 30 +++++++++++-------------------
 tcg/tcg.h |  2 +-
 2 files changed, 12 insertions(+), 20 deletions(-)

diff --git a/exec.c b/exec.c
index 5e33a3d..8958b28 100644
--- a/exec.c
+++ b/exec.c
@@ -86,22 +86,7 @@ static int nb_tbs;
 /* any access to the tbs or the page table must use this lock */
 spinlock_t tb_lock = SPIN_LOCK_UNLOCKED;
 
-#if defined(__arm__) || defined(__sparc__)
-/* The prologue must be reachable with a direct jump.  ARM and Sparc64
-   have limited branch ranges (possibly also PPC) so place it in a
-   section close to code segment.  */
-#define code_gen_section                                \
-    __attribute__((__section__(".gen_code")))           \
-    __attribute__((aligned (32)))
-#elif defined(_WIN32) && !defined(_WIN64)
-#define code_gen_section                                \
-    __attribute__((aligned (16)))
-#else
-#define code_gen_section                                \
-    __attribute__((aligned (32)))
-#endif
-
-uint8_t code_gen_prologue[1024] code_gen_section;
+uint8_t *code_gen_prologue;
 static uint8_t *code_gen_buffer;
 static size_t code_gen_buffer_size;
 /* threshold to flush the translated code buffer */
@@ -221,7 +206,7 @@ static int tb_flush_count;
 static int tb_phys_invalidate_count;
 
 #ifdef _WIN32
-static void map_exec(void *addr, long size)
+static inline void map_exec(void *addr, long size)
 {
     DWORD old_protect;
     VirtualProtect(addr, size,
@@ -229,7 +214,7 @@ static void map_exec(void *addr, long size)
 }
 #else
-static void map_exec(void *addr, long size)
+static inline void map_exec(void *addr, long size)
 {
     unsigned long start, end, page_size;
@@ -621,7 +606,14 @@ static inline void code_gen_alloc(size_t tb_size)
         exit(1);
     }
 
-    map_exec(code_gen_prologue, sizeof(code_gen_prologue));
+    /* Steal room for the prologue at the end of the buffer.  This ensures
+       (via the MAX_CODE_GEN_BUFFER_SIZE limits above) that direct branches
+       from TB's to the prologue are going to be in range.  It also means
+       that we don't need to mark (additional) portions of the data segment
+       as executable.  */
+    code_gen_prologue = code_gen_buffer + code_gen_buffer_size - 1024;
+    code_gen_buffer_size -= 1024;
+
     code_gen_buffer_max_size = code_gen_buffer_size -
         (TCG_MAX_OP_SIZE * OPC_BUF_SIZE);
     code_gen_max_blocks = code_gen_buffer_size / CODE_GEN_AVG_BLOCK_SIZE;

diff --git a/tcg/tcg.h b/tcg/tcg.h
index 7bafe0e..45e94f5 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -616,7 +616,7 @@ TCGv_i64 tcg_const_i64(int64_t val);
 TCGv_i32 tcg_const_local_i32(int32_t val);
 TCGv_i64 tcg_const_local_i64(int64_t val);
 
-extern uint8_t code_gen_prologue[];
+extern uint8_t *code_gen_prologue;
 
 /* TCG targets may use a different definition of tcg_qemu_tb_exec.  */
 #if !defined(tcg_qemu_tb_exec)