Patch Detail

get:
Show a patch.

patch:
Partially update a patch (only the fields supplied are changed).

put:
Update a patch.

GET /api/1.0/patches/2197994/?format=api
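The GET above can be reproduced from a script. A minimal sketch using only the Python standard library: the `patch_url` helper is illustrative, and the `sample` document is a trimmed stand-in that mirrors a few fields of the full response shown below (the real call would use `urlopen`, omitted here to keep the sketch network-free).

```python
import json
from urllib.request import Request

# Base URL and patch ID as they appear in the GET line above.
API = "http://patchwork.ozlabs.org/api/1.0"
PATCH_ID = 2197994

def patch_url(patch_id: int) -> str:
    """Build the detail URL for a single patch (illustrative helper)."""
    return f"{API}/patches/{patch_id}/"

# Build the request; sending it is left out of this sketch:
#   patch = json.load(urlopen(req))
req = Request(patch_url(PATCH_ID), headers={"Accept": "application/json"})

# Parsing a trimmed sample of the response document shown below:
sample = json.loads("""
{"id": 2197994, "state": "new", "archived": false,
 "series": [{"id": 492635, "name": "target/arm: single-binary", "version": 4}]}
""")
print(sample["state"], sample["series"][0]["version"])
```

The `series` field is a list because a patch may be reposted as part of more than one series; clients generally index the entry they care about rather than assuming exactly one.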
{ "id": 2197994, "url": "http://patchwork.ozlabs.org/api/1.0/patches/2197994/?format=api", "project": { "id": 14, "url": "http://patchwork.ozlabs.org/api/1.0/projects/14/?format=api", "name": "QEMU Development", "link_name": "qemu-devel", "list_id": "qemu-devel.nongnu.org", "list_email": "qemu-devel@nongnu.org", "web_url": "", "scm_url": "", "webscm_url": "" }, "msgid": "<20260219040150.2098396-2-pierrick.bouvier@linaro.org>", "date": "2026-02-19T04:01:37", "name": "[v4,01/14] target/arm: Move TCG-specific code out of debug_helper.c", "commit_ref": null, "pull_url": null, "state": "new", "archived": false, "hash": "d07b740cc091d36467e76b4cb50a6d637013699e", "submitter": { "id": 85798, "url": "http://patchwork.ozlabs.org/api/1.0/people/85798/?format=api", "name": "Pierrick Bouvier", "email": "pierrick.bouvier@linaro.org" }, "delegate": null, "mbox": "http://patchwork.ozlabs.org/project/qemu-devel/patch/20260219040150.2098396-2-pierrick.bouvier@linaro.org/mbox/", "series": [ { "id": 492635, "url": "http://patchwork.ozlabs.org/api/1.0/series/492635/?format=api", "date": "2026-02-19T04:01:36", "name": "target/arm: single-binary", "version": 4, "mbox": "http://patchwork.ozlabs.org/series/492635/mbox/" } ], "check": "pending", "checks": "http://patchwork.ozlabs.org/api/patches/2197994/checks/", "tags": {}, "headers": { "Return-Path": "<qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org>", "X-Original-To": "incoming@patchwork.ozlabs.org", "Delivered-To": "patchwork-incoming@legolas.ozlabs.org", "Authentication-Results": [ "legolas.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=linaro.org header.i=@linaro.org header.a=rsa-sha256\n header.s=google header.b=Kyh0lPhl;\n\tdkim-atps=neutral", "legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=nongnu.org\n (client-ip=209.51.188.17; helo=lists.gnu.org;\n envelope-from=qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org;\n receiver=patchwork.ozlabs.org)" ], "Received": [ "from 
lists.gnu.org (lists.gnu.org [209.51.188.17])\n\t(using TLSv1.2 with cipher ECDHE-ECDSA-AES256-GCM-SHA384 (256/256 bits))\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4fGfrp2Yr6z1xpY\n\tfor <incoming@patchwork.ozlabs.org>; Thu, 19 Feb 2026 15:04:10 +1100 (AEDT)", "from localhost ([::1] helo=lists1p.gnu.org)\n\tby lists.gnu.org with esmtp (Exim 4.90_1)\n\t(envelope-from <qemu-devel-bounces@nongnu.org>)\n\tid 1vsvF4-0008MT-SA; Wed, 18 Feb 2026 23:02:18 -0500", "from eggs.gnu.org ([2001:470:142:3::10])\n by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256)\n (Exim 4.90_1) (envelope-from <pierrick.bouvier@linaro.org>)\n id 1vsvF2-0008Lb-PV\n for qemu-devel@nongnu.org; Wed, 18 Feb 2026 23:02:16 -0500", "from mail-pg1-x535.google.com ([2607:f8b0:4864:20::535])\n by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)\n (Exim 4.90_1) (envelope-from <pierrick.bouvier@linaro.org>)\n id 1vsvEy-0001Cx-4g\n for qemu-devel@nongnu.org; Wed, 18 Feb 2026 23:02:16 -0500", "by mail-pg1-x535.google.com with SMTP id\n 41be03b00d2f7-c6e734ba92bso231277a12.3\n for <qemu-devel@nongnu.org>; Wed, 18 Feb 2026 20:02:11 -0800 (PST)", "from pc.taild8403c.ts.net (216-71-219-44.dyn.novuscom.net.\n [216.71.219.44]) by smtp.gmail.com with ESMTPSA id\n d9443c01a7336-2ad1a73200asm147636225ad.36.2026.02.18.20.02.08\n (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);\n Wed, 18 Feb 2026 20:02:09 -0800 (PST)" ], "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed;\n d=linaro.org; s=google; t=1771473730; x=1772078530; darn=nongnu.org;\n h=content-transfer-encoding:mime-version:references:in-reply-to\n :message-id:date:subject:cc:to:from:from:to:cc:subject:date\n :message-id:reply-to;\n bh=eauig3ysS+xxOhSAgs3llLzf1uSxo9TErKgwme08ax4=;\n b=Kyh0lPhlw3U1+JY3DaMlTHjnH+HH0U/zGGSmT+JPhXX2CZffnmBTSxU+5dOWClpOXw\n h4U2pL8BjFnpZqfrvLn4gAPV+cwop6A2iigraGD+xW7O6jvkNfO0U9knbyxnx4u3v4OQ\n 
ETBRXsoT2v+W0mbRMRI4pZ9fxcxJvigGSGoqJo/R7zQBBm+vAuOsjtyQpvJqctH/8Uks\n 605CtQm4ezlI48Mobbl8ucxu6bBsnw6Quu75lPJm3+CTchpu37YCy5xNVI5TVZw318Cd\n 955Nk9xPBhmrXTVkAE5rm+gfM0ao5jAJjSQ1fdHou20g6rsqTYuwP2bF0ANgFcwCt0ip\n spoA==", "X-Google-DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed;\n d=1e100.net; s=20230601; t=1771473730; x=1772078530;\n h=content-transfer-encoding:mime-version:references:in-reply-to\n :message-id:date:subject:cc:to:from:x-gm-gg:x-gm-message-state:from\n :to:cc:subject:date:message-id:reply-to;\n bh=eauig3ysS+xxOhSAgs3llLzf1uSxo9TErKgwme08ax4=;\n b=I0ZqoENOpLWoduz0SyhqoKYFjG1IDqt0m7wB3aPNEgxoeHxoDTlonR444/PK5YdXc9\n eg9YJdZ7xoWN3cKvbUQh+ehn6VkmUDeK2MEZema4axowkM9cPtmJsy3ibSpGBmK9vqkJ\n bHlMbbFzInwz5o//EYRe0A0RVCgpyGmJmmrcIVYYfPUnnULtJU/yai3SHJzyJbFp3RdM\n qKW44M/EblDw1q7gUvPCn5zlM9BSUpetPQ/U7/LgqqrtUa1GKaFXy1aZwwI4xXlTEhz2\n 2CPQ7vgtqrvztGwFkgOftIBL4ha/fWL5X8pmc+dGogKc6VLLnzPDdrFzAFFx779297Nu\n O4iw==", "X-Gm-Message-State": "AOJu0YxN1EbVqdrw6K3c+Wkv577tPyCRUIt28/vsEA3AsFUyT9u0DuC3\n 6fyCGed1OWlzh2G0JXqx3vhaZYUvXG+dVtINC7APR5PeHiRxN4WsDm1uQy7bl/P4VY2+duxcxts\n WV5qm", "X-Gm-Gg": "AZuq6aJULMKqykIqA/IZgQPe/nD8mmpZxKiS6z5+11Wsij8KkblahJijXUqi1GYtuo+\n I78ANMCPMKkD6zgkIeyZ0005ZVf4M0c7wCrsFx2brj1dPCs5KCb+YydgHMof4qKv7m4Iqen4d9u\n a7bPn8AzNBMeKYBMjCO+cQ0/4fkdOWYY5Ao9D3Y3s47wAHKdAhF33Jk5Cb/XWv8NToXETfku91T\n ssVf3kzoNKzLVMpgpsKBVpzxw1lg07PwDwI214cXIy7yHYGiOyJU5cbFmxIpn8EdzVBm1tMXbRO\n +1XT0cXgeUo4RMUvpvAE72jfs1BLhOf/xKDAjoqIy9iC4pPJXe1Cushgdmqv5mqdVcHUvLaBgLw\n tkVx3/t688tRP6p7m3YfMzqs+IMMjMQQiCD92Cxu4dktJGMR6m3vijs4/0kRhi4llD8ErXzdG2h\n hVbD2j/4e3AmTfeJzfD1xarr5pc1ypOaFUAmSmqMU06fknAHfJoXD/Ir8Bg4fya5ZlkU54zIoBn\n B1q", "X-Received": "by 2002:a17:903:2284:b0:2aa:df82:ed7a with SMTP id\n d9443c01a7336-2ad17552e10mr143164445ad.58.1771473729848;\n Wed, 18 Feb 2026 20:02:09 -0800 (PST)", "From": "Pierrick Bouvier <pierrick.bouvier@linaro.org>", "To": "qemu-devel@nongnu.org", "Cc": "=?utf-8?q?Alex_Benn=C3=A9e?= <alex.bennee@linaro.org>,\n Pierrick Bouvier 
<pierrick.bouvier@linaro.org>,\n Paolo Bonzini <pbonzini@redhat.com>, qemu-arm@nongnu.org,\n kvm@vger.kernel.org, Richard Henderson <richard.henderson@linaro.org>,\n\t=?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= <philmd@linaro.org>, anjo@rev.ng,\n Peter Maydell <peter.maydell@linaro.org>,\n Jim MacArthur <jim.macarthur@linaro.org>", "Subject": "[PATCH v4 01/14] target/arm: Move TCG-specific code out of\n debug_helper.c", "Date": "Wed, 18 Feb 2026 20:01:37 -0800", "Message-ID": "<20260219040150.2098396-2-pierrick.bouvier@linaro.org>", "X-Mailer": "git-send-email 2.47.3", "In-Reply-To": "<20260219040150.2098396-1-pierrick.bouvier@linaro.org>", "References": "<20260219040150.2098396-1-pierrick.bouvier@linaro.org>", "MIME-Version": "1.0", "Content-Transfer-Encoding": "8bit", "Received-SPF": "pass client-ip=2607:f8b0:4864:20::535;\n envelope-from=pierrick.bouvier@linaro.org; helo=mail-pg1-x535.google.com", "X-Spam_score_int": "-20", "X-Spam_score": "-2.1", "X-Spam_bar": "--", "X-Spam_report": "(-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1,\n DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1,\n RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001,\n SPF_PASS=-0.001 autolearn=ham autolearn_force=no", "X-Spam_action": "no action", "X-BeenThere": "qemu-devel@nongnu.org", "X-Mailman-Version": "2.1.29", "Precedence": "list", "List-Id": "qemu development <qemu-devel.nongnu.org>", "List-Unsubscribe": "<https://lists.nongnu.org/mailman/options/qemu-devel>,\n <mailto:qemu-devel-request@nongnu.org?subject=unsubscribe>", "List-Archive": "<https://lists.nongnu.org/archive/html/qemu-devel>", "List-Post": "<mailto:qemu-devel@nongnu.org>", "List-Help": "<mailto:qemu-devel-request@nongnu.org?subject=help>", "List-Subscribe": "<https://lists.nongnu.org/mailman/listinfo/qemu-devel>,\n <mailto:qemu-devel-request@nongnu.org?subject=subscribe>", "Errors-To": "qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org", "Sender": "qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org" 
}, "content": "From: Peter Maydell <peter.maydell@linaro.org>\n\nThe target/arm/debug_helper.c file has some code which we need\nfor non-TCG accelerators, but quite a lot which is guarded by\na CONFIG_TCG ifdef. Move all this TCG-only code out to a\nnew file target/arm/tcg/debug.c.\n\nIn particular all the code requiring access to the TCG\nhelper function prototypes is in the moved code, so we can\ndrop the use of tcg/helper.h from debug_helper.c.\n\nSigned-off-by: Peter Maydell <peter.maydell@linaro.org>\nReviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>\nReviewed-by: Richard Henderson <richard.henderson@linaro.org>\nSigned-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>\n---\n target/arm/debug_helper.c | 769 ------------------------------------\n target/arm/tcg/debug.c | 782 +++++++++++++++++++++++++++++++++++++\n target/arm/tcg/meson.build | 2 +\n 3 files changed, 784 insertions(+), 769 deletions(-)\n create mode 100644 target/arm/tcg/debug.c", "diff": "diff --git a/target/arm/debug_helper.c b/target/arm/debug_helper.c\nindex 579516e1541..352c8e5c8e7 100644\n--- a/target/arm/debug_helper.c\n+++ b/target/arm/debug_helper.c\n@@ -14,775 +14,6 @@\n #include \"exec/watchpoint.h\"\n #include \"system/tcg.h\"\n \n-#define HELPER_H \"tcg/helper.h\"\n-#include \"exec/helper-proto.h.inc\"\n-\n-#ifdef CONFIG_TCG\n-/* Return the Exception Level targeted by debug exceptions. 
*/\n-static int arm_debug_target_el(CPUARMState *env)\n-{\n- bool secure = arm_is_secure(env);\n- bool route_to_el2 = false;\n-\n- if (arm_feature(env, ARM_FEATURE_M)) {\n- return 1;\n- }\n-\n- if (arm_is_el2_enabled(env)) {\n- route_to_el2 = env->cp15.hcr_el2 & HCR_TGE ||\n- env->cp15.mdcr_el2 & MDCR_TDE;\n- }\n-\n- if (route_to_el2) {\n- return 2;\n- } else if (arm_feature(env, ARM_FEATURE_EL3) &&\n- !arm_el_is_aa64(env, 3) && secure) {\n- return 3;\n- } else {\n- return 1;\n- }\n-}\n-\n-/*\n- * Raise an exception to the debug target el.\n- * Modify syndrome to indicate when origin and target EL are the same.\n- */\n-G_NORETURN static void\n-raise_exception_debug(CPUARMState *env, uint32_t excp, uint32_t syndrome)\n-{\n- int debug_el = arm_debug_target_el(env);\n- int cur_el = arm_current_el(env);\n-\n- /*\n- * If singlestep is targeting a lower EL than the current one, then\n- * DisasContext.ss_active must be false and we can never get here.\n- * Similarly for watchpoint and breakpoint matches.\n- */\n- assert(debug_el >= cur_el);\n- syndrome |= (debug_el == cur_el) << ARM_EL_EC_SHIFT;\n- raise_exception(env, excp, syndrome, debug_el);\n-}\n-\n-/* See AArch64.GenerateDebugExceptionsFrom() in ARM ARM pseudocode */\n-static bool aa64_generate_debug_exceptions(CPUARMState *env)\n-{\n- int cur_el = arm_current_el(env);\n- int debug_el;\n-\n- if (cur_el == 3) {\n- return false;\n- }\n-\n- /* MDCR_EL3.SDD disables debug events from Secure state */\n- if (arm_is_secure_below_el3(env)\n- && extract32(env->cp15.mdcr_el3, 16, 1)) {\n- return false;\n- }\n-\n- /*\n- * Same EL to same EL debug exceptions need MDSCR_KDE enabled\n- * while not masking the (D)ebug bit in DAIF.\n- */\n- debug_el = arm_debug_target_el(env);\n-\n- if (cur_el == debug_el) {\n- return extract32(env->cp15.mdscr_el1, 13, 1)\n- && !(env->daif & PSTATE_D);\n- }\n-\n- /* Otherwise the debug target needs to be a higher EL */\n- return debug_el > cur_el;\n-}\n-\n-static bool 
aa32_generate_debug_exceptions(CPUARMState *env)\n-{\n- int el = arm_current_el(env);\n-\n- if (el == 0 && arm_el_is_aa64(env, 1)) {\n- return aa64_generate_debug_exceptions(env);\n- }\n-\n- if (arm_is_secure(env)) {\n- int spd;\n-\n- if (el == 0 && (env->cp15.sder & 1)) {\n- /*\n- * SDER.SUIDEN means debug exceptions from Secure EL0\n- * are always enabled. Otherwise they are controlled by\n- * SDCR.SPD like those from other Secure ELs.\n- */\n- return true;\n- }\n-\n- spd = extract32(env->cp15.mdcr_el3, 14, 2);\n- switch (spd) {\n- case 1:\n- /* SPD == 0b01 is reserved, but behaves as 0b00. */\n- case 0:\n- /*\n- * For 0b00 we return true if external secure invasive debug\n- * is enabled. On real hardware this is controlled by external\n- * signals to the core. QEMU always permits debug, and behaves\n- * as if DBGEN, SPIDEN, NIDEN and SPNIDEN are all tied high.\n- */\n- return true;\n- case 2:\n- return false;\n- case 3:\n- return true;\n- }\n- }\n-\n- return el != 2;\n-}\n-\n-/*\n- * Return true if debugging exceptions are currently enabled.\n- * This corresponds to what in ARM ARM pseudocode would be\n- * if UsingAArch32() then\n- * return AArch32.GenerateDebugExceptions()\n- * else\n- * return AArch64.GenerateDebugExceptions()\n- * We choose to push the if() down into this function for clarity,\n- * since the pseudocode has it at all callsites except for the one in\n- * CheckSoftwareStep(), where it is elided because both branches would\n- * always return the same value.\n- */\n-bool arm_generate_debug_exceptions(CPUARMState *env)\n-{\n- if ((env->cp15.oslsr_el1 & 1) || (env->cp15.osdlr_el1 & 1)) {\n- return false;\n- }\n- if (is_a64(env)) {\n- return aa64_generate_debug_exceptions(env);\n- } else {\n- return aa32_generate_debug_exceptions(env);\n- }\n-}\n-\n-/*\n- * Is single-stepping active? 
(Note that the \"is EL_D AArch64?\" check\n- * implicitly means this always returns false in pre-v8 CPUs.)\n- */\n-bool arm_singlestep_active(CPUARMState *env)\n-{\n- return extract32(env->cp15.mdscr_el1, 0, 1)\n- && arm_el_is_aa64(env, arm_debug_target_el(env))\n- && arm_generate_debug_exceptions(env);\n-}\n-\n-/* Return true if the linked breakpoint entry lbn passes its checks */\n-static bool linked_bp_matches(ARMCPU *cpu, int lbn)\n-{\n- CPUARMState *env = &cpu->env;\n- uint64_t bcr = env->cp15.dbgbcr[lbn];\n- int brps = arm_num_brps(cpu);\n- int ctx_cmps = arm_num_ctx_cmps(cpu);\n- int bt;\n- uint32_t contextidr;\n- uint64_t hcr_el2;\n-\n- /*\n- * Links to unimplemented or non-context aware breakpoints are\n- * CONSTRAINED UNPREDICTABLE: either behave as if disabled, or\n- * as if linked to an UNKNOWN context-aware breakpoint (in which\n- * case DBGWCR<n>_EL1.LBN must indicate that breakpoint).\n- * We choose the former.\n- */\n- if (lbn >= brps || lbn < (brps - ctx_cmps)) {\n- return false;\n- }\n-\n- bcr = env->cp15.dbgbcr[lbn];\n-\n- if (extract64(bcr, 0, 1) == 0) {\n- /* Linked breakpoint disabled : generate no events */\n- return false;\n- }\n-\n- bt = extract64(bcr, 20, 4);\n- hcr_el2 = arm_hcr_el2_eff(env);\n-\n- switch (bt) {\n- case 3: /* linked context ID match */\n- switch (arm_current_el(env)) {\n- default:\n- /* Context matches never fire in AArch64 EL3 */\n- return false;\n- case 2:\n- if (!(hcr_el2 & HCR_E2H)) {\n- /* Context matches never fire in EL2 without E2H enabled. 
*/\n- return false;\n- }\n- contextidr = env->cp15.contextidr_el[2];\n- break;\n- case 1:\n- contextidr = env->cp15.contextidr_el[1];\n- break;\n- case 0:\n- if ((hcr_el2 & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {\n- contextidr = env->cp15.contextidr_el[2];\n- } else {\n- contextidr = env->cp15.contextidr_el[1];\n- }\n- break;\n- }\n- break;\n-\n- case 7: /* linked contextidr_el1 match */\n- contextidr = env->cp15.contextidr_el[1];\n- break;\n- case 13: /* linked contextidr_el2 match */\n- contextidr = env->cp15.contextidr_el[2];\n- break;\n-\n- case 9: /* linked VMID match (reserved if no EL2) */\n- case 11: /* linked context ID and VMID match (reserved if no EL2) */\n- case 15: /* linked full context ID match */\n- default:\n- /*\n- * Links to Unlinked context breakpoints must generate no\n- * events; we choose to do the same for reserved values too.\n- */\n- return false;\n- }\n-\n- /*\n- * We match the whole register even if this is AArch32 using the\n- * short descriptor format (in which case it holds both PROCID and ASID),\n- * since we don't implement the optional v7 context ID masking.\n- */\n- return contextidr == (uint32_t)env->cp15.dbgbvr[lbn];\n-}\n-\n-static bool bp_wp_matches(ARMCPU *cpu, int n, bool is_wp)\n-{\n- CPUARMState *env = &cpu->env;\n- uint64_t cr;\n- int pac, hmc, ssc, wt, lbn;\n- /*\n- * Note that for watchpoints the check is against the CPU security\n- * state, not the S/NS attribute on the offending data access.\n- */\n- bool is_secure = arm_is_secure(env);\n- int access_el = arm_current_el(env);\n-\n- if (is_wp) {\n- CPUWatchpoint *wp = env->cpu_watchpoint[n];\n-\n- if (!wp || !(wp->flags & BP_WATCHPOINT_HIT)) {\n- return false;\n- }\n- cr = env->cp15.dbgwcr[n];\n- if (wp->hitattrs.user) {\n- /*\n- * The LDRT/STRT/LDT/STT \"unprivileged access\" instructions should\n- * match watchpoints as if they were accesses done at EL0, even if\n- * the CPU is at EL1 or higher.\n- */\n- access_el = 0;\n- }\n- } else {\n- uint64_t pc = 
is_a64(env) ? env->pc : env->regs[15];\n-\n- if (!env->cpu_breakpoint[n] || env->cpu_breakpoint[n]->pc != pc) {\n- return false;\n- }\n- cr = env->cp15.dbgbcr[n];\n- }\n- /*\n- * The WATCHPOINT_HIT flag guarantees us that the watchpoint is\n- * enabled and that the address and access type match; for breakpoints\n- * we know the address matched; check the remaining fields, including\n- * linked breakpoints. We rely on WCR and BCR having the same layout\n- * for the LBN, SSC, HMC, PAC/PMC and is-linked fields.\n- * Note that some combinations of {PAC, HMC, SSC} are reserved and\n- * must act either like some valid combination or as if the watchpoint\n- * were disabled. We choose the former, and use this together with\n- * the fact that EL3 must always be Secure and EL2 must always be\n- * Non-Secure to simplify the code slightly compared to the full\n- * table in the ARM ARM.\n- */\n- pac = FIELD_EX64(cr, DBGWCR, PAC);\n- hmc = FIELD_EX64(cr, DBGWCR, HMC);\n- ssc = FIELD_EX64(cr, DBGWCR, SSC);\n-\n- switch (ssc) {\n- case 0:\n- break;\n- case 1:\n- case 3:\n- if (is_secure) {\n- return false;\n- }\n- break;\n- case 2:\n- if (!is_secure) {\n- return false;\n- }\n- break;\n- }\n-\n- switch (access_el) {\n- case 3:\n- case 2:\n- if (!hmc) {\n- return false;\n- }\n- break;\n- case 1:\n- if (extract32(pac, 0, 1) == 0) {\n- return false;\n- }\n- break;\n- case 0:\n- if (extract32(pac, 1, 1) == 0) {\n- return false;\n- }\n- break;\n- default:\n- g_assert_not_reached();\n- }\n-\n- wt = FIELD_EX64(cr, DBGWCR, WT);\n- lbn = FIELD_EX64(cr, DBGWCR, LBN);\n-\n- if (wt && !linked_bp_matches(cpu, lbn)) {\n- return false;\n- }\n-\n- return true;\n-}\n-\n-static bool check_watchpoints(ARMCPU *cpu)\n-{\n- CPUARMState *env = &cpu->env;\n- int n;\n-\n- /*\n- * If watchpoints are disabled globally or we can't take debug\n- * exceptions here then watchpoint firings are ignored.\n- */\n- if (extract32(env->cp15.mdscr_el1, 15, 1) == 0\n- || !arm_generate_debug_exceptions(env)) {\n- return 
false;\n- }\n-\n- for (n = 0; n < ARRAY_SIZE(env->cpu_watchpoint); n++) {\n- if (bp_wp_matches(cpu, n, true)) {\n- return true;\n- }\n- }\n- return false;\n-}\n-\n-bool arm_debug_check_breakpoint(CPUState *cs)\n-{\n- ARMCPU *cpu = ARM_CPU(cs);\n- CPUARMState *env = &cpu->env;\n- vaddr pc;\n- int n;\n-\n- /*\n- * If breakpoints are disabled globally or we can't take debug\n- * exceptions here then breakpoint firings are ignored.\n- */\n- if (extract32(env->cp15.mdscr_el1, 15, 1) == 0\n- || !arm_generate_debug_exceptions(env)) {\n- return false;\n- }\n-\n- /*\n- * Single-step exceptions have priority over breakpoint exceptions.\n- * If single-step state is active-pending, suppress the bp.\n- */\n- if (arm_singlestep_active(env) && !(env->pstate & PSTATE_SS)) {\n- return false;\n- }\n-\n- /*\n- * PC alignment faults have priority over breakpoint exceptions.\n- */\n- pc = is_a64(env) ? env->pc : env->regs[15];\n- if ((is_a64(env) || !env->thumb) && (pc & 3) != 0) {\n- return false;\n- }\n-\n- /*\n- * Instruction aborts have priority over breakpoint exceptions.\n- * TODO: We would need to look up the page for PC and verify that\n- * it is present and executable.\n- */\n-\n- for (n = 0; n < ARRAY_SIZE(env->cpu_breakpoint); n++) {\n- if (bp_wp_matches(cpu, n, false)) {\n- return true;\n- }\n- }\n- return false;\n-}\n-\n-bool arm_debug_check_watchpoint(CPUState *cs, CPUWatchpoint *wp)\n-{\n- /*\n- * Called by core code when a CPU watchpoint fires; need to check if this\n- * is also an architectural watchpoint match.\n- */\n- ARMCPU *cpu = ARM_CPU(cs);\n-\n- return check_watchpoints(cpu);\n-}\n-\n-/*\n- * Return the FSR value for a debug exception (watchpoint, hardware\n- * breakpoint or BKPT insn) targeting the specified exception level.\n- */\n-static uint32_t arm_debug_exception_fsr(CPUARMState *env)\n-{\n- ARMMMUFaultInfo fi = { .type = ARMFault_Debug };\n- int target_el = arm_debug_target_el(env);\n- bool using_lpae;\n-\n- if (arm_feature(env, ARM_FEATURE_M)) {\n- 
using_lpae = false;\n- } else if (target_el == 2 || arm_el_is_aa64(env, target_el)) {\n- using_lpae = true;\n- } else if (arm_feature(env, ARM_FEATURE_PMSA) &&\n- arm_feature(env, ARM_FEATURE_V8)) {\n- using_lpae = true;\n- } else if (arm_feature(env, ARM_FEATURE_LPAE) &&\n- (env->cp15.tcr_el[target_el] & TTBCR_EAE)) {\n- using_lpae = true;\n- } else {\n- using_lpae = false;\n- }\n-\n- if (using_lpae) {\n- return arm_fi_to_lfsc(&fi);\n- } else {\n- return arm_fi_to_sfsc(&fi);\n- }\n-}\n-\n-void arm_debug_excp_handler(CPUState *cs)\n-{\n- /*\n- * Called by core code when a watchpoint or breakpoint fires;\n- * need to check which one and raise the appropriate exception.\n- */\n- ARMCPU *cpu = ARM_CPU(cs);\n- CPUARMState *env = &cpu->env;\n- CPUWatchpoint *wp_hit = cs->watchpoint_hit;\n-\n- if (wp_hit) {\n- if (wp_hit->flags & BP_CPU) {\n- bool wnr = (wp_hit->flags & BP_WATCHPOINT_HIT_WRITE) != 0;\n-\n- cs->watchpoint_hit = NULL;\n-\n- env->exception.fsr = arm_debug_exception_fsr(env);\n- env->exception.vaddress = wp_hit->hitaddr;\n- raise_exception_debug(env, EXCP_DATA_ABORT,\n- syn_watchpoint(0, 0, wnr));\n- }\n- } else {\n- uint64_t pc = is_a64(env) ? 
env->pc : env->regs[15];\n-\n- /*\n- * (1) GDB breakpoints should be handled first.\n- * (2) Do not raise a CPU exception if no CPU breakpoint has fired,\n- * since singlestep is also done by generating a debug internal\n- * exception.\n- */\n- if (cpu_breakpoint_test(cs, pc, BP_GDB)\n- || !cpu_breakpoint_test(cs, pc, BP_CPU)) {\n- return;\n- }\n-\n- env->exception.fsr = arm_debug_exception_fsr(env);\n- /*\n- * FAR is UNKNOWN: clear vaddress to avoid potentially exposing\n- * values to the guest that it shouldn't be able to see at its\n- * exception/security level.\n- */\n- env->exception.vaddress = 0;\n- raise_exception_debug(env, EXCP_PREFETCH_ABORT, syn_breakpoint(0));\n- }\n-}\n-\n-/*\n- * Raise an EXCP_BKPT with the specified syndrome register value,\n- * targeting the correct exception level for debug exceptions.\n- */\n-void HELPER(exception_bkpt_insn)(CPUARMState *env, uint32_t syndrome)\n-{\n- int debug_el = arm_debug_target_el(env);\n- int cur_el = arm_current_el(env);\n-\n- /* FSR will only be used if the debug target EL is AArch32. */\n- env->exception.fsr = arm_debug_exception_fsr(env);\n- /*\n- * FAR is UNKNOWN: clear vaddress to avoid potentially exposing\n- * values to the guest that it shouldn't be able to see at its\n- * exception/security level.\n- */\n- env->exception.vaddress = 0;\n- /*\n- * Other kinds of architectural debug exception are ignored if\n- * they target an exception level below the current one (in QEMU\n- * this is checked by arm_generate_debug_exceptions()). 
Breakpoint\n- * instructions are special because they always generate an exception\n- * to somewhere: if they can't go to the configured debug exception\n- * level they are taken to the current exception level.\n- */\n- if (debug_el < cur_el) {\n- debug_el = cur_el;\n- }\n- raise_exception(env, EXCP_BKPT, syndrome, debug_el);\n-}\n-\n-void HELPER(exception_swstep)(CPUARMState *env, uint32_t syndrome)\n-{\n- raise_exception_debug(env, EXCP_UDEF, syndrome);\n-}\n-\n-void hw_watchpoint_update(ARMCPU *cpu, int n)\n-{\n- CPUARMState *env = &cpu->env;\n- vaddr len = 0;\n- vaddr wvr = env->cp15.dbgwvr[n];\n- uint64_t wcr = env->cp15.dbgwcr[n];\n- int mask;\n- int flags = BP_CPU | BP_STOP_BEFORE_ACCESS;\n-\n- if (env->cpu_watchpoint[n]) {\n- cpu_watchpoint_remove_by_ref(CPU(cpu), env->cpu_watchpoint[n]);\n- env->cpu_watchpoint[n] = NULL;\n- }\n-\n- if (!FIELD_EX64(wcr, DBGWCR, E)) {\n- /* E bit clear : watchpoint disabled */\n- return;\n- }\n-\n- switch (FIELD_EX64(wcr, DBGWCR, LSC)) {\n- case 0:\n- /* LSC 00 is reserved and must behave as if the wp is disabled */\n- return;\n- case 1:\n- flags |= BP_MEM_READ;\n- break;\n- case 2:\n- flags |= BP_MEM_WRITE;\n- break;\n- case 3:\n- flags |= BP_MEM_ACCESS;\n- break;\n- }\n-\n- /*\n- * Attempts to use both MASK and BAS fields simultaneously are\n- * CONSTRAINED UNPREDICTABLE; we opt to ignore BAS in this case,\n- * thus generating a watchpoint for every byte in the masked region.\n- */\n- mask = FIELD_EX64(wcr, DBGWCR, MASK);\n- if (mask == 1 || mask == 2) {\n- /*\n- * Reserved values of MASK; we must act as if the mask value was\n- * some non-reserved value, or as if the watchpoint were disabled.\n- * We choose the latter.\n- */\n- return;\n- } else if (mask) {\n- /* Watchpoint covers an aligned area up to 2GB in size */\n- len = 1ULL << mask;\n- /*\n- * If masked bits in WVR are not zero it's CONSTRAINED UNPREDICTABLE\n- * whether the watchpoint fires when the unmasked bits match; we opt\n- * to generate the exceptions.\n- 
*/\n- wvr &= ~(len - 1);\n- } else {\n- /* Watchpoint covers bytes defined by the byte address select bits */\n- int bas = FIELD_EX64(wcr, DBGWCR, BAS);\n- int basstart;\n-\n- if (extract64(wvr, 2, 1)) {\n- /*\n- * Deprecated case of an only 4-aligned address. BAS[7:4] are\n- * ignored, and BAS[3:0] define which bytes to watch.\n- */\n- bas &= 0xf;\n- }\n-\n- if (bas == 0) {\n- /* This must act as if the watchpoint is disabled */\n- return;\n- }\n-\n- /*\n- * The BAS bits are supposed to be programmed to indicate a contiguous\n- * range of bytes. Otherwise it is CONSTRAINED UNPREDICTABLE whether\n- * we fire for each byte in the word/doubleword addressed by the WVR.\n- * We choose to ignore any non-zero bits after the first range of 1s.\n- */\n- basstart = ctz32(bas);\n- len = cto32(bas >> basstart);\n- wvr += basstart;\n- }\n-\n- cpu_watchpoint_insert(CPU(cpu), wvr, len, flags,\n- &env->cpu_watchpoint[n]);\n-}\n-\n-void hw_watchpoint_update_all(ARMCPU *cpu)\n-{\n- int i;\n- CPUARMState *env = &cpu->env;\n-\n- /*\n- * Completely clear out existing QEMU watchpoints and our array, to\n- * avoid possible stale entries following migration load.\n- */\n- cpu_watchpoint_remove_all(CPU(cpu), BP_CPU);\n- memset(env->cpu_watchpoint, 0, sizeof(env->cpu_watchpoint));\n-\n- for (i = 0; i < ARRAY_SIZE(cpu->env.cpu_watchpoint); i++) {\n- hw_watchpoint_update(cpu, i);\n- }\n-}\n-\n-void hw_breakpoint_update(ARMCPU *cpu, int n)\n-{\n- CPUARMState *env = &cpu->env;\n- uint64_t bvr = env->cp15.dbgbvr[n];\n- uint64_t bcr = env->cp15.dbgbcr[n];\n- vaddr addr;\n- int bt;\n- int flags = BP_CPU;\n-\n- if (env->cpu_breakpoint[n]) {\n- cpu_breakpoint_remove_by_ref(CPU(cpu), env->cpu_breakpoint[n]);\n- env->cpu_breakpoint[n] = NULL;\n- }\n-\n- if (!extract64(bcr, 0, 1)) {\n- /* E bit clear : watchpoint disabled */\n- return;\n- }\n-\n- bt = extract64(bcr, 20, 4);\n-\n- switch (bt) {\n- case 4: /* unlinked address mismatch (reserved if AArch64) */\n- case 5: /* linked address mismatch 
(reserved if AArch64) */\n- qemu_log_mask(LOG_UNIMP,\n- \"arm: address mismatch breakpoint types not implemented\\n\");\n- return;\n- case 0: /* unlinked address match */\n- case 1: /* linked address match */\n- {\n- /*\n- * Bits [1:0] are RES0.\n- *\n- * It is IMPLEMENTATION DEFINED whether bits [63:49]\n- * ([63:53] for FEAT_LVA) are hardwired to a copy of the sign bit\n- * of the VA field ([48] or [52] for FEAT_LVA), or whether the\n- * value is read as written. It is CONSTRAINED UNPREDICTABLE\n- * whether the RESS bits are ignored when comparing an address.\n- * Therefore we are allowed to compare the entire register, which\n- * lets us avoid considering whether FEAT_LVA is actually enabled.\n- *\n- * The BAS field is used to allow setting breakpoints on 16-bit\n- * wide instructions; it is CONSTRAINED UNPREDICTABLE whether\n- * a bp will fire if the addresses covered by the bp and the addresses\n- * covered by the insn overlap but the insn doesn't start at the\n- * start of the bp address range. We choose to require the insn and\n- * the bp to have the same address. 
The constraints on writing to\n- * BAS enforced in dbgbcr_write mean we have only four cases:\n- * 0b0000 => no breakpoint\n- * 0b0011 => breakpoint on addr\n- * 0b1100 => breakpoint on addr + 2\n- * 0b1111 => breakpoint on addr\n- * See also figure D2-3 in the v8 ARM ARM (DDI0487A.c).\n- */\n- int bas = extract64(bcr, 5, 4);\n- addr = bvr & ~3ULL;\n- if (bas == 0) {\n- return;\n- }\n- if (bas == 0xc) {\n- addr += 2;\n- }\n- break;\n- }\n- case 2: /* unlinked context ID match */\n- case 8: /* unlinked VMID match (reserved if no EL2) */\n- case 10: /* unlinked context ID and VMID match (reserved if no EL2) */\n- qemu_log_mask(LOG_UNIMP,\n- \"arm: unlinked context breakpoint types not implemented\\n\");\n- return;\n- case 9: /* linked VMID match (reserved if no EL2) */\n- case 11: /* linked context ID and VMID match (reserved if no EL2) */\n- case 3: /* linked context ID match */\n- default:\n- /*\n- * We must generate no events for Linked context matches (unless\n- * they are linked to by some other bp/wp, which is handled in\n- * updates for the linking bp/wp). 
We choose to also generate no events\n- * for reserved values.\n- */\n- return;\n- }\n-\n- cpu_breakpoint_insert(CPU(cpu), addr, flags, &env->cpu_breakpoint[n]);\n-}\n-\n-void hw_breakpoint_update_all(ARMCPU *cpu)\n-{\n- int i;\n- CPUARMState *env = &cpu->env;\n-\n- /*\n- * Completely clear out existing QEMU breakpoints and our array, to\n- * avoid possible stale entries following migration load.\n- */\n- cpu_breakpoint_remove_all(CPU(cpu), BP_CPU);\n- memset(env->cpu_breakpoint, 0, sizeof(env->cpu_breakpoint));\n-\n- for (i = 0; i < ARRAY_SIZE(cpu->env.cpu_breakpoint); i++) {\n- hw_breakpoint_update(cpu, i);\n- }\n-}\n-\n-#if !defined(CONFIG_USER_ONLY)\n-\n-vaddr arm_adjust_watchpoint_address(CPUState *cs, vaddr addr, int len)\n-{\n- ARMCPU *cpu = ARM_CPU(cs);\n- CPUARMState *env = &cpu->env;\n-\n- /*\n- * In BE32 system mode, target memory is stored byteswapped (on a\n- * little-endian host system), and by the time we reach here (via an\n- * opcode helper) the addresses of subword accesses have been adjusted\n- * to account for that, which means that watchpoints will not match.\n- * Undo the adjustment here.\n- */\n- if (arm_sctlr_b(env)) {\n- if (len == 1) {\n- addr ^= 3;\n- } else if (len == 2) {\n- addr ^= 2;\n- }\n- }\n-\n- return addr;\n-}\n-\n-#endif /* !CONFIG_USER_ONLY */\n-#endif /* CONFIG_TCG */\n-\n /*\n * Check for traps to \"powerdown debug\" registers, which are controlled\n * by MDCR.TDOSA\ndiff --git a/target/arm/tcg/debug.c b/target/arm/tcg/debug.c\nnew file mode 100644\nindex 00000000000..7dfb291a9bf\n--- /dev/null\n+++ b/target/arm/tcg/debug.c\n@@ -0,0 +1,782 @@\n+/*\n+ * ARM debug helpers used by TCG\n+ *\n+ * This code is licensed under the GNU GPL v2 or later.\n+ *\n+ * SPDX-License-Identifier: GPL-2.0-or-later\n+ */\n+#include \"qemu/osdep.h\"\n+#include \"qemu/log.h\"\n+#include \"cpu.h\"\n+#include \"internals.h\"\n+#include \"cpu-features.h\"\n+#include \"cpregs.h\"\n+#include \"exec/watchpoint.h\"\n+#include 
\"system/tcg.h\"\n+\n+#define HELPER_H \"tcg/helper.h\"\n+#include \"exec/helper-proto.h.inc\"\n+\n+/* Return the Exception Level targeted by debug exceptions. */\n+static int arm_debug_target_el(CPUARMState *env)\n+{\n+ bool secure = arm_is_secure(env);\n+ bool route_to_el2 = false;\n+\n+ if (arm_feature(env, ARM_FEATURE_M)) {\n+ return 1;\n+ }\n+\n+ if (arm_is_el2_enabled(env)) {\n+ route_to_el2 = env->cp15.hcr_el2 & HCR_TGE ||\n+ env->cp15.mdcr_el2 & MDCR_TDE;\n+ }\n+\n+ if (route_to_el2) {\n+ return 2;\n+ } else if (arm_feature(env, ARM_FEATURE_EL3) &&\n+ !arm_el_is_aa64(env, 3) && secure) {\n+ return 3;\n+ } else {\n+ return 1;\n+ }\n+}\n+\n+/*\n+ * Raise an exception to the debug target el.\n+ * Modify syndrome to indicate when origin and target EL are the same.\n+ */\n+static G_NORETURN void\n+raise_exception_debug(CPUARMState *env, uint32_t excp, uint32_t syndrome)\n+{\n+ int debug_el = arm_debug_target_el(env);\n+ int cur_el = arm_current_el(env);\n+\n+ /*\n+ * If singlestep is targeting a lower EL than the current one, then\n+ * DisasContext.ss_active must be false and we can never get here.\n+ * Similarly for watchpoint and breakpoint matches.\n+ */\n+ assert(debug_el >= cur_el);\n+ syndrome |= (debug_el == cur_el) << ARM_EL_EC_SHIFT;\n+ raise_exception(env, excp, syndrome, debug_el);\n+}\n+\n+/* See AArch64.GenerateDebugExceptionsFrom() in ARM ARM pseudocode */\n+static bool aa64_generate_debug_exceptions(CPUARMState *env)\n+{\n+ int cur_el = arm_current_el(env);\n+ int debug_el;\n+\n+ if (cur_el == 3) {\n+ return false;\n+ }\n+\n+ /* MDCR_EL3.SDD disables debug events from Secure state */\n+ if (arm_is_secure_below_el3(env)\n+ && extract32(env->cp15.mdcr_el3, 16, 1)) {\n+ return false;\n+ }\n+\n+ /*\n+ * Same EL to same EL debug exceptions need MDSCR_KDE enabled\n+ * while not masking the (D)ebug bit in DAIF.\n+ */\n+ debug_el = arm_debug_target_el(env);\n+\n+ if (cur_el == debug_el) {\n+ return extract32(env->cp15.mdscr_el1, 13, 1)\n+ && !(env->daif & 
PSTATE_D);\n+ }\n+\n+ /* Otherwise the debug target needs to be a higher EL */\n+ return debug_el > cur_el;\n+}\n+\n+static bool aa32_generate_debug_exceptions(CPUARMState *env)\n+{\n+ int el = arm_current_el(env);\n+\n+ if (el == 0 && arm_el_is_aa64(env, 1)) {\n+ return aa64_generate_debug_exceptions(env);\n+ }\n+\n+ if (arm_is_secure(env)) {\n+ int spd;\n+\n+ if (el == 0 && (env->cp15.sder & 1)) {\n+ /*\n+ * SDER.SUIDEN means debug exceptions from Secure EL0\n+ * are always enabled. Otherwise they are controlled by\n+ * SDCR.SPD like those from other Secure ELs.\n+ */\n+ return true;\n+ }\n+\n+ spd = extract32(env->cp15.mdcr_el3, 14, 2);\n+ switch (spd) {\n+ case 1:\n+ /* SPD == 0b01 is reserved, but behaves as 0b00. */\n+ case 0:\n+ /*\n+ * For 0b00 we return true if external secure invasive debug\n+ * is enabled. On real hardware this is controlled by external\n+ * signals to the core. QEMU always permits debug, and behaves\n+ * as if DBGEN, SPIDEN, NIDEN and SPNIDEN are all tied high.\n+ */\n+ return true;\n+ case 2:\n+ return false;\n+ case 3:\n+ return true;\n+ }\n+ }\n+\n+ return el != 2;\n+}\n+\n+/*\n+ * Return true if debugging exceptions are currently enabled.\n+ * This corresponds to what in ARM ARM pseudocode would be\n+ * if UsingAArch32() then\n+ * return AArch32.GenerateDebugExceptions()\n+ * else\n+ * return AArch64.GenerateDebugExceptions()\n+ * We choose to push the if() down into this function for clarity,\n+ * since the pseudocode has it at all callsites except for the one in\n+ * CheckSoftwareStep(), where it is elided because both branches would\n+ * always return the same value.\n+ */\n+bool arm_generate_debug_exceptions(CPUARMState *env)\n+{\n+ if ((env->cp15.oslsr_el1 & 1) || (env->cp15.osdlr_el1 & 1)) {\n+ return false;\n+ }\n+ if (is_a64(env)) {\n+ return aa64_generate_debug_exceptions(env);\n+ } else {\n+ return aa32_generate_debug_exceptions(env);\n+ }\n+}\n+\n+/*\n+ * Is single-stepping active? 
(Note that the \"is EL_D AArch64?\" check\n+ * implicitly means this always returns false in pre-v8 CPUs.)\n+ */\n+bool arm_singlestep_active(CPUARMState *env)\n+{\n+ return extract32(env->cp15.mdscr_el1, 0, 1)\n+ && arm_el_is_aa64(env, arm_debug_target_el(env))\n+ && arm_generate_debug_exceptions(env);\n+}\n+\n+/* Return true if the linked breakpoint entry lbn passes its checks */\n+static bool linked_bp_matches(ARMCPU *cpu, int lbn)\n+{\n+ CPUARMState *env = &cpu->env;\n+ uint64_t bcr = env->cp15.dbgbcr[lbn];\n+ int brps = arm_num_brps(cpu);\n+ int ctx_cmps = arm_num_ctx_cmps(cpu);\n+ int bt;\n+ uint32_t contextidr;\n+ uint64_t hcr_el2;\n+\n+ /*\n+ * Links to unimplemented or non-context aware breakpoints are\n+ * CONSTRAINED UNPREDICTABLE: either behave as if disabled, or\n+ * as if linked to an UNKNOWN context-aware breakpoint (in which\n+ * case DBGWCR<n>_EL1.LBN must indicate that breakpoint).\n+ * We choose the former.\n+ */\n+ if (lbn >= brps || lbn < (brps - ctx_cmps)) {\n+ return false;\n+ }\n+\n+ bcr = env->cp15.dbgbcr[lbn];\n+\n+ if (extract64(bcr, 0, 1) == 0) {\n+ /* Linked breakpoint disabled : generate no events */\n+ return false;\n+ }\n+\n+ bt = extract64(bcr, 20, 4);\n+ hcr_el2 = arm_hcr_el2_eff(env);\n+\n+ switch (bt) {\n+ case 3: /* linked context ID match */\n+ switch (arm_current_el(env)) {\n+ default:\n+ /* Context matches never fire in AArch64 EL3 */\n+ return false;\n+ case 2:\n+ if (!(hcr_el2 & HCR_E2H)) {\n+ /* Context matches never fire in EL2 without E2H enabled. 
*/\n+ return false;\n+ }\n+ contextidr = env->cp15.contextidr_el[2];\n+ break;\n+ case 1:\n+ contextidr = env->cp15.contextidr_el[1];\n+ break;\n+ case 0:\n+ if ((hcr_el2 & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {\n+ contextidr = env->cp15.contextidr_el[2];\n+ } else {\n+ contextidr = env->cp15.contextidr_el[1];\n+ }\n+ break;\n+ }\n+ break;\n+\n+ case 7: /* linked contextidr_el1 match */\n+ contextidr = env->cp15.contextidr_el[1];\n+ break;\n+ case 13: /* linked contextidr_el2 match */\n+ contextidr = env->cp15.contextidr_el[2];\n+ break;\n+\n+ case 9: /* linked VMID match (reserved if no EL2) */\n+ case 11: /* linked context ID and VMID match (reserved if no EL2) */\n+ case 15: /* linked full context ID match */\n+ default:\n+ /*\n+ * Links to Unlinked context breakpoints must generate no\n+ * events; we choose to do the same for reserved values too.\n+ */\n+ return false;\n+ }\n+\n+ /*\n+ * We match the whole register even if this is AArch32 using the\n+ * short descriptor format (in which case it holds both PROCID and ASID),\n+ * since we don't implement the optional v7 context ID masking.\n+ */\n+ return contextidr == (uint32_t)env->cp15.dbgbvr[lbn];\n+}\n+\n+static bool bp_wp_matches(ARMCPU *cpu, int n, bool is_wp)\n+{\n+ CPUARMState *env = &cpu->env;\n+ uint64_t cr;\n+ int pac, hmc, ssc, wt, lbn;\n+ /*\n+ * Note that for watchpoints the check is against the CPU security\n+ * state, not the S/NS attribute on the offending data access.\n+ */\n+ bool is_secure = arm_is_secure(env);\n+ int access_el = arm_current_el(env);\n+\n+ if (is_wp) {\n+ CPUWatchpoint *wp = env->cpu_watchpoint[n];\n+\n+ if (!wp || !(wp->flags & BP_WATCHPOINT_HIT)) {\n+ return false;\n+ }\n+ cr = env->cp15.dbgwcr[n];\n+ if (wp->hitattrs.user) {\n+ /*\n+ * The LDRT/STRT/LDT/STT \"unprivileged access\" instructions should\n+ * match watchpoints as if they were accesses done at EL0, even if\n+ * the CPU is at EL1 or higher.\n+ */\n+ access_el = 0;\n+ }\n+ } else {\n+ uint64_t pc = 
is_a64(env) ? env->pc : env->regs[15];\n+\n+ if (!env->cpu_breakpoint[n] || env->cpu_breakpoint[n]->pc != pc) {\n+ return false;\n+ }\n+ cr = env->cp15.dbgbcr[n];\n+ }\n+ /*\n+ * The WATCHPOINT_HIT flag guarantees us that the watchpoint is\n+ * enabled and that the address and access type match; for breakpoints\n+ * we know the address matched; check the remaining fields, including\n+ * linked breakpoints. We rely on WCR and BCR having the same layout\n+ * for the LBN, SSC, HMC, PAC/PMC and is-linked fields.\n+ * Note that some combinations of {PAC, HMC, SSC} are reserved and\n+ * must act either like some valid combination or as if the watchpoint\n+ * were disabled. We choose the former, and use this together with\n+ * the fact that EL3 must always be Secure and EL2 must always be\n+ * Non-Secure to simplify the code slightly compared to the full\n+ * table in the ARM ARM.\n+ */\n+ pac = FIELD_EX64(cr, DBGWCR, PAC);\n+ hmc = FIELD_EX64(cr, DBGWCR, HMC);\n+ ssc = FIELD_EX64(cr, DBGWCR, SSC);\n+\n+ switch (ssc) {\n+ case 0:\n+ break;\n+ case 1:\n+ case 3:\n+ if (is_secure) {\n+ return false;\n+ }\n+ break;\n+ case 2:\n+ if (!is_secure) {\n+ return false;\n+ }\n+ break;\n+ }\n+\n+ switch (access_el) {\n+ case 3:\n+ case 2:\n+ if (!hmc) {\n+ return false;\n+ }\n+ break;\n+ case 1:\n+ if (extract32(pac, 0, 1) == 0) {\n+ return false;\n+ }\n+ break;\n+ case 0:\n+ if (extract32(pac, 1, 1) == 0) {\n+ return false;\n+ }\n+ break;\n+ default:\n+ g_assert_not_reached();\n+ }\n+\n+ wt = FIELD_EX64(cr, DBGWCR, WT);\n+ lbn = FIELD_EX64(cr, DBGWCR, LBN);\n+\n+ if (wt && !linked_bp_matches(cpu, lbn)) {\n+ return false;\n+ }\n+\n+ return true;\n+}\n+\n+static bool check_watchpoints(ARMCPU *cpu)\n+{\n+ CPUARMState *env = &cpu->env;\n+ int n;\n+\n+ /*\n+ * If watchpoints are disabled globally or we can't take debug\n+ * exceptions here then watchpoint firings are ignored.\n+ */\n+ if (extract32(env->cp15.mdscr_el1, 15, 1) == 0\n+ || !arm_generate_debug_exceptions(env)) {\n+ return 
false;\n+ }\n+\n+ for (n = 0; n < ARRAY_SIZE(env->cpu_watchpoint); n++) {\n+ if (bp_wp_matches(cpu, n, true)) {\n+ return true;\n+ }\n+ }\n+ return false;\n+}\n+\n+bool arm_debug_check_breakpoint(CPUState *cs)\n+{\n+ ARMCPU *cpu = ARM_CPU(cs);\n+ CPUARMState *env = &cpu->env;\n+ vaddr pc;\n+ int n;\n+\n+ /*\n+ * If breakpoints are disabled globally or we can't take debug\n+ * exceptions here then breakpoint firings are ignored.\n+ */\n+ if (extract32(env->cp15.mdscr_el1, 15, 1) == 0\n+ || !arm_generate_debug_exceptions(env)) {\n+ return false;\n+ }\n+\n+ /*\n+ * Single-step exceptions have priority over breakpoint exceptions.\n+ * If single-step state is active-pending, suppress the bp.\n+ */\n+ if (arm_singlestep_active(env) && !(env->pstate & PSTATE_SS)) {\n+ return false;\n+ }\n+\n+ /*\n+ * PC alignment faults have priority over breakpoint exceptions.\n+ */\n+ pc = is_a64(env) ? env->pc : env->regs[15];\n+ if ((is_a64(env) || !env->thumb) && (pc & 3) != 0) {\n+ return false;\n+ }\n+\n+ /*\n+ * Instruction aborts have priority over breakpoint exceptions.\n+ * TODO: We would need to look up the page for PC and verify that\n+ * it is present and executable.\n+ */\n+\n+ for (n = 0; n < ARRAY_SIZE(env->cpu_breakpoint); n++) {\n+ if (bp_wp_matches(cpu, n, false)) {\n+ return true;\n+ }\n+ }\n+ return false;\n+}\n+\n+bool arm_debug_check_watchpoint(CPUState *cs, CPUWatchpoint *wp)\n+{\n+ /*\n+ * Called by core code when a CPU watchpoint fires; need to check if this\n+ * is also an architectural watchpoint match.\n+ */\n+ ARMCPU *cpu = ARM_CPU(cs);\n+\n+ return check_watchpoints(cpu);\n+}\n+\n+/*\n+ * Return the FSR value for a debug exception (watchpoint, hardware\n+ * breakpoint or BKPT insn) targeting the specified exception level.\n+ */\n+static uint32_t arm_debug_exception_fsr(CPUARMState *env)\n+{\n+ ARMMMUFaultInfo fi = { .type = ARMFault_Debug };\n+ int target_el = arm_debug_target_el(env);\n+ bool using_lpae;\n+\n+ if (arm_feature(env, ARM_FEATURE_M)) {\n+ 
using_lpae = false;\n+ } else if (target_el == 2 || arm_el_is_aa64(env, target_el)) {\n+ using_lpae = true;\n+ } else if (arm_feature(env, ARM_FEATURE_PMSA) &&\n+ arm_feature(env, ARM_FEATURE_V8)) {\n+ using_lpae = true;\n+ } else if (arm_feature(env, ARM_FEATURE_LPAE) &&\n+ (env->cp15.tcr_el[target_el] & TTBCR_EAE)) {\n+ using_lpae = true;\n+ } else {\n+ using_lpae = false;\n+ }\n+\n+ if (using_lpae) {\n+ return arm_fi_to_lfsc(&fi);\n+ } else {\n+ return arm_fi_to_sfsc(&fi);\n+ }\n+}\n+\n+void arm_debug_excp_handler(CPUState *cs)\n+{\n+ /*\n+ * Called by core code when a watchpoint or breakpoint fires;\n+ * need to check which one and raise the appropriate exception.\n+ */\n+ ARMCPU *cpu = ARM_CPU(cs);\n+ CPUARMState *env = &cpu->env;\n+ CPUWatchpoint *wp_hit = cs->watchpoint_hit;\n+\n+ if (wp_hit) {\n+ if (wp_hit->flags & BP_CPU) {\n+ bool wnr = (wp_hit->flags & BP_WATCHPOINT_HIT_WRITE) != 0;\n+\n+ cs->watchpoint_hit = NULL;\n+\n+ env->exception.fsr = arm_debug_exception_fsr(env);\n+ env->exception.vaddress = wp_hit->hitaddr;\n+ raise_exception_debug(env, EXCP_DATA_ABORT,\n+ syn_watchpoint(0, 0, wnr));\n+ }\n+ } else {\n+ uint64_t pc = is_a64(env) ? 
env->pc : env->regs[15];\n+\n+ /*\n+ * (1) GDB breakpoints should be handled first.\n+ * (2) Do not raise a CPU exception if no CPU breakpoint has fired,\n+ * since singlestep is also done by generating a debug internal\n+ * exception.\n+ */\n+ if (cpu_breakpoint_test(cs, pc, BP_GDB)\n+ || !cpu_breakpoint_test(cs, pc, BP_CPU)) {\n+ return;\n+ }\n+\n+ env->exception.fsr = arm_debug_exception_fsr(env);\n+ /*\n+ * FAR is UNKNOWN: clear vaddress to avoid potentially exposing\n+ * values to the guest that it shouldn't be able to see at its\n+ * exception/security level.\n+ */\n+ env->exception.vaddress = 0;\n+ raise_exception_debug(env, EXCP_PREFETCH_ABORT, syn_breakpoint(0));\n+ }\n+}\n+\n+/*\n+ * Raise an EXCP_BKPT with the specified syndrome register value,\n+ * targeting the correct exception level for debug exceptions.\n+ */\n+void HELPER(exception_bkpt_insn)(CPUARMState *env, uint32_t syndrome)\n+{\n+ int debug_el = arm_debug_target_el(env);\n+ int cur_el = arm_current_el(env);\n+\n+ /* FSR will only be used if the debug target EL is AArch32. */\n+ env->exception.fsr = arm_debug_exception_fsr(env);\n+ /*\n+ * FAR is UNKNOWN: clear vaddress to avoid potentially exposing\n+ * values to the guest that it shouldn't be able to see at its\n+ * exception/security level.\n+ */\n+ env->exception.vaddress = 0;\n+ /*\n+ * Other kinds of architectural debug exception are ignored if\n+ * they target an exception level below the current one (in QEMU\n+ * this is checked by arm_generate_debug_exceptions()). 
Breakpoint\n+ * instructions are special because they always generate an exception\n+ * to somewhere: if they can't go to the configured debug exception\n+ * level they are taken to the current exception level.\n+ */\n+ if (debug_el < cur_el) {\n+ debug_el = cur_el;\n+ }\n+ raise_exception(env, EXCP_BKPT, syndrome, debug_el);\n+}\n+\n+void HELPER(exception_swstep)(CPUARMState *env, uint32_t syndrome)\n+{\n+ raise_exception_debug(env, EXCP_UDEF, syndrome);\n+}\n+\n+void hw_watchpoint_update(ARMCPU *cpu, int n)\n+{\n+ CPUARMState *env = &cpu->env;\n+ vaddr len = 0;\n+ vaddr wvr = env->cp15.dbgwvr[n];\n+ uint64_t wcr = env->cp15.dbgwcr[n];\n+ int mask;\n+ int flags = BP_CPU | BP_STOP_BEFORE_ACCESS;\n+\n+ if (env->cpu_watchpoint[n]) {\n+ cpu_watchpoint_remove_by_ref(CPU(cpu), env->cpu_watchpoint[n]);\n+ env->cpu_watchpoint[n] = NULL;\n+ }\n+\n+ if (!FIELD_EX64(wcr, DBGWCR, E)) {\n+ /* E bit clear : watchpoint disabled */\n+ return;\n+ }\n+\n+ switch (FIELD_EX64(wcr, DBGWCR, LSC)) {\n+ case 0:\n+ /* LSC 00 is reserved and must behave as if the wp is disabled */\n+ return;\n+ case 1:\n+ flags |= BP_MEM_READ;\n+ break;\n+ case 2:\n+ flags |= BP_MEM_WRITE;\n+ break;\n+ case 3:\n+ flags |= BP_MEM_ACCESS;\n+ break;\n+ }\n+\n+ /*\n+ * Attempts to use both MASK and BAS fields simultaneously are\n+ * CONSTRAINED UNPREDICTABLE; we opt to ignore BAS in this case,\n+ * thus generating a watchpoint for every byte in the masked region.\n+ */\n+ mask = FIELD_EX64(wcr, DBGWCR, MASK);\n+ if (mask == 1 || mask == 2) {\n+ /*\n+ * Reserved values of MASK; we must act as if the mask value was\n+ * some non-reserved value, or as if the watchpoint were disabled.\n+ * We choose the latter.\n+ */\n+ return;\n+ } else if (mask) {\n+ /* Watchpoint covers an aligned area up to 2GB in size */\n+ len = 1ULL << mask;\n+ /*\n+ * If masked bits in WVR are not zero it's CONSTRAINED UNPREDICTABLE\n+ * whether the watchpoint fires when the unmasked bits match; we opt\n+ * to generate the exceptions.\n+ 
*/\n+ wvr &= ~(len - 1);\n+ } else {\n+ /* Watchpoint covers bytes defined by the byte address select bits */\n+ int bas = FIELD_EX64(wcr, DBGWCR, BAS);\n+ int basstart;\n+\n+ if (extract64(wvr, 2, 1)) {\n+ /*\n+ * Deprecated case of an only 4-aligned address. BAS[7:4] are\n+ * ignored, and BAS[3:0] define which bytes to watch.\n+ */\n+ bas &= 0xf;\n+ }\n+\n+ if (bas == 0) {\n+ /* This must act as if the watchpoint is disabled */\n+ return;\n+ }\n+\n+ /*\n+ * The BAS bits are supposed to be programmed to indicate a contiguous\n+ * range of bytes. Otherwise it is CONSTRAINED UNPREDICTABLE whether\n+ * we fire for each byte in the word/doubleword addressed by the WVR.\n+ * We choose to ignore any non-zero bits after the first range of 1s.\n+ */\n+ basstart = ctz32(bas);\n+ len = cto32(bas >> basstart);\n+ wvr += basstart;\n+ }\n+\n+ cpu_watchpoint_insert(CPU(cpu), wvr, len, flags,\n+ &env->cpu_watchpoint[n]);\n+}\n+\n+void hw_watchpoint_update_all(ARMCPU *cpu)\n+{\n+ int i;\n+ CPUARMState *env = &cpu->env;\n+\n+ /*\n+ * Completely clear out existing QEMU watchpoints and our array, to\n+ * avoid possible stale entries following migration load.\n+ */\n+ cpu_watchpoint_remove_all(CPU(cpu), BP_CPU);\n+ memset(env->cpu_watchpoint, 0, sizeof(env->cpu_watchpoint));\n+\n+ for (i = 0; i < ARRAY_SIZE(cpu->env.cpu_watchpoint); i++) {\n+ hw_watchpoint_update(cpu, i);\n+ }\n+}\n+\n+void hw_breakpoint_update(ARMCPU *cpu, int n)\n+{\n+ CPUARMState *env = &cpu->env;\n+ uint64_t bvr = env->cp15.dbgbvr[n];\n+ uint64_t bcr = env->cp15.dbgbcr[n];\n+ vaddr addr;\n+ int bt;\n+ int flags = BP_CPU;\n+\n+ if (env->cpu_breakpoint[n]) {\n+ cpu_breakpoint_remove_by_ref(CPU(cpu), env->cpu_breakpoint[n]);\n+ env->cpu_breakpoint[n] = NULL;\n+ }\n+\n+ if (!extract64(bcr, 0, 1)) {\n+ /* E bit clear : breakpoint disabled */\n+ return;\n+ }\n+\n+ bt = extract64(bcr, 20, 4);\n+\n+ switch (bt) {\n+ case 4: /* unlinked address mismatch (reserved if AArch64) */\n+ case 5: /* linked address mismatch 
(reserved if AArch64) */\n+ qemu_log_mask(LOG_UNIMP,\n+ \"arm: address mismatch breakpoint types not implemented\\n\");\n+ return;\n+ case 0: /* unlinked address match */\n+ case 1: /* linked address match */\n+ {\n+ /*\n+ * Bits [1:0] are RES0.\n+ *\n+ * It is IMPLEMENTATION DEFINED whether bits [63:49]\n+ * ([63:53] for FEAT_LVA) are hardwired to a copy of the sign bit\n+ * of the VA field ([48] or [52] for FEAT_LVA), or whether the\n+ * value is read as written. It is CONSTRAINED UNPREDICTABLE\n+ * whether the RESS bits are ignored when comparing an address.\n+ * Therefore we are allowed to compare the entire register, which\n+ * lets us avoid considering whether FEAT_LVA is actually enabled.\n+ *\n+ * The BAS field is used to allow setting breakpoints on 16-bit\n+ * wide instructions; it is CONSTRAINED UNPREDICTABLE whether\n+ * a bp will fire if the addresses covered by the bp and the addresses\n+ * covered by the insn overlap but the insn doesn't start at the\n+ * start of the bp address range. We choose to require the insn and\n+ * the bp to have the same address. 
The constraints on writing to\n+ * BAS enforced in dbgbcr_write mean we have only four cases:\n+ * 0b0000 => no breakpoint\n+ * 0b0011 => breakpoint on addr\n+ * 0b1100 => breakpoint on addr + 2\n+ * 0b1111 => breakpoint on addr\n+ * See also figure D2-3 in the v8 ARM ARM (DDI0487A.c).\n+ */\n+ int bas = extract64(bcr, 5, 4);\n+ addr = bvr & ~3ULL;\n+ if (bas == 0) {\n+ return;\n+ }\n+ if (bas == 0xc) {\n+ addr += 2;\n+ }\n+ break;\n+ }\n+ case 2: /* unlinked context ID match */\n+ case 8: /* unlinked VMID match (reserved if no EL2) */\n+ case 10: /* unlinked context ID and VMID match (reserved if no EL2) */\n+ qemu_log_mask(LOG_UNIMP,\n+ \"arm: unlinked context breakpoint types not implemented\\n\");\n+ return;\n+ case 9: /* linked VMID match (reserved if no EL2) */\n+ case 11: /* linked context ID and VMID match (reserved if no EL2) */\n+ case 3: /* linked context ID match */\n+ default:\n+ /*\n+ * We must generate no events for Linked context matches (unless\n+ * they are linked to by some other bp/wp, which is handled in\n+ * updates for the linking bp/wp). 
We choose to also generate no events\n+ * for reserved values.\n+ */\n+ return;\n+ }\n+\n+ cpu_breakpoint_insert(CPU(cpu), addr, flags, &env->cpu_breakpoint[n]);\n+}\n+\n+void hw_breakpoint_update_all(ARMCPU *cpu)\n+{\n+ int i;\n+ CPUARMState *env = &cpu->env;\n+\n+ /*\n+ * Completely clear out existing QEMU breakpoints and our array, to\n+ * avoid possible stale entries following migration load.\n+ */\n+ cpu_breakpoint_remove_all(CPU(cpu), BP_CPU);\n+ memset(env->cpu_breakpoint, 0, sizeof(env->cpu_breakpoint));\n+\n+ for (i = 0; i < ARRAY_SIZE(cpu->env.cpu_breakpoint); i++) {\n+ hw_breakpoint_update(cpu, i);\n+ }\n+}\n+\n+#if !defined(CONFIG_USER_ONLY)\n+\n+vaddr arm_adjust_watchpoint_address(CPUState *cs, vaddr addr, int len)\n+{\n+ ARMCPU *cpu = ARM_CPU(cs);\n+ CPUARMState *env = &cpu->env;\n+\n+ /*\n+ * In BE32 system mode, target memory is stored byteswapped (on a\n+ * little-endian host system), and by the time we reach here (via an\n+ * opcode helper) the addresses of subword accesses have been adjusted\n+ * to account for that, which means that watchpoints will not match.\n+ * Undo the adjustment here.\n+ */\n+ if (arm_sctlr_b(env)) {\n+ if (len == 1) {\n+ addr ^= 3;\n+ } else if (len == 2) {\n+ addr ^= 2;\n+ }\n+ }\n+\n+ return addr;\n+}\n+\n+#endif /* !CONFIG_USER_ONLY */\ndiff --git a/target/arm/tcg/meson.build b/target/arm/tcg/meson.build\nindex 1b115656c46..6e9aed3e5de 100644\n--- a/target/arm/tcg/meson.build\n+++ b/target/arm/tcg/meson.build\n@@ -65,6 +65,7 @@ arm_common_ss.add(files(\n \n arm_common_system_ss.add(files(\n 'cpregs-at.c',\n+ 'debug.c',\n 'hflags.c',\n 'neon_helper.c',\n 'tlb_helper.c',\n@@ -72,6 +73,7 @@ arm_common_system_ss.add(files(\n 'vfp_helper.c',\n ))\n arm_user_ss.add(files(\n+ 'debug.c',\n 'hflags.c',\n 'neon_helper.c',\n 'tlb_helper.c',\n", "prefixes": [ "v4", "01/14" ] }