From patchwork Wed May 22 17:35:11 2013
X-Patchwork-Submitter: Phil Oester
X-Patchwork-Id: 245688
Date: Wed, 22 May 2013 13:35:11 -0400
From: Phil Oester
To: netfilter-devel@vger.kernel.org
Cc: pablo@netfilter.org, kaber@trash.net
Subject: [PATCH] xtables: Add locking to prevent concurrent instances
Message-ID: <20130522173511.GB860@gmail.com>

There have been numerous complaints and bug reports over the years when admins
attempt to run more than one instance of iptables simultaneously.  Currently
open bug reports which are related:

  325: Parallel execution of the iptables is impossible
  758: Retry iptables command on transient failure
  764: Doing -Z twice in parallel breaks counters
  822: iptables shows negative or other bad packet/byte counts

As Patrick notes in 325: "Since this has been a problem people keep
running into, I'd suggest to simply add some locking to iptables to catch
the most common case."

I started looking into alternatives for adding locking, and of course the
most common/obvious solution is a pidfile.  But pidfiles have various
downsides: if the application terminates abnormally, the pidfile is not
cleaned up, and a writable filesystem is required.  Using a UNIX domain
socket file (e.g. in /var/run) has similar issues.  Starting in kernel 2.2,
Linux supports abstract sockets.  These sockets require no filesystem, and
automatically disappear once the application terminates.  This is the
locking solution I chose to implement in xtables-multi.
As an added bonus, since each network namespace has its own socket pool, an
iptables instance running in one namespace will not lock out an iptables
instance running in another namespace.  A filesystem-based approach would
have to recognize and handle multiple network namespaces itself.

While I was adding locking, I also added a retry loop: three attempts are
made to grab the lock, one second apart, before giving up.

Phil

Signed-off-by: Phil Oester

diff --git a/iptables/xtables-multi.c b/iptables/xtables-multi.c
index 8014d5f..9c22a82 100644
--- a/iptables/xtables-multi.c
+++ b/iptables/xtables-multi.c
@@ -1,8 +1,12 @@
 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
+#include <unistd.h>
+#include <sys/socket.h>
+#include <sys/un.h>
 
 #include "xshared.h"
+#include "xtables.h"
 #include "xtables-multi.h"
 
 #ifdef ENABLE_IPV4
@@ -35,7 +39,31 @@ static const struct subcommand multi_subcommands[] = {
 	{NULL},
 };
 
+#define XTMSOCKET_NAME "xtables_multi"
+#define XTMSOCKET_LEN 14
+
 int main(int argc, char **argv)
 {
-	return subcmd_main(argc, argv, multi_subcommands);
+	int i = 0, ret, xtm_socket;
+	struct sockaddr_un xtm_addr;
+
+	memset(&xtm_addr, 0, sizeof(xtm_addr));
+	xtm_addr.sun_family = AF_UNIX;
+	strcpy(xtm_addr.sun_path+1, XTMSOCKET_NAME);
+	xtm_socket = socket(AF_UNIX, SOCK_STREAM, 0);
+	/* If we can't even create a socket, just revert to prior (lockless) behavior */
+	if (xtm_socket < 0)
+		return subcmd_main(argc, argv, multi_subcommands);
+
+	do {
+		ret = bind(xtm_socket, (struct sockaddr*)&xtm_addr,
+			   offsetof(struct sockaddr_un, sun_path)+XTMSOCKET_LEN);
+		if (ret == 0)
+			return subcmd_main(argc, argv, multi_subcommands);
+		i++;
+		sleep(1);
+	} while (i <= 2);
+
+	fprintf(stderr, "ERROR: unable to obtain lock - another instance is already running\n");
+	exit(RESOURCE_PROBLEM);
 }