
[ovs-dev,2/2] ovsdb raft: Precheck prereq before proposing commit.

Message ID 1551466597-87991-2-git-send-email-hzhou8@ebay.com
State Accepted
Commit 2cd62f75c1b4af1fc0786afbae6b9fa910f37f0c
Series [ovs-dev,1/2] ovsdb: Move trigger_run after storage_run and read_db.

Commit Message

Han Zhou March 1, 2019, 6:56 p.m. UTC
From: Han Zhou <hzhou8@ebay.com>

In the current OVSDB Raft design, when multiple transactions are
pending, whether from the same server node or from different nodes in
the cluster, only the first one can succeed at a time; the following
ones fail the prerequisite check on the leader node, because the first
one updates the expected prerequisite eid on the leader, while the
prerequisite used for proposing a commit has to be a committed eid. So
it is not possible for a node to use the latest prerequisite expected
by the leader to propose a commit until the latest transaction is
committed by the leader and the committed_index is updated on the
node.
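
To make the failure mode concrete, here is a minimal sketch of the
leader-side check described above (hypothetical, simplified names;
not the actual raft.c code):

    #include <stdbool.h>
    #include <string.h>

    /* Simplified stand-in for OVSDB's struct uuid, for illustration only. */
    struct uuid { unsigned int parts[4]; };

    static bool
    uuid_equals(const struct uuid *a, const struct uuid *b)
    {
        return !memcmp(a->parts, b->parts, sizeof a->parts);
    }

    /* The leader accepts a proposal only if its prereq names the eid at
     * the tail of the leader's log.  Accepting it appends a new entry
     * and moves the tail, so a second proposal built on the same
     * committed eid is rejected until the proposer catches up. */
    static bool
    leader_check_and_append(struct uuid *log_tail_eid,
                            const struct uuid *proposal_prereq,
                            const struct uuid *new_eid)
    {
        if (!uuid_equals(proposal_prereq, log_tail_eid)) {
            return false;           /* Prereq check failure: retry later. */
        }
        *log_tail_eid = *new_eid;   /* Tail moves; the same prereq now fails. */
        return true;
    }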

The current implementation proposes the commit as soon as the
transaction is requested by the client, which results in continuous
retries that cause high CPU load and waste.

In particular, even if all clients use leader_only to connect only to
the leader, prereq check failures still happen frequently when a batch
of transactions is pending on the leader node: the leader proposes a
batch of commits using the same committed eid as the prerequisite and
updates the expected prereq as soon as the first one is in progress,
but it takes time to append to followers and wait for a majority of
replies before updating the committed_index, which results in
continuous useless retries of the following transactions proposed by
the leader itself.

This patch doesn't change the design but simply pre-checks whether the
current eid is the same as the prereq before proposing the commit, to
avoid wasting CPU cycles on both the leader and the followers. When
clients use leader_only mode, this patch completely eliminates the
prereq check failures.
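
In code terms, the new check boils down to the following (condensed
from the diff below; the real function is ovsdb_txn_precheck_prereq()
in ovsdb/transaction.c, called from ovsdb_trigger_try() before
proposing):

    /* Condensed from the diff below.  ovsdb_storage_peek_last_eid()
     * returns NULL for standalone (non-raft) storage, in which case
     * there is no prerequisite to satisfy and it is always safe to
     * propose. */
    bool
    ovsdb_txn_precheck_prereq(const struct ovsdb *db)
    {
        const struct uuid *eid = ovsdb_storage_peek_last_eid(db->storage);
        if (!eid) {
            return true;
        }
        return uuid_equals(&db->prereq, eid);
    }

When the check fails, ovsdb_trigger_try() simply returns false, leaving
the trigger to be retried on a later main-loop iteration instead of
proposing a commit that is guaranteed to fail.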

In a scale test of OVN with 1k HVs, creating and binding 10k lports,
the patch resulted in a 90% CPU cost reduction on the leader and a >80%
CPU cost reduction on followers. (The test was run with the leader
election base time set to 10000 ms, because otherwise it could not
complete due to frequent leader re-elections.)

This is just one of the performance problems related to the prereq
checking mechanism, discussed at:

https://mail.openvswitch.org/pipermail/ovs-discuss/2019-February/048243.html

Signed-off-by: Han Zhou <hzhou8@ebay.com>
---
 ovsdb/TODO.rst      |  3 ---
 ovsdb/raft.c        |  2 +-
 ovsdb/raft.h        |  1 +
 ovsdb/storage.c     |  9 +++++++++
 ovsdb/storage.h     |  2 ++
 ovsdb/transaction.c | 10 ++++++++++
 ovsdb/transaction.h |  1 +
 ovsdb/trigger.c     |  4 ++++
 8 files changed, 28 insertions(+), 4 deletions(-)

Comments

Ben Pfaff March 4, 2019, 9:31 p.m. UTC | #1
On Fri, Mar 01, 2019 at 10:56:37AM -0800, Han Zhou wrote:
> [...]

I *think* that this patch is going to be unreliable.  It appears to me
that what it does is wait until the current eid presented by the raft
storage is the one that we want.  But I don't think it's guaranteed that
that will ever happen.  What if we lose the raft connection, reconnect,
and skip past that particular eid?  I think in that kind of a case we'd
keep the trigger around forever and never discard it.
Han Zhou March 4, 2019, 9:53 p.m. UTC | #2
On Mon, Mar 4, 2019 at 1:31 PM Ben Pfaff <blp@ovn.org> wrote:
>
> On Fri, Mar 01, 2019 at 10:56:37AM -0800, Han Zhou wrote:
> > [...]
>
> I *think* that this patch is going to be unreliable.  It appears to me
> that what it does is wait until the current eid presented by the raft
> storage is the one that we want.  But I don't think it's guaranteed that
> that will ever happen.  What if we lose the raft connection, reconnect,
> and skip past that particular eid?  I think in that kind of a case we'd
> keep the trigger around forever and never discard it.

The function ovsdb_txn_precheck_prereq() compares the db->prereq with
the current eid from raft storage. Both values can change from
iteration to iteration. If raft reconnects and skips past the previous
eid, it shouldn't matter, because the function checks the new prereq
against the new *current* eid.

In fact, the prereq is the last applied entry, so it should eventually
catch up with the current eid, unless new changes are always appended
to the log before the current node catches up. In that case, even
without this change, the current node cannot propose any commit
successfully, because it will encounter a prereq check failure. This
commit just avoids the waste of CPU and bandwidth in that same
situation - when the current node cannot catch up with the latest
appended entry.
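
(As a sketch of the retry behavior described above, with hypothetical
helper names standing in for the real accessors: both values are
re-read on every main-loop iteration, so no particular eid is ever
waited on.)

    #include <stdbool.h>

    struct ovsdb;
    struct uuid;

    /* Hypothetical helpers, invented for this sketch only. */
    bool uuid_equals(const struct uuid *, const struct uuid *);
    const struct uuid *last_applied_eid(const struct ovsdb *);  /* prereq */
    const struct uuid *current_log_eid(const struct ovsdb *);   /* current eid */
    void propose_commit(struct ovsdb *);
    void wait_for_next_iteration(void);

    void
    retry_until_caught_up(struct ovsdb *db)
    {
        for (;;) {
            /* The prereq advances as entries are applied and the current
             * eid advances as entries are appended, so a skipped eid
             * cannot wedge this comparison. */
            if (uuid_equals(last_applied_eid(db), current_log_eid(db))) {
                propose_commit(db); /* Can now pass the real prereq check. */
                return;
            }
            wait_for_next_iteration(); /* Skip a proposal doomed to fail. */
        }
    }
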
Ben Pfaff March 7, 2019, 10:40 p.m. UTC | #3
On Mon, Mar 04, 2019 at 01:53:38PM -0800, Han Zhou wrote:
> On Mon, Mar 4, 2019 at 1:31 PM Ben Pfaff <blp@ovn.org> wrote:
> >
> > On Fri, Mar 01, 2019 at 10:56:37AM -0800, Han Zhou wrote:
> > > [...]
> >
> > I *think* that this patch is going to be unreliable.  It appears to me
> > that what it does is wait until the current eid presented by the raft
> > storage is the one that we want.  But I don't think it's guaranteed that
> > that will ever happen.  What if we lose the raft connection, reconnect,
> > and skip past that particular eid?  I think in that kind of a case we'd
> > keep the trigger around forever and never discard it.
> 
> The function ovsdb_txn_precheck_prereq() compares the db->prereq with
> the current eid from raft storage. Both values can change from
> iteration to iteration. If raft reconnects and skips past the previous
> eid, it shouldn't matter, because the function checks the new prereq
> against the new *current* eid.
> 
> In fact, the prereq is the last applied entry, so it should eventually
> catch up with the current eid, unless new changes are always appended
> to the log before the current node catches up. In that case, even
> without this change, the current node cannot propose any commit
> successfully, because it will encounter a prereq check failure. This
> commit just avoids the waste of CPU and bandwidth in that same
> situation - when the current node cannot catch up with the latest
> appended entry.

After reading more code, and your explanation, I now understand.

I applied this to master.

Patch

diff --git a/ovsdb/TODO.rst b/ovsdb/TODO.rst
index 3bd4e76..fb4a50f 100644
--- a/ovsdb/TODO.rst
+++ b/ovsdb/TODO.rst
@@ -39,9 +39,6 @@  OVSDB Clustering To-do List
 
 * Include index with monitor update?
 
-* Back off when transaction fails to commit?  Definitely back off until
-  the eid changes for prereq failures
-
 * Testing with replication.
 
 * Handling bad transactions in read_db().  (Kill the database?)
diff --git a/ovsdb/raft.c b/ovsdb/raft.c
index 68b527c..eee4f33 100644
--- a/ovsdb/raft.c
+++ b/ovsdb/raft.c
@@ -1906,7 +1906,7 @@  raft_get_eid(const struct raft *raft, uint64_t index)
     return &raft->snap.eid;
 }
 
-static const struct uuid *
+const struct uuid *
 raft_current_eid(const struct raft *raft)
 {
     return raft_get_eid(raft, raft->log_end - 1);
diff --git a/ovsdb/raft.h b/ovsdb/raft.h
index cd16782..3d44899 100644
--- a/ovsdb/raft.h
+++ b/ovsdb/raft.h
@@ -180,4 +180,5 @@  struct ovsdb_error *raft_store_snapshot(struct raft *,
 void raft_take_leadership(struct raft *);
 void raft_transfer_leadership(struct raft *, const char *reason);
 
+const struct uuid *raft_current_eid(const struct raft *);
 #endif /* lib/raft.h */
diff --git a/ovsdb/storage.c b/ovsdb/storage.c
index b810bff..e26252b 100644
--- a/ovsdb/storage.c
+++ b/ovsdb/storage.c
@@ -601,3 +601,12 @@  ovsdb_storage_write_schema_change(struct ovsdb_storage *storage,
     }
     return w;
 }
+
+const struct uuid *
+ovsdb_storage_peek_last_eid(struct ovsdb_storage *storage)
+{
+    if (!storage->raft) {
+        return NULL;
+    }
+    return raft_current_eid(storage->raft);
+}
diff --git a/ovsdb/storage.h b/ovsdb/storage.h
index 4a01fde..8a9bbab 100644
--- a/ovsdb/storage.h
+++ b/ovsdb/storage.h
@@ -91,4 +91,6 @@  struct ovsdb_storage *ovsdb_storage_open_standalone(const char *filename,
                                                     bool rw);
 struct ovsdb_schema *ovsdb_storage_read_schema(struct ovsdb_storage *);
 
+const struct uuid *ovsdb_storage_peek_last_eid(struct ovsdb_storage *);
+
 #endif /* ovsdb/storage.h */
diff --git a/ovsdb/transaction.c b/ovsdb/transaction.c
index 9fc1fd7..67ea771 100644
--- a/ovsdb/transaction.c
+++ b/ovsdb/transaction.c
@@ -1011,6 +1011,16 @@  struct ovsdb_txn_progress {
     struct ovsdb_storage *storage;
 };
 
+bool
+ovsdb_txn_precheck_prereq(const struct ovsdb *db)
+{
+    const struct uuid *eid = ovsdb_storage_peek_last_eid(db->storage);
+    if (!eid) {
+        return true;
+    }
+    return uuid_equals(&db->prereq, eid);
+}
+
 struct ovsdb_txn_progress *
 ovsdb_txn_propose_schema_change(struct ovsdb *db,
                                 const struct json *schema,
diff --git a/ovsdb/transaction.h b/ovsdb/transaction.h
index c819373..c21871a 100644
--- a/ovsdb/transaction.h
+++ b/ovsdb/transaction.h
@@ -29,6 +29,7 @@  void ovsdb_txn_set_txnid(const struct uuid *, struct ovsdb_txn *);
 const struct uuid *ovsdb_txn_get_txnid(const struct ovsdb_txn *);
 void ovsdb_txn_abort(struct ovsdb_txn *);
 
+bool ovsdb_txn_precheck_prereq(const struct ovsdb *db);
 struct ovsdb_error *ovsdb_txn_replay_commit(struct ovsdb_txn *)
     OVS_WARN_UNUSED_RESULT;
 struct ovsdb_txn_progress *ovsdb_txn_propose_commit(struct ovsdb_txn *,
diff --git a/ovsdb/trigger.c b/ovsdb/trigger.c
index 3f62dc9..6f4ed96 100644
--- a/ovsdb/trigger.c
+++ b/ovsdb/trigger.c
@@ -194,6 +194,10 @@  ovsdb_trigger_try(struct ovsdb_trigger *t, long long int now)
         struct ovsdb_txn *txn = NULL;
         struct ovsdb *newdb = NULL;
         if (!strcmp(t->request->method, "transact")) {
+            if (!ovsdb_txn_precheck_prereq(t->db)) {
+                return false;
+            }
+
             bool durable;
 
             struct json *result;